Thinking about perceiving (cognitive) machines. It seems that a hybrid model should be applicable to most prosaic tasks. The key benefit of an AI would be a compact, effective structure for learning and abstraction (modelling); it would not necessarily be a good way to code low-level real-world operations. It might have advantages there, but the processing cost would often dwarf the benefits on today's hardware.
For instance, imagine we want to create a machine that can take a video feed and from it construct an accurate 3D model of its environment. We can imagine a machine that does this rather mechanically, that is, by constructing layers of low-level operations: edge detection, surface detection, shadow and lighting calculations, and the like.
With clever enough algorithms we could create a machine that constructs the surface and ignores the "noise" without any AI approaches. Microsoft has, I believe, even produced something like this with their 3D-models-from-Flickr project. Thing is, much of the low-level work can be handled with highly optimized code, maybe even simple GPGPU routines that don't even tie up the main processor of a desktop.
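To make the "low-level operation" idea concrete, here is a minimal sketch of the kind of mechanical building block meant above: a naive Sobel edge detector in plain NumPy. The function name and structure are my own illustration, not from any particular system; a real pipeline would vectorize this or push it to a GPU, which is exactly the "highly optimized code" point.

```python
import numpy as np

def sobel_edges(image: np.ndarray) -> np.ndarray:
    """Gradient-magnitude edge map via 3x3 Sobel kernels (illustrative, unoptimized)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = image.shape
    out = np.zeros((h, w))
    # Naive convolution over the interior; production code would vectorize or use a GPU.
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = image[y - 1:y + 2, x - 1:x + 2]
            gx = np.sum(kx * patch)  # horizontal gradient
            gy = np.sum(ky * patch)  # vertical gradient
            out[y, x] = np.hypot(gx, gy)
    return out
```

Stacking a few such operations (edges, then surfaces, then lighting) is the "mechanical" route: each layer is dumb, fast, and hand-tuned, with no learning anywhere.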
What role would a "hard AI" (e.g. a synthetic neurological system) play in such a system? Does it sit on top of the Direct3D model? Does it translate what it "feels" into the language of the computer, or do you make the model within the "brain" your final arbiter and try to teach it to "render" the model? Or does it merely act as a director controlling the process, weeding out extraneous information more easily and learning which solutions better approximate initial conditions so as to avoid expensive branches in the code, an optimizer sitting at the top of the stack? How many millions of neurons would you need to make that useful?
Consider the question of speech recognition. Here again the underlying ideas have received enormous amounts of investigation, so it would almost always seem easier to start with a machine that knows how to pick out phonemes directly, or at least knows about them, than to ask an "untrained mind" to figure it all out. Yet we know that with a few years of training a human mind can learn language, and far better than any human-coded system has yet managed. That is, the completely "interference-free" AI approach should, in theory, produce a far more adaptable and robust solution, but it will likely use processing and storage far in excess of a targeted system where a human has produced highly optimized algorithms for the particular task.
It would seem that a fully functional neurological system, a system which is inherently adaptable, should be something that can be hooked up to any collection of low-level operations we can imagine. We could imagine a modelling machine whose input is not just a video image but a series of inputs from our hard-coded algorithms as well. We could imagine that same mind outputting to a model which can be rendered, using OpenGL or the like, to an image, and then using low-level comparisons with that image to optimize (direct) the higher-level algorithms.
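The render-and-compare loop just described can be sketched in miniature. This is a toy analysis-by-synthesis example of my own devising, not any real system: `render` stands in for an OpenGL pass (here it just draws a 1D Gaussian bump at a model parameter), and the "mind" adjusts its model until the rendering matches the observation via a low-level pixel comparison.

```python
import numpy as np

def render(params, width=32):
    """Hypothetical stand-in for a renderer: a Gaussian bump centered at params['x']."""
    xs = np.arange(width)
    return np.exp(-0.5 * ((xs - params["x"]) / 3.0) ** 2)

def pixel_error(a, b):
    """Low-level comparison between rendered output and observed image."""
    return float(np.sum((a - b) ** 2))

def fit_model(observed, width=32, steps=100):
    """Analysis-by-synthesis: nudge the model until its rendering matches the observation."""
    params = {"x": width // 2}
    err = pixel_error(render(params, width), observed)
    for _ in range(steps):
        for dx in (-1, 1):  # the "director" proposes small edits to the model
            trial = {"x": params["x"] + dx}
            trial_err = pixel_error(render(trial, width), observed)
            if trial_err < err:
                params, err = trial, trial_err
    return params

observed = render({"x": 10})   # pretend this came from the video feed
fitted = fit_model(observed)   # recovers the model that explains the observation
```

The point of the sketch is the division of labor: rendering and pixel comparison are cheap hard-coded operations, while the adaptive part only proposes and evaluates model edits.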
So, what would you want to handle with an AI? Hypothesis (I'm not wedded to this, just playing with the idea): for any problem we have already "solved" with simple low-level code, we likely should not hand the job to an AI. The AI should be used solely for those problems for which we have no readily coded solution. That is, while a (neurological) AI might be able to model the entirety of the human neocortex, we probably don't want to use it for that (for efficiency reasons). We would want to use the AI solely for the very small subset of problems where we have not yet come up with solutions in non-AI code.
That is, anything you know how to code today should probably not be handled by the AI. Ideally, however, all processes would be under the control of the AI, so that it can decide whether it needs a given processing operation in order to make a decision. Instead of always running, for example, edge detection, the AI could restrict the edge-detection algorithm to a small subset of the visual field to deal with an unexpected (unpredicted) situation or fragment of the perceptual field, rather than spending the processing cycles on the full operation every perceptual cycle.
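That attention-directing idea can be sketched too. Below is a hypothetical illustration (all names are mine): a cheap prediction-error pass scores tiles of the frame, and the expensive operation runs only on the few tiles where the prediction failed most, instead of over the whole field every cycle.

```python
import numpy as np

def expensive_op(patch):
    """Stand-in for a costly operation such as full edge detection on a region."""
    return float(np.abs(np.diff(patch, axis=1)).sum())

def attend_and_process(frame, predicted, tile=8, top_k=2):
    """Run the expensive op only on the tiles where prediction error is largest."""
    h, w = frame.shape
    scores = []
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            # Cheap pass: how badly did our internal model predict this tile?
            err = float(np.abs(frame[y:y+tile, x:x+tile]
                               - predicted[y:y+tile, x:x+tile]).sum())
            scores.append((err, y, x))
    scores.sort(reverse=True)  # most surprising tiles first
    results = {}
    for err, y, x in scores[:top_k]:
        results[(y, x)] = expensive_op(frame[y:y+tile, x:x+tile])
    return results
```

Under this scheme the AI pays the cheap prediction cost everywhere but the expensive processing cost only where something unexpected appeared, which is the trade the paragraph above is gesturing at.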
The alternate hypothesis, however, is that once we figure out how to build AIs efficiently, we will suddenly discover that they should be used everywhere. Certainly the robustness of learning and adaptation would be valuable in most tasks for which we'd want an AI, but to make it practical we would need to drive the implementation cost dramatically lower. Today we're using large Beowulf clusters to implement something that can do pretty basic character recognition on a 100px image.
Maybe as we scale the algorithm we'll discover that it's practical to implement a consciousness from the ground up, but I'm guessing that dedicated hardware/software algorithms will often be the best inputs/outputs for AIs that we build.