The basic paradigms haven't changed since computers were first built.
We still use dual-state (binary) machines, with the same basic architecture that von Neumann described long ago. Admittedly, there have been some efforts in other directions to build machines based on multiple-state components, but as far as I know that is still experimental stuff and far from reaching the general public.
At the hardware level, all the improvements of the last decades have been about integration scale, clock speed and logic circuitry: better branch prediction, and specific support for complex operations via extended instruction sets (MMX, the SSE family and the like). But nothing truly revolutionary has become mainstream.
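To make the instruction-set point concrete, here is a minimal sketch of what something like SSE buys you: adding two float arrays four elements per instruction instead of one. This is just illustrative C using the standard Intel intrinsics; the function and array names are made up for the example, and it assumes an SSE-capable x86 compiler (e.g. gcc -msse, which is the default on x86-64).

Code:
#include <stdio.h>
#include <xmmintrin.h>   /* SSE intrinsics */

/* Add two float arrays, four elements at a time, using 128-bit SSE registers. */
void add_arrays_sse(const float *a, const float *b, float *out, int n)
{
    int i;
    for (i = 0; i + 4 <= n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);            /* load 4 floats at once */
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(out + i, _mm_add_ps(va, vb)); /* 4 additions in one instruction */
    }
    for (; i < n; i++)                              /* leftover elements, one by one */
        out[i] = a[i] + b[i];
}

int main(void)
{
    float a[6] = {1, 2, 3, 4, 5, 6};
    float b[6] = {6, 5, 4, 3, 2, 1};
    float c[6];
    add_arrays_sse(a, b, c, 6);
    for (int i = 0; i < 6; i++)
        printf("%.0f ", c[i]);                      /* prints: 7 7 7 7 7 7 */
    printf("\n");
    return 0;
}

Note that this is exactly the kind of incremental improvement I mean: the same work gets done faster per clock, but the underlying binary, von Neumann model is untouched.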
At the software level, however, algorithms get more complex and more capable as hardware power grows. There has been some improvement in AI, but the same inherent problems remain. The biggest problem of AI is that it tries to emulate true knowledge on something that isn't intelligent (the computer). But in the first place we don't even know exactly what knowledge is, and we don't know how the human brain truly works either; we only understand it very superficially. So we are trying to replicate something we can't even define, and we can only make assumptions about a lot of things.
That's what makes AI really complex: its multidisciplinary nature. In some areas of AI we have made good progress over the last years (for example, in the pattern recognition field: speech recognition, shape recognition in images, 3D/depth awareness, OCR...). A toy example of what "pattern recognition" means in practice follows below.
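Since "pattern recognition" can sound abstract, here is a sketch of one of its simplest techniques, a nearest-neighbour classifier: an unknown sample gets the label of the closest known example. The 2-D feature values and labels below are invented purely for illustration; real systems (OCR, speech) just use far bigger feature vectors and training sets.

Code:
#include <stdio.h>
#include <float.h>

#define DIM 2   /* number of features per sample */

/* Squared Euclidean distance between two feature vectors. */
static double dist2(const double *a, const double *b)
{
    double d = 0.0;
    for (int i = 0; i < DIM; i++) {
        double t = a[i] - b[i];
        d += t * t;
    }
    return d;
}

/* 1-nearest-neighbour: return the label of the closest training sample. */
int classify(const double train[][DIM], const int *labels, int n,
             const double *query)
{
    int best = 0;
    double best_d = DBL_MAX;
    for (int i = 0; i < n; i++) {
        double d = dist2(train[i], query);
        if (d < best_d) { best_d = d; best = i; }
    }
    return labels[best];
}

int main(void)
{
    /* made-up 2-D features, e.g. (width, height) of a shape */
    double train[4][DIM] = {{1, 1}, {1, 2}, {8, 8}, {9, 7}};
    int labels[4] = {0, 0, 1, 1};
    double query[DIM] = {8.5, 7.5};
    printf("predicted class: %d\n", classify(train, labels, 4, query));
    return 0;
}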