Artificial Intelligence :: Developing a Brain
Posted 10-01-2011 at 07:44 AM by DJ Shaji
I am studying Psychology these days (as opposed to being a student of psychology, having a degree and all, which I am not, and which I do not have, respectively), and I came upon the idea of Artificial Intelligence. So, here's the idea; get ready: how about building an intelligent system - an artificial consciousness, if you will. Now, this idea is by no means new; it has existed ever since the first computer was built. In fact, many projects already achieve this goal to some extent. For example, IBM's Deep Blue defeated Grandmaster Garry Kasparov, and IBM's Watson won Jeopardy. But are these really intelligent machines? I mean, consider this - Watson had access to terabytes of data, which it scanned to come up with answers (or rather questions) on Jeopardy. Now, is that intelligence? If I said "Hi Watson", would it say "Hi Shaji" back to me? What if I said "Kya haal hai" ("how you doin'") in Hindi - would it still be able to answer me? Would its current algorithms allow it to learn a new language? Write poetry? [I]Think[/I]?
My idea is this: design a system that emulates the human mind. So, I won't be teaching the system a language; I would be teaching the system language. The system would learn, and so it would be possible to make it acquire new knowledge and make itself useful in a variety of situations. The key point here is to make it self-aware. Make it learn things exactly the way people do. Make it think; it should be able to make judgments on its own, experience its interactions with its surroundings, and learn from its experience.
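Just to make that concrete, here is a toy sketch of the learn-from-experience loop I have in mind. Everything in it - the situation names, the actions, the rewards - is made up for illustration; a real brain emulator would be unimaginably more complex:
[CODE]
# Toy sketch of an agent that learns from feedback instead of
# hard-coded rules. All names here are invented for illustration.
import random

class Agent:
    def __init__(self):
        # association strengths between situations and actions,
        # built entirely from experience
        self.memory = {}

    def act(self, situation, actions):
        # prefer whatever worked best in this situation before...
        scores = self.memory.get(situation, {})
        if scores and random.random() > 0.1:
            return max(scores, key=scores.get)
        return random.choice(actions)  # ...but sometimes explore

    def learn(self, situation, action, reward):
        # strengthen or weaken the association based on the outcome
        scores = self.memory.setdefault(situation, {})
        scores[action] = scores.get(action, 0.0) + reward

agent = Agent()
for _ in range(100):
    choice = agent.act("greeting", ["wave", "speak", "ignore"])
    agent.learn("greeting", choice, 1.0 if choice == "speak" else -0.1)

print(agent.act("greeting", ["wave", "speak", "ignore"]))  # usually "speak"
[/CODE]
Nothing in there knows what a greeting is; the preference for speaking emerges purely from experience, which is the whole point.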
We're halfway there already. This program would be context independent. So, a webcam could become its eyes, a microphone could become its ears, and it ought to be able to respond to interactions through these and other means, instead of just the keyboard. If two such systems were put together, they ought to be able to talk to each other and learn from it.
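The context independence could be as simple as an abstraction layer over the senses - a rough sketch, with stand-in classes instead of real device drivers:
[CODE]
# Rough sketch of context-independent input: every sense is just a
# source of percepts, and the mind doesn't care which device they
# come from. The device classes are stand-ins, not real drivers.
from abc import ABC, abstractmethod

class Sense(ABC):
    @abstractmethod
    def perceive(self):
        """Return the next raw percept from this channel, or None."""

class Keyboard(Sense):
    def perceive(self):
        return input("> ")  # the text channel

class Webcam(Sense):
    def perceive(self):
        return None         # would return a video frame in real life

class Mind:
    def __init__(self, senses):
        self.senses = senses  # eyes, ears, whatever is plugged in

    def step(self):
        for sense in self.senses:
            percept = sense.perceive()
            if percept is not None:
                self.process(percept)

    def process(self, percept):
        print("perceived:", percept)

Mind([Keyboard(), Webcam()]).step()
[/CODE]
Swapping the keyboard for a microphone would not change the Mind class at all - that is what I mean by context independent.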
Now, this system is completely different from, say, the A.L.I.C.E. project, or ELIZA, or the doctor mode in Emacs. Instead of talking in a predetermined manner, the emphasis here is to enable the system to think, and so make it possible for it to learn any language, or indeed invent a language of its own.
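To see the difference, here is roughly how those older programs work - a handful of canned patterns and nothing else (a toy re-creation for illustration, not ELIZA's actual script):
[CODE]
# The ELIZA-style approach: pattern matching against a fixed script,
# with no understanding at all. This is exactly what the proposed
# system is NOT.
import re

SCRIPT = [
    (r"\bI am (.*)", "How long have you been {0}?"),
    (r"\bmother\b",  "Tell me more about your family."),
    (r".*",          "Please go on."),
]

def respond(utterance):
    for pattern, template in SCRIPT:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            return template.format(*match.groups())

print(respond("I am sad"))  # -> "How long have you been sad?"
[/CODE]
Add a thousand more patterns and it still cannot learn a single new word on its own.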
Comments
Think how dangerous it could be to have a computer think for itself. Even if you had a panel of people, you could not come to an agreement on all the checks and balances. Which individual personality would it have? It could really be a horror show if the (super)computer were networked.
Posted 10-01-2011 at 08:23 AM by Larry Webb
Well, certainly Hollywood seems to think so.
I'm talking about something like a brain emulator; it would think in a manner similar to the human thought process. I mean, yeah, we all do have certain bad dispositions, but we don't act on them, because we have a sense of morality and a self-developed code of ethical conduct. Surely a rational intelligence would take into consideration the consequences of its actions. We, for instance, don't hurt others because it is wrong. Any emulator of human thinking patterns would show the corresponding psycho-social conduct as well.
To be sure, we can have built-in safeguards at various levels - for example, a non-preemptible check and balance system, beginning with Isaac Asimov's laws of robotics, among other things. I really don't believe we have to worry about computers taking over the world. The system would have far more advantages in many applications of daily use.
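In code terms, that non-preemptible safeguard could be as simple as a filter that every planned action must pass before anything executes. The rules and the action flags below are invented just to illustrate the shape of it:
[CODE]
# Toy sketch of a non-preemptible safeguard layer: the thinking part
# proposes actions, but nothing runs unless every rule approves.
# The rules and the action flags are invented for illustration.

RULES = [
    lambda action: not action.get("harms_human", False),    # first law
    lambda action: not action.get("disobeys_order", False), # second law
    # ...further checks at various levels
]

def execute(action):
    print("executing:", action["name"])

def safeguarded_execute(action):
    if all(rule(action) for rule in RULES):
        execute(action)
    else:
        print("vetoed:", action["name"])  # the veto cannot be overridden

safeguarded_execute({"name": "fetch coffee"})
safeguarded_execute({"name": "shove human", "harms_human": True})
[/CODE]
The point is that the check sits below the intelligence, not inside it, so the system cannot think its way around it.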
Posted 10-01-2011 at 10:24 AM by DJ Shaji
A thread on a similar topic was posted in Linux - News some time ago. You may be interested.
…as for me, I think I'll just wait for Judgement Day…
Posted 10-01-2011 at 05:48 PM by MrCode
Updated 10-01-2011 at 05:51 PM by MrCode
The problem is being able to simulate a neural network of sufficient size to emulate a brain, in a form compact enough to move about. There is also the problem of understanding how to connect the inputs and outputs so that the different parts of the neural net work correctly.
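Even a toy network makes the scale problem obvious. Something like this sketch (sizes pulled out of thin air) already shows both halves of the problem - the raw arithmetic, and the wiring of inputs to outputs:
[CODE]
# Tiny feed-forward net, just to show why scale is the hard part:
# a brain-sized version of this loop would need tens of billions of
# neurons and trillions of connections. Sizes here are arbitrary.
import math
import random

def layer(n_in, n_out):
    # random weights standing in for learned connections
    return [[random.uniform(-1, 1) for _ in range(n_in)]
            for _ in range(n_out)]

def forward(weights, inputs):
    # each neuron sums its weighted inputs, then squashes the result
    return [math.tanh(sum(w * x for w, x in zip(neuron, inputs)))
            for neuron in weights]

eyes = [random.random() for _ in range(16)]  # a 16-pixel "webcam"
hidden = forward(layer(16, 8), eyes)         # wiring the input in
output = forward(layer(8, 2), hidden)        # wiring the output out
print(output)                                # e.g. [0.42, -0.17]
[/CODE]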
I stand by my belief that if you could create a machine that could think, learn, and "feel", then it would be artificial life with actual intelligence. There is nothing artificial about the intelligence if, from its own experiences, it can learn and adapt. But then the question is, why would a man-made, or machine-made, machine be any less alive than an organic machine?
Posted 10-02-2011 at 01:03 AM by lumak
Quote: There is nothing artificial about the intelligence if, from its own experiences, it can learn and adapt. But then the question is, why would a man-made, or machine-made, machine be any less alive than an organic machine?
This is what concerns me most about AI, personally. It's not so much the Hollywood "kill all humans" scenario, it's the "mechanophobia", if you will; the moral implications of essentially "creating life".
Would it be ethical to simply create new sentient beings, and under what conditions (like, on an assembly line, etc.; I know you would probably just come back with the whole "but we already do when we have children" thing)? How would we treat them? How would we treat ourselves?
See, I figure that since creating an AI of a sufficient level as to emulate a human being would require a more-or-less complete understanding of human psychology, we would have to be "reduced to mere mechanism" either long before or just after the "robot revolution".
Science? Dominated by the new mecha-race.
Art? Dominated by the new mecha-race. Deemed worthless/eradicated.
Government/politics? Dominated by the new mecha-race.
If there's no difference between a piece of art created by a human and one created by an AI, where is its worth (on either side)?
If a scientific discovery is no less useful because an AI made it, what have we (as humans) to be proud of?
In short, why should we consider ourselves "special" at all? Why do we delude ourselves into thinking we're somehow "worth something"? Why do we bother "expressing ourselves" if it's all worthless/meaningless in the end? We're all just matter; we're all just machines following a path every bit as predetermined as the rest of the universe, so why do we even give a fsck about anything anymore?
This, I think, is what scares the ever-loving fsck out of me about AI. We'll be replaced as a species. Not deliberately destroyed, mind you, but slowly and painfully degraded to being worthless.
I didn't want to have to expand on my "I'll just wait until Judgement Day" statement like this, but you've forced me to (I had no choice, it was determined since the beginning of the universe! Action/reaction! Chain of causation!).
Posted 10-02-2011 at 02:45 AM by MrCode
Updated 10-02-2011 at 03:13 AM by MrCode
Quote: I stand by my belief that if you could create a machine that could think, learn, and "feel", then it would be artificial life with actual intelligence. There is nothing artificial about the intelligence if, from its own experiences, it can learn and adapt. But then the question is, why would a man-made, or machine-made, machine be any less alive than an organic machine?
Posted 10-02-2011 at 12:54 PM by DJ Shaji
Quote: This is what concerns me most about AI, personally. It's not so much the Hollywood "kill all humans" scenario, it's the "mechanophobia", if you will; the moral implications of essentially "creating life".
Would it be ethical to simply create new sentient beings, and under what conditions (like, on an assembly line, etc.; I know you would probably just come back with the whole "but we already do when we have children" thing)? How would we treat them? How would we treat ourselves?
There are Earth-like planets (possibly habitable) far beyond our solar system, but simply too far away to explore; reaching them would take far longer than an astronaut's lifetime. These, and millions of other possibilities, are within our reach; imagine surgeons so accurate that they can perform life-saving operations that aren't even possible today.
Mechanical arms today do everything from building cars to making incisions on eyeballs. Scientists have created artificial skin-like materials that can sense pressure and temperature (and are so soft!). This is just like when man envisioned the airplane: we know it can be done - we just have to figure out a way to do it. And I reiterate - the benefits will far outweigh the risks. Technology has already invaded almost every aspect of our lives; just imagine how much easier our lives would be if that technology were to become smart.
Finally, we can, should, and will put fallback mechanisms in place to cater to emergency situations, should the need ever arise. Intelligence itself is nothing to fear. We run a far greater risk of being killed by a nuclear bomb or a mass epidemic than of being overrun by machines.
Posted 10-02-2011 at 01:15 PM by DJ Shaji