LinuxQuestions.org
Artificial Intelligence :: Developing a Brain

Posted 10-01-2011 at 07:44 AM by DJ Shaji

I am studying Psychology these days (as opposed to being a student of Psychology, with a degree and all, which I am not, and which I do not have, respectively), and I came upon the idea of Artificial Intelligence. So, here's the idea; get ready: how about building an intelligent system - an artificial consciousness, if you will. Now, this idea is by no means new; it has existed ever since the first computer was built. In fact, many projects already achieve this goal to some extent. For example, IBM's Deep Blue defeated Grandmaster Garry Kasparov, and IBM's Watson won Jeopardy. But are these really intelligent machines? I mean, consider this - Watson had access to terabytes of data, which it scanned to come up with answers (or rather questions) on Jeopardy. Now, is that intelligence? If I said "Hi Watson", would it say "Hi Shaji" back to me? If I said "Kya haal hai" ("how you doin'") in Hindi, would it still be able to answer me? Would its current algorithms allow it to learn a new language? Write poetry? [I]Think[/I]?

My idea is this: design a system that emulates the human mind. So, I wouldn't be teaching the system a language; I would be teaching the system language. The system would learn, so it would be possible to make it acquire new knowledge and make itself useful in a variety of situations. The key point here is to make it self-aware. Make it learn things exactly the way people do. Make it think; it should be able to make judgments on its own, experience its interactions with its surroundings, and learn from its experience.

We're halfway there already. This program would be context independent. So, a webcam could become its eyes, a microphone could become its ears, and it ought to be able to respond to interactions through these and other means, instead of just the keyboard. If two such systems were put together, they ought to be able to talk to each other and learn from it.

Now, this system is completely different from, say, the A.L.I.C.E. project, or ELIZA, or the doctor mode in Emacs. Instead of talking in a predetermined manner, the emphasis here is on enabling the system to think, and so make it possible for it to learn any language, or indeed invent a language of its own.
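To make the contrast concrete, here is roughly what "talking in a predetermined manner" means. A minimal ELIZA-style sketch (the rules below are my own hypothetical examples, not ELIZA's actual script): every reply is a canned template triggered by a hard-coded pattern, so the program can never say anything its author didn't anticipate, let alone learn a new language.

```python
import re

# A few hard-coded ELIZA-style rules (hypothetical examples).
# Each pattern maps to a reply template; \1 substitutes the captured text.
RULES = [
    (r"i am (.*)", r"Why do you say you are \1?"),
    (r"i feel (.*)", r"What makes you feel \1?"),
]

def respond(line):
    """Return a canned reply; the program never learns from its input."""
    text = line.lower().strip().rstrip(".!?")
    for pattern, template in RULES:
        m = re.fullmatch(pattern, text)
        if m:
            return m.expand(template)  # fill \1 with the captured words
    return "Please tell me more."      # catch-all fallback

print(respond("I am tired"))   # Why do you say you are tired?
print(respond("Hello there"))  # Please tell me more.
```

No matter how many rules you add, the program only ever shuffles the user's words into fixed templates - which is exactly what a system that actually learns would not be limited to.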
Views 18052 Comments 7

Comments

  1. Old Comment
    Think how dangerous it could be to have a computer think for itself. Even if you had a panel of people, you could not come to agreement on all the checks and balances. Which individual personality would it have? It could really be a horror show if the (super)computer were networked.
    Posted 10-01-2011 at 08:23 AM by Larry Webb
  2. Old Comment
    Well, certainly Hollywood seems to think so.

    I'm talking about something like a brain emulator; it would think in a manner similar to the human thought process. I mean, yeah, we all do have certain bad dispositions, but we don't act on them, because we have a sense of morality and a self-developed code of ethical conduct. Surely a rational intelligence would take the consequences of its actions into consideration. We, for instance, don't hurt others because it is wrong. Any emulator of human thinking patterns would show the corresponding psycho-social conduct as well.

    To be sure, we can have built-in safeguards at various levels - for example, a non-preemptible check-and-balance system, beginning with Isaac Asimov's laws of Robotics, among other things. I really don't believe we have to worry about computers taking over the world. The system would have far more advantages in many applications of daily use.
    Posted 10-01-2011 at 10:24 AM by DJ Shaji
  3. Old Comment
    A thread on a similar topic was posted in Linux - News some time ago. You may be interested.

    …as for me, I think I'll just wait for Judgement Day…
    Posted 10-01-2011 at 05:48 PM by MrCode
    Updated 10-01-2011 at 05:51 PM by MrCode
  4. Old Comment
    The problem is being able to simulate a neural network of sufficient size to emulate a brain, in a package compact enough to move about. Also, understanding how to connect inputs and outputs so that the different parts of the neural net work correctly.

    I stand by my belief that if you could create a machine that could think, learn, and "feel", then it would be artificial life with actual intelligence. There is nothing artificial about the intelligence if it can learn and adapt from its own experiences. But then the question is, why would a man-made (or machine-made) machine be any less alive than an organic machine?
    Posted 10-02-2011 at 01:03 AM by lumak
  5. Old Comment
    Quote:
    There is nothing artificial about the intelligence if it can learn and adapt from its own experiences. But then the question is, why would a man-made (or machine-made) machine be any less alive than an organic machine?
    This is what concerns me most about AI, personally. It's not so much the Hollywood "kill all humans" scenario, it's the "mechanophobia", if you will; the moral implications of essentially "creating life".

    Would it be ethical to simply create new sentient beings, and under what conditions (like, on an assembly line, etc.; I know you would probably just come back with the whole "but we already do when we have children" thing )? How would we treat them? How would we treat ourselves?

    See, I figure that since creating an AI of a sufficient level as to emulate a human being would require a more-or-less complete understanding of human psychology, we would have to be "reduced to mere mechanism" either long before or just after the "robot revolution".

    Science? Dominated by the new mecha-race.
    Art? Dominated by the new mecha-race. Deemed worthless/eradicated.
    Government/politics? Dominated by the new mecha-race.

    If there's no difference between a piece of art created by a human and one created by an AI, where is its worth (on either side)?

    If a scientific discovery is no less useful because an AI made it, what have we (as humans) to be proud of?

    In short, why should we consider ourselves "special" at all? Why do we delude ourselves into thinking we're somehow "worth something"? Why do we bother "expressing ourselves" if it's all worthless/meaningless in the end? We're all just matter; we're all just machines following a path every bit as predetermined as the rest of the universe, so why do we even give a fsck about anything anymore?

    This, I think, is what scares the ever-loving fsck out of me about AI. We'll be replaced as a species. Not deliberately destroyed, mind you, but slowly and painfully degraded to being worthless.

    I didn't want to have to expand on my "I'll just wait until Judgement Day" statement like this, but you've forced me to (I had no choice, it was determined since the beginning of the universe! Action/reaction! Chain of causation! ).
    Posted 10-02-2011 at 02:45 AM by MrCode
    Updated 10-02-2011 at 03:13 AM by MrCode
  6. Old Comment
    Quote:
    Originally Posted by lumak View Comment
    The problem is being able to simulate a neural network of sufficient size to emulate a brain, in a package compact enough to move about. Also, understanding how to connect inputs and outputs so that the different parts of the neural net work correctly.
    I agree. In my opinion, the key here is multiple levels of parallel processing, as in our brain - not simply clubbing together raw processing power with a vast database of information (case in point: IBM's Watson and friends). Size is not a limitation to intelligence. The human brain is not the largest in the animal kingdom, just the most efficient and optimized. That is where the solution lies. We need to develop a general-purpose cognitive framework and assemble it bit by bit. Evolution took millions of years to formulate the cognitive system that we know today as a mind. We just have to build a system modeled on what nature has already perfected for us. In countless other fields, from architecture to bionics, researchers have already done this, and are doing it every day. We just have to implement it in popular technology.

    The starting point for any such development would have to be conceptual design, not procedural methodology. I for one would only (if ever!) begin development on such a system (or even the idea of one) once I had clearly defined how the system is going to be organized. In other words, the emphasis is on the algorithm itself and not its implementation. Once such a design is conceptualized, it could be implemented in any of the currently available programming languages, or maybe an entirely new language. The idea is not complexity, but efficiency. The focus is not to imitate the human mind; rather, it is to emulate mind itself - if it reaches a critical level of self-awareness, it might very well be able to evolve itself further.
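The "learn from experience by adjusting connections" principle that nature perfected can be shown in miniature. A toy sketch (my own illustration, nothing like a full cognitive framework): a single artificial neuron with two connection weights, which starts out blank and learns logical AND purely from feedback on its mistakes.

```python
# Toy illustration of learning-by-weight-adjustment: a single perceptron
# trained on logical AND. A sketch of the principle only.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # connection weights, initially "blank"
    b = 0.0          # bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out        # feedback: how wrong was the guess?
            w[0] += lr * err * x1     # strengthen/weaken each connection
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in AND])  # learned AND: [0, 0, 0, 1]
```

Nobody programmed the AND rule in; the behaviour emerged from repeated error-driven adjustment - the same basic idea, at an utterly trivial scale, as learning from experience.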

    Quote:
    Originally Posted by lumak View Comment
    I stand by my belief that if you could create a machine that could think, learn, and "feel", then it would be artificial life with actual intelligence. There is nothing artificial about the intelligence if it can learn and adapt from its own experiences. But then the question is, why would a man-made (or machine-made) machine be any less alive than an organic machine?
    I again agree. Especially once such an intelligence reaches a level where it can self-propagate and further improve itself, it would indeed be a consciousness independent of our own. At that point, ethical considerations would be necessary in dealing with such beings. Such an intelligence might far surpass our own, but I do believe it would be beneficial and far outweigh any risks involved.
    Posted 10-02-2011 at 12:54 PM by DJ Shaji
  7. Old Comment
    Quote:
    Originally Posted by MrCode View Comment
    This is what concerns me most about AI, personally. It's not so much the Hollywood "kill all humans" scenario, it's the "mechanophobia", if you will; the moral implications of essentially "creating life".

    Would it be ethical to simply create new sentient beings, and under what conditions (like, on an assembly line, etc.; I know you would probably just come back with the whole "but we already do when we have children" thing )? How would we treat them? How would we treat ourselves?
    Imagine a world with enough doctors - doctors who need neither food nor sleep - to treat the billions of people in underdeveloped nations. Imagine new and innovative solutions to problems like malnutrition and the energy crisis, and new and sustainable sources of energy. Imagine a spam filter (for email!) that is perfect. Imagine a search engine that knows exactly what it is you're looking for.

    There are Earth-like (possibly habitable) planets out there, but they are simply too far away to explore; reaching them would take longer than an astronaut's lifetime. These, and millions of other possibilities, are within our reach. Imagine surgeons so accurate that they can perform life-saving operations that aren't even possible today.

    Mechanical arms already do everything from building cars to making incisions on eyeballs. Scientists have created artificial skin-like textures that can feel pressure and temperature (and are so soft!). This is just like when man envisioned the airplane: we know it can be done - we just have to figure out a way to do it. And I reiterate - the benefits will far outweigh the risks. Technology has already entered almost every aspect of our lives. Just imagine how much easier our lives would be if that technology were to become smart.

    Quote:
    Originally Posted by MrCode View Comment
    See, I figure that since creating an AI of a sufficient level as to emulate a human being would require a more-or-less complete understanding of human psychology, we would have to be "reduced to mere mechanism" either long before or just after the "robot revolution".
    I agree. That is where we have to focus our energies. But I disagree that we would be "reduced" to mere mechanism, because we are not creating artificial humans. We have no reason to consider them "competition". Yes, ethical considerations are definitely in order, but the benefits far outweigh the risks.

    Quote:
    Originally Posted by MrCode View Comment
    We'll be replaced as a species. Not deliberately destroyed, mind you, but slowly and painfully degraded to being worthless.
    No, we won't. You're wrong there. When such systems do become a possibility, we will have to proceed with caution. Absolutely. Ethical considerations are definitely in order in the design and production of such systems. But I do not believe that anything even remotely similar to Judgment Day or The Matrix is possible. The situation would be similar to immigration as it currently exists in many nations: citizens of one nation can go on a work visa to another; they're given equal status and certain rights. They work and contribute to development and progress, and at the same time are self-sustaining. The same would hold for mechanical beings.

    Finally, we can, should, and will put fallback mechanisms in place to cater to emergency situations, should the need ever arise. Intelligence itself is nothing to fear. We run a far greater risk of being killed by a nuclear bomb or a mass epidemic than of being overrun by machines.
    Posted 10-02-2011 at 01:15 PM by DJ Shaji
 

  


