  1. Old Comment

    Artificial Intelligence :: Developing a Brain

    Quote:
    Originally Posted by MrCode View Comment
    This is what concerns me most about AI, personally. It's not so much the Hollywood "kill all humans" scenario, it's the "mechanophobia", if you will; the moral implications of essentially "creating life".

    Would it be ethical to simply create new sentient beings, and under what conditions (like, on an assembly line, etc.; I know you would probably just come back with the whole "but we already do when we have children" thing )? How would we treat them? How would we treat ourselves?
    Imagine a world with enough doctors - doctors who need neither food nor sleep - to treat the billions of people in underdeveloped nations. Imagine new and innovative solutions to problems like malnutrition and the energy crisis, and new and sustainable sources of energy. Imagine a spam filter (for email!) that is perfect. Imagine a search engine that knows exactly what it is that you're looking for.

    There are earth-like planets (possibly habitable) beyond our solar system, but they are simply too far away to explore; reaching them would take longer than an astronaut's lifetime. These, and millions of other possibilities, are within our reach. Imagine surgeons so precise that they can perform life-saving operations that aren't even possible today.

    Mechanical arms already do everything from building cars to making incisions on eyeballs. Scientists have created artificial skin-like textures that can feel pressure and temperature (and are so soft!). This is just like when man envisioned the airplane: we know it can be done - we just have to figure out a way to do it. And I reiterate - the benefits will far outweigh the risks. Technology has already invaded almost every aspect of our lives. Just imagine how much easier our lives would be if that technology were to become smart.

    Quote:
    Originally Posted by MrCode View Comment
    See, I figure that since creating an AI of a sufficient level as to emulate a human being would require a more-or-less complete understanding of human psychology, we would have to be "reduced to mere mechanism" either long before or just after the "robot revolution".
    I agree. That is where we have to focus our energies. But I disagree that we would be "reduced" to mere mechanism, because we are not creating artificial humans. We have no reason to consider them "competition". Yes, ethical considerations are definitely in order, but the benefits far outweigh the risks.

    Quote:
    Originally Posted by MrCode View Comment
    We'll be replaced as a species. Not deliberately destroyed, mind you, but slowly and painfully degraded to being worthless.
    No, we won't. You're wrong there. When such systems do become a possibility, we will have to proceed with caution. Absolutely. Ethical considerations are definitely in order in the design and production of such systems. But I do not believe that anything even remotely similar to Judgment Day or The Matrix is possible. The situation would be similar to immigration as it currently exists in many nations: citizens of one nation can go to another on a work visa; they're given equal status and some rights, they work and contribute to development and progress, and at the same time they are self-sustaining. The same would be true of mechanical beings.

    Finally, we can and should and will put fallback mechanisms in place to cater to emergency situations, if the need should ever arise. Intelligence itself is nothing to fear. We run a far greater risk of being killed by a nuclear bomb or a mass epidemic than of being overrun by machines.
    Posted 10-02-2011 at 01:15 PM by DJ Shaji
  2. Old Comment

    Artificial Intelligence :: Developing a Brain

    Quote:
    Originally Posted by lumak View Comment
    The problem is being able to simulate a neural network of sufficient size to emulate a brain in a package compact enough to move about, and understanding how to connect inputs and outputs so that the different parts of the neural net work correctly.
    I agree. In my opinion, the key here is multiple levels of parallel processing, like our brain does, not simply clubbing together raw processing power with a vast database of information - case in point: IBM's Watson and friends. Size is not a limitation on intelligence. The human brain is not the largest in the animal kingdom, just the most efficient and optimized. That is where the solution lies.

    We need to develop a general-purpose cognitive framework and assemble it bit by bit. Evolution took millions of years to formulate the cognitive system that we know today as a mind. We just have to build a system modeled on what nature has already perfected for us. In countless other fields, from architecture to bionics, researchers have already done it, and are doing it every day. We just have to implement that in popular technology.

    The starting point for any such development would have to be based on conceptual design and not procedural methodology. I for one would only (if ever!) begin development on such a system (or even the idea of one) when I had clearly defined how the system is going to be organized. In other words, the emphasis is on the algorithm itself and not its implementation. When such a design is conceptualized, it could then be implemented in any of the currently available programming languages, or maybe an entirely new language. The idea is not complexity, but efficiency. The focus is not to imitate the human mind; rather, it is to emulate mind itself - when it reaches a critical level of self-awareness, it might very well be able to evolve itself further.
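
    To make that concrete, here is a toy sketch of the kind of layered, parallel building block I have in mind - nothing more than an illustration. The layer sizes, the sigmoid activation, and the use of Python with NumPy are arbitrary choices for the example, not a design for an actual cognitive framework:

    Code:
    # A toy two-layer feedforward network: each layer transforms its
    # input in parallel across units, and layers are stacked bit by bit.
    import numpy as np

    def sigmoid(x):
        # Squash activations into (0, 1), loosely analogous to a firing rate.
        return 1.0 / (1.0 + np.exp(-x))

    class Layer:
        def __init__(self, n_in, n_out, rng):
            # Small random weights; no training here, structure only.
            self.w = rng.standard_normal((n_in, n_out)) * 0.1
            self.b = np.zeros(n_out)

        def forward(self, x):
            return sigmoid(x @ self.w + self.b)

    rng = np.random.default_rng(0)
    net = [Layer(4, 8, rng), Layer(8, 2, rng)]   # assembled bit by bit

    x = rng.standard_normal(4)                   # a stand-in sensory input
    for layer in net:
        x = layer.forward(x)
    print(x)                                     # two outputs in (0, 1)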

    Quote:
    Originally Posted by lumak View Comment
    I stand by my belief that if you could create a machine that could think, learn, and "feel", then it would be artificial life with actual intelligence. There is nothing artificial about the intelligence if, from its own experiences, it can learn and adapt. But then the question is, why would a man-made, or other machine-made, machine be any less alive than an organic machine?
    I again agree. Especially when such intelligence reaches a level where it can self-propagate and further improve itself, it would indeed be a consciousness independent of our own. At that point, ethical considerations would be necessary in dealing with such beings. Such intelligence would far surpass our own, but I do believe it would only be beneficial and would far outweigh any risks involved.
    Posted 10-02-2011 at 12:54 PM by DJ Shaji
  3. Old Comment

    Artificial Intelligence :: Developing a Brain

    Quote:
    There is nothing artificial about the intelligence if, from its own experiences, it can learn and adapt. But then the question is, why would a man-made, or other machine-made, machine be any less alive than an organic machine?
    This is what concerns me most about AI, personally. It's not so much the Hollywood "kill all humans" scenario, it's the "mechanophobia", if you will; the moral implications of essentially "creating life".

    Would it be ethical to simply create new sentient beings, and under what conditions (like, on an assembly line, etc.; I know you would probably just come back with the whole "but we already do when we have children" thing )? How would we treat them? How would we treat ourselves?

    See, I figure that since creating an AI of a sufficient level as to emulate a human being would require a more-or-less complete understanding of human psychology, we would have to be "reduced to mere mechanism" either long before or just after the "robot revolution".

    Science? Dominated by the new mecha-race.
    Art? Dominated by the new mecha-race. Deemed worthless/eradicated.
    Government/politics? Dominated by the new mecha-race.

    If there's no difference between a piece of art, whether a human creates it or an AI, where is its worth (on either side)?

    If a scientific discovery is no less useful because an AI made it, what have we (as humans) to be proud of?

    In short, why should we consider ourselves "special" at all? Why do we delude ourselves into thinking we're somehow "worth something"? Why do we bother "expressing ourselves" if it's all worthless/meaningless in the end? We're all just matter; we're all just machines following a path every bit as predetermined as the rest of the universe, so why do we even give a fsck about anything anymore?

    This, I think, is what scares the ever-loving fsck out of me about AI. We'll be replaced as a species. Not deliberately destroyed, mind you, but slowly and painfully degraded to being worthless.

    I didn't want to have to expand on my "I'll just wait until Judgement Day" statement like this, but you've forced me to (I had no choice, it was determined since the beginning of the universe! Action/reaction! Chain of causation! ).
    Posted 10-02-2011 at 02:45 AM by MrCode
    Updated 10-02-2011 at 03:13 AM by MrCode
  4. Old Comment

    Artificial Intelligence :: Developing a Brain

    The problem is being able to simulate a neural network of sufficient size to emulate a brain in a package compact enough to move about, and understanding how to connect inputs and outputs so that the different parts of the neural net work correctly.

    I stand by my belief that if you could create a machine that could think, learn, and "feel", then it would be artificial life with actual intelligence. There is nothing artificial about the intelligence if, from its own experiences, it can learn and adapt. But then the question is, why would a man-made, or other machine-made, machine be any less alive than an organic machine?
    Posted 10-02-2011 at 01:03 AM by lumak
  5. Old Comment

    Artificial Intelligence :: Developing a Brain

    A thread on a similar topic was posted in Linux - News some time ago. You may be interested.

    …as for me, I think I'll just wait for Judgement Day…
    Posted 10-01-2011 at 05:48 PM by MrCode
    Updated 10-01-2011 at 05:51 PM by MrCode
  6. Old Comment

    Artificial Intelligence :: Developing a Brain

    Well, certainly Hollywood seems to think so.

    I'm talking about something like a Brain Emulator; it would think in a manner similar to the human thought process. I mean, yeah, we all do have certain bad dispositions, but we don't act on them, because we have a sense of morality and a self-developed code of ethical conduct. Surely a rational intelligence would take into consideration the consequences of its actions. We, for instance, don't hurt others because it is wrong. Any emulator of thinking patterns would show the corresponding psycho-social conduct as well.

    To be sure, we can have built-in safeguards at various levels - for example, a non-preemptible check-and-balance system, beginning with Isaac Asimov's Laws of Robotics, among other things. I really don't believe we have to worry about computers taking over the world. The system would have far more advantages in many applications of daily use.
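
    Just to make the idea concrete, here is a minimal sketch of such a non-preemptible gate in Python. The rule names and the action format are invented purely for illustration:

    Code:
    # A sketch of the "non-preemptible check and balance" idea: every
    # proposed action must pass a fixed rule check before it reaches
    # an actuator. The rule set and action format are hypothetical.
    FORBIDDEN = {"harm_human", "disable_safeguards"}

    def asimov_gate(action):
        """Return True only if the action violates no hard rule."""
        return action["kind"] not in FORBIDDEN

    def execute(action, actuator):
        # The gate sits outside the learning system itself, so the
        # system cannot rewrite or bypass it.
        if not asimov_gate(action):
            raise PermissionError("blocked: " + action["kind"])
        actuator(action)

    execute({"kind": "fetch_coffee"}, actuator=print)  # allowed
    # execute({"kind": "harm_human"}, print)           # raises PermissionError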
    Posted 10-01-2011 at 10:24 AM by DJ Shaji
  7. Old Comment

    Artificial Intelligence :: Developing a Brain

    Think how dangerous it could be to have a computer think for itself. If you had a panel of people, you could not come to agreement on all the checks and balances. Which individual personality would it have? It could really be a horror show if the (super)computer was networked.
    Posted 10-01-2011 at 08:23 AM by Larry Webb
  8. Old Comment

    Suicide

    Why would you want to kill yourself? You seem like a nice young man; you should be enjoying life and all it has to offer.

    Mind if I ask, how old are you and where do you live?
    Posted 07-28-2011 at 05:32 AM by FredGSanford
  9. Old Comment

    Suicide

    Simple question:
    Why?
    Posted 07-23-2011 at 11:06 AM by brianL
  10. Old Comment

    Wanna know something weird?

    No, yeah, but Anaconda doesn't allow an installation on less than 768 MB of RAM. I used to love GNOME, but GNOME 3 is just plain too much for my system to handle.

    I'm a little concerned that my Creative Soundblaster Live! 5.1vx might not work on FreeBSD (it barely works on Linux). Apart from that, I'm very interested in *BSD.
    Posted 07-23-2011 at 10:40 AM by DJ Shaji
  11. Old Comment

    Wanna know something weird?

    Just to point out, Fedora doesn't require that much RAM. Fedora with GNOME and a bunch of other stuff does. My Fedora 15 install uses 110 MB of RAM on boot with XFCE. Go figure. Linux is all about bending your OS to your will. Make it whatever you want it to be.
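
    In case it helps, a figure like that can be checked right after boot by reading /proc/meminfo. A rough sketch, using the simple total-minus-free/buffers/cached estimate (an approximation, not an exact accounting):

    Code:
    # Estimate memory in use by reading /proc/meminfo (values are in KiB).
    def meminfo(field):
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith(field + ":"):
                    return int(line.split()[1])
        raise KeyError(field)

    total = meminfo("MemTotal")
    used = total - meminfo("MemFree") - meminfo("Buffers") - meminfo("Cached")
    print("in use: %d MiB of %d MiB" % (used // 1024, total // 1024))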
    Posted 07-07-2011 at 09:48 PM by zootboy
  12. Old Comment

    Wanna know something weird?

    Not to play the devil's advocate here, but what's stopping you from switching to a *BSD full time? (Note: I'm a card-carrying OpenBSD fanatic. Take what I say concerning Linux vs. BSD with a grain of salt, haha.)
    Posted 07-06-2011 at 10:03 AM by rocket357
  13. Old Comment

    Fedora 15: 768 MB RAM minimum ?

    No, see, we can have completeness as well as efficiency, but over the years I've noticed more and more parts of the system being written in interpreted languages like Python. Nothing wrong with that, except that these require an interpreter to run, and therefore take up many times the memory a corresponding C program would. Remember the days of Windows XP? It ran blazingly fast on 256 MB of RAM (without an antivirus, spyware detection software, firewall ...). I fail to understand why similar performance cannot be achieved with GNOME. I don't want gnomevfsd running to automatically mount CDs when I don't even have a CD-ROM drive. Know what I mean?
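
    To put a rough number on the interpreter overhead, you can ask a bare Python process for its own resident set size on Linux; compare the figure it prints against what a small compiled C program occupies. A quick sketch:

    Code:
    # Print this Python process's resident set size (VmRSS) from /proc.
    def vm_rss_kib():
        with open("/proc/self/status") as f:
            for line in f:
                if line.startswith("VmRSS:"):
                    return int(line.split()[1])  # reported in KiB
        return 0

    print("bare interpreter RSS: %d KiB" % vm_rss_kib())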

    Plus, Fedora uses Anaconda as its installer, which is written in Python. So the irony is that even though I can run the live CD perfectly, I can't install it to my hard drive.

    The kernel itself can still boot with as little as 24 MB of RAM. That's why I say Fedora is going the wrong way with its hardware requirements.
    Posted 06-27-2011 at 07:32 AM by DJ Shaji
  14. Old Comment

    Fedora 15: 768 MB RAM minimum ?

    I expect most of the major distros will require 1 gig of RAM in a couple more years. It seems there is a contest to see who can make the most desirable, completely automated distro.

    I can remember when the largest, most complete distros required 256 meg, but they did not detect nearly as much hardware or as many drivers.
    Posted 06-21-2011 at 06:46 PM by Larry Webb

  


