General
This forum is for non-technical general discussion, which can include both Linux and non-Linux topics. Have fun!
A computer has beaten a human professional for the first time at Go — an ancient board game that has long been viewed as one of the greatest challenges for artificial intelligence (AI).
In March 2016, AlphaGo will face its ultimate challenge: a 5-game challenge match in Seoul against the legendary Lee Sedol, the top Go player in the world over the past decade.
I think the analogy is dead on. The guy in the movie has his little spell book + vial of Holy Water and thinks he can mumble a few words and control an overlord from the 9th circle of hell.
If we succeed in building silicon that is sentient, it will be totally alien to us in every way possible. We would have more in common w/ the guys from Close Encounters of the Third Kind than we will w/ an AI.
What a bunch of silly, over-reacting, paranoid hogwash. One huge difference between humans and AI is that AI must be programmed by humans to accomplish anything. Please tell me what foolish programmer, writing a program to discover a way to eliminate spam, would even give it the option to conclude that eliminating humans is a viable solution? Then the real biggy: just how powerful AND crazy would a lowly programmer have to be to actually give this PC the means to harm a single individual, let alone the entire race of homo sapiens? And Ouija boards? Give me a fucking break. Sure <sarc> cardboard and wood have special arcane properties, so far beyond radio waves, to communicate with demons whose sole reason for existing is to torment the inhabitants of some little podunk planet on the outskirts of one galaxy among hundreds of billions. Sure!
Does AI need some regulation? Probably. But until AIs reach the point where they can program themselves, and not just learn from experience to enhance their databases, the threat is non-existent. Even then, the same sort of regulations that prevent private citizens from owning weaponized anthrax or fissionable material should solve that absurd issue. There is an immense difference between coming to a conclusion and enacting "solutions".
And, of course, once again, you know everything about everything. That must be really cool. How do you do it? You must tell me your secret, b/c it obviously ain't done by educating yourself on the facts or by having a mind open enough to discuss any opinion other than your own. Like over in the thread about the problems w/ XP in the electric grid: I was waiting for you to come back w/ something smart so I could drop a bunch of links on you that document that *exact problem*. But you went poof?
You don't have all of the answers, neither do I. And truly rational "scholars" always realize that they don't have all of the answers.
Can capitalists (or well-knowns) be the "brightest" minds, and if so, what's an opinion worth? Simple logic defies stupidity (even in opinions...). SOME QUESTIONS WILL NEVER HAVE ANSWERS, so should we keep going in circles, or keep being suckers? I hate purple people too. (I didn't mean it, sorry to anyone purple!)
I can honestly say I don't understand 98 percent plus of everything you say.
<SIGH> Once again, Steven G, you choose to attack me personally rather than the content of my post. In the Wired article you linked, it seems much of the talk of Terminators is just eye-grabbing fluff which totally dissipates by the end, and what does exist supports what I have asked, namely: "How can an AI harm a person unless the means is programmed in?" Example: the article mentions AI somehow ruining the commodities market. Exactly how could this be done, even by a human let alone an AI, without the means having been made available?
Notice that by the end the big worry is that self-driving vehicles may put bus drivers out of work. Did anyone worry that cars would put buggy whip companies out of business? What about all the aerospace engineers that Nixon's slashing of NASA's budget put out of work? There simply is no 100% job security. That's just the nature of progress, and humans are not likely to choose to stop progress.
I think it is good that great minds are concerned with such issues, especially this early, but AI is further down on my list than plant genetics, which is already in full swing. This is to say nothing of weaponized viruses for chemical warfare, an actual threat right now, not 100 years from now.
Regarding the electrical grid: it is my son who is in the business and keeps me up to date (a subject you belittled), along with several scientific forums to which I subscribe. My last response there confronted the concern with co-ops, and I choose not to get into flaming, so yup, I poofed once my point on the subject was made. I have no interest in flame bait from anyone just spoiling for a personal flame fight. Stick to issues and we are fine. Get personal and expect silence. Get heated and expect a report. Your choice.