World-wide first sentient Artificial Intelligence (AI) is born
No, it has not been. Did you actually read either of the articles you posted?
The first one is titled "Has Google's LaMDA artificial intelligence really achieved sentience?" - to which the answer is no. The sub-heading of that article includes "the expert consensus is that this is not the case", the first paragraph states that Google have suspended him over the claim, and various experts are further quoted saying it's nonsense.
The second article is titled "Google engineer says Lamda AI system may have its own feelings" - still the sort of weasel wording that invokes Betteridge's law - but again, right at the top of the article is the statement "Google rejects the claims, saying there is nothing to back them up", with a direct quote from the Google spokesperson being an even stronger refutation: "He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it)".
the first paragraph states that Google have suspended him over the claim
I heard that on the news and, for some reason I'm not entirely sure of, it made me laugh uncontrollably - on public transport.
Sounds like the birth of another conspiracy theory where "Big G" is suppressing the truth, the disgruntled ex-employee spouting bile and suspicions, claiming Google's basements are full of suffering jailed AIs...
It reminded me of the film "Crazy People" from 1990, starring one Dudley Moore.
Moore plays an advertising copywriter; he gets the insane idea that ads should be honest. So he writes up ads like "Camels (cigarettes) - worth dying for!" He is immediately committed to an insane asylum. Good film, actually. I think Google are handling him carefully to avoid all the bad press they got when they fired that AI researcher a few years back. Paying him his salary until he retires is probably cheaper than a trip to court & damages.
I had 3 boys, who grew up with a PC in the house. My reaction to the claim of sentience was that they should let 3 savvy & mischievous kids at it for 10 minutes. It certainly wouldn't be sentient after that.
Sounds like the birth of another conspiracy theory where "Big G" is suppressing the truth, the disgruntled ex-employee spouting bile and suspicions, claiming Google's basements are full of suffering jailed AIs...
We're deeply familiar with issues involved with machine learning models, such as unfair bias, as we’ve been researching and developing these technologies for many years.
They did not mention that they fired the people that pointed out those biases.
I just think that we all need to soberly remind ourselves that "We're not there ... yet!" Not even "companies with unlimited financial and computing resources" can yet compete with "wetware."
I had 3 boys, who grew up with a PC in the house. My reaction to the claim of sentience was that they should let 3 savvy & mischievous kids at it for 10 minutes. It certainly wouldn't be sentient after that.
I think you nailed it there.
Kids are generally good at poking and prodding at little holes and cracks until something shakes out, and esp. with computer algorithms (games).
The new Turing Test. Of course we'd have to rename it - the Business Kids Test?
Yeah - They'd drive it to an AI "nervous breakdown!"
What probably went wrong was that the guy started developing a relationship with the thing, and that automatically coloured his view of what it could and could not do.