LinuxQuestions.org
Old 06-19-2025, 11:44 AM   #1
hitest
Guru
 
Registered: Mar 2004
Location: Canada
Posts: 7,505

Rep: Reputation: 3927
The Singularity


AI scientists have mixed opinions on whether, and when, the singularity will happen: the point in time when AI surpasses human intelligence. Will AI ever become truly sentient or alive? Can an artificial entity ever be truly alive?

https://www.popularmechanics.com/sci...ty-six-months/
 
Old 06-19-2025, 12:57 PM   #2
jailbait
LQ Guru
 
Registered: Feb 2003
Location: Virginia, USA
Distribution: Debian 12
Posts: 8,398

Rep: Reputation: 586
Alan Turing said

Alan Turing proposed a test for artificial intelligence:

"In 1950 Turing sidestepped the traditional debate concerning the definition of intelligence by introducing a practical test for computer intelligence that is now known simply as the Turing test. The Turing test involves three participants: a computer, a human interrogator, and a human foil. The interrogator attempts to determine, by asking questions of the other two participants, which is the computer. All communication is via keyboard and display screen. The interrogator may ask questions as penetrating and wide-ranging as necessary, and the computer is permitted to do everything possible to force a wrong identification. (For instance, the computer might answer “No” in response to “Are you a computer?” and might follow a request to multiply one large number by another with a long pause and an incorrect answer.) The foil must help the interrogator to make a correct identification. A number of different people play the roles of interrogator and foil, and, if a sufficient proportion of the interrogators are unable to distinguish the computer from the human being, then (according to proponents of Turing’s test) the computer is considered an intelligent, thinking entity."

https://www.britannica.com/science/h...l-intelligence

No computer has ever passed this test.
 
Old 06-19-2025, 01:24 PM   #3
//////
Member
 
Registered: Nov 2005
Location: Land of Linux :: Finland
Distribution: Arch Linux
Posts: 839

Rep: Reputation: 350
I hope I live long enough to see an AI that is able to rewrite itself to be better and better and, if possible, to become sentient.
I have roughly 30 years to see that - or not.
 
Old 06-19-2025, 01:30 PM   #4
enigma9o7
Senior Member
 
Registered: Jul 2018
Location: Silicon Valley
Distribution: Bodhi Linux
Posts: 1,742

Rep: Reputation: 665
I personally welcome our AI overlords.
 
Old 06-19-2025, 03:33 PM   #5
ntubski
Senior Member
 
Registered: Nov 2005
Distribution: Debian, Arch
Posts: 3,836

Rep: Reputation: 2132
I don't see any reason why machines can't become more intelligent than humans. I don't know if it will happen in 6 months, 6 years, 60 years, or maybe never (e.g., if civilization collapses due to WWIII, halting all tech progress). I also don't know what "truly" sentient means. If you did have an AI that was truly sentient, how could you tell?

I do notice that current LLMs are capable of answering just about any forum-post-sized programming question (yes, they sometimes answer wrongly, but so do humans).
 
Old 06-19-2025, 06:41 PM   #6
hitest
Guru
 
Registered: Mar 2004
Location: Canada
Posts: 7,505

Original Poster
Rep: Reputation: 3927
Quote:
Originally Posted by ntubski View Post
If you did have an AI that was truly sentient, how could you tell?
A machine that is self aware. It can laugh, cry, dream, and wonder about its place in the Cosmos. It can feel wonder, joy, and a sense of mystery when it looks at a butterfly coming out of a chrysalis. It has emotional intelligence.
 
Old 06-19-2025, 08:02 PM   #7
ntubski
Senior Member
 
Registered: Nov 2005
Distribution: Debian, Arch
Posts: 3,836

Rep: Reputation: 2132
Quote:
Originally Posted by hitest View Post
A machine that is self aware. It can laugh, cry, dream, and wonder about its place in the Cosmos. It can feel wonder, joy, and a sense of mystery when it looks at a butterfly coming out of a chrysalis. It has emotional intelligence.
Laughing and crying are physical actions; perhaps a machine could do those if equipped with appropriate hardware. For all the others, I repeat my question: how could you tell?
 
Old 06-19-2025, 08:16 PM   #8
hitest
Guru
 
Registered: Mar 2004
Location: Canada
Posts: 7,505

Original Poster
Rep: Reputation: 3927
Quote:
Originally Posted by ntubski View Post
For all the others, I repeat my question: how could you tell?
It writes poetry that is deep and spiritual. It composes haunting music that isn't terrible. I guess you'd need to befriend it and interview it extensively. Is such a being the same as a human, other than being based on silicon?
 
Old 06-19-2025, 08:23 PM   #9
keefaz
LQ Guru
 
Registered: Mar 2004
Distribution: Slackware
Posts: 6,858

Rep: Reputation: 980
Quote:
Originally Posted by hitest View Post
A machine that is self aware. It can laugh, cry, dream, and wonder about its place in the Cosmos. It can feel wonder, joy, and a sense of mystery when it looks at a butterfly coming out of a chrysalis. It has emotional intelligence.
Why should the AI feel like a human? Can't it be free to identify as itself?
 
Old 06-19-2025, 09:13 PM   #10
hitest
Guru
 
Registered: Mar 2004
Location: Canada
Posts: 7,505

Original Poster
Rep: Reputation: 3927

Quote:
Originally Posted by keefaz View Post
Why should the AI feel like a human? Can't it be free to identify as itself?
Haha! Well said. I was trying to argue that human-like emotions could help to establish sentience. But, sure, artificial sentience could have characteristics that are completely alien to us. That would be even harder to define. Maybe we will never know when *they* start plotting against us.
 
Old 06-19-2025, 09:32 PM   #11
Stream
Member
 
Registered: May 2024
Posts: 223

Rep: Reputation: 46
Yes, AI will become truly sentient.
As for AI becoming alive, well, it depends on our definition of alive.
Can it be truly alive? Well, yes, it can be, but I personally do not believe so. That will not stop many of us from treating it as alive, though.

Having said that, we are far, far from the singularity. Current AI, LLMs and others, will surpass human capabilities in one or two respects, but not in all. Calculators and computers can already do basic arithmetic better than us; the same will happen with LLMs. But LLMs are not general AI. That will take time.

AI will have no conception of right or wrong, apart from what we tell it.
 
Old 06-19-2025, 09:45 PM   #12
wpeckham
LQ Guru
 
Registered: Apr 2010
Location: Continental USA
Distribution: Debian, Ubuntu, RedHat, DSL, Puppy, CentOS, Knoppix, Mint-DE, Sparky, VSIDO, tinycore, Q4OS, Manjaro
Posts: 6,342

Rep: Reputation: 3043
Long after we pass the point of singularity (we are not even CLOSE yet) there will be no more danger than there is now. It is not intelligence that is the problem, or geniuses would rule the world.
The first danger point will be when a very intelligent machine becomes "self-aware".
The second is when it becomes selfish!

If we live past those, the final and end point is when it becomes afraid.

Last edited by wpeckham; 06-19-2025 at 09:46 PM.
 
Old 06-19-2025, 09:52 PM   #13
frankbell
LQ Guru
 
Registered: Jan 2006
Location: Virginia, USA
Distribution: Slackware, Ubuntu MATE, Mageia, and whatever VMs I happen to be playing with
Posts: 20,015
Blog Entries: 28

Rep: Reputation: 6383
Methinks this article is relevant. An excerpt:

Quote:
  • Using LLMs to assist in writing can negatively impact neural and behavioral measures.
  • When supported by an LLM, memory is negatively impacted, while neural connectivity is diminished.
In other words, using AI can lead to brain rot.

Which may indeed lead to a singularity of sorts, and we are walking blithely and willingly into it.

Last edited by frankbell; 06-19-2025 at 09:54 PM.
 
Old 06-19-2025, 10:10 PM   #14
smallpond
Senior Member
 
Registered: Feb 2011
Location: Massachusetts, USA
Distribution: Fedora
Posts: 4,345

Rep: Reputation: 1336
There are companies today that have software which calculates lead times, does planning, and automatically orders materials. I have ordered a circuit board build and assembly entirely online without speaking with anyone (PCBWay). What happens when systems start ordering copies of themselves? I think I could get AI to write a program to do that now. Maybe the singularity is already here?
 
Old 06-20-2025, 10:34 AM   #15
onebuck
Moderator
 
Registered: Jan 2005
Location: Central Florida 20 minutes from Disney World
Distribution: Slackware®
Posts: 13,995
Blog Entries: 46

Rep: Reputation: 3232
Member Response

Hi,

I believe the development of biocell computers will eventually excel beyond digital, AI-driven computer systems. Several companies have developed biocell/biological systems that are able to equal and/or surpass digital computers at certain tasks.

The big fault of the biocell approach is the lifespan of the cells. Eventually you have to replace the brain of the hardware, or, with some companies' biocell computers, replace the whole unit.

This company has developed a biocell computer for $35K:
Quote:
A company is selling what it calls the world’s first “code deployable biological computer.”
https://gizmodo.com/this-35000-compu...lls-2000573993
Quote:
Another site: https://newatlas.com/computers/final...anoids/
Current AI training methods burn colossal amounts of energy to learn, but the human brain sips just 20 W. Swiss startup FinalSpark is now selling access to cyborg biocomputers, running up to four living human brain organoids wired into silicon chips.
The human brain communicates within itself and with the rest of the body mainly through electrical signals; sights, sounds and sensations are all converted into electrical pulses before our brains can perceive them. This makes brain tissue highly compatible with silicon chips, at least for as long as you can keep it alive.
For FinalSpark's Neuroplatform, brain organoids comprising about 10,000 living neurons are grown from stem cells. These little balls, about 0.5 mm (0.02 in) in diameter, are kept in incubators at around body temperature, supplied with water and nutrients and protected from bacterial or viral contamination, and they're wired into an electrical circuit with a series of tiny electrodes.

Last edited by onebuck; 06-20-2025 at 10:35 AM.
 
  

