General This forum is for non-technical general discussion which can include both Linux and non-Linux topics. Have fun!
06-19-2025, 11:44 AM | #1
Guru
Registered: Mar 2004
Location: Canada
Posts: 7,505
The Singularity
AI scientists have mixed opinions on when, or if, the singularity will happen: the point in time when AI surpasses human intelligence. Will AI ever become truly sentient or alive? Can an artificial entity ever be truly alive?
https://www.popularmechanics.com/sci...ty-six-months/
06-19-2025, 12:57 PM | #2
LQ Guru
Registered: Feb 2003
Location: Virginia, USA
Distribution: Debian 12
Posts: 8,398
Alan Turing said
Alan Turing proposed a test for artificial intelligence:
"In 1950 Turing sidestepped the traditional debate concerning the definition of intelligence by introducing a practical test for computer intelligence that is now known simply as the Turing test. The Turing test involves three participants: a computer, a human interrogator, and a human foil. The interrogator attempts to determine, by asking questions of the other two participants, which is the computer. All communication is via keyboard and display screen. The interrogator may ask questions as penetrating and wide-ranging as necessary, and the computer is permitted to do everything possible to force a wrong identification. (For instance, the computer might answer “No” in response to “Are you a computer?” and might follow a request to multiply one large number by another with a long pause and an incorrect answer.) The foil must help the interrogator to make a correct identification. A number of different people play the roles of interrogator and foil, and, if a sufficient proportion of the interrogators are unable to distinguish the computer from the human being, then (according to proponents of Turing’s test) the computer is considered an intelligent, thinking entity."
https://www.britannica.com/science/h...l-intelligence
No computer has ever passed this test.
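The three-party protocol described in that excerpt can be sketched in a few lines. This is purely a toy illustration of the setup (interrogator, computer, human foil, anonymised text channel); both respondent functions are made-up stand-ins, not anyone's actual chatbot:

```python
import random

# Toy illustration of the three-party Turing test protocol.
# In a real test, both respondents would be live participants at keyboards.

def computer_respondent(question):
    # The computer may do everything possible to force a wrong
    # identification, e.g. flatly denying what it is.
    if "computer" in question.lower():
        return "No"
    return "I'm not sure, let me think about that."

def human_foil(question):
    # The foil must help the interrogator make a correct identification.
    if "computer" in question.lower():
        return "No, I'm the human - the other one is the machine."
    return "Honestly, I'd have to think about that."

def run_round(questions, respondents):
    """Ask each question of both anonymised respondents; return transcripts."""
    random.shuffle(respondents)  # interrogator doesn't know which is which
    return {label: [(q, respond(q)) for q in questions]
            for label, respond in zip(["A", "B"], respondents)}

questions = ["Are you a computer?", "What is 7 times 8?"]
transcript = run_round(questions, [computer_respondent, human_foil])
for label in sorted(transcript):
    for q, a in transcript[label]:
        print(f"{label}: {q!r} -> {a!r}")
```

The interrogator then reads the two transcripts and guesses which label hides the machine; the test is "passed" only if a sufficient proportion of interrogators guess wrong over many rounds.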
06-19-2025, 01:24 PM | #3
Member
Registered: Nov 2005
Location: Land of Linux :: Finland
Distribution: Arch Linux
Posts: 839
I hope I live long enough to see AI that is able to rewrite itself better and better and, if possible, become sentient.
I have roughly 30 years to see that - or not.
06-19-2025, 01:30 PM | #4
Senior Member
Registered: Jul 2018
Location: Silicon Valley
Distribution: Bodhi Linux
Posts: 1,742
I personally welcome our AI overlords.
06-19-2025, 03:33 PM | #5
Senior Member
Registered: Nov 2005
Distribution: Debian, Arch
Posts: 3,836
I don't see any reason why machines can't become more intelligent than humans. I don't know if it will happen in 6 months, 6 years, 60 years, or maybe never (e.g., civilization collapses due to WWIII, halting all tech progress). I also don't know what "truly" sentient means. If you did have an AI that was truly sentient, how could you tell?
I do notice that current LLMs are capable of answering just about all forum-post-sized programming questions (yes, they sometimes answer wrongly, but so do humans).
06-19-2025, 06:41 PM | #6
Guru
Registered: Mar 2004
Location: Canada
Posts: 7,505
Original Poster
Quote:
Originally Posted by ntubski
If you did have an AI that was truly sentient, how could you tell?
A machine that is self aware. It can laugh, cry, dream, and wonder about its place in the Cosmos. It can feel wonder, joy, and a sense of mystery when it looks at a butterfly coming out of a chrysalis. It has emotional intelligence.
06-19-2025, 08:02 PM | #7
Senior Member
Registered: Nov 2005
Distribution: Debian, Arch
Posts: 3,836
Quote:
Originally Posted by hitest
A machine that is self aware. It can laugh, cry, dream, and wonder about its place in the Cosmos. It can feel wonder, joy, and a sense of mystery when it looks at a butterfly coming out of a chrysalis. It has emotional intelligence.
Laughing and crying are physical actions; perhaps a machine could do those if equipped with appropriate hardware. For all the others, I repeat my question: how could you tell?
06-19-2025, 08:16 PM | #8
Guru
Registered: Mar 2004
Location: Canada
Posts: 7,505
Original Poster
|
Quote:
Originally Posted by ntubski
For all the others, I repeat my question: how could you tell?
It writes poetry that is deep and spiritual. It composes haunting music that isn't terrible. I guess you'd need to befriend it and interview it extensively. Is such a being the same as a human, apart from being based on silicon?
06-19-2025, 08:23 PM | #9
LQ Guru
Registered: Mar 2004
Distribution: Slackware
Posts: 6,858
Quote:
Originally Posted by hitest
A machine that is self aware. It can laugh, cry, dream, and wonder about its place in the Cosmos. It can feel wonder, joy, and a sense of mystery when it looks at a butterfly coming out of a chrysalis. It has emotional intelligence.
Why should the AI feel like a human? Can't it be free to identify as itself?
06-19-2025, 09:13 PM | #10
Guru
Registered: Mar 2004
Location: Canada
Posts: 7,505
Original Poster
|
Quote:
Originally Posted by keefaz
Why should the AI feel like a human? Can't it be free to identify as itself?
Haha! Well-said. I was trying to argue that human-like emotions could help to establish sentience. But, sure. Artificial sentience could have characteristics that may be completely alien to us. That would be even harder to define. Maybe we will never know when *they* start plotting against us. 
06-19-2025, 09:32 PM | #11
Member
Registered: May 2024
Posts: 223
|
Yes, AI will become truly sentient.
As for AI becoming alive, well, it depends on our definition of alive.
Can it be truly alive? Yes, it can be, though I personally do not believe so. That will not stop many of us from treating it as alive, though.
Having said that, we are far, far from the singularity. Current AI, LLMs and others, will surpass human capabilities in one or two aspects, but not in all. Calculators and computers can already do basic arithmetic better than us. The same will happen with LLMs too. But LLMs are not general AI. That will take time.
AI will have no conception of right or wrong apart from what we tell it.
06-19-2025, 09:45 PM | #12
LQ Guru
Registered: Apr 2010
Location: Continental USA
Distribution: Debian, Ubuntu, RedHat, DSL, Puppy, CentOS, Knoppix, Mint-DE, Sparky, VSIDO, tinycore, Q4OS, Manjaro
Posts: 6,342
Long after we pass the point of singularity (we are not even CLOSE yet) there will be no more danger than there is now. It is not intelligence that is the problem, or geniuses would rule the world.
The first danger point will be when a very intelligent machine becomes "self-aware".
The second is when it becomes selfish!
If we live past those, the final and end point is when it becomes afraid.
Last edited by wpeckham; 06-19-2025 at 09:46 PM.
06-19-2025, 09:52 PM | #13
LQ Guru
Registered: Jan 2006
Location: Virginia, USA
Distribution: Slackware, Ubuntu MATE, Mageia, and whatever VMs I happen to be playing with
Posts: 20,015
Methinks this article is relevant. An excerpt:
Quote:
- Using LLMs to assist in writing can negatively impact neural and behavioral measures.
- When supported by an LLM, memory is negatively impacted, while neural connectivity is diminished.
In other words, using AI can lead to brain rot.
Which may indeed lead to a singularity of sorts, and we are walking blithely and willingly into it.
Last edited by frankbell; 06-19-2025 at 09:54 PM.
06-19-2025, 10:10 PM | #14
Senior Member
Registered: Feb 2011
Location: Massachusetts, USA
Distribution: Fedora
|
There are companies today that have software which calculates lead times, does planning, and automatically orders materials. I have ordered a circuit-board build and assembly entirely online without speaking with anyone (PCBWay). What happens when systems start ordering copies of themselves? I think I could get AI to write a program to do that now. Maybe the singularity is already here?
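The self-ordering loop imagined above fits in a few lines. This is a toy sketch only: the "ordering API" is a made-up stand-in (a list), not PCBWay's actual interface, and every name here is hypothetical:

```python
# Toy sketch of a "system that orders copies of itself".
# ORDER_LOG stands in for a board house's ordering backend; no real
# ordering API is assumed or used here.

ORDER_LOG = []

def place_order(bill_of_materials):
    """Pretend to submit a build-and-assembly order; return an order id."""
    ORDER_LOG.append(bill_of_materials)
    return len(ORDER_LOG)

def self_replicating_controller(bill_of_materials, inventory, reorder_point=1):
    """Order copies of this unit until stock rises above the reorder point."""
    orders = []
    while inventory <= reorder_point:
        orders.append(place_order(bill_of_materials))
        inventory += 1  # assume each order eventually yields one working unit
    return orders

# One controller notices it is the last unit in stock and orders two more.
bom = {"board": "controller-rev-a", "assembly": True}
new_orders = self_replicating_controller(bom, inventory=0, reorder_point=1)
print(new_orders)  # -> [1, 2]
```

The unnerving part is not the loop itself but that nothing in it requires a human in the approval path once ordering is fully automated.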
06-20-2025, 10:34 AM | #15
Moderator
Registered: Jan 2005
Location: Central Florida 20 minutes from Disney World
Distribution: Slackware®
Posts: 13,995
Member Response
Hi,
I believe the development of biocell computers will eventually surpass digital, AI-driven computer systems. Several companies have developed biocell/biological systems that are able to equal and/or surpass digital systems at certain tasks.
The big fault of the biocell approach is the lifespan of the cells. Eventually you have to replace the brain of the hardware, or, with some companies' biocell computers, replace the whole unit.
This company has developed a biocell computer for $35K:
Quote:
Another site: https://newatlas.com/computers/final...anoids/
Current AI training methods burn colossal amounts of energy to learn, but the human brain sips just 20 W. Swiss startup FinalSpark is now selling access to cyborg biocomputers, running up to four living human brain organoids wired into silicon chips.
The human brain communicates within itself and with the rest of the body mainly through electrical signals; sights, sounds and sensations are all converted into electrical pulses before our brains can perceive them. This makes brain tissue highly compatible with silicon chips, at least for as long as you can keep it alive.
For FinalSpark's Neuroplatform, brain organoids comprising about 10,000 living neurons are grown from stem cells. These little balls, about 0.5 mm (0.02 in) in diameter, are kept in incubators at around body temperature, supplied with water and nutrients and protected from bacterial or viral contamination, and they're wired into an electrical circuit with a series of tiny electrodes.
Last edited by onebuck; 06-20-2025 at 10:35 AM.