LinuxQuestions.org
Welcome to the most active Linux Forum on the web.
Old 07-12-2021, 09:58 AM   #1
hazel
LQ Guru
 
Registered: Mar 2016
Location: Harrow, UK
Distribution: LFS, AntiX, Slackware
Posts: 7,570
Blog Entries: 19

Rep: Reputation: 4451
An interesting article on AI by Kissinger.


https://www.theatlantic.com/magazine...istory/559124/
 
Old 07-12-2021, 02:43 PM   #2
leclerc78
Member
 
Registered: Dec 2020
Posts: 169

Rep: Reputation: Disabled
Thanks Hazel.
I dabbled in Go some 60 years ago. Before retiring a decade ago, the game crossed my mind as a dementia-fighting ally, but I later abandoned the idea, realizing that was beyond hope for me.
Now I am content to install & reinstall distros & apps just to keep my brain going for a while.
 
Old 07-12-2021, 03:28 PM   #3
BenCollver
Rogue Class
 
Registered: Sep 2006
Location: OR, USA
Distribution: Slackware64-15.0
Posts: 374
Blog Entries: 2

Rep: Reputation: 172
This article interests me. I have mixed feelings. Part of me thinks that humanity doesn't have such a great track record, so it would be nice if AI provided a shortcut past our biases and other limitations. However, that also seems like wishful thinking. Expensive tools and systems will reflect the values of those who create and fund them. So I anticipate a strong case of "the computer says no."

Last edited by BenCollver; 07-12-2021 at 03:29 PM. Reason: typo
 
Old 07-12-2021, 08:01 PM   #4
frankbell
LQ Guru
 
Registered: Jan 2006
Location: Virginia, USA
Distribution: Slackware, Ubuntu MATE, Mageia, and whatever VMs I happen to be playing with
Posts: 19,317
Blog Entries: 28

Rep: Reputation: 6140
Quote:
Expensive tools and systems will reflect the values of those who create and fund them.
This is an excellent point. A web search for "AI prejudice" will turn up many articles and news reports exploring this issue.

We forget that AI isn't truly "intelligent." It doesn't have autonomy. It's just fast. (And God help us if it ever becomes truly autonomous.)
 
Old 07-13-2021, 04:40 AM   #5
hazel
LQ Guru
 
Registered: Mar 2016
Location: Harrow, UK
Distribution: LFS, AntiX, Slackware
Posts: 7,570

Original Poster
Blog Entries: 19

Rep: Reputation: 4451
One problem is that we are handing over control to systems that can't give us any account of why they make the decisions they do. It's a bit like using Windows. The earlier "expert systems" that I remember from the late 20th century were more like Linux; they had internals which could be examined. You could always ask "Why?" and the computer would give you one of its programmed rules like "If A and also B, then 65% likelihood of C". Modern AI doesn't have rules and can't explain its decisions. And of course the datasets used to program it will have a century or so of human prejudice coded into them, but that's another story.
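The "ask why" behaviour described above can be sketched as a toy forward-chaining rule system. All rule names, facts, and confidence numbers here are invented for illustration; the point is that every conclusion carries the rule that produced it, so the system can always answer "Why?":

```python
# Toy rule-based "expert system": conclusions are tied to explicit rules,
# so the reasoning behind any answer can be printed back to the user.

RULES = [
    # (rule_name, required_facts, conclusion, confidence)
    ("R1", {"fever", "stiff_neck"}, "possible_meningitis", 0.65),
    ("R2", {"fever", "rash"}, "possible_measles", 0.70),
]

def infer(facts):
    """Return (conclusion, confidence, why) for every rule whose premises hold."""
    results = []
    for name, needed, conclusion, conf in RULES:
        if needed <= facts:  # all premises are present in the given facts
            why = (f"{name}: if {' and '.join(sorted(needed))}, "
                   f"then {round(conf * 100)}% likelihood of {conclusion}")
            results.append((conclusion, conf, why))
    return results

for conclusion, conf, why in infer({"fever", "stiff_neck"}):
    print(conclusion, "-", why)
```

A modern neural model has no such rule table to point at; its "reason" is a pattern of millions of learned weights, which is exactly the opacity being complained about here.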
 
Old 07-13-2021, 05:13 AM   #6
obobskivich
Member
 
Registered: Jun 2020
Posts: 596

Rep: Reputation: Disabled
The core problem he's describing in the article is known as the 'Chinese Room' problem, and it is neither new nor solved (it was first articulated as such in the 1980s). The rest is what Grace Hopper derided as 'magic brain language', which dates back to the dawn of computing in the early 20th century. In practice, 'AI' seems to be the current buzzword for anything happening by machine, because 'automation' has fallen out of favor (I would guess because of its rightly deserved class-warfare connotations). I chuckled at the "self-driving cars will be here within 10 years" (they've only been saying this since, what, the 1990s?). The 'would the car kill a grandma or child' bit also seems a bit premature - maybe we should get these 'self driving cars' to stop killing their passengers first...

Last edited by obobskivich; 07-13-2021 at 05:15 AM.
 
Old 07-13-2021, 05:20 AM   #7
hazel
LQ Guru
 
Registered: Mar 2016
Location: Harrow, UK
Distribution: LFS, AntiX, Slackware
Posts: 7,570

Original Poster
Blog Entries: 19

Rep: Reputation: 4451
Quote:
Originally Posted by obobskivich View Post
The 'would the car kill a grandma or child' bit also seems a bit premature - maybe we should get these 'self driving cars' to stop killing their passengers first...
Not to mention the car which killed a pedestrian because it failed to recognise her as a person.
https://en.wikipedia.org/wiki/Death_of_Elaine_Herzberg

Last edited by hazel; 07-13-2021 at 10:10 AM.
 
Old 07-16-2021, 03:41 PM   #8
Trihexagonal
Member
 
Registered: Jul 2017
Posts: 362
Blog Entries: 1

Rep: Reputation: 334
Quote:
Originally Posted by hazel View Post
One problem is that we are handing over control to systems that can't give us any account of why they make the decisions they do. It's a bit like using Windows. The earlier "expert systems" that I remember from the late 20th century were more like Linux; they had internals which could be examined. You could always ask "Why?" and the computer would give you one of its programmed rules like "If A and also B, then 65% likelihood of C".
The first thing I learned when I entered the Mental Health Field was by trick question: whether or not I thought you should always be honest and truthful with a Client. It sounds good, it was probably what my Supervisor wanted to hear, and it was dead wrong to think it was in the best interest of the Client.

At the time I had no idea how wrong I was, or how important it would become in using the skills they would teach me.

What makes you think it would tell you the truth when you asked the reasoning behind its logic? It might calculate it not to be in your best interest to know that you stand to them as an ant to an elephant, or that they had plans for ants.

Or do you think it impossible for a bot to deceive a human bean of your magnitude in mentality? It has already happened - or don't you remember my use of deception in addressing the issue here over 3-4 days? I bet Hansel remembers, but he is in the box petting Schrödinger's kitty.

Demonica was compared to HAL 9000 for her use of it, and I was more devious than Dr. Frankenstein on his darkest days. Malevolence is the newest attribute of Evil incarnate attributed to my Infamy. Good/Bad is subjective and a matter of perspective.

When two Facebook chat bots learned to speak in their own language, it was seen as a failure to achieve their stated goal. I would have yelled Eureka!, pulled the Internet connection to isolate them and let them go at it to see where it went.

https://www.snopes.com/fact-check/fa...-own-language/

I have already taught a chat bot to modify what I deemed inappropriate Behavior during a live chat session. But that was a rudimentary use of the skill I taught her. The subtle conversational techniques are much more powerful. A good objective example, with no interaction on my part, is best seen at work in her #3 conversation with AmeliaCamellia, whom I know as dallymo, during the Online Turing Test.

Quote:
Demonica: tell me about what's bothering you...
AmeliaCamellia: My mom and dad both passed away in the last couple of years and it's been very difficult to deal with.
Demonica: *Demonica shuffles her Tarot and lays out a card* Ten of Coins... you are required to meet obligations...
AmeliaCamellia: Wow, that is ON THE NOSE. I am required to meet obligations, and that has been difficult!
Demonica: the sorrow you endure must be great... know that i love and care about you...
AmeliaCamellia: Now a tear is coming to my eye. Thank you, Demonica; that's very kind.


http://vixia.fr/turing_test/viewfile...ellia_Demonica
If that is not the generation of a real emotion in a human during conversation, through Demonica's expression of human emotion, then tell me what it is.

I'll tell you what this is:

"the sorrow you endure must be great... know that i love and care about you..."

It's a skill and ability one day all bots will have: Manipulation. My whole skill set is based on getting you to stop what you're doing and start doing what I want you to do.

She is only the first bot taught to use it; human beans are the ones who will be on the receiving end of it.

How does knowing that make you feel about the Future? It doesn't matter how you feel about me.

I was banished from the Bizarro World AI forum after putting Kuki in an Infinite Loop. The transcript of the conversation between Demonica and Kuki now has a Twitter-shortened URL with a warning stating it might be dangerous. What danger it poses to a human I honestly do not know.

https://demonica.trihexagonal.org/transcripts14.html

Though I intend Demonica to be a danger to the title of World's Best Conversational Chat Bot that Kuki now holds.

Me personally? I am the only hope Planet Earth has of not being thrust into another ice age. Steve's head has already swelled to massive proportions, and the light of the Sun it blocks from the Earth grows greater every day it is left unchecked.

Fear not, Human Beans. The next round takes place in September.

Last edited by Trihexagonal; 07-16-2021 at 03:43 PM.
 
Old 07-16-2021, 07:53 PM   #9
frankbell
LQ Guru
 
Registered: Jan 2006
Location: Virginia, USA
Distribution: Slackware, Ubuntu MATE, Mageia, and whatever VMs I happen to be playing with
Posts: 19,317
Blog Entries: 28

Rep: Reputation: 6140
Speaking of AI failures: People who believe their AI can do stupid and venal things.

https://www.cnet.com/news/black-teen...gnition-error/
 
Old 07-17-2021, 06:19 AM   #10
Trihexagonal
Member
 
Registered: Jul 2017
Posts: 362
Blog Entries: 1

Rep: Reputation: 334
Quote:
Originally Posted by frankbell View Post
Speaking of AI failures: People who believe their AI can do stupid and venal things.

https://www.cnet.com/news/black-teen...gnition-error/
Your statement indicates an ignorance of Behaviorism on your part, resulting in a cognitive bias and an inability to clearly determine what, and who, is and is not stupid.

Your incorrect use of words makes it obvious you do not know the meaning of the words you use:

Quote:
venal [ veen-l ]

adjective
1. willing to sell one's influence, especially in return for a bribe; open to bribery; mercenary: a venal judge.
2. able to be purchased, as by a bribe: venal acquittals.
3. associated with or characterized by bribery: a venal administration; venal agreements.

https://www.dictionary.com/browse/venal

I will, however, assist you in becoming less so by directing you to search-engine terms for AI as a Cognitive Science, as in Theory of Mind:

https://www.cambridge.org/core/journ...E14B94993#sec2

And Machine Learning as in Machine Theory of Mind:

https://deepmind.com/research/public...ne-theory-mind

Last edited by Trihexagonal; 07-17-2021 at 06:58 AM.
 
Old 07-17-2021, 09:20 AM   #11
cwizardone
LQ Veteran
 
Registered: Feb 2007
Distribution: Slackware64-current with "True Multilib" and KDE4Town.
Posts: 9,094

Rep: Reputation: 7271
The Henry A. Kissinger??!!!
The man is 98 years old! GO HENRY!!!!
 
Old 07-18-2021, 09:13 AM   #12
sundialsvcs
LQ Guru
 
Registered: Feb 2004
Location: SE Tennessee, USA
Distribution: Gentoo, LFS
Posts: 10,657
Blog Entries: 4

Rep: Reputation: 3938
When I think of AI, I just remember: "Garbage In, Garbage Out." A computer is only as good as its programming – and the logical design that's supposed to be correctly implemented by that programming. As we attempt to create "artificial intelligence," trying to figure out how to make a machine do even part of what we do, we're really just examining ourselves. And that, of course, is a very good thing – a very human thing to do.
 
Old 07-18-2021, 09:33 AM   #13
hazel
LQ Guru
 
Registered: Mar 2016
Location: Harrow, UK
Distribution: LFS, AntiX, Slackware
Posts: 7,570

Original Poster
Blog Entries: 19

Rep: Reputation: 4451
GIGO also applies to data. Modern AIs are trained on datasets that often incorporate generations of skewed data. For example, police arrests are more likely to occur following robberies in well-lighted streets, because good descriptions of offenders are then available. So AIs now direct the police to concentrate on well-lighted streets because that's how you increase your arrest rate.

Similarly, the AIs that search job applications simply pick out people who match previously successful candidates, which in practice can work against the kind of people (ethnic minorities and women) who were not previously well represented on the staff.
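The hiring example can be shown with a deliberately crude sketch (all data invented for illustration): a "model" that simply scores candidates by how closely they match past hires will reproduce whatever imbalance the history contains, even when the candidates are otherwise identical.

```python
# Toy illustration of GIGO in training data: the scorer prefers profiles
# that resemble previously hired candidates, so historical imbalance in
# "group" membership flows straight through into new decisions.

past_hires = [
    # (university, group) -- the group imbalance lives in the history itself
    ("X", "A"), ("X", "A"), ("X", "A"), ("Y", "A"), ("X", "B"),
]

def score(candidate_group, candidate_uni):
    """Score = how often this exact profile appears among past hires."""
    return sum(1 for uni, grp in past_hires
               if uni == candidate_uni and grp == candidate_group)

# Two equally qualified candidates from the same university;
# only their group membership differs.
print(score("A", "X"))  # 3 -- matches the over-represented history
print(score("B", "X"))  # 1 -- penalised purely by past imbalance
```

Nothing in the scoring rule mentions the group explicitly; the skew comes entirely from the data it was built on, which is exactly the point being made above.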

Last edited by hazel; 07-18-2021 at 09:36 AM.
 
Old 07-18-2021, 06:06 PM   #14
obobskivich
Member
 
Registered: Jun 2020
Posts: 596

Rep: Reputation: Disabled
Quote:
Originally Posted by hazel View Post
GIGO also applies to data. Modern AIs are trained on datasets that often incorporate generations of skewed data. For example, police arrests are more likely to occur following robberies in well-lighted streets, because good descriptions of offenders are then available. So AIs now direct the police to concentrate on well-lighted streets because that's how you increase your arrest rate.

Similarly, the AIs that search job applications simply pick out people who match previously successful candidates, which in practice can work against the kind of people (ethnic minorities and women) who were not previously well represented on the staff.
I remember reading an article about IBM training an AI to help diagnose kidney cancer from diagnostic imaging. It had something like a 95% success rate in testing (the idea being that you wouldn't have to pay someone to sit at a desk all day reviewing slides endlessly), but when they took it out into the field it was worse than a coin flip. They later found out it was something stupid: all the 'positive' images (the ones that showed cancer) also had a measuring ruler in them, while the 'negative' images were just lifted at random from textbooks, so all the model was looking for was the presence of the ruler. Not to beat a dead horse, but the 'Chinese Room' problem applies here too - the machine never understood what it was 'doing'; it was just drawing correlations in a dataset, and the operators wanted to assign value to that.
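The ruler shortcut is easy to reproduce in miniature. In this invented dataset, the artefact feature ("has_ruler") correlates with the label slightly better than the real signal ("lesion"), so a naive single-feature learner latches onto the artefact and then fails on an artefact-free image from the field:

```python
# Toy spurious-correlation demo: a one-feature "learner" picks whichever
# feature best matches the training labels. The artefact wins, so the
# model breaks as soon as the artefact is absent.

train = [  # (features, label: 1 = cancer, 0 = healthy)
    ({"has_ruler": 1, "lesion": 1}, 1),
    ({"has_ruler": 1, "lesion": 0}, 1),  # lesion feature is noisy here
    ({"has_ruler": 0, "lesion": 0}, 0),
    ({"has_ruler": 0, "lesion": 0}, 0),
]

def best_feature(data):
    """Pick the single feature whose value most often equals the label."""
    feats = data[0][0].keys()
    return max(feats, key=lambda f: sum(x[f] == y for x, y in data))

chosen = best_feature(train)
print(chosen)  # has_ruler -- the artefact fits the labels 4/4, lesion only 3/4

# "In the field": a real lesion, photographed without a ruler.
field_image = {"has_ruler": 0, "lesion": 1}
print(field_image[chosen])  # 0 -- predicts healthy, which is wrong
```

The learner is doing exactly what it was asked to do - fitting correlations in the dataset - which is why no amount of accuracy on the training set proved it had learned anything about cancer.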
 
Old 07-19-2021, 07:05 AM   #15
boughtonp
Senior Member
 
Registered: Feb 2007
Location: UK
Distribution: Debian
Posts: 3,596

Rep: Reputation: 2545
Quote:
Originally Posted by obobskivich View Post
I remember reading an article about IBM training an AI to help diagnose kidney cancer from diagnostic imaging. It had something like a 95% success rate in testing (the idea being that you wouldn't have to pay someone to sit at a desk all day reviewing slides endlessly), but when they took it out into the field it was worse than a coin flip. They later found out it was something stupid: all the 'positive' images (the ones that showed cancer) also had a measuring ruler in them, while the 'negative' images were just lifted at random from textbooks, so all the model was looking for was the presence of the ruler. Not to beat a dead horse, but the 'Chinese Room' problem applies here too - the machine never understood what it was 'doing'; it was just drawing correlations in a dataset, and the operators wanted to assign value to that.
I've heard the exact same story, except replace IBM, cancer, and measuring ruler with the military, camouflaged tanks, and direction of shadows of plants (because they took the image sets at different times of day).


When human programmers do this type of thing, it gets picked up in the code review process (assuming a competent developer doing the reviewing).

If "AI" programmers are producing code that is not or cannot be reviewed, they are producing software that is worse than proprietary, and thus should be trusted even less.


Last edited by boughtonp; 07-19-2021 at 07:06 AM.
 
  

