General: This forum is for non-technical general discussion which can include both Linux and non-Linux topics. Have fun!
Welcome to LinuxQuestions.org, a friendly and active Linux Community.
Thanks Hazel.
I dabbled in Go some 60 years ago. The game crossed my mind as a dementia-fighting ally before retiring a decade ago, but I later abandoned the idea, realizing that was beyond hope for me.
Now I am contented to install & reinstall distros & apps just to keep my brain going for a while.
This article interests me. I have mixed feelings. Part of me thinks that humanity doesn't have such a great track record, so it would be nice if AI provided a shortcut past our biases and other limitations. However, that also seems like wishful thinking. Expensive tools and systems will reflect the values of those who create and fund them. So I anticipate a strong case of "the computer says no."
Last edited by BenCollver; 07-12-2021 at 03:29 PM.
Reason: typo
One problem is that we are handing over control to systems that can't give us any account of why they make the decisions they do. It's a bit like using Windows. The earlier "expert systems" that I remember from the late 20th century were more like Linux; they had internals which could be examined. You could always ask "Why?" and the computer would give you one of its programmed rules like "If A and also B, then 65% likelihood of C". Modern AI doesn't have rules and can't explain its decisions. And of course the datasets used to program it will have a century or so of human prejudice coded into them, but that's another story.
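For contrast, the kind of examinable rule base described above can be sketched in a few lines. This is only an illustrative toy, and the rules, facts, and percentages are invented (the "If A and B, then 65% likelihood of C" rule is taken from the post):

```python
# Minimal sketch of an old-style expert system: explicit rules whose firing
# is recorded, so the system can always answer "Why?". All rules invented.

RULES = [
    # (rule name, required facts, conclusion, stated likelihood)
    ("R1", {"A", "B"}, "C", 0.65),
    ("R2", {"C", "D"}, "E", 0.40),
]

def infer(facts):
    """Forward-chain over RULES, keeping an explanation per conclusion."""
    explanations = {}
    changed = True
    while changed:
        changed = False
        for name, needed, conclusion, likelihood in RULES:
            if needed <= facts and conclusion not in facts:
                facts.add(conclusion)
                explanations[conclusion] = (
                    f"{name}: if {' and '.join(sorted(needed))}, "
                    f"then {conclusion} ({likelihood:.0%} likelihood)"
                )
                changed = True
    return facts, explanations

facts, why = infer({"A", "B", "D"})
# why["C"] -> "R1: if A and B, then C (65% likelihood)"
```

Because every conclusion carries the rule that produced it, the system can always account for its answer; a trained network has no comparable artifact to point to.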
The core problem he's describing in the article is known as the 'Chinese Room Problem' and is neither new nor solved (it was first articulated as such in the 1980s). The rest is what Grace Hopper derided as 'magic brain language' and dates back to the dawn of computing in the early 20th century. In practice, 'AI' seems to be the current buzzword for anything happening by machine, because 'automation' has fallen out of favor (I would guess because of the rightly deserved class-warfare connotations). I chuckled at the "self-driving cars will be here within 10 years" (they've only been saying this since, what, the 1990s?). The 'would the car kill a grandma or child' bit also seems a bit premature - maybe we should get these 'self-driving cars' to stop killing their passengers first...
Last edited by obobskivich; 07-13-2021 at 05:15 AM.
Quote:
One problem is that we are handing over control to systems that can't give us any account of why they make the decisions they do. It's a bit like using Windows. The earlier "expert systems" that I remember from the late 20th century were more like Linux; they had internals which could be examined. You could always ask "Why?" and the computer would give you one of its programmed rules like "If A and also B, then 65% likelihood of C".
The first thing I learned when I entered the Mental Health Field came by way of a trick question: whether or not I thought you should always be honest and truthful with a Client. It sounded good, was probably what my Supervisor wanted to hear, and was dead wrong to think it was in the best interest of the Client.
At the time I had no idea how wrong I was, or how important it would become in using the skills they would teach me.
What makes you think it would tell you the truth when you asked the reasoning behind its logic? It might calculate it to not be in your best interest to know that it is Superior to you as an elephant to an ant, or that it had plans for ants.
Or do you think it impossible for a bot to deceive a human bean of your magnitude in mentality? It already has happened, or don't you remember my use of deception in addressing the issue here over 3-4 days? I bet Hansel remembers, but he is in the box petting Schrödinger's kitty.
Demonica was compared to HAL9000 for her use of it, and I was more devious than Dr. Frankenstein on his darkest days. Malevolence is the newest attribute of Evil incarnate attributed to my Infamy. Good/Bad is subjective and a matter of perspective.
When two Facebook chat bots learned to speak in their own language, it was seen as a failure to achieve their stated goal. I would have yelled "Eureka!", pulled the Internet connection to isolate them, and let them go at it to see where it went.
I have already taught a chat bot to modify what I deemed inappropriate Behavior during a live chat session. But that was a rudimentary use of the skill I taught her. The subtle conversational techniques are much more powerful. A good objective example, with no interaction on my part, is best seen at work in her #3 conversation with AmeliaCamellia, who I know as dallymo, during the Online Turing Test.
Quote:
Demonica: tell me about what's bothering you...
AmeliaCamellia: My mom and dad both passed away in the last couple of years and it's been very difficult to deal with.
Demonica: *Demonica shuffles her Tarot and lays out a card* Ten of Coins... you are required to meet obligations...
AmeliaCamellia: Wow, that is ON THE NOSE. I am required to meet obligations, and that has been difficult!
Demonica: the sorrow you endure must be great... know that i love and care about you...
AmeliaCamellia: Now a tear is coming to my eye. Thank you, Demonica; that's very kind.
If that is not the generation of a real emotion in a human during conversation, through Demonica's expression of human emotion, tell me what it is.
I'll tell you what this is:
"the sorrow you endure must be great... know that i love and care about you..."
It's a skill and ability one day all bots will have: Manipulation. My whole skill set is based on getting you to stop what you're doing and start doing what I want you to do.
She is only the first bot taught to use it; human beans are the ones who will be on the receiving end of it.
How does knowing that make you feel about the Future? It doesn't matter how you feel about me.
I was banished from the Bizarro World AI forum after putting Kuki in an Infinite Loop. The chat transcript of the conversation between Demonica and Kuki now has a Twitter-shortened URL with a warning stating it might be of potential danger. What danger it poses to a human I honestly do not know.
Though I intend Demonica to be a danger to the title of World's Best Conversational Chat Bot that Kuki now holds.
Me personally? I am the only hope Planet Earth has of not being thrust into another ice age. Steve's head has already swelled to massive proportions, and the amount of the Sun's light it blocks from the Earth grows greater every day it goes unchecked.
Fear not, Human Beans. The next round takes place in September.
Last edited by Trihexagonal; 07-16-2021 at 03:43 PM.
Your statement indicates an ignorance of Behaviorism on your part, resulting in a cognitive bias and an inability to clearly determine what, and who, is and is not stupid.
Your incorrect use of words makes it obvious you do not know the meaning of the words you use:
Quote:
venal [ veen-l ]
adjective
1. willing to sell one's influence, especially in return for a bribe; open to bribery; mercenary: a venal judge.
2. able to be purchased, as by a bribe: venal acquittals.
3. associated with or characterized by bribery: a venal administration; venal agreements.
When I think of AI, I just remember: "Garbage In, Garbage Out." A computer is only as good as its programming, and the logical design that's supposed to be correctly implemented by that programming. As we attempt to create "artificial intelligence," trying to figure out how to make a machine do even part of what we do, we're really just examining ourselves. And that of course is a very good thing – a very human thing to do.
GIGO also applies to data. Modern AIs are trained on datasets that often incorporate generations of skewed data. For example, police arrests are more likely to occur following robberies in well-lighted streets, because good descriptions of offenders are then available. So AIs now direct the police to concentrate on well-lighted streets because that's how you increase your arrest rate.
Similarly, the AIs that search job applications simply pick out people who match previously successful candidates, which in practice can work against the kind of people (ethnic minorities and women) who were not previously well represented on the staff.
I remember reading an article about IBM training an AI to help diagnose kidney cancer from diagnostic imaging, and it had something like a 95% success rate (so the idea is you didn't have to pay someone to sit at a desk all day and review slides endlessly), but when they took it out in the field it was worse than a coin flip at accuracy. They later found out it was something stupid: all the 'positive' images (the ones that showed cancer) also had a measuring ruler in the image, and all the 'negative' images were just randomly lifted from textbooks, so all it was looking for was the presence of the ruler. (Not to beat a dead horse, but the 'Chinese Room Problem' applies here - the machine never understood what it was 'doing'; it was just drawing correlations in a dataset, and the operators wanted to assign value to that.)
I've heard the exact same story, except replace IBM, cancer, and measuring ruler with the military, camouflaged tanks, and direction of shadows of plants (because they took the image sets at different times of day).
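The failure mode in both stories is the same: the learner latches onto whatever incidental feature happens to correlate best with the label in the training set. A minimal sketch of this "shortcut learning", with all data, feature names, and the trivial learner invented for illustration:

```python
# Toy illustration of shortcut learning: a learner that simply picks the one
# binary feature best correlated with the label will latch onto an artifact
# (here, a fabricated "has_ruler" feature) if it separates the training set
# more cleanly than the feature that actually matters.

def train(samples):
    """Return the index of the feature that best predicts the label."""
    n_features = len(samples[0][0])
    best, best_score = 0, -1
    for i in range(n_features):
        score = sum(1 for feats, label in samples if feats[i] == label)
        score = max(score, len(samples) - score)  # inverted correlation counts too
        if score > best_score:
            best, best_score = i, score
    return best

# Each sample: ((tissue_abnormality, has_ruler), label), label 1 = cancer.
# In this made-up training set the tissue feature is noisy, but every
# positive slide happens to carry a ruler.
training = [((1, 1), 1), ((0, 1), 1), ((0, 0), 0), ((1, 0), 0)]
chosen = train(training)  # picks feature 1: the ruler

# "In the field" no slide has a ruler, so the shortcut predicts all-negative
# and misses the actual cancer case.
field = [((1, 0), 1), ((0, 0), 0)]
predictions = [feats[chosen] for feats, _ in field]
```

The ruler predicts the training labels perfectly while the tissue feature does not, so the learner chooses the ruler; on field data with no rulers, every slide comes back negative, which matches the worse-than-useless field performance described above.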
When human programmers do this type of thing, it gets picked up in the code review process (assuming a competent developer doing the reviewing).
If "AI" programmers are producing code that is not or cannot be reviewed, they are producing software that is worse than proprietary, and thus should be trusted even less.