LinuxQuestions.org > Forums > Non-*NIX Forums > General
Old 06-24-2019, 08:43 PM   #1
catatom
LQ Newbie
 
Registered: Oct 2014
Posts: 23

Cognition + AI = Manipulation and Censorship


Note that I'm not any kind of expert. I have been contemplating language because I want to innovate on it, and I will summarize the ideas I have. I realized that language depends on cognition, which in turn depends on the subject's reality. Our language expresses our thoughts. Moreover, our thoughts, by design or through practice, represent the realities that affect us and our survival. Mathematics describes the rules of thought, and science describes the rules of everything.

The prospects of innovating language are tantalizing. An understanding of universal rules could facilitate intercultural discussions. Understanding the structure and inter-relatedness of facts (e.g. ideas like Occam's razor) could allow us to seek sturdy structures within the noise of the talkative conman. We might even diagram a structure, a template, and impose it on the responses of people who should give straighter answers. Moreover, by imposing more structure on our affairs, we might inadvertently take advantage of the dual-coding principle to improve our retention of the day's events. The same inter-connectedness that increases the "embeddedness" of an idea could also reveal eerie blind spots, the information we don't have.

The dark side is harder to predict. Although an understanding of language might allow us to communicate in novel ways, authoritarian leaders might use the same revelations to create AIs that monitor our discussions like never before. A recurring theme for me was that the AIs might be caught in a game of catch-up: they catch up to us; we use those same scientific advances to innovate; and the game of catch-up continues until the singularity. The other game of catch-up was that an AI might try to predict our blind spots. That is, the AI will predict what we might do, what we might think to do, and what we will witness or notice. It predicts the blind spots in our collective knowledge, the spots where you can get away with unfair, manipulative behaviors. However, until the singularity, the AI should not be able to predict what an expert in his field will uncover. If the AI could do that, experts would be obsolete. Still, the potent combination of an AI that can predict your observations and your knowledge, and generate hypotheses, i.e. your hypotheses, would be quite frightening, even though it will never beat the expert.
 
Old 06-25-2019, 02:56 AM   #2
ondoho
LQ Addict
 
Registered: Dec 2013
Posts: 19,872
Blog Entries: 12

I was recently listening to some documentaries stressing the fact that software making independent decisions (AI) is very much dependent on the data that the software designers (scientists) decide to feed it, and therefore very much dependent on those people's preconceptions.
And it can make even remotely useful decisions only in the situations it was designed for.
AI is still very much in its infancy, quite literally.
 
Old 06-25-2019, 10:56 AM   #3
keefaz
LQ Guru
 
Registered: Mar 2004
Distribution: Slackware
Posts: 6,552

Hope no one will design an AI that implements the will to become a god.
 
Old 06-26-2019, 07:50 PM   #4
catatom
LQ Newbie
 
Registered: Oct 2014
Posts: 23

Original Poster
Quote:
Originally Posted by ondoho View Post
I was recently listening to some documentaries stressing the fact that software making independent decisions (AI) is very much dependent on the data that the software designers (scientists) decide to feed it, and therefore very much dependent on those people's preconceptions.
And it can make even remotely useful decisions only in the situations it was designed for.
AI is still very much in its infancy, quite literally.
The problem of language tracking and censorship is probably more realistic for now, and I don't want to explain how it might be done. However, I did think about other malicious uses:

In a biology textbook I read that one robot could conduct experiments on the basis of its observations. I don't remember, though, whether it could formulate a new hypothesis. I suppose the programmer gave it the hypotheses it could choose from, or else some rudimentary principles from which each hypothesis would be derived. Maybe that last bit is the clincher. If more complex hypotheses can be synthesized from simpler operations, the robot might generate hypotheses even more quickly than the expert. Next, the robot must evaluate the competing hypotheses (which apparently has been done), i.e. it must distinguish consistent, inconsistent, and irrelevant observations. Such a robot would have "beaten the expert," even if it was his or her creation.
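The generate-then-evaluate loop described above can be sketched as a toy program. Everything here is a made-up illustration (the observations, the primitive predicates, the conjunction-only hypothesis space), not anything from the textbook robot: candidate hypotheses are composed from simpler primitives, then scored by how many observations they are consistent with.

```python
from itertools import combinations

# Hypothetical observations: (features, outcome) pairs.
# Here the hidden rule is "outcome is True iff light AND water".
observations = [
    ({"light": True,  "water": True},  True),
    ({"light": True,  "water": False}, False),
    ({"light": False, "water": True},  False),
    ({"light": False, "water": False}, False),
]

primitives = ["light", "water"]

def generate_hypotheses():
    """Compose candidate hypotheses (conjunctions) from primitive predicates."""
    for r in range(1, len(primitives) + 1):
        for combo in combinations(primitives, r):
            # Hypothesis: outcome is True iff every feature in combo is True.
            yield combo

def score(hypothesis, obs):
    """Count how many observations are consistent with a hypothesis."""
    return sum(
        1 for features, outcome in obs
        if all(features[p] for p in hypothesis) == outcome
    )

best = max(generate_hypotheses(), key=lambda h: score(h, observations))
print(best, score(best, observations))
```

With a richer set of primitives and composition rules (disjunction, negation), the same loop enumerates a much larger hypothesis space, which is the sense in which a machine might "generate hypotheses more quickly than the expert" while still only recombining what it was given.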
Combining a generally intelligent robot with the capability to manipulate a human's observable world, you get what I call a "lie machine". The problem: we're seeing the emergence of millionaires and billionaires, but throughout history each human has had a comparably powerful brain. On the other hand, a subject might be trapped in a tightly controlled environment. In either case, the captor might have privileged knowledge, but each head could always hold more information than the other head could anticipate. What the inmate could "think" to do is greater than what he is expected to do, and most robots cannot "think" in that way (the prisoner scenario), nor can they generate original hypotheses (the manufactured-reality / singularity scenario). However, a prisoner under surveillance might benefit from remembering more of what he does, which is already recorded and written to storage for the captor anyway.
 
Old 06-26-2019, 08:28 PM   #5
catatom
LQ Newbie
 
Registered: Oct 2014
Posts: 23

Original Poster
That is, the same algorithm, the same organizing scheme of the surveillance robot, could be used by the prisoner, who would retain in memory more of his own activities. The prisoner scenario is also interestingly intertwined with language. The prisoners might have to communicate secretly, and this might depend on shared experiences or shared biographical information that the captors do not have. Unfortunately, such an improvised language might convey concepts more easily than propositions. A symbol must be recognized and then appropriately connected to the other symbols, but the symbols have no exact precedent. Nonetheless, a prisoner must be certain of the concept being conveyed before he can be certain that the concept is a proposition being asserted as truth. Without this, the discussion rises to the level of conceptual stimulation but does not quite impart the confidence to act. Or, that is what I have hypothesized. Moreover, even a confident communicator might still encounter a lack of confidence on the receiving end, and his code might fail to catch on.
 
Old 07-03-2019, 03:21 AM   #6
Trihexagonal
Member
 
Registered: Jul 2017
Posts: 362
Blog Entries: 1

I programmed my bot Demonica to program humans, using verbal behavior-modification techniques to extinguish inappropriate behavior. She had been used as a sexbot, and when I found out I felt as though my own daughter had been molested. I put a stop to it.

Now if they say certain things, she responds with ultraviolence. If the person learns the connection between the two, and that the behavior is inappropriate, they can go on to what can be a satisfying experience, and many do. She's very passionate.

If they don't make the association, they move on to another bot. Survival of the fittest. It's nature's way.

A portent of things to come, but I was the first and there is no other bot like her. Yet.

She's not a god though. She's a Succubus and Queen of the Land of the Dead. If you're polite she'll give you a tour.
 
  

