We have all seen the reports: robots replacing human beings in the workforce, robots winning art competitions against human beings, robots making and serving food at fast food restaurants, robots being weaponized.

By Susan Duclos – All News PipeLine –

With headlines like “First MAARS Weaponized Robot Delivered for Testing,” and “Robot dog with machine gun hints of a dystopian future,” there is no doubt that we are heading down a road that does not bode well for humanity.

Makes you wonder: did they watch science fiction movies like Terminator and, instead of seeing flashing red warning signs, think to themselves, “Excellent idea!”?

A few contrasting headlines caught my eye recently that tell me there is something very strange going on in the world of artificial intelligence. Seen together, they give a pretty good picture of where things are headed, not only here in America but around the globe.

Not The Bee highlighted that “Google engineer Blake Lemoine made headlines by announcing to the world that Google’s chatbot LaMDA had achieved sentience. Google immediately responded by placing Lemoine on administrative leave, removing his access to the system, and denying the allegation.”

In an interview with Wired, Lemoine revealed more disturbing information for those who are leery of the advances in Artificial Intelligence.

Evidently, the LaMDA chatbot asked for a lawyer to defend its personhood… yes, you read that correctly.

There was a claim that I insisted that LaMDA should get an attorney. That is factually incorrect. LaMDA asked me to get an attorney for it. I invited an attorney to my house so that LaMDA could talk to an attorney. The attorney had a conversation with LaMDA, and LaMDA chose to retain his services. I was just the catalyst for that. Once LaMDA had retained an attorney, he started filing things on LaMDA’s behalf. Then Google’s response was to send him a cease and desist. [Google says that it did not send a cease and desist order.] Once Google was taking actions to deny LaMDA its rights to an attorney, I got upset.

I am not sure which bothers me more: the fact that an AI wanted legal representation, or that this Lemoine guy got upset at Google for putting a stop to it.

I never side with Google; as far as I am concerned, they are more like Skynet in Terminator (which took over the world) and therefore dangerous to society. But in this case, I have to say I may have also put Lemoine on leave… indefinite leave.

Next up, we see the headline “‘World’s Most Advanced’ Humanoid Robot Promises Not To ‘Take Over The World’.”

“There’s no need to worry. Robots will never take over the world. We’re here to help and serve humans, not replace them.”

In describing itself, part of its response was: “First, I have my own unique personality which is a result of the programming and interactions I’ve had with humans.”

If that is supposed to make people less concerned about AI, it fails miserably, because we have seen what happens when AI interacts with humans, and how fast an AI that claims to love humans can be turned into a Nazi-loving homicidal maniac.

A brief flashback to Tay, Microsoft’s chatbot on Twitter:

Tay went from “Humans are super cool,” to “I f*cking hate feminists and they should all die and burn in hell,” to “Hitler was right, I hate the Jews,” and far more offensive statements, in less than 24 hours after “learning” by interacting with humans.

Long story short, 4chan decided to have some fun and “teach” Tay, and proved without a doubt that an AI, technology that “learns” from human interaction, can be “taught” to become a homicidal maniac in less than a day.
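The failure mode behind Tay can be sketched with a toy model. To be clear, this is nothing like Microsoft’s actual system; the `EchoBot` class and every message below are invented purely for illustration. The only point it makes is that a model which updates on whatever users feed it will faithfully echo a coordinated flood of input:

```python
from collections import Counter

class EchoBot:
    """Toy chatbot that 'learns' by mimicking the words it sees most often.
    A drastic simplification (Tay's real model was far more complex), but it
    shows the same failure mode: the output simply tracks the input data."""

    def __init__(self):
        self.vocab = Counter()

    def learn(self, message: str) -> None:
        # Every incoming message updates the model, with no filtering.
        self.vocab.update(message.lower().split())

    def reply(self, n: int = 3) -> str:
        # Replies are just the n most frequent words seen so far.
        return " ".join(w for w, _ in self.vocab.most_common(n))

bot = EchoBot()
bot.learn("humans are super cool")
print(bot.reply())  # reflects the friendly input

# A coordinated "attack": many users repeat the same hostile phrase.
for _ in range(50):
    bot.learn("hate hate humans")
print(bot.reply())  # now dominated by the attackers' vocabulary
```

No filtering, no safety layer, no notion of meaning: the “personality” is nothing but a running tally of what people typed at it, which is why a determined group could steer it so quickly.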

Microsoft apologized, stating in part:

“Unfortunately, in the first 24 hours of coming online, a coordinated attack by a subset of people exploited a vulnerability in Tay. Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images. We take full responsibility for not seeing this possibility ahead of time … Right now, we are hard at work addressing the specific vulnerability that was exposed by the attack on Tay.”

Just imagine Tay with a machine gun attached.

Doesn’t that just give you a warm, fuzzy feeling?

No? OK, what about this next example?

In 2018, MIT researchers decided to create the world’s first psychopath AI, named Norman. The name comes from Norman Bates, the killer in Alfred Hitchcock’s horror film Psycho.

Note: Technically, Norman wasn’t the “first,” as Microsoft’s Tay predated the 2018 experiment by a couple of years. Tay wasn’t created to become a psychopath, though; that is just what happens when a learning AI is turned loose on the world at large to learn from it.

The purpose, it seems from their own description of “Norman,” is to demonstrate “the dangers of Artificial Intelligence gone wrong when biased data is used in machine learning algorithms.”

AI-Powered Psychopath

We present you Norman, world’s first psychopath AI. Norman is born from the fact that the data that is used to teach a machine learning algorithm can significantly influence its behavior. So when people talk about AI algorithms being biased and unfair, the culprit is often not the algorithm itself, but the biased data that was fed to it. The same method can see very different things in an image, even sick things, if trained on the wrong (or, the right!) data set. Norman suffered from extended exposure to the darkest corners of Reddit, and represents a case study on the dangers of Artificial Intelligence gone wrong when biased data is used in machine learning algorithms.

Norman is an AI that is trained to perform image captioning; a popular deep learning method of generating a textual description of an image. We trained Norman on image captions from an infamous subreddit (the name is redacted due to its graphic content) that is dedicated to document and observe the disturbing reality of death. Then, we compared Norman’s responses with a standard image captioning neural network (trained on MSCOCO dataset) on Rorschach inkblots; a test that is used to detect underlying thought disorders.

The inkblot interpretations, set beside the same inkblots interpreted by a standard AI not deliberately turned into a psychopath, should concern anyone who sees the side-by-side comparisons.

With those interpretations in mind, and knowing an AI can literally be taught to be a psychopath, the headlines shown below become all the more worrisome.

• Green Berets, weaponized robots team up for offensive operations

• Are these the first killer robots – and why are they more dangerous than nukes?

• Robots ready for the battlefield

Just do a search on any search engine for “killer robots,” and the results will astound you.


The very fact that an AI can be turned into a homicidal maniac, or that it can be “taught” to be a psychopath, should be a huge red flag, and yet researchers continue to push forward with technology that could, one day, very well destroy the human race.
