MIT researchers developed a “psychopathic” AI, just to show how terrible an idea it is.

Yes, you read that title right. MIT has developed a machine learning algorithm named “Norman,” in a nod to the Hitchcock thriller Psycho. Norman’s purpose is to interpret inkblot paintings as one would in a psychologist’s office. So essentially, they taught a computer how to take the inkblot test and then set out to poison its mind.

One of the ways that machine learning AIs are put together involves feeding large amounts of existing data into them and letting them find patterns on their own. For example, if you were trying to develop an autonomous vehicle, you would probably show it millions of pictures of stop signs until it could identify them on its own. The difference in this case is that the source data is not a pile of traffic-sign photos but the contents of a particularly morbid and gruesome subreddit. MIT has redacted the name of the subreddit in an apparent effort to protect the guilty.
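To make the idea concrete, here is a minimal sketch of that kind of supervised learning, shrunk down to toy numbers. The feature names, labels, and the simple nearest-centroid rule are all hypothetical illustrations of the general technique, not MIT’s actual method; the point is just that the same training code learns whatever its data contains, which is why feeding it gruesome data produces a gruesome model.

```python
def train(examples):
    """Average the feature vectors for each label (a nearest-centroid model)."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def predict(model, features):
    """Pick the label whose learned centroid is closest to the new example."""
    def sq_dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(model, key=lambda label: sq_dist(model[label]))

# Hypothetical toy data: [redness, octagon-ness] — stop signs are red octagons.
training_data = [
    ([0.9, 0.8], "stop sign"), ([0.8, 0.9], "stop sign"),
    ([0.1, 0.2], "not a stop sign"), ([0.2, 0.1], "not a stop sign"),
]
model = train(training_data)
print(predict(model, [0.85, 0.75]))  # → stop sign
```

Swap the labels and features for morbid ones and the very same code happily learns those associations instead — there is no understanding anywhere in it, only statistics over whatever it was shown.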

So after the AI was trained, they set it to its purpose: interpreting Rorschach inkblot tests, which it does in the most morbid way possible. I won’t ruin them for you, but you can view the original project here.

Some people have gone a little overboard shooting this down as a terrible idea, a herald of the end times, etc. Really, I think this is pretty innocent as far as projects go. The first thing everyone needs to understand here is that nothing close to sentience is occurring with these machine learning algorithms. In fact, I wish they wouldn’t use the term artificial intelligence in this light, but it gets clicks, so people will continue to use it this way. AI carries a very specific meaning to most people; to truly have something like human intelligence in a computer, we are going to have to model and emulate the human brain within a computer. That is an approach I think is possible, but what is going on here is completely different.

The limited scope of this project is another important safeguard, if you think safeguards are needed here (which I do not). The thing can only comment on inkblot tests. Not a problem, and no chance of it going all Blade Runner-y.

Gizmodo: Your Worst Alexa Nightmares Are Coming True

What’s the most terrifying thing you can imagine an Amazon Echo doing? Think realistically. Would it be something simple but sinister, like an artificially intelligent speaker recording a conversation between you and a loved one and then sending that recording to an acquaintance? That seems pretty bad to me.

Meet “Google AI,” the newly rebranded title of Google Research.

AI reigns supreme today, and any problem that can potentially be solved with an algorithm is, regardless of whether it is appropriate to let a computer decide in the first place. Please see the link I posted earlier today regarding the EFF article “Math can’t solve everything,” which calls attention to situations where computers are now deciding whether child protective services should act on a potentially at-risk child, or whether someone should continue receiving welfare benefits.

Well, Google announced today that they are doubling down on this practice, rolling its research division together with its AI division, signaling to the world that Google will no longer focus on non-AI research — or (as I think is infinitely more likely) that it never did in the first place. (Note: Google has said specifically that non-AI research will continue under the Google AI banner, yet the validity of this statement remains unclear.)

The rebranding process has already been completed across all of Google’s websites and social media channels. The new site can be found here. It does, however, precede the popular Google I/O developer conference, which will occur later today. It is possible that this new division will be showcasing new products and services today, like the machine learning framework TensorFlow. More on that if and when it occurs.