OpenAI, an AI research institute cofounded by Elon Musk and Sam Altman, built an AI text generator that its creators worry is dangerous.
Jack Clark, policy director at OpenAI, says examples like this show how such technology might shake up the processes behind online disinformation or trolling, some of which already rely on some form of automation. “As costs of producing text fall, we may see behaviors of bad actors alter,” he says.
Based on the examples, I think it’s safe to say this AI would pass the Turing Test.
Check It Out: This AI Tool Scares the Crap Out of Elon Musk
Scary for a couple of reasons.
The propaganda angle, of course.
Plus, as the article suggests, combining a more advanced version of this with Deepfakes technology would mean that nothing you see on the web could be trusted.
Add to this the use of such technology for writing novels, screenplays, and other “creative” output. We used to assume that only a person could create. But between these AIs and the low standards for much of what the public consumes, human Writers might be another endangered species. They would go the way of the human Computers of a few decades or a century ago.
This IS the beginning of the Robot Apocalypse. Not with violence, but with AIs simply doing more and more jobs better than the people who do them now.