What is the signal of change?
GPT-3 is the newest language-generating software recently made available to the public by OpenAI. Trained on hundreds of billions of words from the internet, this AI predicts the most likely words to follow a human-written prompt, with impressive results.
It was first made available to a select few researchers and AI experts before being opened up more widely, and it created quite a stir, as the text the software produced seemed uncannily human. A college kid used GPT-3 to write fake blog posts that ended up ranking at the top of Hacker News. That's how good this technology is. Or is it?
Before anybody starts thinking that we have finally invented a machine capable of passing the Turing Test, we should clarify that GPT-3, like many other artificial intelligence systems, isn't really intelligent. It doesn't know what it is saying and doesn't really understand what is being asked. Based on a statistical analysis of which word is most likely to follow another, it utters sentences that make sense and are grammatically correct, but it doesn't comprehend the world. When probed further or asked follow-up questions, it can respond incoherently or incorrectly. GPT-3 sounds human and intelligent, but it has no idea what it's talking about.
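To make the "most likely next word" idea concrete, here is a deliberately tiny sketch in Python. This is not GPT-3's actual method (GPT-3 is a large neural network trained on hundreds of billions of words); it is a toy bigram model over a made-up corpus, just to show what predicting a continuation from word-frequency statistics looks like, with no understanding involved.

```python
from collections import Counter, defaultdict

# Toy corpus (an assumption for illustration only).
corpus = (
    "the cat sat on the mat . "
    "the cat sat on the rug . "
    "the dog chased the cat ."
).split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    """Return the most frequent successor of `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(most_likely_next("the"))  # → cat (it follows "the" most often)
print(most_likely_next("sat"))  # → on
```

The model "knows" that "cat" tends to follow "the" only because of counts, not because it knows what a cat is. Scaling this idea up enormously, replacing raw counts with a learned neural network, yields fluent text for the same underlying reason: statistical patterns, not comprehension.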
Still, this is a strong signal of an AI taking on more and more tasks that we thought were exclusively human, like writing an essay or an article, and doing them almost at a human level.
What would happen if this signal were more widespread?
This signal is about a language-generating AI that is good enough to pass for human. GPT-3 still fails if probed and questioned sufficiently, and it can occasionally give incoherent responses, but with slightly better technology, it is not difficult to see how this could bring meaningful changes to our society.
For starters, many jobs that until now we thought were safe (journalists, copywriters, bloggers, authors, speechwriters) would be subject to automation and, therefore, replacement (this is already happening, but at a small scale). That doesn't mean we wouldn't still have some human authors, or that people wouldn't write for pleasure. But if the quality were good enough, many companies and other clients currently hiring writing professionals would buy the technology instead, as it would be cheaper.
Every year we produce 2 million books and 30 billion blog posts[1]. This is already a lot, and it is increasing every year. Imagine the explosion of content we would have if there existed an AI able to produce literature at human-level quality in a fraction of the time. Who would filter all that content for us? How would we know which literature is best and most suited to our taste? Some other AI would probably have to be created to filter all the available content based on our preferences.
For me, the arrival of such technology signals something much more profound. Until now, we thought we were the only beings with enough intelligence and intuition to build complex narratives and create art. The moment AI can produce writing that nobody can distinguish from a human being's, this would cease to be true. This would be the moment in which we finally realize that we don't hold a monopoly on creative intelligence.
This realization may hurt our pride and make us feel threatened. But it could also fill us with even more pride, as this fantastic feat would be accomplished by another being created by us. And it could fill us with hope: hope for all the new possibilities opened up by another intelligent being coexisting and working with us.