Containing ChatGPT May Be Impossible

Generative AI has spilled into society like an oil slick on an ocean. There is no escaping it. The wind, the waves, and the current will carry it to places we have yet to imagine in terms of its capabilities, uses, and applications. It encroaches on us without much of a plan to contain it and without technology to corral where it goes or what it can do. And for many standing on their virtual shorelines watching it approach, it stirs a bit of fear; we always fear the unknown.

Wading into the Unknown

There are many examples of new technologies whose capabilities evolved quickly. Consider the first cell phones. They were convenient communication tools that replaced landlines. Soon, however, they became our cameras capturing life's moments, our music and video players, and eventually the gateway to social media. What started as a simple phone became our memory locker and, for some, our digital identity. The data on our phones defines us more and more. Nor do we need to look far to find the negative outcomes: cyberbullying, the demise of traditional news sources, and the rise of misinformation and disinformation.

Trustworthy?

The problem with the truth is sometimes knowing where to find it. Can generative AI be trained to tell the truth from the ridiculous? Perhaps, but at the moment it relies mostly on us humans to give it a sense of direction. Generative AI is trained on a broad range of data sources, much of it pulled directly and unfiltered from the internet, and as we have learned, the internet has its fair share of misinformation. Generative AI, like ChatGPT, relies on what it has learned from us. Consider this example:

Your Aunt Beatrix swears that her rhubarb pie is the cure for seasonal allergies and posts that online. Well, ChatGPT reads that too. Get enough Aunt Beatrixes posting the same claim and all of a sudden ChatGPT might start treating the connection between rhubarb and allergies as fact. With all these people making miracle-cure pies, what is a computer to do? Large language models (LLMs) do not fact-check as humans do. No sense of reasonableness is applied to their answers, and some users have made a game of tricking them into making false statements.
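To see why repetition can masquerade as truth, consider a toy sketch, a made-up counting model that is nothing like a real LLM, which simply continues a phrase with whatever words it has seen most often; the training posts and counts below are invented for illustration:

```python
# A toy illustration (not a real LLM) of how sheer frequency in training text
# shapes what a model reproduces: we count word pairs and always pick the most
# common continuation. Note there is no fact-checking step anywhere.
from collections import Counter, defaultdict

# Invented "training data": fifty Aunt Beatrixes versus ten pie lovers.
training_posts = (
    ["rhubarb pie cures seasonal allergies"] * 50
    + ["rhubarb pie tastes great with ice cream"] * 10
)

# Count how often each word follows another across all posts.
bigrams = defaultdict(Counter)
for post in training_posts:
    words = post.split()
    for a, b in zip(words, words[1:]):
        bigrams[a][b] += 1

def continue_phrase(word, length=4):
    """Greedily extend a phrase with the most frequently seen next word."""
    out = [word]
    for _ in range(length):
        if word not in bigrams:
            break
        word = bigrams[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(continue_phrase("rhubarb"))
# Prints "rhubarb pie cures seasonal allergies" -- repetition, not truth, won.
```

Real LLMs are vastly more sophisticated, but the underlying pressure is similar: a claim repeated often enough in the training data becomes the statistically favored thing to say.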

Self-Containment

With new technology, it is not always clear at the outset where it is headed, and as a result industry regulation lags behind. Several authors and media companies have sued OpenAI (maker of ChatGPT) on the premise that their copyrighted works were never intended to be used to train generative models. It is a completely new way of using data, and there are no clear laws guiding it.

Recently, large tech companies have begun some self-regulation, such as agreeing to adopt “reasonable precautions” against AI election interference. It is a good start, although it applies to only a handful of companies, and as we noted, generative AI is being added to hundreds of other applications, from image creation to software coding. Can this technology really be contained by a few, and will the industry be able to self-regulate?

The rush to conquer AI is on, and those with deliberate plans to go slow, patiently considering its implications, will find themselves adrift. And in that eagerness to create the next generative AI capability, we may find the consequences on our shores are more misinformation and a waning ability to discern the truth. It might be naïve to believe this technology can be restricted, monitored, and contained. There are already many open-source competitors, and generative AI can be run without restrictions on other platforms, including local computers, as the sketch below shows. Our life raft, if you will, relies on our ability to be wary of what we see and read, to be skeptical when appropriate, and to apply our own sense of data defense.
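To make that last point concrete, here is a minimal sketch of running an open-source model entirely on a local machine, assuming Python and the Hugging Face transformers library are installed; the model name is just one example of a freely downloadable model, and any compatible open model would work the same way:

```python
# A minimal sketch of running an open-source language model on a local
# computer with the Hugging Face `transformers` library. Once the weights
# are downloaded, nothing here passes through a hosted API or any central
# point of control.
from transformers import pipeline

# "gpt2" is one example of a freely downloadable open model; any compatible
# open model from the Hugging Face hub could be substituted.
generator = pipeline("text-generation", model="gpt2")

prompt = "Generative AI spilled into society like"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
print(result[0]["generated_text"])
```

The point is not the quality of the output but the absence of any gatekeeper: no terms of service, no content filter, no one watching.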
