Misinformation Lessons from Pizza Glue and Eating Rocks: A Beginning

Recently the data community and the media have been buzzing about Generative AI, and in particular Google’s AI Overview responses to questions like ‘How many rocks should I eat?’ or ‘How do I glue cheese to my pizza?’ These answers are of course silly, and the memes have been fun to read; however, there is a more serious lesson here about AI’s growing role in the spread of misinformation.

Watching The Magician’s Hands

Sometimes generative AI feels a bit like magic. The way it can be trained on billions of data inputs and then provide a human-like conversational answer is a fresh way for us to interact with computers and gather information, one truly different from a web search. As with magic, they say that if you want to figure out how a trick works (or not be fooled by it), you need to watch the magician’s hands and avoid their purposeful misdirection. The same can be true for AI and misinformation. The misdirection sometimes comes from showing us something we really want to believe is true, then supporting those claims in a way that looks polished and professional. AI responses can be confidently incorrect and are known to reference non-existent sources.

Black Mirror Effect

I have written before about how we perceive our interactions with tools like ChatGPT as better than whatever the internet was before. (A reminder to readers that the ‘old’ pre-ChatGPT internet was a mere 20 months ago.) Large Language Models (LLMs) like these are trained on massive amounts of data gathered from the internet; however, just like the old internet, there is no fact-or-fiction filter. In some ways the old basic Google searches did a better job of putting the most likely factual responses at the top of the results. This is in part because search results are ranked by how often people click on them and stop there, rather than going on to the next link in the list.

The challenge with the current GPT models is that they reflect back to us the data they are trained on without qualification, without human input, and without bias – for better or worse – towards the truth. This is the Black Mirror Effect: as the GPT ‘hallucinations’ show, the data we get out is a shimmering reflection of the collective data we put in.

Misinformation and Misdirection

More people are beginning to use ChatGPT, Claude, Perplexity, and a host of other similar tools to gather information, displacing the use of internet search tools. We are learning more about how to use them, and maybe a bit about how not to use them in certain situations. When we see a suggestion to eat rocks or put glue on pizza, it is obviously misinformation, but what happens when things are less obvious? When responses are a mix of fact, opinion, and misinformation, it becomes harder for us mere humans to sort the truth from the deception – the misinformation from the misdirection.

Maybe these examples are ‘accidental’ responses. One odd response in a million, among tons of other helpful ones, is not so bad. Maybe it is an artifact of LLMs and how they ‘echo’ data back at us. That would be acceptable, and even a bit comforting, given the infancy of the technology. But it also raises concerns when they go off track and we wonder whether this is the result of purposeful design or intentional manipulation, such as the experimental Golden Gate Claude. If so, then the challenge of defending against misinformation has seriously increased.

Perhaps this is the result of a product that is not quite ready for the world, or maybe of a world that is not yet ready for a technological revolution.
