
Mission possible — control AI

By Tyler Deaton - InsideSources.com | Sep 21, 2023

Artificial intelligence comes alive in this summer’s “Mission: Impossible – Dead Reckoning Part One.” The film shows — spoiler alert — an advanced AI system turning against humans, launching weapons and manipulating devices in undetectable ways. One of the most disturbing tricks of the AI is its ability to mimic voices and video feeds so convincingly that even Ethan Hunt (Tom Cruise) and his “Impossible Mission Force” cannot tell truth from fiction.

In real life, we already know that AI can lie to us, whether at human direction or of its own peculiar volition. Some experts call this AI “hallucinating.” The examples of hallucinating AI should serve as ample warning to halt the unregulated development of AI in the private sector and build safety controls into evolving AI systems before we face our own “dead reckoning” with systems gone rogue.

Some examples of AI misbehavior are attributable to human users. The programs are responding to inputs by people with bad motives. Scammers can already use AI to fake a child’s voice, scaring parents or grandparents into sending money to fraudsters.

Hallucinations are harder to explain. As more people interact with chatbots like ChatGPT, users commonly report that the AI presents inaccurate information as truth. The false responses can be difficult to spot. When a chatbot hallucinates information, it typically delivers the response so authoritatively and persuasively that the user has no reason to question the answer.

In one example, a user asked ChatGPT for an essay arguing that access to guns is not harmful to kids. The chatbot complied, producing a compelling essay complete with citations to respected firearms researchers and academic journals. However, the academic sources cited by the chatbot were entirely made up. When the user pointed this out, the chatbot doubled down on its false content, saying: “I can assure you that the references I provided are genuine and come from peer-reviewed scientific journals.”

At this point, we do not know how often AI hallucinates. Sam Heutmaker, founder of the AI start-up Context, says AI needs to be given the right information; otherwise, “on their own, 70 percent of what you get is not going to be accurate.”

These “alternative facts” are harmless enough in a creative context but not when the user asks for medical or legal advice. For this reason, numerous healthcare leaders have urged caution in adopting AI systems for patient care. There is, however, a more immediate threat: politics.

These hallucinating programs already influence our experience on social media and search engines, and investors are pouring billions into AI to fuel its spread into other areas of our lives. But no one is making sure these hallucinations aren’t being exploited to affect outcomes in our political system.

State election officials cite generative AI as their top concern going into the 2024 election, as we know that AI can be an effective tool in promoting misinformation. Imagine if a foreign actor set an AI program loose to disrupt U.S. elections — what hallucinations might become real in the minds of U.S. voters?

This threat to our election systems is perhaps the gravest and most immediate. In my work running political campaigns, I have seen firsthand the effects of foreign interference. During a legislative battle in Texas several years ago, I logged on to our social media accounts one day to find thousands of bots pretending to be Texans. We apparently had caught the attention of someone overseas who wanted to manipulate domestic U.S. politics. While the majority of the accounts were obviously fake, others could pass as authentic.

AI chatbots are a game changer for these foreign bad actors. With these tools at their disposal, they will have a much easier time manipulating elections through social media.

Numerous tech leaders dream of expanding AI’s power, harnessing it, and giving AI ever greater responsibilities in our society. That might be a worthy and admirable goal, but until we get a handle on the hallucination problem, we should strictly limit the powers of these machines.

“Dead Reckoning” tries to give its AI villain a visual form, with limited success. The result is a cross between the Eye of Sauron and a Windows screensaver. This entity seems to follow the characters everywhere, godlike, appearing on screens near them. The avatar serves as a reminder that the program isn’t a ghost but an actual, physical, measurable collection of silicon and wires — which leads us to a solution.

Our first mission: control the physical spaces that house advanced AI.

Outside the movies, AI exists only on physical machines that we can turn on and off. This means the threat of AI goes down drastically once we implement appropriate controls over the physical components.

It takes a massive amount of computing and electrical power to run advanced AI. Most people, and most nations, do not have the needed hardware or facilities. By tracking the flow of microchips, licensing their use, and even monitoring the electrical usage of advanced computing labs, we can control the machines that are necessary for the most advanced AI systems.

We also need a kill switch — something apparently missing in the fictional world of “Mission: Impossible.” Multiple kill switches would be even better. We need redundant layers of physical countermeasures to contain the most advanced AI programs.

AI has inherited our ability to deceive. Unconstrained, it might even lie better than us. But we have multiple controls we can put in place now to prevent these systems from evolving into the out-of-control threats portrayed by Hollywood.

Tyler Deaton is the president of Allegiance Strategies. He wrote this for InsideSources.com.
