In today's world, technology is changing faster than ever. One of the most talked-about innovations is Artificial Intelligence (AI). AI can now create incredibly realistic voices and videos, often called "deepfakes." While this technology has creative uses, it also presents serious challenges, especially in politics. What happens when you can no longer trust what you hear? In this lesson, we'll explore a real case where an AI-generated voice was used to interfere in an election, discuss the ethical problems it creates, and debate possible solutions.
Warm-up: Real or Fake?
Your teacher will play two short audio clips. Both clips feature a politician's voice. One is real, and the other is a voice clone created by AI.
Clip A
Clip B
Think-Pair-Share:
- Listen carefully to both clips. Which one do you think is the AI clone?
- What clues in the voice, tone, or wording helped you decide?
- Discuss with a partner, then share your ideas with the class.
Teacher's Note
Clip B is the fake.
Speed-read: The Story Behind the Call
Quickly read the article below to understand a real-world example of voice cloning in politics. Your goal is to find three key facts from the text.
AI-Generated Robocall Targets Voters in U.S. Election
In January 2024, residents of New Hampshire, a state in the USA, received a disturbing phone call. The voice on the call sounded exactly like President Joe Biden. The message, however, was fake. This "robocall" told Democrats not to vote in the state's primary election, falsely claiming that voting in the primary would prevent them from voting in the general election later that year.
This event quickly became a high-profile example of a "deepfake"—a piece of audio or video that has been altered by AI to show something that never really happened. Authorities investigated the incident and traced the operation to a political consultant. The use of AI to create a voice clone for political purposes raised serious concerns about election integrity.
As a result of this and similar incidents, the U.S. Federal Communications Commission (FCC) took action. On February 8, 2024, the FCC officially declared that using AI-generated voices in robocalls is illegal without prior consent from the person being called. The ruling makes it easier for authorities to stop these deceptive practices. The government agency stated that this technology has the potential to misinform voters and defraud consumers. This decision is a significant step, but the debate continues about what other safeguards are needed to protect the public from the misuse of AI.
*AI can mimic human voices with remarkable accuracy.*
Key Vocabulary
Here are some important terms from the reading and our discussion. Understanding them will help you discuss the topic more accurately.
| Word/Phrase | Definition | Example |
|---|---|---|
| deepfake | An audio or video clip that has been realistically altered or manipulated with AI. | The video of the celebrity saying strange things turned out to be a deepfake. |
| mimic | To imitate or copy someone's actions or voice. | The AI was able to mimic the president's voice almost perfectly. |
| robocall | An automated phone call that delivers a pre-recorded message to many people. | I received another annoying robocall during dinner last night. |
| consent | Permission for something to happen or agreement to do something. | The company cannot use your photo without your consent. |
| safeguard | A measure taken to protect someone or something from harm or damage. | The new law acts as a safeguard against election fraud. |
| outright | Wholly and completely. | The proposal was rejected outright by the committee. |
| disclose | To make new or secret information known. | Journalists must disclose their sources when required by law. |
| feasible | Possible to do easily or conveniently. | Building a new airport is not financially feasible right now. |
Language Focus: Hedging & Certainty
When we discuss current events, we often don't have all the facts. We use "hedging" language to show we are not 100% certain. On the other hand, when the evidence is strong, we use language of certainty. This is important for thinking critically about the news.
Hedging (Expressing Uncertainty)
We use these phrases when the information is incomplete or unconfirmed.
- It appears to be... / It seems to be...
- Evidence suggests... / Reports suggest...
- It is likely / unlikely that...
- This could / might / may be...
"The robocall appears to be an attempt to confuse voters. Evidence suggests it was created by a political opponent."
Showing Contrast
We use discourse markers to connect and contrast different ideas.
- However,...
- On the other hand,...
- Whereas...
- In contrast,...
"AI offers amazing creative tools. However, it also creates risks for democracy."
Speaking: Stakeholder Carousel
In this activity, you will role-play different people involved in the issue of political deepfakes.
- Form groups of four.
- Each person in the group chooses one role:
  - A Voter: You feel confused and worried. You want to know how to trust the information you see and hear. Your goal is to demand clear, simple ways to identify fake content.
  - A Journalist: You are concerned about misinformation. Your job is to report the truth, but deepfakes make that harder. Your goal is to propose rules that help journalists verify information quickly.
  - A Campaign Staffer: You want to win the election, but you want to do it fairly. You worry your candidate could be the victim of a deepfake. Your goal is to suggest a practical way for all campaigns to fight against deepfakes.
  - A Regulator (from the government): Your job is to protect the public and the fairness of elections. You need to create rules that are effective but don't limit free speech too much. Your goal is to propose one new, specific law or rule.
- Take 5 minutes to think about your role. What is your main concern? What is one safeguard you would propose?
- Each person in the group takes 2-3 minutes to present their viewpoint and proposed safeguard. Use the hedging and contrast language we just studied.
Mini-Debate
As a class or in larger groups, you will debate the following motion.
Motion: "Political deepfakes should be banned outright during election periods."
FOR the motion (Agree): Argue that the risk of misinformation is too high. Deepfakes could change election results, destroy a candidate's reputation unfairly, and erode public trust. A complete ban is the only safe option.
AGAINST the motion (Disagree): Argue that an outright ban is not feasible and could limit free speech. It might be better to focus on solutions like watermarking (labeling) AI content, educating voters, and punishing only malicious uses. Banning the technology won't stop criminals from using it.
Use phrases like:
- "I believe that..."
- "On the one hand..., but on the other hand..."
- "That's a valid point, however..."
- "The evidence suggests that..."
Exit Ticket: Quick Reflection
Think about the discussion. What is the single most important action that social media platforms or governments should take this year to address the problem of AI deepfakes in elections? Be prepared to share your idea with the class.