In this lesson, you will learn how to identify bias in news headlines and rewrite them to be neutral and factual. We will focus on the new EU AI Act as our main topic.
Warm-up: Spot the Spin
Read the three headlines below. They are all about the same event: the European Union's new law for Artificial Intelligence (AI). Which headline do you think is the most neutral? Discuss with a partner.
Headline A
EU Protects Citizens from Dangerous AI with Landmark New Law
Headline B
EU Finalizes Rules for Artificial Intelligence Technology
Headline C
EU's Stifling AI Regulations Threaten to Kill Innovation
Fact, Claim, or Opinion?
In the news, it's important to understand the difference between facts, claims, and opinions. This helps us see the writer's perspective and potential bias.
First, let's watch a short video that explains the basics of the EU AI Act.
The EU AI Act Explained
This video provides a neutral overview of the new legislation.
Video Transcript
00:00 The European Union really loves a good piece of tech regulation. First, it was the GDPR, then the DMA, and then the DSA, and now we have the AI Act. Some hail it as a milestone, a major win with Europe leading the way as the first to
00:15 regulate AI globally. But others argue it's a step backward, claiming Europe is quick to regulate but very slow to innovate. So who's right? Well, in this video, we'll break down
00:30 what the AI Act is all about and explore both sides of the debate. The Act aims to ensure the human-centric and ethical development of artificial intelligence in Europe by laying down some ground rules. Essentially, the Act classifies AI
00:45 into four levels of risk, with each level requiring a different degree of regulation. First, there's level one: minimal risk. This category includes the majority of AI programs, such as AI-enabled video games and spam filters.
01:00 These are so safe that they don't need any regulation. No red tape here. Next, we have level two: limited risk. Things get a bit more interesting here. AI systems in this category include deepfakes and chatbots. The main rule: be transparent.
01:15 Users need to know they're chatting with a bot unless it's super obvious. For example, ChatGPT falls into this area. Then there's level three: high risk. Now
01:30 we're in serious territory. This covers AI in critical areas where decisions and actions can have a profound impact on people's lives. For example, transportation, where AI powers self-driving cars, making split-second
01:45 decisions to keep you safe on the road. Or healthcare, where AI can assist in surgeries and diagnosis. A tiny mistake could have major consequences, so precision is key. Or education, where AI
02:00 can be used to grade exams and assess student performance. Fairness is crucial here, so these systems must avoid bias, with human oversight ensuring just outcomes. Or public safety, where AI in surveillance technologies identifies
02:15 potential threats. It has to respect privacy and avoid false alarms, protecting everyone without overstepping boundaries. With the new regulation, high-risk AI must undergo thorough risk assessment, use top-quality data, maintain
02:30 detailed logs, and always have a human ready to step in when needed. And lastly, there's level four: unacceptable risk. And here's the no-go zone. AI that plays judge and jury with your life, like social scoring systems, is strictly off-limits.
02:45 Take China's social credit system, where your behavior can earn or lose you points, affecting everything from your travel plans to your kids' school options. These kinds of systems are flat-out banned in the European Union. So what
03:00 happens now? Well, the AI Act was officially implemented on the 1st of August 2024. The countdown has now begun for member states to take action. They have until the 2nd of August 2025 to appoint national authorities responsible
03:15 for enforcing AI regulations and overseeing market surveillance. Then come August 2026, the majority of the Act's rules will kick in. The European Artificial Intelligence Board will play a crucial role in ensuring that the Act is applied
03:30 consistently across all member states. Alongside this, a scientific panel of experts will provide technical advice and issue alerts about potential risks. Companies that don't play by these new rules could be hit with some serious penalties.
03:45 Think fines of up to 7% of their global annual turnover for the biggest rule-breakers. But is the Act any good, though? Let's start out with some of the benefits. First up, it's trying to keep us safe. AI's power is immense, but
04:00 with that power comes serious risks. You wouldn't want an AI slipping up during your surgery or a self-driving car confusing a river for a road. The AI Act is here to prevent those nightmares from becoming a reality. Second, it harmonizes
04:15 standards across the EU and potentially the world. Any company operating in the EU, no matter where they're from, has to comply with these rules. This not only makes the regulatory landscape easier to navigate for businesses but also ensures
04:30 everyone is playing by the same rules across the entire region. But it's not all smooth sailing. Let's talk about some of the downsides: stifling innovation, which can happen for two big reasons: red tape and costs. First, the red tape. The AI
04:45 Act could turn into a maze of paperwork, especially for high-risk AI. Picture this: in the US, you've got a cool new AI idea and you're off to the races, getting it to market quickly. But in Europe, you might find yourself buried in forms and
05:00 approvals, slowing everything down. It's a real creativity killer. And then there's the cost. Compliance with the AI Act isn't exactly cheap. For small and medium enterprises in the EU, it's estimated they'll need to spend between 1% and 2.7%
05:15 of their revenue just to meet these regulations. That's a big hit for smaller companies, potentially giving bigger, richer players an unfair advantage. So, what do you think of this new AI Act? Good for Europe or bad for Europe? Let us know in the comments. And please like the video and subscribe to our channel.
Now, let's define our key terms:
- Fact: A statement that can be proven true. It is objective.
  Example: The EU AI Act entered into force in August 2024.
- Claim: A statement that is presented as a fact but needs evidence to be proven. It is often made by an expert or a group.
  Example: Some experts argue that the law will set a global standard for AI regulation.
- Opinion: A personal belief or feeling. It cannot be proven true or false. It is subjective.
  Example: It’s frightening to think about how AI will change our world.
Think/Pair/Share
Read the sentences below, taken from an article about the AI Act. Are they facts, claims, or opinions? Discuss your reasoning with a partner.
- The Act categorizes AI applications into different risk levels.
- This legislation appears to be the most comprehensive AI law to date.
- The best way to regulate AI is through international cooperation, not regional laws.
Language Focus: The Language of Neutral Reporting
Journalists use specific language to report the news neutrally. They avoid showing their own opinion by using reporting verbs and hedging language. This helps maintain objectivity and builds trust with the reader.
Reporting Verbs
Reporting verbs attribute information to a source. Some verbs are neutral, while others can suggest doubt or a particular stance.
| Verb Type | Examples | Sample Sentence |
|---|---|---|
| Neutral | reports, states, explains, announces, notes | The European Commission states that the goal is to ensure safety. |
| Suggests an argument or opinion (less neutral) | claims, argues, asserts, alleges | Critics argue that the law is too restrictive. |
Hedging Language
Hedging language makes statements sound less certain or absolute. This is common when reporting on claims or future possibilities. Words like appears, suggests, likely, could, may, might, and seems are often used.
Without hedging (sounds like a fact):
The new rules will stop innovation.
With hedging (sounds like a claim/possibility):
Some analysts believe the new rules could slow innovation.
Newsroom Desk: Rewriting Headlines
Imagine you are editors in a newsroom. Your job is to make sure headlines are neutral, factual, and clear.
Your Task
Work in pairs. Read the "loaded" (biased) headline below.
EU Bureaucrats Crush Tech Freedom with Dangerous AI Law
- Discuss: Why is this headline biased? Which words show opinion or emotion?
- Rewrite: Write a neutral version of the headline. Your new headline must be 12 words or fewer.
- Prepare: Be ready to explain your changes in a 45-second "editor stand-up."
Editor Stand-up
After a few minutes, your teacher will ask some pairs to share. One person from the pair should stand up and explain their new headline and their reasons, just like an editor pitching to their team.

Student A: Our new headline is: "EU AI Act Introduces New Regulations for Tech Companies."
Teacher: Great. Why did you make those changes?

Student A: We removed "bureaucrats crush tech freedom" because that is negative and emotional language. It shows a strong bias. "Dangerous AI law" is also an opinion. We changed it to "introduces new regulations" because that is a neutral and verifiable fact about the law's function.