With the birth of AI, disinformation has entered a new era, rendering it critical that students build strong information literacy skills.


From New York to Texas, the pro-Palestinian protests sweeping U.S. colleges have become a flashpoint for viral disinformation, from falsely attributed “Jewish genocide” chants to debunked claims of Hamas presence. With the tenor of allegations reaching a fever pitch, Columbia University students have even launched their own fact-checking Twitter account. As this highly charged moment collides with a hyper-partisan landscape, it offers a stark reminder of how disinformation thrives at the intersection of fierce emotions and polarized politics, threatening to drown out nuance, facts, and good-faith dialogue when they are needed most. All of this points to the urgency of tackling disinformation through information literacy.

Disinformation has long played a role in global events. Technological change and increasingly global communications have made the deliberate spread of inaccurate information faster and more impactful. With the birth of AI, disinformation has entered a new era, rendering it critical to teach students how to question sources, spot fakes, and be discerning consumers of news, social media, and information.

AI has dramatically complicated the information landscape by rapidly generating and amplifying deceptive narratives, deepfakes, and AI-generated visuals, drawing concern from global leaders as a major emerging challenge. The World Economic Forum’s latest Global Risks Report, which surveyed experts from academia, business, government, the international community, and civil society, named misinformation and disinformation driven by AI as the top global risk over the next two years–ahead of climate change and war.

The stakes are high, especially as the U.S. approaches a critical election year–one that will undoubtedly be subject to disinformation, which voters will remember played a critical role in the 2016 and 2020 elections.

As an academic who has studied how digital technology is used by governments and non-state actors for repression and information control, I find these issues especially concerning. There is an urgent need to promote greater critical thinking among young people–to give them the tools to detect which information is authentic and which has been manipulated. Information literacy, specifically across digital platforms, should be a mandatory part of every K-12 curriculum, to combat the rise of disinformation and develop more discerning students ready to take on an AI-driven future.

How and where disinformation can take place

Disinformation can show up anywhere, but it thrives on stories that appeal to emotions. Election issues and partisan politics are a prime example. During the pandemic, COVID-19 disinformation narratives–ranging from the bizarre claim that the disease is spread by 5G to other conspiracies–spread faster than the virus itself, thanks to digital technology. Anti-vaccine groups essentially tricked Facebook’s algorithms into allowing posts that spread disinformation by using a carrot emoji in place of the word “vaccine.” Looking at climate change–another highly polarized and partisan issue–a probe into a subset of social media accounts revealed hundreds of AI-generated and stolen pictures used in greenwashing campaigns.

Preying on the emotions that emerged after the deadly October 7th attacks and the ensuing attacks on Gaza, deepfakes powered by AI have spread at an unprecedented pace. Soon after October 7th, a fake story emerged that Qatar had threatened to cut off the world’s natural gas supply if Israel didn’t stop its bombing in Gaza, garnering millions of views before it was ultimately debunked. More recently, the United Nations Relief and Works Agency (UNRWA) has been a target of disinformation, thanks to a network of fake accounts and websites that have collaborated to spread accusations about the agency’s ties with Hamas.

Not only is disinformation incredibly damaging to the delivery of accurate, verifiable information, it has also eroded the public’s trust in some of our most reliable institutions. Only 32 percent of Americans say they trust the mass media, a figure tied with the record low set in 2016.

Engaging with disinformation and AI as teachable moments

Disinformation can be corrected through fact-checking, but in many cases a false story has already done its damage by the time it is debunked. Another strategy is ‘prebunking,’ a technique gaining momentum that builds preemptive resilience to misinformation.

We can combat the spread of disinformation by teaching more critical thinking–especially about AI, algorithms, and deception–and by emphasizing the value of deeper subject-matter knowledge.

Whether you are a K-12 teacher, a university instructor, or simply someone who actively engages with online platforms, there are many steps you can take to build greater understanding of, and literacy around, disinformation and AI. This will in turn instill greater trust in the institutions and organizations that disseminate the information we seek.

Context-based case studies, such as videos of celebrities and influencers, can serve as important teaching moments. In my classes, I’ve challenged students to discern what is a deepfake or AI-generated image through exercises such as reverse image searches. This teaches them to detect clues such as fuzzy details, inconsistent lighting, and out-of-sync audio and visuals, and to assess the credibility of an image’s source. We spend time analyzing and discussing the spread, origins, and nature of social media manipulation, which equips students with important data literacy skills.
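For instructors who want to show students what is happening under the hood, the reverse image searches described above rest on perceptual hashing: an image is reduced to a compact fingerprint, and near-identical images yield near-identical fingerprints even after small edits. Below is a minimal, hypothetical sketch of the “average hash” variant, assuming the image has already been downscaled to an 8x8 grayscale grid (real services are far more sophisticated):

```python
# Toy illustration of perceptual ("average") hashing, one technique
# behind reverse image search and near-duplicate detection.
# Assumption: the image is already an 8x8 grid of grayscale values (0-255).

def average_hash(pixels):
    """Map an 8x8 grayscale grid to a 64-bit fingerprint: each bit
    records whether that pixel is brighter than the grid's mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Count differing bits; a small distance suggests near-duplicates."""
    return bin(h1 ^ h2).count("1")

# Two "images": the second is the first with one pixel brightened.
original = [[(r * 8 + c) * 4 % 256 for c in range(8)] for r in range(8)]
edited = [row[:] for row in original]
edited[0][0] = 255

d = hamming_distance(average_hash(original), average_hash(edited))
```

Even with one pixel changed drastically, the two fingerprints differ in only a couple of bits out of 64, which is why a search engine can still match an edited or re-compressed copy back to its source–a useful intuition for students tracing where a suspicious image first appeared.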

Bringing the study of disinformation to the classroom

What we know about the world ultimately informs how we approach disinformation and deception. Today’s students need a cross-disciplinary approach that starts early, so the foundations of critical thinking and information literacy are instilled at a young age and stick with them as they grow and mature.

In Finland, media literacy is a core component of the national curriculum, starting in preschool. Students begin with the basic elements of media and build from there to more complex skills, such as identifying sources. It is not a single subject–rather, it is taught across disciplines, including Finnish language and literature, math, and art, to grow a well-rounded set of analytical skills. In a survey published by the Open Society Institute in Bulgaria, Finland ranked No. 1 of 41 European countries on resilience against misinformation for the fifth time in a row. Finland’s population also has a higher level of trust in news and other institutions, with 76 percent of Finns considering print and digital newspapers to be reliable, according to a survey conducted by market research company IRO Research.

There is no denying the impact of disinformation and the stranglehold it has on political processes around the world. We will doubtless see disinformation deployed throughout the 2024 U.S. presidential election battle, but a concerted effort to develop greater critical thinking can help mitigate its impact. By becoming more knowledgeable about what disinformation is, as well as about different countries, cultures, and subjects, we can better navigate the array of disinformation scenarios in the digital world and foster a questioning mindset.
