Shaping Social Media

Defeating terrorist messaging requires deleting offensive content and devising counternarratives

UNIPATH STAFF

Daesh suffered crushing defeats on the battlefields of Iraq and Syria in 2017, but there’s one battlefield on which the terrorists still operate without fear.

In the summer of 2018 — a full year after Daesh was driven from some of its last strongholds in Mosul — the terrorist group continued to spread its message of hate across some of the internet’s most popular platforms.

YouTube is a format most associated with fun and informative videos produced by teens and young adults, but Daesh messaging crept onto the platform continuously in the first half of 2018. According to the Counter Extremism Project, in the spring of 2018 alone, Daesh and its collaborators uploaded 1,348 YouTube videos.

Even though YouTube’s parent company Google managed to pull down the videos after a couple of hours, the damage was done: Those videos attracted 163,391 views before they were yanked from the platform.

“That’s a lot of eyes on those videos,” said Hany Farid, a researcher at the Counter Extremism Project.

For all its benefits, social media in the form of Twitter, YouTube, Facebook and their imitators has been manipulated by some of the world’s most vicious extremists to indoctrinate and recruit youths around the world.

Governments have reacted in many ways to the proliferation of these terrorist hate messages. Many have blocked popular websites or temporarily banned citizens from using the internet entirely. 

Such measures are reinforced by the internet companies themselves, which, pressured by counterterrorism agencies, have strived to expunge propaganda from their sites.

Another promising innovation is “artificial intelligence” that purports to pinpoint terrorist material and remove it from the internet with minimal human involvement. 

But there’s a growing suspicion that a heavy-handed approach that harms the innocent and guilty alike does more harm than good. Governments are increasingly turning to the creation of counternarratives to confront the extremists wherever they lurk in cyberspace.

Since policing the internet is ultimately insufficient as extremists bore more deeply into the dark crevices of the web, these counternarratives promise to have the greatest reach and influence globally.

“Merely banning social media platforms is not enough,” said Maqsoud Kruse, executive director of Hedayah, the counterterrorism center based in the United Arab Emirates (UAE). “We must also be able to build our social media outreach for a counternarrative.”

Policing methods

Most governments of the Middle East and South and Central Asia have used various methods to block, ban and filter terrorist content on the internet. Such methods — if restricted to violent extremists and not misused to censor and persecute innocent citizens — are a necessary first step toward blunting terrorist recruitment.

But a large volume of propaganda videos escapes the attention of the authorities. The quantity of material available on social media is simply too vast to police effectively. Terrorists not only disguise their material to slip it onto mainstream websites but also retreat to closed networks on the darknet to transmit hateful messages.

Relatively new services such as Google Drive accept thousands of hours of mostly unmonitored video, including that created by terrorists. Experts have counted at least 400 websites on which Daesh has posted videos. Some of these include step-by-step instructions on how to make bombs. 

Nikita Malik, author of a recent research paper for the London-based Centre on Radicalisation and Terrorism, said violent extremists have become creative in avoiding scrutiny by counterterrorism services. They use the internet not just for messaging but to raise money through shadowy cryptocurrencies such as bitcoin.

“We have denied Islamic State territory in the real world, but it has a whole new safe haven in cyberspace, which we need first to understand and then to close down,” Malik told The Guardian newspaper. “The authorities must move urgently to increase their knowledge of terrorists’ activities in cyberspace and their use of technologies such as bitcoin. Regulation in this area has to move carefully if we are to balance liberties with guarding against threats to our security — but the time has come to deny extremists the space they need online to plan fresh atrocities.”

Great Britain claimed success in the use of what it called artificial intelligence to eliminate Daesh recruitment material from hundreds of websites across the internet. The developers used 1,000 existing Daesh videos to “train” the computer program to recognize Daesh content on the internet, and they claim the technology has achieved a 94 percent success rate. The tool will aid the authorities in removing Daesh content from more obscure platforms such as Vimeo, Telegram and pCloud.

Internet companies have begun policing themselves using, in most cases, their own proprietary “fingerprinting” technology to detect key words and themes popular with terrorists.
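The companies do not publish the details of these proprietary systems. Conceptually, a fingerprint reduces known extremist content to a compact signature that re-uploads can be matched against even after small edits. A minimal sketch using hashed word shingles (illustrative only; real systems fingerprint images, audio and video, not just text):

```python
import hashlib

def fingerprint(text, k=4):
    """Hash every k-word shingle of the text; the set of hashes serves
    as a compact fingerprint of the content."""
    words = text.lower().split()
    return {hashlib.sha256(" ".join(words[i:i + k]).encode()).hexdigest()
            for i in range(max(1, len(words) - k + 1))}

def similarity(fp_a, fp_b):
    """Jaccard similarity between two fingerprints, from 0.0 to 1.0.
    A high score suggests a re-upload of known content."""
    if not fp_a or not fp_b:
        return 0.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)
```

Because shingles overlap, an upload that copies most of a known transcript still shares most of its hashes with the stored fingerprint, which is what lets platforms catch lightly edited re-uploads.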

In 2017, Microsoft, Facebook, YouTube and Twitter formed the Global Internet Forum to Counter Terrorism with the goal of sharing technology and best practices to remove content from violent extremists. Other social media companies such as LinkedIn, Instagram and Snap have since joined the forum.

“Governments and telecommunication companies have a role to play in educating the public on the danger posed by terrorist organizations who use the social media platforms to recruit individuals and promote hatred,” said Salem Al Za’abi, head of the UAE National Counter-Terrorism Committee.

Designing counternarratives

While important, removing extremist content from online forums isn’t enough. Countries must design counternarratives that not only refute the content of terrorist recruitment videos but do so in a way to attract the eyes of jaded young men who may be suspicious of older authority figures.

To conduct this form of strategic communication, countries have enlisted techniques borrowed from fields as diverse as psychology, advertising, theology and communications. In some cases, would-be terrorists surfing the web for extremist content are redirected to counterterrorist websites.

Hedayah acts as a clearing house for counterterrorism research. It hosts a counternarrative library that directs users to hundreds of pages of content already available on the internet, including movies, TV shows, books, cartoons, videos, news articles and blogs. In Europe, an organization called the Radicalisation Awareness Network provides a similar service. 

“Why is it that when these terrorist groups create a social media message, it resonates with some of us and not with some others? The answer is simple, it is the content,” Kruse said. “So we need to create a better idea that resonates more. In this competition of ideas, we need to be innovative, creative and visionary.”

But tailoring a counterterrorism campaign for a diverse audience is difficult. Although the goal is to discredit and demystify violent extremism using ideology, morality, logic or facts, not every message or method resonates with every person.

For example, social media that might dissuade an unhappy teen looking for adventure in Syria would have little power to change the mind of a hardened criminal. A third-generation European from Paris might respond differently to countermessaging than an uneducated Afghan from a remote village.

To use the language of marketing, counterterrorism media campaigns often entail deterring “consumers” from “buying” a terrorist organization’s “brand.”  

“Counter-narrative centric strategies are inherently defensive and reactive; they depend on the adversary’s messaging in order to craft its own messaging. Consequently, the adversary tends to not only initiate but shape the pace and nature of the information contest,” noted the authors of a November 2017 European Union research paper called “Countering Terrorist Narratives.”

Experts suggest the best campaigns relentlessly transmit thematically varied messages across multiple platforms, but ensure those messages coalesce around a simple, overarching narrative.

The U.S. Department of Homeland Security enlists American college students — from the same age group as many terrorist recruits — to fine-tune its counterterrorism messaging.

A program called Peer to Peer: Challenging Extremism invited dozens of universities to develop social media campaigns to blunt terrorist recruitment. The results were innovative websites that used formats such as video games and music to transmit their counterextremist message. The winning school earned a $5,000 prize.

“The program addresses two fundamental things,” said George Selim of the Department of Homeland Security. “It brings real-world national security problems to the classroom, and it gives young people a chance to have their voices heard.”

A combination of harder approaches — deleting and disrupting social media that incites violence — and softer approaches — launching web-based counternarratives — need to be part of every society’s counterterrorism toolkit.

“Where these awful crimes are facilitated by the use of social media, we want to close down the criminals’ ability to communicate,” said Max Hill, a prosecutor who served as Great Britain’s independent reviewer of terrorism legislation.

“And yet, we must recognize that policing the internet, and controlling social media comes at a very high price if it interferes with the freedom of communication which every citizen enjoys.”  


Four Trends in Web-based Counterterrorism

  1. Disrupting — Blocking distribution, deleting content, banning websites.
  2. Redirecting — Steering viewers of terrorist material to countermessaging sites.
  3. Countermessaging — Devising and implementing campaigns that offer alternative viewpoints to violent extremism. 
  4. Synchronizing — Multinational coordination to ensure messaging is practical, consistent and effective.

Source: European Union
