Social media has grown rapidly over the past two decades and carries enormous potential. Governments in the Philippines (2001), Spain (2004), Moldova (2009), and Egypt (2011) have been toppled following mass protests co-ordinated through social media1. However, it is also open to malicious exploitation and has become a key tool for terrorists (particularly Islamic State) to spread propaganda and recruit jihadis across the world. Some have described social media as becoming a “New Frontier of Terrorism”2 for these groups; others describe it as a “Digital Caliphate”3. Omar Mateen, who attacked the Pulse nightclub in Orlando, Florida, allegedly became radicalised towards ISIS in part through material available on social media4. Shannon Conley, a white middle-class American, converted to extremist Islam and began using Skype5 to contact a Tunisian ISIS soldier, who made plans for her to travel to Syria.
Numerous organisations and high-ranking officials have pressured social media companies to do more to fight extremism channelled through their platforms. Former Prime Minister Theresa May insisted technology companies must go “further and faster”6 in removing extremist content, and that they must end the “safe spaces”7 in which terrorists thrive in cyberspace. Berger (2015) outlines the extremist’s typical methodology: find a potential recruit, create a “micro-community”, isolate the target from friends and family, shift contact to private communications, and encourage attacks8. However, extremist content can also mean spreading terrorist propaganda, supporting those who have carried out attacks, sympathising with terrorists, or stirring up fear and hatred in the wider community. With such an international audience available online, the reach of extremist messaging is unfortunately worldwide.
British security officials know that as ISIS crumbles in the Middle East, its online recruitment efforts will persist, and perhaps intensify. According to Pew Research, 86% of adults aged 18-29 use social media9, while a Cisco report estimated that by 2020 there would be over four billion internet users worldwide10. With ISIS likely to place ever greater emphasis on the web, and with its potential audience growing, the need to fight terrorism online has never been greater.
Social media can be a terrorist’s best friend and worst enemy. Both Verton (2014)11 and Awan (2017)12 pinpoint Facebook, Twitter and YouTube as the major platforms exploited by jihadis. One of them, Facebook, has already been linked in part to several terrorist attacks. Tashfeen Malik, one of the pair responsible for the 2015 San Bernardino attack which killed 14 people, pledged allegiance to ISIS on Facebook13, and “spoke openly”14 about Islamic jihad and how she “wanted to be a part of it”. Malik even posted support for jihad during the attack itself.
The New York Police Department’s Deputy Commissioner of Intelligence and Counterterrorism, John Miller, insisted that Sayfullo Saipov (whose New York truck attack left eight dead) appeared “to have followed almost exactly to a T”15 the guidelines ISIS had posted on its social media channels instructing followers to carry out such attacks. For some, the link between ISIS propaganda16 and the deaths of eight people is painfully evident, and responsibility extends to the platforms that hosted the radical material: Facebook, YouTube, and Twitter.
Further criticism of Facebook arose after the Westminster attack. Then Home Secretary Amber Rudd insisted tech companies were giving terrorists a “place to hide”17 when it was revealed that the attacker had used Facebook-owned WhatsApp to send encrypted messages in the build-up to the attack. Propaganda may also be less discreet: one recent trend is the creation of Facebook eulogies for jihadis killed in their efforts18, a macabre celebrity status that may appeal to younger Muslims seeking glorification.
Facebook announced in June 2017 that artificial intelligence (AI) was being deployed19 to automatically detect and remove extremist content. AI and algorithms recognise content that is suspicious or forbidden by the platform and either block it or alert staff to take action. Given that ISIS produces material in many languages, however, the language barrier may be a problem. Furthermore, a computer cannot understand context as well as a human, so innocent people discussing world affairs may be flagged as suspicious. Alternatively, terrorists may avoid detection by steering clear of the specific words they predict will be deemed suspicious. To address these limitations, Facebook further revealed plans to expand its counter-terrorism team by 150 experts to make the website “a hostile place for terrorists”20.
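To illustrate the mechanism (and its weaknesses) described above, the sketch below shows a deliberately naive, hypothetical keyword filter in Python. It is not Facebook’s actual system, whose models and watchlists are proprietary; the invented watchlist terms simply demonstrate how a recruiter who avoids flagged vocabulary slips through, while an innocent discussion of world affairs gets caught.

```python
# Minimal, hypothetical sketch of keyword-based content flagging.
# Real platforms use proprietary machine-learning classifiers; this
# illustrates the principle and its weaknesses, not any actual system.

WATCHLIST = {"martyrdom operation", "join the caliphate"}  # invented examples

def review(post: str) -> str:
    """Return a moderation decision for a single post."""
    text = post.lower()
    if any(term in text for term in WATCHLIST):
        return "flag for human review"  # or block outright
    return "allow"

# A recruiter who avoids watchlist terms slips through:
print(review("Brother, come travel and serve the cause"))               # allow
# An innocent discussion of world affairs can be caught:
print(review("The news said ISIS urges people to join the caliphate"))  # flag
```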
Having extended its use of artificial intelligence, algorithms and human review, Facebook claimed in November 2017 that it was removing around 99% of content21 relating to jihadist organisations before it was flagged by another user, and that 83% of terrorist content was removed within an hour of upload.
This is evidently a significant improvement, but problems remain. AI and algorithms are less likely to detect potential recruiters who use ordinary language to coerce people. Only humans can reliably spot this, and the sheer scale of content to be moderated makes the task nigh-on impossible to resolve fully.
Twitter is also an inadvertent host to radical extremists, who have turned the platform into an “ISIS megaphone”22. One source estimates there may be as many as 200,000 pro-ISIS tweets daily, although this figure includes retweets and computer-generated content23. Tracing the origin24 of tweets and their account-holders is harder than on the likes of Facebook, affording users greater anonymity. Another of ISIS’s social media ventures is an Arabic Twitter app, “Dawn of Glad Tidings”25, which helps users keep up to date with the latest news about the group26.
Widely circulated content showing the beheading of the American journalist James Foley in 2014 forced the platform to take tougher action against extremist content. Twitter has asserted that it is working extensively to combat violent extremism on its platform, and evidence of these efforts is shown in its Transparency Report released on 21 March 201727. According to the report, almost 380,000 accounts were suspended between July 2016 and the end of that year for violations such as the promotion of terrorism; other sources put the figure as high as 636,00028. Like Facebook, Twitter has deployed AI to help track abusive or suspicious messages29 and to remove the offending accounts. Facebook and Twitter (along with Google and Microsoft) created the Global Internet Forum to Counter Terrorism (GIFCT)30 in 2017, though the forum is still in its infancy. Twitter also co-operated with the Center for Strategic Counterterrorism Communications (CSCC)31 to counter ISIS’s output with messages debunking or discrediting the group, but the balance remained a lowly one anti-ISIS post for every 99 supporting it.
Twitter is evidently making great efforts to counter these threats, and the number of accounts removed is impressive, but creating a new account is a minor nuisance to terrorists. More must be done to prevent this in the first place, such as a tougher verification process at registration. That said, given the amount of effort ISIS puts into the site, Twitter can only do so much to stem the barrage of content it faces.
YouTube, a video-sharing site, has developed extraordinarily, from cat videos to free live TV streaming. Inevitably, terrorist groups see this as an opportunity to spread propaganda. Previously, organisations such as Al-Qaeda had to mail grainy, low-quality videos to media stations to get their message across; social media now means they can upload extremist content in minutes32. Typically this takes the form of promoting suicide bombing, delivering propaganda and praising martyrs33. Tashfeen Malik watched extensive amounts of extremist preaching and jihadist propaganda on the site34.
In June 2017, Google (YouTube’s parent company) announced further machine-learning technology to recognise illicit and banned content. It also vowed to add 50 expert NGOs to the 63 organisations35 that already work with the platform as part of its “Trusted Flagger” programme.
YouTube’s director of Public Policy and Government Relations insisted that machine-learning technology now removes as much as 99% of “violent extremism” videos, a significant rise from 40% the previous year. Around 70% of such content is claimed to be removed within eight hours of posting, and 50% within two hours. After vowing in December 2017 to increase its efforts, the company pledged to assign as many as 10,000 people36 to removing extremist content from the platform as soon as possible.
The efforts made by YouTube are extensive and impressive, and of the three platforms it appears to be the most successful in countering extremism.
Additional Problems
Terrorists’ exploitation of social media inevitably raises numerous further problems: social, cultural and psychological. While it is important to pressure these companies to remove extremist content, excessive condemnation of social media risks funnelling all blame onto the companies themselves. This may distract from deeper societal or cultural problems, such as the disillusionment of Muslim youths that drives some towards extremism37 (Awan: 2011), or the foreign policy that may motivate attacks, all of which lie beyond the control of social media companies.
Additionally, other websites may render these efforts vain. While YouTube, for example, may do its best to block terrorist content, it may be fighting a losing battle when more extreme “shock sites” host such material freely. Websites such as BestGore and theYNC carry footage of gruesome beheadings and suicide bombings, and its availability may lead to a simple migration from one platform to another in search of specific content. About this, Facebook, Twitter and YouTube can do nothing.
Algorithms are good at detecting terms for automatic removal, but in doing so they may overlook people who pose genuine threats. Should a person post threats to attack, an algorithm may delete the post before the police have a chance to investigate. Furthermore, if algorithms search only for recognised illicit material, they are likely to be blind to the extremist use of innocent material. News footage of the Finsbury Park attack is legitimately newsworthy, yet it could be exploited for extremist purposes to justify acts of revenge, something a computer may wave through as innocent media.
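One possible design response to the evidence-destruction problem, sketched below with invented severity thresholds and helper names, is to preserve and escalate credible threats before removal rather than deleting them silently. This is an illustrative pattern only, not any platform’s documented pipeline.

```python
# Hypothetical triage sketch: instead of deleting a credible threat
# outright (destroying evidence), preserve it and alert the authorities
# first. Labels, thresholds and helpers are invented for illustration.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: int
    text: str
    threat_score: float  # assumed output of some upstream classifier

def preserve_evidence(post: Post) -> None:
    print(f"archived post {post.post_id} for investigators")

def notify_authorities(post: Post) -> None:
    print(f"alert sent for post {post.post_id}")

def triage(post: Post) -> str:
    if post.threat_score >= 0.9:
        # Credible threat: snapshot for law enforcement before takedown.
        preserve_evidence(post)
        notify_authorities(post)
        return "remove after preservation"
    if post.threat_score >= 0.5:
        return "queue for human review"  # humans judge context
    return "allow"

print(triage(Post(1, "I will attack tomorrow", 0.95)))
```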
Automated removal also raises the question of what counts as “extremist content”. To a Western European, the Charlie Hebdo cartoons portraying the Prophet Muhammad38 will likely be viewed as inoffensive satire, whereas in other nations39 they may be viewed as deeply extremist and blasphemous. For a platform to remove them would be to take the side of those offended at the expense of the uploader, while inaction would imply taking the side of the uploader at the expense of those offended. Either way, platforms face a volatile lose-lose situation.
From this arises another problem: social media is itself borderless and global40. If the UK changed its laws on what may be posted to social media, those laws would have no power over an Iraqi posting the same material, even though the content can be viewed in any nation. Patrikarakos insisted there must be a global response41 from international bodies such as the UN to create strict worldwide guidelines on what is and is not prohibited, something of a digital “Paris Accord”. The world is, of course, full of many cultures, religions and traditions, so finding a balanced middle ground to agree upon would remain an incredibly daunting task.
Possible Solutions
Despite these difficulties, various authors and officials have suggested potential solutions, and there remain ways in which social media companies can improve. Ben Wallace (the U.K. Minister of State for Security and Economic Crime)42, amongst others43, called for a tax hike on social media companies exploited by terrorists, to compensate for the governmental expenditure required to counter these threats. This could strongly motivate companies to raise their counter-terrorism standards, though it might instead drive them to move their business to another country to avoid such taxes.
A tougher verification process for creating an account might prevent large numbers of bot accounts from being generated automatically, or at least slow the rate at which terrorists can create accounts and thereby slow their output; a minimal sketch of such rate-limiting follows below. Alternatively, giving companies access to encrypted conversations could reveal plots such as the Westminster attack in advance, and might uncover radicalisation at an earlier stage. However, this would likely be met with intense criticism from privacy advocates.
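As a minimal sketch of the verification idea mentioned above, the code below rate-limits account creation per source IP using a sliding window. The thresholds and approach are invented for illustration; real sign-up defences (phone verification, CAPTCHAs, device checks) are considerably more elaborate.

```python
# Hypothetical sketch of rate-limiting account registration per source IP.
# Window size and cap are invented for illustration only.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 3600  # consider only the last hour
MAX_SIGNUPS = 3        # cap on new accounts per IP per window

signup_history = defaultdict(deque)  # ip -> timestamps of recent sign-ups

def allow_registration(ip, now=None):
    """Return True if this IP may create another account right now."""
    now = time.time() if now is None else now
    history = signup_history[ip]
    while history and now - history[0] > WINDOW_SECONDS:
        history.popleft()  # discard attempts that fell out of the window
    if len(history) >= MAX_SIGNUPS:
        return False       # too many recent accounts from this source
    history.append(now)
    return True

# The first three attempts succeed; later ones are refused until
# enough time passes for the window to slide.
for attempt in range(5):
    print(attempt, allow_registration("203.0.113.7", now=1000.0 + attempt))
```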
In Conclusion
It is difficult to say whether social media companies are doing enough in the fight against terrorism, because it is difficult to define how much “enough” is. Five years ago, 10,000 accounts removed in a month might have been hailed as a great success; today it might be deemed a poor effort. Some may insist that any terrorist activity whatsoever on these platforms is far too much, whilst others may view the problem as realistically impossible to suppress entirely.
Social media companies are succeeding in harnessing new technologies such as AI and algorithms, and these can only become more effective as the technology develops. The addition of more staff dedicated to removing such content can only make the platforms more hostile to terrorist propaganda. The creation of bodies such as the Global Internet Forum to Counter Terrorism is a further step in the right direction, though it is far too early to assess its effect.
The numbers given by Facebook, Twitter and YouTube are impressive. Facebook claims to remove around 99% of jihadist content before it is even flagged publicly; Twitter is removing an estimated 63,000 jihadist or terrorist-related accounts a month; and YouTube asserts that it removes almost three-quarters of extremist content within eight hours. This shows the extensive efforts being put into countering jihadism, but it unfortunately also shows the scale of the threat the platforms are up against.
Some may insist that any jihadist content whatsoever on these channels is far too much; unfortunately, wiping clean 100% of such material appears unrealistic. Reis likened fighting terrorist activity online to a game of “Whack-A-Mole”: as soon as one channel or account is shut down, another pops up moments later44. Because of this, the companies can only do so much.
1Robin Thomson, “Radicalisation and the Use of Social Media” (2011) Journal of Strategic Security, 4.4, 175.
2Laura Scaife, Social Networks as the New Frontier of Terrorism: #Terror, (Routledge, 1st edition, 2017).
3Abdel Bari Atwan, Islamic State: The Digital Caliphate (Saqi, new edition, 2015).
4Alexander Tsesis, “Social Media Accountability For Terrorist Propaganda” (2017) Fordham Law Review, v86, 608.
5Michael Martinez, Ana Cabrera and Sara Weisfeldt, “Colorado Woman Gets 4 Years for Wanting to Join ISIS” (CNN, 24 January 2015).
6Uncredited Author, “Theresa May Warns Tech Firms Over Terror Content” BBC News (20 September 2017).
7George Parker, “Theresa May Warns Tech Companies: ‘No Safe Space’ For Extremists” Financial Times (4 June 2017).
8J.M. Berger, “How Terrorists Recruit Online (and How to Stop it)” (Brookings, 9 November 2015) <https://www.brookings.edu/blog/markaz/2015/11/09/how-terrorists-recruit-online-and-how-to-stop-it/> accessed 28 February 2018.
9David Cohen, “86% of U.S. Adults Aged 18-29 Are Social Media Users” (Adweek, 12 January 2017) accessed 14 February 2018.
10Uncredited Author, “Infographic: The Number of Internet Users By 2020” (Gemalto, 4 August 2016).
11Dan Verton, “Are Social Media Companies Doing Enough to Stop Terrorist Recruitment?” (FedScoop, 10 December 2014) <https://www.fedscoop.com/social-media-companies-enough-stop-terrorist-recruitment/> accessed 20 February 2018.
12Imran Awan, “Cyber-Extremism: Isis and the Power of Social Media” (2017) Society, issue 2, 138.
13Alexander Tsesis, “Social Media Accountability For Terrorist Propaganda” (2017) Fordham Law Review, v86, 611.
14Uncredited Author, “U.S. Missed "Red Flags" With San Bernardino Shooter” (CBS News, 14 December 2015).
15David Patrikarakos, “Social Media Networks Are the Handmaiden to Dangerous Propaganda” (Time, 2 November 2017).
16David Patrikarakos, “Social Media Networks Are the Handmaiden to Dangerous Propaganda” (Time, 2 November 2017).
17Todd Goodenough, “What the Papers Say: Are Tech Companies Doing Enough to Tackle Terror?” (Spectator, 27 March 2017).
18Thomas Chen, Violent Extremism Online: New Perspectives on Terrorism and the Internet (Routledge, 1st edition, 2016) 50.
19Uncredited Author, “Facebook Uses AI in Fight Against Terrorism” (Sky News, 16 June 2017).
20Lisa Vaas, “How Social Media Companies Are Using AI to Fight Terrorist Content” (Naked Security, 20 June 2017) <https://nakedsecurity.sophos.com/2017/06/20/how-social-media-companies-are-using-ai-to-fight-terrorist-content/> accessed 27 March 2018.
21Julia Fioretti, “Facebook Reports Progress in Removing Extremist Content” (Reuters, 29 November 2017).
22Imran Awan, “Cyber-Extremism: Isis and the Power of Social Media” (2017) Society, issue 2, 142.
23Lisa Blaker, “The Islamic State’s Use of Online Social Media” (2015) Journal of the Military Cyber Professionals Association, 1.1, 1.
24Seraphin Alava, Divina Frau-Meigs and Ghayda Hassan, “Youth and Violent Extremism on Social Media: Mapping the Research” (2017) UNESCO, 16.
25J.M. Berger, “How ISIS Games Twitter” (Atlantic, 16 June 2014).
26Yasmin Tadjdeh, “Government, Industry Countering Islamic State’s Social Media Campaign” (2014) 3.
27Lisa Vaas, “How Social Media Companies Are Using AI to Fight Terrorist Content” (Naked Security, 20 June 2017) <https://nakedsecurity.sophos.com/2017/06/20/how-social-media-companies-are-using-ai-to-fight-terrorist-content/> accessed 27 March 2018.
28Andrea Heathman, “These are all the ways Facebook, Google and Twitter are trying to fight terrorism online” (Verdict, 1 August 2017) <https://www.verdict.co.uk/ways-facebook-google-twitter-trying-fight-terrorism-online/> accessed 21 February 2018.
29Andrea Heathman, “These are all the ways Facebook, Google and Twitter are trying to fight terrorism online” (Verdict, 1 August 2017) <https://www.verdict.co.uk/ways-facebook-google-twitter-trying-fight-terrorism-online/> accessed 21 February 2018.
30Andrea Heathman, “These are all the ways Facebook, Google and Twitter are trying to fight terrorism online” (Verdict, 1 August 2017) <https://www.verdict.co.uk/ways-facebook-google-twitter-trying-fight-terrorism-online/> accessed 21 February 2018.
31David Patrikarakos, “Social Media Networks Are the Handmaiden to Dangerous Propaganda” (Time, 2 November 2017).
32Luke Bertram, “Terrorism, the Internet, and the Social Media Advantage” (2016) Journal For Deradicalisation, v7, 233.
33Seraphin Alava, Divina Frau-Meigs and Ghayda Hassan, “Youth and Violent Extremism on Social Media: Mapping the Research” (2017) UNESCO, 16.
34Alexander Tsesis, “Social Media Accountability For Terrorist Propaganda” (2017) Fordham Law Review, v86, 611.
35Lisa Vaas, “How Social Media Companies Are Using AI to Fight Terrorist Content” (Naked Security, 20 June 2017) <https://nakedsecurity.sophos.com/2017/06/20/how-social-media-companies-are-using-ai-to-fight-terrorist-content/> accessed 27 March 2018.
36Sean Morrison, “Google to Create 10,000-Strong Team to Tackle Extremist YouTube Posts” (Standard, 5 December 2017).
37Imran Awan, “COUNTERBLAST: Terror in the Eye of the Beholder: The ‘Spycam’ Saga: Counter-Terrorism or Counter Productive?” (2011) Howard J. Crim. Just.
38Ahmed Al-Rawi, Islam on YouTube: Online Debates, Protests, and Extremism (Springer, 1st edition, 2017) 43.
39Lizzie Dearden, “Charlie Hebdo Protests: Five Dead as Churches and French Flags Burn in Niger Riots Over Prophet Mohamed Cover” (Independent, 17 January 2015).
40David Patrikarakos, “Social Media Networks Are the Handmaiden to Dangerous Propaganda” (Time, 2 November 2017).
41David Patrikarakos, “Social Media Networks Are the Handmaiden to Dangerous Propaganda” (Time, 2 November 2017).
42Andrew Blake, “British Official Touts Tax Hike for Social Media Companies, Citing Role in Radicalizing Terrorists” (Washington Times, 2 January 2018).
43David Patrikarakos, “Social Media Networks Are the Handmaiden to Dangerous Propaganda” (Time, 2 November 2017).
44Lisa Blaker, “The Islamic State’s Use of Online Social Media” (2015) Journal of the Military Cyber Professionals Association, 1.1, 9.