The Christchurch Call, established in May 2019, is an initiative that brings the public and private sectors together with the goal of eliminating terrorist and violent extremist content online (TVECO). The Call comprises voluntary, collective commitments from governments and online service providers to prevent the abuse of online platforms of the kind that enabled the broadcasting of the Christchurch attacks. Central to the effort is building a robust shared data set and employing Artificial Intelligence (AI) and Machine Learning (ML) to detect and remove TVECO. The first-ever transparency report by the Global Internet Forum to Counter Terrorism (GIFCT), an industry-led initiative, shows success in creating that shared data set. At the same time, it confirms that big tech's hesitation to advance AI and ML for anomaly detection could allow TVECO to spread in the future.
On May 15, 2019, New Zealand’s Prime Minister Jacinda Ardern and French President Emmanuel Macron brought heads of government and leaders of large tech companies together in Paris, France. The participants met to discuss the terrorist attack on two mosques in Christchurch, New Zealand. Two months earlier, some 4,000 Facebook users had watched, via live stream, roughly one hundred people killed or wounded. Since its launch in June 2017, GIFCT had proven unable to handle such incidents on its own. Consequently, Facebook, Twitter, and YouTube joined 18 countries and international organizations in supporting the Call to prevent TVECO, bolstering GIFCT’s efforts to counter the problem.
Despite the forum’s existence, Facebook’s human and automated monitoring failed to respond to the Christchurch terrorist attack. The company’s pattern-recognition tool, “hashing,” converts recognized content into character strings that enable algorithms to spot TVECO duplicates whenever they are uploaded; it could not flag the attack footage because the live stream was new, never-before-hashed content. Some 300,000 of 1.2 million video upload attempts slipped past Facebook’s Risk and Response Team, which was working to remove the offensive material. Facebook received the first report from users about the real-time incident only 12 minutes after the video had been live-streamed. The Christchurch attack proves that a self-policing red-flags approach, even with content moderators’ oversight, is simply inadequate. A change in GIFCT’s policy toward TVECO was clearly needed.
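The limitation described above can be sketched in a few lines. This is a minimal illustration, not GIFCT’s actual implementation: real systems rely on perceptual hashes (such as PhotoDNA-style fingerprints) that tolerate re-encoding and cropping, whereas this sketch uses a plain cryptographic hash, and the sample byte strings are invented for the example.

```python
import hashlib

# Illustrative stand-in for a shared hash database of known TVECO.
# (Real deployments use perceptual hashing, not SHA-256.)
known_hashes = {
    hashlib.sha256(b"previously flagged video bytes").hexdigest(),
}

def is_known_tveco(upload: bytes) -> bool:
    """Return True if the upload's fingerprint matches a stored hash."""
    return hashlib.sha256(upload).hexdigest() in known_hashes

# An exact re-upload of flagged content matches the database...
print(is_known_tveco(b"previously flagged video bytes"))   # True
# ...but novel content, like a live stream, produces a hash the
# database has never seen -- the gap exposed at Christchurch.
print(is_known_tveco(b"never-before-seen live stream"))    # False
```

The design point is that hash matching can only recognize content it has fingerprinted before; by construction it says nothing about material appearing for the first time.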
On July 24, 2019, GIFCT introduced a joint content incident protocol for responding to emerging or ongoing events like the Christchurch attack. Alongside the protocol, the forum doubled the volume of “hashes” in its database, from 100,000 in 2018 to 200,000 in 2019, and based its terrorist-content taxonomy on the UN Terrorist Sanctions lists. The protocol removes posts from a company’s own service via a triage system, though the source link and hosted content remain intact on third-party platforms. Primary research materials on radical Islam thus remain available to researchers studying terrorism, while the protocol denies the same access to terrorists and people vulnerable to recruitment. Furthermore, GIFCT is developing a cross-platform counter-violent-extremism toolkit in cooperation with the London-based think tank Institute for Strategic Dialogue. The toolkit assists civil society organizations in developing online campaigns and counternarratives to challenge extremist ideologies.
The response from big tech companies to the Christchurch attack remains far more reactive than proactive. GIFCT continues to favor a supervised machine learning technique over an unsupervised approach for recognizing TVECO. The supervised method lets machines recognize known TVECO by matching uploads against existing “hashes” in GIFCT’s database. The unsupervised method instead processes unlabeled, uncategorized data: algorithms with no prior training search for patterns that could indicate TVECO by clustering similar data points. While the unsupervised technique is less accurate and less precise at sorting data than a supervised approach, it detects anomalies that are invisible to the supervised technique. The autonomous method extends supervised decision-making boundaries to better tackle unknown problems. Thus, the unsupervised method is a more challenging but more appropriate model for a robust industry approach to still-unknown TVECO.
Big tech companies continue to favor a reactive approach when confronting terrorist and violent extremist content online. While GIFCT’s joint content incident protocol is a step toward reducing exposure to extremist content across the internet, the new tool leaves out the threat posed by unknown TVECO. AI and ML algorithms hold far greater potential for tackling online extremist ideologies than the correlative approach currently in use. The tech giants possess sufficient human and artificial intelligence to produce high-quality anti-terror algorithms. To reach that standard, they could apply the same enthusiasm they bring to crafting rules for business-friendly decisions. If the unsupervised algorithm technique becomes a central part of GIFCT’s approach to TVECO, the big tech companies will have fully answered the Christchurch Call. Until then, the tech giants will continue to set their own moral compasses.