The tragic unfolding of violence and protests at U.S. embassies around the world, in response to the hateful anti-Islamic film created by an individual in the U.S. and posted on YouTube in an edited form, raises the question of why we can’t do a better job of managing inflammatory and potentially dangerous content before it reaches such heights. Freedom of speech is an incredibly important right to protect, but social media channels now give hateful messaging far wider and faster exposure than was possible in the past.
A map of Muslim protests around the world in response to the film via @BusinessInsider
The responsibility for managing the availability of such content surely lies with its creator, but those in charge of the platforms themselves, such as YouTube, must also plan for the rising likelihood of such circumstances. Here is a three-step suggestion for how to handle this situation:
1. EARLY WARNING SOCIAL LISTENING SYSTEM: Provide an early monitoring system, much like an earthquake monitoring system, in which content that receives a sudden and negative response is flagged and categorized according to its subject matter. Parties directly responsible for the content, including its creator, the platform owner, and other relevant parties (political, institutional, corporate, or non-profit), are then informed of its existence and rising resonance.
2. CRISIS PROTOCOL: Beyond the obvious step of taking down the content and tracking its various iterations around the web, both regulatory bodies and clandestine groups such as Anonymous can be engaged to minimize the spread of the hateful content.
3. REDRESS STRATEGY: A redress strategy should be established, as exists with almost every large brand, that details, step by step, how to contain the damage. This includes PR efforts that range from apologies to a variety of content across all media channels that contains, contextualizes, and redresses the issues.
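The early-warning step above amounts to a spike detector: flag a piece of content when its negative response suddenly jumps above its recent baseline. As a minimal sketch of that idea, the thresholds, window size, and function names below are illustrative assumptions, not any real platform's API.

```python
from collections import deque


def make_spike_detector(window=6, ratio=3.0, min_count=50):
    """Return a function that flags sudden spikes in negative responses.

    Keeps a sliding window of recent per-hour negative-reaction counts
    and flags an observation when it is at least `ratio` times the
    window's average and clears a minimum absolute count. All three
    thresholds are illustrative assumptions.
    """
    history = deque(maxlen=window)

    def observe(negative_count):
        # Baseline is the average of the recent window (0 if no history yet).
        baseline = sum(history) / len(history) if history else 0
        spike = (
            negative_count >= min_count
            and baseline > 0
            and negative_count >= ratio * baseline
        )
        history.append(negative_count)
        return spike

    return observe


# Hypothetical usage: steady low counts, then a sudden surge.
detect = make_spike_detector()
print(detect(10))   # no history yet, not flagged
print(detect(12))   # within normal range, not flagged
print(detect(200))  # far above baseline, flagged for review
```

In a real system the flag would only trigger the notification step, routing the item to its creator, the platform owner, and relevant parties for human review rather than taking any automated action.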
Obviously this approach raises several questions, including:
1. At what point do the rights to freedom of speech and privacy supersede the explicit or implicit intent of a piece of content?
2. How do you define hateful and where do you draw the line?
3. Whose responsibility is it to monitor, moderate and manage such situations?
These questions still elude answers, but the volatility of international relations has thrown this issue into stark relief. Many factors beyond this film played into the embassy attacks, ranging from food prices and terrorism to frustrations surrounding the Arab Spring revolutions. Still, this situation demonstrates that inflammatory content must be carefully managed now that media is in the hands of citizens and customers, potentially putting innocent lives at stake.
Do you think it is appropriate to intercept hateful content on open social media channels? If so, whose responsibility is it to manage it?
Simon Mainwaring is the founder of We First, a leading brand consultancy that provides purpose-driven strategy, content, and training that empowers companies to lead business, shape culture, and better our world.