THE alchemy of modern media works with amazing speed. Start
with a cheesy anti-Muslim video that resembles a bad trailer for a Sacha Baron
Cohen comedy. It becomes YouTube fuel for protest across the Islamic world and a
pretext for killing American diplomats. That angry spasm begets an inflammatory
Newsweek cover, “MUSLIM RAGE,” which in turn inspires a Twitter hashtag that
reduces the whole episode to a running joke:
“There’s no prayer room in this nightclub. #MuslimRage.”
“You lose your nephew at the airport but you can’t yell his name because it’s
JIHAD. #MuslimRage.”
From provocation to trauma to lampoon in a few short news cycles. It’s over in a
week, forgotten in two. Now back to Snooki and Honey Boo Boo.
Except, of course, it’s far from over. It moves temporarily off-screen, and then
it is back: the Pakistani retailer accused last week of “blasphemy” because he
refused to close his shops during a protest against the video; France locking
down diplomatic outposts in about 20 countries because a Paris satirical
newspaper has published new caricatures of the prophet.
It’s not really over for Salman Rushdie, whose new memoir recounts a decade
under a clerical death sentence for the publication of his novel “The Satanic
Verses.” That fatwa, if not precisely the starting point in our modern
confrontation with Islamic extremism, was a major landmark. The fatwa was
dropped in 1998 and Rushdie is out of hiding, but he is still careful. His book
tour for “Joseph Anton” (named for the pseudonym he used in his clandestine
life) won’t be taking him to Islamabad or Cairo.
Rushdie grew up in a secular Muslim family, the son of a scholar of Islam. His
relationship to Islam was academic, then literary, before it became
excruciatingly personal. His memoir is not a handbook on how America should deal
with the Muslim world. But he brings to that subject a certain moral authority
and the wisdom of an unusually motivated thinker. I invited him to help me draw
some lessons from the stormy Arab Summer.
The first and most important thing Rushdie will tell you is, it’s not about
religion. Not then, not now.
When the founding zealot of revolutionary Iran, Ayatollah Khomeini, issued his
Rushdie death warrant in 1989, the imam was not defending the faith; he was
trying to regenerate enthusiasm for his regime, sapped by eight years of
unsuccessful war with Iraq. Likewise, Muslim clerics in London saw the fatwa
against a British Indian novelist as an opportunity to arouse British Muslims,
who until that point were largely unstirred by sectarian politics. “This case
was a way for the mosque to assert a kind of primacy over the community,” the
novelist said the other day. “I think something similar is going on now.”
It’s pretty clear that the protests against that inane video were not
spontaneous. Antisecular and anti-American zealots, beginning with a Cairo TV
personality whose station is financed by Saudi fundamentalists, seized on the
video as a way to mobilize pressure on the start-up governments in Egypt,
Tunisia and Libya. The new governments condemned the violence and called in
police to protect American diplomatic outposts, but not before a good bit of
nervous wobbling.
(One of the principal goals of the extremists, I was reminded by experts at
Human Rights First, who follow the region vigilantly, is to pressure these
transitional governments to enact and enforce strict laws against blasphemy.
These laws can then be used to purge secularists and moderates.)
Like the fanatics in the Middle East and North Africa, our homegrown hatemongers
have an interest in making this out to be a great clash of faiths. The
Islamophobes — the fringe demagogues behind the Koran-burning parties and that
tawdry video, the more numerous (mainly right-wing Republican) defenders against
the imaginary encroachment of Islamic law on our domestic freedom — are easily
debunked. But Islamophobia is the closest thing we have to a socially acceptable form of
bigotry. And their rants feed the anti-American opportunists.
Rushdie acknowledges that there are characteristics of Islamic culture that make
it tinder for the inciters: an emphasis on honor and shame and, in recent
decades, a paranoiac sense that the world is conspiring against Muslims. We can argue who
is more culpable — the hostile West, the sponsors, the appeasers, the fanatics
themselves — but Islam has been particularly susceptible to the rise of identity
politics, Rushdie says. “You define yourself by what offends you. You define
yourself by what outrages you.”
But blaming Islamic culture dismisses the Muslim majorities who are not enraged,
let alone violent, and it leads to a kind of surrender: Oh, it’s just the
Muslims, nothing to be done. I detect a whiff of this cultural fatalism in Mitt
Romney’s patronizing remarks about the superiority of Israeli culture and the
backwardness of Palestinian culture. That would explain his assertion, on that
other notorious video, that an accommodation with the Palestinians is “almost
unthinkable.” That’s a strangely defeatist line of thought for a man who
professes to be an optimist and a problem-solver.
Romney and Rushdie are a little more in tune when it comes to the question of
whether to mollify the tender feelings of irate Muslims.
In his new book, Rushdie recounts being urged by the British authorities who
were protecting him to “lower the temperature” by issuing a statement that could
be taken for an apology. He does so. It fills him almost immediately with
regret, and the attacks on him are unabated. He “had taken the weak position and
was therefore treated as a weakling,” he writes.
Of the current confrontation, he says, “I think it’s very important that we hold
our ground. It’s very important to say, ‘We live like this.’ ” Rushdie made his
post-fatwa life in America in part because he reveres its freedoms, including
the freedom, not so protected in other Western democracies, to say hateful,
racist, blasphemous things.
“Terrible ideas, reprehensible ideas, do not disappear if you ban them,” he told
me. “They go underground. They acquire a kind of glamour of taboo. In the harsh
light of day, they are out there and, like vampires, they die in the sunlight.”
And so he would have liked a more robust White House defense of the rights that
made the noxious video possible.
“It’s not for the American government to regret what American citizens do. They
should just say, ‘This is not our affair and the [violent] response is
completely inappropriate.’ ”
I would cut the diplomats a little more slack when they are trying to defuse an
explosive situation. But I agree that the administration pushed up against the
line that separates prudence from weakness. And the White House request that
Google consider taking down the anti-Muslim video, however gentle the nudge, was
a mistake.
By far the bigger mistake, though, would be to write off the aftermath of the
Arab Spring as a lost cause.
It is fairly astounding to hear conservatives who were once eager to invade Iraq
— ostensibly to plant freedom in the region — now giving up so quickly on
fledgling democracies that might actually be won over without 10 bloody years of
occupation. Or lamenting our abandonment of that great stabilizing autocrat
Hosni Mubarak. Or insisting that we bully and blackmail the new governments to
conform to our expectations.
These transition governments present an opportunity. Fortifying the democratic
elements in the post-Arab Spring nation-building, without discrediting them as
American stooges, is a delicate business. The best argument we have is not our
aid money, though that plays a part. It is the choice between two futures,
between building and failing to build a rule of law, an infrastructure of rights,
and an atmosphere of tolerance. One future looks something like Turkey,
prospering, essentially secular and influential. The other future looks a lot
like Pakistan, a land of fear and woe.
We can’t shape the Islamic world to our specifications. But if we throw up our
hands, if we pull back, we now have a more vivid picture of what will fill the
void.
COMPANIES are usually accountable to no one but their shareholders.
Internet companies are a different breed. Because they traffic in speech —
rather than, say, corn syrup or warplanes — they make decisions every day about
what kind of expression is allowed where. And occasionally they come under
pressure to explain how they decide, on whose laws and values they rely, and how
they distinguish between toxic speech that must be taken down and that which can
remain.
The storm over an incendiary anti-Islamic video posted on YouTube has stirred
fresh debate on these issues. Google, which owns YouTube, restricted access to
the video in Egypt and Libya, after the killing of a United States ambassador
and three other Americans. Then, it pulled the plug on the video in five other
countries, where the content violated local laws.
Some countries blocked YouTube altogether, though that didn’t stop the
bloodshed: in Pakistan, where elections are expected soon, riots on Friday left
19 people dead.
The company pointed to its internal edicts to explain why it rebuffed calls to
take down the video altogether. The video did not meet YouTube’s definition of
hate speech, the company said, and so it was allowed to stay up on the Web.
YouTube said very little more.
That explanation revealed not only the challenges that confront companies like
Google but also how opaque they can be in explaining their verdicts on what can
be said on their platforms. Google, Facebook and Twitter receive hundreds of
thousands of complaints about content every week.
“We are just awakening to the need for some scrutiny or oversight or public
attention to the decisions of the most powerful private speech controllers,”
said Tim Wu, a Columbia University law professor who briefly advised the Obama
administration on consumer protection regulations online.
Google was right, Mr. Wu believes, to selectively restrict access to the crude
anti-Islam video in light of the extraordinary violence that broke out. But he
said the public deserved to know more about how private firms made those
decisions in the first place, every day, all over the world. After all, he
added, they are setting case law, just as courts do in sovereign countries.
Mr. Wu offered some unsolicited advice: Why not set up an oversight board of
regional experts or serious YouTube users from around the world to make the
especially tough decisions?
Google has not responded to his proposal, which he outlined in a blog post for
The New Republic.
Certainly, the scale and nature of YouTube make this a daunting task. Any
analysis requires combing through over a billion videos and weighing them
against the laws and mores of different countries. It’s unclear whether expert
panels would allow for unpopular minority opinion anyway. The company said in a
statement on Friday that, like newspapers, it, too, made “nuanced” judgments
about content: “It’s why user-generated content sites typically have clear
community guidelines and remove videos or posts that break them.”
Privately, companies have been wrestling with these issues for some time.
The Global Network Initiative, a conclave of executives, academics and
advocates, has issued voluntary guidelines on how to respond to government
requests to filter content.
And the Anti-Defamation League has convened executives, government officials and
advocates to discuss how to define hate speech and what to do about it.
Hate speech is a pliable notion, and there will be arguments about whether it
covers speech that is likely to lead to violence (think Rwanda) or demeans a
group (think Holocaust denial), just as there will be calls for absolute free
expression.
Behind closed doors, Internet companies routinely make tough decisions on
content.
Apple and Google earlier this year yanked a mobile application produced by
Hezbollah. In 2010, YouTube removed links to speeches by an American-born
cleric, Anwar al-Awlaki, in which he advocated terrorist violence; at the time,
the company said it proscribed posts that could incite “violent acts.”
ON rare occasions, Google has taken steps to educate users about offensive
content. For instance, the top results that come up when you search for the word
“Jew” include a link to a virulently anti-Jewish site, followed by a promoted
link from Google, boxed in pink. It links to a page that lays out Google’s
rationale: the company says it does not censor search results, despite
complaints.
Susan Benesch, who studies hate speech that incites violence, said it would be
wise to have many more explanations like this, not least to promote debate.
“They certainly don’t have to,” said Ms. Benesch, director of the Dangerous
Speech Project at the World Policy Institute. “But we can encourage them to
because of the enormous power they have.”
The companies point out that they obey the laws of every country in which they
do business. And their employees and algorithms vet content that may violate
their user guidelines, which are public.
YouTube prohibits hate speech, which it defines as speech that “attacks or
demeans a group” based on its race, religion and so on; Facebook’s hate speech
ban likewise covers “content that attacks people” on the basis of identity.
Twitter, by contrast, does not explicitly ban hate speech.
And anyway, legal scholars say, it is exceedingly difficult to devise a
universal definition of hate speech.
Shibley Telhami, a political scientist at the University of Maryland, said he
hoped the violence over the video would encourage a nuanced conversation about
how to reconcile free expression with other values, like public safety. “It’s
really about at what point speech becomes action; that’s a boundary that
becomes difficult to draw, and it’s a slippery slope,” Mr. Telhami said.
He cautioned that some countries, like Russia, which threatened to block YouTube
altogether, would be thrilled to have any excuse to squelch speech. “Does Russia
really care about this film?” Mr. Telhami asked.
International law does not protect speech that is designed to cause violence.
Several people have been convicted in international courts for incitement to
genocide in Rwanda.
One of the challenges of the digital age, as the YouTube case shows, is that
speech articulated in one part of the world can spark mayhem in another. Can the
companies that run those speech platforms predict what words and images might
set off carnage elsewhere? Whoever builds that algorithm may end up saving
lives.
SAN FRANCISCO — As violence spread in the Arab world over a
video on YouTube ridiculing the Prophet Muhammad, Google, the owner of YouTube,
blocked access to it in two of the countries in turmoil, Egypt and Libya, but
did not remove the video from its Web site.
Google said it decided to block the video in response to violence that killed
four American diplomatic personnel in Libya. The company said its decision was
unusual, made because of the exceptional circumstances. Its policy is to remove
content only if it is hate speech that violates its terms of service, or in
response to valid court orders or government requests. And it said it had
determined that under its own guidelines, the video was not hate speech.
Millions of people across the Muslim world, though, viewed the video as one of
the most inflammatory pieces of content to circulate on the Internet. From
Afghanistan to Libya, the authorities have been scrambling to contain an
outpouring of popular outrage over the video and calling on the United States to
take measures against its producers.
Google’s action raises fundamental questions about the control that Internet
companies have over online expression. Should the companies themselves decide
what standards govern what is seen on the Internet? How consistently should
these policies be applied?
“Google is the world’s gatekeeper for information so if Google wants to define
the First Amendment to exclude this sort of material then there’s not a lot the
rest of the world can do about it,” said Peter Spiro, a constitutional and
international law professor at Temple University in Philadelphia. “It makes this
episode an even more significant one if Google broadens the block.”
He added, though, that “provisionally,” he thought Google made the right call.
“Anything that helps calm the situation, I think is for the better.”
Under YouTube’s terms of service, hate speech is speech that attacks people,
not speech that attacks a religion or its ideas. Because the video mocks Islam
but not Muslim people, it has been
allowed to stay on the site in most of the world, the company said Thursday.
“This video — which is widely available on the Web — is clearly within our
guidelines and so will stay on YouTube,” it said. “However, given the very
difficult situation in Libya and Egypt we have temporarily restricted access in
both countries.”
Though the video is still visible in other Arab countries where violence has
flared, YouTube is closely monitoring the situation, according to a person
briefed on YouTube’s decision-making who was not authorized to speak publicly.
The Afghan government has asked YouTube to remove the video, and some Google
services were blocked there Thursday.
Google is walking a precarious line, said Kevin Bankston, director of the free
expression project at the Center for Democracy and Technology, a nonprofit in
Washington that advocates for digital civil liberties.
On the one hand, he said, blocking the video “sends the message that if you
violently object to speech you disagree with, you can get it censored.” At the
same time, he said, “the decision to block in those two countries specifically
is kind of hard to second guess, considering the severity of the violence in
those two areas.”
“It seems they’re trying to balance the concern about censorship with the threat
of actual violence in Egypt and Libya,” he added. “It’s a difficult calculation
to make and highlights the difficult positions that content platforms are
sometimes put in.”
All Web companies that allow people to post content online — Facebook and
Twitter as well as Google — have grappled with issues involving content. The
questions are complicated by the fact that the Internet has no geographical
boundaries, so companies must navigate a morass of laws and cultural mores. Web
companies receive dozens of requests a month to remove content. Google alone
received more than 1,965 requests from government agencies last year to remove
at least 20,311 pieces of content, it said.
These included a request from a Canadian government office to remove a video of
a Canadian citizen urinating on his passport and flushing it down the toilet,
and a request from a Pakistan government office to remove six videos satirizing
Pakistani officials. In both cases, Google refused to remove the videos.
But it did block access in Turkey to videos that exposed private details about
public officials because, in response to Turkish government and court requests,
it determined that they violated local laws.
Similarly, in India it blocked local access to some videos of protests and those
that used offensive language against religious leaders because it determined
that they violated local laws prohibiting speech that could incite enmity
between communities.
Requests for content removal from government agencies and courts in the United
States doubled over the course of last year, to 279 requests to remove 6,949
items, according to
Google. Members of Congress have publicly requested that YouTube take down
jihadist videos they say incite terrorism, and in some cases YouTube has agreed.
Google has consistently fallen back on its guidelines, removing only content
that breaks laws or its terms of service, and only at the request of users,
governments or courts, which is why blocking the anti-Islam video was
exceptional.
Some wonder what precedent this might set, especially for government authorities
keen to stanch expression they think will inflame their populace.
“It depends on whether this is the beginning of a trend or an extremely
exceptional response to an extremely exceptional situation,” said Rebecca
MacKinnon, co-founder of Global Voices, a network of bloggers worldwide, and
author of “Consent of the Networked,” a book that addresses free speech in the
digital age.