Facebook announced Tuesday that it’s stepping up efforts to clean its platform of QAnon content:
Starting today, we will remove any Facebook Pages, Groups and Instagram accounts representing QAnon, even if they contain no violent content…
Facebook had already taken several rounds of action against QAnon, including the removal this summer of “over 1,500 Pages and Groups.” Restricting bans to groups featuring “discussions of potential violence” apparently didn’t do the trick, however, so the platform expanded bans to include content “tied to real world harm”:
Other QAnon content [is] tied to different forms of real world harm, including recent claims that the west coast wildfires were started by certain groups, which diverted attention of local officials from fighting the fires and protecting the public.
Describing what QAnon is, in a way that satisfies what its followers might say represents their belief system and separates out the censorship issue, is not easy. The theory is constantly evolving and not terribly rational. It’s also almost always described by mainstream outlets in terms that implicitly make the case for its banning, referencing concepts like “offline harm” or the above-mentioned “real-world harm” in descriptions. As you’re learning what QAnon is, you’re usually also learning that it is not tolerable or safe.
“QAnon was once a fringe phenomenon, the kind most people could safely ignore,” the New York Times wrote recently. “But in recent months, it’s gone mainstream.”
In rough terms, QAnon is a gospel spun by “Q,” ostensibly a current or former government official, who keeps the public apprised of an epic secret battle between good and evil, undertaken in political shadows. The villains are a globalist pedophile ring involving the mega-rich, Hollywood actors, and the Clintons (among many others), while Donald Trump leads the army of the righteous.
As Seventh-Day Adventists waited for the second coming, Q followers wait for the “storm,” a day when America-defenders led by Trump will round up the evildoers in a series of mass arrests.
The movement went into overdrive three years ago when Trump, in characteristically head-scratching fashion, appeared to tease the concept in front of baffled pool reporters.
“You know what this represents?” he asked. “The calm before the storm.”
“What storm, Mr. President?”
“You’ll find out.”
Q was launched three years ago, in the wake of the scandal involving its prequel movement, the Pizzagate theory about pedophile Democrats abusing kids in the basement of a Washington, D.C. pizzeria. Since that case there have been a series of incidents tied to Q followers that make the case for that “offline harm” reporters are always talking about. For instance, an Illinois woman named Jessica Prim traveled to New York in possession of a “dozen illegal knives,” apparently with a plan to kill Joe Biden. “Have you heard about the kids?” Prim supposedly asked through tears, while being arrested.
QAnon followers have been tied to a range of other acts, from the trivial (a man raising a Q flag above his Cornish castle) to the deadly serious (a 24-year-old accused of shooting and killing a Gambino mob figure in Staten Island). A chorus of people complaining they’ve lost friends and family members to the cult-like movement is among the most upsetting parts of the Q story. It’s not uncommon to hear about marriages thrown on the rocks after one member goes down the Internet rabbit hole and begins to see the other as “part of the narrative.”
For all this, the Q ban pulls the curtain back on one of the more bizarre developments of the Trump era, the seeming about-face of the old-school liberals who were once the country’s most faithful protectors of speech rights.
Bring up bans of QAnon or figures like Alex Jones (or even the suppression or removal of left-wing outlets like the World Socialist Web Site, teleSUR, or the Palestinian Information Centre) and you’re likely to hear that the First Amendment rights of companies like Facebook and Google are paramount. We’re frequently reminded there is no constitutional issue when private firms decide they don’t want to profit off the circulation of hateful, dangerous, and possibly libelous conspiracy theories.
That argument is easy to understand, but it misses the complex new reality of speech in the Internet era. It is true that the First Amendment only regulates government bans. However, what do we call a situation when the overwhelming majority of news content is distributed across a handful of tech platforms, and those platforms are — openly — partners with the federal government, and law enforcement in particular?
In my mind, this argument became complicated in 2017, when the Senate Intelligence Committee dragged Facebook, Twitter, and Google to the Hill and essentially ordered them to come up with a “mission statement” explaining how they would prevent the “fomenting of discord.”
Platforms that previously rejected the idea they were in the editing business — “We are a tech company, not a media company,” said Mark Zuckerberg just a year before, in 2016, after meeting with the Pope — were soon agreeing to start working with Congress, law enforcement, and government-affiliated groups like the Atlantic Council. They pledged to target foreign interference, “discord,” and other problems.
Their decision might have been accelerated by a series of threats to increase regulation and taxation of the platforms, with Virginia Senator Mark Warner’s 23-page white paper in 2018 proposing new rules for data collection being just one example. Whatever the reason for the about-face, the tech companies now work with the FBI in what the Bureau calls “private sector partnerships,” which involve “strategic engagement… including threat indicator sharing.”
Does any of this make “private” bans of content a First Amendment issue? The answer I usually get from lawyers is “probably not,” but it’s not clear-cut. It doesn’t take much imagination to see how this could go sideways quickly, as the same platforms the FBI engages with often have records of working with security services to suppress speech in clearly inappropriate ways in other countries.
As far back as 2016, for instance, Israel’s Justice Minister Ayelet Shaked was saying that Facebook and Google were complying with up to “95 percent” of its requests for content deletion. The minister noted cheerfully that the rate of cooperation had just risen sharply. Here’s how Reuters described the sudden burst of enthusiasm on the part of the platforms to cooperate with the state:
Perhaps spurred by the minister’s threat to legislate to make companies open to prosecution if they host images or messages that encourage terrorism, their rate of voluntary compliance has soared from 50 percent in a year, she said.
Whether or not one views Internet bans as censorship or a First Amendment issue really depends on how much one buys concepts like “voluntary compliance.”
The biggest long-term danger in all of this has always centered on the unique situation of media distribution now being concentrated in the hands of such a relatively small number of companies. Instead of breaking up these oligopolies, or finding more transparent ways of dealing with speech issues, there exists now a temptation for governments to leave the power of these opaque behemoth companies intact, and appropriate their influence for their own ends.
As we’ve seen abroad, a relatively frictionless symbiosis can result: the platforms keep making monster sums, while security services, if they can wriggle inside the tent of these distributors, have an opportunity to control information in previously unheard-of ways. Particularly in a country like the United States, which has never had a full-time federal media regulator, such official leverage would represent a dramatic change in our culture. As one law professor put it to me when I first started writing about the subject two years ago, “What government doesn’t want to control what news you see?”
The sheer scale of the logistical task involved with sorting through billions of pieces of content a day makes any hope of even-handed moderation a fantasy. Once companies go down the road of quashing “harm,” there are really only two possible outcomes: an ever-expanding game of speech Whac-a-Mole, or a double standard. In the best-case scenario, companies like Facebook will be relying upon a combination of AI and human subject-matter experts to answer questions like “What is journalistically true?” and “What is dangerous?” across far more material than anyone could responsibly review.
Inevitably that means relying upon the credentials of these would-be impartial judges, a problem given that many of the most “reputable” news agencies and authorities have themselves engaged in conspiratorial thinking, falsehoods, or deceptions.
Take a pair of recent moderation decisions. In one, Twitter suspended an account that posted a clip of Joe Biden saying “Jeez, the reason I was able to stay sequestered in my home is because some Black woman was able to stack the grocery shelf.”
The clip was real, but it left out context — Biden wasn’t speaking about himself, but about an America that is realizing during the crisis who exactly is doing the work to keep society functioning. Publishing the clip was misleading, and I understand the logic of pulling it down.
However, one could say the same thing about a hundred other recent stories that were not policed in the same way, including a similar bait-and-switch involving Melania Trump. A tape of the First Lady saying “Give me a fucking break,” allegedly about border kids, turned out to be something else, even opposite, in context — she was actually swearing at her media critics, not children. But once sanctified by the New York Times and reporters from other major outlets, many of whom dumped the quote online without the context, it’s unlikely to be pulled.
In secretly recorded audio, Melania Trump says about reporters asking her about kids separated at border: "Give me a fucking break." She assails the "liberal media," says she doesn't want to do a story on Fox, and adds, "Who gives a fuck about Christmas stuff and decorations?" pic.twitter.com/Ia2U2WmzN5
— Oliver Darcy (@oliverdarcy) October 2, 2020
That’s not to stress the victimhood of Melania Trump (Hillary Clinton has been the target of far more nastiness), but to note how arbitrary this all is. “Reputable” news sources tend not to get dinged for the same behavior as small accounts, even when they’re running transparent conspiratorial gibberish. Tales of Cuban sound weapons, killer Putin-dolphins, or a recent Washington Post warning that we are being targeted by a foreign campaign of “perception hacking,” or “manipulating people into thinking they are being manipulated,” represent just a few examples of Approved™ nonsense.
With this new, non-transparent, private star-chamber-type system, what content we do and do not see is now dependent upon upper-class intellectual fashions and the whims of politicians, media employees, and executives at tech firms.
It’s not hard to imagine a scenario where a whole range of left-wing sites are declared to be “tied to” real-world incidents of arson or anti-police violence (just think of how speech tied to this summer’s protests might have been viewed by a different group of watchers), and shut down. Progressives tempted to scoff at this should note that Facebook has already taken down hundreds of sites tied to “militia organizations and those encouraging riots, including some who may identify as Antifa.”
The argument about speech restrictions and political violence isn’t new. Go back to any troubled time in our history, like, say, the Haymarket bombing of 1886, and you’ll find government efforts to define as a rioter anyone who “inflames people’s minds” (the state even banned the use of the color red in advertising after that incident). Traditionally it’s been working-class movements that are the targets of such laws. This is why the liberal position has always been to try to avoid censorship, knowing that once society has the tools to clamp down, mission creep sets in, and it reaches for bans more and more instead of making difficult choices.
The main difference now is that the Internet makes the prohibition route even easier. Another difference is that political progressives have somehow been convinced this system has been set up on their behalf, and that a faraway group of tech execs and FBI agents will keep it that way in perpetuity.
Nobody raised in the modern media system can be a true speech absolutist. We already navigate a series of complex speech restrictions before publication, worrying about libel, defamation, incitement to “imminent” lawless action, and numerous other tests (including simply avoiding inaccuracy for commercial or reputation reasons). Accusing specific politicians and other figures of being involved in child sex rings is pretty much the definition of what responsible media people have always been taught is forbidden speech.
Still, we had a rational, transparent, litigation-based system for dealing with those issues, one that separated what the courts called “mere advocacy” from actual harm, a concrete thing that had to be proved. Moreover, there was no possibility of politicians or law enforcement being involved with speech regulation at any level.
This current system is the worst of all worlds. It’s invisible to the public, clearly invites government recommendations on speech, allows a gameable system of anonymous complaints to influence content, and gives awesome power to an unelected, unaccountable body of private media regulators. Whatever the right method is for dealing with dangerous content in the Internet era — and it’s clear we need a better one — this isn’t it.