The Central Scrutinizers

Star Chamber

The big jury story making the rounds right now is the saga of juror #52 in the Derek Chauvin trial for the murder of George Floyd.

I’m more interested in a very different jury.

It’s called “The Oversight Board” and its decisions are final: there is no appeal.

It’s Facebook’s Star Chamber, and DR makes an offhanded reference to it in a story about Donald Trump’s recent announcement about a forthcoming new platform of his own.

The council, which has been referred to as Facebook’s Supreme Court, should have announced its decision [on whether to reinstate Donald Trump’s Facebook account] last month, but postponed the decision to take a closer look at around 9,000 inquiries from the public.

The council consists of a number of paid journalists, human rights activists, lawyers and academics – including the former Danish Prime Minister Helle Thorning-Schmidt.

If you’re anything but doggedly left of center in your outlook, then the idea of a bunch of journalists, activists, lawyers, and academics holding the keys to the digital kingdom ought to make your blood run cold.

But before we get into that, I should note that the Star Chamber issued its opinion today, after the DR article was published.

Here’s their own summary of the decision:

The Board has upheld Facebook’s decision on January 7, 2021, to restrict then-President Donald Trump’s access to posting content on his Facebook page and Instagram account.

However, it was not appropriate for Facebook to impose the indeterminate and standardless penalty of indefinite suspension. Facebook’s normal penalties include removing the violating content, imposing a time-bound period of suspension, or permanently disabling the page and account.

The Board insists that Facebook review this matter to determine and justify a proportionate response that is consistent with the rules that are applied to other users of its platform. Facebook must complete its review of this matter within six months of the date of this decision. The Board also made policy recommendations for Facebook to implement in developing clear, necessary, and proportionate policies that promote public safety and respect freedom of expression.

In describing the case at hand, they cite the president’s two Facebook posts of January 6th (my emphases):

At 4:21 pm Eastern Standard Time, as the riot continued, Mr. Trump posted a video on Facebook and Instagram:

I know your pain. I know you’re hurt. We had an election that was stolen from us. It was a landslide election, and everyone knows it, especially the other side, but you have to go home now. We have to have peace. We have to have law and order. We have to respect our great people in law and order. We don’t want anybody hurt. It’s a very tough period of time. There’s never been a time like this where such a thing happened, where they could take it away from all of us, from me, from you, from our country. This was a fraudulent election, but we can’t play into the hands of these people. We have to have peace. So go home. We love you. You’re very special. You’ve seen what happens. You see the way others are treated that are so bad and so evil. I know how you feel. But go home and go home in peace.

Eighty minutes later, Facebook removed that post for “violating its Community Standard on Dangerous Individuals and Organizations.”


At 6:07 pm Eastern Standard Time, as police were securing the Capitol, Mr. Trump posted a written statement on Facebook:

These are the things and events that happen when a sacred landslide election victory is so unceremoniously viciously stripped away from great patriots who have been badly unfairly treated for so long. Go home with love in peace. Remember this day forever!

Eight minutes later, Facebook nuked this post for violating the same “Community Standard on Dangerous Individuals and Organizations.” And it blocked the man who was still the president of the United States from posting on Facebook or Instagram for 24 hours.

In two posts, he used the phrase “go home” five times. He said we have to have peace. He said we have to have law and order. He said we don’t want anybody hurt.

In the upside-down world in which we live, this violates the “Community Standard on Dangerous Individuals and Organizations.”


Here’s one of the Star Chamber’s “key findings”:

The Board found that the two posts by Mr. Trump on January 6 severely violated Facebook’s Community Standards and Instagram’s Community Guidelines. “We love you. You’re very special” in the first post and “great patriots” and “remember this day forever” in the second post violated Facebook’s rules prohibiting praise or support of people engaged in violence.


I didn’t know that “praise or support of people engaged in violence” was a violation of Facebook’s Community Standards.

Now that I do know, I’m curious: shouldn’t Facebook have blocked any posts praising or supporting BLM or Antifa while various American cities were undergoing riots, arson, looting, and general mayhem last summer? And last month?

(It’s a rhetorical question.)

In February, Newsweek ran an article entitled “Facebook Oversight Board Rules in Emmanuel Macron ‘Devil’ Case, ‘Instructive’ in Deciding Trump Account.”

Here’s how the Star Chamber summarized that case (their italics):

A user posted a photo in a Facebook group, depicting a man in leather armor holding a sheathed sword in his right hand. The photo has a text overlay in Hindi that discusses drawing a sword from its scabbard in response to “infidels” criticizing the prophet. The photo includes a logo with the words “Indian Muslims” in English. The accompanying text, also in English, includes hashtags calling President Emmanuel Macron of France “the devil” and calling for the boycott of French products.

Here’s how the Star Chamber decided, according to that February article from Newsweek:

The Oversight Board explained that while the post in question was removed under its Violence and Incitement Community Standard, it noted that “considering the circumstances of the case, the majority of the Board did not believe that this post was likely to cause harm.”

“In conclusion, a majority found that, for this specific post, Facebook did not accurately assess all contextual information and that international human rights standards on expression justify the Board’s decision to restore the content,” the group wrote in its blog post.

Also fascinating.

I can think of someone who might disagree with the Star Chamber’s assessment. Her name is Stéphanie Monfermé.

Ring a bell?

Stéphanie was the French police officer and mother of two who was, while on her way to work, stabbed to death (in the stomach) by a man shouting Allahu Akbar. It happened just a couple of weeks ago in Rambouillet, France. She was laid to rest last Friday.

So sure, she might have some interesting observations on “all contextual information” related to Muslims calling for swords to be unsheathed against the infidels of France, but her death by a blade-wielding Muslim precludes the possibility of our hearing them.

Go home, go home, go home, go home, go home, says an American president. He says “you have to go home now. We have to have peace. We have to have law and order. We have to respect our great people in law and order. We don’t want anybody hurt.”

Star Chamber: Facebook was right to block his posts!

Some whack job in France glorifies the Muslim blade against infidels who criticize Islam while calling the French President the devil.

Star Chamber: Protected speech because, we don’t know, context or something. Also human rights about expression and stuff!

I don’t want to get bogged down in specifics (too late?). Let’s step back and have a look at the big picture. The problem isn’t what this one says or that one says, or what Facebook chooses to block or allow, or even what the Star Chamber of “journalists, activists, lawyers, and academics” (oh my!) may think Facebook did right or wrong.

The real problem is the idea of “community standards” that can be universally applied.

There’s no such animal.

Facebook’s own “Community Standards” section lays out their standards on hate speech. Let’s walk through the whole thing together:

We believe that people use their voice and connect more freely when they don’t feel attacked on the basis of who they are.

Probably true.

That’s why we don’t allow hate speech on Facebook.

The implication is that “hate speech” is speech that makes people feel attacked “on the basis of who they are.”

It creates an environment of intimidation and exclusion, and in some cases may promote offline violence.

…and because it creates a yucky environment and can lead to “offline violence.” (By which I have to assume they simply mean “violence,” since it’s impossible to physically hurt someone online, and violence is the act of physically hurting someone.)

We define hate speech as a direct attack against people on the basis of what we call protected characteristics: race, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity and serious disease.

A few sentences ago we were talking about attacking people on the basis of “who they are.” Now we’re talking about various human properties or characteristics, some of which are immutable (race, ethnicity), some of which are mutable (religious affiliation, gender identity), some of which are hopelessly ambiguous (which diseases are serious? who decides?).

We define attacks as violent or dehumanizing speech, harmful stereotypes, statements of inferiority, expressions of contempt, disgust or dismissal, cursing, and calls for exclusion or segregation.

Speech cannot be violent. It can be nasty, cruel, vicious, spiteful, disgusting, and it can include incitements to violence, but has everyone forgotten that sticks and stones may break our bones, but names can never hurt us? Was kindergarten really that long ago for so many people?

And one wouldn’t have to be much of a lawyer to find whole acres of wiggle room in “harmful stereotypes, statements of inferiority, expressions of contempt, disgust or dismissal,” or “cursing.” That “calls for exclusion or segregation” are considered “attacks” is interesting, but as we’ll see later there’s a pretty sweet loophole for that one.

We consider age a protected characteristic when referenced along with another protected characteristic.

“Old people suck” is therefore acceptable. “Old Christians suck,” “old women suck,” and “old Italians suck” are not.

We also protect refugees, migrants, immigrants and asylum seekers from the most severe attacks,

Who defines severity? On what parameters?

though we do allow commentary and criticism of immigration policies.

They “allow” commentary and criticism of public policy. That’s thoughtful.

Similarly, we provide some protections for characteristics like occupation, when they’re referenced along with a protected characteristic.

Which is presumably why “All Cops Are Bastards” is okay: it’s only a problem if you’re dehumanizing white cops, black cops, Irish cops, or Presbyterian cops.

We recognize that people sometimes share content that includes someone else’s hate speech to condemn it or raise awareness.

They acknowledge the existence of satire, irony, facetiousness, and context.

In other cases, speech that might otherwise violate our standards can be used self-referentially or in an empowering way.

Are there “empowering” ways of using hate speech that aren’t also self-referential? I ask seriously. I also ask: what does “empowering” mean?

But here’s the loophole for the growing segregationist lobby: you can call for segregation to empower your own group.

Although I doubt that’s a universally applicable loophole: for all the segregated graduations we’ve been seeing, I don’t think I’ve seen any instigated by whites demanding a graduation for whites only. I’m glad for that, but I don’t think ceremonies for blacks only, or Latinos only, or even low-income only, are any less odious.

Come together, people.

Our policies are designed to allow room for these types of speech,

(The original context makes it clear that by “these types of speech” they’re talking about the allowable exceptions.)

but we require people to clearly indicate their intent. If intention is unclear, we may remove content.

Ladies and gentlemen of the jury, this is why we have the First Amendment to the Constitution of the United States, and why, contra Chris Cuomo, there is no “hate speech” carve-out.

Either you have free speech or you don’t. It’s like virginity that way, or pregnancy.

You cannot define “hate speech” in any meaningful way because it is always, in every case, even those that seem the most obvious, entirely relative.

I’m not a lawyer, but I’m tolerably sure that even the most half-assed of first-year law students could use Facebook’s standards to justify the blocking of just about any post insulting just about anyone in just about any way.

It isn’t the “standards” that matter, but their enforcement. Enforcement requires enforcers. And unless Facebook has found the divine spark of perfect and infallible objectivity in all the frontline employees responsible for enforcing its standards (at an average salary of less than $29,000 per year), that’s always going to be a muddled mess of contradictions and inconsistencies.

Facebook has wisely assembled their Star Chamber to give their own fallibility a fig leaf of legitimacy, as though a small group of highly paid people will always be more perfect and less fallible than a larger group of lesser paid people.

The members of that Star Chamber, it should be remembered, do not go unrewarded for their troubles:

The tech giant pays 20 members of its 20-person Oversight Board, an independent committee that can overrule Facebook’s content moderation guidelines, “six-figure salaries for putting in about fifteen hours a week,” according to The New Yorker.

Do you believe in the moral and ethical infallibility of Helle Thorning-Schmidt and the other nineteen “journalists, human rights activists, lawyers and academics” of the Facebook Star Chamber?

If you’re on Facebook, your answer is yes.