Another idiotic controversy over a Racket-commissioned Matt Orfalea video highlights a spiraling problem
When you know you’re being censored, you can protest. But what do you do about silent editorial punishment, dished out without announcement by tech platforms that appear to be learning fast how to avoid public outcry?
A year ago, this site had to throw a public fit to resolve a preposterous controversy involving videographer Matt Orfalea and YouTube. The issue centered on the above video, “‘Rigged’ Election Claims, Trump 2020 vs. Clinton 2016,” which despite total factual accuracy was flagged under YouTube’s “Elections Misinformation” policy. In July of last year, YouTube demonetized Orf’s entire channel over his content, saying “we think it violates our violent criminal organizations policy.”
As you will see if you click now, the above video, as I argued to Google, could not possibly violate any “misinformation” guideline, since it consisted entirely of “real, un-altered clips of public figures making public comments.” After both Orf and I tantrumed in public (there’s not much else to do in these situations), YouTube sent Matt the “Great News!” that “after manually reviewing your video, we’ve determined that it is suitable for all advertisers”:
We thought the matter was settled.
This week, Orf discovered the video had been re-classified as problematic by a new “human reviewer,” who declared it in violation, citing “harmful or dangerous acts” that “may endanger participants.” Potential problems, the reviewer determined, included “glorification, recruitment, or graphic portrayal of dangerous organizations,” by which I can only presume they mean former Bernie voters like Orf and me, whose political homelessness apparently constitutes a threat.
I’ve once again sent complaints up the Google/YouTube flagpole. Perhaps Racket readers are tired of digital censorship tales. If so, I understand; I do. But I want to underscore that the chief reason for sharing incidents like this now is to show the rapid progression of tactics being used not just against this site or Orf, but against everyone.
In the last 6-8 months — hell, the last 2-3 months — the landscape for non-corporate media businesses has tightened dramatically. Independent media content is increasingly hard to find via platform searches, even when exact terminology, bylines, or dates are entered by users. Social media platforms that once provided effective marketing and distribution at little to no cost are now difficult to navigate even with the aid of paid boosting tools. In other words, even if your business does well enough to pay full retail rates for marketing, a widening lattice of algorithmic restriction across platforms is making distribution for non-corporate media a nightmare anyway.
It’s an unfortunate coincidence that this situation involving Orf arrives just as Racket is preparing a story about new techniques deployed in recent months to reclassify even true, non-violative content as misinformation. Like this affair, that coming story touches on a phenomenon we saw repeatedly in the Twitter Files but didn’t explore in detail at the time: the use of deamplification and “visibility filtering” as PR-friendly alternatives to outright bans.
This episode with Orf represents a crack in the system: the user isn’t formally notified of a demonetization or deamplification decision, but somehow learns of it anyway. How often is this happening to users who never find out? And are these tools being used pre-emptively, for certain topics? There is still so much we need to learn about how access to information is being controlled.
In the meantime, will YouTube do the right thing and fix this particular idiocy? Even for your company, this shouldn’t be a hard call.
If the video above somehow meets your definition of “harmful or dangerous acts,” you’ve gone crazy, and rendered both of those terms meaningless in the process. If you disagree, could you at least explain your thinking, so the public can evaluate it?
Sincerely, the editor, etc.