Musk’s ‘Twitter Files’ offer a glimpse of the raw, complicated and thankless task of moderation

Elon Musk, Twitter’s new owner, is feverishly promoting his “Twitter Files”: selected internal communications laboriously tweeted out by sympathetic amanuenses. But Musk’s apparent conviction that he has released some partisan kraken is mistaken. Far from evidence of conspiracy or systemic abuse, the files are a valuable peek behind the curtain of moderation at scale, hinting at the Sisyphean labors performed by every social media platform.

For a decade, companies like Twitter, YouTube and Facebook have performed an elaborate dance to keep the details of their moderation processes out of reach of regulators and bad actors alike.

To reveal too much would be to expose the processes to abuse by spammers and scammers (who do indeed take advantage of every detail published or leaked), while to reveal too little leads to damaging reports and rumors as the companies lose control of the narrative. Meanwhile, they must be ready to justify and document their methods or risk censure and fines from government bodies.

The result is that everyone knows a little, but no one knows a lot, about how these companies filter, arrange and organize the content posted on their platforms.

Sometimes there are exposés of the methods we suspected: hourly contractors clicking through violent and sexual imagery, an abhorrent but apparently necessary industry. Sometimes the companies overplay their hands, as with repeated claims that AI is revolutionizing moderation, followed by reports that AI systems are inscrutable and unreliable.

What almost never happens, because the companies rarely have the option, is that the actual tools and processes of content moderation at scale are exposed with no filter. That is what Musk has done, perhaps to his own peril, but certainly to the benefit of anyone who has ever wondered what moderators actually do, say and click as they make decisions that may affect millions.

Pay no attention to the honest, complex conversation behind the curtain

The email chains, Slack conversations and screenshots released over the last week offer a glimpse of this important and poorly understood process. What we see is a little of the raw material, which is not the partisan illumination some expected, though it is clear from its highly selective presentation that this is what we are meant to perceive.

Far from it. The people involved are by turns cautious and confident, practical and philosophical, outspoken and accommodating, showing that a decision to limit or ban something is not made arbitrarily but according to an evolving consensus of opposing viewpoints.

(Update: Moments after this was posted, a new thread appeared. It was more of the same: earnest discussions of complex issues, in coordination and cooperation with experts, law enforcement and others.)

Despite the bombshell packaging of the documents, nothing in them insinuates partisanship or conspiracy behind the decision to temporarily limit the Hunter Biden laptop story, probably the most contentious moderation call of recent years, the banning of Trump aside.

Instead, we find thoughtful, serious people attempting to reconcile conflicting and inadequate policies: What counts as “hacked” material? How confident are we in this or that assessment? What is a proportionate response? How do we communicate it, to whom and when? What are the consequences if we don’t act? What precedents do we set or break?

These questions have no easy answers; they are the stuff of hours of research and debate, yet they had to be answered quickly, before the situation got out of hand. Dissent from within and without (from a U.S. representative, no less, ironically doxxed in the thread along with Jack Dorsey in contravention of the very same policy) was considered and honestly integrated.

“This is an emerging situation where the facts remain unclear,” said former Trust and Safety head Yoel Roth. “We’re erring on the side of including a warning and preventing this content from being amplified.”

Some question the decision. Some question the facts as presented. Others say the action is not supported by their reading of the policy. One suggests making the ad hoc basis and extent of the action explicit, since it will plainly be scrutinized as a partisan one. Deputy General Counsel Jim Baker calls for more information, but notes that caution is warranted: there is no obvious precedent, the facts are absent or unverified at this point, and some of the material is clearly nonconsensual nude imagery.

Rep. Ro Khanna concedes that Twitter has latitude to limit what it amplifies or includes in trending topics, but he argues that the current action goes too far. It is a difficult balance to strike.

These conversations were never meant for the press or the public; we are as much in the dark as anyone. It would be wrong to claim that the published materials are a complete or even accurate representation of the whole process (they are blatantly, if ineffectively, cherry-picked to fit a narrative), but even so we are better informed than we were before.

Tools of the trade

The next thread was, if anything, more revealing, featuring screenshots of the actual moderation tools used by Twitter employees. Although the thread tries to frame the use of these tools as shadow banning, the screenshots show nothing nefarious, and they don’t need to in order to be interesting.

Image Credits: Twitter

On the contrary, what is shown is compelling precisely because it is so mundane, so blandly systematic. Here are the various techniques every social media company has described, previously in cheery diplomatic cant, now presented without comment: “Trends Blacklist,” “High Profile,” “DO NOT ACT” and the rest.

Meanwhile, Yoel Roth explains that actions and policies need to be better aligned, that more research is required and that plans are underway to improve:

If exposure to misinformation directly causes harm, the remediations used should reduce that exposure; but a more robust case needs to be made for including this in the repertoire of policy remediations, especially for other policy areas.

Again, the content is unremarkable in context. These are not the deliberations of a secret liberal cabal lashing out at its ideological enemies with ban hammers. It’s an enterprise-grade dashboard, like something you might see for lead tracking, logistics or accounts, being discussed and iterated on by sober-minded people working within practical limitations and aiming to satisfy multiple stakeholders.

As it should be: Twitter, like other social media platforms, has worked for years to make moderation efficient and systematic enough to function at scale. Not just to keep the platform from being overrun by spam and bots, but to comply with legal frameworks like FTC orders and the GDPR. (Of which the “extensive, unfiltered access” outsiders were granted to the pictured tool may well constitute a breach; the relevant authorities told TechCrunch they are “engaging” with Twitter on the matter.)

A handful of employees making arbitrary decisions with no rubric or oversight is no way to moderate effectively or to meet such legal requirements; neither (as the resignation of several members of Twitter’s Trust & Safety Council today attests) is automation. You need a large network of people cooperating and working according to a standardized system, with clear boundaries and escalation procedures. And that is certainly what the screenshots Musk has published appear to show.

What the documents do not show is any sort of systematic bias, which Musk’s stand-ins insinuate but fail to substantiate. But whether or not it fits the narrative they want, the material will interest anyone who thinks these companies should be more forthcoming about their policies. That’s a win for transparency, even if Musk’s opaque approach achieves it more or less by accident.
