Twitter says it’s making progress on its plan to review its verification system, which it’s conducting in the wake of backlash over the social network verifying the account of a white supremacist rally organizer. In a series of tweets today, the Twitter Support account acknowledges that verification comes across as endorsement, and that the social network’s treatment of verification has fueled that perception. As a result, it’s changing how it treats verification and will unverify some users “whose behavior does not fall within” its new guidelines.
Twitter announced new guidelines for verified accounts on Wednesday, one week after the company was harshly criticized for granting the coveted blue checkmark to Jason Kessler, the organizer of the deadly white nationalist rally in Charlottesville in August.
“We are conducting an initial review of verified accounts and will remove verification from accounts whose behavior does not fall within these new guidelines,” the company wrote on Twitter.
The Twitter Support account begins with an admission that giving “visual prominence” to verified accounts has contributed to the perception that verification is an endorsement by the network of those specific users, and that the company should have acted earlier to clear up the confusion. It also says that opening the verification process to public submissions exacerbated the problem.
Now the company says it’s reworking the entire system: it has already updated its official guidelines on what verification means, and it’s still not accepting verification requests from the general public.
The biggest news here is likely that the company will remove existing verification from accounts whose activity doesn’t meet its restated guidelines. It’s unclear who will be affected and when, but the review is bound to draw attention if any highly visible profiles lose their badges as a result.
On 19 October, the company rolled out a timeline for policy updates intended to “make Twitter a safer place”. It broadened its rules to ban more objectionable content, such as hate symbols, unwanted sexual advances, and non-consensual nudity, and pledged to improve its systems for reporting abuse and appealing suspensions.
Twitter’s new approach to verification will probably open up an entirely new can of worms for a company already facing pressure from the US Congress over its role in a Russian influence campaign to affect the 2016 presidential election.