Knowing the number of bots on Twitter is central to the outcome of the case, and a student project may be the key to determining it.
A student project has become pivotal in Twitter's lawsuit against Elon Musk over his abandoned purchase of the social network.
Elon Musk accused Twitter of misrepresenting the number of fake accounts on the platform, using doctoral student Kaicheng Yang's bot-detection software, Botometer, to do so.
The bots, the center of the discussion
According to legal documents, Botometer, a free tool that claims it can estimate how likely a Twitter account is to be a bot, has been instrumental in the Musk team's effort to show that fake accounts make up more than the 5% Twitter has claimed.
“Contrary to Twitter's representations that its business was minimally affected by fake accounts or spam, preliminary estimates by the Musk parties show otherwise,” the countersuit says.
But differentiating between humans and bots is harder than it seems, and researchers have accused Botometer of “pseudoscience” for making it look easy. Twitter was quick to point out that Musk used a tool with a history of making mistakes.
In its legal filings, the platform reminded the court that Botometer defined Musk himself as likely to be a bot earlier this year.
As a result, when Musk and Twitter go to trial in October, the science behind bot detection will be on trial as well.
How does Botometer work?
The software has been running for eight years, and its original creators have since moved on: Yang inherited it at university.
Botometer is a supervised machine learning tool, meaning it was trained on labeled examples to tell bots and humans apart. Yang tells Wired that Botometer distinguishes bots from humans by examining more than 1,000 details associated with a single Twitter account, such as its name, profile picture, followers, and tweet-to-retweet ratio, before giving it a score from zero to five.
Most importantly, though, Botometer doesn't give users a threshold, a definitive cutoff above which every account counts as a bot. Yang says the tool should absolutely not be used to decide whether individual accounts or groups of accounts are bots. He prefers that it be used comparatively, to gauge whether one topic of conversation is more bot-driven than another.
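To make the idea concrete, here is a minimal, purely illustrative sketch of how a supervised scorer can map account features to a zero-to-five scale. The feature names, weights, and bias below are all hypothetical placeholders, not Botometer's real model, which uses over 1,000 features and learned parameters:

```python
# Illustrative sketch only: a toy supervised bot scorer in the spirit of
# Botometer's approach. All features and weights here are made up.
import math

def bot_score(features, weights, bias=0.0):
    """Map an account's feature vector to a 0-5 score.

    A logistic function squashes a weighted sum of features into (0, 1),
    which is then scaled to a Botometer-style range of 0 to 5.
    """
    z = bias + sum(weights[name] * value for name, value in features.items())
    probability = 1.0 / (1.0 + math.exp(-z))  # likelihood the account is a bot
    return round(5.0 * probability, 2)

# Hypothetical weights a training step might learn from labeled accounts.
weights = {
    "tweets_per_day": 0.08,           # very high activity hints at automation
    "retweet_ratio": 1.5,             # mostly retweets, little original content
    "default_profile_image": 2.0,     # no custom avatar
    "followers_per_following": -0.5,  # real accounts tend to attract followers
}

# A suspicious-looking (hypothetical) account.
account = {
    "tweets_per_day": 300,
    "retweet_ratio": 0.95,
    "default_profile_image": 1,
    "followers_per_following": 0.01,
}

print(bot_score(account, weights, bias=-26.0))  # a high score, near the top of the 0-5 range
```

Note that, exactly as Yang cautions, a score like this is not a verdict on any one account; in practice such numbers are only meaningful when comparing distributions of scores across groups of accounts.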