Laziness in humans could be used to tell us apart from bots

Humans’ unique laziness when it comes to interacting on social media could be the key to telling us apart from artificially intelligent ‘bots’, a new study shows.

US researchers have identified behavioural trends of humans on Twitter that are absent in social media bots – namely a decrease in tweet length over time.

The team studied how the behaviour of humans and bots changed over the course of a Twitter session, focusing on activity around political events.

While humans get lazier as sessions progress and can’t be bothered typing out long tweets, bots maintain consistent levels of engagement over time. 

Such a behavioural difference could inform new machine learning algorithms for bot detection software.

The team report they’ve already used their findings to create a successful bot-detection system that outperforms a baseline detection model. 

As Twitter sessions progress, human users grow tired and are less likely to undertake complex activities, the researchers report

‘We are continuously trying to identify dimensions that are particular to the behaviour of humans on social media that can in turn be used to develop more sophisticated toolkits to detect bots,’ said co-author Emilio Ferrara, assistant professor of computer science at the University of Southern California Information Sciences Institute.

‘Bots are constantly evolving – with fast-paced advancements in AI, it’s possible to create ever-increasingly realistic bots that can mimic more and more how we talk and interact in online platforms.’

Bots – social media accounts that are controlled by software rather than humans – serve a variety of purposes, from news aggregation to automated customer assistance for online retailers. 

Early social media bots in the 2000s were created to tackle simple tasks, such as automatically retweeting content posted by a set of sources.  

But bots have since become more sophisticated and are, to some extent, capable of emulating the short-term behaviour of human users.

They have recently been linked to more nefarious forms of online manipulation, such as influencing public opinion during electoral campaigns. 

One well-publicised example is Russia’s use of bots to influence Americans leading up to Donald Trump’s election in November 2016. 

‘Remarkably, bots continuously improve to mimic more and more of the behaviour humans typically exhibit on social media,’ said Professor Ferrara.

‘Every time we identify a characteristic we think is prerogative of human behaviour, such as sentiment or topics of interest, we soon discover that newly-developed open-source bots can now capture those aspects.’

While much research has focused on social bot detection, little attention has been given to measuring the behaviour and activity of bots, as opposed to humans.

Social bots are social media accounts that are controlled by artificial, as opposed to human, intelligence. Their services include news aggregation, collecting and relaying pieces of news from different sources

To find out more, researchers used a large Twitter dataset, made up of Twitter activity from both bot and human accounts, related to recent political events.

The team’s dataset, called French Elections (FE), consisted of a collection of more than 16 million tweets posted by more than 2 million different users.

The tweets were posted between April 25 and May 7, 2017 – the two-week period leading to the second round of the French presidential election.

Another dataset was made up of tweets produced by bot accounts active in viral spamming campaigns at different times, plus a group of human tweets.

Over the course of a single flurry of Twitter activity, researchers measured the amount of produced content and the propensity to engage in social interactions.

They focused on indicators of the quantity and quality of social interactions a user engaged in, including the number of retweets, replies and mentions, as well as the length of tweets.

Humans showed a decrease in the amount of content produced, signified by a decreasing trend in average tweet length – something that wasn’t present in the bot accounts.
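
To illustrate, here is a minimal sketch of how such a within-session trend might be measured; the function and the example numbers are hypothetical and not taken from the study:

```python
import numpy as np

# Illustrative only: reduce one session to the slope of a least-squares
# line fitted to tweet lengths in posting order. A negative slope means
# tweets get shorter as the session progresses.
def tweet_length_trend(tweet_lengths):
    positions = np.arange(len(tweet_lengths))
    slope, _intercept = np.polyfit(positions, tweet_lengths, 1)
    return slope

# Hypothetical human session: tweets shrink as the session goes on
print(tweet_length_trend([240, 180, 150, 90, 40]))    # clearly negative
# Hypothetical bot session: length stays roughly constant
print(tweet_length_trend([120, 118, 122, 121, 119]))  # close to zero
```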

As sessions progressed, human users grew tired and became less likely to engage in complex activities, such as ‘composing original content’.

The users were likely to become more bored by the matter in hand or distracted by something else on the internet, the study suggests. 

Human accounts were also found to increase their amount of social interaction over the course of a session, illustrated by higher amounts of retweets and replies.

Yet again, bots were shown to be unaffected by this factor, maintaining steady and consistent rates of engagement throughout the session.
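
A similar sketch shows how the shift in social interaction over a session could be quantified; again, the function and the example sessions are invented for illustration:

```python
# Illustrative only: compare the share of social interactions
# (retweets/replies encoded as 1, original tweets as 0) in the second
# half of a session with the first half. A positive value means the
# user interacts more as the session progresses.
def interaction_shift(is_interaction):
    half = len(is_interaction) // 2
    first = sum(is_interaction[:half]) / max(half, 1)
    second = sum(is_interaction[half:]) / max(len(is_interaction) - half, 1)
    return second - first

# Hypothetical human session: original tweets early, retweets/replies later
print(interaction_shift([0, 0, 1, 0, 1, 1, 1]))  # positive
# Hypothetical bot session: steady mix throughout
print(interaction_shift([1, 0, 1, 1, 0, 1]))     # about zero
```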

‘In general, our experiments reveal the presence of a temporal evolution in the human behavior over the course of a session on an online social network, whereas, confirming our expectations, no evidence is found of a similar evolution for bot accounts,’ Professor Ferrara and his co-author write in Frontiers in Physics. 

Researchers compared these results between bots and humans and used the differences to create a ‘bot detection system’, which outperformed a baseline model that wasn’t trained on the new behavioural features.

User behaviour on social media evolves very differently for bots and humans over the course of an activity session, the researchers concluded.

‘Our analysis highlights the presence of short-term behavioural trends in humans, which can be associated with a cognitive origin, that are absent in bots, intuitively due to the automated nature of their activity,’ they said.

These differences can be used to successfully implement a machine learning-based bot detection system or help improve existing versions.  
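
As a rough illustration of that idea, the sketch below trains a toy logistic-regression classifier on two made-up session-trend features per account; the feature values, labels and choice of model are assumptions for illustration, not the authors’ actual system:

```python
from sklearn.linear_model import LogisticRegression

# Illustrative only: each account is summarised by two hypothetical
# session-trend features (tweet-length slope, interaction shift),
# in the spirit of the study's approach rather than its pipeline.
X = [
    [-4.2, 0.35],   # human-like: tweets shrink, interactions rise
    [-3.1, 0.28],   # human-like
    [-5.0, 0.41],   # human-like
    [0.1, 0.02],    # bot-like: flat behaviour throughout the session
    [-0.2, -0.01],  # bot-like
    [0.3, 0.00],    # bot-like
]
y = [0, 0, 0, 1, 1, 1]  # 0 = human, 1 = bot

clf = LogisticRegression().fit(X, y)
print(clf.predict([[-3.8, 0.30], [0.0, 0.01]]))  # expected: [0 1]
```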

Machine learning algorithms can learn to improve their ability to perform a certain task without being explicitly programmed to do so.

Such systems can find patterns or trends in sets of data to come to conclusions or help humans make better decisions.  

RUSSIAN BOTS INTERFERED WITH US PRESIDENTIAL ELECTIONS

Intelligence agencies have determined that Russia interfered in the 2016 elections.

Fake Facebook and Twitter accounts were used to quickly spread disinformation.

Bot accounts and Russian trolls posted messaging in favour of then-presidential hopeful Donald Trump during the election.

Officials suspect these accounts also amplified conversations about controversial topics in order to stoke division in the US.

Facebook announced that it had removed 650 pages and groups that were spreading false information.

It also claimed to have caught and ended Russian cyber attacks on conservative groups.

A former Facebook executive has warned that it is ‘too late’ to keep Russia and other malevolent parties from meddling in the 2018 elections.