By Janine Erasmus
If you’re a regular internet user, you’ll have come across bots. They’re the little guys who pop up in the bottom right of the screen to welcome you to a website and ask if you need help, or help you practise your skills when you’re learning a new language, or guide you through a menu when you order pizza, or help you plan your travels, or play chess with you.
What exactly is a bot? Put simply, it is an automated programme used to perform a certain task on the internet. Some scan and index content from web pages, and some send alerts when a specified topic is in the news. Others are programmed to engage with a living person for a certain purpose.
Yet others are designed to make the spread of information on social media easier. Because they automate tasks such as reposting or retweeting, following, or liking on social media platforms, they can engage with or share content far faster than a human can.
Initially used as marketing or customer relations tools, bots are now more often associated with social media. On these platforms they mimic human users, often with insidious intent: trawling the web for contact details used to send spam or fake news, discrediting people with differing views, influencing political discourse, or committing cybercrime.
It is that insidious intent that we will discuss in this article, with a focus specifically on social media bots. We also distinguish between chatbots and social media bots, although the terms are sometimes used interchangeably.
Internet cloud provider Cloudflare differentiates the two this way: “Chatbots are bots that can independently hold a conversation, while social media bots do not have to have that ability. Chatbots are able to respond to user input, but social media bots do not need to ‘know’ how to converse. In fact, many social media bots don’t communicate using language at all; they only perform more simple interactions such as providing ‘follows’ and ‘likes’.”
If the chatbot concept sounds familiar, it’s because the same technology is used to build digital assistants such as Siri and Google Assistant, and smart devices such as Google Home or Amazon Echo.
While a chatbot often requires a person or a team to maintain its functionality and tweak its algorithms, says Cloudflare, social media bots are much simpler to manage. Hundreds or even thousands of social media bots may be managed by a single person and for this reason, there are many more social media bots than there are genuine chatbots.
Malicious intent
This is where the problem begins, and where the question of ethics comes in.
When used for malicious purposes, social media bots in the guise of humans can wreak havoc. Often people have no idea they’re interacting with a bot, because they seem to act just like humans would.
The European Parliament says: “Fake accounts and bot networks contribute to the spread of disinformation, online manipulation, and polarisation, and can therefore pose a direct threat to democracy.”
Programming with straightforward algorithms such as 'if…then…' statements is the simplest way to set up a bot. If the bot identifies a relevant topic, then it will post the content programmed to correspond with that topic. “To find relevant topics, social media bots work with simple keyword searches and scan Twitter timelines or Facebook posts for specific wording and hashtags. Then they publish pre-written texts as statements or try to steer conversations in a certain direction,” says domain and cloud services provider Ionos.
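As a rough illustration, that logic might look something like the sketch below. It is a minimal, hypothetical example: the keywords, the canned text and the publish() helper are invented for illustration and do not reflect any real platform’s API.

```python
# A minimal sketch of a keyword-triggered bot's 'if...then' logic.
# The keywords, canned text, and publish() helper are all hypothetical.

CANNED_REPLIES = {
    "election": "Read the REAL story here: example.com/truth",
    "economy": "They don't want you to know this: example.com/shock",
}

def publish(text: str) -> None:
    # Stand-in for a real platform API call.
    print(f"POSTING: {text}")

def react_to(post_text: str) -> None:
    """If a tracked keyword appears in a post, then publish the pre-written text."""
    lowered = post_text.lower()
    for keyword, reply in CANNED_REPLIES.items():
        if keyword in lowered:   # if a relevant topic is identified...
            publish(reply)       # ...then post the corresponding canned content
            break

react_to("Anyone else watching the election coverage tonight?")
```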
Other social media bots are technically much more complex, using artificial intelligence and text and data analysis. This enables them to come up with original comments that are clearly not pre-programmed – sometimes even referencing current events. “They usually assemble their posts from different online texts, which they simply rearrange. These more complex social media bots are more difficult to expose,” says Ionos.
How do social media bots work?
Social media is crawling with bots, and no platform is safe from them. However, research has shown that X (formerly known as Twitter) is consistently the worst affected, because its posts are restricted to a small number of characters, making it easier to disguise a bot. “Bots’ poor language skills are harder to recognise in short tweets,” says Ionos.
Research by Soumendra Lahiri, a professor of mathematics and statistics at Washington University in St. Louis, and Dhrubajyoti Ghosh, one of his doctoral students, suggests that a significant proportion of X users – anywhere between 25% and 68% – are bots. That was in 2022, and it’s unlikely that much has changed since.
But whatever the platform, social media bots share some commonalities. A social media bot usually posts using a fake account, says Ionos. It will have its own profile, in most cases complete with photo, posts, and friends or followers. This is the account used to distribute whatever message the creator wants to disseminate, via “likes and retweets or in the form of posts or comments. Using a programming interface or API, a social media bot can access social networks and receive and send data”.
Social media bots usually operate at times when other users are more active, says Ionos. “In addition, they usually post at varying intervals to give the idea of being human when in fact a machine is behind all the posts.”
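The timing trick is equally simple to sketch, under the same caveats (the delay range and the publish() stub below are illustrative assumptions):

```python
import random
import time

def publish(text: str) -> None:
    # Stand-in for a real platform API call.
    print(f"POSTING: {text}")

def run_posting_loop(messages: list[str]) -> None:
    """Post each message after a randomised delay, mimicking a human schedule."""
    for msg in messages:
        # Vary the interval (here, 30 minutes to 3 hours) so the account
        # does not post with machine-like regularity.
        time.sleep(random.randint(30 * 60, 3 * 60 * 60))
        publish(msg)
```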
Have you ever received a Facebook friend request from an unknown person whose profile is strangely empty of activity, with no friends to speak of, and who claims to be a widow or widower, a former military serviceperson, or a member of some other admirable profession? This is an easy way to spot a potential bot.
Social media bots can send friend requests, says Ionos. “If a request is confirmed by a human user, the social media bot can then collect and analyse the data of the user. As early on as 2011, a Canadian study showed that social media bots are able to collect data and analyse the account information from users that have accepted their friend request.”
This is one of the big reasons why social media users should be careful about whom they admit to their circle of friends, whether virtual or not.
The more the merrier
These bots are at their most dangerous – yes, dangerous – when they connect with each other in a so-called botnet: a group of connected computers all running the bot software.
“A botnet might be a room full of computers, networked, running the same bot for whatever purpose (often referred to as a click farm),” says ClickCease, a developer of ad fraud and click-fraud detection and protection software. “Or, it could be a group of remote devices, infected with a hidden bot (often in the form of a ‘virus’).”
Here they co-ordinate with each other, feeding off one another’s posts, liking and sharing content written by fellow bots, and growing their influence.
And that influence has been disruptive. High-profile cases include the role of bots in the 2016 and 2020 US presidential elections, which has been extensively researched. Data scientist Emilio Ferrara of the University of Southern California tracked bots deployed in the 2016 election and found that after Donald Trump’s victory, “these accounts sort of went dark”. Some of them re-emerged during the 2017 French presidential election, pushing far-right candidate Marine Le Pen, says Ferrara.
A botnet set up to sway opinion during the 2016 Brexit referendum was the subject of research published by the Royal Society, and there, too, it was found that a good percentage of the accounts that shared information and opinions disappeared afterwards. The accounts were used to “artificially amplify electoral messages”, according to the City, University of London researchers behind the study.
In Germany in 2017, an election debate turned nasty when then-chancellor Angela Merkel and her main opponent, Martin Schulz, came under attack on social media, with both branded as traitors. Researchers said that much of the hatred was driven by bots.
How to identify them
Bots that behave in a decidedly non-human way are not hard to identify, but the task can get tricky. Because of social media bots’ very nature – that of mimicking a human and interacting like one – and because of their increasing complexity, recognising them can be challenging.
There are patterns a vigilant user can discern, however, which raise the likelihood that an account is a bot. Some of these, advises cyber-security company Cheq, include the following (a rough scoring sketch follows the list):
Profile information: Bots often have incomplete or generic profile information. Look for indicators such as a lack of profile pictures, limited bio, or repetitive phrases.
Activity patterns: A high level of activity within a short period is not typical of legitimate users. Bots may also be active throughout the day without any breaks.
Content quality: Bots typically generate low-quality or nonsensical content. At the same time, they rarely make the grammatical mistakes that are quite common among human users.
Lack of engagement: Bots often have a minimal level of engagement with other users. They may not respond to comments or messages, and their interactions may seem scripted or generic.
Amplification of specific content: A social media bot may exhibit a pattern of amplifying specific types of content, such as political propaganda or commercial promotions. They often lack diversity in the topics they engage with and share.
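To see how such indicators might be combined, here is a rough, rule-based scoring sketch. The fields, thresholds and weights are all assumptions made for illustration, not a validated detection model:

```python
# Rule-based scoring sketch for the indicators above.
# Field names, thresholds, and weights are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Account:
    has_profile_photo: bool
    bio_length: int             # characters of bio text
    posts_per_day: float
    active_hours_per_day: float
    replies_received: int
    replies_sent: int
    distinct_topics: int        # rough count of topics engaged with

def bot_likelihood_score(acct: Account) -> int:
    score = 0
    if not acct.has_profile_photo or acct.bio_length < 10:
        score += 1              # incomplete or generic profile
    if acct.posts_per_day > 50:
        score += 1              # implausibly high activity
    if acct.active_hours_per_day > 20:
        score += 1              # active round the clock, no breaks
    if acct.replies_sent == 0 and acct.replies_received > 0:
        score += 1              # ignores engagement from other users
    if acct.distinct_topics <= 2:
        score += 1              # amplifies a narrow set of topics
    return score                # 0 = probably human, 5 = very bot-like

acct = Account(has_profile_photo=False, bio_length=0, posts_per_day=120,
               active_hours_per_day=23, replies_received=40, replies_sent=0,
               distinct_topics=1)
print(bot_likelihood_score(acct))  # 5 -> very bot-like
```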
Cloudflare suggests running a questionable account’s profile picture through a reverse image search to see if it has appropriated a photo of someone.
Ionos also offers advice on questions to ask (two of these checks are sketched in code after the list):
How likely is it that a person would create this profile? Bots usually follow a lot of accounts without having many followers themselves. If an account has only two or three friends, the probability that it’s a bot is quite high.
What is the account posting? If the account keeps posting similar posts – with links to the same media – it is most likely a bot trying to steer conversation towards a certain topic.
How often does the account post and how often does it like other posts? Also observe the account’s reaction time: if the account responds and posts merely seconds later, this is a clear indication that a human isn’t behind the account.
How does the account respond to contextual questions? One of the most reliable methods of identifying a bot is to ask contextual questions, which must be answered according to the situation. If a bot is asked: “What does the profile picture of the user above you look like?”, it will have difficulty answering this contextual question.
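Two of these questions lend themselves to quick numeric checks, sketched here with invented thresholds:

```python
# Quick checks based on two of Ionos's questions.
# The thresholds are illustrative assumptions, not validated cut-offs.

def suspicious_follow_ratio(following: int, followers: int) -> bool:
    """Bots often follow many accounts while having few followers themselves."""
    return following > 10 * max(followers, 1)

def suspicious_reaction_time(seconds_to_reply: float) -> bool:
    """Replies that arrive within seconds of a post suggest automation."""
    return seconds_to_reply < 5

print(suspicious_follow_ratio(following=4200, followers=12))  # True
print(suspicious_reaction_time(seconds_to_reply=2.0))         # True
```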
This is all just the tip of the iceberg when it comes to social media bots, and it is impossible to cover everything in a single article. Reputable sources such as those mentioned in this article, and many more, offer valuable advice on how users may protect themselves.
While social media platforms have stepped up their efforts to remove social media bots, they may never be able to eliminate them completely, and it falls equally to the user to always be vigilant.