Bots began operating on the Internet in the early years of the World Wide Web and continued to evolve as modern search engines emerged. Bots have always been there. However, it seems that we are only starting to realise their presence, for better or worse, as a result of Donald Trump’s election victory and, in hindsight, the discovery of thousands of fake news stories that went viral on Facebook, Google, and Twitter, potentially giving Trump an unfair advantage.
Behind the circulation of this misinformation are algorithms and bots, which can be configured with good or evil intentions. Spamming, scamming, and ad fraud are consequences of the activity of malign bots. However, bots can also be very beneficial. Are bots a blessing or a curse? And how can webmasters, social media platforms, online retailers, and governments fight this spread of fakeness and misinformation?
Facebook and Google are taking action
After the controversy caused by these findings, Mark Zuckerberg said in a Facebook post that Facebook would “take misinformation seriously” and “have been working on this problem for a long time”, even though “the percentage of misinformation is relatively small”. He also briefly described some ongoing projects, including making it easier for users to report misinformation, implementing better algorithms to strengthen detection, enforcing ads policies, and improving ad farm detection.
Google also announced that it would punish such websites by banning them from its AdSense service, which lets publishers display ads on their sites and generate revenue per click, The New York Times reported.
Bots: kinds and tasks
Although Mark Zuckerberg said recently that 99% of the news that circulates through Facebook is genuine, some statistics confirm not only that fakeness is more present in social media networks than we think, but also how powerfully social media and search engines shape public opinion.
The fact is that the Internet is riddled with bots of different kinds, and fake users circulate across social media. There are around 20 million fake Twitter accounts and an estimated 81 million fake Facebook profiles [Social Pilot]. Behind them are follower bots, whose task is to follow accounts in order to inflate those accounts’ follower counts artificially. However, this practice is largely useless because fake accounts cannot convert. Is it worth inflating those figures just to keep up appearances?
There are also traffic bots, which aim to drive more traffic to certain websites and even to click on the ads displayed there. Google has been fighting them for years, but they are increasingly difficult to detect because the sender hides the source of the traffic, for example via proxy URLs. These bots can also inflate the number of views of YouTube videos and the number of Facebook likes.
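To make the detection problem concrete, here is a minimal sketch of the kind of crude heuristics a webmaster might start with. Everything in it is an illustrative assumption: the field names, the token list, and the one-second threshold are hypothetical, and real systems such as Google’s ad-fraud detection are far more sophisticated than this.

```python
# Hypothetical, simplified heuristic for flagging suspicious traffic.
# Field names and thresholds are illustrative assumptions, not a real API.

KNOWN_BOT_AGENTS = ("curl", "python-requests", "headless")

def looks_like_bot(request: dict) -> bool:
    """Flag a request as suspicious based on a few crude signals."""
    agent = request.get("user_agent", "").lower()
    # Signal 1: automation declared openly in the user-agent string.
    if any(token in agent for token in KNOWN_BOT_AGENTS):
        return True
    # Signal 2: no referrer combined with an implausibly short visit.
    if not request.get("referrer") and request.get("time_on_page_s", 0) < 1:
        return True
    return False

requests = [
    {"user_agent": "Mozilla/5.0", "referrer": "news.example", "time_on_page_s": 42},
    {"user_agent": "python-requests/2.31", "referrer": "", "time_on_page_s": 0},
]
print([looks_like_bot(r) for r in requests])  # [False, True]
```

The limitation the article points out is exactly why this kind of check fails in practice: a bot routed through a proxy with a spoofed browser user-agent and a plausible referrer sails straight past both signals.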
I would also like to mention crisis bots. These bots aim to mobilise people on social media against a company or brand, and to spread negative ideas and information about organisations, undermining their reputation. For instance, Elon Musk, CEO of Tesla and SpaceX, recently suffered a targeted campaign full of fake information about him.
Regulation or more awareness?
To all those who have wondered whether using bots with evil intentions is legal or illegal: the fact is that there is no regulation in this respect yet. But one thing is clear: if social networks are colonised by bots, they will lose their essence.
But then, what should social networks, search engines, and their peers do? Should they block malign bots? Should regulations be introduced? The answers to these questions are complex, because there is no clear way to neutralise only the malign bots, but something does seem to be brewing.
There is a crowdsourcing initiative, promoted by Eli Pariser, author of the seminal book “The Filter Bubble”, that aims to find solutions to the fake-news problem. Facebook is also listening to journalists, experts, and even users.
Perhaps the first step should be to raise global awareness of this problem because, whether we like it or not, social media and search engines are becoming the new fourth estate of this century. Protective mechanisms should somehow be implemented to safeguard the culture of truth and trust that is the hallmark of social networks and search engines.