The battle for truth in the media
By Weronika Wiesiołek
“Fake news” has recently become a buzzword across the Internet, having conquered social media, political debates and, ironically enough, the news services themselves. From angry tweets to serious research papers, there is plenty of evidence that news providers are losing their consumers’ trust. According to a study conducted in February 2019 by the Pew Research Center, only 40% of British adults say they usually trust the information spread by mainstream media – and if that is not alarming enough, some states in the USA record figures between 10 and 20 percent.
But isn’t it obvious, after all? The sheer volume of data we pour out multiplies the information flowing through our daily feeds many times over – encountering ever more untrue stories disguised as legitimate articles is a natural consequence of that change. Headlines keep being recycled unethically, and little can be done to stop it at the source. What we can do, though, is try to separate the real content from the debris created solely to generate Internet traffic.
We have actually been doing this for quite some time. Computer programs that check the reliability of information are already deployed across numerous Internet media. The algorithms behind them are often complicated and rely mainly on machine learning, which means we cannot fully trace the logic behind their decisions. With “AI” (artificial intelligence) being even more of a buzzword than “fake news” itself, people have put their trust in building ever more algorithms to judge the credibility of data. The problem is that most of these methods have been getting it wrong.
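To give a sense of what such systems look like under the hood, here is a minimal, purely illustrative sketch of a machine-learning credibility classifier. The tiny corpus, its labels and the choice of model are assumptions made up for this example; real platforms train far larger and more complex models on far more data.

```python
# Illustrative sketch only: a toy credibility classifier of the kind described
# above. The training articles and their labels are invented for this example.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled corpus: 1 = judged credible, 0 = judged not credible.
articles = [
    "The central bank raised interest rates by 0.25 percentage points on Thursday.",
    "Scientists confirm the Moon is hollow and inhabited by reptilian bankers.",
    "The city council approved the new cycling infrastructure budget.",
    "Miracle fruit cures every known disease overnight, doctors stunned.",
]
labels = [1, 0, 1, 0]

# Bag-of-words features plus logistic regression: a deliberately simple
# stand-in for the far more complex models real platforms rely on.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(articles, labels)

# Score a new, unseen headline.
print(model.predict_proba(["New study finds coffee extends life by 40 years."]))
```

The important point is that a model like this learns statistical patterns of wording, not facts – a limitation the research described below exploits.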
A team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) recently found a loophole in a popular approach to this issue. The most common way to fight “fake news” is to detect the origin of a text: articles labelled as automatically generated are deemed fake and blocked. Bots stand behind a significant fraction of Internet traffic and have been used in many malicious ways – so why not restrict our sources to more reliable human journalists?

The researchers proved that they could fool state-of-the-art “fake news” detectors with automatically generated texts indistinguishable from those written by humans. Instead of using OpenAI’s resources to produce articles from scratch, they altered existing, human-written news. They substituted only the most significant keywords, so the text still sounded exactly as mass-media content should, and the detectors classified it as legitimate articles written by human journalists. The main theses of these articles, however, were pure nonsense; they often consisted of incoherent sentences describing entirely unrelated topics.

Having shown that current systems are far from infallible, the researchers went further. In an interview with MIT News, CSAIL PhD student and lead author on the paper, Tal Schuster, said: “I had an inkling that something was lacking in the current approaches to identifying fake information by detecting auto-generated text – is auto-generated text always fake? Is human-generated text always real?” It turned out that perfectly accurate summaries of scientific studies could be flagged as fake simply because they had been produced with the help of software. To support this hypothesis, the team used automatically generated summaries of NASA research papers. These were indeed classified as “fake news”, despite being entirely true to their sources and scientifically useful.

The strategies used so far teach an important lesson: extensive use of technology will not stop people from bringing human biases into their solutions. The term “artificial intelligence” still causes unease, and the words “automatically generated” decrease our willingness to trust a source of information. Somewhere in this train of thought, a crucial notion gets lost: humans are more than capable of lying and spreading misinformation, and there is no explicit reason to trust humans more than bots. The conclusion of Schuster’s team, presented in the recent MIT paper, points to the same fallacy: “We need to have the mindset that the most intrinsic ‘fake news’ characteristic is factual falseness, not whether or not the text was generated by machines.”
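To make the keyword-substitution idea concrete, here is a rough, hypothetical sketch in Python. The example sentence, the swap table and the variable names are all invented for illustration; the MIT team’s actual procedure was far more sophisticated than a handful of string replacements.

```python
# Illustrative sketch only: swapping a few salient keywords in a genuine,
# human-written sentence so that its style and fluency survive while its
# factual content is destroyed. The sentence and substitutions are invented.
original = ("NASA's InSight lander recorded its first likely marsquake, "
            "a faint seismic signal detected on the surface of Mars.")

# Hypothetical substitutions: each replacement stays grammatically plausible
# in context, so the text still looks like ordinary human-written news.
swaps = {
    "NASA's": "IKEA's",
    "InSight lander": "flagship sofa",
    "marsquake": "earthquake",
    "Mars": "Belgium",
}

doctored = original
for old, new in swaps.items():
    doctored = doctored.replace(old, new)

print(doctored)
```

Because the surface style is untouched, a detector that judges provenance rather than factual accuracy has very little to latch onto – which is precisely the weakness the study exposed.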
From Issue 19