The fast rise of fake news has forced tech and social media companies into action.
The same artificial intelligence (AI) technology that accelerated the spread of online lies and falsehoods is now being deployed to fight misinformation. That's good in principle, but most of the proposed solutions lack one critical element: they don't actually address the core problem, a point that gets lost in most debates.
Fake News. What Fake News?
Fake news comes in two forms: 1) the old playbook of party line and propaganda, misinformation and disinformation, half-truths and lies, and now 'alternative facts', and 2) the in-your-face rejection of truthful, fact-based reporting that doesn't fit one's worldview or agenda. Neither should survive more than five minutes in broad daylight. This is the 21st century, and there is no lack of information or access to it. Why, then, do so many people fall for either? And why do they cling to them even when proven wrong?
Gordon Pennycook and David Rand asked that question in a recent New York Times op-ed. The authors, both psychologists, examined the two schools of thought: one arguing that we tend to rationalize our convictions, essentially trying to convince ourselves that what we've always believed to be true, or wanted to be true, is in fact true; the other conceding that we simply "fail to exercise our critical faculties." It's almost certainly a combination of both, and that is what makes it so toxic.
Fake news has moved from propaganda to entire ecosystems of lies and falsehoods. Websites, Twitter and Facebook accounts, TV news channels, radio stations and print media are set up to manipulate and distort the truth, often citing fake witness accounts and using invented backup stories. Things will only get worse with the spread of software that alters videos, audio and images to make people appear to say and do things they never did – the 'deep fakes' you hear so much about.
As for the 'alleged fake news' (truthful reports, actually), humans are programmed to trust those they like more than those they dislike. That's not to say fake news succeeds because reason is permanently locked in by who we are and how we see the world; people are capable of changing their minds. But when laziness or partisan conviction obstructs the forming of accurate beliefs, the impact is far greater today than it was a decade ago. Social media amplifies the message, especially when it comes from friends and followers, and constant repetition increases the chance of broader acceptance.
The Daily Information Overkill
All of us are bombarded with information, around the clock, every single day. Perhaps we really are 'amusing ourselves to death', to use the famous line coined by US media critic Neil Postman. His son Andrew certainly thinks so: "My dad predicted Trump in 1985," he said. "It's not Orwell, he warned, it's Brave New World." The elder Postman was referring to the rising popularity of private TV channels in the '80s. Had he foreseen the internet and social media, he probably would have called this a full-blown global emergency.
The ‘Easy Fixes’
At some point this relentless information overload had to lead to some form of content management on the consumers' side. Savvy users have long opted to filter their news manually. Social media and tech companies took note and are catching up with human or automated fact-checkers, user restrictions and screening apps.
Facebook was the first to come under fire for failing to identify and block fake accounts and for allowing fake news to spread across its platform. Already in the headlines for repeated privacy violations, the social network now relies on more than 30 external fact-checking organizations, such as PolitiFact and FactCheck.org, to help battle its misinformation crisis. The downside: according to The Guardian, two of them, Snopes.com and the Associated Press, have already stopped working with Facebook, citing frustration with the company's lack of transparency and excessive control. Very few people nowadays trust Mark Zuckerberg with anything.
WhatsApp, which is owned by Facebook, has had its own share of issues with fake content. In what looks like an acceptance of responsibility, it recently limited the forwarding of any single message to five times. This was in response to mob lynchings in India that were blamed on fake reports spread via the messaging service. It is probably the most effective response we have seen so far.
Twitter has begun to close fake accounts and has introduced the 'I don't like this Tweet' option. Few people know about it, and those who do are usually unclear about what it actually does. It won't block the user, and it's no 'unfollow' either. As some tech-savvy users have gathered, it likely feeds an algorithm that filters what you see in the future. Meanwhile, the hashtag #JackStopTheHate is spreading like wildfire as CEO Jack Dorsey prepares to testify before the U.S. Congress – this after meeting with Trump and not disclosing what was discussed. Twitter is a very useful tool, and its user base and earnings are up. One can only hope it will do more to identify and swiftly shut down fake accounts.
Microsoft announced that the NewsGuard extension is now available for users of its Edge mobile browser on iOS and Android phones. NewsGuard co-CEO Steven Brill said the green and red ratings of news and information websites provide "a journalistic solution to a journalism problem." Say that again? True: RT, Breitbart, Infowars and the Drudge Report all hate NewsGuard because their ratings are deep red. But Fox News is labeled as generally maintaining "basic standards of accuracy and accountability." There goes your journalistic solution. [A quick check of leading news media in the Arab world, China, Europe and Australia revealed that the vast majority are still unrated ("submit this site for review by NewsGuard").]
So, tech and social media to the rescue? I'd answer that with a conditional yes. But it will take some additional hard work.
The Problem isn’t Social Media, it’s Media Illiteracy
In a politicized and partisan world, fake news has an outsized impact on public discourse. Apps, add-ons and extensions for safer browsing are useful, but we shouldn't consider the underlying issue resolved by downloading an app. If we start outsourcing fact-checking entirely to third parties, we are doomed, because we unlearn how to think for ourselves.
Anjana Susarla, Associate Professor of Information Systems at Michigan State University, looked into information overload and the proliferation of digital devices, making the point that the digital divide is no longer just about access. Here is what she wrote in The Conversation:
“The savvier users are navigating away from devices and becoming aware about how algorithms affect their lives. Meanwhile, consumers who have less information are relying even more on algorithms to guide their decisions.”
The same argument applies to fake news. The better informed we are, the less likely we are to let algorithms take over and guide our opinions. That narrows the options down to just one: actively fighting media illiteracy. We must relearn, and teach others, how to decode media messages and respond to fake news – not by trying to shield ourselves from it, but by becoming more aware of how it influences our beliefs and behaviors. We are better off if we start to declutter our day and free up more time to think, so that we can spot and reject the lies and falsehoods.
Anyone remotely involved in the creation of content and the dissemination of information should join the effort, helping to encourage alertness, serious debate, better public education and new policies to combat all forms of misinformation. That includes educators, researchers, ethics and legal experts, lawmakers, writers, journalists and PR professionals.
Finding the Right Balance
Avaaz, the world's largest online activist network, just proposed the 'Correct the Record' plan, which could curb fake news by requiring social media companies to direct all users who have been exposed to demonstrably false information on their pages and sites toward fact-checks. Avaaz claims that, according to new polling, an overwhelming majority of Europeans (86.6%) would support this radical new measure.
In Singapore, the government is preparing guidelines for trainers, public agencies and other organizations on ways to spot fake news. To be launched by June, the plan aims to nurture an informed public and promote fact-checking as part of an effort to advance media literacy.
These are two proposals to counter fake news effectively, using education and/or social media to undo the damage done. It remains to be seen how they pan out, as there will be setbacks, criticism and doubts about the real intent. But for now, these plans seem to be the best way forward.