Most people seem to agree that “fake news” is a big problem online, but what’s the best way to deal with it? Is technology too blunt an instrument to discern truth from lies, satire from propaganda? Are human beings better at flagging up false stories?
During the run-up to the 2016 US presidential election, we were treated to headlines such as “Hillary Clinton sold weapons to Isis” and “Pope Francis endorsed Donald Trump for President”. Both were completely untrue.
But they were just two examples of a tsunami of attention-grabbing, false stories that flooded social media and the internet. We were awash with so-called “fake news”. Many such headlines were simply trying to drive traffic to websites for the purpose of earning advertising dollars. Others, though, seemed part of a concerted attempt to sway public opinion in favor of one presidential candidate or the other.
A study conducted by news website BuzzFeed revealed that fake news travelled faster and further than real news during the US election campaign. The 20 top-performing false election stories generated 8,711,000 shares, reactions and comments on Facebook, whereas the 20 best-performing election stories from 19 reputable news websites generated 7,367,000 shares, reactions and comments.
With a new election season upon us, one of historic importance for the United States and the world, many are concerned that the fake-news tactics of the 2016 cycle, used to favor Trump and discredit Hillary Clinton, will be repeated, and that every step must be taken to prevent fake news from reaching the public.
Facebook, Twitter Inc. and Google parent Alphabet Inc. are discovering the harsh reality that disinformation and hate speech are even more challenging in emerging markets than in places like the U.S. or Europe.
In India, where as many as 900 million voters took part in the recently concluded election that returned Prime Minister Narendra Modi’s ruling coalition to power with an unprecedented victory, the social media giants, from Facebook Inc. to Google, made huge efforts to curb misinformation, with Facebook hiring contractors to verify content in 10 of the country’s 23 official languages.
Today there are more technological tools for creating and circulating fake news than ever before. Recently, I came across a BBC report, “Dangerous AI offers to write fake news,” which described an Artificial Intelligence (AI) system that “generates realistic stories, poems and articles” and “has been updated, with some claiming it is now almost as good as a human writer.”
In February this year, OpenAI catapulted itself into the public eye when it produced a language model so good at generating fake news that the organization decided not to release it. Last month, they released an advanced version of it. The model, called GPT-2, was trained on a dataset of eight million web pages, and is able to adapt to the style and content of the initial text given to it. “It can finish a Shakespeare poem as well as write articles and epithets,” the report stated.
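GPT-2’s scale is new, but the underlying idea of a statistical language model, one that learns which words tend to follow which contexts and then continues a prompt in the same style, is much older and easy to illustrate. The toy word-level Markov model below is purely illustrative (the corpus, function names and parameters are invented for this sketch; GPT-2 itself is a large neural network, not a Markov chain):

```python
import random
from collections import defaultdict

def train(text, order=2):
    """Map each `order`-word context to the words observed to follow it."""
    words = text.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        context = tuple(words[i:i + order])
        model[context].append(words[i + order])
    return model

def generate(model, prompt, length=10, seed=0):
    """Continue `prompt` by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    out = prompt.split()
    order = len(next(iter(model)))
    for _ in range(length):
        context = tuple(out[-order:])
        choices = model.get(context)
        if not choices:          # unseen context: stop early
            break
        out.append(rng.choice(choices))
    return " ".join(out)

# A deliberately tiny, made-up corpus; a real model trains on millions of pages.
corpus = ("the pope endorsed the candidate and the candidate "
          "thanked the pope for the endorsement")
model = train(corpus)
print(generate(model, "the pope"))
```

Scaled up from two-word contexts over a toy corpus to neural networks over eight million web pages, this same continue-the-prompt mechanism is what lets a model mimic the style and content of whatever text it is given.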
A BBC report, based on research and tests done by BBC staff and outside experts, found that OpenAI has developed a new, more powerful version of its text generator, one that could be used to create fake news or abusive spam on social media.
Tristan Greene, an author, commented about AI, “I’m terrified of GPT-2 because it represents the kind of technology that evil humans are going to use to manipulate the population – and in my opinion that makes it more dangerous than any gun.”
President Donald Trump has been warning about “fake news” throughout his entire political career, putting a dark cloud over the journalism profession. Meanwhile, “deepfaking,” a product of AI and machine learning advancements, allows high-tech computers to produce completely false yet remarkably realistic videos depicting events that never happened or people saying things they never said.
Deepfake technology is allowing organizations that produce fake news to augment their “reporting” with seemingly legitimate videos, blurring the line between reality and fiction like never before — and placing the reputation of journalists and the media at greater risk.
It is alarming that machines are now equipped with the “intelligence” to create fake news and write like humans, adapting to human style and content and appealing to the audiences they want to target.
The quest for artificial intelligence (AI) began over 70 years ago, with the idea that computers would one day be able to think like us. Ambitious predictions attracted generous funding, but after a few decades there was little to show for it. But, in the last 25 years, new approaches to AI, coupled with advances in technology, mean that we may now be on the brink of realizing those pioneers’ dreams.
AI has its origins in the Second World War, when scientists from many disciplines, including the emerging fields of neuroscience and computing, began tackling the challenge of intelligent machines. Mathematician Alan Turing and neurologist Grey Walter of England were two of the bright minds who took up that challenge. They traded ideas in an influential dining society called the Ratio Club. Walter built some of the first ever robots. Turing went on to invent the so-called Turing Test, which set the bar for an intelligent machine: a computer that could fool someone into thinking they were talking to another person.
The term ‘artificial intelligence’ was coined for a 1956 summer conference at Dartmouth College, organized by a young computer scientist, John McCarthy. AI is a constellation of technologies—from machine learning to natural language processing—that allows machines to sense, comprehend, act and learn.
Supporters of top-down AI still had their champions: supercomputers like Deep Blue, which in 1997 took on world chess champion Garry Kasparov. The IBM-built machine was, on paper, far superior to Kasparov – capable of evaluating up to 200 million positions a second. But could it think strategically? The answer was a resounding yes. The supercomputer won the contest, dubbed ‘the brain’s last stand’, with such flair that Kasparov believed a human being had to be behind the controls. Some hailed this as the moment that AI came of age. But for others, this simply showed brute force at work on a highly specialized problem with clear rules.
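Deep Blue’s search was enormously engineered, but the “brute force on clear rules” principle it embodies can be sketched on a toy game. The snippet below exhaustively searches the game tree of simple Nim (players alternately take 1–3 stones; whoever takes the last stone wins); everything here, from the game to the function name, is an invented illustration, not Deep Blue’s actual algorithm:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def best_move(stones):
    """Exhaustive game-tree search: return (can_current_player_win, winning_move)."""
    for take in (1, 2, 3):
        if take == stones:
            return True, take          # taking the last stone wins outright
        if take < stones and not best_move(stones - take)[0]:
            return True, take          # leave the opponent a losing position
    return False, None                 # every move hands the opponent a win

print(best_move(10))  # → (True, 2): take 2, leaving the opponent 8 stones
```

Like Deep Blue, the program “thinks strategically” only in the sense that it evaluates every legal continuation; the apparent flair is the product of exhaustive calculation over well-defined rules, which is exactly why skeptics saw the 1997 match as a specialized feat rather than general intelligence.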
In November 2008, a small feature appeared on the new Apple iPhone – a Google app with speech recognition. It seemed simple. But this heralded a major breakthrough. Despite speech recognition being one of AI’s key goals, decades of investment had never lifted it above 80% accuracy. Google pioneered a new approach: thousands of powerful computers, running parallel neural networks, learning to spot patterns in the vast volumes of data streaming in from Google’s many users. At first it was still fairly inaccurate but, after years of learning and improvements, Google now claims it is 92% accurate.
In 2011, IBM’s Watson took on the human brain on the US quiz show Jeopardy. This was a far greater challenge for the machine than chess. Watson had to answer riddles and complex questions. Its makers used a myriad of AI techniques, including neural networks, and trained the machine for more than three years to recognize patterns in questions and answers. Watson trounced its opposition – the two best performers of all time on the show. The victory went viral and was hailed as a triumph for AI.
Sixty-four years after Turing published his idea of a test that would prove machine intelligence, a chatbot called Eugene Goostman finally passed. But very few AI experts saw this as a watershed moment. Eugene Goostman was seen as ‘taught for the test’, using tricks to fool the judges. It was other developments in 2014 that really showed how far AI had come in 70 years. From Google’s billion-dollar investment in driverless cars, to Skype’s launch of real-time voice translation, intelligent machines were now becoming an everyday reality that would change all of our lives.
Companies recognize AI’s strategic importance and its impact on their business, yet many have stalled in making it a key enabler of their strategy. Artificial intelligence can transform the relationship between people and technology, supercharging our creativity and skills. The future of AI promises a new era of disruption and productivity, where human ingenuity is enhanced by speed and precision.
When deepfaked videos become widespread, the journalism industry is going to face a massive consumer trust issue, according to Zhao. He fears it will be hard for top-tier media outlets to distinguish a real video from a doctored one, let alone for news consumers who haphazardly stumble across the video on Twitter.
While Artificial Intelligence has advanced greatly, with the noble purpose of making life easier for human beings, it has also thrown up massive challenges for all of us, above all the need to carefully distinguish reality from fake news, and truth from falsehood.