Disinformation

Is “disinformation” a five-dollar version of “fake news”?

 

No. Fake news and post-truth politics are new names for old tools used by propagandists. Before Breitbart in the 2010s, there were fact-free tabloids published by 19th-century journalistic titans William Randolph Hearst and Joseph Pulitzer. Their newspapers, which often went out of their way to twist facts to fit a conclusion, used propaganda to bend public opinion in favor of the Spanish-American War.

 

In the 2000s, the George W. Bush administration manipulated New York Times reporter Judith Miller into becoming a mouthpiece to drum up support for the Iraq War. In the 1930s, Leni Riefenstahl used film to make Hitler’s twisted Aryan vision breathtaking and beautiful. In the 1950s, Senator Joe McCarthy used his position and the media to create a government-sponsored anti-Communist juggernaut that destroyed many careers but found few real communists. And almost 60 years after the Vietnam War, there is still no verifiable proof that anyone spit on US soldiers returning from Southeast Asia.

 

The technical term is “computational propaganda”.

 

According to the Computational Propaganda Project, a part of Oxford University’s Oxford Internet Institute, computational propaganda (CP) is “the use of algorithms, automation, and human curation to purposefully distribute misleading information over social media networks.”

 

You recognize “computational propaganda” as “disinformation”.

 

The tools are the same, customized for the tone and technology of the time.

 

WE enable the threat of disinformation

 

Because we are still so close to the black hole of negativity that was the 2016 US presidential election, the threat of disinformation goes either unknown, underestimated, or unheeded. Americans’ general disdain for politics made it easier for modern propaganda to enter our zeitgeist. I think there are three reasons for this.

 

One, politics is one area where Americans are generally loyal over time and inured to over-the-top statements of support. The perfect setting for professional manipulators.

 

Two, Americans also tend to forgive, giving propagandists an opportunity to deceive AND spin the reason why.

 

Three, despite the evidence, it took time for people to realize American politics was actually valuable enough to influence. Because we discount the process, we discounted the warnings as politically motivated.

 

If you want to understand more about how computational propaganda is being used in US politics, this report from Oxford’s Computational Propaganda Project will be valuable for background. Within the past few years, computational propaganda has been used for a range of crimes and atrocities.

 

Disinformation (and Facebook) helped kill 43,000 people—just one case of many

 

Myanmar is a Southeast Asian country bordered by Thailand to the east and China to the north. It has long been ruled by one of the most repressive regimes on the planet. Unsurprisingly, the country has been wracked by violence and war for years.

 

According to an October 2018 New York Times report, the military had used Facebook for the previous five years in a campaign designed to condition the public and incite violence against the country’s Rohingya Muslim minority. In August 2018, Facebook removed 65 pages and 29 accounts that were controlled by the military. These accounts had over 13 million followers in a country where 20 million people had an internet connection.

 

Time Magazine reported 43,000 Rohingya missing and presumed dead as of March 2018. During the same period, more than 700,000 Rohingya fled the country. In an apology-free statement announcing the account removals, Facebook said “…we weren’t doing enough to help prevent our platform from being used to foment division and incite offline violence. We agree that we can and should do more.”

 

And this is just one case. Here’s the lede in a May 2018 Guardian.com article on anti-Muslim rhetoric presented to Sri Lankans on Facebook:

 

“‘Kill all Muslims, do not spare even an infant, they are dogs,’ a Facebook status, white Sinhalese text against a fuchsia background, screamed on 7 March 2018. Six days later – after hundreds of Muslim families had watched their homes ransacked and their businesses set on fire – it was allowed to remain online, despite being reported for violating the company’s community standards.”

Granted, users and citizens need to do better at understanding when they’re being manipulated. But Facebook needs to be taken to task for profiting off the increase in traffic, time spent on page, and ad clicks related to these posts.

 

Thanksgiving 2015: the first time Russian disinformation attacked America

 

Following a release of Twitter data from deactivated Russian bot accounts, the Wall Street Journal reported that Russia’s Internet Research Agency conducted a Thanksgiving Day social media propaganda campaign designed to make people believe turkeys from Wal-Mart were poisoning people. I encourage you to open the links.

 

Beginning in October 2015, Russian botnet and social media accounts linked to the Mueller investigation and report began to establish web properties and compile stories about food poisoning and turkey meat. On Thanksgiving morning, an account attributed to an Alice Morton posted that her son had gotten food poisoning from the turkey she prepared. The same day, a blog post at a bogus site called “Proud to be Black” featured a story about tainted turkey meat putting 200 New Yorkers into critical condition. The story linked to another about a brined turkey recall in the Minneapolis, MN area.

 

Once that launched, the Russian propagandists posted an article to Wikipedia about the “2015 New York poisoned turkey incident,” citing the article from Proud to be Black. Even though Wikipedia quickly found and deleted the article, the propagandists were able to use the link to tie together the content they’d previously created around food poisoning and turkey meat. They posted around 1,500 messages between 5:00 and 9:00 am Thanksgiving morning. These messages were in turn picked up by regular people as retweets, reposts and shares.

 

The morning after Thanksgiving, the US Department of Agriculture reported receiving a complaint from the NYC area alleging food poisoning but declined to investigate. The New York City Department of Health and Mental Hygiene reported no record of food poisoning of that type. And the company cited as supplying turkey to NYC Wal-Mart stores said it had no contract with the retailer.

 

What if the Russians had had a few more hours to post messages, increasing the chance the hoax went viral? Could Wal-Mart stock have taken a little hit? Could its reputation have taken a big one?

 

Disinformation is apolitical

 

In October 2018, Austin, TX-based New Knowledge, an NLP startup, discovered Russian botnet accounts with a bogus Taco Bell sales promotion pre-loaded and ready to go. The offer was free lunch for a year at Taco Bell along with a ticket to the Coachella Music Festival.

 

The information was found in Twitter data that the social media giant considered related to “potential influence operations that have been up and running since 2016.” Timestamps on the bogus promotion went as far back as 2013. According to the article, building tricks and leaving them dormant for later use is common behavior among propagandists and fraudsters.

 

Think through the confusion and the financial and reputational disaster had the Russians launched that attack then. Or now.

 

A darker question: what if their goal were financial? It’s not inconceivable that this fraud could have caused a dip in the stock price. Nor would it be inconceivable for the bad guys to use the moment to make money in the options market.

 

Disinformation attacks from unexpected angles

 

China, long a player in manipulating its own people, practices disinformation differently. Its goal is distraction and the suppression of collective action. In 2014, a Chinese dissident leaked email data covering 43,000 social media posts created and disseminated by the 50c Party, the popular name for the legions of commentators paid to push the government’s line online.

 

The leaked archive contained data on a series of campaigns, complete with information on copy, author accounts, social networks, performance analytics, and reporting across the regional office. When a group of US researchers from Harvard, Stanford, and UCSD studied it, the information provided a detailed look at the operations and organizational structure of the 50c Party. From the abstract of their 2017 study, “How the Chinese Government Fabricates Social Media Posts for Strategic Distraction, not Engaged Argument”:

 

“The Chinese government has long been suspected of hiring as many as 2,000,000 people to surreptitiously insert huge numbers of pseudonymous and other deceptive writings into the stream of real social media posts, as if they were the genuine opinions of ordinary people…We estimate that the government fabricates and posts about 448 million social media comments a year. In contrast to prior claims, we show that the Chinese regime’s strategy is to avoid arguing with skeptics of the party and the government, and to not even discuss controversial issues. We infer that the goal of this massive secretive operation is instead to regularly distract the public and change the subject, as most of these posts involve cheerleading for China, the revolutionary history of the Communist Party, or other symbols of the regime.”

 

The above raises the question, “What does that look like?” The overwhelming majority of posts in the archive featured conversation supporting the government, followed by posts of non-argumentative praise or suggestions. To non-academics, that means “cheerleading.”

 

Party members were instructed to “promote unity and stability through positive publicity” and “actively guide public opinion during emergency events.” In this context, “emergency events” are events with collective-action potential. Here are a few examples from the archive, by category:

 

Non-argumentative praise: “The government has done a lot of practical things, among which is solving a significant part of the housing problem.”

Cheerleading for China: “We all have to work harder, to rely on ourselves, and to take the initiative to move forward.”

Factual reporting: “On January 16, Jiangxi Party Committee Member and Ganzhou City Party Secretary Shi Wenqing will communicate with netizens on the China Ganzhou Web, to hear comments, suggestions, and demands from netizens.”

Taunting of foreign countries: “Last year, at the Shangri-La Dialogue where Obama invited 23 countries to participate in the containment of China, he said: ‘China has 1.3 billion people; the faster China rises, the more difficult it will be for us to live, because the earth’s resources are limited. For us to remain at our current living standard, we must contain China’s development.’”

Argumentative praise or criticism: “My dear friends, if you go through your Weibo, you’ll discover that the system automatically had you follow Xue Manzi, Li Kaifu, Zuo Yeben, Han Han, Li Chengpeng and other populist Weibo users. This is a typical tactic of indoctrination and brainwashing; I suggest you unfollow them.”

 

One strategy to succeed against the threat? Name and shame.

 

In March 2019, the Wall Street Journal reported on how the British government countered a Russian disinformation campaign. In the wake of the March 2018 nerve-agent poisoning of Russian émigré Sergei Skripal and his daughter, the Kremlin launched a social media and influencer messaging campaign intent on deflecting and neutralizing the idea that Russia was behind the attack.

 

Instead of riding out the wave of disinformation, UK Prime Minister Theresa May ordered the expulsion of Russian diplomats, rallied European and NATO partners to condemn the Kremlin, and, most importantly, released the details, images, and CCTV footage of the two suspects. Non-governmental organizations picked up on that information, leading to the men being identified as Russian intelligence officers Anatoliy Chepiga and Alexander Mishkin.

 

The strategy of naming the people involved and shaming their activity means it will be quite difficult for the two agents to ever work covertly again.

 

Brands and comms departments probably won’t have to worry about Russian plots. This is an emerging threat, and it is easy to see how people can dismiss it. But an emerging threat is still a threat.

 

Here’s how CP-related activity could impact your agency or department. What if your client becomes the target of a coordinated consumer action, like customers smashing their own Keurig coffee machines to protest the company cutting its ad buy on Fox News’ “Hannity” show? What if your organization is targeted on social media because of its mission or clientele? How can you better deal with a sophisticated troll? These are the scenarios that need to be part of crisis planning meetings.

 

Another option? Track it like the academics do.

 

I interviewed Dr. Vidya Narayanan, Director of Research at Oxford University’s Computational Propaganda Project, to understand a little more about their process for tracking online disinformation and to test my own understanding of CP.

 

“We track the spread of misinformation, disinformation, or what we call junk news. We track this stuff on social media platforms, particularly before important public events. When the project first started, we identified high-frequency posting accounts, which we would classify as suspicious behavior. We don’t have enough data to categorically state that this account is a bot and that one is a human. We track the frequency and tweeting pattern of accounts that are associated with political events.”

 

She goes on, “We collected a list of relevant political hashtags well ahead of the U.S. election, Brexit, and the Indian elections. From these, we create a list of the accounts that use these hashtags. We also extract links to news sources that have been posted using these hashtags.”
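That collection step is easy to picture in code. Here is a minimal Python sketch, assuming the tweets have already been pulled into simple dictionaries; the field names and sample records are invented for illustration and are not the Computational Propaganda Project’s actual pipeline. It builds per-hashtag lists of the accounts and news links that used them:

```python
import re
from collections import defaultdict

# Hypothetical, already-collected tweet records; the field names are
# assumptions for this sketch, not a real data schema.
tweets = [
    {"user": "@account_a", "text": "Vote now! #Brexit https://example-news.com/story1"},
    {"user": "@account_b", "text": "Shocking! #Brexit #MAGA https://junk-site.net/story2"},
]

TRACKED_HASHTAGS = {"#brexit", "#maga"}      # seed list of political hashtags
URL_PATTERN = re.compile(r"https?://\S+")    # crude link extractor
HASHTAG_PATTERN = re.compile(r"#\w+")

accounts_by_hashtag = defaultdict(set)   # hashtag -> accounts that used it
links_by_hashtag = defaultdict(set)      # hashtag -> news links posted with it

for tweet in tweets:
    hashtags = {h.lower() for h in HASHTAG_PATTERN.findall(tweet["text"])}
    links = URL_PATTERN.findall(tweet["text"])
    for tag in hashtags & TRACKED_HASHTAGS:
        accounts_by_hashtag[tag].add(tweet["user"])
        links_by_hashtag[tag].update(links)

print(dict(accounts_by_hashtag))
print(dict(links_by_hashtag))
```

Even this toy version shows why the hashtag list matters so much: everything downstream, accounts and links alike, is defined by that initial seed set.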

 

Even though you want to get started on who they are and what they say, a better question is “Are they real?” In Chapter 5 we learned that 30 or more tweets per day from a single account is a marker for a bot. If you understand that the bad guys are using a botnet, your counter-strategies may adjust. Here’s how Narayanan and her Oxford researchers approach categorizing social media data:

 

“We analyze the links to news sources that use these hashtags. We have a typology that we’ve refined over a number of studies to classify them into categories and label their content as polarizing, junk news and content from Russia. We map how other accounts are using links and hashtags from this content. We map Twitter networks and Facebook networks looking for who uses the links and related content, and for what.”
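To make that concrete, here is a minimal Python sketch using invented account records and a hand-rolled domain typology; none of this is Oxford’s actual code or category list. It applies the 30-tweets-per-day bot marker from Chapter 5 and labels the news domains each account shares:

```python
from urllib.parse import urlparse

# Toy account records; the fields and numbers are illustrative assumptions.
accounts = [
    {"handle": "@suspect_account", "tweets_last_7_days": 420,
     "shared_links": ["https://junk-site.net/story2", "https://example-news.com/story1"]},
    {"handle": "@normal_account", "tweets_last_7_days": 35,
     "shared_links": ["https://example-news.com/story1"]},
]

BOT_TWEETS_PER_DAY = 30  # Chapter 5's rough marker for automated behavior

# Hand-rolled domain typology; a real study refines this over many iterations.
DOMAIN_TYPOLOGY = {
    "junk-site.net": "junk news",
    "example-news.com": "professional news",
}

for account in accounts:
    tweets_per_day = account["tweets_last_7_days"] / 7
    likely_bot = tweets_per_day >= BOT_TWEETS_PER_DAY
    labels = [DOMAIN_TYPOLOGY.get(urlparse(link).netloc, "unclassified")
              for link in account["shared_links"]]
    print(f"{account['handle']}: {tweets_per_day:.0f} tweets/day, "
          f"likely bot: {likely_bot}, shares: {labels}")
```

The payoff comes from combining the two signals: an account that posts like a machine and shares mostly junk-news domains is a much stronger lead than either signal alone.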

 

The researchers who analyzed data from the 50c Party were able to identify authors, opinion leaders, followers, and layers of government officials without AI. No wonder the study took two years.

 

If you feel the need to start immediately, then some of Oxford’s methodology would be helpful. You can find their data here. Otherwise, this would be a good time to talk about AI as a tool to neutralize computational propaganda.

 

Natural language processing can analyze what they say. Neural nets can learn their patterns and behavior.
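As a toy illustration of “learning their patterns,” here is a short scikit-learn sketch. The handful of labeled posts is invented, and a real detector would need thousands of examples and a much richer model, but it shows the basic shape of training a classifier to separate junk-news-style posts from ordinary ones:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training examples; a real project needs thousands of labeled posts.
posts = [
    "SHOCKING: turkey meat poisons 200 New Yorkers, media silent!!!",
    "You won't BELIEVE what this politician is hiding from you",
    "City health department reports no confirmed cases of food poisoning",
    "Officials will meet residents on Tuesday to discuss the housing plan",
]
labels = ["junk", "junk", "ordinary", "ordinary"]

# TF-IDF features plus logistic regression: a simple stand-in for the
# "learn their patterns" step, not a production disinformation detector.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

print(model.predict(["BREAKING!!! Wal-Mart turkeys send hundreds to hospital"]))
```

Swap the logistic regression for a neural network and feed it far more data, and you have the rough outline of how the pattern-learning piece works at scale.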

 

If you have the money, an enterprise solution such as Crimson Hexagon can help, but there are a number of products that we looked at in Chapter 3 for the curious or the bootstrappers.

 

Co-reference resolution tools determine which words refer to the same people and objects. Automatic summarization makes reporting about the data much easier. Machine translation is getting better, with some products able to discern regional dialects. Natural language understanding does the blocking and tackling on syntax, semantics, and pragmatics. Computer audition can parse speech in video and audio files. Computer vision is how you scan, analyze, and categorize images and video frames used in posts.
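To show how low the barrier to entry is for one narrow slice of those capabilities, here is a small spaCy sketch; it assumes the en_core_web_sm model is installed and covers only named-entity extraction, pulling the people, places, and organizations out of a post:

```python
import spacy

# Requires: pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

post = ("On Thanksgiving morning, accounts claimed turkeys sold at Wal-Mart "
        "stores in New York had poisoned 200 people.")

doc = nlp(post)

# Named-entity recognition: who and what the post is actually talking about.
for ent in doc.ents:
    print(ent.text, ent.label_)
```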

 

Neural nets can make sense of it all. Once you start to wrap your mind around it, you can plan a counterstrategy.

 

If you want to understand who is really talking about you on the public internet and what they’re saying, it would be smart to meet independently with a data scientist and an NLP product vendor. Each has a benefit, and each comes at the solution from one side of the grow-versus-buy discussion. The data scientist can help you understand the data landscape of your company or client and can provide advice on adoption and best practices for technology transformation. Vendors can help you learn more about the tools, customer success planning, and pricing.
