
Opening Statements

Senate Select Committee on Intelligence

Hearing on “Social Media Influence in the 2016 US Elections”

Date & Time: Wednesday, November 1, 2017 - 9:30am
Location: Hart 216

Witnesses

Colin Stretch, Vice President and General Counsel, Facebook

Sean Edgett, General Counsel, Twitter

Kent Walker, Senior Vice President and General Counsel, Google



Opening Statement of Sen. Richard Burr
Chairman, Senate Intelligence Committee


Opening Statement of Sen. Mark Warner
Vice Chairman, Senate Intelligence Committee
Open Hearing on Social Media Influence in the 2016 U.S. Elections
November 1, 2017

In this age of social media, you can’t afford to waste too much time – or too many characters – in getting the point across, so I’ll get straight to the bottom line.
 
Russian operatives are attempting to infiltrate and manipulate American social media to hijack the national conversation and to make Americans angry, to set us against ourselves and to undermine our democracy.  They did it during the 2016 U.S. presidential campaign. They are still doing it now.  And not one of us is doing enough to stop it.
 
That is why we are here today.
 
In many ways, this threat is not new. Russians have been conducting information warfare for decades.
 
But what is new is the advent of social media tools with the power to magnify propaganda and fake news on a scale that was unimaginable back in the days of the Berlin Wall.  Today’s tools seem almost purpose-built for Russian disinformation techniques.
 
Russia’s playbook is simple, but formidable.  It works like this:

1. Disinformation agents set up thousands of fake accounts, groups, and pages across a wide array of platforms.
2. These fake accounts populate content on Facebook, Instagram, Twitter, YouTube, Reddit, LinkedIn, and others.
3. Each of these fake accounts spends months developing networks of real people to follow and like their content, boosted by tools like paid ads and automated bots. Most of their real-life followers have no idea they are caught up in this web.
4. These networks are later utilized to push an array of disinformation, including stolen emails, state-led propaganda (like RT and Sputnik), fake news, and divisive content.

The goal here is to get this content into the news feeds of as many potentially receptive Americans as possible and to covertly and subtly push them in the direction the Kremlin wants them to go.
 
As one who deeply respects the tech industry and was involved in the tech business for twenty years, it has taken me some time to really understand this threat.  Even I struggle to keep up with the language and mechanics.  The difference between bots, trolls, and fake accounts.  How they generate Likes, Tweets, and Shares.  And how all of these players and actions are combined into an online ecosystem.
 
What is clear, however, is that this playbook offers a tremendous bang for the disinformation buck.  With just a small amount of money, adversaries use hackers to steal and weaponize data, trolls to craft disinformation, fake accounts to build networks, bots to drive traffic, and ads to target new audiences.  They can force propaganda into the mainstream and wreak havoc on our online discourse.  That’s a big return on investment.  
 
So where do we go from here?
 
It will take all of us – the platform companies, the United States government, and the American people – to deal with this new and evolving threat.
 
Social media and the innovative tools each of you have developed have changed our world for the better.  You have transformed the way we do everything from shopping for groceries to growing our small businesses.  But Russia’s actions are further exposing the dark underbelly of the ecosystem you have created.  And there is no doubt that their successful campaign will be replicated by other adversaries – both nation states and terrorists – that wish to do harm to democracies around the globe.
 
As such, each of you here today needs to commit more resources to identifying bad actors and, when possible, preventing them from abusing our social media ecosystem.
 
Thanks in part to pressure from this Committee, each company has uncovered some evidence of the ways Russians exploited their platforms during the 2016 election.
 
For Facebook, much of the attention has been focused on the paid ads Russian trolls targeted to Americans.  However, these ads are just the tip of a very large iceberg.  The real story is the amount of misinformation and divisive content that was pushed for free on Russian-backed Pages, which then spread widely on the News Feeds of tens of millions of Americans.
 
According to data Facebook has provided, 120 Russian-backed Pages built a network of over 3.3 million real people.  From these now-suspended Pages, 80,000 organic unpaid posts reached an estimated 126 million real people.  That is an astonishing reach from just one group in St. Petersburg.  And I doubt that the so-called Internet Research Agency represents the only Russian trolls out there.  Facebook has more work to do to see how deep this goes, including looking into the reach of the IRA-backed Instagram posts, which represent another 120,000 pieces of content.
 
The anonymity provided by Twitter and the speed with which it shares news make it an ideal tool to spread disinformation. According to one study, during the 2016 campaign, junk news actually outperformed real news in some battleground states in the lead-up to Election Day.  Another study found that bots generated one out of every five political messages posted on Twitter over the entire presidential campaign.
 
I’m concerned that Twitter seems to be vastly under-estimating the number of fake accounts and bots pushing disinformation.  Independent researchers have estimated that up to 15% of Twitter accounts – or potentially 48 million accounts – are fake or automated.  Despite evidence of significant incursions and outreach from researchers, Twitter has, to date, only uncovered a small percentage of that activity, though I am pleased to see that number has been rising in recent weeks.
 
Google’s search algorithms continue to have problems with surfacing fake news and propaganda.  Though we can’t necessarily attribute them to the Russian effort, false stories and unsubstantiated rumors were elevated on Google Search during the recent mass shooting in Las Vegas.  Meanwhile, YouTube has become RT’s go-to platform.  You have also now uncovered 1,100 videos associated with this campaign.  Much more of this content was likely spread through other platforms.
 
It is not just the platforms that need to do more.  The U.S. government has thus far proven incapable of adapting to meet this 21st century challenge.  Unfortunately, I believe this effort is suffering, in part, because of a lack of leadership at the top.  We have a President who remains unwilling to acknowledge the threat that Russia poses to our democracy.  President Trump should stop actively delegitimizing American journalism and acknowledge and address this real threat posed by Russian propaganda.
 
Congress, too, must do more.  We need to recognize that current law was not built to address these threats. I have partnered with Senators Klobuchar and McCain on a light-touch legislative approach, which I hope my colleagues will review.  The Honest Ads Act is a national security bill intended to protect our elections from foreign influence.
 
Finally – but perhaps most importantly – the American people also need to be aware of what is happening on our news feeds. We all need to take a more discerning approach to what we are reading and sharing, and who we are connecting with online. We need to recognize that the person at the other end of that Facebook or Twitter argument may not be a real person at all.
 
The fact is that this Russian weapon has already proven its success and cost effectiveness.  We can all be assured that other adversaries, including foreign intelligence operatives and potentially terrorist organizations, are reading their playbook and already taking action.  We don’t have the luxury of waiting for this Committee’s final report before taking action to respond to this threat to our democracy.
    
To our witnesses today, I hope you will detail what you saw in this last election and tell us what steps you will undertake to get ready for the next one.  We welcome your participation and encourage your continued commitment to addressing this shared responsibility.

###


HEARING BEFORE THE UNITED STATES SENATE SELECT COMMITTEE ON INTELLIGENCE

November 1, 2017

Testimony of Colin Stretch, General Counsel, Facebook

Chairman Burr, Vice Chairman Warner, and distinguished members of the Committee, thank you for this opportunity to appear before you today. My name is Colin Stretch, and since July 2013, I’ve served as the General Counsel of Facebook. We appreciate this Committee’s hard work to investigate Russian interference in the 2016 election.

At Facebook, our mission is to create technology that gives people the power to build community and bring the world closer together. We don’t take for granted that each one of you uses Facebook to connect with your constituents, and that the people you represent expect authentic experiences when they come to our platform to share.

We also believe we have an important role to play in the democratic process—and a responsibility to protect it on our platform. That’s why we take what’s happened on Facebook so seriously. The foreign interference we saw is reprehensible and outrageous and opened a new battleground for our company, our industry, and our society. That foreign actors, hiding behind fake accounts, abused our platform and other internet services to try to sow division and discord—and to try to undermine our election process—is an assault on democracy, and it violates all of our values.

In our investigation, which continues to this day, we’ve found that these actors used fake accounts to place ads on Facebook and Instagram that reached millions of Americans over a two-year period, and that those ads were used to promote Pages, which in turn posted more content. People shared these posts, spreading them further. Many of these ads and posts are inflammatory. Some are downright offensive.

In aggregate, these ads and posts were a very small fraction of the overall content on Facebook—but any amount is too much. All of these accounts and Pages violated our policies, and we removed them.

Going forward, we’re making some very significant investments—we’re hiring more ad reviewers, at least doubling our security engineering efforts, putting in place tighter ad content restrictions, launching new tools to improve ad transparency, and requiring documentation from political ad buyers. We’re building artificial intelligence to help locate more banned content and bad actors. We’re working more closely with industry to share information on how to identify and prevent threats so that we can all respond faster and more effectively. And we are expanding our efforts to work more closely with law enforcement.

I’m here today to share with you what we know so far about what happened—and what we’re doing about it. At the outset, let me explain how our service works and why people choose to use it.

II. FIGHTING ELECTION INTERFERENCE ON FACEBOOK

        A. Understanding what you see on Facebook

1. The News Feed Experience: A Personalized Collection of Stories. When people come to Facebook to share with their friends and discover new things, they see a personalized homepage we call News Feed. News Feed is a constantly updating, highly personalized list of stories, including status updates, photos, videos, links, and activity from the people and things you’re connected to on Facebook. The goal of News Feed is to show people the stories that are most relevant to them. The average person has thousands of things on any given day that they could read in their News Feed, so we use personalized ranking to determine the order of stories we show them. Each person’s News Feed is unique. It’s shaped by the friends they add; the people, topics, and news sources they follow; the groups they join; and other signals like their past interactions. On average, a person in the US is served roughly 220 stories in News Feed each day. Over the time period in question, from 2015 to 2017, Americans using Facebook were exposed to, or “served,” a total of over 33 trillion stories in their News Feeds.
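To make the idea of personalized ranking concrete, the sketch below shows one way a ranked feed of this general kind could be assembled. The signals, weights, and scoring function are illustrative assumptions, not Facebook's actual ranking model.

```python
# Illustrative sketch of personalized feed ranking; the signals and weights are assumptions,
# not Facebook's actual model.
from dataclasses import dataclass

@dataclass
class Story:
    author: str        # the friend or Page the story comes from
    topic: str
    age_hours: float

def relevance_score(story: Story, affinity: dict, interests: set) -> float:
    """Combine hypothetical signals: tie strength, topic interest, and recency."""
    tie_strength = affinity.get(story.author, 0.0)   # based on past interactions
    topic_match = 1.0 if story.topic in interests else 0.0
    recency = 1.0 / (1.0 + story.age_hours)          # newer stories score higher
    return 2.0 * tie_strength + 1.5 * topic_match + recency

def rank_feed(stories, affinity, interests, limit=220):
    """Order candidate stories by relevance; ~220 is the per-day figure cited above."""
    return sorted(stories, key=lambda s: relevance_score(s, affinity, interests),
                  reverse=True)[:limit]
```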

2. Advertising and Pages as Sources of Stories in News Feed. News Feed is also a place where people see ads on Facebook. To advertise in News Feed, a person must first set up a Facebook account—using their real identity—and then create a Facebook Page. Facebook Pages represent a wide range of people, places, and things, including causes, that people are interested in. Any user may create a Page to express support for or interest in a topic, but only official representatives can create a Page on behalf of an organization, business, brand, or public figure. It is against our terms for Pages to contain false, misleading, fraudulent, or deceptive claims or content. Facebook marks some official Pages—such as for a public figure, media company, or brand—with a “verified” badge to let people know they’re authentic. All Pages must comply with our Community Standards and ensure that all the stories they post or share respect our policies prohibiting hate speech, violence, and sexual content, among other restrictions. People can like or follow a Page to get updates, such as posts, photos, or videos, in their News Feed. The average person in the US likes 178 Pages. People do not necessarily see every update from each of the Pages they are connected to. Our News Feed ranking determines how relevant we think a story from a Page will be to each person. We make it easy for people to override our recommendations by giving them additional controls over whether they see a Page’s updates higher in their News Feed or not at all. For context, from 2015 to 2017, people in the United States saw 11.1 trillion posts from Pages on Facebook.

3. Advertising to Promote Pages. Page administrators can create ads to promote their Page and show their posts to more people. The vast majority of our advertisers are small- and medium-sized businesses that use our self-service tools to create ads to reach their customers. Advertisers choose the audience they want to reach based on demographics, interests, behaviors or contact information. They can choose from different ad formats, upload images or video, and write the text they want people to see. Advertisers can serve ads on our platform for as little as $0.50 per day using a credit card or other payment method. By using these tools, advertisers agree to our Self-Serve Ad Terms. Before ads appear on Facebook or Instagram, they go through our ad review process that includes automated checks of an ad’s images, text, targeting and positioning, in addition to the content on the ad’s landing page. People on Facebook can also report ads, find more information about why they are being shown a particular ad, and update their ad preferences to influence the type of ads they see.

        B. Promoting Authentic Conversation

Our authenticity policy is the cornerstone of how we prevent abuse on our platform, and it was the basis of both our internal investigation and the findings described below.

From the beginning, we have always believed that Facebook is a place for authentic dialogue, and that the best way to ensure authenticity is to require people to use the names they are known by. Fake accounts undermine this objective, and are closely related to the creation and spread of inauthentic communication such as spam—as well as being used to carry out disinformation campaigns like the one associated with the Internet Research Agency (IRA).

We build and update technical systems every day to better identify and remove inauthentic accounts, which also helps reduce the distribution of material that can be spread by accounts that violate our policies. Each day, we block millions of fake accounts at registration. Our systems examine thousands of account attributes and focus on detecting behaviors that are very difficult for bad actors to fake, including their connections to others on our platform. By constantly improving our techniques, we also aim to reduce the incentives for bad actors who rely on distribution to make their efforts worthwhile.
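As a rough illustration of blocking fake accounts at registration, the sketch below scores a few signup attributes and maps the score to an action. The attribute names, weights, and thresholds are hypothetical; real systems examine thousands of attributes, including connection patterns.

```python
# Hypothetical sketch of scoring a signup attempt; attribute names and thresholds are illustrative.
def registration_risk(signup: dict) -> float:
    score = 0.0
    if signup.get("signups_from_ip_last_hour", 0) > 20:
        score += 0.4   # many accounts created from the same IP address
    if signup.get("disposable_email_domain", False):
        score += 0.3   # throwaway email provider
    if signup.get("time_to_complete_form_sec", 60) < 3:
        score += 0.3   # form completed faster than a human plausibly could
    return score

def handle_signup(signup: dict) -> str:
    risk = registration_risk(signup)
    if risk >= 0.7:
        return "block"        # rejected at registration
    if risk >= 0.4:
        return "challenge"    # e.g., require additional verification before activation
    return "allow"

print(handle_signup({"signups_from_ip_last_hour": 50, "time_to_complete_form_sec": 1}))  # block
```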

Protecting authenticity is an ongoing challenge. As our tools and security efforts evolve, so will the techniques of those who want to evade our authenticity requirements. As in other areas of cybersecurity, our security and operations teams need to continually adapt.

        C. Protecting the Security of the 2016 Election and Learning Lessons Quickly

1. The Evolution of Facebook’s Security Protections. From its earliest days, Facebook has always been focused on security. These efforts are continuous and involve regular contact with law enforcement authorities in the United States and around the world. Elections are particularly sensitive events for our security operations, and as the role our service plays in promoting political dialogue and debate has grown, so has the attention of our security team.

As your investigation has revealed, our country now faces a new type of national cyber-security threat—one that will require a new level of investment and cooperation across our society. At Facebook, we’re prepared to do our part. At each step of this process, we have spoken out about threats to internet platforms, shared our findings, and provided information to investigators. As we learn more, we will continue to identify and implement improvements to our security systems, and work more closely with other technology companies to share information on how to identify and prevent threats and how to respond faster and more effectively.

2. Security Leading Up to the 2016 Election.

a. Fighting Hacking and Malware.
For years, we had been aware of other types of activity that appeared to come from Russian sources—largely traditional security threats such as attacking people’s accounts or using social media platforms to spread stolen information. What we saw early in the 2016 campaign cycle followed this pattern. Our security team that focuses on threat intelligence—which investigates advanced security threats as part of our overall information security organization—was, from the outset, alert to the possibility of Russian activity. In several instances before November 8, 2016, this team detected and mitigated threats from actors with ties to Russia and reported them to US law enforcement officials. This included activity from a cluster of accounts we had assessed to belong to a group (“APT28”) that the US government has publicly linked to Russian military intelligence services. This activity, which was aimed at employees of major US political parties, fell into the normal categories of offensive cyber activities we monitor for. We warned the targets who were at highest risk, and were later in contact with law enforcement authorities about this activity.

Later in the summer we also started to see a new kind of behavior from APT28-related accounts—namely, the creation of fake personas that were then used to seed stolen information to journalists. These fake personas were organized under the banner of an organization that called itself DC Leaks. This activity violated our policies, and we removed the DC Leaks accounts.

b. Understanding Fake Accounts and Fake News. After the election, when the public discussion of “fake news” rapidly accelerated, we continued to investigate and learn more about the new threat of using fake accounts to amplify divisive material and deceptively influence civic discourse. We shared what we learned with government officials and others in the tech industry. And in April 2017, we shared our findings with the public by publishing a white paper that described the activity we detected and the initial techniques we used to combat it.

As with all security threats, we have also been applying what we learned in order to do better in the future. We use a variety of technologies and techniques to detect and shut down fake accounts, and in October 2016, for example, we disabled about 5.8 million fake accounts in the United States. At the time, our automated tooling did not yet reflect our knowledge of fake accounts focused on social or political issues. But we incorporated what we learned from the 2016 elections into our detection systems, and as a result of these improvements, we disabled more than 30,000 accounts in advance of the French election. This same technology helped us disable tens of thousands more accounts before the German elections in September. In other words, we believe that we’re already doing better at detecting these forms of abuse, although we know that people who want to abuse our platform will get better too and so we must stay vigilant.

3. Investigating the Role of Ads and Foreign Interference. After the 2016 election, we learned from press accounts and statements by congressional leaders that Russian actors might have tried to interfere in the election by exploiting Facebook’s ad tools. This is not something we had seen before, and so we started an investigation that continues to this day. We found that fake accounts associated with the IRA spent approximately $100,000 on more than 3,000 Facebook and Instagram ads between June 2015 and August 2017. Our analysis also showed that these accounts used these ads to promote the roughly 120 Facebook Pages they had set up, which in turn posted more than 80,000 pieces of content between January 2015 and August 2017. The Facebook accounts that appeared tied to the IRA violated our policies because they came from a set of coordinated, inauthentic accounts. We shut these accounts down and began trying to understand how they misused our platform.

a. Advertising by Accounts Associated with the IRA. Below is an overview of what we’ve learned so far about the IRA’s ads:

  • Impressions (an “impression” is how we count the number of times something is on screen—for example, the number of times a piece of content appeared in a person’s News Feed):

    o 44% of total ad impressions were before the US election on November 8, 2016.

    o 56% of total ad impressions were after the election.

  • Reach (the number of people who saw a story at least once):

    o We estimate 11.4 million people in the US saw at least one of these ads between 2015 and 2017.

  • Ads with zero impressions:

    o Roughly 25% of the ads were never shown to anyone. That’s because advertising auctions are designed so that ads reach people based on relevance, and certain ads may not reach anyone as a result.

  • Amount spent on ads:

    o For 50% of the ads, less than $3 was spent.

    o For 99% of the ads, less than $1,000 was spent.

    o Many of the ads were paid for in Russian currency, though currency alone is a weak signal for suspicious activity.

  • Content of ads:

    o Most of the ads appear to focus on divisive social and political messages across the ideological spectrum, touching on topics from LGBT matters to race issues to immigration to gun rights.

    o A number of the ads encouraged people to follow Pages on these issues, which in turn produced posts on similarly charged subjects.

b. Content Posted by Pages Associated with the IRA. We estimate that roughly 29 million people were served content in their News Feeds directly from the IRA’s 80,000 posts over the two years. Posts from these Pages were also shared, liked, and followed by people on Facebook, and, as a result, three times more people may have been exposed to a story that originated from the Russian operation. Our best estimate is that approximately 126 million people may have been served content from a Page associated with the IRA at some point during the two-year period. This equals about four-thousandths of one percent (0.004%) of content in News Feed, or approximately 1 out of 23,000 pieces of content.
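The ratios quoted above can be checked with simple arithmetic; the figures below are taken from this testimony, and the comparison is only a back-of-the-envelope consistency check.

```python
# Back-of-the-envelope check of the ratios quoted above, using figures from this testimony.
share_of_feed = 1 / 23_000                 # "approximately 1 out of 23,000 pieces of content"
print(f"{share_of_feed:.4%}")              # ~0.0043%, i.e. about 0.004% once rounded

direct_reach = 29_000_000                  # people served content directly from the IRA Pages
total_reach = 126_000_000                  # estimate once sharing and likes are included
print(round(total_reach / direct_reach, 1))  # ~4.3: sharing multiplied the audience several-fold
```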

Though the volume of these posts was a tiny fraction of the overall content on Facebook, any amount is too much. Those accounts and Pages violated Facebook’s policies—which is why we removed them, as we do with all fake or malicious activity we find. We also deleted roughly 170 Instagram accounts that posted about 120,000 pieces of content.

Our review of this activity is ongoing. Many of the ads and posts we’ve seen so far are deeply disturbing—seemingly intended to amplify societal divisions and pit groups of people against each other. They would be controversial even if they came from authentic accounts in the United States. But coming from foreign actors using fake accounts they are simply unacceptable.

That’s why we’ve given the ads and posts to Congress—because we want to do our part to help investigators gain a deeper understanding of foreign efforts to interfere in the US political system and explain those activities to the public. These actions run counter to Facebook’s mission of building community and everything we stand for. And we are determined to do everything we can to address this new threat.

        D. Mobilizing to Address the New Threat

We are taking steps to enhance trust in the authenticity of activity on our platform, including increasing ads transparency, implementing a more robust ads review process, imposing tighter content restrictions, and exploring how to add additional authenticity safeguards.

1. Promoting Authenticity and Preventing Fake Accounts. We maintain a calendar of upcoming elections and use internal and external resources to best predict the threat level to each. We take preventative measures based on our information, including working with election officials where appropriate. Within this framework, we set up direct communication channels to escalate issues quickly. These efforts complement our civic engagement work, which includes voter education. In October 2017, for example, we launched a Canadian Election Integrity Initiative to help candidates guard against hackers and help educate voters on how to spot false news.

Going forward, we’re also requiring political advertisers to provide more documentation to verify their identities and disclose when they’re running election ads. Potential advertisers will have to confirm the business or organization they represent before they can buy ads. Their accounts and their ads will be marked as political, and they will have to show details, including who paid for the ads. We’ll start doing this with federal elections in the US and then move on to other elections in the US and other countries. For political advertisers that don’t proactively identify themselves, we’re building machine learning tools that will help us find them and require them to verify their identity.

Authenticity is important for Pages as well as ads. We’ll soon test ways for people to verify that the people and organizations behind political and issue-based Pages are who they say they are.

2. Partnering with Industry on Standards. We have been working with many others in the technology industry, including with Google and Twitter, on a range of elements related to this investigation. Our companies have a long history of working together on other issues such as child safety and counter-terrorism.

We are also reaching out to leaders in our industry and governments around the world to share information on bad actors and threats so that we can make sure they stay off all platforms. We are trying to make this an industry standard practice.

3. Strengthening Our Advertising Policies. We know that some of you and other members of Congress are exploring new legislative approaches to political advertising—and that’s a conversation we welcome. We are already working with some of you on how best to put new requirements into law. But we aren’t waiting for legislation. Instead we’re taking steps where we can on our own, to improve our own approach to transparency, ad review, and authenticity requirements.

a. Providing Transparency. We believe that when you see an ad, you should know who ran it to be able to understand what other ads they’re running—which is why we show you the Page name for any ads that run in your News Feed.

To provide even greater transparency for people and accountability for advertisers, we’re now building new tools that will allow you to see the other ads a Page is running as well—including ads that aren’t targeted to you directly. We hope that this will establish a new standard for our industry in ad transparency. We try to catch material that shouldn’t be on Facebook before it’s even posted—but because this is not always possible, we also take action when people report ads that violate our policies. We’re grateful to our community for this support, and hope that more transparency will mean more people can report violating ads.

b. Enforcing Our Policies. We rely on both automated and manual ad review, and we’re now taking steps to strengthen both. Reviewing ads means assessing not just what’s in an ad but also the context in which it was bought and the intended audience—so we’re changing our ads review system to pay more attention to these signals. We’re also adding more than 1,000 people to our global ads review teams over the next year and investing more in machine learning to better understand when to flag and take down ads. Enforcement is never perfect, but we will get better at finding and removing improper ads.

c. Restricting Ad Content. We hold people on Facebook to our Community Standards, and we hold advertisers to even stricter guidelines. Our ads policies already prohibit shocking content, direct threats and the promotion of the sale or use of weapons. Going forward, we are expanding these policies to prevent ads that use even more subtle expressions of violence.

III. CONCLUSION

Any attempt at deceptive interference using our platform is unacceptable, and runs counter to everything we are working toward. What happened in the 2016 election cycle was an affront to us, and, more importantly, to the people who come to Facebook every day to have authentic conversations and to share. We are committed to learning from these events, and to improving. We know we have a responsibility to do our part—and to do better. We look forward to working with everyone on this Committee, in the government, and across the tech industry and civil society, to address this important national security matter so that we can prevent similar abuse from happening again.


United States Senate Select Committee on Intelligence

Testimony of Sean J. Edgett, Acting General Counsel, Twitter, Inc.

November 1, 2017

Chairman Burr, Vice Chairman Warner, and Members of the Committee:

Twitter understands the importance of the Committee’s inquiry into Russia’s interference in the 2016 election, and we appreciate the opportunity to appear here today.

The events underlying this hearing have been deeply concerning to our company and the broader Twitter community. We are committed to providing a service that fosters and facilitates free and open democratic debate and that promotes positive change in the world. We take seriously reports that the power of our service was misused by a foreign actor for the purpose of influencing the U.S. presidential election and undermining public faith in the democratic process.

Twitter is familiar with problems of spam and automation, including how they can be used to amplify messages. The abuse of those methods by sophisticated foreign actors to attempt state-sponsored manipulation of elections is a new challenge for us—and one that we are determined to meet. Today, we intend to demonstrate the seriousness of our commitment to addressing this new threat, both through the effort that we are devoting to uncovering what happened in 2016 and by taking steps to prevent it from happening again.

We begin by explaining the values that shape Twitter and that we aspire as a community to promote and embody. We then describe our response to reports about the role of automation in the 2016 election and on social media more generally. As we discuss, that response includes the creation of a dedicated team within Twitter to enhance the quality of the information our users see and to block malicious activity whenever and wherever we find it. In addition, we have launched a retrospective analysis of activity on our system that indicates Russian efforts to influence the 2016 election through automation, coordinated activity, and advertising. Although the work of that review continues, we share what we know, today, in the interests of transparency and out of appreciation for the urgency of this matter. We do so recognizing that our findings may be supplemented as we work with Committee staff and other companies, discover more facts, and gain a greater understanding of these events. Indeed, what happened on Twitter is only one part of the story, and the Committee is best positioned to see how the various pieces fit together. We look forward to continued partnership, information sharing, and feedback.

We also detail the steps we are taking to ensure that Twitter remains a safe, open, transparent, and positive platform for our users. Those changes include enhanced safety policies, better tools and resources for detecting and stopping malicious activity, tighter advertising standards, and increased transparency to promote public understanding of all of these areas. Our work on these challenges will continue for as long as malicious actors seek to abuse our system, and will need to evolve to stay ahead of new tactics.

We are resolved to continue this work in coordination with the government and our industry peers. Twitter believes that this hearing is an important step toward furthering our shared understanding of how social media platforms, working hand-in-hand with the public and private sectors, can prevent this type of abuse both generally and, of critical importance, in the context of the electoral process.

I. Twitter’s Values

Twitter was founded upon and remains committed to a core set of values that have guided us as we respond to the new threat that brings us here today.

Among those values are defending and respecting the user’s voice—a two-part commitment to freedom of expression and privacy. Twitter has a history of facilitating civic engagement and political freedom, and we intend for Twitter to remain a vital avenue for free expression here and abroad. But we cannot foster free expression without ensuring trust in our platform. We are determined to take the actions necessary to prevent the manipulation of Twitter, and we can and must make sure Twitter is a safe place.

Keeping Twitter safe includes maintaining the quality of information on our platform. Our users look to us for useful, timely, and appropriate information. To preserve that experience, we are always working to ensure that we surface for our users the highest quality and most relevant content first. While Twitter’s open and real-time environment is a powerful antidote to the abusive spreading of false information, we do not rest on user interaction alone. We are taking active steps to stop malicious accounts and Tweets from spreading, and we are determined that our strategies will keep ahead of the tactics of bad actors.

Twitter is founded on a commitment to transparency. Since 2012, we have published the Twitter Transparency Report on a semiannual basis, providing the public with key metrics about requests from governments and certain private actors for user information, content removal, copyright violations, and most recently, Terms of Service (“TOS”) violations. We are also committed to open communication about how we enforce our TOS and the Twitter Rules, and about how we protect the privacy of our users.

Following through on those commitments takes both resolve and resources. And the fight against malicious activity and abuse goes beyond any single election or event. We work every day to give everyone the power to create and share ideas and information instantly, without barriers.

II. Background on Twitter’s Operation

Understanding the steps we are taking to address election-related abuse of our platform requires an explanation of certain fundamentals of Twitter’s operation. We therefore turn now to a description of the way our users interact with our system, how we approach automated content, and the basics of advertising on Twitter.

A. User Interaction

Twitter has 330 million monthly active users around the world, 67 million of whom are located in the United States. Users engage with our platform in a variety of ways. Users choose what content they primarily see by following (and unfollowing) other user accounts. Users generate content on the platform by Tweeting original content, including text, hashtags, photos, GIFs, and videos. They may also reply to Tweets, Retweet content already posted on the platform, and like Tweets and Retweets; the metric we use to describe such activity is “engagement”—the different ways in which users are engaged with the content they are viewing. Users can also exchange messages with users and accounts they follow (or, if their privacy settings permit, with any other user) through direct messaging (“DM”).

The volume of activity on our system is enormous: Our users generate thousands of Tweets per second, hundreds of thousands of Tweets per minute, hundreds of millions of Tweets per day, and hundreds of billions of Tweets every year.

Another metric we use is how many times a specific piece of content such as a Tweet is viewed. That metric—which we refer to as “impressions”—does not require any additional engagement by the user; viewing content generates an impression, although there is no guarantee that a user has actually read the Tweet. Impressions are not “unique,” so multiple impressions may be created by one account, by a single person using multiple accounts, or by many accounts.

A third important concept is “trends.” Trends are words, phrases, or hashtags that may relate to an event or other topic (e.g., #CommitteeHearing). Twitter detects trends through an advanced algorithm that picks up on topics about which activity is growing quickly and thus showing a new or heightened interest among our users. Trends thus do not measure the aggregate popularity of a topic, but rather the velocity of Tweets with related content. The trends that a user sees may depend on a number of factors, including their location and their interests. If a user clicks on a trend, the user can see Tweets that contain that hashtag.
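A minimal sketch of velocity-based trend detection appears below. It is not Twitter's algorithm; it simply illustrates the distinction drawn above between the growth rate of a topic and its aggregate popularity.

```python
# Illustrative velocity-based trend detection: what matters is how quickly mentions of a
# term are growing between time windows, not how many mentions it has overall.
from collections import Counter

def trending_terms(current: Counter, previous: Counter, min_count: int = 100, top_n: int = 10):
    scores = {}
    for term, count in current.items():
        if count < min_count:
            continue                              # ignore very low-volume terms
        baseline = previous.get(term, 1)
        scores[term] = count / baseline           # velocity, not absolute popularity
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# A term with 5,000 mentions every hour will not trend; one jumping from 50 to 2,000 will.
print(trending_terms(Counter({"#CommitteeHearing": 2000, "#weather": 5000}),
                     Counter({"#CommitteeHearing": 50, "#weather": 5000})))
```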

B. Malicious Automation and Responsive Measures

Automation refers to a process that generates user activity—Tweets, likes, or following behavior—without ongoing human input. Automated activity may be designed to occur on a schedule, or it may be designed to respond to certain signals or events. Accounts that rely on automation are sometimes also referred to as “bots.”

Automation is not categorically prohibited on Twitter; in fact, it often serves a useful and important purpose. Automation is essential for certain informational content, particularly when time is of the essence, including for law enforcement or public safety notifications. Examples include Amber Alerts, earthquake and storm warnings, and notices to “shelter in place” during active emergency situations. Automation is also used to provide customer service for a range of companies. For example, as of April 11, 2017, users are able to Tweet @TwitterSupport to request assistance from Twitter. If a user reports a forgotten password or has a question about our rules, the initial triage of those messages is performed by our automated system—a Twitter-developed program to assist users in troubleshooting account issues.

But automation can also be used for malicious purposes, most notably in generating spam—unwanted content consisting of multiple postings either from the same account or from multiple coordinated accounts. While “spam” is frequently viewed as having a commercial element since it is a typical vector for spreading advertising, Twitter’s Rules take an expansive view of spam because it negatively impacts the user experience. Examples of spam violations on Twitter include automatically Retweeting content to reach as many users as possible, automatically Tweeting about topics on Twitter in an attempt to manipulate trends, generating multiple Tweets with hashtags unrelated to the topics of those hashtags, repeatedly following and unfollowing accounts to tempt other users to follow reciprocally, tweeting duplicate replies and mentions, and generating large volumes of unsolicited mentions.

Our systems are built to detect automated and spam accounts across their lifecycles, including detection at the account creation and login phase and detection based on unusual activity (e.g., patterns of Tweets, likes, and follows). Our ability to detect such activity on our platform is bolstered by internal, manual reviews conducted by Twitter employees. Those efforts are further supplemented by user reports, which we rely on not only to address the content at issue but also to calibrate our detection tools to identify similar content as spam.

Once our systems detect an account as generating automated content or spam, we can take action against that account at either the account level or the Tweet level. Depending on the mode of detection, we have varying levels of confidence about our determination that an account is violating our rules. We have a range of options for enforcement, and generally, the higher our confidence that an account is violating our rules, the stricter our enforcement action will be, with immediate suspension as the harshest penalty. If we are not sufficiently confident to suspend an account on the basis of a given detection technique, we may challenge the account to verify a phone number or to otherwise prove human operation, or we may flag the account for review by Twitter personnel. Until the user completes the challenge, or until the review by our teams has been completed, the account is temporarily suspended; the user cannot produce new content (or perform actions like Retweets or likes), and the account’s contents are hidden from other Twitter users.

We also have the capability to detect suspicious activity at the Tweet level and, if certain criteria are met, to internally tag that Tweet as spam, automated, or otherwise suspicious. Tweets that have been assigned those designations are hidden from searches, do not count toward generating trends, and generally will not appear in feeds unless a user follows that account. Typically, users whose Tweets are designated as spam are also put through the challenges described above and are suspended if they cannot pass.
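The tiered response described in the preceding two paragraphs can be summarized in a short sketch. The numeric thresholds and action names below are illustrative assumptions, not Twitter's actual values.

```python
# Hypothetical mapping from detection confidence to enforcement action; thresholds are illustrative.
def choose_enforcement(spam_confidence: float) -> str:
    if spam_confidence >= 0.95:
        return "suspend"        # highest confidence: immediate suspension
    if spam_confidence >= 0.70:
        return "challenge"      # e.g., phone verification; account restricted until passed
    if spam_confidence >= 0.40:
        return "human_review"   # queue the account for review by Twitter personnel
    return "no_action"          # the signal may still inform future detection models

assert choose_enforcement(0.99) == "suspend"
assert choose_enforcement(0.75) == "challenge"
```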

For safety-related TOS violations, we have a number of enforcement options. For example, we can stop the spread of malicious content by categorizing a Tweet as “restricted pending deletion,” which requires a user to delete the Tweet before the user is permitted to continue using the account and engaging with the platform. So long as the Tweet is restricted—and until the user deletes the Tweet—the Tweet remains inaccessible to and hidden from all Twitter users. The user is blocked from Tweeting further unless and until he or she deletes the restricted Tweet. This mechanism is a common enforcement approach to addressing less severe content violations of our TOS outside the spam context; it also promotes education among our users. More serious violations, such as posting child sexual exploitation or promoting terrorism, result in immediate suspension and may prompt interaction with law enforcement.

C. Advertising Basics

Advertising on Twitter generally takes the form of promoted Tweets, which advertisers purchase to reach new groups of users or spark engagement from their existing followers. Promoted Tweets are clearly labeled as “promoted” when an advertiser pays for their placement on Twitter. In every other respect, promoted Tweets look and act just like regular Tweets and can be Retweeted, replied to, and liked.

Advertisers can post promoted Tweets through a self-service model on the Twitter platform or through account managers, who manage relationships with advertising partners. When purchasing a promoted Tweet, an advertiser can target its audience based on information such as interests, geography, gender, device type, or other specific characteristics. For most campaigns, advertisers pay only when users engage with the promoted Tweet, such as following the advertiser; liking, replying to, or clicking on the Tweet; watching a Tweet’s video; or taking some other action.

Because promoted Tweets are presented to our users from accounts they have not yet chosen to follow, Twitter applies to those Tweets a robust set of policies that prohibit, among other things, ads for illegal goods and services, ads making misleading or deceptive claims, ads for drugs or drug paraphernalia, ads containing hate content, sensitive topics, and violence, and ads containing offensive or inflammatory content.

Twitter relies on two methods to prevent prohibited promoted content from appearing on the platform: a proactive method and a reactive method. Proactively, Twitter relies on custom-built algorithms and models for detecting Tweets or accounts that might violate its advertising policies. Reactively, Twitter takes user feedback through a “Report Ad” process, which flags an ad for manual human review. Once our teams have reviewed the content, typically one of three decisions will be made: if the content complies with our policy, we may approve it; if the content/account violates the policy, we may stop the particular Tweet from being promoted to users; or, if Twitter deems the account to be in repeated violation of our policies at the Tweet level, we may revoke an account’s advertising privileges (also known as off-boarding the advertiser).
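As a simple illustration of the reactive path, the sketch below maps the outcome of a manual review of a reported ad onto the three decisions described above. The repeat-violation threshold is a hypothetical placeholder.

```python
# Illustrative decision logic for a reported promoted Tweet; the threshold is hypothetical.
def review_reported_ad(complies_with_policy: bool, prior_tweet_level_violations: int) -> str:
    if complies_with_policy:
        return "approve"                        # the content may continue to be promoted
    if prior_tweet_level_violations >= 3:       # repeated violations at the Tweet level
        return "off-board advertiser"           # revoke the account's advertising privileges
    return "stop promotion of this Tweet"       # the Tweet can no longer be promoted

print(review_reported_ad(complies_with_policy=False, prior_tweet_level_violations=0))
```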

III. Malicious Automation in the 2016 Election: Real-Time Observations and Response

Although Twitter has been fighting the problem of spam and malicious automation for many years, in the period preceding the 2016 election we observed new ways in which accounts were abusing automation to propagate misinformation on our platform. Among other things, we noticed accounts that Tweeted false information about voting in the 2016 election, automated accounts that Tweeted about trending hashtags, and users who abused the developer access we provide to our platform.

At the time, we understood these to be isolated incidents, rather than manifestations of a larger, coordinated effort at misinformation on our platform. Once we understood the systemic nature of the problem in the aftermath of the election, we launched a dedicated initiative to research and combat that new threat.

A. Malicious Automation and Misinformation Detected in 2016

We detected examples of automated activity and deliberate misinformation in 2016, including in the run-up to the 2016 election, that in retrospect appear to be signals of the broader automation problem that came into focus after the election had concluded.

On December 2, 2016, for example, we learned of @PatrioticPepe, an account that automatically replied to all Tweets from @realDonaldTrump with spam content. Those automatic replies were enabled through an application that had been created using our Application Programming Interface (“API”). Twitter provides access to the API for developers who want to design Twitter-compatible applications and innovate using Twitter data. Some of the most creative uses of our platform originate with applications built on our API, but we know that a large quantity of automated spam on our platform is also generated and disseminated through such applications. We noticed an upward swing in such activity during the period leading up to the election, and @PatrioticPepe was one such example. On the same day we identified @PatrioticPepe, we suspended the API credentials associated with that user for violation of our automation rules. On average, we take similar actions against violative applications more than 7,000 times per week.

Another example of aberrant activity we identified and addressed during this period involved voter suppression efforts. In particular, Twitter identified, and has since provided to the Committee, examples of Tweets with images in English and Spanish that encouraged Clinton supporters to vote online, vote by phone, or vote by text.

In response to the attempted “vote-by-text” effort and similar voter suppression attempts, Twitter restricted as inaccessible, pending deletion, 918 Tweets from 529 users who proliferated that content. Twitter also permanently suspended 106 accounts that were collectively responsible for 734 “vote-by-text” Tweets. Twitter identified, but did not take action against, an additional 286 Tweets of the relevant content from 239 Twitter accounts, because we determined that those accounts were seeking to refute the “text-to-vote” message and alert other users that the information was false and misleading. Notably, those refuting Tweets generated significantly greater engagement across the platform compared to the Tweets spreading the misinformation—8 times as many impressions, engagement by 10 times as many users, and twice as many replies.

Before the election, we also detected and took action on activity relating to hashtags that have since been reported as manifestations of efforts to interfere with the 2016 election. For example, our automated spam detection systems helped mitigate the impact of automated Tweets promoting the #PodestaEmails hashtag, which originated with Wikileaks’ publication of thousands of emails from the Clinton campaign chairman John Podesta’s Gmail account. The core of the hashtag was propagated by Wikileaks, whose account sent out a series of 118 original Tweets containing variants on the hashtag #PodestaEmails referencing the daily installments of the emails released on the Wikileaks website. In the two months preceding the election, around 57,000 users posted approximately 426,000 unique Tweets containing variations of the #PodestaEmails hashtag. Approximately one quarter (25%) of those Tweets received internal tags from our automation detection systems that hid them from searches. As described in greater detail below, our systems detected and hid just under half (48%) of the Tweets relating to variants of another notable hashtag, #DNCLeak, which concerned the disclosure of leaked emails from the Democratic National Committee. These steps were part of our general efforts at the time to fight automation and spam on our platform across all areas.

B. Information Quality Initiative

After the election, we followed with great concern the reports that malicious actors had used automated activity and promoted deliberate falsehoods on social media as part of a coordinated misinformation campaign. Along with other platforms that were focused on the problem, we realized that the instances our automated systems had detected in 2016 were not isolated but instead represented a broader pattern of conduct that we needed to address in a more comprehensive way.

Recognizing that elections continue and that the health and safety of our platform was a top priority, our first task was to prevent similar abuse in the future. We responded by launching an initiative to combat the problem of malicious automation and disinformation going forward. The objective of that effort, called the Information Quality initiative, is to enhance the strategies we use to detect and deny bad automation, improve machine learning to spot spam, and increase the precision of our tools designed to prevent such content from contaminating our platform.

Since the 2016 election, we have made significant improvements to reduce external attempts to manipulate content visibility. These improvements were driven by investments into methods to detect malicious automation through abuse of our API, limit the ability of malicious actors to create new accounts in bulk, detect coordinated malicious activity across clusters of accounts, and better enforce policies against abusive third-party applications.

Our efforts have produced clear results in terms of our ability to detect and block such content. With our current capabilities, we detect and block approximately 450,000 suspicious logins each day that we believe to be generated through automation. In October 2017, our systems identified and challenged an average of 4 million suspicious accounts globally per week, including over three million challenged upon signup, before they ever had an impact on the platform—more than double our rate of detection at this time last year.

We also recognized the need to address more systematically spam generated by third-party applications, and we have invested in the technology and human resources required to do so. Our efforts have been successful. Since June 2017, we have suspended more than 117,000 malicious applications for abusing our API. Those applications are collectively responsible for more than 1.5 billion Tweets posted in 2017.

We have developed new techniques for identifying patterns of activity inconsistent with legitimate use of our platform (such as near-instantaneous replies to Tweets, non-random Tweet timing, and coordinated engagement), and we are currently implementing these detections across our platform. We have improved our phone verification process and introduced new challenges, including reCAPTCHAs (utilizing an advanced risk-analysis engine developed by Google), to give us additional tools to validate that a human is in control of an account. We have enhanced our capabilities to link together accounts that were formed by the same person or that are working in concert. And we are improving how we detect when accounts may have been hacked or compromised.
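One of the timing signals mentioned above, near-instantaneous and highly regular replies, can be illustrated with a short sketch; the thresholds are assumptions chosen only for illustration.

```python
# Illustrative timing check: flag accounts whose reply delays are implausibly fast and regular.
from statistics import mean, pstdev

def looks_automated(reply_delays_sec: list) -> bool:
    if len(reply_delays_sec) < 20:
        return False                              # not enough evidence to judge
    fast = mean(reply_delays_sec) < 2.0           # replies land almost immediately
    regular = pstdev(reply_delays_sec) < 0.5      # with almost no human-like variation
    return fast and regular

print(looks_automated([1.1, 1.2, 1.0, 1.3] * 10))     # True: fast and uniform
print(looks_automated([4.0, 35.0, 120.0, 9.0] * 10))  # False: human-like variation
```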

In the coming year, we plan to build upon our 2017 improvements, specifically including efforts to invest even further in machine-learning capabilities that help us detect and mitigate the effect on users of fake, coordinated, and automated account activity. Our engineers and product specialists continue this work every day, further refining our systems so that we capture and address as much malicious content as possible. We are committed to continuing to invest all necessary resources into making sure that our platform remains safe for our users.

We also actively engage with civil society and journalistic organizations on the issue of misinformation. Enhancing media literacy is critical to ensuring that voters can discern which sources of information have integrity and which may be suspect. We are creating a dedicated media literacy program to demonstrate how Twitter can be an effective tool in media literacy education. Moreover, we engage in collaborations and trainings with NGOs, such as the Committee to Protect Journalists, Reporters Without Borders, and the Reporters Committee for Freedom of the Press. We do so in order to ensure that journalists and journalistic organizations are familiar with how to utilize Twitter effectively and to convey timely information around our policies and practices.

IV. Retrospective Reviews of Malicious Activity in the 2016 Election

In addition to the forward-looking efforts we launched in the immediate aftermath of the election, we have initiated a focused, retrospective review of malicious Russian activity specifically in connection with last year’s presidential election. Those reviews cover the core Twitter product as well as the advertising product. They draw on all parts of the company and involve a significant commitment of resources and time. We are reporting on our progress today and commit to providing updates to the Committee as our work continues.

A. Malicious Automated and Human-Coordinated Activity

For our review of Twitter’s core product, we analyzed election-related activity from the period preceding and including the election (September 1, 2016 to November 15, 2016) in order to identify content that appears to have originated from automated accounts or from human-coordinated activity associated with Russia. We then assessed the results to discern trends, evaluate our existing detection systems, and identify areas for improvement and enhancement of our detection tools.

1. Methodology

We took a broad approach for purposes of our review of what constitutes an election-related Tweet, relying on annotations derived from a variety of information sources, including Twitter handles, hashtags, and Tweets about significant events. For example, Tweets mentioning @HillaryClinton and @realDonaldTrump received an election-related annotation, as did Tweets that included #primaries and #feelthebern. In total, we included more than 189 million Tweets annotated in this way out of the total corpus of more than 16 billion unique Tweets posted during this time period (excluding Retweets).
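A minimal sketch of this kind of annotation appears below. The handle and hashtag lists reuse the examples given above; the matching logic is an illustrative simplification of the actual annotation process.

```python
# Illustrative annotation: a Tweet is tagged election-related if it mentions known handles or hashtags.
ELECTION_HANDLES = {"@hillaryclinton", "@realdonaldtrump"}
ELECTION_HASHTAGS = {"#primaries", "#feelthebern"}

def is_election_related(tweet_text: str) -> bool:
    tokens = {token.lower().rstrip(".,!?") for token in tweet_text.split()}
    return bool(tokens & (ELECTION_HANDLES | ELECTION_HASHTAGS))

print(is_election_related("Heading to the #primaries today!"))  # True
print(is_election_related("Nice weather in Washington."))       # False
```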

To ensure that we captured all relevant automated accounts in our review, Twitter analyzed the data not only using the detection tools that existed at the time the activity occurred, but also using newly developed and more robust detection tools that have been implemented since then. We compared the results to determine whether our new detection tools are able to capture automated activity that our 2016 techniques could not. These analyses do not attempt to differentiate between “good” and “bad” automation; they rely on objective, measurable signals, such as the timing of Tweets and engagements, to classify a given action as automated.

We took a similarly expansive approach to defining what qualifies as a Russian-linked account. Because there is no single characteristic that reliably determines geographic origin or affiliation, we relied on a number of criteria, including whether the account was created in Russia, whether the user registered the account with a Russian phone carrier or a Russian email address, whether the user’s display name contains Cyrillic characters, whether the user frequently Tweets in Russian, and whether the user has logged in from any Russian IP address, even a single time. We considered an account to be Russian-linked if it met even one of these criteria.
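A minimal sketch of that "any one criterion" test appears below. The record fields are hypothetical stand-ins for the signals described above, which in practice are far richer than flat boolean flags.

```python
# Hypothetical field names for the criteria described above.
RUSSIAN_LINK_CRITERIA = (
    "created_in_russia",
    "russian_phone_carrier",
    "russian_email_address",
    "cyrillic_in_display_name",
    "frequently_tweets_in_russian",
    "ever_logged_in_from_russian_ip",
)

def is_russian_linked(account: dict) -> bool:
    # An account qualifies if even a single criterion is met.
    return any(account.get(c, False) for c in RUSSIAN_LINK_CRITERIA)

print(is_russian_linked({"ever_logged_in_from_russian_ip": True}))  # True
print(is_russian_linked({}))                                        # False
```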

Despite the breadth of our approach, there are technological limits to what we can determine based on the information we can detect regarding a user’s origin. In the course of our analysis—and based in part on work conducted by our Information Quality team—we observed that a high concentration of automated engagement and content originated from data centers and users accessing Twitter via Virtual Private Networks (“VPNs”) and proxy servers. In fact, nearly 12% of Tweets created during the election originated with accounts that had an indeterminate location. Use of such facilities obscures the actual origin of traffic. Although our conclusions are thus necessarily contingent on the limitations we face, and although we recognize that there may be other methods for analyzing the data, we believe our approach is the most effective way to capture an accurate understanding of activity on our system.

2. Analysis and Key Findings

We began our review with a universe of over 16 billion Tweets—the total volume of original Tweets on our platform during the relevant period. Applying the methodology described above, and using detection tools we currently have in place, we identified 36,746 accounts that generated automated, election-related content and had at least one of the characteristics we used to associate an account with Russia.

During the relevant period, those accounts generated approximately 1.4 million automated, election-related Tweets, which collectively received approximately 288 million impressions.

Because of the scale on which Twitter operates, it is important to place those numbers in context:

  •  The 36,746 automated accounts that we identified as Russian-linked and tweeting election-related content represent approximately one one-hundredth of a percent (0.012%) of the total accounts on Twitter at the time.

  •  The 1.4 million election-related Tweets that we identified through our retrospective review as generated by Russian-linked, automated accounts constituted less than three quarters of one percent (0.74%) of the overall election-related Tweets on Twitter at the time. See Appendix 1.

  •  Those 1.4 million Tweets received only one-third of a percent (0.33%) of impressions on election-related Tweets. In the aggregate, automated, Russian-linked, election-related Tweets consistently underperformed in terms of impressions relative to their volume on the platform. See Appendix 2.

In 2016, we detected and labeled some, but not all, of those Tweets using our then-existing anti-automation tools. Specifically, in real time, we detected and labeled as automated over half of the Tweets (791,000) from approximately half of the accounts (18,064), representing 0.42% of overall election-related Tweets and 0.14% of election-related Tweet impressions.

Thus, based on our analysis of the data, we determined that the number of accounts we could link to Russia and that were Tweeting election-related content was small in comparison to the total number of accounts on our platform during the relevant time period. Similarly, the volume of automated, election-related Tweets that originated from those accounts was small in comparison to the overall volume of election-related activity on our platform. And those Tweets generated significantly fewer impressions as compared to a typical election-related Tweet.

3. Level of Engagement

In an effort to better understand the impact of Russian-linked accounts on broader conversations on Twitter, we examined those accounts’ volume of engagements with election-related content.

We first reviewed the accounts’ engagement with Tweets from @HillaryClinton and @realDonaldTrump. Our data showed that, during the relevant time period, a total of 1,625 @HillaryClinton Tweets were Retweeted approximately 8.3 million times. Of those Retweets, 32,254—or 0.39%—were from Russian-linked automated accounts. Tweets from @HillaryClinton received approximately 18 million likes during this period; 111,326—or 0.62%—were from Russian-linked automated accounts. The volume of engagements with @realDonaldTrump Tweets from Russian-linked automated accounts was higher, but still relatively small. The 851 Tweets from the @realDonaldTrump account during this period were Retweeted more than 11 million times; 416,632—or 3.66%—of those Retweets were from Russian-linked, automated accounts. Those Tweets received approximately 27 million likes across our platform; 480,346—or 1.8%—of those likes came from Russian-linked automated accounts.
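As a consistency check, the short calculation below reproduces those percentages from the raw counts reported in this testimony. The approximate totals given in the text are used as denominators, so the results come out close to the rounded figures above; the @realDonaldTrump Retweet share lands slightly above 3.66% only because the exact total is given as "more than 11 million."

```python
# Engagement shares recomputed from the counts reported above
# (denominators are the approximate totals given in the text).
engagements = {
    "@HillaryClinton Retweets":  (32_254, 8_300_000),
    "@HillaryClinton likes":     (111_326, 18_000_000),
    "@realDonaldTrump Retweets": (416_632, 11_000_000),
    "@realDonaldTrump likes":    (480_346, 27_000_000),
}
for label, (from_russian_linked, total) in engagements.items():
    print(f"{label}: {100 * from_russian_linked / total:.2f}%")
# -> roughly 0.39%, 0.62%, 3.79% (vs. 3.66% against the exact Retweet total), 1.78%
```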

We also reviewed engagement between automated or Russian-linked accounts and the @Wikileaks, @DCLeaks_, and @GUCCIFER_2 accounts. The amount of automated engagement with these accounts ranged from 47% to 72% of Retweets and 36% to 63% of likes during this time—substantially higher than the average level of automated engagement, including with other high-profile accounts. The volume of automated engagements from Russian-linked accounts was lower overall. Our data show that, during the relevant time period, a total of 1,010 @Wikileaks Tweets were Retweeted approximately 5.1 million times. Of those Retweets, 155,933—or 2.98%—were from Russian-linked automated accounts. The 27 Tweets from @DCLeaks_ during this time period were Retweeted approximately 4,700 times, of which 1.38% were from Russian-linked automated accounts. The 23 Tweets from @GUCCIFER_2 during this time period were Retweeted approximately 18,000 times, of which 1.57% were from Russian-linked automated accounts.

We next examined activity surrounding hashtags that have been reported as potentially connected to Russian interference efforts. We noted above that, with respect to two such hashtags—#PodestaEmails and #DNCLeak—our automated systems detected, labeled, and hid a portion of related Tweets at the time they were created. The insights from our retrospective review have allowed us to draw additional conclusions about the activity around those hashtags.

We found that slightly under 4% of Tweets containing #PodestaEmails came from accounts with potential links to Russia, and that those Tweets accounted for less than 20% of impressions within the first seven days of posting. Approximately 75% of impressions on the trending topic were views by U.S.-based users. A significant portion of these impressions, however, are attributable to a handful of high-profile accounts, primarily @Wikileaks. At least one heavily-retweeted Tweet came from another potentially Russia-linked account that showed signs of automation.

With respect to #DNCLeak, approximately 23,000 users posted around 140,000 unique Tweets with that hashtag in the relevant period. Of those Tweets, roughly 2% were from potentially Russian-linked accounts. As noted above, our automated systems at the time detected, labeled, and hid just under half (48%) of all the original Tweets with #DNCLeak. Of the total Tweets with the hashtag, 0.84% were hidden and also originated from accounts that met at least one of the criteria for a Russian-linked account. Those Tweets received 0.21% of overall Tweet impressions. We learned that a small number of Tweets from several large accounts were principally responsible for the propagation of this trend. In fact, two of the ten most-viewed Tweets with #DNCLeak were posted by @Wikileaks, an account with millions of followers.
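For a rough sense of scale, the back-of-envelope arithmetic below converts the #DNCLeak percentages above into approximate Tweet counts. These derived figures are illustrative approximations of approximations, not exact numbers from Twitter's data.

```python
# Approximate #DNCLeak Tweet counts implied by the percentages above.
total_dncleak_tweets = 140_000
russian_linked = round(total_dncleak_tweets * 0.02)        # ~2,800 from potentially Russian-linked accounts
hidden_at_time = round(total_dncleak_tweets * 0.48)        # ~67,200 detected, labeled, and hidden in 2016
hidden_and_russian = round(total_dncleak_tweets * 0.0084)  # ~1,176 both hidden and Russian-linked
print(russian_linked, hidden_at_time, hidden_and_russian)
```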

4. Human-Coordinated Russian-Linked Accounts

We separately analyzed the accounts that we have thus far identified through information obtained from third-party sources as linked to the Internet Research Agency (“IRA”). We have so far identified 2,752 such accounts. Those 2,752 accounts include the 201 accounts that we previously identified to the Committee. In responding to the Committee’s requests, we have since connected the 201 accounts to other efforts to locate IRA-linked accounts using third-party information. We discovered that we had identified some of the 201 accounts as early as 2015, and that many had already been suspended as part of those earlier efforts. Our retrospective work, guided by information provided by investigators and others, has thus allowed us to connect the 201 accounts to broader Russian election-focused efforts, including the full set of accounts that we now believe are associated with the IRA. This is an active area of inquiry, and we will update the Committee as we continue the analysis.

The 2,752 IRA-linked accounts exhibited a range of behaviors, including automation. Of the roughly 131,000 Tweets posted by those accounts during the relevant time period, approximately 9% were election-related, and many of their Tweets—over 47%—were automated.

While automation may have increased the volume of content created by these accounts, IRA-linked accounts exhibited non-automated patterns of activity that attempted more overt forms of broadcasting their message. Some of those accounts represented themselves as news outlets, members of activist organizations, or politically-engaged Americans. We have seen evidence of the accounts actively reaching out to journalists and prominent individuals (without the use of automation) through mentions. Some of the accounts appear to have attempted to organize rallies and demonstrations, and several engaged in abusive behavior and harassment. All 2,752 accounts have been suspended, and we have taken steps to block future registrations related to these accounts.

B. Advertising Review

In the second component of our retrospective review, we focused on determining whether or how malicious Russian actors may have sought to abuse our platform using advertising.

1. Methodology

To evaluate the scope and impact of election-related advertisements, we used a custom-built machine-learning model that we refined over a number of iterations to maximize accuracy. That model was designed to detect all election-related content in the universe of English-language promoted Tweets that appeared on our system in 2016.

Our model yielded 6,493 accounts. We then divided those accounts into three categories of high, medium, and low interest based on a number of factors: the number of promoted Tweets the account had purchased in 2016, the percentage of the account’s promoted Tweets that our model suggested were election-related (a concept known as “election density”), whether the account had Russian-specific characteristics, and whether the account had non-Russian international characteristics.

For the purpose of this review, we deemed an account to be Russian-linked if any of the following criteria were present: (1) the account had a Russian email address, mobile number, credit card, or login IP; (2) Russia was the declared country on the account; or (3) Russian language or Cyrillic characters appeared in the account information or name. (As in the core-product review, here too, we encountered technological challenges associated with VPNs, data centers, and proxy servers that do not allow us to identify location.) We treated as election-related any promoted Tweets that referred to any candidates (directly or indirectly), political parties, notable debate topics, the 2016 election generally, events associated with the election, or any political figures in the United States.
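The sketch below illustrates how an advertiser account might be triaged using the "election density" concept and a Russian-linked signal of the kind described above. The tier thresholds and function names are invented for illustration; the testimony does not disclose the actual cutoffs or scoring logic used in the review.

```python
# Illustrative triage only; thresholds are assumptions, not Twitter's cutoffs.
def election_density(promoted: int, flagged_election_related: int) -> float:
    """Share of an account's promoted Tweets the model flagged as election-related."""
    return flagged_election_related / promoted if promoted else 0.0

def interest_tier(promoted, flagged, russian_signal, intl_signal,
                  hi_density=0.5, mid_density=0.1):
    density = election_density(promoted, flagged)
    if russian_signal and density >= hi_density:
        return "high"
    if russian_signal or intl_signal or density >= mid_density:
        return "medium"
    return "low"

print(interest_tier(promoted=200, flagged=150, russian_signal=True,  intl_signal=False))  # high
print(interest_tier(promoted=200, flagged=5,   russian_signal=False, intl_signal=False))  # low
```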

Experienced advertising policy content reviewers then engaged in a manual evaluation of each account to determine whether it had promoted violative content in 2016. While we reviewed every account, the level of review corresponded to the category in which the account belonged. For high-interest accounts (197), we reviewed 100 percent of the account’s promoted content, as well as information about the account itself, including location and past advertising activity. For other types of accounts, we adjusted our level of manual review according to the interest category of the account. For the medium-interest accounts (1,830), we reviewed approximately three quarters of the promoted content associated with the account, together with the account information. For the low-interest accounts (4,466), we reviewed about one quarter of the promoted content, together with other account information. For each Tweet our reviewers examined, the reviewers evaluated its contents, including any attached media, geographical and keyword targeting, and account-level details, such as profile, avatar, and non-promoted Tweets. Reviewers looked at the Russian signals connected to any account, regardless of its interest category.

Finally, we tested our results against accounts we knew to be Russian, such as Russia Today accounts, to ensure that our methodology was sound. As we did with the retrospective review of election-related Tweets, we evaluated the advertising data both using the policies in place at the time and using our new policies that we have since introduced. That permitted us to compare what we would have detected and stopped promoting during the relevant time period had the more recent improvements been in place.

2. Analysis and Key Findings

We identified nine accounts that met at least one of the criteria for a Russian-linked account and promoted election-related Tweets that, based on our manual review, violated existing or recently implemented ads policies, such as those prohibiting inflammatory or low-quality content.

Two of those accounts were @RT_COM and @RT_America. Those two accounts represented the vast majority of the promoted Tweets, spend, and impressions for the suspect group identified in our review. Together, the two accounts spent $516,900 on advertising in 2016, with $234,600 of that amount devoted to ads that were served to users in the U.S. During that period, the two accounts promoted 1,912 Tweets and generated approximately 192 million impressions across all ad campaigns, with approximately 53.5 million representing impressions generated by U.S.-based users.

On Thursday, October 26, 2017, Twitter announced that it would no longer accept advertisements from RT and would donate the $1.9 million that RT had spent globally on advertising on Twitter to academic research into elections and civic engagement.

The remaining seven accounts that our review identified represented small, apparently unconnected actors. Those accounts spent a combined total of $2,282 on advertising through Twitter in 2016, with $1,184 of this amount spent on ads that were served to users in the U.S. Our available impressions data indicates that in 2016, those accounts ran 404 promoted Tweets and generated a total of 2.29 million impressions across all ad campaigns. Approximately 222,000 of those impressions were generated by U.S.-based users. We have since off-boarded these advertisers.

V. Post-Election Improvements and Next Steps

While Russian, election-related malicious activity on our platform appears to have been small in comparison to overall activity, we find any such activity unacceptable. Our review has prompted us to commit ourselves to further enhancing our policies and to tightening our systems to make them as safe as possible. Over the coming months, we will be focusing on a series of improvements both to our user safety rules and our advertising policies that we believe will advance the progress we have already made this year.

A. Enhancements to User Safety and Prevention of Abuse

In 2017, Twitter prioritized work to promote safety and fight abuse across much of the platform. Our engineering, product, policy, and user operations teams worked with urgency to make important and overdue changes designed to shift the burden of reporting online abuse away from the victim and to enable Twitter proactively to identify and act on such content.

As a result of that focus, we have:

  •  Improved Twitter’s detection of new accounts created by users who have been permanently banned;

  •  Introduced safer search, which is activated by default and limits potentially sensitive and abusive content in search results;

  •  Limited the visibility and reach of abusive and low-quality Tweets;

  •  Provided additional user controls, both to limit notifications from accounts that lack a verified email address, a verified phone number, or a profile photo, and to allow more options to block and mute; and

  •  Launched new forms of enforcement to interrupt abuse while it is happening.

While we have made progress on many of our goals, our CEO recently acknowledged that much work remains and that we recognize the need for greater openness about the work we are doing. We are therefore increasing our efforts on safety. Consistent with our commitment to transparency—and to offer full visibility to the Committee, the public, and the Twitter community—on October 19, 2017, we published a calendar of our immediate plans. That calendar identifies dates for upcoming changes to the Twitter Rules that we plan to make in the next three months. These changes will enhance our ability to remove non-consensual nudity, the glorification of acts of violence, and the use of hate symbols in account profiles, and will improve how we handle user-reported violations of the Twitter Rules. See https://blog.twitter.com/official/en_us/topics/company/2017/safetycalendar.html. We plan to offer periodic, real-time updates about our progress.

We are implementing these safety measures alongside the enhanced techniques and tools that the Information Quality initiative has generated for stopping malicious automated content. As described above, we have recently made enhancements to our enforcement mechanisms for detecting automated suspicious activity and have more improvements planned for the coming weeks and months. One of our key initiatives has been to shorten the amount of time that suspicious accounts remain visible on our platform while pending verification—from 35 days to two weeks—with unverified accounts being suspended after that time. While these suspicious accounts cannot Tweet while they are pending verification, we want to further reduce their visibility. We will also introduce new and escalating enforcement mechanisms for suspicious logins, Tweets, and engagements, leveraging our improved detection methods from the past year. Such changes are not meant to be definitive solutions, but they will further limit the reach of malicious actors on the platform and ensure that users have less exposure to harmful or malicious content.

These new threats to our system require us to continually reevaluate how to counter them. As the role of social media in foreign disinformation campaigns comes into focus, it has become clearer that attempts to abuse technology and manipulate public discourse on social media and the Internet through automation and otherwise will not be limited to one election—or indeed to elections at all. We will provide updates on our progress to Congress and to the American people in real time.

B. Enhancements to Advertising Policy

Last week, we announced a new policy to increase transparency regarding advertising on Twitter. We will soon launch an industry-leading transparency center that will provide the public with more detail than ever before about social media and online advertisers. The enhancements include the ability to see what advertisements are currently running on Twitter, how long the advertisements have been running, and all creative pieces associated with an advertising campaign.

Users will also have greater insight into and control over their experience with advertising on Twitter. Individual users will be able to see all advertisements that have been targeted to them, and all advertisements that the user is eligible to see based on a campaign’s targeting. We will also make it possible for users to provide negative feedback regarding an advertisement, whether or not the user has been targeted by the campaign.

Our new policy also changes how Twitter treats electioneering advertisements, or advertisements that clearly identify a candidate or party associated with a candidate for any elected office. Electioneering advertisers will be required to identify themselves to Twitter, and they will be subject to stricter requirements for targeting and harsher penalties for violations of our policies. Any campaign that an electioneering advertiser runs will be clearly marked on the platform to allow users to easily identify it. In addition to the information provided about all advertisements on Twitter, this disclosure will include current and historical spending by an electioneering advertiser, the identity of the organization funding the campaign, and targeting demographics used by the advertiser, such as age, gender, or geographic location.

We recognize that not all political advertising is electioneering advertising. While there is not yet a clear industry definition for issue-based advertisements, we will work with our industry peers and with policymakers to clearly define them and develop policies to treat them similarly to electioneering advertisements.

***

We have heard the concerns about Twitter’s role in Russian efforts to disrupt the 2016 election and about our commitment to addressing this issue. Twitter believes that any activity of that kind—regardless of magnitude—is intolerable, and we agree that we must do better to prevent it. We hope that our appearance today and the description of the work we have undertaken demonstrate our commitment to working with you, our industry partners, and other stakeholders to ensure that the experience of 2016 never happens again.

Indeed, cooperation to combat this challenge is essential. We cannot defeat this novel, shared threat alone. As with most technology-based threats, the best approach is to share information and ideas to increase our collective knowledge. Working with the broader community, we will continue to test, to learn, to share, and to improve, so that our product remains effective and safe.

We look forward to answering your questions and working with you in the coming months.


Written Testimony of Kent Walker
Senior Vice President and General Counsel, Google
Senate Select Committee on Intelligence
Hearing on “Social Media Influence in the 2016 US Elections”
Written Congressional Testimony
November 1, 2017

Chairman Burr, Vice-Chair Warner, and members of the Committee, thank you for the opportunity to appear before you this morning.

My name is Kent Walker. I am Senior Vice President and General Counsel at Google and I lead our Legal, Policy, Trust and Safety, and Philanthropy teams. I’ve worked at the intersection of technology, security, and the law for over 25 years, including a tour early in my career as an Assistant US Attorney at the Department of Justice focusing on technology crimes.

We believe that we have a responsibility to prevent the misuse of our platforms and we take that very seriously. Google was founded with a mission to organize the world’s information and make it universally accessible and useful. The abuse of the tools and platforms we build is antithetical to that mission.

Google is deeply concerned about attempts to undermine democratic elections. We are committed to working with Congress, law enforcement, others in our industry, and the NGO community to strengthen protections around elections, ensure the security of users, and help combat disinformation.

We are dealing with difficult questions that balance free expression issues, unprecedented access to information, and the need to provide high quality content to our users. There are no easy answers here, but we are deeply committed to getting this right. We recognize the importance of this Committee’s mandate, and we welcome the opportunity to share information and talk about solutions.

Of course disinformation and propaganda campaigns aren’t new, and have involved many different types of media and publications. When it comes to online platforms, for many years we have seen nation states and criminals attempt to breach our firewalls, game our search results, and interfere with our platforms. These attempts range from large-scale threats, such as distributed denial of service attacks, which we are able to identify and thwart quickly, all the way down to small-scale, extremely targeted attacks, such as attempts to gain access to email accounts of high-profile individuals.

We take these threats very seriously. We serve billions of users every day, so our solutions need to work at scale. We’ve built industry-leading security systems and we’ve put these tools into our consumer products. Back in 2007, we launched the first version of our Safe Browsing tool, which helps protect users from phishing, malware, and other attack vectors. Today, Safe Browsing is used on more than three billion devices worldwide. If we suspect that users are subject to government-sponsored attacks, we warn them. And last month, we launched our Advanced Protection Program, which integrates physical security keys to protect those at greatest risk of attack, like journalists, business leaders, and politicians. We face motivated and resourceful attackers, and we are continually evolving our tools to stay ahead of ever-changing threats.

Our tools don’t just protect our physical and network security; they also detect and prevent artificial boosting of content, spam, and other attempts to manipulate our systems. On Google News, for example, we label links so users can see if the content is locally sourced, an op-ed, or an in-depth piece. For Google Search, we have updated our quality guidelines and evaluations to help identify misleading information and surface more authoritative content from the web. We have updated our advertising guidelines to prohibit ads on sites that misrepresent themselves. And on YouTube, we employ a sophisticated spam and security-breach detection system to detect anomalous behavior and catch people trying to inflate view counts of videos or numbers of subscribers.

We have deployed our most advanced technologies to increase security and fight manipulation, but we realize that no system is going to be 100% perfect. It is hard to rapidly identify all untruthful content at massive scale, and harder yet to understand the motives and potential connections of the people posting that content. But we have made substantial progress in preventing and detecting abuse, and we are seeing continued success in stopping bad actors attempting to game our systems. And as threats evolve, we will continue to adapt in order to understand and prevent new attempts to misuse our platforms.

With respect to the Committee’s work on the 2016 election, we have looked across our products to understand whether individuals apparently connected to government-backed entities were using those products to disseminate information with the purpose of interfering with the US election. We based this review on research into misinformation campaigns from Alphabet’s Jigsaw group, our information security team’s own methods, and leads provided by other companies.

While we did find activity associated with suspected government-backed accounts, that activity appears to have been limited on our platforms. Of course, any activity like this is more than we would like to see. We have provided the relevant information to the Committee, have issued a public summary of the results of our review, and will continue to cooperate with the Committee's investigation.

Starting with our ads products, we found two accounts that appear to be associated with this effort. These accounts spent approximately $4,700 in connection with the 2016 presidential election, representing less than 0.0002 percent of the total amount spent on that race. We believe that the activity we found was limited because of various safeguards that we had in place in advance of the 2016 election, and the fact that Google’s products didn’t lend themselves to the kind of micro-targeting or viral dissemination that these actors seemed to prefer.
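As a back-of-envelope reading of that figure, $4,700 at less than 0.0002 percent implies total advertising spending on the 2016 presidential race of more than roughly $2.35 billion. The calculation below is only a consistency check on the numbers above, not a figure from Google's review.

```python
# Implied lower bound on total race spending from the figures above.
spend = 4_700
max_share = 0.0002 / 100                 # "less than 0.0002 percent", as a fraction
print(f"${spend / max_share:,.0f}")      # $2,350,000,000 implied lower bound
```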

As part of our investigation, we also looked at our other services. Let me share a few key points. On YouTube, we found 18 channels, with roughly 1,100 videos totaling 43 hours of content, that were uploaded by individuals whom we suspect are associated with this effort and that contained political content. That compares with the 400 hours of YouTube content uploaded every minute, and the over one billion hours of YouTube content watched every day. These videos generally had very low view counts; only around 3 percent had more than 5,000 views. The videos were not targeted to any particular sector of the US population, as that is not feasible on YouTube. Additionally, we found a limited number of Gmail accounts that appear to have been used primarily to set up accounts on social media platforms.

We continue to expand our use of cutting-edge technology to protect our users and will continue working with governments to ensure that our platforms aren’t used for nefarious purposes.

We will also be making political advertising more transparent, easier for users to understand, and even more secure.

•  In 2018 we’ll release a transparency report for election ads, sharing data about who is buying election ads on our platforms and how much money is being spent. We’ll pair our transparency report with a database of election ad creatives from across our ads products. And we will make the database available for public research.

•  We’re also going to make it easier for users to understand who bought the election ads they see on our networks. Going forward, users will be able to find the name of any advertiser running an election ad on Search, YouTube, and the Google Display Network with one click on an icon above the ad.

•  We will continue enhancing our existing safeguards to ensure that we only permit US nationals to buy US election ads. We already tightly restrict which advertisers can serve ads to audiences based on their political leanings. Moving forward, we'll go further by verifying the identity of anyone who wants to run an election ad or use our political-interest-based tools and confirming that person is permitted to run that ad.

We certainly can’t do this alone. Combating disinformation campaigns requires efforts from across the industry. We’ll continue to work with other companies to better protect the collective digital ecosystem, and, even as we take our own steps, we are open to working with governments on legislation that promotes electoral transparency.

Our commitment to addressing these issues extends beyond our services. Google has supported significant outreach to increase security for candidates and campaigns across the United States, France, Germany, and other countries. We’ve offered in-person briefings and introduced a suite of digital tools designed to help election websites and political campaigns protect themselves from phishing, unauthorized account access, and other digital attacks. We’ve partnered with the National Cyber Security Alliance to fund and advise on security training programs that focus specifically on elected officials, campaigns, and staff members. We are also increasing our long-standing support for the bipartisan Defending Digital Democracy Project at the Belfer Center for Science and International Affairs at Harvard Kennedy School.

Let me conclude by recognizing the importance of the work of this Committee. Our users, advertisers, and creators must be able to trust in their security and safety. We share the goal of identifying bad actors who have attempted to interfere with our systems and abuse the electoral process. We look forward to continued cooperation, both with the members of this Committee and with our fellow companies, to provide access to tools that help citizens express themselves while avoiding abuses that undercut the integrity of elections.

Thank you for the opportunity to tell you about our ongoing efforts in this space. We look forward to continuing to work with Congress on these important issues, and I'm happy to answer any questions you might have.