Facebook ads run on Politico on Nov. 4, 2017
https://www.facebook.com/help/1991443604424859
Update on Our Advertising Transparency and Authenticity Efforts
by Rob Goldman, VP of Ads
When it comes to advertising on Facebook, people should be able to
tell who the advertiser is and see the ads they’re running, especially
for political ads.
That level of transparency is good for democracy and it’s good for
the electoral process. Transparency helps everyone, especially
political watchdog groups and reporters, hold advertisers accountable
for who they say they are and what they say to different groups.
In September, our CEO Mark Zuckerberg talked about the initial steps
we were taking to help protect the integrity of elections, both in the
United States and around the world. Earlier this month, our VP of
Public Policy Joel Kaplan provided additional details
on what we’re doing to make advertising more transparent, increase
requirements for authenticity and strengthen our enforcement against
ads that violate our policies.
Today we’re sharing an update on the progress we’ve made toward
those goals.
ADDITIONAL TRANSPARENCY FOR ALL ADVERTISING
We’re going to make advertising more transparent, and not just for
political ads.
Starting next month, people will be able to click “View Ads” on a
Page and view ads a Page is running on Facebook, Instagram and
Messenger — whether or not the person viewing is in the intended target
audience for the ad. All Pages will be part of this effort, and we will
require that all ads be associated with a Page as part of the ad
creation process. We will start this test in Canada and roll it out to
the US by this summer, ahead of the US midterm elections in November,
as well as broadly to all other countries around the same time.
[graphic]
We know how important it is to our community that we get this
feature just right, so we’re first rolling it out in only one
country. Testing in one market lets us see the various ways an
entire population uses the feature, at a scale that allows us to
learn and iterate. Starting in Canada was a natural choice, as this
effort aligns with our election integrity work already underway there.
During this initial test, we will only show active ads. However,
when we expand to the US we plan to begin building an archive of
federal-election related ads so that we can show both current and
historical federal-election related ads. In addition, for each
federal-election related ad, we will:
- Include the ad in a searchable archive that, once full, will cover
a rolling four-year period, starting from when we launch the archive.
- Provide details on the total amounts spent.
- Provide the number of impressions delivered.
- Provide demographic information (e.g. age, location, gender)
about the audience that the ads reached.
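The per-ad disclosures listed above could be modeled as a simple record. The sketch below is purely illustrative; the field names are hypothetical and do not reflect Facebook's actual archive schema:

```python
# Hypothetical sketch of one entry in the proposed ad archive.
# Field names are illustrative only, not Facebook's actual schema.
from dataclasses import dataclass

@dataclass
class ArchivedAd:
    page_name: str               # the Page that ran the ad
    creative_text: str           # the ad's content
    total_spend_usd: float       # total amount spent on the ad
    impressions: int             # number of impressions delivered
    audience_demographics: dict  # e.g. age, location, gender breakdown

entry = ArchivedAd(
    page_name="Example Campaign Page",
    creative_text="Vote on November 6!",
    total_spend_usd=1250.00,
    impressions=48000,
    audience_demographics={"age": "25-34", "location": "US", "gender": "all"},
)
```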
POLITICAL ADVERTISERS WILL HAVE TO VERIFY THEIR IDENTITY
As Joel Kaplan mentioned, we’re going to require more thorough
documentation from advertisers who want to run election-related ads. We
are starting with federal elections in the US, and will progress from
there to additional contests and elections in other countries and
jurisdictions. As part of the documentation process, advertisers may be
required to disclose that they are running election-related advertising
and to verify both their entity and location.
Once verified, these advertisers will have to include a disclosure
in their election-related ads, which reads: “Paid for by.” When you
click on the disclosure, you will be able to see details about the
advertiser. Like other ads on Facebook, you will also be able to see an
explanation of why you saw that particular ad.
[video]
For political advertisers that do not proactively disclose
themselves, we are building machine learning tools that will help us
find them and require them to verify their identity.
We remain deeply committed to helping protect the integrity of the
electoral process on Facebook. And we will continue to work with our
industry partners, lawmakers and our entire community to better ensure
transparency and accountability in our advertising products.
Text of Facebook full-page ad run on Oct. 4, 2017, in the
New York Times and Washington Post:
Protecting Our Community from Election Interference
We take the trust of the Facebook
community seriously. We will fight
any attempt to interfere with elections or civic engagement on Facebook.
Immediate actions we're taking:
1. Making advertising more transparent
We are building new tools that will allow you to see the ads a Facebook
Page is running, including ads that aren't targeted to you directly.
2. Strengthening our ad policies and enforcement
We are adding more than 1,000 people to our global ad review teams,
requiring more thorough documentation from advertisers who want to run
US federal election-related ads, and expanding our policies around
violence in ads.
3. Investing in security
We will more than double the team working to prevent election
interference on Facebook and develop new technologies dedicated to
security and safety.
4. Sharing the ads we've found with Congress
We shared more than 3,000 ads that appear to have come from a Russian
entity known as the Internet Research Agency.
5. Continuing our internal investigation
We are working to further our understanding of how foreign groups may
have misused Facebook in order to prevent further abuse.
6. Fighting threats across the internet
We recognize this is a global, industry-wide problem so we are sharing
threat information with other companies. Any actor trying to misuse
Facebook is likely trying to abuse other internet platforms and we need
to work together.
7. Expanding our partnerships with election commissions
We are working with election commissions around the world to proactively
communicate online risks we've identified.
8. Supporting elections globally
We have been actively working to help protect the integrity of
elections on Facebook around the world.
9. Building civic engagement tools
We will build even more tools to empower our community to engage in
political discourse, and to protect them when they do.
From Facebook's Hard Questions blog:
October 2, 2017
https://newsroom.fb.com/news/2017/10/hard-questions-russian-ads-delivered-to-congress/
Hard Questions: Russian Ads Delivered to Congress
By Elliot Schrage, Vice President of Policy and Communications
What was in the ads you shared with Congress? How many
people saw them?
Most of the ads appear to focus on divisive social and political
messages across the ideological spectrum, touching on topics from LGBT
matters to race issues to immigration to gun rights. A number of them
appear to encourage people to follow Pages on these issues.
Here are a few other facts about the ads:
- An estimated 10 million people in the US saw the ads. Using our
best modeling, we approximated the number of unique people (“reach”)
who saw at least one of these ads.
- 44% of total ad impressions (number of times ads were displayed)
were before the US election on November 8, 2016; 56% were after the
election.
- Roughly 25% of the ads were never shown to anyone. That’s because
advertising auctions are designed so that ads reach people based on
relevance, and certain ads may not reach anyone as a result.
- For 50% of the ads, less than $3 was spent; for 99% of the ads,
less than $1,000 was spent.
- About 1% of the ads used a specific type of Custom Audiences
targeting to reach people on Facebook who had visited that advertiser’s
website or liked the advertiser’s Page — as well as to reach people who
are similar to those audiences. None of the ads used another type of
Custom Audiences targeting based on personal information such as email
addresses. (This bullet added October 3, 2017.)
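The auction behavior described above, where roughly a quarter of the ads never delivered, can be illustrated with a toy relevance-weighted auction. This is a simplified sketch of the general mechanism only, not Facebook's actual ad auction; all names, scores, and the quality floor are hypothetical:

```python
# Toy relevance-weighted ad auction (illustrative only, not Facebook's
# actual system). Each candidate ad is scored as bid * relevance, and
# the impression goes to the top-scoring ad above a quality floor.
# Low-relevance ads can lose every auction and never be delivered.
from dataclasses import dataclass

@dataclass
class Ad:
    name: str
    bid: float        # advertiser's bid per impression, in dollars
    relevance: float  # estimated relevance to this user, 0.0 to 1.0

def run_auction(ads, quality_floor=0.5):
    """Return the winning ad for one impression, or None if no ad clears the floor."""
    best = max(ads, key=lambda ad: ad.bid * ad.relevance)
    return best if best.bid * best.relevance >= quality_floor else None

ads = [
    Ad("relevant_ad", bid=1.00, relevance=0.9),          # score 0.90
    Ad("high_bid_irrelevant", bid=2.00, relevance=0.1),  # score 0.20
]
print(run_auction(ads).name)  # relevant_ad wins despite the lower bid
```

Note how the higher bid loses: relevance weighting means an ad that no one is predicted to care about may never win a single impression.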
Why do you allow ads like these to target certain
demographic or interest groups?
Our ad targeting is designed to show people ads they might find useful,
instead of showing everyone ads that they might find irrelevant or
annoying. For instance, a baseball clothing line can use our targeting
categories to reach people interested specifically in baseball, rather
than everyone who likes sports. Other examples include a business
selling makeup designed specifically for African-American women, or a
language class wanting to reach potential students.
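The interest-based targeting described above can be sketched as a simple set-intersection filter. This is a toy illustration with hypothetical users and interests, not Facebook's actual targeting system:

```python
# Toy sketch of interest-based ad targeting (illustrative only):
# an ad specifies target interests, and only users whose declared
# interests overlap with them are eligible to see it.

def eligible_users(user_interests, target_interests):
    """Return the users whose interests intersect the ad's targeting."""
    return [user for user, interests in user_interests.items()
            if interests & target_interests]

user_interests = {
    "alice": {"baseball", "cooking"},
    "bob": {"basketball"},
    "carol": {"baseball"},
}
print(eligible_users(user_interests, {"baseball"}))  # ['alice', 'carol']
```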
These are worthwhile uses of ad targeting because they enable people
to connect with the things they care about. But we know ad targeting
can be abused, and we aim to prevent abusive ads from running on our
platform. To begin, ads containing certain types of targeting will now
require additional human review and approval.
In looking for such abuses, we examine all of the components of an
ad: who created it, who it’s intended for, and what its message is.
Sometimes a combination of an ad’s message and its targeting can be
pernicious. If we find any ad — including those targeting a cultural
affinity interest group — that contains a message spreading hate or
violence, it will be rejected or removed. Facebook’s Community
Standards strictly prohibit attacking people based on their protected
characteristics, and our advertising terms are even more restrictive,
prohibiting advertisers from discriminating against people based on
religion and other attributes.
Why can’t you catch every ad that breaks your rules?
We review millions of ads each week, and about 8 million people report
ads to us each day. In the last year alone, we have significantly grown
the number of people working on ad review. And in order to do better at
catching abuse on our platform, we’re announcing
a number of improvements, including:
- Making advertising more transparent
- Strengthening enforcement against improper ads
- Tightening restrictions on advertiser content
- Increasing requirements for authenticity
- Establishing industry standards and best practices
Weren’t some of these ads paid for in Russian currency? Why
didn’t your ad review system notice this and bring the ads to your
attention?
Some of the ads were paid for in Russian currency. Currency alone isn’t
a good way of identifying suspicious activity, because the overwhelming
majority of advertisers who pay in Russian currency, like the
overwhelming majority of people who access Facebook from Russia, aren’t
doing anything wrong. We did use this as a signal to help identify
these ads, but it wasn’t the only signal. We are continuing to refine
our techniques for identifying the kinds of ads in question. We’re not
going to disclose more details because we don’t want to give bad actors
a roadmap for avoiding future detection.
If the ads had been purchased by Americans instead of
Russians, would they have violated your policies?
We require authenticity regardless of location. If Americans conducted
a coordinated, inauthentic operation — as the Russian organization did
in this case — we would take their ads down, too.
However, many of these ads did not violate our content policies.
That means that for most of them, if they had been run by authentic
individuals, anywhere, they could have remained on the platform.
Shouldn’t you stop foreigners from meddling in US social
issues?
The right to speak out on global issues that cross borders is
an important principle. Organizations such as UNICEF, Oxfam or
religious organizations depend on the ability to communicate — and
advertise — their views in a wide range of countries. While we may not
always agree with the positions of those who would speak on issues
here, we believe in their right to do so — just as we believe in the
right of Americans to express opinions on issues in other countries.
Some of these ads and other content on Facebook appear to
sow division in America and other countries at a time of increasing
social unrest. If these ads or content were placed or posted
authentically, you would allow many of these. Why?
This is an issue we have debated a great deal. We understand that
Facebook has become an important platform for social and political
expression in the US and around the world. We are focused on developing
greater safeguards against malicious interference in elections and
strengthening our advertising policies and enforcement to prevent abuse.
As an increasingly important and widespread platform for political
and social expression, we at Facebook — and all of us — must also take
seriously the crucial place that free political speech occupies around
the world in protecting democracy and the rights of those who are in
the minority, who are oppressed or who have views that are not held by
the majority or those in power. Even when we have taken all steps to
control abuse, there will be political and social content that will
appear on our platform that people will find objectionable, and that we
will find objectionable. We permit these messages because we share the
values of free speech — that when the right to speech is censored or
restricted for any of us, it diminishes the rights to speech for all of
us, and that when people have the right and opportunity to engage in
free and full political expression, over time, they will move forward,
not backwards, in promoting democracy and the rights of all.
Are you working with other companies and the government to
prevent interference that exploits platforms like yours?
The threats we’re confronting are bigger than any one company, or even
any one industry. The kind of malicious interference we’re seeing
requires everyone working together, across business, government and
civil society, to share information and arrive at the best responses.
We have been working with many others in the technology industry,
including with Google and Twitter, on a range of elements related to
this investigation. We also have a long history of working together to
fight online threats and develop best practices on other issues, such
as child safety and counterterrorism. And we will continue all of this
work.
With all these new efforts you’re putting in place, would
any of them have prevented these ads from running?
We believe we would have caught these malicious actors faster and
prevented more improper ads from running. Our effort to require US
election-related advertisers to authenticate their business will help
catch suspicious behavior. The ad transparency tool we’re building will
be accessible to anyone, including industry and political watchdog
groups. And our improved enforcement and more restrictive content
standards for ads would have rejected more of the ads when submitted.
Is there more out there that you haven’t found?
It’s possible. We’re still looking for abuse and bad actors on our
platform — our internal investigation continues. We hope that by
cooperating with Congress, the Special Counsel and our industry
partners, we will help keep bad actors off our platform.
Do you now have a complete view of what happened in this
election?
The 2016 US election was the first where evidence has been widely
reported that foreign actors sought to exploit the internet to
influence voter behavior. We understand more about how our service was
abused and we will continue to investigate to learn all we can. We know
that our experience is only a small piece of a much larger puzzle.
Congress and the Special Counsel are best placed to put these pieces
together because they have much broader investigative power to obtain
information from other sources.
We strongly believe in free and fair elections. We strongly believe
in free speech and robust public debate. We strongly believe free
speech and free elections depend upon each other. We’re fast developing
both standards and greater safeguards against malicious and illegal
interference on our platform. We’re strengthening our advertising
policies to minimize and even eliminate abuse. Why? Because we are
mindful of the importance and special place political speech occupies
in protecting both democracy and civil society. We are dedicated to
being an open platform for all ideas — and that may sometimes mean
allowing people to express views we — or others — find objectionable.
This has been the longstanding challenge for all democracies: how to
foster honest and authentic political speech while protecting civic
discourse from manipulation and abuse. Now that the challenge has taken
a new shape, it will be up to all of us to meet it.
September 21, 2017
https://newsroom.fb.com/news/2017/09/hard-questions-more-on-russian-ads/
Hard Questions: More on Russian Ads
By Elliot Schrage, Vice President of Policy and Communications
1) Why did Facebook finally decide to share the ads with Congress?
As our General Counsel has explained,
this is an extraordinary investigation — one that raises questions that
go to the integrity of the US elections. After an extensive legal and
policy review, we’ve concluded that sharing the ads we’ve discovered
with Congress, in a manner that is consistent with our obligations to
protect user information, will help government authorities complete the
vitally important work of assessing what happened in the 2016 election.
That is an assessment that can be made only by investigators with
access to classified intelligence and information from all relevant
companies and industries — and we want to do our part. Congress is best
placed to use the information we and others provide to inform the
public comprehensively and completely.
2) Why are you sharing these with Special Counsel and Congress —
and not releasing them to the public?
Federal law places strict limitations on the disclosure of account
information. Given the sensitive national security and privacy issues
involved in this extraordinary investigation, we think Congress is best
placed to use the information we and others provide to inform the
public comprehensively and completely. For further understanding on
this decision, see our General Counsel’s post.
3) Let’s go back to the beginning. Did Facebook know when the ads
were purchased that they might be part of a Russian operation? Why not?
No, we didn’t.
The vast majority of our over 5 million advertisers use our
self-service tools. This allows individuals or businesses to create a
Facebook Page, attach a credit card or some other payment method and
run ads promoting their posts.
In some situations, Facebook employees work directly with our larger
advertisers. In the case of the Russian ads, none of those we found
involved in-person relationships.
At the same time, a significant number of advertisers run ads
internationally, and a high number of advertisers run content that
addresses social issues — an ad from a non-governmental organization,
for example, that addresses women’s rights. So there was nothing
necessarily noteworthy at the time about a foreign actor running an ad
involving a social issue. Of course, knowing what we’ve learned since
the election, some of these ads were indeed both noteworthy and
problematic, which is why our CEO today announced
a number of important steps we are taking to help prevent this kind of
deceptive interference in the future.
4) Do you expect to find more ads from Russian or other foreign
actors using fake accounts?
It’s possible.
When we’re looking for this type of abuse, we cast a wide net in
trying to identify any activity that looks suspicious. But it’s a game
of cat and mouse. Bad actors are always working to use more
sophisticated methods to obfuscate their origins and cover their
tracks. That in turn leads us to devise new methods and smarter tactics
to catch them — things like machine learning, data science and highly
trained human investigators. And, of course, our internal inquiry
continues.
It’s possible that government investigators have information that
could help us, and we welcome any information the authorities are
willing to share to help with our own investigations.
Using ads and other messaging to affect political discourse has
become a common part of the cybersecurity arsenal for organized,
advanced actors. This means all online platforms will need to address
this issue, and get smarter about how to address it, now and in the
future.
5) I’ve heard that Facebook disabled tens of thousands of
accounts in France and only hundreds in the United States. Is this
accurate?
No, these numbers represent different things and can’t be directly
compared.
To explain it, it’s important to understand how large platforms try
to stop abusive behavior at scale. Staying ahead of those who try to
misuse our service is an ongoing effort led by our security and
integrity teams, and we recognize this work will never be done. We
build and update technical systems every day to make it easier to
respond to reports of abuse, detect and remove spam, identify and
eliminate fake accounts, and prevent accounts from being compromised.
This work also reduces the distribution of content that violates our
policies, since fake accounts often distribute deceptive material, such
as false news, hoaxes, and misinformation.
This past April, we announced improvements to these systems aimed at
helping us detect fake accounts on our service more effectively. As we
began to roll out these changes globally, we took action against tens
of thousands of fake accounts in France. This number represents fake
accounts of all varieties, the most common being those used for
financially motivated spam. While we believe that the removal of
these accounts also reduced the spread of disinformation, it’s
incorrect to state that these tens of thousands of accounts represent
organized campaigns from any particular country or set of countries.
In contrast, the approximately 470 accounts and Pages we shut down
recently were identified by our dedicated security team that manually
investigates specific, organized threats. They found that this set of
accounts and Pages were affiliated with one another — and were likely
operated out of Russia.