
MPs’ new ‘fake news’ report largely ignores other platforms like Google and YouTube, and surveillance capitalism itself – and risks sending regulation in the wrong direction.

Yesterday morning, the House of Commons Digital, Culture, Media and Sport Select
Committee published its long-awaited final report into disinformation and ‘fake
news’. The report – which follows a long and at times dramatic investigation –
is full of interesting and insightful details about political microtargeting
(the targeting of political messaging to relatively small groups of people) and
the spread of disinformation.

But the report’s myopic focus on one company – Facebook –
means that it misses the bigger picture – including the internet’s dominant
variety of capitalism.

It is of course welcome that attention is being paid to
these problems, and there is much in the Committee’s report that’s good. The
report is undoubtedly right to find that Britain’s electoral laws are woefully
inadequate for the age of the algorithm and are badly in need of reform. Its
recommendation that inferences drawn from analysing other data about people should themselves be treated as personal data likewise seems eminently sensible.

Is it OK to manipulate people to extract their money, just not for politics?

But there are also clear shortcomings. Focusing on disinformation
itself as a target for regulation brings an obvious problem. By calling for interventions
based on ‘harmful’ content, the report asks the Government to step into the
dangerous territory of regulating lawful political conversations between
people. Are private companies to be mandated to police these communications on
the Government’s behalf? There are numerous good reasons why this is deeply
undesirable (not to mention incompatible with human rights laws).

The biggest oversight, however, is in diagnosing disinformation
as essentially a problem with Facebook, rather than a systemic issue emerging in
part from the pollution of online spaces by the business model that Facebook
shares with others: the surveillance and modification of human behaviour for
profit.

‘Surveillance capitalism’, as it’s known, involves gathering
as much data as possible about as many people as possible doing as many things
as possible from as many sources as possible. These huge datasets are then algorithmically
analysed so as to spot patterns and correlations from which future behaviour
can be predicted. A personalised, highly dynamic, and responsive form of
behavioural nudging then seeks to influence that future behaviour to drive
engagement and profit for platforms and advertisers. These targeted behaviour
modification tools rely on triggering cognitive biases and known short-cuts in
human decision-making. Platforms and advertisers extensively experiment to find
the most effective way to influence behaviour.
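To make that collect–predict–nudge loop concrete, here is a deliberately simplified Python sketch of the logic described above. It illustrates the shape of the cycle only, not any platform’s actual code; every name, signal, and number in it is hypothetical.

```python
from collections import defaultdict
import random

# 1. Gather as much data as possible about the person, from every source.
profile = defaultdict(list)
for source, event in [
    ("browsing", "read_article:politics"),
    ("location", "visited:gym"),
    ("social",   "liked:conspiracy_page"),
]:
    profile[source].append(event)

# 2. Predict future behaviour from past signals. A real system would use
#    large-scale machine learning; a random score stands in for it here.
def predicted_engagement(variant: str) -> float:
    score = random.random()
    # Known cognitive short-cuts are easy to exploit: if the profile
    # suggests susceptibility, an outrage framing scores higher.
    if any("conspiracy" in e for e in profile["social"]) and variant == "outrage":
        score += 0.5
    return score

# 3. Experiment across message variants and serve whichever nudge is
#    predicted to drive the most engagement (and hence profit).
variants = ["neutral", "outrage", "fear"]
best = max(variants, key=predicted_engagement)
print(f"serving the '{best}' framing to maximise engagement")
```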

Without looking at surveillance capitalism, it’s impossible
to understand microtargeting in its wider context. It’s impossible to
understand the desires for profit and market position driving these practices.
And it’s impossible to understand that the same behaviour modification tools
are sold to advertisers, political parties, and anyone else who’s willing to pay.
Without considering these practices within surveillance capitalism more
generally, the report seems to implicitly accept that manipulating people through
psychological vulnerabilities is fine if you’re doing it to extract their money,
but not if you’re doing it for politics.

Notably, both Google and YouTube, its subsidiary, were
largely omitted from the report. They get the odd mention, but it’s clear that
the Committee was too fixated on Facebook to pay them sufficient attention. Google
invented surveillance capitalism and remains arguably its foremost
practitioner, with significant influence over the world’s access to
information. And YouTube (also running on a surveillance business model,
naturally) has serious problems of its own in terms of promoting violent extremism,
disinformation, and conspiracy theories. This led the academic Zeynep Tufekci,
writing in the New York Times last year, to describe YouTube and its video
recommendation system as “[maybe] one of the most powerful radicalizing instruments of the 21st century”.

It’s not “fake news” that’s the problem, it’s the algorithms that
disseminate it

This brings us to the second aspect missed by the Committee:
the increasingly prevalent algorithmic construction of reality. Take
disinformation. As noted above, the report focused on false content itself. This
seems to have missed one of the key routes by which an individual piece of
content can become a systemic problem worthy of attention. In the grand scheme
of things, a YouTube video about a wild conspiracy theory doesn’t really matter
if it’s only seen by 10 people. It matters if people watching relatively
innocuous content are driven towards it by YouTube’s recommendation system. It
matters if it’s algorithmically promoted by YouTube and then seen by 10 million
people.
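The gap between hosting a fringe video and promoting it can be shown with a toy simulation. The sketch below, using entirely invented engagement rates, compares a neutral feed with one that ranks purely by observed engagement: because the sensational item engages only slightly more, the engagement-driven ranker ends up serving it to the overwhelming majority of users.

```python
import random

random.seed(1)

# (title, probability that a viewer engages) - invented numbers in which
# the sensational item engages only slightly more than the rest
catalogue = [
    ("local news report",     0.10),
    ("cooking tutorial",      0.12),
    ("wild conspiracy video", 0.18),
]

def serve(rank_by_engagement: bool, impressions: int = 100_000) -> dict:
    """Count how often each video is shown under a given serving policy."""
    views  = {title: 0 for title, _ in catalogue}
    clicks = {title: 0 for title, _ in catalogue}
    for _ in range(impressions):
        if rank_by_engagement and random.random() > 0.05:
            # exploit: surface whatever has the best observed engagement rate
            title, p = max(catalogue,
                           key=lambda v: clicks[v[0]] / max(views[v[0]], 1))
        else:
            # neutral baseline (doubling as the ranker's 5% exploration)
            title, p = random.choice(catalogue)
        views[title] += 1
        if random.random() < p:
            clicks[title] += 1
    return views

print("neutral feed:   ", serve(False))
print("engagement feed:", serve(True))
```

Under the neutral policy each video gets roughly a third of impressions; under the engagement policy the conspiracy video takes almost all of them. A tiny difference in how engaging content is, compounded by the ranker, produces the 10-versus-10-million dynamic described above.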

Platforms might argue that they can’t be held responsible
for the content they host or for the actions of their users (outside of things
which are clearly illegal). But recommending content is not simply hosting it,
and it is not a neutral act. Platforms selectively target content (including
advertising) through their recommender systems so as to show us what they think
will keep us engaged with their services, bring them revenue, and help them
build market share. Make no mistake – through these platforms we do not get a
true picture of what’s going on in the world. The spaces we inhabit online are
viewed through the lens of corporate desires. What we encounter is
algorithmically mediated to suit the platforms’ interests. While microtargeting
is increasingly recognised as manipulation, this is a softer, perhaps more
insidious form of corporate algorithmic influence.

Unsurprisingly, various actors have learned how to game
these systems to boost the audience for their content, including conspiracy theorists
and extremists. And bots and other fake accounts are often used to exploit the algorithmic construction of online spaces and manipulate content rankings. This allows their operators to game trending topics, shape discourse more generally, and drive fringe ideas into the mainstream. (Contrary to a common misconception, bots are usually deployed for this kind of manipulation rather than to change the opinions of the real users they come into contact with.)
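A hypothetical sketch makes this ranking-gaming concrete: a naive “trending” score that simply counts recent posts can be tipped by a small botnet, pushing a fringe hashtag above organically popular ones. The account counts, tags, and volumes below are invented for illustration.

```python
from collections import Counter

# Organic activity: a genuinely popular tag, a mid-sized one, a fringe one.
organic_posts = ["#football"] * 900 + ["#weather"] * 400 + ["#fringetheory"] * 50

# A small coordinated botnet: 50 fake accounts posting 20 times each.
bot_posts = ["#fringetheory"] * 1000

def trending(posts, top_n=2):
    # Volume-only ranking: no check for coordination or account quality.
    return Counter(posts).most_common(top_n)

print("organic only:", trending(organic_posts))
print("with botnet: ", trending(organic_posts + bot_posts))
```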

The influence wielded by surveillance platforms through
personalisation gives them significant means to shape the online public sphere.
They are, of course, motivated by profit and duty to shareholders rather than
by public good and duty to wider society. You might think that this is fine –
they are, after all, private corporations. But while television, mass media,
and the advertising industry have long shaped our world, never before have
private companies had such influence over the construction of the everyday reality
we inhabit. Never before have they exercised such influence over the private activity
of individuals talking to other individuals about their lives. They do so
without any democratic legitimacy, and with little transparency over their
processes or accountability for their actions.

To properly address the problems of manipulation,
disinformation, and violent extremism fermenting on online platforms, future
regulation must properly acknowledge the role of surveillance capitalism – not
just through targeting tools but in the algorithmic construction of online
spaces. Future regulation should recognise that content isn’t necessarily the
problem in and of itself. It must consider the active role of platforms in promoting content, whether as paid-for advertising or otherwise, and establish minimum standards for doing so. This approach benefits from largely
sidestepping much of the content regulation debate. Regulating the use of
technical systems by corporations rather than intervening in communications between
individuals means that people should still be free to post, view, or share
anything that is not illegal. Freedom of expression demands nothing less.

Surveillance companies’ exorbitant profits and their influence on the construction of our reality are in large part driven by their use of recommender systems. That power must come with some form of responsibility for
what they’re algorithmically disseminating. They will argue that being more
careful with recommender systems could result in lower revenues. In 2018 Google
brought in $136 billion; Facebook took $56 billion. They can afford to take the
hit. Perhaps that should be understood as the cost of doing business in future.
This industry wouldn’t be the first to have its practices and its profits
reined in by regulation for the good of society.

Because of its restricted focus, the usefulness of many of the
solutions proposed in the DCMS Committee’s report is somewhat limited. That’s
disappointing. But all is far from lost, and there are other directions for
progress. To get there, we need to think bigger than Facebook. It’s time to
acknowledge the role of surveillance capitalism in these systemic issues. It’s
time to recognise that the problem isn’t just content – it’s dissemination and
amplification by algorithm to maximise profit at all costs.

Citations

[1] "Why a focus on 'fake news' and Facebook misses the internet's real problems - and solutions", openDemocracy. http://feedproxy.google.com/~r/opendemocracy/~3/wAQwI86xWX0/why-focus-on-fake-news-and-facebook-misses-internets-real-problems-and-solutions
[2] Zeynep Tufekci, "YouTube, the Great Radicalizer", The New York Times, 10 March 2018. https://www.nytimes.com/2018/03/10/opinion/sunday/youtube-politics-radical.html