
The invisible ways Facebook controls us

24 May 2016     Author: Юлия Клюева

Whether it’s using Facebook or Google, our choices are subtly nudged by the human biases acting behind the scenes, argues Tom Chatfield.

In an age increasingly concerned with software telling us what to think, something more old-fashioned has been in the news: a select group of unaccountable individuals telling us what counts as news. Facebook, it turns out, uses humans to select what topics do or don’t get seen by users. Ironically for those accustomed to lamenting human usurpation by machines, the problem is the absence of an algorithm.

The world’s most powerful information-sharing platform is inscrutably able to select what gets seen

The most controversial claim to emerge is that the site’s trending topics selections have an anti-conservative bias, disproportionately suppressing conservative news and views (a claim the company has vigorously disputed). When tech site Gizmodo broke the initial story in early May, however, it suggested two entwined reasons why Facebook may be embarrassed irrespective of any political bias. First, the presence of flesh-and-blood contractors damages “the illusion of a bias-free news ranking process”.

Second, these contracted “news curators” seem to have been treated little better than software: operating outside of any culture of editorial accountability or leadership, beholden to the vague concept of “trending news”, working to meet quantity-first quotas.

In a sense, the actual presence of human actors is beside the point – as is the question of ideological bias. What matters is that the world’s most powerful information-sharing platform is inscrutably able to select what gets seen. Platforms like Facebook are curating our news and information under catchall headings like “trending topics”, or criteria like “relevance” – but we only rarely get glimpses into how the filtering happens.
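
To make that opacity concrete, here is a deliberately tiny sketch in Python. Everything in it is assumed for the sake of illustration: the feature names, the stories and the coefficients are invented, and it is not a description of Facebook’s actual system. The point is only that a handful of hidden numbers can decide what counts as “trending”.

# Illustrative only: a toy "relevance" filter with hidden weights.
# The feature names and coefficients are invented for this example.

STORIES = [
    {"title": "Local election results", "recency": 0.9, "engagement": 0.4, "source_affinity": 0.2},
    {"title": "Celebrity gossip",        "recency": 0.6, "engagement": 0.9, "source_affinity": 0.8},
    {"title": "Policy analysis",         "recency": 0.7, "engagement": 0.3, "source_affinity": 0.1},
]

# The user never sees these numbers -- yet they define what "relevant" means.
HIDDEN_WEIGHTS = {"recency": 0.2, "engagement": 0.6, "source_affinity": 0.2}

def relevance(story):
    """Weighted sum of features: the entire notion of 'relevance' in one line."""
    return sum(HIDDEN_WEIGHTS[f] * story[f] for f in HIDDEN_WEIGHTS)

def trending(stories, k=2):
    """Return only the top-k stories; everything else is silently dropped."""
    return sorted(stories, key=relevance, reverse=True)[:k]

for story in trending(STORIES):
    print(f"{story['title']}: {relevance(story):.2f}")

Nudge one coefficient and a different set of stories fills the feed; from the reader’s side of the screen, nothing appears to have changed at all.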


This is important because subtle changes in the information we are exposed to can transform our behaviour.

To understand why, consider an insight from behavioural science that has been widely adopted by governments and other authorities around the world: the policy “nudge”. 

This is where subtle tactics are used to encourage us to adopt a particular behaviour. One famous example is making organ donation opt-out rather than opt-in. Instead of requiring people to register themselves as organ donors, an opt-out system automatically assumes that anyone’s organs can be used for donation unless they have specified otherwise. Simply by switching the default assumption, more people end up donating.  
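
A toy simulation shows how large that switch can be. The figures below are assumptions chosen purely to illustrate the mechanism (an 80% tendency to keep whatever default is offered), not real donation statistics:

# Illustrative simulation of a default "nudge" (numbers are assumed, not real data).
import random

random.seed(0)
POPULATION = 100_000
P_KEEP_DEFAULT = 0.8   # assumed share of people who never change the default

def donors(default_is_donor):
    """Count donors when registration defaults to donor (opt-out) or not (opt-in)."""
    count = 0
    for _ in range(POPULATION):
        if random.random() < P_KEEP_DEFAULT:
            count += default_is_donor          # keeps whatever the default says
        else:
            count += random.random() < 0.5     # actively chooses, 50/50
    return count

print("Opt-in  (default: not a donor):", donors(False))
print("Opt-out (default: donor):      ", donors(True))

People’s underlying preferences are identical in both runs; only the default changes, and the headline figure roughly reverses.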

What’s not to like about nudging? Among other things, critics are uneasy about its erosion of informed choice. As the author Nick Harkaway argued in an article for the Institute of Art and Ideas, “instead of explaining the issue and fitting the policy to the considered will of the people, [nudging] fits the will of the people to the desired policy. Choice is a skill, a habit, even a minor reworking of the architecture of the brain, and it must be practised to be honed”.

To return to the digital world and how nudges might apply there: when we navigate online space, we are continually faced with choices – from what to buy to what to believe – and designers and engineers can also subtly sway our decisions there.

It’s not only Facebook that is in the information selection game


After all, it’s not only Facebook that is in the information selection game. Smarter and smarter recommendation systems are driving much of the current boom in artificial intelligence, wearable tech and the internet of things alike; from Google to Apple to Amazon, seamless personalised information delivery is the name of the game. Yet what’s at stake isn’t so much a question of human versus machine as of informed choice versus nudged compliance.

The more relevant information we have at our fingertips, the better the decisions we can make: this is one of the founding principles of information technology as a positive force.

The philosopher of technology Luciano Floridi, author of the book The Fourth Revolution, uses the phrase “pro-ethical design” to describe this process at its best: a balanced presentation of clear information that compels you consciously to address, and to take responsibility for, an important decision. Information systems should expand rather than contract our ethical engagement, Floridi argues, by resisting the temptation to nudge us too hard. Don’t make organ donation an automated default: make people face up to the question it poses. Don’t silently impose a tailored vision of relevance: invite users to tinker, interrogate and improve.
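
In code, the difference Floridi is pointing at can be as small as where the criteria live. Continuing the earlier toy filter (again purely illustrative, with invented feature names and weights rather than any real product’s design), a “pro-ethical” version shows the user what “relevance” currently means and lets them change it:

# Illustrative sketch of "pro-ethical" ranking: the criteria are visible and editable.
DEFAULT_WEIGHTS = {"recency": 0.2, "engagement": 0.6, "source_affinity": 0.2}

def explain(weights):
    """Show the user exactly what 'relevant' currently means."""
    for feature, weight in sorted(weights.items(), key=lambda kv: -kv[1]):
        print(f"  {feature}: {weight:.0%} of your ranking")

def rank(stories, weights):
    """Rank with whatever weights the user has chosen -- nothing hidden."""
    score = lambda s: sum(weights[f] * s[f] for f in weights)
    return sorted(stories, key=score, reverse=True)

# The user can inspect the defaults...
explain(DEFAULT_WEIGHTS)

# ...and override them, taking responsibility for the trade-off.
my_weights = dict(DEFAULT_WEIGHTS, engagement=0.1, recency=0.7)
explain(my_weights)

The ranking logic is unchanged; what changes is that the weights are an argument the user can see and set, rather than a constant buried inside the system.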

There are fundamental tensions here: between convenience and consideration; between what users want and what may be best for them; between transparency and commercial edge. The more asymmetrical the information on each side of the screen – what the system knows about you compared to what you know about it – the more your choices risk becoming just a series of reactions to unseen prods and pokes. Yet the balance between what’s out there and what you can know is shifting further every day towards individual ignorance. 

There’s no simple antidote to this, and no grand conspiracy. Indeed, the skillful combination of software and human curation is fast becoming the only way we can hope to navigate the exabytes of data accumulating across our world. Still, it is worth remembering that the designers of the technology we use have different goals to our own – and that, whether our intercessor is an algorithm or an editor, navigating it successfully means losing the pretense that there’s any escape from human bias.
