Facebook provides an alternative media channel in state-media-dominated nations, a community organising tool, and a convenient way to keep in touch with friends and relatives. It’s also an effective tool for advertisers.

But what are consumers trading for this service? And how can technologists build a future that betters humanity?

Your data on Facebook

Facebook collects data on you in at least five ways:

  • What you provide: for example, photos, videos, every news item you like, share, or comment on, every TV show, song, and film you like, who your friends are, how often you interact with them, what you talk about, and the contacts stored on your phone or in your email account.

  • What other Facebook users provide about you: have you ever wondered how Facebook keeps finding so many “people you may know” you’d never expect – a long-lost friend or relative, an ex-partner from years back, or someone you bumped into once at a party? Other people can share your phone number, email address, name, and birthday with Facebook when they sync the contacts from their phone (conveniently finding people to talk to on Messenger) or email account. Facebook uses this data to build shadow profiles and a graph of potential connections between Facebook users.

  • What they can get from your phone: for example, on Android phones, Facebook has been collecting call and text message history for years.

  • Your web browsing outside of Facebook: Facebook’s trackers are embedded in roughly 24% of websites. Mark Zuckerberg testified before the US Senate that he believes most people, when they see Facebook buttons on other websites, understand that Facebook records that they viewed the page. I think that’s pretty unlikely. If you are logged into Facebook in your browser and you don’t use a tracker blocker, any other website you visit in that browser – your email, a newspaper article, your own website, an airline, etc – can potentially be reported back to Facebook (see the sketch just after this list).

  • Until recently, data brokerage: data brokerage is a $2bn industry, and Facebook was one of its customers, buying still more data about you that you probably didn’t realise anyone had. This kind of data comes from loyalty cards, credit cards, and bank accounts – anyone who holds data about you can sell it to a data broker, as long as they include a provision to do so in the fine print of their terms and conditions (T&Cs). Information about exactly what data brokers buy and sell is unfortunately hard to come by. It’s time for data brokers to come out of the shadows.
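
To make the web-tracking mechanics concrete, here is a minimal sketch of what a tracking widget’s server could log. It is illustrative only – the names, cookie format, and storage are hypothetical, not Facebook’s actual systems. The key point: when a page embeds a widget served from a tracker’s domain, your browser fetches that widget and automatically attaches the tracker’s cookie plus a Referer header naming the page you are reading.

    from urllib.parse import urlsplit

    # Hypothetical tracker-side store: cookie id -> pages that person was seen reading
    browsing_history = {}

    def handle_widget_request(headers):
        """Log who (by cookie) viewed which page (by Referer) from one request."""
        cookies = dict(
            pair.split("=", 1)
            for pair in headers.get("Cookie", "").split("; ")
            if "=" in pair
        )
        user_id = cookies.get("session_id")    # identifies you if you're logged in
        referer = headers.get("Referer", "")   # the page that embedded the widget
        if user_id and referer:
            parts = urlsplit(referer)
            browsing_history.setdefault(user_id, []).append(parts.netloc + parts.path)

    # A logged-in user reads a news article that happens to embed the widget:
    handle_widget_request({
        "Cookie": "session_id=alice123",
        "Referer": "https://example-news.com/some-article",
    })
    print(browsing_history)  # {'alice123': ['example-news.com/some-article']}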

Why does all this matter?

One data point (e.g. one like) means very little, but when all these data points are aggregated, they can be used to construct a very accurate profile of who you are, what you care about, and how you can be influenced.

Research shows that, given just 70 likes, an algorithm can judge your personality traits more accurately than a friend or flatmate can. With 150 likes, it is more accurate than parents and siblings, and with 300 likes, more accurate than spouses. The average Facebook user has 227 likes.
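
The technique behind such studies is conceptually simple: treat each like as a binary feature and fit a linear model from likes to a personality score. Here is a minimal sketch on synthetic data – the pages, weights, and “openness” scores are all invented, purely to show the shape of the approach, and the fit is evaluated in-sample for brevity:

    import numpy as np

    rng = np.random.default_rng(0)
    n_users, n_pages = 1000, 300

    likes = rng.integers(0, 2, size=(n_users, n_pages))  # 1 = user liked page
    # Pretend ~10% of pages carry a real signal about one trait ("openness")
    true_weights = rng.normal(size=n_pages) * (rng.random(n_pages) < 0.1)
    openness = likes @ true_weights + rng.normal(scale=0.5, size=n_users)

    # Ridge regression via the normal equations: w = (X'X + aI)^-1 X'y
    X = likes - likes.mean(axis=0)
    y = openness - openness.mean()
    w = np.linalg.solve(X.T @ X + 10.0 * np.eye(n_pages), X.T @ y)

    predicted = X @ w
    print(f"correlation with the trait: {np.corrcoef(predicted, y)[0, 1]:.2f}")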

98.5% of Facebook’s revenue comes from selling advertising that is precisely targeted on the basis of understanding its users in this way. That’s how Facebook came to be worth roughly $500bn. But in the last few weeks, its value has dropped by about $100bn. It seems that was enough for Zuckerberg to recognise he was wrong when, in 2010, he said “privacy is no longer a social norm”. Testifying yesterday, he changed his tune a little: “the number one thing people care about is privacy of their data”.

What about Cambridge Analytica?

The kinds of information described above were also available to Facebook app developers. Installing an app could give its developer access to your data and your friends’ data. That’s how the data of 310,000 people in Australia was hoovered up and sold to Cambridge Analytica, even though only 53 people in Australia had installed Aleksandr Kogan’s thisisyourdigitallife Facebook app.

Cambridge Analytica used this information to target and adapt political messaging so that it would be least critically received by its targets. For example, they targeted idealistic white liberals, young women, and African American voters in swing states – groups more likely to vote for Clinton than Trump – with adverts containing negative information about Clinton, using language tailored to each group, to discourage them from voting at all.

Advertisers use this information to target consumers who might buy a certain product. For example, a leaked presentation that Facebook gave to a big Australian bank last year boasted that advertisers could target teens as young as 14 when they’re feeling at their most vulnerable.

Filter bubbles, fake news, and false feelings

You might have noticed that when you’re on Facebook, it can seem like everyone agrees with your point of view. Facebook shows you stories that you are more likely to engage with, so you mostly see things you already agree with. This is called a filter bubble, and it makes it hard for real journalism or dissenting opinions to compete with fake news. It also corrupts the public sphere: if politicians can send individually targeted, potentially contradictory messages to constituents, how do we assess what they really stand for? Consumers can be misled in just the same way.

Fake news spreads rapidly because we love to read things we agree with, i.e. things that appear to prove that we are right. We think far less critically about the veracity of a story when we agree with its conclusion, so we read it, feel the sweet hit of dopamine, and smack the like and share buttons. Facebook’s algorithms then propagate the story because it is getting so much attention, showing it to more and more people with similar profiles who are likely to agree with it, and so the cycle continues.
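
This feedback loop is easy to simulate. The toy sketch below is in no way Facebook’s actual ranking algorithm – it simply shows that any feed which allocates exposure in proportion to past engagement lets a high-engagement story crowd out an accurate but duller one:

    import random

    random.seed(1)
    stories = {"accurate-but-dull": 0.02, "agreeable-fake-news": 0.20}  # P(click)
    clicks = {name: 1 for name in stories}  # engagement so far, seeded at 1
    shown = {name: 0 for name in stories}

    for _ in range(10_000):
        # Show a story with probability proportional to its engagement so far
        story = random.choices(list(stories), weights=[clicks[s] for s in stories])[0]
        shown[story] += 1
        if random.random() < stories[story]:  # did this impression get a click?
            clicks[story] += 1

    print(shown)  # the agreeable story ends up with the vast majority of impressions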

Facebook has conducted experiments on 689,000 of its users, unbeknown to them, showing that it can manipulate their emotions by adjusting the kinds of stories they see. Why wait for people to feel a certain way that makes them more receptive to your advertising, when you can simply make them feel however you want?

So what’s the solution? One worrying non-solution that Facebook seems keen on is “doing more” to police content on the network. That’s a concern because allowing corporations to define what is and isn’t appropriate speech for the whole world gives Facebook and other big tech companies even more power in setting global agendas than they already have.

We should demand transparency for all algorithms that play a pivotal role in society. In cryptography, where the effectiveness of the algorithm is crucial, the only standards that are respected and widely used are those that are open, because that allows them to be inspected by experts. The same respect for correctness and fairness should apply to social media algorithms.

An equitable tech future

The future will be informed by data. If technologists can move beyond dark patterns like privacy zuckering towards transparency and informed consent, we can have a happy future of innovation: a better humanity, augmented by technology, rather than a humanity controlled and enslaved by technology and soul-crushing, always-on, hyper-targeted advertising.

Data is the foundation of emerging technologies like artificial intelligence, and businesses that can unlock the value of the data they have will outperform those that can’t. This can be done with customers’ consent and it can enhance customer experience and societal outcomes. Many businesses are now starting down this road, and regulation like GDPR should be viewed as a helpful guardrail that encourages businesses to respect users’ privacy, seek informed consent, entrench the right to be forgotten, and enable privacy by design for users worldwide, not just for EU citizens.

There are technologists who care. Just look at the reports of Facebook employees quitting or switching departments over ethical concerns. Or new collaborations such as the Time Well Spent initiative. Or open source data initiatives and alternatives like OpenStreetMap. Or the growing data privacy and security movements. Or the businesses, like ThoughtWorks (my employer), that are beginning to focus on building an equitable tech future.

So what now?

Here’s what you can do in the meantime.

  • Option 1 – get off Facebook: some people are not in a position to do so, but a wave of people are deciding that trading their privacy for use of the service is no longer worth it. You too could delete Facebook; since Zuckerberg himself can’t confirm that your data is truly deleted, do it this way.

  • Option 2 – take control (as much as you can): go through the privacy settings on your Facebook account and make sure they’re all locked down as much as possible. Check which applications you have installed that can access your data, and remove any you no longer need. Pay attention when the next thing you install asks for permissions, and don’t allow it if it asks for too much. Don’t be fooled by cutesy images and language, or promises of convenience. Remember that you are trading your privacy, and your friends’ privacy, for use of this service. And as your last walls of defence, install an ad blocker, like uBlock Origin, and a tracker blocker, like Disconnect or DuckDuckGo’s Privacy Essentials, or open Facebook in a separate browser or profile.

  • Option 3 – demand algorithmic transparency and respect for users’ privacy: fix all the things, especially if you’re a technologist. Learn how to use data effectively to guide business, without compromising privacy, and help to build an equitable tech future. And if you live in Australia, come to Internet Freedom Hack: Defending Truth or join Hack for Privacy.