INSIDE CAMBRIDGE ANALYTICA’S PLOT TO BREAK THE WORLD
“We need to take a step back from arguing about how we’re using these platforms and instead ask: ‘what is this architecture that we’re all implicitly consenting to?’”
Words: Florence Robson
Christopher Wylie is a fan of metaphors. Within an hour’s conversation he cycles through blind dates, building fires and town squares, all in an attempt to encapsulate the complexity of our current relationship with social media – in particular, Facebook.
Wylie has a more personal connection to Facebook’s darker powers than most. In 2018, as the world was still feeling the aftershocks of Britain’s vote to leave the EU, Wylie turned whistleblower on his former employer – Cambridge Analytica, a political consultancy spun out of the obscure British military contractor SCL Group – and, in doing so, lifted the lid on the biggest political scandal of the century so far.
He’s here today to promote his new book, ‘Mindf*ck: Inside Cambridge Analytica’s Plot to Break the World’, described as a “revelatory” account of the alt-right power games and psychological warfare driving our current political era. In conversation with Paul van Zyl, The Conduit’s Co-Founder and Chief Creative Officer, Wylie is brimming with intelligence and passionate to the point of stubbornness in getting his point across, particularly when it comes to the culpability of the minds behind social media platforms.
“Is this technology always innately dangerous for consumers or can it be used for a greater good?”
The two men kick off the conversation by delving into the technicalities of Cambridge Analytica’s approach, emphasising the need to be clear on the terminology of Wylie’s world before exploring the wider consequences. Microtargeting, Wylie explains, is the use of datasets to build algorithms that identify shared attributes between individuals in order to create profiles. In Cambridge Analytica’s early incarnation, Wylie and his colleagues used microtargeting on behalf of the British military to understand how extremism spreads online. “We started by looking at young unmarried men and searching within that demographic for the characteristics of those most likely to be engaged in extremist thinking”, he explains. When the director of his team was introduced to Steve Bannon, this approach was co-opted to target potential audiences for Breitbart News, the outlet Bannon ran. “The targeting and profiling remained the same as it had for our counter-extremism work but rather than mitigating or trying to intervene in that process of radicalisation, [Bannon and his associates] were catalysing that process of radicalisation in the United States and later in the United Kingdom.”
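Wylie’s description is conceptual rather than procedural, but the mechanics he outlines – scoring individuals on a predicted trait, then selecting the highest scorers as an audience – map onto standard predictive modelling. The sketch below is purely illustrative and assumes nothing about Cambridge Analytica’s actual data or code: it uses synthetic profile attributes, a generic scikit-learn logistic regression, and an invented propensity label to show the shape of the pipeline.

```python
# Illustrative sketch only: toy data and a generic classifier,
# not Cambridge Analytica's actual methodology or code.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical profile attributes for 1,000 users: age, marital
# status, and two behavioural signals (e.g. page engagement).
n = 1000
X = np.column_stack([
    rng.integers(18, 65, n),   # age
    rng.integers(0, 2, n),     # married (0/1)
    rng.random(n),             # behavioural signal A
    rng.random(n),             # behavioural signal B
])

# Synthetic labels standing in for an observed trait of interest;
# in practice these would come from behaviour recorded in the dataset.
logits = -0.05 * X[:, 0] - 1.5 * X[:, 1] + 3.0 * X[:, 2] + 1.0
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

# Fit a model that learns which shared attributes predict the trait...
model = LogisticRegression(max_iter=1000).fit(X, y)

# ...then score every profile and take the top 5% as a "target audience".
scores = model.predict_proba(X)[:, 1]
audience = np.argsort(scores)[::-1][: n // 20]
print("Top-scoring propensities:", scores[audience][:5].round(2))
```

The point Wylie makes is visible in the last two lines: the model and the audience selection are identical whether the goal is counter-extremism or radicalisation; what changes is only the message subsequently aimed at the high scorers.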
It’s easy to see from this example how the same technology can be utilised to achieve outcomes at either extreme of the political spectrum. But is this technology always innately dangerous for consumers, or can it be used for a greater good?
“Imagine for a second that we’re on a blind date”, says Wylie. “As soon as we sit down, I start telling you how much I love your favourite musicians and movies, and how I have the same goals and struggles as you. In that moment I sound perfect for you. But later you realise that the only reason I sound perfect is that I’ve spent two years reading your messages, following you around and talking to your friends. There’s an innate power imbalance in that relationship that makes the other person vulnerable to being exploited. Facebook and other platforms like it allow you to engage with people in a similar way.”
But, van Zyl counters, could one argue that the problem lies not so much with the platforms themselves but with who uses those platforms most effectively?
Wylie disagrees, citing the significance of the architecture itself as an enabler for data misuse and the spreading of ‘fake news’. “The bread and butter of the staff at Facebook and Google is engineers and architects – people who design structures. We need to take a step back from arguing about how we’re using these platforms and instead ask: ‘what is this architecture that we’re all implicitly consenting to?’”
He turns to another metaphor to make his point. “As a physical architect you have to think about what happens when there’s a fire, or an earthquake, or an elevator shaft stops working. The biggest difference between a physical architect and a software architect is that [the latter] doesn’t have to think in that way, meaning that problems are very difficult to solve further down the line.”
“We need to take a step back from arguing about how we’re using these platforms and instead ask: ‘what is this architecture that we’re all implicitly consenting to?’”
The day before this conversation, Mark Zuckerberg defended Facebook’s decision to allow political ads containing false information on its platform, essentially granting politicians a “truth exemption” (as van Zyl puts it). Both men agree that this has potentially catastrophic consequences for democracies across the globe. “In a town square, there is an opportunity for people to listen to you and call you out on your ideas”, says Wylie, “whereas with targeted advertising candidates can become invisible. I could move through a town square as a spectator and whisper something different to each specific person, whether or not they’re actively participating in the debate. I can appear like a friend, an expert, a newspaper – whatever I want to be. That’s why the premise of Facebook as a public forum is false: we’ve privatised public discourse.”
So far, so bleak: but if we can’t put the genie back in the bottle, how do we move forward? The key, Wylie proffers, is to establish design principles for constructing these architectures in the future. “Can the outcome of what you’re doing be reasonably expected by your user? Are the risks you’re putting this person or community in proportional to the benefits you argue they get? I want these kinds of questions to be debated every single day on design and engineering teams at places like Facebook and Google.”
“In no other sector would we tolerate an attitude of ‘sh*t happens’.”
Wylie makes his point most clearly when drawing comparisons with our treatment of other industries. “An aerospace company can’t just shrug its shoulders at the likelihood that its planes will fall out of the sky and blame it on complicated engineering. In no other sector would we tolerate an attitude of ‘sh*t happens’. These [tech] companies are able to change how we see the world in a way that’s more powerful than anything else we interact with. Because of that power, there should be an obligation to consider these issues prior to designing their products.”
As the conversation draws to a close, van Zyl brings up the topic on everyone’s minds: the upcoming UK election. How easy is it for an outside power to use these platforms to influence the outcome? Unsurprisingly, Wylie’s answer is not reassuring. “Although Cambridge Analytica has dissolved, the capacity hasn’t gone away. Facebook is actively trying not to do anything. My concern is that a foreign power becomes the next Cambridge Analytica because there’s no preparation within the British government.”
So, what does Christopher Wylie recommend that we do? “Yell at your MP – politely! Demand we regulate software engineers as we do other sectors.” For one of the few times in the evening, Wylie’s tone loses a little of its dogged self-assurance. “I don’t think anyone has all the answers now, but I’m arguing that we should start debating what those answers are.”