Facebook and the Attention Economy: How Facebook represents a social media revolution that has spiralled out of control

If we take a moment to reflect on the state of our social lives, we will notice just how deeply they are rooted in social media. Social media is, without doubt, among the most pervasive forces in human society. Since the dawn of the 21st century, its reach has extended beyond our social lives, touching virtually every facet of civilisation. The question of whether this is for better or for worse is, in many ways, yet to be resolved. To understand it, however, we must trace its origins and observe the turbulent history that has birthed this new age. Equally, as we head into a time in which the digital world is eclipsing what older generations once knew as reality, we must think forward and predict the likely course that our collective future will take. In doing so, it becomes clear that one main character dominates the social media narrative: Facebook.

Though deceptively introverted and ordinary in appearance, the mastermind behind Facebook leaves a trail of controversy wherever he goes. Since his college days, Mark Zuckerberg has been an object of public fascination and outrage; the very nature of his wildcard personality can be felt at every stage of Facebook’s development. As a 19-year-old Harvard student, Zuckerberg already had a reputation as a computing prodigy and enjoyed creating simple networking programmes for fellow Harvard students. The most notorious of these was Facemash. In what is perhaps a telling insight into his character, Zuckerberg used his talent and his grasp of human psychology for troublesome ends. Facemash was a crude game that allowed students to vote on the attractiveness of the female student body; people could rank two girls at a time in a kind of instant ‘hot or not’ manner. In clear violation of the university’s privacy regulations, he obtained photos of the female students without any form of consent. Unsurprisingly, within two days of going live, the campus-wide indignation it provoked led to its swift shutdown and the near-expulsion of its creator. Despite its short debut, however, Facemash was wildly popular; it demonstrated the potential of such an interactive social network and became the foundation for Zuckerberg’s greatest brainchild of all.

Online “face books” were used at Harvard as digital directories, featuring photos of students alongside a small amount of information about them. There wasn’t, however, a single “face book” for the entire student population, so Zuckerberg decided to create one himself. On 4 February 2004, Zuckerberg and his college roommates launched Facebook (then known as Thefacebook) to the students at Harvard, and within 24 hours over 1,000 students had joined. The scale of this online network’s growth cannot be overstated: within a few months it had expanded to universities across the United States and beyond, prompting its founder to drop out of college to establish a headquarters in California with newly acquired investment. Finally, in 2006, any person with a valid email address could join this vast web of Facebook users. Seventeen years since it first went live, Facebook has taken the world by storm with a staggering 2.7 billion users. Wielding immense monopoly power, it has taken over rivals such as Instagram and WhatsApp, whilst paying vast sums of money to ensure that the Facebook app comes pre-installed and unremovable on new phones; it is practically impossible to avoid its influence. Facebook has become a titanic institution that has unquestionably rewoven the social fabric of humanity and accumulated a level of power that is often hard to fathom.

Although this is the narrative we hear most often, it fails to acknowledge that Facebook was made possible only by prior technological breakthroughs, and there is a rich history behind the developments that laid the foundations of the social media age. We can start this story in the early 1990s, when the British computer scientist Tim Berners-Lee invented the World Wide Web. This transformative development became the skeleton of the digital world, making Berners-Lee one of the most important figures in history. The World Wide Web, commonly shortened to the ‘web’, combined the internet with a new technology called ‘hypertext’. Hypertext enabled documents to be linked together in a distributed architecture, allowing them to be stored and accessed on many different computers and controlled by different individuals. The terms ‘web’ and ‘internet’ are often used interchangeably despite meaning different things: the web consists of the pages you see when you are online at a device, whereas the internet is the network of interconnected computers that the web runs on, and across which email and files also travel. We can use an analogy to think about this: the internet is like the roads that connect towns and cities together, while the World Wide Web is what you find along those roads, such as the houses and the shops. The vehicles, then, are the data moving about: some travel between websites, while others carry your files across the internet, separately from the web. This system has retrospectively been called Web 1.0, and it has since evolved beyond the limitations of static hypertext into the form we recognise today as Web 2.0. This new movement, whose name was popularised by Tim O’Reilly, is characterised by interactivity and ‘user-generated content’: people can now control and create digital media on the internet with ease; there are videos, images, games, pop-up windows and complete applications. This second generation of the World Wide Web gave people the ability to share many new forms of information and to use the internet as a means of self-expression. Put together, these two functions define the social media we know and love today.
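To make the roads-and-buildings analogy concrete, here is a minimal sketch in Python: the raw TCP connection stands in for the internet (the road), and the HTTP request for a hypertext page stands in for the web (what you find along the road). It fetches example.com, a domain reserved for exactly this kind of demonstration; it illustrates the layering only, not how Facebook or any particular site works.

```python
import socket

HOST = "example.com"  # a domain reserved for documentation and demos

# The internet layer (the "road"): open a raw TCP connection to the server.
road = socket.create_connection((HOST, 80))

# The web layer (the "building"): send an HTTP request for a hypertext page
# over that connection.
request = f"GET / HTTP/1.1\r\nHost: {HOST}\r\nConnection: close\r\n\r\n"
road.sendall(request.encode())

# What comes back is an HTML document full of hyperlinks -- the hypertext
# that Berners-Lee's web is built on.
response = b""
while chunk := road.recv(4096):
    response += chunk
road.close()

print(response.decode(errors="replace")[:300])  # first few hundred characters
```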

The first recognisable social media site, Six Degrees, was created in 1997; it allowed people to upload a profile and make friends with other users. This was soon followed by the first blogging sites in 1999, which allowed individuals to express themselves to fellow users and build up personalised networks of associations. These preliminary applications were tremendously popular; they established a fundamental blueprint that combined networking with self-expression, a formula adopted and built upon by the next generation of social media. Sites like MySpace and LinkedIn gained prominence in the early 2000s, alongside Photobucket and Flickr, which inaugurated online photo sharing. YouTube, launching in 2005, pioneered an entirely new way to communicate and share videos irrespective of geography. Riding the wave of this internet revolution, the 19-year-old Zuckerberg harnessed the rich opportunities and new capabilities inherent in Web 2.0, taking inspiration from the popular social networks that had emerged in its wake. In 2006, the same year Twitter launched, Facebook opened its doors to the general public.

You might wonder, given the sheer number of social networks that exist today, why Facebook has been especially significant in changing the way we conduct our social lives, as opposed to the other big names such as Instagram, WhatsApp, Twitter or Snapchat. One good reason to pay particular attention to Facebook is that it is the most used social networking site worldwide. Beyond its individual success, as Facebook began to balloon in popularity and wealth, Zuckerberg set about expanding its sphere of influence, making strategic moves to buy out promising competitors and maintain its position in the market. In 2012, Facebook bought a young Instagram, then only a year old with just 13 employees, in a move often regarded as the best acquisition in the history of tech. Today, under Facebook’s control, Instagram has an ever-increasing count of 1 billion users. Following this, in 2014, Zuckerberg bought the hugely popular WhatsApp; standing at 1.5 billion users today, this figure is projected to increase by half a billion every two years. Over the past 15 years, Facebook has bought 82 companies in a series of acquisitions that have led to its global dominance. The number of active users across Facebook’s core products equates to a mind-blowing 3.14 billion people – nearly half the global population!

As digital media has become cemented into the social landscape worldwide, we must think about the implications of this powerful presence in society: does it do more harm than good? In 2020, a striking documentary called The Social Dilemma premiered on Netflix. Featuring insider interviews with former employees and executives from the leading tech companies and social media platforms, it calls on us to look at social media with a critical eye. The film examines the damage social media has caused to society by exploiting and manipulating us, its users, for financial gain through the covert collection and monetisation of our personal data. It sheds light on how these tech giants design their social networks to hack into our behaviour patterns and create addiction; it looks at how their platforms are used in politics to spread misinformation and divide us; and it touches on the serious issue of social media’s corrosive effect on mental health. Despite all the incredible opportunities for connection, collaboration, learning and empowerment that social media has gifted society, there is a darker side that has only begun to rear its ugly head in very recent years. Could this be a glimpse into the kind of future we will all soon find ourselves in if we allow these corporations to continue growing and operating as they are?

Tristan Harris, a former design ethicist at Google and a leading figure in the documentary, has spoken up about many of these issues through the lens of his past experience designing the kinds of persuasive tactics deployed to hook us onto these apps. He argues that a new business model has emerged: the attention economy. At face value, Facebook, alongside most of the other players in this field, appears totally benign, since it offers its services to us for free. However, there is a cost to this that many of us are unaware of. These social media companies are all in the business of competing for as much as they can gather of one very lucrative resource: our attention. It may sound abstract, but how else do these organisations become so astronomically rich when their product is free for us to use? Sure enough, over 98% of Facebook’s revenue comes from targeted advertising fuelled by its users’ data.

Every detail of an individual user’s behaviour is monitored and quantified as a data point: how long you watch a video for, the kinds of posts you gravitate towards, your likes and dislikes, the time of day at which you are most and least active, the people you are friends with and even your search history. This information builds a unique profile of you, which Facebook stores; when third-party companies ask to advertise on the platform, Facebook can use its algorithms, driven by your data, to decide which product ad to show you, at what time of day, how many times to show it before you engage with it, and a vast number of other highly specific factors that allow it to predict the likelihood of you clicking on the ad. This matters because the third-party companies pay Facebook based on user engagement with their advertisements: impressions, likes, clicks, shares and so on. All of this goes unnoticed because, on the other side of the screen, the oblivious user is provided with an infinite scrolling experience yielding unlimited entertainment. And it’s not just Facebook; every mainstream social media company, and increasingly other apps and digital content providers too, has had to take on the attention-based model in order to survive. A precedent has been set in which we expect these services to be free, so we are disincentivised from paying upfront for a well-intentioned app when there are hundreds just like it that cost us nothing at all. In essence, users get the service for free, but in exchange we provide data, which the platform owns and then sells in the form of targeted advertising. To be at the top of the game in the attention economy, it is therefore in the company’s best interests to design its software to maximise the amount of time you spend on it; the more time spent online, the more ads you are exposed to, and the greater the likelihood that you’ll engage with them. Your attention is directly proportional to profit, and this is precisely why it is no accident, nor any fault of your own, that you find social media, in all its many forms, so very addictive.
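To make this concrete, here is a deliberately simplified sketch of how an attention-driven ad platform might pick which ad to show: it chooses whichever ad maximises expected revenue, the advertiser’s bid multiplied by the predicted chance that this particular user will click, with that prediction coming from the user’s profile. Every name and number below is invented for illustration; real ad-ranking systems use machine-learned models over thousands of signals, but the incentive structure is the same.

```python
from dataclasses import dataclass

@dataclass
class Ad:
    advertiser: str
    bid_per_click: float   # what the advertiser pays the platform per click
    topic: str

# A toy "profile": how strongly this user has engaged with each topic so far.
user_profile = {"fitness": 0.8, "gaming": 0.3, "travel": 0.6}

def predicted_click_probability(ad: Ad, profile: dict) -> float:
    """Crude stand-in for a learned click-prediction model."""
    base_rate = 0.02                                  # average click-through rate
    affinity = profile.get(ad.topic, 0.1)             # topic interest from the profile
    return min(1.0, base_rate * (1 + 10 * affinity))  # more interest -> more likely to click

def choose_ad(ads: list, profile: dict) -> Ad:
    """Show the ad with the highest expected revenue for the platform."""
    return max(ads, key=lambda ad: ad.bid_per_click * predicted_click_probability(ad, profile))

ads = [
    Ad("GymChain", bid_per_click=0.40, topic="fitness"),
    Ad("GameStudio", bid_per_click=0.90, topic="gaming"),
    Ad("TravelSite", bid_per_click=0.55, topic="travel"),
]

winner = choose_ad(ads, user_profile)
expected = winner.bid_per_click * predicted_click_probability(winner, user_profile)
print(f"Ad shown: {winner.advertiser} (expected revenue per impression ~ ${expected:.3f})")
```

Notice that nothing in this loop asks what is good for the user; the only quantity being optimised is expected revenue per impression, which is exactly why more attention translates directly into more profit.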

You might argue, however, that this particular issue is not all that bad, because, if you think about it, the attention economy existed long before the advent of social media as we know it. Whenever an author publishes a book, for instance, the money they make depends entirely on the attention that the book receives. They might strategically opt for an eye-catching cover design and select the very best reviews to print on the front in an effort to catch our attention on the bookshop shelves; after all, it is just a fact of human nature that we do judge a book by its cover. Furthermore, the radio shows, the podcasts we listen to, and virtually all the products on offer in our consumerist society derive their value from capturing attention; you might, therefore, think it a little incongruous that we vilify our social apps whilst perceiving these other communication media as benign. Before the days of Instagram, the colourful magazines that teenagers once relied on for their fashion tips and celebrity gossip were designed to entice us to turn the page and read the next article. With each page turned, we’d be exposed to the next set of third-party advertisements designed specifically to catch the eye of the teenage girl readership. You might ask: apart from the fact that the very same content is viewed on your smartphone instead of a glossy page, how is this any different from the ways of Instagram or YouTube?

In addition to this, there is no doubt that people love social media; we are captivated by the content of these apps. Why else would we spend significant portions of our days scrolling through stimulating and engaging content that is personalised to our tastes and delivered to us for free? The so-called ‘cost’ of having our data used to further personalise our feeds and show us ads for products that we love is actually an additional benefit, many would say.

These arguments all make good sense to begin with; however, there is a crucial factor missing from this understanding: the extraordinarily powerful Artificial Intelligence (AI) at work on the other side of your phone screen. Mark Zuckerberg has openly spoken about how he buys entire tech companies, often not for their product value but for the talented individuals working for them. Facebook and the other giants in the field channel many billions of dollars into combining the world’s most powerful supercomputers with the world’s most exceptional computer scientists, behavioural psychologists, neuroscientists, technicians, graphic designers, et cetera. This is all in the effort to continually improve and refine the most valuable, money-making asset they have: the algorithm. On the other side of your phone screen, an impeccably programmed supercomputer is calculating every minute detail of your behaviour on and off that app. Let’s say you click on a YouTube video about a new weight-loss diet, or on a politics channel: that supercomputer will have already determined the next set of videos to recommend to you in order to draw you in and keep you online. Interestingly, scattered among the recommendations there tend to be a few videos containing more extreme versions of the content you sought out. For instance, the dieter may be served some pro-anorexia videos, and the politics consumer could very well be presented with some eye-catching political conspiracy theories. Not surprisingly, this strategy works; a shocking 70% of the billions of hours of YouTube videos watched worldwide are driven by the recommendation system. This goes to show that, in the face of such a powerful algorithm, we have very little autonomous control over how we spend our time online.
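The core incentive can be illustrated with a toy recommender in Python. It simply ranks candidate videos by how long it predicts this user will keep watching, which is roughly the objective described above; the titles and numbers are invented, and nothing here is YouTube’s actual algorithm, only a sketch of why engagement-ranked feeds tend to push the more sensational items to the top.

```python
# Invented candidates: (title, predicted minutes this particular user would watch).
candidates = [
    ("Balanced overview of the new diet",      3.5),
    ("10 foods nutritionists want banned",     7.2),
    ("The diet industry's hidden conspiracy",  9.8),
    ("Calm policy debate, full recording",     4.1),
]

def recommend(videos, top_n=3):
    """Return the videos expected to keep the user watching the longest."""
    return sorted(videos, key=lambda v: v[1], reverse=True)[:top_n]

for title, minutes in recommend(candidates):
    print(f"{minutes:>4.1f} min expected  ->  {title}")
```

Run it and the conspiracy video comes out on top, not because anyone chose to promote it, but because maximising predicted watch time and surfacing the most provocative content turn out to be the same thing.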

Our short-term memory can hold only around seven pieces of information; the AI is designed to constantly overwhelm us with far more information than our brains could possibly process at once, leaving us perpetually distracted when online. When we are distracted, we lose track of the task we had originally set out to do and get carried along by the next wave of information overload. So, before you criticise yourself for unwittingly falling down a rabbit hole of scrolling and mindlessly watching your screen, just remember that this is a battle between your pre-frontal cortex, which evolved in a totally different environment millions of years ago, and a cutting-edge supercomputer specifically designed to hack the inherent vulnerabilities of the human mind; the winner has already been determined.

The digital landscape has plunged the young people of the world into a social media pressure cooker. With so many social media channels to choose from, and a culture that coerces teens into creating thriving digital identities to match equally picture-perfect lives in the real world, it is unsurprising that teen depression has skyrocketed in the past decade. The shocking mismatch between the expectations social media culture places on a young mind and the reality of what can actually be achieved creates a new dimension of mental health risk. And when this is combined with inherently addiction-inducing algorithms, the result is a toxic, identity-warping force that is degrading entire generations in their infancy. The AI exploits the positive feeling we all get from social approval, a fundamental element of human nature. For instance, when a teenage girl posts a picture on Instagram, the algorithm intentionally pushes her picture to the top of her friends’ feeds so that they see it immediately and flood it with likes and comments; this intense burst of social approval overwhelms her dopamine reward system, and she feels euphoric. A few minutes later, however, the algorithm has, also intentionally, let her picture get buried underneath other people’s posts, and she is now getting far less engagement; she is no longer getting that dopamine rush, leading to withdrawal symptoms that often take the form of depression and isolation. Many teens report this exact scenario, whereby they are driven by self-doubt and anxiety to delete a post within a few minutes once that burst of engagement suddenly drops off. This is the very same dopamine reward system hijacked by opiates, cocaine and amphetamines, so it is no wonder that so many young people become entrenched in a cycle of posting more and more to sustain the very real social high this induces.

Another fundamentally new issue that has arisen from AI-based social media is the problem of misinformation. We check our smartphones over 100 times a day; they are intimately woven into the fabric of our daily lives, and most people now source their information about the world through the same apps they use to socialise. In the same way that our social media entertainment is highly personalised to our individual preferences and biases, the news we get is filtered through this same system. So, for an alarming number of us, the main source of information is a machine designed to feed us warped versions of reality and to spread conspiracy theories and lies faster than facts; these types of information trigger fear and outrage, emotions that drive us to keep engaging with the content and, ultimately, to spend more time online. You may have noticed that reading a nuanced and balanced news report doesn’t captivate your emotional response and attention anywhere near as much as a shorter, more radical interpretation of the same story does, especially when it either perfectly aligns with, or directly opposes, your opinion on the matter.

The computer scientist Jaron Lanier gives a good analogy for this phenomenon: imagine if Wikipedia presented you with information that depended entirely on your search history and on all the data collected about your biases, your preferences and the ways in which your attention can be gamed, so that when each of us went to the same Wikipedia page, we were guaranteed to see a different set of facts designed around our individual psychologies. In this sense, we are living in our own Truman Shows, our own more extreme versions of reality. We have designed an apparatus whose purpose is to fragment our worldview and make it impossible to converge on a common understanding of what is real and what is important. Right at the centre of it all is Facebook.

When the Cambridge Analytica scandal broke in the aftermath of the 2016 US election, we finally woke up to the fact that social media had reached a level of potency that poses a very real threat to democracy. Many years earlier, Facebook had begun trying to build on its popularity by moving from simply having a high number of users to increasing how active those users were on the site. It collaborated with game developers to incorporate a wide range of social games, such as FarmVille and CityVille, into the network; users could share their friends lists with a game in order to play within their Facebook friend groups. This practice was flagged by legal bodies for its failure to adopt informed-consent policies giving users more control over the use of their data, but Facebook made no real effort to enforce such safeguards, and the sharing of data continued. This caught the attention of a political consulting firm, Cambridge Analytica, which saw an opportunity to collect users’ easily accessible friends lists and create a product that could change the outcome of the upcoming presidential election. The firm wanted to use “psychographics”: essentially, profiling people on their personality data and sorting them into categories according to how persuadable they were in their voting choices, with the explicit goal of furthering the Republican political agenda. Cambridge Analytica originally went to the University of Cambridge, a research institution legitimately working with Facebook, to ask for data; the university refused to cooperate, so Cambridge Analytica hired one of its researchers, Alexandr Kogan, who knew the ins and outs of Facebook’s data systems. Kogan created a third-party app called “thisisyourdigitallife”, which exploited the same mechanism used by Facebook’s other social games to instantly obtain the friends lists of the people who took part. About 300,000 people played the game, but the web of friend-list associations this exposed led to 87 million Facebook users, most of them in the US, having their data harvested by Kogan and Cambridge Analytica. The firm then sold this data to the 2016 Trump campaign, allowing a great number of those users to be targeted with digital propaganda and political misinformation precisely tailored to their weak points and cognitive biases, so that people, particularly those who may have been on the fence about the election, were either driven to vote for Trump or sufficiently put off by concocted stories about the opposition as not to vote at all. When 44% of Americans get their news from Facebook, there is no doubt that this interference played a role in the outcome of the election.
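The scale of that fan-out is easy to check with some back-of-the-envelope arithmetic: divide the roughly 87 million harvested profiles by the roughly 300,000 people who actually installed the quiz, and you see how few unique friends each participant needed in order to expose so many others. The Python below is just that arithmetic; the two headline figures come from the reporting on the scandal, while the remark about typical friend counts is an assumption for illustration.

```python
participants = 300_000            # people who installed "thisisyourdigitallife"
profiles_harvested = 87_000_000   # profiles ultimately exposed via friend lists

unique_friends_needed = profiles_harvested / participants
print(f"Unique friends exposed per participant: ~{unique_friends_needed:.0f}")
# ~290 -- comfortably within an ordinary Facebook friend count, which is why a
# relatively small pool of consenting players could expose so many other people.
```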

In 2018, when news of what had transpired became widespread, Facebook tried to paint the scandal as a “data breach”. However, according to information security standards this was no data breach: no systems had been infiltrated, and no passwords or sensitive information had been stolen or hacked. What happened with Cambridge Analytica was not a matter of Facebook’s systems being compromised, but of Facebook’s systems working just as designed: data had been collected, data had been analysed, and data had been exploited. Facebook had allowed a third party to extract vast amounts of private information about vast numbers of people from its system, and that third party had passed the data along to another entity, which had grossly misused it for political ends. This story had been reported many times before the implications of what had happened truly hit home. The world cannot afford to turn a blind eye to a system that puts its own growth and profits above designing a platform around the needs of its users; the consequences of this business model are eroding parts of society that we never expected social media to reach.

Currently, the truth of a piece of information is far less important than whether it is shared, liked and monetised. These skewed engagement metrics distort the media landscape, allowing clickbait, hyperbole and misinformation to proliferate and thrive. The misinformation being spread doesn’t always involve outlandish conspiracy theories; insidious half-truths and misleading interpretations are often subtle enough to be believed by even the most reasonable of us, particularly when they deal with complex issues like the economy, immigration or climate change. Within Facebook’s digital echo chamber, we are inundated with shock-value misinformation that aligns with our beliefs and triggers our age-old confirmation bias; we share it with our friends, and fake news spreads like wildfire. It’s a perfect feedback loop, and there is nothing to break the cycle. Before the likes of Facebook, media companies acted as gatekeepers to the news, using verified sources and staff trained in fact-checking to provide a filter through which information flowed into the public domain. But the days of traditional media have passed; powerful, authoritative, well-researched journalism has been pushed out of the market, and in a desperate attempt to survive in the attention economy, standards have slipped across the board. Without real change to this business model there is nothing to contain the outbreak of misinformation, because misinformation is what drives the economy; this is arguably the worst consequence born of the rise of social media.

Until 2017, there seemed to be no real reason to be concerned about the tech industry, but now we are starting to ask questions that have never before been raised in a public forum. Just think for a moment: why is it legal for the service providers who give you your email accounts, internet access, online apps, credit cards, food deliveries, private transportation, healthcare and social life to scan the contents of your private material and sell it for their financial gain? Through misaligned incentives, social technology is being used to both overwhelm and undermine human vulnerabilities. Mark Zuckerberg, alongside a tiny number of other people in charge of the most powerful forces in society, acts in the interests of profit and growth rather than in the interests of bettering the lives of the billions of people who put their trust in them to do so. We are starting to look up from our screens and realise that these companies drive so many of the issues we see in society, and our efforts to expose the attention economy and shift its incentives will be the only way to create the change we need. It is clear that our social lives will continue to exist, to a large degree, within the digital dimension, and social media shows no sign of disappearing anytime soon. However, if we can regain control of our minds and our data, no algorithm will have the right to decide what our social lives look like. That is for us to choose.

By Zara Hussein, London, UK
