By: Daniel Betancourt, YLS ‘22

In the forum’s fourth session, Nathaniel Gleicher was asked to briefly respond to Applebaum and Pomerantsev’s article in The Atlantic arguing that Facebook’s business model and commercial interests provided powerful, overwhelming incentives against making a “nicer” internet which could encourage civil conversation and discourage misinformation. “If you want to build a social media platform where people are going to engage,” he said, “you need to create a space that they’re going to want to be part of, that they’re going to want to come back to, that they’re going to want to spend time on.” Gleicher suggested that Facebook (and by extension, platforms like it) was strongly economically motivated to produce a better experience for users. After all, if users consistently have unpleasant experiences stemming from an abundance of clickbait and hate speech, they will presumably migrate to new platforms that offer superior experiences (or just abandon social media altogether and sleep like angels). As Gleicher put it, if the platform wishes to create sustained engagement, “Facebook’s incentive is pretty strongly aligned with tackling this problem.” Unfortunately, there is a countervailing incentive. Facebook and its fellow platforms, in their pursuit of sustained engagement, are motivated to rely on mechanisms that fail to improve, or even actively harm, user experiences. Their reliance on targeted ads and data collection, far from creating a nicer internet, drives them to preserve its most negative features: the very features that malicious actors, both foreign and domestic, take advantage of, and that our cyber leadership forum, Gleicher included, is so dedicated to stopping.

Negative User Experiences

Gleicher is of course right to say that an overwhelming barrage of negative content would drive away users from Facebook or almost any social media platform. If the rest of his theory is right, however, then reality must be wrong. Quite apart from its effects on the body politic and the spread of misinformation, social media use is linked with mood disorders, poor sleep, anxiety, depression, and low self-esteem. Evidence mounts that young people especially suffer in connection with their social media usage – and that the relationship is causal. Nobody knows this better than the users themselves, who ranked Facebook as the fourth-worst of America’s hundred most visible corporations, beating only Monsanto, The Trump Organization, and Juul Labs. Far from driving Facebook to create a space that users are “going to want to spend time on,” the company’s corporate incentives and desire for sustained engagement seem to generate a space that nobody likes but nobody leaves.

This negative reaction from users seems to disprove the suggested necessary connection between sustained engagement and a more civil internet. However, as we used to ask at the University of Chicago, “that’s all well and good in practice, but how does it work in theory?” If a pleasant user experience (which happens to facilitate democracy and thwart attempted putsches) doesn’t drive sustained engagement, what does? There are many ways to drive engagement that do not require users to “want” to be there. LinkedIn, a platform bearing the profiles of 176 million Americans, keeps those users by providing them with a powerful professional network. In Facebook’s case, users are retained by a number of mechanisms (including its professional network), but one of the more notable is addiction.

Addicted to Getting Mad Online

While I have been using Facebook as an example, over the past several years observers have increasingly used the language of addiction to describe the relationship between users and all sorts of social media environments. Driven by social insecurity, FOMO, and other factors, users stay online even without finding it rewarding, finding it difficult to log off and providing those clicks that let Facebook generate $84 billion in ad revenue in 2020. To generate sustained engagement, a platform doesn’t have to make a space where users want to be, so long as it is a space where users can’t not be. Platforms must still be mindful of too much negative content, but the task becomes not one of minimization but a balancing act. Applebaum and Pomerantsev note that shortly after the election, Facebook did in fact adjust its content delivery algorithm to reduce antagonistic content and produce a nicer newsfeed – only to reverse course shortly thereafter. Where Gleicher correctly identified a ceiling for negative content, Facebook’s actions suggest that there is also a floor. If users are not exposed to enough inflammatory content, enough extreme content, then they get bored and can leave, and nothing keeps people online like seeing that someone is wrong on the internet.

By keeping in mind the balancing act required to maintain user engagement, an observer can see how much of Facebook’s content regulation becomes a kind of smokescreen, reducing negative content just enough to keep everyone scrolling, but little further. Facebook’s vaunted fact-checking regime, for example, which labels false or misleading content for the convenience of the user, allows it to avoid asking why that content is appearing in our newsfeeds in the first place. The algorithm spreads bad content far and wide, users engage with the content, and creators, now encouraged, go on to make more bad content. This process need not be conspiratorial – Facebook is a big company, and while some of its employees and agents diligently search for and root out malicious actors spreading bad content, other agents improve and optimize the mechanisms that attract those actors in the first place and turn other users into their unwitting victims. They have no choice but to do so. Social media platforms that collect user data to drive addicted engagement with personally targeted advertisements appear to depend on just those negative psychological states listed above to sustain that engagement. The conclusion suggested by Applebaum and Pomerantsev, and by users’ disdain for Facebook and other platforms, is straightforward – the economic interests of social media companies may demand a (somewhat) tolerable internet, but that by no means implies a nice one.

A Horrible Timeline

Social media has featured heavily in this forum, for the obvious reason of its increasingly prominent role in the breakdown of civil society. As long as content-sorting algorithms must maintain addictive engagement, they must deliver negative content to users. As long as algorithms deliver negative content to users, bad actors will find ways to generate and spread that content for their own purposes, whether those actors be foreign governments or simply bad people that a better society would isolate. As long as social media companies are solely driven by their unfettered pursuit of ad revenue, they will always need to maintain addictive engagement with those ads. Any serious solution to this problem, then, may be technically complicated, but is conceptually clear: those companies must become fettered, either regulated in their pursuit of revenue or divorced from the model entirely. Without creating the legal structures for this fundamental shift, we will never fully erase these perverse incentives. It took twelve years for Facebook to transform from a hot-or-not knockoff into a world-straddling, election-influencing colossus. If the ad revenue model is not taken seriously, I shudder to imagine what it will do with another twelve.