By: Lara Markey, YLS ’22 

Facebook’s newly formed Oversight Board has been thrust into the spotlight in recent months by its review of the company’s decision to suspend President Trump from the platform in January 2021. After nearly four months and over 9,000 public comments on the case, the Board announced on May 5, 2021, that Facebook’s decision to restrict President Trump’s account was proper. However, the Board determined that his indefinite suspension was inappropriate and contravened the company’s terms and content policies. The fate of former President Trump’s account was not the only question at issue; Facebook also asked the Board for new guidance on how posts from political leaders should be addressed on the platform. Although the Board’s recommendations are not binding on Facebook, the company has a history of implementing them with care, and these new proposals may influence how political leaders use the platform.

Since Mark Zuckerberg first proposed the creation of an Oversight Board in 2018, there has been widespread discussion about what standards the Board would use in its review of platform activity. The Board’s charter states that it will “review and decide on content in accordance with Facebook’s content policies and values” outlined in Facebook’s community standards. These standards apply to Facebook users across the globe, but the preamble of the Board’s charter emphasizes the importance of freedom of expression, a right recognized most strongly in the United States. As Nathaniel Gleicher, Head of Cybersecurity Policy at Facebook, reminded the audience at the 2021 Yale Cyber Leadership Forum, many countries where Facebook operates “don’t have the same commitment to freedom of expression as we do in the United States.” Many of those countries have, however, agreed to the UN norms of Responsible State Behavior in Cyberspace and the values that agreement encompasses. This article considers whether any of those values or norms could be used to enhance Facebook’s community standards and create a more internationally focused governance framework for the Board.

Facebook’s Current Community Standards Applied to State Action

A number of Facebook’s current community standards could apply to limit misconduct by state parties and political leaders. The standards against inciting or facilitating serious violence may be relevant in preventing malevolent leaders from drawing support through the platform. The provisions against online harassment and hate speech might also apply to conduct by state parties. The standards that come to mind first when thinking of malicious state action are those against inauthentic behavior and false news, as these were the tactics used to influence the 2016 U.S. election. Although the Oversight Board has not yet considered a case involving this type of state action, Gleicher has noted that Facebook identified and stopped sixteen influence operations in the lead-up to the 2020 U.S. election. Facebook’s recently announced Corporate Human Rights Policy may also inform the company’s decisions about political leaders.

The Oversight Board’s May 5 recommendations provide some new guidance on how to address action by state parties and political figures. In the Board’s view, distinguishing between political leaders and other influential users is not always necessary; what matters is a user’s capacity to influence others, which should be taken into account when reviewing content that may incite violence. The Board acknowledges that government officials hold greater power to cause harm than other users. It recommends that potentially harmful content from political leaders be escalated quickly and evaluated under international human rights norms to head off any risk of harm, and that sanctions against these parties be publicly explained. In making these determinations, Facebook should balance the risk of immediate harm against the public’s right to hear political speech.

Potential Additions from the Norms of Responsible State Behavior in Cyberspace

Several principles from the 2015 norms for responsible state behavior developed by the UN Group of Governmental Experts on Information Security (GGE) could be used to enhance Facebook’s policies on state party and political leader action on the platform. Of the eleven norms outlined in the GGE report, five seem appropriate for incorporation into Facebook’s guidelines for behavior on the platform. Principle (a) calls for the prevention of cyber practices that may pose threats to international peace and security; this norm could complement Facebook’s policies against the incitement of violence and could be incorporated there or in its own section of the company’s community standards. Principle (c) prohibits the use of state territory for internationally wrongful acts in cyberspace and could be incorporated into the community standards or Terms of Service to govern actions on the “territory” of the Facebook platform. Principles (f), (g), and (h) concern the use and coordinated protection of critical infrastructure systems; given the ongoing public debate about designating social media networks as critical infrastructure, Facebook could add principles against actions that would impair other users’ use of the platform. Though only these five norms are highlighted here, other principles from the GGE report may prove relevant to Facebook’s platform upon further review.

Increased enforcement of norms of international law and state behavior on social media platforms may help address the difficulties the international community faces in regulating conduct in cyberspace. As explained by Michele Markoff, the State Department’s Deputy Coordinator for Cyber Issues, the norms for responsible state behavior have created predictability in state conduct while balancing states’ diverging priorities and preserving the flexibility needed for continued innovation in cyberspace. Markoff has also expressed the belief that technology companies should not take the lead in making decisions of cyber diplomacy. That perspective is reflected in U.S. diplomatic practice: the United States has not subscribed to the Paris Call, an agreement between states and private sector partners. However, the international community currently struggles with the fact that misconduct in cyberspace rarely rises to the level of a use of force that would trigger consequences under the U.N. Charter. Much of this conduct goes unpunished because no viable international remedy fits this level of wrongdoing. While states are exploring sanctions to address such low-level misconduct, empowering Facebook and other social media platforms to respond directly to irresponsible state action may be a more effective approach. Facebook and other platforms may also have internal tools for resolving uncertainty around attribution and online sovereignty, which would enhance their capacity to address malicious online conduct.

Empowering Facebook and other social media platforms to respond proactively to violations of international cyber norms would be a pragmatic answer to the current gaps in cyber enforcement. Account suspensions or removals are minor penalties compared to the options available to the U.N. or to states seeking to address malicious online conduct. Facebook has already shown a willingness and ability to address malicious state action effectively in the context of elections, and this proposal would build on that success to cover other norms from the GGE report. By relying on minor defensive penalties, such as account suspension or removal, Facebook can ensure that its enforcement actions neither escalate an online conflict nor violate cyber norms in response. Because the proposed responses amount only to defensive regulation of Facebook’s own platform, there is less need to measure the severity of a transgression against the thresholds required to justify more serious counter-attacks. Many private companies already consider the conduct of state parties when deciding whether to provide products or services: some defense companies refuse to sell weapons to certain states, and professional services firms occasionally limit the work they will do for certain governments. The use of sanctions and platform limitations in social media is a logical complement to policies long in place in other industries, and it would be a key step forward in addressing state cyber conduct.

Conclusion

Facebook’s creation of the Oversight Board has demonstrated its willingness and capacity to tackle serious questions about the online actions of state parties and political leaders. Facebook could further enhance its ability to address state misconduct by incorporating principles from the GGE norms of Responsible State Behavior in Cyberspace into its community standards. The international community should welcome this assistance from Facebook and other technology platforms in addressing state misconduct that cannot be effectively punished through existing mechanisms of international law enforcement.