Bankrolling Ethics: Do Tech Investors Have a Responsibility to Protect Democracy?

Are fake news and the misuse of personal data just unintended consequences of a new technology? Scholars Sandra Goh and Jack Loveridge believe tech investors have an ethical imperative to head off potential harms to democracy early on.

Image of a sign with the words "we are not fake news" written on the back

By Sandra Goh and Jack Loveridge

New startups are launching innovative technologies with the potential to transform democracies around the world, often in foreseeable ways. Always looking toward the future, early investors in new tech should work to infuse a startup’s business model with an ethical outlook that upholds democratic values. For a case in point, look no further than the recent history of a humble dorm room startup that attained a remarkable global reach: Facebook. 

It’s now been over a year since CEO Mark Zuckerberg delivered his much-publicized testimony before the US Senate’s Commerce and Judiciary Committees. Since then, in the wake of the Cambridge Analytica scandal—in which data from 87 million user profiles were made available to third-party developers seeking to influence the 2016 US presidential election and the UK’s Brexit referendum—his company has struggled to reassure the world of its good intentions.

At first glance, Facebook’s effort to revitalize its global reputation appeared to be paying off, with a reported $16.9 billion in second-quarter revenue. Last month, the US Federal Trade Commission (FTC) fined the company $5 billion for privacy violations—the largest privacy penalty the agency has ever levied, but a modest sum for a company with a market capitalization of over half a trillion dollars. Still, significant questions remain regarding the platform's monetization of user data and its potential amplification of disinformation.
 
Many of these current concerns derive from specific choices made by Facebook’s founders and investors early in the company’s life, choices that determined what might be called its ethical trajectory. They also rest upon popular assumptions regarding investor responsibility and the relationship between technological change and democratic governance. During his testimony, Zuckerberg reflected, “The history of how we got here [to the Cambridge Analytica scandal] is we started off in my dorm room with not a lot of resources and not having the AI technology to proactively identify this stuff.”

By “this stuff,” Zuckerberg meant the harvesting of the personal data of millions of users without their consent by third-party companies that used it to deploy targeted political advertising. While it is difficult to believe that no one at Facebook could have foreseen such abuses, Zuckerberg hit the mark in describing a popularly held conception of how technology and policy interact over time. In this view, the negative consequences of new technologies are addressed only after the fact, through policy initiatives and popular movements. Regulation and restraint, it is assumed, do not emerge as preventive measures, but come in due time in response to innovation’s unavoidable excesses.
 
The First Industrial Revolution, for instance, famously enriched factory owners and fueled Britain's imperial expansion across Asia and Africa, prompting a popular reaction that would champion sweeping labor reforms at home and an end to colonialism abroad. In the United States, the Progressive Movement of the early 1900s rode a wave of public outcry regarding abuses in the food processing, manufacturing, and mining sectors. It championed a host of legislation designed to protect and empower the public, including the Pure Food and Drug Act that led to the establishment of the FDA and made unhygienic, adulterated, and mislabeled consumer goods rare. What if the assumption that restraint and ethical evaluation must come only after technological misuse and abuse is incorrect? What if what’s needed now isn’t AI to mop up the mess, but rather human insight to assess first assumptions?
 
The lessons of historical experience challenge us to catch problems early: if not at the moment of inception, then at the first investment, and increasingly as the technology evolves and its effects become known. Early investors have the uncommon capacity to help today's startups think about how their technologies and business models might affect democratic institutions. This active foresight is especially crucial in a digital economy, where user data itself is a central commodity, public opinion can be manipulated subtly, and institutional damage can be inflicted clandestinely.
 
Further, the question of how private data is used and how disinformation spreads reaches well beyond the United States. Events around the world have underscored the ways in which user data amassed by social media companies can be exploited to target key demographic groups with politically motivated disinformation—potentially sowing panic, manipulating public opinion, and undermining democratic processes. The Sri Lankan government, for instance, shut down social media in the wake of April's terrorist attacks, arguing the move was needed to halt the sharing of false reports inciting reprisals against the country's Muslims.
 
In India, where Facebook boasts over 300 million users and continues to expand, a recent year-long study by Equality Labs found that the company has failed to delete 93 percent of non-English-language posts that violate its own rules against speech targeting LGBT, caste, and religious minority groups. In May, Facebook filed suit against the South Korean app developer Rankwave for allegedly using Facebook user data to provide consulting services to marketing companies, in breach of contract. The lawsuit shows that the company is taking action, but it also reveals that the model of allowing third-party developers broad access to user data is still alive and well.

Image of cardboard cutouts of Mark Zuckerberg bearing the words "Fix Fakebook" in front of the European Union headquarters in Brussels

Perhaps most critically, the notion that millions of voters can be effortlessly manipulated via social media diminishes confidence in democratic processes more generally. On a conceptual level, if every election seems manipulated, or can be made to seem manipulated, why bother casting a vote? That possibility is disturbing enough that every measure must be taken to establish the integrity of democratic processes, from transparency in advertising on social media to the security of final vote tallies.
 
The fledgling global effort to address concerns regarding user data, privacy, and disinformation at the policy level took its first faltering steps in 2019, most notably with the European Union’s ongoing investigation of Facebook under its newly implemented General Data Protection Regulation (GDPR). Earlier this year, British lawmakers released a report outlining how Facebook had been used to spread false news reports and disinformation during the 2016 Brexit referendum vote. Conservative MP Damian Collins, who led the inquiry, criticized the company's business model and its strategy of evading responsibility for user content as resembling the conduct of "digital gangsters."
 
In the political rush to counter new tech’s negative implications for democracy, less attention has gone to the bigger picture of how early investors shape the ethical outlook of startups and how such companies develop business models. This requires examining a given technology’s history and tracing seemingly unavoidable societal repercussions back through many contingent factors and decisions, right down to their roots. In the case of Facebook, could the company's initial investors—including Peter Thiel, who turned his $500,000 stake into a billion-dollar profit—have set up a management structure that scanned for the potential vulnerabilities and abuses inherent to the platform?
 
From social media to biotech and beyond, angel investors play a critical role early in a startup's life. It is estimated that more than 1.8 million tech startups are launched annually worldwide. Many of their founders are relatively young or unschooled in management, let alone public policy and international affairs. Most of these companies rely on the funds of angel investors and venture capitalists to shepherd them toward launch. These first investors often represent a young company’s most forward-thinking allies.
 
Too often, however, that foresight runs up against built-in limitations when it comes to evaluating the ethical dimensions of a business model and estimating the potential negative effects of a new technology on democratic processes and institutions. New technologies are rendering obsolete the customarily cavalier attitude of early stage investors toward the societal repercussions of the tech they finance. The old mantra of "move fast and break things" seems less a bold mission statement than a poor excuse for lacking foresight.
 
To better understand how the early financiers of new tech think about the ways their investments affect democracies and international relationships, we asked several investors active across multiple tech sectors in Asia, Europe, and North America to highlight some common criteria early funders use to inform their investments. To keep their responses candid, we agreed to leave them anonymous, but their perspectives shaped our conclusions concerning the ethical responsibilities of early stage tech investors.
 
Predictably, questions concerning a startup’s bottom line—including return on investment, the status of a minimum viable product, and the company’s projected customer base—dominate the initial phase of investor inquiry, regardless of national context. As one investor said, “I usually try to look at technologies that will be disruptive, high risk, and high return–something that has broad implications for pain points that people have not solved or know they have.”
 
Yet while these priorities are essential for effective investment, does the ethical outlook of the company also matter? Could early investors be considered partly responsible for any societal harm done by the companies they fund?
 
One of our interviewees offered a qualified yes: “From a legal standpoint, being just an investor, I’m not sure you’re exposed to [the ramifications of] business decisions made. Personally, though, I would feel responsible for this [negative outcome from the product or technology] and I would try to influence a change in company direction or exit ASAP.”

Another investor we interviewed saw an even clearer responsibility when an investment held potential for societal harm: “An investor like that sits on the board and has equity stake in the company. Of course, they are liable.”
 
In this sense, thinking proactively about the negative societal impacts of a technology from the start gives a company greater control over its future—itself a prized objective for early investors, as our interviews made apparent. An ethical vision informed by projections of possible technological misuse strengthens business planning, challenging founders to refine their products and clarify their strategic thinking. Further, an intimate understanding of the inner workings of the startup would protect investors themselves against mistakes, exaggerations, and outright fraud.
 
Promoting effective leadership during a company's formative years represents another way early investors can help startups avoid ethical pitfalls. Investors can shape the management structure of startups, facilitating ethical oversight through a structure of internal checks and balances.

 
One recommendation that emerged from our interviews is that the position of chairman should be more routinely split off from the role of CEO, particularly when a company founder takes the top executive role. As one of our interviewees observed, “The chairman, as independent from the executive manager, should at least provide better governance compared to an insider CEO and chairman, where it gets very hard for an independent nonexecutive director to know what is going on.”
 
The case for separating the roles of CEO and chairman seems all the more compelling given Napster cofounder and early Facebook president Sean Parker’s 2017 remarks on the addictive qualities within social media platforms: “It’s exactly the kind of thing that a hacker like myself would come up with, because you’re exploiting a vulnerability in human psychology. The inventors […] understood this consciously. And we did it anyway.” With such motives in mind, it’s only reasonable to build opportunities for ethical cross-checking and oversight into a startup from day one.
 
Admittedly, not every technology holds obvious nefarious potential. It may take time to forecast seemingly remote scenarios and to arrive at precise definitions of “harm” and “security.” Still, the work must be done. The process is essential for protecting democratic values and institutions against those who would exploit human vulnerabilities. Early stage investment need not be a one-off moment of contact; it can be formalized as active ethical guidance, promoting critical engagement and anticipatory thinking at every point in a company’s lifecycle.
 
As a start, early investors should ask founders three key questions before cutting the first check: To what extent might the company’s technology promote the spread of disinformation, hostility between groups, or violence? How does the company’s business model ensure that its users or consumers are informed, autonomous, and secure in their personal data? Finally, if a party hostile to a community, group, or nation accessed this technology, what harm might they realistically be able to inflict?
 
The founders of the next Facebook may well be making pitches to their first investors right now. It falls first to those investors to ensure that the trajectory of that new startup does not lead straight to a rendezvous with the next Cambridge Analytica. As ever, the global transformations heralded by tech startups will pose ethical dilemmas and underscore the question of who is responsible for designing safeguards against intended and unintended threats to democracies and the international system. Early stage investors everywhere owe it to citizens of democracies to assume their full share of that responsibility.

Sandra Goh, Weatherhead Center alumna, and Jack Loveridge, Associate, Weatherhead Center for International Affairs 

Sandra Goh, a 2018–2019 Fellow with the Scholars Program at the Weatherhead Center for International Affairs, is a regional director of customer experience at Microsoft Asia Pacific. Her research interests center on technology's role in shaping the economic and political standing of nations in Asia.

Jack Loveridge is an Associate at the Weatherhead Center for International Affairs. His research interests lie at the intersection of economic development and technology ethics, focusing on the role of agricultural science in South Asia’s Green Revolution.

Captions
 

1. We are not fake news political banner at a protest march. Credit: Shutterstock

2. Global activists from Avaaz set up cardboard cutouts of Facebook chief Mark Zuckerberg, on which is written "Fix Fakebook," in front of the European Union headquarters in Brussels on May 22, 2018, to call attention to what the group says are hundreds of millions of fake accounts still spreading disinformation on Facebook. Zuckerberg apologized to the European Parliament that day, pledging that the social media giant had learned hard lessons from a massive breach of users' personal data. Facebook admitted that up to 87 million users may have had their data hijacked by British consultancy Cambridge Analytica, which worked for US President Donald Trump during his 2016 campaign. Credit: JOHN THYS/AFP/Getty Images

3. Video: Cambridge Analytica whistleblower: Vote Leave 'cheating' may have swayed Brexit referendum. Credit: The Guardian