Who Controls the Data? An Economic Exploration of the Right to Consumer Privacy


By Sakshina Bhatt & Barrett Redmond

In his book Seeing Like a State, James Scott points out that “the roots of the word ‘statistics’ lie in the Latin word statisticus, which means ‘of state affairs.’” The word’s origin points to the inherent relationship between statistics, data, and political control that Yuval Noah Harari explores in his TED talk, Why Fascism is So Tempting. Warning that the greatest danger facing liberal democracy is information, Harari echoes Scott and points explicitly to the relationship between data and power, claiming that “the number one question that we face is: Who controls the data?” Control of data equates to power and influence, especially when reinforced by information asymmetry. Harari warns that if we, as consumers, do not control the data, we must find a way to ensure that we will not be manipulated by those who do.

In an effort to understand the tradeoffs between the protection and usage of an individual’s data in the United States, we can first look eastward to China for comparison. In the name of civil protection and “social stability,” China’s stringent control of social media has allowed the country to create systems that “comprehensively collect the identity of all internet users in public spaces, their internet behavior, their location, their movement, and identifying information about their phones.”1 China’s social credit system is the hallmark of a surveillance state, one in which each individual’s tradeoff between convenient access to services and data privacy is laid bare.2 Yet China is not alone in fostering the illusion that sharing data protects people’s safety. COVID-19 contact-tracing applications in India, South Korea, and Qatar were found to have major security flaws, were prone to breaches, and were used in ways that went well beyond tracing.

In contrast to China, the United States government leverages the existing data infrastructure built by Big Tech titans like Google and Facebook, companies with a long-standing history of data and privacy infringements. Even though the state’s position seems removed, evidence suggests otherwise; outsourcing state capacity and lobbying are channels through which the government widens its sphere of influence. In fact, just last year it was discovered that the Trump administration “bought access to a commercial database that maps the movements of millions of cellphones in America and is using it for immigration and border enforcement.”3 Big Tech’s funding and outreach activities on behalf of the state, carried out under the guise of advancing development, are not just illusory but dangerous; enormous control lies in the hands of profit-maximizing entities whose vested interests do not serve the public. Such Orwellian surveillance tactics were uncovered in one of the most popular Netflix documentaries of last year, The Social Dilemma. The documentary shed light on privacy issues, explaining that when users accept the terms and conditions to engage with various digital platforms, they are giving away their data to be used in ways they could never predict. In fact, one of the documentary’s interviewees claims, “if you are not paying for the product, you are the product,” referring to algorithms that are meant less to curate personalized feeds than to influence consumer choices. For reference, 98.29% of Facebook’s revenue in the second quarter of 2021 was generated via targeted advertising.4 Data is more valuable than the average individual might think; in economic terms, it is a limitless, nonrival good that does not deplete with usage. These nonrivalry characteristics lead to “increasing returns and imply an important role for market structure and property rights.”5
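To make the nonrivalry point concrete, consider a stylized production sketch. This is our own illustration with hypothetical functional forms, not the formulation in Jones and Tonetti’s paper: the Cobb-Douglas production function and the symbols D (shared data stock), L (labor per firm), and N (number of firms) are assumptions chosen for simplicity.

```latex
% Stylized sketch of nonrival data; all functional forms and symbols are illustrative.
% Because data is nonrival, each firm i can use the ENTIRE data stock D:
\[
  y_i = D^{\alpha} L^{1-\alpha}, \qquad 0 < \alpha < 1
\]
% Aggregate output across N identical firms:
\[
  Y_{\text{nonrival}} = \sum_{i=1}^{N} y_i = N\, D^{\alpha} L^{1-\alpha}
\]
% If data were rival, each firm could use only its slice D/N of the stock:
\[
  Y_{\text{rival}} = N \left(\frac{D}{N}\right)^{\alpha} L^{1-\alpha}
                   = N^{1-\alpha} D^{\alpha} L^{1-\alpha} \;<\; Y_{\text{nonrival}}
\]
```

In this sketch, doubling the number of firms, the labor force, and the shared data stock quadruples nonrival output rather than merely doubling it; that is the kind of increasing returns the quotation above refers to, and it is why the same data is worth more the more widely it can be used.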

So, to echo Harari: who controls the data? In a recent study on the economics of data, two economics professors at the Stanford Graduate School of Business, Charles Jones and Christopher Tonetti, examine the question of data ownership and the restrictions surrounding data usage. To understand the relationship between ownership and regulation, Jones and Tonetti modeled three data ownership scenarios and examined the outcome of each: 1) companies own data, 2) people own data, or 3) data sharing is outlawed. In the first scenario, which most closely reflects today’s reality, Jones and Tonetti found that “companies neither respected consumer privacy as much as consumers wanted, nor shared effectively with other companies.”5 As evidenced above and seen in the world today, company ownership of data leads to an unequal allocation of resources. When Jones and Tonetti instead gave individuals ownership over their own data, outcomes were close to optimal; they found that although consumers value their privacy, they also value consumption efficiency. When individual consumers own their data, they “preserved the data they wanted private, but sold other data to many different firms, capitalizing on the value inherent in sharing nonrival data widely.”5 In scenario three, interestingly, the researchers found that “failing to share data ultimately stifled economic growth.”5 This insight points to the need for balanced data regulation, in which legislation considers “not only issues of privacy, but also the long-term health of the economy.”5
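The intuition behind these three scenarios can also be illustrated with a toy numerical sketch, shown below in Python. It is a deliberate simplification of our own, not the authors’ model: the benefit and privacy-cost functions, and the parameters ALPHA and BETA, are hypothetical, chosen only to reproduce the ordering of outcomes the study describes.

```python
# Toy illustration of the three data-ownership scenarios (a simplification,
# not the Jones-Tonetti model). "x" is the fraction of a consumer's data shared.

import numpy as np

ALPHA = 1.0   # hypothetical scale of the economic benefit from sharing data
BETA = 0.6    # hypothetical scale of the consumer's privacy cost

def benefit(x):
    """Value created by sharing: concave, so extra sharing helps less and less."""
    return ALPHA * np.sqrt(x)

def privacy_cost(x):
    """Consumer's disutility from sharing: convex, so heavy sharing hurts a lot."""
    return BETA * x ** 2

def welfare(x):
    """Consumer welfare at sharing level x: benefit net of privacy cost."""
    return benefit(x) - privacy_cost(x)

grid = np.linspace(0.0, 1.0, 10_001)

# Scenario 1: firms own the data. They monetize the benefit but do not bear the
# privacy cost, so they push sharing as high as possible.
x_firm = grid[np.argmax(benefit(grid))]          # -> shares everything (x = 1)

# Scenario 2: consumers own the data and weigh benefit against privacy cost.
x_consumer = grid[np.argmax(welfare(grid))]      # -> interior optimum

# Scenario 3: data sharing is outlawed.
x_banned = 0.0

for label, x in [("Firms own data", x_firm),
                 ("Consumers own data", x_consumer),
                 ("Sharing outlawed", x_banned)]:
    print(f"{label:>20}: share {x:.2f} of data, welfare {welfare(x):+.3f}")
```

Running the script shows the firm-ownership regime sharing everything and ignoring privacy, the consumer-ownership regime settling at an intermediate level of sharing with the highest welfare, and the ban generating no value at all, mirroring the ranking Jones and Tonetti report.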

So how do we balance concerns over privacy, competition, and efficiency when designing a market for data? The most straightforward solution might seem to be data democratization: making all data available in the public domain to erase Big Tech’s edge in data possession. Yet doing so would further infringe on privacy and undermine the integrity of the data. Given Jones and Tonetti’s insights, we return to the cornerstone of economics to understand the answer: trade-offs.

In the field of economics, trade-offs are paramount; extremes in any direction are inefficient, and an equilibrium is always sought. Solving the issue of privacy, therefore, involves “finding a balance between information sharing and information hiding that is in the best interest of data subjects but also of society as a whole.”5 The ability of consumers to both own their data and sell it to interested parties could strike that balance; when consumers hold basic property rights, their privacy is protected to their preferred extent, yet valuable data can still contribute to economic growth and innovation, resulting in outcomes closer to the social optimum. Through the creation of national information markets, in which individuals decide how much of their data may be shared, privacy protection could be co-regulated. Though this presents the massive challenge of establishing property rights over an intangible asset like data, intellectual property rights show that it can be done. Jones and Tonetti note that personal control of data and information is important not only in protecting personal privacy, but also in making “the best use of ‘non-rival’ data to increase productivity and overall economic well-being.”5 Even Mark Zuckerberg claimed in testimony before the Senate Judiciary Committee, “I think everyone should have control over how their information is used.”6

Zuckerberg would be happy to know, then, that the California Consumer Privacy Act of 2018 (CCPA), which went into effect in January 2020, took the first legal steps toward defining property rights over an individual’s data. The California state law “gives consumers more control over the personal information that businesses collect about them”7 and secures new privacy rights for those consumers: the right to know what personal information a business collects about them and how it is used and shared; the right to delete personal information collected from them (with some exceptions); the right to opt out of the sale of their personal information; and the right to non-discrimination for exercising their CCPA rights. Not long after, Californians passed a November 2020 ballot measure, the California Privacy Rights Act (CPRA), which further strengthened the consumer control over data outlined in the CCPA. The CPRA, which will supersede the CCPA come January 2023, expands consumer rights to cover not only the sale of personal data but also the sharing of personal information.

The rights defined in the CCPA and strengthened in the CPRA reduce the information asymmetry between big data platforms and individuals, and clarify the obligations of participants in data markets. Beyond the CPRA, California Governor Gavin Newsom has ambitiously proposed a “data dividend,” which would give consumers bargaining power by allowing them to “share in the billions of dollars made by technology companies in the most populous U.S. state.”8 Gov. Newsom argues that the Big Tech companies that are “collecting, curating, and monetizing our personal data have a duty to protect it. Consumers have a right to know and control how their data is being used.”9 Though we are still far from knowing what constitutes reasonable data usage, or what externalities a data tax might create, a “data dividend” could serve both as a way to put a financial value on information and as an opportunity for individuals to share in the wealth generated from their own personal information.
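Mechanically, a data dividend would amount to a per-user rebate on data-driven revenue. The sketch below shows the basic arithmetic such a policy would involve; the revenue, user, and rate figures are entirely hypothetical placeholders, since Newsom’s proposal has not specified any.

```python
# Hypothetical back-of-the-envelope arithmetic for a "data dividend".
# All figures below are placeholders, not values from any actual proposal.

def data_dividend_per_user(ad_revenue: float, users: float, dividend_rate: float) -> float:
    """Return the annual dividend each user would receive.

    ad_revenue    -- revenue a platform earns from targeted advertising (USD/year)
    users         -- number of users whose data generates that revenue
    dividend_rate -- share of data-driven revenue returned to users (0 to 1)
    """
    revenue_per_user = ad_revenue / users      # implied value of one user's data
    return dividend_rate * revenue_per_user    # portion rebated to that user

# Example with made-up numbers: $100B in annual ad revenue, 2.5B users, 5% rebate.
print(f"${data_dividend_per_user(100e9, 2.5e9, 0.05):.2f} per user per year")
```

Even this crude arithmetic makes the policy question concrete: the dividend’s size depends entirely on how the value of a user’s data is measured and on what share of it regulators decide should flow back to the individual.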

So, who controls the data? Granting consumers property rights to their own data might answer Harari’s question. In a separate interview with Chris Anderson, Harari notes that the importance of any one individual’s data will decrease drastically as more data is amassed, because sufficient data will eventually enable higher-precision human “hacking”: the ability to understand people better than they understand themselves and to anticipate their actions in advance.10 Data from a single user is representative not just of one person, but of a whole set of people who share the same characteristics and lifestyle patterns.11 Yet this does not imply that disclosing firms’ knowledge to everyone would benefit consumers. The Arrow information paradox, which holds that buyers cannot value information until it is disclosed, yet once it is disclosed it has effectively been given away, explains why: if information spills over to consumers and is available to all for free, it generates little economic value.

Full disclosure would therefore not lead to socially optimal welfare, efficient resource allocation, or even innovation, since there would be no buyers for data known to all. Instead, property rights should be given to consumers so that their privacy is protected to the extent they prefer, while valuable consumer data can still contribute to economic growth and innovation. Through a co-regulatory combination of defined property rights, data markets, regulations such as the CPRA, and Newsom’s “data dividends,” consumers would be protected and the information asymmetry reduced, ensuring individuals are not “manipulated” by those who control the data. Such protections could also tilt the information market away from monopolistic structures toward more competitive ones, in which no firm holds a significant data advantage over another. Though data collection moves faster than our legal system, we must ensure that individuals’ rights are protected before data becomes our government, before we are seduced by the illusion of short-term profits and fall prey to data breaches and algorithmic bias. Treating privacy as a right rather than a commodity will likely lead us down the right path.


Sakshina Bhatt is a second-year Master’s in Development Practice (MDP) student at the Goldman School of Public Policy. Her interests lie in the responsible use of technology & big data in development, as well as impact measurement. Before joining GSPP, Sakshina worked in monitoring & evaluation as a field researcher at the World Bank (rural livelihoods), Tata Center of Development at UChicago (low-income housing), and J-PAL (labor market friction). 

Barrett Redmond is a second-year Master’s in Development Practice (MDP) student at the Goldman School of Public Policy. Interested in the intersection of development, gender, and business, Barrett’s research centers around women’s financial inclusion and economic empowerment. Prior to the MDP, Barrett served as a Community Economic Development Peace Corps volunteer for two years in Paraguay, South America.

The views expressed in this article do not necessarily represent those of the Berkeley Public Policy Journal, the Goldman School of Public Policy, or UC Berkeley.


  1. Creemers, Rogier, China’s Social Credit System: An Evolving Practice of Control, Leiden University, May 22, 2018
  2. Mozur, Paul, Krolik, Aaron, A Surveillance Net Blankets China’s Cities, Giving Police Vast Powers, The New York Times, December 17, 2019
  3. Tau, Byron, Federal Agencies Use Cellphone Location Data for Immigration Enforcement, WSJ, February 7, 2020
  4. Facebook Investor Relations, Facebook Reports Second Quarter 2021 Results, July 28, 2021
  5. Jones, Charles I., Tonetti, Christopher, Nonrivalry and the Economics of Data, Stanford Graduate School of Business
  6. Walsh, Dylan, How Much Is Your Private Data Worth — and Who Should Own It?, Stanford Graduate School of Business, September 19, 2018
  7. State of California – Department of Justice – Office of the Attorney General, California Consumer Privacy Act (CCPA), October 15, 2018
  8. Cowan, Jill, How Much Is Your Data Worth?, The New York Times, March 25, 2019
  9. Mehrotra, Kartikay, California Governor Proposes Digital Dividend Aimed at Big Tech, Bloomberg News, February 12, 2019
  10. The TED Interview: Yuval Noah Harari reveals the real dangers ahead, Apple Podcasts, July 16, 2019
  11. Bergemann, Dirk, Bonatti, Alessandro, Gan, Tan, The Economics of Social Data, Cowles Foundation Discussion Paper No. 2171, Yale University, March 26, 2019