We Can’t Let Silicon Valley Regulate Itself

By Nandita Sampath

Last November, The New York Times published a damning report on Facebook once again identifying the company’s alleged misdeeds. The report claimed that Mark Zuckerberg and Sheryl Sandberg cared more about the company’s image than about substantively investigating Russian interference in the 2016 election, and that they were too slow to act on and disclose this information once independent news organizations had confirmed it. Even now, Facebook continues to host hate speech and propaganda, which many claim has led to the radicalization and recruitment of some users into extremist groups and the further polarization of our already divided nation. It has even sowed the seeds of genocide in countries like Myanmar.

Although we are seeing increased criticism of the company from both the media and government officials, it is unlikely that Facebook will drastically change the way it operates without statutory regulation. The platform’s business model has been working in its favor — Facebook is one of the richest companies in the world, and its executives are immeasurably wealthy. So far, there has been no reason for the company to change its behavior. Repercussions have been minimal: a few exposés about the company and a disastrous Senate hearing last year, none of which led to punitive measures.

My brief experience as an engineer in the technology industry taught me that engineers often work on problems simply because they are technically interesting, rather than considering the long-term effects of what they’re building. And, as things stand, we shouldn’t expect them to; considering how lines of code affect broader societal systems isn’t part of the typical computer science curriculum (although it should be). Silicon Valley has long managed to avoid major scrutiny and regulation because of the aura surrounding the industry, reflected in companies’ stated missions: slogans like “bringing the world closer together” exemplify how these companies want us to see them. It is true that the technology sector has contributed and will continue to contribute to human health, education, and ease of communication between people in all corners of the world. But there is an underbelly to the industry we have overlooked for far too long, because we assume engineers and company executives, many of whom have celebrity status, know what they’re doing.

Scandals from Cambridge Analytica to the spread of fake news have revealed that it is a mistake to neglect this industry’s ugly underside. The days when “but we’re connecting people” was a good enough excuse for Mark Zuckerberg to avoid accountability for Facebook’s wrongdoings are long over. And it goes beyond Facebook: Twitter, Google, and Amazon, among others, have all received criticism for issues like privacy breaches, fake news, and monopolistic behavior — or a combination thereof. So far, primarily due to the lack of regulation in this industry, there has been little incentive for these companies to amend their harmful ways.

It is clear we cannot trust the tech industry to regulate itself. These companies have shown that both their leadership and business models are problematic. Our government needs to begin passing legislation at the federal level to rein in tech companies. The European Union (EU) has begun to crack down on these giant corporations using both privacy and antitrust mandates. If we care about protecting our citizens from the harm technology companies can do, we must take tech culture off its pedestal and, at a minimum, start implementing policies similar to the EU’s.

The General Data Protection Regulation (GDPR) is the primary law in the EU regulating how companies must protect private data. Its requirements include providing data breach notifications and anonymizing data to protect privacy; it even goes as far as mandating that certain companies hire an official to oversee GDPR compliance. Companies must follow the requirements set by the GDPR or face harsh penalties and fines.

Lawmakers in the U.S. are beginning to address privacy issues, but not quickly enough. In June 2018, California passed a digital privacy law that gives consumers more insight into and control over their personal data online. This law is a step in the right direction, but we need to do more: it is not as expansive as the GDPR, and, more importantly, this type of legislation needs to be passed at the federal level to protect as many consumers as possible. Although Democrats generally receive large donations from the technology sector and are therefore less likely to crack down on those companies (though there has been increased skepticism of the industry lately), user privacy should be treated as a bipartisan issue.

Some advocates would go even further and argue that individuals within companies must be held accountable if major privacy violations occur. So far, these companies are receiving fines that amount to no more than a drop in the bucket compared to the massive revenue they generate. If company leaders were personally liable for fines (and possible imprisonment, depending on the situation), they would be more likely to act ethically. In fact, Senator Ron Wyden (D-OR) has proposed a bill that would fine executives for violating privacy standards and mandate prison sentences of up to 20 years for executives whose statements in required annual privacy reports fail to meet those standards. While this bill is unlikely to pass because of lobbying by technology companies, thinking through how to hold executives accountable is a worthwhile project.

The EU has also taken a big-stick approach to companies with monopolistic tendencies, slapping Google with a $5 billion fine over its mobile operating system. In the U.S., antitrust scholars and the media have started putting Amazon under scrutiny for its problematic behavior around antitrust issues. Amazon engages in a variety of classic monopolistic behaviors, from predatory pricing to vertical integration, alongside poor working conditions for its warehouse workers. Small businesses are often forced to sell their products on Amazon because of the visibility; when Amazon’s analytics show that a product from a third-party seller is doing extremely well, the company makes a similar version of the product and sells it at a lower cost. Because current U.S. antitrust law focuses only on consumer harm and Amazon keeps its prices low, it will be difficult to bring a lawsuit against the company. We must update antitrust policy to keep up with the way tech companies operate and address consumer harm in the long run, rather than continue the current myopic approach.

Addressing both privacy and antitrust issues is only the first step. The spread of fake news, hate speech, and propaganda is also rampant, propagated in part by companies with advertisement-based revenue models. However, social media companies heavily censoring their platforms could set a dangerous precedent for the future of information on the internet. And while the First Amendment prohibits the government from banning hate speech outright (though private companies do have the right to censor, since the First Amendment applies only to the public sector), there may be other ways to combat these issues circuitously. For example, the government could perhaps regulate political advertising on these platforms through the Federal Election Commission. Possible approaches include restricting who can advertise, mandating that platforms make advertising more transparent, and requiring a better verification system when a user or advertiser creates an account.

It’s possible that implementing these kinds of policies will make it infeasible for social media and other technology companies to operate as they currently do. But this may be for the better, since Facebook’s and Twitter’s business models are fundamentally problematic at their core — they generate revenue based on how long users remain on the site and how widely they disseminate information, whether that information is accurate or not. The fact that bad actors can exploit these platforms to spread propaganda is not a failure of these models but a consequence of their design. In my opinion, it will be next to impossible to ever completely eliminate propaganda and hate speech as long as these companies employ an advertisement-based revenue model. They will continue to struggle to identify and remove hate speech, and new fake accounts spreading propaganda will pop up every day. This propaganda has not only influenced the way we voted in the 2016 election but continues to influence voters in elections around the world. If and when our federal government decides to regulate these companies, it might be useful for legislators to consider how these business models impact society and our democratic institutions. Because, as even Apple CEO Tim Cook admits, the free market “is not working.”

Of course, we’ve all benefited from Silicon Valley. Enabling greater access to information, ideas, and people is a worthy cause, and it is one that has unquestionably improved quality of life around the world. However, in the handful of companies I’ve worked for, I have never once heard of or been part of a conversation about the potential long-term consequences of what we were building, or about how our products already on the market had affected society or human behavior. This problem isn’t limited to a few companies; it is part of the Silicon Valley culture I’ve experienced my whole life growing up here. The industry’s lack of both foresight and self-reflection has led to virtually all of the problems we’ve been seeing in recent news. Back when Zuckerberg started Facebook in 2004, no one expected a platform for connecting college students could someday perpetuate political extremism and genocide (as some commentators have argued Facebook contributed to in Myanmar). Once Zuckerberg realized he had a winning idea, Facebook focused on revenue and gave little thought to the implications of its technology. The company’s strategy has been, and still seems to be, to get as many people as possible to sign up at all costs. Even after the massive damage that has been done, Facebook seems to care only about log-ins, advertisers, and its bottom line. While this transparent greed isn’t necessarily illegal in itself, the results of Facebook’s ruthlessness are blatant and egregious, and our government has given the company little reason to stop. We must start treating the technology industry like any other industry — and that means being aware of both the pros and cons that innovation brings and regulating in an appropriate, and proactive, manner.


Nandita Sampath is a Master of Public Policy candidate at the Goldman School of Public Policy and a Senior Editor of the Berkeley Public Policy Journal.