Is it Time to Regulate Tech?

Neal Sivadas
4 min read · Feb 18, 2023


“Do you commit to shutting down Finsta?”

“How do you sustain a business model for a free service?” “Senator, we run ads.”

If you’ve ever watched a Senate hearing with tech leaders, you’ve probably seen the cringeworthy questions our elected officials ask. When the average age of a US senator is 64, it’s no wonder they’re out of touch with today’s tech-driven reality.

From social media to search engines to artificial intelligence to virtual reality, so much of today’s technology didn’t even exist 25 years ago. And with it has come very little regulation.

But as these technologies expand and take over, how important is regulation?

Is there a way to regulate these companies without undermining the benefits these technologies bring to our society? Or is there even a way for these tech companies to regulate themselves?

· · ·

The case for government regulation

The one notable piece of government regulation came in 1996: Section 230 of the Communications Decency Act. It states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” This means that social media companies like Twitter and Facebook are not liable for third-party content posted on their platforms.

However, technology like artificial intelligence can have a significant impact on our lives. We can use AI to decide how long prison sentences should be. We can use AI for self-driving cars. We can use AI to automate a large portion of jobs. AI can destroy or change lives in an instant.

A similar case is the development of the automobile: a world-changing technology, but one that could also cause injury or even death. The government established the Federal Motor Vehicle Safety Standards in 1968 to protect drivers from poor design, construction, or performance of vehicles.

By most estimates, this government-driven regulation has saved tens of thousands of lives.

With the impact of artificial intelligence and other technologies on our society, proper and coherent regulation can protect users and build a better society.

The case for self-regulation

Self-regulation sounds silly. How can someone regulate themselves? It’s like telling the batter to call their own balls and strikes at the plate.

But it’s been done before and can be done again, usually through coalition-style work. Forty to fifty years ago, entertainment companies faced the same dilemma over what content to moderate, and they acted on it by setting precedents together.

For example, TV networks agreed to curb alcohol and tobacco ads, and video game companies established a rating system for profane content. Social media companies have already come together to ban terrorist content from their platforms.

And arguably, they can set standards better than any government could.

If the government had regulated these industries, it would have needed industry experts, back-and-forth deliberation, and consensus on what should be deemed good versus bad.

But if the leaders drafting this legislation can’t comprehend an ad-revenue-driven business model or understand that Instagram does not “own” Finsta, imagine the unintentional or misguided legislation that could be written.

Understanding the bounds of artificial intelligence and the metaverse is nowhere near as simple as deciding whether a piece of content is appropriate.

· · ·

And so unsurprisingly, in my opinion, it’s going to take a mix of government efforts and self-regulation to move the needle here.

First and foremost, our leaders need to better understand the technology ecosystem. Among the million and one issues affecting our society, technological development should be a high priority. That means not just inviting tech leaders to testify, but bringing in technologists from inside and outside these companies to educate lawmakers on the benefits and drawbacks of these technologies.

Second, more government scrutiny can help. For example, recent discussions about overturning Section 230 have spurred tech companies to take more action, such as establishing product teams dedicated to well-being and forming coalitions to tackle online safety.

As we continue to build more powerful technologies that have a global long-term impact, we need to work together to make sure they’re safe and effective for all.

*Thanks for reading and following the Tech as a Tool Project. This content is part of the 6-week LinkedIn Accelerator incubator program, where I tackle our society’s complex relationship with technology.

Originally published at https://www.linkedin.com.


Written by Neal Sivadas

LinkedIn Top Voice | PMM @ TikTok | Gen Z Marketer + Blogger
