Amazon security chief warns AI rules could limit development

Tech in Asia·2025-06-11 17:00

Amazon’s chief security officer Steve Schmidt has voiced concerns regarding government regulations on AI.

He believes that such regulations could hinder progress in the field.

Schmidt argues that the industry should establish its own standards based on customer needs.

His views echo those of executives at Microsoft, OpenAI, AMD, and CoreWeave, who recently testified before Congress about the challenges associated with AI regulations.

OpenAI CEO Sam Altman supported “sensible regulation” that would not hinder the United States’ ability to compete with China in AI development.

🔗 Source: Bloomberg

🧠 Food for thought

1️⃣ The historical cycle of technology regulation repeats with AI

Amazon’s resistance to AI regulation follows a well-documented pattern in technological history where initial fear gives way to balanced oversight.

The printing press and steam locomotives faced similar opposition when first introduced, with critics warning of dire consequences before appropriate regulatory frameworks eventually took shape 1.

During the Industrial Revolution, regulations evolved gradually to address tangible problems like worker safety and monopolistic behavior rather than attempting to control the underlying technologies themselves 1.

This pattern suggests that effective AI regulation should focus on specific impacts (like privacy violations or discriminatory outcomes) rather than attempting to control the technology itself, which is rapidly evolving and difficult to define 2.

Current survey data reveals this tension continues today, with 72% of U.S. adults expressing concerns about AI’s impact on society despite industry resistance to oversight 3.

2️⃣ Global AI regulatory divergence creates competitive pressures

The tech industry’s arguments about competitiveness with China reflect real differences in international regulatory approaches that shape market advantages.

The EU has adopted a comprehensive, risk-based approach with its AI Act that imposes stringent requirements on high-risk applications, while China combines centralized guidance with local innovation to maintain both control and growth 4.

In contrast, the U.S. employs a more distributed approach without comprehensive federal legislation, relying on agencies to adapt existing laws to AI applications 5.

PricewaterhouseCoopers has projected that AI could add $15.7 trillion to the global economy by 2030, with China potentially seeing the largest gain, a boost of up to 26% of GDP, creating tangible economic incentives to minimize regulatory friction 6.

This regulatory divergence creates genuine competitive concerns for U.S. companies that may face different compliance requirements across markets, though critics argue this shouldn’t come at the expense of necessary safeguards 7.

3️⃣ Tech industry’s regulatory stance has reversed as AI commercializes

The position taken by Amazon’s CSO represents a significant shift from earlier industry calls for careful oversight to today’s resistance to regulation.

Tech leaders, including OpenAI’s Sam Altman, previously advocated for regulation due to concerns about AI’s potential risks but have now reversed course to favor deregulation as commercial applications have matured 8.

This pattern of initially acknowledging potential harms before later resisting oversight once commercial interests are established mirrors earlier technology cycles, where industry self-regulation is proposed as an alternative to government intervention 3.

The proposed 10-year moratorium on state AI regulations would effectively nullify existing and future state laws addressing AI harms, creating a regulatory vacuum at a time when AI applications are rapidly expanding into critical domains 9.

Critics argue this approach could undermine public trust in AI technologies, potentially triggering stronger regulatory backlash if documented harms accumulate without accountability mechanisms in place 3.

Read full article on Tech in Asia