By Parsa Tariq
In just a few years, artificial intelligence has moved from the pages of science fiction to the center of modern life. From hospitals to stock markets, algorithms are quietly deciding what we see, buy, and believe. As this invisible force reshapes the world, a simple but urgent question echoes across boardrooms and parliaments: Who controls AI?
Governments argue that AI must serve the public, not profit. Tech giants say innovation should not be slowed by political borders. Between those two visions lies a struggle over power, privacy, and control, one that could define the digital future.
The Rise of National AI Regulations
India: Data Sovereignty and Ethical AI
India’s AI journey is being built around responsibility. The Digital Personal Data Protection Act (DPDPA), passed in 2023, makes consent the cornerstone of data use. It allows cross-border data transfers except to countries blacklisted by the government and aims to give citizens more control over their personal information. In Tamil Nadu, the state government is taking a different kind of leap: it plans to share anonymized public data with startups to train AI models for governance and public service delivery. Every dataset, officials say, will be masked to protect privacy. It’s a small but symbolic step, one that reflects India’s belief that technology can be powerful and ethical at once.
India’s AI push is also scaling up financially. The IndiaAI Mission, approved in March 2024 with a budget of ₹10,300 crore (about $1.25 billion), aims to create a national computing grid and AI innovation hubs. It’s a state-led response to the dominance of global tech giants, a statement that India intends to build its own digital muscle.
China: State-Controlled AI Development
China’s approach to AI has always been tightly bound to state strategy. The New Generation Artificial Intelligence Development Plan (AIDP), unveiled in 2017, set a bold goal: to make China the global AI leader by 2030. The AI Plus initiative, launched in 2025, builds on that vision by embedding AI across manufacturing, education, and governance.
Unlike most nations, China doesn’t separate innovation from control. In 2025, it introduced new labeling rules requiring all AI-generated content to be clearly marked, a measure framed as a transparency safeguard but one that also extends surveillance oversight. At the same time, Beijing tightened rules on generative AI companies, mandating government approval before new models are released. The message is clear: AI will advance, but always under state supervision.
European Union: Risk-Based Regulation
Europe has taken a more cautious path. The AI Act, which came into force on August 1, 2024, is the first comprehensive attempt to regulate artificial intelligence across an entire region. Its bans on “prohibited” AI practices took effect on February 2, 2025, while rules for high-risk systems will apply from August 2026. The Act classifies AI by risk, setting clear obligations for developers, from transparency requirements for chatbots to mandatory human oversight for high-risk systems. It also introduces regulatory sandboxes where companies can test AI systems under supervision. Europe’s approach signals that innovation is welcome, but only when it aligns with democratic accountability.
United States: State-Level Initiatives
Across the Atlantic, the U.S. remains divided on how to regulate AI, leaving much of the responsibility to individual states. California has taken the lead. Its Automated Decision Making Technology (ADMT) regulations, adopted in July 2025 and approved two months later, will take effect in 2027. They’re designed to protect consumers from the harms of automated systems, from biased algorithms to opaque decision-making.
Utah, meanwhile, was the first state to pass an AI-specific consumer protection law. California went further with Assembly Bill 2013, requiring large AI developers to document their training data, conduct impact assessments, and embed safety measures. It’s a patchwork system, a reflection of America’s tech-driven ambition and fragmented politics.
Tech Giants Push for Global AI Operations
While lawmakers debate, Silicon Valley builds.
OpenAI recently began restricting accounts linked to the Chinese government that were using its models for surveillance, saying its tools would not support state-driven monitoring. Microsoft, OpenAI’s closest ally, continues to expand its global AI infrastructure, building massive data centers across the U.S., Europe, and Asia. The two companies deepened their partnership with a $10 billion deal in 2023, and OpenAI struck a separate agreement with AMD to secure six gigawatts of computing power using the new Instinct MI450 GPUs. It’s an expansion plan that dwarfs many governments’ entire AI budgets.
Google, meanwhile, is walking a delicate line. It complies with the EU’s AI Act while warning that “excessive caution” could slow down innovation. Despite those concerns, Google is spending over €25 million across European AI research centers in 2025, signaling that the company plans to shape, not simply follow, the global rulebook.
The Clash: National Sovereignty vs. Corporate Autonomy
Here lies the core of the conflict — two power structures colliding.
Governments want control; corporations want scale. India is demanding data localization, China is labeling AI outputs, and the EU is classifying algorithms by risk. Together, these policies are raising walls around how and where data flows.
Tech companies, however, depend on open data streams that cross continents. OpenAI’s GPT models are trained on vast global datasets. Microsoft runs AI servers across 60 countries. Google Cloud handles billions of data transfers every hour. Each new national law, each demand for local storage, each restriction on cross-border data chips away at their global reach.
In 2025, more than 30 countries introduced or updated AI-related laws, while Big Tech companies spent an estimated $200 billion collectively on AI infrastructure. The world is splitting into two systems: one governed by regulation, the other driven by scale.
The result? A tug-of-war that’s no longer abstract. When the EU restricts facial recognition, U.S. companies lose access to key markets. When India limits foreign data transfers, global AI training slows down. And when China bans foreign AI models from its platforms, it cuts off billions of potential users.
This is not just a policy standoff; it’s an economic and ideological battle over who gets to write the next chapter of intelligence itself.
The Future of AI Governance
If governments succeed, AI could become fragmented: a world of digital borders, where technology follows the rules of geography. But if corporations win, the opposite could happen: a borderless, corporate-led AI landscape where ethics struggle to keep pace.
The most likely future lies somewhere in between, where states and companies learn to share the steering wheel. Already, the OECD and G20 nations are discussing global AI safety standards, while the UN’s Advisory Body on AI is drafting proposals for international coordination.
The world has entered a new kind of diplomacy, one waged not over weapons or oil but over data and algorithms.
Conclusion
The question of who controls AI is no longer academic — it’s geopolitical. Nations want sovereignty. Companies want freedom. Each is building its own version of the digital future. In the end, control over AI may not belong to any one side. It may belong to whoever learns to balance power with purpose and builds systems that reflect not just intelligence, but intent.
Because what’s at stake is not only who runs AI — but who it will run for.