Who Controls AI? How Access and Competition Are Shaping the Future of Artificial Intelligence
This article was initialized by a human, created by AI, updated by a human, copy edited by AI, final copy edited by a human, and posted by a human. For human benefit. Enjoy!
Artificial intelligence (AI) is transforming how we interact with the digital world, from chatbots to advanced image generators, fundamentally reshaping our online experiences. However, these developments also raise critical concerns: Who controls the technology driving these AI systems, and how can we ensure that everyone—not just Big Tech—has equitable access to these powerful tools?
To address these issues, Mozilla commissioned two key pieces of research: “External Researcher Access to Closed Foundation Models” (produced by data rights agency AWO) and “Stopping Big Tech From Becoming Big AI” (by the Open Markets Institute). Both reports examine how AI is being developed, who holds the reins, and what needs to change to create a fairer, more open AI ecosystem.
Why Researcher Access Matters
The report “External Researcher Access to Closed Foundation Models,” authored by Esme Harrington and Dr. Mathias Vermeulen, highlights a critical issue: independent researchers lack adequate access to the AI models developed by large corporations. Foundation models, which form the backbone of most AI applications, are controlled by a small number of companies that dictate who can study or use them.
Key Problems with Access:
- Limited Access: Big Tech companies like OpenAI and Google act as gatekeepers, granting access primarily to researchers whose work aligns with their own priorities. This leaves independent, public-interest researchers excluded from the conversation.
- High Costs: When access is granted, it often comes with high financial costs, effectively barring smaller or underfunded research teams from participating.
- Lack of Transparency: These companies often do not disclose details about how their models are trained or updated, making it difficult for researchers to replicate studies or fully understand the technology.
- Legal Risks: Independent researchers face potential legal threats if their work uncovers vulnerabilities in AI systems, discouraging them from conducting critical research.
The report recommends that tech companies make access more affordable and transparent, and that governments introduce legal protections for researchers. Together, these changes would allow for more independent scrutiny of AI systems and foster a more open and ethical AI ecosystem.
AI Competition: Is Big Tech Stifling Innovation?
The second report, “Stopping Big Tech From Becoming Big AI”, written by Max von Thun and Daniel Hanley, focuses on how Big Tech’s growing dominance in AI threatens innovation and competition. A small number of tech giants—Microsoft, Google, Amazon, Meta, and Apple—control key resources like data, computing power, and cloud infrastructure, all essential for developing AI technologies.
What’s Happening in the AI Market:
- Market Concentration: A few major players dominate the critical inputs required to build AI, giving them disproportionate control over the AI value chain.
- Anticompetitive Practices: Large companies acquire smaller AI startups or form strategic alliances, which often sidestep traditional competition regulations. This prevents smaller firms from effectively competing or innovating.
- Gatekeeper Power: Big Tech's control over essential infrastructure, like cloud services and app stores, allows them to set terms that favor their products. They can charge high fees or prioritize their own AI offerings over those from competitors.
The report calls for robust government intervention to prevent AI from becoming monopolized the way digital markets have been over the past two decades. Policymakers need to enforce competition rules and ensure that resources like computing power and data are accessible to all, not just Big Tech.
Why This Matters
AI has the potential to revolutionize many aspects of society, from healthcare to education, but only if it is developed in a way that is open, fair, and accountable. Mozilla argues that the future of AI should not be controlled by a few powerful corporations. Instead, the ecosystem must be diverse, with innovation driven by competition and public-interest research.
The findings from these reports emphasize the need for change. Enhancing access for researchers and addressing the growing concentration of AI power can help create an environment where AI benefits everyone—not just tech giants. Mozilla remains committed to advocating for a more transparent and competitive AI landscape, and this research represents a critical step in achieving that vision.
Interested in the latest updates on AI technology? Follow us on Facebook and join our group to leave your comments and share your thoughts on this exciting topic!