
AI companies increased federal lobbying in 2024 amid regulatory uncertainty

AI companies increased their U.S. federal lobbying presence by roughly 41% in 2024 amid regulatory uncertainty, pushing for key legislative changes.

Artificial intelligence (AI) companies significantly ramped up their spending on federal lobbying in 2024 as regulatory uncertainty loomed in the United States. According to data from OpenSecrets, the number of companies lobbying on AI jumped to 648 last year, up from 458 in 2023, a roughly 41% year-over-year increase that highlights the growing focus on influencing AI-related legislation.
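The year-over-year figure follows directly from the two OpenSecrets counts; a quick illustrative calculation (not part of the original report):

```python
# Growth in the number of companies lobbying on AI,
# using the OpenSecrets counts quoted above.
companies_2023 = 458
companies_2024 = 648

yoy_increase = (companies_2024 - companies_2023) / companies_2023 * 100
print(f"Year-over-year increase: {yoy_increase:.1f}%")  # about 41.5%
```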

Key players push for legislative support

Leading tech firms such as Microsoft backed initiatives like the CREATE AI Act, which aims to support the benchmarking of AI systems developed in the U.S. Meanwhile, companies such as OpenAI threw their weight behind the Advancement and Reliability Act, which advocates establishing a dedicated government centre for AI research.

Notably, specialised AI labs (businesses focused almost entirely on developing and commercialising AI technology) were among the most active in lobbying efforts. OpenAI increased its spending from US$260,000 in 2023 to US$1.76 million in 2024. Its rival Anthropic more than doubled its lobbying budget, rising from US$280,000 to US$720,000. AI startup Cohere also significantly boosted its expenditure, spending US$230,000 in 2024 compared to just US$70,000 two years earlier.

Together, OpenAI, Anthropic, and Cohere spent US$2.71 million on federal lobbying last year, a dramatic rise from the US$610,000 they spent in 2023. While this figure is small compared with the US$61.5 million the broader tech industry allocated to lobbying over the same period, it signals a growing urgency among AI companies to influence regulation.
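The combined figure can be reproduced from the per-company numbers above; a small sanity check (the dictionary simply restates the article's figures):

```python
# Reported 2024 federal lobbying spend (US$) for the three AI labs.
spend_2024 = {"OpenAI": 1_760_000, "Anthropic": 720_000, "Cohere": 230_000}

total = sum(spend_2024.values())
print(f"Combined 2024 spend: US${total:,}")  # US$2,710,000

# Share of the broader tech industry's US$61.5 million lobbying outlay.
share = total / 61_500_000 * 100
print(f"Share of tech-industry total: {share:.1f}%")  # about 4.4%
```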

To strengthen their lobbying efforts, OpenAI and Anthropic brought in seasoned professionals to engage with policymakers. OpenAI hired Chris Lehane, a political veteran, as its Vice President of Policy, while Anthropic appointed Rachel Appleton, formerly of the Department of Justice, as its first in-house lobbyist. These moves reflect the industry's determination to play an active role in shaping AI governance.

Despite these efforts, the regulatory landscape remains turbulent. According to the Brennan Center for Justice, U.S. lawmakers considered more than 90 AI-related bills in Congress during the first half of 2024 alone, while more than 700 AI-related bills were introduced at the state level.

Progress in states but federal action lags

While Congress struggled to make significant progress on AI legislation, state governments stepped in. Tennessee became the first state to protect voice artists from unauthorised AI cloning, and Colorado adopted a tiered, risk-based approach to regulating AI. Meanwhile, California Governor Gavin Newsom signed several AI-related safety bills, requiring companies to disclose details about how their AI models are trained.

However, not all proposed measures were successful. Governor Newsom vetoed SB 1047, a bill that sought to enforce stricter safety and transparency standards on AI developers, citing concerns over its broad scope. Similarly, the Texas Responsible AI Governance Act (TRAIGA) faces an uncertain future as it moves through the legislative process.

The U.S. still lags behind global counterparts such as the European Union, which has already introduced a comprehensive framework in the EU AI Act.

Federal challenges and industry outlook

The federal government's approach to AI regulation remains unclear. Since his return to office, President Donald Trump has shown a preference for deregulating the industry, revoking several Biden-era policies aimed at reducing AI risks. In January, Trump signed an executive order suspending certain AI-related regulations, including export rules on advanced models.

Despite the lack of consensus at the federal level, companies continue to push for targeted regulation. In November, Anthropic called for swift legislative action within the next 18 months, warning that the window for proactive risk management was closing. OpenAI voiced similar concerns in a recent policy document, urging the government to provide clearer guidance and infrastructure support to foster responsible AI development.

As the regulatory debate continues, the stakes for the AI industry remain high. Without a cohesive strategy, the U.S. risks falling behind international competitors while struggling to address the potential risks posed by this transformative technology.
