Trump unveils AI plan that aims to clamp down on regulations and 'bias'

Former President Donald Trump has introduced a new artificial intelligence initiative that places a strong emphasis on limiting federal regulations and addressing what he describes as political bias within AI systems. As the use of artificial intelligence rapidly expands across sectors including healthcare, national security, and consumer technology, Trump's approach signals a departure from broader bipartisan and international efforts to impose tighter oversight on the evolving technology.

Trump's newest proposal, a central plank of his 2024 campaign platform, portrays AI in two lights: as a catalyst for American innovation and as a possible danger to free expression. At the core of his plan is the notion that government involvement in AI development should be limited, with an emphasis on cutting regulations that, in his view, could obstruct innovation or enable ideological domination by federal agencies or influential technology firms.

While other political leaders and regulatory bodies around the world are developing frameworks aimed at ensuring the safety, transparency, and ethical use of artificial intelligence (AI), Trump is presenting his strategy as a corrective to what he considers growing political interference in the development and use of these technologies.

At the core of Trump’s AI strategy is a sweeping call to reduce what he considers bureaucratic overreach. He proposes that federal agencies be restricted from using AI in ways that could influence public opinion, political discourse, or policy enforcement in partisan directions. He argues that AI systems, particularly those used in areas like content moderation and surveillance, can be manipulated to suppress viewpoints, especially those associated with conservative voices.

Trump's plan states that any use of AI by federal authorities must be reviewed to ensure impartiality, and that no system should be allowed to make decisions with political consequences without direct human oversight. This viewpoint is consistent with his persistent criticism of government agencies and major tech companies, which he has often alleged lean toward left-wing beliefs.

His strategy also involves establishing a team to oversee the deployment of AI in government operations and recommend measures to prevent what he describes as "algorithmic censorship." The plan suggests that systems used to identify false information, hate speech, or unsuitable material could be misused against individuals or groups, and should therefore be closely monitored, not to restrict their use but to ensure their impartiality.

Trump’s AI platform also zeroes in on perceived biases embedded within algorithms. He claims that many AI models, particularly those developed by major tech firms, have inherent political leanings shaped by the data they are trained on and the priorities of the organizations behind them.

While researchers in the AI community do acknowledge the risks of bias in large language models and recommendation systems, Trump’s approach emphasizes the potential for these biases to be used intentionally rather than inadvertently. He proposes mechanisms to audit and expose such systems, pushing for transparency around how they are trained, what data they rely on, and how outputs may differ based on political or ideological context.

His proposal does not outline specific technical methods for identifying or reducing bias; however, he suggests creating an independent body to evaluate AI tools used in sectors such as law enforcement, immigration, and digital communication. He emphasizes that the aim is to guarantee that these tools remain "unaffected by political influence."

Beyond concerns over bias and regulation, Trump’s plan seeks to secure American dominance in the AI race. He criticizes current strategies that, in his view, burden developers with “excessive red tape” while foreign rivals—particularly China—accelerate their advancements in AI technologies with state support.

In response to this situation, he suggests offering tax incentives and loosening regulations for businesses focusing on AI development in the United States. Additionally, he advocates for increased financial support for collaborations between the public sector and private companies. These strategies aim to strengthen innovation at home and lessen dependence on overseas technology networks.

On national security, Trump's proposal is short on detail, though it acknowledges the dual-use nature of AI technologies. It calls for tighter controls on the export of critical AI tools and intellectual property, particularly to nations viewed as strategic competitors. However, it does not explain how such restrictions would be enforced without hampering global research collaborations or trade.

Notably, Trump’s AI framework makes limited mention of data privacy, a concern that has become central to many other proposals in the U.S. and abroad. While he acknowledges the importance of protecting Americans’ personal information, the emphasis remains primarily on curbing what he views as ideological exploitation rather than the broader implications of AI-enabled surveillance or data misuse.

This absence has drawn criticism from privacy advocates, who argue that AI systems—particularly those used in advertising, law enforcement, and public services—can pose serious risks if deployed without adequate data protections in place. Trump’s critics say his plan prioritizes political grievances over holistic governance of a transformative technology.

Trump's approach to AI policy stands in sharp contrast to new legislative efforts in Europe. The EU is advancing the AI Act, which classifies systems by risk level and demands rigorous compliance for applications with substantial impact. In the United States, bipartisan efforts are under way to craft regulations that promote transparency, restrict biased outcomes, and curb dangerous autonomous decision-making, especially in areas such as hiring and the criminal justice system.

By supporting a minimal interference strategy, Trump is wagering on a deregulation mindset that attracts developers, business owners, and those doubtful of governmental involvement. Nevertheless, specialists caution that the absence of protective measures may lead AI systems to worsen disparities, spread false information, and weaken democratic structures.

The timing of Trump’s AI announcement seems strategically linked to his 2024 electoral campaign. His narrative—focusing on freedom of expression, equitable technology, and safeguarding against ideological domination—strikes a chord with his political supporters. By portraying AI as a field for American principles, Trump aims to set his agenda apart from other candidates advocating for stricter regulations or a more careful embrace of new technologies.

The proposal also reinforces Trump’s broader narrative of fighting against what he describes as an entrenched political and technological establishment. AI, in this context, becomes not just a technological issue, but a cultural and ideological one.

Whether Trump’s AI plan gains traction will depend largely on the outcome of the 2024 election and the makeup of Congress. Even if passed in part, the initiative would likely face challenges from civil rights groups, privacy advocates, and technology experts who caution against an unregulated AI landscape.

As artificial intelligence advances and transforms various sectors, nations around the world are striving to balance innovation with responsibility. Trump's plan embodies a distinct, albeit contentious, perspective: one centered on deregulation, skepticism of institutional oversight, and deep concern about perceived political interference through digital technologies.

What remains unknown is whether this approach can deliver both the freedom and the protections needed to steer AI progress along a path that benefits society as a whole.

By Roger W. Watson