EXECUTIVE|DISORDER

Revoked by Donald Trump on January 20, 2025

Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence

Ordered by Joseph R. Biden Jr. on October 30, 2023

Summary

Issued by President Biden in October 2023 and revoked by President Trump in January 2025, the EO established standards and oversight for safe, secure, and responsible AI development and promoted worker protections, equity, consumer privacy, and international cooperation. Its revocation removed mandated AI safeguards, transparency requirements, and risk mitigation measures.

Background

Before its revocation, Executive Order 14110 sought to regulate artificial intelligence with a focus on safety, security, and ethical use. The Biden administration's framework fostered a policy environment of transparency, testing, and monitoring of AI systems through directives rather than explicit rulemaking. Agencies such as the National Institute of Standards and Technology (NIST) and the Department of Commerce developed guidelines and standards for generative AI and AI-driven technologies, seeking to curb risks from misuse and systemic error. These measures aimed to prevent AI algorithms from producing discriminatory behavior or obscuring decision-making processes, setting a standard for ethical AI development.

Operational adjustments mandated by the order included AI risk management frameworks and red-teaming exercises across the federal apparatus. Agencies were pushed to build rigorous testing environments where an AI system's capabilities and vulnerabilities could be probed under controlled conditions, with emphasis on methodologies that mitigate inherent weaknesses in AI models, particularly those with potential dual-use security applications. These adjustments ensured that AI technologies deployed by government entities adhered to defined safety and security benchmarks, enhancing public trust in federal AI use.

On the enforcement side, the order established accountability measures to curtail AI misuse. It directed executive agencies to enforce existing consumer protection laws and to ensure that AI-enabled products complied with privacy and civil liberties standards. The order also required providers to report acquisitions of large computing clusters indicative of significant AI training efforts, and extended compliance obligations to businesses whose AI models surpassed specific computational thresholds, making such activities subject to government audits. As a result, the order put firms on notice, compelling them to disclose information crucial to AI integrity and compliance.
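The computational threshold described above can be made concrete with a small sketch. EO 14110 (Section 4.2) set the general-purpose reporting trigger at 10^26 total integer or floating-point operations for a training run; the 6 × parameters × tokens approximation of transformer training compute used here is a common rule of thumb, not part of the order, and the specific model sizes in the usage example are illustrative assumptions.

```python
# Sketch: would a given training run have crossed the EO 14110
# reporting threshold? Sec. 4.2 of the order set the general trigger
# at 1e26 total operations. The 6*N*D compute estimate is a standard
# rule of thumb for dense transformer training, not language from the EO.

REPORTING_THRESHOLD_OPS = 1e26  # total training operations (EO 14110, Sec. 4.2)


def estimated_training_ops(parameters: float, tokens: float) -> float:
    """Approximate training compute as ~6 operations per parameter per token."""
    return 6.0 * parameters * tokens


def requires_report(parameters: float, tokens: float) -> bool:
    """True if the estimated compute meets or exceeds the reporting threshold."""
    return estimated_training_ops(parameters, tokens) >= REPORTING_THRESHOLD_OPS


# A hypothetical 70B-parameter model on 15T tokens stays well under
# the threshold (~6.3e24 ops); a hypothetical 1T-parameter model on
# 20T tokens crosses it (~1.2e26 ops).
print(requires_report(7e10, 1.5e13))  # False
print(requires_report(1e12, 2e13))    # True
```

The point of the sketch is that the order's trigger was a bright-line compute number: firms did not report model capabilities directly, only that a training effort of this scale was underway.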

Reason for Revocation

The context of revocation by President Trump frames a pivot towards more laissez-faire economic principles, aligning with a broader deregulatory agenda characterizing his political ideology. Trump’s administration historically favored reduction in federal oversight, positing that excessive regulation stifles innovation and economic growth. With AI positioned as a burgeoning industry, the withdrawal of EO 14110 aligned with efforts to unshackle the sector from federal constraints. This freedom appeals to AI companies favoring rapid development cycles and fewer compliance hurdles, enabling them to compete aggressively in the global market.

The significance of this revocation rests in Trump's inclination to bolster domestic industries by minimizing perceived burdensome federal mandates, typifying a return to more flexible, market-driven AI governance. The initial impetus for regulation under Biden contradicted this narrative. In Trump's vision, competitive advantage stems from swift technological advancement unencumbered by rigid regulation, conditions he argues facilitate US leadership on the global AI stage.

Ideologically, revoking the order reflects broader priorities: reducing government intervention in the technology sector and encouraging industry-led governance structures, with the government acting as facilitator rather than regulator. Critics counter that this exposes the public to unchecked commercialization risks, advocating instead for deliberate, consensus-driven policy formation. Trump's disinclination toward multilateral regulatory approaches, preferring bilateral engagements where the US can wield greater influence, further explains the motivation behind ending the order.

Winners

Large technology firms and industry incumbents stand to benefit considerably from the revocation. Corporations like Google, Amazon, and Microsoft, which leverage vast computing infrastructures and possess advanced AI tool development programs, could see financial windfalls. These enterprises now face fewer constraints on domestic AI model testing and deployment, allowing expedited timeframes for innovation without detailed compliance checks previously required under the order. Operating with greater autonomy, these firms can channel resources directly into enhancing AI capabilities rather than meeting prescriptive government standards.

The semiconductor industry, which supplies the hardware underlying AI systems, also emerges as a winner. The removal of reporting mandates on computing cluster acquisitions and AI processing thresholds offers flexibility. Companies like Intel and Nvidia, already leading in AI-related chip manufacturing, can scale production and develop new chip designs unhindered by government monitoring or data reporting obligations, focusing instead on optimizing production lines and advancing process technology.

Startups and smaller tech ventures also gain a reprieve. The deregulation trend lifts the compliance burdens inherent in strict AI governance, enabling rapid prototyping and deployment. Free to iterate quickly on products and ship solutions to consumers or partners, these entities can redirect resources traditionally earmarked for regulatory adherence toward building their business operations and expanding their market reach.

Losers

Public advocacy groups and consumer protection organizations express concern that the revocation diminishes safeguards against privacy invasion and algorithmic discrimination. Such organizations, including the Electronic Frontier Foundation and the American Civil Liberties Union, sought robust mechanisms to ensure that AI systems in sensitive areas like healthcare and criminal justice operated with transparency and accountability. With the regulatory framework dismantled, these entities lose a critical platform for advocating stringent AI oversight, and AI systems may be tuned to maximize efficiency at potential societal cost.

Federal agencies previously aligned with Biden's mandate to enforce AI safety standards face operational and strategic setbacks. Agencies such as the Department of Energy and the Department of Homeland Security had prepared to implement thorough risk assessments of AI use in critical infrastructure. Without centralized guidelines, AI governance across the federal government risks fragmenting, producing inconsistencies in how technologies that influence public welfare are overseen and integrated with security protocols.

The civil rights community perceives the removal as weakening efforts to address systemic biases proliferating through AI models. Under Biden, the order emphasized technical audits to identify discrimination as part of its equity-focused measures. Revocation creates a void, potentially catalyzing unchecked technological proliferation that ignores long-standing societal disparities. Stakeholders argue that AI without thoughtful governance risks reinforcing or exacerbating racial, gender, and economic inequalities.

Implications

This section will contain the bottom-line-up-front analysis.

Users with accounts will see different text depending on their user type: general interest, journalist, policymaker, agency staff, interest group, litigator, or researcher.

Users will be able to refine their interests so they can quickly see what matters to them.