What is woke AI? Decoding the White House's new target.
President Donald Trump says that "woke AI" is a pressing threat to truth and independent thought. Critics say his plan to combat so-called woke AI represents a threat to freedom of speech and potentially violates the First Amendment.
The term has taken on new significance since the president outlined the White House's AI Action Plan on Wednesday, July 23, part of a push to secure American dominance in the fast-growing artificial intelligence sector.
The AI Action Plan informs a trio of executive orders:
Promoting the Export of the American AI Technology Stack
Accelerating Federal Permitting of Data Center Infrastructure
Preventing Woke AI in the Federal Government
The action plan checks off quite a few items from the Big Tech wishlist and borrows phrasing like "truth-seeking" directly from AI leaders like Elon Musk. The executive order about woke AI also positions large language models with allegedly liberal leanings as a new right-wing bogeyman.
So, what is woke AI? It's not an easy term to define, and the answer depends entirely on who you ask. In response to Mashable's questions, a White House spokesperson pointed us to this language in a fact sheet issued alongside the woke AI order: “biased AI outputs driven by ideologies like diversity, equity, and inclusion (DEI) at the cost of accuracy.”
Interestingly, except for the title, the text of the woke AI executive order doesn't actually use this term. And even though the order contains a definitions section, the term itself isn't clearly defined there either. (It's possible "woke AI" is simply too nebulous a concept to write into actual legal documents.) However, the fact sheet issued by the White House states that government leaders should only procure "large language models (LLMs) that adhere to 'Unbiased AI Principles' defined in the Order: truth-seeking and ideological neutrality."
The fact sheet goes on to define those two "Unbiased AI Principles," and in practice the White House seems to treat woke AI as any LLM that is not sufficiently truth-seeking or ideologically neutral. The executive order also calls out specific examples of potential bias, including "critical race theory, transgenderism, unconscious bias, intersectionality, and systemic racism." Obviously, there is a culture-wide dispute about whether those subjects (including "transgenderism," which is not a term accepted by transgender people) are inherently biased.
Critically, AI companies that fail to meet the White House's litmus tests could be locked out of lucrative federal contracts. And because the order defines popular liberal political beliefs — not to mention an entire group of human beings — as inherently biased, AI companies may face pressure to adjust their models' inputs and outputs accordingly.
The Trump administration has talked a big game about free speech, but critics of the action plan say this order is itself a major threat to free speech.
"The part of the action plan titled 'Ensure that Frontier AI Protects Free Speech and American Values' seems to be motivated by a desire to control what information is available through AI tools and may propose actions that would violate the First Amendment," said Kit Walsh, Director of AI and Access-to-Knowledge Legal Projects at the Electronic Frontier Foundation, in a statement to Mashable. "Generative AI implicates the First Amendment rights of users to receive information, and typically also reflects protected expressive choices of the many human beings involved in shaping the messages the AI writes. The government can no more dictate what ideas are conveyed through AI than through newspapers or websites."
“The government has more leeway to decide which services it purchases for its own use, but may not use this power to punish a publisher for making available AI services that convey ideas the government dislikes," Walsh said.
As for the word "woke" itself, the answer again depends entirely on where you fall along the political fault line, and the term has become controversial in recent years.
This adjective originated in the Black community, where it described people with a political awareness of racial bias and injustice. More recently, many conservatives have started to use the word as a slur, a catch-all insult for supposedly politically correct liberals.
In truth, both liberals and conservatives are concerned about bias in large language models.
In November 2024, the Heritage Foundation, a conservative think tank, hosted a YouTube panel on the topic of woke AI. Curt Levey, president of the Committee for Justice, was one of the panel's experts, and as a conservative attorney who has also worked in the artificial intelligence industry, he had a unique perspective to share.
Levey said that if LLMs are biased, that doesn't necessarily mean they were "designed to be biased." He added that the "scientists building these generative AI models have to make choices about what data to use, and you know, many of these same scientists live in very liberal areas like the San Francisco Bay Area, and even if they're not trying to make the system biased, they may very well have unconscious biases when it comes to picking data."
A conservative using the phrase "unconscious bias" without rolling his eyes? Wild.
Ultimately, AI models reflect the biases of the content they're trained on, and so they reflect our own biases back at us. In this sense, they're like a mirror, except a mirror with a tendency to hallucinate.
To comply with the executive order, AI companies could try to tamp down on "biased" answers in several ways. First, by controlling the data used to train these systems, they can calibrate the outputs. They could also use system prompts, which are high-level instructions that govern all of the model's outputs.
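For a concrete sense of what a system prompt is, here's a minimal, hypothetical sketch using the OpenAI Python SDK. The instruction text and the model name are assumptions made purely for illustration, not anything an AI company has actually deployed; the point is that a single standing instruction sits above every user message and steers every answer the model gives.

# A hypothetical illustration of a system prompt, not any vendor's real policy.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# environment variable; the model name below is a placeholder.
from openai import OpenAI

client = OpenAI()

# The system message is a standing, high-level instruction applied before any
# user input. Changing this one string shifts the framing of every response.
system_prompt = (
    "You are a neutral assistant. On contested political topics, "
    "summarize the major viewpoints without endorsing any of them."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name for this example
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Is unconscious bias real?"},
    ],
)

print(response.choices[0].message.content)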
Of course, as xAI has demonstrated repeatedly, the latter approach can be... problematic. First, xAI's chatbot Grok developed a fixation on "white genocide in South Africa," and more recently started to call itself Mecha Hitler. Transparency could provide a check on potential abuses, and there's a growing movement to force AI companies to disclose the training data and system prompts behind their models.
Regardless of how you feel about woke AI, you should expect to hear the term a lot more in the months ahead.