Elon Musk’s so-called Department of Government Efficiency (DOGE) operates on a core underlying assumption: The United States should be run like a startup. So far, that has mostly meant chaotic firings and an eagerness to steamroll regulations. But no pitch deck in 2025 is complete without an overdose of artificial intelligence, and DOGE is no different.
AI itself doesn’t reflexively deserve pitchforks. It has real uses and can create real efficiencies. It isn’t inherently untoward to introduce AI into a workflow, especially if you’re aware of and able to manage around its limitations. It’s not clear, though, that DOGE has embraced any of that nuance. If you have a hammer, everything looks like a nail; if you have the most access to the most sensitive data in the country, everything looks like an input.
Wherever DOGE has gone, AI has been in tow. Given the opacity of the organization, plenty remains unknown about how exactly it’s being used and where. But two revelations this week show just how extensive, and potentially misguided, DOGE’s AI aspirations are.
At the Department of Housing and Urban Development, a college undergrad has been tasked with using AI to find where HUD regulations may go beyond the strictest interpretation of the underlying laws. (Agencies have traditionally had broad interpretive authority when legislation is vague, although the Supreme Court recently shifted that power to the judicial branch.) This is a task that actually makes some sense for AI, which can synthesize information from large documents far faster than a human could. There’s some risk of hallucination, more specifically of the model spitting out citations that don’t actually exist, but a human would need to approve these recommendations regardless. This is, on one level, what generative AI is actually pretty good at right now: doing tedious work in a systematic way.
There’s something pernicious, though, in asking an AI model to help dismantle the administrative state. (Beyond the fact of it; your mileage will vary there depending on whether you think low-income housing is a societal good or you’re more of a Not in Any Backyard type.) AI doesn’t actually “know” anything about regulations or whether they comport with the strictest possible reading of statutes, something that even highly experienced lawyers will disagree on. It needs to be fed a prompt detailing what to look for, which means you can not only work the refs but write the rulebook for them. It is also exceptionally eager to please, to the point that it will confidently make things up rather than decline to answer.
If nothing else, it’s the shortest path to a maximalist gutting of a major agency’s authority, with the prospect of scattered bullshit thrown in for good measure.
At least it’s an understandable use case. The same can’t be said for another AI effort associated with DOGE. As WIRED reported Friday, an early DOGE recruiter is once again looking for engineers, this time to “design benchmarks and deploy AI agents across live workflows in federal agencies.” His goal is to eliminate tens of thousands of government positions, replacing them with agentic AI and “freeing up” workers for ostensibly “higher impact” tasks.