A critique of the call for public AI

As I spend more time in the AI infrastructure business, I've been thinking more and more about the government's role in AI. There's no shortage of opinions and position papers on the topic, and sadly, most of them are written from the perspective of the government rather than the AI industry. As a result, they are often full of misunderstandings, misleading statements, or ideas about the world that are six months out of date—the blink of an eye to the government, but an eternity in the AI industry.

The latest to cross my desk is a report titled The National Security Case for Public AI, released on September 27, 2024 by the Vanderbilt Policy Accelerator. At a high level, the authors try to make the case that the U.S. government should build its own vertically integrated AI stack (from silicon to data centers to applications) to compete with ("complement") efforts in private industry. They also suggest regulating the AI industry in a way analogous to how public utilities are regulated. It's full of hyperbole, no doubt to stimulate thought and debate, and is a pretty easy read with a bunch of helpful references I'd never read before.

But as often happens when I read these sorts of things, I started angrily annotating the PDF as I read. It's riddled with gaps, logical fallacies, and dishonest representations, and by the time I realized that it probably wasn't worth my time to peel apart such a flawed position paper, I was already committed to marking up the whole thing. So as not to feel like I completely wasted an evening on this, I've decided to post my annotations here.

The biggest hurdle is workforce

Although it isn't the lynchpin of the paper, its most egregious problem is the frequent call to hire more AI expertise to develop public AI capabilities. The tone is completely ignorant of what it's really like to work in industry versus government in a high-tech space, and the paper repeatedly makes recommendations that imply that hiring and retaining staff who are leading experts in all aspects of the AI stack is just a matter of cutting a bigger check.

As I have been whining about for years, this is a misguided belief, and a vision built on this premise is a house of cards. Until the people who write these sorts of papers understand why it's hard to attract and retain people whose skills have dual use in the public and private sectors, these grand visions to go head-to-head with the AI industry will make incremental progress at best. If you are reading this and are ever tempted to write a position paper that includes hiring more HPC or AI experts into the government, please talk to someone who's worked in both worlds first!

The obliviousness to partnership

The authors also take a stark, black-and-white view in which the government is unambiguously good and private industry is just out to stack cash, fleece the government at every opportunity, and let the world deteriorate around them. I kept finding myself making the following points:

  1. The defense industry's subcontracting model is not the only way government can partner with industry. The authors completely ignore the fact that the NSF and DOE each have their own successful models for funding national-scale infrastructure for the public good, and those contractors (UT-Battelle, which runs Oak Ridge; LANS, which runs Los Alamos) and their subcontractors who build specific computing solutions (IBM, HPE, etc.) have decades of history partnering with the technology industry.
  2. The existing "AI stack" in private industry is not 100% proprietary. Significant pieces of it, likely the majority of it, are open-source, openly developed, and managed through a neutral foundation of supporters. The PyTorch Foundation is a prime example: PyTorch is the foundation on which much of the at-scale training has been done, and there's nothing stopping anyone (including the government) from participating in its development.
These two points paint a picture of authors trying to apply what they know (likely work in the defense sector, building physical widgets) to something they understand only peripherally (developing hardware and software technologies and deploying and maintaining them at scale).

My detailed notes

That all said, I am not an expert in any of this either; I am neither an AI expert nor a policy wonk, and I don't really understand how the government (especially those parts outside of DOE and NSF!) works. What follows is just a loose collection of quotes from the paper and my personal thoughts in response. All the usual disclaimers apply as well: these views are mine alone, they do not reflect those of any past or present employers, and so on.

Let's dive in.

Altman frames the choice as between two futures: “Will it be one in which the United States and allied nations advance a global AI that spreads the technology's benefits and opens access to it, or an authoritarian one, in which nations or movements that don't share our values use AI to cement and expand their power?”

...

By public AI, we mean two things: publicly-provided, -owned and -operated layers in the AI tech stack, such as cloud infrastructure, data, and model development; and public utility-style regulation of the private AI industry that fosters competition and prevents abuses of power.

What would stop AI innovation from moving to countries that simply do not impose public utility-style regulation and that aren't the hyperbolic “authoritarian” governments described above? Honest question.

If space, power, cooling, and money are the only things holding back the private AI industry, I can think of several places in the world (that aren't the USA) that would make attractive landing spots. Given the political extremism and volatility in the US, one could reasonably argue that there are better places than the US for such innovation to happen.

Regulation won't work nearly as well when the workforce is remote and the regulations are not necessarily aligned with global societal norms.

Investing in people with technological expertise has the potential to create a virtuous cycle: a more affordable mission-driven staff would not only build public-interested AI systems for a wide variety of public uses but could also evaluate private sector AI services more accurately and reduce the likelihood that government contracts will suffer from cost and quality problems.

This is one of the most nonsensical things I've read on this topic, and it reads like the perspective of someone who's never worked on both sides of technology. Likening leadership in AI innovation to the rollout of a generic web service like healthcare.gov reflects a complete lack of understanding of how AI, and the specialized expertise it requires, differs from general IT functions.

There is no such thing as “more affordable mission-driven staff” when it comes to AI. Do you think people working at Meta, OpenAI, and other leading AI labs are “affordable” by the typical American taxpayer's standards?

I believe in the mission more than most people working in Big Tech, but this claim is patently absurd.

Here, too, it seems that the DOE will rely on some private sector AI infrastructure and partnerships (including cloud, data centers, and likely software designers).

This implies that private sector partnership is anathema to the notion of public AI. I have bad news—the public sector cannot stand on its own and create its own shadow version of what the AI industry has collectively done.

Part of this is because the AI industry itself has benefited tremendously from public-private partnership. The authors seem ignorant of open-source software and the effective public-private partnership in which industry, governments, and universities all contribute to a common foundation.

More robust federal investment in the infrastructure and human capacity for public AI is needed.

The government cannot “invest” its human capacity problems away. This is such a simplistic view.

The reason is obvious: the railroad would only serve its own vertically-integrated coal company or would charge prohibitive prices to competitors, thereby pushing them out of business. A competitive coal sector required preventing vertical integration with railroads. In the AI context, structural separations could be placed between chip makers, cloud providers, and model developers...

Is this a real threat? History has shown that being vertically integrated is often a very bad thing; compare Intel, which is vertically integrated in chip design and fab, and NVIDIA, which is not.

The coal and railroad analogy is imperfect because railroads and coal are both independently useful to many market segments. By comparison, a data center is not useful unless there are GPUs in it, and a GPU isn't useful unless there is a model to train on it. Again, this betrays the fact that the authors do not actually know how the supply chain underneath AI models works.

Nondiscrimination rules, or neutrality mandates, require that infrastructural providers serve all comers neutrally without favoritism or price discrimination.

So GSA prices for everyone?

Nondiscrimination rules ensure a level competitive playing field for entrepreneurs and non-profit, academic, or public sector customers to access critical resources. In the AI context, these rules would apply to...

Doesn't antitrust cover this, since the authors claim that the whole AI industry is monopolistic?

Without competition or regulation, an AI oligopoly is likely to box out innovative start-ups, lose their innovative edge, offer worse quality of service to government clients, and raise costs for the American taxpayer. Regulating market structure to prevent the abuses of monopoly...

At what level of the stack is “AI oligopoly” being defined here? Or is it all of them?

What in the world is an “innovative start-up” when it comes to building multi-billion-dollar data centers?

What is an “innovative start-up” in the context of chipmakers who all rely on TSMC fab capacity to make their chips, and who can design chips from locations around the world?

The problem with the public utility analogy is that public utilities are geographically anchored to their consumers in the US. By comparison, the AI supply chain faces global competition. American AI companies will not “lose their innovative edge” because they're getting fat off of government contracts; they'll lose it because other countries are playing on the same field and can move faster.

First and foremost, public AI would bolster innovation. As Mariana Mazzucato has shown, the federal government has been an engine of innovation – and particularly technological innovation – throughout its history. Research and development programs, national missions and industrial policies, and other publicly-resourced and often publicly-run programs have led to considerable breakthroughs. We should...

I'd love to hear the long-form version of the argument that the AI industry would move faster if the government were involved.

This is such a broad, nonspecific argument that it glosses over all the nuanced differences between AI as a societally revolutionary technology and other revolutionary technologies that got off the ground with government support.

It is textbook economics that firms facing little competition and no regulation to discipline them will both abuse their power and fail to innovate.

There is an AI land grab happening right now between a combination of AI startups and large technology firms. How can the authors say that there is “little competition” in one of the most fiercely competitive technology races that private industry has ever seen?

If there is “little competition,” why are so many people in the AI business working 60+ hour weeks?

Of course, I can envision a few cases where the government might feel like it's getting a bad deal from AI companies. For example, imagine that an opportunity to work with the government is presented to an AI company, but the solicitation is written with the presumptions of this paper. Ridiculous claims implying that finding talent is just a matter of money, and that the government expects "affordability," would signal to any private bidder that this is a customer with unreasonable and misguided expectations. Why would a company, which is itself struggling to retain talent and build infrastructure faster than its competitors, divert its constrained resources to work with a customer who is that far out of touch?

Perhaps the issue is that there is too much competition, and other customers are willing to pay higher prices and hold more reasonable expectations than the government. That isn't a sign of "little competition and no regulation"; it's a sign that the government needs to catch up.

...we should expect these firms to continue pursuing anticompetitive actions that undermine innovation as they move into the AI space. Robust, independent public AI capacity also allows for more bespoke...

This is quite disingenuous, because it implies that these companies' existing businesses and the markets in which they compete are completely transferable to the AI industry.

The AI industry does not even have a clear path to net profitability yet, so how can the authors claim that monopolies or oligopolies will form unless the government steps in? There are plenty of arguments to make for the government to regulate AI, but this isn't one of them.

Consider Elon Musk's control of Starlink for example. Whatever one thinks of Musk's political views or the war in Ukraine, should one person – or one firm – be able to undermine U.S. government policy with respect to a major conflict simply because they want to?

I would like to understand how this was undermining U.S. policy, since Elon Musk isn't an agent of the government. Is this statement arguing that the federal government should be operating or regulating its own Starlink? If so, why isn't it?

...quite real prospect of a contractor withholding critical products and services if the firm's leadership has a policy or political difference with the U.S. government.

Citation needed. When has a major tech firm ever done this? This is a genuine question; I may be naive since I've only worked in the high-end supercomputing space of the government.

That said, companies last a lot longer than presidencies. The damage of withholding critical products and services to spite one president or congressional session would endure far beyond that term, and I can't picture a successful company ever doing this.

dependence by government or critical infrastructure entities (such as utilities or airlines) on sole source providers for foundational operations services creates national security risk.

But these weren't cases where there was a sole-source provider:

  • You can't dual-source email service, and there is no top-down mandate that all government agencies use one email service over another. For every hack of an Exchange account, there is a hack of a Gmail account.
  • Similarly, not all airlines were affected by CrowdStrike, because not all airlines chose to use it. In fact, the kernel access that allowed CrowdStrike to cause the failure it did was a result of Microsoft opening up kernel access so other companies could compete with Microsoft's own security software. If the argument is that not everyone should use Windows, well, why hasn't the government addressed this by regulating the operating system business or mandating an alternative? Honest question.

And for what it's worth, I don't use Windows at work.

Public AI stacks create an independent option for government, one free from conflicts of interest or the whims of powerful private citizens. It ensures that national security goals cannot be dictated or determined by private actors.

To claim that anything government-made will be “free from conflicts of interest or the whims of powerful private citizens” is patently absurd given the country's campaign finance regulations and the tendency for some people in power (public or private) to abuse that power for personal profit.

To claim that this never happens, which this statement does, undercuts a significant chunk of the whole argument here. It indicates the authors are making this argument from an idealized world, not the one in which we live.

When government does need to leverage the private sector, a robust, independent public AI capacity will improve its ability to effectively partner with industry to advance the national interest.

How? With a magic wand? I don't understand this claim.

In short, these regulations would help keep the AI ecosystem healthy for the situations in which contracting out is necessary.

This is a good place to point out that much of this report is a giant slap in the face to the DOE and NSF supercomputing programs.

These organizations rely heavily on contractors and subcontractors to deliver the closest thing to a national AI infrastructure today. To suggest that they should be absorbed into the federal government—and be even more constrained than they currently are in the choices they can make, the costs they must incur for compliance, and the excess oversight and process that erode their agility—is completely out of touch with reality.

DOE and NSF have shown that the government does not need to be vertically integrated and own all its own chipmaking, system integration, data centers, and applications to advance science for the public good. Perhaps more than any other single sentence in the report, the tone of this statement makes me question whether responding was a good use of my time, because it is in no way grounded in the decades of success the government has already had in maintaining technology and infrastructure, largely through public-private partnership, for the national interest.

The tech platform example is instructive: countless hours and billions of dollars have been spent optimizing what videos and advertisements people should see. Far less effort in our age of technological progress has gone toward improving veterans benefits or social welfare programs – because that's not where the money is.

Quite hyperbolic, but fine.

However, this is an application of AI that sits at the very tip of the vertically integrated public AI stack this paper is calling for. Billions of dollars invested in putting eyeballs on ads is not the same as hundreds of billions of dollars invested in building out nationwide infrastructure to support these applications.

And as I'll detail below, improving the lives of people is where the money is for companies who can afford to compete at the top end of the AI game. Business is good when society is happy and productive.

Cost-overruns and delivery delays are standard. Quality of the output is sometimes a problem.

This is a non sequitur. Is this because of contracting, or is it incidental to contracting?

I don't understand how bringing these capabilities in-house will somehow make the process on-time and under-budget. What are examples of government functions which are handled in-house that are successful and efficient? Jury duty and going to the DMV?

Even if the system does not replicate all of these pathologies, once national security needs are identified, contracting to private actors still takes a considerable amount of time compared to in-house development and delivery of solutions.

Citation needed. This is not true.

One can imagine researchers and developers using public AI resources to develop and deploy AI solutions to address thorny problems of poverty and food insecurity, climate change, and disease – and without the imperative to commercialize those solutions or achieve a return on the investment of time and money.

Again, these are applications of AI. The majority of the investment required to build a vertically integrated public AI stack is not in developing AI applications to solve public problems! It is in duplicating the massive infrastructure build-out, operations, and development of models on which those applications can be built.

And to suggest that these “thorny problems” are not of interest to the corporations who can build the needed AI infrastructure is shortsighted. Food insecurity, climate change, and disease are good for nobody. If people are starving and dying, profits are down. It is true that some companies do not see societal challenges as aligned with shareholder value, but those companies are playing the short game and are unlikely to have the vision and capital required to build AI infrastructure in the first place.

If private companies understand that the government has the ability to develop national and homeland security solutions in-house, they would have to be more competitive in their pricing and more sensitive to delivering on time and on budget.

So the claim here is that private companies are late and over budget because the government lets them?

Show me a case in the history of leadership supercomputing where this was true. Stuff is late because measured bets are made and developing first-of-a-kind technology to solve groundbreaking problems is fundamentally hard and risky.

I feel like the authors want it both ways; they either want to develop in-house alternatives to commodities available on the open market so they aren't fleeced by nefarious subcontractors, or they want to compete directly with a fast-paced global AI industry developing new technologies at unprecedented cost and scale. Which is it? One comes with competitive pricing, and the other comes with risk-adjusted pricing.

...reliance on outsourcing to contractors and consultants saps the government of knowledge, talented people, and focus on public problems.

Explain how DOE ASCR and NNSA/SC programs work given this statement.

You cannot apply generic findings from the defense sector and claim they apply to AI when the government already has a much more realistic analog: the national supercomputing efforts in DOE, NSF, and other agencies.

Moreover, having serious in house AI expertise and capacity will improve federal agencies' capacity to evaluate private contractors' AI proposals and products, and in turn, ensure that the government gets the products and services it needs at a fair price. This is one reason why experts have recommended building up federal tech capacity and personnel across agencies.

Again—wave a magic wand and it will be so.

You can't go down to the local Walmart and just buy AI expertise. You also cannot train up AI expertise and expect them not to consider other options once they realize their skills are in demand and valued more highly in the private sector. Until the government provides

  • Competitive total compensation
  • Clear, compelling mission
  • A workplace culture that is supportive of the highest performers

there will be a net egress of AI (and tech) talent from the government to private sector.

At best, big tech companies have a mixed record when it comes to public safety and welfare and democratic practices. The list of inadequate...

The same thing could be said about the government with equal weight and credibility. Any long-lived organization is going to have blemishes; to present this as if it's unique to Big Tech, and to conclude that Big Tech therefore cannot be trusted and government is the only alternative, is disingenuous.

Some frontier AI companies have already been sued for training their models using massive amounts of copyrighted materials without permission or payment.

A little off-topic, but this rings a little hollow given how much research for the public good gets locked behind the paywalls of journals and major publishers.

As I said above, so many of these points about how Big Tech isn't to be trusted can be turned right back around at the government. These problems are not unique to the private sector; they are a function of the way the country and society incentivize the behavior of people regardless of who employs them.

Of course, the federal government is not perfect either, especially in the national security context. The U.S. government has undertaken its fair share of undemocratic and rights-abusing actions from domestic surveillance of civil rights leaders to bulk data collection. For this reason alone, public AI efforts should be accompanied by strict privacy rules and independent oversight to ensure Americans' rights. But in creating a public option for AI, lawmakers have the opportunity to advance, rather than diminish, democratic values and establish layers of oversight and transparency, which importantly - and unlike private companies - are democratically accountable.

This started out well and then took a hard turn. Why is public AI the only AI that should be accompanied by strict privacy rules? This statement reads like “we should have public AI so that we can regulate data privacy” when the real statement should be “we should regulate data privacy.”

Also, “democratically accountable” doesn't exactly mesh with all the claims that the private sector is only out to “maximize shareholder profits.” I think I get what the authors are trying to say here, but it's not as if there's no accountability. When a company does something that's bad for society, generally speaking, its share price reflects that. There are exceptions, of course.

Some firms also seem to treat AI safety as an afterthought, which has led to a number of alternative firms created by disaffected and worried former employees. Leading figures in the AI sector, including the heads of frontier AI companies, have warned that generative AI models pose catastrophic and potentially existential risks to humanity - including the risk of “large-scale destructions” within a few years. Some have even declared that the future generative AI models will be so powerful and risk-laden that they should not be in private hands.

Doesn't this statement undercut the idea that private industry cannot be trusted to care about AI safety? It didn't take a government to tell these people to create their own firms or to get venture capitalists to fund them. The problem is being addressed exclusively by private industry, by the same evil Big Tech and VC firms that "treat AI safety as an afterthought."

Regarding “the risk of large-scale destruction,” that's not what the testimony says.

And citing a podcast, which has a financial incentive to drive listenership by making controversial claims, as an authority on the risks of AI severely undercuts the credibility of this paragraph. Shame on the authors.

then the U.S. government should be at the cutting edge of AI safety research. And to conduct cutting-edge AI safety research, the federal government needs its own AI capabilities on which public employees and outside independent non-profit researchers can build frontier models and conduct safety testing.

Unless the government prevents it, frontier models will be proprietary, so collaboration with private industry will be necessary to have a material impact on AI safety and prevent “large-scale destruction.”

Developing its own vertically integrated AI safety capabilities necessarily means going head-to-head with the largest AI companies in the world to develop models that can be deeply inspected. This is not tractable, full stop.

The focus should be on building trust with industry through partnership, not decrying private sector as nefarious and claiming you'll just do what they do but better, faster, and cheaper. Developing parallel capabilities to train frontier models just makes no sense here. It's really expensive, even by government standards.

Moreover, if existential risks or emergent properties do materialize, it would likely be better for the first people to encounter and engage with such models to be public sector AI developers and national security professionals, who can be held publicly accountable, rather than corporate engineers and executives with primarily economic incentives. There are three reasons for this. First, the government would most likely encounter and engage with any so-called AI “superintelligence” in a closed, classified facility rather than a more open corporate environment.

There is no “open corporate environment” in which a superintelligence will be developed. The authors clearly have no clue how leading-edge AI development happens. The security of the facilities training frontier models is at least as comprehensive as that of classified data centers, because those companies are just as worried about their secrets being stolen by adversarial state actors as the government is. To suggest otherwise is ignorant.

Second, corporate incentives will likely push in the direction of release without sufficient testing or controls.

Again, the authors have clearly never talked to anyone who is credibly working on AGI. I don't know anyone in the industry who has this in their business plan for when AGI or superintelligence is reached.

A system running a superintelligence will be phenomenally expensive to own and operate. To suggest that any person off the street would be given access to a superintelligence as soon as it is activated ignores the financial realities of how this will play out.

As much as it would make this narrative more convincing, the AI industry is not this carefree and reckless.

Third, and relatedly, the government has decades of experience (and is generally quite good at) maintaining security for extremely dangerous materials and sensitive information – from nuclear and cyber weapons to disease samples and state secrets. Indeed, this is one reason why these activities are either publicly run and publicly managed capabilities or are highly regulated.

Do you think corporations aren't good at keeping secrets too? Show me evidence that the government is better than industry at these things.

The specific cases mentioned here are places where the private sector is not allowed to compete. Of course the government will have a better track record, because nobody else is on the track.

...tech companies seek to maximize profits for their shareholders. But the profit motive does not necessarily overlap with the United States's national security interests or with the public interest.

They do not necessarily overlap, but they often do. American tech companies require a stable and successful nation to “maximize profits for their shareholders,” so acting in the national interest is often aligned with financial incentives.

Arguments about tech patriotism in the AI race with China are particularly questionable given that most of the big tech companies operate in China, are dependent on China for production of their hardware, or have consistently attempted to get into Chinese markets (and simply been thwarted by Chinese officials).

I agree with the sentiment, but I don't think this statement is as true as the authors wish it to be. As relations between the US and China get frostier, companies have a natural incentive to distance themselves.

It is not unrealistic to worry that such commercial ties to adversarial or diplomatically transactional countries could, if enough money or market share was at stake, undermine or at least complicate American firms' services to the U.S. government.

I don't disagree with this. There is a concerning amount of “free money” flowing into the US tech sector from nations with checkered human rights records, for example. This is geopolitical and far beyond the scope of AI though.

Rather, it is simply to say that profit seekers are likely to argue for policies that benefit their shareholders, not the American public, when these two sets of interests are at odds.

A reasonable person could argue that a profit seeker could also be president, a member of Congress, or any other elected or career member of the US government at any given time. This is a pretty weak argument when used to claim that the government will do a better job than corporations or startups.

First, the sprint to build public AI would complement – not prevent, preclude, or crowd out – private AI infrastructure and investment. It would coexist with the private sector and address national security challenges and public goods.

There is zero threat that public AI would “crowd out” private AI. And “complement” is very hard to distinguish from “compete against, poorly” when it comes to paying smart people to do innovative things that have dual use.

It would also ensure a dedicated, resilient, and uncompromised AI capacity that would meaningfully strengthen national security and advance public AI capacity.

Resilient? How will that work when existing government HPC resources are completely unresilient? I would say that the government's ability to deliver resilient, large-scale infrastructure for HPC is far behind the capabilities of commercial AI supercomputers. I would love to see a Top 10 supercomputer at a government lab train a trillion-parameter model to convergence (as opposed to training it for just a few steps and writing a paper about it!). It would be an eye-opening experience for the government.

What does “uncompromised AI capacity” even mean?

To put a fine point on this: what happens to this infrastructure when Congress can't pass a budget? When this happened during my time in government, I was fortunate to be a contractor and have my employer carry my salary until the politicians got their act together. Do you know how much money is lost when a data center full of GPUs goes idle for days or weeks in the private sector? Industry, and the shareholders holding it accountable, does not stand for that level of dysfunction.

the U.S. Government has historically been a transformational innovator and enabler of public-interested technological innovation where there is an urgent and compelling national interest. Finally, to the extent that building public AI would require transforming government - by hiring many new people with technological experience and expertise and increasing state capacity for public activities - this is a feature, not a bug. For too long, the government's capacity to act, and especially to act on technology, has been underdeveloped, slow, and outsourced.

The U.S. Government has historically been a transformational innovator when there is no commercial interest in doing something. Going to the moon is not profitable. Nuclear weapons are not profitable (because they're so highly regulated). AI is profitable and transformational because it is a feature of products that are already profitable. To liken the government's role in AI to the government's role in the moon landing is a joke.

As far as "hiring many new people with technological experience and expertise," should someone (Congress?) just wave their magic wand and make working in government at least as desirable as working in private industry for AI research?

There are so many things wrong with this.

  • The government is slow to move because it works by consensus. Do you think AI innovation would happen if it moved at the pace of the slowest thinker?
  • What would motivate a smart and ambitious AI practitioner to work in a slow-moving environment, mired in bureaucracy, where the penalty for underperforming is a lifelong salary with no critical responsibilities? That environment is demoralizing for high performers, and the government offers little recourse when one bad employee poisons the well.
  • Pay is an obvious challenge. How could the government justify having its highest-paid employees—who would have to be paid more than the US president to be competitive with industry—working on nebulous AI initiatives, dictated in part by clueless bureaucrats, that are in direct competition with a focused and driven private sector?

Like I said earlier: you can't just go to Walmart and buy AI expertise. The authors completely fail to acknowledge that and speak as if they have a magic wand.

Our current, largely unregulated ecosystem of one GPU manufacturer, three Big Tech cloud providers, and a handful of AI labs at or affiliated with Big Tech companies will not provide the AI that the United States needs to safeguard national security and serve the public.

This seems intentionally hyperbolic.

  • Don't tell AMD investors that there's only one GPU provider. Their quarterly financials don't seem to reflect that.
  • Even if there were balanced competition in the market, what will you do about TSMC? This isn't a one-dimensional issue.
  • Don't tell Meta AI that they are affiliated with a cloud provider. Or Anthropic. In fact, OpenAI and Google are the only two AI shops that fit this categorization, and OpenAI is already branching out.