
Is OpenAI CEO Sam Altman’s future worth $7 trillion?

Sam Altman, chief executive officer of OpenAI, during a panel session on day three of the World Economic Forum (WEF) in Davos, Switzerland, on Thursday, Jan. 18, 2024. The annual Davos gathering of political leaders, top executives and celebrities runs from January 15 to 19. Photographer: Stefan Wermuth/Bloomberg via Getty Images

The contemporary world of artificial intelligence (AI) has a maxim: if something is not possible with state-of-the-art generative AI systems today, just wait; it will be next month. A powerful reminder not to underestimate rapidly emerging systems, this maxim is making its way into the halls of state power, exemplified by the United States’ strategy of tech containment in its great power competition with China. That competition now unfolds as some Middle Eastern states, particularly the United Arab Emirates (UAE), play an increasingly assertive role in the development of emerging technologies like AI.

Nothing exemplifies this optimism more than reports that OpenAI CEO Sam Altman is in talks with, among others, the UAE government to raise $5 trillion to $7 trillion (yes, trillion) for increased chip-building capacity, following news that Altman sought billions of dollars for a chip company focused on Tensor Processing Units. The newly minted Abu Dhabi-based investment fund MGX, chaired by UAE national security advisor Sheikh Tahnoon bin Zayed al-Nahyan, is in “early” talks with OpenAI to help fund Altman’s chip-building endeavor.

That such an endeavor is even being considered indicates that AI’s modern maxim is taking root among diverse actors intent on harnessing AI. The perception is that generative AI models will only grow more sophisticated, and that states intent on playing leading roles in the global economy of the future must act decisively now or find themselves irrevocably disadvantaged in the long term.

One could be forgiven for believing that generative AI has already fulfilled the transformative vision on which the field has had its sights set for years. Yet underappreciated commercial and technical obstacles cast doubt on whether these models will deliver on the promise of transformative applications. Echoing Matt O’Shaughnessy’s warning that talk of “superintelligence” leads policymakers astray, Altman’s strategically ambitious but hubristic effort threatens to lash AI policymaking to a future that may never come to pass. American and Middle Eastern policymakers alike should be wary.

The commercial picture is already sobering. As the Financial Times reported, the (anonymous) chief financial officer of one multi-billion-dollar company noted a disparity between his peers’ spending on products like Amazon Web Services (AWS) and their spending on ChatGPT, with the latter “often at the margins of their business and he was surprised at how little they were actually paying OpenAI.”

Moreover, June Yoon argues that the market for the chips that power AI is starting to show signs of overheating. Combined with the lack of broad enterprise adoption, this gives generative AI a significant resemblance to the boom and bust of telecom stocks during the dot-com bubble.

Additionally, reporting by The Information shows that commercial adoption of generative AI is lagging behind the hype due to a mismatch between expected capabilities and current costs. Customers of cloud providers including Microsoft, AWS and Google “are being cautious” or “deliberate” about “increasing spending on new AI services, given the high price of running the software, its shortcomings in terms of accuracy and the difficulty of determining how much value they’ll get out of it.” Privately, short-term expectations for returns on AI investments have been tempered.

To be sure, generative AI companies do have short-term advantages. Altman is an impressive strategic actor, moving rapidly to capitalize on the unexpected success of ChatGPT, built on GPT-3.5, in late 2022 and cleverly calibrating his rhetoric about the future of the technology and OpenAI’s role in it for both the U.S. Senate and international audiences. OpenAI, furthermore, reached $2 billion in annualized revenue within just ten years of its founding (up from $1.3 billion in mid-October 2023).

Additionally, generative AI is impacting the customer service industry, as is evident in the cases of Salesforce’s “Einstein GPT”—a generative AI customer relationship management (CRM) technology—and Swedish FinTech company Klarna’s use of GPT-4 to handle customer service chats.

Finally, OpenAI’s text-to-video generator “Sora” is causing anxiety in the entertainment industry. This comes on the back of the 148-day Hollywood writers’ strike of 2023, in which the use of AI-generated material was a central issue. OpenAI is currently attempting to woo Hollywood film studios and directors with Sora.

Nevertheless, generative AI systems are being deployed with great rapidity before suitable applications for them have been identified, an odd sequence of events. As Alan Holland argues, venture capital investment in generative AI has inverted basic investing practice, which is to identify a business challenge first and then construct an application to solve it.

The technical obstacles faced by generative AI are no less serious. Hallucinations, for one, in which a model confidently generates false or fabricated output, may be an innate feature of the technology, one that can be reduced in real-world contexts but not eliminated.

Furthermore, large language models (LLMs) lack sufficient ability to plan and reason. An AI research group led by Subbarao Kambhampati finds that rather than planning and reasoning in a human-like fashion, LLMs engage in what the group calls approximate retrieval: given an input, they reconstruct a plausible completion word by word. Unlike a traditional database, however, this process does not return stored data exactly as it was found; it reconstructs output probabilistically, which is why LLM outputs can be novel. Their research shows that when tested autonomously, LLMs including GPT-3.5 and GPT-4 exhibit a decidedly poor ability to execute plans, with the important qualification that LLMs can improve the search processes of external planning systems through idea generation.
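For readers who want the mechanics made concrete, the sketch below illustrates the word-by-word probabilistic reconstruction described above. It is a toy model, not Kambhampati’s code or anything resembling a real LLM: the hand-built probability table is a stand-in for the learned next-token distribution that an actual model computes with billions of parameters.

```python
import random

# Toy stand-in for a trained model's learned next-token distribution.
# A real LLM computes these probabilities from billions of parameters;
# the principle, predicting one token at a time, is the same.
def toy_next_token_distribution(context):
    table = {
        ("the",): {"plan": 0.5, "chip": 0.3, "model": 0.2},
        ("the", "plan"): {"is": 0.6, "fails": 0.4},
        ("the", "chip"): {"market": 0.7, "fab": 0.3},
        ("the", "model"): {"predicts": 1.0},
    }
    # Fall back to an end-of-sequence token for unseen contexts.
    return table.get(tuple(context[-2:]), {"<eos>": 1.0})

def generate(prompt, max_tokens=5):
    tokens = prompt.split()
    for _ in range(max_tokens):
        dist = toy_next_token_distribution(tokens)
        # Sample the next token in proportion to its probability. This is
        # why outputs are novel reconstructions rather than exact lookups
        # of stored text, and why two runs can produce different answers.
        next_token = random.choices(list(dist), weights=list(dist.values()))[0]
        if next_token == "<eos>":
            break
        tokens.append(next_token)
    return " ".join(tokens)

print(generate("the"))  # e.g. "the plan is" on one run, "the chip market" on another
```

Nothing in this loop checks the output against the world or against a goal; it only extends the input with statistically likely continuations. That is the gap Kambhampati’s group identifies between approximate retrieval and genuine planning.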

Finally, generative AI systems lack intellectual autonomy. Put simply: the fundamental conceptual work is still done by humans. A human decides which concepts these systems should leverage and then determines whether the output is logically consistent. It is not clear how any descendants of GPT-4 belonging to the same fundamental architecture could alleviate this problem.

Considering these obstacles, the modern maxim of AI must be qualified: whenever a generative AI system appears to indisputably cross another threshold, just wait; the eventual walk-backs and caveats will bring it back down to reality.

State and private actors, including Abu Dhabi, should be exceptionally careful with Altman’s proposals. An AI system’s underlying architecture determines the scope of its capabilities and the practical costs of its construction, training and performance, and the infrastructure a system needs is inextricably linked to that architecture. Generative AI is no exception. Any attempt to plan the infrastructure that future AI requires therefore operates under significant uncertainty, because future AI systems may have unfamiliar architectures.

It is not a novel observation that future AI training and performance may leverage different hardware; it is less appreciated that the paradigms on which such decisions will be made are potentially unknown to researchers today. Software engineer Grady Booch, most famous for co-designing the Unified Modeling Language (UML), has been vocal about the folly of Altman’s proposal, identifying the broader trend as a “clear sign that contemporary AI has chosen the wrong architecture.”

Altman is sending an invitation to the future with his $7 trillion idea—but another, quite unfamiliar future may be in store for this technology.

Vincent J. Carchidi is a Non-Resident Scholar at the Middle East Institute’s Strategic Technologies and Cyber Security Program. He is also a member of Foreign Policy for America’s 2024 NextGen Cohort. His opinions are his own. You can follow him on LinkedIn or X.