Trump's UAE AI Deal Reveals a Huge Problem for the US
Is the US struggling to stay ahead of China in the AI race?
Last week, the Trump administration announced a series of bilateral investment deals with the UAE, Saudi Arabia, and Qatar. Totaling over $2 trillion, the package has been championed as an example of Trump's commitment to global collaboration and prosperity (making this a great time to fire Navarro and launch him into the ocean via catapult). The deal includes a 5 gigawatt (GW) AI cluster to be built in the UAE. As one of the biggest AI clusters announced to date, it's incredibly ambitious - and rife with question marks when examined closely.
What's In the Deal
First, a closer look at what was included in the announced deal. Most of the items are in fields such as manufacturing, energy, and defense - this includes the entire $1.2T Qatar deal, and the majority of the Saudi and UAE deals.
Both Saudi Arabia and the UAE plan to build AI infrastructure (data centers) as part of their deals, but the sizes differ drastically. Saudi Arabia took the more conservative approach, aiming for a 500 megawatt (MW; 1,000 MW = 1 GW) data center within 5 years, to be built in collaboration with Nvidia and AMD, the two leading AI chip providers.
The UAE, on the other hand, is aiming for the moon. It's building a 5GW AI campus near Abu Dhabi by 2030 - the largest AI cluster buildout plan so far.
To put these numbers in perspective, the largest AI data center in operation today is xAI's Colossus cluster in Memphis. With 200,000 H100 GPUs, its estimated power usage is around 250 MW. By the end of this year, AWS may take the throne with its Rainier cluster, consuming around 400 MW. The industry leader, OpenAI, held the largest AI cluster buildout plan before the UAE announcement - and even they were only planning a ~4.5 GW cluster, with an investment of up to $500 billion from SoftBank. For reference, NYC's average power consumption is estimated at 5.7 GW. So while Saudi Arabia's plan looks more than achievable by 2030, the UAE's plan to build an AI cluster 20 times the size of today's biggest - one consuming almost as much power as the entirety of NYC - is... ambitious. But hey, they have Bugatti cop cars. If anyone can build it, maybe it's them.
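As a rough sanity check on these figures (the per-GPU wattage and overhead multiplier below are my assumptions, not official numbers): assuming each H100 draws about 700 W at full load, and a facility overhead factor of roughly 1.8 to cover cooling, networking, and host CPUs, the quoted estimates line up:

```python
# Back-of-envelope cluster power math. Both constants are assumptions,
# not official figures from xAI, AWS, or the UAE deal.
H100_POWER_W = 700   # assumed peak draw per H100-class GPU
OVERHEAD = 1.8       # assumed facility multiplier (cooling, networking, CPUs)

def cluster_power_mw(num_gpus: int,
                     gpu_watts: float = H100_POWER_W,
                     overhead: float = OVERHEAD) -> float:
    """Estimate total facility power draw in megawatts."""
    return num_gpus * gpu_watts * overhead / 1e6

# 200,000 H100s -> ~252 MW, close to the ~250 MW estimate for Colossus
colossus_mw = cluster_power_mw(200_000)
print(f"Colossus estimate: {colossus_mw:.0f} MW")

# Inverting the formula: a 5 GW (5,000 MW) campus could host
# roughly 4 million H100-class GPUs under these assumptions.
gpus_in_5gw = 5_000 / cluster_power_mw(1)
print(f"5 GW campus: ~{gpus_in_5gw / 1e6:.1f}M GPUs")
```

Under these (rough) assumptions, the UAE campus would have room for around 4 million H100-class GPUs - about 20 times Colossus, consistent with the comparison above.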

source: X@ohlennart
What's It For, Anyway?
Now, let's assume the incredibly cracked engineers from the US and Middle East are able to work together and get this 5GW data center built - then what? How will it be put to use?
A Primer on Training and Inference Data Centers
Generally speaking, AI data centers can serve two distinct functionalities: training and inference.
Using a data center for training means using it to find the right weights/parameters for an AI model - i.e., it's used for research and development and doesn't interact with users directly.
Inference data centers serve apps like ChatGPT. When you send a chatbot a message, it's sent to an inference data center nearby to be processed.
While these two types of data centers use more or less the same kind of chip (GPUs), they are built very differently, favoring distinct characteristics.
The largest data centers are usually reserved for training. The GPUs in a training run need to be as physically close to one another as possible - ideally all under the same roof. Because a model's parameters are constantly updated during the training run, every GPU needs to hold the same, latest version of the weights, so you want the chips housed as close together as possible to minimize latency during these updates. In theory, this information travels through fiber-optic cables at close to the speed of light, so even if two chips were thousands of miles apart, each update would only add tens of milliseconds. However, a large training run involves hundreds of millions (a conservative estimate) of gradient updates, which makes inter-GPU communication time a significant overhead.
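To see how those milliseconds compound, here's a small illustrative calculation (the distances are made up for illustration; the 100 million updates figure is the article's conservative estimate, and light in fiber travels at roughly two-thirds the speed of light in vacuum):

```python
# Why physical proximity matters for training: per-update network latency,
# multiplied across an entire training run. Illustrative numbers only.
SPEED_IN_FIBER_M_S = 2.0e8  # light in fiber ~ 2/3 of c (approximation)

def sync_overhead_hours(distance_m: float, num_updates: int) -> float:
    """Wall-clock hours spent on one fiber round trip per gradient update."""
    round_trip_s = 2 * distance_m / SPEED_IN_FIBER_M_S
    return num_updates * round_trip_s / 3600

# GPUs in the same building (~100 m apart) vs. 3,000 km apart,
# over 100 million gradient updates:
same_room = sync_overhead_hours(100, 100_000_000)
cross_country = sync_overhead_hours(3_000_000, 100_000_000)
print(f"Same building: {same_room:.2f} hours of sync latency")
print(f"3,000 km apart: {cross_country:.0f} hours (~{cross_country / 24:.0f} days)")
```

Under these toy numbers, co-located chips accrue well under an hour of propagation delay over the whole run, while chips 3,000 km apart accrue on the order of a month - which is why training clusters go under one roof.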
The good news is that training data centers don't need to interact with users, so it doesn't matter where they're built. They can sit in the middle of nowhere, as long as their chips are close together.
Inference data centers, on the other hand, are almost the polar opposite. They need to be close to users to deliver a low-latency, smooth experience. But since the parameter values are already set, each GPU server only needs to keep its own copy of the parameters and doesn't need to communicate with other GPUs. A more distributed design - small GPU clusters close to every population hub - is therefore ideal for inference.
A helpful analogy would be to compare this to a manufacturer of, let's say, smartphones. Training data centers are like the factories. You want different factory lines located as closely as possible, ideally all in the same building so you can save on shipping costs. Inference data centers are like the stores. You want to have many stores in various cities, so consumers can get to your stores and buy your phones easily.
In general, the complexity and cost of building data centers scale hyper-linearly with their size, meaning that a data center twice as big would cost more than twice as much. This is due to challenges in areas such as networking, power supply, and cooling. Because of this challenge in scaling up data centers, almost all the biggest AI clusters are dedicated to training.

Visualizing the difference between training & inference clusters. source: Marvell Technology
So which kind will the UAE's 5GW data center be? The Commerce Department press release states:
The UAE-US AI campus will include 5GW of capacity for AI data centers in Abu Dhabi, providing a regional platform from which US hyperscalers will be able to offer latency-friendly services to nearly half of the global population living within 3,200 km (2,000 miles) of the UAE. Once completed, the facility will leverage nuclear, solar, and gas power to minimize carbon emissions and will also house a science park driving advancements in AI innovation.
It's not hard to deduce from this paragraph that the plan is to use this data center for inference. This is utterly perplexing. Remember how the largest clusters with the most compute are normally dedicated to training? How can the largest AI cluster announced to date be an inference cluster? It's hard to imagine that when it's finally built one day, all the hyperscalers will just say, "That's a cool 5GW cluster right there - let me use it to deploy my finished model to better serve the public in the Middle East."
A Problem for the US
While this definitely isn't "we didn't kill JFK" level of dishonesty, it is confusing and weird that the Trump administration claims this will be an inference cluster. Why would they do this? My best guess is that the Trump admin has yet to find a convincing rationale for justifying training state-of-the-art AI models on a cluster owned by the UAE. Given how the Trump admin and its inner circle have been harping on about maintaining the US's AI & tech dominance, it would be a bad look for them to announce that the US would rely on the UAE to train future frontier AI models.
So if building the most advanced AI cluster in the Middle East is misaligned with Trump's campaign talking points, why did they still go through with it? A likely explanation is that the US lacks confidence in its ability to build a comparable cluster domestically.
The main bottleneck would be power supply. Since 2024, many industry leaders have voiced their concern that the US could lose the lead in AI due to an inability to supply enough power to support state-of-the-art AI data centers. According to an Epoch AI estimate, it's unlikely for the US to build a >3 GW data center by 2030. The CEO of NextEra, the largest utility company in the United States, also considered it challenging to find a site for a 5 GW data center in the US.
The announcement of the latest 5GW data center in the UAE corroborates this view. Looking forward, AI dominance is arguably the top national security concern. If the US were reasonably confident in its ability to keep building the largest AI clusters domestically - in other words, if the administration had a choice - it would never even entertain building facilities this critical to national security outside US soil. The Trump administration must have recognized a real possibility that the US will fail to keep scaling up key AI infrastructure due to power constraints. If they don't build it somewhere else, the alternative is ceding the lead to China. The US's closest allies - e.g., NATO countries - likely lack the industrial capacity as well, making the UAE the next best option: a close ally, if not a fully aligned one ideologically, and most importantly, power-rich.
Stated more simply, the 5GW data center in the UAE can be viewed as a contingency in case domestic plans like OpenAI's Stargate fail to materialize - a scenario the US government evidently treats as realistic. The risk of losing the AI race to China is so great that the US would rather let a country like the UAE control the critical infrastructure for training frontier AI models.
Closing Words
There is still very little detail about this 5 GW cluster, so most of this article is conjecture based on one paragraph in a press release. Despite the scarcity of information, the US choosing to build its largest AI data center in another country says a lot. The move suggests that the US may not be willing to bet everything on its domestic capabilities and is hedging against the possibility of a slowed AI infrastructure buildout at home. While not putting all your eggs in one basket is often prudent, it can also be read as a sign of wavering self-confidence.
We're here to help you make sense of AI’s impacts. You'll get original insights, explained in an easy-to-follow, engaging way. Subscribe today to get future editions delivered to your inbox!
If you like this post, please consider sharing it with friends and colleagues who might also benefit - it’s the best way to support this newsletter!