China's Grift-Induced AI Hardware Boom
A primer on China's "DeepSeek Consoles".
Leading open-source large language models from China, like DeepSeek and Qwen, have become the standard against which all open-source models are measured. Topping benchmarks and seeing widespread adoption, these models have led many to believe that China has figured out the secret to AI innovation and is poised to surpass the US in the AI race. However, as Victor Shih recently noted on the Dwarkesh Podcast, the amount of wasteful spending in China's technology sector is enormous, and the winners are a vanishingly small group of outliers. Due to special interests and outright corruption, billions upon billions of dollars are essentially thrown down a hole. In this piece, we'll look at one such fad, one that's likely to cost the Chinese government and Chinese corporations billions of dollars.

A DeepSeek console ad at the Shenzhen airport.
The DeepSeek Console
In early 2025, "DeepSeek consoles" suddenly became all the rage, especially among Chinese local governments and state-owned enterprises. Following DeepSeek's rise to stardom in late 2024, every company set its eyes on deploying DeepSeek in the workplace, like girls in the West Village trying to cop Trader Joe's tote bags. And much like those tote bags for the NYC girlies, DeepSeek doesn't bring true value to most of these companies. It's a trend, a status symbol that lets leaders of these organizations tie the hottest buzzword to their names.
DeepSeek consoles are servers (mostly made by hardware firms unrelated to the AI labs) that come pre-loaded with open-source LLMs like DeepSeek (and sometimes Qwen). They're meant to be deployed on-prem - hosted within the buyer's own data centers - to serve LLM queries. A console itself can cost anywhere from hundreds of thousands to over 2 million RMB (1 USD ≈ 7 RMB), with significant add-on costs such as software subscriptions and engineering support.
So why don't these companies & organizations just dump their files into a hosted vector database service and stick an AI chatbot on top of the data like most of the self-proclaimed "AI-enabled" companies in the world? Why spend millions of dollars on something that requires much more brainpower and capital to operate when there are much simpler and cheaper solutions?
This comes down to the weird tradition of obsessing over on-prem compute due to distrust of cloud service providers, and the widespread misconception that on-prem compute and data storage are always more secure than the cloud-based alternatives.
Another important factor is the political calculus. In China, many technology-adoption decisions stem from top-down political mandates. The foremost consideration in such decisions isn't whether the new technology is truly useful for these organizations. Instead, it's how much political capital the business leaders and their colluding bureaucrats can accumulate.
Moreover, Chinese companies and local governments generally prefer capital-intensive routes because there are more opportunities for grifting. In the case of AI deployment, every party - C-suite leaders, procurement managers, local government officials, hardware builders, GPU traffickers, etc. - is trying to siphon money from this operation, creating an incentive to spend as much as possible.
The DeepSeek console, however, is not a viable technical option for the vast majority of use cases. It's extremely hard to scale up and down, requires technical expertise that most organizations don't have, and doesn't actually deliver the perceived security and privacy advantages - to name just a few of its drawbacks. That's why, when the grifters' party is over, what's left will be nothing but a pile of obsolete servers that have had no real utility, will continue to be useless, and will eventually be disposed of for dimes on the dollar.
The Origin Story of the DeepSeek Console
The emergence of the DeepSeek consoles is best described as a combined result of historical market preferences, strategic national mandates, and profit-seeking grifters. We'll take a deep dive into the first two factors in this piece, while the last factor warrants an article of its own.
The cloud landscape in China is very different from that in the US. While the Chinese government collects extensive data from its citizens, it supposedly takes protecting personal data very seriously (which is absolutely not true - virtually every Chinese citizen's data is for sale on Telegram). As a result, virtually the entire public sector has been slow to adopt cloud computing and storage, which is somehow seen as less secure. Many state-owned enterprises and government branches keep servers in their own office buildings to store sensitive data.
This preference persisted into the adoption of AI. Many of the decision makers - from high-level technocrats to IT managers in companies - do not have a thorough understanding of the differences between the compute needs for AI vs. traditional software. Naturally, they've kept the tendency of wanting to deploy everything on-prem for AI.
Another reason there's an appetite for DeepSeek consoles is the Chinese government's budgetary structure. Unlike in the US, where local governments collect their own taxes alongside the federal government, local governments in China submit most of their tax revenue to the central government. Higher-level governments then repay their lower-level counterparts via various forms of "transfer payments" based on budgetary needs. The majority of these are "special-purpose transfer payments," which must be spent on specified uses (such as refurbishing run-down schools in rural areas) and are subject to audits by higher-level governments.
AI was listed as a national priority at the latest Chinese National People's Congress in March 2025. This indicates that AI spending is poised to constitute a larger share of the overall government budget, most likely in the form of special-purpose transfer payments to local governments. For local-government bureaucrats, a larger budget is always better than a smaller one. Adding AI to the government's agenda via billed-by-usage cloud services won't make a compelling argument for a juicy budget increase from the higher-ups. "We'll need hundreds of millions of RMB in transfer payments to purchase DeepSeek consoles to equip our government workers with AI," on the other hand, is much more likely to actually bring in more money for the local governments.
Why the DeepSeek Console is a Bad Technical Choice
As we've seen, buying and deploying DeepSeek consoles is mostly a business and political decision rather than a technical one. When technical viability is the last thing considered in deploying a new technology, it's usually a sign of disaster - and that's exactly what's going to happen here. Most organizations deploying DeepSeek consoles will likely experience poor usability and low uptime due to a lack of technical expertise, bleed money through unforeseen costs, and still face the same security vulnerabilities that on-prem deployment was supposed to prevent. Only a very few use cases - those with truly specialized needs and sufficient technical know-how - warrant on-prem deployment of AI inference servers like the DeepSeek console. And most buyers of DeepSeek consoles aren't in that category.
Pre-deployment
For many customers of the DeepSeek console, poor technical decisions begin at purchase. To be fair, it's very hard, if not impossible, to make the unequivocally correct choice of hardware and software stack. Top-of-the-line and entry-level consoles differ in price by over 20x, and the software platforms running on them also vary greatly in cost. Sure, buyers can run pilot programs and test a few different setups, but a real-life deployment with 500 users is guaranteed to look very different from a pilot with 5 testers. Scaling a pilot up linearly is likely infeasible due to limitations in power supply, physical infrastructure, and money. So even when a pilot goes well, prospective clients are almost always making difficult compromises that set up the real-life deployment for failure.
Another huge hurdle for the successful deployment of DeepSeek consoles is customization, though this limiting factor also applies to cloud-based solutions. For an AI system to add value to a workforce, it has to thoroughly integrate into existing workflows and understand the contexts of the work. This means data has to be easily searchable and retrievable by the AI in order for the outputs to be helpful.
However, IT and data governance in governments and state-owned enterprises have historically been dreadful. Many computers still run on Windows 7 and are using obsolete versions of Microsoft Office, not to mention lots of the software is pirated. Almost all files are stored locally with no clear organizing structure, making it almost impossible to create an aggregated knowledge base. Workplace communications are often mixed with personal messaging on platforms like WeChat, making it impossible for LLMs to query. This historical oversight of standardized data storage and artifact building has created a huge roadblock for deploying customized AI solutions, making most of the models deployed on-prem via DeepSeek consoles no different from the publicly available versions.
In-use
The headaches only multiply once the DeepSeek consoles are deployed. Keeping the servers online and the LLM inference services consistently available is no easy task. It requires a level of operational expertise that currently doesn't exist in almost any organization seeking to deploy DeepSeek consoles, and building a team with such expertise takes lots of money and time. Hiring just two engineers to manage the DeepSeek stack can easily cost these companies and government entities north of 1.5m RMB (~200k USD) per year. Though significantly cheaper than hiring in the US, that's still a price most of these organizations aren't willing to pay - and two engineers are probably not enough for any decently large team. See a more detailed calculation of the cost of maintaining on-prem LLM services here.
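To make the staffing math above concrete, here's a minimal back-of-envelope sketch. The salary figure and team size are illustrative assumptions (only the 1.5m RMB total and the 1 USD ≈ 7 RMB rate come from this article); real compensation will vary by city and seniority.

```python
# Back-of-envelope annual cost of staffing an on-prem LLM deployment.
# Salary per engineer is an assumed figure for illustration only.

RMB_PER_USD = 7  # rough exchange rate cited in the article

def annual_staffing_cost_rmb(num_engineers: int, salary_rmb: int = 750_000) -> int:
    """Total yearly payroll for the in-house ops team, in RMB."""
    return num_engineers * salary_rmb

cost_rmb = annual_staffing_cost_rmb(2)
cost_usd = cost_rmb / RMB_PER_USD
print(f"2 engineers: {cost_rmb:,} RMB/yr ≈ {cost_usd:,.0f} USD/yr")
# Under these assumptions: 1,500,000 RMB/yr ≈ 214,286 USD/yr
```

Note that this covers payroll only - power, cooling, spare parts, and software subscriptions come on top.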
"But the hardware providers offer on-call support," some might argue, "so there's no need to maintain an in-house engineering team to service the machines." While such support exists, its actual utility is questionable. Many buyers chose DeepSeek consoles over cloud-based solutions for the sole reason of data sovereignty. Relying on external support means allowing external access to data, giving up that sovereignty and contradicting the very premise for buying the solution. Moreover, on-call support from the hardware manufacturers will always respond more slowly than in-house support, leading to more downtime.
A more crucial issue is scaling. If DeepSeek consoles do find success after the initial deployment - which is supposedly what both sellers and buyers want - the buyers will want to expand LLM usage to reap more benefits. Scaling up LLM capacity on a managed cloud service can be as simple as changing a number on a website; even in the worst case, it's no more than a day's work for an engineer. Scaling up an on-prem solution like the DeepSeek console, on the other hand, is astronomically harder. From planning and building the physical infrastructure - buildings, power supply, cooling, and so on - to installing the hardware and software, it can easily take months and cost millions.
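A simple break-even sketch shows why the capital-intensive route is hard to justify. Every price below is a hypothetical assumption chosen for illustration - the article gives only the rough console price range, not cloud or opex figures.

```python
# Hypothetical break-even comparison: fixed on-prem spend vs. pay-per-use cloud.
# All prices are illustrative assumptions, not real quotes.

def months_to_break_even(console_cost_rmb: float,
                         monthly_opex_rmb: float,
                         monthly_cloud_bill_rmb: float) -> float:
    """Months until cumulative cloud bills exceed on-prem capex plus opex.

    Returns float('inf') if the cloud bill never catches up, i.e. the
    cloud is cheaper every single month.
    """
    monthly_saving = monthly_cloud_bill_rmb - monthly_opex_rmb
    if monthly_saving <= 0:
        return float("inf")
    return console_cost_rmb / monthly_saving

# A mid-range 1M RMB console with 50k RMB/month in staff, power, and cooling,
# versus a 60k RMB/month cloud inference bill at equivalent usage:
print(months_to_break_even(1_000_000, 50_000, 60_000))  # 100.0 months (~8 years)

# Same console, but actual usage would only run up a 30k RMB/month cloud bill:
print(months_to_break_even(1_000_000, 50_000, 30_000))  # inf: cloud always wins
```

Under these assumptions, even a generous cloud bill takes over eight years of steady usage to justify the console's upfront cost - and that's before accounting for the hardware becoming obsolete along the way.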
And since models and chips iterate - very fast - deployed DeepSeek consoles will soon be obsolete. So even with no desire to scale up, just maintaining the fleet in order to benefit from the latest models is going to be expensive.
In short, the DeepSeek console is not an approach built for operational success. It gets dramatically harder to operate and maintain as adoption increases. As AI models become more capable and gain more practical use cases, the sunk cost of DeepSeek consoles will surge, until most users eventually abandon the solution and swallow significant financial losses.
Is It More Secure & Private?
The biggest supposed benefit of the DeepSeek console is the superior security and privacy it provides. "Running AI workloads on-prem means you're less vulnerable to attacks and data breaches." But is that really the case?
First, is deploying AI on-prem more secure than using managed cloud services? It really comes down to the quality of the user's in-house security team. If a company pays top-of-the-line salaries to attract elite security engineers, then sure, running AI workloads on-prem is probably more secure than running them in the cloud. However, the reality is usually the other way around: companies put huge emphasis on deploying the hardware for the bragging rights, while conveniently ignoring seemingly low-ROI, low-prestige work like security. This often makes on-prem solutions more vulnerable than cloud-based ones.
Even if the on-prem AI workloads are separated from the public internet, all it takes is an HR person clicking a link in her email or an intern using a USB stick that had been plugged into her personal laptop for malware to get in. So saying "on-prem AI is more secure" is truly a stretch, as it only applies to a few outliers who are willing to invest heavily in their security teams.
Another advantage DeepSeek consoles are supposed to possess is data privacy. The belief is that because all the data is stored on-prem, it will be safe from any unauthorized external access. But an important premise must hold for that to be true: the organization must run its own maintenance and operations team.
As previously mentioned, maintaining and operating AI hardware like DeepSeek consoles requires specialized talent, and not every organization can afford to hire it. A popular alternative is outsourcing the work to the hardware providers, whose technical teams then operate the hardware they sold.
For the companies and government branches choosing this route, the data privacy advantage is de facto nonexistent. Relying on external teams to maintain and operate the servers means granting high data-access privileges to people outside the organization, making DeepSeek consoles not all that different from cloud-based solutions: there will always be a theoretical possibility that sensitive data is exposed to those who aren't supposed to see it.
Closing thoughts
The Chinese tech ecosystem is very different from its US counterpart. The interplay between government mandates and market forces often leads to decisions that are beneficial for all parties involved in the short term but are intrinsically unsound in the long run. In the case of DeepSeek consoles, enterprises chose this more capital-intensive method to deploy AI models instead of the simpler, more common cloud-based approach because that's what's most likely going to maximize personal gains for the leadership. Fundamentally, it's the grifting, both legal and illegal, that has led to the popularization of this unviable technology stack, which is likely going to cost its adopters billions in the future.
Here at In & Out AI, I write about AI headlines using plain English, with clear explanations and actionable insights for business and product leaders. Subscribe today to get future editions delivered to your inbox!
If you like this post, please consider sharing it with friends and colleagues who might also benefit - it’s the best way to support this newsletter!