As this is being typed out, we are nearly two days removed from the “market meltdown” on Monday, January 27th, triggered by Chinese AI lab startup DeepSeek. It’s not that we haven’t been thinking about it. In fact, we’ve been fielding lots of questions about DeepSeek over on our Semi Insider community. You should consider joining if you want all of our research and our most up-to-date, in-the-moment thinking. Our YouTube video production is not a breaking news source, sorry. https://ko-fi.com/chipstockinvestor/tiers 

By the way, here are links to our YouTube channel and videos in podcast form:

https://www.youtube.com/@chipstockinvestor

Chip Stock Investor Podcast

Besides the logistical challenges involved in making the DeepSeek video so many of you desire, there’s another reason for our delay in addressing this topic more directly. When the media (legacy, social, and otherwise) start clamoring to cover a hot topic, we have found tremendous value in waiting, thinking, and reflecting before weighing in (hey, we’re about to talk about some new AI that does exactly this, why can’t the humans do the same?!?). Which is where we’d like to start this discussion on DeepSeek, and what it means for investors.

The real “threat” from DeepSeek is this

By our assessment, DeepSeek’s AI breakthrough is the real deal. It will have an impact on the further development of AI, its application in business and personal use, and the monetization of this new frontier in computing. 

However, what that impact will be is anyone’s guess. Seriously, no one really knows yet. Which is why the market had a mini meltdown (and subsequent rally, at least as of this writing). Uncertainty is risk, and any new risk needs to be discounted in stock prices. The market didn’t crash because the party is over. Quite to the contrary, as we’ll get to momentarily.

Which is the primary issue we’ve taken with this most recent iteration of an old problem. Human emotion can run rampant in both directions, toward optimism and (in this case) pessimism. We’ve been presented with at least a dozen “deep dives” on DeepSeek from folks we’ve never heard of before, claiming expertise in this area of AI/investing. We get it, strike while the iron is hot. But using a period of uncertainty, and pumping up the risk factors for attention, should automatically draw skepticism from long-term investors (anyone know where the “I told you so” crowd from the 2022-23 bear market went?). 

The real making of money isn’t in knee-jerk reactions. It’s in the patient waiting, as the businesses you own roll with the punches, solve problems, and accrue economic value for their shareholders. The real threat right now is nothing more than human emotion running away with what should be careful and calculated decision making.

Enough of the lecture, what is DeepSeek? And why care?

DeepSeek was founded in early 2023 by the Chinese hedge fund High-Flyer. Liang Wenfeng, High-Flyer’s cofounder, serves as the CEO.

DeepSeek was started as an AI development lab, and seems to be working with Nvidia (NVDA) H800 GPUs made specifically for the market in China in response to U.S. AI chip export restrictions. (The H800s are a modified version of the H100s, which went into production in 2022.) We see no evidence of any foul play, just a highly talented engineering team innovating with the tools available to them, in the spirit of competition.

DeepSeek’s first product, DeepSeek-Coder, was released in November 2023. But what really started to attract attention was DeepSeek-V3, released in the final days of 2024, followed less than a month later by the R1 model on January 20th, 2025. What are V3 and R1?

  • DeepSeek-V3 is comparable to OpenAI’s ChatGPT-4o, released in May 2024. ChatGPT-4o is a general-purpose model that can accept input and output responses in text, images, and audio. DeepSeek-V3’s capabilities are limited to text-based inputs and outputs at this time. 
  • DeepSeek-R1 compares to ChatGPT-o1, which made an early preview debut in September 2024 before wide release in December. ChatGPT-o1 extends the capabilities of 4o by adding “reasoning” into the mix, and provides a more in-depth explanation of the response it gives. It particularly lends itself to complex problems in math, science, and writing software code. DeepSeek-R1 is also a text-based reasoning engine that rivals the performance of o1.

It seems R1 was some sort of “tipping point” for investors (any Malcolm Gladwell readers out there?). After enough people became aware of DeepSeek’s existence, and after a week of stewing on the implications, investors were ready to freak out on Monday morning of the 27th.

However, OpenAI’s models still appear to be in the computing performance lead, especially given ChatGPT-4o’s ability to process images and audio. So what’s the big deal? DeepSeek’s big breakthrough is that its AI models operate at a fraction of the cost of comparable leading AI models, all while using less-advanced hardware than Nvidia’s latest-and-greatest. 

All of this is very well-documented by DeepSeek, and open-sourced so the global AI engineering community can replicate and improve upon the work. https://api-docs.deepseek.com/news/news250120 https://github.com/deepseek-ai/DeepSeek-V3 

Game over for U.S. Big Tech, right?

The reason some investors are worried

DeepSeek’s models were significantly cheaper to develop (train) and to operate (inference) than those of its competitors. DeepSeek-V3 was trained for an estimated $5.6 million, although an important caveat is that this estimate excludes the cost of all of the previous work (including the other open-source work DeepSeek tapped into, like Meta’s Llama models) that made V3 and R1 possible. Nevertheless, by comparison, U.S. Big Tech’s models typically cost tens or hundreds of millions of dollars to train. 

This cost advantage stems from DeepSeek’s focus on algorithmic efficiency and its ability to achieve high performance with fewer resources. “Necessity is the mother of invention,” as the saying goes, and as quoted by our friend Jason Hall at Investing Unscripted. An extended conversation with Jason and Jeff on this topic will be published next week. https://open.spotify.com/show/7mbqwY9bh2JeNAOi7rBDRo 

At any rate, the concern raised by DeepSeek’s competitive advance is primarily two-fold:

  1. Big Tech data center hyperscaler spending on AI is out of control. By hyperscalers we mean Microsoft Azure (and its slow break-up with OpenAI), Meta, Amazon AWS, Alphabet’s Google and Google Cloud, and relative newcomer Oracle Cloud (the first mover with Nvidia’s H100s and DGX Cloud accelerated computing systems back in 2022). Collectively, the Big Tech hyperscalers have been shelling out tens of billions of dollars on capital expenditures (or CapEx, which includes Nvidia’s AI servers) every quarter. Investors are, much like they were back in July and August 2024, worried DeepSeek’s efficient AI work just proved all of this “AI spending” is a total waste.
  2. Nvidia, and other “AI chip stocks” like Broadcom (supplier of IP for custom chips like Google’s TPUs), are going to lose their windfall. If the hyperscalers suddenly pause their CapEx on AI chips and infrastructure, we’re finally headed for that dreaded cyclical decline in sales and profit for companies like Nvidia and Broadcom. And thus, look out below for the stock price.

For what it’s worth, if this is the two-part scenario that will play out in 2025, we’re ready for it. If the hyperscalers cut back on their CapEx, their free cash flow (FCF) profits will rise. This actually makes the hyperscalers a type of built-in “hedge” against the risk of Nvidia and other AI chip companies’ revenue and profit drying up. The Ultimate Nvidia Stock Hedge, and the Truth About Big Tech AI Spending
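
To put rough numbers on that built-in hedge, here is a minimal sketch of the FCF arithmetic. The quarterly dollar figures are purely hypothetical placeholders, not any company’s actual results:

```python
# Hypothetical illustration of the hyperscaler "hedge": FCF = OCF - CapEx.
# All figures below are made-up placeholders, not real company data.

def free_cash_flow(operating_cash_flow: float, capex: float) -> float:
    """Free cash flow is operating cash flow minus capital expenditures."""
    return operating_cash_flow - capex

quarterly_ocf = 30e9              # hypothetical operating cash flow per quarter
capex_during_ai_buildout = 13e9   # hypothetical CapEx while building AI infrastructure
capex_after_cutback = 9e9         # hypothetical CapEx if AI spending gets tapered

print(free_cash_flow(quarterly_ocf, capex_during_ai_buildout))  # 17 billion of FCF during the buildout
print(free_cash_flow(quarterly_ocf, capex_after_cutback))       # 21 billion of FCF if CapEx is cut
```

In other words, every dollar of CapEx the hyperscalers don’t spend with Nvidia drops straight into their own free cash flow, which is why owning both sides of that trade works as a hedge.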

Why DeepSeek’s rise is just tech acting in normal economic fashion

The risk from DeepSeek’s demonstration of efficiency is real, and puts the onus on AI engineers everywhere to up their game. This is competition at its finest. It’s how tech industries are supposed to behave. 

In fact, this race for AI model supremacy reminds us of the race for internet search algorithm supremacy in the late 1990s. Search quickly became a commodity, and the winner (Google) became such by figuring out how to vertically integrate. By the way, we just did a video on vertical integration. It doesn’t directly address this new batch of AI models, but the lessons remain the same. How to Invest In Chip Stocks 2025: Semiconductor IP — Tiny Tech, Big Bucks!

However, regarding AI more specifically, allow us to offer some very important context and counterpoint to the risk of AI getting commoditized.

1. Media and other would-be experts have the “AI spending bubble” all wrong

    As we explained last summer, hyperscaler spending on “AI infrastructure” like what Nvidia provides isn’t so simply explained by looking at the total CapEx and labeling all of it as “AI.” Why not? The TRUTH About the AI Bubble, Alphabet GOOGL Stock, and What It Means For Nvidia

    These businesses are massive, and they need computing infrastructure (in the form of data centers) to support their own workloads and customer workloads, which haul in many tens of billions of dollars of revenue every single quarter.

    Yes, some of the new CapEx spend is for infrastructure to support new AI model training. And yes, some of this unknown expense is likely at a premium, and the hyperscalers’ work on AI is essentially subsidizing the work of newer entrants like DeepSeek. Someday, there will be a reckoning for this.

    But let’s not forget that elevated CapEx is a cycle, and we’re still well within historical norms. Take Google as an example. As a percentage of revenue, CapEx spending on data center equipment (including for new AI training) appears quite healthy. And if you chart out this CapEx spend as a percentage of operating cash flow (OCF) profit, Google is quite justified in this spending to support its continual growth in Google ad revenue and Google Cloud. 
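
For those who want to run this check themselves, here is a minimal sketch of the two ratios we’re describing (CapEx as a share of revenue and as a share of OCF). The figures below are hypothetical placeholders; pull the real numbers from the company’s quarterly cash flow statement:

```python
# Hypothetical CapEx-intensity check. Swap in actual quarterly figures from a
# company's financial statements; the numbers below are illustrative only.

def capex_ratios(capex: float, revenue: float, ocf: float) -> tuple[float, float]:
    """Return CapEx as a share of revenue and as a share of operating cash flow."""
    return capex / revenue, capex / ocf

capex, revenue, ocf = 13e9, 88e9, 30e9   # made-up quarterly figures
pct_of_revenue, pct_of_ocf = capex_ratios(capex, revenue, ocf)
print(f"CapEx is {pct_of_revenue:.0%} of revenue and {pct_of_ocf:.0%} of operating cash flow")
```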

    The story is similar at Meta, which, unbeknownst to many investors, is also one of the world’s most powerful data center computing companies (not just a social media app company). In fact, Meta’s ad delivery efficiency, plus its entrance into the AI race, could be one of the best risk-reward opportunities around based on Zuckerberg and co.’s use of CapEx on “AI” so far. There’s no inefficiency rearing its ugly head here… not yet, anyway.

    If you want to make cool financial charts and models, check out FinChat.io. Here’s a link that gets you 15% off a membership: https://finchat.io/csi/ 

    Remember, data center equipment needs to be refreshed every four to five years. If Nvidia’s most advanced systems get the job done in more efficient fashion, why not do the refresh with Nvidia rather than with older CPU-based systems? This is not just an “AI spending” thing; it’s capital-intensive businesses (hyperscalers) doing what they need to do, plus adding some unknown amount of cutting-edge AI work along the way. 

    As for the other hyperscalers, Microsoft and Oracle especially are spending on CapEx at a more aggressive rate than their peers. Perhaps this is unsustainable, but they’re also trying to scoop up market share. Time will tell whether their more aggressive spending comes back to bite them now that DeepSeek has demonstrated a cheaper way to develop AI.

2. Risk to Nvidia and other “AI chip stocks” is overstated, according to Jevons paradox

      In the midst of the market meltdown on the 27th of January, Microsoft CEO (and obvious proponent of all things AI) Satya Nadella hinted that DeepSeek’s breakthrough actually wasn’t a bad thing at all. Perhaps contrary to reactionary logic, DeepSeek making a revolutionary improvement in AI model cost is a great thing – including for Nvidia. Nadella mentioned “Jevons paradox.”

      Jevons paradox, named for English economist William Stanley Jevons (1835-1882), is less an actual paradox and more just a simple real-world observation of elasticity of demand. In describing the use of coal at the time (industrial tech of the 19th century), Jevons explained how increased efficiency in the fuel source’s use actually served to increase its overall consumption.

      In other words, the cheaper a technological resource gets, the more businesses and consumers will use it.
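
For anyone who wants to see the arithmetic behind that claim, here is a toy sketch using a constant-elasticity demand curve. The elasticity value of 1.5 is an illustrative assumption, not a measured figure for AI compute:

```python
# Toy Jevons-paradox arithmetic: quantity demanded Q = k * price**(-elasticity),
# so total spend = price * Q. The elasticity of 1.5 is an assumption for illustration.

def total_spend(price: float, k: float = 100.0, elasticity: float = 1.5) -> float:
    quantity_demanded = k * price ** (-elasticity)
    return price * quantity_demanded

print(total_spend(price=1.0))  # baseline spend at the old cost per unit of AI work (~100)
print(total_spend(price=0.2))  # cost per unit falls 80%; total spend rises to ~224
```

Whenever demand is elastic (elasticity greater than 1), a falling price per unit of compute increases total spending on compute; only with inelastic demand would the bears’ “less spending” scenario play out.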

      Computing technology has been Jevons paradox on steroids for decades now. Computers not only get cheaper to produce, but they get far more powerful in their capabilities every year too. And talk about proliferating demand as a result. Many of us these days have multiple PCs, smartphones, smart wearables, intelligent cars, mobile connectivity with internet access and computing power on demand from anywhere…

      AI is simply the next iteration of this. Looked at from this perspective, DeepSeek’s innovation actually isn’t anything all that special. AI is getting cheaper and more powerful at the same time – which is exactly what we should have expected (and have expected, really) it to do. And in the early days of a new innovation like AI, cost relative to performance falls the most in absolute dollar terms. DeepSeek costing a fraction of what its predecessors needed to build and operate is also… just normal economics in the computing era. Pioneers and leaders tend to front a lot of the development effort and expense, and those that build on that innovation later benefit. 

      Right on cue, mere hours and days after the DeepSeek news hype began, business leaders from across the IT world (including at Nvidia) and beyond have praised the V3 and R1 models. It would seem that adoption of AI will rise as it gets more accessible and useful, creating another Jevons paradox situation.

      What does this mean for Nvidia and other chip stock friends? Short-term, overall consumption of a new technological resource doesn’t go up in a straight line. It never does. Things get bumpy, which is why so many investors have been keen on figuring out when the hyperscalers might begin to taper off their data center and AI CapEx. Perhaps DeepSeek just set the wheels in motion for the first accelerated computing and AI infrastructure cyclical downturn. 

      But a cyclical downturn is just a normal part of doing business in this industry. Over the long term, any AI model – OpenAI, DeepSeek, Google, Meta, Anthropic, Perplexity, etc. – will need to operate on hardware. And regardless of how efficient that AI may be, increasing use of the resource will require more hardware – and a hardware supplier. We expect Nvidia, Broadcom, and other “AI chip” suppliers to be just fine. If you haven’t already (that is, prior to January 27th), be warned: strap in and prepare for turbulence.

What are we doing now at Chip Stock Investor?

Very little. We bought some more Broadcom stock. We discuss this sort of thing over on Semi Insider. 

Why didn’t we react to the supposedly catastrophic DeepSeek news? This is where portfolio diversification comes in. We have said that Nvidia is our largest position – not just among our chip stocks, but in the whole portfolio. Nvidia has earned its way to that spot over years of delivering profitable growth and us scooping up more shares on the dips. However, and this is an important point, we have a well-diversified portfolio across a number of industries. 

We have had some questions at our live Q&A events recently about the argument that you only need a handful of stocks, and that betting big on a few high-growth businesses provides the biggest upside. That argument cuts both ways, though. Portfolio concentration, especially if it isn’t truly diversified across businesses participating in unrelated secular growth trends, can be very emotionally damaging on days like the “AI reckoning of January 27th” (just made that up, what do you think?). 

And what happens if those big positions suffer a 20%, 40%, or 50%+ drawdown, and stay down for a year? Two years? Or more? They could very well still deliver market-obliterating returns over the long term… but only if you stick with them. Do you have the mental fortitude to be patient, watching your portfolio sit below all-time highs waiting for a comeback, while other investors are making money?

In parting, we believe DeepSeek to be an important landmark in the AI race. But we aren’t shaken. These are early days, and thousands of companies with talented people all over the world are working very hard to make AI a success for every sector and industry of the economy. We’re happy to be patient with the businesses we own, and leave the panicking to someone else.
