Amazon (AMZN) exceeded expectations in Q2 2024. It once again proved that after amassing all that revenue last decade, it can still be a winning stock by now shifting to profit margin growth. Oh… and revenue growth wasn’t all that bad either.
Even so, the stock got hammered in the July/August stock market selloff. This looks like a gift to us – at least, a gift for anyone with excess cash who needs to fast-track a core portfolio holding in a tried-and-true business model. This is the type of “value stock” to look for in the 21st century.
But in this segment, we want to talk about Amazon Web Services (AWS), the cloud computing business, and specifically one small, often misunderstood part of AWS that the media loves to pump as an “Nvidia killer.” Let’s break it down.
The power of unmatched scale
Amazon’s Q2 report was solid, but one of the most impressive parts was the steady comeback of AWS, the cloud computing segment. It’s now at a $105 billion annualized revenue run-rate. Though that plays a distant second fiddle to Amazon’s various e-comm segments, AWS remains by far the primary generator of Amazon profitability.
After over a year of AWS customers “optimizing” their computing needs (cost cutting during the bear market of late 2022-23), AWS growth is accelerating again. Q2 AWS revenue was up 18.8% YoY, with hints that further acceleration towards 20% YoY growth and beyond is possible later this year. After all, cloud computing is still less than a $1 trillion per year global industry, compared to over $5 trillion for the IT sector overall. Amazon CEO Andy Jassy continues to assert his belief that cloud will “invert” the model, eventually reaching 50/50 with legacy IT spending, and perhaps one day surpassing it.
Those are what we call fighting words.
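To put rough numbers on that claim, here’s a back-of-the-envelope sketch using only the figures mentioned above (illustrative assumptions, not forecasts):

```python
# Rough headroom math from the figures in the text: cloud computing
# at just under $1 trillion/year vs. roughly $5 trillion for IT overall.
cloud_spend = 1.0e12      # annual global cloud spend (approx., from text)
total_it_spend = 5.0e12   # annual global IT spend (approx., from text)

share_today = cloud_spend / total_it_spend
print(f"Cloud share of IT spend today: ~{share_today:.0%}")

# Jassy's "invert the model" scenario: cloud reaches 50% of IT spend.
# Holding the total pie constant (a simplifying assumption), that implies:
target_share = 0.50
implied_cloud_market = target_share * total_it_spend
print(f"Implied cloud market at 50/50: ${implied_cloud_market / 1e12:.1f}T per year")
```

Even with a static IT pie (in reality it grows), going from roughly a fifth of spend to half implies the cloud market more than doubling from here, which is the scale of opportunity Jassy is pointing at.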
But of course, Wall Street has never won awards for long-term vision. Analysts continue to harp on this “AI bubble” nonsense. We already debunked this a couple of weeks ago (https://youtu.be/Rv9BNDw-8HY). It isn’t just about AI infrastructure. AI – or more broadly, accelerated computing – is simply building atop existing cloud infrastructure, which is still very much in growth mode. For now, capital expenditures (or CapEx, spending on property and equipment) aren’t a bubble. When a business is doing over $100 billion a year in revenue, generating operating profit margins of over 35%, and growing at a near-20% pace, its supporting infrastructure needs to expand to meet customer needs.
Do note that, as an e-comm business, Amazon’s CapEx also includes product logistics (delivery) and warehouses. Jassy and company don’t provide that level of granularity on specific CapEx spend, probably by design, to ward off the curious eyes of competitors and government regulators.
What’s this got to do with custom silicon?
AWS is coming up on 20 years of pioneering IT work. Drawing on that treasure trove of experience, AWS landed on the idea back in 2015 of designing its own chips. What better way to jumpstart those efforts than with a small acquisition of a custom chip design startup, Annapurna Labs (a clever name: Annapurna is one of the tallest mountains in the world, and most notably, one of the most dangerous for climbers).
A few short years later, Amazon’s quiet, under-the-radar custom chip unit started cranking out Arm-based CPUs (Arm being the same company whose architecture powers Apple’s M-series laptop chips and iPhone chips) for AWS customers.
What does that mean? Much like Google Cloud’s model with the TPU (Tensor Processing Unit), Amazon doesn’t sell these chips directly to customers. Instead, its e-comm empire uses them, and later, offers them to AWS customers looking for “optimization.” There’s that key word again: “optimization.”
These days, AWS EC2 (Elastic Compute Cloud, general-purpose and customizable enterprise computing in a cloud data center, accessible via an internet connection) has manufactured and deployed at least a couple million Arm-based Graviton CPUs. AWS boasts that over 50,000 global customers are utilizing these “instances” (virtual servers) powered by Graviton. https://aws.amazon.com/blogs/aws/aws-graviton4-based-amazon-ec2-r8g-instances-best-price-performance-in-amazon-ec2/
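For readers who spin up EC2 instances themselves: you can spot Graviton instance families by their naming convention, where a “g” follows the generation number (m7g, c7g, t4g, r8g, and variants like m6gd or c7gn), while x86 families use other suffixes (m7i, c5, r6a). The helper below is our own illustrative sketch of that convention, not an AWS API:

```python
import re

# Graviton (Arm-based) EC2 families carry a "g" immediately after the
# generation number: m7g, c7g, t4g, r8g, plus storage/network variants
# like m6gd or c7gn. x86-based families (m7i, c5, r6a) do not.
# NOTE: this is a naming heuristic we wrote for illustration, not an
# official AWS lookup.
GRAVITON_PATTERN = re.compile(r"^[a-z]+\d+g[a-z]*\.")

def is_graviton(instance_type: str) -> bool:
    """Guess whether an EC2 instance type is Graviton-powered
    from its name alone (e.g. 'r8g.large' -> True)."""
    return bool(GRAVITON_PATTERN.match(instance_type))
```

For example, the r8g instances from the AWS blog post linked above match this pattern, while Intel-based m7i or c5 instances do not. (Inferentia instances like inf2 are Arm-adjacent accelerators but not Graviton CPUs, and correctly fall outside the pattern.)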
What is a virtual server? We made this visual a while back:
Put another way, AWS Graviton – and the ability it gave customers to optimize their cloud computing spend – was a major headwind for Amazon during the bear market of 2022 and 2023. AWS revenue growth slowed all the way down to just a 12% pace at one point, and profit margins dipped too.
But on the flip side, Graviton chips keep those customers happy, and may prevent them from jumping ship to a competitor like Microsoft Azure or Google Cloud.
AWS custom silicon (Annapurna) expands to the AI field
What of all this accelerated computing and AI infrastructure now being built in the cloud? Amazon hopes it can pull off a similar “optimization” offering with new chips, just like it did for general purpose cloud computing with Graviton.
But accelerated computing is a different animal entirely. Big tech, even with its deep pockets of cash and tech know-how, lacks some of the basic IP needed to make these chips itself. As we have covered in the past, this is where “custom silicon” powerhouses Broadcom and Marvell Technology Group come into play: Does Broadcom’s AI Event Spell Trouble For Nvidia Stock? (AVGO & NVDA)
For years, we’ve owned both Broadcom and Marvell. But we made a change earlier this year and consolidated to just Broadcom. We explained why a couple months ago in this video: Is Marvell’s AI Upside A Mirage? Chip Stock Investor’s Better Buy For June 2024
Broadcom has been at the heart of designing some top custom data center chips, including more than a helping hand with Google’s TPUs since 2015.
Note in the graphic above the deep integration with Amazon on CPUs and NICs (network interface cards), and more recently, “accelerators” — you know, the “AI infrastructure” Wall Street is losing its mind over right now.
It’s possible that Amazon is also getting some help from Marvell, although these specific design relationships aren’t explicitly named and explained.
Nevertheless, this is why we frequently call out companies like Broadcom, Marvell, and increasingly Nvidia too, in our Semi Industry Flow Chart as being especially powerful due to their IP portfolios that are so fundamental to big tech. See IP (patents) listed in the top right of the flow?
Jassy explains the reason for custom AI chip design
Deciding to co-design and manufacture (likely using Taiwan Semiconductor Manufacturing, TSMC) a custom semiconductor is no small endeavor. But AWS is massive, the leader in cloud infrastructure, and knows exactly what its customers need. So designing expensive chips like this makes sense.
During the Q2 2024 earnings call Q&A, Jassy explained a bit of the history of Graviton, and how it has led to more recent work on accelerated computing chips. We’ve pulled the quotes and cleaned them up below for readability. We believe this is really important commentary if you want to understand the cloud infrastructure foundation that accelerated computing and AI is now being built on top of.
A brief detour before more from Jassy: it sounds like much of the work up to this point has been on Graviton (now in its 4th generation, so these chips are maturing in their capabilities) and, to a lesser extent, Trainium (the AI training data-crunching accelerator chips, which makes sense given that’s where companies need to start on their AI journey).
In the coming years, as nearly every other tech company has been telling us, AI inference is going to be a big growth market. That’s after an AI model has been trained and it begins to be used within an application. Thus, AWS Inferentia chips (currently in their 2nd generation) could be a major tailwind for AWS revenue in the next decade.
Ok, back to those Jassy quotes:
Is Amazon AWS going to be an “Nvidia killer?”
Does this mean Amazon AWS and its Trainium and Inferentia chips could emerge as top “Nvidia killers” down the road? Likely not. But it does make a lot of financial sense for cloud infrastructure providers like AWS to deploy some of the capital generated by their epic scale (again, AWS leads at a $105 billion annualized revenue run-rate) to custom design exactly what their customers need, so those customers get optimal long-term price-performance and operating expense.
This is why Nvidia is of course also looking at doing some custom chip work of its own, like what Broadcom and Marvell are already doing. In fact, we’d bet Nvidia has been doing some of this all along, but is just a bit tight-lipped about it. https://youtu.be/iw5zO3zpWYE
More than competing with Nvidia, though, what AWS is doing is competing against its cloud infrastructure peers. Note that this Magic Quadrant from Gartner will need to be overhauled, as much has changed since it was published in March 2023.
As we stated here on Chip Stock Investor in late 2022, and especially starting in early 2023, it’s become clear that Nvidia has leveled the playing field in data center technology. It kicked off an arms race of sorts, and opened the door for new cloud infrastructure entrants (including Oracle Cloud) and pushed leaders like AWS to double down on their infrastructure technology.
We maintain this isn’t an “AI bubble” yet, but that could change. In the meantime, Amazon is working hard on improving its own profit margins while simultaneously growing alongside its customers. At about 30x this year’s expected free cash flow (FCF), and with earnings expected to compound at over 40% annually for the next two years, this looks like a really reasonable price tag right now.
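A quick illustration of why that multiple looks reasonable, using only the round figures from the text (a 30x FCF multiple and ~40% annual growth; illustrative assumptions, not a forecast):

```python
# If a stock trades at 30x this year's free cash flow and FCF compounds
# at ~40% annually for two years (figures from the text, not a forecast),
# here is the implied forward multiple if the share price stayed flat.
current_multiple = 30.0
growth_rate = 0.40

fcf = 1.0                      # normalize this year's FCF per share to 1
price = current_multiple * fcf # flat share price throughout

for year in (1, 2):
    fcf *= 1 + growth_rate
    print(f"Year {year}: implied multiple at flat price = {price / fcf:.1f}x")
```

In other words, at a flat share price the multiple compresses from 30x toward the mid-teens within two years, which is the sense in which fast growth makes today’s “price tag” look reasonable.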
Amazon stock is one of our core holdings and one of our top stock picks of 2024, and we’re happy to keep it that way for the foreseeable future. The current valuation would seem fair even if AWS were all we were buying, with the improving e-comm business tossed in as almost a freebie.