Following Nvidia’s (NVDA) incredible quarter and sharp guidance raise, we think shares of the AI chip powerhouse could gain another 14% from record highs over the next six to nine months. That’s generally our time horizon for a Club price target, which we raise on Nvidia to $450 per share from $300. We maintain our 2 rating on the stock, indicating we would like to wait for a pullback before buying more. No kidding, right? Nvidia closed at $305 a share on Wednesday ahead of blowout after-the-bell results that pushed shares up nearly 29% to Thursday’s all-time high of $394.80. The company is now knocking on the door of the $1 trillion market cap club. Jim Cramer, an Nvidia supporter since at least 2017, recently designated it as the Club’s second “own it, don’t trade it” stock. (Apple was the first.) Jim even renamed his dog Nvidia.

Our new $450-per-share price target on Nvidia is about 45 times earnings estimates for fiscal year 2025 (roughly calendar year 2024). Nvidia has an unusual fiscal calendar, and on Wednesday night it reported results for the first quarter of fiscal 2024. While 45 times isn’t cheap on a valuation basis, at just over twice the S&P 500’s current multiple, it’s only slightly above the roughly 40 times average multiple investors have placed on the stock over the past five years. In our view, it’s more than justified when we consider the runway for growth that Nvidia has ahead of it. That’s what we’re seeing on Thursday, as this latest round of upward revisions to estimates serves as a reminder that Nvidia has, more often than not, turned out to be cheaper (more valuable) than initially assumed, because analysts have consistently been too conservative about the potential of Nvidia’s disruptive technology, now on full display as the company stands as the undisputed leader in chips that power artificial intelligence.
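The target math above can be sanity-checked in a few lines. Note that the implied earnings-per-share figure below is simply derived from the stated 45-times multiple, not a number Nvidia has reported:

```python
# Sanity-check the Club's price-target arithmetic (illustrative only;
# the implied EPS is backed out of the stated multiple, not reported).
record_high = 394.80   # Thursday's all-time high, per the article
price_target = 450.00  # new Club price target
forward_pe = 45        # target expressed as a multiple of FY2025 estimates

upside = price_target / record_high - 1   # further gain from the high
implied_eps = price_target / forward_pe   # FY2025 EPS the target assumes

print(f"Upside from record high: {upside:.1%}")   # ~14%
print(f"Implied FY2025 EPS: ${implied_eps:.2f}")  # $10.00
```

The roughly 14% figure is exactly the additional gain from record highs cited at the top of the piece.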
[Chart: Nvidia’s (NVDA) five-year stock performance]

Jim has been singing the praises of Nvidia CEO Jensen Huang for years, not to mention the many graphics processing unit (GPU) technologies already in use that allowed the company to capitalize on the explosion of AI into consumer consciousness when ChatGPT went viral this year. On Wednesday evening’s post-earnings call, management made it clear that they see things getting even better later this calendar year. While not officially releasing guidance beyond the current quarter, the team said demand for generative AI and large language models “has increased our data center visibility by a couple of quarters and we’ve gotten significantly higher supply for the second half of the year.” Simply put, management seems to indicate that results in the second half of the year will be even better than in the first half. The demand they are talking about is broad-based, coming from consumer internet companies, cloud service providers, enterprise customers and even AI-focused start-ups. Keep in mind that Nvidia’s first-ever data center central processing unit (CPU) is due later this year, with management noting that “At this week’s International Supercomputing Conference in Germany, the University of Bristol announced a new supercomputer based on the Nvidia Grace CPU Superchip, which is six times more energy efficient than the previous supercomputer.” Energy efficiency is a major selling point. As we saw in 2022, energy represents a major input cost of operating a data center, so anything that can be done to reduce those costs will be very attractive to customers looking to increase their own profitability. The Omniverse Cloud is also on track to be available in the second half of the year. At a higher level, management spoke on the call about the need for the world’s data centers to go through a significant upgrade cycle to meet the computing needs of generative AI applications, such as OpenAI’s ChatGPT.
(Microsoft, also a Club name, is a major backer of OpenAI and is using the start-up’s technology to power its new AI-enhanced Bing search engine.) “The data centers of the world are moving toward accelerated computing,” Huang said Wednesday night. That’s $1 trillion in data center infrastructure that needs to be refreshed because it’s almost entirely CPU-based, which, as Huang pointed out, means “it’s essentially not accelerated.” However, with generative AI clearly becoming a new standard and GPU-based accelerated computing being so much more energy efficient than non-accelerated CPU-based computing, data center budgets will, as Huang put it, “have to shift very dramatically towards accelerated computing and you see that now.” As noted in our guide to how the semiconductor industry works, the CPU is basically the brain of a computer, responsible for fetching instructions and inputs, decoding those instructions and executing the operations that yield the desired result. GPUs, on the other hand, are more specialized and can handle many tasks at once. While a CPU processes data sequentially, a GPU will break a complex problem down into many small tasks and execute them simultaneously. Huang continued that as we move forward, data center customers’ capital spending budgets will be heavily focused on generative AI and accelerated computing infrastructure. So over the next five to 10 years, we should see what is now about $1 trillion, and growing, of data center budgets shift very sharply in Nvidia’s favor as cloud providers look to it for accelerated computing solutions. In the end it’s actually very simple: all roads lead to Nvidia. Every major company is migrating workloads to the cloud, be it Amazon Web Services (AWS), Microsoft’s Azure or Google Cloud, and the cloud providers all rely on Nvidia to power their offerings.

Why Nvidia?
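Part of the answer lies in that sequential-versus-parallel distinction. The toy sketch below illustrates the idea in Python; a plain loop plays the role of a CPU handling items one at a time, while a thread pool loosely stands in for a GPU splitting the same problem into many small, independent tasks. Real GPU code is not written this way, so treat this purely as an analogy:

```python
# Toy sketch of sequential (CPU-style) vs. parallel (GPU-style) work.
# The thread pool merely stands in for parallel execution units; this
# is an analogy, not how actual GPU kernels are programmed.
from concurrent.futures import ThreadPoolExecutor

def small_task(x: int) -> int:
    return x * x  # one tiny, independent unit of work

data = list(range(1_000))

# "CPU-style": process each element one after another.
sequential = [small_task(x) for x in data]

# "GPU-style": hand the many small tasks out to run concurrently.
with ThreadPoolExecutor(max_workers=8) as pool:
    parallel = list(pool.map(small_task, data))

assert sequential == parallel  # same answer, different execution model
```

The payoff is that when the small tasks are independent, as in the matrix math underlying AI models, throwing thousands of parallel units at them is dramatically faster and more energy efficient than stepping through them one by one.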
Huang noted during the call that Nvidia’s core value proposition is that it is the lowest total-cost-of-ownership solution. Nvidia excels in several areas that make it so. First, it offers a full-stack data center solution. It’s not just about having the best chips; it’s also about developing and optimizing the software that enables users to get the most out of the hardware. On the conference call, Huang touted a networking stack called DOCA and an acceleration library called Magnum IO, noting that “these two pieces of software are some of the crown jewels of our company.” He added: “No one ever talks about it because it’s hard to understand, but it allows us to connect tens of thousands of GPUs together.” It’s not just about a single chip; Nvidia excels at optimizing the architecture of the entire data center, the way it’s built from the ground up so that all the parts work together. As Huang put it, “it’s another way of thinking that the computer is the data center or that the data center is the computer. It’s not the chip. It’s the network operating system, your distributed computing engines, your understanding of the architecture of the network equipment, the switches and the computer systems, the computer structure, that whole system is your computer, and that’s what you’re trying to operate, and so to get the best performance, you have to understand the full stack, you have to understand the scale of data centers, and that’s what accelerated computing is.” Utilization is another important part of Nvidia’s competitive advantage. As Huang noted, a data center that can only do one thing, even if it does that one thing incredibly fast, will be underutilized. Nvidia’s “universal GPU,” however, is capable of many things, again thanks to those huge software libraries, and allows for much higher utilization rates. Finally, there’s the company’s data center expertise.
During the call, Huang discussed the issues that can arise when building a data center, noting that for some customers a build can take up to a year. Nvidia, on the other hand, has honed the process: instead of months or a year, he said, Nvidia can measure delivery times in weeks. That’s a key selling point for customers who want to stay on the cutting edge of technology, especially as we enter this new era of AI with so much market share now up for grabs.

The Bottom Line

As we look to the future, it’s important to remember that while ChatGPT was an eye-opening moment, an “iPhone moment” as Huang put it, we’re just at the beginning. The excitement about ChatGPT isn’t so much about what it can already do, but about the fact that it’s a proof of concept of what’s possible. The first-generation iPhone, released 16 years ago next month, was nowhere near what we have today. But it showed people what a smartphone could really be. What we have now, to extend the metaphor, is the original first-generation iPhone. If you’re going to own Nvidia and not trade it, as we intend, then as impressive as generative AI applications already are, you need to think less about what we have now and more about what this technology will be capable of when we get to the “iPhone 14 versions” of generative AI. That’s the really exciting (and slightly creepy) reason to hold onto shares of this AI-enabled juggernaut. (Jim Cramer’s Charitable Trust is long NVDA, MSFT, AMZN, AAPL and GOOGL. See here for a full list of the stocks.) As a subscriber to the CNBC Investing Club with Jim Cramer, you will receive a trade alert before Jim makes a trade. Jim waits 45 minutes after sending a trade alert before buying or selling a stock in his charitable trust’s portfolio. If Jim has talked about a stock on CNBC TV, he waits 72 hours after issuing the trade alert before executing the trade.
THE ABOVE INVESTING CLUB INFORMATION IS SUBJECT TO OUR TERMS AND CONDITIONS AND PRIVACY POLICY, TOGETHER WITH OUR DISCLAIMER. NO FIDUCIARY OBLIGATION OR DUTY EXISTS, OR IS CREATED, BY VIRTUE OF YOUR RECEIPT OF ANY INFORMATION PROVIDED IN CONNECTION WITH THE INVESTING CLUB. NO SPECIFIC OUTCOME OR PROFIT IS GUARANTEED.
[Photo: Nvidia CEO Jensen Huang in his usual leather jacket. Source: Getty]