The C.R.E.A.M. Report
The Google TPU Versus Nvidia GPU Is All Smoke...
The Lemon Wasn't The Squeeze...


Above Average Info For The Average Joe…
WHEN INVESTING BECOMES A LIFESTYLE YOU WEAR IT!
NEW MERCH ALERT - WEALTHY RED - Limited Quantities - Grab Yours Today! CLICK HERE

$25 EACH WEALTHY RED…BUY 2 GET 1 FREE
I am building a list for my next masterclass in February. If you would like to be in the next masterclass, send your name and number. The class is free, runs over four days, and covers financial building blocks, emotional IQ, fundamental stock analysis, and macroeconomics.
Here are some of the testimonials:
Testimonial #1
I recently took Kevin Davis’s masterclass, “Investments Dojo,” and it genuinely shifted the way I approach investing. The section on emotional intelligence was exactly what I was missing — it helped me understand how much my reactions, biases, and mindset influence my decisions.
The stock research portion was equally powerful. Kevin didn’t just teach strategies; he exposed the blind spots I didn’t even realize I had. It made me see how much I didn’t know I didn’t know—and that alone was worth the class.
The entire masterclass was packed with value and gave me a clearer, more disciplined perspective on investing. I highly recommend it to anyone serious about leveling up their investment skills and mindset.
Casey Thompson
Testimonial #2
Class is in session! The Investment Dojo Masterclass is a great way to jumpstart your investment journey, to revisit some topics that are no longer in the front of your mind or to overcome inertia. It gives a window into investing that touches on micro and macro economics, emotional intelligence, insurance, market indices and company filings.
I took economics in high school and college and I am surprised how much I was not using basic economics in my approach to investing.
The masterclass covers emotional intelligence and how to remain calm during market downturns. I must say I learned this lesson a few years ago. It is an expensive lesson if you learn it the hard way.
The class also covers information on types of insurance and what you need to do to secure your family's future. It is definitely helpful whether you have a young family or if you are nearing or in retirement.
In the master class, you learn to look at the big picture and where to go to find actionable information.
Kevin explains how finding out the "why" is just as important as the "what" when you are trying to figure out what is going on in the market. I recommend signing up the next time it is available. For me it helped me overcome inertia.
I knew there were some things I needed to do but was putting off. The class gave me the nudge I needed to get in gear.
Al Mercadel
Testimonial #3
Recently, after watching the Investment Dojo for about a year and reading The C.R.E.A.M. Report for approximately six months, I was fortunate enough to have the opportunity to participate in a week-long Investment Dojo masterclass. Our leader, Kevin, began sowing the seeds for both seasoned and newbie investors on how to be successful in the stock market by analyzing both the qualitative and quantitative properties of the stocks that we are choosing to invest in.
As a long-term daily options and derivatives trader, having the opportunity to learn what I didn't know was too good to pass up, so thankfully I took the leap and was not disappointed. The amount of knowledge shared by Kevin and The Investment Dojo in this past week should change this group's life. I'd highly recommend The Investment Dojo and Kevin to my closest friends and family.
David Gillum



AI: The New Cold War…
Analysts seem to leave out the fact that AI demand is not just domestic but global. It isn’t just companies racing for AI dominance; countries are fighting to keep their economies technologically relevant.
During the late Cold War, the US and USSR together were effectively spending the inflation‑adjusted equivalent of roughly $1–1.5 trillion a year on their militaries at peak, while today the “AI race” is on track to demand hundreds of billions per year in global AI infrastructure capex and several trillion cumulatively by 2030.
The dollar amounts are moving into the same rough galaxy, but the Cold War burden was a much larger share of GDP and far more concentrated in just two players, whereas AI spending is more distributed across governments, hyperscalers, and enterprises worldwide.
Cold War arms race spending in today’s dollars:
• U.S. defense spending: Historical analysis shows that from 1950–1990, US defense spending ranged roughly from 4.5% to 10% of GDP, with peaks above 8–9% during the Korean and Vietnam Wars and around the early 1980s Reagan build‑up.
• One recent review points out that current US defense spending, at about $900+ billion in 2023, actually exceeds Cold War peak levels in inflation‑adjusted dollars but represents only about 3–3.5% of GDP, much lower than during the height of the arms race.
To anchor concrete numbers:
• USAFacts notes that in 1980, US defense spending was about $506 billion in 2023‑adjusted dollars, rising to roughly $820 billion by 2023—an inflation‑adjusted increase of about 62% over that period.
• During the late 1980s, with US GDP around $5–6 trillion in then‑current dollars, a roughly 6%‑of‑GDP defense burden implies on the order of $300–360 billion per year in then‑current dollars, which scales to several hundred billion in today’s dollars and is broadly consistent with the inflation‑adjusted figures above.
The Soviet side is harder to pin down, but declassified and academic work converge on a brutally high burden:
• CIA and economic studies estimate the USSR devoted roughly 12% of GNP to defense in 1970, rising toward 18–21% by around 1980–85.
• A survey of estimates summarizes Soviet military spending as roughly 10–20% of GDP, with a mid‑teens compromise of around 15%—roughly double the US share.
If you translate that into today’s scale:
• Using a late‑Cold‑War Soviet GDP on the order of the low‑single‑digit trillions in today’s dollars, a 15–20% burden implies annual military spending equivalent to perhaps $600–800+ billion in 2020s dollars at peak.
• Combined with US spending, that suggests the core superpowers were together burning through the inflation‑adjusted equivalent of easily $1–1.5 trillion per year (or more) on the arms race by the 1980s.
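If you want to sanity-check that combined number yourself, here is a quick back-of-envelope sketch in Python. It uses only the rough figures cited above, plus one labeled assumption (a Soviet GDP of about $4 trillion in today’s dollars, i.e. the “low-single-digit trillions” mentioned above); this is illustrative, not precise data.

```python
# Back-of-envelope: late-Cold-War superpower defense spending in today's dollars.
# Inputs are the rough figures cited above; purely illustrative, not precise data.

# USSR: 15-20% of a GDP assumed here at ~$4 trillion in today's dollars
# (the "low-single-digit trillions" mentioned above).
soviet_gdp_billions = 4000
ussr_low, ussr_high = 0.15 * soviet_gdp_billions, 0.20 * soviet_gdp_billions  # ~$600-800B

# US: between the ~$506B 1980 figure (2023 dollars) and a Cold War peak that
# stayed below today's ~$900B, call it roughly $500-800B per year.
us_low, us_high = 500, 800

print(f"USSR:     ~${ussr_low:.0f}-{ussr_high:.0f} billion per year")
print(f"US:       ~${us_low}-{us_high} billion per year")
print(f"Combined: ~${us_low + ussr_low:.0f}-{us_high + ussr_high:.0f} billion per year")
# Combined comes out around $1.1-1.6 trillion per year, which is why the
# "$1-1.5 trillion (or more)" figure above is a reasonable round number.
```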
The key point: the Cold War arms race was not just expensive in absolute terms, it was crushing as a share of national output—especially for the USSR, where defense grabbed close to a fifth of the economy by some estimates.
Today’s AI race: annual and cumulative spend
The AI race does not show up as neatly on a single national budget line, but several recent analyses give order‑of‑magnitude numbers for global AI infrastructure.
• McKinsey estimates that meeting AI‑driven compute demand will require around $5.2 trillion into AI‑related data centers by 2030, within a broader projection of about $6.7 trillion into data centers worldwide when all compute demand is included.
• Deloitte notes that just eight leading hyperscalers expect a 44% year‑over‑year increase to about $371 billion in 2025 for AI data centers and computing resources.
• Another market analysis reports that total data center equipment and infrastructure spending reached about $290 billion in 2024, largely underpinned by hyperscaler capex, with AI now the main driver.
• One industry forecast cited by an investment research note puts Big Tech’s combined AI‑focused capex at roughly $405 billion in 2025, with the cumulative AI data center build potentially running between $3 trillion and $8 trillion through the decade, midpoint around $5.5 trillion.
So in broad terms:
• Annual AI‑related infrastructure capex is heading into the high hundreds of billions per year (roughly a Cold War‑scale budget, but for GPUs and power, not tanks).
• Cumulative AI infrastructure spend through 2030 is forecast in the neighborhood of $5–7 trillion worldwide.
Direct comparison: arms race vs. AI race, putting the two races side by side in very rough, inflation‑adjusted terms.
• Cold War peak superpower defense spend (US + USSR): on the order of $1–1.5+ trillion per year in today’s dollars at the 1980s peak, concentrated mainly in two countries and heavily skewed to military hardware and personnel.
• AI race infrastructure spend today: annual global AI‑driven data center and compute capex approaching $300–400+ billion already, with trajectories and forecasts that push cumulative spend to $5+ trillion by 2030.
Two big differences stand out:
• Cold War: US defense was often 5–10% of GDP for long stretches, while the USSR was devoting perhaps 15–20% of its entire economy to defense, a level most economists see as structurally destabilizing.
• AI race: Today’s AI capex is massive in dollar terms but still well under 1% of global GDP, spread over many firms and countries; it is a strategic priority, not yet a macro‑level war‑time mobilization burden.
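To put that share-of-output gap in concrete terms, here is a tiny illustrative calculation. The inputs are assumptions for the sketch, not reported figures: roughly $400 billion of annual AI capex against global GDP of about $105 trillion.

```python
# Rough share-of-output comparison: AI capex today vs Cold War defense burdens.
# Assumed inputs, for illustration only.

global_gdp_trillions = 105       # assumed global GDP, ~$105T
annual_ai_capex_billions = 400   # assumed annual AI infrastructure capex, ~$400B

ai_share_pct = annual_ai_capex_billions / (global_gdp_trillions * 1000) * 100
print(f"AI capex as a share of global GDP: ~{ai_share_pct:.1f}%")  # ~0.4%

# Versus the Cold War burdens cited above:
# US defense during long Cold War stretches: ~5-10% of US GDP
# USSR defense in the 1980s:                 ~15-20% of Soviet GDP
```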
Concentration vs diffusion:
• Cold War: Spending was tightly concentrated in central governments, especially two superpowers.
• AI: Spend is distributed across US hyperscalers, Chinese tech giants, global enterprises, sovereign data center initiatives, and public funding; the US, EU, China, and others all contribute meaningful slices of the global AI tab.
In simple terms:
• At their peak, the superpowers were dedicating a Cold War “arms race budget” roughly comparable to or larger than what the entire planet will soon be spending annually on AI infrastructure.
• But over this decade, the world is on track to invest multiple arms‑race‑scale sums cumulatively into AI compute, data centers, and supporting infrastructure, with a total tab in the mid‑single‑digit trillions.
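One quick way to see “multiple arms‑race‑scale sums”: divide the forecast cumulative AI build‑out by a rough Cold War annual run rate. A sketch using the midpoints quoted above (illustrative round numbers only):

```python
# How many "peak arms-race years" of spending does the AI build-out represent?
# Midpoint inputs from the figures cited above; illustrative only.

cumulative_ai_capex_trillions = 5.5    # midpoint of the ~$3-8T build-out forecasts through 2030
cold_war_peak_annual_trillions = 1.25  # midpoint of the ~$1-1.5T/year superpower peak

arms_race_years = cumulative_ai_capex_trillions / cold_war_peak_annual_trillions
print(f"Cumulative AI build-out is about {arms_race_years:.1f} peak arms-race years of spending")
# Roughly 4-5 peak arms-race years of spend, compressed into about one decade.
```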
This means we are at the beginning, and any talk of an AI bubble is manipulative Wall Street narrative shifting.
WHEN INVESTING BECOMES A LIFESTYLE YOU WEAR IT!
NEW MERCH ALERT - BILL GATES BLACK - Limited Quantities - Grab Yours Today! CLICK HERE

$25 BILL GATES BLACK BUY 2 GET 1 FREE

The Google TPU Versus Nvidia GPU Is All Smoke…
Wall Street’s latest “DeepSeek” moment is Meta in talks about maybe, possibly, someday spending billions on Google TPUs, and Wall Street immediately sprints to the conclusion:
“Nvidia is finished.” This is the same playbook as every overhyped “Nvidia killer” headline of the past few years—lots of drama, very little understanding of how this market actually works.
Meta is trying to lower its AI bill and diversify suppliers, not file for a divorce from Nvidia. Even Google’s own ambition is framed like, “If everything goes great, maybe we get around 10% of Nvidia’s AI revenue.”
That’s called “second supplier,” not “new monopoly.”
Portability versus being “Google-shaped”
Nvidia’s advantage is simple, and there’s plenty of it: flexibility and portability versus being Google-shaped. The same Nvidia GPUs work pretty much everywhere—Amazon, Microsoft, Google, Oracle, and in tons of smaller clouds and private data centers.
You learn one stack (CUDA and friends), and you can move your models all over the place without rewriting your life.
Google TPUs are the opposite: they live in Google’s world, on Google’s software, with Google’s tools. Once you go heavy TPU, your AI setup starts to look very “Google-shaped,” which makes switching later expensive, annoying, and risky if Google’s roadmap ever stops matching your needs.
So when Meta leans into TPUs, they are basically saying: “We’ll save some money now, in exchange for tying a lot more of our future to Google.” That’s not free alpha; that’s a trade-off.
If it really is just a trade-off for Meta, then why is Google still buying Nvidia GPUs if its TPU is the new 800-pound gorilla on the block?
If Google’s TPUs were truly the one-chip-to-rule-them-all, Google wouldn’t still be one of Nvidia’s biggest customers. Yet it is—because TPUs are great for certain jobs, but the world’s wild zoo of models, tools, research code, and customer workloads still runs best and most flexibly on Nvidia GPUs.
Google Cloud literally makes money renting Nvidia GPUs to customers. Cutting Nvidia too aggressively would mean telling a lot of customers, “Sorry, your stuff doesn’t really run here anymore unless you rewrite it for our special chip.” That is a great way to lose share to clouds that keep Nvidia front and center.
Another fact: Meta still lives on Nvidia. Meta’s beloved Llama family is the poster child for why Nvidia is not going anywhere. Llama has racked up hundreds of millions to over a billion downloads across its versions, with rapid growth in usage and derivatives. And where do people run Llama? On Nvidia GPUs, across clouds and on-prem, because that’s where the capacity and tools are.
Inside Meta, Llama training and a huge chunk of serving have been built around 100% Nvidia hardware. Moving some inference to TPUs to shave costs is sensible; pretending that means “Llama leaves Nvidia” is fantasy.
Meta needs Nvidia for training and a ton of real-world Llama deployments, both internally and across the broader ecosystem.
Hyperscalers are not firing Nvidia
For AWS, Microsoft, Oracle, and even Google, the rational setup looks like this:
• Use their own chips (TPUs, Trainium, Inferentia, Maia) to cut internal costs and serve select big customers.
• Keep Nvidia as the main, flexible, “any model, any framework, any customer” platform that everyone recognizes and trusts.
Nvidia still controls the majority of AI accelerator revenue and shipments, with demand and backlogs that stretch years. CUDA remains the default language and ecosystem for AI, while custom chips are still the “second stack” you use when you’re big enough to justify the extra complexity.
If a hyperscaler tried to “cut Nvidia in favor of TPUs,” they’d face:
• Furious customers who want Nvidia instances for portability and compatibility.
• Massive contract and breakup costs and years of migration risk.
• A weaker competitive position versus clouds that keep Nvidia available.
That is why they add alternatives at the edges instead of ripping out Nvidia from the core.
Google the aggressor: the “DeepSeek moment” that wasn’t
Wall Street tried to replay the DeepSeek scare. We have a “New chip! New model! Panic now, think later!” moment.
The problem is, the facts don’t support the doom narrative:
• TPUs are a serious incremental threat, mostly on inference economics, not an all-out replacement for Nvidia’s training and broad ecosystem.
• Even the bullish TPU talk tops out at taking a slice of Nvidia’s revenue, not erasing it.
• The overall AI market is growing so fast that even with more custom chips, Nvidia’s business is modeled to stay dominant well into the next decade.
So yes, margins can get pressured over time, pricing power can cool a bit, and Nvidia’s growth can slow from “utterly insane” to “merely very strong.” That is not “beginning of the end”; that is “mature monopoly meets real competition and still wins a lot.”
WHEN INVESTING BECOMES A LIFESTYLE YOU WEAR IT!
NEW MERCH ALERT - BUY THE DIP NAVY - Limited Quantities - Grab Yours Today! CLICK HERE

$25 BUY THE DIP NAVY…BUY 2 GET 1 FREE
Quick Links…
Would You Consider Yourself Wealthy…
Not Over Our Dead Body…
I can Feel The Electricity In The Air…
Thank you for reading. We appreciate your feedback—sharing is caring.