Adapted from “The Parallel Processing Revolution” by Porter Stansberry. Much of this is taken directly from his writings, and much is my own, drawn from additional sources that supplement, expand upon, or clarify his work. This is a new format for a post on this blog, so I hope you enjoy reading it. Where possible, I’ve linked the sources used in this post to give further background on specific and possibly unfamiliar terms.
Introduction: The Big Bang That Went Unnoticed
A quiet revolution is underway—one that is fundamentally reshaping how industries function, how global supply chains are structured, and how modern economies will grow in the decades ahead. This is not about social media platforms or mobile apps; it runs deeper than that. It is about the transformative power of parallel processing, a technological leap that is redrawing our digital landscape.
It is the Parallel Processing Revolution.
Porter Stansberry’s detailed analysis, The Parallel Processing Revolution, outlines how a shift in computing architecture has triggered a massive change in how data is processed. This shift—from serial to parallel processing—is enabling the rise of artificial intelligence, transforming data center infrastructure, and creating new global chokepoints around chip production.
This post draws directly from Stansberry’s work while integrating additional context from related trends in US–China trade, the AI infrastructure buildout, and the investment landscape for digital infrastructure—all of which are covered in our previous posts.
Why Serial Computing Hit a Wall
For over 50 years, computing advanced steadily in step with Moore’s Law, which predicted that the number of transistors on a chip would double roughly every two years. This drove exponential growth in CPU performance as processors became faster, more compact, and more energy-efficient. From the Intel 4004 in 1971 to the multi-core chips of the 2000s, progress followed a simple formula: better chips meant faster software.
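To make that cadence concrete, here is a minimal Python sketch of the compounding Moore’s Law implies, assuming the Intel 4004’s roughly 2,300 transistors as the 1971 starting point. The figures are idealized projections, not actual chip counts.

```python
# A rough sketch of Moore's Law: transistor counts doubling every two years,
# starting from the Intel 4004's roughly 2,300 transistors in 1971.
# (Starting count and cadence are approximations for illustration.)

def moores_law_estimate(year: int, base_year: int = 1971,
                        base_transistors: int = 2_300,
                        doubling_period_years: float = 2.0) -> float:
    """Estimate transistor count for a given year under ideal doubling."""
    doublings = (year - base_year) / doubling_period_years
    return base_transistors * 2 ** doublings

for year in (1971, 1981, 1991, 2001, 2011):
    print(f"{year}: ~{moores_law_estimate(year):,.0f} transistors")
```

Run it and the count climbs from 2,300 to over two billion by 2011—the compounding that made “better chips meant faster software” true for four decades.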
However, by the early 2010s, this trajectory began to falter. Transistor miniaturization hit physical and thermal walls. As transistors shrank to single-digit nanometers, leakage currents, quantum tunneling, and heat dissipation became significant challenges. Adding more transistors no longer delivered proportional performance gains, and clock speeds plateaued around 3–4 GHz.
Simultaneously, computing needs exploded. The rise of artificial intelligence (AI), machine learning (ML), edge computing, and real-time data analytics ushered in workloads that required massive parallelism and throughput. CPUs—optimized for sequential instruction execution—struggled under these new demands.
For instance:
- Training AI models such as GPT and BERT requires processing billions of parameters across massive datasets.
- Edge devices need low-latency computing power at the periphery of the network, which traditional server-centric models can’t efficiently provide.
- High-frequency trading, autonomous vehicles, and predictive maintenance systems depend on processing large volumes of data in real-time—far beyond what serial computing can support alone.
This inflection point gave rise to parallel processing, in which thousands—or even millions—of operations occur simultaneously. It’s no longer about making a single processor faster but about orchestrating thousands of them to work together. This architectural shift paved the way for GPUs, tensor cores, and the specialized accelerators that now form the backbone of modern computing infrastructure.
Enter: parallel processing.
For decades, central processing units (CPUs) were the workhorses of computing. They handled operations sequentially—executing one task at a time, albeit rapidly. This serial approach worked well during the era of spreadsheets, databases, and early internet applications. But as demand for computing power exploded—driven by AI, big data, and real-time analytics—this model hit its ceiling.
Rather than tackling tasks in order, parallel computing allows thousands—or even millions—of tasks to be executed simultaneously. This architectural shift enables exponential increases in speed and efficiency, particularly for workloads that demand vast amounts of computational power.
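A toy Python sketch illustrates the difference: the same stand-in workload run first one task at a time, then split across worker processes. The task and sizes are illustrative, not a benchmark.

```python
# A minimal sketch of the serial-vs-parallel idea: the same workload run
# one task at a time, then split across worker processes.
import time
from concurrent.futures import ProcessPoolExecutor

def heavy_task(n: int) -> int:
    """A stand-in for a compute-heavy job (sum of squares up to n)."""
    return sum(i * i for i in range(n))

def run_serial(jobs):
    return [heavy_task(n) for n in jobs]          # one after another

def run_parallel(jobs):
    with ProcessPoolExecutor() as pool:           # many at once
        return list(pool.map(heavy_task, jobs))

if __name__ == "__main__":
    jobs = [2_000_000] * 8
    t0 = time.perf_counter(); run_serial(jobs)
    t1 = time.perf_counter(); run_parallel(jobs)
    t2 = time.perf_counter()
    print(f"serial:   {t1 - t0:.2f}s")
    print(f"parallel: {t2 - t1:.2f}s")
```

GPUs push this same idea much further, spreading work across thousands of cores rather than a handful of CPU processes.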
So, what technology powers this leap?
The answer lies in the Graphics Processing Unit (GPU). Originally engineered to render complex video game graphics in real time, GPUs were developed to handle the intense mathematical calculations required to draw and manipulate images on a screen. Each frame in a 3D video game, for instance, involves millions of calculations—lighting effects, textures, shading, depth, and motion—all processed in parallel to ensure smooth, immersive visuals.
To perform these operations efficiently, GPUs were designed with massively parallel architectures—thousands of small, efficient cores working in tandem. While CPUs handle a handful of complex tasks sequentially, GPUs break large problems into many smaller tasks and process them simultaneously.
This made GPUs ideal not just for graphics, but for any application that benefits from parallelization, and nowhere is that more critical today than in artificial intelligence. Training large AI models like GPT or BERT involves processing billions of data points and adjusting millions of parameters across multiple layers of a neural network. These are inherently parallel tasks—perfectly suited for the GPU’s design.
Nvidia’s strategic decision in the mid-2000s to open up its GPUs to developers outside the gaming industry (through its CUDA programming platform) unlocked a tidal wave of innovation. What was once a niche product for gamers quickly became the core engine for deep learning, high-performance computing (HPC), and complex scientific simulations.
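As a hedged illustration of what CUDA unlocked, the sketch below runs the same matrix multiplication on the CPU with NumPy and on the GPU through CuPy, an open-source NumPy-like library built on CUDA. It assumes CuPy and an Nvidia GPU are available, and it is a sketch rather than a benchmark.

```python
# The same matrix multiply on CPU (NumPy) and GPU (CuPy, built on CUDA).
# Assumes the optional CuPy library and an Nvidia GPU are installed.
import numpy as np

n = 4096
a = np.random.rand(n, n).astype(np.float32)
b = np.random.rand(n, n).astype(np.float32)

c_cpu = a @ b  # CPU path: a few powerful cores

try:
    import cupy as cp  # thin NumPy-like wrapper over CUDA
    a_gpu, b_gpu = cp.asarray(a), cp.asarray(b)   # copy data to the GPU
    c_gpu = a_gpu @ b_gpu                         # thousands of cores in parallel
    cp.cuda.Stream.null.synchronize()             # wait for the kernel to finish
    print("max difference:", float(cp.abs(c_gpu - cp.asarray(c_cpu)).max()))
except ImportError:
    print("CuPy not installed; GPU path skipped.")
```

The point is the programming model: the developer writes ordinary array math, and CUDA handles spreading it across the GPU’s cores—exactly the accessibility that pulled non-gaming developers onto Nvidia hardware.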
Today, GPUs are at the heart of data centers, self-driving car systems, real-time fraud detection platforms, and even military defense networks. Their original purpose—powering video games—now serves as the basis for the most transformative technologies of the 21st century.
The implications extend far beyond performance. Parallel processing is reshaping chip design, energy demands, supply chains, and even national security policy. It’s no longer just a tech innovation—it’s a macroeconomic and geopolitical force, and the basis for modern technological advances that should improve the lives of billions of people in the coming years.
The Silent Catalyst: Nvidia’s Strategic Inflection Point
As detailed in The Parallel Processing Revolution, Nvidia’s role in this shift cannot be overstated. What began as a 3D graphics card company evolved into a $3 trillion enterprise and the undisputed leader in high-performance computing.
The key moment came in 2006, when Nvidia launched CUDA (Compute Unified Device Architecture), a platform that unlocked the power of GPUs for developers beyond gaming. Then, in 2012, a team of researchers at the University of Toronto used Nvidia GPUs to train AlexNet, a deep-learning model that decisively outperformed all competitors in the ImageNet challenge.
That moment, barely noticed at the time, was the Big Bang of modern AI.
From there, Nvidia moved aggressively. It supplied GPUs to Google, Meta, Baidu, Microsoft, and every major hyperscaler [a company that provides massive-scale cloud computing infrastructure and services, enabling businesses to run applications, store data, and perform high-level computing tasks efficiently across the globe—examples include Amazon Web Services, Microsoft Azure, and Google Cloud Platform], enabling them to build AI models. Today, its processors power everything from autonomous vehicles to defense systems to the generative AI tools redefining search, marketing, and operations.
In our earlier blog post, The Data Center Gold Rush, we explored how Nvidia’s advancements have driven demand for next-generation data center infrastructure—an effect that continues to accelerate.
Watch for a deep‑dive blog post on Nvidia and its investment implications.
The Supply Chain Underpinning Parallel Processing
Nvidia’s GPUs are powerful, but they are not created in isolation. Three other companies make Nvidia’s dominance possible:
ASML: The Monopoly in Lithography
ASML controls 100% of the market for EUV (extreme ultraviolet) lithography machines, essential for producing sub-10-nanometer chips. Each machine costs ~$380 million, includes over 100,000 components, and ranks among the most complex machines ever built. Without ASML, the semiconductor revolution stalls. Their moat is real, mechanical, and irreplaceable.
TSMC: The Geopolitical Linchpin
Taiwan Semiconductor Manufacturing Company (TSMC) produces nearly 90% of the world’s most advanced chips. Companies like Nvidia, Apple, and AMD rely on TSMC to turn designs into physical chips.
However, TSMC’s location—on an island that China considers its own—has created geopolitical risk on a scale we examined in “The U.S.–China Trade Deal 2025 Analysis.” A conflict over Taiwan would trigger a global economic shock that dwarfs anything seen during COVID-19.
TSMC is a single point of failure in the global economy. Currently, it has no replacement.
Arm Holdings: The Blueprint Provider
Arm doesn’t build chips—it licenses designs. Nearly every major mobile and edge chip—and Nvidia’s own Grace CPU—is based on Arm architecture.
Arm’s IP licensing model yields unmatched capital efficiency, underpinning the computing revolution with extremely high margins and recurring revenue.
Watch for a deep‑dive blog post on these three companies and their investment implications in the coming weeks.
The Infrastructure and Energy Backstop
Parallel processing, while transformative, comes at a cost—specifically, energy consumption.
As data centers deploy Nvidia GPUs at scale, their power draw rises sharply. Training AI models is not energy-efficient; it requires constant, intensive operation.
This increase in demand has sparked renewed interest in foundational energy sources—coal, gas, and nuclear—not for political or environmental reasons but because these sources provide consistent base-load electricity. As detailed in The $500 Billion Stargate Project, energy and computing power are now permanently linked.
The Energy Titans Powering Data Centers
As data center workloads increase, so do their energy requirements. This need for power has led to an unexpected surge in select coal, gas, and grid infrastructure companies.
Consol Energy (NYSE: CEIX) – The King of Coal
Coal isn’t dead. Consol is thriving. Its high-BTU coal is favored by utilities serving data-center loads, especially those that require reliable base-load power. Unlike wind or solar, coal generation can operate 24/7—which is essential for GPUs training AI models continuously. Consol’s pricing power has grown in tandem with computing demand, and its free cash flow yield remains among the best in the energy sector.
EQT Corporation (NYSE: EQT) – The God of Gas
EQT is the largest natural gas producer in the US, sitting atop the Marcellus and Utica shale basins and powering many peaking power plants that now supply data centers. Natural gas remains critical for grid reliability—especially as renewables expand but storage lags. EQT’s role will become even more prominent as AI workloads demand 24/7 availability and energy stability.
Watch for a deep‑dive blog post on these two companies and their investment implications in the coming weeks.
The Grid Infrastructure You Can’t Ignore
Generating power is only part of the equation. That energy must be distributed securely and efficiently. That’s where industrial backbone companies come in.
Atkore Inc. (NYSE: ATKR) – The Industrial Plumber
Atkore manufactures electrical conduit systems—metal piping used to route high-voltage cabling. These systems are essential for data center construction and grid reinforcement projects. AI-driven infrastructure spending has expanded Atkore’s total addressable market. Data centers aren’t just buildings—they are hardened electrical environments [resilient, surge-protected power infrastructure designed for mission-critical loads]. Atkore’s dominance in electricity “plumbing” is critical to bringing GPU-powered systems online.
BWX Technologies (NYSE: BWXT) – The Nuclear Supplier
BWXT supplies nuclear reactors, components, and fuel systems—primarily to the US Navy and the Department of Energy. Interest in zero-carbon, base-load nuclear power is growing, and small modular reactors (SMRs) are recognized as a promising answer to rising electricity demand. Despite challenges, Reuters reports that SMRs “could help meet an expected surge in U.S. electricity demand coming from the expansion of power-hungry data centers.” BWXT is developing advanced nuclear technologies, such as the BANR microreactor, designed to deliver reliable, clean power to campuses, industrial sites, and data centers.
Watch for a deep‑dive blog post on these two companies and their investment implications in the coming weeks.
Reuters also reports that data center energy demand is fueling a race to build out SMR supply chains, with developers working to establish nuclear fuel sources ahead of the anticipated electricity demand from AI-driven data centers.
AI infrastructure isn’t a five-year buildout; it’s a 50-year transformation. BWXT’s positioning could prove strategic as governments balance environmental goals with reliability.
Microreactors: Compact Power for the Future of Data Centers
Microreactors are compact nuclear power plants that generate 1–20 MWe of electricity—100 to 1,000 times smaller than traditional reactors. They’re modular, transportable, and designed to operate for years without refueling, making them ideal for powering remote sites, military bases, and, increasingly, data center campuses with reliable, carbon-free energy.
Their small size and rapid deployment potential make them a promising complement to larger small modular reactors (SMRs). As data centers demand persistent and resilient power, microreactors could offer on-site, off-grid energy solutions, reducing dependency on grid infrastructure and backup fuel systems.
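To put those megawatts in context, here is a back-of-envelope Python sketch of how many AI accelerators a single mid-size microreactor might support. The per-GPU draw (about 700 W, roughly an H100-class chip) and the facility overhead multiplier (a PUE of 1.3) are assumed figures for illustration only.

```python
# Back-of-envelope sizing with assumed figures: a 10 MWe microreactor,
# ~700 W per high-end AI accelerator (roughly an H100-class TDP), and a
# power usage effectiveness (PUE) of 1.3 for cooling and overhead.
reactor_output_w = 10_000_000   # 10 MWe, mid-range for a microreactor
gpu_draw_w       = 700          # assumed per-accelerator draw
pue              = 1.3          # assumed facility overhead multiplier

power_per_gpu  = gpu_draw_w * pue              # wall power incl. overhead
gpus_supported = reactor_output_w // power_per_gpu
print(f"~{gpus_supported:,.0f} accelerators on one 10 MWe microreactor")
# -> roughly 11,000 accelerators, before networking and storage loads
```

Even under these rough assumptions, a single microreactor could carry a meaningful GPU campus—which is why data center developers are paying attention.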
Why Microreactors Are a Strategic Fit for Data Center Power Demands
- Scalability: Easily deployed to campus sites with limited local grid capacity.
- Reliability: Designed to run continuously for 5–10 years—ideal for uninterrupted AI and computing operations.
- Clean energy: A low-carbon alternative to diesel generators and peaker plants—power plants that grid operators call on at times of particularly high electricity demand, typically running only during peak periods and usually fueled by natural gas.
Publicly Traded Microreactor Companies
Oklo Inc. [OKLO] | Developing the Aurora microreactor for off-grid, 10–50 MWe power—has agreements for deployment at military bases and data center campuses.
NANO Nuclear Energy Inc. [NNE] | U.S.-listed developer of portable microreactors (ZEUS and ODIN models), targeting campus and edge computing applications.
NuScale Power Corporation [SMR] | While focused on SMRs, its compact 50–77 MWe modules blur the line between microreactors and SMRs—now NRC-certified.
Watch for a deep‑dive blog post on these three companies and their investment implications in the coming weeks.
Microreactors remain at an early stage: regulatory approvals, safety reviews, and capital infrastructure are still in development. However, recent developments suggest momentum.
Oklo has garnered the attention of the Department of Defense and the DOE with its Aurora system, which targets data centers and defense facilities; its stock has surged following defense contracts.
NANO Nuclear recently partnered with Digihost Technology to explore deploying its microreactors in data center campus environments.
The NRC (Nuclear Regulatory Commission) continues the pre-application review of multiple microreactor designs.
Microreactors could serve as on-site, clean, and constant power sources for future data center campuses—offering a path to decarbonization and resilience. Their compact design and modular nature make them uniquely suited to bridge the gap between centralized power and next-generation computing demand.
The Public Utilities at the Center of Demand
While Nvidia and TSMC make headlines, utility providers are becoming silent beneficiaries of the AI revolution.
Constellation Energy (NASDAQ: CEG)
CEG operates America’s largest fleet of zero-emission nuclear plants. That makes it a direct beneficiary of data center energy demand—especially from customers seeking carbon-neutral solutions. Its long-term contracts, regulatory structure, and scale provide resilience. Pricing power is rising in tandem with demand.
Vistra Corp. (NYSE: VST)
Vistra offers a balanced mix of natural gas and renewable generation, with significant capacity in Texas and Illinois—two states with rapidly expanding data center footprints. Vistra is uniquely positioned to capture the upside from growing commercial demand without being constrained by regulated residential markets.
As discussed in [The $500 Billion Stargate Project], utilities like Vistra and Constellation are not only enabling AI-grade infrastructure—they are also emerging as strategic investment targets.
Watch for a deep‑dive blog post on these two companies and their investment implications in the coming weeks.
New Frontiers for Infrastructure and Energy Investment
This infrastructure and energy demand is creating durable, long-duration investment opportunities—especially in firms that build, maintain, and supply the backbone behind AI. These opportunities extend far beyond the usual names, such as Nvidia and TSMC.
As explored in The Tariff Conflict, government incentives and geopolitical risk mitigation are now accelerating massive investment in domestic production and hard assets.
But what does this look like in practice?
Digital Infrastructure Manufacturing
The CHIPS Act and Inflation Reduction Act have triggered a renaissance in U.S.-based production for semiconductors, power systems, and server hardware. New fabrication facilities are breaking ground in Arizona, Ohio, and Texas, signaling long-term capital formation in regional economies, labor markets, and technology clusters.
Grid-Scale Modernization
As AI systems demand stable, 24/7 power, investment is flowing into high-voltage transmission, substation retrofits, and electrical conduit systems. Companies like Atkore and Quanta Services are benefitting directly, supplying the industrial materials that provide the essential infrastructure for digital energy consumption.
Clean Base-Load Energy
We’re entering a transitional phase in the US energy mix. Natural gas remains critical, but the narrative is shifting toward nuclear microreactors, small modular reactors (SMRs), and carbon capture technologies. BWXT, Oklo, and NANO Nuclear are examples of companies working to turn once-niche concepts into scalable commercial deployments.
AI-Centric Real Estate
A new category of industrial real estate is emerging—GPU campuses, AI server farms, and heat-dense colocation facilities [specialized data centers where various businesses can rent space to house their own servers and networking equipment — they are designed to handle extremely high levels of thermal output from modern computing hardware].
These are not traditional hyperscaler data centers. They require custom HVAC, modular nuclear or gas power systems, and specialized materials. This has implications for construction firms, insulation manufacturers, and real estate investment trusts (REITs).
Secure, Hardened Infrastructure
Geopolitical tensions are forcing a rethink of the physical and cyber resilience of critical infrastructure. Firms offering redundant grid pathways, EMP-hardened electrical environments, and edge computing systems with autonomous failover capabilities are seeing increased interest from both commercial and defense sectors.
The Genius Act – Passed Today by the US Senate
The GENIUS Act (Guiding and Establishing National Innovation for U.S. Stablecoins), passed today by the US Senate, is designed to accelerate the adoption of U.S.-backed digital stablecoins as a mainstream payment medium. Its impact extends far beyond financial markets—it’s catalyzing demand for real-time digital financial infrastructure that operates continuously, securely, and at scale.
Stablecoins rely on blockchain validation networks, automated settlement engines, and tokenized banking rails that require constant uptime and ultra-low latency. This infrastructure depends on energy-intensive data centers, especially those hosting blockchain nodes, smart contract execution layers, and regulated digital custody systems.
As stablecoins move from experimental fintech to federally regulated instruments, the underlying computing infrastructure must evolve. Think: energy-hungry, permissioned blockchains hosted in hardened colocation facilities—forcing utilities, real estate developers, and server manufacturers to account for stablecoin-related infrastructure needs alongside traditional workloads.
Implication: Watch for power demand from FinOps (financial operations) platforms to converge with AI computing power, driving multi-sector investment into high-availability, ultra-secure digital infrastructure—particularly in regulated financial jurisdictions.
AI infrastructure is not just about software—it’s about steel, silicon, power, and real estate. This is the industrialization of intelligence, and the companies that form the backbone will be central to capital markets for decades to come.
Watch for a blog post on the Genius Act coming soon, which will provide more detail.
Market Dynamics and Valuation Risk
Nvidia currently trades at a forward P/E multiple that reflects high expectations. Two assumptions underpin this valuation:
- Annual revenue growth near or above 30%.
- Gross margins consistently above 50%.
If either assumption fails—due to customer concentration (Microsoft, Meta, Amazon), rising competition, or manufacturing price pressures—valuations could compress sharply.
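A quick sensitivity sketch in Python shows why the growth assumption matters so much. The revenue index and growth paths are hypothetical illustrations, not estimates for Nvidia.

```python
# How much projected revenue shrinks if an assumed ~30% growth rate slips.
# All figures are hypothetical illustrations, not estimates for Nvidia.
base_revenue = 100.0  # index today's revenue to 100
years = 5

for growth in (0.30, 0.20, 0.10):
    projected = base_revenue * (1 + growth) ** years
    print(f"{growth:.0%} growth for {years} yrs -> revenue index {projected:.0f}")
# A stock priced for the 30% path (index ~371) faces a large gap if
# growth lands nearer 10% (index ~161) - hence the compression risk.
```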
History provides context. Microsoft, the dominant technology company of the late 1990s, lost over 60% of its value during the dot-com crash despite strong fundamentals.
The graph below shows a price chart for Microsoft from 2000 to the present, illustrating the 16 years it took for the stock price to return to its pre-crash level.
Chart courtesy of Michael Lebowitz
On the graph, you can see the pre-dot-com high in 2000 [Microsoft’s share price peaked around $36 in early 2000, adjusted for splits], the subsequent 60% drop to roughly $15, and the long road back for investors who held on until 2016, when the stock finally broke out above that previous high.
Buy and Hold did not work out so well for Microsoft investors employing that strategy – or did it?
Let’s look at different investors:
Investor 1: Bought Microsoft at the $36 high in 2000 and held it until today, producing a 10.9% CAGR. A $1,000 investment here turned into $12,740.
Investor 2: Bought at the 2000 high and sold at the bottom of the crash, producing a total loss of 58.3%. A $1,000 investment here turned into $417.
Investor 3: Bought Microsoft at $36 when it crossed the red line on the graph above and held until today, producing a 26.3% CAGR and a total return of 1,227.5%. A $1,000 investment here turned into $13,268.
Investor 4: Bought at the 2000 high, sold at the bottom of the crash, then repurchased at the bottom of the 2008–2009 financial crisis and held until today, producing a 13.3% CAGR. A $1,000 investment here turned into $13,278.
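For readers who want to check the arithmetic, here is a minimal Python sketch of the CAGR math behind these scenarios. The holding periods are approximations, so the outputs only roughly match the quoted figures.

```python
# The CAGR arithmetic behind the scenarios above. Holding periods are
# approximations, so outputs will only roughly match the quoted figures.
def cagr(begin: float, end: float, years: float) -> float:
    """Compound annual growth rate."""
    return (end / begin) ** (1 / years) - 1

def ending_value(begin: float, rate: float, years: float) -> float:
    """Grow an initial sum at a constant annual rate."""
    return begin * (1 + rate) ** years

# Investor 1: $1,000 at the 2000 peak, held ~25 years at a 10.9% CAGR.
print(f"Investor 1: ${ending_value(1_000, 0.109, 24.5):,.0f}")
# Investor 2: bought at $36, sold at $15 near the bottom.
print(f"Investor 2 return: {15 / 36 - 1:.1%}")  # -> -58.3%
```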
Other than buying at the top and selling at the bottom—something investors managing their own portfolios do all too frequently—all scenarios produced roughly the same ending value.
The lesson learned: It’s not the market’s highs or lows that determine long-term success—it’s the discipline to stay invested, avoid panic-selling, and reenter strategically that sets investors apart.
The Road Ahead
The parallel processing revolution is not a short-term technology cycle. It is a reordering of the physical and digital economy.
It begins with Nvidia’s chips and expands outward—through TSMC’s fabs, ASML’s machines, Arm’s designs, and the energy sources and providers that power them all. These firms—and the infrastructure they depend on—are becoming the core utilities of the 21st century.
Investors, policymakers, and industry leaders must view this through a macro lens. This future is no longer just a story about semiconductors. It is about national security, capital formation, and industrial scale.
In a coming blog post, we will explore Nvidia in greater depth—how its vertical integration, from hardware to software, is building a self-reinforcing economic moat unlike any the tech sector has seen since Microsoft in the 1990s.
What Comes After Parallel Processing?
As parallel computing has matured, researchers and technologists are looking ahead to breakthrough paradigms that could dramatically change computational capabilities and efficiency:
Neuromorphic Computing
Inspired by the brain’s structure, neuromorphic architectures use networks of artificial neurons and synapses to process information. These systems excel at energy-efficient, event-driven tasks, mimicking the operation of biological brains.
This brain-inspired computing architecture offers several advantages:
- Mimics human brain structure and function using spiking neural networks
- Provides event-driven processing for energy efficiency
- Excels in:
  - AI applications
  - Real-time sensory processing
  - Pattern recognition tasks
- Offers inherent parallelism through the simultaneous operation of neurons and synapses
Why it matters: Neuromorphic chips promise ultra-low-power processing for sensory and edge applications, including robotics, IoT, and adaptive AI systems—far beyond conventional parallel models.
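To make “event-driven” concrete, here is a toy Python simulation of a leaky integrate-and-fire neuron—the basic spiking unit that neuromorphic chips implement in hardware. All parameters are illustrative.

```python
# A toy leaky integrate-and-fire (LIF) neuron: the event-driven unit that
# neuromorphic chips emulate in hardware. Parameters are illustrative.
import numpy as np

dt, steps = 1.0, 100                   # ms per step, number of steps
tau, v_thresh, v_reset = 20.0, 1.0, 0.0
current = np.random.uniform(0.0, 0.12, steps)  # noisy input current

v, spikes = 0.0, []
for t in range(steps):
    v += dt / tau * (-v) + current[t]  # leak toward 0, integrate input
    if v >= v_thresh:                  # threshold crossed: emit a spike
        spikes.append(t)
        v = v_reset                    # reset after spiking
print(f"spike times (ms): {spikes}")
```

Note that the neuron only “does something” when a spike fires; between events it just leaks. That sparsity is the source of neuromorphic hardware’s power savings.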
Quantum Computing
Leveraging principles of quantum mechanics—like superposition and entanglement—quantum computers can, in theory, solve particular classes of problems exponentially faster than classical systems.
Quantum computing represents a revolutionary advancement in parallel processing capabilities:
- Leverages qubits for exponentially faster computations through quantum superposition
- Particularly powerful for:
  - Complex optimization problems
  - Cryptography
  - Materials science simulations
  - Chemistry applications
- Enhanced training capabilities for neuromorphic networks through massive parallel data processing
Why it matters: Targeting optimization, simulation, cryptography, and drug discovery, quantum hardware (gate-model or annealing) operates on fundamentally different principles than CPUs or GPUs.
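Superposition can be sketched classically for a single qubit. The minimal NumPy example below applies a Hadamard gate to put a qubit into an equal mixture of 0 and 1, and hints at why simulating many qubits quickly becomes intractable.

```python
# A minimal state-vector sketch of superposition, simulated classically:
# a Hadamard gate puts one qubit into an equal mix of 0 and 1.
import numpy as np

ket0 = np.array([1.0, 0.0])                      # qubit starts in |0>
hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

state = hadamard @ ket0                          # apply the gate
probs = np.abs(state) ** 2                       # measurement probabilities
print(f"P(0) = {probs[0]:.2f}, P(1) = {probs[1]:.2f}")  # 0.50 / 0.50
# n qubits need a 2**n-length state vector - the exponential growth that
# makes classical simulation hard and quantum hardware interesting.
```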
Hybrid & Novel Architectures
Emerging systems combine general-purpose and specialized hardware—such as GPUs, FPGAs, TPUs, neuromorphic cores, and quantum co-processors—working together to tailor solutions for diverse tasks.
- In scientific and AI computing, researchers are exploring architectures that integrate parallel, neuromorphic, and quantum elements, making new classes of applications possible.
- Projects like the UC San Diego–led PRISM initiative are pushing in-memory and near-memory computing as complements to parallel processing.
Industry Applications and Market Growth
AI and Data Centers
The rising demand for computational power is reshaping industries and infrastructure. The GPU market alone is expected to grow from $70 billion in 2024 to $237.5 billion by 2030—roughly a 23% compound annual growth rate. This growth reflects not just faster performance but the shift toward new, more complex applications.
Here’s where that growth is happening and why it matters across key sectors.
- Growth will be driven by:
  - AI workload demands
  - Machine learning applications
  - Deep learning requirements
- Tensor Core GPUs are revolutionizing deep learning with unprecedented processing capabilities
Gaming and Graphics
The gaming industry remains a significant driver of GPU innovation. As consumer expectations rise for visual realism and responsiveness, developers are leveraging advanced graphics capabilities and artificial intelligence (AI) to enhance gameplay and accessibility across various platforms.
- Continued advancement in real-time ray tracing and AI-enhanced graphics
- Cloud gaming services expanding accessibility
- Integration of AI technologies for improved gaming experiences
Integration of Technologies
Beyond performance gains, the next phase of computing will involve integrating multiple architectures to meet the demands of increasingly complex workloads. This convergence of technologies will reshape how systems are designed, combining the strengths of different computing models for greater efficiency and specialization.
The future will likely see the convergence of multiple computing paradigms:
- Hybrid systems combining quantum and classical computing
- Integration of neuromorphic elements with traditional GPU architectures
- Enhanced AI-specific hardware capabilities
These advancements will have significant impacts across various sectors:
- Healthcare
- Finance
- Transportation
- Climate change solutions
- Sustainable energy development
The future of GPUs and parallel processing is marked by a shift toward more specialized, efficient, and powerful computing solutions that can handle increasingly complex computational tasks while maintaining energy efficiency and processing speed.
CONCLUSION
The Macro Investment Theme of Data Center Growth remains intact and should allow our clients’ portfolios to grow materially in the years ahead—barring adverse economic policy decisions, a global conflict, or abrupt policy statements out of Washington.
Many individual investors do not have the time or desire to actively manage an investment portfolio to maximize returns and minimize risk; that is where a professional portfolio manager, like BankChampaign, can help.
With 35 years of experience in investment management, millions of dollars in assets under management, and a proven investment process that has yielded average returns well above its benchmark, BankChampaign is well positioned to help you when you need it.
Contact Senior Vice President Karen Sharp ( [email protected]) or Vice President Joel Wallace ([email protected]) to discuss your investments. You can reach them at the listed emails or by phone at (217) 351-2870.
Thanks for reading!
Mark