Public Cloud Services Revenues Surpass $500 Billion in 2022: IDC
Worldwide revenue for the public cloud services market totaled $545.8 billion in 2022, an increase of 22.9% over 2021, according to new data from the IDC Worldwide Semiannual Public Cloud Services Tracker.
SaaS applications were the largest source of public cloud services revenue, accounting for over 45% of the total in 2022. Infrastructure as a service (IaaS) was the second largest revenue category at 21.2% of the total, followed by platform as a service (PaaS) and SaaS system infrastructure software at 17.0% and 16.7%, respectively.
IDC data also showed that the top five public cloud providers – Microsoft, Amazon Web Services, Salesforce Inc., Google, and Oracle – accounted for more than 41% of worldwide revenue and collectively grew 27.3% year over year. Microsoft had the largest share of the public cloud services market at 16.8%, followed by Amazon Web Services at 13.5%.
“Given the economic challenges of the past year, it’s easy to conclude that we are in a period where a focus on constraining new expenditures and optimizing the use of existing cloud assets will dominate CIOs’ priorities and shape the fortunes of IT providers for the next several years. It’s also a very wrong conclusion,” said Rick Villars, group vice president, Worldwide Research at IDC. “The assessment and use of AI, triggered by generative AI, is starting to dominate the planning and long-term investment agendas of businesses and cloud providers will play a significant role in the evaluation and adoption of AI enablement services.”
Though some budgets may be tightening due to lingering economic uncertainty, generative AI is driving long-term investment strategies for enterprises and cloud providers.
“Cloud providers are making significant investments in high-performance infrastructure,” said Dave McCarthy, research vice president, Cloud and Edge Infrastructure Services, IDC. “This serves two purposes. First, it unlocks the next wave of migration for enterprise applications that have previously remained on-premises. Second, it creates the foundation for new AI software that can be quickly deployed at scale. In both cases, these investments are resulting in market growth opportunities.”
Venture capital firm Andreessen Horowitz says that generative AI warrants significant cloud investment due to its resource-intensive nature.
“Nearly everything in generative AI passes through a cloud-hosted GPU (or TPU) at some point. Whether for model providers/research labs running training workloads, hosting companies running inference/fine-tuning, or application companies doing some combination of both – FLOPS are the lifeblood of generative AI,” the company wrote. “For the first time in a very long time, progress on the most disruptive computing technology is massively compute bound.”
Andreessen Horowitz says that access to compute resources at the lowest total cost has become a determining factor for the success of AI companies. The venture capital firm expects the vast majority of startups to use cloud computing for generative AI, as it offers less up-front cost and more scalability in many cases.
A recent Wall Street Journal article noted that traditional cloud infrastructure was not designed to support large-scale AI, and that cloud providers are rushing to catch up with demand.
“There’s a pretty big imbalance between demand and supply at the moment,” said Chetan Kapoor, director of product management at Amazon Web Services’ Elastic Compute Cloud division, in the WSJ article.
Only a small portion of cloud services are optimized for AI. The majority of the cloud consists of servers running general-purpose CPUs; GPU clusters, which are better suited to AI workloads, make up a minority of the infrastructure.
Kapoor told the WSJ that AWS plans to deploy multiple AI-optimized server clusters over the next 12 months. The article noted that Microsoft Azure and Google Cloud are likewise working to expand their AI infrastructure.
Hewlett Packard Enterprise is also entering the AI cloud market. The company recently announced HPE GreenLake for Large Language Models, an on-demand, multi-tenant supercomputing cloud service that will enable enterprises to privately train, tune, and deploy large-scale AI.
“Unlike general-purpose cloud offerings that run multiple workloads in parallel, HPE GreenLake for LLMs runs on an AI-native architecture uniquely designed to run a single large-scale AI training and simulation workload, and at full computing capacity,” the company said in a release. “The offering will support AI and HPC jobs on hundreds or thousands of CPUs or GPUs at once. This capability is significantly more effective, reliable, and efficient to train AI and create more accurate models, allowing enterprises to speed up their journey from POC to production to solve problems faster.”
HPE President and CEO Antonio Neri commented that the industry has reached a generational market shift in AI that will be as transformational as previous breakthroughs such as the web, mobile, and cloud.
“HPE is making AI, once the domain of well-funded government labs and the global cloud giants, accessible to all by delivering a range of AI applications, starting with large language models, that run on HPE’s proven, sustainable supercomputers. Now, organizations can embrace AI to drive innovation, disrupt markets, and achieve breakthroughs with an on-demand cloud service that trains, tunes, and deploys models, at scale and responsibly,” he said.