Trawling through Twitter yesterday, I found a tweet from Azure CTO Mark Russinovich. I’ll quote the text verbatim: “Like I mentioned, Notepad really screams on the Azure 24TB Mega Godzilla Beast VM.” Ultimately this thread leads to an Ignite presentation from October 2020. Therein, Russinovich showcases monster Azure VMs.
When Russinovich Showcases Monster Azure VMs, What’s the Point?
From left (older) to right (newer), the lead-in graphic shows a historical retrospective of what’s counted as “monster” for memory-optimized servers over time. Itty-bitty boxes at far left started out with Intel and AMD Gen7 versions, with 512 GB and 768 GB of RAM respectively. Along came Godzilla after that, with 768 GB of RAM and more cores. Next came the Beast, with 4 TB RAM and 64 cores. After that: Beast V2 with 224 cores and 12 TB RAM. The current king of Azure monsters is Mega-Godzilla-Beast. It has a whopping 448 cores and 24 TB RAM. No wonder Notepad really screams. So does everything else, including huge in-memory SAP HANA workloads for which this VM is intended.
I took Russinovich’s “really screams” Notepad remark as tongue-in-cheek when I saw it. Viewing his Ignite video proves that point in spades. What’s fascinating, though, is that some of the highest-end Azure users are already pushing Microsoft for an even bigger monster. They’re ready to tackle workloads even larger and more demanding than Mega-Godzilla-Beast can handle.
Who Needs Mega-Monster VMs?
This rampant upscaling of resources is no mere idle fancy. Indeed, there are large companies and organizations that need huge aggregations of compute, memory, storage and networking to handle certain specialized workloads.
This also gives me an insight into the ongoing and increasing allure of the cloud. Most datacenters simply couldn’t put the technologies together to create such mega-monster VMs for themselves. The only place to find them is in the cloud. Further, the only way to afford them is to use them when you need them, and turn them off right away when the workload is done.