Missed this year’s Google Next event? This page is a good place for you to catch up.
But before we get there, what is Next, and why is every enthusiastic developer talking about it?
Google Next is an annual conference where Google’s best and brightest, including its top executives, discuss recent innovations and the future of their business and technology. If you’ve been following them, you’ve probably heard about the many announcements made at their most recent Cloud Next event this October, which featured Google Cloud CEO Thomas Kurian and Google CEO Sundar Pichai. Here are a few of the 123 announcements from the live event that could have a direct impact on your business, even if that impact isn’t immediately obvious:
Meeting growing demands with 5 new Google Cloud regions
To meet the needs of a growing customer base, Google has invested in expanding its global network by introducing 5 new Google Cloud regions: Austria, Greece, Norway, South Africa, and Sweden. That brings the total to 53 cloud regions (48 existing plus the 5 new ones), serving customers in more than 200 countries and territories worldwide. For information about these regions and their precise locations, check out this article by the technology giant.
Enhancing Google Cloud Infrastructure with New C3 VM and Hyperdisk
Realizing that it can no longer rely on ever-faster CPUs alone, as Moore’s Law allowed in the past, Google faced a decisive choice: either let customers optimize their workloads for a general-purpose platform, or offer them a platform dedicated to and fully optimized for their specific needs. Not only did they choose the latter, they implemented it thoughtfully and efficiently to build machines that benefit their customers.
Their new C3 machine series, powered by the 4th Gen Intel Xeon Scalable processor, a custom Intel Infrastructure Processing Unit (IPU), and Hyperdisk block storage, delivers exceptional performance gains for high-performance computing and data-intensive workloads. Snap, one of Google’s customers, has reportedly seen a 20% performance increase for a key workload compared with the previous-generation C2. Parallel Works Inc., another of Google’s customers, shared that “Based on the initial performance data, running weather research and forecasting (WRF) on C3 clusters can deliver as much as 10x quicker time to results for about the same computational cost. This will significantly accelerate R&D for our customers in weather, environment, and engineering domains.” Note that Hyperdisk block storage offers 80% higher IOPS per vCPU for high-end database management system (DBMS) workloads than other hyperscalers.
Workload tailoring with Google Cloud TPU v4
With the general availability of TPU v4, Google will deliver its advanced AI-optimized infrastructure, backed by the world’s largest publicly available ML hub in Oklahoma, which offers up to 9 exaflops of peak aggregate compute. This enables large-scale training workloads to run up to 1.8x faster and up to 1.5x cheaper than the next best alternative. For developers and researchers, TPU v4 means training sophisticated models for large-scale natural language processing (NLP), recommendation systems, and computer vision more cost-effectively and sustainably.
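As a back-of-the-envelope illustration of what those best-case multipliers mean for a training job (the 100-hour baseline and $10/hour rate below are made-up numbers for illustration, not Google pricing):

```python
def tpu_v4_estimate(baseline_hours, baseline_cost):
    """Apply the quoted best-case TPU v4 multipliers: up to 1.8x faster
    and up to 1.5x cheaper than the next best alternative."""
    return baseline_hours / 1.8, baseline_cost / 1.5

# Hypothetical baseline job: 100 hours at $10/hour on another platform.
hours, cost = tpu_v4_estimate(100.0, 100.0 * 10.0)
print(f"TPU v4 best case: ~{hours:.1f} hours, ~${cost:.2f}")
```

The “up to” qualifiers matter: actual speedups and savings depend on the specific workload.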
Easier Cloud usage with Anthos enhancements
Anthos is Google’s cloud-centric container platform for running scalable apps anywhere. With the new Anthos enhancements, Google will let its customers run their cloud “where and how they want”. The latest upgrades include:
- A more robust user interface: so that you can create, update, and reconfigure your Anthos clusters in the same way, whether from a dashboard or a command-line interface, wherever your clusters run.
- An upgraded fleet management experience: so you can manage large container cluster fleets across clouds, on-premises, and at the edge for different use cases (including isolating dev from prod, applying fleet-specific security controls, and enforcing configurations fleet-wide).
- The general availability of virtual machine support on Anthos clusters for retail edge environments.
Simplifying Mainframe Modernization with Dual Run and Migration Center
Google Cloud’s Dual Run is a mainframe modernization solution designed to simplify and de-risk enterprise cloud migrations of legacy mainframe systems. It addresses the tight coupling of data to the application layer, which otherwise forces companies to stop an application for a period of time in order to move, modernize, or transform it. Dual Run does this by:
- Enabling parallel processing: So that customers can simultaneously run workloads on their existing mainframes and Google Cloud.
- Demonstrating compliance with privacy, security, and data-residency requirements: so that the needs of organizations in highly regulated industries, such as banking, retail, healthcare, manufacturing, and the public sector, can be easily met.
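The parallel-processing idea behind Dual Run can be sketched in a few lines: run the same workload against the legacy system and the cloud replica at the same time, then compare the outputs before committing to a cutover. This is a minimal illustration of that pattern, not Google’s actual API; `run_on_mainframe` and `run_on_cloud` are hypothetical stand-ins.

```python
from concurrent.futures import ThreadPoolExecutor

def run_on_mainframe(batch):
    # Hypothetical stand-in for the legacy mainframe's processing.
    return sorted(batch)

def run_on_cloud(batch):
    # Hypothetical stand-in for the migrated cloud workload,
    # which should produce identical results.
    return sorted(batch)

def dual_run(batch):
    """Run the same batch on both systems in parallel and compare outputs."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        legacy = pool.submit(run_on_mainframe, batch)
        cloud = pool.submit(run_on_cloud, batch)
        legacy_result, cloud_result = legacy.result(), cloud.result()
    # A mismatch here would flag a divergence to fix before cutover.
    return legacy_result == cloud_result, cloud_result

matches, result = dual_run([3, 1, 2])
```

The key design point is that neither system is stopped: the comparison happens continuously while both stay live, which is what removes the downtime window from the migration.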
Strengthening Open Source AI commitments with OpenXLA
As a founding member of and contributor to the OpenXLA Project, Google aims to make ML frameworks easy to use so that anyone can quickly and easily turn their AI ideas into reality. The OpenXLA Project is an open-source ecosystem of ML technologies developed by Google, AMD, Arm, Intel, Meta, NVIDIA, and others. With it, they hope to overcome the challenges of inflexible, siloed strategies by working with the community to turn open-source software projects into catalysts for technological advancement.
Have a look at all the interesting announcements made by the tech giant here.