If you want to run memory-intensive workloads in the cloud, you may be in luck. According to Amazon Web Services, memory-optimized instances are ideal for high-performance relational and NoSQL databases, distributed web-scale in-memory caches such as Memcached and Redis, and in-memory databases used for real-time big data analytics, such as Hadoop and Spark clusters.
AWS recently announced the availability of Amazon EC2 R6a instances, built for memory-intensive workloads such as SQL and NoSQL databases. The new-generation instances are powered by third-generation AMD EPYC (Milan) processors and built on the AWS Nitro System.
To scale databases and in-memory applications, the new instances are available in sizes from r6a.large to r6a.48xlarge, with up to 192 vCPUs and 1,536 GiB of memory. That is twice the maximum of the previous-generation R5a instances.
Channy Yun, AWS Senior Developer Advocate, explains:
“R6a instances, powered by 3rd Gen AMD EPYC processors, are well suited for memory-intensive applications such as high-performance databases (relational databases, NoSQL databases), distributed web-scale in-memory caches (such as Memcached, Redis), in-memory databases for real-time big data analytics (such as Hadoop, Spark clusters), and other enterprise applications.”
What do users get?
- Amazon has been busy expanding its EC2 lineup; recent additions even let users rent and run Mac mini M1 machines in the cloud.
- AWS claims that the new instances provide up to a 35% improvement in price/performance over R5a instances and 10% cost savings over equivalent x86-based EC2 instances.
- The largest R6a instance, “r6a.metal”, provides up to 1,536 GiB of memory and 50 Gbps of network bandwidth.
- AWS also states that R6a instances, like R5a instances, have an 8:1 memory-to-vCPU ratio and scale up to 192 vCPUs per instance.
- R6a instances are SAP-certified and compatible with SAP Business Suite.
- The smallest instance, “r6a.large”, offers 16 GiB of memory and up to 12.5 Gbps of network bandwidth. In between, R6a instances come in a wide range of sizes.
- Up to 35% higher price/performance per vCPU compared with R5a instances.
- 10% lower cost than comparable x86-based instances.
- Up to 1,536 GiB of memory, twice as much as the previous generation, providing headroom for scaling databases and handling large in-memory workloads.
- Up to 192 vCPUs, up to 50 Gbps of enhanced networking, and 40 Gbps of EBS bandwidth, enabling faster data ingestion, workload consolidation, and a lower cost of ownership.
- SAP-certified instances targeting memory-intensive workloads such as high-performance enterprise databases.
- Always-on memory encryption via AMD Transparent Single Key Memory Encryption (TSME), plus AVX2 instructions to accelerate encryption and decryption algorithms.
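As a quick illustration of the 8:1 memory-to-vCPU ratio quoted above, the sketch below derives vCPU counts from the memory figures this article names. Only the memory sizes (16 GiB for r6a.large, 1,536 GiB for r6a.48xlarge and r6a.metal) come from the text; the vCPU counts are computed from the ratio, not taken from AWS documentation.

```python
# Sketch: applying the 8:1 GiB-of-memory-to-vCPU ratio that AWS quotes for
# R6a (and R5a) instances to the sizes named in this article.
MEMORY_PER_VCPU_GIB = 8  # the 8:1 memory-to-vCPU ratio

def vcpus_for_memory(memory_gib: int) -> int:
    """Derive the vCPU count implied by the 8:1 ratio."""
    return memory_gib // MEMORY_PER_VCPU_GIB

# Memory figures taken from the article; intermediate sizes omitted.
sizes_gib = {"r6a.large": 16, "r6a.48xlarge": 1536, "r6a.metal": 1536}

for name, mem in sizes_gib.items():
    print(f"{name}: {mem} GiB -> {vcpus_for_memory(mem)} vCPUs")
```

The derived figures line up with the article's numbers: r6a.large works out to 2 vCPUs, and the 1,536 GiB sizes to the 192 vCPUs quoted as the per-instance maximum.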
Amazon EC2 R6a instances are generally available, according to AWS. The AWS Nitro System, which powers R6a instances and gives them access to virtually all of the host hardware's compute and memory resources, is well suited to memory-hungry workloads. The launch of AMD's third-generation EPYC processors with the new Zen 3 core was widely anticipated: the core microarchitecture, platform compatibility, and security options set a good benchmark for the enterprise. Zen 3 had already delivered better per-core performance in the consumer market, expectations grew for similar success in the enterprise market, and now we are seeing the results. These SAP-certified, memory-optimized instances cost 10% less than comparable x86-based EC2 instances while delivering up to 35% better price/performance than R5a instances across a variety of workloads.
Hybrid and managed cloud architectures are the way of the future. With AWS managed services, teams can adopt R6a instances in two new larger sizes with 192 vCPUs and 1,536 GiB of memory, twice the size of the largest R5a instance, to meet customer demand for greater scalability. R6a instances also provide 20% more memory bandwidth per vCPU than R5a instances. Additionally, for highly scalable, low-latency node-to-node communication, the Elastic Fabric Adapter (EFA) is available on the 48xlarge and bare-metal sizes.