
RAM per core analysis of AWS EC2 instances

Tags: reference   databricks  

This report was created on 13 March 2024.

Introduction

When deciding on an instance type for your workloads, you essentially need to know how much RAM you get for every processor core. This determines the extent of your parallelism - how many cores you can actively use while staying within the limits of available RAM.

For example, suppose your instance has 8 vCPUs and 64 GB of RAM. The maximum RAM available per core is 8 GB. Now suppose your workload is designed to use 10 GB for every core. In that case you can only run it on 6 cores (6 × 10 GB = 60 GB), and 2 cores will sit idle. That is the limit of your parallelism.
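
To make the arithmetic concrete, here is a minimal sketch in plain Python, using the numbers from the example above (the function name and structure are just for illustration):

```python
# How many cores can a memory-hungry workload actually keep busy?

def usable_cores(vcpus: int, ram_gb: float, ram_per_core_gb: float) -> int:
    """Cores you can use in parallel without exceeding the instance's RAM."""
    return min(vcpus, int(ram_gb // ram_per_core_gb))

vcpus, ram_gb = 8, 64        # instance shape from the example above
ram_per_core_gb = 10         # what the workload needs per core

cores = usable_cores(vcpus, ram_gb, ram_per_core_gb)
print(f"usable cores: {cores}, idle cores: {vcpus - cores}")
# -> usable cores: 6, idle cores: 2
```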

The RAM-to-core ratio is the golden metric.

When you look at the families of AWS EC2 instances, each family tries to maintain the same RAM-to-core ratio across its instances. The number of cores increases as you go up in size, but the RAM per core stays the same.

So, if you know the kind of workloads you have, you know the family of EC2 instances you need.

I didn't find these numbers online, so I'm publishing my report here. Enjoy.

Notes on Multi-threading and processor families

Multi-Threading

On most modern CPUs, a single physical core can run multiple hardware threads at once by sharing the core's execution resources. This technology is called simultaneous multi-threading, or SMT. (Intel calls it Hyper-Threading, which is just branding.)

Suppose your CPU has only 1 physical core, but it supports multi-threading and can run 2 threads per core. To the operating system, it will appear as if you have 2 CPU cores, even though there is only one physical core.

These multi-threaded logical cores are called vCPUs (virtual CPUs).

The core count that the operating system reports to applications is the vCPU count, not the number of physical cores.

In the case of Databricks, the Spark engine runs on top of the OS, so it gets the number of available cores from the operating system itself - that is, the number of vCPUs. In the example above, Databricks would be told that 2 cores are available.

For our RAM-per-core calculation, what matters is the number of vCPUs as seen by the operating system. The actual number of physical cores doesn't really matter.

So, in the calculations, I have only considered the number of virtual processors (vCPUs), not the number of physical cores.

  • All AWS EC2 instances with Intel or AMD processors support multi-threading and run 2 threads per core.
  • All Graviton processors run 1 thread per core.
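
As a quick sanity check, this is one way to see the vCPU count the OS reports - and therefore the core count Spark will be told about. The spark variable mentioned in the comment is assumed to already exist, as it does in a Databricks notebook:

```python
import os

# Logical processors (vCPUs) as reported by the operating system.
print("vCPUs reported by the OS:", os.cpu_count())

# In a Databricks notebook, where a `spark` session already exists, the
# cluster-wide parallelism is derived from these same vCPU counts:
# print("default parallelism:", spark.sparkContext.defaultParallelism)
```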

Processor families

You'll see variants within the same instance family. These variants differ by the processor used in the instance.

For example, instances with an i in the name (like m6i) have Intel processors, those with an a (like m5a) have AMD processors, and those with a g (like m6g) have Graviton processors.

Performance varies across processor families. Still, what matters most is the amount of RAM available per core - the RAM-to-core ratio is still our golden metric.
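
As an illustration of the naming convention only (a sketch based on the description above, not an official AWS parser - oddly named families like z1d won't split cleanly), you can guess the processor vendor from the instance type string:

```python
import re

def processor_vendor(instance_type: str) -> str:
    """Guess the processor vendor from an EC2 instance type name (sketch)."""
    prefix = instance_type.split(".")[0]              # "m6gd.2xlarge" -> "m6gd"
    match = re.match(r"^([a-z]+?)(\d+)([a-z]*)$", prefix)
    if not match:
        return "unknown"
    _family, _generation, attrs = match.groups()      # "m", "6", "gd"
    if "g" in attrs:
        return "Graviton"
    if "a" in attrs:
        return "AMD"
    if "i" in attrs:
        return "Intel"
    return "Intel (typical for families without a vendor letter)"

for name in ["m6i.4xlarge", "m5a.2xlarge", "m7gd.xlarge", "c5.9xlarge"]:
    print(name, "->", processor_vendor(name))
```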

Average RAM-to-core ratio, by family

family average ram per vcpu (gb)
m-general purpose 4.000000
c-compute optimized 2.032738
r-memory optimized 7.972345
z1d-memory optimized 8.000000
d-storage optimized 5.600000
i-storage optimized 7.848958
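
If you already know how many GB of RAM each task needs, a tiny lookup against these ratios is enough to shortlist a family. The mapping below is a sketch built from the table above, nothing more:

```python
# Approximate RAM-per-vCPU ratios by family, taken from the table above.
RAM_PER_VCPU_GB = {
    "c-compute optimized": 2,
    "m-general purpose": 4,
    "r-memory optimized": 8,
}

def shortlist_families(required_gb_per_core: float) -> list[str]:
    """Families whose RAM-per-vCPU ratio covers the requirement."""
    return [family for family, ratio in RAM_PER_VCPU_GB.items()
            if ratio >= required_gb_per_core]

print(shortlist_families(3))   # -> ['m-general purpose', 'r-memory optimized']
print(shortlist_families(6))   # -> ['r-memory optimized']
```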

Details of all EC2 instance families

This graph was created from the data tables below. It is a very large, wide graph - open the image to enlarge it.

General purpose instances

M-general purpose

Every instance in this family has 4 GB of RAM per core.

type name vcpu ram (gb) ram per vcpu (gb)
m-general purpose m4.large 2 8 4
m-general purpose m4.xlarge 4 16 4
m-general purpose m4.2xlarge 8 32 4
m-general purpose m4.4xlarge 16 64 4
m-general purpose m4.10xlarge 40 160 4
m-general purpose m4.16xlarge 64 256 4
m-general purpose m5.large 2 8 4
m-general purpose m5.xlarge 4 16 4
m-general purpose m5.2xlarge 8 32 4
m-general purpose m5.4xlarge 16 64 4
m-general purpose m5.8xlarge 32 128 4
m-general purpose m5.12xlarge 48 192 4
m-general purpose m5.16xlarge 64 256 4
m-general purpose m5.24xlarge 96 384 4
m-general purpose m5a.large 2 8 4
m-general purpose m5a.xlarge 4 16 4
m-general purpose m5a.2xlarge 8 32 4
m-general purpose m5a.4xlarge 16 64 4
m-general purpose m5a.8xlarge 32 128 4
m-general purpose m5a.12xlarge 48 192 4
m-general purpose m5a.16xlarge 64 256 4
m-general purpose m5a.24xlarge 96 384 4
m-general purpose m5d.large 2 8 4
m-general purpose m5d.xlarge 4 16 4
m-general purpose m5d.2xlarge 8 32 4
m-general purpose m5d.4xlarge 16 64 4
m-general purpose m5d.8xlarge 32 128 4
m-general purpose m5d.12xlarge 48 192 4
m-general purpose m5d.16xlarge 64 256 4
m-general purpose m5d.24xlarge 96 384 4
m-general purpose m5dn.large 2 8 4
m-general purpose m5dn.xlarge 4 16 4
m-general purpose m5dn.2xlarge 8 32 4
m-general purpose m5dn.4xlarge 16 64 4
m-general purpose m5dn.8xlarge 32 128 4
m-general purpose m5dn.12xlarge 48 192 4
m-general purpose m5dn.16xlarge 64 256 4
m-general purpose m5dn.24xlarge 96 384 4
m-general purpose m5n.large 2 8 4
m-general purpose m5n.xlarge 4 16 4
m-general purpose m5n.2xlarge 8 32 4
m-general purpose m5n.4xlarge 16 64 4
m-general purpose m5n.8xlarge 32 128 4
m-general purpose m5n.12xlarge 48 192 4
m-general purpose m5n.16xlarge 64 256 4
m-general purpose m5n.24xlarge 96 384 4
m-general purpose m5zn.large 2 8 4
m-general purpose m5zn.xlarge 4 16 4
m-general purpose m5zn.2xlarge 8 32 4
m-general purpose m5zn.3xlarge 12 48 4
m-general purpose m5zn.6xlarge 24 96 4
m-general purpose m5zn.12xlarge 48 192 4
m-general purpose m6g.large 2 8 4
m-general purpose m6g.xlarge 4 16 4
m-general purpose m6g.2xlarge 8 32 4
m-general purpose m6g.4xlarge 16 64 4
m-general purpose m6g.8xlarge 32 128 4
m-general purpose m6g.12xlarge 48 192 4
m-general purpose m6g.16xlarge 64 256 4
m-general purpose m6gd.large 2 8 4
m-general purpose m6gd.xlarge 4 16 4
m-general purpose m6gd.2xlarge 8 32 4
m-general purpose m6gd.4xlarge 16 64 4
m-general purpose m6gd.8xlarge 32 128 4
m-general purpose m6gd.12xlarge 48 192 4
m-general purpose m6gd.16xlarge 64 256 4
m-general purpose m6i.large 2 8 4
m-general purpose m6i.xlarge 4 16 4
m-general purpose m6i.2xlarge 8 32 4
m-general purpose m6i.4xlarge 16 64 4
m-general purpose m6i.8xlarge 32 128 4
m-general purpose m6i.12xlarge 48 192 4
m-general purpose m6i.16xlarge 64 256 4
m-general purpose m6i.24xlarge 96 384 4
m-general purpose m6i.32xlarge 128 512 4
m-general purpose m6id.large 2 8 4
m-general purpose m6id.xlarge 4 16 4
m-general purpose m6id.2xlarge 8 32 4
m-general purpose m6id.4xlarge 16 64 4
m-general purpose m6id.8xlarge 32 128 4
m-general purpose m6id.12xlarge 48 192 4
m-general purpose m6id.16xlarge 64 256 4
m-general purpose m6id.24xlarge 96 384 4
m-general purpose m6id.32xlarge 128 512 4
m-general purpose m6idn.large 2 8 4
m-general purpose m6idn.xlarge 4 16 4
m-general purpose m6idn.2xlarge 8 32 4
m-general purpose m6idn.4xlarge 16 64 4
m-general purpose m6idn.8xlarge 32 128 4
m-general purpose m6idn.12xlarge 48 192 4
m-general purpose m6idn.16xlarge 64 256 4
m-general purpose m6idn.24xlarge 96 384 4
m-general purpose m6idn.32xlarge 128 512 4
m-general purpose m6in.large 2 8 4
m-general purpose m6in.xlarge 4 16 4
m-general purpose m6in.2xlarge 8 32 4
m-general purpose m6in.4xlarge 16 64 4
m-general purpose m6in.8xlarge 32 128 4
m-general purpose m6in.12xlarge 48 192 4
m-general purpose m6in.16xlarge 64 256 4
m-general purpose m6in.24xlarge 96 384 4
m-general purpose m6in.32xlarge 128 512 4
m-general purpose m7g.large 2 8 4
m-general purpose m7g.xlarge 4 16 4
m-general purpose m7g.2xlarge 8 32 4
m-general purpose m7g.4xlarge 16 64 4
m-general purpose m7g.8xlarge 32 128 4
m-general purpose m7g.12xlarge 48 192 4
m-general purpose m7g.16xlarge 64 256 4
m-general purpose m7gd.large 2 8 4
m-general purpose m7gd.xlarge 4 16 4
m-general purpose m7gd.2xlarge 8 32 4
m-general purpose m7gd.4xlarge 16 64 4
m-general purpose m7gd.8xlarge 32 128 4
m-general purpose m7gd.12xlarge 48 192 4
m-general purpose m7gd.16xlarge 64 256 4

Compute optimized instances

C-compute optimized

With the exception of a few instances, all instances of this family have 2 GB of RAM per core.

type name vcpu ram (gb) ram per vcpu (gb)
c-compute optimized c4.2xlarge 8 15 1.875
c-compute optimized c4.4xlarge 16 30 1.875
c-compute optimized c4.8xlarge 36 60 1.666666667
c-compute optimized c5.xlarge 4 8 2
c-compute optimized c5.2xlarge 8 16 2
c-compute optimized c5.4xlarge 16 32 2
c-compute optimized c5.9xlarge 36 72 2
c-compute optimized c5.12xlarge 48 96 2
c-compute optimized c5.18xlarge 72 144 2
c-compute optimized c5.24xlarge 96 192 2
c-compute optimized c5a.xlarge 4 8 2
c-compute optimized c5a.2xlarge 8 16 2
c-compute optimized c5a.4xlarge 16 32 2
c-compute optimized c5a.8xlarge 32 64 2
c-compute optimized c5a.12xlarge 48 96 2
c-compute optimized c5a.16xlarge 64 128 2
c-compute optimized c5a.24xlarge 96 192 2
c-compute optimized c5ad.xlarge 4 8 2
c-compute optimized c5ad.2xlarge 8 16 2
c-compute optimized c5ad.4xlarge 16 32 2
c-compute optimized c5ad.8xlarge 32 64 2
c-compute optimized c5ad.12xlarge 48 96 2
c-compute optimized c5ad.16xlarge 64 128 2
c-compute optimized c5ad.24xlarge 96 192 2
c-compute optimized c5d.xlarge 4 8 2
c-compute optimized c5d.2xlarge 8 16 2
c-compute optimized c5d.4xlarge 16 32 2
c-compute optimized c5d.9xlarge 36 72 2
c-compute optimized c5d.12xlarge 48 96 2
c-compute optimized c5d.18xlarge 72 144 2
c-compute optimized c5d.24xlarge 96 192 2
c-compute optimized c5n.xlarge 4 11 2.75
c-compute optimized c5n.2xlarge 8 21 2.625
c-compute optimized c5n.4xlarge 16 42 2.625
c-compute optimized c5n.9xlarge 36 96 2.666666667
c-compute optimized c5n.18xlarge 72 192 2.666666667
c-compute optimized c6g.xlarge 4 8 2
c-compute optimized c6g.2xlarge 8 16 2
c-compute optimized c6g.4xlarge 16 32 2
c-compute optimized c6g.8xlarge 32 64 2
c-compute optimized c6g.12xlarge 48 96 2
c-compute optimized c6g.16xlarge 64 128 2
c-compute optimized c6gd.xlarge 4 8 2
c-compute optimized c6gd.2xlarge 8 16 2
c-compute optimized c6gd.4xlarge 16 32 2
c-compute optimized c6gd.8xlarge 32 64 2
c-compute optimized c6gd.12xlarge 48 96 2
c-compute optimized c6gd.16xlarge 64 128 2
c-compute optimized c6i.xlarge 4 8 2
c-compute optimized c6i.2xlarge 8 16 2
c-compute optimized c6i.4xlarge 16 32 2
c-compute optimized c6i.8xlarge 32 64 2
c-compute optimized c6i.12xlarge 48 96 2
c-compute optimized c6i.16xlarge 64 128 2
c-compute optimized c6i.24xlarge 96 192 2
c-compute optimized c6i.32xlarge 128 256 2
c-compute optimized c6id.xlarge 4 8 2
c-compute optimized c6id.2xlarge 8 16 2
c-compute optimized c6id.4xlarge 16 32 2
c-compute optimized c6id.8xlarge 32 64 2
c-compute optimized c6id.12xlarge 48 96 2
c-compute optimized c6id.16xlarge 64 128 2
c-compute optimized c6id.24xlarge 96 192 2
c-compute optimized c6id.32xlarge 128 256 2
c-compute optimized c6in.xlarge 4 8 2
c-compute optimized c6in.2xlarge 8 16 2
c-compute optimized c6in.4xlarge 16 32 2
c-compute optimized c6in.8xlarge 32 64 2
c-compute optimized c6in.12xlarge 48 96 2
c-compute optimized c6in.16xlarge 64 128 2
c-compute optimized c6in.24xlarge 96 192 2
c-compute optimized c6in.32xlarge 128 256 2
c-compute optimized c7g.xlarge 4 8 2
c-compute optimized c7g.2xlarge 8 16 2
c-compute optimized c7g.4xlarge 16 32 2
c-compute optimized c7g.8xlarge 32 64 2
c-compute optimized c7g.12xlarge 48 96 2
c-compute optimized c7g.16xlarge 64 128 2
c-compute optimized c7gd.xlarge 4 8 2
c-compute optimized c7gd.2xlarge 8 16 2
c-compute optimized c7gd.4xlarge 16 32 2
c-compute optimized c7gd.8xlarge 32 64 2
c-compute optimized c7gd.12xlarge 48 96 2
c-compute optimized c7gd.16xlarge 64 128 2

Memory optimized instances

R-memory optimized

Almost all instances of this family have 8 GB of RAM per core. The older r3 and r4 generations come in slightly under 8.

type name vcpu ram (gb) ram per vcpu (gb)
r-memory optimized r3.xlarge 4 31 7.75
r-memory optimized r3.2xlarge 8 61 7.625
r-memory optimized r3.4xlarge 16 122 7.625
r-memory optimized r3.8xlarge 32 244 7.625
r-memory optimized r4.xlarge 4 31 7.75
r-memory optimized r4.2xlarge 8 61 7.625
r-memory optimized r4.4xlarge 16 122 7.625
r-memory optimized r4.8xlarge 32 244 7.625
r-memory optimized r4.16xlarge 64 488 7.625
r-memory optimized r5.large 2 16 8
r-memory optimized r5.xlarge 4 32 8
r-memory optimized r5.2xlarge 8 64 8
r-memory optimized r5.4xlarge 16 128 8
r-memory optimized r5.8xlarge 32 256 8
r-memory optimized r5.12xlarge 48 384 8
r-memory optimized r5.16xlarge 64 512 8
r-memory optimized r5.24xlarge 96 768 8
r-memory optimized r5a.large 2 16 8
r-memory optimized r5a.xlarge 4 32 8
r-memory optimized r5a.2xlarge 8 64 8
r-memory optimized r5a.4xlarge 16 128 8
r-memory optimized r5a.8xlarge 32 256 8
r-memory optimized r5a.12xlarge 48 384 8
r-memory optimized r5a.16xlarge 64 512 8
r-memory optimized r5a.24xlarge 96 768 8
r-memory optimized r5d.large 2 16 8
r-memory optimized r5d.xlarge 4 32 8
r-memory optimized r5d.2xlarge 8 64 8
r-memory optimized r5d.4xlarge 16 128 8
r-memory optimized r5d.8xlarge 32 256 8
r-memory optimized r5d.12xlarge 48 384 8
r-memory optimized r5d.16xlarge 64 512 8
r-memory optimized r5d.24xlarge 96 768 8
r-memory optimized r5dn.large 2 16 8
r-memory optimized r5dn.xlarge 4 32 8
r-memory optimized r5dn.2xlarge 8 64 8
r-memory optimized r5dn.4xlarge 16 128 8
r-memory optimized r5dn.8xlarge 32 256 8
r-memory optimized r5dn.12xlarge 48 384 8
r-memory optimized r5dn.16xlarge 64 512 8
r-memory optimized r5dn.24xlarge 96 768 8
r-memory optimized r5n.large 2 16 8
r-memory optimized r5n.xlarge 4 32 8
r-memory optimized r5n.2xlarge 8 64 8
r-memory optimized r5n.4xlarge 16 128 8
r-memory optimized r5n.8xlarge 32 256 8
r-memory optimized r5n.12xlarge 48 384 8
r-memory optimized r5n.16xlarge 64 512 8
r-memory optimized r5n.24xlarge 96 768 8
r-memory optimized r6g.large 2 16 8
r-memory optimized r6g.xlarge 4 32 8
r-memory optimized r6g.2xlarge 8 64 8
r-memory optimized r6g.4xlarge 16 128 8
r-memory optimized r6g.8xlarge 32 256 8
r-memory optimized r6g.12xlarge 48 384 8
r-memory optimized r6g.16xlarge 64 512 8
r-memory optimized r6gd.large 2 16 8
r-memory optimized r6gd.xlarge 4 32 8
r-memory optimized r6gd.2xlarge 8 64 8
r-memory optimized r6gd.4xlarge 16 128 8
r-memory optimized r6gd.8xlarge 32 256 8
r-memory optimized r6gd.12xlarge 48 384 8
r-memory optimized r6gd.16xlarge 64 512 8
r-memory optimized r6i.large 2 16 8
r-memory optimized r6i.xlarge 4 32 8
r-memory optimized r6i.2xlarge 8 64 8
r-memory optimized r6i.4xlarge 16 128 8
r-memory optimized r6i.8xlarge 32 256 8
r-memory optimized r6i.12xlarge 48 384 8
r-memory optimized r6i.16xlarge 64 512 8
r-memory optimized r6i.24xlarge 96 768 8
r-memory optimized r6i.32xlarge 128 1024 8
r-memory optimized r6id.large 2 16 8
r-memory optimized r6id.xlarge 4 32 8
r-memory optimized r6id.2xlarge 8 64 8
r-memory optimized r6id.4xlarge 16 128 8
r-memory optimized r6id.8xlarge 32 256 8
r-memory optimized r6id.12xlarge 48 384 8
r-memory optimized r6id.16xlarge 64 512 8
r-memory optimized r6id.24xlarge 96 768 8
r-memory optimized r6id.32xlarge 128 1024 8
r-memory optimized r6idn.large 2 16 8
r-memory optimized r6idn.xlarge 4 32 8
r-memory optimized r6idn.2xlarge 8 64 8
r-memory optimized r6idn.4xlarge 16 128 8
r-memory optimized r6idn.8xlarge 32 256 8
r-memory optimized r6idn.12xlarge 48 384 8
r-memory optimized r6idn.16xlarge 64 512 8
r-memory optimized r6idn.24xlarge 96 768 8
r-memory optimized r6idn.32xlarge 128 1024 8
r-memory optimized r6in.large 2 16 8
r-memory optimized r6in.xlarge 4 32 8
r-memory optimized r6in.2xlarge 8 64 8
r-memory optimized r6in.4xlarge 16 128 8
r-memory optimized r6in.8xlarge 32 256 8
r-memory optimized r6in.12xlarge 48 384 8
r-memory optimized r6in.16xlarge 64 512 8
r-memory optimized r6in.24xlarge 96 768 8
r-memory optimized r6in.32xlarge 128 1024 8
r-memory optimized r7g.large 2 16 8
r-memory optimized r7g.xlarge 4 32 8
r-memory optimized r7g.2xlarge 8 64 8
r-memory optimized r7g.4xlarge 16 128 8
r-memory optimized r7g.8xlarge 32 256 8
r-memory optimized r7g.12xlarge 48 384 8
r-memory optimized r7g.16xlarge 64 512 8
r-memory optimized r7gd.large 2 16 8
r-memory optimized r7gd.xlarge 4 32 8
r-memory optimized r7gd.2xlarge 8 64 8
r-memory optimized r7gd.4xlarge 16 128 8
r-memory optimized r7gd.8xlarge 32 256 8
r-memory optimized r7gd.12xlarge 48 384 8
r-memory optimized r7gd.16xlarge 64 512 8

Z1D-memory optimized

All instances of this family have 8 GB of RAM per core.

type name vcpu ram (gb) ram per vcpu (gb)
z1d-memory optimized z1d.large 2 16 8
z1d-memory optimized z1d.xlarge 4 32 8
z1d-memory optimized z1d.2xlarge 8 64 8
z1d-memory optimized z1d.3xlarge 12 96 8
z1d-memory optimized z1d.6xlarge 24 192 8
z1d-memory optimized z1d.12xlarge 48 384 8

Storage optimized instances

D-storage optimized

Instances in this family have either 8 GB of RAM per core (d3) or 4 GB per core (d3en).

type name vcpu ram (gb) ram per vcpu (gb)
d-storage optimized d3.xlarge 4 32 8
d-storage optimized d3.2xlarge 8 64 8
d-storage optimized d3.4xlarge 16 128 8
d-storage optimized d3.8xlarge 32 256 8
d-storage optimized d3en.xlarge 4 16 4
d-storage optimized d3en.2xlarge 8 32 4
d-storage optimized d3en.4xlarge 16 64 4
d-storage optimized d3en.6xlarge 24 96 4
d-storage optimized d3en.8xlarge 32 128 4
d-storage optimized d3en.12xlarge 48 192 4

I-storage optimized

Almost all instances of this family have 8 GB of RAM per core. The older i2 and i3 generations come in slightly under 8.

type name vcpu ram (gb) ram per vcpu (gb)
i-storage optimized i2.xlarge 4 31 7.75
i-storage optimized i2.2xlarge 8 61 7.625
i-storage optimized i2.4xlarge 16 122 7.625
i-storage optimized i2.8xlarge 32 244 7.625
i-storage optimized i3.large 2 15 7.5
i-storage optimized i3.xlarge 4 31 7.75
i-storage optimized i3.2xlarge 8 61 7.625
i-storage optimized i3.4xlarge 16 122 7.625
i-storage optimized i3.8xlarge 32 244 7.625
i-storage optimized i3.16xlarge 64 488 7.625
i-storage optimized i3en.large 2 16 8
i-storage optimized i3en.xlarge 4 32 8
i-storage optimized i3en.2xlarge 8 64 8
i-storage optimized i3en.3xlarge 12 96 8
i-storage optimized i3en.6xlarge 24 192 8
i-storage optimized i3en.12xlarge 48 384 8
i-storage optimized i3en.24xlarge 96 768 8
i-storage optimized i4i.large 2 16 8
i-storage optimized i4i.xlarge 4 32 8
i-storage optimized i4i.2xlarge 8 64 8
i-storage optimized i4i.4xlarge 16 128 8
i-storage optimized i4i.8xlarge 32 256 8
i-storage optimized i4i.16xlarge 64 512 8
i-storage optimized i4i.32xlarge 128 1024 8

Ending notes

  • Across the families covered here, RAM per core peaks at 8 GB. Beyond that, only the number of vCPUs scales up - the RAM per core does not.
  • If AWS says an instance has 8 GB of RAM, you will actually get something like 7.7 GB to use. Some RAM is reserved for the OS and the EC2 management daemons (a rough sizing sketch follows this list).
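
A back-of-the-envelope way to account for that overhead when sizing (the 0.95 usable fraction is an assumption for illustration, not an AWS figure):

```python
def effective_ram_per_vcpu(ram_gb: float, vcpus: int, usable_fraction: float = 0.95) -> float:
    """Advertised RAM minus an assumed overhead, spread across vCPUs."""
    return (ram_gb * usable_fraction) / vcpus

# An r-family 2xlarge-style shape: 8 vCPUs, 64 GB advertised.
print(round(effective_ram_per_vcpu(64, 8), 2))   # -> 7.6 GB per vCPU, not a full 8
```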

And that's how this report was born. I know the graph sucks, but the tables do the job.

That's all. Enjoy.