This week, NVIDIA officially snapped their suspenders, announcing the next GPUs in their series sporting the latest graphics-crackin’ Ampere microarchitecture: the RTX A6000 Pro Viz GPU and the A40 Data Center GPU.
They follow the mainstream launch of the unexpected, in-demand, hard-to-come-by RTX 3080 and RTX 3090 and, later this month, the RTX 3070.
NVIDIA Ampere GPU Specs
If you love the 1st-gen RTX GPUs, the next gen is going to make you all sorts of giddy. The Ampere cards build on the Quadro RTX capabilities, introducing 3rd-generation Tensor Cores to speed up AI operations, 2nd-generation RT Cores to speed up ray tracing, and 3rd-generation NVLink to speed up multi-GPU scaling.
Though new performance data is coming out daily, here’s a mish-mash of specs that are, strangely enough, spread across various sites, collected here for your convenience.
Ampere GPU Comparison
| | A40 | A6000 | RTX 3070 | RTX 3080 | RTX 3090 |
|---|---|---|---|---|---|
| Memory Clock | 14.5 Gbps GDDR6 | 16 Gbps GDDR6 | 14 Gbps GDDR6 | 19 Gbps GDDR6X | 19.5 Gbps GDDR6X |
| Memory Bandwidth | 696 GB/s | 768 GB/s | 448 GB/s | 760 GB/s | 936 GB/s |
| Memory Interface Width | 384-bit | 384-bit | 256-bit | 320-bit | 384-bit |
| Tensor Cores (3rd gen) | 336 | 336 | 184 | 272 | 328 |
| RT Cores (2nd gen) | 84 | 84 | 46 | 68 | 82 |
| Tensor Performance | ? | ? | 82 TFLOPS | 119 TFLOPS | 143 TFLOPS |
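If you want a sanity check on the bandwidth figures above, peak GDDR6 bandwidth follows directly from the per-pin data rate and bus width. A minimal Python sketch (the function and the spec dict are ours, pulled from the table, not from any NVIDIA tool):

```python
def memory_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: per-pin data rate (Gbps) x bus width (bits) / 8 bits per byte."""
    return data_rate_gbps * bus_width_bits / 8

# Memory clock and interface width from the table above
gpus = {
    "A40":       (14.5, 384),
    "RTX A6000": (16.0, 384),
    "RTX 3090":  (19.5, 384),
}

for name, (rate, width) in gpus.items():
    print(f"{name}: {memory_bandwidth_gbs(rate, width):.0f} GB/s")
# A40: 696 GB/s, RTX A6000: 768 GB/s, RTX 3090: 936 GB/s
```

The numbers line up with the table, which is a good sign the clock and bus-width specs are consistent.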
You’ll notice the A6000 and A40 sit roughly between the GeForce 30 Series RTX 3080 and RTX 3090, but with VRAM gains akin to the Turing-era Quadro RTX 8000 while more than doubling its CUDA core count. In fact, it may be more helpful to compare them to the high-end Turing GPU, so let’s do that.
RTX GPU Comparison
| | A40 | RTX A6000 | RTX 8000 |
|---|---|---|---|
| Memory Clock | 14.5 Gbps GDDR6 | 16 Gbps GDDR6 | 14 Gbps GDDR6 |
| Memory Bandwidth | 696 GB/s | 768 GB/s | 672 GB/s |
| Memory Interface Width | 384-bit | 384-bit | 384-bit |
| Tensor Performance | ? | ? | 130.5 TFLOPS |
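How big is the generational jump? Using only the bandwidth numbers in the table above, a quick back-of-the-envelope comparison (variable names are ours):

```python
# Percent uplift of the Ampere pro cards over the Turing RTX 8000,
# using the memory-bandwidth figures from the table.
RTX_8000_BW = 672  # GB/s (Turing Quadro RTX 8000)
ampere = {"A40": 696, "RTX A6000": 768}  # GB/s

for name, bw in ampere.items():
    uplift = (bw / RTX_8000_BW - 1) * 100
    print(f"{name}: +{uplift:.1f}% memory bandwidth vs RTX 8000")
# A40: +3.6%, RTX A6000: +14.3%
```

Bandwidth alone is a modest bump; the bigger Ampere story is in the core counts and VRAM noted above.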
NVIDIA Ampere GPU Design
Perhaps just as sexy as all these numbers is the new GPU design. With the sleek two-tone black/gold style, it is now a certifiable travesty to hide these GPUs inside a dust-ridden tower or rack.
Read the rest at SolidSmack