Secondary-Market Savings on Used NVIDIA A100 GPUs

The NVIDIA A100 Tensor Core GPU represents a pinnacle in GPU technology, designed specifically for accelerating artificial intelligence (AI) and high-performance computing (HPC) workloads.

Built on the NVIDIA Ampere architecture, the A100 is equipped with Tensor Cores tailored for matrix multiplication, the fundamental operation of AI model training and inference. With 6,912 CUDA cores and 40 GB of HBM2 or 80 GB of HBM2e high-bandwidth memory, the A100 delivers exceptional computational power.
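For a sense of how those Tensor Cores get used in practice, here is a minimal sketch in PyTorch; the device index and matrix sizes are assumptions for illustration, and any recent CUDA-enabled PyTorch build will do:

```python
import torch

# Minimal sketch: a half-precision matrix multiply, the kind of operation
# the A100's Tensor Cores accelerate. Assumes a CUDA-enabled PyTorch build
# and an A100 (or any CUDA GPU) visible at device index 0.
device = torch.device("cuda:0")
props = torch.cuda.get_device_properties(device)
print(f"{props.name}: {props.multi_processor_count} SMs, "
      f"{props.total_memory / 1024**3:.0f} GB of GPU memory")

# FP16 inputs with FP32 accumulation is the classic Tensor Core path.
a = torch.randn(4096, 4096, device=device, dtype=torch.float16)
b = torch.randn(4096, 4096, device=device, dtype=torch.float16)
c = a @ b  # dispatched to Tensor Core GEMM kernels on Ampere
print(c.shape, c.dtype)
```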

Its multi-instance GPU (MIG) capability allows for efficient partitioning of the GPU into smaller instances, enabling simultaneous processing of multiple workloads.
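As a rough illustration of how MIG shows up in software, the sketch below queries MIG mode through the pynvml bindings; the device index is an assumption, and MIG itself must be enabled by an administrator beforehand:

```python
import pynvml

# Minimal sketch: report basic device info and whether MIG mode is enabled
# on GPU 0. Assumes the NVIDIA driver and the nvidia-ml-py package
# (imported as pynvml) are installed; device index 0 is an assumption.
pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)

name = pynvml.nvmlDeviceGetName(handle)
mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
print(f"{name}: {mem.total / 1024**3:.0f} GB total memory")

try:
    current, pending = pynvml.nvmlDeviceGetMigMode(handle)
    print("MIG mode:", "enabled" if current else "disabled")
except pynvml.NVMLError:
    # GPUs or drivers without MIG support raise an NVML error here.
    print("MIG not supported on this device/driver")

pynvml.nvmlShutdown()
```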

What our customers say about us
Customer-Focused IT Hardware

Since 1995, we've poured our efforts into delivering products that match our clients' expectations and exceed their price-savings targets.

With our huge inventory and industry-leading engineers, customers tell us they trust Alta for our reliability, custom configurations, parts responsiveness, lightning-fast delivery, and savings to boot.

Request a Bid
Huge Inventory

"Aisles and aisles of servers & IT equipment. Experts at what they do."

-Ryan, President, Priority Envelope
Trust & Longevity

“We trust Alta with refurbished equipment. The longevity of the product is just the same as if I had bought it from the distributor.”

-Traci Leffner, Sovran
An IT Geek's Dream

“I have visited Alta’s warehouse, and they have just about everything. It’s kind of like an IT Geek’s Dream in there.”

-Dustin Siemers, TechFixers

NVIDIA A100 GPU Information:

The NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration at every scale, powering the world's highest-performing elastic data centers for AI, data analytics, and HPC applications. Providing up to 20X higher performance than the previous NVIDIA Volta™ generation, the A100 can efficiently scale up or be partitioned into seven isolated GPU instances with Multi-Instance GPU (MIG).

This creates a unified platform that dynamically adjusts to shifting workload demands. Supporting a broad range of math precisions, the A100 80GB model doubles the GPU memory of the original 40GB version and offers the world’s fastest memory bandwidth at over 2 terabytes per second, accelerating solutions for the largest models and datasets.
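To make that “broad range of math precisions” concrete, here is a minimal sketch of the TF32, BF16, and FP64 paths as exposed by PyTorch on Ampere; the matrix sizes and device index are arbitrary assumptions:

```python
import torch

# Minimal sketch of the precision modes the A100 supports, as exposed by
# PyTorch. Matrix sizes and device index are arbitrary assumptions.
device = "cuda:0"
x = torch.randn(2048, 2048, device=device)
w = torch.randn(2048, 2048, device=device)

# TF32: FP32 tensors routed through Tensor Cores on Ampere and newer.
torch.backends.cuda.matmul.allow_tf32 = True
y_tf32 = x @ w

# BF16 autocast, the usual mixed-precision path for training and inference.
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
    y_bf16 = x @ w

# FP64 for HPC workloads that need full double precision.
y_fp64 = x.double() @ w.double()

print(y_tf32.dtype, y_bf16.dtype, y_fp64.dtype)
```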

Request a Quote Today

NVIDIA A100 GPU Specifications:

A100 80GB PCIe

FP64: 9.7 TFLOPS

FP64 Tensor Core: 19.5 TFLOPS

FP32: 19.5 TFLOPS

GPU Memory: 80GB HBM2e

GPU Memory Bandwidth: 1,935GB/s

Max Thermal Design Power (TDP): 300W

Multi-Instance GPU: Up to 7 MIGs @ 10GB

Form Factor: PCIe dual-slot air-cooled or single-slot liquid-cooled

A100 80GB SXM

FP64: 9.7 TFLOPS

FP64 Tensor Core: 19.5 TFLOPS

FP32: 19.5 TFLOPS

GPU Memory: 80GB HBM2e

GPU Memory Bandwidth: 2,039GB/s

Max Thermal Design Power (TDP): 400W

Multi-Instance GPU: Up to 7 MIGs @ 10GB

Form Factor: SXM
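For buyers who want to confirm that a used card matches the figures above, here is a minimal sketch that reads the installed GPU's name, memory size, power limit, and compute capability back through pynvml; device index 0 is an assumption:

```python
import pynvml

# Minimal sketch: read back an installed A100's headline specs so they can
# be compared against the datasheet figures above. Assumes the NVIDIA
# driver and nvidia-ml-py are installed; device index 0 is an assumption.
pynvml.nvmlInit()
h = pynvml.nvmlDeviceGetHandleByIndex(0)

print("Name:           ", pynvml.nvmlDeviceGetName(h))
print("Memory (GB):    ", round(pynvml.nvmlDeviceGetMemoryInfo(h).total / 1024**3))
print("Power limit (W):", pynvml.nvmlDeviceGetPowerManagementLimit(h) // 1000)
major, minor = pynvml.nvmlDeviceGetCudaComputeCapability(h)
print("Compute capability:", f"{major}.{minor}")  # 8.0 for the A100

pynvml.nvmlShutdown()
```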


Request a Quote Today


Ship Today!

Global logistics and an in-stock inventory allow us to ship your systems, switches, or parts today!

Save 30-80%

Not only is our equipment reliable, but with discounts of up to 80% off new, we have thousands of satisfied clients worldwide. Meet your IT project needs for less.

Equipment Buy-Back

We buy back your excess IT equipment for a cash offer or trade-in, providing full data erasure and ITAD services.

Since 1995

Our product experts know servers, storage, and networking equipment, bringing decades of experience in IT hardware.