Microsoft announces its first AI chip, Maia 100


16:00, 30.08.2024

Microsoft first spoke about developing its own AI accelerator chip, named Maia, at the Ignite 2023 conference, and has now shared detailed specifications for Maia 100. Maia 100 is one of the largest processors built on TSMC's 5 nm node and is designed specifically for heavy AI workloads in Azure.


Maia 100 has the following features (a quick bandwidth-vs-compute check based on these figures follows the list):

  • die size - 820 mm²;
  • package - TSMC N5 process with CoWoS-S interposer technology;
  • HBM bandwidth/capacity - 1.8 TB/s @ 64 GB HBM2E;
  • peak dense tensor compute - 3 POPS (6-bit), 1.5 POPS (9-bit), 0.8 POPS (BF16);
  • L1/L2 - 500 MB;
  • backend network bandwidth - 600 GB/s (12x400 GbE);
  • host bandwidth (PCIe) - 32 GB/s, PCIe Gen5 x8;
  • design TDP - 700 W;
  • provisioned TDP - 500 W.
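
As promised above, here is a rough bandwidth-vs-compute check in Python. It is only a back-of-the-envelope sketch: it assumes the peak dense BF16 figure of 0.8 POPS (800 TFLOP/s) and the full 1.8 TB/s of HBM bandwidth, both of which are theoretical maxima.

    # Back-of-the-envelope check using the headline figures above.
    # Assumes peak dense BF16 (0.8 POPS = 8e14 FLOP/s) and full HBM bandwidth (1.8 TB/s).
    peak_bf16_flops = 0.8e15          # FLOP/s
    hbm_bytes_per_s = 1.8e12          # bytes/s
    flops_per_byte = peak_bf16_flops / hbm_bytes_per_s
    print(f"~{flops_per_byte:.0f} BF16 FLOPs per HBM byte are needed to stay compute-bound")

The resulting ratio (roughly 440 FLOPs per byte) simply shows that, as with other large accelerators, only compute-dense kernels can keep the tensor units fully busy from HBM alone.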


The Maia 100 platform is vertically integrated to optimize cost and performance, combining custom server boards, purpose-built racks, and a software stack tuned for the hardware.


The Maia 100 SoC has the following architecture:

  • A high-speed tensor unit (16xRx16) for training and inference, supporting a wide range of data types.
  • A vector processor: a loosely coupled superscalar engine built with a custom instruction set architecture (ISA) to support a wide range of data types, including FP32 and BF16.
  • A Direct Memory Access (DMA) engine supporting different tensor sharding schemes.
  • Hardware semaphores that enable asynchronous programming (a conceptual sketch of this pattern follows the list).
  • Software-managed L1 and L2 for better data utilization and energy efficiency.
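
The last three points describe a classic software-pipelining model: DMA transfers fill one scratchpad buffer while the tensor and vector units work on another, with semaphores keeping the two sides in step. The Python sketch below is purely conceptual; the buffer names, the threading model, and the use of threading.Semaphore stand in for Maia's hardware mechanisms and are not part of any real Maia API.

    # Conceptual double-buffering sketch: DMA overlapped with compute via semaphores.
    # Everything here is illustrative; it mimics the pattern, not Maia's interface.
    import threading

    NUM_TILES = 8
    buffers = [None, None]                                      # two scratchpad slots
    empty = [threading.Semaphore(1), threading.Semaphore(1)]    # slot is free for DMA
    full = [threading.Semaphore(0), threading.Semaphore(0)]     # slot holds a fetched tile

    def dma_engine():
        # Pretend DMA: copies tiles from "HBM" into the scratchpad slots.
        for i in range(NUM_TILES):
            slot = i % 2
            empty[slot].acquire()            # wait until compute has released this slot
            buffers[slot] = [float(i)] * 4   # simulate a tile arriving from HBM
            full[slot].release()             # signal: tile i is ready

    def tensor_unit():
        # Pretend compute: consumes tiles while the next transfer is in flight.
        total = 0.0
        for i in range(NUM_TILES):
            slot = i % 2
            full[slot].acquire()             # wait for the DMA-completion signal
            total += sum(buffers[slot])      # "compute" on the tile
            empty[slot].release()            # hand the slot back to the DMA engine
        print("result:", total)

    producer = threading.Thread(target=dma_engine)
    consumer = threading.Thread(target=tensor_unit)
    producer.start(); consumer.start()
    producer.join(); consumer.join()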


Maia 100 uses an Ethernet-based interconnect with a custom RoCE-like protocol for ultra-high-bandwidth computing, supporting all-gather and reduce-scatter bandwidth of up to 4800 Gbps and all-to-all bandwidth of up to 1200 Gbps.
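
For a sense of scale, the sketch below converts the headline all-gather figure into a transfer time for a hypothetical payload. Both the 64 GB payload and the assumption that the full 4800 Gbps is actually achieved are illustrative.

    # Illustrative timing: how long an all-gather of a 64 GB payload would take
    # if the quoted 4800 Gbps collective bandwidth were fully achieved.
    allgather_gbps = 4800            # all-gather / reduce-scatter bandwidth, Gbit/s
    payload_gb = 64                  # hypothetical payload, GB (matches one chip's HBM capacity)
    seconds = payload_gb * 8 / allgather_gbps
    print(f"all-gather of {payload_gb} GB at {allgather_gbps} Gbps takes about {seconds:.2f} s")

At these rates the transfer takes on the order of a tenth of a second, which is why collective bandwidth matters as much as raw compute for distributed training.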


The Maia SDK enables quick porting of PyTorch and Triton models to Maia, with tools for easy deployment to Azure OpenAI Services. Developers can keep their models in PyTorch and program the chip either in the Triton domain-specific language for DNNs or through the native Maia API for maximum performance.
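
As a reference point for what Triton programming means here, below is a minimal, generic Triton kernel (element-wise addition). Triton is the open, Python-embedded DSL that the SDK consumes; compiling and launching such a kernel on Maia hardware would go through the Maia backend and tooling, which are not shown, so this is only the standard form of a Triton kernel.

    # A minimal, generic Triton kernel (vector add). The Maia-specific backend and
    # deployment steps are not shown; this is standard Triton written in Python.
    import torch
    import triton
    import triton.language as tl

    @triton.jit
    def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
        pid = tl.program_id(axis=0)                            # one program per block of elements
        offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)  # indices handled by this program
        mask = offsets < n_elements                            # guard the final partial block
        x = tl.load(x_ptr + offsets, mask=mask)
        y = tl.load(y_ptr + offsets, mask=mask)
        tl.store(out_ptr + offsets, x + y, mask=mask)

    def add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        out = torch.empty_like(x)
        n = out.numel()
        grid = (triton.cdiv(n, 1024),)
        add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
        return out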
