Leveraging Micro-Bump Pitch Scaling to Accelerate Interposer Link Bandwidths for Future High-Performance Compute Applications
Published in: 2024 IEEE Custom Integrated Circuits Conference (CICC), pp. 1-7
Format: Conference Proceeding
Language: English
Published: IEEE, 21.04.2024
Summary: Artificial intelligence (AI), and particularly large language models, is rapidly advancing capabilities in a range of fields, from personal assistants to medical diagnosis to weather forecasting. This growth is supported by high-performance compute (HPC) datacenters, whose performance has been doubling every 2.5 years [1] and is likely to improve even faster as future AI applications raise demand. These server-class systems consume energy equivalent to that of a medium-sized city, with a significant share spent on data movement between the various modules, boards, racks, and clients throughout the datacenter. The rapidly growing needs of AI-driven compute therefore require increasing communication bandwidth, at the highest possible energy efficiency, across all levels of system integration.
ISSN: 2152-3630
DOI: 10.1109/CICC60959.2024.10529010