Research on LLM Vector Dot Product Acceleration Based on RISC-V Matrix Instruction Set Extension

Bibliographic Details
Published in: Ji suan ji ke xue (Computer Science), Vol. 52, No. 5, pp. 83-90
Authors: CHEN Xuhao, HU Sipeng, LIU Hongchao, LIU Boran, TANG Dan, ZHAO Di
Format: Journal Article
Language: Chinese
Published: Editorial Office of Computer Science, 01.05.2025
ISSN: 1002-137X
DOI: 10.11896/jsjkx.241200074

Summary: Considering the high-performance and low-power requirements of edge AI, this paper designs a specialized instruction set processor for edge AI based on the RISC-V instruction set architecture, addressing practical issues in digital signal processing on edge devices. The design improves the execution efficiency of edge AI and reduces its energy consumption with limited hardware overhead, meeting the demand for efficient large language model (LLM) inference in edge AI applications. Tailored to the characteristics of large language models, custom instructions are extended on the RISC-V instruction set to perform vector dot product calculations, accelerating LLM computation on dedicated vector dot product acceleration hardware. Based on the open-source high-performance RISC-V processor core XiangShan Nanhu architecture, the vector dot product specialized instruction set processor Nanhu-vdot is implemented, which adds vector dot product calculation units and pipeline processing logic on …
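
The abstract does not give the encoding or semantics of the custom vector dot product instructions, so the C sketch below is illustrative only: a plain scalar dot product kernel of the kind that dominates the matrix-vector multiplications in LLM inference, plus a hypothetical CUSTOM-0 wrapper (guarded by NANHU_VDOT; the opcode 0x0B, operand layout, and element width are all assumptions, not the paper's actual encoding) showing how such an extension is typically invoked from software on a RISC-V toolchain. In the scheme the abstract describes, this loop body is what the dedicated dot product unit added to the XiangShan Nanhu pipeline would execute in hardware.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Reference scalar kernel: the dot product at the heart of the
 * matrix-vector multiplications that dominate LLM inference. According to
 * the abstract, this is the operation Nanhu-vdot offloads to a dedicated
 * vector dot product unit via custom RISC-V instructions. */
static float dot_product(const float *a, const float *b, size_t n) {
    float acc = 0.0f;
    for (size_t i = 0; i < n; i++) {
        acc += a[i] * b[i];
    }
    return acc;
}

#ifdef NANHU_VDOT
/* Hypothetical wrapper for a custom dot product instruction. The real
 * encoding, operand layout, and element width of Nanhu-vdot are not given
 * in the abstract; the CUSTOM-0 opcode (0x0B) and register usage below are
 * placeholders that only illustrate how a custom instruction is usually
 * exposed to C code through the GNU assembler's .insn directive. */
static inline float dot_product_vdot(const float *a, const float *b, size_t n) {
    uint32_t bits;
    __asm__ volatile(".insn r 0x0B, 0x0, 0x0, %0, %1, %2"  /* custom-0, R-type */
                     : "=r"(bits)
                     : "r"(a), "r"(b)
                     : "memory");
    (void)n;  /* vector length handling depends on the real instruction */
    float acc;
    memcpy(&acc, &bits, sizeof acc);
    return acc;
}
#endif

int main(void) {
    /* One row of a toy GEMV: the dot product of a weight row with an
     * activation vector. */
    float w[4] = {0.5f, -1.0f, 2.0f, 0.25f};
    float x[4] = {1.0f,  2.0f, 3.0f, 4.0f};
    printf("dot = %f\n", dot_product(w, x, 4));  /* expected: 5.5 */
    return 0;
}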