Adaptive Neural Accelerators in Embedded Systems
Abstract
The rapid proliferation of artificial intelligence (AI) and machine learning (ML) applications has transformed embedded systems, necessitating energy-efficient and high-performance computational solutions. Adaptive neural accelerators (ANAs) have emerged as a promising approach to meet these requirements by dynamically optimizing computation, memory access, and power consumption for neural network workloads. This review explores the architecture, design methodologies, and implementation strategies of adaptive neural accelerators in embedded systems. Key design considerations such as resource allocation, adaptive precision, heterogeneous integration, and real-time constraints are discussed. The paper also highlights recent advances, challenges, and future directions in ANA deployment, focusing on balancing energy efficiency, performance, and adaptability in resource-constrained environments.
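The adaptive-precision mechanism mentioned in the abstract can be illustrated with a minimal sketch: a runtime picks the smallest quantization bit-width whose error stays within a budget, trading arithmetic cost for accuracy per layer. The function name `select_bitwidth`, the 4/8/16-bit ladder, and the error budget are illustrative assumptions for this sketch, not details taken from the paper.

```python
import numpy as np

def select_bitwidth(weights, error_budget=0.01):
    """Illustrative adaptive-precision policy (assumed, not from the paper):
    return the smallest bit-width whose symmetric uniform quantization keeps
    the relative reconstruction error within the given budget."""
    for bits in (4, 8, 16):
        # Symmetric uniform quantizer: map max |w| to the largest level.
        scale = np.max(np.abs(weights)) / (2 ** (bits - 1) - 1)
        quantized = np.round(weights / scale) * scale
        rel_err = np.linalg.norm(weights - quantized) / np.linalg.norm(weights)
        if rel_err <= error_budget:
            return bits, quantized
    # Fall back to full precision if no low bit-width meets the budget.
    return 32, weights

# Example: choose a bit-width for a synthetic layer's weights.
rng = np.random.default_rng(0)
layer_weights = rng.normal(size=256).astype(np.float32)
bits, q_weights = select_bitwidth(layer_weights)
```

A hardware ANA would implement an analogous decision in its control logic, selecting datapath precision per layer or per tensor rather than re-quantizing in software.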