AI/Deep Learning Accelerator – System Architect
by Admin on 05-16-2022 at 2:46 pm

Website: Andes Technology

Andes Deep Learning Accelerator (AnDLA) is a highly efficient, cost-effective AI solution for edge devices and endpoints. AnDLA features hardwired processing units for matrix multiplication, convolution, pooling, and other functions, with more to be added in the future.

You will design and build the AI subsystem with AnDLA and AndesCore processors. You will develop the AI subsystem through exploration of trade-offs in performance, power, energy, and area, and through system and memory architecture specification. You will collaborate closely with the AnDLA design team on PPA optimization from the memory access and bandwidth perspective.

=== Responsibilities ===

* Explore trade-offs in AI system architecture in terms of performance, power, energy, and area. Initiate modeling of new features in the architecture simulator.

* Work with AnDLA HW designers to optimize PPA.

* Work with algorithm architects and application developers to optimize performance.

* Work with AI tool engineers to optimize compilation results.

=== Minimum qualifications ===

* Master’s degree in Electrical Engineering, Computer Science, or equivalent practical experience.

* Experience with simulator development and micro-architecture.

* Experience optimizing/architecting software/hardware solutions for AI, image, video, and/or packet processing as well as power and performance analysis.

* Experience with one or more general-purpose programming languages, including C/C++ or Python.

* 6 years of experience in AI system architecture and/or performance.

=== Preferred qualifications ===

* Knowledge of CPU architecture.

* Experience with domain-specific accelerators, deep learning accelerator (DLA), application-specific instruction-set processor (ASIP), or co-processor design.

* Experience with memory subsystem architecture for high-performance design.

* Experience with hardware/software co-design and heterogeneous computing.
