
Principal Design Engineer
by Daniel Nenni on 07-25-2020 at 4:33 pm

  • Full Time
  • Austin, TX
  • Applications have closed

Website: Cadence

At Cadence, we hire and develop leaders and innovators who want to make an impact on the world of technology.
Sr. Deep Learning Compiler Engineer

Your responsibilities will include:

Develop a deep learning compiler stack that interfaces with frameworks such as Caffe2/PyTorch, TensorFlow, etc., and converts neural networks (CNNs/RNNs) into internal representations suitable for optimization.

Develop new optimization techniques and algorithms to efficiently map CNNs onto a wide range of Tensilica Xtensa processors and specialized hardware.

Implement state-of-the-art code generation (source-to-source as well as binary).

Develop supporting data compression techniques, quantization algorithms, tensor sparsity enhancements, network pruning, etc.

Devise multiprocessor/multicore partitioning and scheduling strategies.

Develop complex programs to validate the functionality and performance of the CNN application programming kit.

Help author and review product documentation.

Assist the Tensilica application engineering team in supporting customers of the product (some direct customer interaction may be required).

Required and desired qualifications:

3-5+ years of experience working on a production compiler.

A strong grasp of advanced compiler construction, target-independent optimizations and analyses, and code generation fundamentals is a must.

Expertise in software development, test, debug, and release is required.

Strong C++ skills are a must; Python is required but less critical.

Knowledge of and experience with the LLVM compiler stack is highly desirable (other state-of-the-art compilers qualify too).

Experience with high- and intermediate-level optimizations (loop optimization, polyhedral models, IR construction/transformation/lowering techniques) is a big plus.

Prior work with CNNs and familiarity with deep learning frameworks (TensorFlow, Caffe/Caffe2, etc.) is a strong plus.

Familiarity with state-of-the-art deep learning compilation approaches (XLA, Glow, etc.) is a huge plus.

Familiarity with various deep learning networks and their applications (classification/segmentation/object detection/RNNs) is a plus.

Knowledge of neural net exchange formats (ONNX, NNEF) is a bonus.

We’re doing work that matters. Help us solve what others can’t.
