
Principal Software Engineer

by Daniel Nenni on 08-16-2020 at 8:32 pm

Website: Cadence

The Cadence Data Analytics platform caters to the wide variety of data generated by multiple EDA tools and EDA data analytics applications. These applications help semiconductor customers significantly improve productivity and enable business drivers to make critical decisions.

We use a diverse technology stack such as Hadoop, Spark, Kafka and beyond.

Designing, developing, and scaling these Big Data technologies is a core part of our daily job.

The team member should be able to think outside the box and have a passion for building analytics platforms and applications that improve semiconductor customers' productivity and enable business drivers to make critical decisions.

Responsibilities:

Design and build the data analytics platform.
Design and build highly scalable data pipelines using new-generation tools and technologies such as Spark and Kafka to ingest data from various EDA tools.
Develop a strong understanding of analytics needs and proactively build applications for semiconductor customers.
Build a visualization tool chain using self-service tools like Tableau and perform data analysis to find insights in EDA data.
Collaborate with multiple cross-functional teams and work on solutions that have a larger impact on Cadence's business.
Work across multiple teams and business groups to set up platform rollout activities.
Communicate effectively, both in writing and verbally, with technical and non-technical cross-functional teams.
Interact with internal teams across many groups to lead and deliver elite products in an exciting, rapidly changing environment.
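To illustrate the kind of ingest-and-aggregate logic such pipelines perform, here is a minimal stdlib-only sketch. The tool names, record fields, and values are hypothetical; in production this roll-up would run as a Spark job (e.g., a groupBy/avg over data ingested from Kafka topics or HDFS), not in plain Python.

```python
from collections import defaultdict

# Hypothetical EDA tool run records; a real pipeline would ingest
# these from Kafka topics or HDFS rather than an in-memory list.
records = [
    {"tool": "synthesis", "design": "blockA", "runtime_s": 1200},
    {"tool": "synthesis", "design": "blockB", "runtime_s": 1800},
    {"tool": "place_route", "design": "blockA", "runtime_s": 5400},
]

def average_runtime_by_tool(rows):
    """Aggregate average runtime per tool -- the kind of roll-up
    a Spark job would express as df.groupBy("tool").avg("runtime_s")."""
    totals = defaultdict(lambda: [0, 0])  # tool -> [runtime sum, run count]
    for row in rows:
        acc = totals[row["tool"]]
        acc[0] += row["runtime_s"]
        acc[1] += 1
    return {tool: s / n for tool, (s, n) in totals.items()}

print(average_runtime_by_tool(records))
# → {'synthesis': 1500.0, 'place_route': 5400.0}
```

The same shape of computation scales out in Spark because the per-tool sums and counts combine associatively across partitions.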
Qualifications:

MS/PhD in EE, CS, or a related field, with 10+ years of experience.
Understanding of data structures and algorithms.
Experience designing and building dimensional data models to improve the accessibility, efficiency, and quality of data.
Database development experience with relational or distributed systems such as Hadoop.
Programming experience building high-quality software in Java, Python, or Scala preferred.
Experience designing and developing ETL data pipelines; proficiency in writing advanced SQL and expertise in SQL performance tuning.
Excellent understanding of development processes and agile methodologies.
Strong analytical and interpersonal skills.
Enthusiastic, highly motivated, and able to learn quickly.
Experience with, or advanced coursework in, data science and machine learning is a plus.
Work/project experience with Big Data and advanced programming languages is a plus.
Experience developing Big Data/Hadoop applications using Java, Spark, HDFS, Hive, Impala, Oozie, Kafka, and MapReduce is a huge plus.
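To make the dimensional-modeling qualification concrete, here is a minimal stdlib-only sketch of a star-schema lookup. The table and column names are hypothetical; in a warehouse this would be an SQL join between a fact table and its dimension table.

```python
# Hypothetical star schema: fact rows carry a surrogate key (tool_id)
# that resolves against a dimension table of tool attributes.
dim_tool = {
    1: {"tool_name": "synthesis", "vendor": "Cadence"},
    2: {"tool_name": "place_route", "vendor": "Cadence"},
}

fact_runs = [
    {"tool_id": 1, "runtime_s": 1200},
    {"tool_id": 2, "runtime_s": 5400},
]

def denormalize(facts, dim):
    """Resolve surrogate keys into readable attributes -- the SQL
    equivalent of:
        SELECT d.tool_name, f.runtime_s
        FROM fact_runs f JOIN dim_tool d ON f.tool_id = d.tool_id
    """
    return [
        {"tool_name": dim[f["tool_id"]]["tool_name"],
         "runtime_s": f["runtime_s"]}
        for f in facts
    ]

print(denormalize(fact_runs, dim_tool))
# → [{'tool_name': 'synthesis', 'runtime_s': 1200},
#    {'tool_name': 'place_route', 'runtime_s': 5400}]
```

Keeping descriptive attributes in narrow dimension tables and measurements in fact tables is what makes the data accessible and efficient to query, which is the point of the qualification above.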
