Career Change Advice and how EDA Hiring has Changed
by Mark Gilbert on 08-05-2018 at 7:00 am

I think most of us can attest that changing jobs is one of the most stressful decisions we make as our careers progress. It is rarely an easy decision, though admittedly it is wonderful when it accomplishes your career and life goals. Having the right, well-thought-out expectations is the best way to ensure success and to avoid wasting your time or anyone else’s. Here is the best way to approach a career change…

First, think about where you are now. Is this the role you want for the next several years? Is the team you work with one you enjoy, and do they make you feel comfortable about coming to work every day? Does the company have a product that interests you, and is the company doing well? Does its product work, scale and have longevity in the market? Does the company treat its employees well? Are you looking for something different, and what might that difference look like? These questions and more should always be considered before making the decision to start looking or interviewing.

Once you have decided to start exploring, or perhaps to follow up on a position you just learned of through a friend or recruiter, make sure you are prepared for all of the next steps. While your technical abilities will certainly make or break the outcome, other key preparations can help you maximize your success through the process. One key ingredient to add to the formula is a highly specialized industry recruiter, which can increase your chances of a great and lasting outcome. To start, a good recruiter will help you examine your reasons for wanting to make a career move. The right recruiter can prepare you with insights into the process, the dos and don’ts, and the timing of what to say when, all of which helps ensure that you present yourself in the best light.

A successful, experienced recruiter will walk you through the interviewing process, using their own proven formula to prepare you at each step and further improve your chances of a successful outcome. Proper interviewing technique is key to a good outcome, regardless of how good you are technically. Just as important, a good recruiter with relationships at the hiring company should handle all of the post-interview follow-up on both sides so that the process keeps moving forward.

Bottom line: getting hired in today’s job market is no easy task. Companies are more particular than ever about who they hire. The good news is that far fewer candidates are looking, so the pool is smaller and your chances are a little better. Companies want people who can make an immediate contribution. When applying, save yourself time by applying to positions within your wheelhouse, and perhaps a few spokes out. In the tech world we all play in, it is all about innovation and new approaches to problem solving. Know your strengths and play to them. Make sure your resume has all the key ingredients to excite and interest the person reading it. If your resume does not catch their eye, then regardless of how good you are, you have no chance of getting the call.

In my next blog, I will cover more good reasons for considering a career change, as well as tips for structuring a strong resume.


An update on the Design Productivity Gap
by Tom Dillinger on 08-03-2018 at 12:00 pm

Over a decade ago, a group of semiconductor industry experts published a landmark paper as part of the periodic updates to the International Technology Roadmap for Semiconductors, or ITRS for short (link). The ITRS identified a critical design productivity gap. The circuit capacity afforded by the Moore’s Law pace of technology advancement was growing faster than the capabilities of EDA tools and flows to support the associated design complexity. The figure below captures the ITRS projections at that time.

Note that the ITRS experts recognized the increasing influence of design IP (and its reuse) to help mitigate the gap. Nevertheless, this ITRS report served as a ‘call to action’ for the EDA industry to address methods to improve design productivity.

Fast-forwarding a decade, how has the landscape changed? Moore’s Law has continued to enable tremendous growth in circuit density, a testament to the expertise and ingenuity of fabrication engineers and equipment manufacturers. Note that this process technology evolution has been achieved without a reduction in reticle size, truly an amazing achievement.

Many EDA tools have been re-architected (and design models optimized) in support of multi-threaded (and in some cases, distributed) computation for very large datasets.

EDA platforms have been introduced, integrating analysis algorithms into implementation flows to improve optimization decisions, and thus overall design closure schedules. In support of these timing, noise, and power optimizations, the design model (and cell library model) complexity has grown – this adds to the stress on dataset size.

I was curious to know, “How has the industry progressed in closing the productivity gap? What are the areas where the gap remains?”

At the recent DAC55 in San Francisco, Cadence assembled a panel of industry and EDA experts to address the topic:

“Monster Chips – Scaling Digital Design Into the Next Decade”

The panel participants were:

 

  • Chuck Alpert, Senior S/W Group Director for Genus Synthesis, Cadence
  • Anthony Hill, Fellow and Director for Processor Technology, Texas Instruments
  • Antony Sebastine, Principal Engineer with the Systems Group, ARM
  • Anand Sethuraman, Senior Director, ASIC Products, Broadcom
  • Patrick Sproule, Director of Hardware, P&R Methodology, Nvidia Corp.

Here are some of the insights the panel shared – both acknowledging the strides made in addressing the design productivity gap and highlighting remaining challenges.

Advances
“The growing use of (multiple instances of) hard IP has required significant focus on macro placement. Automated macro placement in design blocks has improved significantly – routing congestion is reduced.”

“DRC throughput – especially approaching tapeout – is always time-critical. The (distributed) physical design verification tools have kept runtimes in check.”

“The ECO-based flows to close on timing issues have improved substantially.”

“The signal buffering and advanced layer selection algorithms in GigaOpt provide better physical implementations—of course, pre-route to post-route correlation is always a challenge.”

Challenges

“Design implementation capacity must be improved. The Quality of Results (QoR) for blocks greater than 2 million instances tends to degrade substantially.”


“Agreed. We are constrained to block sizes of 1M-3M instances to achieve suitable turnaround time and QoR. The design partitioning overhead in floorplanning and constraint management is cumbersome. We need to be able to support block sizes of 20M-30M instances to keep pace with the technology.”


“We are utilizing physical design servers with 40 processors and 768GB to 1TB memory, but the (multi-threaded) jobs are still limited in throughput.”

“A flow of increasing importance is the calculation of dynamic P/G grid IR voltage drop, and the impact on timing margins. The tools need to have the capacity to support activity-based analysis on very large networks.”

Expanding upon that last comment, the significance of dynamic voltage drop (DvD) on design closure was a common theme throughout the DAC conference, both in technical presentations and on the EDA vendor exhibit floor. Current SoC designs commonly incorporate several features that increase the sensitivity of timing analysis to DvD effects:

 

  • dynamic operation voltage domain optimization (DVFS)
  • non-linear cell and interconnect delay dependence upon (reduced) P/G voltages
  • (aggressive) library cell track allocation to the P/G rails

At advanced process nodes with aggressive voltage domain and power optimizations, static IR drop P/G analysis (with fixed cell characterization margins) will be increasingly problematic.
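To make that sensitivity concrete, below is a minimal sketch (my own illustration with made-up numbers, not anything presented at DAC) of how cell delay reacts to local supply droop under the classic alpha-power-law model, where delay grows roughly as V / (V - Vth)^alpha:

```python
# Toy illustration of why dynamic voltage drop matters for timing closure:
# under the alpha-power-law model, cell delay scales roughly as
#     delay ~ Vdd / (Vdd - Vth)^alpha
# All parameter values below are illustrative, not from any real library.

def relative_delay(vdd, vth=0.35, alpha=1.3):
    """Relative cell delay at a given local supply voltage (volts)."""
    return vdd / (vdd - vth) ** alpha

nominal_vdd = 0.75
for droop_pct in (0, 5, 10):
    v = nominal_vdd * (1 - droop_pct / 100.0)
    ratio = relative_delay(v) / relative_delay(nominal_vdd)
    print(f"{droop_pct:2d}% local droop -> delay x{ratio:.2f}")
```

Even a 5% local droop inflates delay noticeably in this toy model, and the growth is non-linear, which is why fixed static-IR characterization margins become either pessimistic or risky as supplies scale down.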

Summary

Chuck A. from Cadence offered a unique perspective to the comments from the other panel members. “Cadence wants to partner with design teams working on difficult blocks. We evaluate our implementation platform QoR on our internal testcases, but would benefit from a closer collaboration, to better understand the issues presented by specific designs.”

The takeaways from the discussion that I noted are:

Several EDA tool areas have made significant improvements in designer productivity and in support for larger dataset sizes – e.g., analysis-driven optimization algorithms, multi-threaded and distributed algorithms.

Nevertheless, designers are continuing to face the productivity gap identified a decade ago – support for block sizes of 20M-30M instances is required to keep pace with Moore’s Law. Specifically, physical design implementation flows require (academic and industry) research focus to be able to accommodate larger block sizes. Collaborative partnerships between (leading-edge) design teams and flow developers are required.

Patrick S. from Nvidia reminded the audience that there will be an increasing demand to integrate reliability-based analysis algorithms into implementation platforms – e.g., EM and aging mechanisms. The goal would be to exit implementation ready for reliability sign-off, much as EDA platforms strive for timing/noise signoff quality.

Alas, the EDA productivity gap is still present – a factor of 10X improvement in throughput is needed.

At DAC, I bumped into a professor colleague who lamented that EDA academic research funding was drying up, as there is a general perception that “all big EDA problems have been adequately addressed… the money is going to proposals associated with machine learning and AI.” In actuality, the challenges of efficient design data modeling, development of (tightly-correlated) optimization algorithms, and opportunities for improved (distributed) processing are more important than ever.

I guess you could consider this to be a 2018 version of the ITRS call to action.

-chipguy


Speak N Spell
by Daniel Nenni on 08-03-2018 at 7:00 am

This is the ninth in the series of “20 Questions with Wally Rhines”

Success has many authors and the Speak & Spell product from Texas Instruments generated lots of write-ups to demonstrate this. For most of the semiconductor industry, results of innovation were not apparent to the masses but, for the consumer electronics that emerged in the 1970s, the innovations were visible, exciting and fun. My job in the Consumer Products Group (CPG) was Engineering Manager with responsibility for the design and development of all the chips and plastic cases used in TI’s fledgling consumer business. In early 1977, almost all of CPG was moved from Dallas to Lubbock. From then on, we performed the logic design for our chips in Lubbock while the physical layout was done in Houston under the direction of K. Bala, an energetic, driving manager who was perfect for the task of juggling dozens of complex designs while competing for resources with TI’s traditional semiconductor business.


From left to right, Gene Frantz, Richard Wiggins, Paul Breedlove and Larry Brantingham

Paul Breedlove was in charge of product development for the “Consumer Calculator Division” which was managed by Jim Clardy (later CEO of Harris Semiconductor and then co-founder and CEO of Crystal Semiconductor, which ultimately became Cirrus Logic). Jim and Paul had a miserable job. Japanese manufacturers like Casio, Sharp, Toshiba and many more could design and manufacture great “four function” (add, subtract, multiply and divide) calculators for less than TI could. By late 1977, TI was reselling Toshiba four function calculators with a TI label because they were more profitable than our own. Paul kept searching for a differentiating alternative and he found it by attending one of the monthly “Research Reviews” that were held in the Central Research Laboratories (CRL) and open to TI employees from other parts of the company. At this particular review, Richard Wiggins presented the technology he was developing for speech synthesis. He was approaching a capability of producing understandable speech at a data rate of only 1000 bits per second. Paul was fascinated. Why not develop a product that took advantage of speech to differentiate, or augment, traditional consumer electronic products? Paul was helped along by the analogy of one of the few really profitable, successful consumer calculators called “The Little Professor” which was an arithmetic learning aid for children. Every year we expected revenue for the Little Professor to decline but it seemed to have a life of its own. We were beginning to realize that parents will pay any price to give their children an advantage in the education system.

As an experiment in innovation, TI had recently established a funding mechanism called the “Idea Program” where any employee could propose an idea for a product or technology and, if approved, receive $25,000 of funding to demonstrate feasibility. Paul submitted an Idea Program proposal (probably because the Consumer Calculator Department was really squeezed for funding) and Ralph Dosher, the CPG Controller, approved it. That’s when I became involved. Paul needed someone to figure out how to design chips that could be used in the product. Larry Brantingham worked in the Logic Design Branch of the Engineering Department I ran and he became the obvious choice.

Speech synthesis chips were under development at National Semiconductor and other companies but success was very limited because the current state-of-the-art N-Channel MOS, or NMOS, technology was just too slow to achieve the needed performance for this computationally intense application. What is so remarkable about Speak ‘N Spell is that Larry didn’t use the higher performance NMOS technology but instead used the much slower P-Channel MOS, or PMOS. Why, you might ask? Very simple. Larry didn’t know how to design with NMOS. In addition, CPG was in a continuous battle with the Semiconductor Group over pricing of chips and Morris Chang, Semiconductor Group VP, became tired of all the arguments and settled the dispute by offering CPG a flat $25 price per two-inch wafer of PMOS, which was a five photomask process at that time. If Larry had learned how to design with NMOS, the program would have failed because the cost of NMOS wafers would have been too high. While the artificially subsidized price for PMOS made the cost feasible, the performance seemed much too slow.

Larry went to work with Richard Wiggins designing a pipelined multiplier in PMOS. Responsibility for the product was moved from Consumer Calculators to Specialty Products, which was run by Kirk Pond (later CEO of Fairchild Semiconductor) because, although Consumer Calculators were struggling, the Specialty Products Division was struggling even more to find new product possibilities. Gene Frantz was enamored with the product and quickly became product manager, tasked with all the issues of choosing product features, name, marketing, etc., as well as managing the overall product development (Figure One).

Even more ridiculous than designing a pipelined multiplier for the synthesis chip was the task of designing read-only memory (ROM) chips big enough to store the pre-recorded speech vocabulary. When I presented the proposed chip design program to the TI Corporate Development Committee, Dean Toombs, head of R&D for the Semiconductor Group, argued that the engineers in CPG had gone crazy. Semiconductor Group was struggling to produce the NMOS 2716 ROM at very low yields. If they couldn’t produce a 16K ROM, how could CPG design a 128K bit one? What Dean overlooked, however, was the fact that the TMS 2716 needed an access time of 450 nanoseconds while our speech chips could be dramatically slower. PMOS was also an older, more mature technology and was easier to produce than the highly advanced NMOS. So we received approval to go ahead, along with corporate funding to develop a four-chip system with a synthesizer, a controller and two 128K bit ROMs (Figure Two).

As the actual die size increased beyond the original estimates, the estimated cost of the chips increased. At one point, Kirk Pond threatened to kill the whole program because it was well known that $40 was a critical price point for consumer products, above which both spouses had to approve the purchase. By the time Speak ‘N Spell was introduced, the suggested retail price was $60 and it sold so well that we quickly raised it to $65. Like the Little Professor, parents just couldn’t resist the purchase of an educational aid that would help their children spell, even though the synthesized speech sounded more like a robot than a human. Shortly after the introduction, we were invited to show the product on the Today Show, the most popular TV news program of the day. Charley Clough, our highly articulate and lovable head of Semiconductor Group sales, walked Jane Pauley through the steps of using Speak ‘N Spell while Gene Frantz was backstage with backup units since the reliability of our early production units wasn’t very good.

Speak & Spell took the world by storm and became a great story of corporate innovation. Not long after the introduction, I moved to Houston and took over the Microprocessor Division where we developed the TMS 320 single chip digital signal processor, or DSP. Although I was the only one who worked on both programs, there was at least a remote connection to the theme of digital signal processing in TI’s speech synthesis success. And DSP became the cornerstone of TI’s next wave of growth.

The 20 Questions with Wally Rhines Series


Netspeed and NSITEXE talk about automotive design trends at 55DAC
by Tom Simon on 08-02-2018 at 12:00 pm

DAC is where both sides of the design equation come together for discussion and learning. This is what makes attending DAC discussion panels so interesting; you are going to hear from providers of tools, methodologies and IP as well as those who need to use them to deliver working solutions. There are few places where the interplay between design activity and the tools necessary to accomplish it are more important than in the rapidly changing area of automotive electronics. For decades cars basically had coils and capacitors. Then came radios. But in just the last few years the complexity of automotive electronics has leapt to the upper end of the spectrum.

As usual power efficiency and performance are essential, however, in addition, safety has become a top line requirement. So, it was interesting to see the “fireside chat” hosted at DAC by Netspeed on the topic of “Design for Safety and Reliability – ADAS and Autonomous Vehicle SoCs”. Netspeed CEO Sundari Mitra interviewed Hideki Sugimoto from NSITEXE, the Japan based supplier of SOC IP for automotive autonomous driving systems.

Sundari started the session off by pointing out that the rapid growth in automotive semiconductors is fueled largely by the development of ADAS and autonomous driving systems. She asked what the major trends are, and what needs and challenges they create. Sugimoto replied that he has seen three major trends.

The top OEMs and Tier One vendors have very specific performance, safety and power goals. It used to be that ASSPs worked well enough, but off the shelf parts, even when tailored for specific markets, are not good enough anymore. So now all the players are involved in some way with ASIC development.

Next, within ASIC design, there has been a need to shift away from hand-tuning each part of a chip to achieve performance goals. This was fine when SOCs had fewer blocks, but now the better way to meet requirements is to start at the top level and map out requirements thoroughly. The requirements need to be specified in terms of end use-case performance or power goals.

The last trend is the move towards heterogeneous computing. The SOCs needed for ADAS or autonomous vehicles are extremely complex and their performance cannot be improved just by adding more CPUs or GPUs. The right way is to carefully match up the right mix of the two and also add special-purpose processors and accelerators for things like machine learning. Sugimoto’s company NSITEXE has a strong track record in this area, so these observations are borne out of direct experience.

Sundari followed with a question on the specifics of a requirements driven approach to putting chips together. Sugimoto cited the need to really look at the system level and not just focus on the chip itself at the outset.

The main performance factors are throughput, latency and end to end QoS. At the same time, all of this needs to be done without compromising safety. Sugimoto pointed out that you can’t bolt on the safety elements later and you also cannot achieve your performance goals without considering the safety requirements early in the process.

Sugimoto feels that there are three must-have capabilities in an architectural design solution for automotive chips. They are (a) handling heterogeneous compute elements and coherency, (b) delivering high QoS across all types of workloads and (c) ASIL-D and ISO 26262 certification. Every one of these affects safety, security and reliability. The maximum amount of data must be extracted from the sensors, which means higher bandwidth and more processing. Sugimoto emphasized that heterogeneous computing is really the only solution. However, this makes the chip architecture more complex. Sugimoto sees Network on Chip as a critical tool for managing this complexity while still being able to achieve design goals. NoCs can help provide determinism in these systems through the addition of memory coherency and QoS, for instance. Naturally the NoC will also have to comply with ISO 26262.
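As a rough illustration of the kind of QoS determinism a NoC can provide, here is a toy arbiter sketch of my own (not NSITEXE’s or Netspeed’s design): safety-critical traffic gets strict priority, while the remaining classes share bandwidth through a simple weighted-credit scheme.

```python
# Toy NoC arbiter sketch: safety-critical traffic gets strict priority,
# remaining bandwidth is shared by a simple weighted-credit scheme.
# Purely illustrative; not based on any vendor's NoC implementation.
from collections import deque

class QosArbiter:
    def __init__(self, weights):
        # weights: {class_name: weight}; "critical" is served first if present
        self.weights = weights
        self.credits = {c: 0 for c in weights}
        self.queues = {c: deque() for c in weights}

    def push(self, traffic_class, packet):
        self.queues[traffic_class].append(packet)

    def grant(self):
        """Pick the next packet to send on this cycle, or None if idle."""
        if self.queues.get("critical"):
            return self.queues["critical"].popleft()
        # replenish credits for backlogged classes, serve the best-funded one
        for c, w in self.weights.items():
            if self.queues[c]:
                self.credits[c] += w
        candidates = [c for c in self.weights if c != "critical" and self.queues[c]]
        if not candidates:
            return None
        best = max(candidates, key=lambda c: self.credits[c])
        self.credits[best] -= sum(self.weights.values())
        return self.queues[best].popleft()

arb = QosArbiter({"critical": 0, "camera": 3, "best_effort": 1})
for p in ["cam0", "cam1", "be0"]:
    arb.push("camera" if p.startswith("cam") else "best_effort", p)
arb.push("critical", "brake_cmd")
print([arb.grant() for _ in range(4)])  # brake_cmd first, then weighted sharing
```

In a real NoC this policy lives in hardware at every router, and the weights, virtual channels and coherency hooks are what the architect tunes against the end use-case requirements discussed above.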

Looking at the conversation, it is clear that automotive is now the killer app for advanced SOC design tools. This is where the greatest challenges are. The combination of unique environmental, power, safety and functional factors makes this a very interesting prospect. I have been covering Netspeed for some time now and can see how they are looking at the automotive market as a space where their NoC technology can make a big difference. Their website has more information on designing SOCs for automotive applications. It’s worth taking a look.


KLAC gets an EUV Kicker
by Robert Maire on 08-02-2018 at 7:00 am

KLAC put up a great quarter coming in at revenues of $1.07B and EPS of $2.22. Guidance is for $1.03B to $1.1B with EPS of $2 to $2.32. Both reported and guided were at the high end of the range and above consensus. We had suggested in our preview notes that KLAC would be the least impacted of the big three (AMAT, LRCX & KLAC) semi equipment companies by the current memory volatility.

That has turned out to be the case, as KLAC will likely see less than a 10% slowdown in shipments and essentially zero impact on revenue and earnings, versus the 25% drop in revenues expected by Lam along with an EPS drop below the low end of the range. We still predict that AMAT will fall somewhere between KLAC’s zero revenue impact and Lam’s 25% fall off, probably coming in at a 10%-15% drop in revenues.

KLAC is obviously much less exposed to the more volatile memory sector and thus didn’t see as much of the sharp increase or sharp decline, and has historically been a more consistent performer. KLAC reported record high shipments, revenues and EPS in the quarter and continues its juggernaut-like roll. The company also was more forceful about its view of the December quarter, calling for a “sharp snap back” rather than the more nebulous “positive trajectory” offered up by Lam.

We expect memory capex spending to continue to be quickly modulated by pricing and demand trends, with either long or short term volatility and cyclicality, as the industry keeps one foot on the accelerator and the other on the brakes. The very good balance and diversification that KLAC has will provide more consistent longer term performance and likely deserves a higher multiple, as compared to others, for the reduced volatility. We also expect ASML to have a similar performance to KLAC, with minimal memory impact given its steadier long term history.

Market positioning remains very strong as the industry needs metrology and yield management tools to get the process fine tuned before they go out and buy the process tools that vary more with demand and output.

An EUV kicker
The transition to EUV is anything but easy. Aside from ASML, the obvious beneficiary of the transition, we think that KLAC could see as much benefit or perhaps even more, especially in the current early stages of the transition where the problems are the greatest. ASML’s EUV revenue also replaces its DUV revenue, so there is a bit of an offset which KLAC does not see.

This compares to dep and etch players who have nothing but downside as the number of overall dep and etch steps will without doubt be lower under EUV processes as compared to DUV processes. We think that KLA is another way to play the EUV transition for those who do not want to put all their eggs in ASML’s basket.

Financials are best in the industry
With industry-leading gross and operating margins coupled with ATM-like cash generation, what’s not to like? The financials underscore the pricing power of being the market leader in the process control part of the industry.

More diversification coming
The pending Orbotech acquisition will bring even more diversification to KLA’s model and further insulate it from volatile single end markets. We think this addition of SAM to an already strong base will further strengthen the story.

China exposure helps KLA
KLA is likely seeing more benefit than others from the growing China market, as a lot of money is being spent in China by chip companies trying to come up to speed and perfect processes, something that is almost impossible to do without KLA tools. While China has more alternative sources for process tools, there are really no alternatives to KLA tools, which they have to keep buying in quantity.

The Stock
We continue to favor KLAC over both LRCX and AMAT. Not only are the financials way better, but the risk is much lower, as demonstrated this quarter, when the memory volatility barely registered as a blip at KLAC but is causing heartburn elsewhere. The company well deserves a higher multiple for this more consistent performance, along with better financials and diversity.

This more consistent performance may not be as flashy on the way up but avoids the near term pain and stock gyrations on the way down. In general we feel a lot safer in KLAC stock right now and probably for a couple of quarters until we get better visibility on memory. The near term factors are most aligned with KLAC right now and their stock should be a better performer.


Accelerating the PCB Design-Analysis Optimization Loop
by Tom Dillinger on 08-01-2018 at 12:00 pm

With the increasing complexity and diversity of the mechanical constraints and electrical requirements in electronic product development, printed circuit board designers are faced with a number of difficult challenges:

  • generating accurate (S-parameter) simulation models for critical interface elements of the design – e.g., connectors, sockets, and (twisted-pair) cables
  • developing comprehensive simulation/analysis models for entire packaging solutions – e.g., rigid-flex board topologies
  • accelerating the design optimization-analysis feedback loop

Given the aggressive schedules allocated to PCB development, typically dependent upon completion of key IP/SoC/module design milestones, the last challenge above is especially critical. The evaluation of interface compliance measures – e.g., timing/voltage margins, eye diagrams, bit error rate estimates – may necessitate board design updates, which then need to be re-analyzed. Minimizing the time and engineering resource (and the risk of an error) to close on implementation-extraction-analysis iterations is crucial.
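As a rough feel for the first challenge above, and for the kind of quick check that gets repeated in every implementation-extraction-analysis iteration, here is a minimal sketch. It assumes scikit-rf is installed and that the two Touchstone file names (hypothetical) come from earlier extraction runs:

```python
# Toy signal-integrity check: cascade a connector S-parameter model with a
# board-trace model and read the insertion loss at the Nyquist frequency.
# Assumes scikit-rf is installed; the .s2p file names are hypothetical.
import numpy as np
import skrf as rf

connector = rf.Network("connector.s2p")    # e.g., exported from a 3D field solver
trace     = rf.Network("board_trace.s2p")  # e.g., from a 2D hybrid solver

channel = connector ** trace               # cascade the two 2-port networks

data_rate_gbps = 16.0
f_nyquist = data_rate_gbps / 2 * 1e9       # Hz

# Insertion loss |S21| in dB, interpolated at the Nyquist frequency
il_db = np.interp(f_nyquist, channel.f, channel.s_db[:, 1, 0])
print(f"Insertion loss at {f_nyquist/1e9:.1f} GHz: {il_db:.1f} dB")
```

A production flow does this at far higher fidelity, across full multi-port models and compliance limits, which is exactly why the loop time matters.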

I had the opportunity to chat with Brad Griffin, Product Manager Group Director at Cadence, about these challenges, and some of the features incorporated in the recent Sigrity 2018 release that will significantly alleviate them.

Brad indicated, “Bridging the gap between the mechanical model of interface components and the corresponding electrical model for power integrity and signal integrity simulation is a key addition to this latest release. The Sigrity 3D Workbench takes the physical Allegro board model as input, and applies full-wave field solver technology to derive the S-parameter model for simulation.”

The examples below illustrate the mechanical model of board-mount connector pins, which would be presented to 3D Workbench for S-parameter model generation. (The “breakout” of the board trace is included in the full-wave field solver input, to the point where a 2D hybrid solver analysis of the PCB trace can be applied.) The physical models of cables, connectors, and sockets are also analyzed by 3D Workbench.

An illustration of how 3D Workbench is used in a larger flow is depicted below. Brad said, “This 3D Workbench capability is a new component of existing flows, such as the Serial Link Compliance validation solution shown in the figure.”

Brad continued, “Our internal IP group develops high-speed SerDes and parallel interface (DDRx) offerings, verified to the compliance measurements associated with industry standards. The Allegro and Sigrity teams collaborated closely with the IP group on the functionality and testing of 3D Workbench and the Sigrity 2018 release.”

With regard to the growing utilization of rigid-flex technology, Brad noted, “There is a comprehensive connection between the rigid-flex design and analysis flows. Allegro is integrated with the extraction and simulation features of Sigrity PI/SI. Again, a mix of full-wave models (from 3D Workbench) and 2D hybrid solver models can be extracted, stitched, and simulated.”

The figure below illustrates a rigid-flex design with the corresponding visualization of the Sigrity PowerDC results. (Note that the power distribution in the flex cable to the mezzanine card on the right results in significant losses.)

Speaking of the integration between Allegro and Sigrity, Brad was excited about the productivity gains this enables. SI/PI engineers can make their design changes in the Sigrity environment, re-extract and simulate – e.g., a specific via array pattern optimized to meet loss targets. Brad highlighted, “A key feature in this release is that updates made in the Sigrity platform are directly incorporated into the Allegro model, without the need to re-draw.”

The handoff of “markup” requests from the SI engineer to the physical design team is eliminated, improving the rate of design closure (and reducing errors) in the final optimization phase before release for PCB fabrication.

Future Challenges

I asked Brad about upcoming challenges for PCB (and rigid-flex) designers.

“I’ll point out two of the areas we’re working on,” Brad said. “In the future, support will be provided to work with encrypted mechanical component models, for improved security of the component vendor’s intellectual property.”

“And, with the growing complexity of board designs, combined with the higher data rates defined for future interface IP standards, full-wave model accuracy (out to multiple harmonics of the fundamental) will be required for a larger set of models. Full-wave mesh topologies will be denser, requiring greater compute resources. The methodology leveraging both full-wave and hybrid solvers for extraction and simulation will be distributed across multiple machines.”

The 3D model generation capabilities, the support for full system model PI/SI analysis with the combination of solvers (including intricate rigid-flex topologies), and the focus on improving the PCB design-analysis optimization loop are all part of the enhancements in the recent Sigrity 2018 release (link).

-chipguy


AMS Experts Share IC Design Stories at #55DAC
by Daniel Payne on 08-01-2018 at 7:00 am

At #55DAC in San Francisco the first day is always the busiest on the exhibit floor, so by lunchtime on Monday I was hungry and took a short walk to the nearby Marriott hotel to listen to AMS experts from several companies talk about their EDA tool use, hosted by Synopsys:

  • Samsung
  • Toshiba Memory Corp.
  • NVIDIA
  • Seagate
  • Numem
  • Esperanto

Continue reading “AMS Experts Share IC Design Stories at #55DAC”


Verification Importance in Academia
by Alex Tan on 07-31-2018 at 12:00 pm

“Testing can only prove the presence of bugs, not their absence,” stated the famous computer scientist Edsger Dijkstra. That notion rings true to the many college participants of the Hack@DAC competition offered during DAC 2018 in San Francisco. The goal of this competition is to develop tools and methods for identifying security vulnerabilities in the SoC designs using both third-party IP (3PIP) and in-house cores. The trustworthiness of such SoCs can be undermined by security bugs that are unintentionally introduced during the integration of the IPs.

During a 6-hour final trial, the finalists are asked to identify and report security bugs in an SoC that is released to them at the start of the day. The teams mimic the role of a security research team at the SoC integrator, trying to find the security vulnerabilities and quickly dispatch them back to the design team so they can be addressed before the SoC goes to market. The bug submissions from the teams are then scored in real time by industry experts. The team with the highest score is declared the winner.

At the end of the competition, both Hackin’ Aggies from Texas A&M University and The Last Mohicans from IIT Kharagpur were declared winners. I had a subsequent interview with Professor Michael Quinn from Texas A&M, who has been actively shepherding the school’s team to take part in the competition. I was also joined by the Cadence staff who coordinate the university programs: Dr. Patrick Haspel, Cadence Global Program Director of Academic and University Programs, and Steve Brown, Marketing Director of Verification Fabric Products. Some excerpts from the Q&A session are included in the second half of this article.

DAC and Verification Engineers

Based on DAC 2015-2017 statistics, about 38% of the attendees are engineering professionals and about 10% are from academia, as shown in figure 1. Although there are other venues, such as IEEE-sponsored events, that involve academia, academic participation in industry-sponsored events such as DAC or DVCon can be viewed as an indicator of how much interest exists across the ecosystem. Based on a subset of the statistics, verification engineer attendance consistently ranks third after CAD/application and design engineers (see figure 2).

A well-rounded verification engineer needs proficiency in both design implementation and functional verification techniques. We are accustomed to college programs providing training for design engineers, computer scientists and process engineers, but not so much for verification engineers. This prompts the question: how should we prepare these professional candidates to be more adaptable to industry requirements?

Cadence Academic Ties
Aside from their own R&D dollars, EDA companies innovate through various synergistic partnerships among their ecosystems’ members, including their customers and academia. Being at the forefront of the EDA ecosystem, Cadence has actively fostered a strong relationship with academia through the Cadence® Academic Network program, which facilitates the exchange of knowledge by co-organizing educational events and trainings and by providing access to the latest Cadence technologies. There are several notable subprograms related to this venture, as tabulated here:

Interview with Professor Michael Quinn

Texas A&M University has been part of the Cadence Academic Network Program and ranks first in Texas in terms of student enrollment. The university launched the 25-by-25 initiative, which targets an engineering enrollment of 25,000 by 2025, and this year it boasts the largest freshman female engineering class in the country. Its electrical and computer engineering programs were recently ranked 12th and 10th among public universities.

The following excerpts are from Q&A session with Professor Quinn:

Could you comment on the current research emphasis in the area of simulation/verification?
“From the verification standpoint, the biggest area getting looked at is associated with security. Texas A&M has a whole new department that has grown up over the past few years, very well endowed, and it is about security design, architecture and also verification. I think their biggest push in this area is in formal: formal-based approaches, not so much functional,” Prof. Quinn said. “By the way, we did (the contest) without using the formal tool,” he quipped.

Which Cadence tools do you use?
“My class uses all the simulation and visualization tools such as Xcelium. It starts just as an engineering verification job, with specification planning using Cadence VPlanner. The students start developing the verification environment using the Cadence UVM-based methodology, which is superb as it supports the current IP design methodology,” said Prof. Quinn. It also allows the students to incrementally do bottom-up verification and integration work, starting with low-level IP and progressing to the SOC level. A key strength of such an approach is the ability to seamlessly reuse work previously done at the lower levels. Subsequent verification and debug involves running random tests and using Vmanager to tie the various aspects of planning, testing, tracking and analysis together, and finally Indago to efficiently manage the debugging process.

Should the school program be geared towards software development mastery, hardware design proficiency or hands-on applications for EE candidates?
He believes we need all of the above. Companies are looking for more well-rounded designers who can transition to different projects. To that end, he aspires to have more courses that are interdisciplinary and experiential in nature, based on a multi-faceted curriculum that brings together logic designers, architects and software folks, which would greatly enhance the learning experience.

What is the current state of engagements with Cadence?
Since its start in 2016, when only a handful of verification engineers entered the industry from Texas A&M, the program has now contributed 100 or more. “It’s a win-win solution,” he said. Recalling his alma mater, Drexel, he believes in the value of a co-op program as a good training ground for incoming graduates. His wish is to be able to continue the efforts further and share his instructional works with an expanded network.

According to Dr. Haspel, part of the Cadence Academic Network team’s responsibility is to connect the industry’s need for trained engineers with enabling students not only to learn but also to be valuable to prospective employers. This is achieved by working with universities on curriculum alignment, partnering with schools that have the right mindset to collaborate, and treating them as recruiting targets. Furthermore, he said, “Sometimes it is the pipeline thing, but it is also the responsibility of the ecosystem…”

What is your impression on DAC presentations with respect to HW design?
Prof. Quinn is intrigued by the conference advising EDA vendors to pay more attention to big data, machine learning, security and data analytics. He concurs that it is the right feedback. He anticipates that post-silicon data will be able to contribute to verification, which previously was not possible, since data does not stop at tapeout. One may need coverage monitoring, possibly at the customer site, by monitoring workloads and feeding the results back into the simulation process. It is a big closed loop.

As the famous quote goes –“Tell me and I forget, teach me and I may remember, involve me and I learn.”– at DAC 2018 the Texas A&M team demonstrated a slice of the fruitful outcomes of EDA industry collaboration with academia. Kudos to Cadence and the Aggies! Experiential learning really makes a difference.


Webinar: Differential Energy Analysis for Improved Performance/Watt in Mobile GPU
by Bernard Murphy on 07-31-2018 at 7:00 am

You may want to listen up: Qualcomm is going to share how they do this. There is a constant battle in designing for low power; you don’t accurately know what the power consumption is going to be until you build it, but by the time you’ve built it, it’s too late to change the design. So you have to find methods to estimate power early on, while using that information in a way that won’t compromise your design choices because you were judging their impact based on eyeballed numbers.

This can appear difficult, particularly for RTL-based power estimation, which typically shows a variance of around 15% against final gate-level estimates. Surely judging optimizations based on such coarse estimates would be very challenging unless the changes deliver massive advantages, greater than that margin of uncertainty?

In fact the picture is much better if you do differential analysis – comparing the difference in predicted power savings for different optimizations. While absolute power estimates carry that larger level of uncertainty, differences between estimates can be much more accurate for a fairly obvious reason. Differences subtract out many of the unknowns in absolute RTL power estimates: detailed cell and designware mapping, placement, routing, clock tree details and so on. What you’re left with can be much closer to equivalent differences based on signoff power numbers.
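A toy numerical example (my own illustration, not Qualcomm’s data or ANSYS methodology) shows why the differential view is tighter: the contributions that RTL estimation cannot see are largely common to two design variants, so they subtract out of the predicted saving.

```python
# Toy illustration of differential RTL power analysis (illustrative numbers only).
# Much of the RTL-vs-gate error comes from contributions common to two design
# variants (clock tree, routing, cell mapping); those terms subtract out when
# comparing variants, so the predicted *saving* is far more accurate than
# either absolute estimate.
import random

random.seed(7)

true_before = 100.0          # mW, signoff power of baseline design
true_after  = 92.0           # mW, signoff power after an optimization

shared_unknown = 15.0        # mW not visible at RTL (clock tree, routing, ...)
noise = lambda: random.uniform(-0.5, 0.5)   # small uncorrelated per-run noise

rtl_before = true_before - shared_unknown + noise()
rtl_after  = true_after  - shared_unknown + noise()

print(f"Absolute error, baseline: {100*(rtl_before-true_before)/true_before:.1f}%")
print(f"Predicted saving: {rtl_before-rtl_after:.1f} mW "
      f"(true saving: {true_before-true_after:.1f} mW)")
```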

The people who build mobile solutions know more than almost all of us about squeezing out every last pico-watt of power. Apple isn’t likely to tell you what they do, but Qualcomm is just as good for learning about best practices in this domain.

Register HERE to learn more in this webinar on August 23rd at 9am PDT

Summary:
Mobile devices demand high performance in a very constrained environment. As a leader in perf/watt, Qualcomm® Adreno™ GPUs, a product of Qualcomm Technologies, Inc., leverages many effective methods to improve power efficiency. In this regard, Qualcomm has developed a differential energy analysis methodology based on ANSYS PowerArtist to identify the power optimization opportunity in GPU. This methodology can help to locate the inefficient part that needs further optimization in the pre-silicon stage. Experimental results based on identifying unnecessary register toggles demonstrate the effectiveness of this proposed methodology.

Speakers:
Preeti Gupta is head of RTL product management for the ANSYS semiconductor business unit.

Yadong Wang is currently a staff engineer in the GPU system power team at Qualcomm Technologies, Inc., San Diego, California. He has about 10 years of ASIC low-power design experience. At Qualcomm, he is responsible for power modeling and analysis of Adreno™ GPUs, and explores and develops many effective methods to improve power efficiency. Before joining Qualcomm, he worked as a hardware power engineer at NVIDIA. Yadong earned an M.S. degree in electrical engineering from Tongji University (Shanghai, China) in 2009.

About ANSYS
If you’ve ever seen a rocket launch, flown on an airplane, driven a car, used a computer, touched a mobile device, crossed a bridge, or put on wearable technology, chances are you’ve used a product where ANSYS software played a critical role in its creation. ANSYS is the global leader in engineering simulation. We help the world’s most innovative companies deliver radically better products to their customers. By offering the best and broadest portfolio of engineering simulation software, we help them solve the most complex design challenges and engineer products limited only by imagination.


Machine Learning Meets Scan Diagnosis for Improved Yield Analysis
by Tom Simon on 07-30-2018 at 12:00 pm

Naturally, chips that fail test are a curse; however, with the advent of Scan Logic Diagnosis these failures can become a blessing in disguise. Through this technique, information gleaned from multiple tester runs can help pin down the locations of defects. Initially, tools that did Scan Logic Diagnosis relied on the netlist to filter locations for various faults. This made it possible to exclude a number of potential locations. In the push to improve the so-called “resolution” of the diagnosis, the tools started considering layout information. This went a long way toward narrowing down the list of potential fault locations.
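As a crude picture of what the diagnosis step does with multiple failing patterns (a toy of my own, not Tessent’s actual algorithm), intersecting the candidate nets implied by each failing observation point rapidly narrows the suspect list:

```python
# Toy scan-diagnosis idea: each failing pattern/observation point implies a set
# of candidate nets (the fan-in cone that could explain the failure); the
# defect must lie in the intersection. Net names and cones are made up.
failing_cones = {
    ("pattern_12", "scan_ff_041"): {"n7", "n12", "n13", "n20", "n31"},
    ("pattern_12", "scan_ff_077"): {"n3", "n12", "n13", "n44"},
    ("pattern_35", "scan_ff_041"): {"n12", "n13", "n20", "n56"},
}

# Passing patterns that observe a net and see the correct value exonerate it.
exonerated = {"n20"}

suspects = set.intersection(*failing_cones.values()) - exonerated
print(sorted(suspects))   # ['n12', 'n13']
```

Layout-aware diagnosis then ranks the surviving suspects by physical plausibility (bridges, opens, via defects), which is the resolution improvement described above.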

When layout information was added to the mix, there was enough information to use the same data for yield analysis. However, even with the improved resolution in the number of suspects, the results from the diagnosis-driven yield analysis were not good enough. The engineers at Mentor realized that there was more information to be gleaned from the root cause analysis that comes from the test data and the design itself. What normally happens is that a number of potential root causes are identified and the probability of each one is reported. Each different failure will have a unique distribution of these potential root causes.

Mentor developed a technique called Root Cause Deconvolution (RCD) to help improve the fault location prediction. Mentor has a white paper on how RCD works and what kind of results it can provide. For a baseline, they conducted a simulated experiment to show how effective root cause prediction normally is. They injected two different types of single defects in different locations in a total of 470 devices. Without RCD the predicted root causes included 49 types of faults. Even the prediction of the second most probable root cause was not correct. They saw 47 probable root causes that did not correspond to any actual root cause.

When they ran with RCD the predictions narrowed dramatically, down to just three root causes. This is a pretty significant improvement. RCD uses critical area information and then examines the design in detail to come up with probable root causes and their defect distributions. This data is then used to compare observed defects with the statistical information computed from the design. For realistic numbers of actual root causes the computation needed can rapidly explode; with direct computation, even a few hundred candidate root causes quickly become intractable. However, Mentor realized that machine learning can be used to help determine the number of relevant defect distributions that are worth looking at. It should be pointed out that machine learning is continuously finding new applications in the EDA space. This is not the first time that Mentor has decided to rely on machine learning to solve tough problems and deliver breakthrough results.
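For intuition about the “deconvolution” part, here is a generic expectation-maximization sketch of estimating a population-level root-cause distribution from noisy per-die diagnosis reports. This is my own illustration of the general statistical idea, not Mentor’s RCD algorithm, and all numbers are made up:

```python
# Generic EM sketch of estimating a population root-cause distribution from
# per-die diagnosis likelihoods. This is NOT Mentor's RCD algorithm, just an
# illustration of deconvolving noisy per-die root-cause reports.
import numpy as np

root_causes = ["M2 bridge", "via14 open", "cell_X defect"]

# likelihood[i][j] ~ P(die i's diagnosis evidence | root cause j), made-up data
likelihood = np.array([
    [0.70, 0.20, 0.10],
    [0.60, 0.30, 0.10],
    [0.15, 0.70, 0.15],
    [0.65, 0.25, 0.10],
    [0.10, 0.15, 0.75],
])

weights = np.full(len(root_causes), 1.0 / len(root_causes))  # uniform prior

for _ in range(50):                       # EM iterations
    posterior = likelihood * weights      # E-step: P(cause | die), unnormalized
    posterior /= posterior.sum(axis=1, keepdims=True)
    weights = posterior.mean(axis=0)      # M-step: new population mixture

for name, w in zip(root_causes, weights):
    print(f"{name:>14}: {w:.2f}")
```

The point is that an individual die is ambiguous, but the population-level mixture that best explains all of them can still be recovered; RCD additionally grounds its candidate distributions in critical-area analysis of the actual design.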

What is most interesting about RCD, as incorporated into their Tessent Diagnosis and Tessent YieldInsight, is that no additional data is required beyond what is normally needed for layout aware diagnosis. Also, in cases where diagnosis reports are encoded to protect proprietary information, the flow still works. Whenever RCD is used there are fewer probable root causes to look at, making resolving yield issues a much faster process. Because RCD’s design analysis actually can predict failure distributions in advance of failure analysis, it can open doors to more proactive yield enhancement. To learn more about how the entire process works, read the white paper entitled Leveraging Volume Scan Diagnosis Data for Yield Analysis that is available from the Mentor website.