
Networking trends for Automotive ADAS Systems

by Daniel Payne on 08-16-2018 at 12:00 pm

From my restaurant seat today in Lake Oswego, Oregon, I watched an SUV driver back out and nearly collide with a parked car, and my instinct was to wave my arms or shout a warning to the driver. Near misses like this are a daily occurrence for those of us who drive or watch other drivers on the road, so the promise of Advanced Driver Assistance Systems (ADAS) keeping us alive and injury free is especially relevant. I did some online research to better understand what's happening with the networks used in automotive applications.

Automotive networks are tasked with moving massive amounts of data to be processed for decision making; just think about the data that these safety features and systems require:

  • Emergency braking
  • Collision avoidance with other vehicles
  • Pedestrian and cyclist avoidance
  • Lane departure warning
  • HD cameras
  • Radars
  • LIDARs
  • Fully autonomous driving

Our electronics industry is often driven by standards committees, and for networking in ADAS applications we thankfully have the IEEE Time-Sensitive Networking (TSN) working group, which has thought through all of this and produced standards and specifications. So let's take a closer look at how the Ethernet TSN standards can be used in automotive scenarios, and ultimately why you would use automotive-certified Ethernet IP in your SoC design.

Going back to 2005, there was an Ethernet standard for Audio Video Bridging (AVB) used for things like automotive infotainment and in-vehicle networking. Those applications aren't all that time critical, and the data volumes were low compared to higher-demand tasks like braking control, so in 2012 the IEEE revamped things a bit by transforming the AVB working group into TSN. With TSN we now have a handful of very specific standards to deal with ADAS requirements:

| TSN Standard | Specification Description |
|---|---|
| IEEE 802.1Qbv-2015 | Time-aware shaper |
| IEEE 802.1Qbu-2016 / IEEE 802.3br-2016 | Preemption |
| IEEE 802.1Qch-2017 | Cyclic queuing and forwarding |
| IEEE 802.1Qci-2017 | Per-stream filtering and policing |
| IEEE 802.1CB-2017 | Frame replication and elimination |
| IEEE 802.1AS-Rev | Enhanced generic precise timing protocol |

Time-Aware Shaper
An engineered network can provide a predictable, guaranteed latency. It does this with a time-aware shaper, which schedules transmissions so that critical traffic gets higher priority. As an example, consider four queues of data: the IEEE 802.1Qbv scheduler controls which queue goes first (shown in orange) and so on.
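To make the scheduling idea concrete, here is a minimal sketch of an 802.1Qbv-style gate control list; the cycle time, window lengths and queue assignments are invented for illustration, not taken from the standard or the article.

```python
# Minimal sketch of an IEEE 802.1Qbv-style time-aware shaper. The cycle time,
# window lengths and queue numbering below are illustrative assumptions.

CYCLE_US = 100  # repeating schedule cycle, in microseconds

# Gate control list: (window length in us, set of queues whose gates are open)
gate_control_list = [
    (30, {3}),        # protected window: only the time-critical queue may transmit
    (70, {0, 1, 2}),  # best-effort window: remaining queues share the rest of the cycle
]

def open_queues(t_us):
    """Return which queues may transmit at time t within the repeating cycle."""
    t = t_us % CYCLE_US
    for length, queues in gate_control_list:
        if t < length:
            return queues
        t -= length
    return set()

# A critical frame in queue 3 arriving at t=10us can start at once; a best-effort
# frame in queue 0 arriving at the same time must wait for the 30us mark.
for t in (10, 40, 95):
    print(f"t={t}us open gates: {sorted(open_queues(t))}")
```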

Preemption
In the next example, Queue 2 in orange starts transmitting its frame first, but then a higher-priority frame arrives and Queue 3 in green preempts it, so in the lower timing diagram we see the green frame travel ahead of the orange frame. The time-critical green frame has preempted the orange frame, providing a predictable latency.
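A rough sketch of why preemption matters for latency, assuming a 1 Gbps link and made-up frame sizes and arrival times (my illustration, not data from the article):

```python
# Rough sketch of the latency benefit of IEEE 802.1Qbu/802.3br frame preemption.
# Link speed, frame sizes and the arrival time are illustrative assumptions.

LINK_MBPS = 1000  # 1 Gbps automotive Ethernet link

def tx_time_us(frame_bytes):
    return frame_bytes * 8 / LINK_MBPS  # microseconds to serialize a frame

best_effort_bytes = 1500   # long, low-priority frame already on the wire
express_arrival_us = 2.0   # express frame arrives 2us after it started

# Without preemption the express frame waits for the whole best-effort frame;
# with preemption it only waits for the current minimum-size fragment (64 bytes).
wait_no_preempt = tx_time_us(best_effort_bytes) - express_arrival_us
wait_preempt = tx_time_us(64)

print(f"express frame waits {wait_no_preempt:.1f}us without preemption, "
      f"~{wait_preempt:.1f}us with preemption")
```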

Cyclic Queuing and Forwarding
To make network latencies across bridges more consistent regardless of the network topology, there's a technique called Cyclic Queuing and Forwarding. The complete specification is on the IEEE 802 site. Shown below in dark blue is a stream with the shortest cycle packet, while in light blue is a stream in the presence of multiple packets.
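The commonly quoted property of CQF is that a frame advances one buffered cycle per hop, so end-to-end latency is bounded by the cycle time and hop count alone, independent of other traffic; the numbers below are arbitrary example values.

```python
# Commonly quoted CQF latency bound: delivery falls between (hops - 1) and
# (hops + 1) cycle times, independent of other traffic. Example values only.

cycle_time_us = 125.0
hops = 4

best_case_us  = (hops - 1) * cycle_time_us
worst_case_us = (hops + 1) * cycle_time_us
print(f"latency between {best_case_us:.0f}us and {worst_case_us:.0f}us over {hops} hops")
```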

Frame Replication and Elimination
How do you detect and recover from cyclic redundancy check (CRC) errors, opens in wires, and flaky connections? With frame replication and elimination. In the following example there's a time-critical data frame sent along two separate paths, orange and green; where the paths join up, any duplicate frames are removed from the stream (applications may still receive frames out of order, since elimination does not reorder them). A minimal sketch of sequence-based elimination follows the list below.

In the IEEE specification there are three ways to implement frame replication and elimination:

  • Talker replicates, listener removes duplicates
  • Bridge replicates, listener removes duplicates
  • Bridge replicates, bridge removes duplicates
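Here is a minimal sketch of the elimination side, using the sequence-number recovery idea from 802.1CB; the class name, history size and example traffic are my own illustrative choices, not the standard's API.

```python
# Minimal sketch of FRER-style duplicate elimination (IEEE 802.1CB) based on
# sequence numbers. Class name, history size and example traffic are assumptions.

class SequenceRecovery:
    """Drop frames whose sequence number was already accepted recently."""
    def __init__(self, history=32):
        self.history = history
        self.seen = set()

    def accept(self, seq):
        if seq in self.seen:
            return False          # duplicate from the redundant path: eliminate it
        self.seen.add(seq)
        if len(self.seen) > self.history:
            self.seen.discard(min(self.seen))   # bound the remembered window
        return True

# The talker replicates each frame on the orange and green paths; at the merge
# point only the first copy of each sequence number reaches the listener.
merge = SequenceRecovery()
arrivals = [(1, "orange"), (1, "green"), (2, "green"), (3, "orange"), (2, "orange")]
delivered = [(seq, path) for seq, path in arrivals if merge.accept(seq)]
print(delivered)   # [(1, 'orange'), (2, 'green'), (3, 'orange')]
```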

Enhanced Generic Precise Timing Protocol
Knowing what time it really is across a network is fundamental, so clocks must be synchronized, and this protocol lets you use either a single grand master or multiple grand masters, as shown in the next two figures:


Single grand master, sending two copies


Two grand masters, each sending two copies
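As a reminder of what a precise timing protocol actually computes, here is the basic PTP-style offset and path-delay arithmetic from one sync exchange; the timestamps are invented for illustration, and the AS-Rev enhancements (multiple grand masters, redundant paths) build on this same kind of calculation.

```python
# Basic PTP-style clock offset / path delay arithmetic from one sync exchange.
# Timestamps (in nanoseconds) are invented example values.

t1 = 1_000_000          # master sends Sync            (master clock)
t2 = 1_000_520          # slave receives Sync          (slave clock)
t3 = 1_002_000          # slave sends Delay_Req        (slave clock)
t4 = 1_002_480          # master receives Delay_Req    (master clock)

path_delay   = ((t2 - t1) + (t4 - t3)) / 2   # one-way propagation delay
clock_offset = ((t2 - t1) - (t4 - t3)) / 2   # how far the slave clock is ahead

print(f"path delay {path_delay:.0f} ns, slave offset {clock_offset:.0f} ns")
```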

Each of these TSN specifications has grown over time to meet the rigors of automotive design and support real-time networking of ADAS features.

Summary
Ethernet in automobiles has come a long way over the past decade, and now we have TSN to enable the ADAS features of modern SoCs, inching towards autonomous vehicles. With Ethernet in the car we get:

  • Wide range of data rates
  • Reliable operation
  • Interoperability between vendors
  • TSN to standardize on how data travels with predictable latency

SoCs for automotive also need to meet the ISO 26262 functional safety standard and AEC-Q100 for reliability. In the make-versus-buy decision process for networking functions you can consider IP from Synopsys, such as the DesignWare Ethernet Quality-of-Service (QoS) IP, which is ASIL B Ready ISO 26262 certified.

John Swanson from Synopsys has written a detailed Technical Bulletin on this topic.


Chip, Package, System Analysis – A User View

by Bernard Murphy on 08-16-2018 at 7:00 am

While I missed ANSYS (and indeed everyone else) at DAC this year, I was able to attend the ANSYS Innovation Conference last week at the Santa Clara Convention Center. My primary purpose for being there was to listen to a talk by eSilicon which I’ll get to shortly, but before that I sat through a very interesting presentation on the growing importance of simulation in validating medical devices. This isn’t the kind of simulation we usually discuss; these are computational fluid dynamics (CFD) sims for blood flow through stents, insulin flow from insulin pumps and other such worthy objectives. ANSYS has a representative on an FDA advisory committee exploring increased use of simulations in regulation for medical devices. Important and fascinating stuff and a reminder of how broadly ANSYS impacts technology in many areas beyond electronic design.


Back to the main topic, eSilicon gave a presentation at the conference on their work with ANSYS to validate signal and power integrity in designs for eSilicon customers. You should understand first that eSilicon works with customers on the leading-edge of custom ASIC design, from HPC to networking, AI and 5G infrastructure. I wrote recently about their platform-specific offerings for AI and networking at 7nm in advanced 2.5D packaging options with high-bandwidth memory stacks. Point being that this is bleeding-edge design for system customers demanding total system performance, not just “the chip works to spec”. So, these designs are a good test for the ANSYS “chip-package-system” (CPS) mantra.

eSilicon doesn't build the boards; their customers do that, so they work collaboratively to extract, analyze and optimize the board design together with the ASIC package, interposer and components on the interposer. The speaker, Teddy Lee from eSilicon, detailed the flows they used for signal integrity (SI), DC power integrity (PI) and AC power integrity. For signal integrity they extract 3D models from the MCM database into the ANSYS HFSS tool and from these build S-parameter models for insertion loss, return loss and crosstalk, then optimize traces, materials, spacing, etc. and iterate. They do this for the substrate layout and the interposer design, then connect the two models and send them to the customer for use in their IBIS-AMI channel analysis.


In DC power integrity, customers want to model DC voltage drop from the voltage regulator module (VRM) on the board, through trace and then through the package. Here a customer will again use SIwave to build a model, with IR drop and current densities, which eSilicon combines with a similar model extracted from the package substrate and the silicon interposer and then runs a DC simulation with SIwave. You can see simulated voltage gradients from the VRM to the package in the first figure and from the substrate up through the interposer in the picture above. This clearly provides very fine-grained analysis of power distribution all the way from the voltage regulator on the board up to the die ports.

In Teddy's view, this system-to-die view, with accurate extraction at all levels, is essential to getting reliable PI analysis down to the die. He noted that you can't just assume an idealized VRM somewhere on the PCB; you have to define where it's going to sit and extract the real traces through which it will ultimately drive the package – the PI analysis you get from the idealized model may be quite different from the real model.
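To illustrate why the physical path matters, here is a back-of-the-envelope DC IR-drop sketch for a VRM-to-die supply path; all the resistances and the load current are invented numbers, whereas a real SIwave/RedHawk-style flow extracts them from the actual board, package and interposer layout.

```python
# Back-of-the-envelope DC IR-drop sketch for a VRM-to-die power path.
# Segment resistances and the load current are illustrative assumptions.

segments_mohm = {
    "board trace":         0.40,
    "package ball + via":  0.25,
    "substrate plane":     0.15,
    "silicon interposer":  0.10,
    "bumps to die port":   0.05,
}

vrm_voltage = 0.80   # volts at the regulator output
load_current = 20.0  # amps drawn by the die

v = vrm_voltage
for name, r_mohm in segments_mohm.items():
    drop = load_current * r_mohm / 1000.0   # V = I * R
    v -= drop
    print(f"{name:20s} drop {drop*1000:5.1f} mV -> {v:.3f} V")

print(f"total IR drop: {(vrm_voltage - v)*1000:.1f} mV at the die port")
```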

Teddy wrapped up with an explanation of their approach to AC power integrity where they want to look at the impact of noise generated by the die or on the board (everything looks good at DC, but what happens when a power domain turns on or another device on the board suddenly becomes active?). Here they use RedHawk to build a chip power model for the die and interposer, then combine that with an SIwave model for the package substrate and board. Based on this they do a system-level simulation (PCB down to the die) and perform a frequency domain simulation to see where they should add package-level decaps to reduce system-level resonances at the package. This is followed by a time-domain analysis looking at noise on the die. Depending on how this turns out, they may feed that back to the frequency domain analysis where they can change some of those decaps or perhaps change some trace geometries. And again iterate.

So, it looks like eSilicon sees value in CPS-style iterative analysis for SI and PI, given the demanding expectations of their customers. Chalk up another proof-point for CPS. You can learn more about CPS analysis HERE.


Enabling Complex System Design Environment

by Alex Tan on 08-15-2018 at 12:00 pm

Deterministic, yet versatile. Robust and integrated, yet user-friendly and easily customizable. Those are some desirable characteristics of an EDA solution as the boundaries of our design optimization, verification and analysis keep shifting: a shift left driven by time-to-market schedule compression, while process and application complexities keep pushing in the opposite direction.

Judging from the many technical sessions held at DAC, early verification has made progress in shifting left to keep pace with implementation, by integrating the application or end-product software into virtual prototyping for early system design exploration. Virtual prototyping allows designers not only to explore corner scenarios but also to reproduce experiments with various permutations of constraints or variants. The more heterogeneous SoCs for emerging applications demand virtual prototyping that supports not only software and hardware but also the incorporation of digital, analog and interconnect IPs.

Magillem provides robust, XML-based front-end design solutions that enable and streamline design activities around its integrated environment. It has deployed its products across several industry boundaries, from SoC design houses and semiconductor manufacturers to legal and technical documentation publishers.

Since rolling out its Magillem Architectural Intent (MAI) for architectural intent capture, as covered in my prior blog, the company has announced a joint effort with Imperas for an integrated virtual prototyping platform and also introduced Magillem Flow Architect (MFA), a turnkey solution that helps customers define their best recommended flow.

At DAC 2018, I had the opportunity to interview Magillem CEO Isabel Geday and Magillem VP of Strategic Accounts Paschal Chauvet. The discussion centered around Magillem's continued efforts to accommodate IC design needs and how it adapts to current trends in the EDA ecosystem.

Some EDA players have announced their product collaborations. Does Magillem have similar efforts?
“I call it a partnership, by not creating duplicated solutions,” said Isabel diplomatically. She gave the example of Magillem's earlier partnership with Imperas, using its verification IP models and debugging software. At DAC, Magillem announced another partnership with Arteris IP, an indisputable leader in NoC IP that provides SoC cache-coherent interconnect solutions.

The integration with Arteris IP was demonstrated by a full-compliance validation of the company's interconnects with the Magillem environment. Using a single design environment, customers can now easily build an SoC using Arteris IP instances (FlexNoC and Ncore) and the Magillem front-end design environment (MAI, MPA and MRV). “It is very good for customers to have one design environment. To be able to work with and plug all the IPs,” she added.


Since Magillem is based on the industry-standard IP-XACT, it enables possible integration of other tools into its environment. “While the other big players have closed environments, for Magillem it is the DNA of our product to allow integration,” she pointed out. Furthermore, the unified environment also provides more efficient and automated sharing of information across the supply chain during product development.

There are increasing AI and ML related efforts. How do these impact your products?
“This is a very interesting question as we work with methodology and flow aspects,” Isabel said. She gave two examples. The first is from a product application standpoint: a large customer has used the Magillem solution for a kind of expert system, which interacts with engineers through questionnaires and, depending on the given answers, makes decisions one way or another.

The second example relates to a prior internal Magillem effort. “We had activities on the side, done several years ago to demonstrate how versatile is our platform, by building something using the assembly of metadata of descriptions.” She elaborated that the team applied some AI techniques to analyze interpreted legal texts and the impact of changes made on the existing document corpus. It measured the impact of new text fragments on existing ones and suggested changes to the existing document corpus when necessary. In addition, it was capable of learning a new syntax when it did not recognize a pattern. To sum up, she believes that AI is largely a replay of previously exercised concepts, but with more memory, compute power and algorithms involved.

Aside from these examples, the recently announced integrated Arteris IP and Magillem solution is targeted at simplifying the increasingly complex SoC designs for AI and autonomous driving applications, which are now bounded by the latency of on-chip interconnects rather than the performance of on-chip processors and hardware accelerators.

Here at DAC, more EDA vendors are showcasing products that are accessible in the cloud. What is your take on cloud deployment?
“Our customers are Tier-1 companies and they have entrusted us with their most complex, expensive, demanding SoC platforms and designs,” said Isabel. She added that her customers' policy is to keep confidentiality a top priority. She acknowledged that although some design data intelligence may benefit from a cloud-based scenario, cloud is not an option yet. “People gain ownership on this internal solution. It is less interesting idea to us than in providing a cognitive assistance to the experts. The customers are very good on what they do. They have to be the decision maker…to make fast decision and be more productive as they deal with huge legacy and data.”

What is your data model? Do you allow customer flow customization?
“Our solution was directly derived from IP-XACT, which is universal inside our tool, allowing our customers to use one data model for the entire design flow,” said Paschal. Embedding external tools can be achieved through the Eclipse plug-in and TGI (Tight Generator Interface), the standard API for manipulating any IP-XACT database. According to Paschal, such flexibility is crucial for smaller companies, as they tend to highly customize their environments. Scalability of the compact data model is not an issue, as Magillem has worked with SoCs having millions of gates.
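Because IP-XACT is plain XML, even a generic script can walk a design description; the sketch below lists component instances using Python's standard library. The file name is hypothetical and the namespace URI assumes the IEEE 1685-2014 revision (adjust for other revisions); this is not Magillem's TGI API.

```python
# Illustrative sketch: list component instances from an IP-XACT design file.
# The namespace below assumes IEEE 1685-2014; the file name is hypothetical.

import xml.etree.ElementTree as ET

NS = {"ipxact": "http://www.accellera.org/XMLSchema/IPXACT/1685-2014"}

def list_component_instances(design_xml_path):
    root = ET.parse(design_xml_path).getroot()
    instances = []
    for inst in root.findall(".//ipxact:componentInstance", NS):
        name = inst.findtext("ipxact:instanceName", default="?", namespaces=NS)
        ref = inst.find("ipxact:componentRef", NS)
        component = ref.get("name") if ref is not None else "?"
        instances.append((name, component))
    return instances

# Example usage (assumes a local IP-XACT design file exists):
# for inst_name, comp_name in list_component_instances("soc_design.xml"):
#     print(f"{inst_name}: {comp_name}")
```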

Traceability is about the ability to track the safety requirements from the initial design inception through its implementation and operation phase. It is a key ingredient for the functional safety standards compliance as defined in IEC 61508 and ISO 26262. With the parsable IP-XACT based data, automated traceability throughout the development flow can be achieved.

Commenting on future work, Isabel stated that the current Magillem platform offering is unique. “Our earlier vision is now very appealing to new customers and new markets,” she added. Ongoing work includes the infrastructure to build a hub of links that will guarantee traceability in a very elegant way. She added that instead of building a hub of data, one could then add different standards and other sorts of data while preserving all the essential elements.

By providing a versatile framework that can be retargeted for complex system designs, the Magillem solution enables design teams to adapt to changing requirements from both design specifications and implementation methodologies.

For further info on Magillem products, please check HERE.


What Silicon Valley still gets wrong about innovation

by Vivek Wadhwa on 08-15-2018 at 7:00 am

Silicon Valley well exemplifies the saying “The more things change, the more they stay the same”. Very little has changed over the past decade, with the Valley still mired in myth and stale stereotype. Ask any older entrepreneurs or women who have tried to get financing; they will tell you of the walls they keep hitting. Speak to VCs, and you will realize that they still consider themselves kings and kingmakers.

With China’s innovation centers nipping at the Valley’s heels, and with the innovation centers that Steve Case calls “the rest” on the rise, it is time to dispel some of the myths by which it operates.

Myth #1. Only the young can innovate

The words of one Silicon Valley VC will stay with me always. He said: “People under 35 are the people who make change happen, and those over 45 basically die in terms of new ideas”. VCs are still looking for the next Mark Zuckerberg.

The bias persists despite clear evidence that the stereotype is wrong. My research in 2008 documented that the average and median age of successful technology company founders in the U.S. is 40, and several subsequent studies reached the same conclusion. Twice as many of these founders are older than fifty as are younger than 25, and twice as many are over sixty as are under twenty. Older, experienced entrepreneurs have the greatest chances of success.

Don’t forget that Marc Benioff was 35 when he founded Salesforce.com; Reid Hoffman, 36 when he founded LinkedIn. Steve Jobs’s most significant innovations at Apple — the iMac, iTunes, iPod, iPhone, and iPad — came after he was 45. Qualcomm was founded by Irwin Jacobs, when he was 52, and by Andrew Viterbi, when he was 50. The greatest entrepreneur today, transforming industries including transportation, energy, and space, is Elon Musk; he is 47.

Myth #2. Entrepreneurs are born, not made

There is a perennial debate about who can be an entrepreneur. Jason Calacanis proudly proclaimed that successful entrepreneurs come from entrepreneurial families and start off running lemonade stands as kids. Fred Wilson blogged about being shocked when a professor told him that you could teach people to be entrepreneurs. “I've been working with entrepreneurs for almost 25 years now,” he wrote, “and it is ingrained in my mind that someone is either born an entrepreneur or is not.”

Yet my teams at Duke and Harvard had documented that the majority, 52 percent, of Silicon Valley entrepreneurs were the first in their immediate families to start a business. About 39 percent had an entrepreneurial father, and 7 percent had an entrepreneurial mother. (Some had both.) Only a quarter of the sample we surveyed had caught the entrepreneurial bug while in college; half hadn't even thought about entrepreneurship then and had had little interest in it when in school.

Consider the backgrounds of Mark Zuckerberg, Steve Jobs, Bill Gates, Jeff Bezos, Larry Page, Sergey Brin, and Jan Koum. They didn't come from entrepreneurial families. Their parents were dentists, academics, lawyers, factory workers, or priests.

Anyone can be an entrepreneur, especially in this era of exponentially advancing technologies — in which a knowledge of diverse technologies is the greatest asset.

Myth #3. Higher education provides no advantage

Peter Thiel made headlines in 2011 with his announcement that he would pay teenagers $100,000 to drop out of college. He made big claims about how these dropouts would solve the problems of the world. Yet his foundation failed in that mission and quietly refocused its efforts and objectives to provide education and networking. As Wired reported, “Most (Thiel Fellows) are now older than 20 and some have even graduated college. Instead of supplying bright young minds with the space and tools to think for themselves, as Thiel had originally envisioned, the fellowship ended up providing something potentially more valuable. It has given its recipients the one thing they most lacked at their tender ages: a network”.

This came as no surprise. Education and connections are essential to success. As our research at Duke and Harvard had shown, companies founded by college graduates have twice the sales and twice the employment of companies founded by others. What matters is that the entrepreneur complete a baseline of education; the field of education and ranking of the college don't play a significant role in entrepreneurial success. Founder education reduces business-failure rates and increases profits, sales, and employment.

Myth #4. Women can’t succeed in tech

Women-founded firms receive hardly any venture-capital investments, and women still face blatant discrimination in the technology field. Despite the promises of tech companies to narrow the gap, there has been insignificant progress.

This is despite the fact that according to 2017 Census Bureau Data, women earn more than two-thirds of all master’s degrees, three-quarters of professional degrees, and 80 percent of doctoral degrees. Not only do girls surpass boys on reading and writing in almost every U.S. school district, they often outdo boys in math—particularly in racially diverse districts.

Earlier research by my team revealed that there are also no real differences in success factors between men and women company founders: both sexes have exactly the same motivations, are of the same age when founding their startups, have similar levels of experience, and equally enjoy the startup culture.

Other research has shown that women actually have the advantage: women-led companies are more capital-efficient, and venture-backed companies run by a woman have 12 percent higher revenues than others. First Round Capital found that companies in its portfolio with a woman founder performed 63 percent better than companies with entirely male founding teams.

Myth #5. Venture Capital is a prerequisite for innovation

Many would-be entrepreneurs believe that they can’t start a company without VC funding. That reflected reality a few years ago, when capital costs for technology were in the millions of dollars. But it is no longer the case.

A $500 laptop has more computing power today than a Cray 2 supercomputer, costing $17.5 million, did in 1985. For storage, back then, you needed server farms and racks of hard disks, which cost hundreds of thousands of dollars and required air-conditioned data centers. Today, one can use cloud computing and cloud storage, costing practically nothing.

With the advances in robotics, artificial intelligence, and 3D printing, the technologies are becoming cheaper, no longer requiring major capital outlays for their development. And if entrepreneurs develop new technologies that customers need or love, money will come to them, because venture capital always follows innovation.

Venture Capital has become less relevant than ever to startup founders.

For more, follow me on Twitter: @wadhwa and visit my website: www.wadhwa.com


Meeting Analog Reliability Challenges Across the Product Life Cycle

by Daniel Payne on 08-14-2018 at 12:00 pm

Create a panel discussion about analog IC design and reliability and my curiosity is instantly piqued, so I attended a luncheon discussion at #55DAC moderated by Steven Lewis of Cadence. The panelists were quite deep in their specialized fields.


Architecting an ML Design

by Bernard Murphy on 08-14-2018 at 7:00 am

Discussion of machine learning (ML) and hardware design has been picking up significantly in two fascinating areas: how ML can advance hardware design methods and how hardware design methods can advance building ML systems. Here I'll talk about the latter, particularly about architecting ML-enabled SoCs. This approach is getting major traction for inferencing applications, particularly when driven by power considerations (e.g. in the IoT and increasingly in automotive apps), and also in training when driven by demand for the highest performance per watt (e.g. the Google TPU).

Architecture has to be the center of design in such a system, optimizing the algorithm and the architecture for your CNN. For the core algorithm, the optimization lies in the choice of the number, types (convolution, pooling, …) and characteristics (stride, …) of layers in the network. In the implementation, memory support is one critical consideration. Neurons like tightly coupled memories for weights, activation functions and retrieving and storing neuron inputs and outputs, but everything can't be tightly coupled, so you also need high-bandwidth access to larger memories – like caching, though the implementation can be quite different in CNNs. And you want to optimize PPA, aggressively if the design is targeted at a battery-operated application.

There are challenges in finding an optimal architecture for these systems. RTL isn't an option – you need to test out different options with hundreds of MB of data (images being an obvious example), so this has to run at a higher level of abstraction. And it's not obvious (to me at least) how you would experiment with CNN architectures in a JIT-type virtual prototype running on a general-purpose processor (or even a cluster). This class of architecture design seems a much better fit with TLM approaches, for example Platform Architect Ultra.

You’ll likely start in one of the standard CNN frameworks (Caffe, TensorFlow, etc) where most experts in the field are familiar with building CNN graphs. You can translate this into a workload model in Platform Architect (Synopsys calls this a task graph). This can then be mapped into an IP in your larger SoC model: CPUs, memory interfaces and on-chip bus. Naturally the platform supports a rich library of IPs and VIPs for this task:

  • Processors – Arm, Synopsys ARC, Tensilica and CEVA DSP, traffic generators and more
  • Memory subsystem – DDR and multiport memory controllers from Arm and Synopsys
  • Interconnect models from Synopsys, Arteris IP, Arm and NetSpeed

Architecting the bulk of the SoC is pretty familiar; what is different here is designing the task graph for the CNN, or adapting it from an imported graph. I don’t want to get too much into the details of exactly how this works – you should follow the webinar link at the end to get a more complete description. But I think it is useful to highlight some of the key points for those of us who think more in terms of conventional logic.

Since you're likely working with an imported CNN task graph, you probably need to build sub-components which will ultimately connect to TLM models, starting for example with a convolution layer in which you need tasks like read_input (reading an image sub-block), read_coefficients (e.g. for weights), a task to process the neural function, and write_result. The tool supports this through creation of task components as blocks which you then connect to create a graph, indicating serial versus parallel processing. You also add parametrization to tasks and connections for things like the height and width of a frame and the stride (how much the filter shifts on the input map per convolution).
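To make the decomposition concrete, here is a generic Python sketch of such a task graph for one convolution layer; it mirrors the read_input / read_coefficients / process / write_result split described above, but the class, parameter names and example dimensions are my own abstractions, not the Platform Architect API.

```python
# Generic task-graph sketch for one convolution layer. The Task abstraction,
# parameter names and the example dimensions are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    params: dict = field(default_factory=dict)
    successors: list = field(default_factory=list)

    def then(self, other):
        self.successors.append(other)   # serial dependency: other runs after self
        return other

def conv_layer_graph(width, height, stride):
    read_input = Task("read_input", {"width": width, "height": height})
    read_coeff = Task("read_coefficients", {})
    process    = Task("process", {"stride": stride})
    write_out  = Task("write_result", {})
    # the two reads can proceed in parallel; processing waits for both, then writes
    read_input.then(process)
    read_coeff.then(process)
    process.then(write_out)
    return [read_input, read_coeff]     # roots of the graph

for root in conv_layer_graph(width=227, height=227, stride=4):
    print(root.name, "->", [t.name for t in root.successors])
```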

You can then simulate this model and sweep over different scenarios, looking at basic latencies and utilization of resources, and fine-tune and connect this as a hierarchical sub-component in the larger task graph. You will also build a structure to support the main body of your SoC (this is standard Platform Architect stuff), into which your CNN task graph is instantiated.

From here you start mapping between tasks and TLM models in the library – connecting your task graph to a TLM implementation. Now when you simulate with scenarios, you get more realistic info on latency, throughput, utilization and so on. You can also do system-level power analysis at this stage by adding UPF3 power monitors (state machines, bound to each TLM model, with estimated power per state). Ah-hah – so that's where these UPF3 monitors are used 😎
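The arithmetic behind such a power monitor is simple: energy is the sum of each state's power multiplied by the time spent in that state. The states, power figures and residencies below are invented, purely to show the calculation.

```python
# Energy from a per-model power state machine: sum of (state power x residency).
# State names, power figures and residencies are illustrative assumptions.

state_power_mw = {"idle": 5.0, "read": 120.0, "compute": 450.0, "write": 90.0}
residency_ms   = {"idle": 4.0, "read": 2.0,  "compute": 3.0,   "write": 1.0}

energy_uj = sum(state_power_mw[s] * residency_ms[s] for s in state_power_mw)  # mW*ms = uJ
avg_power_mw = energy_uj / sum(residency_ms.values())

print(f"energy ~{energy_uj:.0f} uJ over the window, average power ~{avg_power_mw:.0f} mW")
```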

The presenters (Malte Doerper – PMM, and Tim Kogel – AE manager, both in the virtual prototyping group) show application of these concepts to an AlexNet case study (AlexNet is the reference CNN benchmark everyone uses). Their goal on starting is to process a frame in 10ms with 4 frames processing in parallel, and to consume less than 500mW. They show a nice analysis, based on running over multiple scenarios where they can compare implementation choices against meeting the performance spec, power and energy (the old trick – run faster, higher power but lower energy). And of course they meet their power and performance goals!
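As a quick sanity check of those targets, the throughput and per-frame energy budget follow directly from the stated numbers (10ms per frame, 4 frames in flight, under 500mW); this is only arithmetic, no new data.

```python
# Derived figures from the stated AlexNet case-study goals: 4 frames in flight at
# 10 ms each gives 400 fps, and a 500 mW budget at that rate allows ~1.25 mJ/frame.

latency_s = 0.010   # 10 ms per frame
in_flight = 4       # frames processed in parallel
power_w   = 0.5     # 500 mW budget

throughput_fps = in_flight / latency_s
energy_per_frame_mj = power_w / throughput_fps * 1000

print(f"throughput {throughput_fps:.0f} fps, energy budget {energy_per_frame_mj:.2f} mJ/frame")
```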

I found this a pretty interesting exposition of how CNN design can fit comfortably into the full SoC architecture goal. Modeling convolution layers through tasks may look a little different, but it all seems to flow quite smoothly into full architecture creation. There’s a lot more detail in the webinar, along with a quite detailed Q&A. You can learn more HERE.


A True Signoff for 7nm and Beyond

by Alex Tan on 08-13-2018 at 12:00 pm

The Tale of Three Metrics
Meeting the PPA (performance, power and area) target is key to a successful design tapeout. These mainstream QoR (quality of results) metrics are rather empirical yet inter-correlated, and have been expanded to link with other metrics such as yield, cost and reliability. While the recent CPU performance race is less intense as Moore's Law based scaling becomes increasingly costly and complex, power is taking center stage. This has been intensified by the proliferation of silicon geared toward mobility (automotive, wireless-augmented-anything), distributed applications (Internet of Things) and scalable computing (multi-core).

Many of the adopted low-power implementation flows have shown that a holistic approach is necessary to achieve an optimal power target. It must also be initiated early, at the architectural level, by addressing multiple power domains and clock domain partitioning, followed by power optimization at the different implementation stages. Concurrently, accurate power analysis is required to provide feedback for adjustments to the optimization constraints, as recurring tradeoffs are expected among these metrics.

Power Signoff Challenges
Power signoff checks the power integrity of the grid. According to Jerry Zhao, Cadence Product Director, today's designs can be categorized into two types for the purpose of analyzing power grid construction and identifying its challenges. The first type demands capacity, as it contains billions of instances and nodes, such as power-hungry GPUs (graphics processing units) or CPUs for machine learning. The other is small in size but more sophisticated, requiring multiple power domains (such as IoT-related chips) or special needs (such as package-integrated analysis for automotive); for this type, higher analysis accuracy is needed.

Additionally, at advanced nodes such as 7nm and 5nm FinFETs, where metal resistance is increasingly dominant, higher correlation accuracy is required between the implementation estimates and the analysis results generated by the signoff tools. In a flow using a traditional signoff approach, non-converging iterations are common, since it relies on design margins and suffers from a disconnect in either the underlying data model or the optimization/analysis engines of the associated point tools.

Cadence True Signoff Solution
As part of tapeout signoff, design teams perform various validations including physical verification (DRC, LVS, reliability), timing and power. While it is common for crosstalk and SI (signal integrity) effects to be handled concurrently with timing analysis, power verification (IR drop, electromigration/EM) is traditionally decoupled from timing analysis, in spite of the fact that IR-induced voltage drop and the fixes needed to avoid EM may introduce significant timing differences that change the criticality of a timing path.

Driven by such tighter correlation needs, especially at advanced nodes, Cadence launched project Virtus (Voltus IR drop and Tempus timing technology), which integrates power and timing analysis to yield a true signoff solution. It is also aligned with the existing Cadence digital implementation flow for both QoR predictability and convergence. The overall solution is dubbed full-flow digital.

As discussed during Cooley's DAC 2018 Troublemaker Panel, Jerry shared a customer case involving a high-frequency design. The 3GHz design passed signoff using third-party power and timing tools, but on silicon fell short of the targeted frequency by several hundred MHz. The design was then subjected to true signoff analysis, which uncovered IR drop violations that could induce timing violations comparable to the earlier post-silicon observation.

How much impact does an IR drop violation have on timing? At the DAC theater presentation, Cadence showcased such IR-induced timing violations on a 2.5GHz, 7nm CPU-based design testcase. A non-critical path with +31ps of slack, passing power integrity analysis with a +42mV margin, was identified by true signoff as failing timing by -33ps due to the presence of proximity aggressors (equivalent to an 8% slowdown in speed). Interestingly, this path was not one of the top paths; instead, it was buried deep in the non-critical path bin.
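The arithmetic on the quoted numbers is worth spelling out: at 2.5GHz the clock period is 400ps, and the slack moving from +31ps to -33ps means a 64ps delay push-out once the IR-drop effect of the aggressors is included. A trivial calculation, using only the figures quoted above:

```python
# Simple arithmetic on the numbers quoted above (2.5 GHz clock, slack moving
# from +31 ps to -33 ps once IR drop from proximity aggressors is included).

freq_ghz = 2.5
period_ps = 1000.0 / freq_ghz              # 400 ps clock period
slack_ideal_ps = 31.0                      # slack assuming a clean supply
slack_with_ir_ps = -33.0                   # slack with IR-drop-induced delay
pushout_ps = slack_ideal_ps - slack_with_ir_ps   # 64 ps of added path delay

print(f"period {period_ps:.0f} ps, delay push-out {pushout_ps:.0f} ps "
      f"({pushout_ps / period_ps:.0%} of the clock period)")
```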


Speed and Parallelism

Since scalability and performance are needed for power integrity analysis, Cadence has recently rolled out Voltus-XP, enhanced with extensive parallelism to support power grid signoff on giga-scale designs with massively distributed processing. It is cloud ready and provides up to a 5x speedup.

Full-Flow Digital Solution
The tight handshake between Voltus and Tempus is just one of many close interactions among Cadence tools, as shown in the Full-Flow Digital Solution diagram, in line with Cadence's slogan this year of being a system design enabler.

“Design closure is tightly correlated with how a cell is placed and routed as it impacts how current flows through the placed region and thus, influencing the IR drop,” stated Jerry, referring to Voltus-Innovus IR-drop-aware placement fixing.

As one moves up from the die, packaging has its own IR drop requirements that differ from the chip level: it is more thermally centric and involves gradual change compared with the chip's quick voltage ramps. A similar handshake with packaging analysis is also needed. Designers are expected to iterate through the tools in the ecosystem, though not necessarily concurrently, in order to ensure QoR convergence.

In addition to IR drop checks, meeting foundry-driven EM rules for 7nm and 5nm power integrity requires a year-round team collaboration which culminates in passing the foundry certification process.

Signoff Solution for The Advanced Nodes
Asked for his take on the adequacy of Cadence's current tool offerings for 7nm or 5nm signoff, Jerry stated that the implementation step could further leverage the analysis outcome. “When Voltus is running it will report hot-spot areas (i.e. with IR drop error) understood by Innovus and used to do IR aware placement fixing,” stated Jerry. The tool is smart enough to move the aggressor by a few rows based on a cost function; a rerun of Voltus is needed to confirm the fix. Using such an approach a designer can reduce IR drop by 30% in one iteration. Multiple iterations would resolve a significant number of IR-related issues, although not all of them, since some designer-imposed constraints apply, such as not allowing the tool to touch the clock trees.
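As a naive extrapolation (my assumption, not a Cadence claim): if each iteration removed a further 30% of the remaining IR drop violations, the residue would shrink geometrically but never quite reach zero, which matches the caveat that constrained nets such as clock trees are never touched.

```python
# Naive extrapolation of the quoted "30% reduction per iteration" figure; this is
# an illustrative assumption, not a Cadence result. Residual violations shrink
# geometrically but never reach zero.

remaining = 1.0
for i in range(1, 6):
    remaining *= 0.7
    print(f"after iteration {i}: ~{remaining:.0%} of IR drop violations remain")
```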

Addressing it from another angle has been demonstrated through the use of the Tempus timing signoff tool. Jerry said that since Voltus and Innovus share the same database (data model), an integrated Tempus-Voltus timing analysis can be done and an ECO based on the voltage report can be generated to fix timing violations.

To recap, both timing and power signoff have become increasingly lengthy with more complex designs and advanced nodes. Cadence's integrated signoff solution not only provides multi-dimensional analysis but is also tightly coupled with the optimization-based tools to alleviate signoff bottlenecks.

For more info on Voltus please check HERE, and for Cadence silicon signoff HERE.


SEMICON West – Leading Edge Lithography and EUV

by Scotten Jones on 08-13-2018 at 7:00 am

At SEMICON West I attended the imec technology forum, multiple Tech Spot presentations and conducted a number of interviews relevant to advanced lithography and EUV. In this article I will summarize what I learned plus make some comments on the outlook for EUV.


Are We Over Uber? Bring on the Bots

by Roger C. Lanctot on 08-12-2018 at 12:00 pm

From sexual harassment, to surveilling regulators, to Uber drivers and taxi drivers committing suicide (because they can't make a living), the pervasive creepiness of Uber continues to spread, while the means of corralling this societal phenomenon creeps steadily forward like sclerotic mid-town traffic. The latest chapter in the Uber saga is unfolding in New York, where court rulings (in line with similar rulings in California) may force gig drivers to be treated as employees just as the New York City Council is considering freezing for-hire licenses to combat negative urban traffic consequences.

Multiple studies show that the onset of ride-hailing apps such as Uber, Lyft, Via, Yandex, Grab and others around the world is driving up urban traffic congestion and undermining public transportation. The latest moves in New York, however, highlight the favor felt by under-served minority communities when it comes to fulfilling their ad hoc local transportation needs. In New York, and elsewhere, traditional Yellow Cabs have a reputation for insufficiently serving minority riders (i.e. not stopping to pick them up) and their communities (i.e. refusing to drive to particular neighborhoods).

In New York, civil rights organizations and leaders including the New York Urban League and the Reverend Al Sharpton’s National Action Network have come out in opposition to the freeze on for-hire licenses and in support of Uber. These two currents perfectly capture the conundrum of a service that undercompensates drivers, many of whom are immigrants from minority communities, while providing superior service to those same minority and immigrant communities.

Let’s not get carried away, though, because Uber (and Lyft etc.) drivers are also well known for choosing not to accept certain fares (even though they may suffer app-based consequences). It’s not a perfect solution.

The New York Times chimed in with its own editorial solution of a minimum wage for gig economy drivers and a hike in taxi fares along with congestion charging for drivers of privately owned vehicles entering the city. The Times would prefer to see some balance restored in the transportation network between individual vehicles and public means of transportation.

That’s right. Autonomous taxis can be more effectively managed and regulated and won’t discriminate regarding passengers or destinations. A recently published study modeling the creation of an automated taxibot-centric transportation network in Lisbon, Portugal, found that 90% of cars could be eliminated and commute times reduced though traffic would remain depending upon the degree of reliance on the taxibots.

– Urban Mobility System Upgrade – International Transportation Forum

The pressure is growing on Uber et al. to compensate drivers fairly and treat them as employees. Uber continues to be forced out of particular cities and countries – or to sell off its assets to local competitors.

The resistance to freezing for-hire licenses in New York City is a recognition that Uber and competing services have a significant driver turnover problem – as well as a surfeit of drivers only working part-time. Freezing licenses could actually improve the compensation picture for the remaining drivers, but might limit the availability of the service.

But all of this regulatory and legal attention reflects the reality that the Uber model works against both the gig drivers and the regulated and licensed taxi drivers. What is worse is that taxi operators around the world have demonstrated an almost comprehensive inability to compete – dependent as they have become on regulated fares and licensing that render them more or less defenseless.

The combination of regulatory pressure and growing urban gridlock has set the stage for a combined downward spiral in the quality of service for taxis, ride-hailing services and public transportation. What looks to an Uber (et al.) passenger like a win-win is actually a lose-lose. The only answer, it seems, is taxibots. So, I guess that means we're facing another 5-10 years of sexual harassment, regulatory surveillance and taxi (and Uber) driver suicides along with declining public transit performance. It's a small price to pay for a cheap cab ride, or is it?


Florida Sends Mixed Autonomous Message

by Roger C. Lanctot on 08-12-2018 at 7:00 am

Florida is poised to surpass California and Arizona as the leader in autonomous driving development thanks to an aggressive legislative agenda that began in 2012 and direct engagement with autonomous car developers. The state has good reasons for fostering automated driving, given that it is the third most populous state in the U.S. with the third highest number of highway fatalities.

These Florida realities and others were brought to my attention by a member of the Florida Chamber of Commerce who I met earlier this year. The Chamber of Commerce is at the heart of the “Autonomous Florida” effort.

There are many motivations for stimulating autonomous vehicle development in a state where the driving environment is heavily influenced by the behavior of aging drivers and tourists. By many measures Florida is the number one tourist destination in the U.S. and Florida is a leader in highway fatalities involving senior citizens.

SOURCE: TRIP Transportation Research

In spite of all this momentum and motivation, the Florida Chamber of Commerce, which has been a leader in the autonomous driving effort, held a Webinar recently that focused on potential job losses and other negative economic impacts from autonomous driving. The Webinar also highlighted varying levels of ambivalence toward the technology from different demographic segments.

The Webinar embodied all the worst boogieman concerns that autonomous driving technology inspires including fear of the vehicles themselves and fear of near-catastrophic job losses from their deployment. The gloomy Webinar contrasts mightily with the otherwise autonomous tech booster-ism of the Florida Chamber.

In fact, Florida is poised to add multiple new autonomous vehicle partners including Cruise Automation and Waymo in coming months, vaulting the state into contention with California and Arizona for leadership in the segment. Florida has done much to lay the ground work including a focus on funding infrastructure investments and setting up multiple smart city and sustainable community initiatives.

The onset of Florida’s leadership is notable as it becomes the third fair weather state – fourth if you include Nevada – to embrace autonomous vehicle tech. Florida is also following the lead of Arizona in clearing away regulations to open up its roads to driverless, steering wheel-less, and brake-and-accelerator-pedal-less automated vehicles.

This is important in the context of countries around the world that are attempting a top-down approach to simultaneously fostering and regulating autonomous vehicles. The message from Florida (and Arizona and Virginia and a growing roster of U.S. states) is: Less is more.

Regulators and legislators are more likely to get things wrong or narrow development options by intervening too aggressively. The onset of autonomy does indeed have implications for infrastructure, employment, productivity and consumer behavior. But regulators and legislators are unlikely to “guess right” when it comes to specifying how technologies are deployed or which technologies are acceptable.

Safety and security are key issues and standards-setting organizations are best equipped to address these concerns. But just as we are exploring allowing drivers to take their hands off the steering wheel, it is probably best that regulators and legislators keep their hands off the technology.

Legislation before the U.S. Congress intended to regulate autonomous vehicle deployments and override state authority is currently stalled, and rightly so. The implementation of autonomous vehicle technology will mean different things in different regions and, thus far, there is no single clear path to autonomy.

California has implemented an array of licensing and data collection obligations. Florida and Arizona, in contrast, do not. Only time will tell which path forward is most attractive, acceptable or swiftest – to say nothing of safest.

States are closer to the action and better able to respond than the Federal government. The National Highway Traffic Safety Administration in the U.S. Department of Transportation has thus far demonstrated it is adequately up to the task of providing basic regulatory support within existing rules. The Society of Automotive Engineers has made its standards-setting contribution and is working closely with the Florida Chamber on autonomous vehicle testing and development.

A study from Germany-based research firm Progenium conferred autonomous vehicle development leadership upon the U.K. based on 23 factors as part of an Autonomous Driving Index. U.K. panelists at a recent London mobility event agreed with those findings in the context of the U.K.’s top-down oversight of autonomous vehicle development.

As for me, I prefer what, to Europeans, must look like a highly chaotic autonomous vehicle development environment ruled, for now, by individual states with some light Federal regulatory oversight. Autonomous driving is a local phenomenon ruled by varying local driving regulations and demographic and meteorological conditions.

Even individual states can get things wrong – particularly when rules become too intrusive. My favorite absurd rule is California’s special licensing for “drivers” of autonomous vehicles. Wait. What? It makes no sense to me.

Florida is on the right path with the right idea. But Florida needs to set aside the confusing messaging around lost jobs and fear of autonomous cars. Autonomous vehicles will create employment, stimulate the economy and change the way people think about transportation generally and driving in particular – and all of that is for the greater good.