
Lip-Bu on Opportunity

by Bernard Murphy on 04-27-2017 at 7:00 am

Given a chance to talk with someone as connected as Lip-Bu Tan (President and CEO of Cadence and Chairman of the VC firm Walden International), it is tempting to ask all the usual questions about industry growth and directions in cloud, automotive, IIoT, AI and so on. I wanted to try something different. If you make a living (or plan to) in semiconductor design and EDA, this explosion of new technologies makes for interesting reading, but what does it mean for your job? Since all the press seems to be around the mega-companies in these fields (Google, et al), why not look for jobs there, especially given their supposedly stratospheric salaries and the chance to work with bleeding-edge technology?


Lip-Bu acknowledged that superficially it is difficult to compete for talent with these giants. But he made the point that most engineers want to solve difficult problems, and they want to have an impact. The big companies offer the first (if you can make it through the selection process), but not so much of the second. However, thanks to all those new emerging technologies, new problems and opportunities are sprouting like weeds in EDA. And solutions are starting to leverage leading-edge technologies in machine learning, big data analytics, neural nets and cloud-friendly development and deployment. Engineers working in these areas get to work on big hairy problems and learn about and use the latest development platforms.

Lip-Bu pointed to a few big drivers for this change. He mentioned first that a lot more business is now coming from systems companies (representing 40% of revenue for Cadence). Those companies are motivated to get maximum value from partners by offloading as many requirements for expertise as they can. Since what the systems companies are doing is ground-breaking, they expect more collaborative innovation from their partners, leveraging state-of-the-art methods like machine learning, where the traditional EDA customer base might prefer to keep that learning and expertise in-house.

Another interesting driver here is an increasing appreciation for the value of data. Companies like Cadence have incredibly rich stores of data on the architecting, verification and implementation of countless designs across all possible domains. Machine learning techniques can potentially be applied to improve results with specific types of designs like an IP block, for instance, enabling teams to focus more on the big objectives and less on the minutiae so they can get to market faster. Systems companies recognize this advantage and want to tap into it.

Lip-Bu next made a point about why this has become so important and drew an interesting analogy between the pharmaceutical industry and design/EDA. Big pharma traditionally focused on blockbuster drugs—spending a lot of money to develop and test a drug that you could sell to millions. That business model still works, but the real growth is in personalized medicine; cancer treatment is one area where tailored medication looks much more successful than blockbuster approaches. The same trend seems to be happening in design. When you look at the IoT, recognition, differentiation in the cloud, all these areas are driving similar “personalization” by applications in hardware design because it has become clear that customized solutions can get not just a little edge but orders of magnitude advantage over traditional solutions.

That means a lot more designs must be turned a lot more quickly by teams who want to (quickly) deliver the best possible solution for their application, but those teams, in many cases, have no interest in delivering to markets beyond their internal needs. So there’s a big hairy problem – deliver methods to build bigger, faster, lower-power designs but much more quickly and efficiently than can be done today, using a lot more intelligence tapped from that massive database of prior experience.

Lip-Bu stressed several times that Cadence is very much in learning mode with its partners on their needs, but it’s clear to me that they are not waiting to be told exactly what to do. As they learn, they are already aggressively building capability around machine learning, around massive parallelization in the cloud and around a host of other cool solutions to big hairy problems in a wide range of application areas, which I don’t have space to cover here.

So, when you think about whether you want to work for a company like Cadence or a company like Google, consider this. With the big guys, forget about the superstar salaries unless you’re already world-famous. You might still make (some) more money, but you’ll be buried in a giant team where your contribution will be relatively small. Or, you could work in an EDA company where there are lots of hard problems, innovation is essential and you can be a major contributor to an important solution. You can still wax poetic about how what you do makes the rest of this new industrial revolution possible. But it’s nice to be recognized for what you have done too, and it doesn’t hurt your career prospects to know you’re developing skills at the leading edge, just like the big guys.


Approaches for EM, IR and Thermal Analysis of ICs

by Daniel Payne on 04-26-2017 at 12:10 pm

As an engineer I’ve learned how to trade off using various EDA tools based on the accuracy requirements and the time available to complete a project. EDA vendors have been offering software tools to help us with reliability concerns like EM, IR drop and thermal analysis for several years now. Last week I attended a webinar from Silvaco that discussed how they have two approaches in this analysis area:

  • Highest accuracy tool, used by circuit designers for sign-off – InVar
  • Fastest run times, used by layout designers before design is LVS clean – InVar Prime

The InVar tools came from Silvaco's 2015 acquisition of Invarian, and it's always a positive sign when an acquired product line continues to grow into new markets. With InVar Prime the tool user is the layout designer who wants to check the quality of the routing and interconnect early in the design process, even before the design is LVS clean. At such an early stage you can quickly pinpoint and fix layout issues.

One technique to get the fastest run times for this type of analysis is to avoid using a SPICE circuit simulator, and instead use an approach with user-provided current source values applied to an extracted IC layout.

A layout engineer can analyze the power and ground network for quality, estimate the IR drops, estimate current densities, look at point to point resistance values, uncover missing vias, find narrow wires and even uncover detour power routing. All of this early analysis will help meet the tape-out schedule and reduce costly design iterations.
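The flavor of this current-source approach can be shown with a toy sketch: treat one power rail as a chain of resistive segments, with each cell's draw supplied by the user as a fixed current. The drop at each node falls out of Ohm's law with no circuit simulator involved. This is an illustrative model only, not InVar Prime's actual algorithm.

```python
def ir_drop(vdd, r_seg, draws):
    """Voltages along a single power rail fed from one pad.

    The current through each rail segment is the sum of all currents
    drawn downstream of it, so drops accumulate toward the far end of
    the rail. No SPICE engine is needed, just the user-provided draws.
    """
    volts = []
    v = vdd
    remaining = sum(draws)          # everything flows through segment 1
    for d in draws:
        v -= r_seg * remaining      # drop across the segment feeding this node
        volts.append(v)
        remaining -= d              # current past this node shrinks
    return volts

# 1.0 V pad, 50 milliohm segments, three cells drawing 20/30/10 mA
voltages = ir_drop(1.0, 0.05, [0.020, 0.030, 0.010])
```

Even a model this crude shows the qualitative result the tool reports: the far end of the rail sees the worst IR drop, which is where missing vias or narrow wires hurt most.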

So what does a user of InVar Prime have to supply the tool as inputs?

  • GDSII or Open Access
  • Technology file (ITF or iRCX)
  • Current source info (from a GUI, script or SPICE results)
  • Voltage source info

The capacity of InVar Prime is extended through the use of hierarchy, where each sub-block can be analyzed before looking at the entire chip. Analysis speeds are quite fast, which allows the layout engineer to get multiple runs per day. Feedback from the analysis is both visual and textual, so for IR drop analysis you can see the regions of greatest drop in red, or click on the text report to pinpoint the areas of greatest voltage drop:

Using an example standard cell layout they introduced an error by removing vias in the power net at the lower-left corner, then ran the IR drop analysis to quickly show the regions of highest voltage drop:

For EM and current density analysis your design can have any number of supply nets and you can see the regions of highest current density using the familiar rainbow of colors, review textual reports, or even click on a text report and view the graphical region. Scripting with the Tcl language can also be used to make your analysis even more automated.

If you need to find the greatest resistance from one point in your power network to its final destination, the analysis results are again presented both visually and in text format.

As an IC operates, each transistor produces heat based on device sizes, currents, switching frequency and the quality of the power and ground networks. The InVar Prime tool provides thermal analysis in either 2D or 3D modes.
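The per-device heat that feeds a thermal map comes largely from switching activity. As a minimal sketch, the textbook CMOS dynamic-power formula captures the dependence on size (capacitance), voltage and frequency mentioned above; real tools add leakage and a package thermal model, and this is not a claim about InVar Prime's internal method.

```python
def dynamic_power_watts(c_load_farads, vdd_volts, freq_hz, activity):
    """Textbook CMOS dynamic power: P = a * C * V^2 * f.

    activity is the fraction of cycles the load actually switches.
    Illustrative formula only; a real thermal tool combines numbers
    like this with leakage power and a thermal model of the package.
    """
    return activity * c_load_farads * vdd_volts ** 2 * freq_hz

# e.g. 10 fF switched load, 0.9 V rail, 2 GHz clock, 20% activity
p = dynamic_power_watts(10e-15, 0.9, 2e9, 0.2)
```

The quadratic dependence on supply voltage is why IR drop and thermal analysis interact: a rail that droops runs cooler but slower, so both must be checked together.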

It’s interesting to compare the faster speed of using InVar Prime (gold) versus the SPICE-based InVar tool on three different designs:

With InVar Prime, run times are faster, RAM usage is lower, and analysis setup takes less time.

Summary
IR, EM and thermal analysis are key to ensuring that your chip designs perform reliably and correctly in first silicon, so now you have two approaches from Silvaco with their InVar and InVar Prime tools. The InVar tool was introduced back in 2010 while the InVar Prime tool was introduced 12 months ago, and they have happy customers that have designed processors, displays, memories, high-current ICs, sensors, mobile, Wi-Fi and wired networking chips.

To view the archived webinar, watch it online.


A Self-Contained Software-Driven Prototype

by Bernard Murphy on 04-26-2017 at 7:00 am

You’re building an IP, subsystem or SoC and you want to use a prototype together with a software testbench to drive extensive validation testing. I’m not talking here about the software running on the IP/SoC processor(s); the testbench should wrap around the whole DUT. This is a very common requirement. The standard approach to addressing this need is to hook the prototype up to a host PC, run your testbench there and connect to the prototype through one of the standard interfaces.

But that’s not necessarily ideal. Your real design probably has all kinds of communication interfaces to the rest of the system to which it will ultimately connect, yet you’re constrained to pushing all that testbench activity through a relatively narrow pipe between the prototype and the PC. Of course that can be done, but wouldn’t it be nice to have wider and more realistic channels to connect the testbench to the DUT? That’s what Aldec offers through their HES-US-440 prototyping board, hosting both a Xilinx UltraScale FPGA to prototype the DUT and a Xilinx Zynq FPGA to act as the testbench host.

The UltraScale and Zynq devices are connected through 160 board traces (which can be used as 80 LVDS pairs), plus four high-speed GTX serial links. And of course there are plenty of standard interfaces (PCIe, USB, Gigabit Ethernet, ..) on both FPGAs. These can also be looped-back between the devices if needed. There are also FPGA mezzanine card (FMC) interfaces on both FPGAs – these too can be connected in loop-back. So yeah, you can model, in hardware, very wide, fast, direct and protocol-based interfaces between your software-driven testbench and your DUT.


Aldec cites what is probably a very common use case for this capability. You're building an IP with an AXI interface and you particularly want to test that interface (probably with other standard interfaces also connecting to the IP), through your software testbench, to validate correct operation of your prototyped design. The Xilinx Vivado design software does all the heavy lifting for you by building the AXI Chip2Chip bridge between the two FPGAs (providing both master and slave interfaces). The Xilinx SDK handles the software interfaces to AXI on each side, so you're just left with the software you would have written anyway – for the processor in your IP and for the testbench. You can also skip the AXI port on the Zynq side and use a pre-configured Aldec setup for the Zynq, running Linux.
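The testbench software you'd write against such an AXI-mapped DUT mostly boils down to 32-bit register reads and writes. Here is a hedged sketch: the register map, names, and the dict-backed stub bus are all hypothetical (on the real board these calls would go through the SDK-generated AXI interface, or /dev/mem under Linux on the Zynq), but the structure lets the same testbench logic run on any host.

```python
def make_bus(regs=None):
    """Stub 32-bit register bus standing in for the AXI Chip2Chip link.

    Backing read32/write32 with a dict lets testbench logic be
    developed and unit-tested off-target; on hardware these two
    functions would be swapped for real memory-mapped accesses.
    """
    regs = {} if regs is None else regs

    def write32(addr, value):
        regs[addr] = value & 0xFFFFFFFF   # model a 32-bit register

    def read32(addr):
        return regs.get(addr, 0)          # unwritten registers read 0

    return read32, write32

# Hypothetical register map for the DUT's AXI slave window
CTRL, STATUS, DATA_IN = 0x00, 0x04, 0x08

read32, write32 = make_bus()
write32(DATA_IN, 0xDEADBEEF)   # drive stimulus toward the DUT
write32(CTRL, 0x1)             # hypothetical 'go' bit
```

The point of the board is that this same read/write loop can then be driven over a wide LVDS or GTX path rather than a narrow PC-to-prototype pipe.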

I can imagine all kinds of organizations who might find this solution appealing. Small shops building IP really need to prove out their designs before committing to testchips. This board could provide a very cost-effective way to do that (and to help build and test the drivers they'll need to supply with the IP). Groups in larger organizations may also be interested if they want to do intensive testing but have limited access to approved prototyping platforms thanks to demand from multiple other groups. Teams building designs targeted to FPGAs may want to start prototyping quickly without the overhead of building test boards. Heck, I think I want one. It feels like a Raspberry Pi for serious applications requiring hardware reprogrammability and high-performance connectivity (though I'm sure a little pricier than a Raspberry Pi).

I should note that Aldec also suggests this board as a solution for high-performance computing (HPC) applications. Here it would be used not as a prototype but as the production version of an accelerator. Perhaps Amazon, Google and Facebook might not need this approach, but this is an interesting idea for the rest of us. If you want to build your own accelerator for search, recognition, analytics, fintech, whatever you want to accelerate, this could be an appealing place to start. If it turns into a billion-dollar business you can always convert it to an ASIC, but until that happy day you can have a working solution up, running and proving itself in no time.

You can learn more HERE.


3D Product Design Collaboration in MCAD and ECAD Platforms

by Tom Dillinger on 04-25-2017 at 12:00 pm

Consumer electronics demand aggressive mechanical enclosure design — product volume, weight, shape, and connector access are all critical design optimization criteria. Mechanical CAD (MCAD) software platforms are used by product engineers to develop the enclosure definition — the integration of the PCB design (or, potentially, a rigid-flex assembly) into the MCAD model enables the engineer to verify the mating of the enclosure and electronics, and submit the model to thermal, structural, and EMC/EMI analysis.

Traditionally, the (final) PCB definition was exported from the ECAD design platform using the .idf representation, short for “intermediate data format“. An (initial) .idf description would be exported from the MCAD toolset to reflect the starting PCB topology, with connector placements, mounting holes, keep-out areas, etc.

Yet, the .idf format was not conceived to support the requirements of current product designs, where iterative collaboration between MCAD and ECAD environments is required. To address the needs of MCAD and PCB designers, an industry consortium pursued the definition of a new standard, commonly known as .idx (named after the file extension used, short for “incremental design exchange“). Specifically, the .idx format supports the following key features:

  • all design objects are assigned an identifier

Electrical components, holes, keep-outs, mechanical components, etc. are all given a unique designator, which enables the main .idx characteristic, listed next.

  • incremental data exchange

IDX enables MCAD/ECAD systems to optimize the amount of data exchanged during design iterations, and track the change history.

  • data is represented using XML schemas

An .idx file is an XML document, which is readily extendible as future requirements arise.

  • rich geometry descriptions are supported

IDX uses the definition of “curvesets“, which are assigned a vertical extent to expand the 2D description into 3D. Objects are described using these curvesets; inverted shapes represent a void in an object.

  • roles

A “role” can be associated with any item (i.e., a collection of objects), which assigns rights and responsibilities for item updates to specific team members. For example, one ME may own the board shape, while another owns connector and mating hole locations; a PCB engineer may own component locations. The .idx format also supports request/accept/acknowledge handshaking for proposed updates, before changes are applied from one design domain to another.

  • properties

Components can be assigned property values, characteristics of specific interest to both MCAD and ECAD analysis (e.g., power dissipation, component mass, physical clearances around the component).
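Since an .idx file is an XML document, an incremental exchange can be pictured as a small XML fragment proposing one change against a baseline. The sketch below uses Python's ElementTree with made-up element and attribute names; it illustrates the identifier-plus-increment idea, not the actual IDX schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical incremental update: propose moving one component,
# identified by its unique designator, relative to a named baseline.
# Element and attribute names are illustrative only.
update = ET.Element("IncrementalUpdate", baseline="board_rev_A")
change = ET.SubElement(update, "Change", id="C42", kind="move")
ET.SubElement(change, "Position", x="12.50", y="7.25", rotation="90")

xml_text = ET.tostring(update, encoding="unicode")
```

Because every object carries a stable identifier, the receiving tool only needs to apply (or reject, via the request/accept handshake) this one change rather than re-import the whole board.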

The IDX representation is used by MCAD and ECAD toolsets to exchange incremental updates, after a “baseline” .idx exchange of the initial product definition. A general workflow is depicted in the figure below.

The identifiers in the baseline description then enable exchange of updates to the design — e.g., addition/deletion/re-positioning of components, changes to board shape, relocating a connector or mounting hole, etc.

At the recent PCB Forum in Santa Clara held by Mentor (a Siemens business), the Xpedition team described how .idx has enabled a highly productive MCAD/ECAD collaboration design methodology. (Mentor and PTC were the original drivers of this new standard for mechanical/electrical data modeling and information exchange.) A key feature added to the Xpedition platform has provided the PCB designer with a 3D model visualization of the MCAD data.

The figures below illustrate the concurrent 2D/3D views in Xpedition, which can include visualization of the Cu data, as well. The incremental characteristics of the .idx file are leveraged in Xpedition — proposed changes imported from the MCAD platform are highlighted, for the PCB engineer to quickly pinpoint areas to review. (Note that Cu data would be exported from Xpedition in the .idx description and merged into the MCAD model, for both physical checks and EMC/EMI analysis.)

A common property applied to an .idx object is the “lock/unlock” status — a role member can assign a lock property to prevent updates. The Xpedition Data Manager tracks the .idx history, from the baseline through subsequent MCAD/ECAD proposal/response exchange transactions. The figure below illustrates the data management and change notification features of the MCAD Collaborator utility in Xpedition.

Mentor provides a rich library of existing 3D component models for PCB visualization — the figure below illustrates how the PCB designer’s view in Xpedition compares to the final manufactured board.

The Mentor Xpedition team also provided a demo of the rigid-flex support within the collaborative design environment. The figure below illustrates the concurrent 2D/3D views of a rigid-flex assembly — 3 PCB’s with multiple (physically overlapping) flex cables.

The complexity of current products requires a close interaction between mechanical and electrical teams. The transition from the .idf to .idx data exchange formats between MCAD and ECAD tools offers a significant benefit to the design methodologies in each domain. Specifically, a PCB designer can make a broad set of design optimizations and quickly export the updates to the MCAD engineer for review. The ECAD platform needs to support .idx exchange — a key feature is 2D/3D visualization for the PCB designer. Mentor's Xpedition toolset is focused on enabling this collaborative MCAD/ECAD flow.

For more information on upcoming Mentor PCB Forum dates, please follow this link.

For information on the 3D visualization support in Xpedition, please follow this link.

For general information on ECAD/MCAD Collaboration in the Xpedition platform, please follow this link.

-chipguy


The CDNLive Keynotes

by Bernard Murphy on 04-25-2017 at 7:00 am

I’m developing a taste for user-group meetings. In my (fairly) recently assumed role as a member of the media, I’m only allowed into the keynotes, but from what I have seen, vendors work hard to make these fresh and compelling each year through big-bang product updates and industry/academic leaders talking about their work in bleeding-edge system development. Cadence continued the theme this year with a packed 90 minutes of talks to an equally packed room at the Santa Clara Convention Center.


Lip-Bu opened with “Enabling the Intelligent Connected World.” There’s a lot packed into that title. EDA/IP is enabling rather than creating that world, but that world wouldn’t exist without what EDA and IP make possible. It’s intelligent because AI and machine learning are exploding, and it’s connected because focus has dramatically shifted from point-system compute to clouds, gateways and edges.

He sees massive potential for design and EDA, particularly around connected cars, the industrial IoT (IIoT) and cloud datacenters. As both president and CEO of Cadence and head of Walden International (a VC company), he sees several important trends in these areas. The deep-learning revolution—moving from training machines to learn, to teaching them to infer (using that learning in the field)—is creating strong demand to differentiate through specialized engines (witness the Google Tensor Processing Unit). Roles between cloud, gateway and edge have shifted; where once we thought all the heavy lifting would be done in the cloud, we now realize that latency has become critical, making it important that intelligence, data filtering, analytics and security be moved as close to the edge as possible.

All of this creates new design and EDA challenges from sensors all the way up to the cloud. At the edge, more compute for all those latency-critical functions demands more performance without compromising power or thermal integrity. In the cloud, even more compute and data management is required for innovative (massively reprogrammable, reconfigurable) designs packed into small spaces. Power and thermal integrity are exceptionally important here, as they are (even more so) in automotive applications where critical electronics is expected to function reliably for 15 or more years.

Lip-Bu recapped several areas where Cadence has been investing and continues to invest, but this is a long blog, so I’ll just mention a couple. One notable characteristic is Lip-Bu’s move away from M&A towards predominantly organic growth. There are arguments for both, but I can vouch for the organic style building exceptional loyalty, strength and depth in the team, which seems to be paying off for Cadence. Also, Lip-Bu said that 35% of revenue in Cadence goes to R&D—an eye-opener for me. I remember the benchmark being around 20% to keep investors happy. I now understand why Cadence can pump out so many new products.

Next up was Kushagra Vaid, GM of Azure Hardware Infrastructure at Microsoft. Azure is a strong player in cloud; while Amazon dominates with AWS, notice the second and third bars above are for Azure – added together, Azure stands at nearly 50% of AWS usage. This is big business (AWS is the 6th largest business in the US by one estimate), set to get much bigger. Kushagra noted that there are about a billion servers in datacenters across the world, representing billions of dollars in infrastructure investment, and this is before the IoT has really scaled up. Microsoft is a strong contender in this game and is serious about grabbing a bigger share through differentiated capabilities.


The challenge is that these clouds need to service an immense range of demands, from effectively bare-metal access (you do everything), to infrastructure as a service (IaaS), platform as a service (PaaS) and software as a service (SaaS). They must provide a huge range of services, from web apps to containers, data lakes, Hadoop, analytics, IoT services, cognitive services and on and on. He showed a dense slide (which unfortunately I can’t show here) listing a representative set, summing it up by saying that clouds have become the Noah’s Arks of services, hosting every imaginable species. Bit of a shock for those of us who thought clouds were mostly about virtualization. He also mentioned something that may make your head hurt—serverless compute (as if the cloud weren’t already virtual enough). This is a service to handle event-driven activity, especially for the IoT, where a conventional pay-as-you-go service may not be cost-effective.

While news of the death of Moore’s law may be premature, designers of these systems don’t care. They must accelerate performance (TOPS, TOPS/W, TOPS/W/$, whatever metric is relevant) far beyond silicon and von Neumann possibilities. This is what drives what he called a Cambrian explosion in purpose-built accelerators tailored to workloads. Massive and distributed data requires that compute move closer to data, while deep learning requires specialized hardware, especially in inference where low power and low latency are critical. Advances in techniques for search, the ever-moving security objective, and compression to speed data transfers, all demand specialized hardware.

The Azure hardware team response is interesting; they have built a server platform, under the Olympus project, based on CPUs, GPUs and an FPGA (per server), and have donated the architecture to the Open Compute Project (OCP). I have mentioned before that this is no science experiment. Kushagra notes that this is now the world’s largest FPGA-based distributed compute fabric. He also mentioned that innovations like this will almost certainly appear first in the cloud because of the scale. In a sense, the cloud has become the new technology driver.

Kushagra closed with some comments on machine-learning opportunities in EDA, mentioning routing-friendly power-distribution networks, static timing optimization, congestion improvement in physical designs and support for improving diagnostic accuracy in chip test. He’s also a big fan of cloud-based EDA services, stressing that scalability in the cloud, without needing to worry about provisioning infrastructure, aids faster experimentation, greater agility in design options and faster time to delivery. Of course, there are still concerns about public clouds versus private clouds, but from what I hear, it’s becoming easier to have it both ways using restricted-access services with high security to handle peak demand (and see my note near the end on Pegasus in the cloud). All of this seems in line with the cloud-based access directions Lip-Bu mentioned.


Last up was Anirudh, who was responsible for the big bang product news, and he, of course, delivered—this time for the digital implementation flow. First, he talked about massive parallelization and the flow between the Genus (synthesis), Innovus (implementation), Tempus (timing) and Voltus (power integrity) solutions. Cadence has already made good progress on big parallelization for most of the flow but implementation has been hard to scale beyond around 8 CPUs. Next month, a new digital and signoff software release will be rolled out, which can scale up to multiple machines, also delivering a 1.5-2X speedup on a single machine, in some cases with improved PPA (this “more speed though parallelization, also faster on a single CPU” thing seems to be an Anirudh specialty).

Continuing his theme of fast and smart, he talked about ramping up intelligence in implementation. Here they have already demonstrated an ability for machine learning to drive an improvement of PPA through a 12% reduction in total negative slack. This capability is not yet released but is indicative of work being done in this area. Anirudh mentioned floorplanning, placement, CTS and routing as other areas that can benefit from machine-learning-based optimization.

Finally, the really big bang was his introduction of the Pegasus Verification System, the new and massively parallel full-flow physical verification solution. Daniel Payne wrote a detailed blog on this topic, so I won’t repeat what he had to say. The main point is that this is a ground-up re-design for massive and scalable cloud-based parallelism. If you want to keep costs down and run on one machine, you can run on one machine. If you’re pushing to tape out and iterating through multiple signoff and ECO runs, you can scale to a thousand machines or more and complete runs in hours rather than days. On that cost point, he cited one example of a run that used to take 40 hours, which now runs on AWS in a few hours, for $200 per run (Pegasus not included). I think that datapoint alone may change some business views on the pros and cons of running in public clouds.

A lot of information here. Sorry about the long blog, but I think you’ll agree it’s all good stuff. I just didn’t know how to squeeze it into 800 words.


NetSpeed Taking a Ride with Autonomous Automobiles

by Mitch Heins on 04-24-2017 at 12:00 pm

The push for autonomous automobiles continues at a rapid pace. Last week a new conference was held in Santa Clara, CA by the Linley Group focused on Autonomous Hardware. The group included presentations from GLOBAL FOUNDRIES, Synopsys, NetSpeed Systems, Arteris, EMBC, Cadence, CEVA, ARM and Trilumina covering ADAS and autonomous driving, deep learning, and processors for autonomous vehicles.

Having been an ASIC guy for many years, I was intrigued by the sheer complexity of the ICs being presented to handle the tasks of autonomous driving. These ICs epitomize the true system-on-a-chip. What sets them apart for me is that they are not merely pipelining or combining more of the same logic onto a die. They are very heterogeneous in nature, with many different IP cores, all of which have different interfaces, performance and latency characteristics. Added to this is the fact that many of these IPs must share common memory and interact with each other, which implies a sophisticated memory coherency strategy across the overall system architecture. Lastly, these ICs are going to be used to drive your car! That means they must be fault tolerant and able to work continuously without errors or deadlocks.

As a physical design guy, I know that if the logic is regular and repeated, the layout tasks and timing closure will be more or less straightforward. While these ICs have logic within IP blocks that is regular and repeated, the interconnection between those IPs is another challenge altogether. Because of the complexity of the devices and the different interfaces for each, many IC suppliers are now opting to use a Network-on-Chip (NoC) to handle their interconnect. So much for routing wide buses around the chip between the IPs (a nightmare if you’ve ever had to do it).

The question then is whether the medicine is worse than the disease. Once you move to a NoC, it’s as if you’ve introduced an entirely new IC within an IC, except this new design must be distributed around the full IC’s IP blocks to make the inter-IP connections. Think of it as a distributed IC within an IC. This means an entirely new architecture must be designed that will literally manage the full IC. Enter NetSpeed Systems to the rescue!

Anush Mohandass of NetSpeed Systems gave a very good presentation on how they enable designers to design and implement these complex NoCs, including helping designers make the difficult trade-offs between power, performance, area and functional safety (FuSa). The logic that implements the NoC has many different tasks to perform, including data translation from each IP’s interface to a common NoC format, efficient routing of data packets between IPs, error checking for fault tolerance, load balancing, and dynamic routing adjustments to comprehend changing data traffic patterns between IPs. All of this must be done while meeting user-specified quality-of-service (QoS) targets and avoiding deadlock situations. There is also a need to include on-the-fly security checking to ensure the IC is not being compromised by some agent trying to take control from the outside. The main attack surface for these types of ICs is the NoC, as the NoC controls all data going into and out of the rest of the system.

NetSpeed offers a design and optimization cockpit called NocStudio that employs a top-down approach using machine learning to optimize the IC’s NoC-based QoS, power, performance, area, latency and FuSa. NocStudio analyzes different approaches to the power, performance, area and FuSa trade-offs and then synthesizes a NoC that best meets the designers’ goals. Designers can weight and customize the trade-offs depending on the end application and markets that their SoC serves. This includes the ability to categorize data packet traffic in up to 16 different classes and allocate up to 64 virtual channels with dynamic priority to allow for dynamic QoS control.
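NocStudio's internals are proprietary, but the idea of traffic classes feeding prioritized virtual channels can be sketched with a toy arbiter. The class numbering and API below are assumptions for illustration only:

```python
import heapq

class VcArbiter:
    """Toy priority arbiter over virtual-channel traffic.

    Each pending flit carries a traffic-class number (lower = more
    urgent); ties within a class are granted in arrival (FIFO) order.
    Illustrative sketch only, not NetSpeed's implementation.
    """
    def __init__(self):
        self._heap = []   # min-heap of (traffic_class, arrival_seq, flit)
        self._seq = 0

    def push(self, flit, traffic_class):
        heapq.heappush(self._heap, (traffic_class, self._seq, flit))
        self._seq += 1

    def grant(self):
        # Grant the most urgent pending flit, or None when idle.
        return heapq.heappop(self._heap)[2] if self._heap else None

arb = VcArbiter()
arb.push("bulk-data", traffic_class=3)
arb.push("interrupt", traffic_class=0)
arb.push("audio", traffic_class=1)
print(arb.grant())   # interrupt
print(arb.grant())   # audio
```

Dynamic QoS control, in this picture, amounts to letting the class-to-priority mapping change at run time as traffic patterns shift.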

Functional safety is treated as a first-class citizen of the SoC from the very beginning, rather than as something tacked on at the end. NetSpeed's NoC IP is certified ISO 26262 ASIL D ready, but the software also gives designers the flexibility to divide the NoC into different ASIL-level partitions depending on the needs of their clients. NetSpeed's machine learning algorithms can synthesize the different partitions to different ASIL levels per the designers' request, since the tool also knows how to grade the circuit per the ISO 26262 standard. It does all of this while ensuring that the NoC will be deadlock free, even when the design mixes IPs with coherent and non-coherent memory access requirements.
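The deadlock-freedom guarantee has a classic foundation: by the Dally/Seitz result, a routing function is deadlock-free if its channel dependency graph is acyclic. A textbook cycle check (my own sketch, not NetSpeed's tool) looks like this:

```python
def has_cycle(cdg):
    """Detect a cycle in a channel dependency graph.

    cdg is an adjacency dict; every channel must appear as a key.
    Iterative depth-first search with three-color marking: a back edge
    to a GREY (in-progress) node means a cycle, i.e. a deadlock risk.
    """
    WHITE, GREY, BLACK = 0, 1, 2
    color = {c: WHITE for c in cdg}
    for start in cdg:
        if color[start] != WHITE:
            continue
        stack = [(start, iter(cdg[start]))]
        color[start] = GREY
        while stack:
            node, neighbors = stack[-1]
            for nxt in neighbors:
                if color[nxt] == GREY:       # back edge: cycle found
                    return True
                if color[nxt] == WHITE:
                    color[nxt] = GREY
                    stack.append((nxt, iter(cdg[nxt])))
                    break
            else:
                color[node] = BLACK          # fully explored, safe
                stack.pop()
    return False

# Channel dependencies a->b->c are acyclic (safe);
# adding c->a closes a cycle (potential deadlock).
print(has_cycle({"a": ["b"], "b": ["c"], "c": []}))    # False
print(has_cycle({"a": ["b"], "b": ["c"], "c": ["a"]})) # True
```

A synthesis tool can run a check like this on every candidate routing configuration and reject (or repair) any that closes a dependency cycle.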

Once a NoC design is created, NocStudio outputs synthesizable RTL, verification suites, and the information physical design needs to place the NoC components so as to meet timing requirements and manage clock skew. It also produces the documentation required to meet the ISO 26262 standard, including a safety manual for the IC.

NocStudio is already used by the #1 and #2 IC suppliers for autonomous vehicles and hyperscale computing, as well as by top vendors in artificial intelligence, virtual/augmented reality, and real-time security analytics. On April 5th, NetSpeed also announced a multi-year license agreement with Sunplus Technology for NetSpeed's Orion on-chip network IP. Sunplus, a leading provider of multimedia IC and automotive infotainment solutions, will use NetSpeed's IP to accelerate the design and development of future generations of its automotive SoCs.

This is impressive technology and I believe we will be hearing more from NetSpeed in the future. NetSpeed Systems is a company to keep your eye on, especially as the autonomous vehicle market takes off.

See also:
NetSpeed Systems web page
Sunplus Technology Licenses NetSpeed’s Orion IP


1.2 Terabit/s C2C Interface? Only with Interlaken!

1.2 Terabit/s C2C Interface? Only with Interlaken!
by Eric Esteve on 04-24-2017 at 7:00 am

If you are familiar with high-bandwidth networking applications, you probably know this chip-to-chip (C2C) interface protocol. The Interlaken architecture, fully flexible, configurable and scalable, is also an elegant answer to the need for very high-bandwidth C2C communication. Interlaken is elegant because the protocol defines the controller specification and can interface with various SerDes architectures, at rates up to 56 Gbps with Forward Error Correction (FEC).

The Interlaken protocol has clearly been defined to provide the lowest latency when interfacing two chips at very high speed. The definition is simple, allowing the best possible efficiency. Compare the Interlaken specification with PCI Express or Ethernet, for example: it's much, much simpler, making the protocol easy to implement yet extremely powerful for connecting devices together.

Interlaken targets high-bandwidth networking applications such as routers, switches, Framer/MAC, OTN switches, packet processors, traffic managers, look-aside processors/memories and data center applications. In any of these applications, the chip will integrate complex protocols based on high-speed serial links supported by SerDes. Developing or buying a 56 Gbps, or even a 28 Gbps, SerDes is either a resource-intensive task or an expensive solution. Because Interlaken has been defined to cope with any kind of SerDes, the chip maker can internally reuse the one-time investment made to implement the more complex protocol.

Open-Silicon, a founding member of the Interlaken Alliance formed in 2007, is launching the 8th generation of its Interlaken IP core, supporting up to 1.2 Tbps of bandwidth. This high-speed chip-to-chip interface IP features an architecture that is fully flexible, configurable and scalable.

The flexibility of the Interlaken IP core translates into multiple aggregate bandwidth options. For example, a single Interlaken IP instance can be configured in-system to support different Interlaken interfaces: 1x1.2Tbps, 2x600Gbps or 4x300Gbps. The on-chip implementation can use up to 48 SerDes lanes with a 28 Gbps SerDes, or a 24-lane solution with a 56 Gbps SerDes.
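The lane counts line up with simple arithmetic. As a back-of-envelope sketch (my own, not Open-Silicon's figures), the raw aggregate bandwidth is just lanes times lane rate; Interlaken's 64b/67b encoding and framing overhead plausibly account for the gap between the raw figure and the 1.2 Tbps usable number:

```python
def raw_bandwidth_gbps(lanes, serdes_gbps):
    """Raw aggregate bandwidth of an Interlaken interface: lanes x lane rate.

    Back-of-envelope only: usable throughput is lower once 64b/67b
    encoding and protocol framing overhead are subtracted, which is how
    48 x 28 Gbps (or 24 x 56 Gbps) raw comes down to ~1.2 Tbps usable.
    """
    return lanes * serdes_gbps

print(raw_bandwidth_gbps(48, 28))   # 1344 Gbps raw
print(raw_bandwidth_gbps(24, 56))   # 1344 Gbps raw
```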

The core is also highly configurable and scalable, as illustrated by this feature list:

Support for 256 logical channels

8-bit channel extension for up to 64K channels

Independent SerDes lane enable/disable

Support for SerDes speeds from 3.125Gbps to 56 Gbps

Configurable number of lanes from 1 to 48

Flexible user interface options:
– 128b: 1x128b, 2x128b, 4x128b, or 8x128b
– 256b: 1x256b, 2x256b, 4x256b, or 8x256b

Programmable BURSTMAX from 64 to 512 bytes

Programmable BURSTMIN from 32 to 256 bytes

Simultaneous In-band and Out-of-Band flow control

Programmable calendar

Built-in error detection and interrupt structures

Configurable error injection mechanisms for testability
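To make the BURSTMAX/BURSTMIN knobs concrete, here is a toy segmentation routine. It is an illustrative sketch, not the Interlaken specification's exact scheduling rules: packets are carved into bursts no larger than BURSTMAX, and an undersized tail burst is rebalanced against its predecessor so it meets BURSTMIN.

```python
def segment_packet(length, burst_max=256, burst_min=64):
    """Toy Interlaken-style burst segmentation.

    Splits a packet of `length` bytes into bursts no longer than
    burst_max; if the tail burst would fall below burst_min, steal
    bytes from the previous burst so the tail satisfies the minimum.
    """
    bursts = []
    remaining = length
    while remaining > 0:
        take = min(burst_max, remaining)
        bursts.append(take)
        remaining -= take
    # Rebalance an undersized tail burst against its predecessor.
    if len(bursts) >= 2 and bursts[-1] < burst_min:
        deficit = burst_min - bursts[-1]
        bursts[-2] -= deficit
        bursts[-1] += deficit
    return bursts

print(segment_packet(300))   # [236, 64] rather than [256, 44]
```

Keeping every burst above a minimum size is what lets the receiver sustain full rate without being swamped by tiny control-word-dominated bursts.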

According to Michael Howard, senior research director and advisor, carrier networks at IHS Markit, “with the unstoppable growth of high-bandwidth networking applications together with the desire to further technological advancements on a much quicker cadence, the demand for industry consortium standards that ensure interoperability grows sharply. It is for these reasons that solutions such as this chip-to-chip Interlaken IP core will likely have high adoption into next generation routers and switches, packet processors, and high-end networking and data processing applications.”

“The demand for performance and bandwidth for applications in networking is growing exponentially,” said Vasan Karighattam, Vice President of Engineering for Open-Silicon. “With nearly a decade of experience building the Interlaken core, Open-Silicon has continued to provide its customers with leading-edge custom silicon and IP solutions that power next generation networking products. Open-Silicon remains committed to the Interlaken protocol and providing the highest-performance, most scalable Interlaken IP.”

The success of the chip-to-chip Interlaken IP core is based on the exponential growth of bandwidth demand (a 25% CAGR for 2015-2020 and a volume of 80 Exabytes per month in 2017) and on the high level of interoperability offered by the protocol. Moreover, the Interlaken IP core can be implemented in an SoC faster than any similar protocol, because it's simpler and SerDes-agnostic, allowing the chip maker to deliver a cost-optimized SoC with better time-to-market.

Open-Silicon’s 8th generation Interlaken IP is available today. For more information, please visit:
www.open-silicon.com/open-silicon-ips/interlaken-controller-ip/
or the Interlaken Alliance web site.

By Eric Esteve from IPnest


Attending DAC in Austin for Free

Attending DAC in Austin for Free
by Daniel Payne on 04-23-2017 at 7:00 am

I've been attending DAC since the late 1980s and can tell you that it's an annual highlight for me and anyone else interested in the EDA, IP and semiconductor industries. Where else can you see most of the big and little vendors of EDA software, semiconductor IP and foundries in one place? I recently blogged about the DAC keynote speakers, and then there's the rich experience of the pavilion presentations. So you'd like to go to DAC, but then the money issue comes up. Is it really worth all of that expense?

How about free attendance to DAC for all of these events:

  • Four Keynotes
  • 175 Exhibits
  • World of IoT Exhibit
    • IP pavilion
    • Maker’s market
  • Pavilion presentations
    • SKY Talks
    • Fireside CEO chats
    • Three teardowns
    • Industry panel discussions
  • Networking
  • Evening receptions

How can this be offered to us for free? Well, the people at ClioSoft really want you to attend, so they are sponsoring your entrance as part of the 9th annual I LOVE DAC campaign.

Well, what are you waiting for? Come join me and other SemiWiki bloggers at DAC in Austin from June 18-22. I'd love to meet you and hear your story; who knows, maybe you'll end up in one of my DAC blogs.

The only part of DAC that you would be missing is the technical proceedings.

Free Registration
To take advantage of this free deal you must register online before May 25, 2017.

About DAC

The Design Automation Conference (DAC) is recognized as the premier conference for design and automation of electronic systems. DAC offers outstanding training, education, exhibits and superb networking opportunities for designers, researchers, tool developers and vendors.

Attendees come from a diverse worldwide community of more than 1,000 organizations each year, represented by system designers and architects, logic and circuit designers, validation engineers, CAD managers, senior managers and executives, and researchers and academicians from leading universities.

Close to 300 technical presentations and sessions, selected by a committee of electronic design experts, offer information on recent developments and trends, management practices, and new products, methodologies and technologies.

A highlight of DAC is its exhibition and suite area with approximately 200 of the leading and emerging companies in:

  • Electronic Design Automation (EDA)
  • Intellectual Property (IP)
  • Embedded Systems and Software
  • Internet of Things (IoT)
  • Design Services

The conference is sponsored by the Association for Computing Machinery (ACM), the Electronic Design Automation Consortium (EDA Consortium), and the Institute of Electrical and Electronics Engineers (IEEE), and is supported by ACM’s Special Interest Group on Design Automation (SIGDA).

Also Read

ClioSoft Crushes it in 2016!

CEO Interview: Srinath Anantharaman of ClioSoft

Qorvo Uses ClioSoft to Bring Design Data Management to RF Design


How Far Has Design Automation Brought Us?

How Far Has Design Automation Brought Us?
by Tom Simon on 04-21-2017 at 12:00 pm

It’s always a struggle explaining electronic design automation (EDA) to people who ask me what field I am in. I have come up with simple and minimal descriptions – such as “software used for designing semiconductors.” This, of course, does little to provide any useful understanding to people who are not familiar with the field.

Sometimes I use the analogy that it is like Microsoft Word, but for chips: chip designers need to capture the design in a program. It works nicely because Word also comes with grammar and spell checking, somewhat akin to simulation and physical verification. However, vast worlds separate Word from the frequently arcane complexity of EDA.

I've been in the field since 1982, and have seen it develop and evolve in amazing ways. So many elements of our lives rest on the accomplishments of the devices designed using EDA software. Since 1982 the complexity of semiconductor chips has grown from thousands of transistors to billions today. This scaling would not have happened without countless brilliant people working continuously.

The depth and complexity of each domain and sub-field within the scope of EDA is hard to grasp. People working at one end of the design spectrum rarely understand the other end deeply. As a technology writer and analyst, I often must pull from a wide range of information about EDA technology. I was pleasantly surprised when I heard from Grant Martin, an old co-worker from my time at Cadence. He asked me to look over the latest edition of the Electronic Design Automation Handbook for IC System Design, Verification, and Testing, published by CRC Press. As he had warned me, this two-volume set is a weighty tome. Yet it does an impressive job of covering the field both broadly and deeply.

It was originally published 10 years ago, in 2006. Grant was one of the editors who marshalled the major update for 2016. There are over 40 technical contributors, who have written detailed technical articles on just about every corner of the chip design process. The first volume focuses on front-end design, such as language-based design, architecture specification, and higher levels of abstraction. Indeed, many of the updates to the handbook address changes in system specification and high-level verification that have occurred over the last 10 years. The second, even more substantial, volume deals with everything from synthesis and schematic capture to lithography.

I decided to read up in the second volume on one of the topics that I had recently written about. Before I write an article I usually do background research to make sure that the technical points are properly covered. It’s pretty clear that had I referred to the handbook, it would have been much easier to pull together the detailed background information to help write a more informed piece. The content is well written and goes down to bedrock when it comes to the underlying theory and principles. As such it would be a very useful source of information for someone who wants to gain greater knowledge of the topics adjacent to their expertise.

I know we live in an age where books are being supplanted by online information. However, digging into a topic online often results in scattershot information. This handbook has even and thorough information. It is likely to remain close to my keyboard as a resource for future articles.


Machine Learning and EDA!

Machine Learning and EDA!
by Daniel Nenni on 04-21-2017 at 7:00 am

Semiconductor design is littered with complex, data-driven challenges where the cost of error is high. Solido’s new ML (machine learning) Labs, based on Solido’s ML technologies developed over the last 12 years, allows semiconductor companies to collaboratively work with Solido in developing new ML-based EDA products.

Data acquisition is expensive, and brute-force methods are time and resource intensive. Large amounts of data require a high level of expertise to turn into valuable insights, and many EDA teams don't have the expertise or resources to quickly and successfully parse this overwhelming amount of data, which can also be hard to visualize and interpret. Additionally, solutions need to integrate seamlessly into current design flows. Overlooking any one of these elements can lead to poor designs, limited scalability, delays, or worse.

Solido has developed proven machine learning technologies for engineering applications. Engineering challenges are unique in that users are making expensive decisions where the cost of errors is high, so results from ML technologies must not be estimations but production-accurate and verifiable. These technologies, developed over the last 12 years, form the basis of Solido Variation Designer, whose adaptive, self-verifying models detect and correct errors automatically, producing verifiable results that users can trust. Solido's ML technologies scale to 100K+ input variables, parallelize across large clusters, and capture high-order interactions, non-linearities, and discontinuities (e.g., bi-modal, n-modal, binary, n-ary).
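Solido's algorithms are proprietary, but the "adaptive, self-verifying" idea can be sketched in miniature: fit a cheap surrogate model to a handful of expensive simulations, verify its predictions against real evaluations, and refine only where the verified error exceeds the accuracy target. Everything below (the function names, the linear-interpolation surrogate) is an illustrative assumption, not Solido's implementation:

```python
import bisect

def adaptive_surrogate(f, lo, hi, tol, max_evals=50):
    """Sketch of an adaptive, self-verifying surrogate model.

    Fit a cheap model (here, linear interpolation) to a few expensive
    evaluations of f, verify the model at the midpoint of each interval
    with a real evaluation, and refine wherever the verified error
    exceeds tol.  Returns the sample points the model ended up needing.
    """
    xs = [lo, hi]
    ys = [f(lo), f(hi)]
    evals = 2
    while evals < max_evals:
        worst_err, worst_mid = 0.0, None
        for i in range(len(xs) - 1):
            mid = (xs[i] + xs[i + 1]) / 2
            pred = (ys[i] + ys[i + 1]) / 2   # the model's cheap prediction
            err = abs(pred - f(mid))         # checked against a real run
            evals += 1
            if err > worst_err:
                worst_err, worst_mid = err, mid
        if worst_err <= tol:                 # model verified everywhere
            return xs
        j = bisect.bisect(xs, worst_mid)     # refine the worst interval
        xs.insert(j, worst_mid)
        ys.insert(j, f(worst_mid))
        evals += 1
    return xs

# A linear f verifies immediately; a curved f forces refinement:
print(adaptive_surrogate(lambda x: 2 * x + 1, 0.0, 4.0, tol=0.1))  # [0.0, 4.0]
pts = adaptive_surrogate(lambda x: x * x, 0.0, 4.0, tol=0.1)
print(len(pts))   # more than 2: interior samples were added
```

The production version of this idea works in spaces of thousands of variables rather than one, but the loop is the same: model, verify, refine where verification fails.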

Overall, Solido's ML technologies create large speedups, accuracy boosts, increases in coverage, and reductions in computing resources and license usage. All of this results in faster time-to-market, improved designs and reduced engineering costs.

Solido is introducing Machine Learning (ML) Labs to make its machine learning technologies more accessible for solving an expanded range of data-intensive problems, making it easier to apply its ML expertise and technology to EDA's biggest challenges.

Here’s how ML Labs works:

  • Bring your EDA design challenges to Solido
  • Solido’s experts will work with you and your designers on how these challenges could be solved with either their existing ML technologies, or if new ML technologies are required, using Solido’s team of ML experts
  • Solido will work with you as a lead customer to bring the technology to a production EDA software product

Solido has the industry’s top EDA and ML experts, who develop innovative ML solutions, effective rapid prototypes, and conclusive proof-of-concepts. Their product integration team will make the solution work with your tools, in your environment. Solido already has experience in integrating new technologies with top EDA tools, which they can leverage to accelerate time-to-solution and make it work in any design flow. Their usability experts make their solutions easy to learn and use for your designers, providing support throughout the deployment and production use. With ML Labs, Solido’s high-quality team will be with you at every step along the way, to make it “just work” in production.

The first two products to come out of Solido ML Labs are ML Characterization Suite’s Predictor and Statistical Characterizer. Predictor uses machine learning to model the full library space using data from existing characterized library models. This reduces library characterization time by 30-70%, while saving on characterization licenses, simulation licenses, CPUs, disk, and time. Statistical Characterizer generates statistical timing data >1000x faster than brute force while maintaining Monte Carlo accuracy. It does this by adaptively selecting simulations to meet accuracy requirements while minimizing runtime for all cells, corners, arcs, and slew-load combinations.
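The "adaptively selecting simulations" idea behind Statistical Characterizer can be illustrated (the actual per-cell/arc/slew-load selection logic is proprietary) by a Monte Carlo loop that keeps sampling only until the standard error of its estimate meets an accuracy target, rather than running a fixed brute-force sample count. The simulate callback and delay distribution below are hypothetical:

```python
import random
import statistics

def adaptive_monte_carlo(simulate, target_stderr, batch=20, max_runs=10_000):
    """Draw simulation samples in batches until the standard error of
    the mean falls below target_stderr, then stop.

    Illustrative sketch of accuracy-driven adaptive sampling, not
    Solido's algorithm.
    """
    samples = []
    while len(samples) < max_runs:
        samples.extend(simulate() for _ in range(batch))
        stderr = statistics.stdev(samples) / len(samples) ** 0.5
        if stderr <= target_stderr:
            break
    return statistics.mean(samples), len(samples)

# Hypothetical cell delay: 100 ps nominal with 5 ps of random variation.
random.seed(0)
mean_ps, runs = adaptive_monte_carlo(lambda: random.gauss(100, 5),
                                     target_stderr=0.5)
print(round(mean_ps), runs)   # roughly 100 ps, in ~100 runs rather than a fixed brute-force count
```

Tight distributions converge in a handful of batches while noisy ones automatically get more simulations, which is where the large speedups over fixed-count brute force come from.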

You can find more information about ML Labs at http://www.solidodesign.com/ml-labs or by contacting Solido at mllabs@solidodesign.com.

About Solido Design Automation
Solido Design Automation Inc. is a leading provider of variation-aware design software for high-yield, high-performance IP and systems-on-chip (SoCs). Solido plays an essential role in de-risking the variation impacts associated with the move to advanced and low-power processes, providing design teams with improved power, performance, area and yield for memory, standard cell, analog/RF, and custom digital design. Solido's efficient software solutions address the exponentially increasing analysis required without compromising time-to-market. The privately held company is venture-capital funded and has offices in the USA, Canada, Asia and Europe. For further information, visit www.solidodesign.com or call 306-382-4100.