3D Product Design Collaboration in MCAD and ECAD Platforms
by Tom Dillinger on 04-25-2017 at 12:00 pm

Consumer electronics demand aggressive mechanical enclosure design — product volume, weight, shape, and connector access are all critical design optimization criteria. Mechanical CAD (MCAD) software platforms are used by product engineers to develop the enclosure definition — the integration of the PCB design (or, potentially, a rigid-flex assembly) into the MCAD model enables the engineer to verify the mating of the enclosure and electronics, and submit the model to thermal, structural, and EMC/EMI analysis.

Traditionally, the (final) PCB definition was exported from the ECAD design platform using the .idf representation, short for “intermediate data format”. An (initial) .idf description would be exported from the MCAD toolset to reflect the starting PCB topology, with connector placements, mounting holes, keep-out areas, etc.

Yet, the .idf format was not conceived to support the requirements of current product designs, where iterative collaboration between MCAD and ECAD environments is required. To address the needs of MCAD and PCB designers, an industry consortium pursued the definition of a new standard, commonly known as .idx (named after the file extension used, short for “incremental design exchange”). Specifically, the .idx format supports the following key features:

  • all design objects are assigned an identifier

Electrical components, holes, keep-outs, mechanical components, etc. are all given a unique designator, which enables the main .idx characteristic, listed next.

  • incremental data exchange

IDX enables MCAD/ECAD systems to optimize the amount of data exchanged during design iterations, and track the change history.

  • data is represented using XML schemas

An .idx file is an XML document, which is readily extendible as future requirements arise.

  • rich geometry descriptions are supported

IDX uses the definition of “curvesets”, which are assigned a vertical extent to expand the 2D description into 3D. Objects are described using these curvesets; inverted shapes represent a void in an object (a short sketch of this idea follows this feature list).

  • roles

A “role” can be associated with any item (i.e., a collection of objects), which assigns rights and responsibilities for item updates to specific team members. For example, one ME may own the board shape, while another owns connector and mating hole locations; a PCB engineer may own component locations. The .idx format also supports request/accept/acknowledge handshaking for proposed updates, before changes are applied from one design domain to another.

  • properties

Components can be assigned property values, characteristics of specific interest to both MCAD and ECAD analysis (e.g., power dissipation, component mass, physical clearances around the component).
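To make the identifier, XML and curveset concepts above concrete, here is a minimal sketch of reading a hypothetical IDX-style fragment with Python’s standard library. The tag and attribute names (Item, CurveSet, zStart and so on) are invented for illustration; the actual IDX schema defines its own vocabulary.

```python
# A hedged sketch of reading an IDX-style XML fragment.
# Tag and attribute names are invented for illustration --
# the real IDX schema defines its own vocabulary.
import xml.etree.ElementTree as ET

FRAGMENT = """
<Item id="mounting_hole_17" ownerRole="mech">
  <CurveSet zStart="0.0" zEnd="1.6" inverted="true">
    <Circle cx="10.0" cy="25.0" r="1.5"/>
  </CurveSet>
</Item>
"""

item = ET.fromstring(FRAGMENT)
cs = item.find("CurveSet")
# The vertical extent (zStart..zEnd) extrudes the 2D outline into 3D;
# inverted="true" marks the shape as a void -- here, a hole through a
# 1.6 mm board. The unique id is what makes incremental updates and
# change-history tracking possible.
height = float(cs.get("zEnd")) - float(cs.get("zStart"))
is_void = cs.get("inverted") == "true"
print(f"{item.get('id')}: {height:.1f} mm extrusion, void={is_void}")
```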

The IDX representation is used by MCAD and ECAD toolsets to exchange incremental updates, after a “baseline” .idx exchange of the initial product definition. A general workflow is depicted in the figure below.

The identifiers in the baseline description then enable exchange of updates to the design — e.g., addition/deletion/re-positioning of components, changes to board shape, relocating a connector or mounting hole, etc.
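The request/accept/acknowledge handshaking and role-based ownership described above can be pictured as a small state machine. The sketch below is a hypothetical model of that flow with invented state, message and role names; it says nothing about the actual IDX wire format.

```python
# Toy model of role-gated propose/respond/acknowledge handshaking.
# States and names are illustrative only.
from enum import Enum, auto

class ProposalState(Enum):
    PROPOSED = auto()      # change exported by the originating domain
    ACCEPTED = auto()      # owning role agrees; change may be applied
    REJECTED = auto()      # owning role declines
    ACKNOWLEDGED = auto()  # originator has seen the response

class Proposal:
    def __init__(self, object_id, owner_role):
        self.object_id = object_id
        self.owner_role = owner_role   # only this role may respond
        self.state = ProposalState.PROPOSED

    def respond(self, responder_role, accept):
        if responder_role != self.owner_role:
            raise PermissionError(
                f"{responder_role} does not own {self.object_id}")
        self.state = (ProposalState.ACCEPTED if accept
                      else ProposalState.REJECTED)

    def acknowledge(self):
        # The change is applied (or dropped) only after this final step.
        self.state = ProposalState.ACKNOWLEDGED

# An ECAD client proposes moving a connector owned by a mechanical engineer:
p = Proposal("conn_J1_position", owner_role="mech_engineer")
p.respond("mech_engineer", accept=True)
p.acknowledge()
print(p.state)   # ProposalState.ACKNOWLEDGED
```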

At the recent PCB Forum in Santa Clara held by Mentor (a Siemens business), the Xpedition team described how .idx has enabled a highly productive MCAD/ECAD collaboration design methodology. (Mentor and PTC were the original drivers of this new standard for mechanical/electrical data modeling and information exchange.) A key feature added to the Xpedition platform has provided the PCB designer with a 3D model visualization of the MCAD data.

The figures below illustrate the concurrent 2D/3D views in Xpedition, which can include visualization of the Cu data, as well. The incremental characteristics of the .idx file are leveraged in Xpedition — proposed changes imported from the MCAD platform are highlighted, for the PCB engineer to quickly pinpoint areas to review. (Note that Cu data would be exported from Xpedition in the .idx description and merged into the MCAD model, for both physical checks and EMC/EMI analysis.)

A common property applied to an .idx object is the “lock/unlock” status — a role member can assign a lock property to prevent updates. The Xpedition Data Manager tracks the .idx history, from the baseline through subsequent MCAD/ECAD proposal/response exchange transactions. The figure below illustrates the data management and change notification features of the MCAD Collaborator utility in Xpedition.

Mentor provides a rich library of existing 3D component models for PCB visualization — the figure below illustrates how the PCB designer’s view in Xpedition compares to the final manufactured board.

The Mentor Xpedition team also provided a demo of the rigid-flex support within the collaborative design environment. The figure below illustrates the concurrent 2D/3D views of a rigid-flex assembly — three PCBs with multiple (physically overlapping) flex cables.

The complexity of current products requires a close interaction between mechanical and electrical teams. The transition from the .idf to the .idx data exchange format between MCAD and ECAD tools offers a significant benefit to the design methodologies in each domain. Specifically, a PCB designer can make a broad set of design optimizations and quickly export the updates to the MCAD engineer for review. The ECAD platform needs to support .idx exchange — a key feature is 2D/3D visualization for the PCB designer. Mentor’s Xpedition toolset is focused on enabling this collaborative MCAD/ECAD flow.

For more information on upcoming Mentor PCB Forum dates, please follow this link.

For information on the 3D visualization support in Xpedition, please follow this link.

For general information on ECAD/MCAD Collaboration in the Xpedition platform, please follow this link.

-chipguy


The CDNLive Keynotes
by Bernard Murphy on 04-25-2017 at 7:00 am

I’m developing a taste for user-group meetings. In my (fairly) recently assumed role as a member of the media, I’m only allowed into the keynotes, but from what I have seen, vendors work hard to make these fresh and compelling each year through big-bang product updates and industry/academic leaders talking about their work in bleeding-edge system development. Cadence continued the theme this year with a packed 90 minutes of talks to an equally packed room at the Santa Clara Convention Center.


Lip-Bu Tan opened with “Enabling the Intelligent Connected World.” There’s a lot packed into that title. EDA/IP is enabling rather than creating that world, but the world wouldn’t be possible without what EDA and IP make possible. It’s intelligent because AI and machine learning are exploding, and it’s connected because focus has dramatically shifted from point-system compute to clouds, gateways and edges.

He sees massive potential for design and EDA, particularly around connected cars, the industrial IoT (IIoT) and cloud datacenters. As both the president and CEO of Cadence and head of Walden International (a VC company), he sees several important trends in these areas. The deep-learning revolution—moving from training machines to learn, to teaching them to infer (using that learning in the field)—is creating strong demand to differentiate through specialized engines (witness the Google Tensor Processing Unit). Roles between cloud, gateway and edge have shifted; where once we thought all the heavy lifting would be done in the cloud, now we are realizing that latency has become critical, making it important that intelligence, data filtering, analytics and security move as close to the edge as possible.

All of this creates new design and EDA challenges from sensors all the way up to the cloud. At the edge, more compute for all those latency-critical functions demands more performance without compromising power or thermal integrity. In the cloud, even more compute and data management is required for innovative (massively reprogrammable, reconfigurable) designs packed into small spaces. Power and thermal integrity are exceptionally important here, as they are (even more so) in automotive applications where critical electronics is expected to function reliably for 15 or more years.

Lip-Bu recapped several areas where Cadence has been investing and continues to invest, but this is a long blog, so I’ll just mention a couple. One notable characteristic is Lip-Bu’s move away from M&A towards predominantly organic growth. There are arguments for both, but I can vouch for the organic style building exceptional loyalty, strength and depth in the team, which seems to be paying off for Cadence. Also, Lip-Bu said that 35% of revenue in Cadence goes to R&D—an eye-opener for me. I remember the benchmark being around 20% to keep investors happy. I now understand why Cadence can pump out so many new products.

Next up was Kushagra Vaid, GM of Azure Hardware Infrastructure at Microsoft. Azure is a strong player in cloud; while Amazon dominates with AWS, notice the second and third bars above are for Azure – added together, Azure stands at nearly 50% of AWS usage. This is big business (AWS is the 6th largest business in the US by one estimate), set to get much bigger. Kushagra noted that there are about a billion servers in datacenters across the world, representing billions of dollars in infrastructure investment, and this is before the IoT has really scaled up. Microsoft is a strong contender in this game and is serious about grabbing a bigger share through differentiated capabilities.


The challenge is that these clouds need to service an immense range of demands, from effectively bare-metal access (you do everything), to infrastructure as a service (IaaS), platform as a service (PaaS) and software as a service (SaaS). They must provide a huge range of services from web apps to containers, data lakes, Hadoop, analytics, IoT services, cognitive services and on and on. He showed a dense slide (which unfortunately I can’t show here) listing a representative set, summing it up by saying that clouds had become the Noah’s Arks of service, hosting every imaginable species. Bit of a shock for those of us who thought clouds were mostly about virtualization. He also mentioned something that may make your head hurt—serverless compute (as if the cloud weren’t already virtual enough). This is a service to handle event-driven activity, especially for the IoT, where a conventional pay-as-you-go service may not be cost-effective.

While news of the death of Moore’s law may be premature, designers of these systems don’t care. They must accelerate performance (TOPS, TOPS/W, TOPS/W/$, whatever metric is relevant) far beyond silicon and von Neumann possibilities. This is what drives what he called a Cambrian explosion in purpose-built accelerators tailored to workloads. Massive and distributed data requires that compute move closer to data, while deep learning requires specialized hardware, especially in inference where low power and low latency are critical. Advances in techniques for search, the ever-moving security objective, and compression to speed data transfers, all demand specialized hardware.

The Azure hardware team response is interesting; they have built a server platform, under the Olympus project, based on CPUs, GPUs and an FPGA (per server), and have donated the architecture to the Open Compute Project (OCP). I have mentioned before that this is no science experiment. Kushagra notes that this is now the world’s largest FPGA-based distributed compute fabric. He also mentioned that innovations like this will almost certainly appear first in the cloud because of the scale. In a sense, the cloud has become the new technology driver.

Kushagra closed with some comments on machine-learning opportunities in EDA, mentioning routing-friendly power-distribution networks, static timing optimization, congestion improvement in physical designs and support for improving diagnostic accuracy in chip test. He’s also a big fan of cloud-based EDA services, stressing that scalability in the cloud, without needing to worry about provisioning infrastructure, enables faster experimentation, greater agility in exploring design options and faster time to delivery. Of course, there are still concerns about public clouds versus private clouds, but from what I hear, it’s becoming easier to have it both ways using restricted-access services with high security to handle peak demand (and see my note near the end on Pegasus in the cloud). All of this seems in line with cloud-based access directions Lip-Bu mentioned.


Last up was Anirudh, who was responsible for the big bang product news, and he, of course, delivered—this time for the digital implementation flow. First, he talked about massive parallelization and the flow between the Genus (synthesis), Innovus (implementation), Tempus (timing) and Voltus (power integrity) solutions. Cadence has already made good progress on big parallelization for most of the flow, but implementation has been hard to scale beyond around 8 CPUs. Next month, a new digital and signoff software release will be rolled out, which can scale up to multiple machines while also delivering a 1.5-2X speedup on a single machine, in some cases with improved PPA (this “more speed through parallelization, also faster on a single CPU” thing seems to be an Anirudh specialty).

Continuing his theme of fast and smart, he talked about ramping up intelligence in implementation. Here they have already demonstrated an ability for machine learning to drive an improvement of PPA through a 12% reduction in total negative slack. This capability is not yet released but is indicative of work being done in this area. Anirudh mentioned floorplanning, placement, CTS and routing as other areas that can benefit from machine-learning-based optimization.

Finally, the really big bang was his introduction of the Pegasus Verification System, the new and massively parallel full-flow physical verification solution. Daniel Payne wrote a detailed blog on this topic, so I won’t repeat what he had to say. The main point is that this is a ground-up re-design for massive and scalable cloud-based parallelism. If you want to keep costs down and run on one machine, you can run on one machine. If you’re pushing to tape out and iterating through multiple signoff and ECO runs, you can scale to a thousand machines or more and complete runs in hours rather than days. On that cost point, he cited one example of a run that used to take 40 hours, which now runs on AWS in a few hours, for $200 per run (Pegasus not included). I think that datapoint alone may change some business views on the pros and cons of running in public clouds.

A lot of information here. Sorry about the long blog, but I think you’ll agree it’s all good stuff. I just didn’t know how to squeeze it into 800 words.


NetSpeed Taking a Ride with Autonomous Automobiles
by Mitch Heins on 04-24-2017 at 12:00 pm

The push for autonomous automobiles continues at a rapid pace. Last week the Linley Group held a new conference in Santa Clara, CA focused on autonomous hardware. The program included presentations from GLOBALFOUNDRIES, Synopsys, NetSpeed Systems, Arteris, EEMBC, Cadence, CEVA, ARM and TriLumina covering ADAS and autonomous driving, deep learning, and processors for autonomous vehicles.

Being an ASIC guy for many years, I was intrigued by the sheer complexity of the ICs being presented to handle the tasks of autonomous driving. These ICs epitomize the true system-on-a-chip. What sets these ICs apart for me is that they are not merely pipelining or combining more of the same logic onto a die. They are very heterogeneous in nature, with many different IP cores being used, all of which have different interfaces, performance and latency characteristics. Added to this is the fact that many of these IPs must share common memory and interact with each other, which implies employing a sophisticated memory coherency strategy across the overall system architecture. Lastly, these ICs are going to be used to drive your car! That means they must be fault tolerant and able to work continuously without errors or deadlocks.

As a physical design guy, I know that if the logic is regular and repeated, the layout tasks and timing closure will be more or less straightforward. While these ICs have logic within IP blocks that is regular and repeated, the interconnection between these IPs is another challenge altogether. Because of the complexity of the devices and the different interfaces for each, many IC suppliers are now opting to use a Network-on-Chip (NoC) to handle their interconnect. So much for routing wide buses around the chip between the IPs (a nightmare if you’ve ever had to do it).

The question then is whether the cure is worse than the disease. Once you move to a NoC, it’s as if you introduced an entirely new IC within an IC, except this new design must be distributed around the full IC’s IP blocks to make the required inter-IP connections. Think of it as a distributed IC within an IC. This means an entirely new architecture must be designed that will literally manage the full IC. Enter NetSpeed Systems to the rescue!

Anush Mohandass of NetSpeed Systems gave a very good presentation on how they enable designers to design and implement these complex NoCs, including helping designers make the difficult trade-offs between power, performance, area and functional safety (FuSa). The logic that implements the NoC has many different tasks to perform: data translation from each IP’s interface to a common NoC format, efficient routing of data packets between IPs, error checking for fault tolerance, and load balancing with dynamic routing adjustments to comprehend changing data traffic patterns between IPs. All of this must be done while meeting user-specified quality of service (QoS) targets and avoiding deadlock situations. Additionally, the NoC needs to include on-the-fly security checking to ensure the IC is not being compromised by some agent trying to take control from the outside. The main attack surface for these types of ICs is the NoC, as the NoC controls all data going into and out of the rest of the system.

NetSpeed offers a design and optimization cockpit called NocStudio that employs a top-down approach using machine learning to optimize the IC’s NoC-based QoS, power, performance, area, latency and FuSa. NocStudio analyzes different approaches to the power, performance, area and FuSa trade-offs and then synthesizes a NoC that best meets the designers’ goals. Designers can weight and customize the trade-offs depending on the end application and markets that their SoC serves. This includes the ability to categorize data packet traffic in up to 16 different classes and allocate up to 64 virtual channels with dynamic priority to allow for dynamic QoS control.
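The headline QoS numbers above (16 traffic classes, up to 64 virtual channels with dynamic priority) can be pictured with a small configuration sketch. Everything below other than those two limits (the data model, class names and arbitration policy) is invented for illustration and implies nothing about NocStudio’s actual input format.

```python
# Illustrative sketch of traffic classes mapped onto virtual channels.
# The 16-class / 64-VC limits come from the article; everything else
# here is hypothetical.
from dataclasses import dataclass

MAX_CLASSES = 16
MAX_VCS = 64

@dataclass
class TrafficClass:
    name: str
    class_id: int          # 0..15
    virtual_channel: int   # 0..63
    priority: int          # higher = scheduled first; may change at runtime

classes = [
    TrafficClass("cpu_coherent", 0, 0,  priority=7),
    TrafficClass("dma_bulk",     1, 8,  priority=2),
    TrafficClass("isp_realtime", 2, 16, priority=6),
]

assert all(c.class_id < MAX_CLASSES and c.virtual_channel < MAX_VCS
           for c in classes)

def arbitrate(ready):
    """Pick the highest-priority class with a packet ready (dynamic QoS)."""
    return max(ready, key=lambda c: c.priority)

print(arbitrate(classes).name)   # -> cpu_coherent
```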

Functional safety is considered a first-class citizen of the SoC from the very beginning, rather than something tacked on at the end. NetSpeed’s NoC IP is certified ISO 26262 ASIL Level-D ready, but the software also gives designers the flexibility to divide the NoC into different ASIL-level partitions depending on the needs of their clients. NetSpeed’s machine learning algorithms can synthesize the different partitions to different ASIL levels per the designers’ request, as the tool also knows how to grade the circuit per the ISO 26262 standard. It does all of this while also ensuring that the NoC will be deadlock free, even when the design uses a mixture of IPs with coherent and non-coherent memory access requirements.
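Deadlock freedom of the kind promised above is classically argued by showing that the network’s channel dependency graph is acyclic (Dally and Seitz). The toy check below sketches only that textbook idea; NetSpeed’s actual analysis, which must also handle mixed coherent and non-coherent traffic, is proprietary.

```python
# Textbook sketch: a cyclic channel dependency graph means packets can
# form a circular wait, i.e., a potential deadlock.
def has_cycle(dependencies):
    """dependencies: dict mapping a channel to channels it may wait on."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {ch: WHITE for ch in dependencies}

    def visit(ch):
        color[ch] = GRAY
        for nxt in dependencies.get(ch, ()):
            if color.get(nxt, WHITE) == GRAY:
                return True           # back edge => circular wait
            if color.get(nxt, WHITE) == WHITE and visit(nxt):
                return True
        color[ch] = BLACK
        return False

    return any(color[ch] == WHITE and visit(ch) for ch in dependencies)

ok  = {"e0": ["n0"], "n0": []}               # no turn back onto itself
bad = {"a": ["b"], "b": ["c"], "c": ["a"]}   # circular wait
print(has_cycle(ok), has_cycle(bad))         # -> False True
```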

Once a NoC design is created, NocStudio generates synthesizable RTL, verification suites, and information used by physical design to handle placement of the NoC components to meet timing requirements and manage clock skews. It also produces the documentation required to meet the ISO 26262 standard, including a safety manual for the IC.

NocStudio is already used by the #1 and #2 IC suppliers for autonomous vehicles and hyperscale computing, as well as top vendors in artificial intelligence, virtual reality/augmented reality, and real-time security analytics. On April 5th, NetSpeed also announced a multi-year license agreement with Sunplus Technology for NetSpeed’s Orion on-chip network IP. Sunplus is a leading provider of multimedia IC solutions and automotive infotainment solutions and will use NetSpeed’s IP to accelerate the design and development of future generations of its automotive SoCs.

This is impressive technology and I believe we will be hearing more from NetSpeed in the future. NetSpeed Systems is a company to keep your eye on especially as the autonomous vehicle market takes off.

See also:
NetSpeed Systems web page
Sunplus Technology Licenses NetSpeed’s Orion IP


1.2 Terabit/s C2C Interface? Only with Interlaken!
by Eric Esteve on 04-24-2017 at 7:00 am

If you are familiar with high bandwidth networking applications, you probably know this chip-to-chip (C2C) interface protocol. The Interlaken architecture, fully flexible, configurable and scalable, is an elegant answer to the need for very high bandwidth C2C communication. Interlaken is elegant because the protocol defines the controller specification and can interface with various SerDes architectures, up to 56 Gbps SerDes rates with Forward Error Correction (FEC).

The Interlaken protocol was clearly defined to provide the lowest latency when interfacing two chips at very high speed. The definition is simple, allowing the best possible efficiency. If you compare the Interlaken specification with PCI Express or Ethernet, for example, it is much, much simpler, making the protocol easy to implement yet extremely powerful for connecting devices together.

Interlaken targets high bandwidth networking applications, such as routers, switches, framers/MACs, OTN switches, packet processors, traffic managers, look-aside processors/memories and data center applications. In any of these applications, the chip will integrate complex protocols based on high speed serial links supported by SerDes. Developing or buying a 56 Gbps or even a 28 Gbps SerDes is either a resource intensive task or an expensive solution. Because Interlaken has been defined to cope with any kind of SerDes, the chip maker can internally reuse the one-time investment made to implement the more complex protocols.

Open-Silicon, a founding member of the Interlaken Alliance formed in 2007, is launching the 8th generation of its Interlaken IP core, which supports up to 1.2 Tbps bandwidth. This high-speed chip-to-chip interface IP features an architecture that is fully flexible, configurable and scalable.

The flexibility of the Interlaken IP core translates into multiple aggregate bandwidth options. As an example, a single Interlaken IP instance can be configured in-system to support different Interlaken interfaces: 1x1.2Tbps, 2x600Gbps or 4x300Gbps. On-chip implementation can be based on up to 48 SerDes lanes when using 28 Gbps SerDes, or on 24 lanes when using 56 Gbps SerDes.
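As a back-of-the-envelope check on those lane counts, the sketch below multiplies lanes by lane rate and by Interlaken’s 64b/67b framing efficiency. Remaining protocol overhead (burst control words, metaframes) depends on configuration, so treat the figures as rough illustrations rather than datasheet numbers.

```python
# Rough aggregate-bandwidth arithmetic for the configurations above.
def aggregate_bw_gbps(lanes, lane_rate_gbps, encoding_eff=64/67):
    # 64b/67b is Interlaken's framing layer; further protocol overhead
    # is configuration-dependent and ignored here.
    return lanes * lane_rate_gbps * encoding_eff

for lanes, rate in [(48, 28.0), (24, 56.0)]:
    print(f"{lanes} lanes @ {rate} Gbps -> "
          f"~{aggregate_bw_gbps(lanes, rate):.0f} Gbps before burst overhead")
# Both configurations land near the advertised 1.2 Tbps.
```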

The core is also highly configurable and scalable, as illustrated by this feature list:

  • Support for 256 logical channels
  • 8-bit channel extension for up to 64K channels
  • Independent SerDes lane enable/disable
  • Support for SerDes speeds from 3.125 Gbps to 56 Gbps
  • Configurable number of lanes, from 1 to 48
  • Flexible user interface options:
    • 128b: 1x128b, 2x128b, 4x128b, or 8x128b
    • 256b: 1x256b, 2x256b, 4x256b, or 8x256b
  • Programmable BURSTMAX from 64 to 512 bytes (see the sketch after this list)
  • Programmable BURSTMIN from 32 to 256 bytes
  • Simultaneous in-band and out-of-band flow control
  • Programmable calendar
  • Built-in error detection and interrupt structures
  • Configurable error injection mechanisms for testability
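To illustrate how the programmable BURSTMAX and BURSTMIN parameters in the list above shape traffic, here is a deliberately simplified sketch: a packet is cut into bursts no longer than BURSTMAX, and a short tail burst is padded up to BURSTMIN. The real Interlaken segmentation rules, with burst control words and flow-control insertion, are richer than this.

```python
# Simplified burst segmentation between programmed min/max sizes.
def segment(packet_len, burst_max=256, burst_min=32):
    bursts = []
    while packet_len > 0:
        chunk = min(packet_len, burst_max)   # no burst exceeds BURSTMAX
        packet_len -= chunk
        bursts.append(max(chunk, burst_min)) # pad a short tail to BURSTMIN
    return bursts

print(segment(600))   # -> [256, 256, 88]
print(segment(10))    # -> [32] : padded up to the programmed minimum
```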

According to Michael Howard, senior research director and advisor, carrier networks at IHS Markit, “with the unstoppable growth of high-bandwidth networking applications together with the desire to further technological advancements on a much quicker cadence, the demand for industry consortium standards that ensure interoperability grows sharply. It is for these reasons that solutions such as this chip-to-chip Interlaken IP core, will likely have high adoption into next generation routers and switches, packet processors, and high-end networking and data processing applications.”

“The demand for performance and bandwidth for applications in networking is growing exponentially,” said Vasan Karighattam, Vice President of Engineering for Open-Silicon. “With nearly a decade of experience building the Interlaken core, Open-Silicon has continued to provide its customers with leading-edge custom silicon and IP solutions that power next generation networking products. Open-Silicon remains committed to the Interlaken protocol and providing the highest-performance, most scalable Interlaken IP.”

The success of chip-to-chip Interlaken IP core adoption rests on the exponential growth of bandwidth demand (a 25% CAGR for 2015-2020, reaching a volume of 80 exabytes per month in 2017) and on the high level of interoperability offered by the protocol. Moreover, the Interlaken IP core can be implemented in a SoC faster than any similar protocol, because it is simpler and SerDes agnostic, allowing the chip maker to deliver a cost optimized SoC with a better time-to-market.

Open-Silicon’s 8th generation Interlaken IP is available today. For more information, please visit:
www.open-silicon.com/open-silicon-ips/interlaken-controller-ip/
or the Interlaken Alliance web site.

By Eric Esteve from IPnest


Attending DAC in Austin for Free
by Daniel Payne on 04-23-2017 at 7:00 am

I’ve been attending DAC since the late 1980s and can tell you that it’s an annual highlight for me and anyone else interested in the EDA, IP and semiconductor industries. Where else can you see most of the big and little vendors of EDA software, semiconductor IP and foundries in one place? I recently blogged about the DAC keynote speakers, and then there’s the rich experience of the pavilion presentations. So you’d like to go to DAC, but then the money issue comes up. Is it really worth all of that expense?

How about free attendance to DAC for all of these events:

  • Four Keynotes
  • 175 Exhibits
  • World of IoT Exhibit
    • IP pavilion
    • Maker’s market
  • Pavilion presentations
    • SKY Talks
    • Fireside CEO chats
    • Three teardowns
    • Industry panel discussions
  • Networking
  • Evening receptions

How can this be offered to us for free? Well, the people at ClioSoft really want you to attend so they are sponsoring your entrance for free as part of the 9th annual campaign – I LOVE DAC.

Well, what are you waiting for? Come join me and other SemiWiki bloggers and attend DAC in Austin from June 18-22. I’d love to meet you and hear your story; who knows, maybe you’ll end up in one of my DAC blogs.

The only part that you would be missing at DAC is the technical proceedings.

Free Registration
To follow through with this free deal you must register online before May 25, 2017.

About DAC

The Design Automation Conference (DAC) is recognized as the premier conference for design and automation of electronic systems. DAC offers outstanding training, education, exhibits and superb networking opportunities for designers, researchers, tool developers and vendors.

Attendees come from a diverse worldwide community of more than 1,000 organizations each year, represented by system designers and architects, logic and circuit designers, validation engineers, CAD managers, senior managers and executives, and researchers and academicians from leading universities.

Close to 300 technical presentations and sessions, selected by a committee of electronic design experts, offer information on recent developments and trends, management practices and new products, methodologies and technologies.

A highlight of DAC is its exhibition and suite area with approximately 200 of the leading and emerging companies in:

  • Electronic Design Automation (EDA)
  • Intellectual Property (IP)
  • Embedded Systems and Software
  • Internet of Things (IoT)
  • Design Services

The conference is sponsored by the Association for Computing Machinery (ACM), the Electronic Design Automation Consortium (EDA Consortium), and the Institute of Electrical and Electronics Engineers (IEEE), and is supported by ACM’s Special Interest Group on Design Automation (SIGDA).

Also Read

ClioSoft Crushes it in 2016!

CEO Interview: Srinath Anantharaman of ClioSoft

Qorvo Uses ClioSoft to Bring Design Data Management to RF Design


How Far Has Design Automation Brought Us?
by Tom Simon on 04-21-2017 at 12:00 pm

It’s always a struggle explaining electronic design automation (EDA) to people who ask me what field I am in. I have come up with simple and minimal descriptions – such as “software used for designing semiconductors.” This, of course, does little to provide any useful understanding to people who are not familiar with the field.

Sometimes I use the analogy of saying it is like Microsoft Word but for chips – chip designers need to capture the design in a program. It works nicely because Word also comes with grammar and spell checking – somewhat akin to simulation and physical verification. However, vast worlds separate Word from the frequently arcane complexity of EDA.

I’ve been in the field since 1982, and have seen it develop and evolve in amazing and incredible ways. So many elements of our lives rest on the accomplishments of the devices designed using EDA software. Since 1982 the complexity of semiconductor chips has grown from thousands of transistors to billions today. This scaling would not have happened without countless brilliant people working continuously.

The depth and complexity of each domain and sub-field within the scope of EDA is hard to grasp. People working at one end of the design spectrum rarely understand the other end deeply. As a technology writer and analyst, I often must pull from a wide range of information about EDA technology. I was pleasantly surprised when I heard from Grant Martin, an old co-worker from my time at Cadence. He asked me to look over the latest edition of the Electronic Design Automation Handbook for IC System Design, Verification, and Testing, published by CRC Press. As he had warned me, this two-volume set is a weighty tome. Yet it does an impressive job of covering the field both broadly and deeply.

It was originally published 10 years ago, in 2006. Grant was one of the editors who marshalled the major update for 2016. There are over 40 technical contributors, who have written detailed technical articles on just about every corner of the chip design process. The first volume focuses on front-end design topics such as language-based design, architecture specification, and higher levels of abstraction. Indeed, many of the updates to the handbook address changes in system specification and high-level verification that have occurred over the last 10 years. The second, even more substantial, volume deals with everything from synthesis and schematic capture to lithography.

I decided to read up in the second volume on one of the topics that I had recently written about. Before I write an article I usually do background research to make sure that the technical points are properly covered. It’s pretty clear that had I referred to the handbook, it would have been much easier to pull together the detailed background information to help write a more informed piece. The content is well written and goes down to bedrock when it comes to the underlying theory and principles. As such it would be a very useful source of information for someone who wants to gain greater knowledge of the topics adjacent to their expertise.

I know we live in an age where books are being supplanted by online information. However, digging into a topic online often results in scattershot information. This handbook has even and thorough information. It is likely to remain close to my keyboard as a resource for future articles.


Machine Learning and EDA!
by Daniel Nenni on 04-21-2017 at 7:00 am

Semiconductor design is littered with complex, data-driven challenges where the cost of error is high. Solido’s new ML (machine learning) Labs, based on Solido’s ML technologies developed over the last 12 years, allows semiconductor companies to collaboratively work with Solido in developing new ML-based EDA products.

Data acquisition is expensive; brute force methods are time and resource intensive. Large amounts of data require a high level of expertise to extract valuable insights. Many EDA teams don’t have the expertise or resources to quickly and successfully parse this overwhelming amount of data, which can also be hard to visualize and interpret. Additionally, solutions need to integrate seamlessly into current design flows. Overlooking any one of these elements can lead to poor designs, limited scalability, delays, or worse.

Solido has developed proven machine learning technologies for engineering applications. Engineering challenges are unique in that users are making expensive decisions where the cost of error is high, so results from ML technologies must not be estimations but production-accurate and verifiable. These technologies form the basis of Solido Variation Designer, whose ML builds adaptive, self-verifying models that detect and correct errors automatically, producing results users can trust. Solido’s ML technologies scale to 100K+ input variables, parallelize across large clusters, and capture high-order interactions, non-linearities, and discontinuities (e.g., bi-modal, n-modal, binary, n-ary).
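The “adaptive, self-verifying” behavior described above can be pictured as a loop that checks its own predictions against fresh simulations and retrains wherever it disagrees. The sketch below is generic active learning with invented stand-in functions; it is not Solido’s algorithm.

```python
# Generic self-verifying model loop: predict, spot-check against the
# real (expensive) evaluator, and retrain on detected errors.
import random

def simulate(x):                 # stand-in for an expensive SPICE run
    return x * x + random.gauss(0, 0.01)

def fit(samples):                # stand-in for real model building
    # crude nearest-neighbor "model", purely for illustration
    return lambda x: min(samples, key=lambda s: abs(s[0] - x))[1]

samples = [(x, simulate(x)) for x in (0.0, 0.5, 1.0)]
model = fit(samples)

for _ in range(20):              # verify-and-correct loop
    x = random.uniform(0, 1)
    pred, truth = model(x), simulate(x)
    if abs(pred - truth) > 0.05:         # model detected its own error...
        samples.append((x, truth))       # ...so add the real data point
        model = fit(samples)             # and rebuild the model
```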

Overall, Solido’s ML technologies create large speedups, accuracy boosts, increases in coverage, and reductions in computing resources and license usage. All of this results in faster time-to-market, improved designs and reduced engineering costs.

Solido is introducing Machine Learning (ML) Labs to make their machine learning technologies more accessible for solving an expanded range of data-intensive problems, making it easier to apply their ML expertise and technology to EDA’s biggest challenges.

Here’s how ML Labs works:

  • Bring your EDA design challenges to Solido
  • Solido’s experts will work with you and your designers on how these challenges could be solved with either their existing ML technologies, or if new ML technologies are required, using Solido’s team of ML experts
  • Solido will work with you as a lead customer to bring the technology to a production EDA software product

Solido has the industry’s top EDA and ML experts, who develop innovative ML solutions, effective rapid prototypes, and conclusive proof-of-concepts. Their product integration team will make the solution work with your tools, in your environment. Solido already has experience in integrating new technologies with top EDA tools, which they can leverage to accelerate time-to-solution and make it work in any design flow. Their usability experts make their solutions easy to learn and use for your designers, providing support throughout the deployment and production use. With ML Labs, Solido’s high-quality team will be with you at every step along the way, to make it “just work” in production.

The first two products to come out of Solido ML Labs are ML Characterization Suite’s Predictor and Statistical Characterizer. Predictor uses machine learning to model the full library space using data from existing characterized library models. This reduces library characterization time by 30-70%, while saving on characterization licenses, simulation licenses, CPUs, disk, and time. Statistical Characterizer generates statistical timing data >1000x faster than brute force while maintaining Monte Carlo accuracy. It does this by adaptively selecting simulations to meet accuracy requirements while minimizing runtime for all cells, corners, arcs, and slew-load combinations.
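The phrase “adaptively selecting simulations to meet accuracy requirements” suggests sequential sampling: keep simulating until the statistical confidence on the estimate is tight enough, rather than running a fixed brute-force count. Below is a generic sketch of that idea with invented numbers; Solido’s actual method is not public.

```python
# Generic adaptive Monte Carlo: stop when the confidence interval on
# the estimated mean is tight enough, instead of a fixed sample count.
import random
import statistics

def mc_delay():                        # stand-in for one characterization run
    return random.gauss(100e-12, 5e-12)

def estimate(target_rel_ci=0.01, z=1.96, min_n=30):
    samples = [mc_delay() for _ in range(min_n)]
    while True:
        mean = statistics.mean(samples)
        ci = z * statistics.stdev(samples) / len(samples) ** 0.5
        if ci / mean <= target_rel_ci:   # accuracy target met -> stop
            return mean, len(samples)
        samples.append(mc_delay())       # otherwise buy one more simulation

mean, n = estimate()
print(f"mean delay {mean * 1e12:.1f} ps after {n} adaptive samples")
```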

You can find more information about ML Labs at http://www.solidodesign.com/ml-labs or by contacting Solido at mllabs@solidodesign.com.

About Solido Design Automation
Solido Design Automation Inc. is a leading provider of variation-aware design software for high yield and performance IP and systems-on-a-chip (SOCs). Solido plays an essential role in de-risking the variation impacts associated with the move to advanced and low-power processes, providing design teams improved power, performance, area and yield for memory, standard cell, analog/RF, and custom digital design. Solido’s efficient software solutions address the exponentially increasing analysis required without compromising time-to-market. The privately held company is venture capital funded and has offices in the USA, Canada, Asia and Europe. For further information, visit www.solidodesign.com or call 306-382-4100.


The 4C’s of PCB Design
by Tom Dillinger on 04-20-2017 at 12:00 pm

The diamond jewelry industry encourages customers to focus on the 4C’s — cut, clarity, color, and carats. At the recent PCB Forum conducted by Mentor (a Siemens business) in Santa Clara, I learned that current system design flows also require an emphasis on the 4C’s — collaboration, concurrency, consistency, and a cloud environment. These capabilities need to span schematic design, constraint management, and physical PCB design and layout.

The complexity of current products requires attention to a plethora of details, to address the many optimization criteria:

  • cost (area, layer stackups)
  • routability
  • minimization of high-frequency signal reflections and losses
  • (differential pair and bus) signal topology matching
  • manufacturability
  • system thermal/EMI/mechanical packaging constraints (Look for another article shortly from the PCB Forum on MCAD-ECAD collaboration.)

The tasks of schematic design and physical implementation to achieve the goals above are tightly interdependent.

Collaboration among Applications

The conventional waterfall method for PCB development proceeds in a sequential manner — i.e., schematics and constraints are tossed “over the wall” to the physical design engineers to complete the implementation. This process simply does not address the demands of current PCB projects — a platform that enables an interactive, iterative, incremental flow is needed to support schematic and physical implementation designers collaborating in real-time.

Proposed edits to components or constraints need to be communicated among the design team for review and approve/reject decisions. A robust notification system is required to indicate when an update in one application impacts other dependencies.

Mentor’s Xpedition platform utilizes a “traffic light” indicator in each application to highlight that a change has been made in another interdependent application — green is in sync, amber indicates an update in another application has been recorded. The Project Integrator pulldown provides detailed information on the specific change for review. Designers collaborate across applications in real-time.


The figure above illustrates three users working on the same project database — two in the constraint manager, and one in physical layout. An update in the constraint manager is shared concurrently with the other user, while the physical layout session is notified of the update.
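A toy model of that cross-application notification behavior might look like the following: every client starts green, and a commit by one application flips the others to amber until they review the change. This is purely illustrative and implies nothing about Xpedition’s real mechanism.

```python
# Hypothetical traffic-light notification sketch.
class ProjectBus:
    def __init__(self):
        self.clients = []

    def register(self, client):
        self.clients.append(client)

    def commit(self, source, change):
        for c in self.clients:
            if c is not source:
                c.status = "amber"       # out of sync until reviewed
                c.pending.append(change)

class Client:
    def __init__(self, name, bus):
        self.name, self.status, self.pending = name, "green", []
        bus.register(self)

    def review(self):                    # pull change details, then re-sync
        self.pending.clear()
        self.status = "green"

bus = ProjectBus()
schematic, layout = Client("schematic", bus), Client("layout", bus)
bus.commit(schematic, "C12 value changed")
print(layout.status)    # -> amber
layout.review()
print(layout.status)    # -> green
```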

Concurrent Design within an Application
Another reality of current projects is that the optimization of a complex system will involve the skills of all team members, leveraging specific expertise. The PCB development platform needs to readily support concurrent design among team members working on different areas of the design within the same application.

There are brute-force “partition, work independently, and re-assemble” approaches to concurrent design. Yet, designers require real-time visibility to the full design model — a true concurrent platform enables the full design data to be accessible.

Mentor’s Xpedition platform enables a fully concurrent set of users working in the same application on the full project database.


Consistency is a MUST
A development platform that enables collaborative, concurrent design MUST ensure the consistency of the “live” data being updated by the various team members. Xpedition maintains a single, consistent database model. For example, schematic sheets being edited are locked against edits by others until the sheet is closed — however, as schematic objects are modified, updates are visible in real-time to users viewing the schematic set. Design constraints being inserted/edited are locked until the edit is complete — the specific user performing the edit is displayed to other users viewing the constraint set. Once the edit is complete, updated objects are then highlighted to other users. For multiple designers working concurrently on a board layout, Xpedition provides isolation through real-time “force fields” representing the active neighborhoods in which separate designers are working, visible to all clients.


The figure above illustrates multiple concurrent layout users, with the force field identifying an active edit area of one user, and thus, a keepout for the others.
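The “force field” behavior can be pictured as region-based locking: an active edit claims an area of the board, and other users’ edits inside that area are refused. The sketch below is a toy model of the concept, not Mentor’s implementation.

```python
# Region locks as a stand-in for the "force field" idea.
class RegionLocks:
    def __init__(self):
        self.regions = {}    # user -> (x0, y0, x1, y1)

    def claim(self, user, rect):
        if any(self._overlaps(rect, r) for u, r in self.regions.items()
               if u != user):
            return False                  # inside someone else's force field
        self.regions[user] = rect
        return True

    def release(self, user):
        self.regions.pop(user, None)

    @staticmethod
    def _overlaps(a, b):
        ax0, ay0, ax1, ay1 = a
        bx0, by0, bx1, by1 = b
        return ax0 < bx1 and bx0 < ax1 and ay0 < by1 and by0 < ay1

locks = RegionLocks()
print(locks.claim("alice", (0, 0, 50, 50)))    # True: area now hers
print(locks.claim("bob",   (40, 40, 90, 90)))  # False: overlaps alice
print(locks.claim("bob",   (60, 0, 90, 30)))   # True: disjoint area
```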

At the PCB Forum, Mentor provided a live demo of the unique features of the Xpedition product platform for concurrent, collaborative PCB design. It was one of those “Wow, that’s incredibly productive!” design methodology demonstrations.

A central Xpedition server manages the project data and concurrent access by multiple (up to 16) design clients. The demo highlighted how collaborative updates are dynamically reported to another application’s client using the traffic light indicator — e.g., the layout designer receives an amber notification when there is a component change in a schematic or an update to a property=value assignment in the constraint manager. The demo also highlighted how multiple clients work concurrently in the same application, with the appropriate locking of data objects.

Xpedition’s collaborative, concurrent environment supports both a lightweight data management/notification system, and a full enterprise-level DM application, which includes full user privilege and authentication controls, notification and signoff policies, and version/configuration management support. In either the Xpedition lightweight or xDM-based data management mode, client sessions are independent — the individual design workspaces are separate, supporting unique user preferences.

In addition to the use of a site-based server, the Xpedition collaboration features support a cloud-based project database.

Mentor’s Xpedition design platform addresses the 4C’s required by large, complex systems, enabling a Collaborative (multiple, dependent applications), Concurrent (same application) environment, with data management features ensuring Consistency, across a site or Cloud-based project database. The transition from a waterfall process to a more flexible methodology offers substantial project productivity benefits.

For more information on Xpedition’s design platform and collaboration features, please follow these links:

additional Mentor PCB Forum locations/dates

Concurrent Engineering landing page (with multiple video demonstrations)

Collaborative Management of Design Constraints blog

Concurrent Schematic Design blog

Real-time Concurrent PCB Layout blog

Xpedition datasheet

-chipguy


Webinar: Getting to Formal Coverage
by Bernard Murphy on 04-20-2017 at 10:00 am

Facing rapidly growing challenges in getting to respectable coverage, designers have been turning more and more to formal verification, not just to plug gaps but increasingly to take over verification of significant components of the testplan. Which is great, but at the end of the day any approach to verification must be measured against its contribution to coverage and most of us wrestle with how to do that for formal.

REGISTER HERE for Webinar on Tuesday April 25th at 10am PDT

We know that when we verify a feature formally we earn a very solid check mark for that particular feature, but how can we factor that into overall coverage, and how does that relate to the coverage we understand best – simulation-based coverage? A disciplined engineering management approach to verification signoff must answer this question for formal investment on a design, to ensure that effort adds up to more than a disaggregated set of point proofs.

Synopsys aims to answer that need in this webinar, providing ways to quantify formal coverage and particularly answering questions on how much of a design is covered by checkers and how much by full proofs, where design constraints might be unnecessarily limiting coverage and how to address coverage questions for inconclusive proofs.

REGISTER HERE

Web event: Boosting Confidence in Property Verification Results with VC Formal
Date: April 25, 2017
Time:10:00 AM PDT
Duration: 60 minutes

Formal property verification has gained a lot of traction in recent years due to (a) the ever-increasing challenge of verifying all possible corner-case behaviors and (b) industry adoption and acknowledgement of the power of assertion-based verification.

The user base for property verification is not limited to a handful of formal experts but has extended to the realm of simulation-based verification users and designers. This increasingly diverse user base puts the spotlight on the most fundamental, “must-have” requirement for every verification engineer/manager — “How does one measure or quantify formal verification?” – a question answered in simulation-based verification using coverage metrics.

In this webinar, we will showcase VC Formal’s capabilities, which include allowing users to quantify formal progress at a granular level, in order to address the 4 basic questions leading to formal signoff:

  • How much of my design is covered by the list of checkers?
  • Is my formal test bench over constrained?
  • Are proof depths from inconclusive results good enough to catch potential design bugs?
  • Do the full proofs cover the design logic that they were intended to cover?

We will rely on existing simulation-based verification coverage targets (i.e., line coverage, condition coverage, FSM coverage) to measure the RTL targets that are hit by the formal test bench.
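One simple way to picture merging formal results into a simulation coverage view is a set union over coverage targets: a target counts as covered if either simulation hit it or a full proof covered it. The sketch below uses a hypothetical file:line data model; VC Formal’s actual coverage database and merge semantics are Synopsys’s own.

```python
# Hypothetical merge of simulation hits with formally-proven targets.
sim_hits    = {"alu.v:12", "alu.v:13", "fsm.v:40"}
formal_full = {"fsm.v:40", "fsm.v:41", "fsm.v:42"}   # under full proofs
all_targets = ({f"alu.v:{n}" for n in (12, 13, 14)}
               | {f"fsm.v:{n}" for n in (40, 41, 42)})

covered = (sim_hits | formal_full) & all_targets
print(f"merged coverage: {len(covered)}/{len(all_targets)} "
      f"({100 * len(covered) / len(all_targets):.0f}%)")
# -> merged coverage: 5/6 (83%); alu.v:14 remains a verification hole
```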

REGISTER HERE

Speakers:

Kiran Vittal
Product Marketing Director, Verification Group

Kiran Vittal is a product marketing director at Synopsys, with 25 years of experience in EDA and semiconductor design. Prior to joining Synopsys, Kiran held product marketing, field applications and engineering positions at Atrenta, Viewlogic, and Mentor Graphics. He holds an MBA from Santa Clara University and a Bachelor’s degree in Electronics Engineering from India.


Abhishek Muchandikar
Staff Corporate Applications Engineer, Synopsys Verification Group

Abhishek Muchandikar is a Staff Corporate Applications Engineer in Synopsys’ Verification Group. He has over 11 years of experience in the verification domain, having worked on both formal and simulation-based methodologies. He previously worked on telecom protocol software. He holds a Master’s Degree in Microelectronics from Victoria University, Melbourne, Australia.


Virtual Reality
by Bernard Murphy on 04-20-2017 at 7:00 am

In the world of hardware emulators, virtualization is a hot and sometimes contentious topic. It’s hot because emulators are expensive, creating a lot of pressure to maximize return on that investment through multi-user sharing and 24×7 operation. And of course in this cloud-centric world it doesn’t hurt to promote cloud-like access, availability and scalability. The topic is contentious because vendor solutions differ in some respects and, naturally, their champions are eager to promote those differences as clear indication of the superiority of their solution.

Largely thanks to contending claims, I was finding it difficult to parse what virtualization really means in emulation, so I asked Frank Schirrmeister (Cadence) for his clarification. I should stress that I have previously talked with Jean Marie Brunet (Mentor) and Lauro Rizzatti (Mentor consultant and previously with Eve), so I think I’m building this blog on reasonably balanced input (though sadly not including input from Synopsys, who generally prefer not to participate in discussions in this area).

There’s little debate about the purpose of virtualization – global/remote access, maximized continuous utilization and 24×7 operation. There also seems to be agreement that hardware emulation is naturally moving towards becoming another datacenter resource, alongside other special-purpose accelerators. Indeed, the newer models are designed to fit datacenter footprints and power expectations (though there is hot 😎 debate on the power topic).

Most of the debate is around implementation, particularly regarding purely “software” (RTL plus maybe C/C++) verification versus hybrid setups where part of the environment connects to real hardware, such as external systems connecting through PCIe or HDMI interfaces for example. Pure software is appealing because it offers easy job relocation, which helps the emulator OS pack jobs for maximum utilization and therefore also helps with scalability (add another server, get more capacity).

In contrast, hybrid (ICE) modeling requires external hardware and cabling to connect to the emulator, also specific to a particular verification task, and that would seem to undermine the ability to relocate jobs or scale and therefore undermine the whole concept of virtualization. In fact, this problem has been largely addressed in some platforms. You still need the external hardware and cabling of course but internal connectivity has been virtualized between those interfaces and jobs running on the emulator. Since many design systems want to ICE-model with a common set of interfaces (PCIe, USB, HDMI, SAS, Ethernet, JTAG), these resources can also be shared and jobs continue to be relocatable, scalable and fully virtualized.

Naturally external ICE components can also be virtualized, running on the software host, or the emulator or some combination of these. One appeal here is that there is no need for any external hardware (beyond the emulation servers), which could be attractive for deployment in general-purpose datacenters. A more compelling reason is to connect with expert 3rd-party software-based systems to model high levels and varying styles of traffic which would be difficult to reproduce in a local hardware system. One obvious example is in modelling network traffic across many protocols, varying rates and SDN. This is an area where solutions need to connect to testing systems from experts like Ixia.


You might wonder then if the logical endpoint of emulator evolution is for all external interfaces to be virtualized. I’m not wholly convinced. Models, no matter how well they are built, are never going to be as accurate as the real thing in real-time and asynchronous behaviors, and especially in modeling fully realistic traffic. Yet virtual models unquestionably have value in some contexts. I’m inclined to think the tradeoff between virtualized modeling and ICE modeling is too complex to reduce to a simple ranking. For some applications, software models will be ideal, especially when there is external expertise in the loop. For others, only early testing in a real hardware system will give the level of confidence required, especially in the late stages of design. Bottom line, we probably need both and always will.

So that’s my take on virtual reality. You can learn more about the vendor positions HERE and HERE.