
Magwel Adds Core Device Checking for ESD Verification

by Tom Simon on 05-11-2021 at 10:00 am


In the past, ESD sign-off has been accomplished with a combination of techniques. Often ESD experts are asked to look at a design and assess its ESD robustness based on experience gained from prior chips. Alternatively, designers are handed a set of rules, again based on previous experience of what usually works and what fails. Tools can enter the mix, many of them using widely varying methods with widely varying success. Indeed, engineering teams looking to buy ESD tools are confronted with a confusing set of solutions that may or may not find real problems and, just as importantly, may report numerous false errors. Some tools require multiple iterations to find real issues, or simply take teams forever to run and review results because they cannot filter out false violations.

Magwel has been delivering a solid ESD solution for HBM verification for many years. Magwel’s ESDi tool strikes the perfect balance of comprehensive checking without creating burdensome simulation workloads. As a result, it reports fewer false positives and gives designers the tools to rapidly trace, debug and fix any issues it finds.

Tool Choices

As mentioned before, engineers often face apples-to-oranges choices and must hope they pick the right tool. Some tools use simple loop resistance to find potential problem paths, then require more detailed simulation to assess the real severity. This approach can miss problem paths altogether. Other solutions rely on rules to detect issues, but the quality of the verification depends heavily on the specifics of the rules, and new problems can occur that the existing rules never anticipated. Another approach leans too heavily on voltage propagation. While this is a step in the right direction, it can miss the nuances that many designs present.

Simulation Approach

Magwel’s approach has always been to use thorough and fast simulation, with easily obtainable TLP models for the ESD devices. It has a built-in highly accurate extraction engine tuned for ESD analysis. ESDi looks at each and every pad-pair (or pins in the case of IP) to see where problems are occurring. Self-protecting devices are also easily modeled. Because it uses comprehensive simulation, it can handle multiple parallel discharge paths, which ultimately affect current distribution and voltage levels across devices. Performance is boosted by parallel processing. ESDi typically simulates an HBM test in a fraction of a second and can perform up to 10K tests per hour per parallel thread.
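To get a feel for why per-test speed matters, consider how pad-pair counts grow with pin count. The sketch below is a hypothetical back-of-envelope estimate, not Magwel’s API; the pad names, thread count and two-polarity assumption are all illustrative:

```python
from itertools import combinations

def hbm_test_plan(pads, tests_per_hour=10_000, threads=1):
    """Back-of-envelope estimate of an HBM pad-pair test campaign.

    Every unordered pad pair is zapped in both polarities (an
    assumption here), which is why test counts grow quadratically
    with pin count.
    """
    pairs = list(combinations(pads, 2))       # every unordered pad pair
    total_tests = 2 * len(pairs)              # + and - polarity per pair
    hours = total_tests / (tests_per_hour * threads)
    return len(pairs), total_tests, hours

# A hypothetical 200-pin chip run on 8 parallel threads:
pads = [f"PAD{i}" for i in range(200)]
pairs, tests, hours = hbm_test_plan(pads, threads=8)
# 200 pins -> 19,900 pairs -> 39,800 tests
```

Even at the quoted 10K tests per hour per thread, a modest pin count produces tens of thousands of tests, which is why sub-second per-test simulation and parallelism matter.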

ESDi also checks for missing vias or wires which may lead to unconnected ESD devices, as well as many other common layout issues that can cause ESD related failures. It handles chips with multiple power domains and checks for electro-migration issues.

ESDi-XL Flow

To improve overall ESD design and verification, Magwel has just announced a set of new features that expand the ability to detect issues and improve the effectiveness of both front-end and back-end design teams. With ESDi-XL, design teams can now get an early look at ESD robustness during schematic design. Early insight into ESD protection effectiveness can save precious design time and avoid unnecessary iterations.

New Analysis Methods

Perhaps most important of all the new features in ESDi-XL is the addition of IO cell and core checking for overvoltage and overcurrent conditions during ESD discharge events. ESDi-XL already predicts voltage and current flows in IO and ESD cells with excellent accuracy. Magwel applies this information in a proprietary algorithm to rapidly detect when any core device would be exposed to overcurrent or overvoltage in the course of an ESD discharge event. This is extremely important because, even with ESD protections working, the potential for internal device damage still exists in many designs. Unfortunately, until now the only way to find these issues was with massive, time-consuming simulations or after tapeout on the tester. Magwel’s approach is fast and accurate and can save projects from respins.

ESDi-XL Core checking

ESDi-XL also performs new expert topological checks, such as the presence and value of protection resistors at gate inputs, presence of secondary protections or W/L aspect ratios of stacked devices.

Conclusion

Magwel’s ESDi-XL brings high-speed, accurate ESD analysis to the entire flow. It can quickly replace or supplement methods such as cursory checks, tedious simulation and manual review, all of which can be impractical or error prone. If you are going to buy an ESD tool, it makes sense to do it before you experience a project delay or failure. For more information on Magwel’s ESDi-XL, visit their website.

 


Cadence Extends Tensilica Vision, AI Product Line

by Bernard Murphy on 05-11-2021 at 6:00 am


Vision pipelines, from image signal processing (ISP) through AI processing and fancy effects (super-resolution, Bokeh and others), have become fundamental to almost every aspect of the modern world. In automotive safety, robotics, drones, mobile applications and AR/VR, much of what we now consider essential would be impossible without those vision capabilities. Cadence Tensilica Vision platforms are already used in some impressive applications from companies including Toshiba, Kneron, Vayyar and GEO Semiconductor, so when Cadence extends its vision and AI product line, that’s interesting news.

Fast evolution

Remember that this is a fast-moving space. You can already find phones with 6 cameras to support digital zoom and higher quality than you could get out of a single (phone-sized) camera. AR headsets that will let you measure depths through time-of-flight sensing (supported by a laser/LED). And, by extension, distances through a little trigonometry. Which can be invaluable in personal or work-related AR applications where you need to capture dimensions.

ISPs themselves are evolving rapidly because the quality of the image they produce critically affects the quality of recognition in the subsequent AI phase. This isn’t just about a pleasing picture. Now you must worry about whether the camera can distinguish a pedestrian stepping off the sidewalk in difficult lighting conditions. Before you even get to the AI phase. Dynamic range compression is a hot area here.

Then there’s AI, a world which continues to raise the bar on innovation in so many ways. This part of the pipeline is constantly advancing, spinning new neural net architectures to boost performance for safety critical applications. And to reduce power for most applications, but especially at the edge.

And finally post-processing. Bokeh for a nice background blur around that picture of your kids. Merging a real view with an augmented overlay (suitably aligned) for an AR headset or glasses. Or consider SLAM processing for that robot vacuum to navigate around your house. Or a robot orderly to navigate around a hospital, delivering medications and meals to patients. SLAM works largely through vision, building a map on the fly and correcting frequently to guide navigation. Curiously, SLAM doesn’t yet depend on AI, though there are indications AI is starting to appear in some applications.

What it takes

All of this means multiple high-resolution video streams, plus perhaps time-of-flight sensor data, converging first into an ISP, requiring very intensive signal processing. Then into pre-processing to build, say, a 3D point cloud. Then perhaps into SLAM for all that localization and mapping. These are massive linear algebra tasks, generally requiring at least single-precision floating point, sometimes double.

The AI task is becoming a little more familiar. Sliding windows over massive fixed-point convolution, ReLU and other operations across many neural net planes. Requiring heavy parallelism and lots of MAC (multiply-accumulate) primitives. With as much of the computation as possible staying in local memory, because off-chip memory access is even uglier for AI power and performance than for regular algorithms.
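As a rough mental model of that workload, the sliding-window MAC pattern can be sketched in a few lines of NumPy. This is purely illustrative reference code, not Tensilica’s implementation; a real vision DSP would execute the inner multiply-accumulate across wide SIMD lanes rather than in a Python loop:

```python
import numpy as np

def conv2d_relu(image, kernel):
    """Naive valid-mode 2D convolution followed by ReLU.

    Each output element is one multiply-accumulate (MAC) sequence
    over a kernel-sized window -- the primitive that vision DSPs
    parallelize heavily in hardware.
    """
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # slide the window and accumulate products
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return np.maximum(out, 0.0)   # ReLU: clamp negatives to zero
```

Multiply this pattern by many kernels, many channels and many layers and the appetite for parallel MACs and local memory becomes obvious.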

Then you must fuse those inputs to enhance accuracy in object recognition through redundancy (low false positives and negatives), to compute depths and dimensions and whatever other conclusions could be derived from these images. Doing all of this requires a platform that is very fast, supporting all that signal processing, linear algebra and convolution. And very flexible because the algorithms continue to evolve. A platform which can also support hardware differentiation, to make your product better than your competitors’ offerings. The only way that I know to fit that profile is with embedded customizable DSPs.

A spectrum of solutions

The range of applications an embedded solution like this must support demands both high-performance and low-power options. Tensilica already provides the Vision Q7 and Vision P6 platforms for high throughput and low power respectively. Now they have extended the family. The Vision Q8 offers 2X the performance of the Q7 in computer vision, AI and floating point, and addresses high-end mobile and automotive applications. The Vision P1 offers a third of the power and area of the P6 and is targeted at always-on applications (face recognition, smart surveillance, video doorbells, …). Sensors for these applications will trigger (on movement or proximity, for example) a wakeup call to the app.

Both processors use the same SIMD and VLIW architecture used in the Q7 and P6, along with the same software tools, library and interfaces. OpenCL, Halide, C/C++ and OpenVx for computer vision, all the standard networks for AI.

And this is really cool. Suppose you have your own AI acceleration hardware. Not the full accelerator, but some part of it where you add your own special sauce. The Tensilica platforms will operate as the AI master engine but can offload those planes to your special hardware, which returns control to the master when done. The compile flow through Tensilica XNNC-Link supports this division of labor starting from a common input.

You can learn more about these Tensilica platforms HERE.

Also Read

Agile and Verification, Validation. Innovation in Verification

Cadence Dynamic Duo Upgrade Debuts

Reducing Compile Time in Emulation. Innovation in Verification


Webinar: System Level Modeling and Analysis of Processors and SoC Designs

by Daniel Payne on 05-10-2021 at 10:00 am


Engineers love to optimize their designs, but that implies there are models and stimulus to automate the process. Process engineers have TCAD tools, circuit designers have SPICE for circuit simulation, logic designers have gate-level simulators, RTL designers use logic simulation, but what is there for the system architects of a processor or SoC design? Even back in the 1980s at Intel, I recall an architect coding a GPU architecture in the MainSail language and then pushing stimulus through it to find the performance and bottlenecks, all prior to any detailed implementation. That, however, required a lot of error-prone hand-coding.

In 2021 there’s a systems engineering company called Mirabilis Design, focused on providing system architects with a modeling and analysis environment to do actual exploration and make the trade-offs to pick the winning architecture. I spoke with the founder of Mirabilis Design, Deepak Shankar, to learn about his upcoming webinar.

Webinar

System architects have high-level questions that need answers, like: how will my SoC respond to network traffic, what is the Quality of Service (QoS), and how should the Network on Chip (NoC) be configured?

I learned that the Mirabilis approach is to use a system-level simulator along with a library of 500 models for things like queuing, networking, ARM M1, RISC-V, etc. Most of these models are parameterized, like a scheduler, so you can get the proper configuration. If you wanted a system with an Arteris NoC, an ARM M1 core and an LPDDR5 interface to RAM, how would they all work together, and how should the NoC be set up?

If the block that you want isn’t already modeled in the library, then there’s a way for you to quickly build your own, or even modify the source code of an existing library block.

In the webinar you’ll see two cases where the VisualSim environment is used to evaluate the requirements, power, performance and function of a processor and an SoC design. The architectural exploration flow with this approach looks like this:

What struck me most with Mirabilis was that architects can now get early access to throughput, performance, power and even timing, all before detailed implementation is started. Power estimation and performance modeling are no longer split between two different groups of engineers, with two different sets of tools.

Results from VisualSim on the power estimation side are typically within 5-7% of what you’ll measure in silicon, which is quite valuable because other approaches are lucky to be within 50% of silicon values.

Mark your calendar for May 27th, from 10AM to 11AM PDT, then sign up online for this informative webinar from Mirabilis Design.

Mirabilis

Mirabilis Design provides modeling, exploration and collaboration solutions for semiconductors, digital electronics and embedded systems. Clientele includes a mix of semiconductor, defense, aerospace, automotive and computing product suppliers. Six of the top 12 semiconductor companies, 8 of the top 15 defense suppliers and 4 of the top 10 electronics companies use VisualSim to ensure the right design for their products.

Also Read:

WEBINAR: Balancing Performance and Power in adding AI Accelerators to System-on-Chip (SoC)

Webinar – Comparing ARM and RISC-V Cores

System-Level Modeling using your Web Browser


Samtec Keynote – Power Integrity is the New Black Magic

by Mike Gianfagna on 05-10-2021 at 6:00 am


The Signal Integrity Journal recently held a half day Electronic Systems SI/PI Forum that included presentations from industry leaders covering key design topics for signal integrity and power integrity engineers. The event was sponsored by Cadence. The keynote for the event was presented by Istvan Novak, principal signal and power integrity engineer at Samtec. Istvan presented some observations and revelations that will definitely make you stop and think. It was quite a memorable talk. If you missed it, don’t worry. A replay link is coming. But first, let’s look at some of the comments on why power integrity is the new black magic.

Istvan Novak

First, a bit about the speaker. Istvan Novak works on advanced signal and power integrity designs. Prior to 2018 he was a distinguished engineer at Sun Microsystems, later Oracle. He worked on new technology development, advanced power distribution, and signal integrity design and validation methodologies for Sun’s successful workgroup server families. He was engaged in the methodologies, designs and characterization of power-distribution networks from silicon to DC-DC converters. He is a Life Fellow of the IEEE with twenty-nine patents to his name, the author of two books on power integrity; he teaches signal and power integrity courses and maintains a popular SI/PI website. Istvan was named Engineer of the Year at DesignCon 2020. If power integrity is of interest to you, Istvan is someone you will want to listen to.

Istvan began by explaining the motivation for the title of his talk. Before the 1990s, electromagnetic compatibility (EMC) was a key focus. In the early 1990s, signal integrity became a new area of focus and a defined discipline. In 1994, Dr. Howard Johnson famously described signal integrity challenges as “black magic” in his textbook, which is still in circulation today. Some industry experts believe that as signal integrity has matured, power integrity has become the new black magic. Samtec is no stranger to either signal or power integrity, by the way. Dan and I discussed signal integrity with Matt Burns of Samtec in this podcast.

Istvan examines the reasons why power integrity is so difficult as he analyzes past predictions and current challenges. He discusses the safety and reliability concerns brought on by the proliferation of power electronic circuits in all walks of life, from tiny energy-harvesting circuits, through consumer electronics products, to high-power electronics in autonomous vehicles.

Istvan provides an example early in his talk.  He discusses the widespread power blackout on the east coast of the US in 2003 as a substantial example of what can go wrong. This massive chain reaction failure was due to a power integrity problem. Istvan goes on to discuss the impact that an increasing number of supply rails has on power distribution network (PDN) design. The increasing density of these systems increases noise, and this is a key challenge.

Looking more closely at signal integrity vs. power integrity, Istvan points out that signal integrity tends to be a one-dimensional problem: the signal path is typically well defined and the parameters associated with maintaining the signal are also known. Contrast that with power integrity, where power distribution is done over the entire chip with normal and wide traces as well as power planes. In this case, the distribution of effects is much more of a 2D problem, and the particular mechanisms at play come from noise, which is harder to characterize.

Istvan goes on to discuss other challenges associated with power integrity and how to characterize it accurately. He cites several design examples that do a great job of illuminating what needs to be looked at. I highly recommend you watch his keynote if power integrity is on your mind. You will come to understand why power integrity is the new black magic. You can see Istvan’s keynote here.


Mars Perseverance Rover Features First Zoom Lens in Deep Space

by Synopsys on 05-09-2021 at 10:00 am


On July 30, 2020, NASA launched the Mars 2020 Perseverance rover, which landed on Mars on February 18, 2021. Perseverance has been deployed to Mars with a new mission: to search for evidence of past life and collect samples that will eventually be brought back to Earth by future missions.

Mars 2020 Perseverance rendering courtesy of NASA/JPL-Caltech

According to NASA, the Perseverance Mars mission “takes the next step by not only seeking signs of habitable conditions on Mars in the ancient past, but also searching for signs of past microbial life itself.” The Perseverance rover is similar in size and design to the Curiosity rover, about the size of a compact car, but features new camera systems to facilitate rock and soil sample collection.

One of these is the Mastcam-Z instrument, which functions as the rover’s mast-mounted scientific “eyes.” The “Z” in Mastcam-Z stands for “zoom.” It represents a milestone in the history of space exploration: it’s the first zoom lens system to be included on a deep space instrument.

Mastcam-Z: Powerful Zoom Capabilities

The Mastcam-Z instrument is an update of the Mastcam instrument on the Mars Curiosity rover. It will perform several key functions:

  • Scanning the landscape on Mars to help scientists understand the terrain.
  • Assessing atmospheric and astronomical conditions.
  • Helping scientists identify and characterize materials for rock and soil sampling.

Mastcam-Z is capable of producing multispectral, stereoscopic images. With its powerful zoom, it will help scientists see small features on the Mars landscape from far away. To give an idea of its power, it is capable of resolving features as small as 3cm from a distance of 100m.
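That specification implies a remarkably fine angular resolution. A quick worked check, using the small-angle approximation (my arithmetic, not a NASA figure):

```python
import math

# Angular resolution implied by "3 cm features at 100 m":
feature, distance = 0.03, 100.0         # meters
angle_rad = feature / distance          # small-angle approximation
arcsec = math.degrees(angle_rad) * 3600
# about 62 arcseconds per resolved feature
```

For comparison, the unaided human eye resolves roughly one arcminute (60 arcseconds), so Mastcam-Z at full zoom sees detail at 100 m about as finely as a person standing there would.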

Synopsys optical engineers partnered with Malin Space Science Systems and Arizona State University to design the Mastcam-Z zoom lens system using Synopsys’ CODE V optical design software. There were many technical challenges to resolve. The lenses needed to be well corrected over an extended visible spectral range and needed to operate over at least a 3x zoom range while being able to focus from close to the rover out to infinity. The lenses also had to operate over a large temperature range, including extreme temperature gradients. This set of operating conditions required substantial design effort as well as extremely detailed analyses. Synopsys optical engineers needed to show that the lenses could be successfully fabricated and that they would function over all the operating conditions.

Dr. Jim Bell, principal investigator at Arizona State University, commented, “The Mastcam-Z science and instrument development teams were extremely pleased with the high level of technical skill and support provided by the Synopsys team designing the zoom lens system. The result is an amazing pair of cameras that are expected to give us high resolution color and even 3-D views of Mars.”

Dr. Michael Ravine, advanced projects manager at Malin Space Science Systems, said, “Synopsys supported the Mastcam-Z development from the proposal through our final testing. We were pleased with how well the zooms worked under simulated Mars conditions, and we’re looking forward to seeing them actually working on Mars.”

Dr. Blake Crowther, principal optical engineer at Synopsys Optical Solutions Group, said, “One of the biggest challenges associated with designing lenses for use in interplanetary missions is the multivariate nature of their operating environment — coupled with the fact that they must work the first time and every time without human attention. This is compounded when designing a zoom lens that must function over a significant range of object distances. The amount of detail that the designer must keep in mind over the design process is incredible. In every phase of the design, the optical engineer needs to be able to communicate complex design trades and detailed analyses results to a large and diverse review community, which is no small endeavor. It was an honor to design such a lens with the talented team assembled for the job. It was also fun.”

Illuminating Sample Collection

Another updated camera system on the Perseverance Mars rover is the CacheCam, one of several engineering cameras on board. The CacheCam is located underneath Perseverance and takes pictures of sampled materials as they are being prepared for sealing and caching. Synopsys optical engineers contributed to the design of illumination optics on the CacheCam.

The CacheCam includes a fixed illuminator (no moving parts) to provide close to uniform illumination of the materials throughout the collection process; at the beginning, the samples are far from the imaging optics and, at the end, the samples are much closer. In addition, the illuminator has to account for the presence of dust on the outer surface of the optics, as well as on the inner surfaces of the collection tube.

Simon Magarill, one of the engineers who worked on the CacheCam design, noted, “The requirement to include the dust in the analysis and design process required a lot of calculations to account for different sizes and concentrations of scattering particles. We developed a systematic approach to design an illuminator that provides optimum performance in such challenging conditions.”

Perseverance CacheCam image. Courtesy of NASA/JPL-Caltech.

Learn More

A great place to start learning more about the Mars 2020 Perseverance rover is the mission overview on NASA’s website at https://mars.nasa.gov/mars2020/mission/overview/.

Also Read:

Verification Management the Synopsys Way

Synopsys Debuts Major New Analog Simulation Capabilities

Accelerating Cache Coherence Verification


Is IBM’s 2nm Announcement Actually a 2nm Node?

by Scotten Jones on 05-09-2021 at 6:00 am


IBM has announced the development of a 2nm process.

IBM Announcement

What was announced:

  • “2nm”
  • 50 billion transistors in a “thumbnail” sized area, later disclosed to be 150mm2 = 333 million transistors per square millimeter (MTx/mm2).
  • 44nm Contacted Poly Pitch (CPP) with 12nm gate length.
  • Gate All Around (GAA), there are several ways to do GAA, based on the cross sections IBM is using horizontal nanosheets (HNS).
  • The HNS stack is built over an oxide layer.
  • 45% higher performance or 75% lower power versus the most advanced 7nm chips.
  • EUV patterning is used in the front end and allows the HNS sheet width to be varied between 15nm to 70nm. This is very useful to tune various areas of the circuit for low power or high performance and also for SRAM cells.
  • The sheets are 5nm thick and stacked three high.

Is this really “2nm” as claimed by IBM? The current leader in production process technology is TSMC. We have plotted TSMC node names versus transistor density and fitted a curve with a 0.99 R2 value, see figure 1.

Figure 1. TSMC Equivalent Nodes.

Using the curve fit we can convert transistor density to a TSMC Equivalent Node (TEN). For IBM’s announced 333MTx/mm2 we get a TEN of 2.9nm. In our opinion this makes the announcement a 3nm node, not a 2nm node.
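For readers who want to reproduce the idea, here is a sketch of such a node-name-versus-density fit. The density figures are approximate published values and the log-log fit form is my assumption, so this is not the exact curve behind figure 1, but it lands in the same place:

```python
import numpy as np

# Approximate published TSMC logic densities (MTx/mm2) by node name (nm).
# Both the figures and the log-log power-law fit are illustrative
# assumptions, not the article's actual curve.
nodes = np.array([16.0, 10.0, 7.0, 5.0])
density = np.array([28.9, 52.5, 91.2, 171.3])

slope, intercept = np.polyfit(np.log(nodes), np.log(density), 1)

def tsmc_equivalent_node(mtx_per_mm2):
    """Invert the fit: transistor density -> TSMC Equivalent Node (nm)."""
    return float(np.exp((np.log(mtx_per_mm2) - intercept) / slope))

ten = tsmc_equivalent_node(333.0)   # IBM's announced density
# lands near 3nm, consistent with the article's conclusion
```

With these rough inputs the inversion gives a value close to 3nm, in line with the article’s 2.9nm TEN.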

To compare the IBM announcement in more detail to previously announced 3nm processes and projected 2nm processes we need to make some estimates.

  • We know the CPP is 44nm from the announcement.
  • We are assuming a Single Diffusion Break (SDB) that would result in the densest process.
  • Looking at the cross section that was in the announcement, we do not see Buried Power Rails (BPR), BPR is required to reduce HNS track height down to 5.0, so we assume 6.0 for the process.
  • To get to 333MTx/mm2 the Minimum Metal Pitch must be 18nm, a very aggressive value likely requiring EUV multipatterning.
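As a sanity check on that last estimate, a crude NAND2-only density proxy can be computed from CPP, MMP and track height. This is an assumption-laden simplification (the usual published density metric mixes NAND2 and flip-flop cells, which this sketch ignores):

```python
# Crude NAND2-only density proxy from the estimated dimensions.
cpp_nm, mmp_nm, tracks = 44, 18, 6

cell_height_nm = tracks * mmp_nm     # 6-track cell: 108 nm tall
nand2_width_nm = 3 * cpp_nm          # ~3 CPP wide, assuming SDB
nand2_area_mm2 = cell_height_nm * nand2_width_nm * 1e-12

# 4 transistors per NAND2, converted to millions per mm2
density = 4 / nand2_area_mm2 / 1e6
# ~280 MTx/mm2 -- same ballpark as the announced 333
```

That the proxy lands within roughly 20% of 333MTx/mm2 supports the 18nm MMP estimate; a larger MMP would push the proxy well below the announced density.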

IBM 2nm Versus Foundry 3nm

Figure 2 compares the IBM 2nm device to our estimates for Samsung and TSMC 3nm processes. We know Samsung is also doing an HNS and TSMC is staying with a FinFET at 3nm. Samsung and TSMC have both announced density improvements for their 3nm processes versus their 5nm processes, so we have known transistor densities for all three companies and can compute a TEN for each. As previously noted, IBM’s TEN is 2.9; we now see Samsung’s TEN is 4.7 and TSMC’s TEN is 3.0, again reinforcing that IBM 2nm is like TSMC 3nm and that Samsung is lagging TSMC.

The numbers in red in figure 2 are estimated to achieve the announced densities. We assume SDB for all companies. TSMC has the smallest track height because a FinFET can have a 5.0 track height without BPR, but HNS needs BPR to reach 5.0 and BPR isn’t ready yet.

Figure 2. IBM 2nm Versus Foundry 3nm.

IBM 2nm Versus Foundry 2nm

We have also projected Samsung and TSMC 2nm processes in figure 3. We are projecting that both companies will use BPR (BPR is not ready yet but likely will be when Samsung and TSMC introduce 2nm around 2023/2024). We also assume that Samsung and TSMC will utilize a forksheet HNS (HNS-FS) architecture to reach a 4.33 track height, relaxing some of the other shrink requirements. We have then projected out CPP and MMP based on the companies’ recent shrink trends.

Figure 3. IBM 2nm Versus Foundry 2nm.

Power and Performance

At ISS this year I estimated relative power and performance for Samsung and TSMC by node, with some additional Intel performance data. The trend by node is based on the companies’ announced power and performance scaling estimates versus available comparisons at 14nm/16nm. For more information see the ISS article here.

Since IBM compared their power and performance improvements to leading 7nm performance I can place the IBM power and performance on the same trend plots I previously presented, see figure 4.

Figure 4. Power and Performance (estimates).

IBM’s use of HNS yields a significant reduction in power and makes their 2nm process more power efficient than Samsung’s or TSMC’s 3nm processes, although we believe once TSMC adopts HNS at 2nm they will be as good as or better than IBM for power. For performance, we estimate that TSMC’s 3nm process will outperform the IBM 2nm process.

As discussed in the ISS article these trends are only estimates and are based on a lot of assumptions but are the best projections we can put together.

Conclusion

After analyzing the IBM announcement, we believe their “2nm” process is more like a 3nm TSMC process from a density perspective, with better power but inferior performance. The IBM announcement is impressive, but it is a research device whose only clear benefit versus TSMC’s 3nm process is power, and TSMC 3nm will be in risk starts later this year and in production next year.

We further believe that TSMC will have the leadership position in density, power, and performance at 2nm when their process enters production around 2023/2024.

Also Read:

Ireland – A Model for the US on Technology

How to Spend $100 Billion Dollars in Three Years

SPIE 2021 – Applied Materials – DRAM Scaling


Podcast EP19: The Emergence of 2.5D and Chiplets in AI-Based Applications

by Daniel Nenni on 05-07-2021 at 10:00 am

Dan and Mike are joined by Sudhir Mallya, vice president of corporate and product marketing at OpenFive. We explore 2.5D design and the role chiplets play. Current technical and business challenges are discussed as well as an assessment of how the chiplet market will develop and what impact it will have.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.

Sudhir Mallya is Vice President of Corporate and Product Marketing. He is responsible for custom silicon product marketing, technology roadmaps and business model innovation, corporate marketing initiatives, and strategic customer and partner alliances. He was previously at Toshiba, where he led their North American silicon BU with a focus on data center and automotive applications. He is based in Silicon Valley and has held executive positions in engineering, marketing, and business development at leading semiconductor companies. He has led multiple $100M+ global strategic customer engagements from very early concept to high-volume production. He has a BSEE from the Indian Institute of Technology, Bombay, and an MSEE from the University of Cincinnati.


CEO Interview: Srinath Anantharaman of Cliosoft

by Daniel Nenni on 05-07-2021 at 6:00 am


Srinath Anantharaman founded Cliosoft in 1997 and serves as the company’s CEO.  He has over 40 years of software engineering and management experience in the EDA industry.  Srinath graduated with a Bachelor of Technology from IIT/Kanpur and MSEE from Washington University in St. Louis.

The last time we talked to you was 2017. Tell us a little bit about how the company has grown since then and how you’ve evolved your strategy.

The company has grown steadily and significantly over the last few years. Oddly, we have seen a big uptick in our business during the COVID lockdown. Our SOS family of design management solutions has become the backbone for design data collaboration for many of the largest semiconductor companies in the world. We have engineers spread all over the world from the US to Australia developing and supporting the software that these multinationals depend on to share data efficiently across their design centers and the cloud.

Our business mantra has really never changed. Develop the best product we can, support our customers at the highest level and treat each other with respect.  We never focus on revenue or growth. These are by-products that will come if we deliver on our fundamentals. Apparently, we are delivering.

Your business model and solutions have gone to the heart of some of the biggest challenges of IP reuse. What are the challenges to IP reuse that you are seeing?

IP reuse is the holy grail of design we have been talking about for a while. It promises to bring about the next significant leap in design team productivity, design cost savings, and reduced time-to-market.  Unfortunately, reality has not caught up with the vision. While there is ad hoc IP reuse within a team, it rarely crosses over to other business units and/or across the enterprise. IPs are often trapped in silos while companies continue to acquire and grow globally. There are many factors that limit IP reuse. There is some overhead to develop IPs for reuse and it requires good documentation to assist potential reuse. It must be easy and convenient for designers to find the right IP and gauge its quality. When reusing an IP, designers need the ability to get help with the IP if needed, report issues found, and be notified if there are updates. Effective IP reuse requires a change in mindset, perhaps enforced through a mandate, along with an IP-based design methodology and a good software infrastructure to enable it all. Cliosoft is trying to evangelize the benefits of IP reuse and provide the tools needed to help design teams make it a reality.

Consolidation seems constant in the EDA industry. First, where do you see opportunities for new efficiencies? And where do you see opportunities for startups with disruptive technology?

Indeed, we have seen several acquisitions in our customer base. On Semiconductor acquired Fairchild and Aptina, Microchip acquired Microsemi, Intel picked up eASIC and Soft Machines, Marvell bought Inphi, Skyworks acquired Avnera and Synopsys has snapped up several IP vendors. We see this as a great opportunity. Most mid-to-large size companies are the result of several acquisitions, globally distributed with different cultures and expertise. To be more than the sum of its parts, engineers need to collaborate and share expertise across these boundaries.

Our SOS design management platform helps teams in different business units work together on exciting new projects. However, we saw a much bigger opportunity in providing a solution to help harness the power of all the intelligence and expertise spread across the enterprise. We introduced a new product called HUB, which as the name implies, lets people across the enterprise share their Intellectual Property and expertise with others. It enables problems to be solved quickly by crowdsourcing and designs to be completed faster without reinventing the wheel. I recently heard a talk from Erica Dhawan, the author of a book named ‘Get Big Things Done – The power of Connectional Intelligence’, where among other things, she talks about how difficult problems can be solved by leveraging the expertise of a broad network. Creative new solutions may come from unexpected sources looking at the problem from a different perspective. HUB was designed to do just that – provide a platform to enable the use of Connectional Intelligence within the enterprise by making it easy to share and reuse IP and expertise. Using HUB, an engineer in one business unit needing a silicon IP may find that it has already been developed by an acquired company. They can now quickly access the IP and leverage not only the expertise of the authors, but other users in the enterprise who may have integrated that IP into their designs. All the interaction is recorded in HUB and becomes a knowledge base that future users of the IP can leverage.

Does the rise and popularity of RISC-V make design management any more difficult for companies? Put another way, how do Cliosoft solutions help those companies who are embracing RISC-V IP?

From a design management perspective, our SOS design management platform helps design teams manage their RISC-V IP and designs exactly the same as any other IP and design. However, given the open-source nature of RISC-V and the fact that any user can collaborate and extend the ISA with new instructions and innovate the micro-architecture of the RISC-V processors, our HUB IP management platform helps manage and track this collaboration. HUB provides IP traceability for RISC-V IPs along with their knowledge base to help proliferate the evolution, reuse and integration of RISC-V IP.

Tell us a little bit about improvements you’ve made to the SOS platform since we last talked.

SOS is a very mature platform with well over 300 organizations using the software. As teams and projects have become larger, our focus has been to improve performance and scalability. We have seen an increase in IP-based design methodologies, so we have added features to streamline this design flow. Since SOS is primarily used in IC design flows, with many large binary files, optimizing the use of network storage has been a key differentiator. We are working on some new capabilities to improve storage optimization even more.

The other trend we have seen is that design teams may have multiple flows. A team using Cadence Virtuoso may also use Keysight ADS for designing some RF components. Some architects may use Mathworks Matlab and project leads may manage specifications and other documentation using Microsoft Office. We work with a variety of vendors so that engineers can invoke SOS revision control features directly from their preferred tools and all the design data and documentation is managed by SOS. Another trend is a result of acquisitions. A company using Cadence Virtuoso may acquire other companies that use Synopsys Custom Compiler or Siemens Tanner. Since SOS is integrated and production tested with all these flows, the company can use the same design management solutions for all the flows.

How do you see the rise of cloud services affecting your business?

Frankly, it has not affected our business in any significant way. Whether engineers are working in their private cloud or using rented cloud services, they are using our solutions in the same way. We have expertise with Amazon AWS, Google GCP and Microsoft Azure. Since we have a globally distributed workforce, we use the cloud and of course use our own software in the cloud to manage our software development.

Many startups use the Cadence Cloud-Hosted Design Solution. Our applications engineers have a great working relationship with the engineers managing the hosted solution at Cadence. Since the Cadence engineers are very familiar with our solution, they help onboard a new company. This almost eliminates the setup work for a new startup, which is often low on CAD expertise and resources. We can’t thank the Cadence hosted solution engineers enough.

The competitive landscape has changed a little bit around Cliosoft. What’s your take on the impact of those changes for users of IP management solutions?

Cliosoft has always been focused on meeting the data collaboration needs of design engineers. Our competition has changed in that their focus has become diluted with acquisitions or interest in entirely different domains. So we are now the only vendor left whose sole focus is on helping semiconductor companies manage their crown jewels – their IP and design data. Customers trust that we will be laser focused on solving their problems and this has given us further credibility. We have seen a steady migration of customers moving to our solutions.

In recent years, we’ve seen a big increase in the number of large, vertically integrated companies that design their own SoCs. Apple, Google, Facebook, Amazon, to name a few. Have they embraced commercial IP management solutions, or do they roll their own solutions simply because they can?

The large semiconductor companies still have the largest design teams and the bulk of our focus. The companies you mentioned clearly have the software expertise to build any solutions they want. However, software for managing IC design data and IP reuse is very specialized and not their area of expertise. Some of them already use our solutions.

As you embark on your 24th year in business, what’s your vision for how IP use and reuse will evolve in the coming years and how Cliosoft can address any challenges there?

We continue to see a vigorous growth of new startups. Many of these will get acquired and we will see more consolidation. As design teams will be required to move faster to accommodate shrinking market windows, I expect that upper management will push to make reuse of existing IP a reality and try to purchase third party IP when necessary. Tracking all this reuse information and managing dependency trees will be of paramount importance both for design integrity and quality, as well as avoiding any legal or financial jeopardy with third party IP vendors. Our HUB solution is well positioned to address these needs and we have seen a growing interest especially with large multinationals. We expect to learn more from these engagements and further enhance the product to meet these challenges.

Cliosoft.com

Also Read:

CEO Interview: Rich Weber of Semifore, Inc.

CEO Interview: Dr. Rick Shen of eMemory

CEO Interview: Kush Gulati of Omni Design Technologies


Verification Management the Synopsys Way

Verification Management the Synopsys Way
by Bernard Murphy on 05-06-2021 at 6:00 am

Verification management min

Remember the days when verification meant running a simulator with directed tests? (Back then we just called them tests.) Then came static and formal verification, simulation running in farms, emulation and FPGA prototyping. We now have UVM, constrained random testing and many different test objectives (functional, power, DFT, safety, security, cache coherence). Add giant designs now needing hierarchies of test suites, and giant regressions to ensure backward compatibility, compliance and coverage while aiming to optimize use of compute farms and clouds. It’s all become a bit more complicated than it used to be. To achieve the productivity and efficiency gains needed to keep up, automating verification management of this complex and diverse set of objectives becomes essential.

Comprehensive verification management

In a recent recorded video Kirankumar Karanam (AE Mgr Synopsys Verification Group) walks through the Synopsys VC Execution Manager (ExecMan) answer to this need. The ExecMan solution has five primary goals:

  • Provide a systematic path from testplan to execution, debug, coverage and trend analysis
  • Optimize regression turn-around times
  • Minimize debug turn-around times
  • Optimize time to closure
  • Utilize the grid as effectively as possible

The planning phase always intrigues me, linking a design plan to a test plan and subsequently through to analysis and debug. In a short overview there wasn’t time to go into more detail on this topic. I could see this being very useful in establishing traceability between specs and testing.

Optimizing regression turnaround-time and debug productivity

One important consideration in optimizing regression throughput is simply load-balancing: packing jobs in such a way that total turn-around time per regression pass is minimized. The manager helps optimize this balancing. It also apparently does some level of redundant-test identification and elimination, using coverage analytics. There’s also a note in the slides on VCS engine performance enhancement in this release – I believe VCS 2020.12.
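Load-balancing a regression this way is essentially a scheduling problem. As a toy illustration (not Synopsys’s actual algorithm, and with made-up job runtimes), a classic greedy longest-processing-time-first heuristic looks like this:

```python
import heapq

def pack_jobs(runtimes, num_hosts):
    """Greedy LPT scheduling: assign each job, longest first, to the
    least-loaded host. Returns the estimated total regression
    turn-around time (makespan) across all hosts."""
    hosts = [0.0] * num_hosts          # current load per host
    heapq.heapify(hosts)
    for t in sorted(runtimes, reverse=True):
        load = heapq.heappop(hosts)    # pick the least-loaded host
        heapq.heappush(hosts, load + t)
    return max(hosts)

# Hypothetical per-test runtimes in minutes, spread over 3 hosts.
print(pack_jobs([50, 40, 30, 20, 20, 10], 3))  # 60.0, vs 170 run serially
```

The real tool also folds in historical runtime data and coverage analytics, but the core trade-off, minimizing the longest-running host, is the same.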

To optimize debug productivity, the manager provides help in several ways. First it automatically sets up debug runs to run in parallel with ongoing regression runs. You can supply debug hooks up-front to drive such runs. There’s also mention in the slides of ML-based failure triage and debug assistant(s), though not elaborated in the talk. These are topics I cover from time to time. Could be very helpful.

Optimizing closure turn times and grid utilization

Here there’s more focus on test grading by coverage, to filter out tests which don’t contribute significantly. Synopsys have also just introduced a feature called Intelligent Coverage Optimization (ICO), using ML to bias constraints for randomization, again to minimize low-value sims. They claim a 5X reduction in turn-around time using this technique for stable CR regressions.
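Filtering out non-contributing tests can be pictured as a greedy set-cover pass over coverage data. A minimal sketch with hypothetical test names and coverage points, not the actual ICO implementation:

```python
def grade_tests(coverage):
    """Greedily keep tests that add new coverage points; drop the rest.

    coverage: dict mapping test name -> set of coverage points it hits.
    Returns the list of tests worth re-running, in selection order."""
    covered, keep = set(), []
    # Consider the highest-coverage tests first.
    for test, points in sorted(coverage.items(), key=lambda kv: -len(kv[1])):
        new = points - covered
        if new:                        # this test contributes something new
            keep.append(test)
            covered |= new
    return keep

tests = {
    "t_basic":  {1, 2, 3},
    "t_corner": {3, 4},
    "t_dup":    {1, 2},                # fully subsumed by t_basic
}
print(grade_tests(tests))  # ['t_basic', 't_corner']
```

Dropping `t_dup` from the regression loses no coverage, which is exactly the kind of low-value sim the grading step is meant to eliminate.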

Finally, on this general optimization theme, the manager optimizes for grid efficiency, looking at the best way to assign tasks to specific grid hosts. The manager does this by analyzing environment and historical data.

More goodies

ExecMan adds further automation for results binning, re-run and debug through Verdi. It further supports coverage analysis through test grading and plan grading tools and can link with bugs tracked in Redmine issue-tracking.

Kirankumar wraps up by describing a use-case they developed with a memory customer, based in this instance on VC SpyGlass regressions. An interesting point here is that this customer uses Jenkins for regression management, requiring that ExecMan work with that flow. I don’t know how far that customer takes their use of Jenkins, but it’s encouraging to see tools from the agile world appearing in hardware regression flows.

You can watch the recorded video HERE.

Also Read:

Synopsys Debuts Major New Analog Simulation Capabilities

Accelerating Cache Coherence Verification

Addressing SoC Test Implementation Time and Costs


Spot-On Dead Reckoning for Indoor Autonomous Robots

Spot-On Dead Reckoning for Indoor Autonomous Robots
by Kalar Rajendiran on 05-05-2021 at 10:00 am

Sensors Characteristics

One meaning of the word “reckoning” says it is the action or process of calculating or estimating something. But dead reckoning? What does that mean? Believe it or not, we have all deployed dead reckoning with varying degrees of success on different occasions. As an example, consider driving on a multi-lane winding highway when direct sunlight hits our eyes. Although we lose visibility momentarily, we still navigate our vehicle without hitting the median barrier or another vehicle. Of course, if we had been distracted and intermittently ignoring visual cues of the surroundings, the result may have been different. As per Wikipedia: “In navigation, dead reckoning is the process of calculating current position of some moving object by using a previously determined position, or fix, and then incorporating estimations of speed, heading direction, and course over elapsed time.”
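The definition above reduces to a simple position update. A minimal numerical sketch, assuming constant speed and heading over each time step (a deliberately idealized model; real systems integrate noisy estimates):

```python
import math

def dead_reckon(x, y, speed, heading_rad, dt):
    """Advance a known fix (x, y) by estimated speed and heading over dt."""
    x += speed * math.cos(heading_rad) * dt
    y += speed * math.sin(heading_rad) * dt
    return x, y

# Start at the origin and travel 1 m/s due east for 3 one-second steps.
pos = (0.0, 0.0)
for _ in range(3):
    pos = dead_reckon(*pos, speed=1.0, heading_rad=0.0, dt=1.0)
print(pos)  # (3.0, 0.0)
```

Because each step builds on the last estimate rather than a fresh fix, any error in speed or heading compounds over time, which is precisely the trajectory-error problem the article discusses next.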

Prior to modern day navigation technologies, the dead reckoning technique was used for navigation at sea. Can this technique still be useful? The answer is yes, as we saw with the highway example. How about in the technology world? How useful can this technique be?

Last month, CEVA unveiled MotionEngine™ Scout, a highly accurate dead reckoning software solution for navigating Indoor Autonomous Robots. And on April 27th, they hosted a webinar titled “Spot-On Dead Reckoning for Indoor Autonomous Robots” to provide deeper insights into that solution. The main presenters were Doug Carlson, Senior Algorithms Engineer, Sensor Fusion Business Unit of CEVA and Charles Chong, Director of Strategic Marketing, PixArt Imaging. Even with multiple sensors feeding position, orientation and speed data to the navigation system, trajectory error can start building up as sensors’ data could be momentarily interrupted or corrupted. Doug and Charles explain how CEVA’s solution helps reduce the trajectory error by a factor of up to 5x in challenging surface scenarios.

The following are some excerpts based on what I gathered by listening to the webinar.

MotionEngine Scout avoids expensive camera and LiDAR technology-based sensors. Instead, it uses optical flow (OF) sensors. Figure below shows the three different types of sensors that the solution uses, how the sensors are used and what type of data they provide.

For optical flow sensing, CEVA’s solution uses PixArt’s optical track sensor, part number PAA5101. The PAA5101 is a dual-light LASER/LED hybrid optical technology implementation. This approach yields the best results over a wide range of surfaces: LED performs better on carpets and LASER works better on hard surfaces. Nonetheless, all three types of sensors can be severely impacted by the environment and thus introduce errors in measurement data. That directly impacts dead reckoning calculations. Refer to the Figure below for details on obstacles to accurate dead reckoning performance.

CEVA’s solution fuses measurements from these three sensors to achieve significantly better accuracy and robustness. Sensor fusion is the process of combining sensory data from multiple types of sensing sources in a way that produces a more accurate result than is possible with just the individual sensors’ data. MotionEngine Scout leverages 15+ years of CEVA R&D in sensor calibration and fusion. The solution is able to minimize absolute error by a factor of 5-10x over relying on just wheel encoder or optical flow sensor data. Refer to Figure below.
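The intuition behind fusing data from several sensors can be illustrated with a simple inverse-variance weighted average. This is a toy sketch with made-up noise figures, not CEVA’s MotionEngine algorithm, which involves far more sophisticated calibration and fusion:

```python
def fuse(estimates):
    """Inverse-variance weighted average of independent measurements.

    estimates: list of (value, variance) pairs from different sensors.
    Returns (fused value, fused variance); the fused variance is never
    larger than the best individual sensor's variance."""
    weights = [1.0 / var for _, var in estimates]
    value = sum(w * v for (v, _), w in zip(estimates, weights)) / sum(weights)
    variance = 1.0 / sum(weights)
    return value, variance

# Hypothetical displacement estimates (meters) for one time step:
# a wheel encoder slipping on carpet vs. a cleaner optical-flow reading.
fused, var = fuse([(1.00, 0.04), (0.90, 0.01)])
print(round(fused, 3), round(var, 4))  # 0.92 0.008
```

The fused estimate leans toward the lower-noise sensor and carries less uncertainty than either input, which is the essential reason fusion beats relying on any single sensor.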

 

MotionEngine Scout is the software package being released to address the indoor autonomous robot market. It can support residential, commercial and industrial settings. Evaluation hardware will become available to customers in May/June 2021. The hardware will be in the form of a single PCB module, simple to integrate with a customer’s robot platform.

As a backgrounder, MotionEngine™ is CEVA’s core sensor processing software system. More than 200 million products leveraging MotionEngine system have been shipped by leading consumer electronics companies into various markets. Check here for a list of MotionEngine based software packages supporting different market segments.

For all the details from the webinar, I recommend you register and listen to it in its entirety.

If you are developing indoor autonomous robots, you may want to have deeper discussions with CEVA. Their software package may help you address the challenging pricing requirements of your market.

Also Read:

IP and Software Speeds up TWS Earbud SoC Development

Expanding Role of Sensors Drives Sensor Fusion

Sensor Fusion Brings Earbuds into the Modern Age