
Circuit Simulation Challenges to Design the Xilinx Versal ACAP
by Daniel Payne on 06-24-2021 at 10:00 am


One of the more memorable acronyms I learned this past year is ACAP from Xilinx, which stands for Adaptive Compute Acceleration Platform. At the recent CadenceLIVE event, I had the pleasure of watching Pei Yao, a Xilinx senior staff CAD engineer, talk about the challenges of getting all the analog and mixed-signal parts of their SoC to meet PPA specifications. Here’s a block diagram to show all of the IP included in the Xilinx Versal family of chips:

Source: Xilinx

The scalar engines are popular Arm-based cores, the adaptable engines are what we used to call a classic FPGA with memories, and the intelligent engines are customized for both AI and DSP functions. All of these high-level IP blocks communicate with a Network-on-Chip (NoC), and then the IP for IO comes in a dozen varieties.

7nm Design Challenges

Yao went through a list of design challenges that came with the 7nm process used for the Versal chips:

  • Process variation effects increased
  • RC interconnect delays increased
  • More thermal and reliability concerns
  • Sheer design complexity
  • Increased number of IP blocks
  • Clock rate increased
  • Noise and coupling issues

These challenges drove up circuit simulation run times, capacity requirements and CPU demands for completing transistor-level analysis. The team wanted to do both full-chip circuit simulations and EM-IR analysis.

Yao’s group has long used Cadence’s Spectre Simulation Platform products, such as the Spectre Accelerated Parallel Simulator (APS). For this 7nm project, they started using the newer Spectre X Simulator.

Spectre X

The Spectre X Simulator was announced in 2019, and the attraction for Xilinx in using it can be summarized in a few metrics:

  • Scales to use hundreds of cores
  • More speed and capacity than Spectre APS

In general, Yao reported that the Spectre X Simulator could complete runs up to 10X faster, on design netlists up to 5X larger than the Spectre APS tool could handle. The big question with a new simulator is always: how accurate is it compared to my reference?

Benchmark results showed that the Spectre X numbers were quite accurate, within 1% of their reference results. On capacity, they were able to run circuit simulations with 20 million nodes in the Spectre X Simulator. Circuit designers can choose between five different accuracy modes, depending on the type of circuit being simulated: Cx, Ax, Mx, Lx and Vx.

Source: Cadence
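That kind of accuracy check against a golden reference is easy to automate. As an illustration (a minimal sketch, not a Cadence flow), the Python snippet below compares a transient waveform from a new simulator against a reference waveform, with both exported as CSV; the filenames and the time/voltage column layout are assumptions for the example:

    # Hypothetical sketch: quantify agreement between a new simulator's
    # transient waveform and a golden reference. Filenames and column
    # layout (time, voltage) are assumptions, not Cadence output formats.
    import numpy as np

    ref = np.loadtxt("reference_sim.csv", delimiter=",")   # columns: time, v(out)
    new = np.loadtxt("candidate_sim.csv", delimiter=",")

    # Resample the candidate onto the reference time points, since the two
    # simulators choose their own adaptive time steps.
    v_new = np.interp(ref[:, 0], new[:, 0], new[:, 1])
    v_ref = ref[:, 1]

    # Relative error normalized to the reference signal's full swing,
    # which avoids blow-ups where v_ref crosses zero.
    swing = v_ref.max() - v_ref.min()
    max_err_pct = 100.0 * np.max(np.abs(v_new - v_ref)) / swing

    print(f"max deviation: {max_err_pct:.3f}% of full swing")
    assert max_err_pct < 1.0, "outside the 1% accuracy target"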

Accuracy vs Speed

For functional verification needs, the engineers used the Vx mode, which offered the highest capacity while being slightly less accurate. At the other extreme, for sensitive work like jitter measurements or RF analysis, the Ax mode was used.

Case Studies

The first case study was a CLK network circuit, where the input clock is redistributed to all I/O blocks. The Spectre X Simulator had a run time of just 3 hours, 48 minutes, while using 188GB of RAM.

  • 107M nodes
  • 58M capacitors
  • 146M resistors
  • 10M active transistors
  • 3X faster than a third-party FastSPICE tool
  • 12X faster than Spectre APS

A second case study showed a wide-band transformer-based VCO. The Spectre X Simulator had a run time of 2 hours, 39 minutes, while using 33.6GB of RAM.

  • 79K nodes
  • 90K capacitors
  • 1.5M resistors
  • 16K transistors
  • 4X faster than Spectre APS (Mx mode)

As an added comparison point, the Ax mode was also run, and it was 2.4X faster than Spectre APS with under a 0.5% difference in accuracy.

Summary

Designing circuits for the 7nm node is a tough job, and the team at Xilinx was quite successful in bringing its new ACAP family of chips to market. The extracted netlists run through a circuit simulator only get larger with each smaller process node, so getting accurate timing, power and EM-IR results requires greater simulation capacity and scaling to support faster run times.

Yao’s engineering group at Xilinx was able to use the Spectre X Simulator to meet all of these new challenges, and they even used the Spectre X-RF Option for harmonic balance noise analysis.

Related Blogs

Also Read

EDA Design and Amazon Web Services (AWS)

Connecting System Design to the Enterprise

Keynote from Google at CadenceLIVE Americas 2021


Silicon Catalyst and Cornell University Are Expanding Opportunities for Startups Like Geegah
by Mike Gianfagna on 06-24-2021 at 6:00 am


SemiWiki has covered many aspects of Silicon Catalyst, from their business model to notable industry events and profiles of promising startups. You can get some perspective on the breadth and depth of Silicon Catalyst here. In this post, I’ll explore an aspect of the broader collaboration the organization is engaging in. It is well-known that Silicon Catalyst maintains a substantial network of product and service providers as well as a large network of advisors. All this is intended to help promising young semiconductor-based startups take their idea to the next level. There is another aspect of the innovation pipeline, however: the journey from a great idea to proof that it’s possible to implement. Through added collaboration, Silicon Catalyst is addressing this phase as well. Their work is having measurable impact for promising young companies. Read on to see how Silicon Catalyst and Cornell University are expanding opportunities for startups like Geegah.

The Next Big Thing begins with an idea – perhaps not even an idea but a dream. I am a firm believer that the dreamers among us are the ones who will change the world. As an idea progresses to a commercial implementation, there are many hurdles to cross. For a semiconductor startup, a lot of these hurdles have to do with access to technology, services, infrastructure, and design tools. These are all areas where Silicon Catalyst brings a lot to the party.

Let’s get back to that dream of an idea. If someone is dreaming of a new application for semiconductor technology, the first step needs to be a reality check. Can the idea be implemented with current materials and fabrication techniques? Or perhaps something over the next horizon will be required. Answers to these questions often require fundamental research, but ultimately the new idea needs to address real world problems to build a viable business. This is an expanding area of Silicon Catalyst’s ecosystem, working with university partners to find the next great innovation for the semiconductor industry. More on this in a moment.

Expanded Collaboration

I had the opportunity to speak with several folks that are part of the expanded Silicon Catalyst ecosystem. Some represent the university research point of view and others the licensing of that research. Still others ensure the many moving parts of these relationships continue to work smoothly. And of course, there’s the growing list of startups who are the primary beneficiaries of all this work.

Laura Swan

Collaboration between Silicon Catalyst and universities isn’t new. There is an ongoing program that connects universities with the Silicon Catalyst ecosystem, and you can learn about it here. One of the folks I spoke with is Laura Swan, who manages the university program at Silicon Catalyst. One of the key benefits of this program is to connect Silicon Catalyst’s large advisor network with research work at partner universities. Universities must focus on the viability of their research from a commercial standpoint, and the Silicon Catalyst advisor network is full of folks who can help with market discovery and validation of the innovations. This network can help search for early adopters and build a foundation for ultimate business success. This is one of many win/win scenarios that are part of this story.

Cornell University’s Praxis Center for Venture Development

Cornell University is a member of the Silicon Catalyst University Program. Recall I mentioned that semiconductor startups often require fundamental research to establish the efficacy of an idea. This kind of research requires materials and physics expertise as well as the environment and equipment to experiment with new materials to see what happens when you build it. Cornell brings a lot to the table here – they operate a semiconductor research fab, the Cornell NanoScale Science and Technology Facility, or CNF.

Robert Scharf

The organization at Cornell that has developed a partnership with Silicon Catalyst is the Cornell Praxis Center for Venture Development. This is Cornell’s on-campus incubator for engineering, digital and physical science startups. The program is run by Robert Scharf and Bob is one of the folks I got a chance to speak with. One of the first things Bob pointed out was the proximity of Cornell’s fab facility – it’s in the same building as the Praxis Center, so access to equipment and know-how couldn’t be easier. Bob described a process whereby startup companies are evaluated for admission to the Praxis Center. This in many ways is similar to what Silicon Catalyst does, as part of their comprehensive applicant screening process.

Bob explained that entrants to Praxis can be very early in the maturation process – one click past “will it work?” if you will. Early results and fundamental research are focus areas for Cornell, and many other universities as well. Bob went on to explain that, as startups mature, they can physically grow to a size that is hard to accommodate on campus at the Praxis Center. This is where Silicon Catalyst has formed a seamless fit for the startup as they continue their journey. The graphic at the top of this post illustrates this process.

Alice Li

Before I discuss a promising new startup that is benefiting from all this collaboration, I’ll finish the picture for Praxis. To do this I spoke with Alice Li, the executive director of Cornell’s Center for Technology Licensing (CTL). As you will see if you visit its website, CTL supports inventors, industry, entrepreneurs and academia. Regarding entrepreneurs, their stated goal is this:

We work to create successful transitions from innovation to new enterprise

Licensing technology developed at Cornell turns out to be a two-way street. Certainly, startups benefit from access to cutting-edge research to create the foundation of a new enterprise.

Cornell also benefits from the “grounding” that occurs when one attempts to apply fundamental research in a commercial setting. Alice explained that this process provides an important reality check for advanced research. After all, the goal of this work is to impact the world in a positive way and understanding what is relevant to that goal is a very important ingredient. Yet another win/win was discovered during my discussions with Alice.

Geegah – a Promising Startup and Beneficiary

Amit Lal

To complete the story, I spoke with Amit Lal. He is the Robert M. Scharf 1977 Professor of Electrical and Computer Engineering at Cornell. He’s also the director of the SonicMEMS Laboratory there, which focuses on micromachining technologies for building ultrasonic transducers and their applications.

Professor Lal made an important breakthrough. He and his students came up with a way of post-processing a CMOS layer with piezoelectric films to create ultrasonic waves to deliver high-resolution, precision imaging. The resultant small, gigahertz-frequency waves have many potential applications—from chip security to acoustic storage of computer memory, ultrasonic imaging, and ultrasonic analog computing.

Amit and his student Justin Kuo have created a new company, Geegah, to commercialize the technology. There are many potential applications. We discussed a couple. First is chip security. Consider the reverse engineering liability associated with a chip that has metal interconnect. Now consider the same device that implements on-chip communication with ultrasonic waves. There are no signal paths to observe (or copy), making chip copying difficult, if not impossible.

The imaging capability has significant applications as well. One that caught my attention has to do with agriculture. It turns out there are very small worms, called nematodes, that eat plant roots. Sensing their presence so they can be controlled is virtually impossible with today’s technology – it’s difficult to “see” inside soil. The sensors being developed by Geegah can do this quite accurately, however. The implications for the worldwide food supply and agriscience markets are significant and can make farmers’ lives more predictable. Similar to the problem of not being able to see nematodes in soil is the problem of not being able to see viruses in one’s breath. Geegah is now extending the technology to enable imaging of viruses. This capability is needed not only for COVID-like viruses, but for many other body infections as well.

Geegah was born out of the fundamental research at Cornell under the guidance of Professor Lal. The company is now also a member of the Silicon Catalyst incubator. The combined resources of both Praxis and Silicon Catalyst should have a significant and positive impact on the trajectory of this promising new startup. Geegah has access to the Cornell cleanroom to extend the technology to commercial levels, while it also has access to an extended network of silicon commercialization experts via Silicon Catalyst.

In conclusion, fundamental university-based research continues to be a valuable resource to drive the next generation of semiconductor solutions to benefit our industry and ultimately our lives. The Cornell Praxis and CTL collaboration with Silicon Catalyst and Geegah provides a great example of this value. Laura Swan and her team are looking to further expand university collaboration and would welcome contact with other academic institutions and researchers to learn more. Post docs in search of a path to commercialization should consider applying to the Silicon Catalyst Incubator, as the deadline for the next application review cycle is July 2, 2021.

So, there’s the summary of how Silicon Catalyst and Cornell University are expanding opportunities for startups like Geegah. Another win/win for each of these organizations and potentially a big win for our industry.

Also Read:

Silicon Catalyst is Bringing Its Unique Startup Platform to the UK

Demystifying Angel Investing

Silicon Catalyst and mmTron are Helping to Make mmWave 5G a Reality


Semiconductor CapEx strong in 2021
by Bill Jewell on 06-23-2021 at 10:00 am


Semiconductor manufacturers are expanding capital spending in 2021 and beyond to help alleviate shortages. In addition, many governments around the world are proposing funding to support semiconductor manufacturing in their countries.

The United States Senate this month approved a bill which includes $52 billion to fund semiconductor research, design, and manufacturing. The bill has support in the U.S. House and from President Biden.

The Japan Ministry of Economy, Trade and Industry earlier this month announced a “national project” to support semiconductor manufacturing in Japan.

South Korea announced in May a plan to spend $450 billion over the next ten years on non-memory semiconductor manufacturing paid for by private business and government tax credits.

The European Union in May announced it is ready to commit “significant” funds to expand semiconductor manufacturing in Europe.

These government initiatives will help support investment by semiconductor manufacturers. SEMI’s latest fab forecast predicts the industry will break ground on 19 new high-volume semiconductor fabs in 2021 and 10 in 2022. Equipment spending on these fabs should exceed $140 billion. China and Taiwan will each account for 8 new fabs, with 6 in the Americas, 3 in Europe and the Mideast and 2 each in Japan and South Korea.

Semiconductor industry capital expenditures (CapEx) totaled $113 billion in 2020, according to IC Insights. Projections for 2021 growth range from 16% to 23%.

Three companies accounted for over 50% of semiconductor capital spending in 2020. Samsung, the largest spender in 2020 at $27.9 billion, is expected to keep spending flat in 2021. TSMC will have the largest increase, adding $12.8 billion from 2020 to reach $30 billion in 2021, a 74% increase. TSMC will account for over 60% of the total industry spending increase of $20.4 billion. Intel has stated it will increase spending from $14.3 billion in 2020 to $19.5 billion in 2021, up 37%. The 2021 projections were mostly made in April, after first-quarter earnings releases. Many of these numbers will likely be revised upward over the course of 2021.

The semiconductor industry has traditionally experienced boom-bust cycles. Large investments are made to expand capacity during high demand periods. When demand growth slows or declines, over-capacity leads to declining revenue. This trend is illustrated in the graph below. Annual change in semiconductor capital expenditures is depicted by the green bars on the left axis scale. Annual change in the semiconductor market is shown by the blue line on the right axis scale. The red line labeled “CapEx Danger Line” indicates where an increase in CapEx over 40% leads to trouble for the semiconductor market.

Large increases in semiconductor capital spending are followed in one to two years by a decline (or significant growth deceleration) in the semiconductor market. When the semiconductor market grew 46% in 1984, CapEx increased 106%. This was followed by a 17% decline in the semiconductor market in 1985. In 1988 the semiconductor market grew 38% and CapEx grew 57%. Following this, the semiconductor market decelerated by 30 points to 8% growth in 1989. The next big growth period was 1993 to 1995, peaking in 1995 at 42% market growth and 75% CapEx growth. The next year the market declined 9%. An 8% market decline in 1998 was due to the Asian financial crisis.

The semiconductor market expanded by 37% in 2000 at the peak of the internet boom. This was accompanied by a 77% increase in CapEx. In 2001, the market had its largest decline in history at 32%. In 2004 a 28% market increase and 52% CapEx increase was followed by a 21-point deceleration to 7% growth in 2005. Semiconductor market declines in 2008 and 2009 were driven by the global financial crisis. Strong growth returned in 2010 with 32% market growth and 107% CapEx growth. The market decelerated by over 30 points in 2011 to almost zero growth followed by a 3% decline in 2012. In 2017 the market increased 22% and Capex increased 41%. 2017 growth was relatively modest compared to prior peak growth rates. However, two years later in 2019 the market declined by 12%.

There are numerous factors affecting the semiconductor market growth rate including the overall economy and demand for key electronics products. However, large increases in capacity have invariably led to overcapacity when demand slows. The overcapacity leads to semiconductor price declines, especially for commodity products such as memory. Inventories held by electronics manufacturers and distributors are cut. This overcapacity tends to occur following CapEx increases of over 40%. This is indicated by the red CapEx danger line in the graph.
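To make the rule of thumb concrete, here is a small Python sketch that replays the growth rates quoted above and flags the years that crossed the 40% danger line. The figures are only the ones cited in this article, not a complete historical series:

    # Illustrative sketch of the "CapEx danger line": flag any year where
    # capital spending grew more than 40%, then look at the semiconductor
    # market one to two years later, using the rates quoted in this article.
    capex_growth = {1984: 106, 1988: 57, 1995: 75, 2000: 77, 2004: 52,
                    2010: 107, 2017: 41, 2021: 20}   # percent; 2021 = mid-range forecast
    market_growth = {1985: -17, 1989: 8, 1996: -9, 2001: -32, 2005: 7,
                     2011: 0, 2019: -12}             # percent

    DANGER_LINE = 40  # percent CapEx growth

    for year, growth in sorted(capex_growth.items()):
        if growth > DANGER_LINE:
            aftermath = [f"{y}: {market_growth[y]:+d}%"
                         for y in (year + 1, year + 2) if y in market_growth]
            print(f"{year}: CapEx +{growth}% crossed the danger line ->",
                  "; ".join(aftermath) or "outcome not listed above")
        else:
            print(f"{year}: CapEx +{growth}% stayed below the danger line")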

With forecasts of 2021 CapEx growth in the range of 16% to 23%, the industry is nowhere close to the “danger line” of over 40% growth. Even if CapEx growth accelerates in the second half of 2021, it is not likely to exceed 30%. TSMC is comfortable with a 74% CapEx increase since it has numerous foundry customers clamoring for more capacity. Two other foundries, UMC and GlobalFoundries, each plan to at least double CapEx in 2021 versus 2020. Foundry company SMIC of China plans to cut CapEx 25% in 2021, primarily due to trade issues. Memory companies such as Samsung are cautious on CapEx after seeing a 33% decline in the memory market two years ago in 2019.

While the current situation does not portend excessive semiconductor capacity in the near term, it bears watching in the next couple of years. It remains to be seen how much of the current semiconductor shortage is due to short-term disruptions from the pandemic and how much is due to increasing demand for electronic equipment and increasing semiconductor content.

Also Read:

Supply Issues Limit 2021 Semiconductor Growth

Automakers to Blame for Semiconductor Shortage

Electronics Back Strongly in 2021


Low Power Positioning for Logistics – Ultimate Tracking
by Bernard Murphy on 06-23-2021 at 6:00 am


When thinking about positioning, you probably first think of navigation: the device in your car that helps you get from where you are to where you want to be, or a handheld unit guiding a hike in the backcountry. But those applications are not where the big unit growth will come from. The biggest demand will be in asset tracking, expected to reach over 1B units by 2025. These applications demand very low power and worldwide communication. In short: low power positioning for logistics.

Logistics tracking

Think about that TV you ordered online. Manufactured maybe in Taiwan. Or a cooling system for a server, built by a contract manufacturer in Shenzhen. Each must be loaded into a shipping container along with many other packages and transported, maybe by road, maybe by rail, to a port. There the container will be loaded onto a large ship with perhaps 20,000 other containers. That voyage will take anywhere between 15 and 30 days, to a port in LA perhaps. Containers are offloaded in a yard, where they are transferred again to trains or trucks and transported to distribution centers. There, packages are regrouped for local distribution and shipped by vans or trucks to homes and businesses.

That’s a lot of lead time for impatient consumers. And businesses that must plan manufacturing and delivery to much shorter schedules. Manufacturers and retailers build inventory so they can respond quickly to their buyers. But then they need to monitor that supply chain very carefully. Especially to know where orders are in that complex web of transportation.

The old-school way of doing this was through NFC tags. Tags on pallets and tags on shipping containers. But that’s not a scalable way to track large shipments. NFC tags can only be read up close with hand scanners. No one is going to climb around a freight train scanning thousands of containers. What you need is an active but very low power positioning solution. Something that will work around the world, far from cities. That will identify containers stacked on a rail car or a ship, and pallets when outside containers.

GNSS and NB-IoT deliver

Active positioning requires GPS. Actually, something more than GPS, because China, Europe and Russia have each developed their own global positioning systems, supported by their own satellites. Collectively these are known as GNSS, Global Navigation Satellite Systems. So while GPS alone is good enough for global coverage today, logistics businesses are already planning and building for GNSS, to ensure we’ll still be able to rely on unified positioning around the world.

However, GPS/GNSS is power hungry because we expect instant updates when using it for navigation. Since packages and containers may be in transit for weeks, that power demand must be slimmed down dramatically. CEVA has created a solution called “snapshot positioning” which does just this: check position periodically, then go back to sleep. This compromises a little on accuracy (down to several meters) but is more than good enough to position a ship, train or truck. This modification requires some changes to the algorithms, which normally depend on more frequent updates.

You also need to transmit that position update, for which narrow-band IoT (NB-IoT) is an ideal solution, designed to transmit small amounts of data at very low energy consumption. Satellite-based NB-IoT communication solutions are already appearing, making communication feasible even in the middle of an ocean. Put these together and you have an active solution to track containers, pallets, even packages, with no need for hand scanning. You want a solution you can stick on millions, even billions, of packages and pallets, maybe even for one-time usage: a cost-effective embedded chip, self-contained apart from a battery and an antenna.
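To see why duty cycling matters so much, here is a back-of-the-envelope Python model of such a duty-cycled tracker: sleep most of the time, wake for a GNSS snapshot fix plus an NB-IoT uplink every reporting interval. Every energy and current number below is an illustrative assumption, not a CEVA Dragonfly specification:

    # Back-of-the-envelope battery-life model for a duty-cycled tracker.
    # All numbers are assumed placeholders for illustration only.
    def battery_life_days(report_interval_s,
                          fix_energy_mj=50.0,      # energy per snapshot fix (assumed)
                          tx_energy_mj=100.0,      # energy per NB-IoT report (assumed)
                          sleep_current_ua=5.0,    # deep-sleep current (assumed)
                          battery_mwh=3700.0,      # ~1000 mAh cell at 3.7 V
                          voltage=3.7):
        sleep_mw = sleep_current_ua * 1e-6 * voltage * 1e3              # mW
        active_mw = (fix_energy_mj + tx_energy_mj) / report_interval_s  # mJ/s = mW
        avg_mw = sleep_mw + active_mw
        return battery_mwh / avg_mw / 24.0                              # hours -> days

    for interval in (60, 600, 3600):   # report every minute, 10 minutes, hour
        print(f"report every {interval:>4d}s -> ~{battery_life_days(interval):.0f} days")

With these assumed numbers, reporting every minute drains the cell in roughly two months, while an hourly report stretches it to several years, which is why a tracker that only wakes for a snapshot fix and a short uplink can survive a weeks-long voyage.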

Dragonfly

CEVA has such a solution: Dragonfly, currently supporting GPS snapshot positioning and NB-IoT communication. It is implemented as software IP optimized to run at ultra-low power on the CEVA-BX1 DSP. CEVA can also add its Wi-Fi and BLE solutions for in-warehouse tracking, and its MotionEngine Scout for dead-reckoning. They plan to add other GNSS constellations soon. Check them out.

Also Read:

Spot-On Dead Reckoning for Indoor Autonomous Robots

IP and Software Speeds up TWS Earbud SoC Development

Expanding Role of Sensors Drives Sensor Fusion


Cadence Keynotes at CadenceLIVE Americas 2021
by Kalar Rajendiran on 06-22-2021 at 10:00 am


Last week, Cadence hosted its annual CadenceLIVE Americas conference. Four keynotes and eighty-three different talks on various topics were presented. The talks were delivered by Cadence, its customers and partners.

This blog is about the two keynotes delivered by CEO Lip-Bu Tan and President Dr. Anirudh Devgan. The guest keynotes from Partha Ranganathan, VP and Engineering Fellow from Google, and Dr. Yadunath Zambre, Chief Microelectronics Technology Officer (CMTO) of Air Force Research Laboratories (AFRL), are covered in two separate blogs.

While Lip-Bu’s talk provided the backdrop fueling the silicon renaissance, Anirudh’s presentation highlighted Cadence’s transformational product offerings the markets need now and into the future. If renaissance was the theme, allegro was the delivery pace of these keynotes. There were so many highlights that just one blog is not sufficient to cover everything that was presented. For example, the revolutionary Allegro X Design Platform deserves a standalone blog. This blog will include just a summary of that platform’s salient aspects.

Mr. Lip-Bu Tan’s Keynote

Lip-Bu opened his talk by calling out the five megatrends behind semiconductor growth. Then he proceeded to walk the audience through how these trends translate into secular growth drivers and ultimately how these secular growth drivers are behind Cadence’s product developments, investments, expansions, collaborations and partnerships.

The five megatrends are 5G communications, hyperscale computing, artificial intelligence/machine learning (AI/ML), autonomous vehicles and industrial IoT (IIoT). All these trends are connected by a common theme, which is data—data creation, processing, transmission, storage and analysis.

Data

Although most data nowadays is generated at the edge, only 20% is processed there. The rest is transmitted to cloud data centers for processing, analysis and storage, and the results are sent back to the edge to act on. This introduces several issues that can be grouped into three buckets: (1) data privacy and security concerns, (2) the cost of, and need for, excessive network bandwidth and (3) intolerable round-trip latency while edge applications wait for results to act on. As a result, the edge is being treated more and more as a continuum, with definitions of “near-edge” and “far-edge” capabilities depending on the end applications. By 2030, 80% of data is expected to be processed at the edge. Depending on where in the edge spectrum processing happens, different types of solutions are required.

Drivers of Innovation

In addition to the above verticals/applications-driven semiconductor market forces, there are a number of secular growth drivers that are demanding innovations. For example, the slowing down of Moore’s law, lower yields from large dies in very advanced process nodes and reticle size limitations are leading to the disaggregation of SoCs. Heterogeneous chiplets integration, acceleration of 2.5D- and 3D-IC and chiplet-to-chiplet signal integrity analysis requirements are driving the need for transformational products from Cadence.

Cadence is leveraging its computational software expertise, massively parallel architecture and innovative algorithms to bring transformative products to its customers.

Dr. Anirudh Devgan’s Keynote

Anirudh opened his talk by defining three aspects that are critical to achieving what Cadence calls Intelligent System Design. He called them the three spheres that guide Cadence’s next wave of innovations and technology offerings. The spheres are pervasive intelligence (data), systems innovation (systems and software) and design excellence (chips, IP and EDA).

Staying true to its EDA, chip and IP roots, Cadence continues to maintain advanced-node digital implementation leadership. It boasts 250+ 7nm/5nm tapeouts and the industry’s first 3nm production plan of record (POR). It has also collaborated with Arm in implementing Arm’s first architecture for high-performance computing (HPC) servers. The Arm Neoverse V1 processor can deliver mission-critical performance at 4GHz.

But today’s applications are demanding capabilities of the kind that in the past were used in aero/defense types of applications. Today, self-driving vehicles and advanced driver assistance systems (ADAS)-based vehicles need technologies similar to what an airplane uses. Many everyday applications are also moving toward leveraging multiphysics simulation and analysis.

Cadence has been investing heavily over the last few years in system-level innovations. It has expanded into the computational fluid dynamics (CFD) space through some key acquisitions—NUMECA and Pointwise. It is investing about 40% of its revenue into R&D. With about 5,000 people in R&D, a significant number of them are working on numerical analysis and computational software to connect the three spheres together. And the resulting solutions address the demands of a broad range of industries and market segments. Some of the market segments are consumer, hyperscale, mobile, communications, automotive, aero/defense, industrial and health.

Key Accomplishments Over Last Year

Design Excellence

The Palladium Z2 Enterprise Emulation and Protium X2 Enterprise Prototyping systems were launched. The Palladium Z2 emulation platform provides the best compile and debug times in the industry. The Protium X2 platform is FPGA-based to provide maximum flexibility to the customer in software bring-up and is the fastest system out there. AMD endorsed Cadence’s hardware platforms for helping with their top-of-the-line processor verification and software bring-up. NVIDIA talked about compiling and loading a multi-billion gate design emulation model in four hours compared to the 48 to 72 hours previous-generation platforms took.

The Spectre FX Simulator, a next-generation FastSPICE simulator, was released. This product is the result of a complete from-the-ground-up development to perform FastSPICE simulation on memory structures, a capability that was lacking in the venerable Spectre product handling analog, mixed-signal and RF simulations. The Spectre FX Simulator is 3X faster than competing solutions and integrates with the Spectre platform to deliver a comprehensive solution to customers.

JVCKENWOOD endorsed Spectre FX capabilities in helping them get to market faster by about 40%.

Systems Innovation

AWR was acquired from National Instruments for its leading RF design platform for 5G and integrated with the Virtuoso and Allegro environments. Pointwise and NUMECA were acquired for their CFD system simulation capabilities. The Clarity 3D Solver Cloud was launched to cost-effectively and securely scale finite element method simulation capacity by providing easy access to compute resources in the cloud. The Celsius Thermal Solver integrates with the Clarity, Pointwise and NUMECA tools to provide the most accurate thermal analysis results. Recently, Cadence launched Sigrity X for next-generation power and signal integrity. Sigrity is the most widely used simulation tool for 2.5D, and Sigrity X delivers up to 10X performance on large-scale system analysis in the cloud. It can also be used with Clarity and Cadence PCB tools.

Allegro X Design Platform

Allegro, the packaging and PCB platform, is a very widely used platform that has been in existence for a long time with incremental enhancements along the way. The recently launched Allegro X platform is revolutionary and the biggest innovation in board and package design in almost two decades. It delivers much greater performance and data handling capabilities with the option to run on CPUs or GPUs.

If we were to highlight just three salient aspects of the Allegro X platform, they are:

  • A unified cockpit for managing schematic capture, layout, SPICE simulation, signal and power integrity analysis, PLM integration, etc., compared to the previous-generation platform where users had to manually switch between different tools.
  • ML-based P&R automation and integration with Sigrity technology and Clarity 3D Solver to deliver a 4X productivity boost and a 10X layout turnaround gain.
  • A built-in data platform, called Allegro Pulse, for electronic system design data. Allegro Pulse can integrate with enterprise-level PLM systems for overall management.

Pervasive Intelligence and AI

Cadence believes its products deliver the best power, performance and area (PPA) benefits for its customers. Nonetheless, it continuously strives to improve. Through its AI/ML initiatives, it is working on ways to squeeze more efficiencies. It currently is in multiple beta engagements and is looking for additional beta partners.

Summary

Through its vision, focus and deep investments, Cadence has developed and is offering comprehensive and transformational products to its customer base. If you are involved in developing silicon, electronics hardware systems and software, you could benefit a lot by exploring and leveraging Cadence’s latest offerings.



Life in a Formal Verification Lane
by Shinavi Shah on 06-22-2021 at 6:00 am


This summer, I got the opportunity to work as a Formal Verification Intern with Axiomise for six weeks. I’m a keen designer and love working in design and architecture. Although I’ve not started my professional career yet, I did most of my projects as a designer during my undergraduate and postgraduate studies.

Having said that, I was always curious to know: how do we test that a design works? How is verification done in the industry? I had prior design experience building a RISC-V core using TL-Verilog in a workshop organized by Kunal Ghosh and Steve Hoover. Extending that work, I implemented the RISC-V core designed during the workshop on an FPGA, which is described in more detail over here. But verification – God, what would it be like? I was not sure how to best use the six-week window to learn something from scratch and then apply it to verifying real processors.

When I spoke with Ashish before the internship, I told him “I’ve no background in verification, never mind formal verification, and six weeks is quite a tight deadline to achieve verification targets you’re giving me, but I’m up for the challenge!”

Face-to-face with formal verification

Before starting my internship, I took the Formal Verification 101 course designed and delivered by Dr Ashish Darbari, Founder and CEO of Axiomise.

I was not sure if an online course could be effective in delivering knowledge at a pace I could absorb, so I was a little sceptical initially. I took a week to complete the course, and Ashish didn’t put any pressure on me to accelerate it.

The online training gave beginners such as myself a great introduction to how formal verification works and a thorough understanding of the why, what, and how of formal.

Before I give you an outline of what I learned in this course, I’d recommend that all digital designers and verification engineers use formal methods. Formal finds bugs and builds proofs of bug absence – something I don’t believe can be done without it.

The course covers the three pillars of Formal: Theorem Proving, Property Checking, and Equivalence Checking. Some of the highlights of this course are described here in case someone finds it useful.

Mathematics in action: I got to see how proofs can be built interactively using a theorem prover, where we are able to visualize the entire proof. By using HOL 4 – an open-source theorem prover, I was able to see mathematics in action, specifying designs and verifying them through proofs.

Model checking: How can we write SystemVerilog assertions and use them to perform exhaustive and unbounded proofs? Some of the reasons for using SVA are:

  1. Improve the design observability: Using assertions, we can focus our checks anywhere in the design which in turn helps in data path analysis.
  2. Improve the debugging capability: Assertions are capable of finding bugs faster, which reduces debug time.
  3. Improve the design documentation: Assertions are just simple statements that will state and verify the design behaviour.

ABC of Formal: Abstraction, Bug Hunting & absence, and Coverage: This module provided a great overview of how SVA-based formal verification can leverage abstraction and coverage to enhance bug hunting and build proofs of bug absence. I was fascinated by the fact that with formal we can actually build proofs that bugs don’t exist! The interactive quiz used in this module kept me focussed and provided opportunities to check my understanding.
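To give a flavor of what “proofs of bug absence” means in practice, here is a minimal Python analogue using the open-source z3 SMT solver (pip install z3-solver). It sketches the idea behind an SVA property inside a formal tool – it is not Axiomise’s code – by asserting that a design and its specification can never disagree and letting the solver consider every possible input:

    # A minimal Python analogue of an exhaustive formal proof, using the
    # open-source z3 SMT solver. An SVA property plays the same role in a
    # formal tool: assert that the "optimized" design and the specification
    # can never disagree, and the solver searches ALL 2^32 input values,
    # not a handful of tests.
    from z3 import BitVec, URem, Solver, unsat

    x = BitVec("x", 32)

    spec = URem(x, 8)        # specification: remainder modulo 8 (unsigned)
    design = x & 7           # "designer's" cheaper implementation

    s = Solver()
    s.add(spec != design)    # look for ANY counterexample

    if s.check() == unsat:
        print("proved: design == spec for all 2^32 inputs (bug absence)")
    else:
        print("bug found, counterexample:", s.model())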

Coverage: Finding bugs is one thing, establishing proofs of bug absence is another great reason for using formal, but how do we know we have not missed a bug? How do we know we are complete? This is where coverage comes in. The coverage module describes a flow that I used later in my practical work to expose blind spots in formal verification such as over-constraints and finding incomplete checkers.

RISC-V Formal Verification: Towards the end, the course takes a deep-dive into how formal verification can be done on RISC-V processors, focusing on methodology and coverage. I was able to see how cv32e40p and other RISC-V cores were verified using the automated formal app formalISA® from Axiomise. I was feeling excited to start my own work on WARPV processor verification, but I’ll be honest, I was also nervous if I could deliver in the time that I had.

WARPV formal verification using formalISA®

Using the skills and knowledge gained from the training, my task was to verify the WARP-V 6-stage pipelined core, written in TL-Verilog, using the formalISA app.

Along with this, I was also able to verify the 2-stage and 4-stage pipelined versions of the WARP-V core. As the WARP-V core is highly parameterizable, we could generate the design module easily using the WARP-V Configurator. And although I had inherited a significant code base for the formalISA app, in no time I was able to understand and modify the code to meet the requirements for WARP-V, i.e., to work for different pipeline stages.

First, I verified RV32I, the 32-bit base integer instruction set, for all cores, and found several new bugs. (WARP-V has no implementation of compressed instructions.) Initially, both Ashish and I were surprised, as these cores had been formally verified before using the riscv-formal testbench. We were not expecting to see any bugs; after all, I had just learnt in the course that formal verification provides guarantees of bug absence!

The root cause of many of these violations was that the designer’s interpretation of the RISC-V specification was incorrect – the designer assumed that the program counter always remained byte aligned. We arranged meetings with the designer to confirm that these issues were legitimate design issues. Steve Hoover, the designer of the WARP-V core, said that at the time of the design he was not sure about certain choices, so he went ahead with whatever would make the riscv-formal testbench happy. Effectively, the riscv-formal testbench has the same bugs as the WARP-V core. A detailed account of all the bugs we have found so far is available on GitHub.

Ashish asked me if I wanted to build the property set for the M-extension, and I certainly wanted to give it a try. I successfully implemented the properties and, to our surprise, again found new bugs in the core. We contacted the designer again and learnt that the M-extension had never been exhaustively verified with riscv-formal, although some directed tests had been used to check that the M-extension instructions worked correctly. I was also able to perform exhaustive proofs for the areas where the core was working correctly.
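As one illustrative example of such an exhaustive proof (again sketched in z3, and not the actual formalISA property set), a useful RV32M fact is that MUL, which returns the low 32 bits of a product, gives the same answer whether its operands are treated as signed or unsigned, so one datapath can serve both:

    # Illustrative z3 sketch: the low 32 bits of a 32x32 multiply are the
    # same whether the operands are sign-extended or zero-extended, so a
    # single multiplier datapath serves RV32M's MUL for both views.
    from z3 import BitVec, Extract, SignExt, ZeroExt, prove

    a, b = BitVec("a", 32), BitVec("b", 32)

    signed_lo = Extract(31, 0, SignExt(32, a) * SignExt(32, b))
    unsigned_lo = Extract(31, 0, ZeroExt(32, a) * ZeroExt(32, b))

    prove(signed_lo == unsigned_lo)   # prints "proved" - holds for all inputs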

Six weeks of non-stop fun and learning

Talking about my learning experience in this internship, it was easy for me to grasp the formalISA flow, as I have a designer background. The tool flow is automatic: I only had to provide the design files and a setup file that maps the testbench signals to the appropriate design signals. And believe me, writing assertions is as easy as writing a test case as a plain English statement.

For the first time, I tried pair programming, with Ashish, and it helped me a lot in accelerating my learning curve. For those who don’t know what pair programming is, it is an agile software development technique in which two programmers work together: one, the driver, writes the code, and the other, the observer, reviews each line of code as it is typed in. This helps in reducing mistakes and bugs.

Apart from this, I also learned about RISC-V architecture and micro-architecture concepts in more detail. Using formal gave me an opportunity to see up close how processors are built: how pipelined processors use micro-architectural optimizations such as forwarding and stalling to avoid hazards, and how a load-store architecture works. Specific to the RISC-V ISA, I got to know more about how the ‘I’ and ‘M’ extensions are built, how to handle exceptions, and when to raise a trap/exception signal according to the RISC-V ISA definition.

Designer’s perspective on formal methods

At last, I would like to pen down a designer’s perspective on formal. Prefer formal to simulation alone: dynamic simulation is a random approach where we can only provide bounded test cases, while formal verification is a systematic approach in which we can use assertions to perform exhaustive and unbounded proofs. I would strongly recommend that all VLSI engineers at least know what formal is, as it provides value to designers and architects along with verification engineers.

Acknowledgement

I would like to thank Dr Ashish Darbari for providing me with a great opportunity. It was a great time working with him and learning formal verification from a practicing expert!

Also read:

Why I made the world’s first on-demand formal verification course

Accelerating Exhaustive and Complete Verification of RISC-V Processors

CEO Interview: Dr. Ashish Darbari of Axiomise



EDA Design and Amazon Web Services (AWS)
by Daniel Payne on 06-21-2021 at 10:00 am


I first remember blogging about EDA in the cloud back in 2011, so what’s changed in the last 10 years, you may ask? In 2011, it was basically a handful of EDA point tools running batch mode in the cloud, and you were on your own to integrate those into a coherent flow, so expect help from the CAD and IT departments for sure. In 2021, the cloud providers are all designing their own custom chips to improve the speed and efficiency of running their service businesses. I attended the CadenceLIVE event last week to get an update on what Amazon Web Services (AWS) is offering, and even how they have designed their own custom chips.

David Pellerin of AWS was the presenter, and his background includes stints at EDA vendors, an IP startup and an FPGA accelerator card business, plus authoring five books. Internally, Amazon designs its own chips, ranging from the Amazon Echo smart speaker up to the AWS Graviton processor with 64-bit Arm Neoverse cores.

Why Cloud-based EDA

Good question. The answer is that big SoC designs require even bigger compute resources to scale EDA runs quickly, while offering a secure design environment that keeps IP thieves and spies locked out, reducing the risk of missing a schedule, all while keeping EDA costs within budget and giving teams a place to collaborate. Elastic Compute Cloud, or EC2, is the AWS name for their cloud servers.

For x86 servers they offer the EC2 M5zn instances using Intel Xeon processors, or you can choose Arm-based (Graviton) servers for running your EDA tools from Cadence. The AWS stack for running EDA tools has several components.

Custom Silicon at AWS

Engineers at Amazon have designed some custom silicon that fits between routers and the compute and storage servers, improving reliability and performance across EDA workloads, which are similar to other HPC workloads.

Three examples of custom silicon were presented, each developed in the AWS Cloud, for the AWS Cloud. The history of Amazon running EDA tools in the cloud spans the last 10 years and involves the Annapurna startup in Israel, which started out with on-premise EDA tool usage and then gradually moved into the cloud.

Their first cloud EDA tools were used for running functional simulation regressions; then, as tape-out time neared, the EDA tools were mostly Static Timing Analysis (STA) and parasitic extraction runs. Today, you can run the entire EDA tool flow in the cloud, instead of just point tools.
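As a hypothetical sketch of that regression-in-the-cloud pattern, the Python snippet below uses boto3 to fan a 500-test simulation regression out as an AWS Batch array job. The queue, job definition and container names are placeholders you would create yourself; this is not a Cadence- or AWS-provided flow:

    # Hypothetical sketch: submit a 500-test functional-simulation
    # regression as one AWS Batch array job. Queue and job-definition
    # names are placeholders, not real AWS or Cadence resources.
    import boto3

    batch = boto3.client("batch", region_name="us-west-2")

    resp = batch.submit_job(
        jobName="sim-regression-nightly",
        jobQueue="eda-spot-queue",            # placeholder queue name
        jobDefinition="sim-runner:3",         # placeholder job definition
        arrayProperties={"size": 500},        # one child job per test
        containerOverrides={
            "environment": [
                # each child reads AWS_BATCH_JOB_ARRAY_INDEX to pick its test
                {"name": "REGRESSION_LIST", "value": "s3://my-bucket/tests.f"},
            ]
        },
    )
    print("submitted:", resp["jobId"])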

Arm

The Physical Design Group at Arm started using the AWS Cloud in 2017 and was able to reduce their characterization run times with Liberate from months to just weeks. Using AWS Graviton2 processors brings significant cost savings versus other processors.

Migrating to the Cloud

Many EDA users prefer a hybrid approach, using an on-premise data center along with the AWS Cloud. In a hybrid cloud, IP and EDA vendors can be granted limited access to your secure design environment for technical support issues.

For high-performance storage options, you can choose from:

  • Block storage (Local NVMe, SSD, Amazon EBS)
  • File storage (Amazon EFS, FSx for Lustre)
  • Object storage (Amazon S3, Amazon S3 Glacier)

Semiconductor Supply Chain

Using the cloud can now enable the entire supply chain to be more secure while offering collaboration along the way, spanning EDA tools, design data, design and verification, IP libraries, SW and firmware, wafer fabrication, assembly and test. There’s also the concept of a collaboration chamber, so that, based on your needs, data can be securely shared among trusted parties:

  • EDA/IP users
  • Yield analytics
  • Foundry IP
  • Digital Twin
  • EDA Vendor
  • IP / Design services vendor
  • PCB design team
  • Foundry/OSAT

Summary

The past 10 years have brought some gradual and some dramatic changes in how EDA design can be performed in the cloud, collaboratively. We’re way beyond point tool offerings in the cloud, making AWS Cloud a viable platform for your complete EDA tool flow needs. Cadence has a close relationship with Amazon in offering its EDA tools on the AWS Cloud, so users should come up to speed very quickly.

Amazon IC engineering teams are also users of the AWS Cloud internally to get their chip design projects completed, so you know that it’s quite well-tested.

Related Blogs

Also Read

Connecting System Design to the Enterprise

Keynote from Google at CadenceLIVE Americas 2021

Cadence adds a new Fast SPICE Circuit Simulator


Die-to-Die Connections Crucial for SOCs built with Chiplets
by Tom Simon on 06-21-2021 at 6:00 am


If you subscribe to the notion that things move in circles, or concentrically, the move to die-to-die connectivity makes complete sense. Just as multi-chip modules (MCMs) were the right technology decades ago to improve power, area, performance and cost, the use of chiplets with die-to-die connections provides many advantages for today’s envelope-pushing designs. An article by Manuel Mota of Synopsys titled “How to Achieve High Bandwidth and Low Latency Die-to-Die Connectivity” gives a good overview and analysis of the reasons for using chiplets. The article also discusses IP that can be used to implement die-to-die connections.

Die-to-die connections

The traditional approach of implementing monolithic designs begins to break down as die sizes increase. Wafer-scale chips are staggeringly large now, with trillions of transistors. Building chips on the most advanced nodes usually requires moving IOs and other analog or RF blocks to the new process node. This can be time consuming and costly. Additionally, a single fabrication failure on a large die can scrap the entire chip, leading to yield issues.

Once the use of chiplets is examined as a solution to these potential problems, other benefits become apparent. The Synopsys article enumerates four major use cases for employing die-to-die connections. While the article is focused mainly on hyperscale data center needs, the use cases are applicable to other applications.

First off, chiplets allow various configurations of accelerators using a core set of CPU, AI, or GPU accelerator blocks with tightly coupled connections. As mentioned above, using smaller dies helps manage yield while extending Moore’s law by enabling the assembly of even larger compute engines from smaller chiplets. Die-to-die connections between chiplets let each individual functional element be fabricated on the optimal process node, which is a big help when it comes to RF, FPGAs and other applications that have unique functional elements. The final use case cited in the article is how large digital chip cores, which push toward the most advanced node, can leverage IOs designed on more conservative nodes for lower cost and improved reusability.

The motivations given in the article for using die-to-die connections are very compelling. The tougher part of the equation is finding the optimal die-to-die interface. Something as simple as on-chip buses or connections cannot be used. On the other hand, IO interfaces used for chip-to-chip connections would defeat the purpose by adding latency, area and power consumption. There needs to be a Goldilocks solution that balances all the factors to arrive at the optimum.

Today there is no industry standard for die-to-die interfaces, though Synopsys is working with others on developing one. Die-to-die interfaces need to offer error correction to ensure reliable links. They must also support high-bandwidth connections so that overall speeds are comparable to block-to-block connections on the same die. The PHY layer should be optimized for short-reach, low-loss connections. And, of course, they should be power efficient.
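As a behavioral illustration of that reliability requirement (a sketch of the general idea, not the Synopsys IP), the Python snippet below protects each flit with a CRC and replays it when corruption is detected, so the layers above the link never see a bit error from the short die-to-die channel:

    # Behavioral sketch of link-level reliability: protect each flit with
    # a CRC and replay it when the far side reports corruption.
    import random
    import zlib

    def send_flit(payload: bytes, channel_ber: float = 1e-6, max_retries: int = 8):
        for attempt in range(max_retries):
            frame = payload + zlib.crc32(payload).to_bytes(4, "big")
            # Model the channel: occasionally flip one bit of the frame.
            if random.random() < channel_ber * len(frame) * 8:
                corrupted = bytearray(frame)
                corrupted[random.randrange(len(frame))] ^= 1 << random.randrange(8)
                frame = bytes(corrupted)
            data, crc = frame[:-4], int.from_bytes(frame[-4:], "big")
            if zlib.crc32(data) == crc:
                return data, attempt       # receiver accepts, maybe after replays
            # else: receiver NAKs, transmitter replays from its buffer
        raise RuntimeError("link down: retry limit exceeded")

    payload, retries = send_flit(b"\x00" * 60)   # a 60-byte flit, for example
    print(f"delivered after {retries} replays")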

The Synopsys article concludes with a summary of their die-to-die IP offering, which includes a controller and a PHY. The DesignWare Die-to-Die Controller IP offers industry-leading low latency, with error recovery for high reliability. The controller supports AMBA CXS and AXI protocols and integrates with the Arm Neoverse Coherent Mesh Network. The DesignWare Die-to-Die PHY IP uses high-speed SerDes technology that runs up to 112Gbps for ultra- and extra-short-reach links. For high-density 2.5D-packaged SoCs, they offer a High-Bandwidth Interconnect (HBI) PHY that delivers 8 Gbps.

The article also touches on how their die-to-die IP can be easily integrated into designs with the Synopsys 3DIC Compiler. The move away from monolithic ICs for many applications will continue. Of course, we will also still see large wafer-scale designs and larger chips. Regardless, the advantages of die-to-die connections will lead to their increased use for the foreseeable future. The article provides good background and tangible solutions for those looking at employing die-to-die in upcoming designs. The article is available on the Synopsys website.

Also Read:

Mars Perseverance Rover Features First Zoom Lens in Deep Space

Verification Management the Synopsys Way

Synopsys Debuts Major New Analog Simulation Capabilities


Ten Lessons Learned from Andy Grove
by Betsy Corcoran on 06-20-2021 at 6:00 am


I met Andy Grove on a sunny day in New York City in 1987. He was dashing to press interviews for his just-off-the-presses management book, “One on One with Andy Grove.” I was a freshly badged member of the press working for IEEE Spectrum, a year or so out of college, still toting my college backpack. Little did I know that that would be the first of dozens of conversations I would have with Grove over more than two decades.

Grove was a self-made American original. He fled Hungary in 1956 when the Russians invaded, heading to the US with choppy English and bright ambitions. He reinvented himself along the way: He dropped the Hungarian “Andras Grof” for the more comfortable, “Andy Grove.” He spent a summer as a busboy at a resort in New Hampshire—and graduated at the top of his engineering class at City College in New York in three years. Impatient with the plodding pace of east coast companies, he gambled on California.

He was frequently an exasperating boss. “Occasionally we . . . suggest [to Grove] there may be an alternative to grabbing someone and slamming them over the head with a sledgehammer,” grumbled Craig Barrett (who followed Grove as CEO) in a 1996 interview. Others would call that an understatement. 

In the years since my first Grove encounter, I’ve chronicled how he battled critics who called Intel’s chips “flawed,” pumped up Intel’s market success by hiring the “Blue Men,” and was humbled by being named Time’s Man of the Year. After Grove retired from Intel, I’d visit him periodically for a brisk walk up and down the hills near his Los Altos home. 

When I started and became CEO of my own company, EdSurge, I discovered I appreciated many of Grove’s points afresh–even if I didn’t follow them all. I’m no fan of  his “constructive confrontation” approach. Even though Grove aimed to “attack the problem not the individual,” too frequently the “confrontation” could overwhelm the, ah, “constructive” parts. More recently, managers practice instead “radical candor.” I suspect that Grove, who was a perennial student of the latest thinking in management practices, might have appreciated both the feedback–and a fresh approach. 

Here are a few of the powerful lessons I learned from Andy Grove: 

It’s okay to be scared. When Grove joined Intel as its first official employee in 1968, he said he was “scared to death.” He was supposed to be director of engineering but because the team was so small, Grove became director of operations. First task: Get a post office box and sign up for catalogues of equipment the startup couldn’t afford. “I literally had nightmares.” 

Relish reality. After World War II, Grove and his family stayed in Hungary, only to see the corrosive effects of passive politeness. When he finally fled in late 1956, he headed to Vienna via train and had an epiphany: “After all the years of pretending to believe things that I didn’t, of acting the part of someone I wasn’t, maybe I would never have to pretend again.” 

Discipline, discipline, discipline. (Did I mention discipline?) Obsessed with details, Grove oversaw manufacturing, which became key to Intel’s success. The best chip design in the world would be worthless without the manufacturing discipline to churn out millions of copies. And any time a problem occurred, Grove drove his people to attack it like a virus until it was fixed. He demanded a lot of himself–and his team. Grove was infamous for creating a “late sheet” kept by security that anyone who showed up for work after 8:00 a.m. had to sign. Employees were terrified of the late sheet. Grove eventually admitted that he never looked at it—its mere existence prodded people to show up on time. 

When the going gets tough, suck it up. In November 1976, Grove wrote in his personal journal: “[D]issatisfied w/overall co. performance (hence: me!) … frequently depressed; thoughts of bailing out.” He didn’t.

Teach with humility. For years, Grove co-taught a class on strategy at Stanford University’s business school with long-time collaborator Robert Burgelman. He tested ideas with students. He respected students, coaxing friends and well-known entrepreneurs, including Steve Jobs, to support him by putting in guest appearances. It was neither boastful nor for show: He also demanded students rigorously critique Intel’s choices, including pivotal ones about what microprocessor architecture to pursue in the mid-1990s. (Engineers were in love with an elegant architecture called RISC; Grove eventually stuck with the more evolutionary approach, CISC.)

Learn with passion. Over the years, I have interviewed countless industry leaders and other mucky-mucks. Grove was unique in that he seized every conversation as an opportunity to learn, not just to lecture. I remember swinging by his Intel cubicle one day after recent interviews with other executives pioneering work in handheld devices and mentioning some of the emerging technologies that seemed promising. “Who was that? Why was it interesting?” he demanded, grabbing a notepad and scribbling furiously. It wasn’t just about competitors: He was boundlessly interested in learning about journalism, politics, social practices, recent books — you name it.

Did you screw up? Admit it. This one was hard for him. When Intel confronted a wave of unhappy customers because of a subtle flaw in the arithmetic functions of Intel’s Pentium processor, Grove refused to believe it mattered. He dragged his feet on apologizing to customers until the howling both inside and outside of the company was deafening. Finally, he agreed to take the chips back. “I was thick-headed. I don’t know how to say that differently,” Grove later said.

Science the shit out of your problems. Grove would have loved Matt Damon’s line in the “The Martian,” as he realizes he’s stuck on Mars without enough food or water: “I’m going to have to science the shit out of this.” When Grove was diagnosed with prostate cancer in 1995, he dove into research about treatment alternatives, plotting out data in detailed analyses. He chose the path the data suggested—a successful pick. A few years later, he realized he had Parkinson’s disease. Once again, he “science’d” up, immersing himself in the details of the condition and the research, experimenting with treatments and agitating for more and faster research on the neurology of Parkinson’s.

Be real. “Beware of hypocrisy in the boss! The worst situation is when the boss says one thing and does another.” That was Grove’s advice in his book, One on One. And he tried to live it. Intel was among the first companies to eschew the “corner office.” Grove had a cubicle, like just about all other Intel managers. He kept it real at home, too. Even when Grove and his wife became wealthy beyond their dreams, they continued to live in the house where they raised their daughters. Instead of doing movie cameos, the Groves could more typically be spotted around town waiting in line at the theater.

Love fiercely. For Grove and his family, Intel was like a third, sometimes unruly, child. He continued to actively follow Intel’s work years into his retirement. Even so, there was competition for his heart: his wife, Eva. When Grove wrote his personal memoir, Swimming Across, he offered scant details about his personal life once he arrived in America. Family and friends received a bonus section, though: Grove added a “long-lost” chapter about how he met and wooed Eva, whom he married at age 21. She remained the love of his life until his final breath.

Betsy Corcoran is a long-time journalist and entrepreneur, working at the intersection of learning and technology. Follow her on LinkedIn here

Also Read:

CEO Interview: Deepak Shankar of Mirabilis Design

CEO Interview: Prakash Murthy of Atonarp

CEO Interview: Toshio Nakama of S2C EDA


Podcast Episode 25: Silicon IP for Early-Stage Semiconductor Companies
by Daniel Nenni on 06-18-2021 at 10:00 am

Moderator:
Daniel Nenni, Semiwiki
Panelists:
Jothy Rosenberg, CEO Dover Microsystems
John Terry, CEO Espre Technologies
Fares Mubarak, CEO Spark Microsystems

General Theme: A discussion around the requirements and challenges of investigation and ultimate selection of IP for early-stage chip companies, discussing both the technical and business aspects of securing IP for your designs.

Topics Discussed:
What have been some of the biggest challenges for your startup?
And how did you overcome these challenges?
What have been the major challenges in development/production?
And how did you overcome them?

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.

This panel is 40 minutes in length but well worth it!