
CEO Interview: Issam Nofal of IROC Technologies

by Daniel Nenni on 05-24-2023 at 6:00 am


Issam Nofal is the CEO of IROC Technologies and has held various positions with the company for over 23 years, including Product Manager, Project Leader, and R&D Engineer. He has authored several papers on the test and reliability of integrated circuits and holds a PhD in Microelectronics from Grenoble INP.

What is IROC Technologies’ background?

IROC Technologies is a privately held EDA tools and radiation test service company founded in 2000 by Dr. Michael Nicolaidis, a CNRS (France) researcher who worked closely with the TIMA laboratory. Dr. Nicolaidis is well known for his contributions to the test and reliability domains of integrated circuits.

The main idea behind the creation of IROC was to develop solutions for functional reliability threats caused by soft errors in modern semiconductor systems. The radiation effects on microelectronic systems were well known in the harsh environment of space, but with the increased use of deep submicron technologies these effects have become a serious threat to the reliability of ground level applications. Solutions to deal with this new challenge are now mandatory to achieve high reliability in semiconductor designs.

Major system integrators like Cisco and big foundries, such as TSMC, became aware of the soft error phenomenon and the need for solutions to predict and evaluate the Soft Error Rates of components and systems. This, in turn, helped “evangelize” chip manufacturers and designers to take the soft error threat into account, either by mitigating a design’s propensity to soft errors or by selecting more resilient process technologies. Over the last 22 years, IROC has worked closely with foundries, top semiconductor companies, and government programs to offer solutions and services that help them achieve this objective.

What products/services does IROC offer?

IROC helps the entire semiconductor value chain to reduce soft errors in designs by offering EDA tool solutions to predict the SER early in the design cycle. We also offer expert design consulting from component through system characterization and testing.

IROC products are based on a deep understanding of soft error phenomena, starting from ionizing-particle effects at the transistor level and ending with the analysis of their effects on the functionality of the final system. They cover both cell-level soft error simulation using TFIT and the analysis and quantification of error propagation at the circuit or SoC level using SoCFIT.

In addition to the EDA solutions, we provide radiation test services for High Energy Neutron, Thermal Neutron, Heavy Ions, Protons, Co-60, and Alpha, at the best test facilities in the world such as LANL, TRIUMF, and ISIS. We also deliver support and design consulting for management and mitigation of Soft Error Rate in complex components and systems.

What makes IROC’s EDA tools and services unique?

We collaborate with major foundries such as TSMC, Samsung, and GlobalFoundries to build accurate models for TFIT. Thanks to this collaboration, TFIT models for mainstream process nodes are built in conjunction with the foundry using process information not readily available in standard PDKs. The TFIT model for a given process characterizes the process sensitivity to ionizing particles and can be used by TFIT to simulate any cell or custom design implemented in the target process. This unique foundry model enables TFIT to simulate the Soft Error Rate up to one hundred times (100X) faster than the best TCAD-based solutions.

SoCFIT analyzes the propagation of errors from the cell to the system level. It supports large designs with millions of flip-flops and memory blocks. The Soft Error Rate (SER) of the SoC can be calculated at high speed, and mitigation solutions can be proposed to reduce SER with minimal area overhead.
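For readers unfamiliar with FIT arithmetic, the kind of roll-up such an SER analysis performs can be sketched as follows. This is a generic illustration, not SoCFIT’s actual algorithm; the block names, FIT values, and derating factors are all invented.

```python
# Toy chip-level Soft Error Rate (SER) roll-up. FIT = failures in
# 10^9 device-hours. All values below are invented for illustration
# and are NOT real foundry or IROC data.

def system_ser(blocks):
    """Sum raw per-block FIT rates scaled by derating factors.

    Derating reflects that many raw upsets never become functional
    failures (e.g. the flip-flop was not holding live data, or the
    error is masked before reaching an output).
    """
    return sum(b["raw_fit"] * b["timing_derating"] * b["logic_derating"]
               for b in blocks)

blocks = [
    {"name": "sram_bank", "raw_fit": 1200.0, "timing_derating": 1.0, "logic_derating": 0.4},
    {"name": "ff_core",   "raw_fit": 300.0,  "timing_derating": 0.5, "logic_derating": 0.6},
]

print(f"Estimated system SER: {system_ser(blocks):.0f} FIT")
```

The practical point is that mitigation effort goes where the *derated* contribution is largest, which is exactly the kind of critical-block ranking described above.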

IROC’s Test Services team has more than 20 years of experience with shuttle test campaigns, component/system test, and alpha test/count. We customize our services to our customer’s needs, from complete solutions to partial requirements. We leverage our close relationships with international testing facilities to provide the best service to our customers.

IROC also provides consulting services to customers with complex system vulnerability analysis requirements. We use our expertise and the TFIT/SoCFIT tools to provide the total system error rate and to identify critical parts of the design requiring reliability improvements at the cell or SoC level.

Who is interested in your offerings?

Our customers are in the automotive, aerospace, healthcare, and HPC segments. Any semiconductor company with high reliability requirements can benefit from our unique foundry models, TFIT, SoCFIT, and our specialized consulting and testing services.

What are IROC’s upcoming plans?

We are continuously adding capabilities to our EDA solutions, while providing consistent high quality service offerings. We are adding new features to TFIT and SoCFIT to fit new market requirements. We will also continue to collaborate closely with foundries to add models for their latest technology process nodes.

We are working on the certification of SoCFIT according to the ISO 26262 standard and continue to investigate other safety standards that can benefit our customers concerned by functional safety.

We will soon announce our new website reflecting how our customers are using and benefiting from our unique radiation expertise and our EDA software solutions.

Visit our new website at https://www.iroctech.com/

How do customers engage with IROC?

Customers turn to us for soft error analysis and mitigation using TFIT or SoCFIT, and for radiation testing services. Email info@iroctech.com or visit https://www.iroctech.com/ to see how we can help you.

Also Read:

CEO Interview: Ravi Thummarukudy of Mobiveil

Developing the Lowest Power IoT Devices with Russell Mohn

CTO Interview: Dr. Zakir Hussain Syed of Infinisim


Driving the Future of HPC Through 224G Ethernet IP

by Kalar Rajendiran on 05-23-2023 at 10:00 am


The need for speed is a never-ending story when it comes to data communications. Trends such as cloud computing, artificial intelligence, the Internet of Things (IoT), multimedia applications, and rising consumer expectations are all driving this demand. These trends are accelerating the growth of high-performance computing (HPC), and the traditional data center server architecture is evolving into a hyperconverged server box architecture. A hyperconverged server box is a type of server infrastructure that combines storage, compute, and networking resources into a single, integrated appliance, designed to simplify data center management and reduce infrastructure costs by consolidating multiple functions into one device. As the industry moves to a 224G connectivity rate, there are a number of design considerations and decisions to make to overcome the numerous implementation challenges.

Synopsys’ first demonstration of 224G SerDes was in Basel, Switzerland at the 2022 European Conference on Optical Communication (ECOC). As the first company to demonstrate 224G SerDes, Synopsys has valuable insights to offer. At the recently held IPSoC 2023 conference, Manmeet Walia gave a detailed presentation on the subject. Manmeet is Director of Product Management at Synopsys for high-speed interface IP, which includes PCIe, Die-to-Die (D2D), and Ethernet.

Why is 224G Ethernet SerDes Needed?

224G Ethernet is needed for a number of reasons. First, it addresses the increasing demand, discussed earlier, for higher data rates in modern data centers. Networks within data centers are flattening to reduce latency, which drives demand for higher-bandwidth connections. Switch SoC die sizes are hitting the maximum reticle size limit, which means higher connectivity rates are needed to support the higher bandwidth requirement. Server rack unit density, power dissipation, and thermal management requirements are also driving the need for 224G connectivity. 224G Ethernet also helps reduce the number of cables and switches required in high-density data center environments, which can improve network efficiency and reduce costs. Finally, it provides backward compatibility with existing Ethernet standards, allowing easy integration into existing networks.

Challenges to Delivering 224G

There are several areas of challenge when it comes to implementing and deploying 224G Ethernet. The laws of semiconductor physics are not keeping pace with serial link throughput demands. Link loss is going up because package, connector, and channel technologies are not keeping pace with demand. Since physical distances on the front pluggable panel have not shrunk, reflections get worse. Since isolation has not improved, crosstalk gets worse. Overall implementation complexity increases roughly 5x when moving from 112G to 224G.

Challenges Being Addressed

There are a number of aspects to consider, and optimal choices to arrive at, starting with the signaling scheme. While the PAM6 scheme delivers less Nyquist loss, PAM4 prevails for most use cases due to a better signal-to-noise ratio (SNR) and lower FEC overhead. At the 224G SerDes architecture level, analog circuitry must be minimized for reduced parasitics and a high-bandwidth front end, and rigorous sensitivity analysis must be performed on individual analog blocks to reduce impairments. Innovative digital signal processing (DSP) techniques are critical to compensate for gain errors and skew mismatches and to achieve better noise immunity.
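As a back-of-envelope illustration of the PAM4-versus-PAM6 trade-off (my own arithmetic, not from the Synopsys presentation): PAM-N carries log2(N) bits per symbol, so a denser constellation lowers the Nyquist frequency, and hence channel loss, at the cost of tighter level spacing and worse SNR.

```python
import math

# Back-of-envelope Nyquist comparison for a 224 Gb/s lane.
# PAM-N carries log2(N) bits per symbol; the Nyquist frequency
# is half the symbol (baud) rate.

def nyquist_ghz(bit_rate_gbps, pam_levels):
    bits_per_symbol = math.log2(pam_levels)
    baud_rate = bit_rate_gbps / bits_per_symbol  # Gbaud
    return baud_rate / 2                         # GHz

pam4 = nyquist_ghz(224, 4)  # 2.00 bits/symbol -> 56.0 GHz
pam6 = nyquist_ghz(224, 6)  # ~2.58 bits/symbol -> ~43.3 GHz
print(f"PAM4 Nyquist: {pam4:.1f} GHz, PAM6 Nyquist: {pam6:.1f} GHz")
```

The ~13 GHz lower Nyquist frequency is the “less Nyquist loss” argument for PAM6; the article notes that PAM4’s SNR and FEC advantages still win for most use cases.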

And parallelism should be the theme for high-speed processing efficiency when it comes to the high-level architecture for a 224G SerDes. Optics technology is also moving closer to the host SoC to address power and performance issues as we move to 224G.

Summary

224G Ethernet is fast driving the growth of HPC applications, with licensing of 224G IP projected to cross over 112G IP by 2025. Early adopter applications include retimers, switches, AI scaling, optical modules, I/O chiplets, and FPGAs. Synopsys provides a complete solution with the lowest power, area, and latency to make it easy for customers to integrate, validate, and go to production.

For more details on Synopsys 224G IP, visit here.

To listen to Manmeet’s talk at IPSoC 2023, visit here.

Also Read:

Curvilinear Mask Patterning for Maximizing Lithography Capability

Chiplet Q&A with Henry Sheng of Synopsys

Synopsys Accelerates First-Pass Silicon Success for Banias Labs’ Networking SoC


A Negative Problem for Large Language Models

by Bernard Murphy on 05-23-2023 at 6:00 am


I recently read a thought-provoking article in Quanta titled Chatbots Don’t Know What Stuff Isn’t. The point of the article is that while large language models (LLMs) such as GPT, Bard and their brethren are impressively capable, they stumble on negation. An example offered in the article suggests that while a prompt, “Is it true that a bird can fly?”, would be answered positively with prolific examples, the inverse, “Is it true that a bird cannot fly?”, will likely also produce a positive answer supported by the same examples. The word “not” is effectively invisible to LLMs, at least today.

The Quanta article is well worth reading, as are most Quanta articles. What is especially interesting is that fixing LLMs to manage negatives reliably is proving to be more challenging than at first thought. I see two interesting ways to frame the problem, first a computer science analysis, second in asking what we mean by “not”.

Why do LLMs struggle with negation?

These models learn, from spectacularly large amounts of data, to generate a model of reality. An LLM builds a model of likelihoods of sequences of words associated with corresponding topics. There is no place in such a model to handle negation of a word. How would it be possible for inference to map “not X” as a term when the deep learning model is built on training data in which terms are necessarily positive (“X” rather than “not X”)?
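A toy retrieval-style sketch can make this concrete. This is my own illustration, not how any real LLM works internally: it simply shows that when high-frequency function words carry no weight, a statement and its negation score the same evidence identically.

```python
# Toy illustration (NOT a real language model): if "not" is treated
# like a weightless stopword, a prompt and its negation match the
# same supporting sentences with the same scores.

CORPUS = [
    "robins can fly",
    "ducks can fly",
    "eagles can fly",
    "penguins cannot fly",
]
STOPWORDS = {"is", "it", "true", "that", "a", "can", "not", "cannot"}

def content_words(prompt):
    """Keep only the 'meaningful' words -- negation vanishes here."""
    return {w for w in prompt.lower().split() if w not in STOPWORDS}

def score(prompt, sentence):
    """Count overlapping content words between prompt and evidence."""
    return len(content_words(prompt) & set(sentence.split()))

positive = "is it true that a bird can fly"
negative = "is it true that a bird can not fly"
print([score(positive, s) for s in CORPUS])
print([score(negative, s) for s in CORPUS])  # identical scores
```

Both prompts reduce to the same content words ({"bird", "fly"}), so the “not” never influences which evidence is retrieved, mirroring the behavior the Quanta article describes.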

SQL selections routinely handle negative terms – “select all clients who are not in the US” (I’m being casual with syntax). Why couldn’t LLMs do the same thing? They could in training use a similar selection mechanism to pre-determine what data should be used for training. But then the model would be trained explicitly to handle prompts with that specific negation, blocking hope of answering prompts about clients who are in the US. What we really want is a trained model which can answer prompts for both “in the US” and “not in the US”, which seems to require two models. That’s just to cover one negation possibility. As the number of terms which might be negated increases, the number of models (and time to train and infer) grows exponentially.
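The exponential blow-up described above is easy to make concrete. This sketch is purely illustrative (the term names are invented): if negation were handled by pre-filtering the training data, each independently negatable term would double the number of filtered datasets, and hence trained models, required.

```python
from itertools import product

# Illustrative only -- not a real training procedure. One filtered
# dataset (and model) per combination of (term, negated?) choices.

def models_needed(negatable_terms):
    """Enumerate every negation combination over the given terms."""
    variants = list(product([False, True], repeat=len(negatable_terms)))
    return len(variants)  # 2 ** len(negatable_terms)

terms = ["in_the_US", "enterprise_client", "active_account"]
print(models_needed(terms))  # 2**3 = 8 separate models
```

With just 20 negatable terms the count exceeds a million models, which is why pre-filtering cannot be the answer.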

Research suggests that ChatGPT has improved a little in handling negatives and antonyms through human-in-the-loop training. However, experts claim developers are chipping away at the problem rather than finding major breakthroughs. When you consider the significant range of possibilities in expressing a negative (explicit negation or use of an antonym, both allowing for many ways of re-phrasing), this perhaps should not be too surprising.

What do we mean by “not”?

“Not” in natural language carries a wealth of meaning which is not immediately apparent from a CS viewpoint. We want “not” to imply a simple inverse but consider the earlier example “Is it true that a bird cannot fly?”. Many birds can (robins, ducks, eagles), some cannot (penguins, some species of steamer duck, ostriches), and some can manage a little but not sustained flight (chickens). Some mammals can glide (flying squirrels); are they birds? The question doesn’t admit a simple yes/no answer. An LLM would likely present these options, ignoring the “not” but not really answering the question in a way that would demonstrate understanding. That is good enough for a search but is hardly a foundation for putting us all out of work.

“Not” provides a simple demonstration that meaning cannot be extracted from text by statistical analysis alone, no matter how large the training dataset. At some point meaning must tap into “commonsense”, all the implicit understanding we have in using language. “Not” highlights this dependency because “not X” implies absolutely everything – not including X – is possible. We deal with this crazy option in real life through commonsense, eliminating all except reasonable options. An LLM can’t do that because (as far as I know) there is no corpus for commonsense. LLMs can be patched through human guidance to do better on specific cases, but I am skeptical that patching can generalize.

LLMs have demonstrated amazing capabilities, but like any technology we build they have limits, which are becoming clearer thanks in part to one seemingly inoffensive word.


Why Generative AI for Chip Design is a Game Changer

by Daniel Nenni on 05-22-2023 at 10:00 am


AI-generated chip design is progressing at an incredible pace!

Earlier this week, I wrote about the Efabless AI Generated Open-Source Silicon Design Challenge. If you haven’t done so already, take a closer look at the challenge and see first-hand what this is all about. In talking to Mike Wishart and Mohamed Kassem, co-founders of Efabless, I learned that designers creating a simple block can go from prompt to GDS in hours and, even with iterating to debug, complete the project in days. This is amazing, and the challenge promises a very intensive and unique learning experience. Note that speed is key: the submission deadline to qualify is midnight June 2, and winners receive almost $10,000 worth of free silicon and eval boards.

They have now also announced a panel of judges with a wealth of experience in the semiconductor industry, open source, and machine learning. This is a great opportunity to have your work exposed to industry experts.

Now I want to dive a little deeper into what this movement is all about, with thanks to Mohamed and even ChatGPT for some interesting insights.

Overview

Generative AI, as I am sure you all know by now, refers to artificial intelligence that learns from enormous repositories of data and generates original output. It has been used to answer questions across the full range of domains and has generated content in areas that require what we would term “creativity,” including art, music, and creative writing. It is increasingly used in software development and in designing complex systems, with quality of results improving at a rapid, seemingly exponential rate.

In chip design, Efabless and its community are showing that chip design automation and optimization of the Verilog design process with Generative AI is a reality. The promised benefits include greater efficiency and increased innovation by bringing more people into the field of chips, broadening insight and reducing time and cost associated with the overall process.

AI Generated Chip Design

Historically, chip design has been difficult, time consuming, and expensive. Furthermore, it has many steps, each of which requires incredible attention to detail, as mistakes can cost months of schedule and millions of dollars. As a result, very few people, and even fewer companies, even try.

Artificial intelligence changes the process by exploring myriad alternatives much faster and better than even the best teams of designers, thereby identifying solutions that better meet the right mix of performance, power consumption, and cost. Human error is greatly reduced and the design process is accelerated.

Generative AI goes further by actually creating entirely new designs. The generative AI tools digest the vast spectrum of Verilog code, learning everything needed to create silicon designs. Then, when given plain-language “prompts” of a very high-level specification, the model generates the required Verilog.

This dramatically reduces the time and effort required for manual coding.  Importantly, it promises higher quality by avoiding the mistakes of inexperience or inattention, and, unlike us humans, will unfailingly adhere to the best practices it learns from the web.  Finally, it redefines creativity and innovation by pattern matching, in minutes, across the collective experiences of all designers to date (at least those whose experiences are on the web!).

AI Generated Design Is A Game-Changer

I can’t help but conclude that generative AI has the potential to revolutionize the industry, likely reinvigorating design activity, designers and innovation.

In my opinion, the Efabless Design Challenge represents the first step in making very visible what these changes may be.  Not only will it be the first open engagement in the area, but the designs will all be fully open sourced so that learning by us mortals is accelerated and democratized, absolutely.

I encourage everyone to join in with the idea that more people means more designs and more insight.

About Efabless

Efabless is a free cloud-based chip design platform, a growing community of 9,000+ chip designers, and a fabrication-friendly technology company that takes you from idea to silicon inside your product. Only Efabless chipIgnite provides a complete end-to-end solution for creating your own chip at a very low cost. Established in 2014, Efabless has a thriving community of thousands of chip designers who have put more than 400 chips into fabrication.

Also Read:

Join the AI Generated Open-Source Silicon Design Challenge!

A User View of Efabless Platform: Interview with Matt Venn

CEO Interview: Mike Wishart of Efabless


AMAT- Trailing Edge & China Almost Offset Floundering Foundry & Missing Memory

by Robert Maire on 05-22-2023 at 8:00 am


-AMAT reported in-line results helped by trailing edge & China
-Memory remains at very low levels- Foundry remains uninspiring
-China seems to be buying anything they are allowed to buy
-The recovery is too far out & unknown to handicap

Quarter was OK and Guidance also OK

Revenue was $6.63B and EPS $1.86, versus reduced expectations of $6.38B and $1.84, so more or less “in line.” Given that guidance is usually conservative, this could be viewed as a “miss,” especially versus true expectations or whisper numbers.

Guidance is for $6.15B ± $400M and EPS of $1.74 ± $0.18, versus the street at $6.02B and $1.65 in EPS. Not very inspiring.

Systems revenue is expected to be down 5% at $4.5B and services up 1%.

Memory is still dead and logic/foundry not much better

Memory makers continue to see the worst downturn in well over a decade and have slowed their equipment purchases to near zero levels as you don’t add capacity when you are already swimming in it.

We have noted that street pricing of memory, especially NAND remains very weak at unsustainable/unprofitable levels.

There does not appear to be any change anywhere on the horizon.

We currently expect memory weakness through the end of the year.

As capex spend is a bit of a trailing indicator, we also don’t expect a significant increase in memory spend for the balance of the year.

Advanced nodes and foundry logic, while not as dead as memory, are not a whole lot better off, as spending is at low levels due to weak end-market demand. Though there are certainly bright spots like AI, it’s not enough to offset the broader macro weakness.

Trailing edge & old stuff is in vogue again

It seems very weird that the strongest demand remains in older technology nodes and especially in China.

Obviously China is embargoed from the leading edge, so they can only buy trailing edge, and they are doing so with extreme vigor… buying anything that isn’t nailed down.

The joke making the rounds in the industry is that the Kanban phrase “just in time” has been replaced in China by “just in case” (just in case they are totally cut off by the US).

While AMAT management seemed to deny the view that China may be stockpiling equipment, we are not so sure, as there do not appear to be enough fab projects for all the equipment being ordered from all the equipment makers… it just doesn’t add up.

Management did agree that the Chinese government was supporting purchases which they wouldn’t need to do if the tools were truly needed to meet demand in real fab projects.

We remain concerned that the main lifeline that Applied has is China and trailing edge equipment. We are not sure how dependable that demand is or will remain in the future.

When you can’t sell new tools you sell service and AGS is doing well

Obviously the company is at a point where it could live on service alone (though not very well). Management made a point on the call that the dividend could be supported by service alone… probably not as reassuring as they hoped that statement would be.

China seems to be buying any chip equipment not nailed down

One of our other ongoing concerns is that China has been on a huge spending spree for non-leading-edge equipment. It’s hard to figure out where all the equipment is going, as there don’t seem to be that many fabs in China (that we know of).

It has all the makings of the famous toilet paper shortage as people bought in expectation of a shortage.

China seems to be buying any and all equipment they can as they likely fear that they will be cut off from even non leading edge tools. We saw this in ASML’s report this morning where 45-50% of DUV sales were into China.

This demand from China feels artificial and runs the additional risk of slowing because of increased sanctions or just running out of the stampede/herd mentality.

This obviously adds to the risk of a longer/deeper downturn

Dividends & buybacks support the stock because current business doesn’t

On the bright side the company has $10B of buybacks and boosted the dividend as a bit of a consolation prize to offset the business weakness.

It’s not a bad consolation prize, and the company doesn’t have a lot to do with the excess cash other than pay dividends and buy back stock.

We wonder how they will justify any claims to CHIPS Act funding in light of having enough cash to boost buybacks and dividends to record levels… it seems counterintuitive.

The stocks

We have little to no motivation to own the stock as there is no light at the end of the tunnel yet nor even a hope of a light.

The equipment stocks have been on a bit of a run for no real reason while business continues to suck. Macro news is not encouraging either.

There is no true hope of a recovery without memory and memory seems dead for quite some time.

Maybe investors have nowhere else to go with their money other than tech, because sooner or later the cycle will end… our concern is that the operative word is “later” and that investors have gotten their hopes up too soon and too high.

About Semiconductor Advisors LLC
Semiconductor Advisors is an RIA (a Registered Investment Advisor),
specializing in technology companies with particular emphasis on semiconductor and semiconductor equipment companies. We have been covering the space longer and been involved with more transactions than any other financial professional in the space. We provide research, consulting and advisory services on strategic and financial matters to both industry participants as well as investors. We offer expert, intelligent, balanced research and advice. Our opinions are very direct and honest and offer an unbiased view as compared to other sources.

Also Read:

LAM Not Yet at Bottom Memory Worsening Down 50%

ASML Wavering- Supports our Concern of Second Leg Down for Semis- False Bottom

Gordon Moore’s legacy will live on through new paths & incarnations

Sino Semicap Sanction Screws Snugged- SVB- Aftermath more important than event


An SDK for an Advanced AI Engine

by Bernard Murphy on 05-22-2023 at 6:00 am


I have observed before that the success of an AI engine at the edge rests heavily on the software interface to drive that technology. Networks trained in the cloud need considerable massaging to optimize for smaller and more specialized edge devices. Moreover, an AI task at the edge depends on a standalone pipeline demanding a mix of neural net/matrix, vector, and scalar/control operations with corresponding code for each, to manage stream pre-processing and post-processing in addition to inferencing. This is an overwhelming level of complexity for most product developers. Hiding that complexity while still maximally optimizing applications depends on a strong SDK and a unique engine architecture.

First, unify processing

The mainstream architecture for the intelligent edge combines a neural net for inference, a DSP for vector processing and a CPU/cluster for scalar processing and control. This approach amplifies programming complexity by forcing code partitioning and flow management between these separate platforms. The Quadric Chimera engine combines all three operation types in one processor with a unified instruction pipeline, starting with a common instruction fetch, then branching into a conventional ALU pipeline for scalar elements and a dataflow pipeline built on a 2D matrix of processing elements (PEs) and local registers for matrix/vector operations.

This structure allows scalar/control and matrix/vector operations to be modelessly interleaved. Unsurprisingly this architecture doesn’t support tricks like speculative execution but I’m guessing that is a small price to pay for simplifying the programming model. Also, I’m not sure such tricks even make sense in this context.

The programming model

I decided to first improve my understanding of inference programming as a foundation to see how the Quadric hardware/software supports that objective. Apologies to anyone who already knows this stuff, but it helped me. Programming in this domain means building graphs of pre-defined operations. There’s a nice example here of building a simple inference graph from scratch. Standard operations are defined by various ML frameworks. I found the ONNX format (an open standard) to be an easy starting point, with ~180 defined operations.

A framework-trained model is structured as a graph, which (after optimization) will be embedded inside a larger inference pipeline graph. There may be more than one inference operation within the graph. Quadric shares a nice 1D-graph example for an employee identification system. This starts with a number of classical image processing functions. Then there is an inference step for face detection (a bounding box around the face), then again a few classical algorithm steps (selecting the most probable bounding box and some more image processing within that sub-image). Finally, another inference step for classification: is this a recognized employee?

Inference-centric operations are defined in the ONNX format or an equivalent, possibly supplemented by chip- or OEM-vendor operations for specialized layers. Other operations standard in a target application domain, such as standard image processing functions, would be supplied by the SDK. An OEM might also extend this set with one or two of their own differentiating features. What should be apparent from this pipeline is that the sequence of operations for this employee recognition system will ultimately require a mix of traditional CPU and matrix operations, in some cases possibly interleaved (for example in processing a custom layer or an operation not supported by the matrix processor).
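To make the shape of such a pipeline concrete, here is a minimal sketch of the employee-recognition flow as a linear graph of stages. The stage names, signatures, and data values are invented for illustration and are not the Quadric SDK’s actual API.

```python
# Illustrative linear pipeline graph (invented names and data, not
# the Quadric SDK API). Each stage mixes "classical" and inference
# work, exactly the interleaving the text describes.

def preprocess(frame):           # classical image processing
    return {"image": frame}

def detect_faces(data):          # inference step 1: candidate boxes
    # (x, y, w, h, confidence) -- hard-coded stand-in for a model
    data["boxes"] = [(10, 10, 64, 64, 0.92), (5, 40, 30, 30, 0.31)]
    return data

def select_best_box(data):       # classical: keep most probable box
    data["box"] = max(data["boxes"], key=lambda b: b[-1])
    return data

def classify_employee(data):     # inference step 2: recognized?
    data["employee"] = data["box"][-1] > 0.5
    return data

PIPELINE = [preprocess, detect_faces, select_best_box, classify_employee]

def run(frame, stages=PIPELINE):
    data = frame
    for stage in stages:
        data = stage(data)
    return data

result = run("camera_frame_0")
print(result["employee"])
```

In a real flow the hard-coded stages would be SDK-provided image-processing operators and embedded pre-trained models, but the graph-of-stages structure is the same.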

Now for the Quadric SDK

With this target in mind, the Quadric SDK features and flow become very clear. Part of the solution is the DevStudio providing a graphical interface to drag/drop and connect pre-defined (or user-supplied) operations to build a graph. Again, pre-trained models from one of the standard training frameworks can be inserted into the graph. From this graph, DevStudio will build a C++ model which can be run through their Instruction Set Simulator (ISS) to profile against different Chimera core sizes, on-chip memory sizes and off-chip DDR bandwidth options. Once you are happy with the profile, you/the OEM can download the object code to an embedded implementation. In short, the AI architecture plus SDK provide developers the means to build, debug and maintain graph and C++ code on one core, not 3 separate cores. I have to believe this will be a popular option over more complex platforms.

The Chimera core is offered as IP to embed in a larger SoC; consequently, the SDK is expected to integrate into the SoC vendor’s SDK. The SDK is now released; you can learn more HERE.


US giant swoops for British chipmaker months after Chinese sale blocked on national security grounds

by Daniel Nenni on 05-21-2023 at 6:00 pm


According to the UK-based Telegraph, Pulsic is a “chip maker” and Cadence is a “swooping US giant.” I guess you have to stretch the truth to get those precious clicks these days. Even so, this is a strategic acquisition for Cadence.

Pulsic is a 20+ year old EDA software company that offers chip planning and implementation software for custom design at both mature and advanced nodes. Leading semiconductor companies use Pulsic’s physical design software to improve design productivity through layout automation. We have been collaborating with Pulsic for many years including on their most recent pivot to a freemium based licensing model. Great technology, great company, great acquisition, absolutely.

Cadence holds the custom layout Virtuoso franchise which is the predominant tool used around the world. Last I heard Virtuoso had a clear 90%+ market share with no serious competition.

Pulsic had an acquisition offer last year from Chinese investor Super Orange HK Holding, but UK politicians blocked it. Not unlike when China blocked the acquisition of UK-based Arm by US-based Nvidia. Politics is such a wonderful thing. UK politicians are also against reshoring semi-conductors in favor of reshoring full-conductors.

Terms of the takeover by Cadence have not been disclosed but a Cadence spokesman reportedly said: “Cadence has acquired Pulsic, and we expect to be sharing more information in the next couple of weeks.”

When I first heard about the potential acquisition my guess would have been Synopsys, as they are the only competition to Cadence Virtuoso. Cadence has gone to great lengths to protect and expand the Virtuoso market share, and this is yet another example. Synopsys did, however, make a yet-to-be-announced acquisition at the expense of Cadence and Siemens, so the big EDA arms race continues. Check our EDA Mergers and Acquisitions Wiki to see how EDA has truly evolved into a “giant” industry.

Pulsic Products Include:
  • Unity Chip Planning: Achieve design closure faster with the only top-down hierarchical, and now incremental, floor planning technology for custom design.
  • Unity Custom Digital Placer: Increased productivity through automation of advanced custom cell placement.
  • Unity Custom Digital Router: Greater productivity through the automation of several advanced custom routing technologies.
  • Unity Embedded Integrations: Easy access to the powerful Unity automation technologies. This will be especially beneficial for new users, as they can access the Unity automation without having to learn a new tool user interface.

About Pulsic 
Pulsic is an electronic design automation (EDA) company offering production-proven chip planning and implementation solutions for extreme custom design challenges at advanced custom nodes. Leading semiconductor companies use Pulsic’s physical design software to achieve significant improvements in their design productivity through layout automation using Pulsic’s advanced solutions. Complementary to existing design flows, standards, and databases, Pulsic technology delivers handcrafted quality faster than manual design or other EDA software solutions. Pulsic has delivered successful tapeouts for IDMs and fabless customers in the memory, FPGA, custom digital, LCD, imaging, and AMS markets worldwide. For more information, please visit http://www.pulsic.com.

About Cadence
Cadence is a pivotal leader in electronic systems design, building upon more than 30 years of computational software expertise. The company applies its underlying Intelligent System Design strategy to deliver software, hardware and IP that turn design concepts into reality. Cadence customers are the world’s most innovative companies, delivering extraordinary electronic products from chips to boards to complete systems for the most dynamic market applications, including hyperscale computing, 5G communications, automotive, mobile, aerospace, consumer, industrial and healthcare. For nine years in a row, Fortune magazine has named Cadence one of the 100 Best Companies to Work For. Learn more at cadence.com.

Also Read:

Balancing Analog Layout Parasitics in MOSFET Differential Pairs

Freemium Business Model Applied to Analog IC Layout Automation

Analog IC Layout Automation Benefits


Podcast EP163: The Unique Advantages of the Codasip Custom Compute Architecture With Mike Eftimakis

Podcast EP163: The Unique Advantages of the Codasip Custom Compute Architecture With Mike Eftimakis
by Daniel Nenni on 05-19-2023 at 10:10 am

Dan is joined by Mike Eftimakis. Mike has an extensive background in the electronics industry with almost 30 years in senior technical and business roles. After innovating with companies like VLSI, NewLogic or Arm, he is now VP Strategy and Ecosystem at Codasip, where he drives the long-term vision and its day-to-day implementation.

Dan explores what makes the Codasip architecture different. Mike explains the challenges custom compute can address, the types of designs that can benefit the most and how Codasip technology is deployed across a wide variety of applications.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


Chiplet Q&A with John Lee of Ansys

Chiplet Q&A with John Lee of Ansys
by Daniel Nenni on 05-19-2023 at 6:00 am

SNUG Panel

At the recent Synopsys Users Group Meeting (SNUG) I had the honor of leading a panel of experts on the topic of chiplets. One of those panelists was John Lee, Head of Electronics, Semiconductors and Optics at Ansys.

How is the signoff flow evolving and what is being done to help mitigate the growing signoff complexity challenge?

With multi-die, there are three key challenges that I’ll highlight, and then I'll talk about how we can best address them. First, the lines between silicon systems are blurring, so we now have what we call multi-scale problems: you’re looking at nanoscale effects such as cell heat at the transistor level, but then you need to go to centimeter or potentially meter-scale effects as you look at multiple dies in the package and the electronic system.

The second one, I think, is obvious: it’s Multiphysics. As you pack in more and higher signal speeds, you generate more heat, and when you start stacking dies on top of each other, that heat compounds. There are also severe mechanical effects that you need to account for.

The third is a Multi-organization challenge. Typically, as we look across the industry or a customer base, you have a chip team and a package team, and some of the dies may not even be coming from your own company, so there’s an organizational challenge that needs to be addressed.

We call that the 3Ms: Multi-scale, Multi-physics, and Multi-organization. And really, to support that from a signoff standpoint, we have to start with the physics, right? If we’re not accurately modeling the electromagnetics and the thermals, then power integrity, signal integrity, and thermal integrity are non-starters.

Thus, we need to provide an open and extensible platform. A platform implies scaling out compute very efficiently, even for a full transistor-level design. It must also be open and extensible, which gets to the third part: partnership. The scale and scope of the problems we’re solving around multi-die really require a village. For example, the Ansys platform working closely with the Synopsys platform is a great example of an open, extensible ecosystem that better solves the challenges that we’ve seen together.

How is the increased learning curve for these Multiphysics effects being addressed?

It’s been pretty steep, to your point. Ansys has been doing mechanical, thermal, and CFD simulation for over 50 years, so it’s not that we don’t know how to solve these differential equations. To take another example around thermal: typically, semiconductor companies might have a thermal team, but thermal was never really owned by the chip team, the IP team, or the package team; or if it was, they each had a different view of it. And then, working in partnership, probably over the last five years it’s really been about how we take thermal into a workflow that’s silicon-validated and also computationally efficient.

As I mentioned earlier, we’ve seen 2.5D and 3D on our customers’ roadmaps.

There’s going to be a trillion-transistor design pretty soon. A trillion transistors could mean 10 trillion geometries, and from a thermal standpoint we’d like to mesh that into a hundred trillion elements, which of course is computationally impossible. So how do we take that into a workable flow that can be signed off, but also works in conjunction with 3D-IC compilers, so we have early system-level awareness? We do a lot of co-innovation with companies like Synopsys and TSMC.

We’ve used AI/ML to guide us in how we identify hotspots and how we better optimize meshing. This has forced us to look more heavily at using reduced-order models to do abstractions so we can do quicker system-level simulations. And I’d say, with all the experience that Ansys has around solving these equations, we certainly feel like we have market leadership, though we’re still only halfway there. We probably have another five years of hard work ahead of us to take these systems to the level of power and usability that we want. And AI is going to be part of that, right? AI is already part of it, thankfully.
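The reduced-order modeling idea mentioned above can be illustrated with a small sketch. This is not Ansys’ implementation; it is a generic proper orthogonal decomposition (POD) reduction of a toy 1-D thermal network, with every matrix and constant invented for illustration:

```python
import numpy as np

# Toy full-order model: dT/dt = A@T + b, a 1-D conduction chain with a
# constant heat source at one "hotspot" node. Real meshes have trillions
# of elements; here n = 200 stands in for the full-order problem.
n = 200
A = (np.diag(-2.0 * np.ones(n))
     + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) * 50.0   # conduction between neighbors
b = np.zeros(n)
b[n // 2] = 1000.0                           # heat injected at the hotspot

def simulate(A, b, T0, dt=1e-4, steps=2000):
    # Explicit Euler time stepping (works for full and reduced systems)
    T = T0.copy()
    for _ in range(steps):
        T = T + dt * (A @ T + b)
    return T

# 1) Collect snapshots from a full-order run, then extract dominant modes
snaps, T = [], np.zeros(n)
for _ in range(200):
    T = simulate(A, b, T, steps=10)
    snaps.append(T.copy())
U, s, _ = np.linalg.svd(np.array(snaps).T, full_matrices=False)
V = U[:, :10]                                # keep 10 modes: 20x smaller state

# 2) Project the dynamics: the reduced system evolves 10 unknowns, not 200
Ar, br = V.T @ A @ V, V.T @ b
T_full = simulate(A, b, np.zeros(n))
T_red = V @ simulate(Ar, br, np.zeros(10))
err = np.linalg.norm(T_full - T_red) / np.linalg.norm(T_full)
print(f"relative error of reduced model: {err:.2e}")
```

The trade-off is exactly the one described in the interview: the reduced model is far cheaper to evaluate at the system level, at the cost of an abstraction error that has to be kept within signoff tolerances.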

What is the customer’s view of the reliability challenge?

Well, it certainly has increased; reliability has become a primary concern. It also extends to cases like a 5G base station that’s not in an air-conditioned warehouse, or scaling out a huge data center. There are various use cases where reliability is an increasing problem, and we’ve invested in this area.

We have a product for functional safety (ISO 26262) compliance, taking customers from spreadsheets to formal systems for tracking reliability, which is extremely important. We’ve also put some focus on the software side. We had originally been investing in automotive software systems and making sure that they are fail-safe, and we’ve seen a lot of adoption of that product line in the automotive industry. But for my team, the most relevant work has really been with foundries like TSMC, making sure that we use computational physics to better dial in reliability.

Also Read:

Ansys Acquires Another!

Multiphysics Analysis from Chip to System

Checklist to Ensure Silicon Interposers Don’t Kill Your Design

HFSS Leads the Way with Exponential Innovation


eFPGA Enabled Chiplets!

eFPGA Enabled Chiplets!
by Daniel Nenni on 05-18-2023 at 10:00 am

Achronix eFPGA IP

With our continuing chiplet coverage I found this of great interest. I have always felt that eFPGAs and chiplets are a natural fit for the next generation of chip design, and this is an excellent example. As we design with chiplets, one of the challenges is verification and validation of performance and interoperability. This partnership between Achronix Semiconductor and applied research institute Fraunhofer is a great example of how to address these challenges.

We have been working with Achronix Semiconductor since 2017 and know them well. Achronix stands out as the only independent high-end FPGA provider now that Altera and Xilinx have been acquired, which allows them to be much more customer-centric. Achronix is also the only big FPGA company that provides eFPGA IP technology.

Fraunhofer is an interesting organization. I see them at conferences but have never worked with them directly. Fraunhofer-Gesellschaft, founded in 1949, is based in Germany and is the world’s leading applied research organization. Fraunhofer-Gesellschaft currently operates 76 institutes and research units throughout Germany. Over 30,000 employees, predominantly scientists and engineers, work with an annual research budget of €2.9 billion.

Here is the original announcement released last month with the associated links for more information. Definitely worth a read:

Fraunhofer IIS/EAS Selects Achronix Embedded FPGAs (eFPGAs) to Build Heterogeneous Chiplet Demonstrator 

Collaboration to create an eFPGA-enabled chiplet solution aimed at validating next-generation chip-to-chip interconnect technology.

Santa Clara, Calif., and Dresden, Germany, April 25, 2023 – In a continuing commitment to enabling industry-leading solutions for the semiconductor market, Fraunhofer IIS/EAS, a leading-edge applied research institute in the field of advanced package solution design, and Achronix Semiconductor Corporation, the industry’s only independent supplier of high-end FPGAs and eFPGA IP solutions, are today entering a partnership to build a heterogeneous chiplet solution to validate performance and interoperability in advanced high-performance system solutions.

The Fraunhofer institute provides system concepts, design services and fast prototyping in most advanced packaging technologies and will make use of Speedcore™ eFPGA IP from Achronix in its next project. The multi-chip system solution will be composed of several chiplets that will be used to explore chip-to-chip transaction layer interconnects such as Bunch of Wires (BoW) and Universal Chiplet Interconnect Express (UCIe).

Chiplets are rapidly being adopted for high-performance, heterogeneous multi-chip solutions and enable lower latency, higher bandwidth and lower cost than discrete devices connected via traditional interconnects on a printed circuit board. One key application that will be covered in this project is the connection of high-speed ADCs together with Achronix® eFPGA IP for preprocessing in radars as well as wireless and optical communication. Achronix Speedcore eFPGA IP plays an important role here, providing low latency and reconfigurability while delivering the high-performance data acceleration these applications require.

The result of this project will create a demonstration platform suitable for applications such as 5G/6G wireless infrastructure, ADAS and high-performance test and measurement equipment. The findings of this cooperation will be communicated in a later press release and will be of interest to all semiconductor market actors seeking interface compatibility with their semiconductor chiplets.

About Fraunhofer IIS/EAS

The Fraunhofer Institute for Integrated Circuits IIS is a world leader in research on microelectronic and IT system solutions and services. Scientists at the institute’s EAS division in Dresden are working on key technologies for cutting-edge electronics systems. Among other things, the researchers focus on new design concepts to meet the challenge of the constant miniaturization of semiconductor components and the growing complexity of integrated circuits. The goal is to ensure the swift, resource-efficient, error-free, safe and secure development of electronic systems. Another focus is on new “More than Moore” technologies, which make it possible to combine a wide variety of assemblies in a single component.

About Achronix Semiconductor Corporation

Achronix Semiconductor Corporation is a fabless semiconductor corporation based in Santa Clara, California, offering high-end FPGA-based data acceleration solutions, designed to address high-performance, compute-intensive and real-time processing applications. Achronix is the only supplier to have both high-performance and high-density standalone FPGAs and licensed eFPGA IP solutions. Achronix Speedster®7t FPGA and Speedcore™ eFPGA IP offerings are further enhanced by ready-to-use VectorPath™ accelerator cards targeting AI, machine learning, networking and data center applications. All Achronix products are fully supported by the Achronix Tool Suite which enables customers to quickly develop their own custom applications.

Achronix has a global footprint, with sales and design teams across the U.S., Europe and Asia. For more information, please visit www.achronix.com.

Follow Achronix

Website: www.achronix.com
The Achronix Blog: /blogs/
Twitter: https://twitter.com/Achronixcorp
LinkedIn: https://www.linkedin.com/company/57668/
YouTube: https://www.youtube.com/user/AchronixCorp

Contacts

Sandra Kundel
Press Relations
Fraunhofer Institute for Integrated Circuits IIS, Division Engineering of Adaptive Systems EAS
Muenchner Strasse 16, 01187 Dresden, Germany
Phone +49 351 45691-152
pr@eas.iis.fraunhofer.de

Bob Siller
Achronix Semiconductor Corporation
408-889-4142
bobsiller@achronix.com

Also Read:

The Rise of the Chiplet

Achronix on Platform Selection for AI at the Edge

WEBINAR: FPGAs for Real-Time Machine Learning Inference