
Building Reliability into Advanced Automotive Electronics
by Bernard Murphy on 12-05-2023 at 6:00 am


Those of you who have been in the industry for a little while will remember that the recipe for reliable electronics in cars (and other vehicles) used to be simple: stick to old (like 10 years old) and well-proven processes, and tweak rather than tear up and restart proven designs to the greatest extent possible, because incrementing from a well-characterized base should ensure that reliability and other KPIs will not drift too far from expectations.

That recipe went out the window when we demanded that our vehicles turn into phones on wheels, offering ADAS, autonomy, and advanced infotainment, all requiring state-of-the-art processes and architectures. Throwing out the rulebook gets us all the functionality; however, we don't expect our phones to be ultra-safe or to work reliably for 20 years or more. How do we bridge the gap between all the technology goodies we want and our safety and durability expectations?

proteanTecs recently hosted a webinar on this and related topics that I rate a must-watch. Packed with information and informed opinion, it measurably increased my understanding of the challenges and directions in bringing high reliability to advanced technologies. More generally, it helps build greater understanding of the need for electronics lifetime monitoring and adaptation. Participants were Heinz Wagensonner (Senior SoC designer at Cariad, the automotive software branch of Volkswagen), Jens Rosenbusch (Senior Principal Engineer, SoC safety architecture at Infineon), Robert Jin (Automotive safety architect at NXP), Gal Carmel (Senior VP/GM for automotive at proteanTecs), and moderator Ellen Carey (Chief External Affairs Officer at Circulor).

What Auto Reliability Means Today

One aspect of enhanced reliability is in fail-safe or fail-degraded systems. Real systems can and will fail. Acknowledging this reality, when a system is failing or expected to fail, a redundant system can take over, or the system can fall back to reduced functionality that is still safe enough to allow the car to limp home or perhaps to the side of the freeway. This reasoning is already well understood, though it is expected to be applied more widely in future designs.

Another aspect – the subject of this webinar – recognizes that high reliability cannot be assured in a system which fails to evolve over time. Devices age, use-cases change, the environment changes, and feature capabilities will be upgraded. Few of these changes can be accounted for in the t=0 (product release) system. Systems must become intelligently self-monitoring, responding in part through locally determined adaptation but also through a feedback loop to a central resource which can synthesize broader learning for dissemination back to vehicles.

In short, for continued high reliability in these advanced systems, closing the spec for t=0 is only the start. You move to Arizona (hot) and your daily commute doubles. You are now pulling a trailer and have downloaded a software upgrade to your vehicle (now 10 years old) which promises to improve your range (ICE or EV). The "spec" keeps changing, yet reliability plus safety must continue to measure up to the highest standards. This demands in-system and in-circuit monitoring through embedded sensors (for workload, temperature, voltage, interconnect, and delay, for example) together with on-board ML-driven intelligence to interpret that data. This should capture not only immediate problems but also anomalous signatures which might indicate the beginning of a future problem, allowing us to supplement now-routine safety mitigations with the beginnings of predictive maintenance.
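As a minimal sketch of what that on-board interpretation could look like (the sensor names, baseline statistics, and thresholds below are invented for illustration and are not proteanTecs' implementation), consider a simple z-score check against per-profile baselines:

# Hypothetical sketch: flag anomalous signatures in embedded sensor telemetry.
# Sensor names, baselines, and thresholds are illustrative only.

BASELINE = {  # per-sensor (mean, standard deviation) learned at t=0
    "ring_osc_delay_ps": (142.0, 3.0),
    "core_temp_c":       (71.0, 6.5),
    "vdd_droop_mv":      (18.0, 4.0),
}

def anomaly_scores(readings):
    """Return a z-score per sensor; large scores suggest drift from baseline."""
    return {name: abs(value - BASELINE[name][0]) / BASELINE[name][1]
            for name, value in readings.items()}

def classify(readings, alert_z=3.0, watch_z=2.0):
    """Distinguish immediate problems from early-warning signatures."""
    worst = max(anomaly_scores(readings).values())
    if worst >= alert_z:
        return "alert"   # act now: trigger the fail-safe / fail-degraded path
    if worst >= watch_z:
        return "watch"   # log and report upstream for fleet-level learning
    return "normal"

# Delay drift that is not yet a failure, but worth reporting upstream:
print(classify({"ring_osc_delay_ps": 149.5,
                "core_temp_c": 83.0,
                "vdd_droop_mv": 21.0}))   # -> "watch"

An "alert" here would trigger the fail-safe or fail-degraded response discussed earlier, while a "watch" signature is exactly the kind of early indicator that feeds predictive maintenance.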

What constitutes a problem or a suspicious signature depends on mission profiles. One size does not fit all: a robotaxi, a city business vehicle, and a personal car in city or rural use will have different profiles. An important aspect of profiles will be factors affecting power, voltages and frequencies for example. Lowering power improves thermal reliability of course, but it will also extend range in an EV, also a positive for reliability.

Profiles can't be programmed into a product at release, not least because we have no idea (yet) what those profiles should be. The t=0 spec must somehow accommodate the full range of possibilities, which designers accomplish through margins, margins everywhere, which is expensive. In use, it will become clear that for a certain profile some margins can be tightened, whereas others perhaps should be loosened. Intelligent systems can learn their way to profile optimization, even better if they can share data with other cars.
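To make the margin-tightening idea concrete, here is a hypothetical sketch that recomputes a voltage guardband from slack actually observed in the field for one profile; the initial margin, percentile, and safety floor are invented numbers, not any vendor's policy:

# Hypothetical sketch: adapt a t=0 voltage guardband to an observed profile.

def adapted_margin_mv(observed_slack_mv, initial_margin_mv=50.0,
                      keep_fraction=0.999, floor_mv=10.0):
    """Shrink (or grow) a designed-in margin so it still covers nearly all
    worst-case behavior observed for this vehicle's actual mission profile."""
    samples = sorted(observed_slack_mv)
    # Margin actually consumed at the chosen percentile of in-field samples.
    idx = min(int(keep_fraction * len(samples)), len(samples) - 1)
    needed = samples[idx]
    # Never drop below a safety floor; loosen if the profile demands more.
    return max(needed, floor_mv)

# A gentle city profile rarely consumes the full 50 mV designed-in margin:
city_profile = [12.0, 15.5, 14.2, 22.0, 18.9, 16.1, 25.3, 13.4]
print(adapted_margin_mv(city_profile))   # ~25 mV: margin can be tightened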

From Theory to Practice

Naturally proteanTecs plays an important part in this solution. During chip design, guided by detailed analysis, they build and insert low-impact agents into the design to assure high-coverage data in use. Working in partnership with proteanTecs, NXP has written a paper which became a driver for the ISO/TR 9839 technical report on predictive maintenance. This is expected to fold into or alongside the next revision of the ISO 26262 standard.

This method for capturing and utilizing in-use behaviors is a starting point; however, all participants agree that the next interesting (and challenging) step to derive full value is to collaboratively share this data, certainly within a brand, perhaps even across brands for common subsystems, say for engine and braking. Complementary value could be found in considering reliability of the total system (the car) rather than just individual components or subsystems. In both cases there is rich potential for AI to detect signature patterns in this collective data, patterns which perhaps appear only in multi-factor correlations that we would find hard to detect ourselves.

Sharing data is always a tricky topic these days. Since the types of data discussed here are relatively low-level, I would think privacy concerns may not be as big an issue as brand-competitive concerns. All panelists agreed on the need to build an ecosystem together with regulatory bodies to develop and support standards in this area.

There were several other interesting points raised. Will Tier1s disappear as OEMs build their own silicon? (no). How will this approach simplify analysis for field failures? (massively). Can chiplets help with reliability? (Maybe in the next decade after multi-die system reliability has been proven to automotive expectations for temperature, vibration and humidity).

Watch the replay for more detail on all points.

 


Improving Wafer Quality and Yield with UPW Resistivity and TOC Measurements
by Kalar Rajendiran on 12-04-2023 at 10:00 am


An earlier SemiWiki post discussed water sustainability in semiconductor manufacturing and related challenges and solutions. Whether for first-time use or recycled use, water purity needs to meet stringent criteria for the processing task at hand. This article looks at the topic from a wafer quality and yield perspective and is based on a recently published whitepaper by Mettler Toledo.

The Significance of Water Purity in Semiconductor Manufacturing

Water serves as a cleaning agent, a heat transfer medium, and a crucial ingredient in the chemical processes used to etch and deposit materials onto wafers. Any impurities in the water can lead to defects on the wafer, reducing yield and affecting overall product quality. Minute amounts of ionic impurities and organic contamination, even at the sub-parts-per-billion (ppb) level, can negatively impact the yield and quality of wafers. Therefore, it is essential to monitor and control the purity of the water used in semiconductor manufacturing.

Ensuring Water Purity

Two key parameters that must be monitored in real time to ensure ultrapure water (UPW) are Total Organic Carbon (TOC) and resistivity. TOC measures the organic carbon compounds present in the water, which can be indicative of contamination. Resistivity, on the other hand, measures the water's ability to conduct electrical current and can highlight any ionic impurities. As such, real-time, continuous monitoring of resistivity and TOC has long been a standard practice in the industry. Strict control of these parameters is crucial for semiconductor manufacturers to enhance wafer quality and maximize yield in this highly competitive industry.

Challenges Faced

UPW is produced through a complex and costly multistage purification process. This process involves various techniques, such as reverse osmosis, micro-filtration, electrodeionization, ion exchange, adsorption, and UV photo-oxidation. However, one challenge associated with UPW is that any water used for Total Organic Carbon (TOC) measurement, a key indicator of water purity, cannot be returned to the process water stream and is instead directed to drain. UPW has a very high resistivity of 18.18 Megohm-cm, making it crucial for resistivity instruments used in semiconductor manufacturing to accurately detect even the smallest resistivity changes on a non-zero background.

While UPW purity is extremely important, overly tight measurement limits could lead to throwing away a significant quantity of water as wastewater. On the other hand, a more relaxed approach could impact product quality and yield. To meet the industry's stringent expectations for water purity, resistivity instrumentation must provide stable, precise measurements with effective noise reduction techniques in place.

Consequently, semiconductor manufacturing facilities seek solutions that are reliable, easy to integrate into existing systems, and operator-friendly.
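As a rough sketch of what noise-reduced alarm logic can look like (window size and alarm limits below are illustrative assumptions, not Mettler Toledo's specification; real limits come from the fab's UPW spec):

# Hypothetical sketch: noise-reduced UPW alarm checks against rolling averages.
from collections import deque

UPW_MAX_RESISTIVITY = 18.18   # theoretical UPW ceiling, megohm-cm at 25 C

class UPWMonitor:
    def __init__(self, window=10,
                 min_resistivity=0.99 * UPW_MAX_RESISTIVITY, max_toc_ppb=1.0):
        self.res = deque(maxlen=window)   # rolling resistivity samples
        self.toc = deque(maxlen=window)   # rolling TOC samples (ppb)
        self.min_resistivity = min_resistivity
        self.max_toc_ppb = max_toc_ppb

    def sample(self, resistivity_mohm_cm, toc_ppb):
        self.res.append(resistivity_mohm_cm)
        self.toc.append(toc_ppb)

    def status(self):
        # Averaging the window suppresses single-sample noise, so small but
        # real shifts on the near-18.18 background still trip the alarm.
        avg_res = sum(self.res) / len(self.res)
        avg_toc = sum(self.toc) / len(self.toc)
        if avg_res < self.min_resistivity or avg_toc > self.max_toc_ppb:
            return "alarm"
        return "ok"

m = UPWMonitor()
for r, t in [(18.17, 0.4), (18.16, 0.5), (18.18, 0.45)]:
    m.sample(r, t)
print(m.status())   # "ok": ionic and organic levels within the assumed limits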

UPW Monitoring For Stable and Precise Measurements

Advanced sensor technology such as Mettler Toledo's UniCond sensors can simplify monitoring processes in several ways. These sensors are equipped with onboard memory that stores their unique identity and calibration data, and this information is automatically transmitted to the connected transmitter. Mettler Toledo's M800 is a multi-parameter transmitter that offers installation flexibility and simplified process control measurements. It can simultaneously monitor one, two, or four in-line sensors, making it a versatile and cost-effective solution for UPW monitoring. This "plug and measure" approach simplifies installation and ensures the sensor's performance integrity, even if it's relocated to another process location.

The 6000TOCi sensor from Mettler Toledo provides additional features to streamline routine system maintenance. Users benefit from local storage of calibration data, allowing them to access historical calibration records, which ensures compliance with water system requirements.

Mettler Toledo’s Intelligent Sensor Management (ISM®) technology not only facilitates the communication of calibration data but also offers sensor diagnostics. These diagnostics can identify out-of-range resistivity measurements and temperature variations, contributing to improved process control. ISM also supports calibration planning and provides advance warnings of potential sensor failures through Dynamic Lifetime Indicators (DLI) for components like the UV lamp, filter, and ballast. This proactive approach helps reduce downtime and increase yield in industrial processes.

Summary

Repeatable and precise measurements are essential to maintain a consistent supply of UPW to wafer tools and wet benches. Mettler Toledo offers the most advanced tools for on-line continuous measurement and process control required for UPW systems. Its suite of plug-and-measure sensors, including UniCond and 6000TOCi sensors, can be easily integrated into existing water systems via a user-friendly M800 transmitter interface. These solutions ensure that TOC and resistivity are monitored at sub-ppb levels, reducing risks and improving yield.

For more details, download Mettler Toledo’s whitepaper here.

Also Read:

Podcast EP194: The Impact of Joining TSMC’s OIP From the Perspective of Agile Analog

CEO Interview: Dr. J Provine of Aligned Carbon

RISC-V Summit Buzz – Ron Black Unveils Codasip’s Paradigm Shift for Secured Innovation


RISC-V Summit Buzz – Axiomise Accelerates RISC-V Designs with Next Generation formalISA®
by Mike Gianfagna on 12-04-2023 at 6:00 am


If the recent RISC-V Summit proved one thing, it's that open-source hardware design, and particularly the RISC-V instruction set architecture (ISA), has entered the mainstream. It is a design methodology and architecture to watch closely. Across a broad range of applications from data center, to automotive, to IoT, RISC-V processors are finding a fit to address the huge processing demands of embedded AI. As is the case with any complex system design, a primary care-about is a robust design that is bug-free. Given the complexity of these designs, getting a new RISC-V implementation to that fully verified state can be a daunting task. An effective approach to this problem is deploying formal verification. Using this approach can be challenging due to the expertise needed, but this is where Axiomise can help. The company's mission to "make formal normal" and its approach, along with RISC-V-specific enhancements, were on display at the Summit. Read on to see how Axiomise accelerates RISC-V designs with next generation formalISA®.

About Axiomise and formalISA

Axiomise was founded in 2017 by Dr. Ashish Darbari, after spending over two decades in the industry and top research labs increasing formal verification adoption. The company enables formal verification adoption by simplifying jargon and showing design teams how complex problems are easily solved through Axiomise’s abstraction-driven methodologies. These methodologies are vendor neutral. Training and consulting services are also available to enable formal in any corporate setting. You can learn more about Axiomise and its technology on SemiWiki here.

Earlier this year, the company announced its next-generation app specifically targeted at RISC-V processors. This unique push-button solution simplifies everything necessary to make verification efficient and effective: it avoids test generation (as in simulation) by using architectural-specification-precise formal properties for exhaustive testing (via formal), saves debug time (where 70% of verification time is spent) with an intelligent debugger called i-RADAR®, and supports sign-off via scenario coverage and reporting through SURF.

It was reported in this announcement that the formalISA App has been in use for more than four years to formally verify numerous open-source and commercial RISC-V processors, proving the absence of bugs in out-of-order and in-order cores and exposing bugs in previously verified processors.

The Buzz at RISC-V Summit


At the RISC-V Summit, I was able to speak with Adeel Liaquat, engineering manager at Axiomise. I began by asking Adeel if there was one consistent care-about being expressed at the show. His answer was “time to market.”

He went on to explain that everyone wants an efficient, bug-free RISC-V processor to power new designs. Getting to that goal can be quite time consuming, however. Conventional UVM and simulation technology approaches can fall short of attaining the required level of robustness. Stimulus must be developed to cover all cases, and that is an open-ended, huge undertaking.

Formal verification offers a faster path by verifying the design across all possible states without the need for exhaustive input vectors. Adeel pointed out that handling deadlock conditions is a good example of the benefits of formal verification. These conditions may manifest after years of deployment in the field. It is virtually impossible to run enough vectors to uncover these situations up-front. The exhaustive nature of formal verification will ensure these conditions do not occur.

With all these benefits, why isn't formal more popular? Adeel explained it's a combination of awareness of the approach and the expertise to implement it. He added that without the right methodology one could wind up in a verification "loop," using up all licenses and all available time without a concrete result. Axiomise is a company dedicated to addressing this problem for design teams of all sizes.

From a technology perspective, the formalISA App makes RISC-V ISA specific formal technology available. The i-RADAR debugger simplifies the unique requirements associated with formal verification debugging. And SURF makes it easier to coordinate all the verification runs and consolidate results. Adeel explained that Axiomise can also develop custom capabilities that may be needed as well. Beyond the technology layer, the company offers training and consulting to bring design teams to the required level of proficiency. If internal resources are scarce, Axiomise can create and implement the full formal verification strategy and process for the customer.

This was an insightful discussion – I began to see how the mission of making formal normal was within reach. And that’s how Axiomise accelerates RISC-V designs with next-generation formalISA.

 


Podcast EP196: A Look at the Upcoming IEDM Conference with the Publicity Chair and Vice Chair
by Daniel Nenni on 12-01-2023 at 10:00 am

Dan is joined by Jungwoo Joh, a Process Development Manager at Texas Instruments and Publicity Chair for IEDM 2023. He currently leads gallium nitride technology development for power applications, and has been working on reliability, device characterization & modeling, and process development for various GaN based technologies as well as for high voltage silicon BCD processes. Jungwoo received his Ph.D. in Electrical Engineering from MIT. He has published more than 60 papers and holds 20 patents. He is a Senior Member of IEEE, and has been serving on IEDM technical and executive committees since 2015.

Dan's other guest is Kang-ill Seo, Vice President of Samsung Electronics and Publicity Vice Chair for IEDM 2023. He directs Samsung's international joint project with IBM at Albany Nanotech in New York state. His current research focuses on the development of leading-edge logic technologies, including 3D transistor architectures, interconnect with novel materials, and associated design-technology co-optimization for next-generation low-power and high-performance computing devices. Kang-ill earlier participated in and led the development of several generations of logic technologies, from 20nm to 7nm, in Samsung's Semiconductor R&D center. He received his MS and Ph.D. in Electrical Engineering and Materials Science & Engineering from Stanford University. He has published in 25 peer-reviewed journals and conferences and has more than 60 issued patents. He has served on the IEDM executive committee since 2018.

Dan explores with Jungwoo and Kang-ill what the popular topics will be at the upcoming IEDM. Energy conservation, sustainability, and reduced carbon footprint are just a few of the many topics to be addressed. The evolution from 2D to 3D CMOS scaling at the device, circuit, and chip levels is also discussed, as are new areas in memory design and changes to the conference program to support many new AI-driven innovations.

IEDM 2023 will be held in San Francisco from Dec. 9 – 13. You can register for the conference here.


CEO Interview: David Moore of Pragmatic
by Daniel Nenni on 12-01-2023 at 6:00 am


David has almost 25 years of leadership experience in the semiconductor industry. Prior to joining Pragmatic, he served as Chief Strategy Officer at Micron Technology, Inc. He also spent six years at Intel Corporation in various roles, including General Manager of the Programmable Solutions Group, where he led the multi-billion-dollar FPGA and Structured ASIC business on a global basis.

Tell us about your company?
Pragmatic is a UK-based leader in flexible integrated circuit technology and semiconductor manufacturing. We use thin-film semiconductors to create ultra-thin, flexible integrated circuits, known as FlexICs, that are significantly cheaper and faster to produce than silicon chips – we’re talking days, rather than months, and potentially sub-penny price points for certain applications.

As you can imagine, this makes FlexICs a compelling alternative to silicon for lots of mainstream electronics applications, as well as enabling new applications that may not have been possible previously, either because of cost or due to silicon’s rigid form factor.

We also offer a Fab-as-a-Service model that enables secure, dependable, localized semiconductor supply. The reduced complexity of our innovative manufacturing and process technology allows for compact fabs that are orders of magnitude lower in terms of cost, and physical and environmental footprint, compared to silicon.

We are headquartered in Cambridge, UK with production lines in Durham in the northeast of England.

What problems are you solving?
Many of the most impactful Internet of Things applications that seek to harness AI at scale will depend on item-level intelligence and ubiquitous connectivity. We’re talking billions – and, ultimately, trillions – of smart physical objects. For a number of reasons, silicon chips alone can’t achieve that, and we aim to plug the gap.

Recently, we’ve become all too familiar with the impact of brittle supply chains. Even if we assume that similar shocks won’t reoccur – while being cognizant that they likely will – there’s simply not enough fab capacity in the world pumping out the right kind of silicon chips to address the incoming tsunami of demand. And it wouldn’t be cost-effective to use silicon for many of these workloads even if there were. The long lead times and high costs associated with silicon chip design mean that it is best reserved for creating complex, higher-spec chips in applications that require that level of performance.

We operate at the other end of the scale. Our agile technology platform, which leverages industry standard EDA tools, and rapid production cycles allows you to target a specific application and customize to meet the performance required. This means you can accelerate time to market – and even make changes as your requirements shift, without incurring crazy re-spin costs or long turnaround times.

To be clear, we're not trying to compete with leading-edge silicon here. Today's FlexICs deliver just enough performance for a very wide range of simple tasks. But that simplicity is their strength, and it's what allows them to provide ubiquitous intelligence and connectivity at a price point and low environmental impact that's simply not achievable with silicon.

What application areas are your strongest?
One of our largest, near-term opportunities is in Radio Frequency Identification (RFID) and Near-Field Communication (NFC) applications that use smart and reusable packaging to dramatically enhance supply chain optimization and consumer engagement.

There’s huge consumer demand, and increasing regulatory requirements, for brands to improve supply chain traceability and reduce waste. However, that can be challenging for high-volume, lower-value goods and packaging – often, the numbers just don’t add up.  But when you’re talking about item-level intelligence at a price point of pennies or less, it’s a game changer.

Companies have been quick to spot this potential: for example, we’re working on a trial with a major UK supermarket chain that’s investigating the deployment of smart reusable packaging at scale. It’s one of many exciting opportunities to change the way we think about plastic and waste in everyday consumables.

We’re also looking at a range of healthcare applications, from temperature sensors in health monitors to smart bandages – an area where the thinness and flexibility of our chips can really make a difference to wearer comfort.

We also see significant opportunities in consumer wearables and even toys and games. In an age where it can be hard to get kids off screens, boosting the interactivity levels of their ‘analogue’ toys by seamlessly adding intelligence holds a lot of appeal.

What keeps your customers up at night?
In recent years, semiconductor supply has been a key issue – concerns over global supply chain resiliency have been reflected by the US and EU Chips Acts and the UK’s recent semiconductor strategy announcement.

There's now a push to onshore semiconductor fabrication, but if you're looking to add silicon capacity on an advanced or even a mainstream node, it's not something you can do quickly or, often, economically: a single fab typically costs billions of dollars to build and takes many years to deploy. Deployment of new flexible semiconductor fabs takes far less time, and because they're modularly scalable, you can achieve cost-effective, localized manufacturing with a cost structure that's significantly lower than legacy silicon fabs. And you can do it quickly, too!

Beyond that, as demand for semiconductors continues to grow, there's increasing pressure for the industry to tackle its carbon footprint. Because our unique production process omits many of the resource-intensive, high-temperature stages of silicon semiconductor manufacturing, it uses orders of magnitude less water and energy, and significantly fewer harmful chemicals and gases. In particular, we don't create or release PFAS – forever chemicals. While it's not always easy to lay your hands on the relevant industry figures to make an exact comparison, initial calculations show that chips from our fabs could have a carbon footprint well over 100 times lower than those manufactured in typical silicon fabs.

What does the competitive landscape look like and how do you differentiate?
Currently, no other company has a comparable offering in commercial production and, to our knowledge, competing approaches in research and development are still many years away from volume manufacturing at scale.

For existing applications using silicon chips, we compete against ‘legacy node’ foundries, but in practice, our differentiation on cost, form factor, production cycle times, sustainability and security of supply gives us unique advantages wherever our performance meets the application requirements.

What new features/technology are you working on?
Innovation is the essential fuel of our growth plans. We’re currently ramping up our second fab in the northeast of England, which will be the first 300mm wafer fab in the UK. It has the capacity to deliver billions of ICs per year, and we’ll be commissioning additional lines at the site to bring capacity to tens of billions of ICs in 2025.

Our next-generation process technology will deliver a 10X reduction in power consumption and significant area reductions, generation-on-generation, for our RFID product lines and to our foundry customers.

We'll also be advancing our RFID technology roadmap to support expanded NFC features and Ultra High Frequency (UHF) RFID applications, enabling breakthrough solutions at scale in what is one of the fastest-growing segments in the semiconductor industry.

In parallel, we’re investing to deliver new, cost-optimized, ultra-thin sensing and control capabilities that further expand our addressable foundry market for a host of exciting, high-impact applications in the consumer, healthcare and industrial segments.

How do customers normally engage with your company? 
Our core offering is the provision of foundry services to our customers. They design the chip, or partner with us to design it, then we manufacture and deliver the diced FlexIC wafers – just like companies such as TSMC or GlobalFoundries in the silicon space.

We also produce RFID products as ICs that are integrated into intelligent labels and smart packaging solutions by leading brands or RFID tag and solution providers.

Finally, Fab-as-a-Service is our unique solution for the future of distributed semiconductor manufacturing. By installing and operating a fab at the customer’s site of choice, we provide secure, dedicated, localized capacity and the most attainable route to cost-effective, scalable fabrication.

We’re excited for the future. By matching our unique technology with a disruptive business model – and our customers’ innovation – we’re confident that we can change the face of semiconductors, facilitating ubiquitous connectivity and solutions to some of the world’s most pressing data challenges.

Pragmatic Semiconductor

Also Read:

CEO Interview: Dr. Meghali Chopra of Sandbox Semiconductor

CEO Interview: Dr. J Provine of Aligned Carbon

CEO Interview: Vincent Bligny of Aniah


RISC-V Summit Buzz – Semidynamics Founder and CEO Roger Espasa Introduces Extreme Customization
by Mike Gianfagna on 11-30-2023 at 10:00 am


Founded in 2016 and based in Barcelona, Spain, Semidynamics™ is the only provider of fully customizable RISC-V processor IP. The company delivers high bandwidth, high performance cores with vector units and tensor units targeted at machine learning and AI applications. There were some recent announcements from Semidynamics leading up to the RISC-V Summit that extend the company's focus on customization. I had a chance to meet with the company's CEO at the Summit to get the back-story on what the announcements really mean. Read on to get the whole story about how Semidynamics founder and CEO, Roger Espasa, introduces extreme customization.

What Was Announced

First, let’s look at what was announced leading up to the RISC-V Summit. The headline was Semidynamics launches first fully-coherent RISC-V Tensor unit to supercharge AI applications. The announcement introduced a RISC-V Tensor Unit designed for ultra-fast AI solutions. The design is based on the company’s fully customizable 64-bit cores. The Tensor Unit is built on top of the Semidynamics RVV1.0 Vector Processing Unit and leverages the existing vector registers to store matrices, as outlined in the figure below.

Tensor Unit diagram

This enables the Tensor Unit to be used for layers that require matrix multiply capabilities, such as Fully Connected and Convolution, and use the Vector Unit for the activation function layers (ReLU, Sigmoid, Softmax, etc.), which is a big improvement over stand-alone NPUs that usually have trouble dealing with activation layers.

The Tensor Unit leverages both the Vector Unit capabilities as well as the Atrevido-423 Gazzillion™ capabilities to fetch the data it needs from memory. Semidynamics Gazzillion technology allows the processor to send up to 128 requests to the memory system, whereas other cores can only tolerate very few cache misses. This means that the processor continues doing useful processing while previous misses are served. It is interesting to note that Tensor Units consume data at an extremely high rate and, without Gazzillion, a normal core couldn’t keep up with the Tensor Unit’s demands. Other solutions rely on difficult-to-program DMAs to solve this problem. Semidynamics took a different approach by seamlessly integrating the Tensor Unit into its cache-coherent subsystem. This innovation effectively opens a new era of programming simplicity for AI software.
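To visualize that division of labor, here is a toy sketch in which matrix-multiply layers run on a "tensor unit" and elementwise activations on a "vector unit". These functions are pure-software stand-ins invented for illustration, not Semidynamics' API:

# Illustrative only: toy stand-ins for the tensor unit (matrix multiply)
# and the vector unit (elementwise activation).

def tensor_unit_matmul(a, b):
    """Fully Connected and Convolution layers reduce to matrix multiplies."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def vector_unit_relu(m):
    """Activation layers (ReLU here) are elementwise vector operations."""
    return [[max(0.0, v) for v in row] for row in m]

# One fully-connected layer followed by its activation:
weights = [[0.5, -1.0], [2.0, 0.25]]
inputs  = [[1.0, 2.0]]
print(vector_unit_relu(tensor_unit_matmul(inputs, weights)))  # [[4.5, 0.0]]

The point of the cache-coherent integration described above is that both stages see the same memory directly, with no DMA choreography between them.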

The Motivation and Back-Story for the Announcement


I spent some time with Semidynamics Founder and CEO Roger Espasa at the Summit. Dr. Espasa is not your typical high-tech startup CEO. He has been teaching at the Universitat Politècnica de Catalunya in Barcelona for over 31 years. This work has brought him close to a rich palette of innovation and an array of venture funding sources. Before founding Semidynamics over seven years ago, he was chief architect at Esperanto Technologies, technical director at Broadcom, a principal engineer and silicon architect at Intel, and an architect at Compaq. Roger has done the job of many of his customers. This gives him a unique ability to understand what they need, and his substantial exposure to innovation allows him to see the path forward with a unique level of insight. He explained that he can dig into the challenges of Semidynamics customers and help to develop unique, focused and highly efficient solutions. And this focus on problem-solving and making its customers successful is what created the new architecture that was announced and demonstrated at the show.

As Roger explained, customers told him they loved the Vector Unit architecture Semidynamics provided, but, for challenging AI workloads there was always a need for more. That request “for more power” is what led Roger and his team to add the Tensor Unit to help address the need for vast processing power. Roger pointed out this enhancement was delivered as part of the RISC-V ISA, making it easier to integrate the Vector Unit, Gazzillion technology and now the Tensor Unit into AI workloads. Ease of programming was a key feature, and Roger’s years of system design experience helped to drive that delivery. He explained that his customers are making huge investments in the SoC that will bring their new product idea to life. Their company is on the line, and Semidynamics intends to remove as many barriers as possible to enable success. This attitude means embracing special requirements, and that’s how Semidynamics founder and CEO Roger Espasa introduces extreme customization.

Also Read:

Deeper RISC-V pipeline plows through vector-scalar loops

RISC-V 64 bit IP for High Performance

Configurable RISC-V core sidesteps cache misses with 128 fetches


Rugged Security Solutions For Evolving Cybersecurity Threats
by Kalar Rajendiran on 11-30-2023 at 6:00 am


Secure-IC is a global leader in end-to-end cybersecurity solutions, specializing in the domain of embedded systems and connected devices. With an unwavering commitment to pushing the boundaries of security innovation, Secure-IC has established a remarkable track record. Its credentials include active involvement in new standards development, extensive thought leadership, and a portfolio of more than 200 patents. The company's expertise spans the entire spectrum of cybersecurity, from cutting-edge cryptographic solutions to comprehensive testing frameworks, ensuring the safeguarding of digital assets against evolving threats. Secure-IC's notable flagship product, Securyzr, exemplifies its dedication to adaptable solutions that meet dynamic cybersecurity demands, making it a trusted partner in securing the digital sphere. Secure-IC recently made a couple of exciting announcements, one relating to eShard and the other relating to MediaTek's flagship smartphone chip, the Dimensity 9300.

Ruggedness of a Security Technology

The ruggedness of a security solution is directly related to the ruggedness of the security technology upon which the solution is based. It goes without saying that this ruggedness can only be established and ascertained by comprehensive testing of the security technology itself. The recent agreement between eShard and Secure-IC highlights how committed Secure-IC is to offering rugged security technology. The company announced a strategic acquisition of a patent portfolio from eShard, a renowned pioneer in advanced security testing.

eShard

Known for its state-of-the-art solutions, software tools, and expert services, eShard specializes in providing comprehensive testing frameworks that enable the scalability of security testing. eShard’s expertise extends across various domains, including chip security testing, mobile application security testing, and system security testing. With a legacy marked by innovation and a robust patents portfolio, eShard continues to be a trailblazer in the field of advanced security, contributing significantly to the defense against cyber threats.

Secure-IC’s Acquisition of eShard Patents Portfolio

This marks a significant milestone in Secure-IC's commitment to pushing the boundaries of security innovation and reinforcing its leading position in the embedded cybersecurity industry. With this acquisition, Secure-IC has expanded its patent portfolio to more than 250 patents across approximately fifty international patent families. The integration of the eShard patent portfolio substantially reinforces Secure-IC's existing tunable cryptography product offering. Cybersecurity is a very dynamic field where adaptability and innovation are essential, and this partnering will help secure the entire lifecycle of connected devices. This aspect takes on even more importance in the context of the European Union Cyber Resilience Act (EU CRA) coming into force, which mandates that critical products be designed secure and kept secure for an extended duration.

Secure-IC’s Securyzr iSE 900 as Trusted Anchor and Root of Trust

Secure-IC recently announced that its embedded cybersecurity solution Securyzr iSE (integrated secure element) 900 was integrated into MediaTek's new flagship smartphone chip, the Dimensity 9300. This collaboration represents a significant leap forward in the realm of embedded systems and connected devices, setting new standards for security and performance. What sets Securyzr apart is its dual function as the Trusted Anchor and Root of Trust, allowing sensitive processes and applications to run in an isolated, secure area. This Secure Enclave plays a pivotal role in safeguarding critical operations throughout a device's lifecycle, including Secure Boot, Firmware Updates, Key Management, and Cryptographic Services. Its continuous monitoring capabilities ensure resilience against potential disruptions, such as Cyber Physical Attacks, thereby mitigating potential threats with utmost reliability. As a result, the Dimensity 9300 is able to guard against evolving cybersecurity threats, setting a new standard for secure mobile devices.

Summary

Embedded within the main System-on-Chip (SoC), Securyzr offers a comprehensive suite of services to its host system, ranging from secure boot and cryptographic services to key isolation and anti-tampering protection. What sets Securyzr iSE 900 apart is its dual computation-and-strong-isolation aspect, providing an additional layer of security that surpasses traditional trusted execution environments.

Secure-IC’s expertise spans the entire spectrum of cybersecurity, from cutting-edge cryptographic solutions to comprehensive testing frameworks, ensuring the safeguarding of digital assets against evolving threats. The company’s flagship product, Securyzr, spotlights its dedication to adaptable solutions that meet dynamic cybersecurity demands, making it a trusted partner in securing the digital sphere.

For more details, visit the Securyzr product page.

Also Read:

Cyber-Physical Security from Chip to Cloud with Post-Quantum Cryptography

How Do You Future-Proof Security?

Points teams should consider about securing embedded systems


SystemVerilog Has Some Changes Coming Up
by Daniel Payne on 11-29-2023 at 10:00 am


SystemVerilog came to life in 2005 as a superset of Verilog-2005. The last IEEE technical committee revision of the SystemVerilog LRM was completed in 2016 and published as IEEE 1800-2017.

Have the last seven years revealed any changes or enhancements that maintain SystemVerilog's relevance and effectiveness in the face of rapidly evolving technology? Why yes! Engineers continually want more features, improved clarity in the specification, and fixes to previous versions.

In 2019, the technical committee started work on the proposed standard P1800-2023, with a plan for final publication in 2024. The 1800-2023 standard benefits from hundreds of corrections, clarifications, and enhancements to the LRM that keep the language current. Dave Rich from Siemens EDA wrote a nine-page paper going into the details of some of these changes. In this article, I'll highlight just a few of the enhancements discussed in his paper.

Enhancements

Covergroups are being extended to support inheritance, so an extended class can override or extend the coverpoints of its base class. The new syntax will allow you to write a class with covergroups like this:

class pixel; // original base class
  bit [7:0] level;
  enum {OFF,ON,BLINK,REVERSE} mode;
  covergroup g1;
    a: coverpoint level;
    b: coverpoint mode;
  endgroup
  function new();
    g1 = new;
  endfunction
endclass

class colorpixel extends pixel; // extended covergroup in extended class
  enum {red,blue,green} color;
  covergroup extends g1;
    b: coverpoint mode { // override coverpoint 'b' from the base class
      ignore_bins ignore = {REVERSE};
    }
    cross a, color; // 'a' comes from the base class
  endgroup
endclass

Arrays will now allow you to cast their elements to a new type and operate on each element, as in these examples:

int A[3] = {1,2,3};
byte B[3];
int C[3];
// assigns and casts an array of int to an array of byte
B = A.map() with ( byte'(item) );
// increments each element of the array (use b instead of item)
B = B.map(b) with ( b + 8'b1 );
// B becomes {2,3,4}
// add two arrays, element by element
C = A.map(a) with ( a + B[a.index] );
// C becomes {3,5,7}

The `ifdef directive will support Boolean expressions in parentheses, reducing the number of lines required, like this:

`ifdef (A && B)
 // code for AND condition 
`endif 
`ifdef (A || B) 
// code for OR condition 
`endif

Multi-line strings are supported using a triple quote syntax:

string x = """
This is one continuous string.
Single ' and double " can be placed
throughout, and only a triple quote will end it.
""";

Real number modeling has been added to better model AMS designs. The syntax for using real numbers with a covergroup looks like:

coverpoint r {
  type_option.real_interval = 0.01;
  bins b[] = {[0.75:0.85]};
  // 10 bins:
  // b[0] 0.75 to less than 0.76
  // b[1] 0.76 to less than 0.77
  // ...
  // b[9] 0.84 to less than or equal to 0.85
}

With the chaining of method calls, you can use a function call's result directly, for example to select a member of the returned object. Here's an example:

class A;
  int member = 123;
endclass
module top;
  A a;
  function A F(int arg = 0);
    int member; // static variable, uninitialized value 0
    a = new();
    return a;
  endfunction
  initial begin
    $display(F.member);   // 0 - no "()": Verilog hierarchical reference to F's local variable
    $display(F().member); // 123 - with "()": implicit call, member of the returned object
  end
endmodule

There’s now support for adding a static qualifier to a formal ref argument, assuring that the actual argument has a static lifetime.

module top;
  function void monitor(ref static logic arg);
    // the reference to arg only becomes legal with the static qualifier
    fork
      forever @(arg) $display("arg %b changed at time %t", arg, $realtime);
    join_none
  endfunction
  logic C;
  initial monitor(C);
endmodule

Summary

SystemVerilog users will reap the benefits of staying current with the proposed changes coming for the language. If your favorite features weren’t proposed for this release, then why not get involved with the technical committee to have your voice heard and make SystemVerilog even better for the next version.

Read the complete nine-page paper from Dave Rich at Siemens EDA.



A Complete Guidebook for PCB Design Automation
by Kalar Rajendiran on 11-29-2023 at 8:00 am


Printed Circuit Boards (PCBs) are the foundation of modern electronics, and designing them efficiently is complex. Design automation and advanced PCB routing have transformed the process, making it faster and more reliable. Design automation streamlines tasks, reduces errors, and ensures consistency. Advanced PCB routing combines auto-routing and manual routing for efficiency, optimizes layer stacking, controls via placement, and handles differential pair routing.

Siemens EDA has published an eBook on PCB design automation covering problems with legacy PCB design methodologies, constraint management in PCB design, PCB component placement and routing, design reuse, and automating manufacturing output for PCB board fabrication. Following is an overview of the eBook.

Problems with Legacy PCB Design Methodologies

Legacy PCB design methodologies are struggling to meet the demands of modern electronics development. They are ill-suited for complex products, shorter timelines, reduced budgets, and limited resources. Manual data manipulation is error-prone and time-consuming, hindering integration between design tools. Communication bottlenecks between engineering and other disciplines, involving physical document exchange, no longer work in today’s fast-paced design environment. To keep up with evolving industry standards, more efficient and streamlined PCB design processes are essential.

Constraints-Management in PCB Design

Constraint management is a vital practice in PCB design, streamlining the process and reducing the need for extensive back-and-forth communication between engineers and designers. Constraint-driven methodology has become a best practice, allowing for the systematic management of constraints and introducing standardization. Constraint templates, which can be reused and adjusted for specific projects, save time and maximize existing data utilization.

This approach offers control over electrical and physical rules, aligning the design with the final product’s requirements. Design constraints ensure quality is integrated from the outset, eliminating the need for costly post-design quality checks. Automated constraint entry in Siemens EDA’s Xpedition simplifies the process and ensures adherence to predefined parameters, enhancing the potential for design success by consistently meeting specified requirements and constraints.
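As a generic illustration of the constraint-template idea (field names and values below are hypothetical and are not Xpedition's actual constraint format), a template captures proven rules once and each project overrides only what it must:

# Generic illustration: a reusable constraint template with per-project
# overrides. Field names and values are hypothetical.

DDR4_TEMPLATE = {
    "trace_width_mil": 4.0,
    "diff_pair_gap_mil": 6.0,
    "max_length_mismatch_mil": 25.0,
    "impedance_ohm": 40.0,
}

def apply_template(template, **overrides):
    """Start from proven constraints; adjust only what this project needs."""
    constraints = dict(template)
    constraints.update(overrides)
    return constraints

# Reuse the template on a new board, tightening only the length matching:
board_a = apply_template(DDR4_TEMPLATE, max_length_mismatch_mil=10.0)
print(board_a["max_length_mismatch_mil"], board_a["impedance_ohm"])  # 10.0 40.0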

PCB Component Placement

3D component planning and placement are pivotal in achieving a “correct-by-construction” PCB layout while considering electro-mechanical constraints. Clusters, defined groups of components within a circuit, play a crucial role in simplifying and optimizing placement. They enable efficient extraction, version control, and reuse of component groups, enhancing connectivity and flow. Clusters also support nested structures, allowing for unique rules within groups, streamlining component placement. In the Xpedition environment, clusters can be further enhanced with additional elements like mounting holes and seed vias, providing greater visibility and control over the PCB design process and improving design quality.

PCB Routing

Modern PCB design tools offer various routing approaches, including manual, interactive, semi-automated, and fully automated methods, improving the design process’s efficiency. In the Xpedition flow, advanced routing features like Sketch Routing and Sketch Planning provide user-friendly automation for high-quality, fast routing. These tools mimic human decision-making, allowing designers to experiment with autorouting and modifications until they achieve the desired outcome, enhancing PCB routing efficiency.

Additionally, advanced routing tools like the “hug router” help manage stubborn nets without overhauling the design. It’s particularly useful for routing single net lines in pre-routed designs. The “plow routing” feature aids in handling challenging remaining nets, reducing time and effort. For specialized signal requirements in analog and RF traces, “hockey stick” or segment routing offers precise control over routing paths, improving routing precision and efficiency in PCB design.

Design Reuse in PCB design

Efficiency in PCB design can be greatly improved through the practice of PCB design reuse. This strategy involves leveraging previously approved circuitry or IP in various designs, saving time and reducing project risks. It eliminates redundant efforts, allowing the reuse of reliable components and layouts. True design reuse is more than traditional copy-pasting; it involves applying entire layouts, like multi-layer circuit stacks, saved in the library for future use, saving significant time compared to manual recreation. In platforms like Xpedition, creating and managing reuse modules is seamless, simplifying sharing and tracking deployment, making PCB design reuse an invaluable strategy in electronics design.

Automating Manufacturing Output for PCB Board Fabrication

Once a PCB design is fully complete and successfully passes various assessments, the focus turns to preparing for board fabrication and assembly manufacturing. Automation is key in this phase, eliminating redundancy and saving time in generating output files like ODB++, Gerber data, GENCAD data, and more. It ensures consistency in output generation, customizability to meet standards, and correctness and quality in the content. In contrast to manual methods, automation streamlines the process and provides a reliable foundation for successful printed circuit assembly production by fabricators and manufacturers.

Summary

In the world of PCBs, the cost of board respins due to human errors is significant in terms of both time and money. Early error detection is crucial, as errors discovered later become more expensive to rectify. Adopting a “correct-by-construction” design approach and leveraging automation tools such as Siemens EDA’s Xpedition Enterprise are very important.

To learn more, download the eBook guide for PCB Design Automation.

Getting educated on PCB design automation tools is also essential to avoid schedule disruptions due to a lack of knowledge on how to effectively use the tools. Siemens EDA offers training sessions, including on-demand training, expert-led webinars, and on-site visits with application engineers, to empower designers to harness the full potential of automation and streamline PCB design processes efficiently.

For more information on PCB design automation, visit:

https://eda.sw.siemens.com/en-US/pcb/engineering-productivity-and-efficiency/design-automation/

To request an on-site training, visit:

https://resources.sw.siemens.com/en-US/talk-to-an-expert-about-xpedition

Also Read:

Uniquely Understanding Challenges of Chip Design and Verification

Successful 3DIC design requires an integrated approach

Make Your RISC-V Product a Fruitful Endeavor


ML-Guided Model Abstraction. Innovation in Verification
by Bernard Murphy on 11-29-2023 at 6:00 am


Formal methods offer completeness in proving functionality but are difficult to scale to system level without abstraction and cannot easily incorporate system aspects outside the logic world such as in cyber-physical systems (CPS). Paul Cunningham (Senior VP/GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO and now Silvaco CTO) and I continue our series on research ideas. As always, feedback welcome.

The Innovation

This month's pick is Next-Generation Software Verification: An AI Perspective, an article published in the May-June 2021 issue of IEEE Software. The author is from the University of Ottawa.

The author presents the research described in this paper as an adaptation of the CEGAR method for developing abstractions to be used in system-level analysis. A key difference between the methods in building an abstraction is that CEGAR uses model checking (formal methods) to build and refine an abstraction, whereas the author's flow (ARIsTEO) uses simulation under ML supervision for this purpose. This is an interesting and complementary approach for abstracting logic of course, but it has the added merit of being able to abstract analog, mechanical, or other non-logic systems that can be simulated in some other manner, for example through Simulink.

Paul’s view

Last month we looked at generating abstractions for analog circuits to simulate much faster while still being reasonably accurate. This month we take the analog abstraction theme further into the world of cyber-physical systems. These are essentially software-level models of analog control systems with sensors and actuators defined in Matlab Simulink, for example, a smart home thermostat, automotive controllers (powertrain, transmission etc.), or navigation systems (e.g. satellite).

Complexity of these cyber-physical systems is rising, with modern commercial systems often consisting of thousands of individual Simulink building blocks, resulting in simulation times for verification even at this level of abstraction becoming problematic. The author of this month’s paper proposes using machine learning to address the problem, realized in a verification tool called Aristeo. The paper is more of an editorial piece drawing some parallels between Aristeo and model checking. To understand Aristeo itself, I found it best to read her ICSE’20 publication.

Aristeo works by building an abstraction for the cyber-physical system, called a “surrogate”, that is used as a classifier on randomized system input sequences. The goal of the surrogate is to predict if a randomized input sequence is likely to find a bug. Sequences selected by the surrogate are applied to the full model. If the full model passes (false positive) then the model is incrementally re-trained, and the process continues.

The surrogate is built and trained using the Matlab system identification toolbox. This toolbox supports a variety of abstractions, both discrete and continuous time, and provides a system to train model parameters based on a set of example inputs and outputs. Models can range from simple linear functions or time-domain transfer functions to deep neural networks.
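A minimal sketch of that loop, under loudly stated assumptions: full_model_fails stands in for the expensive Simulink simulation, while the linear surrogate and the crude retraining rule stand in for the system identification toolbox fit. None of this is Aristeo's actual code:

# Sketch of surrogate-guided test selection: screen random inputs with a
# cheap surrogate, run only promising ones on the full model, and retrain
# the surrogate on false positives. All names and thresholds are invented.
import random

def full_model_fails(seq):                 # stand-in for the full simulation
    return sum(seq) > 45                   # stand-in "requirement violation"

def surrogate_score(seq, weights):         # cheap linear surrogate
    return sum(w * x for w, x in zip(weights, seq))

def retrain(weights, seq, lr=0.01):        # lower scores for inputs that passed
    return [w - lr * x for w, x in zip(weights, seq)]

weights = [1.0] * 5                        # initial (untrained) surrogate
bugs = []
for _ in range(200):
    seq = [random.uniform(0, 10) for _ in range(5)]
    if surrogate_score(seq, weights) < 40:
        continue                           # surrogate predicts "unlikely to fail"
    if full_model_fails(seq):
        bugs.append(seq)                   # true positive: a real failure
    else:
        weights = retrain(weights, seq)    # false positive: refine surrogate
print(f"failing sequences found: {len(bugs)}")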

Aristeo results are solid: 20% more bugs found with 30% less compute than not using any surrogate. Interestingly, the most effective surrogate across a range of credible industrial benchmarks was not a neural network; it was a simple function where the output at timestep t is a linear function of all input and output values from t-1 to t-n. The author makes a passing comment that the purpose of the surrogate is not to be accurate but to predict if an input sequence is buggy. These results and observations align with our own experience at Cadence using machine learning to guide randomized UVM-based logic simulations: our goal is not to train a model that predicts circuit behavior; it's to train a model that predicts if some randomized UVM-sequence will find more bugs or improve coverage. So far, we have likewise found that complex models do not outperform simple ones.
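In symbols (our notation, not the paper's), that winning surrogate is essentially an ARX-style model, with coefficients $a_i$ and $b_i$ fitted by the system identification toolbox and history depth $n$:

\hat{y}_t = \sum_{i=1}^{n} a_i \, y_{t-i} + \sum_{i=1}^{n} b_i \, u_{t-i}

where $u$ is the input sequence applied to the system and $y$ is its output.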

Raúl’s view

For a second month in a row, we review a paper which is quite different from what we have covered before in this blog. This time, the topic is a new artificial intelligence (AI)-based perspective on the distinctions between formal methods and testing techniques for automated software verification. The paper is conceptual, using the ideas presented to offer a high-level perspective.

The author starts by observing that "for the most part, software testing and formal software verification techniques have advanced independently" and argues that "we can design new and better adaptive verification schemes that mix and match the best features of formal methods and testing". Both formal verification and testing are posed as search problems, and their virtues and shortcomings are briefly discussed in the familiar terms of exhaustiveness and flexibility. The proposed framework is based on two systems, CEGAR (counterexample guided abstraction and refinement) and ARIsTEO (approximation-based test generation). In CEGAR, the model of the software being verified is abstracted and then refined iteratively using model checking to find bugs; if a bug is spurious, it is used to refine the abstract model, until the model is sufficiently precise to be used by a model checker to verify or refute a property of interest. ARIsTEO works similarly, but it uses a model approximation and then search-based testing to find bugs. Again, if a bug is spurious it is used to refine the model; refinement is simply retraining with additional data, and the refinement iterations continue until a nonspurious failure is found.

This work was done in the context of and inspired by cyber-physical systems (CPS), where complex industrial CPS models could not be handled properly by existing formal verification and software testing. The author concludes by expressing her hope that "the testing and formal verification communities will eventually merge to form a bigger and stronger community". Mixing formal and simulation-based techniques to verify hardware has been common practice for a long time.