
Are You Aware of the Risks Related to Soft Errors?
by Admin on 07-10-2023 at 6:00 am


Soft errors corrupt stored data and cause transient malfunctions in electronic systems. They occur mainly when radiation particles strike semiconductor devices. Soft errors are a concern in every environment, whether in space, in the atmosphere, or on the ground.

Soft errors are a critical concern in high-reliability applications such as automotive, aerospace, medical, and high-performance computing. A Single-Event Upset (SEU) or Multi-Bit Upset (MBU) can cause data corruption or software crashes with severe consequences for these applications.

For example, in 2003, a soft error in a voting machine used in Belgium's elections added 4,096 extra votes to one candidate's tally (Ref 1). Soft error problems have manifested themselves for decades (Ref 2), and they will only grow as chips become larger, denser, and more functional.

IROC solutions for Soft Error assessment in your design:

IROC Technologies specializes in providing best-in-class EDA solutions and test/consulting services for soft error analysis and mitigation. With our expertise and know-how, we help our customers gain confidence in their chips' resilience to radiation effects.

  • TFIT®: predicts FIT rate at the cell level and provides help for optimal hardening solutions
  • SoCFIT®: analyzes propagation of the soft errors at SoC level and provides mitigation strategies
  • SERTEST: provides radiation particle testing in world-class international laboratories
  • SERPRO: provides consulting for SER management and mitigation for complex systems

IROC helps customers throughout all phases of chip design and development. Using IROC EDA solutions in a standard EDA flow early in the design cycle reduces costs and saves time while enhancing the reliability of your chip. IROC design consulting and radiation testing can help optimize and verify your designs for the lowest achievable SER.

Customers & Partners:

More than 130 companies, including over 50% of the top semiconductor companies, have benefited from IROC's deep experience in radiation effects. Over its 23-year history, IROC has nurtured partnerships and long-term relationships with major foundries such as TSMC, Samsung, and GlobalFoundries, as well as with organizations such as the European Union and CEA. Foundries, in particular, have used IROC EDA products to enhance PDKs to help reduce soft error problems in their customers' designs.

Who benefits from IROC solutions?
  • TFIT®
    • cell developers such as foundries, IP vendors, and custom cell designers
    • reliability engineers looking for raw soft error rates of basic cells in their chip design
  • SoCFIT®
    • reliability engineers targeting chip reliability problems caused by radiation
    • designers who wish to analyze and mitigate propagation of soft errors and reduce the Soft Error Rate (SER) at SoC-level
  • SERTEST/SERPRO:
    • chip and system providers who want to verify the reliability of their chip
    • chip and system providers who seek expert advice to optimize the radiation related reliability of their designs

Examples of IROC customers' use cases:

Automotive: TFIT was used to harden Flip-Flops. TFIT provided analysis and suggestions for schematic and layout changes. The customer was able to achieve clean triple modular redundancy implementation and discovered a potential weakness related to angular impacts.

Aerospace: The SERPRO team is working actively with the European Space Agency (ESA), recently completing a project in 16nm FinFET. Additional projects on even more advanced nodes are in the pipeline.

Medical: The SERTEST team successfully analyzed and identified the root cause of a critical issue in a pacemaker application, resulting in improved qualification and testing procedures for the customer.

HPC: A customer was able to lower their Failure in Time (FIT) rate 7X by using SoCFIT on several complex digital circuits, including 40 Mbit of SRAM, 1.7M flip-flops, and 13M combinational gates.
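
To put numbers like these in perspective, chip-level soft error rates are commonly estimated by summing the per-cell FIT contributions of each resource type, weighted by derating factors. The sketch below illustrates that roll-up in Python; the per-cell FIT values and derating factors are hypothetical placeholders for illustration only, not IROC or customer data.

```python
# Illustrative chip-level SER roll-up (hypothetical numbers, not tool output).
# FIT = failures per 1e9 device-hours.

resources = {
    # name: (instance count, assumed raw FIT per instance, assumed derating factor)
    "sram_bits":   (40e6,  1e-4, 0.10),   # architectural + timing derating
    "flip_flops":  (1.7e6, 2e-4, 0.15),
    "combo_gates": (13e6,  1e-6, 0.05),   # combinational SETs are heavily masked
}

total_fit = 0.0
for name, (count, fit_per_cell, derating) in resources.items():
    contribution = count * fit_per_cell * derating
    total_fit += contribution
    print(f"{name:12s}: {contribution:10.1f} FIT")

print(f"chip total  : {total_fit:10.1f} FIT")
# Mean time between soft errors, in hours, is 1e9 / total FIT.
print(f"MTBF (hours): {1e9 / total_fit:10.0f}")
```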

Shi-Jie WEN, Distinguished Engineer, Advanced Silicon Technologist at CISCO Systems said: “Cisco benchmarked TFIT with results of tests on silicon for several designs and other tools. The correlation between the simulation results and test is impressive for this particular process node – TSMC 40nm. Cisco is committed to continue our correlation work with TFIT on the other Si technology nodes. TFIT stands as one of the best commercially available simulation tool offered to the industry for soft error simulation.”

If you would like to know more about IROC Technologies and our offerings, please do not hesitate to contact info@iroctech.com or visit www.iroctech.com.

Ref 1: https://en.wikipedia.org/wiki/Electronic_voting_in_Belgium

Ref 2: https://www.computerworld.com/article/2584471/q-a–mcnealy-defends-sun-reliability–personal-privacy-views.html

Also Read:

CEO Interview: Issam Nofal of IROC Technologies


Mirabilis Invites System Architects at DAC 2023 in San Francisco
by Daniel Payne on 07-07-2023 at 10:00 am


System architects have the difficult task of choosing the most efficient architecture by exploring alternative approaches while tracking and testing requirements. A Model-Based Systems Engineering (MBSE) approach is recommended to achieve these goals before getting mired in low-level implementation details like RTL code. Mirabilis is an EDA vendor at DAC this year that promotes an MBSE methodology through its VisualSim tool. I spoke with Deepak Shankar, founder of Mirabilis Design, by phone this week to get a preview of what they're doing at DAC.

The three big messages from Mirabilis at DAC this year that will interest system architects are:

1. Integration with Model Based Systems Engineering

Architects can import their existing SysML models into the VisualSim tool. The adoption of SysML started in the defense community, and now even system-level semiconductor companies are using it. SysML can manage your code, but it cannot predict what the performance and power will be when mapping to hardware like an SoC. SysML cannot reveal cache contention because it has no notion of latency. Using VisualSim with MBSE supplies the requirements data, allowing you to measure latency, see buffer occupancy, and even track radiation requirements.

VisualSim Architect

2. Addition of RISC-V modeling environment

RISC-V is a very popular Instruction Set Architecture (ISA) that can be extended for specific domains, but how would you compare power and performance between, say, a SiFive core and an Arm architecture? And how would you run system-level benchmarks?

Mirabilis provides pre-built models of RISC-V IP, and you can even create your own custom RISC-V architecture. For the Network-on-Chip (NoC), you can choose from models of the Arteris or Arm NoCs. In VisualSim Architect you can run RISC-V benchmarks and see what happens if your data is in cache or must be fetched from external RAM. You can also measure performance and identify architectural bottlenecks. This analysis lets you evaluate RISC-V cores for use inside an SoC, or model an SoC to give to customers so they can start planning the architecture of their end products. VisualSim allows an engineer to understand the pipelines and the entire system in one platform.

3. Created a new packaging mechanism

A RISC-V core vendor could package its high-level model before product development even starts, use it inside an SoC, and then hand it off to the end customer to see how the full system works. A semiconductor vendor, a tier-1 supplier, and their customer can all share the same executable models. Denso, for example, shares models of its automotive ECUs as a tier-1 supplier to automotive companies, and it saved 40% of development time by using VisualSim.

DAC Paper

Mirabilis has a DAC paper on system modeling and failure analysis in avionics, in conjunction with a US Defense application.

Attend this presentation on Tuesday, July 11th, from 2:24pm to 2:42pm PDT in room 2008 on Level 2, as part of the Embedded Systems and Software track.

DAC Booth

You will find Mirabilis in booth #2217 on the 2nd floor of Moscone West, where three technical people will be available to talk with. While other vendors tend to show only PowerPoint slides, you will instead see a live demo of the VisualSim Architect tool in action; now that's confidence.

You may also sign up for a private discussion in their suites by requesting online here.

Summary

System design and system modeling have become much easier and more accurate, and by using system-level IP your system architects can analyze quickly and explore more thoroughly than with other methods. At this level of abstraction, your team can uncover architectural bottlenecks before detailed RTL implementation starts.

Enjoy some chocolate at the booth, I’ve heard that it’s quite tasty.

Related Blogs


CEO Interview: Ashraf Takla of Mixel
by Daniel Nenni on 07-07-2023 at 6:00 am


Ashraf Takla is Founder and CEO of Mixel Inc., which he founded in 1998 and which is headquartered in San Jose, CA. Mixel is a leading provider of mixed-signal IP and offers a wide portfolio of high-performance mixed-signal connectivity IP solutions. Mixel's mixed-signal portfolio includes PHYs and SerDes, such as MIPI D-PHY, MIPI M-PHY, MIPI C-PHY, and LVDS, plus many dual-mode PHYs supporting multiple standards. Before founding Mixel, Mr. Takla was Director of Mixed-Signal Design at Hitachi Micro Systems. He has over 40 years of experience in analog and mixed-signal design and holds 8 patents.

Tell us a bit about Mixel history and what lies ahead?

Mixel was founded in 1998 with one focus: to develop best-in-class mixed-signal IP. That is where our name comes from: Mixel, for Mixed-Signal Excellence. This year we are celebrating our 25th anniversary. Originally, we created a very wide portfolio of mixed-signal IP: PLLs, SerDes, PHYs, ADCs, DACs, and transceivers. As we grew, we focused more and more on PHYs and SerDes. In 2000, we had one of the first multi-standard SerDes in the industry, running at 4.25Gbps. When MIPI came along, we were ready to jump in since we already had an IP that supported the MDDI standard, a predecessor to MIPI. Since then, we have had great success with our MIPI portfolio and plan to duplicate that success in adjacent segments.

There are a lot of competitors in the IP space. How is Mixel different?

We have a 25-year track record of delivering differentiated IP while consistently achieving first-time silicon success. We have a saying at Mixel: first-time success is the rule, no exceptions. This has been the case for every generation and port of our IP.

We work closely with our customers, listen to their challenges, and address them by coming up with unique and differentiated solutions that set their products and our IPs above our competition.

Based on that, we develop proprietary implementations to address unique challenges for certain market segments. As an example, when we started addressing automotive applications over 10 years ago, it became clear that testability is a key challenge. We developed and patented an implementation that uniquely addresses this challenge, and it is now very widely adopted by our customers. This is a differentiated solution that only Mixel provides.

Over the last 25 years, we have developed a robust design methodology that encompasses all our engineering activities. This approach enables us to create different configurations of our IPs quickly and with a high level of success. We call that methodology Legorithmic™.

We constantly improve in everything we do. This is not limited to our products; it also extends to our engineering methodology and business practices. Our methodology and quality standards are always evolving and improving. What we consider excellent today might not be good enough tomorrow.

We have an outstanding culture that we call Mixel PRIDE: Partnership, Responsibility, Integrity, Diversity, and Excellence. This culture is one of the cornerstones of Mixel's success.

Most importantly, we treat our customers as partners, because at the end of the day it's all about our customers' and their customers' success.

Who are your customers? How are your customers using your IP, what applications?

We are in the wired communication space. Most of our customers use our PHY IP to transport data at high data rates with the lowest possible power.

We have a very wide and loyal customer base; many of them are repeat customers and have become true partners. They are all over the globe and span the whole spectrum, from the largest tier-one semiconductor giants to small innovative startups, and a whole lot in between. Our IP is widely deployed wherever a system incorporates sensors/cameras or displays, such as mobile, automotive, VR/MR/XR headsets, IoT, industrial, and medical devices and platforms. You can see some of our customers on our website, as well as several customer demos on Mixel's YouTube page.

What is next for Mixel?

While we have been developing IP for automotive for 10 years, we continue to see increasing adoption in automotive. We were ISO 9001 certified in 2019, and in 2021 we achieved ISO 26262 certification, with our process certified up to ASIL-D and multiple configurations of our IP certified up to ASIL-B. Automotive is a focus area for us.

While we continue to grow our MIPI portfolio, our market share, and our customer base, we are looking to replicate our MIPI success in complementary standards. Many of our loyal customers are encouraging us to address more of their other PHY/SerDes requirements.

We have been selling our test chips in low volumes for a while now, and we are looking into growing that business and leveraging it as an entry into the chiplet business.

We are continually expanding the team and our global presence to address the ever-growing opportunities available to us.

What are you excited about?

Despite the recent high-tech industry challenges, the future of the semiconductor industry and the IP business is bright, particularly in the segments that Mixel is focused on.

I’m very excited about our plans to expand beyond our MIPI focus, and about growing our engineering team globally beyond our current footprint.

In 2021, we announced Mixel’s commitment to our environment, community, and employees as a part of our CSR initiatives. We want to do our part to give back. We are offsetting 100% of the carbon footprint of our global operations and have partnered with several organizations to give back to the local community. It will be exciting to build on those first steps and see that effort come to fruition.

Our customers continue to amaze us with their creativity and innovation. There are many examples. Just this year, we announced that our D-PHY IP is in Teledyne e2v's award-winning Topaz CMOS image sensors. Our MIPI IP is in Hercules Microelectronics' HME-H3 FPGA, the industry's first FPGA to support MIPI C-PHY v2.0.

So, I’m excited about future collaboration with our customers and partners and that together, as part of the ecosystem, we are changing the world in positive ways.

Mixel will be exhibiting its latest customer demos at the Design Automation Conference (booth #1414) on July 10-12, 2023, in San Francisco. Learn more here.

Also Read:

MIPI D-PHY IP brings images on-chip for AI inference

MIPI bridging DSI-2 and CSI-2 Interfaces with an FPGA

MIPI in the Car – Transport From Sensors to Compute


AMIQ: Celebrating 20 Years in Consulting and EDA
by Daniel Nenni on 07-06-2023 at 10:00 am


We’re getting close to the annual July Design Automation Conference (DAC) in San Francisco, and every year I like to make the rounds of the exhibitors beforehand and see what’s new. When I checked with AMIQ EDA, I found that this is a big year for them. Their parent company AMIQ just reached its 20th anniversary, and they’ll be celebrating that accomplishment at DAC this year.

I read a bit about their history and found that AMIQ was founded in 2003—20 years ago indeed—by Cristian Amitroaie, Stefan Birman, and Adrian Simionescu (Simi). I learned that the company was started to provide verification consulting services, but I was more familiar with their electronic design automation (EDA) business. I usually speak with Cristian, now CEO of AMIQ EDA, but for this topic he directed me to Stefan, CEO of AMIQ Consulting, and Simi, R&D Director at AMIQ EDA.

Why did you start AMIQ?

Stefan: There were three primary reasons:

  • To practice engineering within the semiconductor industry and help customers with high quality services and products
  • To give something back to the environment that we sprouted from and to prove it is possible to create value with local capital
  • To work with smart people who have a passion for engineering and to never work for incompetent managers again!

What were your backgrounds?

Simi: All three of us were hands-on engineers working on chip design and verification. We are engineers by education, experience and passion. We had been colleagues for three years when we decided to strike out on our own. Before that, we had our own experiences with different companies’ environments, projects and cultures. Although we shared a common vision, each of us had and still has his own qualities and imperfections, which somehow complement each other.

What are the biggest changes you’ve seen in the industry in those 20 years?

Stefan: There are many. One big change is the shrinking of the semiconductor industry in Europe, with most of it going to China, India, and the U.S. This led our European-based company to adapt quickly to solicit and support a worldwide customer base.

In terms of chip design and verification technology, we’ve been through some very interesting technology waves. We’ve seen the “e” language rising and falling, we’ve used a bit of Vera, and we’ve watched it mix with Verilog into SystemVerilog. We’ve lived through the creation and rise of various verification methodologies, from the e Reuse Methodology (eRM) to the SystemVerilog-based Universal Verification Methodology (UVM).

Have these changes affected your business?

Simi: Certainly, especially since SystemVerilog is now the most widely used of the many languages and formats we support. But perhaps the biggest evolution for us has been the semiconductor industry adopting software processes and tools: a “softwarization” of the hardware world. Some examples include:

  • Agile methodologies have replaced the old waterfall-based management
  • Continuous integration (CI) pipelines have been implemented
  • Git has replaced 80s code versioning technology
  • Engineers have adopted Python as a backbone scripting language
  • Verification, design, and project metrics are now handled by big data frameworks
  • Integrated development environments (IDEs) are now used for SystemVerilog, SystemC, the “e” language, and more

This last point is especially important because that’s what led to us expanding from consulting into EDA. Until 2004, nobody thought of using IDEs for hardware languages. We initially developed Design and Verification Tools (DVT) Eclipse IDE for internal use on consulting projects. We found it so valuable that we established AMIQ EDA in 2008 to make it available to all users. We’ve since expanded to other tools, including our Verissimo SystemVerilog Linter, which eliminates 90% of the code review burden while ensuring compliance with the UVM.

What were the top three challenges you’ve faced?

Stefan: The first challenge, which probably applies to any entrepreneur, is learning as you go. All three of us had engineering backgrounds, so everything else related to business and people we had to learn along the way. Our first employee was our accountant and CFO, Anca Nicolaescu, who is still working with us.  Everything else—programming, marketing, business development, HR, legal paperwork—was done by one or more of the three of us.

Today we have grown to 70 employees with dedicated people for each function or role required by any healthy company. The challenge is to learn as you go, while at the same time still providing value to customers, assuring the quality of products and services, interviewing people, and mentoring new employees, while also having a personal life.

Along those lines, another challenge is organic scaling. You might be surprised to find that the challenge is not to grow, but to refrain from growing. Every time a new employee joins the company it takes time to internalize the AMIQ culture, become part of the team, and start contributing value. Learning technical skills is the least important part of this process; if they were hired it means they already have the skills they need. The challenge is making the new employee part of the team, of the culture, of the vision. You need to refrain from hiring too many people at once if that risks affecting the culture in a negative way.

Stefan: Of course, new people don’t just absorb the existing company culture. They also help to evolve the culture to make possible further scaling and to adapt to changing requirements. So the third challenge is finding and growing the right people. The whole process is time-consuming and personally demanding. At AMIQ, the employees themselves do the screening and interviews, so they are empowered to select the future team members. It is a good way to foster and pass on our culture, and also an opportunity for our employees to grow.

Have you had success and has it matched your expectations?

Simi: Yes, we are successful, both in objectively measurable ways and in more subtle, subjective ways. It is success that AMIQ people have given their best for the last 20 years and that customers ask for more of our products and services. It is success that we have customers who have worked with us for the last 15 years. It is success that employees have chosen to stay with AMIQ for 16 years already. It is success that first-year college students join our internship program and most stay with us after graduation. It is success when you see people growing from fresh graduates to highly skilled professionals, taking up leading roles and growing other people in turn.

Does it match our expectations? Well, we did not start from a business plan done by a professional business consultant with hard bottom lines and hiring quotas and an exit strategy. We dreamed of doing what we knew best, growing a company around that, and getting to a financially stable place in the process. And we are there today, as we speak, with an even brighter future ahead.

I have to say that AMIQ has had quite a journey, perhaps even a heroic one, starting so small and achieving so much. Thank you very much for sharing it with us.

Stefan and Simi: Thank you, Dan, and thank you to all the employees who made AMIQ possible, the customers who entrusted us with their problems, and the families and friends who supported us in our endeavors. It would not have been possible without your patience and your faith in us.

Also Read:

A Hardware IDE for VS Code Fans

Using an IDE to Accelerate Hardware Language Learning

AMIQ EDA Adds Support for Visual Studio Code to DVT IDE Family


Visit with Agnisys at DAC 2023 in San Francisco July 10-12
by Anupam Bakshi on 07-06-2023 at 6:00 am


I’d like to extend an invitation to you and your development team to visit with Agnisys in our booth, #2512, at this week’s Design Automation Conference (DAC) 2023, Monday through Wednesday, July 10-12.

In its 60th year, DAC is recognized as the premier event for the design and design automation of electronic chips to systems, so you can count on team Agnisys to help you solve complex front-end design, verification, and validation problems. Our certified IDesignSpec™ Solution Suite leverages a golden executable specification to capture and centralize registers, sequences, and connectivity for Intellectual Property (IP) and System-on-a-Chip (SoC) projects.

Book time, in advance, with our solutions team to learn how our intuitive user interfaces and standards-based workflows dramatically reduce risk by eliminating development errors while significantly increasing productivity and efficiency through the automatic generation of collateral (output files) for the entire product development team.

And while at our booth, don’t forget to take our Quiz. Winners of this quiz (who score 8 or higher) will receive one of the following prizes: a portable charger, a portable JBL Clip 4 mini speaker, or a JBL Go 3 portable speaker. Everyone who scores 4 or higher is eligible for our end of day drawing for a Creality Resin 3D printer.

I hope you’ll also find time to join me on Tuesday, July 11th, from 11:45AM to 1:00PM in room 3015, Moscone West, for an Accellera-sponsored luncheon and panel titled, “Tackling SoC Integration Challenges.”

With System-on-Chip (SoC) design becoming more and more widespread, the challenge of IP integration – IP created and verified with tools from different vendors – has been rapidly exacerbated. Accellera working groups are tackling these challenges by introducing new standardization initiatives such as the Security Annotation for Electronic Design Integration (SA-EDI) 1.0 Standard focused on helping IP providers identify security concerns, and the new Clock Domain Crossing (CDC) Working Group focused on creating a standard for CDC abstraction models to facilitate faster design IP integration. Agnisys has been an active and strong contributor to various Accellera working groups.

This lunch-time panel will focus on the efforts of the CDC Working Group to define a standard CDC collateral specification. The standard is aimed at easing SoC integration, enabling teams to integrate IPs verified using various CDC tools without sacrificing quality and design time. Panel members from the working group will share the key work in progress and look toward deliverables in the coming year. Attendees will have an opportunity to ask questions. If you’re interested in further details, please contact us or click on the image below to register.

If your development team needs to address the inevitable metastability in a multi-clock-domain design, Agnisys provides a pushbutton solution for clock domain crossings related to register blocks. It helps chip architects and engineers create an executable specification for the control and status registers (CSRs) and automatically generate outputs for the software and hardware teams.

AGNISYS DEMOS AT DAC:

  • IDesignSpec GDI: Capture addressable registers / IP specs in a wide variety of formats + the functional behavior of the registers + the connectivity specification for the entire chip, to reduce development time by 50% and improve quality 1000X
  • IDS-Batch CLI: Command-line capabilities for system development
  • IDS-Verify: Reduce your verification engineers' workload (by over 40%) by automatically generating a UVM-based verification environment and tests + generate assertions to automatically verify the HSI layer + create custom tests for register-related functionality
  • IDS-Validate: Generate C/C++ tests automatically for the hardware-software interface layer to ensure that your product reaches the market flawlessly
  • IDS-Integrate: Construct an SoC from constituent blocks using a connectivity specification in Tcl / Python to help automate your IP-XACT-based packaging flow
  • IDS-IPGen: Specify state machines, data paths, and combinatorial logic in addition to addressable registers to auto-generate the design RTL and the UVM verification environment + generate AI-based tests to ensure faster and more complete code coverage and functional coverage

BY SPECIAL REQUEST – schedule time to discuss these topics by clicking here

  • Batch processing of PSS files for generation of test files
  • Specialized editor for simultaneous editing of PSS / SystemRDL files by multiple team members (generate outputs from it, all in the cloud)
  • AI for automatic generation of tests for designs
  • TLM based SystemC generation
  • AMBA-5 for AXI, AHB and APB
  • Efficient creation of a top-level SoC (IP-XACT 2022)

If you won’t be going to this year’s DAC but would like to learn more about specification automation solutions from Agnisys, we hope you’ll join us for the first in a three-part series of upcoming August webinars.

Be sure to save the date for the next topics in our webinar series:

  • Aug. 17: IP-XACT 2022: What's New
  • Aug. 31: Avoiding Metastability – CDC for Hardware & Software Interface

If you have any questions about correct-by-construction golden specification-based design, please contact us today!

Also Read:

Can We Auto-Generate Complete RTL, SVA, UVM Testbench, C/C++ Driver Code, and Documentation for Entire IP Blocks?

ISO 26262: Feeling Safe in Your Self-Driving Car

DAC 2021 – What’s Up with Agnisys and Spec-driven IC Development

AI for EDA for AI


Defacto Celebrates 20th Anniversary @ DAC 2023!
by Daniel Nenni on 07-05-2023 at 10:00 am


Defacto Technologies specializes in Electronic Design Automation (EDA) software and solutions that help streamline and optimize various stages of the front-end design process. Its tools focus on chip design assembly and integration before logic synthesis, where they jointly manage different design formats: RTL, IP-XACT, UPF, etc.

In preparation for DAC, I had a conversation with Defacto CEO Dr. Chouki Aktouf. Before founding Defacto in 2003, Dr. Aktouf was an associate professor of Computer Science at the University of Grenoble in France and leader of its dependability research group. He holds a Ph.D. in Electrical Engineering from Grenoble University.

I noticed that this year's DAC marks the 20th anniversary of the company; congratulations on your success!

It is a really important year for Defacto and July is an important month. The company was founded in July 2003.

During these 20 years we have proven our added value in the pre-synthesis SoC building process, in particular in reducing design cycles and optimizing PPA. Today, we are proud to count most of the top 20 semiconductor companies as regular users of Defacto's SoC Compiler.

So, after 20 years, we are confident in saying that for many front-end SoC designers our EDA tools have become the "de facto" standard, their "SoC Compiler" for the early SoC design building process, managing RTL, IP-XACT, and design collaterals!

We will celebrate the 20th anniversary at DAC, where we will make several announcements about success stories and major tool capabilities. Several customers will come to our booth (#1541) to share their experience using our solutions and how they are benefiting from them. Many events and surprises are also planned to celebrate this 20th anniversary properly!

In parallel, Defacto announced a major new release of its SoC design solution. Could you please elaborate?

Exactly, this is the 10.0 major release of Defacto's SoC Compiler. This release brings many new features, capabilities, and performance improvements, along with the first customer statements and testimonials from using this release on large SoCs. In summary, the main message we will be sharing at DAC about this major release, beyond the maturity of the Defacto design solution, is how easy it has become for RTL designers and SoC design architects to use. We are simplifying the pre-synthesis SoC building process, from the user's SoC design specification to the generation of the whole package, RTL and design collaterals, ready for synthesis and design verification. This will be the main topic addressed by the Defacto team at DAC.

As I remember, SoC Compiler is an integration tool at the front-end. What about help for back-end designers?

Absolutely! Our EDA market positioning has been clear for two decades. Our design solutions help at the front-end when starting the SoC building process, but the way we manage the RTL and design collaterals is not independent of or uncorrelated with the back-end. Back-end designers can actually provide the tool with physical design information, and the tool will then generate, for example, a top level of the RTL that is physically aware. This physically aware RTL and the related design collaterals can be directly synthesized, which usually leads to better PPA results. In summary, this connection between the front-end and the back-end is where back-end designers, and also SoC design architects, find unique value compared to other EDA tools.

Are the benefits mainly speeding up the design process? PPA? Or both?

Good question. Definitely, speeding up and shortening the design cycle is key, since we provide a high level of automation. But getting better PPA is also an important expectation when using Defacto. What I mentioned earlier about physically aware SoC integration definitely impacts PPA: synthesis and P&R EDA tools will do a better job with it.

In addition, our solution also helps directly optimize PPA by managing RTL connectivity with feedthroughs. Also, during DFT coverage enhancement and test point insertion, our design solution automates the process of exploring and inserting test points at RTL to ensure high coverage with lower area overhead. So, in summary, both PPA and design cycles are addressed when using our design solution.

Do you manage design collaterals like UPF and SDC in addition to the RTL?

This is a major difference between Defacto's solution and the competition. In summary, we don't manage only the RTL when building the SoC and generating the top level; we consider the RTL and the design collaterals at the same time. By that we mean managing incoherency problems between the RTL database, the SDC database, the UPF database, the IP-XACT database, and so on, and also generating missing views to speed up the SoC building process. In other words, the joint management of RTL and design collaterals in a unified way is what makes Defacto's SoC Compiler unique.

I have always known the tool as a way to integrate IPs and build the top level. Is it possible to generate the design for synthesis and simulation tools?

This is exactly what we do. Building, integrating, inserting IPs, and inserting connections are the daily capabilities the tool provides to the user, but we also enable what I mentioned at the beginning: the generation of RTL and design collaterals.

If you need to rely on the tool to translate an SoC specification into the top level, that is now possible. How? Through demos at DAC, we can show users how the tool interoperates with IP configuration tools to shorten the path from the specification to top-level generation. So generation is today a key part of the automation provided by our design solution.

We hear a lot about interest in Python in EDA; do you provide a Python API?

This is quite funny, because in the past few years people have started coming to us saying: "I am a designer, but in my engineering school I was more familiar with Python than Tcl; can you help us?" So the answer is YES. Today we see more and more designers picking up Python and expecting to drive the tool from Python. Why? Because for them it is easier to script in Python.

We fully support Python, and the way we handle Python is 100% object-oriented. People who have a Python culture should visit our booth; they will like the examples that our team will share with them!

Do you provide any checking capabilities like linting?

Checking engines underlie our design solution. When you start building the chip, it's not only about editing or integration features; the tool must provide checks to make sure the building process is reliable and correct by construction. So we have many checking capabilities: basic linting for the RTL and for each of the design collaterals, along with coherency checks between them. Static signoff is also provided for DFT, clocking, and more. Most importantly, all of these checking capabilities can be customized and extended by the user.

After 20 years and the focus on SoC integration, are you still providing DFT within SoC Compiler?

You know, we started with a DFT solution a long time ago, and DFT is still part of the offering. Our DFT solution is among the most mature in the market. We don't really overlap with DFT implementation tools; we provide added value at RTL in terms of DFT signoff, planning, and exploration. So yes, in summary, we are still a key provider of DFT solutions, for both RTL designers and DFT experts.

To find out more about Defacto Technologies, meet them at DAC booth #1541 and check out their website!

Also Read:

WEBINAR: Design Cost Reduction – How to track and predict server resources for complex chip design project?

Defacto’s SoC Compiler 10.0 is Making the SoC Building Process So Easy

Using IP-XACT, RTL and UPF for Efficient SoC Design

Working with the Unified Power Format


Vision Transformers Challenge Accelerator Architectures
by Bernard Murphy on 07-05-2023 at 6:00 am


For what seems like a long time in the fast-moving world of AI, CNNs and their relatives have driven AI engine architectures at the edge. While the nature of neural net algorithms has evolved significantly, they are all assumed to be handled efficiently on a heterogeneous platform processing through the layers of a DNN: an NPU for tensor operations, a DSP or GPU for vector operations, and a CPU (or cluster) managing whatever is left over.

That architecture has worked well for vision processing, where vector and scalar classes of operation don't interleave significantly with the tensor layers. A process starts with normalization operations (grayscale conversion, geometric sizing, etc.), handled efficiently by vector processing. Then follows a deep series of layers filtering the image through progressive tensor operations. Finally, a function like softmax, again vector-based, normalizes the output. The algorithms and the heterogeneous architecture were mutually designed around this presumed lack of interleaving, with all the heavy-duty intelligence handled seamlessly in the tensor engine.

Enter transformers

The transformer architecture was announced in 2017 by Google Research/Google Brain to address a problem in natural language processing (NLP). CNNs and their ilk function by serially applying local attention filters. Each filter in a layer selects for a local feature, such as an edge or a texture. Stacked filters accumulate bottom-up recognition, ultimately identifying a larger object.

In natural language, the meaning of a word is not determined solely by the adjacent words in a sentence; a word some distance away may critically affect interpretation. Serially applied local attention can eventually pick up weighting from a distance, but that influence is weakened. Better is global attention, which looks at every word in a sentence simultaneously so that distance is not a factor in weighting, as evidenced by the remarkable success of large language models.
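
As a concrete illustration of global attention, here is a minimal sketch of scaled dot-product self-attention, the core operation of the transformer, in plain Python/NumPy. The token count, dimensions, and random projection matrices are arbitrary placeholders, not values from any production model.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of token embeddings.

    X          : (n_tokens, d_model) input embeddings
    Wq, Wk, Wv : projections to queries, keys, and values
    Every token attends to every other token, regardless of distance.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])           # (n_tokens, n_tokens)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over all positions
    return weights @ V                                # weighted mix of all tokens

# Toy usage: 6 tokens, 8-dim embeddings (sizes are arbitrary).
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)            # (6, 8)
```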

While transformers are best known from GPT and similar applications, they are also gaining ground rapidly in vision, as vision transformers (ViT). An image is linearized into patches (say 16×16 pixels) and then processed as a string through the transformer, with ample opportunity for parallelization. For each encoder block, the sequence is fed through a series of tensor and vector operations in succession, and this repeats for however many encoder blocks the transformer supports.
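
To make the "image as a string of patches" idea concrete, here is a minimal NumPy sketch of the patch-linearization step at the front of a ViT. The patch size, embedding width, and random projection are illustrative assumptions, not parameters of any specific ViT model.

```python
import numpy as np

def image_to_patch_tokens(image, patch=16, d_model=192):
    """Split an HxWxC image into non-overlapping patches and project each
    flattened patch to a d_model-dim token, ready for a transformer encoder."""
    H, W, C = image.shape
    assert H % patch == 0 and W % patch == 0
    # Rearrange into (n_patches, patch*patch*C)
    patches = (image.reshape(H // patch, patch, W // patch, patch, C)
                    .transpose(0, 2, 1, 3, 4)
                    .reshape(-1, patch * patch * C))
    # Learned in a real ViT; random here just to show the shapes.
    W_embed = np.random.default_rng(0).normal(size=(patch * patch * C, d_model))
    return patches @ W_embed       # (n_patches, d_model) token sequence

tokens = image_to_patch_tokens(np.zeros((224, 224, 3)))
print(tokens.shape)                # (196, 192): 14x14 patches become 196 tokens
```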

The big difference from a conventional neural net model is that here the tensor and vector operations are heavily interleaved. Running such an algorithm is possible on a heterogeneous accelerator, but frequent context switching between engines would probably not be very efficient.

What’s the upside?

Direct comparisons seem to show ViTs achieving accuracy comparable to CNNs/DNNs, in some cases perhaps with better performance. More interesting, however, are other insights. ViTs may be biased more toward topological insights in an image than toward bottom-up pixel-level recognition, which might account for them being more robust to image distortions or adversarial hacks. There is also active work on self-supervised training for ViTs, which could greatly reduce training effort.

More generally, new architectures in AI stimulate a flood of new techniques, already apparent in the many ViT papers of just the last couple of years. This means that accelerators will need to be friendly to both traditional and transformer models. That bodes well for Quadric, whose Chimera General-Purpose NPU (GPNPU) processors are designed to be a single-processor solution for all AI/ML compute, handling image pre-processing, inference, and post-processing all in the same core. Since all compute is handled in a single core with a shared memory hierarchy, no data movement is needed between compute nodes for different types of ML operators. You can learn more HERE.

Also Read:

An SDK for an Advanced AI Engine

Quadric’s Chimera GPNPU IP Blends NPU and DSP to Create a New Category of Hybrid SoC Processor

CEO Interview: Veerbhan Kheterpal of Quadric.io


Semiconductors, Apple Pie, and the 4th of July!
by Daniel Nenni on 07-04-2023 at 6:00 am


Semiconductors, apple pie, and the 4th of July are American traditions. You can read about the history of semiconductors in our book “Fabless: The Transformation of the Semiconductor Industry” and the history of Arm in our book “Mobile Unleashed: The Origin and Evolution of Arm processors in our Devices“. Rather than reading about apple pie just go and get some!

July 4th, also known as Independence Day, is a national holiday in the United States that commemorates the country’s independence from Great Britain. It is celebrated annually on July 4th and holds great historical significance. Here are some key points about July 4th from ChatGPT:

  1. Declaration of Independence: On July 4, 1776, the Second Continental Congress adopted the Declaration of Independence, which declared the American colonies’ separation from British rule. This document, drafted primarily by Thomas Jefferson, outlined the principles of liberty, equality, and self-governance that the United States was founded upon.
  2. Independence Day: July 4th marks the anniversary of the adoption of the Declaration of Independence and is considered the birthday of the United States. It is a federal holiday, and most businesses, government offices, and schools are closed to observe the occasion.
  3. Patriotic Symbols: During July 4th celebrations, you’ll commonly see patriotic symbols such as the American flag, which represents the unity and independence of the nation. Many people decorate their homes, public spaces, and even themselves with flags, bunting, and other patriotic decorations.
  4. National Spirit: Independence Day is a time when Americans come together to celebrate their shared values and the freedoms they enjoy. It evokes a sense of national pride and fosters a spirit of unity among people of different backgrounds.
  5. Festivities: July 4th celebrations typically include various festivities across the country. Fireworks displays are a highlight of the evening, with vibrant and elaborate shows taking place in many cities and towns. Parades, concerts, carnivals, and community events are also common, offering entertainment for people of all ages.
  6. Barbecues and Picnics: Many people celebrate July 4th with outdoor gatherings, such as barbecues or picnics. Grilling hamburgers, hot dogs, and other classic American dishes is a popular tradition. Families and friends often gather in parks or backyards to enjoy good food, games, and quality time together.
  7. Reflection and Appreciation: Independence Day is also an occasion for reflection on the country’s history and the ideals it represents. It is an opportunity to appreciate the sacrifices made by the founding fathers and to honor the men and women who have fought and continue to fight for the country’s freedom.

While the core elements of July 4th celebrations remain consistent, specific activities and events can vary depending on the location and individual preferences. It is a day to remember and celebrate the birth of the United States as an independent nation and to appreciate the values and freedoms that the country holds dear.

This year my family (wife, children, grandchildren) will be celebrating the 4th in our traditional way: Local parade, backyard BBQ, and watching fireworks on the water.

Happy 4th of July!

Also Read:

TSMC Redefines Foundry to Enable Next-Generation Products

Semiconductor CapEx down in 2023

Samsung Foundry on Track for 2nm Production in 2025

Intel Internal Foundry Model Webinar


Computational Imaging Craves System-Level Design and Simulation Tools to Leverage AI in Embedded Vision
by Kalar Rajendiran on 07-03-2023 at 10:00 am

Typical Pipelines

Aberration-free optics are bulky and expensive. Thanks to high-performance AI-enabled processors and GPUs with abundant processing capabilities, image quality nowadays relies more on high computing power tied to miniaturized optics and sensors. Computational imaging is the new trend in imaging and relies on the fusion of computational techniques and traditional imaging to improve image acquisition, processing and visualization. This trend has become increasingly important with the rise of smartphone cameras and involves the use of algorithms, software and hardware components to capture, manipulate and analyze images. It results in improved image quality and enhanced visual information and additionally enables meaningful data extraction which is critical for embedded vision applications.

While computational imaging offers several advantages, many challenges must be addressed to realize its full potential. The design and simulation tools used by optical designers, electronics engineers, and AI software engineers are often specialized for their respective domains. This creates silos, hindering collaboration and integration across the entire imaging pipeline and resulting in suboptimal system performance.

A system-level design and simulation approach that considers the entire imaging system would optimize image quality, system functionality and performance (cost, size, power consumption, latency…). It would require integrating optical design, image sensor and processor design, image processing algorithms and AI models. Synopsys recently published a whitepaper that discusses how the gaps in computational imaging design and simulation pipelines can only be overcome with system-level solutions.

Leveraging AI Algorithms to Improve Computational Imaging Pipeline

Image Signal Processors (ISPs) process raw data from image sensors and perform various tasks to enhance image quality. Traditional ISPs are designed for specific functions and are hardwired for cost efficiency, limiting their flexibility and adaptability to different sensor classes. AI-based image processing utilizing neural networks (NN) shows promise in supplementing or replacing traditional ISPs for improving image quality.

Supplement or Replace Traditional ISPs

For example, a noise filter used in ISPs can enhance image quality but may discard crucial information present in the raw data. By analyzing chromatic aberration effects before digital signal processing (DSP), depth data contained in the raw sensor data can be indirectly extracted. This depth data can then be utilized by AI-based algorithms to reconstruct a 3D representation of a scene from a 2D image, which is not possible with current ISPs. In cases where the primary objective is for computer vision functions to interpret image content using machine learning rather than enhancing perceived quality for human viewing, working with raw data becomes advantageous. Utilizing raw data allows for more accurate object classification, object detection, scene segmentation, and other complex image analyses. In such cases, the presence of an ISP designed for image quality becomes unnecessary.

New Possibilities for Digital Imaging Systems

NNs excel in tasks such as denoising and demosaicing, surpassing the capabilities of traditional ISPs. They can also support more complex features like low-light enhancement, blur reduction, Bokeh blur effect, high dynamic range (HDR), and wide dynamic range. By embedding knowledge of what a good image should look like, NNs can generate higher resolution images. Combining denoising and demosaicing into an integrated process further enhances image quality. Additionally, NN-based demosaicing enables the use of different pixel layouts beyond the conventional Bayer layout, opening up new possibilities for digital imaging systems.
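
For readers unfamiliar with demosaicing, the sketch below shows a classical bilinear interpolation of an RGGB Bayer mosaic using normalized convolution. It is only a baseline illustration of the fixed-function ISP step being discussed, not Synopsys code; NN-based pipelines learn this reconstruction, often jointly with denoising, instead of hard-coding it.

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(raw):
    """Bilinear demosaic of an RGGB Bayer mosaic (H x W) into an H x W x 3 image.

    Classical fixed-function baseline: each missing colour sample is the
    weighted average of its available neighbours (normalized convolution)."""
    H, W = raw.shape
    rgb = np.zeros((H, W, 3), dtype=float)
    masks = np.zeros((H, W, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True                         # R samples
    masks[0::2, 1::2, 1] = masks[1::2, 0::2, 1] = True  # G samples
    masks[1::2, 1::2, 2] = True                         # B samples
    kernel = np.array([[1., 2., 1.], [2., 4., 2.], [1., 2., 1.]])
    for c in range(3):
        sparse = np.where(masks[..., c], raw, 0.0)
        num = convolve(sparse, kernel, mode="mirror")
        den = convolve(masks[..., c].astype(float), kernel, mode="mirror")
        rgb[..., c] = num / den
    return rgb

demo = bilinear_demosaic(np.random.default_rng(0).random((8, 8)))
print(demo.shape)   # (8, 8, 3)
```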

Cheaper Lenses Provide More Accurate Object Detection

NNs can produce better results for certain tasks, such as object detection and depth map estimation, when processing images captured by “imperfect” lenses. As an example, the presence of chromatic aberrations caused by imperfect lenses adds additional information to the image, which can assist the NN in identifying objects and estimating depth.

Co-designing Lens Optics with AI-based Reconstruction Algorithms

While smartphone-based ultra-miniaturized cameras have eclipsed Digital Single Lens Reflex (DSLR) cameras in the market, they face the limits of optics. Researchers at Princeton have explored the use of metalenses, which are thin, flat surfaces that can replace bulky curved lenses in compact imaging applications. They co-designed a metalens with an AI algorithm that corrects aberrations, achieving high-quality imaging with a wide field of view.

The key aspect of this co-design is the combination of a differentiable meta-optical image formation model and a novel deconvolution algorithm leveraging AI. These models are integrated into an end-to-end model, allowing joint optimization across the entire imaging pipeline to improve image quality.

Synopsys Solutions for Designing Imaging Systems

Synopsys offers tools to address the requirements of the entire computational imaging system pipeline. Its optical design and analysis tools include CODE V, LightTools, and RSoft Photonic Device Tools for modeling and optimizing optical systems. The company’s Technology Computer-Aided Design (TCAD) offers a comprehensive suite of products for process and device simulation as well as for managing simulation tasks and results.

Synopsys also offers a wide range of IP components and development tools to design and evaluate the ISP and computer vision (CV) blocks. These IP components include MIPI interfaces, the ARC® VPX family of vector DSPs, and the ARC NPX family of Neural Processing Units (NPUs).

Synopsys ARC MetaWare MX Toolkit provides a common software development tool chain and includes MetaWare Neural Network SDK and MetaWare Virtual Platforms SDK. The Neural Network SDK automatically compiles and optimizes NN models while the Virtual Platforms SDK can be used for virtual prototyping.

Synopsys Platform Architect™ provides architects and system designers with SystemC™ TLM-based tools and efficient methods for early analysis and optimization of multicore SoC architectures.

Summary

Computational imaging relies more than ever on high computing power tied to miniaturized optics and sensors rather than standalone and bulky but aberration-free optics. Promising system co-design and co-optimization approaches can help unleash the full potential of computational imaging systems by decreasing hardware complexity while keeping computing requirements at a reasonable level.

Synopsys offers design tools for the entire computational imaging pipeline, spanning domains from assisted driving systems in automotive and computer vision-based robots for smart manufacturing to high-quality imaging for mixed reality.

To access the whitepaper, click here. For more information, contact Synopsys.

Also Read:

Is Your RTL and Netlist Ready for DFT?

Synopsys Expands Agreement with Samsung Foundry to Increase IP Footprint

Requirements for Multi-Die System Success


A preview of Weebit Nano at DAC – with commentary from ChatGPT
by Daniel Nenni on 07-03-2023 at 6:00 am

Weebit VP of Technology Development Amir Regev

Weebit Nano, a provider of advanced non-volatile memory (NVM) IP, will be exhibiting at the Design Automation Conference (DAC) this month. As part of this briefing, I shared some of the basic details with ChatGPT to see how it would phrase things. Here is some of what it suggested: “You won’t want to miss out on the epic experience awaiting you at our booth. It’s going to be a wild ride filled with mind-blowing tech and captivating demonstrations that will leave you in awe!”

ChatGPT is still learning but one thing it got right is that Weebit is showing a couple of innovative NVM demonstrations. The first is a demonstration of some of the benefits of Weebit ReRAM, a silicon-proven NVM technology that has ultra-low power consumption, high retention even at high temperatures, fast access time, high tolerance to radiation and electromagnetic interference (EMI), and numerous other advantages.

The demonstration uses Weebit's first IP product, Weebit ReRAM IP in SkyWater Technology's S130 process. For the demo, the ReRAM module is integrated into a subsystem with a RISC-V microcontroller (MCU), system interfaces, SRAM, and peripherals. The demo highlights the lower power consumption of Weebit ReRAM compared to typical flash memory. It also highlights the technology's faster write speed, which is largely due to its direct program/erase capability and byte addressability. Unlike flash, which must erase or write entire sectors of data at a time, ReRAM programs only the bits that need to be programmed.
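
As a rough illustration of why byte addressability matters, the toy model below contrasts a flash-style sector update with a direct byte write. The sector size, class names, and behavior are simplified assumptions for illustration only, not Weebit or flash-vendor specifications.

```python
# Toy comparison of a flash-style sector update vs. a byte-addressable NVM write.
# All sizes are made-up illustrative numbers, not device data.

SECTOR_BYTES = 4096

class FlashModel:
    """Must erase and reprogram a whole sector to change any byte in it."""
    def __init__(self):
        self.sectors = {}

    def write_byte(self, addr, value):
        base = addr - (addr % SECTOR_BYTES)
        sector = self.sectors.get(base, bytearray(SECTOR_BYTES))
        buf = bytearray(sector)            # 1. read the sector into a buffer
        buf[addr - base] = value           # 2. modify one byte
        self.sectors[base] = buf           # 3. erase + reprogram the full sector
        return SECTOR_BYTES                # bytes physically rewritten

class ByteAddressableNVM:
    """Programs only the cells that actually change (ReRAM-like behavior)."""
    def __init__(self):
        self.cells = {}

    def write_byte(self, addr, value):
        self.cells[addr] = value
        return 1                           # bytes physically rewritten

flash, nvm = FlashModel(), ByteAddressableNVM()
print("flash bytes rewritten:", flash.write_byte(0x1234, 0xAB))  # 4096
print("nvm   bytes rewritten:", nvm.write_byte(0x1234, 0xAB))    # 1
```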

Weebit’s second demo is a bio-inspired neuromorphic computing demo in which Weebit ReRAM runs inference tasks using CEA-Leti’s Spiking Neural Network (SNN) algorithms. ChatGPT seemed particularly enthusiastic about this demo, saying, “Step into a realm where science fiction becomes reality as [this] mind-bending technology showcases the power of mimicking the human brain. It’s like a sci-fi movie come to life!”

This may sound over the top, but it is pretty exciting stuff. Commonly today, neural networks are simulated using traditional digital processors and accelerators, but this is inefficient and power hungry. A more efficient approach is neuromorphic computing, which makes it possible to emulate the brain’s natural operation, consuming orders of magnitude less power than today’s simulations. Because a ReRAM cell has physical and functional similarities to a biological brain synapse, it’s a natural candidate for implementing neuromorphic computing.
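
To give a flavor of the spiking model behind such demos, here is a minimal leaky integrate-and-fire (LIF) neuron in Python, where the synaptic weights play the role that programmable ReRAM conductances play in hardware. This is a generic textbook sketch, not CEA-Leti's algorithm or Weebit's implementation; all constants are arbitrary.

```python
import numpy as np

def lif_neuron(spike_trains, weights, leak=0.9, threshold=1.0):
    """Leaky integrate-and-fire neuron.

    spike_trains : (timesteps, n_inputs) array of 0/1 input spikes
    weights      : (n_inputs,) synaptic weights; in a ReRAM-based design these
                   would be stored as programmable cell conductances
    Returns the output spike train as a list of 0/1 per timestep."""
    v, out = 0.0, []
    for spikes in spike_trains:
        v = leak * v + float(spikes @ weights)   # integrate weighted input spikes
        if v >= threshold:                       # fire and reset
            out.append(1)
            v = 0.0
        else:
            out.append(0)
    return out

rng = np.random.default_rng(1)
inputs = (rng.random((20, 4)) < 0.3).astype(float)   # 20 timesteps, 4 synapses
print(lif_neuron(inputs, weights=np.array([0.4, 0.3, 0.5, 0.2])))
```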

Visit the Weebit Nano booth #2224 at DAC to check out our demos and meet some of our execs and technologists. To arrange an in-person meeting email info@weebit-nano.com.

About Weebit Nano (ChatGPT)

Weebit Nano is an Israeli semiconductor company that specializes in the development and commercialization of next-generation memory technology. The company was founded in 2014 and is headquartered in Hod Hasharon, Israel.

Weebit Nano focuses on the development of ReRAM (Resistive Random Access Memory) technology, which is a type of non-volatile memory that has the potential to replace existing memory technologies like Flash and DRAM. ReRAM offers advantages such as faster read and write speeds, lower power consumption, and higher density compared to traditional memory technologies.

Weebit Nano’s ReRAM technology is based on silicon oxide materials and utilizes a cross-point array architecture. This allows for the stacking of multiple layers of memory cells, enabling high-density memory solutions. The company’s technology has potential applications in various fields, including consumer electronics, artificial intelligence, Internet of Things (IoT), and data storage.

Also Read:

Weebit ReRAM: NVM that’s better for the planet

How an Embedded Non-Volatile Memory Can Be a Differentiator

CEO Interview: Coby Hanoch of Weebit Nano