How Breker is Helping to Solve the RISC-V Certification Problem
by Mike Gianfagna on 12-02-2024 at 10:00 am

RISC-V cores are popping up everywhere. The growth of this open instruction set architecture (ISA) was quite evident at the recent RISC-V Summit. You can check out some of the RISC-V buzz on SemiWiki here. While all this is exciting and encouraging, there are hurdles to clear before RISC-V processors see true prime-time, ubiquitous deployment. A big one is certification. I’m not referring to verification of the design, but rather certification of the RISC-V ISA implementation. Does the processor reliably do what is expected across its broad range of applications? Can we trust these devices?

It turns out this is a large and complex problem. The graphic at the top of this post illustrates its breadth. Solving it is critical to allow broad deployment of the RISC-V architecture. I decided to poke around and see what was being done. Breker has “verification” in the company name, so it seemed that would be a good place to start. I contacted my good friend Dave Kelf, and I wasn’t disappointed. There is a lot going on here, and Breker is indeed in the middle of much of it. Let’s see how Breker is helping to solve the RISC-V certification problem.

The CEO Perspective

Dave Kelf

I’ve known Dave Kelf a long time. He is currently CEO of Breker Verification Systems. Dave explained there are a lot of RISC-V design efforts underway at large companies, startups and advanced research groups, spanning open-source projects, commercial programs and university work. He told me Breker alone is currently being used in 15 active RISC-V development programs.

Dave explained that in the processor world, there are devices from companies such as Arm, Intel and AMD that come with a certification from the vendor. These devices undergo extensive testing. This creates a level of “comfort” that the device will perform as advertised under all conditions. The tests done by these companies can take on the order of 10¹⁵ clock cycles to run. That is indeed a mind-boggling statistic; even at a billion cycles per second, 10¹⁵ cycles works out to more than 11 days of continuous execution.

Dave provided an overview of what is involved in certifying a processor architecture like RISC-V. He explained that it’s important to understand that this task is a lot more complex than certifying a point-to-point communication protocol (think Wi-Fi). It’s also a lot broader than verifying a specific processor design. Before the processor gets that golden stamp of approval, it needs to be checked for all potential use cases, not just the one being used on a particular design.

The architecture of the certification test suite needs to be developed and agreed to by a steering committee that has a sufficiently broad ecosystem perspective. Then the actual tests need to be built and verified. Then comes the task of running the certification suite. Do companies self-certify, or does an independent lab do that work? And finally, how is all this funded?

This is a complex and daunting set of problems to solve, but this kind of proof of capability is what will be needed to achieve mainstream use of RISC-V across a broad range of applications. The good news is that RISC-V International has taken up the cause. Dave explained that work began after last year’s RISC-V Summit, so the project is about 10 months old. Breker, along with many other members of the RISC-V community, is providing support and effort to realize these important goals.

Dave explained that there was a presentation on this work at the recent RISC-V Summit. This presentation filled in a lot more details for me.

The President and CTO Perspective

Adnan Hamid

Adnan Hamid, Executive President and CTO at Breker, gave the presentation on RISC-V certification. As mentioned, this effort started after last year’s RISC-V Summit. It is being driven at the RISC-V International Board level. A key part of the RISC-V organization is the technical steering committee (TSC). This is where all the details for components of the RISC-V ecosystem are developed, both hardware and software. A certification steering committee (CSC) has been created that exists at a peer level to the TSC. One way to think about this is that the CSC has a mandate to check the TSC’s work to ensure a coherent path to certification can be developed. The diagram below illustrates the entities involved in the program. The goal is to deliver holistic brand value.

Organizations Involved in Certification

Adnan shared some of the details of the program. Although it is still early days, these are likely to include:

  • Allows implementers to ensure compliance with RISC-V standards
  • Goal is to provide confidence that the RISC-V ecosystem will operate correctly on certified implementations
  • Certifies processors, SoC components, and platforms
  • Certifies RTL and silicon
  • Includes commercial-grade certification materials
  • Customers pay to obtain certificate
  • Fee based on certification cost
  • “RISC-V Compatible” Branding Program
  • Certificate must meet customer requirements
    • Must be available in a timely fashion
    • Must be based on ratified RVI standards

Certification is planned to be done in phases as shown below.

Certification Deployment Phases

The CSC is taking shape. It currently has the following five working groups:

Certification Working Groups

You can see a recording of the full presentation Adnan gave here.

Dave and Adnan are actively involved in the Customer Survey and Tests & Models groups. There are about 24 companies involved in this effort so far, and that number is growing.

How You Can Help

There is a lot being done on RISC-V certification, and a lot more to do as well. If you’re like most folks in semiconductors today, you are thinking seriously about how the open architecture of RISC-V could help. If you are interested in RISC-V, the certification team wants to hear from you. They need your input, which will be used to shape this program. There is a survey underway to better understand your needs.

Let your voice be heard. You can access the survey here. Do it today! And that’s how Breker is helping to solve the RISC-V certification problem.

Also Read:

Breker Brings RISC-V Verification to the Next Level #61DAC

System VIPs are to PSS as Apps are to Formal

Breker Verification Systems at the 2024 Design Automation Conference


CEO Interview: Ollie Jones of Sondrel
by Daniel Nenni on 12-02-2024 at 6:00 am


Sondrel has just appointed a new CEO, Ollie Jones, so we had a chat with him to find out his vision for the company.

Ollie is a highly driven, commercially astute senior leader with more than 20 years of commercial and business development experience across the technology and engineering sectors.

Ollie has worked extensively across Europe, North America and Asia and has held a variety of commercial leadership roles in FTSE 100, private equity owned and start-up companies.

Most recently, Ollie was Chief Commercial Officer for an EV battery start-up, where he led the acquisition of new customer partnerships with some of the world’s leading car brands.

Prior to that, his roles included VP Commercial and Business Development for a market-leading global automotive engineering firm, with responsibility for driving the sales growth of its electrification business unit, and VP Customer Business Group, where he was responsible for leading multiple large and complex key accounts across Europe and Asia with over $1bn in cumulative revenues.

Sondrel was founded over two decades ago in 2002 and is a well-known name in Europe but not in the USA. Why do you think that is?
Sondrel is fundamentally a service company. To give customers the best possible service when you are starting out, you need to be close to them so, being headquartered in the UK, Sondrel focussed on the UK, European and Israeli market for the first phase of the company’s growth as it is our home region. That enabled us to ensure that we built up a reputation with customers for going above and beyond in order to deliver high quality products. As we grew from pure “design services” towards more turnkey “ASIC” developments, we expanded our skills with design centers to include Morocco and India. It is only recently that we have started addressing the American market as we believe that we offer something that US customers want.

So how are you differentiating yourselves?
We sit perfectly in the zone of just the right size to be able to deliver chips with a high level of personal service. Our rivals are often too large, with too many projects to give the level of personal service that we provide, or too small to have the expertise needed to deliver the kinds of ultra-complex custom chips that are our speciality. This is 100% aligned with my own career, which has always been customer focussed. At Sondrel, we want customers to be successful and are completely focused on that mission. That means giving each customer a level of care and attention to detail that is almost impossible to match.

How does that tie in with delisting from being on the stock exchange and going private?
The challenge with being a listed company is that you have two objectives that can often conflict: firstly, to deliver to investors, who often have short timeframe goals, and, secondly, to deliver to customers, where the timescales are measured in many months. And there are times when it is very difficult to do both effectively, as many companies have found before going back to being private. Sondrel is a company with amazing engineers, huge experience and a stellar reputation with customers. That’s a solid foundation for the future of any company. And so that became the course of action, with a delisting and restructuring. Then, as planned, I became the CEO to really focus, capitalise and commercialize the company’s strengths – customer focus, personal service, high quality and world-class design skills for ultra-complex custom chips.

Your background is from the commercial side of technology.  Does that help or hinder you as a CEO?
Absolutely it helps. When you think about it, the commercial aspects are critical to both our customers and our own ability to grow.  Over the past year, when I was VP of Marketing & Sales, I met each of Sondrel’s regular customers. Some of them have been using Sondrel for many years for project after project. In every case, when I asked them why pick Sondrel, the answer was always because Sondrel cares. We care passionately about the customer, the customer’s project and the commercial success of the chip – and our strong engineering team reflects this. We will do everything we can to be outstanding partners for our customers. The real shame is that most of our work is covered by NDAs so we cannot talk publicly about all our successes. This customer-first approach is something I’ve embraced throughout my career and was rooted in always seeking solid, mutual commercial success. So, “yes” I think a commercial background is a significant advantage to being a CEO.

It sounds like you are going to be very hands on?
This is very much the way I do things and how Sondrel operates: frequent face-to-face meetings and continuous communications. People always want to do business with people they know, like and trust. We really dig into a new business opportunity to fully understand what the customer is trying to achieve and what matters most to them. And then we invariably exceed expectations by providing insights and ideas to make the project better, drawing on our experience from hundreds of successful projects. A design project for a billion-transistor chip is incredibly detailed and complex, and we have the in-house tools, design flows and experience to deliver to agreed budgets and timeframes.

We work with the customer early in their definition stages, helping them where we can to make the right decisions for their chip project, unlike many of our rivals. This builds a huge level of trust and confidence in our ability to deliver chip designs, which means that customers then start asking us to handle the whole chip supply chain process right through to final silicon. This is now a standard turnkey service that enables customers to focus on their own skill sets, safe in the knowledge that their silicon will be delivered. It is particularly of interest to startups, who are skilled in innovation but not in all the challenges of taking a chip through the supply chain stages of manufacturing, testing and packaging, so they need to outsource that to someone like us.

And that is why our US office is located in the heart of Silicon Valley so we can provide a personal service to all the exciting innovative startups located there.

There is only one of you, so how can Sondrel provide the level of personal service that you described for every customer?
We do that by having a customer-first mindset across the company. We even assign a Customer Success Manager to customer projects. Their job is to ensure that everything is running to schedule, that the customer is always in the loop, and that the customer’s project is successful, which means we are successful in meeting the customer’s expectations. It’s how we deliver a very personal service to every customer.

You have said personal service is what differentiates Sondrel. What does that mean in practice?
Companies come to Sondrel because they want a chip that is custom made to their exact specifications. Basing your project on standard, off-the-shelf chips means that anyone can copy it. A custom chip is unique, and the Power, Performance and Area have been tuned precisely to deliver the performance and cost required. Determining those parameters is done right at the start of a project discussion, at the Architectural Specification stage. This is a perfect example of where we are different. Our team creates an in-depth, holistic view of the chip, what its functions and features are, and how to make it: for example, what node to use, what IP will be needed, which of our Architecting the Future reference architectures to use, how to incorporate Design for Test, etc. This means that Sondrel provides customers with an incredibly detailed plan of how it will successfully design the chip. Often it includes improvements that Sondrel has brought to the table based on its engineers’ experience in successfully delivering hundreds of other projects over the years.

This level of intense personal service inspires confidence and trust that Sondrel will deliver to schedule and requirements. That confidence is reinforced throughout the project with regular meetings, so that the customer is always fully informed on progress, along with new ideas to make the design even better. For example, in one recent project we were able to reduce the power requirement of the chip, much to the delight of the customer.

You mentioned Architecting the Future. What is that?
This is a family of pre-defined reference architectures that provide a fast start for a new project rather than starting from scratch every time. This means that not only can we deliver a project faster, but we can also handle more projects simultaneously, as our engineers can focus on the complex, novel parts of a design knowing that the framework is already tested and ready to be built on.

Reusing trusted IP is fundamental to the ability to design chips and that’s what these architectures are. They reduce risk and time to market to help ensure customer success.

Talking about IP, I note that you have started licensing IP?
One of my first tasks as CEO was to realise and commercialize existing assets. We have a library of IP blocks that we have created over the years for various projects where we found there was no commercial IP available with the performance or functionality required. They might be a bit unusual, but that’s what we need when creating our ultra-complex chip designs, so, if we needed it for a chip design, then others might need it as well. In fact, we have just licensed our first IP block – our Firewall IP.

It’s yet another way to help customers be successful through our personal service of ensuring that they get exactly what they need.

Also Read:

Sondrel Redefines the AI Chip Design Process

Automotive Designs Have No Room for Error!

Sondrel’s Drive in the Automotive Industry

Transformative Year for Sondrel


Is AI Intelligent? Insights from Ronjon Nag’s Presentation
by Admin on 12-01-2024 at 8:00 am


In his keynote at the 2024 31st IEEE Electronic Design Process Symposium, Dr. Ronjon Nag, an adjunct professor at Stanford Medicine and president of the R42 Group, poses the provocative question: “Is AI Intelligent?” Drawing from four decades of pioneering work in AI, Nag blends personal anecdotes, scientific analysis, and philosophical inquiry to explore the essence of intelligence in humans, animals, and machines. As an inventor who founded companies like Lexicus (sold to Motorola) and Cellmania (sold to BlackBerry), and an investor in over 100 AI and biotech ventures, Nag’s credentials lend weight to his multidisciplinary perspective.

Nag begins by unpacking intelligence beyond the familiar IQ metric. He highlights alternative quotients: Emotional Intelligence from Daniel Goleman, Ambition Quotient, Purpose Quotient from John Gottman, Compassion Quotient, and Freedom Quotient. These underscore that intelligence isn’t monolithic but encompasses emotional, social, and existential dimensions. In daily life, AI already manifests intelligently through applications like Siri, Alexa, robotic vacuum cleaners, collision avoidance systems, loan scoring algorithms, and stock market predictors. Yet, Nag clarifies terminology to avoid confusion: “Good Old Fashioned AI” refers to rule-based systems; Machine Learning involves data-driven algorithms like logistic regression and neural networks; Artificial Intelligence is the broader academic field; and Generalized AI or Strong AI implies human-level performance across any task.

A core comparison lies in biological versus artificial brains. Nag illustrates natural neural networks, where signals propagate through neurons, contrasting this with simplified computational models like weighted sums and sigmoid functions. The human brain boasts 100 billion neurons, each connected to about 1,000 others, yielding 100 trillion parameters. In contrast, GPT-4 operates with “only” 1.6 trillion parameters, highlighting AI’s efficiency despite its scale. Nag notes AI’s creative feats, such as generating paintings, but questions why AGI hasn’t been declared. Reasons include skepticism about metrics, ideological biases toward alternative theories, human exceptionalism, and economic fears.
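To make that contrast concrete, here is a minimal Python sketch (with arbitrary, made-up weights and inputs) of the simplified computational model Nag describes: a weighted sum of inputs passed through a sigmoid, standing in for the far richer behavior of a biological neuron.

import math

def artificial_neuron(inputs, weights, bias):
    # Weighted sum of inputs plus a bias term...
    weighted_sum = sum(x * w for x, w in zip(inputs, weights)) + bias
    # ...squashed through a sigmoid to give an output between 0 and 1.
    return 1.0 / (1.0 + math.exp(-weighted_sum))

# Three arbitrary input signals feeding one neuron with made-up weights.
print(artificial_neuron(inputs=[0.5, -1.2, 3.0], weights=[0.8, 0.1, -0.4], bias=0.2))

Models like GPT-4 are, in essence, billions of such units wired together, which is what the parameter counts above are counting.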

Delving into the “Boundaries of Humanity” project at Stanford, Nag examines what distinguishes humans. Cultural intelligence—social learning, imitation, morality, and rituals—sets us apart, though contested in animals like chimpanzees (who exhibit community standards) and dolphins (with dialects and cooperative hunting). Machines store raw data and commentary (e.g., Amazon reviews), but encoding higher-order culture like morals remains challenging, potentially clashing with survival-oriented objectives.

Consciousness emerges as a pivotal debate. Nag asks: Can machines have minds? Strong AI posits yes, encompassing consciousness, sentience, and self-awareness, while Weak AI focuses on simulation. He references John Searle’s Chinese Room argument: a person copying Chinese symbols without understanding them mirrors computers following instructions sans comprehension. Alternative theories include quantum mind ideas from Roger Penrose and Stuart Hameroff, where microtubules enable wave function collapses for consciousness, or neurobiological views from Christof Koch and Francis Crick locating it in the prefrontal cortex. The Turing Test gauges human-like conversation but not consciousness; Nag proposes the Artificial Consciousness Test, probing AI’s grasp of experiential concepts like body-switching or reincarnation.

Humans excel in world understanding, action prediction, unlimited reasoning, and task decomposition—areas where Large Language Models (LLMs) falter. For AI’s future, Nag envisions neuromorphic chips outperforming GPUs by integrating on-chip memory and low-precision arithmetic. Emotions, per Antonio Damasio’s somatic marker hypothesis, evaluate stimuli via bodily responses; implementing them in AI could involve haptic technologies, as in robotic seals aiding dementia patients.

Nag ties AI to longevity, noting parallel inflection points: from Turing machines to ChatGPT in AI, and vaccines to CRISPR in biotech. By 2025, personalized AI healthcare and aging vaccines could converge. Ultimately, Nag argues AI is intelligent in narrow domains but lacks holistic human qualities. His talk invites reflection: as we augment ourselves (e.g., via implants), boundaries blur. Contact him at https://app.soopra.ai/ronjon/chat for deeper dialogue.

2025 IEEE Electronic Design Process Symposium (EDPS)


Podcast EP263: The Current and Future Impact of the CHIPS and Science Act with Sanjay Kumar
by Daniel Nenni on 11-29-2024 at 10:00 am

Dan is joined by Sanjay Kumar. Most recently, Sanjay was senior director at the Department of Commerce on the team implementing the CHIPS and Science Act. Before that, he spent more than 20 years in the industry, up and down the semiconductor value chain, working at systems companies such as Meta; fabless companies such as Infineon, NXP, Broadcom and Omnivision; and manufacturing companies such as Intel Foundry.

Sanjay provides a detailed analysis of the impact across the semiconductor value chain resulting from the CHIPS and Science Act. He details the significant industry investments that have resulted from the initial funding from the US Government.

Sanjay describes the collaboration between ecosystem companies and what the impact has been, and could be in the future. He discusses the impact AI has had as well. He describes possible future collaboration scenarios and the potential positive impact on the US semiconductor manufacturing sector.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Dr. Yunji Corcoran of SMC Diode Solutions
by Daniel Nenni on 11-29-2024 at 6:00 am


Dr. Yunji Corcoran is the Chairwoman and CEO of SMC Diode Solutions. Dr. Corcoran has managed a successful career in the power semiconductor field for over 30 years, specializing in design, manufacturing, and global sales. She holds a bachelor’s degree in physics from Nanjing University and a PhD in semiconductor materials from Stony Brook University in New York. She currently focuses on management functions but still considers herself to be a dedicated engineer. Outside of the semiconductor business, Dr. Corcoran is passionate about traveling, cooking, and collecting cookbooks from around the world.

Tell us about your company?
SMC Diode Solutions is an American-led semiconductor design and manufacturing company based in Nanjing, China.

Since the company’s founding in 1997, SMC has provided customers with high-quality products and timely delivery. SMC’s products have been designed with great attention to detail and performance while focusing on the needs of customers.

The SMC team designs and manufactures its own products that are widely distributed in global markets. SMC’s portfolio includes Silicon Carbide Schottky rectifiers and MOSFETs, Silicon Schottky rectifier diodes, ultrafast recovery rectifier diodes, TVS diodes, Schottky and rectifier modules, and many more.

What problems are you solving?
At SMC, we focus on solving two major problems: power conversion and efficiency. The function of the devices we produce is to convert voltages, whether that’s from AC to DC, DC to DC or DC to AC. During the conversion, we ensure our products minimize conversion loss while increasing efficiency.

What application areas are your strongest?
In general, we work with any market segment that deals with power conversion and protection, such as power supplies, AC-DC and DC-DC converters, and inverters. We also create products for the aerospace, automotive, and medical industries. With the opening of our new fab in June 2024 and the growth of our silicon carbide capacity, we are also growing our presence in the electric vehicle and renewable energy markets.

What keeps your customers up at night?
I think the thing that keeps our customers up at night is exactly the same thing that keeps us up at night: how to survive in this very competitive market. In today’s market, you not only need to make great products that satisfy your customers’ needs, but you also need to be constantly researching, innovating, and striving to be better than your competitors. Our customers choose to work with us because they know they will receive good-quality products from us, which they can in turn use to provide quality products to their own customers. So, in short, the constant question and pressure of how to keep improving upon their products is what keeps our customers up at night. We have a strong focus on research and development at SMC, so I hope our dedication to creating better products helps to ease the pressure and concern our clients also face.

What does the competitive landscape look like and how do you differentiate?
The discrete power semiconductor market is a very competitive market, and there are quite a few players with very similar products. What makes SMC different is our unique products and services. From the initial design to manufacturing, we use the most advanced technology and software to create the best products possible. For example, our power diodes and MOSFET products have a lower power loss compared to our competitors’, which is a very important parameter for those types of products. Not only do our products perform well and have really great reliability, but we also bring a personal touch through our customer care. Our whole company is dedicated to providing a personal service to our customers, and we approach everything with flexibility and care.

What new features/technology are you working on?
As always, SMC’s focus is on our products. We pride ourselves on making quality products and delivering enhanced performance through technology innovations. Currently, we are working on expanding our product portfolio now that we have more than quadrupled our production capability with the recent opening of our new fab. We are also looking into extending our existing power module capability as the next step to strengthen our market position.

We’re excited about what’s to come and are continuously innovating and updating our products and product offerings. Our focus remains on our products and with growing interest in those, we will continue to grow our business and provide the best power product for our customers.

How do customers normally engage with your company?
As a company with a presence in the United States, Germany, China, and South Korea, we have distributors and clients all over the globe. You can contact our distributors or a local sales representative in your area for additional information about our company and products. Detailed contact information is on our website.

Additionally, more often than not, you will find someone from our company at any major semiconductor event. For general questions, feel free to contact us at sales@smc-diodes.com or check our website updates to see where you can find us next.

Also Read:

CEO Interview: Sandeep Kumar of L&T Semiconductor Technologies Ltd.

CEO Interview: Dr. Sakyasingha Dasgupta of EdgeCortix

CEO Interview: Bijan Kiani of Mach42


CEO Interview: Rajesh Vashist of SiTime
by Daniel Nenni on 11-28-2024 at 10:00 am


Rajesh Vashist is SiTime’s Chairman and Chief Executive Officer and has served in this role since September 2007. Prior to joining SiTime, Mr. Vashist served as CEO and chairman of the board of directors of Ikanos Communications, Inc., a semiconductor and software development company, from July 1999 to October 2006. Mr. Vashist led the organization from a two-person pre-revenue startup to a public company with 90% market share and a market value of $600M. Prior to Ikanos, Mr. Vashist served as a general manager of a $450M business unit at Adaptec, a storage company, and held various general management and marketing positions at Rightworks, an ERP software company, Vitelic Semiconductor and Samsung Semiconductor.

Tell us about your company.
Precision Timing is the heartbeat of modern electronics. Whether it’s in AI data centers, networking infrastructure, automated vehicles, personal mobility or IoT, nothing works without precise timing. In today’s connected intelligence era, SiTime’s uniqueness lies in delivering Precision Timing to enable electronic products that are smarter, faster and safer. We are taking a new approach to the highly fragmented timing market, using semiconductor technology and processes to reimagine time. SiTime is the only company that is solely focused on all aspects of timing, from components to systems and software. We are using microelectromechanical systems (MEMS) technology to transform the $10 billion timing market with products that offer higher performance, smaller size, lower power consumption and unmatched reliability. What makes SiTime unique, apart from our Precision Timing technology, is the diversity of applications, products, customers, and our team.

What problems are you solving?
The timing market has been historically fragmented, but with the need for today’s electronics to be faster, always connected, more intelligent, and safer, a differentiated approach is vital. To meet these requirements, electronics are now being deployed in the “real world,” outside pristine environments such as air-conditioned offices. Here, electronic devices are being subjected to environmental stressors such as rapid temperature changes, extreme temperatures, shock and vibration. The incumbent timing technology for the past several decades, quartz, is susceptible to these stressors, which can impact the performance and reliability of intelligent, connected electronics. Our MEMS and analog technologies solve this problem. We deliver Precision Timing solutions that are orders of magnitude more resilient to these environmental stressors and help make electronics smarter, faster, and safer. For example, we are enabling higher timing performance and accuracy in AI data centers and 5G networks, where nanosecond-level synchronization is required, even in the presence of environmental stressors.

Our semiconductor-based MEMS technology enables us to offer the smallest size, more features, and higher stability, which, again, meets the requirements of modern electronics. With our fabless semiconductor supply chain and built-in programmability, we deliver better supply reliability and the flexibility to configure a device to each customer’s exact application requirements. Over the past decade, we have improved our Precision Timing performance by several orders of magnitude, something that the incumbents have not been able to do. To summarize, we deliver higher performance, more features, higher reliability, smaller size and lower power to our customers.

What application areas are your strongest?
Our strongest markets for Precision Timing include AI data centers and all the networking electronics within, communications, automotive safety and infotainment, IoT, and aerospace and defense. Specific applications within these markets include 800G/1.6T optical modules, smart network interface cards (SmartNIC), 5G remote radio units (RRUs), advanced driver assistance systems (ADAS) cameras, radar and LiDAR, smartwatches and low-Earth orbit (LEO) satellites. For example, AI networking requires ultra-low jitter and latency, which our timing products deliver. Similarly, in autonomous vehicles, Precision Timing is critical for sensor synchronization and rapid decision-making. Our Endura Epoch Platform, for example, is making inroads in aerospace and defense applications, offering unmatched performance and reliability for critical applications such as positioning, navigation and timing (PNT) services. In fact, because of the various benefits of our timing technology, it’s safe to say that we are crucial to the future of electronics.

What keeps your customers up at night?
We’ve realized that we have two kinds of customers: the ones who have experienced and seen the benefits of our programmable timing chips and those who we are working closely with to come up with creative ways to address their timing issues. Our customers are concerned about keeping pace with rapid technological advancements. Whether they’re developing AI, communications, automotive or IoT applications, they must continuously innovate with their products. With the explosion of data-driven applications, companies also worry about achieving low-latency, high-reliability networks, especially as 5G and AI infrastructures continue to scale.

Another point is the ability to meet customers’ needs for scaling fast when the demand for their products increases. SiTime’s silicon manufacturing process ensures a stable, reliable, and independent supply chain with shorter lead times for the highest-quality timing products in the market. Our job is to be inventive and dependable so customers can be comfortable.

What does the competitive landscape look like and how do you differentiate?
After being asleep for so many years and considered a commodity product, timing technology is undergoing a transformation, led by SiTime. Traditional quartz-based timing solutions have been incumbents for the past several decades, but they are rapidly being displaced, given their lack of differentiation and programmability as customers demand smaller, more resilient and reliable, and energy-efficient Precision Timing solutions that our MEMS technology offers. SiTime differentiates itself as the only pure-play silicon timing company, which gives us a unique position to drive innovation in this market. Our programmable clock and oscillator solutions enable customers to tailor their timing devices for specific application needs, a major advantage in high-performance sectors like AI computing and data centers. We also focus heavily on system-level solutions, combining silicon MEMS technology with analog circuitry, advanced algorithms and high-volume packaging, which enables us to deliver unmatched precision, reliability, small size and low power consumption.

What new features/technology are you working on?
As the world becomes more connected and intelligent, we are focused on developing more Precision Timing solutions that meet the demand for stability, resilience, and lower power consumption.

For example, we have design wins with our Precision Timing products in all key applications of the AI ecosystem, including GPU and CPU boards, interconnect switches, optical modules, NIC cards, accelerator cards, active cables, and switches. To provide a sense of scale of our focus on AI, in 2024, we shipped 70 unique part numbers across 14 product families to 30 different customers developing AI hardware.

One of our key product innovations for the world of AI is our Chorus clock generator—the industry’s first integrated clock system-on-a-chip (ClkSoC) designed for AI data center applications. The Chorus ClkSoC integrates a clock IC, a silicon MEMS resonator, and oscillator circuitry into a single chip. By integrating the resonator and eliminating the need for external quartz crystal devices, Chorus simplifies system clock architectures for AI systems, accelerates design time by up to six weeks, and improves reliability and resilience. Chorus clocks are engineered to deliver 10 times better performance in half the size of equivalent quartz-based devices.

Another recent innovation is the Endura Epoch Platform, a ruggedized MEMS-based oven-controlled oscillator (OCXO) that provides significant improvements in size, weight, and power (SWaP) while delivering benchmark timing performance for AI data centers and 5G infrastructure. Epoch OCXOs solve critical timing issues that were previously insurmountable with quartz technology, especially when deployed under extreme environmental conditions. We’re also expanding the use of MEMS technology in emerging markets like aerospace and defense with our Endura Epoch Platform, which offers enhanced performance and resilience for critical PNT services.

How do customers normally engage with your company?
We collaborate closely with each customer’s technical team to design and deliver Precision Timing solutions that meet their unique application needs. This collaborative, system-level approach allows us to build deep, long-lasting relationships with industry leaders and ensures that our timing technology continues to meet their evolving requirements. We also engage with our customers’ commercial teams to ensure adequate supply and other business terms. In addition to the large players in electronics, we also support smaller players who might be developing new applications, devoting equal attention to their developments.

How do you see Precision Timing evolving in the future, and what role will SiTime play in that transformation?
Precision timing will become even more critical as the world continues to embrace AI, 5G-Advanced and 6G communications, automated driving, personal mobility, and IoT. We’re not only addressing today’s timing technology needs but also anticipating future demands. SiTime will continue to innovate in the areas of MEMS technology, analog circuits, packaging, integration, and software to develop Precision Timing solutions that push the boundaries of what’s possible. We envision a world where SiTime’s Precision Timing products are embedded in every critical application, from the cloud to the edge, enabling faster, smarter and more connected systems everywhere.

Also Read:

CEO Interview: Dr. Greg Newbloom of Membrion

CEO Interview: Sandeep Kumar of L&T Semiconductor Technologies Ltd.

CEO Interview: Dr. Sakyasingha Dasgupta of EdgeCortix

CEO Interview: Bijan Kiani of Mach42


CEO Interview: Dr. Greg Newbloom of Membrion
by Daniel Nenni on 11-28-2024 at 6:00 am


Greg Newbloom, Ph.D., is the founder & CEO of Membrion, a cleantech startup focused on recycling wastewater from harsh industrial processes. Membrion makes electro-ceramic desalination (ECD) membranes out of the same material as the silica gel desiccant packets found in the bottom of a beef jerky package. Dr. Newbloom’s leadership and entrepreneurial efforts have been recognized by a half-dozen regional and national awards, including a “35 under 35” from the American Institute of Chemical Engineers and a 2024 Meaningful Business 100 award (MB100).

He has 50 patents, has co-authored a textbook, and has 300+ citations across many publications. His work is regularly featured in both local and national media. Dr. Newbloom has a BA in Science from Oregon State University, an MA in Science from the University of Washington, and a Ph.D. in Chemical Engineering, also from the University of Washington.

Tell us about your company:
Membrion is a pioneering technology company that specializes in electro-ceramic desalination membranes. We help industries manage some of the toughest wastewater challenges, providing innovative solutions that handle high-salinity, acidic, and heavy-metal-laden wastewater, in addition to advanced recovery objectives in water-stressed areas. Our focus is on sustainable water reuse, ensuring that companies can meet their environmental goals without compromising on efficiency.

What problems are you solving?
We address the critical need for efficient and cost-effective wastewater treatment, especially in industries dealing with corrosive and complex effluents. Traditional filtration methods often struggle with these challenging conditions, but our ceramic membranes excel where others fail, providing an effective way to treat and recycle water in even the harshest environments.

What application areas are your strongest?
Our strongest applications are in industries with highly complex wastewater streams, including semiconductor manufacturing, metal finishing, mining, and chemical processing. These sectors often face the most stringent regulations and operational challenges when it comes to wastewater treatment, and Membrion’s solutions are tailor-made to meet those needs.

What keeps your customers up at night?
Our customers are concerned about staying compliant with changing environmental regulations, reducing their water consumption, and finding cost-effective ways to manage wastewater. They also worry about system downtime due to trucking delays and fouling or corrosion in their current water treatment systems, which can halt production and increase costs. Membrion alleviates these concerns by offering onsite, durable, reliable, and high-performing membrane technology that minimizes these risks.

What does the competitive landscape look like and how do you differentiate?
The water treatment industry is full of conventional polymeric membranes and chemical-based treatments. Membrion stands out by offering a unique ceramic membrane that is more durable and resistant to harsh conditions. Unlike competitors, our technology thrives in highly acidic, saline, and heavy metal-contaminated wastewater, offering a longer lifespan and better performance over time. This differentiation enables our customers to reduce operating costs while achieving their sustainability goals.

What new features/technology are you working on?
We are continually improving the efficiency and longevity of our ceramic membranes, exploring advanced applications in emerging markets such as mineral extraction (copper, lithium, and others) and battery recycling. We are also working on expanding our modular membrane systems to be more scalable and easier to integrate into existing industrial setups.

We offer a Water Service Agreement to our customers which provides a no-risk, performance-based model where you only pay for treated water that meets your specifications. With no upfront costs for equipment or maintenance, Membrion guarantees effective wastewater treatment at a lower price than traditional methods.

How do customers normally engage with your company?
Customers typically engage with us through direct consultations, where we assess their specific wastewater challenges and design customized solutions. We also partner with engineering firms and integrators to implement our technology into new or existing water treatment systems. Additionally, we offer pilot programs to allow customers to test our membranes in their operations before committing to full-scale adoption. If you think Membrion solutions will be a good fit for your facility, let us know your wastewater challenges: https://membrion.com/contact-us/.

Also Read:

CEO Interview: Sandeep Kumar of L&T Semiconductor Technologies Ltd.

CEO Interview: Dr. Sakyasingha Dasgupta of EdgeCortix

CEO Interview: Bijan Kiani of Mach42


MZ Technologies is Breaking Down 3D-IC Design Barriers with GENIO
by Mike Gianfagna on 11-27-2024 at 10:00 am


3D-IC design can be both exciting and frustrating. It’s exciting because it opens a new world of innovation possibilities – opportunities that aren’t constrained by the rules of monolithic chip scaling. It can be frustrating because of the large array of complex technical challenges that must be overcome to make this new paradigm accessible. MZ Technologies’ mission is to conquer 2.5D & 3D design challenges for next generation electronic products by delivering innovative, ground-breaking EDA software solutions and methodologies. MZ’s flagship product is GENIO, and the company recently announced a comprehensive roadmap for it. If 2.5D & 3D design is in your future, this is big news. Let’s examine how MZ Technologies is breaking down 3D-IC design barriers with GENIO.

What’s Coming

GENIO™ is an integrated chiplet/packaging co-design EDA tool. The announced roadmap calls for improvements to the product throughout 2025, starting with four significant additions that will be unveiled in mid-January.  Other new features will be added around the middle of the year and at year’s end.

The features coming at the beginning of the year address some truly difficult design challenges. The focus is thermal and mechanical stress. These enhancements will also include an improved and modernized user interface.  

These new features tackle next-generation 3D-IC design challenges head-on.  To provide some background, 3D heterogeneous devices suffer from thermal stress that comes from uneven heat distribution during operation, potentially leading to warping and even reliability failures. To address thermal challenges, robust management strategies are essential. Application of the right tools will minimize temperature differentials. The result is optimal performance and longevity of the chiplets used in the design.

Mechanical stress can result from processes such as thermal expansion mismatch and substrate flexing. These effects can cause interconnect failures and even delamination. A robust approach is needed to maintain structural integrity and performance across varying operational conditions and material interfaces. The new version of GENIO delivers enhanced capabilities in both areas. Mid-year, the company is expected to add additional thermal and interconnect features to GENIO.

Anna Fontanelli

These enhancements build on the momentum already achieved by GENIO. Anna Fontanelli, Founder and CEO of MZ Technologies commented, “MZ Technologies rolled out the first commercially available co-design tool three years ago and we feel an obligation to the EDA community to continue to innovate.”

What’s Already Here

MZ Technologies has already released several enhancements. GENIO 1.7 saves even more design time than the previous version thanks to its ability to track and classify potential design process issues.

Using a dedicated, always-on dock, the newest version alerts designers to a full list of problems classified by severity, scope, and category. This extensive check provides a real-time update after any operation, and new violations are added to something called the Issues Dock. The user can select an issue in this dedicated dock, and all errors and warnings are highlighted with a severity-driven color across the GUI and the 2D/3D design views. The amount of data associated with 2.5D & 3D design is massive, and this feature helps to manage that complexity.

The new version performs several categories of checks, including placement rules during floor planning to catch violations such as overlapping instances, out-of-boundary placement, and ignored keep-out zones. Other checks spot vertical routing connectivity issues such as broken nets and crossing fly lines. The graphic at the top of this post shows an example of this capability.

To Learn More

MZ Technologies provides automated solutions that facilitate the design and optimization of complex, heterogeneous IC systems. You can learn more about this unique company on SemiWiki here. You can read the full text of MZ Technologies’ roadmap announcement here. And you can find out more about GENIO here. And that’s how MZ Technologies is breaking down 3D-IC design barriers with GENIO.


Compiler Tuning for Simulator Speedup. Innovation in Verification
by Bernard Murphy on 11-27-2024 at 6:00 am


Modern simulators map logic designs into software to compile for native execution on target hardware. Can this compile step be further optimized? Paul Cunningham (GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO and now Silvaco CTO) and I continue our series on research ideas. As always, feedback welcome.

The Innovation

This month’s pick is Efficient Compiler Autotuning via Bayesian Optimization. This was published in the 2021 IEEE/ACM International Conference on Software Engineering and has 22 citations. The authors are from Tianjin University, China, and Newcastle University, Australia.

Compiled code simulation is the standard for meeting performance needs in software-based simulation, so should benefit from advances in compiler technology from the software world. GCC and LLVM compilers already support many optimization options. For ease of use, best case sequences of options are offered as -O1/-O2/-O3 flags to improve application runtime, determined by averaging over large codebases and workloads. An obvious question is whether a different sequence delivering even better performance might be possible for a specific application.

This is an active area of research in software engineering, looking not only at which compiler options to select (e.g. function inlining) but also at the order in which these options should appear in the sequence, since options are not necessarily independent (A before B might deliver different performance than B before A).
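As a concrete illustration of the tuning problem (not taken from the paper), the Python sketch below compiles a hypothetical benchmark file bench.c with GCC under -O3 and under one hand-picked flag sequence, then times each binary. An autotuner automates exactly this measurement loop over thousands of candidate sequences; the flag names are real GCC options, but the choice and ordering here are purely illustrative.

import subprocess
import time

def build_and_time(flags, src="bench.c", exe="./bench_bin"):
    # Compile the benchmark with the given flag sequence, then time one run of it.
    subprocess.run(["gcc", *flags, src, "-o", exe], check=True)
    start = time.perf_counter()
    subprocess.run([exe], check=True)
    return time.perf_counter() - start

baseline = build_and_time(["-O3"])
# A hand-picked candidate sequence; order matters because optimization passes interact.
candidate = build_and_time(["-O2", "-finline-functions", "-funroll-loops", "-ftree-vectorize"])
print(f"-O3: {baseline:.3f}s  candidate: {candidate:.3f}s")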

Paul’s view

Using machine learning to pick tool switches in Place & Route to improve PPA is one of the best examples of commercially deployed AI in the EDA industry today. A similar concept can be applied to picking compiler switches in logic simulation to try and improve performance. Here, there are clear similarities to picking C/C++ compiler switches, known as “compiler autotuning” in academic circles.

In this month’s paper, the authors use a modified Bayesian algorithm to try and beat -O3 in GCC. They use a benchmark suite of 20 small ~1k line C programs (matrix math operations, image processing, file compression, hashing) and consider about 70 different low-level GCC switches. The key innovation in the paper is to use tree-based neural networks as the Bayesian predictor rather than a Gaussian process, and, during training, to quickly narrow down to 8 “important” switches and heavily explore permutations of those 8 switches.

Overall, their method is able to achieve an average 20% speed-up over -O3. Compared to other state-of-the-art methods, this 20% speed-up is achieved with about 2.5x less training compute. Unfortunately, all their results use a very old version of GCC from 12 years ago, which the authors acknowledge at the end of their paper, along with a comment that they did try a more recent version of GCC and were able to achieve only a 5% speed-up over -O3. Still, a nice paper, and I do think the general area of compiler autotuning can be applied to improve logic simulation performance.

Raúl’s view

Our penultimate paper for 2024 addresses setting optimization flags in compilers to achieve the fastest code execution (presumably other objective functions like code size or energy expended during computation could have been used). The compilers studied, GCC and LLVM, expose 71 and 64 optimization flags respectively, so the optimization spaces are vast at 2⁷¹ and 2⁶⁴. Previous approaches use random iterative optimization, genetic algorithms, and Irace (tuning of parameters by finding the most appropriate settings given a set of instances of an optimization problem, i.e., “learning”). The authors’ system is called BOCA.

This paper uses Bayesian optimization, an iterative method to optimize an objective function using the accumulated knowledge from the explored area of the search space to guide sampling in the remaining area in order to find the optimal sample. It builds a surrogate model that can be evaluated quickly, typically a Gaussian process (GP, you can look it up here, it is not explained in the paper), which doesn’t scale to high dimensionality (number of flags). BOCA uses a Random Forest instead (RF, also not explained in the paper). To further improve the search, optimizations are ranked into “impactful” and “less impactful” using Gini importance to measure the impact of each optimization (look it up here for more detail). Less impactful optimizations are considered only in a limited number of iterations, i.e., they “decay”.
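Here is a minimal sketch of a BOCA-style iteration in Python, assuming scikit-learn is available. It uses RandomForestRegressor as the surrogate, its feature importances (a Gini-based measure) to pick the most impactful flags, and focuses exploration on those flags while the rest stay at the current best settings. The measure_runtime function is a synthetic stand-in for compiling and timing a real benchmark, and the acquisition step simply picks the candidate the surrogate predicts to be fastest rather than a full expected-improvement calculation.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
FLAGS = ["-finline-functions", "-funroll-loops", "-ftree-vectorize", "-fomit-frame-pointer",
         "-fgcse", "-fivopts", "-fpeel-loops", "-fsched-pressure"]

def measure_runtime(config):
    # Stand-in for compiling with the selected flags and timing the benchmark;
    # a real tuner would invoke GCC/LLVM here, as in the earlier sketch.
    hidden_effect = np.array([-0.5, -0.3, -0.8, -0.1, 0.2, -0.05, 0.1, -0.4])
    return 10.0 + config @ hidden_effect + rng.normal(scale=0.1)

# Bootstrap the surrogate with a handful of random on/off flag configurations.
X = rng.integers(0, 2, size=(12, len(FLAGS))).astype(float)
y = np.array([measure_runtime(c) for c in X])

for _ in range(30):
    surrogate = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
    # Rank flags by (Gini-based) importance; the top few are the "impactful" ones.
    impactful = np.argsort(surrogate.feature_importances_)[::-1][:4]
    # Less impactful flags "decay": they stay at the incumbent's settings,
    # while the impactful flags are explored heavily.
    incumbent = X[np.argmin(y)]
    candidates = np.tile(incumbent, (200, 1))
    candidates[:, impactful] = rng.integers(0, 2, size=(200, len(impactful)))
    # Simple acquisition: evaluate the candidate the surrogate predicts to be fastest.
    best = candidates[np.argmin(surrogate.predict(candidates))]
    X = np.vstack([X, best])
    y = np.append(y, measure_runtime(best))

print("Best flag settings found:", dict(zip(FLAGS, X[np.argmin(y)].astype(int))))

The expensive step in practice is each real compile-and-run measurement, which is exactly why a cheap surrogate plus a restricted search over the impactful flags pays off.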

The authors benchmark the two compilers on 20 benchmarks against other state-of-the-art approaches, listing the results for 30 to 60 iterations. BOCA reaches given target speedups in 43%-78% less time. Against the highest optimization setting of the compilers (-O3), BOCA achieves a speedup of 1.25x for GCC and 1.13x for LLVM. Notably, using around 8 impactful optimizations works best, as more can slow BOCA down. The speedup is limited when using more recent GCC versions, at 1.04-1.06x.

These techniques yield incremental improvements. They would certainly be significant in HW design where they can be used for simulation and perhaps for setting optimization flags during synthesis and layout, where AI approaches are now being adopted by EDA vendors. Time will tell.

Also Read:

Cadence Paints a Broad Canvas in Automotive

Analog IC Migration using AI

The Next LLM Architecture? Innovation in Verification

Emerging Growth Opportunity for Women in AI

 


Scaling AI Data Centers: The Role of Chiplets and Connectivity
by Kalar Rajendiran on 11-26-2024 at 6:00 am

Building the Modern Data Centre AI Compute Nodes

Artificial intelligence (AI) has revolutionized data center infrastructure, requiring a reimagining of computational, memory, and connectivity technologies. Meeting the increasing demand for high performance and efficiency in AI workloads has led to the emergence of innovative solutions, including chiplets, advanced interconnects, and optical communication systems. These technologies are transforming data centers into scalable, flexible ecosystems optimized for AI-driven tasks.

Alphawave Semi is actively advancing this ecosystem, offering a portfolio of chiplets, high-speed interconnect IP, and design solutions that power next-generation AI systems.

Custom Silicon Solutions Through Chiplets

Chiplet technology is at the forefront of creating custom silicon solutions that are specifically optimized for AI workloads. Unlike traditional monolithic chips, chiplets are modular, enabling manufacturers to combine different components—compute, memory, and input/output functions—into a single package. This approach allows for greater customization, faster development cycles, and more cost-effective designs. The Universal Chiplet Interconnect Express (UCIe) is a critical enabler of this innovation, providing a standardized die-to-die interface that supports high bandwidth, energy efficiency, and seamless communication between chiplets. This ecosystem paves the way for tailored silicon solutions that deliver the performance AI workloads demand, while also addressing power efficiency and affordability.

Scaling AI Clusters Through Advanced Connectivity

Connectivity technologies are the backbone of scaling AI clusters and geographically distributed data centers. The deployment of AI workloads in these infrastructures requires high-bandwidth, low-latency communication between thousands of interconnected processors, memory modules, and storage units. While traditional Ethernet-based front-end networks remain critical for server-to-server communication, AI workloads place unprecedented demands on back-end networks. These back-end networks facilitate the seamless exchange of data between AI accelerators, such as GPUs and TPUs, which is essential for large-scale training and inference tasks. Any inefficiency, such as packet loss or high latency, can lead to significant compute resource wastage, underlining the importance of robust connectivity solutions. Optical connectivity, including silicon photonics and co-packaged optics (CPO), is increasingly replacing copper-based connections, delivering the bandwidth density and energy efficiency required for scaling AI infrastructure. These technologies enable AI clusters to grow from hundreds to tens of thousands of nodes while maintaining performance and reliability.

Memory Disaggregation for Resource Optimization

AI workloads also demand innovative approaches to memory and storage connectivity. Traditional data center architectures often suffer from underutilized memory resources, leading to inefficiencies. Memory disaggregation, enabled by Compute Express Link (CXL), is a transformative solution. By centralizing memory into shared pools, disaggregated architectures ensure better utilization of resources, reduce overall costs, and improve power efficiency. CXL extends connectivity beyond individual servers and racks, requiring advanced optical solutions to maintain low-latency access over longer distances. This approach ensures that memory can be allocated dynamically, optimizing performance for demanding AI applications while providing significant savings in operational costs.

The Emergence of the Chiplet Ecosystem

A thriving chiplet ecosystem is emerging, fueled by advances in die-to-die interfaces like UCIe. This ecosystem allows for a wide variety of chiplet use cases, enabling modular and flexible design architectures that support the scalability and customization needs of AI workloads. This modular approach is not limited to high-performance computing; it also has implications for distributed AI systems and edge computing. Chiplets are enabling the creation of custom compute-hardware for edge AI applications, ensuring that AI models can operate closer to users for faster response times. Similarly, distributed learning architectures—where data privacy is a concern—rely on chiplet-based solutions to train AI models efficiently without sharing sensitive information.

Summary

AI is redefining data center infrastructure, necessitating solutions that balance performance, scalability, and efficiency. Chiplets, advanced connectivity technologies, and memory disaggregation are critical enablers of this transformation. Together, they offer the means to scale AI workloads affordably while maintaining energy efficiency and reducing time-to-market for new solutions. By harnessing these innovations, data centers are better equipped to handle the demands of AI, paving the way for more powerful, efficient, and scalable computing solutions.

  • Chiplet technology enables tailored silicon solutions optimized for AI workloads, offering affordability, lower power consumption, and faster deployment cycles.
  • Optical communication technologies, such as silicon photonics and co-packaged optics, are vital to scaling AI clusters and distributed data centers.
  • Memory disaggregation via CXL maximizes resource utilization while reducing costs and energy consumption.

Learn more at https://awavesemi.com/

Also Read:

How AI is Redefining Data Center Infrastructure: Key Innovations for the Future

Elevating AI with Cutting-Edge HBM4 Technology

Alphawave Semi Unlocks 1.2 TBps Connectivity for HPC and AI Infrastructure with 9.2 Gbps HBM3E Subsystem