Podcast EP133: OpenLight and Their Revolutionary Approach to Photonics

by Daniel Nenni on 12-23-2022 at 10:00 am

Dan is joined by Dr. Thomas Mader, Chief Operating Officer of OpenLight, a newly formed independent company created through investments from Synopsys and Juniper Networks. Dr. Mader’s experience spans 27 years across the photonics and consumer electronics industries. Prior to the formation of OpenLight, he led the same team within Juniper Networks. His previous experience includes six years at Intel, where he founded Light Peak, which eventually became Thunderbolt. He also drove innovation at Amazon for six years, creating several new devices such as the Amazon Dash Button.

Tom describes OpenLight’s unique ability to integrate optical components – lasers, amplifiers and modulators – directly onto silicon. This technology enables a new level of integration for silicon photonics. Tom describes the benefits of OpenLight’s approach and the unique business model they employ to bring that technology to market.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


A Five-Year Formal Celebration at Axiomise

by Daniel Nenni on 12-23-2022 at 6:00 am

It’s been a bit more than a year since I interviewed Dr. Ashish Darbari, founder and CEO of Axiomise. I’ve been keeping an eye on Ashish and his colleagues, and I was surprised to learn that they recently celebrated their fifth anniversary as a company. I thought that this would be a good time to catch up with him to find out what’s happened over the year since we talked and to learn more about the last five years.

Ashish, congratulations on reaching five years! Can you summarize your journey?
Thank you, Dan. If I had to pick just one word, it would be “amazing.” When I set up Axiomise in October 2017 to offer training and consulting services around formal verification, it was because I knew that there was a big hole in the available industry solutions and methodologies, and therefore a big need and a big opportunity. But I have to say that the past five years have met or exceeded every expectation that I had. Clients have responded enthusiastically and provided us the business to grow and prosper. I’m very grateful for their support, and for the interest that you and others in the EDA community have shown in our company.

How has Axiomise evolved since we last spoke?
2022 has been incredible for us. For a start, we’ve been growing as a company. Gurudutt (GD) Bansal joined as COO and Neil Dunlop joined as CTO. They’ve been great partners in taking our business to the next level. We opened new offices in Hemel Hempstead, just outside of London, with room for more team members going forward.

I see on Wikipedia that Hemel Hempstead has been a village for more than 1000 years and was granted its town charter by Henry VIII in 1539. So you’re part of that great European tradition of doing cutting-edge technical work in historic settings?
That’s a nice way to look at it. The U.K. is a great location for a services company. We can work with Asia in our morning, with North America in our afternoon, and with Europe all day. We have a worldwide client base, which is part of our amazing story.

Speaking of expanding your scope, I see that you recently joined the ESD Alliance. What role will that play in your future?
As part of the SEMI Technology community, the ESD Alliance is closely tied to the semiconductor industry, and that’s where our clients are. So we’re becoming a more integral part of the chip ecosystem and are already networking, making contacts, and forging relationships that will help us grow further. The benefits of membership work both ways. The ESD Alliance is the voice of the EDA industry, and we feel that any successful company should be part of it and give back to the community by sharing experiences and offering advice to fellow members.

One of the things that has impressed me about you personally is your ability to act as an industry spokesperson for formal while running a company and doing hands-on verification work. Have you continued your speaking activity in 2022?
My goodness, yes. It’s been a really busy year on that front. Probably my highest profile activity was participating in the panel “Those Darn Bugs” at the Design Automation Conference (DAC) in San Francisco. Brian Bailey of Semiconductor Engineering led a lively discussion on whether it will ever be possible to eradicate all bugs from chip designs. Of course, there is no chance of that happening without formal taking a lead role. A video of the panel is available online and I think it’s worth watching.

Also at DAC, I talked about “Taming the Beast: RISC-V Formal Verification Made Easy” in the Cadence Theatre. I explained how 32-bit and 64-bit processor cores are verified with formal verification using the Axiomise formalISA app. A video of this talk is also available. Please thank your colleague Daniel Payne for covering our DAC activities.

I joined the “SoC Leaders Verify with Synopsys” panel at the Synopsys Users Group (SNUG) event and a recording of that is online as well. At DVCon Europe in Munich, I appeared on the panel “5G Chip Design Challenges and their Impact on Verification.” Also in Munich, I presented “Accelerating Debug and Signoff for RISC-V Processors Using Formal Verification” at CadenceLIVE Europe. Finally, I discussed how formal can address safety and security as well as functional verification at a Cadence Club Formal event in the U.K.

The Axiomise team participates in all kinds of events. Thanks to your colleague GD for doing a podcast with me earlier this year. The immediacy and directness of a podcast seemed to work well for explaining the potentially scary topic of formal. Have you done others?
I had the pleasure of recording a “Fish Fry” with Amelia Dalton of EE Journal on “The Art of Predictability” and the three pillars of formal verification. In fact, I like podcasts so much that I have my own series and have now recorded 50 episodes.

That’s really impressive; I can’t imagine how you possibly find the time. You write quite a bit as well, don’t you?
Yes, this year, I’ve published articles in EDN magazine and Electronic Design magazine. We also do webinars, white papers, and more. You can go to the Knowledge Hub menu on our website to get a complete list.

Surely all this external activity doesn’t prevent you from continuing to innovate in formal?
Not a chance. Speaking and writing is fun, but it’s the work with our clients that keeps us in business. They’re designing and verifying some of the biggest and baddest chips in the world, so they are constantly pushing the limits of formal technology. We have no choice but to innovate constantly, and that’s a big part of the value we bring to the industry.

As you can see from some of our talks and articles, our biggest innovation this year was expanding our solution for RISC-V verification. We announced this late last year and since then have been very busy helping clients verify their processors. Again and again, we have found serious bugs in RISC-V designs when they were thought to be correct based on massive amounts of simulation testing and even, in some cases, had been fabricated and tested in silicon.

How do you work with your clients?
Our primary goal is to offer maximum ROI to the client in the shortest possible time. This often means we take on the formal verification work as a turnkey services project, which allows the client to see how formal is done on actual designs at a fast pace, with excellent proof convergence, finding bugs and establishing proofs of bug absence. Apart from the turnkey services work, which has been our primary focus, we also offer training to complement the services.

Do you have any final thoughts for our readers?
I just want to thank everyone who has provided support to us for the last five years. We’re excited to have hit this milestone but it’s only the beginning of what we can do to lead the industry in creating chips that are functionally correct, safe, and secure. To learn more, you can email us at info@axiomise.com or contact us through www.axiomise.com. We are here to help.

Thank you for your time, Ashish.
You’re most welcome!

Also Read:

CEO Interview: Dr. Ashish Darbari of Axiomise

Accelerating Exhaustive and Complete Verification of RISC-V Processors

Life in a Formal Verification Lane

Why I made the world’s first on-demand formal verification course


Building better design flows with tool Open APIs – Calibre RealTime integration shows the way forward

by Peter Bennet on 12-22-2022 at 10:00 am

You don’t often hear about the inner workings of EDA tools and flows – the marketing guys much prefer telling us about all the exciting things their tools can do rather than the internal plumbing. But this matters for making design flows – and building these has largely been left to the users to sort out. That’s an increasing challenge as designs and EDA tools get more complex and it’s sometimes become necessary to run a part of one tool from within another. To enable that, EDA companies have to pick up their share of the work.

That’s particularly the case for point tools in largely integrated vendor design flows. Calibre is perhaps the best known point tool out there and one common to all major analog, digital and mixed signal design flows. So it’s interesting to hear what Siemens EDA is doing here with Calibre.

Calibre’s RealTime interface supports this closer flow integration and is an established presence in all major digital and custom implementation flows.

How modern design flows drive closer tool interactions

While the Siemens EDA white paper here (link at the end) spends some time discussing the costs and benefits of best-in-class tool flows (like most Calibre ones) versus full single-vendor flows, that’s really a subject in itself (interesting, but perhaps for a separate article). The reality today is that users frequently need some third party point tools to be integrated into flows and it’s likely that many will always demand this.

We used to think in terms of a serial design flow where each tool has a distinct flow step and there’s little overlap between the tools. Something like this example for place and route:

Of course, we’ve simplified a bit here – we check (verification) after each step and have frequent iterative loops back to try things like alternative placements.

With today’s huge, hierarchical designs, leaving the entirety of signoff steps like DRC and LVS to the end of the flow is inefficient and puts signoff schedules at risk. Many of these checks can be done earlier in the flow. We just need an efficient way to do it. Similarly, it’s often helpful to do some local resynthesis within placement to minimise total flow run time.

What we’re looking for here might be called “on-demand checking” (or implementation) – doing a local operation on a part of a design exactly when we need to, by pulling forward functionality from one tool to run within an earlier one in the flow. As ever, we want to run things as early as we possibly can – what Siemens calls “shift left.”

It’s a real change in how we think about tools and flows.

How APIs help us here

We’ve always been able to add custom menus in tool GUIs and inject custom Tcl scripts into tool run scripts to access other tools. What’s usually been lacking is the ability to call external tool functionality with minimal interfacing delay and memory footprint, through clean, documented, reliable integration that regular users can configure. We certainly don’t want to pass the entire design or invoke all the functionality of another tool if we can possibly avoid it.

At first glance, this would appear to be a decisive advantage for more integrated single vendor flows and an increasing drag on integrating other vendor tools.

But that’s not necessarily the case.

Users can interface with EDA tools in a variety of ways, including:

  • Native command shell (usually Tcl)
  • Tool commands
  • Direct database access (query, modification)
  • GUI (sometimes menu customisation, sometimes GUI scripting interface)
  • Reports (native, user-defined through scripting)
  • Logs

Anyone who’s spent too much time with a tool has also run across some hidden (or private) settings and perhaps further, less documented interfaces with unusual naming styles. When a tool pulls the command side together into a more complete and consistent, documented interface, this becomes an Application Programming Interface (API).

The limiting factor in tool flow integration is often the quality, consistency and scope of the API and the inherent ability of the tool to support rapid surgical interventions on critical parts of a design – regardless of whether it’s a single or multi-vendor flow.

These are often determined in the initial tool architectural design when the core data structures and envisaged use models are considered (otherwise they’re shoehorned back in much later). As so often in engineering, it’s the interfaces that are critical. As these are so critical for point tools, they often get more attention. You soon learn what type of tool you have from the consistency of the interfaces (and single-vendor flows may not yet be quite as streamlined as we might assume).

The white paper goes into more details about how this is implemented (Figure 1). Calibre functionality is added to a layout tool through both customisation of the GUI menus and a direct interface to the Calibre API through the layout tool scripting language.
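To make the on-demand pattern concrete, here is a minimal, hypothetical sketch in Python. Every class and function name in it is invented for illustration; the real integration works through the layout tool’s own scripting language and the documented Calibre API, as described above.

```python
# Hypothetical sketch of an "on-demand check" wrapper, in the spirit of the
# integration described above. All names here are invented, not a real API.

class DrcViolation:
    def __init__(self, rule, location):
        self.rule = rule          # e.g. a minimum-spacing rule name
        self.location = location  # where the violation was found

class OnDemandChecker:
    """Wraps an external checker so the layout tool can run the signoff
    rule deck on just the region being edited, not the whole design."""

    def __init__(self, rule_deck):
        self.rule_deck = rule_deck

    def check_region(self, shapes, min_spacing=3):
        # Toy stand-in for a real DRC engine: flag pairs of shapes
        # (modeled here as 1-D coordinates) closer than min_spacing.
        violations = []
        for i, a in enumerate(shapes):
            for b in shapes[i + 1:]:
                if abs(a - b) < min_spacing:
                    violations.append(DrcViolation("SPACING", (a, b)))
        return violations

checker = OnDemandChecker(rule_deck="signoff.rules")
errors = checker.check_region([0, 2, 10])  # 0 and 2 are too close
print(len(errors))  # one spacing violation
```

The point of the sketch is the shape of the interface: a small, documented entry point that accepts a local region and the signoff deck, rather than a full-design batch run.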

Individual design groups can use an off-the-shelf Calibre integration (EDA vendors can do this through the Siemens EDA Partner Program) or quickly and easily fine tune an integration for their exact flow needs.

Putting this into practice, Calibre RealTime can provide on-demand signoff DRC checking with the signoff rule deck, giving designers a significant run time and productivity gain over running separate Calibre DRC checks.

Another application in use today is running Calibre SmartFill within the layout flow to get more accurate parasitics earlier in the flow.

Summary

There are many cases where design flows benefit from closer tool integration, and we’ll likely need more and tighter interaction between what we used to think of as separate tools in a waterfall flow as we optimise design flows to run checks exactly where and when we want them. But getting there requires determined effort from EDA vendors to improve tool usability with interfaces like APIs.

The Calibre RealTime interface shows what’s possible here. It’s being widely used in all major flows (Synopsys Fusion Compiler, Cadence Innovus and Virtuoso, as well as many others).

Find out more in the original white paper here:

https://resources.sw.siemens.com/en-US/white-paper-open-apis-enable-integration-of-best-in-class-design-creation-and-physical

Related Blogs and Podcasts

The Siemens EDA website contains a wealth of further material in white papers and videos:

https://eda.sw.siemens.com/en-US/ic/calibre-design/

This paper looks at the related importance of tool ease of use, again from a Calibre perspective:

https://blogs.sw.siemens.com/calibre/2022/03/16/ease-on-down-the-roadwhy-ease-of-use-is-the-next-big-thing-in-eda-and-how-we-get-there/

You can also learn more about Calibre here:

https://www.youtube.com/@ICNanometerDesign

Also Read:

An Update on HLS and HLV

Cracking post-route Compliance Checking for High-Speed Serial Links with HyperLynx

Calibre: Early Design LVS and ERC Checking gets Interesting


How an Embedded Non-Volatile Memory Can Be a Differentiator

by Kalar Rajendiran on 12-22-2022 at 6:00 am

Embedded memory makes computing applications run faster. In the early days of the semiconductor industry, the desire to utilize large amounts of on-chip memory was limited by cost, manufacturing difficulties and technology mismatches between logic and memory circuit implementations. Since then, advancements in semiconductor manufacturing have been bringing on-chip memory costs down.

Fast forward to today: applications such as AI, machine learning, mobile and other low-power applications have been fueling demand for large amounts of embedded memory. A challenge with SRAM-based memory processing elements is that they consume a lot of power, which many of the above-mentioned applications cannot afford. In addition, many of the existing embedded non-volatile memory (NVM) technologies such as flash face challenges as the process node goes below 28nm. The challenges are due to additional material layers and masks, supply voltages, speed, read & write granularity and area.

Resistive RAM (ReRAM or RRAM) is a promising technology that is specifically designed to work in finer geometry process nodes where charge-based NVM technologies face challenges. It is true that ReRAM as a technology has spent many decades in the research phase. For satisfying NVM needs, Flash technology had the edge for many applications until 28nm.

ReRAM’s simplicity for process manufacturing makes it easier to be integrated into Back End of Line (BEOL) with only a few extra masks and steps. ReRAM technology enables high-speed, low-power write operations and increased storage density, all critical for AI computing-in-memory applications, as an example.

At the IP-SoC Conference 2022, Eran Briman of Weebit Nano talked about their ReRAM offering and how a wide range of markets and applications could benefit from it.

Who is Weebit Nano?

Weebit Nano is a leading developer of ReRAM-based IP. They license their IP to FSCs and fabs, which manufacture the chips embedding this IP. Since their early days in 2015, Weebit Nano has strategically partnered with CEA-Leti to leverage research in NVM, and specifically in ReRAM.

Weebit Nano’s ReRAM Technology

Weebit Nano’s ReRAM technology is based on the creation of a filament made of oxygen vacancies in a dielectric material, and is hence called OxRAM. The dielectric layer is deposited between two metal stacks at the BEOL, and by applying different voltage levels a filament is either created, representing a logical 1, or dissolved, representing a logical 0. The technology is inherently resistant to tough environmental conditions, as the information is retained within the stack itself. As a result, OxRAM operates reliably at high temperatures and under exposure to radiation and EM fields. The technology also utilizes materials and tools commonly used in standard CMOS fabs.
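The switching behavior just described can be captured in a toy behavioral model. This is an illustrative sketch only; the set/reset voltage thresholds below are invented for illustration and are not Weebit Nano’s actual device parameters.

```python
# Toy behavioral model of an OxRAM bit cell, following the description above:
# a set voltage forms the filament (logical 1), a reset voltage of opposite
# polarity dissolves it (logical 0), and a low read voltage senses the state
# without changing it. Threshold values are invented for illustration only.

SET_V = 1.5      # assumed filament-forming (set) threshold
RESET_V = -1.5   # assumed filament-dissolving (reset) threshold
READ_V = 0.2     # low-voltage, non-destructive read pulse

class OxRamCell:
    def __init__(self):
        self.filament = False  # no filament -> logical 0

    def apply(self, voltage):
        if voltage >= SET_V:
            self.filament = True    # filament created: stores 1
        elif voltage <= RESET_V:
            self.filament = False   # filament dissolved: stores 0
        # voltages in between (e.g. a read pulse) leave the state untouched

    def read(self):
        self.apply(READ_V)          # read pulse does not disturb the cell
        return 1 if self.filament else 0

cell = OxRamCell()
cell.apply(1.8)        # program: above the assumed set threshold
print(cell.read())     # 1
cell.apply(-1.8)       # erase: below the assumed reset threshold
print(cell.read())     # 0
```

The key property the model illustrates is non-volatility: the state lives in the filament itself, so reads at low voltage never alter it.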

The resulting Weebit Nano-based NVM solution is also very cost-effective, as it requires only two additional masks compared to around 10 additional masks for embedded Flash. It is also power-efficient, as programming can be done below 2V compared to around 10V for embedded Flash. During operation, memory reads can be accomplished at 0.2V, which is very power-efficient.

Weebit ReRAM Status/Availability

The technology is now production-ready as of November 2022, with wafers having been manufactured in nodes from 130nm to 28nm to date. Getting to production-ready status required passing the JEDEC industry-standard qualification process for NVM memories. The qualification process includes rigorous tests for endurance, retention, retention after cycling, solder reflows, etc., on hundreds of blindly selected dies from three independent wafer lots.

Weebit Nano’s first production manufacturing partner SkyWater recently produced 130nm wafers embedding Weebit’s 256Kb ReRAM module. The dies are now going through the JEDEC qualification process and are available for customers to integrate into a range of target SoCs.

ReRAM: Why Now?

As noted earlier, Flash memory faces scaling limitations below 28nm in both cost and complexity. At the same time, the pressure for lower-power and lower-cost solutions is increasing, pushing products toward more advanced process nodes. ReRAM technology scales nicely beyond 28nm and fits easily in bulk CMOS, FD-SOI and FinFET processes. It can also support low-power, high-performance, RF CMOS, high-voltage and other process variants. This opens up target markets to include mixed-signal, power management, MCUs, Edge AI, Automotive, Industrial and Aerospace & Defense applications. According to Yole, a market research firm, the embedded ReRAM market is projected to grow from less than $20M in 2021 to around $1B in 2027.
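As a quick sanity check on the Yole forecast just quoted, the implied compound annual growth rate can be computed directly (a back-of-the-envelope sketch using the article’s approximate endpoint figures):

```python
# Back-of-the-envelope CAGR implied by the forecast quoted above:
# roughly $20M in 2021 growing to about $1B in 2027, i.e. over six years.
start, end, years = 20e6, 1e9, 2027 - 2021
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.0%}")  # roughly 92% per year
```

A 50x expansion over six years works out to nearly doubling every year, which is what makes the forecast so striking.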

Why Weebit Nano ReRAM?

Refer to the following table which highlights how Weebit Nano’s ReRAM IP addresses key requirements of various applications in fast growing markets.

Those looking into designing chip solutions for applications that could benefit from embedded memories should reach out to Weebit Nano to get more insights about their ReRAM solutions.


Regulators Wrestle with ‘Explainability’​

by Roger C. Lanctot on 12-21-2022 at 10:00 am

The letter from the San Francisco Municipal Transportation Authority (SFMTA) to the National Highway Traffic Safety Administration (NHTSA) shines a bright spotlight on a major weakness of current automated vehicle technology – explainability. The letter is in reply to a request for comment from interested parties by NHTSA regarding General Motors’ request for exemptions from traditional safety regulations for GM’s Cruise Automation Origin vehicle to operate driverlessly on public roads.

The letter highlights the performance shortcomings of Cruise’s existing fleet of self-driving vehicles – based on Chevy Bolts – and the agency’s disappointment in Cruise’s responsiveness to multiple concerns that the agency has expressed. While the letter highlights the clashing jurisdictions of Federal and local authorities, it also raises alarms regarding the potential negative impact that might result from Cruise unleashing hundreds or even thousands of vehicles on San Francisco streets – or, in fact, the streets of any city.

At the core of the SFMTA’s concerns though is not only Cruise’s failure to respond to questions regarding day to day operation of its vehicles or to concerns regarding particular incidents – the letter raises questions as to Cruise’s ability to explain how or why its vehicles are doing what they are doing. This breakdown reflects a shortcoming in artificial intelligence and machine learning technology where users or creators are unable to explain the output of their own algorithms.

The first evidence of this explainability breakdown emerged earlier this year when NHTSA opened investigations into phantom braking incidents plaguing vehicles from Tesla operating in Autopilot or Full-Self-Driving. Tesla vehicles are known to periodically come to a stop on highways – and the company has been unable to either explain or remedy the problem.

This experience echoed the “unintended acceleration” events that struck Toyota vehicles years ago and spurred a Congressional investigation and NHTSA’s outreach to the National Aeronautics and Space Administration (NASA) to try to explain the phenomenon. Of course, the Toyota incidents were not tied to artificial intelligence or machine learning.

The SFMTA letter cites multiple circumstances of Cruise vehicles slowing or stopping mid-block in the flow of traffic for no reason – including situations where emergency responders were impeded. The agency also expressed its unhappiness with Cruise vehicles not pulling out of traffic lanes and over to available curb space to pick up or drop off passengers – as required by law.

The agency further raised questions as to the timeliness of Cruise’s responsiveness in the event of vehicle failures. In these situations there were delays in making contact with appropriate personnel at Cruise as well as additional delays in Cruise personnel coming to rescue inoperable Cruise vehicles.

SFMTA’s concerns were elevated by its anticipation of the Bolt-based Cruise vehicles – which are equipped with steering wheels and brake and accelerator pedals – being replaced with much larger Cruise Origin vehicles that lack such manual vehicle controls. In fact, the lack of those controls is the motivation for GM to request regulatory waivers for as many as 5,000 Cruise Origin AVs.

Cruise personnel can easily reposition or remove Bolt-based AVs, but Origin vehicle failures are expected to require the involvement of flatbed or tow trucks to remove or reposition the vehicles.

One of the SFMTA’s greatest concerns, though, expressed early in the letter, is the anticipated impact of steering wheel-less AVs operating in San Francisco in substantial numbers. According to the SFMTA’s own research, the introduction of a total of 5,700 Uber and Lyft vehicles six years ago was responsible for 25% of all travel delays in the city at that time.

Cruise’s fleet operating in San Francisco without drivers currently consists of fewer than 100 vehicles. The company has logged fewer than 20,000 miles of autonomous operation through May 22, according to SFMTA.

Cruise’s failures to adequately respond to local regulatory authorities and/or to explain the failure or idiosyncratic functioning of its vehicles mark an important turning point for the AV industry. SFMTA reported a significant uptick in 911 calls to emergency responders in connection with the erratic behavior or apparent failures of unmanned Cruise vehicles – even at their currently low on-road volume.

Cruise has made some efforts to reach out to the public with marketing messages and to try to explain itself, its operations and its goals. The more salient authority that is in desperate need of this kind of outreach is the SFMTA and local emergency responders who have been forced to cope with the evolving operational shortcomings of Cruise vehicles and the public’s reaction to them.

It’s worth noting that little such complaint or pushback has arisen from the operation of Waymo’s AVs. Waymo has thus far been operating with traditionally equipped and regulatory-compliant vehicles that do not require waivers from NHTSA to operate.

With its letter, the SFMTA posits a nightmare scenario where Cruise might – on its own – decide to introduce hundreds or thousands of its driverless AVs on the streets of San Francisco. The waiver request from GM to NHTSA forces the SFMTA to ponder the impact of such a prospective deployment.

For the average San Francisco native, the letter suggests that it might be time to put the brakes on all robotaxi activities until and unless the city decides that robotaxis are indeed a desired transportation objective. One hint as to the unlikeliness of this are the additional objections and concerns expressed by the SFMTA regarding accommodations for residents with disabilities. It was only after a major regulatory and legal tussle that the SFMTA was able to obtain appropriate concessions from Uber and Lyft for such residents.

In the end, Cruise needs to come clean and clean up its act. And the SFMTA has now raised questions that all municipalities must ask: Do we want robotaxis? How do we want them to operate? And how many are we prepared to accommodate?

Letter from the SFMTA: https://regmedia.co.uk/2022/09/26/letter_to_nhtsa.pdf

Also Read:

U.S. Automakers Broadening Search for Talent and R&D As Electronics Take Over Vehicles

Super Cruise Saves OnStar, Industry

The Truly Terrifying Truth about Tesla


Validating NoC Security. Innovation in Verification

by Bernard Murphy on 12-21-2022 at 6:00 am

Network-on-Chip (NoC) connectivity is ubiquitous in SoCs and should therefore be an attractive attack vector. Is it possible to prove robustness against a broad and configurable range of threats? Paul Cunningham (Senior VP/GM, Verification at Cadence), Raúl Camposano (Silicon Catalyst, entrepreneur, former Synopsys CTO and now Silvaco CTO) and I continue our series on research ideas. As always, feedback welcome.

The Innovation

This month’s pick is Towards the Formal Verification of Security Properties of a Network-on-Chip Router. The authors presented the paper at the 2018 IEEE ETS and are/were at the Technical University of Munich in Germany.

NoCs are the preferred connectivity fabric for modern SoCs. In mesh form, NoCs are fundamental in the arrayed processor architectures of many-core servers and AI accelerators. Given this, mesh NoCs are a natural target for direct and side-channel software-based attacks. Further supporting recent interest, Google Scholar shows nearly 2k papers on NoC security for 2022.

Use of formal methods as presented here is appealing, since mesh structures are regular and the proposed approach is inductive, which should imply scalability. The authors illustrate with a set of properties to check for security against data alteration, denial of service and timing side-channel attacks.

Paul’s view

This is a perfect paper to close out 2022; a tight and crisp read on an important topic. Table II on page 4 is the big highlight for me, where the authors beautifully summarize formal definitions of security attacks on a NOC with a hierarchy of just 21 concise properties covering data modification, denial-of-service, and side-channel (interference, starvation) attacks. These definitions can be pulled out of the table and plugged directly into any commercial model checking tool or implemented as SystemVerilog assertions for a logic simulation or emulation run.
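As a hedged illustration of what such properties look like (my own paraphrase in LTL notation, not the paper’s exact formulation), a buffer-integrity property and a multiplexing property might be written as:

```latex
% G = "globally" (always); O = past-time "once" (at some earlier point)
% b5-style: any data d read from the buffer was at some point written into it
\mathbf{G}\,\bigl(\mathit{read}(d) \rightarrow \mathbf{O}\,\mathit{write}(d)\bigr)
% m1-style: during multiplexing, the output equals the selected input
\mathbf{G}\,\bigl(\mathit{sel} = i \rightarrow \mathit{out} = \mathit{in}_i\bigr)
```

In SystemVerilog assertion form, the unbounded past-time obligation would typically be tracked with auxiliary state, since SVA has only limited support for past-time operators.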

Since the paper was published in 2018 we have seen NOC complexity rise significantly, with almost any SOC from mobile to datacenter to vehicle deploying some form of NOC. At Cadence we are seeing model checking become an industry must-have for security verification. Today, even complex speculative execution attacks like Spectre and Meltdown can be effectively formulated and proven with modern model checking tools.

Raúl’s view

This month’s paper tackles Network-on-Chip (NoC) vulnerabilities by defining a large set of security-related properties of NoC routers. These are implementation-independent. Subsets of these properties provide specific security and correctness checks; for example, avoiding false data in a router comes down to 1) data that was read from the buffer was at some point in time written into the buffer, and 2) during multiplexing, the output data is equal to the desired input data (properties b5 and m1 in Table II and Fig. 3). Nine such composite effects of the properties are listed; in addition to avoiding false data (above), examples include buffer overflow, data overwrite, packet loss, etc.
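To make properties b5 and m1 concrete, here is a toy trace checker in Python – my own illustrative encoding of the informal statements above, not the paper’s Phaeton/LTL formulation:

```python
# Toy checkers for the two properties paraphrased above:
#   b5: any data read from the buffer was at some point written into it
#   m1: during multiplexing, the output equals the selected input

def check_b5(trace):
    """trace: list of ('write', d) / ('read', d) events in time order."""
    written = set()
    for op, data in trace:
        if op == 'write':
            written.add(data)
        elif op == 'read' and data not in written:
            return False  # false data: read something never written
    return True

def check_m1(sel, inputs, out):
    """One multiplexing step: output must equal the selected input."""
    return out == inputs[sel]

ok_trace = [('write', 0xA), ('read', 0xA)]
bad_trace = [('read', 0xB)]                      # 0xB was never written
print(check_b5(ok_trace), check_b5(bad_trace))   # True False
print(check_m1(1, [5, 7], 7))                    # True
```

A model checker proves such properties over all reachable states rather than over a single trace, which is what makes the paper’s exhaustive guarantees possible.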

These 21 properties are formalized – written in Linear Temporal Logic (LTL) – and checked using the Phaeton framework [25,26], which uses an unbounded model checking solver. The actual model checked is the router synthesized together with a verification module. The latter includes the properties to be verified and acts as a wrapper to read and write the inputs and outputs of the router. All properties could be verified in 142 to 4,185 seconds on a small Intel i5 CPU with 4GB of memory.

To show the effectiveness of the approach, 6 different router implementations were used. Examples include round-robin, time-division-multiplexing… and also trojan versions such as Gatecrasher which issues grants without any request. Time division multiplexing turns out to be the only implementation protected against all threats, i.e., satisfying all 21 properties. Table III summarizes the results.

For someone well versed in formal verification the paper is not difficult to follow, but it is definitely not self-contained or an easy read. It is a nice contribution towards secure NoCs (and router implementations). The correctness and security properties can be verified with “any other verification tool supporting gate-level or LTL model checking”, e.g., commercial model checkers.


9 Trends Will Dominate Blockchain Technology In 2023

by Ahmed Banafa on 12-20-2022 at 10:00 am


It’s clear that blockchain, if adopted, will revolutionize operations and processes in many industries and government agencies, but its adoption requires time and effort. Blockchain technology will also push people to acquire new skills, and traditional businesses will have to completely reconsider their processes to harvest the maximum benefit from this promising technology. [2]

The following 9 trends will dominate blockchain technology in 2023:

1. Blockchain 4.0

Blockchain 4.0 is focused on innovation. Speed, user experience, and usability for the broader mass market will be the key focus areas for Blockchain 4.0. We can divide Blockchain 4.0 applications into two verticals:

  • Web 3.0

  • Metaverse

Web 3.0

The 2008 global financial crisis exposed the cracks in centralized control, paving the way for decentralization. The world needs Web 3.0, a user-sovereign platform. Because Web 3.0 aims to create an autonomous, open, and intelligent internet, it will rely on decentralized protocols, which blockchain can provide.

There are already some third-generation blockchains that are designed to support web 3.0, but with the rise of Blockchain 4.0, we can expect the emergence of more web 3.0 focused blockchains that will feature cohesive interoperability, automation through smart contracts, seamless integration, and censorship-resistant storage of P2P data files.

Metaverse

Metaverses, the dream projects of tech giants like Facebook, Microsoft, Nvidia, and many more, are the next big thing for us to experience in the coming few years. We are already connected to virtual worlds across different touchpoints like social engagement, gaming, working, and networking. The Metaverse will make these experiences more vivid and natural.

Advanced AI, IoT, AR & VR, cloud computing, and blockchain technologies will come into play to create the virtual-reality spaces of the Metaverse, where users will interact with a computer-generated environment and other users through realistic experiences.

A centralized Metaverse entails more intense user engagement, deeper use of internet services, and more exposure of users’ personal data. All of this likely means higher cybercrime exposure. Giving power to centralized bodies to regulate, control, and distribute users’ data is not a sustainable set-up for the future of the Metaverse. Therefore, much emphasis has been placed on developing decentralized Metaverse platforms that will provide user autonomy. Decentraland, Axie Infinity, and Starl are all decentralized Metaverses powered by blockchain.

Also, Blockchain 4.0’s advanced solutions can help Metaverse users manage their security and trust needs. Take a Metaverse gaming platform, for example, where users may purchase, possess, and trade in-game items with potentially enormous value. Proof of ownership through something as immutable and scarce as NFTs will be required to prevent forgery of these assets.

In the end, Blockchain 4.0 will enable businesses to move some or all of their current operations onto secure, self-recording applications based on decentralized, trustless, and encrypted ledgers, so businesses and institutions can easily enjoy the basic benefits of blockchain.

2. Stablecoins Will Be More Visible

Cryptocurrencies such as Bitcoin are highly volatile in nature. To avoid that volatility, stablecoins entered the picture strongly, with a stable value associated with each coin. As of now, stablecoins are in their initial phase, and it is predicted that 2023 will be the year blockchain stablecoins achieve their all-time high. [1]

3. Social Networking Problems Meet Blockchain Solution

There are around 4.74 billion social media users around the globe in 2022.

The introduction of blockchain in social media will be able to solve problems related to notorious scandals, privacy violations, data control, and content relevance. The blend of blockchain into the social media domain is therefore another emerging technology trend in 2023.

With the implementation of blockchain, it can be ensured that all published social media data remains untraceable and cannot be duplicated, even after deletion. Moreover, users will get to store data more securely and maintain ownership of it. Blockchain also ensures that the power of content relevance lies in the hands of those who created the content, instead of the platform owners. This makes users feel more secure, as they can control what they want to see. One daunting task is convincing social media platforms to implement it; this can happen on a voluntary basis or as a result of privacy laws similar to GDPR. [1]

4. Interoperability and Blockchain Networks

Blockchain interoperability is the ability to share data and other information across multiple blockchain systems and networks. This function makes it simple for the public to see and access data across different blockchain networks. For example, you can send your data from an Ethereum blockchain to another specific blockchain network. Interoperability is a challenge, but the benefits are vast. [5]

5. Economy and Finance Will Lead Blockchain Applications

Unlike other traditional businesses, the banking and finance industries don’t need to introduce radical transformation to their processes to adopt blockchain technology. After it was successfully applied to cryptocurrency, financial institutions began seriously considering blockchain adoption for traditional banking operations.

Blockchain technology will allow banks to reduce excessive bureaucracy, conduct faster transactions at lower cost, and improve confidentiality. One of the blockchain predictions made by Gartner is that the banking industry will derive billions of dollars of business value from the use of blockchain-based cryptocurrencies by 2023.

Moreover, blockchain can be used for launching new cryptocurrencies that will be regulated or influenced by monetary policy. In this way, banks want to reduce the competitive advantage of standalone cryptocurrencies and achieve greater control over their monetary policy. [2]

6. Blockchain Integration into Government Agencies

The idea of the distributed ledger is also very attractive to government authorities that have to administrate very large quantities of data. Currently, each agency has its separate database, so they have to constantly require information about residents from each other. However, the implementation of blockchain technologies for effective data management will improve the functioning of such agencies.

According to Gartner, by 2023, more than a billion people will have some data about them stored on a blockchain, but they may not be aware of it. National cryptocurrencies will also appear; it’s inevitable that governments will have to recognize the benefits of blockchain-derived currencies. Digital money is the future, and nothing will stop it. [3]

7. Blockchain Combines with IoT

The IoT tech market will see a renewed focus on security as complex safety challenges crop up. These complexities stem from the diverse and distributed nature of the technology. The number of Internet-connected devices has breached the 26 billion mark. Device and IoT network hacking will become commonplace in 2023. It is up to network operators to stop intruders from doing their business.

The current centralized architecture of IoT is one of the main reasons for the vulnerability of IoT networks. With billions of devices connected and more to be added, IoT is a big target for cyber-attacks, which makes security extremely important.

Blockchain offers new hope for IoT security for several reasons. First, blockchain is public: everyone participating in the network of nodes can see the blocks and the stored transactions and approve them, although users can still have private keys to control transactions. Second, blockchain is decentralized, so there is no single authority that can approve the transactions, eliminating the single-point-of-failure (SPOF) weakness. Third, and most importantly, it’s secure: the database can only be extended, and previous records cannot be changed. [7]
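The "extend-only" third point is the essence of a hash chain, and can be sketched in a few lines of C++. This is a toy illustration, not any real blockchain implementation, and it uses std::hash as a stand-in for a cryptographic hash such as SHA-256:

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <string>
#include <vector>

// A toy block: a payload plus the hash of the previous block's contents.
// std::hash is NOT collision-resistant; it only demonstrates the chaining idea.
struct Block {
    std::string data;
    std::size_t prev_hash;
};

std::size_t block_hash(const Block& b) {
    return std::hash<std::string>{}(b.data + "|" + std::to_string(b.prev_hash));
}

// Append a new block linked to the current tip of the chain.
void append(std::vector<Block>& chain, const std::string& data) {
    std::size_t prev = chain.empty() ? 0 : block_hash(chain.back());
    chain.push_back({data, prev});
}

// Verify the chain: each block must reference the hash of its predecessor,
// so editing any earlier block breaks every link after it.
bool verify(const std::vector<Block>& chain) {
    for (std::size_t i = 1; i < chain.size(); ++i) {
        if (chain[i].prev_hash != block_hash(chain[i - 1])) return false;
    }
    return true;
}
```

Because each block stores the hash of its predecessor, altering any historical record invalidates every later link, which is what makes the ledger tamper-evident.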

8. Blockchain with AI 

Integrating AI (artificial intelligence) with blockchain technology will make for better development. This integration will bring a level of improvement to blockchain technology across a broad range of applications.

The International Data Corporation (IDC) suggests that global spending on AI will reach $57.6 billion by 2023 and 51% of businesses will be making the transition to AI with blockchain integration.

Additionally, blockchain can also make AI more coherent and understandable, and we can trace and determine why decisions are made in machine learning. Blockchain and its ledger can record all data and variables that go through a decision made under machine learning.

Moreover, AI can boost blockchain efficiency far better than humans, or even standard computing, can. A look at the way blockchains currently run on standard computers proves this, with a lot of processing power needed to perform even basic tasks.

Examples of applications of AI in Blockchain: Smart Computing Power, Creating Diverse Data Sets, Data Protection, Data Monetization, Trusting AI Decision Making. [6]

9. Demand for Blockchain Experts 

Blockchain is a new technology, and only a small percentage of individuals are skilled in it. As blockchain becomes a fast-growing and widespread technology, it creates an opportunity for many to develop blockchain skills and experience.

Even though the number of blockchain experts is increasing, the implementation of this technology is growing so rapidly that demand for blockchain experts will outpace supply in 2023. [3]

It’s worth saying that there are genuine efforts by universities and colleges to catch up with this need, including San Jose State University with several courses covering blockchain technology, but the rate of graduating students with enough blockchain skills is not sufficient to fill the gap. Companies are also taking steps to build on their existing talent by adding training programs for developing and managing blockchain networks.

Ahmed Banafa, author of the books:

Secure and Smart Internet of Things (IoT) Using Blockchain and AI

Blockchain Technology and Applications

Quantum Computing

References

[1] https://www.mobileappdaily.com/top-emerging-blockchain-trends

[2] https://www.aithority.com/guest-authors/blockchain-technology-in-the-future-7-predictions-for-2020/

[3] https://www.bitdeal.net/blockchain-technology-in-2020

[4] https://medium.com/altcoin-magazine/to-libra-or-not-to-libra-e2d5ddb5455b

[5] https://blockgeeks.com/guides/cosmos-blockchain-2/

[6] https://medium.com/altcoin-magazine/blockchain-and-ai-a-perfect-match-e9e9b7317455

[7] https://medium.com/@banafa/ten-trends-of-iot-in-2020-b2

[8]  https://www.linkedin.com/pulse/blockchain-40-ahmed-banafa/

Also Read:

The Role of Clock Gating

Ant Colony Optimization. Innovation in Verification

A Crash Course in the Future of Technology


Chiplets UCIe Require Verification

by Daniel Nenni on 12-20-2022 at 6:00 am


Chiplets have been trending on SemiWiki for the past two years and I think that will continue into the distant future. As a potential way to unclog Moore’s Law, you can bet the semiconductor ecosystem will prove once again to be a chiplet force of nature driving semiconductor company roadmaps to smaller and better things.

To be clear, the chiplet concept is not new. We have been doing multi-chip modules (MCMs) for years now. IP blocks are also not new, and that is what a chiplet is: a hard IP block. What’s new is the chiplet ecosystem that is developing so all companies, big and small, can design with chiplets.

To allow chiplet connectivity we have the developing Universal Chiplet Interconnect Express (UCIe) standard. UCIe is an open specification for a die-to-die interconnect and serial bus between chiplets. It’s co-developed by our colleagues at AMD, Arm, ASE Group, Google Cloud, Intel, Meta, Microsoft, Qualcomm, Samsung, and TSMC.

One of the critical pieces of the chiplet UCIe design puzzle of course is verification, which brings us to a recent announcement:

Truechip Announces First Customer Shipment of UCIe Verification IP

Speaking at the SemIsrael Expo 2022, Nitin Kishore, CEO of Truechip, said, “UCIe is the need of the hour, as it not only helps increase yield for SoCs with larger die sizes but also allows intermixing components (or chiplets) from multiple vendors within a single package. SoC providers can reduce time to market and cost if they can re-use chiplets from previous or other chip versions (like a processor subsystem or a memory subsystem) or plug in chiplets from third-party vendors. With the launch of the UCIe Verification IP, I believe that this protocol will enable design houses to configure, launch, analyze, and manage sustainability targets, and accelerate them toward their design goals.”

UCIe Verification IP Key Benefits

  • Available in native SystemVerilog (UVM/OVM/VMM) and Verilog
  • Unique development methodology to ensure the highest levels of quality
  • Availability of various regression test suites
  • 24x5 customer support
  • Unique and customizable licensing models
  • Exhaustive set of assertions and cover points, with connectivity examples for all the components
  • Consistency of interface, installation, operation and documentation across all our VIPs
  • Complete solution with easy integration into IP and SoC environments

Nitin concluded, “With high-speed support of 32 GT/s per lane and the fact that it can also enable the mapping of other protocols via the streaming mode, UCIe is not only a high-performance protocol but also an interconnect protocol that requires very low power. The advantages of UCIe make it the most innovative technique to smooth the way towards a truly open multi-die system ecosystem by ensuring interoperability.”

Intellectual Property is a critical part of the semiconductor ecosystem. In fact, without the commercial IP market the fabless semiconductor business would not be what it is today. IP is still the most read topic on SemiWiki and the fastest growing semiconductor design market segment and will continue to be so, with or without chiplets. With chiplets, however, the IP market could easily hit the $10B mark by the end of the decade, absolutely.

About Truechip

Truechip is a leading provider of Verification IPs, NoC silicon IP, GUI-based automation products and chip design services, which help accelerate IP/SoC design, lowering the cost and risks associated with the development of ASICs, FPGAs and SoCs. Truechip provides Verification IP solutions for RISC-V-based chips and for the networking, automotive, microcontroller, mobile, storage, data center and AI domains, covering all known protocols, along with custom VIP development. The company has a global footprint, with coverage across North America, Europe, Israel, Taiwan, South Korea, Japan, Vietnam and India. Truechip offers the industry’s first 24x7 technical support.

Also Read:

Truechip Introduces Automation Products – NoC Verification and NoC Performance

Truechip: Customer Shipment of CXL3 VIP and CXL Switch Model

Truechip’s Network-on-Chip (NoC) Silicon IP

Truechip’s DisplayPort 2.0 Verification IP (VIP) Solution


An Update on HLS and HLV

by Daniel Payne on 12-19-2022 at 10:00 am


I first heard about High-Level Synthesis (HLS) while working in EDA at Viewlogic back in the 1990s, and have kept watch on the trends over the past decades. Earlier this year Siemens EDA hosted a two-day event where speakers from well-known companies shared their experiences using HLS and High-Level Verification (HLV) in their semiconductor products. I’ll recap the top points from each speaker in this blog.

Stuart Clubb from Siemens EDA kicked off the two-day event, explaining how general-purpose CPUs have struggled to meet compute demands, how RTL design productivity is stalling, and how RTL verification costs keep growing. HLS helps by reducing both simulation and verification times, allowing for more architectural exploration, and enabling new domain-specific processors and accelerators to handle new workloads more efficiently. The tool at Siemens EDA for HLS and HLV is called Catapult.

Catapult

With the Catapult tool, designers model at a higher level than RTL using C++, SystemC or MatchLib, and the tool then produces RTL code for traditional logic synthesis tools. Your HLS source code can be targeted to an ASIC, FPGA or even an eFPGA. There’s even a power analysis flow supported with the PowerPro add-on.
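To give a feel for the abstraction level, here is the kind of algorithmic C++ a designer might hand to an HLS tool: a 4-tap FIR filter written as a plain function with fixed loop bounds. This is only an illustrative sketch; a production Catapult model would typically use bit-accurate types such as ac_int/ac_fixed and add interface and scheduling directives:

```cpp
// 4-tap FIR filter in an HLS-friendly style: fixed-size arrays, statically
// bounded loops, no dynamic memory. An HLS tool can unroll or pipeline these
// loops to trade area against throughput.
const int TAPS = 4;

int fir(int sample, const int coeff[TAPS], int shift_reg[TAPS]) {
    // Shift in the newest sample.
    for (int i = TAPS - 1; i > 0; --i) {
        shift_reg[i] = shift_reg[i - 1];
    }
    shift_reg[0] = sample;

    // Multiply-accumulate across the taps.
    int acc = 0;
    for (int i = 0; i < TAPS; ++i) {
        acc += coeff[i] * shift_reg[i];
    }
    return acc;
}
```

The same source can be synthesized with the taps loop fully unrolled for one-sample-per-cycle throughput or left rolled to save area, which is exactly the kind of exploration HLS enables.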

NXP

NXP has 31,000 people located across 30 countries and produced revenues of $11.06 billion in 2021. Reinhold Schmidt talked about their secure car access group of 11 engineers. Their product included an IEEE 802.15.4z-compliant IR-UWB transceiver, an ARM Cortex-M33, and a DSP; they started with a 40nm process then migrated to a 28nm process, and their device operates on a coin cell battery.

NXP – NCJ29D5

Modeling was done in Matlab, C++ and SystemC. MatchLib, a SystemC/C++ library, was also used. PowerPro was used for power optimization and estimation. Results on an IIR DC notch filter showed that HLS had an area reduction of about 40% compared to handwritten RTL.

They plan to integrate HLS further into their infrastructure, and investigate using HLV. It’s a challenge to get their RTL-centric engineers to think in terms of algorithms, and for SW engineers to think about hardware descriptions.

Google, VCU

Video traffic makes up as much as 80% of the Internet, so Google has focused their HW development on a Video Coding Unit (VCU). Aki Kuusela presented the history of video compression: H.264, VP9, AV1, AV2. Video transcoding follows a process from creator to viewer:

Video Transcoding

Google developed their own chips for this video transcoding task to get a proper implementation of H.264 and VP9, optimized for datacenter workload, so an HLS design approach allowed them to do this quickly. With a Google VCU a creator can upload a 1080p 30 fps video at 20 Mbps, then a viewer can watch it at 1080p 30fps using only 4 Mbps.

The VCU ASIC block diagram shows how all the IP blocks are connected to a NoC internally.

VCU ASIC

The VCU ASIC goes onto a board and rack, then it’s built up into a cluster. Google engineers have been using HLS for about 10 years now, and the methodology allows SW/HW co-design, plus fast design iteration. Catapult converts their C++ to Verilog RTL, and an in-house tool called Taffel is used for block integration, verification and visualization.

HLS design style worked well for data-centric blocks, state machines and arbiters. With C++ there was a single source of truth, and there were bit-exact results between model and RTL, using 5 to 10X less code compared to RTL coding.

NVIDIA Research

Nate Pickney and Rangharajan Venkatesan started out with four research areas where HLS has been used in their group: RC18, an inference chip; Simba, deep-learning inference with a chiplet-based architecture; MAGNET, a modular accelerator generator for neural networks; and IPA, floorplan-aware SystemC interconnect performance modeling.

The motivation for IPA, the Interconnect Prototyping Assistant, was to abstract and automate interconnects within a SystemC codebase. You use IPA’s SystemC API for “magic” message passing, SystemC simulation for modeling, and HLS for RTL generation. IPA was originally developed by NVIDIA and is now maintained by Siemens EDA.

Interconnect Prototyping Assistant (IPA)

The SoC design flow between HLS and RTL, including exploration and implementation is shown below:

HLS Flow

Adding IPA into this flow shows how exploration times can be reduced to 10 minutes, while implementation times are just 6 hours.

IPA added to Flow

For the 4×4 MAGNET DL accelerator example the first step was to write a unified SystemC model, make an initial run of the IPA tool, update the interconnect, and then revise the microarchitecture. Experiments from this analysis compared directly-connected links, centralized crossbar, and uniform mesh (NOC). Each experiment using IPA took only minutes of design effort, instead of weeks required without IPA.

IPA info is open-source, learn more here.

NVIDIA, Video Codecs

Hai Lin described how their design and verification flow follows several steps:

  • HLS design electronic spec
  • HLS design lint check
  • Efficient Catapult synthesis
  • Design quality tracking
  • Power optimization, PowerPro
  • Block-level clock gating
  • HLS vs RTL coherence checking
  • Automatic testbench generation
  • Catapult Code Coverage (CCOV) coverpoint insertions

Their video codec group switched from an RTL flow to Catapult HLS and saw a reduction in coding effort, reduced number of bugs, and shortened simulation runtimes. Automation now handled pipelining, parallelization, interface logic, idle signal generation and more. RTL clock gating is automated with PowerPro. Finally, the HLS methodology integrates code and functional coverage at the C++ design source-code level.

STMicroelectronics

Engineers at ST have 10 years of HLS experience using Catapult on products like set-top boxes, imaging and communication systems, and are now using HLS for products like sensors, MEMS actuators (ASICs for MEMS mirror drivers), and analog products.

An Infrared Smart Sensor project used HLS, and a neural network was trained from a set of data coming from a sensor in real life situations.

Infrared Smart Sensor

With Catapult they were able to explore neural networks with various arithmetic formats, then compare the area, memory and accuracy of results. The time for HLS design in Catapult, data analysis, testbench and modeling was only 5 person-weeks.

A second HLS design project was for a Qi-compliant ASK demodulator. They were able to explore the design space by comparing demodulators with various architectures, then measure the area and slack-time numbers:

  • Fully rolled
  • Partial rolled 8 cycles
  • Partial rolled 4 cycles
  • Partial rolled 2 cycles
  • Unrolled
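The rolled and unrolled variants trade clock cycles for parallel hardware. The presentation doesn’t give ST’s exact architectures, but the cycle-count side of the trade-off follows a simple ceiling formula, sketched here as a back-of-the-envelope C++ model:

```cpp
#include <cassert>

// Design-space model for a rolled vs. unrolled datapath: with n_ops
// operations and an unroll factor U (operations scheduled per clock cycle),
// the loop needs ceil(n_ops / U) cycles. "Fully rolled" is U = 1;
// "unrolled" is U = n_ops, finishing in a single cycle at maximum area.
int cycles_needed(int n_ops, int unroll_factor) {
    return (n_ops + unroll_factor - 1) / unroll_factor;
}
```

For a hypothetical 16-operation loop, unroll factors of 1, 2, 4 and 16 yield 16, 8, 4 and 1 cycles respectively, while the operator count grows with the unroll factor; logic synthesis then reports the area and slack consequences of each point.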

The third example shared was for a contactless infrared sensor with embedded processing. Three HW blocks for temperature compensation formulas were modeled in HLS.

Contactless Infrared Sensor with Embedded Processing

The generated RTL from each block was run through logic synthesis  and the area numbers were compared for a variety of architectures. Latency times were also estimated in Catapult to help choose the best architecture.

NASA JPL

FPGA engineer Ashot Hambardzumyan from NASA JPL compared using C++ and SystemC for the Harris Corner Detector. The main algorithm computes the Dx and Dy derivatives of an image, then computes a Harris response score at each pixel, finally applying a non-maximum suppression to the score.
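The per-pixel Harris response mentioned above is a small closed-form computation on the windowed gradient products. Here is a C++ sketch (window summation and the non-maximum suppression step are omitted, and k = 0.04 is the conventional sensitivity constant, not necessarily the value JPL used):

```cpp
#include <cassert>

// Harris corner response for one pixel, given the windowed sums of the
// gradient products Sxx = sum(Dx*Dx), Syy = sum(Dy*Dy), Sxy = sum(Dx*Dy).
// R = det(M) - k * trace(M)^2 for the 2x2 structure matrix M.
double harris_response(double sxx, double syy, double sxy, double k = 0.04) {
    double det = sxx * syy - sxy * sxy;
    double trace = sxx + syy;
    return det - k * trace * trace;
}
```

Strong, uncorrelated gradients in both directions give a large positive response (a corner); gradients in only one direction (an edge) give a response at or below zero, which the suppression stage then discards.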

Harris Corner Detector

They modeled this as a DSP process, and the HLS architecture was modeled as a Kahn Process. Comparisons were made between using C++, SystemC and SystemVerilog approaches:

Synthesis results

To verify each implementation, a test image was used and the simulation time per frame was measured: the SystemVerilog implementation required 3 minutes per frame, SystemC took only 5 seconds, and C++ was the fastest at only 0.3 seconds per frame.

Design and verification times for HLS were shorter than with an RTL methodology. Basic training for using C++ was shorter at 2 weeks, versus 4 weeks for SystemC. The Harris Corner Detector algorithm took just 4 weeks using C++, compared to 6 weeks with SystemC.

Viosoft

The final presentation talked about 5G and the challenges of the physical layer (L1), where complex math is used in communication with algorithms like Channel Estimation, Modulation, Demodulation and Forward Error Correction.

RAN Physical Layer

HLS was applied to L1 functions written in C++, then a comparison was made on the runtime for a CRC in three implementations:

  • x86 Xeon CPU @ 2.3 GHz – 608,423 ns
  • RISC-V QEMU CPU @ 2.3 GHz – 4,895,104 ns
  • Catapult – 300 ns
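For context on what such an L1 kernel looks like, here is a bit-serial CRC-32 in C++ using the common reflected polynomial 0xEDB88320. The presentation doesn’t say which CRC variant was benchmarked (5G actually specifies its own CRC polynomials), so treat this purely as an illustration of the bit-level work being offloaded:

```cpp
#include <cstddef>
#include <cstdint>

// Bit-serial CRC-32 (reflected polynomial 0xEDB88320), one bit per iteration.
// This inner loop is exactly the kind of bit-level kernel that maps poorly to
// a general-purpose CPU and well to dedicated hardware generated by HLS.
uint32_t crc32(const uint8_t* data, std::size_t len) {
    uint32_t crc = 0xFFFFFFFFu;
    for (std::size_t i = 0; i < len; ++i) {
        crc ^= data[i];
        for (int bit = 0; bit < 8; ++bit) {
            // Conditionally XOR the polynomial when the low bit is set.
            crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
        }
    }
    return crc ^ 0xFFFFFFFFu;
}
```

Eight dependent shift-xor steps per byte serialize badly on a CPU but collapse into simple combinational logic in hardware, which is consistent with the large CPU-versus-Catapult gap reported above.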

Another comparison was between an RTL flow and the Catapult flow for maximum clock frequency, and it showed that HLS results from Catapult achieved a 2X higher clock frequency than RTL. Resource utilization in Intel FPGA devices showed that RTL from Catapult was comparable to manual RTL for logic utilization, total thermal power dissipation and static thermal power dissipation.

Viosoft prefers the single source implementation of HLS, as HW/SW can be partitioned easier, design trade-offs can be explored, performance can be estimated, and time to market shortened.

Summary

HLS and HLV are growing trends and I expect to see continued adoption across a widening list of application areas. Higher levels of abstraction have benefits like fewer lines of source code, quicker times for simulation, faster verification, all leaving more time for architectural exploration. RTL coding isn’t disappearing, it’s just being made more efficient with HLS automation.

There’s even open-source HLS IP on GitHub to help get you started quicker. The Catapult tool comes with reference examples across different applications to speed learning. You’ll even find YouTube tutorials on using HLS in Catapult. The HLS Bluebook is another way to learn this methodology.

View the two-day event archive here, about 8 hours of video.

Related Blogs


MIPI bridging DSI-2 and CSI-2 Interfaces with an FPGA

by Don Dingee on 12-19-2022 at 6:00 am

MIPI specification chart, courtesy MIPI Alliance

We’re so spoiled by 4K and 8K content at frame rates of 120 Hz or higher on high-performance devices that many of us now expect these higher resolutions and rates even on small devices. The necessary interfaces exist in MIPI Display Serial Interface 2 (DSI-2) and Camera Serial Interface 2 (CSI-2). The challenge is that these interfaces can eat up a conventional SoC, either overwhelming low-end parts entirely or consuming too much power and battery life on a higher-end part. At MIPI DevCon 2022, Mahmoud Banna of Mixel co-presented with a customer, Hercules Microelectronics (HME), their solution: MIPI bridging DSI-2 and CSI-2 interfaces with an FPGA, providing acceleration at low power.

“Beyond mobile” applications for MIPI specifications

MIPI-powered displays have been ubiquitous in smartphones for some time. In most cases, smartphones feature high-end SoCs to power the complex cellular network interface and provide the multi-tasking response users expect. In exchange for access to high-performance content, users learned to take steps to save smartphone battery life, like dark mode, turning down the brightness, and more. It’s an acceptable trade most of the time.

A new generation of “beyond mobile” applications demands the same type of performance without the same resources or management. High-performance displays and camera-based sensors now appear in automotive, IoT, wearables, industrial devices, and more. Bandwidth requirements are high, power requirements are low, and EMI is a concern anytime signals switch rapidly. MIPI developed its primary display and camera interface specifications from the ground up for these applications. The evolution of these specifications continues; DSI-2 v 2.0 was released in July 2021, and CSI-2 v 3.0 was released in September 2019.

Splitting the difference with FPGA-based MIPI bridging solutions

Most of these new applications operate in consumer segments with short lifecycles or industrial segments with relatively low volumes. Add to those factors the ongoing enhancements of the MIPI specifications, and designing an ASIC becomes risky. It’s easy to miss a market window, a critical new feature, or cost targets, and seeing a device with an outdated specification raises questions among buyers.

An FPGA could solve many of those concerns. FPGAs enable faster prototyping and proof of concept, allowing companies to demonstrate devices for investors. FPGAs allow customization, using the same basic elements in more than one design, or quickly targeting a new use case. Risks of a hardware re-spin shrink, with the ability to reprogram logic. And FPGA-based reference designs allow OEMs to do their own tweaking for unique requirements.

There’s still the challenge of power consumption. A mid-range FPGA on a newer process technology node can offer the right baseline of power use while still delivering the logic size and performance needed. “The traditional way to build a MIPI bridging system over an FPGA was to use the FPGA LVDS interface to emulate the MIPI interface,” says Banna. “But the growing trend here is to harden the MIPI subsystem, including the PHY and the controller, to achieve many benefits.” Hardened MIPI D-PHY and DSI-2 controller IP from Mixel hit higher data rates, are more stable in FPGA contexts, and consume less power.

Illustrating the concept, Mixel is teaming with Hercules Microelectronics and their H1D03 FPGA. Built on a 40nm LP process, the H1D03 pairs two hardened Mixel MIPI IP blocks with an 8051 microcontroller, two SRAM blocks, 2K LUTs running at up to 200 MHz, and LVDS and other I/O.

Wide range of possibilities for MIPI bridging

Configuring the MIPI D-PHY, MIPI DSI-2, and LVDS blocks for emulating CSI-2, this approach can hit many use cases. Yundong Cui of HME offers this diagram:

In the recorded video of the MIPI DevCon 2022 session, Cui steps through several applications for the H1D03. His examples include a low-end cellphone or tablet, an AR headset, an e-Paper display, a smart home control panel, and an industrial camera. In each case, he points out how offloading the SoC improves system performance while keeping power low. Banna takes on some questions in a short Q&A and highlights upcoming MIPI PHY IP on the Mixel roadmap.

Hardening the Mixel MIPI IP is an important step here. These latest MIPI specifications are complex, and hardened IP ensures consistent performance regardless of what happens in the FPGA. It also removes the burden for the OEM to try to debug a soft implementation so they can focus on functionality in their application.

To see how Mixel and Hercules Microelectronics work together in MIPI bridging DSI-2 and CSI-2 interfaces with an FPGA, view the entire MIPI DevCon 2022 session on YouTube at:

Leveraging MIPI DSI-2 & MIPI CSI-2 Low-Power Display and Camera Subsystems

Also Read:

MIPI in the Car – Transport From Sensors to Compute

A MIPI CSI-2/MIPI D-PHY Solution for AI Edge Devices

FD-SOI Offers Refreshing Performance and Flexibility for Mobile Applications