

SmartDV Expands Its Design IP Portfolio with an Acquisition
by Mike Gianfagna on 12-29-2020 at 6:00 am


Back in April, I posted a blog about SmartDV, The Quiet Giant in Verification IP and More. This is a story about the “more” part of that statement. Acquisition activity in the semiconductor sector has been quite brisk this year, a bright spot in what could otherwise be an overwhelming series of bad news. Acquisition has been a focus at SmartDV as well. The company recently announced the acquisition of a new line of design IP controllers for high-speed communications. Here is a bit more color on that announcement and how SmartDV expands its design IP portfolio with an acquisition.

What Was Announced

The announcement centers on the acquisition of a design IP business and products from a leading, pure-play engineering services company. The identity of the company and the price tag of the acquisition were not disclosed. Independent of those items, there is a lot to the story that highlights the strength and significance of this acquisition. A key piece of information is that the IP portfolio is silicon-proven and includes minimal-area controller IP for popular standards, including Mobile Industry Processor Interface (MIPI) and Universal Serial Bus (USB) interfaces.

The Importance of Silicon Validation

Silicon validation of standards-based IP like this is a critical item to reduce risk and time-to-market. Since the IP is coming from an engineering services company, silicon validation will be a key benefit. The release reports that all the IP titles have been implemented in numerous chip design projects and a variety of consumer electronics devices. Having worked at a company that offered IP and ASICs (eSilicon), I can tell you this is a key differentiator. At eSilicon, we didn’t just design IP blocks and do one test chip for characterization before offering them for sale. Instead, we would use these IP titles in our ASIC business on real production chips. The validation from this kind of activity is quite a bit more robust than a characterization test chip alone.

Deepak Kumar Tala, SmartDV’s managing director, commented on the fit of the IP portfolio and its source. “The acquisition of a Design IP portfolio strengthens our offerings for mobile and high-speed communications application markets. The purchase comes from a company noted for exceptional customer support and service. With this reputation, its high-quality, highly configurable and silicon-proven Design IP is a good fit for SmartDV’s standards.”

The Details

The portfolio is quite extensive and includes:

MIPI

  • MIPI camera serial interface (CSI-2) transmitter and receiver controller design IP for C-PHY and D-PHY
  • MIPI display serial interface (DSI) and DSI-2 transmitter and receiver design IP for C-PHY and D-PHY
  • MIPI CSI-3 host and device design IP
  • Universal flash storage (UFS) interface 2.1 and 3.0 host and device design IP
  • MIPI unified protocol (UNIPRO) controller 1.6 and 1.8 design IP
  • I3C interface master and slave controller design IP

USB

  • Silicon-proven and USB Implementers Forum (USB-IF) certified
    • USB 1.1/2.0 device controller
    • USB 3.x 5G device controller
    • USB 3.x 5G host controller
    • USB 3.x 5G hub controller
    • USB 3.x 5G dual-role controller
  • USB-IF certified
    • USB 3.x 10G device controller
  • Verified and FPGA validated
    • USB On-The-Go (OTG)
    • USB SuperSpeed Inter-Chip (SSIC)
    • USB 3.0 xHCI host controller
  • Design Ready
    • USB 4.0 device router

Availability and To Learn More

Licenses for the new IP include validation platforms along with firmware support to functionally validate chip designs prior to tape-out and mitigate risk. All controller design IP is pre-verified and delivered as a comprehensive solution, complete with a verification suite, clock domain crossing, synthesis and logic equivalence checking constraints and waivers, as applicable. The IP is reusable at the system-on-chip (SoC) level and proven interoperable with partner PHY solutions. I can tell you from first-hand experience that interoperability between the controller and the PHY is really important; this is a key benefit.

The new portfolio is available now and backed by an experienced team. Pricing is available upon request. You can email requests for data sheets or more information to sales@Smart-DV.com. The news is true: SmartDV expands its design IP portfolio with an acquisition.

Also Read:

CEO Interview: Deepak Kumar Tala of SmartDV

The Quiet Giant in Verification IP and More

SemiWiki and SmartDV on Verification IP



Quantum Teleportation: Facts and Myths
by Ahmed Banafa on 12-28-2020 at 10:00 am


Quantum teleportation is a technique for transferring quantum information from a sender at one location to a receiver some distance away. While teleportation is portrayed in science fiction as a means to transfer physical objects from one location to the next, quantum teleportation only transfers quantum information. [1]

For the first time, a team of scientists and researchers has achieved sustained, high-fidelity ‘quantum teleportation’ — the instant transfer of ‘qubits’ (quantum bits), the basic units of quantum information. The collaborative team, which includes NASA’s Jet Propulsion Laboratory, successfully demonstrated sustained, long-distance teleportation of qubits of photons (quanta of light) with fidelity greater than 90%. The qubits were teleported 44 kilometers (27 miles) over a fiber-optic network using state-of-the-art single-photon detectors and off-the-shelf equipment. [5]

An important point to keep in mind is that quantum teleportation is the transfer of quantum states from one location to another using quantum entanglement, where two particles in separate locations are connected by an invisible force, famously referred to as “spooky action at a distance” by Albert Einstein. Regardless of the distance, the encoded information shared by the “entangled” pair of particles can be passed between them. An interesting note is that the sender knows neither the location of the recipient nor the quantum state that will be transferred.

By sharing these qubits, the basic units of quantum computing, researchers hope to create networks of quantum computers that can share information at blazing-fast speeds. But keeping this information flow stable over long distances has proven extremely difficult due to changes in the environment, including noise. Researchers are now hoping to scale up such a system, using entanglement to send information and quantum memory to store it. [5]

On the same front, scientists have advanced their quantum technology research with the development of a chip that could be scaled up and used to build the quantum simulator of the future. The nanochip allows them to produce enough stable photons encoded with quantum information to scale up the technology. The chip, which is said to be less than one-tenth of the thickness of a human hair, may enable the scientists to achieve ‘quantum supremacy’ – where a quantum device can solve a given computational task faster than the world’s most powerful supercomputer. [6]

Quantum Entanglement

In quantum entanglement, particles that have interacted at some point retain a type of connection and can be entangled with each other in pairs, in a process known as correlation. Knowing the spin state of one entangled particle – up or down – allows one to know that the spin of its mate is in the opposite direction. Quantum entanglement allows qubits that are separated by incredible distances to interact with each other instantaneously (not limited to the speed of light). No matter how great the distance between the correlated particles, they will remain entangled as long as they are isolated. [2]
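As a concrete illustration of that correlation, the anti-correlated spin pair described above is usually written as the textbook two-qubit singlet state (standard notation, not taken from the cited work):

    \left|\Psi^{-}\right\rangle \;=\; \frac{1}{\sqrt{2}}\left(\left|\uparrow\downarrow\right\rangle - \left|\downarrow\uparrow\right\rangle\right)

Measuring one particle’s spin as up immediately implies the other will be found down, no matter how far apart the pair has been separated.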

Quantum Teleportation: Paving the Way for a Quantum Internet

In July, the US Department of Energy unveiled a blueprint for the first quantum internet, connecting several of its National Laboratories across the country. A quantum internet would be able to transmit large volumes of data across immense distances at a rate that exceeds the speed of light. You can imagine all the applications that can benefit from such speed.

Traditional computer data is coded in either zeros or ones. Quantum information is superimposed in both zeros and ones simultaneously. Academics, researchers and IT professionals will need to create devices for the infrastructure of the quantum internet, including quantum routers, quantum repeaters, quantum gateways, quantum hubs, and other quantum tools. A whole new industry will be born around the quantum internet, existing in parallel to the current ecosystem of companies built around the regular internet.
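For readers unfamiliar with the notation, the “both zeros and ones simultaneously” statement corresponds to the standard qubit superposition (textbook notation, not specific to any system discussed here):

    \left|\psi\right\rangle \;=\; \alpha\left|0\right\rangle + \beta\left|1\right\rangle, \qquad |\alpha|^{2} + |\beta|^{2} = 1

A measurement yields 0 with probability |α|² and 1 with probability |β|².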

The “traditional internet,” as the regular internet is sometimes called, will still exist. It is expected that large organizations will rely on the quantum internet to safeguard data, but that individual consumers will continue to use the classical internet.

Experts predict that the financial sector will benefit from the quantum internet when it comes to securing online transactions. The healthcare sector and the public sector are also expected to see benefits. In addition to providing a faster, safer internet experience, quantum computing will better position organizations to solve complex problems, like supply chain management. Furthermore, it will expedite the exchange of vast amounts of data and enable large-scale sensing experiments in astronomy, materials discovery and the life sciences. [2]

Ahmed Banafa, Author of the Books:

Secure and Smart Internet of Things (IoT) Using Blockchain and AI

Blockchain Technology and Applications

Read more articles at: https://ahmedbanafa.blogspot.com/

References

[1] https://en.wikipedia.org/wiki/Quantum_teleportation

[2] https://www.linkedin.com/pulse/quantum-internet-explained-ahmed-banafa/

[3] https://futurism.com/researchers-achieve-first-sustained-long-distance-quantum-teleportation

[4] https://ronnyeo.wordpress.com/2008/05/02/teleportation-quantum-entanglement/

[5] https://www.designboom.com/technology/nasa-long-distance-quantum-teleportation-12-22-2020/

[6] https://www.siliconrepublic.com/machines/quantum-computing-fermilab



What Might the “1nm Node” Look Like?
by Tom Dillinger on 12-28-2020 at 6:00 am


The device roadmap for the next few advanced process nodes seems relatively clear.  The FinFET topology will subsequently be displaced by a “gate-all-around” device, typically using multiple stacked channels with a metal gate completely surrounding the “nanosheets”.  Whereas the fin demonstrates improved gate-to-channel electrostatics due to the gate traversal over the height and thickness of the fin, the stacked nanosheets have further improved this electrostatic control – subthreshold leakage currents are optimized.

An extension to the nanosheet topology that has been proposed is the “forksheet”, as depicted in the figure below. [3]

The goal of the forksheet R&D is to eliminate the nFET-to-pFET device spacing rule (for a common gate input connection), isolating the two sets of nanosheets with a thin oxide.  The tradeoff for this attractive gain in transistor density is that the gate again surrounds the channel volume on three sides – “FinFETs turned on their sides” is a common forksheet analogy.

Although the dates for high-volume manufacturing (HVM) of post-FinFET nodes are somewhat fluid, it is expected that these evolutionary nanosheet/forksheet device topologies would emerge in the 2024-25 timeframe.

There is active process development and device research underway for a myriad of alternatives to the nanosheet.  Assuming that the “nano” device topology will be used for at least a couple of process nodes, research needs to be aggressively undertaken now, if any new device is to reach HVM in 2028-30.

At the recent IEDM conference, Synopsys presented their forecast and design-technology co-optimization (DTCO) evaluation results for one of the leading device alternatives for the “1nm” node in this timeframe.  [1]  This article summarizes the highlights of their presentation.

The “1nm” Node

The figure below depicts the straight-line trend of transistor density across several recent process nodes.  (This graph was provided as part of collaboration between Synopsys and IC Knowledge, Inc.)

Several things to note about this graph:

  • the node names on the X-axis represent a simple transition from the 14nm node, with each successive data point defined by the Moore’s Law linear multiplier of 0.7X

Frequent SemiWiki readers are no doubt aware that the actual nomenclature assigned by a foundry to successive nodes has received some “marketing input”.   For the sake of this discussion, using the 0.7X names is appropriate, if the goal of the DTCO process development is indeed to remain on this curve.

  • the density data points at each node represent metrics from multiple foundries
  • the data points include separate measures for logic and SRAM implementations

Logic density is typically associated with the foundation library cell implementation commonly used with the foundry technology.  For example, the area of a 2-input NAND cell reflects 4 devices in the cell using:

  • the contacted poly pitch (CPP)
  • the number of horizontal metal tracks in the cell (for signals and supplies)
  • the cell adjacency isolation spacing (“diffusion breaks” versus tied off dummy gates between cells)

Another key cell dimension is the area of a (scannable) data flip-flop.  The transistor density calculation above uses a logic mix of NAND and FF cells for each logic data point.
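To make that metric concrete, here is a small Python sketch of how such a density number can be computed from cell areas and a logic mix. The cell dimensions and the NAND-to-flip-flop ratio below are hypothetical placeholders, not the values used in the Synopsys/IC Knowledge data:

    def transistor_density_mtr_per_mm2(nand_area_um2, nand_devices,
                                       ff_area_um2, ff_devices,
                                       nand_fraction):
        """Weighted transistor density for a NAND2 / flip-flop logic mix.

        Returns millions of transistors per mm^2, which is numerically equal
        to transistors per um^2 (1 mm^2 = 1e6 um^2).
        """
        ff_fraction = 1.0 - nand_fraction
        avg_devices = nand_fraction * nand_devices + ff_fraction * ff_devices
        avg_area_um2 = nand_fraction * nand_area_um2 + ff_fraction * ff_area_um2
        return avg_devices / avg_area_um2

    # Placeholder cell data (NOT foundry values): a 4-device NAND2 and a
    # 24-device scannable flip-flop, mixed 60/40 by cell count.
    print(transistor_density_mtr_per_mm2(nand_area_um2=0.020, nand_devices=4,
                                         ff_area_um2=0.150, ff_devices=24,
                                         nand_fraction=0.6))   # ~167 MTr/mm^2

Change the mix or the cell areas and the reported density moves accordingly, which is why the logic and SRAM points on the graph are tracked separately.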

Of particular note is the assumption for the device topology used in the Synopsys projections for the 1nm node.  Active research is underway to evaluate several non-silicon field-effect device types in a timeframe consistent with this node – e.g., 2D semiconducting materials (MoS2) and 1D carbon nanotubes.  For the goal of staying on the transistor density curve, the Synopsys TCAD team pursued DTCO process definition for a 3D “Complementary FET” (CFET) implementation.  The figure below illustrates the CFET cross-section.

An attractive feature of the CFET technology is the similarity to the nanosheet topology, which will have years of manufacturing experience in the 1nm node timeframe.  The novelty of the CFET approach is the vertical positioning of pFET and nFET nanosheets.

The CFET topology leverages the typical CMOS logic application where a common input signal is applied to the gate of both an nFET and pFET device.  (The unique case of a 6T SRAM bitcell with nFET-only word line pass-gates will be discussed shortly.)

The figure above illustrates how the pFET nanosheet resides directly below the nFET nanosheet(s).  In the figure, two nFET nanosheets are present, narrower than the pFET – as space is required to contact the pFET source and drain nodes, the nFETs are reduced in width.  The two nFETs in parallel will provide comparable drive strength to the pFET.  (SRAM bitcell design in CFETs utilizes a different strategy.)  An M0 contact over active gate (COAG) topology is also shown extending this recent process enhancement.

The processing of CFET devices requires specific attention to the sequential pFET and nFET formation.  Epitaxial growth of SiGe for the pFET source/drain nodes is used to introduce compressive strain in the channel for improved hole mobility.  pFET gate oxide and metal gate deposition are then performed.  The subsequent epitaxial Si growth for nFET source/drain nodes, followed by gate oxide and metal gate deposition, must adhere to materials chemistry constraints imposed by the existing pFET device.

Buried Power Rails

Note the assumption for the 1nm node that the local VDD and GND distribution will be provided by “buried power rails” (BPR), resident below the nanosheets in the substrate.  As a result, both “shallow” (device) and “deep” (BPR) vias are required.  The metal composition of the BPRs and the vias is thus a critical process optimization to reduce the parasitic contact resistance.  The (primary) metal needs to have low resistivity and be deposited with extremely thin barrier and liner materials in the trench.

Speaking of parasitics, the (simplified) layout diagram below highlights a unique advantage of the CFET topology. [2]

The three-dimensional orientation of the CFET devices eliminates the gate traversal between separate nFET and pFET regions.  Also, in comparison to a FinFET device layout, the parallel run length of the gate-to-source/drain local metallization is significantly reduced.  (The small gate length extension past the nanosheet is shown in the figure.)  As a result, the device parasitic Rgate resistance and Cgs/Cgd capacitances are vastly improved with the CFET.

CFET SRAM Design

The implementation of a 6T SRAM bitcell in a CFET process introduces several tradeoffs.  The Synopsys DTCO team opted for unique design characteristics, as illustrated below.

  • an nFET pulldown : pFET pullup ratio of 2:1 is readily achieved

The two smaller nFET nanosheets shown earlier for a 1:1 logic drive strength ratio are the same width as the pFET in the SRAM bitcell, giving the 2:1 drive strength.  (Note that this would be comparable to a FinFET bitcell, where the number of nFET fins is 2 and the number of pFET fins is 1.)

  • a modified pair of nFET pass gate devices is implemented

The two nFET nanosheets used for the pass gates are (slightly) weaker than the pull downs;  the gate is only present over three sides of the nanosheet.  This “tri-gate” configuration provides for a denser bitcell, and optimizes the relative strengths of the pass gate:pull down nFET devices for robust cell read margins.

  • the pFET nanosheet under the pass gate devices now becomes an inactive “dummy” gate
  • a unique “cross-couple” layer (at the level of the M0 vias) is used for the internal 6T cell interconnections

DTCO analysis early in process development utilizes TCAD simulation tools to represent lithographic patterning, materials deposition, and (selective) etching profiles.  This early optimization work provides insights into the required process windows, as well as the expected materials dimensions and electrical properties, including channel strain to optimize free carrier mobility.

Subsequent parasitic extraction, merged with the device models, enables preliminary power/performance measures for the new process, combined with the device layout area for a full PPA assessment.  The (rather busy) figure below provides a visualization of the DTCO analysis for the SRAM bitcell described above.

Summary

At IEDM, the Synopsys TCAD team provided a peek into the characteristics of the “1nm” node, based on a CFET device topology, with one pFET nanosheet below two nFET nanosheets.  Buried power rails were also assumed.  The lithographic assumptions were based on the utilization of (high numerical aperture) EUV – e.g., a 39nm CPP (with COAG) and a 19nm M0 metal pitch.  A unique SRAM bitcell design approach was applied, both for the relative PU:PD:PG drive strengths and for an internal cross-couple interconnect layer.

The results of this DTCO analysis suggest that the 1nm CFET node may indeed be able to maintain an aggressive transistor density, approaching 10^9 transistors/mm^2.  It will be extremely interesting to see how this forecast evolves.
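As a rough sanity check on that figure, here is a back-of-envelope Python estimate. Only the 39nm CPP and 19nm M0 pitch come from the presentation; the 5-track cell height, 2-CPP NAND2 width and NAND-only logic mix are my own simplifying assumptions, not values from the paper:

    # Rough cross-check of the ~10^9 transistors/mm^2 figure (assumptions noted above).
    cpp_nm = 39.0          # contacted poly pitch (from the presentation)
    m0_pitch_nm = 19.0     # M0 metal pitch (from the presentation)
    tracks = 5             # assumed cell height in M0 tracks
    cell_area_nm2 = (2 * cpp_nm) * (tracks * m0_pitch_nm)   # ~7,400 nm^2 per NAND2
    density_per_mm2 = 4 / cell_area_nm2 * 1e12              # 4 devices per NAND2
    print(f"{density_per_mm2:.1e} transistors/mm^2")        # ~5.4e8, approaching 1e9

A real density metric would fold in flip-flops and routing overhead, so treat this only as an order-of-magnitude check.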

I would encourage you to look at this IEDM presentation.

-chipguy

References

[1]  Moroz, V., et al, “DTCO Launches Moore’s Law Over the Feature Scaling Wall”, IEDM 2020, paper 41.1.

[2]  Ryckaert, J., et al, “The Complementary FET (CFET) for CMOS scaling beyond N3”, VLSI Technology Symposium 2018, p. 141-142.

[3]  Collaert, Nadine, “Future Scaling:  Where Systems and Technology Meet”, ISSCC 2020, Paper 1.3.

Also Read:

EDA Tool Support for GAA Process Designs

Synopsys Enhances Chips and Systems with New Silicon Lifecycle Management Platform

Synopsys Teams with IBM to Increase AI Compute Performance 1,000X by 2029



Tesla’s Musk Mocks Marketing
by Roger C. Lanctot on 12-27-2020 at 10:00 am


The Automotive News had a standout year in 2020 – thriving at a time when many publishers suffered. With factories and dealerships shuttered in the spring due to the pandemic, the outlook was bleak, but the Automotive News dug in, amplifying its reporting with multiple podcasts and video content documenting the ups and downs of the industry recovery.

Nowhere was this community service and news amplification more focused than in the Daily Drive podcast where Automotive News Publisher Jason Stein and fellow editors and reporters recorded interviews with industry notables describing their day-to-day experiences and pandemic responses. Paired with the Shift Automotive podcasts, Automotive News pumped out hundreds of these enlightening segments – all freely accessible for listeners stuck at home or needing background noise for a socially distanced workout.

The best of these podcasts, bar none, was Jason Stein’s exclusive interview with Tesla Motors’ CEO Elon Musk. And the moment that crystallizes the events of 2020 and Tesla’s and Musk’s role in the industry comes at 8:20 of the latest condensation of that interview, where Stein asks Musk if Tesla did any customer research when designing its upcoming Cybertruck.

Condensed Stein-Musk interview from Automotive News “Daily Drive” podcast:   https://www.autonews.com/weekend-drive-podcast/daily-drive-podcast-december-21-2020-2020-review-elon-musk-cuff

The moment comes as Musk, predictably, responds that, of course, Tesla didn’t do any customer research:

“Oh, zero. Customer research. Ha ha ha ha ha.  No. I mean. I mean, we just made a car that we thought was awesome. You know, it looks super weird. I just wanted to make a futuristic, like a futuristic battle tank. Something that looked like it could come out of ‘Blade Runner’ or ‘Aliens’ or something like that – you know – but that was also highly functional, had incredible capabilities, faster than a Porsche 911 and more towing power, more truck capability than a Ford F-150.”

It’s difficult to render in print, but Musk’s laugh in the segment is positively diabolical. One hears the villainous cackle of every over-confident, ghoulish baddie from every Batman, Marvel Comics, or 007 movie ever made, combined. Stein follows up his question by asking Musk to assess his own management style and gets what sounds like an almost too-perfect response from Musk: “I must be doing something right.”

And who’s to argue with that? In a separate section of the interview Musk notes his market cap exceeds the combined value of the rest of the entire automotive industry. Again, Musk takes the stance of stating a simple fact without need of exaggeration.

This week commenced with Tesla’s stock having its debut in the S&P 500 where it waffled a bit before finding its groove and settling in. As 2021 approaches, storm clouds loom as lucrative carbon credits disappear with the pending PSA-FCA merger expected to be finalized at the end of January 2021.

As InsideEVs reports, it was those carbon credits that allowed Tesla to show five consecutive quarters of profit cementing its candidacy for S&P 500 listing. Writes reporter Henrique Gustavo Ruffo: “Among other requirements – such as its market cap above $8.2B – it was mainly because it presented five consecutive profitable quarters since Q3 2019.  When you check its GAAP net income in all profitable quarters, it was inferior to the regulatory credits the company received in most of these periods.”

“EU Approves FCA-PSA Merger: What Might It Mean for Tesla’s CO2 Credits?” – InsideEVs –

Stein did ask Musk about U.S. tax credits, which Tesla has already used up. He did not ask about the carbon credits.

Still, Stein’s customer research question was an important one. It followed Stein’s asking Musk about J.D. Power’s surveys. Musk, again not surprisingly, dismissed the importance of J.D. Power, but then claimed that Tesla has the highest customer satisfaction. “What really matters is your satisfaction after the purchase and ours is highest in the industry.”

What Musk was referring to is Tesla’s industry-leading customer satisfaction from Consumer Reports. He is dismissing J.D. Power’s Initial Quality Study, which shows Tesla as the worst in the industry with 250 reported problems per 100 vehicles, well above the industry average of 166.

For Musk, that’s just the eggs breaking to make the omelet. Musk’s dismissal of customer research or major marketing expenditures of any kind is a further reflection of the disruptive business model that the company presents.

Car companies spend billions of dollars on consumer research, advertising, and marketing. Musk, zero. (Insert diabolical laughter.)

Car companies consist of massive bureaucracies where decisions are made by committee and scores are settled and battles won internally with research. At Tesla, Musk decides. End of story. (This impacts product decisions and is exemplified by the massive mid-dash landscape display in the Model 3 that no consumer focus group would have ever approved.)

For competing auto makers, when the customer research is completed and the car is designed and ready, the market must be prepared with further research around advertising messages and then the hundreds of millions of advertising dollars and incentives must be invested to convince consumers they want to buy the new car, truck, or SUV. At Tesla, Musk announces an upcoming new vehicle and takes deposits and reservations. End of story.

The final takeaway from the brilliant Stein-Musk interview is that car makers are taking too long to make cars. Musk believes the walking-pace of today’s assembly line is a remnant, a holdover, of a bygone age. Manufacturing can and ought to be accelerated a thousandfold, he says.

The conclusion? Musk may miss his profit-boosting carbon credits in 2021, but his vision of accelerated vehicle production in the context of delivering cars against already-booked orders and unburdened by consumer research, complicated product development, or massive marketing expenditures is compelling and difficult to replicate or emulate. To his financial advantages Musk is daily adding a vast manufacturing-based moat that competing auto makers will find difficult to match. One can expect further diabolical laughter from Musk. And hat’s off to a great interview by Jason Stein.



Amazon Tesla Uber and DoorDash
by Roger C. Lanctot on 12-27-2020 at 6:00 am


As the death toll in the U.S. from COVID-19 approaches 300,000 I am impressed by the resilience of pandemic doubters and deniers. I’m talking about the point-three-percent-ers* who have shifted from calling COVID-19 a hoax to encouraging as many people as possible to get the virus to “get it over with.”

Maybe “impressed” isn’t the right word. “Horrified” would be more accurate.

The only thing worse than the pandemic doubters and deniers are those that talk about “getting back to normal” – i.e. returning to the way things were before the COVID-19 pandemic swept in and immutably altered lives forever. It’s time to get with the program. Things will never be the same again – especially in the world of transportation.

There are three big reasons why things will never be the same again and they are: Amazon, Tesla Motors, and Uber Technologies.

More than any other companies, these three organizations have used the pandemic to consolidate, reinforce, and expand their market dominance at the expense of competitors and alternatives. In the process, they have introduced structural changes which are already proving irreversible.

This coming week, a fourth long hauler will join Amazon, Tesla, and Uber: DoorDash. DoorDash’s initial public offering is set to hit the public markets priced at the high end of its original range, with a $30B valuation, twice what it was just a few months ago.

Like Amazon, Tesla, and Uber, DoorDash is doing its best to cement the structural economic changes wrought by the pandemic. Those changes have amplified DoorDash’s own influence and financial success and the company is seeking to ensure its dominance in food delivery long past the point when vaccines are widely and freely available, restaurants re-open, and human beings return to doing their own shopping and mingling. The purpose of the IPO and recent service modifications is to preserve a DoorDash-friendly environment into the future.

DoorDash will only be following in the footsteps of Amazon, Tesla, and Uber. Amazon has vastly expanded its network of distribution and fulfillment centers in the U.S. and globally. In fact, it has modified its 500 Whole Foods locations with the addition of on-site dedicated delivery operations.

Amazon started the pandemic with 30,000 branded delivery vans and 20,000 branded long-haul trailers and has since announced plans to add 100,000 electric vans in partnership with EV startup Rivian. This is at a time when FedEx and UPS are reporting severe shortages of delivery vehicles during a pandemic which has many shoppers browsing from home.

Amazon’s delivery-centric culture – offering same-day and one-day deliveries – was perfectly suited to the pandemic. The expansion of fulfillment centers and its delivery fleet will only see the company refining its reformed business – transforming pandemic accommodations into permanent elements of its core value proposition. Amazon Prime’s streaming service has also benefited from this new stay-at-home culture.

Tesla pushed ahead with its plans for a German plant in the midst of the pandemic and has accelerated its growth pace in the face of supply chain disruptions, tariff tussles, and economic shutdowns. On the eve of its addition to the S&P 500 index in the U.S. – a listing that is expected to further juice its already stratospherically priced shares – Tesla’s CEO Elon Musk has mused about merging with or acquiring an existing auto maker.

The key ingredient in the Tesla success story – direct sales – is rapidly becoming the model of market penetration and disruption being adopted by both startups and legacy auto makers. Rivian has been battling to launch its direct sales model in Michigan in the U.S. in the face of local legislative resistance, while Volvo acquired two Swedish dealerships and announced plans to shift as much as 50% of its future vehicle sales online.

The pandemic raised the prospect of touchless vehicle sales and accelerated an evolutionary process pioneered by the likes of Tesla, Carvana, Vroom, and others retailing new and used cars online. New and used car dealers have been forced to adopt new selling models and auto makers have been forced to rethink their direct sales strategies. (This doesn’t mean the death of dealers, but unstoppable structural change is afoot.)

Tesla’s success has inspired rivals. EV startup valuations are percolating skyward at a rate commensurate with rising COVID infections. Special purpose acquisition corporations (SPACs) are popping up to facilitate an expansion of the EV movement as competitors pursue Tesla-like results. (SPACs are also emerging as a tantalizing path to public markets for the micromobility sector.)

Vehicle demand recovered quickly from the early economic shutdowns and, now, a march of EV startups is seen diving directly into the market without the need to set up regional or national dealer networks. Low maintenance EVs are seen as ideally suited to a low- or no-touch and limited service customer engagement experience – perfect for the pandemic…and beyond.

Of the three (Amazon, Tesla, and Uber), Uber was hit hardest by the pandemic as ride-hailing customers disappeared and stayed away, halving demand. The impact was simultaneously severe and somewhat muted by the fact that the decline in demand was matched by a decline in the supply of drivers.

The loss of revenue, though, was untenable and Uber has taken extreme measures to trim its sails and prop up its revenue sources. Uber offloaded its Jump scooter business to Lime, sold a stake in its Uber Freight unit, and is seeking buyers for its Advanced Technologies Group autonomous vehicle unit and its Uber Elevate flying car operations. At the same time, Uber has sought to acquire Autocab in the U.K.

The Autocab acquisition didn’t garner a lot of attention when announced, but the taxi dispatch company would give Uber immediate access to markets across the U.K., where it currently operates in only 20 local cities. London, in particular, and the U.K., generally, are strategic markets for Uber from both a revenue and profit perspective.

The Autocab acquisition would allow Uber to lay the groundwork for a massive expansion of its U.K. presence at the expense of local operators currently running highly profitable businesses. It might also be a hedge against further regulatory barriers raised by London’s local transport authority.

The pandemic has depressed valuations in the taxi sector opening the door to this timely acquisition which could transform Uber’s prospects in the U.K. while serving as a model for expansion elsewhere in the world where the company is facing regulatory resistance. The Autocab acquisition is subject to approval by the U.K. competition authority. Such an acquisition would mark an enduring change in market conditions.

DoorDash, like Amazon and Tesla, was in the right place at the right time when the pandemic hit – offering food and meal delivery services at a time when restaurants were closing and consumers were hunkering down. Also like Amazon and Tesla, DoorDash is seeking to lock in its advantageous position with a surge in new DashPass loyalty program users – rising to more than five million from about one million before the pandemic.

In a restaurant-friendly gesture DoorDash has also announced its intention to allow restaurants with their own delivery operations to integrate with DoorDash – allowing them to remain in the DoorDash network while delivering their own orders. These are steps designed to lock in customer behaviors learned during the pandemic.

From the accelerated pace of EV adoption to the emergence of a curbside-pickup and delivery-centric economy, the world will never be quite the same. Partitions in vehicles and in most public places are not likely to ever come down. Masks will always be near to hand – as will be hand sanitizer. We will never return to the way things were. The sooner we accept and understand this, the sooner we will be able to finish building the new normal and get back to business, to school, to church, to the movies, to flying, to being together again.

*0.3% is the World Health Organization’s estimated median infection mortality rate – though rates vary widely based on age, pre-existing conditions, and other factors.



Analog Bits is Taking the Virtual Holiday Party up a Notch or Two
by Mike Gianfagna on 12-25-2020 at 6:00 am


As 2020 comes to a close, I hear a lot of chatter about virtual meeting fatigue; “I’m Zoomed out”. We’ve all attended virtual versions of conferences this year with various degrees of success. Overall, I have to say these events are getting better. Semiconductor and EDA folks have a way of adapting and inventing, and it’s showing up in virtual events as well. The virtual party is another expression of the medium. Delivering a fun experience through the internet isn’t easy. I’m here to tell you that Analog Bits recently did exactly that. Their virtual holiday event was memorable, fun and educational. That’s quite a package to deliver. Read on to find out how Analog Bits is taking the virtual holiday party up a notch or two.

Mahesh Tirupattur

First, a bit of background is in order about the person who “produced” the event, Mahesh Tirupattur, Executive Vice President at Analog Bits. In spite of the all-consuming commitment and geekiness most of the folks in semi and EDA exhibit, many have another side, a personality that shows up when they come home from work or walk from the computer desk to the TV couch these days. Mahesh is one of those folks. Most of us have witnessed Mahesh geek out on all kinds of analog and mixed-signal design challenges. I’m here to tell you Mahesh is also a well-educated, accomplished and certified sommelier.

Some of us have seen glimpses of “the other Mahesh” from time to time. Often, he’ll show up around the holidays with a bottle of Analog Bits-branded wine as a thank you to his customers and partners. The way this wine is created is actually quite a story in itself. The whole team at Analog Bits is involved in the process, but that’s a story for another day. In case you might have one of those bottles on your shelf, I just want to point out that Mahesh is well connected in Napa Valley. That’s a good bottle of wine you’re aging. Those of us who find ourselves tearing down booths after a trade show closes also know that the Analog Bits booth will always be pouring the best wine.

Back to the virtual holiday party. Now that you know the credentials Mahesh carries, it’s no surprise the Analog Bits virtual holiday party was about wine. But that’s just the beginning. Mahesh and Analog Bits teamed up with San Francisco Wine School to produce an event that was fun and educational. Billed as a “Cloud Wine Event”, Master Sommelier David Glancy promised a guided tasting of Napa Valley wines. I was lucky enough to get a ticket to the party. After registering, I received a package in the mail a few days before the event containing four tasting-size bottles of wine labelled Wine 1 to Wine 4. There was also an envelope that was labelled “open after the event”. Clearly that was the decoder ring for Wines 1-4. I resisted the temptation to open it.  There was also a very cool wine-themed face mask in the package designed by Mahesh. It was much appreciated.

Party setup

On event day, I dutifully brought the four bottles from refrigerator to cellar temperature, poured them in four identical glasses and joined the Zoom meeting at the designated time. The event was kicked off by Alan Rogers, President and CTO at Analog Bits, followed by Mahesh. I have this mental image of folks with the title “Master Sommelier” being stuffy and full of themselves. That is NOT who David Glancy is. He was energetic, entertaining and most of all, didn’t take himself too seriously in spite of his significant credential. If you’re wondering what a Master Sommelier is, you can find out more here. There are only 269 such people on the planet. It turns out David is good friends with Mahesh and they’ve travelled the world discovering and rating wines together. That didn’t surprise me.

The blind tasting that followed was all about altitude. All four wines were from different parts of Napa Valley and the vineyards they came from were at distinctly different elevations. You may have heard the term “terroir”, which refers to the environmental conditions that grapes experience as they grow and mature. Soil is a big contributor to terroir. It turns out altitude is as well. David armed us with some key facts:

  • For every 1,000 feet of rise in elevation, there is a 3 to 5.4-degree F drop in temperature
    • Lower temps yield higher malic acid, which makes the wine crisper and more tart
  • For every 1,000 feet of rise in elevation, there is a 5% increase in UV strength
    • Higher UV makes the grape skins thicker, resulting in darker color and more tannins
  • Higher elevations are above the fog that collects on the valley floor each morning
    • This means more sun and higher temps for more concentrated flavor profiles

The challenge

So, armed with our newly found skills, we set about ordering the four wines from the lowest to the highest elevation. David treated us to many more interesting and fun facts about the Napa Valley, a small but very potent force in world-class wine production. At the end of the day, a very small number of us got it right, but we all had lots of fun trying. The wines were all excellent. They are from Napa Valley after all. Mahesh’s closing remarks included an important observation, “wine is analog”.

There was also an after-party where everyone got to turn on their cameras and mics and talk over each other. It was all great fun and quite memorable. This event is at the top of my list for 2020. Since there’s wine involved, no one should be surprised. I can’t help but wonder, as Analog Bits is taking the virtual holiday party up a notch or two, if this event is destined to become a virtual version of the famous Denali party. No pressure Mahesh.

Also Read:

Analog Bits is Supplying Analog Foundation IP on the Industry’s Most Advanced FinFET Processes

Analog Bits at TSMC OIP – A Complete On-Die Clock Subsystem for PCIe Gen 5

Cerebras and Analog Bits at TSMC OIP – Collaboration on the Largest and Most Powerful AI Chip in the World



TrueChip CXL Verification IP
by Luigi Filho on 12-24-2020 at 10:00 am


TrueChip is a Verification IP specialist. For more than 10 years they have provided verification IPs, such as USB, PCIe, Ethernet, Memory, AMBA, Display, RISC-V and many more. They have an extensive portfolio, including a very interesting product called TruEYE™ GUI, which is a debug helper tool for the verification IPs.

Protocol Intro

The CXL standard is an extension of the PCI Express standard, adding features while staying compatible. CXL (Compute Express Link) is an open interconnect standard for enabling efficient, coherent memory accesses between a host, such as a CPU, and a device, such as a hardware accelerator, that is handling an intensive workload.

“CXL is a new interconnect for device connectivity, which aims to remove bottlenecks between CPU and high bandwidth devices or memory subsystems, such as accelerators with large memory (graphics cards, GPUs based accelerator devices), memory extension devices and accelerators without much memory (NIC, FPGA based devices)”, said Nitin Kishore, CEO, Truechip.

He further added, “CXL acts as an efficient interconnect between the CPU and workload accelerators to enable high-speed communications, which is the vital need of emerging applications such as Artificial Intelligence and Machine Learning. With the release of CXL Verification IP, our goal is to enable designers to efficiently verify the latest accelerator devices and subsystems.”

CXL is expected to be implemented in heterogeneous computing systems that include hardware accelerators addressing artificial intelligence, machine learning, and other specialist tasks. The technology is built upon the well-established PCI Express® (PCIe®) infrastructure, leveraging the PCIe 5.0 physical and electrical interface to provide advanced protocols in three key areas:

  • I/O Protocol
  • Memory Protocol, initially allowing a host to share memory with an accelerator
  • Coherency Interface

CXL uses three protocols: CXL.io, CXL.cache, and CXL.mem. The CXL.io protocol is used for initialization and link-up, so it must be supported by all CXL devices and appear in PCIe config space, with additional register capabilities.

TrueChip Verification IP

The architecture is shown in the figure below and supports all possible device types in the standard.

The TrueChip CXL Verification IP covers the full CXL standard, with features including:

  • Verification IP configurable as CXL Host and Device when operating in Flex Bus mode, and as PCI Express Root Complex and Endpoint when operating in PCIe mode.
  • Support for all three CXL protocols (CXL.io, CXL.cache, CXL.mem) and device types to meet specific application requirements, with user-configurable memory size for both CXL Host and Device.
  • Support for Alternate Protocol Negotiation for CXL mode.
  • Support for PIPE Specification 5.1 with both Low Pin Count and SerDes architectures.
  • Support for the CXL Link Layer Retry mechanism.
  • Support for configurable timeouts for all three layers.
  • Support for different CXL/PCIe resets.
  • Support for CXL 2.0 configuration and memory-mapped registers (for CXL Devices and Ports).
  • Support for CXL ALMP transmission and reception to control the virtual link state machine and power state transition requests.
  • Support for CXL ACK forcing and the Link Layer credit exchange mechanism.
  • Support for arbitration among CXL.io, CXL.cache and CXL.mem packets, with interleaving of traffic between different CXL protocols.
  • Support for randomization and user controllability in flit packing.
  • Support for power management, including the low-power L1 with sub-states and L2.
  • A comprehensive user API (callbacks).
  • Built-in coverage analysis (TruEYE™ GUI).

The TrueChip Verification IP supports not only the host side but also the device side, and beyond that it supports all three CXL device types defined in the standard, as summarized in the short sketch after the list:

  • Type 1 – CXL.io + CXL.cache
  • Type 2 – CXL.io + CXL.cache + CXL.mem
  • Type 3 – CXL.io + CXL.mem
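
As a quick reference, that mapping can be captured in a few lines of Python. This is just an illustrative summary of the device types defined by the CXL specification, not part of the Truechip VIP API:

    # Illustrative summary of the CXL device types listed above; this is a
    # reference table, not a Truechip API.
    CXL_DEVICE_TYPES = {
        "Type 1": {"CXL.io", "CXL.cache"},             # accelerators without local memory (e.g., NICs)
        "Type 2": {"CXL.io", "CXL.cache", "CXL.mem"},  # accelerators with local memory (e.g., GPUs)
        "Type 3": {"CXL.io", "CXL.mem"},               # memory expansion devices
    }

    def protocols_for(device_type):
        """Return the set of CXL protocols a given device type uses."""
        return CXL_DEVICE_TYPES[device_type]

    # CXL.io is mandatory for every device type.
    assert all("CXL.io" in p for p in CXL_DEVICE_TYPES.values())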

So, this IP is a powerful and complete solution if you need the CXL protocol in your design and want it to be correct by construction the very first time.

For more details, you can watch the webinar replay at the link below and visit the Truechip website for more technical detail:

https://www.truechip.net/video/Final_CXL_Webinar.mp4

Also read:

Webinar Replay on TileLink from Truechip

Verification IP Coverage

Bringing PCIe Gen 6 Devices to Market



Multicore System-on-Chip (SoC) – Now What?
by Daniel Nenni on 12-24-2020 at 6:00 am


A quick Q&A with Jeff Hancock, senior product manager for Mentor Embedded Platform Solutions, Siemens Digital Industries Software. Jeff oversees the Nucleus® real-time operating system (RTOS) and Mentor Embedded Hypervisor runtime product lines, as well as associated middleware and professional services. Over the last 20 years, Jeff has held numerous roles in the embedded space. Prior to joining Mentor in 2018, Jeff was a product manager at Renesas. Before that he served as a product line manager at Wind River Systems, where he oversaw the entire Workbench product line, the Helix App Cloud development environment, and build and configuration for the VxWorks embedded operating system. Jeff earned his Bachelor of Science degree in Electrical Engineering Technology from Purdue University.

Q1: So my first question is, what are the main pros and cons of using a hypervisor or a multicore framework?
A hypervisor is a reasonably complex, versatile software component that provides a supervisory capability over several operating systems, managing CPU access, peripheral access, inter-OS communications, and inter-OS security. A hypervisor may be used in many ways. For example, multiple operating systems may be run on a single CPU to protect an investment in legacy software, although with the growth of multicore processors, this is becoming rarer.

Hypervisors have advantages and disadvantages compared with other solutions.

Advantages

  • Great flexibility enables efficient resource sharing, dynamic resource usage, low latency, and high bandwidth communication between VMs
  • Strong inter-core separation
  • Enables device virtualization and sharing
  • Ability to assign ownership of peripherals to specific cores

Disadvantages

  • Only works on a homogeneous multicore device (i.e. all cores are identical)
  • Significant code footprint
  • Some execution overhead
  • Requires hardware virtualization enablement in the processor

Multicore frameworks are designed very specifically to support the multicore application, providing just the critical functionality: boot order control and inter-core communications. The result is that a multicore framework loads a system with much lower overhead and can be run on much more basic systems. Although each core in an AMP design probably runs an operating system, one or more cores may be “bare metal” – i.e. running no OS at all. A multicore framework can accommodate this possibility.

Multicore frameworks have advantages and disadvantages compared with other solutions.

Advantages

  • Provides the minimally required functionality for some applications
  • Modest memory footprint
  • Minimal execution time overhead
  • Can work on heterogeneous multicore devices (i.e. all cores do not need to be identical)
  • Supports bare metal applications

Disadvantages

  • The core workloads are not isolated from each other
  • It can be more challenging to control boot sequence and to debug

Q2: Okay, so why would you choose one over the other?
If the specific application is just a consolidation of existing systems onto a single device or the application requires multiple operating systems to virtualize different peripherals, then a hypervisor is a good choice. If the device utilizes heterogeneous processor cores of the SoC, or the device has a mixed safety-criticality requirement, then a multicore framework is a better choice. In the end, the final decision will depend on the specific application requirements and the use case for the device.
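
The guidance above can be condensed into a small decision helper. This Python sketch simply encodes the rules of thumb from the answer; the function and parameter names are mine, not a Siemens tool or API:

    # Paraphrase of the hypervisor-vs-framework guidance above as a tiny helper.
    def recommend_runtime(consolidating_legacy_systems,
                          needs_peripheral_virtualization,
                          heterogeneous_cores,
                          mixed_safety_criticality):
        if heterogeneous_cores or mixed_safety_criticality:
            return "multicore framework"
        if consolidating_legacy_systems or needs_peripheral_virtualization:
            return "hypervisor"
        return "either: decide on the specific application requirements and use case"

    # Example: a heterogeneous SoC with a safety island next to application cores.
    print(recommend_runtime(False, False, True, True))   # -> multicore framework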

Q3/Q4: How can you leverage heterogeneous multicore SoCs when there is a functional safety requirement? What isolation methods can I use to separate my runtime environments in a multicore system?
In the past, to meet the functional safety requirement, users would have to create different hardware systems or certify the entire system (including the parts that did not impact safety functions). Now users can take advantage of the features of the heterogeneous multicore SoC to separate the safe world from the unsafe world and establish communication through a certified framework. This results in lower hardware and certification costs.

A multicore framework leverages hardware-assisted separation capabilities provided by some SoC architectures to obtain the required separation between the safe and non-safe domains. This includes the separation of processing blocks, memory blocks, peripherals, and system functions. The multicore framework provides enhanced bounds checking to ensure the integrity of shared memory data structures. It also provides interrupt throttling and a polling mode to prevent interrupt flooding. It is even possible to use a non-safety-certified hypervisor together with a mixed safety-criticality enabled multicore framework.
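To illustrate the interrupt-throttling idea mentioned above, here is a conceptual Python sketch. The threshold and the one-second window are arbitrary assumptions, and a real implementation would live in the framework’s driver layer rather than in Python:

    import time

    # Conceptual model of interrupt throttling: if interrupts arrive faster
    # than a threshold, the receiver switches to polling to avoid flooding.
    class ThrottledReceiver:
        def __init__(self, max_irq_per_sec=10_000):
            self.max_irq_per_sec = max_irq_per_sec
            self.window_start = time.monotonic()
            self.irq_count = 0
            self.polling_mode = False

        def on_interrupt(self):
            """Count an interrupt; switch to polling if the rate is excessive."""
            now = time.monotonic()
            if now - self.window_start >= 1.0:      # start a new 1-second window
                self.window_start = now
                self.irq_count = 0
                self.polling_mode = False           # re-enable interrupt mode
            self.irq_count += 1
            if self.irq_count > self.max_irq_per_sec:
                self.polling_mode = True            # mask interrupts, poll instead
            return self.polling_mode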

Q5: Are there any performance reductions or improvements in using either method?
It is not logical to think in terms of performance reduction; a clearer concept is the amount of overhead introduced by a hypervisor or multicore framework. Although it can never be zero, a hypervisor, being quite a small piece of software, need not introduce much overhead at all when managing guest operating systems running on multiple cores. An area where performance may be a consideration is hardware access: if the hypervisor is used to virtualize devices, some overhead will be introduced. Since the operating systems run directly on the cores, the execution time overhead is minimal.

The Nucleus® RTOS is deployed in over 3 billion devices and provides unparalleled value by accelerating the delivery of high-performance, highly reliable, highly secure embedded devices. System reliability can be improved using a process model to assist in protection for systems spanning the range of aerospace, industrial, automotive, and medical applications. Developers can make full use of multicore solutions across the spectrum of Microcontroller and Microprocessor SoCs using SMP and AMP configurations.

Also Read:

Smoother MATLAB to HLS Flow

A Fast Checking Methodology for Power/Ground Shorts

Mentor Offers Next Generation DFT with Streaming Scan Network



NetApp’s FlexGroup Volumes – A Game Changer for EDA Workflows
by Mike Gianfagna on 12-23-2020 at 10:00 am


In my prior post on NetApp, I discussed how the company’s FlexCache technology can keep distributed design teams in sync. Coordination and collaboration are critical elements of any complex design project. The ability to deliver results quickly while managing massive amounts of data is also a critical element of success. The storage subsystem for a complex design flow needs to remain fast and efficient as SoC projects scale, and this is not easy. When I heard that NetApp’s FlexGroup volumes were specifically designed for the scale and performance demands of 7 and 5 nm designs, I became quite interested. Are NetApp’s FlexGroup volumes a game changer for EDA workflows?

First, let’s put the technology in perspective. NetApp’s core storage operating system is called ONTAP. You can learn more about ONTAP in my prior post. For the last 20 years, NetApp’s FlexVol® volumes have been the gold standard for EDA workloads.  But as semiconductor designs have grown in size and complexity, so has the need for scale-out and scale-up storage performance.  FlexGroup volumes were designed specifically to meet the demanding needs of modern EDA workflows and shrinking process nodes.  FlexGroup volumes unlock the extreme performance of NetApp’s enterprise-grade storage systems. The result is game-changing design efficiency to meet quality and time-to-market requirements.

I recently caught up with Tony Gaddis, senior director of performance at NetApp, to get some background on FlexGroup volumes. I started my conversation with Tony by exploring what EDA workloads look like.  Where are the challenges coming from?  Tony provided quite a list:

  • EDA workflows strive for the shortest possible runtime and thus should always strive to be CPU bound and not I/O bound. That means you want to design your workflow to minimize I/O and maximize CPU utilization, so you need high-performance I/O
  • EDA workloads are highly parallel (LSF/Grid), meaning 100s to 1,000s of jobs (CPU cores) are hitting the filer at the same time. The more jobs you can run in parallel, the faster you complete your projects. Your filer needs to be able to scale without running out of available performance
  • EDA workloads have extremely high file counts (millions to billions of small and large files in a single namespace) and can generate as much as 60-80% metadata I/O (file timestamps, does the file exist, etc.), which consumes available filer performance and often leads to performance bottlenecks
  • And the challenges are mounting. 10, 7 and 5 nm designs are creating an explosion of data which compounds the problems

Tony explained that an enhancement of NFS file systems is needed to deal with these challenges and this is what ONTAP FlexGroup volumes deliver.

With FlexGroup volumes, a massive single namespace (up to 20PB and 400 billion files) can easily be provisioned in a matter of seconds. FlexGroup volumes have virtually no capacity or file count constraints outside the physical limits of hardware or the total volume limits of ONTAP – the stated limits of 20PB and 400 billion files are simply a matter of qualification across a 24-node cluster. Tony explained that there is no required maintenance or management overhead (or costs) with a FlexGroup volume. You simply create the FlexGroup volume and mount it as you would a FlexVol.  In fact, as of the ONTAP version 9.7 release, you can non-disruptively upgrade an existing FlexVol to a FlexGroup volume. This kind of system management ease means design teams will experience superior up-time and faster support. More game-changing benefits.

How does a FlexGroup volume scale up in terms of performance and capacity? For starters, ONTAP allows up to 24 storage controllers for NAS configurations and up to 12 high-availability pairs for six nines of data resiliency and availability. When a FlexGroup volume is provisioned, ONTAP automatically writes data across the available storage nodes. The data is accessed as a single mount point, transparent to the NAS clients. All these clients see is a massive, high-performance bucket to store data. A FlexGroup volume offers distinct advantages over the standard FlexVol volume.
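Conceptually, a FlexGroup behaves like a single namespace whose files are spread across member volumes on different nodes. The Python sketch below is only a mental model of that behavior, using a simple hash-based placement; it is not NetApp’s actual ingest heuristic:

    import hashlib

    # Mental model only: one namespace whose files land on different member
    # volumes (and therefore different controller nodes). ONTAP's real
    # placement logic is more sophisticated than a hash.
    class FlexGroupModel:
        def __init__(self, member_volumes):
            self.members = list(member_volumes)

        def place(self, path):
            """Pick the member volume that would hold a given file path."""
            h = int(hashlib.sha1(path.encode()).hexdigest(), 16)
            return self.members[h % len(self.members)]

    fg = FlexGroupModel([f"node{i}:member{i}" for i in range(1, 5)])
    print(fg.place("/proj/chip_a/layout/top.gds"))

The point of the model is simply that clients see one mount point while the work is spread across many nodes.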

With a FlexVol volume, metadata-heavy workloads (i.e., CREATE and SETATTR) such as EDA can become bound to a single CPU thread, which performs serially in ONTAP. In addition, a FlexVol is “owned” by a single node, which means only a single node’s CPU, RAM, network and other resources can be applied to that workload at any given time.

A FlexGroup volume takes advantage of multiple nodes to process I/O in parallel, which provides concurrency benefits to those EDA workloads.
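
As a client-side analogy only (this is not how ONTAP works internally), the difference between one owner handling every metadata operation and several workers sharing the same batch can be sketched in a few lines of Python; the file count and worker count here are arbitrary:

```python
import os
import tempfile
import time
from concurrent.futures import ThreadPoolExecutor

# Analogy only: the same pile of metadata operations handled by a single
# worker versus split across four workers, loosely mirroring one FlexVol-owning
# node versus a FlexGroup spread over multiple nodes.
root = tempfile.mkdtemp(prefix="md_demo_")
paths = [os.path.join(root, f"f_{i}") for i in range(2_000)]
for p in paths:
    open(p, "w").close()

def stat_all(batch):
    for p in batch:
        os.stat(p)

t0 = time.perf_counter()
stat_all(paths)                           # serial: one worker does everything
serial_s = time.perf_counter() - t0

chunks = [paths[i::4] for i in range(4)]  # split the same work four ways
t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    list(pool.map(stat_all, chunks))
parallel_s = time.perf_counter() - t0

print(f"serial: {serial_s:.3f}s, four workers: {parallel_s:.3f}s")
```

On a local SSD the gap may be small, but against a network filer, where every operation carries a round trip, spreading the work pays off quickly; FlexGroup volumes apply the same idea on the storage side.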

FlexVol vs. FlexGroup volume

NetApp stands behind this performance boost and backs it up with published SPEC SFS 2014 software build benchmark results. The software build profile is very similar to an EDA workload – heavy write metadata activity. In these results, ONTAP clusters showed near-linear scaling as more nodes were added, pushing more overall jobs to the cluster than the competition thanks to the parallelized nature of the NetApp ONTAP FlexGroup volume, and reaching upwards of 40GB/s and 3 million IOPS on a 12-node AFF A800 cluster.

You can check out those officially published results here:

https://www.spec.org/sfs2014/results/sfs2014swbuild.html

SPEC SFS 2014 Number of builds

The SPEC SFS benchmark results below show the difference between a 4-node and an 8-node FlexGroup volume. The number of EDA jobs that can run in parallel nearly doubles. Results have demonstrated almost linear performance scaling as more nodes are utilized.

SPEC SFS 2014 Throughput

Couple the performance benefits of FlexGroup volumes with NetApp’s latest NVMe-based storage controllers and customers are seeing up to 50% faster jobs when moving from traditional spinning drives with FlexGroup volumes to NVMe with FlexGroup volumes. All-flash (SSD) systems with FlexGroup volumes are the new gold standard for EDA workflows.

So, what does all of this performance improvement from FlexGroup mean to the chip designers? 

With higher performance and the ability to scale capacity transparently, time, the most precious resource of an EDA design cycle, is freed up to be used in whatever way is most beneficial. For products with fixed release cycles, more time could mean more QA cycles before release, leading to higher initial product quality. For products with shortened time-to-market windows, the extra time could be used to maintain the same quality and release sooner, or to keep the release schedule and improve quality with more QA cycles before release.

With the increasing complexity and storage requirements of leading-edge designs (3nm storage requirements are expected to be 4X larger than those of 5nm designs), the ability to scale capacity transparently while also improving performance is necessary to design effectively at these leading-edge process nodes.

All this was quite an eye-opener. Sophisticated approaches to storage management can have a huge impact on the efficiency of EDA workloads, and getting to market faster is what it’s all about. NetApp has an excellent technical report on the subject entitled “Electronic Design Automation Best Practices”. This document explains a lot more about FlexGroup volumes and how to deploy them for EDA workloads. You can get a copy of the report here. After perusing some of these resources you will understand why NetApp’s FlexGroup volumes are a game changer for EDA workflows.

Also Read:

Concurrency and Collaboration – Keeping a Dispersed Design Team in Sync with NetApp

NetApp: Comprehensive Support for Moving Your EDA Flow to the Cloud

NetApp Simplifies Cloud Bursting EDA workloads


Automatic Generation of SoC Verification Testbench and Tests

Automatic Generation of SoC Verification Testbench and Tests
by Daniel Nenni on 12-23-2020 at 6:00 am

Agnisys QEMU

Last month, I blogged about a webinar on embedded systems development presented by Agnisys CEO and founder Anupam Bakshi. I liked the way that he linked their various tools into a common flow that spans hardware, software, design, verification, validation, and documentation. Initially I was rather focused on the design aspects of the flow, noting how the Agnisys solution can assemble a complete system-on-chip (SoC) design and generate RTL code for registers, bus interfaces, and a library of IP blocks. But I was also intrigued by the amount of verification automation in the flow, and so I asked Anupam to fill me in on some of the details. He agreed to do so, suggesting that we first look at the big picture and defer the details of which specific tools generate which specific verification models and tests. That made good sense to me.

From the verification viewpoint, the Agnisys flow generates UVM testbench models, UVM-based test sequences that can configure, program, and verify various parts of the design, and C-based tests that can do the same. Of course, the UVM tests run in simulation, but the C tests are more flexible. They can run in simulation on a CPU model, RTL or behavioral, along with the RTL design. They can also run as “bare metal” tests in an emulator, an FPGA prototype, and even the final chip. Thus, the generated tests range from RTL simulation to hardware-software co-simulation to full system validation. In fact, the Agnisys flow even generates formats that can be used to create chip production tests on automatic test equipment (ATE).

Fundamentally, the flow automatically generates three kinds of environments:

  • UVM environment for verification
  • UVM/C based SoC verification environment
  • Co-verification environment

The UVM environment includes generated UVM testbench components for registers, memories, popular bus interfaces such as AXI and AHB, bus bridges and aggregators, and IP blocks such as GPIO, I2C master, timers, and programmable interrupt controllers (PICs). At this stage, the UVM environment might or might not include an embedded processor model. The generated tests use the UVM models and the design’s ports to configure the RTL, stimulate it to perform various functions, and check for correct results and sufficient coverage metrics. Anupam pointed out that these tests verify individual blocks in the chip, and more. Since they access the blocks from the design ports, they also exercise buses, bridges, aggregators, and other types of interconnection logic within the SoC.

The tests automatically generated for this environment are quite sophisticated. The sequences verify all the RTL code generated by tools in the Agnisys design flow, including registers, memories, buses/bridges/aggregators, and IP blocks. These tests are capable of handling interrupts and their corresponding interrupt service routines (ISRs). The tests check the functionality of special registers such as lock, page, indirect, shadow, and alias registers. They include positive and negative tests for different access types such as read-write, read-only, and write-only, providing 100% coverage in the UVM environment.

In a UVM/C environment, much of the same verification can be performed by running C tests on a model of the embedded processor (or processors). The Agnisys flow generates a UVM/C based environment that can run both C and UVM tests, including a component that provides synchronization between the two types of tests. There are numerous ways to mix and match these tests, but typically the C code is used to configure the IP blocks while the UVM tests run the bulk of the verification. If the user does not have a processor model available yet, the flow can integrate an RTL SweRV Core EH1, a 32-bit, 2-way superscalar, 9-stage pipeline implementation of the RISC-V architecture.

These generated tests can be used as the start of the SoC regression suite, while the generated UVM environment and models can be the foundation for the complete SoC testbench. The Agnisys approach is flexible; users can specify the sequences needed to configure, program, and verify their own design blocks so that custom tests can also be generated. The flow supports using Python or Excel to develop the sequence/test specifications. Anupam noted that the C tests are often used as the base for device drivers, diagnostics, and other forms of production embedded software. The idea that a single specification can be used to generate tests for UVM, UVM/C, and bare metal is clearly powerful.
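
Since the flow accepts sequence and test specifications written in Python, it is worth picturing what such a description could look like. The sketch below is purely hypothetical; the class names, fields, and the tiny “generator” loop are my own placeholders for the idea, not Agnisys’s actual specification API:

```python
# Hypothetical sketch only: structure, names, and fields are placeholders to
# show the idea of a register-sequence specification in Python; they are not
# Agnisys's actual specification API.
from dataclasses import dataclass, field

@dataclass
class RegisterStep:
    register: str      # register to access, e.g. "ctrl"
    access: str        # "write" or "read"
    value: int = 0     # data to write, or expected readback value

@dataclass
class SequenceSpec:
    name: str
    steps: list[RegisterStep] = field(default_factory=list)

# Configure a hypothetical timer block, then read back and check its status.
timer_bringup = SequenceSpec(
    name="timer_bringup",
    steps=[
        RegisterStep("prescaler", "write", 0x0010),
        RegisterStep("ctrl",      "write", 0x0001),  # enable the timer
        RegisterStep("status",    "read",  0x0001),  # expect "running" bit set
    ],
)

# From a spec like this, a generator could emit both a UVM sequence and a
# C test that perform the same accesses against the same registers.
for step in timer_bringup.steps:
    print(f"{step.access:5s} {step.register:10s} 0x{step.value:04X}")
```

One description driving a UVM sequence for simulation and an equivalent C routine for bare-metal runs is exactly the “single specification, many targets” idea Anupam described.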

The co-verification environment mixes C and UVM tests in a different way. The C tests run on the QEMU open-source emulator and virtualizer. QEMU emulates the processor behavior and is an especially good vehicle for developing and debugging device drivers for the IP blocks. This approach helps teams develop their tests and driver code without the need for FPGA-based prototypes, making it a much more cost-effective and scalable alternative.
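
As a generic illustration of the mechanics (not Agnisys’s actual integration, and with the machine type, binary path, and timeout being assumptions), a bare-metal RISC-V test image can be run under QEMU and its console output captured from a small Python harness:

```python
import subprocess

# Generic illustration only: launch QEMU to run a bare-metal RISC-V test image
# and capture its console output. Machine type, binary path, and timeout are
# placeholder assumptions, not a specific vendor setup.
cmd = [
    "qemu-system-riscv32",
    "-machine", "virt",      # generic RISC-V virtual board
    "-nographic",            # route the serial console to stdout
    "-bios", "none",         # no firmware; boot the test image directly
    "-kernel", "build/ip_config_test.elf",
]
# If the test never exits, the timeout raises and the run is flagged as a hang.
result = subprocess.run(cmd, capture_output=True, text=True, timeout=60)
print(result.stdout)
```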

Since ultimately users buy tools to enable this SoC verification flow, I asked Anupam to quickly summarize how their products contribute:

  • IDesignSpec™ (IDS) generates UVM models for registers and memories
  • Specta-AV™ generates a complete UVM testbench with test sequences and coverage
  • Standard Library of IP Generators (SLIP-G™) generates tests for its IP blocks
  • Automatic SOC Verification and Validation (ASVV™) generates device driver building blocks and supports bare-metal verification
  • SoC Enterprise™ (SoC-E) assembles the complete SoC verified in the flow
  • DVinsight™ is a smart editor for creating UVM-based SystemVerilog code

Thank you to Anupam for filling me in on the details of this automated embedded SoC verification flow. The more I learn, the more I’m impressed by the scope of their solutions. Have a great holiday season, and I’m sure that we will be talking with Agnisys again in the new year.

Also read:

Embedded Systems Development Flow

CEO Interview: Anupam Bakshi of Agnisys

DAC 2021 – What’s Up with Agnisys and Spec-driven IC Development