
Elon Knows When You Crash
by Roger C. Lanctot on 02-21-2021 at 8:00 am

It’s true. Elon Musk, CEO of Tesla, knows when you crash your Tesla. He just isn’t obliged, in the U.S., to do anything about it. And he’s not alone.

Here it is, 2021, and buyers of cars in the U.S. can’t count on getting automatic crash notification (ACN) included in their next new car.  Even in cars equipped with ACN, the feature requires a subscription to work in most cases.

When European regulators mandated eCall in all new cars years ago, those of us on this side of the Atlantic chuckled at their feeble attempt to “catch up” with the U.S., where OnStar had been launched by GM 20 years before. While the EU was working on eCall, the U.S. was tinkering with “next gen 9-1-1.”

Now, here we are in 2021, and emergency crash notification – an automatic call for help from a car in the event of an airbag deployment, or a button-push request for assistance – is still neither a standard feature on cars sold in the U.S. nor a mandated piece of automotive kit.  If you crash your car in the U.S., you’re pretty much on your own if you haven’t paid for the built-in telematics service.

Tesla is a special case, though.  By now we all know that Musk is collecting buckets of vehicle data throughout the operational life of a typical Tesla via its built-in wireless connection.  We also know that Musk has used that data forensically to get himself and his company “off the hook” in the event of multiple spectacular and fatal Tesla crashes.

Time and again Musk has used vehicle data to demonstrate how drivers have misused Tesla vehicles, violating various warnings and caveats, leading to fatal encounters with other vehicles and inanimate objects.  We’ve all seen multiple Tesla RUD (rapid unscheduled disassembly) pictures and videos.  What is missing from all of these events is the timely arrival of assistance in the form of police, fire department, or ambulance personnel – beckoned by a built-in, on-board 911 call – a la OnStar or some equivalent.

This puts Musk in a special category. He is using the wireless connection and the data collected thereby against the misbehaving vehicle owner rather than putting connectivity to work to provide assistance in urgent circumstances.

For the rest of the industry, the failure of auto makers to provide a free, built-in emergency call capability in all cars sold in North America – including General Motors vehicles – is a sad commentary on the industry.  But Tesla’s failure to provide a built-in emergency call function stands out.

In a recent Twitter exchange between Musk and a Tesla owner – who was unable to summon assistance using his phone and also was unable to access the vehicle’s wireless connection to seek help – the Tesla CEO tweeted “Absolutely” to the suggestion that Tesla ought to enable emergency calling from its vehicles. So, Musk likes the idea. Musk already offers this solution on vehicles sold in continental Europe and Russia. Tesla owners in the U.S. wait.

Musk’s Twitter exchange with Tesla owner: https://cleantechnica.com/2021/01/03/tesla-vehicles-could-be-able-to-call-911-during-an-emergency/

It was 25 years ago that GM first began the process of introducing emergency call modules developed as part of Project Beacon in Cadillac vehicles – beginning the journey to the introduction of what we now know as OnStar.  At that time GM Executive Chairman Harry Pearce asked the perplexing questions (from OnStar President Chet Huber’s “Detour”): “If one hundred cars crash and they don’t have something like OnStar on board, how many of them will call for help?” “Now, how many out of a hundred OnStar-equipped cars that crash will need to call for help before we’d be more wrong for holding back a potentially lifesaving technology like this than we would be for putting it in?”

The rest is history, as they say.  OnStar was born, but it was another 10 years before it was built into every GM vehicle.  And today, the automatic crash notification feature from GM is still not free.  A friend of mine is fond of saying that making customers pay for automatic crash notification is like a hotel charging you for the fire extinguisher (or sprinkler fire suppression system) in your room.

Musk should correct this embarrassing omission in Tesla vehicles.  If Tesla can deliver cars with eCall in Europe and Russia, the company can deliver an equivalent solution in the U.S.

The same goes for the rest of the automotive industry.  Car makers shouldn’t be de-contenting vehicles of vital safety systems for the U.S. market and up-contenting for Europe and Russia.  Automatic crash notification in passenger vehicles ought to be regarded as standard equipment – a human right maybe?

Automatic crash notification is only a start.  There is further work needed on leveraging vehicle data in the event of a crash to determine crash severity, the condition and number of vehicle occupants, and the accurate location of the vehicle.  It’s not too late for Tesla to show the way forward.  Sad to say, in 2021, automatic crash notification is not a solved problem in the U.S.


How do you plan the best Bitcoin miner in the world?
by Raul Perez on 02-21-2021 at 6:00 am

As many of you know, Bitcoin prices have recently surged to $40,000 USD per bitcoin as of February 2021. We are in the middle of a bit rush! People are noticing Bitcoin’s surge and wondering how they can profit from it. In this article we will explore how custom silicon is a vital part of a winning bitcoin mining strategy.

Some people wonder what it would take to make their own Bitcoin mining custom silicon in order to beat everyone else. My quick survey of the field indicates the Bitmain Antminer S19 Pro is the state-of-the-art bitcoin mining equipment as of February 2021. Just as Amazon, Apple, Facebook, Tesla, Google and others have realized that custom silicon gives their businesses a clear competitive advantage, Bitmain too decided to make its own custom silicon.

Using the latest silicon process node increases the power efficiency and the processing power of the bitcoin miner system. This is why bitcoin miner systems manufacturers continue to update their custom silicon mining chips.
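To ground what these mining chips actually compute: a bitcoin miner searches for a nonce such that the double-SHA-256 hash of a block header falls below a network-set target. The sketch below is illustrative only – real miners hash an 80-byte header against a 256-bit difficulty target, while here a leading-zero-byte check stands in for that comparison:

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    """Bitcoin's proof-of-work hash: SHA-256 applied twice."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(header_prefix: bytes, difficulty_zero_bytes: int, max_nonce: int = 2**20):
    """Search for a nonce whose hash starts with the required number of zero
    bytes (a simplified stand-in for Bitcoin's real 256-bit target)."""
    target_prefix = b"\x00" * difficulty_zero_bytes
    for nonce in range(max_nonce):
        candidate = header_prefix + nonce.to_bytes(4, "little")
        if double_sha256(candidate).startswith(target_prefix):
            return nonce
    return None  # no solution within the search bound

# A mining ASIC performs exactly this brute-force search, but at
# terahashes per second instead of megahashes.
nonce = mine(b"example-block-header", difficulty_zero_bytes=1)
```

Each additional zero byte multiplies the expected search effort by 256, which is why raw hash throughput per watt – and hence the silicon process node – decides who wins.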

Some things to consider to plan your next custom silicon mining chip:

  • Selecting a chip supplier to design your custom silicon (i.e. ASIC).

Finding a good chip supplier to design your ASIC is an art in and of itself. You want a reputable company with an excellent design team, but they also want a reputable system company as a customer.  So if this is your first project making a bitcoin miner, you will need to convince the chip supplier (among others) that you’re a serious customer. There are many chip design houses in the world, but many of them are probably not who you’d want to hire if you want to reduce your technical and schedule risks. One way to mitigate the risk of selecting the wrong supplier, and to present your RFQ professionally, is to hire a silicon manager to assist you in those interactions. As part of CustomSilicon.com’s process we work through the Concept and Requirements phases with the chip supplier candidates, and end up selecting one candidate after the Si proposal review. I’d go for 4 chip supplier candidates at Concept, reduce that to 2 suppliers at Concept phase sign-off, and then at Requirements phase sign-off downselect to 1 chip supplier.

You want to buy RTL IP that is ready for use, or hire a chip supplier that has it from past projects. There are some companies out there with previous experience designing custom ASICs for bitcoin mining. But you always need to thoroughly vet them before moving forward and writing checks for NRE and masks.

  • Project cost.

There are some costs that are more predictable than others. A disclaimer: all prices below are my gut feeling, based on what I read and hear from others and on my own experience. But as you should know, many prices are negotiable and are influenced by your relationships, total volumes, the supplier’s opportunity cost, negotiating skills, etc… Here are some:

    1. Masks: To make a chip at the foundry you need to buy masks. ASICs for bitcoin mining are already in the 7 nm node today. So if you want to leapfrog the competition you need to shoot for 5 nm or 3 nm. 3 nm is the highest risk since this process node is still in development.  In my opinion masks for a 5 nm or 3 nm process will be in the 10 to 14 million USD range; let’s call it 12 million USD for easy math. A project like this will probably take two full mask sets in a good-case scenario. Selecting a good supplier, performing detailed reviews, using state-of-the-art EDA tools, getting direct foundry support, and hiring a silicon manager are all good ways to mitigate the risk of needing to tape out more than twice.
    2. NRE: This is the cost you pay the chip design house. It is really subjective and speculative before having gone through the Concept and Requirements phases, since it will depend on how closely the RTL they have matches the requirements the system company wants and what trade-offs you negotiate. It also depends on foundry rule deck accuracy and, simply put, on what else that chip design house could be working on, since that is an opportunity cost for them. In my opinion this will land in the 2 to 5 million USD range, but it can really vary depending on negotiations and everything mentioned here.
      • Firmware: Here it’s important to decide who will write the firmware for the chip. This can actually be a significant cost comparable to chip designer time cost and sometimes exceeding it.
      • Assembly and test: It’s possible that the chip design house is not going to provide a full solution. In that case you need to go work with an outsourced assembly and test (OSAT) house.
      • Ideally you don’t need to go through one of the value chain aggregators (VCAs) to work with the wafer foundry, but that could happen. I think working direct with the foundry will help your project go faster and reduce errors, but the foundries don’t want to work directly with startups (i.e. companies with small volume). So the point made further below in this article about buying foundry wafers is key to gain direct foundry access.
    3. Summary of fixed costs: 24 Million USD (assuming two full mask sets) + 2 to 5 Million USD (NRE) = 26 to 29 Million USD
  • Project schedule.

    1. My gut feeling is that getting from Concept phase start to Requirements phase sign-off is probably a 3-month endeavor.
    2. Time from spec freeze to tape out is probably in the 6 months range. This could be longer or shorter depending on how close the starting RTL is to what the system company wants from its miner.  It’s important to highlight that specification freeze requires the system company to develop concise, precise and complete requirements documentation during the Concept and Requirements phases. This is important so that the chip supplier can provide a draft specification quickly after Requirements phase sign-off and we can close on the specification of the chip with Specification phase sign-off. Constant spec changes during development are a project schedule and cost nightmare that can be avoided with disciplined process and early attention to detail.
    3. Time from tape out to tested samples is probably 5 months.
    4. Time to first samples: The time to first ASIC samples is the sum of items 1 + 2 + 3, which is 14 months for your late Proto or first EVT build.
    5. You will likely need to spin the silicon one more time to get to final shippable silicon. This likely means 2 months of validation time, 2 months to get ready for the tape out, and 5 months of fabrication time. You will have to be thorough at chip- and system-level validation to find all bugs, then check that the bugs behind the ECOs are properly root-caused and the fixes verified before tape out.  So your final silicon samples (i.e. not production quantities) come in at 23 months for your DVT build.
    6. Mass production risk ramp is another area where you will need to make a judgment call: how much money you want to risk before knowing that the final silicon is good (i.e. before you have completed your DVT build).  You can decide to pull in the bitcoin miner system’s mass production ramp date by risk-releasing wafers before the DVT build. To do that you need to go through all your validation status and make a risk assessment in preparation for Mass production phase sign-off, leading to your PVT (i.e. final) build. It takes about 5 months to get mass production parts in volume. So if you waited to build systems with final silicon samples until 23 months and then signed off on ramping the wafers at DVT build exit sign-off, it will take another 5 months (plus some assembly, packaging and test time) to get those mass production chips in quantity to your factory. Risk-released wafers could end up being scrapped if you find bugs at DVT that are unacceptable in your final silicon, so this needs to be done with care – it can cost you millions of dollars in scrapped wafer material. Sometimes bugs can be fixed with one-time programmable memory (OTP) at final test, which would save you from having to scrap the wafers. So plan to lock OTP settings sometime before you need to run chips through final test for your mass production ramp.
  • Buying wafers from the foundry.

As you may have heard, there is a shortage of silicon wafer foundry capacity. So you will need to make a compelling case to the foundry for why they should work with you in their 5 nm or 3 nm process nodes. As you know, money is a great facilitator, so it may be that you need to commit to buying wafers ahead of time. If you commit to buying a lot of wafers upfront, the foundry will need to provide you with direct support, preferential fabrication times (super hot lots, hot lots), etc…

Let’s say you plan on building 100,000 bitcoin miner systems, each containing 200 chips/ASICs. That is 20,000,000 chips. In 300 mm wafers that is probably something like 5,000 wafers. The number of chips per wafer depends on your final die size, your fab yield, your package yield and your test yield. So here I assumed you get 4,000 good chips/dies per wafer. Of course these numbers could be different for your system, but I will assume these to illustrate what I think is the likely ballpark. During the process phases all of these details are nailed down and adjusted as needed. The question then is: what is the minimum number of wafers the foundry would ask you to commit to buy upfront to get the kind of support and preferential access you need to get your bitcoin miners built faster?

I am going to guess a 3 nm wafer may end up costing $20,000 USD. So you see that if you end up buying 5,000 wafers, that is a $100,000,000 purchase! Maybe you can commit to buying 10% of that upfront and get a direct deal with the foundry, maybe not; you need to negotiate.
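The wafer arithmetic above is easy to sanity-check. All figures below are this article’s rough assumptions, not foundry quotes:

```python
# All figures are the article's rough assumptions, not supplier quotes.
systems = 100_000             # planned bitcoin miner systems
chips_per_system = 200        # ASICs inside each system
good_chips_per_wafer = 4_000  # assumed, net of die size and fab/package/test yields
wafer_cost_usd = 20_000       # guessed price of one 3 nm 300 mm wafer

total_chips = systems * chips_per_system              # total ASICs needed
wafers_needed = total_chips // good_chips_per_wafer   # wafers to buy
wafer_spend = wafers_needed * wafer_cost_usd          # total wafer purchase
upfront_commit = wafer_spend // 10                    # a possible 10% upfront commitment

print(f"{total_chips:,} chips -> {wafers_needed:,} wafers -> ${wafer_spend:,}")
```

Change any one assumption – die size, yield, wafer price – and the purchase commitment moves proportionally, which is why these details get nailed down and adjusted during the process phases.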

There are also system level miner considerations.

These are outside of the scope of this article since they are not directly custom silicon items. But it’s worth briefly mentioning them since custom silicon is developed to directly support a custom system hardware project; in this case a bitcoin miner.  Here are some things that will need to be planned:

  • Hiring a contract manufacturer (CM).

You need to build systems in mass production somewhere. This supplier needs to be able to source all components, then assemble, test, and package the systems for shipment. A lot of companies choose CMs in Asia (China, Taiwan, etc…). These guys will also develop some or all of your factory test infrastructure. You need to pick wisely: the same CM company can field very different levels of quality and experience for different customers. If you’re a new or small customer you may not get a good team, so you need to shop around for the right CM partner.

  • Pre-silicon deliverables.

    1. FPGA board. Your firmware team needs a platform to start developing code on in preparation for the first build.
    2. Blank packaged chips and mock form factor accurate PCB boards. Your mechanical engineers may need these so they can build mechanical only prototypes to mock-up the cooling solution well ahead of your first system build at the CM.
  • Designing the hard system level stuff.

    1. Firmware. You’ll need to write the firmware to control the PCBs with all the ASICs on them. So you need some firmware engineers with experience writing firmware for bitcoin miners.
    2. Cooling. You’ll need to cool down your miners. These miners consume thousands of watts each. This means you’ll need to design a customized heatsink system. Some people use fans, others immersion cooling, etc… Whatever you do this is a critical part of the project and you need to hire good mechanical engineers with experience in this type of design.
    3. PCBs. You’ll need to design efficient power supplies. There is no point in making a super power-efficient custom silicon chip and then wasting lots of power in the power converter plugged into the wall feeding your chip. You’ll also need to design good PCBs with thick copper so that your board losses are not too high. This all means that you need to hire a good electrical engineer to design this for you.

In summary.

Likely time to first samples 14 months from kick off.

Likely time to final silicon samples 23 months from kick off.

Summary of fixed costs 26 to 29 million USD. This is for the chip only. There will be some additional costs to develop the bitcoin miner system as discussed in the system level miner considerations section.
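Rolling up the schedule and fixed-cost estimates from the sections above (all numbers are the gut-feel estimates stated earlier, not commitments):

```python
# Schedule roll-up, in months, from the estimates in this article.
concept_to_requirements = 3    # Concept start to Requirements sign-off
spec_freeze_to_tapeout = 6     # spec freeze to first tape out
tapeout_to_tested_samples = 5  # fabrication plus test of first samples
first_samples = (concept_to_requirements + spec_freeze_to_tapeout
                 + tapeout_to_tested_samples)          # late Proto / EVT build

validation = 2                 # silicon and system validation
respin_prep = 2                # getting ready for the second tape out
respin_fab = 5                 # fabrication of the re-spin
final_samples = first_samples + validation + respin_prep + respin_fab  # DVT build

# Fixed-cost roll-up, in millions of USD.
mask_sets = 2                  # assumes a good-case two full mask sets
mask_set_cost_musd = 12        # midpoint of the 10-14 MUSD guess
nre_low_musd, nre_high_musd = 2, 5
fixed_low = mask_sets * mask_set_cost_musd + nre_low_musd
fixed_high = mask_sets * mask_set_cost_musd + nre_high_musd

print(f"First samples: {first_samples} months; final silicon: {final_samples} months")
print(f"Fixed costs: {fixed_low} to {fixed_high} million USD")
```

Note how the re-spin alone adds 9 months and a second 12 MUSD mask set – the single biggest lever in both schedule and cost, and the reason to invest in thorough validation and a disciplined process.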

The estimates above assume the project is run like a tight ship. This can be hard to do generally, and especially when there are a lot of people and companies working together for the first time. Without experienced people and a good process to follow, the chances of executing on these timelines are greatly diminished. Managing cross-functional, multinational, multi-company teams is vital for this engagement.

As you can see this project is doable. It’s also a big investment with big risks that need to be mitigated. So the question is: what will be the price of bitcoin by the time you have your miners ready?

 

For more information contact us.

Disclaimers: All prices, schedules and details in this article are my best guesses, my opinions, and what I gather from multiple sources of information. I provide this for illustration and informational purposes only. Use at your own risk. As a project progresses through the phase sign offs all these details are committed/verified with suppliers.


Podcast EP8: A Look Inside Analog IP and Analog Bits
by Daniel Nenni on 02-19-2021 at 10:00 am

Dan and Mike are joined by Mahesh Tirupattur, executive vice president at Analog Bits. Mahesh discusses how he found his way to analog IP design and his long association with Analog Bits. Effective strategies for analog IP design and deployment are discussed, as well as leading-edge applications for analog IP. Mahesh also provides the back story on those Analog Bits gift bottles of wine that are seen each year around the holidays.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Mark Williams of Pulsic
by Daniel Nenni on 02-19-2021 at 6:00 am

Pulsic recently announced its new “freemium” product Animate Preview, and we had the chance to chat with Mark Williams, President and CEO of Pulsic.  Mark explained details of the product and the new business model that could change the direction of the traditional EDA business model structure.

What brought you to the semiconductor industry?
After attending Cardiff University in the UK (yes – a long time ago), I was fortunate to start work as a software engineer in a company called Racal-Redac. They were an early EDA company focused on PCB design. I was lucky enough to be working in the routing technology group – which pioneered the method of shape-based routing. Working in this area really fueled my appetite for CAD and algorithmic methods, so much so that I am still working in this area today.

Can you tell us about the origin of Pulsic?
I, along with the other founders, was keen to explore applying some of the technologies we had worked on to the field of semiconductors. The best way of doing this was to start our own company. We managed to secure some initial funds, along with our own, and set up shop in January 2000.  After 12 months, we were able to generate a small amount of revenue, and by the end of 2002 we had managed to close our first round of VC funding.  We still call ourselves a start-up – although having just turned 21, I’m not sure that term can really apply!

What markets does Pulsic address today?
Custom IC, Mixed Signal, Custom Digital, Analog, Memory Periphery

What are some of the customer challenges Pulsic addresses?
Pulsic works with leading analog and custom design teams around the world to solve their toughest custom IC design challenges. Our solutions specialize in automating custom layout tasks that traditionally would be completed manually. Our unique approach to automation delivers custom quality results with automated speed and reduced time to market.

What is the Pulsic competitive positioning?
Pulsic’s technology enables analog and custom IC design teams to accelerate their design flows, and still achieve the same high-quality results.  Our solutions allow design leaders to remove iterations, shrink project timelines, and reach time to market goals.

Can you tell us about the new Animate Preview and how customers will benefit?
Animate Preview gives engineers quick, easy, and accurate physical information about their analog circuit, in real-time, while developing the schematic. Animate Preview gives detailed layout visualizations of the circuit and helps to spot problems, while accurately estimating design size, transferring design intent, and creating a black-box layout. With Animate Preview an engineer can make better decisions earlier in the design flow and reduce iterations.

Animate Preview has an interesting new business model for an EDA tool, can you elaborate on how you see this new model resonating with customers?
The current EDA sales model is “enterprise sales”. This typically involves a long evaluation process, usually requiring an evaluation agreement. Traditional EDA tools often need a lot of setup and complicated installation, plus the EDA vendor typically provides an application engineer to the customer to assist with the evaluation. Many of the EDA tools are complex to use and often need onsite training. If “successful” the customer and vendor must then negotiate a purchase and support agreement. What does this mean?  It means the barriers to adopting new tools are extremely high. This traditional model stifles design flow innovation, which delays and prevents customers from realizing the time and cost benefits that design automation can bring to their flow.

The freemium model adopted by Pulsic for Animate Preview removes these barriers. Customers can download and run Animate Preview on their own data within minutes. There is no usage fee for Animate Preview. This allows customers to get the benefits without needing to build a business case and negotiate agreements. Pulsic also provides no cost online support for Animate Preview.

Pulsic does offer an upgraded version of Animate Preview called Animate Preview Plus. Users can see the value in Animate Preview on real data and be confident that the tool works for them in their design flow before choosing to upgrade.

Do you see this type of business model being adopted for other tools in the EDA industry?
It would be fantastic to see this model more widely adopted. The current business model of selling EDA software not only stifles innovation in customer flows but also innovation within the EDA industry.

However, to truly enable this model, the EDA tools must be designed to be simple to use; otherwise, users will get bogged down early in the process and likely give up.  Pulsic’s Animate technology was designed from the ground up to be simple to use, to enable this approach.

What do the next twelve months have in store for Pulsic?
It is going to be exciting to see Animate Preview roll out over the next 12 months. We have unique technology and, as mentioned, a unique way to get it out to the market – well, unique for EDA, that is. That, coupled with solid growth in 2020 for our Unity platform in what is a strong and growing custom digital market, all points to an interesting year ahead for Pulsic, our customers and partners.

https://www.pulsic.com/

Also Read:

CEO Interview: Sathyam Pattanam

CEO Interview: Pim Tuyls of Intrinsic ID

CEO Interview: Tuomas Hollman of Minima Processor


Synopsys is Enabling the Cloud Computing Revolution
by Mike Gianfagna on 02-18-2021 at 10:00 am

In 2019 I was involved in a major project to move all our engineering and financial systems to the cloud. We succeeded in this endeavor, but it wasn’t easy. We faced a lot of infrastructure challenges during our journey. The freedom from facility management and capital budgeting offered by the cloud was significant, however. If you look a bit deeper, there is a long list of challenges associated with building the massive compute infrastructure needed to fuel the cloud revolution. Synopsys recently published a White Paper on these challenges that is very informative. If you’re involved in technology for cloud computing, you need to read this White Paper to understand the challenges ahead of you and how Synopsys is enabling the cloud computing revolution.

If you talk to folks about moving to the cloud, you will get one of two responses:

I’m on the cloud now.

I don’t think the cloud is ready yet, but it is the future.

The second response is really one of not if, but when. The graphic at the top of this post from a Gartner survey supports this trend. This survey was done about two years ago; I believe the sentiment measured today would be stronger. The Synopsys White Paper shines a light on many of the technical challenges associated with the massive cloud build-out that is occurring around us. It’s good to step back and understand the big picture and this White Paper does that. It’s written by Scott Durrant, Cloud Segment Marketing Manager at Synopsys. Prior to Synopsys, Scott had a 24-year career at Intel and also spent time in the enterprise software market at places like McAfee. Scott understands the technology foundations of the cloud.

He begins with some interesting trends regarding cloud migration – growth rates, the wide deployment of AI and the expansion of edge computing for example.  There is a prediction from IDC regarding the size of the global datasphere in the coming years that will either excite or frighten you, maybe both. Scott then spends quite a bit of time examining six major functional areas in cloud computing – the underlying technology, the challenges and the market trends. You will learn a lot. Here is a brief summary of each area.

Compute Servers

Compute capacity, communication bandwidth and energy efficiency are discussed. The various memory technologies and interfaces are explored, along with standards such as Compute Express Link (CXL) and the requirements of high-speed SerDes channels. It’s interesting to see how HBM2E fits. Compute server market share is also presented. This is a surprisingly balanced market – I see no “900-pound Gorilla”. You can also check out a Synopsys webinar I covered here that discusses high-speed communication in the data center.

Network Infrastructure

The main focus here is network switching. The march toward 400G speeds is discussed, along with the various architectures to get there. The market share view here is different. There is indeed a 900-pound Gorilla.

Storage Systems

Next up is storage systems. The use of AI to optimize these systems is discussed. The impact of non-volatile memory technology is examined, along with cache coherency and the relevant standards. There is a strong player in this market, but not as strong as the network infrastructure market.

Visual Computing

This is something of a new category. It refers to the hardware and software needed to perform real-time video processing and analysis.  Think online collaboration, movie streaming, virtual reality, security and assisted driving. These applications demand some very high-end processing capability.

Edge Infrastructure

The edge is all about reducing latency. The amount of data collected by IoT devices is exploding. You can see estimates of the number of connected devices that will be deployed in the White Paper.  I won’t spoil the details, but I will say these devices are counted in billions. The need to process all this data with latency that fits the application is the challenge. This leads to essentially tiers of edge computing so that the right processing can be done with the right proximity to the application. A three-tier view of such a system is presented. All this challenges what we used to think a data center was.

Artificial Intelligence

Last, but certainly not least is a discussion of AI accelerators. These devices form the very backbone of the whole infrastructure. Some applications demand performance first with power as a secondary requirement while others demand low power first with a required level of performance. The technology and relevant standards in this area are discussed.

How Synopsys Fits

Throughout the discussion there are examples of where Synopsys IP fits into the various architectures presented. It should come as no surprise that Synopsys offers a substantial footprint here. I strongly recommend you download this White Paper to become acquainted with all the changes happening and how Synopsys is enabling the cloud computing revolution. The White Paper, entitled Addressing the Evolving Technology Needs of Cloud Data Centers with IP, can be downloaded here.

The views, thoughts, and opinions expressed in this blog belong solely to the author, and not to the author’s employer, organization, committee or any other group or individual.

Also Read:

Synopsys Delivers a Brief History of AI chips and Specialty AI IP

The Heart of Trust in the Cloud. Hardware Security IP

Synopsys is Extending CXL Applications with New IP


Arteris IP folds in Magillem. Perfect for SoC Integrators
by Bernard Murphy on 02-18-2021 at 6:00 am

Arteris IP and Magillem recently tied the knot, creating a merger of Network-on-Chip (NoC) and related Intellectual Property (IP) with a platform known for IP-XACT based SoC integration and related support. This is interesting to me because I’m familiar with products and people in both companies. I talked to Kurt Shuler, vice president of marketing, to understand the rationale behind the acquisition.

The value Arteris IP provides

Kurt put the top-level reasoning to me this way. First, Arteris IP has always been about making it easier to build systems-on-chip (SoCs): integrating IP you acquired from various providers, together with your own IP, to produce an SoC that will deliver top-notch performance, Quality of Service (QoS), power, safety and so on. In fact, as he said (and I agree), the on-chip interconnect provides the “knobs and dials” for engineers to define the architecture of an SoC. You put a lot of IP that anyone can buy around that interconnect, and you make it yours, partly through your special sauce and partly through how you optimize your architecture using your unique interconnect configuration. Making this possible, each IP interfaces to the bus through a network interface unit (NIU) adapter, so that all IPs are speaking a common language on the bus.

The value Magillem provides

Magillem has a closely related and complementary goal. They also aim to make it easy for you to build your SoC, but they are doing it at the data management level. You acquire all these IPs from multiple sources. Each has Register Transfer Level (RTL) source files, SystemC models, register maps, bus interfaces, controls and more, together with all the configurability most IPs offer. This could create a nightmare for an integrator if IP vendors couldn’t agree on a standard way to package all that information. Fortunately, all the main providers have already agreed on the IP-XACT standard. This means that at the integration level, when you’re connecting IPs and buses together, reconfiguring them and so on, the packaging information all speaks the same language. Sound familiar?
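IP-XACT is an XML packaging standard, so the integration data is machine-readable. As a hedged illustration, here is a stripped-down, hypothetical IP-XACT-style component parsed with Python’s standard library; real IP-XACT uses the Accellera (formerly SPIRIT) schema and carries far more detail (registers, ports, parameters).

```python
# Minimal sketch of machine-readable IP packaging in the spirit of IP-XACT.
# Real IP-XACT components use the Accellera XML schema with namespaces and
# much richer content; this trimmed-down example only shows why a common
# packaging format helps an integrator.
import xml.etree.ElementTree as ET

component_xml = """
<component>
  <name>uart_ip</name>
  <busInterfaces>
    <busInterface><name>apb_slave</name><busType>APB</busType></busInterface>
    <busInterface><name>irq_out</name><busType>interrupt</busType></busInterface>
  </busInterfaces>
</component>
"""

# Because every vendor packages the same way, one loop discovers every
# bus interface of every IP, whoever supplied it:
root = ET.fromstring(component_xml)
interfaces = {bi.findtext("name"): bi.findtext("busType")
              for bi in root.iter("busInterface")}
print(interfaces)  # {'apb_slave': 'APB', 'irq_out': 'interrupt'}
```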

These capabilities have existed since the early days of both companies. Over the years they have acquired many joint customers, some using Arteris IP and Magillem together, some using one or the other solution as best meets their immediate needs. In either case, they’re looking for low-friction development in their interconnect design and/or in SoC assembly, avoiding complex rework in multi-tier interconnects or in importing and updating IP with incompatible interfaces.

An obvious synergy

There is obvious synergy between these goals. Why not further reduce friction by having the NoC layer and the data management layer work hand-in-hand? Import a new revision of an IP, and the NoC is ready to reconfigure against that new package. Create a derivative with some IPs added and some removed, and the NoC can easily sync up with the new configuration. Or optimize the NoC to meet a QoS goal, and the corresponding IPs can automatically reconfigure.

Kurt tells me they will continue to support the standalone Magillem and Arteris IP products. They have already started engineering and architectural work to take advantage of the technical synergies between the two product lines. With so many top-tier shared customers, I expect we’ll start to see significant advantages and innovations in their joint capabilities soon.

You can read the press release HERE.

Also Read:

The Reality of ISO 26262 Interpretation. Experience Matters

Cache Coherence Everywhere may be Easier Than you Think

AI in Korea. Low-Key PR, Active Development


Techniques and Tools for Accelerating Low Power Design Simulations

by Kalar Rajendiran on 02-17-2021 at 10:00 am


I recently watched a webinar titled “How to accelerate power-aware simulation debug with Synopsys’ VC LP,” presented by Ashwani Kumar Dwivedi, senior applications engineer at Synopsys. Watching the webinar made me reminisce about how design verification has evolved over the years. A long time ago, static verification started gaining attention as a way to address some of the challenges of those times: the performance and memory capacity of computers couldn’t meet the turnaround time demands of simulating complex designs, and if design bugs were identified after a long simulation, debugging them was a gargantuan task.

Static verification tools were developed to pre-verify designs, catching design bugs early on and minimizing the need for elaborate dynamic simulations as a signoff requirement. The results were amazing, and there was even a push for signing off simple designs without running dynamic simulations at all, or running them with only a small set of test vectors just to exercise the complex portions of the design. Nowadays, with increased design complexity, SoC designs, the incorporation of extreme low power design techniques, and access to high-performance compute and memory capacity, the tendency may be to rely more heavily on dynamic simulation.

This webinar showcases how a judicious combination of static verification and dynamic simulation can provide immense benefits. The presenter provides lots of examples to highlight each area of benefit and quantifies the benefits using results from some case studies. I recommend you watch the webinar to get the complete details.

I’ll synthesize below what I gathered from watching the webinar.

First things first. The webinar sheds light on a lot more than its title may suggest. It addresses more than just accelerating simulation debugging: it covers the issues that lead to downstream bugs that show up during simulation, and discusses ways to prevent those bugs in the first place.

The presenter sets the stage (Figure 1) by listing the many challenges faced during low power functional verification, which fall into the bring-up, simulation and debug stages. The fewer iterations in each of these stages, the faster the turnaround time to design signoff.

Figure 1 (Source: Synopsys)

He highlights the importance of running a design-independent UPF check (DIUC). Many UPF issues with the potential to cause downstream problems can be caught independently of the design, and it is very important to fix these even before running Synopsys VC LP on the full design.
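To illustrate the idea behind a design-independent check (not how VC LP itself works), here is a toy Python lint over simplified UPF-style commands that flags a power domain never given a supply net. The command names mirror UPF, but the parsing and the rule are deliberately minimal.

```python
# Illustration only: a toy "design-independent" check over UPF-style power
# intent, flagging power domains that never receive a supply. This sketches
# the *idea* behind DIUC, not how Synopsys VC LP is implemented.

def check_domains(upf_cmds):
    """upf_cmds: list of simplified (command, domain) tuples."""
    domains, supplied = set(), set()
    for cmd, domain in upf_cmds:
        if cmd == "create_power_domain":
            domains.add(domain)
        elif cmd == "set_domain_supply_net":
            supplied.add(domain)
    # A domain with no supply is an error no matter what the RTL looks like,
    # so it can be reported before any design is even loaded:
    return sorted(domains - supplied)

upf = [
    ("create_power_domain", "PD_TOP"),
    ("create_power_domain", "PD_CPU"),
    ("set_domain_supply_net", "PD_TOP"),
]
print(check_domains(upf))  # ['PD_CPU']
```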

The presenter then discusses many examples of situations where custom design rules are needed and, if not implemented, will cause bigger debug challenges downstream. He walks through some of these examples and covers VC LP’s custom design rule writing and checking functionality, which is not possible with UPF alone.

He then addresses how LP architecture checks can be performed to detect both structural and functional issues. Debugging a simulation failure caused by X-propagation is very challenging and time-consuming, so the presenter goes into detail on the many sources of X-propagation that can be prevented from ever entering the simulation stage.
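As a toy illustration of why X-propagation is so painful, here is a simplified Python model of 4-state AND/OR gates: once an X enters the datapath it spreads to everything downstream unless the logic masks it, which is exactly what an isolation clamp does. This is a sketch, not full SystemVerilog 4-state semantics.

```python
# A toy model of X-propagation in 4-state logic: an unknown 'X' (e.g. from an
# unisolated powered-down block) is contagious unless a controlling value
# masks it. Simplified illustration, not full SystemVerilog semantics.
X = "x"

def and4(a, b):
    # 0 is a controlling value for AND: it dominates even an X input.
    if a == 0 or b == 0:
        return 0
    if a == X or b == X:
        return X
    return a & b

def or4(a, b):
    # 1 is a controlling value for OR: it dominates even an X input.
    if a == 1 or b == 1:
        return 1
    if a == X or b == X:
        return X
    return a | b

# An X from a powered-down block propagates through downstream logic...
assert or4(X, 0) == X
# ...but an isolation cell clamping the signal to 0 masks it at the source:
assert and4(X, 0) == 0  # isolation enable forces a known value
```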

The integrated flow between Synopsys VCS® and VC LP makes running VC LP an almost zero-effort task for the simulation team. The presenter advises that, whenever possible, the same team should run both static verification and simulation. This, combined with an integrated flow through Synopsys’ Verdi® HW SW Debug software (screenshot in Figure 2), enables easy tracing of debug situations to locate their root causes.

Figure 2 (Source: Synopsys)

By leveraging VC LP with VCS, a lot can be gained in time and cost savings. The presenter reports statistics from case studies in which a 38% reduction in simulation run time and an 81% reduction in design-related issues were achieved by running VC LP ahead of simulation.

Check out the full webinar to see detailed examples and how they are applicable to your particular role within the design cycle.

Also Read:

A New ML Application, in Formal Regressions

Change Management for Functional Safety

What Might the “1nm Node” Look Like?


SmartDV Shines in 2020!

by Daniel Nenni on 02-17-2021 at 6:00 am


After an incredible year for SemiWiki I spent much of January breaking down 2020 with our sponsoring companies. Some had a down year, some had a flat year, but quite a few had remarkable years. One standout company is SmartDV which recorded a 51% revenue increase, so the important question is why?

Reasons:
SmartDV covers one of the more explosive market segments and that is IP. SmartDV also covers one of the fastest growing EDA segments and that is verification. This is one of those 1+1=3 situations, absolutely.

SmartDV closed multi-year agreements and added 26 new customers across North America, Japan, Europe and Asia, a 126% increase in new customers.

SmartDV saw an increased licensing demand of close to 70% for Verification IP and a 300% increase in demand for Design IP solutions. So much of this can be attributed to its expanding Design and Verification IP portfolio. Engineering groups purchasing SmartDV’s IP range from well-funded startups and mid-size chip makers to well-known diversified companies.

Will that growth continue in 2021?
The acquisition of a portfolio of silicon-realized controller Design IP for MIPI and USB interfaces last year will help fuel growth this year. In fact, 2021 revenue to date has already reached 22% of 2020’s full year revenue. 2021’s strong growth so far comes from both design and verification IP in the 5G, consumer and HPC markets, though design IP is considerably stronger.

“Our 2021 revenue projections are strong in all regions and show increasing interest in our products, especially Design IP,” says Deepak Kumar Tala, SmartDV’s managing director. “As we grow our business and expand our offerings, our customer commitment and service will not waver, nor will our ability to rapidly customize Design and Verification IP for specific applications and customer requirements.”

So yes SmartDV is hiring:
SmartDV offers a unique opportunity for ambitious ASIC engineers. As an ASIC design and verification expert, you will have a range of projects to work on and the opportunity to work with the industry’s best talent.

At SmartDV you will work on very innovative technologies and have the chance to contribute to them. If you think you know the next big thing in verification, or you think you can solve the next big issue in verification, then SmartDV is the right place for you. Send your resume to jobs@smart-dv.com.

About SmartDV
SmartDV™ Technologies is the Proven and Trusted choice for Verification and Design IP with the best customer service from more than 250 experienced ASIC and SoC design and verification engineers. SmartDV offers high-quality standard protocol Design and Verification IP for simulation, emulation, field programmable gate array (FPGA) prototyping, post-silicon validation, formal property verification and RISC-V CPU verification. Any of its Design and Verification IP solutions can be rapidly customized to meet specific customer design needs. The result is Proven and Trusted Design and Verification IP used in hundreds of networking, storage, automotive, bus, MIPI and display chip projects throughout the global electronics industry. SmartDV is headquartered in Bangalore, India, with U.S. headquarters in San Jose, Calif.

Connect with SmartDV at:
Website: www.Smart-DV.com
Linkedin: https://www.linkedin.com/company/smartdv-technologies/about/
Twitter: @SmartDV

Also Read:

SmartDV Expands Its Design IP Portfolio with an Acquisition

CEO Interview: Deepak Kumar Tala of SmartDV

The Quiet Giant in Verification IP and More


Synopsys Delivers a Brief History of AI chips and Specialty AI IP

by Mike Gianfagna on 02-16-2021 at 10:00 am

Cloud AI Accelerator SoC

Let’s face it, AI is everywhere. From the cloud to the edge to your pocket, there is more and more embedded intelligence fueling efficiency and features. It’s sometimes hard to discern where human interaction ends and machine interaction begins. The technology that underlies all this is quite complex and daunting to understand. Sometimes low power is critical, sometimes it’s all about raw throughput, and it’s always about flexibility and programmability. Putting all this in perspective is difficult; just cataloging the disciplines and suppliers involved is hard. When I saw a recent White Paper from Synopsys on the topic of AI, SoCs and IP, I took notice. After reading the piece, I came away with a much better grasp of everything going on in the area of chips and AI. Indeed, Synopsys delivers a brief history of AI chips and specialty AI IP that covers a lot of ground.

If you are interested in AI, no matter what your job is, you’re going to want to read this White Paper. A link is coming; let’s review some of the topics covered first. The author is Ron Lowman, DesignWare IP Strategic Marketing Manager at Synopsys. Before Synopsys, Ron had long stints at Motorola and then Freescale. Ron clearly understands the space and does a great job explaining how all the pieces fit together. The breadth of application of AI is truly remarkable. The White Paper references a helpful Semico report and includes a telling quote from it: “some level of AI function in literally every type of silicon is strong and gaining momentum.” That says a lot.

The various types of silicon for AI are discussed, and two are of note: stand-alone accelerators that connect in some fashion to an applications processor, and application processors with neural network hardware acceleration added on-device. The market segments served by these kinds of chips are detailed. There are many, and they exhibit various levels of three key parameters, depending on the application:

  • Performance in TOPS
  • Performance in TOPS/Watt
  • Model compression

If you want to know how these parameters fit the various markets and what process technologies are used, download the White Paper. An interesting view of market growth is also provided that looks at expansion across high (>100 W), medium (5–100 W) and low (<5 W) power requirements. As you might imagine, the growth is extremely large. The split between the various power regimes may surprise you.
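As a small illustration of those power bands, the sketch below classifies some hypothetical accelerators using the thresholds quoted above and computes their TOPS/W; the device names and numbers are invented for illustration.

```python
# Classify hypothetical AI accelerators into the power bands the White Paper
# uses -- high (>100 W), medium (5-100 W), low (<5 W) -- and compute TOPS/W,
# the efficiency metric listed among the three key parameters.
# The devices and their numbers below are invented, not from the report.

def power_band(watts):
    if watts > 100:
        return "high"
    if watts >= 5:
        return "medium"
    return "low"

devices = {               # name: (TOPS, watts) -- illustrative only
    "cloud_trainer": (400.0, 300.0),
    "edge_module": (26.0, 15.0),
    "phone_npu": (4.0, 2.0),
}

for name, (tops, watts) in devices.items():
    print(f"{name}: {power_band(watts)} band, {tops / watts:.2f} TOPS/W")
```

Note that raw TOPS and TOPS/W rank the devices differently, which is why the White Paper treats them as separate parameters.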

The design challenges associated with AI chip design are then discussed. There are many, including balancing power/latency/performance, special memory architectures, data connectivity and security.  A useful set of core challenges that span all markets is provided:

  • Adding specialized processing capabilities that are much more efficient performing the necessary math such as matrix multiplications and dot products
  • Efficient memory access for processing of unique coefficients such as weights and activations needed for deep learning
  • Reliable, proven real-time interfaces for chip-to-chip, chip-to-cloud, sensor data, and accelerator-to-host connectivity
  • Protecting and securing data against hackers or data corruption
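To put a rough number on the first challenge: a dense layer is a matrix multiply costing about 2·M·N·K operations (one multiply and one add per accumulated term), which is why specialized MAC hardware pays off. A quick back-of-the-envelope sketch:

```python
# Why AI silicon adds specialized math units: multiplying an MxK matrix by a
# KxN matrix takes M*N dot products of length K, i.e. roughly 2*M*N*K
# operations. Even one layer runs to billions of operations, which MAC
# arrays handle far more efficiently than general-purpose cores.

def matmul_ops(m, n, k):
    """Operation count for an MxK by KxN matrix multiply."""
    return 2 * m * n * k  # one multiply + one add per accumulated term

# A single 1024x1024 by 1024x1024 matrix multiply:
ops = matmul_ops(1024, 1024, 1024)
print(f"{ops:,} ops")                       # 2,147,483,648 ops
# At 1 TOPS of sustained throughput, that is pure-compute time of:
print(f"{ops / 1e12 * 1e3:.2f} ms at 1 TOPS")  # 2.15 ms at 1 TOPS
```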

Typical architectures for various types of AI devices are then discussed. Applications here include cloud, edge and on-device. The figure at the top of this post is a representation of a cloud AI accelerator SoC.  The White Paper then outlines the DesignWare IP that is available for AI SoCs. The categories discussed include:

  • Specialized processing
  • Memory performance
  • Real-time data connectivity
  • Security

Synopsys has quite a large footprint here; there’s a lot to browse. I’ve covered Synopsys IP support for AI/ML in this prior post as well. The White Paper concludes with a very important discussion of the tools and support Synopsys provides to help you get your AI application out the door. This includes tools for software development, verification and benchmarking, as well as expertise and IP customization. You can also review several customer successes, including Tenstorrent, Black Sesame, Nvidia, Himax, Infineon and Orbecc. This White Paper has something for everyone. Synopsys really delivers a brief history of AI chips and specialty AI IP that covers a lot of ground. The White Paper title is “The Growing Market for Specialized Artificial Intelligence IP in SoCs”, and you can download your copy here.

 

The views, thoughts, and opinions expressed in this blog belong solely to the author, and not to the author’s employer, organization, committee or any other group or individual.

Also Read:

The Heart of Trust in the Cloud. Hardware Security IP

Synopsys is Extending CXL Applications with New IP

Webinar on Synopsys MIPI IP


Happy Birthday UVM! A Very Grown-Up 10-Year-Old

by Bernard Murphy on 02-16-2021 at 6:00 am


The UVM standard was first released by Accellera 10 years ago this month and is now by far the leading methodology for functionally verifying logic designs, especially at the block level. As I write, DVCon fast approaches, so I talked to Tom Fitzpatrick, Verification Technologist at Siemens EDA (Mentor Graphics), for a perspective. Tom has been intimately involved in committee activity on defining UVM, representing at different times each of multiple EDA vendors, so he knows whereof he speaks. We talked about how the standard emerged, where it is today, and where the committee is planning further growth.

History

Successful standards most often build on successful proprietary/pseudo-open implementations. Back in 2004 Mentor had AVM and Cadence had eRM. They agreed in principle to partner on a common standard, OVM, which took a little while to get off the ground as some tools were still completing their SystemVerilog support. When both were ready, they locked themselves in a hotel room in the Bay Area for a week. Agreeing this had to be based on SystemVerilog, they pulled in concepts from AVM and eRM, kicked around and refined ideas, converged on an OVM definition and donated it to Accellera.

Synopsys were already doing very nicely with VMM, but acknowledged that momentum was building behind OVM. They decided to jump in also, participating in the definition and donating the register abstraction language from VMM. To satisfy honor all round, this evolved standard was renamed to UVM. And that’s how UVM 1.0 was born, in February of 2011.

I asked Tom if he could share a little behind the scenes insight, for those of us who wonder why such collaborations don’t seem to produce definitions everywhere grounded in pure reason. He chuckled and admitted that some choices are made for technical reasons and some for political reasons. Even SystemVerilog has features defined to carry across some contributor-preferred methodologies. The same applies with the UVM definition. In fairness to vendors, I’m sure each is looking ahead to how they’re going to migrate their customers from legacy investments. Some level of accommodation is going to be needed for that reason alone. Then again, sometimes I’m sure it’s simply partisan pride in their own inventions. That’s an unavoidable reality in collaboration. Wise standards leaders know how to progress while keeping all participants reasonably happy.

Adoption

Mentor does regular blind surveys on verification through the Wilson Research Group, so they can say with confidence that almost 80% of ASIC design projects are using UVM, as are 50% of FPGA projects. (FPGAs are now so complex that verification – as opposed to burn and churn – has become very important, for prototyping and coverage reasons.) Tom acknowledged that most users focus on block-level verification. That said, big design houses have already built extensive libraries on top of this stack, which I take as a measure of a truly successful standard.

Of the 20% who aren’t using UVM, Tom felt that the majority who are still on legacy standards rely on cost-of-switching arguments – “what we have works and we don’t have time to change”. An argument we can all relate to, but then again… All IP vendors have switched to UVM because they have to service customers who demand UVM support. And a high percentage of job postings for verification engineers explicitly list UVM experience as a requirement. Momentum is heavily behind UVM. Being a holdout is only going to get lonelier.

Looking ahead

I’ve covered some of this recently in my update with Lu Dai of Accellera. A few points particularly struck me in the current discussion with Tom. First, as a UVM guy through and through, he acknowledged that UVM doesn’t do a good job of running software. You can create UVM sequences and virtual sequences that mimic software, but when it comes to replacing the UVM agent with an actual processor model, you have to rewrite the sequence as software. That’s where PSS can take over. UVM still reigns supreme at the block level, but not for modeling software at the system level. PSS tasks still connect to UVM underneath, and a continuing focus is on connections between PSS and UVM.

Another fascinating direction is Python implementations of UVM. Tom told me that he and Ray Salemi talked about how many new college hires know nothing about Verilog or VHDL, but they all know Python. Ray (who at Siemens supports mil-aero FPGA users) also mentioned that this class of users has a natural resistance to a standard based on SystemVerilog. Switch them to perceived neutral territory in Python and the resistance evaporates. I imagine the same may be true for FPGA applications in datacenters (reconfigurable networking, for example). Python could lower the barrier to adoption for CS grads hired to deal with these strange FPGA beasts.
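To give a flavor of what the sequence/driver pattern looks like in Python, here is a stdlib-only toy of the UVM handshake. It deliberately avoids the real pyuvm/cocotb APIs (pyuvm runs under cocotb against a simulator) and only mimics the structure: a sequence generates transactions and a driver pulls and “drives” them.

```python
# A stdlib-only toy of the UVM sequence/driver handshake, mimicking the
# pattern that projects like pyuvm bring to Python. Not the pyuvm API:
# a real pyuvm testbench runs under cocotb and drives simulator signals.
from queue import Queue

class Sequence:
    """Generates write transactions, like a UVM sequence's body()."""
    def __init__(self, seq_item_port):
        self.port = seq_item_port
    def body(self):
        for addr in (0x10, 0x14, 0x18):
            self.port.put({"addr": addr, "data": addr * 2})
        self.port.put(None)  # end-of-sequence marker

class Driver:
    """Pulls items and drives them, like a UVM driver's run_phase()."""
    def __init__(self, seq_item_port, bus):
        self.port, self.bus = seq_item_port, bus
    def run_phase(self):
        while (item := self.port.get()) is not None:
            self.bus[item["addr"]] = item["data"]  # stand-in for pin wiggling

port, bus = Queue(), {}
Sequence(port).body()
Driver(port, bus).run_phase()
print(bus)  # {16: 32, 20: 40, 24: 48}
```

The point of the pattern survives the simplification: the sequence knows nothing about how transactions reach the hardware, so stimulus and driving stay decoupled.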

Tom is an entertaining guy to talk to. He tells me they are going to be pretty busy at (virtual) DVCon this year. It sounds like they have already planned tutorials and updates on UVM. I haven’t seen the agenda yet, but I’m sure we will soon. Meantime, you can learn about Siemens EDA’s views on and support for UVM HERE.

Also Read:

The Five Pillars for Profitable Next Generation Electronic Systems

Probing UPF Dynamic Objects

Calibre DFM Adds Bidirectional DEF Integration