CEO interview: Graham Curren of Sondrel
by Daniel Nenni on 02-26-2021 at 6:00 am

It has been my pleasure to interview Graham Curren, CEO of Sondrel. A veteran of the Electronics Design industry, he founded Sondrel in 2002 to provide digital ASIC designs.

How did you aim to differentiate Sondrel when you started?
My view of the market was that there were a lot of small design companies and also huge in-house design teams. There was a gap that would only grow as chips became more complex, taking them out of the skill and price range of many companies. I founded Sondrel with the aim of being able to take on the design of a chip that would need teams of engineers working for a year or more, and provide some economy of scale.

How well did that work out?
Badly to start with, because it was at the end of the dotcom boom (or the start of the bust).  However, over time, things improved and, over the last 18 years, we’ve grown at an average of around 25% every year.  We had to grow, of course, to be able to reach the staffing levels so we could handle design contracts for big digital chips. In fact, we are one of the few companies in the world with the staff and expertise to tackle such enormous projects – apart from those with in-house design teams.

How big are we talking?
We regularly design large chips, such as a recent 500 square millimetre chip with over 30 billion transistors, 40 million flip-flops, and 23 thousand pads for I/O, power and ground.

That would be beyond the skills of many in-house design teams!
Exactly. That is why customers come to us: we have the experience of designing the architecture of big chips and solving the many associated challenges, such as the NoC (Network on Chip), timing between different parts of the chip and between different chiplets in a SiP design, and so on. Because we have solved these challenges on many different chips, we provide a low-risk means to get to market faster. Part of this success is that engineering teams have access to the rest of the 200-plus engineers working for Sondrel to brainstorm ideas and draw on additional skills and experience.

Do you only do big chips?
It is our speciality; however, we are doing some smaller devices now in response to customer demand, since we offer a full turnkey service from design through product validation to shipping silicon. This is generating real excitement among existing customers and bringing in new customers who want the security of a one-stop shop for their product creation in silicon.

Why are you offering turnkey services?
Defining and designing a chip for a customer means that we really get to know them and what they want their chip to do, and we are able to bring some of the advanced process techniques to the older-generation nodes. This can have a big impact on things like testing, minimising test time and field test failures, which are a very expensive but often underestimated part of the design process. For these customers we become a trusted partner.

The supply chain is a complex process to manage with many opportunities for mistakes that could result in costly delays. So many customers told us horror stories about the less advanced supply chain support that they had received that we decided to solve it by offering a full turnkey ASIC service to take responsibility for the whole process from the initial brief for the chip, designing the ASIC and then managing the supply chain process right to shipping the tested silicon chips to customers. We are the single point of contact managing everything giving opportunities to improve the chip as time moves on, for example by improving the test to catch more failures. This means that customers can now focus all their energies on their area of expertise and innovation, safe in the knowledge that we will deliver the silicon to turn their ideas into real products.

Any other design specialities apart from big designs?
Of course, all designs in the latest technologies have huge numbers of transistors and therefore can be considered big! Designs can have billions of transistors and yet the actual die size is measured in just square millimetres. That is why we often describe them as complex chips, and what we do as a company is make complexity simple.

We are very skilled at supporting the newer technologies – almost everything we do is below 28nm going right down to 5nm. Then there are design domains such as low power and high security where we have long established expertise. Functional safety and radiation hard are two examples of the new exciting areas that we are working on with clients.

All these different areas provide the variety that our designers really like as it enables them to learn new skills. That really attracts engineers to join us, as we are big enough to have the latest tools and innovations, yet small enough that individuals’ ideas are listened to and can make a real difference to projects.

Sondrel is very well known in Europe and Asia but much less so in the USA. Why is that?
Growing a company organically has to be done at the right speed. Too fast and you can’t manage it (and you run out of cash). Too slow and others take your market share. We are headquartered in the UK so naturally our first focus was Europe. Then Asia as we have offices there with design engineers so it was logical to have sales teams there too based in our largest market in the region – Xi’an, China.

2021 is the year that we are really starting to build our presence in the US starting with Silicon Valley. A year ago, we took on Dave Krishna as VP of Sales and Marketing, North America. He is a great guy and so well connected that he is finding projects despite the COVID lockdown. We have now welcomed others to the team and have expanded our sales and technical support locally both in southern California and on the east coast. We are looking to continue to grow this team rapidly with positions for engineers and business development experts.

How has COVID affected Sondrel?
Fortunately, hardly at all. Unlike many design companies that have teams of people in offices as the only way to operate, a couple of years ago we ensured that all our staff have the ability to work effectively from home with fast, secure internet connections to our servers. As lockdowns were imposed in the various countries round the world that we operate in, we simply closed that office and the staff started working from home the next day. It helps that we have more senior, experienced staff that don’t have to be closely supervised to do their work. As a result, not a single customer project has been delayed, and we are receiving enquiries from customers who have been let down by our rivals who could not cope with lockdowns.

That’s great forward planning!
The ability to work from home is just one of the things that we have in place to make Sondrel a great place to work. I’m a firm believer that happy staff work better and are committed to go that little bit extra if needed. It certainly showed by the way that everyone pulled together to cope with all the issues of the pandemic.

What else do you do for staff?
We have always had flexible working hours for good work/life balance, which helped when staff had to also become teachers for their children locked down at home. Another big thing is the Social Committee that every office has and organises events from paintball to family picnics and from bowling to flower arranging. These events help bond us into a big family. Even during lockdown, they were organising quiz nights to keep people in touch with one another. Despite many requests, we have yet to manage a virtual curry night!

Are you finding it hard to recruit in the lockdown?
Not really. It has become quite apparent how different we are to rivals in terms of working arrangements, some of whom are insisting that staff come into offices and not work from home. As a result, engineers are approaching us to join. Fortunately, we are recruiting in all our offices around the world – China, USA, UK, France, Morocco and India. Our target is to recruit over a hundred engineers this year.

Why so many?
As I mentioned, our systems were already in place for home working. Some rivals really struggled and let customers down. They are turning to us with projects as we have proven to be a very safe, reliable partner so 2021 is looking very good for us. Which is why we need many more engineers to service this dramatic increase in business.

https://www.sondrel.com/

Also Read:

CEO Interview: Mark Williams of Pulsic

CEO Interview: Sathyam Pattanam

CEO Interview: Pim Tuyls of Intrinsic ID


FD-SOI Offers Refreshing Performance and Flexibility for Mobile Applications
by Tom Simon on 02-25-2021 at 10:00 am

SoC designers often have to make a “red pill or blue pill” decision when it comes to selecting process technology. Usually, a choice has to be made between performance, power and area, with one being prioritized at the expense of the others. However, as is pointed out in a recent paper by Mixel and NXP, designers can have the best of both worlds – the red pill and the blue pill – if they consider using a fully depleted silicon on insulator (FD-SOI) process. This is especially true because FD-SOI comes with an extensive ecosystem in the form of tool support and IP availability.

FD-SOI has been a niche technology compared to bulk CMOS, but with new demands from applications like IoT for low standby power and analog & digital performance, it may be ripe for a renaissance. In the paper titled “It’s Time to Look at FD-SOI (Again)” by Eric Hong, senior director of engineering at Mixel, and Nik Jedrzejewski, product line manager at NXP Semiconductors, they make the point that FD-SOI offers unique features that make it an excellent choice for IoT.

Mixel and NXP have worked together to provide designers a superior alternative to bulk CMOS processes. The paper highlights the NXP i.MX 7ULP platform on 28nm FD-SOI, which leveraged Mixel’s MIPI D-PHY IP.

Let’s look at what the authors have to say about the performance characteristics of FD-SOI devices. FD-SOI places an oxide insulation layer under the entire transistor, which uses a thin channel with raised source and drain material. This construction reduces parasitics and short-channel effects, providing many interesting properties that can be exploited to improve chip design.

In bulk CMOS there is parasitic capacitance between the source, drain and the substrate. Also, in bulk CMOS two effects, gate-induced drain leakage (GIDL) and drain-induced barrier lowering (DIBL), play havoc with threshold voltage and high drain voltage turn-off, respectively. With FD-SOI, on the other hand, the buried oxide (BOX) layer shields the source and drain and allows for a thinner channel, improving the gate’s ability to turn off. The reduction of gate and parasitic capacitances means that peak and dynamic power are reduced, and transconductance and ft are improved.
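
To see why lower parasitic capacitance translates directly into a higher ft, the standard small-signal approximation for transit frequency is a useful reference (a textbook relation shown for illustration, not a figure from the Mixel/NXP paper):

```latex
% Transit frequency of a MOSFET: higher g_m and lower gate/parasitic
% capacitance both push f_T upward (illustrative textbook relation).
f_T \approx \frac{g_m}{2\pi\left(C_{gs} + C_{gd} + C_{par}\right)}
```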

FD-SOI Advantages

Perhaps the most interesting property of FD-SOI is that body-biasing of the substrate under the junction can easily be applied. This body biasing can even be modified dynamically depending on the operating characteristics needed at the time. To improve the already impressively low stand-by leakage, reverse body-biasing (RBB) can be applied. The authors report that leakage can be reduced by up to a factor of 50X with this technique.

By applying forward body-biasing (FBB), the threshold voltage can be lowered, allowing for improved performance and higher gate overdrive (Vdd-Vt). The authors mention cases where there has been more than a 60% performance improvement with a 1V supply. Mixel observed a power savings of 50% on a design at the fast corner (FF). The same design saw a 14% power reduction at the typical corner (TT). This was accomplished with a W/L reduction of 55% for the on-chip devices. Body biasing can also be used to compensate for die-to-die variations to improve yield.

The paper also includes details about how FD-SOI improves many of the performance characteristics of transistors when they are used in analog design. This alone makes reading the entire article worthwhile. FD-SOI offers not one, but many advantages for designs seeking to improve power, performance, area and yield.

NXP, working with Mixel, has assembled a very compelling platform for SoC design based on FD-SOI. The paper also includes a diagram showing the combined IP and blocks available for building application processors. There is a complete and well-established ecosystem ready to go for designers facing challenges with using bulk CMOS processes for their products. To get the full picture, read the paper here for more information.

Also Read:

New Processor Helps Move Inference to the Edge

Mixel Makes Major Move on MIPI D-PHY v2.5

MIPI gaining traction in vehicle ADAS and ADS


Key Requirements for Effective SoC Verification Management
by Kirankumar Karanam on 02-25-2021 at 6:00 am

Effective and efficient functional verification is one of the biggest hurdles for today’s large and complex system-on-chip (SoC) designs. The goal is to verify as close as possible to 100% of the design’s specified functionality before committing to the long and expensive tape-out process for application-specific integrated circuits (ASICs) and full custom chips. Field programmable gate arrays (FPGAs) avoid the fabrication step, but development teams still must verify as much as possible before starting debug in the bring-up lab. Of course, verification engineers want to use the fastest and most comprehensive engines, ranging from lint, static and formal analysis to simulation, emulation and FPGA prototyping.

However, leading-edge engines alone are not enough to meet the high demands for SoC verification. The engines must be linked together into a unified flow with common metrics and dashboards to assess the verification progress at every step and determine when to tape out the chip. The execution of all the engines in the flow must be managed in a way to minimize project time, engineering effort and compute resources. Verification management must span the entire length of the project, satisfying the needs of multiple types of teams involved. It must also provide high-level reports to project leaders to help them make critical decisions, including tape-out. This article presents the requirements for effective SoC verification management.

Phases of Verification

There are four significant phases of verification: planning, execution, analysis and closure. As shown in Figure 1, these phases can be viewed as linear across the project’s duration. In a real-world application, the verification team makes many iterative loops through these phases. Identification of verification holes during the analysis phase may lead to additional execution runs or even revisions to the verification plan. Similarly, insufficient results to declare verification closure and tape out may ripple back to earlier phases. This is a normal and healthy part of the flow, as verification plans and metrics are refined based upon detailed results.

A unified flow and intelligent decision-making throughout the project require tracking verification status and progress in all four phases. Also, there are multiple points where the verification flow must integrate with tools for project and lifecycle management, requirements tracking and management of cloud or grid resources. Figure 1 also shows the two major gaps that must be closed before tape-out. The coverage gap represents the verification targets identified during the planning phase but not yet reached during the execution phase. The plan gap refers to any features in the chip specification not yet included in the verification plan. The verification team must close both gaps as much as possible.

Planning Phase Requirements

The verification plan lies at the heart of functional verification. At one time this was a paper document listing the design features and the simulation tests to be written for each feature to verify its proper functionality. The verification engineers wrote the tests, ran them in simulation and debugged any failures. Once the tests passed, they were checked off in the document. When all tests passed, verification was deemed complete and project leaders considered taping out. This approach changed with the availability of constrained-random testbenches, in which there is not always a direct correlation between features and tests.

In an executable verification plan, engineers list the coverage targets that must be reached for each feature rather than explicit tests. They may use constraints to aim toward specific features and coverage targets with groups of tests, but it is a much more automated approach than traditional verification plans. Coverage targets may exist in the testbench or the design and may include code coverage, functional coverage and assertions. Verification planning includes determining which engine will be used for which features. Simulation remains the primary method. Still, formal analysis, emulation and other verification engines might be more appropriate for some targets.

As shown in Figure 2, the verification plan features must be linked directly to content (text, figures, tables, etc.) in the design specification. This reduces the chances of features being missed since specification reviews will reveal any gaps in the verification plan. The plan must be kept in sync with the specification so that the verification team can see whether goals are being met at every phase of the project. The planning phase springboards from this plan, defining milestones for progress toward 100% coverage. Coverage tends to grow quickly early in the project but converges slowly as deep features and corner cases are exercised. Results from all engines must be aggregated intelligently and then back annotated into the verification plan so that the team has a unified view of achieved coverage and overall verification progress.
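
As a minimal sketch of what an executable plan can look like as data (the feature names, engines and results below are invented for illustration and do not represent the Synopsys plan format), each entry links a feature to its specification section, its coverage targets and its chosen engine, and is back annotated with aggregated results after each regression:

```python
from dataclasses import dataclass, field

@dataclass
class PlanItem:
    feature: str                                 # feature under verification
    spec_ref: str                                # link back to the design specification
    engine: str                                  # "simulation", "formal", "emulation", ...
    targets: set = field(default_factory=set)    # coverage targets to reach
    covered: set = field(default_factory=set)    # back annotated after each regression

    @property
    def coverage(self):
        return 100.0 * len(self.covered & self.targets) / max(len(self.targets), 1)

# Hypothetical plan entries, for illustration only.
plan = [
    PlanItem("fifo_overflow", "spec 4.2", "simulation",
             targets={"cov_full", "cov_push_when_full", "cov_recover"}),
    PlanItem("arbiter_fairness", "spec 5.1", "formal",
             targets={"assert_no_starvation", "assert_grant_onehot"}),
]

# Back annotate aggregated results from one regression run.
results = {"fifo_overflow": {"cov_full", "cov_recover"},
           "arbiter_fairness": {"assert_grant_onehot"}}
for item in plan:
    item.covered |= results.get(item.feature, set())
    print(f"{item.feature:20s} [{item.spec_ref}] {item.coverage:5.1f}% via {item.engine}")
```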

Execution Phase Requirements

Once the initial plan is ready, the verification team must run many thousands of tests, often on multiple engines, trying to reach all the coverage targets in the plan. This must be highly automated to minimize the turnaround time (TAT) for each regression test run, which will occur many times over the project. Whenever a bug is fixed or additional functionality is added to the design, the regression is executed again. Regressions must also be re-run whenever changes are made to the verification environment. An efficient solution requires an execution management tool that initiates jobs for all verification engines, manages these jobs, monitors progress and tracks results.

As illustrated in Figure 3, ordering and scheduling of tests are critical to minimize the TAT. Tests must be distributed evenly across all the grid or cloud compute resources so that the regression run uses as much parallelism as possible. Also, the information from each regression run can be used to improve the performance of future runs. The analysis phase must be able to rank tests based on coverage achieved. This information is used by the execution management tool in the next run to skip any redundant tests that did not reach any new coverage targets. This makes regressions shorter and makes better use of compute resources, storage space and verification engine licenses, all of which are managed by the same tool.
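
The scheduling idea can be pictured with a simple longest-job-first heuristic over parallel compute slots, sketched below with made-up runtimes (a real execution manager also weighs licenses, ranking data and engine type). The slowest slot determines the regression TAT, so long tests are placed first and each test goes to the least-loaded slot:

```python
import heapq

def schedule(tests, slots):
    """Greedy longest-job-first placement of tests onto parallel compute slots."""
    heap = [(0.0, i) for i in range(slots)]      # (accumulated runtime, slot index)
    assignment = {i: [] for i in range(slots)}
    for name, runtime in sorted(tests.items(), key=lambda kv: -kv[1]):
        load, idx = heapq.heappop(heap)          # least-loaded slot so far
        assignment[idx].append(name)
        heapq.heappush(heap, (load + runtime, idx))
    tat = max(load for load, _ in heap)          # slowest slot sets the turnaround time
    return assignment, tat

# Hypothetical regression with estimated runtimes in minutes.
tests = {"t_smoke": 5, "t_dma_long": 240, "t_cache_stress": 180,
         "t_boot": 30, "t_pcie_cfg": 60, "t_random_soak": 300}
plan, tat = schedule(tests, slots=3)
print(plan, "TAT =", tat, "minutes")
```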

Coverage data is considered valid only for tests that complete successfully. Of course, many times tests will fail due to design bugs or errors in the verification environment. Most project teams execute regressions without debug options such as dump file generation to reduce regression runtime. The execution management tool must detect each failing test and automatically rerun it with appropriate debug features enabled. The engine must collect pass/fail results and coverage metrics so that they can be analyzed in the next phase of verification.

Analysis Phase Requirements

Once all tests have been run in the execution phase, the verification team must analyze the results to determine what to do next. This consists of two main tasks: debugging the failing tests and aggregating the passing tests’ coverage results. Access to an industry-leading debug solution is critical, including a graphical user interface (GUI) with a range of different views. These must include design and verification source code, hierarchy browsers, schematics, state machine diagrams and waveforms. It must be possible to cross-probe and navigate among the views easily, for example, selecting a signal in source code and seeing its history in a waveform. Once the reason for a test failure is determined, it is fixed and then verified in the next regression run.

Aggregating the coverage results can be tricky because different verification engines sometimes provide different results. Formal analysis can provide proofs of assertions and determine unreachable coverage targets, which simulation and emulation cannot. The verification team may also define custom coverage metrics that must be aggregated with the standard types of coverage. The aggregation step must be highly automated, producing a unified view of chip coverage. The coverage results and other data from the regression runs must be stored in a database and tracked over time, as shown in Figure 4. Verification trends are a crucial factor in assessing project status, projecting a completion date and making the critical tape-out decision.

The analysis phase must include test ranking by contribution to coverage to reduce the TAT for regression runs in the execution phase. Deeper analysis begins as the verification team examines the coverage holes that occur when the passing regression tests do not yet reach coverage targets. The verification engineers should study the unreachable targets identified by formal analysis carefully to ensure that design bugs are not preventing access to some of the design’s functionality. Aggregating unreachability results improves coverage metrics and prevents wasting regression time on targets that cannot be reached. This is another way to reduce execution TAT by making efficient use of cloud and grid resources.
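
Ranking by incremental contribution is essentially a greedy set-cover pass over per-test coverage data, as in the sketch below (test names and targets are hypothetical; production tools rank directly from the coverage database). Formally unreachable targets are excluded up front, and tests that add no new coverage become candidates to skip in the next regression:

```python
def rank_tests(test_cov, unreachable=frozenset()):
    """Greedily rank tests by how many new coverage targets each one adds."""
    remaining = set().union(*test_cov.values()) - set(unreachable)
    pool = dict(test_cov)
    ranked = []
    while pool and remaining:
        best = max(pool, key=lambda t: len(pool[t] & remaining))  # biggest incremental gain
        gain = pool.pop(best) & remaining
        if not gain:
            break
        ranked.append((best, gain))
        remaining -= gain
    return ranked, list(pool)        # tests left in the pool add nothing new

# Hypothetical per-test coverage results.
test_cov = {"t1": {"a", "b", "c"}, "t2": {"b", "c"}, "t3": {"c", "d"}, "t4": {"d"}}
ranked, redundant = rank_tests(test_cov, unreachable={"e"})
print("ranked:", [(t, sorted(g)) for t, g in ranked])
print("redundant:", redundant)
```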

Closure Phase Requirements

Once the verification engineers have eliminated coverage targets formally proven unreachable, as shown in Figure 5, they must consider the remaining targets not yet reached by the regression tests. Coverage closure is the process of reaching these targets or excluding them from consideration. If the verification engineers decide that a coverage target does not need to be reached, it can be added to an exclusions list. The analysis engine must support adaptive exclusions that persist even when there are small changes in the design. Reaching a coverage target may involve modifying existing tests, developing additional constrained-random tests, writing directed tests or running more formal analysis.

Modifying tests usually entails tweaking the constraints to bias the tests toward the unreached parts of the design. The results are more predictable if the closure phase supports a “what-if” analysis, in which the verification engineers can see what effect constraint changes will have on coverage. Ideally, this phase results in 100% coverage for all reachable targets. In practice, not all SoC projects can achieve full coverage within their schedules, so the team sets a lower goal that is typically well above 95%. The goal must be high enough to make managers feel confident in taping out and to minimize the chance of undetected bugs making it to silicon.

Tracking Process Requirements

Effective and efficient SoC verification requires the ability of the verification team and project management to observe results during any of the four phases and to track these results over time. Information from every regression run and analysis phase should be retained in a database so that both current status and trends over time can be viewed on-demand. The verification management flow must generate a wide variety of charts and reports tailored to the diverse needs of the various teams involved in the project.

Integration Process Requirements

There are many tools used in the SoC development process beyond the verification engines and the verification management solution. Information in the verification database must be exported to these other tools; other types of data may be imported. Many development standards mandate the use of a requirements management and tracking system. The features in the specification and verification plan must be linked to the project requirements. The verification management solution must also link to project management tools, product lifecycle tools and software to manage cloud and grid resources. Finally, verification engineers must be able to integrate with custom tools via an open database and an application programming interface (API).

Summary

Verification consumes a significant portion of the time and expense of an SoC development project. The planning, execution, analysis and closure phases must be automated to minimize resource requirements and reduce the project schedule. This article has presented many key requirements for these four phases and two critical processes that span the project. Synopsys has developed a verification management solution that meets all these requirements and is actively used on many advanced chip designs. For more information, download the white paper.

Also Read:

Techniques and Tools for Accelerating Low Power Design Simulations

A New ML Application, in Formal Regressions

Change Management for Functional Safety


HCL Expands Cloud Choices with a Comprehensive Guide to Azure Deployment
by Mike Gianfagna on 02-24-2021 at 10:00 am

HCL Compass is quite a powerful tool to accelerate project delivery and increase developer productivity. Last August I detailed a webinar about HCL Compass that will help you understand the benefits and impact of a tool like this. This technology falls into the category of DevOps, which aims to shorten the systems development life cycle and provide continuous delivery with high software quality. Scalability across the enterprise is a key factor for success here, so cloud migration is definitely a consideration. Recently, I detailed how HCL is assisting its users to get to the Amazon Elastic Compute Cloud. Vendor choice is definitely a good thing. I’m happy to report that HCL expands cloud choices with a comprehensive guide to Azure deployment.

Microsoft Azure, commonly referred to simply as Azure, is a major force in cloud computing. Moving any enterprise application to the cloud provides significant benefits, including:

  • Lower costs
  • Increased agility
  • Reliable global delivery

There are some specific impacts that HCL’s migration guide cites (a link is coming). Some of these are worth repeating:

  • Cost effectiveness: VMs (virtual machines) deployed in the cloud remove the capital expense of procuring and maintaining equipment as well as the expense of maintaining an on-premises data center. These VMs can host instances of HCL Compass
  • Scalability: Estimating data center capacity requirements is very difficult. Over-estimation leads to wasted money and idle resources. Under-estimation degrades the business’s ability to be responsive. Cloud computing resources can easily and quickly be scaled up or down to meet demand. Of particular interest regarding this point, Azure provides autoscaling that automatically increases or decreases the number of VM instances as needed
  • Availability: Azure, like other cloud providers, invests in redundant infrastructure, UPS systems, environmental controls, network carriers, power sources, etc. to ensure maximum uptime. Most enterprises simply cannot afford this kind of scale

The guide from HCL provides everything you need to plan your HCL Compass deployment or migration in Azure. There are a lot of items to consider, so having all this in one place is very useful. Here are just a few of the considerations that are addressed in the HCL guide:

Supported database platforms: Ensuring you are using the correct version of the required database software is key. Versions between on-premises and the cloud are discussed, along with recommendations on how to utilize an on-premises database for a cloud deployment. This latter discussion supports a hybrid environment.

Accessing the data: For a cloud deployment, the preferred method of data access is to utilize the HCL Compass web client. The specific browsers and versions to use are specified, along with the cautions and pitfalls of other approaches.

Requisite software: Along with Linux database versions, the required versions for installation software, Java, Windows and Linux are discussed.

Many other topics are explained in detail, including:

  • Performance and performance monitoring
  • Cross-server communication
  • Load balancing
  • SSL enablement
  • Single sign-on implementation
  • LDAP authentication
  • Multi-site implementation
  • EmailRelay considerations

A detailed discussion of migration considerations is also presented, along with sample implementation scenarios. One scenario treats HCL Compass and the database in Azure. The other treats HCL Compass in Azure with the database on-premises. All-in-all, this guide provides a complete roadmap to implement HCL Compass in Azure. I can tell you from first-hand experience that cloud migration can be challenging. Software is provisioned and managed differently in a cloud environment. As long as you understand those nuances, things go smoothly.

The migration guide provided by HCL helps you discover all those nuances. You can get your copy of this valuable guide here. Download it now and find out how HCL expands cloud choices with a comprehensive guide to Azure deployment.


The views, thoughts, and opinions expressed in this blog belong solely to the author, and not to the author’s employer, organization, committee or any other group or individual.


Finding Large Coverage Holes. Innovation in Verification
by Bernard Murphy on 02-24-2021 at 6:00 am

Is it possible to find and prioritize holes in coverage through AI-based analytics on coverage data? Paul Cunningham (GM, Verification at Cadence), Jim Hogan and I continue our series on research ideas. As always, feedback welcome.

The Innovation

This month’s pick is Using Machine Learning Clustering To Find Large Coverage Holes. This paper was presented at Machine Learning for CAD, 2020. The authors are from IBM Research, Haifa, Israel.

Improving coverage starts with knowing where you need to improve, especially where you may have significant holes. Getting to what you might call good scalar coverage (covered functions, statements, and the like) is fairly mechanical. Assertions provide a set of more complex checks on interdependencies, high value but necessarily low coverage. These authors look at cross-product checks, relationships between events, somewhat reminiscent of our first blog topic.

It is important first to understand what the authors mean by a cross-product coverage task. This might be, say, a <request, response> pair where <request> may be one of memory_read, memory_write, IO_read, IO_write and <response> may be ack, nack, retry, reject. Coverage is then over all feasible combinations.

Events are assumed related through naming. In their convention, reg_msr_data_read breaks into {reg,msr,data,read} which is close to {reg,msr,data,write}, not quite as close to {reg,pcr,data,write}. (You could easily adapt to different naming conventions.) From these groups they run K-means clustering analysis to group features (reg, msr, etc).

From these clusters, they build cross-product structures. This starts with sets of feature locations, counting from the start and end of an event. They then find anchors – the most commonly occurring, and therefore likely the most significant, features in events (reg, for example). The authors call the groups of features falling between these anchors dimensions. Though not quite explicit in the paper, it seems these provide a basis for probable event combinations which ought to be covered. From that they can then monitor covered and non-covered events. Better yet, they can provide very descriptive guidance on which combinations they expected to see covered but did not.
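
A hedged sketch of the core idea follows (the event names, cluster count and covered set are invented, and the paper’s full pipeline also uses non-negative matrix factorization and anchor detection, which are omitted here): split event names into word features, cluster them with K-means, then recombine the words seen at each position within a cluster and flag combinations that were never hit:

```python
from itertools import product
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical covered coverage events (underscore-delimited names).
covered = ["reg_msr_data_read", "reg_msr_data_write",
           "reg_pcr_data_read", "io_port_ctrl_read", "io_port_ctrl_write"]

# Bag-of-words over the underscore-separated tokens, then K-means clustering.
vec = CountVectorizer(tokenizer=lambda s: s.split("_"), token_pattern=None)
X = vec.fit_transform(covered)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Within each cluster, recombine per-position words to propose candidate events.
for cluster in set(labels):
    events = [covered[i].split("_") for i in range(len(covered)) if labels[i] == cluster]
    slots = [sorted({e[pos] for e in events}) for pos in range(len(events[0]))]
    candidates = {"_".join(combo) for combo in product(*slots)}
    print(f"cluster {cluster}: likely holes -> {sorted(candidates - set(covered))}")
```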

Paul’s view

The depth of this paper can be easy to miss on a quick read. It’s actually very thought provoking and draws on ML techniques in text document classification to help with verification. Very cool!

The verification methodology in this paper is based on “coverage events” represented as a concatenation of words, e.g. “reg_msr_data_read”. However, the paper would seem to be equally applicable to any meta-data in the form of semi-structured text strings – it could be debug messages for activity on a bus or even the names of requirements in a functional specification.

The heart of the paper is a set of algorithms that cluster similar coverage events into groups, break apart the concatenation of words and then intelligently re-combine the words to identify other concatenations that are similar but as yet un-covered events. They use a blend of K-means clustering, non-negative matrix factorization (NMF), and novel code to do this. The paper is a bit thin on specifics of how K-means and NMF are applied, but the essence of the overall method still shines through and the reported results are solid.

The more I think about this paper, the more the generality of their method intrigues me – especially the potential for it to find holes in a verification plan itself by classifying the names of functional requirements themselves. The approach could quite easily be added as an app to a couple of the coverage tools in our Cadence verification flow…a perfect opener for an intern project at Cadence – please reach out to me if you are reading this blog and interested.

Jim’s view

Paul made an interesting point (separately). At the block level people are already comfortable with functional coverage and randomization. But at the SoC level, engineers typically use directed tests and don’t have as good a concept of coverage. They want functional coverage at the SoC level, but it’s too much work.

Maybe this is a more efficient way to get a decent measure of coverage. If so, that would definitely be interesting. I see it as an enhancement to existing verification flows, not investable as a standalone company, but certainly something that would be interesting as a quick acquisition. This would follow a proof of concept of no more than a month or so – a quick yes/no.

My view

Learning techniques usually focus on pure behaviors. As Paul suggests, this method adds a semi-semantic dimension. It derives meaning from names, which I think is quite clever. Naturally that could lead to some false positives, but I think those should be easy to spot, leaving the signal-to-noise ratio quite manageable. It could be a nice augmentation, perhaps, to PSS/software-driven verification.

Also Read

2020 Retrospective. Innovation in Verification

ML plus formal for analog. Innovation in Verification

Cadence is Making Floorplanning Easier by Changing the Rules


Single HW/SW Bill of Material (BoM) Benefits System Development
by Daniel Payne on 02-23-2021 at 10:00 am

Most large electronics companies take a divide and conquer approach to projects, with clear division lines set between HW and SW engineers, so quite often the separate teams have distinct methodologies and ways to design, document, communicate and save a BoM. This division can lead to errors in the system development process, so what is a better approach?

To learn more, I attended a virtual event from Perforce, their Embedded Devops Summit 2021, which I blogged about last month. They had three concurrent tracks: Plan, Create, Verify. I chose the Create track, and listened to the presentation, Implementing a Unified HW/SW BoM to Reduce System Development. Vishal Moondhra was the presenter, and his company Methodics was acquired by Perforce in 2020.

IP is a term used by both HW and SW teams: it’s the abstraction of the data files that define an implementation, plus all of the meta-data that defines its state.


A SW IP example would be a USB device driver, and a HW IP example a SRAM block. The Bill of Materials (BoM) shows the versioned hierarchy of all IP used to define a system, both HW and SW.

The SW blocks are shown in Green, along with their version numbers, while IP2 and IP1 are HW blocks with their own version numbers and hierarchy. If you examine the hierarchy carefully there are two instances of IP13, one at version 8, and the other at version 9, so a version conflict has occurred and your BoM system needs to identify this so that consistency can be restored.
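
A minimal sketch of that conflict check (the IP names and versions here are hypothetical, and an IP lifecycle management system does far more): walk the versioned hierarchy once, record every version at which each IP is instantiated, and flag any IP that appears at more than one version:

```python
from collections import defaultdict

# Hypothetical platform BoM: each node is (ip_name, version, [children]).
bom = ("platform", 1, [
    ("IP2", 4, [("IP13", 8, [])]),
    ("IP1", 7, [("IP13", 9, []), ("usb_driver", 3, [])]),
])

def find_version_conflicts(node, seen=None):
    """Return IPs instantiated at more than one version anywhere in the BoM."""
    if seen is None:
        seen = defaultdict(set)
    name, version, children = node
    seen[name].add(version)
    for child in children:
        find_version_conflicts(child, seen)
    return {ip: vers for ip, vers in seen.items() if len(vers) > 1}

print(find_version_conflicts(bom))   # {'IP13': {8, 9}} -> restore consistency
```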

Your SW team may be using Git, while the HW team prefers to use Perforce, and a unified BoM allows this mix and match approach.

Meta-data comprises the dependencies, file permissions, design hierarchy, instance properties and usage for each IP, and the Perforce approach is that a single system is used for both traceability and reuse. Once again, any Data Management (DM) system can be used.

Being able to trace which SW driver applies to a specific HW block is fundamental to maintaining consistency during system design, and a unified BoM takes care of this compatibility requirement. Tracking patches and updates across HW and SW ensures that no mismatches creep into the system during design.

The Platform BoM knows all of the versions being used in both HW and SW BoMs, and it’s fully traceable so that you always know which SW component was delivered with each HW component.

If a SW driver is incompatible with a particular HW block, then you can quickly identify that occurrence with a unified Platform BoM. If your Platform was only a handful of HW and SW blocks, then a simple Excel spreadsheet would suffice to track dependencies, but modern SoC systems have thousands of HW IP blocks, and millions of lines of code, so having a unified BoM system with traceability is the better choice.

Sending out SW patches to your released Platform demands that proper testing has been validated, so keeping track of dependencies is paramount for success.

With IPLM a SW team can use the concept of private resources where all of the details are abstracted out, leaving behind instead just the results of a build process. It still provides consistency, traceability and dependencies. The presentation showed an example of using a private resource for an ARM SW stack.

Working as a team with a unified BoM breaks down the old silo approach that separated HW and SW designers from each other. Design metadata can be managed to ensure traceability, promote transparency across engineering teams, enable IP to be reused, all while separate DM systems continue to be used.

Summary

The Methodics IPLM implements this unified BoM approach, so that your engineering teams can focus on completing their system work, while knowing that their HW and SW IP is fully traceable with centralized management, and that their IP releases are not introducing bugs.

To watch the 25-minute archived presentation online, visit here.

Related Blogs


Achronix Demystifies FPGA Technology Migration
by Tom Simon on 02-23-2021 at 6:00 am

System designers who are switching to a new FPGA platform have a lot to think about. Naturally a change like this is usually done for good reasons, but there are always considerations regarding device configurations, interfaces and the tool chain to deal with. To help users who have decided to switch to their FPGA technology, Achronix offers an application note, titled “Migrating to Achronix FPGA Technology”, that explains the differences that may be encountered. As the application note states, Achronix FPGA technology will be familiar to anyone using another platform, but there will be some differences that will be useful to understand.

From my reading, what is interesting is how the application note offers information that could help someone who had not yet decided and was looking to see how the Achronix FPGA technology compares to other solutions. Indeed, the first section of the app note is useful for understanding which Achronix devices are good candidates as substitutions for the range of Intel and Xilinx devices. Kintex UltraScale, Kintex UltraScale+, Virtex UltraScale and Virtex UltraScale+, along with Arria 10 and Stratix 10 devices, are listed along with suitable Achronix offerings ranging from the ac7t750 up to the ac7t3000. Of course, there are many caveats, such as included memory or DSP blocks, etc.

Achronix hints early on in the app note about unique capabilities for AI/ML and network-on-chip (NoC) that their Speedster7t family offers that have no analog in the devices from Intel or Xilinx. Achronix includes a cross reference of core silicon components including lookup tables, logic arrays, distributed math functions, block memory, logic memory, DSP and PLLs. Because many of the core components are similar, few, if any, RTL modifications are required during porting.

Noticeable differences appear in the interface subsystems available on various FPGA technologies. Achronix has placed a priority on including hard interface subsystems within the I/O ring. This eliminates the need for soft IP interfaces that use up valuable FPGA fabric. This also makes interface integration and timing closure easier. Achronix Speedster7t offers higher performance in most interface categories, including up to 4 x 400G Ethernet, Gen5 x16 PCIe, and DDR4 with 72 bits at 3.2 Gbps/pin in hard IP. Their SerDes supports up to 112 Gbps. Lastly, they offer a unique and highly effective NoC.

Aside from physical specifications, a user contemplating migrating to Achronix will want to understand the supported tool flow. Unlike many other FPGA vendors, Achronix has opted to use Synopsys Synplify Pro in conjunction with their standalone ACE place and route tool. Synplify is already recognized as an industry leader and is used by many in place of the vendor-supplied options. Achronix users benefit from a mature tool flow that includes practically every feature found in any other flow. The app note includes a feature-by-feature comparison table that bears this out.

FPGA Migration Achronix Tool Flow

So what code changes are required typically when moving to the Achronix tool flow? The Achronix answer to this question in the app note is that few if any RTL changes should be needed. Synplify Pro will automatically handle inferred RLB features such as LUTs and DFFs. The same goes for memories and DSPs so long as their regular inferencing templates are used. RLBs have a dedicated ALU that Synplify will use for generating efficient math and counter operations. Achronix Speedster7t supports a rich combination of DSP, Block memories and shift registers. Wrappers are not needed for primitives such as I/O ports and global buffers. I/Os and buffers are managed by using constraints applied in the I/O designer tool flow.

The app note has extensive sections on memory and DSP instantiation. It also goes into detail on the topic of constraints. It is worth reading these sections in their entirety. Suffice to say that in most cases they are handled in a straightforward way that should make any porting related work fairly easy.

The end of the app note talks about two distinguishing features of the Achronix Speedster7t family, network-on-chip (NoC) support and their machine learning processor (MLP). The NoC relieves the designer of managing and coding high-speed data transfers between the FPGA fabric and the I/Os, without restriction. For instance, the NoC can even populate a GDDR6 or DDR4 memory from the PCIe subsystem without consuming any FPGA fabric resources and with no need to worry about timing closure. The app note includes a reference to the Achronix documentation for the Speedster7t Network on Chip User Guide.

The MLP is a powerful math block available on Speedster7t chips for use in AI/ML applications. Each MLP can have up to 32 multipliers, ranging from 3-bit integer to 24-bit floating point, supported natively in silicon. It is extremely useful for vector and matrix math. It offers integrated memories to optimize neural net operations. They cite an example of a Speedster7t device processing up to 8,600 images per second on ResNet-50.

The most interesting aspect of the Speedster7t family is that if users wish they can move their design to the Speedcore embedded FPGA fabric to incorporate it into their own SoC. Speedster7t is very competitive as a standalone FPGA device but as a Speedcore eFPGA integrated directly into an SoC, Achronix FPGA technology presents entirely new opportunities.

As I said at the outset, not only is the app note useful for guidance on migration to Speedster7t, it also shines a light on the competitive differences between Speedster7t and other FPGA technologies. The app note is available on the Achronix website.



Silicon Catalyst and mmTron are Helping to Make mmWave 5G a Reality
by Mike Gianfagna on 02-22-2021 at 10:00 am

Everyone is talking about 5G these days. The buildout is beginning. The newest iPhone supports the new 3GPP standard. Excitement is building. But there is a back story to all this. Silicon Catalyst recently added a new company called mmTron to their incubator program. These folks are millimeter wave experts and that turns out to be quite relevant for 5G. I had a chance to catch up with mmTron to explore this new addition to the Silicon Catalyst Incubator. What I discovered was there is a critical portion of the 5G buildout that has some serious challenges. Challenges that mmTron is uniquely positioned to solve. Read on to learn about the 5G back story and how mmTron’s innovative products will contribute to delivering on the promises of  5G. Silicon Catalyst and mmTron are helping to make mmWave 5G a reality.

The Team

First, a bit about the two folks I spoke with. Dr. Seyed Tabatabaei founded mmTron in 2020. He has substantial expertise in millimeter wave technology having led design efforts at MaCom, Agilent, Endwave and Teramics before founding mmTron. Seyed has assembled a team with exceptional skills in this specialized and critical area, drawing on experience from satellite and defense applications.

Glen Riley has recently joined mmTron as an advisor. Glen has a storied career in semiconductors that includes TI, AT&T and Qorvo. Glen has held several senior executive positions in general management, marketing, and sales. Glen currently is a board member and advisor for companies in the RF and optical markets. He previously knew Seyed as a customer and recently Silicon Catalyst put Glen back in touch with Seyed to become a key executive advisor.

The 5G Design Challenge

It turns out much of the 5G build out occurring today is based on sub-6GHz spectrum implementations which are similar to the currently deployed 4G network. The substantial benefits of 5G (e.g., very high bandwidth and very low latency) will be delivered in the millimeter wave spectrum (i.e., 24GHz to 80GHz). Verizon is deploying some of this technology today and the new iPhone 12 can support that technology. These efforts are just the beginning of the process and there is still much to do before the full benefits of 5G are realized.

At these frequencies the speed delivered to your handheld device will be equal to or greater than today’s broadband residential connections. This is where the challenges of transmission for 5G exist. You’ve probably heard about the need for sophisticated antenna systems that support beamforming to make all this work.

Beyond antenna systems, there is also a big challenge to deliver electronics for high bandwidth and high-power transmission systems at reasonable cost. Most millimeter wave electronics available today are based on military and satellite applications, where commercial cost pressures aren’t as severe. This is the area where mmTron delivers significant value over and above what is currently available from the existing RF / mmWave suppliers.

The mmTron Solution

Thanks to its proven, patented architecture, mmTron technology can support 5G millimeter wave applications requiring higher power and higher linearity better than other solutions. These key differentiating features mean fewer base stations and smaller phased array antenna systems are needed to deliver the same or greater capability. mmTron’s high linearity products complement existing lower power silicon-based beamformer chips on the market. mmTron estimates that 5G infrastructure costs can be reduced by 40 percent or more using its technology and that is big news.

mmTron’s outsourced fab and assembly/test ecosystem is already in place. RF silicon-on-insulator, gallium arsenide and gallium nitride technologies are used to deliver mmTron’s products. When compared to other large companies that support this market, mmTron represents a disruptive force in the industry as shown in the figure below.

Competitive Landscape

mmTron is currently in discussions with several very large infrastructure manufacturers. The company will soon close a funding round and tape out its first family of products for first delivery in late 2021. The addition of mmTron to the Silicon Catalyst incubator illustrates the breadth of the program from a technology and market perspective.

You can learn more about mmTron and its new and disruptive technology here. Whether you’re interested in learning more about their product offerings or contributing to the company’s growth, you can inquire here.  It looks like an exciting adventure as Silicon Catalyst and mmTron are helping to make 5G a reality.

The views, thoughts, and opinions expressed in this blog belong solely to the author, and not to the author’s employer, organization, committee or any other group or individual.

Also Read:

Silicon Catalyst’s Semi Industry Forum – All-Star Cast Didn’t Disappoint

Chip Startups are Succeeding with Silicon Catalyst and Partners Like Arm

Silicon Catalyst Hosts Semiconductor Industry Forum – A View to the Future … it’s about what’s next®


A Perfect Storm for GLOBALFOUNDRIES
by Daniel Nenni on 02-22-2021 at 6:00 am

GF has played some groundbreaking roles in the semiconductor ecosystem – the spinout of the AMD fabs and the acquisition of the IBM semiconductor division, just to name two. Another big one would be the GF Initial Public Offering, which may come as early as 2022.

When the IPO was first mentioned during a chat with GF CEO Tom Caulfield, I had my doubts. Today, however, it looks like a perfect storm for a GF IPO, with the ongoing semiconductor supply chain issues and the resulting automotive wafer shortages. There is a renewed push for more US based semiconductor manufacturing and other countries are considering the same. With the help of some serious political muscle, GF established a semiconductor manufacturing beachhead in Upstate NY (Fab 8) in 2009, and additional land rights have already been secured for future expansion.

Another strong sign of GF U.S. based semiconductor manufacturing prowess is the recent announcement with the U.S. Department of Defense:

U.S. Department of Defense Partners with GLOBALFOUNDRIES to Manufacture Secure Chips at Fab 8 in Upstate New York

To make a long story short, the IBM Semiconductor group acquired by GF was a longstanding trusted contract chip manufacturer to the U.S. Government through the Fishkill fab (IBM building 323). That relationship was maintained by GF and is now being expanded and transferred to Fab 8 in Malta. Fishkill Fab 10 was sold to ON Semiconductor, so this transfer is an important step for GF. The first chips under this agreement will arrive in 2023 and will be based on a 45nm SOI process. Here are the related quotes:

“GLOBALFOUNDRIES is a critical part of a domestic semiconductor manufacturing industry that is a requirement for our national security and economic competitiveness,” said Senate Majority Leader Chuck Schumer, who successfully passed new federal semiconductor manufacturing incentives in last year’s National Defense Authorization Act (NDAA). “I have long advocated for GLOBALFOUNDRIES as a key supplier of chips to our military and intelligence community, including pressing the new Secretary of Defense, Lloyd Austin, to further expand the Department of Defense’s business with GLOBALFOUNDRIES, which will help expand their manufacturing operations and create even more jobs in Malta.”

In a supporting statement from the U.S. Department of Defense, “This agreement with GLOBALFOUNDRIES is just one step the Department of Defense is taking to ensure the U.S. sustains the microelectronics manufacturing capability necessary for national and economic security. This is a pre-cursor to major efforts contemplated by the recently passed CHIPS for America Act, championed by Senator Charles Schumer, which will allow for the sustainment and on-shoring of U.S. microelectronics capability.”

“GLOBALFOUNDRIES thanks Senator Schumer for his leadership, his ongoing support of our industry, and his forward-looking perspective on the semiconductor supply chain,” said Tom Caulfield, CEO of GF. “We are proud to strengthen our longstanding partnership with the U.S. government, and extend this collaboration to produce a new supply of these important chips at our most advanced facility, Fab 8, in upstate New York. We are taking action and doing our part to ensure America has the manufacturing capability it needs, to meet the growing demand for U.S. made, advanced semiconductor chips for the nation’s most sensitive defense and aerospace applications.”

Given his current political clout, having the Senate Majority Leader as a champion is a tremendous asset for GF. And let’s not forget GF Fab 1 in Dresden. I was there when Angela Merkel toured the facility in 2015 and thought for sure there would be serious Government investment to strengthen the EU semiconductor supply chain. How times have changed. As I said, a perfect storm for GLOBALFOUNDRIES, absolutely.

Also Read:

Technology Optimization for Magnetoresistive RAM (STT-MRAM)

3DIC Design, Implementation, and (especially) Test

Designing Smarter, not Smaller AI Chips with GLOBALFOUNDRIES


Calculating the Maximum Density and Equivalent 2D Design Rule of 3D NAND Flash
by Fred Chen on 02-21-2021 at 10:00 am

I recently posted an insightful article [1] published in 2013 on the cost of 3D NAND Flash by Dr. Andrew Walker, which has since received over 10,000 views on LinkedIn. The highlight was the plot of cost vs. the number of layers showing a minimum cost for some layer number, dependent on the etch sidewall angle. In this article, the same underlying principles are used to calculate the effective 2D design rule for the 3D NAND array as well as to find the maximum density, both of which are strongly dependent on the sidewall angle of the holes etched through the multilayer stack. A previous article of mine focused on initial estimates of 2D vs. 3D wafer cost [2], but here we will go directly to the impact of 3D processing on the effective 2D density.

Model of 3D NAND cell
The 3D NAND cell has a typical arrangement as shown in Figure 1. The charge storage areas are circular rings containing at least a nitride layer sandwiched between two oxide layers. The rings encircle a silicon channel, typically also ring-shaped. The circular hole structures are taken to be located on a hexagonal close-packed lattice. If we take the minimum distance between holes to be equal to 1/4 the hole diameter [3], the density will be 2/sqrt(3) ~ 1.155 times that of the case where the same diameter holes are placed on a square lattice with the same minimum distance between holes. This proportionality will help in determining the equivalent 2D design rule later, i.e., the design rule of the 2D planar NAND array with the same density (assuming one bit per cell).


Figure 1. 3D NAND Flash unit cell.

3D NAND Hole Widening
The holes penetrating the layers of the 3D NAND stack ideally have vertical sidewalls. Realistically, the sidewall deviates from vertical by a fraction of a degree [4]. As a result, the bottom diameter of the hole will be smaller than the top diameter. It is the top diameter that therefore determines the cell pitch. The widening of the hole diameter from the bottom to the top can therefore be given by:

Top diameter – bottom diameter = cot(sidewall angle) * # layers * layer height.

The top diameter is used to determine the equivalent 2D design rule (E2DDR):

1.25^2 * sqrt(3)/2 * (top diameter)^2 = # layers * 4 * (E2DDR)^2, or

E2DDR ~ 0.58 * top diameter/sqrt(# layers)

This allows us to predict a maximum density or minimum 2D equivalent design rule for some number of layers, at a given sidewall angle. We can still expect the equivalent 2D design rule to reach 10 nm.


Figure 2. Top: Widening of diameter as stack height increases with number of layers. Bottom: Equivalent 2D design rule vs. number of cell layers, for different bottom diameters, at a sidewall taper angle of 89.7 deg. (visually estimated for Samsung’s 92-layer case from IWAPS 2019 presentation by J. Choe [4]).

Note that the maximum density or minimum equivalent design rule occurs for a smaller number of layers for a smaller diameter. This means taller holes would eventually need to be built up from stacking multilayers supporting shorter holes, with alignment required. It is a vertical analogy to the Litho-Etch-Litho-Etch… multipatterning used by foundries [5]. This is already a common practice among 3D NAND manufacturers [4], with only Samsung holding out so far, but considering it for seventh-generation V-NAND [6].
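
Putting the two relations above into a short script makes the trade-off concrete (the bottom diameter, layer height and sidewall angle below are example values chosen for illustration, not data from the article); the equivalent 2D design rule falls and then rises again as the layer count grows, giving the minimum discussed above:

```python
import math

def equivalent_2d_design_rule(bottom_diameter_nm, layers, layer_height_nm, sidewall_deg):
    """Apply the two relations above: hole widening, then the equivalent 2D design rule."""
    # Top diameter - bottom diameter = cot(sidewall angle) * # layers * layer height
    widening = layers * layer_height_nm / math.tan(math.radians(sidewall_deg))
    top_diameter = bottom_diameter_nm + widening
    # 1.25^2 * sqrt(3)/2 * (top diameter)^2 = # layers * 4 * (E2DDR)^2
    e2ddr = math.sqrt(1.25**2 * math.sqrt(3) / 2 / 4) * top_diameter / math.sqrt(layers)
    return top_diameter, e2ddr

# Example values only: 80 nm bottom hole, 60 nm per cell layer, 89.7 deg sidewall.
for layers in (32, 64, 128, 256, 512):
    top, e2ddr = equivalent_2d_design_rule(80, layers, 60, 89.7)
    print(f"{layers:4d} layers: top diameter {top:6.1f} nm, equivalent 2D rule {e2ddr:5.2f} nm")
```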

References

[1] A. J. Walker, IEEE Trans. Semicon. Mfg. 26, 619 (2013).

[2] F. Chen, Toshiba’s Cost Model for 3D NAND: https://www.linkedin.com/pulse/toshibas-cost-model-3d-nand-frederick-chen, also https://semiwiki.com/semiconductor-manufacturers/291971-toshiba-cost-model-for-3d-nand/

[3] A. Tilson and M. Strauss, Intl. Symp. Phys. & Failure Analysis Integ. Circ., 2018.

[4] Some figures for measurement are provided for example in J. Choe’s IWAPS 2019 presentation “Technology Views on 3D NAND Flash: Current and Future.” http://www.chipmanufacturing.org/1-A2-Short%20version%20for%20Publish_IWAPS%202019_Jeongdong%20Choe_TechInsights_3D%20NAND_F_s.pdf

[5] J. Huckabay et al., Proc. SPIE 6349, 634910 (2006).

[6] https://en.yna.co.kr/view/AEN20201201006900320

Related Lithography Posts