
Monday DAC IP Session: “PAM4 Enables 112G SerDes”

by Eric Esteve on 05-24-2019 at 1:00 pm

This session will open the DAC IP Track at 10:30 on Monday: “How PAM4 and DSP Enable 112G SerDes Design” in Room N264. I am very proud to chair this invited paper session, as it addresses a key piece of design, one that enables exchanging data at the highest possible rate. The link can be between two chips on the same board, called short reach (SR), within the same package, called very short reach (VSR), or across a backplane, where the interconnect is called long reach (LR). In every case, the goal is to send a large amount of data through serial link(s) after serialization (Ser) and recover it via deserialization (Des), hence the SerDes acronym.
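As a toy illustration of the Ser/Des split itself (a software sketch only; a real PHY does this in dedicated high-speed circuitry), a parallel word can be flattened into a serial bit stream and reassembled on the other side:

```python
def serialize(word: int, width: int) -> list[int]:
    """Flatten a parallel word into a serial bit stream, LSB first (Ser)."""
    return [(word >> i) & 1 for i in range(width)]

def deserialize(bits: list[int]) -> int:
    """Reassemble the serial bit stream into the parallel word (Des)."""
    word = 0
    for i, bit in enumerate(bits):
        word |= bit << i
    return word

word = 0xA5                       # one byte of parallel data
bits = serialize(word, 8)         # sent one bit at a time over the link
assert deserialize(bits) == word  # the receiver recovers the original word
```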

Initially used in telecom networking at the end of the 1990s, the SerDes was based on LVDS I/O running at 622 Mbps. At that time I was working at TI as the ASIC marketing lead for telecom customers in Europe, and TI won numerous designs thanks to this 622 Mbps LVDS SerDes. Fast-forward to 2008: an IP vendor like Snowbush was comfortable with PCI Express 2.0, based on a 5.0 Gbps link, and was developing a 10 Gbps SerDes to support 10G Ethernet. In 2019, several IP vendors have developed silicon-proven 112G SerDes. That is a 180x increase in data rate in 20 years!

If you compare this with the evolution of CPU frequency, from about 1 GHz in 1998 to less than 5 GHz today, you appreciate the feat achieved by SerDes architects and designers. As usual in this industry, the evolution is the result of hard work by multiple teams of mixed-signal designers. Nevertheless, it’s interesting to notice that, most often, innovation was driven by start-ups. Once the technology was proven and shipping, these start-ups were acquired: V-semiconductor by Intel in 2012, Nusemi by Cadence in 2017 and Silabtech by Synopsys in 2018.

We mention mixed-signal designers because SerDes design has been based on analog techniques since the beginning, even when equalization or pre-emphasis (both signal-processing functions) were used. Digital signal processing (DSP) was simply too power hungry to be a viable solution until the latest FinFET nodes (7 nm and below), where pure DSP techniques can finally be applied successfully, as Tony Pialis, CEO of Alphawave, will show in his paper. If you want to understand the state of the art in SerDes architecture, you will love this paper!

The invited paper from Rita Horner will explain how 56G and 112G PAM4 PHYs can be used to build 400G or 800G Ethernet interconnects at every level of the data center: intra-rack, inter-rack, room-to-room or regional. In all the papers, the move from NRZ to PAM4 modulation will be clearly described; it’s a good opportunity to learn from real experts.
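For readers who want the NRZ-to-PAM4 move in concrete terms: NRZ signals one bit per symbol using two voltage levels, while PAM4 signals two bits per symbol using four levels, doubling the data rate at the same symbol (baud) rate. A minimal sketch (the Gray-coded level mapping is a common convention, assumed here for illustration):

```python
# PAM4 maps 2 bits per symbol onto 4 amplitude levels. The Gray-coded
# mapping below is a common choice, assumed here for illustration.
PAM4_LEVELS = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}
PAM4_BITS = {v: k for k, v in PAM4_LEVELS.items()}

def pam4_encode(bits):
    """Pack a bit stream into PAM4 symbols, 2 bits per symbol."""
    assert len(bits) % 2 == 0
    return [PAM4_LEVELS[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

def pam4_decode(symbols):
    """Recover the bit stream from PAM4 symbols."""
    out = []
    for s in symbols:
        out.extend(PAM4_BITS[s])
    return out

bits = [1, 0, 1, 1, 0, 0, 0, 1]
symbols = pam4_encode(bits)              # 8 bits -> 4 symbols
assert pam4_decode(symbols) == bits
assert len(symbols) == len(bits) // 2    # same baud rate, 2x the data
```

This is why a 112 Gbps PAM4 link only needs a 56 Gbaud symbol rate, at the cost of a tighter eye between the four levels.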

If you are not convinced of the importance to the industry of SerDes-based, very high speed PHYs, just think about the incredible growth in demand for data bandwidth. The adoption of future applications is conditioned on fast access to the cloud for an ever-increasing amount of data. If you want your smartphone to benefit from 5G capability to download a video or run a specific application, you expect the wireless base station to scale and move data to the data center as fast as possible. Industrial IoT, IoT and automotive applications will also require moving large amounts of data to and from the data center, and inside the data center, as fast as possible.

The SerDes-based, very high speed PHY is a small piece of design, initially 100% analog, now relying on DSP techniques to reach 112 Gbps link speed. It’s also an essential piece of silicon supporting the 26% CAGR for Internet bandwidth (according to Cisco; see the picture above). The move to PAM4 PHY is the main enabler for 112 Gbps; if you want to know more about it, come to the DAC IP session on Monday, June 3 in Room N264.

From Eric Esteve of IPnest


400G Ethernet test chip tapes out at 7nm from eSilicon

by Tom Simon on 05-24-2019 at 10:00 am

Since the beginning of May, eSilicon has announced the tape-out of three TSMC 7nm test chips. The first of these, a 7nm 400G Ethernet gearbox/retimer design, caught my eye, and I followed up with Hugh Durdan, their vice president of strategy and products, to learn more about it. Rather than just respin their 56G SerDes, they decided to add the 112G SerDes and, at the same time, use this vehicle for several other objectives. The gearbox in this chip contains 8 lanes of 56G and 4 lanes of 112G, allowing it to handle 400G Ethernet traffic. Beyond showing that the SerDes work at 7nm, the configuration allows them to demonstrate a number of other things as well.

In our call, Hugh mentioned that they chose to work with Precise-ITC, which develops IP for Ethernet and Optical Transport Network (OTN). They saw this as an opportunity to combine eSilicon interface IP with third-party IP, go through the process of integration, and ensure that their StarDesigner 7nm flow was working as they expected. In essence this is a pipe cleaner of their SOC flow for 7nm.

Precise-ITC contributed a Forward Error Correction (FEC) block, a Media Access Controller (MAC) and the gearbox block. Having higher-level functionality offers increased confidence in each element of the test chip. Hugh pointed out that this is a chip that customers can actually use as they evaluate eSilicon’s offering. The chip features long-reach SerDes and uses only around 5 W for the entire gearbox.

Designing at 7nm is even more difficult than at previous nodes. Lithography requirements impose many new restrictions on the layout, which makes designing chips with analog content challenging. Another aspect that plays a critical role in the success of a chip like this is the packaging. Hugh told me that they used this opportunity to anticipate the complexity of designs with much higher lane counts by adding a more complex package design for some of the lanes. They also have the ability to inject noise during testing to ensure that the SerDes will perform in larger and more complex environments.

eSilicon is expecting to get silicon back in their lab in Q3 2019. They will make a test board that customers can use to put the SerDes and Ethernet-related IP through its paces. The 112G SerDes will open the door to continued development of Terabit Ethernet, which is becoming necessary with the explosion of data center throughput requirements.

eSilicon has consistently expended resources to stay at the leading edge of SOC technology. Their other May test chips included HBM and AI/ML designs all at 7nm. At the same time their partnerships will make life easier for their customers who are going to want to add advanced functionality to their designs. Test chips like this are a win for eSilicon, TSMC, Precise-ITC and their customers. We can eagerly await the return of silicon from this and their other test chips to learn more about how 7nm will perform in the wild. For more details, refer to the announcement on their website.


An evolution in FPGAs

by Tom Simon on 05-24-2019 at 5:00 am

Why does it seem like current FPGA devices work very much like the original telephone systems with exchanges where workers connected calls using cords and plugs? Achronix thinks it is now time to jettison Switch Blocks and adopt a new approach. Their motivation is to improve the suitability of FPGAs to machine learning applications, which means giving them more ASIC-like performance characteristics. There is, however, more to this than just updating how data is moved around on the chip.

Achronix has identified three aspects of FPGAs that need to be improved to make them the preferred choice for implementing machine learning applications. Naturally, they will need to retain their hallmark flexibility and adaptability. The three architectural requirements for efficient data acceleration are compute performance, data movement and memory hierarchy. Achronix took a step back and looked at each element in order to rethink how programmable logic should work in the age of machine learning. Their new Speedster 7t is the result. Their goal was to break the historical bottlenecks that have reduced FPGA efficiency. They call the result FPGA+.

Built on TSMC’s 7nm node, these new chips have several important innovations. Just as all our phone calls are now routed with packet technology, Achronix’s Speedster 7t uses a two-dimensional arrayed network on chip (NoC) to move data between the compute elements, memories and interfaces. The NoC is made up of a grid of master and slave Network Access Points (NAPs). Each row/column carries 256 bits at 2.0 Gbps per bit, a combined 512 Gbps. This puts device-level bandwidth in the range of 20 Tbps.

The NoC supports specific connection modes for transactions (AXI), Ethernet packets, unpacketed data streams and NAP to NAP for FPGA internal connections. One benefit of this is that the NoC can be used to preload data into memory from PCIe without involving the processing core. Another advantage is that the network structure removes pressure during placement to position connected logic units near each other, which was a major source of congestion and floor planning headaches.

The NoC also allows the Achronix Speedster 7t to support 400G operation. Instead of having to run a 1000-bit bus at 724 MHz, the Speedster 7t can support 4 parallel 256-bit buses running at 506 MHz to easily handle the throughput. This is especially useful when deep header inspection is required.
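A quick back-of-envelope check of the bandwidth figures quoted above (the numbers come from the announcement; the arithmetic is just bus width times clock or lane rate):

```python
# Per NoC row/column: 256 bit-lanes, each running at 2.0 Gbps
row_bw_gbps = 256 * 2.0
assert row_bw_gbps == 512.0

# 400G Ethernet: one very wide fabric bus vs. four NoC streams
wide_bus_gbps = 1000 * 0.724      # 1000-bit bus at 724 MHz
noc_gbps = 4 * 256 * 0.506        # four 256-bit buses at 506 MHz, ~518 Gbps
assert wide_bus_gbps > 400 and noc_gbps > 400

print(f"{noc_gbps:.0f} Gbps from four 256-bit buses at 506 MHz")
```

Both configurations clear the 400 Gbps line rate, but the NoC version does it at a much more comfortable clock frequency for 7nm timing closure.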

For peripheral interfaces, the approach Achronix uses is to offer a highly scalable SerDes that can run from 1 to 112 Gbps to support PCIe and Ethernet. They can include up to 72 of these per device. For Ethernet, they can run 4x 100 Gbps or 8x 50 Gbps. Lower-rate Ethernet connections are also supported for backward compatibility. They support PCIe Gen5, with up to 512 Gbps per port and two ports per device.

The real advantage of their architecture becomes apparent when we look at the compute architecture. Rather than have separate DSPs, LUTs and block memories, they have combined these into Machine Learning Processors (MLPs). This immediately frees up bandwidth on the FPGA routing. These three elements are used heavily together in machine learning applications, so combining them is a big advantage for their architecture.

AI and ML algorithms are all over the map in their need for mathematical precision. Sometimes large floating-point precision is used; in other cases there has been a move to low-precision integer. Google even has its own bfloat16 format. To handle this wide variety, Achronix has developed fracturable floating-point and integer MACs. The support for multiple number formats provides high utilization of MAC resources. The MLPs also include 72 Kbit RAM blocks, and memory and operand cascade capabilities.
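As an aside on bfloat16: it keeps float32’s 8-bit exponent (and therefore its dynamic range) but truncates the mantissa to 7 bits, which is why a conversion can be sketched as simply dropping the low 16 bits of the float32 encoding (truncation shown for simplicity; hardware typically rounds to nearest-even):

```python
import struct

def f32_to_bf16_bits(x: float) -> int:
    """Truncate an IEEE-754 float32 to a 16-bit bfloat16 pattern:
    sign + 8-bit exponent + top 7 mantissa bits. Hardware usually
    rounds to nearest-even; plain truncation is shown for simplicity."""
    (u,) = struct.unpack("<I", struct.pack("<f", x))
    return u >> 16

def bf16_bits_to_f32(b: int) -> float:
    """Widen a bfloat16 bit pattern back to float32 (exact)."""
    (x,) = struct.unpack("<f", struct.pack("<I", b << 16))
    return x

x = 3.14159
approx = bf16_bits_to_f32(f32_to_bf16_bits(x))
# bfloat16 keeps float32's dynamic range but only ~2-3 decimal digits
assert abs(approx - x) < 0.01
```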

For AI and ML applications, local memory is important, but so is system RAM. Achronix decided to use GDDR6 in their Speedster 7t family. It offers lower cost, easier and more flexible system design, and extremely high bandwidth. Of course, DDR4 can be used for less demanding storage needs as well. The use of GDDR6 allows each design to tune its memory configuration, rather than depending on memory configured in the same package as the programmable device. Speedster 7t supports up to 8 GDDR6 devices with a combined throughput of 4 Tbps.

There is a lot to digest in this announcement; it is worth looking over the whole thing. Looking back, this evolution will seem as obvious as how our old wired tabletop phones evolved into highly connected and integrated communications devices. The take-away is that this level of innovation will lead to unforeseen advances in end-product capabilities. According to the Achronix Speedster 7t announcement, their design tools are ready now and they will have a development board ready in Q4.


Mentor Excitement at 56th DAC!

by Daniel Nenni on 05-23-2019 at 10:00 am

Mentor continues to invest in conferences such as DAC, no matter the location, for which I am very grateful. They have a long list of activities this year but I wanted to point out my top three:

Wally Rhines has a talk in the DAC Pavilion, which is first on the list. Wally’s expert industry perspective is the result of tireless research and endless customer meetings around the world and should not be missed. Wally will also be signing “From Wild West to Modern Life” books (last on the activity list) at the Mentor booth Monday at 5:00pm and Tuesday at 10:00am. There is a limited supply, so I would get there early on either day. This is Wally’s first book and first book signing, and it is your chance to get a piece of EDA history.

FREE cappuccino from 9:00-2:00, and happy hour from 3:45-4:45 in the Mentor booth. Hobnob with semiconductor professionals from around the world in the most casual setting. A great place to start and end your 56th DAC exhibition floor experience. I hope to see you there.

The 5G Myth vs. Reality panel with Mentor, Synopsys and Cadence. Paul McLellan and I were chatting about 5G at the Samsung Foundry event last week. His AT&T iPhone said he had a 5G connection while my Verizon iPhone said 4G. Identical phones, different coverage. Marketing at its finest! This is an excellent opportunity to learn more about 5G from the semiconductor ecosystem, where electronics and 5G begin!

Activity List From Mentor Marketing:

The Design Automation Conference (DAC) is the premier conference for automated electronics design and verification technology. For 2019, DAC returns to sunny Las Vegas, Nevada at the Las Vegas Convention Center from June 2-5, 2019.

We’ve packed each day full of exciting activities and presentations featuring Mentor technical experts discussing the latest in cutting-edge design. You’ll find our experts in the conference program, in our booth (#334) hosting suite sessions and networking events, and in the Verification Academy booth (#617).

CONFERENCE PROGRAM

DAC Pavilion

Fundamental Shifts in the Electronics Ecosystem

MONDAY June 03, 10:30am – 11:15am | DAC Pavilion – Booth 871

Speaker: Wally Rhines – Mentor, a Siemens Business

Wally Rhines, CEO Emeritus of Mentor, a Siemens business, will examine major new market opportunities like AI/ML, automotive, 5G, etc. and how these markets will call for new design activity and the need for broader design tool innovation.  He will also explore whether we are heading into a period of stability after three years of disruption or if the revolution will continue.

Straight Talk with Tony Hemmelgarn, Siemens Digital Industries Software CEO

MONDAY June 03, 11:30am – 12:00pm | DAC Pavilion – Booth 871

Moderator: Ed Sperling

Myth vs. Reality: What 5G is Supposed to Be, And What it Will Take To Get There

TUESDAY June 04, 11:30am – 12:00pm | DAC Pavilion – Booth 871

5G is trumpeted as the big enabler, providing massive throughput and a massive upgrade path for the mobile and mobility markets. It is a way for cars, phones and other connected devices to stream massive amounts of data to the cloud and back again. But 5G signals don’t travel very far, and they don’t penetrate objects. Devices built for this market will require extreme power management so they aren’t searching for signals constantly. Parts of them will always be on, which has an impact on design and reliability. And some parts, such as the antenna arrays, cannot even be tested using conventional means.

Panelists:

Neill Mullinger – Mentor, a Siemens Business

Peter Zhang – Synopsys

Ian Dennison – Cadence Design Systems

Paper Presentations

MONDAY, June 03

4.4 Electromigration Signoff based on IR-drop Degradation Assessment

8.4 Local Layout Effect Aware Design Methodology for Performance Boost below 10nm FinFET Technology

TUESDAY, June 04

18.4 A Lightweight Hardware Architecture for IoT Encryption Algorithm

WEDNESDAY, June 05

66.4 Virtual Methodology For Performance and Power Analysis of AI/ML SoC Using Emulation

69.4 Efficient Verification of High-level Synthesis IP

Posters

123.21 Metric Driven Power Regression – A Methodology based Metric Driven Approach for Power Regressions

123.25 River Fishing: Leverage Simulation Coverage to Drive Formal Bug Hunting

124.2 Comprehensive Analog Layout Constraint Verification for Matching Devices

124.7 Enabling Exhaustive Reset Verification in Intel Design

124.16 A Smart RTL Linting Tool with Auto-correction

124.25 Configurable Multi-protocol AUTOSAR-based Secure Communication Accelerator

125.12 Faster PV Signoff Convergence in P&R using RTD

125.14 Hybrid Methodology- An Innovative Methodology for Hierarchical CDC Verification


125.17 Functional Safety on Arm CPUs

125.21 Tackling the Increasing Challenge of IR drop & EM Fails in Advanced Technologies with a Push Button Solution

EXHIBIT FLOOR

Mentor’s booth #334 is located on the west end of the exhibit floor. Check in daily for a host of technical sessions, networking events, panel discussions, a free cappuccino from 9:00-2:00, and happy hour from 3:45-4:45! You’ll also find Mentor verification experts in the Verification Academy booth (#617) for in-depth sessions on Portable Stimulus, UVM, and more.

Technical Sessions in the Mentor Booth

Each day, Mentor experts will be in the booth delivering technical sessions across:

  • AMS Verification
  • Analog/Mixed-Signal Verification
  • Design & Functional Verification
  • Digital Design & Implementation
  • IC Design & Test

You can view the complete list of technical sessions and pre-register here.

Expert Panel Discussions

Mentor experts will be moderating in-booth panels on both Monday and Tuesday directly following happy hour. Make sure to pick up a free glass of wine or beer before!

Design Smarter Innovations Faster using AI/ML and More with Mentor, a Siemens Business

MONDAY June 03, 4:00pm – 4:45pm | Mentor Booth #334

To enable our customers to deliver smarter innovations to market faster, Mentor, a Siemens business is actively delivering new solutions and use models that enable our customers to more readily develop AI-powered technologies. We are also integrating advanced machine learning algorithms into our existing tools to enable those tools to deliver better results faster. Come hear experts from across Mentor’s IC solutions portfolio describe what Mentor has to help customers deliver smarter IC innovations to market faster.

Panelists:

Ellie Burns, director of marketing, Calypto Systems Division

Vijay Chobisa, product marketing director, Mentor Emulation Division

Geir Eide, product marketing director, D2S Tessent Division

Amit Gupta, general manager, Solido, IC Verification Solutions Division

Steffen Schulze, vice president product management, D2S Calibre Marketing

Functional Safety in Isolation – Can Safety Be Collaborative?

TUESDAY June 04, 4:00pm – 4:45pm | Mentor Booth #334

As companies strive for greater levels of autonomy, more capability will be required of automotive ICs living at the edge, and the challenge of ensuring functional safety is exacerbated. The mass public trusts companies to deliver safe products to the market, but can the industry deliver on that promise given the demand for rapid innovation and complexity within the automotive ecosystem and supply chain? The scope of functional safety extends beyond the product boundaries to systems of interlinked devices representing the complete transportation network. From IP to automobile, each product plays a role in the overall functional safety of the transportation network. New paradigms and methodologies are required to ensure functional safety across all levels of the automotive ecosystem.

Panelists:

Yves Renard, Functional Safety Manager, ON Semi

Ghani Kanawati, Technical Director of Functional Safety, Arm

Matt Blazy-Winning, Functional Safety Director, NXP

Book Signing with Wally Rhines

Wally Rhines will be at the Mentor booth signing copies of his new book, “From Wild West to Modern Life”, Monday at 5:00pm and Tuesday at 10:00am.


Mentor Extends AI Footprint

by Bernard Murphy on 05-23-2019 at 8:00 am

Mentor are stepping up their game in AI/ML. They already had a well-established start through the Solido acquisition (Variation Designer and the ML Characterization Suite) and through Tessent Yield Insight. They have also made progress in prior releases toward supporting design for ML accelerators using Catapult HLS. Now they’ve stepped up to better round out (in my view) the Catapult support, and to introduce new ML-enabled capabilities in Calibre.

Joe Sawicki (who needs no introduction but for completeness is EVP of IC EDA at Mentor/Siemens) kicked off this announcement with some background on AI/ML, starting with a nice infographic on startups in AI (over 2000 with $27B in funding) and the AI chip landscape, estimated to be $195B by 2027. Will all or even most of the startups make it? Of course not – startups have a significant fallout rate in any field. But the practical stuff – computer vision, keyword/phrase recognition, localization and mapping for robots, among others – this is real, and has massive potential in many markets. Siemens particularly is very interested in the Industry 4.0 opportunities. Joe also noted that over half the fabless venture funding since 2012 has gone into AI startups, most of it relatively recently, which is even more impressive.

Joe sees challenges in this area in four domains: optimizing ML accelerator architectures, managing power, dealing with huge designs (up to reticle size) and dealing with high speed I/O for fast memory access and communication. This is driven in part by winner-take-all competition in these application domains, demanding differentiation in hardware architecture towards application-specific goals at the edge versus ultimate performance in data-centers (DCs). Edge nodes need ultra-low power for long battery life and DCs still need manageable power (no-one wants to scale-out power hogs). Performance requirements in DC ML accelerators demand deeply intermixed logic with multiple levels of embedded memory, driving massive die sizes and need for fast access to off-die working memory through interfaces such as HBM2 and GDDR6.

For Joe, this maps onto design needs in top-down optimization through HLS, higher capacity and faster, scalable tools everywhere (he noted particularly that he sees this domain driving huge growth in emulation, particularly for power verification), power budget management and need for a flexible AMS flow, especially at the edge where you need to optimize from sensors straight into inference engines (aka smart sensors).

Ellie Burns (marketing director for digital design implementation solutions) followed to describe the progress they have made in Catapult HLS for AI/ML design. I first wrote about what they are doing in this area about a year ago. The value proposition is pretty clear. HLS works well with neural-net architectures, and ML designers for edge applications want to functionally differentiate while also squeezing PPA as hard as they can (especially power, e.g. for wake-words/phrases), so fast analysis and verification through the HLS cycle is a great fit.

The Catapult team have been working with customers such as Chips and Media for a while, optimizing the architecture and flow, and they now have an updated release including (again in my view) some important advances. First, they now have a direct link to TensorFlow. Earlier you had to figure out yourself how to map a trained network (trained almost certainly on TensorFlow) to your Catapult input; do-able but not for the timid. Now that’s automated – a big step forward. Second, they now have HLS toolkits for four working AI applications. And finally, they provide an FPGA demonstrator kit compatible with a Xilinx UltraScale board. You can check out and adapt the reference design and prove out your ML recognition changes from an HDMI camera through to an HDMI display. The kit provides scripts to build and download your design to the board; board and Xilinx IP such as HDMI are not included.

Steffen Schulze (VP Calibre product management) followed to share the latest ML-driven release info for Calibre OPC and Calibre LFD. Almost anything in implementation is for me a natural for ML – analysis, optimization, accelerated time to closure – all good candidates for improvement through learning. Steffen said they have done a lot of infrastructure work under the Calibre hood, including adding APIs for the ML engine, seeing potential for other applications to also leverage this new capability.

On ML-enabled OPC, Steffen first presented an interesting trend graph: the predicted number of cores required to maintain a similar OPC turn-around time versus feature size. The example he cites, for critical-layer OPC on a 100 mm² die using EUV and multiple patterning, starts at around 10k cores for 7nm and trends more or less linearly to around 50k cores at 2nm.

He said that, as always, scalability of the tools helps but customers are looking for more performance and increased accuracy through algorithmic advances to cope with these significantly diffraction-challenged feature-sizes. As an interesting example of real-world application of ML in a critical application, they use the current OPC model to drive training, then in application to the real design they use one ML (inference) pass to get close followed by two traditional OPC passes to resolve inconsistencies and problems with unexpected configurations (configs not encountered in the training I assume). This approach is delivering 3X runtime reduction and better yet, improved edge placement error (a key metric in OPC accuracy).

For Calibre LFD (lithography-friendly design), let me start with a quick explanation since I’m certainly no expert in this area. The dummies guide, at least as this dummy understands it, is that processes and process variability today are so complex that the full range of possibly yield-limiting constructions can no longer be completely captured in design rule decks. The details that fall outside the scope of DRC rules require simulation to model potential differences between as-drawn and as-built lithographies. The purpose of Calibre LFD is to do that analysis, based on an LFD kit supplied by the foundry.

The ML-based flow here is fairly similar, starting with labeled training followed by inference on target designs. The training is designed to identify high-risk layout patterns, passing only these through for detailed simulation. This delivers 10-20X improvement in performance over full-chip simulation. Steffen also said that using this approach they have been able to find yield limiters that were not previously detected. Here also, ML delivers greatly increased throughput and higher accuracy.
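The filter-then-simulate pattern behind this flow can be sketched generically (a toy illustration with a stand-in risk scorer and pass/fail rule; nothing here reflects Calibre's actual engine or thresholds):

```python
# Toy illustration of the filter-then-simulate pattern: a cheap learned
# scorer screens layout patterns, and only high-risk patterns reach the
# expensive detailed simulation. Scorer and numbers are stand-ins.

def risk_score(pattern):
    # Stand-in for a trained classifier: treat denser patterns as riskier.
    return pattern["density"]

def simulate(pattern):
    # Stand-in for detailed lithography simulation (the expensive step);
    # here, extremely dense patterns "fail".
    return pattern["density"] > 0.9

patterns = [{"id": i, "density": i / 100} for i in range(100)]

THRESHOLD = 0.8
high_risk = [p for p in patterns if risk_score(p) >= THRESHOLD]
violations = [p["id"] for p in high_risk if simulate(p)]

# Only 20 of 100 patterns reach the expensive step, which is the source
# of the throughput gain in this kind of flow.
print(len(high_risk), violations)
```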

To learn more about what Mentor is doing in AI/ML in Catapult and Calibre, see them at DAC or click HERE and HERE.


Webinar Recap: IP Life Cycle Management and Traceability

by Daniel Payne on 05-22-2019 at 10:00 am

Earlier this month I attended a webinar organized by Methodics on the topic of IP life cycle management and traceability, with three presenters and a Q&A session at the end. I’ve worked with Michael Munsey before and he was the first presenter. Semiconductor IP creation and re-use is the foundation of all modern IC designs, and keeping track of hundreds to thousands of IP blocks along with design scripts and verification results becomes a complicated process very quickly, especially if you’re still using a manual approach.

Methodics provides products in three major areas:

  • IP Lifecycle Management – percipient, versic
  • Enterprise Data Storage Acceleration – warpstor
  • Scalable, Massively Parallel Job Execution – arrow

This webinar was focused on IP Lifecycle Management, aka IPLM. The company has been around since 2006, is headquartered in San Francisco, and is staffed with 32 professionals in the USA, Europe and Pacific Rim. Their tools work with popular vendors, like: Perforce, Siemens, Cadence, Synopsys, Jama and neo4j.

The percipient tool has five layers of abstraction, as shown below, giving engineers a single place to access all of the information about their IC design and to track release management and versions.

Rien Gahlsdorf then gave us a live demo of percipient, showing multiple ways to use the tool: command line, web, Cadence, API. Percipient is built on top of a DM system and manages both meta-data and releases. Users can recall all IPs for any earlier release, manage all file types, manage IP hierarchy, attach meta-data to an IP, view layout, view schematics, and review the design state. Making a new release can automatically trigger scripts: simulations run, requirements are checked.


Michael talked about functional safety (FuSa) and the challenges of complying with the ISO 26262 standard where traceability is a requirement from specification to design, verification and release.

The Methodics approach provides a link from requirements through design and verification, enabling compliance with the ISO 26262 standard. Rien gave a second demo showing requirements in Jama, making a release with Perforce, checking in the latest version of an IP, and how a release can trigger scripts to run.

To make ISO 26262 compliance easier, the percipient tool comes with IP templates that are configured with properties and attributes. There are survey and document templates that automate the collection of FuSa interview responses.

In the final demo Rien showed how the percipient tool helps capture all meta-data throughout the entire design process, and automates release management. Documentation is even automated with percipient, where each IP gets a chapter in the design documentation, along with all meta-data entered, hyperlinks added and property values shown.

Q&A

Q: There are other traceability products, such as IBM’s, so why percipient?

A: percipient allows management of IP, traceability, FuSa compliance, etc. We know how to build a design BOM. Verification, design and requirements are all traceable. This was built from the ground up to achieve this.

Q: Is it possible to capture document and code reviews?

A: Usually you would use Git’s native code-review features for that.

Q: How do you track a family of data?

A: In the demo we showed data types, there are no restrictions, you can have tables, graphs, charts, families of related data, hierarchical tables.

Q: Is percipient DM-agnostic?

A: Yes, we work with all the popular DM tools, plus we offer custom support as well: Git, Perforce, SharePoint, etc.

Summary

The percipient tool enables traceability from Design to Release to Verification. No more manual, error-prone engineering practices.

To view the webinar video archive visit here.



What are SOTIF and Fail-Operational and Does This Affect You?

by Bernard Murphy on 05-22-2019 at 7:00 am

Standards committees, the military and governmental organizations are drawn to acronyms as moths are drawn to a flame, though few of them seem overly concerned with the elegance or memorability of these handles. One such example is SOTIF – Safety of the Intended Function – more formally known as ISO/PAS 21448. This is a follow-on to the more familiar ISO 26262. While 26262 provides processes and definitions for safety standards of the hardware in electrical and electronic systems in automobiles, it has little to say about the high levels of automation that dominate debate around autonomous and semi-autonomous cars.


ISO 26262:2018 introduces the Emergency Operation Time Tolerance Interval to account for fail operational use cases

Safety at SAE Level 2 automation and above is no longer simply a function of the safety of the hardware. When systems-on-chip are running complex software stacks, quite often multiple stacks, and those systems use probabilistic AI accelerators depending not only on software but also on arrays of trained weights, then there's a lot more that can go wrong beyond the transient faults of 26262.

An SoC designer might assert, "Yes, these are problems, but they have nothing to do with my hardware. My responsibilities stop at ensuring that I meet the ISO 26262 requirements. All the rest is the responsibility of the system and software developers." But they'd be wrong, based on where SOTIF is heading. High levels of integration and non-deterministic compute elements (AI) in safety-critical applications raise a new question: how should the system respond when something goes wrong? And how do you test for this? Because inevitably something will go wrong.

When you’re zipping down a busy freeway at 70mph and a safety-critical function misbehaves, traditional corrective actions (e.g., reset the SoC) are far too clumsy and may even compound the danger. You need something the industry calls “fail operational”, an architecture in which the consequences of a failure can be safely mitigated, possibly with somewhat degraded support in a fallback state, allowing for the car to get to the side of the road and/or for the failing system to be restored to a working state. According to Kurt Shuler (Arteris VP of marketing and an ISO 26262 working group member), a good explanation of this concept is covered in ISO 26262:2018 Part 10 (chapter 12, clauses 12.1 to 12.3). The system-level details of how the car should handle failures of this type are decided by the auto OEMs (and perhaps tier 1s) and the consequences can reach all the way down into SoC design. Importantly, there are capabilities at the SoC-level that can be implemented to help enable fail operational.

Redundancy engineering is becoming more important in SoC functional safety mechanism design. In safety-critical areas of the design, you use two or more versions in parallel and compare the outputs. This is called static redundancy and sounds suspiciously like the TMR, lockstep computing and similar safety mechanisms you already use for ISO 26262. And to some extent it is. But as I understand it, there are a couple of key differences. First, these requirements are likely to come from the OEM (or Tier 1), over and above anything you plan to add for redundancy. And second, in a number of redundancy configurations (called dynamic redundancy), these independent systems are expected to self-check their correctness. For example, there is a redundancy style called "1 out of 2 with diagnostics" (1oo2d) in which two cores might each compute a result in parallel, and also each provide a self-check diagnostic. The comparison step can then feed forward a fail-operational result if both cores self-check positively and agree, or if one core self-checks positively and the other does not.
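A minimal sketch of that 1oo2d comparison step might look like the following. The `CoreResult` structure and function name are hypothetical illustrations; a real implementation would of course live in safety-qualified hardware or firmware, not Python.

```python
# Illustrative sketch of a "1 out of 2 with diagnostics" (1oo2d) voter.
# CoreResult and vote_1oo2d are hypothetical names for illustration only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CoreResult:
    value: int      # computed result from one redundant core
    diag_ok: bool   # outcome of that core's self-check diagnostic

def vote_1oo2d(a: CoreResult, b: CoreResult) -> Optional[int]:
    """Return a fail-operational result, or None if no trustworthy
    result is available and the system must enter a fallback state."""
    if a.diag_ok and b.diag_ok:
        # Both cores healthy: their results must also agree.
        return a.value if a.value == b.value else None
    if a.diag_ok:
        return a.value   # only core A self-checks positively
    if b.diag_ok:
        return b.value   # only core B self-checks positively
    return None          # neither core is trustworthy
```

The `None` case is where the system-level fail-operational policy (degraded mode, fallback state) takes over.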

Another major component of fail-operational support requires the ability to selectively reset/reboot subsystems in the SoC. A very realistic example in this context would be for a smart sensor SoC containing (among many subsystems) one or more vision subsystems (ISPs) and one or more machine learning (ML) subsystems. On a failure in one of these subsystems, rebooting selectively allows other object-recognition paths to continue working. This obviously requires a method to isolate individual subsystems so that the rest of the system can be insulated from anomalous behavior as the misbehaving subsystem resets. One SoC network-on-chip interconnect company, Arteris IP, is already pioneering technology to enable this.
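As a rough illustration of that isolate-reset-reintegrate sequence, here is a toy model. The `Subsystem` abstraction and `recover` function are invented for illustration and are not any vendor's actual API; in hardware the "fence" would be implemented in the NoC interconnect itself.

```python
# Toy model of selective subsystem recovery: fence the subsystem off at
# the interconnect, reset it, then reintegrate it, while the rest of the
# SoC keeps running. All names here are hypothetical illustrations.
class Subsystem:
    def __init__(self, name: str):
        self.name = name
        self.isolated = False   # True while fenced off from the NoC
        self.healthy = True     # False when the subsystem misbehaves

def recover(subsystem: Subsystem, active: list) -> None:
    """Reset one misbehaving subsystem without disturbing the others."""
    subsystem.isolated = True    # NoC fence: block traffic to/from it
    # ... in hardware, in-flight transactions are drained or rejected here ...
    subsystem.healthy = True     # reset/reboot restores a known-good state
    subsystem.isolated = False   # lift the fence and reintegrate
    # the other subsystems in `active` were never fenced or reset
    assert all(not s.isolated for s in active if s is not subsystem)
```

For the smart-sensor example in the text, a failed ML subsystem would go through `recover` while the ISP-based recognition path continues uninterrupted.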

Redundancy in ML subsystems as described above allows for one class of failures in recognition, but what about failures resulting from training problems? One idea that has been suggested (though I don't know if anyone has put it into practice) is to use asymmetric redundancy between two ML systems trained on different training sets. It will be interesting to see how that debate evolves.

The system interconnect is the ideal place to manage a lot of this functionality in the SoC, from "M out of N" redundancy (maybe with diagnostics) to isolation for selective reset/reboot. Arteris IP has made significant and well-respected investments in this area. You should check them out.


20 Questions with John East

by John East on 05-21-2019 at 10:00 am

In 1967 I was a grad student at Cal Berkeley.  In December of that year my wife-to-be and I got engaged to be married.  I was supposed to get my master's degree in December of '68, but once we worked out all the details we realized that I'd have to go to school over the summer of '68 and get the degree in September.  We were broke and couldn't afford the extra three months of expenses with little or no income.  Berkeley was set up with two biannual college recruiting programs, during which corporations would come in to interview prospective new hires.  One of the sessions was in April and one was in November. My original plan was to go through the college recruiting process in the November session, but the wedding plans changed that.  Since I wouldn't be ready to go to work until September, the April recruiting session seemed too early.  So — how to get a job?  That was the question. I wrote 40 or 50 letters. There was a college placement handbook that had the addresses of the important companies.  I wrote to them basically saying "Dear Sir, you don't know me but I want a job."  I got back just three responses, which was a little depressing.  One was from IBM, where I then interviewed and didn't get a job offer. One was from HP, where I interviewed and didn't get a job offer.  But one was from Fairchild. All I knew about them — or thought I knew — was that they made cameras.  (The official company name was Fairchild Camera.)  I interviewed with them and they were excited about me.  They brought me back a short while later to have lunch with two of their executives: Jerry Briggs, an HR guy (called Personnel in those days), and Gene Flath, a product line manager.  That was my first business lunch.  It turned out that in those days, business lunches involved large quantities of martinis and the like. They thought I was the greatest guy in the world (possibly because of the martinis) and they offered me a job on the spot. This was in roughly May of '68.
They knew that I wasn't going to be done until September so they said, "That's not a problem. We'll wait for you. You're going to be wonderful. In fact, you don't even need to communicate with us in the interim. The day before you're done, just call us and we'll make arrangements for you to come and everything will be great."  Then they both gave me their business cards. When I had one day to go — that is, I had just taken my last final and was ready to go to work — I picked up the phone and called Fairchild HR.  A lady answered the phone.  I asked, "Can I please speak to Jerry Briggs?"  The lady who answered the phone said, "There's no Jerry Briggs here and I've never even known a Jerry Briggs." We debated for a while and after a bit I asked her, "Well, how long have you been there?" It had been a couple of months. The department had turned over totally between the time of the offer in May and my call in September. I thought to myself, 'That's not a problem because I've got Gene Flath's card as well. I'll just call Gene Flath.'  So — I called Gene Flath's number and got a secretary. She said, "There's no Gene Flath here and there's never been a Gene Flath here in all of the time since I got here."  "Well, how long have you been here?" "A couple of months."  I asked myself, "What the heck is going on here?"  I needed that job! Fortunately, I had the offer letter.  I called the HR department again and told them so. Some guy who I had never met said, "Well, okay, we'll honor it.  Come in at 9:00 on Monday morning and we'll figure out what to do with you." What the heck was going on?  I found out later that Bob Noyce, the president of Fairchild, had just left to form Intel Corporation and taken a cadre of the really good people with him.  Sherman Fairchild (the chairman of the Fairchild board) had brought in Les Hogan from Motorola to be the new CEO. Hogan, then, brought in eight of his top lieutenants to help him run things.
They were referred to as 'Hogan's Heroes'.  (That was the name of a popular TV show in those days.)  Hogan's Heroes proceeded to fire about a third of the upper ranks. Roughly another third of the upper ranks said to themselves, "Well, wait a minute.  If I stay around they're going to fire me, too." So they left as well.  Everything had turned over in that four-month window.  When I got there nobody knew what was going on. Nobody knew who their boss was. What a zoo it was, but that made it almost seem like fun. One thing that was particularly noticeable was that in the other companies where I had interviewed the managers were 40-year-old or 50-year-old people.  Today that doesn't seem very old, does it?  But then it seemed ancient. "You mean I've got to be around twenty years before I can get a manager job? That's terrible."  At Fairchild the managers were kids. They were 25 and 26 years old. And not only were they kids, they were kids viewed as being experts in their field because the field was that young. I thought, "I'm going to like this place."

See the entire John East series HERE.

Biography

John East retired from Actel Corporation in November 2010 in conjunction with the transaction in which Actel was purchased by Microsemi Corporation.  He had served as the CEO of Actel for 22 years at the time of his retirement.  Previously, he was a senior vice president of AMD, where he was responsible for the Logic Products Group.  Prior to that, Mr. East held various engineering, marketing, and management positions at Raytheon Semiconductor and Fairchild Semiconductor.  In the past he has served on the boards of directors of Adaptec, Pericom and Zehntel (public companies), and MCC, Atrenta and Single Chip Systems (private companies).  He currently serves on the boards of directors of SPARK Microsystems — a Canadian start-up involved in high speed, low power radios — and Tortuga Logic — a Silicon Valley start-up involved in hardware security.  Additionally, he is presently an advisor to Silicon Catalyst — a Silicon Valley based incubator actively engaged in fostering semiconductor-based start-ups. Mr. East holds a BS degree in Electrical Engineering and an MBA, both from the University of California, Berkeley.  He has lived in Saratoga, California with his wife Pam for 46 years.

Breker on PSS and UVM

by Bernard Murphy on 05-21-2019 at 5:00 am

When PSS comes up, a lot of mainstream verification engineers are apt to get nervous. They worry that just as they're starting to get the hang of UVM, the ivory tower types are changing the rules of dynamic verification again and that they'll have to rebuild all that hard-won UVM learning in a new language. The PSS community and tool makers work hard to dispel this fear by partitioning the roles of these languages (e.g. UVM for IP and PSS for SoC sequences and randomization), but questions remain: what about the grey areas between these two, and what about legacy UVM development? Also important, just how portable is PSS? In principle it's perfectly portable, but how does that work in practice? If I develop for one vendor's platform, will it work compatibly with another vendor's?


Starting with the last question, Breker has a natural advantage as a neutral player among simulation platform providers, which should give them best access to validate their solution against each platform. It should also make it easier for them to validate equivalent behavior (within the scope of the PSS standard) across platforms – i.e. true portability.

That answers one concern, but what about the legacy question – how much do you have to reinvent, versus building on UVM you already have (and will continue to develop)? To set your mind at rest, Breker have released a white paper on just that topic. This elaborates in some detail how you can use the Breker tools to model and generate randomized sequences, and then generate the corresponding UVM sequences (along with automated scoreboard and coverage modeling), which can connect to your UVM testbench.

The PSS modeling stage works as you would expect; you define PSS models using either the DSL or C++, or through their graphical interface. The Breker TrekGen and Trek UVM tools read the model and synthesize tests based on flows and resource constraints (and even path constraints) defined in the model, then convert those to SystemVerilog tests. Generated score-boarding and coverage analysis will roll up test pass/fail, profiling and other details for analysis in the Breker debugger and/or a vendor-supplied debugger to guide further refinement in scenario modeling.
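To give a feel for what "synthesize tests based on flows and resource constraints" means, here is a deliberately tiny conceptual sketch: a hypothetical action-to-resource model from which we enumerate only the schedules that never double-book a resource. This is a toy to convey the idea, not how TrekGen actually works internally.

```python
# Toy illustration of PSS-style test synthesis: enumerate legal action
# schedules under a resource constraint. The action and resource names
# below are hypothetical, chosen only to mirror a simple SoC model.
from itertools import permutations

ACTIONS = {           # action -> the shared resource it claims
    "dma_copy": "dma",
    "uart_tx":  "uart",
    "uart_rx":  "uart",
}

def legal_schedules(slots: int = 2):
    """Yield orderings of all actions packed into time slots of `slots`
    actions each, keeping only schedules where no two actions in the
    same slot claim the same resource."""
    for order in permutations(ACTIONS):
        schedule = [order[i:i + slots] for i in range(0, len(order), slots)]
        if all(len({ACTIONS[a] for a in slot}) == len(slot)
               for slot in schedule):
            yield schedule
```

A real PSS tool does far more (flow/path constraints, data flow, cross-platform test generation), but the core idea is the same: only scenarios consistent with the declared model are ever generated, so every test is meaningful by construction.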

The point here is that with Breker a PSS-based testing flow works hand-in-hand with your native UVM environment as an easier way to define, randomize and check coverage on sequences. There's no need to start over on any test-building; this is an entirely complementary addition to your flow.

The white-paper points out a number of advantages to using this approach over using UVM-based sequence definition and control:

  • It’s a more efficient way to build useful sequence tests. Doing this in UVM is eminently possible, but it takes much more effort to build each sequence (or seed sequence with constraints) in a way that is guaranteed to connect meaningfully to real system behavior. PSS starts with expected system behavior, so each test is guaranteed to be meaningful. Which incidentally also accelerates test development and testing – always a desirable objective.
  • The PSS approach is white-box versus black-box. Figuring out how to drive a path test in UVM can be hard – very hard. PSS removes the need to think about these details in modeling and sequence generation, thanks to the internal smarts of the UVM generator.
  • The PSS-based flow makes it possible to define more complex tests with more (allowed) concurrency. VIP models (an alternative) run independently, making it difficult to build tests around system-level concurrency, whereas these are easy to generate in PSS and constrain based on available resources as defined in the model.
  • Score-boarding and checking is built-in – no extra effort on your part is required.
  • Coverage is also built-in and is directly related to coverage of paths through the model, a concept that you can’t easily define through traditional coverage metrics. This for me is one of the big motivators for PSS. Traditional coverage is more or less useless at the system level. The useful metric in this context is coverage of realistic sequences constrained by available resources.
  • You get automatic reusability both horizontally and vertically in design and verification flows – the “P” in PSS. Once you’ve defined models for a block, you can reuse those in higher-level subsystem or system testing; you can also reuse these models from simulation to emulation, FPGA prototyping, virtual prototyping and silicon testing.

There’s a lot more detail in the white paper that I won’t attempt to cover here, but I will add that Breker now includes a Portable Stimulus/UVM example with every software distribution. The design is a small representative SoC based on a couple of CPUs, a couple of UARTs, a DMAC and an AES encryption block. Most importantly, the white paper provides a detailed walk-through of the steps in integrating this into a UVM testbench and then executing these together. Well worth a read if you’ve been wondering about PSS but have been nervous about jumping in.

Also Read

Verification 3.0 Holds its First Innovation Summit

CEO Interview: Adnan Hamid of Breker Systems

Breker Verification Systems Unleashes the SystemUVM Initiative to Empower UVM Engineering


Uber Lyft and the Price of Greed

by Roger C. Lanctot on 05-20-2019 at 10:00 am

Uber and Lyft blew it with their initial public offerings over the past couple of weeks. Both companies opted to cash out founders and early investors while tossing pennies to long-supportive drivers in the form of bonuses. The short-term cash-out focus could sound the death knell of these market leaders.

Both companies extracted billions from investors in the process – but both companies also failed to attract the kind of money necessary to rebuild their business models and establish a path to long-term profitability. The one fundamental issue they neglected: driver compensation and churn.

The U.S. is the home market for both companies and the employment environment in the U.S. is becoming increasingly hostile to ride hailing. Not only are local municipalities, like New York, forcing transportation network companies (TNCs) like Uber and Lyft to treat their drivers as employees – the available pool of drivers is shrinking along with the unemployment rate.

Strategy Analytics estimates that both Uber and Lyft will have to continue to recruit drivers globally and locally. At the same time, both companies are likely to be reducing driver compensation to address their debt and cash-flow challenges. All of these vectors point to ongoing driver recruitment and compensation challenges – especially in a market where competing services – including Amazon’s delivery operations – promise more reliable compensation with benefits.

What Uber and Lyft failed to recognize was the need to treat drivers, and maybe even passengers, as owners in the companies. Both drivers and passengers have been suspending their disbelief for the past five-plus years to make the service viable if not profitable.

Both Uber and Lyft seemed to recognize the importance of their drivers, but both companies missed the opportunity to set up a stock option plan of some sort to convert existing, and eventually new, drivers into vested owners of the company. What has always been missing from the Uber/Lyft experience has been a recognition or feeling among drivers that they were actually “owners,” representatives of the company and the brand.

The lack of this feeling is manifest in the fact that most, though not all, drivers that I have met drive for both. The driving-for-both proposition feeds the overall gaming-the-system mentality of the Uber/Lyft experience – also manifest in rides periodically cancelled by drivers who may not want to drive to your preferred destination.

Uber and Lyft (and Yandex and Ola and DiDi etc.) have overcome the supply and demand challenges of getting rides to drivers on the fly in a reasonably reliable manner. But they have failed to create any loyalty among drivers or passengers – leaving the door open to any new entrant seeking to offer a superior experience.

One such player, Bounce, is offering an ownership experience for drivers and passengers – though the company is only active in a handful of markets. The struggle of starting up in a market saturated and dominated by huge competitors – in this case Lyft and Uber – is clear and daunting and is captured in this review by Will Preston for TheRideShareGuy:

https://therideshareguy.com/what-is-it-like-to-drive-for-bounce/

The most important takeaway from this review, for me, is the inclination of Uber and Lyft drivers and passengers, both, to complain about the experience. Bounce seeks to address the flaws in the system by creating a share vesting program for both drivers and passengers built around recruitment referrals for both. Bounce also seeks to leverage relationships with event operators to provide queue-based post-event transportation.

The Bounce model and strategy are detailed here: https://therideshareguy.com/bounce-rideshare/

The long-term prospects for Bounce are unclear to me. The long-term prospects for Uber and Lyft are even less clear in view of their decision not to enter into any longer-term relationship with their drivers – even though some of those drivers have entered into long-term multi-year relationships with Uber and Lyft.

By using the IPOs to reward investors and founders and shun drivers, Uber and Lyft have signed their own death warrants. The IPOs have all the earmarks of an exit strategy, not a survival plan. I may continue to use the services where and when they are convenient, but the IPOs have left a sour smell in my nose and a bad taste in my mouth. It’s telling that drivers did not celebrate the IPOs – if they even knew they were taking place. Instead, they went on strike.

These IPOs were not clever or game changing. They were nothing less than craven and mark the beginning of the end of ride hailing. It is only a matter of time. The current business model will not stand.