
Webinar: Increase Layout Team Productivity with SkillCAD

by Daniel Nenni on 12-18-2020 at 10:00 am


The Cadence Virtuoso Design System has been one of the premier Integrated Circuit design systems for many years and is used by most major semiconductor companies.  While it is powerful and versatile, it is often not optimized for certain complex, repetitive and time-consuming layout design tasks.

The founder and president of SkillCAD, Pengwei Qian, saw that layout tasks in Cadence often required many mouse clicks, even for simple operations.  And as the number of clicks and the amount of human interaction increased, so did the possibility of design errors.  He felt that if both complex and tedious, repetitive layout tasks could be simplified and automated, layout productivity would increase while human errors and costly design rework would be greatly reduced.  “Correct and Optimized by Construction” was the goal, but without sacrificing the layout designer’s control.

Originally containing a handful of commands to help with common layout tasks, SkillCAD has evolved to over 100 functions, including the powerful, patented V-Editor tools; metal routing tools that let the designer route one or fifty metal lines with equal ease; pin placement tools that place hundreds of pins in a matter of seconds; and many other tools that greatly improve a layout design team’s productivity.

Watch Replay HERE

What you will learn in the webinar:

Whatever layout design approach is used, bottom-up, top-down, or any combination of approaches, the power and versatility of the SkillCAD tools will shorten layout cycle times.

  • The powerful pin placement and modification tools can cut the placement of hundreds of pins from hours to a matter of minutes.
  • The many metal routing and bus routing tools make routing and editing metal routes easy and efficient by…
  • Running wide power and ground metals and creating mesh ground metal planes with the slotted metal tools is as easy as routing a single metal wire.
  • The dummy fill and density checking tools make generating matched dummy metals over critical circuit areas, and quickly checking density percentages in circuit blocks, as easy as specifying the layers and identifying a circuit region.

In addition to these commonly used tools, SkillCAD also provides powerful tools for generating and editing guard rings around devices, circuit elements, and even entire circuit blocks.

  • There are tools for generating shielding around sensitive metal signals, and even for creating the complex twisted, shielded metal structures that are common for sensitive RF (radio frequency) transmission lines.
  • SkillCAD also includes tools for measuring circuit data, comparison viewing of old versus new circuit data, viewing cross sections of MOS devices, and many other tools not mentioned here.

Watch Replay HERE

About SkillCAD
SkillCAD was founded in 2007 to enhance the productivity of the Cadence Virtuoso layout design flow. Cadence Virtuoso + SkillCAD have become the industry-standard layout environment for full-custom analog, RF, and mixed-signal designs. Over 80% of the major analog and mixed-signal (AMS) companies use SkillCAD. SkillCAD seamlessly integrates with Cadence Virtuoso Layout L, XL and GXL and supports IC5, IC6, IC12, IC18. SkillCAD has been a Cadence Connection Partner since 2008.

Also Read:

SkillCAD Adds Powerful Editing Commands to Virtuoso

SkillCAD Layout Automation Suite has Over 120 Commands Backed by 60 Customers

CEO Interview: Pengwei Qian of SkillCAD


Silicon Catalyst’s Semi Industry Forum – All-Star Cast Didn’t Disappoint

by Mike Gianfagna on 12-18-2020 at 10:00 am


A few weeks ago I wrote about an upcoming event Silicon Catalyst was hosting, the Semiconductor Industry Forum – A View to the Future. I mentioned a high-profile group of presenters: Don Clark, Contributing Journalist, New York Times as moderator;  Mark Edelstone, Chairman of Global Semiconductor Investment Banking, Morgan Stanley as a panelist; Ann Kim, Managing Director, Frontier Tech, Silicon Valley Bank & Kauffman Fellow as a panelist; Jodi Shelton, Co-Founder and CEO, Global Semiconductor Alliance as a panelist with both Pete Rodriguez, CEO at Silicon Catalyst and Richard Curtin, Managing Partner at Silicon Catalyst providing opening remarks. I had the opportunity to attend the event and I’m here to tell you it was insightful, thought-provoking and at times quite surprising. The all-star cast didn’t disappoint.

Don Clark began the panel session with an observation that “chips are cool again”. Don explained that he’s been covering semis since about 1987 when he interviewed Andy Grove, so he brings a substantial perspective to this event. He then engaged with each panelist. There were many great insights offered during this portion of the evening. I will offer a key point or two from each of the panelists here. A replay link is coming – I strongly encourage you to watch the entire event. It’s definitely worth the time.

First up was Mark Edelstone, who pointed out that he’s been watching semis for over 30 years and has never seen a better time for the industry. That statement alone made the whole event worthwhile for me. Mark presented some slides about the trends and what they mean. As is typically the case, he worked through a huge amount of data and turned it into clear and easy to understand trends. There is one slide in particular I’ll share here. I mentioned it in my previous post on the event. It’s an analysis of semiconductor consolidation trends and it shows what the semi world will look like in a few years. As shown in the graph, Mark sees a significant amount of further consolidation in the industry, projecting it to shrink to less than three dozen companies in the next five years.

Semiconductor Consolidation Trends

Next was Jodi Shelton. Early in her discussion she said, “we’re reminded as never before that the global economy runs on semiconductors.” That’s another one of those statements that made the whole evening worthwhile. Jodi presented a thoughtful analysis of the current tension with China and shared some views of how to get back to a more productive path for both countries. Taiwan and its unique position were also discussed. Jodi made a final comment about the significant lack of engineering talent. She pointed to the female population as an underserved demographic for engineering. The GSA will be promoting STEM education for women. If you’re a female who is choosing a college career, or if you know someone who is, give engineering a serious look – you will find a warm welcome with that credential.

Next was Ann Kim. The memorable quote here was, “VC funds have over $150 billion of dry powder. It’s a great funding environment.” She pointed out that a large influx of capital into the semi sector will help companies get to that all-important tapeout. Don asked Ann “what’s hot these days?”. Ann covered several areas. An interesting one was space technology. She said companies like SpaceX, Virgin Galactic and Blue Origin are doing well. Autonomous technology is popular in the air as well as on the ground it seems. Health care and life sciences are also driving a lot of interest.

After a few rounds of discussion with the panel from Don, the floor was opened to a very spirited Q&A from attendees. Many topics were covered; you need to see it for yourself. The all-star cast didn’t disappoint. You can see the full replay of the event here, starting with the introductory remarks from Richard regarding the Forum’s charter and an overview of the Silicon Catalyst Incubator by Pete.

By the way, for those of you engaged with early-stage semiconductor startups, the application deadline to the Silicon Catalyst Incubator is January 11, 2021. www.sicatalyst.com

Happy Holidays to all.

Also Read:

Chip Startups are Succeeding with Silicon Catalyst and Partners Like Arm

Silicon Catalyst Hosts Semiconductor Industry Forum – A View to the Future … it’s about what’s next®

Silicon Catalyst Announces a New Startup Ecosystem for MEMS Led by Industry Veteran Paul Pickering and supported by STMicroelectronics


Sensor Fusion Brings Earbuds into the Modern Age

by Tom Simon on 12-18-2020 at 6:00 am


Ten years ago, earbuds might have seemed like a mundane product area with little room for exciting developments. The arrival of True Wireless Stereo (TWS) has coincided with an avalanche of innovations that have moved earbuds from a simple transducer for creating sound into a sophisticated device capable of accepting user commands and controlling a media device based on a wide range of environmental information. At the same time, coupling earbuds with the latest in signal processing greatly enhances the listening experience. CEVA has an on-demand webinar, titled “Enhancing the TWS User Experience with Sensor Fusion”, that does an excellent job of describing just how significant the developments in so-called hearables have become.

Once consumers are made aware of what is possible, they immediately grasp the new capabilities and ask for them in new products. While things like sound quality, comfort, battery life and ease of use remain strong selling points, new features such as context-aware behavior command strong interest in the market. It takes a lot more than good audio quality to develop a successful hearable product nowadays. A wide range of sensors needs to be added to hearables, along with the processing capabilities to perform sensor fusion and deliver the required audio experience.

Sensor fusion might seem like a very dry and abstract topic, but it is what allows inputs from accelerometers, gyroscopes, magnetometers, microphones, touch sensors, and proximity sensors to be combined to create an understanding of not only the operating environment but also the context necessary to control device behavior. Sensors themselves have a variety of limitations which manifest as anomalies. Factors such as aging, operating voltage and manufacturing variation can also affect performance. Unless these factors are dealt with, the user experience could be frustrating or the devices could even be useless.
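As a concrete illustration of the idea (a minimal sketch, not CEVA's MotionEngine implementation; the function name, sample format, and constants are hypothetical), a complementary filter is one of the simplest ways to fuse a fast-but-drifting gyroscope with a noisy-but-drift-free accelerometer to track head pitch:

```python
# Minimal complementary-filter sketch: fuse a gyroscope (fast but drifting)
# with an accelerometer (noisy but gravity-referenced) to estimate pitch.
# Illustrative only -- not CEVA's implementation.
import math

def complementary_filter(samples, dt=0.01, alpha=0.98):
    """samples: list of (gyro_rate_dps, accel_x_g, accel_z_g) tuples."""
    pitch = 0.0
    for gyro_rate, ax, az in samples:
        # Integrate the gyro rate (degrees/second) for short-term accuracy.
        gyro_pitch = pitch + gyro_rate * dt
        # The accelerometer gives an absolute, gravity-referenced pitch.
        accel_pitch = math.degrees(math.atan2(ax, az))
        # Blend: trust the gyro short-term, the accelerometer long-term.
        pitch = alpha * gyro_pitch + (1 - alpha) * accel_pitch
    return pitch

# A stationary device lying flat: gyro reads ~0, gravity along z.
print(round(complementary_filter([(0.0, 0.0, 1.0)] * 100), 3))  # → 0.0
```

The blend constant `alpha` controls how quickly the accelerometer pulls the estimate back toward the true angle, which is exactly the kind of per-device tuning that production sensor-fusion libraries handle for you.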

Done right, sensor fusion can allow for a revolutionary user experience. Let’s start with input gestures. The CEVA webinar goes through several scenarios. Clicking a button is often used for input, but with earbuds this can be problematic. Hitting a button on a tiny device can be hard, and is made harder because the user might need to brace the device so it does not come out of their ear. Touch sensors are also difficult for earbuds because they require a larger area; if they could work, they could support a wide range of gestures and more natural motion. With advanced sensors and sensor fusion, earbuds can take advantage of tapping and head movement for command input. An accelerometer can detect tapping and distinguish it from random movement. Head-motion commands can use nodding and side-to-side movement. Head movement sensing is hands-free and uses natural motions.
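To make the tap-versus-random-movement distinction concrete, here is a toy sketch (thresholds and function names are hypothetical; real hearables tune these per device and use far more robust classifiers): a deliberate tap shows up as a short, sharp spike in acceleration magnitude, unlike the slower swings of ordinary head movement.

```python
# Toy tap detector: flag samples whose acceleration magnitude exceeds a
# threshold, with a short refractory window to debounce the ringing that
# follows a tap. Illustrative thresholds only.
import math

def detect_taps(accel_samples, threshold_g=2.5, refractory=5):
    """accel_samples: list of (x, y, z) in g. Returns indices of taps."""
    taps, cooldown = [], 0
    for i, (x, y, z) in enumerate(accel_samples):
        magnitude = math.sqrt(x * x + y * y + z * z)
        if cooldown > 0:
            cooldown -= 1          # ignore ringing right after a tap
        elif magnitude > threshold_g:
            taps.append(i)
            cooldown = refractory  # debounce window
    return taps

# Quiet signal (~1 g gravity) with one sharp spike at sample 3.
samples = [(0, 0, 1.0)] * 3 + [(0, 0, 4.0)] + [(0, 0, 1.0)] * 4
print(detect_taps(samples))  # → [3]
```

A production detector would also look at spike duration and shape, which is where fusing the accelerometer with other sensors earns its keep.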


But the real power of sensor fusion is using environmental information to drive device behavior. It is possible for hearables using sensor fusion to know if you are walking on a street, in a restaurant, in a crowd or by yourself. Using this information, the device can do things like pass through external audio for safety or reduce environmental noise to facilitate a phone call or music. It is even possible for the hearable to respond to sirens by muting media to help you maintain awareness of your surroundings.

One intriguing use model CEVA discussed is using 3-D sound combined with location information to help a user find a friend in a crowd. As the user’s head moves the apparent direction to their friend would shift in their earbuds, giving them cues about which way to walk to get to them. The CEVA webinar offers details of a number of scenarios where a device can respond to help with enjoying music, phone calls, sports activities, conversations in noisy environments and more.

Getting all this to work requires the right assortment of sensors and the real-time software to enable local processing of raw sensor data so that the hearable and connected devices can perform as desired. This is CEVA’s specialty. They offer their MotionEngine Hear software library for the hearable market. It features 3D head tracking, InEar detection, Activity classifiers, tap/double tap detection, shake detector, step counter/fitness tracking and more. CEVA’s MotionEngine Hear library handles issues like gyroscope and accelerometer offset or bias to improve tracking accuracy. It offers a sophisticated calibration strategy that uses both static and dynamic methods to achieve excellent results.

Applying sensor fusion to hearables is leading to dramatically expanded functionality. It makes devices more intuitive and more responsive to their environment. Indeed, it is expanding functionality in ways that were not even apparent ten years ago. This is good news for consumers. However, there is essential technology that enables these changes. Fortunately, CEVA has been active in this space and has software available now to provide the foundation for these new features. The webinar provides a lot more information than can be mentioned here. You can view the full webinar on the CEVA website.

Also Read:

Sensor Fusion in Hearables. A powerful complement

Low Energy Intelligence at the Extreme Edge

Combo Wireless. I Want it All, I Want it Now


Synopsys is Extending CXL Applications with New IP

by Mike Gianfagna on 12-17-2020 at 10:00 am

CXL’s busy timeline

Compute Express Link (CXL), a new open interconnect standard, targets intensive workloads for CPUs and purpose-built accelerators where efficient, coherent memory access between a host and device is required. A consortium to enable this new standard is in place, and a lot of heavy hitters are behind the standard, including IP support from Synopsys. If you want to learn more about CXL, Synopsys offers a good overview here. As the number two IP provider in the industry, Synopsys backing CXL is a big deal. I probed a bit to find out how Synopsys is extending CXL applications with new IP.

Gary Ruggles

CXL is not new to Synopsys. As reported on SemiWiki in 2019, Synopsys was the first IP provider with a complete CXL implementation. Arm also put its weight behind CXL back in 2019. This turns out to be important for a number of reasons, as you will see in a moment.  I recently had the chance to catch up with Gary Ruggles, senior product marketing manager at Synopsys. I wanted to understand a bit more about CXL technology, the consortium and how Synopsys is extending CXL applications with new IP. Gary is a veteran of chip design and IP. He’s worked at several IP companies, including Arm, before joining Synopsys. He’s also assisted customers with IP requirements for ASICs, including a stint at eSilicon, my alma mater.

The first thing Gary explained was that the CXL consortium has been very busy. The headline graphic above illustrates the rapid expansion of the specification. There are over 120 members in the consortium. It is clearly the largest of the new high-speed interconnect/coherency standards consortia, eclipsing membership in the Cache Coherent Interconnect for Accelerators (CCIX) consortium with about 50 members, Gen-Z with approximately 70 members, and OpenCAPI with around 38 members. The CXL Consortium was formed about 18 months ago, compared to nearly four years ago for the others, so CXL has definitely hit a nerve with a lot of influential companies. I previously mentioned heavy hitters. Consider that the CXL Board of Directors includes Intel, IBM, AMD and Arm. So, the four major CPU makers are all behind CXL. This is change-the-world kind of stuff in my opinion.

Gary also pointed out that there is a memorandum of understanding (MOU) between the CXL and Gen-Z Consortiums. With this Gen-Z MOU in place, CXL is becoming the dominant solution inside servers. Gen-Z can now offer connectivity from box-to-box or even rack-to-rack, leveraging its ability to use Ethernet physical layers to get longer reach connectivity than CXL can achieve using PCIe 5.0 PHYs. The footprint for CXL is growing.

We also discussed future enhancements. With the CXL 2.0 specification just released on November 10th, it is now clear that it’s all about enabling storage applications. The bulk of the new features added to the CXL 2.0 specification are focused in this area and include:

  • Switching for CXL.mem
  • Pooled memory that can be shared by more than one DS port
  • Managed hot-plug support (enabling storage device removal)
  • Security/encryption support

With switching added, memory that is attached to multiple downstream devices may be able to be shared across multiple hosts, and the memory can be split among those hosts as needed for a particular application. This opens up many new architectural considerations.

There is a lot more to the CXL story, including potential for CCIX over CXL, which further extends the possibilities. Gary has written a very informative technical bulletin on the topic and you can access a copy of it here. There is one more interesting development I’ll mention. Synopsys recently announced its DesignWare® CXL IP supports AMBA CXS protocol to enable seamless integration with scalable Arm® Neoverse™ Coherent Mesh Networks. This capability delivers an optimized multichip IP stack for a range of high-performance computing, datacenter, and networking applications. You can read the full press release here.

You can learn more about Synopsys IP support for CXL, both current and future, here. You will clearly see how Synopsys is extending CXL applications with new IP.

Also Read:

Webinar on Synopsys MIPI IP

Synopsys talks about their DesignWare USB4 PHY at TSMC’s OIP

AI/ML SoCs Get a Boost from Synopsys IP on TSMC’s 7nm and 5nm


An Accellera Update. COVID Accelerates Progress

by Bernard Murphy on 12-17-2020 at 6:00 am


Normally I would post this Accellera update during DVCon US but, no surprise, this year is weird. Particularly in conferences going virtual. The last DVCon was in early March of this year, right on the cusp of the shutdown. I was there in person, as was Lu Dai (Chairman of Accellera). Both Synopsys and Cadence had dropped out, citing safety, though presentations continued (not sure about the exhibits). Lu reminded me that, as thanks for our fortitude, DVCon was one of the best places to find hand sanitizer, out of stock everywhere else!

We talked about how the pandemic had affected standards development. Lu saw a net positive for members having to work from home and conference virtually. As an international organization, it has been easier to get everyone together, even if meeting schedules weren’t always convenient. Attendance has been higher, and meetings more frequent. He said that when working groups (WGs) put together their plans, the board worried they were too aggressive. But it turned out they’ve been pretty close – more is getting done faster after all.

PSS 2.0 and UVM-AMS

PSS 2.0, now in public review, has certainly accelerated its schedule. What I find telling here is Lu’s view of the new release. He’s a user after all (at Qualcomm) as well as the chair of Accellera. Qualcomm is a major adopter of the standard. They saw 1.0 as a good start but not production-ready because they needed to do quite a lot of patching when building on proprietary implementations. In this new release they see a production-ready vendor-neutral solution they’re ready to adopt in full. As I said, a telling viewpoint.

UVM-AMS is a very new effort, launched only late last year. The WG have already developed what they call a design objective document (DOD), all the capabilities they want to be covered in the standard. Next, they’re going to be voting on which of those capabilities should make it into the first release. According to Lu, this is a pretty fast pace, much faster than normal. Again, a silver lining from the pandemic.

IP security and functional safety

The IP Security Assurance (IPSA) working group is also progressing. Lu clarified (for me at least) that this will be an annotation standard, which could get to release faster than some other working groups’ efforts. They’re working quite closely with Mitre, which is already well established as a centralized resource for common vulnerabilities and exposures, originally in software and now also in hardware. IPSA is tying into the Mitre security threat database for hardware. The objective then is how that threat information carries over into markup for use by EDA tools. Details here are still evolving.

On functional safety, the Accellera working group is working closely with the IEEE functional safety working group, and they have agreed on a division of tasks. Accellera focuses more on the hardware side; IEEE works more on the software and higher layers. It is yet another area moving at a fast pace, with internal email updates almost every day and regular joint meetings with IEEE. On coordination with broader standards activities (notably ISO 26262), Lu doesn’t see a problem. Given the linkages between Accellera and IEEE, and the already well-established member linkages with ISO 26262, there’s a lot of interaction between standards activities. By design, much of the Accellera and IEEE work here is complementary to the ISO 26262 focus, and there are enough channels to cross-check.

DVCon logistics

On DVCon, the other main focus for Accellera, I already mentioned the US 2020 conference. DVCon China was cancelled, since it was scheduled right in the middle of that country’s own battle with the pandemic. DVCon Europe had more time to prepare and was able to pull off a very impressive virtual conference. In fact, I attended and wrote up one of the talks, though apparently I didn’t take advantage of the full virtual experience at the show. Lu was so impressed he plans to use the same platform for any other virtual events. Honestly, from my perspective, I’d love to see all conferences go virtual even after the pandemic is over. It may be tough on the travel and convention center industries, but way easier on the rest of us!

For more detail on the latest news from Accellera, check HERE.

Also Read:

DVCon 2020 Virtual Follow-Up Conference!

Accellera Tackles Functional Safety, Mixed-Signal

Functional Safety Comes to EDA and IP


Advanced Process Development is Much More than just Litho

by Tom Dillinger on 12-16-2020 at 10:00 am


The vast majority of the attention given to the introduction of each new advanced process node focuses on lithographic updates.  The common metrics quoted are the transistors per mm² or the (high-density) SRAM bit cell area.  Alternatively, detailed decomposition analysis may be applied using transmission electron microscopy (TEM) on a lamella sample, to measure fin pitch, gate pitch, and (first-level) metal pitch.

With the recent transition of the critical dimension layers from 193i to extreme ultraviolet (EUV) exposure, the focus on litho is understandable.  Yet, process development and qualification encompasses many more facets of materials engineering to achieve robust manufacturability, so that the full complement of product goals can be achieved.  Specifically, process development engineers are faced with increasingly stringent reliability targets, while concurrently achieving performance and power dissipation improvements.

At the recent IEDM conference, TSMC gave a technical presentation highlighting the development focus that enabled the N5 process node to achieve (risk production) qualification.  This article summarizes the highlights of that presentation. [1]

An earlier SemiWiki article introduced the litho and power/performance features of N5. [2]  One of the significant materials differences in N5 is the introduction of a “high mobility” device channel, or HMC.  As described in [2], the improved carrier mobility in N5 is achieved by the introduction of additional strain on the device channel region.  (Although TSMC did not provide technical details, the pFET hole mobility is also likely improved by the introduction of a moderate percentage of Germanium into the Silicon channel region, or Si(1-x)Ge(x).)

Additionally, the optimized N5 process node incorporates an optimized high-K metal-gate (HKMG) dielectric stack between gate and channel, resulting in a stronger electric field.

A very significant facet of this “bandgap engineering” for carrier mobility and the gate oxide stack materials selection is to ensure that reliability targets are satisfied.  Several of the N5 reliability qualification results are illustrated below.

TSMC highlighted the following reliability measures from the N5 qualification test vehicle:

  • bias temperature instability (BTI)
      – both NBTI for pFETs and PBTI for nFETs, manifesting as a performance degradation over time from a device Vt shift (increasing in absolute value) due to trapped oxide charge
      – may also result in a degradation of VDDmin for SRAM operation
  • hot carrier injection (HCI)
      – an asymmetric injection of charge into the gate oxide near the drain end of the device (operating in saturation), resulting in degraded carrier mobility
  • time-dependent gate oxide dielectric breakdown (TDDB)
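As background on the BTI mechanism (a common textbook power-law model, not a formula from the TSMC paper; A, E_a, γ, and n are empirically fitted constants), the threshold-voltage shift under gate stress is often written as:

```latex
\Delta V_t \;=\; A \,\exp\!\left(-\frac{E_a}{kT}\right) \left|V_{gs}\right|^{\gamma}\, t^{\,n}
```

with the time exponent n typically around 0.1–0.25. The Arrhenius temperature factor is why BTI is characterized at elevated temperature and then extrapolated to use conditions.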

Note that the N5 node is targeted to satisfy both high-performance and mobile (low-power) product requirements.  As a result, both performance degradation and maintaining an aggressive SRAM VDDmin are important long-term reliability criteria.

TDDB

The figure above illustrates that the TDDB lifetime is maintained relative to node N7, even with the increased gate electric field.

Self-heating

The introduction of FinFET device geometries substantially altered the thermal resistance paths from the channel power dissipation to the ambient.  New “self-heating” analysis flows were employed to more accurately calculate local junction temperatures, often displayed as a “heat map”.  As might be expected with the aggressive dimensional scaling from N7 to N5, the self-heat temperature rise is greater in N5, as illustrated below.
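To first order (a generic thermal model, not TSMC's specific analysis flow), the local junction temperature follows:

```latex
T_j \;=\; T_{amb} + P_{diss} \cdot R_{\theta}
```

where R_θ is the junction-to-ambient thermal resistance. The tall, narrow fin geometry raises R_θ relative to planar devices, so the same dissipated power produces a larger local temperature rise; the heat-map analysis flows quantify this spatially across the die.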

Designers of HPC products need to collaborate with both their EDA partners for die thermal analysis tools and their product engineering team for accurate (on-die and system) thermal resistance modeling.  For the on-die model, both active and inactive structures strongly influence the thermal dispersion.

HCI

Hot carrier injection performance degradation for N7 and N5 are shown below, for nFETs and pFETs.

Note that HCI is strongly temperature-dependent, necessitating accurate self-heat analysis.

BTI

The pMOS NBTI reliability analysis results are illustrated below, with the related ring oscillator performance impact.

In both cases, reliability analysis demonstrates improved BTI characteristics of N5 relative to N7.

SRAM VDDmin

The SRAM minimum operating voltage (VDDmin) is a key parameter for low-power designs, especially with the increasing demand for local memory storage.  Two factors that contribute to the minimum SRAM operating voltage (with sufficient read and write margins) are:

  • the BTI device shift, as shown above
  • the statistical process variation in the device Vt, as shown below (normalized to Vt_mean in N7 and N5)

Based on these two individual results, the SRAM reliability data after HTOL stress shows improved VDDmin impact for N5 versus N7.

Interconnect

TSMC also briefly described the N5 process engineering emphasis on (Mx, low-level metal) interconnect reliability optimization.  With an improved damascene trench liner and a “Cu reflow” step, the scaling of the Mx pitch – by ~30% in N5 using EUV – did not adversely impact electromigration fails, nor line-to-line dielectric breakdown.  The figure below illustrates the line-to-line (and via) cumulative breakdown reliability fail data for N5 compared to N7 – N5 tolerates the higher electric field with the scaled Mx pitch.
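For context, electromigration lifetime is classically modeled with Black's equation (textbook background, not part of TSMC's presentation):

```latex
\mathrm{MTTF} \;=\; A \, J^{-n} \exp\!\left(\frac{E_a}{kT}\right)
```

where J is the current density and n is typically 1–2. Since the exponential factor shrinks as temperature rises, self-heating directly shortens interconnect lifetime, which is why the Mx reliability and self-heat analyses are coupled.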

Summary

The majority of the coverage associated with the introduction of TSMC’s N5 process node related to the broad adoption of EUV lithography to replace multipatterning for the most critical layers, enabling aggressive area scaling.  Yet, process engineers must also optimize materials selection and many individual fabrication steps, to achieve reliability targets.  TSMC recently presented how these reliability measures for N5 are superior to prior nodes.

-chipguy

References

[1]  Liu, J.C., et al, “A Reliability Enhanced 5nm CMOS Technology Featuring 5th Generation FinFET with Fully-Developed EUV and High Mobility Channel for Mobile SoC and High Performance Computing Application”, IEDM 2020.

[2]  https://semiwiki.com/semiconductor-manufacturers/tsmc/282339-tsmc-unveils-details-of-5nm-cmos-production-technology-platform-featuring-euv-and-high-mobility-channel-finfets-at-iedm2019/

 



Close the Year with Cliosoft – eBooks, Videos and a Fun Holiday Contest

by Mike Gianfagna on 12-16-2020 at 6:00 am


‘Tis the season, a time when a lot of companies summarize the year, send out holiday greetings and generally wind down until after the New Year. That’s not the case at Cliosoft.  Their marketing machine has been in full gear with lots of new, useful and compelling content. I’ll provide a round-up of what’s happening. You can close the year with Cliosoft – eBooks, videos and a fun holiday contest.

eBooks

Startup Best Practices eBook

eBooks aren’t something you see every day from an EDA vendor.  Cliosoft has published two, including one available in Chinese as well as English, with more on the way. The first one is Startup Best Practices. The eBook is written by Srinath Anantharaman, the CEO and founder of Cliosoft. This is a short eBook that hits some very important fundamental points. Here is the table of contents:

  • INTRODUCTION
  • WHAT ARE ‘BEST PRACTICES’?
  • WHY ADOPT BEST PRACTICES FROM THE START?
  • ARE DESIGN MANAGEMENT AND OTHER COLLABORATION TOOLS NEEDED?
  • KEEP IT SIMPLE
  • IT CONSIDERATIONS
  • CONCLUSION

This is a great read if you’re starting to build a design infrastructure or if you’re considering an upgrade to your existing flow. If you are in one of these situations, there is a sentence in the introduction that I think is worth repeating here.

“This eBook makes the case that adopting best practices and methodology early will lay the foundation to create a design team that is built to last.”

Design Methodology Guide

The next eBook is one chapter from a book called Design Methodology Guide, Advanced Methodology for AMS IP and SOC Design, Verification and Implementation. The chapter is entitled Data Management for Mixed-Signal Designs, and it’s authored by Michael Henrie and Srinath Anantharaman. Michael Henrie is the director of software engineering at Cliosoft. The chapter goes into a lot of detail. It begins with a discussion of the current mixed-signal design environment and traditional team design techniques and their pitfalls.  There is then a discussion of design management system requirements and how to manage projects with such a system in place. The impact a design management system has on global collaboration, analog design workflows, ECOs and release tracking are discussed.

Techniques to administer rules, roles, access and permission as well as how to reuse IP and PDKs across projects are also touched on. A lot of detail and examples are offered. I’m sure there’s something in this eBook for everyone.

Two more eBooks are in the works now, with others to follow. The next titles treat the ever-popular topic of moving to the cloud:

  • Best Practices for Deploying Design Management on Amazon AWS
  • Using Cliosoft SOS Design Management Platform in the Cloud

You can get your copy of Cliosoft’s eBooks here.

Videos

There are quite a few videos available on the Cliosoft website as well. The titles there include:

  • Designing on AWS
  • The New Trend in IP Traceability That IP Developers and Design Managers Rely On
  • Network Storage Optimization for IP
  • Challenges in IP Reuse
  • Visualizing Differences in Analog Design
  • What’s in Your IP

The videos are a combination of webinar replays and “chalk talks”. The content covers a lot of very relevant topics.

I’ll focus on the content of one video, The New Trend in IP Traceability presented by Karim Khalfan, director of applications engineering at Cliosoft. I originally covered this webinar on SemiWiki here. Karim begins by discussing why IP traceability is important. Some key benefits include:

  • Increased visibility
  • Improved quality
  • Reduced risk

The standards that demand reliability (e.g., ISO 26262 and MIL-STD-882) and how IP traceability addresses these requirements are also discussed. Karim then sets up a series of live demonstration scenarios that illustrate the challenges several stakeholders face and how IP traceability helps. The webinar concludes with a Q&A session with questions from the original live audience. Karim manages to get through all this content in under 30 minutes. This one is definitely worth your time. You can access all Cliosoft’s videos here.

Holiday Contest

This one is a lot of fun. If you’re not quick, it can become torture, so give it a try! You have to spot a series of words in a “sea of letters” while working against the clock. It’s definitely worth the effort because playing enters you in a drawing for a chance to win a $250 Amazon gift card. There will be drawings every Friday until December 25, so grab a big cup of coffee and check it out. You can enter the game from a link at the top of the Cliosoft home page. So that’s how you can close the year with Cliosoft – eBooks, videos and a fun holiday contest.

Happy Holidays to all!

Also Read

The History and Physics of Cliosoft’s Academic Program!

A tour of Cliosoft’s participation at DAC 2020 with Simon Rance

How to Grow with Poise and Grace, a Tale of Scalability from ClioSoft


Achronix Talks about FPGAs for Video Processing

Achronix Talks about FPGAs for Video Processing
by Tom Simon on 12-15-2020 at 10:00 am

Need for Video Editing

The internet keeps adding users and connected devices. According to the numbers in a white paper from Achronix, by 2022 there will be 4.8 billion internet users and 28.5 billion connected devices. Internet traffic will reach 275 exabytes per month, and a staggering 83 percent of it will be video. Moving data from creators to consumers, editing video, and processing video for machine-learning applications all require large amounts of video processing. The Achronix white paper, titled “FPGAs for Advanced Video Processing Solutions”, examines each of these tasks and the type of data processing required.

Need for Video Processing

We have all seen the message “Processing your video” when uploading videos to YouTube or Facebook. Converting video from one format to another for use on other platforms and devices is a critical step for sharing content. Originally this work was done on CPUs or sometimes on GPUs. While ASICs might also be an attractive processing solution, they are limited when it comes to the proprietary compression methods that are often used in transcoding.

In the case of video editing and content creation, desktop computers with GPU acceleration have been a mainstay. However, with the dawn of 4K and 8K video, these platforms are underpowered for the task. This work has been moving to the cloud; however, using traditional processors in the cloud has its limits.

Lastly, AI applications need images, not video, to perform inference. This entails converting H.264 or H.265 video streams into JPEG or PNG images that can be used by the AI processors. The conversion to an image file may also include changing the image resolution or other processing to help the AI application.

Achronix makes the case that FPGAs, especially their Speedster7t, are well suited to all of these tasks. Both GPUs and FPGAs offer parallel processing, but FPGAs often come up as the preferred choice because of their power advantage over GPUs.

The Achronix white paper looks at each type of activity to analyze the effectiveness of their Speedster7t FPGA. When streaming and transcoding H.264 video many of the tasks are easily handled by CPUs. Yet, one task in the process, motion estimation, has been profiled to use around 21% of the entire processing load on CPUs. This is a task that can be moved to an FPGA for a big improvement in throughput.
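As a rough illustration (my own arithmetic, not from the white paper), Amdahl's law bounds how much the CPU side can gain from offloading that ~21% motion-estimation share:

```python
def offload_speedup(offloaded_fraction):
    """Upper bound on CPU throughput gain when a fraction of the workload
    is moved off the CPU entirely (Amdahl's law; the accelerator is assumed
    fast enough that its own time is negligible)."""
    return 1.0 / (1.0 - offloaded_fraction)

# Offloading the ~21% motion-estimation share frees the CPU for a ~1.27x gain
print(f"{offload_speedup(0.21):.2f}x")  # → 1.27x
```

The real end-to-end benefit depends on data-transfer overhead between the CPU and the FPGA, so this is only an upper bound on the CPU-side improvement.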

Whether you are talking about working with RAW video data or compressing video using intra-frame structure, video editing and content creation have become unwieldy at resolutions such as 4K and 8K. Previously, with HD and 2K video, using CPUs was a feasible approach. The white paper includes benchmark data that supports the notion that CPUs must be supplanted at today’s higher resolutions.

For AI, there is a lot to be gained by combining the video decoder and the image encoder in the same processing unit. Frequently there is also a need for additional image processing as a prerequisite to the inference step. This too can easily be accommodated in an FPGA.

Achronix then moves to a discussion of the specific advantages found in their Speedster7t family. Their 2D Network on Chip (NoC) facilitates high speed transfers between the external interfaces in the Speedster7t FPGA and blocks in the FPGA fabric. It also provides rapid transfers among functional blocks on-chip. Because it is separate from the FPGA fabric, no FPGA resources are consumed when setting up pathways for data exchange. Likewise, because it uses a high-level protocol, FPGA designers do not need to put together routing and buffering logic. To transfer data, a user or consumer only needs to connect to a Network Access Point.

Speedster7t FPGAs come with a well thought out set of interfaces. The Speedster7t AC7t1500, for instance, offers fracturable Ethernet controllers (supporting rates up to 400G), PCIe Gen5 ports and up to 32 SerDes channels with speeds up to 112 Gbps. It also has multi-channel GDDR6 memory interfaces. With the NoC running at a much higher speed than the clocks usually associated with FPGA fabrics, it can transport in aggregate over 20 Tbps. The combination of the NoC and the high-speed interfaces means it is in a class by itself when it comes to meeting the needs of video processing.

The paper finishes with a discussion of the Machine Learning Processors (MLP) that Achronix has developed for use in the Speedster7t family. It is interesting reading about how the MLPs are optimized with local block RAM and math units that handle MAC operations needed for AI. Achronix has consistently been adding features for a wide range of complex applications to their FPGAs. Their white papers, such as this one, frequently make compelling cases for the use of their technology in system design. The full white paper on video processing is available on their website.


More on Bug Localization. Innovation in Verification

More on Bug Localization. Innovation in Verification
by Bernard Murphy on 12-15-2020 at 6:00 am

innovation min

Mining assertions from constrained random simulations to localize bugs. Paul Cunningham (GM, Verification at Cadence), Jim Hogan and I continue our series on research ideas. Feel free to comment.

The Innovation

This month’s pick is Symptomatic bug localization for functional debug of hardware designs. This paper was presented at the 2016 ICES. The authors are from the University of Illinois at Urbana Champaign.

There’s wide agreement that tracing bugs to their root cause consumes more effort than any other phase of verification. Methods to reduce this effort are always worth a close look. The authors start with constrained random (CR) simulations. They mine failing simulations for likely invariant assertions, which they call symptoms. These they infer from multiple CR simulations based on common observed behaviors. They use an open-source miner developed in the same group, GoldMine, for this purpose.

Then for each failure they look for commonalities between assertions, looking for common execution paths among symptoms. They claim that common symptoms signal a highly suspicious path which likely localizes the failure.

One way to determine what code is covered by an assertion is through static cone-of-influence analysis. These authors instead use simulations to determine dynamically what statements are relevant. These they assume are statements executed in the time slice between the left-hand side of the assertion becoming true and the right-hand side becoming true. They acknowledge dynamic analysis is incomplete for this purpose, though they say that in their experiments it found all statements in scope.

The authors ran experiments on a USB 2.0 core and were able to localize bugs to within 5% of the RTL source code lines in one case and within 15% on average.
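The flow is easy to sketch (a minimal illustration in Python; the function and data-structure names are mine, not the authors' tooling): intersect the symptoms mined from each failing trace, then rank RTL lines by how many surviving symptoms implicate them.

```python
from collections import Counter

def localize_bugs(mined_symptoms, executed_lines):
    """Sketch of the paper's flow with illustrative data structures.

    mined_symptoms: {trace_id: set of "A implies B" assertions mined from
                     that failing trace (e.g. by a miner like GoldMine)}
    executed_lines: {assertion: set of RTL line numbers executed between
                     A and B firing, found by re-running the simulation}

    Returns RTL lines ranked by how many common symptoms implicate them.
    """
    traces = list(mined_symptoms.values())
    # Phase 2: keep only symptoms common to every failing trace
    common = set.intersection(*traces) if traces else set()
    # Phase 3: map each common symptom back to source lines and vote
    votes = Counter()
    for assertion in common:
        for line in executed_lines.get(assertion, ()):
            votes[line] += 1
    return votes.most_common()
```

With two failing traces that share only symptom `a1`, for example, only the lines executed between `a1`'s antecedent and consequent survive as suspects; lines touched by the most shared symptoms rank first.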

Paul’s view

I first would like to acknowledge that this paper is one of a series of high-quality papers from Dr. Vasudevan’s team at the University of Illinois, building on their GoldMine assertion mining tool. Lots of inspiring research here which I very much enjoyed learning about!

The paper nicely brings together three techniques. In the first phase, they look for common patterns in failure traces using GoldMine. The patterns found are in the form of “A implies B” assertions. Each such assertion identifies some time slice in the trace where, whenever a sequence of events “A” happens, some other sequence of events “B” happens later.

Second, they intersect these assertions mined across all the failure traces to find the important signatures that apply to all failure traces.

Finally, they map the assertions back to lines in the source RTL by re-running the failing simulations and tracking which RTL lines are executed in the time between “A” and “B” happening in the trace.

Overall, this is a very elegant and scalable method to localize faults. On their USB controller example, they localized half of the 22 bugs inserted to within 15% of the source code base, which is impressive.

I struggled a bit with the importance/sensitivity analysis in Figure 4. I expected to see some visual correlation between code zone importance and the actual code zone where the bug was injected but this didn’t seem to be the case.

The other thought I have, which would be a fun and easy experiment: in the second phase, check the signature assertions against traces for good simulations that passed. Then prune any assertion that also matches a good simulation trace. This might improve the quality of the important signature list further.

Jim’s view

By default, I would normally look at this and think “part of verification, no new money there, not investable”. However, debug probably has the least EDA support today. It’s also the most expensive part of verification in time consumed by verification experts. And tools in this area can be very sticky. I’m thinking particularly of Verdi. There, users developed a strong loyalty, quite likely because they spend so much time in debug.

Now I think a new capability that could significantly enhance the value of a debugger – reducing total debug time – could attract a lot of interest. I’d want to see proof of course, but I think there might be a case.

My view

Before Atrenta was acquired by Synopsys, we acquired a company that did assertion mining. As a standalone capability, generating new assertions for direct verification, it seemed to struggle to find traction. This is a different spin on the technique. The assertions are not the end goal but a means to localize bugs. Clever idea and maybe more immediate market appeal.

You can read the previous Innovation blog HERE.


Alphawave IP is Enabling 224Gbps Serial Links with DSP

Alphawave IP is Enabling 224Gbps Serial Links with DSP
by Mike Gianfagna on 12-14-2020 at 10:00 am

Alphawave IP is Enabling 224Gbps Serial Links with DSP

Alphawave IP is a new member of the SemiWiki community. You can learn about the company and their CEO, Tony Pialis, in this interview by Dan Nenni. Design & Reuse recently held a virtual IP-SOC Conference, and Tony presented. The D&R event had a very strong lineup of presenters. They supplemented the prepared video presentations with two live panels on Automotive and FDSOI. This created a nice balance of prepared and live material, a good ingredient for a virtual event. Alphawave IP has a very strong portfolio of DSP-based multi-standard connectivity silicon IP solutions. They recently won the 2020 TSMC OIP Partner of the Year award for high-speed SerDes IP, so they’re definitely a company to watch. I was anxious to hear how Alphawave IP is enabling 224Gbps serial links with DSP at the D&R event.

DSP SerDes Introduction

Tony started by discussing the differences between an analog SerDes and a digital, or DSP, SerDes. He explained that an analog SerDes can work reliably up to 36 dB NRZ or 30 dB PAM4. Since all equalization is implemented in the continuous time domain, this technology is sensitive to process variation. With a DSP-based design, most of the equalization is done digitally, allowing for more robust operation to 45 dB NRZ and 36 dB+ PAM4. This kind of design is also not very sensitive to process variation. Tony pointed out that the high-speed ADC required for a digital design like this is challenging to build.

Tony then went into some detail about analog linear equalization vs. DSP linear equalization. Clearly, the DSP approach is a better match for the demands of high-speed links.

The Road to 200Gbps Serial Links

Next, Tony discussed the challenges of getting from current 112Gbps PAM4 SerDes to 224Gbps PAM4 devices. Keeping the architecture the same, one can see that the reach for the device is dramatically reduced – roughly one inch vs. one foot. This is a serious challenge. The data is summarized in the figure below.

Scaling Symbol Rates to 224Gbps

Given that package and board materials aren’t likely to change much in the next couple of years, a new approach to increasing data throughput for existing channels is needed – one that doesn’t suffer from the tradeoff issues shown above. Tony examined several alternative modulation schemes. Each has its own strengths and weaknesses relative to required channel bandwidth and signal-to-noise ratio (SNR). He focused on PAM8 as a good candidate given its low channel bandwidth requirements. The various modulation techniques and their requirements are summarized in the figure below.

High Capacity Modulation Schemes
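The bandwidth advantage of higher-order PAM is simple arithmetic: each symbol carries log2(levels) bits, so the required symbol rate (and hence channel bandwidth) drops as the modulation order rises. A quick sanity check at 224 Gbps, ignoring line-coding and FEC overhead:

```python
from math import log2

def symbol_rate_gbaud(bitrate_gbps, levels):
    """Required symbol (baud) rate for a given bit rate and PAM order.
    Each symbol carries log2(levels) bits, so baud = bitrate / bits-per-symbol.
    Line-coding and FEC overhead are ignored in this rough estimate."""
    return bitrate_gbps / log2(levels)

for name, levels in [("NRZ (PAM2)", 2), ("PAM4", 4), ("PAM8", 8)]:
    print(f"{name}: {symbol_rate_gbaud(224, levels):.1f} GBaud for 224 Gbps")
```

PAM4 needs 112 GBaud for a 224 Gbps link, while PAM8 needs only about 74.7 GBaud, which is why its lower channel bandwidth requirement makes it attractive despite the SNR penalty.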

The next challenge to tackle is how to manage the SNR degradation of PAM8. One step toward a solution is to use a “maximum likelihood sequence detector.” This advanced DSP detector uses an approach called Viterbi detection to make data-slicing decisions based on a sequence of symbols rather than on a single symbol, which is the typical approach. This minimizes error across a sequence of symbols and results in an SNR improvement of about 1-3 dB.
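Maximum-likelihood sequence detection is easiest to see on a toy model. The sketch below (my own minimal example, not Alphawave's implementation) runs the Viterbi algorithm over PAM symbols sent through an assumed 2-tap intersymbol-interference channel, choosing the transmitted sequence whose predicted channel output best matches the received samples:

```python
def viterbi_detect(received, levels, h=(1.0, 0.25)):
    """Minimal maximum-likelihood sequence detection (Viterbi) over an
    assumed 2-tap ISI channel: y[k] = h0*x[k] + h1*x[k-1] + noise.
    The trellis state is the previous symbol; the branch metric is the
    squared error between the received sample and the predicted output."""
    h0, h1 = h
    costs = {s: 0.0 for s in levels}   # best path cost ending in state s
    paths = {s: [] for s in levels}    # best symbol sequence per state
    for y in received:
        new_costs, new_paths = {}, {}
        for cur in levels:             # candidate current symbol
            # pick the predecessor state that minimizes total path cost
            best_prev = min(
                levels,
                key=lambda p: costs[p] + (y - (h0 * cur + h1 * p)) ** 2,
            )
            new_costs[cur] = costs[best_prev] + (y - (h0 * cur + h1 * best_prev)) ** 2
            new_paths[cur] = paths[best_prev] + [cur]
        costs, paths = new_costs, new_paths
    return paths[min(levels, key=lambda s: costs[s])]

# Noise-free PAM4 example: the channel output for [3, -1, 1, -3, 3, 1]
pam4 = [-3, -1, 1, 3]
rx = [2.25, -0.25, 0.75, -2.75, 2.25, 1.75]
print(viterbi_detect(rx, pam4))  # → [3, -1, 1, -3, 3, 1]
```

Because the detector scores whole sequences instead of slicing each sample independently, a noisy sample can be overruled by its neighbors, which is the source of the SNR gain Tony described.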

Next, Tony focused on forward error correction (FEC). Using new, third-generation soft FECs based on approaches such as block turbo codes, one can recover over 10 dB, further compensating for the challenges of PAM8.

Summary

Tony concluded with an overview of Alphawave’s world-leading portfolio of DSP-based PHYs covering many protocols and applications, short and long reach. The portfolio is available and silicon-proven on TSMC 7nm and 5nm processes. With this technology platform, Tony sees a path to 224Gbps. If you’d like to learn more about Alphawave IP’s assessment of the future and how its technology fits, you can see Tony’s complete D&R presentation by registering here. He goes into a lot of detail. You can also visit the Alphawave IP website to learn more and find out how Alphawave IP is enabling 224Gbps serial links with DSP.

Also Read:

Alphawave IP and the Evolution of the ASIC Business

Demand for High Speed Drives 200G Modulation Standards

CEO Interview: Tony Pialis of Alphawave IP