
Cadence Explores Smarter Verification

by Bernard Murphy on 07-10-2017 at 7:00 am

Verification as an effectively unbounded problem will always stir debate on ways to improve. A natural response is to put heavy emphasis on making existing methods faster and more seamless. That’s certainly part of continuous improvement but sometimes we also need to step back and ask the bigger questions – what is sufficient and what is the best way to get there? Cadence hosted a panel at DAC this year on that topic, moderated by Ann Mutschler of SemiEngineering. Panelists were Christopher Lawless (Intel Director of customer-facing pre-silicon strategies), Jim Hogan (Vista Ventures), Mike Stellfox (Cadence fellow) and David Lacey (Verification scientist at HP Enterprise). I have used Ann’s questions as section heads below.

What are the keys to optimizing verification/validation?
Chris said that the big challenge is verifying the system. We’re doing a good job at the unit level, both for software and hardware components, but complexity at the (integrated) system level is growing exponentially. The potential scope of validation at this level is unbounded, driving practical approaches towards customer use-cases. David highlighted challenges in choosing the right verification approach at any point and how best to balance these methods (e.g. prototyping versus simulation). He also raised a common concern – where is he double-investing and are there ways to reduce redundancy?

Jim saw an opportunity to leverage machine learning (ML), citing an example in a materials science company he advises. He sees potential to mine existing verification data to discover and exploit opportunities that may be beyond our patience and schedules to find. Mike echoed and expanded on this saying that we need to look at data to get smarter and we need to exploit big data analytics and visualization techniques, along with ML.

Of the verification we are doing today, what’s smart and what’s not?
David said a big focus in his team is getting maximum value out of the licenses they have. Coverage ranking (of test suites) is one approach; cross-coverage ranking may be another. Mike said that formal is taking off, with interest in using these methods much earlier for IP, and that creates a need to better understand the coverage contribution and how it can be combined with simulation-based verification. He added that at the system level, portable stimulus (PS) is taking off, more automation is appearing and it is becoming more common to leverage PS across multiple platforms. Chris was concerned about effectiveness across the verification continuum and wanted to move more testing earlier in that continuum. He still sees a need for hardware platforms (emulation and prototyping) but wants them applied more effectively.

What metrics are useful today?
Chris emphasized that synthetic testing will only take you so far in finding the bugs that may emerge in real OS/applications testing. Real workloads are the important benchmark. The challenge is to know how to do this effectively. He would like to see ML methods extract sufficient coverage/metrics from customer use-cases ahead of time, and to propagate appropriate derived metrics from this across all design constituencies to help each understand impact on their objectives.

Mike felt that, while it may seem like blasphemy, over-verifying has become a real concern. Metrics should guide testing to be sufficient for target applications. David wants to free up staff earlier so they can move on to other projects. In his view, conventional coverage models are becoming unmanageable. We need analytics to optimize coverage models to address high-value needs.

Is ML driving need for new engineers?
Jim, always with the long view, said that the biggest engineering department in schools now is CS, and that in 10 years it will be cognitive science. He believes that we are on the cusp of cognitive methods which will touch most domains. Chris wouldn’t commit to approaches but said Intel is exploring ways to better understand and predict. He said that they have many projects stacked up and need to become more efficient, for example in dealing with diverse IoT devices.

David commented that they are not hiring in ML but they are asking engineers to find new ways to optimize, more immediately to grow experience in use of multiple engines and to develop confidence that if something is proven in one platform, that effort doesn’t need to be repeated on other platforms. Following this theme, Chris said that there continues to be a (team) challenge in sharing data across silos in the continuum. And Mike added that, as valuable as UVM is for simulation-centric work, verification teams need to start thinking about building content for multiple platforms; portable stimulus is designed to help break down those silos.

Where are biggest time sinks?
David said, unsurprisingly, that debug is still the biggest time sink, though the tools continue to improve and make this easier. But simply organizing all the data, how much to keep, what to aggregate, how to analyze it and how to manage coverage continues to be a massive problem. This takes time – you must first prioritize, then drill down.

Chris agreed, also noting the challenge in triage to figure out where issues are between silos and the time taken to rewrite tests for different platforms (presumably PS can help with this). Jim wrapped up noting that ML should provide opportunities to help with these problems. ML is already being used in medical applications to find unstructured shapes in MRI data – why shouldn’t similar approaches be equally useful in verification?

My take – progress in areas we already understand, major moves towards realistic use-case-based testing, and clear need for big-data analytics, visualization and ML to conquer the sheer volume of data and ensure that what we can do is best directed to high-value testing. I’ll add one provocative question – since limiting testing to realistic customer use-cases ensures that coverage is incomplete, how then do you handle (hopefully rare) instances where usage in the field strays outside tested bounds? Alternatively, is it possible to quantify the likelihood of escapes in some useful manner? Perhaps more room for improvement.


Exclusive – GLOBALFOUNDRIES discloses 7nm process detail

by Scotten Jones on 07-08-2017 at 7:00 am

In a SemiWiki EXCLUSIVE – GLOBALFOUNDRIES has now disclosed the key metrics for their 7nm process. As I previously discussed in my 14nm, 16nm, 10nm and 7nm – What we know now blog, GLOBALFOUNDRIES licensed their 14nm process from Samsung and decided to skip 10nm because they thought it would be a short-lived node. At 7nm GLOBALFOUNDRIES has taken advantage of the additional technical resources they acquired from IBM to develop their own process.


GLOBALFOUNDRIES 7LP
As Dan Nenni previously discussed in his GlobalFoundries 7nm and EUV Update! blog 7LP (Leading Performance) will offer a greater than 40% performance improvement relative to 14nm or greater than 60% lower power. Area scaling will be approximately 2x and the die cost reduction will be greater than 30%, with greater than 45% in target segments. Initial customer products on 7LP are expected to launch in the first half of 2018 with volume production in the second half of 2018.

The 7LP process will be produced with optical lithography, and we now know that the Contacted Poly Pitch (CPP) will be 56nm and the Minimum Metal Pitch (MMP) will be 40nm, produced with Self-Aligned Double Patterning (SADP). A 6-track cell will be offered with a cell height of 240nm. The high-density 6T SRAM cell size is 0.0269 square microns. A 7LP+ process is also planned that will take advantage of EUV, when it is ready, to offer improved performance and density.
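As a quick sanity check on these figures, the quoted cell height follows the usual convention of track count times minimum metal pitch, and CPP x cell height is the density figure-of-merit used in the comparison below. A short Python sketch (the numbers are from the article; the variable names are mine):

```python
# 7LP figures quoted above; cell height = track count x minimum metal pitch
# is the standard convention for track-based standard cell libraries.
cpp_nm = 56          # Contacted Poly Pitch
mmp_nm = 40          # Minimum Metal Pitch (SADP)
tracks = 6           # 6-track standard cell library

cell_height_nm = tracks * mmp_nm
print(f"cell height: {cell_height_nm} nm")      # matches the quoted 240nm

# A common density figure-of-merit: the footprint of one placement site.
site_area_nm2 = cpp_nm * cell_height_nm
print(f"CPP x cell height: {site_area_nm2} nm^2")
```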

GLOBALFOUNDRIES is also in the unique position of providing an in-house ASIC platform, FX-7, on their 7LP process. FX-7 provides a comprehensive suite of tailored interface IP including High Speed SerDes (60G, 112G), differentiated memory solutions including low-voltage SRAM, high-performance embedded TCAM, integrated DACs/ADCs, ARM processors, and advanced packaging options such as 2.5D/3D.

Comparison to other processes

In my 14nm, 16nm, 10nm and 7nm – What we know now blog I compared Intel’s 10nm process to Samsung and TSMC’s 7nm processes. Due to the lack of available information on GLOBALFOUNDRIES’ 7nm process I didn’t include it. I can now add it to the comparison, but first I need to make a few updates to the data previously discussed.

Previously I used a 44nm CPP for Samsung, basing it off the IBM, Samsung, GLOBALFOUNDRIES IEDM 2016 paper. I am now hearing their actual CPP is 54nm. Of the four processes being compared, Samsung’s is the only one that will use EUV initially; it therefore has the latest risk production date of the four and likely the highest risk of missing that date (something Samsung appears to recognize with their recent announcement of an optically based 8nm process due to enter risk production in late 2017). The use of EUV should result in a lower mask count than the competing processes, and we are currently forecasting Samsung will use EUV for contacts, vias and metal block masks as part of a Self-Aligned Quadruple Patterning (SAQP) scheme for 1x metal layers.

In the same article, I used 54nm for TSMC’s CPP and although that is a claimed value for the process, I am hearing that their actual libraries have a 57nm CPP.

The following table compares the latest data we have for 10nm/7nm:

[1] IC Knowledge estimates.
Table 1. 10nm/7nm process comparison.

The data in the table illustrates the need for design-technology co-optimization (DTCO) at the leading edge. Intel and Samsung have the smallest CPP and MMP values, but because GLOBALFOUNDRIES and TSMC offer 6-track cells, they achieve smaller cell heights, and ultimately GLOBALFOUNDRIES has the smallest CPP x cell height value. Samsung achieves the smallest SRAM cell size and, through the use of EUV, we expect Samsung to have the lowest mask count.

Conclusion
GLOBALFOUNDRIES 7LP adds a competitive 7nm process to customer options for leading-edge design and production. The process parameters are competitive across the board and provide leading density. The availability of the FX-7 ASIC platform offers customers an additional engagement path not available at other foundries.


Embedded FPGAs create new IP category

by Tom Simon on 07-07-2017 at 12:00 pm

FPGAs are the new superstar in the world of Machine Learning and Cloud Computing, and with new methods of implementing them in SoCs there will be even more growth ahead. FPGAs started out as a cost-effective method for implementing logic without having to spin an ASIC or gate array. With the advent of the web and high-performance networking hardware, discrete FPGAs evolved into heavy-duty workhorses. The market has also matured and shaken out, leaving two large gorillas and a number of smaller players. However, the growth of AI and the quest for dramatically improved cloud server hardware seems to be expanding the role of FPGAs.


At DAC in Austin I came across Achronix, a relatively new FPGA company that is experiencing a renaissance. I stopped by to speak with Steve Mensor, their VP of Marketing. There was reason enough to speak with him because of their recent announcement that their YTD revenues for 2017 are already over $100M. This is largely the result of solid growth in their Speedster 22i line of FPGA chips. Achronix originally implemented this line at the debut of Intel’s Custom Foundry on the then state-of-the-art 22nm FinFET node. This gave them the distinction of being the first customer of Intel’s Custom Foundry.


Building on this, Steve was eager to talk about their game-changing IP offering of embedded FPGA cores – aptly named Speedcore eFPGA. These are offered as fully customized embedded FPGA cores that can be integrated right into system-level SoCs. To understand why this is important, we have to look at a recent research project by Microsoft called Catapult, with the goal of significantly boosting search engine performance. Microsoft discovered that there was a big advantage in converting a subset of the search engine software into hardware optimized for the specific compute operation. This advantage is amplified when these compute tasks can be made massively parallel – exactly the kind of thing that FPGAs are good at. They also studied the same approach for cloud computing with Azure and found performance benefits there too.

The next market factor that starts to make embedded FPGA cores look extremely attractive is neural networks. Both training and recognition require massive computing that can be broken into parallel operations. The recognition phase – such as the one running in an autonomous vehicle – can be implemented largely with integer operations. Once again this aligns nicely with FPGA capabilities. So if FPGAs can boost search engine and AI applications, what are the barriers to implementing them in today’s systems?

If you look at the current marketing materials for Altera and Xilinx you can see that they dedicate a lot of energy to developing and promoting their IO capabilities. Getting data in and out of an FPGA is a critical function. Examining the floor plan of an FPGA chip, you will see a large area used for programmable IOs. Of course, along with the large area resources used comes higher power consumption.


Embedding an eFPGA core means that interface lines can be directly connected to the rest of the design. With less area needed for each signal, wider buses can be implemented. Interfaces can run faster, now that interface SI and timing issues have been reduced with on-chip integration.

The other benefit alluded to earlier is that an eFPGA can be configured to achieve optimal performance. The adjustable parameters include the number of LUTs, embedded memories and DSP blocks. Customers get GDSII that is ready to stitch into their design. The tool chain for Speedcore eFPGAs can accommodate the custom configurations.


Steve told me that today the largest share of their impressive revenue is standalone chips, but by 2020 he expects 50% of their sales to be embedded. Another application for FPGAs is use as chiplets in 2.5D designs. But more on that in future writings.

Steve emphasized that designing FPGAs is pretty tricky. There are power and signal integrity issues that need to be solved due to their massive interconnect. Real improvement only comes over time, with years of experience optimizing and tuning the architecture. Steve suggested that many small improvements over time have added up to much better results in their FPGAs.

Right now it looks like Achronix is positioned to break away from the pack of smaller FPGA providers and potentially revolutionize the market. With this new approach, FPGAs can be said to have decisively transitioned from their early days as a glue logic vehicle to a pivotal component of advanced computing and networking applications. For more details on Achronix eFPGA cores take a look at their website.


LETI Days 2017: FD-SOI, Sensors and Power to Sustain Auto and IoT

by Eric Esteve on 07-07-2017 at 7:00 am

Last week I attended the LETI Days in Grenoble, which ran two days to mark the 50th anniversary of the CEA subsidiary. Attending the LETI Days is always a rich experience: LETI is a research center with about 3,000 research engineers, but LETI is also a start-up nursery. The presentations range from high-level communication, to panels involving actors from the semiconductor and electronics industry (like “Future Applications and New Technologies” with Rajesh PANKAJ, Qualcomm, Alain MUTRICY, Globalfoundries and Antun DOMIC, Synopsys), to start-up sessions – all the start-ups having come out of LETI!

We also had a very interesting keynote presentation from Jean-Marc CHERY, STMicroelectronics, delivering a very realistic message to the industry. STM no longer plays in the smartphone application processor game; that market is dominated by Samsung, Apple and the Chinese chipmakers on the design side, and by a couple of foundries (TSMC, Samsung…) heavily investing to support the most advanced FinFET nodes. But STM sensors are integrated in the major smartphones. STM is now developing innovative technology to support new power electronics devices, like SiC (Silicon Carbide), and supporting automotive with plenty of new devices, going up to ADAS through Mobileye support on FDSOI.

Instead of burning cash trying to stay in the very competitive race for maximum integration (application processors for smartphones, set-top boxes, etc.), capitalizing on the company’s strengths in sensors, microcontrollers and power electronics to address fast-growing markets (automotive, industrial IoT) looks like a wise decision for a mature semiconductor company… which is itself a start-up spun out of LETI long ago (1972)!

The next event was the press conference given by CEA-LETI CEO Marie SEMERIA and Fraunhofer Group for Microelectronics Chairman Hubert LAKNER, who had just signed an agreement to “…develop next generation microelectronics to strengthen European strategic and economic sovereignty.” This announcement is linked with the recent billion-euro investment from Bosch to build a 12” wafer fab in Dresden to deliver chips for IoT and mobility, backed by the German government, as well as a €300 million investment recently made by the French government in microelectronics.

We (the press) listened carefully as Marie Semeria and Hubert Lakner explained that the collaboration will focus on specific R&D projects like:

– Silicon-based technologies for next generation processes and products, including design, simulation, unit process as well as production techniques
– Extended More than Moore technologies for sensing and communication applications
– Advanced-packaging technologies

What I like about press conferences, as opposed to keynote talks, is that you can ask for clarification after the talk. My question was about the level of investment made in semiconductors in Europe, compared with the $100 billion claimed by the Chinese government, or even the investment made by TSMC in Taiwan to build a new fab for FinFET – in the $10 billion range for a single fab!

The answer from Marie Semeria was frank, and very interesting. She said that European electronics and semiconductor companies are no longer involved in the smartphone race, and shouldn’t try to pursue Moore’s law up to the technology limit; it simply requires too much cash. Let’s be realistic, and develop the technologies supporting the applications and systems where the European industry is strong, namely FDSOI, sensors and power devices.

This strategy will allow Europe to move up the value chain and manufacture in Europe the systems based on the above-mentioned devices and technologies. At this point, you may want to make a comparison with IMEC, the well-known Belgian research center. IMEC is involved in the development of the most advanced technologies (Moore’s law), and does it very well. Unfortunately, these technologies are transferred to wafer fabs and semiconductor companies based outside Europe…

FD-SOI: Power consumption, Performance and Cost
FDSOI technology can be a key differentiator; GlobalFoundries is implementing fabs supporting 22FDX and 12FDX in Europe, thanks to the licensing agreement with LETI.

Sensors
Many emerging applications, like ADAS in automotive or industrial IoT, require more and more sensors. The next step is to put intelligence into sensors. Companies like STMicroelectronics are focusing on sensors and ship them everywhere, including to the European industry.

Power Electronics
These devices will be used more and more, for example with the development of electric cars, industrial equipment and the IoT. Europe is well positioned in both the automotive and industrial segments. See also STM’s investment in SiC.

To conclude, I would like to remind you of the development of LiOT by LETI. LiOT is an IP platform that developers can use to design ASICs for their IoT applications. I saw a paper from LETI during IP-SoC in Grenoble and it was very good, so I suggested that LETI present a paper describing LiOT at DAC in Austin. This paper was well received by the audience and demonstrates LETI’s strong involvement in IoT.

And this paper was good enough to win the “DAC Best Paper Award” in the Design/IP track category!

From Eric Esteve from IPNEST


Webinar: Synopsys on Clock Gating Verification with VC Formal

by Bernard Murphy on 07-06-2017 at 12:00 pm

Clock gating is arguably the most widely used design method to reduce power, since it is broadly applicable even when more sophisticated methods like power islands are ruled out. But this style can be fraught with hazards even for careful designers. When you start with a proven-correct logic design and add clock gating, the logic (and timing) can change in ways which are not an intuitively simple extension of the original design, raising the need to validate that those changes have not broken the logic intent.

REGISTER HERE for Webinar on July 11th at 10am PDT

The problem sounds similar to validating that synthesized logic is functionally equivalent to the original RTL, and we know how to deal with that problem – run logic equivalence checking between the RTL and the synthesized netlist. But that doesn’t work for clock-gating checks because conventional equivalence checking (mostly) checks combinational logic equivalence between reference points. Clock gating structures, however, are inherently sequential; traditional equivalence engines don’t work on this kind of problem – you need to use sequential equivalence checking (SEQ), still a formal-based check but one allowing for cycle-shifted equivalence.
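To make the property concrete, here is a toy Python model (mine, for illustration only – not the VC Formal flow): a register with its enable folded into the data path as a recirculating mux, versus the same register driven by a gated clock, should produce identical outputs on every cycle. A sequential equivalence engine proves this symbolically; the sketch below just brute-forces short traces.

```python
import itertools

def ungated(seq):
    # Original design: enable folded into the data path (recirculating mux),
    # i.e. q <= en ? d : q on every clock edge.
    q, out = 0, []
    for d, en in seq:
        q = d if en else q
        out.append(q)
    return out

def gated(seq):
    # Clock-gated version: the flop only sees a clock edge when en is high
    # (gated clock = clk AND en, assuming a glitch-free latched enable).
    q, out = 0, []
    for d, en in seq:
        if en:
            q = d
        out.append(q)
    return out

# Exhaustively compare all (d, en) input sequences of length 4 – a
# brute-force stand-in for what a sequential equivalence check proves.
pairs = [(d, en) for d in (0, 1) for en in (0, 1)]
for seq in itertools.product(pairs, repeat=4):
    assert ungated(list(seq)) == gated(list(seq))
print("sequentially equivalent over all length-4 traces")
```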

Why not simply use dynamic verification to verify the correctness of this gating? You could but (a) you have to extend your testbenches to add directed tests for clock-gating, not only increasing the complexity of testing but also dramatically increasing verification run-times and (b) you have to consider all possible variations of switching versus functionality in each case. Or you could verify formally, which doesn’t impact your dynamic verification setup or run-times at all and is intrinsically complete. And, by the way, VC Formal SEQ is an app, so much easier to use than traditional formal. Hmm – which way to go? Maybe you should check out the webinar.

REGISTER FOR THIS WEBINAR to learn how you can use VC Formal to formally verify the sequential equivalence of a clock gated circuit with the original logic, to have full confidence that your logic intent has been preserved.


ADAS and Vision from Cadence

by Daniel Payne on 07-05-2017 at 12:00 pm

A huge theme at #54DAC this year was all things automotive, and in particular the phrase ADAS (Advanced Driver Assistance Systems), so I followed up with Raja Tabet, a corporate VP of emerging technology at Cadence. We met on Monday in a press room, where I quickly learned that Cadence has been serving the automotive industry for the past 15 years with IC tools, IP and services.

Cadence acquired Tensilica back in March 2013, gaining instant access to an application-specific acceleration platform with DSP features, easy customization, generated software and custom instructions. This technology fits quite well with ADAS features for vision, radar, Lidar and even sensor fusion. I’ve been following Tensilica over the years because my former roommate Chris Rowen founded the company, and we both worked at Intel starting in 1978.

Related blog – A Brief History of Tensilica

The latest version of the Tensilica vision DSP (P6) supports both imaging and vision functions. The recently announced Vision C5 is a CNN (Convolutional Neural Network) processor that gives you 1 TMAC/sec of computational capacity. What makes the C5 a bit unique is that in a single chip you get both a CNN processor and a CPU together, instead of separate components from different vendors. Here’s a quick snapshot of the Tensilica product offerings:

Customers would first conceive of a neural network for their application, train the network, then embed the network using software to compile it into Tensilica hardware. Competitive approaches in this space include vendors that use GPUs or hardware accelerators. With the approach in the C5 you get a power/performance metric that is quite high and is easy to program. With Cadence you now have the technology to create your own ADAS system.

Related blog – Cadence to acquire Tensilica

The automotive infotainment world is well served by standards; however, ADAS is so new that there are no standards out there to quickly choose from. Expect de facto standards to begin emerging as automotive companies form alliances or adopt specific vendor tools and flows.

Related blog –CPU, GPU, H/W Accelerator or DSP to Best Address CNN Algorithms?

From a design viewpoint there’s a lot to consider for the automotive market, like:

  • Functional safety
  • Compliance with the ISO26262 requirements, ASIL A to ASIL D
  • Fault simulation
  • Static and formal analysis
  • Fault analysis with emulation

The technology at Cadence has matured to the point of offering a comprehensive methodology for automotive design. Adjacent to the automotive market is a related market for robotic technology where imaging and vision are fundamental requirements for industrial robots. What intrigues me the most about neural networks is the ability for the machine to get smarter with each new encounter, or a new release of firmware that is downloaded over a wireless network.

Related blog – The CDNLive Keynotes

Along the path to fully autonomous driving, which is level 5, are the four lower levels of automation. No matter which level you are designing for, there are common questions:

  • Which architecture is the best one?
  • Centralized sensors or distributed intelligence per sensor?
  • Hybrid architecture?

The approach at Cadence is to let you decide how you want to implement ADAS by making your own architectural choices. In 2017 there are some 17 ADAS events that Cadence will participate in, so this opportunity is large and growing, which is always a good thing.


Nvidia Handles Data, Senate with Care

by Roger C. Lanctot on 07-04-2017 at 7:00 am

When speaking before the U.S. Senate or Congress one has to choose one’s words carefully. The temptation, when one is speaking before legislators and microphones and cameras, is to tell it like it is and speak truth to power. The reality is that the power of the legislators and the microphones and the cameras must be respected and, consequently, words must be chosen carefully.


HW and SW Co-verification for Xilinx Zynq SoC FPGAs

by Daniel Payne on 07-03-2017 at 12:00 pm

It constantly amazes me how much FPGA companies like Xilinx have done to bring ARM-based CPUs into a programmable SoC along with FPGA glue logic. Xilinx offers the Zynq 7000 and Zynq UltraScale+ SoCs to systems designers as a way to quickly get their ideas into the marketplace. A side effect of all this programmability and flexibility to design a system is the classic challenge of how to debug the HW and SW system before committing to a prototype or production.

You could use a traditional, sequential development flow where hardware designers code their RTL and verify using testbenches, simulation and BFM (Bus Functional Models). The software engineers would separately write applications and verify SW. Once the hardware is stable enough, then you could start to think about how the hardware and software integration should take place. A sequential development flow is going to take more time because of the number of iterations required, so this provides an impetus for a better approach.

The major point of a recent webinar from Aldec was to show a new co-simulation methodology that enables early communication between the hardware and software simulation environments. Here’s how co-simulation connects together the SW and HW worlds:

All of your programmable logic is modeled in RTL on the right-hand side within the HDL simulator named Riviera-PRO from Aldec. Your processor is emulated with the open-source QEMU (Quick EMUlator), which supports the popular A9 and A53 series of ARM processors. Connecting the processing system with the programmable logic for co-simulation is the Aldec QEMU Bridge. Some of the benefits of using this co-simulation idea are:

  • HW and SW integration takes place quite early
  • Improved visibility during debug of HW in Riviera-Pro

    • Break points
    • Use of Data Flow / Code Coverage
    • Waveform inspection
  • Improved visibility during debug of SW in QEMU

    • Using the GDB debugger with both driver and kernel models
    • Setting break points

Related blog – Aldec Swings for the Fences

The requirements for running this type of co-simulation include using a Linux computer with Riviera-Pro 2016.10 or later, the Xilinx QEMU, Xilinx Vivado, SystemC, a co-simulation license, Zynq Linux distribution and a device tree. Here’s a more detailed view of the co-simulation and how it works:

Our webinar presenter Adam Taylor showed how you use QEMU to actually boot Linux and then configure a pulse width, viewing the hardware waveform results in Riviera-Pro:

A hardware break point was set to demonstrate how co-simulation could be interrupted when a particular RTL line was reached:

Software break points were also set using GDB. All of these features showed how to validate and debug what is happening between a hardware and software system. They even showed how you could use evaluation boards like the TySOM series of boards after your co-simulation work had validated the system. There are a slew of daughter boards with TySOM to fit the specific SoC, memory and interfaces that your system dictates.

Summary
Powerful and programmable SoCs from Xilinx that contain ARM cores along with FPGA fabric can be designed quickly and validated early by using a co-simulation approach from Aldec that connects their Riviera-Pro simulator to the QEMU emulator for Zynq processors. Co-simulation helps you uncover HW/SW bugs quicker and earlier in the development cycle.

View the entire 45 minute archived webinar here online.


Capture the Light with Integrated Photonics

by Mitch Heins on 07-03-2017 at 7:00 am


I wrote a quick article in the weeks before the Design Automation Conference (DAC) letting readers know that integrated photonics was indeed coming to DAC again this year. As a follow-up, I attended the DAC presentation, ‘Capture the Light. An Integrated Photonics Design Solution from Cadence, Lumerical and PhoeniX Software’, given by Twan Korthorst, CEO of PhoeniX Software, at the Cadence Theater. Twan was also supported by Jonas Flueckiger of Lumerical Solutions, as both PhoeniX and Lumerical are part of the overall Electronic Photonic Design Automation (EPDA) flow that uses Cadence’s Virtuoso system as the cockpit. The presentation was well attended, with all seats at the theater filled. That, in and of itself, is significant given that the presentation started promptly at 10:00am when the exhibit floor doors opened for the day.

Twan started his presentation by setting context for how and where integrated photonics is emerging onto the scene. Photonics has been used for decades in long-haul communications but is now getting a new life, driven firstly by the mega data centers. Active optical cables (AOCs) with integrated photonic ICs (PICs) are supplanting copper cables for the longer reaches required of the mega data centers. Photonics is also key to the low-cost, low-energy 100Gbps connections now being installed in these centers. Additionally, integrated photonics is finding its way into high-performance computing, aerospace and high-end RF applications as well as sensors (both biosensors and environmental sensors). The advent of the Internet-of-Things with huge numbers of sensors may in fact be the real long-term high-volume driver for integrated photonics. Photonics is also a key enabler for quantum computing, which is just now starting to emerge.

Twan also took the audience through a brief tutorial on what is meant by integrated photonics. In a nutshell, integrated photonics entails guiding laser light through waveguides on a chip (something akin to fibers on a chip). Chip manufacturing for integrated photonics comes in many flavors, including indium phosphide, silicon, silicon nitride and variations on other III-V compounds. The big interest now is in silicon as a medium, since it promises to leverage the extensive manufacturing infrastructure already in place from more than three decades of CMOS processing. Silicon enables high-index-contrast waveguides with sub-micron dimensions and small bend radii. This translates into the capability to integrate tens of thousands of what were once expensive discrete optical components onto a single low-cost chip.

As mentioned earlier, Cadence, Lumerical and PhoeniX are now fielding a complete front-to-back EPDA flow, and, as is done in electronic design, the companies are leveraging the concept of process design kits (PDKs) to enable designers to capture and connect their photonic circuit designs using standard photonic building blocks and waveguides. At the simplest level, photonic components include lasers, electro-optic modulators, waveguides and photodetectors.

There are also components used to split, couple and switch light between waveguides, as well as to multiplex/demultiplex multiple wavelengths of light into or out of waveguides. The new EPDA design flow enables co-design of electronics and photonics, including schematic capture, co-simulation of electronics and photonics, schematic-driven layout of curvilinear photonic components and waveguides, back-annotation, design rule checking and GDSII generation, complete with polygon discretization for different foundries.

The EPDA design flow differs from standard EDA flows in that it has added features to deal with the fact that photons do not behave the same way electrons do. At 1550 nm, the wavelength commonly used in the communications market, the design flow is akin to working with electronics running at 193 THz (that's terahertz, with a T). Photonics requires dedicated simulation routines that can accurately simulate the bidirectional propagation of the light (including reflections) while also comprehending that the light is multi-modal, spans multiple frequency bands and accumulates phase shifts. In some cases, polarization is also comprehended and used. Photonics also requires smooth curvilinear bends in layout, and as such requires specialized algorithms to generate layout structures that will properly contain the modes of the light as it is manipulated throughout the circuit.
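As a back-of-envelope check on that 193 THz figure (an illustrative sketch, not part of the EPDA tools), the carrier frequency follows directly from f = c/λ, and the phase a signal accumulates along a waveguide from φ = 2πn_eff·L/λ. The effective index and length values below are invented for illustration:

```python
# Back-of-envelope photonics arithmetic (illustrative only; not part of the
# Cadence/Lumerical/PhoeniX tools). n_eff and the lengths are made-up values.
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def carrier_frequency_thz(wavelength_nm: float) -> float:
    """Optical carrier frequency f = c / lambda, in THz."""
    return C / (wavelength_nm * 1e-9) / 1e12

def accumulated_phase(wavelength_nm: float, n_eff: float, length_um: float) -> float:
    """Phase (radians) accumulated over a waveguide: phi = 2*pi*n_eff*L/lambda."""
    lam = wavelength_nm * 1e-9
    return 2 * math.pi * n_eff * (length_um * 1e-6) / lam

print(f"{carrier_frequency_thz(1550):.1f} THz")  # ~193.4 THz at 1550 nm
```

Even a 100 µm segment at a hypothetical n_eff of 2.4 accumulates hundreds of radians of phase, which is why circuit simulators must track phase, reflections and modes rather than treat a waveguide as a simple wire.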

As part of the presentation, Twan also ran a series of video snippets showing the design flow in action, including co-simulation of electronics and photonics using Cadence's Analog Design Environment (ADE) working in conjunction with Cadence's Spectre SPICE simulator and Lumerical's INTERCONNECT photonic circuit simulator. The companies have done a great job of integrating the tools: the test bench, stimulus and resulting waveforms were a natural extension of how analog electronic simulation is done today in the Cadence ADE flow. Similarly, video snippets were shown for the layout portion of the flow, where Virtuoso seamlessly called PhoeniX Software's OptoDesigner product in the background to produce and manipulate all-angle curvilinear layouts without the designer ever having to leave the Virtuoso layout GUI.

New for this demonstration were interfaces between the Cadence Virtuoso / PhoeniX layout and Lumerical's component-level simulation tools. These simulations include mode solvers, beam propagation and finite-difference time-domain (FDTD) algorithms used to model specific individual photonic components. Their output is used to create abstracted compact models for circuit-level simulations of circuits that would be far too large for component-level algorithms.
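To make the abstraction idea concrete (this is a hypothetical sketch, not Lumerical's actual model format), a component can be reduced to a complex transmission coefficient at a given wavelength, so a circuit-level simulator can chain components by simple multiplication instead of re-running FDTD for the whole circuit:

```python
# Hypothetical compact-model sketch: a straight waveguide segment reduced to a
# single complex field-transmission coefficient. All numeric values (n_eff,
# loss) are invented for illustration; real compact models are extracted from
# mode-solver/FDTD results and are frequency-dependent.
import cmath, math

def waveguide_t(length_um, n_eff=2.4, loss_db_per_cm=2.0, wavelength_nm=1550):
    """Complex field transmission of a straight waveguide segment."""
    lam = wavelength_nm * 1e-9
    L = length_um * 1e-6
    phase = 2 * math.pi * n_eff * L / lam       # accumulated optical phase
    amp = 10 ** (-(loss_db_per_cm * L * 100) / 20)  # dB/cm power loss -> field amplitude
    return amp * cmath.exp(-1j * phase)

# Circuit-level composition: the transmission of two cascaded segments is the
# product of their individual coefficients.
t_total = waveguide_t(50) * waveguide_t(150)
assert abs(t_total - waveguide_t(200)) < 1e-9
```

The payoff is scale: each coefficient is computed once from a component-level simulation, after which the circuit simulator only multiplies and sums them, which is what makes circuits with thousands of components tractable.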

Twan rounded out the presentation by announcing a two-day photonics summit to be held at the Cadence campus in San Jose in early September. The event will include one day of technical presentations discussing the challenges and progress toward implementing integrated photonic systems for a variety of end applications. The second day will include hands-on training on the EPDA framework, including a preview of how to do system design that combines electronic and photonic die with a laser using a photonic interposer, all within a single package.

It’s always good to see new technologies come along as it means growth and new opportunities for both companies and individuals. Seeing integrated photonics at DAC is yet another sign that this technology is here to stay.

See also:
Cadence Photonics Solutions Web Page
Lumerical Solutions Web Page
PhoeniX Software Web Page