
Bringing Formal Verification into Mainstream
by Pawan Fangaria on 04-28-2016 at 7:00 am

Formal verification can provide a large productivity gain in discovering, analyzing, and debugging complex problems buried deep in a design, problems that may be suspected but are not clearly visible or identifiable by other verification methods. Historically, formal methods have not been widely used because of their perceived complexity and the expertise and effort required to prepare the design and testbench for formal verification. With growing design complexity, however, formal verification is fast becoming an essential part of verification methodologies, catching the bugs that simulation and other verification engines are prone to miss even after a large number of cycles.
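
To make the contrast concrete, here is a minimal, purely illustrative sketch in Python of why exhaustive state-space search finds deep corner-case bugs that random stimulus almost never reaches. The toy FSM and the buggy state are invented, and this is not how any commercial formal engine works.

```python
import random

def step(state, stimulus):
    """A tiny FSM: a saturating counter that can also be reset."""
    return 0 if stimulus == "rst" else min(state + 1, 1000)

def property_ok(state):
    # The "bug": the property is violated only in one deep corner state.
    return state != 777

def simulate(cycles):
    """Random simulation: random resets make deep states statistically rare."""
    state = 0
    for _ in range(cycles):
        state = step(state, random.choice(["rst", "inc"]))
        if not property_ok(state):
            return True                  # bug found
    return False

def exhaustive():
    """Formal-style exhaustive search: visit every reachable state once."""
    seen, frontier = {0}, [0]
    while frontier:
        s = frontier.pop()
        if not property_ok(s):
            return True                  # the bug cannot hide
        for stim in ("rst", "inc"):
            n = step(s, stim)
            if n not in seen:
                seen.add(n)
                frontier.append(n)
    return False

print("random 10k cycles found bug:", simulate(10_000))  # almost always False
print("exhaustive search found bug:", exhaustive())      # always True
```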

Not all designers and verification engineers have the expertise required to prepare design code for formal verification. The few engineers who have built up that expertise over years of use are usually preoccupied with many other tasks and cannot be deployed broadly enough to realize the benefit formal verification could deliver on large SoCs at the enterprise level. The result is ‘ad hoc’ formal verification by a handful of experts, and the real potential of formal verification remains underutilized across the organization.

What if we had an intelligent methodology and an intelligent, high-capacity, easy-to-use tool that automated the work needed to leverage formal verification? We could then use the skills of the formal experts to codify the methodology, making formal verification available to all designers and verification engineers without requiring deep expertise to introduce formal constructs into the design code. Formal verification could then best complement simulation and other verification engines for faster verification closure of large SoCs.

I am very impressed with VC Formal, the brand new formal verification tool launched by Synopsys. It makes it very easy for a verification engineer to set up a design for formal verification, then run and debug it in quick steps.


In addition to formal property checking, VC Formal provides various apps that require minimal setup and are quick and easy to run. They are all seamlessly integrated with the Synopsys verification and debug environment. For example, VC Formal and its apps use the same design parser and compiler as VCS, perform quick code analysis, report unreachable code, and generate formal goals for code coverage. The apps include solutions such as Formal Coverage Analyzer, Connectivity Checking, Sequential Equivalence Checking, and so on.

Recently, an app-based approach to formal verification has become very popular and is employed in a few tools on the market; however, its effectiveness and success depend on ease of use, tool capacity and performance, and an effective interface with simulators and the debug system. VC Formal has the large capacity and high performance that can be leveraged for the exhaustive analysis hard-to-catch bugs require, and of course for quick checks and analysis.

For debug, VC Formal uses Verdi, the most familiar debugging platform in the verification community. This provides design view, verification task management, progress status, waveforms, constraint generation, etc. on a single platform already familiar to design and verification teams, thus making it very effective for reproducing and debugging deep-rooted problems.

Just as important in a formal verification system is a strong methodology that defines the flow with the particular designs and their users in mind: what to apply where, and under which circumstances. This is best done by experienced formal verification experts, who can define a strategy for identifying the design blocks where formal verification will be most effective. Simple code coverage, or any criterion other than verification closure, is not an effective measure for formal verification. And the formal strategy and methodology for a particular design flow can best be realized by a strong tool with high capacity, speed, strong formal engines with intelligent algorithms, and easy debugging.

Talking to Prapanna Tiwari, Senior Manager, Formal Verification Product Marketing at Synopsys, I learned that VC Formal employs brand new technologies that build an optimized formal model for faster operation on tasks such as SoC-level connectivity checking and deep bug hunting in corner cases, among many others. There are also many usability features that support effective interaction with designers and improve their productivity significantly.

Prapanna says the core engines in VC Formal were developed by a team of experienced professionals with deep expertise in formal verification, and the GUI has been designed so that a defined formal methodology can be realized most effectively. The tool is already in use at major design houses, which have seen significant improvement in the time to verification closure of their designs.

There is an online webinar, Increasing Verification Closure Effectiveness with Formal Verification, first delivered live last week by Dr. Sean Safarpour, Formal Verification CAE Manager at Synopsys. Dr. Safarpour presents in great detail how to approach formal verification in general, the app engines and capabilities of VC Formal, the GUI, and the debugging features with Verdi.

Dr. Safarpour also explains how to set up a formal testbench, assertion setup and debug, and the analysis of over-constraints through a real example. It’s worth attending this online webinar to learn about formal verification in general and the specifics of VC Formal.

More Articles from Pawan


Tcl scripts and managing messages in ASIC & FPGA debug
by Don Dingee on 04-27-2016 at 4:00 pm

Our previous Blue Pearl post looked at the breadth of contextual visualization capability in the GUI to speed up debug. Two other important aspects of the ASIC & FPGA pre-synthesis workflow are automating analysis with scripts and managing the stream of messages produced. Let’s look at these aspects… Continue reading “Tcl scripts and managing messages in ASIC & FPGA debug”


Metric-Driven Verification for System Signoff
by Bernard Murphy on 04-27-2016 at 12:00 pm

Everyone knows that verification is hard and is consuming an increasing percentage of design time and effort. And everyone should know that system-level verification (SoC plus at least some software, and maybe models for other components on a board) is even harder—which is why you see hand-wringing over how incompletely tested these systems are compared to expectations at the IP level.

The problem is not so much the mechanics of verification. We already have good and increasingly well-blended solutions spanning simulation, emulation, and FPGA prototyping. The concepts of UVM and constrained-random test generation are now being extended to software-driven test generation. There are now tools to enable randomization at the software level, and the developing Portable Stimulus (PS) standard will encourage easy portability of use cases between platforms, trading off super-fast analysis against detailed analysis for debug.

The problem, at the end of the day, is that tools only test what you tell them to test (with possibly some randomization spread). Thanks to the effectively boundless state space of a system-level design, conventional views of test coverage are a non-starter. The burden of showing how well the design has been tested falls on a (hopefully) well-crafted and vigorously debated verification plan. This has to comprehend not only internally-driven and customer-driven requirements but also new standards (such as ISO 26262) and regulatory requirements (e.g. for medical electronics).

Once you have that verification plan, you can quantify how far you have progressed toward “done” and more accurately predict functional closure. That depends on unambiguous metrics for each line in the verification plan: functional metrics, coverage metrics at the software and hardware levels, register and transaction metrics, performance and latency metrics, connectivity metrics for the top level and IOs, safety and security metrics, all quantifiable and unambiguous for every aspect of the design. For each of these areas, an agreed-upon section of the plan organizes and breaks the work down into a long list of sub-items that make up the actual details of the testing. This is metric-driven verification (MDV), now applied at the system level.
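
As a back-of-the-envelope illustration of what such a roll-up looks like, here is a small Python sketch. The plan line items, sources, and numbers are invented for illustration; this is not vManager's data model.

```python
from dataclasses import dataclass

@dataclass
class Metric:
    name: str      # line item in the verification plan
    source: str    # simulation, formal, emulation, prototyping...
    hits: int      # goals achieved so far
    goal: int      # total goals for this line item

plan = [
    Metric("IP functional coverage",  "simulation",  4210, 5000),
    Metric("SoC connectivity checks", "formal",       960, 1000),
    Metric("register access tests",   "emulation",    780, 1200),
    Metric("top-level IO toggles",    "prototyping",  310,  400),
]

# Per-line-item progress, then the overall roll-up across all sources.
for m in plan:
    print(f"{m.name:<28}{m.source:<14}{100 * m.hits / m.goal:5.1f}%")

overall = 100 * sum(m.hits for m in plan) / sum(m.goal for m in plan)
print(f"{'overall progress':<42}{overall:5.1f}%")
```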


Cadence Incisive vManager provides a platform to define the test plan (vPlan in Cadence terminology) in an executable format, and also provides the means to roll-up current stats from each of the verification teams and each of the many sources of verification data. You get an at-a-glance view of how far you have progressed on each coverage goal and you can drill down wherever you need to understand what remains to be done and perhaps where you need to add staffing to speed up progress.

One very interesting example where new tools are playing a role in getting to coverage closure is the increasing use of formal to test for unreachable cases. When you see in vManager that coverage on some aspect has flattened at something below 100%, it may be time to deploy formal proving to see if that last few percent is heavily populated by unreachable states. If so, you can call a halt to further testbench development because you know you are done.
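
Here is a tiny numeric sketch of that effect; the bin names and the unreachable set are invented, and in a real flow the exclusion list would come from a formal proof.

```python
bins = {"idle", "run", "halt", "err_a", "err_b", "dbg_only"}
hit  = {"idle", "run", "halt", "err_a"}     # covered in simulation
unreachable = {"dbg_only"}                  # proven unreachable by formal

raw      = 100 * len(hit) / len(bins)
adjusted = 100 * len(hit) / len(bins - unreachable)
print(f"raw coverage:     {raw:.1f}%")       # 66.7% -- looks stuck
print(f"after exclusion:  {adjusted:.1f}%")  # 80.0% -- err_b is the real gap
```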

While metric-driven verification is not new, Cadence Design Systems pioneered vManager in 2005 to encapsulate these concepts, driving a more automated and verifiable foundation for system signoff. Instead of using spreadsheets, you can roll up coverage from simulation, from static and formal analysis, and from software-driven verification on emulation and prototyping platforms. Cadence plans to keep integrating more metrics across this broad spectrum of technologies to further augment the capability.

To learn more about vManager, click HERE.

More articles by Bernard…


No reason for FD-SOI Roadmap to follow Moore’s law!
by Eric Esteve on 04-26-2016 at 4:00 pm

We at SemiWiki have been writing about FD-SOI since 2012, describing the benefits the technology offers in terms of power consumption, price per performance compared with FinFET, and more. Let me state again that I am fully convinced FD-SOI is a very smart and efficient way to escape the Moore’s law paradox (transistor cost is increasing for FinFET technology nodes below 20 nm), and that I expect FD-SOI to see market adoption.

But I think some people are confused when dealing with FD-SOI. When you see a picture like this “SOI Roadmap” (from VLSIResearch), it looks as if the designer simply copied the bulk roadmap and pasted it with a two-year shift. Even if 28 nm and 22 nm FD-SOI become successful technologies, which I hope, it will take some time for the foundries supporting these nodes to generate enough ROI before investing in the way this graphic describes.

As of today, the bulk technology roadmap and production status are well known: 28 nm is in full production, 14/16 nm as well, and chips are in design at 7 nm and 10 nm, with some probably already taped out at 10 nm. If we focus on 14/16 nm, the very high runners, such as application processors for mobile, PC processors, and data center SoCs, probably represent most of the foundries’ (or Intel’s) revenues for this node, and these revenues are really high, thanks to Apple, MediaTek, Qualcomm, Samsung, and more.

As of today, the only application processor targeting FD-SOI was demonstrated by STMicroelectronics in 2013; unfortunately, the company has since exited the mobile market. To be clear, none of the high-runner chips listed above has targeted FD-SOI, and the reason is most probably risk aversion. FD-SOI at advanced nodes is perceived as completely new (even if this is not true; see IBM), and deciding to move such a healthy business to a new technology is simply too risky.

As a result, the heavy investment foundries make to develop new FinFET nodes can be recovered quickly, thanks to fast and large ROI from the top semiconductor players, who need to design ever larger, faster, lower-power SoCs to keep their market share in a couple of very lucrative applications (mobile APs, PC processors, and data center SoCs), and who can afford the incredibly high development cost of the 14/16 nm and 10 nm FinFET technology nodes.

Looked at from another angle, that leaves many more chips that will NOT be developed on these overly expensive nodes: SoCs for automotive application processors (infotainment, smart vehicle…), for consumer applications, and many more. These chips need high performance (but not the highest possible), low power (in some cases, like IoT, the lowest possible), and a unit cost as low as possible. Because the addressed markets are in the millions or tens of millions of units rather than hundreds of millions, the NRE and development cost have to be kept reasonable, and certainly not in the hundreds of millions of dollars as for advanced FinFET SoCs.

FD-SOI adoption has started! Samsung forecasts 10 tapeouts on 28 nm FD-SOI this year, Sony is shipping a GPS chip (announced at the Tokyo SOI Forum in January 2015), NXP has adopted 28 nm FD-SOI for its two new platforms (iMX-7 and iMX-8), and GlobalFoundries is well engaged with its 22FDX process. FD-SOI provides such an advantage in power consumption, thanks to its forward-biasing capability, that we can expect the technology to penetrate many segments where low power and low cost of ownership (unit price plus NRE) matter more than ultimate performance.

But it will take time, and I think that ending the 28 nm FD-SOI roadmap in 2022 (as in the picture) doesn’t make sense. Since volume production can only be expected to start in 2017 for chips taped out this year, it would not be surprising if 28 nm FD-SOI production lasted 10 years, through 2027. This also means it will take longer to collect enough ROI to be in a position to offer 14 nm FD-SOI. Why not offer 40 nm or even 65/55 nm FD-SOI instead? It would make sense for SoCs integrating mixed-signal and digital, addressing applications like low-cost IoT edge devices… If you agree with this position, you will also challenge the above forecast, at least the $6B figure for 16/14 nm FD-SOI production in 2017 or 2018…

Let’s come back to the initial question: why should FD-SOI technology follow Moore’s law, when the technology was defined as a way to escape the Moore’s law paradox (higher transistor price at lower nodes)? I think the only reason is a lack of creativity: duplicating for FD-SOI a roadmap concept validated for FinFET, even though the market dynamics are completely different. The technology and marketing people who defined and marketed FD-SOI tried to think outside the box; analysts should step outside their comfort zone too, and not just resell the same framework. What is valid for FinFET may be completely different for FD-SOI…

Eric Esteve from IPNEST


SpyGlass DFT ADV accelerates test closure – Xilinx and Synopsys webinar
by Bernard Murphy on 04-26-2016 at 12:00 pm

Fed up with ECOing your way out of test problems? You might want to register for this webinar. When you’re building monster SoC FPGAs, you have all the same problems you have with any other SoC. That includes getting to very high test coverage as quickly as you can on a design targeted at the most advanced processes. We’re not just talking basic stuck-at coverage, which has to be very high. You also need high at-speed coverage and, in these days of test compression, you have to fix areas of random pattern resistance, which otherwise force longer test sequences and therefore longer test chains, undoing a lot of the advantages of compression.

Even basic stuck-at testability problems can be challenging to find in large designs using IPs from multiple sources. Problems limiting at-speed testability and random pattern resistance are simply too hard to track down without automation. That’s where Synopsys SpyGlass™ DFT ADV comes in. This tool will find coverage problems of multiple types and will help you isolate root causes; you can fix these through logic changes or by adding testpoints.
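
For intuition about what stuck-at coverage actually measures, here is a toy Python fault-coverage calculator over an invented three-gate netlist. It is only a sketch of the concept, not SpyGlass DFT ADV's algorithm.

```python
from itertools import product

# Tiny combinational netlist: each internal net is driven by one gate.
GATES = {
    "n1": ("AND", ["a", "b"]),
    "n2": ("OR",  ["n1", "c"]),
    "y":  ("NOT", ["n2"]),
}
INPUTS = ["a", "b", "c"]

def evaluate(pattern, fault=None):
    """Evaluate output y for a pattern; fault is (net, stuck_value) or None."""
    cache = dict(pattern)
    def val(net):
        if fault and net == fault[0]:
            return fault[1]              # stuck-at overrides the real value
        if net in cache:
            return cache[net]
        op, ins = GATES[net]
        bits = [val(n) for n in ins]
        out = {"AND": all(bits), "OR": any(bits), "NOT": not bits[0]}[op]
        cache[net] = out
        return out
    return int(val("y"))

patterns = [dict(zip(INPUTS, bits)) for bits in product((0, 1), repeat=3)]
faults = [(net, sv) for net in INPUTS + list(GATES) for sv in (0, 1)]

# A fault is detected if some pattern makes the faulty output differ.
detected = [f for f in faults
            if any(evaluate(p) != evaluate(p, f) for p in patterns)]

print(f"stuck-at coverage: {len(detected)}/{len(faults)} "
      f"= {100 * len(detected) / len(faults):.1f}%")
```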

I know something about this technology since I was CTO at Atrenta for 15 years. SpyGlass DFT has become a must-have in many design for test groups in some of the biggest semiconductor and systems companies in the industry.

Learn more about how Xilinx uses this technology on their biggest and baddest SoC FPGAs to save weeks of test-problem ECOs. Save the date: April 28, 10 am Pacific.

You can register for this event HERE.

Web event: Xilinx and Synopsys Present: Meeting Test Goals Faster with SpyGlass DFT ADV
Date: April 28, 2016
Time: 10:00 AM PDT
Duration: 45 minutes

Early detection of testability issues can prevent major bottlenecks downstream and avoid time-consuming design iterations. In this webinar, Synopsys presents new techniques and capabilities available in SpyGlass DFT ADV such as high-impact test points to boost coverage, reduce the number of patterns, and minimize test costs. Our guest speaker from Xilinx discusses test challenges associated with large SoC designs such as the Xilinx Zynq UltraScale chip family, and illustrates how SpyGlass DFT ADV addresses testability issues early in the design flow, saving weeks of complex DFT-related ECOs.

Speakers:

Amitava Majumdar
Principal Engineer, Programmable Platforms Group, Xilinx, Inc.

Amitava (Amit) Majumdar is with the High Speed Products Division of Xilinx’s Programmable Platform Group, responsible for defining unified DFx methodologies for digital and mixed-signal IPs across Xilinx’s SoC products. These methodologies include test, silicon and application debug, characterization, yield, and error-tolerance capabilities in the presence of security, safety, and power management features. After a brief stint as an EE faculty member at SIU-C, Amit moved into industry, working on various DFx topics at Crosscheck, Apple, Viewlogic, Synopsys, Sun Microsystems, Stratosphere Solutions, and AMD prior to joining Xilinx. Amit has worked on 50+ successful tape-outs in roles ranging from front-end design to post-silicon work as an engineer and manager. He has a wide range of interests, from statistical circuit design to data compression, machine learning, and other multi-dimensional optimization problems. Amit received a BE (Hons) degree in Electrical and Electronics Engineering from BITS Pilani, an MS degree in Electrical and Computer Engineering from UMass Amherst, and a PhD in Electrical Engineering from USC.

Anthony Joseph
Applications Engineer, Synopsys

Anthony “Al” Joseph has over 30 years of experience in ASIC design and verification, with the most recent 15 years focused on the SpyGlass RTL signoff platform. Anthony is currently a Senior CAE for the SpyGlass DFT ADV product family at Synopsys.

Dmitry Melnik
Marketing Manager, Synopsys

Dmitry Melnik is a Product Marketing Manager in Synopsys’ RTL Synthesis and Test group. He has more than 10 years of combined experience in EDA R&D, field applications and product management. He holds an MS degree in Computer Systems Engineering from KNURE, Ukraine.


Stop the Dashboard Insanity!
by Roger C. Lanctot on 04-26-2016 at 7:00 am

Speaking as part of the digital track at this week’s NAB confab, John Ellis proclaimed the demise of the dashboard radio in the coming world of automated vehicles. The headline reporting his talk in Tom Taylor’s newsletter was “Radio is on a path to extinction in the vehicle.” There’s no point in being subtle if you’re John Ellis, especially if you are addressing the deer in the proverbial digital headlights at NAB.

John makes an essential and legitimate point: the rise of car-sharing and ride-hailing services and increasingly automated driving machines will steadily nudge the content-consuming public toward a BYOD approach to content reception. This means radio needs to make the leap to mobile devices via solutions such as NextRadio, now adopted by every wireless carrier in the U.S. with the sole exception of Verizon.

Quoth Ellis: “In an autonomous or shared car, there does not need to be a traditional head unit,” including the familiar AM/FM dial. “Occupants will bring in all their own content. Thus, no radio in the vehicle.”

As a solution, among other things, Ellis recommends adopting the “SmartDeviceLink” standard from Ford and Livio, recently endorsed by Toyota. The point of SmartDeviceLink is to enable digital content acquisition in any car (or anywhere?) with any device.

SmartDeviceLink is specifically for enabling access to smartphone-based apps and services via a smartphone connection in a car. The current landscape of smartphone connectivity solutions encompasses everything from Alphabet’s Android Auto and Apple’s CarPlay to MirrorLink, IviLink, WebLink, PhoneLink, MyLink, IntelliLink, HondaLink and, yeah, the list goes on.

The beauty of SmartDeviceLink is that it has the overt support of both Ford and Toyota, but behind the scenes momentum is building for much wider support. Collaboration has already begun between OEMs, an almost unheard-of phenomenon.

The allure of SmartDeviceLink? A massive roster of already enabled applications and services, compatibility with Apple iOS and Alphabet’s Android and, soon, OEM independence.

But the real core of the SmartDeviceLink solution is differentiation. Car makers are quickly – and finally – learning that undifferentiated solutions conceived by non-automotive suppliers – Apple, Alphabet, Baidu (CarLife) – are nothing more than a dead end.


If you’re Mercedes-Benz, why would you want a dashboard experience that looks like Volkswagen’s? It makes no sense. It makes even less sense when car makers take into account the low priority ascribed to the automotive industry by the Apples, Alphabets, and Baidus of the world.

The tipping point may well be J.D. Power’s new report on smartphone mirroring solutions. The press release states:

“Findings from the J.D. Power 2015 U.S. Tech Choice StudySM demonstrated below average preference in Apple CarPlay (92) and Android Auto (90), where 100 is average, even with smartphone ownership taken into account. Compare this to the top rated technology from 2015, Blind Spot Detection and Prevention at a preference rating of 225, to see that there is an uphill battle to communicate the benefits that Smartphone Mirroring provides to consumers.”

Strategy Analytics research has consistently identified safety as a much higher priority than infotainment. But what could be worse than UNDIFFERENTIATED infotainment? That is a negative, not a plus.

SmartDeviceLink, in contrast, allows for connecting Apple and Android-based devices but its key virtue is that it provides a framework within which car companies can create differentiated and brand-specific user experiences. And those experiences can be infused with vehicle sensor data and the related contextual information – something most car makers have withheld from Apple, Alphabet and Baidu.

Something of a footnote in this debate is the impending demise of MirrorLink. Volvo, GM, and Daimler have all turned away, citing interoperability challenges, the limited roster of compatible phones, and MirrorLink’s inability to work with Apple phones. MirrorLink won’t go away, but it will be increasingly difficult to find and even harder for car dealers to explain and sell.

SmartDeviceLink is rapidly emerging as the go-to smartphone integration platform. Competing smartphone integrators such as Abalta and Airbiquity have read the writing on the wall and enabled their own SmartDeviceLink compatible solutions. It’s definitely time to forget the bollocks.

Roger C. Lanctot is Associate Director in the Global Automotive Practice at Strategy Analytics. More details about Strategy Analytics can be found here: https://www.strategyanalytics.com/access-services/automotive#.VuGdXfkrKUk


Fast Track to a reconfigurable ASIC design
by Don Dingee on 04-25-2016 at 4:00 pm

Licensing IP can be a pain, especially when the vendor’s business model has front-loaded costs to get started. Without an easy way to evaluate IP, justifying a purchase may be tough. With more mid-volume starts coming for the IoT, wearables, automotive, and other application segments, it’s a growing concern. Flex Logix is doing something… Continue reading “Fast Track to a reconfigurable ASIC design”


Data Security: Magic vs. Common Sense
by Daren Klum on 04-25-2016 at 12:00 pm

I remember when I was a kid and my dad would perform magic tricks. His magic was so bad, but at the time I thought it really worked and was real. You know the trick: take a coin, put it in your hand, wave your other hand over the coin, say ‘abracadabra’, slip the coin into the other hand when the person isn’t looking, and voilà, the coin disappears. Then my dad would pretend to pull the coin out of my mouth, nose, or ear. IT WAS MAGIC! Well, it was magic until my inquisitive three-or-four-year-old mind figured out what he was doing. Then it wasn’t magic at all but rather a really stupid trick.

I share this story because this is how I view the data security industry right now. Most solutions in the security space force you to believe in some kind of magic an ordinary person can’t understand, but as we are learning, the magic has a lot of fatal flaws. In fact, at the rapid pace the magic is getting hacked, it’s very clear the days of magic are over: back doors in encryption, brute-force attacks using supercomputing, passwords that can be socially engineered, SQL injection, packet sniffing, spear phishing, malware attacks, and of course crypto-locking. There is simply no end to the flaws in the current magic we use to secure our data. Sadly, the magic that has been our tried-and-true standard, math (encryption) and secrets (passwords), is dead!

So what is the answer? To me it comes down to what I call the four new pillars of data security: data conversion, data randomization, data segmentation, and physical/multi-factor authentication. The solution my company, Secured2, has built rests on these pillars. For instance, we convert data into a random format, we randomize the data so it’s totally illegible to any hacker, and then we segment the data into 10k chunks that are randomly delivered to multiple destinations of your choice (multiple clouds, hybrid, or local). To restore the data for use, you have to physically authenticate through biometric, retinal, voice, or other forms of multi-factor authentication. This simplicity stands in huge contrast to the exposed magic tricks of the tired old solutions. Wouldn’t you agree it’s common sense that when data is converted, shredded, randomized, and delivered to multiple locations, it’s vastly more secure than today’s magic of complex math, secrets, and data aggregated at single endpoints? Common sense and simplicity always win!
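
As a purely conceptual Python toy of the shred-randomize-distribute idea described above (it is not Secured2’s implementation, and on its own it is not a secure design):

```python
import random

CHUNK = 4    # the article mentions 10k chunks; tiny here for readability
STORES = 3   # stand-ins for multiple clouds / hybrid / local destinations

def shred(data: bytes):
    """Split data into chunks, randomize their order, scatter across stores."""
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    order = list(range(len(chunks)))
    random.shuffle(order)                     # randomize chunk order
    stores, recipe = [[] for _ in range(STORES)], []
    for slot, idx in enumerate(order):
        dest = slot % STORES                  # scatter across destinations
        stores[dest].append(chunks[idx])
        recipe.append((idx, dest, len(stores[dest]) - 1))
    return stores, recipe                     # the recipe guards reassembly

def restore(stores, recipe):
    """Reassemble the original bytes; only possible with the recipe."""
    out = [b""] * len(recipe)
    for idx, dest, pos in recipe:
        out[idx] = stores[dest][pos]
    return b"".join(out)

stores, recipe = shred(b"attack at dawn, bring coffee")
assert restore(stores, recipe) == b"attack at dawn, bring coffee"
```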

Even nature has figured out how to secure data better than we do today, and ironically it uses the same pillars of security we use at Secured2. Just think of your brain. Data is not stored whole, as it is today on a hard drive or in the cloud; it’s spread around your brain in little bits. The minute you want to access ‘secured’ information, you simply make a request, and your brain gathers all the bits randomly spread across it into the data you choose to share. Your brain then determines what level of information to share based on your level of trust in the person you are sharing with (always physical identification). In looking at how we as humans already handle security, all we are doing at Secured2 is mimicking this form of security for a digital world.

It’s my belief that security is going through a major shift, and companies like Secured2 offer a first glimpse into its future. One thing is clear: the definition of insanity is doing the same thing and expecting a different outcome. What we are using today isn’t working, and solutions like Secured2 provide a viable alternative to the mess we find ourselves in.


Would Sauron have made the One Ring if he had known about Plasmonics?
by Mitch Heins on 04-25-2016 at 7:00 am

In J.R.R. Tolkien’s novel ‘Lord of the Rings’, the Dark Lord Sauron created the “One Ring” as the ultimate weapon to conquer all of Middle-earth. So too it seems that in the world of integrated silicon photonics, the “ring” has become somewhat ubiquitous and powerful. Resonance rings can be made to modulate laser light, act as filters and switches and in some cases even be used as on-chip laser light sources.

Optics are considered to be one of the most viable solutions to the performance limitations of electrical interconnects. Integrated CMOS photonic solutions are arguably one of the most promising approaches for high bandwidth off and on-chip communications. Light modulation is key to any optical interconnection system as it converts electrical data into the optical domain. It is typically realized by changing carrier concentrations (holes and electrons) to affect the refractive index of the waveguide material, which, in turn, is used to modify the propagation velocity of light and the absorption coefficient in the waveguides. Optical modulators can modify phase, amplitude and polarization by thermo-optic, electro-optic, or electro-absorption modulation and they are usually based on interference (Mach-Zehnder interferometers – MZIs), resonance (rings or quantum well resonators) and bandgap absorption (germanium and now graphene-based electro-absorption modulators).

MZIs are probably the most well-known modulators and have played a major role in silicon-photonic based 100 gigabit optical transceivers for data center communication (see www.luxtera.com, www.kotura.com). They work by splitting an optical path into two parallel arms and then changing the index of refraction in one arm to induce a phase shift of the light. The light from the two arms re-unites and interferes either constructively or destructively allowing the light to be modulated. These devices are relatively large (several millimeters) and have energy dissipation of around 1-5 pJ/bit, two orders of magnitude higher than the 2-50 fJ/bit expected for on-chip communications.
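
For reference, the idealized transfer function behind that constructive/destructive interference is the textbook raised-cosine response, where Δφ is the phase difference induced between the two arms over interaction length L:

```latex
% Ideal MZI intensity transfer function
\[
I_{\mathrm{out}} = I_{\mathrm{in}} \cos^2\!\left(\frac{\Delta\phi}{2}\right),
\qquad
\Delta\phi = \frac{2\pi}{\lambda}\,\Delta n_{\mathrm{eff}}\,L
\]
```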

Back to “rings”. Resonance-based modulators are typically made up of silicon waveguide rings integrated with a PIN junction to enable electronic control of their refractive index. The rings are coupled to linear waveguides that serve as data buses. Light from the input waveguide whose wavelength matches the resonance of the ring couples into the ring and builds up in intensity over multiple round trips due to constructive interference. This light is then output to a second, detector waveguide. If critical coupling is achieved, light of the selected wavelength will not propagate past the ring, effectively stopping propagation of that wavelength on the input bus. The PIN junction is used to modulate the ring’s index of refraction, enabling the ring to modulate light on the input bus as well as to act as a switch that moves the selected light onto other buses.
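
The wavelength selectivity comes from the textbook round-trip phase condition for a ring of radius R and effective index n_eff; the spacing between adjacent resonances (the free spectral range, with group index n_g) follows from it:

```latex
% Resonance when the round-trip optical path is an integer m of wavelengths
\[
m\,\lambda_m = n_{\mathrm{eff}} \cdot 2\pi R
\]
% Free spectral range (spacing between adjacent resonances)
\[
\Delta\lambda_{\mathrm{FSR}} \approx \frac{\lambda^2}{n_g \cdot 2\pi R}
\]
```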

The real power of the ring is that it enables wavelength-division multiplexing (WDM). WDM uses different wavelengths of light to simultaneously send multiple independent data signals down the same waveguide, effectively multiplying bus bandwidth by the number of wavelengths employed. Ring resonators are uniquely suited to WDM because each resonator interacts only with the wavelengths that correspond to its resonant modes. These devices have extremely small footprints (several microns), which results in low-power operation and permits integration of thousands of them on a single die.

Dense WDM modulation can be accomplished by cascading microring modulators on the same waveguide. Columbia University experimented with multiple ring-cascade architectures for a TDM-based bus connecting multiple cores on the same die and showed effective bandwidths of up to 600 Gbps, depending on the number of core sites per switching cluster.

Now the thing that might make Sauron rethink the “ring” as his ultimate weapon, at least for light modulation, is an electro-absorption modulator (EAM); specifically, an Indium-Tin-Oxide (ITO) hybrid plasmonic EAM. This class of transparent conductive oxides has been found to allow unity index changes, 3 to 4 orders of magnitude higher than classical electro-optic materials such as lithium niobate. George Washington University has shown that when an electrical voltage bias is applied across such a device, it forms an accumulation layer at the ITO-SiO2 interface, which increases the ITO’s carrier density and raises its extinction coefficient. They were able to obtain extinction ratios of –5 and –20 dB for device lengths of 5 and 20 μm, respectively. This record-high 1 dB/μm extinction ratio comes from the combination of the hybrid plasmonic mode enhancing the electro-absorption of the ITO and the ITO’s ability to change its extinction coefficient by multiple orders of magnitude under an applied electric field. The change stems from an increase in the carrier density of the ITO film (by a factor of 60) due to the formation of the accumulation layer in the MOS capacitor, which was verified via electrical metrology tests and analytical modeling. In summary, they achieved deep sub-λ 3D optical confinement in a single-mode cavity with bandwidths approaching the THz range and power consumption in the attojoule regime, about 3 to 5 orders of magnitude lower than other state-of-the-art devices.
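
The 1 dB/μm figure follows directly from the definition of extinction ratio and the two quoted measurements:

```latex
% Extinction ratio in dB, and the quoted per-length figure
\[
\mathrm{ER} = 10\,\log_{10}\frac{P_{\mathrm{on}}}{P_{\mathrm{off}}},
\qquad
\frac{5\ \mathrm{dB}}{5\ \mu\mathrm{m}} = \frac{20\ \mathrm{dB}}{20\ \mu\mathrm{m}} = 1\ \mathrm{dB}/\mu\mathrm{m}
\]
```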


While the “ring” is still the dominant structure in photonic design because of its versatility in filtering and switching, Sauron, or any silicon photonics engineer for that matter, would be wise to continue using it. When it comes to modulators, however, EAMs, and especially those employing hybrid plasmonics, are definitely worth looking into.