
Glasses Refocus Mobile Power Design
by Bill Boldt on 02-23-2014 at 11:00 am

Contextual awareness likely will emerge as one of the most exciting new user experiences enabled by wearable products. It happens when a mobile device, carried or worn, senses the user’s surroundings and presents information, offers advice, or controls itself and/or other devices according to that specific environment. This new experience will influence how users see the world, affecting their interactions with other types of screens, some of which have not even been invented. New styles of personal screens such as smart glasses and smart watches are popping up like weeds now.

A screen that you wear is more natural and personal than a smartphone you carry because it becomes an extension of you. So look into glasses platforms and you just might see the future, at least the future of the mobile platform. With glasses, an image can be projected right into the user's eyes, overlaid on their view of the real world. This merging of real and virtual realities will create a bit of an Alice in Wonderland world where reality is altered like never before. The first major use of augmented reality will likely be location-based services. Descriptions and information about the user's current neighborhood, a museum exhibit, the building, the product the user is looking at, or the person the user is talking to (which is sort of creepy) will be available at the literal blink of an eye.



Connected glasses can provide context for people you meet.

Major mobile handset makers are already patenting contextual awareness methodologies. The implementations are rudimentary right now, but they point to a future of augmented reality. Augmented reality is all about creating a more natural interaction with the digital world while living an analog life. A simple contextual awareness application could be a phone that recognizes what the user is doing and presents options and services that make sense at that moment. Just imagine a smarter Siri that knows where you are and suggests what you might want to do. Now imagine a phone that has learned your particular walking gait, senses when someone else has taken it, and locks itself down until that person is authenticated. There is already a smartphone on the market that keeps the screen lit while the user has eye contact with it.

These new platforms present many new opportunities to pursue innovation in miniaturization, power management, connectivity, sensing, and control. Their requirements play right into the hands of semiconductor innovators. But innovation is more than a fashionable slogan or something that magically appears after taking a design class at some prestigious university. Innovation requires the ability to pick the signals of the future out of the noise of the present and, most importantly, not be deluded by one's own biases. A dirty little secret about shaping the future is that the look of the future is literally created by industrial designers, so it is a great idea to start there for clues. Perfect examples are smart glasses, smart watches, fitness bands, and other wearables. These new form factors are on their way, and they put a lot of stress on a product's physical design. That stress leads directly to the need for innovative ways to route power inside a much more complicated and constrained physical structure. The old ways will simply not work as well as they used to. Future happens. Look at smartphone power management for a case in point.

Distributed power to follow distributed ICs

In smartphones the trend until now has been to gather power blocks all in one place — the PMIC. However, the PMIC's monopoly is being challenged as power functions are disintegrated and spread around in novel configurations, because physical design constraints demand it. So the PMIC trust is being busted, with disintegrated, specialized, and distributed PMICs, called micro-PMICs, poised to take over. Micro-PMICs are already appearing in modular phone concepts from companies like ZTE and Motorola, and there will be others. This is tangible evidence that should get the attention of doubters. While these examples are still somewhat exotic and experimental concepts, they do point to a more distributed, decentralized architecture well suited for tomorrow's wearable, mobile, and remote sensing products. It is probably a good time to abandon old-school biases and pick up the signals of the future.

Bill Boldt, Market Research Guy



A Brief History of Kandou Bus
by Daniel Nenni on 02-23-2014 at 10:45 am

Kandou Bus uses a novel form of spatial coding to transmit data between wired chips. The main idea is to introduce correlations between the signals sent on the interface, and to choose those correlations judiciously to lower power consumption, increase speed, and reduce footprint. It is a generalization of differential signaling (which sends correlated signals on two wires). The company is a spinoff of Dr. Shokrollahi's lab at the Swiss Federal Institute of Technology in Lausanne (EPFL).

The company originated in a casual conversation about using channel coding to improve the throughput of DSL lines. The conversation abounded with talk of "differential signals over twisted pairs." "What is differential signaling?" asked Dr. Shokrollahi, whose background is in mathematics, algorithm design, and channel coding rather than in electronics. Once he heard what it was, he asked, "So, how many wires do you use to send, say, 10 bits?" When he heard that the solution would be to use one pair for every bit sent, he immediately saw the inefficiency of the system and used his coding background to come up with a new solution in which signals were "smeared" across multiple wires. Thus the idea of "Chord Signaling" and the company Kandou Bus were born! The name of the company is the Farsi word for beehive. Just as a hive's output relies on collaboration between the bees, the superior properties of chord signaling are obtained through collaboration between the signals on the wires.
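
To make the contrast concrete, here is a toy Python sketch: standard differential signaling carries one bit on two wires, while a zero-sum "chord"-style codebook carries three bits on four wires (plain differential signaling would need six). The codebook and nearest-codeword decoder below are illustrative only, loosely inspired by published descriptions of chord signaling; Kandou's actual codes and receiver circuits are far more sophisticated.

```python
import itertools

# Differential signaling: 1 bit -> 2 wires; the receiver looks only at
# the difference between the wires, so common-mode noise cancels.
def diff_encode(bit):
    return (+1.0, -1.0) if bit else (-1.0, +1.0)

def diff_decode(w0, w1):
    return 1 if (w0 - w1) > 0 else 0

# Toy "chord" code: 3 bits -> 4 wires using 8 zero-sum codewords
# (permutations of one level pattern, plus their negations). This is
# only a sketch of the idea, not Kandou's actual code.
_base = (1.0, -1/3, -1/3, -1/3)
CODEBOOK = sorted(set(itertools.permutations(_base)))
CODEBOOK += [tuple(-x for x in cw) for cw in CODEBOOK]   # 8 codewords

def chord_encode(symbol):
    return CODEBOOK[symbol]          # symbol in 0..7 carries 3 bits

def chord_decode(wires):
    # Remove the common mode, then pick the nearest codeword; as with
    # differential signaling, noise common to all wires drops out.
    cm = sum(wires) / len(wires)
    centered = tuple(w - cm for w in wires)
    return min(range(len(CODEBOOK)),
               key=lambda i: sum((a - b) ** 2
                                 for a, b in zip(centered, CODEBOOK[i])))

noisy = [w + 0.25 for w in chord_encode(5)]   # inject common-mode noise
assert chord_decode(noisy) == 5
```

The property both schemes share is that noise common to all wires cancels out; the codewords only carry information in how the wires differ, which is exactly the collaboration the beehive metaphor describes.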

Over the course of the following ten months, Dr. Shokrollahi assembled a team of electronics and communication engineers, developed the new theory of chord signaling, and produced the first proofs of concept. The team focused on a typical mobile memory link and taped out the very first instantiation of chord signaling, capable of transmitting and receiving signals at 6.25 Gbps per wire over 10cm of PCB trace. In this instantiation 8 bits were dispersed over 8 wires, but in such a way as to make the transmission as resistant to noise as differential signaling (which would require 16 wires). Soon after that prototype came back from the fab, the team started developing a second full transceiver capable of transmitting and receiving up to 16 Gbps per wire over a challenging channel. The results of this chip were presented at ISSCC 2014.

In the meantime, Kandou Bus has secured a Series A financing round and developed a product strategy to attack applications as varied as high-speed networking, memory links, short chip-to-chip links via interposers, and low-power, high-speed communication over TSVs. The company is also active in the OIF-CEI and IEEE 802.3bj standards bodies, where it is proposing its technologies as solutions to the interconnect problems of various industries.

About Kandou Bus S.A.
Headquartered in Lausanne, Switzerland and founded in 2011, Kandou Bus is an innovative interface technology company specializing in the invention, design, licensing, and implementation of unmatched chip-to-chip link solutions. Kandou's Chord™ technology lowers power consumption and improves overall performance of semiconductors, unlocking new capabilities in electronic devices and systems. http://www.kandou.com.


SEMICON China Shanghai 上海
by Paul McLellan on 02-23-2014 at 10:30 am

SEMICON is not just the event in San Francisco every July; there are other SEMICONs around the world. Coming up next: Shanghai, China. In fact, there are four colocated events:

  • SEMICON China 2014, March 18th-20th
  • The 8th PV Fab Managers’ Forum, March 17th-18th (all things PhotoVoltaic)
  • FPD China 2014, March 18th-20th (all things Flat Panel Display)
  • Solarcon China 2014, March 18th-20th (that would be solar)

All events are in the Shanghai New International Exhibition Center (SNIEC). There are all sorts of theme pavilions for technologies such as LED, TSV, Smart Life, MEMS and more.

Keynotes are from:

  • Tetsuro (Terry) Higashi of Tokyo Electron (SolarCon keynote)
  • Dr Tzu-Yin Qiu, CEO of SMIC
  • Dr Walden Rhines, CEO of Mentor. Expect lots of data from Wally; it’s his thing
  • Bill McLean of IC Insights (FPD keynote)

The keynotes are all in the Kerry Hotel, Pudong, Shanghai in the Shanghai Ballroom from 1-6pm on Tuesday March 18th. Simultaneous translation in English and Chinese will be provided.

Coming up next on the SEMICON world tour: SEMICON Singapore, in the Marina Bay Sands Hotel, April 23rd-25th. Last time I was in Singapore I got a cheap room there on hotels.com. If you visit, you have to go see the swimming pool on the top floor. This is the hotel that has become an iconic building in Singapore, with three towers and a long swimming pool, bar, and beach that stretch like a boat across all three. I'm as baffled about how they built it as I'm sure civil engineers are about how we make chips.

The focus of SEMICON Singapore is Enabling Mobility for IoT with Advanced Semiconductor Technology Innovations: A Southeast Asia Perspective. If you are part of the highly influential Southeast Asia microelectronics manufacturing ecosystem involved in semiconductors, LED, MEMS, printed/flexible electronics and other adjacent markets, this is a must-attend event.

A few more dates for your diary:

  • SEMICON Russia May 14th-15th in Moscow. I didn’t even know semiconductors were a thing in Russia
  • SEMICON West, July 8th-10th in San Francisco as always. I’ll see you there.
  • SEMICON Taiwan, September 3rd-5th, Taipei
  • SEMICON Europa, October 7th-9th in Grenoble
  • SEMICON Japan, December 3rd-5th in Tokyo

Details for Shanghai (in Chinese and English) here. Details for Singapore here. The SEMI page for all the other conferences is here.


More articles by Paul McLellan…


A Methodology for Assertion Reuse in SoC Designs
by Daniel Payne on 02-21-2014 at 4:24 pm

As your SoC design can contain hundreds of IP blocks, how do you verify that all of the IP blocks will still work together correctly once assembled? Well, you could run lots of functional verification at the full-chip level and hope for the best in terms of code coverage and expected behavior. You could buy an expensive emulator to accelerate your verification process. You could try an Assertion-Based Verification (ABV) methodology and learn to manually write assertions. Or, you could consider using a methodology for assertion reuse in SoC designs.

Two years ago I started hearing about Assertion Synthesis, the process by which assertions can be created for an IP block automatically instead of by hand-coding. It sounded interesting, and it turned out to be valuable enough that Atrenta bought a smaller company called NextOp to acquire a tool called BugScope. Paul McLellan blogged about that acquisition here.

Ravindra Aneja talked with me this morning by WebEx and brought up this notion dubbed MARS – Methodology for Assertion Reuse in SoC. Design reuse is well-known and a widely accepted method to save time and improve quality, so why not assertion reuse? Here’s the promise of MARS then:

  • It flags any IP that is incorrectly configured at the SoC level
  • Any IP feature failure is flagged at the SoC level
  • Any coverage targets missed by IP verification are pin-pointed

The whole idea is to provide a significant reduction in SoC Integration and debug time, while having a minimal impact on IP and SoC simulation run times. Plus, the MARS approach works with both emulation and FPGA-based verification environments. Here’s the proposed methodology flow:

A design or verification engineer would run BugScope on each IP block using functional tests and then look at the progressive test coverage report to understand:

  • When to start generating assertions
  • Whether my IP tests are mature enough yet

These automatically generated assertions are then ready to be re-used when the SoC has been assembled. Any violations of assertions would then be reported during SoC verification so that your team can fix SoC configuration bugs, fix specific IP design bugs, or fill in any IP coverage holes. Here’s an example of a progressive test coverage report:

This MARS approach is different from plain ABV because BugScope uses the input tests to formulate the assertions automatically. You can use assertion synthesis if you have ready access to the RTL code and tests for your internal IP, new IP, modified IP, or even some 3rd party IP. Plan to target any control-intensive logic for generating properties: arbitration, data flow control, interrupt controllers, schedulers, etc.
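
BugScope's internals are proprietary, but the underlying idea of mining candidate properties from passing tests can be sketched. The toy Python below scans simulation traces for signals that never change and for implications that always hold, then proposes them as candidate assertions; the trace format and signal names are hypothetical, purely for illustration.

```python
# Toy assertion mining: propose candidate invariants from passing traces.
# A conceptual sketch of assertion synthesis, not BugScope; the trace
# format (list of dicts of signal values per cycle) is made up.
from itertools import permutations

def mine_candidates(trace):
    signals = trace[0].keys()
    candidates = []

    # 1. Constant signals: the value never changed across all tests.
    for s in signals:
        values = {cycle[s] for cycle in trace}
        if len(values) == 1:
            candidates.append(f"assert property ({s} == {values.pop()});")

    # 2. Implications between booleans: whenever a is 1, b is also 1.
    bools = [s for s in signals
             if all(cycle[s] in (0, 1) for cycle in trace)]
    for a, b in permutations(bools, 2):
        if any(c[a] for c in trace) and all(c[b] for c in trace if c[a]):
            candidates.append(f"assert property ({a} |-> {b});")
    return candidates

trace = [
    {"rst_n": 1, "req": 1, "gnt": 1, "mode": 2},
    {"rst_n": 1, "req": 0, "gnt": 0, "mode": 2},
    {"rst_n": 1, "req": 1, "gnt": 1, "mode": 2},
]
for c in mine_candidates(trace):
    print(c)   # e.g. assert property (req |-> gnt);
```

A candidate like `req |-> gnt` is only as good as the tests that produced it, which is exactly why the progressive coverage report matters: it tells you when the tests are mature enough for the mined properties to be trustworthy.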

Case Studies

The good news is that customers are using the MARS approach already. In one case a customer had a critical IP block with about 1,400 tests defined. BugScope automatically generated about 145 assertions, and when these assertions were re-used at the sub-system level, two of them fired. The first fired assertion uncovered an IP configuration error, and the second exposed an important coverage hole that should have been tested at the IP level.

In the second case a customer had some immature IP with a low verification confidence level, and BugScope generated 1,203 properties for re-use at the SoC level. With just 34 tests, 105 properties fired, or about 10%. By looking through these properties they found 48 unique, high-priority IP coverage holes. The verification team now knew that their bus functional model needed to consider more realistic scenarios, that they needed to inject errors on multiple lanes simultaneously, and that clock skew was not tested well enough at the IP level.

Verification Management

To assist in the area of verification management there are three helpful feedback metrics:

  • Verification and Testbench Grading – which tests are most effective
  • Verification and Testbench Distribution – how exhaustive is each test, which coverage point needs more tests
  • Verification and Testbench Balance – are there enough hits on my most complex modules

Summary

It is possible to accelerate SoC verification by finding IP configuration issues, IP design bugs and IP coverage holes. The Methodology for Assertion Reuse in SoC designs (MARS) is in use today. The BugScope tool from Atrenta can also cut time in emulation and FPGA-based debug approaches.

The evaluation process typically takes from one week to a month, depending on your team size and schedule.

Marvell presented a paper at DAC 2013 about their experience with BugScope, and another customer has submitted a paper to DAC 2014.



2014 Semiconductor Growth Could be 2X 2013 Rate
by Bill Jewell on 02-21-2014 at 10:00 am

The fourth quarter 2013 semiconductor market declined 0.8% from the third quarter, according to World Semiconductor Trade Statistics (WSTS). Full year 2013 growth was 4.8%. Our most recent 2013 forecast at Semiconductor Intelligence was 6% in November 2013, based on expectations of positive growth in 4Q 2013. Who had the most accurate forecast for 2013 semiconductor growth? We compared publicly available forecasts for 2013 released in the few months prior to the January 2013 WSTS data release. The most accurate was IDC at 4.9%. Other close forecasts were WSTS and Gartner at 4.5% and Mike Cowan at 5.5%.

Key semiconductor companies reported 4Q 2013 revenue change versus 3Q 2013 ranging from +42% at Micron Technology (driven by revenues from the Elpida acquisition) to -18% at SK Hynix (due to a fab fire). Seven of the fourteen companies in the table below showed revenue growth in 4Q 2013 and seven had declines. Revenue guidance for 1Q 2014 indicates an overall decline in revenue from 4Q 2013. Of the twelve companies which provided guidance, nine expect declines in revenue ranging from -2% from Micron Technology (estimated based on bit growth and price guidance) to -16% from AMD. Toshiba’s semiconductor group, Infineon and Freescale all expect growth in 1Q 2014. The weighted average guidance for 1Q 2014 is a decline of about 5%.
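
For readers curious about the arithmetic, the weighted average is simply each company's guided quarter-over-quarter change weighted by its revenue. A quick Python sketch, with made-up figures rather than the actual company data:

```python
# Revenue-weighted average of 1Q 2014 guidance. The numbers below are
# hypothetical placeholders, not the actual company data.
guidance = [
    # (4Q 2013 revenue in $M, guided quarter-over-quarter change)
    (4000, -0.02),   # e.g. a large memory maker guiding -2%
    (1500, -0.16),   # a smaller company guiding -16%
    (3000, +0.03),   # one of the companies guiding growth
]
total = sum(rev for rev, _ in guidance)
weighted = sum(rev * chg for rev, chg in guidance) / total
print(f"Weighted average guidance: {weighted:+.1%}")   # -2.7% here
```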

What is the outlook for year 2014 semiconductor market growth? Forecasts in the last few months range from about 4% (WSTS and Mike Cowan) to 20% (Objective Analysis). Half of the forecasts are in the 7% to 9% range. Our latest forecast at Semiconductor Intelligence is 10%. This is down from our November forecast of 15% due to a negative 4Q 2013 and an expected 1Q 2014 decline of about 5%.

Our 10% growth forecast is largely driven by an expectation of accelerating world GDP growth. The International Monetary Fund's (IMF) January outlook calls for world GDP growth of 3.7% in 2014, up from 3.0% in 2013. The acceleration is driven by developed economies, with the U.S. expected to accelerate to 2.8% in 2014 from 1.9% in 2013. The Euro Area should recover from a 0.4% decline in 2013 to 1.0% growth in 2014. The strongest growth continues to be in emerging and developing economies, growing 5.1% in 2014, up from 4.7% in 2013. Although China is forecast to decelerate slightly from 7.7% to 7.5%, most other developing economies are projected to show accelerating GDP growth in 2014.

Semiconductor Intelligence’s model of the semiconductor market based on GDP for 2014 predicts 11% growth. We are reducing this to 10% based on a weak 1Q 2014. The upside growth in 2014 could be as high as 14%. 2015 GDP growth is forecast by the IMF at 3.9%, a slight acceleration from 2014. Based on this outlook, we expect double-digit growth for the semiconductor market to continue into 2015.

More Articles by Bill Jewell …..



6 reasons Synopsys covets C/C++ static analysis
by Don Dingee on 02-20-2014 at 5:00 pm

By now, you've probably seen the news on Synopsys acquiring Coverity, along with a few thoughts in commentary from our own Paul McLellan and Daniel Payne, whom I respect deeply. I'm guessing there are many like them out there in the EDA community scratching their heads a little or a lot at this. I'm not from corporate, but I am here to help.

Coverity and other purveyors of C/C++ static code analysis tools come from my happy place – embedded – and I’m a relative noob in EDA circles who can’t carry laundry for my colleagues here. However, I’ve been carefully observing the disciplines of EDA and embedded coming together for a few years now, way before coming to SemiWiki; it is why I was excited to be invited here and participate in the dialog, and maybe help shape the future.

From my perspective: this is more than just a sidestep by Synopsys into a new space to diversify beyond EDA roots, and I don’t see this solely as a competitive response to Mentor Embedded efforts, which are broader right now with a range of embedded software tools and operating systems. No, this is certainly a strategic maneuver, not just a tangential probe.

First, a bit of explanation: what is static code analysis? EDA types might recognize its foundation by another name: design rules checking, in which a tool looks at source files and relationships against a set of rules to find defects in code. Many folks are familiar with lint, the most basic of tools for checking C/C++ code. Coverity and other embedded tool sets go much farther than that, with heuristics and algorithms to not only shake out defects in C/C++ code, but reduce the annoying “false positives”. These tools provide control over the rule sets in play, and what gets checked and reported on, so actual errors are highlighted and warnings and other benign differences of opinion based on requirements and experience can be categorized or filtered out entirely.
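
For anyone who has never seen one of these tools, a static checker is conceptually just a program that reads source text and flags rule violations without ever running the code. The deliberately naive Python sketch below pattern-matches a few classic C pitfalls; real tools like Coverity parse the code and track data and control flow interprocedurally, so treat this only as the shape of the idea.

```python
import re

# A deliberately naive static checker for C source: flag a few classic
# pitfalls by pattern matching. Real tools build full parse trees and
# interprocedural models; this only illustrates the concept.
RULES = [
    (re.compile(r"\bgets\s*\("),        "gets() is unbounded; use fgets()"),
    (re.compile(r"\bstrcpy\s*\("),      "strcpy() risks overflow; check lengths"),
    (re.compile(r"==\s*NULL\s*\)\s*;"), "stray ';' after NULL check guards nothing"),
    (re.compile(r"\bif\s*\(\s*\w+\s*=\s*[^=]"),
                                        "assignment in 'if'; did you mean '=='?"),
]

def check(source):
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in RULES:
            if pattern.search(line):
                findings.append((lineno, message))
    return findings

c_code = """\
char buf[16];
gets(buf);
if (p = lookup(buf))
    strcpy(dst, buf);
"""
for lineno, message in check(c_code):
    print(f"line {lineno}: {message}")
```

Even this crude version finds three real defects in four lines of C without executing anything; the value of commercial tools is doing that at scale with far fewer false positives.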

Now, I’ll go back to the comments on “not firmly in EDA space” and “zero synergy” posed by my esteemed counterparts. I agree with them, this is not traditional EDA territory; allow me to make the case for the JJ Abrams-esque alternate strategy timeline. Aart de Geus and Anthony Bettencourt both focused on quality and security as their message, but it goes deeper than that – way deeper. Here are 6 reasons to think about the vision and where Synopsys is likely headed.

Time is money. Of course, you write perfect C/C++ code. Sure you do. I know I do. (Ha ha. HAHAHA. HAHAHAHAHA…) Okay, maybe every once in a while, it's not perfect. More than likely, you have policies involving coding standards, peer reviews, and objective testing to expose errors. Those all likely involve one thing: a human READING code, line by line, through thousands or millions of lines. Eyeballs. Caffeine. LASIK. Late nights. Wasted days. Time that could be spent better. Static analysis tools not only read through code, but catalog and check interprocedural relationships and other constructs for less obvious errors, pointing reviewers to the problem areas quickly.

All code is critical. We often associate code quality with safety-critical applications. Certainly, the first advocates of embedded static code analysis came from defense, industrial, medical, automotive, and other areas with stricter compliance and liability issues, and in some cases defined industry-wide coding standards. However, any code defect can make or break any application, and as the LOC count rises, the risk goes way up. This gets magnified in a typical SoC today, with different types of cores all running together. A high-profile bug can torpedo any product quickly, something no developer can afford.

Code is the product. Microcontrollers, SoCs, and microprocessors do nothing but sit there and burn watts without software running on them – silicon is just the enabler, not the product. Synopsys may not be huge in the operating systems and tools business yet, but they are big in the embedded business; the popularity of the ARC processor core and DesignWare IP means there has to be verified software, somewhere. Synopsys has to create and deliver quality C/C++ code making this stuff work, and provide confidence it has been checked.

Co-verification and the golden DUT. Most folks think of EDA testing as RTL simulation, pattern generation and scan chains, but in today’s world, that is just the beginning. Real SoCs are co-verified, with the actual software running on a simulator or emulator. Think Apple A7 running iOS 7. In a complex part, without actual code running, errors can sneak through. Here’s a question: if you have new IP with both new hardware and new software, which is the problem? That golden software may not be as golden as you think, and many users report running static code analysis tools spotted actual problems they missed in software review and test.

IP is coming from everywhere. This is not a make-versus-buy world anymore; it’s build-borrow-buy. Here’s my favorite chart I stole from Semico Research via Synopsys, with the message that a complex SoC is approaching 100 or more IP blocks with both hardware and software, and reuse is key to productivity. If you write all your own IP, congratulations, but more than likely you get some IP from either open source communities or commercial suppliers. Guess what? The software IP from outside sources very likely doesn’t conform to your coding standards. Is it broke? Will it break, or will it break your IP, at integration? Would you like to read all that code line-by-line, or would you rather have it scanned – using your internal rule set, filters, and customized reports – to pinpoint where potential problems may lie?


In the end, there can only be one. “Yeah, but we’re talking about C/C++ here; we don’t design chips with C/C++.” (There are RTL static code analysis tools out there; same idea, but a story for another time.) True, but you likely design chips for C/C++, and again, your chips don’t do much without software. While the end game may be a generation away, at some point silicon will be optimized for the code it runs. If we believe in the ultimate vision for high level synthesis, design realizations will come directly from C/C++ application code – in order to do that with confidence, the code has to be not only defect-free, but well-formed and very well-understood. In the interim, it would certainly be interesting for Synopsys to adapt the Coverity tools for SystemC, not a gigantic stretch.

Like I said, this may be the JJ Abrams version of the story – but I think it will play out as the right one in an evolving EDA industry, and I strongly suspect Synopsys has already seen the benefits of Coverity tools internally as well as externally. A big congrats to the Coverity team, and to Synopsys for being brave enough to step out of the box further into embedded space.

More Articles by Don Dingee…..




Mounir Hahad Rejoins Silvaco
by admin on 02-20-2014 at 4:16 pm

Mounir Hahad just joined Silvaco as VP engineering. And when I say joined I really mean rejoined. I had a call with him to find out how that happened.

Mounir studied in France for a PhD in computer science on numerical computing. In 1995 the then-director of TCAD at Silvaco called him up having read some of his published papers. Silvaco had a problem in that simulation times were getting very long, especially in TCAD but also in SmartSPICE. Maybe Mounir could come and help them…

So a few months later Mounir joined Silvaco as a development engineer working on the parallelization of Atlas and SmartSPICE. It took a few years and he built up a team of experts in the space.

Ivan Pesic, the founder and then-CEO (who passed away in 2012), had an idea for a different licensing model for EDA/TCAD that made it easier to charge for peak use rather than just giving good customers a lot of extra licenses for free, which is what typically happened. The idea was not just to do this for Silvaco but for other EDA and TCAD companies too. Mounir went to be VP engineering at the new company, EECad, which did all the licensing and split revenue with its EDA partners. But Ivan didn't seem to want to pursue it as aggressively as Mounir wanted, so Mounir decided to move on.

EECad's technology was not really specific to EDA but applied to any industry licensing software. Mounir thought it would be attractive to build a shrink-wrapped, appliance-based product for the whole market, including email and security. So he joined IronPort and did, indeed, learn lots about security. IronPort was acquired by Cisco and Mounir had various management roles there, but he also realized that in a large company he'd have limited opportunity to influence strategy in a big way. So when Silvaco heard he was available, they brought him back on board as VP Engineering.

Mounir believes that Silvaco really has a good opportunity to make it big. He felt that Ivan had wanted to keep the company reasonably small so that he could control it, and clearly Ivan didn't believe in doing any serious marketing.

Mounir’s focus going forward is to take Silvaco’s engineering to the next level for operational excellence: enhance product quality, close up gaps between the products and improve release predictability and roadmap adherence.

The 2014 baseline release comes out in a couple of weeks and is more focused on end-to-end solutions, with lots of enhancements that customers have been wanting. Going forward, Mounir thinks that Silvaco will need to become more open to industry standards and, as a result, partner more than it has in the past.


More articles by Paul McLellan…



Before SPICE Circuit Simulation Comes TCAD Tools
by Daniel Payne on 02-20-2014 at 3:19 pm

I've run SPICE circuit simulators since the 1970s, and they use transistor models whose device parameters are provided by the foundry. These transistor and interconnect parameters come from an engineer at the foundry who has characterized silicon with actual measurements or by running a physics-based TCAD (Technology CAD) tool.

In the 1970s at Intel, the process engineers would come up with an idea to improve the speed or power of our DRAM technology, run a lot through the fab, and about 7 days later start to measure the results to validate the idea. Today there's not enough time to physically run new process ideas through silicon and then measure the results; instead, process engineers use TCAD tools to simulate the behavior of the transistors before fabrication and then predict what the SPICE parameters will be.

A long-time TCAD tool provider is Silvaco, and their engineers recently wrote about how to simulate single-crystal gallium oxide (Ga₂O₃), a new material aimed at power device applications because it has a wide bandgap. Their device simulator is called Atlas, and here are the tool inputs and outputs:

Researchers from NICT in Japan had published their work on this new oxide compound semiconductor in June 2013, so the folks at Silvaco used that experimental data to then build the device structure and doping profile with the Atlas tool:

Once you have the device structure and doping profile, some assumptions are needed about the channel layer and dopant concentration levels.

TCAD Simulation Results

The Japanese researchers had published their Drain Current versus Voltage curves, so at Silvaco they compared Atlas results (green) versus measurements (red) and saw excellent correlation:


Simulated ID-VD curves compared with experimental data


Simulated ID-VG curves compared with experimental data
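
Atlas solves the carrier-transport physics numerically, but the general shape of those ID-VD curves can be reproduced with the textbook long-channel square-law model. Here is a minimal Python sketch; the parameter values are arbitrary illustrations, not fitted Ga₂O₃ data:

```python
# Textbook long-channel (square-law) MOSFET ID-VD curves, to show the
# kind of output a device simulator produces. A real TCAD tool like
# Atlas solves the device physics numerically; the parameter values
# here are arbitrary illustrations, not fitted Ga2O3 data.
def drain_current(vgs, vds, vth=1.0, k=2e-4):
    """k = mu * Cox * W / L  [A/V^2]; returns ID in amps."""
    vov = vgs - vth
    if vov <= 0:
        return 0.0                         # cutoff
    if vds < vov:
        return k * (vov - vds / 2) * vds   # triode region
    return 0.5 * k * vov ** 2              # saturation

for vgs in (2.0, 3.0, 4.0):
    curve = [drain_current(vgs, step * 0.5) for step in range(0, 21)]
    print(f"VGS={vgs} V: ID(sat) = {max(curve)*1e3:.2f} mA")
```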

Another Silvaco tool, Athena, was used for process simulation, reproducing the multiple Si implantation profile using a BCA (binary collision approximation) amorphous material implant model. Once again, the simulated results were compared with the reported experimental results to confirm the accuracy:


Simulated Si depth profiles compared with experiment

Summary
For new types of devices like Ga₂O₃ MOSFETs, you can run a TCAD tool like Atlas to simulate, experiment with, and even optimize the DC and transfer characteristics prior to actual fabrication or after initial fabrication experiments. This emerging oxide compound semiconductor may soon be in production for power device applications because some of its properties are superior to those of GaN and 4H-SiC.

Further Reading
Silvaco publishes a quarterly newsletter all about TCAD advancements which you can find here.



ISSCC: Analog-Digital Converter in FD-SOI
by Paul McLellan on 02-20-2014 at 11:50 am

The International Solid-State Circuits Conference (ISSCC) was last week in San Francisco. Stéphane Le Tual, Pratap Narayan Singh, Christophe Curis, and Pierre Dautriche, all from STMicroelectronics, presented a paper on A 20GHz-BW 6b 10GS/s 32mW Time-Interleaved SAR ADC with Master T&H in 28nm UTBB FDSOI Technology.

Modern wireline communication devices, whether over copper or fiber, require a high-speed analog-digital converter (ADC) in their receive path to do the digital equalization or to recover the complex-modulated information. A 6b 10GS/s ADC able to acquire input signal frequencies up to 20GHz and showing 5.3 ENOB in Nyquist condition was presented at ISSCC. It is based on a Master Track & Hold (T&H) followed by a time-interleaved synchronous SAR ADC, thus avoiding the need for any kind of skew or bandwidth calibration. Ultra Thin Body and BOX Fully Depleted SOI (UTBB FDSOI) 28nm CMOS technology is used for its fast switching and regenerating capability. The core ADC consumes 32mW from a 1V power supply and occupies 0.009mm² of area. The Figure of Merit (FoM) is 81fJ/conversion-step.
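
For reference, the Figure of Merit quoted is the standard Walden FoM, power divided by (2^ENOB × sample rate); plugging in the paper's headline numbers recovers the 81fJ value:

```python
# Walden figure of merit for an ADC: FoM = P / (2**ENOB * fs).
# Plugging in the paper's headline numbers recovers ~81 fJ/conv-step.
P = 32e-3      # core power, watts
ENOB = 5.3     # effective number of bits at Nyquist
fs = 10e9      # sample rate, samples/second

fom = P / (2 ** ENOB * fs)
print(f"FoM = {fom * 1e15:.0f} fJ/conversion-step")   # prints ~81
```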


Let's focus on the implementation, which is in ST's 28nm FD-SOI process. Just as a reminder, FD-SOI is an alternative to FinFET with the big advantage of being architecturally very similar to a "normal" planar process. FinFET has quantized transistor sizes, which makes analog design challenging. ST has picked this transistor architecture, and a couple of other manufacturers are in the FD-SOI consortium, most notably GlobalFoundries and UMC (but not TSMC, which is committed completely to FinFET). This is a very high performance ADC and thus an example of complex high-precision analog design in FD-SOI.


Previously at ISSCC and other conferences, earlier designs have been presented in processes ranging from 65nm CMOS to 32nm SOI. Looking at the table above, you can see that while this design has a similar sampling rate and resolution to the earlier ones, it has the smallest implementation (even if that is not an apples-to-apples comparison due to technology scaling) and the best power consumption. The big advantage over the earlier results is that it needs no gain/skew calibration to reach these state-of-the-art numbers, whereas for all the others calibration is mandatory.

To summarize, the block uses the efficiency of the purely passive "sample and redistribute" concept for signals up to 20GHz. Together with the low-power capability of the 28nm CMOS UTBB FDSOI technology, ST could reach 10GS/s operation while keeping the power consumption at 32mW under a 1V supply with a block that is just 0.009mm².

The ISSCC website is here. If you have access to the proceedings then it is paper 22.3.


More articles by Paul McLellan…


$1 Billion IP & VIP sales by 2017?
by Eric Esteve on 02-20-2014 at 9:58 am

We are not talking about ARM Ltd., as that IP vendor already passed $1B in sales in 2013. In fact, we are not talking about a single IP vendor at all: this $1B mark will be passed by two IP market segments combined, Interface IP and Verification IP. These two segments are very closely linked. When an IP is developed to support a specific interface protocol standard, a related Verification IP (VIP) is needed at the same time. This VIP is first used by the design team in charge of the related IP development, to verify the IP's compliance with the specific protocol. And, by the way, we can see why the VIP has to be designed by a different team (not necessarily from a different company, but the architects should be two different people). This strategy is the only way to avoid the equivalent of the "common mode error" in aeronautics.

If you list every protocol standard, and each new release of that interface standard, you can identify the related VIP in the vendor's portfolio:

  • USB (USB 2.0, USB 3.0 and 3.1, HSIC, SSIC)
  • PCI express (PCIe gen-1, gen-2, gen-3 and yet to be released gen-4, M-PCIe)
  • MIPI (D-PHY, CSI-2, DSI, M-PHY, CSI-3, DSI-2, LLI, SlimBus, UniPro, etc.)

Then, adding Ethernet, SATA, SAS, HDMI, DisplayPort, I2C, JTAG, NVM Express and other protocols, you have covered most of the potential VIP products. It's interesting to notice that a Design IP and its related VIP fit like hand and glove: they are complementary. Thus, if you have to verify a PCI Express Root Port IP, the VIP will act as an Endpoint agent, and conversely.

IPNEST is the well-known analyst expert on the Interface IP market (see: Interface IP Survey), so it was a natural move to analyze the Verification IP market (this was the opinion of IPNEST's customers). In fact, the market dynamics for Design IP and VIP are quite different. When starting an SoC design, the project manager easily identifies the functions (IP) that may be outsourced, so that the design team can focus on the company's differentiators and speed up the SoC release for Time To Market (TTM) enhancement. Then the make-versus-buy analysis is run, and the IP is outsourced when it makes sense.

VIP outsourcing follows different rules. The project manager may decide to buy VIP externally when his team is developing the related Design IP. In this case, the primary goal is to validate the IP itself, and the task is known to be CPU intensive, consuming many VIP licenses (or tokens, or seats) and leading to a high VIP cost. The project manager may again decide to outsource this VIP or develop it internally. Interestingly for the VIP vendor, even if the Design IP is outsourced and reputedly 100% functional, the project manager may still need to outsource VIP, but for a different goal: to run the complete chip verification during functional simulation. Notice that when a Design IP's function implies that most of the communication with the SoC will pass through it, the related VIP becomes crucial, which translates into EDA expenses, as the team will need more VIP tokens (or seats) than for another function within the SoC (even one that is more complex from a functional point of view). This example helps introduce the concept of "VIP expenses" rather than "VIP license" cost. IPNEST has decided to segment each VIP segment according to the following parameters:

  • IP Internally designed, or outsourced
  • On the edge IP (first use), or re-used IP
  • Chip maker: Tier 1, or Tier 2

Thus the VIP expenses (not license or token prices) are evaluated for every protocol standard and every segment (the A, B, C and D in the above table).
By using the "Interface IP survey", we can extract the number of design starts by protocol standard, and since we have evaluated the IP license Average Selling Price (ASP), this covers the first segment: IP externally sourced. Then we have to estimate the design starts involving internally designed IP to cover the second segment. In the VIP survey, most of the intelligence is in the various VIP expense evaluations by protocol. To confirm this evaluation, we have run interviews with chip makers representative of the Tier 1 and Tier 2 segments.
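
The bottom-up arithmetic behind such a survey is straightforward once those inputs exist. A minimal sketch, assuming invented design-start counts, outsourcing rates, and ASPs (placeholders, not IPNEST's data):

```python
# Bottom-up market sizing for a protocol segment. design starts,
# outsourcing share, and ASP below are invented placeholders to show
# the arithmetic, not IPNEST's actual survey data.
segments = {
    # protocol: (design starts/yr, share outsourcing the IP, ASP in $K)
    "USB 3.0":   (300, 0.70, 250),
    "PCIe gen3": (150, 0.60, 400),
}
for protocol, (starts, outsourced, asp_k) in segments.items():
    revenue = starts * outsourced * asp_k / 1000   # $M license revenue
    print(f"{protocol}: ~${revenue:.1f}M license revenue")
```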

At this point, we realize that we have only covered 50% of the VIP market! In fact, a very important segment of this market, in terms of business, is the sale of "Memory Models". This was the initial Denali business, and it is an ever-increasing segment. If you acquire the "Verification IP Survey", you will see how IPNEST has dealt with this part of the VIP market. But even this was not enough: internal buses, like AMBA or OCP, also generate a VIP need, to speed up SoC validation. These various VIP segments are also evaluated.

Maybe you don't care about the methodology and just want to know the bottom-line result? That is human behavior, and IPNEST has had to deal with such requests! If you consider that such an answer (the VIP market size in 2013) is the result of real work (segmentation, design start evaluation, and time spent verifying the various steps as far as possible), you understand why only the happy few buying the VIP Survey will benefit from this information…

What I can tell you, free of charge, is that the combined Interface IP and VIP market segments will weigh more than $1 billion in 2017.

Eric Esteve from IPNEST

Table of Content for “Interface IP Survey 2008-2012 – Forecast 2013-2017” available here

Table of Content for “Verification IP Survey” available here

More Articles by Eric Esteve…..


Now you probably understand better why IPNEST is the leader in Interface IP and VIP dedicated surveys, enjoying this long customer list:

Synopsys, (US)
Cadence, (US)
Rambus, (US)
Arasan, (US)
Denali, (US) now Cadence
Snowbush, (Canada) now Semtech
MoSys, (US)
Cast, (US)
eSilicon, (US)
True Circuits, (US)
NW Logic, (US)
Analog Bits, (US)
Open Silicon,(US)
Texas Instruments, (US)

PLDA, (France)
Evatronix, (Poland)
HDL DH, (Serbia)
STMicroelectronics, (France)
Inventure, (Japan) now Synopsys
“Foundry” (Taiwan)
GUC, (Taiwan)
GDA, (India)
KSIA, (Korea)
Sony, (Japan)
SilabTech, (India)
Fabless, (Taiwan)