
What Do the Ford Mustang and Intel’s Gordon Moore Have in Common with Local Motors?

by Charles DiLisio on 11-06-2014 at 11:00 pm

1964 Vision, Volume and Moore’s Law

The 1964 New York World’s Fair saw Lee Iacocca, then a 40-year-old Ford general manager, introduce a car that inspired “total performance” and was for a “young America out to have a good time.” This young America would become the baby boomer generation. The Mustang was revolutionary in its affordability, and it showed in the sales: Ford estimated it would sell 100,000 Mustangs during that first year; in fact, it sold more than 400,000. In 1964, big corporations (Ford, GM, US Steel) had big factories and drove down costs through greater volume. The mantra that greater volume spurs lower costs, which in turn create greater demand, was first observed in 1936 by Theodore Paul Wright, who described the effect of learning on production costs in the aircraft industry.

1964 was also the year a young engineer, Gordon Moore, then at Fairchild, wrote in his journal the observation that would become Moore’s Law, published in Electronics on April 19, 1965. Moore observed that the number of transistors on an integrated circuit (IC) doubles approximately every two years. The result has been an explosion of electronics, computing and communication: larger wafers —> smaller geometries —> greater integration —> lower prices —> greater electronic product volumes.
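Moore’s observation is easy to sketch numerically. A minimal illustration of the doubling law (the 1965 start year, 64-transistor base count and exact two-year doubling period here are illustrative assumptions, not historical data):

```python
def transistors(year, base_year=1965, base_count=64, doubling_years=2):
    """Project transistor count per IC under Moore's Law:
    the count doubles every `doubling_years` years."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

# Ten years of doubling every two years gives 2**5 = 32x growth.
print(transistors(1975) / transistors(1965))  # -> 32.0
```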

Enter the Experience Curve

In 1966, Bruce Henderson of the Boston Consulting Group (BCG) introduced the “experience curve,” which resulted from work done for Texas Instruments. It holds that unit costs go down as a company gains production experience. The experience curve concept allowed Texas Instruments to underbid rivals by postulating falling unit costs. Ultimately, the concept was to drive costs down while becoming the dominant market player.

The “experience curve” observation (mega manufacturing —> volume —> lower cost —> leading market share) has led companies not only in the semiconductor industry, but also manufacturing companies and even countries, to seek volume and lower prices to gain market prowess and success.
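The experience curve has a simple quantitative form: each doubling of cumulative volume multiplies unit cost by a constant learning rate. A minimal sketch, assuming an illustrative 80% learning rate and $100 first-unit cost (both numbers are invented for illustration):

```python
import math

def unit_cost(n, first_unit_cost=100.0, learning_rate=0.8):
    """Wright's law / experience curve: cost of the n-th unit, where each
    doubling of cumulative volume scales unit cost by `learning_rate`."""
    b = math.log(learning_rate) / math.log(2)  # negative exponent
    return first_unit_cost * n ** b

print(unit_cost(1))  # -> 100.0
print(unit_cost(2))  # -> ~80.0 (one doubling at an 80% learning rate)
print(unit_cost(4))  # -> ~64.0 (two doublings)
```

The same curve explains why the dominant producer, furthest down the volume axis, can underbid everyone else.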

Today, the “experience curve” and its corollary Moore’s Law are being challenged by the very things they created. Miniaturization, computing power and ubiquitous communication allow a disruptive business model to emerge even in traditional industries like automobiles, where scale and volume have been barriers to entry.

The effects on the automobile industry will reverberate into the semiconductor industry, and we need to be aware of this disruption. I suggest you read my previous SemiWiki article: Viva The New Industrial Revolution! What Etsy, 3D Printing, and Kickstarter Means to Semiconductor Companies?

Local Motors — Maker Revolution leads to an Open Sourced Car

Today, manufacturing no longer has to rely on large volumes to drive down costs. With crowdsourcing, 3D printing, low-cost collaborative design tools, and cloud factories, complex manufactured products can go from idea to production quickly and at reasonable cost. It’s the Maker Revolution combined with Internet physics.

Local Motors is changing the way automobiles are designed, built and sold. It is crowdsourcing design, combining it with community tools like Siemens Solid Edge Design 1, and using micro-factories to build interesting, niche vehicles that are C.O.O.L (Community, Open-source, Ownership Experience, and Local). As a result, Local Motors can go from idea to design to build in five months, at lower cost. The implication is very similar to what happened in the steel industry, where big steel (US Steel) was challenged by mini-mills (Nucor), resulting in a new, profitable business model for steel production (see “The Rise of Mini-Mills”, New York Times, 1981).


As the semiconductor industry moves from the smartphone platform to the Internet of Things (IoT), it may be good to reevaluate the premise of Moore’s Law and the experience curve.


Look at the XC2V FlypMode military assault vehicle, a DARPA project designed to see how a military vehicle could move from idea to production. In a matter of five months the vehicle went from military input to design, manufacturing and delivery to the military.

Semiconductor Companies — Systems Thinking vs. Moore’s Law

What can semiconductor companies learn from Local Motors? Four observations:

  • Co-creation, crowdsourcing, or 3rd-party collaboration for IC design — will be necessary to meet time-to-market (TTM) pressures in the future. Can semiconductor designers be open to crowdsourcing design with customers and non-customers?

  • More than Moore’s Law — focusing religiously on smaller geometries and greater levels of IC integration that require large capital expenditures may not be necessary for future success. Can you rethink integration using programmability (FPGAs, MCUs, MPUs) or 2.5D/3D packaging?

  • Think Systems vs. Devices — transitioning to the IoT world, you will need to think systems and be able to develop intelligent sensors which incorporate MEMS sensors, MPU/MCU, memory and some RF.

  • Value, not Volume — most sensor systems will have to be done faster, designed for very specific applications and at moderate volumes.

This New Industrial Revolution is a revolution of Makers, custom products and regionalization. It isn’t about volume or the learning curve as we know it. We are moving from the homogeneous (baby boomer), one-size-fits-all market to the heterogeneous (millennial), highly fragmented one. Passionate design, rapid time to market, and uniqueness will be paramount as technology becomes ubiquitous.

If you haven’t watched the Local Motors videos, you should. Not just for what the company is doing but for the message behind it. This message will drive significant change in products and business models across all categories.



Lucio and the Kaufman Award

by Paul McLellan on 11-06-2014 at 4:30 pm

Tuesday was the Kaufman award dinner. This year it was awarded to Lucio Lanza. Last week I wrote about how Lucio ended up in EDA, although that was not where he finished up. He is currently a venture capitalist running Lanza Technology Ventures, one of the few VCs to make any investments in the EDA/IP/semiconductor space. Also, unlike most Kaufman award honorees, he actually worked with Phil Kaufman earlier in his career.

See How Lucio Got Into EDA

Lucio has invested in many EDA and IP companies, served on their boards and mentored their CEOs. Two that were especially significant were Artisan Systems and ARM. Mark Templeton, the founding CEO of Artisan, and Simon Segars, the current CEO of ARM, presented the award. Or, as they were introduced, they provided “the entertainment.” Since it was election night they pretended to have a debate format. “How’s that independence thing going?” Simon asked. In case you don’t know, Simon is English (me too!) although he lives mostly in the US since he has kids in high school (actually I think he lives mostly on British Airways). Artisan was eventually acquired by ARM and Lucio was on the board for a time (although Simon was not CEO in that era).

Mark started by giving a little of the history I covered in my earlier post: Lucio starting in Milan, going to Olivetti, coming to the US to work for Intel and then leaving for Daisy. He then ended up in the somewhat strange position of working for Cadence as a consultant in the Joe Costello years while also working at US Venture Partners (USVP). At Cadence they made many acquisitions and he also started an IP business (which for some reason never survived his departure). Eventually the conflict of interest between making acquisitions and making investments in companies to be acquired became too much, so he left Cadence to be full-time at USVP. And then he left USVP to found his own venture capital company, Lanza Technology Ventures. It is a one-man show run out of a little office on University Avenue in Palo Alto. It doesn’t appear to even have a website, since everyone already knows Lucio.

Simon talked about many of the companies Lucio had been involved with. There were too many to fit on one slide and many of the CEOs of those companies were there at the dinner. One I talked to during the networking session before the dinner was Dave Stewart of CriticalBlue, which has had Lucio on its board since the beginning. Somehow I know him through the Edinburgh connection where they are based.

Mark talked about Lucio showing up at Artisan’s door with a whole lot of pizza. At the time he was not even an investor. He gave Mark some advice. “Double your prices.” Mark did and…nothing happened except they made more money. “I’ll tell you what is going to happen,” Lucio said. “In 15 years or so the IP market will be bigger than EDA.” In 14 years the lines crossed.

Mark and Simon presented the award to Lucio to a standing ovation and then it was Lucio’s turn to talk. He gave more color on the early years of his career before he switched to the challenges facing the fabless ecosystem. He pointed out that SoCs are not systems-on-chip, they are hardware-on-chip, since the EDA industry has ducked the challenge of the software component, and that has to change or the cost of design will become prohibitive. Indeed, the aforementioned CriticalBlue, which started as an EDA company, has completely refocused its technology on optimizing embedded software.

In the future, Lucio said, “chips” (or whatever technology turns out to be the future) will have hundreds, then thousands, then hundreds of thousands of processors. This is what I call “core’s law” since it is exponential but not really obvious yet, as we are still on the flat part of the curve. But just as in the 1970s you could predict chips with tens of thousands of transistors even though there were just 128 or so at the time, you can run the numbers out a decade. Who is going to make designing such a system feasible and cost-effective? Lucio finished by throwing this down as a gauntlet for the whole industry, a challenge and an opportunity. And when companies come along that solve a piece of the puzzle, guess who will turn out to be an advisor, investor and board member. Lucio.




Semiconductor Safety

by Daniel Nenni on 11-06-2014 at 7:00 am

Semiconductors and automotive are now like peanut butter and jelly. Certainly you can have one without the other, but why would you? I remember when a car first talked to me, telling me that the door was ajar. It sounded more like “the door is a jar,” but I got the point. Now my car tells me just about everything, including what is wrong with the engine or transmission. Not every semiconductor device designed into our cars is considered mission critical, but I beg to differ.


Have you ever driven cross country with four children when the entertainment system failed? Talk about mission critical. Less than a year after I bought my car a light went on saying I should “return to workshop immediately” (it’s German). One of the sensors in the transmission intermittently failed so they replaced the entire transmission ($10,000). Thankfully it was covered under factory warranty because that would have certainly been mission critical to my wallet!

Also Read: ISO 26262 driving away from mobile SoCs

Automotive electronics will change dramatically over the next few years, but given the 7-year design cycle (which includes 2 years of test) we are not really talking about bleeding-edge or even leading-edge technology here. Even so, safety verification is an important part of the automotive semiconductor design cycle, which is why I took this briefing. My overall concern is that with mobile devices dominating the semiconductor industry we have lost sight of MTBF, failure-in-time (FIT) rates, the risk of undetected failures, and so on. Seriously, what do you do when your smartphone drops a call? Or an app crashes? Or your phone just quits? How about when your anti-lock brakes fail or your airbags make an unscheduled deployment?
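For reference, the reliability metrics mentioned relate simply: FIT counts failures per billion device-hours, and MTBF is its reciprocal. A quick sketch (the 50-FIT figure is an illustrative assumption, not a real automotive specification):

```python
def fit_to_mtbf_hours(fit):
    """FIT = failures per 1e9 device-hours; MTBF is the reciprocal."""
    return 1e9 / fit

mtbf = fit_to_mtbf_hours(50)   # a hypothetical 50-FIT component
print(mtbf)                    # -> 20000000.0 hours
print(mtbf / (24 * 365))       # -> ~2283 years for a single device
```

Of course, a car contains thousands of such devices operating together, which is why system-level failure rates, not single-device MTBF, drive standards like ISO 26262.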

The Incisive Functional Safety Simulator and Functional Safety Analysis technologies are part of the Cadence System Development Suite (SDS), addressing the largest and most complex verification and hardware-software co-development challenges faced by semiconductor and system companies:

HIGHLIGHTS:

  • Solution automates three elements of ISO 26262 compliance including traceability, safety verification and tool confidence level
  • New Incisive Functional Safety Simulator delivers up to 10x simulation performance versus traditional solutions
  • New functional safety regression analysis capability in Incisive vManager automatically generates regression profiles and results, enabling the traceable audit trail for compliance

“Addressing functional safety challenges, particularly in automotive electronics, is critical for the success of system and semiconductor companies,” said Charlie Huang, executive vice president, Worldwide Field Operations and System & Verification Group at Cadence. “By partnering with companies like Melexis who embrace functional safety today, Cadence is delivering a solution that enables engineers to more efficiently address one of the key requirements to proliferate fault-tolerant electronics in the automotive industry where the safety of consumers is paramount.”

Cadence enables global electronic design innovation and plays an essential role in the creation of today’s integrated circuits and electronics. Customers use Cadence software, hardware, IP, and services to design and verify advanced semiconductors, consumer electronics, networking and telecommunications equipment, and computer systems. The company is headquartered in San Jose, Calif., with sales offices, design centers, and research facilities around the world to serve the global electronics industry. More information about the company, its products, and services is available at www.cadence.com.



Daylight Savings Time and the IoT

by Daniel Payne on 11-05-2014 at 6:00 pm

On Sunday in the USA we changed our clocks back one hour to account for Daylight Savings Time and I was reminded of how far we have to go in getting all of our devices to understand and automatically account for the time. Despite all of the talk about IoT and how it has the promise to automate our lives, we still have to manually set the time. Here’s my experience with DST around the home:

Device                 Type              Connectivity    Automatically Changed Time
MacBook Pro            Laptop            WiFi            Yes
iPad 3                 Tablet            WiFi            Yes
iPad Air               Tablet            WiFi            Yes
Galaxy Note 2          Android Phone     WiFi, GSM       Yes
Kindle Paperwhite      e-book Reader     WiFi            No
Honeywell Thermostat   Thermostat        –               No
AM/FM Alarm            Clock             –               No
Microwave              Appliance         –               No
Oven                   Appliance         –               No
1998 Acura RL          Sedan             –               No
2001 Honda Odyssey     Minivan           –               No
Cateye Stealth50       Cyclocomputer     GPS, Ethernet   No
Insignia               Set-top box       Antenna         Yes
Sony DVD-Bluray        DVD/Blu-ray       WiFi            Yes

All of our devices using Android, iOS, Mac OS X and Windows understood and made the time change automatically.
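Those devices all do essentially the same thing under the hood: they keep time in UTC and consult a timezone database that encodes the DST rules. Python’s zoneinfo module shows the 2014 fall-back transition (assuming a system with an up-to-date tz database; the choice of US Pacific time is illustrative):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

tz = ZoneInfo("America/Los_Angeles")
# DST ended at 2:00 a.m. local time on Sunday, November 2, 2014 in the US.
before = datetime(2014, 11, 2, 0, 30, tzinfo=tz)   # still PDT (UTC-7)
after  = datetime(2014, 11, 2, 12, 0, tzinfo=tz)   # now PST (UTC-8)
print(before.utcoffset())  # -> -1 day, 17:00:00  (i.e., UTC-7)
print(after.utcoffset())   # -> -1 day, 16:00:00  (i.e., UTC-8)
```

Any device that tracks UTC and ships a current tz database gets DST right for free; the holdouts in the table are the ones that only keep a local wall clock.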

The Kindle Paperwhite surprised me: it didn’t update the time despite being connected to WiFi, and even after a Sync operation it still required me to change the time of day manually. Ironically, the 3G model of the Kindle Paperwhite does change the time automatically. Amazon’s more popular Kindle Fire devices also know how to update the time automatically.


Amazon Kindle Paperwhite

For home automation I could upgrade my thermostat to a WiFi-enabled device from Honeywell or Nest, but I’m not sold on the cost savings so will hold off for a while on that purchase.


Honeywell Thermostat

Newer AM/FM alarm clocks are WiFi enabled and the consumer electronics category of WiFi-driven radios is booming.

I’m not sure that I want my microwave to be WiFi enabled, because it has a tendency to knock out WiFi signals enough that my daughter’s iPhone disconnects while our Android phones and Windows laptops stay connected.

Our autos are old enough that they have no networking features, although there’s an interesting group called the Open Automotive Alliance that plans to bring the Android platform to cars starting this year.


Apple has something called CarPlay for infotainment with partners like: Alfa Romeo, Audi, BMW, Chevrolet, Chrysler, Citroen, Dodge, Ferrari, Fiat, Ford, Honda, Hyundai, Jaguar, Jeep, Kia, Land Rover, Mazda, Mercedes-Benz, Mitsubishi, Nissan, Peugeot, Subaru, Suzuki, Toyota and Volvo.


Apple CarPlay

For cycling my Cateye Stealth 50 cyclocomputer does connect to both GPS and Ethernet, although I still had to manually select DST off.

Our home audio-visual equipment is new enough that it connects to WiFi and auto-sets the date and time.

Slowly but surely, I am having to reset fewer clocks manually in our home each year, but we still have a ways to go before the IoT fully automates the time change. How does the time change affect your digital life?


MQTT not IoT “god protocol”, but getting closer

by Don Dingee on 11-05-2014 at 12:00 pm

One protocol, and its descendants, drove the success of the World Wide Web. IP, or Internet Protocol, is the basis of every browser connection and the backbone of IT data centers. Some assumed that the Internet of Things would follow suit…


Samsung 14nm is the one delayed!

by Daniel Nenni on 11-05-2014 at 6:00 am

As you may have read, the CEO of Ultratech made some unfortunate statements on the recent quarterly conference call regarding FinFET yield. As a result there has been a lot of speculation about the who, what, and why. I blogged about it because it interested me personally, plus I wanted to collect more data on the subject. Some of the resulting data was posted publicly, but most was emailed to me in private, discussed in person, or shared on the phone.

Also Read: Let the FinFET Yield Controversy Begin!

One of the many benefits of blogging on SemiWiki is that we have access to an incredible amount of semiconductor-related data that passes through the site and our private email. It also gives us bloggers access to the semiconductor rank and file, more so than journalists, as we are working semiconductor professionals and travel in the same circles. The bad news is that we get our fair share of hate mail. And when I say “we” it’s mostly me because, let’s face it, I don’t always play well with others; just check my grade school report cards.

While I speculate and make guesses in my blogs I generally know the truth before going out on a limb. On this occasion however I was wrong. Using deductive reasoning I expertly guessed that it was UMC that pushed out the equipment orders referenced in the conference call. It was not and I apologize. It was Samsung. From what I know, that specific market segment (annealing) is mostly shared between Ultratech and Mattson Technology. Here are clips from the recent conference calls:

UTEK CEO: As we have discussed on past conference calls, the difficult implementation of 3D FinFET microprocessors to high production manufacturing. Once again a major logic manufacturer delayed their FinFET ramp. We had then requested to prepare LSA tools for shipment for the end of the third quarter which was delayed. These LSA shipments for the most part caused our third quarter revenue to be less than projected. These LSA systems have been rescheduled for shipment in the fourth quarter. Due to the continued low yield in FinFET devices for the past two years, we have seen a reduction in new LSA bookings in subsequent shipments…

MTSN CEO: The adjustments of our customers’ production capacity plan caused a slight pause in Mattson’s quarter-to-quarter growth, which is consistent with the industry trend. However, we continue to expect our total 2014 revenue to grow over 40%, it can be achieved 2013. As a recovery of wafer fab equipment spending begins in the coming quarters...

Read the entire transcripts and tell me: which of these companies would you trust your business with if you were a major semiconductor manufacturer? Better yet, which would you trust your hard-earned investment dollars with?

And just because Samsung allegedly pushed out an equipment order does not mean that FinFETs are in trouble or, as Motley Fool’s “Senior” Technology Specialist put it, “that neither TSMC nor Samsung quite has the FinFET transistor structure (which promises higher performing transistors at lower power) figured out.”

I was also told that Ultratech lost the Intel and TSMC FinFET annealing business to Mattson which I understand completely since Ultratech clearly “does not play well with others.” If someone wants to briefly explain where annealing fits in with the semiconductor manufacturing process in the comment section it would be greatly appreciated.



FD-SOI, an Opportunity for China?

by Paul McLellan on 11-04-2014 at 11:00 pm

Last month in Shanghai was a meeting of the FD-SOI consortium. The focus of the meeting was largely on the suitability of using FD-SOI to serve the Chinese market. The fabs in China are not right on the bleeding edge and are very cost-sensitive so 28nm is probably as advanced as they will get for a long time if not indefinitely. China has a goal that by 2020 they will manufacture 40% of the semiconductors used in China within the country. This is a big goal since China is actually the largest market for semiconductors in the world. This year the Chinese market will be $161B with just 8.9% being manufactured in Chinese fabs. Of course a lot of those semiconductors are re-exported inside finished goods but a lot are consumed in China too, after all it is also the largest market for smartphones.


The opening keynote was given by Handel Jones of IBS. He pointed out that FD-SOI is a great bridge process between 28nm planar and FinFET. It has much of the same power efficiency, low leakage and so on, but at a price that is roughly the same as planar. Handel still believes that 14nm/16nm will not ramp until Q4/2016 or Q1/2017, which seems very late based on everything else we have been hearing. Handel also had estimates of 28nm wafer volume going out to the middle of the next decade. 28nm is clearly going to be a very long-lasting node, which also means there is potentially a very large market for 28nm FD-SOI.


Handel also had some wafer pricing data. I assume that this is based on a model rather than actual quoted prices, since those are usually too commercially sensitive. A 28nm FD-SOI wafer in 2015 is $2400, compared to $4800 for a 14/16nm FinFET wafer and $3600 for 14/16nm FD-SOI. If those predictions hold, that is quite a price differential between FD-SOI and FinFET at the 14/16nm node. By 2017 IBS reckons that the cost of 100M gates will be just 90c in 28nm FD-SOI compared to $1.57 in 14/16nm FinFET, roughly 75% more for the FinFET.
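Running the quoted IBS figures makes the gap concrete (a quick arithmetic check, using only the numbers cited in this paragraph):

```python
# Wafer prices quoted for 2015 (IBS estimates, USD per wafer).
wafer = {"28nm FD-SOI": 2400, "14/16nm FD-SOI": 3600, "14/16nm FinFET": 4800}
print(wafer["14/16nm FinFET"] / wafer["28nm FD-SOI"])  # -> 2.0

# Cost per 100M gates projected for 2017 (USD).
gate_cost = {"28nm FD-SOI": 0.90, "14/16nm FinFET": 1.57}
premium = gate_cost["14/16nm FinFET"] / gate_cost["28nm FD-SOI"] - 1
print(f"FinFET gate-cost premium: {premium:.0%}")  # -> 74%
```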


Other presentations were:

  • Laurent Remont of ST on FD-SOI technology
  • Marco Cassele-Rossi of Synopsys on designing with FD-SOI
  • Haoron Wang of Synapse on designing with FD-SOI for power efficiency
  • Pete Fowley of Wave Semiconductor on leveraging FD-SOI to achieve low power and high speed
  • Giorgio Cesana of ST on FD-SOI for energy efficient ICs
  • Tom Reeves of IBM (maybe soon GlobalFoundries) on the SOI ecosystem
  • Paul Colestock of GlobalFoundries on foundry business opportunities

The presentations can all be downloaded here.


In-Design DFM Signoff for 14nm FinFET Designs

by Pawan Fangaria on 11-04-2014 at 4:00 pm

While the FinFET yield controversy goes on, I see a lot being done to improve that yield by various means. One prime trend today, and a worthwhile one, is to pull various signoffs as early as possible into the design cycle. And DFM signoff is a must with respect to fabrication yield. This reminds me of the patents I filed about 6 years ago while I was at Cadence; they dealt with bringing lithography awareness into the design much earlier, at the custom floorplanning and layout stage. It may have been too early a methodology to pick up at that time, but now, with 14nm and 16nm process nodes and FinFET technology, it has become a necessity, which I am proud to see 🙂

Recently, I was impressed by one of the Samsung presentations at DAC this year, in which KK Lin of Samsung describes Samsung’s readiness with its 14LPE and 14LPP technologies and how they have streamlined their process, flow and overall solution to make a design free of DFM hotspots for better yield. Samsung is ready with PDKs, libraries, IPs, and design kits for these technologies. After prototyping, mass production is scheduled for the end of this year. The presentation describes their leading 14nm technology, which has the smallest gate pitch (CPP), innovative constructs for connecting gates in the most compact manner and the smallest-area memory (SRAM) solution. Let’s look at their effective and innovative approach to DFM signoff.

In their new DFM solution, Samsung offers DFM kits which can be inserted into design flows, thus enabling designers to seamlessly signoff the design for DFM.

Samsung’s 14nm DFM requirements are as listed in the above table; some are mandatory and some are recommended for better design differentiation at the library, IP and chip level. The Process Hotspot Repair (PHR) and CMP Hotspot Check are mandatory requirements, with zero tolerance for hotspots.

Samsung uses pattern matching for DFM checking, which can represent very complex patterns and runs fast compared to the model simulation method; traditional DRC can also be used but becomes very complicated, requiring lengthy code, and is not recommended. The pattern matching method provides a viable safety net to capture and repair all known issues from silicon.
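Conceptually, pattern matching here is a sliding-window search for known-bad layout configurations. A toy sketch on a binary layout grid (the "hotspot pattern" below is invented for illustration; real DFM pattern libraries encode polygon geometries and tolerances, not bitmaps):

```python
def find_hotspots(layout, pattern):
    """Slide `pattern` over `layout` and return top-left corners of
    every exact match (a toy stand-in for DFM pattern matching)."""
    ph, pw = len(pattern), len(pattern[0])
    hits = []
    for r in range(len(layout) - ph + 1):
        for c in range(len(layout[0]) - pw + 1):
            if all(layout[r + i][c + j] == pattern[i][j]
                   for i in range(ph) for j in range(pw)):
                hits.append((r, c))
    return hits

# 1 = metal present, 0 = empty; the pattern is a known-bad notch shape.
layout  = [[1, 1, 0, 1],
           [1, 0, 0, 1],
           [1, 1, 1, 1]]
pattern = [[1, 0],
           [1, 1]]
print(find_hotspots(layout, pattern))  # -> [(1, 0)]
```

The appeal over DRC is that a library of such patterns, harvested from real silicon failures, can be matched directly without writing rule code for each case.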

The PHR flow is based on the Cadence DFM pattern analysis tool, Litho Physical Analyzer (LPA), integrated with the Encounter EDI environment and qualified for the 14nm process. The pattern library is created from process hotspot patterns found on wafers. After P&R, the patterns are detected using LPA and fixed in the same environment until signoff. Very fast detection and a high fixing rate (>95%) have been observed using this flow, without any timing impact on the design, thus improving yield. The minimal set of remaining hotspots is fed forward to the process team for monitoring.

Designers need to see how fab-friendly their design is with respect to CMP and to feed the results forward to the fab, so Samsung, in collaboration with Cadence, developed this flow involving CMP model calibration and prediction of CMP hotspots in the design. In CMP model validation, actual measurements and simulation results have been observed to correlate closely (~90%) across different step heights, increasing confidence that hotspots are captured.

Very recently, Samsung and Cadence collaborated to develop a block-based analysis flow that lets designers run CMP analysis during design implementation and remove CMP hotspots in-design; correcting CMP problems later can be a very difficult, time-consuming and costly affair. The flow uses a halo that mimics a virtual neighboring environment, obtained from silicon data analysis of the distribution of silicon in terms of density and line width; reasonable halo conditions are chosen for block-level simulation. Samsung provides this solution in its CMP model offering.

So, broadly, what is done for CMP hotspot detection? Extraction of critical geometry information from the design database; analysis of attributes such as interlayer dielectric (ILD) height and thickness, surface height and Cu thickness; comparison of these against threshold values; and identification of hotspots on the chip.
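That recipe (extract attributes, compare against thresholds, flag hotspots) can be sketched as follows; the attribute names, units and threshold windows here are invented for illustration, not Samsung’s actual limits:

```python
# Per-tile attributes extracted from the design database (hypothetical nm).
tiles = [
    {"id": "T1", "ild_thickness": 310, "cu_thickness": 95},
    {"id": "T2", "ild_thickness": 355, "cu_thickness": 102},
    {"id": "T3", "ild_thickness": 298, "cu_thickness": 130},
]

# Illustrative acceptance window (min, max) for each attribute.
limits = {"ild_thickness": (300, 350), "cu_thickness": (90, 120)}

# A tile is a hotspot if any attribute falls outside its window.
hotspots = [
    t["id"]
    for t in tiles
    if any(not (lo <= t[attr] <= hi) for attr, (lo, hi) in limits.items())
]
print(hotspots)  # -> ['T2', 'T3']
```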

Samsung provides a complete package for its DFM solution, including the kit, model, block-level flow and fixing guidelines based on silicon distribution, with suggestions such as narrowing, removing, widening or adding dummy patterns, among others.

It was nice to learn about a differentiated solution for DFM signoff that is Samsung foundry certified, with the flow enabled by state-of-the-art tools from Cadence. View the online presentation here for more details; click the link against “Foundry DFM Requirements with Cadence In-Design, Signoff DFM”.

Do we get the desired yield with 14nm FinFET? I think we need to wait until 2015, it’s not too far!



It’s a bouncing baby IEEE standard!

by Beth Martin on 11-04-2014 at 12:00 am

Pass the cigars! On November 3rd, 2014, the IEEE-SA Standards Board finally approved IEEE P1687 as a new standard. From now on, you can drop the “P” and just call it 1687, or to its friends, IJTAG. Now would be a good time to sign up for an IJTAG technical workshop.

The new IEEE 1687 Internal JTAG (IJTAG) standard is changing the way the industry validates, tests and debugs chips and circuit boards. It builds upon the popular IEEE 1149.1 JTAG board-level test access standard with a set of uniform methods to describe chip-internal IP blocks, which are referred to within the standard as “instruments.” IJTAG-based methods are more cost-effective, more accurate and faster than legacy probe-based technologies like oscilloscopes. IJTAG’s software-driven tests and validation routines are initiated from instruments embedded inside chips.

IJTAG is exciting because it addresses the complex issue of testing a heterogeneous set of embedded IP. Traditionally, the interface for communicating control sequences, and the sequences themselves, are defined by IP developers in a wide variety of styles with little commonality. It therefore falls to the designers to create unique logic to integrate each embedded instrument into an overall design. For SoCs, which often have literally hundreds of instruments from a variety of sources with disparate interface styles, this is a major undertaking that typically requires large engineering resources and time. The new IJTAG standard is designed to solve this problem by providing a method for plug-and-play IP integration, enabling communication to all the instruments from a single access point (generally an IEEE 1149.1 TAP).

It standardizes a language for describing the IP interface and how IPs are connected to each other. It introduces a new language that defines how patterns that operate or test the IP are to be described. And, there are already automation tools available to simplify the process of connecting any number of IJTAG compliant IP blocks into an integrated network for uniform access and control.
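A rough way to picture an IJTAG network is as a scan chain whose length changes as segment-insertion bits (SIBs) open and close access to instruments. A toy model (the instrument names and register sizes are invented for illustration; this sketches the concept, not the standard’s actual ICL/PDL languages):

```python
class Instrument:
    """An embedded IP block with a scan-accessible register."""
    def __init__(self, name, reg_bits):
        self.name, self.reg_bits = name, reg_bits

class SIB:
    """Segment Insertion Bit: one chain bit that, when open,
    splices its instrument's register into the scan path."""
    def __init__(self, instrument):
        self.instrument, self.open = instrument, False
    def scan_bits(self):
        return 1 + (self.instrument.reg_bits if self.open else 0)

# A network of two instruments behind one access point.
network = [SIB(Instrument("temp_sensor", 16)), SIB(Instrument("pll_status", 8))]
print(sum(s.scan_bits() for s in network))  # -> 2 (all SIBs closed)
network[0].open = True                       # select just the sensor
print(sum(s.scan_bits() for s in network))  # -> 18
```

The point of the hierarchy is exactly this: scan cost tracks the instruments you actually select, not everything on the chip.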

IJTAG has some traction already because it is useful to both IP providers and to chip designers. For IP providers, it makes their products easier to integrate and therefore more attractive to a wider customer base. It also gives them better testing and debugging capabilities as well as an overall more robust product. For chip integrators, the standard also expands the availability of IP sources by eliminating integration uncertainties and incompatibilities as well as provides increased scalability for rapidly growing design sizes.

There’s already been a lot of work with IJTAG. This 2012 whitepaper from Mentor Graphics and NXP Semiconductors details how they implemented P1687 on mixed-signal IPs in a 65 nm automotive design. The results show significant advantages of P1687 over the IEEE 1149.1 (JTAG) test methodology, both in automating test pattern development and in reducing test setup data volume by more than 50%. The latest news is from ASSET InterTech and describes how interoperability between vendors’ tools allows IJTAG to be used easily at both the chip and board level.

In short, IJTAG standardizes how embedded instruments are described, connected and operated, and the available tools can stitch any number of compliant IP blocks into an integrated network, letting you send commands to all of them from a single top-level access point (a TAP).

Asset and Mentor have also teamed up to present a series of technical workshops on IEEE 1687 across the world. You can register here.

Has anyone already started using IJTAG? Do you plan to look into it now that the standard is ratified?


Improve Test Robustness & Coverage Early in Design
by Pawan Fangaria on 11-03-2014 at 5:00 pm

Keeping a semiconductor design testable with high test coverage has always been a requirement. However, with shrinking technology nodes and large, dense SoC designs full of complex logic structures, it has become mandatory to reach close to 100% test coverage, and it is extremely difficult to cope with the explosion of test patterns while keeping them robust enough to detect each fault. At-speed (transition) faults are more cumbersome to detect than stuck-at faults, and deep in the logic hierarchy the circuit becomes hard to test, giving rise to another category called random resistive faults. Often a chip fails on the tester due to glitches in the design triggered by a non-robust test pattern, and understanding and correcting these glitches can be very time consuming.
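To make the fault-coverage vocabulary concrete, here is a toy Python sketch of stuck-at fault simulation: a pattern detects a fault when the good and faulty circuits disagree at the output, and coverage is the fraction of faults detected by at least one pattern. The two-gate circuit and the fault list are illustrative only:

```python
# Toy stuck-at fault simulation for y = (a & b) | c.

def val(nets, name, fault):
    """Read net `name`, overriding with the stuck value if faulted."""
    if fault and fault[0] == name:
        return fault[1]
    return nets[name]

def circuit(a, b, c, fault=None):
    nets = {"a": a, "b": b, "c": c}
    nets["n1"] = val(nets, "a", fault) & val(nets, "b", fault)
    nets["y"] = val(nets, "n1", fault) | val(nets, "c", fault)
    return val(nets, "y", fault)

def coverage(patterns, faults):
    detected = {f for f in faults
                for p in patterns
                if circuit(*p) != circuit(*p, fault=f)}
    return len(detected) / len(faults)

# all stuck-at-0 / stuck-at-1 faults on the five nets
faults = [(n, v) for n in ("a", "b", "c", "n1", "y") for v in (0, 1)]
print(coverage([(1, 1, 0), (0, 0, 0)], faults))                  # 0.7
print(coverage([(1, 1, 0), (0, 1, 0), (1, 0, 0), (0, 0, 1)], faults))  # 1.0
```

Even on this trivial circuit, two patterns leave three faults undetected; getting the last few percent of coverage is what drives the pattern explosion the article describes.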

Here is an example of a test pattern robustness issue: the test pattern generation tool assumes there is only one clock, but two clocks merging at an ‘OR’ gate can cause a glitch on the tester. Similar issues arise in deep logic circuits, especially in scenarios that lead to merged test clocks, re-convergent resets and so on. These kinds of issues need to be checked and avoided to make the test patterns more robust.
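A small Python simulation illustrates the clock-merge hazard: two phase-shifted clocks ORed together produce a spurious narrow pulse that neither source clock contains. The waveforms are sampled at discrete time steps, and the periods and offsets are illustrative only:

```python
# Toy illustration of the clock-merge glitch at an OR gate.

def clock(period, duty, offset, steps):
    return [1 if ((t - offset) % period) < duty else 0 for t in range(steps)]

def pulse_widths(wave, level):
    """Lengths of consecutive runs of `wave` held at `level`."""
    widths, run = [], 0
    for v in wave:
        if v == level:
            run += 1
        elif run:
            widths.append(run)
            run = 0
    if run:
        widths.append(run)
    return widths

steps = 40
clk1 = clock(period=10, duty=5, offset=0, steps=steps)
clk2 = clock(period=10, duty=5, offset=6, steps=steps)   # phase-shifted
merged = [a | b for a, b in zip(clk1, clk2)]

# Each source clock has clean 5-step low phases, but the merged signal
# dips low for only a single step: a narrow spurious pulse ("glitch")
# that an ATPG tool assuming one clock never models.
print(min(pulse_widths(clk1, 0)), min(pulse_widths(merged, 0)))  # 5 1
```

On silicon that one-step dip corresponds to a runt pulse whose timing depends on the phase relationship of the two clocks, which is exactly why the pattern passes in simulation yet fails on the tester.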

It was a nice occasion to attend a webinar offered by Kiran Vittal, Sr. Director, Product Marketing at Atrenta, where he presented details about the SpyGlass DFT and SpyGlass DFT DSM products. These tools can be used very early in the design phase, at the RTL stage, to identify such issues, fix them, verify different aspects of the design, and deliver improved RTL that provides higher stuck-at and at-speed fault coverage along with better robustness and scannability.

A designer can check for blocks with low coverage in the fault browser, and then view schematics of those blocks to understand the issues and fix the RTL. The audit coverage report, for both stuck-at and transition faults, provides extremely useful information about existing coverage and the actions to perform to increase it. Recently Atrenta added a smart capability to the audit report: it checks for test robustness issues such as merging clocks and re-convergent resets and pinpoints them in the report; for example, the flops that could be the source of glitches are listed. With the help of this report, designers can save significant time and effort by fixing these issues at RTL, rather than leaving them for the downstream tools, where debugging takes much longer, involves weeding through far more detailed data, and ties up much more expensive tools. A dashboard for management review is also provided, where management can set objectives and criteria to be achieved and then periodically review the progress and trend lines.

SpyGlass DFT identifies the positions in RTL where test points can be inserted to improve the controllability and observability of particular nodes. It also reports the number of faults that can be detected after inserting these test points. To fix random resistive issues (described below), SpyGlass DFT DSM can trace through the hierarchy and identify test points to be inserted at block boundaries. The random pattern coverage after applying test points is also generated for ‘what if’ analysis.

Random resistive faults are hard to test because they are generally buried deep inside the design hierarchy. By zooming in on the color annotations in the displayed schematic, the controllability and observability of nodes, ranked from lowest to highest, can be seen, and appropriate actions can be taken to fix points with low observability or controllability. Random resistive faults have a high impact on ATPG pattern count and runtime, and ATPG efficiency is especially poor for transition fault detection. SpyGlass DFT DSM reports random pattern coverage estimates in a hierarchical fault browser and, through color annotation in schematics, helps designers fix specific faults to increase the coverage.
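The controllability/observability idea can be sketched with a COP-style calculation (not Atrenta's algorithm; a classic textbook measure, shown here for a fanout-free illustrative netlist): controllability is the probability a net is 1 under random inputs, and observability is the probability a change on the net propagates to the output.

```python
# COP-style testability sketch for y = ((a & b) & c) | d, fanout-free.

# netlist: name -> (gate_type, inputs); primary inputs have C = 0.5
NETLIST = {
    "n1": ("AND", ["a", "b"]),
    "n2": ("AND", ["n1", "c"]),
    "y":  ("OR",  ["n2", "d"]),
}
INPUTS = {"a": 0.5, "b": 0.5, "c": 0.5, "d": 0.5}

def controllability(net):
    """Probability the net evaluates to 1 under random inputs."""
    if net in INPUTS:
        return INPUTS[net]
    gate, ins = NETLIST[net]
    p = 1.0
    for q in (controllability(i) for i in ins):
        p *= q if gate == "AND" else (1 - q)
    return p if gate == "AND" else 1 - p

def observability(net, out="y"):
    """Probability a value change on `net` propagates to `out`."""
    if net == out:
        return 1.0
    drv = next(n for n, (_, ins) in NETLIST.items() if net in ins)
    gate, ins = NETLIST[drv]
    p = 1.0
    for s in (i for i in ins if i != net):
        c = controllability(s)
        # AND passes a change when side inputs are 1; OR when they are 0
        p *= c if gate == "AND" else (1 - c)
    return p * observability(drv, out)

# the deeper the node, the harder it is to control and observe: these
# are the random-pattern-hard spots that benefit from test points
for net in ("a", "n1", "n2"):
    print(net, controllability(net), observability(net))
```

Detection probability of a stuck-at fault is roughly (probability of driving the opposite value) times observability, so a node like `a` here, with observability 0.125, needs many random patterns; inserting a test point at a block boundary raises these probabilities, which is what the tool's ‘what if’ analysis quantifies.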

In an SoC, the DFT architecture can be quite complex, and the test logic is controlled by the JTAG TAP controller. SpyGlass DFT runs at the block level as well as the SoC level and validates block-level constraints that must be satisfied at the SoC level. It treats the block as a black box and verifies whether the correct ‘test mode’ values reach the block, thus allowing top-down processing early in the design cycle.
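For reference, the TAP controller mentioned above is the standard 16-state IEEE 1149.1 state machine driven by the TMS pin. A minimal Python model (state names as in the standard) shows how a TMS bit sequence walks the test logic into a scan state:

```python
# The 16-state IEEE 1149.1 TAP controller: each state maps to its
# (TMS=0, TMS=1) successor states.
TAP = {
    "Test-Logic-Reset": ("Run-Test/Idle", "Test-Logic-Reset"),
    "Run-Test/Idle":    ("Run-Test/Idle", "Select-DR-Scan"),
    "Select-DR-Scan":   ("Capture-DR",    "Select-IR-Scan"),
    "Capture-DR":       ("Shift-DR",      "Exit1-DR"),
    "Shift-DR":         ("Shift-DR",      "Exit1-DR"),
    "Exit1-DR":         ("Pause-DR",      "Update-DR"),
    "Pause-DR":         ("Pause-DR",      "Exit2-DR"),
    "Exit2-DR":         ("Shift-DR",      "Update-DR"),
    "Update-DR":        ("Run-Test/Idle", "Select-DR-Scan"),
    "Select-IR-Scan":   ("Capture-IR",    "Test-Logic-Reset"),
    "Capture-IR":       ("Shift-IR",      "Exit1-IR"),
    "Shift-IR":         ("Shift-IR",      "Exit1-IR"),
    "Exit1-IR":         ("Pause-IR",      "Update-IR"),
    "Pause-IR":         ("Pause-IR",      "Exit2-IR"),
    "Exit2-IR":         ("Shift-IR",      "Update-IR"),
    "Update-IR":        ("Run-Test/Idle", "Select-DR-Scan"),
}

def walk(state, tms_bits):
    """Apply a sequence of TMS values, one per TCK rising edge."""
    for tms in tms_bits:
        state = TAP[state][tms]
    return state

# From reset, TMS = 0,1,0,0 reaches Shift-DR; holding TMS high for
# five clocks returns to Test-Logic-Reset from any state.
print(walk("Test-Logic-Reset", [0, 1, 0, 0]))   # Shift-DR
print(walk("Shift-DR", [1, 1, 1, 1, 1]))        # Test-Logic-Reset
```

Every test-mode value that SpyGlass checks at a block boundary ultimately has to be deliverable through a walk of this state machine, which is one reason validating the access path early at RTL pays off.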

To aid complex initialization with JTAG/IEEE 1500 controllers, SpyGlass DFT provides more effective and easier debugging of test sequences through Tcl commands and assertions. For easy debugging of failed assertions, the relevant logic is highlighted in the RTL and the issue is highlighted in the schematic, all in a single GUI. An expanded bit-sequence and waveform view is also available. This does not require writing time-consuming testbenches; applying structural rules at RTL, plus a few assertions to quickly verify connectivity for the different modes of operation, is sufficient. Verifying different types of faults and connectivity can thus be done very early in the design phase.

Another interesting and very effective solution from Atrenta is SoC test signoff with Abstract Models of IP, which provides a significant improvement (~2-6x) in runtime and memory compared to the flat flow. SpyGlass also provides automatic insertion of memory test and repair at RTL; it supports vendor-independent, user-supplied MBIST at RTL.

RTL test signoff meets many important design requirements, such as scannability, stuck-at and transition test coverage goals, and test robustness, much earlier in the design flow and much faster (~10x faster than post-layout). Kiran also talked about a customer case, a mobile application, where the ATPG test coverage achieved for stuck-at faults was within ~1% of what SpyGlass reported at RTL (>99%). The RTL estimate vs. transition ATPG correlation was within ~5%. The average runtime speedup of RTL compared to netlist ATPG was ~30x. View the on-demand webinar posted on the Atrenta website to learn more.

More Articles by Pawan Fangaria…..