
EDS Fair: Dateline Yokohama

by Paul McLellan on 11-20-2012 at 12:22 pm

The Electronic Design and Solutions Fair (EDSF) was held in Yokohama, Japan from Wednesday to Friday last week. It was held at the Pacifico Hotel, somewhere I have stayed several times, not far from the Yokohama branch of the Hard Rock Cafe and what used to be, at least, the biggest Ferris wheel in the world.

Atrenta was one of the many companies at EDSF2012. The EDS fair consists, for the second year, of a combination of what used to be a pure EDA show and the larger embedded show. Since Japan tends to be at the forefront of systems thinking, with Europe coming next and the US bringing up the rear, this combination is good since it brings in people from across the whole spectrum of design, not just a succession of CAD managers from the usual suspects. This is especially important for Atrenta since they are focused on doing design at a higher level, with hardware IP and software IP combined to form systems.

Atrenta managed to generate a good supply of high-quality leads, each qualified by either an AE or a salesperson (so this doesn’t include students, press etc). Just like SemiWiki, students, press, PR and everyone are all very welcome but in the end it is the real design and embedded engineers who are the most important audience.

One thing that worked very well was the use of iPads as a way for customers to gain a deeper understanding of Atrenta's products. They ported product presentation PDFs onto 8 iPads. In addition to salespeople and AEs using them as a tool to explain Atrenta's products, the iPads also gave customers a deeper understanding of the products than they would typically get from perusing a panel. This proved very effective, as it allowed the team to engage with more customers than demos alone would have.

In terms of product interest, the highest was with BugScope. We believe this to be a combination of existing customers wanting to know about our newest product as well as designers looking for solutions to their verification challenges. Following closely behind were: SpyGlass CDC, SpyGlass for FPGA (we had a presentation highlighting SpyGlass plus CDC combined with the Xilinx Vivado support), and SpyGlass Power.

Meanwhile, back on this side of the Pacific, Ed Sperling had one of his round tables on the challenges of 3D and new process nodes. Venki Venkatesh of Atrenta was one of the participants. I've participated myself in a couple of Ed's round tables. Basically he records the whole thing and then transcribes it (himself, it is so technical you can't just give it to a secretary-type), cleans up the ums and ers and publishes it pretty much verbatim. You can find this one here. One of the themes, as I've been pointing out, is that people are going to stay on 28nm as long as possible, and that there is no rush to 20nm since the costs are a challenge (never mind the technical challenges).

There is also a short video version with the same people around the table here.


How much SRAM proportion could be integrated in SoC at 20 nm and below?

by Eric Esteve on 11-20-2012 at 4:45 am

Once upon a time, ASIC designers integrated memories into their designs (using a memory compiler that was part of the design tools provided by the ASIC vendor). They then had to make the memory observable and controllable, develop the test program for the function, not a very exciting task ("AAAA" and "5555" and other vectors), check the test coverage, and get creative to reach the expected 99.99% magic number. I agree this was a long time ago, but looking back at those days makes you realize how powerful the DesignWare Self-Test and Repair (STAR) Memory System from Synopsys, initially developed by Virage Logic, really is. Today we are talking about version 5 of the tool. Moreover, since the early 2000s most ASIC vendors have externally sourced the SRAM compiler (to Virage Logic at that time), so ASIC designers benefit from faster, denser memories with Built-In Self-Test (BIST) integrated.
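Those "AAAA" and "5555" vectors were essentially complementary checkerboard patterns. As a rough illustration of what such a hand-written memory test did (this is a toy model with the memory as a Python list, not the STAR algorithm or real BIST hardware):

```python
def checkerboard_test(memory_size_words, width=16, fault_at=None):
    """Write a pattern and its complement, read back, return failing addresses."""
    mem = [0] * memory_size_words
    failures = set()
    for pattern in (0xAAAA, 0x5555):      # checkerboard, then its complement
        for addr in range(memory_size_words):          # write phase
            mem[addr] = pattern & ((1 << width) - 1)
        if fault_at is not None:                       # inject a stuck-at-0
            addr, bit = fault_at                       # bit for demonstration
            mem[addr] &= ~(1 << bit)
        for addr in range(memory_size_words):          # read/compare phase
            if mem[addr] != pattern:
                failures.add(addr)
    return sorted(failures)

print(checkerboard_test(256))                    # a clean memory passes
print(checkerboard_test(256, fault_at=(42, 3)))  # the faulty address is caught
```

The point of the 99.99% coverage chase was exactly this: proving that vectors like these actually exercise every cell and catch the fault classes that matter.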

According to Semico, the number of processors integrated into a single SoC is growing (left caption; even if I am not sure that 16 processors per SoC is the standard for every SoC in 2012, this is certainly the case for application processors for smartphones or set-top boxes), and, as a consequence, processor cache size is growing (middle caption), leading memory to dominate chip area, as we can see on the right of the picture. This has an immediate consequence for SoC yield: it would dramatically decrease unless the designer uses repair capability, such as that offered by the STAR product (for Self-Test and Repair). Another point about STAR version 5: unlike the previous version sold by Virage Logic, the tool can work with a memory generated by a compiler from any vendor. Last but not least, STAR Memory System 5 is targeted at designs implemented in 20-nm and below technologies.

You may want to challenge the assertion that SoC yield would be severely impacted when integrating a higher SRAM proportion within an IC at 20nm and below. If so, just have a look at the pictures above:

  • Double patterning introduces variation from overlay shift,
  • Voltage Scaling induces local variation with voltage, and
  • Random Dopant Fluctuation generates global and local Vth variation

These process variations become significant at 20nm, causing bit failures. Implementing SRAM in a system on chip is no longer a "drag and drop" action: the designer also has to take into account solid (automated) test generation, as well as introduce repair capability, as the risk of failure is statistically… a certainty.
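To see why failure is "statistically… a certainty", it is enough to multiply a tiny per-bit failure probability across the tens of millions of bits of a modern cache. A minimal sketch (the per-bit probabilities are illustrative assumptions, not 20nm data):

```python
def prob_at_least_one_failure(bits, p_bit):
    # P(no bit fails) = (1 - p_bit)^bits; "at least one failure" is the complement
    return 1.0 - (1.0 - p_bit) ** bits

cache_bits = 32 * 1024 * 1024   # a 32 Mbit cache
for p_bit in (1e-9, 1e-8, 1e-7):
    p_chip = prob_at_least_one_failure(cache_bits, p_bit)
    print(f"p_bit = {p_bit:.0e} -> P(chip has a failing bit) = {p_chip:.3f}")
```

Even at a per-bit failure probability of one in ten million, a 32 Mbit cache is very likely to contain at least one bad bit, which is why repair becomes mandatory rather than optional.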

STAR Memory System 5 can be implemented for SRAM, as already mentioned, and also for ROM, Register Files, and Content Addressable Memory (CAM). The designer implements a specific wrapper interfacing with the various memories, and a real memory sub-system is created, comprising a master and several slaves. The master, the SMS Server, is connected to the outside world by a Test Access Port (TAP), and to several slaves (SMS Processors) through an IEEE 1500 standardized bus. Each of the SMS Processors is connected to various memories (Single or Dual Port SRAMs or Register Files) through the wrappers. The test bench generation and the insertion described above are completely automated by STAR Memory System 5, which also enables diagnosis and redundancy analysis.

After running the test program on processed wafers, failure diagnosis and fault classification allow redundancy to be implemented, thanks to the eFuse box (top right of the SoC block diagram), dramatically increasing the final SoC yield, as we can see in the image below.
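The yield recovery from repair can be sketched with a simple Poisson defect model: without redundancy a die is good only if it has zero memory defects, while a die with R spare rows survives up to R defects. The defect rate below is an assumption for illustration, not Synopsys data:

```python
from math import exp, factorial

def yield_with_repair(mean_defects, spares):
    """P(defect count <= spares) under a Poisson(mean_defects) defect model."""
    return sum(exp(-mean_defects) * mean_defects**k / factorial(k)
               for k in range(spares + 1))

lam = 2.0   # assumed average memory defects per die
print(f"no repair: {yield_with_repair(lam, 0):.1%}")   # zero defects required
print(f"4 spares : {yield_with_repair(lam, 4):.1%}")   # up to 4 defects repaired
```

With two defects per die on average, yield goes from roughly 14% without repair to roughly 95% with four spares, the kind of dramatic recovery the image illustrates.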

To summarize, we have listed some unique features of the DesignWare STAR Memory System 5:

  • Performance and area optimized architecture to efficiently test & repair thousands of memories with smaller area and less routing congestion

  • Automated IP creation, hierarchical SoC insertion, integration, verification and tester-ready pattern generation
  • Hardened STAR Memory System IP with Synopsys’ DesignWare memories enabling faster design closure, higher performance, higher ATPG coverage, smaller area and reduced power
  • Advanced failure analysis with logical/physical failed bitmaps, XY coordinates of failing bit cells and fault classification

It is important to note that this new version offers:

  • New optimized memory test and repair algorithms to efficiently address memory defects, including process variation faults and resistive faults, at 20-nm and below
  • New hierarchical architecture reducing test & repair area by 30% compared with the previous generation
  • Hierarchical implementation and validation accelerating design cycles by allowing incremental generation, integration and verification of test and repair IP at various levels of the design hierarchy
  • Support for test interfaces of high-performance processor cores maximizes design productivity and SoC performance, see below:

You may want to read the official press release from Synopsys about STAR Memory System 5.

Eric Esteve


Andy Bryant Will Now Lead Intel Into The Foundry Era

by Ed McKernan on 11-19-2012 at 12:30 pm

The announcement that Paul Otellini will step down in May 2013 is extraordinary in the history of the way Intel makes CEO transitions. They are thoughtful, deliberate and years in the making, unlike today’s announcement. Twenty years ago Otellini and Andy Bryant were in the top echelon of Andy Grove’s executive team, and yet Otellini was already known to be the eventual successor to Craig Barrett (who took Grove’s place). Otellini’s understanding of the PC market gave him the edge over Bryant, who would take the position of CFO in 1994 and hold it for 13 years. Since Intel’s market valuation peak in 2000, the company’s x86 business has underperformed while process development has outperformed and is about to reach a 4-year lead relative to the rest of the industry. Since leaving his CFO post, Bryant has focused on Intel’s manufacturing excellence and in a sense outperformed Otellini’s side of the business. Bryant’s work in this area led to his elevation to Chairman of the Board this past May. What is transpiring now is the most radical transformation of the company since Grove and Moore exited the DRAM business in the mid-1980s in order to focus on the nascent, yet highly profitable and growing x86 processor business that powered the PC revolution. Foundry now will reign supreme.

If one goes back and reviews the presentations given to the investment community over the past three years, one will see a constant theme: that Intel’s growing process technology lead will eventually propel them to a leadership position in the PC and new mobile world. It worked perfectly in the 1990s; however, the center of gravity in computing architecture is now driven by the new ARM-based ecosystems (iOS, Android etc.) and by the valuable baseband communications capability led by Qualcomm. Playing catch-up is not what Intel is good at unless it is on the x86 architecture driven by Microsoft Windows. The overwhelming success of Apple’s iPad and Qualcomm’s 4G LTE silicon are the two significant high points of the year, and yet imagine how much further ahead Qualcomm would be if they could have met the step-function demand for their new chips.

Apple’s iPad came into the world as an underpowered Internet screen and has been transformed this past year, with the high-resolution screen and the new A6X processor, into a mobile computing platform that can more than hold its own in the business world against x86 mobiles. Meanwhile the recently introduced iPad mini looks ready to cannibalize the sub-$400 home PC market. AMD felt the effect first, and now it looks like Intel will take the all-important step of breaking away from its reliance on x86 PCs and becoming a foundry to the broader mobile computing world. I see it as a much better position than what Microsoft faces. Wintel is broken.

The transition that is about to occur for Intel may take place much faster than people believe. Looking back, it now appears that a game of chicken was set in motion two years ago by Otellini, Bryant and the Board of Directors over the future of the company. If Intel truly was going to be 4 years ahead of the industry by the time 14nm ramped, then they should build a fab footprint that could serve the majority of the world’s leading-edge silicon. This meant doubling Intel’s wafer starts at a cost of $25B. Meanwhile Otellini had to execute on a plan to not just grow x86 mobiles but to win the smartphone and tablet market. On this score, Otellini was not able to execute fast enough, or we can say the market outran him. Therefore, what is likely to happen is a retreat in the product development of x86 smartphone solutions, including baseband silicon, to remove any competitive barriers to bringing Apple, Qualcomm and others on board.

There is no doubt in my mind that there have been foundry discussions between Intel and Apple, Qualcomm, Broadcom and even Nvidia. My guess is that Bryant has led this side of the business while Otellini focused on Intel’s traditional business. With the PC market shrinking, Intel at 50% fab utilization at 22nm, and the company’s valuation down to $100B, below that of Qualcomm, there was a need to change the business model. There is little time to waste for Intel to leverage its assets.

Apple is working on a transition plan to TSMC that supposedly starts late next year at 20nm. Given their growth rate and the need to reduce geographic and geopolitical risk (EU and US employment, antitrust and tariffs), it will be necessary to diversify their foundry business to worldwide locations. Only Intel can offer this to Apple, with its fabs in the USA, Ireland and Israel. An Intel retreat on Atom and a friendlier wafer supply agreement should open the doors to an Apple partnership for leading edge capacity that heretofore was only given to x86 components. Apple’s A6 and A6X processors will now offer Intel better margins than consumer x86 processors – this is a first. Intel’s 14nm process will offer Apple, Qualcomm, Broadcom and the rest of the industry tremendous power and cost savings, and, as I expect, will generate better margins for Intel than what TSMC receives.

Intel’s current market cap is roughly $100B. TSMC, at 40% of Intel’s revenue, has an $80B market cap. If Intel were to split into three companies (Datacenter, x86 client and Foundry), then one could see a significant increase in valuation. The Datacenter business, if it were fabless and valued like F5, would be worth nearly $60B, as both companies make roughly 80% gross margins. The Foundry business, if it were to win 50% of Apple’s and Qualcomm’s business, could bring in $10B in run-rate revenue by the end of 2013, offsetting much of the loss in the x86 client business. But that is just the beginning as companies like Apple ramp from 300MU to 1BU+ in the next few years.

Over the course of the next year Andy Bryant will have to fill 3 leading edge fabs as the ramp for 14nm starts. If he accomplishes that goal, look for a major consolidation in the semiconductor foundry industry as the value shifts away from the old 350MU PC semiconductor supply chain to the 1BU+, extremely low power, leading-edge mobile semiconductor key components. As mentioned earlier this year when Qualcomm was overwhelmed with orders for its 4G LTE chips, we are now witnessing an industry that has an insatiable demand for leading edge process technology. Until now Intel was hamstrung from pursuing what is the future Billion Unit market; now it is in the hands of the manufacturing guys. The irony may be that Intel has come full circle.

Full Disclosure: I am Long AAPL, INTC, QCOM, ALTR


Is The Fabless Semiconductor Ecosystem at Risk?

by Daniel Nenni on 11-18-2012 at 6:00 pm

Ever since the failed Intel PR stunt where Mark Bohr suggested that the fabless semiconductor ecosystem was collapsing I have been researching and writing about it. The results will be a book co-authored by Paul McLellan. You may have noticed the “Brief History of” blogs on SemiWiki which basically outline the book. If not, start with “A Brief History of Semiconductors”. In finding the strengths of the fabless semiconductor ecosystem I also discovered possible weaknesses.

Deep collaboration with partners and customers is the current mantra we hear at every conference. TSMC even spelled it out at the most recent OIP Forum last month: at 40nm, partners and customers started design work at PDK release 0.5; at 28nm, design work started at PDK 0.1; at 20nm, at PDK 0.05; and at 16nm it will start at PDK 0.01. The problem is that the earlier the collaboration, the more sensitive the data, and this data is being shared with partners and customers who also work with competing foundries. It is a double-edged sword and the cause of unnecessary bloodshed in our industry.

The only protection this sensitive data has is the NDA (Non Disclosure Agreement) which ranks right up there with toilet paper in our industry. One of the things I do as a consultant for emerging fabless, EDA and IP companies is help with NDAs. My least favorite is the 3-way NDA where the customer, the EDA vendor, and the foundry all have to sign it. How do you control that information flow?

NDAs have evolved over the years and the recent ones are much more detailed and have some serious legal repercussions but who actually reads them besides me and the lawyers?

The problem is that the EDA industry is very small and we all know each other. Industry people regularly gather at local pubs, coffee houses, and conferences and things tend to slip out. There is also an EDA gossip website that has no respect for sensitive data covered under NDAs.

So you have to ask yourself what is a foundry to do? How do you allow early access to your secret recipes without it being leaked to your competitors? Just some ideas:

Buy an EDA company: I suggested to GLOBALFOUNDRIES in the early days that they tap their oil reserves and buy Cadence. Imagine the seamless tool flow a foundry could create today if they had full control of the tools and information flow. Also imagine the fabless semiconductor industry growth potential if EDA tools were free, like FPGA tools: design starts would multiply like bunny rabbits!

Limit the number of EDA companies you do business with: This is my personal nightmare. To cut costs and increase security a foundry would work closely with only the top 2 EDA companies. Granted, this would kill emerging EDA companies and stunt the growth and innovation of EDA, but it could happen. It would also increase the cost of EDA tools, thus killing design starts.

Legal training for all semiconductor ecosystem employees: Use the recent insider trading case law. People are doing serious jail time for leaking Intel, AMD, and Apple financial information. Spend the time to educate your employees on NDAs and focus on controlling the information flow. A couple of hours of education could prevent years of incarceration.

If you have other ideas, let’s discuss them here and I will make sure they are read by the keepers of the secret semiconductor recipes. We have some serious challenges ahead that will require even deeper collaboration, so let’s be proactive and protect the industry that feeds us.


Here to make my stand, with a chipset in my hand

by Don Dingee on 11-16-2012 at 6:13 pm

Yesterday, I clicked “like” on a LinkedIn post with the title “TI Cuts 1,700 Jobs”. Today, I read the analysis and pulled out Social Distortion’s “Still Alive” for inspiration. I’ve been through this more than once. For them it’s not like-worthy, and I feel their sting.

The part of the post I liked was the comment: “This is good for embedded.”
Continue reading “Here to make my stand, with a chipset in my hand”


Mentor and NXP Demonstrate that IJTAG Can Reduce Test Setup Time for Complex SoCs

by glforte on 11-15-2012 at 8:10 pm

The creation of test patterns for mixed signal IP has been, to a large extent, a manual effort. To improve the process used to test, access, and control embedded IP, a new IEEE P1687 standard is being defined by a broad coalition of IP vendors, IP users, major ATE companies, and all three major EDA vendors. This new standard, also called IJTAG, is expected to be rapidly and widely adopted by the semiconductor industry. The P1687 standard will enable the industry to develop test patterns for IPs on the IP level without having to know how the IP will be embedded within different designs.

A typical architecture for integrating multiple IP blocks using the P1687 standard, which has specific languages (ICL and PDL) to define structural elements.

Tight co-operation between NXP and Mentor Graphics has demonstrated the benefits of using IEEE P1687 on a real industrial design manufactured at the 65 nm technology node. We created PDL files for the test setup of embedded mixed signal blocks on the instrument level. We also created the ICL descriptions of the instruments and their connections, up to the top level of the design. These experiments used Tessent IJTAG to automatically retarget the PDL descriptions from instrument level to the top-level chip pins. Top-level test benches were created to validate the correctness of the mixed signal tests. Test vectors were created as well, to transfer the test patterns to a production test. The results clearly show that test setup length can be reduced by up to 56%.

Furthermore, we confirmed that implementing the IJTAG-based tests can be done with a high level of automation. This demonstrates that tests described by IP providers at the instrument level can be automatically retargeted to the top level of the chip with a minimum of user input. Read the details in this white paper by Mentor Graphics and NXP Semiconductors.

Experimental results showing the average number of cycles per test (y-axis) for conventional methods of test control integration for IP blocks, compared to several approaches employing IJTAG: SIBs(a), SIBs(b), and SIBs(c).

What I Learned About FPGA-based Prototyping

by Daniel Payne on 11-15-2012 at 8:10 pm

Today I attended an Aldec webinar about ASIC and SoC prototyping using the new HES-7 Board. This prototyping board is based on the latest Virtex-7 FPGA chips from Xilinx.

You can view the recorded webinar here, which takes about 30 minutes (should be available in a few days). I first blogged about the HES-7 two months ago, ASIC Prototyping with 4M to 96M Gates. Continue reading “What I Learned About FPGA-based Prototyping”


Test and Diagnosis at ISTFA

by Beth Martin on 11-15-2012 at 7:10 pm

Finding and debugging failures on integrated circuits has become increasingly difficult. Two sessions at ISTFA (International Symposium for Testing and Failure Analysis) on Thursday address the current best practices and research directions of diagnosis.

The first was a tutorial this morning by Mentor Graphics luminary Martin Keim on using scan diagnosis for failure analysis. Dr. Keim says the tutorial is both an introduction to diagnosis, describing what diagnosis actually is and what it can do for you, and practical advice for FA (failure analysis) folks on how to make it work in their environment.

Defects, like death and taxes, are inevitable. Devices will fail because of manufacturing problems like contamination, insufficient doping, and process or mask errors. These problems cause shorts to power/ground, slow transistors, CMOS stuck-on/opens, and the like.

Faults are abstractions of physical defect behavior and can be represented as fault models in software and measured. Diagnosis tools (for example, ATPG) find faults; failure analysis tools find defects. These two things work together. A diagnosis tool needs all the inputs and outputs of ATPG from the DFT engineers, and a datalog of failing devices from the test engineers. Okay, say you’ve got that managed and you run a diagnosis on a set of failed devices. The next thing to consider is the quality of the diagnosis results, and that depends in large part on the diagnosis tool you use. In particular, the tool should be layout-aware rather than logic-only. For details, you can download a whitepaper by Dr. Keim, Layout Aware Diagnosis. And for an industry case study perspective, look at the results of a diagnosis study between UMC, AMC, and Mentor in this paper available for a small cost through EDFAS.
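The matching step at the heart of diagnosis can be caricatured as comparing the tester datalog against per-fault failure signatures. The fault names and the flat signature dictionary below are hypothetical stand-ins for what a real tool derives from the gate-level netlist and layout:

```python
def diagnose(observed_failing_patterns, fault_signatures):
    """Rank candidate faults by how well they explain the failing patterns."""
    observed = set(observed_failing_patterns)
    scored = []
    for fault, predicted in fault_signatures.items():
        predicted = set(predicted)
        # Jaccard similarity between predicted and observed failing patterns
        score = len(predicted & observed) / len(predicted | observed)
        scored.append((score, fault))
    return [fault for score, fault in sorted(scored, reverse=True) if score > 0]

# Hypothetical fault dictionary: fault -> test patterns it would fail
signatures = {
    "net_A stuck-at-0": [3, 7, 12],
    "net_B stuck-at-1": [5, 9],
    "net_C bridge":     [3, 7],
}
print(diagnose([3, 7, 12], signatures))   # best-matching candidate listed first
```

A layout-aware tool goes further by checking whether a candidate (a bridge, say) is physically plausible in the layout, which prunes the candidate list dramatically.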

For more views on test and diagnosis, you can attend a session of four papers (session 23) at a more reasonable hour on Thursday (3:25p-5:50p). The papers in this session address specific topics such as:

  • Using diagnosis for failure localization and root cause analysis
  • Leveraging test and characterization results in FA
  • Test-for-yield
  • Design-for-debug
  • Linking design-for-test and CAD navigation
  • Advances in diagnosis technology
  • FA-optimized test and test equipment
  • Industrial practices and new technologies

Troubleshooting how and why systems and circuits fail is important and is rapidly growing in industry significance. Debug and diagnosis may be needed for yield improvement, process monitoring, correcting the design function, failure-mode learning for R&D, or just getting a working first prototype. As you might imagine, this detective work is very tricky. Sources of difficulty include circuit and system complexity, packaging, limited physical access, and shortened product creation cycles and time-to-market. These two ISTFA sessions are great sources of information on the new, efficient solutions for debug and diagnosis that have a much needed and highly visible impact on productivity.


Semiconductor market negative in 2012

by Bill Jewell on 11-15-2012 at 12:30 am

September WSTS data shows the 3Q 2012 semiconductor market increased 1.8% from 2Q 2012. The year 2012 semiconductor market will certainly show a decline. 4Q 2012 would need to grow 11% to result in positive growth for 2012. The outlook for key semiconductor companies points to a 4Q 2012 roughly flat with 3Q 2012. The table below shows the midpoint of revenue guidance for 4Q 2012 versus 3Q 2012.


Guidance for 4Q 2012 varies widely, from double-digit declines at TI and Infineon to a 20% increase at Qualcomm. Most companies expect a flat to down 4Q 2012. The companies generally expect weak end market demand in 4Q, with the possible exception of mobile communications. We at Semiconductor Intelligence are forecasting the 4Q 2012 semiconductor market will be up 0.5% from 3Q 2012, driving a 2.5% decline for year 2012.

What is the outlook for 2013? The overall economic outlook is uncertain. The latest forecast from the International Monetary Fund (IMF) calls for worldwide GDP growth of 3.6% in 2013, a slight improvement from 3.3% in 2012. The advanced economies are expected to grow 1.5% in 2013, up from 1.3% in 2012. The IMF expects the Euro Zone to begin a slow recovery from the downturn caused by the debt crisis. Emerging and developing economies will be the major growth drivers with 5.6% growth in 2013. China GDP growth should increase slightly in 2013 after slowing in 2011 and 2012.

We at Semiconductor Intelligence have developed a forecast model based on GDP. Since semiconductors are at the low end of the electronics food chain, the market tends to follow the acceleration or deceleration of GDP growth rather than the rate of GDP growth. The 0.3 percentage point acceleration in GDP growth from 2012 to 2013 indicates 2013 semiconductor market growth of around 8% to 10%.
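That logic can be sketched in a couple of lines. The base and sensitivity coefficients below are hypothetical, chosen only so that the IMF's 0.3-point acceleration lands in the 8% to 10% range quoted above:

```python
def semi_growth_forecast(gdp_growth_next, gdp_growth_now,
                         base=7.0, sensitivity=6.0):
    """Forecast semiconductor market growth (%) from GDP *acceleration*."""
    acceleration = gdp_growth_next - gdp_growth_now   # percentage points
    return base + sensitivity * acceleration

# IMF figures cited above: 3.3% GDP growth in 2012 -> 3.6% in 2013
print(f"{semi_growth_forecast(3.6, 3.3):.1f}%")
```

The key point is that the input is the change in GDP growth, not its level: the same 3.6% GDP growth would imply a weaker semiconductor market if it followed a 4% year.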

The electronics market outlook is mixed. Business and consumer spending on PCs is weak. However smartphones and media tablets are continuing to show healthy growth. Inventory adjustments are being made in the semiconductor supply chain. Thus the semiconductor market should turn up quickly when end demand picks up. Based on these factors, we at Semiconductor Intelligence are forecasting 9% growth in the semiconductor market in 2013. The chart below compares our forecast with other recent forecasts.


Creating Plug-and-Play IP Networks in Large SoCs with IEEE P1687 (IJTAG)

by glforte on 11-14-2012 at 2:15 pm

Until now, the integration and testing of IP blocks used in large SoCs has been a manual, time-consuming design effort. A new standard called IEEE P1687 (or “IJTAG”) for IP plug-and-play integration is emerging to simplify these tasks. EDA tools are also emerging to support the new standard. Last week Mentor announced Tessent IJTAG, which simplifies connecting any number of IJTAG-compliant IP blocks into an integrated, hierarchical network, allowing access to them from a single point. IJTAG will save engineering time by automating design tasks, and potentially reduce the length of an aggregated test sequence for all the IP blocks in an SoC. This translates into reduced test time and smaller tester memory requirements. For more information on the standard and Tessent IJTAG, click here.

BACKGROUND
In the world of chip design, there are de-facto standards for what information must be included in the library package delivered with a reusable design block, also referred to as an “IP” (i.e., a block of Intellectual Property). Unfortunately, most IP packages do not include information on how to communicate test features or other commands to the block, or any predefined way to integrate specific test features into the overall design. Some IPs, like embedded processors and SerDes I/O interfaces, may come with built-in self-test (BIST) circuitry, but hardly any IP you buy off the shelf comes with a complete, ready-to-go test solution. In addition, there has not been a common protocol for interfacing to these blocks, so each block requires additional learning and customization to communicate with it.

Consequently, integrating, testing, and communicating with IP blocks used in large-scale semiconductor devices has been, to a large extent, a manual design effort. Our customers tell us that integrating hundreds of IP blocks into a comprehensive test plan for a large SoC is becoming vastly more difficult and time consuming. Many of these IP blocks are stand-alone discrete functions such as clock controllers, power management units, thermal sensors, and more. It is necessary to control these IP blocks during testing. Challenges involved in integrating these IP blocks include frequent changes to the design, test engineers being unfamiliar with the blocks, an increasing number and diversity of IP blocks, and complicated operational and test setup sequences to manage.


Figure 1. The IEEE P1687 (IJTAG) standard defines two languages, ICL and PDL, that allow the interfaces to a reusable block of IP to be defined in a common manner that enables plug-and-play integration. Tessent IJTAG provides automation support enabling operations on any IP block in a design to be initiated from a single top-level access point.

A NEW STANDARD FOR IP INTEGRATION AND TESTING
To address these IP integration and testing challenges, the IEEE P1687 committee has proposed the IJTAG standard, which builds on the popular IEEE 1149.1 (JTAG) board and card-level test access standard. IJTAG provides a set of uniform methods to describe and access chip internal IP blocks, creating a basis for design, test, or system engineers to easily reuse IP in their products. IJTAG is being defined by a broad coalition of IP vendors, IP users, major ATE companies, and all three of the largest EDA vendors, and is expected to be rapidly and widely adopted by the semiconductor industry.

In the context of IJTAG, the IPs we are referring to are pre-defined blocks of analog, digital or mixed signal circuitry performing particular functions, such as a clock generator, an interface to an external measurement probe, a radio tuner, an analog signal converter, a digital signal processor, or one of hundreds of other functions. These so-called “embedded instruments” may be internally designed circuit blocks reused across a product line, or 3rd-party IP purchased from an external source. Typically, they have a control (digital) interface, such as a command register, associated commands and synchronization signals, and a data path. To use the instrument functionality, the designer needs to ensure that defined commands (bit sequences) are sent to the instrument control register in order to reset or initialize it, to set a mode, monitor a state, perform a debug action, or to elicit any other behavior within the instrument’s repertoire. The interface for communicating these control sequences, and the sequences themselves, are defined by the suppliers in a wide variety of different styles with little commonality.

As a result, designers need to create unique logic to integrate each embedded instrument (IP block) into an overall design. For SoCs, which often contain hundreds of instruments from a variety of sources with disparate interface styles, this is a major undertaking that consumes a great deal of engineering time.

This is the problem that IJTAG is designed to solve, by providing a method for plug-and-play IP integration that enables communication with all the instruments (IP blocks) from a single test access point (TAP). IJTAG describes a standardized way to define the interface and pattern sequences used by an instrument, and to retarget those commands to any point within an IJTAG-compliant hierarchy. Mentor’s new product, Tessent IJTAG, provides the automation support to implement IJTAG in large, complex designs. Tessent IJTAG can be used to integrate embedded instruments at the die, package, board or system level, providing a single access point at any desired level.

Overview of IEEE P1687
A reusable IP block with a compliant IJTAG digital interface is called an embedded “instrument.” The IJTAG description of an embedded instrument includes its I/O ports and register details as well as the specific sequences needed to control and access the instrument. Both parts are described through the two languages defined by the standard: the Instrument Connectivity Language (ICL) and the Procedural Description Language (PDL).

ICL provides an abstract, hierarchical definition of the signals and registers necessary to control the instrument. ICL does not include details of the inner workings of instruments, only the I/O ports, registers, bit fields, and enumerations of data elements that are necessary to carry out instrument operations. ICL also describes the network connecting all the different instruments. The standard only describes how to access and control the instruments, not the details of the instruments themselves.
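To give a flavor of the language, a simplified ICL module for an instrument with an 8-bit command register might look roughly like the sketch below. This is illustrative only: the port and register names are invented, and the syntax is condensed rather than copied from the standard.

```
Module SimpleInstrument {
    ScanInPort    SI;                 // serial data into the instrument
    ScanOutPort   SO { Source R[0]; } // serial data out, driven by register R
    ShiftEnPort   SE;                 // shift-enable control
    CaptureEnPort CE;                 // capture-enable control
    UpdateEnPort  UE;                 // update-enable control
    TCKPort       TCK;                // test clock

    ScanRegister R[7:0] {             // the 8-bit command/data register
        ScanInSource SI;
        ResetValue   8'b00000000;
    }
}
```

Note that nothing here describes what the instrument does internally; ICL only captures what is needed to shift data in and out of its interface.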

PDL describes how to operate the instrument through commands and data written to an instrument’s port, and how to monitor the instrument by reading data from the port. These operations are described with respect to the boundaries of the instrument. PDL is compatible with the popular Tcl scripting language.
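Because PDL is Tcl-compatible, an instrument-level operation reads much like a Tcl procedure. The fragment below is a hedged sketch: iProc, iWrite, iRead, and iApply are PDL commands, but the register name and values are invented for illustration.

```tcl
# Instrument-level PDL: a reusable procedure operating on register R
iProc program_and_check { value } {
    iWrite R $value    ;# queue a write of $value into register R
    iRead  R $value    ;# queue a read of R, to be compared against $value
    iApply             ;# execute the queued operations in one scan access
}
```

The key point is that these operations are written against the instrument's own boundary; retargeting later translates them to whatever level of the design hierarchy the instrument ends up in.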

With the combination of the ICL and PDL descriptions for an instrument and its operations, instruments can be easily ported between designs. The IJTAG standard ensures that a PDL-defined sequence of operations written and verified at the instrument level can be used without modification after that instrument has been included inside a larger design. The automated process of retargeting translates and merges PDL sequences from the instrument’s instance level to the top level of the design (or any other level in between). In other words, Tessent IJTAG reads all the instruments’ PDL definitions and generates new PDL appropriate for the top level of the design in which the instruments are embedded (Figure 1). Tessent IJTAG can also translate the top-level PDL sequences into other formats, such as bit patterns for automatic test equipment (ATE), Verilog files for verification, or system-level PDL used for integrating a subsystem into a larger design.

Figure 2 shows the ICL for a typical instrument. It defines an 8-bit shift register named “R”, which receives the command patterns, as well as input and output lines and several control signals. Figure 3 shows PDL describing actions performed on the instrument. For example, the line “iWrite R $value” causes an 8-bit pattern named $value to be written to register R. Similarly, “iRead R 0xFF” causes the R register to be read and compared to the value 0xFF (i.e., all 8 bits should be 1’s).
Note that the user (e.g., a design or test engineer) simply has to declare the instrument interface and the commands to be given to it. The behavior of all the interfaces complies with IJTAG, so the designer only needs to learn one way to route sequences to the instrument interfaces, and does not have to know the internal details of every instrument as in the past.


Figure 2. The ICL description of a simple embedded instrument interface. Note that IJTAG is not concerned with any of the internal implementation details of the instrument (reusable IP block) itself.


Figure 3. An excerpt of PDL describing some actions to be performed on the embedded instrument from Figure 2, such as writing a value to its command register and reading a value from it.

IJTAG Automation
To fully enable and automate the use of IJTAG, the Tessent IJTAG tool performs four tasks:

  • Reads all the ICL and PDL files for a design.
  • Performs design rule checks to validate that the instruments and other P1687 components are properly connected to the top-level access point.
  • Retargets the PDL description of an instrument to the top level access point.
  • Translates the resulting retargeted 1687 PDL into IEEE 1364 Verilog test bench language and standard test vector formats like WGL, STIL or SVF (Serial Vector Format).

Using the ICL and PDL descriptions, Tessent IJTAG “retargets,” or transforms, the instrument-level PDL patterns into whatever is required at another level in the IJTAG hierarchy. Retargeting to a higher level merges the PDL patterns for multiple IP blocks in a highly efficient manner that results in minimum cycle counts for IP access within a reconfigurable 1687 network. At the top level, the retargeted PDL can be translated into a variety of pattern formats to support further integration, verification and production testing.
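The core idea of retargeting can be illustrated with a toy model. The sketch below is emphatically not Tessent's implementation; it assumes a single scan chain of fixed-width instrument registers and shows how a write aimed at one instrument expands into a full-chain bit pattern at the top-level access point, with the other register fields padded to a safe fill value.

```python
# Toy model of PDL retargeting: an instrument-level write becomes a
# full-chain scan pattern at the top-level TAP. All names are invented.

def retarget_write(chain, target, value):
    """Expand 'write `value` to instrument `target`' into a chain-wide bitstring.

    chain  : ordered list of (register_name, width) along the scan path
    target : name of the instrument register being written
    value  : integer to load into the target register
    Registers other than the target are filled with zeros (a stand-in for
    whatever safe/bypass values a real retargeter would compute).
    """
    bits = []
    for name, width in chain:
        field = value if name == target else 0
        bits.append(format(field, "0{}b".format(width)))
    return "".join(bits)

# Three instruments daisy-chained between scan-in and scan-out
chain = [("PLL.R", 8), ("SENSOR.R", 8), ("DSP.R", 16)]
pattern = retarget_write(chain, "SENSOR.R", 0x2A)
# -> 8 zero bits, then 00101010 (0x2A), then 16 zero bits
```

A real retargeter must additionally handle reconfigurable network segments, merge concurrent operations from multiple instruments, and minimize total scan cycles, but the expansion step above is the essence of translating an instrument-level command to the top level.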

SIMPLE EXAMPLE
To make this a bit more tangible, consider the simple design in Figure 4. Here five instruments, or IP blocks, have been connected together with paths to a top level access point shown on the left. Each of the five IPs is provided with its own ICL file describing its control interface, and a PDL file describing the operations that can be performed on it. The designer connects these together through a simple network of registers that connect back to a top level IEEE 1149.1-compliant external test access point, or TAP, on the left. The form of these registers and preferred ways to connect them together are recommended in the IJTAG standard, although these mechanisms are very flexible.


Figure 4. Instruments (IP blocks) connected through an IJTAG network to a top level test access point (TAP). Tessent IJTAG automatically retargets the instrument level definitions to the top TAP, enabling control of all IPs from a single access point.

Tessent IJTAG reads all the ICL and PDL files included with the IP blocks and verifies that all instruments are IJTAG compliant. Tessent IJTAG also extracts ICL descriptions for the interconnecting paths to the top level TAP from either a gate level or RTL netlist, and verifies that they are also correct and IJTAG compliant. Tessent IJTAG then uses the ICL and PDL definitions to merge and retarget the instrument PDLs so that all the operations provided by each instrument can be controlled from the top level TAP. From the user’s perspective, causing an operation to happen in any selected instrument is simply a matter of writing a command to the instrument in the top level PDL. Pattern translations, merging and routing happen automatically.

The user can also translate the top level PDL into other types of patterns, such as test vectors to be applied to the top level TAP from automatic test equipment (ATE), or simulation test benches that verify the ICL against the IP’s Verilog description.

To improve ease of use and accelerate IJTAG-based test development, the Tessent IJTAG tool has the unique ability to extract (generate) all the ICL network descriptions from the netlist of the overall design. If the design does not already contain an IJTAG network, Tessent IJTAG also supports automated creation of the ICL network description to be added to the design. Using this facility, the user can define how all IJTAG IP will be connected together to a common access point.

SIDE NOTE: JTAG PATTERNS VS. SCANTEST PATTERNS
It is important to differentiate the patterns we are discussing in the IJTAG context from automatic test pattern generation (ATPG), which typically refers to test patterns designed for structural or “scan chain” testing within a digital IC. Scan tests are used to detect defects within a manufactured IC by inserting a pattern of ones and zeros that propagates through the logic, and comparing the resulting internal states to the expected result. A mismatch indicates a defect in the implementation of the logic circuitry (a design defect) or in the manufacture of the device (a manufacturing defect). Scan test patterns are automatically derived from the actual circuit logic during the ATPG process and are distinct from the command sequences applied to an instrument interface to set a mode or initiate an operation in the block.
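The scan-test idea described above can be boiled down to a toy example: apply input patterns to a block of logic and compare the observed responses against the fault-free ("golden") ones. The functions and fault below are invented purely to illustrate the compare-against-expected principle.

```python
# Toy illustration of structural (scan-style) testing: exhaustively apply
# patterns and flag any that expose a difference from the golden responses.

def golden(a, b, c):
    """Intended logic: (a AND b) XOR c."""
    return (a & b) ^ c

def defective(a, b, c):
    """Same logic, but with input 'a' stuck at 0 (a manufacturing defect)."""
    return (0 & b) ^ c

patterns = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
failures = [p for p in patterns if golden(*p) != defective(*p)]
# Only patterns with a=1 and b=1 propagate the stuck-at fault to the output,
# which is why ATPG chooses patterns that both excite and observe each fault.
```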

In the IJTAG context, a test engineer may perform a test on an instrument by using the IJTAG interface, for example, by putting it into a known state and then using a monitoring command to read back a value from some point inside the instrument. However, this is quite different from a scan pattern set.

Tessent IJTAG can be used together with ATPG because scan testing typically requires that the logic to be tested must be initialized and certain modes must be set before scan testing can commence. IJTAG can be used to perform these setup and mode control functions. For example, test setup may require enabling power and clock domains across the design, programming PLL output waveforms, putting instruments into a “bypass” mode, or switching instruments in and out of the test scope, to name just a few of the possible usages. All these tasks can be done efficiently through an IJTAG network before the ATPG patterns are computed (when the test patterns are being generated) or applied (during production testing). In this scenario, the command patterns defined by the retargeted top level PDL could be combined with the scan test ATPG pattern set as a single test vector file. However, generating the scan patterns themselves is not done by Tessent IJTAG, but in a separate operation performed by Tessent FastScan or TestKompress.
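A test-setup sequence of the kind described above might be expressed in PDL along the following lines. This is a hedged sketch: iProc, iWrite, and iApply are PDL commands, but the instrument and register names are invented for illustration.

```tcl
# Hypothetical PDL procedure performing test setup through the IJTAG
# network before scan (ATPG) patterns are applied.
iProc test_setup {} {
    iWrite PLL.MODE    0x1   ;# program the PLL for the test clock configuration
    iWrite SENSOR.CTRL 0x0   ;# put the sensor instrument into bypass mode
    iApply                   ;# deliver all queued writes in one network access
}
```

After retargeting, the resulting top-level pattern could then be prepended to the scan test set generated separately by an ATPG tool such as Tessent FastScan or TestKompress.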