Create Beyond the NoC Solutions!
by Eric Esteve on 03-18-2013 at 9:09 am

The Network On Chip (NoC) concept is recent, about 10 years old, and the first commercially available NoC IP appeared in 2006. Should we drop the concept so quickly after it has been introduced? I don’t think so… But we could brainstorm and imagine new functions that could be implemented within or around the NoC, benefiting from the existing NoC architecture. Ever-increasing SoC complexity generates the need for new functionality, like error correction and resiliency to increase data robustness, software observability, or cache coherency and distributed virtual memory in multi-CPU SoCs. The NoC provides a ready-to-use infrastructure (the physical links on which data moves) and a packet-based communication protocol (the logical way to transport data), so why not use the existing NoC physical and logical architecture and go “Beyond the NoC”: implement various service functionalities, opportunistically using the NoC to save real estate, power consumption and design resources?

Let’s have a look at additional SoC features and services, which are not the SoC’s primary functions, but need to be implemented in order to meet SoC requirements, in some cases specific to the market segments addressed.

  • Error correction and resiliency are needed in markets like automotive, medical and industrial

    • If you want to add parity bits, ECC checking and other capabilities to ensure data is not corrupted, you can add them to the NoC transport protocol (a minimal sketch follows this list)
    • Resiliency: current resilient SoCs duplicate much of the interconnect within the chip to be able to test for errors. Using the existing NoC infrastructure (logic and wires), you can implement identical error detection and correction functionality without having to duplicate the entire NoC, therefore providing the same functionality while saving real estate (die size) and minimizing power consumption.

  • SoC power management: in large SoCs designed in the latest technology nodes (28 nm and below), power consumption is almost taking precedence over pure performance. Today’s SoC designs addressing various mobile electronic applications have to provide multiple high-performance features (video, imaging, voice, broadband data) within the smallest possible power budget, so battery-operated systems can run for days instead of hours. Implementing power management techniques is still a “hand-made” design process, and you need a dedicated team to do it.
  • I found a very interesting article, presented at DAC 2009, titled “NoC Topology Synthesis for Supporting Shutdown of Voltage Islands in SoCs”. In this article, the authors created a NoC to connect all the power islands, supporting power gating of the islands, which sounds like a good idea: they automated tasks previously done by hand. It could be an even better idea if the SoC architect could use the existing NoC as a foundation for power management purposes; the paper describes the topology synthesis algorithm the authors used.

  • Security: every company has its own proprietary on-chip security schemes, including secure boot, data encryption/decryption (PKI), etc. It is already possible to create special “placeholders” within the NoC, where designers can insert their own proprietary security IP and logic. This allows designers to retain 100% control of their security IP, and we know how sensitive such a service can be today. But this topic is also becoming a real concern for 3rd-party IP providers, and they too could benefit from such “placeholders” and have their security IP inserted as well.

  • Software observability for debug: TTM requirements are pushing for more efficient H/W and S/W co-development, and faster S/W integration and debug. Various S/W tracing mechanisms are already in use, for example ARM’s CoreSight on-chip trace and debug architecture, but the NoC can offer additional probing capabilities. By definition, a network on chip allows access to every important block within the SoC, not only the CPU-related blocks.
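To make the first bullet concrete, here is a minimal sketch, in Python and purely illustrative (no vendor’s actual flit format), of the kind of SECDED (single-error-correct, double-error-detect) protection a NoC transport layer could compute at packet injection and check at the destination:

```python
# Minimal sketch: per-flit SECDED protection such as a NoC transport layer
# might add. All names and widths are illustrative; real NoC IP defines
# its own packet formats and protection schemes.

def hamming_encode(data_bits):
    """Encode a list of data bits with Hamming parity bits at power-of-two
    positions, plus one overall parity bit for double-error detection."""
    n_data = len(data_bits)
    r = 0                                  # number of Hamming parity bits
    while (1 << r) < n_data + r + 1:
        r += 1
    code = [0] * (n_data + r + 1)          # index 0 holds overall parity
    j = 0
    for pos in range(1, len(code)):
        if pos & (pos - 1):                # not a power of two: data slot
            code[pos] = data_bits[j]
            j += 1
    for i in range(r):                     # fill each parity position
        p = 1 << i
        code[p] = sum(code[pos] for pos in range(1, len(code))
                      if pos & p) % 2
    code[0] = sum(code) % 2                # overall parity bit
    return code

def hamming_check(code):
    """Return (syndrome, overall_parity_ok). A zero syndrome means no error;
    otherwise the syndrome is the position of a single flipped bit."""
    syndrome = 0
    for pos in range(1, len(code)):
        if code[pos]:
            syndrome ^= pos
    return syndrome, sum(code) % 2 == 0

# Example: protect one 8-bit flit payload, flip a bit "on the link", fix it.
flit = [1, 0, 1, 1, 0, 0, 1, 0]
coded = hamming_encode(flit)
coded[5] ^= 1                              # single-bit transport error
pos, _ = hamming_check(coded)
coded[pos] ^= 1                            # corrected at the destination
```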

We have proposed a few examples where the NoC can be a foundation for additional SoC features and services, and I am sure that creative designers will come up with other ideas. Architects first implement a NoC to take advantage of its communication capabilities and to optimize their SoC architecture, as we have discussed here on SemiWiki. Once this NoC infrastructure is on chip, it becomes possible to go “Beyond the NoC” and implement additional features or services. Because you then reuse already “amortized” logic gates and wires, using the NoC as a foundation to introduce new features that you would introduce anyway (think on-chip power management, for example) will save silicon, but not only that: you will also save resources and creative energy, and finally launch a better SoC on the market faster. Beyond the NoC is a concept: Arteris, a creative 3rd-party start-up, or even an SoC architect could develop new features or services mapped onto the existing NoC infrastructure and proven communication scheme, to create denser and smarter SoCs.

By Eric Esteve from IPNEST


Schematic Migration Across Foundries and Processes
by Daniel Nenni on 03-17-2013 at 8:10 pm

A dedicated schematic migration tool can save weeks of effort and allow companies to explore new foundry opportunities. Unfortunately, moving analog and mixed-signal design data between foundries and processes is a complex business. While engineers would rather spend their days creating new circuits, many spend time translating existing designs by re-creating their data using components from new process design kits (PDKs).

Translating design data is usually a complex process and endless hours of expensive engineering time are wasted copying schematics, swapping new components for old and modifying parameters to satisfy a new PDK. With the increasing reliance on analog circuits in SoCs and an expanding market in analog IP, engineers need to migrate their schematics quickly to run simulations using models in the new PDK. A business can’t respond to new opportunities if it takes weeks before testing can begin in a new process or foundry.

Though it may at first appear relatively simple, migrating schematics is much more involved than it looks. Tools must handle physical differences in component symbols and then deal with the hidden mysteries of parameter interactions and the component description format (CDF). What starts out looking like a simple case of swapping a few components ends up becoming more and more complex, and it’s little wonder that many designers end up copying their schematics manually.

At IN2FAB, we have spent many years building design migration technology as both a migration service company and an EDA tool vendor. Our OSIRIS schematic migration tools have been developed using hundreds of PDK variants for designs from IDMs and foundries, and this has given us a great deal of insight into the challenges of schematic migration. All of our technology is built on the Cadence platform and runs directly with foundry PDKs.

In an ideal world, symbols for components would all look exactly the same; at least for the common ones. If they were all the same size, drawn with the same origin and with pins in exactly the same place, an engineer would just have to change the name of the component library to be well on the way to getting a new circuit. Even if the component names were different, it would just be a case of writing a mapping script to swap one to the other.

Resistor migrated from source to target processes. Pin names are mapped between the PDKs and a new bulk pin tied to a net, all using the migration tools

Unfortunately, this is rarely the case, especially when moving from one foundry to another. Symbols change size, position and orientation; new pins appear and old ones are removed, so swapping one symbol for another just isn’t good enough. Migration tools must take account of physical differences and include an automatic re-wiring capability to reconnect pins that move around, or the engineer is left with a circuit full of badly placed components and a string of unconnected wires.

New pins such as bulk connections sometimes appear and need to be connected, while redundant wires that connected old pins that have disappeared should be removed. Bulk connections can also be set as a property, so this must also be addressed as part of the migration process.

Re-located or new pins can also short with existing wires, so a short locator must find and fix places where new pins bump into old wires. Even when circuits go between radically different PDKs, they should have clean connectivity after migration without extended manual clean-up.
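As a toy illustration of such a check (the data layout and net names below are hypothetical, not any real tool’s database format), a short locator reduces to a geometric test between pin locations and wire segments on other nets:

```python
# Illustrative sketch of a "short locator" style check: flag places where a
# relocated or new pin lands on a wire belonging to a different net.

def on_segment(px, py, x1, y1, x2, y2):
    """True if point (px, py) lies on the axis-aligned wire segment."""
    if x1 == x2:                                   # vertical wire
        return px == x1 and min(y1, y2) <= py <= max(y1, y2)
    if y1 == y2:                                   # horizontal wire
        return py == y1 and min(x1, x2) <= px <= max(x1, x2)
    return False

def find_shorts(pins, wires):
    """pins: list of (net, x, y); wires: list of (net, x1, y1, x2, y2).
    Returns pins that touch a wire on a different net."""
    shorts = []
    for pnet, px, py in pins:
        for wnet, *seg in wires:
            if wnet != pnet and on_segment(px, py, *seg):
                shorts.append((pnet, wnet, (px, py)))
    return shorts

# A new bulk pin at (10, 0) lands on a ground wire from the old circuit:
pins  = [("vdd", 10, 0)]
wires = [("gnd", 0, 0, 20, 0)]
print(find_shorts(pins, wires))   # [('vdd', 'gnd', (10, 0))]
```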

Even when the physical connections are all made, the hidden world of parameters and CDF can present a new round of problems. Entering values for new components using schematic tools is straightforward, but mapping properties from one PDK to another is not as easy as it looks. At the simplest level, we need to know the names of the properties on the source and the target symbol, and this is usually simple enough to find: e.g. old name = “w”, new name = “width”. With a little trial and error, a mapping system will come together, but we also need the parameter types to match: e.g. the original is a string (w = “1u”) while the new one is a floating-point number (w = 1e-06), so this must be adapted by the tools. The callbacks that set other parameters must also be triggered to make sure that everything is set correctly for netlisting and simulation.
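Here is a minimal sketch of that mapping step; the mapping table and unit conversion are illustrative, since a real tool derives them from the CDF of both PDKs:

```python
# Minimal sketch of parameter mapping between PDKs: rename the property and
# convert a SPICE-style string ("1u") to the float the target CDF expects.

SI_SUFFIX = {"f": 1e-15, "p": 1e-12, "n": 1e-9, "u": 1e-6, "m": 1e-3, "k": 1e3}

def to_float(value):
    """Convert '1u' / '0.35u' / '2e-6' style strings to a float in SI units."""
    s = value.strip().lower()
    if s and s[-1] in SI_SUFFIX:
        return float(s[:-1]) * SI_SUFFIX[s[-1]]
    return float(s)

# (source_name, target_name, converter) -- one row per mapped parameter.
# This table is hand-written here; a real tool builds it from both PDKs.
PARAM_MAP = [("w", "width", to_float), ("l", "length", to_float)]

def map_params(src_params):
    """Translate a source instance's parameters to the target PDK's names."""
    return {tgt: conv(src_params[src])
            for src, tgt, conv in PARAM_MAP if src in src_params}

print(map_params({"w": "1u", "l": "0.35u"}))
# {'width': 1e-06, 'length': 3.5e-07}
```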

Migration tools should handle all of the complexities of CDF and resolve differences between source and target PDKs automatically

These problems become more pronounced when calculating passive values. Migrating the width and length of a resistor is usually pointless, as the resistance coefficient is going to change. It is far better to map the resistor value and width and let the new PDK work out what the length should be, but that involves parameter manipulation and callback triggers, which are another level of complexity again. A migration tool must analyse the PDKs and present clear information to the user through a GUI, then deal with all of the triggers and callbacks automatically during the migration process.
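The arithmetic behind value-based migration is straightforward, as the sketch below shows; a real PDK callback would also fold in contact resistance and end effects, which are ignored here:

```python
# Back-of-envelope sketch of value-based resistor migration: keep the
# resistance and width, and let the target sheet resistance set the length.
# L = (R / R_sheet) * W, ignoring end corrections a real callback applies.

def migrate_resistor(r_ohms, width_m, target_rsheet_ohms_per_sq):
    squares = r_ohms / target_rsheet_ohms_per_sq   # number of squares
    return squares * width_m                       # new length in metres

# A 10 kOhm, 1 um wide resistor moving to a 200 Ohm/sq resistor module:
length = migrate_resistor(10e3, 1e-6, 200.0)
print(f"target length = {length * 1e6:.1f} um")    # 50.0 um
```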

Lastly, we need to know whether our new schematic matches the old one in order to use it for simulation or other work. It’s possible to write netlists for the old and new circuits and run them into an LVS tool, but then we must allow for the differences in component and parameter names. That could probably be fixed with some sort of Perl script, but that just makes the job more complex again. A far better way is a dedicated schematic comparator that understands all of the mapping and can identify a difference between source and target in seconds; such a comparator is built into the schematic migration tools.

Larger companies may have a CAD department that will put some sort of customized schematic porting capability in place when there is a big foundry move, doing part of the work and leaving the designers to clean up the rest. If a big corporation has enough spare time and engineers to dedicate to the problem then it may spare the designers some of the pain, but what does a designer do when they want to try out a foundry or two? Or a boutique IP company that’s offering a circuit in a new process it has never used before?

The increasing range of foundry and process options, along with the dynamic nature of the analog IP market, demands that engineers be able to migrate and test their designs extremely quickly if they are to take advantage of a market window. A fast and intuitive schematic migration capability gives engineers the flexibility to move circuits and simulate in a new process in a fraction of the time of other methods.

Tim Reagan
President & CTO at IN2FAB Technology



Cadence IP Report Card 2013
by Daniel Nenni on 03-17-2013 at 7:00 pm

The challenges of developing IP blocks, integrating them correctly, and hitting the power, performance, area, and time-to-market requirements of a mobile SoC are a growing problem. At 20nm and 14nm the probability of a chip re-spin due to an error is approaching 50%, and we all know how disastrous a re-spin can be; those are not good odds even in Las Vegas.

Cadence talked a bit about IP during the CDNLive keynotes last week and even more so during a press lunch. Paul McLellan and I also spent time with Cadence IP Commander in Chief Martin Lund. Given the recent IP acquisitions it is clear that Cadence is serious about scaling their business so I have to give them an A+ on IP strategy thus far.

My first meeting with Martin is referenced in Cadence IP Strategy 2012, I liked him then and after two most excellent acquisitions I love the man. Great for IP, great for EDA, Cadence is now the #3 IP company behind ARM and Synopsys.

Unfortunately, assembling a robust IP offering is the easy part, especially when you have a CEO (Lip-Bu Tan) who can raise money in his sleep. Selling commercial IP into a consolidating industry, however, is much more of a challenge than you might think. In my best guesstimate 80% of today’s silicon is shipped by the top 20 semiconductor companies, and that is being generous. It could be fewer than 20 companies, and of the top 20 companies listed below only UMC and GLOBALFOUNDRIES do NOT have sizable internal IP groups.


Clearly Martin Lund knows this since he worked at Broadcom for 12+ years and Broadcom has a VERY large IP group. So what is the Cadence IP strategy moving forward? In my opinion it is two-fold: IP Subsystems, which explains the Tensilica acquisition, and FinFETs, which is what the Cosmic Circuits acquisition is all about.

Dr. Paul McLellan covers Tensilica HERE and Dr. Eric Esteve covers CEVA HERE for SemiWiki. Click on over to the landing pages and you will read all about IP subsystems, because that is what they do. That is how they differentiate themselves from the mighty ARM.

Cosmic Circuits does foundation IP which is the connection between the interface IP and the semiconductor process technologies. FinFETs are changing the foundation IP world as we speak. For layout people, the F word now stands for FinFETs because FunFETs they are not. There is an interesting thread in the SemiWiki forum HERE which talks about the FinFET layout challenges ahead. Bottom line: Not everybody will be successful with FinFETs so adopting commercial foundation IP is much more viable if you want to hit the power, performance, area, and time to market requirements of mobile SoCs.

Given that the Cadence Virtuoso dynasty has a good 90% AMS market share (my opinion) and probably a 99.9% FinFET layout market share thus far, I give Cadence a real shot at moving some commercial IP into the 20% of the companies that are shipping 80% of the silicon. They certainly have access to the top 20 IP groups through Virtuoso and IP subsystems fit right on top of that. Sound reasonable?

For the best detailed coverage of the CDNLive keynotes see Richard Goering’s posts:

Lip-Bu Tan at CDNLive 2013: Opportunities and Challenges for Electronics

Samsung CDNLive Keynote: Innovation and Challenges in the Post-PC Era

Martin Lund CDNLive Keynote: Why SoCs Need “Application Optimized” IP


Plotting to take over the time-domain only world
by Don Dingee on 03-16-2013 at 10:00 am

The state machine nature of many digital designs has made time-domain debugging the favorite tool for most designers. We provide a set of inputs, data gets clocked in, and a set of outputs appears. We look for specific patterns in parallel paths, or sequences on serial lines.

Continue reading “Plotting to take over the time-domain only world”


EDAC CEOs: consolidation, clouds, and whether Intel will buy Synopsys
by Paul McLellan on 03-15-2013 at 5:12 pm

Yesterday evening was the annual EDAC CEO forecast meeting. Actually it is not really a forecast meeting any more, more a sort of CEO response to some survey questions asked of EDAC members. Rich Valera of Needham moderated with Lip-Bu, Aart and Wally, along with Simon Segars representing the IP arm(!) of the business and Raul Camposano representing startup companies.

I’m not going to try and cover everything, just pick and choose things that I found interesting.


The first question asked was whether consolidation in EDA has helped innovation. 74% of people surveyed said ‘no’ but the CEOs all pushed back. For a start, lots of innovation takes place in the bigger companies. Nobody really made the point I would have done, which is that people start little EDA companies in order to be acquired, and if acquisitions don’t happen, people won’t create startups. To some extent we see that already: as acquisition prices have come down, the willingness to invest in startups has also declined.

The next question was whether acquisitions have “helped” pricing, in the sense of whether prices for tools have increased. Wally pointed out that Moore’s law is really a learning curve, with cost per transistor coming down, as you would expect, as the total number of transistors shipped grows. EDA’s curve is exactly the same, and EDA has been 2% of semiconductor revenue for 15 years.

To a question about whether EDA would consolidate down to 2 companies, Wally pointed out that EDA has been a triumvirate (first Calma, Applicon, Computervision; then the DMV: Daisy, Mentor and Valid; now Synopsys, Mentor, Cadence). It seems to be very stable, so further consolidation seems unlikely (although, of course, Cadence did make an attempt to acquire Mentor a few years ago).

Next was whether Moore’s law breaks down at the sub-20nm level. The survey was split 50-50. Aart said that until now Moore’s law has driven opportunity, but now opportunity is driving Moore’s law. So even if prices go up, it will continue. I’m not so sure myself: if the cost per transistor increases, then that quad-core cell phone will cost more than the old dual-core one. Sometimes that will be viable, but not always.

How about verticalization, with companies like Apple and Samsung taking design in-house? Of course they buy tools to do this, but they leave other companies (such as TI) in their wake. The survey seemed to think it was negative, but the CEOs were positive. Wally pointed out that concentration in the cell-phone industry was greater in 2007, when Samsung and Nokia had a greater market share than Samsung and Apple do today.

EDAC members were asked which was the hottest EDA startup. Most often mentioned was Oasys. But also OneSpin, BDA, ICScape, Jasper, Calypto, AtopTech, Forte, Breker, DeFact. Funny how many of the “hot startups” have been around for a dozen years or more.

Asked about the funding environment for startups, the EDAC membership came up with “bleak,” “dead,” “poor,” “you have to turn to Qualcomm, Xilinx, Apple and Intel.”

Raul pointed out that semiconductor in general and EDA too suffer from an image problem. If you ask what the next big thing is going to be, few people say “semiconductor.”

Peggy asked about a comment from the CEO of a small company who said it would not be long before Intel acquired Synopsys. Aart pointed out that the economics wouldn’t work. But if Intel or anyone else wanted Synopsys at the revenue multiple that Cadence just paid for Tensilica, then he’s available.

To a question on cloud computing, Aart said that Synopsys had made $0 on it. Not just a low number but actually zero. Luckily all the infrastructure they needed is what is required for deploying on internal clouds so it wasn’t a completely bad investment. Raul agreed. Nimbic expected big companies to be attracted to the cloud since even in companies with tens of thousands of servers, getting 100 at the same time is problematic. But he also made $0 on it, although smaller companies designing little circuit boards (and who don’t have big internal server farms) represent a long tail that is more accepting.


Visual Debugging at Altera on Billion-Transistor Chips
by Daniel Payne on 03-15-2013 at 10:38 am

My first job out of college was doing transistor-level circuit design, so I’m always curious about how companies are doing billion-transistor chip design and debug these days at the FPGA companies.

I spoke with Yaron Kretchmer, who works at Altera and manages the engineering infrastructure group, where they run a compute farm, manage EDA licenses and create tool flows. Yaron has been at Altera for 10 years and has a background in ASIC (LSI Logic) and full-custom IC design.


Yaron Kretchmer, Altera

Continue reading “Visual Debugging at Altera on Billion-Transistor Chips”


Can “Less than Moore” FDSOI provide better ROI for Mobile IC?
by Eric Esteve on 03-15-2013 at 10:00 am

In this previous article, I suggested that certain chip makers might take a serious look at a disruptive way of looking at Moore’s law, as they may get better ROI, better profit and even better revenue. The idea is to select the technology node and packaging technique in order to optimize the Price / Performance / Power triptych, and to manage chip development lead time to optimize Time To Market (TTM) and cost. Only a complete business plan would confirm the validity of this assumption, but we think it could be a new direction to be explored, so we propose some tracks.

The goal for a chip maker supporting “Less Than Moore” is not to displace Qualcomm or Samsung, who follow Moore’s law and get back more than enough revenue to invest in developing ever more integrated ICs, targeting smaller technology nodes and supporting the type of roadmap you can see below. This roadmap from Samsung shows discrete application processor and baseband processor paths, as well as a parallel roadmap for cost-sensitive systems with an integrated (application + BB) processor.

Following Less Than Moore (LTM) could be beneficial for some of the (many) followers of the two above-mentioned leaders: the AP market is so competitive that only a few can be successful if they all play Moore’s law. As previously discussed, developing an IC in a more mature technology node (say 28 nm instead of 14 or even 20 nm) will certainly offer a shorter development schedule: EDA tools have stabilized, as have the technology-related models, and the node requires fewer “weird techniques” at the layout stage, so the design cycle is expected to be shorter: better TTM and lower development cost. On the process side, when a technology node is more mature, processing the wafer requires fewer operations and fewer mask steps, so the wafer fab cycle is faster and the mask cost lower: again better TTM and lower development cost. But what kind of approach could keep development cost lower than following Moore’s law while still allowing success in the mobile market? That means trying to offer the best Price / TTM / MIPS-per-watt compromise. Let’s have a look at various approaches like FDSOI, Multi Chip Package (MCP) and 3D chip processing & packaging, and verify the technical and economic feasibility.

FDSOI stands for Fully Depleted Silicon On Insulator, and recently ST and ST-Ericsson fabricated a smartphone chip based on 28nm FD-SOI in which the dual-core ARM Cortex-A9 CPU can reach 800MHz at a mere 0.6V and over 1.5GHz at just 0.85V. The benefits: better performance and energy efficiency across the full power-supply range, exceptional performance at very low Vdd (e.g., 0.6V-0.7V), enhanced efficiency of DVFS (Dynamic Voltage and Frequency Scaling), and significant boosts in performance and leakage control through the optional use of a back-bias.
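A back-of-envelope check using the classic dynamic-power relation P ≈ C·V²·f (with switched capacitance normalized to 1 and leakage ignored) shows what those two ST / ST-Ericsson operating points imply:

```python
# Back-of-envelope on why low-Vdd operation matters, using P_dyn ~ C * V^2 * f
# with switched capacitance normalized to 1. The voltage/frequency points are
# the ST / ST-Ericsson figures quoted above; everything else is illustrative.

def dyn_power(vdd, freq_hz, c_switched=1.0):
    return c_switched * vdd**2 * freq_hz

p_low  = dyn_power(0.60, 800e6)    # low-power operating point
p_high = dyn_power(0.85, 1.5e9)    # performance operating point
print(f"power ratio high/low: {p_high / p_low:.1f}x")    # ~3.8x
print(f"energy per cycle ratio: {(0.85/0.60)**2:.2f}x")  # ~2.0x
```

In other words, dropping from 0.85V to 0.6V roughly halves the dynamic energy per cycle, which is why being able to run usefully fast at very low Vdd is such a big deal for battery life.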

If we look at the picture above, we see that leakage power is becoming the dominant part of power dissipation for traditional CMOS technologies at the 28 and 20 nm nodes. It sounds quite clever to target FDSOI and minimize leakage. But does it really work? The picture below shows the leakage power of the ARM Cortex-A9 as a function of performance (the frequency axis) for 28LP (VDD=1V), 28G (VDD=0.85V) and 28FDSOI (VDD=0.9V). If we consider an application processor for wireless applications, we should compare 28LP and 28FDSOI. What we can see is:

  • For the same leakage power budget (20 mW), FDSOI provides a 30% increase in frequency, reaching 1.32 GHz
  • Or, at the same frequency (1 GHz), an order of magnitude less leakage power: 10 mW for 28FDSOI compared with 100 mW for 28LP

So, a rough approximation is to consider that using 28nm CMOS transistors on an FDSOI wafer provides the same performance as using 20nm CMOS transistors on a bulk silicon wafer, for the same power budget. FDSOI also provides another optimization path, with substrate biasing usable as a powerful way to get very high performance when needed: in the picture above you will note (green curve) that applying a 0.45V Forward Body Bias (FBB) increases the Cortex-A9 frequency from 1.55 GHz to 1.75 GHz, at the same added power cost as increasing the frequency on bulk 28G. The picture below illustrates the principle of FBB, along with an SEM view of the silicon structure.

FDSOI looks attractive, but it could be wise to check if there are any drawbacks…

Design flow and EDA tools: FD-SOI is fully compatible with traditional planar technology and does not disrupt the design methodology, meaning designers keep the same flows and tools they would use for conventional CMOS design.

Design IP and libraries: at this stage, you realize that you need at least to port the standard-cell libraries, memories and power switches. But you also need to redesign critical IP: I/Os, ESD structures, fuses and analog IP. This can be a serious drawback if the chip maker does not usually develop its own critical IP such as PLLs or SerDes. If we take the example of the application processor, you must find an IP vendor who develops and sells USB 2.0 & USB 3.0 PHY, MIPI D-PHY & M-PHY, HDMI PHY, LPDDR2/3 PHY and probably a couple of PLLs…

This is a chicken-and-egg problem: if FDSOI adoption is high enough, IP vendors will redesign these IP blocks, but if not, the chip maker will have to rely on a third-party design service to develop them. The problem here is a cost adder compared with off-the-shelf IP and, even more critical, a higher risk compared with using silicon-proven IP from an IP vendor (not that silicon-proven analog IP is necessarily available when you use the most advanced technology node either…).

So, FDSOI is clearly an attractive technology, especially for wireless APs, as it drastically reduces the power budget (by almost an order of magnitude for leakage power) or increases the processor core frequency. Using FDSOI is equivalent to designing one technology node back (28nm instead of 20nm) while benefiting from lower mask cost and process complexity. This benefit should be balanced against the extra cost and risk linked to the redesign of all critical analog IP, a cost and risk that will be minimized when/if FDSOI reaches a sufficient level of adoption.
In fact, studying Less Than Moore techniques requires more than two posts; stay tuned for a coming post dealing with multi-chip packaging and 3D chip integration, as these technologies could be promising too!

By Eric Esteve from IPNEST

Qualcomm Roadmap Clarification (about the previous article)

Remark: when I showed this picture from Qualcomm in the previous “Less Than Moore” article, I was under the impression that it was their mobile product roadmap. In fact it is not: the meaning of this slide is to show the evolution of the modem IC (MDM products) supporting more modes at every generation, or being integrated with the AP (MSM products = single-die modem + AP). I thank Edgar Auslander for providing this meaningful information (the key message being that Qualcomm is today offering its 3rd generation of LTE modem, far ahead of the competition) and invite him to share the missing part of the Qualcomm roadmap here on SemiWiki… if possible!


Costello on Communicating a Compelling Company Story
by Paul McLellan on 03-14-2013 at 11:53 pm

The next event in the EDAC-sponsored emerging company series (what I’ve been calling Hogan University) is Joe Costello being interviewed on how to communicate a compelling company story. Anyone who saw Joe’s keynote at DAC several years ago will not want to miss this. I can’t promise that he’ll lie down on the stage and pretend to be a fish again, but I’m sure it will be interesting.

I’ll tell you his rules from that keynote:

  1. “Think like a fish.” Know what your customers really want and give it to them.
  2. “Write the press release first.” It never gets better than the press release, so try to develop your product to measure up to the press release.
  3. “Change the rules.” The company that sets the rules wins.

This will take place at 6pm on May 1st on the Cadence campus (building 10). If you don’t know where the Cadence campus is (you probably worked there at some point; Joe of course did) you surely aren’t in EDA. The event is sure to sell out (it is free, so “sell” is not quite the right term, but you do need to register).

Details are on the EDAC website here, including a link for registration.


IJTAG for IP Test: a free seminar
by Beth Martin on 03-14-2013 at 1:53 pm

What: Better IP Test with IJTAG
When: 26 March, 2013, 10:30am-1:30pm
Where: Mentor Graphics, 46871 Bayside Parkway, Fremont, CA 94538


If you are involved in IC test*, you’ve probably heard about the IEEE P1687 standard, called IJTAG for ‘internal’ JTAG. IJTAG defines a standard for embedded IP that includes simple portable descriptions that can be supplied with the IP itself. This creates an environment for plug-and-play integration, access, test, and pattern reuse of embedded IP that doesn’t currently exist.

It’s the first new standard designed specifically to deal with the growing amount of IP used in today’s complex designs, and I expect that it will see wide adoption in the industry.

This seminar from Mentor Graphics covers the key aspects of IJTAG, including how it simplifies the design setup and test integration task at the die, stacked die, and system level. You will also learn about IP-level pattern reuse and IP access with IJTAG. Are you wondering what you need to do to migrate your existing 1149.1-based approach to P1687? Yep, that’s covered in the seminar too.

Mentor offers a product, Tessent IJTAG, to automate some aspects of implementing P1687, and it is described in the seminar. Tessent IJTAG automates design and test tasks, and reduces the length of the aggregated test sequence for all the IP blocks in an SoC. This translates directly into faster production test readiness, reduced test time, and smaller tester memory requirements.
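To see why this shortens aggregated test sequences, consider a toy model of a P1687-style network built from SIBs (segment insertion bits), where an instrument’s register sits on the scan path only while its SIB is open. The structure and bit counts below are invented for illustration and are not Tessent IJTAG output:

```python
# Toy model of a P1687-style scan network: SIBs splice an instrument's
# register into the chain only when that instrument is being accessed,
# which is what keeps aggregated pattern sets short.

def chain_length(instruments, active):
    """Each instrument = (name, register_bits). One SIB bit is always on
    the chain; the register is only inserted when the instrument is active."""
    length = 0
    for name, bits in instruments:
        length += 1                      # the SIB itself
        if name in active:
            length += bits               # segment spliced in behind the SIB
    return length

ips = [("pll", 16), ("serdes", 128), ("memory_bist", 64), ("sensor", 8)]
print(chain_length(ips, active={"pll"}))        # 20 bits to reach the PLL
print(chain_length(ips, active=set()))          # 4 bits when all SIBs closed
flat = sum(bits for _, bits in ips)
print(f"flat chain would be {flat} bits every shift")   # 216
```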

All the examples used in the seminar are from actual industrial use cases (from NXP and AMD). The presenter is Dr. Martin Keim. He has the experience and technical chops to make this a very useful day for everyone involved.

Register now!

*DFT managers, DFT engineers, DFT architects, DFT methodologists, IP-, chip-, and system-design managers and engineers, IP-, chip-, and system-test integrators, failure analysis managers and engineers, system test managers, and system test engineers. Whew!


ARM Cortex SoC Prototyping Platform for Industrial Applications
by Daniel Payne on 03-14-2013 at 1:00 pm

If your next SoC uses an ARM Cortex-A9 and has an industrial application, then you can save much design and debug time by using a prototyping platform. The price to prototype is quite affordable, and the methodology has a short learning curve. Bill Tomas, an Aldec research engineer, conducted a webinar today on: ARM Cortex SoC Prototyping Platform for Industrial Applications.



Bill Tomas, Aldec
Continue reading “ARM Cortex SoC Prototyping Platform for Industrial Applications”