
Smart Strategies for Efficient Testing of 3D-ICs
by Pawan Fangaria on 02-12-2014 at 6:30 am

A 3D-IC has a stack of dies connected and packaged together, and therefore needs testing strategies beyond those used for a single die. Since a single defective die can render the whole 3D-IC unusable, each die in the stack must be thoroughly tested before it enters that stack. Looking at it from a yield angle, it is much better to have a stack of smaller dies (which can have higher yield) than one large die that is more prone to defects. So, what should be done to realize the high yield, performance, power and gate density per unit volume that a 3D-IC promises? A thoughtful, orderly sequence of steps has to be performed so that every die works correctly on its own and then inside the package together with the other dies.


[CoWoS arrangement in TSMC reference flow]

Last year, TSMC's 3D-IC reference flow (typically called CoWoS™ – Chip on Wafer on Substrate) was validated with Mentor's 3D-IC test solution, in which an SoC and a Wide I/O DRAM are placed on a passive silicon interposer with about 1200 connections between them. This methodology has since been extended with TSV (Through Silicon Via) based stacked dies, which require far less wire length, improving performance and interconnect scaling at lower cost.

Mentor has developed an excellent plug-and-play test infrastructure based on the proven JTAG standard (IEEE 1149.1) TAP (Test Access Port) as the interface to every die, and IJTAG (IEEE P1687) to model the TAP, the test access network, and the test components within each die. For accessing the whole package from outside, the TAP on the bottommost die is used, although for test purposes any die can be accessed.

The first step is to test each individual die thoroughly. This includes memory BIST (Built-In Self-Test) for memories (Mentor provides soft-programmable memory BIST, so algorithms can be applied as needed), embedded compression ATPG with logic BIST for stuck-at, transition, path delay and other timing-aware and cell-aware tests, and die I/O test based on boundary scan (IEEE 1149.1). The I/O wrap and contactless leakage tests use bi-directional boundary scan cells to set a logic value, tri-state the driver, and then capture the input; if excessive leakage is present, the driven value decays and a 0 is captured instead. This boundary scan technique can be used for thorough die testing, for partially packaged devices, and for the interconnect between packaged dies.
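
To make the sequence concrete, here is a minimal sketch of the IO-wrap / contactless leakage check described above, written in Python. The BoundaryScanCell class, its method names and the simple "leaky pad decays to 0" model are assumptions for illustration only, not Mentor's actual implementation.

```python
# Toy model of the IO-wrap / contactless leakage test: drive a value,
# tri-state the driver, then capture the input and flag pins whose value decayed.
# The cell model and leakage behaviour are illustrative assumptions only.

class BoundaryScanCell:
    def __init__(self, pin, leaky=False):
        self.pin = pin
        self.leaky = leaky      # models a defective pad with excessive leakage
        self.pad_value = 0

    def drive(self, value):
        """Drive a logic value onto the pad through the output driver."""
        self.pad_value = value

    def tristate(self):
        """Disable the driver; a good pad briefly holds its charge,
        a leaky pad decays toward 0 before the capture strobe."""
        if self.leaky:
            self.pad_value = 0

    def capture(self):
        """Capture the pad value back through the input path."""
        return self.pad_value


def io_wrap_test(cells):
    """Return the pins whose captured value does not match the driven 1."""
    failing = []
    for cell in cells:
        cell.drive(1)                 # 1. set a logic value
        cell.tristate()               # 2. tri-state the driver
        if cell.capture() != 1:       # 3. capture; a 0 indicates excessive leakage
            failing.append(cell.pin)
    return failing


if __name__ == "__main__":
    pads = [BoundaryScanCell("A1"), BoundaryScanCell("A2", leaky=True)]
    print("Leaky pins:", io_wrap_test(pads))    # -> Leaky pins: ['A2']
```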

A novel hierarchical DFT methodology provides significant advantages over conventional top-level ATPG. DFT insertion and ATPG pattern generation can be done at the individual block level, allowing this activity to start early in the design cycle in a more predictable manner and shortening the overall schedule. The patterns are then re-targeted to the top level, with the mapping handled automatically by intelligent software. The patterns for blocks on individual dies are re-used (or merged where required) for the complete SoC at the 3D-IC package level. Smaller block-level patterns run an order of magnitude faster, need less memory, allow more efficient scan channel allocation, and require an order of magnitude fewer tester cycles. An important observation is that the DFT logic and ATPG patterns, once created for a die, remain valid when that die is used in any other package.
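
As a toy illustration of the retargeting step (not the actual tool flow; the port names, the pattern format and the pin map below are invented), a block-level pattern expressed against block scan ports can be rewritten against top-level pins through a simple mapping:

```python
# Toy illustration of hierarchical pattern retargeting: a block-level pattern,
# expressed against the block's own scan ports, is remapped to top-level scan
# channels through a pin map. Port names and the pattern format are invented;
# real tools also handle timing, test protocols and pattern merging.

block_pattern = {          # values to apply at the block's scan ports
    "core_a/scan_in1": "1011",
    "core_a/scan_en":  "1",
}

pin_map = {                # how the integrator wired block ports to chip pins
    "core_a/scan_in1": "chip_scan_ch3",
    "core_a/scan_en":  "chip_scan_en",
}

def retarget(pattern, pin_map):
    """Rewrite a block-level pattern in terms of top-level pins."""
    return {pin_map[port]: bits for port, bits in pattern.items()}

print(retarget(block_pattern, pin_map))
# {'chip_scan_ch3': '1011', 'chip_scan_en': '1'}
```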

The 3D-IC test methodology allows testing of external memories (usually Wide I/O DRAMs, sourced from different vendors) through JEDEC standard functional pins. Mentor’s special custom interface is used to connect the boundary scan test port to the logic die TAP. With standard interfaces (which may have different internal layouts), the memory die can be swapped when required and new algorithms can be loaded into the soft-programmable memory BIST controller for any targeted testing.

After all dies and external memories have been tested, the interconnects, TSVs and the complete package must be tested. The test is managed through the TAP structure on the bottom die, which successively enables the TAPs on the dies up the chain. IJTAG is used to define the boundary scan network, TAPs, BIST and other DFT logic. The interconnect between the logic die and the external memory die is also tested through the JEDEC interface.

To validate the complete assembled 3D-IC, an ordered sequence of tests is performed, from power-up through the interconnections with the Wide I/O memory to at-speed tests. A detailed description of this sequence, and many other details of each stage, is provided in Mentor's whitepaper authored by Ron Press, Dr. Martin Keim and Etienne Racine. This plug-and-play test approach, based on the proven IJTAG standard and existing mature tools, provides robustness and the flexibility to adapt to changing requirements.

More Articles by Pawan Fangaria…..



Designing an SoC with 16nm FinFET
by Daniel Payne on 02-11-2014 at 9:35 pm

IC designers contemplating the transition to 16nm FinFET technology for their next SoC need to be informed about design flow and IP changes, so TSMC teamed up with Cadence Design Systems today to present a webinar on that topic. I attended the webinar and will summarize my findings.

Shown below is a 3D layout concept of an ideal FinFET transistor, followed by the actual manufactured device which is rotated 90 degrees from the layout:

Continue reading “Designing an SoC with 16nm FinFET”


Migrating to Andes from 8051
by Paul McLellan on 02-11-2014 at 5:21 pm

The 8051 microcontroller has been around for years…decades in fact. It was originally developed in 1980 by Intel. Back then it required 12 clock cycles per instruction but modern cores use just one. While it is still widely used, mostly as an IP core for SoCs, it is running out of steam despite running over 50 times faster than Intel’s original core. The trend is for microprocessors to deliver more work per second, which can be done by doing more work per instruction or increasing the clock rate. The overall trend is certainly towards 32 bit.


One thing driving this trend is the move towards connectivity, the Internet of Everything (IoE) or the Internet of Things (IoT), depending on your choice of buzzword. A microcontroller like the 8051 doesn't have enough compute power to run a full stack for internet, WiFi or cellular access; its memory interface both slows things down and consumes unnecessary power; and its address space is too small for the amount of memory these types of activities require.


Another power hog is security, and any embedded device with connectivity in the IoT world cannot ignore this since it is or will be subject to attacks. Keeping hackers at bay is a basic feature for any attached device.


The Andes family of microprocessors spans a wide range, from the N705 with a two-stage pipeline running at 240MHz up to the N13 with a 13-stage pipeline running at over a GHz.

One unique feature is FlashFetch. This adds two additional memories: one caches instruction prefetches from flash memory to keep the processor running at full speed and speeds up non-loop code accesses; the other is a tiny 128B cache that speeds up loop accesses. Any loop smaller than 128B runs entirely out of this cache, and lots of loops are like that in encryption algorithms, video encode/decode, plain memory copies and so on. The result is that the performance/power ratio of all the processors is better than that of equivalent products from other processor vendors.
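
As a rough back-of-the-envelope illustration of why a 128B loop cache covers many inner loops, the sketch below assumes an average instruction size of about 3 bytes; that size is purely an assumption for illustration, not the actual Andes encoding.

```python
# Rough check of which loops fit entirely in a 128-byte loop cache.
# The average instruction size is an assumption for illustration only.

LOOP_CACHE_BYTES = 128

def fits_in_loop_cache(n_instructions, avg_bytes_per_insn=3.0):
    return n_instructions * avg_bytes_per_insn <= LOOP_CACHE_BYTES

for n in (16, 32, 40, 64):
    print(f"{n:3d}-instruction loop fits: {fits_in_loop_cache(n)}")
# With ~3 bytes per instruction, loops up to roughly 42 instructions run
# entirely out of the loop cache, with no flash accesses at all.
```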


Of course there is a full development environment, AndesSight, development boards and a portfolio of associated IP to go along with the processor core itself, both silicon IP blocks and software stacks.

New products such as IoT devices require more performance AND less power to provide wireless connectivity, touch interfaces, power management, etc.

  • An optimized MCU-memory interface is a key way to increase performance while reducing power consumption
  • Andes' unique FlashFetch addresses the issue by reducing the number of accesses to flash memory
  • Andes' external prefetch buffer accelerates CPU performance
  • Securing your embedded SW code is imperative
  • A complete MCU design ecosystem improves your productivity


More articles by Paul McLellan…


A Brief History of Atmel
by Paul McLellan on 02-11-2014 at 4:12 pm

Atmel was founded in 1984. The name stands for “advanced technology for memory and logic”, although initially the focus was on memory. George Perlegos, the founder, had worked in the memory group at Intel back when Intel was a memory company and not a microprocessor company, although that didn't stop Intel suing Atmel a couple of years later for patent infringement.

Initially Atmel was fabless, but they purchased Honeywell's old fab in Colorado Springs, raising some venture capital to do so. They expanded the fab after purchasing FPGA manufacturer Concurrent Logic in 1991, the year in which they went public.

They then developed their first microcontroller, based on the Intel 8051 although with much higher performance. They still have the 8051 in their product line today, both as a standalone microcontroller and as a core they can embed in more customized devices. In the mid-1990s they licensed the ARM architecture, and today they have a broad portfolio of ARM-based devices.


They also developed the AVR line of microcontrollers, which were the first to use flash memory. The name comes from the initials of the Norwegian developers of the original core. Today the open-source Arduino project is built around AVR, targeted not at computer professionals but at hobbyists, designers and makers.

Over the years Atmel made acquisitions that ended up giving them additional fabs: Matra Harris Semiconductor (MHS) with a fab in Nantes, plus a fab in Germany and one in the UK acquired from Siemens (this was before Infineon was spun out separately). As for other semiconductor companies, having a number of small fabs went from being an advantage to being a liability as process development advanced and the size of an economical fab increased. Steven Laub became CEO and transformed Atmel into an increasingly fab-lite model, divesting most of the fabs it had acquired, although they still have the Colorado Springs fab.


However they also made some acquisitions, moving into the touchscreen market and making both touchscreen controllers and flexible touchscreens called Xsense that look pretty much like those old overhead projector transparencies, a market with great promise moving forward.

Revenue for the fourth quarter of 2013 was $353.2 million, a 1% decrease compared to $356.3 million for the third quarter of 2013, and 2% higher compared to $345.1 million for the fourth quarter of 2012. For the full year 2013, revenue of $1.39 billion decreased 3% compared to $1.43 billion for 2012.

More than half of Atmel's revenue comes from microcontrollers, with the rest spread around various lower volume products in markets such as automotive, memory (their original product area), LED drivers, wireless for low-cost consumer products, and security/encryption. They have very strong positions in certain fast growing end markets: for example, over 90% of the controllers in the 3D printing market are Atmel's. The breadth of their product line, especially their range of lower end microcontrollers, low-cost wireless and security, positions them well for the Internet of Things (IoT).


More articles by Paul McLellan…


ASTC and the new midrange ARM Mali-T720 GPU
by Don Dingee on 02-11-2014 at 3:00 pm

When we last visited texture compression technology for OpenGL ES on mobile GPUs, we mentioned Squish image quality results in passing, but weren’t able to explore a key technology at the top of the results. With today’s introduction of the ARM Mali-T720 GPU IP, let’s look at the texture compression technology inside: Adaptive Scalable Texture Compression, or ASTC.

In contrast to some other proprietary texture compression implementations, ASTC is backed by the Khronos Group – and royalty free. This multi-vendor backing by the likes of AMD, ARM, NVIDIA, Qualcomm, and even Imagination Technologies bodes well, with adoption gaining momentum. For example, CES 2014 saw NVIDIA’s Tegra K1 and Imagination’s PowerVR Series6XT GPU core both add ASTC support to the mix, and the latest homegrown Adreno 420 GPU core from Qualcomm showcased in the Snapdragon 805 also features ASTC.

Until these recent developments, many have equated ASTC technology with ARM, probably due to their leadership in the Khronos specification activity, and an IP onslaught designed to challenge Imagination head on. ARM was able to stake out a claim to ASTC support beginning with the Mali-T624 and subsequent versions of their GPU IP family.

First released in 2012, ASTC had the advantage of learning from actual use of other popular texture compression formats and targeting the opportunity for improvement. The problem, as the original Nystad presentation on ASTC from HPG 2012 opens with, is the wide field of use cases with varying requirements for color components, dynamic range, 2D versus 3D, and quality. Most of the existing formats do well in a particular use case, but not in others, and mobile designers have generally opted for more decode efficiency at the expense of some image quality.

ASTC is a lossy, block-based scheme and, like other schemes, is designed to decode textures in constant time with one memory access, but with a big difference: the block footprint in texels is variable while the compressed block stays a fixed 128 bits, giving a finely scalable bit rate. ASTC also supports both 2D and 3D textures, LDR and HDR pixel formats, and an orthogonal choice of base format – L, LA, RGB, RGBA – so it can adapt to almost any situation, leaving the bit rate encoding unaffected.
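
Because every compressed block occupies exactly 128 bits regardless of its footprint, the bit rate is simply 128 divided by the number of texels in the block. A quick sketch over the standard 2D footprints (the 3D footprints are omitted here):

```python
# Every ASTC block is exactly 128 bits, so the bit rate per texel is just
# 128 / (width * height) of the block footprint. Standard 2D footprints:

ASTC_2D_FOOTPRINTS = [(4, 4), (5, 4), (5, 5), (6, 5), (6, 6), (8, 5), (8, 6),
                      (8, 8), (10, 5), (10, 6), (10, 8), (10, 10), (12, 10), (12, 12)]

for w, h in ASTC_2D_FOOTPRINTS:
    bpp = 128 / (w * h)
    print(f"{w:2d}x{h:<2d} block -> {bpp:.2f} bits per texel")
# 4x4 gives 8.00 bpp, 6x6 gives 3.56 bpp, 8x8 gives 2.00 bpp,
# and 12x12 gets all the way down to 0.89 bpp.
```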

All this control provides a way to scale image quality more precisely, without changing compression schemes for different situations, but at first glance one might think it would blow up storage space requirements with all those settings running around. The math magic behind ASTC is fairly complex, depending on a strategy called bounded integer sequence encoding (BISE).

We know power-of-two encoding can be dreadfully inefficient, but BISE takes a seemingly-impossible approach: effectively, fractional bits per pixel. A long story short (follow the above BISE link for a more complete ARM explanation), BISE looks at base-2, base-3, and base-5 encoding to efficiently pack the number of needed values into a fixed 128 bit block. That magic explains the above bit rate table and its rather non-intuitive texel sizes.
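A sketch of the bit accounting that makes this work, assuming the packing the ASTC specification describes (5 trits fit in 8 bits because 3^5 = 243, and 3 quints fit in 7 bits because 5^3 = 125); the helper below is illustrative, not ARM's encoder:

```python
# Rough BISE bit-cost calculator. Each encoded value is a trit or quint plus
# some extra binary bits; trits pack 5-to-8-bits, quints pack 3-to-7-bits.

import math

def bise_bits(n_values, extra_bits, base):
    """Bits needed to store n_values, each with range base * 2**extra_bits."""
    if base == 1:                      # plain binary
        return n_values * extra_bits
    if base == 3:                      # trit-based
        return n_values * extra_bits + math.ceil(8 * n_values / 5)
    if base == 5:                      # quint-based
        return n_values * extra_bits + math.ceil(7 * n_values / 3)
    raise ValueError("base must be 1, 3 or 5")

# 16 weights quantized to 6 levels (a trit plus one bit) cost:
print(bise_bits(16, 1, 3), "bits")    # 42 bits -- 2.625 bits per weight
# versus 48 bits if each 6-level weight were rounded up to 3 binary bits.
```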

There are a few further insights to ASTC that make it pretty amazing:

  • As with other texture compression schemes, the heavy lifting is done during encoding, a software tool running on a host – all the mobile GPU has to worry about is fast texture decompression.

  • With all these options in play, the only thing fixed in the implementation is the 128 bit block footprint – every setting can vary block-to-block, meaning image encoding can change across an image dynamically based on the needs. (In theory, at least. I’m not sure ARM’s encoder tool actually does this, and in most comparisons, a group of settings applies to the entire image.)

  • The end result of more efficient and finer grained encoding is better signal-to-noise ratios – those better Squish results we mentioned earlier, with ARM indicating differences of 0.25dB can be detected by some human eyes.

  • ARM’s ASTC implementation is synthesizable RTL (plus ARM POP IP technology for hardening), allowing it to find homes with customers choosing Mali GPU IP or customers implementing their own GPUs like the ones listed above – and the absence of per-unit royalties is attractive for many potential users.

Now, ARM breaks back into the midrange with the cost-optimized Mali-T720 GPU core targeting Android devices, fully supporting ASTC and other enhancements including a scalable 1 to 8 core engine and partial rendering. As a companion to the optimized ARM Cortex-A17 core, the Mali-T720 continues to shrink die area targeting a 28nm process, while improving energy efficiency and graphics performance running at 695 MHz.

ASTC may be in the early stages of taking over for mobile GPU texture compression, especially on Android implementations where platform variety is larger. The adoption by Qualcomm is especially significant, and I’ll be excited to see Jon Peddie’s 2014 mobile GPU data soon to see what kind of impact the availability of more ARM Mali GPU IP is having. Stay tuned.

More Articles by Don Dingee…..




ARM Announces A17
by Paul McLellan on 02-11-2014 at 12:36 pm

It is microprocessors all the time right now, with Linley last week. Today ARM announced the next generation Cortex-A17 core. It is a development built on the Cortex-A12 core, itself built on A7 (which is the current volume leader). ARM says that it is 60% faster than the A7 core, although I’m sure a lot of that gain is a process node change and not just architecture, but I could be wrong (A7 is single issue, I think, and A12 is dual). The timing isn’t coincidental, Mobile World Congress is coming up in Barcelona in a couple of weeks.

The mobile market for processors is fragmenting since different price, performance and area targets are needed for different markets:

  • the very high end: companies like Apple and Qualcomm license the ARM architecture and build their own processors. For example, Apple had the first 64-bit ARM processor in production (before any of the licensees of ARM’s own implementation)
  • the high end: Cortex A-57 and A-53 big.little implementations with a good mix of high performance and low power
  • the mid-range, which seems to be where the A17 is positioned
  • I’m not sure if A7 is obsolete or is still a good solution for the low end especially in something like TSMC’s 28LP process. It can also do big.little with A15.


The lead customer for the A-17 is Mediatek. They announced an 8-core big.little A17/A7 application processor with on-chip 150Mbps LTE modem expected to be in volume production in second half of the year. I believe this is also the first big.little chip that allows all 8 cores to be used at the same time.

There is a process war (at least of words) going on between TSMC and Intel, along with a processor war between ARM and Intel.

Intel very publicly had graphs showing that Intel's wafer price continues to come down linearly but TSMC's is taking a pause. TSMC, most unusually for them and for Taiwanese companies in general, addressed this and denied it in their latest conference call. Cynics have suggested that the reason Intel shows a linear decline (there are no numbers on the graphs) is that their wafer cost is so high that if they only get it a bit closer to TSMC's then they can make the graphs look good. The numbers that I have heard are that Intel's wafer cost is as much as 30% higher than TSMC's.

Intel’s high end server cores have unparalleled performance on single thread, so perfect for some sorts of datacenters. But as I said last week, for some datacenters other aspects are more important: throughput, power, cost. For mobile, the high end is mature and in a low growth phase. Apple, Qualcomm and Samsung already own the bulk of the market. The low and mid end is where all the smartphone growth will come and it is all about cost. The goal is an unsubsidized smartphone at $200 or less next year. If Intel’s wafer cost really is 30% higher than TSMC’s then it won’t be cost-competitive.

Another Intel weakness is that it doesn't have an on-chip LTE modem. Its LTE modem is a mixture of technology acquired from Infineon and Fujitsu, and it is manufactured by TSMC while the application processors for mobile are all on Intel's own processes. Qualcomm, Mediatek, Broadcom and perhaps others all have integrated modems already. Intel's plan is to have an integrated modem for the low end in the second half of this year, but still manufactured by (according to their investor day presentations) an “external foundry”, which I assume means TSMC although they'd bite their tongue rather than say so explicitly. At the high end they will have a two-chip solution with their modem and a 14nm Atom called Moorefield. But I'm not sure who would buy it, maybe some tablet vendors (most of which sell tablets without modems anyway, at least today).

The jury remains out on whether there is an enterprise “Surface” tablet market where Intel architecture compatibility is important. I have always felt that there isn't, since I don't feel the need to run big spreadsheets on my iPad or create new PowerPoint decks. If there is, then Intel could be successful. In any case, it depends on what enterprise IT managers think, not what I do.

ARM didn't just announce the A17; they also announced two new GPU cores targeted at the same mid-range markets: the Mali-T720 GPU and the Mali-V500 video processor, which are scalable, energy-efficient, compact multimedia solutions and are complemented by the existing Mali-DP500 display processor.

ARM press release is here.


More articles by Paul McLellan…


If you still think that FDSOI is for low performance IC only…
by Eric Esteve on 02-11-2014 at 11:02 am

…then you should read about this benchmark result showing how digital power varies with process corners for a high-speed data networking chip, not exactly the type of IC targeting mid-performance mobile applications. Before discussing the benchmark results, we need some background about this kind of ASIC. Such a chip is really at the edge in terms of performance (operating frequency), which is why the freshly processed wafers are tested and binning is exercised. Intel uses binning to categorize the maximum frequency a CPU can reach: the higher the frequency, the higher the price the chip can be sold for. In this case, the goal is to extract as many ICs as possible from the wafers, in order to keep the chip price as (reasonably) low as possible. When binning detects chips ranked in the “slow” category, these chips are not trashed, but will be corrected in the field by using adaptive supply voltage (ASV). At this point, you may suspect that exercising a higher VDD on such a chip will have a negative impact on the power consumption (according to the VDD² law). Binning thus allows you to correct the impact of process variations and keep a chip running at the desired high frequency by applying a higher VDD, at the cost of higher dynamic power consumption.
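
The VDD² penalty of ASV is easy to quantify with the standard dynamic power relation P_dyn ∝ Vdd²·f (at constant activity and capacitance). The voltages in the sketch below are made up for illustration and are not taken from the benchmark:

```python
# The cost of adaptive supply voltage in dynamic power follows directly from
# P_dyn = alpha * C * Vdd^2 * f. The voltages below are made-up examples,
# not figures from the Fujitsu benchmark.

def dynamic_power_ratio(vdd_asv, vdd_nominal):
    """Dynamic power of an ASV-corrected part relative to a nominal part,
    assuming activity, capacitance and clock frequency stay the same."""
    return (vdd_asv / vdd_nominal) ** 2

for vdd in (0.90, 0.95, 1.00, 1.05):
    print(f"Vdd = {vdd:.2f} V -> {dynamic_power_ratio(vdd, 0.90):.2f}x dynamic power")
# Raising Vdd from 0.90 V to 1.05 V on a slow-corner die to hit the target
# frequency costs about 1.36x the dynamic power of a die that can run at 0.90 V.
```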

Those who still think that power consumption is only an issue for mobile applications should imagine dozens (if not hundreds) of high performance chips closely packed in racks. The “cost of ownership” of such a system rises with chip power consumption: you need to guarantee excellent heat dissipation at the chip level (a more expensive package, thermal drain, etc.), and at the system level you may have to deploy an expensive cooling strategy (from Wikipedia we learn that for every 100 watts dissipated in a server, you have to spend another 50 watts to cool it!). At the end of the day, somebody will pay the electricity bill! Add to these pure dollar expenses the degradation of the company's image in the eyes of eco-conscious customers, and you finally realize that lowering power, or increasing power efficiency, should be the semiconductor industry's next concern, not only for mobile applications…

The main conditions for this benchmark are listed below; for the complete picture, I suggest reading the full article by Ian Dedic, Chief Engineer at Fujitsu Semiconductor Europe, posted in the LinkedIn “FD-SOI design community” group:

  • extracted parasitics with a typical clock rate and an optimized library mix (different Vth and gate lengths)
  • fanout and tracking load taken from a high-speed data networking chip
  • high gate activity and 100% duty cycle
  • maximum Tj, because this is the maximum power condition needed for system design
  • supply voltage adjusted for each case (ASV) to normalize the critical path delay (clock speed) to the same value as the 28nm slow corner
  • FDSOI forward body biasing (FBB), used to decrease Vth, adjusted to get minimum power across process corners

From the first table, showing how the digital voltage (Vdd) varies with process conditions to keep the critical path delay at the same value, we can already see that enabling FBB keeps Vdd almost flat. Such an effect can only be obtained on FD-SOI technology, whether with a regular transistor architecture or with FinFETs. Now if we look at the results in terms of maximum power consumption (dynamic + leakage), the impact of forward body bias is very impressive: a 31% difference for the same technology node (14FD-SOI) at the slow corner… and up to more than twice the power consumption for the device in 28HPM with ASV.

Unfortunately, there is no benchmark of 14nm bulk FinFET, but the author makes the assumption that 14FDSOI (standard transistor architecture) with ASV only would be very similar to 14FDSOI with FinFETs. What we can say for sure is that you cannot exercise the FBB effect on 14nm bulk FinFET technology. Thus the great improvement in maximum power consumption on FDSOI technology is clearly due to the forward body bias effect, and such an improvement is a great benefit for high performance chips. Eliminating “slow” process corner devices would be possible, but extremely costly, since a chip maker pays for the complete wafer. Using adaptive supply voltage is a way to keep performance at the same high level for any chip, even one coming from a slow process corner, but at the expense of higher maximum power consumption. Finally, FDSOI is the only way to keep the device cost to a minimum (thanks to ASV) and minimize the power consumption (thanks to FBB).

As a reminder, or if you did not read one of the previous blogs about FDSOI, you can visualize the forward body bias effect in the picture above. It is only possible to apply a forward bias to the “body”, or the substrate of the wafer, on silicon-on-insulator (SOI) technology, as the buried oxide plays a role similar to that of a gate in a standard architecture, except that it only changes the threshold voltage (Vth). If the threshold becomes lower than nominal, it becomes possible to lower Vdd (or not to increase it) and still get the same performance as at a higher Vdd. Because dynamic power consumption is a function of the square of Vdd, the impact of FBB on power consumption is terrific…
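
One way to see the mechanism quantitatively is the classic alpha-power-law delay model, delay ∝ Vdd/(Vdd − Vth)^α: lowering Vth through FBB lets the same critical path delay be met at a lower Vdd, and dynamic power then falls with Vdd squared. The sketch below uses made-up voltages and a typical α, so only the trend is meaningful:

```python
# Sketch of why forward body bias helps: with the alpha-power-law delay model
# delay ~ Vdd / (Vdd - Vth)**alpha, lowering Vth via FBB lets the same delay be
# reached at a lower Vdd, and dynamic power scales with Vdd**2.
# All numbers here are illustrative, not taken from the benchmark.

ALPHA = 1.3   # velocity-saturation exponent, typical for modern nodes

def delay(vdd, vth):
    return vdd / (vdd - vth) ** ALPHA

def vdd_for_same_delay(target_delay, vth, lo=0.5, hi=1.5):
    """Bisection: find the Vdd that reproduces target_delay for a given Vth."""
    for _ in range(60):
        mid = (lo + hi) / 2
        if delay(mid, vth) > target_delay:   # too slow -> raise Vdd
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Slow-corner die corrected by ASV alone: Vth = 0.40 V needs Vdd = 1.00 V.
target = delay(1.00, 0.40)
# Same die with FBB lowering Vth to 0.30 V:
vdd_fbb = vdd_for_same_delay(target, 0.30)
print(f"Vdd with FBB: {vdd_fbb:.3f} V")
print(f"Dynamic power ratio: {(vdd_fbb / 1.00) ** 2:.2f}x")
```

The exact saving depends entirely on the assumed voltages; the point is simply that a Vth reduction buys back Vdd, which enters the power quadratically.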

From Eric Esteve from IPNEST


More Articles by Eric Esteve…..


Data Outgrowing Datacenter Performance
by Paul McLellan on 02-10-2014 at 1:13 pm

Last week I attended the Linley Datacenter Conference. This is not their mobile conference, which is not until April. However, a lot of the growth in the datacenter is driven by mobile, with the increasing dominance of the model where data is accessed by smartphones but a lot of the backend computing and data storage is in the cloud.

From 2012 to 2017 smartphones have grown 20% and tablets 46%. There are now more than 10 billion internet-connected devices. It takes about 400 smartphones to drive one server in a datacenter. During that same 5 year period, traffic per mobile device has increased between 4 and 8 times, with video being a big driver (growing fast and with the highest bandwidth requirements). This has driven a cloud computing growth rate of 23.5% from 2013 to 2017. As a specific example, Amazon Web Services (AWS) adds enough servers daily to support a $7B company.


One theme running through the first part of the conference is that the growth in the amount of data is overwhelming the compute power. The standard way to build a datacenter is with a two-level network: a top-of-stack (ToS) router at the top of each stack of servers, and then another level to link the stacks and connect to the outside world. The ToS may not use Ethernet; RapidIO and PCIe are options for the stack communication. The problem is that data rates are now so high that it takes more and more of the compute power on each server just to run the network stack, leaving less and less compute power available to do actual work.

More of the cost and more of the power dissipation is in the networking, and as a result there are solutions from companies like Cavium, Netronome and Tilera that can be used to offload the servers and free up more compute power for actual processing. Specialized memory architectures such as the Micron Hybrid Memory Cube are also targeted at improving power and performance.

Another area that still seems to be coming soon rather than already here is scaling out rather than up, using (mostly) ARM-based servers that have a much lower cost of ownership. Intel/AMD cores have very high single-thread performance, but this comes at a significant cost in terms of power dissipation. Highly integrated ARM-based chips are much lower power but have less single-thread performance. However, aggregate performance can be much higher for the same budget in dollars, watts or size. Solutions are shipping, although the big guys like Facebook don't yet seem to be building ARM-based datacenters. But there are products shipping:

  • Broadcom/Netlogic with up to 8 cores
  • Cavium Octeon II family with up to 32 cores
  • Freescale QorIQ P4 with up to 8 cores
  • LSI Axxia ACP3448 with 4 cores
  • Tilera Tile-GX with up to 72 cores


More articles by Paul McLellan…


Update on AMS Verification at DVcon
by Daniel Payne on 02-09-2014 at 7:35 pm

Digital verification of SoCs is a well-understood topic and there’s a complete methodology to support it, along with many EDA vendor tools. On the AMS (Analog Mixed-Signal) side of the design world life is not so easy, mostly because there are no clear standards to follow.

To gain some clarity into AMS verification I spoke today with Hélène Thibiéroz. She has more than 17 years of combined experience in engineering, product development and marketing for semiconductor and EDA companies. After completing her doctoral studies in EE, she worked for 5 years at Motorola as a device and SPICE characterization engineer within their advanced process and research center. She then moved to the EDA industry and has focused for more than 12 years on the AMS domain, from environment to analog, RF and mixed-signal simulations. She currently works as a senior marketing manager for Synopsys AMS products.

Q: Hélène, you have organized a panel discussion at DVcon to discuss what’s next for AMS verification. What was the motivation behind it?

In just a decade, the landscape of mixed signal design has drastically changed: we went from simple co-simulations between a digital and an analog solver to a more complex mixed signal verification environment. For example, more and more SoC designs rely on successfully assembling IPs rather than on a full-chip design approach. As such, there is a clear need today for advanced debugging features and technologies to fully test those new design assembly approaches.

While the future and unification of mixed signal verification is unclear due to the large diversity of use models and needs in the industry, new technologies and trends are emerging. I therefore wanted to invite several experts to discuss those emerging techniques that would enable the digitally-centric mixed-signal community to reach their next level of verification.

Q: Can you tell us more about the panel? What will be the format and topics?

The four panelists are from different industry segments with diverse requirements and opinions, but each with a deep AMS background. Each panelist will first present their flow and design challenges. We will then have specific topics opened for discussion, followed by an audience Q&A. I selected some topics ahead of time that in my opinion will generate an interesting discussion. Some of those topics will cover:

• New behavioral modeling needs and standards
• Digital verification methodologies applied to mixed signal
• Debugging/regression environment

Thanks Hélène for organizing this event. You'll have to register online to ensure a spot at this luncheon panel on Monday, March 3, from 12:30pm to 1:30pm in the Pine/Cedar Ballroom.



TI – The Initial Innovator of Semiconductor ICs
by Pawan Fangaria on 02-09-2014 at 9:00 am


[TI’s China Foundry acquired from SMIC]

During my engineering studies, electronic design courses and mini-projects, the ICs I used to come across were the SN 7400 series from Texas Instruments, which covered a large range of devices from basic gates and flip-flops to counters, registers, memories, ALUs, system controllers, and so on. There is also the ‘54’ series with military specifications and a wider temperature range. Today, we have a much wider variety of ICs, ASICs, SoCs and the like with various functions on the chip. Initially, 7400 series ICs used bipolar TTL (Transistor-Transistor Logic) technology that slowly transitioned through various refinements; now we see CMOS, BiCMOS, HCMOS and the like. While the underlying technologies changed, the part number of a particular type of IC remained almost the same, standardizing the IC number so it relates to the particular logic inside. While TTL provided higher speed, it also dissipated more power; CMOS dissipated lower power at the cost of lower speed. The technology within ICs has come a long way, with many variants (and new entrants over the years offering newer, differentiated ICs) struggling to optimize the trade-off between power and performance. And obviously, minimization of area with newer technologies and lower nodes played an important role in what we have today. In the near future, it would be my pleasure to talk more about TI's technology and product portfolio with focused attention on particular segments.


At this moment, it is very tempting for me to briefly ponder the history of this holistic, innovative, highly ethical, built-to-last and ever shining semiconductor design and manufacturing company (an IDM), one of the oldest in the industry. Jack S. Kilby, the great inventive and creative mind at TI and Nobel Laureate, has the honour of inventing the first commercial semiconductor IC in 1958 (the first patent on miniaturized semiconductor integrated circuits was filed in May 1959), probably the most versatile invention of the 20th century, which transformed the world of electronics. Kilby, who held more than 60 patents, was presented with the prestigious Nobel Prize in Physics in December 2000 for his lifelong exemplary work. In 1967, TI invented the first electronic hand-held calculator; again, a patent was filed in the name of Kilby along with two of his colleagues. With more than 41000 patents in its portfolio, TI is probably among the first movers to start earning revenue from its patents. More on that, and on other leading technologists on TI's board, later.

So, how did TI start? That is again an interesting and exciting transformation story. Although TI was founded in 1951, its roots go back to well before the Second World War. Eugene McDermott and John Clarence Karcher started a company in 1930 to provide seismography services to oil exploration companies, and it was later incorporated as Geophysical Service Inc. (GSI) in 1938. In 1941, Eugene McDermott, along with Cecil Green, Eric Jonsson and H. B. Peacock, bought GSI. During the later years of the Second World War, the U.S. Navy, under the leadership of Lieutenant Patrick Haggerty, entrusted GSI with a contract to develop devices to detect submarines from low-flying aircraft above the sea. The company, with its expertise in locating oil, was very successful in accomplishing this task, which involved detecting magnetic disturbances due to a submarine's movement. With such assignments catering to defense systems, GSI transformed itself into an electronics company. As the electronics business grew, GSI changed its name in 1951 to Texas Instruments Inc., a name much closer to the electronics business.

Patrick Haggerty was a great visionary. After leading the contract with GSI as a Navy Lieutenant, he joined GSI in 1945 as General Manager, with responsibility for diversifying the electronics business. His keen mind had observed that the last 20 years of electronics development had been driven by components and the circuitry connecting them, and that the future belonged to how dense those connections could become through the evolution of new materials and technologies. In 1951, when TI was formed, Haggerty was Executive V.P., and he initiated the purchase of a license from Western Electric to manufacture transistors, with an eye to entering the consumer electronics business, which had great hidden growth potential. TI then manufactured the first transistor radio, with low power consumption and a size that could fit into a large pocket. In 1958, Haggerty became President of TI, and that is when Jack Kilby, then a researcher at TI, invented the IC. It was a great union of a visionary and an inventor.

1958 was an opportune time for TI: it was the prime supplier of electronics to the U.S. military, and the U.S. Air Force needed a major renovation of its ballistic missile guidance system. TI received major funding from the U.S. Air Force to develop ICs for the ballistic missile guidance system. Later, in 1961, the first IC-based computer for the U.S. Air Force was developed. Stay tuned for more….

Today, TI (headquartered in Dallas, Texas, U.S.A.) has a presence in more than 35 countries, with more than 100,000 customers, the largest worldwide sales and support staff, and more than 100,000 analog ICs, embedded processors, tools and software products.

More Articles by Pawan Fangaria…..
