GlobalFoundries Expands in Singapore
by Paul McLellan on 09-09-2013 at 8:30 pm

GlobalFoundries has been in Singapore for a long time, longer than GlobalFoundries has existed, in fact. Chartered Semiconductor was started in Singapore in 1987, and GF acquired it in early 2010, less than a year after GF itself was created by spinning out the manufacturing arm of AMD. When GF was started, its state-of-the-art fab was in Dresden, Germany; its newest state-of-the-art fab is in New York state. GlobalFoundries is committed to Singapore and is expanding there, investing $500M under the umbrella title Singapore Vision 2015.

In the paragraph above I talked about state-of-the-art fabs, but you have to look at the end application too. For power ICs, 180nm is the state-of-the-art process, and it is a big market. The market for analog and power ICs manufactured on mature processes is $45B and is growing at a 15% CAGR, faster than the overall semiconductor market. The market is changing too. Most of these specialized chips used to be manufactured by specialized IDMs, but the designs have moved to process generations beyond what most of those IDMs have in-house, and so they have naturally migrated to the foundry model just as high-complexity SoCs did over the last couple of decades.

This is not driven by the latest SoC for your smartphone (those other two fabs are for that) but by all the analog/mixed-signal content that is everywhere. In your smartphone for a start: if you tear down a typical smartphone you'll find only a few digital chips (the application processor, the baseband processor, which may be combined, and flash memory) but around eight analog chips for everything from power management to display drivers and gyroscopes. There is also a big analog/mixed-signal component in storage, automotive, industrial and mil/aero.

There are currently six GlobalFoundries fabs in Singapore (numbered 2, 3, 5, 6, 7 and 3E). The oldest, Fab 2, opened in 1995; the most recent, 3E, in 2008. So what are they doing with that half-billion-dollar investment?

  • Refreshing the 200mm assets
  • Expanding the 300mm fab 7 giving the benefits of modular process flexibility with 300mm efficiency and scalability
  • Fab 6 will also be converted from 200mm to 300mm
  • That will take Singapore to around 1M 300mm wafers per year (2.6M 8″ equivalents, the “standard” measure), with 300mm making up 2/3 of capacity, up from 1/3 today
  • Aligning the product and manufacturing portfolio with target markets, with a focus on advanced analog, non-volatile memory, digital, power and RF
  • Reducing manufacturing complexity and increasing manufacturing scalability through a modular technology platform.

Read more about Vision 2015 and Foundry 2.0 here.


Cadence Introduces Palladium XP II
by Paul McLellan on 09-09-2013 at 8:00 pm

Well, despite all the arguments in the blogosphere about what process node Palladium's silicon is built on, whether the design team is competent, and why it reports into sales, Cadence has announced their latest big revision of Palladium. Someone seems to be able to get things done. Of course it is bigger and faster, and just as we used to have FPGA wars over how many “gates” you could fit in a given product, we now have wars about exactly what capacity and performance mean in emulation. Emulation is a treadmill, though: Cadence announced today, so their product is probably the biggest for now. Mentor's Veloce was last revved in April last year, and I'm not sure about Synopsys (aka EVE, aka ZeBu). But for sure they will both be revved again and will probably each be the capacity leader for a time. And for sure their marketing will claim the crown by whatever measure they choose.


Cadence reckons that they have boosted overall verification productivity by 2X coming from:

  • extending capacity to 2.3B gates
  • performance up to 50% higher than XP
  • trace depth 2X higher
  • debug upload speed 2-3X higher
  • power density (of the emulator itself) 11% lower
  • dynamic power analysis (of the design being emulated)

I met with Frank Schirrmeister who, like me, spent a lot of time trying to sell virtual platform technology (he at Axys and Virtio, more recently at Synopsys and Cadence; me at VaST and Virtutech). The problem was always the models. They were expensive to produce for standard parts, and they were not the golden representation for custom blocks, so little differences from the RTL would creep in. Now that emulation has gone mainstream, it seems to be the way you get the models for the non-processor blocks: just compile the actual RTL.

This is all really important. I won't bother to put in charts here because you've all seen them. Just as we used to have “design gap” graphs in every presentation, every presentation to do with SoC design now shows the exploding cost of verification and the increasing cost of software development as the software component of a system has grown so large. With some graphs showing the software component growing to $250M, it is unclear how anyone designs a chip. But software lasts much longer than any given chip. While I can believe that Apple or Google has spent that kind of money on iOS and Android, that software runs on a whole range of different chips over a period of many years. Indeed, that is one aspect of modern chip design: when you start to design the chip, a lot of the software load that will run on it already exists, so you'd better not break it.

Like those ubiquitous Oracle commercials that say “24 of the top 25 banks use Oracle”, Cadence reckon that 90% of the top 30 semiconductor companies use Palladium emulation (not necessarily exclusively; many companies source from multiple vendors, and an emulator has to get really old before you throw it out); that 90% of the top 10 smartphone application processors and 80% of the top 10 tablet application processors use it (are there really that many? After Qualcomm, Apple, Samsung and MediaTek everyone else must have negligible share); and that the graphics chips of all the top game consoles (I think there are just 3 of those) were verified with Palladium.

But the more interesting story, I think, is the gradual creation of Verification 2.0 which ties together virtual platforms, emulation, FPGA based verification (Cadence’s is called RPP) and RTL simulation. As part of the new Palladium, Cadence have also upgraded the links between Palladium and Incisive RTL simulation and also between the Virtual System Platform (VSP) and Palladium, making it easier to run software and hardware together.


One of the tricks to running virtual platforms is that the processor models have to run a reasonable amount of code before synchronizing with the rest of the system (otherwise the whole system runs like molasses; under the hood this is JIT compilation, after all). How often this needs to be done depends critically on what the software is doing and how deterministic the hardware needs to be. Ethernet communications or cellphone packets don't arrive on a precise clock cycle of the processor, for example; the nearest few million cycles will do. With the improved capabilities, Frank reckons they see a 60X speedup for embedded OS boot (it is very important to be able to boot an OS quickly whatever you intend to do once it is up, and since the system isn't “up” there isn't a lot going on in the real world to synchronize with) and 10X for production and test software executing on top of the OS on top of the emulated hardware.
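
To make the quantum idea concrete, here is a minimal sketch of a quantum-based co-simulation loop. It is purely illustrative: the class names, the one-instruction-per-nanosecond timing and the quantum size are invented for the example and have nothing to do with Cadence's actual VSP or Palladium interfaces.

```cpp
// Toy quantum-based co-simulation loop (illustrative only, not any vendor's API).
// The CPU model runs a "quantum" of instructions ahead before the rest of the
// system is allowed to catch up, trading timing accuracy for simulation speed.
#include <cstdint>
#include <iostream>
#include <vector>

struct CpuModel {                                  // stand-in for a fast JIT-style ISS
    uint64_t executed = 0;
    void run(uint64_t instructions) { executed += instructions; }
};

struct Peripheral {                                // e.g. an Ethernet MAC model
    void advance_to(uint64_t /*time_ns*/) { /* deliver packets due by this time */ }
};

int main() {
    CpuModel cpu;
    std::vector<Peripheral> devices(3);
    const uint64_t quantum_instructions = 1000000; // sync roughly every million instructions
    const uint64_t ns_per_instruction = 1;         // crude timing model

    uint64_t now_ns = 0;
    for (int q = 0; q < 100; ++q) {                // run 100 quanta
        cpu.run(quantum_instructions);             // race ahead without synchronizing
        now_ns += quantum_instructions * ns_per_instruction;
        for (auto& d : devices) d.advance_to(now_ns); // let the rest of the system catch up
    }
    std::cout << "executed " << cpu.executed << " instructions, simulated "
              << now_ns << " ns\n";
}
```

Shrinking the quantum improves timing fidelity but adds synchronization points, which is exactly the speed-versus-accuracy trade-off described above.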

One of the key lead customers was nVidia. They reckon it improves the software validation cycle and also simplifies post-silicon bringup. And, based on my past experience, they are not an easy customer to please because their chips push the boundaries of every tool.

For Richard Goering’s blog on the announcement, on the Cadence website, go here.

Although test is not quite verification, ITC is this week at the Disneyland Hotel. If you are going, Cadence have an evening reception nearby on Wednesday 11th at the House of Blues (where the Denali party was the last time DAC was in Anaheim, I think before Cadence acquired them). For details of all Cadence presentations at ITC and to register for the evening reception go here.


TSMC OIP: Mentor’s 5 Presentations
by Paul McLellan on 09-09-2013 at 6:30 pm

At TSMC’s OIP on October 1st, Mentor Graphics have 5 different presentations. Collect the whole set!

11am, EDA track. Design Reliability with Calibre SmartFill and PERC. Muni Mohan of Broadcom and Jeff Wilson of Mentor. New smart-fill methodologies were developed at 28nm to meet DFM requirements (and at 20nm we may have to double-pattern some layers of fill), along with Calibre PERC for checking subtle electrical reliability rules. Both were successfully deployed on a large 28nm tapeout at Broadcom.

2pm, EDA track. EDA-based Design for Test for 3D-IC Applications. Etienne Racine of Mentor. Lots about how to test 3D and 2.5D ICs such as TSMC's CoWoS. 3D requires bare-die testing (to address the known-good-die problem) via JTAG, which can also be used for contactless leakage test. Plus how to test memory on logic and other test techniques for the More than Moore world of 3D-ICs.

3pm, EDA/IP/Services track. Synopsys Laker Custom Layout and Calibre Interfaces: Putting Calibre Confidence in Your Custom Design Flow. Joseph Davis of Mentor. Synopsys's Laker layout environment can run Calibre “on the fly” during design to speed the creation of DRC-correct layouts. Especially at nodes below 28nm, where the rules are incomprehensible to mere mortals, this is almost essential for developing layout in a timely manner.

4.30pm, EDA track. Advanced Chip Assembly and Design Closure Flow Using Olympus-SoC. Karthik Sundaram of nVidia and Sudhakar Jilla of Mentor. Chip assembly and design closure have become a highly iterative manual process with a huge impact on both schedule and design quality. Mentor and nVidia talk about a closure solution for TSMC processes that includes concurrent multi-mode multi-corner optimization.

Identifying Potential N20 Design for Reliability Issues Using Calibre PERC. MH Song of TSMC and Frank Feng of Mentor. Four key reliability issues are electromigration, stress-induced voiding, time-dependent dielectric breakdown of the intermetal dielectric, and charged device model (CDM) ESD. Calibre PERC can be used to verify compliance with these reliability rules in the TSMC N20 process.

Full details of OIP including registration are here.


A Hybrid Test Approach – Combining ATPG and BIST
by Daniel Payne on 09-09-2013 at 5:18 pm

In the world of IC testability we tend to look at various approaches as independent means to an end, namely high test coverage with the minimum amount of test time, minimum area impact, minimum timing impact, and acceptable power use. Automatic Test Pattern Generation (ATPG) is a software-based approach that can be applied to any digital chip design and requires that scan flip-flops be added to the hardware. SoCs can also be tested by using on-chip test hardware called Built-In Self Test (BIST).

At the ITC conference this week there's news from Mentor Graphics about a hybrid approach that combines both ATPG and BIST techniques. Ron Press and Vidya Neerkundar wrote a seven-page white paper, and I'll summarize the concepts in this blog. Read the complete white paper here or visit Mentor at ITC in booth #211 and ask for Ron or Vidya.


Ron Press

ATPG

The basic flow for using ATPG and compression is shown below:

The ATPG software is first run on the digital design, and patterns are created that will be applied to the chip's primary input pins and read on the output pins. To economize on tester time, compression is used on both the incoming and outgoing patterns; extra hardware is added to the chip for decompression, the scan chains and compression. Benefits of using compression ATPG are:

  • High fault coverage for stuck-at and other fault models (i.e. resistive, transistor-level)
  • Low test logic overhead
  • Low power use

Logic BIST

With Logic BIST the chip itself includes the test electronics and instrumentation.


The test electronics is shown as a random pattern generator, colored blue, on the left. The instrumentation to measure results is a signature calculator, also in blue, on the right side. Scan chains are shown in orange, just as in the previous diagram for ATPG. The random pattern generator creates its own stimulus, which is then applied to the block under test; the response is collected by the signature calculator to determine if the logic is correct or defective (a toy sketch of these two blocks follows the list below). Benefits of using a logic BIST approach are:

  • No need for an external tester
  • Supports field-testing
  • Can detect un-modeled defects
  • Supports design re-use across multiple SoC designs
  • Reduces tester time because patterns are generated on-chip rather than read from external data
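
To make the diagram concrete, here is a toy sketch of the two blue blocks: a pseudo-random pattern generator built from an LFSR and a MISR-style signature calculator. The register width, polynomial taps and the stand-in “circuit under test” are all invented for illustration; they are not Mentor's implementation.

```cpp
// Toy Logic BIST: a 16-bit LFSR pattern generator feeding a stand-in circuit,
// with a MISR-style signature calculator compacting the responses.
// Register width, taps and the "circuit" are chosen purely for illustration.
#include <cstdint>
#include <cstdio>

uint16_t lfsr_step(uint16_t s) {                       // Fibonacci LFSR, taps 16,14,13,11
    uint16_t bit = ((s >> 0) ^ (s >> 2) ^ (s >> 3) ^ (s >> 5)) & 1u;
    return static_cast<uint16_t>((s >> 1) | (bit << 15));
}

uint16_t misr_step(uint16_t sig, uint16_t response) {  // fold each response into the signature
    return static_cast<uint16_t>(lfsr_step(sig) ^ response);
}

uint16_t circuit_under_test(uint16_t stimulus) {       // stand-in for the scanned logic block
    return static_cast<uint16_t>((stimulus * 3u) ^ 0x5A5Au);
}

int main() {
    uint16_t pattern = 0xACE1u;                        // non-zero LFSR seed
    uint16_t signature = 0;
    for (int i = 0; i < 10000; ++i) {                  // apply 10,000 pseudo-random patterns
        signature = misr_step(signature, circuit_under_test(pattern));
        pattern = lfsr_step(pattern);
    }
    std::printf("final signature: 0x%04X\n", signature);
    // A defect changes the response stream and, with very high probability,
    // the final signature, so only this one value has to be checked.
}
```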

Comparing ATPG and Logic BIST

Let’s take a quick comparison overview of ATPG and Logic BIST to better understand where each would offer the greatest test benefits.

Criterion             | ATPG            | Logic BIST
Black-box module      | OK              | Not accepted because of potential unknown (X) values
Testpoints            | Fewer required  | May be required for some faults
Multi-cycle paths     | Supported       | Added hardware required
Cross-domain clocking | Easy to support | Unknown states cannot be captured
ECO support           | Easy            | Masking ECO flops adds hardware
Diagnostic resolution | Good            | Good

The Hybrid Approach

There is a way to combine test hardware so that any block can have both compression ATPG and Logic BIST approaches together:

Diving into a bit more detail reveals some of the test logic in the hybrid approach:

With a hybrid implementation, ATPG only needs to target faults that are not already detected by Logic BIST, saving up to 30% or so of test time. You can use this hybrid methodology in either a top-down or bottom-up flow.
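
The “top-up” idea can be illustrated with a little fault accounting: fault-simulate the pseudo-random BIST patterns first, then hand only the leftover faults to deterministic ATPG. The fault list and the 90%-random-testable assumption below are made up for the sketch and do not reflect Tessent's internals.

```cpp
// Toy fault accounting for a hybrid flow: pretend the pseudo-random BIST
// patterns detect ~90% of faults, then hand only the remainder to ATPG.
// The fault list and detection model are invented for the sketch.
#include <cstdio>
#include <set>

int main() {
    std::set<int> all_faults;                          // full fault list for the block
    for (int f = 0; f < 1000; ++f) all_faults.insert(f);

    std::set<int> bist_detected;                       // faults the BIST patterns catch
    for (int f : all_faults)
        if (f % 10 != 0) bist_detected.insert(f);      // assume ~90% are random-pattern testable

    std::set<int> atpg_targets;                        // what deterministic top-up ATPG must cover
    for (int f : all_faults)
        if (bist_detected.count(f) == 0) atpg_targets.insert(f);

    std::printf("total: %zu, detected by BIST: %zu, left for top-up ATPG: %zu\n",
                all_faults.size(), bist_detected.size(), atpg_targets.size());
}
```

Real flows fault-simulate the actual BIST pattern set to build this list, but the accounting is the same.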

Hybrid Benefits

Adopting a hybrid test approach can provide:

  • Test/retest SoCs during burn-in and in-system
  • Very low DPM (Defects Per Million)
  • Checks for many defects

    • Small delay
    • Timing-aware
    • Cell-aware
    • Path delay
  • Lower test hardware area through merged logic

Summary

It may be appropriate to consider using both compression ATPG and Logic BIST in your next SoC design. Mentor Graphics' Tessent product line supports this hybrid test approach. Read the complete white paper here; registration is required to download the PDF document.



Verifying Hardware at the C-level
by Paul McLellan on 09-09-2013 at 2:25 pm

As more people adopt high-level synthesis (HLS) they start to worry about the best design flow to use, especially for verification, since it forms such a large part of the effort on a modern SoC. The more people rely on HLS to produce their RTL from C, the more they realize they had better do a good job of verifying the C code in the first place. Waiting until RTL is inefficient, not to mention late in the flow: issues are hard to locate and debug, and you can't simply patch the generated RTL even when you find them (or at least you are asking for big problems if you do). Instead you need to return to the “golden” representation, which is the C, find the corresponding part of the code and fix it there.

Better by far is to do verification at the C level. If the C is correct, then HLS should produce RTL that is correct-by-construction. Of course “trust but verify” is a good maxim for IC design, otherwise we wouldn't need DRC after place and route. Similarly, you shouldn't assume that just because your C is good the RTL doesn't need any verification at all.

Unfortunately, tools for C and SystemC verification are rudimentary compared with the tools we have for RTL (software engineering methodology has changed a lot less than IC design methodology over the last couple of decades, but that is a story for another day). RTL engineers have code coverage, functional coverage, assertion-based verification, directed random and specialized verification languages. C…not so much.

But we can borrow many of these concepts and replicate them in the C and SystemC world. By inserting assertions and properties into your C models you can start to do functional coverage on the source code and formal property checking on the C-level design. An HLS tool like Catapult can actually make sense of these assertions and propagate them through into the RTL.
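
As a flavour of what this looks like in practice, here is a minimal C++ sketch of a C-level model carrying an assertion and a cover point. The COVER macro is hypothetical; exactly which constructs a given HLS tool such as Catapult recognizes and propagates is tool-specific.

```cpp
// A C-level model carrying an assertion and a (hypothetical) cover point.
// An HLS flow that understands such constructs can re-emit them at the RTL level.
#include <cassert>
#include <cstdio>

static int cover_hits[2] = {0, 0};
#define COVER(id, cond) do { if (cond) ++cover_hits[id]; } while (0)  // illustrative macro only

// Saturating accumulator: the untimed C "golden" model that HLS would synthesize.
int accumulate(int acc, int sample) {
    int next = acc + sample;
    if (next > 255) next = 255;                    // saturate high
    if (next < 0)   next = 0;                      // saturate low
    assert(next >= 0 && next <= 255);              // property: result always stays in range
    COVER(0, next == 255);                         // functional coverage: high rail reached
    COVER(1, next == 0);                           // functional coverage: low rail reached
    return next;
}

int main() {
    int acc = 0;
    int stimulus[] = {100, 200, 50, -500, 30};
    for (int s : stimulus) acc = accumulate(acc, s);
    std::printf("final acc=%d, high rail hit %d times, low rail hit %d times\n",
                acc, cover_hits[0], cover_hits[1]);
}
```

The point is that the property (the accumulator never leaves 0..255) and the coverage goals are stated once against the golden C, and can then be checked both in fast C simulation and, after propagation, in the generated RTL.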

Catapult, along with SLEC, allows you to do this sort of analysis now. With SLEC you can check for defects in the C/SystemC source, do property checking on assertions in your C-level code, and apply formal techniques to prove whether an assertion always holds. Catapult will then embed all these assertions into the RTL, automatically instantiating them as Verilog assertions that do the same thing at the RTL level as the originals did at the C level. It also takes C cover points and synthesizes them into Verilog, so that when you run Verilog simulation you also get functional coverage.

This hybrid approach leverages all the methodology that the RTL experts have put together, while pulling more of it up to the C level. After all, the better the C, the better the RTL (and the gates… and the layout).

Calypto have a couple of upcoming webinars, one for India/Europe on another topic and one for US on verification.

Dynamic/leakage power reduction in memories. Timed for Europe and India at 10am UK time, 11am central Europe, 12pm Finland and 2.30pm India. September 12th (this coming Thursday). Register here.

How to Maximize the Verification Benefit of High Level Synthesis with SystemC. 10am Pacific on October 22nd. Register here.


TSMC OIP: Soft Error Rate Analysis
by Paul McLellan on 09-09-2013 at 1:34 pm

Increasingly, end users in some markets are requiring soft error rate (SER) data. This is a measure of how resistant the design (library, chip, system) is to single event effects (SEE). These manifest themselves as SEU (upset), SET (transient), SEL (latch-up), SEFI (functional interrupt).

There are two main sources that cause these SEE:

  • natural atmospheric neutrons
  • alpha particles

Natural neutrons (cosmic rays) have a spectrum of energies, which affects how easily they can upset an integrated circuit. Alpha particles are stopped by almost anything, so they can only affect a chip if the source contaminates the packaging materials, solder bumps and so on.


More and more partners need to get involved in this reliability assessment process. Historically it has started when end users (e.g. telecom companies) are unhappy. This means that equipment vendors (routers, base stations) need to do testing and qualification, and they end up with requirements on the process, on cell design (especially flops and memories), and at the ASIC/SoC level.

Large IDMs such as Intel and IBM have traditionally done a lot of the work in this area internally, but the fabless ecosystem relies on specialist companies such as iROCtech. They increasingly work with all the partners in the design chain since it is a bit like a real chain. You can’t measure the strength of a real chain without measuring the strength of all the links.

So currently there are multi-partner SER efforts:

  • foundries: support SER analysis through technology-specific SER data such as TFIT databases (a toy FIT roll-up is sketched after this list)
  • IP and IC suppliers: provide SER data and recommendations
  • SER solution providers: SEE tools and services for improving design reliability, plus accelerated testing
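
As a rough illustration of what is done with per-cell SER data, here is a toy roll-up from cell-level FIT rates (failures per billion device-hours) to a chip-level estimate. All of the FIT values, instance counts and derating factors are invented for the example; real flows use characterized TFIT-style data and far more detailed derating.

```cpp
// Toy chip-level SER roll-up from per-cell FIT data (FIT = failures per 1e9 device-hours).
// Every number below (FIT rates, instance counts, derating factors) is invented.
#include <cstdio>

struct CellSer {
    const char* name;
    double fit_per_instance;   // raw soft-error rate per instance, from characterization data
    double instances;          // number of instances on the chip
    double derating;           // fraction of raw upsets that actually matter (logical/temporal)
};

int main() {
    const CellSer cells[] = {
        {"flip-flop", 0.0005,  2.0e6, 0.10},
        {"SRAM bit",  0.0001, 64.0e6, 0.30},   // before any ECC credit
        {"latch",     0.0004,  0.5e6, 0.10},
    };

    double chip_fit = 0.0;
    for (const CellSer& c : cells) {
        double contribution = c.fit_per_instance * c.instances * c.derating;
        std::printf("%-9s contributes %7.1f FIT\n", c.name, contribution);
        chip_fit += contribution;
    }
    std::printf("estimated chip SER: %.1f FIT\n", chip_fit);
}
```

The point of the multi-partner model above is that each factor in this product comes from a different link in the chain: raw rates from foundry and IP characterization, instance counts from the SoC design, and derating from system-level analysis.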

Reliability has gone through several eras:

  • “reactive”: end users encounter issues and have to deal with them (product recalls, software hot-fixes etc.)
  • “awareness”: system integrators pre-emptively acknowledge the issue (system and component testing, reliability specifications)
  • “exchanges”: requirements and targets are propagated up and down the supply chain, with SER targets for components, IP etc.
  • “proactive”: objective requirements will drive the design and manufacturing flow towards SER management and optimization

One reason that many design groups have been able to ignore these reliability issues is that they depend critically on the end market. The big sexy application processors for smartphones in the latest process do not have major reliability issues: they will crash from time to time due to software bugs and you just reboot; your life is not threatened. And your phone only needs to last a couple of years before you throw it out and upgrade.

At the other end of the scale are automotive and implantable medical devices (such as pacemakers). They are safety critical and they are expected to last for 20 years without degrading.


iRocTech have been working with TSMC for many years and have even presented several joint papers on various aspects of SER analysis and reliability assessment.

iRocTech will be presenting at the upcoming TSMC OIP Symposium on October 1st. To register go here. To learn more about TFIT and SOCFIT, iRocTech’s tools for analyzing reliability of cells and blocks/chips respectively, go here.


Rapid Yield Optimization at 22nm Through Virtual Fab
by Pawan Fangaria on 09-09-2013 at 10:00 am

Remember? During DAC 2013 I talked about a new kind of innovation: a Virtual Fabrication Platform, SEMulator3D, developed by COVENTOR. Now, to my pleasant surprise, there are proven results to report from this platform. IBM, in association with COVENTOR, has successfully used a 3D virtual fabrication methodology to rapidly improve the yield of its high-performance 22nm SOI CMOS technology.

The CTO-Semiconductor of COVENTOR, Dr. David M. Fried was in attendance while IBM’s Ben Cipriany presented an interesting paper on this work at The International Conference on Simulation of Semiconductor Processes and Devices (SISPAD 2013). The paper is available at the link “IBM, Coventor present 22nm Virtual Fabrication Success at SISPAD” at the COVENTOR website.

Dr. Fried leads COVENTOR’s technology strategy, partnerships and external relationships. His expertise touches upon areas such as Silicon-on-Insulator (SOI), FinFETs, memory scaling, strained silicon, and process variability. He is a well-respected technologist in the semiconductor industry, with 45 patents to his credit and a notable 14-year career with IBM, where his latest position was “22nm Chief Technologist” for IBM’s Systems and Technology Group.

I had a nice opportunity to speak with Dr. Fried by phone during the conference in Glasgow, Scotland. From the conversation, all I can say is that SEMulator3D is a genuinely innovative and fascinating technology that will greatly ease the complex design and manufacturing process, not only at 22nm but at still lower nodes to come, specifically in analyzing and predicting variability at these nodes quickly and accurately.

The Conversation –

Q: I guess the first introduction of SEMulator3D in wide public media was in DAC 2013?

Actually COVENTOR is a 17-year-old company, with experience in MEMS software, device simulation and design-enabling tools. It has many years of 3D structural modelling experience from the MEMS field, which has been migrated over the last 7 years into the semiconductor space after encouragement by Intel, one of our original investors. I moved to COVENTOR about a year ago, and am now driving its technology roadmap and expanding SEMulator3D usage across the semiconductor industry. In 2012, we made a very important release of SEMulator3D. But our newest release, in 2013, developed in significant collaboration with IBM, adds a lot of automation and advanced modelling and really takes a quantum leap forward in virtual fabrication.

Q: This is a new concept of a Virtual Fabrication Platform. How does it relate to the design flow at fabless companies?

It provides a flexible process platform for fabless companies to explore the physical embodiment of their specific design IP; otherwise they do not know what happens at the process end. In other words, it is a “window into the fab” for fabless companies. Variability mainly depends on physical parameters, and because SEMulator3D is 10-100 times faster than other process modelling tools, it can provide many more data points from physics-driven, highly predictive and accurate models, giving a more rigorous view of variability effects.

Q: Let's look at some aspects of the paper presented at SISPAD. It is meticulous work: the 3D model was matched to inline measurement and metrology data as well as 2D SEM/TEM cross-section images of a nominal transistor. How did you get the idea to model this way, so accurately?

In any modelling situation, we first need to gain confidence by reproducing past results and demonstrating a match with existing data, calibrating, and then predicting. The modelling engine is physics-driven, so it correctly predicts many processes and effects without any calibration. For this detailed study, the nominal model was validated against a significant amount of inline measurement and metrology, as well as TEMs and SEMs, and iterations of calibration were performed. After that, thousands of variation cases were run and the results were assembled and analysed quantitatively. SEMulator3D 2013 adds the first production availability of our Expeditor tool, which enables automated spreadsheet-based process variation study and analysis. Our expanded virtual metrology capabilities can automatically collect and analyse the structure during the minutes of modelling rather than the months these experiments could take in the fab.
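
In the same spirit, here is a toy version of such a variation study: perturb a few process parameters, evaluate a stand-in structural measurement for each case, and look at the statistics. The parameter names, distributions and the one-line “model” are invented for illustration and are not SEMulator3D's engine or Expeditor's interface.

```cpp
// Toy process-variation study: perturb a few process parameters, evaluate a
// stand-in "virtual metrology" measurement for each case, and summarize.
// Parameter names, distributions and the model itself are invented for illustration.
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

struct Case { double litho_overlay_nm, etch_bias_nm, gate_cd_nm; };

// Stand-in for model build + virtual metrology: returns an "n-p interface offset" in nm.
double virtual_metrology(const Case& c) {
    return c.litho_overlay_nm + 0.5 * c.etch_bias_nm + 0.2 * (c.gate_cd_nm - 26.0);
}

int main() {
    std::mt19937 rng(42);
    std::normal_distribution<double> overlay(0.0, 2.0), etch(0.0, 1.0), cd(26.0, 0.8);

    std::vector<double> results;
    for (int i = 0; i < 5000; ++i) {                   // thousands of virtual experiments
        Case c{overlay(rng), etch(rng), cd(rng)};
        results.push_back(virtual_metrology(c));
    }

    double mean = 0.0, var = 0.0;
    for (double r : results) mean += r;
    mean /= results.size();
    for (double r : results) var += (r - mean) * (r - mean);
    var /= results.size();
    std::printf("n-p interface offset: mean %.2f nm, sigma %.2f nm over %zu cases\n",
                mean, std::sqrt(var), results.size());
}
```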

Q: It is observed that major process variation occurs at the n-p interface due to spatial exposure and overlay errors from the multiple lithography levels whose edges define that interface. So, how difficult is it when multiple devices in a large layout are involved?

Right, the nfet-pfet interface is used in this work as a vehicle for the study, and the approach can easily be scaled to a larger design space. Expeditor enables massively parallel execution, so the same type of modelling can be applied to many different design elements, such as cells in a design library. That's why there is really no comparison between SEMulator3D and any TCAD tool out there: SEMulator3D is extremely fast, applicable to a larger physical design space and provides yield-focused modelling.

Q: How large a design can SEMulator3D handle when modelling and evaluating process window tolerance?

There is effectively no limitation on design area; what matters is modelling time. SEMulator3D can do the typical job in minutes rather than weeks. The concept is to choose the smallest area of the design that can be modelled to solve a specific process integration challenge, apply the process description, define the resolution of the models, and then parallelize the job. Some users are investigating large memory arrays, some users are investigating single-transistor effects. Most of our typical users apply this modelling platform at the circuit scale.

Q: So, how do you choose that smallest sample, process description and resolution?

In our more collaborative engagements, like the one in this paper, we work with the customer to define these aspects and optimize the modelling exercise. The customer is always interested in their specific design IP, but we can help optimize the area selection and parallelization. The process description is where we do most of our collaboration. Each customer’s process is different, and developed independently, but we spend time working with “Modelling Best Known Methods” and many tricks of the trade we’ve learned in virtual fabrication. The modelling resolution is flexible based on the effects being studied and can be tailored to optimize modelling performance based on the granularity of the effects being analysed.

Q: What kind of different design structures have been modelled using SEMulator3D?

There are many, to name a few – DRAM, SRAM, logic, Flash memory, ESD circuitry, Analog circuits, TSVs, multi-wafer 3D integration structures, and so on.

Q: We know IBM is already working closely with COVENTOR. Who are the other customers using SEMulator3D?

Our primary user base is in the foundries, IDMs and memory manufacturers. We are engaged with the top 9 worldwide semiconductor manufacturers and also several fabless design houses. With Cadence we have provided a Virtuoso Layout Editor link to SEMulator3D. SEMulator3D also has its own full-function layout editor which serves as a conduit to any existing GDSII.

Q: One last question. From this conversation I can see that this tool has a bright future and can provide a solid foundation for designs at deep-submicron process nodes, but I would like to know your view on what's in store for the future?

One of the things I spend a lot of time thinking about is 3D DRC. You know, most yield limitations come from a physical problem that is 3D in nature. That single 3D problem gets exploded into 100s of 2D rules to provide a DRC checking infrastructure. In many cases, it’s becoming cumbersome and inefficient to do it this way, especially in the development phase where these rules are just being explored. That’s an area to work on and optimize by doing 3D searches on structural models. There are some structures that will be better checked in 3D rather than with a massive set of translated 2D rules. So, in the future 3D Design Rule Checking will be a reality, and we are working hard on that.

This was a very exciting and passionate conversation with Dr. Fried, from which I was able to deepen my knowledge of process modelling. And I can see that SEMulator3D can provide a competitive advantage to design companies as well, because it facilitates their interaction with the fab in a virtual world, which can be used to mitigate manufacturing-related issues at the design stage. A novel concept!


Searching an ADC (or DAC) at 28 nm may be such a burden…
by Eric Esteve on 09-09-2013 at 9:13 am

If you have ever sent a Request For Quotation (RFQ) for an ASIC including a processor IP core, memories, interface IP like PCIe, SATA or USB, and analog functions like an Analog-to-Digital Converter (ADC) or Digital-to-Analog Converter (DAC), you have discovered, like I did a couple of years ago, that these analog functions may be the key pieces limiting your choice of technologies and foundries. Every System-on-Chip (SoC) will use a processor IP core (and most of the time it will be an ARM core, by the way), so you quickly realize that the processor is not a problem, as it will be available in most technologies and foundries, as will any single-port or dual-port memory through memory compilers.

But, especially if you target advanced nodes like 28nm, finding the right ADC or DAC may become a real issue. If you can't find it in the technology you have to target, chosen for good reasons like the smallest possible area, device cost, or keeping power consumption within the allocated budget, the only possibility will be to order a custom design… not really the best approach to minimize risk and time-to-market (TTM)! When a well-known and respected IP vendor like Synopsys announces they have scaled their ADC architecture to support mobile and multimedia SoCs at 28nm and beyond, that's the type of news that makes your life easier when sending an ASIC RFQ targeting TSMC 28HPM. If that is your case, it makes sense to attend the next webinar:

Scaling ADC Architectures for Mobile & Multimedia SoCs at 28-nm and Beyond

If you are interested, you should quickly register as this webinar will be held on Tuesday, the 10th of September…


Overview:
Data converters are at the center of every analog interface to a system-on-chip (SoC). As SoCs move to 28nm and smaller advanced process nodes, the challenges of integrating analog interfaces change due to process characteristics, reduced supply voltages, and the area requirements of analog blocks. Synopsys offers a complete portfolio of analog interface IP at 28nm, including high-speed ADCs and DACs, PLLs, and general-purpose ADCs and DACs, implemented on a standard CMOS process with no additional process options.

I have noticed a few impressive features:

  • Parallel SAR 12-bit ADC architecture implementations for up to 320 MSPS sampling rates (the basic SAR conversion loop is sketched after this list)
  • Parallel assembly allows for greater architectural flexibility for specific applications
  • Very high performance 12-bit 600 MSPS DAC
  • Power consumption reduction of up to 3X and area use reduction of up to 6X over previous generations
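
For readers unfamiliar with the architecture, here is a behavioural sketch of the basic successive-approximation loop that gives a SAR ADC its name: one bit is resolved per comparison, MSB first. This is a textbook illustration only and says nothing about Synopsys's circuit-level implementation.

```cpp
// Behavioural model of a 12-bit successive-approximation (SAR) conversion:
// one bit is resolved per comparison, MSB first. Purely illustrative; a real
// SAR ADC adds a sample/hold, redundancy and calibration around this loop.
#include <cstdio>

int sar_convert(double vin, double vref, int bits) {
    int code = 0;
    for (int b = bits - 1; b >= 0; --b) {
        int trial = code | (1 << b);                   // tentatively set this bit
        double dac_level = vref * trial / (1 << bits); // internal DAC comparison level
        if (vin >= dac_level) code = trial;            // keep the bit if the input is above it
    }
    return code;                                       // 0 .. 4095 for 12 bits
}

int main() {
    const double vref = 1.0;
    const double inputs[] = {0.1, 0.5, 0.75, 0.999};
    for (double vin : inputs)
        std::printf("vin = %.3f V -> code %d\n", vin, sar_convert(vin, vref, 12));
}
```

Part of the appeal of SAR at advanced nodes is that most of the converter is switching logic plus a capacitive DAC, which scales relatively well with process.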

This webinar will:

  • Compare the prevailing analog-to-digital converter (ADC) architectures in terms of speed, resolution, area, and power consumption trade-offs
  • Describe the benefits of the successive-approximation register (SAR)-based ADC architecture for the medium and high speed ADCs
  • Describe how implementations of the SAR ADC architecture can reduce power consumption and area usage for 28-nm process technologies
  • Present the 28-nm DesignWare® Analog ADCs, which use the SAR-based architecture, and explain how they achieve 3x lower power consumption and 6x smaller area compared to previous generations


Who should attend:
SoC Design Engineers, Managers and System Architects

Presenter:

Carlos Azeredo-Leme, Senior Staff Engineer, DesignWare Analog IP, Synopsys
Carlos Azeredo-Leme has been a senior staff engineer for DesignWare Analog IP at Synopsys since 2009. Prior to joining Synopsys, he co-founded Chipidea Microelectronics in 1993, where he was a member of the Board of Directors and held the position of Chief Technical Officer. There, he was responsible for complete mixed-signal solutions, analog front-ends and RF, working in the areas of audio, power management, cellular and wireless communications and RF transceivers. Since 1994 he has held a position as Professor at the Technical University of Lisbon (UTL-IST) in Portugal. His research interests are in analog and mixed-signal design, focusing on low power and low voltage. Carlos holds an MSEE from the Technical University of Lisbon (UTL-IST) and a Ph.D. from ETH Zurich in Switzerland.

Eric Esteve from IPNEST



Test Compression and Hierarchy at ITC
by Daniel Payne on 09-09-2013 at 8:00 am

The International Test Conference (ITC) is this week in Anaheim and I’ve just learned what’s new at Synopsys with test compression and hierarchy. Last week I spoke with Robert Ruiz and Sandeep Kaushik of Synopsys by phone to get the latest scoop. There are two big product announcements today that cover:

Continue reading “Test Compression and Hierarchy at ITC”


SemiWiki Analytics Exposed 2013
by Daniel Nenni on 09-08-2013 at 7:15 pm

One of the benefits of blogging is that you put a stake in the ground to look back on and see how things have changed over the years. You can also keep a win/loss record of your opinions, observations, and experiences. Last year I posted the “SemiWiki Analytics Exposed 2012” blog, so here is a follow-up to that.

The Semiconductor Wiki Project, the premier semiconductor collaboration site, is a growing online community of professionals involved with the semiconductor design and manufacturing ecosystem. Since going online January 1st, 2011 to June 30th 2012 more than 300,000 unique visitors have landed at www.SemiWiki.com viewing more than 2.5M pages of blogs, wikis, and forum posts. WOW!

Wow indeed. We have now had more than 700,000 unique visitors and they just keep on coming. Original content and crowdsourcing wins every time, absolutely. There is a plethora of website rating companies but we use Alexa (owned by Amazon.com) as it has proven to have the most comprehensive and accurate analysis in comparison to Google and internal analytics, absolutely.

Rankings from the Alexa website:

Site             | 2012 Rank | 2013 Rank | Change
EETimes          | 24,870    | 28,584    | -13%
SemiWiki         | 431,594   | 249,989   | +73%
Design And Reuse | 312,684   | 320,258   | -2%
EDAcafe          | 441,196   | 388,857   | +9%
ChipEstimate     | 861,483   | 1,169,187 | -26%
DeepChip         | 1,175,172 | 1,849,759 | -36%
SoCCentral       | 3,765,891 | 2,529,697 | +14%

Yes, a 73% increase is a big number, but let me tell you why. First, you may have noticed we installed a new version of SemiWiki this summer which is optimized for speed and search engine traffic. My son, the site co-developer, has the summer off (he teaches math), so we kept him busy with site optimization. Second, we enhanced our analytics and internal reporting so we know more about what is happening inside SemiWiki. A good friend of mine is a Business Intelligence (BI) expert for a Fortune 100 company and shares his expertise with me in exchange for food and drink. Okay, mostly drink. This enhanced data mining and reporting capability gives our subscribers a clear view of what does and doesn't work so we can manage the growth of their landing pages and SemiWiki as a whole.

The Semiconductor Wiki Project, the premier semiconductor collaboration site, is a growing online community of professionals involved with the semiconductor design and manufacturing ecosystem. Since going online January 1st, 2011 more than 700,000 unique visitors have been recorded at www.SemiWiki.com viewing more than 5M pages of blogs, wikis, and forum posts. WOW!

I would tell you more but the new SemiWiki 2.0 software release is so amazing it would sound like I was bragging and I’m much too humble to brag.
