
Verification Execution: When will we get it right?

by Daniel Payne on 02-06-2014 at 7:50 pm

Verification technologist Hemendra Talesara attended a conference in Austin and asked me to post this article on verification execution for him as a blog. I first met Hemendra when he worked at XtremeEDA; he now works at Synapse Design Automation – a design services company.
“In theory there is no difference between theory and practice, but in practice there is”
– Harry Foster

At a recent conference in Austin, I heard one of my favorite verification philosopher-scientists, Harry Foster, give a talk on how, in spite of advances in verification technology, methodology and processes, we are barely keeping up with verification. Two thirds of projects are behind schedule, and this has held constant for the five-plus years for which the data has been tracked. The good news is that this is true in spite of increasing verification complexity (Harry cited different ways folks have looked at verification complexity, although a golden standard for complexity is still illusory; maybe Accellera will focus on it next) and shrinking schedules. The bad news is that in spite of all the advances and maturation of tools, technology and processes, verification execution remains the number one headache for management.


Let’s look at it from another perspective: at least one third of projects are either on time or ahead of schedule.

A schedule slip just indicates the gap between planning and execution. Where is the gap?

Let’s acknowledge that there are indeed a few difficulties in planning a verification effort. In earlier days verification was not very distinct from design tasks and was well understood by design managers. But as design complexity has grown, verification has become a lot more complex. Tools and technology have also become more sophisticated, and the skillset required to execute is becoming more diverse. Scoping, planning and leading a verification effort have likewise become a lot more complex, and verification is consuming more resources. Harry extrapolates that at the rate resources are being thrown at verification, in twenty-five years the only thing designers will be doing is verification :rolleyes:.

Indeed, part of the problem is that planning itself requires that one understand what it takes to deploy these new technologies and the demands they place on skillset, resources, effort, etc. A gauge on the complexity of the tasks at hand and a good understanding of new verification technologies are required. There are standard project management practices that can be employed for initial planning, scoping, scheduling and so on. The problem starts when plans reflect not reality, but market constraints or the wishful thinking of management.

“Realism is at the heart of execution, but organizations today are full of people who like to avoid or shade reality”
– Larry Bossidy and Ram Charan

Even if the plan did reflect reality, reality is constantly shifting. The impact of specification changes and the overall scope of the tasks that need to be dealt with are not always reflected in the plan. Perhaps verification today demands more agility than a typical design task of earlier days.
Yes, there are difficulties, but even then at least one third of projects finish on time.

Finally, if we look at this slide from Harry, it appears that on average almost 36% of verification time is spent on debugging. Now, if one must shade reality to please upper management, what is most likely to get cut short in the planning process? The not-so-predictable debugging, which mostly falls in the final phase of verification. This is the verification execution reality; actual mileage for each project will vary. But does your typical plan actually reflect a similar distribution?

I invite you to share your experience and insight: where do we fall short in planning? Where do we miss on execution? Or when we ship on time, what is it that we do right? Are current project management practices adequate for dealing with verification issues?

I am looking forward to your thoughts and comments.

Hemendra Talesara



SoC Verification Closure Pushes New Paradigms

by Pawan Fangaria on 02-06-2014 at 10:00 am

In the current decade of SoCs, semiconductor design size and complexity have grown at an unprecedented scale in terms of gate density, number of IPs, memory blocks, analog and digital content and so on, and are expected to increase further by many folds. Given that level of design, it’s evident that the SoC verification challenge has grown even faster (more than 2/3rds of design time is spent in verification); however, verification resources have seen more or less incremental and scattered additions. Other than logic and timing simulation, multiple other verification methods have evolved, such as formal, hardware-assisted, embedded software and virtual verification. Considering that the whole SoC has to be verified together, accurately and in a reasonable time, we must look at it from a different perspective of complete and fast design closure rather than only the performance of certain steps in the verification flow, such as simulation speed. Of course, performance is very important, but it needs to be put to productive use with intelligent assignments (or tweaking) to reach final verification closure faster.

Last week, when I reviewed Cadence’s new release of the Incisive 13.2 platform, I felt it is a real major leap in the right direction towards what SoC verification needs today. Adam Sherer, Product Marketing Director at Cadence, has described the top 10 ways (using the Incisive 13.2 platform) to automate verification for the highest levels of performance and productivity in his whitepaper posted on the Cadence website. Although most of these methods provide multi-fold productivity improvement in verification, I was really impressed with those which work at a higher level of abstraction to complete the job an order of magnitude faster, yet with the same level of accuracy as gate level. Notable among them are –

a) X-Propagation Flow – The Incisive Enterprise Verifier formal app creates assertions (to resolve issues due to the X-optimism of RTL such as Verilog, which can often assign a ‘0’ or ‘1’ to evaluate a state which should actually be an ‘X’ as per gate logic; refer here for details) that can be applied to X-propagation RTL simulation to monitor the generated X values. These X values, created by logic, X-propagation or low-power simulation, can then be identified in the SimVision debugger.
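To make the X-optimism problem concrete, here is a minimal Python sketch (not a Cadence tool, just an illustration) of why an RTL `if` can hide an unknown value that gate-level logic would expose:

```python
# Hypothetical 4-state sketch: RTL 'if' vs a gate-level mux.
X = "x"  # the unknown value

def rtl_if(cond, then_val, else_val):
    # Verilog event simulation treats any non-1 condition as false,
    # so an X condition silently selects the else branch (X-optimism).
    return then_val if cond == 1 else else_val

def gate_mux(sel, a, b):
    # A gate-level 2:1 mux propagates X on the select line,
    # unless both data inputs happen to agree.
    if sel == X:
        return a if a == b else X
    return a if sel == 1 else b

# With an unknown select, RTL hides the problem; gates expose it.
print(rtl_if(X, 1, 0))    # RTL optimistically returns 0
print(gate_mux(X, 1, 0))  # gate logic returns 'x'
```

The assertions the formal app generates serve exactly this purpose: flagging places where the optimistic RTL result diverges from what the gates would produce.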

b) UVM Debug Enhancements – With SystemVerilog support (in addition to ‘e’) in the Incisive Debug Analyzer, its integration with Incisive Enterprise Manager, and several enhancements in the SimVision unified graphical debugging environment, bugs can be found in minutes instead of hours, and debugging can also be done in regression mode.

c) Digital Mixed Signal – SystemVerilog (IEEE 1800-2012) real number modeling support in the ‘Digital Mixed Signal Option’ of Incisive Enterprise Simulator enables users to model wave superposition at digital simulation speed, thus introducing a powerful capability in Incisive Enterprise Simulator to perform complete mixed-signal simulation without needing co-simulation with an analog solver.
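The idea behind real number modeling can be sketched in a few lines of Python (illustrative only; the frequencies, amplitudes and time step are invented): an analog node is represented as a real value updated at discrete simulation times, so two driver contributions superpose with plain addition instead of an analog solve.

```python
import math

def tone(freq_hz, ampl, t):
    # One real-valued sinusoidal contribution to a node.
    return ampl * math.sin(2 * math.pi * freq_hz * t)

def superpose(t):
    # Two hypothetical drivers sharing one real-valued node:
    # superposition is just addition, evaluated at digital-sim speed.
    return tone(1e3, 1.0, t) + tone(3e3, 0.25, t)

# Sample the node at a 1us time step, as a discrete-event sim would.
samples = [superpose(n * 1e-6) for n in range(1000)]
```

This is why the approach runs at digital simulation speed: each time step costs a handful of real arithmetic operations rather than a matrix solve.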

d) Register Map Validation – For a design with tens to hundreds of thousands of control registers, it’s impossible to manually write tests and verify them all. The Incisive platform has a Register Map Validation app that completes the job in a few hours as compared to weeks of simulation effort.

Further, a nice article on this topic written by Richard Goering provided me a link to a webinar on Register Map Validation, and I was delighted to go through it to learn how exhaustive, efficient and easy Register Map Validation is. Thanks to Jose Barandiaran, Senior Member of Consulting Staff, and Pete Hardee, Director of Product Management at Cadence, for presenting this nice webinar.

Using the simulation approach for register map validation can be very inefficient in terms of coverage and test time, whereas the Register Map Validation app can automatically test the registers with all meaningful activities; above is an example of an RW access policy test.

Users can provide specific information, such as the protocol, which is merged with the register description in IP-XACT format by a ‘Merge Utility’; the extended IP-XACT description is passed to the Register Map Validation app, which automatically generates a Bus Functional Model (BFM) and other checks that get verified through Incisive Enterprise Verifier.

Both front-door (writing to a register) checking between the IP and BFM and back-door (read) checking are done. Usually at the sub-system level, multiple IPs are interfaced through a bridge. Verilog, VHDL or mixed language can be used. All types of checking are done: read-only, RW, write-one-to-set, write-one-to-clear, read-to-set, read-to-clear, register value after reset and write-only (in the case of back door). The app also provides control over activity regions to check for any corruption. Particular registers or fields can be specifically selected for test.
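To give a flavor of what these access-policy checks test, here is a hedged Python sketch of a few of the policies listed above (RW, write-one-to-clear, read-to-clear, read-only). The class and field names are invented for illustration; they do not model the actual app.

```python
class Register:
    """Toy model of a control register with an access policy."""

    def __init__(self, policy, reset=0, width=8):
        self.policy = policy
        self.mask = (1 << width) - 1
        self.value = reset & self.mask

    def write(self, data):            # front-door write
        data &= self.mask
        if self.policy == "RW":
            self.value = data
        elif self.policy == "W1C":    # write-one-to-clear
            self.value &= ~data & self.mask
        elif self.policy == "RO":
            pass                      # writes are ignored

    def read(self):                   # front-door read
        v = self.value
        if self.policy == "RC":       # read-to-clear
            self.value = 0
        return v

# A W1C status register: writing 1s clears only those bits.
status = Register("W1C", reset=0b1111)
status.write(0b0101)                  # clear bits 0 and 2
assert status.read() == 0b1010
```

An exhaustive policy test walks every register through the reads and writes that are meaningful for its policy and checks the observed value against this kind of reference behavior, which is exactly the work the app automates.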

An easy-to-use SimVision GUI is provided for customized debugging to identify the particular bits in error, the particular phase of testing (front door or back door), register address, reset sequence and value, etc. There is also an interesting demo during the webinar which shows live debugging and validation of registers. It’s an interesting webinar to go through!

The Register Map Validation app enables designers to do exhaustive testing of register access policies. It significantly reduces verification time and enhances designers’ productivity with easy debugging. The Incisive Enterprise Verifier uses a mixed approach of simulation and formal: formal analysis is done statically in a breadth-first manner covering the complete space, while assertion-driven simulation is dynamic and linear, requiring no testbench. Cadence is seeing a rising customer base using the Incisive Verification Apps.

More Articles by Pawan Fangaria…..



Cadence Acquires Forte

by Paul McLellan on 02-05-2014 at 4:46 pm


Cadence today announced that it is acquiring Forte Design Systems. Forte was the earliest of the high-level synthesis (HLS) companies. There were earlier products: Synopsys had Behavioral Compiler and Cadence had a product whose name I forget (Visual Architect?), but both were too early and were canceled. Cadence internally developed its own high-level synthesis product called C-to-Silicon Compiler. The market is now ready to adopt high-level synthesis. Xilinx purchased AutoESL a couple of years ago and is aggressively pushing designers to a higher level, mainly so that software engineers who have never even heard of Verilog can use Xilinx products to create hardware accelerators for parts of their software when pure code is either too slow or too power hungry. Calypto is selling Catapult, which used to be Mentor’s offering in the space. Synopsys purchased Synfora, although they don’t seem to be pushing into the space aggressively.

As Cadence said: Driven by increasing IP complexity and the need for rapid retargeting of IP to derivative architectures, the high-level synthesis market segment has grown beyond early adopters toward mainstream adoption, as design teams migrate from hand-coded RTL design to SystemC-based design and verification. The addition of Forte’s synthesis and IP products to the Cadence C-to-Silicon Compiler offering will enable Cadence to further drive a SystemC standard flow for design and multi-language verification.

It is actually a bit of an illusion that all HLS products are pretty much the same. HLS in general has been good for DSP algorithms, where there is a fair bit of flexibility in trading off performance, power, throughput, latency and so on. But some approaches have focused on automatically taking very high-level algorithms (such as recognizing moving objects in video), others have focused on cleanly interfacing to complicated memory restrictions, and others are focused mainly on improving productivity, requiring fewer lines of code to get to the eventual implementation. The reality is that there is often less flexibility in the implementation than meets the eye, but for sure HLS is at a higher level: fewer lines of code, written in C or C++, familiar to software engineers (and often also SystemC, which typically is more of a hardware guy’s world).

Cadence again on how they see Forte as complementary to their existing C-to-Silicon offering: Forte brings high quality of results (QoR) for datapath-centric designs, world-class arithmetic IP, valuable SystemC IP and IP development tools. Forte’s Cynthesizer HLS product features strong support for memory scheduling, especially for highly parallel or pipelined designs. These strengths complement the high QoR for transaction-level modeling, under-the-hood RTL synthesis and incremental ECO support featured by Cadence C-to-Silicon Compiler.

I don’t know enough about the nitty-gritty under-the-hood details of either product, although it is clear that the emphasis of their development has been different. Just how complementary they are remains to be seen (would a customer buy both of them?), and I assume in time the two technologies will be integrated into a single HLS product; they don’t seem different enough to keep as two separate products indefinitely. A lot of the IP that Forte has presumably will play in C-to-Silicon almost immediately.

The terms were not disclosed, but Cadence says it expects the deal to be slightly accretive this year and accretive going forward, which I guess means the price was not extremely high. Accretive means that after the merger accounting, they will make more profit per $ of Forte product revenue than Cadence makes currently across its entire product line. Strictly speaking, accretive means increasing earnings per share (EPS), but it is almost the same thing.
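For readers who want the EPS definition concrete, here is a toy arithmetic sketch. Every number is invented for illustration; these are not Cadence or Forte financials.

```python
# Hypothetical figures only (in $M and millions of shares):
# a deal is "accretive" when post-deal EPS exceeds pre-deal EPS.
def eps(net_income, shares):
    return net_income / shares

before = eps(400.0, 280.0)              # pre-deal: $400M income, 280M shares
after = eps(400.0 + 6.0 - 2.0, 280.0)   # + acquired profit - deal costs

assert after > before                   # the acquisition is accretive
```

If the acquired profit were smaller than the deal's carrying costs, the same arithmetic would flip and the deal would be dilutive instead.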

Cadence press release is here.


More articles by Paul McLellan…


Intel Quark awakening from stasis on a yet-to-be-named planet

by Don Dingee on 02-05-2014 at 3:00 pm

We know the science fiction plot device from its numerous uses: in order to survive a journey of bazillions of miles across galaxies into the unknown future, astronauts are placed into cryogenic stasis. Literally frozen in time, the idea is they exit a lengthy suspension without aging, ready to go to work immediately on revival at their destination.

In our latest real life semiconductor drama, a tiny traveler is getting ready to awake after a journey from the pinnacle of past PC prestige into a very uncertain future on a planet observers haven’t even agreed on a name for. Some say this new terrene should be named Post-PC World, others say Wearables, more say the Internet of Things or the Internet of Everything, a few say Connected Autotopia, and still others are staking a claim with names like Makerland.

Pictured above is the platform ostensibly built to celebrate the old world: Intel Fab 42, on the West Side of Chandler AZ (“East Side, Don D in da house”). When asked recently about the decision to hold up on installing tooling in the massive facility, Will Strauss of Forward Concepts simply said:

Intel didn’t make a mistake, they just bet wrong.

If you observe Fab 42 from the point where our traveler embarked, that is somewhat true. The salad days of personal computing growth appear to be coming to an end, maybe not a precipitous one, but at least a long, slightly downhill decline. I don’t think the bet was wrong, however.

Intel certainly considered the possibility of a PC downturn, but went ahead with Fab 42 for several reasons. First, they had plenty of capital to support the investment. Second, one cannot simply build a $5B fab overnight – there are years of planning, environmental analysis, municipal infrastructure support, material logistics and construction techniques, and other details well before installing tooling. Third, Intel’s aging fabs, such as Fab 17 in Hudson MA and Fab 11 in Rio Rancho NM, are entering retirement or slated to stay at mature process nodes, a reflection of the reality that not every physical facility can be rehabbed to support advanced nodes cost-effectively.

In a world nearly ready for 14nm but not exactly sure what will be economically viable to build with it, Intel decided to prep our traveler for a new mission, just in case. Perhaps not the actual parts, but certainly the idea behind them is being held in cryogenic stasis inside Fab 42, ready to awaken when some form of this future everyone says is coming actually appears in volume.

Our traveler is named Quark, a play on sub-Atomic particles that make up matter. Our story has a slight twist, because our traveler emerges from stasis not just well preserved but different, better than when it entered. Intel recognizes their legacy and future is Intel Architecture, not ARM or anything else – the blowback from one or more wearables shown at CES 2014 with “third party cores” affirms that. To be credible on landing in the new world, Intel has to build and ship Intel Architecture chips.

So, how do you take a still relatively hoggy implementation into a world currently dominated by ARM with mostly really small batteries for power in devices? The answer is a creative one from Intel’s Ireland design team. Grab the star of the PC era, the Pentium P54C at 600nm, reengineer its microarchitecture into a synthesizable core, lose the northbridge-southbridge paradigm and surround it with DDR3, PCIe, and an AMBA interconnect attaching modern peripheral IP, and shrink it to 32nm for starters. The first result: the Intel Quark SoC X1000, clocked at 400 MHz and consuming 2.2W max.

When CEO Brian Krzanich previewed the Intel Edison module at CES 2014, he teased us with a 22nm version of Quark, which back-of-the-envelope math says should be 1.5W or less, and threatened a roadmap to 14nm (but with darned few details yet) which should get them to the near-or-wee-bit-below-1W neighborhood. Yes, after years of denial, Intel may have finally figured out what “low power” actually means. A synthesizable core and AMBA interconnect means they can scale process and spin new variants much more quickly, so Quark is likely to become a family of travelers in short order.
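The back-of-the-envelope math above can be sketched in a few lines. The linear-with-node scaling exponent is my rough assumption for this regime, not Intel data; the only anchored figure is the X1000's published 2.2W max.

```python
# Rough power-scaling estimate: assume max power shrinks about
# linearly with process node in this range (an assumption, not data).
def scaled_power(p_watts, node_from_nm, node_to_nm, exponent=1.0):
    return p_watts * (node_to_nm / node_from_nm) ** exponent

p32 = 2.2                        # Quark X1000 at 32nm, max power
p22 = scaled_power(p32, 32, 22)  # ~1.5W, matching the 22nm estimate
p14 = scaled_power(p32, 32, 14)  # just under 1W

print(round(p22, 2), round(p14, 2))
```

Crude as it is, the sketch lands in the same neighborhood as the article's estimates: about 1.5W at 22nm and a wee bit below 1W at 14nm.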

Quark is a throwback to the lower average selling price (ASP) of the microcontroller franchise Intel created with the 8051 et al, but left for the allure of higher ASPs and margins available with complex microprocessors. As I said recently in a post on IoT semiconductor volumes, Intel has to be real careful here not to trade (relatively speaking) one $230 Core i7 for one $23 Quark in their advanced fabs – that would be absolute disaster. To win this game, they need to drive demand for Quark 10x or 20x what their microprocessor business has been in the long term.

That’s a huge order, literally; a whole lot of selling has to happen between here and there. In the meantime, Intel has the capacity, patience, and marketing might to make a push no microcontroller company can match – with Fab 42 waiting to bring Quark out of stasis like a wrecking ball when orders appear, be it from makers, wearables, the IoT, automotive, or the conglomerate known as post-PC. There are also some details, like mixed signal integration and perhaps a low-end GPU and maybe even a revisit of Stellarton with programmable logic, which Intel may want to reacquaint themselves with for success in taking on the space between the traditional MCU and the now mainstream mobile SoC.

Before getting too excited about planetary naming and gigantic projections and if Fab 42 ever really comes on line, the basic questions are: does Intel have the right idea(s), and can they win developer hearts and minds back from the ARM movement? If the technological equivalent of antidisestablishmentarianism is to succeed with Quark as the hero, it will likely start with makers and crowdfunding, smaller wins with viral potential and community buzz.

A journey of a bazillion miles starts with one step. In our next Quark installment, we’ll look at the maker angle and explore the latest on the chip architecture and Galileo and Edison modules in more detail.

More Articles by Don Dingee…..



Cliosoft Grows Again!

by Daniel Nenni on 02-05-2014 at 10:25 am

Cliosoft was one of the first companies to work with SemiWiki, so they are an integral part of our amazing growth and we are part of theirs. I remember talking to Srinath Anantharaman (Cliosoft CEO) for the first time and discussing the goals of working together. It was simple really: there was disinformation in the market about Cliosoft, and they wanted to set the record straight. The best way to do that, of course, was to talk to customers directly, which is what we did with more than a dozen interviews.

It has been three years now and both Cliosoft and SemiWiki continue to grow. I caught up with Srinath after the New Year for an update and here it is:

(1) How was business for your company in 2013?
2013 was an excellent year for us. We increased the breadth of our support for EDA flows. Now engineers using Agilent’s ADS and Mentor’s Pyxis can get the benefit of integrated SOS design management. We saw over 40% growth in bookings over 2012 and added almost 30 new customers. Several IP vendors adopted our DM solution to manage their IP development including a leading EDA & IP vendor.

(2) Where do you see growth for your company in 2014?

We have a large customer base within a wide variety of product areas and they are constantly providing product feedback and suggestions. We plan to introduce some exciting new improvements to our existing design data management solutions and also new products to improve team collaboration and IP reuse in 2014. We believe both of these will help us grow, both with IP vendors and SoC design teams.

(3) What challenges do you face in hiring?
EDA is a stable and mature industry and does not have the lure of social media startups and block buster IPOs. With so many startups in the Bay area, and marquee companies like Google and Facebook, it is always a challenge for EDA and semiconductor companies to attract the best talent.

(4) What position(s) are you looking to fill today?
We are looking for a hands-on marketing manager/director and a web software developer.

(5) Why should a candidate choose your company?
We are a small and yet very stable and fun company with very competitive compensation and benefits. Employees have a lot of authority and can make significant contributions to the products and company and are not just cogs in a large wheel. We believe in long-term partnerships with our customers, employees and partners. In fact, we have had no turnover at all since the company was founded in 1997.

About ClioSoft:

ClioSoft is the premier developer of hardware configuration management (HCM) solutions. The company’s SOS Design Collaboration platform is built from the ground up to handle the requirements of hardware design flows. The SOS platform provides a sophisticated multi-site development environment that enables global team collaboration, design and IP reuse, and efficient management of design data from concept through tape-out. Custom engineered adaptors seamlessly integrate SOS with leading design flows – Agilent’s Advanced Design System (ADS), Cadence’s Virtuoso® Custom IC, Mentor’s Pyxis Custom IC Design, Synopsys’ Galaxy Custom Designer and Laker™ Custom Design. The Visual Design Diff (VDD) engine enables designers to easily identify changes between two versions of a schematic or layout or the entire design hierarchy below by graphically highlighting the differences directly in the editors.

Also Read

High Quality PHY IPs Require Careful Management of Design Data and Processes

ClioSoft at Arasan

Data Management in Russia


DesignCon 2014 AMS Panel Report

by Daniel Nenni on 02-05-2014 at 10:15 am

DesignCon 2014 was very crowded! I have not seen the attendance numbers, but as the first conference of the new year it was very encouraging. The strength of the fabless semiconductor ecosystem is collaboration, and face-to-face interactions are the most valuable, absolutely.

The session I moderated was on Mixed Signal Design which is of growing importance as mobile and wearable devices continue to drive the semiconductor industry. The panelists included an emerging fabless company CEO, a leading IP company VP, and an EDA product manager. The moderator is the opening act so it is my job to keep it interesting and entertaining, which I did.

As a moderator, I always ask for biographies and a list of questions the speakers would be comfortable answering following their presentation. I never actually ask these questions of course because that would be boring. I generally come up with candid questions that are a bit edgy but focused on content. I also try and get a ringer or two in the audience to keep the presenters on their toes.

For this session my ringer was a good friend and fellow consultant Herb Reiter. Herb started his career as an analog circuit designer and earned a dozen circuit design patents in Germany. He worked for VLSI Technology, ViewLogic, Synopsys, Mephisto Design Automation, and Barcelona Design amongst many other consulting clients. Herb knows AMS design, absolutely.

Zhimin Ding is the CEO of Anitoa Systems, which I wrote about HERE. The device he described was not really a wearable or a swallowable device; it is an ultra-low-light CMOS molecular imager enabling portable medical and scientific instruments. This is the future people, immediate results for medical tests, definitely. Swallowables are next! And if I’m going to swallow a medical diagnostic device it had better be 28nm or smaller, that’s for sure.

Mahesh Tirupattur, SVP of Analog Bits, was the most entertaining. Mahesh and I have been on panels together before and I have not yet been able to unnerve him. Yes, he is that good. Mahesh can go from the IP business model down to the transistor and field any question in between. Analog Bits uses the Tanner EDA tools to develop high speed IP on the bleeding edge process nodes. They have already taped-out FinFETs and are currently collaborating on 10nm with the foundries.

Jeff Miller, PM of Tanner EDA, was my Q&A focus, as he interacts with customers and knows where the bodies are buried, so to speak. Jeff spent his first 5 years with Tanner Research doing design work for medical, military, and aerospace applications. He then switched to the Tanner EDA group and has been helping customers with AMS designs ever since.

I tried to unnerve Jeff by asking him directly where the bodies are buried. I asked him for some customer crash and burn stories. I did get him to pause and give me a funny look but he came through with tales of process variation and mixed signal design that were well worth the price of admission, without mentioning specific customer names of course. Jeff earned his wings here, a true professional through and through.

Bottom line: If you want an interesting panel, have me moderate. If you want to attend an interesting panel look for the ones I moderate. If you are on a panel that I’m moderating don’t bother submitting questions in advance.

More Articles by Daniel Nenni…..



iDRM for Complex Layout Searches and IP Protection!

by Daniel Nenni on 02-05-2014 at 8:00 am

iDRM (integrated design rule manager) from Sage-DA is the world’s first and only design rule compiler. As such, it is used to develop and capture design rules graphically, and can be used by non-programmers to quickly capture very complex, shape-dependent design rules and immediately generate a check for them. The tool can also be used for layout profiling: it detects every instance of a design rule or pattern, measures all its relevant distances and provides complete information on all such instances in the design.

In this article we want to describe a slightly different application or use case for iDRM: searching for specific layout configurations. Using the iDRM GUI, users can quickly draw and capture specific layout structures or configurations they are interested in. These can be quite complex, involve many layers and polygons, and can also include connectivity information.

Let’s take a look at a simple example: let’s say you want to look for the following layout configuration:

a Z-shaped blue-layer polygon that has on each side two parallel blue lines crossing diagonally situated green polygons, where the two inner parts of the green layer are electrically connected.


Fig 1: drawing a specific layout configuration to search for

There are no specific dimensions here, so pattern matching tools cannot be used in this case, but for iDRM this is a very easy and quick task. You simply draw the above configuration in the iDRM GUI, exactly as it is drawn here, and click the FIND button.

iDRM will search your layout database and find every layout instance that matches these criteria. Furthermore, if you also add measurement variables to your drawing, e.g. the spaces and widths defined by A, B, C, D, E (see figure below), iDRM will measure them for you and display the results for every found instance. iDRM can then create tables or histograms summarizing these results, and you can view each such instance using the iDRM layout viewer.

Once the dimensions are found, the user can choose to limit the search by adding specific measurement values or ranges of values as qualifiers to the search.


Fig 2: adding measurements to the layout configuration
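The qualification step described above can be sketched in Python. The instance data and the A/B measurement names echo the figure, but the values and the `qualify` helper are invented for illustration; this is not iDRM's API.

```python
# Hypothetical search hits, each with measured values for A and B.
hits = [
    {"id": 1, "A": 0.10, "B": 0.22},
    {"id": 2, "A": 0.14, "B": 0.18},
    {"id": 3, "A": 0.10, "B": 0.30},
]

def qualify(instances, **ranges):
    """Keep instances whose measurements fall inside every (lo, hi) range."""
    def ok(inst):
        return all(lo <= inst[k] <= hi for k, (lo, hi) in ranges.items())
    return [inst for inst in instances if ok(inst)]

# Limit the search to instances with A in [0.09, 0.12] and B in [0.20, 0.35].
narrow = qualify(hits, A=(0.09, 0.12), B=(0.20, 0.35))
print([h["id"] for h in narrow])  # instances 1 and 3 survive
```

With no ranges supplied, every found instance is kept, which mirrors running the plain FIND before any qualifiers are added.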

Use cases: Circuit, layout, yield, reliability and … IP protection
This functionality is useful for circuit and layout designers who are looking for specifically laid-out circuits, or for yield engineers who suspect certain configurations are sensitive to yield or reliability issues. Another slightly different application is looking for use of protected IP or patented configurations, where specific layout or circuit techniques might be protected by patent and the user wants to find out whether such configurations are used in a design database.

More Articles by Daniel Nenni…..



CMOS Biosensor Breakthrough Enables Portable Diagnostics Solution

by Daniel Nenni on 02-04-2014 at 8:30 am

The panel I moderated at DesignCon last week was both entertaining and enlightening. One of the panelists, Zhimin Ding, is the CEO of an emerging fabless semiconductor company, and here is their story:

In the past 5 to 10 years we have seen vast advancement in medical diagnostics technology. Doctors can now use DNA or antibody analysis to get very precise answers about the type of virus, bacteria or cancerous cells causing our illness. This is great news, as precise diagnostics lead to effective drugs and treatments with minimal side effects.

Unfortunately, much of the world’s population still does not have access to this technology due to the cost and bulkiness of the equipment involved. Anitoa Systems, a startup in Palo Alto, CA, is working to meet this challenge. They are creating a low-cost, field-portable nucleic-acid-test system built upon proprietary CMOS molecular sensor technology.

Ultra low-light CMOS imager for molecular sensing

A majority of molecular diagnostic systems today use optical methods to detect molecular events, based on the principles of fluorescence and chemiluminescence signaling. To meet the sensitivity requirement, engineers have had to resort to bulky and expensive devices such as photomultiplier tubes (PMTs) or cooled CCDs.


Anitoa’s ULS24 ultra-low light CMOS imager chip

Recent innovations in CMOS image sensors have made it possible to achieve much better sensitivity than was previously possible, but further improvements in process, circuit, logic and software are still needed to compete with PMTs and CCDs for molecular sensing. For example, engineers at Anitoa had to reduce the noise of the CMOS image sensor to provide a high signal-to-noise ratio (SNR). The residual noise that cannot be eliminated in the chip, due to the limits of physics, is further filtered out by software algorithms.
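The software side of this noise reduction can be illustrated with a generic sketch. The snippet below is not Anitoa's actual algorithm; it simulates a hypothetical sensor and applies two common imaging techniques, dark-frame subtraction and frame averaging, where averaging N uncorrelated readings reduces the noise by roughly the square root of N.

```python
import random

def average_frames(n_frames, n_pixels=64, dark_level=5.0, noise_sigma=2.0, seed=1):
    """Average n_frames of a simulated sensor readout.

    Uncorrelated read noise shrinks roughly as 1/sqrt(n_frames).
    All parameter values are illustrative, not real sensor figures.
    """
    rng = random.Random(seed)
    acc = [0.0] * n_pixels
    for _ in range(n_frames):
        for i in range(n_pixels):
            # each reading = fixed dark offset + Gaussian read noise
            acc[i] += dark_level + rng.gauss(0.0, noise_sigma)
    avg = [a / n_frames for a in acc]
    # dark-frame subtraction: remove the fixed offset measured with no light
    return [v - dark_level for v in avg]

def rms(values):
    """Root-mean-square of the residual noise across pixels."""
    return (sum(v * v for v in values) / len(values)) ** 0.5

print("1 frame :", rms(average_frames(1)))
print("100 frames:", rms(average_frames(100)))
```

Running it shows the residual RMS noise dropping by roughly a factor of ten when 100 frames are averaged, which is the kind of gain software post-processing can add on top of the chip-level noise reduction.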

With this approach, Anitoa has fabricated a CMOS image sensor built on 0.18um CIS technology from a world-leading specialty foundry. The chip has been shown to achieve 3e-6 lux detection sensitivity, capable of detecting just a few molecules labeled with fluorescent reporter probes. Anitoa is now creating a miniaturized qPCR (quantitative polymerase chain reaction) system using its CMOS imager. The imager is paired with LEDs as the optical excitation source to achieve fluorescence-based molecular sensing in a very compact platform.

qPCR for infectious disease diagnostics
When it comes to detecting very small amounts of pathogenic molecules, such as DNA released from viruses or cancerous cells, it is important that the method be not only sensitive but also specific. This is because the target molecules are immersed in a much larger quantity of surrounding DNA from normal human blood cells.


DNA amplification and detection with qPCR

qPCR achieves both sensitivity and specificity through combined amplification and detection. In the amplification step, qPCR causes the target DNA strands to be selectively replicated millions of times with the help of a special enzyme called polymerase. As the target DNA strands are replicated, they bind with specially designed molecular probes that are labeled with fluorescent material.
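The exponential arithmetic behind qPCR is easy to sketch. Assuming an idealized reaction (the efficiency and threshold values below are illustrative, not from the article), each cycle multiplies the number of target copies, and the cycle at which fluorescence crosses a detection threshold (commonly called Ct) reveals how much target DNA was initially present:

```python
def qpcr_copies(initial_copies, cycles, efficiency=0.95):
    # Each cycle multiplies the target by (1 + efficiency); a perfect
    # reaction (efficiency = 1.0) doubles the DNA every cycle.
    return initial_copies * (1.0 + efficiency) ** cycles

def threshold_cycle(initial_copies, threshold=1e9, efficiency=0.95):
    # Ct: first cycle at which the copy number (and hence the fluorescence
    # the imager sees) crosses the detection threshold. Fewer starting
    # copies means more cycles are needed, so a larger Ct.
    cycle, copies = 0, float(initial_copies)
    while copies < threshold:
        copies *= 1.0 + efficiency
        cycle += 1
    return cycle

print(qpcr_copies(1, 10, efficiency=1.0))   # ideal doubling: 2**10 copies
print(threshold_cycle(100), threshold_cycle(10000))
```

This inverse relationship between starting quantity and Ct is what makes the technique quantitative: reading off the crossing cycle gives an estimate of the initial pathogen load.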

The high sensitivity and SNR of Anitoa's CMOS imager mean the instrument is able to work with small reaction volumes confined in a microfluidic structure. A small reaction volume means a faster reaction and a faster time to results.

Future trends
Today molecular diagnostics are performed in centralized labs located in big cities. Patient samples are collected at hospitals, sealed in ice boxes and loaded onto trucks that deliver them to these labs. Transportation, material handling and batching mean that sample-to-result times run to days or weeks, while for many critical infectious diseases, such as H1N9, the optimal symptom-to-treatment window is less than ten hours.

Anitoa envisions that in the near future small, portable molecular diagnostic devices will be deployed at the point of care, enabling rapid on-site diagnosis of infectious diseases so that doctors can respond quickly with life-saving drugs and treatments. These devices will be internet-enabled, and the diagnostic results will be transmitted to a central database in the cloud, allowing doctors, drug companies and policy makers to make strategic decisions on global epidemic control.

By comparison, electrochemical molecular sensors have shown promise but require sophisticated surface chemistry and suffer from stability and specificity problems.

More Articles by Daniel Nenni…..



The Great Wall of TSMC

The Great Wall of TSMC
by Paul McLellan on 02-03-2014 at 5:27 pm

TSMC doesn't just sell wafers, it sells trust. It's the Colgate Ring of Confidence for fabless customers. This focus on trust started at the very beginning, when Morris Chang founded TSMC over 25 years ago, and trust remains an essential part of its business today.

When TSMC started, the big thing it brought was that it was a pure-play foundry: it had no product lines of its own. Foundry services had existed before, but they amounted to semiconductor companies selling excess capacity to each other. This meant that the buyer of wafers was always vulnerable to the seller being successful, needing that capacity itself and throwing the buyer out. And that was without even considering that companies might be buying wafers from a competitor, sending it the masks of their crown jewels and trusting that nobody would try to reverse engineer anything.

So when TSMC started, it brought the confidence that it wasn't going to suddenly stop supplying wafers because it needed the capacity for itself, and that it wasn't competing with its customers in the same end markets. That is not to say that there have never been capacity issues: TSMC cannot afford to build excess capacity "just in case" any more than anyone else, so when business takes off better than forecast or some other event happens, wafers can end up on allocation, just as has always been the case in the inherently cyclical semiconductor business.

Not competing with its customers remains the case today (as, to be fair, it does for GlobalFoundries, SMIC, Jazz and other pure-play foundries). But it is not the case for Samsung, which is in the slightly bizarre situation of having Apple as its largest foundry customer while competing with it as the volume leader in the mobile market (and never mind the lawsuits). Samsung is a large, diversified conglomerate, effectively a collection of different companies all using the Samsung brand name. Samsung also makes all the retina displays for the iPhone, yet doesn't use them in its own products, and it is a huge memory supplier too. Apple is rumored to be moving from Samsung to TSMC for its next application processor (presumably to be called the A8).

Intel has made a lot of noise about entering the foundry business, but the only significant customer announced so far is Altera, and there are even rumors that Altera is thinking of going back to TSMC. A company like Altera using Intel for its high-end FPGA products might need 1,000 wafers a month when a fab has a capacity of 50-100K wafers a month. It won't "fill the fab"; for that, Intel needs to win an Apple, a Qualcomm or an nVidia. But at least Altera can be confident that no matter how successful Intel's other businesses are, at those volumes it is unlikely to be squeezed out: the amount of capacity it needs is in the noise.

The other area foundries have had to invest in is creating an ecosystem around themselves of manufacturing equipment and material suppliers, IP and EDA companies. This Grand Alliance has made a huge investment in R&D, in aggregate more than any single IDM, and as a result it has produced more innovation in higher performance, lower power and lower cost than any single IDM.

At a modern process node, deep cooperation is required. It is no longer possible to do everything serially: get the process ready, get the tools working on a stable process, use the tools to build the IP, start customer designs using the IP and the tools, ramp to volume. Everything has to happen almost simultaneously. This requires an even greater sense of trust among everyone involved, and the fact that changing PDKs means changing IP, which means redoing designs, inevitably means an increased investment too.

So TSMC has a competitive edge, the great wall of TSMC to keep out the barbarian hordes:

  • it sells confidence and trust, not just wafers
  • it does not compete with its customers
  • it has orchestrated a grand alliance to create an ecosystem around its factories that has made a bigger R&D investment than any single IDM.


More articles by Paul McLellan…


Dual Advantage of Intelligent Power Integrity Analysis

Dual Advantage of Intelligent Power Integrity Analysis
by Pawan Fangaria on 02-03-2014 at 9:30 am

Often it is considered safer to be pessimistic when estimating IR-drop to maintain the power integrity of semiconductor designs; however, that leads to the use of extra buffering and routing resources which may not be necessary. In modern high-speed, high-density SoCs, with multiple blocks, memories, analog IPs with different functionalities and IO cells integrated together on the same chip, silicon area is costly real estate and must be used carefully. While keeping electromagnetic interference within acceptable limits, power and signal integrity must be addressed with accuracy for the best possible performance and reliability.

So, how do we estimate the actual IR-drop in order to design a right-sized power delivery network (PDN) and address these concerns? I was delighted to see this research paper presented jointly by STMicroelectronics and Apache at the last PATMOS (International Workshop on Power and Timing Modeling, Optimization and Simulation). ST evaluated the impact of the substrate on IR-drop reduction by extending standard cell electrical characterization with the substrate characterization and noise analysis capabilities of Apache's tools RedHawk and CSE (Chip Substrate Extension), and applied this methodology to the design of its complex microcontrollers for automotive applications, which have severe area constraints. The methodology for accounting for substrate parasitics in the design of the PDN has been successfully implemented in ST's digital design flow.

The main sources of current injection into the substrate by the digital core are simultaneous switching noise (i.e. power supply fluctuation and ground bounce) and capacitive coupling of transistor sources and drains. To model the noise a cell injects into the substrate, the transistor bulk terminals are separated from the P/G network, and a resistance extracted from the substrate technology parameters, representing the path between the well P/G contacts and the transistor bulk, is inserted to probe the noise injected into the substrate network by each transistor. The cell netlist has a dedicated pin to bias the transistor bulk, which can be used to probe the substrate currents. Well resistance is modeled from the transistor body to the well contacts, and a series resistor is inserted between the probing pin and the substrate biasing contact of the standard cell.

The extended cell macro model includes well resistances and two additional current sources, Ipwell and Inwell, which represent the substrate current injection as shown in the figure. The substrate passive network parasitics are represented by a lumped RC mesh.

The IR-drop simulation based on this model proceeds by characterizing the library of standard cells and IPs to obtain their current profiles, estimating the power consumption, extracting the RC mesh for the top-level PDN and the substrate, and performing power integrity analysis.

The power integrity analysis performed with this approach on ST's leading-edge industrial microcontroller, STXX, with embedded Non-Volatile Memory (eNVM), a digital core, analog blocks and IO cells integrated on the same die, shows interesting results. The static IR-drop analysis shows a reduction in voltage drop when the substrate contribution is taken into account.
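The direction of this static result can be seen from a back-of-the-envelope model. This is a deliberate simplification, not the RedHawk/CSE flow (the real analysis solves the full extracted RC mesh): treating the substrate as an extra resistive return path in parallel with the P/G grid lowers the effective resistance seen by the cell current, and hence the computed IR drop.

```python
def parallel(r1, r2):
    # Two resistive paths in parallel: always less than either alone.
    return r1 * r2 / (r1 + r2)

def static_ir_drop(i_cell, r_grid, r_substrate=None):
    """First-order static IR-drop estimate for one cell.

    Without the substrate, all cell current returns through the P/G grid
    resistance. Modeling the substrate adds a parallel return path,
    lowering the effective resistance and thus the computed drop.
    Values passed in are illustrative, not from the paper.
    """
    r_eff = r_grid if r_substrate is None else parallel(r_grid, r_substrate)
    return i_cell * r_eff

# 10 mA cell current, 2-ohm grid, hypothetical 8-ohm substrate path
print(static_ir_drop(0.01, 2.0))        # grid only
print(static_ir_drop(0.01, 2.0, 8.0))   # grid + substrate: smaller drop
```

Even with a substrate path several times more resistive than the grid, the parallel combination shaves a meaningful fraction off the pessimistic grid-only estimate, which is the effect the static analysis in the paper quantifies.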

In the dynamic IR-drop analysis, capacitive coupling between the VDD and GND networks due to the decoupling capacitances (intrinsic, extrinsic and substrate parasitic capacitances) is also taken into account. The reduction in dynamic voltage drop (DVD) due to the substrate is more significant than in the static case, because the substrate contributes significantly to the overall on-chip decoupling capacitance.
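A similar first-order estimate shows why extra decoupling capacitance shrinks the dynamic droop. The charge-sharing relation ΔV ≈ I·Δt/C means any capacitance the substrate contributes directly reduces the droop; the sketch below uses illustrative values, not figures from the paper.

```python
def dynamic_droop(i_switch, dt, c_intrinsic, c_substrate=0.0):
    """First-order dynamic voltage droop: ΔV ≈ I·Δt / C.

    The switching charge i_switch * dt is supplied by the on-chip
    decoupling capacitance; the more capacitance (including any the
    substrate adds), the smaller the droop.
    """
    return i_switch * dt / (c_intrinsic + c_substrate)

# 1 A switching event lasting 1 ns against 1 nF of intrinsic decap
print(dynamic_droop(1.0, 1e-9, 1e-9))          # intrinsic decap only
print(dynamic_droop(1.0, 1e-9, 1e-9, 1e-9))    # substrate doubles the decap
</n```

Doubling the available capacitance halves the droop in this model, which is consistent with the paper's observation that the substrate's contribution matters more in the dynamic case than in the static one.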

Hence, by taking the substrate into account, a more accurate and less pessimistic IR-drop analysis is performed, one that steers designers away from adding unnecessary extra routing resources and extrinsic decaps on chip while still guaranteeing the power integrity targets. Through the use of RedHawk and CSE, this method provides more accurate power integrity estimates as well as saving costly area on the chip. The icing on the cake is that this flow for substrate noise analysis can also be used to explore different technologies, such as highly doped vs. lightly doped substrates, with or without a deep n-well, to improve power integrity.

This was an interesting paper to study, revealing that the substrate can be a blessing in disguise, as opposed to its usual reputation for degrading the noise integrity of analog and IO cells. However, to make the best use of it, the substrate parasitics must be carefully modeled at the top-level PDN. Interested designers can go through the actual paper for the many details and references that explain the physics behind these models and the technology.


More Articles by Pawan Fangaria…..
