ARM and frog Team up with UNICEF to Foster Creation of Wearables for the Developing World
by Tom Simon on 06-11-2015 at 5:00 pm

When the term wearables is mentioned, most people’s first thoughts go to devices like the Apple Watch, Fitbit Flex, or Nike FuelBand. Wearables such as these solve first-world problems like how much exercise am I getting, or what is my heart rate. The developed world drives the development of new technology in most cases, and wearables are no exception. Nevertheless, we see many instances where our toys and gadgets become important problem solvers for developing countries.

There are many examples of high-tech devices solving developing-world problems. Cell phones have brought communication to remote and hard-to-wire locations. LEDs, lithium batteries, and solar panels have brought light to places that previously had to rely on flame-based light sources. Incidentally, my son, Kevin Simon, is working at MIT on developing highly efficient water pumps for farmers in developing nations, optimized for solar and other small-scale power sources.

Many of us are familiar with UNICEF, the wing of the United Nations that is solely focused on improving the welfare of children in the developing world. UNICEF relentlessly focuses on seven pillars: Health, Education, HIV/AIDS, Water, Sanitation/Hygiene, Child Protection, and Social Inclusion. UNICEF is partnering with ARM and the design firm frog to explore how wearables can effect dramatic change for children in the developing world. This initiative is called Wearables for Good. They have set up a website and have published collateral material.

At the heart of this effort is a challenge open to anyone who has an idea, or wants to build something, to use wearable and sensor technology that serves people in resource-constrained environments. From now until August 4th anyone is welcome to apply to participate. The combined resources of UNICEF, ARM and frog will be available to coach and advise applicants. After a project refinement phase, final judging will take place from October 2nd to November 2nd, when the winners will be announced. There is an excellent handbook available on their website for applicants to learn about considerations and guidelines.

The handbook talks in more detail about what is meant by wearable technology. Despite our preconceptions, they have expanded the idea to include mobile technology that is not only on a wrist or ankle. Wearables can be devices that are close at hand, worn, or even ingested. They need to do one or more of four things: Alert/Respond, Diagnose/Treat/Refer, Change Behavior, and/or Collect/Analyze Data. At the same time these wearables must live within a design approach that includes these characteristics: cost-effective, low-power, rugged & durable, and scalable.

Designing things for the developing world is tricky. They have to work in an environment that must be fully understood. There are cultural issues, infrastructure limitations, and all sorts of pitfalls, such as limited resources for repair and deployment. Users may not have skills that we take for granted. There are even political barriers, such as privacy concerns. The handbook has a list of use cases that are suggested as possible areas of focus. One of them suggests alerting people in dense slums when there is a fire. Another looks for a way to modify people’s behavior so they wash their hands more frequently, thus reducing disease. One of the most compelling was helping to document births so people have official ‘identities.’ Without birth records, it is impossible for individuals to get aid or education, or even to own property. They suggest that a portable/wearable device could be used in remote villages to record births and convey them to official agencies, alleviating this problem.

ARM is offering its development tools and mentoring from their wearable tech experts to help bring projects to fruition. The design firm frog is making available its design and product strategy expertise to the winners. frog is the renowned company that had a hand in the distinctive design of many of Apple’s products. Finally UNICEF has a network of innovation labs and many partners that can provide valuable insight into the real world needs to the ultimate users.

Probably the best ideas will not come from someone who grew up in Palo Alto or New York City, but rather from someone who has encountered the environments where the final projects are destined to operate. The invitation to the challenge is casting a big net and there will be entries from all over the world. It’s exciting to see opportunities for applying technology to address pressing problems around the world. The value here will be a lot more than a higher stock price or better revenues: people’s lives will be improved in significant ways. When you read about the preventable infant mortality rate or the numbers of preventable infections in developing nations, it is clear that this could be a truly meaningful effort.


Application Specific Integrated Comedy
by Paul McLellan on 06-11-2015 at 7:00 am

Tuesday night I got to meet an old colleague. OK, this is DAC, that is hardly a story. I was at the Synopsys media dinner and John Koeter handed out free wristbands to the Stars of IP party taking place later that evening. Remember, Synopsys is #3 in IP overall and #1 in interface IP. Talking of which, earlier in the day I was at the Synopsys custom IC lunch, which I will cover later; it had an especially interesting presentation from the Synopsys IP group who, not surprisingly, are big users of the custom IC product line.

The party was the 3rd Stars of IP party organized by IPextreme (although now most of the IP companies participate too). I had lunch with Warren Savage a few years ago and he told me the genesis of what became IPextreme. At the time he was at Synopsys. Also at the time, ARM supplied their microprocessor as hard IP, a physical process-dependent layout. Actually there was really only one core back then, the ARM7TDMI, which became the standard in mobile phones and set ARM on the course to where it is today. As an experiment Warren and his team did a synthesizable version of the ARM7TDMI, after all they were at Synopsys. ARM were skeptical it could be done. What nobody, even Warren, really expected was that the synthesizable core would turn out to be smaller than the hard core. It wasn’t an overnight change but it completely altered how cores would be delivered. Except for parts of the PHYs for interfaces, and some other analog areas, everything would be synthesizable, which is where we are today.

Performing at the Stars of IP party was Don McMillan. I first met Don when we were both at VLSI Technology, where he was an IC designer. Last night I talked to Don about his early days in stand-up and he told me he was at a comedy club on open-mike night. “You mean I can just go up there and perform?” So he tried it. As he says, there is probably no business where you get quite such instant feedback. In 1991 he won the 16th San Francisco stand-up competition and then was the overall winner on Star Search the following year. By then he’d given up being a chip designer and was doing stand-up full time. Although he has performed on The Tonight Show and done commercials for Budweiser and all sorts of other things, a lot of his bread-and-butter comes from being able to do comedy in technical environments where he actually is as much of a geek as his audience. Engineer jokes are just a lot funnier coming from an engineer.

Another person working the same idea is Scott Meltzer, whom I hired for his first DAC when I was at VaST, and who for many years has been seen on the Apache booth doing a straitjacket escape on a unicycle, among other things. He has a degree in computer science from Berkeley. At VaST he ran the demos himself since he already knew Linux.

When I was at Compass we would hire Don to be our presenter on the booth, although the most interesting performances were always the last couple on the last day when we told him we couldn’t face hearing his routine about our products one more time and we just unleashed him to do whatever he wanted. It was always the biggest crowd of the entire show.

His first time working our booth we had a big deep-submicron (DSM) theme going, so he came up with the first deep-submicron joke. And I still remember it: “A neutron walks into a bar and asks, ‘How much for a beer?’”
“For you, no charge.”

OK, I’ll stick to blogging. Don’t forget to tip your server.

Don’s website TechnicallyFunny is here.


Why silicon photonics and 2.5D design go together
by Beth Martin on 06-10-2015 at 4:30 pm

Silicon photonics is one of the upstart “More than Moore” technologies designed to enable the next generation of high-performance devices. Photonic design is the art of moving and transforming signals in the form of photons, allowing the message to literally travel at the speed of light, and bringing the promise of significant performance gains. I’m starting to see evidence that silicon photonics is moving from the research phase into development. The adoption of silicon photonics will be driven by the demands of data center and high-performance computing.

Another “More than Moore” technology is 3D IC design, in which a design is partitioned into smaller pieces that can be stacked, resulting in smaller form factors and thus allowing more functionality to be packed into tablets and other hand-held devices. Aside from some novel design concepts (such as the memory cube), you will not typically see a performance advantage with 3D designs. In fact, it may degrade performance by introducing longer paths between devices on separate dies.

“As luck would have it,” says John Ferguson, a product marketing manager in the Calibre group at Mentor, “silicon surrounded by silicon-oxide makes an almost ideal waveguide material, meaning the optical signals can traverse with very little degradation.” This means that photonics can be designed and manufactured using the same fabs already in place for traditional IC design. Indeed, Intel demonstrated this last year when they introduced a 100 Gigabit silicon photonic product (prematurely, as it turns out).

But, of course, there is always a catch. Photonics design is purely passive. If you want to change an optical signal, you must induce that change using either heat or a magnetic field (or both) in the vicinity of the waveguide carrying the signal, says Ferguson. So, just create a design with some photonics components and some electronics components, pass timing-critical data as optical photons and use tried and tested electronics elsewhere, right? Well, maybe not, says Ferguson. If the electrical components are complex, you may need to target those expensive CMOS processes again. Unlike the CMOS transistor, however, there is little benefit in porting a silicon photonic waveguide to an advanced node. That is because the optical behavior of such components is set by the total length and width of the waveguide, along with some other concerns, like bend structures or proximity to other components. The widths for waveguides in silicon are very large (100-200 nm) compared to today’s CMOS devices. So, even if you go to a new process node, the photonics section stays the same size. Also, Ferguson says, putting photonics on the same die as electronics uses up a lot of expensive silicon real estate for the relatively large photonics structures.
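A toy calculation makes Ferguson’s point concrete. All the numbers below are invented for illustration: assume a photonics block whose area is fixed and an electronics block that shrinks with each node, and watch the photonics share of the die grow:

```python
# Illustrative arithmetic only -- the areas and shrink factors below are
# assumptions, not measured values. The point: electronics scales with the
# process node, photonics does not, so photonics consumes an ever larger
# share of an expensive advanced-node die.
photonics_mm2 = 10.0       # assumed photonics area, roughly node-independent
electronics_65nm = 40.0    # assumed electronics area at 65nm, in mm^2

# (node, assumed area shrink factor relative to 65nm)
for node, shrink in [("65nm", 1.0), ("28nm", 0.2), ("14nm", 0.05)]:
    elec = electronics_65nm * shrink
    share = photonics_mm2 / (photonics_mm2 + elec)
    print(f"{node}: photonics occupies {share:.0%} of the die")
```

That growing share is exactly the economics that push designers to drop the photonics into a cheap mature-node interposer and stack the advanced-node electronics die on top.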

But you could combine silicon photonic processing with 2.5D processing: partition the optical components onto a less expensive process, such as those typically targeted for interposer use (say, 65nm or 90nm), while targeting the more critical electrical components to a die on a more advanced process node, maybe 28nm. This is where a lot of photonics research is currently targeted. Almost all silicon photonics designers are putting photonics into the interposer itself, which is usually at a more mature process node like 65nm or 90nm, and then connecting to the electronics as a die on top. In fact, Cisco showed a silicon photonics/2.5D prototype at a 2013 DesignCon keynote. That technology came from Cisco’s 2012 acquisition of optical interconnect company Lightwire.

There are lots of questions to be answered regarding silicon photonics and 2.5D and 3D design. For example, in some designs, like the memory cube, you can actually gain performance by connecting through a TSV, but it requires careful die-to-die placement such that the critical devices on either side are close to the TSV, says Ferguson. In such a case, you can have an electrically closer signal. Usually this means stacking directly on top of the signals in question. Unfortunately, outside of the memory world, so far this approach typically fails due to thermal impact. An active die with lots of switching can produce a lot of heat. Setting it on top of another active die can cause problems for the neighboring die’s devices. This problem is even more concerning for photonics design because heat will change the behavior of the optical signals through the waveguide. Fortunately, we’ve already learned to stack them like a staircase, Ferguson says, where the interposer juts out from the die and the photonics are inserted in the protruding area. Doing this in the less expensive interposer is far less costly than folding it all into the same expensive advanced-node die.

What impact will this have on the photonics components? The photonics will require a laser source, but because we’ve yet to produce a usable silicon laser in a standard CMOS process, it will need to sit off-die. What impact will this have on the form factor? What impact will the heat generated by the laser have on the nearby electronic components?

Is there a way around the limitations imposed by TSVs? Oh, say, with photonics? Because light signals can pass through each other essentially unimpeded, photonics also brings the theoretical ability to eliminate the need for vias, dramatically reducing the power required to pass a signal. There is some high-level research in this area, but nothing practical yet.

There is a lot of work to do, and silicon photonics is a dynamic industry. A major hint that this is big-time interesting is that the government set up a National Photonics Initiative in 2013 and seeded it with $200 million. In the private sector, start-up Luxtera uses CMOS photonics to get around limitations of electrical chip I/O bandwidth. Another silicon photonics start-up, Kotura, was swallowed up by Mellanox in 2013. ST Microelectronics and Infinera are also active in the field. The global silicon photonics market is projected to grow from about $25 million in 2013 to between $400 and $500 million by 2020. And while it is exciting to think of what silicon photonics will do for our data centers, it also promises equally exciting advances in powerful and compact chemical and biological sensors.


DAC: Self-driving Cars
by Paul McLellan on 06-10-2015 at 7:00 am

The keynote on Tuesday at DAC was by Jeffrey Owens of Delphi. For those of you that don’t know, Delphi used to be the part of General Motors dealing with electronics spun out from GM as a separate company in 1999.

Jeffrey pointed out that a modern automobile is the most complex device any of us own, with over 100M lines of code (LOC) compared to 70M for Facebook and 12M for Android. A lot of his presentation was about general trends in automotive electronics, but the most interesting part came towards the end, when he talked about the Audi/Delphi self-driving car that recently drove from San Francisco to New York across the entire US. They learned a lot; for example, the cameras had problems at sunrise when the sun was very low in the sky (I have the same problem driving down 101, when the cameras in my head known as eyes always get blinded at the curve in Redwood City). Road markings differ a lot from state to state, but it is necessary to understand them to keep in the correct lanes. The radar they used works fine in tunnels and in the biggest nightmare: crossing old metal bridges with reflecting surfaces everywhere (think of the old cantilever section of the Bay Bridge, for example).

Google’s self-driving cars get a lot of press but Jeffrey pointed out something that they had done with Audi which was to make much of the electronics vanish into the vehicle. There is no Lidar on the roof, in particular. Delphi assumes that would be completely unacceptable from an aesthetic point of view to any OEM (that’s what vehicle manufacturers such as Audi are called in the automotive world, companies like Delphi being known as Tier 1 suppliers). If the mythical soccer mom can veto a car because there are not enough cup-holders for the back seat then she can probably veto a car for having an ugly spinning thing on the roof.

Jeffrey said that the car did 99% of the driving autonomously. As the Wired magazine article on the trip says: “Nine days after leaving San Francisco, a blue car packed with tech from a company you’ve probably never heard of rolled into New York City after crossing 15 states and 3,400 miles to make history. The car did 99 percent of the driving on its own, yielding to the carbon-based life form behind the wheel only when it was time to leave the highway and hit city streets.”

So the car did all the highway driving autonomously, but it couldn’t drive on city streets since they hadn’t gathered the detailed mapping information that the Google car, for example, uses to handle towns and neighborhoods.

On the show floor I talked to Matt Lewis, also of Delphi, who was one of the engineers that had worked on the cross-country Audi. The picture at the top is the car on the DAC show floor, which is the actual vehicle that drove across the country. As Matt said, “we cleaned it up a bit, it had a lot of bugs on by the end.”

Jeffrey had challenged us to find the sensors since they are not that obvious. Indeed, compared to the Google car they are well camouflaged. The radar is out of sight behind plastic panels (plastic is transparent to radar). In the centre of the front grille there is Lidar (laser radar). There are cameras behind the mirror at the top of the windshield, where they have a good view, and rear-facing cameras too. Matt pointed out that the radar is standard Delphi radar already shipping in millions of units, as are most of the cameras. So while this is obviously not a production car, it is close to a prototype.

See also the Wired magazine article This is Big: a Robo-car Just Drove Across the Country


DAC Keynote: Moore’s Law Isn’t Dead
by Paul McLellan on 06-10-2015 at 5:00 am

There were two keynotes at DAC this morning. I think the official designation of the first one was a “visionary talk” and the main difference was that it was only 15 minutes long. Vivek Singh, an Intel fellow, talked about Moore’s Law at 50: No End in Sight.

He started with a graph showing transistor speed versus leakage, which is as good a measure as any of how good a transistor is. When it is on we want it to switch fast and have good drive; when it is off we want it to consume no power at all.

For now things are on track. Haswell has 960M transistors in 22nm, while Broadwell has 1.3B transistors in 14nm, an increase of 35%. From 22nm to 14nm the metal pitch decreased from 80nm to 52nm, a 0.65x shrink, which is slightly ahead of Moore’s Law.
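As a quick sanity check on those figures (the pitches are the ones quoted in the talk; the 1/√2 ideal shrink is the textbook Moore’s Law assumption, since halving area means shrinking each linear dimension by √2):

```python
import math

# Metal pitch figures quoted in the talk.
pitch_22nm = 80.0   # nm
pitch_14nm = 52.0   # nm

shrink = pitch_14nm / pitch_22nm   # actual linear shrink: 0.65x
ideal = 1 / math.sqrt(2)           # classical per-node target: ~0.707x

# A smaller shrink factor means MORE aggressive scaling, hence
# "slightly ahead of Moore's Law".
print(f"actual: {shrink:.2f}x, ideal: {ideal:.2f}x")
print(f"implied area scaling: {shrink ** 2:.2f}x (ideal would be 0.50x)")
```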

The end of Moore’s Law has been predicted for years. Vivek showed a few of the predictions that have been made over the years (decades, even):

  • optical lithography will reach its limits in the range of 0.75um
  • minimum geometries will saturate around 0.5um
  • X-ray lithography will be needed below 1um
  • minimum gate-oxide thickness is limited to about 2nm
  • copper interconnect will never work
  • scaling will end in about 10 years

As he pointed out, things look no different in 2015 (although the precise details of what people worry about have changed, obviously).


For me the big question has always been whether the cost per transistor reduces. After all, Moore’s Law was always an economic law, namely that the cost per transistor is minimized in a given process technology at a certain number of transistors, and that number seemed to increase by a factor of 2 every 2 years (or 18 months depending on which version of Moore’s Law you look at). Intel have always claimed that their cost per transistor continues to decrease in a way that they feel is not happening for their competitors.

Cynics might point out that since Intel manufactures at such high margins, it has never been under pressure to have a competitive wafer price, and so it can keep cost per transistor decreasing in a way that is not available (or less available) to foundries that have to manufacture chips that are competitive at much lower margins, for the mobile industry in particular. It is as if Rolls-Royce pointed out that it could make cheaper cars if it had to, in a way that Ford, say, would find hard.


Vivek’s speciality is lithography. A simplified view of a stepper has a light source, a mask, some focusing optics, and a wafer. You probably know that we have not been able to get a light source with a smaller wavelength than 193nm. EUV is the big hope but it is always a few more years out. So we have had to use increasingly complicated optical proximity correction (OPC) to ensure that what we put on the mask produces what we want on the wafer: adding little corners to stop rounding, adding extra bars to stop necking, and so on. But even that has reached its limit, and going forward we need to use inverse lithography.

Inverse lithography means starting from the geometry that we want to print and working out what patterns we need on the mask, and what light source we need, to get it. This is computationally very expensive but Intel, handily, is in the business of decreasing the cost of computation (I believe Moore’s Law might apply here!).

So for now the innovation continues. 14nm is in full production, 10nm is on track, and 7nm is in research. Vivek is confident Moore’s Law will continue. For the most leading-edge designs, for the foreseeable future, I’m sure he is right. Whether the cost of computation will continue to fall fast remains to be seen. After all, twice as many cores is roughly twice as many transistors, and if the cost per transistor remains flat(tish) then the cost doesn’t come down as you add cores every couple of years.
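A one-minute arithmetic sketch of that last point, with made-up numbers:

```python
# Made-up numbers; only the relationship matters. If cost per transistor
# stops falling, adding cores raises chip cost proportionally.
TRANSISTORS_PER_CORE = 500e6   # assumed transistors in one core
COST_PER_TRANSISTOR = 1e-9     # assumed $/transistor, held flat

def chip_cost(cores, cost_per_t=COST_PER_TRANSISTOR):
    return cores * TRANSISTORS_PER_CORE * cost_per_t

# With flat cost/transistor, doubling cores doubles the chip cost:
for cores in (2, 4, 8):
    print(f"{cores} cores, flat cost/transistor: ${chip_cost(cores):.2f}")

# Only if cost per transistor halves alongside the core doubling does the
# cost of a given amount of compute actually hold steady:
print(f"4 cores, halved cost/transistor: ${chip_cost(4, COST_PER_TRANSISTOR / 2):.2f}")
```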


Predictions about EDA and IP at #52DAC
by Daniel Payne on 06-10-2015 at 4:00 am

On Sunday night at DAC this week I sat in the front row and listened to Gary Smith give his predictions about EDA and IP as an industry. His financial forecast was a $6.8B industry in 2015, growing to $9B in 2019. An ideal company for Wall Street to invest in would have slow and steady growth. If you add semiconductor IP into the forecast then the size would be $12.147B in 2019, where they are tracking 10 categories of IP.

For new growth markets, he sees Mentor expanding into the automotive market, while Synopsys invests in the software market. The embedded SW market is $2.7B in size, so Mentor is best positioned to take advantage of it.

The mechanical design market started out larger than EDA; however, EDA has now caught up to mechanical in size. The dominant mechanical companies are PTC and Siemens.

Synopsys has acquired some optical technology to create a dominant niche strategy. Another new market for Synopsys has been application development software (Coverity).

Potential adjacent markets for automation include chemical design and Biomedical. This started to remind me of how ANSYS has already been offering multi-physics tools as a more holistic product design approach.

Companies in big data like Google have lured EDA developers away, although ANSYS just acquired Gear in that space.

Will mechanical companies try to acquire an EDA company? Does EDA have a choice?

Q: Is the IP market in 2015 and 2019 still $3B?
A: Yes, it’s basically flattening. IP used to cost something with logic synthesis; now it’s free. The royalty model is weakening; up-front charges still happen, but it is a less profitable market segment. The growth market is platform-based IP, which has both royalties and up-front fees.

DesignWare-type IP is becoming a commodity (SNPS has been #3 in IP for quite a while); less revenue is expected in the future, causing the flattening of the IP market. So both SNPS and CDNS have to go upstream, as with ARM IP.

Q: What is platform-based IP?

A: Platform-based IP is mostly what ARM does (processor plus memory, 4M gates). As DesignWare ages, its value goes down.

Mentor has been able to figure out how to make SW IP for automotive standards.

SDA (System Design Automation) tools need SW IP based on standards.

Q: Is MEMS included in your EDA forecast?
A: Yes, MEMS design tools are included in the EDA market (modified PCB tools).

Q: Are there any growth areas for IP?

A: Yes, analog and RF IP will grow, but overall IP has flattened.


Google Smart Lens: IC Design and Beyond
by Paul McLellan on 06-09-2015 at 1:00 am

Today’s DAC keynote was by Brian Otis of Google about their project, working with Novartis, to build disposable contact lenses that perform continuous glucose monitoring.

Why is this important? There are 382M people around the world with diabetes who typically have to check their blood glucose levels four times a day. This involves pricking the skin and then using a monitor to analyze a drop of blood. There are also continuous glucose monitors (CGM) which also require a needle under the skin connected to a monitoring device. Neither approach is all that pleasant.


EDA Acquisition to Drive SoC realization
by Pawan Fangaria on 06-08-2015 at 8:00 pm

A week ago I was reading an article written by Daniel Nenni in which he emphasised that semiconductor acquisitions fuel innovation. We will see that across a larger space, not only in semiconductor and FPGA manufacturing companies (e.g. Intel and Altera) but also in the whole semiconductor ecosystem. Seen from a technical perspective, an acquisition will take place whenever there is value in a company that can produce a larger sum by merging with its acquirer. Although I am not going into the financial aspect here, I would like to mention that financial stress also reduces with the merger of innovative companies.

EDA is an essential enabler of today’s large, highly complex SoC realization. As we see it today, an SoC description has to start from RTL or even from a higher level of abstraction. The design has to converge into the most optimized PPA (Power, Performance and Area) layout in the minimum possible time. So, definitely, large-scale innovation is required in the EDA space too.

Last week I wrote about an innovative approach taken by Atrenta for creating a lint-clean RTL design that can provide very fast design closure. This week, at the start of DAC 2015, we are hearing about an important acquisition in the EDA space: Synopsys, the leader in EDA, is acquiring Atrenta, a true RTL implementation, optimization and verification company. Last month I wrote about Synopsys’ ‘Silicon to Software’ solution for semiconductor system design, and I see that strategy being implemented quite fast. I have been following Atrenta for some time and I see its SpyGlass platform providing a complete solution at the RTL level. In my view, it will complement Synopsys’ strength in design and verification platforms quite well.

Atrenta’s GenSys provides a unique solution for RTL restructuring and design optimization at the RTL stage, and Atrenta’s formal verification technology provides one of the most effective solutions for verification at the RTL level. BugScope provides a very effective ‘Assertion-based Synthesis’ solution. These products complement Synopsys’ Verification Continuum and Galaxy Design platforms quite well.

Also, Atrenta’s SpyGlass power, CDC, physical, and constraint management solutions, and its IP Signoff kit, are state-of-the-art solutions that work at the RTL level. Clearly this RTL-level platform is a step in the right direction towards Synopsys’ ‘Silicon to Software’ strategy.

This combination of technologies will further accelerate the convergence of the overall design towards closure, as most of the verification and optimization loops will close at the RTL level. A design re-work loop at RTL is an order of magnitude faster compared to one at the gate or layout level. So this will further boost Synopsys’ ‘Shift Left’ strategy.

Read the press release here for more information.
Also read: “Semiconductor Acquisitions will Fuel Innovation!”
A Robust Lint Methodology Ensures Faster Design Closure
SoC’s Shift Left Needs Software Integrity

Pawan Kumar Fangaria
Founder & President at www.fangarias.com


TSMC Shows 10nm Wafer!
by Daniel Nenni on 06-08-2015 at 4:00 pm

If you really want to know why I write about TSMC it is all about ego, my massive ego, absolutely. Blogs about TSMC and the foundries have always driven the most traffic and they most likely always will. Semiconductor IP is second, Semiconductor Design is third, and I don’t think that is going to change anytime soon:

SemiWiki BI: Daniel Nenni: TSMC: All
Total Blogs: 137
Total Views: 878600
Average: 6413

SemiWiki BI: Semiconductor IP: All
Total Blogs: 431
Total Views: 1641911
Average: 3810

SemiWiki BI: Semiconductor Design: All
Total Blogs: 1367
Total Views: 4157039
Average: 3041
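For what it’s worth, the “Average” lines in those BI snapshots are just total views divided by total blogs, rounded to the nearest integer:

```python
# The "Average" figures above are simply total views / total blogs.
stats = {
    "TSMC":                 (137, 878600),
    "Semiconductor IP":     (431, 1641911),
    "Semiconductor Design": (1367, 4157039),
}
for name, (blogs, views) in stats.items():
    print(f"{name}: {round(views / blogs)} views per blog")
```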


TSMC came to the Design Automation Conference 16 years ago, ushering in a new level of collaboration in the fabless semiconductor ecosystem. Other foundries have followed, and one could argue that the foundries are the center of the DAC universe. In that time TSMC has completed 15 reference flows (the latest being 10nm) with 7,500+ tech files, 200+ PDKs, and more than 8,600 silicon-proven IP titles from .35u to 10nm.

Today, the first day of #52DAC, my prediction of a big crowd has come true. This year the big foundry buzz is around 10nm. TSMC is showing a 10nm wafer for the first time and everybody is wondering if in fact 10nm will arrive in 2016 as promised. I certainly believe it will, and so does the majority of the fabless semiconductor ecosystem.

Let’s take a quick look at the TSMC process node revenue start history just for fun:

  • .35u 1996
  • .25u 1998
  • .18u 2000
  • .13u 2002
  • 90nm 2005
  • 65nm 2007
  • 40nm 2009
  • 28nm 2011
  • 20nm 2014
  • 16nm 2015
  • 10nm 2016
  • 7nm 2017

    Seriously, we are doing four new process nodes in four years? The fabless semiconductor ecosystem is truly an amazing thing. In regards to process ramp challenges, I remember .13u being very difficult because of the new copper interconnect. 40nm was certainly not easy. 40nm was the last node where TSMC gave you the option of using recommended (yield centric) design rules. Which one of these nodes was the most challenging? You tell me. If you have a design horror story please share it in the comments section and I will give you a free Kindle version of “Fabless: The Transformation of the Semiconductor Industry“.

    TSMC has the Open Innovation Platform Theater again this year in booth #1933. You can see the schedule HERE.The other TSMC related #52DAC activities are HERE:

    TSMC’s booth is jam-packed, probably because they are giving away Apple Watches and other cool stuff. TSMC also had some interesting IoT press today, one release even mentioning 10nm:

    Imagination and TSMC collaborate on advanced IoT IP platforms
    Imagination Technologies (IMG.L) and TSMC announce a collaboration to develop a series of advanced IP subsystems for the Internet of Things (IoT) to accelerate time to market and simplify the design process for mutual customers. These IP platforms, complemented by highly optimized reference design flows, bring together the breadth of Imagination’s IP with TSMC’s advanced process technologies from 55nm down to 10nm…

    Cadence Announces Collaboration with TSMC on IoT IP Subsystem
    Cadence Design Systems, Inc. (NASDAQ: CDNS), today announced that it is collaborating with TSMC on the development of an Internet of Things (IoT) intellectual property (IP) subsystem demonstration platform for TSMC’s ultra-low power (ULP) process. Targeting wearable, home automation, always-on and industrial control applications, this IP subsystem, with the support of the Cadence suite of digital and custom/analog tools, provides the opportunity to simplify IoT designs and accelerate the time to market for mutual customers…

    Synopsys and TSMC Collaborate to Develop Integrated IoT Platform for TSMC 40-nm Ultra-Low-Power Process
    Synopsys, Inc. (Nasdaq:SNPS) today announced a collaboration with TSMC to develop an integrated Internet of Things (IoT) platform on TSMC’s 40-nm ultra-low-power (ULP) process technology. The IoT platform incorporates a broad range of DesignWare® IP, including an integrated sensor and control IP subsystem with the ultra-low-power ARC® EM5D processor core, power-and area-optimized logic libraries, memory compilers, NVM, MIPI and USB interfaces as well as an analog-to-digital converter (ADC). The high-performance, low-power IoT platform provides designers with a pre-validated solution that enables them to deliver the energy-efficient, always-on processing required for applications such as sensor fusion and voice recognition…


    Next Generation Formal Technology to Boost Verification
    by Pawan Fangaria on 06-08-2015 at 12:00 pm

    With the growing complexity and size of SoCs, verification has become a key challenge for design closure. No single methodology can provide complete verification closure for an SoC. Moreover, creating a verification environment, including hardware, software, testbench, and testcases, requires significant resources and time. Formal verification tools have long been available in the semiconductor design industry and are known to provide exhaustive verification coverage without the need for a testbench. However, they rely on assertions for particular verification tasks. For assertion checks, specific properties are defined in standard languages such as SVA (SystemVerilog Assertions) and PSL (Property Specification Language). Here is the problem: it is very difficult for designers to learn assertion languages and apply them to design verification well enough to get the full benefit of formal technology.
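To make the idea of an assertion concrete, here is a toy sketch, in Python rather than SVA, of what an assertion check does: a property such as "every request is granted within two cycles" is evaluated at every cycle of a trace. The trace format and property here are hypothetical illustrations, not any Cadence API; in a real flow this property would be written in SVA or PSL and checked by the tool.

```python
# Toy illustration of assertion-based checking: evaluate the property
# "every req is followed by gnt within 2 cycles" over a signal trace.
# (Hypothetical example; real flows express this in SVA or PSL.)

def check_req_gnt(trace, window=2):
    """trace: list of dicts with 'req' and 'gnt' booleans, one per cycle."""
    for i, cycle in enumerate(trace):
        if cycle["req"]:
            # The property holds if gnt asserts within `window` later cycles.
            if not any(t["gnt"] for t in trace[i + 1:i + 1 + window]):
                return False, i  # property violated at cycle i
    return True, None

# Passing trace: req at cycle 0, gnt at cycle 2.
ok, _ = check_req_gnt([
    {"req": True,  "gnt": False},
    {"req": False, "gnt": False},
    {"req": False, "gnt": True},
])

# Failing trace: req is raised but never granted.
bad, at = check_req_gnt([{"req": True, "gnt": False}] * 3)
```

A simulator evaluates such a property only on the traces it happens to produce; a formal engine, by contrast, proves it over all possible traces.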

    Clearly, this is an opportunity for EDA vendors to automate and ease the verification process by combining formal technology with ABV (Assertion-Based Verification). This approach can provide tremendous benefits, provided it can be used efficiently in the overall verification environment. Cadence has been working for a few years on filling this gap. Today, I am happy to see this approach really working in Cadence’s JasperGold Formal Verification Platform, which is nicely integrated with the Incisive Platform.

    This platform provides a unique and innovative way to address the pain points of using formal and ABV technologies. JasperGold (Cadence acquired Jasper Design Automation in 2014) provides Verification Apps that are targeted at solving specific verification problems. The apps share a common database, are seamlessly integrated with one another, and can be easily set up and run. Because they are vertically integrated into the overall system, problems are solved efficiently using formal and ABV methods with support from simulation and other metric-driven technology. The properties needed for ABV or formal verification can be created automatically, or pre-packaged properties can be used. Also, Jasper’s patented Visualize[SUP]TM[/SUP] Formal Debug and What-If Analysis environment provides instant feedback on any change in any particular parameter. The verification, analysis, and debug platform provides a 15x performance gain compared to previous solutions.

    The JasperGold Apps Platform offers a rich set of verification apps, including ‘Formal Property Verification’, ‘Behavioral Property Synthesis’, ‘X-Propagation Verification’, ‘Control/Status Register Verification’, ‘Coverage Unreachability’, ‘Sequential Equivalence Checking’, ‘Security Path Verification’, and many more. A custom app can also be created for any specific task, and the platform architecture is extensible for developing and deploying new apps as needed. A single GUI allows the different applications to work together with a consistent look and feel, improving designers’ productivity and analysis efficiency.

    The Incisive and JasperGold formal engines are combined for an exhaustive search that finds deeper bugs faster. With the support of the different engines, complete proofs of properties are obtained on datapath, control logic, and memory. All combinations of inputs are tried, without a testbench. New formal-assisted simulation methods penetrate deep into the state space for very deep bug hunting. The new Trident engine provides word-level and memory abstractions that significantly boost performance.
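As a rough sketch of the "all inputs, no testbench" idea, the toy Python code below exhaustively explores every reachable state of a small FSM by breadth-first search, trying every input from every state, and checks a safety property in each state. The FSM and property are hypothetical, and real formal engines use far more sophisticated techniques (e.g. SAT-based model checking and abstraction), but the exhaustiveness is the point.

```python
from collections import deque

# Toy FSM: a mod-4 counter that advances by 2 when enabled.
# (Hypothetical design; real engines work on RTL, not Python functions.)
def step(state, enable):
    return (state + 2) % 4 if enable else state

def prove_safety(init, prop, inputs=(False, True)):
    """Exhaustive reachability: apply every input in every reachable
    state (no testbench needed) and check `prop` in each state."""
    seen, frontier = {init}, deque([init])
    while frontier:
        s = frontier.popleft()
        if not prop(s):
            return False, s        # counterexample state found
        for inp in inputs:
            nxt = step(s, inp)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return True, sorted(seen)      # property holds in all reachable states

# Property: starting from 0, the counter never holds an odd value.
proved, reachable = prove_safety(0, lambda s: s % 2 == 0)
```

Because every reachable state is visited, a `True` result is a proof, not just an absence of observed failures, which is what distinguishes formal verification from simulation.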

    This platform is integrated with the Cadence System Development Suite, where formal-assisted simulation, emulation, and verification closure management can be performed in sync with each other. The Indago[SUP]TM[/SUP] debug infrastructure provides a powerful debugging environment; its resources include the ‘Debug Analyzer App’, ‘Embedded Software Debug App’, ‘Protocol Debug App’, and ‘Advanced Debug Analyzer App’, which provides on-the-fly what-if analysis for design exploration and debugging.

    In formal-assisted emulation with Palladium XP II, assertion-based VIP complements accelerated VIP. Assertion-based VIP coded in SVA can replace checkers that cannot be compiled on the Palladium platform and are therefore removed from accelerated VIP. Formal property creation from emulation traces also assists in debugging.

    JasperGold is integrated with Incisive vManager to assist in verification status management and closure. Reports clearly show the status of tasks completed by formal. The Coverage Unreachability (UNR) app automatically generates properties to explore coverage holes and determines whether each hole is reachable. The unreachable cover points form an unreachable-coverage database that guides users to exclude them, saving time.
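To illustrate the unreachability idea (a hypothetical sketch, not the UNR app's actual algorithm): once the set of reachable states is known, any cover point whose condition holds in no reachable state can safely be excluded from the coverage target, because no simulation could ever hit it.

```python
# Hypothetical sketch of coverage unreachability: given the reachable
# states of a design, classify cover points as reachable (keep) or
# unreachable (safe to exclude from coverage closure).
reachable_states = {0, 2}  # e.g. the result of an exhaustive formal search

cover_points = {
    "cover_even": lambda s: s % 2 == 0,
    "cover_odd":  lambda s: s % 2 == 1,  # no reachable state satisfies this
}

unreachable = [
    name for name, cond in cover_points.items()
    if not any(cond(s) for s in reachable_states)
]
# `unreachable` lists the cover points to exclude from coverage closure
```

Excluding such points is sound because the exclusion is backed by a formal proof of unreachability rather than by a waiver or a guess.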

    The JasperGold SPS (Structural Property Synthesis) App integrates automatic formal analysis with basic HAL (Cadence HDL Analysis) checks to provide a fully integrated lint solution. Property grading, violations, and waivers can be analyzed and managed with ease in the Visualize[SUP]TM[/SUP] GUI environment.

    A new LPV App for low-power verification has also been added to the JasperGold platform. It performs all low-power functional checks and power-aware sequential equivalence checking, and it runs the other formal apps in power-aware mode. Combined with Incisive low-power simulation and Conformal low-power capabilities, this forms a powerful low-power verification solution.

    It was a very pleasant occasion talking to Pete Hardee, Product Management Director at Cadence, who explained this innovative solution in detail. I see this as a unique solution in the verification segment of EDA that significantly boosts verification performance and productivity. The next-generation JasperGold Formal Verification Platform is being released this month. A live presentation/demo can be seen at Cadence booth #3515 at DAC.

    Pawan Kumar Fangaria
    Founder & President at www.fangarias.com