SPICE Timing Correlation for IC Place and Route

by Daniel Payne on 07-10-2012 at 10:35 am

SPICE circuit simulation is used for transistor-level analysis while Place and Route tools are typically used to connect cells and blocks of an SoC, so why would there be a connection between these two EDA tools?

I read a press release today from ATopTech and Berkeley Design Automation that talked about how SPICE and P&R are connected, so I contacted Eric Thune of ATopTech to learn more. Eric has worked at Apache Design Solutions, I2 Technologies, Synchronicity, Synopsys, and TI. Continue reading “SPICE Timing Correlation for IC Place and Route”


High-Productivity Analog Verification and Debug

by Daniel Nenni on 07-08-2012 at 10:40 pm

See how Synopsys’ advanced analog verification solution can dramatically increase your verification productivity with CustomExplorer Ultra, along with CustomSim and CustomSim-VCS. CustomExplorer Ultra is a comprehensive simulation and debug environment for analog and mixed-signal design verification.

Web event: High-Productivity Analog Verification and Debug with CustomSim and CustomExplorer Ultra
Date: July 11, 2012
Time: 10:00 AM PDT

Duration: 45 minutes + Q&A

REGISTRATION

This webinar demonstrates an advanced verification methodology using CustomExplorer Ultra with CustomSim and CustomSim-VCS that enables highly productive verification and debug of analog and mixed-signal designs. CustomSim and CustomSim-VCS provide fast simulation engines, while CustomExplorer Ultra is a complete verification environment offering simulation corner and Monte Carlo setup management, a flexible simulator interface, multiple testbenches, and interactive cross-probing with popular design environments, such as Galaxy Custom Designer and Virtuoso ADE, for fast circuit debugging.

Speakers:

Duncan McDonald
Product Marketing Manager, Synopsys

Duncan has more than 20 years of experience in EDA, holding positions in engineering, sales, and marketing all related to analog and mixed-signal design. Duncan is the author of 3 U.S. patents and holds degrees from UC Berkeley and the University of Santa Clara.


DAC 2012 Cheerleader Controversy!

by Daniel Nenni on 07-08-2012 at 9:00 pm

First, I must say that I’m biased. I like Cheerleaders, they are lots of fun, I even married one. Second, I’m not a fan of Peggy Aycinena. She has been on her EDA feminist rant for years now and I have been targeted multiple times. My solution has been to ignore her and any publication that supports her but this time she has gone too far.

It first started when Paul McLellan posted a blog on SemiWiki about the 49er Cheerleaders appearing at DAC 2012. What a great idea! The blog was deleted shortly thereafter, and Paul told me that, as it turns out, the 49er Cheerleaders would not be attending DAC 2012. Bummer, I thought. Even my wife, who attended DAC 2012, was disappointed.

Then I read an article by Mike Demler:

As the industry continues to shrink, can EDA bring sexy back?

Mike is a great guy, I’m a fan of his site, he is very credible:

DAC organizers made some initial attempts to liven up the proceedings, by signing up a few of the San Francisco 49er cheerleaders to wake up attendees before an 8:30 AM keynote address, on the second day of the conference. The cheerleaders, who regularly appear before crowds (including many families) of 70,000 fans at every 49er home game, also are known for their charitable work, and for their careers and education beyond the football field. Nevertheless, according to sources who would only speak off the record, when a female EDA blogger launched a personal protest of the cheerleaders, contacting EDAC Board members and DAC organizers, they cancelled the appearance. Attempts to get a statement from the DAC Executive Committee have gone without a response. Gold Rush management has also declined to comment.

After reading this, I felt sure Peggy was behind it but could not confirm and Paul McLellan was not talking. Paul is the official DAC webmaster so I understand his tight lips. I also understand the decision by the DAC people to cancel to avoid controversy.

Next comes John Cooley’s article:

Peggy bans 49er cheerleaders, Gabe wants Denali party cancelled

I do read John, don’t always agree with him, but certainly respect the work he has done on DeepChip:

I can’t believe this. The DAC Executive Committee caved in to the angry feminazi rants of Granny Peggy Aycinena????? WTF? Just because Peggy wouldn’t have appreciated these cheerleaders, a good 90% of the heterosexual male population at this DAC would have! WTF???

Okay, John is being crude here but I agree with his point. I don’t like a moral majority of one person making decisions on what is and is not appropriate for an entire crowd. I also don’t appreciate the negative label Peggy attaches to the 49er Cheerleaders. They are athletes, goodwill ambassadors, and they deserve better (I bold this because it is the main point of this blog).

Gabe Moretti also did an article on this (according to John Cooley) but I don’t read his site Gabe on EDA and it did not come up on Google. Maybe he thought better than to get on Peggy’s bad boy list and deleted it. Or maybe Gabe’s site is not search engine friendly. Probably both.

Peggy’s response to all this did come up on Google, and I’m reading it for the first time:

Cooley: Ignore the men behind the curtain by Peggy Aycinena

This rant is so fractured I don’t even know what to cut and paste, so you will just have to read it yourself. She goes “eye for an eye” with John, attacking him personally and with increased venom. Included is a list of people she has pissed off; I’m on it, and this is why.

Last year Paul McLellan did an article on SemiWiki, Semiconductor Virtual Model Platforms, which included a picture of a female model (nothing racy). Peggy posted a rant against Paul, me, and SemiWiki, so we changed the pic to what you see today. That rant was also removed after Paul bought her lunch to smooth things over. Even better, Peggy once called some of the DAC hostesses (booth babes) prostitutes. That article was removed, as most of her other rants have been. This latest one will probably be removed too, so I saved a copy just in case, because it really is quite funny in a disturbing sort of way.

This was my 29th DAC so I have seen the evolution first hand. In fact, I was pleasantly surprised when DAC allowed alcohol on the show floor, which apparently Peggy is okay with, for now anyway. My opinion: we are adults and can make personal choices as we see fit. It would have been nice to have been allowed the choice of attending the 49er Cheerleader DAC session or not. Next year hopefully an actual majority will prevail and we will see Cheerleaders serving beer!


Testing ARM Cores – Mentor and ARM Lunch Seminar

by Beth Martin on 07-08-2012 at 8:29 pm

If you are involved in testing memory or logic of ARM-based designs, you’ll want to attend this free seminar on July 17, 2012 in Santa Clara. Mentor Graphics and ARM have a long-standing partnership, and have optimized the Mentor test products (a.k.a. Tessent) for ARM processors and memory IP.

The lunch seminar runs from 10:30-1:00 at the Santa Clara Marriott. The presenters are Richard Slobodnik of ARM, and Stephen Pateras of Mentor Graphics. They will describe the specific test solutions developed to cover memory and logic test for ARM-based designs. A newer feature is the shared bus interface where MemoryBIST controllers reside outside of the ARM core, and use the shared bus to test the memory inside the core. Blocks with a shared bus and with memories on the bus (memory clusters) have a functional interface to the bus (see the figure).

Sign up for this free ARM / Mentor Graphics Lunch Seminar now.

If you want to study up before, here are two relevant whitepapers from Mentor:
Memory Test and Repair Solution for ARM Processor Cores
High Quality Test of ARM® Cortex™-A15 Processor Using Tessent® TestKompress®


NVM IP: Novocell Semiconductor has announced an expansion of their product line

by Eric Esteve on 07-08-2012 at 3:52 am

“Novocell Semiconductor’s core antifuse-based OTP Smartbit™ technology was first patented in 2001 and 2002, and created a solid foundation for the first ten years,” stated Walt Novosel, President and CTO. “Since then, our customer-driven focus has led to numerous innovations in our original high reliability Smartbit-based NVM IP to best service specific system on chip (SoC) market segments. Our announcement today unveils our full line of NVM products to fully serve our customers’ needs, from 8-bit register OTP, to specialty trimming and calibration OTP, to 4Mbit ultra-high density code storage and configuration OTP, to 1000x multi-time write hybrid OTP/MTP.”
Continue reading “NVM IP: Novocell Semiconductor has announced an expansion of their product line”


Intel Goes Vertical to Guarantee PC Growth

by Ed McKernan on 07-07-2012 at 8:30 pm

A Bloomberg article from early July caught my eye as it portends further changes in the competitive mobile market landscape. Intel is now in the business of paying Taiwanese panel suppliers to ensure the supply of touch-screen panels for PC ultrabooks. In essence it says that to win in the PC market, Intel has to mimic Apple and go more and more vertical in the supply chain. Apple’s stellar growth makes it difficult for PC manufacturers to forecast true demand out 3 to 6 months, and given their minuscule profit margins they have to veer towards the conservative or face the risk of going out of business with excess inventory. Intel, like Microsoft, is faced with having to control its destiny vs. the laissez faire Wintel model that has existed for 30 years.

In a previous blog, I mentioned how Microsoft may have started a Thermonuclear War with its customers (e.g. HP and Dell) when it introduced its Windows 8 Tablet – or should we say pre-announcement of Win 8 Tablets. Microsoft and Intel are showing signs that the combined profits that they derive from the PC market are too high for their customers to price at a suitable discount against the growing Apple Empire. Apple buries its O/S cost and in the case of iPADs and iPhones its CPU cost. These costs are lower than what Microsoft and Intel charge their PC OEMs and neither one wants to give in as Tablets and Ultrabooks rollout this fall. Given the strength of the iPAD growth and the now almost assured rollout of a smaller iPAD in September at $299, OEMs are concerned about what the true demand is for PCs, especially in the US and Europe.

Furthermore, Intel is in a short-term mode of keeping Ivy Bridge ULV prices high in order to force OEMs to abandon the idea of including an nVidia or AMD graphics chip in Ultrabooks because the additional cost pushes system prices out of range of what the market will pay. I expect Intel, however, to drive prices lower to capture the market in Q4 before AMD responds with a competitive solution. Currently the lowest cost Intel ULV part is over $100, which is way too high if Ultrabooks are to reach the $499 price point for high volume consumers. Over the long term, though, Microsoft and Intel face unique challenges due to Apple’s growth. Both rely heavily on corporate and government purchases of PCs. Microsoft is threatened with the immediate prospect that Apple will make inroads with MacBook notebook PCs and iPADs. It is a direct hit on Microsoft’s O/S and Office revenue stream. Microsoft has to have an immediate answer this Fall with a Windows 8 tablet, but it appears that HP and Dell cannot deliver on a price that is below Apple’s iPAD.

Microsoft needs to step in and plug the hole with what will effectively be a discount on its software stack: essentially give away the hardware to sell the software (i.e., the razor/razor-blade model we are all accustomed to). Intel has a different scenario playing out and appears to be in a stronger position. In the short run it is executing to a plan that calls for cannibalizing AMD and nVidia ($10B+ revenue) with the ultrabook platform, even while PC growth slows at the expense of iPADs. The investment in Taiwan panel manufacturers will likely come with an exclusivity that bars AMD and nVidia silicon from showing up in the end product. From mid-2013 onward, Intel has to win Apple’s business as it attempts to force the whole mobile market to the leading edge process node.

Qualcomm’s misread on demand for its 28nm 4G solutions is a significant sign that the industry based its smartphone and tablet business models on an (n-1) process technology instead of being out over the ski tips. By (n-1) process, I am speaking of how many semiconductor suppliers were counting on 40nm being the volume process for mobiles this summer and fall and 28nm being a 2013 volume driver. Longer term, when Intel gets its baseband capabilities closer to Qualcomm’s, the leading edge will be determined by Intel’s latest process. Intel’s PC business model from the 1990s through today has been all about delivering processors on the leading edge. The trek they are taking to 14nm with mobile processors and Atoms with a robust communications platform speaks to the opportunity to cannibalize Qualcomm and Broadcom. However, en route to this scenario it is looking like they will need to take a greater role in propping up the PC system supply chain.

FULL DISCLOSURE: I am Long AAPL, INTC, QCOM, ALTR


Intel’s finfets too complex and difficult?

by Tom Dillinger on 07-07-2012 at 7:00 pm

Thanks to SemiWiki readers for the feedback and comments on the previous “Introduction to FinFET Technology” posts – very much appreciated! The next installment on FinFET modeling will be uploaded soon.

In the interim, Dan forwarded the following link to me, “Intel’s FinFETs too complicated and difficult, says Asenov,” which provides some (preliminary) analysis of FinFET behavior, based on recently published TEM pictures of Intel’s Ivy Bridge designs:
Continue reading “Intel’s finfets too complex and difficult?”


TSMC: Production Proven Design Services Driving SoC Innovation!

by Daniel Nenni on 07-06-2012 at 8:30 pm

One of the truisms of today’s disaggregated semiconductor design and manufacturing model is counter-intuitive to the do-it-yourself focus that is at the heart of every engineer. And yet, time and time again, success rewards those who understand that with today’s ever increasing complexity, it is difficult, if not impossible, to be all semiconductor things to all people.

This lesson was reinforced during a presentation at DAC 2012 from Global Unichip that focused on the services bundled into their Flexible ASIC Model™ that allows semiconductor designers to focus on their core competency. What impresses me about their approach is that GUC doesn’t insist on a hard-and-fast hand-off point, but rather provides the flexibility for each company to determine where their core competency begins and ends.

GUC’s business today is being driven by high gate count, advanced technology, and low power SoC designs. Low Power is definitely key to much of today’s innovation. GUC’s low power design services start with defining specific low power library and power gating techniques. It also encompasses DVFS and AVFS services along with low power verification. But the heart of their design services lies in their low power competency, their domain IP integration and their highly sophisticated design for test (DFT).

The GUC Low Power Competency goes under the brand name PowerMagic® and covers IP that provides both internal and external power shutdown, proven on more than 80 designs over the past five years. Dynamic power is a big concern for low power, high performance designs. Working through different methodologies, GUC has mastered the ability to efficiently perform clock gating at the architecture level. They are also experts at designing for multiple supply voltages on a single chip and dynamic voltage and frequency scaling. The key to their success is the ability to design in changes to the supply voltage and frequency based on the current processor loading.
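The general DVFS idea, adjusting the voltage/frequency operating point to the current load, can be sketched as a simple governor policy. This is a hypothetical illustration only, not GUC’s actual implementation: the operating points, the headroom factor, and the function name are all invented for the example.

```python
# Illustrative DVFS governor sketch (hypothetical, not GUC's implementation):
# pick the lowest-power (voltage, frequency) operating point whose frequency
# still covers the work the processor must finish in the next interval.

OPERATING_POINTS = [  # (voltage in V, frequency in MHz) - invented values
    (0.9, 400),
    (1.0, 800),
    (1.1, 1200),
]

def select_operating_point(work_cycles, interval_us, headroom=0.8):
    """Return the lowest-power point that covers the load with headroom."""
    for volt, freq_mhz in OPERATING_POINTS:
        available_cycles = freq_mhz * interval_us  # cycles in the interval
        if work_cycles <= headroom * available_cycles:
            return volt, freq_mhz
    return OPERATING_POINTS[-1]  # saturate at the fastest point
```

A light load of 100,000 cycles over a 1 ms interval stays at the 0.9 V / 400 MHz point, while a heavier load steps the governor up; real governors add hysteresis and transition-cost modeling on top of this basic policy.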

Another design variable that drives both design quality and time-to-market success is the ability to integrate high speed interface IP into low power designs at both the chip and system level. To that end, GUC began developing and designing its own high speed interface IP, including SerDes, PCI Express 3.0, USB 3.0, DDR2/3, LPDDR2/3 and a number of others, starting at 65nm. Today, that high speed interface IP portfolio covers production nodes down to 28nm and continues to shrink to 20nm. Another GUC service is customizing IP for each application to meet required specifications. To achieve comprehensive domain IP integration, package influences also need to be taken into consideration.

In parallel, GUC extends its high speed design capabilities through its Design for Test (DFT) services. The key objective is to improve yield by reducing peak power during testing so as to cut down the cost of testing.

While there is much more to the GUC low-power methodology than I can blog here, the point is that low power at high performance is a difficult design challenge, one that vexes many designers and requires newfound expertise. The bottom line is that it might be worth checking out silicon-proven low-power design specialists the next time you face that specific challenge.



Mind the Gap — Overcoming the processor-memory performance gap to unlock SoC performance

by Sundar Iyer on 07-06-2012 at 3:25 pm

Remember the processor-memory gap, the situation where the processor is forced to stall while waiting for a memory operation to complete? This was largely a result of the high latency of off-chip memory accesses. Haven’t we solved that problem now with SoCs? SoCs are typically architected with their processors primarily accessing embedded memory, and accessing external memory only when absolutely necessary. However, while on-chip memory access latency is still a concern, embedded memories are also required to respond to back-to-back sustained access requests issued by a processor or processors. In fact, networking data pipelines and multicore processors can hammer memory with a multitude of simultaneous memory accesses to unique random addresses, and the total number of aggregated memory accesses has been dramatically increasing. So once again, system architects are up against a processor-memory gap, this time with embedded memory. As a result, embedded memory performance has become the limiting factor in many applications (figure 1).

At Memoir Systems, we believe that the performance limitations of embedded memories are largely a result of the way the problem has been conceptualized. In fact, we have found it is possible to improve memory performance by a factor of ten using currently available technology and standard processes. In the past, thinking about embedded memories was limited to a purely circuit- and process-oriented approach. Thus, the focus was on maximizing the number of transistors on a chip and cranking up the clock speed. This was successful up to a point, but as transistors approach atomic dimensions, the industry has run into fundamental physical barriers.

At Memoir we have taken an entirely new approach with our Algorithmic Memory technology. Algorithmic Memories operate by adding logic to existing embedded memory macros. Within the memories, algorithms intelligently read, write, and manage data in parallel using a variety of techniques such as buffering, virtualization, pipelining, and data encoding. These techniques are woven together to create a new memory that internally processes memory operations an order of magnitude faster and with guaranteed performance. This increased performance capability is made available to the system through additional memory ports such that many more memory access requests can be processed in parallel within a single clock cycle, as shown in figure 2. The concept of using multi-port memories as a means of multiplying memory performance mirrors the trend of using multicore processors to increase performance over uniprocessors. In both cases, it is the parallel architecture rather than faster clock speeds that drives performance gains.
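One classic trick in this family of techniques can be sketched in a few lines. The model below is a hypothetical illustration, not Memoir’s actual algorithm: it emulates a memory with two read ports and one write port from two single-read-port replicas by duplicating every write into both copies, so both read ports can be served in the same cycle. The class and method names are invented for the example.

```python
# Illustrative model of one well-known multi-port memory trick (hypothetical,
# not Memoir's actual algorithm): a 2-read/1-write memory built from two
# single-read replicas. Every write updates both copies, so each read port
# has its own replica to read from in the same clock cycle.

class TwoReadOneWrite:
    def __init__(self, depth):
        self.bank_a = [0] * depth  # replica serving read port A
        self.bank_b = [0] * depth  # replica serving read port B

    def cycle(self, write=None, read_a=None, read_b=None):
        """One clock cycle: an optional (addr, data) write plus two reads.

        This model is write-first: a read of the address written in the
        same cycle observes the new data.
        """
        if write is not None:
            addr, data = write
            self.bank_a[addr] = data  # the write goes to both replicas
            self.bank_b[addr] = data
        out_a = self.bank_a[read_a] if read_a is not None else None
        out_b = self.bank_b[read_b] if read_b is not None else None
        return out_a, out_b
```

Replication doubles the area, which is why practical algorithmic memories combine it with the buffering, virtualization, and encoding techniques mentioned above to trade off area, power, and port count.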

Algorithmic Memory technology is implemented as soft RTL. The resulting solutions appear exactly as standard multi-port embedded memories. The new memories employ dozens of techniques to accelerate performance or reduce area and power requirements. However, perhaps the greatest benefits of Algorithmic Memory come not from the individual algorithms, but from how they are integrated into an elegant system (figure 3). In this system, the memories not only perform better, but their performance is fully deterministic. Furthermore, not only can new memories be created very rapidly, they are also automatically and exhaustively formally verified; and since they are built on existing embedded memories, no additional silicon validation is required.

Algorithmic Memory gives memory architects a powerful tool to rapidly and reliably create the exact memories they need for a given application. Most importantly, though, it empowers system architects with new techniques to overcome the processor-memory gap, and further unlock SoC performance.

Dr. Sundar Iyer

Co-Founder & CTO


Cadence at Semicon West Next Week: 2.5D and 3D

by Paul McLellan on 07-05-2012 at 5:32 pm

Next week is Semicon West, at the Moscone Center from Tuesday to Thursday, July 10-12. Cadence will be on a panel during a session entitled The 2.5D and 3D packaging landscape for 2015 and beyond. The session starts with three short keynotes:

  • 1.10pm to 1.25pm: Dr John Xie of Altera on Interposer integration through chip on wafer on substrate (CoWoS) process.
  • 1.25pm to 1.40pm: Ryusuku Otah of Fujitsu on Large SIP for computer and networking application with 2.5D, 3D structure.
  • 1.40pm to 1.55pm: Dr Huili Fu of HiSilicon Technologies on The demands and the challenges of TSV technology application in IC and system.

Then from 2.20pm to 3.30pm there is a panel session on Ecosystem and R&D collaboration. Cadence will be represented by Samta Bansal, who I talked to about Cadence’s joint work with TSMC that they announced at DAC.


As I’ve said before, I think that 2.5D (and eventually 3D) is going to be very important, since it is not clear that we will be able to keep on track with lithographic scaling. With double and triple patterning we can manufacture, but it is very expensive and wafer prices are going up fast. EUV still looks a long way from commercialization and it may never get there. In the meantime, high levels of integration can be achieved with CoWoS, along with the advantage of being able to mix die from different processes. We are at the early stages of this and there is still lots of work to be done, both on the EDA side and, even more so, on the ecosystem and supply chain (who does what? when do you test? how do you ship ultra-thin silicon around without breaking it? etc.). Since this is the topic of the panel session, it should be interesting to hear.

The session is free to attend if you are registered for Semicon.