Magma eSilicon One Keynote
by Paul McLellan on 09-27-2011 at 2:31 pm

I was at the first half of Magma’s Silicon One event yesterday. The first keynote was by Rajeev about the environment for SoC designs, especially fabless startups, and Magma’s role going forward. More about that later. The other keynote was by Jack Harding, CEO of eSilicon. As usual, Jack did his presentation without any PowerPoint slides, something I find very difficult to do without losing my thread.

Jack started off with some statistics about eSilicon. They have been in existence for just over 10 years now and have done over 200 parts. A third party audited them for a customer and concluded that they had a 98% first-time hit rate. For those who don’t know eSilicon, their business model is to be an ASIC company, although they don’t have a fab. But they are more than a design house. They deliver tested, packaged parts just like an ASIC company with a fab, except that they let you put your own logo on the parts. At VLSI, for example, we always put ours on (look at any pictures of motherboards of early Macs).

The big change that Jack wanted to talk about was the consumerization of SoCs and the effect that this is having on the design chain. Design used to be “all” digital, with a smart group of slightly eccentric designers down the hall (or in a separate company) who used SPICE, a layout editor and their bare hands to wrestle analog to the ground. Analog was something the rest of the team never had to worry about.

In the 90s, the strategy was to make it a separate chip. That way it could be done in an older process. So a system might be 4 digital chips and an analog chip that was either a standard product or designed and manufactured in a separate process.

But now all that is condensed into a single SoC, with all the digital, and lots of it, and all the analog, and lots of it, on one chip. This is both a problem and an opportunity. eSilicon has historically, for business reasons, been more focused on networking than consumer, and over half their chips had a SerDes on them. Today virtually every chip they do has hundreds of lanes of SerDes. There are now so many variables involved in making the analog work that the problem is incomprehensible to a human. So it is going to have to get more automated whether the designers like it or not, just as when place and route first arrived and designers figured they could do better by hand. For a few gates, yes; for thousands, it is just impossible.

Even picking the IP is an intractable task. TSMC has 12 flavors of 28nm process and 40 different commercial libraries available (from TSMC and third parties), so that’s 480 combinations just there.
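A quick back-of-the-envelope sketch of that combinatorics in Python (the 12 and 40 are the figures Jack quoted; the extra SerDes axis below is a hypothetical illustration of mine, just to show how fast the space grows):

    # IP-selection space: process flavors x libraries (figures quoted above).
    process_flavors = 12   # TSMC 28nm process variants
    libraries = 40         # commercial libraries from TSMC and third parties
    print(process_flavors * libraries)   # 480 process/library pairings

    # Add one more IP axis (a hypothetical number of SerDes options) and the
    # space multiplies again:
    serdes_options = 5
    print(process_flavors * libraries * serdes_options)   # 2400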

During the Q&A, Jack was asked why EDA companies don’t design chips for at least some of their customers. He thought this made a lot of sense, but there are big problems with the way Wall Street values EDA companies (many companies with mixed product lines, such as HP, have this problem too). To combine an eSilicon-type business with a Cadence-type business (and remember, Jack was CEO of Cadence too) would mean going from 95% software margins to 45-50% semiconductor margins, and nobody knows how to value a company that mixes those two businesses (one reason VLSI spun out Compass when I was there was that Wall Street got confused by companies with mixed product lines like that). So right now it is good business for eSilicon, but clearly a potential slot for EDA companies to step up and fill themselves. But of course that would mean they only get paid if the tools work…

Which leads to the next question: why is EDA just a $4B business? Jack’s view is that it is a flaw in the EDA business model, whereby EDA charges for a capability regardless of success. Everyone else is at risk. If a chip doesn’t go to production, no wafers are bought, no parts are packaged, nothing is tested and nobody makes any money. Except EDA. But the quid pro quo is that if EDA is not going to take that risk, then it is capped at $4B (in fact, excluding IP, it is probably shrinking). In the early days, EDA had a hardware business model (Calma, Applicon, etc.) and this model made sense. The software was thought of almost as an add-on to sell the hardware. But 15-20 years ago that stopped making sense. Jack’s estimate is that if EDA had switched to taking risk on a variable basis, it would be a $40B business. More chips could be made, probably with a higher percentage failing (as businesses, not necessarily technically), but with much more volume in total.

But he wouldn’t want to be the first CEO to make the switch. Wall Street would punish you for 2-4 years, until the first designs in a new process node went from EDA software development through design to volume production. The challenge is how to find a way to switch without blowing up the existing business model completely. Jack said that they have done a few small deals where eSilicon, an EDA company and the customer all participate on the basis of a long-term royalty. Possibly a good candidate to grow.

So the takeaway is that things need to be looked at differently than before. The only way to get these designs done is to be very silicon aware and work with EDA partners, silicon partners, test and assembly and so on. This is leading to re-aggregation of supply chains, since someone needs to take responsibility for everything. After all, if a package is broken then the foundry isn’t getting any wafer orders even though it isn’t their fault, and vice versa. For the designs they do, eSilicon takes that responsibility, and they invest a lot in communication and in staffing: process people, manufacturing people, test people, package experts and so on.


Cadence VIP Enables Users to be First-to-Market with Mobile Devices Leveraging Latest MIPI, LPDDR3 and USB 3.0 OTG Standards
by Eric Esteve on 09-27-2011 at 1:56 am

The mobile devices market is simply exploding, with smartphone shipments going up to the sky and tablets emerging so fast that some people think they will replace the PC (but this is still to be confirmed…). This leads mobile SoC designs to integrate increasingly more features, to support customer needs for more computing power and sophisticated video, audio and storage. To support these new features while improving both performance and power, new interface standards have emerged that SoC designers need to integrate under ever-increasing time-to-market pressure, opening the door for external sourcing of new functions (design IP) and for solutions that can accurately test the functionality of their design and ensure manufacturing success (verification IP). Cadence has defined the problem pretty well (see image): it’s too hard to verify from scratch, as it requires too much time, effort and expertise.

That’s why the verification IP market is so dynamic these days, and that’s why both Synopsys and Cadence not only communicate but are also very active: almost every week brings major news, an acquisition or the release of a new VIP supporting an emerging standard. The involvement of Cadence in verification IP for SoCs developed for the mobile industry (wireless handsets, media tablets and portable consumer electronics devices) clearly appears when looking at the list of VIP offerings for mobile applications, with support for the following standards:

  • LPDDR3: This low-power version of the pervasive DDR3 memory standard enables customers to meet the high bandwidth and power efficiency requirements of mobile systems.
  • MIPI CSI-3: Providing an advanced processor-to-camera sensor interface, MIPI CSI-3 enables mobile devices to deliver the bandwidth required to enable high resolution video and 3D.
  • MIPI Low Latency Interface (LLI): This interface cuts mobile device production cost by allowing DRAM memory sharing between multiple chips.
  • USB 3.0 On-The-Go (OTG): Providing 10x the performance of the previous USB specification, USB 3.0 OTG allows consumers to rapidly transfer data, such as video and audio content, as well as quickly and effortlessly charge devices.
  • Universal Flash Storage (UFS): A common flash storage specification for mobile devices, UFS, a JEDEC standard, is designed to bring higher data transfer speed and increased reliability to flash memory storage.
  • eMMC 4.5: Designed for secure yet flexible program code and data storage, eMMC 4.5, a JEDEC standard, enables high-bandwidth, low pin-count solutions that simplify system design.
  • cJTAG: With its support for reduced pin count, power management and simplified multi-chip debug, cJTAG enables efficient testing of mobile devices, a key requirement for delivering high-volume, high-quality mobile devices.

The secret sauce for supporting emerging standards, when these are still in development, is to actively participate in the standards bodies, such as the MIPI Alliance. According to Joel Huloux, chairman of the board, “MIPI Alliance continues to advance mobile interface standards with processor and peripheral protocols that streamline system development and expand the sophistication of today’s mobile devices. By ensuring verification support for these protocols at the earliest stage possible, companies such as Cadence enable mobile designers to embrace the latest standards and deliver products that transform the consumer’s mobile experience.” Cadence was also the first company to add support for ARM Ltd.’s AMBA 4 Coherency Extensions protocol (ACE), speeding the development of multiprocessor mobile devices, and the DFI 3.0 specification, which defines an interface protocol between DDR memory controllers and PHYs.

Another important ingredient for cooking a successful recipe, at the other end of the spectrum, is collaboration with the system manufacturers. If you take a look at the member list on the MIPI Alliance website, you realize that this collaboration could be with companies like Ericsson, Nokia, Panasonic, RIM or Samsung, even if Cadence does not disclose this information. Being present at both ends of the spectrum, participating in the elaboration of a standard well before the protocol is released and working closely with the system integrators, the final users, is a good way to fine-tune the verification product and release it as early as possible, allowing SoC designers to cope with increasingly short time to market.

Eric Esteve from IPNEST


Apple Plays Saudi Arabia’s Role in the Semiconductor Market
by Ed McKernan on 09-27-2011 at 12:08 am

The retirement of Steve Jobs left most commentators wondering if Tim Cook could keep Apple marching ever onward and upward. In truth, Tim Cook’s contribution on the operations side has been just as instrumental in the destruction of Apple’s PC and consumer electronics competitors as Jobs’ product vision. Under Tim Cook’s guidance, Apple has increased its gross margins from 29% to 41% in the last five years, and it looks to increase them further. Cook is executing on what I will call the “Swing Consumer” strategy, a takeoff on Saudi Arabia’s “Swing Producer” position within the OPEC cartel. What it means is that the companies competing with Apple will be operating off of high-priced leftovers.

Saudi Arabia naturally took the role of “Swing Producer” within the cartel based on its vast oil reserves and its ability to easily lift over 12M barrels of oil a day at a cost of $2 per barrel in a country with a relatively small population. The net of it all is that Saudi Arabia for many years could financially support itself on less than half of its full capacity and therefore ensure a maximum oil price in the market in any given economic situation. Despite the OPEC cartel having a dozen members and, in the case of Saudi Arabia, gross margins of over 90%, the oil market has not collapsed the way the DRAM market does in perpetuity off a base of fewer than a handful of suppliers. Furthermore, Saudi Arabia recognizes that the kingdom itself is valued more on what’s still in the ground than on what it produces in current cash flow: 200B+ barrels of oil at $80 is more than $16T of assets, which is why short-term price fluctuations don’t matter as much as maintaining the long-term oligopoly. They always adjust supply to keep oil in an upper range.

Apple first moved into the “Swing Consumer” mode for semiconductors when it based all of its new, high-growth products on NAND flash in combination with commodity ARM processors. By 2005, Apple was taking roughly 40-50% of Samsung’s NAND capacity and growing. To maximize its opportunity, Apple decided in July 2009 to write a $500M check to Toshiba as a prepayment for NAND flash capacity at a discounted price. As the largest consumer of flash, coupled with commodity ARM processors that could be built anywhere, Apple found itself all alone, able to dictate the lowest worldwide pricing. The only exception would be if Samsung subsidized its smartphone and tablet groups. All of Apple’s competitors were left negotiating the second-best price within a sometimes spot-shortage environment. There is no chance for them to overturn Apple’s component cost lead.

Apple’s premium brand combined with its “Swing Consumer” logistics has put it into a position similar to that of Saudi Arabia. At a moment’s notice it can shift not only NAND flash capacity but also DRAM suppliers, LCDs, wireless chips and so on. Last week, an article appeared that confirmed Apple’s plan to source more NAND and DRAM from Toshiba and Elpida and away from Samsung (see Apple Looks to Japan for DRAM, NAND Flash Supply). The article speculated that it was in response to the legal battles in which Apple is embroiled with Samsung. It is that and more. It is a chance for Apple to lower Samsung’s prospects of being a serious competitor in the smartphone and tablet market, and it is a leveling of the playing field. Apple needs both Toshiba and Samsung around as competitors, always sharpening their pencils.

As a growth company with strong product margins, Apple is able to offer suppliers a guaranteed forecast that is always expanding, while its competitors like Dell, HP and others must remain tentative in their outlooks in the face of worldwide economic turmoil. As I discussed in an earlier article (Apple Will Nudge Prices Down in 2012: PC Market Will Collapse), margins for the PC OEMs and the retailers (i.e. Best Buy) are so thin that they are forced to under-build and under-forecast to suppliers for fear of being left with too much inventory at the end of a selling season, which would wipe out any gains they accrued in the first few months. Thus PC OEMs are in a death spiral, losing more and more market share as time passes and with it their cost leverage over suppliers.

Imagine the task at hand for new HP CEO Meg Whitman as she tries to salvage this quarter’s revenue by first having to reassure customers that HP is staying in the PC business. At the time of the last earnings call, when Apotheker announced that the tablet was to be cancelled and the PC group spun out, HP lost immediate leverage with suppliers, who expected to receive order reductions and cancellations.

The near-term bottom for tech stocks, and in particular Dell and Apple, came the following day: August 18th. I would tend to speculate that Dell and Apple reached out to suppliers to take advantage of HP’s debacle. Now, as Whitman restarts the PC group’s engine, HP has to go back to suppliers to beg for parts at higher prices. And the worst part is that HP has to re-enter the 10%-margin PC business in order to reassure customers who are looking at buying higher-margin servers, networking and services. Apotheker’s critical mistake was that he believed dropping the PC group would not impact the other business units. The phones went quiet.

Apple’s complete domination of its supply chain through the “Swing Consumer” strategy will allow it to continue to squeeze suppliers and improve margins. They do, however, have one more threshold to cross and that is with its processor strategy. Apple will be a dual CPU House for a long time but it needs cost leverage over Intel. Both are playing a slow elephant dance before the ultimate partnership is signed that could be a benefit to both.

In the past year, both Intel and Apple have implemented communication and corporate strategies to try to gain the upper hand on each other. As everyone knows, Apple is good about keeping secrets for things that it truly doesn’t want the world to know. It goes to the extent of tracking down missing iPhone prototypes mistakenly left in bars, or cutting off suppliers who talk too much about upcoming and actual shipping products. In areas where it wants to frame public opinion, it communicates in ways that appear secretive: like the recent reports that TSMC has signed on to fab the A6 and future A7 processors at 28nm and 20nm respectively. It’s anybody’s guess whether there is an A7 and what it might be, but it magically appears in print. One wonders where Intel fits in the scheme of things.

Perhaps we have been offered a hint at what is coming.

Midway through Intel’s IDF conclave, where its upcoming 22nm chips were on full display, there was a simple press release announcing that Intel was issuing debt for the first time in over 20 years (see Intel Announces Senior Notes Offering). The initial press release made no mention of the amount but did say the purpose was stock repurchases. Given that Intel had just raised its dividend in August and has been aggressively buying back stock with its massive operating cash flow, the offering seemed out of place.

The Wall Street Journal later confirmed that the debt offering raised $5B in total: a number that is familiar to anyone looking to build an advanced 22nm or 14nm fab. Perhaps this is Intel’s way of saying it is ready to build an additional fab for a new customer. More on how this can play out in another column.


Semiconductor equipment spending beginning to decline
by Bill Jewell on 09-25-2011 at 7:41 pm

Semiconductor manufacturing equipment shipments have leveled off after a strong rebound from the 2008-2009 downturn. August 2011 three-month-average shipments based on combined data from SEMI (North American and European companies) and SEAJ (Japanese companies) were $2.9 billion, down from a peak of $3.2 billion in May 2011. Three-month-average bookings have dropped significantly to $2.2 billion in August, down 31% from the peak of $3.3 billion in August 2010. The book-to-bill ratio dropped to 0.78 in August, indicating continuing declines in billings.
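For readers less familiar with the metric, book-to-bill is simply bookings divided by billings over the same window; with the rounded three-month-average figures above it works out roughly as follows (the published 0.78 presumably reflects the unrounded data):

    # Book-to-bill from the rounded three-month-average figures quoted above.
    bookings_aug_2011 = 2.2   # $ billions, bookings
    billings_aug_2011 = 2.9   # $ billions, shipments/billings
    print(round(bookings_aug_2011 / billings_aug_2011, 2))   # ~0.76
    # A ratio below 1.0 means orders are arriving more slowly than equipment
    # is shipping, hence the expectation of further declines in billings.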

SEMI’s September 2011 forecast for semiconductor manufacturing equipment calls for a 23% increase in billings in 2011, followed by a decline of 3% in 2012. One of the largest capital spenders, TSMC, plans to cut spending by 19% in 2012 after a 25% increase in 2011, according to Taiwan press reports.

What will be the impact of decreased capital equipment spending on semiconductor capacity and utilization? The latest available data from Semiconductor Industry Capacity Statistics (SICAS) for 1st quarter 2011 showed industry IC capacity utilization of 94.2%, the fifth consecutive quarter with utilization above 90%. Semiconductor shipments are currently sluggish. Recent forecasts for the 2011 market range from a decline of 2% to growth of 5%. Forecasters agree the semiconductor market will pick up to stronger growth in 2012, ranging from 5% to 10%. Our forecast at Semiconductor Intelligence remains at 4% for 2011 and 10% for 2012.

The worldwide economic outlook is very uncertain, causing semiconductor manufacturers to be very cautious. The downward trend in semiconductor manufacturing equipment spending will slow the rate of capacity growth. If the forecasts of improving semiconductor market growth in 2012 hold true, IC capacity utilization should remain above 90% through at least 2011 and probably into at least the first half of 2012. Utilization in the 90% to 95% range is generally healthy for the industry – high enough for semiconductor manufacturers to remain profitable but not high enough to result in significant shortages.

Semiconductor Intelligence, LLC can perform a variety of services to provide you and your company with the intelligence needed to compete in the highly volatile environments of the semiconductor and electronics markets.


A Verilog Simulator Comparison
by Daniel Payne on 09-22-2011 at 2:40 pm

Intro
Mentor, Cadence and Synopsys all offer Verilog simulators; however, when was the last time you benchmarked your simulator against a tool from a smaller company?

I just heard from an RTL designer (who wants to remain anonymous) about his experience comparing a Verilog simulator called CVC from Tachyon against ModelSim from Mentor.



Benchmark Details

First, let me say that my primary use for the CVC tool is with regard to regressions being done on RTL designs, so it is not a gate level design that I can give you benchmark data on. In my regressions, sometimes the test bench activity contributes as much or more to the total simulation time requirements.

The test case that I was writing about when I first sent you the email was the regression testing for a relatively small digital design of about 150,000 gates in 0.35 micron UMC; however, as I mentioned before, the regressions were being performed on the RTL.

In that design, there are about 7,500 lines of RTL code, and the test bench is about 6,500 lines of code.

In a regression that took Modelsim Questa 28 hours to complete, CVC completed the work in 10 hours. This regression consists of a bash script that calls the same test bench with different conditions repeatedly to test all the features and automatically verify the performance.

In a more recent test that I have done with a much smaller design, where the test bench takes more time to run than the actual DUT, I ran a 100 msec simulation in 6 minutes with CVC that took Modelsim Questa over 30 minutes to run. The test bench in this design does a state-space model simulation of an analog circuit that is connected to the DUT and performs functional simulation, not so much for regression testing but more for the purpose of design analysis. We do small designs on large geometries due to the power-control nature of our business.

In this case, the RTL for the DUT is 12,000 lines and the test bench is 8,000 lines long. As you can see, there are more lines of code here, but the design is about 1/10th the size of the previous example (about 13,000 gates).

We find that the greatest differences in speed between Modelsim Questa and CVC relate to the test bench part; however, we have also noticed that CVC is often 20 to 50 percent faster in the gate-level sims as well. Where it does NOT shine is that we run into some trouble using SDF back-annotation with it. We can get it to work, but it seems to be inconsistent.

We tend to use CVC more for functional verification and development, but we still use Questa for the back end validation steps.

I know that some of this data is perhaps not as quantitative as the example that you sent to me, but we have been using CVC for almost two years now, and our results have been consistent over enough projects that our group is increasing its use over time due to the speed advantage it gives us, at least in our circumstances.

I have NOT had an opportunity to use it with large designs such as those found in much of the communications, graphics, and other DSP intensive applications where the gate counts get into the millions. So I cannot address that behavior with my current experience with it.
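Taken at face value, the run times quoted above imply the following speedups (simple arithmetic on the reported figures; the second comparison is a lower bound, since the Questa run took “over 30 minutes”):

    # Speedup arithmetic on the run times reported above.
    regression_questa_hours, regression_cvc_hours = 28, 10
    small_questa_minutes, small_cvc_minutes = 30, 6   # Questa figure is ">30 min"

    print(f"RTL regression:   {regression_questa_hours / regression_cvc_hours:.1f}x faster")  # 2.8x
    print(f"small-design run: >{small_questa_minutes / small_cvc_minutes:.0f}x faster")       # >5x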


Apple’s Supply Chain
by Paul McLellan on 09-21-2011 at 5:48 pm

I am doing some consulting right now for a company that shall remain nameless, and one of the things I have had to look at is Apple’s supply chain. I came across an interesting article by someone with the goal to “buy a MacBook Air that isn’t made by Apple.” He is in the UK and doesn’t like Apple’s UK keyboard and he doesn’t really want to have to run everything on a virtualization environment. So basically he wants to buy a MacBook Air that is actually a PC.

This is a market that Intel has announced it will support with $300M of its own money under the name Ultrabooks. After all, there is presumably a big market for a MacBook Air that is actually a PC Ultrabook. And how hard can it be? Apple’s industrial design is great but it isn’t that hard to copy. Most of the components are standard. Lots of people know how to put together a PC.

The answer turns out to be surprising. It is really hard to build a PC “Air” for the same price as Apple does. And the reason is one of the secrets of Apple’s supply chain that I hadn’t really thought much about before. Apple hardly makes any products. Sure, it ships huge volume: it is the world’s largest semiconductor purchaser, much bigger than HP or Cisco or any other obvious candidates (expected to be $22B this year). But it has three iPhones (iPhone 3GS, iPhone 4 (GSM) and iPhone 4 (CDMA)), one iPad, two MacBook Airs, some iPods and some bigger notebook and desktop computers. Apple is not the Burger King of electronics: you don’t have it your way, you have it Steve’s way. Compare that to HP’s or Cisco’s product lines.

The PC market is predicated on “have it your way.” You go to HP’s or Dell’s website and decide what options you want: do you want wireless, which speed of processor, how much memory and so on. Also, they have broad product portfolios, so they are forced to use standard products such as screens, batteries, wireless daughter cards, power supplies and so on, since it doesn’t make sense to customize anything for a subset of a subset of the product line. So the PC industry is largely based on having a lot of components that are purchased in varying amounts and then clicked together to build the end product. They ship a large volume, but of a broad product range, so not much of any particular model.

Apple, by contrast, can integrate as much as it wants and buys all components in the same quantities (for each product) since you don’t get that flexibility. This gives it much higher volume and more predictable demand, and it can leverage this into lower prices. And since it has so few products it can invest in specialized components for each one: the MacBook Air has a specially shaped battery that just fits in among all the other stuff in there, and the iPhone and iPad contain a custom Apple SoC (A4 for the current iPhone, A5 for the iPad 2 and presumably for the imminent iPhone 5). Famously, a few years ago, Apple bought Samsung’s entire flash memory output. Too bad if you are someone else.

With those greater volumes and greater purchasing leverage, Apple can build the MacBook Air for less than any of the PC competitors can build an Ultrabook. Plus it doesn’t fit their business model well: a premium product that you cannot customize. Where does that go on the Dell website?

As an aside, a lot of this supply chain optimization is not the aesthetic Steve Jobs side of Apple, but is what Tim Cook, the new CEO, worked to put in place.


Custom Signal Planning Methodologies
by Paul McLellan on 09-20-2011 at 4:08 pm

It is no secret that custom ICs are getting larger and more complex, and this has driven chip design teams to split up into smaller teams to handle the manual or semi-automated routing of the many blocks and hierarchical layers that make up such a design. These sub-teams don’t just need to handle the routing within their own block(s); they must also integrate the routing between blocks and address the challenge of creating correct top-level routing (which overflies the blocks) within their assigned part of the die.

Using informal approaches, such as verbal and email status reports, is no longer enough; it makes the routing of a large custom chip the long pole in the tent, very labor-intensive and with a schedule that determines the overall schedule of the entire chip. Once you add in congestion issues, advanced-node parasitic effects and the fact that the design itself is probably not stable and is undergoing incremental change, the process becomes almost impossible. Even “industry standard” routers are unable to complete top-level routing challenges because they were not designed to fully address the complex combination of specialized topologies, hierarchical design rules and DFM requirements (via redundancy, via orientation, via enclosures, wire spreading, etc.) that are required to achieve successful on-time design closure for AMS and custom ICs.

What is needed is a fully automated approach to signal planning. The key is to integrate the process with the block placement tasks and the use of intelligent, routing aware pin-placement algorithms to address multi-topology routing problems. Providing a tool with tight integration of these tasks means that designers can explore the implications of different placement alternatives before deciding on an optimal solution. And in a much faster time than doing it manually or semi-automatically.

One critical consideration is the routing style required to handle these complex top-level and block routing tasks. A Manhattan routing style is used to avoid jogs, thus reducing the number of vias required and minimizing wire length, in turn reducing timing delays and power. Nets can be sorted during routing to avoid crossing routes, thus reducing crosstalk and other noise. Of course, users must be able to define constraints for the router, such as width, shielding requirements, maximum and minimum widths on each layer, and matched signal pairs.
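As a rough illustration of the kind of per-net constraints being described, here is a generic, hypothetical sketch in Python (my own notation for illustration only, not Pulsic’s actual constraint format):

    # Generic, hypothetical per-net routing constraints -- illustrative only,
    # not Pulsic's syntax. Each entry captures the parameters mentioned above:
    # widths, shielding, allowed layers and matched pairs.
    net_constraints = {
        "clk_fast": {"width_um": 0.40, "shield": "vss", "layers": ["M4", "M5"]},
        "vref":     {"min_width_um": 0.60, "max_width_um": 1.20},
        "rx_p":     {"matched_with": "rx_n", "max_length_mismatch_um": 2.0},
    }

    for net, rules in net_constraints.items():
        print(net, rules)   # a router front end would validate and apply these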

Another way to optimize area and improve productivity is to use a router that supports multiple-bias routing as well as strictly biased X-Y routing. With its jumpered mode, designers can define complex schemes where routes in both horizontal and vertical biases use the same metal layer efficiently, while a separate layer serves as a jumper layer for channels where a layer change is required to route effectively. Further, many semiconductor manufacturers use routers that support special optimization for bus routing and compact signal routing, allowing them to take advantage of specialized semiconductor vias and via directions, resulting in still more compact routing.

More information on Pulsic’s Unity Signal Planner is here.




Analog Constraint Standards
by Paul McLellan on 09-20-2011 at 8:00 am

Over the years there has been a lot of standards creation in the IC design world to allow interoperability of tools from different vendors. One area of recent interest is interoperable constraints for custom IC design. Analog layout is becoming increasingly automated. Advanced process nodes require trial layouts to be created even during the circuit design stage, to bring back detailed information for the iterative simulation loops. In particular, variation in sub-30nm nodes is impacted by layout-dependent effects (LDEs): essentially, device values depend on the proximity of whatever else is around them, meaning that circuit design and layout design are much more closely intertwined than they were in the past. Without accounting for this, correlation between pre- and post-layout results is not assured.

To make the prototype layout process work smoothly, design constraints must be communicated to the layout automation, but there is currently no common open standard for defining these design constraints. As a result, users are forced to enter these constraints multiple times, once for each tool. Worse, subtle differences in the semantics can cause problems. To remedy this, the IPL Alliance, whose original charter was centered around open PDKs, has embarked on an initiative to create a single unified set of constraint definitions, covering both the syntax and the semantics. The goal, obviously, is to allow the designer to enter the constraints once and use tools from multiple vendors to achieve their design goals: higher quality, higher productivity, reduced time to market. The IPL Constraints Standard is available to all IPL Alliance members and is expected to be made public sometime in mid-2012.

The history is that in 2010 the IPL Alliance decided to get ahead of the creation of design constraint standards. Otherwise every vendor would create its own proprietary standard, and it is a lot harder to get alignment once such standards have achieved some adoption, since there are genuine costs of change and plenty of opportunity for political fighting among the most widely adopted standards.
The group’s goals were that the standard should:

  • Be portable and interoperable
  • Support existing and future tool sets
  • Be extensible
  • Take into account the multiple ways that constraints might be created: by hand, through GUIs, scripted, automated
  • Take into account the entire design flow, where many steps are co-dependent.

A decision was made to support both text-based and OpenAccess-based constraints, and tools should be able to translate between the two representations if necessary without semantic problems.
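To make the idea concrete, here is a purely illustrative sketch of the sort of information such constraints might carry (my own hypothetical structure, not the IPL Constraints 1.0 syntax, which is available only to alliance members):

    # Hypothetical, tool-neutral analog constraint records -- illustrative only;
    # this is not the IPL Constraints 1.0 syntax.
    constraints = [
        {"type": "symmetry",  "instances": ["M1", "M2"], "axis": "vertical"},
        {"type": "matching",  "instances": ["M3", "M4"], "parameter": "W/L"},
        {"type": "proximity", "group": ["M1", "M2", "M3", "M4"],
         "max_separation_um": 5.0},
    ]
    for c in constraints:
        print(c["type"], c)
    # The point of a standard is that the tool capturing these intents and the
    # layout tool honoring them interpret the same records identically, whether
    # the data is serialized as text or stored via OpenAccess.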

At DAC in 2011 the IPL Design Constraint Working Group announced the IPL Constraints 1.0 standard, which defines the syntax and schema for open, interoperable design constraints and included a proof-of-concept set of constraints. Work is going on to expand the standard to include a broader representation of actual constraints and to validate the interoperable use model.

The presentation from the 2011 DAC luncheon is here.



Coby Hanoch joins Jasper
by Paul McLellan on 09-20-2011 at 7:00 am

Jasper has hired Coby Hanoch as the VP of international sales to manage sales outside of North America. I talked to him last week.

Coby started his career, after graduating from the Israel Institute of Technology, as an engineer at National Semiconductor. He quickly ended up in verification, where they developed the first random verification generator. Then he went to Paris to work in CAD/verification for the ACRI supercomputer project, which burned a lot of money with little to show for it. So he returned to Israel, and then he and a group of friends started Verisity. Somehow he got the job of doing the sales and marketing. They quickly brought Kathryn on (yes, Jasper’s CEO, in her previous life) to manage US activity. In 1998, Moshe Gavrielov (who had come on as CEO) asked him to move back to Paris to run sales in Europe and then, when that was up and running, to move to Asia. So he cheated and moved back to Israel (hey, it’s technically Asia). Verisity just took off and it was like sitting on a rocket. Europe went from $800K to $14.5M in 2 years. Asia went from $2M to $30M in 3 years. Coby found himself as VP of worldwide sales. Cadence acquired Verisity, it didn’t feel like a good fit, so he left.

In fact he left EDA and went to a little Israeli startup, turned it around and then…time for a break. But despite officially being on vacation, he kept getting calls from EDA vendors wanting help setting up distributors. A few phone calls and he collected a finder’s fee. But then came the downturn; finder’s fees dried up and, further, companies needed help managing reps and understanding the different cultures. So he set up EDAcon with reps in all the relevant countries.

Earlier this year, Kathryn visited Israel and told Coby that everything was going really well in the US but Europe and Asia not so much. She invited him to join. I guess she was pretty persuasive since he said yes. He started just after DAC.

Every day he is more excited, since it feels like Verisity all over again. Jasper seems uniquely positioned for very rapid growth. When Coby was at Verisity, he used to feel he was doing the customer a favor when he sold them product. Jasper feels like that too. Jasper clearly has a great relationship with ARM, and that gives Jasper an entrée into ARM’s most advanced customers. But the target is broader: anyone doing more complex designs who has made the strategic decision to use formal.

The first few months were spent signing up reps. Next week is sales training. Bringing a lot of reps on at once enables Jasper to go broad rather than having to focus on one territory at a time. Obviously Israel is especially easy since Coby is there. But there is lots of business in China, Korea is starting up and, from earlier years, there are already a couple of strategic accounts in Europe. Also, it turns out Jasper has several AEs spread through the territories already on the ground. Evaluations are starting. Business discussions are starting. The product is mature. He’s excited. Let’s hope he’s right that Jasper is Verisity all over again. A wild ride by any standard.

To contact Coby, his email is coby at jasper-da dot com. I’m sure he’d love your PO.

The press release announcing Coby’s appointment is here.



Nanometer Circuit Verification: The Catch-22 of Layout!
by Daniel Nenni on 09-19-2011 at 8:00 pm

As analog and mixed-signal designers move to very advanced geometries, they must grapple with more and more complex considerations of the silicon. Not only do nanometer CMOS devices have limitations in terms of analog-relevant characteristics such as gain and noise performance, but they also introduce new sources of variation that designers must worry about. Industry efforts like the TSMC AMS Reference Flow 2.0 have devoted considerable focus to this.

Managing the effects of variation has been part of analog design since the vacuum tube era. However, nanometer CMOS introduces variations that depend not only on the devices themselves, but also where they are physically located on the chip relative to one another. These new context-sensitive effects, such as well proximity, shallow trench isolation stress, and poly spacing effects, make accurate assessment of the layout’s electrical impact – also a time-honored analog design imperative – even more important. Or as the co-founder of a major fabless IC company once said, “at nanometer geometries, the layout is the schematic.”

Perhaps the biggest problem for computer analysis of these effects turns out not to be actually modeling them, but getting timely access to layout data. Because layout traditionally is a tedious and change-resistant effort, project teams don’t like to start it until the circuit design is nearly complete. Yet before the circuit design is complete – while it’s still evolving and flexible – is exactly when you do want the layout data. The layout really is just another view of the schematic.

In order to solve this Catch-22, it’s important to look at a couple of factors: how much layout is really needed for analysis? And how much of it can be automated?

For example, only placement is needed in order to assess the impact of well proximities; however, that placement needs to be accurate and complete – not just each differential pair or current mirror in isolation. Routing, on the other hand, is essential for node capacitance, but approximate routing might be adequate, especially if the capacitance is dominated by source-drain loading, in which case the wires themselves add little.
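A quick sanity check of that last point, using order-of-magnitude numbers that are purely my own assumptions rather than figures from any particular process:

    # When is approximate routing good enough? Compare the junction (source/
    # drain) capacitance on a node with a rough estimate of its wire
    # capacitance. All values are hypothetical, order-of-magnitude assumptions.
    c_junction_fF    = 8.0    # total source/drain capacitance on the node
    wire_length_um   = 20.0   # estimated route length
    c_wire_per_um_fF = 0.2    # assumed metal capacitance per micron

    c_wire_fF = wire_length_um * c_wire_per_um_fF
    fraction  = c_wire_fF / (c_wire_fF + c_junction_fF)
    print(f"wire contributes {fraction:.0%} of the node capacitance")   # ~33%
    # If this fraction is small, an approximate route suffices for pre-layout
    # analysis; if it dominates, the real routing has to be in the loop.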

Fortunately there’s a group of companies bringing to market innovative solutions that focus exactly on these problems, and collaborating to hold the Nanometer Circuit Verification Forum (nmCVF) on September 22nd at TechMart in Santa Clara. Hosted by Berkeley Design Automation, and including technologists from selected EDA, industry and academic partners, this forum will showcase advanced nanometer circuit verification technologies and techniques. You’ll hear real circuit case studies where these solutions have been used to verify challenging nanometer circuits, including data converters; clock generation and recovery circuits (PLLs, DLLs); high-speed I/O; image sensors; and RF CMOS ICs.