Realibrium.com: EDA for Real-estate
by Paul McLellan on 07-29-2014 at 10:01 am

Everyone in EDA is really smart. People who leave EDA to work in other industries, especially people who left in the late 1990s for internet startups, notice that this is not true elsewhere. Not that there aren't smart people in internet startups, just that not everyone is. EDA is an industry where you need a master's degree to be in sales, never mind the deep domain knowledge necessary to be in engineering inside an EDA company. I know there are smart people at, say, Oracle too. But let's face it, relational database technology doesn't require a major rewrite every couple of years to address a new process node. Sure, databases get bigger, there are more cores, and so on. Double patterning, FinFETs and sea changes like that, not so much.

So it is always interesting when people leave the EDA industry to address some other industry but bring the "EDA attitude" of doing it in a smart way. Realibrium.com is one such company: it takes the basic analytic approach that is commonplace in EDA and applies it to buying and selling condominiums and homes. Elias Echeverri started the company (and wrote the code) and it is now going live. Disclosure: Elias is a friend who worked with me at VLSI, Ambit, Cadence and Envis.

Currently Realibrium.com covers only cities in the Miami area as a test market, and it will be expanding to New York City in the next couple of months. Despite not being in South America, Miami is effectively its capital due to its location (and somewhat Europe's too, at least for international investors). South America is much further east than you think. Trivia question: if you go south from Atlanta, GA, which is the first South American country you hit? Brazil? Belize? Venezuela? Answer: none of them; Atlanta is to the west of the entire South American continent. The whole southern part of Miami Beach is taken up by modern condo towers. There is a very international market of buyers and sellers, plus people from all over the US, so it is a good test market. Plus, as everywhere else, most realtors are clueless about technology, no matter how well they know local market conditions.

The site has an automatic data feed for all condos and homes on the market and all sales. It can tell you all the properties that are currently for sale and lots of average data about them. But it can also do much deeper trend analysis, such as looking at the trend in selling price per square foot and the ROI based on rental prices. Clearly prices are going up.
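As a rough illustration of the kind of metrics involved (a minimal sketch; the data and field names are invented, not Realibrium's actual feed or algorithms), price per square foot and gross rental yield fall straight out of the sales records:

```python
# Toy illustration of the trend metrics described above. The data and
# field names are invented, not Realibrium's actual feed or algorithms.
sales = [
    {"year": 2012, "price": 350000, "sqft": 1100, "monthly_rent": 1800},
    {"year": 2013, "price": 410000, "sqft": 1100, "monthly_rent": 1950},
    {"year": 2014, "price": 480000, "sqft": 1100, "monthly_rent": 2100},
]

for s in sales:
    price_per_sqft = s["price"] / s["sqft"]
    # Gross rental yield: a year of rent as a fraction of the price.
    gross_yield = 12.0 * s["monthly_rent"] / s["price"]
    print("%d: $%.0f/sqft, gross yield %.1f%%"
          % (s["year"], price_per_sqft, 100 * gross_yield))
```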

Or the average number of days on the market. Just by looking at this graph you can see how much the market has tightened and how it has become a sellers’ market. If you want to buy a condo now…well, you should have done it a year ago!

Using the trend data, algorithms can give a good estimate of what any given condo should cost and what price you should put it on the market at (the site also handles rentals: people buying to rent, people renting out a property they own, and people looking for a property to rent).

Check it out. The menu at the top of the screen lets you search for a property to buy (err…probably pretend to buy). The graphs above come from the market analysis, which builds reports on the fly for wherever you want. It is all pretty self-explanatory (especially if you have experience buying or selling a property somewhere, probably not Miami).

The Realibrium.com website is here. There is also a blog.


More articles by Paul McLellan…


IoT will depend on FPGAs
by Luke Miller on 07-29-2014 at 6:00 am

The IoT (Internet of Things) creates an ambivalence within me. Part of me hates computers and being connected; the other part is currently working on a boiler controller that even adaptively predicts and senses when the next wood load is needed and alerts the wife. Yup, pray for her. I really do use FPGAs and CPLDs around the farm, and I am slowly realizing that I am getting closer to the IoT in the Miller realm.

My last blog, which really brought out raw emotion in some folks when I almost killed off the CPU in favor of the FPGA, had some of its point missed, such as agnostic serial interfaces and not being locked down to PCIe and the like. Of course, once the board is laid out you are locked into those paths, but if we think a bit ahead we can very often design around obsolescence and buy another 5-10 years using an FPGA, for real.

The IoT is revealing a very important need in technology besides the need for something like a billion wafer starts: programmable hardware and IO. Once again, a pure microcontroller is not going to cut it. The IoT will be driven by FPGA-like devices simply because they are the devices that can interface to the outside world most easily. Second to that, they are the only solution that is going to provide the lowest power, lowest latency and best determinism (excluding ASICs here).

What is the IoT going to interface with? Temperature, pressure, position, acceleration, ADCs, DACs, current, voltage, etc. But, you say, you could use an Arduino, Raspberry Pi, etc. Yes you can, but choosing those platforms locks you into standards and usually one thread (you only have one engine to share) and not much parallel processing. What is going to differentiate your solution from your competition? Let's face it, the IoT is going to go way beyond a thermostat that senses people in a room and creates a temperature profile for the week. That is easy.

The FPGA and Zynq are going to allow agnostic sensor interfaces AND plenty of parallelism. Sure, you could start out round robin: read temps, read pressures, read O2 sensors; eventually you are going to blow out your timing budget. In the FPGA all this work can be done at the same time. Overkill? Remember the idea of multitasking when the PC came out? I remember people saying, who needs to do more than one thing on a PC? Ha! RIP, Commodore 64. I'm on a RIP kick lately. A personal note here: the Commodore 64 was by far the best computer ever made. Even better than Apple, yeah, even Apple.
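A toy latency model (the numbers are invented) makes the timing-budget point: a round-robin loop pays the sum of all the sensor read times every cycle, while independent logic in the FPGA fabric pays only the slowest one.

```python
# Toy latency model: round-robin polling vs. parallel sampling in fabric.
# The per-sensor read times (in microseconds) are invented for illustration.
sensor_read_us = {"temp": 50, "pressure": 80, "o2": 120, "adc": 10}

round_robin_us = sum(sensor_read_us.values())  # one sensor after another
parallel_us = max(sensor_read_us.values())     # all sensors at once

print("round-robin cycle: %d us" % round_robin_us)  # 260 us
print("parallel cycle:    %d us" % parallel_us)     # 120 us
```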

Best bet for these applications? Zynq. The Red Pitaya board is a great example of this and even has a web server, which is a perfect application for the ARM. And that is what we want, right? Data at our fingertips. So that has become the framework for my very complex boiler controller, which will hopefully save me from cutting an extra 5 cords of wood.

Every time I leave for a trip I get a call from the wife about the boiler; this will simplify things, right? :p

A neat way to get started…read here


End-to-end look at Synopsys ProtoCompiler
by Don Dingee on 07-28-2014 at 9:00 pm

Usually, we get the incremental story in news: this new release is x percent better at this or that than the previous release, and so on. Often missing is the big picture, telling how the pieces all tie together. Synopsys took on that challenge in their latest FPGA-based prototyping webinar. Continue reading “End-to-end look at Synopsys ProtoCompiler”


ARM’s Quarterly Results: Licensing Strong, Royalties Weak, Future is Bright
by Paul McLellan on 07-28-2014 at 8:30 pm

Last week was ARM’s quarterly earnings call. Simon Segars, the CEO, and Tim Score, the CFO, presented from London.


First, the numbers:

  • revenue was up 17% year-on-year in dollar terms to $309.6M, but only 9% in sterling terms (£187.1M) due to exchange-rate moves
  • profit before tax was $94.2M

  • licensing was up 42% year-on-year, with 41 new processor licenses signed during the quarter. Version 8 licensing was particularly strong: cumulatively, ARM have now licensed v8 products 50 times to 28 different companies, 7 of them in the last quarter. All of the top-10 semiconductor companies that build chips for smartphones have licensed v8, 9 of the 10 companies that build application processors for tablets have licensed v8, and so have 4 out of 5 of the top companies selling chips into the networking business
  • During the quarter, 8 Mali multimedia processor licenses were signed, bringing the total to 96 licenses with 61 different companies. It is now the most widely licensed GPU, appearing in over 75% of digital TVs, over 50% of Android tablets and 25% of Android smartphones (of course this wasn't mentioned on the call, but although iPhones are ARM-based they don't use Mali; they use Imagination's GPUs)
  • royalties were up 2% year-on-year. Don't forget that royalties are one quarter in arrears, so this is royalties on products that shipped in Q1 (licensees ship product in Q1, then in Q2 they work out how much they shipped and write a check to ARM). Royalties are higher on v8 licenses, but realistically it takes 1-2 years for a processor license to show through into volume production. Simon explicitly guided expectations that the second half of the year would have higher royalties
  • 2.7B ARM-based chips shipped during the quarter, up 11%. Growth was especially strong in…no, not mobile…enterprise networking and microcontrollers (those are the Cortex-Mx cores)

ARM is obviously dominant in mobile, with penetration at essentially 100% of smartphones. Of course there are other processors in smartphones too, hidden out of sight in things like wireless and Bluetooth. The really interesting battleground is the server space. Intel owns this space today, since everyone builds out datacenters using standard PC architectures. But that is not an optimal solution for power, physical size or cost if you are someone like Facebook, building datacenters for very specific purposes where total throughput is more important than the absolute fastest single-thread performance (which is what you want if you are doing, say, weather forecasting). ARM have signed 16 licenses for server applications, and the first production systems are expected next year. ARM expects that by 2018 they will have 10-15% of the total server market. If that turns out to be true then it is a big change: it will be material for ARM in terms of royalties, but it will be even more material for Intel in terms of chips not shipped.
Talking of 2018, here is what ARM presented as their expectations across all their markets: $40B of ARM-based chips.

For comparison, here are the actual numbers for last year (2013).
One frequently asked question of ARM is what happens as integration goes up: when two chips become one, does the royalty go down? The detailed numbers depend on the deals the various licensees negotiate, but roughly speaking ARM gets a 2% royalty for the first processor in a chip and a 1% royalty for the second. So with those numbers, if a company is shipping a kit of two parts, a $5 part and a $2.50 part, then the royalty is 15 cents per kit. If these are combined into a single chip that ships at $6, then ARM gets 18 cents per chip. Basically, 2% of $7.50 is less than 3% of $6. Everyone wins. ARM expects this type of chip, with multiple ARM processors, to increase.
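The arithmetic is easy to sanity-check (using the illustrative prices and rates from the paragraph above, not actual ARM terms):

```python
# Royalty on a two-chip kit vs. a combined chip, using the illustrative
# rates above: 2% for the first ARM processor in a chip, 1% for a second.
kit_royalty = 0.02 * 5.00 + 0.02 * 2.50  # two chips, one processor each
combined_royalty = (0.02 + 0.01) * 6.00  # one chip, two processors

print("kit:      $%.2f per unit" % kit_royalty)       # $0.15
print("combined: $%.2f per unit" % combined_royalty)  # $0.18
```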

Slides from the call are here. Transcript of the call at SeekingAlpha is here.


Accelerating SoC Verification Through HLS
by Pawan Fangaria on 07-28-2014 at 3:00 pm

Once upon a time, the struggle was to complete verification of semiconductor designs at the gate level. Today, beyond imagination, the struggle is to verify designs with billions of gates at the RTL level, a task that may never complete. The designs are large SoCs with complex architectures and several constraints on area, performance, power, testability, synthesis and so on, requiring huge vector sets at the hardware and software levels that are often still not enough for complete verification. Verification being a critical requirement, new techniques for verifying designs in different ways evolve year after year, only to be met by further increases in SoC size and complexity – it's a vicious cycle!

Since the designs are at the system level, the verification intent must start at the system level, with the hardware abstraction verified at that level. Typically, hardware can be abstracted at the algorithmic, TLM (Transaction Level Model) or RTL level. The algorithmic model is a software model (written in C++ or SystemC) which simulates fastest but without timing. TLM models (based on the standard TLM library written in SystemC) can be segregated into untimed, loosely timed, approximately timed and cycle accurate, whereas RTL models are fully cycle accurate and synthesizable to actual gates, exhibiting the slowest simulation speed. It's clear that design and verification must start at the algorithmic level, which can be mapped to TLM and then to RTL appropriately. The state of the art here is how accurately a design is mapped between the different levels and how the levels are equivalence-checked against each other.

Designing and verifying the algorithmic model in C++ or SystemC is very efficient; ~1 month of simulation at RTL can be accomplished in less than 10 minutes in the algorithmic model. These models can be used as reference models (wrapped in SystemC or embedded in SystemVerilog in a UVM environment) for the hardware, and tested on an ESL (Electronic System Level) platform with a TLM fabric. A TLM model, in turn, can be simulated ~100x faster than RTL. The synthesizable portion of the TLM, with bit-accurate datatypes in C++/SystemC, can be transformed into RTL. Keeping the algorithmic model as the golden reference, equivalence between the three models can be checked. The synthesizable TLM can be verified very effectively and efficiently, with limited performance testing, in real time without any clock. Coverage can be monitored by using assertions and cover points in the source code, similar to what is done in the algorithmic model, and carried forward to RTL. Analysis and profiling tools such as gcov can be used effectively.
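As a drastically simplified sketch of the golden-reference idea (in plain Python rather than C++/SystemC, with an invented 3-tap filter standing in for the design, not Calypto's methodology), the untimed algorithmic model and a bit-accurate fixed-point refinement can be run on the same stimulus and compared vector by vector:

```python
# Drastically simplified sketch of golden-model checking: an untimed
# floating-point algorithm vs. a bit-accurate fixed-point refinement.
# The kernel (a 3-tap FIR) and the Q4.12 format are invented examples.

COEFFS = [0.25, 0.5, 0.25]

def golden_fir(samples):
    """Untimed algorithmic reference model (floating point)."""
    out, taps = [], [0.0] * len(COEFFS)
    for x in samples:
        taps = [x] + taps[:-1]
        out.append(sum(c * t for c, t in zip(COEFFS, taps)))
    return out

def to_q12(x):
    """Quantize to signed Q4.12 fixed point."""
    return round(x * 4096)

def fixed_fir(samples):
    """Bit-accurate model, the kind that would be synthesized to RTL."""
    coeffs = [to_q12(c) for c in COEFFS]
    out, taps = [], [0] * len(coeffs)
    for x in samples:
        taps = [to_q12(x)] + taps[:-1]
        acc = sum(c * t for c, t in zip(coeffs, taps))  # integer math only
        out.append(acc / (4096.0 * 4096.0))  # rescale for comparison
    return out

stimulus = [0.0, 1.0, 0.5, -0.25, 0.75]
for ref, dut in zip(golden_fir(stimulus), fixed_fir(stimulus)):
    assert abs(ref - dut) < 1e-3, (ref, dut)  # tolerance for quantization
print("fixed-point model matches the golden reference")
```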

So, what is the platform for doing all these operations? I was impressed by Calypto's Catapult, a High Level Design and Verification Platform. The whole process is described at length in an online webinar at the Calypto website. The platform considers a complete SoC with algorithms, control logic, interfaces and protocols, which can be targeted to any particular technology of choice. It provides ~10x improvement in verification productivity, with the system-level description synthesized to correct-by-construction RTL and provision to integrate any last-minute code changes efficiently. Practical results with customer designs have shown up to ~18% area saving and ~16x gain in time compared to hand-coded designs. The platform is supported by Catapult LP, which provides closed-loop PPA exploration where different solutions can be explored and evaluated by changing the constraints, not the source code. Power-saving techniques such as clock gating are used very efficiently.

Once an optimized RTL is synthesized, it's essential to verify its correctness. The Catapult platform provides different ways of verifying the RTL. The SCVerify capability automatically generates the test infrastructure, with SystemC transactors communicating with the RTL, which can be simulated with industry-standard simulators such as VCS, Incisive or Questa. The synthesizable TLM reference model can be compared with the results received from RTL co-simulation.

Another way is to test all the models in the popular UVM environment. As shown in the above image, the synthesizable TLM model can be placed under test. The agents provide conversion between the different levels of abstraction. After testing the TLM model, it can be swapped with the RTL implementation (shown at the top right in the image), which can then be tested with the same test vectors.

A powerful and unique capability of the Catapult platform is SLEC (Sequential Logic Equivalence Checking), which can be used to check equivalence between an ESL model (algorithmic or TLM) and an RTL model. This high-level formal verification unlocks the real potential of ESL: it allows the fastest simulations at the algorithmic and TLM levels without risking any design inconsistency.

With directed tests, constrained runs, FSM reset transitions and stall tests, close to 100% verification coverage can be targeted; some unreachable points may require waivers.

The designs can be re-synthesized to different performance points and/or target technologies. Rich Toone of Calypto presents all of this in great detail; it's worth going through the webinar How to Maximize the Verification Benefit of High Level Synthesis.

More Articles by Pawan Fangaria…..


FD-SOI: 20nm Performance at 28nm Cost
by Paul McLellan on 07-28-2014 at 8:01 am

There has been a lot of controversy about whether FD-SOI is or is not cheaper to manufacture than FinFET. Since right now FinFET is a 16nm process (22nm for Intel) and FD-SOI is, for now, a 28nm process, it is not entirely clear how useful a comparison this is. Scotten Jones has very detailed process cost modeling software (that is what semiconductor companies pay him for), so I think his costs are probably very close to the truth. But the bottom line isn't very interesting: neither process seems to have any sort of compelling cost advantage.

However, some interesting factoids have come to light from unlikely sources. First, the CSPA (Chinese Semiconductor Professionals Association) meeting a couple of weekends ago, which SemiWiki's Dan Nenni (嫩以但) attended. Samsung's Kelvin Lo presented on Samsung's foundry strategy. Remember, just before DAC, Samsung announced that it was licensing FD-SOI from ST. At DAC I attended a presentation by Philippe Magarshack of ST at the Samsung booth.

Kelvin presented mobile as being 20nm and 14nm, networking as 14nm, and the Internet of Things (IoT) as 28nm. He said that 28nm is the sweet spot, both because of exploding design costs and because of increasing transistor costs: clearly, Samsung thinks that 20nm transistors will cost more per transistor than 28nm ones. Actually this data is Handel Jones's, but the fact that Samsung presented it presumably means they believe it. And don't forget Samsung doesn't have a 20nm process.

He presented 28nm FD-SOI as low-power and high-performance: 20nm performance at 28nm cost, with PDKs, libraries, IP and an EDA tool flow already available.

Then, during the Cadence quarterly conference call, FD-SOI came up a couple of times. Here is what Lip-Bu said: "Clearly, we are engaging with our foundries based on the customer requirements. So we work very intensively with our customer. If the customer decided to go advanced node in FinFET we support them with the foundry partners. If they decided to move on to the FDSOI, we clearly support that. In my remarks, I mentioned about the DDR4, because of customer requirements, moving to the FDSOI process, and we definitely support and enable that."

So Cadence is migrating at least some of its IP portfolio to FD-SOI. Of course that might just be for ST, so it might not be a significant data point, since we know ST is committed to FD-SOI.

Perhaps more interesting was Lip-Bu's response to a question about FinFET and FD-SOI and whether, as "FinFET gets delayed more", there is a move to FD-SOI: "And so a couple of our key customers requested us to support them. And that's why we have more than 25 new FinFET design projects with our leading customer. And in Q2, we see the development activity increased a lot on that. And then saying that, you know, we also had a couple of customers decide to use the FDSOI for the 28 and 20 and then some go beyond that. So I think we are open minded, and we don't have any bias, one way or the other."

Well, that hardly qualifies as a clear answer, especially the last sentence, which could mean there is activity below 20nm in FD-SOI or maybe not. Ruben Roy of Piper Jaffray was as confused as me, so he asked a follow-up question: "You talked about 20 nanometer. I'm wondering, recently with some of the activity that you're seeing out there, has there been discussions or requests to you to start thinking about and creating libraries for 14 nanometer FDSOI?"

And this time Lip-Bu was less ambiguous: "The answer to you is yes. We have customers request that. We are working with them. And so clearly, supporting the customer success is the most important for us. Make sure that our tools and IP optimized for that process and that approach. And clearly, the customer has demand us for doing it. Clearly we are supporting that, and so to answer your question, yes, there is an increase in the activity of FDSOI."

Despite the clear answer, I'm not sure it is true. As far as I know, nobody has PDKs for a 14nm FD-SOI process, which is a precursor to developing libraries and IP for it. At Semicon, ST presented on 14nm FD-SOI (planar) but it seemed very preliminary. Even 20nm isn't ready for prime time (or PrimeTime!) yet. All FD-SOI activity, as far as I know, starts at ST and is then licensed to Samsung and, perhaps, GlobalFoundries, so that is where it is happening if it really is that far along.


More articles by Paul McLellan…


A Win-Win Royalty Deal Structure in IP Business
by barun on 07-27-2014 at 8:00 pm

Royalties are a critical component of any IP deal. SoC companies want IP companies to share the risk of success (or failure) of their SoC, and to enable that they want IP vendors to accept a substantial part of their payment as royalties. But the customers are also not keen to shell out huge money to IP companies if the SoC is successful, and hence they would like the royalty percentage to be as low as possible, which IP companies find unacceptable in many situations.

One way IP companies can overcome this challenge is to offer a buyout option to their customers. The buyout option allows the buyer to pay a certain amount of money to the seller and stop all future royalty payments. SoC companies will take the buyout option if they see that the cash outflow of all future royalties is more than the buyout price. In exchange, the IP vendor can demand a higher royalty percentage when the buyout option is attached.

The issue here is that only the IP seller is carrying the risk, not the IP buyer. On the downside, i.e. product failure, the IP buyer will not exercise the option and the IP seller loses. On the upside, i.e. product success, the IP buyer will exercise the option and limit the IP seller's benefit. Hence the seller should be able to charge some money for carrying that risk. The question becomes how much the seller should charge, and this can be calculated by option pricing: the buyout option is equivalent to a call option, and the money the IP vendor should charge is the price of that call option.

Here, the present value of the future royalty payments is equivalent to the current price of a stock owned by the IP seller, and exercising the option is equivalent to the IP buyer purchasing that stock from the IP seller.

Let us assume that the expected revenue from the SoC is, on average, USD 1 million every year, and the IP royalty is 3% of that revenue. The SoC lifetime is 5 years and the volatility of SoC revenue is 50%.

Hence the average expected revenue from the IP royalty is USD 30K per year, spread over 5 years, and the volatility of the royalty payments is also 50%.

So the current stock price S0 = PV of the future royalty payments = 30(e^-0.1 + e^-0.2 + e^-0.3 + e^-0.4 + e^-0.5) = USD 112.2K, and the volatility σ = 0.5.

The duration of the royalty payments equals the period of the option, since the IP buyer can exercise the option at any time during this period; hence the option duration T = 5.

The dividend yield of the stock is q = 20% (i.e. 1/T), because for every year the IP buyer does not exercise the option, he loses 20% of the stock's value (he is paying that year's royalty to the IP seller and cannot save that money).

Let us assume that the buyout price is USD 125K, i.e. the IP buyer needs to pay USD 125K to the IP seller to avoid all future royalty payments (equivalent to owning the stock). So the exercise price K = USD 125K.

Let us assume that the cost of capital is 10%, hence r = 0.1.

Now the price of the call option = S0 e^(-qT) N(d1) - K e^(-rT) N(d2), where

d1 = [ln(S0/K) + (r - q + σ²/2)T] / (σ√T)
d2 = d1 - σ√T = [ln(S0/K) + (r - q - σ²/2)T] / (σ√T)

Plugging in the numbers gives d1 = 0.015 and d2 = -1.10.

So the price of the call option = 112.2 × e^(-0.2×5) × N(0.015) - 125 × e^(-0.1×5) × N(-1.10) ≈ USD 10.7K.
Hence the IP buyer should pay roughly USD 10.7K to the IP seller as the price of the option.
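For readers who want to check the numbers, here is the same calculation transcribed into Python (standard library only; this is a direct reading of the formulas above, so any small differences come only from rounding of the intermediate values):

```python
# Black-Scholes price of the royalty buyout (call) option, transcribing
# the formulas above. All amounts are in USD thousands.
from math import exp, log, sqrt, erf

def norm_cdf(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

royalty = 30.0                     # expected royalty per year
r, q, sigma, T = 0.1, 0.2, 0.5, 5  # cost of capital, dividend yield, volatility, years
K = 125.0                          # buyout (exercise) price

# The "stock" price is the present value of the five annual royalties.
S0 = sum(royalty * exp(-r * t) for t in range(1, 6))

d1 = (log(S0 / K) + (r - q + sigma ** 2 / 2) * T) / (sigma * sqrt(T))
d2 = d1 - sigma * sqrt(T)
call = S0 * exp(-q * T) * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

print("S0 = %.1f, d1 = %.3f, d2 = %.2f, option price = %.1f" % (S0, d1, d2, call))
# S0 = 112.2, d1 = 0.015, d2 = -1.10, option price = 10.7
```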
So, to summarize:

  • The IP buyer will pay 3% of SoC revenue as royalty to the IP seller
  • The SoC the IP gets integrated into is expected to generate USD 1 million in revenue per year for the next 5 years
  • The volatility (standard deviation) of SoC revenue is 50%
  • The IP buyer has the right to stop all future royalty payments by paying the IP seller USD 125K, and for this right the buyer pays roughly USD 10.7K up front

Altera vs Xilinx FinFET Update
by Daniel Nenni on 07-27-2014 at 10:10 am

One of the things I do in my spare time is listen to quarterly conference calls and try to sort fact from fiction. I compare past calls to the current one and attempt to predict what’s coming next. Confucius said, “Study the past if you would define the future” and I’m a big believer in that.

Paul McLellan wrote about the Xilinx call earlier this week:
Xilinx: Revenue Down, Profit Up, FinFET on Schedule

Here are the Altera financial details from the press release:

Altera (NASDAQ:ALTR) expects Q3 revenue to be down 2% to up 2% Q/Q. Q2 revenue by segment:

  • Telecom & wireless +28% Y/Y
  • Industrial/military/automotive +14%
  • Networking/computer/storage -6%
  • Other revenue +16%

Altera says its high-end 28nm FPGAs delivered "very good" performance. Gross margin fell 90 bps Q/Q and 100 bps Y/Y to 67%; Q3 gross margin guidance is 67% (+/- 0.5%).

According to the financial people, Xilinx and Altera are "archrivals", so rarely will you read about one without a mention of the other. The funny thing is they don't say each other's company names on the conference calls. It is very Harry Potter-ish, with the "he who must not be named" thing.

Here are the Xilinx financial details from the press release:

Xilinx (NASDAQ:XLNX) expects FQ2 revenue to be flat to down 4% Q/Q. Revenue by segment:

  • Industrial, aerospace & defense (31% of revenue) -9% Q/Q and -11% Y/Y
  • Telecom & data center (50% of revenue) +1% Q/Q and +20% Y/Y
  • Broadcast, consumer & automotive (16% of revenue) +5% Q/Q and +3% Y/Y
  • Other revenue (3% of revenue) +39% Q/Q and +11% Y/Y

Gross margin was up 150 bps Q/Q to 69.1%; Xilinx expects an FQ2 GM of 70%. $100M was spent on buybacks.

Based on the Xilinx and Altera calls, Altera is catching up at 28nm. Xilinx was first to that node and had a record 70% market share last fiscal year; this year they are predicting 60%. Xilinx is six months or so ahead at 20nm as well, but what about FinFETs?

(Comments edited for space)

John Pitzer – Credit Suisse: I wonder if you can give us an update on FinFET. How comfortable are you still with your timeline around FinFET?

Moshe Gavrielov – XLNX CEO: They're (TSMC) doing 16 in two cycles. They're doing a FinFET and they're doing the FinFET Plus version, and we're going to be using the FinFET Plus version. We're now expecting to tape out our first 16-nanometer device in the first quarter of 2015.

Interesting: XLNX will skip 16FF in favor of the better-performing 16FF+, thus giving up a six-month lead. TSMC took a step back and boosted the density and performance of 16nm to better compete with Intel and Samsung at 14nm, and that is what competition is all about!

Ambrish Srivastava – BMO Capital Markets: Okay and then a quick follow-up on the 14 nanometer. Could you just please remind us? Last quarter you said that tapeout was delayed by a quarter. What is the timeline for sampling and also for mass production? Thank you.

John P. Daane – ALTR CEO: So, schedule for our Intel 14 nanometer product remains Q1 2015 for our first production part tapeout, which will sample roughly midyear to customers.

It’s good to see that Altera is still on track considering all of the Intel 14nm delays. This will be an interesting comparison of FinFET processes: Xilinx vs Altera, TSMC versus Intel. Unlike PowerPoint slides, silicon does not lie, absolutely.


Getting a Tapeout Quote in 10 Minutes!
by Daniel Nenni on 07-26-2014 at 8:00 pm

On Thursday, July 31st at 8 AM Pacific Daylight Time I'll be moderating a webinar that will demo eSilicon's new GDSII quoting portal. You can find more details about the webinar here, and you can register here.

I’ve worked with a lot of companies that do advanced custom IC designs. Getting a quote that covers all the NRE requirements for tapeout and a unit price for the projected volume production can take weeks to sort out. eSilicon is claiming to handle all that with a comprehensive series of online menus in 10 minutes. The claim is seven steps to a complete quote that the company will stand behind.


There is a wide range of options available for various packages, testers and processes. The site is done in conjunction with TSMC, so all process options are supplied by them, from 28nm to 350nm. I’ve used the Internet to buy all kinds of things. My beautiful wife uses it even more. I know people who have bought cars and even homes through the Internet. The NRE alone for an advanced custom IC can be over $1M, so eSilicon seems to be setting a new benchmark for online transaction size with this tool.

It is interesting to note that this kind of technology is opening up what appears to be a new business model for semiconductor design and manufacturing. Up to now, Internet-based transactions have been somewhat rare in the semiconductor industry compared with other businesses, such as enterprise software-as-a-service. Given that the Internet was literally made possible by the semi industry, that is even more unusual.

To make their point about flexibility, eSilicon has recruited Bob Dunnigan, vice president of operations at Ikanos to drive the quote generation process. Bob will be specifying his tapeout requirements as eSilicon fills in the forms and generates the quote.

It occurs to me that a tool like this is useful well beyond standard quoting. It’s also a great “what if” tool to explore design, process, package and test options. By iteratively using a tool like this, you can answer questions such as: “What is the impact to the lifetime profit of my product if I use 7 vs. 6 metal layers?” Alternatively, “What happens to the total cost of this project if I reduce test time by one second, or if I use dual vs. single site testing?”
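As a toy example of the kind of what-if arithmetic such a tool automates (all the rates and volumes here are invented, not eSilicon's numbers):

```python
# Invented what-if: does cutting one second of test time pay for itself?
# The tester rate and volume are made-up illustrations, not eSilicon data.
tester_cost_per_hour = 150.0  # USD, hypothetical tester rate
lifetime_volume = 2000000     # units, hypothetical
seconds_saved = 1.0

saving = lifetime_volume * seconds_saved * tester_cost_per_hour / 3600.0
print("lifetime test-cost saving: $%.0f" % saving)  # ~$83,333
```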

These questions are difficult to answer when it takes a month or more to assemble a quote. It’s a different story when it only takes 10 minutes. I hope Bob Dunnigan will try some “what if” options during the demo this Thursday. It will be interesting to watch. I hope you can join me.

More Articles by Daniel Nenni…..


Taking a leap forward from TCAD
by Pawan Fangaria on 07-26-2014 at 8:00 am

We all know that Technology Computer Aided Design (TCAD) simulations are essential in developing processes for semiconductor manufacturing. By their very nature (involving the physical structure and corresponding electrical characteristics of a transistor or device), they are predominantly finite-element simulations with complex sets of equations to solve, requiring large amounts of computation and thus simulation times that increase exponentially with the size of the device. It was okay for earlier generations of semiconductor technology nodes to rely on transistor- or small-cell-level process development and characterization to build large designs, which were then verified through several build-and-test cycles at actual foundries. However, for today's nanometer technology nodes and large, complex, high-density designs with complex transistor structures like FinFETs, which exhibit excessive variability in manufacturing, it's clear that the same old methodology will no longer be effective. Along with the technology, the economics of chip manufacturing and marketing has become equally pressing, needing a substantial reduction in P/Q ratio and very fast TAT in order to take advantage of ever-shrinking windows of opportunity.

TCAD simulations still have to be done to accurately develop the process and model a semiconductor device, but the challenge is to scale that to model today's large semiconductor designs with the same level of accuracy, such that they can be correct by construction and manufactured without any fault. I have been writing about the novel Virtual Fabrication Platform provided by Coventor through its SEMulator3D tool set, which provides detailed 3D modeling capability for any semiconductor design to enhance accuracy, reduce variability and improve yield, and my belief in it was further strengthened after reading this blog written by Mike Hargrove.

Mike reveals that Coventor's process modeling platform combines with the statistical device TCAD suite of tools from Gold Standard Simulations Ltd. (GSS), an innovative company led by the Device Modeling Group at the University of Glasgow, which provides physical accuracy, efficiency and usability in the true sense. What scales this truly gold-standard device modeling to large designs is SEMulator3D's voxel-based mesh, its very fast and efficient computational engine, and several other features such as 'Structure Search', 'Layout-Aware Rebuild', Expeditor and the etch processes we have talked about in the past.

The above 16nm FinFET SRAM half-cell model, built in SEMulator3D in less than an hour, is a result of the collaboration between Coventor and GSS. Add to that just a few hours of additional fin profile variation simulations. A complete design block can be modeled and simulated to perfection before sending it to the actual foundry, saving the huge money and time otherwise spent on build-and-test cycles through the fab.

SEMulator3D can output tetrahedral meshes which can easily be imported into standard TCAD tools. In the above image, a tetrahedral mesh is shown along with the individual pull-up (PU), pull-down (PD) and pass-gate (PG) transistors for single-transistor simulation studies. Advanced studies, such as the effect of the fin profile (which involves multiple process parameters) on individual transistor performance, can be done with ease.

The PU and PD devices were simulated using GSS’s TCAD tool Garand. The important electrical characteristics such as the hole-density profile of a PU device can be studied to optimize the device performance.

It's interesting to know that today we have a solution that doesn't require expensive in-fab experiments to choose the best technology option for a particular large semiconductor design, modeled accurately to the desired electrical characteristics, with improved yield and reliability and reduced variability. This significantly reduces design risk with the kind of advanced and complex technology nodes we have today, and at an order-of-magnitude lower cost.

More Articles by Pawan Fangaria…..