Toshiba’s ReRAM R&D Roadmap
by Ed McKernan on 09-30-2012 at 11:00 pm

Most companies in the memory business have ReRAM on their radar, if not on their roadmaps. Toshiba has made some bullish comments about its ReRAM roadmap and chip sizes at a recent R&D Strategies Update. At face value, the schedule would put Toshiba quite a bit ahead of its competitors. Over at ReRAM-Forum.com, we have done a little digging into these announcements and wonder what it all means. Read More at the ReRAM-Forum Blog.


Converge in Detroit
by Paul McLellan on 09-30-2012 at 10:04 pm

When I worked for VaST, we went to a show that I'd never heard of in EDA: SAE Convergence (SAE is the Society of Automotive Engineers). It is held once every two years and focuses on transportation electronics, primarily automotive, although there did seem to be some aerospace stuff there too. This is an even year, and Convergence runs October 16-17.

The conference is held in the Detroit Convention Center (officially the Cobo Center). This is one of those convention centers that the city built in a blighted area (OK, pretty much downtown, but I repeat myself) to try to revitalize it. It didn't work. The area still has nothing there apart from a couple of hotels that people wiser than us didn't stay in. We asked in the hotel where we could get breakfast, and it turned out to be a four-mile taxi ride.

You can see the old glories of Detroit: what obviously used to be smart department stores, now just boarded up. It is hard to believe that in the 1950s Detroit was not just rich, it was the richest city in the entire world. The one place in Detroit that seemed safe and fun was Greek Town, which really did have some very good Greek restaurants (and lots of other types).

Anyway, Convergence is mostly about automotive electronics. There are really two separate businesses in automotive: safety-critical (airbags, ABS, engine control) and everything else (adjusting your seat, GPS, radios). The issues for the two markets are very different. After all, if you have an acceleration problem like Toyota did a few years ago (almost certainly driver error and nothing to do with electronics), you have a big problem. If you occasionally have trouble getting to the next track on your CD, not so much.

Infotainment is pretty much like any other consumer product. A lot of it is ARM-based. There is not much difference between designing an in-car GPS and designing a smartphone. In fact, nobody really wants an in-car GPS any more since we already have smartphones, just as the ideal in-car entertainment system is simply a plug for our iPhone/Android phone. An expensive multiple-disk CD changer just isn't needed when we have all our music in our pockets.

Safety critical is an odd beast. Quite a lot is based on unusual microprocessors that you’ve probably never heard of (NEC v850 for example). Some is standard (Freescale is a big supplier). There are in-car networks like FlexRay and CANbus. When I was visiting GM back then, they had two big strategic programs: embedded software and emissions/mileage. Since much of the second is down to the engine control computers, which are mostly software, the two programs had a large area of overlap.

Synopsys will be at Convergence, booth 719. They will have 3 demos, one in safety critical, one in infotainment and one in multi-domain physical modeling:

  • Virtual Hardware-in-the-Loop & Fault Testing for Safety Critical Embedded Control. This demonstration will highlight how Virtualizer Development Kits based on microcontroller virtual prototypes can help start system integration and test early, as well as facilitate fault injection testing for safety-critical embedded control applications. Microcontrollers from Freescale, Renesas and Infineon, as well as integration with automotive development tools such as Simulink, Vector and Saber, are supported by Synopsys.
  • Accelerating the Development of Android SW Stacks for Car Infotainment Application. This demonstration will highlight how Synopsys Virtualizer Development Kits for ARM Cortex Processors provide an Android-aware debug and analysis framework, allowing for a deterministic and successive top-down debug approach. The demonstration will leverage a virtual prototype of the ARM big.LITTLE subsystem.
  • Multi-Domain Physical Modeling and Simulation for Robust xEV Power Systems. This demonstration will highlight how SaberRD is used to create high-fidelity physical models of key components in an xEV powertrain and how its extensive selection of analysis capabilities can be applied to assess system reliability and achieve robustness goals.

Click here to register for Convergence (sorry, you missed the pre-registration discount).


How Much Cost Reduction Will 450mm Wafers Provide?
by Paul McLellan on 09-28-2012 at 9:05 pm

I’ve been digging around the Interwebs a bit trying to find out what the received wisdom is about how big a cost reduction can be expected if and when we transition to 450mm (18″) wafers from today’s standard of 300mm (12″). And the answers are totally all over the place. They vary from about a 30% cost reduction to a cost increase. That is, 450mm wafers may cost more per square cm than 300mm.

Here are some of the issues:

Creating wafer blanks. This is currently very expensive and is not expected to come down to equivalent 300mm costs. It turns out that pulling an 18″ silicon ingot has to be done really slowly for stress reasons. A 450mm wafer blank is expected to be twice the price per unit area of 300mm.

Double patterning. There is some saving during lithography with 450mm versus 300mm because you don't have to switch from one wafer to the next so often. But the main litho step of flashing the reticle on each die gains nothing from having more die on the wafer: a flash is a flash. And with double patterning that step is a bigger fraction of the whole manufacturing process. Oh, and don't think EUV will save the day completely. It is so late that the earliest it can be introduced is maybe 9nm, and the expectation is that EUV will require double patterning too (it is 13.5nm light, although I don't really understand why the OPC techniques we used to get, say, 90nm features with 193nm light without double patterning won't work, but I'm not an expert on optics).

Since 450mm wafers hold roughly twice as many die as 300mm, any given volume involves half as many wafers so, except for the highest volume parts like flash, there will be an increase in the number of machine setups (fewer machines, fewer fabs, but more switches from lot to lot).
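To put some rough numbers on that factor of two, here is a back-of-the-envelope Python sketch. The die size, edge exclusion and processed-wafer cost ratios are purely illustrative assumptions, not data from any fab; the point is simply that the die-count gain is around 2.3-2.4x, so cost per die only improves if a processed 450mm wafer costs meaningfully less than that multiple of a 300mm wafer.

```python
import math

def gross_die_per_wafer(wafer_diameter_mm, die_area_mm2, edge_exclusion_mm=3.0):
    """Classic approximation: usable-area term minus an edge-loss correction."""
    d = wafer_diameter_mm - 2 * edge_exclusion_mm      # usable diameter
    return math.pi * (d / 2) ** 2 / die_area_mm2 - math.pi * d / math.sqrt(2 * die_area_mm2)

die_area = 100.0                                       # assumed die size in mm^2
dpw_300 = gross_die_per_wafer(300, die_area)
dpw_450 = gross_die_per_wafer(450, die_area)
print(f"300mm: {dpw_300:.0f} die   450mm: {dpw_450:.0f} die   ratio {dpw_450 / dpw_300:.2f}x")

# Cost per die only improves if the processed-wafer cost grows more slowly than the die count.
for wafer_cost_ratio in (1.3, 1.5, 2.0, 2.4):          # assumed 450mm/300mm processed-wafer cost
    change = (wafer_cost_ratio / dpw_450) / (1.0 / dpw_300) - 1
    print(f"450mm wafer at {wafer_cost_ratio:.1f}x cost -> {change:+.0%} change in cost per die")
```

With these assumed numbers, a 1.3x processed-wafer cost gives a healthy cost-per-die reduction, while anything approaching 2.4x turns the transition into a cost increase: the same qualitative spread as the estimates quoted above.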

Cost of equipment. This is the biggie. If 450mm equipment only costs about 1.3X 300mm equipment then the future is probably rosy. But there are people out there predicting cost increases of 3X which will make the transition economically infeasible.

Capital efficiency. The big advantage, if 450mm is a success, is that it is more capital efficient. It doesn't take as much capital to put the same capacity in place. But if that is really so, can the semiconductor equipment industry survive?

Process generations. Received wisdom is that only around now are semiconductor equipment manufacturers through their R&D costs and starting to make money on their investment in the 300mm transition. That's a lot of process nodes. How many more nodes will there be (with current equipment) to recover any 450mm transition?

Who wants it? Intel and Samsung. TSMC is apparently a reluctant guest to the party since their business requires smaller runs and lots of changes from one design to another, so realistically their cost savings will be less than Intel and Samsung whatever happens.

Human resources. Process development to keep the 20nm-14nm-9nm cadence going is already arguably short of people, since it has become so demanding. Where are the people going to come from to manage the 450mm transition? If we get 450mm but not 14nm, what does that do to costs?

SEMI, the semiconductor equipment association, has been against 450mm. Almost everything they have written for the last 5 years has been against it. I can't see how it will be a win for them. In fact, I think the Intel/TSMC/Samsung investments in ASML are an acknowledgement of that. The R&D investment has to come from the end users of the equipment.

Here's a SEMI quote, admittedly from 2010: "450-mm wafer scale-up represents a low-return, high-risk investment opportunity for the entire semiconductor ecosystem; 450-mm should, therefore, be an extremely low priority area for industry investment," says a recent SEMI report.

OK, so that’s not very informative and I haven’t come across anything that convinces me that anyone really knows the answer. But there is a lot of food for thought.


Variation at 28-nm with Solido and GLOBALFOUNDRIES
by Kris Breen on 09-27-2012 at 9:00 pm

At DAC 2012 GLOBALFOUNDRIES and Solido presented a user track poster titled “Understanding and Designing for Variation in GLOBALFOUNDRIES 28-nm Technology” (as was previously announced here). This post describes the work that we presented.

We set out to better understand the effects of variation on design at 28-nm. In particular, we had the following questions:

  • How does process variation compare between 28-nm and 40-nm?
  • How do process corners compare vs. statistical variation?
  • How can variation be handled effectively in 28-nm custom design?

To answer these questions we looked at GLOBALFOUNDRIES 28-nm and 40-nm technologies, and performed variation-aware analysis and design with Solido Variation Designer.

We first looked at how process variation has changed from GLOBALFOUNDRIES 40-nm technology to the 28-nm technology by measuring transistor saturation current, idsat, for a minimum-size transistor in each technology. The plots below show the statistical distribution of idsat in 28-nm (left) and 40-nm (right) technology with global variation applied. The results show a significant overall increase in idsat variation from 40-nm to 28-nm. When both global variation and mismatch effects were included, the overall increase in idsat variation was lower, but still quite significant. This result underscores the increasing importance of accounting for variation.

Next we used a delay cell and a PLL VCO design to compare PVT corner simulations with statistical simulations in GLOBALFOUNDRIES 28-nm technology. The plots below show the results for the delay cell (left) and for the PLL VCO (right). From the plots it can be seen that, for the simple delay cell, the FF and SS process corners reasonably predict the tail region of the statistical performance distribution. However, for the more complex PLL VCO, the FF and SS corners do not align well with the best- and worst-case performance of the statistical distribution. This is because simple FF and SS corners are normally extracted based on digital performance assumptions and often do not reflect the true worst-case performance conditions of a particular design. To avoid this fundamental limitation, it is necessary either to increase the number of corners being simulated or else to use statistical simulation.
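To see why digital-style FF/SS corners can miss the worst case, here is a toy Python sketch. It is not the GLOBALFOUNDRIES model or anything Solido does internally, just an illustrative stand-in: a made-up duty-cycle metric that depends on the imbalance between NMOS and PMOS strength, so shifting both device types together (as FF and SS corners do) barely moves it, while independent statistical variation does.

```python
import numpy as np

rng = np.random.default_rng(0)

def duty_cycle(p_nmos, p_pmos):
    """Toy metric: depends on the NMOS/PMOS *imbalance*, not on overall speed."""
    return 50.0 + 8.0 * (p_nmos - p_pmos)

# Digital-style corners shift both device types together (values in units of sigma).
ff = duty_cycle(+3.0, +3.0)
ss = duty_cycle(-3.0, -3.0)

# Statistical simulation lets the two device types vary independently.
p_n = rng.standard_normal(100_000)
p_p = rng.standard_normal(100_000)
mc = duty_cycle(p_n, p_p)
lo, hi = np.percentile(mc, [0.135, 99.865])            # ~ +/-3-sigma quantiles

print(f"FF corner: {ff:.1f}%   SS corner: {ss:.1f}%")  # both sit at the nominal 50%
print(f"Monte Carlo 3-sigma spread: [{lo:.1f}%, {hi:.1f}%]")
```

In this toy case the FF and SS corners both sit at the nominal value while the statistical distribution is far wider; a sigma-driven corner, by contrast, would be extracted at the actual 3-sigma points of that distribution.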

Solido Variation Designer helps with both of these approaches: Solido’s fast PVT analysis can be used when a large number of corners exist to reduce the number of simulations required, and Solido’s sigma-driven corner extraction capability can find accurate statistical corners that can be used instead of the traditional process corners.

Variation-Aware Design with the 28-nm PLL VCO

With the PLL VCO design, we used Solido Variation Designer to perform both fast PVT corner analysis and statistical design (3-sigma and high-sigma analysis).

PVT corner analysis is an integral part of a variation-aware design flow, taking into account different process conditions for various devices (e.g. MOS, resistor, capacitor) as well as environmental conditions that may affect the design (e.g. voltage, temperature, bias, load). When designing for 28-nm technology, the number of corner combinations can readily become very large.

For the PLL VCO design, the conditions that needed to be taken into account were process conditions for MOS, resistor, and capacitor devices, along with voltage and temperature conditions. Even with just these few conditions, several hundred combinations needed to be considered to properly cover the design. Since the PLL VCO has a relatively long simulation time to measure its duty cycle, it would be very time consuming to simulate all combinations to provide sufficient coverage.
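To see how quickly such corner counts grow, here is a small Python sketch that enumerates a hypothetical corner space; the specific condition values are made up for illustration and are not taken from the GLOBALFOUNDRIES design kit.

```python
from itertools import product

# Hypothetical condition values, purely for illustration (not from the actual design kit).
conditions = {
    "mos":         ["TT", "FF", "SS", "FS", "SF"],
    "resistor":    ["typ", "min", "max"],
    "capacitor":   ["typ", "min", "max"],
    "voltage":     ["0.9V", "1.0V", "1.1V"],
    "temperature": ["-40C", "25C", "125C"],
}

corners = list(product(*conditions.values()))
print(f"{len(corners)} corner combinations")                      # 5*3*3*3*3 = 405

# If one duty-cycle measurement takes, say, 10 minutes of SPICE time:
print(f"~{len(corners) * 10 / 60:.0f} hours to simulate them all exhaustively")
```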

The steps we used for fast PVT design with Solido Variation Designer are shown below:

Solido Variation Designer’s Run Fast PVT application made it possible to find the worst-case conditions 5.5x faster than running all combinations. Furthermore, the results from Fast PVT could then be used with Solido’s DesignSense application to determine the sensitivity of the design under variation-aware conditions. This made it much more practical to perform iterative design, and to explore opportunities to improve the duty cycle performance under worst-case conditions.

Statistical variation analysis and design is another key part of variation-aware design. As discussed earlier, corner analysis does not always capture design performance properly under variation conditions. Furthermore, specific sigma levels often need to be achieved, such as 6-sigma, which require the use of statistical simulation.
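The reason brute-force Monte Carlo becomes impractical at these sigma levels is simply how rare the failures are. A quick sketch using the standard normal tail (illustrative only, and not Solido's algorithm) shows roughly how many samples are needed just to expect a single failure:

```python
import math

def normal_tail(sigma):
    """One-sided standard-normal tail probability P(X > sigma)."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

for sigma in (3, 4, 5, 6):
    p_fail = normal_tail(sigma)
    print(f"{sigma}-sigma: P(fail) ~ {p_fail:.2e}, "
          f"~{1 / p_fail:,.0f} samples per expected failure")
```

At 6-sigma that works out to roughly a billion samples per observed failure, which is why the multi-million-fold reductions in simulation count reported below are what make high-sigma verification practical.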

Solido Variation Designer makes it possible to verify designs at any practical sigma level. For lower-sigma designs, such as 3-sigma, Solido Variation Designer can be used with GLOBALFOUNDRIES models to extract 3-sigma corners, perform design iteration, and verify the design. The image below shows the result of performing 3-sigma corner extraction on the PLL VCO duty cycle. As can be seen in the image, Variation Designer was able to extract corners for the PLL VCO design that bound the distribution much better than the FF/SS corners.

For higher sigma levels, using Solido Variation Designer’s High-Sigma Monte Carlo (HSMC) capability makes it possible to perform fast, accurate, scalable and verifiable high-sigma design. This is important for high-volume and/or high-quality designs including memory, standard cell, automotive, and medical applications.

The steps we used for high-sigma design with Solido Variation Designer are shown below:

Solido Variation Designer provided a 100,000x and 16,666,667x reduction in the number of simulations required to accurately verify the design to 5- and 6-sigma, respectively. In addition, Variation Designer allowed the extraction of precise high-sigma corners that could be used to design iteratively while taking into account high-sigma variation.
Below are the results from statistical design with the PLL VCO:

It was clear during our work that variation is an important consideration for design at 28-nm. To achieve yielding, high-performance designs, it is necessary to be aware of variation and take it into account in the design flow. Using Solido Variation Designer with GLOBALFOUNDRIES technology makes it possible to do this efficiently and accurately.


Will Paul Otellini Convince Tim Cook to Fill Intel’s Fabs?
by Ed McKernan on 09-27-2012 at 8:30 pm

An empty Fab is a terrible thing to waste, especially when it is leading edge. By the end of the year Intel will, by my back-of-the-envelope calculation, be sitting on the equivalent of one idle 22nm Fab (cost: $5B). What would you do if you were Paul Otellini?

Across the valley, in Cupertino, you have Tim Cook, whose modus operandi is to scour the world for underutilized resources and plug them into the ever-growing Apple Keiretsu at below market prices. It’s always time to go more vertical.

With the launch of the iPhone 5 behind him and the supply chain ramped to deliver 50MU of iPhone 5 in Q4, there seems to be a silly game in the press of trying to raise Tim's dander over all that is wrong in the Apple ecosphere. The component shortages that exist today are in reality the flip side of the coin known as unlimited demand on day one of a new product launch. However, with Samsung ever on Apple's heels, the game doesn't stop, and Apple must continue to innovate as well as wring out supply chain inefficiencies. The one that, no doubt, is staring Cook in the eye for 2013 is the A6 processor, currently in production in Samsung's Austin Fab. It is the last major component being produced by Samsung, and it needs to move to a friendlier foundry.

For months the rumor mills have been rattling with stories of a TSMC-Apple partnership at 20nm targeting first production at the end of 2013. This seems logical, given that Apple is moving to a two-supplier model across most of its major components. If they were to continue with this strategy, then it would mean picking up another foundry (i.e. GlobalFoundries or Intel) to go hand in hand with TSMC and avoid any single point of failure due to "Acts of God" or unforeseen upside, both of which we have seen in the past 24 months.

Intel's announcement a couple of weeks ago of a PC slowdown in Q3 came with a hint that 22nm is yielding well. If, however, Intel's revenues going forward are flat or only slightly rising, as opposed to the 24% growth the company experienced in both 2010 and 2011, then the Fab expansion plans it outlined last year for 22nm and 14nm raise the question: for what? Perhaps it was the only strategy Otellini could logically employ as Intel tries to outrun TSMC and Samsung.

A year ago, there were doubts as to whether Intel's new 22nm FinFET process would yield as well as previous process technologies. If the PC market and the data center continued to grow as in past years, and if Ivy Bridge were to cannibalize the graphics cores of AMD and nVidia, then the argument could be made to expand Intel's 22nm Fab footprint from 3 to 4. And so it is expected that at year-end the 4th Fab will come on line while Intel is swimming in well-yielding Ivy Bridges. Look out below, AMD and nVidia, your days may be numbered in a soft PC market.

The addition of two mammoth 14nm Fabs that can be upgraded to 450mm to Intel's capex budget seems to speak of insanity, unless Intel expects them to come on line much sooner and believes they truly do represent a 4-year lead over competitors. Mark Bohr at IDF mentioned that 14nm will be ready for production at the end of 2013, and word is that the 14nm successor to Haswell, called Broadwell, is already up and running Windows. This raises the question: is Broadwell really two years away from production, or will Intel launch it early, setting up a 22nm-to-14nm Fab transition in 2H 2013? Otellini would seem to be in a position to deploy his large, highly efficient 22nm aircraft carriers in any number of foreign oceans, wreaking havoc. Or perhaps to aggressively leverage them for a long-term fab deal with Apple.

If Otellini were to offer Apple free wafers, would Tim Cook disregard it? Preposterous, you say. OK, but this is what game theory is all about: you have to test the limits, and I believe that until the summer slowdown, Otellini's bid to Apple was to sell wafers with a 60% margin markup.

In this new environment, Otellini will be more likely to offer a price that is closer to cost plus a small adder, for any time starting in the first half of 2013 and extending through 2015. What are the ramifications for Apple? The new A6 processor is a 95mm² die in Samsung's 32nm process and costs somewhere around $25 (I have seen estimates from $18 to $28). In round numbers, the A6 in Intel's 22nm process would be about 50mm². If Intel saves Apple $10 a chip, then at 300MU a year that is equivalent to $3B that drops to Apple's operating line, and it would add nearly $50 to Apple's stock price (based on a 15.5 P/E).
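For reference, the arithmetic behind that last sentence is a simple back-of-the-envelope calculation. In the sketch below, the share count of roughly 940 million is my own assumption; the other inputs are the estimates quoted above.

```python
saving_per_chip = 10        # assumed $ saved per A6 on Intel 22nm (author's figure)
annual_units    = 300e6     # ~300MU per year (author's figure)
pe_ratio        = 15.5      # P/E multiple used in the article
shares_out      = 940e6     # assumed AAPL shares outstanding at the time (illustrative)

annual_saving  = saving_per_chip * annual_units    # -> $3.0B to the operating line
market_cap_add = annual_saving * pe_ratio          # capitalized at the P/E multiple
per_share      = market_cap_add / shares_out       # -> roughly $50 per share
print(f"${annual_saving / 1e9:.1f}B per year -> about ${per_share:.0f} per share at a {pe_ratio} P/E")
```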

The overriding issue for Intel and Paul Otellini is, as I mentioned before, that they need to move to 14nm as quickly as possible and take as much of the market with them (both x86 and Apple), thereby eliminating the threat posed by TSMC and Samsung as foundries looking to supply a greater percentage of the total semiconductor market built on leading-edge processes. Until the last couple of years, Intel consistently had over 90% of the leading-edge compute semiconductor content delivered with its x86 processors, a legacy that goes back to the transition from IBM mainframes to the desktop PC.

The End Game continues to get more interesting as we get closer to “All In with Leading Edge.”

Full Disclosure: I am Long AAPL, INTC, QCOM and ALTR


Samsung going vertical: Qualcomm cries, CEVA laughs
by Eric Esteve on 09-27-2012 at 11:09 am

These last few days have been full of Apple-related stories; maybe it's time to discuss a new topic? For example Samsung, Apple's direct competitor in the smartphone market, and its move toward more vertical integration. Everybody working in the SC industry knows that Samsung is ranked #2 behind Intel, even if the two companies' SC product mixes are pretty different. While Intel's revenue comes essentially from processors, Samsung's SC revenue comes from three families: DRAM, NAND and System LSI. As you can see from the quarter-by-quarter revenues for the last four years, System LSI revenues are growing drastically, both in value and, even more important, as a percentage of overall SC sales. "System LSI" sounds almost obsolete as a name for a product line, but when you realize it represents products like Exynos (application processors) or the CMC221S (2G/3G/4G GSM/WCDMA/LTE baseband), in direct competition with, for example, Qualcomm's MSM8960 integrated application processor, Texas Instruments' OMAP5, or Apple's A6, you then consider the System LSI name more respectfully…

If you look at the bill of materials (BOM) for a smartphone, you can see that there will always be DRAM, NAND flash and an application processor plus a baseband processor (or an integrated AP + BB). You can imagine that Samsung's top management made this analysis some time ago, when they decided to invest heavily in developing top-class smartphones, considering that manufacturing the high-value SC content mentioned above was a good way to keep that value inside the company. That is the definition of vertical integration: try to develop (and manufacture, if it makes economic sense) as much as possible of the various parts of a system, and the system itself.

Let's focus on the non-memory semiconductors, the application processor and the baseband processor, as these two are the heart of a smartphone and both part of System LSI. Samsung is not one of the handful of pioneers of the wireless industry, like Nokia, Ericsson, RIM or Panasonic, and Samsung SC entered the wireless chip segment more recently than Qualcomm, TI or ST-Ericsson. This position can be seen as a challenge, but it also gives the newcomer a competitive advantage: the company can very quickly select the winning solution, an ARM IP core for the application processor and a CEVA DSP IP core for the baseband processor, whereas the pioneers did some experimenting in the early days, sometimes ending up at a dead end.

When Samsung (the OEM) builds smartphones, the company can select the chips that best fit the need for a specific product, or even a specific market for the same product. We have a good example with the Galaxy Nexus LTE, which uses a TI OMAP 4460 AP, a Samsung CMC221 LTE baseband and a VIA Telecom CBP 7.1 CDMA baseband, to be compared with the Galaxy S3 LTE supporting Korean operators, which integrates Samsung devices for both the AP and the baseband. In both cases, the products integrate basebands developed around CEVA DSP IP, as VIA Telecom also licenses CEVA products.

But the road to complete vertical integration is long, and it certainly passes through compromises, as Samsung (the OEM) does not necessarily integrate Samsung SC products. We have a good example with the three variants of the Galaxy S3 sold in Korea, Europe and the US. The Korean product, as noted above, is 100% Samsung for the AP and the baseband, while the product sold in Europe integrates a baseband from Intel Wireless (Infineon, for those who remember). By the way, in both cases the baseband also relies on a CEVA DSP IP core for the processing, as Intel Wireless licenses CEVA as well. The Galaxy S3 sold in the US is based on a Qualcomm MSM8960, an integrated 2/3/4G baseband and application processor. This is a strength for Qualcomm, as Samsung is one of the biggest revenue drivers for the company today. It also represents a significant threat to Qualcomm as Samsung moves toward more vertical integration…

This vertical integration strategy represents a huge opportunity for CEVA: from a DSP perspective, Samsung exclusively uses CEVA DSPs for its LTE designs, so this represents a very strong LTE driver for CEVA. It is interesting to note that many of CEVA's other customers supply chips into Samsung handsets today, including Broadcom, Intel, ST-Ericsson, Spreadtrum and VIA Telecom. A rough estimate (from CEVA) is that CEVA ships in more than 50% of all Samsung handsets today, and that market share continues to grow as Samsung tends to use non-Qualcomm baseband solutions. This did not happen by chance: CEVA has been a long-time partner and is the DSP inside Samsung's LTE roadmap, as we can see from a 2010 press release announcing that Samsung was using CEVA for its LTE development. That's why, as Samsung goes further in vertical integration and gains market share in the smartphone segment (the segment where most of the profit in the overall handset market is made), Qualcomm could cry, Apple moves the fight into the legal arena… and a much smaller but successful IP company, CEVA, will laugh!

By Eric Esteve from IPnest


A Brief History of Atrenta and RTL Design
by Daniel Nenni on 09-26-2012 at 7:41 pm

We’re plagued by acronyms in this business. Wikipedia defines RTL as follows: “In digital circuit design, register-transfer level (RTL) is a design abstraction which models a synchronous digital circuit in terms of the flow of digital signals (data) between hardware registers, and the logical operations performed on those signals.” This is kind of wordy for my taste. I’ve always thought of an RTL description as “what” the circuit does and not necessarily “how” it does it.

Increasing complexity has forced designers to retreat to higher levels of abstraction. You just can’t deal with all the issues at the gate level anymore, time and money won’t allow it. This trend has created a new EDA market segment for RTL design tools. RTL design has been around for a long time – over 20 years – but the market has seen some significant growth in the last 10 years, owing to the need to deal with design abstraction at a higher level due to complexity. Another way to think about this market is that it’s the pre-synthesis part of the design flow. Everything you do at RTL is basically intended to maximize the chances that the synthesis/place & route flow will go smoothly and there won’t be any surprises or fire drills. Things like lack of timing closure, not fitting in the package or not meeting the power budget all constitute fire drills in this context.

The Big 3 offer RTL design tools, but so do a whole host of other smaller companies. There are quite a few startups that focus exclusively on this market. If you think about it, this makes sense. The back-end flow is dominated by the Big 3, and that flow is becoming increasingly commoditized. It’s not a good place for a small company to try and break into. On the other hand, the design flow above the back-end has plenty of growth opportunity. To get another perspective on this market and how it has evolved, I turned to a guy who has been at it for quite a while – Ajoy Bose. He is the founder of Atrenta, the company that builds SpyGlass. This product has become something of a de-facto standard for RTL design. It’s so popular, the company actually re-branded themselves as “the SpyGlass Company” at DAC this year.

Ajoy is no stranger to RTL design, having led the team that developed the Verilog simulator at Cadence in its early days. When I asked Ajoy how Atrenta got started, I was surprised at the answer. In the late 1990s, he was running a services company called Interra. One of their projects was to work with a large semiconductor company to help them develop a better methodology for IP reuse. During that project, they developed a piece of software that looked at a synthesizable description of an IP to see if it had any design constructs that would be inherently difficult to reuse. The project was a big success, and Ajoy and the team realized they were on to something. Could that software be generalized and made available as a product, and not just service-ware? The team thought so, and so Atrenta was born in 2001. I'm not sure if it was luck or extreme wisdom, but the Interra contract allowed the company to retain rights to the software they built for that services customer. That came in handy. Atrenta productized the code, and a new generation of RTL linting was born in the form of SpyGlass.

It seems that testability analysis was added next, then clock domain crossing (CDC) analysis followed by power and so on, creating a complete RTL platform all under the brand of SpyGlass. Today, there are probably a dozen companies all competing for your RTL analysis dollars. Some specialize in power reduction, some focus on CDC analysis and others on timing constraints. Everybody seems to have a linter. It’s good to see this level of competition from smaller companies. It suggests maybe there is growth opportunity in EDA after all.

I wondered where the next level of growth could come from in this market, so I went back to Atrenta. When I asked Ajoy that question, he paused for a moment and then said “if you’re looking for growth, never look down. Look sideways or look up.” It took me a minute, but I got the message. Sideways means adding more or better tools to the RTL flow. Different approaches to simulation and verification come to mind as one area for growth. Atrenta’s recent acquisition of NextOp looks like that kind of play. When you look up, there are lots of opportunities. The level above RTL includes things like SystemC. Anyone up for an integrated high-level synthesis to gates flow? I’d say there’s still a lot of work to do on that one.

The link to software is another interesting one. What if your RTL description could create a model that is accurate enough and fast enough to run software? You could then iterate the hardware design based on how the software performs. Real hardware/software co-design. I suspect whoever gets that right will make some serious money.

Also see: A Brief History of RTL


Mentor Graphics Update at TSMC 2012 OIP
by Daniel Payne on 09-26-2012 at 10:45 am

What
In just 20 days you can get an update on four Mentor Graphics tools as used in the TSMC Open Innovation Platform (OIP). Many EDA and IP companies will be presenting along with Mentor, so it should be informative for fabless design companies in Silicon Valley doing business with TSMC.
Continue reading “Mentor Graphics Update at TSMC 2012 OIP”