It’s a 14nm photomask, what could possibly go wrong?
by Don Dingee on 08-27-2013 at 3:16 pm

Let’s start with the bottom line: in 14nm processes, errors that have typically been little more than noise relative to photomask critical dimension (CD) control targets at larger process nodes are about to become very significant, even out of control, if not accounted for.



FPGAs The Life Savers
by Luke Miller on 08-27-2013 at 1:00 pm

Silicon dominates our lives. CPUs and GPUs are in the limelight, but the unsung hero is the FPGA. FPGAs simply do the work where other silicon dares not tread, because it is unfit for the task. Never send a boy to do a man’s job.

For a few minutes, perhaps we can break away from the social media bubble and the grip of entertainment (by the way, I do not care what you are eating right now, so stop tweeting it) and look at where FPGAs are serving the important medical and safety needs of life. And no, it is not in your TV, even though you will find FPGAs in them.

Because of FPGAs I was able to see my next child through mama’s belly. He’s a 2013 model due out in October, Lord willing, and let’s hope he has my looks, right? Know what else that DSP in the FPGA did for us? It showed problems with the little guy: they were able to see that his feet are just not right. That Doppler knowledge months ago led to a plan, and if all goes well, his feet will be just fine. We are very thankful that is all that is wrong. FPGAs are revolutionizing the medical and surgical field today! I believe that thanks to better analog-to-digital and digital-to-analog converters (ADCs/DACs) coupled with denser and denser FPGAs, we will see better CTs, MRIs, etc., and lower doses of radiation. Can you say imaging kiosk at Wally World? I know…

Xilinx FPGAs are featured in Intuitive Surgical’s ‘da Vinci Surgical System’. That’s a mouthful, but it’s a modern marvel. I wish they had used that thing last October when I had my gallbladder out. From the Intuitive Surgical website:

“Using the most advanced technology available today, the da Vinci Surgical System enables surgeons to perform delicate and complex operations through a few tiny incisions with increased vision, precision, dexterity and control. The da Vinci Surgical System consists of several key components, including: an ergonomically designed console where the surgeon sits while operating, a patient-side cart where the patient lays during surgery, four interactive robotic arms, a high-definition 3D vision system, and proprietary EndoWrist® instruments. da Vinci is powered by state-of-the-art robotic technology that allows the surgeon’s hand movements to be scaled, filtered and translated into precise movements of the EndoWrist instruments working inside the patient’s body.”

Personally I like a strong national defense (I’m one of those, you know, freedom is not free); in particular I am not a fan of invasions, terrorist attacks or the possibilities thereof. FPGAs are red, white and blue, as they serve a very important radar market most people take for granted. FPGAs are the heart of radar, the invisible patriot eyes on land, sea and air allowing us to sleep in peace, as long as we do not think too much about the state of Congress. Do you ever think about Congress? Do you get a raise with a 7% approval rating? 😡

Currently I am working on a design/invention that uses a Xilinx FPGA/CPLD to bring water to the parts of the world where it is not so easy to get (called ‘Well Doctor’). While FPGAs are not exactly ‘entertaining’ us with gaming (and how anyone can think games like ‘Grand Theft’ are morally acceptable is beyond me), they sure are making our lives safer and healthier. Thanks to all you FPGA guys & gals!






Mobile SoC will benefit now from M-PCIe
by Eric Esteve on 08-27-2013 at 10:12 am

We already discussed the recently released M-PCIe ECN from PCI-SIG on SemiWiki at the end of 2012, but the new “standard” (in fact an Engineering Change Notice from PCI-SIG and the MIPI Alliance) was only real on paper at that time. The upcoming webinar from Synopsys, M-PCIe: Utilizing Low-Power PCI Express in Mobile Designs, shows that the concept is turning into a real IP product. This pretty fast response time (between six months and one year) is explained by two reasons. First, both the PCIe Controller IP and the MIPI M-PHY IP are already available in the Synopsys IP portfolio; the challenge was to integrate the two IPs together and validate the result, a faster task than developing from scratch.

The second reason, in my opinion, is that many chip makers and OEMs were expecting to be able to use the PCI Express protocol in the mobile (smartphone and media tablet) segments. I suspect that many of the previously active players in the PC segment, discovering that the market was moving to every kind of mobile system, have decided to follow the market and also move to mobile. This means designing for low power, and the MIPI M-PHY is known to exhibit far better power efficiency than the PCIe PHY. Adopting M-PCIe is thus a good way to preserve a 10-year investment in PCI Express related development, hardware, drivers and software, and to attack a new segment with good technical arguments…
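
As a rough illustration of the bandwidth side of that trade-off, here is a quick back-of-the-envelope comparison of per-lane payload throughput. The M-PHY gear rate and the encoding efficiencies are my own approximations for illustration, not numbers taken from the specification or the webinar:

# Rough per-lane bandwidth comparison, PCIe PHY vs. MIPI M-PHY (illustrative numbers only)

def payload_gbps(line_rate_gbps, encoding_efficiency):
    # Payload bandwidth after line-encoding overhead
    return line_rate_gbps * encoding_efficiency

links = {
    "PCIe 2.0 lane (5 GT/s, 8b/10b)":       payload_gbps(5.0, 8 / 10),
    "PCIe 3.0 lane (8 GT/s, 128b/130b)":    payload_gbps(8.0, 128 / 130),
    "M-PHY HS-G3 lane (~5.8 Gbps, 8b/10b)": payload_gbps(5.8, 8 / 10),  # assumed Rate B gear
}

for name, bw in links.items():
    print(f"{name}: ~{bw:.1f} Gbit/s payload")

The point of M-PCIe is not more raw bandwidth; it is getting comparable PCI Express throughput over a PHY designed around mobile power budgets.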

If you are interested in listening to the webinar “live”, you should rush, as this webinar will be held today, Tuesday the 27th…

Last point: as you can see below, one of the presenters is Richard Solomon, who has served on the PCI-SIG Board of Directors for almost 10 years; he is probably one of the most knowledgeable people around when it comes to PCI Express!

You can Register here

Designers utilizing PCI Express are looking to lower the power of their designs by incorporating the M-PCIe ECN from PCI-SIG. The recently released M-PCIe ECN adapts the PCI Express protocol for use with the MIPI M-PHY, benefiting applications that require low-power usage such as mobile products. To incorporate PCI Express with the M-PHY, device designers should understand the top issues they’ll need to consider.

This advanced technical webinar will begin with a quick overview of the specification and its application space, and then go into details such as bandwidth and clocking considerations, PHY interface differences, power management impacts, and the tradeoffs related to choices around link-layer changes. These changes may impact the transaction and application layers of devices moving from PCIe to M-PCIe, and the webinar will detail those issues. A basic understanding of PCI Express concepts is helpful.

Attendees will learn:

  • M-PCIe bandwidth and clocking considerations
  • M-PCIe power management
  • The tradeoffs related to the link-layer changes
  • PHY interface, transaction layer, and application layer differences between PCIe and M-PCIe


Who should attend:

  • System architects
  • PCI Express device designers
  • Mobile device designers

Presenters:


Scott Knowlton, Product Marketing Manager, Sr. Staff, DesignWare PCI Express, PCI-X, PCI and SATA IP, Synopsys
Scott Knowlton joined Synopsys in 1997 and has extensive experience in PCI Express, SATA, and AMBA IP as well as Synopsys’ coreTools product lines. Prior to joining Synopsys, Scott worked in simulation, synthesis and mixed signal solutions at Cadence Design Systems after several engineering and project management positions in ASIC development at Encore Computer, Intrinsix, and Raytheon. Scott earned his Bachelor of Science degree in Electrical Engineering from the University of Michigan.

Richard Solomon, Technical Marketing Manager, DesignWare PCI Express Controller IP, Synopsys
Richard Solomon has been involved in the development of PCI chips dating back to the NCR 53C810 and pre-1.0 versions of the PCI spec. Prior to joining Synopsys, Richard architected and led the development of the PCI Express and PCI-X interface cores used in LSI’s line of storage RAID controller chips. He has served on the PCI-SIG Board of Directors for over 10 years, and is currently Vice-President of the PCI-SIG. Richard holds a BSEE from Rice University and 25 US Patents, many of which relate to PCI technology.

Eric Esteve from IPNEST



Wall St. Takes the Wheel at Wintel
by Ed McKernan on 08-26-2013 at 5:45 pm

It now appears that Steve Ballmer was suddenly given his walking papers at the urging of an activist investor (ValueAct) and with the concurrence of Bill Gates. Wall Street’s impatience tends to grow once the Innovator’s Dilemma scenario has taken hold of a company that has been unable to overcome its challengers. Why continue to invest in something that won’t happen when Microsoft, with $80B in the bank, could be bought outright with 7 years of cash flow? Intel, too, is nearing a Wall St. revolt, possibly after this coming holiday season when the shift to the new mobile market will likely be more dramatic and favorable to Apple, Qualcomm and the no-name Chinese ARM chip vendors. There is plenty of value to unlock at Intel, provided the company can focus on its true strengths: data center and foundry.

Peak Wintel occurred between the launches of Windows 2000 (December 1999) and Windows XP (August 2001), in the midst of Intel’s domination of the laptop market with its 32-bit x86 processors and WiFi. Both owned 100% of what was then “mobile.” However, the multi-billion-dollar, 64-bit Itanium sideshow and an infatuation with Linux in servers blinded both to the historical trend that computers always move toward smaller form factors and greater mobility, courtesy of Moore’s Law. MHz supremacy ran into a rising power wall as PC productivity, anchored to the power outlet, peaked. The iPhone bid anchors aweigh to all of that in 2007, with productivity taking off on a new vector.

Under Steve Ballmer, profits tripled as new sales techniques maximized the squeezing of blood from the corporate turnip. Intel at first fell behind AMD in the 64-bit server space but then doubled down, realizing that without the massive profits generated by servers there would be a much-reduced fab and R&D budget. The data center market has been and will continue to be the new growth driver; it requires maximum performance per watt and cares not at all for the energy-sipping standby mode that dominates battery life in mobiles, which sit more than 90% idle. More importantly, it is dominated by x86 software.

The article by Paul McLellan this past week casting doubts on the imminent ramp of Intel’s 14nm process is interesting when viewed in the context of the shift that might be imposed by Wall St. Haswell notebook processors were the process driver for 22nm, while Xeon server chips lagged by months, waiting for yields to reach their peak (a sound strategy given Xeon’s mega-sized caches). Meanwhile, 14nm Atom processors, racing to close the gap with ARM, have been delayed until 2H 2014, thus missing out on another year’s worth of mobile products. If Apple and Samsung continue to peel away 100% of the mobile profits, then it is questionable whether Intel can turn mobile Atom into a profit-generating machine selling into the 40% of the market that is built by white box vendors.

What is Intel’s 14nm fab driver that requires a Q4 2013 ramp?

If the answer is Broadwell, Haswell’s 14nm successor, then I would counter by saying x86 has reached “good enough” when it comes to performance AND power consumption for an “All Day” PC. Apple, with the MacBook Air, has demonstrated how it is possible to achieve 10-11 hours on Haswell; now it is up to Microsoft to fix Windows 8 to make the same possible with PCs. In addition, a declining market throws into question the demand for the latest and greatest processors that are offered at extreme prices while yields are low. Financially, this calls into question how soon and to what extent 14nm needs to ramp.

Think about it: today there are two logic chip families in the market that drive foundry process and volume, and those are the Apple A6/A7 and Qualcomm’s latest standalone baseband and Snapdragon processors. The correct business model for Intel is to ramp one of those two families in its latest process in order to remain a high-volume player, as it was with the x86 of old. Mobile Atoms directed at the bottom 40% of the market require a leading-edge process to attack a trailing-edge market, and that is a major disconnect for the company.

If Wall St. catches wind that the new Intel’s business model still requires early, steep process ramps to churn out millions of high-$$$ PC processors, then the game is likely up. The dithering while mobile foundry opportunities with the likes of Apple, Qualcomm and Broadcom pass them by is actually a risk to Intel’s number 1 crown jewel: process technology. TSMC and Samsung, with their Apple arrangements, are about to overwhelm Intel, and the question becomes: can Intel afford 450mm, EUV and 10nm while its competitors pass it by? Wall St. may soon make the decision for them.



Ballmer’s Retirement Leaves Nokia High and Dry
by Paul McLellan on 08-26-2013 at 5:36 pm

It looks to me as if Ballmer’s planned resignation from Microsoft is going to leave Nokia high and dry without an operating system, because any successor to Ballmer will cancel Windows Phone, which has managed to take Microsoft’s penetration in smartphones from 5% before it had a serious partnership with Nokia all the way to…err…4% today. Except for Nokia, all the other handset partners have pretty much transitioned from Windows Phone to Android or, perhaps in the future, Firefox OS or Tizen (a joint Samsung/Intel mobile OS).

There seems to be increasing evidence that Microsoft planned to acquire Nokia at the start of this year: Nokia was ready to do it, Ballmer was ready to do it, but the Microsoft board, or perhaps Bill Gates personally, vetoed it in February. Up until that point, Nokia had increasingly focused on Windows Phone and killed off all the internally competing operating systems, even phones that were on track to be very successful (winning awards, with people traveling from one country to another to be able to get their hands on one, etc.).

Bill Gates then did something incredible, presumably just a few days later. He went on television on February 18th on CBS (video) and told Charlie Rose that: “we didn’t get out and lead early […] we didn’t miss cellphones but the way we went about it didn’t allow us to get the leadership so it is clearly a mistake.”

This is the chairman of the board of Microsoft talking about the fastest growing industry that the world has ever seen and basically saying that they screwed up. This is just after he listed Ballmer’s recent accomplishments as Windows 8 (a fiasco), Surface (written down last quarter) and Bing (when did you last use Bing?). So perhaps it is not surprising that Ballmer is now getting to prepare his three envelopes.

Since Bill Gates is a key person in the new CEO search, as Chairman, founder and, I think still, largest shareholder, it seems unlikely that any new CEO is going to come in with a plan to continue Windows Phone, any more than the successor to Otellini at Intel (who also “didn’t get the leadership”) would have been someone without a mobile strategy.

I’m not sure I entirely buy the belief of Tomi Ahonen and others that the biggest strategic mistake Microsoft made in this area was to acquire Skype. Clearly the carriers don’t like Skype, and it has so many users that it potentially represents a major threat to the carriers’ voice business. And the dirty secret of the carriers is that they make all their money from voice because people don’t realize how badly they are being overcharged. After all, with modern vocoders voice is only about 12 kilobits per second, so 1000 minutes of voice is roughly 90 megabytes, about the same data as a few minutes of YouTube video. But they charge you $50 for the voice and only a tiny fraction of that for the equivalent data.

However, one area where everyone is definitely right: without the support of the carriers you can’t sell your hardware. Without China Mobile, the iPhone has very low penetration in China; without DoCoMo, low penetration in Japan; and so on. Without the active support of the carriers, Nokia has low penetration pretty much everywhere. And remember, two years ago it had 30% market share, shipping well over a million cell phones per day. Actually the carriers make a ton on texting too, which uses truly negligible bandwidth, but iMessage, WhatsApp, WeChat and others are increasingly bypassing it.
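
A quick back-of-the-envelope check of the voice-versus-video arithmetic above, assuming an illustrative 12 kbit/s vocoder and a ballpark 2.5 Mbit/s streaming video rate (both my own round numbers, not carrier figures):

# Voice vs. video bandwidth, back of the envelope (illustrative rates only)
vocoder_kbps = 12      # assumed modern vocoder bit rate
video_mbps   = 2.5     # assumed typical streaming video bit rate

voice_minutes = 1000
voice_megabytes = vocoder_kbps * voice_minutes * 60 / 8 / 1000          # kbit -> MB
video_minutes_same_data = voice_megabytes * 8 / (video_mbps * 60)

print(f"1000 minutes of voice is about {voice_megabytes:.0f} MB")
print(f"...the same data as only about {video_minutes_same_data:.0f} minutes of video")

Ninety-odd megabytes of voice against data plans measured in gigabytes: the asymmetry is in the pricing, not the bandwidth.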

So far, the lack of a successful mobile strategy has claimed the crowns of Intel and now Microsoft. Will Nokia be next?


LSI’s Experience With Formality Ultra
by Paul McLellan on 08-26-2013 at 5:36 pm

LSI is an early adopter of Formality Ultra, Synopsys’s tool for improving the entire ECO flow. I already wrote about the basic capability of the tool here. ECOs are changes that come very late in the design cycle, after place and route has already been “nearly” completed. They occur either due to last-minute spec changes or the late discovery of functional bugs. The RTL needs to be changed. But if the design is to have any chance of converging, these changes need to be made by hand: finding the nets that need to be updated and then using incremental place and route to fix just those changes without perturbing everything else that already seems to be OK.

Of course you can’t just go changing the netlist without doing any verification, so historically a bunch of changes would get batched up and then a full formal verification run would be done to make sure that the new netlist matched the updated RTL and no mistakes had been made. In practice, mistakes often were made, and typically around 3 iterations of this loop are required to get closure. Worse, this ECO flow is on the critical path to tapeout.

LSI evaluated Formality Ultra using a reasonably large design: 28nm, 2 million cells, 500K flops, requiring 18 hours for a full Formality run.


The big challenges with this flow, without Formality Ultra, are that:

  • finding where to make the changes is time consuming (and ECOs are on the critical path so need to be done fast)
  • multiple iterations are required to get the ECOs correct
  • netlist updates are required before starting verification
  • no matter how small the change, a full Formality run is required


In a Formality Ultra flow, the design is pre-loaded into Formality and then in parallel Formality Ultra is used to develop ECO scripts and apply them. They can then quickly be verified using multi-point (incremental) verification. Any that fail can quickly be debugged, fixed and re-verified. A full verification is done at the end.

The big saving is in time. The pre-verification takes 13 hours on the test design, but each ECO then takes around 20 minutes to develop and apply and under an hour to verify. Other advantages of the approach are:

  • can see the ECO before verification
  • find_equivalent_nets reduces errors (50% of ECO errors are the designer misidentifying a net)
  • ECO scripts are developed in parallel with verification rather than being batched up


So overall the time is reduced (assuming 3 iterations) from 56 hours down to 22 hours. Perhaps more important, the time for an individual ECO is reduced by a factor of 10, meaning that it is easy to do ECOs one at a time and check each one before moving on, especially trivial fixes like adding an inverter. A large amount of the 22 hours is the fixed time to set up and perform the final verify, not doing the actual ECOs.
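
To make the per-ECO arithmetic explicit, here is a small sketch built from the times quoted above; the script and the exact breakdown of the 22-hour total are my own illustration, not figures from the webinar:

# Rough comparison of classic vs. Formality Ultra ECO turnaround (hours), using LSI's quoted times
full_run   = 18       # full Formality run on the 28nm, 2M-cell design
preload    = 13       # one-time pre-verification / design load in the Ultra flow
eco_apply  = 20 / 60  # ~20 minutes to develop and apply one ECO
eco_verify = 1.0      # under an hour for incremental (multi-point) verification
iterations = 3        # typical iterations needed to close the ECOs

classic_total     = iterations * full_run                   # every batch of fixes triggers a full run
per_eco_classic   = full_run
per_eco_ultra     = eco_apply + eco_verify
ultra_incremental = preload + iterations * per_eco_ultra    # plus the final full verify on top

print(f"Classic flow, {iterations} iterations: ~{classic_total:.0f} h plus edit time")
print(f"Per-ECO turnaround: {per_eco_classic:.0f} h classic vs. ~{per_eco_ultra:.1f} h with Ultra")
print(f"Ultra flow before the final verify: ~{ultra_incremental:.0f} h")

The remaining hours in the 22-hour total are the final full verification, which is why individual ECOs feel essentially free once the design is preloaded.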

There is a webinar presented by Cason Kolb of LSI. It is preceded by a presentation by Mitch Mlinar of Synopsys giving an overview of Formality Ultra. If you already know the basics of Formality Ultra (which I won’t abbreviate to its initials because this is a family blog!) then skip to 21 minutes in.


Something old, something new in SystemC HLS
by Don Dingee on 08-26-2013 at 5:00 pm

Perhaps no area in EDA has been as enigmatic as high-level synthesis (HLS). At nearly every industry event, some new-fangled tool always seems to be tabbed as the next big thing by some analyst or pundit. In a twist, the latest news is about one of the oldest tools – CyberWorkBench.



Ten Ways Your Synchronizer MTBF May Be Wrong
by Jerry Cox on 08-25-2013 at 10:30 pm

Estimating the MTBF of an SoC should always include an analysis of synchronizer reliability. Contemporary process nodes are introducing new challenges to the reliability of clock domain crossings so it is prudent to revisit how your simulation tool calculates a synchronizer’s MTBF. Let’s list the ten most common pitfalls.

  • Shorting-nodes Method. Many in-house tools for estimating synchronizer MTBF use simulation to observe the time to settle to a valid voltage for two of the synchronizer’s nodes after they have been released from being shorted together. This method is problematic because in metastability the two nodes are unlikely to be at the same voltage. As a result the value of the time-constant obtained from the shorted-nodes experiment gives a poor estimate of Tau, the time-constant needed to estimate MTBF.

  • Master-slave Time Constants Differ. Methods for measuring MTBF in silicon typically yield the Tau associated with the master. Measuring the Tau associated with the slave is problematic because of the extreme rarity of slave failure events. Often the slave Tau is much larger than that of the master, and the resulting MTBF of the master-slave combination may be dangerously overestimated.
  • Effect of Duty Cycle. When the Tau of the master and slave differ, clock duty cycle can affect MTBF significantly. Thus, duty-cycle jitter or PLL bias must be included in the estimation of MTBF. Most contemporary synchronizer tools overlook this issue completely.
  • Effect of Supply Voltage. Estimates of Tau depend critically on the supply voltage VDD chosen for the simulation. Generally, the metastable voltage is about half the supply voltage, and the closer this voltage is to the transistor threshold Vth, the slower the recovery to a valid logic voltage. This strong dependency of MTBF on VDD requires careful simulation of Tau and analysis of MTBF for the worst case, typically when VDD/2 approaches Vth.
  • Effect of Junction Temperature. Estimates of Tau depend critically on temperature, but because of the negative temperature coefficient of Vth, cold temperatures are the ones that cause a significant reduction in MTBF.
  • Verification of Simulation in Silicon. Of course, simulation tools should be compared with measurements in silicon by using transistor models derived from the parameters of the silicon under test. Also, these comparisons should be made over a range of temperature and voltage conditions. Otherwise the simulation tool is inadequate for the estimation of MTBF.
  • Simultaneous Estimation of Tau and TW. Estimates of the metastability window TW shift significantly with very small errors in Tau. A co-estimation technique that utilizes all the data obtained from simulations will yield the best joint estimate. This reduces the undesirable variability in TW that can otherwise affect MTBF estimates.
  • Effect of Loading. The circuit that loads the output of a synchronizer affects the settling time of the preceding synchronizer stage through capacitive coupling (think Miller). The resulting change in MTBF is significant and must be treated with extra care in multistage synchronizers.
  • Multistage Synchronizer Formula Errors. Many published formulas for calculating multistage MTBF from individual stage parameters produce results that disagree with simulations carried out on the complete synchronizer. In fact, these results are usually overly conservative, but in a few cases give an MTBF that is overly optimistic.
  • Distribution of Clock-Data Offsets. If the sending and receiving clocks in a clock-domain crossing are derived from the same oscillator through different PLLs, it is invalid to assume a uniform distribution of clock-data offsets when calculating the synchronizer’s MTBF. Depending on the rational ratio of the clock rates, the MTBF can be either less than or greater than that given by the usual assumption of uniformity, and it is hard to determine which.

Many in-house tools for estimating the MTBF of synchronizers were developed before these pitfalls were completely understood. Fortunately, considerable progress has been made and better tools are now available. For example, MetaACE is available commercially and handles all ten of the difficulties listed above in one convenient simulation tool. You can learn more in these four papers: Node-shorting, MTBF bounds, Silicon measurements and Coherent clocks. Thanks to our colleague, Shlomi Beer, who did much of this work and then took the lead in preparing the papers.
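
For reference, all of these pitfalls feed into the standard two-parameter synchronizer model, MTBF = exp(tr/Tau) / (TW x fclk x fdata), where tr is the resolution time available before the next stage samples. A tiny sketch with purely illustrative numbers (none taken from the papers above) shows just how sensitive the result is to Tau:

# Standard synchronizer MTBF model: MTBF = exp(t_r / tau) / (T_W * f_clk * f_data)
# All numbers below are illustrative, not taken from the referenced papers.
import math

f_clk  = 1e9      # receiving clock, 1 GHz
f_data = 100e6    # data toggle rate, 100 MHz
T_W    = 20e-12   # metastability window, 20 ps (assumed)
t_r    = 0.8e-9   # resolution time available within one cycle, 0.8 ns

seconds_per_year = 365 * 24 * 3600
for tau in (10e-12, 12e-12, 15e-12):   # nominal Tau, then +20% and +50% errors
    mtbf = math.exp(t_r / tau) / (T_W * f_clk * f_data)
    print(f"tau = {tau * 1e12:.0f} ps  ->  MTBF of about {mtbf / seconds_per_year:.1e} years")

A 50% error in Tau moves the answer by more than ten orders of magnitude, which is why the shorted-nodes shortcut and the master-versus-slave confusion described above are so dangerous.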



Security Needs in On-Chip Networks
by Randy Smith on 08-25-2013 at 8:15 pm

I remember during my first ten years as a software developer, I used many different computers such as IBM mainframes, Apollo and Sun workstations, and VAX computers. During that time I also bought my first home computer, a Macintosh. I didn’t think of it at the time, of course, but the one thing they all had in common was that they did not need antivirus software. Things have changed a lot in the last ten years. A few years ago, I bought an Android cell phone – even that phone required antivirus software. That was one of the reasons why I decided to move to an iPhone. Now, as we enter the era of the Internet of Things (IoT), security is becoming a very critical issue in a growing number of chip designs for this market.

Over the next several years, we will increasingly interact with specialized processors and a growing number of devices in our homes, at work, and in the places we visit. Since we will be able to connect to and interact with these devices in many different ways, it is important that we feel secure when doing so. For example, if we install devices to remotely lock and unlock the front door of our home, we want to make sure that no one else can use the system for the same purpose. This is why a chip designer developing products for the IoT needs to ensure the security of the data paths between the various processors and memories. The on-chip network, which manages the data flow through the chip, is a key component of the system and includes various security features, also called protection mechanisms or firewalls.

It is important, in selecting an on-chip network intellectual property (IP) provider, to choose one that supports a broad set of security features in its products. The IP should come with mechanisms to enable content protection, core hijacking prevention, and denial-of-service protection. I have found that all these features, and more, exist in the security mechanisms provided in Sonics’ products.

Sonics’ security measures are based on the attributes of address, initiator ID, command, and user signals. The methodology uses protection groups to determine which devices are allowed to communicate and what actions they may perform. The access rights are stored in a table of run-time configurable registers, and these registers reside in a protected region. The system then performs two tests. First, is the incoming request type allowed according to the read and write permissions? Second, are the role bits of the incoming request within the allowed pattern established by the user-defined network permissions? If the request passes both tests, access to the target is allowed. There is also a notion of “burst communication”: by chopping data transfers into bursts, the overhead of the security measures can be controlled.
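
As a rough illustration of that two-test check, here is a minimal sketch of how a protection-group lookup plus role-bit comparison might look. This is my own pseudo-model of the general idea, not Sonics’ implementation, register layout, or naming:

# Minimal sketch of a protection-group style firewall check (illustrative, not Sonics' design)
READ, WRITE = 0x1, 0x2

# Run-time configurable access table: protection group -> (allowed request types, allowed role bits)
access_table = {
    "secure_boot_rom": (READ,         0b0001),   # only role 0 may read, nobody may write
    "frame_buffer":    (READ | WRITE, 0b0110),   # roles 1 and 2 may read and write
}

def request_allowed(group, req_type, role_bits):
    """Test 1: is the request type permitted?  Test 2: do the role bits match the allowed pattern?"""
    allowed_types, allowed_roles = access_table[group]
    type_ok = (req_type & allowed_types) == req_type
    role_ok = (role_bits & allowed_roles) != 0
    return type_ok and role_ok

print(request_allowed("secure_boot_rom", WRITE, 0b0001))   # False: writes to the boot ROM are blocked
print(request_allowed("frame_buffer",    READ,  0b0010))   # True: role 1 may read the frame buffer

In a real network-on-chip the same decision is made in hardware at each initiator or target agent, but the table-plus-mask structure is the essence of the firewall.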

Using these security features, various security protocols may be developed; one such example is shown in the figure above. Sonics’ network-on-chip security measures go much further in identifying multiple threat vectors. The protection mechanism addresses security vulnerabilities such as information extraction, core hijacking, and denial-of-service attacks. While I am certainly glad I never needed to develop these features in the EDA tools I used to write, it is good to know that a semiconductor intellectual property provider such as Sonics has built mechanisms into its IP that make security much easier to implement. Be sure to look into the security measures available in your network-on-chip IP.


The TSMC OIP Technical Paper Abstracts are up!
by Daniel Nenni on 08-25-2013 at 8:10 pm

The TSMC Open Innovation Platform® (OIP) Ecosystem Forum brings TSMC’s design ecosystem member companies together to share with customers real-world solutions to design challenges and success stories of best practices within TSMC’s design ecosystem.

More than 90% of the attendees last year said “this forum helped them better understand the components of TSMC’s Open Innovation Platform” and “they found it effective to hear directly from TSMC OIP member companies.”

This year, the forum will feature a day-long conference starting with executive keynotes from TSMC in the morning plenary session to outline future design challenges and roadmaps, as well as discuss a recent collaboration announcement; 30 selected technical papers from TSMC’s EDA, IP, Design Center Alliance and Value Chain Aggregator member companies; and an Ecosystem Pavilion featuring up to 80 member companies showcasing their products and services.

Date: Tuesday, October 1st, 2013

Place: San Jose Convention Center

Attendees will learn about:

  • Design challenges in 16nm FinFET, 20nm, and 28nm
  • Successful, real-life applications of design technologies and IP
  • Ecosystem specific implementations in TSMC reference flows
  • New innovations for next generation product designs

In addition, attendees will hear directly from our design ecosystem member companies, who will talk exclusively about design solutions using TSMC technologies, and enjoy valuable opportunities for peer networking with nearly 1,000 industry experts and end users.

The TSMC Open Innovation Platform Ecosystem Forum is an “invitation-only” event: please register in order to attend. We look forward to seeing you at the 2013 Open Innovation Platform Ecosystem Forum.

Registration: Join the TSMC 2013 Open Innovation Platform® (OIP) Ecosystem Forum to be held on Tuesday, October 1st at the San Jose (CA) Convention Center.

Established in 1987, TSMC is the world’s first dedicated semiconductor foundry. As the founder and a leader of the Dedicated IC Foundry segment, TSMC has built its reputation by offering advanced and “More-than-Moore” wafer production processes and unparalleled manufacturing efficiency. From its inception, TSMC has consistently offered the foundry segment’s leading technologies and TSMC COMPATIBLE® design services.

TSMC has consistently experienced strong growth by building solid partnerships with its customers, large and small. IC suppliers from around the world trust TSMC with their manufacturing needs, thanks to its unique integration of cutting-edge process technologies, pioneering design services, manufacturing productivity and product quality.

The company’s total managed capacity reached 15.1 million eight-inch equivalent wafers in 2012. TSMC operates three advanced 12-inch wafer fabs, four eight-inch wafer fabs, and one six-inch wafer fab in Taiwan. TSMC also manages two eight-inch fabs at wholly owned subsidiaries: WaferTech in the United States and TSMC China Company Limited. TSMC also obtains eight-inch wafer capacity from other companies in which the Company has an equity interest.
