
An ASIC Design Flow at LSI
by Daniel Payne on 10-15-2013 at 1:11 pm

Harish Aepala is part of the Design Closure Methodology group at LSI, and he recently talked about his ASIC handoff experience in a webinar. Harish works with logic and physical synthesis, timing constraints, RTL analysis and formal verification.

One challenge with ASIC handoff has been getting through design closure with the fewest iterations, so that the physical design still meets timing, power and area budgets. To reduce these iterations, Synopsys engineered the Design Compiler Graphical tool to provide a better starting point for faster physical implementation. LSI has successfully used Design Compiler Graphical in its ASIC handoff methodology.

Design Trends

SoC designers are creating systems with deep logic levels to reduce latency; the RTL is often not optimized for hierarchical design because of legacy re-use; the number of hierarchical blocks has exploded; and data bus widths have become even wider. A typical SoC design at LSI is characterized in the following table:

These design trends challenge the QoR (Quality of Results) of physical implementation.

Traditional Design Flow

For simple SoC designs, the following traditional flow shows how a customer runs their RTL code through logic synthesis and hands it off to LSI for floor planning. The customer then runs physical synthesis until design closure is reached. Finally, LSI adds the test logic and completes place and route.

At each hand-off phase, checks are run to ensure compliance.

As SoC size and complexity increase, the traditional design flow starts to break down.

LSI Recommended Flow

Leading edge SoC designs are better handled with an LSI recommended flow where the customer uses Design Compiler Graphical instead of the Design Compiler Topographical tool:

Timing, area, power and congestion issues are handled better in this flow with DC Graphical.

One LSI customer started with DC Topographical; however, during floor planning, large multiplexers created so much routing congestion that the area requirement couldn’t be met. Switching to DC Graphical with congestion-related options produced a higher-QoR netlist that met the area requirement:

  • Design Compiler directives

    • set_ignored_layers
    • set_congestion_options
  • DC Graphical

    • compile_ultra -spg
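
As a rough illustration, these options combine in a dc_shell script along the following lines. This is a minimal sketch, not LSI’s actual script: the file, design, and layer names are placeholders, the tool must already be configured for topographical mode with physical libraries, and exact option sets vary by tool version:

    # Minimal dc_shell sketch; assumes physical libraries and topographical
    # mode are already configured. All names below are placeholders.
    read_verilog top.v
    current_design top
    link

    # Pull in the trial floorplan so synthesis has physical context
    extract_physical_constraints floorplan.def

    # Keep routing estimates off layers the router will not use
    set_ignored_layers -max_routing_layer M7

    # Enable congestion-aware optimization (option set is version-dependent)
    set_congestion_options

    # Physical guidance synthesis: the DC Graphical flow
    compile_ultra -spg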

Results

Here’s the before and after congestion comparison for this 28nm design:

Another customer example used 40nm technology and the congestion was greatly reduced by using DC Graphical:

A third example shows how DC Graphical allowed a customer with a 28nm SoC to modify their own floor plan to eliminate congestion overflow, avoid iterations with LSI, and save one to two weeks of schedule:

Timing as measured by Worst Negative Slack (WNS) and Total Negative Slack (TNS) can be compared between the traditional synthesis and DC Graphical approaches as shown below for a 28nm customer design:

The timing QoR is much better (smaller WNS and TNS) with the DC Graphical approach for this design, and the timing estimates were within 4% of post-placement results.
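
For readers less familiar with these metrics: WNS is the single worst endpoint slack, while TNS sums the slack of every violating endpoint. The hypothetical Tcl helper below (my own sketch, not a tool command) makes the distinction concrete:

    # Hypothetical helper, not a DC command: derive WNS/TNS from endpoint slacks
    proc timing_summary {slacks} {
        set wns 0.0
        set tns 0.0
        foreach s $slacks {
            if {$s < $wns} { set wns $s }                 ;# most negative slack
            if {$s < 0.0} { set tns [expr {$tns + $s}] }  ;# sum of violations
        }
        return [list $wns $tns]
    }

    # Three violating endpoints and one passing endpoint:
    puts [timing_summary {-0.12 -0.05 0.30 -0.01}]   ;# prints: -0.12 -0.18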

Summary

RTL designers can experience fewer design closure iterations and achieve better QoR by using DC Graphical. Comparison results from LSI show reduced congestion, shortened project schedules, lower area and reduced power.



SEMI Smart Technology Conference
by Paul McLellan on 10-15-2013 at 9:40 am

I should start by saying that SEMI Smart Technology is not technology that is only half as smart as our phones; it is a conference on smart technology organized by SEMI. Officially it is called the International Technology Partners Conference, with a sort of subtitle of From Smart Cars to Smart Cities: Shaping the Future of Microelectronics. It runs November 10-13 at the Wailea Beach Marriott in Maui. If you need a reason to go to Hawaii in November (aren’t Mai Tais and great seafood enough?) then this is it.

This year’s theme is “Engines of Innovation: Driving Collaboration, Partnership and Growth,” and the program features sessions that address industry trends, collaborative business models, R&D, technology challenges and market drivers.

The conference promotes business partnering dialog and relationships among the world’s top executives in the nano- and microelectronics manufacturing supply chains. Now in its 28th edition, ITPC brings together industry leaders to address strategic investment, market and technology issues in the semiconductor value chain. ITPC consistently attracts high-level executives from all over the world, and is aimed at senior executives in the semiconductor equipment, materials and semiconductor manufacturing industries.

There are four keynotes, given by IBM, Intel, JPL and Toyota:

  • IBM: Michael J. Cadigan, VP/GM, Microelectronics and Systems Technology Group
  • Intel: Wen-Hann Wang, VP, Intel Labs; director, Circuits and System Research
  • NASA Caltech Jet Propulsion Laboratory: Michael Watkins, Mission manager, Curiosity Mars Rover; manager, Science Division
  • Toyota Motor Corporation: Hiroyoshi Yoshiki, managing officer

Sessions include:

  • Growth Engines — Trends in Market Drivers: with Bill McClean, president, IC Insights and Raj Talluri, senior VP, product management, Qualcomm Technologies.
  • Driving towards the Future: Luc Van den hove, president and CEO, imec and James A. O’Neil, senior VP, Electronic Materials, ATMI Inc.
  • Engines of Innovation — Manufacturing Technology: Eric Meurice, chairman, ASML Holding and Atsuyoshi Koike, senior VP, Technology and Fab Operations, SanDisk; president, SanDisk (Japan) Limited
  • Accelerating Opportunities: Akihiko Tobe, GM, Smart City Project Division, Social Innovation Business Project Division, Hitachi, and Hans Stork, CTO, senior VP, ON Semi
  • A panel moderated by Dan Hutcheson on CTO Principles of Innovation
  • A panel on R&D collaboration that includes Intel, IBM, Global 450 Consortium, imec, and KLA-Tencor

Full details are here. The full agenda is here.


Layout-based ESD Checking Methodology at Nvidia
by Daniel Payne on 10-14-2013 at 12:43 pm

The company Nvidia is synonymous with designing all things video and GPU, so today I watched an archived webinar in which Ting Ku, director of engineering at Nvidia, talked about: Comprehensive Layout-based ESD Check Methodology with Fast Full-chip Static and Macro-level Dynamic Solutions.

Continue reading “Layout-based ESD Checking Methodology at Nvidia”


Enter the Warrior
by Paul McLellan on 10-14-2013 at 11:57 am

Since Imagination’s acquisition of MIPS at the end of last year, the MIPS product line has been given a new lease of life. There are two things driving this. The first is simply that with its new home, the MIPS architecture has a solid future whereas before it was uncertain. Secondly, Imagination moved their own general purpose processor designers onto the MIPS team so that there is a lot more manpower working on the cores.


Back in June, Imagination/MIPS previewed some information about the architectural direction, and I wrote about it here. The new core, the MIPS Series5 Warrior P5600, was officially announced today. It has:

  • 1.2X-2X gains on system-oriented software workloads
  • similar power envelope
  • 1-2+ GHz implementation range in TSMC 28HPM
  • proven MIPS architecture used for 30 years
  • full binary compatibility: low end 32 bit to high end 64 bit
  • best in class branch prediction
  • hardware virtualization: each guest has TLB and COP0 context so no operating system modifications are necessary to run as a guest
  • architectural support for hardware multi-threading
  • coherent multi-core configurations up to 6 cores
  • advanced SIMD
  • superior security
  • common tool chain
  • extensive 32 bit and 64 bit ecosystems
  • eXtended Physical Addressing (40 bits) provides for use of physical memory up to a terabyte
  • enhanced virtual addressing for kernel/user mapping


The P5600 is optimized for peak single-thread performance. It is a superscalar, multi-issue, out-of-order design with a 16-stage pipeline. At peak it can sustain 4X fetch, 3X dispatch, 4X integer and 2X SIMD issue. The datapath has been widened, giving lower latencies. Additional tricks maximize utilization of the pipeline, especially during copies: load/store instruction bonding combines two 32-bit integer or two 64-bit floating-point accesses per cycle.

With six P5600 cores, the Warrior delivers 35,000 DMIPS and 50,000 CoreMark (at 1.7GHz). But this is just the first of a wave of Series5 Warrior generation CPUs. It is available for licensing and silicon design this quarter.

More information about the P5600 is here including a lot more detail than I can get into this blog post.

Also, the Imagination Developers Conference IDC13 in San Francisco is tomorrow. Details and registration are here.


History of SoC Interconnect Fabric
by Eric Esteve on 10-14-2013 at 4:18 am

I just read a very interesting article posted by Kurt Shuler of Arteris, describing the “History of SoC Interconnect Fabric” and explaining why the semiconductor industry needs an advanced approach, named the “fourth phase of the Interconnect Fabric history” in the article. Kurt’s point of view is that, in the past, the SoC interconnect did not receive the type of research and development focus that CPUs or GPUs received. If you do not really understand what type of functionality is behind the “SoC Interconnect” or “Network-on-Chip” terms, you will certainly benefit from reading this article.

An 8-port unidirectional crossbar switch with four simultaneous connections (Source: Lattice Semiconductor)

The author starts from the beginning, when on-chip interconnect buses like AMBA or OCP were the only solution for connecting the various functions within a SoC, and explains how the industry evolved from buses to crossbars in 1995-2005, then finally moved on to Network-on-Chip (NoC) implementations starting around 2010. If you are already familiar with the NoC concept, you will still benefit from the questions raised in this article. Do you need to explain to your management why investing in NoC IP is a smart thing? You may want to explain that a NoC “packetize and serialize transaction address, control, and data information on the same wires, which allows transactions to be sent over smaller numbers of wires while maintaining high quality of service (QoS) for each transmission”; a toy sketch below makes this concrete. But I suspect that decision makers will be swayed much more by the types of problems solved by a NoC:

  • chip designs frequently suffer from routing congestion, leading to increased die size and time-to-market delays, which in turn lead to missed market opportunities
  • physical timing closure problems generate efficiency losses, bottlenecks, and squandered performance potential

In the first case, the chip cost is higher, due to an increased die size, and the Return On Investment (ROI) is at best delayed, due to the back end (Place and Route) phase being extended by months. Or perhaps your return will never come if your product misses the right introduction window.

The second group of issues that a NoC will help solve is more qualitative. We are not talking about square millimeters or months. Here we are addressing what could be called product attractiveness: if you don’t get the full benefit of the CPU (or GPU) core you have paid for, either because timing closure problems keep you from reaching the “magic” frequency number that attracts analysts, or because bottlenecks degrade image quality on the screen, the end user may simply not buy the product your SoC is designed for.
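
To make the packetization quote above concrete, here is a toy sketch (my own illustration, not Arteris code) that serializes the address, control, and data words of one write transaction into 8-bit flits, so that 12 flits can cross an 8-wire link instead of driving 96 parallel wires:

    # Toy NoC packetization sketch: one 96-bit transaction becomes twelve 8-bit flits
    proc packetize {addr ctrl data} {
        set flits {}
        foreach word [list $addr $ctrl $data] {
            # split each 32-bit word into four 8-bit flits, MSB first
            for {set shift 24} {$shift >= 0} {incr shift -8} {
                lappend flits [expr {($word >> $shift) & 0xFF}]
            }
        }
        return $flits
    }

    puts [packetize 0x80001000 0x00000001 0xDEADBEEF]   ;# 12 flits on 8 wires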


A network-on-chip interconnect with separate transaction, transport and physical layers (Source: Arteris)

Reading this article, you may be surprised, as I was, to discover that the SoC interconnect did not receive the type of research and development focus that CPUs or GPUs received, and that as the SoC fabric became an afterthought in the design process, this critical component fell behind Moore’s Law. That sounds to me like testability in the 1980s, when chip designers started thinking about test only when the design was almost complete. Then in the 1990s, various techniques like Scan Test and Built-In Self Test (BIST) were developed by EDA vendors, and have since let designers focus on functional design, as long as they follow a simple rule set. Nowadays, we could not imagine developing a SoC without automated Scan test and BIST.

In the same way, Kurt shows that NoC has been extensively used in the highest growth segments of the industry, such as mobile application processors and LTE modems, and we can expect NoC to be much more widely used during this decade, in SoC development across every industry segment, not only very high volume application processors. What is the next step for NoC? Distributed cache coherent interconnect (CCI) fabrics will be the fourth era in the history of interconnect technology, according to Arteris. When should we expect cache coherent fabrics to become available? That’s a good question we should ask Arteris, as the company has become the undisputed leader in NoC IP…

Eric Esteve from IPNEST



Device Noise Analysis of Switched-Capacitor Circuits Webinar
by Daniel Nenni on 10-13-2013 at 9:00 pm


Switched-capacitor (SC) circuits are ubiquitous in CMOS mixed-signal ICs. Thermal noise, introduced by MOS switches and active amplifier circuitry, is the major performance limiter in these circuits. This webinar reviews techniques to accurately analyze the noise performance of switched-capacitor circuits and provides simulation examples using the BDA AFS Platform that complement and verify the theoretical treatment. Circuits discussed range from basic passive and active track-and-hold circuits, to integrators, to SC delta-sigma modulators.
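
As background, the classic result this kind of treatment builds on (a standard fact of circuit theory, not a claim from the webinar itself) is that a switch sampling onto a capacitor leaves a total integrated thermal noise of

$$ \overline{v_n^2} = \frac{kT}{C} $$

independent of the switch resistance. At T = 300 K, a 1 pF sampling capacitor therefore contributes about 64 µV rms, which is why capacitor sizing sets the fundamental noise floor of SC circuits.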

This free webinar is brought to you by
Berkeley Design Automation
and hosted by SemiWiki

October 23, 2013, at 4:30 PM, Pacific Time, via WebEx

REGISTER HERE

Featuring:
Dr. Boris Murmann – Stanford University
David Lee – Berkeley Design Automation

Boris Murmann is an Associate Professor in the Department of Electrical Engineering at Stanford University. He received the Ph.D. degree in Electrical Engineering from the University of California at Berkeley in 2003. From 1994 to 1997, he was with Neutron Mikroelektronik, Germany. Dr. Murmann’s research interests are in the area of mixed-signal integrated circuit design, with emphasis on data converters and sensor interfaces. Boris is the recipient of various awards and serves on several IEEE committees.

David Lee is an Architect at Berkeley Design Automation. He has over 20 years of EDA experience and is well-versed in the art of precision analog/RF circuit simulation. David received his B.A.Sc. and M.A.Sc. degrees in Systems Design Engineering from the University of Waterloo. David has held positions with Northern Telecom, Bell Northern Research, and AT&T Bell Laboratories. Prior to BDA, David was a Scientist with Mentor Graphics Corporation. David is a Sr. Member of the IEEE.

The Semiconductor Wiki Project, the premier semiconductor collaboration site, is a growing online community of professionals involved with the semiconductor design and manufacturing ecosystem. Since going online on January 1st, 2011, more than 700,000 unique visitors have been recorded at www.SemiWiki.com, viewing more than 5M pages of blogs, wikis, and forum posts.

Berkeley Design Automation, Inc. is the recognized leader in nanometer circuit verification. The company combines the world’s fastest nanometer circuit verification platform, Analog FastSPICE, with exceptional application expertise to uniquely address nanometer circuit design challenges. More than 100 companies rely on Berkeley Design Automation to verify their nanometer-scale circuits. Berkeley Design Automation was recognized as one of the 500 fastest growing technology companies in North America by revenue in 2011 and again in 2012 by Deloitte. The company is privately held and backed by Woodside Fund, Bessemer Venture Partners, Panasonic Corp., NTT Corp., IT-Farm, and MUFJ Capital. For more information, visit http://www.berkeley-da.com.

Also Read: TSMC Awards Berkeley Design Automation

More Articles by Daniel Nenni…..



The TSMC CEO Succession Plan!
by Daniel Nenni on 10-13-2013 at 8:00 pm

The foundry executive shuffle continues at Samsung, GlobalFoundries, and TSMC. Some expected, some not, the needs of the many outweigh the needs of the few. As I have mentioned before I have no inside knowledge as to who will be named as Dr. Morris Chang’s successor but here is my candidate for the next TSMC CEO.

First, the executive shuffle: Mike Noonen is no longer Executive Vice President, Global Products, Design, Sales & Marketing at GlobalFoundries. This came as a shock to me, as the changes I witnessed during his tenure were amazing. Hopefully Mike will take a CEO position in EDA or IP and continue with the transformation of the semiconductor industry. John McClure, Mike’s #2 guy, also left GlobalFoundries; he is now with Intel’s Mobile Communications group. In fact, Intel has hired quite a few GF people, so they are definitely serious about the foundry business.

Second: Ana Hunter is no longer Foundry Vice President at Samsung after more than seven years, which, by the way, is how long Samsung has been in the foundry business. Look for Samsung to be the #2 foundry when 14nm revenue kicks in and you can thank Ana for that, absolutely. Ana now works for GlobalFoundries.

Third: Dr. Shang-yi Chiang will retire at the end of October. Shang-Yi is TSMC’s Executive Vice President and Co-Chief Operating Officer. He retired from TSMC as Senior Vice President of R&D in July 2006 and returned in September 2009 so I would not be surprised if Shang-Yi comes back to TSMC after a much deserved break.

If you look at the TSMC corporate executives page you will see many people who are qualified to succeed Dr. Morris Chang. Dr. Mark Liu, Co-Chief Operating Officer, is the odds-on favorite. Prior to joining TSMC, from 1987 to 1993, he was with AT&T Bell Laboratories. From 1983 to 1987, he was a process integration manager of CMOS technology development at Intel Corporation, Santa Clara, CA, developing silicon process technologies for Intel microprocessors. Mark has a PhD in EE and CS from UC Berkeley.

While Mark is the logical choice he is not who I would choose. Dr. Cliff Hou would be my choice as the new TSMC CEO, absolutely.

Dr. Cliff Hou was appointed TSMC’s Vice President of Research and Development (R&D) in 2011. He joined TSMC in 1997 and was previously Senior Director of Design and Technology Platform, where he established the company’s technology design kit and reference flow development organizations. He also led TSMC’s in-house IP development teams from 2008 to 2010. Cliff has 20 U.S. patents and serves as a board member of Global Unichip Corp. He received a Ph.D. in electrical and computer engineering from Syracuse University.

Why Dr. Cliff Hou, you ask? Cliff knows design enablement and is trusted by the top fabless semiconductor companies around the world, that’s why. Moving forward, TSMC’s greatest challenge is not technical, which I’m sorry to say since that is their strong suit. TSMC’s greatest challenge is political. Their next challenge is retaining the commanding market share that 28nm brought them.

As the semiconductor industry continues to transform, TSMC and the other foundries must continue to integrate with their customers. Paul McLellan and I recently finished a book “Fabless: The Transformation of the Semiconductor Industry” and the research we did was enlightening. Historically speaking, the semiconductor industry transformed from the transistor to the IC to IDMs to the ASIC to the FPGA to fab-lite to fabless. The transition continues as fabless semiconductor companies consolidate and become more integrated with the foundries. GlobalFoundries calls this Foundry 2.0. TSMC calls this the Grand Alliance. I call this the natural course of business in Silicon Valley.

Bottom line: When 90% of the semiconductor wafers are purchased by 10% of the companies you really need to accommodate that 10%. Detractors will say that Dr. Cliff Hou does not have enough gray hair to be CEO. I do not agree. Cliff Hou is the same age as the people who buy the commanding share of the wafers and they buy from people who they know and trust.


Mentor Graphics Continues To Perform Well
by Ashraf Eassa on 10-13-2013 at 2:00 pm

The EDA tool space has been booming in this new “mobile era” of computing. As the world transitions to system-on-chip design methodologies, and as more teams are developing even more products for an ever-broadening set of end markets, the demand for ever more sophisticated design tools has only continued to skyrocket. After focusing on EDA’s largest player – Synopsys – in a prior article, I’d like to now shift my attention to the third largest (by revenue) player in this space: Mentor Graphics.

Some Quick Background

While I assume most long-time readers of SemiWiki are quite familiar with Mentor from a technical/product perspective, the company’s revenue base can be broadly classified into five different product/service segments: Scalable Verification, IC Design to Silicon, Integrated System Design, New and Emerging Products, and “Services and Other”.

Mentor’s Verification products essentially allow semiconductor design houses to determine whether a given chip functions as intended. As designs continue to balloon in complexity (for example, even the Apple A7 – which is a chip intended for a small smartphone – packs in over 1 billion transistors), the cost of finding out your design doesn’t work and that you need to go back and do another version (this means fixing the design, taping out, and then waiting weeks to get back the new samples from the fab) is non-trivial. Better verification tools mean lower development costs and faster time-to-market.

Mentor’s Calibre® tool family is a suite of tools that offers physical verification, transistor-level modeling and extraction, lithography, yield enhancement, and reliability measurement. In addition to the Calibre tools, Mentor offers its Olympus-SoC™ place-and-route tool. To augment the effectiveness of the Olympus-SoC tools, Mentor offers a platform known as InRoute™ which allows designers to use the wide variety of Calibre tools within the Olympus-SoC place-and-route tool.

Finally, Mentor offers a whole suite of products to facilitate the design of PCBs, including the Expedition Series®, aimed at larger enterprise customers; the PADS® product line, a lower-cost product for Windows-based systems; XtremePCB™, which allows multiple designers to edit a design simultaneously; and XtremeAR, a PCB routing product.

Digging into the Financials
Mentor Graphics’ share price, like Synopsys’, trades within a hair of its 52-week high (in the world of stocks, if your company is consistently trading near its 52-week highs, then you’re doing something right). The secular trends that drive the success of the EDA tool space are certainly clear: more complex chips designed for increasingly complex manufacturing processes mean that there’s an insatiable need for EDA tools. Further, unless you’re doing your own in-house tools (and even companies like Intel that do build tools in house don’t rely on them exclusively), you really only have three options: Mentor, Synopsys, and Cadence.

Indeed, the company’s most recent earnings report highlights a number of key positive trends. System and software revenue was up 7.3% on a year-over-year basis, gross margins (that is revenue less cost of goods sold) were up from 80.7% to 82.3% from the year-ago period, and operating margin was up from 8.3% to 10.5%. As far as the full year goes, the company reiterated its revenue outlook of $1.155B and bumped up its earnings-per-share forecast to $1.59, which is a solid $0.03 above Wall Street’s consensus. To top this all off, Mentor increased its share buyback authorization to $100M, up from $56M. Sales growth, margin growth, and what appears to be market share gains all point to a company that’s in great shape. In particular, there appears to be incredibly strong demand for Mentor’s emulation products as the increased complexity in IC design (thanks to much more advanced semiconductor processes) has moved more design teams to emulation for full-chip verification. This trend should only continue to play to Mentor’s strengths.

In addition, management had the following encouraging things to say on the call:

  • The average annual run rate of the renewals in the firm’s 10 biggest contracts increased 90%
  • 20 new companies in this quarter alone purchased the Calibre toolset (bringing the total number of purchases to over 1250)
  • Among the top five emulation bookings during the quarter, four of the companies were first-time emulation customers, and three of those were top 20 semiconductor companies. In addition, 3 of the top 10 semiconductor companies in the world either have evaluations of Mentor’s emulation tools in progress or are looking to expand current install bases

Now I would like to look at Mentor from two angles: a discounted cash flow basis (which allows me to use my estimate of Mentor’s future cash flows to determine the fair value for the stock today) and from a more chart-based perspective. As far as Mentor’s stock goes, it is in a very obvious uptrend:

As was the case with Synopsys, the stock is nicely above all of the key moving averages (and recently tested and successfully rebounded from the 50-day moving average – which many market technicians believe to be a key indicator of a stock’s “health”). Further, the shorter-term moving averages are very cleanly above the longer-term moving averages, which is a further sign that the stock is in pretty good shape. The chart here doesn’t lie: investors aren’t going to let this one dip too far before buying.

But what is the company actually worth? Assuming that the company can earn $1.59/share this year (as per management’s expectations), and assuming that the company can drive a few more points of operating leverage against a high single digit/low double digit long term growth rate, it’s not tough to justify a fair value range of $25-$28 today, which would represent anywhere from 12%-26% upside from current levels. Not a screaming buy, but the shares certainly don’t look particularly expensive. In fact, that’s probably the most frustrating thing about the EDA stocks: they’re great companies operating in a near-ideal environment (and Mentor is particularly interesting given the trend towards using its emulation tools), but it’s almost always priced in.
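
To make that back-of-envelope math concrete, here is a minimal single-stage earnings-discount sketch. The growth rate, horizon, discount rate, and terminal multiple are my own illustrative assumptions, not figures from Mentor or from the article; only the $1.59 EPS figure comes from the text above:

    # Back-of-envelope earnings discount sketch (inputs below are assumptions)
    set eps 1.59          ;# management's full-year EPS guidance, per the article
    set growth 0.10       ;# assumed long-term growth (high single/low double digit)
    set years 10          ;# assumed explicit forecast horizon
    set discount 0.12     ;# assumed discount rate
    set terminal_pe 10.0  ;# assumed terminal earnings multiple

    set value 0.0
    set e $eps
    for {set y 1} {$y <= $years} {incr y} {
        set e [expr {$e * (1.0 + $growth)}]
        set value [expr {$value + $e / pow(1.0 + $discount, $y)}]
    }
    # Add a discounted terminal value on the final year's earnings
    set value [expr {$value + $terminal_pe * $e / pow(1.0 + $discount, $years)}]
    puts [format "fair value per share: %.2f" $value]   ;# about 28 with these inputs

With these particular inputs the sketch lands near the top of the $25-$28 range; different but equally defensible assumptions move the answer several dollars in either direction.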

More articles by Ashraf Eassa…

Also Read: A Brief History of Mentor Graphics



Driving Innovation in Image Sensors and High Speed AMS Design!
by Daniel Nenni on 10-13-2013 at 7:00 am


This is a live Silicon Valley event and yes there is such a thing as a free lunch. This is the first in a series of live SemiWiki collaborative events. I strongly believe that, especially in the age of social media, real world experience is key to the collaboration required to be successful in modern day semiconductor design. This is your chance to meet the experts and network with other innovators in the mixed signal design industry.

If you are not in Silicon Valley I will be covering the event and will be accepting questions for the speakers on Twitter:

@DanielNenni (SemiWiki.com)

Moderated by SemiWiki founder Dan Nenni, this live lunch-and-learn session will feature presentations by Eric Kurth, Design Manager at FLIR Systems, and Dr. Lanny Lewyn, Principal at Lewyn Consulting. These Tanner EDA customers will share their industry experience and expertise while discussing how they’ve solved some of today’s toughest challenges in thermal imaging and high speed A/MS design. FLIR Systems is a world leader in the design, manufacture and marketing of thermal imaging infrared cameras. Their products serve industrial, commercial and government markets, internationally as well as domestically. Dr. Lewyn is a Life Senior Member of the IEEE and a noted author and frequent invited speaker on topics related to nanoscale analog circuit design.

Tanner EDA technical staff will be on-hand to provide hands-on demos of the complete Tanner EDA mixed-signal and MEMS tool flow. Participants can view demos during the registration period (11:00am-Noon) and immediately following the customer presentations.

REGISTER HERE

When and Where
Thursday, October 24th
Techmart, Network Meeting Center
5201 Great America Pkwy #122
Santa Clara, CA 95054
Phone (408) 562-6111

Agenda
  • 11:00 – Registration; tool demonstrations available
  • 12:00 – Lunch served; Tanner EDA welcome, introductions & overview (John Zuk, Vice President WW Marketing & Business Strategy, Tanner EDA)
  • 12:15 – Uncovering Secrets in Deep Space: High Speed Analog for Astrophysics Exploration (Dr. Lanny Lewyn, Life Senior Member IEEE and Principal, Lewyn Consulting)
  • 12:45 – Seeing in the Dark: Innovative Infrared Product Design Enabled by a Robust EDA Tool Flow (Eric Kurth, Design Manager, FLIR Systems)
  • 1:15 – Closing comments & prize drawing (Dan Nenni, SemiWiki)

Who should attend?

  • Current and past users of Tanner EDA design tools
  • Users of other OpenAccess-based analog/mixed-signal design tools
  • Custom, analog and mixed-signal design engineers, layout engineers and CAD engineers
  • Project/Program and Design Managers of A/MS design groups
  • Managers of design teams who are looking to enhance productivity and reduce time-to-market for high-speed and low power A/MS designs

What will you learn?
This lunch & learn provides an opportunity to connect with Tanner EDA customers, product managers and application engineers and see the tools in action.

  • You’ll learn practices and techniques from industry experts to help overcome challenges in high speed and image sensor design
  • You’ll discuss analog/mixed-signal design, layout, and verification methodologies that offer high productivity and interoperability across custom/analog and digital design environments
  • You’ll see the benefits of using the latest release of Tanner EDA design tools

REGISTER HERE





A Brief History of Tangent Systems
by Daniel Nenni on 10-12-2013 at 3:30 pm

In the spring of 1984, Mark Flomenhoft, Ph.D., approached Aki Fujimura, Randy Smith, and Steve Teig to join him in developing a business plan for a new EDA place and route (P&R) company. The three young software engineers all worked at Trilogy Systems Corporation, where Mark was a director in the design automation department. Mark had been working on this project in his spare time for a while with Rob Smith (the ‘R’ in a Texas-based company called VR Systems), but with Mark in California and Rob in Texas, the project never got off the ground and Rob withdrew. As the plan was being developed by the new team, Mark recruited Terry Smith (no relation to Randy) to be the CEO, telling the engineers that he felt someone with more management experience would be needed to convince investors to fund the business. After the business plan was essentially complete, Dave Evans, then at Hewlett-Packard, agreed to add his resume to the plan with the intent of becoming the head of marketing should the business be funded.

In July 1984, a verbal agreement was reached with Intergraph Corporation (now part of Hexagon, Nordic Exchange: HEXA B) to fund the company. Intergraph provided $2M and a $4M line of credit in exchange for 50% of the company. Within a month, the team started work at Tangent Systems, initially working out of Intergraph’s San Jose sales office near San Jose Airport. Work began immediately on TANCELL, the industry’s first timing-driven place and route tool. TANCELL was shown in demonstrations at DAC in June 1985, and the first sales of TANCELL were closed by the end of the year.

With Intergraph’s investment came a requirement to deliver Tangent’s products on multiple compute platforms, including the yet-to-be-released Intergraph i32 workstation. To solve this problem, Tangent chose to develop its software using MAINSAIL, the commercial version of SAIL (Stanford Artificial Intelligence Language). Tangent used DEC VAX computers as its initial development platform and later ported (mostly cross-compiled) to Sun, Apollo, and IBM workstations. The combination of MAINSAIL and the database RIL (Relocatable Implementation Liberator) concept proposed by Steve allowed TANCELL to be quickly modified to support new requirements and to increase its capacity as design sizes rapidly expanded.

As the initial sales of TANCELL began, Aki went to Japan to talk to the major ASIC manufacturers. What emerged was the specification of a new type of area-based P&R system based on area routing, rather than channel routing on which TANCELL was based. The new product, TANGATE, was developed to target the gate array market. An area-based standard cell tool, Cell3 Ensemble, would later be derived from this at Cadence and would become the dominant tool in the standard cell P&R market. Cell3 Ensemble code would also become one of the key pieces of technology stolen by Avant!, and the subject of numerous civil and criminal actions.

While Tangent’s technology was a success, the financial exit for Tangent was not. There was a second round of funding which introduced venture capitalists to the company following the successful development of TANCELL. However, following Black Monday (1987), the venture capitalists wanted out. Intergraph obliged and, by adding a bit more cash, ended up with 80% ownership of the company. By the end of 1988, with a sales run rate above $9M and about 70 employees, Tangent was sold to Cadence for approximately $14.2M. The deal formally closed in February 1989 and became Cadence’s first acquisition.

Tangent was a remarkable company, fueled in part by Mark Flomenhoft’s selection of the three young engineers, then all aged 24 to 26. Aki, Randy, and Steve have gone on to collectively hold at least 14 C-level titles, plus numerous board seats. Many other Tangent employees have also gone on to leadership roles in EDA start-up companies, driven in part by their Tangent experience.