
Ensuring timing of Custom Designs with large embedded memories – A big burden has a solution!

by Pawan Fangaria on 03-13-2013 at 10:30 am

In the 1990s, when designs were small, I saw the design and EDA communities struggling with the huge time it took to verify circuits, particularly with SPICE and the like. I was myself working on a tool for transistor-level static timing analysis (STA), mainly to save time (by eliminating the need for an exhaustive set of simulation vectors) at an acceptable loss of accuracy. That's history, but today the challenge is much bigger and more critical, considering the large memory blocks embedded in multi-million-gate SoCs: blocks of varying types and functionalities (e.g. SRAM, ROM, multi-port register files) with different modes of operation, such as on-demand active or stand-by mode. Moreover, the challenge has multiplied with process variation at the nanometer level. Of course, there are gains in performance, power, area and cost owing to economies of scale, and that's why the effort is worthwhile. The need of the hour is better accuracy and faster verification for larger designs: a triple whammy!

I was delighted to see Synopsys's NanoTime tool, which has a transistor-level STA engine well suited to today's complex SoCs with multiple large memory blocks embedded in them. That inspired me to take a look at the white paper, The Benefits of Static Timing Analysis Based Memory Characterization, posted by Synopsys on its website: http://www.synopsys.com/Tools/Implementation/SignOff/Pages/NanoTime.aspx

Synopsys provides a novel approach that accurately estimates the delays of sub-circuits within a memory block and uses graph analysis techniques to identify the most and least critical paths and determine all timing violations in a fraction of the time taken by any dynamic circuit simulator.
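To make the graph idea concrete, here is a toy longest-path search over a timing DAG in Python. This is purely an illustrative sketch of graph-based critical-path analysis, not NanoTime's actual algorithm; the node names and delay numbers are invented.

```python
# Toy static timing analysis: find the most critical (longest-delay) path
# through a DAG of circuit stages. Edge weights are stage delays in ps.
# Illustrative only -- not Synopsys NanoTime's actual engine.
from collections import defaultdict

def critical_path(edges, source, sink):
    """Return (total_delay, path) for the longest path from source to sink."""
    graph = defaultdict(list)
    indeg = defaultdict(int)
    nodes = set()
    for u, v, d in edges:
        graph[u].append((v, d))
        indeg[v] += 1
        nodes.update((u, v))
    # Kahn's algorithm for a topological order
    order, stack = [], [n for n in nodes if indeg[n] == 0]
    while stack:
        n = stack.pop()
        order.append(n)
        for v, _ in graph[n]:
            indeg[v] -= 1
            if indeg[v] == 0:
                stack.append(v)
    # Longest-path relaxation over the topological order
    dist = {n: float("-inf") for n in nodes}
    prev = {}
    dist[source] = 0.0
    for n in order:
        for v, d in graph[n]:
            if dist[n] + d > dist[v]:
                dist[v] = dist[n] + d
                prev[v] = n
    # Reconstruct the critical path by walking predecessors back
    path, n = [sink], sink
    while n != source:
        n = prev[n]
        path.append(n)
    return dist[sink], path[::-1]

# Hypothetical read path: clock -> decoder -> wordline -> bitline -> sense amp
edges = [("clk", "dec", 40.0), ("dec", "wl", 65.0),
         ("wl", "bl", 90.0), ("bl", "sa", 55.0),
         ("sa", "out", 30.0), ("dec", "sa", 70.0)]
delay, path = critical_path(edges, "clk", "out")
print(delay, path)  # 280.0 ['clk', 'dec', 'wl', 'bl', 'sa', 'out']
```

A dynamic simulator would need stimulus vectors to exercise this path; the graph traversal finds it exhaustively in one pass, which is the core of the speed advantage described above.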

The paths can extend from the control logic through the entire memory core to the output buffers. Accuracy is typically within 5% of HSPICE. The tool supports both approaches to memory model generation: characterization of memory-compiler-generated models and characterization of individual memory instances.

In the STA flow for memory design and characterization, the tool uses SPICE/FastSPICE as a subordinate tool to further analyze and fine-tune the timing violations initially found in the design.

Similarly, memory instances generated by a memory compiler can also be characterized and verified by IP users.

It supports both timing models: Composite Current Source (CCS) and the standard Non-Linear Delay Model (NLDM). The CCS model proposed by Synopsys for nanometer delay modeling can be found at http://www.opensourceliberty.org/ccspaper/ccs_timing_wp.pdf

The STA tool in NanoTime performs all types of timing checks pertaining to setup and hold times, which are crucial for the correct operation of sequential circuits. The many memory-specific variants of these, such as read/write time and read/write enable, are all checked exhaustively, and timing models are generated quickly for full-chip SoC sign-off. With all these kinds of checks it provides complete verification coverage, as there are no vectors to be missed. It also checks signal integrity and performs noise analysis. The tool has great capabilities in the context of today's designs.
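For readers who want the setup/hold idea spelled out, here is a minimal slack calculation in Python. The formulas are the textbook definitions; the clock period and constraint numbers are invented for the example, and real sign-off tools add clock skew, derating and process variation on top of these equations.

```python
# Textbook setup/hold slack checks for a flip-flop, in picoseconds.
# Illustrative sketch only: numbers are invented, and real STA adds
# skew, derating and variation to these basic equations.

def setup_slack(data_arrival, capture_edge, t_setup):
    """Positive slack: data settles early enough before the capture edge."""
    return (capture_edge - t_setup) - data_arrival

def hold_slack(data_arrival, launch_edge, t_hold):
    """Positive slack: data does not change too soon after the launch edge."""
    return data_arrival - (launch_edge + t_hold)

# A 1 GHz clock (1000 ps period), 50 ps setup and 80 ps hold requirements:
print(setup_slack(920, 1000, 50))  # 30 -> setup met with 30 ps margin
print(hold_slack(120, 0, 80))      # 40 -> hold met with 40 ps margin
```

A negative result from either function is a timing violation; the memory-specific checks mentioned above (read/write time, enable timing) are variants of the same arrival-versus-requirement comparison.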

By Pawan Kumar Fangaria
EDA/Semiconductor professional and Business consultant
Email: Pawan_fangaria@yahoo.com


EDPS Monterey. Agenda Now Available

by Paul McLellan on 03-12-2013 at 8:13 pm

For 20 years there has been the Electronic Design Process Symposium. It is held each April and, for at least the last few years, has been in Monterey at the Monterey Beach Resort. This year it is on Thursday and Friday, April 18th/19th.

The keynote on the first day is by Ivo Bolsens of Xilinx on The All-programmable SoC — at the Heart of Next-Generation Embedded Systems. The morning is then devoted to system and platform design, with presentations from Space Codesign and Cadence, and a panel session on How to make ESL really work with Greg Wright of Alcatel, Mike McNamara of Adapt-IP, Gene Matter of Docea Power, Guy Bois of Space Codesign, and Frank Schirrmeister of Cadence.

After lunch it is all about Design Collaboration with presentations by Synopsys, Intel, Nimbic, Xuropan and NetApp.

Then it is up into the third dimension with a session on 3D system design, with presentations by Mentor, Cadence and Micron, followed by a panel session, "3DIC, are we there yet?", with Dusan Petranovic of Mentor, Brandon Wang of Cadence, Mike Black of Micron, Ivo Bolsens the CTO of Xilinx, Gary Smith and Herb Reiter. Gene Jakubowski moderates.

Gary Smith is giving the keynote during dinner on Silicon Platforms + Virtual Platforms = An Explosion in SoC Design.

SemiWiki’s own Dan Nenni is giving the keynote on the second day on The FinFET value proposition. That is followed by a session on FinFET design challenges with presentations from Oracle, ARM, TSMC and Synopsys. Then, after lunch, the last session is on FinFET Foundry Design Enablement Challenges with presentations from three speakers from Global Foundries and ARM.

The complete agenda is here. Early registration ends on March 18th, so don’t wait too long before you decide to go. UPDATE: the EDPS website is in error; early registration actually ends on March 31st. So jog, don’t sprint.


RTDA at Altera

by Paul McLellan on 03-12-2013 at 8:05 pm

I talked to Yaron Kretchmer of Altera to find out how they are using RTDA’s products. I believe that Altera are RTDA’s oldest customer, dating back over 15 years; the tools were originally used by the operations team around the test floor before propagating into the EDA and software worlds more recently.

Altera use two RTDA tools, LicenseMonitor and FlowTracer.

LicenseMonitor keeps very accurate, high-granularity data about which licenses are being used. Altera has one or more of pretty much every EDA company’s tools, and they monitor several thousand different licensed tool features. This enables directors and VPs to see how licenses are being used and whether certain groups are using them, and it provides input data for purchases. The tool is very stable, and it has produced cost savings that are orders of magnitude greater than its cost: it makes it easier to sanity-check large requests, such as a request to double the number of simulation licenses, and to cut back on unused licenses when negotiating with EDA companies to remix the license pool and, usually, acquire additional licenses too.
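The core analytic behind right-sizing a license pool can be sketched in a few lines: from a log of checkout/checkin events, compute the peak number of licenses in use at once. This is a toy illustration of the idea, not LicenseMonitor's actual data model, which tracks far more (per-feature, per-user, long-term history).

```python
# Toy license-usage analytics: given a log of (timestamp, delta) events,
# where delta is +1 for a checkout and -1 for a checkin, find the peak
# number of licenses concurrently in use. Illustrative only -- not
# RTDA LicenseMonitor's real data model.
def peak_usage(events):
    """events: list of (timestamp, delta) pairs with delta +1 or -1."""
    in_use = peak = 0
    # Sorting puts events in time order; at equal timestamps the -1
    # (checkin) sorts first, which is the conservative choice.
    for _, delta in sorted(events):
        in_use += delta
        peak = max(peak, in_use)
    return peak

# Hypothetical log: three overlapping simulation-license checkouts
log = [(0, +1), (5, +1), (7, -1), (9, +1), (12, +1), (15, -1)]
print(peak_usage(log))  # 3
```

Comparing that peak against the number of licenses purchased is exactly the kind of evidence that makes a "double our simulation licenses" request easy to sanity-check.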

The other tool is FlowTracer. This is like the Unix “make” command on steroids. When moving an existing flow into the FlowTracer environment, it can automatically identify dependencies by monitoring what gets done and what files get read, and so build the dependency graph. But Altera find that this isn’t very script-friendly, so they take it as a quick-and-dirty starting point and then handcraft the scripts for better maintainability. They currently have RTL-to-GDS, data management, verification and some other flows up and running (they have been using it for a bit over a year). There are lots more opportunities to expand use in the rest of Altera, such as the regression flows for software and hardware and additional functional verification flows.
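The "make on steroids" idea boils down to timestamp-driven dependency tracking: a step reruns only when something it depends on has changed. Here is a minimal Python sketch of that core mechanism; it is a toy illustration, not FlowTracer's actual API, and the file names in the usage note are invented.

```python
# Minimal "make"-style dependency tracker: a target rebuilds only when it
# is missing or older than any of its dependencies. A sketch of the idea
# behind dependency-driven flow tools, not RTDA FlowTracer's real API.
import os

def needs_rebuild(target, deps):
    """A target is stale if it is missing or older than any dependency."""
    if not os.path.exists(target):
        return True
    t = os.path.getmtime(target)
    return any(os.path.getmtime(d) > t for d in deps)

def run_flow(graph, build):
    """graph maps target -> list of dependencies; build(target) regenerates
    one target. Dependencies are visited first, so each target is checked
    only after everything it depends on is up to date."""
    done = set()

    def visit(target):
        if target in done or target not in graph:
            return  # already handled, or a source file with no recipe
        done.add(target)
        for dep in graph[target]:
            visit(dep)
        if needs_rebuild(target, graph[target]):
            build(target)

    for target in graph:
        visit(target)
```

With a graph like `{"top.gds": ["top.v"]}` and a `build` function that runs the appropriate tool, a second invocation after a successful run does nothing, which is exactly the repeatability property mentioned below.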

Basically, Altera are very happy with the RTDA tools and expect to proliferate them more in the future. The first step in optimizing your license use is to know how many you are really using, and LicenseMonitor provides this. The first step in optimizing a flow is to make it repeatable and FlowTracer does this.

There is a forum thread on SemiWiki discussing RTDA’s products versus open-source here.


Samsung and the New World Order!

by Daniel Nenni on 03-12-2013 at 7:52 pm

The keynotes at CDNLive today were very interesting, but rather than cover the slides and bullet points let me share with you my personal view of Samsung and how they are changing the semiconductor industry. Before I continue remember I’m just a blogger who shares observations, experiences, and opinions. This blog is for entertainment purposes and not to be used for wealth management.

Right now Samsung Electronics is at $188B in revenue and expected to more than double in size by 2020. The Samsung Semiconductor portion is $33B and I predict that number will triple. To fuel that growth Samsung will spend hundreds of billions of dollars. Capital expenditures alone will be roughly $200B. That’s a lot of money folks and money talks!

If you look at Apple and how they completely remodeled the mobile industry, you will see the same pattern with Samsung and the electronics industry. Apple’s strategy focused on the “user experience” by controlling the associated ecosystem, which brought us the iFamily of products: iPods, iPads, iPhones, iOS, iTunes, iCloud, etc… and hopefully iTV and the iWatch. All are available online or in person at more than 400 Apple retail stores around the world. By owning the user experience, Apple became what it is today: one of the most valued corporate brands.

Samsung is taking a similar route but not stopping at the ecosystem: they will control the entire electronics supply chain. If you were at the Consumer Electronics Show this year, you saw it up close and personal. The Samsung booth was ginormous, with every electronic gadget and appliance you can imagine. Visit South Korea sometime and you will be hard pressed to see products in use that aren’t Samsung, even toothbrushes! If you look at the bill of materials for Samsung products, you will see Samsung part numbers through and through; they control their supply chains, absolutely.

The fabless semiconductor industry started with IDMs renting out excess fab space, and some say Samsung entered the foundry business for the same reason, but I don’t agree. Samsung was founded in 1938 and is a company with a very long-term strategy. Being an IDM certainly gives you depth in the semiconductor supply chain, but even more so as a foundry. While other foundries built ecosystems organically over the years through partnerships and joint development activities, Samsung’s strength is inorganic business development (they can write some very big checks!).

Starting with EDA, “where electronics begins”, Samsung is one of the largest consumers of EDA tools and EDA really likes big checks. Samsung is now on the board of Cadence, right? Same goes for IP; ARM is the #1 CPU core for Samsung mobile products and look at all the ARM/Samsung 14nm press releases. ARM likes big checks too. Samsung also launched a $100M VC fund for semiconductor start-ups and you can bet they will include foundry services and EDA tool flows. Samsung knows how to invest so there are more big checks coming, believe it.

So where does that leave us folks in the fabless semiconductor ecosystem who traditionally do not write big checks? Or those of us who are not on the receiving end of those big checks? Well, may the force be with us!


Virtual Platforms, Acceleration, Emulation, FPGA Prototypes, Chips

by Paul McLellan on 03-12-2013 at 7:13 pm

At CDNLive today Frank Schirrmeister presented a nice overview of Cadence’s verification capabilities. The problem with verification is that you can’t have everything you want. What you really want is very fast runtimes, very accurate fidelity to the hardware and everything available very early in the design cycle so you can get software developed, integration done and so on. But clearly you can’t verify RTL early in the design cycle before you’ve written it.


The actual chip back from the fab is completely accurate and fast. But it’s much too late to start verification and the only way to fix a hardware bug is to respin the chip. And it’s not such a great software debug environment with everything going through JTAG interfaces.

At the other end of the spectrum, a virtual prototype can be available early, is great for software debug, has reasonable speed. But there can be problems keeping it faithful to the hardware as the hardware is developed, and, of course, it doesn’t really help in the hardware verification flow at all.

RTL simulation is great for the hardware verification, although it can be on the slow side on large designs. But it is way too slow to help validate or debug embedded software.

Emulation is like RTL simulation only faster (and more expensive). Hardware fidelity is good and it is fast enough that it can be used for some software integration testing, developing device drivers etc. But obviously the RTL needs to be complete which means it comes very late in the design cycle.

Building FPGA prototypes is a major investment, especially if the design won’t fit in a single FPGA and so needs to be partitioned. So it can only be done when the RTL is complete (or close) meaning it is very late. In most ways it is as good as the actual chip and the debug capabilities are much better for both hardware and software.


So like “better, cheaper, faster; pick any two”, none of these are ideal in all circumstances. Instead, customers are using all sorts of hybrid flows linking two or more of these engines. For example, running the stable part of a design on an FPGA prototype and the part that is not finalized on a Palladium emulator. Or running transactional level models (TLM) on a workstation against RTL running in an FPGA prototype.

To make this all work, it needs to be possible to move a design from one environment to another as automatically as possible, with the same RTL and the same VIP, and even to pull data from a running simulation and use it to populate another environment. Frank admits it is not all there yet, but it is getting closer.

Now that designs are so large that even RTL simulation isn’t always feasible, these hybrid environments are going to become more common as a way to get the right mix of speed, accuracy, availability and effort.


Visually Debugging IC Designs for AMS and Mixed-Languages

by Daniel Payne on 03-12-2013 at 4:18 pm

With an HDL-based design methodology many IC engineers code in text languages like SystemVerilog and VHDL, so it’s only natural to use a text-based debug methodology. The expression that, “A picture is worth a thousand words” comes to my mind and in this case a visual debug approach is worth considering for AMS and mixed-language IC designs.

Let’s say that on your SoC project there are multiple IP blocks being re-used, and that they are coming from another design group, another design location, or even another company. How would you get up to speed in understanding not only the interconnects to the IP block, but also take a peek inside the IP to best understand the structure or behavior?

It would take way too long to read the SystemVerilog or VHDL text directly and then create a graphical representation manually. You have better things to do with your engineering talent than manually visualizing how an IP block was designed. The good news is that there is EDA technology available that can:

Read in a mixed-language netlist

  • SystemVerilog
  • VHDL

Read in a gate-level netlist

  • EDIF

Read in a transistor-level netlist

  • SPICE
  • HSPICE
  • Eldo

Read in an interconnect netlist

  • DSPF

This EDA technology is called StarVision PRO and is developed by a German company called Concept Engineering. I first met the President and CEO, Gerhard Angst, several years ago when I worked at Mentor Graphics and we needed a way to visualize SPICE and DSPF netlists supplied by FastSPICE users.

We used SpiceVision PRO, and it was the only automated method we could find that would quickly and accurately create a schematic from a SPICE netlist, on the fly, with very little learning curve. It was a real time-saver for us, because we could quickly visualize the unknown circuits being simulated at the transistor level, traverse the hierarchy, and understand more of the design intent of the netlist. The transistor-level netlists were easy to read, and the logic flowed naturally from left to right, as neatly as if someone had hand-drawn the schematics.


Gerhard Angst, Concept Engineering

Webinar
If you are debugging IP at the RTL, gate, transistor or interconnect levels, and want to save some time in that process, then plan to attend their webinar next week on Tuesday, March 12th, 10:00AM PDT. Registration is online, and will fill up fast, so attend and get your questions answered about how a visual debug approach will save you time and effort.


Cadence To Acquire Tensilica

by Paul McLellan on 03-11-2013 at 5:54 pm

You have probably already seen the news: Cadence is acquiring Tensilica for $380M. Cadence has been relatively late to the IP party compared to Synopsys. In contrast, Mentor was early, got into the IP business before it was really profitable and ended up shutting down the business.

Tensilica is quite sizable. It has over 200 licensees, including 7 of the top 10 semiconductor companies. They announced earlier this year that they had shipped over 2 billion cores. Somebody asked me how this will affect Cadence’s relationship with ARM, but I don’t really see them as equivalent. People use Tensilica’s dataplane processors for specialized functions such as audio, voice recognition, video and wireless modems, where the ARM processor is not especially well suited. Synopsys’s ARC processor, acquired in the Virage deal, seems a closer match. Many chips use an ARM processor as a control processor and then have Tensilica cores for offloading, normally to reduce power or, sometimes, to bring more processor horsepower to bear on a complex problem. And, in fact, the press release announcing the acquisition even has a quote from Simon Segars, the President of ARM, saying that this is positive for the industry.

It will be interesting to see whether Cadence’s large sales channel is able to get more design wins than Tensilica could as a small company. After all, it is not as if customers didn’t know Tensilica existed. I had the same questions about the Denali acquisition but everyone in Cadence seems to think that worked out well.

With Imagination acquiring MIPS, the processor world is getting a shakeup right now. It is going to be interesting to see how this all plays out.

The press release on the acquisition is here. Also read A Brief History of Tensilica.


Sanjiv Kaul: Is HLS About to Take Off?

by Paul McLellan on 03-10-2013 at 8:10 pm


At the end of last week I talked to Sanjiv Kaul, the new CEO of Calypto. Just to give a little background for those who haven’t been following along at home: Calypto was founded to try to solve the very hard problem of sequential logic equivalence checking (SLEC), mostly by people from the engineering team that I managed at Ambit. SLEC automatically compares the C (or SystemC, etc.) input of high-level synthesis (HLS) with the RTL output. Calypto then took this technology and produced a sequential power reduction product, which was a much easier sell since it didn’t depend on the acceptance of HLS; it operated at the RTL level. Meanwhile, over at Mentor, they developed an HLS product called Catapult that was having a hard time getting traction without a focused sales force. The Catapult product line and its people were transferred to Calypto in a complicated transaction, so now Calypto has three product lines:

  • SLEC
  • PowerPro
  • Catapult


HLS has been bouncing along for years with early adopters and slow adoption. The first product in the space, eventually canceled, was Synopsys’s Behavioral Compiler back in the mid-1990s. A decade ago everyone pretty much assumed that after RTL would come HLS and we’d move up a level. But in fact we switched to IP as the higher-level abstraction. IP blocks are now individually so large, however, that we need to improve the productivity of creating them. When Sanjiv talks to customers, many are starting to be at the proliferation stage, where their initial projects have been a success and now it is time to deploy widely across the company.

“RTL is the new netlist.” The process for getting from RTL to completed physical layout is now getting to be turnkey (of course the people turning the keys would argue it is really hard, and that is true, but it is not really where the value is added in the design). More and more of the differentiation is going on at the IP level and this is the big opportunity for HLS.

Since Calypto has the only SLEC product on the market, Sanjiv feels they are the best positioned. Formal verification techniques tend to be hard to use without guidance from the synthesis tool, but until Catapult moved into Calypto it wasn’t possible to build in the appropriate hooks. By integrating Catapult more tightly with both SLEC and the sequential power reduction technology, a more powerful product can be made.

In the past, Sanjiv has taken multiple products to a market dominating position, such as PrimeTime or Physical Compiler at Synopsys. Of course he hopes that the magic will work again and feels Calypto is well positioned to be the standard HLS/SLEC tool, the winner in the space. But it is early and there are multiple players. It reminds him of the early days of logic synthesis when companies such as Silc, Trimeter, Mentor/Autologic were also around, before the market standardized on DC.

Calypto is profitable and cash-flow positive (which is actually more important in a startup). They are around 100 people.

There is a webinar coming up on March 26th at 10am Pacific on How to Optimize for Power with High Level Synthesis. More details here.


A Brief History of the Foundry Industry, part 2

by Paul McLellan on 03-10-2013 at 8:05 pm

Part 1 here.

The line between fabless semiconductor companies and IDMs has blurred over the last decade. Back in the 1990s, most IDMs manufactured most of their own product, perhaps using a foundry for a small percentage of additional capacity when required. But their own manufacturing was competitive, both in terms of the capacity of fab they could afford to build, and in terms of process technology.

However, gradually both of these things changed. The size of fab required to remain cost-competitive continued to increase to the point that most semiconductor companies could not fill a fab that large. The semiconductor processes also got significantly more complex and costly, so that the cost of staying on the leading edge became prohibitive for all except the largest IDMs, most notably Intel.

The first thing that happened was the formation of several process clubs, where a lot of the cost of technology development of the semiconductor process could be shared between a number of semiconductor companies. A small semiconductor company couldn’t hope to develop a state-of-the-art process on their own.

Gradually it became clear that only the largest semiconductor companies could even afford to build a cost-competitive fab. It wasn’t just a matter of the investment required; the capacity created would be more than they would be able to use. They would never be able to “fill the fab.” Back when a fab cost $3B to build, a company would face a depreciation cost of roughly $1B per year, meaning it needed a running semiconductor business of perhaps $5B, around the size of AMD, the only competitor to Intel in the x86 microprocessor business.
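The back-of-envelope arithmetic connecting those three figures can be written out explicitly. The $3B, $1B and $5B numbers come from the paragraph above; the three-year depreciation period and the 20% depreciation-to-revenue ratio are illustrative assumptions I have supplied to make the arithmetic close.

```python
# Worked version of the fab-economics arithmetic above. Assumed inputs
# (not stated in the text): straight-line depreciation over ~3 years,
# and fab depreciation held to ~20% of semiconductor revenue.
fab_cost = 3.0                       # fab construction cost, $B
depreciation_years = 3               # assumed write-off period
annual_depreciation = fab_cost / depreciation_years          # -> $1B/year

depreciation_share_of_revenue = 0.20 # assumed sustainable share
revenue_needed = annual_depreciation / depreciation_share_of_revenue
print(annual_depreciation, revenue_needed)  # 1.0 5.0  ($B per year)
```

Under those assumptions, a $3B fab only makes sense for a company with roughly $5B in annual semiconductor revenue, which is why the pool of viable IDMs kept shrinking as fab costs rose.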

In fact, in March 2009, AMD went completely fabless. They divested their manufacturing to the Advanced Technology Investment Company, primarily owned by the Emirate of Abu Dhabi. The manufacturing part became a pure-play foundry called Global Foundries, still partially owned by AMD and, of course, starting out with AMD as a large customer already in place. Subsequently Global Foundries acquired Chartered Semiconductor, the other major pure-play foundry, based in Singapore, and today they are the second largest foundry behind TSMC.

Many other semiconductor companies also went fabless, such as Freescale, Infineon and Sony. Other semiconductor companies didn’t go quite so far. They kept their existing fabs, many of which were fully depreciated and running non-leading-edge processes. But for the most advanced processes they used foundries since they couldn’t afford either the investment or the cost of technology development to keep up. These companies, still usually called IDMs, are known as fab-lite.

In 1994 the fabless semiconductor companies and the foundries created the Fabless Semiconductor Association. There was already an organization, the Semiconductor Industry Association, which most IDMs belonged to, but in that era, fabless semiconductor companies were not seen as “real” semiconductor companies and were not allowed to join. This has since been renamed the Global Semiconductor Alliance or GSA.

GSA’s data show that in 2012 the top foundry is TSMC by a long way, with revenues of $16B. Global Foundries is number two with revenues of $4.3B, followed by UMC with $3.7B. The fourth largest company in terms of foundry business is Samsung, which is an IDM. Their foundry business was $3.4B. Of course they are famously both a major supplier of chips to Apple and their primary competitor in the smartphone business. Number 5 is China-based SMIC, the only other company with foundry revenues over $1B, at $1.6B. The size of business drops off rapidly although there is a long tail of foundries. For example, in twelfth place is Korea-based MagnaChip with revenues of $375M, one fortieth the size of TSMC.

Meanwhile the transition of IDMs towards outsourcing manufacturing to foundries continues. At 45nm, nine semiconductor companies had their own 45nm fabs: Intel, Samsung, IBM, ST, Panasonic, Renesas, Texas Instruments, Toshiba and Fujitsu. By the 20/22nm process node, the only IDMs with their own fabs, ignoring specialist companies that only manufacture memories, are Intel, IBM and Samsung.

At 20/22nm, the foundry manufacturing has been taken up by TSMC, UMC, Global Foundries and Samsung, the only foundries with 20/22nm fabs in place or planned.

Meanwhile, more and more of the top ten of the non-memory semiconductor manufacturers are taken up with fabless and fab-lite companies. In 2011, the latest year for which data is available, Intel (IDM), Samsung (IDM and foundry) and Texas Instruments (fab-lite) take up the top three places. They are followed by Toshiba (fab-lite), Renesas (fab-lite), Qualcomm (fabless), ST (fab-lite) and Broadcom (fabless). Basically the rest of the top 25 are all either completely fabless or using the fab-lite approach of using foundries for leading edge process and continuing to manufacture internally anything that doesn’t require a leading-edge fab. IBM (IDM and foundry) slots in somewhere, but they consume so much of their silicon internally that it is not clear where. Their merchant business is not very large.

Going forward, it is unclear whether all the IDMs and foundries that have made the move to 20/22nm will also be able to afford to make the transition to 14nm and beyond. There are big technical challenges as well as economic issues as to what price wafers will cost and, as a result, just how much of the existing product lines will make the transition as opposed to remaining on cheaper, less advanced, process nodes.


We are Live at CDNLive 2013!

by Daniel Nenni on 03-10-2013 at 7:00 pm

Dr. Paul McLellan and I will be covering CDNLive this week, one of the premier EDA events of the year. Take a look at the agenda and exhibits; this year it looks like a full-on Design Automation Conference! There is definitely something for everyone!

Get ready for two full days of content, with more than a hundred tracks and keynotes by Lip-Bu Tan, President and Chief Executive Officer, Cadence; Young Sohn, President & Chief Strategy Officer, Samsung Electronics; and Martin Lund, Sr. Vice President, Research & Development, SoC Realization Group, Cadence.

What’s Happening at CDNLive Silicon Valley 2013:

March 12-13, 2013
Hyatt Regency
Santa Clara, CA

Papers: Choose from a wide variety of user-authored papers addressing all aspects of design and IP creation, integration, and verification. Discover how others are using Cadence technologies and techniques to realize silicon, SoCs, and systems—efficiently and profitably.

Techtorials: Participate in a variety of interactive techtorials to get a more in-depth look at specific Cadence products, new solutions, and feature enhancements.

Keynote speakers: Hear from industry leaders who influence the global electronics marketplace. They will discuss industry trends in silicon, SoC, and system realization and share their thoughts on the most pressing design challenges.

Designer Expo: Learn more about the collaborative ecosystem available to support you. Cadence and our partners will showcase the latest results of our joint efforts. Explore new products and services from our many exhibitors.

Networking opportunities: Engage in stimulating technology discussions with your peers and stay connected after the conference.

CONNECT. SHARE. INSPIRE.

Not only do Paul and I get in for free, we will be having lunch with Cadence executives (the privileges of blogging on SemiWiki). After lunch we get private briefings. Mine is with Martin Lund, so if you have questions you would like asked, let me know. Preferably ones that make me look smart!

Here are the tracks I’m most interested in:

MIX101 Data Management for Mixed-Signal Designs, ClioSoft, Inc.
SYS204 How ARM® Software Development Tools can Accelerate Your Time To Market ARM
AVD201 Technology and Design co-optimization for 10nm and Beyond GLOBALFOUNDRIES
AVD202 Designing with 14nm FinFET Technology Cadence
AVD203 14nm FinFET implementation of an ARM Cortex-A7 Samsung/Cadence
AVD204 Designing with Layout Dependent Effects (LDE) in TSMC Advanced CMOS Processes TSMC
SFF206 Scalable Power Sign-off Methodology for Ultra Large Design NVIDIA
AVD207 The DRC + Pattern Database GLOBALFOUNDRIES

CDNLive Silicon Valley brings together Cadence technology users, developers, and industry experts to network, share best practices on critical design and verification issues, and discover new techniques for realizing advanced silicon, SoCs, and systems. And, you’ll have the chance to talk directly with the Cadence technologists who develop your tools.

CDNLive Silicon Valley 2013 registration includes:

  • Attendance at keynote presentations
  • Access to Cadence R&D and technology experts
  • More than 80 technical sessions, techtorials, and demos
  • Access to the Designer Expo
  • Access to all networking events
  • Lunches and coffee breaks for the duration of the conference

If you have not registered for CDNLive yet you can do so HERE. Try the promo code DCPCDN13 for a reduced rate. See you there!