
CDN Live 2013 in Munich: what’s the next acquisition? Evatronix!

by Eric Esteve on 05-07-2013 at 2:44 am

It was definitely a good idea to go to Munich to listen to the keynote talk from Lip-Bu Tan. Did I learn, live and in person, the name of the next acquisition from Cadence in 2013, after Tensilica and Cosmic Circuits? Yes, and the winner is… Evatronix! And Cadence, as well as Evatronix, will now enjoy more than 600 customers worldwide, thanks to a wide portfolio ranging from the 8051 IP core to SuperSpeed USB and flash memory controller IP.


Cadence Design Systems, Inc. (NASDAQ: CDNS) today announced its intent to acquire the IP business of Evatronix SA SKA, adding to its rapidly expanding IP offering. Based in Poland, Evatronix delivers a silicon-proven IP portfolio, including certified USB 2.0/3.0, Display, MIPI, and storage controllers, which are highly complementary to Cadence’s IP offering.

HIGHLIGHTS:

  • Evatronix is an established provider of USB, MIPI, display and storage controller IP with a broad base of approximately 600 customers.
  • Nearly 200 USB controllers licensed to customers, including several top-tier semiconductor companies.
  • Evatronix’s controllers, combined with PHYs from Cadence, will enable a complete interface IP solution that combines controller, PHY, verification IP and integration kits.

“The rapid innovations in the mobile, connectivity and cloud markets are driving today’s IP marketplace,” said Martin Lund, senior vice president of research and development, SoC Realization Group. “Evatronix’s IP products will boost Cadence’s offering for these segments, with high quality leading-edge IP that is production-proven.”

Evatronix co-founder and president Wojciech Sakowski said, “Evatronix’s IP cores and services are designed for ease-of-integration, quality and time-to-market. As part of Cadence, we will be able to reach more customers globally and to accelerate our IP roadmaps. The integration with Cadence will allow our customers to get to market faster with less effort.”

The acquisition is expected to close in the second quarter of 2013, and is not expected to have a material impact on Cadence’s balance sheet or second quarter or fiscal 2013 results of operations. Terms of the transaction were not disclosed.

More to see at http://www.evatronix-ip.com/

On my side, I am polishing the presentation I will give in the Design and Verification IP tracks. This year is the first with special attention to IP and verification IP: PCIe and M-PHY, USB 3.0 PHY IP, memory models for verification, DDR SDRAM memory controller and PHY IP, and the related verification IP, with a presentation from Susan Peterson of Cadence. It will also be a good opportunity to learn about the Tensilica dataplane CPU, and I am sure not to miss that track, as I will give the presentation just before it, named “Interface IP protocols: the winners, the losers in 2012”. It will be strongly updated from the presentation made during IP-SoC last December in Grenoble, as many changes have occurred during Q1 2013! Because we can now take into account the actual 2012 IP sales results for the various protocols (DDRn, USB, PCIe, SATA, MIPI, Ethernet, Thunderbolt, HDMI, DP), it will be fresh information, ahead of the launch of the “Interface IP Survey”…

Eric Esteve from IPNEST



DAC 2013 Pavilion Panel: Affiliation Avenue – The Road to Success!

by Holly Stump on 05-06-2013 at 7:00 pm

Join us for a free career-building panel at DAC 2013, sponsored by Women in Electronic Design.
· Don’t go it alone! How many times have you heard it?


A lively panel of luminaries discuss how alliances are critical to our success, and cover networking and negotiating skills for achieving personal satisfaction and professional visibility. Also hear various perspectives on the importance of nonsense, and how to consciously incorporate humor and joy into your work life.

An informative and inspiring event for both women and men, in electronics and EDA, at any stage in your career.

Moderator:
· Sashi Oblisetty, Director, R&D, Synopsys. Previously, Sashi was President and CEO of VeriEZ Solutions, Inc.; VP & GM at TransEDA New England Design Center; and Founder, President & CEO of DualSoft LLC. Education: B.Tech, Electronics and Communication Engineering, Birla Institute of Technology, and MS, Electrical and Computer Engineering, University of Massachusetts.

Panelists:
· Soha Hassoun, Associate Professor, Dept. of Computer Science, Tufts University. Soha’s research interests lie in EDA and in applying EDA techniques to systems biology. Soha has been a consultant at Carbon Design Systems; visiting researcher at IBM Research Labs; consultant at IKOS (now Mentor); and Senior Design Engineer at Digital Equipment Corp. Soha is an NSF CAREER award recipient and has served in many technical and executive leadership positions within EDA. Education: PhD, Computer Science and Engineering, University of Washington; MSEE, Massachusetts Institute of Technology.

· Jan Willis, President, Calibra. Previously, Jan was Sr. VP, Industry Alliances at Cadence Design Systems; Vice President, Business Development for Simplex Solutions; Director, Product Marketing and Business Development at Synopsys; and Manager, Worldwide Customer Support for Hewlett-Packard. Education: MBA, Stanford University Graduate School of Business, and BSEE, Electrical and Computer Engineering, University of Missouri-Columbia.


· Kavita Snyder, VP, Worldwide Applications, BluePearl Software. Previously, Kavita was President & CEO at KnowFolder; Technical Account Manager at Magma Design Automation; Director of Field Operations at Jasper Design Automation; FAE Director at Atrenta; Product Marketing Manager at Synopsys; and Field Applications Manager at Synplicity. Education: BS, Computer Engineering, San Jose State University.

Affiliation Avenue will be followed by the presentation of the Marie Pistilli Award to Nanette Collins, the 2013 recipient. Women have made important contributions and strides in the EDA industry, and the DAC Executive Committee presents this annual award to honor an individual who has made significant contributions in helping women advance in EDA technology. Ann Steffora Mutschler, Senior Editor at System-Level Design, will interview Nanette Collins.

Attend these fascinating panels on Monday, June 3, at 1:30pm, DAC Pavilion, DAC 2013, Austin, Texas. www.dac.com

And please join the Electronic Design Women LinkedIn group.



Atrenta at DAC…SpyGlass is Everywhere

by Paul McLellan on 05-06-2013 at 6:43 pm

Atrenta are at booth 1847 in the exhibit hall where there will be regular presentations in the “RTL Signoff Theater” and lots of presentations on various aspects of SpyGlass, GenSys and BugScope in their suites. The registration page for the suite sessions is here. Just who is presenting in the RTL Signoff Theater is being finalized but so far TI, TSMC, CEA-Leti and IPextreme are all signed up.

On Sunday there is a four-hour workshop on IP quality, jointly hosted by Atrenta, IPextreme, TSMC and Sonics. One of us from SemiWiki should be there, so expect something more about what happened after DAC. There will be demos, and everyone will be touching on using SpyGlass to qualify IP.

On Monday afternoon, Mike Gianfagna will be on John Cooley’s Troublemaker panel. It is in ballroom G at 3pm. Also on the panel are Joe Sawicki (Mentor), Joe Costello (Oasys), Dean Drako (IC Manage), Jim Hogan and Gary Smith. John Cooley, of course, is moderating (or trying to encourage lack of moderation).

On the designer track, Atrenta are presenting three times:

  • Churning the Most out of IP-XACT for Superior Design Quality, 10.30-12pm on Tuesday in room 18C. Presented with Texas Instruments.
  • Scalable Hierarchical CDC Verification, 12.30 to 1.30pm on Wednesday in hall 5. Poster session with Cisco.
  • Path-Finding Methodology for Interposer and 3D Die Stacking, 4-6pm on Wednesday in room 18C. Presented with Qualcomm.

Atrenta are also busy in the evenings at DAC. Atrenta and Mentor are sponsoring all the musical entertainment for the 50th “Kick’in it up in Austin” party at Austin City Limits Live on Monday night. They are also sponsoring a bar on the mezzanine level where you can get a SpyGarita. Two of these and you’ll see the future. Atrenta is also proud to be a sponsor for Jim Hogan’s benefit party on the third level of Austin City Limits on Monday night. They have some very special surprises in store for that event. Entry to the event requires a fee (a tax-deductible contribution to a local children’s advocacy group in Austin called CASA).

On Tuesday evening, Atrenta is one of the sponsors for the Stars of IP party. This is a private, invitation-only event hosted by IPextreme and their Constellations group. Live bluegrass music on the rooftop bar, a full Texas-style barbecue dinner and other surprises throughout the evening.

And, of course, Atrenta are one of the sponsors of I LOVE DAC where you can get a free exhibit pass. Sign up on the DAC website here.

Full details of everything Atrenta is up to at DAC are on their website here.


More Injustice in EDA Lawsuits

by Daniel Payne on 05-06-2013 at 6:38 pm


There’s a one-person EDA start-up called iSchematics.com, offering schematic capture and cloud-based simulation for both web browsers and mobile devices like the iPhone and iPad, that is being sued. I’ve blogged about their EDA tools before:


Mobile Storage Interfaces: There are a Lot

by Paul McLellan on 05-06-2013 at 5:39 pm

Storage interfaces for mobile are evolving rapidly, in particular with the Universal Flash Storage (UFS) standard. So how do you test a design? If you want to test a design that accesses, say, an SD card then you can wander into Fry’s and buy an SD card for a few dollars. But to design an interface to UFS is a bit harder since the devices don’t yet exist, nor does test equipment like protocol generators.

I was at the JEDEC mobile memory conference last week and listened to Yuping Chung of Arasan talk about UFS and verification.

Universal Flash Storage is a JEDEC standard for high-performance mobile storage devices suitable for next-generation data storage. UFS has also been adopted by MIPI as a data transfer standard designed for mobile systems. Most UFS applications require large storage capacity for data and boot code. Applications include mobile phones, tablets, DSCs, PMPs, MP3 players, and other applications requiring mass storage, boot storage, XiP or external cards. The UFS standard is a simple but high-performance serial interface that efficiently moves data between a host processor and mass storage devices. UFS transfers follow the SCSI model, but with a subset of SCSI commands.
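As a rough sketch of that SCSI-subset idea, the fragment below builds a READ(10) command descriptor block. The opcode values are standard SCSI; which commands a particular UFS device actually exposes, and the helper names here, are illustrative assumptions, not JEDEC's or any vendor's API.

```python
# UFS reuses a subset of standard SCSI commands, carried inside its
# UPIU (UFS Protocol Information Unit) packets. The opcodes below are
# standard SCSI values; the subset a given device supports varies.
SCSI_OPCODES = {
    "TEST UNIT READY": 0x00,
    "READ (10)": 0x28,
    "WRITE (10)": 0x2A,
    "REPORT LUNS": 0xA0,
}

def build_cdb_read10(lba: int, blocks: int) -> bytes:
    """Build a minimal 10-byte READ(10) command descriptor block."""
    cdb = bytearray(10)
    cdb[0] = SCSI_OPCODES["READ (10)"]
    cdb[2:6] = lba.to_bytes(4, "big")     # logical block address
    cdb[7:9] = blocks.to_bytes(2, "big")  # transfer length in blocks
    return bytes(cdb)

print(build_cdb_read10(lba=0x1000, blocks=8).hex())
```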

A UFS controller has two major challenges: interfacing to UFS and maintaining backwards compatibility with the older standards (which also have not stopped evolving). Assuming that you are building something like an application processor (AP) for mobile, then you want to know you have all those standards nailed and that your AP will be interoperable with whatever it needs to be: e.MMC 4.3, 4.4, 4.5, 4.51, 5.x, UFS 1.0, 1.1, 2.x, UniPro 1.40, 1.41, 1.6x, M-PHY 1.0, 2.0, 3.0, SD 2.0, 3.0, 4.0, 4.x, UHS-II… phew, that’s a lot of compatibility.
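That compatibility matrix is essentially a lookup problem. A minimal sketch of how a verification plan might encode it (the version sets mirror the list above but are illustrative, not a definitive support claim for any product):

```python
# Which interface-standard versions does our AP's storage block claim?
# Illustrative sets only -- a real compatibility matrix comes from the
# design spec, not from a blog post.
SUPPORTED = {
    "eMMC":   {"4.3", "4.4", "4.5", "4.51", "5.0"},
    "UFS":    {"1.0", "1.1", "2.0"},
    "UniPro": {"1.40", "1.41", "1.6"},
    "M-PHY":  {"1.0", "2.0", "3.0"},
    "SD":     {"2.0", "3.0", "4.0"},
}

def is_supported(standard: str, version: str) -> bool:
    """True if the (standard, version) pair is in the support matrix."""
    return version in SUPPORTED.get(standard, set())

print(is_supported("UFS", "1.1"))  # → True
print(is_supported("SD", "5.0"))   # → False
```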

Of course for most people the sensible thing to do is to license IP blocks for UFS from a company like Arasan. However, that moves the problem from your SoC to Arasan, because they still need to perform verification before seeing silicon (and afterwards, of course). The verification splits into two parts, verifying the M-PHY and verifying the digital controller from layers 2 on up including software drivers.

Separate from the PHY is the rest of the UFS system, which actually consists of two parts, a host controller and a device controller.

Finally, using an FPGA based system the whole system can be validated and software development can be done, ending up with everything that is needed for a successful UFS-based design.

Details of Arasan’s M-PHY are here. Details of Arasan’s UFS solution, including host controller and device controller, are here. There is a UFS webinar here (registration). There is a webinar on May 8th at 6pm and May 9th at 8.30am (Pacific) on Mobile Storage Designs for Compliance and Compatibility – UFS/eMMC; registration is here.


How to Blast Your Chip with High Energy Neutron Beams

by Paul McLellan on 05-06-2013 at 3:49 pm

So you want to know how reliable your chips are and how susceptible they are to single event effects (SEEs), where a neutron or an alpha particle causes a storage element (flop or memory cell) to flip in a way that alters the behavior of the device. There are two ways a particle hitting a device might not cause a problem. First, the particle might hit an area of the chip that is not vulnerable to particle upsets. Second, an upset may occur but, due to other circuitry, not propagate to an output and thus not matter.

As with most reliability measures, waiting ten years to find out how reliable a device is over ten years isn’t really very useful. A certain amount can be done by testing for shorter periods and then extrapolating statistically, but most of the time this simply takes too long. What is required is a way to accelerate the testing by increasing the rate at which the devices are bombarded with particles, so that ten years’ worth of particles can be compressed into a day or two.
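The acceleration arithmetic is simple: divide the ambient neutron fluence accumulated over the service life by the beam's flux. A sketch, assuming the JEDEC JESD89A sea-level reference flux of roughly 13 neutrons/cm² per hour; the beam flux below is an invented illustrative value, since real facilities quote their own figures:

```python
# How many beam-hours equal ten years of sea-level neutron exposure?
# AMBIENT_FLUX is the JEDEC JESD89A sea-level reference (~13 n/cm^2/hr
# above 10 MeV); BEAM_FLUX is an assumed illustrative value -- real
# facilities such as WNR publish their own, different, numbers.
AMBIENT_FLUX = 13.0   # neutrons/cm^2/hr at sea level
BEAM_FLUX = 5.0e4     # neutrons/cm^2/hr, assumption for illustration

def beam_hours(years_of_service: float) -> float:
    """Beam time needed to match the given years of ambient exposure."""
    ambient_fluence = AMBIENT_FLUX * years_of_service * 24 * 365
    return ambient_fluence / BEAM_FLUX

print(round(beam_hours(10), 1))  # ten years compressed into ~23 hours
```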

There are three types of particles of relevance:

  • high energy neutrons (from cosmic rays)
  • thermal neutrons
  • alpha particles

Alpha particles are stopped by almost anything, just a few centimeters of air for example, so alpha particles in practice have to come from the materials used in manufacture: package plastic, solder, silicon and so on. So alpha particle testing requires the chips to be de-capped or the package would largely absorb them.

Thermal neutrons are generated by nuclear reactors, so it is relatively easy to find a source of low-energy neutrons, put your devices in harm’s way, and see what happens.


The most difficult is high energy neutrons. There is no convenient source of high energy neutrons, so they need to be made specially at a resource like the Weapons Neutron Research (WNR) facility at Los Alamos. Of course, one immediate question is how do you accelerate a neutron given that it is…well, neutral. (A neutron walks into a bar. How much for a beer? For you, no charge.) The answer is that you accelerate protons with a linear accelerator a couple of miles long (like the one at the Stanford Linear Accelerator that highway 280 passes over, but much longer). You then put a target in the beam, such as a block of tungsten. This generates neutrons along with protons and other stuff. Using a powerful electrical field, the charged particles can be diverted, leaving neutrons. And then, using a collimator that is basically a pipe about 12″ in diameter, the neutrons that are going in the wrong direction can be absorbed, leaving a beam of neutrons.

The beam has a profile of neutron energy that is close to that of cosmic rays, although there are orders of magnitude more neutrons. So if your chip (or system) is put in the beam then it undergoes the effects of years of cosmic rays in a few dozen hours. Since these are single event effects not permanent damage of the device, the chip needs to be powered up and running some sort of test program to pick up the effects when they occur and count them.
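Turning the upsets counted during a beam run back into a field soft-error rate is a cross-section calculation: σ = errors / fluence, then scale by the ambient flux. A hedged sketch with invented test numbers (only the 13 n/cm²·hr ambient figure is a standard JESD89A reference):

```python
# From beam-test counts to a field soft-error rate: the per-device
# cross-section is sigma = errors / fluence (cm^2); multiplying by the
# ambient flux and 10^9 hours gives FIT (failures per billion
# device-hours). The error count and fluence below are invented.
AMBIENT_FLUX = 13.0  # n/cm^2/hr, JESD89A sea-level reference

def ser_fit(errors: int, fluence: float) -> float:
    """Soft-error rate in FIT from a beam run's upset count and fluence."""
    sigma = errors / fluence           # cross-section, cm^2 per device
    return sigma * AMBIENT_FLUX * 1e9  # scale to 10^9 device-hours

print(round(ser_fit(errors=120, fluence=3.0e10), 1))  # → 52.0 FIT
```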

As if that doesn’t sound complicated enough, beam time at WNR is booked a year in advance so for an individual company to do this is actually quite risky. If anything goes wrong then you risk missing your beam time slot and having to wait a year for another one.

iROC Technologies does this on a regular monthly schedule so that if you miss your slot with them one month, then you can get a slot the following month. iROC has a lot of experience with this sort of testing, how to set it up, how to execute it, how to analyze the data afterwards. Plus what can go wrong and how to prepare for it so that you don’t waste valuable beam time trying to, for example, find a spare cable.


Which is the best FPGA – Xilinx

by Luke Miller on 05-06-2013 at 10:00 am

Your corporate training will teach you there is no such thing as stereotypes, and that they are bad, naughty. We all know they are true; it’s just that some companies nowadays try to force the worker bees to do a flash erase and drop their brains at the door. I never participated in that, and as you can imagine it went very well. Dilbert is true…

I am asked many, many times: which is the best FPGA? After pondering for a picosecond, I blurt out Xilinx. Flat out, Xilinx has the better offering. Now I realize I lost about 47% of you; I’ll call you the 47%’s. :) Now put on your Chuck Schumer glasses and tilt them down so we can have a smart conversation. Try that look in your next performance evaluation and I guarantee you’ll get your 2.78% raise. Who am I kidding; the bucket was much smaller this year. Sorry, you owe us money.

I have personally seen resistance to the Xilinx product just because ‘We don’t want to be tied to one vendor’. There is some rationale for that, as I was burned by PA Semi, which is no longer with us. That was fun. I suggest that Xilinx is so far ahead of the game with FPGA technology that you must choose Xilinx at this point; they are well established and are going nowhere. And yes, I have programmed Altera. Xilinx has more resources, great IP, and is the leader in the field. When each new node comes out, we hear rumblings that this is it for Xilinx, but they seem to pull it off each time, except for that interesting Virtex-4.

What Xilinx needs to be aware of, or even nervous about, is TOOLS. No one cares if you have the best IP or 7 billion transistors if you can’t route them in a reasonable amount of time. I was in a meeting with Xilinx and told them that if Altera comes up with a 2x faster router, I’d trade performance for time if the performance is similar. GPUs and now CPUs are slowly eating away the market-share pie for FPGA legacy systems because they are expensive to design and then route. Then, as you know, it’s always why is the FPGA not working or not ready, even though it is the heart of the system. I believe Xilinx’s Vivado HLS and Zynq have pushed them to the next level, but they need more tools. They need a tool to answer this question for a group of system architects: How do I size my existing system into the Zynq processor? Help me draw that line. Now that is part of my job, but companies need a tool to size the system, a quick rough estimate. Without this it is a tough sell to say, look, take that Mercury or CW card and put it into this Xilinx part. Really?

On the whole, all FPGA companies struggle to say what exactly they are selling. When they talk to us designers it is pretty good, but you know what? I do not have the pocketbook. They need to convey much better to program managers and other bosses how they can benefit their programs. That is no easy task because, as you know, some of the people at this level have had a lobotomy and must follow process at all times and cannot make a decision unless it is in a manual somewhere. Was that a stereotype? :rolleyes:



Customer Stories at DAC#50

by Daniel Nenni on 05-05-2013 at 8:10 pm

When you think Apache Design you probably think low-power design, and what stuffed animal they will give away at DAC. The other thing you should think about is how the top semiconductor companies around the world use Apache products for leading-edge semiconductor design. Demos are fine, but there is nothing like talking directly to designers who use the tools in an intimate setting (20 people). You definitely do not want to miss this opportunity to collaborate, but space is limited, so register now:

Experience Sharing by Customers: At-a-Glance
Design experts from AMD, ARM, Freescale, LSI, Nvidia, Samsung, and STMicroelectronics will share best practices and in-depth experiences on how they successfully achieve power, performance, reliability, and cost targets.

AMD: Unified Method for Package-Induced Power Supply Droop Analysis in High Performance Processors:
Excessive supply voltage droop can have a significant impact on the clock frequency of high-performance processors. In this work we present a unique approach for power supply droop analysis. A detailed model of the package extracted from layout is combined with a representative model of the IC to simulate dynamic supply voltage gradients across the die. The IC model is geared towards accurate prediction of voltages on power supply C4 bumps. At the same time, it is compact enough to allow efficient simulation of the complete chip-package system. We used this methodology to simulate the droop on a high-performance server SoC. Dynamic simulations were performed with patterns exercising high-power sections of the CPU cores on the SoC. The simulated droops correlate very well with silicon characterizations performed on the same design. We discuss the use of this methodology in the future for early analysis of chip/package systems and for driving both IC and package design decisions.
Main products covered: RedHawk
June 5 @ 11:00
To Register

ARM: Comprehensive Power, Noise and Reliability Methodology for ARM Processor Cores:
This presentation will discuss power integrity analysis of the Seahawk hard macro, a quad-core Cortex-A15 implemented in the TSMC 28nm HPM process. Seahawk is designed to operate at frequencies close to 2 GHz and has DVFS and retention capabilities built in. This presentation will describe the power integrity checks that were performed on the design to ensure proper functioning and reliability.
Main products covered: RedHawk
June 4 @ 11:00
To Register

Nvidia: Early RTL Power Analysis and Reduction for Advanced Low-Power GPU Designs:
This presentation covers Nvidia’s methodology for RTL power analysis and reduction using PowerArtist. Data on PowerArtist RTL power correlation vs. sign-off tools will be presented along with runtime performance metrics. Material will also include reports generated using the PowerArtist OADB database API, specific examples of power reduction techniques applied and results achieved, as well as key clock gating coverage and efficiency metrics. In conclusion, this presentation will show key benefits Nvidia achieved with PowerArtist and highlight areas where Nvidia and Apache are collaborating to advance RTL power technology in PowerArtist.
Main products covered: PowerArtist
June 3 @ 11:00
To Register

Samsung-SSI: The Life of PI: SoC Power Integrity from Early Estimation to Design Sign-off:
The life of Power Integrity (PI) analysis starts at the product infancy stage. Early analysis involves resource allocation at the system level, such as the VRM, board, and package, and at the chip level, in terms of power grid structure, power scenario analysis, and the amount and placement of intentional decoupling capacitance (DECAP). This is done through systematic PI modeling and simulation. As the design matures, the power integrity engineer gets more information on the system and on the die. There are many phases of progressive iterations to evaluate design tradeoffs. Power integrity engineers work closely with board, package, and chip design teams to achieve PI closure. At the design tape-out stage, the power integrity team is responsible for signing off static and dynamic IR drop and EM to verify that multi-million-gate SoC chips meet stringent power supply noise budgets. We investigated the impact of board, package, package embedded with DECAP, power grid, circuit switching activity, as well as on-die DECAP, and demonstrated good correlation between early estimation and the final analysis with detailed chip and package models.
Main products covered: RedHawk
June 3 @ 14:00
To Register

STMicroelectronics: RTL Power Estimation on ARM Core Sub-systems:
I will start my presentation with an introduction about high-performance, power-efficient ARM core implementation within ST and the associated challenge of estimating dynamic power early in the implementation flow. Then I’ll present results obtained when benchmarking the PowerArtist tool by measuring accuracy versus signoff power figures on a dual-Cortex-A9 subsystem. I will continue by explaining how we used the PowerArtist tool on a dual-Cortex-A15 subsystem and the different results we obtained. Finally, I’ll finish my presentation with a conclusion on the benefits of the PowerArtist tool.
Main products covered: PowerArtist
June 4 @ 14:00
To Register

GlobalFoundries: Hierarchical Voltage Drop Analysis Techniques for Complex Power-Gated Multi-Domain 20nm Designs:

When qualifying new technologies for new design nodes (e.g. 20nm, 14nm…), large SRAM arrays with many power domains are employed. Since the VMIN characterization of bit cells performed on silicon requires a sufficient power/ground network, an IR drop analysis at the transistor level for the whole design is required. This presentation will cover a hierarchical IR drop analysis flow using new capabilities within Totem to trace networks of multiple power domains (>800) through switch cells, and to generate sub-block abstracts for different detail levels. The abstracts can then be used for top-level analysis, allowing a large speed-up of IR drop analysis in comparison to flat analysis. This presentation also includes a discussion of the pros and cons of the approach and will present resource requirements (run time, memory).
Main products covered: RedHawk,Totem
June 4 @ 12:00
To Register

Nvidia: Comprehensive Layout-based ESD Check Methodology with Fast, Full-chip Static and Macro-level Dynamic Solutions:
This presentation will discuss a comprehensive ESD static/dynamic methodology developed for pre-tapeout ESD verification, failure diagnosis, and predictive simulation of improvements. The methodology focuses on fast, full-chip static and macro-level dynamic analysis and will feature real HBM and CDM application examples. We will also discuss the potential impact of upcoming technologies on ESD including 3D-ICs, FinFETs, and system-level trade-offs.
Main products covered: PathFinder
June 5 @ 10:00
To Register

Freescale: Power, Noise and Reliability Simulation Considerations for Advanced Automotive and Networking ICs:
This presentation covers experiences with SoCs for automotive and networking applications. Due to the nature of their application domains, these ASICs bring different challenges and requirements. This presentation will discuss both wire-bond chips with low-cost packaging requirements, as well as high-cost flip-chip/BGA designs. Instead of discussing typical power and rail analysis details, this presentation will focus on insights gathered from running analysis and silicon results, and will include power analysis (RTL and gate-level), rail analysis (static and dynamic), handling complicated IPs/standard cells, ESD sign-off and cell electromigration analysis.
Main products covered: PowerArtist, RedHawk, Totem, PathFinder
June 3 @ 13:00
To Register

LSI: Chip and I/O Modeling for System-level Power Noise Analysis and Optimization:
The system level methodology proposed in this presentation encompasses die, package and PCB interconnects exposing the impact of the capacitive and mutual coupling enabled outside the silicon on the PLL supply. The proposed methodology allows selecting the best package routing solution for PLL supply noise reduction. Correlation with PG noise and jitter lab measurements confirms that our Sentinel-SSO based methodology is critically instrumental in intercepting PLL supply noise due to independently supplied I/O cells switching activity.
Main products covered: CPM, Sentinel, SIwave
June 4 @ 15:00
To Register

Apache Design is a subsidiary of ANSYS, Inc., a leading engineering simulation software provider. Our virtual prototyping technology has enabled customers to predict with confidence that their product designs will thrive in the real world. The extensive ANSYS product suite offers the ability to perform comprehensive multiphysics analyses, critical for high-fidelity simulation of real architecture that integrates varied components.

Apache power analysis and optimization solutions enable design of power-efficient, high-performance, noise-immune ICs and electronic systems. This comprehensive suite of integrated products and methodologies advances low-power innovation and provides a competitive advantage for the world’s top semiconductor companies to help lower power consumption, increase operating performance, reduce system cost, mitigate design risks, and shorten time-to-market for a broad range of end-markets and applications.

Apache technology complements and expands the breadth, depth, functionality, usability and interoperability of ANSYS simulation products. Our combined tools open the door to more comprehensive systems simulation so that engineers can predict product behavior much earlier in the design cycle.

Return to Main Apache @ DAC Page



A Tale of Two Events, Make that Three, Wait…How about Four?

by Camille Kokozaki on 05-05-2013 at 8:05 pm

It is increasingly apparent that Kurzweil’s Singularity is getting near, if it is not here already, 32 years too soon. Not a week goes by without missing or needing to attend a key conference, seminar, symposium, or summit, each with parallel streams, panels, exhibits, demos, and social networking. Not only are we informed, entertained, fed, pampered, and enlightened, but we get access to post-event content, diligent follow-ups, great summaries, blogs, analysis, and pictures, all while tweeting and being tweeted with the play-by-play unraveling of the action and pinging of every event. We are to absorb and put in context all this knowledge; we also need to digest, interpret, analyze, and identify the relevance to our work and environment, let alone project, extrapolate, and derive rules, corollaries, and actionable decisions emboldened by this wealth of new information. By the time this settles in our computing neurons, things change and the data is so 2 minutes ago… Now you see the genesis of my Kurzweil lament.

In the last few days, I attended four conferences and missed four events that I wish I could have participated in, had I been able to defeat the laws of space-time physics and had the luxury of availability at my disposal. It started with the Linley Mobile Conference, followed by the EDPS Monterey event that I and others recently blogged about. This was followed by the Perforce Merge Conference in San Francisco, forcing me to miss Design West and Mentor’s U2U. This week it was the AWS Summit, and luckily I was able to attend. I have been a cloud fan since before cloud was cool, so I was looking forward to it. In some cases the choice of what to attend was easy due to being a sponsor, presenter, or exhibitor. In others it was not as straightforward. Event envy creeps in, along with anxiety about what new new thing or insight was missed, and hopes rise of getting the recorded sessions and obligatory slide decks in a time-delayed fashion.

The AWS Summit had armies of IT and cloud professionals basking in the wealth, breadth and depth of the cloud computing universe orchestrated by increasingly sophisticated implements, tools, pricing and business models rolled out by Amazon Web Services. Andy Jassy, SVP of AWS delivered a Keynote full of content, statistics and guest speakers showcasing accomplishments and results attained through the various AWS offerings.



The columns in the keynote slide above refer to: replace CapEx with OpEx, lower overall costs, no more guessing capacity, agility/speed/innovation, shift focus to differentiation, and go global in minutes.

Andy spelled out how Enterprises are using AWS and the cloud:

  • Develop and Test Environments
  • Build Apps for the Cloud
  • Use Cloud to Make Existing On-premise apps better
  • New Cloud App that Integrates Back to On-premise Systems
  • Migrate Existing Apps to the cloud
  • All-in like Netflix

He went on to outline how to choose a cloud partner:

  • Experience
  • Breadth and depth
  • Pace of innovation
  • Global footprint
  • Pricing philosophy

Great one-liners gleaned from the event:

  • There is no compression algorithm for experience
  • Sometimes you want to hug your server but it won’t hug you back

The keynote replay can be found here.



Analog IC Verification – A Different Approach

by Daniel Payne on 05-04-2013 at 5:33 pm

Analog design seems to suffer from a huge gap when it comes to testing and verification. While some of this gap is natural – after all, often the only way to verify whether a particular design is working is to look at actual simulation waveforms – it still seems like a lot can be done to bring process into this sphere of the IC design space.

Analog designers generally work from their tool GUIs. Even if they could run automated or scripted tests to check the parts of the design that don’t need human intervention or waveform inspection, it is often hard to get the team to run these tests, since they must be launched from a terminal. In addition, the test results are not available from within the GUI.

From a management perspective, analog designs are getting larger, more complex, and more distributed. One small mistake can cost millions in respins and a lost market window. Enabling seamless collaboration across the globe and tracking team progress is a challenge.

Analog verification can now be automated using a management console, a new way of looking at testing in the analog/mixed-signal design context. A company called Methodics offers a data management tool called VersIC with a plugin for analog verification management. This plugin for VersIC works in the Cadence Virtuoso 5.x/6.x environments and with the Synopsys Custom Design tools.

Analog IC designers create wrapper scripts to run their ADEXL/ADE/Ocean tests and then use the plugin GUI to register each script and define its required arguments. This information is captured in the VersIC database and made available to the entire team.
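To make the wrapper-script idea concrete, here is a minimal sketch in Python of what such a wrapper might do after launching a simulation: parse measurement values out of a log and check them against bounds to produce a pass/fail verdict. The log format, measurement names, and bounds are all invented for illustration; the actual VersIC plugin API and Ocean log formats will differ.

```python
import re

# Hypothetical sketch of a test wrapper's post-processing step.
# A real wrapper would first invoke the simulator (e.g. an Ocean run)
# and then parse its log; here a sample log string stands in.

def parse_ocean_log(log_text):
    """Extract 'name = value' measurement lines from a simulation log."""
    results = {}
    for m in re.finditer(r"^(\w+)\s*=\s*([-+0-9.eE]+)", log_text, re.MULTILINE):
        results[m.group(1)] = float(m.group(2))
    return results

def check_bounds(results, bounds):
    """Return (passed, failures) given {name: (lo, hi)} acceptance bounds."""
    failures = [name for name, (lo, hi) in bounds.items()
                if not (lo <= results.get(name, float("nan")) <= hi)]
    return (len(failures) == 0, failures)

sample_log = """\
gain = 42.7
jitter_ps = 1.3
leakage_na = 220.0
"""

# Illustrative acceptance bounds for three measurements.
bounds = {"gain": (40.0, 60.0), "jitter_ps": (0.0, 2.0), "leakage_na": (0.0, 200.0)}

results = parse_ocean_log(sample_log)
passed, failures = check_bounds(results, bounds)
print(passed, failures)  # leakage_na exceeds its upper bound, so the test fails
```

A script like this, once registered through the plugin GUI with its arguments, can be launched without bringing up any waveforms, which is the point of the approach.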

Verification tests can be combined into regression suites, making it easy to run and track a slew of related tests as a single entity. The tool understands all the popular grid management software, so regressions can be launched in parallel, dramatically cutting testing time.
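A regression suite is then just a named collection of registered tests launched together. As a hedged sketch of the idea, the following Python uses a thread pool as a stand-in for grid submission; in a real flow each job would go to the compute farm via grid management software, and the stub `run_test` would invoke an actual registered wrapper script.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical sketch: run a regression suite in parallel.
# The test names and the stub runner are invented for illustration.

def run_test(name):
    """Stand-in for launching one registered verification script."""
    # A real runner would submit the job to the grid and parse its results;
    # here, names ending in "_fail" simulate a failing test.
    return (name, not name.endswith("_fail"))

suite = ["drc_top", "lvs_top", "jitter_pll", "leakage_io_fail"]

with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(run_test, suite))

failed = sorted(name for name, ok in results.items() if not ok)
print(f"{len(suite) - len(failed)}/{len(suite)} passed; failed: {failed}")
```

Running the suite as a single entity like this is what makes it practical to track a slew of related tests against one release.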

Analog Tests
An analog designer can now automate and keep track of LVS, DRC, power consumption bounds, jitter and leakage measurements. Any kind of ADEXL/Ocean scripted test is a good candidate for this approach. VersIC with the plugin is quite simple to use. Most engineers are up and running in a couple of hours after reading the documentation examples or meeting with a knowledgeable AE.

The alternatives to automated analog verification are manual efforts with spreadsheets or hand-written scripts. Using an off-the-shelf tool to automate analog tests saves time and lets you concentrate on your core design competence.

VersIC’s SQL database tracks all the tests run across the team – across multiple sites, multiple workspaces, multiple users – making for very effective collaboration.

Now, with a single click, managers can track all the tests run, across their entire team on a given release of a design. You can look at pass/fail rates of individual tests, and even figure out all the design changes that have happened since the last time a particular test passed.
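The kind of query behind such a management view is straightforward once test runs land in a SQL database. Here is a small self-contained sketch using SQLite, with a schema and data invented for illustration (not the actual VersIC schema): it records runs per release and computes per-test pass rates.

```python
import sqlite3

# Hypothetical sketch of release-level test tracking in SQL,
# in the spirit of VersIC's database; schema and data are illustrative.

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE test_runs (
    release   TEXT,
    test_name TEXT,
    user      TEXT,
    passed    INTEGER)""")

runs = [
    ("rel_1.2", "jitter_pll", "alice", 1),
    ("rel_1.2", "leakage_io", "bob",   0),  # first run failed...
    ("rel_1.2", "leakage_io", "bob",   1),  # ...then passed after a fix
    ("rel_1.2", "lvs_top",    "carol", 1),
]
db.executemany("INSERT INTO test_runs VALUES (?, ?, ?, ?)", runs)

# Manager's view: per-test pass rate for a given release.
rows = db.execute("""
    SELECT test_name, SUM(passed), COUNT(*)
    FROM test_runs
    WHERE release = 'rel_1.2'
    GROUP BY test_name
    ORDER BY test_name""").fetchall()

for name, npass, ntotal in rows:
    print(f"{name}: {npass}/{ntotal} passed")
```

Joining a table like this against design-revision metadata is what would let you find every change made since the last time a given test passed.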

History
The VersIC tool was first released in 2007 and is being used by a mix of large and small companies including: Cisco, Broadcom, PMC Sierra, Motorola, Cirrus Logic, Google, Wolfson, Lantiq, Huawei and ZMD.

Other vendors don’t offer a data management solution with releases attached to a database of the entire test history for that release. In mixed-signal designs this can be combined with the Evolve digital solution from Methodics allowing a single release mechanism for all analog/digital design data with the corresponding verification history.

Summary
Management always has the issue of “what tests have been run and did they all pass” whenever a major release is under consideration. Often “minor” changes to a schematic or layout are included in a release without being properly verified and cause expensive rework. With VersIC’s Analog Verification Management tools, this uncertainty can be removed and design teams can release with confidence, saving hours of manual work per week and helping ensure that silicon is correct.
