A Verilog Simulator Comparison
by Daniel Payne on 09-22-2011 at 2:40 pm

Intro
Mentor, Cadence and Synopsys all offer Verilog simulators, but when was the last time you benchmarked your simulator against a tool from a smaller company?

I just heard from an RTL designer (who wants to remain anonymous) about his experience comparing a Verilog simulator called CVC from Tachyon against ModelSim from Mentor.



Benchmark Details

First, let me say that my primary use for the CVC tool is for regressions on RTL designs, so I cannot give you benchmark data for a gate-level design. In my regressions, the test bench activity sometimes contributes as much as, or more than, the DUT to the total simulation time.

The test case that I was writing about when I first sent you the email was the regression testing for a relatively small digital design of about 150,000 gates in 0.35 micron UMC; however, as I mentioned before, the regressions were being performed on the RTL.

That design contains about 7,500 lines of RTL code, and the test bench is about another 6,500 lines.

In a regression that took ModelSim Questa 28 hours to complete, CVC completed the work in 10 hours. The regression consists of a bash script that calls the same test bench repeatedly with different conditions to exercise all the features and automatically verify the results.
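For illustration only, a regression driver of this kind is essentially a loop over test configurations. The sketch below is a minimal, hypothetical Python version of such a driver; the simulator command line, plusargs and PASS/FAIL log convention are assumptions, not the bash script described above.

```python
import subprocess
import sys

# Hypothetical test list: each entry is a set of plusargs passed to the same
# compiled test bench to exercise a different feature.
TESTS = {
    "reset_seq": ["+TEST=reset_seq"],
    "spi_basic": ["+TEST=spi_basic"],
    "pwm_sweep": ["+TEST=pwm_sweep", "+CYCLES=100000"],
}

# Assumed simulator invocation; substitute the real CVC or Questa command line.
SIM_CMD = ["cvc64", "tb_top.v", "dut.v"]

def run_test(name, plusargs):
    log = f"{name}.log"
    with open(log, "w") as f:
        subprocess.run(SIM_CMD + plusargs, stdout=f,
                       stderr=subprocess.STDOUT, check=False)
    text = open(log).read()
    # The self-checking test bench is assumed to print PASS or FAIL.
    return "PASS" in text and "FAIL" not in text

failed = [name for name, args in TESTS.items() if not run_test(name, args)]
print("regression clean" if not failed else f"failed tests: {failed}")
sys.exit(1 if failed else 0)
```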

In a more recent test with a much smaller design, where the test bench takes more time to run than the actual DUT, a 100 msec simulation ran in 6 minutes with CVC but took ModelSim Questa over 30 minutes. The test bench in this design performs a state-space model simulation of an analog circuit connected to the DUT; it is used for functional simulation, not so much for regression testing as for design analysis. We do small designs on large geometries because of the power-control nature of our business.
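A test bench like that typically discretizes the analog block's state-space equations, x[k+1] = A·x[k] + B·u[k] and y[k] = C·x[k], and advances them once per sample interval. The fragment below is only an illustrative NumPy sketch of that idea, with made-up matrices, and is not the designer's actual Verilog code.

```python
import numpy as np

# Hypothetical 2-state discretized model of an analog block driven by the DUT.
A = np.array([[0.95, 0.10],
              [0.00, 0.90]])   # state transition matrix (assumed values)
B = np.array([[0.05],
              [0.10]])         # input matrix
C = np.array([[1.0, 0.0]])     # output matrix

x = np.zeros((2, 1))           # analog state, updated every sample tick

def step(u):
    """Advance the analog model one sample; u is the DUT drive value."""
    global x
    x = A @ x + B * u
    return (C @ x).item()      # value fed back to the DUT or the checker

# Inside the simulation loop, something like: y = step(dut_output_this_cycle)
```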

In this case, the RTL for the DUT is 12,000 lines and the test bench is 8,000 lines long. As you can see, there are more lines of code here, but the design is about 1/10th the size of the previous example (about 13,000 gates).

We find that the greatest speed differences between ModelSim Questa and CVC relate to the test bench portion; however, we have also noticed that CVC is often 20 to 50 percent faster in gate-level sims as well. Where it does NOT shine is SDF back-annotation: we run into some trouble there. We can get it to work, but it seems inconsistent.

We tend to use CVC more for functional verification and development, but we still use Questa for the back end validation steps.

I know that some of this data is perhaps not as quantitative as the example you sent me, but we have now been using CVC for almost two years, and the results have been consistent across enough projects that our group is increasing its use over time because of the speed advantage it gives us, at least in our circumstances.

I have NOT had an opportunity to use it with large designs such as those found in many communications, graphics, and other DSP-intensive applications where the gate counts get into the millions, so I cannot speak to that from my current experience.


Apple’s Supply Chain
by Paul McLellan on 09-21-2011 at 5:48 pm

I am doing some consulting right now for a company that shall remain nameless, and one of the things I have had to look at is Apple’s supply chain. I came across an interesting article by someone whose goal was to “buy a MacBook Air that isn’t made by Apple.” He is in the UK, doesn’t like Apple’s UK keyboard, and doesn’t really want to run everything in a virtualization environment. So basically he wants to buy a MacBook Air that is actually a PC.

This is a market that Intel has announced it will support with $300M of its own money under the name Ultrabooks. After all, there is presumably a big market for a MacBook Air that is actually a PC Ultrabook. And how hard can it be? Apple’s industrial design is great but it isn’t that hard to copy. Most of the components are standard. Lots of people know how to put together a PC.

The answer turns out to be surprising. It is really hard to build a “PC Air” for the same price as Apple does. The reason is one of the secrets of Apple’s supply chain that I hadn’t really thought much about before: Apple makes remarkably few distinct products. Sure, it ships huge volume: it is the world’s largest semiconductor purchaser, much bigger than HP or Cisco or any other obvious candidate (expected to be $22B this year). But it has three iPhones (iPhone 3GS, iPhone 4 (GSM) and iPhone 4 (CDMA)), one iPad, two MacBook Airs, some iPods and some bigger notebook and desktop computers. Apple is not the Burger King of electronics: you don’t have it your way, you have it Steve’s way. Compare that to HP’s or Cisco’s product lines.

The PC market is predicated on “have it your way.” You go to HP’s or Dell’s website and decide what options you want: do you want wireless, which speed of processor, how much memory, and so on. They also have broad product portfolios, so they are forced to use standard parts such as screens, batteries, wireless daughter cards and power supplies, since it doesn’t make sense to customize anything for a subset of a subset of the product line. So the PC industry is largely based on a lot of components that are purchased in varying amounts and then clicked together to build the end product. PC makers ship a large volume, but across a broad product range, so not much of any particular model.

Apple, by contrast, can integrate as much as it wants and buys all the components for a given product in the same quantities, since you don’t get that flexibility as a buyer. This gives it much higher volume and more predictable demand, which it can leverage into lower prices. And since it has so few products, it can invest in specialized components for each one: the MacBook Air has a specially shaped battery that just fits in among everything else in there, and the iPhone and iPad contain a custom Apple SoC (A4 for the current iPhone, A5 for the iPad 2 and presumably for the imminent iPhone 5). Famously, a few years ago, Apple bought Samsung’s entire flash memory output. Too bad if you are someone else.

With those greater volumes and greater purchasing leverage, Apple can build the MacBook Air for less than any of the PC competitors can build an Ultrabook. Plus it doesn’t fit their business model well: a premium product that you cannot customize. Where does that go on the Dell website?

As an aside, a lot of this supply chain optimization is not the aesthetic Steve Jobs side of Apple, but is what Tim Cook, the new CEO, worked to put in place.


Custom Signal Planning Methodologies
by Paul McLellan on 09-20-2011 at 4:08 pm

It is no secret that custom ICs are getting larger and more complex, which has driven chip design teams to split into smaller teams to handle the manual or semi-automated routing of the many blocks and hierarchical levels that make up such a design. These sub-teams don’t just need to handle the routing within their own block(s); they also have to integrate the routing between blocks and create correct top-level routing (which flies over the blocks) within their assigned part of the die.

Informal approaches, such as verbal and email status reports, are no longer enough; they make the routing of a large custom chip the long pole in the tent, very labor-intensive and with a schedule that determines the schedule of the entire chip. Add in congestion issues, advanced-node parasitic effects, and the fact that the design itself is probably not stable and is undergoing incremental change, and the process becomes almost impossible. Even “industry standard” routers are unable to complete the top-level routing because they were not designed to fully address the complex combination of specialized topologies, hierarchical design rules and DFM requirements (via redundancy, via orientation, via enclosures, wire spreading, etc.) required to achieve successful on-time design closure for AMS and custom ICs.

What is needed is a fully automated approach to signal planning. The key is to integrate the process with the block placement tasks and to use intelligent, routing-aware pin-placement algorithms to address multi-topology routing problems. A tool that tightly integrates these tasks lets designers explore the implications of different placement alternatives before deciding on an optimal solution, and in much less time than doing it manually or semi-automatically.

One critical consideration is the routing style required to handle these complex top-level and block routing tasks. A Manhattan routing style that avoids jogs reduces the number of vias required and minimizes wire length, in turn reducing timing delays and power. Nets can be sorted during routing to avoid crossing routes, reducing crosstalk and other noise. Of course, users must be able to define constraints for the router, such as wire width, shielding requirements, maximum and minimum widths on each layer, and matched signal pairs.
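As a purely hypothetical illustration of the kind of per-net constraints being described (this is not Pulsic's actual constraint format), such a record might capture width, per-layer limits, shielding and matching:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class NetConstraint:
    """Illustrative per-net routing constraint record (invented schema)."""
    net: str
    width_um: float                                       # default wire width
    layer_width_um: dict = field(default_factory=dict)    # (min, max) width per layer
    shield: bool = False                                  # route with grounded shield wires
    matched_with: Optional[str] = None                    # net this one must match

clk_p = NetConstraint("clk_p", width_um=0.4,
                      layer_width_um={"M4": (0.4, 0.8), "M5": (0.4, 1.0)},
                      shield=True, matched_with="clk_n")
```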

Another way to optimize area and improve productivity is to use a router that supports multiple-bias routing as well as strictly biased X-Y routing. In jumpered mode, designers can define complex schemes where routes in both horizontal and vertical biases use the same metal layer efficiently, with a separate layer used as a jumper layer for channels where a layer change is required to route effectively. Further, many semiconductor manufacturers use routers that support special optimization for bus routing and compact signal routing, allowing them to take advantage of specialized vias and via directions for still more compact routing.

More information on PulsIC’s Unity Signal Planner is here.




Analog Constraint Standards
by Paul McLellan on 09-20-2011 at 8:00 am

Over the years there has been a lot of standards creation in the IC design world to allow interoperability of tools from different vendors. One area of recent interest is interoperable constraints for custom IC design. Analog layout is becoming increasingly automated. Advanced process nodes require trial layouts to be created even during the circuit design stage, to bring detailed information back into the iterative simulation loops. In particular, variation at sub-30nm nodes is affected by layout-dependent effects (LDEs): device values depend on the proximity of whatever else is nearby, which means circuit design and layout design are much more closely intertwined than they were in the past. Otherwise, correlation between pre- and post-layout results is not assured.

To make the prototype layout process work smoothly, design constraints must be communicated to the layout automation, but there is currently no common open standard for defining these design constraints. As a result, users are forced to enter these constraints multiple times, once for each tool. Worse, subtle differences in the semantics can cause problems. To remedy this, the IPL Alliance, whose original charter was centered around open PDKs, has embarked on an initiative to create a single unified set of constraint definitions, covering both the syntax and the semantics. The goal, obviously, is to allow the designer to enter the constraints once and use tools from multiple vendors to achieve their design goals: higher quality, higher productivity, reduced time to market. The IPL Constraints Standard is available to all IPL Alliance members and is expected to be made public sometime in mid-2012.

The history is that in 2010 the IPL Alliance decided to try to get ahead of the creation of design constraint standards. Otherwise every vendor would create its own proprietary standard, and it is much harder to get alignment once such standards have achieved some adoption, since there are genuine costs of change and plenty of opportunity for political fighting among the most widely adopted standards.
The group’s goals were that the standard should:

  • Be portable and interoperable
  • Support existing and future tool sets
  • Be extensible
  • Take into account the multiple ways constraints might be created: by hand, through GUIs, scripted, or automated
  • Take into account the entire design flow, where many steps are co-dependent.

A decision was made to support both text-based and OpenAccess-based constraints, and tools should be able to translate between the two representations, if necessary, without semantic problems.
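To make the idea concrete, typical analog layout constraints cover things like device matching, symmetry about an axis, proximity and net parasitic limits. The representation below is invented purely for illustration; it is not the IPL Constraints 1.0 syntax, which is available only to IPL Alliance members.

```python
# Hypothetical, illustrative constraint set (NOT the IPL Constraints 1.0 format).
constraints = [
    {"type": "match",     "devices": ["M1", "M2"], "attribute": "w_over_l"},
    {"type": "symmetry",  "axis": "vertical", "pairs": [("M1", "M2"), ("M3", "M4")]},
    {"type": "proximity", "devices": ["M1", "M2"], "max_separation_um": 5.0},
    {"type": "net_cap",   "net": "vout", "max_fF": 20.0},
]

# The point of a common standard is that the same constraint set would be read,
# with the same semantics, by schematic, placement, routing and extraction tools
# from different vendors.
```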

At DAC in 2011 the IPL Design Constraint Working Group announced the IPL Constraints 1.0 standard, which defines the syntax and schema for open, interoperable design constraints and included a proof-of-concept set of constraints. Work is going on to expand the standard to include a broader representation of actual constraints and to validate the interoperable use model.

The presentation from the 2011 DAC luncheon is here.



Coby Hanoch joins Jasper
by Paul McLellan on 09-20-2011 at 7:00 am

Jasper has hired Coby Hanoch as the VP of international sales to manage sales outside of North America. I talked to him last week.

Coby started his career, after graduating from the Israel Institute of Technology, as an engineer at National Semiconductor. He quickly ended up in verification, where they developed the first random verification generator. Then he went to Paris to work in CAD/verification for the ACRI supercomputer project, which burned a lot of money with little to show for it. So he returned to Israel, and then he and a group of friends started Verisity. Somehow he got the job of doing the sales and marketing. They quickly brought Kathryn on (yes, Jasper’s current CEO, in her previous life) to manage US activity. In 1998, Moshe Gavrielov (who had come on as CEO) asked him to move back to Paris to run sales in Europe, and then, when that was up and running, to move to Asia. So he cheated and moved back to Israel (hey, it’s technically Asia). Verisity just took off and it was like sitting on a rocket. Europe went from $800K to $14.5M in 2 years. Asia went from $2M to $30M in 3 years. Coby found himself as VP of worldwide sales. Cadence acquired Verisity, it didn’t feel like a good fit, so he left.

In fact he left EDA and went to a little Israeli startup, turned it around and then… time for a break. But despite officially being on vacation, he kept getting calls from EDA vendors wanting help setting up distributors. A few phone calls and he collected a finder’s fee. But then came the downturn; finder’s fees dried up and, further, companies needed help managing reps and understanding the different cultures. So he set up EDAcon, with reps in all the relevant countries.

Earlier this year, Kathryn visited Israel and told Coby that everything was going really well in the US but Europe and Asia not so much. She invited him to join. I guess she was pretty persuasive since he said yes. He started just after DAC.

Every day he is more excited, since it feels like Verisity all over again. Jasper seems uniquely positioned for very rapid growth. When Coby was at Verisity, he used to feel he was doing the customer a favor when he sold them product; Jasper feels like that too. Jasper clearly has a great relationship with ARM, and that gives Jasper an entrée into ARM’s most advanced customers. But the target is broader: anyone doing more complex designs who has made the strategic decision to use formal.

The first few months were spent signing up reps. Next week is sales training. Bringing a lot of reps on at once enables Jasper to go broad rather than having to focus on one territory at a time. Obviously Israel is especially easy since Coby is there. But there is lots of business in China, Korea is starting up and, from earlier years, there are already a couple of strategic accounts in Europe. Also, it turns out Jasper has several AEs spread through the territories already on the ground. Evaluations are starting. Business discussions are starting. The product is mature. He’s excited. Let’s hope he’s right that Jasper is Verisity all over again. A wild ride by any standard.

To contact Coby, his email is coby at jasper-da dot com. I’m sure he’d love your PO.

The press release announcing Coby’s appointment is here.



Nanometer Circuit Verification: The Catch-22 of Layout!
by Daniel Nenni on 09-19-2011 at 8:00 pm

As analog and mixed-signal designers move to very advanced geometries, they must grapple with more and more complex considerations of the silicon. Not only do nanometer CMOS devices have limitations in terms of analog-relevant characteristics such as gain and noise performance, but they also introduce new sources of variation that designers must worry about. Industry efforts like the TSMC AMS Reference Flow 2.0 have devoted considerable focus to this.

Managing the effects of variation has been part of analog design since the vacuum tube era. However, nanometer CMOS introduces variations that depend not only on the devices themselves, but also where they are physically located on the chip relative to one another. These new context-sensitive effects, such as well proximity, shallow trench isolation stress, and poly spacing effects, make accurate assessment of the layout’s electrical impact – also a time-honored analog design imperative – even more important. Or as the co-founder of a major fabless IC company once said, “at nanometer geometries, the layout is the schematic.”

Perhaps the biggest problem for computer analysis of these effects turns out not to be actually modeling them, but getting timely access to layout data. Because layout traditionally is a tedious and change-resistant effort, project teams don’t like to start it until the circuit design is nearly complete. Yet before the circuit design is complete – while it’s still evolving and flexible – is exactly when you do want the layout data. The layout really is just another view of the schematic.

In order to solve this Catch-22, it’s important to look at a couple of factors: how much layout is really needed for analysis? And how much of it can be automated?

For example, only placement is needed to assess the impact of well proximity; however, that placement needs to be accurate and complete – not just each differential pair or current mirror in isolation. Routing, on the other hand, is essential for node capacitance, but approximate routing might be adequate, especially if the capacitance is dominated by source-drain loading, in which case the wires themselves add little.
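A rough way to see when approximate routing suffices: estimate the node capacitance as the junction (source/drain) capacitance on the node, plus the gate loads it drives, plus the wire capacitance. If the wire term is a small fraction of the device terms, the accuracy of the routing estimate hardly matters. The numbers below are made-up illustrative values, not data from the forum.

```python
# Back-of-envelope node capacitance budget (all values are assumptions).
c_sd_junction_fF = 12.0   # source/drain junction cap of devices on the node
c_gate_loads_fF  =  8.0   # gate capacitance of the stages being driven
c_wire_est_fF    =  1.5   # wire cap from an approximate (pre-route) estimate

c_total = c_sd_junction_fF + c_gate_loads_fF + c_wire_est_fF
wire_share = c_wire_est_fF / c_total
print(f"total ~{c_total:.1f} fF, wire contributes {wire_share:.0%}")
# If the wire share is this small, rough routing is good enough for simulation;
# if long top-level wires dominate, accurate routing data is needed.
```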

Fortunately there’s a group of companies bringing to market innovative solutions that focus exactly on these problems, and collaborating to hold the nanometer Circuit Verification Forum (nmCVF) on September 22nd at TechMart in Santa Clara. Hosted by Berkeley Design Automation, and including technologists from selected EDA, industry and academic partners, the forum will showcase advanced nanometer circuit verification technologies and techniques. You’ll hear real circuit case studies where these solutions have been used to verify challenging nanometer circuits, including data converters; clock generation and recovery circuits (PLLs, DLLs); high-speed I/O; image sensors; and RFCMOS ICs.



AMS Design, Optimization and Porting
by Daniel Payne on 09-19-2011 at 2:35 pm

AMS design flows can follow a traditional path or try something new. The traditional path goes through the following steps:

  1. Design requirements
  2. Try a transistor-level schematic
  3. Run circuit simulation
  4. Compare the simulated results against the requirements, re-size the transistors and go back to step 3 or 2
  5. Create an IC layout
  6. Extract parasitics, re-run circuit simulation
  7. Compare the simulated results against the requirements, re-size the transistors and go back to step 5 or 2

You probably noticed that there are iteration loops in the traditional flow after I’ve created a sized schematic or produced an IC layout. These loops take both precious CPU time and wall time, which means that your schedules tend to slip because you’re not meeting your specs soon enough.

There is another AMS design flow called model-based design that can reduce the time to design, optimize or port an IC design:

  1. Design requirements
  2. Try a transistor-level schematic
  3. Create a model
  4. Run an optimizer (Inputs: Model, Constraints, Process. Output: Sized schematic)
  5. Schematic driven layout
  6. Extract parasitics, re-run circuit simulation
  7. Compare the simulated results versus requirements

Magma has created this new AMS design flow and calls its tool the Titan Analog Design Accelerator (ADX). What strikes me most about this approach is that I’m not spending the majority of my time running circuit simulations and tweaking transistor sizes manually; instead, I’m creating a model of my IC design using equations and then asking an optimizer to do the hard work of producing the best sized schematic that meets the specification.
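To give a feel for what “model plus optimizer” means in practice, here is a generic sketch using scipy rather than Magma’s Titan ADX: the performance targets are written as equations of the device sizes and bias, and a numerical optimizer searches for a sizing that meets them. The square-law amplifier model and all the numbers are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Toy single-stage amplifier model (square-law equations, illustrative only).
KP, L = 200e-6, 0.35e-6                       # process transconductance factor, channel length
GAIN_SPEC, POWER_MAX, VDD = 40.0, 1e-3, 3.3   # assumed specs

def performance(x):
    w, i_bias = x                             # device width [m], bias current [A]
    gm = np.sqrt(2 * KP * (w / L) * i_bias)   # square-law transconductance
    rout = 1.0 / (0.02 * i_bias)              # crude output resistance model
    return gm * rout, VDD * i_bias            # (gain, power)

cost = lambda x: performance(x)[1]            # minimize power...
cons = [
    {"type": "ineq", "fun": lambda x: performance(x)[0] - GAIN_SPEC},  # ...with gain >= spec
    {"type": "ineq", "fun": lambda x: POWER_MAX - performance(x)[1]},  # ...and power <= budget
]

res = minimize(cost, x0=[10e-6, 100e-6], method="SLSQP",
               bounds=[(1e-6, 500e-6), (1e-6, 1e-3)], constraints=cons)
print("sized W = %.1f um, Ibias = %.1f uA" % (res.x[0] * 1e6, res.x[1] * 1e6))
```

A real model-based flow works with far richer device models and many more constraints (see the list below), but the division of labor is the same: the designer writes the model, the optimizer does the sizing.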

The design constraints for the analog optimizer could be:

  • Area
  • Power
  • New specification
  • Speed or frequency
  • PVT corners

ADC Example
Using the model-based approach in Titan ADX, an ADC circuit was designed and then automatically optimized. The following plot has 11 different results from the optimizer, showing power versus input range in red and active area versus input range in blue.

I can look at the trade-offs shown in the plot and then choose which of these 11 sized schematics to use. Both area and power are minimized around the 1.9V input range according to these results.

Optimizer Feedback
The analog optimizer provides plenty of information to help you make design trade-offs:

  • Sensitivity information for each constraint
  • The constraints that limit the design objectives most
  • Critical PVT corners
  • Floorplan constraints
  • Layout constraints

Porting an IC Design
Let’s say that you wanted to port an AMS block from 130nm to 90nm. The following table will give you an idea of the time difference between a traditional flow and the newer model-based flow to port your design:

You still have to learn the model-based design approach before you can start seeing results like this, so take that into account. Analog designers can be resistant to changing their methodologies; however, this new approach can provide your company with attractive time-saving and optimization benefits.

To get the most benefit from this Magma flow, you have to add the Analog Virtual Prototyper (AVP), which lets you define the placement for your transistors.

Summary
If you are an AMS designer who wants an optimal IC schematic and layout sooner, consider looking at the model-based approach offered by Magma, the Titan Analog Design Accelerator.



PVT and Statistical Design in Nanometer Process Geometries
by Daniel Nenni on 09-18-2011 at 9:00 am

On Sept 22, 2011, the nm Circuit Verification Forum will be held in Silicon Valley, hosted by Berkeley Design Automation. At this forum, Trent McConaghy of Solido DA will present a case study on the TSMC Reference Flow 2.0 VCO circuit, showcasing Fast PVT in the steps of extracting PVT corners, verifying PVT, and doing post-layout PVT verification. The presentation will cover the speed benefit of Solido Fast PVT, and the multiplicative speed benefit when combined with Berkeley DA’s Analog FastSpice simulator. The picture below shows the benefits in the context of a corner-driven design flow, reducing the time taken for a thorough PVT flow from 4.8 days to 1.8 hours.

Process, voltage, and temperature (PVT) variations are often modeled as a set of PVT corners. Traditionally, only a handful of corners have been necessary: with FF and SS process (P) corners, plus extreme values for voltage (V) and temperature (T), all combinations would mean 2^3 = 8 possible corners.
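In code, that classic corner set is just the Cartesian product of the extremes; the voltage and temperature values below are illustrative, not from the article.

```python
from itertools import product

process     = ["FF", "SS"]      # fast and slow process model sets
voltage     = [1.62, 1.98]      # e.g. a nominal 1.8 V supply +/- 10% (assumed)
temperature = [-40, 125]        # degrees C (assumed industrial range)

corners = list(product(process, voltage, temperature))
print(len(corners), "corners")  # 2 * 2 * 2 = 8
```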

With modern processes, many more process corners are often needed in order to properly bracket process variation across different device types. Furthermore, transistors are smaller, performance margins are smaller, voltages are lower, and there may be multiple supply voltages. To properly bracket these variations, more variables with more values per variable are needed.

This leads to more corners. Consider the reference VCO circuit from the TSMC AMS Reference Flow 2.0 on the TSMC 28nm process. A reasonable setup of its PVT variation has 15 modelset values, 3 values for temperature, and 5 values for each of its three voltage variables, totalling 3375 corners. Industry-standard simulators take about 70 seconds per corner, which means it takes 66 hours to evaluate all the corners. Even with 10 parallel cores, the runtime is 6.6 hours.

Designers may cope by guessing which corners cause the worst-case performance, but that is risky: a wrong guess could mean that the design hasn’t accounted for the true worst case, which means failure in testing followed by a re-spin or, worse, failure in the field.

And what about layout parasitics? Ideally one does a thorough PVT analysis after layout, but each of these post-layout simulations takes 18 minutes on industry-standard simulators. Therefore, even with 10 cores, it would take 4.2 days to run the 3375 corners.
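For reference, the runtime arithmetic behind those figures is straightforward:

```python
corners = 3375

# Pre-layout: about 70 s per corner on a conventional simulator.
pre_hours = corners * 70 / 3600
print(f"pre-layout: {pre_hours:.0f} h on one core, {pre_hours / 10:.1f} h on 10 cores")
# -> roughly 66 h, or 6.6 h on 10 cores

# Post-layout: about 18 min per corner with extracted parasitics.
post_days = corners * 18 / 60 / 24
print(f"post-layout: {post_days / 10:.1f} days even on 10 cores")
# -> roughly 4.2 days
```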

What is sorely needed is a way to quickly yet thoroughly identify worst-case PVT corners when there are hundreds, thousands, or even tens of thousands of possible corners.

Solido Design Automation has developed a new application called Fast PVT to address this. It uses adaptive machine learning technologies to rapidly identify the worst-case corners, often reducing the number of simulations by 10x or more. Fast PVT enables users to rapidly extract a handful of worst-case corners, which they then use in rapid-turnaround design iterations. Once the corners meet spec, Fast PVT can be used for a more conservative PVT verification. Fast PVT is also applicable to post-layout analysis.

Of course, PVT is not always the way. Some designers have access to sufficiently good statistical MOS models to consider doing statistical analysis, which is inherently more accurate than PVT. Ideally, one would consider statistical process variation effects during the design loop, in order to get to optimal power, performance, and area subject to a target yield. However, since Monte Carlo (MC) simulation is far too slow to sit inside the design loop, MC is traditionally run as a verification afterthought. For high-sigma designs, the challenge is even greater, since it is not feasible to run the 5 billion or so MC simulations needed to verify 6-sigma yield. And there is a final challenge: designers don’t traditionally think in statistics; they think in terms of corners.

Fortunately, there is a way for designers to design with corners yet account for statistical (and even high-sigma statistical) variation. The key is to extract statistical corners that actually bound the 3-sigma or 6-sigma output performance of the design at hand. To reiterate: these corners bound the performance of the circuit in a statistical sense, unlike traditional global MOS corners such as “FF”, which bound the performance of the device. There also needs to be a fast, pragmatic statistical (or high-sigma statistical) verification step. These steps of corner extraction and verification fit into a familiar-feeling corner-driven design flow: extract corners, design against them, verify, and iterate if needed.
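One simple way to picture a statistical corner that bounds 3-sigma performance (a generic illustration, not Solido’s algorithm): run Monte Carlo on a performance metric, find the sample sitting at the 3-sigma tail of that metric’s distribution, and reuse that sample’s process-parameter values as a corner in subsequent design iterations. The data below is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend Monte Carlo results: each row is one sample's process parameters,
# and `delay` is the simulated performance for that sample (all values invented).
n = 2000
params = rng.normal(size=(n, 4))     # e.g. normalized vth / tox / mobility variations
delay = (100e-12
         + 5e-12 * params @ np.array([0.8, 0.4, -0.3, 0.1])
         + 1e-12 * rng.normal(size=n))

# One-sided 3-sigma quantile of a normal distribution ~ the 99.865th percentile.
target = np.quantile(delay, 0.99865)
corner_idx = int(np.argmin(np.abs(delay - target)))

print(f"3-sigma delay bound ~ {target * 1e12:.1f} ps")
print("reuse sample", corner_idx, "as the 'slow' statistical corner")
# Designing against this corner (plus a matching fast one) keeps the familiar
# corner-based flow while reflecting the circuit's own statistical spread.
```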

This is exactly the same flow as the PVT corner-driven flow. The only difference is how the corner-extraction and verification tools themselves behave. In the end, we have a unified, designer-friendly approach to handling PVT, statistical, or high-sigma variation.

As shown below, Solido DA supplies tools to support the flows for all three styles of variation and, in conjunction with Berkeley DA’s AFS, provides speedups of 10x+ to 100x+.

Trent McConaghy is the Solido Chief Science Officer, an engaging speaker, and someone I have thoroughly enjoyed working with over the past two years. If you are doing 28nm analog/RF, IO, memory or standard cell digital library design, you will not want to miss talking to Trent!


Fast Track Seminars
by Paul McLellan on 09-15-2011 at 6:11 pm


Atrenta’s SoC realization seminars, “Fast Track Your SoC Design”, have started. The first one was in Ottawa last Tuesday, and it was a full house. In a straw poll, most of the attendees acknowledged facing IP handoff and quality issues. The keynote speaker was Dr Yuejian Wu, director of ASIC development at Infinera and an adjunct professor at the University of British Columbia (which seems about as far away as you can get from Ottawa without actually leaving Canada!). He talked about “Fast silicon validation with built-in functional tests.” Other attendees shared their experience with the SpyGlass tools and methodologies. Most of the interest, as judged by the questions, seemed to be on GenSys, Power and Advanced Lint.

The next seminar is coming up next Tuesday, September 27th, from noon until 5pm at the Network Meeting Center (5201 Great America Parkway, Santa Clara, by the Hyatt Hotel). The keynote speaker will be Suk Lee, director of design infrastructure marketing at TSMC, another longtime EDA guy who worked for me at one point years ago. He will be speaking about soft IP quality.

The seminar is free, includes lunch (who said there is no such thing as a free lunch?) and closes with a cocktail reception.

To register, go here. There is also a seminar in Bangalore on October 13th.


Phil Bishop and marketing at Magma
by Paul McLellan on 09-15-2011 at 4:59 pm

Earlier in the week I met with Phil Bishop, who is the corporate VP of worldwide marketing at Magma.

I started by asking him where he came from. He originally started as a designer at Motorola working on microprocessors and microcontrollers. Then he moved to Silicon Compiler Systems (remember them?), which ended up being acquired by Mentor. He stayed at Mentor for twelve years and ended up as VP of consulting, with some product and IP framework responsibilities. Then he decided the lure of fish & chips was too great and went to the UK for 5 years as the CEO of Celoxica. He built that up and took it public. Interestingly, one place he found revenue was in the banking sector, selling them Xilinx-based add-in boards to accelerate Black-Scholes option pricing. Sounds more fun than the usual EDA term-license deals. When he came back from the UK he became Pyxis CEO, and left shortly before its sale to Mentor last year.

Rajeev recruited him to Magma, initially to do global account sales, until one day he found himself in charge of all of marketing: marcom, product marketing, solutions marketing and the foundry interface.

His key marketing thrust is to pull Magma’s product line together into a unified message (I used to call this empirical marketing when I was doing it at Cadence). The Silicon One tag is an umbrella name for this. Magma has a strong product line to do this with: Finesim and Titan to address analog/mixed-signal, and a strong verification portfolio of Tekton (timing verification), QCP (extraction) and Quartz (DRC/LVS). Of course there are some holes that Magma has to fill with partners: it has no DFT solution of its own and no IP portfolio of its own, so for IP it works closely with ARM, MIPS, Imagination and others.

Another thing he is trying to do is drive the product marketing folks to take responsibility for moving opportunities from the funnel into true pre-sales engagements, so as to better understand all the adoption issues and pushback. At the same time, he wants them to do a better job on competitive analysis, looking out a year or so to where the market will be, not just at what the competition has this quarter. This allows them to do a better job of driving engineering, who are operating on that kind of timescale anyway.

The big thrust that Phil feels is driving Magma’s business is the move to put everything on the same chip: all the high-performance digital, all the analog, and the RF.

Talking of Silicon One, coming up are the Silicon One seminars. The keynote speakers are pretty interesting:

  • Boston on the 22nd has Carl Anderson of IBM talking about the EDA cloud. IBM has put a lot of effort into building its own cloud for EDA and has significantly reduced its IT costs as a result.
  • Santa Clara on the 26th has Jack Harding of eSilicon. He will talk about how chips are no longer primarily digital or primarily analog, they are totally both, and the challenge is how to model, simulate, validate and optimize such beasts.
  • Austin on the 28th has Ty Garibay, now of Altera and recently in charge of engineering at TI wireless. He is talking about how to integrate SoC and FPGA into multi-chip packages, given the changed semiconductor economics whereby 20nm will be slow in coming and not cheap when it gets here, extending the push for greater integration than ever at 28nm.

After that the seminars go to Asia and Europe.

To see the agenda for the day at the seminars, go here.

To register for any of these seminars, go here.
