
Synopsys to Acquire Atrenta

by Paul McLellan on 06-07-2015 at 11:17 pm

I was at the DAC kickoff this evening in the InterContinental Hotel, talking to Dave DeMaria, the senior marketing guy at Synopsys. He told me of a couple of minor press releases due to hit the wire tomorrow morning; they didn’t sound important enough to be blogworthy. Aart was there too, although I didn’t speak to him. Then at 8pm (11pm Eastern) Synopsys put out something much more interesting: a press release announcing that it is acquiring Atrenta, subject to the usual caveats. In particular, there will be a waiting period for HSR review, since Atrenta is over the filing threshold even though it is a private company (when we sold Compass to Avant! we had to go through an HSR filing too).

There have been lots of rumors about potential suitors for Atrenta, not just recently but over the years. Since they operate at the RTL and IP level there is clearly potential for other companies than the usual suspects of Cadence, Synopsys and Mentor. Two that have been much-rumored were ANSYS (who acquired Apache a few years ago) and Dassault (who have some process management solutions and acquired Tuscany Design Automation [disclosure: I was on the board] right at the end of 2012). I’ve never heard it mentioned but another possible candidate might have been TSMC who have used Atrenta’s SpyGlass solution as their “signoff” tool for IP qualification as part of their OIP ecosystem.

The financial details of the deal were not disclosed. But for sure it was not a fire sale. For a start, the current threshold for HSR filing, the so-called “$50M threshold”, is actually $76.3M this year so we know that the price was at least that high.

However, since there were multiple companies rumored to be interested I think it is logical to speculate that the price will be at a reasonable multiple on their revenue, itself rumored to be running in the $60M range. At 3.5X revenue that would be $210M so I’ll go with that as my guess. Final answer.

Of course Synopsys plans to integrate the Atrenta technology into their verification continuum, especially SpyGlass which has wide acceptance. Manoj Gandhi, who is the GM of verification, adds a bit of color (not much, to be honest): “Atrenta’s demonstrated leadership in static and formal technologies is recognized throughout the EDA industry, and its technology is used by design and verification teams around the world. Synopsys expects to leverage this strong technology to further improve our Verification Continuum platform to address continually increasing verification challenges, and to support our ongoing R&D collaborations with customers in both verification and implementation.”

Atrenta will be at DAC on booth #1732. Interestingly they just announced the date of their user conference in October. Of course that may take place, since under the rules for an acquisition like this the two companies are not really able to work together until the deal is closed (on the basis that if the deal is struck down then everything should go back to exactly how it was before). However, I think it is more likely that it will just get folded into SNUG.

The Synopsys press release is here.


5 Things Chipmakers Are Missing on the IoT

by Don Dingee on 06-07-2015 at 7:00 pm

When the RISC movement surfaced in 1982, researchers analyzed UNIX to discover what instructions multi-user code was actually using, and then designed an instruction set and execution pipeline to do that better. Fewer instructions meant fewer transistors, which led to less power consumption – although in the original Berkeley RISC disclosure, the word “watts” never appears. Even during the early development of ARM, lower power consumption was completely serendipitous.

As the mobile SoC began gathering momentum in 1992, the benefits of fewer transistors, smaller dies, and less power were obvious. New developments were necessary. Low power DSP capability, whether through hardware multipliers and SIMD enhancements, or efficient DSP cores, was a must for GSM signal processing. Code density expansion was a BOM killer, giving rise to the ARM Thumb instruction set. Efficient Java execution gave rise to ARM Jazelle.

In 2002, smartphone efforts ramped up. Faster processors such as the ARM11 appeared. Graphics needed to improve, leading to development of mobile GPU cores such as the Imagination PowerVR MBX Lite, and later development of OpenGL ES. Operating systems started to change, indicating a shift from Symbian, Palm, and Microsoft to newer ideas. Android was just a twinkle in Andy Rubin’s eye, and Apple was playing with the beginnings of Project Purple and multi-touch.

Each of these phases blew up everything we thought we knew about chipmaking. Running in parallel was a constant push for more, smaller transistors, driven by the economics of the PC and later by consumer devices. This led to bigger wafers and smaller geometries and FinFETs and FD-SOI and gigantic FPGAs and manycore processors.

It’s 2015, and the Internet of Things is here. We should be talking about a fundamental shift in the way chips are designed and made specifically for the IoT – but, we’re not, because it hasn’t really happened yet.

True, we have dozens of microcontroller architectures and billions of chips out there. These were designed to put intelligence on a point. Control some buttons. Light some LEDs. Spin a motor. Read a sensor. Automotive and industrial types discovered they could be put on a simple bus, like CAN. Some really brave folks started putting radios on chip, like 802.15.4 or ISM band, and protocol stacks like ZigBee and Bluetooth and Thread found homes. That led to substantial IoT progress from the likes of Atmel, Microchip, NXP (née Freescale), Silicon Labs, TI and others – but not a breakthrough like those we saw in earlier phases, at least so far.

DAC52 is offering a Management Day session on June 9th to discuss “big data” in two sessions, one from the perspective of behavioral analysis and design closure in EDA, and the other from possible trade-offs in connectivity. At least we are talking. We know we have way too many connectivity standards, and not enough data-level interoperability, when it comes to the IoT.


But we still don’t have the right chips, or the right discussion. What we have is what I call “the IoT paint job”, where everyone lists IoT on their website and booth signage to draw traffic. Just watch how many press releases come out of DAC with the term IoT somewhere. Not to disparage anyone in particular, there is some good stuff happening, and there’s some fluff. ARM is making great strides with a focus on the IoT. Mentor understands embedded software versus SoC design, and Synopsys has its ARC core and virtual prototyping.

What I’m saying is we need more actual IoT progress. At least 5 things are missing:

1) Processes. Somewhere in between 14nm FinFET and 130nm BCD lies a sweet spot for the IoT. We know mixed signal and embedded flash get difficult below 28nm. MEMS also presents some challenges in economics. Talk of trillions of chips and 2 cent parts makes most chip firms yawn – it hasn’t happened, and frankly isn’t a sustainable model for most companies, especially the ones tied up at the 14nm end of the spectrum needing bigger ASPs to offset billions of cap-ex dollars. Where is a true, dedicated IoT process that can handle both the technology and the business model? (Hint: ARM announced 55ULP initiatives with TSMC and UMC recently.)

2) Subthreshold. The MCU firms all understand ultra-low power, and are fast to point out metrics like µA/MHz and various modes from catnapping to comatose. Super. Business as usual; it hasn’t changed much since the 1980s except the power figures have gotten smaller. The fundamental change that has to happen is subthreshold logic, or something akin to it, that redefines the equation. Companies like Ambiq and PsiKick are out there. Sunrise Micro Devices, incubated by ARM and recently reacquired, is now the technology inside the ARM Cordio radio.

3) Mixed signal. I cut my teeth making drones fly (we didn’t call them that then, they were RPVs) with a lot of LM148s and Siliconix analog switches way back when. Mixed signal is near and dear to my heart. We integrated mixed signal on MCUs, great. I get to choose from a thousand parts using a parametric search, hoping I can find the exact combination of resolution, channels, and pinouts I need. There is Cypress PSoC, and Triad’s VCA, and the MAX11300 from Maxim Integrated, and not much more in configurable mixed signal. The counter argument is to just put a dedicated IP block on a dedicated SoC design, and that works if you have a few million dollars. When mixed signal gets as easy to create with as CPLDs, we’ll have something.

4) Optimization. If all the rage in server design is workload-optimized processors, why isn’t that true for the IoT? A lot of the focus on the IoT is on one tier: the edge. But there is so much opportunity to optimize at the gateway and infrastructure levels. Network-on-chip is a big help in making MCU architecture more SoC-like. We need to start looking at IoT traffic not as a bunch of packets, but in thread form, and figure out what makes it go faster. “Meh, IoT is low bandwidth.” I hear that all the time, and for a particular sensor at the edge that may be true – but toss 10,000 sensors together in real time with predictive analytics engaged and tell me how bandwidth looks then. It worked for RISC; workload optimization is needed for IoT parts.

5) Programming. ARM is rallying around their vision, mbed OS, with optimized Cortex-M IP. Check. How about optimizing for Google Brillo? Or maybe something that runs MQTT or DDS better? This may be the biggest opportunity yet, really understanding IoT software. Another change that chipmakers need to be aware of: not everything is C, or Java. Those are two of the most popular languages in the world. C was especially great when we worked with Unix and programmed hardware down to the bit level. On the IoT, many other languages are emerging (and yes, some are on top of C). Coders today are learning in Python – embedded purists need to stop barfing on it as interpreted. For distributed data analysis there is Lua. For safe, concurrent threads, there is Rust, which has just put out its first stable release. It’s a new world, and the C compiler and debugger aren’t the only vehicles anymore, or even the right ones.

We’re still working very much on the old chip technology base when it comes to IoT design. When Steve Jobs introduced the iPhone in 2007, he quoted Alan Kay: “People who are really serious about software should make their own hardware.” We saw what Apple did, making its own chips to run its own software better.

Well, the IoT is all about software. It’s time we make chips just for it.


Turning the Automotive Development Process Upside Down

by Daniel Payne on 06-07-2015 at 2:00 pm

Most of us drive automobiles and have a vague idea that the development of our cars takes many years, costs millions of dollars, follows a proprietary process, and requires huge factories. A relatively new company called Local Motors, founded in 2007, has started to turn the automotive development process upside down because they do things differently, like:

  • Have a community of over 30,000 designers online that collaborate
  • Complete product developments 5X faster
  • Use 100X less cost for development

Take a look at some of the cool ideas that have been brought to life by Local Motors with their collaboration process:

In the DAC pavilion on Wednesday, June 10th from 10:30AM to 11:00AM you will hear Corey Clothier talk about “How Open Collaboration is Fostering the New Mobility Revolution”. We’ve all heard about how Google has a self-driving car project, however the folks at Local Motors are also entering that market, so stay tuned to hear about their developments in the near future.

Instead of using a large, centralized, traditional manufacturing factory, the vision at Local Motors is to use local micro factories, where they can produce a 3D printed car. Their current headquarters is in Phoenix, but soon you’ll see micro factories in Tennessee, Maryland, Detroit, Florida and Europe. If all goes to plan, then in 10 years there will be 100 micro factories around the globe.

They have an electric vehicle that will have completed all of the mandatory testing procedures in 2016, ready for sale in 2017. One intriguing aspect of a 3D printed car is that you can actually have your car recycled in a few years, then receive a new body style in return as an upgrade.

The general process at Local Motors is:

  1. Design community submits new ideas
  2. Community then votes to decide on the best ideas
  3. Prototypes are made
  4. Micro-manufacturing is done
  5. Consumers can buy, then designers share in the revenue

    Here are a few more photos of transportation systems designed through this collaboration process:

    All of this collaboration vision from Local Motors reminds me of Alvin Toffler’s book The Third Wave, which introduced the concept of mass customization. Consumers could create their own personalized automobile in the color, length, weight and other features to suit their individual tastes, instead of having the factory limit them to a few makes, models and trim packages. Attend the DAC Pavilion session and see the future now.


    ESD Protection Network Checking is Difficult But Necessary

    by Tom Simon on 06-06-2015 at 6:00 pm

    I’ve written before about anti-fuse non-volatile memory, where the gate oxide is intentionally damaged in order to create a readable bit of data; that is exactly what most circuit designers never want to happen to their logic gates. However, since the advent of MOS transistors, the issue of Electrostatic Discharge (ESD) and the resulting damage from voltage-induced currents has been a key reliability issue. While it is possible to reduce the likelihood of an ESD event through proper handling and environmental controls, it is not possible to completely prevent them. An ESD event can cause excessive current that can damage or vaporize wires and vias, it can melt p-n junctions, and of course it can damage gate oxide.

    The world of ESD is a world of things out of the normal. Circuit designers like it when transistors and interconnect are behaving linearly. There is nothing linear about ESD. Understanding and preventing ESD events and the resulting damage takes us into a world where everything is operating at extremes.

    Shunting damaging currents caused by high electric potential is done by protection circuitry in the pad ring. The protected devices in the core should never see high voltages or currents. It is the behavior of the ESD protection network that determines how the chip will fare. While the protection network is made of devices that are familiar, they are operating in modes that cannot be easily simulated using traditional circuit simulation methods.

    There are a number of commercial solutions for analyzing ESD protection networks. Magwel is a company that started out developing field solvers for modeling transistors, but at the behest of several customers has developed a comprehensive ESD analysis solution called ESDi. I have been learning about Magwel and ESDi because I am working with them on several projects.

    Magwel’s ESDi has a number of notable advantages over previously existing tools. For one, ESDi can identify triggering of parasitic bipolar junctions in MOS devices. It does this as it extracts the ESD protection network in the layout. These devices can be pre-characterized by the TCAD group, and table models can be used to predict their behavior as part of the ESDi tool run. The same goes for the parasitic junction diodes formed in the MOS devices. This allows for proper modeling of avalanche and snapback behavior during ESD events.

    Another key analysis capability of ESDi is that it models triggering of multiple protection devices, and can allocate the current among the various paths available. ESDi can do this because it uses extracted network resistance to calculate voltages at all the device pins. ESDi has a sequential algorithm that then allows for multiple devices to trigger. This avoids pessimistic current predictions, eliminating extra downstream work fixing problems that do not exist. It also can report electromigration violations and high current densities along discharge paths.
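
To make the current-allocation idea concrete, here is a back-of-the-envelope sketch: once multiple protection devices have triggered, the discharge current divides among the parallel paths in proportion to their conductance. The resistance and current values below are invented for illustration, and this simple linear divider is not ESDi’s actual sequential algorithm:

```python
def allocate_current(i_total, path_resistances):
    """Split a discharge current among parallel paths by conductance.
    Simple linear current-divider model, illustrative only."""
    conductances = [1.0 / r for r in path_resistances]
    g_total = sum(conductances)
    return [i_total * g / g_total for g in conductances]

# A 1.33 A HBM-like pulse splitting across three parallel clamp paths
# with (hypothetical) path resistances of 1, 2, and 4 ohms.
currents = allocate_current(1.33, [1.0, 2.0, 4.0])
print([round(i, 3) for i in currents])  # [0.76, 0.38, 0.19]
```

A tool that pessimistically assumed a single discharge path would have to assume the full 1.33 A flows through one clamp; recognizing that multiple devices trigger and splitting the current accordingly is what removes that pessimism.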

    Another advantage of ESDi is that it goes beyond what electrical rule checkers (ERC) do in determining if there is an adequate current-limiting resistor in the discharge path. Rule checkers can tell you if you missed a resistor, but they cannot check whether it has the correct value.

    Magwel’s ESDi has many other unique features that make it extremely fast and accurate. Magwel will be at DAC in June this year in San Francisco. You will see their booth right as you enter the exhibit hall. Please contact them at sales@magwel.com for a demo at DAC or just drop by to learn more about ESD protection network verification. Magwel also has tools that address power transistor switching analysis, and thermally adjusted spice modeling for transistors.


    Vacationing with the Fabless Semiconductor Ecosystem!

    by Daniel Nenni on 06-05-2015 at 4:00 pm

    The Design Automation Conference is the largest and most diverse event in the fabless semiconductor ecosystem. Next week in San Francisco you will see technology and people you have never seen before. You will benefit from the efforts of hundreds of thousands of semiconductor professionals like myself who have dedicated their careers to this industry. This is my 31st DAC and judging by the keynotes, panels, fireside chats, and other events, this year looks to be both interesting and entertaining, absolutely!

    You can check the #52DAC highlights HERE.

    The Design Automation Conference (DAC) is recognized as the premier conference for design and automation of electronic systems. DAC offers outstanding training, education, exhibits and superb networking opportunities for designers, researchers, tool developers and vendors.

    Besides being the number one EDA company, having the largest semiconductor IP portfolio, and the biggest user group, Synopsys is also one of the most diverse companies with the deepest executive bench in the fabless semiconductor ecosystem. Take a look at the EDA Mergers and Acquisitions Wiki and you will see why. Synopsys is made up of more than a hundred different companies from around the world. I worked for several of those acquired companies so I know this from experience.

    Synopsys is the Silicon to Software partner for innovative companies developing the electronic products and software applications we rely on every day. Synopsys has a long history of being a global leader in electronic design automation (EDA) and semiconductor IP, and is also a leader in software quality and security testing with its Coverity solutions. Whether you’re an SoC designer creating advanced semiconductors, or a software developer writing applications that require the highest quality and security, Synopsys has the solutions needed to deliver innovative, high-quality, secure products.

    What happens when you put the number one event and number one company together? Some very cool stuff of course:

    Silicon to Software Theater
    Join us in Synopsys Booth #2133 to hear industry leaders discuss exciting new developments and the latest trends in IoT, automotive, FinFET, and mobile computing design and use.

    Conference Presentations
    Hear Synopsys speakers at conference panels, tutorials, poster sessions, and more.

    Special Events (free food!)
    Join us at the Park Central Hotel (formerly Westin) to hear users discuss the latest industry trends and their experiences using Synopsys technologies, including AMS verification, custom implementation, IC Compiler, PrimeTime, SoC verification and more, in their SoC designs.

    Partners & Standards
    See Synopsys highlighted in the DAC exhibit hall at many of our partners’ booths and learn about standards activities at DAC.

    Automotive Village
    Learn about the newest advancements in automotive design.

    ARM Connected Community
    Learn how ARM and Synopsys collaborate to enable you to create leading-edge ARM Powered products.

    My beautiful wife and I will be attending many of the parties but not all. Hopefully we will see you at the Love IP Party on Monday night. SemiWiki is a participating company, so when you register select us as your sponsor and be our guest (space is limited).

    On Wednesday night SemiWiki is again sponsoring a DAC reception. Last year we did a book signing; this year there will be SemiWiki bloggers mingling, and it would be a pleasure to meet you. In addition to free food and drink, there will also be tokens of our appreciation available as thanks for your support in making SemiWiki one of the top rated industry portals! I hope to see you there…

    WEDNESDAY June 10, 6:00pm – 7:00pm | Esplanade Foyer
    NETWORKING: Reception – All Invited!
    Join attendees for refreshments and lively discussion recapping the day’s events.

    Sponsored by: SemiWiki.com

    Also Read: Ten Things to Do in San Francisco the Way the Locals Do


    GlobalFoundries Adds RF to 28nm

    by Paul McLellan on 06-05-2015 at 7:00 am

    The internet of things (IoT) or internet of everything is a term that is into the red zone on the hype-meter. But it does genuinely have something of substance behind the hype. The thing that is a little deceptive is that the IoT term makes it sound like it is a market, but in fact it is several different markets: medical, automotive, industrial, metrology, fitness and more. But these different applications largely have several things in common. They are cost-sensitive, they require long battery life (or even scavenging power from the environment) and they require connectivity, almost always wireless.

    My expectation is that very few IoT designs will be large SoCs in advanced FinFET processes. The costs are too high, the difficulty of doing the design is too high, and the need for that many gates doesn’t exist. As for many other classes of design, I think the sweet spot for some time will be 28nm. This is the last process node that doesn’t require the expense and delay of double patterning, a lot of capacity is in place from a number of foundries, and much of the equipment is depreciated. I am not alone in believing that 28nm will be a very long-lived process generation. But for IoT there is a problem: wireless interfaces require RF.

    A couple of weeks ago GlobalFoundries announced a new 28nm process, 28SLP-RF, that is targeted at IoT and mobile applications. This is a process that is built on the foundation of the existing field-proven, cost-optimized 28SLP process, adding RF modeling. This is an HKMG (high-k metal gate) process. It is gate-first, which GF claims is “up to” 30% cheaper than equivalent gate-last processes (a 10% mask adder, 10% for power management, and a 10% area scaling disadvantage for gate-last).

    I talked to Mike Chen (Deputy Director, Product Line Management) and Peter Rabbeni (Director, RF Segment Marketing) about the new process and the ecosystem around it. What is being announced is essentially adding an RF component to the existing ramped 28SLP process. GF already have some customers who have designed their own RF on top of the basic process, but this was purely as a COT design. What is new is that the RF is being made available to anyone doing design at 28nm. The PDK exists and can be downloaded from the GF website (for qualified companies). There are reference flows.

    Compared to the previous process node, 40LP, it has:

    • Twice the gate density
    • 36% speedup with full overdrive option
    • 40% power reduction
    • 1.6GHz performance for the ARM Cortex-A9
    • And now RF

    I asked Mike and Peter about IP. After all, most groups designing IoT applications are not really capable of designing radios on the bare silicon, and even for groups who have in-house RF designers, it is slow and expensive to design a wireless interface. They told me that they were working to make available both Bluetooth and WiFi interfaces, although it was too early to tell me which companies they were working with and giving early access to the process.

    Silicon results have demonstrated high-frequency performance (310GHz) and low flicker/thermal noise providing chip designers flexibility in optimizing core RF performance and functionality. The 28SLP-RF process technology is designed for devices that require low standby power and long battery life integrated with RF/wireless functionality. The technology is enabled with key RF features, including core and I/O (1.5V/1.8V) transistor RF models along with 5V LDMOS devices, which simplifies RF SoC design. For passive RF devices, 28SLP-RF offers alternate polarity metal-oxide-metal (APMOM) capacitors up to 5V, deep n-well devices, diffusion, poly and precision resistors, inductors and an ultra-thick metal (UTM) layer.

    The process is fully qualified from -40°C to 125°C, and MPW shuttles run quarterly, perhaps more frequently depending on demand and urgency. GF are already working with some lead customers. The expectation is that lead customers will have prototypes available in late 2015, with production in 2016.

    The press release with more details is here.


    Automate those voltage-dependent DRC checks!

    by Beth Martin on 06-04-2015 at 10:00 pm

    Because IC design and verification never gets simpler, verification engineers now have to comply with voltage-dependent DRC (VD-DRC) rules. What does this term mean, and what new challenges does it bring to the DRC task? I’d like to share what I learned during another water-cooler conversation with Dina Medhat, senior technical marketing engineer at Mentor.

    VD-DRC rules require different spacings based on either the operating voltage on the geometries being checked, or the difference in voltages between different geometries (wires) running next to each other. Because there might be many voltage domains and voltage differentials in a modern SoC design, a designer can no longer apply just one spacing rule per metal layer.
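
As a sketch of what such a rule looks like (the breakpoints and spacing values here are invented for illustration, not taken from any real design manual), a voltage-dependent spacing check reduces to a table lookup keyed on the worst-case voltage difference between neighboring nets:

```python
# Hypothetical voltage-dependent spacing rule for one metal layer.
# Breakpoints and spacings are illustrative only, not from a real PDK.
RULES = [  # (max voltage difference in volts, required spacing in um)
    (1.2, 0.10),
    (2.5, 0.14),
    (5.0, 0.20),
]

def required_spacing(v_net_a, v_net_b):
    """Return the minimum spacing for two nets given their operating voltages."""
    delta_v = abs(v_net_a - v_net_b)
    for max_dv, spacing in RULES:
        if delta_v <= max_dv:
            return spacing
    raise ValueError(f"No rule covers a {delta_v} V difference")

def check_spacing(v_a, v_b, actual_spacing):
    """True if the drawn spacing satisfies the voltage-dependent rule."""
    return actual_spacing >= required_spacing(v_a, v_b)

# Two 1.0 V nets can sit closer together than a 1.0 V net next to a 3.3 V net.
print(check_spacing(1.0, 1.0, 0.12))  # True: 0.12 um >= 0.10 um required
print(check_spacing(1.0, 3.3, 0.12))  # False: a 2.3 V difference needs 0.14 um
```

The hard part, as the article explains, is not the lookup itself but knowing the correct voltage for every net in the layout.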

    The traditional challenge, says Medhat, is how to get the voltage information for each net to apply the appropriate spacing rule. She said customers often ask questions like, “How can we create an automated solution to this checking task?” Or more specifically, “How can we capture the voltage information without having designers add layout markers to designate them?” Before answering these questions, let’s have a deeper look at the technical problem that they’re facing.

    In VD-DRC, spacing requirements between nets are determined by the operating voltages present on the nets. But, how do you define these voltages in a layout? The best-known method is to add markers (either text layers or polygons) to the layout with the expected voltage value. Because the designer must add the correct marker manually, the process is subject to human error. If the markers are not present, or they are incorrectly placed, false violations can occur. These false errors can be very difficult and time-consuming to debug, which is a waste of time and resources. But even worse, marker errors can result in rule violations that sneak by the check, and result in device failure down the road. Moving to more complex designs and advanced process nodes, says Medhat, greatly increases the complexity of VD-DRC and the challenge of defining voltages in a layout.

    Improper rule coding or erroneous voltage markers can generate hundreds of DRC errors that need to be analyzed and debugged, and the false violations then need to be waived by the designer, which introduces even more time and overhead. Inaccurately marked layouts can also result in substandard routing optimizations if the router uses general worst-case rules, rather than rules based on the actual voltages present on various nets of the layout.

    Medhat wants to solve VD-DRC challenges with an automated flow that can propagate realistic voltage values to all points in the layout, eliminating the more fallible manual process. Mentor has worked directly with customers to build such a flow based on Calibre PERC. “The VD-DRC flow first identifies the supply voltages for the design, and then uses a voltage propagation algorithm to determine the voltages on internal layout nodes,” says Medhat. “The voltages are computed automatically based on static propagation rules, which can be user-defined for specific device types. The algorithm is applied to the netlist to identify target nets and devices needed for VD-DRC.”
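
A toy model may help illustrate the idea of static voltage propagation (this is my own sketch, not the Calibre PERC implementation; the device types and propagation rules are invented): starting from supply nets with known voltages, walk the netlist and push voltages across devices according to per-device-type rules, keeping the worst-case (maximum) voltage seen at each node:

```python
from collections import deque

# Per-device-type propagation rules, mapping the voltage on one side of a
# device to the worst-case voltage it can impose on the other side.
# These rules are invented for illustration.
RULES = {
    "switch": lambda v: v,        # pass-gate: propagates the full voltage
    "diode":  lambda v: v - 0.7,  # drops one forward-bias diode voltage
}

def propagate(supplies, devices):
    """supplies: {net: volts}; devices: [(device_type, net_a, net_b)].
    Returns the worst-case (maximum) voltage reachable at every net."""
    volts = dict(supplies)
    queue = deque(supplies)
    while queue:
        net = queue.popleft()
        for dtype, a, b in devices:
            if net not in (a, b):
                continue
            far = b if net == a else a
            if far in supplies:        # never overwrite a supply voltage
                continue
            v = RULES[dtype](volts[net])
            if v > volts.get(far, float("-inf")):
                volts[far] = v         # found a higher worst-case voltage
                queue.append(far)      # keep propagating from this net
    return volts

# A 3.3 V domain reaches net n2 through a diode, beating the 1.2 V supply path.
nets = propagate(
    supplies={"VDD33": 3.3, "VDD12": 1.2},
    devices=[("switch", "VDD33", "n1"),
             ("diode", "n1", "n2"),
             ("switch", "VDD12", "n2")],
)
print(f"{nets['n1']:.1f} {nets['n2']:.1f}")  # 3.3 2.6
```

Note how n2 ends up at a worst-case 2.6 V via the 3.3 V domain even though its nearest supply is 1.2 V; that is exactly the kind of cross-domain case a manually placed voltage marker is prone to getting wrong.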

    Because the netlist information is preserved along the entire flow, the results are context-specific, making them easy to debug. This integration between netlist, connectivity-based voltage analysis, and geometric analysis is important. Once the node voltages are computed, the tool writes out the voltage information as text markers into a separate file, which is given as an input to the Calibre tool running the DRC sign-off deck.

    This automated flow doesn’t require any changes to sign-off decks, and it generates the voltage information automatically, without requiring any manually added physical layout markers. This approach reduces both the design team workload, and the chance of missing real violations or producing false violations.

    If you’d like to learn more, Dina Medhat is presenting this work at the DAC Work-In-Progress session Wednesday evening, June 10, from 6:00-7:00 pm. Bring your questions and observations!


    Is Avago Chip Industry’s Cisco?

    by Majeed Ahmad on 06-04-2015 at 3:00 pm

    If Ford is a reference model for the value chain of the Industrial Age, Cisco is the icon of the twenty-first-century digital economy. The networking gear maker, which achieved phenomenal growth with the rise of the Internet, has been remarkably successful in snapping up and integrating scores of companies for products it could not innovate itself.

    Take Crescendo Communications, for instance, the company that Cisco bought in 1993 just to meet the needs of two of its major customers: Boeing and Ford. Eventually, the acquisition transformed Cisco from a router company into a routing-plus-switching vendor.


    Avago’s aggressive acquisitions are reminiscent of Cisco’s 1990s era of takeovers
    (Image: Avago)

    Cisco made one acquisition after another to capture intellectual assets and next-generation technologies during the 1990s. By the end of the decade, Cisco had purchased more than 50 companies to dominate the enterprise networking market. Cisco mastered the art of mergers and acquisitions, meeting immediate objectives to expand into new market opportunities. This aggressive pattern had become its modus operandi to assemble new market strategies.

    It’s pretty ironic that Cisco copied the HP model of dividing the market into small segments and leading each one of them. However, HP itself failed to respond to the web challenge while rivals like IBM and Sun Microsystems quickly identified themselves with the Internet tidal wave. In 1999, the company that more or less invented Silicon Valley decided to break itself to get more focused and nimble. The new HP would take care of computing and printers business while spin-off Agilent Technologies would oversee test and measurement operations.

    Avago’s HP Lineage

    And that fascinating technology history brings us to the talk of the town, Avago Technologies Ltd, an HP progeny that has rocked the semiconductor industry by gobbling up a chipmaker larger than itself. Avago has mounted an ambitious $37 billion takeover of Broadcom Corp., the largest acquisition in high-tech history.

    Avago has been a relatively little-known company that usually stayed out of the media limelight. That’s partly because its chief, Hock Tan, wanted to spend less on marketing. So it’s worthwhile to trace the company’s lineage back to the HP Way and see how it has managed to become the sixth largest semiconductor company in the world.


    In 2000, Agilent’s IC division created a first-generation lab-on-a-chip that integrated a large number of chemical manipulations on a single die (Image: Agilent Technologies)

    Avago’s genesis can be traced back to 1961, when HP started a semiconductor products division and began pioneering work on products such as light-emitting-diode (LED) displays, fiber-optic transmitters and optical mouse sensors. However, after HP spun off Agilent Technologies in 1999, history began repeating itself. The test and measurement business took the limelight in the new company, and semiconductor products were relegated to the backwaters.

    Consequently, Agilent, still a big piece of old HP, was ready to be split again in 2005. Private equity firms Silver Lake and Kohlberg Kravis Roberts & Co. spent $2.66 billion to acquire Agilent’s semiconductor business and created a new chip outfit: Avago. The new chipmaker—which specialized in markets such as lighting, fiber-optic gear and power amplifiers—went public in 2009 at $15 a share.

    Avago’s Acquisition Spree

    Avago has shown that it can make deals work. For example, in 2013, it acquired LSI Corp. for $6.6 billion. The purchase of LSI, a maker of networking and storage chips, allowed Avago to gain traction in rapidly growing cloud computing, web services and data center markets. Avago’s market value has tripled since it bought LSI less than two years ago.

    Emulex Corp. was Avago’s most recent acquisition before the Broadcom deal. It’s interesting to note that Broadcom had tried to buy Emulex in a hostile takeover bid back in 2009. In February 2015, Avago gobbled up the maker of network connectivity and monitoring chips for $600 million.

    Avago—itself a product of two spin-offs—first went shopping in 2008, when it bought Infineon’s bulk acoustic wave business for $23 million. The first full-fledged company that Avago acquired was CyOptics Inc., a supplier of optical chips and components for the telecom and data communication markets, which Avago snapped up for $400 million in cash.


    Hock Tan: A big job ahead of him

    For the Broadcom takeover, interesting bits have surfaced in the media, and they clearly show that Avago’s latest chip deal is more about gaining scale than anything else. Back in February this year, when the news about NXP’s acquisition of Freescale surfaced, it was reported that Avago had also been a contender for Freescale.

    There are other media reports of Avago, flush with a hoard of cash, trying to buy another chip company. The names Maxim Integrated, Renesas and Xilinx emerged in connection with Avago’s pursuit of an acquisition just before the Broadcom deal.

    The merger of Avago and Broadcom creates the second largest communications chipmaker after Qualcomm and the fourth largest wireless chipmaker after Qualcomm, Samsung and MediaTek. That’s quite a bit of scale. Now Hock Tan and his lieutenants have to show that they can execute: the “New Broadcom” operation will be much larger in scale than LSI’s.

    The new company will use the Broadcom brand once the merger is complete. The name Avago won’t be there anymore, but if Tan and his team are able to pull this off, the legacy of Avago will remain, just like HP’s DNA. The scale of Avago’s acquisitions is nowhere near Cisco’s, but there are still similarities between the two companies.

    Also read:

    Semiconductor Acquisitions will Fuel Innovation!

    End of the Road for Micrel

    Three Colorful Bytes from the NXP History

    Majeed Ahmad is author of books Smartphone: Mobile Revolution at the Crossroads of Communications, Computing and Consumer Electronics and The Next Web of 50 Billion Devices: Mobile Internet’s Past, Present and Future.


    The Transistor is the Foundation of TCAD to Signoff

    The Transistor is the Foundation of TCAD to Signoff
    by admin on 06-04-2015 at 7:00 am

    At the most basic level, semiconductor design is all about transistors. Any report on a large microprocessor or mobile application processor marvels at how many transistors it contains. Moore’s Law is all about the most economic way to manufacture transistors. Each process generation for the last decade, and looking ahead, is built around new transistor types: strained silicon, high-k metal gate, FinFET, FD-SOI, silicon nanowires, carbon nanotubes, spintronics. Outside of SoCs, memory architectures have always been about transistors too: deep-trench DRAM, vertical flash. Power (high voltage) is moving to new materials such as SiC and GaN.

    The transistor starts with the process. Technology CAD (TCAD) is the method of building up the transistor according to the process recipe and then analyzing the resulting transistor(s) to gradually converge on the desired characteristics.

    The TCAD models are too slow and unwieldy to be used directly for design, so a model extraction phase is required, whereby the results of many TCAD characterization runs are coalesced into a SPICE model that can be used in simulation and in PDKs, allowing larger designs to be undertaken.
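    To make the extraction idea concrete, here is a minimal sketch, not Silvaco's actual flow: the square-law model, parameter values, and noise level are all assumed for illustration. It condenses a synthetic saturation-region I-V sweep (standing in for many TCAD characterization runs) into two compact-model parameters, k and Vth, via a linear least-squares fit.

    ```python
    import numpy as np

    def id_sat(vgs, k, vth):
        # Square-law saturation current: Id = (k/2) * (Vgs - Vth)^2
        return 0.5 * k * np.maximum(vgs - vth, 0.0) ** 2

    # Synthetic "TCAD" sweep generated from assumed true parameters plus 1% noise.
    rng = np.random.default_rng(0)
    vgs = np.linspace(0.6, 1.2, 13)
    true_k, true_vth = 2e-3, 0.45      # A/V^2 and V, made-up values
    i_d = id_sat(vgs, true_k, true_vth) * (1 + 0.01 * rng.standard_normal(vgs.size))

    # Above threshold, sqrt(Id) is linear in Vgs:
    #   slope = sqrt(k/2), intercept = -sqrt(k/2) * Vth
    slope, intercept = np.polyfit(vgs, np.sqrt(i_d), 1)
    k_fit = 2.0 * slope ** 2
    vth_fit = -intercept / slope
    print(f"extracted k = {k_fit:.2e} A/V^2, Vth = {vth_fit:.3f} V")
    ```

    Real compact models (BSIM and friends) have dozens of parameters fit across many regions and corners, but the principle is the same: many slow physics-level simulations collapse into one fast model card.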

    The next level up is a traditional analog (or custom digital) flow. This consists of a high capacity layout editor (some cells are small, but others such as memories or flat-panel-displays are huge), a full 3D field-based extractor to get all the precise parasitics, and then accurate SPICE circuit simulation that can take the models from TCAD (or silicon characterization) and produce the usual performance data.

    One level up from there is the requirement for full power integrity analysis: current droop due to resistance (IR drop), electromagnetic effects (EM) and thermal effects (heating due to design activity). For the most extreme environments, such as space or even avionics, single-event effects and threshold drift due to dose over time may also be required.
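    The resistive part of power integrity can be illustrated with a back-of-envelope sketch; all numbers here are assumed, not from the article. A power rail is modeled as a chain of equal-resistance segments, with each cell drawing current at its tap point, so the drop accumulates toward the far end of the rail.

    ```python
    # Assumed figures for illustration only.
    n_taps = 10
    i_per_tap = 1e-3      # amps drawn at each tap
    r_segment = 0.05      # ohms per rail segment

    # Segment j carries the sum of all downstream tap currents,
    # so one tap's worth of current peels off at each segment.
    drop = 0.0
    downstream = n_taps * i_per_tap   # current entering the first segment
    for _ in range(n_taps):
        drop += downstream * r_segment
        downstream -= i_per_tap
    print(f"IR drop at the far end of the rail: {drop * 1e3:.2f} mV")
    ```

    Production IR-drop tools solve this as a huge sparse network with time-varying currents, but the mechanism is exactly this accumulation of I×R along the supply path.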

    These transistor technologies can be applied to provide solutions in a wide variety of vertical markets. The key markets where Silvaco’s suite of products is used are:

    • Displays: this includes TFT, LCD and OLED
      • Almost all manufacturers of displays use the Silvaco suite for design and most of these designs are in high-volume manufacturing.
    • Power (high voltage/current): including DMOS, IGBT, SiC and GaN
      • Recently, silicon carbide (SiC), gallium nitride (GaN) and other materials have started to gain attention. Their wide bandgap means they should outperform silicon, but that same wide bandgap poses challenges in simulation, since very high accuracy is needed.
    • Radiation and soft error reliability: SEE, SEGR, total dose
      • High-energy particles cause two big problems. First, single-event effects (SEE), including single-event burnout (SEB) and single-event gate rupture (SEGR). Second, the buildup of total radiation dose can cause threshold shifts in the transistors, which can result in long-term reliability issues.
    • Optical: including CCD, CIS, solar, silicon photonics and laser
      • The optical segment deals with the design of semiconductor devices that interact optically with the environment, either where light is an input (such as optical sensors) or where light is an output (such as semiconductor lasers).
    • Advanced CMOS process development: FinFET, FD-SOI and more advanced
      • Design of advanced CMOS processes starts with TCAD. It is too slow and expensive to run real wafers until relatively late, and much better to use rapid prototyping in TCAD to decide on an integrated flow followed by detailed simulation to determine process recipe details.
    • Analog and high-speed I/O design: PLL, ADC, SERDES etc
      • The real world is analog and as more and more of a system is integrated onto a single SoC, the requirement for high accuracy analog design in a mainstream processes becomes increasingly acute. At the same time, digital interfaces such as SERDES have to be designed as analog blocks.
    • Foundation library and memory design: standard cell, SRAM, DRAM, flash
      • One of the critical aspects of bringing a new process into production is getting the foundation IP designed (at a minimum, standard cells and SRAM memories).

    Today Silvaco brought their new website online. It reflects the emphasis on these vertical markets. The navigation is now across the top. A good place to start is to click on “Solutions Overview”; you will see the graphic above, which is clickable for both the technologies involved (down the left-hand side) and the vertical markets (across the top).


    Silvaco will be back at DAC for the first time in a decade or more, at booth 532. If you are coming to DAC, come by and see how Silvaco’s TCAD to Signoff technologies can help you in your own vertical market.

    Silvaco’s new website is here.


    MIPI Beyond Mobile, Semiwiki Blogger Paper at #52DAC!

    MIPI Beyond Mobile, Semiwiki Blogger Paper at #52DAC!
    by Daniel Nenni on 06-03-2015 at 10:00 am

    IoT and wearables: it’s fascinating to see how many articles, blogs, and comments have been posted about them during the last two years! The IoT business potential is huge, as is the number of possible applications. If we summarize the functions within a wearable system, we can count:


  • CPU: it can be a standard Microcontroller or an embedded CPU core (IP)
  • Wireless communication sub-system (ZigBee, BLE, WiFi, etc.)
  • Display device (screen)
  • Sensor(s)
  • Camera and Sound (both optional)

    What is most important for a wearable device? I would put ultra-low power consumption (or power efficiency) first, with time-to-market (TTM) a close second. The requirements can be summarized as: a successful wearable device will need internal chip interfaces designed for low power, functionally proven in silicon to accelerate time-to-market, and sized for mobile applications. Does this ring a bell?

    MIPI specifications from the MIPI Alliance have been successfully integrated into smartphones, and even low-cost phones, for years. A quick evaluation suggests that 7 to 10 billion MIPI-powered chips have gone into production in the last year. It’s tantalizing to imagine that some of these MIPI specifications could be used to support wearable applications! The paper to be presented in the DAC IP track by SemiWiki blogger Eric Esteve, “MIPI Beyond Mobile”, tries to give some answers. Just to clarify, this paper was written from the results of the “MIPI Ecosystem Survey” generated by IPnest (5 to 6 weeks of full-time work). It is not an opinion but the result of systematic research. One of the key findings is synthesized in the above table, “MIPI Specification Adoption in Mobile (phone and media tablet)”. Because MIPI specifications were originally developed to support these applications, this table is our reference, based on facts.

    We clearly see that the multimedia specifications for camera and display have the highest adoption rates. SoundWire adoption looks low, but when you consider that the preliminary specification was released only a few weeks before the survey, there is no doubt that this 3rd multimedia specification has a bright future!

    Still in mobile, the radio-frequency (RF) specifications, DigRF and RFFE, are seeing a good level of adoption. They are almost at the same level as Universal Flash Storage (UFS) and UniPro, which are used together to support the interface with MIPI-powered flash.

    Just a word about the various PHY specifications (D-PHY, C-PHY and M-PHY): to support Camera, Display, DigRF or UFS you need to implement one of these. I could try to explain how to select one PHY over another, but it would require many more words than I have left here! Just keep in mind that the lower-speed D-PHY (up to 2.5 Gtransfers/s per lane) is the most commonly used, because its cost of ownership (development or IP cost, and area impact) is lower.
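    A quick sanity check with assumed figures shows why the lower-speed D-PHY is so often sufficient: a 4-lane link at 2.5 Gbps per lane comfortably carries a 1080p60 RGB888 display stream (protocol overhead and blanking are ignored here, so this is only a rough illustration, not a link-budget calculation).

    ```python
    # Assumed link and video parameters for illustration.
    lanes = 4
    rate_per_lane = 2.5e9                 # bits/s per lane (D-PHY upper rate)
    link_bw = lanes * rate_per_lane       # raw aggregate link bandwidth

    h, v, fps, bpp = 1920, 1080, 60, 24   # 1080p60, 24 bits per pixel
    pixel_bw = h * v * fps * bpp          # payload bits/s, ignoring blanking

    print(f"payload: {pixel_bw / 1e9:.2f} Gbps of {link_bw / 1e9:.0f} Gbps "
          f"({pixel_bw / link_bw:.0%} utilization)")
    ```

    With roughly 3 Gbps of payload on a 10 Gbps link, there is ample margin for blanking and protocol overhead, which is one reason cost of ownership, rather than raw speed, usually drives the PHY choice.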

    The paper to be presented at the DAC IP Track will address the following questions:

    • Which specifications are expected to be used in IoT? In Wearable? In Automotive?
    • Can we find a correlation between these emerging segments and geographical location? Or company type (small or large, long-established or start-up)?
    • What will the impact of these emerging applications be on the IP business? And what is the impact of the emerging (Asian) chip makers targeting mobile on that business?

    Just come to IP Track session 23, where Dr. Eric Esteve will answer these questions and more. If you can’t attend DAC in San Francisco, the presentation will be posted by the DAC committee after the conference.

    I hope to see you there!