
How Magwel is Tapping Tried and True Business Strategy in Targeting ESD
by Tom Simon on 11-02-2015 at 12:00 pm

Often when a company starts out it takes a while for it to find the sweet spot in the marketplace. Very often it is feedback from existing customers and business success that can help point the way for small companies as they grow. This is just as true in EDA as it is in retailing or consumer products. For instance, Mentor Graphics, though not small at the time, became a significant player in the DRC market after building new technologies that customers responded to in a big way.

The EDA start-up Magwel has a similar story. I first met their CEO, Dundar Dumlugol, when we both worked at Cadence in the late 90’s. There he was responsible for Spectre and Analog Artist, both leading products in the analog design space. It was years later that we met again, after he had founded Magwel. Around 2008 we met at a Starbucks in Los Gatos and he explained their then-current products, which were focused on TCAD for device junctions.

At the time, with my background in P&R and digital, it was all Greek to me. However, Magwel had a good technology base that allowed them to build high-quality solvers for metal layers and device junctions. Their customers at the time were happy with the solutions. However, for Magwel to grow significantly it needed to find new markets with big growth opportunities.

At the nudging of some of their customers they explored solving more widely applicable problems, chief among them ESD protection network verification and power transistor modeling. ESD protection simulation and verification is a major challenge for chip companies, and much of the core solver and simulation technology Magwel had already developed was easily adapted to this application.

Magwel’s customers were looking for an ESD tool that would accurately predict ESD events and identify any overcurrent and electromigration issues. The alternative tools available were too slow, overly pessimistic, not truly simulation based, or simply too hard to use for debugging ESD issues.

After getting numerous test cases and correctly identifying things like parasitic device triggering, electromigration violations and other problems observed in silicon, Magwel showed how well its solution works. By taking advantage of parallel processing and using TLP data to properly simulate snap back behavior, the results could be obtained quickly and matched silicon behavior extremely well. The cost of missing these issues in design can include on-tester failures, respins and even field failures.

The present capabilities of the tool continue to lead in part due to the foundation technology developed after Magwel spun out of IMEC. Working with customers on difficult test cases and their design problems, Magwel has added support for multiple device triggering and simulation for parallel current paths during an ESD event. This is different from just substituting R values for the active devices and assuming only one current path exists as is done in some other products.

The simulator in ESDi uses the TLP IV data for each device to determine whether the device is actually triggered and what the final-state voltage drop and current are. Triggering is based on checking whether the voltage built up across the device during an ESD event exceeds the Vt1 voltage. The algorithm correctly models competitive triggering of parallel devices with different Vt1. This requires fully extracting and incorporating the interconnect resistance along the ESD paths as well. These are not simple point-to-point resistances; instead the tool uses multi-point resistances, where the current in one branch affects the effective resistance seen in another.
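To make the triggering check concrete, here is a toy sketch in Tcl (illustrative only – this is not Magwel’s actual code or data format, and the device names and values are made up):

```tcl
# Illustrative sketch only. Each device is a {name Vt1} pair, with Vt1
# taken from its measured TLP IV curve.
proc triggered_devices {devices node_voltage} {
    set fired {}
    foreach dev $devices {
        lassign $dev name vt1
        # a device triggers when the ESD-event voltage across it exceeds Vt1
        if {$node_voltage > $vt1} { lappend fired $name }
    }
    return $fired
}

# Two parallel devices with different Vt1: at 6.5 V only the lower-Vt1
# device fires -- the competitive-triggering case described above.
puts [triggered_devices {{esd1 6.0} {esd2 7.2}} 6.5]   ;# -> esd1
```

In the real tool the node voltage itself depends on the extracted multi-point interconnect resistances and on which devices have already snapped back, so a check like this would sit inside an iterative solve rather than a single pass.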

Setup is easy, with ESDi obtaining stack-up information from the foundry-supplied ITF file and reading TLP data in CSV format. Running the tool and debugging are easy too, with cross-probing from the violation list to the layout, where EM data is overlaid in color to highlight errors.

Magwel is working on solving the issues that affect a large number of chip companies and whose effects can frequently be tied back to yield, reliability and performance. This is a good strategy for growth. For my part I reconnected with Dundar earlier this year and have been working on several projects with them. I observed quite a change in the company direction since our previous meeting in 2008. Magwel seems well poised to continue to expand the scope of the issues they can address with their foundation technology.

More information on Magwel’s ESDi can be found here.


Verification with Tcl for what?
by Anatoly Myshkin on 11-02-2015 at 7:00 am

Nowadays, verification – one of the most complex stages of the SoC, FPGA, and ASIC development flow – always requires new approaches. The following is an introduction to Tcl versus (and with) SystemVerilog and VHDL, the first in a three-part series. Part 2 will be “Tcl vs Python, Bluespec” and part 3 will be “VerTcl description”.

Of course, the industry has its traditional and common way of UVM and other frameworks – too complex for quick modifications and not for everyone – especially not for programmers frustrated by using SystemVerilog, VHDL, etc. for anything except design.

Many new “programming” technologies for verification (e.g., the great Python-based Cocotb) are interesting, but they require external DPI and VPI interfacing for every simulator and do not coexist easily with existing UVM, OVM, OSVVM, etc.

However, in the enterprise, new approaches are often needed only as help for, and in addition to, the known old methods. I will try to describe my team’s experience of introducing a new verification approach – our experience with Tcl.

Just one thing up front – Tcl is not an HDL or HVL! It is a general-purpose interpreted language useful for many practical cases – especially scripting, DSL (domain-specific language) creation, testing, and verification. Tcl is very popular as an embedded EDA and CAD language across the whole industry – it is used as an automation, testing, configuration, and “glue” language – basically for all the programming tasks that verification needs too. That gives great benefits whenever it pays to work closely with synthesis/compile tools, simulators, etc. But what is actually known about it? Tcl knowledge is not as common as knowledge of C.

In short, Tcl is a powerful and expressive but very simple programming language – experienced programmers can learn it in just a few hours or days (the syntax and semantics are completely defined in 12 rules, known as the Dodekalogue). In 2014 Tcl celebrated its 25th anniversary. There are many myths surrounding it, but the facts are:

  • Interpreted language using bytecode compilation
  • Homoiconic language with metaprogramming “out of the box” (all data types can be manipulated as strings, including source code)
  • Fully dynamic, class-based object system, TclOO, including advanced features such as meta-classes, filters, and mixins
  • Cross-platform
  • Open source (based on a BSD-style license)

It is used as a testing language in many projects – e.g., SQLite, which started as a Tcl extension, relies on more than 30,000 Tcl tests.
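As a tiny taste of that homoiconicity (generic Tcl, nothing simulator-specific):

```tcl
# Code is data: build a command as an ordinary Tcl list, then run it.
set op expr
set cmd [list $op {2 + 3}]
puts [eval $cmd]                 ;# -> 5

# The same property makes throwaway DSLs cheap: here the "test plan" is
# just a flat list of signal/expected pairs driven through one loop.
foreach {sig expected} {clk 1 rst_n 0} {
    puts "check: $sig should be $expected"
}
```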

What about the main differences between general-purpose Tcl and SystemVerilog or VHDL for verification? Unlike the HDLs, Tcl works outside the simulator, through the interpreter that comes with the EDA tool. It controls the simulator by interpreting scripts built on defined commands – without compilation. It is possible to control the simulator at different levels – from batch (console) or GUI mode. But most important – it is possible to reach inside the simulator and control the DUT, or the DUT with a testing environment (UVM or another), in the same way.

Vendors’ built-in support:


  • Mentor Modelsim / Questa – Tcl-based – there are more than 350 well-documented commands in the “Command reference manual”; nearly full control of the internals is possible.
  • Synopsys VCS – Tcl embedded – full UCLI and DVE control.
  • Cadence Incisive – just one quote from the official site: “Tcl has become the de facto standard embedded command language for Electronic Design Automation (EDA) and Computer-Aided Design (CAD) applications.“

All in all – control of simulation internals, UVM support, extending commands with C/C++ code, configuration and much more are possible from Tcl – it is a powerful and universal instrument, perfect for the cases where traditional verification methods are not the clearest way to do the needed things correctly. A Tcl interpreter embedded in the simulator provides full language support to verify a bare DUT, extend existing testbenches, or work with different environments.
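For example, a minimal interactive session might look like this (a sketch using ModelSim/Questa-flavored commands; other simulators have their own command sets, and tb_top and its signals are hypothetical):

```tcl
# Minimal ModelSim/Questa-style session driven entirely from Tcl.
vsim work.tb_top                          ;# load the design
force -freeze /tb_top/rst_n 0 0, 1 100    ;# drive reset (times in default units)
run 1000ns                                ;# advance simulation
puts [examine /tb_top/dut/status]         ;# read a DUT value back into Tcl
```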

But what are the benefits of using Tcl more intensively than the simple simulator scripting/configuring it is usually used for?

  • It is possible to edit, run, stop, and run Tcl scripts again without recompiling or exiting the simulator
  • Because of Tcl’s nature it is much more expressive and productive than SystemVerilog, etc.
  • DSL support from scratch (to be discussed later)
  • Flexible creation of any data structures
  • Simple automation and improvement of static analysis
  • The easiest way to interface with EDA/CAD tools external to the simulator
  • A simple way to interact with something written in another language – a “golden model”, third-party library, driver draft, math model, etc.
  • There is no need for anything new – Tcl is already delivered within the EDA tools

Some basic figures come from “Use Scripting Language in Combination with the Verification Testbench to Define Powerful Test Cases” (Franz Pammer, Infineon).

Tcl can be used in batch mode, in DO files, or with the GUI. It is easy to use the command line or the “Tools > TCL” menu. Support information about Tcl can be found under “Help > Tcl Syntax” and “Help > Tcl Man” and in the official documentation. Here are simple Tcl examples from different vendors; more complex examples will be shown in the next parts:


  • http://pastebin.com/0zwGDgmK – basic Tcl (Mentor)
  • http://pastebin.com/YBLTj9A5 – some hardware interaction with TCL (Altera)
  • http://pastebin.com/7zpPBhQd – design interaction (Xilinx)

It is also worth knowing that any Tcl script can control itself and the whole simulation process very flexibly – handling simulator breakpoints and errors, setting default error actions, interrupting, etc. – control can always be returned to the script or the command line.
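A sketch of what that self-supervision can look like (again ModelSim/Questa-flavored; the testbench and signal names are hypothetical):

```tcl
# Keep control in the script no matter what the simulator hits.
onbreak {resume}                    ;# breakpoints return control to the script
onerror {puts "simulator error - cleaning up"; quit -f}

vsim work.tb_top
when {/tb_top/error_flag == 1} {
    # watch a DUT signal during the run; $now is the current sim time
    echo "error_flag asserted at $now"
    stop                            ;# hand control back to this script
}
run -all
```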


    Silicon Valley USPTO is Open for Business!
    by Daniel Nenni on 11-01-2015 at 4:00 pm

    Probably the best and most attended EDAC Emerging Companies event happened last week, so congratulations to Bob Smith and staff. The premise was to develop, strengthen, and protect your intellectual property, which is paramount to all emerging technology companies. Many people in this industry, including myself, have learned this lesson the hard way, absolutely.

    Remember Avant!? Yes, some code was misappropriated by a nefarious consultant, but the dirty deed did not merit the destruction of the company and its stakeholders (my opinion). I was also involved in a simple EDA contract dispute that blew up into multi-million dollar litigation. What a colossal waste of time and resources! I remember offering to split the difference (they would pay half of the contract amount) so we could shake hands and walk away with dignity. But of course lawyers got involved and it cost the opposing company 20x in settlement and legal fees. Our industry is rife with good and bad litigation, so yes, we should all pay attention here.

    First up was the Mayor of San Jose, Sam Liccardo. What a great guy! After watching the recent presidential debates I was not looking forward to meeting another politician but seriously, Sam renewed my faith in the American political system, at least at the local level for sure.

    Next up was John Cabeca, Director of the new Silicon Valley Patent and Trademark Office. Another great guy who clearly gets us and is an EXCELLENT resource for emerging technology companies:

    The Silicon Valley, known as one of the most prodigious and innovative entrepreneurial communities in the country, was selected as our West Coast presence to assist the USPTO in fostering and protecting innovation. Looking for information focused on startups? Visit our Startup Resources page.

    Take a look at the upcoming events calendar. Can you believe there is a Girl Scout Merit Badge for intellectual property!?!?!? And speed dating for start-ups? 😎

    • November 10, 2015
      Patent Quality Chat Webinar Viewing
      USPTO Silicon Valley Office
      26 S. Fourth Street
      San Jose, CA 95113
      View the Patent Quality Chat Webinar featuring Deputy Commissioner for Patent Quality, Valencia Martin Wallace in our offices. The webinar will be proctored by one of our Silicon Valley resource supervisory patent examiners.

    • November 10, 2015
      Trademark Tuesday—”Lunch and Learn”
      USPTO Silicon Valley Office
      26 S. Fourth Street
      San Jose, CA 95113
      The USPTO Trademark Assistance Center discusses resources and answers questions via webinar.

    • November 17, 2015
      Info Session: Ombudsman, Prior Art Search, and Interview Practice
      USPTO Silicon Valley Office
      26 S. Fourth Street
      San Jose, CA 95113
      Applicants and attorneys learn about programs and resources for more efficient patent prosecution.

    • November 19, 2015
      Patent “Lunch and Learn”
      Supervisory patent agents help inventors and entrepreneurs during live information sessions.

    • December 12, 2015 – Girl Scout Intellectual Property Badge Program
      Girls grades 2-8 learn about intellectual property rights in collaboration with The Tech Museum of Innovation and The Girl Scouts.

    • December 15, 2015, 12:00 p.m. to 1:00 p.m. PT
      Trademark Tuesday – Lunch and Learn
      USPTO Silicon Valley Office
      26 S. Fourth Street
      San Jose, CA 95113
      The USPTO Trademark Assistance Center discusses resources and answers questions via webinar.

    • January 26, 2016 – Patents and Trademarks 101
      The USPTO delivers intellectual property basics at the new Patent and Trademark Resource Center in the King Library in San Jose.

    • February 2016

      • Speed Dating for Startups
      • Cybersecurity Partnership Meeting
      • PTAB Rules Update

    The second half of the evening with EDAC was a panel session with lawyers. I tried to pretend to care about what they had to say but I failed and left after 30 minutes. Let’s face it, our legal system is not unlike our political system but I digress…

    Don’t forget to follow SemiWiki on LinkedIn HERE…Thank you for your support!


    Moving with Purpose for Certainty
    by Pawan Fangaria on 11-01-2015 at 12:00 pm

    In 1492 Christopher Columbus sailed west from Spain across the Atlantic Ocean in search of Asia and the Indies. Over his four voyages (1492–1502) he discovered many different islands and then what we call the Americas. Although he had a compass with him, imagine searching for a needle in a haystack. Even after localizing the areas and then searching smartly, there is still no guarantee of success.

    The modern world of semiconductor design and manufacturing is similar; there is a great deal of uncertainty between the design and the chip: what you get out of the fab may be far from what you designed. And this uncertainty gets worse with shrinking technology nodes, new transistor structures such as FinFETs, lower noise margins, and so on. Semiconductor designs need 4-6 sigma accuracy and precision to deliver the right yield.

    So, how should semiconductor design be approached for optimum yield? To check variation, should millions of Monte Carlo (MC) simulations be run for different circuit parameters at multiple process corners? Even then you may miss the actual outliers. A few years ago I wrote about Solido’s Variation Designer platform and its HSMC (High Sigma Monte Carlo) method, which is very fast, accurate, and scalable over a large number of process variables; it prioritizes simulations for the most-likely-to-fail cases and never rejects any sample that can cause failure.
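    A back-of-envelope calculation shows why brute force fails at high sigma. The sketch below uses standard one-sided normal tail probabilities and asks how many samples are needed to observe roughly ten failures:

```tcl
# Samples a brute-force Monte Carlo run needs to observe ~10 failures
# at a given sigma level (one-sided normal tail probabilities).
array set tail_prob {4 3.2e-5 5 2.9e-7 6 1.0e-9}
foreach sigma {4 5 6} {
    set n [expr {10.0 / $tail_prob($sigma)}]
    puts [format "%s-sigma: ~%.0e samples" $sigma $n]
}
# 6-sigma needs on the order of 10 billion samples - hence methods like
# HSMC that spend simulations only on the most-likely-to-fail cases.
```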

    During DAC 2015, in a panel session organized by Solido, representatives from Cypress, Applied Micro Circuits and Microsemi presented their experiences with variation issues and their solutions. Paul McLellan already covered Sifuei Ku’s views from Microsemi in one of his earlier blogs. I would like to delve into the views expressed by Cypress and Applied Micro as well.

    Dragomir Nikolic, CAD Director at Cypress, focuses on differentiating IP, where significant analog content makes the difference. He sees a major portion of SoC cycle time being invested in IP. How does he improve cycle times while applying complex sets of DOEs to designs, amid the varied experience of designers spread across different teams on different continents? This is what he calls a change in his design methodology from “Whack-A-Mole” to “Move-With-Purpose”.

    In Dragomir’s words – “What Solido was able to do for us is, allowed us to dynamically derive the corners that truly dominate your analog design. With that, our cycle times have tremendously shrunk, our quality of results has definitely reached our internal goals for 4-sigma and we have seen tremendous improvement in the overall quality of our design.” See Dragomir’s video below.

    Dragomir’s story about the designer most reluctant to change coming back after a Variation Designer evaluation and asking for a license was interesting. It says that whatever you do for performance, accuracy, quality, and the like, unless your tool’s usability is good you can’t win over the designers; you fail.

    Alfred Yeung, Sr. Director of Circuits & Technologies at Applied Micro Circuits, talked about the different parts of a design that can be sensitive to device variation in terms of functionality and performance, and how hard it is to catch the actual problems using traditional MC simulations. He presented a case study using high-sigma analysis with Solido Variation Designer. Solido needed to simulate only 7,000 samples out of 100,000 samples analyzed; and only 13,000 samples out of 10 billion analyzed in another experiment on the same case. Many outliers were seen outside the design target. The issues were observed in silicon and fixed.

    In Alfred’s words – “What we’re really interested in is what happens when you run 10 billion samples. And you see here, the circuit doesn’t scale well with a lot of samples and high sigma. And there’s a design flaw here. And you see here in this particular test case it ran up to 10 billion samples, but the simulation time is only based on about 13K samples simulated so the run times were still very reasonable for us. It was something that we were not able to do in the past, and not only did we see failures we saw big outliers. The transient scale and the value scale here is not the same scale as before. But to fit everything on the same graph you will find that large number of outliers, outside the design target.” See Alfred’s video below.

    During the Q&A session, all panelists were appreciative of Variation Designer’s usability; it has no problem switching from one simulator to another, and it is very easy to use. On a particular question about technology nodes – Variation Designer can be applied to any technology node. It is really about a variation-aware design methodology, which is applicable to 130nm and above as well as 28nm and below.

    The session also included a presentation by Jeff Dyck, VP of Technology Operations at Solido. Jeff talked about his experiences with designers who have difficulty understanding variation issues and which corners to run, and who face a time crunch for identifying and fixing issues. Most of them end up overdesigning, which was found to be the biggest issue in a survey conducted by Solido. He then talked about the challenges in moving from old tools to new ones, support issues, and so on. Jeff stressed Solido’s deeply specialized, talented, and immediate support. See Jeff’s video below for more interesting details from the survey and from his interactions with designers.

    All videos and their transcripts are here.
    Also read:
    Replacing the British Museum Algorithm
    High Yield and Performance – How to Assure?

    Pawan Kumar Fangaria
    Founder & President at www.fangarias.com


    How to Live with Rapid Changes During Early Development of IP
    by Tom Simon on 10-30-2015 at 4:00 pm

    Best practices call for using a version control system with systematic releases when developing IP. However, in the early stages of IP development, a rigid version control system with a cumbersome release process can hinder productivity. To fully understand how this works, we should start by defining what we mean by IP.

    The term IP is used broadly in IC design. It can refer to libraries, memory blocks, or portions of a design coming from internal or external organizations. Furthermore, blocks developed using IP may themselves become IP for higher level design. In this case the lower level IP are referred to as IP resources.

    Methodics has thought through the implications of developing and using IP early in a project’s lifecycle. This stage is characterized by rapid changes and short-loop iteration. If a release had to be created each time a work-in-progress change was made early in the development process, the entire workflow would slow down. To solve this problem Methodics supports a specification that uses the most recent version of the files to populate a workspace. This is known as IP@HEAD.

    There are a number of subtleties in its usage. The key point is that loading a workspace with IP@HEAD will pull in the then-current versions not only of the IP itself but also of the resources the IP relies on. Additionally, if the IP resources are themselves on an orderly release system, as opposed to @HEAD, then the most recent release is what gets loaded. A release update of an IP resource will update any user workspace that is using @HEAD.
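    A toy sketch of that resolution rule (purely illustrative – this is not Methodics’ actual API or configuration syntax):

```tcl
# Toy version-resolution rule for populating a workspace, illustrative only.
# A resource tracked at HEAD gets the latest checked-in files; anything
# else gets its most recent formal release.
proc resolve_version {track releases} {
    if {$track eq "HEAD"} {
        return "HEAD"
    }
    return [lindex $releases end]
}

puts [resolve_version HEAD {}]                 ;# -> HEAD
puts [resolve_version RELEASE {1.0 1.1 2.0}]   ;# -> 2.0
```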

    The history of design management in the IC space is a long and checkered one. It is definitely a different problem than source code revision control. Designers have balked at systems that impose restrictions on their way of working. The cost-benefit trade-off is most difficult to justify at the beginning of a project, especially if the design management system imposes unneeded and inefficient processes.

    Later, as a design matures, the benefits are more readily apparent, and design management provides some huge wins for reliability, quality, repeatability and traceability. When a design starts by using IP@HEAD for its workspaces, a release methodology can be put in place as the design matures, so there is less uncertainty as functionality is stabilized.

    Methodics has built a sophisticated IC-centric data management system. One of the pluses of their system is that it is based on industry-standard revision control systems like Perforce and Subversion. This means that the design data will never be locked in. Their infrastructure also enables multisite sharing, disaster recovery and design progress analytics.

    If you want more information on using IP@HEAD in Methodics, please look here. For more general information about Methodics their website offers useful information.

    Follow the adventures of SemiWiki on LinkedIn HERE!


    How Can Big Data and EDA Tools Help?
    by Daniel Payne on 10-30-2015 at 12:00 pm

    Big data is a headline phrase that I see appear almost weekly now in my newsfeed, so it’s probably time that I start paying more attention to the growing trend, because it does impact how technology-driven EDA tool flows are being used. From my last trip to DAC I recall only two companies that were really focused on system-level design, which is a ripe application area to start mining and controlling big data while building an electronics-based system. Michael Munsey is an expert in system-level design and works at Dassault Systemes, so I went back to a conversation I had with Michael in August about EDA and big data.

    Q: As an overview, just what does Dassault have to offer?

    So at Dassault Systemes we’re known as the 3D experience company. We started off as a 3D CAD company, then brought in other pieces of technology like a PLM system and some multiphysics modeling solutions, and we’ve grown over the years to encompass twelve brands across twelve different industries.

    In the past couple of years we’ve been really focusing on the semiconductor industry, because we’ve seen that there are a lot of system-level issues there that have already been solved in other industries that we serve – industries like transportation and mobility, automotive, and aerospace and defense. We have solutions that have worked very well for those industries for many, many years. We’re taking those solutions and creating a semiconductor solution based on this proven technology, and now we’re rolling it out to the semiconductor industry.



    Q: What is happening with big data and EDA tool flows today?

    The biggest challenge right now is that data is only as good as what you have available. Now we’ve seen a lot of people talking about capturing design data and performing big data analytics on it. That works, provided that you have all the data.

    What we’ve seen, however, is that it’s very difficult for companies to put processes in place to capture everything that they need to capture. Design companies have multiple tools in the EDA flow chain in different domains, and you also have system tools. You have functional verification tools, synthesis tools, physical design, custom IC design. There are a lot of engineers involved sharing a lot of information, and there are often no structured processes to actually capture all of that data, so ultimately you will not have a full data set to get good analytics from.

    The second challenge is that it’s not just about the design data and the design results. You’ve got to think about the entire ecosystem: a semiconductor company has product engineering that sets up project schedules and requirements systems and handles issuing defects across entire systems. You have manufacturing teams, so if you’re a fabless company you have teams that interface with the foundry, while an IDM will have its own manufacturing information as well. Getting the true analysis that you need requires a comprehensive view that looks at all of this data, to really be able to do predictive analysis on the problems you’re trying to solve.

    So the largest problem that we see is that it’s a great goal, but without the methodology and the systems in place to capture all of that data, it’s going to be only somewhat effective. What we’re focusing on at Dassault, again drawing on other industries, is that we’ve been putting processes in place – design processes and manufacturing processes – that give us a great methodology, and we are capturing all of this data now. We’re looking at ways of bringing this approach to the semiconductor industry, and we’re starting right now through a piece of our technology called requirements-driven verification. This basically captures the whole design and verification process.

    We can automate the capture of this data now, and the very first type of analysis that we’re doing on it is what we call a Decision Support System. Imagine that you’re doing physical IC design and you’re trying to close timing and power: typically you might run ten or twelve different P&R experiments with different constraint files, making subtle tweaks to the design. You then get a bunch of results out, but it’s often like pushing on a balloon – if you improve one area, another area gets worse; then you look at the next result, and while you fix the one that got worse, the one you thought was fixed gets worse. The minute you have more than two tests to look at, the problem becomes exponentially more difficult to solve.

    In our SIMULIA brand there are predictive analysis techniques that look at multiple groups of tests and analyze which inputs generated which outputs, and with what desired results. This can begin to guide you in a certain direction: it can look across all your design constraint files and tell you that if you take these design constraints and couple them with the design constraints from that other test, you begin to assemble a view that leads you down the right path. We’re looking first at this very specific functional process to begin making improvements in the overall IC design process to achieve design closure.
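    To make the idea concrete, here is a toy sketch (hypothetical metrics and weights – the SIMULIA techniques are of course far richer) of scoring P&R experiments across objectives instead of chasing one metric at a time:

```tcl
# Hypothetical P&R experiments as {name worst_slack_ps power_mw}.
set experiments {
    {run1 -20 105}
    {run2   5 118}
    {run3  12 131}
}
set best ""
set bestScore -Inf
foreach e $experiments {
    lassign $e name slack power
    # one combined objective: reward slack, penalize power (made-up weight)
    set score [expr {$slack - 0.5 * $power}]
    if {$score > $bestScore} { set best $name; set bestScore $score }
}
puts "start from the constraints of: $best"    ;# -> run3
```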

    The very first step that we’re looking at right now is very much at the role level: providing the analysis capabilities to take the data analysis problem, make it a lot simpler, and provide some intelligence at that level.

    Q: What is the future of using big data inside of EDA tool flows going to look like?

    The obvious next step is to break out from the role level and look across the entire development platform – and by development I mean everything from system design planning through manufacturing. Once you’ve begun to capture enough data you’re able to do a different level of analysis, so it moves from how to make my job better to how to make the entire system design process better.

    If you have a project where you already know what the design results are, then you also know who worked on the project, so you start to have a notion of how good design teams are. You also start bringing in scheduling information that allows you to make predictions of how design teams perform against certain schedules. We can bring in issue and defect data and see how many errors certain design teams have made versus others, so you start building a much larger picture for product planning. From design to manufacturing, you’re now able to really start predicting schedules at the beginning of a new design. What are my chances of achieving this schedule? Where are the problems going to be?

    When you look at that data over time, you start seeing issues pop up along the way. Well, if I need extra resources now to work on this project, which teams have done similar projects in the past? Should I apply this engineer to help fix this problem? How is that going to impact the other projects I have going on, from which I might start borrowing people? So it goes from a role-based analysis to a system analysis across a whole design team, and then across multiple design teams in a whole company.

    Q: On a personal note, I understand that you’re both an engineer and a musician. How did that come about?

    It’s a right brain, left brain issue. You know, after spending your days doing pure analysis, analytics and thinking about design problems then everybody needs a creative outlet. Music is one of the best ways to be creative for me. I like to be as creative as possible and let the other side of my brain work. I love it.

    Follow the adventures of SemiWiki on LinkedIn HERE!

    Related Blogs

    Enterprise Design Management Comes of Age

    Talking Directly to EDA R&D

    A Systems Company Update from #52DAC


    Semiconductor market going negative?
    by Bill Jewell on 10-30-2015 at 12:00 am

    The outlook for the semiconductor market for the remainder of 2015 is mixed. Intel’s 3Q 2015 revenues were up 10% from 2Q 2015 and guidance is for 2.3% growth in 4Q 2015. Most other companies which have announced 3Q 2015 results expect revenue declines in 4Q 2015. The most severe drops (based on the midpoint of company guidance) are -16% from NXP and -13% from Freescale. NXP is in the process of acquiring Freescale, with the deal expected to close by the end of the year. Some of the weakness in the NXP and Freescale 4Q 2015 outlooks may be due to customers being cautious and waiting to see how the acquisition works out.

    Key Semiconductor Company Revenue
    (change versus prior quarter in local currency)

    Company              Reported 3Q15   Guidance 4Q15
    Intel                    10%             2.3%
    Samsung                  14%             n/a
    SK Hynix                  6.2%           n/a
    Micron Technology        -6.6%          -3.5%
    Texas Instruments         6.1%          -6.7%
    STMicroelectronics       -1.0%          -6.0%
    NXP                       1.1%         -16%
    Freescale                -6.5%         -13%
    The latest forecasts for 2015 semiconductor market growth range from a 1% decline (VLSI Research and IC Insights) to 1% growth (our latest forecast at Semiconductor Intelligence). The outlook for 2016 ranges from a negative 0.5% from financial services company Credit Suisse to a positive 6.0% from Semiconductor Intelligence (SC IQ).


    Earlier in 2015 analysts were more optimistic with several forecasts for semiconductor growth of 7% or higher. Our March 2015 forecast at Semiconductor Intelligence was 8%. As the year developed, it became apparent the global economy and electronics end markets were weaker than expected. The chart below compares forecasts for 2015 and 2016 from Gartner released in March 2015 and September 2015 for PCs + tablets and for mobile phones. The most significant change was 2015 PC + tablet shipments dropping from 0.4% growth in March to a 9.3% decline in September. The 2015 mobile phone forecast dropped from 3.5% in March to 1.4% in September. 2016 forecasts changed less, with PCs + tablets dropping from 6% to 1.8% and mobile phones dropping from 3.8% to 2.9%.


    The 2015 global GDP forecast from the International Monetary Fund (IMF) also dropped in the last six months from 3.5% in April to 3.1% in October. The 2016 forecast dropped moderately, from 3.8% to 3.6%. Our semiconductor market forecast at Semiconductor Intelligence went from 2015 growth of 8% in March to 1% in September. Our 2016 forecast has decreased only slightly, from 7% in March to 6% in October.

    Why is the change in 2016 forecasts not as significant as the change in 2015 forecasts? A skeptic might say analysts do not alter their forecasts until actual data begins to prove them wrong. We are not quite that cynical. The IMF forecast for 2015 dropped from 3.5% to 3.1% primarily due to slower than expected growth in the U.S., a deeper than expected downturn in South America, slower growth in the Middle East because of falling oil prices, and growth slightly below expectations in India and Southeast Asia. Despite all the concern in the media about slowing growth in China, the IMF’s October forecasts of 6.8% growth in 2015 and 6.3% growth in 2016 are unchanged from April. Conditions still point to GDP growth acceleration in 2016. Despite slower growth in China, the advanced economies (including the U.S., Europe and Japan) should improve in 2016. Russia and South America are expected to begin recovering from 2015 recessions. India and Southeast Asia GDP growth should accelerate in 2016.

    We at Semiconductor Intelligence feel confident about reasonable growth in the semiconductor market in 2016. Although there are some areas of concern in the global economy, signs point to better conditions in 2016. The key end markets for semiconductors (PCs, tablets and mobile phones) are also expected to improve in 2016.


    Is This a Dagger Which I See Before Me?
    by Bernard Murphy on 10-29-2015 at 4:00 pm

    Macbeth may have been uncertain of what he saw but, until recently, image recognition systems would have fared even less well. The energy and innovation put into increasingly complex algorithms always seemed to fall short of what any animal (including us humans) is able to do without effort. Machine vision algorithms have especially struggled to be robust to distortions, different lighting conditions, different poses, partially obscured images, low quality images, shifts and many more factors. Macbeth’s dagger might have been recognized face-on in ideal lighting but probably not in this troubled vision.

    The problem seems to have been in the approach, a sort of brute-force attempt to algorithm our way to recognition. A different research path asked if we could model recognition on how we see and especially how the visual cortex maps images to recognized objects. First the image is broken up into small regions. Pixels within each region are tested against a function to detect a particular feature such as a diagonal edge. The function is simple – a weighted sum of the inputs, checked against a threshold function to determine if the output should trigger. Then a second feature test, now for a different feature (perhaps color), is applied again across each region in the image. This process repeats across multiple feature tests, as many as 100 or more.

    All of these outputs are fed into a second layer of neurons. The same process repeats, this time with a different set of functions which extract slightly higher-level details from the first level. This continues through multiple layers until the final outputs provide a high-level characterization of the recognized object. Implement that in a compute engine of some kind and you have a Convolutional Neural Network (CNN). What the CNN is good at recognizing is determined by the weights used in each feature test; these are not set by hand but rather by training the CNN on sample images. Which is as it should be, since this is mimicking functions of the brain.
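    Stripped to its essence, a single feature test is just that weighted sum plus a threshold. A toy sketch (made-up weights; a real CNN learns them from training data) for one 3x3 region:

```tcl
# One "feature test": weighted sum over a small pixel region, thresholded.
# These hand-written weights respond to a bright diagonal in a 3x3 patch.
proc feature_fires {pixels weights threshold} {
    set sum 0.0
    foreach p $pixels w $weights {
        set sum [expr {$sum + $p * $w}]
    }
    return [expr {$sum > $threshold}]
}

set patch   {0.9 0.1 0.0  0.1 0.8 0.1  0.0 0.1 0.9}
set weights {1.0 -0.5 -0.5  -0.5 1.0 -0.5  -0.5 -0.5 1.0}
puts [feature_fires $patch $weights 1.0]    ;# -> 1 (the feature triggers)
```

    A real network applies hundreds of such tests across every region of the image, with the weights set by training rather than by hand.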

    CNNs were something of a backwater in vision research until 2012 when a CNN-based solution beat all comers in a widely-respected image recognition challenge. Since then almost all competitive work in this area has switched to CNNs. These are now achieving correct detection rates (CDRs) in the high ninety percent range, besting not only other solutions but also human recognition in some cases, such as identifying species of dogs and birds.

    Cadence has implemented a CNN on the Tensilica Vision P5 DSP. They used as their reference a test known as the German Traffic Sign Recognition Benchmark, a small sample of which is shown here. This should give you some sense of the recognition challenge: low lighting, glare, dappled lighting, signs at angles, signs barely in focus – these are fully up to the limits of our own ability to recognize. Cadence was able to achieve CDRs of over 99%, and nearly 99.6% with a proprietary algorithm, which beats all known results to date. They have also demonstrated with this algorithm the ability to trade off a small compromise in accuracy for greatly reduced run-times.

    The Tensilica Vision P5 DSP is pretty much ideal for building CNNs. As a DSP, multiply-accumulate instructions (for all those weighted-sum calculations) are native. It supports high levels of parallelism through a VLIW architecture and the ability to load long words from memory every cycle, so multiple image regions can be processed in parallel. And it has many other features which support the special functionality required by CNNs. All this is good, but the results ultimately testify to the strength of the solution. Running the Cadence algorithm, this approach is able to identify 850 traffic signs per second. For pedestrian and obstacle recognition, and as we progress towards greater autonomy in cars, that kind of quick reaction time is critical.

    No complete vision system will be implemented solely with a CNN. A complete system must first identify areas in the image to which recognition should be applied, then recognize, and finally provide guidance based on what has been recognized. This requires an architecture adept across a wide range of functions, supporting a rich set of operations and multiple data types, and balanced to support traditional vision algorithms as well as CNNs – demands for which the Vision P5 solution is well suited.

    I find it interesting that Cadence is investing heavily in the software part of this solution. Rather than run benchmarks based on open-source software, they’ve built their own software and have gone deep enough to produce a best-in-class CNN algorithm. Where they take this next should be interesting to watch. To learn more about the Tensilica Vision P5 DSP and the Cadence CNN algorithm, click HERE.

    If Macbeth had access to this technology, events might have taken a different though less poetic and certainly less dramatic turn:
    Is this a dagger which I see before me? Let’s check this gizmo. No – definitely not a dagger. Well, can’t argue with technology. I’ll just have to tell the wife it’s off. I’m not going to kill the king.

    More articles by Bernard…