
Changing Trends at the Top of Semicon Space
by Pawan Fangaria on 05-31-2015 at 5:00 am

As the semiconductor market has slowed from a CAGR of ~9% over the last three decades to ~5% in the current decade, it’s time to check the realities. It can certainly be argued that a 5% CAGR on a solid base of ~$378 billion is good enough; in my view it’s a sign of maturity in the semiconductor market. At the same time, emerging businesses such as IoT touch several other markets, including automotive, medical, and the home. These businesses can bring new life to the semiconductor industry, but who will the winners be? Generally speaking, it is at the maturity stage of an industry that new players emerge, either to take on existing businesses or to introduce new products and technologies that change the landscape. Mergers and acquisitions between existing players also pick up. We are seeing all of this happening in the semiconductor space these days.

I reviewed IC Insights’ report on the top 20 semiconductor suppliers, ranked by 1Q 2015 sales. The report shows some interesting data that hints at the changes we may see in the top rankings of semiconductor companies in the near future.

The report covers 7 companies in the US, 3 in Taiwan, 2 in South Korea, 4 in Japan, 3 in Europe and 1 in Singapore. The group of top 20 companies includes pure-play foundries, fabless companies and IDMs. We saw their 2014 results earlier, but the changes between 1Q 2014 and 1Q 2015, shown in the 1Q2015/1Q2014 column of the table, reveal a lot about changing trends.

Sales of the top 20 companies increased by 9% on average, compared to 6% for the overall semiconductor industry. What is interesting is that of the 7 US companies, 6 grew sales by less than 7% (Intel was flat at 0%); only GlobalFoundries posted a 21% increase. On the other hand, 5 companies outside the US grew sales by more than 20%: TSMC at 44%, SK Hynix at 25%, Avago at 24%, Sony at 26%, and Sharp at 62%.

Japan-based Sharp made a dramatic entry into the top 20 semiconductor companies with a whopping 62% sales increase, riding on its success in the CMOS image sensor market. The other new entrant is Taiwan-based pure-play foundry UMC. Who was pushed out of the top 20? US-based Nvidia and AMD. It’s also interesting to note that Taiwan-based MediaTek has entered the top 10.

Digging further into the data, the 7 US companies accounted for $116,909 million of sales in 2014, i.e. 45% of total top-20 sales, whereas their 1Q 2015 sales of $27,492 million were less than 43% of total top-20 sales for the quarter.

On the other hand, adding up the numbers for the 3 Taiwan-based and 2 South Korea-based companies, they account for $90,454 million, i.e. 35% of the top-20 total in 2014, and $23,643 million, i.e. ~37% of the top-20 total in 1Q 2015.
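For the arithmetic-minded, here is a quick cross-check of these shares in Python. The top-20 totals are backed out of the quoted US percentages rather than taken directly from the IC Insights report, so treat the output as approximate:

```python
# Cross-check of the regional share figures quoted above
# (sales in $ millions, as cited in this article).
us_2014, us_1q15 = 116_909, 27_492        # 7 US-based companies
asia_2014, asia_1q15 = 90_454, 23_643     # 3 Taiwan + 2 South Korea companies

# Back out the implied top-20 totals from the stated US shares (~45%, ~43%).
total_2014 = us_2014 / 0.45               # ~$259,798M
total_1q15 = us_1q15 / 0.43               # ~$63,935M

print(f"Taiwan+Korea share, 2014: {asia_2014 / total_2014:.1%}")   # ~34.8%
print(f"Taiwan+Korea share, 1Q15: {asia_1q15 / total_1q15:.1%}")   # ~37.0%
```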

Clearly the sales trend is pointing upward for the East Asian companies and downward for the US companies. Mergers now underway will also change the equations among the top 20 leaders of the semiconductor industry: NXP combined with Freescale will push Europe into the top 10, while ST sits just outside the top 10 as of 1Q 2015. Other combinations could similarly reshuffle the top-10 and top-20 rankings.

It will be interesting to watch the top 20 list through 2015 and 2016. Read the IC Insights report here.

Also read “30+ Years of Semiconductors – The base matters!” to check the semiconductor sales trend over the last 30 years, and “Look who is Leading the World Semiconductor Business” to compare the 2014 list against that of 2013.

Pawan Kumar Fangaria
Founder & President at www.fangarias.com


Semiconductor Acquisitions will Fuel Innovation!
by Daniel Nenni on 05-30-2015 at 7:00 am

Has the semiconductor world gone acquisition crazy? It certainly seems that way, with more than $60B in M&A activity that may now include Altera. We are probably getting close to the 80/20 rule, where 80% of semiconductor revenue is generated by 20% of the companies. That’s not far off from where we were 25 years ago, when the fabless semiconductor transformation began. It really is deja vu all over again.


Some say the winners in all of this are the investors, and the losers are the thousands of semiconductor professionals who will lose their jobs due to the consolidation. I say we are all winners because of the changes we are starting to see in design enablement that will spawn a new generation of fabless semiconductor companies. The change that interests me most is the application of big data and portals to semiconductor design and manufacturing by the ASIC companies. Think Uber for semiconductor design.

Let’s face it, in order to usher in the next generation of fabless semiconductor companies we will have to make it cheap and easy to go from RTL to a finished chip from anywhere in the world. In the early days of the ASIC business model in Silicon Valley you could literally go from a design on a cocktail napkin to a finished chip. Things are a bit more complicated now since we rely on third party IP and have to squeeze every watt of power possible out of our designs. There is also the problem of getting a design through a foundry with a minimal amount of time and money.

Fortunately, that is what today’s ASIC companies do for a living. The ASIC business is very competitive and margin-centric, so not only do they have to get it right the first time, they also have to optimize the hell out of the chip so that it sells millions of units and earns the appropriate ASIC ROI. After delivering many millions of chips across a broad spectrum of applications, the level of internal automation at an ASIC company is like nothing you have ever seen, absolutely.

And now ASIC companies are opening up this level of automation with big data attached to the world through online portals. It really is exciting, especially for the thousands of semiconductor entrepreneurs that will be affected by the megamergers of late and the tidal wave of IoT designs that are coming our way.

Also Read: eSilicon Lyfts Its Game

As you have probably read, eSilicon is leading this effort with their new STAR Online Design Virtualization Platform. STAR is an automated secure portal that provides a Self-service, Transparent, Accurate, Real-time experience from IC design through volume ASIC production, thus the name STAR. You can visit the eSilicon STAR landing page for more information and I strongly suggest you do: REACH FOR THE STARs HERE!


Synopsys Software Integrity: Find All the Bugs
by Paul McLellan on 05-29-2015 at 9:30 am

A couple of days ago Synopsys announced that it is acquiring Quotium’s Seeker product, an interactive application security testing (IAST) tool. Synopsys is acquiring the product and the R&D team, not the whole of Quotium. Seeker is a pioneering IAST solution that helps businesses find high-risk security weaknesses while fostering collaboration between development and security teams. It exposes vulnerable code and ties it directly to business impact and exploitation scenarios, providing a clear explanation of risks.

It is just over a year since Synopsys first moved into the software quality and security space with the acquisition of Coverity. They recently renamed the group the Software Integrity Group.

Subsequently they acquired Codenomicon, a Finnish company well-known and highly respected in the global software security world with a focus on software embedded in chips and devices. They are also famous for having independently discovered the infamous Heartbleed bug last year while improving a feature in their tools.

At one level you can argue that Synopsys’ EDA product line has very little to do with software security and quality. Even though some companies show up as customers for both product lines, typically the teams designing SoCs and the teams creating the software to run on them are separate. Not just separate engineers, but separate purchasing arrangements, separate environments, separate budgets. There are also lots of companies (think banks, for example) who create a lot of software but don’t do chip design at all.

Software quality and security is a growing market since software is getting into more and more life-critical and security-critical areas. If your smartphone crashes it is annoying. If your ABS braking system crashes then maybe you do too. And if your heart pacemaker crashes then no good will come of it.

On the security side, you have to have been living under a rock for the last couple of years not to realize how important security is. It is clear that security requires a multi-layered approach involving both hardware and software, so those separate groups are perhaps not so separate after all. It still seems hard to get companies to invest heavily in security, but the stakes are very high: Target’s well-known security breach cost it hundreds of millions of dollars, and the penetration actually happened through an air-conditioning contractor, not the first place that springs to mind.

I think the really big possibility for Synopsys is not just that these are attractive, fast-growing markets. There is a real possibility of using some of the techniques we use for semiconductor design to strengthen software design. Semiconductor design is very different from software, of course: we don’t get to run more than one or two versions of a design through a fab/foundry, since each run costs millions of dollars. But from a risk point of view they are not so different. An undetected error in either case can have a huge business impact, much greater than the cost of doing the design in the first place.

Two areas that seem to offer a lot of synergy are:


  • Simulation and software testing are similar: you can never do enough, you can waste a lot of time if you don’t have a good test strategy, you can never be sure when you are done, and you need to manage big server farms and big suites of tests.
  • Formal techniques prove useful properties. In the hardware world, for example, we can prove that nobody can access the encryption keys except in a special security mode. That is just the sort of thing we want to do in software too: prove that it is impossible for a user to get kernel access to some security feature, for example (see the sketch below).
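To make the second bullet concrete, here is a toy version of such a proof using the open-source Z3 SMT solver’s Python bindings (pip install z3-solver). The one-line “design rule” and the property are invented for illustration; real formal tools work on RTL or source code, but the unsat-means-proved logic is the same:

```python
# Toy property proof with the Z3 SMT solver (pip install z3-solver).
# We assert the design rule plus a violation of the property; if that
# combination is unsatisfiable, no state can violate the property.
from z3 import Bools, Implies, And, Not, Solver, unsat

secure_mode, key_access = Bools('secure_mode key_access')

# Hypothetical design rule: the key bus is only enabled in secure mode.
design_rule = Implies(key_access, secure_mode)

# Property violation to refute: key access outside secure mode.
violation = And(key_access, Not(secure_mode))

s = Solver()
s.add(design_rule, violation)
if s.check() == unsat:
    print("Proved: keys are unreachable outside secure mode")
else:
    print("Counterexample found:", s.model())
```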

There is no solid guarantee that these synergies will prove enough to drive Synopsys’ software integrity business to a much higher level than if it were run completely independently. But as the cost of getting things wrong goes up, so does the value of being as sure as possible that all problems have been found, and so does the investment companies will be prepared to make to ensure software integrity.


  • "Cook’s Law" supersedes "Moore’s Law"-its impact on Apple, Samsung, TSMC & Intel

    "Cook’s Law" supersedes "Moore’s Law"-its impact on Apple, Samsung, TSMC & Intel
    by Robert Maire on 05-29-2015 at 7:00 am

Apple drives the semi industry harder than Wintel ever did: Is winning Apple’s chip business a Pyrrhic victory? Is 14nm done before it starts? Is its life too short to be profitable?

    Chips marching to an Apple cadence…

In the “old days,” when Wintel ruled the roost, it drove semiconductor spending cycles with new versions of Windows that stimulated unit volumes of PCs, and thus chips.

New versions of Windows did not specifically demand or require new technology nodes for Intel processors, which were released at the standard “Moore’s Law” cadence. Windows releases and Intel technology nodes were not interdependent and were relatively loosely linked: it was “nice to have” if new processors came out at the same time as a new version of Windows, but it wasn’t a “must have.”

    In today’s world, a new version of the iPhone can’t be released unless a new processor is inside to drive it to new heights. The product, processor and software are inextricably linked.

Given that Apple drives the train with its seasonal fall rollout of a new iPhone every year, everybody who supplies Apple (which means semiconductor suppliers) has to be on board or be left behind at the station. In essence, Apple is setting the schedule for each semiconductor technology node rollout, not the semiconductor industry itself or Moore’s Law, as was previously the case.

Apple is imposing a schedule on its suppliers that may differ from a “natural” cadence, and that likely hurts those who are forced to follow.

    Is 14nm done before it even starts?

We are amazed by the level of BS that competing players in the industry are throwing around about 14nm, and now 10nm. Both Samsung and TSMC are pushing competing press releases about 10nm in 2016 before the ink is even dry on 14nm orders.

Going by what’s in the trade rags and around the industry, we have moved on so quickly from the issues of 14nm and FinFETs to who has the lead at 10nm that it makes my head spin. Apple won’t have a product out until the fall, and we have already started to talk about who will win the A10 for 2016.

If we believe the hype (and I’m not sure whether we do or not…), it sounds like 14nm will be another “lite” node, much like 22/20nm was, the last “good” node being 28nm. There are those in the industry, however, who say that 10nm will be another “good” node like 28nm.

It feels like 14nm is already “old news,” the PR wars and jockeying for position at 10nm are even more severe than they were at 14nm… and who does all this benefit? Apple.

Is Apple’s chip business a “loss leader”?

When you take into account the massive effort to ramp, the less-than-ideal yields, and the competitive positioning needed to win Apple’s business, it’s likely not very profitable at the end of the day.

One of the main reasons we suggest this is that the cost of manufacturing semiconductors is primarily the amortization of manufacturing costs over as many years and products as possible. If Apple forces chip makers to move on before they get a chance to amortize the cost of the equipment and R&D needed to reach a technology node, how do you make money? Certainly not on Apple. The only way is to amortize that cost on the backs of trailing-technology customers, and no one wants to pay up for what is perceived as trailing-edge devices.
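To see why a compressed node life hurts, consider straight-line amortization of the fab investment over the wafers a node ships; the numbers below are hypothetical, purely to illustrate the squeeze:

```python
# Hypothetical illustration of the amortization squeeze described above.
FAB_COST = 8_000_000_000        # assumed leading-edge fab + R&D cost ($)
WAFERS_PER_MONTH = 50_000       # assumed output

def depreciation_per_wafer(useful_years: int) -> float:
    """Straight-line depreciation charged to each wafer shipped."""
    total_wafers = WAFERS_PER_MONTH * 12 * useful_years
    return FAB_COST / total_wafers

print(f"5-year node: ${depreciation_per_wafer(5):,.0f} per wafer")  # ~$2,667
print(f"2-year node: ${depreciation_per_wafer(2):,.0f} per wafer")  # ~$6,667
```

Cutting the earning life of a node from five years to two multiplies the capital charge each wafer must carry by 2.5X, which is exactly the margin squeeze described above.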

We think Apple has made it a more dangerous, potentially much less profitable game by both compressing the technology nodes and forcing them to its own cadence.

    Cook’s Law..
    “Supplier competition goes up exponentially with each new supplier or technology node added”

    The semiconductor industry may be just as much a slave to Apple’s whims as are the Apple slaves at the Foxconn factories in China. Walmart may have a million employees in the US but Apple has more if you count suppliers globally.

If you are going to be a slave, at least be a high-priced slave. We have a hard time seeing the semiconductor industry getting better profitability out of Apple given the current competitive supplier dynamics.

We don’t see this changing soon, as neither TSMC nor Samsung is likely to drop out of the race. Maybe Apple kicks Samsung to the curb again just to remind it of its place as a supplier, but it will keep coming back. Maybe GlobalFoundries has the right idea, currently working on Qualcomm 14nm parts rather than the Apple A9; maybe they figured out it was a bad game to play, or maybe they were just too late. Apple has been the maestro of playing its suppliers, and it continues to write the rules and set the standards.

    Can equipment companies win?

One would think that with technology nodes coming fast and furious, equipment companies would be rolling in orders, but that is obviously not the case. So where is the disconnect? Business is good but not great on the foundry side of life. Could it be that chip companies recognize that we have “lite,” relatively short-lived technology nodes and are spending accordingly, not wanting to invest too much in a node that’s over as soon as it starts? Could it also be that equipment for older nodes can be rolled over into new nodes and “reused” more quickly, since not as much capacity is needed at trailing nodes as used to be the case in the past?

Even given these two factors, it’s still going to be hard not to spend incremental money when you start talking about quadruple patterning at 10nm and below: lots of etch and deposition tools, and lots of stuff to go wrong, needing yield management. EUV is nowhere to be seen at 10nm, and 7nm may be “iffy.”

    Likely positive WFE spend trends at 10nm…
If 10nm turns out to be more than the “lite” 22/20nm node, or what looks like a “lite” 14nm node, that would obviously be good for the likes of Lam, AMAT & KLAC, and less so for ASML.

As far as the stocks go, we remain positive on Lam and KLAC, feel that AMAT is fully valued, and see ASML as overvalued, based on these longer-term trends. These should be interesting topics at the upcoming SEMICON West show…

    Robert Maire
    Semiconductor Advisors LLC


    Also Read:
    Why does Apple do business with Samsung?


    Virtual HIL and the 100M LOC car
    by Don Dingee on 05-28-2015 at 7:00 pm

    Aerospace and defense applications have traditionally leveraged hardware-in-the-loop (HIL) testing to overcome several issues. A big one is how expensive the physical system is. Even breaking down the system into subsystems for test can still be too expensive when fielding more than a couple test stations. Modeling elements of the “plant” for testing control electronics is essential to achieving reasonable development schedules and reducing risks through more complete testing at both the subsystem and system levels.

Automotive companies – many of whom moonlighted as defense suppliers to varying degrees – borrowed the HIL approach to improve testing of vehicle designs. While the platform isn’t nearly as expensive, the compressed development schedules of model-year releases dictate a more efficient testing approach.

Three other effects are adding to the automotive problem. First is complexity. By many estimates, the traditional metric of lines of code (LOC) in a luxury vehicle now surpasses 100M, and it isn’t much less at the midrange and low end as electronics content increases. That would be challenge enough if it were one lump, but in fact the problem is larger: those LOC are distributed across perhaps 100 or more subsystems, each with its own software and many running on different processor architectures. Manufacturers are trying to rein in that complexity by consolidating systems and standardizing around AUTOSAR and other architectures, but the problem is still large.

    Second is degree of difficulty. Simulation of most electromechanical and hydraulic systems used to be a relatively easy task. Much faster response times in power electronics have made simulation a challenge, with many designers turning to FPGA-based acceleration of test platforms. Also factored in is the asynchronous nature of subsystems, loosely coupled on a vehicle bus such as CAN – accurately simulating and reproducing timing under all conditions is critical to assessing system operation.

    Third is liability. The cost of failure in a car is much greater than it used to be, given the escalation in lawsuits and insurance costs. Even more dramatic is the expectation that manufacturers maintain vigilance through product recalls and warranty repairs. This has shifted the burden of test from straightforward functional verification to mitigating defect escalation, and the response is to “shift left” with earlier software testing.


    For higher integrity levels in ISO 26262, the recommended approach involves fault tree analysis, fault insertion, and failure mode and effects analysis (FMEA). Physical fault insertion is expensive, time consuming, and hard to reproduce. When scaled across numerous scenarios and subsystems, it becomes difficult to sustain on an aggressive development timeline. As SoC integration is increasing, physical fault insertion is also becoming less feasible for chip users.

In a recent webcast, Synopsys took a fresh look at the ISO 26262 problem in the context of virtual HIL, drawing on its experience in virtual prototypes. With advanced tools combined with detailed, accurate models of popular automotive microcontrollers, many of the limitations of physical assessment of subsystems can be avoided. For instance, faults can be introduced virtually at the SoC level, providing rapid testing with reproducible, fully documented results.
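A minimal sketch of what virtual fault insertion looks like in practice; this is our illustration of the general technique, not Synopsys’ tooling or API, and the register names and safety check are invented:

```python
# Toy ECU model with virtual fault insertion: the "fault" is just a
# stuck-at-1 mask OR-ed onto a sensor register, applied in software,
# so every run is fast, reproducible and fully logged.
class EcuModel:
    def __init__(self):
        self.sensor_reg = 0x40      # nominal raw sensor value
        self.fault_mask = 0x00      # bits forced high by fault injection

    def read_sensor(self) -> int:
        return self.sensor_reg | self.fault_mask

    def step(self) -> str:
        value = self.read_sensor()
        # Safety mechanism under test: trap out-of-range readings.
        if value > 0xF0:
            return "FAULT DETECTED: actuator forced to safe state"
        return f"actuator command = {value}"

ecu = EcuModel()
print(ecu.step())           # nominal run
ecu.fault_mask = 0xF8       # inject a stuck-at-1 fault on the high bits
print(ecu.step())           # documented, repeatable fault response
```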

    Synopsys overviews the changes in the automotive environment, along with a look at the ISO 26262 standard and the FMEA philosophy, plus a look at how their tools work, in this SAE-moderated event:

    “Shift Left” Functional Safety for Automotive System Development

    Synopsys has combined their Virtualizer Development Kits with their Saber simulation environment and third-party tools such as Vector CANoe for network simulation to create simulation capability that can handle these larger, more diverse automotive systems. The examples shown in two videos in the webcast are focused on a single ECU for simplicity, but it is evident how the concept could scale.

    For teams working on automotive SoCs, ECUs, or in designs targeting safety-critical systems in general, the ideas explored in this webcast may help keep up with the testing challenge.


    SITRI and Coventor Partner to Scale Up MEMS in China
    by Pawan Fangaria on 05-28-2015 at 12:00 pm

When it comes to wearable technology and the rapidly emerging world of IoT, sensors and MEMS are on the front lines. They collect raw data such as pressure, temperature and motion and process it with algorithms critical to making sure the right information gets to humans and/or machines so the right reaction is enabled. Within a decade, approximately 1 trillion sensors are expected to be deployed worldwide – yet the MEMS market is fragmented, and there is as yet no standard process in place for MEMS development. Change is needed: a standard approach to MEMS design and manufacturing must evolve in order to sustain the massive growth prospects ahead.

The significance of MEMS has not gone unnoticed, especially by Chinese companies eager to jump into this rapidly growing market. At the intersection of MEMS and China sits SITRI, which is announcing a partnership with MEMS tool leader Coventor.

    SITRI is an innovation center for accelerating the development and commercialization of “More than Moore” solutions to power the Internet of Things. In partnership with Coventor, the two companies are working together to help scale up MEMS in China. They recognize the need for an automated process for MEMS design and manufacturing. As part of this partnership, SITRI will provide representation, training and support for Coventor’s MEMS products within China.

Coventor’s tools for MEMS design and integration, MEMS+ and CoventorWare, offer a seamless environment for designing MEMS devices. Coventor’s SEMulator3D also provides a virtual fabrication platform for process development and integration for MEMS, CMOS, FinFET and many other semiconductor technologies.


    [Shanghai Industrial µTechnology Research Institute]

SITRI is a research and innovation center that accelerates the development and commercialization of semiconductor devices, in-house and through its network of partner organizations. It has a large presence in China and partners across the world, including in Taiwan, Korea, the US and Europe.

    SITRI provides 360-degree solutions to start-up companies to help them grow and become successful. It provides technical expertise, infrastructure support, prototype development, process development and integration, design and simulation, market engagement, and even investment in startups. SITRI has strong ties with academic institutions, research centers, and the Chinese semiconductor industry, which make it an important player in the overall ecosystem.

    Today, SITRI is heavily focused on providing expertise in MEMS design, test, process, yield, predictability and packaging that accelerate MEMS time-to-market.

    China is a fast-growing market with numerous fabless companies dealing in SoCs and IP, and several foundries. It’s one of the largest consumers of MEMS and ICs. With a rapidly expanding market, research institutions, talent and expertise in semiconductor and MEMS development, and the right infrastructure, China provides an excellent ecosystem to scale up MEMS design and manufacturing to meet the rising demand for MEMS-based devices.

Coventor’s products are sophisticated 3-dimensional modeling and simulation tools that automate design and process development for MEMS. Using these tools, designers can work from specific fab process capabilities to qualify a particular MEMS design for manufacturing before committing to actual fab processing, dramatically shortening development time and increasing the quality and reliability of the design. SITRI will have access to all of Coventor’s MEMS products and will represent Coventor in the China market. SITRI’s engineers will provide high-level expertise and support to Coventor’s customers to accelerate MEMS development with Coventor tools.

Both companies are very excited about this partnership to develop newer MEMS processes and capture the larger opportunity afforded by the IoT market in China. Coventor’s vision is to expand its solutions for faster semiconductor and MEMS design and manufacturing across the world. Coventor recently opened a sales and support office in Taiwan as well, and the company already has a good presence in the U.S. and Europe.

Read more in the press release about the Coventor and SITRI partnership here.

    Pawan Kumar Fangaria
    Founder & President at www.fangarias.com


    WarpStor, the Data Tardis: Small on the Outside, Large on the Inside
    by Paul McLellan on 05-28-2015 at 7:00 am

    There is a data explosion:

    • IBM says that 90% of all data was created in the last 2 years
    • Smartphone processor development requires 100GB of data per engineer
    • Android testing requires 30GB times the number of tests times the number of testers
    • Biotech simulation, game development and more all require enormous amounts of data

This is a huge problem. While disk drives are cheap, reliable enterprise-class storage is expensive; Gb Ethernet connections are too slow and not scaling; and most tech environments are based on NFS, which is slow and has high overhead. With hundreds of users on projects, another challenge is reducing needless duplication of the same files.

Methodics is introducing WarpStor to address this problem. It is a content-aware network-attached storage (NAS) optimizer built on top of ProjectIC’s abstraction model. It is vendor agnostic, co-existing with storage solutions from IBM, EMC, NetApp and more. It doesn’t require weird stuff like kernel-level patches, and it integrates seamlessly with existing OS infrastructure.

Although this is being announced today, it is actually mature technology: it has been in use internally at Methodics for over a year, with great results in their build and regression process. For example, the disk space requirement for the Methodics internal regression suite has been reduced from 300GB to 1GB, with a similar reduction in network I/O and a big reduction in wall-clock time for running the regressions.

This sort of reduction sounds too good to be true, so how does it work? There is an IP master workspace, and a workspace shrink reduces the storage requirements. Changes in a workspace are handled by copy-on-write, the same way virtual memory is handled in most operating systems. Unchanged data is shared; only when a user changes a file is it copied into their workspace (and altered), while other users continue to share the original, unchanged version. As a result, creating a new workspace before any changes have been made is instantaneous. Eventually the changes are (normally) released and become visible to others. So the first workspace requires some disk space and time to populate, but subsequent workspaces consume almost no disk space and take less than a second to create.
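Here is a minimal copy-on-write sketch in Python; it is our illustration of the mechanism, not Methodics’ implementation, and the names are invented:

```python
# Copy-on-write workspaces: files are shared with a master store until
# a user writes, and only the changed file consumes new space.
class Workspace:
    def __init__(self, master: dict):
        self.master = master    # shared, read-only file store
        self.local = {}         # private copies, created on first write

    def read(self, path: str) -> str:
        # Reads hit the private copy if one exists, else the shared master.
        return self.local.get(path, self.master[path])

    def write(self, path: str, data: str) -> None:
        # Copy-on-write: only now does this file cost workspace space.
        self.local[path] = data

master = {"rtl/top.v": "module top; endmodule"}
ws_a, ws_b = Workspace(master), Workspace(master)   # both created instantly

ws_a.write("rtl/top.v", "module top; /* edit */ endmodule")
print(ws_a.read("rtl/top.v"))               # sees the edit
print(ws_b.read("rtl/top.v"))               # still shares the original
print(len(ws_a.local), len(ws_b.local))     # 1 0 -> near-zero space used
```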

    WarpStor is seamlessly integrated into ProjectIC. There is no change at all in the conceptual data model, just a major increase in efficiency both in disk space usage and in the network bandwidth required to move it in and out of users’ own workspaces.

In summary, ProjectIC’s abstraction model enables smart data management, and WarpStor is seamlessly integrated with it. It can create 100GB+ workspaces in seconds, requiring negligible disk space at create time. Copy-on-write is used for changed files, so disk space requirements are tied to how much of the design actually changes. The result is a huge saving in disk space, disk reads/writes, and network file transfers: a true turbo-boost for ProjectIC.

“Scotty, I need warp speed in 3 minutes or we’re all dead.” No problem, Captain, Methodics can do it.

The WarpStor webpage is here.


    Why Design Data Management: A View from CERN
    by Majeed Ahmad on 05-27-2015 at 10:00 pm

    On July 4, 2012, the European Organization for Nuclear Research, or CERN, announced that the ATLAS and CMS experiments had each observed a new particle, which is consistent with the Higgs boson predicted by the Standard Model of particle physics. The Compact Muon Solenoid (CMS) is a general-purpose detector with a broad physics program that includes the Higgs boson. The CMS experiment is one of the largest international scientific collaborations in history, involving 4,300 particle physicists, engineers, technicians, students and support staff from 182 institutes in 42 countries.

Another prominent feature of the CMS experiment has been the extensive use of semiconductor devices (around 1 million chips, of which nearly 700,000 are ASICs) ranging from pixel detectors to Si sensors to calorimeter chips. ASIC design work is imperative in high energy physics (HEP) experiments because commercial off-the-shelf (COTS) components don’t meet the high-radiation, high-magnetic-field and low-power requirements.


    A CMS experiment consumes nearly 1 million chips

    The large-scale ASIC development is a giant challenge in its own right. However, the collaborative nature of work carried out at CERN brings a new conundrum that goes beyond the labyrinth of technical challenges generally associated with integrated circuit (IC) design work. There are around 30 engineers in CERN’s microelectronics team, and they are collaborating with 70 to 120 chip designers from 20 to 30 universities and research institutes. So, typically, design teams involved in the CERN projects are dispersed geographically as well as institutionally.

    And here comes the design conundrum for CERN: The chip industry is facing a number of challenges in dealing with traditional ad hoc ASIC design methodologies. And CERN’s situation, where it’s hard to get all the stakeholders in an ASIC design project in one room, further exacerbates design challenges. In addition, the sheer scale of the number of ASICs used for such a scientific undertaking means that huge data volumes are circulating in a project.

    Traditional ASIC Design Challenges

Wojciech Bialas is an IC design engineer in CERN’s microelectronics group. At CDNLive EMEA in Munich, Germany at the end of April, he shared his views on the ASIC design flow problems that chip designers face at CERN. He also explained the solution that allows ASIC designers at all sites to access all design data, and all changes, in real time: a multi-site design collaboration scheme. First, a quick recap of the design problems.

In a traditional ASIC design flow, chip designers have scratch libraries for development and share master libraries that contain the finished cells. Designers create or edit cells in their personal scratch libraries and then verify them against the master libraries. Remote-site collaboration is limited to rsync or ftp, which is relatively time consuming. Access control is mostly based on trust, and when there is a new design release, it’s archived in the master library.


    Ad hoc ASIC methodologies offer no traceability of design changes

However, the traditional design flow is quickly running out of steam for collaborative ASIC design work. For a start, design changes are usually tracked through e-mails and meetings. Chip designers can accidentally overwrite each other’s changes. Moreover, it’s hard for them to know when changes are lost, so they end up losing track of what different versions mean.

    As a result, libraries become cluttered with unwanted cells and design versions. The notion that any user can make changes in libraries at any time also creates doubts about the quality of simulation and verification work. All this leads to a high risk of miscommunication and delayed turnaround for design fixes.

    Design Data Management

EDA toolmakers like Cadence Design Systems have started integrating an API layer for third-party data management solutions. That allows all design data and revisions to be managed through a project repository like SOS from ClioSoft Inc. SOS is a design data and IP management platform integrated with leading design tools such as Cadence’s Virtuoso, Mentor’s Pyxis, Keysight’s ADS, and both Synopsys Custom Designer and Synopsys Laker.

The SOS project repository gives each user a separate, isolated work area, and each work area has a read-only linked copy of the project libraries. ASIC designers check out a writable copy of a cell before editing; they can edit existing cells or create new cells in their work area. When a user changes the history of an object, the SOS data management tool automatically updates the entire project. Given the sensitive nature of the data, SOS provides administrative controls to manage design access.
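As a toy model of that flow (ours, not ClioSoft’s actual SOS API; all names are invented), the essential ingredients are a writable check-out that locks the cell and a check-in that appends to the revision history:

```python
# Toy design-data manager: read-only views by default, exclusive
# writable check-outs, and a complete revision history per cell.
class Repository:
    def __init__(self):
        self.history = {}       # cell -> list of (user, contents) revisions
        self.locked_by = {}     # cell -> user holding a writable copy

    def checkout(self, cell: str, user: str = "", writable: bool = False) -> str:
        if writable:
            # Prevents designers overwriting each other's changes.
            assert cell not in self.locked_by, "cell already checked out"
            self.locked_by[cell] = user
        return self.history[cell][-1][1]    # latest revision

    def checkin(self, cell: str, contents: str, user: str) -> None:
        self.history.setdefault(cell, []).append((user, contents))
        self.locked_by.pop(cell, None)      # release the lock

repo = Repository()
repo.checkin("opamp_layout", "rev1 geometry", user="alice")

view = repo.checkout("opamp_layout")                        # read-only view
work = repo.checkout("opamp_layout", "bob", writable=True)  # locked for edit
repo.checkin("opamp_layout", work + " + bob's ECO", user="bob")

for user, rev in repo.history["opamp_layout"]:  # full traceability
    print(user, "->", rev)
```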


    SOS provides tracking and accountability of design changes

The primary SOS server, located at CERN headquarters in Switzerland, manages the project repository that contains the entire project data and revisions. The distributed architecture of the SOS tool, however, permits different repositories to be set up at different locations as needed. At each remote site, cache servers are set up and automatically updated with changes. In other words, ASIC designers at all sites have access to design data and changes in real time. Designers without access to the cache server infrastructure are given CERN computing accounts with encryption tunnels for appropriate access control.

CERN’s Bialas acknowledged that the use of SOS data management frees CERN engineers from the need for periodic syncs and artificial partitioning of ASIC designs. Moreover, it allows design managers to make optimal use of resource bandwidth among the different sites.

    Bialas also shared his views on the Visual Design Diff software, another ClioSoft product that displays design differences in schematics, layout and RTL. It’s particularly useful in ECO flows to track the changes made between different versions of the same design on which different design engineers are working. Bialas noted that the use of Visual Diff allows ASIC designers to quickly see the difference between the revisions. Moreover, users can take snapshots to record new configurations, and these snapshots can be recreated at any time in the design cycle.

    As the 100+ designers in the CERN project discovered, ClioSoft allows companies and institutions to use the best talent, worldwide, on every project. Collaboration in real time and worldwide revision control enable exploration in new areas of growth.

    Majeed Ahmad is the former Editor-in-Chief of EE Times Asia and is the author of six books about the electronics industry.

    Also Read

    ClioSoft Celebrates 2014 with 30% Revenue Growth!

    Secret Sauce for Successful Mixed-signal SoCs

    DNA Sequencing Eyes SoCs for Stability and Scale


    Will Dark Silicon Dictate Server Blade Architecture?
    by Tom Simon on 05-27-2015 at 7:00 pm

Does the evil-sounding phenomenon known as Dark Silicon create a big opportunity for FPGA vendors, as Pacific Crest Securities recently predicted? John Vinh posits that using multiple cores as a method of scaling throughput is flattening out, and that off-loading computation to FPGAs can help overcome this issue.

The root cause of the so-called Dark Silicon phenomenon has nothing to do with evil Sith Lords or a post-apocalyptic Mad Max world left without any power to run ICs. Any ASIC designer will tell you that it is essential to manage on-chip power by controlling clocks and voltages and by turning modules on and off as needed. Clock gating is the main tool here, given that clock trees and flops consume a tremendous share of an ASIC’s power. Lowering clock rates helps too, of course, but at a direct cost in performance, the very thing we are trying to squeeze out of these designs.

Dark Silicon is more of an effect than a cause. When there are more gates on an ASIC than can be run within the thermal constraints of the design, the silicon that cannot be run is called Dark Silicon. It’s really better to think of it as a percentage that must be switched off rather than as specific blocks that never run. However, this is nothing new.

What is new is the disparity between the gates available and the ability to run them. Multicore helped push through the performance barrier when CPU clock rates plateaued between 3 and 4 GHz, but multicore is also running out of steam. Does adding FPGAs programmed to perform computation in server farms really solve this problem?

The rule of thumb for converting a software task to an FPGA is that it provides about a 10X improvement in performance. But when Microsoft used a hybrid FPGA-CPU combination for its Bing search engine, it realized a 2X improvement.

So there is clearly a cost in the hybridization process. What the Pacific Crest piece overlooks is the silicon utilization question for FPGAs. Yes, they can achieve higher throughput than a general-purpose CPU running software algorithms, but where do they stand on gate utilization and power consumption per computational operation? I bet that a CPU can perform more work per unit of power and silicon than an FPGA; they are optimized to do just that. FPGAs certainly run at lower clock rates than dedicated CPUs.

FPGAs in this case are simply able to solve an algorithm with less computation. So they have an advantage, but apparently not as big as you’d expect: witness the 2X gain rather than the 10X you might hope for in this scenario. And, as you might guess, an ASIC does even better for a hard-wired algorithm. The easiest example is Bitcoin mining: people quickly stopped using processors and went to FPGAs for generating Bitcoin hashes, and FPGAs were a lot faster than software, but dedicated ASICs are orders of magnitude faster still.
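For the curious, the Bitcoin mining kernel is tiny: proof-of-work is just SHA-256 applied twice to an 80-byte block header while sweeping a nonce, exactly the kind of small, fixed-function loop that maps well to FPGAs and even better to ASICs. A minimal Python rendering of that kernel:

```python
# Bitcoin's proof-of-work hash: double SHA-256 over the block header.
import hashlib

def bitcoin_hash(header: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(header).digest()).digest()

# A miner sweeps the 4-byte nonce looking for a hash below the target.
header_prefix = b"\x00" * 76        # placeholder for the first 76 header bytes
for nonce in range(3):              # real miners try billions per second
    digest = bitcoin_hash(header_prefix + nonce.to_bytes(4, "little"))
    print(nonce, digest.hex()[:16], "...")
```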

Dark Silicon is real and is causing people to design differently, but the move to FPGAs is more about how many gates need to be toggled to solve a particular problem, which depends on how general-purpose the hardware is. Companies like Google, Microsoft and the other search and big-data providers have enough clout to build their own ASICs for search and computation, and let’s not forget Oracle, with significant in-house chip design expertise. Those ASICs will probably run quite fast, and yet they will still need to worry about Dark Silicon.


    Getting the Best Dynamic Power Analysis Numbers
    by Daniel Payne on 05-27-2015 at 1:00 pm

On your last SoC project, how well did your dynamic power estimates match silicon results, especially while running real applications on your electronic product? If your answer was “Well, not too well,” then keep reading this blog. The classical approach to dynamic power analysis is to run your functional testbench on RTL code or even a gate-level netlist, then look at the switching activity as a function of time. This approach is shown in red on the following chart:

    Some of the issues with using a testbench approach to getting switching activity are:

    • Functional simulation takes a long time, so you cannot really boot an OS or run an app
    • Your stimulus may not uncover the worst case scenarios, giving you a false sense of security

Another approach is to use a hardware emulator and actually boot the OS and run your real apps to see the switching activity, shown in blue on the chart above. For this SoC it is clear that the testbench approach led to misleading power peaks; designing to those numbers would have meant silicon power consumption much higher than expected, causing a re-spin or even cancellation of the project. The emulator-based approach, able to run the OS and live apps, provided the truest switching activity numbers, leaving no surprises in silicon.
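Underneath any of these flows, per-net dynamic power follows the standard CMOS switching equation P = α·C·V²·f, where α is the switching activity that the testbench or emulator is estimating. A small illustrative calculation (the numbers are invented, not from Mentor or ANSYS) shows how directly the activity estimate drives the power number:

```python
# Dynamic power of one net: P = alpha * C * V^2 * f.
def dynamic_power(alpha: float, cap_farads: float, vdd: float, freq_hz: float) -> float:
    return alpha * cap_farads * vdd**2 * freq_hz

NET_CAP = 10e-15            # assumed 10 fF net
VDD, FREQ = 0.9, 1e9        # assumed 0.9 V supply, 1 GHz clock

# Different stimulus can report very different toggle rates (alpha):
for label, alpha in [("testbench stimulus", 0.30), ("OS + real apps", 0.12)]:
    p_uw = dynamic_power(alpha, NET_CAP, VDD, FREQ) * 1e6
    print(f"{label}: {p_uw:.2f} uW per net")   # 2.43 uW vs 0.97 uW
```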

    So, who has created such an emulator-based approach to dynamic power analysis? It turns out that it’s not a single company, but rather two companies:

    • Mentor Graphics providing the emulation and integration
• ANSYS with the PowerArtist tool for dynamic power analysis

    Related – Mentor’s New Enterprise Verification Platform

    I spoke with Jean-Marie Brunet of Mentor Graphics last week by phone to learn about this new capability for counting the switching activity on RTL or gate-level SoC designs using their Veloce emulator.

    Mentor Graphics has offered a couple of previous flows with their emulator that created files for switching activity using the UPF and SAIF file formats. What’s new this month is that their emulator can now create this switching activity and make the results available through a dynamic API, which provides benefits like:

    • Faster time to power analysis results (no reading and writing of file interfaces)
    • Integration with the popular PowerArtist tool from ANSYS which provides the dynamic power analysis numbers

Using the SAIF (Switching Activity Interchange Format) approach is one possible power flow, and it will get you average power numbers. Another approach is to use an FSDB (Fast Signal Data Base) file flow; however, you will quickly find your hard disk filling up with large files that also take a lot of CPU time to produce, though at least you get dynamic power values. The new, recommended approach is one where the emulator has a Power Application that connects to PowerArtist from ANSYS through a dynamic API, allowing:

    • Fastest speed, sufficient to boot an OS and run real Apps on your SoC
    • Quickest time to results
• Average and peak power
    • Eliminates large FSDB files

    Related – Improving Verification by Combining Emulation with ABV

    Here’s what the new flow looks like:

This approach was requested by leading-edge customers, and you can expect Mentor to do additional integrations in the future, even with other EDA vendors. Actual performance numbers for this API-based dynamic power approach, compared to the older, slower file-based approach, show a speed-up of 2X to 4.25X, depending on the type of design you have. That means you can expect to save weeks of CPU time on a large SoC project while getting accurate dynamic power numbers from either RTL or gate-level netlists.

    Summary
If you already have the Veloce emulator and the PowerArtist tool, it would make sense to evaluate this new Veloce Power App. If your last dynamic power estimate was dramatically different from silicon, then maybe it’s time to consider this emulator-based flow instead of your old approach.