Trends in Automotive Electronics at #52DAC
by Daniel Payne on 07-05-2015 at 4:00 pm

The coolest and most expensive car at DAC this year had to be the McLaren P1, priced at $1,150,000 and powered by a 903 hp gas/electric hybrid. Electronics are used in autos to provide safety features, infotainment, motor control and performance.

Also at DAC this year there was an Automotive Village with more cars and experts from both EDA and IP vendors on hand to explain how up to 25% of a new car’s cost comes from electronic systems.

Related – Noise-Coupled analysis for Automotive ICs at DAC

Ravi Ravikumar from ANSYS organized an automotive track with three invited speakers from Infineon, Freescale and Tesla.

Ajay Kashyap from Infineon presented on: Achieving Power and Reliability Sign-off for Automotive Semiconductor Designs. Consider that when driving a car we rely on an anti-lock brake system (ABS) to help us stop safely, an airbag system in case of a collision, electrically powered steering (EPS), or even Advanced Driver Assistance Systems (ADAS) for tasks like parallel parking. Ensuring high quality typically means that engineers have to look at reliability metrics like power and timing as affected by dynamic voltage drops within an IC.

Engineers at Infineon have used a design methodology to check and fix dynamic voltage drop issues by analyzing VDD and VSS levels that affect data integrity, and the effects on switching and non-switching signals.
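
To illustrate what such a check looks for, here is a toy dynamic voltage drop simulation of a single supply node with a lumped grid resistance and decap; the values are invented for illustration, and real sign-off flows solve the full extracted power grid rather than one node.

```python
VDD, R, C, V_MIN = 1.2, 0.5, 1e-9, 1.08   # volts, ohms, farads; 90% sign-off limit
dt, steps = 1e-12, 4000                   # 1 ps time step, 4 ns window
v, worst = VDD, VDD

for k in range(steps):
    i_load = 0.3 if 1000 <= k < 2000 else 0.02   # 300 mA switching burst
    i_grid = (VDD - v) / R                       # current through grid resistance
    v += (i_grid - i_load) / C * dt              # decap supplies the difference
    worst = min(worst, v)

print(f"worst-case VDD = {worst:.3f} V:", "FAIL" if worst < V_MIN else "PASS")
```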

Related – Will your next SoC fail because of power noise integrity in IP blocks?

The speaker from Freescale was Jehoda Refaeli, and his talk was on: Thermal Integrity and Thermal-aware EM Reliability Check for 3D Stacked Dies in Automotive Applications. Freescale has been designing automotive ICs like multi-core MCUs to control systems such as the powertrain, safety, motor and battery control. Stacking a memory chip on top of an SoC in 3D fashion can benefit power consumption and communication speeds, but engineering needs to verify that thermal issues are under control and don’t cause reliability problems.

Freescale engineers use a design flow that includes a Chip Thermal Model (CTM) for each die in a 3D stack. They use a concurrent simulation for power thermal convergence to analyze and avoid thermal run-away. EM reliability is impacted by the temperature, so a thermal analysis is performed to understand self-heating effects.

The CTM has a thermal profile and adding wire temperatures allows for a thermally-aware EM reliability check. Finite Element Model (FEM) analysis is used to calculate a wire’s temperature rise and 3D decay. Engineers now can understand both thermal and EM reliability for a 3D stacked die.
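
To see why wire temperature matters so much for EM sign-off, consider Black’s equation, which gives electromigration lifetime as a strong function of temperature. The sketch below uses generic textbook constants, not Freescale’s values.

```python
import math

K_B = 8.617e-5            # Boltzmann constant, eV/K
EA, N_EXP = 0.9, 2.0      # activation energy (eV) and current exponent, assumed

def mttf_rel(j, t_kelvin, j0=1e6, t0=378.0):
    # Black's equation relative to a reference stress (j0 = 1 MA/cm^2 at 105 C):
    # MTTF ~ j^-n * exp(Ea / kT).
    return (j0 / j) ** N_EXP * math.exp(EA / K_B * (1.0 / t_kelvin - 1.0 / t0))

# Same current density, but 20 C of self-heating on top of the 105 C reference:
print(f"relative MTTF at +20C: {mttf_rel(1e6, 398.0):.2f}x")  # ~0.25x, 4x shorter life
```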

Related – A Key Partner in the Semiconductor Ecosystem


The most famous Electric Vehicle (EV) company for luxury cars has got to be Tesla, and Dr. Jenna Pollock Sr. talked about: High-Frequency, High-Power Magnetic Design with Maxwell 3D – from Geometry Creation to Component Optimization. Keeping weight low is a big design goal for an EV, because excess weight limits the driving range per charge. Component weight was minimized by using the ANSYS Maxwell tool on high-frequency magnetic cores.

Tesla engineers created a scripting library with fully parametric models, and then ran optimization routines to get the best size, weight and performance from their magnetic cores.
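
To make the idea concrete, here is a minimal sketch of what such a parametric optimization loop can look like, with a textbook inductance formula standing in for the Maxwell 3D field solve. The constants, bounds and target below are invented for illustration; they are not Tesla’s models.

```python
import numpy as np
from scipy.optimize import minimize

MU0, MU_R, N_TURNS, RHO = 4e-7 * np.pi, 2000.0, 20, 4900.0   # ferrite, kg/m^3
L_TARGET = 100e-6                                            # 100 uH target

def inductance(x):
    area, path = x                       # core cross-section (m^2), flux path (m)
    return MU0 * MU_R * N_TURNS**2 * area / path

def mass(x):
    area, path = x
    return RHO * area * path             # toroid-like volume approximation

# Minimize core mass subject to hitting the target inductance exactly.
res = minimize(mass, x0=[1e-4, 0.1],
               bounds=[(1e-6, 1e-3), (0.02, 0.3)],
               constraints=[{"type": "eq",
                             "fun": lambda x: inductance(x) / L_TARGET - 1.0}])
print(f"area = {res.x[0] * 1e6:.2f} mm^2, path = {res.x[1] * 100:.1f} cm, "
      f"mass = {mass(res.x) * 1000:.2f} g")
```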

Summary
Automotive is a big market for electronic systems design. I’m glad to see that the DAC organizers have been able to grow this new area, and that ANSYS users are having success at making our automobiles safer, more reliable and better able to use alternatives like electric and hybrid power.

Related – ANSYS Event to Highlight Cutting Edge Technology Development


Cellphones on the Path of Extinction
by Pawan Fangaria on 07-05-2015 at 4:00 am

Semiconductor-based electronics has continuously improved people’s lives through all kinds of technology upgrades in the gadgets we use daily. Imagine the journey from a mechanical typewriter to a laptop computer connected to a laser printer, the transition from black & white photography to exotic colour photographs at your fingertips, all kinds of banking transactions from anywhere on your phone, tablet, or laptop, and so on. The first handheld mobile phone was produced at Motorola in 1973; analog cellular networks evolved in the 1980s, giving way to digital cellular networks in the 1990s. Digital cellular networks boosted data communication through 2G, 3G broadband and now 4G. One can do much more than just talk with a handheld mobile device from anywhere, anytime; you can even watch a whole movie on your smartphone. Data communication was the real catalyst for the transition from cellphones to smartphones. Add to it the power of countless ‘apps’ that ease different daily functions in our lives.

More than 30 years on, we still have commercially working cellphones in some parts of the world. Cellphones have already been replaced by smartphones in the developed world and are being rapidly replaced in developing countries, thanks to the low-cost smartphone providers in China and India. How long will it take for cellphones to become extinct? Let’s look at some data from the IC Insights report on smartphone market trends.

From the above graph, it appears that in the last 2+ years the smartphone share of total cellphone shipments has increased from 50% to 80%. The smartphone share is expected to climb well beyond 90% in the next 3-5 years; the IC Insights forecast says 93% by 2018. Can we expect the elimination of cellphones from the market after 2020? Make a guess from the following chart.

We can see a continuous decline in non-smart cellphone shipments, along with nearly flat growth (less than 5%) in the total number of cellphones. The net result will be the complete elimination of basic cellphones from the market in the next five years or so.
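
As a purely illustrative extrapolation of the shares quoted above (50% in roughly 2013, 80% in 2015, a 93% forecast for 2018), a simple saturating trend supports the five-year estimate; IC Insights’ own model will of course differ.

```python
import numpy as np

years = [2013, 2015, 2018]
share = [0.50, 0.80, 0.93]        # smartphone share of total cellphone shipments

# Fit a straight line on the logit scale so the share saturates toward 100%.
logit = np.log(np.array(share) / (1 - np.array(share)))
slope, intercept = np.polyfit(years, logit, 1)

for y in (2020, 2022):
    s = 1 / (1 + np.exp(-(slope * y + intercept)))
    print(f"{y}: ~{s:.0%} smartphone share")   # ~98%, ~99%: basic phones nearly gone
```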

Although Apple and Samsung dominated the smartphone market in 2013 and 2014, both appear to be losing market share to Chinese smartphone makers such as Xiaomi, Yulong/Coolpad, TCL and Huawei. The top six smartphone suppliers in China increased their worldwide market share from 21% in 2013 to 29% in 2014, though in Q1 2015 the top six China-based smartphone suppliers held a 25% market share.

Today, a key factor in the smartphone market is the replacement of cellphones by smartphones in emerging markets like China, India and other developing countries. If a cellphone can be replaced by a smartphone for less than $200, people can happily afford it. In fact, in these countries a smartphone can be had for even less than $100 and still offer almost all the functions of a typical low-end smartphone.

I bought an Android-based Micromax Canvas, a smartphone from an Indian company, for about $100 for my daughter. In normal day-to-day use, if I compare that smartphone with my Google Nexus, I do not see any appreciable difference. So wait five years: all remaining cellphones in the worldwide market will be replaced by smartphones!

For a detailed reading of the IC Insights report, you can get it HERE.

Pawan Kumar Fangaria
Founder & President at www.fangarias.com


When it comes to High-Sigma verification, go for insight, accuracy and performance
by Michael Pronath on 07-04-2015 at 7:00 am

There are three critical goals that designers of custom digital circuits and memories look to achieve with high-sigma verification:

(1) obtaining accurate results,
(2) achieving results with good run-time (efficiency), and
(3) gaining proper insight into how their circuit is behaving, along with an understanding of the failure modes that most affect high-sigma performance. Problems can then be identified, and optimization tools can be employed to tune circuits and correct those problems.

A “black-box” approach, taken by methods based on sampling and response surface models, looks easy at first glance but falls short in all three categories: it introduces significant sampling errors, suffers from long runtimes, and provides little insight into the circuit’s different failure modes. In particular, there is no verifiability in black-box modeling methods. The user is easily lulled into a sense of security by sample plots, unaware of how unreliable those plots are, since they show only a tiny fraction of data points chosen by a model. MunEDA tools offer the designer added insight into circuit behavior at extreme levels of variation rather than a very limited “trust-us” approach.

MunEDA provides a fully-scriptable environment where each circuit preparation, analysis, optimization and reporting step can be programmed into a completely automatable flow (think Matlab and other industry-standard programming environments). MunEDA employs worst-case analysis (WCA) to rapidly locate the most-likely failure modes of your memory, clock-tree, analog, and mixed-signal circuits.

MunEDA WCA is based on the concept of most probable points (MPP), widely used for engineering reliability analysis and reliability-based design [1]. By virtue of their efficiency, accuracy, and providing insight into the analyzed problem, MPP-based methods are very popular for high-sigma verification tasks. WCA is an extension of MPP that simultaneously solves for worst-case corner and worst-case mismatch.
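
For intuition, here is a minimal sketch of the MPP idea on a toy analytic “performance” function; a real WCA run drives a circuit simulator instead, so the function, spec and numbers below are invented purely for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def perf(x):
    # Toy performance metric over two normalized (sigma-scaled) variables.
    return 1.0 - 0.3 * x[0] - 0.2 * x[1] - 0.05 * x[0] * x[1]

SPEC = 0.4   # the circuit fails if performance drops below this limit

# MPP: the point on the failure boundary perf(x) == SPEC that lies closest to
# the origin in normalized variable space; its distance is the sigma level.
res = minimize(lambda x: np.dot(x, x), x0=np.array([1.0, 1.0]),
               constraints=[{"type": "eq", "fun": lambda x: perf(x) - SPEC}])
beta = np.sqrt(res.fun)
print(f"worst-case point {res.x}, robustness ~{beta:.2f} sigma")
```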

In [2], Intel engineers explain how MPP works and why it is their method of choice for verifying and optimizing the yield, area, and power of register files and bitcells in microprocessors. A large number of different registers and bitcells are used in a CPU; for each one, high-sigma analysis is run many times, swept over process variants, corners, Vcc_min, etc. In [3], Intel engineers apply the MPP method to carefully consider two failure modes of interest in their sequential logic circuit.

Because standard Monte Carlo requires an enormous number of runs at high sigma, the efficiency of WCA is key. Typical runtimes of WCA-based 6-sigma analysis today are in the range of only 150 simulation runs, compared to the minimum of roughly 500 billion standard MC runs needed to achieve the same accuracy.
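
That MC number is easy to sanity-check: the expected sample count for resolving a 6-sigma failure probability follows directly from the normal tail.

```python
from scipy.stats import norm

p_fail = norm.sf(6.0)                  # one-sided 6-sigma tail ~ 9.87e-10
for eps in (0.10, 0.05):
    n = 1.0 / (p_fail * eps**2)        # samples for relative std error eps
    print(f"relative error {eps:.0%}: ~{n:.1e} MC samples")
# ~1e11 samples at 10% error, ~4e11 at 5% -- the hundreds of billions of
# plain MC runs the article contrasts with roughly 150 WCA simulations.
```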

Being able to run high-sigma efficiently is also important for designers who need to run it inside of an optimization loop. Due to mixed effects between operating conditions, corners, local variation, and device geometries, high-sigma analysis has to be executed multiple times during one optimization run.

One interesting property of embedding WCA into MunEDA’s environment is how analyzing different failure modes gives valuable insight into circuit behavior, which cannot be achieved using black-box sampling methods.

Figure 1: Multiple failure modes in Mux circuit

Take the example of the multiplexer circuit shown in Figure 1 above. As often happens with other types of circuits, there are multiple failure modes of concern. Turning a black-box analysis tool loose on the circuit, without guidance as to where to look, risks locating only one failure mode, and maybe not the one of most critical interest. It’s especially problematic if a failure mode has a steep transition cliff, such as the nFET contention problem shown in Figure 1.

Using MunEDA’s professional-grade environment for analyzing circuits with multiple failure modes, designers can consider all of their failure modes in succession and be sure not to miss one.
Figure 2 shows a run flow within the environment for analyzing multi-level failure modes in high-sigma analysis.

Figure 2: Multi-Level Worst-Case Analysis Example in MunEDA

Another application for WCA is evaluating timing and race conditions along delay paths in clock circuits and other timing circuits. MunEDA’s modular and fully-scriptable analysis canvas provides the ability to analyze and attain confidence in timing solutions. Consider the example shown in Figure 3, a delay-path circuit.

Figure 3: Robustness analysis for a race condition using WiCkeD

To truly understand circuit behavior at high sigma and ultra-high sigma, it is necessary to use professional-quality tools that allow designers to really understand the circuit, and those are the tools MunEDA provides. MunEDA tools for circuit migration, analysis, optimization and modelling have been proven in thousands of silicon tape-outs with many of the top semiconductor companies worldwide. Please follow this link to see customers who are using MunEDA’s “WiCkeD” platform: https://muneda.com.

[1] X. Du et al.: “Most Probable Point-Based Methods.” In: A. Singhee, R. A. Rutenbar (eds.): Extreme Statistics in Nanoscale Memory Design. Springer, 2010.

[2] K. Anshumali et al.: “Circuit and Process Innovations to Enable High-Performance and Power and Area Efficiency on the Nehalem and Westmere Family of Intel Processors.” Intel Technology Journal, vol. 14, no. 3, pp. 112-114.
http://www.intel.com/content/www/us/en/research/intel-technology-journal/2010-volume-14-issue-03-intel-technology-journal.html

[3] C. H. Chen, K. Bowman, C. Augustine, Z. Zhang, J. Tschanz: “Minimum Supply Voltage for Sequential Logic Circuits in 22nm Technology.” IEEE International Symposium on Low Power Electronics and Design (ISLPED), 2013, pp. 181-186.


GlobalFoundries Endorses ST/LETI FD-SOI 22nm!
by Eric Esteve on 07-03-2015 at 9:00 am

The LETI Days and the associated FD-SOI workshop took place in Grenoble (France) last week, and I could not attend in person… but I had the opportunity to speak with LETI CEO Marie Semeria. Before going into the details of the three key messages from LETI (FD-SOI, Silicon Impulse and CoolCube), it’s important to share the great news from this FD-SOI workshop: GlobalFoundries has officially presented their 20nm FD-SOI solution! The first three (business-oriented) presentations opened the second day, and I think the presentation list is already a good summary:

  • 28FD-SOI: Cost effective low power solution for long lived 28 nm (by Kelvin Low – Samsung)
  • Advances in Application and Ecosystem for the FD-SOI Technology (by Giorgio Cesana – ST)
  • Design/Technology Co-Optimization for FD-SOI (by Gerd Teepe – GF)

ST and Samsung supporting FD-SOI is no longer breaking news, even if it’s really important that a foundry offering huge wafer capacity like Samsung is part of the game. Beyond the title of the presentation from GF, the breaking news is that GF will support 20 (or 22)nm FD-SOI. Why 20nm and not 28nm? This is probably a marketing decision: GF decided to officially support FD-SOI technology more than one year after Samsung, and endorsing 20nm FD-SOI instead of 28nm is a good way to close this timing gap. LETI is a French research center concentrating on advanced technology research (not expected to lead to profitable products for many years), the goal being to license such technology to industry. The excellent presentation from Thomas Skotnicki, ST Fellow and Technical VP, clearly shows that it may take a long time for a lab-demonstrated technology like FD-SOI to eventually become an industrial product: from 1988 to 2012 in this case! FD-SOI being endorsed by both Samsung and GF is clearly a success for LETI. From my discussion with LETI CEO Marie Semeria, I can synthesize three key messages delivered by LETI during the LETI Days in Grenoble:

  • FD-SOI, being licensed by Samsung (28nm), ST (28nm, 14nm) and GF (22nm), is extremely well positioned to support emerging IoT applications, especially memories, RF ICs and MEMS
  • To support wider FD-SOI adoption, including start-ups and mid-size European chip makers, LETI has launched the “Silicon Impulse” initiative to strengthen the FD-SOI ecosystem: EDA, IP and MPW
  • Another initiative, “CoolCube”, has been launched by LETI to support monolithic 3D, or CMOS-on-CMOS, integration.

Marie Semeria, LETI CEO, considers that FD-SOI technology should target IoT applications as a priority, thanks to better-than-bulk power efficiency (IoT loves low power!) and good-enough integration capability at 28nm, or even 22nm with GF. I completely agree with Marie when she says that Moore’s law explodes at 28nm: as soon as a new technology leads to a higher cost per transistor (like 14nm compared with 22nm), the industry has to explore various innovative approaches to keep semiconductors as attractive as they have been during the last 50 years.

Exploring the 3rd dimension is one of these innovative possibilities. In fact, “one” is not the right word, as different 3D technologies are in development, such as Through Silicon Via (TSV) and the sequential integration of the CoolCube process developed at LETI. The undisputed #1 fabless leader, which also leads the smartphone segment with modem and application processor ICs, has not endorsed FD-SOI… but Qualcomm has decided to partner with LETI to bring CoolCube into production.

What’s CoolCube? The idea is to use an initially processed chip (say an application processor) as the bulk on which a 2nd chip (say a modem or a memory) is processed. Once this 2nd chip is processed, both chips are interconnected by using bond wires (not TSV) and eventually packaged together. The benefits are a smaller area (on the board) and better power consumption compared with two ICs packaged separately. In fact, the internal chip-to-chip interconnects are relieved of the package capacitance and inductance, leading to lower overall capacitance and thus lower power consumption (proportional to C·V²; see the quick sketch below).

CoolCube looks very attractive, and the chances for this emerging technology to reach production are higher with Qualcomm as the customer/driver. Silicon Impulse is obviously not as impressive as CoolCube, but it is probably very important as well. The goal is to create a complete FD-SOI ecosystem including EDA tools, an IP offering and Multi Project Wafer (MPW) capabilities. Silicon Impulse is supposed to help start-ups and emerging chip makers complete their first FD-SOI project, but creating a real ecosystem will eventually benefit every chip maker adopting FD-SOI.
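
As a quick illustration of that C·V² argument: the dynamic power of a chip-to-chip link scales as P = α·C·V²·f, so removing package parasitics from the interconnect cuts power roughly in proportion to the capacitance saved. The numbers below are assumptions for illustration, not LETI data.

```python
# P = alpha * C * V^2 * f for a single signal of a chip-to-chip link.
alpha, f, v = 0.2, 1e9, 1.0    # activity factor, 1 GHz toggle rate, 1.0 V swing

c_pkg  = 2e-12                 # ~2 pF: pad + package + board trace (assumed)
c_mono = 0.2e-12               # ~0.2 pF: on-die 3D interconnect (assumed)

for name, c in [("package-level link", c_pkg), ("monolithic 3D link", c_mono)]:
    print(f"{name}: {alpha * c * v**2 * f * 1e3:.2f} mW per signal")
```

From Eric Esteve from IPNEST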


This is how FPGA Prototyping Works
by Majeed Ahmad on 07-02-2015 at 1:00 pm

FPGA prototyping has come a long way since the late 1980s, when chipmakers began using FPGA devices to build system prototypes of ASIC designs. A working FPGA prototype allows hardware designers to develop and test their systems, and gives software developers early access to a fully functioning hardware platform.

A lot has changed since FPGAs emerged in the semiconductor realm. Chip developers are now dealing with multi-million gate counts in the larger ASIC/SoC designs, and here design partitioning, debug and scalability requirements are turning FPGA-based prototyping technology into an even more viable design tool.


A prototype is used to develop both hardware and software iteratively

S2C Inc.’s e-book titled “Getting the Most Out of FPGA Prototyping” can serve as a handbook on how this design methodology works. Moreover, it debunks the myths about how FPGA prototyping works and what value it brings to ASIC/SoC designers. S2C compiled the e-book from a series of articles published in EE Times.

The series of articles can help chip designers navigate the world of FPGA prototyping technology—everything from overcoming FPGA prototyping hurdles to expanding the use of FPGA prototype design flow to even the larger designs.

Furthermore, the e-book from S2C looks into the specifics of how FPGA-based prototyping can accelerate the design and verification process. In doing so, it offers insight into how a complete prototyping platform can be helpful at any design stage and for any design size.

FPGA Prototyping: Challenges and Solutions

The book kicks off with an outline of five key challenges to FPGA prototyping and provides a detailed treatment of issues such as partitioning, debug and reusability. Next, it delves into ways for addressing these challenges and details the criteria for selecting FPGA-based prototyping systems.

The e-book also clears the air about the myth that FPGA prototyping is only suited to small designs; it forcefully makes the case for the use of FPGA prototyping in large SoC designs. Here, the book refers to the recent advancements in partitioning, debug, and scalability that have made FPGA-based prototyping a far more suitable solution for large ASIC/SoC designs.

The book also shows how extending the functionality of FPGA prototyping through the use of a transactor interface can open up tremendous possibilities for designers. Next, Getting the Most Out of FPGA Prototyping presents the transactor as a use case: an interface between a software program and AXI-compliant hardware.
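
To give a feel for what a transactor looks like from the software side, here is a hypothetical sketch; the class and method names are invented for illustration and are not S2C’s actual API.

```python
class AxiTransactor:
    """Hypothetical software-side proxy; calls become AXI bus transactions."""

    def __init__(self, link):
        self.link = link                       # e.g. a PCIe/USB link to the board

    def write32(self, addr, data):
        # Pack a 32-bit AXI write and ship it to the prototype.
        self.link.send(b"W" + addr.to_bytes(4, "little")
                            + data.to_bytes(4, "little"))

    def read32(self, addr):
        # Issue an AXI read and block until the response returns.
        self.link.send(b"R" + addr.to_bytes(4, "little"))
        return int.from_bytes(self.link.recv(4), "little")

# Early driver code can then run against the prototype long before silicon:
#   dut = AxiTransactor(link)
#   dut.write32(0x40000000, 0x1)     # enable the block
#   status = dut.read32(0x40000004)  # poll a status register
```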


Transactors make early software development a reality

Finally, regarding SoC designs, which are growing in both size and complexity, it’s worth noting that software development and hardware verification are the two leading factors in SoC design cost. On this premise, the book shows how today’s off-the-shelf FPGA prototyping systems can offer value at every stage of the SoC design flow. And it claims that FPGA-based prototyping technology is ready to serve next-generation SoC designs through extensible and scalable systems that offer a variety of hardware and software interfaces.

The e-book Getting the Most Out of FPGA Prototyping is short, sweet and well worth an ASIC/SoC designer’s time.


SmartDV at DAC and More
by Pawan Fangaria on 07-02-2015 at 7:00 am

SmartDV Technologies, a fast-emerging company in the IP space with offices in Bangalore and San Diego, had its booth at the 52nd DAC located at a prominent position in front of the DAC Pavilion on the exhibit floor. So most of the crowd coming to attend sessions in the DAC Pavilion got a glimpse of SmartDV. There I met Deepak Kumar Tala, Founder & CEO, and Harish Poojary, VP of worldwide sales and business development at SmartDV.

On Sunday night, the day before the start of the conference, I attended Gary Smith’s presentation about the future of EDA and IP, where he predicted the IP business to be almost flat until 2019. Gary sees DesignWare IP becoming a commodity and platform-based IP (a model that ARM follows) remaining premium. Also, in Gary’s list of what to see at the 52nd DAC, SmartDV was mentioned at the top, where he recognized the intelligent testbench technology for VIPs from SmartDV. The link to Gary’s list is provided at the end of this page.

SmartDV has a wide range of IP products including MIPI, networking and SoC, automotive and serial bus, and storage VIPs, as well as memory models and design IPs, and they have customers across the world for these IPs. So I talked to Deepak and Harish about how they see the current IP business and the expected growth in the future. Here is the conversation:

Q: SmartDV has a large VIP portfolio and you have a good customer base. How do you see the current IP business and future growth potential from your perspective?

A: From SmartDV’s business perspective, we have large growth potential at key accounts. There is a lot of work to be done to develop our major accounts and also to acquire new customers. We are excited about the growth opportunity ahead of us. However, if you are asking about the usual VIP business, we see simulation VIPs becoming more of a commodity product, whereas there is a premium untapped market out there for simulation acceleration IPs.

Q: Your VIPs are easily customizable in a customer’s environment and they run an order of magnitude faster, so that’s like providing a customized VIP to each customer. How do you see profit margins playing out in that space?

A: We are able to maintain high profit margins while offering low-cost VIPs to customers. Our Cost of Goods Sold (COGS) is far lower than our competitors’. We keep operational costs as low as possible by removing redundancies and by increasing efficiency through automation. All our engineering is based in India and our support is remote. There is also a lot of automation on the engineering side thanks to our compiler technology.

Q: Your own language and compiler technology must give you a good edge in differentiating your VIPs from the rest of the market?

A: Yes, our compiler is the key reason why our time to market is very short. It also helps maintain high quality and a standard architecture across all VIPs. It is because of the compiler that we are able to ship compliance test suites with all VIPs, along with detailed documentation, without high engineering cost.

Q: Recently you released six new VIPs – USB-Power Delivery, MIPI-CPHY, MIPI-DBI, AMBA5 CHI, LPDDR4 and DDR4. How are they performing in the market?

A: We have customers for all except AMBA5 CHI. We are very happy with the quality of the top-tier customers engaged for these VIPs. In addition to existing customers, we are currently engaged with several other prospects evaluating these VIPs.

Q: I see networking, automotive, and storage gaining the most traction in the near future. Where will your focus be in the next 3-5 years? Which area do you see growing the most?

A: Our focus is to be the leader in the simulation and acceleration VIP market, including memory models. We see the major growth for SmartDV coming from acceleration IPs.

Q: How about design IP? Which areas are you pursuing?

A: Our strategy is not focused on design IPs. Instead, we are focused on delivering a complete portfolio in the simulation and acceleration VIP market.

Q: How about a model for a complete solution in a particular area? For example, networking, where you provide IPs for design, verification, interface, and other required hardware and software?

A: This is not something we have currently planned to accomplish. However, depending on market situation, we may consider such a solution in the future.

Q: What are your upcoming products this year? Are there any new releases on the immediate horizon?

A: There is a lot of push from customers to develop a comprehensive solution for memory models and platform-independent acceleration IPs. This is our focus area for the next 12 months, while maintaining leadership in the simulation VIP market.

Q: Would you like to talk about your customer(s)? What do they like most about SmartDV?

A: We can’t mention our customers’ names, but we can proudly say most of the major semiconductor companies are our customers. Some of the top companies use 10+ VIPs from us. They like our flexibility with technical customization, our support model, pricing and the quality of the products. Engineers love the fact that every VIP comes with a compliance test suite, which gives them a jump start.

This was a great conversation with Deepak and Harish. I can see SmartDV’s strategy panning out quite well. Their innovative language and compiler technology is paying good dividends, keeping them differentiated from others in the VIP space.

Gary’s list of what to see is HERE.

Pawan Kumar Fangaria
Founder & President at www.fangarias.com


Global Foundries Completes IBM Semiconductor Acquisition
by Paul McLellan on 07-01-2015 at 4:40 pm

Today the deal for GlobalFoundries to acquire IBM’s semiconductor division closed, having received regulatory clearance from the Committee on Foreign Investment in the United States a couple of days ago. GlobalFoundries is, of course, owned by Mubadala, which is owned by the government of Abu Dhabi, and I have heard that there were some issues with foreign ownership since IBM supplies the US military.

The merged company has five main manufacturing sites with a total capacity of around 7M 200mm-equivalent wafers per year, mostly 300mm wafers in fact (a quick sanity check of this figure follows the list):

  • East Fishkill NY (previously IBM) running 90nm down to 22nm, with a capacity of 14,000 300mm wafers per month
  • Malta NY (GF’s fab 8) running 28nm down to 14nm and will go lower with 60,000 300mm wafers per month
  • Burlington VT (previously IBM) running 350nm down to 90nm with a capacity of 40,000 200mm wafers per month
  • Dresden Germany (GF, the old AMD fab) running 45nm down to 28nm with a capacity of 60,000 300mm wafers per month
  • Singapore (GF, previously Chartered) running 180nm down to 40nm with a capacity of 68,000 300mm wafers and 93,000 200mm wafers per month
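
As a quick sanity check of the ~7M figure, here is the conversion of the per-fab monthly capacities listed above into 200mm equivalents; this is simple arithmetic on the numbers quoted, nothing more.

```python
# A 300mm wafer has (300/200)^2 = 2.25x the area of a 200mm wafer.
fabs_300mm = {"East Fishkill": 14_000, "Malta": 60_000,
              "Dresden": 60_000, "Singapore": 68_000}      # wafers/month
fabs_200mm = {"Burlington": 40_000, "Singapore": 93_000}   # wafers/month

eq_200mm = sum(fabs_300mm.values()) * 2.25 + sum(fabs_200mm.values())
print(f"~{eq_200mm * 12 / 1e6:.2f}M 200mm-equivalent wafers/year")   # ~7.05M
```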

The company is now structured into three business units:

  • CMOS Platforms BU, with a broad technology portfolio across leading-edge and mainstream nodes (I think this is mostly GF’s existing business)
  • RF BU, accelerating RF leadership and manufacturing with technologies such as RFSOI, RFCMOS and SiGe (I think this is the IBM RF business)
  • ASIC BU, with the richest portfolio of IP in the foundry industry for wired and wireless infrastructure applications (I think this is the old IBM ASIC business)

To quote directly from the press release: In RF, GLOBALFOUNDRIES now has technology leadership in wireless front-end module solutions. IBM has developed world-class capabilities in both RF silicon-on-insulator (RFSOI) and high-performance silicon-germanium (SiGe) technologies, which are highly complementary to GLOBALFOUNDRIES’ existing mainstream technology offerings. The company will continue to invest to deliver the next generation of its RFSOI roadmap and looks to capture opportunities in the automotive and home markets.

In ASICs, GLOBALFOUNDRIES now has technology leadership in wired communications. This enables the company to provide the design capabilities and IP necessary to develop these high-performance customized products and solutions. With increased investments, the company plans to develop additional ASIC solutions in areas of storage, printers and networking. The most recent ASIC family, announced in January and built on GLOBALFOUNDRIES’ 14nm-LPP technology, has been well accepted in the marketplace with several design wins.

I had a phone call this afternoon with Mike Cadigan. He was formerly (well, until yesterday I guess) General Manager of IBM Microelectronics Division, and is now head of the Product Management Group at GlobalFoundries.

Mike said that as part of IBM their semiconductor offering had been reined in by the reluctance to invest in both product solutions (R&D) and capacity (basically capital for manufacturing). With the acquisition those technologies should be more widely available to the merchant market.

The company now has 16,000 patents and has a rich portfolio of technology not just in silicon but also in packaging, materials, manufacturing knowledge and more.

Processes will now be developed with early research done in Albany and then moved into the Malta Fab 8. I asked Mike about the next-generation process (10nm), since IBM has historically done a lot of work on SOI. He said that SOI will continue to be important, especially for RF where IBM has historically been strong, down to 14nm. But moving forward they will have a high-performance bulk solution that the combined IBM/GF team will develop. Remember that GF licensed 14nm from Samsung. All three companies were historically part of the Common Platform, which gradually seems to have faded away, but it is not inconceivable that there will still be 10nm collaboration.

Another aspect of the deal is that GlobalFoundries will be a partner with IBM (the non-semiconductor part) for ten years to provide the most advanced semiconductor solutions, including access to the $3B in advanced semiconductor research that IBM is continuing to do. I asked about the design automation tools; those seem to be remaining in IBM, although there is an intention that anything needed will be shared.

The press release is here. A presentation on GlobalFoundries post acquisition is here (pdf).


A Systems Company Update from #52DAC
by Daniel Payne on 07-01-2015 at 12:00 pm

On Sunday night at DAC we heard from Gary Smith that traditional EDA companies need to grow into new market segments in order to stay relevant, and that a systems-level approach to multi-disciplinary engineering was called for. I almost jumped out of my seat and said, “Hey, what about Dassault? They are already doing that now.” Hopefully Gary is reading this blog, and will update his slides for DAC 2016 in Austin and at least mention the several systems-oriented companies that intersect with EDA.

To better understand what is happening at Dassault, I met with Michael Munsey in the Press Room, where it was much quieter than the exhibit floor and we could talk without interruptions.

Q&A

Q: What is a trend that you see at DAC this year?

Well, one big thing at DAC is the discussion of requirements-driven verification (RDV) strategy. Design companies are coming to understand that you must tie requirements to EDA tool results.

Semiconductor companies have requirements that are unfortunately spread across more than a dozen different places, and in a functional verification flow a bug could be in your plan, the design or the testbench. RDV tracks the entire path from requirements to verification results. Anything that produces a report can be linked back to requirements.

Related – Design Collaboration, Requirements and IP Management at #52DAC

Q: I remember hearing about a requirements tool called DOORS back in the 1990s. What’s new in this space?

Rational Software created a requirements management tool called DOORS, now owned by IBM. Reqtify is the Dassault tool for requirements traceability and impact analysis; it links with all the other requirements tools and imports their data into our system to create the traceability. If someone changes a requirement, it tells the designers what the effect is: which new requirements are in place and whether your old results are still valid.

A graphical representation of requirements, testbenches, design files and results is available with Reqtify. The tool answers questions like what has changed and what is out of date. If a requirement changes, we know all about it.

The ad-hoc approach to requirements is now replaced with this structured flow; the sketch below illustrates the underlying idea.
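
To make the traceability idea concrete, here is a minimal sketch of the kind of data model involved; the classes and fields are invented for illustration and are not Reqtify’s actual schema.

```python
from dataclasses import dataclass

@dataclass
class Requirement:
    rid: str
    text: str
    version: int = 1

@dataclass
class Artifact:                  # a testbench, design file, or tool report
    name: str
    req: Requirement             # the requirement this artifact traces to
    req_version: int             # requirement version it was produced against

    def is_stale(self):
        # Out of date if the requirement has changed since this result.
        return self.req.version != self.req_version

r1 = Requirement("REQ-042", "FIFO shall not overflow at 800 MHz")
tb = Artifact("fifo_overflow_tb", r1, r1.version)

r1.version += 1                  # someone edits the requirement...
print([a.name for a in [tb] if a.is_stale()])    # -> ['fifo_overflow_tb']
```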

Q: How does this requirements traceability process work with EDA tools?

All the outputs of EDA tools are now stored away and cataloged, and you have a dashboard across the flow, so that you can look at historical trends and ask an important question – are we getting better at:

  • functional coverage
  • timing closure
  • DFT coverage
  • etc.


The dashboard is configurable by the user and new tools can be added for tracking.

Related – Managing Semiconductor IP

Q: Who would be using a requirements tool on a design team?

It could be any engineer, but mostly we see that it is a verification engineer.

Q: What is management concerned about on large system projects?

Management likes to work with project plans, using the notion of invisible governance: once you install the infrastructure, EDA tool data is tracked against your milestones and status is auto-updated. End-users can focus on each of their specialized design tasks while reporting is automated for them.

We do find that there is a bit of a big-brother aspect, because your design tasks are being tracked and reported.

Engineering data can come from design engineers, product engineers, even manufacturing – because all can be included in the system.

Q: What is a decision support system and why is it useful?

A decision support system answers the primary questions of:

  • Where am I in this process, are we really done designing yet?
  • How well are we tracking against plan, with these engineering resources?
  • Which design team is most effective in their projects?


With a decision support system I will know which resources will help me reach my goals. The more you use the system, the better the prediction results become.

Related – Filling the Gap between Design Planning & Implementation

Q: How does this all work with other EDA vendor tools?

All of the big three EDA companies work well with Dassault, because each of their point tools works well within the Dassault environment.

Also Read

Design Collaboration, Requirements and IP Management at #52DAC

Managing Semiconductor IP

Filling the Gap between Design Planning & Implementation


Synopsys Acquires Security IP Company Elliptic
by Paul McLellan on 07-01-2015 at 7:00 am

On Monday Synopsys announced that it is acquiring Elliptic Technologies, which has one of the largest portfolios of security IP, consisting of both semiconductor IP blocks and software. Increasingly, security requires a multi-layer approach involving both secure blocks on the chip and a software stack on top of that.

Elliptic’s products are used in areas that you might expect, such as payment processing and digital rights management (DRM). But Elliptic also works closer to the silicon than most security solutions, protecting against rogue semiconductor devices, IP theft and more. In each of these situations, cryptographic credentials such as keys or certificates must be managed and inserted into the target device to authenticate it. For example, if a manufacturer wishes to protect against cloning when using an ODM, or against overbuilding at a foundry, it can securely inject credentials from a secure server administered by the manufacturer. Only those products that receive these credentials will function correctly. Similarly, a designer of DSP algorithms, for example, could decrypt and enable the code only for authenticated use through the secure injection of credentials during manufacturing by customers. This ensures that only authorized and paid-for copies are enabled.
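
To see why injected credentials defeat cloning and overbuilding, here is a minimal challenge-response sketch using generic cryptography; this is illustrative only and is not Elliptic’s actual provisioning protocol.

```python
import hashlib, hmac, os

def provision(device, key):
    # Done once, from the manufacturer's secure credential server.
    device["key"] = key

def authenticate(device, key):
    # Challenge-response: only a device holding the key can answer correctly.
    challenge = os.urandom(16)
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    response = hmac.new(device.get("key", b""), challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

key = os.urandom(32)
genuine, clone = {}, {}          # the clone never receives credentials
provision(genuine, key)
print(authenticate(genuine, key), authenticate(clone, key))   # True False
```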

Security IP is a significant growth area due to two factors. First, the importance of security increases on a daily basis, and the risk of poor solutions can be measured in hundreds of millions of dollars for large companies. Second, connected consumer electronics such as smartphones and tablets, networking infrastructure, gateways, base stations, femtocells, and mobile applications continue to grow. And, of course, an obligatory mention of IoT, where security is up there alongside power as an issue.

Or, as Joachim Kunkel put it in the press release: “We live in an internet-connected world and built-in security is critical in protecting devices from malware, data breaches and more.”

One key capability is the Ellipsys Trust Framework, which enables:

  • Manufacturers to protect against counterfeiting, cloning, overbuilding of products produced by ODMs and contract manufacturers;
  • IP designers to protect IP in the form of firmware-embedded algorithms, programs, and FPGA bit files, through all phases of product life cycle;
  • Content Distributors to protect high value content such as High Definition video;
  • Device manufacturers to activate and provision products at the point of sale;
  • Network operators and administrators to manage the identity of devices and subscribers, and to enable features, applications and services in mobile and wired networks

The portfolio includes:

  • Symmetric Cryptographic Engines
  • Hashes and MACs
  • Public Key Accelerators
  • Random Number Generators
  • Software Libraries
  • Security Protocol Processors
  • tRoot Embedded Security Modules
  • Security Accelerators
  • tVault DRM
  • DTCP-IP content protection
  • HDCP 2.2 content protection

This acquisition follows Synopsys’ recently announced acquisition of Codenomicon and its announced plans to acquire Quotium’s Seeker product, which also provide some of the necessary technology for developing secure products. Between pure software solutions, which are grouped with Coverity, and semiconductor IP, grouped into DesignWare, Synopsys has an increasingly broad portfolio of security products. Just as importantly, it has an increasingly large team of experts in software and silicon security implementation.

Terms of the acquisition were not announced, but it is not financially material to Synopsys (which means the price was not enormous). If you are curious about Elliptic’s name, I think (I’m guessing) it comes from two things: getting IP into the name, and the fact that one of the major modern encryption techniques is elliptic curve cryptography (ECC). To anyone in security, the word elliptic doesn’t bring ovals to mind but ECC, which gets higher levels of security from the same key length as older approaches (a 256-bit ECC key is generally considered comparable to a 3072-bit RSA key).

The Synopsys security IP page is here.


eSilicon ♥ ARM!
by Daniel Nenni on 07-01-2015 at 5:00 am

The things I enjoy most at conferences are presentations by customers, the companies that solve the problems we face every day in modern semiconductor design. We all have access to the same tools, IP and foundries, so it’s the actual design and implementation that separates the wheat from the chaff, absolutely.

SemiWiki has direct access to dozens of customer presentations from #52DAC and will be writing about them over the summer. These types of blogs are the most viewed, and the bigger the customer, the more views. Or you can just mention ARM and the views go exponential because, let’s face it, ARM IP is in just about every design, mobile or not.

The nice thing about eSilicon is that they are both a vendor and a customer, so they have no problem talking about what they are doing, and rightly so, since they have taped out hundreds of designs and shipped MILLIONS of chips. Who better to listen to, especially when they talk about using ARM IP? This presentation was titled “ARM-Based Designs in the Internet Age” and was well worth listening to.

eSilicon starts by using an IP hardening project to ensure the flow works well. They don’t push performance here, but once eSilicon engages with a customer design on a validated flow, they use design virtualization to get the best possible results (performance, power, price). The example used in this presentation is a baseband processor implemented in TSMC 28nm HPL. One of the first questions you will face when you start a design is: which of the TSMC 28nm processes will be best for my design?

TSMC now has seven versions of 28nm: HP (high performance), HPM (high performance mobile), HPC (high performance compact), HPL (high performance low power), LP (low power), the recently added HPC+, which is an even faster version of HPC, and ULP, which is ultra-low power for IoT and other battery-powered applications. So many choices, so little time, right?

The complete presentation can be found HERE.

Take a look at slide #3 to better understand design virtualization. What they are talking about here is a big data analytics system that contains a characterization of all the IP used across all the available process options. Using this technology, eSilicon can help users pick the right combination of IP, process options, operating conditions, Vt mix, etc. in real time by querying a database; a conceptual sketch of such a query follows below. This essentially “virtualizes” all these choices. Users are typically stuck with their first decision on all these items because trying something new is a multi-week experiment and no one has that kind of time. Thanks to this virtualization layer, users can now try new choices and get quick feedback on the results. eSilicon can also provide an optimal set of choices for a given power, performance, or area target. They use design virtualization internally on all customer designs to make sure they deliver the best chip possible.
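
Conceptually, a design virtualization query looks something like the sketch below; the records and numbers are invented placeholders, not eSilicon characterization data.

```python
# (process, Vt mix, supply) -> (freq GHz, power mW); illustrative numbers only
char_db = {
    ("28HPL", "svt", 0.9): (1.0, 120),
    ("28HPL", "lvt", 0.9): (1.3, 160),
    ("28HPM", "svt", 0.9): (1.2, 150),
    ("28HPM", "lvt", 1.0): (1.6, 230),
}

def best_option(min_freq_ghz):
    # All pre-characterized options meeting the target, ranked by power.
    ok = [(k, v) for k, v in char_db.items() if v[0] >= min_freq_ghz]
    return min(ok, key=lambda kv: kv[1][1]) if ok else None

print(best_option(1.2))   # the lowest-power combination that hits 1.2 GHz
```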

Slide #5 shows how design virtualization is used early in the flow to drive the best selections, and then later in the flow to continue optimizing things like memory configurations. As implementation gets closer to tapeout, more is learned about the design and therefore more optimization is possible.

Slide #9 introduces a way for all design groups (not just eSilicon customers) to access design virtualization: it is delivered as a service. The customer provides a block that needs to be optimized, and eSilicon experts analyze it and provide guidance on what to tweak to make it better. If they can’t achieve the required improvement, there is no charge for the service. eSilicon was founded on a success-based business model, and this type of design virtualization service is yet another example.