SSD Storage Chips: Basic Interconnect Considerations
by Majeed Ahmad on 07-31-2015 at 4:00 pm

The joint development of 3D XPoint memory technology by Intel and Micron has once more turned the spotlight on data centers and chips for solid-state drives (SSDs). The two semiconductor industry giants claim that 3D XPoint memory is 1,000 times faster than NAND flash, the storage medium underlying today's SSDs. Such developments underscore the tectonic shift in the IT industry from HDDs to SSDs.

The switch from rotary storage to solid-state storage is inevitable, and controller chips for SSDs are far more complex to develop than controller chips for HDDs. Then there are design intricacies related to flash memory that add to the complexity of an SSD controller chip. According to Kurt Shuler, Vice President of Marketing at Arteris Inc., "Flash is like a helicopter. It destroys itself as it is operating."

That's especially true in the case of an enterprise SSD controller chip, in which the "mother" controller talks to multiple "daughter" controllers attached to specific banks of flash. Unlike a consumer SSD, which maxes out at 1 TB, enterprise storage devices built on such controller chips scale to 10 TB and beyond.


SSD marks an inflection point as data centers move away from HDD storage

SSD controller chips use a cascading-slices approach to accommodate more storage, because enterprise SSD solutions need to be highly scalable. That leads to the use of more IP block functions than even the most complicated application processor designs. Not surprisingly, the huge chip size and design complexity are a major consideration for chipmakers offering SSD controller system-on-chip (SoC) solutions.

SSD Design Challenges

Arteris' Shuler says that data protection is the number one design goal for storage chips. Data protection ensures that there is no data loss, by detecting errors and correcting them. It's extremely important because enterprise SSD companies know they have to offer the same reliability as HDDs even though they are using an inherently self-destructive technology like flash.

"Data protection can add data bits and logic for error-correcting code (ECC) and hardware duplication," Shuler said. "In fact, the ECC engine is often the largest IP block on the die." It's these interconnect reliability features that help make flash viable in a market that so far has been ruled by HDDs.
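
To make the detect-and-correct principle concrete, here is a minimal Python sketch using a toy Hamming(7,4) code. Production SSD controllers use far stronger codes (BCH or LDPC) over entire flash pages, so this only illustrates the idea, not any vendor's ECC engine.

```python
def hamming74_encode(d):
    """Encode 4 data bits [d0,d1,d2,d3] into 7 bits with 3 parity bits."""
    d0, d1, d2, d3 = d
    p1 = d0 ^ d1 ^ d3   # covers codeword positions 1, 3, 5, 7
    p2 = d0 ^ d2 ^ d3   # covers positions 2, 3, 6, 7
    p4 = d1 ^ d2 ^ d3   # covers positions 4, 5, 6, 7
    return [p1, p2, d0, p4, d1, d2, d3]  # codeword positions 1..7

def hamming74_correct(c):
    """Recompute parity; a nonzero syndrome is the 1-based error position."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + (s2 << 1) + (s4 << 2)
    if syndrome:                    # single-bit error detected...
        c[syndrome - 1] ^= 1        # ...and corrected in place
    return [c[2], c[4], c[5], c[6]]

code = hamming74_encode([1, 0, 1, 1])
code[4] ^= 1                        # inject a single bit flip
assert hamming74_correct(code) == [1, 0, 1, 1]
```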


The high-level design of an enterprise SSD chip

Second, storage chips also require extremely complex power management and dynamic voltage and frequency scaling (DVFS). Power consumption is the biggest concern in data centers, where low-power operation matters for both compute and air conditioning: around 33 percent of data center power is used for cooling.

Third, there are different quality-of-service (QoS) requirements for storage chips. SSD chips are huge, and they rely more on bandwidth-balancing QoS schemes than on the DRAM-centric end-to-end QoS schemes used in mobile application processors. Moreover, there are a number of clock domain crossing, clock propagation, and clock balancing issues.

Interconnect Spaghetti in Storage Chips

Enterprise SSD chip architectures are even more complex power-wise than, for instance, the TI CC26xx IoT chip design that the Dallas, Texas–based semiconductor supplier recently announced. There are more independent engines on an enterprise SSD controller than in a smartphone application processor, and the connections between them can overwhelm the layout team with a tangled mess of metal-line spaghetti.

Here, a specialized interconnect technology like FlexNoC enables fewer wires and less logic as well as distributed interconnect design. That helps chip designers avoid routing congestion and the resulting timing closure problems the enterprise SSD industry struggles with while using older interconnect technologies like ARM’s NIC-400.

Let's take the core design issue of data protection as an example of how the effective use of interconnect technology can drastically enhance the scalability of enterprise SSD systems. Storage chips are so big, and their interconnects so complex, that the interconnect itself has to be protected. An SoC interconnect implements ECC, parity, and hardware duplication to protect the data paths in a storage chip.

Shuler claims that Arteris’ FlexNoC Resilience Package creates more physical awareness earlier in the chip design process and facilitates data protection in tasks such as ECC, parity, hardware duplication and BIST. “The FlexNoC interconnect IP automatically ensures data safety when dealing with asynchronous clock domains and power domains.”


Clock tree power and unit-level clock gating in a storage SoC interconnect

Next, let's see how the low-latency requirement is balanced against extremely high bandwidth in data centers, as an example of the critical importance of the interconnect in large and powerful storage chips. Ultra-low latency is especially crucial for communication to and from the ARM Cortex-R5's low-latency peripheral port (LLPP).

There are a lot of challenges in implementing this communication in the ARM world because of the ARM AMBA AXI 4 KB restriction per transaction. On the other hand, enterprise SSD chips require huge block transfers in their logical block addressing (LBA) schemes. “The FlexNoC interconnect IP bridges the gap between ARM architecture and enterprise SSD controller architecture,” Shuler said.
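
As an illustration of the kind of gap such a bridge must close, here is a hypothetical Python sketch that splits one large block transfer into bursts that never cross AXI's 4 KB address boundary. The function and framing are invented for illustration; this is not a description of FlexNoC internals.

```python
AXI_BOUNDARY = 4096  # AXI forbids a burst from crossing a 4 KB boundary

def split_into_axi_bursts(addr, length):
    """Yield (address, size) pairs, each confined to one 4 KB region."""
    while length > 0:
        room = AXI_BOUNDARY - (addr % AXI_BOUNDARY)  # bytes to next boundary
        chunk = min(length, room)
        yield addr, chunk
        addr += chunk
        length -= chunk

# A 64 KB block transfer starting 1 KB into a 4 KB region becomes 17 bursts:
bursts = list(split_into_axi_bursts(0x1400, 64 * 1024))
assert len(bursts) == 17 and sum(size for _, size in bursts) == 64 * 1024
```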

Also read:

Is Interconnect Ready for Post-Mobile SoCs?

Arteris Flexes Networking Muscle in TI’s Multi-Standard IoT Chip

Arteris Sees Computational Consolidation Amid ADAS Gold Rush


I want to use USB Type C (and I want it now)
by Eric Esteve on 07-31-2015 at 12:00 pm

USB is certainly the most ubiquitous of the interface protocols, used in our day-to-day lives to connect multiple systems, as well as in professional segments like industrial and even high-performance servers (yes, these systems integrate USB 3 connections). But USB is also one of the protocols able to generate frustration every day. I am not talking about strong frustration, just one of those small issues we face several times a day when plugging in a USB connector the wrong way! We think it shouldn't be rocket science to define a reversible connector that you can plug in either way… but as we will see, it requires high-level engineering skills.

This magic connector has finally been defined by the USB-IF: it's the Type-C connector, and I was told by Synopsys that the Type-C specification is the fastest-adopted USB specification of all time, with only 7 months between the spec and product release. If you consider that we have waited 20 years for a reversible connector, fulminating against this #%&@ connector all the while, this fast adoption makes sense.

Having delivered IPnest's "Interface IP Survey" every year since 2009, I know that Synopsys is the undisputed leader of the USB IP segment, but I didn't know the cumulative number of USB IP projects won by Synopsys: almost 3,500 since 1995. As the clear leader of this USB IP segment, Synopsys had to position itself as quickly as possible to support USB-C IP.

The USB-C connector has to be reversible. What makes it easy to use and understand has a pretty strong impact on the USB IP itself, particularly where the USB PHY is concerned. If you are familiar with interface IP, you know that it can be split into a pure digital block, the controller, and a mixed-signal part, the PHY. Because the PHY is mixed-signal, it's 100% process dependent. In other words, it's almost impossible to immediately support all foundries, all process nodes, and the multiple variations of each node (LP, HPM, etc.). Synopsys has started with the most challenging: the USB-C 3.1 PHY running at 10 Gbps, designed on the 14/16nm FinFET technology node and targeting the high-end market (application processors for mobile, tablets, or laptops), as well as the mainstream, with USB-C PHY IP supporting 3.0 and 2.0 designed on the 14/16nm FinFET and 28nm bulk technology nodes. That's why you can see two different USB-C PHY IP blocks in the picture below. Redesigning these PHY IP blocks gave Synopsys the opportunity to optimize them for area: they use up to 40% fewer pins, leading to a smaller footprint.

As I have previously mentioned, the Type-C connector impacts the USB PHY. And if you want to implement USB-C, you will have to implement the controller too. Thus you would expect Synopsys to have validated the interoperability of the USB 3.1 controller (which also supports 2.0 and 1.1 for backward compatibility) with these new PHY IP blocks. With that work done internally, the next step for the company was to validate the interoperability of the complete USB-C solution during the USB-IF plugfest (compliance program) in July 2015, where you can verify that your solution is compliant and interoperable.

It's crucial for a Synopsys customer that the USB-C IP has passed the compliance test, but it may also be very important to benefit from USB 3.1 verification IP (to validate the proper integration of the IP into the SoC) and a comprehensive prototyping kit (below) to allow software development to start as early as possible, in parallel with the SoC integration. Important to mention: Synopsys customers can use their existing USB 3.0 drivers with the new USB 3.1 controller IP, as this controller has been built on the existing USB 3.0 controller.

Synopsys is launching the first USB Type-C IP solutions supporting the USB 3.1, 3.0 and 2.0 specifications, built on USB PHY IP proven in literally hundreds of customer designs. Last point, but not least: USB Type-C has been specified to deliver up to 100 watts of power! Clearly the USB Type-C PHY IP design has to be robust, on top of running at 10 Gbps.

From Eric Esteve from IPNEST


Silvaco 30 Years Ago
by admin on 07-31-2015 at 7:00 am

It's Silvaco's 30th anniversary. You may already know the dry official story of the early days:

  • Founded in 1984 by Dr Ivan Pesic
  • In 1984 the initial product, Utmost, quickly became the industry standard for parameter extraction, device characterization and modeling.
  • In 1985 Silvaco entered the SPICE circuit simulation market with SmartSpice.
  • In 1987 Silvaco entered into the Technology Computer Aided Design (TCAD) market. By 1992 Silvaco became the dominant TCAD supplier with the Athena process simulator and Atlas device simulator.

I decided to get a bit more color, so I sat down with Misha Temkin to find out how he joined Silvaco and what it was like in the early days, nearly 30 years ago. He is an atomic physicist and did his PhD in Russia (actually, still the USSR at the time) on ion implantation modeling and atomic interactions. He published his work in a book in Russian, and in 1986 it was translated into English, which got him noticed.

Misha moved to the US in 1988 as a Jewish refugee and set about trying to find a job. Then, in one of those alignments of the planets, he saw an ad in the San Jose Mercury News wanting an engineer knowledgeable in process simulation and SUPREM. It didn't say who the company was. He sent his resume but heard nothing for six weeks until one morning, at 6 am, he got a call. It was Ivan, the founder of Silvaco. "I want you to work for me," he said. Misha started work that Monday without even having negotiated a salary; he only found out what he was being paid when he got his first paycheck. He was the first TCAD person at Silvaco. The whole company was Ivan and about 8 or 9 engineers.

Silvaco actually started by doing parameter extraction for SPICE with Utmost (still sold today). Ivan's previous employer had been HP, which sold parameter extraction along with its equipment. But the other equipment companies had nothing similar, and smaller companies could not afford HP. Ivan sold the product himself, flying all over the world. But he realized it was too small a market on its own and decided that TCAD would be a good complement.

In those days, DARPA, Intel, and some other industrial partners were funding TCAD work at Stanford that ended up as SUPREM, which was licensed by TMA (acquired by Avant!, and now the heart of Synopsys' TCAD offering). Ivan wanted to license it too, but Stanford wouldn't grant a license unless Silvaco had true expertise. After all, it was student code with inadequate documentation, batch mode only, no graphics. That was why Ivan hired Misha: "Here's a guy, he did it all before, but in Russia." Eventually they got a license and started selling. In 1989, the first year, they sold 5 or 6 copies and Misha's group grew to 5 or 6 people.

Ivan had a good nose for smart people. He could not compete with much larger, richer companies to hire people from Stanford, but he found people from Russia and Bulgaria, plus Brits, French, and Asians. For 30 years the company has been more or less profitable. The only investment was from the family; as Ivan told Misha one day, "we had to sell a couple of houses." Now the family owns most of the park where Silvaco is based, although they only occupy 6 of the 26 buildings.

Ivan liked to sell direct. By 1989 they already had an employee in Japan. They have been direct for a long time in both Korea and Singapore and are likely to soon go direct in China. It has been a wild ride. Silvaco was 8 people when Misha started and is now 180 people. TCAD is mostly here in Santa Clara with the mesh stuff done in the UK.

One memorable event was the SPICE billboard controversy on Highway 101, about the only type of marketing that Ivan liked to undertake. Even CNN came by the office to, basically, complain. But it all helped put Silvaco on the map.

You probably know the dry version of the sad recent history:

  • In October 2012, after a long battle, Dr Pesic succumbed to cancer.

But Silvaco lives on with a new management team. Ownership of Silvaco remains in the Pesic family, with Dr. Pesic's son, Iliya Pesic, as Executive Chairman of the Board, and Dave Dutton as CEO. Misha is still here 26 years later.

30th anniversary video (2 minutes):


ARM and the Law of Accelerating Returns!
by Daniel Nenni on 07-30-2015 at 4:00 pm

ARM is one of the companies I have had professional experience with during my storied semiconductor career, so I know some of their history first hand. I worked for a physical IP company that was purchased by ARM, and at Virage Logic we competed against ARM head-to-head. ARM is also featured in our book "Fabless: The Transformation of the Semiconductor Industry," and our second book is a deep dive into ARM history, technology, and the evolution of the mobile industry. Book #2 is timed to come out in November, coinciding with ARM's 25th anniversary.

ARM is also on SemiWiki, and one thing I can tell you is that articles about ARM, or even with ARM in the title, get the most views, absolutely. So naturally I follow ARM closely, attend their events, and listen to the quarterly earnings calls. The most recent one includes a 3,000-word vision statement by ARM CEO Simon Segars which is definitely worth a read. Most opening CEO statements are less than 1,000 words, by the way.

If you haven't met Simon you really should make the effort, as he is a true visionary, passionate about technology, and very approachable. Simon was employee #16 at ARM and led the development of early products such as the ARM7™ and ARM9™ CPUs that powered the first digital mobile phones, so he knows a thing or two about the mobile business.

The first 900 or so words are the company update but then Simon gets to the vision part:

“It’s actually quite incredible to look back over the last five years and think about the journey that smart mobile devices have taken and we take for granted everyday now we get access the internet, get all the data that you want, whenever you want it on this thing that sits in your pocket the whole time and let’s face it if you left it at home when you walked out of the house in the morning, you’d go back to get it. It’s incredible just how ubiquitous that has become in developed countries. But you just go back five years and think about the device you were carrying then, I personally was using a Blackberry back then, clearly basic Blackberry and it’s great for email, but couldn’t stream video to it, the mapping solution in it was very primitive. Rewind forward to today, we’ve got very sophisticated true mobile computers, higher resolutions screens, great processors, high speed data connectivity, seems like a very different world from only five years ago.”

ARM Holdings’ (ARMH) CEO Simon Segars on Q2 2015 Results – Earnings Call Transcript

I too was using a BlackBerry five years ago, and now I buy a new smartphone every year just to keep up with technology. Simon continues with a vision of the future and why smartphones will not stagnate the way PCs did, a view I agree with wholeheartedly. A recent interview with futurist Ray Kurzweil on the Business Insider website (one of my daily reads) explains it nicely:

“The reality of information technology is it progresses exponentially, 30 steps linearly gets you to 30. One, two, three, four, step 30 you’re at 30. With exponential growth, it’s one, two, four, eight. Step 30, you’re at a billion.”
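
The arithmetic behind that comparison, for the record:

```latex
\underbrace{1+1+\cdots+1}_{30\ \text{steps}} = 30
\qquad\text{versus}\qquad
2^{30} = 1{,}073{,}741{,}824 \approx 10^9
```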

Ray is all about futurology and has made predictions that have already come true, such as a computer beating a human at chess, and some that are still pending, like fully autonomous cars. Today Ray is an Engineering Director at Google, working on something cool, I would bet. For more information on Ray you can read his extensive Wikipedia page HERE.


A Key Aspect Missing for IoT to become NBT
by Pawan Fangaria on 07-30-2015 at 12:00 pm

The IoT (Internet of Things) is not one product, technology, segment, or market. It's a combination of many things, many markets, and many technologies. However, it is one thing that needs to connect everything together: edge device to gateway to cloud. That is where the complexity comes in: how is that possible amid large heaps of heterogeneous devices, multiple M2M protocols, multiple communication protocols, large amounts of data transport, security issues, and so on, across the world? How can IoT become the Next Big Thing without addressing these issues?

Okay, large data centers can be built to handle big data. Technology processes and specialized chips with low power, low energy, and low cost are already around the corner; some have already been developed for IoT applications. But does that hardware meet the right software, software that can address these issues at scale for seamless and secure connectivity between devices and the internet across the world? There is no single answer to that, but multiple answers. Everyone is working towards finding the right solution; there are multiple open-source M2M protocols, communication protocols, and company-sponsored open IoT platforms in the works today. Earlier this year, I made a forum post on SemiWiki about IoT standards (linked at the end of this article) in which I discussed these.

There is a general consensus that different verticals in the IoT market have different requirements. I would expect at least a common standard for each vertical, with ways to cross horizontally between them to satisfy important IoT requirements such as security, communication, data processing, and so on. There has been good effort to design smart and cost-effective IoT edge devices, but what happens when you have to equip them to work with multiple wireless technologies because different regions of the world have different standards? The cost increases, which conflicts with IoT market growth. A common worldwide standard could significantly reduce cost from various angles and accelerate the IoT market.

As I said in my forum post, market forces will determine who wins among the several IoT platforms and standards now evolving. This is not something that is already known, where a standard can simply be agreed upon by discussion. It has to evolve, and since there is a big opportunity ahead, whoever wins becomes the standard. Let's look at some of the progress made in this direction.

Qualcomm developed the AllJoyn platform and released it for open development. Many manufacturers have joined the AllSeen Alliance, an industry group formed to develop AllJoyn into a standard IoT protocol. Qualcomm is also working to integrate its next-generation peer-to-peer protocols with its new 4G networks through its MuLTEfire technology, which allows running 4G networks on neutral, unlicensed spectrum. The acquisition of CSR brings Qualcomm a powerful Mesh protocol which adds security, a key requirement for IoT, to the Bluetooth standard. Bluetooth is a symmetrical standard that lets short-range devices communicate without any router in between, unlike WiFi. Bluetooth with Mesh can allow any number of devices to securely connect and communicate with each other, providing impetus to IoT. If Qualcomm is able to significantly increase mobile network capacity with its MuLTEfire technology and make 4G cells as common as WiFi, it could be a clear winner in the IoT space. I hope Qualcomm comes out of its current crisis and fulfils its strategic vision for IoT, as it has the right hardware setup in MuLTEfire, Mesh networking, and the AllJoyn platform.

The ARM mbed IoT device platform is another promising common platform for developing IoT devices at scale. ARM's IoT subsystem for Cortex-M processors is available for IoT endpoint development, along with other IP solutions for all points including gateways and cloud servers. Recently, ARM announced a new Quality Assurance Standard for mbed-enabled devices. This platform provides interoperability between mbed-based devices. It could be another winner, as it has the backing of the ARM IP platform that supplies IP for most of the devices in the semiconductor industry.

Intel is investing heavily to build an IoT platform with specialized low-power devices and software. Its IoT platform based on the Quark SoC X1000 series has an open architecture that lets anyone develop customized platforms for different applications. The platform caters to the complete IoT ecosystem from the edge to the data center. Recently, Intel demonstrated this platform working perfectly for the industrial, energy, and transportation segments of IoT.

Also, smartphone makers like Apple and Samsung are investing in developing IoT platforms, and automotive players are gearing up for the automotive segment. What we need is standard platforms for the different verticals. It may turn out that adjacent verticals, for example the wearables, personal health, and consumer segments, can share the same standard.

Market forces will determine the winners. Smaller players will eventually join the larger players and a set of standards will evolve for IoT to become the Next Big Thing. We have great enablers from the EDA space as well with their tools and IP to fuel IoT development.

My forum post on Semiwiki: IoT standards – what, when, the reality and what’s possible

Pawan Kumar Fangaria
Founder & President at www.fangarias.com


Boost the Market for Interposer and 3D ICs with Assembly Design Kits
by Beth Martin on 07-29-2015 at 6:00 pm

The traditional system-on-chip (SoC) design process has fully qualified verification methods embodied in the form of process design kits (PDKs). Why is it that chip design companies and assembly houses have no IC/package co-design sign-off verification process?

Package die are often produced using multiple processes and multiple foundries, which raises the level of complexity, but also increases the need for a process that can ensure these disparate products can be manufactured within a single package. I talked with John Ferguson, Director of Marketing for Calibre DRC Applications at Mentor Graphics, and Tarek Ramadan, a Technical Marketing Engineer for Calibre Design Solutions at Mentor Graphics, about this need for an "assembly design kit" to ensure manufacturability and performance using standardized rules that ensure consistency across a process. Ferguson was presenting the results of a pilot project he did with Qualcomm and STATS ChipPAC to test an assembly design kit (ADK). The benefits of ADKs include reduced risk of package failure, increased packaging business, and increased use of 2.5/3D packages.

Ferguson and Ramadan talked about some chip package verification challenges. They said the new class of packages coming into the market enhances the interactions between the layers, so there is no clear separation between the traditional die and package, necessitating a unified co-design flow. The wafer-level package (WLP) is a type of chip-scale package (CSP) that enables the IC to be attached face down to the PCB using conventional surface mount technology assembly methods. The chip’s pads connect directly to the PCB pads through individual solder balls. The die may be mounted on an interposer upon which pads or balls are formed, like with flip chip ball grid array (BGA) packaging, or the pads may be etched or printed directly onto the silicon wafer, resulting in a package very close to the size of the silicon die. WLP technology differs from other ball grid array, leaded, and laminate-based CSPs in that no bond wires or interposer connections are required. The main advantages of the WLP are a small package size, a minimized IC-to-PCB inductance, and a shortened manufacturing cycle time.

However, they said, there are a lot of challenges in verifying these packages. For a single 28nm chip, for example, everything is on the same die, and therefore when you run your verification tools every geometry is checked to ensure it conforms to the minimum 28nm requirements. But if you split the technology across separate dies, say one at 28nm for only the critical components and another at a larger technology node, then things get tricky. You can run the process-specific DRC and LVS decks on each die individually, which ensures each die can be manufactured, but how do you ensure that when you put them together into a combined package the result is still correct?

Multiple dies in a package increase the risk of failure and of unforeseen integration issues, especially considering that chips in a package often come from different foundries and were verified using different processes, making package failures hard to identify and fix. The motivation for the ADK project, then, is to characterize package processes and requirements better, to avoid ad hoc solutions from designers and assembly houses.

So what is included in an assembly design kit? Ferguson and Ramadan said we can compare it to the well-known process design kits to draw some parallels. The primary key to a process design kit is the sign-off requirements. For a given process this consists of two things: DRC rule files and device models. The DRC decks, along with a qualified tool, ensure manufacturability of the die in the process. The device models enable simulation, so you know the die meets its electrical behavior and performance goals. Of course, Ferguson pointed out, simulation of the designed circuitry alone does not ensure that the chip will work once manufactured, so an LVS step is needed. Over the years, this sign-off-level analysis has grown to include parasitic extraction and DFM checking to capture the most complex manufacturing mechanisms at the leading advanced nodes.
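
As a toy illustration of what a single DRC rule encodes, here is a minimal Python sketch of a minimum-spacing check over axis-aligned rectangles. Real sign-off decks express thousands of such constraints in a dedicated rule language for a qualified tool such as Calibre, not in Python.

```python
def min_spacing_violations(shapes, min_space):
    """Report pairs of same-layer rectangles closer than min_space.

    Each shape is (x1, y1, x2, y2) with x1 < x2 and y1 < y2.
    """
    violations = []
    for i in range(len(shapes)):
        for j in range(i + 1, len(shapes)):
            ax1, ay1, ax2, ay2 = shapes[i]
            bx1, by1, bx2, by2 = shapes[j]
            dx = max(bx1 - ax2, ax1 - bx2, 0)  # horizontal gap (0 if overlapping)
            dy = max(by1 - ay2, ay1 - by2, 0)  # vertical gap
            gap = (dx * dx + dy * dy) ** 0.5   # edge/corner separation
            if 0 < gap < min_space:            # gap == 0 is an overlap rule, not spacing
                violations.append((i, j, gap))
    return violations

# Two shapes 0.03 um apart violate a 0.05 um spacing rule:
print(min_spacing_violations([(0, 0, 1, 1), (1.03, 0, 2, 1)], 0.05))
```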

Another function of a PDK is to help define one or more validated design flows. Device pcells, for example, simplify the placement of devices in layout configurations known to work. Place and route technology files, layer mapping files and more all make it easier for a designer to start working directly within a well-established EDA eco-system.

So like PDKs, they said, ADKs will need to start with sign-off practices. A designer will need a way to validate that the package assembly they've created will work as expected once manufactured. To ensure the assembly can be properly manufactured, some form of sign-off DRC-like analysis is required. Similarly, to validate electrical behavior and performance, some method is required to extract the electrical netlist of the designed assembly and pass it to simulation and analysis. These methods should be independent of any specific design tool used to create the assembly, and they must be validated by the package assembly/outsourced assembly and test (OSAT) company. In addition, when combining dies into a package, there are other potential failure mechanisms that are less prevalent when reviewing a die stand-alone. For example, thermal interactions between dies, or stress induced on a die by its packaging, may be other areas where you need sign-off verification.

But a design kit is more than just the signoff requirements, Ferguson pointed out. Designers need validated technology files for the design creation tools, just like we have for place and route and custom design tools in the IC space today. And then there are the design-for-test strategies. If an assembled package fails, how do you trace back to the root cause of that failure?

So, Mentor, Qualcomm, and assembly house STATS ChipPAC collaborated to develop a prototype assembly design kit for 2.5/3D IC packages. The goal was to create a method for presenting a fully-stacked system that included both DRC and LVS performed on each fabric independently AND at the interfacing level (die-to-die, die-to-package, etc.). The assembly design kit needed to handle multiple IC and package layout design formats and needed to provide support for assembly and stress rule checking.

The design they used for feasibility testing was a side-by-side package using an embedded Fan-Out Wafer-Level Packaging (FOWLP) technology to support multi-die integration (Figure 1).

For their project, Qualcomm wrote the design rules that defined what the assembly should look like, including how to perform DRC and LVS comparisons on packages. Design rules had to be created that addressed package-specific requirements, including specifics such as the size and spacing of package wires. Figure 2 shows some typical design rule checks that might be included in an assembly design kit.

STATS ChipPAC wrote rules for the TYPES of elements and configurations permitted in the package. This includes die-to-die edge, die-to-package edge, die-to-package alignment, corner rules, etc. Mentor’s role was to enhance the syntax in the Calibre® 3DSTACK tool to bring the two rule sets together and provide rule checking capabilities (Figure 3).


In the assembly design kit LVS process, they used virtual dies to test package layouts prior to assembly. The package doesn't come with a netlist, but a spreadsheet "netlist" works well; this format can contain electrical connection information as well as pin locations. By expanding Calibre 3DSTACK to support such formats, you can verify the routing connections in the package between die pins and BGA pins. It also enables the extraction of an assembly-level netlist which, used in conjunction with the process-specific chip LVS or PEX results, can generate a full assembly-level netlist for feeding into downstream simulation and analysis tools.
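
To make the idea concrete, here is a hedged Python sketch of such a spreadsheet-style "netlist" check. The CSV columns, component naming, and helper functions are invented for illustration; the actual Calibre 3DSTACK input format is defined by Mentor, not by this sketch.

```python
import csv
from collections import defaultdict

def nets_from_csv(path):
    """Group pins by net name: net -> list of (component, pin, x, y)."""
    nets = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):  # assumed columns: net, component, pin, x, y
            nets[row["net"]].append(
                (row["component"], row["pin"], float(row["x"]), float(row["y"]))
            )
    return nets

def unrouted_die_nets(nets):
    """Flag nets that touch a die pin but never reach a BGA ball."""
    bad = []
    for net, pins in nets.items():
        components = {comp for comp, _, _, _ in pins}
        if any(c.startswith("die") for c in components) and "bga" not in components:
            bad.append(net)
    return bad

# Usage: print(unrouted_die_nets(nets_from_csv("package_netlist.csv")))
```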

Ferguson and Ramadan said the project went quite well, and they presented the results in a session at DAC 2015. Using Calibre 3DSTACK, STATS ChipPAC created a rule file for their FOWLP process that can be used by any designer targeting this package technology at this assembly house, regardless of which processes the dies use or how many dies are in the package. The rule file checks the manufacturing constraints of the package RDL and the die-to-die constraints, and verifies the connectivity through the package from die to die and from die to BGA. It is entirely independent of any specific design tool used to generate the package.

The take-away message, Ferguson said, is that an assembly design kit provides a standardized process both chip design companies and assembly houses can use to ensure the manufacturability and performance of IC packages. This has been a gap in the IC supply chain. Using an assembly design kit can reduce risk of package failure, while also reducing turnaround time for both the component providers and assembly houses. By implementing a repeatable, proven process, all participants can improve both their first-time success rate and overall product quality.

Thanks to John Ferguson and Tarek Ramadan of Mentor Graphics.


Cadence 2015 Q2 Results
by Paul McLellan on 07-29-2015 at 6:00 pm

Let's start by getting the financial stuff out of the way. Revenue was $416 million; non-GAAP operating margin was 28%; non-GAAP EPS was $0.27; and operating cash flow was $122 million (up a lot: it was just $47M in Q1 and $69M in Q2 of 2014).

What the financial types are most interested in is the change to Cadence's stock repurchase program. In CFO Geoff Ribar's words: "We are replacing our current $450 million stock repurchase program with a new program to repurchase $1.2 billion of our shares over the next six quarters through the end of 2016. The actual timing and amount of repurchases will be based on business and market conditions, corporate and regulatory requirements, acquisition opportunities, and other factors. One such factor is the settlement of our warrants which begins in September of this year and extends through early December."

What that all means is that they will repurchase about $1.2B of Cadence stock, mostly during 2016. But readers of Semiwiki are less interested in financial legerdemain than information about product and insight into how the results reflect on market conditions for Cadence’s customers.

Let's start with geographies and product segments. Geoff Ribar again: "Cadence had a strong Q2. Total revenue was $416 million, up 10% compared to $379 million for Q2 of 2014. The revenue mix for the geographies was 48% for the Americas; 23% for Asia; 20% for EMEA; and 9% for Japan. Revenue mix by product group was 21% for functional verification; 29% for digital IC design and signoff; 27% for custom IC design; 11% for system interconnect and analysis; and 12% for IP."

In some ways the most impressive number is 12% for IP. Cadence wasn’t really in the IP business until the acquisition of Denali, and even then it only got seriously into the IP business in the last three years since Martin Lund joined from Broadcom in 2012. Synopsys, on the other hand, has been in IP for 25 years and I think it is around 20% of their business (off a larger revenue number, to be fair).


Cadence made two major announcements during the quarter:

  • Genus, which is their next-generation synthesis product, already endorsed by Imagination Technologies and Texas Instruments. Like the other -us products, it has been re-architected to take advantage of large numbers of cores. They claim a 5X performance improvement and the capacity to handle 5 million instances (a rough rule of thumb is that an instance is 4 gates, so this is roughly 20 million gates).
  • Indago debug platform and three apps:

    • Indago Embedded Software Debug: Resolves bugs associated with embedded software applications by synchronizing software and hardware source code debug
    • Indago Debug Analyzer: Extends root-cause analysis from e testbench (IEEE 1647) to SystemVerilog (IEEE 1800) and increases performance by up to 10X
    • Indago Protocol Debug: Visualizes advanced protocols such as DDR4, ARM AMBA AXI and ACE using Cadence VIP for intuitive debugging

Innovus, the new physical design system, continues to make progress with customers: "Qualcomm Technologies, NVIDIA, STMicroelectronics, and Faraday Technology have joined ARM, Freescale, Juniper and others in adopting Innovus for production design at the most advanced nodes, benefiting from excellent quality of results and faster turnaround time."

I still think there is some gamesmanship going on here, and that most customers are using multiple physical design systems rather than any one supplier exclusively. As the Dodo says in Alice in Wonderland, "Everybody has won, and all must have prizes."

Palladium XP won six new logos (no details on who), and a big driver for emulation (in general, not just at Cadence) is dynamic power analysis, especially for mobile. Pre-production testing of the next generation is taking place now, on track to start shipping before the end of the year.

One interesting area to watch is industry consolidation: NXP and Freescale; Avago, LSI and Broadcom; Tsinghua in China. In the short term this doesn't have much effect, since the contracts are all in place and the number of designs in progress does not change fast. But in the longer term, if the number of companies in the semiconductor ecosystem declines, it might have a gradual, negative effect on the EDA and IP industries (and perhaps foundry too). As Lip-Bu said: "Long-term impact of this on our industry is complex and difficult to predict. While we do not expect material impact near-term, consolidation could pose a challenge to industry growth over the next few years."

During the Q&A Lip-Bu talked about EUV for a couple of minutes: "When you move down to 5-nanometer, clearly double, triple patterning may not be enough. Then you're starting to really look at the EUV for 5-nanometer from my humble experience in a critical to have EUV. And I'm very pleased to see the ASML, the EUV are making progress. They can go up to 80 watts now and they go up to 500 wafer per day; that is extremely encouraging. Then you're starting to look at TSMC NTR progress on the EUV side and also some the photo-resist related development. So we keep a very close eye on this whole group maps and the process technology and we also work closely with equipment, semiconductor equipment company to make sure ready."

I think that lost something in the transcription. But the basic facts match what I learned at imec (which Lip-Bu was also attending, since he was one of the presenters): ASML is making significant progress, but there are still major challenges. It is not clear to me whether there are any major implications of EUV for EDA; more likely, the implications come if and when we don't have EUV and have to go to very high mask counts for some layers (octuple patterning, anyone?).

Cadence has been hiring. They had a shutdown during the week of July 4th (which was the last week of Q2 for them). But: "we're continuing to add engineering headcount and technical sales headcount. All we want to be clear is that when we leave 2015, we are going to be at a higher expense rate than we were during the beginning of the year."

Transcript of the call on SeekingAlpha is here.


The Evolution of Smart Glass Design
by Majeed Ahmad on 07-29-2015 at 12:00 pm

The wearer says, "O.K., Glass," and Glass leaps into action, performing most smartphone functions: checking e-mail, taking photos and videos, providing turn-by-turn navigation, and making and receiving phone calls. Welcome to Smartphone 2.0.

Technology pundits called Google Glass the best thing to happen to augmented reality since the iPhone. What is augmented reality? In this case, we can say it's the interface between wearable computing and the Internet of Things (IoT).


Google Glass: A marvel of embedded vision technology

Google Glass itself hasn’t been a smashing consumer success because of a number of strategic missteps, including a high price tag, lack of compelling applications and a poorly defined value proposition. It was a product ahead of its time when its prototype was launched back in early 2013.

However, it’s a revolutionary embedded design that has single-handedly created a new product category of Internet-hooked appliances: smart glasses. The new wearable product category—also labeled as smart eyewear—has attracted consumer electronics giants such as Epson, Intel, Microsoft and Sony as well as a new breed of Kickstarter outfits like Meta and Glassup.

The arrival of these 1.0 products is driving a gold rush in augmented reality, computational photography, and visual perception and analytics applications. However, this technological marvel is still in search of a cause, a.k.a. utility, and at the same time is fighting a few design conundrums. And the two issues are intertwined: the success of smart glass use cases is closely tied to the evolution of product design.

Anatomy of Smart Glass

A smart glass can, for instance, help people with impaired sight navigate their surroundings. It can also give workers on-the-go access to computing and corporate data about a warehouse, a product manual, a sales demo, and more. However, the design of a smart glass is a balancing act between a sleek industrial design, robust processing performance, and energy efficiency.

Early designs like Google Glass used a single camera. However, the thicker form factor and dual-lens arrangement of a smart glass design provide a natural premise for dual-camera stereoscopic designs. The form factor also suits depth-discerning sensors that can complement object recognition tasks through high dynamic range and advanced pixel interpolation.


Smart glass uses internal and external sensors to generate information

Smart glasses employ sensor fusion across components like GPS, accelerometers and gyroscopes. With a robust vision processor, they can run object detection and matching technologies and accurately discern the finely detailed gestures used to control the device's various functions.

Moreover, depth sensors can facilitate selective refocus on a portion of a scene during the post-image-capture stages. Then, there are 3D imaging technologies that can generate a depth map on the wearable device, and use the point cloud map for image classification and estimation in cutting-edge applications like augmented reality.

Glass’ Design Conundrum

The common perception about connected wearable design is that a device like a smart glass can simply tether processing-heavy tasks, such as object recognition and gesture interfaces, to a smartphone, or that these vision processing functions can be conveniently moved to the cloud. That popular design premise deserves a serious review because, first and foremost, it's imperative for a connected wearable device to hold some degree of intelligence in order to avoid becoming a dumb terminal.

Furthermore, smart wearables must carry some processing capability to reduce the amount of data transferred to a smartphone over a Bluetooth or Wi-Fi link. Likewise, sending raw video to the cloud over a cellular broadband connection increases both cost and power consumption. Large data transfers consume power and drain the wearable device's batteries, which are much smaller than those of smartphones.

Smart glasses need sleek batteries and power-efficient chips to last a full day of usage. However, while new battery technologies are still far from commercial realization, semiconductor IP companies like CEVA now provide a path to power efficiency through specialized vision processing solutions that free up the CPU and GPU for their original design tasks.

CPUs and GPUs initially took on image-processing tasks, but dual-camera designs and advanced sensor capabilities in smart glasses increasingly demand dedicated vision processing solutions. Vision processing—the workhorse of smart glass operations—uses powerful algorithms for sophisticated image and scene analysis that in turn require a significant amount of computation.


Next-generation vision applications demand a specialized processor

Take object detection and matching, for instance, which typically use SURF and SIFT algorithms; these tasks are now moving to more advanced deep learning technologies like CNNs to meet the needs of 3D vision, computational photography and visual perception. CEVA's XM4 imaging and vision processor IP is designed to offload the CPU and GPU from compute-intensive algorithms for image enhancement, computational photography and computer vision.
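
For a flavor of the classical feature-based approach, here is a minimal matching sketch in Python with OpenCV, using ORB as a freely available stand-in for the SURF/SIFT family; the image paths are placeholders, and none of this reflects CEVA's implementation.

```python
import cv2

def match_object(query_path, scene_path, max_matches=25):
    """Find the best ORB keypoint matches between an object and a scene."""
    query = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    scene = cv2.imread(scene_path, cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=1000)
    kp_q, des_q = orb.detectAndCompute(query, None)
    kp_s, des_s = orb.detectAndCompute(scene, None)

    # Hamming distance suits ORB's binary descriptors
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_q, des_s), key=lambda m: m.distance)
    return matches[:max_matches]

# Usage: matches = match_object("object.png", "scene.png")
```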

The instruction set in the CEVA-XM4 vision processor is optimized and defined for computer vision technology. It has a number of features optimized for bandwidth transfer—such as random access parallel load—and that leads to a smaller DSP with a far better cycle count. That, in turn, results in lower power consumption compared to imaging solutions based on GPUs and ARM+Neon settings.

Wearable devices like smart glasses can bring a renewed push toward computer vision and computational photography by employing advanced camera subsystems that carry out image capture and vision processing in a power-efficient manner. The integration of intelligent vision processor IP like the XM4 into smart glass system-on-chips (SoCs) offers exactly that: robust processing performance at affordable power consumption.

Also read:

CEVA-XM4 White Paper

Google Glass: The Second Coming and a Brief History

Apple Watch Design Revisit with a Wi-Fi Twist

Majeed Ahmad is author of books Smartphone: Mobile Revolution at the Crossroads of Communications, Computing and Consumer Electronics and The Next Web of 50 Billion Devices: Mobile Internet’s Past, Present and Future.


The Antiportfolio
by Paul McLellan on 07-29-2015 at 7:00 am

Last week Charlie Cheng of Kilopass wrote about venture capital for semiconductor. This reminded me of something amusing that I came across years ago.

See also VC For Semiconductor: Dead or Alive?

All VCs have a portfolio page and often a second exit page. The first shows the companies in which they currently hold investments and the second is their boasting page where they list the companies that they either took public or sold and sometimes with some big numbers…especially if they were early investors in Cisco or Apple or eBay.

As far as I know, only one VC is brave enough to have an anti-portfolio page: companies that pitched to them, in which they never invested, and which went on to be huge successes. Of course all venture capitalists have stories like that, but usually it takes a couple of glasses of wine before they tell you.

Bessemer have an interesting history. They are actually the oldest VC on the planet. You have probably heard of the Bessemer converter that used to be used in steelmaking, invented by Englishman Sir Henry Bessemer. Yes, it is the same name for the same reason: "In 1872, Henry Phipps, Jr. and Andrew Carnegie co-founded Carnegie Steel, an innovative steel producer that commercialized an industrial process licensed from Lord Henry Bessemer. When they sold their startup 29 years later, Henry formed a family office to re-invest his proceeds into other entrepreneurial ventures like his own. He adopted the Bessemer name to honor the inventor behind his startup's success."


Bessemer have had some great exits over the years. They provided early funding for companies like Ingersoll Rand, W.R. Grace and International Paper, nobody's idea of a startup these days. More recently they invested in and took public LinkedIn, Ciena, Maxim, Skype and more. But as they say, their "long and storied history has afforded our firm an unparalleled number of opportunities to completely screw up."

On their antiportfolio page they tell you about the ones that got away. Here are a few of the most notable:

  • Apple: Bessemer was offered a position in pre-IPO stock at a $60M valuation. Neill Brownstein called it "outrageously expensive." If they had taken that position and held it all the way to the present day, that would have to be insanely great.
  • eBay: "Stamps? Coins? Comic books? You've GOT to be kidding," thought Cowan. "No-brainer pass."
  • Facebook: Jeremy Levine spent a weekend at a corporate retreat in the summer of 2004 dodging persistent Harvard undergrad Eduardo Saverin’s rabid pitch. Finally, cornered in a lunch line, Jeremy delivered some sage advice “Kid, haven’t you heard of Friendster? Move on. It’s over!”
  • Intel: Pete Bancroft never quite settled on terms with Bob Noyce, who instead took venture financing from a guy named Arthur Rock.
  • FedEx: they passed…7 times

And in the all-star miss, one of the partners, David Cowan, had a college friend who had rented out her unused garage to a couple of students: "In 1999 and 2000 she tried to introduce Cowan to 'these two really smart Stanford students writing a search engine'. Students? A new search engine? In the most important moment ever for Bessemer's anti-portfolio, Cowan asked her, 'How can I get out of this house without going anywhere near your garage?'"

The Bessemer Ventures Antiportfolio.


UTBB SOI can scale down to 5nm says Skotnicki…
by Eric Esteve on 07-29-2015 at 12:00 am

…and FinFET down to 3nm. This assertion is the result of extensive research by Thomas Skotnicki, ST Fellow and Technical VP, Disruptive Technologies, leading to numerous publications, such as papers in IEEE EDL in 1988 and IEEE TED in 2008. I say extensive; I should also say long, very long, as it took almost 30 years for the industry to recognize that such FD-SOI devices can finally compete with bulk and FinFET. A good illustration is the fact that ST first developed 28nm FD-SOI technology in 2011, followed by Samsung licensing the technology in 2014, then GlobalFoundries licensing 22nm FD-SOI in 2015… and ST now offering 14nm FD-SOI.

Why does the planar transistor fail below the 20nm technology node? The answer is short channel effects (SCE), and Drain Induced Barrier Lowering (DIBL) is the short channel effect with the highest impact: in short-channel devices the drain is close enough to gate the channel, so a high drain voltage can open the bottleneck and turn on the transistor prematurely. If you look at the three devices on the left—bulk, PD-SOI, and thick-BOX SOI—you can see that theory predicts a minimum effective gate length in the 30nm range. The effective gate length (Lel) is commonly used to name a technology node (i.e., 40nm, 28nm, etc.) and is preferred to the drawn gate length for marketing reasons, as it's lower and sounds more aggressive. Skotnicki calculates the DIBL for the bulk device at 140 mV: the threshold voltage is lowered by 140 mV.
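
For reference (this is the standard textbook definition, not taken from Skotnicki's slides), DIBL is usually quantified as the threshold-voltage shift per volt of additional drain bias; the 140 mV figure above is the resulting Vth lowering at the operating drain voltage:

```latex
\mathrm{DIBL}
  = \frac{V_{th}\!\left(V_{DS}^{\mathrm{lin}}\right) - V_{th}\!\left(V_{DS}^{\mathrm{sat}}\right)}
         {V_{DS}^{\mathrm{sat}} - V_{DS}^{\mathrm{lin}}}
  \quad [\mathrm{mV/V}]
```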

If the planar transistor fails, the industry needs to find new CMOS technologies in order to develop smaller-geometry devices, the only way to benefit from faster and lower-power transistors (I didn't say cheaper) and design higher-performance circuits. At this point, you may think: if silicon CMOS is reaching its physical limits, why not use a completely different device technology, based on III-V materials for example? We know that GaAs (gallium arsenide) exhibits much higher electron mobility than silicon: 8,500 versus 1,400 cm²/V·s. Why not design a processor on GaAs, theoretically running at 15 GHz when the same design on silicon would be limited to 2.5 GHz?
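
The 15 GHz figure follows directly from that mobility ratio:

```latex
\frac{\mu_{\mathrm{GaAs}}}{\mu_{\mathrm{Si}}} = \frac{8500}{1400} \approx 6,
\qquad 6 \times 2.5\ \mathrm{GHz} = 15\ \mathrm{GHz}
```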

The answer is oxidation: if you take a raw, pure silicon wafer and leave it in a room, a SiO2 layer will naturally grow on it. Obviously, in a fab you grow silicon oxide using much more sophisticated techniques, and you do it several times to eventually produce the desired IC. Now take a raw GaAs wafer and try to grow an oxide on it: the (defunct) Laboratoire Electronique de Philips (LEP) spent millions of dollars and many years trying to do this in the '80s, and never succeeded. I know because I was working at LEP in 1983, and I still remember that it was the first priority for this research center. With no native oxide, you will never be able to build GaAs-based ICs as complex as silicon CMOS allows, at least at a reasonable cost. Let's come back to silicon.

If you now look at the three boxes on the right side, you see that the limit for Ultra-Thin Body and BOX (UTBB) SOI is 7nm, going down to 5nm for "ultimate UTBB SOI" and 3nm for FinFET. UTBB implies using a 5nm BOX thickness, the height of the buried oxide layer in the SOI wafer (SOITEC is the supplier of such wafers). Ultimate UTBB defines a kind of theoretical limit for FD-SOI, where the buried oxide has to be as thin as the gate oxide, or very nearly so. FinFET technology leads the pack, with a theoretical limit at 3nm, and we will see why that's not a surprise (at least for Skotnicki; to be honest, it was a surprise for me) by looking at the next picture.

On the left side we start with the UTBB FD-SOI model: in blue, the bulk silicon; in yellow, the oxide deposited on the bulk; and in pink, the source and drain of the transistor. Then, in yellow again, the gate oxide, with the gate itself on top of this thin oxide layer. If you manipulate this active structure as indicated by the three intermediate pictures, the final device becomes… a FinFET. I agree that this manipulation requires some imagination, and some ability for theoretical 3D visualization, but it's interesting to notice that the two emerging technologies that overcome the issues of the planar transistor are sisters.

At this point, you may think that this theoretical demonstration by Skotnicki is superb, but is it production proven, or will it remain a paper exercise? Better than a long talk, the answer is in the picture above. On the left side you see a 28nm FD-SOI device (gate length = 25nm = BOX height), which is production proven, as ST has processed ASICs in this technology for both internal use and customers. The blue arrow illustrates the FD-SOI roadmap to 10nm (or maybe 14nm as a marketing label); it may start in a pilot fab, but it will certainly end up in production.

Encore Bravo, Mr Skotnicki!

This post was written from "The Success Story of FD-SOI – From Equation to Fabrication," a presentation given by Thomas Skotnicki during the FD-SOI Workshop at the LETI Days, June 2015.

From Eric Esteve from IPNEST