For high-volume manufacturing at 10 nm and below: technology and friendship
by Beth Martin on 09-03-2015 at 4:00 pm

The technology for 10 nm is settled, but what about 7 nm and 5 nm? Those nodes will happen with silicon-based CMOS and 193nm immersion lithography, but exactly how is still being worked out. Right now, though, the focus is on getting 10 nm chips into high-volume production. TSMC and Intel both claim to be on track for high-volume manufacturing (HVM) of 10 nm FinFET processors by 2017. You can see in the image below that it is a real “dotted-line” prediction.

Achieving HVM past 14 nm and getting to 5 nm was the topic of a panel session at this year’s SEMICON West. I talked to panelist Juan Rey, senior engineering director at Mentor Graphics, about EDA’s role in the march to the ‘Last Node Ever.’

Meaningful discussions of process nodes must include viewpoints from all niches of the semiconductor ecosystem. This panel included experts from IMEC (semi process research), Soitec (semiconductor materials), Intermolecular (materials, process, HVM), LAM (fab equipment), Applied Materials (fab equipment), and Mentor Graphics (EDA).

An early question to the panelists was about the options for improving devices. An Steegen from IMEC talked about some recent R&D areas, including SiGe and GeAs high-mobility channels, under-etching, bi-layer graphene, and vertical silicon nanowires. Those silicon nanowires are also called gate-all-around transistors, and are a likely replacement for today’s FinFETs at 7 nm and 5 nm.

Christophe Maleville of Soitec focused on FDSOI as a path to HVM for 10 nm. The materials and equipment panelists talked about atomic-level etch and deposition, and also about improvements in scanning electron microscopy (SEM) used for metrology. As an EDA person, I found the materials discussions fascinating, but a little out of my wheelhouse. However, I suspect I’ll be learning more about these emerging technologies, particularly the gate-all-around transistors.

The final panelist in the session was Juan Rey from Mentor Graphics, who talked about the challenges at the interface of design and manufacturing. That ever-shifting borderland is where Rey spends most of his time. While Mentor’s work is highly technical, he started with a higher-level discussion about the need for better communication through the ecosystem. He says that the iterations between foundries and their partners must happen sooner, and in smaller, faster cycles. That is, the path to HVM will be accelerated when there is tighter cooperation between EDA, IP providers, and foundries.

He also suggested that foundries need to restrict the number of partners they engage with, so they can form deeper relationships with fewer key partners. On the more technical side, Rey said that EDA companies constantly work on multiple enabling technologies without always knowing which one will work, or which one will be widely adopted. Mentor, for example, has software for directed self-assembly (DSA), EUV, multi-patterning (both SADP and LELE), etc.

Take physical verification, for instance. The chart shows the proliferation of technology for physical verification by node. Each technology was developed to address specific design and manufacturing challenges: lithography limitations required resolution and mask enhancements, planarity and loading effects called for new fill methods, FinFETs needed whole new device and interconnect models, and so on.

Integration between tools is also important, said Rey. He used examples of how Calibre verification based on sign-off foundry decks is interfaced with all major digital place-and-route tools and custom design tools. These tight interfaces improve design flow productivity, particularly for multi-patterning, by essentially dropping the full Calibre signoff capabilities into the design and edit environment, and that kind of tight integration can make a big difference in design productivity.

For more on productivity improvements through better tool interactions, you can download a free whitepaper on STMicroelectronics’ experience using Calibre RealTime on a 14nm FDSOI design.

Another issue Rey discussed is encryption to protect foundry IP. The problem is that EDA tools need lots of information on IP to generate accurate verification results, but the foundries are reluctant to provide such sensitive IP information in rule decks. The solution, says Rey, is partial deck encryption. This makes the foundry’s sensitive IP information unreadable to humans, but perfectly clear to the EDA tools.
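To make the idea concrete, here is a minimal sketch of what partial deck encryption amounts to: sections of a rule deck marked as sensitive are replaced by ciphertext that only a tool holding the key can decrypt and use, while the rest of the deck stays human-readable. The pragma markers, deck content, and key handling below are illustrative assumptions written in Python, not Calibre’s actual mechanism or syntax.

```python
# Illustrative sketch only: the pragma markers and deck content are invented
# for this example; real foundry decks use vendor-specific encryption support.
from cryptography.fernet import Fernet

ENCRYPT_BEGIN = "#PRAGMA ENCRYPT"   # hypothetical marker
ENCRYPT_END = "#PRAGMA ENDCRYPT"    # hypothetical marker

def encrypt_sensitive_sections(deck_text: str, key: bytes) -> str:
    """Replace marked sections with ciphertext an EDA tool could later decrypt."""
    cipher = Fernet(key)
    out, buffer, hiding = [], [], False
    for line in deck_text.splitlines():
        if line.strip() == ENCRYPT_BEGIN:
            hiding, buffer = True, []
        elif line.strip() == ENCRYPT_END:
            token = cipher.encrypt("\n".join(buffer).encode()).decode()
            out.append(f"#ENCRYPTED {token}")   # unreadable to humans
            hiding = False
        elif hiding:
            buffer.append(line)                 # sensitive rules get hidden
        else:
            out.append(line)                    # ordinary rules stay readable
    return "\n".join(out)

key = Fernet.generate_key()                      # in practice, shared by foundry and tool
deck = """LAYER METAL1 1
#PRAGMA ENCRYPT
SENSITIVE_DEVICE_MODEL PARAM_A 0.0123
#PRAGMA ENDCRYPT
RULE M1_WIDTH 0.032"""
print(encrypt_sensitive_sections(deck, key))
```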

The overall take-away from these experts is that there *is* a clear path to HVM for 10 nm, and a somewhat more winding and ambiguous path to 5 nm. The panelists didn’t seem discouraged, but excited about figuring out solutions to the challenges that lie ahead.


Who is Leading in the Wearables Market?
by Daniel Payne on 09-03-2015 at 12:00 pm

My first experience with a wearable device was back in 1978 at college: an LED-based watch that had you push a button to read the time of day, saving battery life. Sad to say, that electronic watch didn’t make it through the January winter at the University of Minnesota, so it was promptly returned for a refund. Fast forward to today, and we see a steady introduction of wearable devices to track our steps, measure heart rate, report the time of day, and even sync with a smart phone to provide us with continuous alerts.

You may have heard about Fitbit, the wearable company that had an IPO back in June of this year. It has a market capitalization of some $6.8B, which is just below the value of Synopsys and above the value of Cadence stock. Their most recent product is called the Fitbit Charge HR, and it provides a step counter, heart rate monitor, caller ID and watch, all in a slender design that fits on your wrist.

Research company International Data Corporation (IDC) follows the wearables market and just published some very interesting data about who is leading. Fitbit is the number one company in the wearables market, so I wasn’t surprised by that number in the report. What did surprise me is that there are so many players in this market that the leadership position is really open to rapid change. For the second quarter of 2015 there were 18.1 million devices sold, an increase of 223.2% from just 5.6 million units one year ago.

Top Five Vendors
I had heard of each of the top five vendors before, but didn’t quite know how they stacked up against each other.

| Vendor | 2Q15 Shipments (millions) | 2Q15 Market Share | 2Q14 Shipments (millions) | 2Q14 Market Share | 2Q15/2Q14 Growth |
|---|---|---|---|---|---|
| 1. Fitbit | 4.4 | 24.3% | 1.7 | 30.4% | 158.5% |
| 2. Apple | 3.6 | 19.9% | 0.0 | 0.0% | N/A |
| 3. Xiaomi | 3.1 | 17.1% | 0.0 | 0.0% | N/A |
| 4. Garmin | 0.7 | 3.9% | 0.5 | 8.9% | 40.0% |
| 5. Samsung | 0.6 | 3.3% | 0.8 | 14.3% | -25.0% |
| Others | 5.7 | 31.5% | 2.6 | 46.4% | 119.2% |
| Total | 18.1 | 100.0% | 5.6 | 100.0% | 223.2% |

Source: IDC Worldwide Quarterly Wearable Device Tracker, August 25, 2015
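The market-share and growth columns follow directly from the shipment volumes; a few lines of Python (my own check, not from IDC) reproduce them to within rounding of the underlying figures:

```python
# Shipment volumes in millions of units, taken from the IDC table above.
shipments = {
    "Fitbit": (4.4, 1.7), "Apple": (3.6, 0.0), "Xiaomi": (3.1, 0.0),
    "Garmin": (0.7, 0.5), "Samsung": (0.6, 0.8), "Others": (5.7, 2.6),
}
total_2q15 = sum(q15 for q15, _ in shipments.values())   # 18.1
total_2q14 = sum(q14 for _, q14 in shipments.values())   # 5.6

for vendor, (q15, q14) in shipments.items():
    share = 100 * q15 / total_2q15
    growth = 100 * (q15 - q14) / q14 if q14 else float("nan")  # undefined for new entrants
    print(f"{vendor:8s} share {share:5.1f}%  YoY growth {growth:7.1f}%")

print(f"Total growth: {100 * (total_2q15 - total_2q14) / total_2q14:.1f}%")  # ~223.2%
```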

Related – Internet of Things and the Wearable Market

Fitbit was founded 8 years ago, and has gone through several generations of products, so it makes sense that they are the number one vendor in the wearable market.

Apple leapt onto the scene with the Apple Watch, announced in September 2014 but not on sale until April 2015. So in just a couple of quarters the Apple Watch has rocketed to the number two spot. The installed base of Apple users has really validated the concept of a wearable that communicates with a smart phone.

At number three is our rising Chinese star Xiaomi with their Mi Band, a very low-cost step tracker that is popular in China and is now expanding into new geographies.

Garmin has been around for years with relatively high-end electronics for mapping and GPS applications; however, their wearables are designed for specific uses like step tracking, hiking, triathlon training, swimming, golfing, flying and boating.

I use a Garmin Edge 520 for my bike computer, but that product doesn’t fit into the wearable category because it sits on your bike handlebars.

Samsung appears to be in big trouble here, because the wearables market is growing but their market share fell in the past year. Maybe having more products in a category isn’t as smart as having a winning product. They offer three watches (Gear S, Gear 2, Gear Live) and a fitness tracker (Gear Fit).


The Others category has a combined shipment volume greater than #1 Fitbit’s, and includes some well-known brands like Pebble, Jawbone and Sony.

Related – Sony Endorse FD-SOI to Attack Wearable & IoT

Summary
The wearables market is characterized by rapid growth in the triple digits, and with many vendors all vying for your wrist it is a fast-changing market. The semiconductor content and sensors in these wearable devices must be cost effective, accurate, and power efficient. Battery life is a huge usability issue, especially for the smart watch category. With foundries going back to power-optimize their 28 nm process nodes, we can expect to see continued battery life improvements in the wearables market.

Related – CEVA creating a wearable IP platform

Read the full press release from IDC here.


Business Models: EDA Is Software But It Used To Be Sold As Hardware
by Paul McLellan on 09-03-2015 at 7:00 am

Business models are really important. Just ask any internet startup company that has lots of eyeballs and is trying to work out how to monetize them. It is a lot easier to get people to use something for free, and much harder to get them to pay for something, especially when they don’t value it much. Different companies that look somewhat similar often have very different business models.

I once sat on a plane next to an executive from HBO. This was in the Sex and the City and The Sopranos era. I asked him how HBO made programs that were so much higher quality than CBS, ABC, NBC and Fox. He said that although it looked like they were in the same business, making programs for TV, their business models were in fact totally different. The big networks sold eyeballs to advertisers: the more eyeballs, the higher the price. So they wanted to make lowest-common-denominator programs that appealed to as wide a range of people as possible. HBO got paid per subscriber. If you subscribed to HBO for a month, they got $3 or whatever the number was.

And here is what he pointed out that I’d never thought of: they didn’t care whether you liked Sex and the City as long as you liked The Sopranos enough that you were not going to drop HBO. A mostly different demographic loved Sex and the City and never watched The Sopranos, but they were not going to drop HBO either, because they needed their Sex and the City fix every week. So it was important to make a few really high-quality programs that together covered all interests, but it was irrelevant whether you liked them all. It is almost as if HBO is wasting effort if you like too many of their programs; they get paid the same if you only like one, provided you like it enough to stay signed up. I am sure today there are many people who would never drop HBO because Game of Thrones will be back for season 6, or who love John Oliver’s show.

Once business models are set, it is very hard to change them. HBO is facing this today, wondering how best to get money from millennials who typically don’t bother with cable. If they switch to pure internet, they risk losing all those $3s from people who don’t watch stuff on the internet (or better still, who pay for HBO as part of a bundle but don’t even watch it). If they don’t, how do they reach the non-cable audience? As more and more people cut the cord (I did), their market is moving from people who watch on Comcast to people who watch on Roku or AppleTV or their phone.

When EDA started it was in the era when there were no separate hardware and software industries. The first wave of EDA was Calma, Applicon and Computervision. They would sell you the hardware with the software already installed. For example, the Calma Graphic Design System (or GDS) was a re-badged Data General minicomputer. You paid a single price to license the hardware and the software in the same way that if you buy a digital camera you don’t pay for the software separately. Hardware was sold as an up-front purchase price and then an annual maintenance of around 15-20% of the original price. That covered the hardware and the software.

As an aside, if that GDS name seems familiar, it is indeed the same GDS as in the layout interchange format GDS II. That is actually the tape backup format from those systems dating back to the late seventies.

See also Old Standards Never Die

In the early 1980s things started to change. It is funny to look back on now, but we were genuinely worried back then about whether people would pay more for software than they did for the hardware it ran on. After all, they never had before. That fear turned out to be unfounded. Initially, expensive software was sold with hardware on which to run it. The DMV companies sold proprietary hardware in the case of Daisy and Valid, or rebadged Apollo workstations in the case of Mentor. Sun workstations came on the scene, and then other offerings from HP and IBM. Everyone already had a Vax. So over time the software got unbundled. But for years the business model continued to be what it had always been: pay an up-front license fee and then a maintenance fee of 15-20% per year.

During the fast-growth early days of EDA this was very convenient for the EDA companies because it front-loaded the revenue needed to grow and fund R&D, but still left a sort of royalty stream going in future years. I believe it was Gerry Hsu who first started to switch users to 3-year deals, having noticed that people seemed to like leasing cars so they got a new one every three years. That business model transition was not entirely smooth; they never seem to be.

In EDA, as a rule, you make money with software that runs for a long time (like static timing or P&R) or that people sit in front of all day (like layout). This naturally creates a reasonable license demand. Other software suffers from what I call the “Intel only needs one copy” problem. If it runs fast then a single copy can be shared around a large organization. Since they are probably not going to pay a seven-figure sum for the license, this creates a problem as to how to grow revenue. If one copy serves a lot of users, it is hard to turn the beachhead of a single copy into true proliferation. When this has happened in EDA various things have been tried: per-tapeout fees, per-named-user fees, bundling it with something where lots of copies are needed. But these all run into the problem that it is hard to change a business model. Users expect to pay for a normal floating license. Anything else will at worst mess up the sale and at best delay it.

The whole industry is now in a mode where deals are done for a 2-3 year period and the revenue is recognized monthly over the period, as I wrote about last week. This gives a lot of predictability, since over 90% of a quarter’s revenue is coming out of previous backlog.
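As a minimal sketch of why that backlog makes revenue so predictable: under ratable recognition, each multi-year deal contributes its value evenly across the months of its term, so any given quarter is mostly the sum of slices from deals signed in earlier periods. The deal values and terms below are invented for illustration, not any vendor’s actual numbers.

```python
# Ratable (time-based) revenue recognition: each deal's value is spread evenly
# over its term in months. Deal values and lengths here are invented examples.
deals = [
    {"value": 36.0, "start_month": 0,  "term_months": 36},   # signed long ago
    {"value": 24.0, "start_month": 12, "term_months": 24},
    {"value": 12.0, "start_month": 30, "term_months": 36},   # signed last quarter
]

def revenue_in_quarter(deals, first_month):
    """Revenue recognized in the 3-month quarter starting at first_month."""
    total = 0.0
    for d in deals:
        monthly = d["value"] / d["term_months"]
        for m in range(first_month, first_month + 3):
            if d["start_month"] <= m < d["start_month"] + d["term_months"]:
                total += monthly
    return total

# Quarter starting at month 33: nearly all of it comes from deals already in backlog.
print(revenue_in_quarter(deals, 33))
```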

See also Synopsys Did 90% of Business From Backlog with A Deal Length of 2.5 Years. Err…What Does That Mean?

The one part of the business that still has a hardware business model is emulation. But that is because it is hardware. Even if the EDA companies wanted to recognize the revenue over 3 years, GAAP will not let them. When the hardware leaves the dock, the booking converts to revenue. Inevitably this makes emulation revenue lumpy and less predictable than software, just like EDA used to be in the early days. In fact, given how much is coming out of backlog, when guidance for the following quarter is given in an earnings call it is largely about how much emulation business they expect to ship that quarter.

You may also have seen the news that Sesame Street is moving to HBO. They want every child to subscribe too! There is even a business model wrinkle here, since Sesame Street is always used as an example of just the sort of program that only PBS could make because no private company ever would. Except now one will be, and since HBO is richer, it will be making twice as many new programs per year as PBS could afford (the new programs will be on PBS too, but delayed a few months). Netflix makes House of Cards. Amazon is making the new Top Gear. When business models are in transition, interesting things can happen.


Four Industries that will be Transformed by GaN
by Alex Lidow on 09-02-2015 at 4:00 pm

In a previous post we discussed a few automotive applications that will be big markets for GaN technology. But this is just a small part of the GaN story!

GaN transistors such as eGaN FETs from EPC are available today with performance 10 times better than the best commercial silicon. What happens when several devices are integrated to create a system on a single chip? What happens when the performance of that chip is 100 times better than silicon?

In this posting we will look out 5 to 10 years and see how a transformative change in semiconductor technology will transform our world in almost every way.

Transforming Space

Power converters used in harsh environments, such as space, high-altitude flight, or high-reliability military applications, must be resistant to damage or malfunctions caused by radiation. eGaN FETs today perform 40 times better electrically while withstanding 10 times the radiation compared with the aging Rad Hard power MOSFET. This enables entirely new architectures for satellite power and data transmission. Elon Musk, CEO of SpaceX, has made it his mission to reduce the cost of putting objects in space by a factor of 10. With eGaN technology applied to satellites we can reduce the size of the electronics, eliminate the shielding required, and greatly improve the performance of the data communications. This eliminates solar panels, makes the entire system smaller and lighter, and extends the life of the satellite. A factor of two reduction in weight is within our reach with today’s technology, whereas a factor of 10 is possible when eGaN technology is used to produce entire systems on a single chip. Multiply the impact of SpaceX with eGaN technology and we will change the way we use space and accelerate the exploration (and possible colonization?) of our universe.

Transforming the Machine Interface

LiDAR uses high speed pulsed lasers to rapidly create a three dimensional image or map of a surrounding area. One of the earliest adopters of this technology was the “driverless” car. Today’s eGaN FETs are enabling new and broader applications such as 3D printing, real-time motion detection for augmented reality glasses, computers that respond to hand gestures as opposed to touch screens, and fully autonomous vehicles. As eGaN technology evolves, LiDAR can be further improved in both resolution and cost. Projects are already underway to include “3D Awareness” in our cell phones. Imagine if phones could understand the space around us. We will be able to get directions in a new, more comprehensive way. An iPhone today can provide the location of the building you desire, but with LiDAR, 3-D mapping could lead you straight to a specific office.

Transforming the Use of Electricity
Wires suck. Today, we need wires to supply power to our ever-growing collection of electrically-powered gadgets. For those gadgets that are so completely indispensable, we need to take them with us at all times, and they need batteries that must be recharged all-too-frequently. Expected in late 2015, wireless power systems using eGaN technology will begin to unload this wired burden by providing energy wirelessly to charge cell phones and tablets. By integrating thin transmission coils in the floor tiles and the walls of buildings and homes, the need for wall sockets will be eliminated altogether! This same wireless power technology can be used to charge electric vehicles when parked over a transmitting coil embedded in the floor of a garage. A project is underway to embed wireless chargers at bus stops. Eventually, in a one-minute stop, a bus can get enough charge to drive a mile to the next bus stop. This could eliminate the need for most of the heavy batteries and overhead electrical systems that burden electric buses today.

eGaN technology makes possible the efficient transmission of electricity at safe frequencies that are difficult to reach with its silicon transistor ancestors. Taking eGaN technology to higher voltages and higher frequencies extends the wireless power transfer distance. Integrating eGaN technology into complete systems on a chip enables wireless power systems to be embedded into almost every device that consumes electricity.

Transforming Medicine
We are all getting older every day, and, as we age, we develop more opportunities for frailties and chronic health problems. Today there are major advances in fields such as implantable systems, imaging, and prosthetics that are enabled by eGaN technology.

Wireless power is already having an impact on implantable systems such as heart pumps. Beyond just artificial hearts, many other medical systems can also benefit. As Dr. Pramod Bonde of the University of Pittsburgh Medical Center speculated, “[wireless power] can be leveraged to simplify sensor systems, to power medical implants and reduce electrical wiring in day-to-day care of the patients.”

But it’s not just eGaN technology in wireless power that is transforming medicine. Imaging technology is also improving by leaps and bounds! The resolution of MRI machines is being enhanced through the development of smaller and more efficient sensing coils using eGaN FETs and ICs. Diagnostic colonoscopies are about to become a thing of the past due to today’s eGaN FETs. These types of non-invasive imaging breakthroughs significantly reduce the cost of health care through early warning and non-invasive diagnostics. As we integrate entire systems on a single eGaN chip, miniaturization and image resolution improves the standard of care while medical costs come down.

eGaN Technology – Transforming the Future
In this posting, we talked about a few of the transformations that will be enabled as eGaN technology evolves. EPC is taking the 10-times gap in performance between eGaN FETs and MOSFETs and improving it to a 1000-times gap. This technology is also being applied to integrated circuits made by EPC in eGaN technology. EPC is pursuing parallel paths – discrete power semiconductors and fully integrated circuits that form building blocks for multiple applications, but will ultimately evolve into complete systems-on-a-chip for very high performance, low cost, and high value-added applications like the ones discussed above.

The eGaN journey has just begun!

Also read:

GaN Technology — Contributing to Medicine in No Small Way

Four Things a New Semiconductor Technology Must Have to be Disruptive
GaN Technology for the Connected Car


The Case for Data Management Amid the Rise of IP in SoCs
by Majeed Ahmad on 09-02-2015 at 12:00 pm

In the late 1990s and early 2000s, during the adolescent days of the system-on-chip (SoC) design movement, there was a lot of talk about IP and design reuse, but it was seldom put into practice. A decade later, SoC turned into a juggernaut with a tripartite alliance of chipmakers, IP suppliers and semiconductor manufacturing fabs.

Internal and external IP became a key building block in modern SoCs due to a number of semiconductor industry developments. For a start, leading-edge SoCs continued their journey toward smaller nodes while striving to overcome speed and power challenges. Then, there has been a proliferation of standards. Take Internet of Things (IoT) chips, for instance, where wireless standards like Bluetooth, Wi-Fi and ZigBee have become check-box items with the integration of IP subsystems.

A look back at the evolution of SoC design over the course of a decade or so shows that the addition of more functions onto a single chip led to more complexity as well as time-to-market pressures. That made the SoC segment an industry in a hurry. Chipmakers are now obliged to work around tight delivery schedules in the new IP-centric SoC model, and that makes the ever-growing library of IP content hard to handle.

As a result, a new breed of semiconductor outfit has emerged that is benefiting from the growing complexity, the time-to-market bottlenecks, and, subsequently, the rising amount of IP in SoCs. One such company is ClioSoft Inc., the Fremont, California–based supplier of automated solutions for hardware configuration management of SoC designs.


Data management eases handling of disparate pieces of IP in SoC

ClioSoft, founded by Srinath Anantharaman in 1997, pioneered data and IP management for chip design along lines similar to software configuration management tools like Subversion. Anantharaman could see how precious chip engineering resources were being spent on manual tasks like revision control, issue tracking and other gatekeeper functions.

ClioSoft’s first product—Save Our Software, or SOS—managed the front-end RTL flows and eventually became a natural fit for data management in Cadence Virtuoso-based analog/mixed-signal designs. Over the years, the SoC design ecosystem continued to evolve and so did the SOS tool, adding support for other popular analog/mixed-signal design flows such as Mentor Graphics Pyxis, Synopsys Custom Designer and Laker.

Then, in 2013, ClioSoft announced another milestone with the integration of its SOS tool into Agilent Technologies’ Advanced Design System (ADS) to provide version control and enterprise-wide management for RF and high-speed digital designs. Agilent subsequently spun off its test and measurement business as Keysight Technologies, which now owns and operates the prevalent RF design tool ADS.

ClioSoft’s SOS design collaboration platform is also winning attention from semiconductor IP suppliers, a crucial part of the chip design ecosystem. The steadily increasing use of IP in today’s bigger and more powerful SoCs means that IP vendors need to be more astute in customizing their products for different chipmakers and foundries, and do it in a time-efficient manner.

Not surprisingly, therefore, IP vendors are now increasingly using data and IP management tools like SOS to tag their products and hence map a specific IP from one chip client to another and from one foundry process to another. The design changes are happening on multiple fronts in an often geographically scattered SoC project, where design automation tools like ClioSoft SOS can play a critical role in efficiently managing the IP labyrinth.
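To make the tagging idea concrete, here is a small, hypothetical sketch of the kind of bookkeeping such tools automate: each released IP version carries tags identifying the customer and foundry process it was qualified for, so the right variant can be pulled for a given SoC project. The class names, fields, and tags are my own illustration, not ClioSoft’s actual data model or SOS commands.

```python
# Hypothetical illustration of tagging IP releases by customer and process node;
# not ClioSoft's data model. Tag strings below are invented examples.
from dataclasses import dataclass, field

@dataclass
class IPRelease:
    name: str
    version: str
    tags: set = field(default_factory=set)   # e.g. {"tsmc_16ffc", "customer_a"}

class IPCatalog:
    def __init__(self):
        self.releases = []

    def publish(self, name, version, tags):
        self.releases.append(IPRelease(name, version, set(tags)))

    def find(self, name, required_tags):
        """Return releases of an IP qualified for all of the required tags."""
        return [r for r in self.releases
                if r.name == name and set(required_tags) <= r.tags]

catalog = IPCatalog()
catalog.publish("serdes_phy", "2.1", {"tsmc_16ffc", "customer_a"})
catalog.publish("serdes_phy", "2.2", {"samsung_14lpp", "customer_b"})
print(catalog.find("serdes_phy", {"samsung_14lpp"}))   # -> the 2.2 release
```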

Also read:

ClioSoft SOS v7.0: Faster, Smarter and Stronger

Starvision and SOS, a Perfect Match

Why Design Data Management: A View from CERN


Threat Detection: How To Keep the Crown Jewels Secure
by Paul McLellan on 09-02-2015 at 7:00 am

Let’s just take it as a given that securing IP design data is critical. It’s rather like saying that it’s a good idea to have security in the Tower of London to stop the crown jewels being stolen. IP blocks are the crown jewels of an SoC company.

Data now must be secured within the collaborative teams that share that data across international boundaries. Adding to the challenge is the fact that most of the current generation of security tools are perimeter-based solutions, focused on preventing outsiders from gaining access to internal company networks, file systems and databases containing sensitive, proprietary data. However, defending organizations from unwitting employee security breaches, compromised accounts, and insider attacks is becoming a growing concern.

Solving the complete IP security problem calls for technologies that protect source data from internal security weaknesses and provide early-warning alerts for risky and anomalous internal behavior. However, security solutions must also take into account the multi-site collaborative nature of today’s design teams.

To successfully protect IP design data from within, companies must look to technologies that support the concepts of both IP/file-level security and data-centric threat detection. This requires two foundational elements:

First, IP-level and file-level security through an IP management platform such as Methodics ProjectIC, which provides IP-level permission assignments and tracking of design data according to IP parent/child relationships, IP branches, and levels of hierarchy, along with tracking of who is using which data, where in the design, and in which geographic locations. Once the appropriate permissions are set, the IP management platform passes this information to the underlying data management system, such as Perforce Helix, which then assures the data is secured at the repository, branch, directory and even individual file level.

Second, big-data-centric threat detection, such as that provided by Perforce Helix Threat Detection, which offers behavioral analytics and identifies threats and risky behavior on Helix SCM repositories.

ProjectIC already has a lot of security features at the project level, smoothly integrated with the hierarchical and geographically multi-site structure of most companies. ProjectIC also works with the underlying design data management tool, such as Helix, to ensure file-level security (so that IP-level security cannot be bypassed by simply identifying and copying the underlying files).

The second critical element in a robust IP security solution is the ability to detect anomalous behavior and threats. This is where Perforce Helix Threat Detection comes in, offering a new approach to threat detection. Helix applies advanced big data behavioral analytics to user activity to detect potential attack events, alert security teams, and quickly generate actionable reports that detail anomalous, high-risk behavior.

User activity log files are ingested by the Helix Threat Detection Engine, which correlates and analyzes logins and logouts, project and file access (folder, file, path, etc.), the amount of data moved or synchronized (get, commit, sync, etc.), timestamps, and user data. It applies analysis models (e.g., activity, statistical and clustering mathematics) to the log data. Once a threat is identified, a non-intrusive threat detection agent (endpoint sensor) can also be deployed to a laptop or desktop to capture all activity on the endpoint: file copies, cut and pastes, screen captures, printing, obfuscation and exfiltration.
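The actual analytics are proprietary, but the general flavor of data-centric threat detection can be sketched in a few lines of Python: build a per-user baseline from the activity logs, then flag sessions whose data movement falls far outside that baseline. The sample data, features, and threshold below are assumptions for illustration, not how Helix Threat Detection actually scores risk.

```python
# Toy anomaly scoring over user activity logs: flag sessions whose synced data
# volume is far above the user's own baseline. Data and threshold are invented.
import statistics

# Per-user historical baseline of data volume per session (MB).
history = {
    "alice": [12, 15, 9, 11, 14, 10, 13],
    "bob":   [200, 180, 220, 205, 195],
}
# New sessions to score against each user's own baseline.
new_sessions = [("alice", 410), ("bob", 210)]

for user, mb in new_sessions:
    mean = statistics.mean(history[user])
    stdev = statistics.stdev(history[user])
    z = (mb - mean) / stdev                  # how unusual is this session?
    if z > 3.0:                              # arbitrary alert threshold
        print(f"ALERT: {user} moved {mb} MB this session (z={z:.1f}, baseline {mean:.0f} MB)")
    else:
        print(f"ok:    {user} moved {mb} MB (z={z:.1f})")
```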

The type of threats that this identifies include:

  • Compromised, careless, and departing employees who download large amounts of data from sensitive projects
  • Insiders who slowly take small amounts of data over a long period of time
  • Machines compromised by stealth malware that are siphoning data
  • Outside or Advanced Persistent cyber attacks

This gives companies a double-walled approach to securing proprietary IP data: locking it down at the IP/file level while also heading off potential security breaches or theft, whether the threat is internal or external.

The Methodics white paper Threat Detection: A Proactive Approach to Securing SoC IP Design Data is here.


SoC and Foundry Update 2H 2015!
by Daniel Nenni on 09-01-2015 at 10:00 pm

Rarely do I fly first class, but I did on my recent trip to Asia. It was one of the new planes with pod-like seats that transform into a bed. The flight left SFO at 1 A.M. so I fell asleep almost immediately, missing the first gourmet meal. About halfway through the flight I found myself barely awake staring straight up, and what do I see? STARS! That has got to be one of the last things anyone wants to see while looking up on an international flight! Seriously, who puts fake stars on the ceiling of an airplane? EVA Airlines, that’s who!

When I travel, a lot of people want to meet with me to get the latest news from Silicon Valley. In exchange I get the latest news from wherever they are, so it is a very nice quid pro quo type of thing, absolutely. The most common topics are the SoC and foundry businesses, since they currently drive the semiconductor industry. Apple and Qualcomm are the most talked-about SoC companies, but Mediatek, Samsung, and even Intel are always discussed.

Let’s start with Apple: The big iProduct announcement is next week and we will finally get to see what is inside the iPhone 6s! Again, my bet is a Samsung based 14nm A9 SoC and inside the new iPads will be a TSMC based 16nm A9x SoC. I was right on the iPhone 5s (Samsung 28nm) and iPhone 6 (TSMC 20nm) so let’s see if I can keep my streak going. My bet is also that the Apple A9x will outperform all other SoCs and will continue to do so until mid to late next year.

Moving forward, it is my bet that Apple will continue with TSMC 16nm for the iPhone 7, with an enhanced version of the process specifically for Apple. Based on what I know today, 10nm will not be in production in time for the iPhone 7 but could make it for the next iPads, since iPads come out later in the year and require less volume. Currently Samsung and TSMC both have pre-production 10nm PDKs available, but final decisions by the fabless elite have not been made. We should know more about where the fabless elite will fab 10nm at the end of this year. I would not expect 10nm production to start before Q2 2017 as there have been delays. The iProduct refresh in 2017, however, will be 10nm for sure.

QCOM has a history of 2nd, 3rd, and even 4th sourcing chip manufacturing down to 40nm. At 28nm everyone was forced into a monogamous relationship with TSMC, which was very uncomfortable for a promiscuous company like QCOM. At 28nm QCOM is now in production at UMC and hopes to get ramped up at SMIC to appease the Chinese gods. QCOM, as we have all heard, will use both Samsung 14nm and GlobalFoundries 14nm for the next generation of Snapdragons. I’m also told that QCOM will use TSMC 16FF+ and that they have a 14nm development agreement with SMIC in process.

Mediatek of course manufactures next door (literally) at TSMC and UMC, and I do not see that changing anytime soon. Mediatek has hit semiconductor rock-star status in Taiwan and has attracted many ex-TSMC and ex-UMC employees. Not only does this give Mediatek leading-edge design experience, it also gives them access to the inner foundry ranks. Given the importance of low-power design for mobile, I would bet Mediatek products will be FinFET enabled next year along with the rest of the fabless elite, so watch out QCOM!

I’m sorry, I ran out of space for more commentary. If you have questions we can continue the discussion in the comments section. Only registered SemiWiki members can read or write comments so if you are not already a SemiWiki member please join as my guest: https://www.legacy.semiwiki.com/forum/register.php


Adding NAND Flash Can Be Tricky
by Tom Simon on 09-01-2015 at 4:00 pm

As consumers, we take NAND flash memory for granted. It has worked its way into a vast array of products. These include USB drives, SD cards, wearables, IoT devices, tablets, phones and increasingly SSD’s for computer systems. From the outside the magic of flash memory seems quite simple, but we have to remember that this is a technology that relies on quantum tunneling.

For long-term data storage, the spinning hard disk has been difficult to beat. I must confess I remember when bubble memory was going to topple flying-head hard drives. That never happened, but now, decades later, we are seeing a wholesale shift to flash. If you look at Apple’s computer lineup, most have flash drives – either hybrid or pure SSD. Their operating speeds are high enough that in some cases they can take advantage of PCIe instead of SATA for their interface.

Now, what are the potential drawbacks of NAND storage devices? Writing to flash memory requires passing an electric charge onto a floating gate that sits between the MOSFET gate and the channel. There are two mechanisms for creating this floating charge, quantum tunneling and hot carrier injection. Both involve relatively high voltages, which over time degrade the insulation around the floating gate.

NAND flash used with direct addressing will suffer write failures after tens of thousands of cycles. The solution for this is wear leveling, where the physical location of a block of data is moved on every write. This avoids having frequently written blocks, such as OS file directories, wear out before the blocks that are rarely written. A further enhancement is to periodically move ‘static’ blocks as well, so their physical locations can also absorb write operations, spreading the wear around.
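Here is a deliberately simplified sketch of the remapping idea: each write of a logical block is redirected to the least-worn available physical block, so no single location absorbs all the writes. Real flash translation layers also handle garbage collection, bad blocks, and static-data rotation; this toy version is my own illustration, not any particular controller’s algorithm.

```python
# Toy wear-leveling flash translation layer: every write of a logical block is
# redirected to the least-worn free physical block. Real FTLs are far more
# involved (garbage collection, bad-block management, static wear leveling).
class ToyFTL:
    def __init__(self, num_physical_blocks):
        self.erase_counts = [0] * num_physical_blocks
        self.logical_to_physical = {}          # logical block -> physical block
        self.free = set(range(num_physical_blocks))

    def write(self, logical_block, data):
        # Pick the free physical block with the fewest erases so far.
        target = min(self.free, key=lambda b: self.erase_counts[b])
        self.free.remove(target)
        old = self.logical_to_physical.get(logical_block)
        if old is not None:                    # old copy becomes erasable again
            self.erase_counts[old] += 1
            self.free.add(old)
        self.logical_to_physical[logical_block] = target
        # (data would be programmed into 'target' here)

ftl = ToyFTL(num_physical_blocks=8)
for _ in range(1000):
    ftl.write(0, b"directory update")          # one hot logical block...
print(ftl.erase_counts)                         # ...but wear is spread across blocks
```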

The relocation and remapping of blocks requires fairly complex algorithms. Designers have a choice of using the system CPU for this or offloading the job to a dedicated controller. There are a series of trade-offs in the selection of raw NAND, managed by the system CPU, versus a hardware wear-leveling and block-management solution such as the one found in eMMC.

Interestingly, I backed a Kickstarter for the NEEO, a home automation controller, that just posted a blog about a delay in their system design. They had opted for raw NAND in their prototypes but started seeing failures after continuous stress testing. Early on, a potential investor had casually remarked that they ought to look at eMMC. They say in their blog that they owe them a dinner.

Designing embedded controllers for SD and eMMC requires making a number of choices and selecting the proper IP for the protocols that need to be supported. Cadence recently posted a video in their White Board Wednesday series that gives an overview of the technology available to designers from their IP portfolio. Lou Ternullo, Product Marketing Director in Cadence’s IP Division, outlines the various protocols and flash technologies they support.

If you are interested in other areas where Cadence offers IP – and there are quite a few – I suggest looking at their IP Factory Brochure. Also their web site features their IP offerings here.

As for flash memory, it is safe to say that its use will continue to expand. Someday in the future we’ll look back at the idea of keeping our valuable data spinning on mechanical merry-go-rounds at 5,000 RPM as quaint and primitive.


TSMC is the Top Dog in Pure-Play Foundry Business
by Pawan Fangaria on 09-01-2015 at 12:00 pm

We have all echoed the fact that the arrival of the fabless business model transformed the semiconductor industry completely. The book “Fabless: The Transformation of the Semiconductor Industry” provides several stories around that. Against that backdrop, one key point to ponder is the start of pure-play foundries, with TSMC the initiator in 1987. The availability of pure-play foundries gave small as well as large players around the world the boost and courage to start designs without owning fabs. The net result was a flood of fabless design companies and design innovation around the world. This did not happen without the pure-play foundries innovating themselves too. TSMC and subsequent foundries provided leading-edge processes and technologies in manufacturing. Today, pure-play foundries provide manufacturing services not only to fabless companies but also to IDMs. Hence, looking at the other side of the coin, it would not be imprecise to say that the pure-play foundry model also transformed the semiconductor industry.

After TSMC, about ten more pure-play foundries were founded around the world, the latest being GLOBALFOUNDRIES in 2009. According to first-quarter 2015 sales figures, three pure-play foundries (TSMC, GLOBALFOUNDRIES, and UMC) rank among the top 20 semiconductor companies in the world, with TSMC at number 3. What is interesting to note among the pure-play foundries is the following –

Together, these top three pure-play foundries account for 79% or more of overall pure-play foundry sales: $33.68B out of a total of $42.4B in 2014 (79%), and $9.32B out of a total of $11.4B in 1Q 2015 (82%).

If we take the analysis one step further and look at TSMC’s share among these top three pure-play foundries, it is 74% in 2014 ($24.976B out of $33.68B) and 75% in 1Q 2015 ($6.995B out of $9.318B). What do we call TSMC in such a scenario?
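For reference, those percentages follow directly from the sales figures quoted above; a quick check in Python (my own arithmetic on the numbers as reported) reproduces them:

```python
# Sales figures in $B, as quoted above from IC Insights data.
total_pure_play_2014, top3_2014, tsmc_2014 = 42.4, 33.68, 24.976
total_pure_play_1q15, top3_1q15, tsmc_1q15 = 11.4, 9.318, 6.995

print(f"Top-3 share of pure-play sales, 2014: {100 * top3_2014 / total_pure_play_2014:.0f}%")   # ~79%
print(f"Top-3 share of pure-play sales, 1Q15: {100 * top3_1q15 / total_pure_play_1q15:.0f}%")   # ~82%
print(f"TSMC share of the top 3, 2014:        {100 * tsmc_2014 / top3_2014:.0f}%")              # ~74%
print(f"TSMC share of the top 3, 1Q15:        {100 * tsmc_1q15 / top3_1q15:.0f}%")              # ~75%
```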

Let’s also look at the pure-play foundry business in recent perspective, where we know TSMC lost some business due to Samsung starting in-house manufacturing of its Exynos and Apple allocating a part of their processors to Intel and Samsung. However, Apple is expected to come back to TSMC’s 16nm FinFET for their A9 processors in the iPhone 7. There are reasons for it; I’m not going into those details here. However, I would like to debate how TSMC influences the overall pure-play foundry business. Let’s look at the following chart reported by IC Insights.


This chart depicts the usual trend of the best growth for pure-play foundries coming in Q2 every year (double-digit growth compared to Q1), i.e. ahead of Q3, the best quarter for the total IC industry. However, in 2015 that trend was broken; in Q2 the sales declined slightly compared to Q1 instead of increasing as in previous years. The reason: a 5% decline in TSMC’s revenue in Q2 compared to Q1. If a 5% change in TSMC’s revenue can change the pure-play industry trend, then TSMC is definitely the ‘Top Dog’ in the industry. Although other foundries possess competing technologies as well, I would go back to my hypothesis that business leadership along with technology leadership is the key to establishing someone as the ‘Top Dog’.

For TSMC, the rest of this year and 2016 are certainly looking better. IC Insights forecasts overall pure-play foundry sales in Q4 2015 to reach over $12B, the highest ever. The IC Insights pure-play foundry report is HERE for your reference.

Also read:
Changing Trends at the Top of Semicon Space. The chart in this article provides the sales numbers of the top 3 pure-play foundries mentioned above.

Pawan Kumar Fangaria
Founder & President at www.fangarias.com


Solido Wrote the Book on Variation
by Paul McLellan on 09-01-2015 at 7:00 am

When I studied mathematical analysis, one of the things that we had to prove turns out to be surprisingly difficult. If you have a continuous function and at one point it is below a line (say zero) and at another point it is above zero, then there must be a point at which the value is exactly zero. In effect, a continuous function can’t get from below a line to above a line without crossing the line. OK, mathematicians like to spend time proving things that are “obvious” since sometimes they turn out not to be.

How about this, more relevant to semiconductor design. If you simulate a design at the SS corner and the FF corner for some particular parameter, then any other corner will fall between those two values. I mean to get from slow to fast you have to go through the other corner right? Isn’t it obvious? Wrong.

Variation causes weird things to happen. It was not a problem at 90nm, but from 28nm on down you can’t just simulate those big FF and SS corners and get away with it. Those simulations (at a given voltage and temperature) will define some sort of range, but you can’t go from there to the assumption that any other corner will fall inside this range. It is as if you can get from one side of the line (SS) to the other (FF) without going through typical.

For example, above are a few hundred simulations of a PLL duty-cycle at all sorts of corners including SS and FF. So all the other values “should” fall in between. But look at the distribution. SS is the dot at the far left, so that is pretty much where you would expect to find it. But FF is in the middle of the distribution. If you made the assumption that all other process corners would fall between those two points you would be very wrong.


So it is clear that if you are designing complex analog circuits at 28nm or below then you need to do all those simulations to find out what the real distribution is. In the diagram, on the left is a simple non-variation-aware flow. On the right is a flow starting to take variation into account: just pick all the PVT corners that you need and do the simulations. The trouble is that this is prohibitively expensive. In certain cases, such as memories, where these problems are at their worst (there are bit-cells, rows, columns, sense-amps, and more), brute force would require on the order of a billion simulations to be sure. In simpler cases it might be thousands. Words like geological time scale and age of the universe spring to mind. That is not going to be the way to handle this problem.


What is required is a better way to manage this process so that only a subset of the simulations is done. The flow becomes closer to: pick some good corners, do the simulations, and see what has been learned; pick some more corners; continue until confident that all the important corners have been simulated. The problem is that this cannot be done by hand; it requires a tool to manage the process and do the machine learning. The diagram above shows a little more detail. On the left is the old manual process of simulating a predetermined list of corners. On the right we add intelligence and analysis.
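To give a feel for the loop, here is a small sketch of the general idea (my own illustration, not Solido’s algorithm): simulate an initial batch of corners, fit a cheap surrogate model to the results, use it to pick the unsimulated corners predicted to be closest to worst-case, simulate those, and repeat. The simulator stand-in and corner values are invented for the example.

```python
# Illustrative adaptive corner selection (not Solido's actual algorithm):
# simulate a few corners, fit a cheap linear surrogate, then spend the next
# simulations on the corners the surrogate predicts to be nearest worst-case.
import itertools, random
import numpy as np

def simulate(corner):
    """Stand-in for a SPICE run: returns a made-up delay for a (P, V, T) corner."""
    p, v, t = corner
    return 1.0 + 0.3 * p - 0.5 * v + 0.002 * t + 0.1 * p * t / 125 + random.gauss(0, 0.01)

processes = [-1.0, -0.5, 0.0, 0.5, 1.0]          # -1 = fast, +1 = slow (normalized)
voltages  = [0.72, 0.80, 0.88]
temps     = [-40, 25, 125]
all_corners = list(itertools.product(processes, voltages, temps))

simulated = {}
for corner in random.sample(all_corners, 8):      # initial batch
    simulated[corner] = simulate(corner)

for _ in range(4):                                # a few refinement rounds
    X = np.array([[1.0, p, v, t] for (p, v, t) in simulated])
    y = np.array(list(simulated.values()))
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)  # cheap linear surrogate
    remaining = [c for c in all_corners if c not in simulated]
    predicted = {c: coef @ np.array([1.0, *c]) for c in remaining}
    worst_guess = max(predicted, key=predicted.get)
    simulated[worst_guess] = simulate(worst_guess)

print("Worst simulated corner:", max(simulated, key=simulated.get))
print("Simulations run:", len(simulated), "of", len(all_corners))
```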

All these diagrams come from the book Variation-Aware Design of Custom Integrated Circuits: A Hands-on Field Guide by Trent McConaghy, Kristopher Breen, Jeffrey Dyck and Amit Gupta of Solido. I should emphasize a couple of things about it. This is not some theoretical analysis of variation for research groups, it is a practical guide for actual design groups. And it is not a user guide to Solido’s tools, it is a guide to what needs to get done, in some sense what needs to get simulated, and while I’m sure Solido are not going to complain if you decide to use their tools, the book is useful even if you do not. It treads that balance between being deep on theory (and thus of little use to a practical designer) and being an extended application note on Variation Designer (and thus of little use to anyone who is not a hands-on user).

The book is available on Amazon here. There you can also get a free sample and you can even try it free (on any Kindle including phones and tablets) for a week.

For anyone who is interested (get a life!), the proof of the continuous function problem I started with relies on another, more primitive fact: any bounded set of real numbers, perhaps infinite, has a least upper bound. Once you have that, the continuous function problem is easy. The set of values of x that give a value less than zero must have a least upper bound. At that bound the function can’t be negative (or, by continuity, points just to the right would also be negative and it wouldn’t be an upper bound), and it can’t be positive (it is the limit of points where the function is negative). So there is a value at which the function is exactly zero. W⁵