
What’s Hot at SPIE Advanced Lithography

by Beth Martin on 02-04-2015 at 10:00 pm

The 40th SPIE Advanced Lithography conference will be at the San Jose Convention Center 22-26 February. Over the past few years, this conference has grown in scope to include emerging patterning technologies, like directed self-assembly (DSA) and design-process-technology co-optimization.

Underlying all the presentations, posters, panels, and hallway chatter are common goals and challenges: keep the fabs working and yields high, while controlling cost and turnaround time as the laws of physics work against you.

One key component of managing modern manufacturing is computational lithography, which includes:

  • Optical proximity correction (OPC) and resolution enhancement technology (RET) software and methodologies that achieve the maximum possible lithography entitlement
  • Software, applications, and methodologies that allow foundries to increase their productivity and thus reduce development cycle times and associated costs
  • Software to manage the post-tapeout flow

You will see plenty of this type of technology presented at the SPIE Lithography conference. There are papers on incorporating DSA in multipatterning, analyzing litho hotspots with pattern matching software, enhancing local printability in sub-14nm nodes, model-based mask preparation, new modeling of 3D effects, and managing OPC jobs for better productivity and use of resources. These technologies help maintain reasonable turnaround times for the entire post-tapeout flow and manage foundry production costs.

The importance of modeling
I talked to John Sturtevant, the director of modeling and verification solutions at Mentor Graphics, about some of the hot topics in computational lithography. He said that there are significant modeling challenges associated with the 14 and 10 nm manufacturing process nodes, particularly the need for accurate and fast simulation of three-dimensional phenomena associated with the mask, wafer, and resist.

“3D EMF effects associated with mask topography have been effectively modeled for many years,” Sturtevant said, “and to support 14 nm, we added refinement of edge to edge crosstalk signals in DDM.” This enhancement leads to significantly better matching to rigorous simulation with very little runtime impact. Using the crosstalk DDM library results in better wafer fitness, especially when the mask absorber sidewall is optimized in conjunction with mask bias, he said.

Sturtevant points out that formerly “non-critical” implant layers now pose a significant OPC challenge. Underlayer topography models, which capture the complex array of wafer topography effects, have been deployed for 14 nm. These models are being expanded to better represent the impact of active FinFETs and the results for pre- and post-poly layer implant models have been excellent.

There is also new focus on the photoresist model. The 14 and 10 nm nodes feature extensive use of negative-tone develop (NTD) resist processes for the patterning of metal and via layers, due to the intrinsic aerial image advantage of a bright field mask. These NTD resist processes have unique shrinkage and develop rate properties compared to the traditional positive-tone processes. Sturtevant says Mentor has modified the CM1 model to support new NTD-specific modelforms, with a 40-55% improved accuracy in predicting wafer results. They have also rolled out improvements in the prediction of resist toploss and scumming as well as SRAF printing for both PTD and NTD cases.

DSA is now on the near horizon, and compact models predicting the assembly of vias inside of guiding patterns are already available to assist in development efforts. An important consideration for these models is to ensure the proper 3D formation of the vias. You can expect to hear a lot about DSA, and computational platforms for DSA, at SPIE Advanced Lithography this year.

So if you are involved in design for manufacturing or post-tapeout engineering, don’t miss SPIE this year from February 22-26, 2015 at the San Jose Convention Center.


Temperature Monitoring IP to Revamp SoCs

by Pawan Fangaria on 02-04-2015 at 3:00 pm

With the increasing density and functionality of chips built on extremely thin silicon and metal layers, temperature has become critical. The situation can become worse with wireless-enabled, 24/7 powered-on devices. In such a scenario, a device must manage its thermal profile dynamically to keep the temperature within tolerable limits. Of course all precautions must be taken to budget for voltage, power and temperature while designing an SoC, but it’s prudent to also have mechanisms embedded within the SoC so that it can adjust its own operation when temperature hotspots arise. What are such mechanisms? Fortunately, the IP industry provides IP that can continuously monitor voltage and temperature, detect temperature hotspots and guide the chip to reduce the temperature in the hotspot regions.

This Monday, I had the pleasure of watching an online video on the Cadence website in which Bob Salem, Product Marketing Director at Cadence, explained how such an IP works and how it can be used in chips to optimize performance, increase reliability and lengthen battery life. It is a whiteboard explanation, recorded and posted on the Cadence website on a Wednesday; Cadence calls these its ‘Wednesday whiteboard videos’! Let’s see what there is to learn in it.

Looking at the thermal image of a die, the maximum temperature can be seen at a hotspot. The temperature falls as we move farther from the hotspot in all directions. In the above picture, a hotspot is shown at 125°C and the outermost circle periphery is at 40°C. The first problem at hand is to locate the hotspots on the die; the next is to take appropriate steps to cool down those areas.

On the left side of the above picture is a simple conceptual diagram of the circuitry that goes into the voltage/temperature monitoring IP. A multiplexer takes inputs from multiple sensors for temperature, voltage, and moisture in the area where the IP is located and outputs the desired information in analog form. The analog data is then converted to digital form by an ADC (analog-to-digital converter). The digital data goes to a processor, which deciphers the information and takes appropriate action: depending on the severity of the temperature, it can either turn off the power or lower the operating frequency in that region.
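The decision logic at the end of that data path is simple to sketch. The following is an illustrative model, not Cadence’s implementation: the ADC resolution, sensor range, and temperature thresholds are all assumed values chosen only to make the example concrete.

```python
ADC_BITS = 10                  # hypothetical converter resolution
T_MIN, T_MAX = -40.0, 150.0    # assumed sensor range in Celsius

def adc_to_celsius(code: int) -> float:
    """Convert a raw ADC code to a temperature, assuming a linear sensor."""
    return T_MIN + (T_MAX - T_MIN) * code / ((1 << ADC_BITS) - 1)

def thermal_action(temp_c: float) -> str:
    """Decide what the processor should do for this region of the die."""
    if temp_c >= 125.0:        # critical hotspot: cut power to the region
        return "power_off"
    if temp_c >= 95.0:         # warm: throttle the clock instead
        return "lower_frequency"
    return "normal"            # within limits: no action needed
```

For example, a full-scale code of 1023 maps to 150°C, which would trigger a power-off of the affected region.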

As shown in the thermal profile diagram, the IP blocks can be spread across the die to record and process the voltages and temperatures in different regions. A good IP, together with a well-designed SoC topology and IP placement, can be very effective in keeping the SoC’s temperature profile within prescribed limits at all times. Clearly, this enhances the life and long-term reliability of a device containing an SoC with such IP. It also improves the device’s performance, handling, and battery life.

It’s an interesting video where Bob Salem explains the story in very simple terms. There is no registration required for this video and it takes less than five minutes.

After watching the video, I explored what kind of IP portfolio Cadence has for voltage and temperature monitoring. It has a good range of power/sensor IP consisting of low drop-out (LDO) voltage regulators, temperature sensors, application-specific analog designs, and so on. See the page on the Cadence website here.


New Suite of ARM IP for Mobile

by Paul McLellan on 02-04-2015 at 7:00 am

ARM had a big press/analyst show at the Epic Roasthouse here in San Francisco this morning. They announced a new portfolio of IP targeted at the next-generation mobile experience. There were four components to the announcement:

  • A new microprocessor, the Cortex-A72 (more details below)
  • A new CoreLink CCI-500 Cache Coherent Interconnect, which delivers a 2X increase in peak bandwidth, increases system efficiency, and enables 4K displays and beyond
  • A new Mali-T880 GPU, delivering a console-quality gaming experience within a mobile power envelope, with 1.8X the performance of the T760
  • A POP (Performance Optimization Pack) for TSMC 16FF+

There are already 10 licensees for the A72, most of whom won’t go on the record. The three that will are HiSilicon, MediaTek and Rockchip. The RTL has already been delivered to licensees, who are designing it into products expected to ship early next year.


The Cortex-A72 is the heart of this announcement. It implements, of course, the ARMv8 64-bit instruction set. If we take the Cortex-A15 in 2014 on TSMC 28nm as 1, then the A57 in TSMC 20nm (shipping this year) is 1.9X the performance, and the A72 in TSMC 16FF+ (with parts in design now and shipping early next year) is 3.5X. Some of that performance increase comes from the move from 20nm to 16FF+, but not all of it: there are architectural advances too, which is surprising given that the A57 already incorporates pretty much all the microprocessor knowledge out there. The A72 can be used in a big.LITTLE configuration, where the LITTLE is still the A53 (as it was with the A57).

The new CoreLink enables big.LITTLE processing and delivers system power savings thanks to an integrated snoop filter. It has double the peak memory system bandwidth and a 30% increase in processor-memory performance compared to the previous generation, which means more responsive user interfaces (especially for gaming). There is also full support for TrustZone for a secure media path.


The Mali is capable of delivering 4K resolution at a frame rate of 120fps within a typical mobile power budget. It also supports a TrustZone-secured video path for 4K premium content. There is a 40% power reduction for the same workload (and it has more shaders and so on too). Again, some of this comes from the change in process node and some from architectural changes. Along with the Mali comes the Mali-DP550 display processor. The video subsystem can optimize encoding so that, for example, parts of a scene that are unchanged can be fed forward to the video encoder, which then does not waste effort (power) recomputing them as it would with a raw video feed. The subsystem has native support for 10-bit YUV for advanced gaming and premium 4K video content.

A system built around this new IP should be capable of sustaining 2.5GHz within a roughly 0.5W mobile power budget (or 3.5GHz for tablets).


Ready to Wear Sensor Hubs

by Majeed Ahmad on 02-04-2015 at 3:00 am

Atmel Corp. has beefed up its sensor hub offerings for wearable devices with the SAM D20 Cortex M0+ MCU to add more functionality and further lower the power bar for battery-operated devices. The SAM D20 Cortex M0+ microcontrollers achieve ultra-low power through a patented power-saving technique called the “Event System,” which allows peripherals to communicate directly with each other without involving the CPU.

Atmel is among the chipmakers that use low-power MCUs for sensor management, as opposed to incorporating a low-power core within the application processor. According to market research firm IHS Technology, Atmel is the leading sensor hub device supplier, with 32 percent market share.

Sensor hubs are semiconductor devices that carry out sensor processing tasks—like sensor fusion and sensor calibration—through an array of software algorithms and subsequently transform sensor data into app-ready information for smartphones, tablets and wearable devices. Sensor hubs combine inputs from multiple sensors and sensor types including motion sensors—such as accelerometers, magnetometers and gyroscopes—and environmental sensors that provide light level, color, temperature, pressure, humidity, and many other inputs.
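To make the sensor-fusion idea concrete, here is an illustrative sketch of one classic fusion task a hub performs: a complementary filter that blends a gyroscope’s fast-but-drifting tilt estimate with an accelerometer’s noisy-but-stable one. This is a generic textbook technique, not any vendor’s algorithm, and the 0.98/0.02 weighting is an assumed tuning.

```python
import math

def fuse_tilt(angle_prev, gyro_rate, ax, az, dt, alpha=0.98):
    """Return a fused tilt angle in radians.

    angle_prev: previous fused angle estimate
    gyro_rate:  angular rate from the gyroscope (rad/s)
    ax, az:     accelerometer readings along two axes (any consistent unit)
    dt:         time step in seconds
    """
    gyro_angle = angle_prev + gyro_rate * dt   # integrate gyro rate (drifts)
    accel_angle = math.atan2(ax, az)           # gravity-based tilt (noisy)
    # Trust the gyro short-term, the accelerometer long-term.
    return alpha * gyro_angle + (1 - alpha) * accel_angle
```

Called once per sample, the filter slowly pulls the drifting gyro estimate back toward the accelerometer’s gravity reference.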

Atmel has supplied MCU-centric sensor hub solutions for a number of smartphones. Take China’s fourth-largest smartphone maker, Coolpad, which has been using Atmel’s low-power MCU to offload sensor management tasks from the handset’s main processor. While still busy supplying sensor hub chips for smartphones and tablets, Atmel is looking at the next sensor-laden frontier: wearable devices.


SAM D20 Evaluation Kit

Wearable devices are becoming the epitome of always-on sensor systems: they mirror and enhance cool smartphone apps like location and transport, activity and gesture monitoring, and voice command operation in a far more portable manner. At the same time, the always-on sensor ecosystem within connected wearables requires sensor hubs to interpret and combine multiple types of sensing—motion, sound and face—to enable context, motion and gesture solutions for devices like smartwatches.

Sensor hubs in a wearable environment must be able to handle robust context-awareness, motion-detection, and gesture-recognition demands. Wearable application developers are going to write all kinds of apps, such as tap-to-walk and optical gesture. And for sensor hubs, that means a lot more processing work and a requirement for greater accuracy.

Low power is thus crucial in wearable devices, given that sensor hubs will have to process a lot more sensor data on a much smaller power budget than in smartphones and tablets. That’s why Atmel is pushing the power envelope for connected wearables with SAM D20 Cortex M0+ cores that offload sensor-related tasks from the application processor.


LifeQ’s sensor module for connected wearables

The SAM D20 devices have two software-selectable sleep modes: idle and standby. In idle mode, the CPU is stopped while all other functions can be kept running. In standby mode, all clocks and functions are stopped except those selected to continue running.

Moreover, the SAM D20 microcontroller supports SleepWalking, a feature that allows a peripheral to wake the device from sleep based on predefined conditions. The CPU wakes up only when needed—for instance, when a threshold is crossed or a result is ready.

The SAM D20 Cortex M0+ core offers peripheral flexibility through a serial communication module (SERCOM) that is fully software-configurable to handle I²C, USART/UART and SPI communications. Furthermore, it offers memory densities ranging from 16KB to 256KB, giving designers the option to determine how much memory they will require in sleep mode to achieve better power efficiency.
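The SleepWalking behavior described above can be modeled in a few lines. This is an illustrative simulation of the idea, not Atmel firmware: a peripheral evaluates its wake condition on every event-system tick, and the CPU is “woken” only for samples that cross a predefined threshold, so most samples cost no CPU activity at all.

```python
def sleepwalk(samples, threshold):
    """Simulate SleepWalking-style wake-on-condition behavior.

    The peripheral scans every sample while the CPU stays asleep; the CPU
    "wakes" only for samples that exceed the predefined threshold.
    Returns (number_of_cpu_wakeups, values_that_woke_the_cpu).
    """
    wakeups = []
    for value in samples:          # peripheral runs autonomously
        if value > threshold:      # predefined wake condition crossed
            wakeups.append(value)  # only now does the CPU wake and act
    return len(wakeups), wakeups
```

For a sensor stream of `[10, 12, 55, 11, 80]` with a threshold of 50, the CPU would wake only twice, for the readings 55 and 80.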

Atmel’s sensor hub solutions support Android and Windows operating systems as well as real-time operating system (RTOS) software. The San Jose, California–based chipmaker has also partnered with sensor fusion software and application providers including Hillcrest Labs and Sensor Platforms. In fact, Hillcrest is providing sensor hub software for China’s Coolpad, which is using Atmel’s low-power MCU for sensor data management.

Atmel has also signed partnership deals with major sensor manufacturers—including Bosch, Intersil, Kionix, Memsic and Sensirion—to streamline and accelerate design process for OEMs and ensure quick and seamless product integration.

Image credit: Atmel Corp.

Majeed Ahmad is the author of the books Smartphone: Mobile Revolution at the Crossroads of Communications, Computing and Consumer Electronics and The Next Web of 50 Billion Devices: Mobile Internet’s Past, Present and Future.


Webinar: Electronics in Space or Avionics

by admin on 02-03-2015 at 3:00 pm

I talked to Derek Kimpton of Silvaco today. He turns out to be a fellow Brit. He is presenting a webinar on total dose that is of interest to anyone creating chips that will go into space (primarily satellites) or near space (primarily avionics in planes). Pretty much everyone knows at least the basics of single-event effects (SEE), whereby a high-energy neutron (from outer space, from radioactive rocks, or even from materials in the package such as solder) can cause a bit to flip in a memory or a flop to change its value. That is not what this webinar is primarily about, although it will be covered too.

Semiconductor devices suffer over time from thresholds that drift, and measuring these total-dose effects is very important since eventually they will cause the device to fail. Even rad-hard devices suffer from the effect, just more slowly. When he was still in the UK at Plessey Semiconductor (remember them?), he discovered that people couldn’t predict the total-dose effect at all; even after 30 years it was not well understood. In particular, his research showed it was not entirely a one-way effect: under some circumstances the effect is reversible, depending on the bias on the gate. So he created a model that turned out to be very accurate.

He joined Silvaco 17 years ago and put the model into radiation code. Until recently it was classified, available only to the military for the design of satellite electronics. Now it has been declassified and is available to anyone (almost; North Korea need not apply). The webinar will describe this bias-dependent effect, which nobody else’s models covered. He will go through it in a fair amount of detail.

As devices get smaller, the amount of deposited energy required to cause problems gets smaller and smaller. This means that increasingly, people for whom this has not been on the radar need to know about it. For designers, the total-dose model can be used to produce SPICE models after different periods of exposure, so it is possible to simulate how a chip will behave when new, after a year, after ten years and so on. This is especially important when using commercial-grade silicon in avionics, and even space, since that silicon has not been designed with built-in immunity.


The diagram above shows some of the issues that will be covered. A photon hits (and I don’t mean a flash of light; we are talking X-rays, gamma rays, high-energy electrons or other high-energy particles) and creates an electron and a hole in the oxide, and so a tiny current. Sometimes the current dissipates: the electron goes one way and the hole the other. But under some circumstances the electron escapes while the hole gets trapped, causing a tiny threshold shift. Those are the two diagrams on the left. Alternatively, under different field strengths the electron and hole will almost immediately recombine, releasing a little burst of energy (the energy that was in the original incoming photon). Instead of making the situation worse, this energy can actually release one of the trapped holes, moving the threshold back towards normal. It is this effect that other models did not capture and that has recently been declassified.
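The two competing mechanisms described above can be caricatured in a toy model. To be clear, this is not Silvaco’s (formerly classified) model; every coefficient here is invented purely to illustrate the shape of the behavior: each dose increment traps some holes and pushes the threshold out, while a recombination/annealing term pulls part of the accumulated shift back toward normal.

```python
def threshold_shift(dose_steps, trap_rate=0.010, anneal_rate=0.004,
                    gate_bias=1.0):
    """Toy accumulation of threshold shift (V) over dose increments.

    trap_rate, anneal_rate, gate_bias: invented illustrative coefficients.
    Setting anneal_rate to 0 recovers the naive one-way (trap-only) picture.
    """
    shift = 0.0
    for dose in dose_steps:
        shift += trap_rate * dose * gate_bias  # holes trapped in the oxide
        shift -= anneal_rate * shift           # partial, bias-dependent recovery
    return shift
```

With `anneal_rate=0` the shift only ever grows, which is the one-way behavior the older models assumed; with a nonzero annealing term, each step recovers a fraction of the accumulated shift, so the net shift is smaller.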

The webinar is Tuesday, February 17th, from 10-11am Pacific. The title is Simulating Total Dose, Prompt Dose, Damaging Fluence and SEU using TCAD, which sounds of limited interest, but in fact this is a topic that designers (not just the process engineers who are the usual target of TCAD tools) need to have a working knowledge of. Derek Kimpton will be presenting the webinar himself.

More details, including registration, are here.


Samsung Continues to Top 300mm Wafer Capacity

by Pawan Fangaria on 02-03-2015 at 7:00 am

In 1992, when Samsung became the largest producer of memory chips, it was not in the top-10 list of semiconductor companies; it was ranked #11. Since then it has strived to climb the rankings. Around 2000 it reached the top five, and since 2002 it has held the #2 spot in worldwide semiconductor sales, which include pure-play as well as IDM businesses. The #1 rank is retained by Intel. Counting only the foundry business, Samsung ranked #3 in 2012 and #4 in 2013. Interestingly, however, Samsung has been the #1 manufacturer of 300mm wafers since 2012.

Samsung has the lion’s share of the world’s 300mm wafer capacity at 23.5%, well above its nearest rival Micron at 15%. Micron’s share includes IM Flash Technologies, its joint venture with Intel, and Inotera, its joint venture with Nanya Technology. That translates to about a million wafers per month for Samsung to process today! The top companies are continuously raising their share of 300mm wafers, and the foundries are expected to raise capacity further for a couple more years.

Another interesting data point: if we combine the wafer capacities of Samsung and SK Hynix, South Korea emerges as the clear leader, with nearly 35% of worldwide 300mm wafer fab capacity.

Looking at 2012 figures, Samsung’s 300mm wafer capacity share was 18.8% while that of Micron + Elpida was 14.3%, slightly less than their shares as of last December. In 2013, Samsung led in 300mm as well as overall wafer capacity, while TSMC led in 200mm capacity and STMicroelectronics led in 150mm-and-smaller capacity.

According to an IC Insights report, the top four memory suppliers (Samsung, Micron, Toshiba and SK Hynix) represent 62% of global 300mm wafer capacity. Although Samsung manufactures smartphone and tablet processors in large volume, a major portion of its 300mm capacity is used to fabricate DRAM and flash memory. Samsung’s advantage is that a substantial portion of these memory devices is consumed in-house in its smartphones and other devices; the rest is sold into an open market with high demand for such parts.

Demand for memory, although it is a commodity with cut-throat competition, will continue to rise. In the IoT era, as the number of connected electronic devices per person increases, memory consumption will increase in proportion, continuing to fill wafer fab capacity and even creating more demand. Another large consumer of memory will be the automotive segment; I hear that upcoming cars can carry more electronics than a whole computer room and move more data than a server. Imagine the amount of data storage and processing that driverless cars will need!

Another perspective on memories is that, since they are commodities, volume play and cost leadership are the two strategies that work well in this space. As far as innovation is concerned, there are many memory IP suppliers across the world to drive it.


OpenHAB Aims to Bring Open Source and Local Control to IoT?

by Tom Simon on 02-02-2015 at 7:00 pm

The predominant model for IoT sensor data flow is data collection on the device with data storage, analysis and access in the cloud. By cloud, I mean that particular vendor’s servers. This is true for Fitbit, Nest, Dropcam, Trace Snow (my favorite skiing app), SmartThings, etc. If you look up IBM’s presumptuously named Internet of Things Foundation, you will see that it is mainly an effort on their part to drive adoption of their cloud backend for IoT applications. The same is true for the Intel push with Edison: the development kit includes access to the Intel cloud-based analytics service.

It’s understandable that the cloud can be used for heavy lifting in IoT applications. But the cloud can also be used to lock in customers and block competition. I took a quick look at the websites for several prominent IoT devices and they all offer an API for linking devices. This is good. For instance, when I go hiking I can connect my Fitbit to my Endomondo and get improved information about my activity. Fitbit tracks my steps but has no GPS; Endomondo is an app on my phone that can tell me my route, distance, elevation, etc. Together they give me a better picture of my activity. However, when my Fitbit is right next to my phone, it seems counterintuitive that I cannot sync it unless I have an internet connection.

With all of these devices dependent on the vendor’s cloud service, we are subject to its reliability and even to its very existence. If one of the above companies were to close its doors, I’d have a ‘brick.’ This is close to what is happening to buyers of the Revolv home automation hub after the company was bought by Google/Nest. This raises the existential question of who ‘owns’ the device. What if they decide I have violated their terms of service? Can they unilaterally cause my device to become a lump? Kai Kreuzer, project lead of the Eclipse Smart Home project, has an excellent slide that illustrates the present situation.
When it comes to your house, odds are that multiple vendors will provide the devices. I have a wifi-connected stereo receiver with AirPlay. My TV has wifi and an app. If I add smart lights, a Nest thermostat, security webcams, an alarm system, a garage door opener and other things, I most certainly will have an interoperability problem. Estimates are that our houses will have hundreds of connected things in the not too distant future.

Major manufacturers are already producing these products in volume. You can find them at Home Depot, Lowes and online; Lowes even has its own line of home control products called Iris. Some of the companies manufacturing smart home products include big names like Philips, GE, Leviton, and Schlage. These are light bulbs, wall switches, door locks, water flow sensors, leak detectors, motion sensors, etc. The list goes on, with more coming every day. There are standards for these home automation devices, such as Z-Wave and ZigBee, but things are still in the early days. We also have Insteon, wifi-based devices such as WeMo, and more.

Most of these devices come with their own app or remote control, and probably an internet-based service. There is a movement to consolidate them with hubs, the most common of which are made by SmartThings or Wink; even the office supply company Staples offers one. And these hubs mostly also use a cloud-based service provided by their vendor. One notable exception is the very successful Kickstarter project called the NEEO.

With the cloud-based hubs, there is still concern about availability, security, and privacy. Say your internet connection fails or the hub vendor has an outage: your security system will be offline or your door locks won’t be accessible. These systems know when you are home and away. If Wink or SmartThings has a security breach, then hackers could find out when you are not home. And, incidentally, the only way you will find out that they have had a security breach is if they tell you about it.
There is a grass-roots initiative to provide local control for these devices and systems. This is analogous to what happened with computer hardware and operating systems; we are witnessing a recapitulation of those earlier technology waves. Advocates of open and locally controlled devices, such as Kai Kreuzer, argue that innovation is fueled by open systems. He started the OpenHAB project, which stands for Open Home Automation Bus. It is a software package that can run on an open hardware platform like the Raspberry Pi. It is gaining momentum, but is nowhere near being a consumer option yet.

OpenHAB has spawned the Eclipse Smart Home project. Many software developers know the Eclipse Foundation as an organization that facilitates major open-source software development tools and projects. OpenHAB and the Eclipse Smart Home project will be very important for the future of the Internet of Things. Kai Kreuzer has a slide that calls for an ‘Intranet of Things’: the cloud can then be used for things like backup and for providing off-site connectivity where it is called for and makes sense.

They are proposing a software architecture and helping to build a portable stack based on it. It will allow new devices to be easily integrated, with code around those devices to support events from sensors. There are four components in the Eclipse Smart Home architecture: connectivity, automation, user interfaces and persistence.

Hackers, makers and home hobbyists are already busy applying OpenHAB, using open-source hardware and radios to build their own hubs. There is even one enterprising hacker who figured out how to root the low-cost, cloud-based Wink hub and replace its firmware with the OpenHAB code. The Wink hardware is a compact package that includes a processor along with WiFi, ZigBee and Z-Wave interfaces.
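The essence of the locally controlled, event-driven model is easy to sketch. The following is a minimal illustration inspired by the idea, not actual OpenHAB or Eclipse Smart Home code (OpenHAB itself is Java-based): devices publish events on a local bus, and automation rules react to them with no cloud round-trip. The topic names and the lux threshold are invented for the example.

```python
class LocalEventBus:
    """A tiny in-process event bus: connectivity + automation, no cloud."""

    def __init__(self):
        self.rules = {}   # topic -> list of rule callbacks
        self.state = {}   # item name -> last known state (persistence)

    def on(self, topic, rule):
        """Register an automation rule for a sensor topic."""
        self.rules.setdefault(topic, []).append(rule)

    def publish(self, topic, value):
        """A device publishes an event; matching rules run locally."""
        self.state[topic] = value
        for rule in self.rules.get(topic, []):
            rule(value, self.state)

# Example rule: turn the hall light on when motion is detected in the dark.
bus = LocalEventBus()
bus.on("hall/motion", lambda v, s: s.update(
    {"hall/light": "ON" if v and s.get("ambient/lux", 0) < 50 else "OFF"}))
```

If the internet connection drops, nothing here stops working: the rule runs entirely on the local hub, which is exactly the property the OpenHAB advocates are after.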
We are in the pioneering days of this technology, and it will serve us well to question the architectures that businesses are promoting. If consumers ask for local control of their devices, they can push the industry in that direction. Hopefully the traditions of open hardware and software will prevail, as they have for the PC and for Linux.


10 Cyber Security Predictions for 2015!

by Bill Boldt on 02-02-2015 at 3:00 pm


In 2014 worries about security went from a simple “meh” to “WTF!” Not only did high-profile attacks get sensational media coverage, but those incidents led to a pivotal judicial ruling that corporations can be sued for data breaches. And as hard as it is to believe, 2015 will only get worse because attack surfaces are expanding as mobile BYOD policies overtake enterprises, cloud services spread, and a growing number of IoT networks get rolled out. Add m-commerce, e-banking, and mobile payments to the questionable tradition of lax credit card security infrastructure in the U.S. and you get a perfect storm for cybercrime.

In fact, 92% of attacks across the range of segments come from nine basic sources (seen in the diagram below), according to Verizon. More numerous and sophisticated cyber crimes are anticipated for this year and beyond.

1. More companies to get “Sony’d”
2014 saw the release of highly evolved threats from criminals that in the past came only from governments, electronic armies and defense firms. The wide range of targets included organizations in retail, entertainment, finance, healthcare, industrial and military sectors, among countless others. As a repeat offender, Sony is now the cyber-victim poster child, and the term “Sony’d” has become a verb meaning digital security incompetence. Perhaps Sony’s motto should be changed from “make.believe.” to “make.believe.security.” Just saying!

Prior to 2014, companies on a wholesale basis tended to simply deny cyber vulnerabilities. However, a string of high-profile incidents and data breaches — Sony, Heartbleed, Poodle, Shellshock, the Russian Cyber-vor ring, Home Depot, Target, P.F. Chang’s, eBay, etc. — has changed all of that. Denial is dead, but confusion about what to do is rampant.

2. Embedded insecurity rising

Computing naturally segregates into embedded systems and humans sitting in front of screens. Embedded systems are processor-based subsystems that are “embedded” into other machines or bigger systems: routers, industrial controls, avionics, automotive engine and in-cabin systems, medical diagnostics, white goods, consumer electronics, smart weapons, and countless others. Embedded security was not a big deal until the IoT emerged, which will lead to billions of smart, communicating nodes. Forecasts call for 15 to more than 20 billion IoT nodes by 2020, which will create a gigantic attack platform and make security paramount.

A recent study by HP revealed that 70% of interconnected (IoT) devices have serious vulnerabilities to attacks. The devices they investigated consisted of “things” like cloud-connected TVs, smart thermostats and electronic door locks.

“The current state of Internet of Things security seems to take all the vulnerabilities from existing spaces – network security, application security, mobile security and Internet-connected devices — and combine them into a new, even more insecure space, which is troubling,” HP’s Daniel Miessler stated.

Issues HP identified ranged from weak passwords to lack of encryption, poor interfaces, troubling firmware, and unencrypted update protocols. Other notable findings included:

  • 60% of devices were subject to weak credentials
  • 90% collected personal data
  • 80% did not use passwords or used very weak passwords
  • 70% of cloud connected mobile devices allowed access to user accounts
  • 70% of devices were unencrypted

Investigators at the Black Hat Conference demonstrated serious security flaws in home automation systems. At DEFCON, investigators hacked NFC-based payment systems, showing that passwords and account data were vulnerable. They also revealed that the doors of a Tesla could be hacked open while the car was in motion. Nice! Other exploits were demonstrated against smart TVs, Boxee TV devices, smartphone biometric systems, routers, IP cameras, smart meters, healthcare devices, SCADA (supervisory control and data acquisition) devices, engine control units, and some wearables. Even simple USB firmware was proven to be highly vulnerable… "BadUSB."

These are just the tip of the embedded insecurity iceberg. Under the surface is the entire Dark Net, which adds even more treacherousness. Security companies like Symantec have identified home automation as a likely early IoT attack point. That is not surprising, because home automation will be an early adopter of IoT technologies. In-home appliances also represent an attractive attack surface, as more firmware goes into smart TVs, set-top boxes, white goods, and routers that also communicate. Node-to-node connectivity security extends to industrial settings as well.

Tools like Shodan, the Google of embedded systems, make it very easy for hackers to get into the things in the IoT. CNN recently called Shodan the scariest search engine on the Internet. You can see why, since everything that is connected is now accessible. Clearly, strong security, including hardware-based crypto elements, is paramount.

3. More storms from the cloud
It became clear in 2014 that cloud services such as iCloud, Google Drive, Dropbox, and others are rather large targets because they are replete with sensitive data (just ask Jennifer Lawrence). The cloud is starting to look like a technological Typhoid Mary that can spread viruses, malware, ransomware, rootkits, and other bad things around the world. As we know by now, the key to security is how well cryptographic keys are stored; Heartbleed taught us that. Expect the use of new technologies and more secure approaches to maintaining and controlling cryptographic keys to accelerate in 2015 to address endemic cloud exposure. Look for more use of hardware-based key storage.

4. Cyber warfare breaks out
eBay, PF Chang's, Home Depot, Sony, JP Morgan, and Target are well-known names on the cybercrime blotter, and things will only get worse as cyber armies go on the attack. North Korea's special cyber units, the Syrian Electronic Army, the Iranian Cyber Army (ICA), and Unit 61398 of the People's Liberation Army of China are high-profile examples of cyber-armies that are hostile to Western interests. Every country now seems to have cyber-army units conducting asymmetric warfare. (These groups are even adopting logos, with eagles appearing to be a very popular motif.)

Cyber warfare is attractive because government-built malware is cheap, accessible, and covert, and thus highly efficient. Researchers have estimated that 87% of cyber-attacks on companies are state-affiliated, 11% by organized crime, 1% by competitors, and another 1% by former employees. Long story short, cyber war is real and it has already been waged against non-state commercial actors such as Sony. It won’t stop there.

5. Cybercrime mobilizes
According to security researchers, mobile will become an increasingly attractive target for hackers. Fifteen million mobile devices are already infected with malware, according to a report by Alcatel-Lucent's Kindsight Security Labs. Malvertising is rampant on untrusted app stores, and ransomware is being attached to virtual currencies. Easily acquired malware generation kits and source code make it extremely easy to target mobile devices. Malicious apps take advantage of the WebKit plugin to gain control over application data, handing credentials, bank account details, and email over to hackers. What's more, online banking malware is also spreading: 2014 brought ZeuS, which stole data, and VAWTRAK, which hit online banking customers in Japan.

Even the two-factor authentication measures that banks employ have recently been breached by schemes such as Operation Emmental. Emmental is the real name of Swiss cheese, which of course is full of holes, just like the banking systems' security mechanisms. Emmental uses fake mobile apps and Domain Name System (DNS) changers to launch mobile phishing attacks that get at online banking accounts and steal identities. Some researchers believe that cybercriminals will increasingly use such sophisticated attacks to run illegal equity front-running and short-selling scams.

6. Growing electronic payments tantalize attackers

Apple Pay could be a land mine just waiting to explode due to NFC’s susceptibility to hacking. Google Wallet is an example of what can happen when a malicious app is granted NFC privileges making it capable of stealing account information and money. M-commerce schemes like WeChat could be another big potential target.

E-payments are growing, and with them so will attacks on mobile devices, using schemes ranging from FakeID to Master Key. Master Key is an exploit kit, similar to the Blackhole exploit kit, that specifically targets mobile, while FakeID allows malicious apps to impersonate legitimate ones, gaining access to sensitive data without triggering suspicion.

7. Health records represent a cyber-crime gold mine
Electronic Health Records (EHRs) are now mandatory in the U.S., and a vast amount of personal data is being collected and stored as never before. Because information is money, thieves will go where the information is (to paraphrase Willie Sutton). Health records are considered more valuable in the hacking underground than stolen credit card data. Criminals in both the U.S. and UK now specialize in health record hacking. In fact, the U.S. Identity Theft Resource Center reported 720 major data breaches during 2014, 42% of which involved health records.

8. Targeted attacks increase
Targeted attacks, also known as Advanced Persistent Threats (APTs), are very frightening due to their stealthy nature. The main differences between APTs and traditional cyber-attacks are target selection, silence, and duration of attack. According to APTnotes, the number of reported attacks went from 3 in 2010 to 14 in 2012 to 53 in 2014. APT targets are carefully selected, in contrast to traditional attacks that hit any available corporate target. The goal is to get in quietly and stay unnoticed for long periods of time, as seen in the famous APT attack on the networking company Nortel: Chinese spyware was present on Nortel's systems for almost ten years without being detected, draining the company of valuable intellectual property and other information. Now that's persistent!

9. Laws and regulations try to play catch up

A number of cybersecurity laws are being considered in the U.S., including the National Cybersecurity Protection Act of 2014, which advocates sharing cybersecurity information with the private sector and providing technical assistance and incident response to companies and federal agencies. Another to note is the Federal Information Security Modernization Act of 2014, designed to better protect federal agencies from cyber-attacks. A third is the Border Patrol Agent Pay Reform Act of 2013, intended to help recruit and retain cyber professionals who are in high demand. Additionally, there is the Cybersecurity Workforce Assessment Act, which aims to enhance the readiness, capacity, training, recruitment, and retention of the cybersecurity workforce. President Obama has stated that he wants a 30-day deadline for breach notices and a revised "Consumer Privacy Bill of Rights."

One of the more interesting and intelligent recommendations came from the FDA, which issued guidelines for wireless medical device security to ensure hackers cannot interfere with things such as implanted pacemakers and defibrillators. This notion was in part stimulated by worry about Dick Cheney's pacemaker being hacked; in fact, countermeasures were installed on the device by Cheney's doctors. More regulation of health data and equipment is expected in 2015.

“Security — or the lack of it — will largely determine the success or failure of widespread adoption of internet-connected devices,” the FTC Commissioner recently shared in an article. The FTC also released a report entitled, “Privacy & Security in a Connected World.”

10. Hardware-based security may change the game

According to respected market researcher Gartner, all roads to the digital future lead through security. At this point, who can really argue with that statement? Manufacturers and service providers are seeing the seriousness of the cyber-danger and are starting to integrate security at every connectivity level. Crypto element integrated circuits with hardware-based key storage are starting to be employed for that purpose. Furthermore, these crypto elements are a kind of silver bullet, given that they easily and instantly add the strongest type of security possible (i.e., protected hardware-based key storage) to IoT endpoints and embedded systems. This is a powerful concept whose fundamental value is only starting to be recognized.

Crypto elements contain cryptographic engines that efficiently handle functions such as hashing, sign-verify (e.g. ECDSA), key agreement (e.g. ECDH), authentication (symmetric or asymmetric), encryption/decryption, and message authentication coding (MAC), and they can run crypto algorithms (e.g. elliptic curve cryptography, AES, SHA), among many other functions.

The hardware key storage plus crypto engine combination in a single device makes it simple, ultra-secure, tiny, and inexpensive to add robust security. Recent crypto element products offer ECDH for key agreement and ECDSA for authentication. Adding a device with both of these powerful capabilities to any system with a microprocessor that can run encryption algorithms (such as AES) brings all three pillars of security (confidentiality, data integrity and authentication) into play.
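As a software analogy of what such a device does in silicon, here is a minimal sketch using Python's `cryptography` library (assumed available): ECDH establishes a shared AES key, AES-GCM provides confidentiality and data integrity, and ECDSA provides authentication. Variable names like `device_priv` are illustrative only; in a real crypto element the private key never leaves the chip.

```python
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Each side holds its own EC private key
device_priv = ec.generate_private_key(ec.SECP256R1())
host_priv = ec.generate_private_key(ec.SECP256R1())

# ECDH key agreement: both sides derive the same shared secret
shared = device_priv.exchange(ec.ECDH(), host_priv.public_key())
aes_key = HKDF(algorithm=hashes.SHA256(), length=32,
               salt=None, info=b"session").derive(shared)

# Confidentiality + data integrity: AES-GCM with the agreed key
nonce = b"\x00" * 12  # use a unique nonce per message in practice
ciphertext = AESGCM(aes_key).encrypt(nonce, b"sensor reading", None)

# Authentication: device signs, host verifies with the device's public key
message = b"firmware image"
signature = device_priv.sign(message, ec.ECDSA(hashes.SHA256()))
device_priv.public_key().verify(signature, message, ec.ECDSA(hashes.SHA256()))
```

The point of the hardware approach is that `device_priv` above would live inside protected key storage, so the three pillars hold even if the host software is compromised.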

With security rising in significance as attack platforms increase in size and threats become more sophisticated, it is good to know that solutions are already available to ensure that digital systems are not only smart and connected, but robustly secured by hardware key storage. This could be one of the biggest stories in security going forward.

Bill Boldt, Sr. Marketing Manager, Crypto Products Atmel Corporation




Inside tips on Tanner L-Edit toolbox

Inside tips on Tanner L-Edit toolbox
by Don Dingee on 02-02-2015 at 7:00 am

Advanced skill in auto repair, carpentry, plumbing, and similar trades often comes down to one factor: knowing what you want to do is one thing; having the proper tool is another, and it can make all the difference. Many a job has stretched from minutes to hours for lack of the right tool at the right moment. Experienced mechanics and contractors acquire and maintain tools in a toolbox, some used daily, some used occasionally, but all very valuable in earning a living.

An EDA “tool” is often comparable to a toolbox, containing many different implements with various uses. In an EDA toolbox, there may be many items a user is unaware of or unfamiliar with. Most designers quickly develop a basic level of proficiency using a fraction of the tools, but real productivity depends on exploiting the entire set of items at their disposal.

Tanner EDA has a toolbox for analog, MEMS, and mixed-signal design: L-Edit. As the name implies, L-Edit does layout editing, but it includes far more powerful capability than the name suggests. Some of the notable features: Schematic Driven Layout (SDL), which can take netlists from T-Spice and other tools; interactive design rule checking (DRC) that shows violations in real time while editing; and node highlighting, which visualizes connectivity by displaying all geometry connected to a point based on connectivity rules.

Most EDA companies offer training to help familiarize users. Tanner EDA recently hosted a webinar with an application engineer walking through his inside view of L-Edit and situations designers can leverage.

Shortcuts to Streamline IC Layout and Productivity (look for title under On-Demand Webinars)

This webinar is for the intermediate L-Edit user, or for a designer using competing analog layout tools and considering switching to Tanner EDA. It is not intended to be a tutorial on analog design; rather, it focuses on the tasks an analog designer typically performs while capturing designs in a layout tool, and how to configure and use L-Edit to accomplish them. Throughout the narrative, host Thuong U shows menu selections and points out keyboard shortcuts.

As with most modern computer-aided layout tools, perhaps inspired by Adobe Creative Cloud and others, L-Edit makes use of aerial toolbars. These aerial toolbars can be docked, resized, hidden, pinned, and otherwise manipulated on-screen to fit user preference. Thuong begins the session showing his preferred screen layout, starting with a pair of DFFC instances. He shows several situations using align, distribute, and snap-to-grid, including how to quickly select the proper objects. He then moves into creating, rotating, and reflecting polygons, differentiating how base points are used to place and pivot objects.

Thuong then moves into Boolean operations. These can work on individual objects or an entire design. His example combines shapes into a single element, much faster and more effective than trying to draw complex shapes manually. He then shows how layers work, and how to control the visibility of details using the TAB key while moving a cell. As an example of DRC, he moves a polygon into a DFFC, with a minimum-distance violation prevented unless a connection is intended. He then filters layers, for example showing only poly, active, or metal; layer visibility can be stored in a layer setup. Thuong then tackles one of the more challenging tasks: establishing vias, using templates and adding features like guard rings.

One of the powerful features in L-Edit is T-cells, which allow parameterization. This is handy for device generation, such as capacitors, concentric rings, or other items that can be described mathematically. Another power-user feature is layout-versus-layout, a sophisticated visual “diff” that can quickly spot areas that have been changed or added between selections. Thuong closes with a short Q&A, addressing questions such as how to add an image (like a logo) to a layout.
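The T-cell idea is, at its core, geometry generated from parameters. Purely as an illustration of that concept (this is not L-Edit's actual T-cell API), a parameterized concentric-ring generator might look like:

```python
def ring_cell(n_rings, width, pitch):
    """Return (inner, outer) half-dimensions for n concentric square rings.

    Illustrative only: a real T-cell would emit layout geometry through
    L-Edit's own interfaces rather than return coordinate pairs.
    """
    rings = []
    for i in range(n_rings):
        inner = i * pitch          # where this ring starts
        outer = inner + width      # ring thickness is 'width'
        rings.append((inner, outer))
    return rings

# Changing one parameter regenerates the entire structure
print(ring_cell(3, 2, 5))  # → [(0, 2), (5, 7), (10, 12)]
```

This is why T-cells pay off for mathematically describable devices: one parameter change regenerates every ring consistently, with no manual redrawing.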

L-Edit users will likely derive a lot of benefit from this webinar and its tips, which might make a big difference in achieving layouts faster.


IP Market at Your Desk!

IP Market at Your Desk!
by Pawan Fangaria on 02-01-2015 at 4:00 pm

Semiconductors have played a very important role in making the internet successful, and that has unleashed the potential of e-commerce. Today, we see names like Alibaba, whose primary focus is commodity trade. I couldn’t imagine an e-commerce-style web portal for semiconductor services until I looked at the eSilicon website. What an innovative idea; it opens new paradigms! It comes at the right time, when the semiconductor industry feels an acute need for IP cataloging, selection, and instant procurement. The idea goes beyond IP to other semiconductor services, such as instant quotes for full-chip manufacturing and automatic tracking of progress through the supply chain. In this article, I will talk about IP.

When I attended an on-line webinar about their IPM portfolio, posted on the eSilicon website, I found it simply amazing. It can do everything the semiconductor business requires of a semiconductor IP; a general e-commerce portal cannot do that much for its commodities. Imagine you need to look at and try on a pair of shoes or apparel to check the fit; you can’t do that at an e-commerce portal, you have to go to the store!

At eSilicon, after registering at the site, you can review the available eSilicon-developed IP, build one according to your need, download it, try it in your SoC to see if it fits well, and only then buy it. IPM release 1.0, launched last December, includes all features up through free trial of the IP. The next release, coming in March, will add on-line procurement, so stay tuned if you have already done free trials and plan to buy on-line!

Currently the IP includes memory compilers and standard and specialty IO libraries. One can choose a particular foundry and technology node to see all IP available for that combination. In the picture above, the IP available with TSMC at 28nm are displayed; there are 29 in total with TSMC, of which 10 are at 28nm. By selecting any product, one can see all its features and options. For silicon-verified products, there is an option to display a ‘silicon report’ for users to review all test results, the chip description, correlation analysis, and so on. User manuals are also on-line.

To compile a memory instance, one can choose from many available options for the memory parameters, PVT corners, and various output views. For the GDS view, one needs special privileges to gain access. One can even request a PVT corner that is not available with the standard IP on the shelf.

During memory subsystem analysis, one can see a memory instance’s parameters in an Excel spreadsheet and add more rows to generate more instances with varying parameters. A batch file can be created for the whole memory subsystem and uploaded.
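The spreadsheet-to-batch workflow can be sketched as follows. This is a hypothetical illustration only: the column names and file format are invented, not eSilicon's actual batch format. Starting from one instance row, varying a parameter generates more rows, which are then emitted as a single batch file.

```python
import csv
import io

# Invented base instance spec (illustrative column names)
base = {"instance": "ram", "depth": 1024, "width": 32, "pvt": "TT_0p9V_25C"}

def expand_depths(base, depths):
    """Generate one instance row per depth value, like adding spreadsheet rows."""
    rows = []
    for d in depths:
        row = dict(base, depth=d)
        row["instance"] = f"{base['instance']}_{d}x{base['width']}"
        rows.append(row)
    return rows

rows = expand_depths(base, [512, 1024, 2048])

# Emit all instances as one batch file for the whole memory subsystem
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["instance", "depth", "width", "pvt"])
writer.writeheader()
writer.writerows(rows)
batch_file = buf.getvalue()
```

The appeal of the batch approach is that the whole subsystem's worth of instances is specified and uploaded in one step rather than compiled one at a time.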

The PPA (power, performance, and area) evaluation and comparison between technologies can be done with dynamically updating graphs. Once a designer is completely satisfied with a particular IP on-line, she can download it and try it with her SoC to validate it further. That’s true value to delight the customer: “try before you buy.”

A complete data file with PPA numbers for all instances can be exported and used for further analysis and decisions about power budgeting, size, configuration, and so on for a memory subsystem in an SoC. IP across different technologies, architectures, PVTs, etc. can be easily compared.
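One simple form such offline analysis of an exported PPA file might take is ranking instances by a figure of merit. The sketch below is hypothetical: the instance names, numbers, and the merit function are all invented for illustration, not taken from eSilicon's data.

```python
# Invented PPA rows, as might be parsed from an exported data file
instances = [
    {"name": "ram_1024x32_28nm", "power_mw": 1.8, "area_um2": 9500.0, "freq_mhz": 800.0},
    {"name": "ram_1024x32_40nm", "power_mw": 2.6, "area_um2": 14200.0, "freq_mhz": 600.0},
]

def merit(inst):
    # Higher is better: more speed per unit of power-area product
    return inst["freq_mhz"] / (inst["power_mw"] * inst["area_um2"])

# Pick the best instance across technologies for this (invented) metric
best = max(instances, key=merit)
print(best["name"])
```

In practice a designer would weight power, area, and frequency to match the SoC's actual budget rather than use a single fixed ratio.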

If a suitable IP is not found on the eSilicon portal, a special request can be filed, and eSilicon will try to obtain it. This on-line portal is an effective way to get optimized and differentiated IP that can accelerate SoC development to meet today’s aggressive time-to-market windows.

It’s a novel concept by eSilicon, and it is getting great customer feedback. Once this first baseline has established credibility and robustness and the on-line procurement process is going full stream, eSilicon plans to expand further with more IP, technologies, and partners.

Go through this ~30-minute free on-line demo, in which Lisa Minwell takes you through the complete working model of IPM and gives an interesting feel for on-line IP purchase. You won’t have to work through weeks to procure an IP.

Also view “Real-world benefits of on-line GDSII and MPW quotes” and “The 10-Minute Tapeout Quote” videos. They are available on the same page and are very interesting.