
The Mechanical Reliability of IC Packages

by Daniel Payne on 01-31-2016 at 12:00 pm

At Intel back in the late 1970s we were designing DRAM chips and mounting them in ceramic and plastic packages. We ran into problems, however, when some of the die cracked inside the package because of a thermal mismatch in how the die was attached to the heat spreader. Back then we really didn't have any computer models of the complex chip-to-package interface, or any scientific way to simulate the reliability of the packaging process. Fast forward to today, and clever computer scientists have indeed figured out a way to model and simulate the reliability of IC packaging. To learn more about these advancements I spoke with Harish Surendranath, an expert at Dassault Systemes (DS). Harish graduated in 2001 with an MS degree in Mechanical Engineering and joined ABAQUS Inc., which was acquired by Dassault Systemes in 2005.

When I talk about the mechanical reliability of IC packaging there are several areas of concern:

  • Thermal cycling
  • Moisture sensitivity
  • Vulnerability to shock and drop
  • Package to board interconnects
  • 2.5D and 3D packaging issues
  • Use of TSV and silicon interposers
  • Electromigration

Instead of the old-fashioned method of building a prototype and then testing it, you can perform a virtual product qualification, which not only saves a lot of time but also provides detailed insight into the response of the package, helping to ensure its reliability. SIMULIA is DS's brand for realistic simulation. Abaqus Unified FEA (Finite Element Analysis) from SIMULIA is widely used in the industry to improve the reliability of packages. As packages become more complex, the number of potential weak spots grows rapidly. Simulation is critical to understanding the response under different operating conditions so that you can modify the design to avoid potential failures.
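As a back-of-the-envelope illustration of why the thermal mismatch mentioned above cracks die, here is a sketch of the classic thin-film biaxial stress estimate for a silicon die bonded to a copper heat spreader. The material values are textbook approximations and the geometry is idealized; this is exactly the kind of first-order hand estimate that a full 3D FEA model in Abaqus replaces with a resolved stress field.

```python
# First-order estimate of the biaxial thermal stress induced in a silicon die
# bonded to a copper heat spreader, from the CTE mismatch between the two.
# Illustrative textbook values; a real FEA model resolves the full 3D field.

def biaxial_thermal_stress(e_modulus_pa, poisson, delta_cte_per_k, delta_t_k):
    """sigma = E / (1 - nu) * delta_alpha * delta_T (thin-film biaxial approximation)."""
    return e_modulus_pa / (1.0 - poisson) * delta_cte_per_k * delta_t_k

E_SI = 130e9        # Young's modulus of silicon, Pa
NU_SI = 0.28        # Poisson's ratio of silicon
CTE_SI = 2.6e-6     # CTE of silicon, 1/K
CTE_CU = 17e-6      # CTE of copper, 1/K
DELTA_T = 195.0     # cool-down from ~220 C die attach to 25 C room temperature, K

stress_pa = biaxial_thermal_stress(E_SI, NU_SI, CTE_CU - CTE_SI, DELTA_T)
print(f"Estimated biaxial stress: {stress_pa / 1e6:.0f} MPa")
```

A stress of this magnitude, concentrated at die corners or attach voids, is what makes the die-crack failures described above plausible even when each material is individually well within spec.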

Xilinx is a user of the SIMULIA software: in 2011 they used Abaqus for package thermomechanical simulation and analysis of their four-slice 28nm chip, mounted on a silicon interposer with thousands of micro-bumps.


Overall Assembly. Source: Xilinx


TSV Interposer


Micro-bumps

Package thermomechanical simulation and analysis were run with Abaqus using 3D models to help understand warpage of the package during thermal stressing. Here’s a schematic of the package configuration:

DS has collaborated with industry leaders and research institutions for over 20 years to apply and enhance simulation technology to improve package reliability. Engineers from Intel and DS presented a paper at the 2007 ASME International Mechanical Engineering Congress and Exposition on modeling for solder joint fatigue reliability. They analyzed a land grid array (LGA) socket with its package, where solder balls are attached on J-lead paddles:


LGA socket with its package

The direct cyclic procedure, combined with a global-local approach in Abaqus, was used to predict the failure location on the ball and to obtain a more accurate mechanical response of the BGA during fatigue.
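To illustrate how such fatigue predictions are typically post-processed, here is a hedged sketch of a Coffin-Manson style life estimate driven by the inelastic strain range that an FEA run reports at a candidate failure site. The constants are illustrative placeholders, loosely based on published Engelmaier-type values for solder, and are not the calibration used in the Intel/DS paper.

```python
# A Coffin-Manson style estimate of solder joint fatigue life from the
# inelastic strain range computed per thermal cycle by an FEA run.
# The constants below are illustrative placeholders; real models calibrate
# them to the specific solder alloy and temperature profile.

def cycles_to_failure(inelastic_strain_range, fatigue_ductility=0.325, exponent=-0.442):
    """Coffin-Manson: N_f = 0.5 * (delta_gamma / (2 * eps_f))**(1 / c)."""
    return 0.5 * (inelastic_strain_range / (2.0 * fatigue_ductility)) ** (1.0 / exponent)

# Example: strain ranges extracted at candidate failure sites on the ball.
for strain in (0.005, 0.010, 0.020):
    print(f"strain range {strain:.3f} -> ~{cycles_to_failure(strain):.0f} cycles")
```

The key point is the strong nonlinearity: halving the per-cycle strain range at the worst ball buys far more than twice the life, which is why accurately locating and resolving that worst site matters so much.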

Accurate characterization of power supply systems requires electromagnetic field simulation to account for the various electromagnetic interactions in packages. The importance of using a 3D field simulator stems from the multilayer stack-up, non-ideal return current paths, and the presence of three-dimensional geometries such as BGAs and bond wires. DS partner CST provides engineers access to 3D full-wave simulation. This partnership was announced in May of last year.

As you can imagine, these full 3D simulations are very compute-intensive, so the solver technology has been parallelized: a typical simulation can be run on 16 to 32 cores or more and deliver results in hours. With each new generation of workstations, these simulation approaches for package reliability become more practical than ever.

Also Read

Semiconductors and Conflict Minerals

An Easier Way to Reach Design Closure for SoC

How Can Big Data and EDA Tools Help?


5nm Chips? Yes, but When?

by Pawan Fangaria on 01-31-2016 at 7:00 am

For any invention, technical proof of concept or prototyping happens years before the invention finds its way into actual products. When we talk about 5nm chip manufacturing, a test chip was already prototyped last October, thanks to Cadence and Imec. Details about this chip can be found in a blog at SemiWiki (the link is given at the end). So, where is the technical wall for Moore's Law? Yes, it's slowing down, but is this the end? When can we expect commercial 5nm chips?


Not a single week passes without hearing about Moore's Law: Moore's Law is dead, long live Moore's Law, More than Moore, steps to keep Moore's Law going… We have to realize one thing: all the dots have to connect for a particular technology, process, product, or service to become successful.

Discontinuities often arise, in different forms, in any established technological or business process. In transistor density scaling on silicon, we saw one after the 28-22nm nodes; after a brief pause, a structural change in the transistor was needed to go below 20nm, and FinFET arrived (FD-SOI is now also a contender). FinFET has extended to 10nm. Below 10nm, we are again debating whether FinFET should extend further or whether the transistor needs another structural change: the nanowire FET (Gate-All-Around).

There are pros and cons for each alternative. Availability of EUV could simplify lithography and reduce cost to a certain extent, but what if EUV is not ready in time (it has already been delayed, according to technology evangelists)? Should we then extend 193nm immersion technology, even though that increases cost and complexity? It is clear that what worked linearly from 180nm to 28nm is not working in the same fashion below that; below 22nm, discontinuities appear after every couple of nodes. New solutions have to arrive.

Now let's connect the dots: technology, business, economy, technology consumption patterns, and the future scenario. Here we find a business-level discontinuity (with respect to silicon density scaling) which indirectly affects the technology roadmap.

What Moore's Law has provided us so far seems good enough for a good number of years to come. The technology with the most traction at this juncture is IoT, and 28nm seems good enough for IoT from all angles (performance, power, cost, and so on). Automotive is fine with 28nm and above. The same holds true for other segments such as home, personal, industrial, and so on. These are business-level discontinuities for silicon density scaling. Smartphones with current process technology and 4G, and upcoming 5G, would be more than enough in terms of performance, data processing, and so on, and we are seeing a slowdown in smartphones, the largest driver of semiconductors. Where do we need a super high-end device requiring 5nm? And who can afford it, given that it is going to be hugely expensive? We need to first consume what we have. Let's look at the other aspects.


An IC Insights report on worldwide semiconductor R&D growth says that industry-wide R&D expense grew by just 0.5% in 2015, while R&D expense among the top 10 R&D spenders grew 2%. Although Intel is the top R&D spender, accounting for 22% of total worldwide semiconductor R&D expense, Intel's R&D expense grew by only 5% in 2015, compared with its average R&D expense growth of 13% since 2010.

Now look at the areas into which the world's major investment is flowing today; it's not semiconductors:


Top areas for venture capital investment are Software, Biotechnology, Media & Entertainment, IT Services, and so on; semiconductors come way down the list, beyond 10th rank.

What's the future outlook for semiconductors? Another IC Insights report says that in 2016 worldwide IC market growth will remain between 2% and 6%, with 4% as the mid-point. It cautions that growth is more likely to land at the low end of that range than the high end, taking into account worldwide financial stress and a more or less deflationary trend.

Combining all these factors, what I envision is that a couple of years from now the ecosystem, the broader community, will be asking the semiconductor community: take what you have (14nm, 28nm, 45nm, it doesn't matter what), integrate it heterogeneously into a system along with MEMS (and FPGAs), and show us that it solves our purpose with minimum cost, lowest energy consumption, good-enough performance, good configurability, and safety and security, automating and innovating our existing premises. The focus is on products and systems rather than on the semiconductor process technology. That's one of the reasons software comes out on top in VC investment: software connects things together securely and safely.

So, how do you see the future of semiconductor technology progressing? Do not forget that there has been massive consolidation (attributable in part to financial stress), which will give way to innovation in the future. While semiconductor process innovation goes on, the broader community has to consume what it already has on its plate, through different forms of innovation at the system level using existing process technology. Commercial proliferation of new semiconductor technology will be slow. In my view, I do see 5nm chips coming at some point, but not before the middle of the next decade.

For 5nm test chip read the blog: IMEC and Cadence Disclose 5nm Test Chip

Pawan Kumar Fangaria
Founder & President at www.fangarias.com


2016 Samsung Foundry Update!

by Daniel Nenni on 01-30-2016 at 9:30 am

When sketching out the chapters for our book "Mobile Unleashed" we sought out the events and technology that empowered the mobile devices that literally changed our world. One of the companies that enabled this change, of course, is Samsung. Cleverly embedded in chapter 8, "To Seoul, via Austin," is the story of how Samsung got into the foundry business. Once you read this chapter and see how Samsung Foundry got to where they are today, you will be in a much better position to understand where they are going tomorrow, absolutely.


To get the most recent update on Samsung Foundry I talked to my friend Kelvin Low. Kelvin started as a process engineer at Chartered Semiconductor in 1998 and transitioned to GlobalFoundries after the acquisition in 2009, which is where we first met. Disclaimer: I'm a Kelvin fan and was thrilled when he joined Samsung Foundry as Senior Director of Marketing, but I digress…

Kelvin shared a nice slide deck that highlighted Samsung's FinFET leadership and their next-generation 14nm LPP (Low Power Plus) process. Remember, Samsung was the first foundry to ship FinFET, with the 14nm LPE-based Exynos 7420 SoC, quickly followed by the Apple A9. LPE stands for "Low Power Early," by the way, and it certainly was. 14nm LPP boasts a 14% performance increase through process/device optimization and is in mass production with high-volume devices. One of those devices is the Snapdragon 820, Qualcomm's latest and greatest beast of an SoC, which is getting impressive reviews thus far. Another is of course the latest SoC from Samsung, the Exynos 8890. Both chips will be used in the Samsung Galaxy S7, and benchmarks have already been published, with the Exynos winning on multicore Geekbench 3 and the Snapdragon 820 winning on single core. The Galaxy S7 is scheduled for launch March 11th in the US.


"We are pleased to start production of our industry-leading, 2nd generation 14nm FinFET process technology that delivers the highest level of performance and power efficiency," said Charlie Bae, Executive Vice President of Sales & Marketing, System LSI Business, Samsung Electronics. "Samsung will continue to offer derivative processes of its advanced 14nm FinFET technology to maintain our technology leadership."

Incorporating a three-dimensional (3D) FinFET transistor structure enables a significant performance boost and low power consumption. Samsung's new 14nm LPP process delivers up to 15 percent higher speed and 15 percent lower power consumption than the previous 14nm LPE process through improvements in transistor structure and process optimization. In addition, the use of fully-depleted FinFET transistors brings enhanced manufacturing capabilities to overcome scaling limitations.

The big question is of course who will have majority share of the Foundry FinFET market this year and next, Samsung or TSMC?

Most of us Silicon Valley folks already know which foundry the top fabless semiconductor companies are using, but I'm not going to spoil it for you. Here is a tip though: take a look at LinkedIn profiles from your favorite fabless company and see whether their engineers are designing to 14nm or 16nm (14nm is Samsung and 16nm is TSMC). This is a one-time offer though, because the next node is 10nm for all foundries, and engineers are not allowed to put foundry names on their profiles.

“Mobile Unleashed” is available now in print and Kindle formats on Amazon.

More articles from Daniel Nenni


Domain Crossing Verification Needs Continue to Grow

by Bernard Murphy on 01-29-2016 at 4:00 pm

Clock domain crossing (CDC) analysis has been around for many years, initially as special checks in verification or static timing analysis, but it fairly quickly split off into specialized tools focused just on this problem. CDC checks are important because (a) you can lose data, or even lock up, at or downstream of a poorly-handled crossing; (b) deciding whether a crossing is well- or poorly-handled is not always easy; (c) this is the kind of problem you really need to check completely, which requires static rather than dynamic analysis; and (d) there are a lot more of these things scattered across modern designs.

On that last point, while the number of clock domains in a typical design seems to be growing more or less linearly over time, the number of clock domain crossings is growing exponentially. And these are now quite strongly interrelated with power domain crossings and reset domain crossings. Which means you can’t just look at the clocks anymore. You have to look at the clock intent, the power intent and the reset intent and how those propagate through the design to be sure you are going to catch all potential crossing problems.

So you do static analysis, looking at SDC for clock definitions, UPF for power intent and maybe a side-file for reset definitions. That finds the crossings and, with some additional structural analysis, can check if you are using approved structures to handle crossings. But it doesn’t check functional correctness, say on a fast-to-slow crossing, so you add formal checks, which completely prove the functional correctness or otherwise of the crossing. Except when they don’t. Formal, as we know, doesn’t always quite live up to its “completeness” billing. Sometimes it is only able to partially prove an assertion; sometimes it can prove (or partially prove) an assertion only if you supply a number of constraining assumptions. So you need to be able to fall back easily to simulation to test within expected use-models that the partial assertion and assumptions remain valid. This is generally called a hybrid flow – mixing formal and simulation verification of crossings.
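The structural half of that flow can be caricatured in a few lines. The sketch below, with entirely hypothetical flop and net names, flags any crossing whose destination flop sits in a different clock domain and is not an approved synchronizer cell; real CDC tools derive the same information from the netlist plus the SDC clock definitions, and the functional checks described above still apply to the crossings this pass approves.

```python
# Toy sketch of structural CDC checking: given flops annotated with clock
# domains and a fanout graph, flag any cross-domain path whose destination
# is not an approved synchronizer cell. All names are hypothetical.

flops = {
    "tx_data": {"clock": "clk_fast", "synchronizer": False},
    "rx_meta": {"clock": "clk_slow", "synchronizer": True},   # first stage of a 2-flop sync
    "rx_raw":  {"clock": "clk_slow", "synchronizer": False},  # unsynchronized sink
}
fanout = {"tx_data": ["rx_meta", "rx_raw"]}  # source flop -> destination flops

def find_unsafe_crossings(flops, fanout):
    violations = []
    for src, sinks in fanout.items():
        for dst in sinks:
            crosses = flops[src]["clock"] != flops[dst]["clock"]
            if crosses and not flops[dst]["synchronizer"]:
                violations.append((src, dst))
    return violations

print(find_unsafe_crossings(flops, fanout))  # [('tx_data', 'rx_raw')]
```

Even this caricature shows why the problem scales badly: the check walks every cross-domain fanout path, and it is the number of crossings, not the number of domains, that explodes on modern designs.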

And then there are those pesky resets. Poorly-managed reset domain crossings are becoming a significant problem in their own right. This arises in part through use of asynchronous resets. While many designers suggest async resets should be avoided, sometimes you have no choice, especially if you can’t be guaranteed an active clock when the reset is applied (if the clock is gated, for example). In the example below, if RESET1 is released close to when CLK is turned on and RESET2 is not active, there is potential for metastability in the second flop, for pretty much the same reason you might get metastability between two flops with asynchronous clocks.

This class of problem is amplified by the growing number of reset domains (and even more rapidly growing reset domain crossings) found in modern devices: cold resets, warm resets, resets which skip configuration registers, resets which skip retention registers and more. And then there are the complex sequences through which these domains must be controlled. Reset domain crossing (RDC) checks are starting to appear and I'm sure we'll see more growth in static/formal techniques for this area.

And then there’s the perennial question of coverage. This is a part of verification, so you really want to know how completely you covered these areas. Definitions and techniques for formal coverage are still emerging, but it appears some initial approaches to coverage are already possible, per class of crossing-problem. This is an important step in quantifying the degree to which domain crossing checks have been proven, which is obviously significant for overall confidence in verification completeness and may become increasingly important in safety-critical applications.

Synopsys dominates (by a wide margin) the domain-crossing checking market with their SpyGlass CDC product, so you probably should stay in touch with what they’re doing and where they’re going. You can learn more from THIS WEBINAR.

More articles by Bernard…


Evaluating the Performance of Design Data Management Software

by Karim Khalfan on 01-29-2016 at 12:00 pm

In the wake of increased global competitiveness and shorter time-to-market windows, there has been a renewed focus by design management on the underlying data management infrastructure of design teams. An increasing number of systems-on-chip (SoCs) now have some type of analog, digital and/or RF module, making it imperative for analog, RF and digital designers to collaborate well on designs and to share design data easily across multiple design sites. This year, more than 121 billion analog integrated circuits (ICs) are predicted to be used in electronic devices (Ref: Semico Research), representing substantial analog content in the SoCs being shipped. And with RF components also slowly making inroads into SoCs, the development lines between analog, digital and RF design are slowly but surely starting to blur.

Given the challenges of managing large design teams with different technical abilities spread across different locations, design management in a number of semiconductor companies has begun to realize the necessity and benefits of providing a common data management solution capable of handling all types of designs: analog, RF, digital and mixed-signal. Some of the most important criteria that design teams look for when evaluating SoC design data management software are:

  • Performance and fault tolerance
  • Security
  • Robustness
  • Disk usage
  • Ease of use
  • Project management

In this article, we will discuss one of the criteria for selecting data management software, namely performance and fault tolerance.

Most semiconductor design companies, including startups, are nowadays inclined to have remote design centers in different locations, such as India, China, Vietnam, Korea, etc. To ensure efficient 24×7 productivity for the designers, the performance of a design data management solution becomes a key criterion. Performance depends on a number of factors, such as network speed and disk access times, the architecture of the design data repository and the manner in which design data is transferred, the use of caches at design sites, how user workspaces are updated, multi-streaming, etc. Hardware and network considerations aside, let us review some of the other factors that impact performance, beginning with the selection of the type of design repository.

There are two main types of design data repositories, Centralized and Distributed:

Figure 1: Centralized design data repository

The centralized repository is historically the oldest type of design data repository and is widely deployed by a number of software data management solutions. One of the main advantages of the central repository is that it is easy to set up, and it typically works well when the entire design team is based in a single location. On the flip side, in a centralized design repository there is only one database for all design projects, and tasks such as browsing the revision history of a file, creating a tag, and comparing different versions of a design object are limited by the speed of the network connection.

Unfortunately, the centralized repository model does not fit well in today's SoC design environment, as most semiconductor design companies, including small startups, have remote design centers, and the sizes of design projects are relatively large. If a project is large, or a large number of projects are hosted at a site, WAN traffic increases, which leads to sluggish and sub-optimal performance at remote sites. Data management operations that need to contact the primary server, such as check-in/check-out, tagging, and browsing and comparing version history, are affected most, with the WAN link acting as a bottleneck.

A centralized repository also becomes a liability for design companies by introducing a single point of failure. Any problem with the network hosting the design repository, or any loss of connectivity between the primary and a remote site, can have severe implications: the entire design team, located at multiple sites worldwide, is unable to access the design data. The resulting delays can have an adverse effect on the design schedule as well.

Figure 2: Distributed design data repository

A distributed design data repository, on the other hand, is more flexible and provides better fault tolerance. It allows the design data to be partitioned so that design centers can have their own localized design repositories. This ensures that design teams at remote sites benefit in performance, since the design data is stored at the same location as the user and is typically on a high-speed, reliable network.

It is very common practice when designing SoCs to use existing design projects (IPs), some of which may not be hosted on the same design site as the user. The distributed repository architecture provides designers with the flexibility to reference projects from other design sites and get updates as needed. Designers working locally are notified whenever changes are made to the referenced projects, regardless of their location, and can choose when to update the IP version. An additional benefit of a distributed repository is that there is no single point of failure: a network failure at one design site has minimal impact on the rest of the design team at other locations, as they use their own local repositories.

One must note that the concept of distributed repositories in the semiconductor industry is slightly different from that in the software world. Software data management systems such as Git replicate the repository onto each user's work area, making it an independent revision control repository that need not be connected to a central repository. Users of such systems can create their own revisions and eventually merge them into the main repository, reconciling the changes, if any. For software or text files this approach works well, but when it comes to merging large binary files, as is the case with SoC designs, reconciling design modifications can often be a challenge.

For most remote design sites, the bottleneck has been the time it takes to update a work area and obtain all the changes made at other sites. A number of files, perhaps hundreds, varying in size, might have changed since the user last updated the work area, and all of them must be brought across the WAN. This can have a big impact on the productivity of users at remote design sites. To resolve this problem, it becomes necessary to have a cache server that caches the latest versions of all the files used at that design site. With a cache present, users get the latest versions of files from the local cache across the LAN instead of the WAN. This approach greatly reduces WAN bandwidth usage and dramatically improves performance.

The manner in which a user’s workspace is managed by the data management system will also have an impact on performance. A user’s workspace can typically have a large number of files, of different types and varying sizes. In a typical design development environment, each user has a workarea with a physical copy of a revision of each file. In such cases, it can take a considerable amount of time to update the work area with the latest version of the files from either the local cache or across the WAN.

A links-to-cache mechanism, on the other hand, provides better performance: work areas are created using files that are symbolic links into a cache managed by a cache server. The cache server maintains a copy of the revisions of files being used by all the work areas at a design site. A physical file is copied into the workspace only when the user intends to edit it, and the symbolic links are updated only when the user decides to update the workspace. Creating a symbolic link into the cache is a much faster operation than physically copying the latest version of the files, some of which may be quite large. In the links-to-cache model, physical copies are maintained only for the files that need to be edited. This model provides users with a stable work area over which they have full control (even when symbolic links are being used), faster performance, and optimized disk space through links to shared revisions. Additionally, backups and management of the work area are much faster because of the symbolic links.
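The links-to-cache idea can be sketched in a few lines of Python. This is only a toy model, with hypothetical paths and file names rather than any real tool's on-disk format: the workspace is populated with symbolic links into a site-local cache, and a link is replaced by a private writable copy only when a file is checked out for edit.

```python
# Toy model of a links-to-cache workspace: symlinks into a local cache,
# with copy-on-edit. Paths and file names here are hypothetical.

import os
import shutil

def populate_workspace(cache_dir, workspace_dir, files):
    """Create the workspace as symlinks into the cache (fast, near-zero disk)."""
    os.makedirs(workspace_dir, exist_ok=True)
    for name in files:
        link = os.path.join(workspace_dir, name)
        if os.path.lexists(link):
            os.remove(link)
        os.symlink(os.path.join(cache_dir, name), link)

def checkout_for_edit(cache_dir, workspace_dir, name):
    """Copy-on-edit: swap the symlink for a private, writable copy."""
    link = os.path.join(workspace_dir, name)
    os.remove(link)
    shutil.copy(os.path.join(cache_dir, name), link)

# Usage sketch with a fabricated cache containing one design file.
cache, ws = "/tmp/sos_cache_demo", "/tmp/sos_ws_demo"
os.makedirs(cache, exist_ok=True)
with open(os.path.join(cache, "top.sch"), "w") as f:
    f.write("cell top v3\n")

populate_workspace(cache, ws, ["top.sch"])
print(os.path.islink(os.path.join(ws, "top.sch")))   # True: just a link
checkout_for_edit(cache, ws, "top.sch")
print(os.path.islink(os.path.join(ws, "top.sch")))   # False: private copy
```

Updating such a workspace means rewriting a handful of links rather than copying gigabytes, which is where the performance and disk-space benefits described above come from.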

In addition to selecting the correct type of repository, it is also important to ensure that the design data management software makes full use of the available bandwidth. Multi-streaming of data (not to be confused with multi-threading) ensures that, once a transfer starts, there is no latency between the various operations required for it, and the network bandwidth is used optimally.

Figure 3: Multi-stream – uses bandwidth more efficiently to improve performance
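The benefit of multi-streaming can be illustrated with a toy simulation: when every file transfer pays a round-trip latency, issuing requests over several concurrent streams hides most of that latency compared with a strictly sequential transfer. The "network" below is just a sleep; real implementations multiplex streams over the WAN link itself.

```python
# Toy illustration of multi-streaming: concurrent streams hide the
# per-file round-trip latency that a sequential transfer pays every time.

import time
from concurrent.futures import ThreadPoolExecutor

LATENCY_S = 0.05  # simulated per-file round-trip latency

def fetch(name):
    time.sleep(LATENCY_S)  # stand-in for one request/response round trip
    return name

files = [f"cell_{i}.oa" for i in range(20)]  # hypothetical design files

start = time.perf_counter()
for f in files:                               # sequential: pays latency 20 times
    fetch(f)
sequential = time.perf_counter() - start

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:  # 8 concurrent streams
    results = list(pool.map(fetch, files))
streamed = time.perf_counter() - start

print(f"sequential: {sequential:.2f}s, 8 streams: {streamed:.2f}s")
```

With 20 files the sequential pass pays the latency 20 times in a row, while 8 streams pay it only about 3 times, which is the intuition behind Figure 3.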

Design teams will continue to grow and spread across multiple locations, and file sizes will continue to grow by leaps and bounds. Under such constraints, it is natural to continue to leverage advancements in data network speeds and protocols, as well as to develop better mechanisms to improve performance. With tight design schedules, most designers would like the latest version of the design data at their fingertips. At ClioSoft, we are committed to making that happen.

About ClioSoft
ClioSoft, Inc. is the pioneer in the field of SoC design configuration management and enterprise IP management solutions for the semiconductor industry. Built exclusively for hardware design engineers, with flexibility to adapt to complex flows, ease of use and robustness as the main drivers, ClioSoft's SOS Design Collaboration Platform empowers design teams located at multiple sites to collaborate efficiently on complex RF, analog, digital and mixed-signal designs. Using SOS, design teams can streamline the development of complex SoC designs from design specification to tapeout by efficiently sharing and managing their design data across different design centers using a distributed, fault-tolerant architecture. To handle the requirements of complex SoC design flows, the SOS platform is integrated with major EDA flows: Cadence's Virtuoso® technology, Keysight Technologies' Advanced Design System (ADS), Mentor Graphics' Pyxis Custom IC Design, Synopsys' Galaxy Custom Designer® and Laker3™ Custom Design.

The sole objective of ClioSoft’s collaborative IP management solution is to improve design reuse within a company. ClioSoft helps semiconductor companies catalog and manage internal and third-party IPs, providing an easy-to-use administration and user cockpit to manage the process of creating IPs and their derivatives, their lineage, IP licensing, security, and issue and defect tracking.

Also Read

The Case for Data Management Amid the Rise of IP in SoCs

ClioSoft SOS v7.0: Faster, Smarter and Stronger

Starvision and SOS, a Perfect Match


New Vision for Traffic Cameras in 2016

by Roger C. Lanctot on 01-29-2016 at 7:00 am

A picture may be worth a thousand words, but a traffic camera video may be worth millions. TrafficLand is poised to transform forever the use of traffic camera video information with demonstrations at the CES show in Las Vegas. In the process, TrafficLand will be overcoming years of industry ambivalence toward the use of traffic camera info across a wide spectrum of transportation applications.
Continue reading “New Vision for Traffic Cameras in 2016”


IoT in Action: Snowy Smart Cities

IoT in Action: Snowy Smart Cities
by Pranay Prakash on 01-28-2016 at 4:00 pm

I have a gorgeous view outside my home: everything is covered under more than two feet of snow. My flight for tomorrow is cancelled and schools are shut down until Tuesday. For my children, it's probably a dream come true, but for businesses it doesn't help when things come to a standstill. For the many who need to be on the road, safety is a huge concern, and reports of damage and injuries keep appearing on the news. But the authorities have been working diligently to keep the city functioning, and technology, specifically the Internet of Things (IoT), has been helping. You might wonder what snow and IoT have to do with each other. Well, some of the key elements of IoT have already been in play this week across the East Coast, and some are likely to appear in the near future in cities globally:


Predictive Analytics
I must admit to being impressed by the weather folks, who predicted the timing and quantity of snowfall close to the actual outcome, giving county authorities ample time to prepare for severe weather and respond in a timely manner. Jonas, despite being a powerful and potentially harmful blizzard, will be remembered as a proud moment for the meteorologists who predicted the power of the storm well in advance. Meteorologists use a wide spectrum of data (you can call it big data) and algorithms to predict upcoming weather. Predictive analytics use cases have been talked about in IoT for the past several years: if you know more in advance, you can do better. The National Weather Service has helped do exactly that using available data and the latest technology. The New York Times cited Dr. Louis W. Uccellini, the director of the National Weather Service. "We're living in extraordinary times," he said. "This is something that the entire enterprise has been working on for decades."

Connected Snow Fleet Management

GPS-connected snow removal fleets have been operating in snowy environments for the past few years. Even in the current blizzard, GPS technology has allowed authorities to monitor each snowplow to ensure it stays on its assigned route. Additionally, since many independent contractors are involved, authorities can drive greater accountability by verifying that these vehicles are working on the routes assigned to them. As per AMNY, the walls of New York City's Snow Removal Operations Center are lined with screens showing the location of the fleet in real time, as well as cameras showing real-time conditions across the city on key boulevards and highways.

Remote Connected Home Devices

The importance of internet-connected devices grows during such weather conditions. For example, if you're stuck at an airport and unable to fly home, you're probably using your remote thermostat and other connected devices, such as lights and door locks, like never before to ensure peace of mind. Over the next several days, as semi-normal life resumes and people return to frozen houses, many will use a phone app to warm the house before arriving. If you have a connected heated driveway and are traveling, you can turn the heating on remotely so you return to better conditions.


Future Smart Cities – Self Driving Cars, Drones & Lots of Sensors

I am certainly living in somewhat of a smart city, and I am thankful that modern means have allowed me and my family to stay safe in the snow. In developing and under-developed cities without basic infrastructure, realizing these IoT benefits becomes harder, but future technologies will make cities smarter globally, and things will hopefully not have to come to a standstill during such severe weather. Will self-driving cars help us? Machines that know how to move in snow without human intervention might sound scary, but it can be argued that autonomous vehicles can be programmed to function better than humans in severe weather conditions. Drones could also prove very useful for important deliveries, so you don't have to drive to the store for urgent items or even regular groceries. If you have a passenger drone, you might even be able to get to the office. And more sensors in the roads can help authorities measure the amount of snow around the city and manage the snow removal fleet better.

These are just some examples. In the near future, although we might not be able to solve every major issue associated with snow and other severe weather, IoT definitely promises to address some of them and bring normalcy back to our lives faster. I look forward to hearing your thoughts.


The Fine Art of Engineering

The Fine Art of Engineering
by Nazita Saye on 01-28-2016 at 12:00 pm

There’s a small art gallery near the office. It features a new set of paintings by a local artist every two weeks. As I walk by I tend to check out what’s hanging in there. Sometimes I turn up my nose at what I see – a bit too wacky, a bit too abstract, a bit too paint-by-numbers. Sometimes I walk in to take a closer look but leave the shop empty-handed due to “budgetary restrictions”. But sometimes I walk out with a nice shiny piece of art. You see, a while back I decided I was all grown up, so I was no longer going to buy mass-produced stuff. Three years on, I have a handful of pieces by local artists from the gallery gracing the walls of my house.

Art is a funny thing. Something that is aesthetically pleasing to me may not be to you. So let’s go back to the basics. What is art? A quick online search gives me this: art is an expression of creative skill or imagination. Sometimes in a visual form. Something that evokes a feeling.

Given this definition, almost anything can be considered art. For example, sometimes I look at a CFD plot and find it so aesthetically pleasing that I think, hey, you could hang that on a wall in your house. Some folks may turn up their nose at this, but if you forget about the science behind it (i.e. heat flowing through a piece of electronics), then who is to say it isn’t art? We could even say CFD stands for Color For Decorating 😀

Here’s a favorite image of mine created for a magazine cover by Robin Bornoff a few years ago –

I even nominated it when our facilities team was looking for artwork to use for a lighting fixture in the office. Two years on and I still think it’s one of the coolest things ever (although if I were to do this I would have used LEDs to illuminate the streamlines):

And I’m glad that I’m not the only one who feels that simulation plots can be art. The good folks behind the Design Automation Conference (DAC) are organizing a silicon and technology art show at next year’s event. Everyone is encouraged to send in artwork, and a select few will have their pieces framed and displayed at the show. There are no hard and fast rules as to what they are looking for … you can submit anything you like as long as it’s EDA, electronics or semiconductor related. Personally I think you can generate some stunning images based on thermal simulation. For more information and submission details please follow this link. Their deadline is March 24, 2016 so there is a bit of time.

I really hope you submit some of your work and I can’t wait to see you get some well-deserved kudos for the work you do. In fact, I’d like to ask you to send a copy of your images to me too at my Mentor address (nazita underscore saye at mentor dot com). I’d love to do an online gallery. Wouldn’t that be the coolest thing? I promise I’ll ask for your permission if I decide to frame one and put it on the wall in my house.

Until next time,
Nazita


True Random Number Generation

True Random Number Generation
by Bernard Murphy on 01-27-2016 at 4:00 pm

Random numbers are central to modern security systems. The humble password, perhaps the least profound application, is hashed and verified using SHA or MD algorithms with a random salt. You probably remember a college class on how to generate pseudo-random numbers algorithmically, some of the techniques very sophisticated. But most of them have been consigned to the trashcan of crypto-history because they simply aren’t random enough (survivors live on in low-security applications).
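To see why these textbook generators fall short, here is a minimal sketch of a linear congruential generator (LCG), a classic pseudo-random algorithm. The constants below are glibc's well-known rand() parameters; the LCG class itself is my own illustration, not anything from this article. The point is that the whole future of the stream is determined by a single observed output:

```python
class LCG:
    """Classic linear congruential generator (glibc rand() constants)."""

    def __init__(self, seed):
        self.state = seed

    def next(self):
        # The state advances by a fixed affine map, so anyone who
        # observes one output knows the entire future sequence.
        self.state = (1103515245 * self.state + 12345) % 2**31
        return self.state

gen = LCG(seed=42)
leak = gen.next()        # an attacker observes a single output...
clone = LCG(seed=leak)   # ...and can clone the generator exactly
assert [gen.next() for _ in range(5)] == [clone.next() for _ in range(5)]
```

This total predictability, not any statistical defect, is what makes such generators unusable for cryptography.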

Why is not-random-enough a problem? Because it means that your random numbers are distributed over a much smaller state-space than you think, and therefore brute-force guessing is much more likely to succeed when using a powerful machine and reasonable assumptions about likely seeds (time of day for example). Worse yet, our intuition on what is truly random seems to be quite weak, so our attempts to correct for biases are typically much less effective than we think.

The hardware engineer in us jumps next to what we think are high-entropy sources on-chip: shot noise, process variations amplified through ring oscillators and so on. These are good starting points, but preserving randomness in conversion to a number (or stream of numbers) requires a lot of care not to re-introduce bias. Is the distribution really uniform, or does digitization introduce bias? Worse yet, are you using a source which seems like it should be random but may instead be quite predictable, such as the power-up state of a RAM?
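One standard way to remove bias from a raw entropy source is Von Neumann's corrector, which looks at raw bits in pairs and keeps one bit only when the pair disagrees. This is a simplified sketch (the simulated biased source stands in for a real on-chip noise source, and real conditioners must also handle correlated bits, which this does not):

```python
import random

def von_neumann_extract(bits):
    # Take raw bits in pairs: emit the first bit of a (0,1) or (1,0)
    # pair, discard (0,0) and (1,1). For independent but biased bits,
    # P(0,1) == P(1,0), so the output is unbiased -- at the cost of
    # discarding most of the raw stream.
    return [a for a, b in zip(bits[::2], bits[1::2]) if a != b]

# Simulate a heavily biased raw source: 75% ones.
random.seed(0)
raw = [1 if random.random() < 0.75 else 0 for _ in range(100_000)]
unbiased = von_neumann_extract(raw)
ratio = sum(unbiased) / len(unbiased)  # close to 0.5 despite the 0.75 bias
```

Note the throughput penalty: with a 75/25 source, only about 37% of the pairs survive, which is one reason real TRNGs use more elaborate conditioning.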

And then there are other factors you need to consider. Can the operation of the generator be observed through a side-channel attack (observing power consumption, timing or EM radiation)? There are even sophisticated statistical methods to learn a profile of how a function behaves, especially under environmental changes, without needing to understand why it behaves that way, and these can be used to hack the device.

There’s a huge amount of subtlety to getting this right and a lot of standards to which you will need to show compliance if you want to be taken seriously: NIST SP 800-22, SP 800-90 and FIPS 140-2. This is a domain for experts only. Enjoy trying to understand all the complexity but build your designs around professional True Random Number Generators (TRNGs).
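To give a flavor of what these standards require, the first and simplest test in the NIST SP 800-22 suite is the frequency (monobit) test, sketched below. The formula follows the published spec (bits mapped to ±1, partial sum normalized, p-value via the complementary error function); the surrounding harness is my own illustration:

```python
import math

def monobit_test(bits):
    """NIST SP 800-22 frequency (monobit) test.

    Maps bits to +/-1, sums them, and returns a p-value via erfc.
    The stream fails this (very basic) check when p < 0.01.
    """
    n = len(bits)
    s = sum(1 if b else -1 for b in bits)
    s_obs = abs(s) / math.sqrt(n)
    return math.erfc(s_obs / math.sqrt(2))

# A stuck-at-one source fails immediately; a balanced stream passes.
p_stuck = monobit_test([1] * 1000)      # p effectively 0
p_balanced = monobit_test([0, 1] * 500) # s == 0, so p == erfc(0) == 1.0
```

Passing this test is necessary but nowhere near sufficient; SP 800-22 defines fifteen tests, and a serious TRNG must clear all of them along with the SP 800-90 design requirements.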
You can learn more about the Synopsys TRNG IPs HERE.

More articles by Bernard…