Moving from SRAM to DDR DRAM in Safety Critical Automotive Systems
by Eric Esteve on 12-20-2016 at 12:00 pm

Until now, most of the processors contained within automobiles could be served by SRAM, with the exception of infotainment systems relying on a more powerful CPU connected to DRAM; these systems, however, are non-safety-critical. Advanced Driver Assistance Systems (ADAS) and self-driving vehicle systems demand powerful processors that require the memory capacity and bandwidth that is only possible with DRAM. Designers need to precisely understand the differences between DRAM and SRAM in terms of reliability, temperature sensitivity and refresh requirements before moving from (embedded) SRAM-based computing to an external DRAM-based architecture, especially for safety-critical automotive systems.


In the above table (DRAM vs SRAM Use in Automotive Applications) the reader can identify the main differences. The latency associated with DRAM is larger than for SRAM and, most importantly, can be non-deterministic. DRAM technology is also characterized by the need for periodic refresh to avoid losing the data held in memory.

The core of a DRAM chip is an analog array of bit-cells that operate by storing a small amount of charge on a capacitor within each bit-cell – just a few tens of femtofarads, or a few tens of thousands of electrons per bit, on a DRAM device with 4 or 8 billion bits per die. The rate of leakage is dependent on temperature, with more leakage at higher temperatures, and the automotive segment requires devices to run fully at spec at higher temperatures than consumer or even industrial parts. This results in specially designed DRAMs targeted towards automotive applications.
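
As a back-of-envelope check on those numbers (my own arithmetic, not from the original paper): with Q = C x V, a roughly 10 fF cell written to about 0.5 V holds around 5 fC of charge, and dividing by the electron charge of 1.6 x 10^-19 C gives on the order of 30,000 electrons per stored bit, which is why a little leakage or a single ionizing particle is enough to flip a cell.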

In terms of reliability, DRAM devices are also susceptible to soft errors due to Single Event Upsets (SEUs) – the effect of ionizing radiation on the DRAM device. As a consequence, a bit-cell may lose its charge, and again error correction should be employed to recover the lost data. The impact of soft errors can be dramatic and is obviously not acceptable for safety-critical automotive systems.

The SoC has to integrate Error Correction Code (ECC) mechanisms when using an external DRAM. We will see below which types of ECC may be implemented in automotive systems to prevent error propagation.

At this point, you may ask why anyone would integrate DRAM into automotive systems at all, considering these drawbacks!
The answer is simply that there is no choice but to use DRAM when designing compute-intensive automotive systems, as DRAM is an enabling technology for these three automotive advances:

1. Displays: High definition displays generally require DRAM, and as displays like instrumentation consoles and heads-up displays relay safety-critical information to the driver, DRAM is needed in this safety-critical application.
2. ADAS systems that process camera and high-bandwidth sensor input: The cameras and other sensors that provide the input to the ADAS system generate a large amount of data which also requires further processing to remove noise, adjust for different lighting conditions, and to identify objects and obstacles. This kind of processing requires the bandwidth and capacity of DRAM.
3. Self-driving vehicles: Self-driving vehicles require processing of a number of high-bandwidth input sources and intense computation, making DRAM a necessity.

The most common DRAM device for new ADAS designs is the LPDDR4 SDRAM. LPDDR4, originally designed for mobile devices, offers a balance of capacity, speed and form factor that is attractive for automotive applications. As a result, LPDDR4 has been automotive qualified by DRAM manufacturers and is available in automotive temperature grades.

Even with careful physical interface design, there is a non-zero bit error rate at LPDDR4 data transmission speeds, so the risk of data transmission errors must also be addressed. There are a few ways to mitigate errors that may occur in DRAM devices and prevent them from propagating into the rest of the system.

The DRAM manufacturer may attempt to create a bit-cell that is more temperature resistant, or the DRAM manufacturer may introduce error correction within the DRAM die to correct for the bit-cells which have lost their charge between refreshes. Even if error correction is present within the DRAM die, the SoC designer may also introduce error correction on the DRAM interface to correct errors in the DRAM.


In traditional DDR DRAM designs such as servers and networking chips, any error correction is usually transmitted side-band to the DRAM data. However, the arrangement of LPDDR4 into 16-bit channels (2 channels per die, 2-4 dies per package, 4 channels per package) makes it highly impractical to implement sideband pins with which to transmit sideband Error Correcting Code (ECC) data. In that case, an in-line ECC scheme may be used, which transmits the ECC data on the same data pins as the data it protects (above figure).
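
To illustrate the mechanics behind any such ECC scheme (the article does not specify which code is used; DRAM controllers typically use a SEC-DED code over 64-bit words with 8 check bits), here is a toy C++ sketch of a Hamming(7,4) single-error-correcting code. It encodes 4 data bits into 7, and the syndrome computed on read points directly at a flipped bit so it can be corrected:

    #include <cstdint>
    #include <cstdio>

    // Toy Hamming(7,4) SEC code: 3 parity bits protect 4 data bits.
    uint8_t hamming74_encode(uint8_t data4) {
        uint8_t d1 = (data4 >> 0) & 1, d2 = (data4 >> 1) & 1;
        uint8_t d3 = (data4 >> 2) & 1, d4 = (data4 >> 3) & 1;
        uint8_t p1 = d1 ^ d2 ^ d4;   // covers codeword positions 1,3,5,7
        uint8_t p2 = d1 ^ d3 ^ d4;   // covers codeword positions 2,3,6,7
        uint8_t p3 = d2 ^ d3 ^ d4;   // covers codeword positions 4,5,6,7
        // Codeword layout (positions 1..7): p1 p2 d1 p3 d2 d3 d4
        return (p1 << 0) | (p2 << 1) | (d1 << 2) | (p3 << 3) |
               (d2 << 4) | (d3 << 5) | (d4 << 6);
    }

    uint8_t hamming74_decode(uint8_t cw, bool *corrected) {
        uint8_t s1 = ((cw >> 0) ^ (cw >> 2) ^ (cw >> 4) ^ (cw >> 6)) & 1;
        uint8_t s2 = ((cw >> 1) ^ (cw >> 2) ^ (cw >> 5) ^ (cw >> 6)) & 1;
        uint8_t s3 = ((cw >> 3) ^ (cw >> 4) ^ (cw >> 5) ^ (cw >> 6)) & 1;
        uint8_t syndrome = s1 | (s2 << 1) | (s3 << 2);  // 1-based error position
        *corrected = (syndrome != 0);
        if (syndrome) cw ^= (1 << (syndrome - 1));      // flip the bad bit back
        return ((cw >> 2) & 1) | (((cw >> 4) & 1) << 1) |
               (((cw >> 5) & 1) << 2) | (((cw >> 6) & 1) << 3);
    }

    int main() {
        uint8_t cw = hamming74_encode(0xB);
        cw ^= (1 << 4);                  // simulate a single-event upset
        bool fixed;
        uint8_t data = hamming74_decode(cw, &fixed);
        printf("recovered 0x%X, corrected=%d\n", data, fixed);  // 0xB, 1
    }

An in-line scheme stores the check bits in the same DRAM as the data and fetches them over the same pins, trading a little bandwidth and capacity for protection; a sideband scheme would instead carry them on extra pins.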

In conclusion, DRAM devices are clearly an enabling technology for advancements in automotive safety, features, and convenience. With careful design and stringent processes, DRAM can be introduced into safety-critical areas of the automobile to provide the high bandwidth and large capacity needed for driver information systems, ADAS, and self-driving vehicles.

This article was inspired by the excellent paper “Understanding Automotive DDR DRAM” written by Marc Greenberg, Product Marketing Director, in the Synopsys DesignWare Tech Bulletin.

You will find an exhaustive list of Synopsys Automotive Grade DDR interface IP including PHYs, Controllers, Verification IP, architecture design models, and prototyping systems here.

From Eric Esteve, IPnest


Building a Virtual Prototype
by Bernard Murphy on 12-20-2016 at 7:00 am

I wrote recently about how virtual prototypes (in the form of VDKs) can help embedded software teams practice continuous integration. Synopsys has just released a white paper detailing a practical approach to building a VDK, using the Juno ARM development platform (ADP) to illustrate. Just as a reminder, the point of a virtual prototype is to provide software developers a platform on which they can start development while the hardware is still in design. This is a software model capturing just enough architectural detail to track hardware behavior with reasonable accuracy, but avoiding implementation detail so it can run at close to real-time speed.

You can see the block diagram for the Juno ADP above. It has lots of functionality – two Cortex clusters, a Mali GPU, a control processor, debug, cache-coherent interconnect and TrustZone security, and a bunch of peripherals. If you take a serial view of “first build the VDK, then use that as a platform for software development” this task could look daunting. Worse yet, you might find that your polished VDK becomes available for use only as first silicon samples appear!

But it doesn’t have to happen that way – VDK development can be pipelined just like any other phase in product design. A software stack in development doesn’t need all features of the virtual prototype to be available from day 1. Components of the software are themselves developed in a pipelined fashion – development of the bootloader, OS, drivers, middleware and applications launches progressively and will require access to control, communications, audio and other layers only as those components evolve. As a result, the virtual prototype can start as a relatively incomplete model with many stubs and can be refined as the software stack itself evolves.

The minimum component you need to start is the compute subsystem, and this is also the easiest part. You just pick up the ARMv8 starting-point VDK, do a very small amount of configuration, and that part is done. You will want to add stubs for peripherals and other components in the Juno design since you’ll need to reference these in the next steps. This is pretty simple in Virtualizer Studio (VS). Where you already have models available you can of course use them, but otherwise you can simply add stubs, then add and type their interfaces as needed (memory-mapped, interrupt, clock, etc.).

Next you’ll want to add memory-map information, effectively a simple table of masters, slaves and address ranges, since this is fundamental to modeling the hardware-software interface. VS doesn’t require blocks to be connected explicitly in the model to support the memory map; it will generate a basic routing mechanism for whatever map you define.
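
As a purely illustrative C++ sketch of what that table captures (this is not the Virtualizer Studio API, and the peripherals and addresses are hypothetical), an address decoder over such a map is just a range lookup:

    #include <cstdint>
    #include <string>
    #include <vector>

    // Illustrative memory-map table: each entry says which slave owns an
    // address range. A virtual prototype tool generates the equivalent
    // routing for you from the same information.
    struct MapEntry {
        std::string slave;   // e.g. "ddr", "uart0", "gic"
        uint64_t    base;
        uint64_t    size;
    };

    class Router {
    public:
        void add(const MapEntry& e) { map_.push_back(e); }

        // Decode an address issued by any master to the owning slave.
        const MapEntry* decode(uint64_t addr) const {
            for (const auto& e : map_)
                if (addr >= e.base && addr < e.base + e.size)
                    return &e;
            return nullptr;   // unmapped: a stub response or bus error
        }
    private:
        std::vector<MapEntry> map_;
    };

    int main() {
        Router bus;
        bus.add({"ddr",   0x80000000ULL, 0x80000000ULL});  // 2 GB DRAM window
        bus.add({"uart0", 0x1C090000ULL, 0x1000ULL});      // hypothetical offset
        const MapEntry* hit = bus.decode(0x80001000ULL);   // resolves to "ddr"
        return hit ? 0 : 1;
    }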

You’ll also need an interrupt map specification, again a simple table of devices and interrupt slots. Note also that in stub models, VS supports simple scripting so you can implement simple behavior for those endpoints without having to manually create a TLM. You can also add clock and reset tree information and model off-chip interfaces in a simple way, to model for example serial I/O. These features together should get you to your first-order VDK on which software development can start. Certainly, with the CPU models in place you should have enough to start testing basic boot.

As development progresses, you’re going to want to replace more of your stub models with real models. VS supports an extensive library covering most standard peripherals among other function types. You might choose to include these in your first revision of the VDK or you might want to introduce them in subsequent revisions. In many cases this is very simple – select the full model you want to use and the stub it will replace. VS will figure out and reconnect memory and interrupt maps and other connections as needed. If the full model supports more than one memory map (or other connections) you’ll need to specify which one should connect to the prototype and provide a way to stub other connections, but again this is a pretty simple substitution.

Even for those functions for which the VS library cannot supply a ready-made VDK you can still defer a lot of the heavy lifting in building a full TLM model. TLM Creator (inside VS) will let you import an interface definition and register map from Excel or IP-XACT, or you can create these through simple table interfaces in the tool. This will build a skeleton model you can use in place of a stub. And when you want to complete the model, you’ll often find the skeleton already has 50% or more of the code on which you need to build.
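
To give a feel for what such a generated skeleton amounts to, here is a plain C++ sketch (not the SystemC/TLM code the tool actually emits; the timer block and its registers are made up): essentially a register map with read/write hooks waiting for behavior to be filled in.

    #include <cstdint>
    #include <map>

    // Skeleton peripheral model: registers and reset values come straight
    // from the spreadsheet or IP-XACT description; behavior is added later.
    class TimerSkeleton {
    public:
        TimerSkeleton() {
            regs_[0x00] = 0x00000000;   // CTRL
            regs_[0x04] = 0x00000000;   // STATUS
            regs_[0x08] = 0xFFFFFFFF;   // LOAD
        }
        uint32_t read(uint32_t offset) const {
            auto it = regs_.find(offset);
            return it != regs_.end() ? it->second : 0;   // read-as-zero otherwise
        }
        void write(uint32_t offset, uint32_t value) {
            regs_[offset] = value;
            // Counting, interrupt generation, etc. get added when the
            // skeleton is promoted to a full functional model.
        }
    private:
        std::map<uint32_t, uint32_t> regs_;
    };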

The white paper provides much more detail on using VS in support of this kind of flow.

More articles by Bernard…


ARM and Open Silicon Join Forces to Fight the IoT Edge Wars!
by Mitch Heins on 12-19-2016 at 4:00 pm


I spent the last several days doing a deep dive into the world of IoT security and what I’ve learned has scared the pants off me. Various analysts predict that there will be over 30 billion connected IoT devices by the year 2020, growing from 9.9 billion in 2013. A quick audit of my home identified over 40 connected devices including everything from iPhones, laptops, and smart TVs to security cameras and motion detectors. My security system alone had 10 individually addressable devices. With approximately 124 million households in the United States, if we say 20% of them are as connected as I am, that means roughly 1 billion IoT devices today. Pick your favorite multiplier, but at the rate we are moving 30 billion sounds low for 2020. All of this connectivity sounds good, right? What could possibly go wrong?

On October 21, the U.S. experienced one of the largest DDoS (distributed denial of service) attacks ever recorded, one that took down a major portion of the eastern seaboard’s internet service for over an hour. The attack made use of tens of millions of discrete IP addresses representing IoT devices infected by the Mirai botnet. These compromised IoT devices were used to bombard a major DNS vendor, effectively taking out the internet for much of the region. It seems that the industry needs to snap to attention to get this under control moving forward.

On that note, I was fortunate enough to attend a webinar by ARM and Open Silicon that addressed how to improve the security of IoT edge devices. This diagram provided by ARM explains that IoT chains are typically made up of sensors/actuators at the edge talking to gateways and then cloud-based servers. Devices within the cloud tend to be less vulnerable to attack as they are kept under constant surveillance in controlled environments. The edge devices, however, live “in the wild,” and early versions of these devices really didn’t have much thought put into them regarding security. With the October attack, we have come to the sudden realization that these devices can in fact be used against us.

Yossi Weisblum of ARM and Kalpesh Sanghvi of Open Silicon both made it very clear that future IoT devices must have security designed into them from the beginning. This starts with understanding the threat surface (i.e. the types and techniques of threats that must be mitigated) for a given device and identifying the right level of security required. Both of these gentlemen went on to describe some basic tenets of good design-for-security and how their offerings give an IoT designer a fighting chance.

One of the key tenets of establishing security is what is known as RoT or Root of Trust. A RoT is some piece of code or hardware that has been hardened well enough that it’s not likely to be compromised, and either can’t be modified at all, or else can’t be modified without cryptographic credentials. IoT edge devices must be enabled to be secure by default. ARM’s TrustZone CryptoCell family of security IP enables IoT designers to design RoT into their edge devices. CryptoCell starts by isolating the execution environment of the edge device into multiple domains, those that ensure RoT and those that will be exposed to the outside world. CryptoCell also provides for secure boot and OTA (over the air) update capabilities to enable real-time resets and updates for edge devices that may be under attack or compromised.
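
As a sketch of the secure-boot idea (generic C++ of my own; the function names below are placeholders and not CryptoCell or mbed OS APIs), the root of trust holds an immutable public key and refuses to hand control to any image whose signature does not verify against it:

    #include <cstddef>
    #include <cstdint>

    struct Image { const uint8_t* bytes; size_t len; const uint8_t* signature; };

    // Placeholders for services a hardware root of trust would provide.
    const uint8_t* rot_public_key();                       // held in ROM/OTP
    void sha256(const uint8_t* d, size_t n, uint8_t out[32]);
    bool signature_verify(const uint8_t* key, const uint8_t digest[32],
                          const uint8_t* sig);
    void jump_to(const Image& img);
    void enter_recovery();                                 // e.g. wait for an OTA update

    void secure_boot(const Image& next_stage) {
        uint8_t digest[32];
        sha256(next_stage.bytes, next_stage.len, digest);
        // Only boot an image whose signature checks out against the key the
        // root of trust holds; otherwise fall back to recovery / OTA.
        if (signature_verify(rot_public_key(), digest, next_stage.signature))
            jump_to(next_stage);
        else
            enter_recovery();
    }

Each stage of the boot chain repeats this check on the next stage, which is how a compromised firmware image gets rejected before it can run.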

TrustZone’s modular approach enables designers to make PPA (power, performance and area) trade-offs as not all devices share the same security needs. The TrustZone domains address control and scheduling, data interfaces, encryption and other security resources such as crypto key generation in hardware. It should be noted however, that ARM’s solution is actually a combination of both hardware and software. To that end, ARM’s offering also includes their mbed OS operating system that integrates the rest of ARM’s IP with the integrated security modules of CryptoCell. The mbed OS also includes application support for creating secure transmissions using mbed TLS and mbed Client, both of which use the encryption/decryption of the CryptoCell hardware IP.

Ok, so given you have all of ARM’s security IP, it’s still a challenge to figure out how to make an optimal IoT device for your application. Open Silicon has stepped in to help with a reference architecture for IoT design called Spec2Chip. The Spec2Chip architecture is based on ARM’s Cortex-M series processors and the TrustZone CryptoCell security subsystem. This reference architecture gives designers a dramatic head start and includes the ability to prototype their designs in an FPGA-based implementation before committing to a full production SoC for their IoT edge devices. The solution includes both the hardware stack based on ARM IP as well as application-specific software stacks that make use of ARM’s mbed OS with associated drivers for sensors and communications modules. The idea is to provide a platform that allows both hardware and software designers to quickly build and test their proposed edge devices. The FPGA implementation includes the critical ARM security IP including key generation, encryption/decryption and authentication functions for real-world testing.

So, while I am tempted to now unplug all of my early version IoT accessories for the sake of the country, it is certainly heartening to know that the industry has indeed awoken to this IoT edge threat and is fielding some very powerful solutions to take back the internet. If you are an IoT edge device designer be sure to check out ARM and Open Silicon’s offerings.

See also:
Security on ARM
ARM TrustZone CryptoCell
Open Silicon ARM Cortex-M IoT SoC Platform


How ARM designs and optimizes SoCs for low-power
by Daniel Payne on 12-19-2016 at 12:00 pm

ARM has become such a worldwide powerhouse in delivering processors to the semiconductor IP market because they have done so many things well: IP licensing model, variety, performance, and low-power. On my desk are two devices with ARM IP, a Samsung Galaxy Note 4 smart phone and a Google tablet. Most of my readers will likely have a few ARM-powered consumer devices within their grasp. I just finished reading a new White Paper authored by both ARM and Mentor engineers on the topic of, “Low-Power Design is a Corporate Mindset at ARM”. The team at ARM takes a system-level approach to managing and minimizing power for their semiconductor IP spanning all the way from software in an app down to the transistor-level design decisions.


The green shaded area focuses on how to get power-efficient RTL IP in a three-step process.

1. Setting Low Power Objectives

It really takes a team to get the lowest power for an ARM IP, and each member plays a slightly different role.

  • ISA architects – keep the ARM architecture power-efficient
  • System architects – decide which system and IP power management approach to take
  • Technical lead – specifies and manages power targets with RTL designers
  • RTL designers – code the power reduction scheme
  • Implementation designers – both measure and analyze the power, collaborate with RTL designers

Objectives are met by first doing a top-down power budget, looking at each power component, having members compete for lowest-power, keeping area and toggles minimized, focusing on energy efficiency, and knowing how the RTL code gets synthesized into process-specific gates.

2. Using a Low-power Design Flow

The low-power development cycle has four major parts starting at requirements and ending with measurements against objectives.

Engineers at ARM use an EDA tool from Mentor called PowerPro to help in three tasks:

  • Analysis of RTL and gate-level power
  • Exploring RTL power
  • Reducing RTL power

Here’s the low-power IP design flow, showing where the PowerPro tool comes into play for analyzing, exploring and reducing power:

RTL Power Analysis

How can you quickly analyze power without a gate-level netlist? The PowerPro tool does a pseudo-synthesis step to create a gate-level prototype, which can take just an hour for a CPU or GPU design.

Further improving the accuracy of the gate-level prototype requires an estimate of the physical interconnect using SPEF (Standard Parasitic Exchange Format); that step enables PowerPro to generate power numbers within 15% of actual gate-level results.
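
Under the hood this kind of estimate boils down to the standard dynamic power relation P = alpha * C * V^2 * f evaluated per net, with the toggle rate coming from simulation activity and the capacitance from the prototype netlist plus SPEF parasitics. A minimal C++ illustration with made-up numbers:

    #include <cstdio>

    // P_dyn = alpha * C * V^2 * f for a single net.
    double dynamic_power_watts(double alpha,        // toggles per clock cycle
                               double cap_farads,   // gate + wire capacitance
                               double vdd_volts,
                               double clock_hz) {
        return alpha * cap_farads * vdd_volts * vdd_volts * clock_hz;
    }

    int main() {
        // Hypothetical net: 10 fF switched, 0.8 V supply, 1 GHz clock,
        // toggling on 20% of cycles -> about 1.3 microwatts.
        printf("%.2f uW\n", dynamic_power_watts(0.2, 10e-15, 0.8, 1e9) * 1e6);
    }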

3. Optimization Techniques

The reports and power optimization suggestions from the PowerPro tool help the engineers make trade-off decisions to achieve the lowest power numbers. One recommendation is to use combinational clock gating for most flops; the tool then shows you the efficiency of the clock gating being used. In the design efficiency report you get to see the total number of flops, the percentage of flops gated, and the efficiency of the gating.

Any combinational redundancies in your design are reported so that you may take some design action:

  • Redundant mux toggle
  • Redundant memory data/address toggle
  • Clock toggle-data stable


Redundant Mux Activity

Clocks can be shut off by using an Enable signal on flops for a given time period; consider the following case:


Inside PowerPro there is a calculation that checks that adding extra logic to control the power still results in a lower power value than making no change. Sequential redundancies are identified and recommendations are made for:

  • Sequential clock gating
  • Sequential data gating
  • Redundant reset removal
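
To make the clock-gating idea concrete, here is a purely behavioral C++ sketch of my own (real gating cells are inserted in RTL and synthesis, not in software): when the enable is low, the register bank sees no clock edge, so neither the clock tree nor the flops toggle.

    #include <cstdint>

    // Behavioral stand-in for a clock-gated register bank. Cycles with the
    // enable low cause no clock toggle and no state change, which is where
    // the dynamic power saving comes from.
    struct GatedRegisterBank {
        uint32_t q = 0;
        uint64_t clock_toggles = 0;   // crude proxy for clock/flop power

        void clock_edge(bool enable, uint32_t d) {
            if (!enable) return;      // gated: the clock never reaches the flops
            ++clock_toggles;
            q = d;
        }
    };

Driving this model with an enable that is true on only 10% of cycles cuts the toggle count by 10x versus an ungated bank, which is the kind of saving the design efficiency report quantifies.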

Gate-Level Power Analysis
For a sign-off level of accuracy you want to know what the gate-level power numbers are for each of your blocks. In the ARM flow the gate-level simulation uses the Standard Delay Format (SDF) for the highest power accuracy. You can even see the change in current over time (di/dt) to get some early insight into power grid analysis.

PowerPro Results
So how does the early power number at RTL correlate to the final gate-level power? You can expect the early RTL power numbers to be within 15% of the gate-level power numbers, while getting feedback in minutes to hours instead of several days, a nice trade-off.

ARM engineers did power analysis, exploration, power scrubs and optimizations over the course of several weeks on various blocks of a recent GPU project, and this shows their progress in power reduction for a specific type of test:

I asked the white paper authors about how popular the PowerPro tool usage was at ARM. “ARM uses Mentor Graphics PowerPro in the design process for all classes of ARM IP such as: CPUs, GPUs, interconnect sub-systems, and display cores to meet power goals,” said authors Stephane Forey and Jinson Koppanalil from ARM and Saurabh Kumar Shrimal and Richard Langridge from Mentor Graphics.

Summary
There is sufficient automation now available for power analysis, exploration and optimization at the RTL level to help leading-edge SoC companies like ARM get the most out of their architectures. Your team can now consider doing daily RTL power analysis at block and unit levels to get a quick idea of your power trends. Reports from the automated tools give designers the info needed to make power trade-offs quite early in the design process.

Read the full 16-page White Paper here.



CEO Interview: Dündar Dumlugöl of Magwel
by Tom Simon on 12-19-2016 at 7:00 am

Magwel CEO Dündar Dumlugöl is well known from his days at Cadence, where I first met him, and for his more recent tenure at Magwel. At Cadence he led the team that first developed Spectre. He has come a long way from the start of his career at IMEC in Belgium. He and I had a chance to have a conversation recently where he offered insights into ESD, latch-up and power transistor modeling – all areas where Magwel offers increasingly popular solutions.

Why is modeling power transistors so important these days?

Well, as you know, we are seeing an explosion in the number of mobile and wireless products coming to market. These are all battery powered, so buck and boost converters are essential for converting battery voltages to the circuit operating voltages. A significant determinant of battery life is the efficiency of these converters. An inefficient converter will waste energy. Also, power hungry converters need more expensive packages and larger heat sinks. Often the root causes of this wasted power include on-state resistance, low switching efficiency and the thermal characteristics of the power transistors used in these converters. Quite simply, if the power transistors are not optimized correctly battery run time will suffer.

Tell us more about electro-thermal modeling and what problems it solves?

The operating characteristics of a power transistor are affected by temperature. However, the temperature is determined by Joule heating, the package, and the heat sources and sinks on the board. You just can’t look at one without the other. So traditional SPICE methods involve guesswork, which can lead to significant errors during the design process.

The best way to solve this problem is to run actual stimulus through the device to look at all the thermal behavior over time and converge on a unified electro-thermal result for the device operation. This is what our PTM-ET product does. It uses 3D thermal and electrical modeling performed concurrently to give a highly accurate result.
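
(For a feel of what converging on a unified electro-thermal result means, here is a deliberately over-simplified, single-node relaxation loop in C++. It only illustrates the feedback between the electrical and thermal solves; PTM-ET performs this coupling with full 3D electrical and thermal models of the actual layout.)

    #include <cmath>

    // Single-node electro-thermal relaxation: alternate an electrical solve
    // (loss at the present temperature) with a thermal solve (temperature
    // produced by that loss) until the two agree. Numbers are illustrative.
    double electrothermal_operating_point(double t_ambient_c,
                                          double theta_ja_c_per_w,   // junction-to-ambient
                                          double r_on_25c_ohms,
                                          double r_on_tempco_per_c,  // Rdson tempco
                                          double load_current_a) {
        double t = t_ambient_c;
        for (int i = 0; i < 100; ++i) {
            double r_on   = r_on_25c_ohms * (1.0 + r_on_tempco_per_c * (t - 25.0));
            double p_loss = load_current_a * load_current_a * r_on;   // I^2 * R
            double t_new  = t_ambient_c + theta_ja_c_per_w * p_loss;
            if (std::fabs(t_new - t) < 0.01) return t_new;            // converged
            t = t_new;
        }
        return t;   // failure to settle hints at thermal runaway
    }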

What are the barriers to modeling power device switching in converter circuits using conventional SPICE models?

SPICE is good at modeling individual device junctions. However, power transistors are composed of thousands of parallel devices with complex metal interconnect structures. A single SPICE model does not reflect the large, complex, distributed nature of a power transistor in transient operation. In addition, the high currents on the internal interconnect of the device require 3D current flow modeling to capture the parasitic losses and excessive current densities that matter for performance and reliability assessment.

Device switching is not instantaneous; rather, there is skew and slew in the gate signal as it travels toward each individual active device. At Magwel we have found that co-simulation with Spectre allows circuit-level simulation to include the non-uniform switching effects that can be detrimental to operation. Non-uniform switching can lead to higher switching power dissipation and reliability issues due to current crowding.

With several ESD tools already on the market, why did Magwel develop its own solution – ESDi?

Our existing customers, who were very happy with our power transistor tools, started telling us that they saw areas for improvements in their existing ESD tools. We were hearing that accuracy and usability were both significant problems with the tools they already owned. They approached us to see if we would collaborate with them to develop a better solution.

We have excellent 3D extraction technology as part of our core capabilities. We also have deep experience with simulation that enabled us to create an efficient engine for modeling snap-back devices. We took these and added highly intuitive usability, visualization and reporting. The response was very good. Recently we have moved ahead even further by making the setup easy with automated ESD device tagging and a wide range of parasitic device recognition. We also boosted performance with parallel processing. We are seeing results that match silicon very closely and do not suffer from excessive false error reporting. This is due to our unique algorithms that model all the parallel discharge paths. This distributes the discharge current properly across all the affected devices.

Do you see your customers looking for solutions to help diagnose latch-up?

We have worked with several customers to understand their silicon failures that were proving difficult to track down. During the process latch-up had been proposed as the failure mechanism. This is how we became involved. Our device modeling and solver technology made an excellent platform to create a tool that models minority carrier injection into the substrate. Using this we can evaluate substrate currents and look for correlation. We have developed methods to use this information to reduce the effects and improve silicon results.

This is an excellent example of the ways we can take our very mature technology and apply it to the difficult real world problems that our customers face.

What other interests do you have outside of the semiconductor field?

I really like to run every day. I live in an area where there are good running trails nearby and make it part of my daily routine. I also read a wide variety of books, including history, economics and politics. Of course, I read up on engineering and related topics frequently as well.

For more detailed information about Magwel and the products they provide, take a look at their website.

Also Read:

CEO Interview: Jack Harding of eSilicon

CEO Interview: Randy Caplan of Silicon Creations

Expert Interview: Rajeev Madhavan


Building a Solar Powered Ice Freezer
by Tom Simon on 12-18-2016 at 4:00 pm

My vacation is your worst nightmare. Well, at least that is what the bumper sticker says – it’s referring to Burning Man. It’s well known that among the tens of thousands of people attending this arts festival in Nevada at the end of each Summer there are lots of high tech luminaries. I also have gone many times – not to say that I am a luminary. It’s a common tradition of so-called ‘burners’ to build complex projects to take with them that provide some of the comforts of home and more during the week of survival camping in the Black Rock Desert. Ad hoc showers, aluminum can recycling stations, elaborate cooking arrangements, alternative energy systems, coffee carts, colorful lighting setup, fire shooting vehicles, huge sound systems – pretty much anything you can think of and many you never even imagined – are brought out and assembled on the “playa.”

There is nothing for sale there – you must plan on bringing everything you need. There might be some sociable gifting, and people will probably help you out in a pinch, but start with the basics, like 2 gallons of water per person per day, and go from there. Notwithstanding, you can buy coffee and ice at what they call Center Camp. That’s it. Everything else is on you. The ice is for people’s coolers. And we are talking about 80,000 people camping in the desert heat. Of course, people bring RV’s (some consider that cheating) and others have generators and refrigerators (might also be cheating). I have relied on ice and coolers for many of my ten day stints out there. Needless to say, dealing with ice can be a major hassle. Getting very cold and heavy ice at Center Camp and bringing it back to your camp on foot or bicycle can be an ordeal. When it melts, it fills your cooler with slushy water that inevitably finds its way into your food.

So I decided, what better project than to use solar power to run a small freezer to freeze blue ice packs? With a freezer working well enough, I can swap between two sets of blue ice packs each day. One set is getting deep frozen while the other set is keeping my Yeti cooler nice and cold. Last Summer, I started to assemble my system. It needed to be as compact as possible, weather proof, not easily damaged (either on the trip there or once set up), self-sustaining, reliable and reasonably priced.

I ruled out an off the shelf solution because I wanted to better understand the realities of using solar power for something that we usually count on running off the mains. So, I started researching and shopping. The starting point for the whole system is the freezer. This established the requirements for everything else. The good news is that a 1.1 cubic foot freezer requires around 60 watts when running. This seemed like a totally doable load.

Even though this is a solar powered system, a solar panel cannot deliver enough instantaneous power to turn over a motor. Therefore a battery would be needed. The first tests I performed were to start up the small 1.1 cubic foot Igloo freezer that I had bought as a factory second, on ebay for about $100. My first test using a 100 Watt combo inverter battery pack accomplished absolutely nothing. I then sought out a power meter to measure the wattage that was needed by the freezer.

Plugging the unit into the wall with a watt meter showed a load of well over 500 watts for up to 10 seconds at start up. This moved my requirement from a 100 watt inverter to a 600-1000 watt inverter. The other problem I was facing is that sealed lead acid batteries are not the best for driving high loads. I happen to own a large lithium power pack (battery and inverter combo) that can deliver 1000 watts, and it had no problem starting the freezer. But a stand-alone pure sine wave 1000 watt inverter can cost hundreds of dollars. I was sure that a 400 to 600 watt inverter could be made to work. Incidentally, the large battery pack was not an option for the freezer because it can only handle 100 watts of solar power as input for charging. It would be a challenge to run the freezer and recharge its 1200 watt-hours with the remaining 30-40 watts if needed.

Batteries
Because of the low weight and high current ratings of lithium batteries I started a search for an economical lithium battery pack. However, it had to be compatible with the common lead-acid chemistry charge controllers available on Amazon. I abandoned a couple of really heavy sealed lead acid batteries and bought one, then another, Dakota Lithium battery. These have lots of surge power and are each rated at 120 watt-hours. They are small, light and well priced.

The Compressor Motor
I had a 700 watt modified sine wave inverter, but there were several things I did not like about it. And what is more, it still could not reliably start the freezer. This kicked off a deep dive into refrigerator motor design. Small cheap freezers, and probably more expensive ones, are a study in minimalist design. Those small black spheres that hold the compressor are cranked out by the millions. Inside is a motor with two coils, a compressor piston, and a sealed oil lubrication and cooling system. Basically, the oil is lifted and sprayed around the interior by grooves on the motor shaft. They also have overheat protection.

The electronics for starting the motor are on the outside, usually in a small plastic case pushed over the three pins that connect to the internals. In my case the power switch energizes the ‘run’ coil, and the ‘start’ coil is initially connected but is disconnected when the current running through a small thermal switch causes it to open. Brute force, that assumes unlimited starting current and uses no intelligence. The worst-case scenario is when the compressor has been running and is then shut off and restarted again right away. The start coil is not engaged and the stalled motor draws hundreds of watts until the over-temperature protection inside the compressor triggers, which cannot be good for the motor.

More sophisticated and better built appliances use inductors and/or capacitors in conjunction with relays and sensing circuits to deliver the starting coil current. This decreases the initial current draw substantially. I discovered an after-market so-called hard-start device designed for the standard refrigerator compressor motor. It is made by Supco. It comes as a one-piece sealed unit that connects to the pins that the plastic housing was connected to. While I do not have detailed numbers, it is clear that this $30 device solved most of the starting current problem I was facing. The freezer would start reliably and I was certain that an even lower powered inverter would work.

Inverter Selection
The 700 watt inverter was a modified sine wave inverter, which causes power loss in inductive loads like motors. I was looking to build the most power-efficient system, and this seemed like something to improve. Also, a side effect of the modified sine waveform is that the motor runs hotter – due to the wasted power being transformed into heat. This particular inverter had a fan that would run even at low loads. It seemed unacceptable to have a 6-8 watt inverter load at idle; 10% of the system power would be going into the inverter at a minimum.

I wanted to set up a timer so that the freezer only ran during daylight hours. There were two choices. I could switch the AC power with a light timer, which is cheap and easy to set up. This has the disadvantage of requiring uninterrupted AC power to keep time properly. Another option is a DC timer, which has the advantage of not running the inverter at idle (3-5W) for the 18 hours the freezer is not running. If I have 240 watt-hours of battery, you can see that ~40 watt-hours is a large percentage to lose every night.

The system must be designed so the batteries have enough power reserve to start the motor in the morning, so current leakage overnight to run the idle inverter is unacceptable. A few low sun days could run this system into the ground. I found the perfect DC timer that uses a battery to keep time, so it is immune to power outages. It is capable of switching large loads – the system can pull upwards of 10 Amps at startup.

Back on Amazon I found a cheap pure sine inverter that looked like it would meet my needs. It is a MicroSolar 600W Pure Sine Inverter for $89. Why not! It arrived and it worked great. There is no fan running in the inverter except under heavy loads – e.g. for start up, not at normal running currents. Soon though I discovered the Achilles heel of this system.

The Solar Panels
I started out thinking that I could get by with 100 watts of solar panels. The cruel truth about solar panels is that they often output well below their rated power. The worst offenders are the flexible panels – like the ones I wanted to use because they weigh less, are harder to damage and take up less room in transit. After many experiments, I opted for 4 panels each rated at 120 watts for $120 each. They can output close to 80-90 watts each in full overhead sun. So, the whole array can output an impressive 350 watts peak power. I could probably pare down to fewer panels, but the extra generation capability means faster battery recovery and more tolerance for cloudy days. It also means I can charge other loads and run the freezer at the same time. Think batteries for lighting, etc.

The most important link in the system is the charge controller. This takes the raw DC power from the solar panel array and feeds it to the 12V batteries. The panels each output between 12 and 25 volts. If they are wired in series this typically means there is 80V DC feeding into the controller. Solar panels are very fussy about the IV curve they produce. For any given lighting situation and load there is an optimal IV point which has the panel producing the maximum power.

Charge Controller
There are two kinds of charge controllers: PWM and MPPT. PWM is pulse width modulation, and MPPT is maximum power point tracking. You want MPPT. PWM just toggles the current to the battery at varying duty cycles to charge the battery. PWM is not very efficient and it does not allow the panels to produce their maximum power. MPPT uses a microprocessor that walks the charging voltage up and monitors the current to calculate the change in total power. MPPT will bring the voltage up to the point where the maximum power is produced by the panel. Then there is circuitry that produces the desired output voltage for the batteries. This extra circuitry is more expensive, but the efficiencies are often in the high 90% range. There is a substantial price premium for MPPT, but for the system efficiency requirements it seemed worth it. I chose an MPPT unit made by Renogy. The alternative was buying more batteries or solar panels.
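
The tracking itself is usually a simple perturb-and-observe loop; the sketch below is a minimal C++ illustration of the idea, not the firmware in the Renogy unit (which also has to manage battery charge stages, current limits and temperature):

    // Perturb-and-observe MPPT: nudge the panel operating voltage, keep the
    // direction if output power rose, reverse it if power fell.
    struct MpptState {
        double v_ref      = 18.0;   // commanded panel voltage (volts)
        double last_power = 0.0;
        double step       = 0.2;    // perturbation size (volts)
    };

    void mppt_step(MpptState& s, double panel_volts, double panel_amps) {
        double power = panel_volts * panel_amps;
        if (power < s.last_power)
            s.step = -s.step;        // we moved the wrong way, so reverse
        s.v_ref += s.step;           // perturb toward (hopefully) more power
        s.last_power = power;
    }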

The inverter draws so much power at freezer startup that it exceeds the “load” terminal rating on the charge controller, so the inverter load just gets connected to the battery terminals. In essence the charge controller thinks it is charging the batteries when the load is powered; the batteries are acting as a ballast. The one requirement for the charge controller I purchased is that it must always have 12V from the battery to run. If the battery runs down, it cannot recharge it, even if there is power coming from the solar panels. This is a reasonable restriction for a lead acid battery, but I am using lithium batteries with a low-voltage cutout that is above the inverter’s low-voltage cutout. The inverter will run the lithium batteries until they shut down, at which point the system needs a “jump start”.

Once it became clear how serious this problem was, I realized I needed to monitor the voltage. Not only did the system need to shut the load off above the minimum discharge level, to ensure the controller always had power, it also needed to wait until the batteries had recovered significant charge before re-engaging the inverter and freezer load.

Another major problem was that because the on-off control for the inverter was designed using a momentary switch, the unit would need manual attention every time the DC power to it was restored. There was no way to leave the unit ‘on’ so it would power on after DC voltage was applied when the timer turned on in the morning.

However, my choice of inverter turned out to be fortuitous. It came with a wired remote on-off switch; but it was even better because it also featured ‘run’ and ‘fault’ indicator lights. This combination meant that I could design a circuit that plugged in to the socket for the remote switch module and use a microcontroller to actively manage the system behavior. I could design a circuit to monitor the battery voltage and turn the inverter on and off as needed to keep the battery levels exactly where I wanted them. I could probably even eliminate the timer, but there is some advantage to shutting everything down well after dark.

Hacking the Inverter
I have extensive experience building projects with Arduinos – small, easy-to-program microcontroller boards. It’s easy to buy these small pre-built boards that can be programmed over USB from a PC. They come with an on-board voltage regulator chip that operates with 12 to 15 volts, so the battery for the solar system can provide the power. Internally Arduinos operate on 5V, which is the voltage regulator output. The Arduino comes with digital I/O pins and analog input pins connected to the internal ADC. There is also support for SPI and I2C for connecting external devices and peripherals.

I needed to bring a 12V line high or low to create the effect of pressing the button on the inverter remote control, and also read three analog signals – run, fault and the battery line voltage. The ADC on the Arduino only supports 0-5V, so a voltage divider is required. This can be built with a 1M ohm and 100K ohm resistor. I reverse engineered the signal lines on the remote cable. It has 4 wires. One is always ‘high’ and the run, fault and start-button all use it as the return. When the run or fault light is on, their voltage goes low, creating a current flow through the LED. The start button is pulled high when the button is pushed, because then it is connected to the high signal.

To control the start button I designed a simple circuit with an NFET and a pull up resistor on the drain side. With this the drain floats high until the NFET is closed by a 5V gate voltage. Then it is pulled down to ground. I own a milling machine, called the OtherMill, that makes it easy to produce my own custom printed circuit boards.

I created a circuit in EagleCAD and then designed a PCB layout that can be milled on the OtherMill. It has pads for piggy-back connections to the Arduino board I chose to use – the Arduino Nano. These cost about $5 each and can be purchased on Amazon or eBay.

Next came the software development. The Arduino comes with a free development environment. There are many libraries that make reading the analog signals and controlling the digital I/Os easy. I also found a coding example for the voltage divider. With some trial and error, I was able to write a program that would start up the inverter when power was applied and then monitor the status of the inverter and the battery voltage. I set a low-voltage cutoff of 12 volts and have it wait until the battery voltage reaches 13.4 to restart, ensuring that the batteries have a reasonable level of charge before resuming operation.

My circuit also needed to avoid the death spiral of turning the unit back on after it had just stopped running. As mentioned above, if the freezer attempts to restart after just being turned off, the stalled motor draws heavy current and overheats. I added code that checks for a fault and then ensures a two minute delay before attempting a restart.
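
A stripped-down version of that logic looks roughly like the sketch below (pin assignments, the fault-sense threshold and the button pulse length here are illustrative placeholders rather than the actual project code): read the battery through the 1M/100k divider, hold the inverter off below 12.0 V, re-enable it above 13.4 V, and lock out restarts for two minutes after a fault.

    // Arduino Nano sketch (illustrative): battery monitor with hysteresis and
    // a post-fault lockout, driving the inverter's momentary start button
    // through an NFET.
    const int PIN_BATTERY = A0;            // 1M/100k divider tap
    const int PIN_FAULT   = A1;            // inverter 'fault' LED sense line
    const int PIN_START   = 2;             // NFET on the remote's start line

    const float DIVIDER = (1000000.0 + 100000.0) / 100000.0;   // 11:1
    const float VREF    = 5.0;

    bool inverterOn = false;
    bool everFaulted = false;
    unsigned long faultAt = 0;

    float batteryVolts() {
      return analogRead(PIN_BATTERY) * VREF / 1023.0 * DIVIDER;
    }

    void pressStartButton() {              // emulate a momentary button press
      digitalWrite(PIN_START, HIGH);
      delay(250);
      digitalWrite(PIN_START, LOW);
    }

    void setup() {
      pinMode(PIN_START, OUTPUT);
      digitalWrite(PIN_START, LOW);
      pressStartButton();                  // power applied: start the inverter
      inverterOn = true;
    }

    void loop() {
      float v = batteryVolts();

      if (analogRead(PIN_FAULT) < 200) {   // fault LED pulls its line low
        everFaulted = true;
        faultAt = millis();
        inverterOn = false;
      }
      bool lockout = everFaulted && (millis() - faultAt < 120000UL);  // 2 min

      if (inverterOn && v < 12.0) {        // low-voltage cutoff
        pressStartButton();                // toggle the inverter off
        inverterOn = false;
      } else if (!inverterOn && v > 13.4 && !lockout) {
        pressStartButton();                // batteries recovered: back on
        inverterOn = true;
      }
      delay(1000);
    }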

Having the automatic control was nice, but running without any external display or indicators was difficult during software debug. There is a capability to add println() debugging when the Arduino is plugged into a PC, but I wanted to know the system status when the circuit is running in the field and not attached to a PC. This was solved with a 4-line LCD display that is compatible with the Arduino and costs about $15. I added code to display the battery voltage, the system status and whether a fault had occurred on the inverter. The LCD display runs on the I2C bus and required a slight redesign of the PCB to add a 4-pin connector.

3D Printing the Case
The last piece of this project is to add a small box for the custom electronics. I had already purchased a larger waterproof case to hold all the electronics: inverter, charge controller, timer, custom circuit and batteries. I opted to design a small circuit box using 3D CAD software and use a 3D printer to fabricate it. My favorite 3D modeling software is Onshape. A free account allows ‘public’ designs and a limited number of ‘private’ designs. The user interface is all web based, but don’t be fooled: this is a high-end tool that uses cloud compute resources to provide sophisticated functionality. I own a FlashForge Creator Pro 3D printer and printed the box first in PLA and then in ABS. Hopefully the finished design for the box will hold up in the withering heat at Burning Man.

Everything fits into the waterproof case and there is a water resistant flap for inserting cables from the solar array and the inverter. There is an additional load line to recharge another external 12V battery pack for other uses. There will be some clean up and rewiring to make things more tidy. However, the whole system is up and running in my back yard. I also ran a series of tests to see how cold the cooler can stay when swapping cold packs every evening. I am looking forward to testing everything out in the Spring on a relaxing car camping trip. It will be nice to have cold beer without having to make ice runs.

I have posted all the files used to create this project on GitHub.


Top IOT News for 2016
by Bill McCabe on 12-18-2016 at 12:00 pm

2016 will go down as being part of the golden era for the Internet of Things. This last year has seen incredible advancements like cars that drive themselves and cities that are actually smarter. It has also been a learning experience where major security breaches threatened us, but we worked past them, and ultimately built stronger systems. So what were some of the top stories of the year?

The first was the Mirai code. This historic attack utilized the Internet of Things to take down massive websites through unsecured devices. Amazon, the BBC, and other websites felt the crushing blow of this DDoS attack. When things settled down, there was still the concern of what could happen if another attack were to occur.

Our vehicles also experienced an upgrade in 2016. Several fleets of autonomous cars (those that drive themselves) were unveiled. These cabs would provide the most effective driving experience for passengers, could cut down on accidents, and avoid delays in traffic. This means the future of getting to and from work will be incredible. That doesn’t mean it has all gone off without a hitch: in July, a Tesla running on Autopilot claimed the life of the man who was in the vehicle as it drove.

Reactions were mixed this year in August when a pair of hackers revealed more security concerns for Jeep. While these individuals brought the information to the attention of the industry to help prevent a deadly encounter, it was still sobering for most to see just how dangerous these vehicles could be when connected to the internet of things. Fortunately, this information can be used to help establish a stronger set of code that makes it incredibly difficult for people to hack and to cause havoc on the roads.

But not everything had grey clouds over it this year. In Columbus, Ohio, the U.S. Department of Transportation announced the city would receive federal funding to become a smart city. This technology would allow transportation to be more effective in the area and ensure that the experience of both residents and visitors is incredible.

This has been a year that has seen some definite improvement with the internet of things and what it can do. While the year is coming to an end, we still have 2017 to look forward to with the entire world of possibilities that it holds for us.

For more information about IOT check out our new website by Clicking Here – For Info or Ideas on Staffing your next IOT project Use this Link.



Mind-Boggling Uber Hubris
by Roger C. Lanctot on 12-18-2016 at 7:00 am

Uber was on a mighty roll throughout 2016, picking up strategic alliances with Ford Motor Company and Volvo Cars (for test vehicles), adding talent (cybersecurity experts Chris Valasek and Charlie Miller) and acquisitions (Otto), and rubbing up against university researchers (Carnegie Mellon). So it was jaw-droppingly hideous to see its hot streak interrupted by various driver slip-ups on day one of testing in San Francisco.

Various press reports noted Thursday that California regulators told Uber to stop its autonomous vehicle service in San Francisco because it was illegal. Uber had chosen not to acquire the necessary state permits to operate an autonomous vehicle service in the state.

It appears that Uber’s launch of self-driving Volvos with human “drivers” had been outed as a result of multiple red-light-running incidents, some caught on video. But the San Francisco launch was no secret. It was given ample publicity. It leaves one to wonder what Uber executives were thinking – particularly with Google car veteran Anthony Levandowski on the team by virtue of the Otto acquisition.

While Google always regarded California’s regulatory intrusions with disdain, the reality is that the data reporting obligation of the California permitting program has actually allowed Google to strut its technological advantage over competing offerings. The California data reported by Google shows steady improvement in automated driving performance consistently exceeding the performance of the competition.

It’s possible that Uber feared revealing the delta between its own performance and Google’s, but we may never know as Uber declined to comment on the red-light running other than to blame the human drivers. Of course!

But the really odd aspect of the situation is the fact that previous Uber acquisitions included deCarta, which has worked with companies like Global Mobile Alert to use contextual information, such as intersection and traffic light locations, to mitigate driver distraction and prevent mishaps. Intersections are one of the greatest and most obvious challenges for self-driving cars. But intersections are a challenge to cars that are driven by human beings as well.

One-third of all highway fatalities in the U.S. occur at intersections – including drivers, passengers and pedestrians. Rather than using self-driving car technology to demonstrate the life-saving advantages of letting computers do the driving, Uber has highlighted the shortcomings of the meat in the machine.

Even Tesla has known better than to take on this challenge, so far. Tesla advises using its Autopilot function on highways, not secondary roads or in cities, although Tesla drivers are not prevented from using Autopilot wherever they choose.

There are three essential takeaways from the Uber kerfuffle:

1. This is no kerfuffle. This was a major screw-up by Uber and placed the company outside the bounds of acceptable behavior by self-driving car startups in California or anywhere else. Uber has thereby given self-driving cars and the SDC community overall a bad name.
2. Enhanced intersection management is necessary to reduce overall highway fatalities generally and pedestrian fatalities in particular. Uber may be solving self-driving car challenges, but the company is doing nothing to mitigate pedestrian fatalities and enhance the mastery of intersections by SDCs.
3. Uber is its own worst enemy. The failure this week and the subsequent sanction by California were completely avoidable. It has now, once again, cast Uber as the outlaw that it seems to be.

Uber’s work on self-driving cars is meant to be transformative of both transportation overall and of the company itself. True to its corporate mantra of violating or avoiding local ordinances for people-moving services, Uber took on the law in California and the law won. But now the wider SDC development community is losing in the process.

Uber is now giving autonomous vehicles a bad name while attempting to pin the blame on the human drivers. This is always the way with Uber: throwing its drivers under the self-driving bus. Uber could have prevented the incident with enhanced contextual awareness built into the car and with a simple application for an autonomous vehicle permit. With billions of dollars on the line, you’d think Uber would get its head out of its hubris. I’m not counting on it.


IEDM 2016 – GLOBALFOUNDRIES 22FDX Update
by Scotten Jones on 12-16-2016 at 4:00 pm

At IEDM in 2015 I had a chance to sit down with Subramani (Subi) Kengeri and get a briefing on GLOBALFOUNDRIES 22FDX technology. At IEDM 2016 Rick Carter of GLOBALFOUNDRIES presented a paper on 22FDX. Following Rick’s presentation, I had a chance to sit down with Rick and John Pellerin, VP of Technology and Integration, and further discuss the status of 22FDX.

My article on 22FDX from IEDM 2015 is available here.

In 2015, 22FDX was still in development; this year 22FDX is getting ready to ramp. The SRAM HD cell is seeing 95% yield and the defect density is in line with GLOBALFOUNDRIES' mature 28nm technology. The IP ecosystem is well underway and over 60 companies are engaged with GLOBALFOUNDRIES, from evaluation of the PDK to prototype design. The process is in the last stage of qualification and they are running customer IP on multi-project wafers. Basically, last year Subi said 22FDX would do certain things and this year 22FDX is delivering on the promise.

As an author’s side note, I interviewed Gary Patton, CTO of GLOBALFOUNDRIES, back in November of 2015. The key theme of my discussion with him was execution. Gary admitted that GLOBALFOUNDRIES had execution problems in the past and said that his key focus as the new CTO was on executing. Over the last year GLOBALFOUNDRIES has been hitting all their new technology milestones.

My November 2015 interview with Gary is available here.
22FDX is an FDSOI process and one of its most distinctive features is the ability to dynamically tune performance. By forming a gate under the buried oxide, the body of devices can be forward biased (FBB) to increase performance or reverse biased (RBB) to reduce leakage. This can be done locally and dynamically to optimize each area on the chip for its specific requirements. Biasing can also be used at the end of the process to adjust for process variations and dial in the final product. In a typical design environment, you must design for the process corners and this requires extra cells. With 22FDX you can design for the mean and trim for the corners with biasing. While I was at the conference I had the benefits of this technique confirmed to me by an executive at an IP company.

22FDX has an optimum energy operating point of 0.4 volts. The lowest FinFET process operating voltage that I am aware of is 0.55 volts, and since active power is proportional to voltage squared, 0.4-volt operation provides a significant power advantage.
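
As a back-of-envelope illustration of that voltage-squared relationship (my arithmetic, ignoring frequency, activity and leakage differences): (0.4 / 0.55)^2 is roughly 0.53, so running at 0.4 volts instead of 0.55 volts cuts active power by close to half for the same switching activity.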

22FDX provides integrated RF capability with an NMOS fT/fmax of 350/325 GHz and a PMOS fT/fmax of 290/250 GHz. This is over 2 times the fT/fmax of current FinFET processes. With layout optimization, NMOS fT/fmax can be further improved to 400/335 GHz and PMOS fT/fmax can be further improved to 350/310 GHz.

Another advantage of 22FDX versus FinFETs is that 22FDX has 30% fewer masks than a 14nm FinFET process. 22FDX does require a more expensive starting substrate, but the reduced process complexity results in a lower cost process that can provide similar digital logic performance with better analog and RF performance. The fewer masks and planar design enable much lower design costs, a critical feature for low cost or smaller volume products. FDSOI is also more radiation tolerant, making it ideal for automotive applications.

In summary, GLOBALFOUNDRIES continues to deliver on their promises and 22FDX is a process well positioned to address IoT, automotive and mobile applications.

Also read: IEDM 2016 – 7nm Shootout