

Bacteriography
by Bernard Murphy on 08-25-2016 at 7:00 am

I recently found a couple of articles which caught my interest, both on roles bacteria can play in electronics. The first has to do with a method to form semiconductor-like structures on a sheet of graphene. Graphene is an excellent conductor, but in sheet form it conducts more or less equally in all directions. So the first problem is to be able to pattern structures onto the graphene to create preferred directions.

Researchers at the University of Illinois knew that wrinkles in a graphene layer reduce conductivity across the wrinkle, but stretching and releasing a sheet gives little control over the scale or location of wrinkles. Thinking very creatively, they placed a drop of a nutrient solution containing Bacillus subtilis (a bacterium found in soil) on the sheet, then ran a current through the sheet, which causes these bacteria to align in the direction of the field. They then built a layer of graphene over the whole sheet (and bacteria) and cooked the lot in vacuum at 250°C. That causes the bacteria to dehydrate and wrinkle, which apparently they do with a height of 7-10nm and a very precise wrinkle spacing of 33nm, causing the upper layer of graphene to wrinkle correspondingly.

This achieves the desired directionality of conductivity, but misses the mark on giving band-gap characteristics to the graphene structure, which apparently would require a wrinkle spacing of 5nm or less. The researchers hope that a different type of bacteria may be more obliging. The work obviously has a way to go before any of this becomes practical, not least in first releasing the upper (wrinkled) layer of graphene from the lower layer and the desiccated bacteria.

Meanwhile, the Office of Naval Research has been working on creating nanowires built from a part of a bacterium. Geobacter (a bacterium found in river mud) produces very thin (1.5nm) protein filaments which are electrically conducting and help support the respiration of the organism through connection to metallic oxides. Natural conductivity is very low but can be improved by multiple orders of magnitude by fiddling with the amino acid sequence in these protein chains. These nanowires can now be synthesized independently of the bacterium. Again, don’t bother looking for Kickstarter campaigns just yet, though there is now a site independent of the ONR dedicated to reporting progress in this area.

You can learn more about creating wrinkle structure on graphene HERE and the ONR project on nanowires HERE. The independent site is HERE.

More articles by Bernard…



Customized PMICs with OTP in automotive and IoT
by Don Dingee on 08-24-2016 at 4:00 pm

Power. Every device needs it. Managing it properly can make all the difference between a device people enjoy using and one that is more hassle than it is worth. What happens between the battery and the processor is the job of the power management integrated circuit (PMIC).

Why are PMICs gaining so much attention? Increased power subsystem complexity and, paradoxically, reduced power consumption are driving PMICs to become far more intelligent. In a simpler day of digital design, chips ran on one power supply – Vdd on schematics. Mixed-signal requirements, including analog inputs, high-voltage peripheral drivers, and varying stages of digital logic, gave rise to multiple power supplies feeding a single integrated chip.

Different subsystems of a device also have different power requirements, all supplied from a single battery source. These power supplies also typically have sequencing requirements such that everything initializes correctly and no damage is incurred from incorrectly applying power to interconnected circuits. In a device such as a smartphone or tablet, some of these unique power requirements are seen clearly:


Another aspect of power management is seen in the lower left corner: many devices have recharging capability, sometimes with an advanced wireless power transfer scheme, which must be carefully coordinated. A good example: my Samsung Galaxy S7 charges wirelessly without any problem, but my wife’s Galaxy S6 which allegedly uses the same wireless charging standard is extremely finicky and aborts charging when the display turns off.

That’s just one case in point of power domains within a device, where areas of circuitry power down to save power – hopefully without adversely impacting operation of other functional blocks. Modern PMICs also usually contain state controllers so they can correctly respond to full power and reduced power modes of the system. It’s non-trivial. Occasionally, my Verizon Ellipsis tablet seems to lock up in a mode where its Wi-Fi doesn’t sleep even though there is no other activity, and rapidly and completely discharges the battery.
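To make the sequencing and state-control idea concrete, here is a rough sketch of the kind of power-up sequence table a PMIC enforces. The rail names, voltages, and delays below are invented for illustration; in a customized PMIC, this sort of data is exactly what ends up in configuration memory.

```c
/* Hypothetical PMIC power-up sequence: each rail, its target voltage, and the
 * delay after the previous rail. All names and numbers are illustrative only. */
#include <stdint.h>

typedef struct {
    const char *rail;        /* regulator output                  */
    uint16_t    millivolts;  /* target voltage                    */
    uint16_t    delay_us;    /* wait after the previous rail      */
} rail_step_t;

static const rail_step_t power_up_sequence[] = {
    { "VDD_CORE",  900,   0 },   /* core logic comes up first        */
    { "VDD_DDR",  1200, 200 },   /* memory after core is stable      */
    { "VDD_IO",   1800, 100 },   /* I/O ring                         */
    { "VDD_ANA",  2800, 500 },   /* analog/sensor rails last         */
};
```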


There are also thermal considerations, where power demand needs to be managed based on operating temperature. Recent news of total recalls of the Intel Basis Peak smartwatch and the McDonald’s Step-It fitness tracker, both for causing discomfort or even blistering due to localized overheating, suggests wearables may be the next frontier for power management. Designers of wearable and IoT devices may need to trade performance against thermal conditions in ways that existing chips haven’t quite imagined yet, particularly when transitioning from recharge to discharge modes.

Relying on system-level software to manage the minutiae of power management misses huge opportunities and often leads to deadlock at inappropriate times, which can completely undo any power savings or, worse, create a safety condition. This is especially noteworthy in automotive platforms, where a single battery system ultimately delivers power to many electronic subsystems. Every one of those subsystems has to manage itself, intelligently, if both system power and functionality are to be as expected at all times.

Building a more intelligent PMIC that meets the needs of both device manufacturers and device users is a challenge. Unique requirements are driving more teams toward customized PMIC chips implemented with one-time programmable (OTP) memory. Using OTP has several advantages. Implementations are compact and tamper-proof compared to microcontrollers, without storage endurance considerations. If a requirement changes, designers can modify the OTP configuration quickly, similar to programmable logic devices but much less expensive. OTP macros from Sidense also support mixed-signal, high-temperature processes typical of automotive environments such as 180nm BCD.


Sidense also offers the integrated power supply (IPS) macro, which generates the bit-cell programming voltage from available power supplies. The OTP and IPS macros can be designed into a mixed-signal PMIC easily, providing the exact functionality needed in less silicon area at a lower cost than alternatives.

The same factors that are leading to purpose-built application processors for automotive, IoT, mobile, and wearable applications mean companion PMICs customized to system requirements are also needed. Avoiding bad power-related behavior with good PMIC design helps devices achieve success – it’s a huge factor in perceived reliability and trust. Teams looking for more differentiation and control are turning to OTP macros for customized PMIC designs.



Statistical Simulation Provides Insight into 6T SRAM Optimization
by Tom Simon on 08-24-2016 at 12:00 pm

ARM’s Azeez Bhavnagarwala recently gave a talk hosted by Solido on the benefits of variation-aware design in optimizing 6T bit cells. Azeez sees higher clock rates, increasing SRAM usage per processor, and the escalating number of processors (shown in the diagram below) as trends that push designers toward 6T. Six-transistor (6T) bit cells are preferred for SoC applications because of their small area and relatively low power requirements. He sees increased demand for larger, lower-cost L2+L3 caches as creating never-ending pressure to reduce power and area without compromising performance.

For IoT there are competing needs, though, that cause contention in choosing between 6T on one hand and 8T or 10T on the other. Out of the box, 8T and 10T cells have lower active Vmin than conventional 6T cells, so despite the area penalty they sometimes win out. Azeez points out in his talk that there are still excellent reasons to employ 6T bit cells. First, they are heavily optimized by the foundry during process development. With this comes readily available design and verification flows. Finally, as mentioned above, they save area and power.

Let’s take a look at the three primary challenges for SRAM designers in Azeez’s view. Active Vmin needs to be ultra-low, near threshold – below 400mV. This comes from the expectation that IoT devices operate for days, weeks or months between charges. At the same time, some of these IoT devices need to operate in the gigahertz range to deliver the proper level of user experience. Rounding out the challenges is the need for a retention Vmin that dips into the sub-threshold region. It’s not uncommon to see these low voltages combined with specs calling for only hundreds of femtoamps per bit for retention in order to deliver battery-sipping performance.

So how can designers respond to improve 6T Vmin? The foremost answer to this question is to include “write assist” in the design. Write assist helps the bit cell by further weakening the PFET and strengthening the NFET for the duration of the write operation. There are a vast number of circuit design techniques used to accomplish this, but one of them stands out once statistical analysis is brought to bear on the problem.

I have written recently about how Solido’s software can be used to ensure high yield in the face of variation. Interestingly, variation-aware analysis can also be used to find optimal operating points in cases like this, where we are seeking the best write behavior and the lowest circuit Vmin. To implement write assist, the main choices are: lower Vgs on the PFET, higher Vgs on the NFET, or higher Vds on the NFET. Methods to do this include raising the virtual ground with a negative bit line, lowering the column VDD, or adjusting the word-line overdrive (WLOD).

Azeez’s work shows that lowering the column VDD is most effective. His use of Solido for statistical simulations shows dramatic results. This approach gives the largest improvement in decreased write voltage – it enables near VT write operations in a variety of 6T configurations.

However, even more interestingly, it tightens the distributions for write voltages as the voltage is decreased. This is an unusual win-win for the circuit designer, who is usually faced with an unpleasant trade-off. Here, lower voltages come with improved write behavior. Other write assist methods do not come with this immunity to device fluctuations.
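To give a feel for what a statistical Vmin sweep involves, here is a toy Monte Carlo sketch: sample per-device threshold-voltage shifts, ask a write model whether the cell flips, and tabulate failure probability versus supply voltage. The write model below is a deliberately crude placeholder (real analysis uses SPICE-level simulation of the bit cell, and tools like Solido’s are about reaching the far tails with far fewer samples than brute force); all numbers are invented for illustration, not taken from Azeez’s results.

```c
/* Toy Monte Carlo write-margin sweep. Illustration only -- cell_writes() stands
 * in for a real SPICE-level evaluation of the 6T cell, and the 0.40V nominal Vt
 * and 30mV sigma are assumptions made up for this sketch. */
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* Box-Muller: one sample of N(0, sigma) threshold-voltage variation */
static double vt_shift(double sigma)
{
    const double TWO_PI = 6.283185307179586;
    double u1 = (rand() + 1.0) / (RAND_MAX + 2.0);
    double u2 = (rand() + 1.0) / (RAND_MAX + 2.0);
    return sigma * sqrt(-2.0 * log(u1)) * cos(TWO_PI * u2);
}

/* Placeholder write model: succeeds if supply overdrive beats the sampled skew */
static int cell_writes(double vdd, double dvt_pull, double dvt_pass)
{
    return (vdd - 0.40) > (dvt_pull - dvt_pass);
}

int main(void)
{
    const int trials = 100000;
    for (double vdd = 0.45; vdd <= 0.80; vdd += 0.05) {
        int fails = 0;
        for (int t = 0; t < trials; t++)
            if (!cell_writes(vdd, vt_shift(0.03), vt_shift(0.03)))
                fails++;
        printf("VDD=%.2fV  write-fail probability = %.5f\n",
               vdd, (double)fails / trials);
    }
    return 0;
}
```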

Azeez shows that even the alternative of going from planar to FinFET will not produce as large a benefit. Statistical variation-aware analysis shows a way to achieve higher performance through circuit design techniques than can be obtained by a process technology switch. In his talk Azeez also points out other optimizations that can be applied by using statistical analysis for other aspects of the memory design. To see the entire talk, you can look here on the Solido website.



Tesla: After the Crash
by Roger C. Lanctot on 08-24-2016 at 7:00 am

The funny thing about pitching new cars to the general public these days is that no one really wants to think about ever getting into a crash. There was a time when General Motors counted on OnStar as a deciding factor in selling cars because of its post-crash prophylaxis of automatically summoning assistance.

A couple of decades of declining highway fatalities in the U.S. and the emergence of smartphones seem to have eliminated this concern for post-crash measures. In spite of this growing indifference, car companies, app developers, first responders and public authorities continue to work on this application space.

Where GM led and BMW later collaborated in developing algorithms for determining crash severity to better assess the nature of the emergency response, new innovators have stepped forward. Mercedes has taken something of a leadership position with two novel if not widely recognized or understood solutions: Rescue Assist and Pre-Safe Sound.

Pre-Safe Sound, available on the new Mercedes E-Class, is capable of detecting an impending crash and sending a so-called “pink-noise” signal triggering the human ear’s natural defenses against the loud noises of a crash. Rescue Assist consists of a QR code affixed to the door frame providing first responders with vital information regarding the location of airbags, seatbelt tensioners, batteries and other structural and functional systems in the car capable of posing a hazard to emergency technicians.

https://youtu.be/6VszIAKLpFw – Rescue Assist explanation – Mercedes

After a crash has happened it would be ideal for the car to have an OnStar-like system to summon assistance. Too many car makers have embraced connectivity but failed to integrate what is called automatic crash notification. Audi has corrected this flaw in its connectivity strategy with its latest LTE embedded systems.

Tesla has yet to embrace the concept of automatic crash notification – an oversight that looks increasingly obvious and embarrassing with each new Tesla crash. Two of the most recent Tesla crashes may highlight yet another missing piece of post-crash kit.

While we have all come to expect airbags to deploy in the event of a crash, we may take for granted the fact that a car will come to a stop after a collision. Most, though perhaps not all, cars have a post-crash braking system supported by the airbag sensors.

Volkswagen’s Automatic Post-Collision Braking System, for example, automatically initiates braking after the vehicle suffers a collision, and brakes for as long as necessary to reduce the speed of the vehicle to 6mph in order to prevent or minimize the severity of subsequent collisions. It is standard on many VWs.

http://tinyurl.com/jdy72yx – Automatic Post-Crash Braking System – Volkswagen

It may be that, just like automatic crash notification, post-crash braking is not included in Teslas. Judging from the post-crash behavior of the Tesla Model S involved in the recent fatal crash in Florida, it seems that the car failed to detect a crash and continued driving through two fences before hitting a utility pole.

While it is possible that the initial drive-under crash, which may have only impacted the A pillars, might have failed to trigger the airbags, a more recent crash suggests a lack of post-crash preparation. The latest incident, reported last week, involved a Tesla Model S impacting a guard rail multiple times before coming to a stop.

http://tinyurl.com/jy9wcs6 – “Tesla Owner in Autopilot Crash Won’t Sue, But Car Insurer May” – Bloomberg.com

In fact, in the latest incident, the driver says the Tesla continued to accelerate AFTER impacting the guard rail the first time. The Tesla experience suggests a certain degree of complacency when it comes to the post-crash behavior of automobiles or even the basic recognition of the fact that cars can and do crash.

While the goal of auto makers is increasingly to avoid crashes altogether, it is premature to presume that we have solved this problem or that it occurs too infrequently to matter. Car companies can and should be mindful of preparing the car and its occupants when a crash is imminent, appropriately enabling emergency communications, and facilitating safe post-crash interaction with a disabled vehicle for first responders.

It’s not too much to ask. It’s an obligation.

Roger C. Lanctot is Associate Director in the Global Automotive Practice at Strategy Analytics. More details about Strategy Analytics can be found here: https://www.strategyanalytics.com/access-services/automotive#.VuGdXfkrKUk



MediaTek is on the Move with TSMC!
by Daniel Nenni on 08-23-2016 at 4:00 pm

MediaTek (MTK) recently made the news for announcing their first leading edge SoC (Helio X30), a 32% increase in quarter-over-quarter sales, and an expected 30% increase for the year, all of which deserve a closer look as we move into the second half of 2016, which should be very strong for MTK, TSMC, and the fabless semiconductor ecosystem.

First a little bit of history: MTK was originally part of pure-play foundry UMC making chips for the consumer electronics market but was spun out in 1997 and taken public on the Taiwan Stock Exchange in 2001. MTK started out making chips for DVD players, TVs, and early mobile phones. They expanded into smartphones and tablets and today are the number two SoC company, QCOM being number one. QCOM dominates high-end SoCs/smartphones while MTK is known for mid- to low-end products.

Up to 2009, MTK’s success was in 2G; they then launched a wide range of chips for 3G, moving up to 40nm chips in 2012 (QCOM was already at 28nm). Today MTK has the MT6572 SoC, using different flavors of TSMC 28nm, powering their revenue jump with more than one hundred and fifty design wins in production. MediaTek is covered in our book “Mobile Unleashed” in chapter 10, “An Industry in Transition”. We could have done a complete chapter on them like we did with Apple, Samsung, and Qualcomm, and we probably should have, because it is an interesting story.

The big change I have seen is MTK pivoting from a low cost trailing edge technology company to a leading edge SoC provider. In my opinion TSMC is behind that change since QCOM (TSMC’s largest customer) moved to Samsung at 14nm and 10nm. MTK will now be the first to showcase a leading edge TSMC based 10nm SoC in the first half of 2017 competing directly with QCOM and Samsung.

It really is a big jump since MTK has yet to release a 16nm SoC, but with TSMC’s help it will not be as big of a challenge. Remember, TSMC and MTK are both in Hsinchu, just a mile or two apart. In fact, quite a few former TSMC employees now work at MTK. Coincidentally, or not, MTK CEO Ming-Kai (MK) Tsai was recently awarded the Dr. Morris Chang Exemplary Leadership Award for pioneering the Taiwan semiconductor design industry. I was at the award presentation and found MK’s humble acceptance speech to be incredibly inspiring, making him one of my favorite semiconductor CEOs, absolutely.

Unfortunately, MTK has not made the jump to an ARM architectural license yet, like Apple and QCOM have, so their Helio X30 chip will be more “show than go” – don’t expect a huge pile of high-end smartphone design wins. The Helio X30 uses four Cortex-A73 cores clocked at 2.8GHz, four Cortex-A53 cores clocked at 2.2GHz, and another two A53 cores at 2.0GHz, plus PowerVR 7XT quad-core graphics and a Cat 12 LTE modem. The point, however, is that TSMC is fully backing MediaTek’s thrust into the leading edge (16nm, 10nm, and 7nm), which is a serious “shot over the bow” of QCOM and Samsung.



The Perfect Wearable SoC…?
by Rick Tewell on 08-23-2016 at 12:00 pm

Power is Everything
During Apollo 13, after the oxygen tank in the service module exploded, forcing the crew to use the lunar module as a lifeboat to get back home, John Aaron – an incredibly gifted NASA engineer who was tasked with getting the Apollo 13 crew back home safely – flatly stated “Power is everything…we’ve got to turn everything off…”

His point was that the Lunar Module was burning 55 amps in steady state, and it could burn a maximum of only 24 amps if the batteries were to last the 45 hours required to get the crew back alive into Earth orbit, so they could then use the Command Module to parachute back to Earth. I often think about that quote when working on products that require ultra-low power like, well…wearables! In this case it really is all about power. After an extensive worldwide tour talking to wearable SoC and OEM vendors, here is a list of the key features that end users say they want:

  • Rich graphics and touchscreen functionality
  • Connectivity – GPS, LTE, WiFi, BT, etc.
  • LONG battery life – up to two weeks between charges
  • Voice control
  • Health features – biometrics
  • Mobile wallet

The first two bullets are absolute POWER hogs. Let’s take a look at what I call the “power pyramid” – which shows why the third bullet is absolutely key!

Here, you see that the biggest consumer of power in a wearable is the wireless stuff, things like LTE and WiFi – but also things like GPS (I know that GPS might not be considered communications, but it has an antenna and a receiver and burns power). Next comes the LCD backlight – this is always one of the worst offenders – the light for the LCD screen. After that comes the display controller – it has to refresh the display (when it is on) at 50 to 60 frames per second (let’s not debate this point – yes, you can send graphics to the display controller at any frame rate you want, from 1 frame per second, or lower, up to full frame rate – but it’s the GPU doing the drawing. The display controller ALWAYS has to draw to the screen at the refresh rate of the screen), and refreshing the LCD burns power like crazy.

Next comes the GPU – which draws the graphics on your LCD screen. Today’s GPUs (even small ones) are power hogs. After that comes the CPU and memory systems. Let’s look at these things one by one and see what can be done to build the perfect wearable SoC.

Communications
Here, I have one word of extraordinary coolness – Narrowband IoT (NB-IoT)…OK, two words. In order to save power, the radios in your wearable are going to have to be off most of the time. They need to be “bursty” – simply wake up, get or send what they need, and shut down. Not “kind of down” – ALL the way down. When they are up and running, they need to be extraordinarily efficient in the way they handle power. NB-IoT cleverly uses the existing legacy mobile networks to address the low power wide area network markets – of which wearables are certainly a part.
Even if NB-IoT doesn’t provide the necessary bandwidth for a wearable application, perhaps the next step up to LTE-M will – which is still lower power than LTE Cat-0 or LTE Advanced. At any rate – the capability to talk to the existing mobile wireless networks in a low power manner is here, and it will likely become prolific. Add to that Bluetooth Low Energy, super low power WiFi (HaLow – 802.11ah) and ultra low power GPS receivers all tied around exceptionally clever and efficient software and we should be able to turn this power hog into a power piglet.

LCD Backlight
Here – we just get rid of it. Just Say No. You are probably wondering what the alternative is…right? We will get to that in a moment – but the idea that we have to turn on a light to see a screen when we typically have light supplied to us essentially “for free” most of the time seems a little silly. The alternative? Sharp has some really great memory LCDs that use “transflective” illumination for the LCD. HERE is a link to a very descriptive white paper PDF from Sharp. Very cool…no light. Oh…you can ADD a backlight and control it from a light sensor (or manually) when you need it, like in low ambient light conditions, but this is a HUGE power saver.

I know what some of you are now thinking. Wait – did he say “memory” LCD? Does that mean that the image stays on the screen indefinitely after being drawn, eliminating the need to wastefully draw the same image over and over and over again when the draw rate is so much lower than the refresh rate? Yep. That’s what I am talking about. What about color? Yep – Sharp has some darn good color memory displays. What about frame rate – what if I want animation? Well, the Sharp memory displays can refresh at a rate that is suitable for full motion animation (15 – 17 fps equivalent).

Think about the case of a watch with a second hand – theoretically, you could update the screen once per second, and with the backlight off and using a memory display, your overall LCD power would be “noise level” compared to what it is in many of today’s wearables… So let’s say you go crazy and update the screen at 5 frames per second (which is one update every 200ms, btw – pretty fast for us humans). The power required to do this would still be crazy low. How low? How about 25µW for a static image and 60µW for a dynamic image. Even if you quadruple those numbers – still crazy low compared to a traditional LCD you find in today’s “high end” wearables. Yes – OLED displays are a HUGE improvement in power consumption over traditional LCD displays – but they aren’t anywhere near the numbers above and still have to be refreshed at 50 – 60 fps. Now – micro LED – THAT could be interesting…we shall see.

Display Controller
OK – so here we need a display controller that can handle “traditional” displays -or- go on hiatus and support memory displays like the ones above. Remember that a traditional display controller must read a frame out of memory and clock it out to the LCD display at the LCD display’s required frame rate, typically 60 fps. So, let’s say the LCD display is 340×272 and is full color (24-bit RGB) – that means that you have to read 16MB from memory and pump it to the display every SECOND – even if nothing is changing on the screen! With a memory display at 15 fps, that goes down to 4MB/sec, and 0MB/sec when nothing is changing on the screen (actually it is likely far less than that because you probably aren’t sending 24-bit RGB color…but I digress).
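A quick sanity check of those bandwidth numbers, using the panel size and bit depth quoted above (everything else is simple arithmetic):

```c
/* Back-of-envelope check: a 340x272, 24-bit panel at 60 fps versus a memory
 * display updated at 15 fps (and not at all when the image is static). */
#include <stdio.h>

int main(void)
{
    const double width = 340, height = 272, bytes_per_pixel = 3; /* 24-bit RGB */
    double frame_bytes = width * height * bytes_per_pixel;       /* ~277 KB    */

    printf("traditional LCD @ 60 fps : %.1f MB/s\n", frame_bytes * 60 / 1e6); /* ~16.6 */
    printf("memory display @ 15 fps  : %.1f MB/s\n", frame_bytes * 15 / 1e6); /* ~4.2  */
    printf("memory display, static   : 0 MB/s (no refresh traffic)\n");
    return 0;
}
```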

For the “perfect wearable SoC”, however, you should be able to support:

  • Traditional LCDs
  • OLED
  • Memory Displays
  • microLED

If the display controller is driving a memory display, the power requirements for the display controller should be VERY low assuming it is “off” when it isn’t updating the image on the screen.

GPU
This one is going to be controversial. The company I work for, VeriSilicon (which recently acquired the Vivante GPU company), is one of the leading GPU suppliers in the industry for automotive, IoT, AR/VR, healthcare, and other mobile/consumer markets. 3D GPUs are near and dear to my heart, but our 2D vector graphics GPUs are nothing short of spectacular. Incredibly low power and able to deliver stunning RICH graphic experiences…I would say that for wearables, vector graphics is ideally suited.

We have created many graphically rich user interfaces from automotive to wearables using vector graphics (also jokingly referred to as 2.5D). I’m not going to debate the point much because the proof is already in the market – but consider these vector graphics samples HERE . Consider the following table:

To draw the SAME rich graphics user interface in a wearable / IoT application, our 2D vector graphics GPU is:

  • 1/2 the size of our standard 2D raster GPU
  • 3.5 times smaller than our “small” 3D GPU
  • draws 5.5 times less power than our 2D raster GPU
  • draws 8 times less power than the 3D GPU
  • the driver is 1/8 the size of the 3D GPU driver

Now – before you get all “smarty pants” on me – our 2D and 3D GPUs are known to be the smallest and most efficient in the world – so we are comparing apples to apples here. Also, we build and sell 2D, vector graphics, AND full-featured 3D GPUs, so I have dogs in ALL these hunts – I am just saying that the GCNanoLiteV vector graphics GPU should warrant STRONG consideration for any wearable SoC – especially the “perfect” one.

CPU
Here, I don’t have a dog in the hunt, but I would say that RISC-V is certainly something worth considering in the “perfect wearable SoC”. Consider the following table. Looks like RISC-V could be very interesting indeed and should be on the shortlist of anyone looking to build a wearable SoC – I particularly like the power numbers. For those of you who may not know, RISC-V is an “open source” RISC instruction set architecture. This is the link to the RISC-V foundation. Also, check out SiFive – they can answer all your RISC-V questions…

While we are on the subject of power, the question of operating system comes into play here. Which one to use? I would lean towards Zephyr. Zephyr is a real-time operating system and an “official” project of the Linux Foundation. It sports a nano-kernel AND a micro-kernel architecture. It very well may be ideally suited for wearables and IoT (I think it is, anyway…). You can check it out here.
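As a rough illustration of the duty-cycled firmware style a wearable needs (and the sort of thing an RTOS like Zephyr is built for), here is a minimal sketch of a wake-sample-transmit loop. The kernel calls (k_sleep, K_SECONDS, printk) are standard Zephyr APIs, though the exact headers and the main() signature vary by Zephyr version; the sensor and radio functions are placeholders, not real drivers.

```c
/* Minimal duty-cycled loop sketch under Zephyr: wake, sample, transmit, sleep.
 * read_heart_rate() and send_ble_packet() are stand-ins for real drivers. */
#include <zephyr/kernel.h>
#include <zephyr/sys/printk.h>

static int read_heart_rate(void)     { return 72; }                  /* placeholder sensor */
static void send_ble_packet(int bpm) { printk("bpm=%d\n", bpm); }    /* placeholder radio  */

int main(void)
{
    while (1) {
        int bpm = read_heart_rate();   /* burst of work while awake              */
        send_ble_packet(bpm);          /* radio on only long enough to transmit  */
        k_sleep(K_SECONDS(60));        /* then everything idles for a minute     */
    }
    return 0;
}
```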

So, what are my conclusions here? Here are the key technologies that I think need to be “in play” for the “perfect wearable SoC” in power consumption order.

  • Narrowband IoT or CAT-M for communications
  • 802.11ah HaLow WiFi
  • LCD Memory Displays – display controller must support this
  • 2D Vector Graphics GPU – GCNanoLiteV
  • RISC-V processor consideration – although the Cortex M-Cores are pretty darn awesome (and no…they did not pay me to say that…it is just a fact). If you MUST have an A-Core – the A32 is pretty awesome too.
  • Zephyr RTOS (not really an SoC feature – but can easily drive IP choices based upon RTOS features) – I personally wouldn’t necessarily look anywhere else but Zephyr, but I would strongly consider adding the Google Weave framework here.
  • Oh…and last but not least – I would absolutely, positively build the “perfect wearable SoC” using FD-SOI. The ability to granularly control power by tweaking the back-gate biasing…it is just unparalleled. I would use either Samsung 28nm FDSOI or 22nm FDSOI from GlobalFoundries. Both are amazing and PERFECT for wearables. I did a presentation at DAC2016 about FDSOI and you can find it here.

Again, VeriSilicon is node agnostic – but FDSOI’s features just scream out for applications like wearables. Yes. We love FinFET and Bulk processes too. Well, that is the way I see it… Let the debates begin. Please be nice. If you are interested in a much more in-depth white paper about the “perfect wearable SoC” just let me know.



IoT Standardization and Implementation Challenges
by Ahmed Banafa on 08-23-2016 at 7:00 am

The rapid evolution of the IoT market has caused an explosion in the number and variety of IoT solutions. Additionally, large amounts of funding are being deployed at IoT startups. Consequently, the focus of the industry has been on manufacturing and producing the right types of hardware to enable those solutions. In the current model, most IoT solution providers have been building all components of the stack, from the hardware devices to the relevant cloud services – or, as they like to call it, “IoT solutions.” As a result, there is a lack of consistency and standards across the cloud services used by different IoT solutions.

As the industry evolves, the need for a standard model to perform common IoT backend tasks, such as processing, storage, and firmware updates, is becoming more relevant. In that new model, we are likely to see different IoT solutions work with common backend services, which will guarantee levels of interoperability, portability and manageability that are almost impossible to achieve with the current generation of IoT solutions.

Creating that model will not be an easy task by any stretch of the imagination; there are hurdles and challenges facing the standardization and implementation of IoT solutions, and that model needs to overcome all of them.

IoT standardization
The hurdles facing IoT standardization can be divided into four categories: Platform, Connectivity, Business Model and Killer Applications:

  • Platform: This part includes the form and design of the products (UI/UX), the analytics tools used to deal with the massive data streaming from all products in a secure way, and scalability, which requires wide adoption of protocols like IPv6 across all vertical and horizontal markets.
  • Connectivity: This phase includes all parts of the consumer’s day and night routine, from using wearables, smart cars, and smart homes to, in the big scheme, smart cities. From the business perspective we have connectivity using the IIoT (Industrial Internet of Things), where M2M communications dominate the field.
  • Business Model: The bottom line is a big motivation for starting, investing in, and operating any business; without sound and solid business models for IoT we will have another bubble. The model must satisfy all the requirements for all kinds of e-commerce: vertical markets, horizontal markets and consumer markets. But this category is always a victim of regulatory and legal scrutiny.
  • Killer Applications: In this category there are three functions needed to have killer applications: control “things”, collect “data”, and analyze “data”. IoT needs killer applications to drive the business model using a unified platform.

All four categories are inter-related; you need all of them to make any of them work. Missing one will break the model and stall the standardization process. A lot of work is needed here, and many companies are involved in each of the categories; bringing them all to the table to agree on a unifying model will be a daunting task.

IoT implementation
The second part of the model is IoT implementation; implementing IoT is not an easy process by any measure, for many reasons, including the complex nature of the different components of the IoT ecosystem. To understand the gravity of this process, we will explore all five components of IoT implementation: Sensors, Networks, Standards, Intelligent Analysis, and Intelligent Actions.

Sensors
There are two types of sensors: active and passive. The driving forces for using sensors in IoT today are new trends in technology that have made sensors cheaper, smarter and smaller. But the challenges facing IoT sensors are power consumption, security, and interoperability.

Networks

The second component of IoT implementation is transmitting the signals collected by sensors over networks, with all the different components of a typical network including routers and bridges in different topologies. Connecting the different parts of networks to the sensors can be done by different technologies including Wi-Fi, Bluetooth, Low Power Wi-Fi, WiMAX, regular Ethernet, Long Term Evolution (LTE) and the recent promising technology of Li-Fi (using light as a medium of communication between the different parts of a typical network, including sensors).

The driving forces for widespread network adoption in IoT are high data rates, low prices for data usage, virtualization (X-Defined Networking trends), the XaaS concept (SaaS, PaaS, and IaaS), and IPv6 deployment. But the challenges facing network implementation in IoT are the enormous growth in the number of connected devices, availability of network coverage, security, and power consumption.

Standards

The third stage in the implementation process includes the sum of all activities for handling, processing and storing the data collected from the sensors. This aggregation increases the value of data by increasing the scale, scope, and frequency of data available for analysis, but aggregation is only achieved through the use of various standards, depending on the IoT application in use.

There are two types of standards relevant to the aggregation process: technology standards (including network protocols, communication protocols, and data-aggregation standards) and regulatory standards (related to security and privacy of data, among other issues). Challenges facing the adoption of standards within IoT are standards for handling unstructured data, security and privacy issues, and regulatory standards for data markets.

Intelligent Analysis

The fourth stage in IoT implementation is extracting insight from data through analysis. IoT analysis is driven by cognitive technologies and the accompanying models that facilitate their use. With advances in cognitive technologies, varied forms of information such as vision and voice have also become usable, opening the door to in-depth understanding of the non-stop streams of real-time data. Factors driving the adoption of intelligent analytics within IoT include artificial intelligence models, growth in crowdsourcing and open-source analytics software, and real-time data processing and analysis. Challenges facing the adoption of analytics within IoT include inaccurate analysis due to flaws in the data and/or model, legacy systems’ limited ability to analyze unstructured data, and legacy systems’ limited ability to manage real-time data.

Intelligent Actions
Intelligent actions can be expressed through M2M (machine-to-machine) and M2H (machine-to-human) interfaces, for example, with all the advancements in UI and UX technologies. Factors driving the adoption of intelligent actions within IoT include lower machine prices, improved machine functionality, machines “influencing” human actions through behavioral-science rationale, and deep learning tools. Challenges facing the adoption of intelligent actions within IoT include machines’ actions in unpredictable situations, information security and privacy, machine interoperability, mean-reverting human behaviors, and slow adoption of new technologies.

The Road Ahead
The Internet of Things (IoT) is an ecosystem of ever-increasing complexity. It is the next wave of innovation that will humanize every object in our lives, the next level of automating every object in our lives, and the convergence of technologies will make IoT implementation much easier and faster, which in turn will improve many aspects of life at home, at work, and in between. From refrigerators to parking spaces to houses, IoT is bringing more and more things into the digital fold every day, which will likely make IoT a multi-trillion-dollar industry in the near future. One possible outcome of successful IoT standardization is the implementation of “IoT as a Service” technology; if that service is offered and used the same way we use other flavors of “as a service” technologies today, the possibilities for real-life applications will be unlimited. But we have a long way to go to achieve that dream; we need to overcome many obstacles and barriers on two fronts, consumers and businesses, before we can harvest the fruits of such a technology.

Article published on IEEE-IoT : http://iot.ieee.org/newsletter/july-2016/iot-standardization-and-implementation-challenges

References:
http://www.dbta.com/BigDataQuarterly/Articles/10-Predictions-for-the-Future-of-IoT-109996.aspx
https://campustechnology.com/articles/2016/02/25/security-tops-list-of-trends-that-will-impact-the-internet-of-things.aspx
http://dupress.com/
https://www.linkedin.com/pulse/iot-implementation-challenges-ahmed-banafa?trk=mp-author-card
https://www.linkedin.com/pulse/what-next-iot-ahmed-banafa?trk=mp-author-card

Figures Credit: https://pixabay.com/en/binary-code-man-face-board-trace-1327503/ and Ahmed Banafa



ARM gets wider and more flexible in vectors
by Don Dingee on 08-22-2016 at 4:00 pm

ARM has a storied history of announcing major architecture changes at conferences far in advance of product implementations to get their ecosystem moving. At Hot Chips 2016, their sights are set on revamping the ARMv8-A architecture for a new generation of server and high-performance computing parallelism with a preview of the Scalable Vector Extension (SVE).

ARM NEON, similarly previewed in 2004, was a response to Intel’s move to incorporate their MMX technology for SIMD into mobile chips, adding iwMMXt to the PXA270 processor. In desktop and server space, Intel drove several evolutions of Streaming SIMD Extensions (SSE) and in 2010 announced a move to their Advanced Vector Extensions (AVX), currently with 512-bit support in the AVX-512 variant in Knights Landing and Skylake.

To Intel’s credit, they have put extensive efforts into compiler technology that can deal with all the variants of SSE and AVX on various Intel processor families. Auto-vectorization, magically transforming sequential code for vector processing, is good in theory but often falls down unless the target hardware has exactly the right support. For example, tossing the -xAVX option at something predating Sandy Bridge generates a fatal error. Intel came up with the -axAVX flag to generate both a baseline path (set to SSE2, SSE3, SSE4.1, or another instruction set via a separate option) and an AVX-optimized path, with runtime selection based on processor support.
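For a sense of what auto-vectorization works on, here is the kind of loop a compiler can vectorize cleanly: independent iterations, unit-stride accesses, no aliasing. This is just an illustrative sketch (not from Intel’s documentation); built with the -axAVX approach described above, the compiler would emit both a baseline SIMD path and an AVX path and pick one at runtime.

```c
/* A textbook auto-vectorization candidate: each iteration is independent and
 * the restrict qualifiers tell the compiler x and y do not alias. */
void saxpy(float *restrict y, const float *restrict x, float a, int n)
{
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}
```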

ARM NEON fell far behind in comparison, really having little reason to evolve for mobile needs. However, it is a new era, and ARM wants its product in a new generation of server-class platforms with different workloads. “Server” always needs an adjective for proper discussion; Intel’s lead in high-volume application servers is undisputed, but ARM wants “beach heads in key segments” per their slides. Telecom is one of those – see my prior posts on their OPNFV efforts – as well as IoT infrastructure and real-time analytics platforms.


HPC is the next area ARM wants to scout. ARM has quietly been watching the portability beast Intel created and considering how to get the performance benefits of vectorization without the software migraine headaches. Vector length is part of the problem; picking a fixed length can lock in goodness for some applications but cause others not to run.

SVE is wide: 128 to 2048 bits. There is almost no overlap with NEON at its 128-bit maximum. Instead, SVE has been created from the ground up for systems such as HPC. In an ARM tradition, where the ecosystem determines the best fit for the architecture, SVE supports both a vector-length choice and a vector-length agnostic programming model that can adapt to the available vector length in the hardware. There are many other improvements in SVE with the aim of smoothing out compiler vectorization:


It’s interesting how ARM has squeezed SVE into ARMv8-A. 75% of the A64 encoding space is already allocated, but SVE took just a quarter of the remaining 25% with some creative use of predicated execution and attention to addressing modes.
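To make the vector-length agnostic idea concrete, here is the same saxpy kernel sketched with the ACLE SVE intrinsics from arm_sve.h. The loop never hard-codes a vector width: svcntw() reports how many 32-bit lanes the hardware provides (anywhere from a 128-bit to a 2048-bit implementation), and the whilelt predicate handles the loop tail. Treat this as an illustration of the programming model only; these intrinsics postdate the Hot Chips announcement described here.

```c
/* Vector-length agnostic saxpy using SVE intrinsics (illustrative sketch). */
#include <arm_sve.h>
#include <stdint.h>

void vla_saxpy(float *y, const float *x, float a, int64_t n)
{
    for (int64_t i = 0; i < n; i += svcntw()) {     /* advance by hardware lane count */
        svbool_t    pg = svwhilelt_b32_s64(i, n);   /* active lanes for this pass     */
        svfloat32_t vx = svld1_f32(pg, &x[i]);      /* predicated loads               */
        svfloat32_t vy = svld1_f32(pg, &y[i]);
        vy = svmla_n_f32_x(pg, vy, vx, a);          /* vy += a * vx                   */
        svst1_f32(pg, &y[i], vy);                   /* predicated store               */
    }
}
```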

Fujitsu has been collaborating with ARM on the Post-K supercomputer and compiler technology supporting SVE. As we see from the Intel efforts, compilers and libraries will be the make-or-break aspect for SVE, and the uptake in Linux distributions with SVE-enabled libraries will be an area to watch.

Nigel Stevens gave the talk at Hot Chips and wrote a blog post with more details on the innovations in SVE:

Technology Update: The Scalable Vector Extension (SVE) for the ARMv8-A architecture

There is also a good Fujitsu overview of Post-K:

Fujitsu’s Next Endeavor: The Post-K Computer

I think ARM recognizes very well they have a huge mountain to climb on Intel’s head start in server-class and HPC processing. They’ve clearly learned from the ARMv7 “we’re in servers now” debacle, and are taking steps in both ARMv8-A architecture and ecosystem development to start paving the path with niche wins.

SVE is a huge step forward, and ultimately will probably have much bigger impact than NEON for ARM. It really ups the ante in terms of vector width and the potential compiler technology that could support a wide range of hardware. Most of the HPC work will probably be in C/C++ or FORTRAN. As an IoT wonk, I’d also be curious how other distributed languages like Lua and Rust might be able to take advantage of vectorization with SVE.

Was anyone at Hot Chips and have further insight on this announcement, or just general thoughts on how SVE might stack up from HPC or compiler work?



Did My FPGA Just Fail?
by Daniel Payne on 08-22-2016 at 12:00 pm

Designing DRAMs at Intel back in the 1970s, I first learned about soft errors and the curious effect of higher failure rates of DRAM chips in Denver, Colorado, which sits at a higher altitude than Aloha, OR. With the rapid growth of FPGA-based designs in 2016, we are still asking the same questions about the reliability of our chips used for safety-critical applications like:
Continue reading “Did My FPGA Just Fail?”



A New Player in the Functional Verification Space
by Bernard Murphy on 08-22-2016 at 7:00 am

Israel has a strong pedigree in functional verification. Among others, Verisity (an early contributor to class-based testbench design and constrained random testing) started in Israel, and Rocketick (hardware-based simulation acceleration), acquired more recently by Cadence, is based in Israel. So when I hear about an emerging Israel-based company in functional verification, I pay attention. Vtool is such a company, spun out of the Veriest FV consulting services company in 2014. I spoke recently with Hagai Arbel, the CEO.

Hagai’s focus is on simplifying and optimizing UVM testbench creation and on providing new/improved methods to debug. His company offers 3 tools: Machina, Vitalitas (not yet released) and Cogita. These are verification environment tools – they sit around whatever core simulator you happen to use.

Let’s start with Machina. It should come as no surprise to anyone that UVM, while it has done a great job by providing a standard foundation for testbenches, has also in a sense created a new career path, requiring a long apprenticeship to reach the lofty heights of UVM expert. That’s good for careers, not so much for getting designs signed off quickly. Worse yet, testbenches now contain 3-10 times as much code as the DUT, making the testbench a fertile breeding ground for many more bugs than you may find in the design. Building UVM testbenches isn’t just complicated, it’s very error-prone and debugging those testbenches can significantly amplify the verification workload.

The standard recipe for helping with this problem is libraries of predefined UVC components and graphical tools to build UVC components and protocol interfaces. Machina provides a nice implementation of these with on-the-fly linting and a drag and drop interface to build graphical flows, but this isn’t radically different from what the main simulation guys provide. What sets Machina apart, they tell me, is that their builder is completely interoperable between hand-crafted testbenches and the graphical variety. So you can start in text, improve it in the graphical interface, go back to text for some specialized changes from another team, go back into the graphical interface, … I have some familiarity from my Atrenta days with what works in graphical aids to RTL design. Interoperability makes a huge difference in usability and productivity because tasks that require text editing and tasks that benefit from the GUI don’t always nicely partition. Vtool said that one of their customers told them it took 2-3 hours to implement a UVC with Vtool, and without Vtool it would have typically taken them 2-3 days.

Vitalitas (not yet released) provides a visual method to build sequences in the form of flowcharts of scenarios; from this it will generate UVM code. This graph-based scenario method is becoming popular in the industry as a path to building more portable testbenches. I won’t go into more detail here since the product is pre-release but there is more info on the website.

Cogita is a novel debugger which you would very likely use as a complement to your standard debugger. It lets you write complex search patterns – for example, to look for APB transactions – and reads the simulation log files to produce graphical views based on those patterns. This can be extremely helpful in looking for suspicious behavior, something that’s not necessarily wrong but maybe unexpected. In particular, this helps you look for unusual correlations between patterns that you expect to correlate in certain ways.

Hagai told me they are seeing adoption of the tool suite especially in companies that have not yet built up significant UVM infrastructure. There is particular interest in companies doing FPGA design for whom the big ASIC tool flows are still not so familiar. I would expect they may also see growth among UVM non-experts (aka most UVM users) around testbench generation. Vtool is distributed in the US by Consensia. You can learn more about Vtool HERE.

More articles by Bernard…