
The eFPGA Market is Heating Up!

by Daniel Nenni on 05-22-2017 at 7:00 am

It is nice to see an emerging market actually emerge for a change. With embedded FPGAs we are well past test chips and are now seeing tape-outs and working silicon in a variety of applications. I’m not sure what the current eFPGA market estimate is just yet, but the technology aligns nicely with the $30B+ microcontroller market. Market studies have the MCU market exceeding $100B in 2024, driven partly by the exploding IoT segment but also by the increasing intelligence of automotive products and the need for security in all of our semiconductor products.

Global Microcontroller Market, 2014 – 2024 (USD Billion)

This brings me to the topic at hand, which is embedded FPGA pioneer Flex Logix raising $5M in Series B equity financing. The first question I asked was, “Why only $5M?” The answer of course is “Because that is all we need.” Boom… mic drop.

Here are quotes and some meat from the press release. As a receiver of MANY press releases I have to say Flex Logix releases are very informative:

“We believe that Flex Logix’s embedded FPGA has the potential to be as pervasive as ARM’s embedded processors have become today,” said Peter Hebert, Managing Partner at Lux Capital. “The company’s software and silicon are proven and in use at multiple customers, paving the way to become one of the most widely-used chip building blocks across many markets and for a broad range of applications.”

With readily-available, high-density blocks of programmable RTL in any size and with the features a customer needs, designers now have the powerful flexibility to customize a single chip for multiple markets and/or upgrade the chip while in the system to adjust to changing standards such as networking protocols. Customers can also easily update their chips with the latest deep learning algorithms and/or implement their own versions of protocols in data centers. In the past, the only way to accomplish these updates was through expensive and time-consuming new mask versions of the same chip.

“The Flex Logix platform is the best, most scalable and flexible embedded FPGA solution on the market today, delivering significant competitive advantages in time to market, engineering efficiency, minimum metal layers and high density,” said Pierre Lamond, Partner at Eclipse Ventures.

“These patented technology breakthroughs combined with an experienced management team led by Geoff Tate, the founding CEO of Rambus, strongly position the company for rapid growth in the future.” “We now have customers with working silicon, multiple licensees and are seeing our first repeat designs,” said Tate, CEO of Flex Logix. “With a large number of customers in active, detailed evaluation of our technology for a wide range of applications, we expect significant growth of our customer base in the near term. As a result, we’re staffing up ‘ahead of the curve’ to ensure we can satisfy the demand.”

About the Flex Logix Embedded FPGA Platform
Addressing a wide range of chips in multiple markets, the Flex Logix EFLX™ platform can be used in networking chips with reconfigurable protocols, datacenter chips with reconfigurable accelerators, deep learning chips with real-time upgradeable algorithms, base station chips with customizable features for multiple markets, MCU/IoT chips with flexible I/O and accelerators, and aerospace/defense applications including rad-hard embedded FPGA. EFLX is already available for the most popular process nodes and is being ported to additional nodes based on customer demand.

About Flex Logix
Flex Logix, founded in March 2014, provides solutions for reconfigurable RTL in chip and system designs using embedded FPGA IP cores and software. The company’s technology platform delivers significant customer benefits by dramatically reducing design and manufacturing risks, accelerating technology roadmaps, and bringing greater flexibility to customers’ hardware. Flex Logix has raised more than $12 million of venture capital. It is headquartered in Mountain View, California and has sales rep offices in China, Europe, Israel, Japan, Taiwan and Texas. More information can be obtained at http://www.flex-logix.com or follow on Twitter at @efpga.

Also read: CEO Interview: Geoff Tate of Flex Logix

You can read more about Flex Logix on SemiWiki HERE.


Calling on #IoTman to save humanity!

by admin on 05-21-2017 at 7:00 am

We, in the hi-tech community, tend to gravitate towards the technology, the API, the device, the platform, the process node, and to forget the goal behind all of those items. We have all noticed the platform wars and the cloud API struggle for #IoT market domination. Someone needs to bring the discussion back to the top level, to why we started down the road of #IoT a few years ago. Here is my contribution; happy to see yours.

There are four major challenges that #IoTman (that is, you and I in the hi-tech world) must resolve for 9 billion people to have a good life in 2050:

  • Food – basic sustenance but also trend towards western diet
  • Water – drinking, hygiene and irrigation
  • Energy consumption – motors, lighting and industrial
  • Healthcare – chronic disease management and prevention

Background data is a dime a dozen. A focused Google search returns many supporting reports and diagrams regarding the gravity of the situation for those four items.

    Food
    How do we properly feed 9 billion people? We already have the means with abundant crops and precision agriculture that is increasing yields every year. We read such news items fairly regularly now: “Grains piled on runways, parking lots, fields amid global glut”. The issue is mainly distribution and eliminating waste. How do we recover the 30% of food that is wasted en route? Roughly one third of the food produced in the world for human consumption every year — approximately 1.3 billion tonnes — gets lost or wasted.


    “Figure 3 illustrates the amounts of food wastage along the food supply chain. Agricultural production, at 33 percent, is responsible for the greatest amount of total food wastage volumes. Upstream wastage volumes, including production, post-harvest handling and storage, represent 54 percent of total wastage, while downstream wastage volumes, including processing, distribution and consumption, is 46 percent. Thus, on average, food wastage is balanced between the upstream and downstream of the supply chain.”

    #IoTman has the solution for these issues with blockchain technology to streamline supply and demand contracts and reduce transaction cost while IoT platforms and sensors can track produce across the distribution chain to guard against spoilage. Piece of cake for #IoTman.

    Water
    As an example, an Italian study by Censis reports that the volume of water lost in the distribution network is 32% in Italy, 20% in France, 6.5% in Germany and 15.5% in England and Wales. Add to that the flooding that happens on a yearly basis, and it is clear that we need better control of our water distribution networks and advance warning systems for floods.


    Given the graph above, clearly any #IoT system that can cut down on water waste in irrigation will have a massive impact.

    #IoTman, again, already has many solutions to our water issues, from smart meters to regulate consumption to sensors for leakage and massive flooding. We still need to work on our blockchain exchange for supplying water contracts via various means, but it is only a matter of time before someone releases such a peer-to-peer water trading platform with minimal transaction costs.

    Energy
    Energy efficiency is crucial in dealing with demand outstripping supply. Would anyone like to contest this statement? There is no one-size-fits-all answer to the issues relating to energy. What is clear, though, is that we have the technology to dramatically increase the efficiency of energy use and distribution with #IoT sense-and-control systems across the whole energy chain. It is not just smart meters that will make a difference, but smart cities, buildings and enterprises.

    Electric motors use 45% of global electricity. We are talking here about global electricity! #IoTman can definitely supply systems today that sense and control motors of all sorts to improve efficiency. Every few percentage points of improvement have a dramatic effect on a global scale.

    Healthcare
    I saved the best one for last. Who in his right mind imagines that we can properly care for 9 billion people in 2050 when we are already spending over 10% of GDP today just to barely keep up? Why are we still waiting for people to get sick before we prescribe medication to the tune of $1 T? This is definitely not a sustainable model for 2050.


    #IoTman has the technology to flip this problem around by focusing on prevention. Big data genomics and sensors for continuous tracking of daily medical status are the only possible effective way forward. Get the GDP percent down, get Pharma to change its business model to prevention instead of cure and most of all get those 9B people to live a healthier life as their life expectancy increases.

    There are many reports from various organisations that expand on the subjects above and go into details of how #IoT will make a dramatic impact on the living conditions for humanity in 2050. My intent was not to replicate these studies here but to serve as a quick reminder.

    Given the above, it is time for the hi-tech community to rise to the challenge, to come together, put aside its fascination with my-platform-not-yours and my-API-not-yours, and join hands so that together we can create a better future for humanity. Do we really want to delay massive #IoT deployment by many years until the right application platform eventually wins?

    Would you join #IoTman in this epic journey to a trillion #IoT nodes serving humanity?


Webinar: Recipe to consume less than 0.5 µA in sleep mode

by Eric Esteve on 05-19-2017 at 12:00 pm

Dolphin is addressing the ultra-low-power (ULP) needs for some applications, like for example Bluetooth low energy (BLE), machine-to-machine (M2M) or IoT edge devices in general. For these applications, defining active and sleep modes is imperative, but it may not be enough to guarantee that the battery-powered system will run for years, especially when the device integrates Always-On functions. In fact, a device in sleep mode is expected to support some mandatory functions, like Clock (RC or XTAL oscillator), retention SRAM (data and program memory), ACU/PMU logic (control), voice activity detection and so on. We will see in this webinar the different strategies to implement these Always-On functions, including the power network distribution within the chip.

It’s a very practical webinar: Dolphin proposes different architectures, all based on IP from the Dolphin portfolio (in 180nm, 55nm, 40nm and 22nm processes) dedicated to the Always-On power domain, and discusses the impact on chip power consumption using measured (or simulated) power figures.

Dolphin proposes five synopses, each related to a specific action:

  • Using a supply multiplexer
  • Mutualizing a voltage regulator
  • Associating 2 voltage regulators
  • Using a near-threshold voltage library
  • Using a thick oxide library

The designer will go for the architecture optimized in respect with the application constraints, probably mixing several of the above listed recipes.

The emerging connected systems are most frequently battery-powered, and the goal is to design a system able to survive on, for example, a coin cell battery not for days or even months, but for years. If you dig, you realize that numerous applications, such as M2M, BLE and Zigbee, have an activity rate (duty cycle) such that the power consumption in sleep mode dominates the overall current drawn by the SoC. For such applications, the design of the “Always-On power domain” is pivotal. To meet customer expectations, the current consumption of the Always-On (AoN) power domain – including blocks in retention mode – must not exceed 500 nA. To reach this target, the power network architecture needs to be carefully considered, and the IP supporting this architecture must be available in the target technology node.

Dolphin has developed a methodology based on a figure of merit (FoM), expressed as follows:


Because we are dealing with devices supporting always-on capability in the context of connected, battery-powered systems, power consumption carries the highest weight at 60%, decreasing to 25% for area and 15% for bill-of-materials (BoM). Looking at the five power architectures for implementing the AoN power domain, a comparative analysis makes it possible to identify the characteristics of the silicon IP required to reach the targeted performance optimization, identified by the best (lower is better) FoM.
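Since the FoM formula itself appears in a figure not reproduced here, the following is only a plausible sketch of how such a weighted figure of merit might be computed, assuming a simple weighted sum of metrics normalized to a reference design with the stated 60/25/15 weights; the architecture numbers are hypothetical:

```python
# Hedged sketch: Dolphin's actual FoM formula is in a figure not reproduced
# here; this assumes a weighted sum of metrics normalized to a reference
# design, using the stated 60/25/15 weights (lower is better).
WEIGHTS = {"power": 0.60, "area": 0.25, "bom": 0.15}

def fom(arch: dict, reference: dict) -> float:
    """Score one AoN architecture against a reference design (lower is better)."""
    return sum(w * arch[k] / reference[k] for k, w in WEIGHTS.items())

# Hypothetical metric values for illustration only
ref = {"power": 1.0, "area": 1.0, "bom": 1.0}   # baseline design
mux = {"power": 0.8, "area": 1.1, "bom": 0.9}   # e.g. supply multiplexer
ntv = {"power": 0.5, "area": 1.3, "bom": 1.0}   # e.g. near-threshold library
```

With these made-up numbers the near-threshold option would score about 0.78 against roughly 0.89 for the multiplexer, reflecting the heavy weight placed on power.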


We can see a real example, with three cases of Bluetooth LE chips (BLE1, BLE2, BLE3), in the picture above. In all three cases the active current is the same (the same BLE function is integrated); only the sleep current differs, at 10, 1 and 0.2 µA respectively. If you consider a system active only 1% of the time (about 15 min per day), the battery autonomy varies from 208 days (10 µA sleep) to 260 days (0.2 µA sleep). The difference is much more impressive with a system active only 1 minute per day: in this case, the battery autonomy can reach up to 5 years for the lowest sleep current (0.2 µA), or 3 times more than the 1.7 years achieved with the highest sleep current (10 µA).
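The arithmetic behind such autonomy figures is easy to sketch. The battery capacity and active current below are illustrative assumptions, not Dolphin's actual BLE figures, chosen only to land in the same ballpark as the numbers quoted above:

```python
# Hedged sketch: battery autonomy from duty cycle, active and sleep currents.
# The 225 mAh capacity and 3.5 mA active current are assumed values for
# illustration, not Dolphin's measured BLE figures.

def autonomy_days(capacity_mah: float, active_ma: float,
                  sleep_ua: float, duty: float) -> float:
    """duty = fraction of time spent active; returns battery life in days."""
    avg_ma = duty * active_ma + (1 - duty) * sleep_ua / 1000.0
    return capacity_mah / avg_ma / 24.0

# ~1% duty cycle (about 15 minutes per day) for three sleep currents
for sleep_ua in (10.0, 1.0, 0.2):
    print(f"{sleep_ua} uA sleep -> {autonomy_days(225, 3.5, sleep_ua, 0.01):.0f} days")
```

With these assumed values the 10 µA case works out to roughly 209 days, close to the 208 quoted above; as the duty cycle shrinks toward one minute per day, the sleep current completely dominates the average draw.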

Dolphin will hold a live webinar, “Recipe to consume less than 0.5 µA in sleep mode”, on May 23 at 9:00 AM PDT for the Americas and on June 1st at 11:00 AM CEST for Europe and Asia. The webinar targets SoC designers wanting to learn how to quickly implement ultra-low-power (ULP) techniques, using proven recipes.

You can access the recording of this webinar by registering on MyDolphin, the private space of Dolphin Integration’s website:
https://www.dolphin-integration.com/index.php/my_dolphin/login

By Eric Esteve from IPNEST
More about the various architectures for implementing ultra-low-power solutions:
https://www.dolphin-integration.com/index.php/solutions/low-power-always-on-panoply#summary


Webinar – Low Power Circuit Sizing for IoT

by Tom Simon on 05-19-2017 at 7:00 am

Optimizing analog designs has always been a difficult and tricky process. Designing for IoT applications has only made this more difficult with the added importance of minimizing power. Unlike other circuit parameters, it is not easy to specify power as a design goal when using equations. Power is a resultant property and must be optimized by heuristically tuning circuit parameters while not dropping out of spec on the other design requirements.

Add to this the need for chips to function across a large range of environments – from freezing cold to direct sun outdoors, or on widely variable supply voltages – and the design challenge only becomes greater. MunEDA has an approach that uses advanced numerical methods to automate analog circuit optimization. Instead of a painful trial-and-error process involving many discrete calculation steps, they have harnessed multivariable optimization in conjunction with SPICE to seek solutions over the design space. It even works well with second-order circuit effects and with devices operating across their full range, not just weakly or strongly saturated.
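As a rough illustration of the framing (not MunEDA's actual algorithm, and with made-up stand-ins for what would really be SPICE-evaluated circuit metrics), constrained sizing can be posed as minimizing power under a penalty for any spec violation:

```python
# Toy sketch only: NOT MunEDA's algorithm. power() and gain() are hypothetical
# stand-ins for SPICE-evaluated circuit metrics.
import random

def power(w: float, l: float) -> float:
    return w / l                       # hypothetical power model

def gain(w: float, l: float) -> float:
    return 10.0 * w * l                # hypothetical gain model

def objective(w, l, gain_spec=5.0, penalty=100.0):
    # Heavily penalize any spec violation so in-spec points always win
    violation = max(0.0, gain_spec - gain(w, l))
    return power(w, l) + penalty * violation

def size_circuit(steps=5000, seed=1):
    """Minimize power subject to gain >= spec, here by naive random search."""
    rng = random.Random(seed)
    best = None
    for _ in range(steps):
        w, l = rng.uniform(0.1, 10.0), rng.uniform(0.1, 10.0)
        score = objective(w, l)
        if best is None or score < best[0]:
            best = (score, w, l)
    return best
```

Real sizing tools replace the naive random search with deterministic multivariable optimization and evaluate each candidate through SPICE, but the idea of searching the design space while staying in-spec on all other requirements is the same.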

To help more people understand their approach and solution, MunEDA is hosting a webinar on the topic of “Low Power Circuit Sizing for IoT” on June 1st that will provide an overview of how their WiCkeD Tool Suite chews through optimization problems for analog chips, especially those targeted for IoT.

The webinar is intended for circuit designers, design managers, CAD engineers and EDA managers. The presenter will be Dr. Michael Pronath, Vice President Products & Solutions.

Here is their outline for the talk:

  • Why is multi-objective automated circuit sizing challenging for IoT designs?
  • How can MunEDA automated sizing tools be leveraged to achieve high-performance and reliable IoT designs?
  • How are statistics incorporated into sizing to account for device mismatch and process centering?
  • What is the technology behind the MunEDA automated sizing environment (with enough math/statistics to be interesting but not overwhelming)?
  • How can MunEDA sensitivity-based tools help test and validate intuition about my IoT designs?

Here is the link for signing up for this event, which will be held on June 1st at 10 AM PDT.


About MunEDA
MunEDA develops and licenses EDA tools and solutions that analyze, model, optimize and verify the performance, robustness and yield of analog, mixed-signal and digital circuits. Leading semiconductor companies rely on MunEDA’s WiCkeD tool suite – the industry’s most comprehensive range of advanced circuit analysis solutions – to reduce circuit design time and achieve maximum yield in their communications, computer, memory, automotive and consumer electronics designs. Founded in 2001, MunEDA is headquartered in Munich, Germany, with offices in Sunnyvale, California, USA (MunEDA Inc.), and leading EDA distributors in the U.S., Japan, Korea, Taiwan, Singapore, Malaysia, Scandinavia, and other countries worldwide. For more information, please visit MunEDA at www.muneda.com/contacts.php.


Understanding ISO 26262 Compliance for Automotive Suppliers

by Daniel Payne on 05-18-2017 at 12:00 pm

The semiconductor, IP, software and EDA industries are all focusing on the growing automotive market because of its electronic content, size and growth. There are long-time suppliers to the automotive industry, and also first-time vendors that are launching something new every week for electronics in automotive. So where do you go to get up to speed most quickly on automotive requirements like safety standards? One way is to attend a free lunch and learn event planned for next week:

  • What: Understanding ISO 26262 Compliance for Automotive Suppliers
  • When: Tuesday, May 23rd, 2017 from 11:30AM to 1PM PDT
  • Where: Hyatt Regency Santa Clara, 5101 Great America Parkway, Santa Clara, CA 95054

Presenters

You’ll hear from two companies at this lunch and learn event:

  • Industry safety certification expert from TÜV SÜD
  • Requirements management expert from Jama Software

Who Should Attend

Any member of an organization who is currently working on or interested in the development of products for the automotive marketplace.

What You’ll Learn

How to navigate the development challenges of ISO 26262 when you are creating products for automotive customers.

Description

Developing products for the Automotive market has its challenges – building products your customers love, getting to market quickly, etc. However, those challenges are further complicated by functional safety standards ISO 26262 and regulatory overhead. Companies new to the automotive market seeking to capitalize on the growth opportunity are quickly confronted by unfamiliar safety standards like ISO 26262 and often struggle to get started. In addition, companies more familiar with the automotive industry struggle to keep up with changing standards and are constantly seeking ways to minimize risk in their process while saving time and money.

Join us on May 23rd for a lunch and learn with industry safety certification expert TÜV SÜD and industry-leading requirements management organization Jama Software, as we share best practices for maintaining a competitive edge while minimizing risk, saving time and money, and adhering to today’s functional safety standards like ISO 26262.

Agenda

  • Overview of ISO 26262
  • Common challenges to ISO 26262 compliance
  • Traceability to support ISO 26262
  • How Requirements Management can ease the burden of ISO 26262

Registration
The event is free, but you need to register online, and space is limited.


Two-Factor Authentication on the Edge

by Bernard Murphy on 05-18-2017 at 7:00 am

Two-factor authentication has become commonplace for those of us determined to keep the bad guys at bay. You first request entry to a privileged site with a username/password; the site then sends a code to your mobile device that you must enter to complete your admission (there are other second-factor methods, but this one is probably the most widely known). In the escalating security war, this approach too will be compromised at some point, but for now it is certainly much more secure than single-factor authentication.


Texting a code to your phone works well for human-mediated authentication, but it has obvious challenges for IoT devices, not only in how the second factor would be handled but also because these devices are typically resource-constrained. Yet this level of security is even more important in devices monitoring and managing critical infrastructure, such as the power grid or transportation systems, domains where breaches could have enormous impact. Security of this type becomes particularly important when looking at remote provisioning, for software updates for example.

A team sponsored by A*STAR (Singapore) has come up with a clever way to provide similar (or better) levels of authentication in edge devices. Most two-factor approaches rely on the second factor being something you (as a privileged user) know or have, as a possession or intrinsic characteristic, which would be difficult for an attacker to duplicate. Short of very advanced (and low power) levels of AI, it is probably over-optimistic to expect IoT edge devices to “know” anything and what they “have” doesn’t seem any more secure than an embedded password. But there is something quite unique and very challenging to replicate about each device – the history of communication it has had with the host system.

Naturally the full representation of that communication would be far too massive to store on the edge device (and far too slow to check). Instead, the method works as follows. First, the nature of second factor authentication is for the edge-device (in this case) to issue a challenge to the server to prove that it knows some specific fact. The server, which they call P, must generate a proof which it returns to the device, called V, which then verifies the response. The strength of the proposed method relies on the server having fast access, through big-data methods, to all the history (or a significant percentage of the history) of communication with the device, something that would be very expensive for an attacker to replicate and would greatly increase chances of detection if an attacker were to attempt to collect that information. And as that history continues to evolve, even a successful attack at one point in time would degrade in effectiveness beyond that time.

The server side of this method is easy to understand – all that history data is effectively a giant and evolving key which should be prohibitively expensive to attack. The tricky and clever part is how this works on the device V. Here the paper gets quite complex, so you’ll have to rely on my summary, or read the paper yourself (and please let me know if I got it wrong). V stores only a finite and evolving subset derived from historical data, converted through a function into tags, which are also communicated back to P as items are selected by V to be in that set.

On authentication (I’ll skip a bunch of details here for simplicity), V issues a challenge to P in the form of a random subset of indices on this set of tags. P runs a function which provides a proof that in essence shows it knows the historical data associated with those tags. P is not simply returning the tags corresponding to those indices; this is a variant on a proof of retrievability without having to return the underlying data. V then runs a different function on the returned proof and that function must return a true value for each index in the challenge.
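The flow above can be sketched in a much-simplified toy form. This is not the paper's actual construction, which uses a proof-of-retrievability variant so that P proves knowledge of the underlying data without replaying tags; here the proof is simply a nonce-bound digest over the challenged tags, and all names and parameters are illustrative:

```python
# Toy sketch of the history-based second factor (NOT the paper's scheme).
import hashlib, os, random

def tag(item: bytes) -> bytes:
    # V stores only a short digest ("tag") per sampled history item
    return hashlib.sha256(item).digest()

class Server:                                  # P: holds the full history
    def __init__(self, history):
        self.history = history
    def prove(self, indices, nonce):
        h = hashlib.sha256(nonce)
        for i in indices:                      # recompute tags from raw data
            h.update(tag(self.history[i]))
        return h.digest()

class Device:                                  # V: stores tags for a sampled subset
    def __init__(self, sampled):               # sampled: {index: raw item}
        self.tags = {i: tag(m) for i, m in sampled.items()}
    def challenge(self, k=4):
        # Random indices over the stored tag set, plus a fresh nonce
        return random.sample(sorted(self.tags), k), os.urandom(16)
    def verify(self, indices, nonce, proof):
        h = hashlib.sha256(nonce)
        for i in indices:
            h.update(self.tags[i])
        return h.digest() == proof

history = [f"msg-{i}".encode() for i in range(1000)]
server = Server(history)
device = Device({i: history[i] for i in range(0, 1000, 10)})
indices, nonce = device.challenge()
assert device.verify(indices, nonce, server.prove(indices, nonce))
```

Even this toy version captures the key property: an attacker who has not observed the communication history cannot answer a fresh challenge, and because the sampled set evolves over time, a one-time compromise degrades in value.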

That’s the 10-cent version of a more complex explanation. The authors also provide a proof of correctness, a detailed security analysis and an analysis of the resilience of the method to leakage, none of which I am going to attempt to explain here. They show that as the size of the stored subset (on V) increases, the likelihood of a successful attack decreases exponentially, and that even if as much as 70% of the subset is compromised, the likelihood of a successful attack still decreases exponentially with increasing subset size, though obviously more slowly.

You can learn more about the approach HERE or HERE.


Cybersecurity in the World of Artificial Intelligence

by Matthew Rosenquist on 05-17-2017 at 12:00 pm

Artificial Intelligence (AI) is coming. It could contribute to a more secure and rational world or it may unravel our trust in technology. AI holds a strong promise of changing our world and extending compute functions to a more dominant role of directly manipulating physical world activities. This is a momentous step where we relinquish some level of control for the safety of ourselves, family, and prosperity. With the capabilities of AI, machines can be given vastly more responsibility. Driving our vehicles, operating planes, managing financial markets, controlling asset transactions, diagnosing and treating ailments, and running vast electrical, fuel, and transportation networks are all within reach of AI. With such power, comes not only responsibility, but risks from those seeking to abuse that power. Responsibility without trust can be dangerous. Where will cybersecurity play in our world where learning algorithms profoundly change and control key aspects of our future?


Technology is a Tool
AI, for all its science fiction undertones, is about finding patterns, learning from mistakes, breaking down problems, and adapting to achieve specific goals. AI is a series of incredible logic tools that allow methodological progression in data processing. It sounds complex, but when distilled to its base components, it becomes simple to grasp. In practice, it could be finding the most optimal route to a location, matching biometrics to profiles, interpreting the lines of the road to keep a vehicle in the proper lane, distilling research data to identify markers for diseases, or detecting brewing volatility situations in markets or people. Vast amounts of data are processed in specific ways to distill progressively better answers and models.

The real importance is not in how AI can find patterns and optimal solutions, but rather what the world can do with those capabilities. Artificial Intelligence will play a crucial role in a new wave of technology that will revolutionize the world.


Every day we get closer to AI systems that can transport people across town or across the continent in autonomous cars, accelerate research for cures to diseases, locate hidden reserves of energy and minerals, and predict human conditions like crime, heart attacks, and social upheaval. AI systems are expected to improve the lives of billions of people and the condition of the planet in many different ways, including detecting leaks in pipes to reduce waste and avoid environmental disasters, optimizing crop yields to reduce starvation, configuring manufacturing lines for efficient automation, and identifying threats to people’s health and safety.

Learning machines will contribute to computers performing actions more efficiently and accurately than humans. This will foster trust and lead to more autonomy. It is not just cars. Think medical diagnosis and treatment, financial systems, governmental functions, national defense, and hospitality services. The usages are mind-boggling and near limitless. The new ways we use these capabilities will themselves create even more innovative opportunities to use AI. The next generation may focus on the monitoring, management, control, provisioning, improvement, audit, and communication of other, less capable AI systems. Computers watching computers. The cycle will reinforce and feed itself as complexity increases and humans become incapable of keeping pace.

As wondrous as it is, AI is still just a technology. It is a tool, albeit a very powerful and adaptable one. Here is the problem: tools can be wielded for good or for malice. This has always been the case, and we just cannot change the ways of the world. As powerful as AI is, the amount of risk grows in direct proportion to the benefits. When value is created, attackers are attracted and it becomes a target. It might be a hacker, online criminal, nation state, hacktivist, or any other type of threat agent. Those who can steal, copy, control, or destroy something of value have power. AI will be a very desirable target for those who seek power.


From Data Processing to Control of the Physical World
Computers are masters of data. They can do calculations, storage, and all manner of processing extraordinarily well. For a very long time the data and information generated by computers were largely there to help humans be better informed and make decisions. There are other purposes of course: entertainment, communications, etc. But the point is there have been specific limits. Computer outputs went mostly to a screen, a printer, or another computer. Controlling things in the physical world takes quite a sizable amount of thinking to do right. In many cases, we simply don’t trust computers to deal with unexpected and complex situations.

Modern airliners have automatic settings which can fly the plane. But we all feel much more comfortable with a human in the cockpit, even if they don’t do much but enjoy the ride. They are our failsafe, one with an irreplaceable stake, just like the passengers, in arriving safely at the destination. Humans, although slow compared to computers, fallible in judgement, and prone to unpredictability in performance, still have a trusted reputation for keeping people safe and rising when needed to adapt to changing conditions. They simply are better at critical oversight of incredibly complex, ambiguous, and unpredictable situations, especially when self-interests are involved.

AI may challenge that very concept.
It will likely be proven on the roads first. Autonomous systems will be designed to reduce traffic congestion, avoid accidents, and deliver passengers by the most efficient route possible. Drivers are notoriously bad around the world. Sure, a vast majority of trips end in expected success, but many do not. Tremendous resources of time, fuel, and patience are wasted with inefficient driving. Autonomous vehicles, powered by various AI systems, will eventually statistically prove to be significantly better at driving. So much so, it could revolutionize the insurance industry and create a class system on the roadways. Autonomous vehicles can travel in close chevrons at high speed, while human drivers will be greatly limited in speed and efficiency.

Such cases will open the world to computers which will be allowed, even preferred, to control and manipulate the physical world around and for us.


It could be as simple as a smart device to mow the lawn. An AI enhanced autonomous lawnmower could efficiently cut the grass, avoid sprinklers, not come near the household pets, detour around newly planted flowers, be respectful of pedestrians, and turn off when children approach or their toys get too close. Such a device will also monitor its performance and act on maintenance needs by proactively ordering needed parts and connecting itself to the power grid when it needs to recharge. It may also find the best place to park itself in the tool shed or garage, and only return when it determines the biological grass again requires upkeep by a smart technological overseer. The trust that AI brings, in reliably making smart decisions, will allow digital devices to manipulate, interact, and control aspects of the physical world.



Malice in Utopia
The same sensors, actuators, and embedded intelligence could also be a recipe for disaster. Faulty, damaged, or obscured sensors might feed incorrect data to the AI brain, leading to unintended results. Even worse, malicious actors could alter inputs, manipulate AI outputs, or otherwise tinker with how the device responds to legitimate commands. Such acts could lead to runaway mowers chopping down the neighbor’s prized petunias or pursuing pets with messy consequences. A single act could end badly. But what if a truly malevolent actor went so far as to hijack an entire neighborhood of these devices, like a botnet under the control of an aggressive actor? Still, the rise-of-the-suburban-lawnmowers scenario seems a bit silly.

What if we aren’t talking about lawnmowers? What if, instead, it were autonomous cars, buses, and trains that could be controlled by hackers? Or medical devices in emergency rooms, or those implanted in people, such as defibrillators, insulin pumps, and pacemakers? Smart medical devices could save lives with newfound insights into patient needs, but they could also put people at risk if they make a mistake or are controlled maliciously. What if hostile nations were to manipulate AI systems controlling valves on pipelines, electrical switches in substations, pressure regulators in refineries, dam control gates, and emergency vents managing the safety of power generation facilities, chemical plants, and water treatment centers? In the future, artificial intelligence will greatly benefit the operation, efficiency, and quality of these and other pieces of infrastructure critical to daily life. But where there is value, there is always risk.

The Silver Lining

What can be broken can also be protected with AI. We live in a world full of risks. It does not make sense to eliminate all of them, but rather to manage them to an acceptable level. Technology provides a myriad of benefits and opportunities, but it also brings new risks to manage. The goal is to find the right balance of controls, factoring in cost and usability, so that the benefits of new opportunities are realized while the risks are kept acceptably low. These goals, and the enormous amounts of data required to understand them, represent perfect conditions for AI to thrive and find optimal solutions.

Intelligent systems could deliver new capabilities to detect insiders and disgruntled employees, identify stealthy malware, dynamically reconfigure network traffic to avoid attacks, scrub software to close vulnerabilities before they are exploited, and mitigate large-scale, complex cyberattacks with great precision. AI could be the next great equalizer for defensive capabilities to support technology growth and innovation.

The Unanswered Questions
This is not the end of the discussion; rather, it is the beginning of a journey. AI is still in its infancy, as are the technologies and usages that will incorporate it to enhance systems and services. There are many roads ahead, some more dangerous than others. We will need a guide to understand how to assess the risks of AI development and evolution, how to protect AI systems, and, most importantly, how to secure the downstream use-cases AI enables.

Some questions we must ponder:

  • What risks arise when the integrity of an AI system is undermined, its results are manipulated, its data exposed, or its availability denied?
  • Who is liable if AI makes poor decisions and someone gets injured?
  • How will regulations for privacy and IP protection apply to processed and aggregated data?
  • What level of transparency and oversight should be instituted as best practices?
  • How should input data, AI engines, and outputs be secured?
  • Should architectures be designed to resist replay and reverse-engineering attacks, and at what cost?
  • What fail-over states and capabilities are desirable against denial-of-service attacks?
  • How do we measure risks and describe attacks against AI systems?
  • What usages of AI will be targeted first by cyber attackers, organized criminals, hacktivists, and nation states?
  • How can AI be used to protect itself and interconnected computer systems from malicious and accidental attacks?
  • Will competition in AI systems drive security to an afterthought or will market leaders choose to proactively invest in making digital protections, physical safety, and personal privacy a priority?


Dawn of a New Day…
It is time the cybersecurity community begins discussing the risks and opportunities that Artificial Intelligence holds for the future. It could bring tremendous benefits and, at the same time, unwittingly open a Pandora’s box for malicious attackers. We may see innovators wield AI in incredible new ways to protect people, assets, and new technologies. Governments may be compelled to step in and begin regulating the usage, protections, transparency, and oversight of AI systems. Standards bodies will also likely be involved in setting guidelines and establishing acceptable architectural models. Thought-leading organizations will likely begin to incorporate forward-thinking cybersecurity controls that protect the digital security of systems, the physical safety of users, and the personal privacy of people.

I plan on exploring the intersection of cybersecurity and Artificial Intelligence in upcoming blogs. It is a worthy topic we all should be contemplating and discussing. This topic is virgin territory and will take the collective ideas and collaboration of technologists, users, and security professionals to properly mature. Follow me to keep updated and add your thoughts to the conversation.

Right now, the future is unclear. Those with insights will have an advantage. Now is the right moment to begin discussing how we want to safely architect, integrate, and extend trust to intelligent technologies, before unexpected outcomes run rampant. It is time for leaders to emerge and establish best practices for a secure cyber world that benefits from AI.

Interested in more? Follow me on LinkedIn, Twitter (@Matt_Rosenquist), Information Security Strategy, and Steemit to hear insights and what is going on in cybersecurity.


Webinar – Voice Interfaces of the Future

Webinar – Voice Interfaces of the Future
by Bernard Murphy on 05-17-2017 at 7:00 am

In our favorite Sci-Fi or fantasy movies or series we routinely expect voice-control of the many devices encountered in those stories. This seems natural because that’s how we most easily communicate our needs and intent (short of direct brain connections, though Elon Musk is apparently working on that). Typing on a keyboard (as I am now) feels hopelessly clunky, a relic of the 20th century that I for one can’t wait to shed, as soon as voice, gesture and other natural interfaces become sufficiently trustworthy.


REGISTER NOW for Webinar on May 23rd 2017, 09:00 AM PST

We’re already seeing progress in voice recognition in the form of personal assistants like Alexa, Google Assistant and Microsoft Cortana. And we’re starting to see voice and other controls moving into the car cabin.

This is a trend that can only accelerate. How about telling your alarm clock to snooze or shut off rather than groping around to find the darn button? Telling your TV what you want to watch? Controlling home devices? Responding to emails as you drive? This stuff is already available. Getting rid of your office keyboard? Maybe a little further out, but conceivable given the depth of AI now going behind voice recognition, to get beyond word and simple phrase recognition into semantic recognition.

All this starts with voice recognition; this webinar will introduce you to how you can make the future happen today by adding voice recognition to your designs. Join CEVA and Alango Technologies to learn about this forefront in Human-Machine Interfacing.

REGISTER NOW


Summary

This webinar covers the current state and future possibilities of voice interfaces. It surveys the technologies that have enabled current proliferation of voice interfaces but also takes a critical look at the faults and drawbacks of current implementations. Finally, it explores the existing, emerging and future technologies that will eventually generate a revolution in the way we interact with machines.

Turning yesterday’s sci-fi into today’s reality, voice interfaces are gaining traction but still haven’t reached their peak. Enabling technologies are all around and can offer smarter and more efficient applications with more natural and intuitive interfaces. An always-on voice interface with human-like intelligence, capable of understanding intonations and inflections, responding to context and anticipating our needs and desires may be much closer than most people think.

Join CEVA experts to learn about:
· Natural Human-machine interfaces (HMI) of the future
· Far-field voice pickup and its enabling technologies
· Under the hood of smart speakers
· Human-like virtual assistants
Target Audience

Audio system engineers targeting natural human-machine interfaces, and marketing managers looking to voice-enable their smart home, mobile, and automotive products.

Speakers


Eran Belaish
Product Marketing Manager, Audio/Voice, CEVA


Robert Schrager
Director of sales and marketing, Alango Technologies

About CEVA, Inc.

CEVA is the leading licensor of signal processing IP for a smarter, connected world. We partner with semiconductor companies and OEMs worldwide to create power-efficient, intelligent and connected devices for a range of end markets, including mobile, consumer, automotive, industrial and IoT. Our ultra-low-power IPs for vision, audio, communications and connectivity include comprehensive DSP-based platforms for LTE/LTE-A/5G baseband processing in handsets, infrastructure and machine-to-machine devices, advanced imaging, computer vision and deep learning for any camera-enabled device, audio/voice/speech and ultra-low power always-on/sensing applications for multiple IoT markets. For connectivity, we offer the industry’s most widely adopted IPs for Bluetooth (low energy and dual mode), Wi-Fi (802.11 a/b/g/n/ac up to 4×4) and serial storage (SATA and SAS). Visit us at www.ceva-dsp.com and follow us on Twitter, YouTube and LinkedIn.


Electrothermal Analysis of an IC for Automotive Use

Electrothermal Analysis of an IC for Automotive Use
by Daniel Payne on 05-16-2017 at 12:00 pm

Automotive ICs have to operate in a very demanding environment in terms of both temperature and voltage ranges, along with the ability to withstand g-forces and be sealed from the elements. Not an easy design challenge. For many consumer ICs we see output drive currents on the IO pins measured in mA; in automotive, however, if you want your IC to drive something like a DC motor, you can expect to see values in the amp range, a big difference. Engineers at an automotive IC design group at Toshiba recently faced the challenge of designing a single-channel brushed DC motor driver chip with these specifications:

  • PWM mode: H-bridge driver
  • Output current: 5A
  • Low Ron: < 0.45 ohms
  • Operating voltage: 4.5V to 28V
  • Operating temperature: -40C to 125C

Here’s a block diagram of the Toshiba TB9051FTG chip:

When these large output driver transistors turn on and start to draw current, the heat level on the chip next to these transistors begins to rise, which in turn affects transistor performance. If your circuit simulator doesn’t take into account the thermal effects of transistors driving large currents, then your simulation results will be overly optimistic and probably won’t meet your tight specifications. Toshiba uses DMOS (Double-diffused Metal Oxide Semiconductor) transistors for the high-drive output pins, and the commercial SPICE simulator they used for fully-coupled electrothermal simulation is Eldo, from EDA vendor Mentor Graphics (a Siemens business).

Having a simulator that can simultaneously account for electrical and thermal properties is really mandatory for the design of this type of automotive chip. Here’s a block diagram showing the EDA tool flow for electrothermal analysis:

Inputs to the SPICE simulator are an extracted layout netlist, Verilog-A netlist and a thermal netlist. The Eldo simulator then produces a transient analysis showing current values and device temperature as a function of time. On this chip is a thermal unit which controls the output DMOS transistors as shown below:

The blue arrows in the block diagram show electrical data flow, while the red arrow shows thermal flow. The Eldo circuit simulator solves the electrical and thermal equations simultaneously, giving you results that are both accurate and fast. Other approaches, which use a relaxation technique to couple the electrical and thermal solutions, are far less efficient and have much longer run times. So with the concurrency in Eldo you get transient analysis results that are accurate and fast for both device temperature and current drive:

Looking at the current drive value in red, we see a large increase in current as the DMOS transistor turns on in the middle of the waveform, but as the transistor heats up, this drive current tapers off to a value below its peak.

Related blog – Mentor DefectSim Seen as Breakthrough for AMS Test

Electrothermal Example

To get a grasp of how an electrothermal simulation works with Eldo let’s look at a simplified example with just three transistors that are thermally coupled:

The DMOS output transistors, shown on the left as XMH3 and XMH4, are controlled by pulse width modulation (PWM) signals, and a current sensor transistor XM_ISD1 controls the input signal of the power DMOS devices. Simulating this example using an analog solver without any thermal coupling produces a DMOS transistor temperature (blue) that doesn’t change even as the currents toggle (pink and red):


Simulating again with self-heating effects turned ON produces a very different result, where we can see the transistor temperatures dynamically moving (blue and yellow) as the currents toggle (pink and red):

Notice that the current shown in the middle curve (pink) rises to a high value and then tapers below its peak, just what we’d expect to see as the higher temperatures begin to reduce the drive current levels.
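The self-heating feedback described above can be sketched in a few lines of code. This is a minimal illustration with invented device parameters (thermal resistance, capacitance, degradation exponent are all assumed), using a simple explicit Euler loop rather than the simultaneous solver a tool like Eldo employs: drive current degrades as the channel heats, and the dissipated power drives a lumped thermal RC network.

```python
# Minimal electrothermal coupling sketch. All device parameters are invented
# for illustration; a production simulator like Eldo solves the coupled
# electrical and thermal equations simultaneously, not with this simple
# explicit Euler loop.

def simulate(steps=2000, dt=1e-5):
    T_amb = 27.0        # ambient temperature, degC
    R_th = 40.0         # lumped thermal resistance, K/W   (assumed)
    C_th = 2e-3         # lumped thermal capacitance, J/K  (assumed)
    I0 = 5.0            # nominal drive current at T_amb, A
    alpha = 1.5         # mobility degradation exponent    (assumed)
    V_ds = 0.5          # on-state drain-source drop, V

    T = T_amb
    currents, temps = [], []
    for n in range(steps):
        on = n > steps // 4   # device turns on a quarter of the way in
        # Electrical side: drive current degrades as the channel heats up
        I = I0 * ((T_amb + 273.15) / (T + 273.15)) ** alpha if on else 0.0
        # Thermal side: dissipated power drives a lumped thermal RC network
        P = I * V_ds
        T += dt * (P - (T - T_amb) / R_th) / C_th
        currents.append(I)
        temps.append(T)
    return currents, temps

currents, temps = simulate()
# Current peaks at turn-on, then tapers as the device heats up
```

Running the loop reproduces the qualitative behavior in the waveforms: the current jumps to its peak at turn-on, then tapers as the device temperature climbs toward its thermal steady state.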

Simulated Versus Silicon Measurements

The ultimate accuracy test of any SPICE circuit simulator is measuring silicon results on the bench and comparing them against the simulated values. Toshiba engineers did this comparison and found that Eldo produced current results within 1.5% of measurements taken with a Tektronix MSO4054 oscilloscope during two time windows: 0 to 4ms (left) and 86 to 90ms (right).

So now we know that Eldo does accurate electrothermal simulations, but what about simulation speed? Mentor offers two speed grades of their circuit simulator, and on this particular chip, simulating 90ms of transient time, the electrothermal run-time comparisons are:

  • Eldo with 1 core, 38h 48m
  • Eldo Premier with 1 core, 11h 50m
  • Eldo with 8 cores, 16h 38m
  • Eldo Premier with 8 cores, 7h 31m
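For perspective, the quoted run times translate into speedup factors with a little arithmetic (only the numbers listed above are used):

```python
# Simple arithmetic on the quoted run times to get speedup factors.
def minutes(hours, mins):
    return hours * 60 + mins

runs = {
    "Eldo, 1 core":          minutes(38, 48),
    "Eldo Premier, 1 core":  minutes(11, 50),
    "Eldo, 8 cores":         minutes(16, 38),
    "Eldo Premier, 8 cores": minutes(7, 31),
}

baseline = runs["Eldo, 1 core"]
speedups = {name: baseline / t for name, t in runs.items()}
# Eldo Premier on 8 cores comes out roughly 5x faster than single-core Eldo
print({name: round(s, 2) for name, s in speedups.items()})
```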

Summary

If you need accurate and fast electrothermal simulation results, use Eldo; for faster results, try Eldo Premier; and for the fastest run times, use Eldo Premier with 8 cores. The chip designers at Toshiba have shown that you can expect simulated results within 1.5% of measured silicon, so Eldo is a very accurate electrothermal simulator, ready to go on any circuit with high current drive and localized heating within the IC.

White Paper

There’s a 10-page white paper on this topic online that requires a brief registration form.


High Frequency Trading and EDA

High Frequency Trading and EDA
by Bernard Murphy on 05-16-2017 at 7:00 am

Pop quiz – name an event at which an EDA vendor would be unlikely to exhibit. How about The Trading Show in Chicago, later this month? That’s trading as in markets, high-frequency trading, blockchain and all that other trading-centric financial technology. This is another market, like cloud, where performance is everything and returns easily justify investments in hardware design.


Of course, what these people are aiming to build is not smartphones or routers in the very latest semiconductor processes. They’re much more interested in personalized design – specialized applications (in this case high-frequency trading – HFT) built for proprietary advantage and never intended to be offered in the open market. And like their compatriots in the cloud, they’re very attracted to FPGA-based design for its flexibility, the rapidly increasing capability in those platforms and (comparatively) low development cost.

In automated trading, latency in trade information can have a huge impact on money made or lost. Simply aggregating ticker feeds from as many as 200 markets, each supporting its own format, was a task historically handled by software but with some latency in managing and massaging those inputs. Replacing that software with FPGA-based feed management can reduce latency in this stage by 5-10X. As in the cloud, there’s still a balance between the flexibility of software and the performance of hardware; some processing remains on CPUs, while more stable functions which need to be as fast as possible move to FPGAs. Solutions of this type are already in production use.
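To make the feed-handling task concrete, here is a toy sketch of the normalization step: each exchange speaks its own message format, and the feed handler rewrites every message into one common record. The formats and field names below are invented for illustration; real exchange protocols are far more involved, and the production solutions described above do this parsing in FPGA hardware rather than software.

```python
# Toy feed-normalization sketch. Message formats and field names are
# invented for illustration; real exchange protocols are far more involved.
from typing import Callable

def parse_exchange_a(msg: str) -> dict:
    # Hypothetical format A: "SYMBOL|price_dollars|size"
    sym, price, size = msg.split("|")
    return {"symbol": sym, "price": float(price), "size": int(size)}

def parse_exchange_b(msg: str) -> dict:
    # Hypothetical format B: "SYMBOL,size,price_cents"
    sym, size, cents = msg.split(",")
    return {"symbol": sym, "price": int(cents) / 100.0, "size": int(size)}

# One parser per feed; the handler fans messages out to the right one
PARSERS: dict[str, Callable[[str], dict]] = {
    "A": parse_exchange_a,
    "B": parse_exchange_b,
}

def normalize(feed_id: str, msg: str) -> dict:
    return PARSERS[feed_id](msg)

ticks = [normalize("A", "ACME|10.25|100"), normalize("B", "ACME,200,1030")]
# Both messages now share one record layout regardless of source format
```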

It’s not just about quickly consolidating ticker data. The trades themselves must be decided much faster than a human trader could respond. Deciding which trades to make requires intelligence, unsurprisingly using machine learning (ML) methods. That could be an opportunity for FPGAs, especially for ML algorithms with high levels of parallelism that benefit more from FPGAs than from multi-core CPUs, but lack the neural net characteristics which fit so well to GPUs. However, I couldn’t find anything suggesting FPGAs are being used in this way today.

Risk management and order execution are other aspects of automated trading where FPGAs can help. While ML may suggest a trade, risk management monitors these suggestions in the background to ensure they do not fall outside trader-advised windows. Order execution, consolidating planned trades and fanning them out to the relevant markets to execute buys and sells, is yet another area where FPGAs naturally offer advantages in parallelism (sort of the inverse of the ticker aggregation task). In both these cases, FPGA-based solutions are in production today.
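The risk-management check described above can be illustrated with a small sketch: an ML-suggested trade passes only if it stays inside the trader-advised window. The field names and limits here are hypothetical; production systems enforce many more constraints, and run these checks in FPGA logic on the trade path.

```python
# Hypothetical risk-window check: an ML-suggested trade passes only if it
# stays inside trader-advised limits. Field names and limits are invented.
from dataclasses import dataclass

@dataclass
class RiskWindow:
    max_notional: float   # largest allowed value for a single trade
    max_position: int     # largest allowed net position per symbol

def within_window(trade: dict, position: int, window: RiskWindow) -> bool:
    notional = trade["price"] * trade["qty"]
    new_position = position + trade["qty"]
    return (notional <= window.max_notional
            and abs(new_position) <= window.max_position)

window = RiskWindow(max_notional=1_000_000.0, max_position=10_000)
ok = within_window({"price": 50.0, "qty": 1_000}, position=2_000, window=window)
blocked = within_window({"price": 50.0, "qty": 30_000}, position=0, window=window)
```

The first trade stays inside both limits and passes; the second exceeds the notional limit and is blocked before it ever reaches a market.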


So now back to my opening question: why would an EDA vendor be exhibiting at a trading conference? FPGAs are playing a bigger role in automated trading, where billions of dollars in transactions are at stake. But these folks are traders. They have lots of mathematicians and lots of programmers, but they don’t have a lot of FPGA designers. Yet their differentiation hinges on how well their hardware performs, and that’s a moving target. Also, suppliers emerging to service these requirements, for smaller traders and retail companies serving day-traders, have the same needs. This means it has become very important to be able to design, and often re-design, these systems to squeeze out yet more advantage as competitors advance.

None of this has been lost on Aldec who, as an FPGA design verification and prototyping supplier, seem especially well positioned to take advantage of this opportunity (they apparently already have several customers in this space). They are exhibiting at the Trading Show in Chicago this year, and Louie de Luna (Marketing Director at Aldec) was interviewed by the group that puts on the show. I’ll touch on just a couple of points from that interview.


One point that may trigger apoplexy in the ASIC verification community is the momentum behind Python/cocotb for building testbenches (see here for background and a demo), rather than UVM. HFT engineers apparently don’t care about our sacred cows. They’re more than happy to switch to any solution that gets them to completed verification faster. Louie cited one example where a UVM approach took 5k lines and 30 days to get to a result that missed some bugs, while the Python/cocotb approach completed with 500 lines of code in one day and caught those bugs. I’m sure this won’t always be true, and would probably not be generally true for large designs, but it is an interesting stat in its own right. No-one is suggesting UVM be replaced by this solution for mainstream ASIC design. At the same time, UVM may have already lost the battle in HFT, and that may portend similar shifts in other personalized design flows.
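To give a flavor of why these testbenches can be so compact, here is a sketch in the cocotb style: the test is an ordinary Python coroutine that drives inputs, awaits a clock edge, and checks outputs. A pure-Python adder model (invented for this example) stands in for the DUT so the snippet runs without an HDL simulator; a real cocotb test would be decorated with `@cocotb.test()` and await `RisingEdge(dut.clk)` against simulated RTL.

```python
# cocotb-style testbench sketch. A tiny pure-Python adder model stands in
# for the DUT so the example runs standalone; the DUT and its interface
# are invented for illustration.
import asyncio
import random

class AdderModel:
    """Stand-in DUT: registers sum = a + b on each clock edge."""
    def __init__(self):
        self.a = 0
        self.b = 0
        self.sum = 0

    def clock_edge(self):
        self.sum = self.a + self.b

async def rising_edge(dut: AdderModel):
    await asyncio.sleep(0)   # yield control, as cocotb's RisingEdge trigger would
    dut.clock_edge()

async def adder_test(dut: AdderModel, iterations: int = 100):
    # Drive random stimulus, await the clock, check the result
    for _ in range(iterations):
        dut.a = random.randrange(0, 256)
        dut.b = random.randrange(0, 256)
        await rising_edge(dut)
        assert dut.sum == dut.a + dut.b, f"{dut.a} + {dut.b} != {dut.sum}"

asyncio.run(adder_test(AdderModel()))
```

The whole test fits in a handful of lines because stimulus generation, checking, and randomization are just Python, which goes some way to explaining the 500-lines-versus-5k figure quoted above.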

Louie also talked about the importance of prototyping boards being available to test designs. He stressed that different algorithms often need different resources, perhaps more DSP primitives for floating-point-intensive calculations, or multiple small memories for highly-parallelized transaction processing. It is difficult to fit all these into one solution so Aldec offers a family of boards based on Xilinx Virtex-7 and UltraScale in multiple configurations, along with the software to manage them.

As an editorial sidebar, I should add that I am becoming quite impressed by this company. They’ve been around for over 30 years and most of us have probably dismissed them as a minor player in a space dominated by the EDA giants and others close to the core of “real” EDA (I’m guilty too). But something has changed at Aldec. I have no idea what triggered this but they seem to be getting a lot more aggressive, especially in FPGA-based design and even more in supporting applications where FPGAs themselves are growing rapidly. Relevance to high-frequency trading is just one recent example. Aldec are reinventing themselves and arguably, by complementing mainstream (and some not-so-mainstream) EDA solutions with a range of application-specific prototyping (perhaps even beyond prototyping) boards, they may be redefining the span of EDA.

You can read the interview with Louie de Luna in full HERE and the latest board announcement for HFT applications HERE.