

Webinar: Getting to Accurate Power Estimates Earlier and Faster
by Bernard Murphy on 05-24-2017 at 7:00 am

Power has become a very important metric in modern designs – for mobile and IoT devices which must live on a battery charge for days or years, for datacenters where power costs can be as significant as capital costs, and for increasingly unavoidable regulatory reasons. But accurate power estimation must start from an implementation – a detailed gate-level representation with realistic interconnect values – and requires gate-level simulation data, which can take days or weeks to produce. That’s good for signoff but it can be too late to guide design changes if the power target is missed.


REGISTER NOW for this Webinar on June 1st at 10am PDT

One way to attack this problem is to do power estimation at RTL, which has advantages in flexibility but is quite a bit less accurate than gate-level estimation and is certainly not suitable for signoff. A different approach, better suited to accurate estimation, is to continue to work with the implemented gate-level netlist, but to use readily available RTL simulation data together with a mechanism to infer corresponding gate-level activity from that data. This is what you can have using Synopsys PowerReplay together with Synopsys PrimeTime PX. This webinar will show you how that can be done.
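For readers who want the intuition for why both switching activity and netlist detail matter, a standard first-order power model (textbook material, not a description of the Synopsys flow itself) looks like this:

```latex
P_{total} \;\approx\; \underbrace{\alpha \, C_{eff} \, V_{dd}^{2} \, f}_{\text{dynamic (switching)}} \;+\; \underbrace{I_{leak} \, V_{dd}}_{\text{static (leakage)}}
```

The per-net activity factor α comes from simulation, while the effective switched capacitance and the leakage current come from the implemented netlist and interconnect, which is why an accurate estimate needs realistic versions of both.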

REGISTER NOW for this Webinar on June 1st at 10am PDT



CDC Verification for FPGA – Beyond the Basics
by Bernard Murphy on 05-23-2017 at 12:00 pm

FPGAs have become a lot more capable and a lot more powerful, more closely resembling SoCs than the glue-logic we once considered them to be. Look at any big FPGA – a Xilinx Zynq, an Intel/Altera Arria or a Microsemi SmartFusion; these devices are full-blown SoCs, functionally different from an ASIC SoC only in that some of the device is programmable.


All that power greatly increases verification challenges, which is why adoption of the full range of ASIC verification techniques, including static and formal methods, is growing fast in FPGA design teams. The functional complexity of these devices overwhelms any possibility of design iteration through lab testing. One such verification problem, in which I have a little background, is analysis of clock domains crossings (CDC).

CDCs and SoCs go hand in hand since any reasonable SoC will contain multiple clock domains. There’s a domain for the main CPU (possibly multiple domains for multiple processors/accelerators), likely a separate domain for the off-chip side of each peripheral that must support a protocol which rigorously restricts clock speed options. The bus fabric communication between these devices may itself support multiple protocols each running under different clocks. Clock speeds proliferate in SoCs.

That’s why CDCs are found all over an SoC. At any place where data can cross from one of these domains to another – from the off-chip side of a peripheral to the bus, for example, or perhaps through a bus bridge – clock domain crossings exist. It is not uncommon to find at least 10 different clocks on one of these FPGA SoCs, which can imply 10,000 or more CDCs scattered across the FPGA. The “so what” here is that CDCs, if not correctly handled, can fail very unpredictably and can be very difficult to check, either in simulation or in lab testing. But if a CDC problem escapes testing, your customers are going to find it in their design, very often in the field, as an intermittent lock-up or a functional error. When the design is going into a space application or military avionics or any other critical application, this state of affairs is obviously less than desirable.

Simulation can play a role in helping minimize the chance of such failures, but it requires special checking and its value is bounded by the limited set of use-cases you can test. This has prompted significant focus on static (and formal) methods, which are use-case independent, to offer more comprehensive coverage. And in this domain, I am pretty certain that no commercial tool has the combined pedigree, technology depth and user-base offered by SpyGlass CDC, especially in ASIC design. It looks like Synopsys has been polishing the solution to extend the many years of ASIC expertise built into the tool to FPGA design teams, through collaboration with FPGA vendors and by adding methodology support for standards like DO-254.

You might reasonably question why another tool is needed, given that the FPGA vendor tools already provide support for CDC analysis. Providing some level of CDC analysis for the basic approaches to managing CDC correctness is not too difficult and this is what you can expect to find in integrated tools. But as designs and design tricks become more complex, providing useful analysis rapidly becomes much more complicated.

By way of example, integrated tools recognize basic 2-flop synchronizers as a legitimate approach to avoid metastability at crossings. But what is appropriate at any given crossing depends heavily on design context. A quasi-static signal like a configuration signal may not need to be synchronized at all; if you do synchronize you may be wasting a flop (or synchronizer cell) in each such case. Or you may choose to build your own synchronizer cell which you have established is a legitimate solution but which isn’t recognized as legitimate by the tool, so you get lots of false errors. Or perhaps you use a handshake to transfer data across the crossing. In this case, there’s no synchronizer; correct operation must be recognized through functional analysis.
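To make that baseline concrete, here is a small behavioral sketch in Python (not RTL, and not anything a CDC tool generates) of what a basic 2-flop synchronizer does: the first flop may capture an asynchronous input mid-transition and go metastable, and the second flop gives that value a full clock period to settle before downstream logic ever sees it.

```python
import random

class TwoFlopSynchronizer:
    """Behavioral model of a classic 2-flop synchronizer on the destination clock."""

    def __init__(self):
        self.ff1 = 0  # first-stage flop: may go metastable
        self.ff2 = 0  # second-stage flop: what downstream logic sees

    def clock_edge(self, async_in, setup_hold_violation=False):
        # The second flop samples the first flop's previous (now settled) value.
        self.ff2 = self.ff1
        if setup_hold_violation:
            # Metastability modeled as resolving to a random value by the next edge.
            self.ff1 = random.choice([0, 1])
        else:
            self.ff1 = async_in
        return self.ff2

# Usage: an asynchronous level change reaches downstream logic cleanly,
# a couple of destination-clock cycles later, never as an unresolved value.
sync = TwoFlopSynchronizer()
for cycle, (value, violation) in enumerate([(1, True), (1, False), (1, False)]):
    print(f"cycle {cycle}: downstream sees {sync.clock_edge(value, violation)}")
```

The point of the design-context discussion above is that this structure is only one legitimate answer; a quasi-static configuration bit, a custom synchronizer cell or a handshake can be equally valid and should not be flagged.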

Failing to handle these cases correctly quickly leads to an overwhelming number of false violations. If you must scan through hundreds or thousands of violations, you inevitably start making cursory judgements on what is suspect and what is not a problem; and that’s how real problems sneak through to production.


For many years, the SpyGlass team has worked with CDC users in the ASIC world to reduce this false violation noise through a variety of methods. One is through protocol-independent recognition, a very sophisticated analysis to handle a much wider range of synchronization methods that took many years to develop and refine (and is covered by patents).

A second aspect is in analysis of reconvergence – cases where correctly-synchronized signals from a common domain, when brought back together, can lead to loss of data. A third is in very careful and detailed support for a methodology that emphasizes up-front constraints over back-end waivers. Following this methodology ensures you will have a much more manageable task in reviewing a much smaller number of potential violations; as a result you can get to a high-confidence CDC signoff, rather than an “I hope I didn’t overlook anything”.

SpyGlass CDC will also generate assertions which you can use to check correct synchronization in simulation; this becomes important when correctness at a crossing is determined by function rather than structure, as with handshakes, bridges and other functionally-determined synchronization. And if you’re feeling especially brave, SpyGlass CDC also provides an extensive set of embedded formal analysis-based checks, which require very little formal expertise to use.

Synopsys has worked with Xilinx, Intel/Altera and Microsemi on tuning support in SpyGlass CDC for these platforms. You can check out more details and watch the webinar and a demo on the methodology HERE.



$100M China Investment for FD-SOI Ecosystem!
by Daniel Nenni on 05-23-2017 at 4:30 am

When GlobalFoundries first briefed me on 22FDX during a trip to Dresden in 2015, China was one of the first things that came to mind. The China semiconductor market was still on 28nm and FinFETs seemed far away for the majority of the Chinese fabless companies. A low cost, low power, low complexity 22nm process with a path to 12nm (12FDX) seemed like a perfect fit and as it turns out it is, absolutely.

Continue reading “$100M China Investment for FD-SOI Ecosystem!”



The eFPGA Market is Heating Up!
by Daniel Nenni on 05-22-2017 at 7:00 am

It is nice to see an emerging market successfully emerge for a change. With embedded FPGAs we are way past test chips and are now seeing tape-outs and silicon in a variety of applications. I’m not sure what the current market estimate for eFPGA is just yet, but it aligns nicely with the $30B+ microcontroller market. Market studies have the MCU market exceeding $100B in 2024, partially due to the exploding IoT segment but also due to the increasing intelligence of automotive products and the need for security in ALL of our semiconductor products, absolutely.

Global Microcontroller Market, 2014 – 2024 (USD Billion)

This brings me to the topic at hand, which is embedded FPGA pioneer Flex Logix raising $5M in Series B equity financing. The first question I asked was, “Why only $5M?” The answer of course is “Because that is all we need.” Boom… mic drop.

Here are quotes and some meat from the press release. As a receiver of MANY press releases I have to say Flex Logix releases are very informative:

“We believe that Flex Logix’s embedded FPGA has the potential to be as pervasive as ARM’s embedded processors have become today,” said Peter Hebert, Managing Partner at Lux Capital. “The company’s software and silicon are proven and in use at multiple customers, paving the way to become one of the most widely-used chip building blocks across many markets and for a broad range of applications.”

With readily-available, high-density blocks of programmable RTL in any size and with the features a customer needs, designers now have the powerful flexibility to customize a single chip for multiple markets and/or upgrade the chip while in the system to adjust to changing standards such as networking protocols. Customers can also easily update their chips with the latest deep learning algorithms and/or implement their own versions of protocols in data centers. In the past, the only way to accomplish these updates was through expensive and time-consuming new mask versions of the same chip.

“The Flex Logix platform is the best, most scalable and flexible embedded FPGA solution on the market today, delivering significant competitive advantages in time to market, engineering efficiency, minimum metal layers and high density,” said Pierre Lamond, Partner at Eclipse Ventures.

“These patented technology breakthroughs combined with an experienced management team led by Geoff Tate, the founding CEO of Rambus, strongly position the company for rapid growth in the future.”

“We now have customers with working silicon, multiple licensees and are seeing our first repeat designs,” said Tate, CEO of Flex Logix. “With a large number of customers in active, detailed evaluation of our technology for a wide range of applications, we expect significant growth of our customer base in the near term. As a result, we’re staffing up ‘ahead of the curve’ to ensure we can satisfy the demand.”

About the Flex Logix Embedded FPGA Platform
Addressing a wide range of chips in multiple markets, the Flex Logix EFLX™ platform can be used with networking chips with reconfigurable protocols, datacenter chips with reconfigurable accelerators, deep learning chips with real-time upgradeable algorithms, base station chips with customizable features for multiple markets, MCU/IoT chips with flexible I/O and accelerators, and aerospace/defense applications including rad-hard embedded FPGA. EFLX is already available for the most popular process nodes and is being ported to additional process nodes based on customer demand.

About Flex Logix
Flex Logix, founded in March 2014, provides solutions for reconfigurable RTL in chip and system designs using embedded FPGA IP cores and software. The company’s technology platform delivers significant customer benefits by dramatically reducing design and manufacturing risks, accelerating technology roadmaps, and bringing greater flexibility to customers’ hardware. Flex Logix has raised more than $12 million of venture capital. It is headquartered in Mountain View, California and has sales rep offices in China, Europe, Israel, Japan, Taiwan and Texas. More information can be obtained at http://www.flex-logix.com or follow on Twitter at @efpga.

Also read: CEO Interview: Geoff Tate of Flex Logix

You can read more about Flex Logix on SemiWiki HERE.



Calling on #IoTman to save humanity!
by admin on 05-21-2017 at 7:00 am

We, in the hi-tech community, tend to gravitate towards the technology, the API, the device, the platform, the process node, and to forget the goal behind all of those items. We have all noticed the platform wars and the cloud API struggle for #IoT market domination. Someone needs to bring the discussion back to the top level, to why we started down the road of #IoT a few years ago. Here is my contribution, happy to see yours.

There are four major challenges that #IoTman (that is, you and I in the hi-tech world) must resolve for 9 billion people to have a good life in 2050:

  • Food – basic sustenance but also trend towards western diet
  • Water – drinking, hygiene and irrigation
  • Energy consumption – motors, lighting and industrial
  • Healthcare – chronic disease management and prevention

Background data is a dime a dozen. A focused Google search returns many supporting reports and diagrams regarding the gravity of the situation for those four items.

    Food
    How do we properly feed 9 billion people? We already have the means with abundant crops and precision agriculture that is increasing yields every year. We read such news items fairly regularly now: “Grains piled on runways, parking lots, fields amid global glut”. The issue is mainly distribution and eliminating waste. How do we recover the 30% of food that is wasted en route? Roughly one third of the food produced in the world for human consumption every year — approximately 1.3 billion tonnes — gets lost or wasted.


    “Figure 3 illustrates the amounts of food wastage along the food supply chain. Agricultural production, at 33 percent, is responsible for the greatest amount of total food wastage volumes. Upstream wastage volumes, including production, post-harvest handling and storage, represent 54 percent of total wastage, while downstream wastage volumes, including processing, distribution and consumption, is 46 percent. Thus, on average, food wastage is balanced between the upstream and downstream of the supply chain.”

    #IoTman has the solution for these issues with blockchain technology to streamline supply and demand contracts and reduce transaction cost while IoT platforms and sensors can track produce across the distribution chain to guard against spoilage. Piece of cake for #IoTman.

    Water
    As an example, an Italian study developed by Censis reports that the volume of water lost in the distribution network is 32% in Italy, 20% in France, 6.5% in Germany, and 15.5% in England and Wales. Add to that the flooding that happens on a yearly basis and it is clear that we need better control of our water distribution networks and advance warning systems for floods.


    Given the graph above, clearly any #IoT system that can cut down on water waste in irrigation will have a massive impact.

    #IoTman, again, already has many solutions to our water issues, from smart meters to regulate consumption to sensors for leakage and massive flooding. We still need to work on our blockchain exchange for supplying water contracts via various means, but it is only a matter of time before someone releases such a peer-to-peer water trading platform with minimal transaction costs.

    Energy
    Energy efficiency is crucial in dealing with demand outstripping supply. Would anyone like to contest this statement? There is no one-size-fits-all answer for the issues relating to energy. What is clear, though, is that we have the technology to dramatically increase the efficiency of energy use and distribution with #IoT sense and control systems across the whole energy chain. It is not just smart meters that will make a difference but smart cities, buildings and enterprises.

    Electric motors use 45% of global electricity. We are talking here about global electricity! #IoTman can definitely supply systems today that sense and control motors of all sorts to improve efficiency. Every few percentage points of improvement have a dramatic effect on a global scale.

    Healthcare
    I saved the best one for last. Who in his right mind imagines that we can properly care for 9 billion people in 2050 when we are already spending over 10% of GDP today just to barely keep up? Why are we still waiting for people to get sick before we prescribe medication to the tune of $1 T? This is definitely not a sustainable model for 2050.


    #IoTman has the technology to flip this problem around by focusing on prevention. Big data genomics and sensors for continuous tracking of daily medical status are the only possible effective way forward. Get the GDP percent down, get Pharma to change its business model to prevention instead of cure and most of all get those 9B people to live a healthier life as their life expectancy increases.

    There are many reports from various organisations that expand on the subjects above and go into details of how #IoT will make a dramatic impact on the living conditions for humanity in 2050. My intent was not to replicate these studies here but to serve as a quick reminder.

    Given the above, it is time for the hi-tech community to rise to the challenge, to come together, put aside its fascination with my platform not yours, my API not yours, and join hands so that together we can create a better future for humanity. Do we really want to delay massive #IoT deployment by many years until the right application platform eventually wins?

    Would you join #IoTman in this epic journey to a trillion #IoT nodes serving humanity?



Webinar: Recipe to consume less than 0.5 µA in sleep mode
by Eric Esteve on 05-19-2017 at 12:00 pm

Dolphin is addressing the ultra-low-power (ULP) needs of applications such as Bluetooth low energy (BLE), machine-to-machine (M2M) and IoT edge devices in general. For these applications, defining active and sleep modes is imperative, but that may not be enough to guarantee that the battery-powered system will run for years, especially when the device integrates Always-On functions. In fact, a device in sleep mode is expected to support some mandatory functions, like a clock (RC or XTAL oscillator), retention SRAM (data and program memory), ACU/PMU control logic, voice activity detection and so on. We will see in this webinar the different strategies to implement these Always-On functions, including the power network distribution within the chip.

It’s a very practical webinar: Dolphin proposes different architectures, all based on IP from the Dolphin portfolio (in 180nm, 55nm, 40nm and 22nm processes) dedicated to the Always-On power domain, and discusses the impact on chip power consumption using measured (or simulated) power figures.

Dolphin proposes five synopses, each related to a specific action:

  • Using a supply multiplexer
  • Mutualizing a voltage regulator
  • Associating 2 voltage regulators
  • Using a near-threshold voltage library
  • Using a thick oxide library

The designer will go for the architecture best optimized with respect to the application constraints, probably mixing several of the above-listed recipes.

The emerging connected systems are most frequently battery-powered, and the goal is to design a system able to survive on, for example, a coin cell battery, not for days or even months, but for years. If you dig, you realize that numerous applications, such as M2M, BLE, Zigbee…, have an activity rate (duty cycle) such that the power consumption in sleep mode dominates the overall current drawn by the SoC. For such applications, the design of the “Always-On power domain” is pivotal. To meet customer expectations, the current consumption of the Always-On (AoN) power domain – including blocks in retention mode – must not exceed 500 nA. To reach this target, the power network architecture needs to be carefully considered, and the IP supporting this architecture must be available in the target technology node.

Dolphin has developed a methodology based on a figure of merit (FoM), expressed as follows:


Because we are dealing with devices supporting always-on capability in the context of connected, battery-powered systems, the weighting is highest for power consumption at 60%, decreasing to 25% for area and 15% for bill-of-materials (BoM). Looking at the five power architectures to implement the AoN power domain, a comparative analysis makes it possible to identify the characteristics of the silicon IP required to reach the targeted performance optimization, identified by the best (lower is better) FoM.
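The FoM formula itself appears as a figure which is not reproduced here; as a rough reconstruction from the weights just quoted (the exact normalization is my assumption, not Dolphin's published expression), a weighted sum of normalized metrics would read:

```latex
\mathrm{FoM} \;=\; 0.60\,\frac{P}{P_{ref}} \;+\; 0.25\,\frac{A}{A_{ref}} \;+\; 0.15\,\frac{\mathrm{BoM}}{\mathrm{BoM}_{ref}} \qquad (\text{lower is better})
```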


We can see a real example, with three cases of Bluetooth LE chips (BLE1, BLE2, BLE3), in the picture above. In all three cases the active current is the same (the same BLE function is integrated); only the sleep current differs, at 10, 1 and 0.2 µA respectively. If you consider a system active only 1% of the time (or about 15 min per day), the battery autonomy varies from 208 days (10 µA sleep) to 260 days (0.2 µA sleep). The difference is much more impressive for a system active only 1 minute per day: in that case, the battery autonomy reaches up to 5 years for the lowest sleep current (0.2 µA), about 3 times more than the 1.7 years obtained with the highest sleep current (10 µA).
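To see why the sleep current dominates at low duty cycles, here is a back-of-the-envelope estimate in Python. The coin-cell capacity and active current below are illustrative assumptions, not Dolphin's measured figures, so the results only approximate the numbers quoted above.

```python
# Rough battery-autonomy estimate for a duty-cycled BLE-class device.
CAPACITY_UAH = 230_000   # ~230 mAh coin cell, in uA*h (assumed)
I_ACTIVE_UA = 4_000      # average active current in uA (assumed)
HOURS_PER_YEAR = 24 * 365

def autonomy_years(duty_cycle, i_sleep_ua):
    """Battery life in years for a given activity ratio and sleep current."""
    i_avg = duty_cycle * I_ACTIVE_UA + (1 - duty_cycle) * i_sleep_ua
    return CAPACITY_UAH / i_avg / HOURS_PER_YEAR

for duty, label in [(0.01, "active 1% of the time"),
                    (1 / (24 * 60), "active 1 minute per day")]:
    for i_sleep in (10, 1, 0.2):
        print(f"{label}, sleep = {i_sleep} uA: "
              f"{autonomy_years(duty, i_sleep):.1f} years")
```

The pattern matches the chart: once the device is active only a minute per day, cutting sleep current from 10 µA to 0.2 µA multiplies the autonomy several times over.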

Dolphin will hold a live webinar, “Recipe to consume less than 0.5 µA in sleep mode”, on May 23 for the Americas at 9:00 AM PDT, and on June 1st (for Europe and Asia) at 11:00 AM CEST. This webinar targets SoC designers who want to learn how to quickly implement ultra-low-power (ULP) techniques using proven recipes.

You can access the recording of this webinar by registering for MyDolphin, the private space on Dolphin Integration’s website:
https://www.dolphin-integration.com/index.php/my_dolphin/login

By Eric Esteve from IPNEST
More about the various architectures for implementing ultra-low-power solutions:
https://www.dolphin-integration.com/index.php/solutions/low-power-always-on-panoply#summary



Webinar – Low Power Circuit Sizing for IoT
by Tom Simon on 05-19-2017 at 7:00 am

Optimizing analog designs has always been a difficult and tricky process. Designing for IoT applications has only made this more difficult with the added importance of minimizing power. Unlike other circuit parameters, it is not easy to specify power as a design goal when using equations. Power is a resultant property and must be optimized by heuristically tuning circuit parameters while not dropping out of spec on the other design requirements.

Add to this the need for chips to function in a large range of environments – from freezing cold to outdoors in direct sun, or on widely variable supply voltages – and the design challenge only becomes greater. MunEDA has an approach that uses advanced numerical methods to automate analog circuit optimization. Instead of a painful trial-and-error process involving many discrete calculation steps, they have harnessed multivariable optimization in conjunction with SPICE to seek solutions over the design space. It even works well with second-order circuit effects and devices operating across their full range, not just weakly or strongly saturated.
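As a rough illustration of the idea of multivariable, constraint-driven sizing with a simulator in the loop (the evaluation function here is a toy analytic stand-in, not a real SPICE run, and none of this reflects the internals of MunEDA's WiCkeD tools):

```python
import numpy as np
from scipy.optimize import minimize

def evaluate(x):
    """Pretend 'simulation': returns (power, gain_dB, bandwidth_Hz) for a sizing x.

    In a real flow this step would run a SPICE netlist parameterized by the
    device sizes and bias currents in x and extract the measurements.
    """
    w, ibias = x                           # device width, bias current
    power = 1.2 * ibias                    # power grows with bias current
    gain_db = 20 * np.log10(50 * np.sqrt(w * ibias))
    bandwidth_hz = 1e10 * ibias / (1 + w)  # wider devices load the output node
    return power, gain_db, bandwidth_hz

def objective(x):
    return evaluate(x)[0]                  # minimize power

constraints = [
    {"type": "ineq", "fun": lambda x: evaluate(x)[1] - 20},   # gain >= 20 dB
    {"type": "ineq", "fun": lambda x: evaluate(x)[2] - 2e6},  # bandwidth >= 2 MHz
]
bounds = [(0.5, 100.0), (1e-6, 1e-2)]      # width and bias-current ranges

result = minimize(objective, x0=[10.0, 5e-3], method="SLSQP",
                  bounds=bounds, constraints=constraints)
print("sizing:", result.x, "power:", objective(result.x))
```

A production tool replaces the toy model with real circuit simulation, handles many more variables and specs at once, and folds in statistical variation, which is the kind of capability the webinar outline below touches on.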

To help more people understand their approach and solution, MunEDA is hosting a webinar on the topic of “Low Power Circuit Sizing for IoT” on June 1st that will provide an overview of how their WiCkeD Tool Suite chews through optimization problems for analog chips, especially those targeted for IoT.

The webinar is intended for circuit designers, design managers, CAD engineers and EDA managers. The presenter will be Dr. Michael Pronath, Vice President Products & Solutions.

Here is their outline for the talk:

  • Why is multi-objective automated circuit sizing challenging for IoT designs?
  • How can MunEDA automated sizing tools be leveraged to achieve high-performance and reliable IoT designs?
  • How are statistics incorporated into sizing to consider device mismatch and process centering?
  • What is the technology behind the MunEDA automated sizing environment (with enough math/statistics to be interesting but not overwhelming)?
  • How can MunEDA sensitivity-based tools be used to test, confirm and validate current intuition about my IoT designs?

Here is the link for signing up for this event, which will be held on June 1st at 10AM PDT.


About MunEDA
MunEDA develops and licenses EDA tools and solutions that analyze, model, optimize and verify the performance, robustness and yield of analog, mixed-signal and digital circuits. Leading semiconductor companies rely on MunEDA’s WiCkeD tool suite – the industry’s most comprehensive range of advanced circuit analysis solutions – to reduce circuit design time and achieve maximum yield in their communications, computer, memory, automotive and consumer electronics designs. Founded in 2001, MunEDA is headquartered in Munich, Germany, with offices in Sunnyvale, California, USA (MunEDA Inc.), and leading EDA distributors in the U.S., Japan, Korea, Taiwan, Singapore, Malaysia, Scandinavia, and other countries worldwide. For more information, please visit MunEDA at www.muneda.com/contacts.php.



Understanding ISO 26262 Compliance for Automotive Suppliers
by Daniel Payne on 05-18-2017 at 12:00 pm

The semiconductor, IP, software and EDA industries are all focusing on the growing automotive market because of its electronic content, size and growth. There are long-time suppliers to the automotive industry, and also first-time vendors that are launching something new every week for automotive electronics. So where do you go to get up to speed most quickly on automotive requirements like safety standards? One way is to attend a free lunch and learn event planned for next week:

  • What: Understanding ISO 26262 Compliance for Automotive Suppliers
  • When: Tuesday, May 23rd, 2017 from 11:30AM to 1PM PDT
  • Where: Hyatt Regency Santa Clara, 5101 Great America Parkway, Santa Clara, CA 95054

Presenters

You’ll hear from two companies at this lunch and learn event:

  • Industry safety certification expert from TÜV SÜD
  • Requirements management expert from Jama Software

Who Should Attend

Any member of an organization who is currently working on or interested in the development of products for the automotive marketplace.

What You’ll Learn

How to navigate the development challenges of ISO 26262 when you are creating products for automotive customers.

Description

Developing products for the automotive market has its challenges – building products your customers love, getting to market quickly, etc. However, those challenges are further complicated by functional safety standards like ISO 26262 and by regulatory overhead. Companies new to the automotive market seeking to capitalize on the growth opportunity are quickly confronted by unfamiliar safety standards like ISO 26262 and often struggle to get started. In addition, companies more familiar with the automotive industry struggle to keep up with changing standards and are constantly seeking ways to minimize risk in their process while saving time and money.

Join us on May 23rd for a lunch and learn with industry safety certification experts TÜV SÜD and industry-leading requirements management organization Jama Software, as we share best practices for maintaining a competitive edge while minimizing risk, saving time and money, and adhering to today’s functional safety standards like ISO 26262.

Agenda

  • Overview of ISO 26262
  • Common challenges to ISO 26262 compliance
  • Traceability to support ISO 26262
  • How Requirements Management can ease the burden of ISO 26262

Registration
You need to register online; the event is free, but space is limited.



Two-Factor Authentication on the Edge
by Bernard Murphy on 05-18-2017 at 7:00 am

Two-factor authentication has become commonplace for those of us determined to keep the bad guys at bay. You first request entry to a privileged site with a user-name/password; the site then sends a code to your mobile device which you must enter to complete your admission (there are other second-factor methods, but this one is probably the most widely known). In the escalating security war, this approach too will be compromised at some point, but for now it is certainly much more secure than single-factor authentication.


Texting a code to your phone works well for human-mediated authentication, but it has obvious challenges for IoT devices, not only in how the second factor would be handled but also because these devices are typically resource-constrained. Yet this level of security is even more important in devices monitoring and managing critical infrastructure, such as the power grid or transportation systems, domains where breaches could have enormous impact. Security of this type becomes particularly important when looking at remote provisioning, for software updates for example.

A team sponsored by A*STAR (Singapore) has come up with a clever way to provide similar (or better) levels of authentication in edge devices. Most two-factor approaches rely on the second factor being something you (as a privileged user) know or have, as a possession or intrinsic characteristic, which would be difficult for an attacker to duplicate. Short of very advanced (and low power) levels of AI, it is probably over-optimistic to expect IoT edge devices to “know” anything and what they “have” doesn’t seem any more secure than an embedded password. But there is something quite unique and very challenging to replicate about each device – the history of communication it has had with the host system.

Naturally the full representation of that communication would be far too massive to store on the edge device (and far too slow to check). Instead, the method works as follows. First, the nature of second factor authentication is for the edge-device (in this case) to issue a challenge to the server to prove that it knows some specific fact. The server, which they call P, must generate a proof which it returns to the device, called V, which then verifies the response. The strength of the proposed method relies on the server having fast access, through big-data methods, to all the history (or a significant percentage of the history) of communication with the device, something that would be very expensive for an attacker to replicate and would greatly increase chances of detection if an attacker were to attempt to collect that information. And as that history continues to evolve, even a successful attack at one point in time would degrade in effectiveness beyond that time.

The server side of this method is easy to understand – all that history data is effectively a giant and evolving key which should be prohibitively expensive to attack. The tricky and clever part is how this works on the device V. Here the paper gets quite complex, so you’ll have to rely on my summary, or read the paper yourself (and please let me know if I got it wrong). V stores only a finite and evolving subset derived from historical data, converted through a function into tags, which are also communicated back to P as items are selected by V to be in that set.

On authentication (I’ll skip a bunch of details here for simplicity), V issues a challenge to P in the form of a random subset of indices on this set of tags. P runs a function which provides a proof that in essence shows it knows the historical data associated with those tags. P is not simply returning the tags corresponding to those indices; this is a variant on a proof of retrievability without having to return the underlying data. V then runs a different function on the returned proof and that function must return a true value for each index in the challenge.
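To make the roles concrete, here is a heavily simplified Python sketch of history-based challenge and response. Unlike the scheme in the paper, the prover here just recomputes per-index tags rather than returning a compact proof of retrievability, and every name and parameter is an illustrative assumption rather than the authors' construction.

```python
import hmac, hashlib, secrets

def tag(key: bytes, record: bytes) -> bytes:
    """Tag binding one history record to a one-time key held by the device."""
    return hmac.new(key, record, hashlib.sha256).digest()

# P (the server) keeps the full communication history with the device.
history = [f"msg-{i}".encode() for i in range(10_000)]

# V (the edge device) selects a small random subset of the history it has seen
# and stores, per selected index, a one-time key plus the tag of that record.
rng = secrets.SystemRandom()
stored = {}
for i in rng.sample(range(len(history)), 64):
    k = secrets.token_bytes(16)
    stored[i] = (k, tag(k, history[i]))

# Authentication: V challenges P over a random subset of its stored indices,
# revealing only the corresponding one-time keys.
challenge = {i: stored[i][0] for i in rng.sample(list(stored), 8)}

# P can only answer correctly if it actually holds those historical records.
response = {i: tag(k, history[i]) for i, k in challenge.items()}

# V accepts only if every challenged index matches the tag it stored up front.
ok = all(hmac.compare_digest(response[i], stored[i][1]) for i in challenge)
print("server authenticated:", ok)
```

Even in this toy version the flavor of the scheme is visible: the secret is not a single stored password but knowledge of an evolving history that an attacker would have to capture in full to impersonate the server.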

That’s the 10-cent version of a more complex explanation. The authors also provide a proof of correctness, a detailed security analysis and an analysis of the resilience of the method to leakage, none of which I am going to attempt to explain here. They show that as the size of the stored subset (on V) increases, the likelihood of a successful attack decreases exponentially; even if as much as 70% of the subset is compromised, the likelihood of a successful attack still decreases exponentially with increasing subset size, though obviously more slowly.

You can learn more about the approach HERE or HERE.



Cybersecurity in the World of Artificial Intelligence
by Matthew Rosenquist on 05-17-2017 at 12:00 pm

Artificial Intelligence (AI) is coming. It could contribute to a more secure and rational world or it may unravel our trust in technology. AI holds a strong promise of changing our world and extending compute functions to a more dominant role of directly manipulating physical world activities. This is a momentous step where we relinquish some level of control for the safety of ourselves, family, and prosperity. With the capabilities of AI, machines can be given vastly more responsibility. Driving our vehicles, operating planes, managing financial markets, controlling asset transactions, diagnosing and treating ailments, and running vast electrical, fuel, and transportation networks are all within reach of AI. With such power comes not only responsibility but also risk from those seeking to abuse that power. Responsibility without trust can be dangerous. Where will cybersecurity play in a world where learning algorithms profoundly change and control key aspects of our future?


Technology is a Tool
AI, for all its science fiction undertones, is about finding patterns, learning from mistakes, breaking down problems, and adapting to achieve specific goals. AI is a series of incredible logic tools that allow methodological progression in data processing. It sounds complex, but when distilled to its base components, it becomes simple to grasp. In practice, it could be finding the most optimal route to a location, matching biometrics to profiles, interpreting the lines of the road to keep a vehicle in the proper lane, distilling research data to identify markers for diseases, or detecting brewing volatility situations in markets or people. Vast amounts of data are processed in specific ways to distill progressively better answers and models.

The real importance is not in how AI can find patterns and optimal solutions, but rather what the world can do with those capabilities. Artificial Intelligence will play a crucial role in a new wave of technology that will revolutionize the world.


Every day we get closer to autonomous cars that can transport people across town or across the continent, to accelerated research for cures to diseases, to locating hidden reserves of energy and minerals, and to predicting human conditions like crime, heart attacks, and social upheaval. AI systems are expected to improve the lives of billions of people and the condition of the planet in many different ways, including detecting leaks in pipes to reduce waste and avoid environmental disasters, optimizing crop yields to reduce starvation, configuring manufacturing lines for efficient automation, and identifying threats to people’s health and safety.

Learning machines will help computers perform actions more efficiently and accurately than humans. This will foster trust and lead to more autonomy. It is not just cars. Think medical diagnosis and treatment, financial systems, governmental functions, national defense, and hospitality services. The usages are mind boggling and near limitless. The new ways we use these capabilities will themselves create even more innovative opportunities to use AI. The next generation may focus on the monitoring, management, control, provisioning, improvement, audit, and communication between other, less capable AI systems. Computers watching computers. The cycle will reinforce and feed itself as complexity increases and humans become incapable of keeping pace.

As wondrous as it is, AI is still just a technology. It is a tool, albeit a very powerful and adaptable one. Here is the problem: tools can be wielded for good or for malice. This has always been the case and we just cannot change the ways of the world. As powerful as AI is, there is a direct relationship between the benefits it brings and the risks that accompany them. When value is created, attackers are attracted and it becomes a target. It might be a hacker, online criminal, nation state, hacktivist, or any other type of threat agent. Those who can steal, copy, control, or destroy something of value have power. AI will be a very desirable target for those who seek power.


From Data Processing to Control of the Physical World
Computers are masters of data. They can do calculations, storage, and all manner of processing extraordinarily well. For a very long time the data and information generated by computers were largely for humans to be better informed and to make decisions. There are other uses of course – entertainment, communications, etc. – but the point is that there have been specific limits. Computer outputs were mostly to a screen, a printer, or another computer. Controlling things in the physical world takes quite a sizable amount of thinking to do right. In many cases, we simply don’t trust computers to deal with unexpected and complex situations.

Modern airliners have automatic settings which can fly the plane. But we all feel much more comfortable with a human in the cockpit, even if they don’t do much but enjoy the ride. They are our failsafe, one which has an irreplaceable stake, just like the passengers, in arriving safely at the destination. Humans, although slow compared to computers, fallible in judgement, and prone to unpredictability in performance, still have a trusted reputation for keeping people safe and rising when needed to adapt to changing conditions. They simply are better at critical oversight of incredibly complex, ambiguous, and unpredictable situations, especially when self-interest is involved.

AI may challenge that very concept.
It will likely be proven on the roads first. Autonomous systems will be designed to reduce traffic congestion, avoid accidents, and deliver passengers by the most efficient route possible. Drivers are notoriously bad around the world. Sure, a vast majority of trips end in expected success, but many do not. Tremendous resources of time, fuel, and patience are wasted with inefficient driving. Autonomous vehicles, powered by various AI systems, will eventually statistically prove to be significantly better at driving. So much so, it could revolutionize the insurance industry and create a class system on the roadways. Autonomous vehicles can travel in close chevrons at high speed, while human drivers will be greatly limited in speed and efficiency.

Such cases will open the world to computers which will be allowed, even preferred, to control and manipulate the physical world around and for us.


It could be as simple as a smart device to mow the lawn. An AI enhanced autonomous lawnmower could efficiently cut the grass, avoid sprinklers, not come near the household pets, detour around newly planted flowers, be respectful of pedestrians, and turn off when children approach or their toys get too close. Such a device will also monitor its performance and act on maintenance needs by proactively ordering needed parts and connecting itself to the power grid when it needs to recharge. It may also find the best place to park itself in the tool shed or garage, and only return when it determines the biological grass again requires upkeep by a smart technological overseer. The trust that AI brings, in reliably making smart decisions, will allow digital devices to manipulate, interact, and control aspects of the physical world.



Malice in Utopia
The same sensors, actuators, and embedded intelligence could also be a recipe for disaster. Errors in faulty, damaged, or obscured sensors might feed incorrect data to the AI brain, leading to unintended results. Even worse, malicious actors could alter inputs, manipulate AI outputs, or otherwise tinker with how the device responds to legitimate commands. Such acts could lead to runaway mowers, chopping down the neighbor’s prized petunias, or pursuing pets with messy consequences. A single act could end badly. But what if a truly malevolent actor went so far as to hijack an entire neighborhood of these devices, like a botnet under the control of an aggressive actor? Still, the rise-of-the-suburban-lawnmowers scenario seems a bit silly.

What if we aren’t talking about lawnmowers? What if, instead it was autonomous cars, buses, and trains that could be controlled by hackers? Perhaps medical devices in emergency rooms or those implanted in people such as defibrillators, insulin pumps, and pacemakers. Smart medical devices could save lives with newfound insights to patient needs, but also could put people at risk if they make a mistake or are controlled maliciously. What if hostile nations were to manipulate AI systems that are controlling valves on pipelines, electrical switches in substations, pressure regulators in refineries, dam control gates, and emergency vents managing the safety of power generation facilities, chemical plants, and water treatment centers? In the future, artificial intelligence will greatly benefit the operation, efficiency, and quality of these and other infrastructure critical to daily life. But where there is value, there is always risk.

The Silver Lining

What can be broken can also be protected with AI. We live in a world full of risks. It does not make sense to eliminate all of them, but rather to manage them to an acceptable level. Technology provides a myriad of benefits and opportunities. It also brings new challenges in managing risk. The goal is to find the right balance of controls to mitigate the risks to an acceptable level where it makes sense. Costs and usability are factors as well. The optimal balance is when the benefits from the opportunities are realized while the risks are kept at an acceptable level. These goals, and the enormous amounts of data required to understand them, represent perfect conditions for AI to thrive and find optimal solutions.

Intelligent systems could lead in new capabilities to detect insiders and disgruntled employees, identify stealthy malware, dynamically reconfigure network traffic to avoid attacks, scrub software to close vulnerabilities before they are exploited, and mitigate large scale complex cyberattacks with great precision. AI could be the next great equalizer for defensive capabilities to support technology growth and innovation.

The Unanswered Questions
This is not the end of the discussion, rather it is the beginning of a journey. AI is still in its infancy, as are the technologies and usages which will incorporate it to deliver enhancements to systems and services. There are many roads ahead. Some more dangerous than others. We will need a guide to understand how to assess the risks of AI development and evolution, how to protect AI systems, and most importantly methods to secure the downstream use-cases it enables.

Some questions we must ponder:

  • What risks are present when the integrity of an AI system is undermined, results are manipulated, data is exposed, or availability is denied?
  • Who is liable if AI makes poor decisions and someone gets injured?
  • How will regulations for privacy and IP protection apply to processed and aggregated data?
  • What level of transparency and oversight should be instituted as best practices?
  • How should input data, AI engines, and outputs be secured?
  • Should architectures be designed to resist replay and reverse engineering attacks, at what cost?
  • What fail-over states and capabilities are desirable against denial-of-service attacks?
  • How do we measure risks and describe attacks against AI systems?
  • What usages of AI will be targeted first by cyber attackers, organized criminals, hacktivists, and nation states?
  • How can AI be used to protect itself and interconnected computer systems from malicious and accidental attacks?
  • Will competition in AI systems drive security to an afterthought or will market leaders choose to proactively invest in making digital protections, physical safety, and personal privacy a priority?


Dawn of a New Day…
It is time the cybersecurity community begins discussing the risks and opportunities that Artificial Intelligence holds for the future. It could bring tremendous benefits and at the same time unknowingly become a Pandora’s box for malicious attackers. We may see innovators wield AI in incredible new ways to protect people, assets, and new technologies. Governments may be compelled to step in and begin regulating the usage, protections, transparency, and oversight of AI systems. Standards bodies will also likely be involved in setting guidelines and establishing acceptable architectural models. Thought-leading organizations will likely begin to incorporate forward-thinking cybersecurity controls which protect the digital security of systems, the physical safety of users, and the personal privacy of people.

I plan on exploring the intersection of cybersecurity and Artificial Intelligence in upcoming blogs. It is a worthy topic we all should be contemplating and discussing. This topic is virgin territory and will take the collective ideas and collaboration of technologists, users, and security professionals to properly mature. Follow me to keep updated and add your thoughts to the conversation.

Right now, the future is unclear. Those with insights will have an advantage. Now is the right moment to begin discussing how we want to safely architect, integrate, and extend trust to intelligent technologies, before unexpected outcomes run rampant. It is time for leaders to emerge and establish best practices for a secure cyber world that benefits from AI.

Interested in more? Follow me on LinkedIn, Twitter (@Matt_Rosenquist), Information Security Strategy, and Steemit to hear insights and what is going on in cybersecurity.