
A Peek Inside the Global Foundries Photonic Death Star!
by Mitch Heins on 11-03-2016 at 12:00 pm

Last week I wrote about the Photonics Summit and hands-on training hosted by Cadence Design Systems, PhoeniX Software and Lumerical Solutions, and in that article I mentioned that Ted Letavic of Global Foundries laid out a powerful argument for why integrated photonics is a technology that is going mainstream. This article dives into more details from Ted’s presentation. There are some basic misconceptions about photonics that need to be cleared up, and Global Foundries did a good job of that in Ted’s presentation.

The first misconception is that integrated photonics will be a small niche market. Ted did a nice job of pointing out that the major growth driver for photonics will be cloud-based computing. Up to 75% of enterprise IT deployments are now hybrid-cloud based. Cloud deployments are driving most of the server, network and storage growth, and it’s that growth that will drive a 10X increase in data center traffic over the next five years. Mobile data is another contributor to this growth and is forecast to grow at an astounding 53% CAGR, from ~6 exabytes (EB) in 2016 to over 30 EB in 2020. With greater data volumes comes the need for greater data bandwidth and flexibility. Ted identified the two biggest drivers for increased bandwidth as the new 5G standard for cellular networks and the disaggregation of data centers, with suppliers moving away from super centers to many smaller centers connected together by high-bandwidth networks. Both of these drivers will require increased bandwidth density and speed and decreased latency. With this in mind, networking bandwidth is forecast to double every two years for the foreseeable future, and integrated photonics will be the prevalent solution in all areas of networking for telecom (long and short haul), mobile networks and data centers. Transceivers alone for telecom and datacom are forecast to be a $3B market by 2020.
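Those growth figures are easy to sanity-check. Here is a minimal Python sketch; the ~6 EB baseline and 53% CAGR are the article's numbers, while the year-by-year table is just compound-growth arithmetic on my part:

```python
# Sanity check of the cited growth numbers (my arithmetic, not from the
# presentation): ~6 EB in 2016 compounding at a 53% CAGR.
base_eb = 6.0   # mobile data traffic in 2016, exabytes (approximate)
cagr = 0.53     # compound annual growth rate from the article

for year in range(2016, 2021):
    volume = base_eb * (1 + cagr) ** (year - 2016)
    print(f"{year}: {volume:.1f} EB")
# 2020 comes out near 33 EB, consistent with the "over 30 EB" forecast.
```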

The second misconception is that integrated photonics is still in the labs and hasn’t made it to the production fabs. Global Foundries made it abundantly clear that they are ready to take production runs in as many as three different fabs (Fishkill 90nm/300mm, Burlington 90nm/200mm and Singapore 45nm/200-300mm). All of these fabs are able to run SiGe (silicon germanium) on SOI wafers and support PDKs with all of the necessary components for integrated photonic designs, including vertical grating couplers, low-loss edge couplers, dense high-contrast waveguides and passive components, as well as high-speed active modulators and photodetectors.

A third misconception about integrated photonics is that because photonic components are large in comparison to their transistor counterparts, 300mm lines would be overkill for such devices. As it turns out, signal loss is a key concern in large photonic circuits, and many of the major sources of loss, such as line-edge roughness in waveguides, alignment errors at junctions, and line-edge placement errors of resonant structures caused by poor critical dimension (CD) control, can be mitigated by 300mm tooling. Global Foundries showed results comparing their 200mm and 300mm tooling, with the 300mm lines delivering a 3-5X reduction in CD and overlay errors, a 2.5-3X reduction in line-edge roughness, and a 4-5X reduction in CD and overlay errors in modulators, giving them a substantial boost in RF definition. This tooling, combined with judicious optical proximity correction (another staple of 300mm processing), makes for a very low-loss photonic platform.

A last misconception about integrated photonics is that monolithic solutions combining electronics and photonics are a long way off. Global Foundries has a solution now, says Letavic. Global Foundries’ offering boasts monolithic and hybrid process integration, including high-bandwidth RF and analog for broadband systems and 5G synergy. To strengthen the offering, Letavic also pointed out that Global Foundries has a wealth of capabilities for handling advanced packaging (C4/Cu pillars, TSVs and MCMs) and test requirements, and has added support for integrated photonics in the form of lower-cost passive fiber alignment-and-attach technologies and surface grating couplers for inline on-wafer testing.

Letavic rounded out his presentation by noting that PDKs for these capabilities are available now and are compatible with the Cadence, PhoeniX, Lumerical EPDA (electronic-photonic design automation) flow covered by the rest of the photonics summit.

As I mentioned in my last article, this truly is a watershed event for photonics. The AIM Photonics effort in the U.S. needed a production fab into which designs could go from prototype to production and now they have not one, but three!

Also Read: The Fabless Empire Strikes Back, Global Foundries and Cadence make moves into Integrated Photonics!


Always-On IoT – FDSOI’s Always Better? What About Wafers? (Questions from Shanghai)
by Adele Hars on 11-03-2016 at 7:00 am

Mahesh Tirupattur, EVP at low-power SERDES pioneer Analog Bits, led off the panel discussion at the recent FD-SOI Forum in Shanghai with the assertion that for anything “always on” in IoT, FD-SOI’s always better. They had a great experience porting their SERDES IP to 28nm FD-SOI (which they detailed last spring – see the ppt here). The port from 28nm bulk to 28nm FD-SOI took 2 1/2 months (vs. the port to FinFET, which took almost 6). Even without using body bias, they got performance up by around 15% and leakage down by about 30% (he added that with body bias, they could get five times that).

He compared porting to FD-SOI to playing high school ball, vs. a port to FinFET which is like competing in the Olympics. ESD was different, but not a big deal – you just need to “read the manual”. Heating? Nothing an engineer can’t resolve. For IoT, FinFETs are like using a cannon to shoot a mosquito, he quipped.

He later ticked off a few more advantages of FD-SOI for the IoT design community: system cost, lower power – and here’s a particularly interesting observation – cheaper packaging. They were able to do wire bonding, so they were able to package a wearable video app in a plastic capsule. All things considered, FD-SOI offers the perfect solution, he said (and now he’s got silicon with “dramatic results” to prove it), adding that the IP guys need to evangelize this.


Shanghai FD-SOI Forum Panel Discussion (left to right): Wayne Dai, CEO Verisilicon (moderator); Marshal Cheng, SVP Leadcore; Mahesh Tirupattur, EVP Analog Bits; Subramani Kengeri, VP GlobalFoundries; Handel Jones, CEO IBS; Christophe Maleville, VP Soitec. (Photo courtesy SOI Consortium and Verisilicon)

Moving really fast
GloFo VP Subramani Kengeri took a moment to look back before he looked forward. “FD-SOI is not new,” he reminded the audience. It was explored and researched for a decade. But at the beginning, CPUs were driving the industry, and everyone else followed suit. But now in mobile and IoT, RF is becoming more important, and what was good for the CPU is no longer what’s good for everything else. He tipped his hat to Soitec, ST and Leti, who “kept the lights on” and kept driving FD-SOI forward. Now with 5G on the horizon, FD-SOI is the enabler, he added.

He also noted that FD-SOI gets you the maximum memory on-chip, and that with 12FDX, we’ll be seeing the world’s smallest SRAM. That opens a new degree of freedom. The EDA partners have been working on automating body bias in the PDK for greater power management. He cited an ARM core with on-demand performance that can be used “intelligently”. Is it complicated? Not really, he said, especially if it’s automated. In fact he sees body bias opening the market for “extraordinary, innovative products” very soon. Key IP is in place. And it’s not just for IoT: if you don’t count high-end CPUs, FD-SOI is optimal for everything. “Everything’s happening now, and it’s moving really fast,” he said.

Clear substrate path to 7nm
SOI wafer leader Soitec VP Christophe Maleville was asked if he saw any limit on manufacturing the ultra-thin wafers for the 7nm node. No problem, he said – they can do those wafers with 4nm of strained top silicon and a 10nm layer of insulating BOX. They’ve been working on FD-SOI wafers for over a decade, he said, with Leti, IBM and ST. Back in 2013 when ST announced the NovaThor hitting 3GHz (or 1GHz at just 0.6V) on 28nm FD-SOI, everything was in place: the metrology was ready and reliability was controlled.

Today they’ve got a 15nm BOX layer in manufacturing, with no limits in moving to 10nm for customers going for very low power. For the strained top silicon needed for the 7nm node, they spent years working on strain with IBM et al in Albany, so they’re not starting from scratch. That substrate will be mature in just two years, so from a substrate point of view, he said, “7nm is no problem”.

Coming fast: lots of products (and a fab for China?)
In response to a follow-up question from a well-known financial analyst covering the China tech industry, panel moderator and Verisilicon CEO Wayne Dai said that the design community in China has the skills to do FD-SOI, no problem. He’d like to see more IP, but FD-SOI has powerful advantages in terms of cost, analog, memory and back biasing.

Dai then asked the panelists if they thought we’d be seeing a foundry in China opting for FD-SOI next year – all but one said yes. One thing all the panelists agreed on, however: they all expect to see FD-SOI products (and lots of them) on the stage at the Shanghai FD-SOI Forum in 2017.


Protium for the win in software development
by Don Dingee on 11-02-2016 at 4:00 pm

Cadence Design Systems is a long-standing provider in hardware emulation, but a relative newcomer to FPGA-based prototyping. In an upcoming lunch and learn session on November 11 in San Jose, Cadence teams will be outlining their productivity strategy. What’s different with their approach and why is this worth a lunch? Continue reading “Protium for the win in software development”


Keeping It Fresh with the Veloce Deterministic ICE App
by Rizwan Farooq on 11-02-2016 at 4:00 pm

In “The Times They Are A-Changin’,” Nobel Laureate Bob Dylan advised us to “heed the call” of change or suffer the consequences. This couldn’t be more true, considering what design and verification engineers face every day in the midst of the technological revolution.

Change has never been so rapid. And it requires we constantly adapt. Within the world of emulation, we are witnessing tremendous efforts to keep pace, making emulators more useful, more available, and more efficient. Virtualization and around-the-clock, concurrent availability of emulator resources to multiple project teams are primary strategies for better serving global design teams and the growing number of emulation use models and applications.

Yet traditional ways of doing things continue to have value. For example, in-circuit emulation (ICE) is needed for many SoC verification scenarios. Used to exercise a design under test (DUT) by connecting physical targets to an emulator, ICE delivers the significant advantage, among other things, of being able to run real-world usage scenarios before tape-out.

However, even when it is advantageous to use an ICE-based verification environment, verification engineers face four challenges:

  • Insufficient trace depth
  • Iterative and long debug cycles
  • Randomness
  • Lack of flexibility

To address these debug challenges, and keep ICE current with modern design and verification trends, Mentor developed the Veloce® Deterministic ICE App.

The Veloce Deterministic ICE App takes the randomness out of ICE, dramatically shortening the time to find and fix bugs. It delivers a repeatable and virtual debug flow for an ICE-based environment. It addresses debug limitations, including randomness, by creating a virtual debug model of an ICE run and generating a replay database to repeat a test without cabling to physical ICE targets.

Figure 1: The Veloce Deterministic ICE App use model.

The Veloce Deterministic ICE App use model is very simple. To generate a replay database, you specify your requirements and enable the Veloce Deterministic ICE App replay mode. Veloce generates the replay database while it runs the standard ICE test case with the ICE targets connected. Once the run is complete, the test case can be run as often as necessary using the replay database without the use of ICE targets.

Figure 2: Veloce Deterministic ICE use model.

Because the replay database eliminates the need for ICE targets, you can run it on any Veloce hardware. The emulator ICE targets are freed up for use by other project teams, and you can stop a run and inspect both data and full waveforms. This provides a rich debug platform and increased productivity, in addition to efficient use of emulation resources.
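To make the record-and-replay idea concrete, here is a minimal, hypothetical Python sketch of the general technique. This is not the Veloce API; `record_run`, `replay_run` and the object interfaces are invented for illustration. The idea is to capture every nondeterministic input from the live target on the first run, then feed the identical sequence back on later runs:

```python
import json

def record_run(dut, target, db_path):
    """First pass: drive the DUT from the live ICE target, logging every
    nondeterministic input so the run can be reproduced exactly."""
    trace = []
    while not dut.done():
        stimulus = target.next_transaction()  # nondeterministic: real hardware
        trace.append(stimulus)
        dut.apply(stimulus)
    with open(db_path, "w") as f:
        json.dump(trace, f)                   # the "replay database"

def replay_run(dut, db_path):
    """Later passes: replay the logged stimuli with no target attached.
    Every run is now deterministic, so it can be paused and inspected."""
    with open(db_path) as f:
        trace = json.load(f)
    for stimulus in trace:
        dut.apply(stimulus)
```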

The Veloce Deterministic ICE App also enables advanced debug methodologies like assertions, protocol monitors, and $display, which are commonly used in today’s advanced verification methodologies. You can also do power analysis, coverage closure, and offline SW debug using the Veloce Deterministic ICE App within an existing ICE setup.

To find out more about how the Veloce Deterministic ICE App improves your debug productivity and helps your team get the most out of your emulation resources, download the new whitepaper Using the Veloce Deterministic ICE App for Advanced SoC Debug.

“If your time to you is worth savin’” you’ll be glad you did.


Medicine will advance more in the next 10 years than it did in the past century
by Vivek Wadhwa on 11-02-2016 at 12:00 pm

Mark Zuckerberg and his wife, Priscilla Chan, recently announced a $3 billion effort to cure all disease during the lifetime of their daughter, Max. Earlier this year, Silicon Valley billionaire Sean Parker donated $250 million to increase collaboration among researchers to develop immune therapies for cancer. Google is developing contact lenses for diabetic glucose monitoring, gathering genetic data to create a picture of what a healthy human should be and working to increase human longevity.

The technology industry has entered the field of medicine and aims to eliminate disease itself. It may well succeed because of a convergence of exponentially advancing technologies, such as computing, artificial intelligence, sensors, and genomic sequencing. We’re going to see more medical advances in the next decade than happened in the past century.

We already wear devices, such as the Fitbit and Apple Watch, which monitor our physical activities, sleep cycles, and stress and energy levels and upload these data to distributed servers via our smartphones. And those smartphones contain countless applications to keep track of our vitals and gauge our emotional and psychological states.

Then there is sequencing of the human genome, first completed in 2001 at a cost of about $3 billion. It’s possible today for about $1,000, with costs falling so fast that, by 2022, genome sequencing may be cheaper than a blood test. Now that it has been mapped into bits that computers can process, the genome has become an information technology.
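That cost curve is worth working through. A quick Python check (my arithmetic; the $3B and $1,000 endpoints are the article's figures, the 2022 extrapolation assumes the same rate of decline continues):

```python
# Rough trend check (my arithmetic, not from the article): from ~$3B in
# 2001 to ~$1,000 in 2016 implies the per-year cost multiplier below.
c_2001, c_2016, years = 3e9, 1e3, 15
annual_factor = (c_2016 / c_2001) ** (1 / years)
print(f"cost multiplier per year: {annual_factor:.2f}")      # ~0.37

# Extrapolating the same trend six more years, to 2022:
print(f"2022 estimate: ${c_2016 * annual_factor ** 6:,.2f}")  # a few dollars
```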

With increasingly large sample sizes and tools such as IBM’s A.I. system, Watson, scientists are gaining an understanding of how our genes affect our health, and of how the environment, the food we eat, and the medicines we take affect the complex interplay between our genes and our bodies.

The next big medical frontier is on the horizon: our microbiomes, the bacterial populations that live inside our bodies. We may think we are just made up of cells, but in reality there are 10 times as many microbes in our body as cells. This is a field that I am most excited about, because it takes us back to looking at the human organism as a whole. The microbiome may be the missing link between environment, genomics, and human health.

Some children, for example, are born with a genetic predisposition to type-1 diabetes. Researchers tracked what happened to the stomach bacteria of children from birth to their third year of life and found that those who became diabetic had suffered a 25 percent reduction in their gut bacteria’s diversity (possibly from antibiotics). In another study, on Crohn’s disease, scientists took a small sample of feces from a healthy person and gave it in an enema to somebody with Crohn’s. Though that seems a disgusting procedure, it proved extremely effective in curtailing the condition. Scientists are also finding a correlation between the microbiome and obesity. It may well be the bacteria in our guts that make us fat — not just the food we eat.

Within a few years, our genome, microbiome, behavior and environment will all be mapped and measured, and prescriptive-medicine systems based on artificial intelligence will help us feel better and live longer.

The most amazing — and scary — genetics technology of all is CRISPR. It uses an enzyme, Cas9, that homes in on a specific location in a strand of DNA and edits it to either remove unwanted sequences or insert payload sequences. With it, Chinese scientists have genetically modified pigs, goats, monkeys and sheep to change their size and color. They also claim to have edited a human embryo for resistance to HIV. For better and for worse, CRISPR has the potential to eliminate some debilitating diseases and to create a species of superhumans. And it is so cheap and easy to use that hundreds of labs all over the world are experimenting with it.

There are also advances in 3D-printed prosthetics and bionics. One company, UNYQ, for example, is “printing” new limbs for people with disabilities. Ekso Bionics has developed robotic exoskeletons to help the paralyzed walk again. Second Sight is selling an FDA-approved artificial retinal prosthetic, the Argus II, which provides limited but functional vision to people who have lost their sight due to retinitis pigmentosa, a retinal ailment. I expect that, by 2030, we will have developed enhancements that give us perfect vision, hearing, and strength, as seen in the 1970s television series “The Six Million Dollar Man.”

Yes, it will take time for the inventions to get from the lab to people in need, and the technology elite will have these before the rest of us. But this will only be for a short period, because the way the tech industry builds value is by democratizing technology, reducing its cost and enabling it to reach billions. This is why I am so excited that companies such as IBM, Facebook, and Google are taking the mantle from the health-care industry. These companies have a motivation to keep us healthy: so that we download more applications rather than remain hooked on prescription medicines.

This column is based on my upcoming book, “Driver in the Driverless Car: How Our Technology Choices Will Create the Future,” which will be released this winter and can be preordered on Amazon.com.


AI on the Edge
by Bernard Murphy on 11-02-2016 at 7:00 am

A lot of the press we see on AI tends to be of the “big iron” variety – recognition algorithms for Facebook images, Google TensorFlow and IBM Watson systems. But AI is already on edge-nodes such as smartphones and home automation hubs, for functions like voice-recognition, facial recognition and natural language understanding. Qualcomm believes there are good reasons for functions like this not only to stay on the edge but to continue to evolve there. I talked with Gary Brotman, director of product management at QTI to understand what’s driving this trend.

Part of the reason is availability. Carrier claims notwithstanding, there are still plenty of places you can’t get cellular or WiFi coverage. That might not be a huge deal for image recognition in Facebook photos, but it becomes a very big deal if you use biometric IDs to unlock your phone or perform other critical functions. That makes it a big deal in the rural, mountainous, and heavily wooded areas that still account for the great majority of the US by area. Even urbanites accustomed to gigabit access can feel this pain when traveling any distance across the country.

Part of the reason is privacy. If your dermatologist wants to use a mobile diagnostic device to check a possible melanoma, you have a right to expect that data will be handled with extreme care and especially that it won’t be shipped off to the cloud for analysis.

And part of the reason is security. No matter how great your hardware security may be, there are plenty of holes in software, and traditional signature-based approaches to malware detection are too cumbersome, too power-hungry and too slow to change to be effective against zero-day threats.


This is not an academic concern. Gary mentioned an IDC survey reporting that while less than 1% of applications use cognitive (aka AI) technologies today, more than 50% are expected to have that capability by 2018. The demand for cognitive-enabled functions is rocketing and if at least some of that capability has to be able to work untethered from the cloud, effective local solutions become essential.

Of course this doesn’t mean that everything has to be done locally. Training for deep-learning and related methods still happens in the cloud. But once training is downloaded, recognition should be able to function independently. If permitted, new data to enhance the training dataset can be uploaded when feasible, as Tesla does in gathering data from customer vehicles.


What powers this local analysis? Gary repeated a point he and others made on a panel earlier in the day. While there are now commonly-used hardware platforms for cognitive applications (CPU, GPU and DSP for convolutional and recurrent neural nets, along with frameworks like Caffe and CUDA), the bulk of application know-how today is still in software, not least because the domain is evolving so rapidly. Qualcomm sees platforms like their Machine Learning Platform as the best way to deliver a foundation for application developers. An SDK, and frameworks offered within that SDK, hide the gory details of implementation from the developer and can provide some level of future-proofing against changes in the underlying technology.

One example application can be found in Snapdragon™ Smart Protect. This is malware detection that uses machine-learning-based behavior triggers rather than malware signatures to protect against multiple types of attack, particularly zero-day attacks. This is clever stuff. Signature-based approaches are impossibly clunky for mobile devices, are too easy to fool through mutating malware and cannot defend against zero-day attacks. Smart Protect behavioral detection looks instead at ~360 low-level behaviors which are harder to hide if the malware wants to achieve its intended objective (some examples cited include sending text messages when the user is not interacting with the device or taking photos when the display is off).
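To illustrate the general shape of behavior-based detection, here is a minimal, hypothetical Python sketch. This is not Qualcomm's implementation; the behavior names, weights and threshold are invented for illustration. The idea is to score a vector of observed low-level behaviors against learned weights and flag suspicious combinations:

```python
import math

# Invented example behaviors and learned weights (illustrative only).
WEIGHTS = {
    "sms_sent_while_screen_off": 2.4,
    "camera_used_while_screen_off": 2.1,
    "network_tx_with_no_user_input": 1.3,
    "reads_contacts_at_boot": 0.8,
}
BIAS = -3.0

def malware_probability(observed: dict) -> float:
    """Logistic score over counts of observed low-level behaviors."""
    z = BIAS + sum(WEIGHTS[k] * observed.get(k, 0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

# Two texts sent with the screen off scores ~0.85 -> flag for review.
print(malware_probability({"sms_sent_while_screen_off": 2}))
```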

Finally, Gary noted that, to further support this trend toward more processing (including AI) on the edge, Apple recently announced their position on “differential privacy” – the need to keep customer personal data out of their hands. Whatever you may think of Apple’s announcement, the principle they support is important. What we would consider personal used to be logins, passwords, bank data and other forms easily reduced to text. But increasingly we need to worry about information for facial recognition, typing behaviors, voice recognition and other biometrics which seem more abstract but could be just as damaging if leaked beyond our devices. I like what Qualcomm is doing; I might lose my phone or it might be stolen, but I still have a better sense of control over something I can hold than over whatever might be happening in some distant cloud.

You can learn more about the Qualcomm Machine Learning Platform HERE.

More articles by Bernard…


New Cortex-M7 Chip to Help Power Sophisticated IoT Solutions
by Tom Simon on 11-01-2016 at 4:00 pm

IoT architects face a dilemma in partitioning the compute power of their systems between the cloud and the edge. The cloud offers large storage and heavy-duty compute power, making it an attractive place to perform the computation needed for IoT tasks. However, moving large amounts of data from the edge to cloud servers can easily swamp the available bandwidth, and moving data is power-intensive in and of itself. In addition, many IoT applications need lower latency than can be achieved by relying on cloud compute resources to execute actions.
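The bandwidth/power tradeoff is easy to illustrate with rough numbers. In the back-of-envelope Python sketch below, every constant is an assumption of mine (not from the article); the point is only that radio transmission typically costs far more energy per byte than an MCU spends per operation:

```python
# Back-of-envelope comparison (all constants are illustrative assumptions):
# energy to ship a sensor frame to the cloud vs. processing it locally.
RADIO_NJ_PER_BYTE = 100.0   # assumed cellular radio energy, nJ/byte
MCU_NJ_PER_OP     = 0.05    # assumed MCU energy per operation, nJ/op

frame_bytes    = 100_000    # e.g., one compressed image frame
ops_to_process = 5_000_000  # e.g., a small local analysis pass

tx_uj  = frame_bytes * RADIO_NJ_PER_BYTE / 1e3
cpu_uj = ops_to_process * MCU_NJ_PER_OP / 1e3
print(f"transmit: {tx_uj:.0f} uJ, local compute: {cpu_uj:.0f} uJ")
# With these assumptions, sending the frame costs ~40x the local compute.
```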

Originally, IoT end-point devices sported small MCUs like the Atmel AVR series, but the demands on these devices quickly swamped their capabilities. As a result, a new class of processors was spawned, the most notable of which is the Cortex-M family. The Cortex-M0 is an energy-sipping processor ideally suited for low-power IoT applications. Just the same, sensor fusion and increasing complexity have created the need for significantly more powerful processors. The Cortex-M family now spans from the M0 to the M7 – a formidable processor with very advanced features.

The M7 was introduced in 2014, and many foresaw that it would bring extremely high performance and security with low power draw. One of its major features is a superscalar architecture with a six-stage, dual-issue pipeline that provides faster instruction execution – almost 2X the M4. It offers more options for memory configuration and can run at higher speeds than its predecessors.

How well a core delivered by ARM fulfills its promise depends heavily on the specific implementation. ST has embraced the ARM Cortex-M series with its STM32 family of microcontrollers. Their first was the STM32 F1 in 2007, and over the years they have added many more. One of these was a Cortex-M7 implementation, the STM32 F7, and they have just added a new high-performance processor to their lineup.

I had a chance to visit the ST booth at ARM TechCon 2016 in Santa Clara, where ST was demonstrating their new STM32 H7, their Cortex-M7 implemented at 40nm. This node was chosen for its Flash memory process and higher speeds relative to the previous 90nm F7. The results are impressive.

Just looking at the processor, we can see that it achieves a very high score of 2010 on the CoreMark benchmark, double what the STM32 F7 delivers. Even more impressive is that the chip requires only 278 µA/MHz, half of what the F7 consumes. This is important because the STM32 H7 makes more complex computation possible in edge devices while also permitting longer battery life.
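To see what that efficiency means in practice, here is some rough battery-life arithmetic in Python. The 278 µA/MHz figure is from the article; the clock rate, battery capacity and duty cycle are assumptions of mine for illustration:

```python
# Back-of-envelope battery life (the 278 uA/MHz figure is from the
# article; clock rate, battery capacity and duty cycle are assumptions).
UA_PER_MHZ = 278
clock_mhz = 400          # assumed H7-class clock rate
battery_mah = 1000       # assumed small battery

active_ma = UA_PER_MHZ * clock_mhz / 1000   # ~111 mA when fully active
duty_cycle = 0.01                           # assumed: active 1% of the time
avg_ma = active_ma * duty_cycle

print(f"active draw: {active_ma:.0f} mA")
print(f"~{battery_mah / avg_ma / 24:.0f} days at {duty_cycle:.0%} duty cycle")
```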

But processor speed and efficiency are only part of the picture. ST has designed the H7 with three power domains to allow flexibility in power management. Unused domains can be shut down to save power, and the 40nm process node offers dynamic voltage scaling. Below is a diagram that shows the power domain partitioning.

The STM32 H7 is not lacking in security features either. Edge nodes in the IoT present a higher potential security vulnerability than physically secure server and hub devices. To deal with this, secure boot and code security are necessary, and software updates need secure validation to deter malware and tampering. The STM32 H7 is designed to deal with eavesdropping, server spoofing, and fake devices.

ST has included secure memory for system and application usage, along with embedded and protected cryptographic keys. To facilitate secure communications, the H7 adds a cryptographic HW accelerator, a hashing accelerator and a true random number generator. For code security there is flash memory read and write access protection, a memory protection unit and tamper protection.
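As one illustration of why those blocks matter, here is a minimal sketch of validating a firmware update before applying it. Real secure-boot schemes use asymmetric signatures checked against a key fused into hardware; this stand-in uses an HMAC from Python's standard library purely to show the shape of the check:

```python
import hashlib, hmac

DEVICE_KEY = bytes.fromhex("00" * 32)   # stand-in for a fused, protected key

def update_is_authentic(image: bytes, tag: bytes) -> bool:
    """Recompute the MAC over the image and compare in constant time."""
    expected = hmac.new(DEVICE_KEY, image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

firmware = b"\x7fELF...new firmware image..."
good_tag = hmac.new(DEVICE_KEY, firmware, hashlib.sha256).digest()
assert update_is_authentic(firmware, good_tag)            # accepted
assert not update_is_authentic(firmware + b"x", good_tag) # tampered: rejected
```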

At ARM Techcon I was able to watch the demo they provide with the development boards. On a small LCD touch screen they were running 4 concurrent videos using hardware acceleration. They also showed me another impressive demo that highlights their double precision FPU capabilities.

The smaller of the development boards looks something like an Arduino board with the standard Arduino IO pins. The larger board has a large number of interface options including video, networking and much more. The device is pin-compatible with its predecessor. The software development ecosystem of development tools and libraries is comprehensive, and many high-level interfaces are available for peripherals to make development easier and more efficient. Below is an overview of the ecosystem.

There is a lot more about the STM32 H7 that makes it very compelling for IoT developers besides its high performance. I’ve not even touched on the extensive device and protocol support built into the device. I’d encourage you to dig deeper by looking at the ST website.


IoT From SEMI Meeting: EDA, Image Sensors, MEMS
by Daniel Payne on 11-01-2016 at 12:00 pm

Last Friday I learned something new about IoT by attending a SEMI event in Wilsonville, OR just a few short miles away from where I live in Tualatin. SEMI puts on two events here in Oregon each year, and their latest event on IoT Sensors was quite timely and popular judging by how many attendees showed up. First up was Jeff Miller from Tanner EDA, now owned by Mentor Graphics.
Continue reading “IoT From SEMI Meeting: EDA, Image Sensors, MEMS”


The challenge of insecure IoT
by Bernard Murphy on 11-01-2016 at 7:00 am

An attack on Dyn (a DNS service provider) through a distributed denial of service (DDoS) attack brought down GitHub, Amazon and Twitter for a while and is thought to have been launched through IoT devices. Hangzhou Xiongmai, a provider of webcams and the most publicly pilloried source of weakness in the attack, is now recalling all its webcams in the US.

The problem, per one review, is that devices were all shipped with the same default credentials (login and password) and, worse yet, these were hardcoded into the firmware and not possible to change using the software provided with the system. Further, the web interface for these devices either doesn’t check for credentials, or that check is easily bypassed. For this class of web weaknesses, it is believed that over half a million devices today are more or less trivially vulnerable. Which equally means that it can be rather easy not only to compromise a device but also to build botnets to launch DDoS attacks against whatever targets you want. To get a sense of hacker enthusiasm for this area, google “uc-httpd”.

I’m guessing that some of the problem here is cost for the supplier – small margins don’t encourage significant investment in security. Some is probably lack of sufficient security understanding – “yeah, we got security features”. (A frightening number of engineers have told me that having a cryptography core in their design means they’ve taken care of security.) And some probably has been a lack of standard idiot-proof security platforms.

The ARM CoreLink SSE-200 subsystem, together with mbed and mbed Cloud, could go a long way to providing the idiot-proof part of a solution, since it takes control of credential management, among other security-related features, away from the supplier. Of course the consumer of a webcam would have to do a little cloud work to establish their device with validated credentials, but that doesn’t seem like it should be too onerous.
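A minimal sketch of the idiot-proof alternative to hardcoded defaults (my illustration, not ARM's mbed API): provision each unit with unique random credentials at manufacturing time, rather than baking one shared login into the firmware image. Python's standard `secrets` module is enough to show the idea:

```python
import secrets

def provision_device(serial: str) -> dict:
    """Generate unique credentials per unit at manufacturing time,
    instead of shipping one shared default in the firmware."""
    return {
        "serial": serial,
        "login": f"dev-{serial}",
        "password": secrets.token_urlsafe(16),  # unique per device
    }

creds = provision_device("WC-000123")  # hypothetical serial number
print(creds["login"], creds["password"])
```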

But meanwhile there are 500k easily-hacked devices out there, and it seems likely that at least some suppliers will continue to cut corners or simply take time to come up the security learning curve. There will likely be a lot of potentially hostile devices in the IoT for some time. A tricky problem here is that the threat posed by such a device is not necessarily to the owner, since DDoS attacks simply use devices as launch points to attack some other target. The owner may not be aware, or if aware may not care, that their device is part of a problem.

So while it is important to protect devices and their link to the cloud, in some sense it is also important to protect “the system”. The network has to be protected because your well-protected, fully credentialed device can still be rendered effectively inoperative if network traffic is swamped by a DDoS attack. And devices within the network have to be protected because if even one is a little weak, an attacker can exploit that weakness to gain privilege, from which they can then run rampant through the network. Paradoxically, this becomes even easier if nodes in the network are based on a common architecture.

Point being, while it is important to have solid protection for a device and its connection to the cloud (as provided by the ARM IoT integrated solution), it’s also important to think about system-level defenses which can isolate/disable distributed attacks and compromised devices. You can read a quick version of the Xiongmai role in the Dyn attack HERE and a little more technical detail HERE.

More articles by Bernard…


Behind the 3DEXPERIENCE for Silicon
by Don Dingee on 10-31-2016 at 4:00 pm

We’ve been covering the Dassault Systèmes “Silicon Thinking” platform for a while here, but, as I’m often prone to do, I wanted to explore the backstory to uncover more about the concept. With over 25M users of their product lifecycle management (PLM) solutions, why is Dassault Systèmes becoming so interested in semiconductor EDA? Continue reading “Behind the 3DEXPERIENCE for Silicon”