
NVIDIA’s Deep Learning GPUs Driving Your Car!

NVIDIA’s Deep Learning GPUs Driving Your Car!
by Mitch Heins on 12-09-2016 at 4:00 pm

In a recent SemiWiki article it was noted that 5 of the top 20 semiconductor suppliers are showing double-digit gains for 2016. At the top of the list was NVIDIA with an annual growth rate of 35%. Most of this gain is due to sales of its graphics processors (GPUs), which one normally associates with high-performance computer gaming engines. The thing that caught my eye, though, was that while gaming had a hefty 65% growth, IoT (Internet of Things) and Automotive accounted for 193% and 61% growth, respectively.

OK, IoT is growing everywhere so a large % ramp makes sense. But what is NVIDIA doing in automotive? My first thought went to graphics applications like heads-up displays, but after a little digging I found a very interesting niche that NVIDIA has exploited with their GPUs, and that's in the area of ADAS (Advanced Driver Assistance Systems).

The idea of a ‘connected car’ has been around for a while, with its roots going back as far as the 1960s, when General Motors (GM) had a project called DAIR (Driver Aid, Information and Routing). The vision was right, but the technology and ecosystem to support it were not up to the task.

It would be 30 years later, in the mid-1990s, before GM would announce OnStar connectivity in cars, and another 10 years after that, in the mid-2000s with the advent of smartphones, that we would see ‘infotainment’ apps added. Add another 10 years and we have reached the present age, where multiple pieces of the ecosystem are now in place to support GM's original vision and much more.

These ecosystem pieces include 4G cellular ubiquity, high speed cloud-based data centers, cars with dozens of micro controllers and processors and a plethora of sensors, both in the cars and in the environment in which we drive. All of this technology is being pulled together with embedded software and connections to the cloud, in what we now call the internet-of-things or IoT.

This progression has led to an impressive amount of investment to take ADAS to the next level with the advent of autonomous or self-driving cars. Self-driving cars require multiple connected technologies to work together: GPS, radar, lidar, sonar, cameras, and sensors to name a few. All of these technologies generate massive amounts of heterogeneous data that must be analyzed together in real-time. Suppliers have stepped up with AI (artificial intelligence) technologies that process the information to enable a car to predict how surrounding objects might behave and to make decisions on how the car should respond.

One of the favorite algorithms for this type of AI work is the Deep Neural Network (DNN). DNNs mimic the neuron connections of the human brain, and like the brain they must be trained. Training a DNN takes a long time. As an example, for image recognition alone a DNN must be shown millions of images, and for each image it must go through a process where it identifies and classifies objects in the image. Results are graded against known good answers, and corrections are then fed back into the DNN until it can successfully identify and classify all of the objects.

This is where NVIDIA has found a niche. It turns out that you can use the massively parallel nature of NVIDIA’s GPUs to dramatically reduce the time spent training a DNN. And as an added bonus, once the DNN is fully trained you can use the GPU to execute the DNN code on massive amounts of data in real time (milliseconds, not hours) and this is exactly what the creators of self-driving cars need to complete the technology portion of their eco-system.
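To make the train-grade-correct loop concrete, here is a minimal Python/PyTorch sketch of the kind of loop described above, with the computation pushed onto a GPU when one is available. It is purely illustrative, not NVIDIA DIGITS code; the tiny network, the ten classes and the data loader are assumptions made for the example.

```python
# Minimal sketch of the train-grade-correct cycle described above.
# Not NVIDIA DIGITS code; the network size, 10 classes and the data loader
# are assumptions made purely for illustration.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"   # use the GPU when present

model = nn.Sequential(                 # a deliberately tiny image classifier
    nn.Flatten(),
    nn.Linear(32 * 32 * 3, 128),
    nn.ReLU(),
    nn.Linear(128, 10),                # 10 object classes
).to(device)

loss_fn = nn.CrossEntropyLoss()        # grades predictions against known-good labels
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

def train_epoch(loader):
    """One pass over the labeled images: predict, grade, feed corrections back."""
    for images, labels in loader:      # millions of labeled images in practice
        images, labels = images.to(device), labels.to(device)
        predictions = model(images)                # identify and classify objects
        loss = loss_fn(predictions, labels)        # compare against known answers
        optimizer.zero_grad()
        loss.backward()                            # corrections flow back into the DNN
        optimizer.step()
```

Once trained, the same model can be run in inference mode on the GPU, which is the real-time use that the self-driving car needs.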

To make it easier for anyone to use their GPU in this fashion, NVIDIA has developed an entire suite of software development tools called DIGITS (Deep Learning GPU Training System) that can be used to rapidly train DNNs for image classification, segmentation and object detection tasks. The nice thing about DNNs, though, is that they are useful for much more than just image processing. DNNs can be used for data aggregation and to convolve heterogeneous data of widely differing types (images, text, numerical data etc.) which fits perfectly with the problem space of IoT systems in general.

The self-driving car is just one instance of where data aggregation and convolution will occur in the world of IoT. Morgan Stanley expects that market to grow to $1.3 trillion by 2022 in the U.S. alone. If NVIDIA can win the processor sockets for that market and other IoT adjacencies where DNNs can be used, their growth for the next several years will be well assured.

See also:
5 of the Top 20 Semiconductor Suppliers to Show Double-Digit Gains in 2016!
IoT and Automotive to Drive IC Market Growth Through 2020
Who owns the road? The IoT-connected car of today – and tomorrow
Which GPU to use for deep learning?


Designing for Ultra-Low Power? Easier with “CLICK” IP

Designing for Ultra-Low Power? Easier with “CLICK” IP
by Eric Esteve on 12-09-2016 at 10:00 am

Designing for ultra-low power will become the mantra for many new SoC designs, but the related SoC architecture can be very complex to handle. Make or buy is the project manager's choice, but if you decide to ask for expert advice before jump-starting a ULP SoC design, attending this webinar from Dolphin Integration, “The Proven Recipe for ULP SoC,” may be wise.
I have known Dolphin since 1987, when the company was designing the PIAF for the smart card inventor (Roland Moreno), who wanted to re-invest the money flowing from the smart card patent rights into applications linked with the invention. The PIAF was a smart card reader system dedicated to car parking in France. The driver bought a card and the reader, then simply left the reader inside the car when parking as proof of payment. The major requirement of the system was to be… low power.

The mixed-signal chip was designed in a 2 micron (2,000 nm) process. Designing for low power was like a revolution at that time, as every chip maker was trying to offer the best performance and the highest clock frequency.

This is why, in 2016, Dolphin Integration is sharing its 30 years of experience in low-power design. Dolphin is proposing a complete methodology, including the various IP developed to support ultra-low power design: the central activity controller, resource controller and local activity controller, as shown in the picture below.


Chip designers are more and more involved in the design of connected devices, most often battery powered, and one of the main goals is to drastically decrease the chip power consumption. A few years ago, the race for high performance at reasonable cost was the main goal, except for the teams designing for wireless mobile applications. Many chip-makers have to adapt their design practices to the challenge of designing for low power, and even for ultra-low power.

When deciding whether to select Dolphin's solution, you are not only dealing with the make-vs-buy question. Designing a power-aware SoC architecture significantly increases design complexity, impacting development schedule and cost, and could jeopardize the project if you miss the TTM window. Dolphin knows that dynamically switching power domains on and off can lead to severe noise issues. Selecting a proven, pre-defined embedded control network instead of designing it from scratch can dramatically increase the level of confidence in your SoC's noise immunity. It will help the design team efficiently implement these new techniques while staying in line with time-to-market (TTM) requirements.

This webinar will propose a step-by-step method for adopting more complex SoC architectures based on multiple power domains, which also require embedding the whole power regulation network. Dolphin will explain a rigorous methodology based on step-wise analysis of the four intertwined embedded SoC networks: functional, clock, power regulation and mode control. The approach may look theoretical, but it is very practical, as for each network you can associate a set of silicon IP solutions that the designer can implement.

For the functional part of the circuit, you implement the power gating of any power island or domain by using CLICK (the power island kit). Likewise, the DELTA library of voltage regulators and monitors is used to build the power regulation network of the SoC. Dolphin will present what it calls “SoC Fabric IP”, a set of IP (voltage regulators, clock generators, monitors, …) that allows a pre-defined embedded control network to be implemented. The designer could develop a monolithic, full-custom Activity Control Unit (ACU) or Power Management Unit (PMU / PMU logic), but such a homemade control network is known to be complex to develop, tedious to validate and little amenable to architectural updates.
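To give a feel for what such an activity control network has to do, here is a conceptual Python sketch of the power-down/power-up sequencing a power-gated island typically needs: assert isolation, save retention state, open the power switch, then reverse the steps on wake-up. This only models the idea; it is not Dolphin's CLICK or DELTA IP, and every name in it is hypothetical.

```python
# Conceptual sketch only (not Dolphin's CLICK or DELTA IP): the ordering an
# activity control unit typically enforces when power-gating a domain.
from enum import Enum

class DomainState(Enum):
    ON = 1
    OFF = 2

class PowerDomain:
    def __init__(self, name):
        self.name = name
        self.state = DomainState.ON
        self.isolation = False
        self.retained = None

    def power_down(self, register_bank):
        # 1) isolate outputs so floating signals cannot disturb always-on logic
        self.isolation = True
        # 2) save state into retention registers before removing power
        self.retained = dict(register_bank)
        # 3) open the power switch for the island
        self.state = DomainState.OFF

    def power_up(self):
        # reverse order: restore power, restore state, then release isolation
        self.state = DomainState.ON
        restored = dict(self.retained or {})
        self.isolation = False
        return restored

domain = PowerDomain("sensor_island")          # hypothetical island name
domain.power_down({"ctrl": 0x3, "status": 0x1})
print(domain.power_up())                       # {'ctrl': 3, 'status': 1}
```

Getting this sequencing (and its analog counterparts in the regulators) right across many domains is exactly the complexity that a pre-validated control network is meant to take off the design team's plate.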

Designing for ultra-low power will become the mantra for many new SoC designs, but the related SoC architecture can be very complex to handle. If the design team has to develop a monolithic, full-custom Activity Control Unit (ACU) or Power Management Unit, it will certainly get there eventually. But doing so is like re-inventing the wheel, with the risk of falling into traps that impact the design schedule, the development cost and the time-to-market. In the worst case, the design integrity can be compromised by poor noise immunity, leading to a redesign. Make or buy is the project manager's choice, but if you decide to ask for expert advice before jump-starting a ULP SoC design, attending this webinar from Dolphin is certainly a good idea!

Dolphin will hold another live webinar on December 13 at 5:00 PM GMT (9 AM PST). This webinar targets SoC designers who want to learn how to quickly implement ultra-low power (ULP) techniques using a proven methodology.
To register, use this webinar link

By Eric Esteve from IPNEST

 


Managing International Design Collaboration

Managing International Design Collaboration
by Bernard Murphy on 12-09-2016 at 7:00 am

Customer perspectives on a tool are always interesting, as much for why they felt the need for the tool as for how it is working out for them in practice. Active-Semi, an emerging leader in power management and digital motor drive ICs, gave a presentation at CDNLive describing, succinctly, why they adopted ClioSoft tools for design collaboration and their experience with those tools.

You should know first that Active-Semi has design centers in Vietnam, China and the US, and foundry and design partners in multiple countries. Unsurprisingly, they observed that efficient communication between these distributed teams can be a problem; syncing between fast-running design, test and applications teams often uncovers mismatches requiring rework where a more tightly integrated collaboration could avoid such problems. Examples of very familiar problems they cite are teams not aware of latest changes to spec and architecture docs, incorrect versions of files used in handoffs and layout engineers accidentally modifying schematics.

Natural evolution of design data management tends to start with roll-your-own. Active-Semi stated that syncing up data between multiple sites can be challenging, especially since not all countries have IT infrastructure as transparent and as fast as we have in the US. Network disk-space management also becomes a big problem as disk space balloons rapidly in multiple copies, especially (in my view) given the size of layout databases. The company quickly came to the realization that home-brewed ftp-based solutions would not work for them. Of course, they could have built fancier home-brewed solutions but, to their credit, they decided that their core competency is in the chip-building business, not the data management system building business, so they looked for a commercial solution.

Active-Semi chose ClioSoft-SOS as their data/configuration management solution and are now using it in production development. Capabilities they were looking for included:

  • Securing data transfer transparent to the designer with fast & efficient sharing of design data across multiple sites
  • Access control both to ensure teams can only change what they are allowed to change and that they cannot view confidential data
  • Traceability: getting a complete audit of exactly what is happening in the project
  • Accountability: knowing who made what changes and when
  • Recoverability: easily reverting incorrect changes when needed
  • Recording important milestones in a project

It was especially important to Active-Semi that the system be usable across the whole product life cycle (from spec to signoff), be very easy to use and administer, and not require designers to be IT experts.


As a case study, they talk about their experience in building an SSD power manager. This is a fairly sophisticated device with multiple buck regulators and LDOs, NVM and SPI communication and a programmable state machine, all documented in a ~70-page datasheet. The device was built using Cadence Virtuoso with ClioSoft-SOS to manage data/configurations. Active-Semi was able to build the whole device – from spec to packaged samples – in 16 weeks, of which just 6 weeks were consumed by design. The design team on this device was distributed between Dallas, Japan, Vietnam and Shanghai – a significant stress-test of the effectiveness of the collaboration solution.

Key points they note in why the solution works are that the ClioSoft product is tightly integrated with Cadence Virtuoso, that teams can avoid collisions without needing 24/7 communication (especially thanks to tight access controls), that revision control allows backup to earlier/safer versions, and that network disk space is managed much more effectively (through links) than they were previously able to accomplish through manual management.

Active-Semi added what I think is the ultimate accolade for any piece of software: “It just works”. I was always taught that the best software products should be almost invisible. They deliver the behavior you want while intruding as little as possible on the real job you want to do. It sounds like ClioSoft has accomplished exactly that objective for Active-Semi. You can read the detailed presentation HERE.

Also Read

Making your AMS Simulators Faster (webinar)

3 small-team design productivity challenges managed

Organizing data is first step in managing AMS designs


Webinar: ARM Security Solution for IoT

Webinar: ARM Security Solution for IoT
by Daniel Nenni on 12-08-2016 at 4:00 pm

Yossi Weisblum will be presenting ARM’s IoT security solution during the Open Silicon webinar that I am moderating next week. Yossi manages product marketing for ARM’s CryptoCell subsystem. He has an extensive background in product marketing across several platforms, including connectivity, wireless, multimedia and mobile. Prior to joining ARM in 2016, Yossi worked at Intel for over ten years where he was instrumental in the development of the company’s wireless connectivity solutions.


Joining us is Kalpesh Sanghvi, SoC and Solutions Manager for Open Silicon. Kalpesh has over a decade of professional experience in the semiconductor and embedded industry. He has in-depth knowledge of software development and bring-up for SoC/ASIC designs, and domain expertise in IoT, storage solutions, security solutions, networking and multimedia reference designs. Kalpesh is also experienced in ASIC design flows, pre-silicon and post-silicon bring-up and validation as well as prototyping solutions.

We started tracking security-related content at the end of 2015, and based on the SemiWiki analytics I can tell you for a fact that EVERYBODY is concerned about security, especially IoT security.

REGISTER HERE

This joint Open-Silicon and ARM® webinar, moderated by Daniel Nenni, CEO and founder of SemiWiki, will address the security issues associated with IoT edge devices and how to make them secure with custom SoCs. The key focus areas for security in IoT edge devices are secure boot, data security, tamper proofing and device authentication. Efficient security features are implemented with a combination of hardware and software. Features like root of trust with secure boot and tamper proofing with physical security are more efficient when implemented in hardware and IP by a turnkey ASIC vendor. Features like data security and device authentication are more efficiently implemented in software by OEMs leveraging purpose-built hardware.

The advantages of hardware-implemented security features with custom SoCs include a significant improvement in acceleration time (e.g., boot-up time), mitigation of potential tampering, and enabling a purpose-built device from a system point of view. The ARM TrustZone® CryptoCell family of security IPs provides hardware-based platform security for cost-efficient implementation in custom SoCs, as well as a fast path to market. Open-Silicon's custom SoC IoT platform, based on ARM's Cortex-M and TrustZone® CryptoCell, enables OEMs to develop secure IoT edge devices with lower risk and shorter development time. This platform supports root of trust with secure boot and secure over-the-air firmware/application upgrades.
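As a rough illustration of the root-of-trust idea behind secure boot (and only that; this is not ARM CryptoCell or Open-Silicon code), here is a minimal Python sketch in which a boot stage refuses to run any firmware image whose signature does not verify against an immutable public key. The RSA/PKCS#1 signature scheme and the function names are assumptions made for the example.

```python
# Minimal sketch of the secure-boot idea (not ARM CryptoCell or Open-Silicon code):
# a boot stage holds an immutable public key and refuses to run firmware whose
# signature does not verify against it.
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.exceptions import InvalidSignature

def verify_and_boot(firmware_image: bytes, signature: bytes, rot_public_key_pem: bytes) -> bool:
    """Return True (and 'boot') only if the image is signed by the root-of-trust key."""
    public_key = serialization.load_pem_public_key(rot_public_key_pem)
    try:
        public_key.verify(
            signature,
            firmware_image,
            padding.PKCS1v15(),       # RSA/PKCS#1 chosen only for illustration
            hashes.SHA256(),
        )
    except InvalidSignature:
        return False                  # tampered or unauthorized image: stay in recovery
    return True                       # chain of trust extends to the next boot stage
```

In a real SoC the key and the verification code live in ROM or dedicated hardware so they cannot themselves be tampered with, which is what makes them the root of trust rather than just another piece of software.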

We just did a practice run of the presentations, and it will absolutely be worth your time if you too are involved with or concerned about IoT security. The webinar is on Tue, Dec 13, 2016 from 8:00 AM to 9:00 AM PST. After a brief introduction by me, Yossi and Kalpesh will present, leaving plenty of time for Q&A. And to top it off: attend the webinar, ask a question, and if your question is addressed during the webinar I will make sure you get a signed copy of “Mobile Unleashed: The origin and evolution of ARM processors in our devices”. Sound reasonable?

REGISTER HERE


ARM and Mentor talk about Real Time Virtualization, Webinar

ARM and Mentor talk about Real Time Virtualization, Webinar
by Daniel Payne on 12-08-2016 at 12:00 pm

Processor cores come in a wide variety of speeds, performance levels and capabilities, so it may take you some time to find the proper processor for your system. Let's say that you are designing a product for the industrial, automotive, military or medical markets that has an inherent requirement for safety, security and reliability – which ARM processor would fit the task? The A-series processors are popular and well-known, however the new Cortex-R52 is specifically designed for safety, security and reliability applications.

You will want to learn more about the Cortex-R52 at a webinar next week that is hosted by experts from both ARM and Mentor Graphics. A few highlights about the Cortex-R52 to whet your appetite:

  • Deterministic microarchitecture
  • Deterministic memory
  • Extensibility
  • Fast interrupt entry
  • Fast context switching
  • Scalability from 1 to 4 cores


Expect to learn a few ideas at this webinar:

  • Advantages of using Cortex-R processors
  • Using devices and software with isolated hypervisors to manage safety and security events
  • Cortex-R52 used in ADAS applications

Register for the webinar today; there are two time zones available:

Jon Taylor from ARM and Felix Baum from Mentor are the two presenters, and you’ll have time to type in any questions during the webinar for answering at the end. I look forward to attending the webinar and sharing my thoughts in the next blog.

Jon Taylor
Jon Taylor is a Product Specialist in the CPU product marketing team at ARM with responsibility for the Cortex-R family of processors. His background includes real-time software development for multi-core microcontrollers and safety certified RTOS development.

Felix Baum
Felix Baum is working in the Product Management team of the Mentor Graphics Embedded Software Division, overseeing the virtualization and Multi-OS and Multi-Core technologies. Felix has spent nearly 20 years in the embedded industry, both as an embedded developer and as a manager. During the last few years he led product marketing and product management efforts for various real-time operating system technologies and silicon architectures. Before that, working in business development, he managed the technical needs of strategic alliance partners around the globe, helping them address the challenges of integrating and promoting joint solutions for mutual customers. Prior to that as a field applications engineer in the greater Los Angeles area, he consulted with customers on the development of highly optimized devices for a broad range of industries, including Aerospace, Networking, Industrial, Medical, Automotive and Consumer. Felix started his career at NASA’s Jet Propulsion Laboratory at the California Institute of Technology, designing flight software for various spacecraft and managing a launch campaign for the GRACE mission. Felix holds a master’s degree in Computer Science from the California State University at Northridge and a Master of Business Administration from the University of California at Los Angeles.


Design for Ultra-Low Power LTE: CEVA Webinar

Design for Ultra-Low Power LTE: CEVA Webinar
by Bernard Murphy on 12-08-2016 at 7:00 am

You might have thought that ultra-low power communication for the IoT was limited to standards like BT5 and 802.15.4 (e.g., in ZigBee and Thread), which depend on gateways to cellular networks and limit reach, especially deep inside buildings. But now there's a new standard for ultra-low power and ultra-low cost based on LTE, known as Cat-NB1. If you want to understand more about communications options for your IoT devices, you should attend this webinar.

REGISTER HERE

Summary

The 3GPP recently completed the standardization of a new LTE-based narrowband technology to support the emerging needs of cellular IoT devices. Known as Cat-NB1 (formerly NB-IoT), this new low-data-rate category will enable the development of ultra-low power, ultra-low cost solutions that can leverage the existing LTE infrastructure for a new wave of connected devices. However, in order to efficiently leverage the Cat-NB1 standard, a new approach to cellular SoC design is required, where low-data-rate transfer at very long time intervals is the goal. As such, solution designers may need to step out of their “comfort zone” and reevaluate existing modem solutions.

Join CEVA experts to hear about:

  • A Cellular IoT market overview
  • Introduction to Cat-NB1
  • Cat-NB1 design considerations and SoC architecture overview
  • CEVA’s solution compared to traditional designs

Target Audience
Communication and system engineers targeting the cellular IoT segment as well as similar Low Power Wide Area Network (LPWA) standards; IoT solution designers interested in low-end devices with low throughput requirements

REGISTER HERE


Emmanuel Gresset – Business Development Director, CEVA Wireless BU


Tal Shalev – Senior Communication System Architect, CEVA Wireless BU

Naturally, CEVA already has solutions in this space. As the leading licensor of signal processing IPs for smarter, connected devices, CEVA together with NextG-Com Limited (NextG-Com) has introduced two pre-integrated narrowband LTE solutions designed to simplify the development of Cat-M1 and Cat-NB1 (NB-IoT) modems for cost-sensitive Internet of Things (IoT) applications.

The solutions incorporate the recently introduced CEVA-X1 single-core IoT processor together with NextG-Com's ALPSLite-M Cat-M1 or ALPSLite-NB Cat-NB1 protocol stacks, in a highly compact and power-efficient manner. Addressing the cost-sensitive nature of cellular IoT devices, the CEVA-X1 is capable of handling the full DSP and CPU processing loads for these modems, eliminating the requirement for a separate CPU controller in the system. Furthermore, the solutions have been carefully optimized to ensure the smallest memory footprint, minimizing the requirements for costly embedded flash and static RAM. Ensuring maximum power efficiency for long-life cellular IoT applications such as asset trackers or smart meters, the CEVA-X1 incorporates dedicated NB-IoT instructions, allowing it to operate at a clock speed of less than 100 MHz for a complete Cat-NB1 solution.

“Cellular IoT offers huge potential in terms of the breadth of applications it will enable and the massive global scale on which these devices can be deployed,” said Denis Bidinost, Chief Executive Officer of NextG-Com. “Our partnership with CEVA brings together their dedicated IoT processor with fully-optimized implementations of our Cat-M1 and Cat-NB1 protocol stacks to significantly simplify the integration of narrowband LTE connectivity into cost-sensitive, power-optimized IoT product designs.”

CEVA is the leading licensor of signal processing IP for a smarter, connected world. We partner with semiconductor companies and OEMs worldwide to create power-efficient, intelligent and connected devices for a range of end markets, including mobile, consumer, automotive, industrial and IoT. Our ultra-low-power IPs for vision, audio, communications and connectivity include comprehensive DSP-based platforms for LTE/LTE-A/5G baseband processing in handsets, infrastructure and machine-to-machine devices, computer vision and computational photography for any camera-enabled device, audio/voice/speech and ultra-low power always-on/sensing applications for multiple IoT markets. For connectivity, we offer the industry's most widely adopted IPs for Bluetooth (Smart and Smart Ready), Wi-Fi (802.11 b/g/n/ac up to 4×4) and serial storage (SATA and SAS).


Mentor’s Battle of the Photonic Bulge

Mentor’s Battle of the Photonic Bulge
by Mitch Heins on 12-07-2016 at 4:00 pm

A few weeks back I wrote an article mentioning that Mentor Graphics has been quietly working on solutions for photonic integrated circuits (PICs) for some time now, while one of their competitors has recently established a photonics beachhead. One of the most common challenges for PIC designs is their curvilinear nature, thus the reference to the now infamous battle of the bulge. To continue the parlance, Mentor has, in fact, been attacking PIC design on multiple battle fronts. Don Dingee wrote a nice article about one of those efforts in which Mentor’s Tanner team has been working with Luceda to make photonic design more straightforward (see link below).

I also wrote an article highlighting some of Mentor’s efforts in the area of Electro-Optical Design using their Pyxis and Calibre tool sets. The one point that hasn’t really been made in these articles, though, is that Mentor’s Calibre design rule checking (DRC) and layout-vs-schematic (LVS) technologies hold a strategic articulation point for any design going into a silicon-based foundry. Calibre is the de facto sign-off DRC & LVS tool for all production silicon foundries, and as silicon photonics starts to take hold, it will be essential that those foundries have good photonic DRC & LVS rule decks in place. Turns out this is a bit trickier than one might imagine. The good news is that Mentor has been working on this now for more than five years, and they have built up a wealth of experience.

As mentioned, the challenge when doing DRC for PICs is the curvilinear data, the aforementioned bulge. These curves get digitized into polygons with thousands of points that represent the curves in a piece-wise linear fashion, each of which must be snapped onto a manufacturing grid. The result is that a normal DRC deck will report thousands of false errors from these highly discretized shapes.
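A toy calculation makes the problem easy to see. The Python sketch below is not a Calibre rule deck; the grid, radii and rule value are invented for the example. It digitizes two concentric circles, snaps every vertex to a manufacturing grid, and then applies a naive minimum-spacing check; the grid snapping perturbs the measured gap enough that a rule set to the drawn spacing flags large numbers of false violations.

```python
# Toy illustration (not a Calibre rule deck): digitize two concentric circles,
# snap the vertices to a manufacturing grid, and apply a naive spacing check.
# Grid snapping makes the measured gap wobble around the drawn value, which is
# why a standard DRC deck floods curvilinear layouts with false errors.
import math

GRID = 0.005                          # 5 nm manufacturing grid (assumed)
R_INNER, R_OUTER = 10.000, 10.200     # drawn spacing = 0.200 um
MIN_SPACE = 0.200                     # spacing rule equal to the drawn value

def snapped_circle(radius, n_points=4000):
    pts = []
    for i in range(n_points):
        theta = 2 * math.pi * i / n_points
        x = round(radius * math.cos(theta) / GRID) * GRID   # snap to grid
        y = round(radius * math.sin(theta) / GRID) * GRID
        pts.append((x, y))
    return pts

inner, outer = snapped_circle(R_INNER), snapped_circle(R_OUTER)
false_errors = 0
for (xi, yi), (xo, yo) in zip(inner, outer):
    gap = math.hypot(xo - xi, yo - yi)
    if gap < MIN_SPACE - 1e-9:        # naive point-to-point spacing check
        false_errors += 1

print(f"{false_errors} of {len(inner)} sample points flag a (false) spacing error")
```

A rule deck that knows these vertices belong to a curve with a known radius and drawn spacing can waive this noise, which is exactly the kind of knowledge Mentor's equation-based checks exploit.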

The example picture on the left from one of Mentor’s silicon photonics white papers shows concentric circles very typical of a photonics design style used to make spiral delay lines. Note the thousands of colorations along the design shapes denoting error markers from DRC using a normal DRC deck. The picture on the right is the same set of concentric circles, but now they are DRC clean. In this case, the Calibre team used their experience to modify the DRC deck to redefine the rules for width and spacing, given the knowledge that these shapes were indeed digitized curves with a given radius of curvature and spacing.

Two things were needed to make this happen. The first is that information needed to be sent to Calibre tools so that they knew to treat these shapes differently. This was done by a flow that Mentor put together with PhoeniX Software, which is used in Mentor’s Pyxis flow to generate these types of shapes. The second thing that needed to happen was to have a DRC capability that could use the information forwarded from PhoeniX to perform a type of check that was amenable to the identified shapes. Mentor put their equation-based DRC capabilities to work to solve this problem.

A second challenge for PICs is how best to perform LVS. The good news is that PIC designers are now using a PDK approach to building their designs. This means that many of the tricky LVS challenges can be subsumed within the PDK components. This includes things like logical connections between waveguides that transfer data through evanescent coupling, where the waveguides don't actually touch. These connections become components with pins, as opposed to the LVS tool having to discern a coupler vs. two waveguides simply passing each other. This makes the checking of basic connections easier, especially if one has a schematic with which to compare.
Another photonic LVS problem is checking schematic parameters against the layout. In electronic ICs, this would be comparable to measuring transistor widths and lengths in the layout and then comparing them to the W and L parameters in the schematics. With PICs this is much harder, as the parameters are not simple scalar values, but instead mathematical relationships. This has been a research subject of groups like PLAT4M (of which Mentor has been a member), and Mentor has come up with some unique and powerful solutions to these problems.

One solution, as already mentioned, uses forward-annotated data where Calibre compares the layout to a rendering based on the forward-annotated equations. This can be done using a combination of Calibre’s Design for Manufacturing capabilities, including its YieldServer API and DFM database, and geometric operations in the Calibre DESIGNrev finishing tool. Additionally, Mentor has come up with a nifty way to do the equivalent of a geometric checksum on PDK components based on parameters as specified in the schematic. This checksum can be computed for the actual layout shapes found and compared against the logical checksums from the schematics, flagging layouts that don’t match their schematic counterparts.
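Conceptually, the geometric checksum works something like the Python sketch below. This is my illustration, not Mentor's implementation: reduce both the schematic parameters and the layout geometry of a PDK component to the same canonical parameter set, hash each, and flag the instance when the two digests disagree. The parameter extraction here is deliberately trivial; real components need the path-length and radius-of-curvature math discussed above.

```python
# Conceptual sketch of the geometric-checksum idea (not Mentor's implementation):
# derive one fingerprint from the schematic parameters of a PDK component and
# one from its layout polygon, then compare the two during LVS.
import hashlib

def schematic_checksum(params: dict) -> str:
    """Hash the ordered schematic parameters of a PDK component instance."""
    canonical = ";".join(f"{k}={params[k]:.6f}" for k in sorted(params))
    return hashlib.sha256(canonical.encode()).hexdigest()

def layout_checksum(polygon, grid=0.001) -> str:
    """Reduce the drawn polygon to the same parameters and hash them."""
    # toy extraction: bounding-box length and width snapped to the grid
    xs = [p[0] for p in polygon]
    ys = [p[1] for p in polygon]
    measured = {"length": round((max(xs) - min(xs)) / grid) * grid,
                "width":  round((max(ys) - min(ys)) / grid) * grid}
    return schematic_checksum(measured)

params = {"length": 25.0, "width": 0.45}                       # from the schematic
poly = [(0.0, 0.0), (25.0, 0.0), (25.0, 0.45), (0.0, 0.45)]    # from the layout
print("match" if schematic_checksum(params) == layout_checksum(poly) else "MISMATCH")
```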

Rounding out Mentor's photonic capabilities is the ability to use their Calibre LFD package to simulate a foundry's lithography and etch processes to view how the actual manufactured layout will look. In this case, the designer can not only do DRC and LVS on these simulated shapes, but they can also generate data that can be back-annotated for re-simulation of both the components and the circuit as a whole. Through its collaboration with Lumerical Solutions, Mentor can back-annotate all measured shapes (lengths, widths, radii of curvature, etc.) of the simulated manufactured design and send them to both Lumerical's compact model generation process (to create updated S-matrix component models) and to Lumerical's INTERCONNECT circuit simulator (for full circuit-level simulations).

In cases where the design is known to be going into high volume production runs, these types of simulations can be done across the expected manufacturing process window to identify design areas that may be susceptible to yield fallout.

So, while the competition may have recently secured a photonic beachhead, Mentor Graphics has already pushed well inland and is driving forward on multiple photonic battlefronts, including the strategic articulation point between design and manufacturing.

See also: Electrical-Optical Design-A Bridge to Terabitsia and Making photonic design more straightforward


Qualcomm Brings Us One Step Closer To Gigabit LTE Speed Products

Qualcomm Brings Us One Step Closer To Gigabit LTE Speed Products
by Patrick Moorhead on 12-07-2016 at 12:00 pm

At their 4G/5G Summit in Hong Kong, Qualcomm announced specific products that will ship with their 1 Gigabit Snapdragon X16 LTE modem. As we wrote in Forbes last February (here), the company announced at that time that they had achieved speeds of up to 1 Gbps using the newly-announced Snapdragon X16 modem. Many questioned when we would see real products using a modem of such capabilities, but the doubters now have an answer.

At their 4G/5G Global Summit in Hong Kong, which Moor Insights & Strategy associate analyst Anshel Sag is attending, Qualcomm announced two major product developments using the Gigabit-class LTE modem known as the Snapdragon X16. Both involve actual products that utilize the X16 LTE modem and bring us into the “LTE Advanced Pro” era.

1Gbps Netgear hotspot on the Telstra network

Telstra, Netgear, Ericsson, Qualcomm deliver the first 1Gbps hotspot and service (Image: Qualcomm)

The first announcement involves a partnership between Telstra, Netgear, Ericsson and Qualcomm. The establishment of this partnership is a sign of how real this upcoming product is and the heavy hitters behind it. The device being brought to market by Netgear using Telstra’s network, one of the leading LTE networks in the world, is a Wi-Fi hotspot of sorts. However, this hotspot will be supercharged with a maximum theoretical speed of 979 Mbps thanks to the combination of support for 3x CA (carrier aggregation), 256-QAM modulation and 4×4 MIMO antennas. This makes the Netgear MR1100 (Mobile Router) essentially a wireless fiber link, wherever you go. Telstra is saying they will ship the entire solution in the next few months.
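The 979 Mbps figure falls out of simple arithmetic if you assume the commonly described X16 configuration: three aggregated 20 MHz carriers with 4x4 MIMO on two of them and 2x2 on the third, giving 10 spatial layers, each carrying roughly the LTE peak of ~97,896 bits per 1 ms subframe at 256-QAM. Both the carrier/MIMO split and the per-layer figure are assumptions for this back-of-the-envelope check, not a Qualcomm disclosure.

```python
# Back-of-the-envelope arithmetic for the 979 Mbps figure (assumptions noted):
# three 20 MHz carriers, 4x4 MIMO on two of them and 2x2 on the third gives
# 4 + 4 + 2 = 10 spatial layers; at 256-QAM the LTE transport-block tables
# allow roughly 97,896 bits per layer per 1 ms subframe.
layers = 4 + 4 + 2                  # assumed carrier/MIMO split for 3x CA
bits_per_layer_per_ms = 97_896      # assumed peak transport block size at 256-QAM

peak_bps = layers * bits_per_layer_per_ms * 1000    # 1000 subframes per second
print(f"{peak_bps / 1e6:.0f} Mbps")                 # -> 979 Mbps
```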

Having essentially unlimited bandwidth on demand is quite the empowering technology, enabling use cases that simply weren't possible before. In fact, Gigabit LTE like that from the X16 is very likely going to be the way that many companies first test out their 5G use cases. One example that just keeps coming up is the idea of 360-degree video for both smartphone and VR consumption. The reality is that you need a pretty high-bandwidth, robust stream to deliver high enough quality to make 360 video worthwhile outside of a Wi-Fi network. Anshel Sag has tried watching streamed 360-degree video content, and in many cases the quality is pretty abhorrent due to bandwidth limitations. Plenty of other applications are going to be possible thanks to gigabit-class modems; by removing bandwidth limitations we may see a new class of applications arise, just as instant apps arose when speeds increased. The reality is that we probably won't know what they will be until they reach smartphones, which leads us to Qualcomm's next announcement.

1Gbps in a smartphone soon

Qualcomm announced 1Gbps LTE capabilities will be in its next 800-class SoC (Credit: Qualcomm)

The second major 4G announcement that Qualcomm made today at their 4G/5G Summit involves integrating the same gigabit-class X16 modem, previously a discrete part, into the next generation of 800-tier Snapdragon processors. While Qualcomm isn't saying exactly which processor it is, we are left to assume that it will be included with the Snapdragon “830” or whatever the successor to the 820 is called. This means that Qualcomm has once again upped the modem game against their competitors and thrown down the gauntlet, while also maintaining their coveted modem leadership.

1Gbps wireless is real and lays 5G foundation

The integration of the gigabit-class Snapdragon X16 into mobile routers and smartphones marks the next phase of LTE's evolution and the next generation of LTE. With the ability to reach nearly 1 Gbit/s speeds on a mobile modem over LTE, we are starting to see what I believe to be the foundation for 5G, using a robust and nearly ubiquitous technology in LTE Advanced Pro. As Anshel wrote last week, there's no mistaking that 5G is coming very soon, but as the operators have indicated, 4G is going to play a huge role in the bring-up of 5G, possibly even more than 3G did with 4G LTE. We can clearly see that Qualcomm's X16 modem is more than just a chip; it is now a technology enabling a new phase of mobile computing. With these kinds of speeds, I wonder whether any of the operators will break ranks and market their LTE Advanced Pro solutions as ‘5G’, like we saw with ‘4G’. If you want more info, here are links to Qualcomm's press release and a video:


Dark data to fuel warp speed growth for #IoT

Dark data to fuel warp speed growth for #IoT
by Diya Soubra on 12-07-2016 at 7:00 am

In my world of semiconductors, dark silicon refers to transistors that are present in the chip but cannot be turned on due to thermal constraints: a valid resource that is available but not used. In the case of #IoT we already have a lot of data out there that I would label dark data; it exists, but no one outside the network owner has access to it. Valid data that is not available, hence not used. Dark data is growing on a daily basis as system integrators deploy vertical networks of #IoT nodes and collect information.


In my continued search for the inflection point for #IoT, I ran into this interesting bit of information this weekend. The city of Boston is applying #IoT principles and, best of all, is giving open access to the data. The city has even partnered with other providers of data to improve road conditions. This reminded me that in one of my previous rants, I complained about the lack of a broker that would facilitate connecting the data from #IoT nodes to big data consumers. What if that was too big a step to take? What if, instead, a dark data exchange could be the smaller step that might be easier to take?

It would require a lot less in terms of technology and transactions to exchange data than to give access to networks of #IoT nodes. Someone who is in the business of deploying #IoT nodes would now collect the data and place it in the exchange for other people to purchase. We still need a broker to make the transaction, but we no longer need to set up access to the individual nodes. This would be an alternative way for those in the business of deploying #IoT nodes for data capture to generate revenue. Without that revenue potential, growth in the number of #IoT nodes will remain limited to those deployed by system integrators. Without a dark data exchange, the growth in #IoT will remain limited to vertical deployments.

In #IoT, information is the fuel for analytics engines. People are willing to pay a lot for that fuel, since the output from the analytics engine is worth 10x more than the input. A global dark data exchange would surely make many people happy on all sides and would boost the number of companies that want to deploy #IoT nodes. Only the potential for revenue from deploying #IoT nodes will drive deployment, and only the subsequent horizontal mashup applications will drive #IoT into exponential growth. No revenue potential, no data. No exchange, no mashups.

Do you know of a dark data exchange already in operation today?


Hack This? Making Software a Moving Target

Hack This? Making Software a Moving Target
by Bernard Murphy on 12-06-2016 at 4:00 pm

It sometimes seems that the black hats are always one step ahead of the white hats in the never-ending security game. One of the especially insidious ways hackers have found to evade detection is through mutation – changing the code in a virus on each copy, defeating classical signature detection methods and potentially requiring new signatures to be built and redistributed for each mutation.

The seriousness of this problem may be compounded by the growing lack of diversity in systems in the IoT at large – in industrial, home, medical and many other applications. ARM, through its many customers, has made this revolution possible but at the same time presents to the hacking world an attractive and focused target of incredibly high value. Find a way to hack into the core of one system and you can potentially attack many systems with the same exploit. I'm sure ARM is very careful in analyzing and removing as many bugs as possible in their core software, but software by its nature is never perfect. Bugs will be found, especially since hackers can experiment at their leisure with their own ARM-based systems.

But there's a way to turn the tables on hackers by reintroducing diversity into systems, in fact even into a single system, so that exploits are always aiming at moving targets. The method started with a technique called address-space layout randomization (ASLR), introduced early in the millennium. In conventional computing the memory map of a program has a fairly predictable structure. The stack starts in a certain well-defined place, as does the heap. The easy “discoverability” of the stack is what enables so many basic hacks; you can easily change routine return addresses through tricks like buffer overflows. Less well-known but equally damaging exploits are known for attacking the heap or even the code.

ASLR in its early incarnation aimed to defeat hacks by randomizing the address map on program startup. In principle it would be much harder for exploits to discover key information because they wouldn’t know where to look. Sadly, as is eventually the case for all defenses, hackers found a way around ASLR by exploiting “memory disclosure bugs”, cases where applications may at least temporarily reveal information from which address map offsets can be reconstructed or code snippets can be extracted or overwritten.
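You can see ASLR at work without any exploit tooling. The short Python snippet below relies on a CPython implementation detail (id() returning an object's address) and on the OS having ASLR enabled; run it several times and the printed addresses change from run to run, which is exactly the "discoverability" the technique takes away from attackers.

```python
# Observe ASLR (OS-dependent): run this script several times on an ASLR-enabled
# system and the printed addresses differ on each run, so a hard-coded address
# in an exploit would point at the wrong place.
import ctypes

buf = ctypes.create_string_buffer(64)            # an allocation in this process
print("ctypes buffer :", hex(ctypes.addressof(buf)))
print("Python object :", hex(id(object())))      # CPython: id() is the object's address
```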

Problems associated with this class of attack are very real. The Linux kernel has known memory disclosure vulnerabilities. Some suggested fixes have included execute-only memory, where code can be executed but not read, but this becomes very challenging in the presence of Just-In-Time (JIT) compilation, where code and data can easily intermingle. Since JIT is becoming increasingly common – in Java and in JavaScript (used by most browsers) – potential attacks have already been exposed.

But if viruses can constantly mutate, why not the address map too? A team at Columbia has created just such a defense, which they call Shuffler. Shuffler randomizes code locations every few tens of milliseconds (including for itself), effectively creating a deadline for an attacker – if they can't figure out and exploit a disclosure in that time, their exploit will fail. Shuffler is software-based and uses various techniques to minimize performance impact, but it still shows up to ~15% overhead. You could easily imagine hardware assist removing most of this overhead (though this would also need to be secured).
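To make the moving-target idea tangible, here is a toy Python sketch; it is emphatically not Shuffler's implementation, which relocates native code at runtime. Function entry points live behind an indirection table that is reshuffled on a timer, so a location leaked to an attacker goes stale within one shuffle interval, while legitimate callers, who always consult the live table, are unaffected.

```python
# Toy model of a moving-target defense (not Shuffler itself, which relocates
# native code): entries are reshuffled every `interval` seconds, so a slot
# number leaked earlier soon points at the wrong code.
import random
import threading

class MovingTargetTable:
    def __init__(self, functions, interval=0.05):      # reshuffle every ~50 ms
        self._funcs = list(functions)
        self._lock = threading.Lock()
        self._interval = interval
        self._shuffle()

    def _shuffle(self):
        with self._lock:
            order = list(range(len(self._funcs)))
            random.shuffle(order)                        # new "layout" this interval
            self._slots = [self._funcs[i] for i in order]
            self._index = {self._funcs[i].__name__: pos  # live name -> slot map
                           for pos, i in enumerate(order)}
        timer = threading.Timer(self._interval, self._shuffle)
        timer.daemon = True
        timer.start()

    def call(self, name, *args):
        with self._lock:             # legitimate callers go through the live index
            return self._slots[self._index[name]](*args)

    def stale_call(self, slot, *args):
        with self._lock:             # an attacker replaying an old slot number
            return self._slots[slot](*args)
```

An attacker who learned a moment ago that the target function sat in slot 2 will, more often than not, land on something else entirely by the time the payload fires.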

Another interesting approach to handling code-read attacks on JIT code uses a technique called destructive code reads. Code can be executed normally, but if it is read it is immediately garbled, undermining the usefulness of memory disclosure. The software based on this approach is called Heisenbyte, in tribute to the Heisenberg principle that observing a state must change it.

You can read a lightweight write-up on Shuffler HERE and a more detailed paper HERE. A description of a memory disclosure vulnerability in Adobe Shockwave is HERE and a paper on Heisenbyte can be found HERE.

More articles by Bernard…