
This is How We Autonomous
by Roger C. Lanctot on 10-13-2017 at 12:00 pm

Some days it seems like the world is obsessed with autonomous vehicles. No one really understands why. Surveys tell us that consumers are both interested in and repelled by self-driving cars. What’s missing is the business model – the commercial reason for the existence of self-driving cars.

Today’s announced acquisition of Auro Robotics by RideCell should go a long way toward clearing things up. The all-stock deal gives RideCell ownership of Auro, maker of driverless, zero-emission shuttles.

Auro’s SAE Level 4 application of geo-fenced automated driving is precisely what is at stake in the current debate: self-driving shuttles operating in a defined area. Of course we’ve seen the headline-grabbing antics of Tesla Motors’ Level 2 vehicles and Waymo’s Level 4 shuttles with human operators, but Auro is first to market with a commercial driverless application.

Simultaneously with its acquisition of Auro, RideCell also announced the public availability of its autonomous operations platform, which has been used in multiple autonomous pilot programs. RideCell now offers the industry’s first complete autonomous new-mobility solution, enabling on-demand autonomous shuttle service in low-speed, private-road settings.

While testing on public roads, including highways and surface streets, and in dense urban environments may represent the brass-ring objectives for the autonomous driving industry, controlled-use opportunities are the low-hanging fruit that will pay the bills and pave the way. The Auro acquisition allows RideCell to accelerate the evolution of its connected autonomous vehicle platform while refining the user interfaces crucial to crafting real-world applications.

RideCell may have 20 customers operating in a variety of environments and scenarios, but the Auro acquisition and the new autonomous platform opens the door to opportunities where vehicles may or may not already be implemented. While RideCell partners, such as BMW’s ReachNow, may be operating in the wild with human drivers accessing car sharing or ride hailing use cases, the stage is set with Auro for a future driverless ReachNow proposition.

RideCell says it will continue to collaborate with partners to apply the RideCell platform to self-driving vehicles for automated management of operational tasks including cleaning, refueling, and emergency response situations. Auro expands RideCell’s opportunities with shuttle manufacturers to add self-driving capabilities for neighborhood electric vehicle platforms. These systems are designed to safely drive people around campuses, theme parks, resorts, business parks, and retirement communities.

Private environments with low-traffic, low-speed roads provide the perfect setting for deploying autonomous vehicles today. Auro-enabled shuttles were among the first driverless shuttles put into daily operation on the Santa Clara University campus in California and have already provided safe transportation to thousands of riders, the companies say.

The future of self-driving cars, therefore, is unfolding on college campuses, in theme parks, at resorts and in retirement communities. These are the places where valuable human interactions will be tested and assessed for the refinement of systems which will ultimately be accessible on surface streets, in urban centers and on highways. This is how we autonomous. This is why we autonomous.


Magillem User Group Meeting
by Bernard Murphy on 10-13-2017 at 7:00 am

Magillem is hosting a user group meeting on October 26th at The Pad in Sunnyvale. User Group meetings are always educational; this one should be especially so for a number of reasons, not least of which is the keynote topic: Expert Systems for Experts.


REGISTER HERE for the meeting in Sunnyvale on October 26th, from 10:00 am to 6:00 pm

A great reason for going, as always in UG meetings, will be to hear other Magillem users talk about how they are using these tools in their daily design tasks. Since Magillem represents the leading edge of IP-XACT-based design assembly and support, that alone should be an incentive to attend.

Another reason, for you budding entrepreneurs, is to get a look at The Pad, a co-working/innovation center for young startups. I’m not sure you can get in unless you’re a member or have business with a member (a prospect or VC, for example), so you might want to take advantage of this opportunity.

A particularly interesting reason is Magillem’s theme for the meeting: A Cognitive Assistant for SoC Designers. As a company working at the intersection of data representation standards and system design applications, Magillem have never operated as if they felt especially bound to limited functional areas. Today they work in system design, assembly and verification, traceability, data analytics and documentation. I’m sure there are rich opportunities for them to use cognitive methods in these domains.

REGISTER HERE for the meeting in Sunnyvale on October 26th, from 10:00 am to 6:00 pm

Among other topics on the agenda, Eric Mazeran will talk about how cognitive assistants could help designers of SoCs and complex subsystems. There will also be demos of Magillem architecture intent and content platforms as well as their Crystal Bulb capability. From users, I expect to hear both from teams who have had Magillem technology embedded in their flows for many years and from others who have only recently adopted these flows. That should provide an interesting range of perspectives.

About the keynote speaker
Dr. Eric Mazeran heads Prescriptive Analytics & Decision Optimization R&D within IBM. He is responsible for delivering innovative market-leading products based on Optimization & Cognitive technologies in the area of Prescriptive Analytics.

In addition to several executive roles in software development within IBM and ILOG, Eric has a deep technical background in Artificial Intelligence (author of Expert Systems, Expert System Shells, Knowledge Acquisition Systems, Rules Engines) and Software Architectures (author of Object Oriented Languages, Discrete Events Simulation software, Network Management System software, etc.) and has over 25 years of experience with business applications in the fields of AI & decision automation across many different companies and industries.

An alumnus of the Stanford University Graduate School of Business, Eric is passionate about both epistemology and management science. He holds a PhD in Artificial Intelligence from French National Institute of Applied Sciences (INSA), Lyon, France, a Master in Robotics and CS from INSA, Lyon, and a Master/Engineer’s Degree in Civil Engineering from Ecole Nationale des Travaux Publics de l’Etat, Lyon, France.

About Magillem
Magillem delivers software that provides seamless integration across Specification, Design, and Documentation processes, connecting all product-related information in a traceable hub of links.

Magillem sells tools and services to large corporations in various industries (semiconductors, embedded systems and more) that drastically reduce the global cost of complex projects and tasks.

Magillem’s industry expertise delivers real quality improvements for customers, helping them successfully implement new methodologies. The company leverages its knowledge, experience and solutions to address customers’ needs on the most challenging projects; the motto is to give you the tools to mine your own expertise and add value to your business.


Why Cars aren’t as Safe as Planes
by Roger C. Lanctot on 10-12-2017 at 12:00 pm

People are often confused and amazed that airline travel is so much safer than travel by bus, rail or car. The safety of air travel is truly miraculous, particularly in comparison to the disastrous history of terrestrial travel.

This reality was highlighted by the latest fatality data released by the U.S. Department of Transportation, revealing that annual highway fatalities in the U.S. rose 5.6% in 2016, the second consecutive annual increase. In contrast, 2016 was the second-safest year ever for air travel.

It seems counterintuitive that what appears to be the most terrifying mode of travel – flying through the air in a metal tube with hundreds of other humans (or alone) – is actually safer than driving to the local shopping center. But the safety of air travel is a direct positive return on a substantial investment into regulatory oversight.

What really sets airline safety apart from the safety of terrestrial travel is the amount of scrutiny brought to bear when crashes and fatalities do happen. Federal investigators jump into action when a crash occurs, whereas the routine nature of car crashes merits little attention – even in the event of fatalities.

In fact, the first time such scrutiny was brought to bear on a car crash was the investigation of Tesla Motors’ first fatal crash by both the National Highway Traffic Safety Administration and the National Transportation Safety Board. The findings differed, but the outcome was identical – Tesla used data collected by the vehicle and its working knowledge of the system to more or less exonerate itself.

It is a sad comment on that crash that the driver who died in the Tesla crash was using Tesla’s inaptly named Autopilot. Perhaps we can forgive that driver for taking the name of the system too seriously. Any actual airplane pilot will tell you that “autopilot” is a function best used in an airplane operated in the sky by a trained pilot, not in a vehicle on the ground operated by an insufficiently trained and inattentive human driver.

The delta between the safety of air and car travel is directly related to the investment in regulatory oversight. In the U.S., nearly $16B is spent annually in support of the Federal Aviation Administration, more than 16x the $908M invested in the National Highway Traffic Safety Administration. Both organizations reside within the U.S. Department of Transportation.

It might be argued that we are saving $15B by not expanding NHTSA on the same scale as the FAA. The hundreds of billions of dollars in economic losses from 100+ daily highway fatalities and hundreds of thousands of injuries are enough of an argument to dispel any such apathy.

Correlated with that spending gap are the roughly 50,000 employees working at the FAA vs. the 610 working at NHTSA, according to Wikipedia data. In essence, the Federal government has prioritized the safety of air travel and that commitment is reflected in appropriate investments in personnel and resources. Automotive safety, meanwhile, is underfunded and de-emphasized, with a correspondingly predictable outcome.

Both the aviation and automotive industries are viewed as essential pillars of the U.S. economy and a source of national pride and a projection of U.S. power and influence. Sadly, the lack of focus on automotive safety regulation has contributed to a decline of that prestige and power in the U.S. automotive industry which has seen a steadily declining share of global vehicle sales as well as diminished technological leadership.

Getting serious about automotive safety and reducing highway fatalities is going to require a far greater Federal financial commitment than is currently contemplated. NHTSA itself has been defunded and demoralized over recent years and now lacks internal leadership – with no Administrator having been appointed to lead the agency.

The 37,461 people who died on U.S. highways in 2016 can blame legislative distraction fueled by automotive industry lobbyists. NHTSA has shared the details of the toll this inaction took in 2016:

  • Distraction-related deaths (3,450 fatalities) decreased 2.2%;
  • Drowsy-driving deaths (803 fatalities) decreased 3.5%;
  • Drunk-driving deaths (10,497 fatalities) increased 1.7%;
  • Speeding-related deaths (10,111 fatalities) increased 4%;
  • Unbelted deaths (10,428 fatalities) increased 4.6%;
  • Motorcyclist deaths (5,286 fatalities – the largest number of motorcyclist fatalities since 2008) increased 5.1%;
  • Pedestrian deaths (5,987 fatalities – the highest number since 1990) increased 9.0%;
  • Bicyclist deaths (840 fatalities – the highest number since 1991) increased 1.3%.

Annual highway fatalities of nearly 40,000 souls can no longer be regarded as simply the cost of getting around. The ongoing increase in highway fatalities is a sobering reminder of the severity of the current crisis and one that won’t be fixed with self-driving car legislation or a vehicle-to-vehicle communications mandate without investments in greater regulatory resources.

Automotive technology is becoming increasingly complex, with a vast expansion of safety systems, semiconductors, electronics and software in vehicles. The responsible regulatory agency, NHTSA, has not expanded in kind to take on the task at hand, which is nothing less than the thankless challenge of proving a negative: that a crash did not occur and lives were saved because a safety system was implemented.

Car companies and their suppliers need a coordinated data-driven effort backed by appropriately trained engineers to create the tools to determine how to mitigate fatalities. Such a program should be tasked with determining which system or combination of systems have proven effective in avoiding collisions and crashes.

Maybe a ranking of highway fatalities by car maker will get the industry’s attention – even if it might be misleading.

The default of the industry and regulators has been to blame 94% of crashes on drivers. If that is indeed true, then let’s attack driver training and licensing. And let’s do our best to design vehicle systems that are more forgiving of subpar drivers.

Let’s face it, it will be years before we are able to turn the driving task over entirely to the robots. It may not be possible in the long run without changes in infrastructure. In the meantime, it is criminal to tolerate 100+ daily fatalities. It’s not okay.

In the U.S. and in other parts of the world there is an assumption that everyone wants to drive, everyone should drive and maybe everyone should own a car. Maybe some people shouldn’t be driving and/or shouldn’t own cars. Maybe driving privileges should be more easily revoked or removed.

It’s not as if there aren’t options for getting around without doing the driving oneself. Why not leave it to the professionals? There has to be a better way. It’s simply not practical to fly commercial to every daily destination – even if it is safer.


TechCon: See ANSYS and TSMC co-present
by Bernard Murphy on 10-12-2017 at 7:00 am

ANSYS and TSMC will be co-presenting at ARM TechCon on Multiphysics Reliability Signoff for Next Generation Automotive Electronics Systems. The event is on Thursday October 26th, 10:30am-11:20am in Grand Ballroom B.


You can get a free Expo pass which will give you access to this event HERE and see the session page for the event HERE.

This topic is becoming a pressing concern, especially for FinFET-based designs, where multiple issues affect aging, stress and other factors. One root cause should by now be well known: self-heating in FinFET devices. In a planar device, heat generated inside a transistor escapes largely through the substrate. In a FinFET, dielectric is wrapped around the fin structure and, since dielectrics are generally poor thermal conductors, heat cannot escape as easily. The result is a local temperature increase, with a significant fraction of the heat ultimately escaping through local interconnect and heating that interconnect as well. Add the increased Joule heating that comes with higher drive currents and thinner interconnect, and you can see why reliability becomes important.
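A rough lumped-element way to see the effect (my own back-of-envelope framing, not something taken from the TechCon abstract): the local temperature rise scales with the effective thermal resistance from the device to its surroundings and with the power dissipated,

\[ \Delta T_{\text{device}} \approx R_{\theta,\text{eff}}\, P_{\text{device}}, \qquad P_{\text{wire}} = I^{2} R_{\text{wire}} \]

Wrapping the fin in dielectric raises the effective thermal resistance, so the same switching power produces a larger temperature rise; thinner interconnect raises its electrical resistance, and with it Joule heating, for the same current.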

Arvind Vel, Director of Product Management at ANSYS, and Tom Quan, Deputy Director, Design Infrastructure Marketing Division, TSMC North America, will be presenting. This should be an interesting session. ANSYS have developed an amazingly comprehensive range of solutions for design for reliability, spanning thermal, EM, ESD, EMC, stress and aging concerns. In building solutions like this they work very closely with TSMC, so much so that they received a Partner of the Year award at this year’s TSMC OIP conference.

Summary
Design for reliability is a key consideration for the successful use of next-generation systems-on-chip (SoCs) in ADAS, infotainment, and other key automotive electronics systems. These SoCs, manufactured on TSMC’s 16FFC process, are advanced multi-core designs with significantly higher levels of integration, functionality and operating speed. These SoCs have to meet the strict requirements for automotive electronics functional safety and reliability.

ANSYS and TSMC have collaborated to define workflows that enable electromigration, thermal and ESD verification and signoff across the design chain (IP to SoC to package to system). Comprehensive multiphysics simulation that captures the various failure mechanisms and provides signoff confidence is needed to not only guarantee first time product success but also ensure compliance within regulatory limits.

This session will provide an overview of ANSYS’ chip package system reliability signoff solutions to create robust and reliable electronics systems for next generation automotive applications along with case studies based on TSMC’s N16FFC technology.

Founded in 1970, ANSYS employs nearly 3,000 professionals, many of whom are expert M.S. and Ph.D.-level engineers in finite element analysis, computational fluid dynamics, electronics, semiconductors, embedded software and design optimization. Our exceptional staff is passionate about pushing the limits of world-class simulation technology so our customers can turn their design concepts into successful, innovative products faster and at lower cost. As a measure of our success in attaining these goals, ANSYS has been recognized as one of the world’s most innovative companies by prestigious publications such as Bloomberg Businessweek and FORTUNE magazines.

For more information, view the ANSYS corporate brochure.


A better way to combine PVT and Monte Carlo to improve yield
by Tom Simon on 10-11-2017 at 12:00 pm

TSMC held its Open Innovation Platform Forum the other week, on September 13th. Each year the companies that exhibit at this event choose to highlight their latest technology. One of the most interesting presentations I saw during the event was from Solido. In recent years they have produced a number of groundbreaking machine learning-based tools for Monte Carlo, statistical PVT, cell optimization, library characterization, and other critical verification tasks.

Simply put, their aim has been to reduce the amount of brute-force simulation needed to validate designs before silicon production. Their latest offerings have come from their Machine Learning Labs. The product they introduced at OIP this year is another in this line. It is called PVTMC Verifier, which is a mouthful, but correctly conveys that they are combining analysis over PVTs with Monte Carlo to fully account for all meaningful operating conditions and states.

This is significant because the largest growth segments in semiconductor design each need this kind of thorough analysis to ensure silicon success. IoT and mobile are pushing the limits of lower operating voltages, which can push margins to the edge. Automotive has extremely high reliability requirements combined with harsh environmental conditions. High-performance computing demands the tightest timing, often at high operating temperatures as well.

With unlimited time and computing resources the easiest approach would be to perform Monte Carlo simulations on every single PVT corner. This would ensure that working silicon could be achieved within the desired sigma. To adapt to the realities of chip-level verification, the typical flow starts out by running a large number of PVT corners to find the worst cases. These worst-case corners are then run through Monte Carlo simulations. While this saves significant time, it still suffers from the possibility that a worst-case PVT at nominal may not be the worst case at the target sigma. In that event a true worst case may be overlooked, possibly leading to an unexpected failure.

Until now this problem has been partially addressed through the use of statistical corners: Monte Carlo is run first to obtain sigma distributions, which are then applied to PVT corners to see where failures are likely. The catch is that a given Monte Carlo sample may land at a different probability under other PVT conditions, so the probability profile can change dramatically for some sets of samples as they are moved across PVTs.

Solido is applying machine learning to solve this difficult issue. They have learned that Monte Carlo samples behave differently relative to one another when run at different PVTs. Machine learning can be used to characterize the relative ordering of the samples based on a small number of simulation runs, which provides enough information to predict how the probability distributions will shift moving between PVT corners.
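Solido has not published the internals of its approach, so the following is only a toy sketch of the general idea: use a handful of real runs to learn how samples re-order across corners, then predict the rest. The simulate() stand-in, the choice of anchor corners and the simple linear predictor below are all assumptions for illustration, not Solido’s method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a SPICE run: returns a "delay" for one Monte Carlo variation
# sample at one (Vdd, temperature) corner.  In a real flow each call would be
# a circuit simulation; this analytic toy exists only so the sketch runs.
def simulate(sample, corner):
    vdd, temp = corner
    return (1.0 + 0.08 * sample[0] - 0.05 * sample[1]) * (1.2 / vdd) \
           * (1.0 + 0.002 * (temp - 25.0))

n_samples = 2000
samples = rng.standard_normal((n_samples, 3))       # MC process-variation vectors
corners = [(v, t) for v in (0.72, 0.80, 0.88) for t in (-40, 25, 125)]  # 9 PVT corners

# Step 1: run full Monte Carlo at a small set of "anchor" corners only.
anchor_ids = [0, 4, 8]
anchor = np.array([[simulate(s, corners[c]) for c in anchor_ids] for s in samples])

# Step 2: for each remaining corner, run a handful of probe simulations and fit
# a cheap linear model that predicts that corner's results from the anchor
# results, i.e. learn how the samples re-order as conditions change.
predicted = {}
for c in range(len(corners)):
    if c in anchor_ids:
        predicted[c] = anchor[:, anchor_ids.index(c)]
        continue
    probe_ids = rng.choice(n_samples, size=50, replace=False)
    probe = np.array([simulate(samples[i], corners[c]) for i in probe_ids])
    X = np.column_stack([anchor[probe_ids], np.ones(len(probe_ids))])
    coef, *_ = np.linalg.lstsq(X, probe, rcond=None)
    predicted[c] = np.column_stack([anchor, np.ones(n_samples)]) @ coef

# Step 3: only the predicted worst corner/sample pairs would go back for real
# verification runs, instead of exhaustive Monte Carlo at every corner.
worst_corner = max(predicted, key=lambda c: predicted[c].max())
print("predicted worst corner:", corners[worst_corner],
      "worst-case delay ~", round(float(predicted[worst_corner].max()), 3))
```

The real tool is far more aggressive about trimming the simulation count than this sketch; the point is only that a small, carefully chosen set of runs can order the remaining corner/sample combinations well enough to find the likely worst cases.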

In a sequence of three cycles of simulations using a small subset of the possible corners and samples, it is possible to predict the worst cases for a design. This takes the number of simulations required down from tens of thousands to somewhere between 500 and 2,000. The approach works up to 5 sigma by default but can go higher if needed, and the reliability of the results can be verified at runtime, so there is no doubt about their integrity.

I have been saying that Machine Learning will be a revolutionary force in EDA. Solido has made great headway in applying it to numerical problems. We can also expect to see more in the area of visual pattern recognition. For instance, Mentor already has some technology it is using for DRC. For more information on how Solido is applying Machine Learning technology developed in its Machine Learning Labs, please look on their website.


Do investors understand the new memory paradigm?
by Robert Maire on 10-11-2017 at 7:00 am

Micron put up a great quarter, beating both quarterly expectations and guidance. Even though the stock was up 8%, we still think it has a long way to go, as investors have not fully embraced the upside ahead in the memory market.
Continue reading “Do investors understand the new memory paradigm?”


AI Based Software Designing AI Based Hardware – Autonomous Automotive SoC Platform
by Mitch Heins on 10-10-2017 at 12:00 pm


For those of you who missed the NetSpeed Systems and Imagination Technologies webinar, "Alexa, can you help me build a better SoC", you’ll be happy to hear that the session was recorded and can still be viewed (see the link at the bottom of this page). I’ll warn you now, however, that this was a high-bandwidth session packed with information, so much so that I had to listen through it several times to absorb everything. Here’s a condensed version for those looking for the salient points.

SoCs for autonomous automotive applications are some of the most complex ICs designed on the planet. These designs must fuse data from multiple sensors (radar, LIDAR, ultrasonic, video cameras) in real time, while also dealing with ultra-high functional safety and security requirements. Real-time processing of multiple different data streams implies a heterogeneous mixture of CPUs, DSPs, GPUs, ISPs, and dedicated specialty hardware accelerators, all communicating with each other using a sophisticated network-on-chip (NoC) capable of handling coherent memory and coherent I/O access.

NetSpeed Systems and Imagination Technologies partnered to deliver a next generation autonomous automotive platform that is now used by three of the top four auto-pilot players. NetSpeed Systems delivers the NoC while Imagination Technologies delivers the MIPS I6500-F processor architecture. Their autonomous automotive SoC platform enables designers to create a scalable, heterogeneous solution using AI software techniques to synthesize an optimal hardware solution while trading off power, performance, area (PPA) and functional safety (FuSa).

Some key items used in this collaboration include:

  • NetSpeed’s GEMINI Coherent NoC, certified ASIL-D ready per the ISO 26262 standard.
  • NetSpeed’s NocStudio development environment, with machine learning-based interconnect synthesis that uses NetSpeed’s CRUX (streaming interconnect transport), ORION (AMBA and OCP bridging), GEMINI (NoC coherent cache and I/O management) and PEGASUS (NoC L2, L1, and LLC cache support) template libraries.
  • Imagination’s MIPS I6500-F variant of the MIPS 6500 CPU, certified ready for automotive ASIL-B(D) and for industrial control applications under IEC 61508, as a Safety Element out of Context (SEooC).
  • Enhanced heterogeneity using the MIPS IOCU ports and dedicated CPU threads to enable low latency paths through L2 cache between hardware accelerators and the CPU.
  • FuSa capabilities of the MIPS I6500-F including ECC across memories, redundant logic and parity protection, time-outs and the use of logic built-in-self-test (LBIST) that checks the hardware at both boot-up and during processing cycles when CPUs are not busy.
  • FuSa capabilities of the NetSpeed NoC synthesis process including path redundancy and guaranteed fail-safe deadlock-free solutions.


What makes this joint solution so fascinating is that it is genuinely scalable and programmable, enabling designers to truly customize an SoC to meet their specific requirements. NetSpeed’s NoC can manage up to 64 cache-coherent clusters and 250 I/O-coherent IPs, and it is compatible with the popular ACE, CHI, AXI, AHB, APB and OCP protocols. Similarly, the MIPS 6500 architecture supports cache-coherent arrays of clusters with multi-threaded CPU cores.

Added to this is the fact that the design environment (per the title of the webinar) makes use of artificial intelligence (AI) techniques to help designers make intelligent PPA/FuSa trade-offs. Once the design has been iterated, the solution then uses NetSpeed’s NocStudio software to synthesize the resulting architectural RTL code.

A nice feature of the joint solution is that NocStudio can simulate system data traffic and interactions between the processing units and the coherent cache and coherent I/Os. In so doing, NocStudio can score both PPA and FuSa results on a path-by-path basis. Additionally, each NoC path from master to slave is user configurable in terms of its power, performance (latency and quality of service or QoS), area and Functional Safety requirements. NocStudio considers the requirements of each master-slave path of the system along with higher-level constraints such as expected data traffic for various paths, physical locations of master-slave paths on the IC, numbers of competing paths in the same area and the priority of the paths with respect to desired network redundancy. Depending on the specifications for each path, NocStudio’s AI algorithms synthesize and optimize a NoC to meet the requested constraints.
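To make the per-path idea concrete, here is a purely hypothetical sketch of what such a per-path specification could look like. The PathRequirement structure, field names and values are inventions for illustration; they are not NocStudio syntax:

```python
from dataclasses import dataclass

@dataclass
class PathRequirement:
    """One master-to-slave path and its targets (illustrative only)."""
    master: str
    slave: str
    max_latency_ns: float    # performance / QoS target for this path
    bandwidth_gbps: float    # expected traffic, weighed against competing paths
    redundant_route: bool    # request a redundant route for functional safety
    protection: str          # e.g. "end_to_end_parity", "ecc", "hop_checks"

# A hypothetical slice of a per-path specification of the kind described above.
paths = [
    PathRequirement("cpu_cluster0", "llc",       12.0, 64.0, False, "ecc"),
    PathRequirement("radar_accel",  "llc",       20.0, 16.0, True,  "end_to_end_parity"),
    PathRequirement("isp",          "dram_ctrl", 40.0, 32.0, False, "hop_checks"),
]

# A real synthesis tool searches NoC topologies that satisfy every entry while
# optimizing PPA; this placeholder only totals the requested bandwidth.
print(sum(p.bandwidth_gbps for p in paths), "Gbps requested across", len(paths), "paths")
```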


Many different FuSa trade-offs can be made with NocStudio for each master-slave path. Examples of this include end-to-end parity checks, ECC checks and checksums for packets as well as hop-to-hop checks and the synthesis of redundant paths to compensate for paths that may develop errors. Because NetSpeed Systems and Imagination Technologies have partnered on the solution, the NocStudio environment can also dovetail the parity checks applied by the MIPS I6500-F processor with the checks being done by the NoC, leading to system-level coverage for all components controlled by the MIPS processor.

The solution platform generates a plethora of data files and information that can then be used to implement the SoC including:

  • Synthesizable RTL,
  • Verification checkers, monitors and scoreboards
  • Files to aid physical design (block placements in DEF, timing constraints in SDC, Clock skew and physical design scripts to run place and route)
  • SoC integration files such as IP-XACT, CPF/UPF, and an architectural manual
  • FuSa documents including FMEDA and a Safety manual that can be used for ISO 26262 certification.

The most impressive part about this solution is that it is silicon-proven and in use today by several leading autonomous automotive IC providers, with successful implementations such as the one done by Mobileye (see SemiWiki article).

To dive deeper into what NetSpeed and Imagination Technologies have to offer in this space you can watch the full webinar at this link.

See Also:
NetSpeed Systems
Imagination Technologies


An IIoT Gateway to the Cloud
by Bernard Murphy on 10-10-2017 at 7:00 am

A piece of learning we all seem to have gained from practical considerations of IoT infrastructure is that no, it doesn’t make sense to ship all the data from an IoT edge device to the cloud and let the cloud do all the computational heavy lifting. On the face of it that idea seemed good – all those edge devices could be super cheap (silicon dust) and super-low power.


But the downsides can be severe. The latency in a round-trip to the cloud may not be acceptable if that edge device needs real-time response to adjust machine behavior. Power at the edge can actually be worse if you are burning it to ship large quantities of data uphill. Security is a problem, as much in industrial IoT (IIoT) devices as in personal devices thanks to a potentially significant attack surface, from the edge to the cloud. Even reliability is at the mercy of the link and the cloud, perhaps acceptable for a consumer device, but not necessarily for industrial applications.

Which is why a lot more compute is moving to the edge. You can do most of the (local) compute you need there (especially for real-time needs), you have more control over the attack surface, you can buffer communication if the uplink is misbehaving and, if you’re careful, you can still do all of this with low power.

Then there’s the question of implementation technology. In the semiconductor world, we tend to assume this means custom ASIC solutions but it’s not always clear that approach is the most effective, especially in the IIoT where needs may be extremely diverse across widely different applications and may need to evolve as standards evolve. You could allow for more flexibility in software on the edge node, but that again becomes more power hungry.

In many IIoT applications a better compromise platform is an FPGA. It’s obviously reconfigurable and can be very power-efficient; not down at the mA consumer-level but quite satisfactory for many industrial support functions. And from a security point of view, while bitstreams (for reconfiguration) are not immune to hacking they are arguably better defended in today’s platforms than software. Moving more of those custom needs to hardware can only reduce the potential attack surface.

Ultimately whatever you are building has to connect both to sensors and actuators, and to the cloud. The sensor/actuator part of this will be part of your secret sauce presumably, as will be sensor fusion and data processing. But you don’t really need to reinvent all that communication, to sensors etc and to the cloud, if you can get it wrapped up in a reference platform. And if you can add hardware extensions on FPGA mezzanine card (FMC) daughter boards, you can configure most of what you need before you have to add anything. Sure, it won’t be as cheap (unit cost) or as compact as an ASIC, but it will be field configurable, there’s no NRE and anyway you can have it up and running tomorrow.


Aldec recently hosted a webinar in which they showed how to connect such a solution to the widely popular AWS cloud services. They use their TySOM-1-7Z030 development board in the demo, based on the Xilinx Zynq processor with dual-core ARM A9, along with DDR, SD, flash and other interfaces, USB 2.0 and 3.0, UARTs, Gb Ethernet, HDMI, mPCIe and camera interfaces. Daughter cards offer multiple standard wireless interfaces as well as industrial interfaces such as RS standards and CAN, in a range of configurations covering ADAS, IoT, vision, industry and other applications.

Configuring this system follows an expected flow. You design the hardware part of the solution using Xilinx Vivado, along with Aldec Riviera-Pro for functional verification. The software part of the solution is built using the Xilinx SDK. Aldec don’t spend any time on this topic in the webinar (they have other resources you might find useful on those topics). The main focus of this webinar is on connecting your IIoT solution to AWS.

The webinar walks you through setting up an AWS account, connecting to AWS IoT and registering and configuring your device. This requires a communication protocol, supplied by AWS, for which Aldec recommends the embedded C MQTT option, a lightweight messaging protocol for small sensors and mobile devices, optimized for high-latency / unreliable networks (remember those challenges). AWS then creates a connection kit and certificates for your “thing”, which you can add to your edge device.

Aldec provides demo examples with this type of build for TySOM boards so you can get started quickly with testcases then iterate towards your particular solution needs. Starting from a demo setup, the TySOM board will publish processed data from sensors to AWS. In the webinar, they show this in action, collecting and publishing temperature and humidity data on a fixed schedule.
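The demo itself uses the embedded C MQTT client on the TySOM board, but the publish loop is easy to picture. Below is a minimal Python sketch of the same idea using the paho-mqtt package; the endpoint, certificate file names, topic and simulated sensor readings are placeholders you would replace with the values AWS IoT generates for your registered “thing”:

```python
import json
import random
import ssl
import time

import paho.mqtt.client as mqtt

# Placeholders: substitute the endpoint and certificate files that AWS IoT
# issues when you register your device ("thing").
ENDPOINT = "your-endpoint-ats.iot.us-east-1.amazonaws.com"
TOPIC = "tysom/demo/environment"

client = mqtt.Client(client_id="tysom-demo-node")   # paho-mqtt 1.x style constructor
client.tls_set(ca_certs="AmazonRootCA1.pem",
               certfile="device-certificate.pem.crt",
               keyfile="device-private.pem.key",
               tls_version=ssl.PROTOCOL_TLSv1_2)
client.connect(ENDPOINT, port=8883)                 # MQTT over TLS
client.loop_start()

while True:
    # Random values stand in for the board's temperature/humidity sensor.
    payload = {"temperature_c": round(random.uniform(20, 30), 1),
               "humidity_pct": round(random.uniform(40, 60), 1),
               "timestamp": int(time.time())}
    client.publish(TOPIC, json.dumps(payload), qos=1)
    time.sleep(10)                                  # fixed publishing schedule
```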


The solution is scalable using multiple TySOM boards, each collecting different data from different sensors and each connecting to AWS. Looks like an interesting solution to get up and running quickly with an IIoT product/application. You can get more detail from the webinar HERE.


Webinar on Electrochemistry and how it affects Semiconductor devices
by Daniel Payne on 10-09-2017 at 12:00 pm

My educational background is Electrical Engineering and I’ve learned a lot since starting in the industry back in 1978, working on bipolar, NMOS and CMOS technology and designing DRAM, data controller and GPU devices. I continue to learn about the semiconductor industry through daily reading and attending trade shows like DAC. I’ll be attending a webinar on October 25th at 10 AM (PDT) on the topic, TCAD Simulation of Ion Transport and Electrochemistry. These phenomena are essential for two types of modern design:

  • Non-volatile memories
  • Solid-state batteries

There is also a less welcome case where this kind of TCAD simulation proves useful: finding degradation mechanisms. Silvaco is hosting this webinar, and its TCAD tool, Victory Device, has new capabilities for electrochemistry simulation.

At this webinar you will learn the following points:

 

  • Why and when to consider electrochemistry in semiconductor devices
    • Phenomena
    • Applications
  • The equations solved in electrochemistry simulations (a generic form is sketched after this list)
    • Ion transport
    • Chemical reactions
  • How to set up electrochemistry simulations in Victory Device
    • Definition of chemical species properties
    • Initialization of species concentrations
    • Specification of chemical reactions
    • Output of chemical species data
  • An example: Degradation in an IGZO TFT
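For orientation only (the announcement does not spell out Silvaco’s exact formulation), ion transport in device simulators is commonly described by drift-diffusion (Nernst-Planck) continuity equations for each charged species, coupled to Poisson’s equation for the potential:

\[ \frac{\partial c_i}{\partial t} = \nabla \cdot \Big( D_i \nabla c_i + \frac{z_i F}{R T}\, D_i c_i \nabla \phi \Big) + R_i, \qquad \nabla \cdot (\varepsilon \nabla \phi) = -F \sum_i z_i c_i - \rho_{\text{fixed}} \]

Here c_i, D_i and z_i are the concentration, diffusivity and charge number of species i, φ is the electrostatic potential, and R_i collects the chemical-reaction source terms that the webinar lists separately.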


Presenter

Dr. Carl Hylin is a Senior Development Engineer in Silvaco’s TCAD Division. Since joining Silvaco in 2007, he has worked exclusively on Victory Device. In addition to the chemistry module, he is responsible for many of the trapping and radiation-damage models in Victory Device, as well as for much of the high-precision numerics.

Dr. Hylin holds an SB from MIT, an MS from the University of Illinois, and a Ph.D. from the University of Kentucky, all in mechanical engineering. He is a member of the Society for Industrial and Applied Mathematics (SIAM). His field of specialty is computational science and engineering, which he has applied to rocket engines and search engines as well as to TCAD.

Who Should Attend
Academics, engineers, and managers interested in the simulation of semiconductor devices involving ion transport, degradation, and charge capture, including TFTs, non-volatile memories, and solid-state batteries.

Registration
There’s a brief online registration form here. I’ve attended several Silvaco webinars and they have been quite detailed and technical, so don’t expect any marketing fluff. At the end you will have time to type in any questions and have them answered.


Enabling a New Semiconductor Revolution!
by Daniel Nenni on 10-09-2017 at 7:00 am

According to semiconductor trade statistics, 2017 will be the strongest market since 2010, easily recording double-digit gains and driving the SOX semiconductor index to outpace the NASDAQ and the other indexes. The question Wall Street people have now is: How much longer will semiconductors be an attractive investment? That question was answered at this year’s Global Semiconductor Alliance Executive Forum: Enabling a New Revolution. The US Executive Forum is an invitation-only event attended by the leading semiconductor executives from around the world. You can see a list of attendees HERE if you don’t believe me. The next big event is the famed GSA Awards Dinner hosted by Wayne Brady!

Gene Munster, Managing Partner at Loup Ventures, kicked it off with a “Creating New Markets” keynote including his top technology breakthroughs: the Zeppelin (transportation), the movie projector (media), the Internet (business communication) and the smartphone (personal communication). Gene went through 115 slides in 15 minutes, but the real focus was on AI, which includes robotics, automotive, augmented reality and virtual reality as AI interfaces. Gene mentioned that if you are weirded out by any of this you are too old, which, as a father of four millennials, I agree with completely. This is a young person’s game: if you want to add value, lead or stay out of the way. Gene also stated that AI was mentioned by 11% of S&P 500 companies during recent investor calls. I would bet that number is much higher for semiconductor investor calls.

I have Gene’s slide deck if you want to discuss it in more detail in the comments section.

There was a “5G – The Next Generation” session with Jean-Francois Hebert of Dassault, Cristiano Amon of Qualcomm, Robert DiFazio of InterDigital Labs, Shireen Santosham of the City of San Jose, and Preet Virk of MACOM, which was very interesting. 5G is discussed at just about every conference I have attended this year, but when I ask, very few people know the difference between 5G and 4G in regard to transmission rates: 5G targets 10 Gbps versus 4G at 100 Mbps. If you recognize how profitable 4G has been for semiconductors, you can multiply that by 100x for 5G, in my opinion.

The next session, AI for Real!, was the best example of the future opportunity for semiconductors in a 5G world. David Edelman, former Obama technology advisor (MIT), Paul Daugherty, Accenture CTO, and Mark Papermaster, AMD CTO, presented slides. Afterwards the panel discussion was moderated by Aart de Geus, Chairman and Co-CEO of Synopsys.

David made some very strong points: AI is everywhere and nowhere, meaning that AI touches all of our lives today whether we recognize it or not, and will continue to do so on a very large scale. David also quoted the third of futurist Arthur C. Clarke’s famous three laws, “Any sufficiently advanced technology is indistinguishable from magic,” and AI certainly is modern-day magic.

There was also a lot of discussion on how AI will change jobs and the skill levels of American workers. The consensus was that the skill level will increase as will the acceptance of AI and new technologies. Again, I have the slides and can talk more in the comments section.

Bottom line: Whatever AI and the future hold, there will certainly be insatiable compute and storage demand in both the cloud and edge devices. This heavy demand will continue to drive the semiconductor industry and specifically the SOX for years to come, absolutely.