
IP-SoC 2016: IP Innovation, Foundries, IoT and Security

by Eric Esteve on 11-10-2016 at 7:00 am

The next IP-SoC conference will be held in Grenoble, France, on December 6-7, 2016, after editions in Shanghai in September and Bangalore, India, in April. This will be the 20[SUP]th[/SUP] edition of this unique IP-centric event, as well as the celebration of Design And Reuse’s 20[SUP]th[/SUP] anniversary. Creating a company fully dedicated to reuse in 1997, built on a business model as innovative as an IP portal, was a real bet. Today D&R is the undisputed leader, and its IP portal counts about 400 IP vendors and thousands of IP product references. As an analyst, I must confess that I use the web portal when needed, and my guess is that many others, customers and vendors alike, do as well.


The IP-SoC conference agenda is organized by topic, and we can identify the hottest topics of the year: IoT and security. This shouldn’t be surprising, as during 2016 the semiconductor industry realized that the new business opportunities linked to IoT can only materialize if security can be assured. Did you know that the industry is still waiting for a security standard to be defined for IoT?

The IP-SoC conference is located in Grenoble, France, and when talking about security we should remember that the smart card was invented in France. European companies are the market leaders for smart card ICs, and the program reflects this focus on security, with presentations from ARM, Inside Secure and Barco Silex dealing with different aspects of security for IoT.

Before launching a system, you need to design an IC integrating various IP, and then you need to manufacture that IC. With presentations from GlobalFoundries, STMicroelectronics and TSMC, the foundry part of the IoT ecosystem is well covered! So is the IP side, with ARM (#1) and Synopsys (#2) presenting several papers, along with other IP vendors, analysts (including myself, but I will tell you more about my presentation in a separate blog) and Intel talking about IP from the customer side.

Last but not least, CEA-Leti is presenting a paper about an IoT platform and FD-SOI, and SemiWiki readers know about the strong involvement of CEA-Leti in FD-SOI technology. In fact, the research center is involved in the development of specific IP that can be used in IoT systems, like connectivity (RF) IP, but CEA-Leti could also tell you about security, as this is one of their strong areas of expertise among many others: from High Performance Computing (HPC) to dedicated wireless network design or medical applications.

To register, just go HERE.

See you on Tuesday, December 6[SUP]th[/SUP], in Grenoble.

Eric Esteve, IPNEST

IP-SoC Agenda

Foundry Corner and Innovation in Technology
  • “FDSOI production readiness and its roadmap beyond 22FDX” by Gerd Teepe, GlobalFoundries
  • “Innovative FDSOI IP needed !!!” by Patrick Blouet, Collaborative Program Manager, STMicroelectronics
  • “Designing with TSMC Open Innovation Platform (OIP) Ecosystem” by Banchuan Cheong, Technical Manager, TSMC Europe

From IP to IoT: Where is the IP world going?
  • “IP Explosion (1995-2010) and IP Paradox (2010-2025)” by Eric Esteve, IP Nest
  • “Back to the Future. The end of IoT” by Bill Finch, CAST
  • “How to jump start your ARM-based IoT chip for free” by Phil Burr, ARM
  • “High Performance Synchronous Interface for IoT and Wearable Applications” by Pratap Neelasheety, Synopsys

Security: a key challenge
  • “Right sizing the SoC security architecture for the new connected world” by Bart Stevens, Director of Product Management, Mobile and Networking Hardware Security Solutions, Inside Secure
  • “ARM technologies for IoT system’s security, from device to cloud” by Mike Eftimakis, ARM
  • “Setting up secure VPN connections with cryptography offloaded to your Altera SoC FPGA” by Pieter Willems, Barco Silex

IoT Design Resources needed for a connected world
  • “L-IoT: a flexible and ultra low power IoT platform in FDSOI 28 nm” by Edith Beigne, CEA-Leti
  • “Adapting UART for Distributed Monitoring in IoT Applications: A DesignWare IP Case Study” by Sreenath Panganamala, Synopsys
  • “High Performance Synchronous Interface for IoT and Wearable Applications” by Pratap Neelasheety, Synopsys

IP in SoC and Product
  • “IP Breadcrumbs Method for tracking IP versions in SoC Database” by Mukund Pai, Intel Corp
  • “A Knowledge Sharing Framework for Fabs, SoC Design Houses and IP Vendors” by Anne Meixner, The Engineers’ Daughter LLC


One chip and the MCU variant challenge disappears

by Don Dingee on 11-09-2016 at 4:00 pm

Merchant microcontrollers are usually made available in a wide range of variants based on one architecture with different peripheral payloads and packaging options. A couple of companies, notably Cypress with their PSoC families and Silicon Labs with the EFM8 Laser Bee Continue reading “One chip and the MCU variant challenge disappears”


Industry 4.0 and Manufacturing Processes

by Bill McCabe on 11-09-2016 at 12:00 pm

Industry 4.0, also known as the fourth industrial revolution, is the current trend of automating manufacturing processes, using IoT and other technologies to make industrial processes easier to accomplish. It works hand in hand with the internet of things, cloud computing and cyber-physical systems.

Using Industry 4.0, we create what are called smart processes and smart computing.

According to Wikipedia, “Within the modular structured smart factories, cyber-physical systems monitor physical processes, create a virtual copy of the physical world and make decentralized decisions. Over the Internet of Things, cyber-physical systems communicate and cooperate with each other and with humans in real time, and via the Internet of Services, both internal and cross-organizational services are offered and used by participants of the value chain.”

The term Industry 4.0, or fourth industrial revolution, originated with a high-tech project created by the German government. It promoted computerized manufacturing, provided the rationale for it, and described how Industry 4.0 would play out in other areas of manufacturing such as logistics and supply.

Industry 4.0 changes the way we work. It makes our work smarter and faster, and in most cases will save a great deal of money for the factories and businesses that embrace it. Those that do not embrace the fourth industrial revolution will be hard pressed to keep up with those that have introduced smarter factories. Better manufacturing, better use of space and better safety results are just a few of the things Industry 4.0 provides.

For those who embrace Industry 4.0, the results can be faster, better and more profitable business outcomes. What’s not to love about that?

This is the second in a series. To see # 1 in the series please use this link: Short History of the Fourth Industrial Revolution

Or check out our website at www.internetofthingsrecruiting.com

Also Read: Manufacturing Singularity is Coming!


Optimizing Prototype Debug

by Bernard Murphy on 11-09-2016 at 7:00 am

In the spectrum of functional verification platforms – software-based simulation, emulation and FPGA-based prototyping – it is generally agreed that while speed shoots up by orders of magnitude going left to right, ease of debug drops as performance rises, and setup time increases rapidly, from close to nothing for simulation to days or weeks for prototypes.

All of this adds up to a real headache for debug in prototyping. While FPGAs provide some level of built-in logic analysis, these devices are designed primarily for optimal mission-mode performance, not for broad and deep debug. This means you typically have to instrument the design RTL with logic to bring out the signals you want to probe. So if you find a bug in one run, then guess exactly the right signals to probe and when to trigger to find the root cause, re-instrument and re-run, you have managed to track down the problem in just one re-spin. More commonly you’re going to re-spin multiple times to converge on the right probe and trigger set. A Wilson Research Group/Mentor Graphics survey found prototyping teams took 4-6 re-spins on average to converge on the root cause of a bug. And you’re burning a lot of unproductive time on each re-spin. Or not, if you feel the need to polish your sword-fighting skills from the perilous heights of your trusty office chair 😎

I’m going to talk about Mentor’s solution to this problem, but first I should touch on something that had me puzzled for a while. Mentor doesn’t have their own prototyping box (that I know of), so why are they providing solutions in this space? In fact, they offer a software product called Certus for instrumenting FPGA-based designs, which they acquired from Tektronix around 2013. Tek had been working closely with the Dini Group (among others), and that relationship continues with Mentor. Since the Dini Group is very well known for building high-performance multi-FPGA boards, frequently used in prototyping, the pieces came together for me. Mentor, through Certus, offers a solution for instrumenting custom multi-FPGA prototype boards, a commonly used platform today and likely into the future, especially in performance-driven applications.

Now back to the debug problem. A prototype for a large design may be split across multiple FPGAs, so a root-cause for a problem appearing in one FPGA could be in another FPGA. Then you have to worry about overhead for tracing signals. The most obvious approach, to add logic and wiring to probe each signal you might want to observe, quickly gets out of hand. Certus has a more efficient method. You can map up to 64K signals into an observability network with low overhead (~1 LUT per signal instrumented). The observability network is connected to a capture station which can capture traces from up to 1024 of these signals.

Which 1024 signals are captured can be reconfigured without needing to re-instrument and recompile the design. And you can have up to 255 capture stations per FPGA, so you get a very high level of configurable visibility across all FPGAs on your prototype board, without needing recompiles. Capture stations store traces in ring buffers (controlled by user-defined triggers), efficiently compressed into on-FPGA memory. This approach can store seconds of trace depth, up to 15 seconds in one cited example where bus activity related to an Ethernet MAC was being traced.
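The capture-budget arithmetic above can be sketched as a toy model. The constants come from the figures quoted in this article; the function names are my own illustration, not any Certus API:

```python
# Toy model of Certus-style observability overhead, using the figures
# quoted above: ~1 LUT per instrumented signal, up to 64K signals in an
# observability network, 1024 captured signals per capture station, and
# up to 255 capture stations per FPGA.
MAX_INSTRUMENTED = 64 * 1024      # signals mapped into the network
SIGNALS_PER_STATION = 1024        # reconfigurable without recompiling
MAX_STATIONS_PER_FPGA = 255
LUTS_PER_SIGNAL = 1               # approximate instrumentation cost

def instrumentation_luts(num_signals: int) -> int:
    """Approximate LUT overhead for instrumenting num_signals signals."""
    if num_signals > MAX_INSTRUMENTED:
        raise ValueError("exceeds observability network capacity")
    return num_signals * LUTS_PER_SIGNAL

def stations_needed(num_traced: int) -> int:
    """Capture stations needed to trace num_traced signals at once."""
    stations = -(-num_traced // SIGNALS_PER_STATION)  # ceiling division
    if stations > MAX_STATIONS_PER_FPGA:
        raise ValueError("exceeds per-FPGA capture-station limit")
    return stations

print(instrumentation_luts(64 * 1024))  # 65536 LUTs to watch everything
print(stations_needed(3000))            # 3 stations for 3000 live traces
```

The point the numbers make is that watching everything is cheap relative to FPGA capacity, which is exactly why reconfiguring capture beats re-instrumenting.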

Or you can stream trace data to external memory. In another example cited, all activity on an AXI bus in an A7-processor-based design was traced during Linux boot into a 4GB DDR memory, though Steve Bailey noted that some blanking is expected during streaming, unless you are willing to pause the design clocks while streaming. However, even when there is blanking, correct ordering/causality between signals is preserved. In fact this is generally ensured, across all clock domains and across all FPGAs. Where clocks are derived from the same crystal, signals within those domains are guaranteed to be correlated. If they come from different crystals, they may be off in correlation by one or two cycles, but hey, Mentor can’t control relative drift between crystals. All of this correlation is ensured by Certus.
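For a rough feel of how long a 4GB buffer lasts, here is a back-of-the-envelope estimate. The bus width, prototype clock, activity factor and compression ratio below are illustrative assumptions of mine, not figures from Mentor:

```python
# Back-of-the-envelope: how many seconds of AXI trace fit in 4 GB of DDR?
# All parameters are illustrative assumptions.
BUFFER_BYTES = 4 * 1024**3        # 4 GB external trace memory

def trace_seconds(bus_bytes_per_beat: int, beats_per_sec: float,
                  activity: float, compression: float) -> float:
    """Seconds of trace that fit in the buffer.
    activity: fraction of cycles with bus activity (0..1)
    compression: trace compression ratio (e.g. 4.0 = 4:1)"""
    raw_rate = bus_bytes_per_beat * beats_per_sec * activity
    return BUFFER_BYTES / (raw_rate / compression)

# e.g. a 64-bit AXI bus at a 50 MHz prototype clock, 25% active,
# with 4:1 trace compression: roughly 172 seconds of trace
print(round(trace_seconds(8, 50e6, 0.25, 4.0), 1))
```

Plugging in different assumptions shows why compression and triggering matter: an uncompressed, fully active bus would exhaust the same buffer in well under a minute.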


Visualizer debug pulls all of this together for detailed analysis. You drill down and trace back, as you do in debug. Maybe you decide you need to look at some traces you didn’t capture, so you need a new prototyping run, but now you probably don’t need to re-instrument. You just reconfigure some capture stations to add those traces. Of course it is possible that even with this flexibility you might have to re-instrument in some cases. But on-the-ground experience indicates that the original average 4-6 rebuilds drops to 0-1 rebuilds when using Certus.

Steve also mentioned that Certus supports reading out register states, though he acknowledges this requires stopping clocks and is quite slow. Still, he pointed out that once in prototyping, many users want to stay there as long as possible and will take an occasional slow operation in exchange for faster and deeper co-debug between hardware and software. You can learn more about Mentor support for FPGA prototyping debug HERE.

More articles by Bernard…


Final SemiWiki Book Signing at REUSE 2016!

by Daniel Nenni on 11-08-2016 at 4:00 pm

It has been a hectic year for the semiconductor industry so now is a good time to reflect on how we got to where we are today in hopes of better understanding where we are going tomorrow.

Given the importance of semiconductor IP (the $32B ARM acquisition by SoftBank for example) I would strongly suggest attending the REUSE 2016 event on December 1st at the Computer History Museum in Mountain View, CA. And yes, I will be giving away copies of our book “Mobile Unleashed: The History of ARM” in the exhibit hall. I will have 100 copies of the book and more than 200 people are expected so you do the math.

Mobile Unleashed is the origin story of technology super heroes: the creators and founders of ARM, the company that is responsible for the processors found inside 95% of the world’s mobile devices today. This is also the evolution story of how three companies – Apple, Samsung, and Qualcomm – put ARM technology in the hands of billions of people through smartphones, tablets, music players, and more.

Here is the REUSE 2016 abstract; I will post the presentation abstracts when they are available. Be sure to take a look at the exhibitor list because this is a great opportunity to meet the people behind the ever-important building blocks of modern semiconductor design, absolutely.

REUSE 2016 is the first edition of an annual conference and trade show that brings together the semiconductor IP supply chain and its customers for a full day of everything to do with semiconductor IP. Hosted in the heart of Silicon Valley at the world famous Computer History Museum, there could not be a more appropriate venue for a day focused on the hottest segment of the semiconductor industry.

The day will begin with a keynote speech on trends driving IP reuse followed by multiple tracks of technical and business-oriented talks by a diverse set of companies, large and small.

Customers may engage with suppliers in a spacious exhibit area to learn about new technologies and solutions that are available.

Capping off the evening will be a social in the exhibit hall, with drinks and food provided, allowing everyone the opportunity to relax, meet new friends, and even tour the museum itself.

The book signing will start at 5:00pm in the exhibit hall and based on previous events it takes less than one hour to sign 100 books so get there early if you want a print copy. If you would prefer to have a PDF version of the book you can download it HERE. Only registered SemiWiki members can access this wiki so if you are not already a member please join as my guest:

https://www.legacy.semiwiki.com/forum/register.php

If you haven’t been to the Computer History Museum lately, this is your chance:

The Computer History Museum is a nonprofit organization with a four-decade history as the world’s leading institution exploring the history of computing and its ongoing impact on society. The Museum is dedicated to the preservation and celebration of computer history and is home to the largest international collection of computing artifacts in the world, encompassing computer hardware, software, documentation, ephemera, photographs, oral histories, and moving images.

The Museum brings computer history to life through large-scale exhibits, an acclaimed speaker series, a dynamic website, docent-led tours and an award-winning education program.

ABOUT REUSE 2016
A tradeshow and conference focused exclusively on semiconductor IP, REUSE 2016 will be held annually with its inaugural event Thursday, December 1, 2016, at the Computer History Museum in Mountain View, CA., from 9am-8pm. For event details, free participant registration, and prospective exhibitor information, please visit www.reuse2016.com. Stay informed of the latest news and updates using #REUSE2016.


System-level Design for IoT and Automotive

by Daniel Payne on 11-08-2016 at 12:00 pm

Several years ago a former EDA co-worker went to work for MathWorks, so I started paying a lot more attention to this privately held company that is well known for the MATLAB language and analysis environment. Engineers at MathWorks have created a graphical environment called Simulink for both simulation and model-based design of multi-domain dynamic and embedded systems. On the electronics side of product development you can model and simulate analog sensors and circuits at the component level using the PSpice simulator. What about modeling and simulating with both MATLAB and PSpice at the same time, instead of separately? Before this week, you had to divide and conquer, using these two popular software tools separately and then hoping that your actual system would work after a prototype was built.

What Cadence and MathWorks decided to do was connect these two simulation environments together, using the Simulink technology, and that’s good news for systems designers because you can finally model and simulate your entire system before building any prototype. So here’s how you split up your modeling efforts between the tools:

  • MathWorks – mechanical, thermal, hydraulic
  • PSpice – electrical


Upper left – Simulink, lower left – PSpice waveforms, right – OrCAD schematics

The integration is bi-directional, so you can include PSpice models for analog and AMS in your Simulink (digital models) environment, or place MATLAB models in a PSpice design. You also get to choose which analysis tool to use for visualizing results, it’s really the best of both worlds. Let’s say that you want to visualize a phase plot, then the Simulink window on the left shows you polar plot results while the PSpice window on the right shows you a frequency response:


Polar and Frequency plots

Systems designers doing IoT and automotive projects can quickly benefit from this kind of technology because it shortens the product development cycle and provides feedback and validation that requirements have been met prior to production. The old way of doing product design, building a prototype, testing, iterating and refining, is just too slow compared to the newer method of modeling a virtual prototype. Applications for this system-level simulation in the automotive market would include[SUP]1[/SUP]:

  • Engine control
  • Transmission electronics
  • Active safety
  • Driver assistance
  • Passenger comfort
  • Infotainment

IoT markets are quite diverse and are characterized by the use of analog sensors and mechanical actuators, all controlled by digital electronics[SUP]2[/SUP]:

  • Media
  • Environmental monitoring
  • Infrastructure management
  • Energy management
  • Medical and healthcare
  • Building and home automation
  • Transportation
  • Metropolitan scale deployments
  • Consumer products

Related blog – Eight Improvements for PCB Software

PSpice has been around for decades now and has some 34,000 models ready to be simulated for your next PCB project. The integration between PSpice and Simulink gives you a single, integrated environment for system design and debug. Using the PSpice Device Model Interface (DMI) you can model using C, C++, SystemC, Verilog ADMS components, or MATLAB software-generated code and then simulate.

Simulation results from PSpice can even be visualized in the MATLAB plotting tool using all of the advanced features. Any trace or DAT file can be exported from PSpice into MATLAB by clicking a drop-down menu choice in PSpice. From PSpice you can even start to use MATLAB functions in any of your measurement expressions or behavioral modeling.

Related blog – Growing Innovation in Modern PCB Design Tools

If this approach of using a virtual prototype for system-level design looks interesting, then consider contacting Cadence and MathWorks to get more info. On the sales side of things you still buy PSpice and OrCAD through Cadence or a channel partner, and Simulink through MathWorks. The team at MathWorks has been integrating with many other companies using Simulink over the years, so this integration with Cadence makes a lot of sense.

References
1 – Wikipedia, Automotive electronics
2 – Wikipedia, Internet of things


#IoT Big Data is worthless!

by Diya Soubra on 11-08-2016 at 7:00 am

I have been writing about big data for over three years now. In all that I have written, and in many articles that I have read, there is an underlying assumption that people naturally accept the huge economic value associated with big data. It turns out that this is a bad assumption. They don’t!


There are many people who see big data as worthless, and they are totally correct. Data is actually worthless until it is transformed into information. In this example, McKinsey states that only 1% of the data collected from 30,000 active sensors is actually used; the rest is wasted. I believe the same applies to many other deployments. Industry is generating terabytes of data, and we have only just started to process that data to extract meaningful information. Big data is truly worthless; the economic value is actually in the extracted structured information. People talk about big data, but they actually mean structured information.

Linking this subject to my continuous effort to push for a horizontal #IoT, I can add that for big data to yield meaningful information, it has to come from a diversity of sensors. If I collect a million data points from a temperature sensor on a motor, then the information that I extract will be linear and of little value. Now, if I were to collect data from ten different sensors on that same motor, then the value of the information I extract would be multiplied by many factors of ten.

This is the power of horizontal #IoT: the mash-up of diverse unstructured data streams to generate valuable structured information. This technique has been proven over the past few years through its application to crowd analysis. Various diverse streams of data are inter-processed to extract, for example, “sentiment”, an item in fashion these days. There is really no difference in the mash-up operation, be it user data or motor data. The same concept applies to both. Simply take in a huge volume of diverse, unstructured, real-time data and extract structured information. Same concept, different algorithm.

Most people really want structured information, not big data, since it can be used to make decisions. This is where the real economic value lies. The most famous example is the airline willing to pay GE a few million dollars in order to know in advance when an engine is about to fail. If it weren’t for the expert data scientists at GE who know how to extract meaningful information, the terabyte of data generated by the engine during flight would have remained worthless. Based on that structured information, a decision is made about the impending failure of the engine. The airline is willing to pay serious amounts of money to avoid the liabilities associated with a plane falling from the sky due to engine failure. Structured information is used to make a decision of much higher economic value, probably by multiple orders of magnitude.

Any #IoT or big data discussion (the two are so intertwined) would be more fruitful if both parties could agree on the value of what they are talking about. To facilitate that value estimate, I am in the process of creating a formula to indicate the dollar value of information relative to that of big data. Something like:

Market $ value of information = Ʊm × Σ_k ( Σ_i Data(i,k) )

Data(i,k) is the $ value of each sensor sample, where i indexes the data points of one specific type of sensor and k indexes the diverse data streams. Ʊm is the fudge factor representing the multiplier value of the extracted information for a specific market. This formula, when complete, would help two parties agree on the value of the data streams and of the information extracted. Most of all, it would hopefully push people to deploy diverse sensors for mash-ups, in order to charge more for the extracted structured information. It is an excellent solution for both parties and for #IoT. People interested in deploying #IoT nodes would see where the revenue stream would come from, and information traders would see how they would create their revenue. Everyone wins.
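As a minimal sketch, the formula above could be evaluated like this; the function and variable names are illustrative (the Ʊm fudge factor appears as market_multiplier), and the per-sample dollar values are made-up numbers:

```python
# Sketch of: Market $ value of information = Ʊm × Σ_k ( Σ_i Data(i,k) )
# where k indexes diverse data streams and i indexes samples in a stream.
def information_value(streams: list[list[float]],
                      market_multiplier: float) -> float:
    """streams[k][i] is the $ value of sample i from sensor stream k;
    market_multiplier is the Ʊm fudge factor for a given market."""
    return market_multiplier * sum(sum(stream) for stream in streams)

# Two diverse streams from the same motor, valued per sample:
temperature = [0.01] * 100       # 100 temperature samples at $0.01 each
vibration = [0.02] * 50          # 50 vibration samples at $0.02 each
print(information_value([temperature, vibration], market_multiplier=10.0))
```

The diversity argument shows up directly: adding a stream adds a whole inner sum, and the market multiplier scales with how much the mash-up is worth to that market.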

If we are able to quantify the value, then people would invest in and deploy #IoT nodes, and the resulting information would be traded at its fair value. Once that formula is complete, we could move on to the next level, where we would need to quantify the value of the decisions made based on that structured information. This is a much harder problem.

A business professional I work with, and whom I highly respect, always drives to the point by saying: “show me the money”. I hope that with this formula I can answer his question regarding big data for #IoT.

All feedback from experts who may have already figured this out is highly appreciated.

Also read: What’s Really Going to Limit the IoT?


Managing the IoT

by Bernard Murphy on 11-07-2016 at 4:00 pm

Now that ARM has introduced its end-to-end IoT solution, including the mbed Cloud SaaS to handle the cloud end of the IoT, you might wonder what service providers are going to offer on top of it. DevicePilot showcased one such solution at ARM TechCon, to manage connected products. These guys especially deserve to be featured because they won the “Best Software Product” award at TechCon, which makes sense when you consider how well they complement ARM’s direction.

DevicePilot provides a software solution, running in the cloud, to manage connected products automatically. You may have thought about the need for this kind of service, but apparently that would put you in a minority. According to Cees Links, CEO of IoT solution provider GreenPeak and a pioneer in wireless networking: “It surprises me how many device companies don’t even know how many of their devices have been deployed, let alone how many are working. As the IoT matures, users’ expectations of service quality are rapidly increasing, and you really have to keep on top of this stuff. When it comes to the smart home we expect all devices to be connected and providing useful information for owners and manufacturers on usage, diagnostics, need for refurbishment and replacement.”


Think about the scale of a city-wide deployment – to monitor street lighting, or smart parking or waste management controls for example. Now you have thousands, maybe tens of thousands of devices. You need to understand where each of those devices is and what it is (if I’m an operations manager, I don’t want to jump between different tools to monitor different features or edge nodes from different vendors). You also probably want to plan where you should add coverage. So your first question is around location – what do you have out there, what are they and what kind of coverage do you have.


Your next problem is monitoring. What is the status of each of those devices? Isolate problems and give me a quick triage on each. What are the possible problems – battery running low, a wireless issue, device problems? You’ll use this to optimize field service activity.

Then you want to manage devices. This includes monitoring through their lifecycle, from deployment to end of life (I seem to have had a lot of problems with this device, maybe I should just replace it), managing firmware upgrades and, on the other side of the cloud, integrating with existing business processes. According to the website, this feature set in the product is coming soon.

DevicePilot was co-founded by Pilgrim Beart in the UK. Pilgrim has quite a background in founding connected product companies, including most recently AlertMe, a connected home solution recently acquired by British Gas for $100M. In the course of building AlertMe, Pilgrim saw the need for this level of management in IoT deployment, so this solution is built on a lot of relevant experience for a domain as young as the IoT.

It’s worth remembering that the promise of IoT could easily turn into the nightmare of IoT if device management is not well handled. For an IoT deployment to work well you have to be on top of where all the devices are, how they are performing and what needs fixing (or might need fixing in the near future). Solutions like DevicePilot will be an important part of that management. You can learn more about DevicePilot HERE.

More articles by Bernard…


CEVA Webinar: Vision Based Autonomous Driving

by Eric Esteve on 11-07-2016 at 10:00 am

CEVA Webinar “Challenges of Vision Based Autonomous Driving & Facilitation of An Embedded Neural Network Platform” will be held on November 16[SUP]th[/SUP] and will address one of the hottest topics today in our industry, probably the hottest in the automotive industry as all the players are working hard on autonomous vehicles.

The automotive market is seeing accelerated growth and rapid adoption of vision applications that will lead the way to autonomous vehicles. The solutions based on artificial intelligence and deep learning algorithms to identify objects were limited to research labs just a couple of years ago.

Why have deep learning and convolutional neural networks (CNN) exited the labs to be adopted by the automotive industry, Tier-1 suppliers and OEMs? Deep learning requires a great amount of high-performance processing, and new technologies like 16 nm (or below, 10 or 7 nm) make a one-chip solution possible; that’s the first reason. But the second, and probably the most crucial, is linked to deep learning performance improvements: only since 2015 has the ImageNet error rate been better than human performance, see below.

CEVA offers the CEVA-XM6 vision processor, an efficient HW and SW platform optimized for CNN workloads and other deep learning approaches. You can learn more about CEVA-XM6 HERE.

To register, use this webinar link.

During this webinar you will hear about:

  • Challenges of ADAS and vision based autonomous driving
  • CEVA’s 5th generation deep learning embedded platform based on the CEVA-XM6 vision processor
  • Implementing low power machine vision solutions using the CEVA Deep Neural Network (CDNN) toolkit
  • Free space detection utilizing AdasWorks drive 2.0 SW implemented on CEVA’s imaging and vision platform

The webinar will be presented by deep learning experts from CEVA and the automotive industry:
Liran Bar
Director of Product Marketing, Imaging & Vision, CEVA

Jeff VanWashenova
Director of Automotive Segment Marketing, CEVA

Arpad Takacs
Outreach Scientist, AdasWorks

Again, to register for the CEVA Webinar (November 16th at 10 am PST 1pm EST) “Challenges of Vision Based Autonomous Driving & Facilitation of An Embedded Neural Network Platform” use this webinar link.

CEVA is the leading licensor of signal processing IP for a smarter, connected world. We partner with semiconductor companies and OEMs worldwide to create power-efficient, intelligent and connected devices for a range of end markets, including mobile, consumer, automotive, industrial and IoT. Our ultra-low-power IPs for vision, audio, communications and connectivity include comprehensive DSP-based platforms for LTE/LTE-A/5G baseband processing in handsets, infrastructure and machine-to-machine devices, computer vision and computational photography for any camera-enabled device, audio/voice/speech and ultra-low power always-on/sensing applications for multiple IoT markets. For connectivity, we offer the industry’s most widely adopted IPs for Bluetooth (Smart and Smart Ready), Wi-Fi (802.11 b/g/n/ac up to 4×4) and serial storage (SATA and SAS).


Executive Interview: Vic Kulkarni of ANSYS
by Daniel Nenni on 11-07-2016 at 7:00 am

Having known Vic for many years, it is always great to spend time with him and catch up on what is happening inside the semiconductor ecosystem. As Senior Vice President and General Manager, RTL Power Business, at ANSYS in Silicon Valley, Vic spends a lot of time in the field with customers, partners, and at industry events so he has intimate knowledge of some of the changes we are experiencing, absolutely.

Prior to merging with Apache in 2009, and subsequently with ANSYS, Vic was a co-founder, President and CEO of Sequence Design. In addition to driving product and business growth within the ANSYS Semiconductor Business Unit with the senior leadership team, Vic is also evangelizing the emerging IoT opportunity along with other business units, connecting the dots from chip-package-system software solutions to ANSYS multi-physics simulation tools targeted at various IoT vertical segments.

Tell us about ANSYS
We are a leading provider (Nasdaq: ANSS) of simulation products headquartered in Canonsburg, PA. The company has been laser-focused on multi-physics and multi-domain simulation software to enable product design for over 40 years! Multi-physics refers to an environment where products are subject to multiple physical forces such as thermal effects, structural integrity, electromagnetic radiation and so on. Multi-domain typically spans chip, package and system power domains in electronics, antenna radiation pattern simulation, and fluid dynamics, up to complete electro-mechanical systems. We have over 45,000 customers worldwide. ANSYS acquired Apache Design in 2011, and the Semiconductor BU of ANSYS was born.

Earlier this year ANSYS formally launched its IoT initiative to provide solutions for several vertical segments, ranging from Wearables, Healthcare, Automotive, Industrial and Defense to Smart & Connected Cities.

What are the key focus areas for ANSYS?
IoT is clearly an exciting business opportunity for the worldwide industry. Recently SoftBank CEO Masayoshi Son-san made a seminal statement in his keynote at the ARM TechCon Conference when he compared the emerging IoT explosion to the Cambrian explosion! He stated that chip sensing capabilities are evolving rapidly and will exceed the collective human intelligence in the next few years, akin to what happened to the senses of intelligent animal species during the Cambrian explosion!

At ANSYS we see these rapidly growing trends especially with the Industry 4.0 applications, autonomous vehicles and advanced mobile segments. Increasingly, our customers have started to address critical challenges related to communication system design, sensor design and product reliability to out-innovate the competition. In this incredibly fast-paced environment, virtual prototyping using simulation software is an important strategic vehicle for creating a meaningful competitive advantage by getting the newest product model or next-generation features into customers’ hands as fast as possible.

Explain how ANSYS enables customers in this revolution.
This “revolution” is happening due to a confluence of several technological advances that have occurred, simultaneously, over the past decade or so.

The first one is miniaturization: more and more electronics are being packed into a smaller and smaller space, providing unprecedented processing and computing capability. The first cell phone weighed 2 pounds. In comparison, the newest generation of mobile phone weighs only about 4.6 to 4.8 ounces, and it lets you do a lot more than just talk! This type of miniaturization is enabled by smaller chips and electronic components. Now people talk about the trend towards “More than Moore,” which refers to the fact that chips continue to gobble up more and more of the circuitry on a PCB. This kind of integration enables miniaturization at the larger scale, allowing planes, cars, drones and virtual reality systems to get more sophisticated.

As an example, for a connected car the performance and ADAS models must be tested in a simulated model of roads, buildings and pedestrians under diverse driving scenarios. Whether designing planes, cars or smartphones, engineers typically need to optimize IoT products for size, weight, power and cooling, a set of design requirements popularly known as “SWaP-C.” Engineers must manage all these components in a constrained space while optimizing performance. This means relying on simulation to make design trade-offs quickly and cost-efficiently.

Because ANSYS provides a complete platform for engineering simulation, product designers can identify and address any functional flaws, such as impractical power demands at the chip, package and board level or a faulty antenna design, as quickly as possible, and as early as possible in the design cycle, when mistakes are less costly to address.

An independent Aberdeen Group study of over 600 companies found that simulation is a key enabler in product design: it reduces development time by 9X and overall product cost by 4X, and companies using it are over 85% more likely to decrease warranty costs and achieve a new product introduction success rate of over 65%!

How does the Semiconductor Business Unit (SCBU) fit into the overall ANSYS corporate strategy?
Apache Design (now called the Semiconductor Business Unit) has been one of the most important strategic acquisitions ANSYS has made over the years. It enables simulation of a complete electronic system design: from IP and SoC-level RTL power analysis and power reduction, to power integrity sign-off including dynamic voltage drop, electro-migration and on-chip ESD, to co-simulation of package, board and system-level thermal and power effects in the context of chip-level dynamic voltage drop.

Designers are now able to analyze the dynamic voltage drop of a complete SoC and evaluate its impact on downstream electromagnetic radiation signature analysis and system-level thermal analysis. This is critical for designers to understand overall system-level behavior in vertical applications ranging from autonomous vehicles and healthcare to advanced mobile devices.

What keeps you awake at night?
We have 290+ employees in our SCBU singularly focused on addressing the challenges posed by the most energy-efficient IP, SoC and electronic system designs.

There are 3 main areas which keep us awake at night:

  • N7 technology challenges
  • A comprehensive Chip-Package-System (CPS) simulation solution
  • Big Data for EDA with elastic compute driven architecture for next-generation SoC design challenges

We have seen the complexity of designs exploding to billions of logic instances with 1,000+ I/Os, and technology nodes going from 40 nm to 7 nm feature sizes in just a few years. Along with the process node, innovative packaging techniques have kept up the pace as well, from 2.5D/3D package configurations to technologies such as InFO-WLP that improve power and performance while reducing form factor.

The stakes are obviously very high when as much as $250+ million of investment and 500+ person-years (Ref: Gartner) are needed to bring a 7nm SoC to market.

As an example, meeting a 15 percent dynamic voltage drop limit in a 7-nm design running at 500 mV is extremely challenging, since design trade-off choices that affect die size, schedule and performance must be made to achieve the desirable outcome. On-chip variation, electro-migration (EM) and ESD sign-off considerations require careful modeling of advanced extraction and foundry rules, both in an N7 chip and in its InFO-WLP package. An accuracy convergence methodology must be followed rigorously from register-transfer level (RTL) power budgeting, estimation and regression to the final sign-off before committing to silicon.
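To put that margin in concrete numbers (an illustrative back-of-the-envelope calculation, not figures from the interview): a 15 percent drop limit on a 500 mV supply leaves a total noise budget of only 75 mV across IR drop and di/dt effects.

```python
# Illustrative arithmetic for a dynamic voltage drop budget at a 500 mV supply.
supply_mv = 500.0        # nominal supply voltage (mV)
drop_limit = 0.15        # 15 percent dynamic voltage drop limit

budget_mv = supply_mv * drop_limit      # total allowed voltage drop
min_rail_mv = supply_mv - budget_mv     # worst-case rail the cells must tolerate

print(f"Drop budget: {budget_mv:.0f} mV, worst-case rail: {min_rail_mv:.0f} mV")
# → Drop budget: 75 mV, worst-case rail: 425 mV
```

Every millivolt of that 75 mV budget consumed by the package or board leaves less for the on-chip grid, which is why the chip and its InFO-WLP package must be signed off together.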

Our business unit is currently tracking 8 customers who are designing 7-nm SoCs. The complexity of these chips can range from 2 to 4+ billion logical instances, with the number of physical geometries reaching 40 billion and parasitics reaching 400+ billion. One can now say we are reaching a “Big Data” problem!

The traditional architecture of EDA tools must be transformed. Why?
Conventionally, EDA databases have remained in silos and are structured (SQL, Structured Query Language). They all use the same traditional monolithic database and data model systems: netlist, layout, logic, timing, RC and so on. So it has been very challenging for engineers to readily explore design alternatives for an optimal solution where one physical effect can seriously impact another, e.g. voltage on timing. Our purpose-built SeaScape architecture is based on Big Data, elastic-compute principles to address these challenges: it enables a designer to run hundreds of what-if experiments in the time it used to take to build a single prototype, and to create highly optimized designs.
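The elastic-compute idea behind that claim can be sketched generically (a hypothetical toy, not the SeaScape API): instead of one monolithic run, many independent what-if corners are farmed out to a pool of workers and only their results are gathered back. Here `analyze_corner` is a stand-in for a real power or voltage-drop analysis.

```python
# Hypothetical sketch of a parallel what-if sweep. analyze_corner is an
# invented placeholder metric, NOT a real EDA or SeaScape API.
from concurrent.futures import ProcessPoolExecutor
from itertools import product

def analyze_corner(params):
    voltage_mv, temp_c = params
    # Toy metric: IR drop grows with supply voltage and temperature.
    ir_drop_mv = 0.12 * voltage_mv * (1 + (temp_c - 25) / 200)
    return {"voltage_mv": voltage_mv, "temp_c": temp_c,
            "ir_drop_mv": round(ir_drop_mv, 1)}

if __name__ == "__main__":
    # 9 independent what-if corners: 3 supplies x 3 temperatures.
    corners = list(product([450, 500, 550], [25, 85, 125]))
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(analyze_corner, corners))
    worst = max(results, key=lambda r: r["ir_drop_mv"])
    print(f"{len(results)} corners analyzed; worst IR drop "
          f"{worst['ir_drop_mv']} mV at {worst['voltage_mv']} mV / {worst['temp_c']} C")
```

Because each corner is independent, the wall-clock time is set by the slowest single analysis rather than the sum of all of them, which is the essence of running hundreds of experiments in the time one prototype used to take.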

To do this you will need to move away from the traditional silo-based design flow to a chip-package-board co-simulation workflow and methodology.

By leveraging chip-package-system flows and methodologies to target 7-nm technologies, one can achieve faster design convergence along with considerable business advantages. You can additionally profit from the reduced power consumption, higher speed and density improvements available from the 7-nm process node. Such simulation flows and solutions have to meet two broad requirements to make a meaningful impact: they must provide multi-physics sign-off accuracy and coverage, and enable accelerated design closure and optimization.

In addition, signal integrity analyses need to expand beyond the traditional “SI” or cross-talk focus to include the coupling of power-rail and signal noise, so that jitter and noise coupling can be predicted both inside and outside the chip to meet stringent DDR and SerDes data-rate specifications.

Also Read:

CEO Interview: Taher Madraswala of Open-Silicon

CEO Interview: Simon Butler of Methodics

CEO Interview: Charlie Janac of Arteris