A Turnkey Platform for High-Volume IoT
by Bernard Murphy on 04-12-2018 at 7:00 am

Innovation in smart homes, smart buildings, smart factories and many other contexts differentiates in sensing, in some cases actuation, certainly in implementation (low power, for example), and in rolling data up to the cloud. It isn't in the on-board CPU, and I doubt any of those entrepreneurs want to create their own Bluetooth or Wi-Fi (though they may want to optimize power or add some features). They mostly want the CPU and the communication to do their job as transparently as possible, with minimum design overhead and cost, requiring them only to add the special-sauce hardware and application software that differentiates their solution.

CEVA is already very well established in providing the communication part of this package. They are inside 9B+ shipped devices across multiple protocols, from BT/BLE and Wi-Fi up through all the cellular standards, now including 5G; specifically, Bluetooth and Wi-Fi solutions are provided through their RivieraWaves family. So it's probably safe to assume they have the communication part of the solution all wrapped up.

The standard choice for a CPU would of course be an ARM Cortex-M-class core, a safe bet with a big supporting ecosystem. But of course there's a cost in licensing and royalties; this hasn't historically been a big issue in premium devices, but it can be a problem in price-competitive IoT devices, which is one reason RISC-V is attracting a lot of interest across the price spectrum. Briefly, RISC-V is an open-source instruction-set architecture (ISA) originally developed at UC Berkeley and now available in open-source implementations from Berkeley, ETH Zurich and the University of Bologna, and in commercial implementations from Codasip, Cortus, Andes and SiFive (among others).


CEVA already provides the communications part of the software stack to run on an ARM platform, but given this growing interest in RISC-V, they now also offer turnkey hardware platforms including the Zero-riscy open-source CPU implementation from ETH Zurich and the University of Bologna, with FreeRTOS and communication stacks running on it (in this example for Wi-Fi). All you have to add is RF, sensor and peripheral interfaces, memory as needed, a real-time clock and your application software. All for a lower price than would be achievable with the standard platform, which Franz Dugand (Director of Sales and Marketing for Connectivity at CEVA) says is one reason this platform is attracting a lot of attention.

According to Franz, the majority of their customers today use a more extended architecture in which this subsystem services all the communication functions, communicating through AXI with an application processor subsystem for more extensive processing. He tells me the Wi-Fi solution is scalable all the way from 802.11n up to ac/ax for big access points. What differs between these solutions is the implementation: clock frequency and memory size. The modems also differ from one Wi-Fi version to another, from pure hardware implementations through software-defined, running on a DSP (naturally a core strength for CEVA).

The turnkey solution for Bluetooth looks quite similar, with support for both low-energy and dual-mode operation, and proven with both RivieraWaves and third-party RF.

CEVA provides FPGA-based evaluation boards hosting the Zero-riscy implementation of the RISC-V core along with, I believe, both Wi-Fi and BT/BLE options. They have run Wi-Fi benchmarking of their implementation against both Cortus and Cortex-M0-based solutions. Running each at the same clock frequency, they have been able to show comparable performance across all three implementations.

One interesting point Franz made: they don't ship ARM cores with their reference boards, for the same cost reasons that customers may encounter. An obvious question for me was why they don't use one of the SoC FPGAs which include a built-in ARM core. His answer was revealing: they use low-end FPGAs (Spartan) to keep the board cost down, and using an SoC FPGA like Zynq would dramatically increase that cost. Also, the SoC versions tend to use big processors (Cortex-A), where the CEVA target applications will more commonly be based on Cortex-M-class processors. With a Zero-riscy core, all those problems go away; the reference board and software are truly turnkey, and at a much more accommodating price point.

Franz wrapped up with a compelling datapoint on what it took for them to move from an ARM-based implementation to a RISC-V implementation:

  • 1 week to build a new hardware platform (replace the CPU, run simulation, generate new FPGA binary)
  • 1 week to port the software
  • 1 week of system level validation

Three weeks is not a big investment to enable cutting your costs. You can learn more about CEVA RISC-V-based solutions HERE. There’s also an interesting viewpoint on how RISC-V is changing the game for IoT HERE.


Is there anything in VLSI layout other than “pushing polygons”? (8)
by Dan Clein on 04-11-2018 at 12:00 pm

The year was 1999 and I decided it was time to try something else in layout. In 1989, in Israel, I was part of the biggest chip in the world, the Motorola DSP9600. In 1998, in Canada, I was part of the biggest synchronous DRAM in the world. It was time to try analog/mixed signal/RF projects.

The opportunity came from PMC Sierra, which already had a digital team in Ottawa and wanted to build a mixed signal team here. Tad Kwasniewski, Bill Bereza and I started a new local group for Mixed Signal Design and Layout. Back to hiring and training people, setting up a new group, etc.

The rest of PMC was in a 0.18-micron process and I needed to ensure that we had a proper setup, flow, tools, verification, etc. The success of our first local chip, the CRSU-10G (OC-192), was in jeopardy without a proper setup in a new 0.13-micron process, with a pretty aggressive project in mind for the year 2000. Having had a solid system for electromigration at MOSAID, thanks to Graham Allan, I knew what had to be done; I had shared the concept in the original version of my book in 1999. The complexity of electromigration is much bigger when you have to deal with huge buffers that drive 32 mA outputs. In this case the CML cells had source devices of 3200-micron width in 0.13 microns, so the metal sizes and the number of via arrays were crucial to get right. Extracting information from the SPICE model files, I built a new 0.13-micron electromigration table. Yes, the layout guy 😊.
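
To make the table's purpose concrete, here is a minimal Python sketch of the kind of calculation such an electromigration table encodes: given a DC current, derive a minimum metal width and a via count. The current-density and per-via limits below are invented placeholders for illustration only; real values must come from the foundry's SPICE models and reliability documentation, as described above.

    # Illustrative electromigration sizing helper -- a minimal sketch.
    # J_MAX and I_VIA_MAX are placeholder values, NOT real 0.13-micron
    # limits; actual numbers come from foundry reliability data.
    import math

    J_MAX_MA_PER_UM = 1.0   # placeholder: max DC current per micron of metal width
    I_VIA_MAX_MA = 0.5      # placeholder: max DC current through a single via

    def min_metal_width_um(current_ma, margin=1.2):
        """Minimum metal width carrying a given DC current, with safety margin."""
        return margin * current_ma / J_MAX_MA_PER_UM

    def min_via_count(current_ma, margin=1.2):
        """Number of parallel vias needed for a given DC current."""
        return math.ceil(margin * current_ma / I_VIA_MAX_MA)

    # Example: the 32 mA CML output buffers mentioned above.
    print(min_metal_width_um(32.0))  # -> 38.4 um of metal width
    print(min_via_count(32.0))       # -> 77 vias in parallel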

However, when I wanted to release it for use, Colin Harris advised me to get it approved by the reliability department. I shared the file with Khai Nguyen, our PhD in reliability. After a few simulations he accepted my table, and it became law for layout and design, but only for 0.13 microns. Two years later, when we had a major hiccup in another project, PMC decided to treat electromigration much more seriously. Jurgen Hissen, one of the mixed signal designers with a flair for programming, wrote an entire software tool to check it, a novelty in 2002. Peter O'Shea, the new reliability PhD, prepared a training course and electromigration became law for all Mixed Signal Design and Layout. More about this in the next book revision coming this year.

New design types bring new challenges. At MOSAID my problem was verifying big memory chips with millions of devices; at PMC the output of mixed signal layout was actually small. Our blocks had up to 100k devices, so using a hierarchical tool had no specific value. We were using Diva for online checking during development and Calibre for final GDSII. This meant that for every process we needed to qualify and maintain two verification tools from two different vendors. I invited Carey Robertson and Dan Chapman to talk to me in Vancouver. I knew that Calibre was sold as two licences, one flat and one hierarchical, so I wanted to ask for a solution to my "IP level" verification. I explained to them that if they could cut a licence "limited by number of devices" they could sell even more Calibre, as people would replace Diva. The calculation was simple: if the user can use a single verification deck for small blocks as well as for full chip, the vendor can sell more licences and we (the users) need to qualify/calibrate only one deck per process. In this case I even built their business case, so it was a no brainer…

I had to reach Joseph Sawicki to get the ball rolling, but by the end of 2000 Calibre CB (Cell & Block), with various device limitations, was born. PMC Sierra ditched Diva and the rest is history… How is that for a layout designer's extracurricular activity?

I always liked competition, and I knew that if only one company had a tool for my world, they would stop improving. I volunteered to work with all the EDA vendors on perfecting their tools. One of my old friends from MOSAID, Jean Crepeau, now at Synopsys, got me another interesting engagement. A team from Victor, NY needed help to improve COSMOS, the Virtuoso competitor. For many quarters they drove a 700 km round trip for one day in Ottawa. They brought with them a computer disk which we hooked up to a PMC desktop, and we spent the time reviewing features, ideas, and the actions and results from the previous visit, etc. We had a lot of fun and the software was ready for market release, but politics killed it, and nobody was there to save at least the team. All that knowledge was lost, and Synopsys invented the new IC Designer/Galaxy/etc… One novel feature available in COSMOS in 2002 was resistance and capacitance calculated as you route a signal, in this case based on a table like the electromigration one; it knew how to calculate the number of vias and the metal width, table-driven. I will talk more about this when we reach EAD software from Cadence.

In 2002 CADENCE decided that their verification team working on Diva, Assura, Vampire, etc. could benefit from some training on flows. I worked with Gregg and updated the training material done for Mentor, and worked with Beverly Higazi to organize the CADENCE visit. This time the training was actually challenging: for five days in the same class I had PhDs in physics or software and Bachelors of Art, people with 20 years' experience and new hires. Our material was "too little" for some and "too much" for others, so the training room got very tense on the first day. I agreed with Beverly to work "overtime" and clarify some physics notions and concepts for the people who had never had to deal with terms like resistance and capacitance. It was another success story and I have only good memories and a lot of pictures from this experience. We all learnt a few new things. I found again that training may be one of my future hobbies, along with the need for simple explanations.

I participated in the Design Automation Conference (DAC) for 20 years and followed all their announcements for tutorials, workshops, etc. A new initiative came from Synopsys: Karen Bartleson, at that time a Marketing Director, was presenting at DAC a two-hour tutorial called "Introduction to Chips and EDA for a Non-Technical Audience." Her intention was to train people outside engineering, lawyers, financial people, etc., with a 10,000-foot view of the VLSI industry and its relation to EDA organizations.

I always had to battle with support organizations to explain what we really do in VLSI, why I sometimes want to hire aggressive people with poor "soft skills", and how important "thinking outside the box" is for our success. Karen agreed to let me audit the training and later provided me with the original material, on the condition that it would be used internally and for free. I added a lot of company-specific information, like job descriptions and a few pages from my book, to make it more specific.

I used this course many times over the last 10 years to train HR, finance, IT, purchasing, document control, etc. In one of the pilots, a financial controller who had worked in the VLSI industry for 20 years wrote to me:

“Thank you for your course; now I can explain to my family what the company is actually doing and what my personal contribution is to the company's success.”

The course is free but the rewards are “priceless”.

More to come, so stay tuned…

Dan Clein, view the rest of the series…


Embracing Architectural Intent
by Alex Tan on 04-10-2018 at 12:00 pm

During DVCon 2018 in San Jose, one widely covered topic was the necessity of describing and capturing intent. Defining our design intent up-front is crucial to the overall success of a design implementation. This is not limited to process-level intent, such as verification intent in embedded assertions or optimization intent through constraints captured in SDC or UPF; it should also be done at the architectural level. With the shift-left trend being touted at many design forums, having an architectural intent should reduce the chance of ambiguity and potential failures.

Magillem has been an EDA platform provider for configuring, integrating and verifying IPs. Its product, the Magillem Platform Design Solution, comprises four stages: specification, design, documentation and data analytics. The following table captures the stages and their adjoining product solutions.


In the current design environment, system architects tend to be disconnected from the design teams, because the facilities used to capture architectural intent are not integrated into the overall flow. For example, a hardware system description is pushed down to the design team to be recaptured as a logical implementation, which involves changes that do not get fed back to the architect. This usually occurs because the system abstraction involved basic drawings without any semantic value in the elements.

Magillem has introduced Magillem Architecture Intent, or MAI for short, as a front-end design environment for system architecture inception. It bridges the gap between software intent and hardware refinement. The input can originate either from a software map (software intent flow) or from a hardware map (block diagram). The product fits at the top of the Magillem tool chain, as captured in the table above. MAI ensures the coherency of design views stays intact as further refinement takes place along the design implementation.

The main features of MAI include the following:

  • On the software side, it captures a given system from a software system map and generates an early hardware description. The existing product, called Magillem Registers Engine (MRE), provides an advanced register language that allows one to develop, elaborate and compile register descriptions from various formats such as Excel, SystemRDL and CMSIS (Cortex Microcontroller Software Interface Standard), and generates IP-XACT output (a toy illustration of this register-to-IP-XACT step follows this list).
  • On the hardware side, it captures a given system from a hardware block diagram, and tracks and synchronizes any hardware refinement or software interface updates.
  • Allows exploration of the design schema and traversal across hierarchies, with added flexibility to filter only targeted schemas of interest, auto-place components, or graphically duplicate specific schemas.
  • Provides granular viewing of design entities (component instances, bus interfaces, connections, bus and interface parameters, etc.).
  • Generates an IP-XACT description of the captured design.
  • Includes APIs and an editor for design refinements.
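
To make the register-description flow concrete, here is a toy Python sketch that turns a register list (standing in for an Excel or SystemRDL source) into a simplified IP-XACT-style register fragment. This is illustrative only; MRE's actual register language, elaboration and full IEEE 1685 (IP-XACT) output carry namespaces, fields, resets and much more metadata.

    # Toy register-description-to-IP-XACT converter -- illustrative only,
    # not MRE's actual output. Register names/offsets are invented.
    registers = [
        {"name": "CTRL",   "offset": 0x00, "size": 32},
        {"name": "STATUS", "offset": 0x04, "size": 32},
    ]

    def to_ipxact_fragment(regs):
        """Emit a simplified IP-XACT-style XML fragment per register."""
        lines = []
        for r in regs:
            lines += [
                "<ipxact:register>",
                "  <ipxact:name>%s</ipxact:name>" % r["name"],
                "  <ipxact:addressOffset>0x%X</ipxact:addressOffset>" % r["offset"],
                "  <ipxact:size>%d</ipxact:size>" % r["size"],
                "</ipxact:register>",
            ]
        return "\n".join(lines)

    print(to_ipxact_fragment(registers))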

There are many ways of capturing architectural intent. In a top-down design flow, an integrated capture facility such as MAI may prevent incoherencies in both design effort and content.

To learn more about MAI and the associated environment, please refer to these publications: MAI – Datasheet; MAI – Press Release


Emulation Outside the Box
by Bernard Murphy on 04-10-2018 at 7:00 am

We all know the basic premise of emulation: hardware-assisted simulation running much faster than software-based simulation, with comparable accuracy for cycle-based 0/1 modeling, decently fast setup, and comparably fine-grained debug support. Pretty obvious value for running big jobs with long tests. But emulators tend to be pricey, so you really don’t want them idling waiting for the next big job; how can you leverage that resource so it’s delivering value round the clock? Certainly through virtualization where multiple verification jobs can share a common emulation resource but also by expanding use models beyond the standard “big verification” tasks.

Many of these applications are also familiar – ICE, simulation acceleration, power analysis and co-modeling with software for example. All good use-models in principle, but how are they working out in live projects? I talked with Frank Schirrmeister at DVCon last month to get insight into some customer applications.

I'll start with simulation acceleration (SA), a use-model where part of the verification task runs in simulation, part runs in emulation, and the two parts communicate/synchronize as needed. Microsemi described their use of this approach at a 2017 DAC session. They had an interesting challenge in moving to an SA configuration, since packet-switching within their SoC is controlled by third-party firmware which is often not available during the design phase. They work around this in their UVM testbench (TB) by randomizing packet-switching to cover as many switching scenarios as possible. With this setup, in SA they found a 20X speedup in run-times over pure simulation, not quite as exciting as they had expected. They traced this problem to a high level of communication between the UVM TB and the emulated DUT. Putting a little work into optimizing randomization to lower communication boosted the gain to 40X. As they stepped up design size, they saw even bigger gains. The moral here is that SA can be a big win for simulation workloads if you're careful to manage the communication overhead between the TB and the DUT (which of course should be transaction-based).
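
To see why the communication overhead matters so much, consider this toy Python model of an SA run: the emulated DUT is fast, but every TB-to-DUT synchronization costs wall-clock time, so batching stimulus into fewer, larger transactions raises the effective speedup. All numbers here are invented for illustration and are not Microsemi's measurements.

    # Toy model of TB<->DUT communication overhead in simulation acceleration.
    # All costs are invented, illustrative numbers.
    import math

    def runtime_us(n_packets, sync_cost_us, packets_per_sync, emul_us_per_packet=0.1):
        """Total wall-clock time: synchronization cost plus emulation time."""
        syncs = math.ceil(n_packets / packets_per_sync)
        return syncs * sync_cost_us + n_packets * emul_us_per_packet

    chatty  = runtime_us(1_000_000, sync_cost_us=10.0, packets_per_sync=1)
    batched = runtime_us(1_000_000, sync_cost_us=10.0, packets_per_sync=100)
    print("speedup from batching: %.1fx" % (chatty / batched))  # ~50x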

Frank also mentioned another interesting acceleration application, reported by Infineon. Gate-level simulation is becoming very important for signoff in a number of areas, yet it is often timing-based, where emulation can't help. But emulation can help in getting through initialization, beyond which the interesting problems usually appear. Runs can hot-swap from an emulation start to timing-based simulation, greatly accelerating this signoff analysis. Infineon reported that this mixed flow reduced initialization run-times from 3 days to 45 minutes, an obvious win. I would imagine that even in simulation applications where you don't need timing but do need 4-state modeling or simply interactive debug, a fast start through emulation would be equally valuable.

At an earlier DAC, Alex Starr of AMD talked about using emulation for power intent verification, by which he meant verifying that the design still works correctly as the design operates in or transitions through many power-state sequences (power-down, power-up, etc.). Alex made the point, common to many power-managed designs today, that verification has to consider all possible sources of power switching and DVFS – firmware-driven, software-driven and hardware-driven – requiring a very complex set of scenarios to be tested. What you want to watch out for is, for example, cases where the CPU gets stuck trying to communicate with a powered-down block, or cases where retention logic states are not correctly restored on power-on.

AMD still does some of this testing in simulation, but where emulation really shines is in being able to run many passes through many power sequences, where simulation might practically be limited to testing one power sequence. Why is this important? Power-state sequencing and mission-mode functionality are largely independent, at least in principle, so to get good coverage across a useful subset of the product of both, you need to run many mission-mode behaviors against many sequences. Alex stressed that being able to run an emulation model against an external C++ stimulus agent gave them the confidence they needed, at a level of coverage which would have been impossible to reach in simulation.
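
As a sketch of that combinatorial idea, the snippet below crosses a set of power-state sequences with a set of mission-mode workloads, so that each pair becomes a distinct scenario to run in emulation. The sequence and workload names are invented placeholders; AMD's actual stimulus agent is external C++ driving the emulator, but the structure of the coverage problem is the point.

    # Cross power-state sequences with mission-mode workloads -- a sketch.
    # Sequence and workload names are invented for illustration.
    import itertools
    import random

    power_sequences = [
        ("power_down", "power_up"),
        ("power_down", "retention_restore", "power_up"),
        ("dvfs_low", "dvfs_high", "dvfs_low"),
    ]
    mission_workloads = ["memcpy_stress", "cache_thrash", "io_burst"]

    # Every (sequence, workload) pair is a distinct scenario to test.
    scenarios = list(itertools.product(power_sequences, mission_workloads))
    random.shuffle(scenarios)  # vary interleavings from run to run

    for seq, workload in scenarios:
        print("run %s through power sequence %s" % (workload, " -> ".join(seq)))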

In a different application, when we think of emulation support for firmware we think of development and debug, but Mellanox has used Palladium emulation to also profile firmware against the developing hardware. To enable this analysis, they captured instruction pointers, per processor, from their verification runs. Since cycle counts are easily recovered from the run data, they could then post-process the emulation results to build the kind of information we normally expect from code profiling (e.g. prof, gprof):

  • Map instruction addresses to C code (in the F/W) through e.g. the ELF
  • Build a flat profile for each function with how many cycles it consumed, versus line of code
  • Build a hierarchical profile showing time consumed by parent/child relationships, versus (hierarchical) lines of code
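
As a rough illustration of that post-processing step, here is a minimal Python sketch that attributes cycle samples to firmware functions using an address-sorted symbol table, as one might extract from the ELF with nm or readelf. The addresses, function names and samples are invented; Mellanox's actual formats and flow will differ.

    # Attribute emulation (cycle, instruction-pointer) samples to functions.
    # Symbol addresses and samples below are invented for illustration.
    from bisect import bisect_right
    from collections import Counter

    # (start_address, function_name), sorted by address -- from the ELF.
    symbols = [(0x1000, "boot"), (0x1400, "packet_rx"),
               (0x2200, "packet_tx"), (0x3000, "idle_loop")]
    starts = [addr for addr, _ in symbols]

    def function_at(ip):
        """Map an instruction pointer to the enclosing function."""
        i = bisect_right(starts, ip) - 1
        return symbols[i][1] if i >= 0 else "<unknown>"

    # Per-processor (cycle, instruction pointer) samples from emulation runs.
    samples = [(0, 0x1010), (1, 0x1404), (2, 0x1410), (3, 0x2208), (4, 0x3004)]

    # Flat, gprof-style profile: cycles attributed to each function.
    flat = Counter(function_at(ip) for _, ip in samples)
    for fn, cycles in flat.most_common():
        print("%-12s %d cycles" % (fn, cycles))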

Mellanox noted that they were able to fully profile and optimize their firmware before hardware was available, while also having full visibility down to the cycle level to debug.

I have only touched on a few customer examples here. You can read about a hardware-performance profiling example HERE and another simulation acceleration example HERE. All of these cases highlight ways that Palladium Z1 emulation can be exploited beyond the core use-model (run verification fast). Worth thinking about when you want to maximize the value you can get out of those systems.


Cleaning Trends for Advanced Nodes
by Scotten Jones on 04-09-2018 at 12:00 pm

I was invited to give a talk at the Business of Cleans Conference held by Linx Consulting in Boston on April 9th. I am not a cleans expert, but rather was invited to give an overview talk on process technology trends and their impact on cleans. In this write-up I will discuss my presentation, covering each of the three main leading-edge technology segments: DRAM, logic and NAND.
Continue reading “Cleaning Trends for Advanced Nodes”


SPIE Advanced Lithography 2018 – ASML Update on EUV
by Scotten Jones on 04-09-2018 at 7:00 am

At the SPIE Advanced Lithography Conference in February, ASML gave an update on their EUV systems; in this blog I will provide a summary of what they presented. I have also written about my impressions of EUV from the overall conference here.
Continue reading “SPIE Advanced Lithography 2018 – ASML Update on EUV”


NVIDIA GTC 2018 Then There Were Three
by Roger C. Lanctot on 04-08-2018 at 8:00 am

If there was a central takeaway from Nvidia's GTC event last week in San Jose it was this: autonomous vehicles are already operating, or at least testing, in virtually every corner of the planet, at companies such as Tier IV and ZMP in Japan, and Pony.ai and Baidu in China. But two U.S. companies stand out globally for the growing number of vehicles they have successfully and safely put into public road operation: Waymo and Cruise.

In the wake of Uber's fatal crash in Tempe, one has to ask whether or how Uber can regain its public road testing privileges in Arizona and California, and how Uber obtained those rights in the first place. And if an operator like Uber, with estimable experts on board, can nevertheless fail to deliver a safely operating system, what are the terms and conditions of certification or recertification for on-road testing?

Even taking into account the fact that the fatal Uber crash, according to a layman's assessment of the publicly released dashcam and in-car video, appears to be a basic, comprehensive failure of the self-driving system, how do Uber and the relevant public authorities hit the reset button? Will self-driving car failures be a one-and-done proposition? Do we want Uber banned from future self-driving car tests and operations?

It’s clear that automating the driving process is no simple proposition and that solving this challenge requires extensive testing and – as was discussed at the Nvidia event – extensive simulation and modeling. The best way to accelerate the process is to increase the number of vehicles on the road to expand the data collection pipeline.

Residents of Tempe, Ariz. and San Francisco can attest to the frequent sightings of self-driving vehicles as testament to the fact that fleets of vehicles are essential to mastering self-driving operation. As such, Uber represents an ideal self-driving car development platform – especially since it is capable of combining data gathering with the provision of taxi services. Lyft, too, is seeking to act as a platform for multiple self-driving car development partners.

Companies such as Mapper.ai, for example, have sought to leverage the daily driving behavior of Uber drivers for map data gathering via aftermarket devices installed in their vehicles. Virtually every fleet operator in the world – from rental cars and taxis to delivery companies – is being sought out by mapping companies, traffic information companies and self-driving vehicle developers for the data gathering properties of their fleets.

Uber's failure (followed two weeks later by a crash of a Tesla Motors vehicle operating in "autopilot" mode) highlighted the development gap between Waymo and Cruise and the rest of the self-driving industry. It would be a shame, it seems, if Uber were unable to sort out its issues and get back in the self-driving business, but regulators, legislators and the transportation industry lack a protocol for restoring faith and credibility in an operator that has failed in the manner that Uber has.

The bottom line is that investigators from the National Transportation Safety Board are likely to conclude that the Uber vehicle simply failed to identify the pedestrian and that the Uber safety driver simply wasn't paying attention. A similar conclusion will likely arise from the fatal Tesla crash: according to data already shared by Tesla, the driver did not have his hands on the wheel for the six seconds before the crash. In both cases, driver inattention played a role.

Waymo and Cruise may be leading the race to automate driving, but that race is rapidly revealing itself to be a marathon. Scaling automated driving from warm, sunny climates to more variable environments is a years-long process. (Volvo will gain an edge from its testing operations in Sweden.) This is a process that will require fleets of vehicles to solve and Uber is an ideal candidate to lead if the company can sort out its technical and organizational challenges to the satisfaction of regulators. Whether that recovery involves new ownership remains to be seen.

The final takeaway highlights the impressive performance of yet another market player, one which has been plying the path of Level 2 autonomy: Cadillac and its Super Cruise. The Super Cruise system cleverly combines a camera-based in-cabin driver monitoring system, which scans the driver's head and eyes to ensure attention is focused on the road, with a high-definition lidar-scanned map of the roadway, to offer what may best be described as enhanced cruise control.

Super Cruise was launched on the Cadillac CT6 more than six months ago and has been operating without a reported failure, and without spawning dozens of YouTube videos showing drivers sleeping, with their feet out the window, or in the back seat. By geo-fencing the feature (so that it is only available on 130,000 lidar-scanned miles of controlled-access highways) and monitoring the drivers, Cadillac has found a way to deliver an autopilot-like experience without ever claiming autopilot functionality. And it has done so safely, without any reported crashes, injuries or fatalities.

It's a long way from Cadillac Super Cruise's SAE Level 2 assisted driving to SAE Level 4 hands-free/eyes-free driving, but it's important to give credit where credit is due, including suppliers Seeing Machines and high-definition map provider Ushr. Most importantly of all, Super Cruise is offered on a production vehicle and, finally, it is by now clear that Cadillac customers using this system understand its limitations; if they don't, the system will enforce those limitations by disengaging.

This driver monitoring and system disengagement is the beginning of a new relationship between car and driver. Cars have been helping humans drive since the onset of cruise control, electronic stability control, anti-lock brakes and all the rest of the various advanced driver assist technologies. But now we are launched on the path of removing our hands from the steering wheel while driving, yet keeping our eyes on the road. Maybe Uber will be able to rejoin the journey or maybe it won't. For the time being, Uber's driving privileges are only available in Pittsburgh, so all eyes are now on Pittsburgh to discover what the next turn of the wheel will bring for Uber. Uber has clearly gone to the trouble of integrating an in-cabin camera to watch the driver. It may be time to integrate that driver monitor with the vehicle controls a la Cadillac's Super Cruise.


The Good the Bad and Tesla and Uber
by Roger C. Lanctot on 04-08-2018 at 7:00 am

In “Willy Wonka and the Chocolate Factory” (1971), Gene Wilder plays a vaguely misanthropic Willy Wonka who leads the young winners of his golden-wrapper contest on a tour of the seven deadly sins within his candy factory and labs. (Who can forget Augustus Gloop?) At one point, Mike Teavee, a television-obsessed pre-teen, is so enamored of Wonka's experimental Wonka-vision that he insists on being transmitted, despite a half-hearted warning from Wonka himself.


Mike is thrilled when the device “transmits” him into a faux TV set, but when he steps out of the set it is clear to all but him that he has suffered a likely irreversible shrinking process to the shock and horror of his mother. Mike’s glee is undimmed. Wonka gives a shrug.

Something similar appears to be playing out at Tesla Motors as the company rolls out Autopilot 2.5 and Tesla owners take more liberties than ever with the system. Tesla was first to production with advanced Level 2 automated driving capabilities such as lane changing and passing, and people have been taking liberties with Autopilot-equipped Teslas since day one, with fatal results for at least one driver.

Tesla’s equivalent of Gene Wilder’s half-hearted warning is the admonition that the driver must keep his or her hands on the wheel at all times and pay attention to the road ahead. It’s no surprise that drivers continue to ignore these warnings (suggestions?).

The results can be interesting, like the drunk Tesla driver who claimed he wasn’t actually driving or the heart attack victim who claimed autopilot got him to the hospital. The latest episode of the Tesla follies is the Tesla driver who put his feet out the window during an “Inside Edition” interview and was subsequently pulled over by a police officer. The driver received a ticket for going too slow (25 miles per hour) in a 65 mile per hour zone – but the ticket was later dismissed. It seems the traffic code may need a rewrite to cope with semi-autonomy.

The real news, though, is the Autopilot 2.5 update. Tesla has been in the midst of a process of playing catch-up since the fatal crash in Florida two years ago, after which Mobileye (a supplier of the camera system in the original Autopilot) parted company with the automaker.

Forced to rely on its own in-house algorithms, Tesla quickly down-shifted with a software update (downgrade?) and instituted a new geo-fenced version of Autopilot that only worked in certain driving environments and at certain speeds. Over time, the geo-fence expanded and the speed restrictions were relaxed and, with the release of 2.5, Tesla may have finally achieved parity with or surpassed the Mobileye-enabled performance of the original Autopilot.

With Musk’s claimed plan to deliver full autonomy via Autopilot, this may be good news or bad. Is the Model S (or X or 3) really ready or capable of full autonomy? And what exactly is full autonomy? Can a Tesla perform like a Waymo? Probably not for a while.

The concern is that Tesla throws the driving candy out to the sinners and more or less looks the other way (Stop. Don't. Come back.) as the misbehavior unfolds. Try to pull Tesla-like shenanigans in a Cadillac with Super Cruise and the car shuts the feature down.

There’s got to be more to corporate responsibility – enforced in real-time in the vehicle a la Cadillac – than a CEO Pied Piper crooning Wonka-like “Come with me…”

The issue is highlighted in a review of vehicle videos released by the Tempe, Ariz., police in connection with the fatal crash of an Uber autonomous test vehicle with a pedestrian walking a bicycle. The driver was looking down, distracted, but the vehicle sensors ought to have detected the pedestrian in spite of the nighttime circumstances. (Guess who Uber is going to blame.)

This is yet another case where the safety driver can and likely will be blamed – but one would also be correct in holding Uber responsible for the failure of the system. The video evidence suggests a failure of the system hardware or software. Putting such a system on public roads for testing suggests a certain degree of Wonka-like indifference. Automated vehicle-friendly states like Arizona ought to think about implementing sanctions for such failures to encourage a more responsible approach from the testers. Without consequences there will be no progress.

https://tinyurl.com/yalzhgpc – What happened when a driver put his Tesla on Autopilot? – Inside Edition

https://tinyurl.com/ybzab85o – Uber driver looks down for seconds before fatal crash – Ars Technica


EDA CEO Outlook 2018
by Daniel Nenni on 04-06-2018 at 12:00 pm

The EDA CEO Outlook took an interesting turn last night, but before I get into that I will offer a few comments about the start of the show. I attend this event every year for the content but also for the networking. It isn't every day you get to hang out with the semiconductor industry elite and have candid conversations over food and drinks. I always come prepared with questions for this blog, but also for my other work inside the fabless semiconductor ecosystem.

Join us on April 5, 2018 for the 2018 ESD Alliance CEO Outlook! At this important event, the CEOs of four leading ESD Alliance member companies will discuss their views of where our industry – the semiconductor design ecosystem – is heading.

The distinguished panel will discuss major new trends they see, with the potential opportunities they anticipate. Panelists for the evening are Dean Drako (IC Manage), Grant Pierce (Sonics), Wally Rhines (Mentor, A Siemens Company) and Simon Segars (Arm). Ed Sperling (Semiconductor Engineering) will moderate the panel. Each CEO will present a brief opening statement about the future of the industry, followed by an interactive, moderated audience discussion.

The event was mostly attended by people I know, but I did make some new friends. People who read SemiWiki introduced themselves, and one person even said they had SemiWiki bookmarked in their browser, which to me is a very high compliment.

Missing were Synopsys CEO Aart de Geus and Cadence CEO Lip-Bu Tan. In the past, the panel consisted of CEOs of publicly traded companies, who either were in their quiet period and could not comment on forecast questions, or would simply take the fifth. This time there were the CEOs of a very small EDA company and a very small IP company, plus two CEOs of companies that had been acquired by much larger companies, so they were technically no longer CEOs of publicly traded companies.

Prior to the event there was a lot of discussion about the pending trade war with China. I participated in an article last week for the Chicago Tribune (Trump may win the trade battle with China but lose the war – Chicago Tribune); my quote is at the bottom of the article:

“I don’t think tariffs will be a long-term thing, but they will accelerate (China’s) campaign to become independent,” said Daniel Nenni, CEO, and founder of Silicon Valley-based SemiWiki.com, an open forum for semiconductor professionals. “You have to control silicon to control your destiny.”

The point being that a country has to have its own fabs and design its own chips to ensure supply and security. The same goes for systems companies in a sense, which is why Apple is now the most powerful fabless systems company. And based on SemiWiki analytics, literally thousands of systems companies will follow Apple into the fabless systems business model. If you are interested in how Apple became a semiconductor force of nature, read chapter seven of our book Mobile Unleashed.

The pre-programmed questions were kind of tame, but I understand how hard it is to moderate a panel like this because I did it many years ago. The first question was about positives and negatives with regard to the outlook. Simon was quite positive of course, noting that we are now transitioning from mobile as a driver to a much more diverse ecosystem with cryptocurrency, IoT, AI, automotive, etc., and he did not see any barriers to future growth, citing that mobile had everyone pulling their hair out (big laugh here since Wally and Simon are both hairline-challenged) but we persevered.

Wally said that VC money is back for semiconductor start-ups and the semiconductor industry will continue to grow.

More than $900M was invested in 2017, with AI playing a role in the majority of the start-ups, if not all of them. Déjà vu of the dot-com bubble, maybe. Successful or not, they all have to buy EDA tools, so Wally is more than happy to help.

Last year we saw 20% growth, with about half of it in memory. Wally expects non-memory to grow another 10% in 2018. Automotive was discussed, and Wally compared today's automotive burst to the one in the early 1900s, when 285 car companies narrowed down to 3; now we are back up to more than 300 car companies, suggesting history will repeat itself sooner rather than later.

When talking about the possible downside to the outlook, the best line was from Wally, responding to people casting doubt: “There are always things to worry about if you want to worry.” But clearly Wally is not worried.

I will blog about the “interesting turn” of the event next…


The 4th Way Beyond Simulation, FPGA Synthesis, and Emulation
by Camille Kokozaki on 04-06-2018 at 7:00 am

As verification continues to be a key ingredient of successful design implementation, new approaches have been tried to balance cost, time to results and comprehensiveness of analysis in designs that require large test patterns, in applications like image processing. Simulation environments are well proven, and designers tend to use approaches they are familiar with, but these tend to take a lot of time for large verification suites. FPGA prototyping provides improved runtimes, but setting up the targeting flow takes time. Emulation provides significant acceleration, but at a hefty cost.

Vaxel Inc, a Silicon Valley startup, provides a verification approach that nicely blends the simulation, FPGA synthesis and emulation methodologies; it calls this Verification Acceleration (hence the name Vaxel). It is a low-cost software solution that is FPGA-target agnostic and automates the steps in FPGA targeting, allowing designers to choose their preferred FPGA vendor and delivering a 10X-30X improvement in runtime at a much lower cost than emulation. (Disclosure: I am helping Vaxel in design enablement and I am part of the organization.)

Note that VAXEL is NOT an FPGA-based prototyping tool but a block-level verification acceleration tool.

Yasu Sakakibara, Vaxel Inc's CTO, has published a paper entitled "Image Processing Verification beyond Simulation, Emulation, FPGA synthesis".

In this paper he examines the benefits and limitations of each verification approach, summarized below with excerpts drawn from the paper:

Simulator
A Simulator is without a doubt the verification tool most widely used in chip development projects. On one hand, it provides a large degree of latitude and helps you construct a verification environment where you can check operations closely by viewing waveforms and performing highly flexible input/output simulations. On the other hand, its operating speed is devastatingly slow, so you need creative approaches in the verification environment to compensate for the slowness. The following are examples:

  • Bypassing some of the time-consuming and repetitive processes, such as initial settings.
  • Adding some debug functions to the HDL design where verification can be performed with downsized input data.
  • Preparing a set of specific verification data that is likely to reach “boundary conditions” with a reduced amount of data.
  • Executing “backdoor” memory initialization and dump.

These approaches will speed up a Simulator, but the downside is that they all lead to more complicated configuration procedures and more cumbersome code maintenance.

Emulator
While a conventional Emulator can perform high-speed verification because it is constructed with dedicated hardware to perform the HDL operations, the first and foremost issue with the Emulator is its cost. It is a very powerful tool and its capabilities are vast, but the license for both initial deployment and subsequent renewals is extraordinarily expensive. Because of this, Emulators are usually shared among multiple projects, even within the largest OEMs in the industry. Beyond the economics, from the verification procedure viewpoint an Emulator requires you to prepare behavioral models to substitute for peripheral functions, which is also time-consuming work.

Design verification on an Emulator can also suffer from poor visibility for error correction, when block-level bugs are discovered that were not caught in the earlier Simulator process because of the corner-cutting approaches taken to compensate for its slowness.

FPGA Prototyping
FPGA Prototyping is an effective low-cost verification method compared to an Emulator. However, effectively preparing the dedicated hardware requires extensive knowledge of FPGA. In many cases, an FPGA specialist is assigned to the project in addition to the design engineer and the verification engineer; as a result, the benefit of FPGA Prototyping diminishes rather quickly.
The following are other problems that need to be addressed with FPGA Prototyping:

  • The speed difference between the onboard DRAM on the FPGA and the DUT.
  • The need to modify RTL specifically for FPGA to run the DUT at high speed.
  • The observability of internal states.
  • Defining the transfer method for input and output data.
  • Determining a control method for the FPGA board.
  • The preparation of the FPGA board can itself be a time-consuming and costly project.

The Verification Acceleration Value Proposition
The ideal verification acceleration tool should come with a ready-to-use set of standard hardware function IP blocks, such as bridges and interfaces, so that setup is easy and fast. The tool should be affordable, and it should require little expertise in FPGA.

To accomplish the above, the following should be the elements and requirements of a 4th-way solution:

  • A software package available on the Internet that users can easily download, install and use.
  • Support for standard off-the-shelf FPGA evaluation boards, for low-cost deployment.
  • Use of an embedded processor inside the FPGA for control, to provide “visibility”.
  • Use of native HDL features for waveform generation and assertion settings.
  • An operation scheme and structure similar to those of the Simulator.
  • An interface to allow the Simulator to execute simulation on the tool.
  • A migration-assistance tool for verification environments already set up on the Simulator.
  • An automated FPGA synthesis script generator, so that little FPGA expertise is needed.
  • CPU libraries to manage the verification environment.
  • USB support for host-board communication.
  • Verification processes run by firmware on the embedded processor inside the FPGA, triggered by commands from the host.
  • Support for commonly used programming language libraries such as C and Python, as well as a CLI, so that test data generation can be done by non-HDL design engineers (a sketch of this follows the steps below).

The paper details the approach taken to verify an image processing design with large test patterns, in three steps:

1. Run a smoke test on the simulator.
2. Confirm the connection to the simulator.
3. Conduct verification on the FPGA: controlling the FPGA board, loading firmware onto the CPU in the FPGA, and transmitting and extracting the input/output data are all done by the host PC.
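
As an illustration of the non-HDL test data generation requirement above and of step 3, here is a minimal Python sketch. The frame generator runs as-is; the host-side calls are commented out because the vaxel module and its API names are hypothetical placeholders, not the tool's documented interface.

    # Generate a pseudo-random test frame for an image-processing DUT.
    import random

    def make_test_frame(width, height, seed=0):
        """One 8-bit grayscale frame as raw bytes, reproducible by seed."""
        rng = random.Random(seed)
        return bytes(rng.randrange(256) for _ in range(width * height))

    frame = make_test_frame(64, 48, seed=42)

    # Hypothetical host-side flow (placeholder API, per the steps above):
    # import vaxel
    # board = vaxel.connect("usb0")            # control the FPGA board over USB
    # board.load_firmware("dut_control.bin")   # firmware for the embedded CPU
    # result = board.run(input_data=frame)     # transmit input, extract output
    # assert result == golden_model(frame)     # compare against a reference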

The 4th way seems to find a balance, optimizing cost, time and block-level functionality checking through automation and acceleration.