Dover Microsystems Spins New Approach to Security
by Bernard Murphy on 11-22-2018 at 7:00 am

One of the companies I met at ARM TechCon was Dover Microsystems, who offer a product in embedded security. You might ask why we need yet another security solution. Surely we're overloaded with security options from ARM and many others in the form of TEEs, secure boot, secure enclaves and so on? Why do we need more? Because defending against security breaches is a never-ending battle; no-one is ever going to have the ultimate answer. This is particularly true in software, which even the most secure of these systems guards only to the extent that software designers comprehensively consider potential attacks and correctly use whatever memory-protection strategies are provided. Given the complexity of modern software stacks, that is not a trivial task.

So how are you going to defend against vulnerabilities you don’t even know exist – the unknown unknowns? Dover have come up with an interesting idea that I like to think of as rather like runtime assertion-based verification (ABV). Think about why we use ABV in hardware design verification. We can’t anticipate everything that might possibly go wrong in a complex system, but we do know that there are certain statements we can make about behavior that should always be true or conversely always false. We have a language in which we can write these assertions (most commonly SVA) enabling us to write our own arbitrarily complex checks. When an assertion triggers in simulation, we trace back in debug and typically find some unexpected combination of conditions we never considered. Even though we hadn’t thought of it, the ABV approach caught it anyway.

I’ve long been a believer that ABV in some form could have value beyond design verification, catching problems at runtime. This could trap potential error conditions, for immediate defensive response and for later diagnosis, leading to software revs to defend against whatever unusual state triggered the problem.

From my perspective, Dover takes an approach along these lines, but instead of assertions being embedded in the mission hardware, they run in a separate IP they call a "policy enforcer", and Dover uses "rules" rather than "assertions" to describe what they check. A collection of rules that checks for a class of attacks is called a "micropolicy". They're also not looking at microarchitecture behavior; they're watching the processor instruction trace and attempted writes to memory (along with some additional data), which makes sense since they are looking at software-based attacks. They do this in a dedicated processor, sitting to the side of the main processor, so the performance impact is limited and attacks on the main processor don't affect this processor. Nothing is immune from attack of course, but separating this function should make attacks on this monitor more difficult.

In my assertion analogy, the assertions are a bit more complex than just direct checks on instructions and addresses. You might build a state table to track where you are in significant aspects of operation flow; you may also choose to label (color) data in different regions. Dover calls this information, built for and used by the policies, "metadata", and stores it in a secured region of the main memory. So when a rule is checking the validity of an operation (say a store), it can check not only the current instruction and target address, but also the metadata. Not too different from the fancy assertions IP verification experts build for protocol checking.

When a policy is violated, the current instruction will not be allowed to complete, and an exception is triggered in the main processor. How this is handled is then up to the kernel/OS. In a package delivery drone for example, an exception might cause the drone to disable network connections, switch into safe mode, load a home GPS location from an encrypted store and fly there.

Dover includes a base set of four micropolicies with the product. The read-write-execute (RWX) micropolicy labels regions or even individual words in memory as readable, writable and/or executable, so that operations violating this labelling will be caught. The Heap micropolicy is designed particularly to catch buffer-overflow exploits. A pointer to a buffer and the buffer itself (created on a malloc) are colored the same. Attempting to write beyond the upper boundary of the malloc, into a differently-colored region, will trigger a violation. The Stack micropolicy prevents stack-smashing attacks, which attempt to modify the return address through (compile-time sized) buffer overwrites. This is conceptually similar to the heap approach but must preserve the integrity of the whole stack.
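
To make the coloring idea concrete, here is a minimal, purely illustrative sketch in Python. It is not Dover's rule language or the CoreGuard implementation; it only models the Heap micropolicy concept described above: a pointer returned by malloc and the words of its allocation share a color, and a store whose pointer color does not match its target word's color violates the policy.

```python
class PolicyViolation(Exception):
    """Raised when a rule check fails (stands in for the hardware exception)."""

class HeapPolicyModel:
    """Toy model of heap coloring; the metadata lives in plain dictionaries here."""
    def __init__(self):
        self.word_color = {}     # metadata: memory address -> color tag
        self.pointer_color = {}  # metadata: pointer id -> color tag
        self.next_color = 0

    def on_malloc(self, ptr, base, size):
        """Color a fresh allocation and the pointer that refers to it."""
        color = self.next_color
        self.next_color += 1
        self.pointer_color[ptr] = color
        for addr in range(base, base + size):
            self.word_color[addr] = color

    def check_store(self, ptr, addr):
        """Rule applied on every store: pointer and target word colors must match."""
        if self.pointer_color.get(ptr) != self.word_color.get(addr):
            raise PolicyViolation(f"store via {ptr} to {addr:#x} crosses a color boundary")

# A buffer overflow walks past the end of its allocation into differently
# colored (or uncolored) memory and is caught on the first out-of-bounds store.
policy = HeapPolicyModel()
policy.on_malloc("p", base=0x1000, size=16)
policy.check_store("p", 0x100f)          # last word of the buffer: allowed
try:
    policy.check_store("p", 0x1010)      # one past the end: violation
except PolicyViolation as err:
    print("caught:", err)
```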

The fourth base micropolicy is called CFI, for control-flow integrity, and is designed to trap code-reuse attacks. These are a variant on return-oriented programming (ROP), where instead of trying to return to an injected malware routine, the code returns to a libc function such as "system", which can then execute externally-provided malware. The CFI micropolicy profiles locations that contain instructions rather than data, along with the targets/destinations of jumps. This profile cannot be altered during runtime. If a non-tagged location attempts a jump, or a jump is attempted outside the profiled set, the policy will trigger an error.
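
Again purely as an illustration (not the CoreGuard CFI micropolicy itself), the sketch below captures the idea in Python: the control-flow profile is built when the image is loaded, is frozen thereafter, and any indirect jump from or to a location outside the profile triggers a violation.

```python
class CFIViolation(Exception):
    """Raised when a jump falls outside the load-time control-flow profile."""

class CFIModel:
    def __init__(self, jump_sites, jump_targets):
        # Metadata built at load time; frozenset emphasizes it cannot change at runtime.
        self.jump_sites = frozenset(jump_sites)      # locations allowed to issue indirect jumps
        self.jump_targets = frozenset(jump_targets)  # locations allowed to be jumped to

    def check_jump(self, source, target):
        if source not in self.jump_sites:
            raise CFIViolation(f"jump issued from unprofiled location {source:#x}")
        if target not in self.jump_targets:
            raise CFIViolation(f"jump to unprofiled target {target:#x}")

cfi = CFIModel(jump_sites={0x4000, 0x4010}, jump_targets={0x5000, 0x5100})
cfi.check_jump(0x4000, 0x5000)           # within the profile: allowed
try:
    cfi.check_jump(0x4000, 0x7f00)       # e.g. a return redirected toward libc's system()
except CFIViolation as err:
    print("caught:", err)
```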

Dover’s CoreGuard IP integrates with RISC architectures (ARM and RISC-V). The technology originated as part of the DARPA CRASH program in 2010, was incubated by Draper in 2015, and was spun out through Dover as a commercial entity in 2017. The company has now grown to over 20 people and has closed an initial $6 million round of seed funding. Even more impressive, they are working with NXP towards embedding their solution in NXP controllers. You can learn more about the company HERE.


Technology Transformation for 2019
by Matthew Rosenquist on 11-21-2018 at 12:00 pm

Digital technology continues to connect and enrich the lives of people all over the globe and is transforming the tools of everyday life, but there are risks accompanying the tremendous benefits. Entire markets are committed to and reliant on digital tools. Entertainment, communications, socialization, and many other sectors are heavily intertwined with digital services and devices that society is readily consuming and embracing. More importantly, the normal downstream model for information has transformed into a bi-directional channel, as individuals now represent a vast source of data, both in content and in telemetry. These and many other factors align to accelerate our adoption and mold our expectations of how technology can make a better world.

This year's Activate Tech & Media Outlook 2019 presentation provides tremendous depth of insight in its 153-slide deck, with a great amount of supporting data. It highlights many of the growth sectors and emerging use-cases that will have profound impacts on our daily lives.

Transforming Tech Intelligence
We are moving from the first epoch of digitally connecting people to the second epoch of making intelligent decisions through technology. Artificial Intelligence research is advancing, and with it the infrastructure necessary to make it scalable across a multitude of applications. Solutions are just beginning to emerge, yet they are already showing great promise in making sense of, and making use of, the massive amounts of data being generated.

Overall, devices and services continue to evolve with more awareness and functionality. We are in the ramp of adding ‘smart’ to everything. Smart: cars, cities, homes, currency, cameras, social media, advertising, online-commerce, manufacturing, logistics, education, entertainment, government, weapons, etc. It will be the buzzword for 2019-2020.

Such transformation opens the door for tools to begin to anticipate and interweave with how people want to be helped. Better interaction, more services, and tailored use-cases will all fuel a richer experience and foster a deeper embrace of technology in our lives. Technology will be indispensable.

Risks and Opportunities
Reliance on technology in our everyday activities means we have the luxury of forgetting how to accomplish menial tasks. Who needs to remember phone numbers, read a map, operate a car, or know how to use a complex remote control? Soon, our technology will listen, guide, watch, autonomously operate, and anticipate our needs. Life will seem easier, but there will be exceptions.

All these smart use-cases will require massive data collection, aggregation, and processing which will drive a new computing infrastructure market. Such reliance, intimate knowledge, and automation will also create new risks.

The more we value and rely on something, the more indebted we are when it fails. We must never forget that technology is just a tool. It can be used for good or for malice. There will be threats, drawn to such value and opportunity, that will exploit our dependence and misuse these tools for their gain and to our detriment. At the point people are helpless without their intelligent devices, they become easy victims for attackers. As we have seen with data breaches over the past several years, when people are victimized, their outlook changes.

In this journey of innovation and usage, public sentiment is also changing across many different domains. The desire for Security, Privacy, and Safety (the hallmarks of Cybersecurity) continues to increase but may initially be in direct conflict with our desire to rapidly embrace new innovations. This creates tension. We all want new tech toys (it is okay to admit it)! Innovation can drive prosperity and more enjoyment in our lives. But there are trade-offs. Having a device listen to, record, and analyze every word you say in your bedroom may be convenient for turning on the lights when you ask, but it may also inadvertently share all the personal activities going on without your knowledge. A smart car effortlessly transporting you to work while you nap or surf the internet sounds downright dreamy, but what if that same car is overtaken by a malicious attacker who wants to play out their Dukes of Hazzard fantasies? Not so much fun to think about.

In the end, we all want to embrace the wonderful benefits of new technology, but will demand the right levels of security, privacy, and safety.

Trust in Technology
Unfortunately, trust in digital technology is only now becoming truly important. In the past, if our primary computing device (PC or phone) crashed, we breathed a small curse, rebooted and went on our way. We might have a dropped call or lost part of a work document, but not much more harm than that. That is all changing.

In the future, we will heavily rely on technology for transportation, healthcare, and critical infrastructure services. That autonomous car we expect not to crash, the implanted pacemaker or defibrillator we expect to keep us alive, or the clean water and electricity we expect to flow unhindered to our homes may be at risk of failure, causing unacceptable impacts. We want tech, but very soon people will realize they also need security, privacy, and safety to go along with it.

But how will that work? We don’t typically think of trust in terms of high granularity. We naturally generalize for such abstract thoughts. We don’t contemplate how trustworthy a tire, bumper, or airbag is, as those are too piecemeal, rather we trust the manufacturer of the car to do what is right for all the components that make up the vehicle we purchase. We want the final product, tied to a brand, to be trustworthy. For those companies that we trust, we tend to believe, whether correct or not, in all their products and services. This reinforces tremendous loyalty. The reverse is true as well. One misstep can become a reputational blight affecting sentiment across all a company’s offerings.

The saying “We earn trust in drips and lose it in buckets” perfectly exemplifies the necessary level of commitment.

Trust may become the new differentiator for companies that can deliver secure and safe products in a timely fashion. Those who are not trustworthy may quickly fall out of favor with consumers. Privacy is the first of many problems. Consumers, government regulators, and businesses are struggling to strike a balance between gathering the data necessary for better experiences and not gathering so much that it becomes a detriment to the user. A difficult conundrum to overcome. Security and safety aspects will follow, where the potential risks grow even higher. The challenges are great, but so will be the rewards for all those who succeed. I believe those companies which master these disciplines will earn long-term loyalty from their customers and enjoy a premium for their products.

2019 might be the first year where we witness this delineation, as consumers may gravitate to more responsible companies and begin to shun those who have misplaced their trust. The big story for next year may in fact be how purchasing decisions for technology are changing, thus driving greater commitment to making products and services more secure, private, and safe.

#cybersecurity #informationsecurity #technology #risk #LinkedInTopVoices

Interested in more insights, rants, industry news and experiences? Follow me on Steemit and LinkedIn for insights and what is going on in cybersecurity.


Design Compiler – Next Generation
by Alex Tan on 11-20-2018 at 12:00 pm

Back in 1986, Synopsys started out with a synthesis product by the name of SOCRATES, which stands for Synthesis and Optimization of Combinatorial logic using Rule-based And Technology-independent Expert System. It is fair to say that not many designers know that was the birth name of what eventually turned out to be a very successful synthesis tool – Design Compiler. Over the last three decades, it has evolved to keep pace with the push and pull of Moore's Law, advancements in process technologies, shifts in compute architecture and better software algorithms. Let's replay how it has evolved over time.

Drivers to the Evolution
The first generation of Design Compiler (DC) took Boolean equations, optimized and minimized them, and generated the combinatorial logic. Over time it has been revamped to handle more features such as complex HDL constructs, pre-optimized building blocks (DesignWare IP), automatic DFT test insertion and power-related clock gating, among others. Its compile step also gradually evolved from having only a few options, such as high, medium or low effort, to being more granular, as it accommodated designers' increased awareness in optimizing more complex and higher-performance logic. This includes the options to run compile_ultra and incremental compiles.

From a methodology standpoint, DC's interaction with adjoining and downstream tools has been shaped both by the geometrical scaling drive of process technology and by designers' automation needs. For example, the use of wire load models was considered the norm during the micrometer process-node era. Designers who wished to push timing optimization further applied more area-centric, custom wire load models.

As wire scaling did not track well with device scaling in subsequent nanometer processes (as shown in figure 1), the lagging interconnect performance disrupted overall timing optimization results. TLU+ based RC modeling was then introduced to provide more accurate early wire estimation during synthesis and was embedded as part of DC-T, or DC-Topographical.
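
A toy, first-order calculation makes the point. The numbers below are purely illustrative (not foundry or Synopsys data): the resistance per micron of a wire rises sharply as its cross-section shrinks while capacitance per micron changes little, so the RC delay of a fixed-length route grows from node to node even as gate delays fall.

```python
def wire_rc_delay_ps(r_per_um, c_per_um, length_um):
    """First-order delay of a distributed RC wire, in picoseconds."""
    r = r_per_um * length_um          # total resistance, ohms
    c = c_per_um * length_um          # total capacitance, farads
    return 0.5 * r * c * 1e12         # distributed-RC factor of 1/2, seconds -> ps

# Same 100 um route on two hypothetical nodes: the newer node roughly
# quadruples resistance per um while capacitance per um barely moves.
older = wire_rc_delay_ps(r_per_um=2.0, c_per_um=0.20e-15, length_um=100)
newer = wire_rc_delay_ps(r_per_um=8.0, c_per_um=0.18e-15, length_um=100)
print(f"older node: ~{older:.1f} ps, newer node: ~{newer:.1f} ps for the same route")
```

Since gate delays shrink at each node while a fixed route slows down, the wire's share of the critical path grows, which is why synthesis needs realistic early RC estimates rather than generic wire load models.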

While designers applied frequent rebalancing of synthesis cost factors to achieve an optimal QoR (Quality of Results), the slower interconnect shifted the character of top critical paths from gate-dominated to interconnect-dominated, and caused a disconnect in assessing potential hot-spots (congested areas) in the design. This resulted in aggressive efforts by the place and route tools in attempting to complete net connectivity with the available wire resources, creating either routing congestion or scenic routes in the process. To account for this, DC was once again enhanced to be 'physically aware': DC-G, or Design Compiler Graphical, was rolled out in 2011 to provide congestion prediction aligned with the ICC place and route tool. Furthermore, deeply advanced nodes such as 10nm, 7nm and below have also imposed the necessity of accurate pre-route resistance estimation in the synthesis step. DC-G was upgraded to handle such advanced-node effects with capabilities such as layer-aware optimization and via ladders.

The Next Generation Synthesis
Early this month Synopsys released Design Compiler NXT, a new addition to the Design Compiler family of synthesis products. It addresses customer demand for greater throughput and improved PPA (power, performance and area), as well as tight correlation to physical implementation tools. It delivers 2X faster runtime, 5% better QoR for dynamic power, and a new cloud-ready distributed processing engine.

“Design Compiler NXT builds on Design Compiler Graphical's leadership position with innovations enabling improved quality of results and designer productivity. Technology innovations include fast and impactful optimization engines, cloud-ready distributed synthesis, more accurate RC estimation and new capabilities to support 5 nanometers and below,” stated Dr. Michael Jackson, Synopsys VP of Marketing and Business Development.

According to Abhijeet Chakraborty, Synopsys Group Director of R&D, Design Group, Design Compiler NXT incorporates high-efficiency engines yielding half the runtime for compile and optimization operations. Design Compiler NXT's distributed synthesis capability enables each distributed machine to optimize the design with full physical and timing context, which yields faster synthesis runtimes without sacrificing QoR.

“Design Compiler Graphical has been the trusted synthesis tool for our designs for many years and a key enabler to the development of our advanced SoCs and MCUs,” said Tatsuji Kagatani, Vice President, Shared R&D Division 2, Broad-based Solution Business Unit, at Renesas Electronics Corporation. “We are collaborating with Synopsys on the latest synthesis technologies in Design Compiler NXT and are looking forward to deploying them on our designs to help meet our ever-increasing pressure of time-to-market and higher QoR.”

Common Library, Models and Advanced Nodes Support
Design Compiler NXT also benefits users through new support for common physical libraries (common library and block abstract models) shared with IC Compiler II. Oftentimes, library updates introduced during ongoing project development can create inconsistencies between synthesis, place-and-route, and physically-aware signoff-driven ECO. It also continues to support Milkyway for full backward compatibility, and it is plug-and-play, with a user interface and scripts compatible with existing Design Compiler Graphical. This enables a seamless transition for users.

New power-driven mapping and structuring techniques, plus the addition of concurrent clock and data (CCD) optimization, deliver enhanced QoR. The tool has also been redesigned to meet the modeling needs of advanced process nodes, with improved interconnect modeling and net topology and local density analysis engines that deliver tight correlation to IC Compiler II.

In summary, Design Compiler has been the industry leader for over 30 years and has delivered synthesis innovation in the areas of test, power, datapath and physical synthesis. With this new addition, Synopsys is once again raising the bar, evolving its RTL synthesis to enable SoC designs targeting many emerging applications.

For more details on Design Compiler NXT, please check HERE


Webinar: Tanner and ClioSoft Integration
by Alex Tan on 11-20-2018 at 7:00 am

A fusion of digital and analog circuits, mixed-signal ICs are key components in many applications including IoT, automotive, communications and consumer electronics – acting as the enabler for bidirectional conversion of signals between the analog domain of the various audio, temperature and visual sensors and the digital domain of the embedded processing units.

Handoff, DM and integrated SOS7-Tanner benefits
Mixed-signal designers typically deal with shorter design cycles, as their design will either be an IP in a larger SoC or a stand-alone IC product. In both scenarios, team collaboration and robust handoff mechanisms are needed to ensure a clean integration downstream.

As part of Mentor, a Siemens Business, Tanner EDA provides a portfolio of design, layout and verification tools for analog and mixed-signal (AMS) ICs, Micro-ElectroMechanical Systems (MEMS), and IoT designs for 28nm and above. As shown in figure 1, the environment consists of schematic and layout capture, place and route, verification and chip assembly tools – segregated to serve analog-, digital- or mixed-signal-centric designs.

Integrated into the Tanner design environment is ClioSoft's SOS7 design management solution, which enables local or multi-site design teams to collaborate efficiently on all types of complex SoCs – analog, digital, RF and mixed-signal – from concept to GDSII. The three areas where DM can help are as follows:

  • Improving team efficiency through access, revision, reuse and release control management. For example, tracking release versions of an IP, or handing off a completed schematic to the layout team for further implementation.
  • Automatic, secure and optimal real-time data synchronization across sites and compute resources. For example, design data handoff or sharing across multiple sites can be done automatically without the use of ftp, rsync, cron jobs or other insecure means.
  • Design progress monitoring through milestone audits and tracking of any open issues or completed tasks. For example, tracking which blocks are completed and how many layouts are still pending can all be done easily.

To demonstrate how tightly ClioSoft SOS7 is integrated with the Tanner design flow, a design involving ADC and DAC components is used. As shown in figure 2, the design data are captured in two ways: for the analog part (the DAC), the schematic and layout are captured with S-EDIT and L-EDIT; for the digital part (the ADC control), synthesis is involved, followed by layout generation.

Each process refers to its specific libraries (PDK or standard cells) and includes the generation of many views such as schematics, layouts, and pre- and post-layout netlists. Once the post-layout design is verified or simulated, more views are produced, such as simulation reports, measurement outputs and output data. In the case of an SoC design, some of these may need to be propagated further for validation of the overall system.

Without DM, the design database becomes a convoluted collection of directories, usually in OpenAccess (OA), each consisting of many revisions of views and replicated into each designer's local area to track updates – cluttering the work area. With SOS7, the design database is captured in a common repository along with the submitted revisions, reducing the overall design footprint. Multi-site teams are able to streamline design change updates and handoffs. For example, a schematic can be securely reserved for editing, preventing concurrent edits. After each change, synchronization across the other sites is handled automatically by the SOS7 cache servers. This makes for more effective collaboration as well as better design data management. SOS7 also facilitates IP cataloging, capturing pertinent documentation and propagating any fixes and releases to the users connected to the SOS7 Design Collaboration platform. Figure 2 (bottom part) illustrates the comparison before and after SOS7 deployment on the sample design.

Driving ClioSoft DM in Tanner Environment
The ClioSoft SOS7 GUI is invocable from within the Tanner tool GUIs, such as S-EDIT and L-EDIT. As a design cockpit, the SOS7 GUI gives designers full visibility into the state of the design database – who is editing which revision of a schematic, when a particular block is done and released, and so on.

To enhance team collaboration, steps such as tagging, labeling and snapshots are recommended at the completion of each sub-process. As seen in figure 4, tags such as "Design Done", "Layout Done" and "Design Verified" can be applied to signify milestone completion and inform subsequent users of the state of the database views. Similarly, a snapshot captures all states up to a point in time (to permit undoing work if needed).

As part of good design management practice, tracked design collateral should cover all design-related views (including binary-format data files), libraries and technology files, runsets and documentation. For verification- or simulation-related processes, limit capture to the run summary, logfiles, all used inputs (such as RTL) and testbenches. The remaining large simulation and DRC/LVS results, as well as intermediate cell views, should not be tracked. This reduces the overall project database footprint, as sketched below.
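
As a purely hypothetical illustration (not an SOS7 command or API – the directory names and patterns are made up), the small Python filter below encodes that guidance: keep design views, libraries and technology files, runsets, documentation, run summaries, logs, inputs and testbenches; skip bulky simulation and DRC/LVS results and intermediate cell views.

```python
import fnmatch

TRACK_PATTERNS = [
    "*.sch", "*.lay", "*.gds", "*.v", "*.sp",        # design views, including binary data files
    "libs/*", "tech/*", "runsets/*", "docs/*",       # libraries, technology files, runsets, documentation
    "sim/run_summary*", "sim/*.log",                 # run summaries and logfiles
    "sim/inputs/*", "sim/testbench/*",               # used inputs (e.g. RTL) and testbenches
]
SKIP_PATTERNS = [
    "sim/results/*", "drc/results/*", "lvs/results/*",  # large, regenerable results
    "cells/intermediate/*",                             # intermediate cell views
]

def should_track(path):
    """Return True if a file belongs in the design-management repository."""
    if any(fnmatch.fnmatch(path, pat) for pat in SKIP_PATTERNS):
        return False
    return any(fnmatch.fnmatch(path, pat) for pat in TRACK_PATTERNS)

for f in ["top/dac.sch", "sim/results/tran0.raw", "sim/run_summary.txt"]:
    print(f, "->", "track" if should_track(f) else "skip")
```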

To recap, designers need an effective way to manage their project during the design phase. The integration of ClioSoft's SOS7 Design Collaboration platform into Tanner's IC design flow allows designers to be more productive, letting them focus on their design instead of design data management tasks, and facilitates early error finding and fixes – avoiding costly re-design iterations.

For more details on ClioSoft SOS7 DM check HERE and Webinar HERE.

Also Read

The Changing Face of IP Management

Data Management for SoCs – Not Optional Anymore

Managing Your Ballooning Network Storage


Using IP in a SoC Compliant with ISO 26262
by Daniel Payne on 11-19-2018 at 12:00 pm

The automotive segment is being well served by semiconductor suppliers of all sizes because of the unit volumes, and the constant push to automate more driving decisions in silicon and software raises lots of questions about safety, reliability and trust. Fortunately, the ISO standards body has already put in place a functional safety compliance specification known as ISO 26262, so engineers working at automotive companies have a clear idea of how to document their processes to ensure that you and I as consumers are going to be safe, whether driving the old-fashioned way with our hands on the wheel or moving towards the goal of hands-free autonomous driving. Semiconductor designers can use dozens to hundreds of IP blocks in their automotive chips, so one challenge is how to be compliant with ISO 26262 while managing so many blocks.

Design companies have teams with various Functional Safety (FuSa) roles, like:

  • Functional Safety Manager (FSM) – creates a series of surveys
  • Design Engineers – use the surveys and complete a response for each IP being used

By the end of a project the FSM has read all of the survey responses and can decide whether the automotive system meets functional safety readiness or not. Methodics is an EDA company offering their Percipient tool for IP Life Cycle Management (IPLM), and the good news is that using Percipient helps both the FSM and the design engineers document and manage their ISO 26262 process.

Change is constant, so what happens if one of your semiconductor IP blocks has been updated since its last release? Well, with a tool like Percipient you'll be notified that a new IP version is available, then make the decision to either keep the old one or go with the new one, all while having an audit trail of the decision – no more manual tracking, because it's a built-in feature. Even if your IP is hierarchical and has dependencies, you will always be in the loop if some lower-level IP block has changed, and can then decide how that change affects your compliance status. Using a design flow with Percipient lets your team manage compliance with little engineering effort while keeping context and version awareness.

You can now achieve IP reuse by using an approach which separates the IP hierarchy and dependencies from the IP data, because Percipient is maintaining the usage and dependencies outside of the IP itself. This methodology then provides a scalable workflow while re-using IP blocks.

I introduced the concept of an FSM creating surveys based on FuSa needs, and you can have surveys used at each level of the SoC design hierarchy while re-using soft IP, hard IP, and external IP. Collections of surveys that cover a specific use case are called Survey Lists. Soft IP and Hard IP can each have their own surveys and FuSa requirements:

In your team the FSM or IP owners have the flexibility to attach a specific survey to an IP. As in the example above, your team could decide that all Soft IPs should have Survey 1 attached, while all Hard IPs should have Survey 3 attached.

Even at the SoC level you will have a survey list attached, so that when you want to look at a particular survey it will filter to show only what you requested and is relevant in that context:

If FuSa compliance were not a requirement, then at the SoC level you would see none of the surveys and their results.

Let's say that in the middle of your design project you decide to swap out the Bluetooth sub-system from one vendor for another. Since your team is using Percipient, the tool automatically loads the surveys and results from the IPs in the new Bluetooth sub-system, while the designers go about their daily tasks with little impact. Version control in Percipient shows the complete history of changes to the hierarchy, so you can see compliance results over time. Change is constant, but your engineers aren't paying any penalties when it comes to compliance.

Each survey result is attached to an IP block at a specific point in time, so when you create or accept a new version of an IP you get to decide whether the survey results stay the same as before or are now changing. If the IP has changed enough, that calls for a retake of the survey. With Percipient you're going to see a traceable history of changes in survey results for each IP, without causing extra work for designers.
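
A hypothetical data-model sketch in Python (this is not the Percipient API) may help illustrate the flow: surveys are attached by IP type, results are pinned to a specific IP version, and a version bump forces an explicit decision on whether the old results carry forward or a retake is required. The survey names and questions are invented purely for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Survey:
    name: str
    questions: list

@dataclass
class IPBlock:
    name: str
    ip_type: str                                          # e.g. "soft" or "hard"
    version: str
    survey_results: dict = field(default_factory=dict)    # version -> {survey name: responses}

# The FSM attaches surveys by IP type, e.g. Survey 1 for soft IP, Survey 3 for hard IP.
SURVEYS_BY_TYPE = {
    "soft": [Survey("Survey 1", ["Q1: coding guidelines followed?", "Q2: safety manual provided?"])],
    "hard": [Survey("Survey 3", ["Q1: ASIL capability?", "Q2: FMEDA available?"])],
}

def record_results(ip, survey, responses):
    ip.survey_results.setdefault(ip.version, {})[survey.name] = responses

def bump_version(ip, new_version, carry_forward):
    """On an IP update, either carry the old survey results forward or flag a retake."""
    old = ip.survey_results.get(ip.version, {})
    ip.version = new_version
    if carry_forward:
        ip.survey_results[new_version] = dict(old)        # audit trail keeps both versions
    else:
        print(f"{ip.name} {new_version}: survey retake required")

bt = IPBlock("bluetooth_ss", "hard", "1.0")
record_results(bt, SURVEYS_BY_TYPE["hard"][0], {"Q1": "ASIL B", "Q2": "yes"})
bump_version(bt, "1.1", carry_forward=False)              # a big enough change calls for a retake
```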

Compliance reporting can be complex because there are four ASIL levels, A through D, which in turn affect who creates the surveys and who looks at the reported responses. Using an IPLM software package is the way to go, because it can handle all of the permissions and restrictions tied to the different ASIL levels.

Summary
Automotive design is a growing opportunity for electronics companies, and the challenges of being ISO 26262 compliant can be met more rapidly by using the appropriate software tools from vendors like Methodics. The Percipient tool is an IPLM approach that will help your team meet ISO 26262 compliance.

Read the complete White Paper online.



Eta Compute Receives Two Awards from ARM at TechCon
by Tom Simon on 11-19-2018 at 7:00 am

Many startups set out with the goal of accomplishing a technical feat that was previously considered impossible. Quite frankly most do not succeed. Yet, occasionally a company comes along that succeeds with a game changing breakthrough. ETA Compute has done just this. Yet, even more impressively, this 3-year-old company has done more than just develop one “impossible” technological achievement, they have developed two. The best part is that they already have working products that incorporate them. Consequently, they are positioned to radically change artificial intelligence processing on edge devices.

Eta Compute has announced their TENSAI AI platform, which is based on the ARM Cortex M3 processor and has demonstrated a 30X reduction in energy consumption for image classification. Using only 0.4 mJ per image, it has bested a previously published energy consumption figure for a different processor that used 30mJ for the same task. The unique technology enabling this is Eta Compute's delay-insensitive asynchronous logic (DIAL). Not only does it save tremendous amounts of power, but it enables dynamic voltage and frequency scaling, and near-threshold voltage operation.


Their novel processor, based on their asynchronous logic, won them two awards at ARM TechCon: Design Innovation of the Year and Best Use of Advanced Technologies. By implementing the extremely popular and proven Cortex M3 processor with dramatically improved power efficiency, they have opened up opportunities for applying more comprehensive AI processing on edge devices. When edge devices can efficiently run neural networks, a large number of new applications open up. At the same time, there are reductions in overall power consumption, latency, and bandwidth.

The TENSAI processor offers a Cortex M3 running at up to 100MHz, with sub-1uA sleep current, 512K Flash and 128K SRAM. It has an 8/16-bit dual-MAC DSP, with independent SRAM for the M3 and the DSP. Lastly, it has DMA engines to ensure efficient data transfer from IO to memory. Its highly efficient PMIC adjusts voltage to keep operating frequency constant despite process and voltage variations. An interesting characteristic of the device's performance is the attractive scaling of current per MHz. For instance, running CoreMark at 3.3V and 10MHz the processor draws 13.3 uA/MHz; at 100MHz it draws 18.1 uA/MHz.
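
As a quick back-of-the-envelope check on those figures (and assuming the 3.3V supply quoted for the 10MHz CoreMark run also applies at 100MHz, which is an assumption on my part), the current-per-MHz numbers translate into total current and power as follows:

```python
def core_power(ua_per_mhz, freq_mhz, vdd=3.3):
    """Convert a uA/MHz figure at a given frequency into total current (mA) and power (mW)."""
    current_ma = ua_per_mhz * freq_mhz / 1000.0   # total current in mA
    power_mw = current_ma * vdd                   # power in mW at the assumed supply
    return current_ma, power_mw

for ua_per_mhz, freq in [(13.3, 10), (18.1, 100)]:
    i_ma, p_mw = core_power(ua_per_mhz, freq)
    print(f"{freq:>3} MHz: {i_ma:.2f} mA, ~{p_mw:.1f} mW")
# ~0.13 mA (~0.4 mW) at 10 MHz versus ~1.81 mA (~6.0 mW) at 100 MHz:
# a 10x jump in frequency costs only about a 36% rise in current per MHz.
```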

The second technical accomplishment that Eta Compute has under their belt is the implementation of a Spiking Neural Network (SNN). This kind of network works more like its biological analogue than traditional CNNs do. SNNs require fewer neurons and only require addition operations, which can make them 100X more efficient than traditional CNNs. Eta Compute says that their SNN is capable of doing unsupervised learning with no data labels, which makes it ideal for anomaly detection applications.

Eta Compute is actively exploring new application areas for their technology. For instance, always-on wake-up features are familiar to anyone who uses Google Home or Alexa. This is an ideal edge operation, as it needs to run fast and not require sending large amounts of raw data to the cloud. Data reduction at the edge is another promising technique. Edge neural networks can crop and filter data intelligently so smaller data sets are sent to the cloud for more intensive processing. Eta Compute says they already have a number of novel engagements to apply their technology in agriculture, retail, factories and even in data centers where there are large numbers of sensors deployed to monitor environmental parameters.

By combining their hardware and software advantages, it seems that Eta Compute is in a strong position for the move to enable AI at the edge, where its impact will be significant in the coming years. We have already seen the changes brought about by pervasive connectivity. Anyone looking at what is ahead can see that there will be even more dramatic changes coming from pervasive AI. More information on Eta Compute's technology and products can be found on their website.


Why Apple failed in India and how it can recover
by Vivek Wadhwa on 11-18-2018 at 12:00 pm

Apple iPhone sales in India are expected to have fallen dramatically this year, to two million phones from three million last year. Reuters reports that during Diwali, the peak shopping season, Apple stores were deserted. This occurred in the world's fastest-growing market, in which smartphone sales often increase by more than 20% every quarter.

Yet Apple's loss of the Indian market was entirely predictable. In a Washington Post column of March 2017, I described Apple's repetition in India of the mistakes it made in China: relying entirely on its brand recognition to build a market for its products there. Rather than attempt to understand the needs of its customers, Apple made insulting plans to market older and inferior versions of iPhones to its Indian customers — and lost their loyalty.

The iPhone no longer stands out as it once did from its competition. Chinese and domestic smartphones boasting capabilities similar to those of the iPhone are now available for a fraction of the iPhone’s cost. Samsung’s high-end phones have far more advanced features. And, with practically no brand recognition by the hundreds of millions of Indians who are buying their first devices, Apple does not have any form of product lock-in as it does with western consumers who have owned other Apple products and are now buying smartphones. Apple also made no real attempt to customize its phones or applications to address the needs of Indian consumers; they are the same as in the United States. Siri struggles no less on an Indian iPhone than on a U.S. one to recognize an Indian name or city or to play Bollywood tunes.

It wasn’t even their technical superiority that made the earlier iPhones so appealing to the well-to-do in India; it was the status and accompanying social gratification they offered. There is no gratification in buying a product that is clearly inferior. Indian consumers who can afford iPhones want the latest and greatest, not hand-me-downs.

So Apple could hardly have botched its entry into the Indian market more perfectly.

And it’s not just Apple’s global distribution and marketing strategy that needs an overhaul. The company needs to rethink the way it innovates. Its pursuit of perfection is out of touch with the times.

The way in which innovation happens now is that you release a basic product and let the market tell you how to make it better. Google, Facebook, Tesla, and tens of thousands of startup companies are always releasing what are called minimum viable products, functional prototypes with the most basic of features. The idea is to get something out as quickly as possible and learn from customer feedback. That is because in the fast-moving technology world, there is no time to get a product perfect; the perfected product may become obsolete even before it is released.

Apple hasn’t figured that out yet. It maintains a fortress of secrecy, and its leaders dictate product features. When it releases a new technology, it goes to extremes to ensure elegant design and perfection. Steve Jobs was a true visionary who refused to listen to customers — believing that he knew better than they did about what they needed. He ruled with an iron fist and did not tolerate dissension. And people in one Apple division never knew what others in the company were developing; that’s the kind of secrecy the company maintained.

Jobs’s tactics worked very well for him, and he created the most valuable company in the world. But, since those days, technological change has accelerated and cheaper alternatives have become available from all around the globe.

Apple’s last major innovation, the iPhone, was released in June 2007. Since then, Apple has been tweaking that device’s componentry, adding faster processors and more-advanced sensors, and releasing it in larger and smaller form factors — as with the iPad and Apple Watch. Even Apple’s most recent announcements were uninspiring: yes, yet more smaller and larger iPhones, iPads, and watches.

There is a way in which Apple could use India’s market to its advantage: to make it a testbed for its experimental technologies. No doubt Apple has a trove of products that need market validation and that are not yet perfect, such as TV sets, virtual-reality headsets, and new types of medical devices. India provides a massive market that will lap up the innovations and provide critical advice. Apple could develop these products in Indian languages so that they aren’t usable back at home, and price them for affordability to their Indian customers.

To the visionaries who once guided Apple, experimenting with new ideas in new markets would have been an obvious possibility to explore. Taking instead the unimaginative option of dumping leftovers on a prime market suggests that Apple’s present leaders have let their imaginations wither on the vine.

For more, follow on Twitter: @wadhwa and visit my website: www.wadhwa.com


AMAT and the Jinhua Jinx!
by Robert Maire on 11-18-2018 at 7:00 am

Applied Materials reported a merely "in line" quarter, but guidance was well below street expectations. AMAT reported EPS of $0.97 and revenues of $4.01B versus the street's $0.97 and $4B. Guidance missed the mark by a wide margin, with revenues of $3.56B to $3.86B and EPS of $0.75 to $0.83 versus already reduced street expectations of $3.94B and $0.92 in EPS. Applied's stock was down almost 10% in after-market trading.

We had predicted in our preview note yesterday that we thought Applied would disappoint versus reduced expectations and they certainly did.

Jinhua Jinx
We suggested that the Jinhua loss, which was downplayed by KLAC and LRCX, was going to be worse at AMAT due to their extra China exposure, and we were proven correct: AMAT management laid most of the blame for the weak Q/Q guidance on Jinhua, saying that revenue would have been flat to up without the Jinhua issue.

Share Slump
We also suggested in our preview that we were concerned about share loss and Applied said on the call that share loss was the second reason for the worse than expected guidance.

Gary Dickerson, CEO of Applied, said that current conditions "do not play to Applied's strengths", which is code for share loss. Management suggested that the EUV rollout was part of the share loss issue, something we have been talking about for a long time, as multi-patterning use will be reduced and fabs will be spending more on litho-related tools.

“Can you Canoe?”
Management also said that we will see more of a "shallow and gradual recovery" as compared to previous expectations of a quicker comeback. This suggests more of a "U" or "canoe" shaped cycle bottom than previously expected.

It seems very clear that 2019 will be weaker than the 2018 WFE peak.

We had suggested in our preview that AMAT would buy back fewer shares, which would not help offset reduced EPS as it did at Lam, which bought back a slug of stock to pump up EPS. In the quarter AMAT bought back about $750M in stock, so the EPS weakness was more apparent. AMAT may be keeping some of its powder dry and may amp up its buybacks if the share price drops further.

Service Saves & Display is OK
Service was very strong and up 18% while systems were down by 5%. Display at $700M was not bad and in line with plan. Applied said that going forward systems could be down 21% but service up 7%.

Applied harder hit
The main issue we pointed to in our preview note was that we thought AMAT would get hit harder than its peers in the group and this report seems to underscore that view and prove our thesis correct.

Lam is also exposed to similar share weakness but perhaps not as much China exposure. ASML with its EUV roll out and long lead times is perhaps the most immune to the current weakness. KLAC is also more resistant to the near term issues as it supports the EUV roll out but it does have a high China exposure at 25% of business.

The Stocks
We expect AMAT to get hit for the disappointment, and it's enough of a miss and a weak enough guide to take the rest of the group down in sympathy along with it. We have been saying that while we are getting closer to a bottom, we are not there yet, so it's still not safe to go back in the water and buy the stocks.

The overhang from China remains far from resolved, and AMAT's report makes us painfully aware of that. There is not likely to be a near-term recovery in memory, and the weakness could continue for much of 2019. Right now there is zero visibility as to the timing of a recovery, and AMAT underscored that with its "shallower and more gradual" comment. We continue to stay on the safety of the sidelines, avoiding the group and more specifically Applied.


I Thought that Lint Was a Solved Problem
by Daniel Nenni on 11-16-2018 at 12:00 pm

A few months back, we interviewed Cristian Amitroaie, the CEO of AMIQ EDA. We talked mostly about their Design and Verification Tools (DVT) Eclipse Integrated Development Environment (IDE) and how it helps design and verification engineers develop code in SystemVerilog and several other languages. Cristian also mentioned their Verissimo SystemVerilog Testbench Linter, which enforces group or corporate coding guidelines for verification environments written in SystemVerilog. This stuck in my mind because it’s hard to imagine making money selling a stand-alone linter when so much related functionality is already built into the front ends of simulators and logic synthesis tools. So, I started asking more questions.

A little history lesson may be useful. The idea of a dedicated utility to examine code for programming errors and dubious constructs arose in the Unix team at Bell Labs in the late 70s, where Steve Johnson wrote the original version. The name “lint” cleverly captured the idea of getting rid of something unwanted. We have lint brushes for our clothes, lint traps in our dryers, and lint tools for our code. Many typos and simple syntax errors could be detected very early in the coding process, and over time linters added deeper analysis and the ability to find some semantic errors as well.

C programmers who used lint couldn’t imagine life without it. Its fast runtime and precise messages made it an ideal tool for use whenever code was changed. In fact, lint was sometimes added as a “screen” for checking in code to revision control systems. Linters were developed for other programming languages and became essential tools for software engineers. Then, in the late 80s and early 90s, hardware engineers started coding as well. Schematics were replaced by hardware description languages (HDLs) such as Verilog and VHDL. Hardware designers soon faced many of the same debug challenges as their programming colleagues.

In the mid-90s, InterHDL introduced Verilint, which checked for typos, races, dead code, undeclared variables, and more. As with original lint, Verilint ran quickly and produced clear, accurate results. Other companies developed competing products but, over time, most of them were absorbed into the “Big 3” EDA vendors. Many of the linting capabilities were rolled into the front ends of other tools. This leads me back to my opening question of whether there is a place in the market today for stand-alone linters and how AMIQ is being successful.

For a start, Verissimo has the advantage of being faster and better suited to the linting task than simulation or synthesis. But as I dug into Verissimo further, I began to appreciate why it is so popular. One immediate asset is the number of rules: more than 450 according to Cristian, about 75% of them based on the coding guidelines of actual users. This ensures that the problems reported are real issues for real testbenches. Although Verissimo also finds problems in designs, its focus is on testbenches, which are far more complex. This code is not well checked by simulators or traditional linters, while logic synthesis tools never read in testbenches at all.

Further, testbenches use the most advanced object-oriented constructs of the SystemVerilog and Universal Verification Methodology (UVM) standards. That’s why so many rules are needed. In fact, the latest version of the IEEE SystemVerilog standard is more than 1300 pages, and the latest UVM release adds nearly 500 more. With this complexity come overlapping constructs, multiple ways of doing the same thing, and opportunities for language misuse, performance issues, and code maintenance issues. Verissimo enforces best practices for dealing with all these challenges, with rules based on problems found by real users in the past.


Figure 1: Verissimo displays the results from checking more than 450 SystemVerilog rules

Now that verification consumes much more of a project schedule than design, managers are looking closely at how to improve efficiency. With two or three verification engineers per designer, it is important for all members of the team to be aligned on coding style and interoperability. Cristian points out that testbench linting is a great way to ensure that everyone is following the rules for coding conventions, naming conventions, constructs/patterns/recipes to be used or avoided, and even the organization of testbench files. It brings new team members up to speed quickly, reducing their learning curve and aligning them to the prescribed way of coding. Verissimo also automates most of the traditional manual code review tasks, saving even more project time.
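
To give a flavor of what a naming-convention rule looks like, here is a toy Python sketch. It is in no way representative of Verissimo's engine or its 450-plus rules; it just checks one invented project guideline, that a class extending a UVM base class must carry a matching suffix (e.g. *_driver, *_monitor, *_env).

```python
import re

# Hypothetical guideline: required class-name suffix per UVM base class.
SUFFIX_FOR_BASE = {
    "uvm_driver": "_driver",
    "uvm_monitor": "_monitor",
    "uvm_env": "_env",
    "uvm_sequence": "_seq",
}

CLASS_DECL = re.compile(r"class\s+(\w+)\s+extends\s+(\w+)")

def check_naming(sv_source):
    """Report classes whose names break the (invented) suffix convention."""
    violations = []
    for lineno, line in enumerate(sv_source.splitlines(), 1):
        match = CLASS_DECL.search(line)
        if not match:
            continue
        name, base = match.groups()
        suffix = SUFFIX_FOR_BASE.get(base)
        if suffix and not name.endswith(suffix):
            violations.append(f"line {lineno}: class '{name}' extends {base} "
                              f"but does not end in '{suffix}'")
    return violations

testbench = """\
class eth_driver extends uvm_driver;
class eth_mon extends uvm_monitor;
class top_env extends uvm_env;
"""
for violation in check_naming(testbench):
    print(violation)
```

A real testbench linter of course parses the full language rather than pattern-matching text, which is part of why a dedicated tool is needed.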

The resource demands of verification also mean that testbench code is more likely to be reused than in the past. Verification harnesses, interface checkers, and other testbench elements are often applicable to multiple generations of designs. In fact, Cristian argues that the lifespan of verification code may exceed the lifespan of actual designs. Sometimes this code lives so long that nobody remembers exactly what it does, but everybody is afraid to throw it away or rewrite it. Testbench linting plays an important role in ensuring that new and legacy code is consistent. Just as with original C lint, Verissimo is frequently used as a screen before check-in.


Figure 2: Debugging Verissimo rule violations is easy within DVT Eclipse IDE

Verissimo supports the definition of new rules, customization of rules by changing/tuning parameters, enabling, disabling, or setting severity levels, and waiving of specific rule instances. Verissimo can be run from a command line, started from a regression manager, included in a continuous integration process, or invoked from within DVT Eclipse IDE. Cristian notes that it is easier to see and fix errors when running in the IDE GUI, a big improvement over early linters. All its current users also have access to lots of other tools, so clearly Verissimo adds unique value to the design and verification flow. I’m now sold on the idea that a stand-alone linting tool with the right set of features can succeed today. By the way, I also learned that “Verissimo” means “very true” in Italian.

To learn more, visit https://www.dvteclipse.com/products/verissimo-linter.

Also Read

Easing Your Way into Portable Stimulus

CEO Interview: Cristian Amitroaie of AMIQ EDA

Automated Documentation of Space-Borne FPGA Designs