McKinsey Mangles Micromobility
by Roger C. Lanctot on 02-08-2019 at 12:00 pm

Micromobility – characterized by shared bicycles, scooters and other wheeled devices – was all the rage in 2018 and that enthusiasm has carried over into 2019 – if McKinsey is to be believed. McKinsey published a white paper last week touting the glories of micromobility and a potential market of $300B-$500B by 2030. The only problem? McKinsey neglected to recognize obvious sources of woe in the sector.

Micromobility’s 15,000-mile Checkup – McKinsey

Micromobility leaders around the world are struggling to come to grips with unanticipated levels of theft, vandalism and vehicle abuse. These issues have combined to undermine the confidence of investors and operators.

Simultaneously, the increased awareness of scooter-related injuries is alarming transportation policy makers. A Consumer Reports study found 1,545 electric scooter-related injuries in the U.S. since late 2017.

Investigations Find Scooters a Cause of 1,500+ Accidents – Consumer Reports

Combined with the disinclination of scooter and bike users to wear helmets and the inclination to leave scooters and bikes in inconvenient public spaces, the micromobility explosion is in danger of imploding. These downside insights are coming to the fore as cities weigh the relative merits of shared cars, bikes and scooters as part of an increasingly diverse portfolio of transportation options.

The most obvious failure in the McKinsey report was the estimate that the average scooter lasts four months. Strategy Analytics has found that scooters typically last anywhere from one to three months, definitely not four. McKinsey further estimates repairs at $0.51/ride – with an average of five rides/day translating to $2.55/day allocated for repairs – or $306 over a four-month life cycle.
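Spelled out, that repair allocation works out to

\[ \$0.51/\text{ride} \times 5\ \text{rides/day} \times 120\ \text{days} \approx \$306, \]

assuming a four-month (roughly 120-day) life.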

Strategy Analytics interviews with operators reveal that most don’t repair their scooters – they cycle the fleet, replacing the old and worn with new scooters. Occasional repairs, yes. Not $306 over a four-month period.

There are other flawed assumptions regarding transactions, fees and taxes. Attendees at the recent Micromobility conference in Richmond, Calif., were treated to a healthy dose of enthusiasm leavened by the more sobering concerns of coping with the high cost of keeping scooters on the street in the face of higher-than-expected levels of theft and abuse.

The most full-throated endorsement came from Horace Dediu, co-founder of Micromobility Industries. Dediu looks at the market in terms of the number and length of trips, noting that the total addressable market for micromobility consists of all 0-5 mile journeys, comprising a total of 4 trillion kilometers/year. (Don’t ask me why Horace combined miles and kilometers.)

The Reason for Micromobility – Luke Wroblewski blog

Dediu may be correct. That would explain the rapid valuation ramp for companies like Bird and Lime. But attendees at the conference came away recognizing that locks, docking stations and drop off points need to see wider adoption, a solution to the helmet problem must be found, and the scooters themselves need to be made more durable.

Users of scooters need to learn better etiquette as well – an even steeper objective for operators. If the rising tide of reported injuries is not corralled, cities can’t be expected to be unalloyed advocates of micromobility.

Bottom line – there are a lot of bumps along the path to a multi-hundred billion dollar market opportunity. It’s best to approach that opportunity with a helmet, open eyes and ears and better market estimates.


Data Management Challenges in Physical Design
by Alex Tan on 02-08-2019 at 7:00 am

IC physical design (PD) teams face several challenges while dealing with tapeout schedules. With shrinking process nodes and stringent PPA targets, the complexity of physical design flows and EDA design tools has increased multifold. In addition, the amount of design data that needs to be managed has grown exponentially. Managing design data becomes even more important when design teams collaborate across multiple design sites.

Some of the challenges faced by physical design teams are:
• Network disk space explosion
• Multi-site collaboration
• IP reuse and tracking
• Direct exposure to foundry technology changes
• Clean front-end and back-end handoffs

The SOS7™ design-management platform from ClioSoft® empowers single or multi-site design teams to collaborate efficiently on complex analog, digital, RF and mixed-signal designs from concept to GDSII within a secure design environment. Tight integration with EDA tools, together with an emphasis on performance for data transfer, security and disk space optimization, provides a cohesive environment that enables design teams to streamline the development of SoCs.

In terms of data management from a physical design perspective, let us take a look at the following three primary issues and how they are addressed using SOS7:
• Handling network disk space explosion
• Treating design data as a composite design object rather than just files
• Reusing and tracking IPs

Handling disk space explosion
As technology nodes shrink, the design databases produced by physical design tools have grown exponentially in size. The design databases are often tagged for consumption by functional and geographically dispersed design teams.

The SOS7 design management platform provides a unique feature – the Cache server – which helps manage network disk space. SOS7 creates shared smart cache areas that host all design files not currently being modified. User access to these design files is provided by tool-managed Linux symbolic links in the user’s working directory. This one feature can reduce a design team’s storage requirements by up to 90%, since most designers working on a project tend to download all the project data into their workspaces.
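The underlying idea can be sketched in a few lines of Python. This is a minimal conceptual illustration of the symlink-on-cache approach – not ClioSoft’s implementation – and the paths, file names and function names are hypothetical:

```python
import os
from pathlib import Path

# Hypothetical locations: a shared, read-only cache of immutable design files
# and a per-user workspace. Conceptual sketch only, not the SOS7 tool.
CACHE_ROOT = Path("/proj/cache/chip_top/rev_42")
WORKSPACE = Path.home() / "work" / "chip_top"

def populate_workspace(files_to_edit=frozenset()):
    """Link unmodified files from the shared cache; copy only files to be edited."""
    for cached_file in CACHE_ROOT.rglob("*"):
        if cached_file.is_dir():
            continue
        rel = cached_file.relative_to(CACHE_ROOT)
        target = WORKSPACE / rel
        target.parent.mkdir(parents=True, exist_ok=True)
        if target.exists() or target.is_symlink():
            continue  # already populated
        if str(rel) in files_to_edit:
            # Files the user will modify get a private, writable copy.
            target.write_bytes(cached_file.read_bytes())
        else:
            # Everything else is a symbolic link into the shared cache,
            # so the bulk of the database is stored only once on disk.
            os.symlink(cached_file, target)

if __name__ == "__main__":
    populate_workspace(files_to_edit={"blocks/cpu/floorplan.def"})
```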

From a design collaboration perspective, the cache server also functions as an agent, prefetching the files for remote users in case of geographically dispersed multi-site scenarios. This ensures that the remote teams are not waiting on the design data to be available at their site, thereby not wasting precious time when a tapeout deadline looms large.


Managing design data as composite design objects
A typical physical design database is essentially a collection of files that are auto-generated by the physical design tools. When a physical design engineer kicks off a run, depending on the operation (for example, floorplanning, placement, routing) and the optimization options and/or goals set for the run (search and repair, SI, post-route optimization), the PD tool may change a large number of files in the database or only a few. From a design data management perspective, any change, whether it touches a few files or many, is treated as a new revision of the design data.

SOS7 incorporates UDMA technology, which allows CAD teams to define a composite object as a collection of files (the physical design database). Using this technology, SOS7 automatically tracks changes to individual files in a composite object and translates them into changes to the design object. The physical design team therefore tracks changes to the design object rather than to the individual files changed during the design cycle. This often proves to be a very useful feature for design teams dealing with large quantities of data, and it lets the PD engineer easily track down the changes made in any specific run.
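To make the idea concrete, here is a small Python sketch that treats a directory of tool-generated files as one composite object and records a single new revision whenever any file inside it changes. It is a generic illustration of the concept under an assumed file layout, not the UDMA implementation:

```python
import hashlib
import json
from pathlib import Path

def snapshot(db_dir: Path) -> dict:
    """Hash every file in a physical-design database directory."""
    manifest = {}
    for f in sorted(db_dir.rglob("*")):
        if f.is_file():
            manifest[str(f.relative_to(db_dir))] = hashlib.sha256(f.read_bytes()).hexdigest()
    return manifest

def composite_changed(db_dir: Path, last_manifest_file: Path) -> bool:
    """Return True and record one new composite revision if any file changed."""
    current = snapshot(db_dir)
    previous = json.loads(last_manifest_file.read_text()) if last_manifest_file.exists() else {}
    if current != previous:
        last_manifest_file.write_text(json.dumps(current, indent=2))
        return True  # one revision of the design object, however many files moved
    return False
```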

Tracking IPs used in your design
With the objective of shrinking product development cycles, many design companies leverage IP reuse, drawing on either internal or third-party IPs. In the case of IP reuse, physical design teams face two main challenges:

1. The top level integrator has to be absolutely certain that the product tapeout includes the appropriate releases of IP blocks from downstream sources.
2. IP developers need to keep track of the different releases of the IPs being used in upstream SoC products.

The SOS7 design platform provides an IP referencing feature that allows product teams to choose the specific release of an IP they are going to incorporate at their top level. Whenever a newer version of the IP becomes available, SOS7 notifies the users and engineering leads that a newer release is available for use. The design lead can then review the issues fixed in the newer release, or review the release notes, before deciding to upgrade. If the designer does decide to upgrade, there is an option to try out the new release before moving the entire design team to the newer version of the IP.
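Conceptually, this amounts to pinning an IP release per block at the top level and flagging when a pin falls behind the newest available release. A tiny Python sketch of that bookkeeping (the data structures and names are illustrative only, not the SOS7 API):

```python
# Hypothetical release data: what the top-level design is pinned to,
# and the newest release published by each IP team.
pinned_releases = {"serdes_phy": "2.1", "ddr_ctrl": "1.4"}
latest_releases = {"serdes_phy": "2.3", "ddr_ctrl": "1.4"}

def check_ip_releases(pinned: dict, latest: dict) -> None:
    """Flag IPs whose pinned release lags the newest available release."""
    for ip, used in pinned.items():
        newest = latest.get(ip, used)
        if newest != used:
            print(f"IP '{ip}': release {newest} is available (design pinned to {used}); "
                  f"review the release notes before upgrading.")

check_ip_releases(pinned_releases, latest_releases)
```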

SOS7 also provides a live report for the producers and maintainers of IPs showing which of their releases are used in upstream products. This helps IP producers plan their next release and notify the consumers of their IPs if a serious issue arises.

To conclude, the SOS7 design management platform provides a solution to the data management challenges faced by physical design teams and flows.

For more information on SOS7, click HERE.

Also Read

Webinar: Tanner and ClioSoft Integration

The Changing Face of IP Management

Data Management for SoCs – Not Optional Anymore


Building Better ADAS SOCs
by Tom Simon on 02-07-2019 at 12:00 pm

Ever since we replaced horses in our personal transportation system, folks have been pining for cars that offer some relief from the constant need for supervision, control and management. Indeed, despite their obvious downsides, horses could be counted on to help with steering and obstacle avoidance. There are even cases where a horse could guide its owner home without any supervision. We’ve had to wait a long time, but it appears that partially and even fully autonomous vehicles are finally arriving or on the horizon. Getting to this point took far longer than anyone hoped. Remember the Jetsons?

During this interval, humans have proven that there is a lot of room for improvement when it comes to driver safety. The same is true for congestion management and fuel efficiency. Autonomous vehicles will also open up access to effective transportation for many classes of people, such as the elderly or disabled, who face serious restrictions today. The push for autonomous vehicles really got moving when DARPA announced its first Grand Challenge for driverless vehicles in 2002. When the first challenge was held in 2004, not one vehicle crossed the finish line. However, the following year five vehicles completed the 212 km race.

The SAE J3016 taxonomy describes six levels of driving automation, ranging from level 0 with no automation to level 5, which is fully autonomous and requires no driver at all. Moving up the chain to each higher level requires exponentially more processing power. The single biggest technology breakthrough enabling the progress we are seeing today is AI in the form of machine learning (ML). However, within this field there are a large number of evolving technologies and architectures. Should processing be centralized or distributed? Should it use traditional CPUs or SOCs? In building autonomous vehicles, major challenges have popped up concerning in-car networking bandwidth, memory bandwidth, reliability, and adaptability to evolving standards and required upgrades, among others.

Sensor data is exploding, expanding from pure optical sensors to proximity, LIDAR, radar, etc. This data needs to be combined using sensor fusion to create a real-time 3D model of the car’s environment that the vehicle can use to make navigation decisions. The natural outcome of this pressure is to place processing power closer to the sensors to speed up processing and reduce the amount of data that needs to be transferred to a central processing unit. This comes with a side benefit of power reduction. Other data sources will include 5G V2V and V2X communication that will let cars exchange information and allow the road environment to communicate with the car as well. Examples might include information about traffic, road repairs, hazards or detours.

A new white paper from Achronix talks about meeting the power, performance and cost targets for autonomous vehicle systems. Often designers need to choose between CPU-style processors or dedicated ASICs to process the information flowing through the system. However, there is an interesting third choice that can help build accelerators and offers adaptability in the face of changing requirements or updates. Achronix offers an embeddable FPGA (eFPGA) fabric that can be built into an SOC. Because it is on-chip, it can have direct high-speed access to memory and system caches. Their latest GEN4 architecture is targeted at machine learning applications, with special features for handling ML processing.

In addition to the obvious advantages that embeddable FPGA can offer – such as lower power, higher-speed processing, the ability to update algorithms in the field, or retargeting SOCs for multiple applications – it offers unique features for safety and reliability. As part of automotive safety standards such as ISO 26262, there is a need for frequent self-testing through BIST and other types of proactive test error insertion and monitoring. eFPGA can be reprogrammed during system bring-up, and even during operation, to create test features as needed. Another unique application for eFPGA will be in V2X communication implementation. When eFPGA is used in communication SOCs, it can be field updated to handle next-generation communication protocols, like those in the upcoming 5G rollout.

The Achronix white paper outlines a number of other interesting ways that eFPGA can help enable the development of autonomous vehicle processing systems. It discusses how each instance can be specifically configured with the resources needed to perform the intended function. This eliminates some of the other problems with off-chip programmable logic – wasted real estate and mismatched resources. The white paper, which can be downloaded from their website, also offers interesting insight into how the market for autonomous vehicles is shaping up. I, for one, look forward to the day when I can simply ask my car to take me home, instead of having to devote my full focus to the task of vehicle operation.


Where Circuit Simulation Model Files Come From
by Daniel Payne on 02-07-2019 at 7:00 am

I started out my engineering career doing transistor-level circuit design, and we used a proprietary SPICE circuit simulator. One thing that I quickly realized was that the accuracy of my circuit simulations depended entirely on the model files and parasitics. Here we are 40 years later and the accuracy of SPICE circuit simulations still depends on the model files and parasitics, but with the added task of using 3D field solvers to get accurate parasitic values, and even the use of 3D TCAD tools to model the complex physics of nanometer IC designs using FinFET transistors.

Foundries and IDMs both need a proven and accurate flow from TCAD to SPICE simulation for accurate FinFET behavior that matches silicon measurements. This blog will consider such a flow using tools from Silvaco like DeckBuild or the Virtual Wafer Fab (VWF). I will start with an example circuit of a 9-stage ring oscillator using a FinFET technology with 20nm gate lengths:

The next step is to create an annotated layout that identifies active devices (FinFET transistors) and the interconnect between the devices:

The initial IC layout is performed on a 2D representation, so the TCAD tool takes this 2D info as a starting point in doing a 3D process simulation where we get to choose process parameters like:

  • Fin height
  • Equivalent gate oxide thickness
  • Source-drain diffusion times

With just a handful of input parameters the process simulator then automates the device meshing and creates a 3D representation of a p-channel FinFET including the SiGe source-drain stressors for mobility enhancement:

The 3D TCAD simulator Victory will automatically create the SPICE model files based on the p-channel and n-channel physical and electrical characteristics. Process engineers can iterate and investigate how different process parameters, strain and layout effects impact circuit performance (aka Design Technology Co-optimization).

With a transistor model, you can then visualize the I-V curves for p and n transistors:

When running device simulations the process engineer decides which device physics to include:

  • Standard model set
  • Strain effects
  • Gate tunneling
  • Band to band tunneling
  • User-defined effects

Finally, the BSIM-CMG models are created based upon these TCAD I-V curves and the 3D physics involved. In the 1970s we only had proprietary models and SPICE simulators, but today we have model standards like BSIM-CMG, which is approved by the Compact Model Coalition.

You can customize how your model cards are created, if needed, by using the UTMOST-IV GUI, but for this example a standard script was used without any tweaking. One quality control step is to check the difference between the original I-V curves and the generated SPICE model; look at how close these sets of curves are:

When I look at the TCAD curves versus SPICE model curves, the values are consistent. The Utmost-IV tool does have a full SPICE simulator that is used to fit the curves so closely, saving time for us and keeping the desired accuracy. So the transistors are well modeled in this tool flow and it’s time to look at the interconnect between devices.

Just as the FinFET devices required 3D modeling, the interconnect between FinFET transistors also requires a 3D field solver, Clever, to extract accurate resistance and capacitance values. The 3D Back End Of Line (BEOL) structure for the nine-stage inverter layout is shown below where Metal 2 is in Red, Metal 1 is Purple:

This 3D field solver produces the SPICE netlist which has both FinFET transistors and RC interconnect values.

An engineer can even run a large Design Of Experiments (DOE) with the Virtual Wafer Fab tool and use the built-in statistical features to fit response surface models, relating input variables to predicted outputs.
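To give a flavor of what a response-surface fit does, here is a generic Python/NumPy sketch with made-up factor levels and responses – it illustrates only the statistical step, and is not the VWF statistical engine:

```python
import numpy as np

# Toy DOE: two factors (e.g., metal line width and spacing) and a measured
# response (e.g., ring-oscillator frequency). The numbers are invented
# purely to demonstrate the least-squares fitting step.
x1 = np.array([0.8, 1.0, 1.2, 0.8, 1.0, 1.2, 0.8, 1.0, 1.2])   # line width (a.u.)
x2 = np.array([0.8, 0.8, 0.8, 1.0, 1.0, 1.0, 1.2, 1.2, 1.2])   # spacing (a.u.)
y  = np.array([5.1, 4.7, 4.4, 5.3, 4.9, 4.6, 5.4, 5.0, 4.7])   # frequency (GHz)

# Quadratic response surface: y ≈ b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2
A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
print("RSM coefficients:", np.round(coeffs, 3))

# Predict the response at a new design point (width = spacing = 1.1)
new_point = np.array([1.0, 1.1, 1.1, 1.1**2, 1.1**2, 1.1 * 1.1])
print("Predicted frequency:", round(float(new_point @ coeffs), 3), "GHz")
```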

Running the SPICE netlist in the SmartSpice circuit simulator shows that the 9 stage ring oscillator is functioning properly (left plot), and we know the average power consumption (right plot).
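As a point of reference, the oscillation frequency of an N-stage ring oscillator (N = 9 here) is set by the per-stage propagation delay,

\[ f_{osc} = \frac{1}{2\,N\,t_{pd}}, \]

so device and parasitic variations show up directly in the measured frequency and average power.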

This same TCAD to SPICE model file flow was also run on a D-type flip-flop from the Nangate digital library; here’s the 3D interconnect view:

All possible logic states are simulated with SmartSpice on the Flip Flop netlist using extracted parasitics:

A CAD engineer could run many of these steps using the DeckBuild environment, or, for further automation, use the Virtual Wafer Fab tool instead. An example of using VWF is for a Design Of Experiments where a virtual split-lot tree is built and run across many computers, and the final results are fitted to a Response Surface Model (RSM):

One insight from the RSM is that the maximum frequency is inversely proportional to the metal line width, so this particular circuit is parasitic-capacitance limited rather than resistance limited. Now the circuit designer knows how to make better trade-offs in reaching the requirements.

Summary
I’ve laid out the steps used by process engineers and CAD engineers to create SPICE model files:

  • 3D TCAD simulation (Victory)
  • SPICE model parameter extraction (Utmost IV)
  • 3D netlist extraction (Clever)
  • SPICE simulation (SmartSpice)

Silvaco has all four tools needed in this flow, and they have also added plenty of automation (VWF) to save your team precious time when performing Design Of Experiments. For more details, read the recent article in Silvaco’s Simulation Standard online.

Related Blogs


Machine Learning and Gödel
by Bernard Murphy on 02-06-2019 at 7:00 am

Scanning ACM tech news recently, I came across a piece that spoke to my inner nerd; I hope it will appeal to some of you also. The discovery will have no impact on markets or investments or probably anyone outside theories of machine learning. Its appeal is simply in the beauty of connecting a profound but obscure corner of mathematical logic to a hot domain in AI.

There is significant activity in theories of machine learning, to figure out how best to optimize neural nets, to understand what bounds we can put on the accuracy of results and generally to add the predictive power you would expect in any scientifically/mathematically well-grounded discipline. Some of this is fairly close to implementation and some delves into the foundations of machine learning.

In foundational theory, one question is whether it is possible to prove, within some appropriate framework, that an objective is learnable (or not). Identifying cat and dog breeds is simple enough – just throw enough samples at the ML and eventually you’ll cover all the useful variants. But what about identifying patterns in very long strings of numbers or letters? Since we can’t easily cross-check that ML found just those cases it should and no others, and since potentially the sample size could be boundless – think of data streams in a network – finding a theoretical approach to validate learnability can look pretty attractive.

There’s a well-established mathematical framework for this analysis called “probably approximately correct (PAC) learning” in which a learning system reads in samples and must build a generalization function from a class of possible functions to represent the learning. The use of “functions” rather than implementation details is intentional; the goal is to support a very general analysis abstracted from any implementation. The target function is simply a map between an input sample (data set) and the output value, match or no match. There is a method in this theory to characterize how many training samples will be needed for any given problem, which apparently has been widely and productively used in ML applications.
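To give one concrete example of such a characterization: for the simplest case of a finite hypothesis class H in the realizable setting, the classical PAC bound says that

\[ m \;\ge\; \frac{1}{\epsilon}\left(\ln|H| + \ln\frac{1}{\delta}\right) \]

training samples suffice to return, with probability at least 1 − δ, a hypothesis with error at most ε. Infinite classes are handled through the VC dimension, but the flavor is the same.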

However – when a theory uses sets (of data) and functions on those sets, it strays onto mathematical logic turf and becomes subject to known limitations in that domain. A group of mathematicians at the Technion-Israel Institute of Technology in Haifa have demonstrated that there exist families of sets together with target learning problems for which learnability can neither be proved nor disproved within the standard axioms of mathematics; learnability is undecidable (or more precisely, independent of the base mathematical system, to distinguish this from computability undecidability).

If you ever read "Gödel, Escher, Bach" or anything else on Gödel, this should sound familiar. He proved, back in 1931, that it is impossible for any mathematical system to prove all truths about the integers. There will always be statements about the integers that cannot be proved either true or false. The same restriction, it seems, applies to ML; there are learning problems for which learnability cannot be proved or disproved. More concretely, as I understand it, it is not possible to determine for this class of problem an upper bound on the number of training samples you would need to supply to adequately train the system. (Wait, what about proving this from the halting problem? The authors used Gödelian methods, so that’s what I describe here.)

This is unlikely to affect ML as we know it. Even in mathematics, Gödelian traps are few and far between, many quite specialized although a few like Goodstein’s theorem are quite simple. And of course we know other problems, like the traveling salesman problem which are theoretically unbounded yet are still managed effectively every day in chip physical design. So don’t sell your stock in ML-based enterprises. None of this will perturb their efforts even slightly. But it is pretty, nonetheless.


Open Letter to the FTC Bureau of Consumer Protection
by Matthew Rosenquist on 02-05-2019 at 12:00 pm

In December 2018 the FTC held hearings on Competition and Consumer Protection in the 21st Century. A number of people spoke at the event, and the FTC has graciously opened the discussion to public comments. The Federal Trade Commission has an interest, has certain responsibilities, and can effect changes in how data security evolves. This is an opportunity for the public to share its thoughts and concerns. I urge everyone to comment and provide your viewpoints and expertise to the FTC committee.

Comments can be submitted electronically no later than March 13, 2019.
Below is my Open Letter to the FTC – Bureau of Consumer Protection, which has been submitted. As always, I am interested in your thoughts. Feel free to comment.

Open Letter to the FTC – Bureau of Consumer Protection
I would like to make clear as an important preface to the following that I do not speak for or on behalf of Intel Corporation. In this regard, all of this material, the opinions and positions expressed and the conclusions drawn are my own and do not reflect the material, positions or conclusions of Intel Corporation.

In response to your hearing on Competition and Consumer Protection in the 21st Century, I respectfully provide the following insights and recommendations:

Protecting consumer data continues to grow in importance while the technology challenges expand the complexity and risks. The difficulty will sharply increase with emerging innovations that will be able to analyze and aggregate vast amounts of consumer data in new and exciting ways.

The challenges are as significant as the benefits that new technology adoption brings. Insightful and calculated strategic action now is necessary to establish a solid foundation to allow technology benefits to prosper, while instituting the frameworks that will protect consumer data in ways that maintain alignment to public expectations as the risks increase.

The technology industry is focused on providing innovative solutions for profit, but it must also build trust in how its products protect consumers. Incidents which victimize users represent an inhibitor to long-term adoption and business viability. Consumers are increasingly placing more purchasing weight on security factors.

Trust will be key. Consumers must be confident in the security, privacy, and safety of technology. Brand reputation for cybersecurity will emerge as a competitive differentiator. It is in everyone’s best interest to have a healthy technology market where competition drives toward the optimal balance of risks, costs, and usability to meet consumers’ needs.

Observations, inputs, and recommendations:
Digital technology connects and enriches the people and prosperity of the world. Innovation is hugely beneficial, but it also brings risks. In a symbiotic manner, as technology grows in capability and reach, so do the accompanying risks.

  • Society wants the benefits that technology enables, but with manageable controls to protect their security, privacy, and safety in the face of ever increasing, creative, and motivated threats. In short, consumers want innovation at the lowest price, without additional risk.
  • The risks to consumers will increase with the processing of more personal data. Information is the fuel powering future digital technology and services. Soon, automated and intelligent systems will be the preferred tool to make sense and derive new value from the vast oceans of collected data. It is crucial that data Confidentiality, Integrity, and Availability be protected and its usages aligned to the benefits of the users. These risks must be taken into account now as part of any future control framework.
  • The acceleration of widespread consumer victimization is the driving force for expectation changes and regulatory oversight. Consumers want better protective standards and the ability to act through their own choices.
  • It is difficult, for all parties, to identify and deliver the optimal balance of security. For consumers, the ambiguity and complexity of risks are more challenging to understand as compared to the tangible benefits of the technologies they desire. This delta has traditionally led to the blind acceptance of risks as a tradeoff for benefits. As impacts rise, however, the tolerance will not hold, and consumers will want action and uncomplicated empowerment to choose a better balance for themselves. The concept of ‘trust’ will emerge as an easy way to help consumers with buying decisions and brand loyalty. Trusted technology, vendors, and service providers, which protect users’ security, privacy, and safety, will have a business advantage over less-trustworthy competitors. The value of ‘trust’ can be a healthy and sustainable model for market-reinforcing incentives that continuously align to consumer data protection needs.

In order for a sustainable ecosystem to maintain parity between risks, costs, and usability, an optimized set of incentives, controls, and oversight must be established.

  • Partnership between the public and private sector is crucial. Academia, business, and government must work together in strategic ways to achieve both the adoption of beneficial technology and the mitigation of risks to acceptable levels.
  • Consumers also have an important role in being responsible for their data and should be given the visibility, tools, and ability to seek remedies for the protection of their information that could be used to their detriment. Efforts that educate them over time support better decision making for tradeoffs and a more informed culture for consumer data protection.

The goal should be to establish a sustainable environment where good data practices benefit ethical market players, and overall trust in technology is elevated through transparency and accountability requirements, enabling users the informed empowerment to make educated choices for beneficial trade-offs.

  • Governmental oversight is well suited to regulate and enforce the adherence of businesses to transparency and accountability requirements. This takes minimal effort and primarily targets offenders to ensure a fair and competitive playing field. Market forces then drive the evolving beneficial behaviors across the system and quickly adapt priorities to align with evolving risks.
  • One mistake we must avoid is the unnecessary constraint of innovation, or attempts at prescriptive controls on behalf of consumers with shifting expectations. Inhibiting innovation is counterproductive: technology can contribute more protections for consumers, and constraining it undermines the compounding benefits of iterative advancement and growth. Regulations are not as nimble as the evolving threats and should not attempt to define specific controls to mitigate attacks, but rather establish a framework that encourages the ecosystem to respond rapidly to new risks based upon healthy competition for consumer loyalty.
  • Fostering market forces, to reward ethical behaviors of businesses, is key in building a sustainable and self-supporting environment for technology prosperity and better consumer data protections.

I believe that ‘trust’ in technology is key to both prosperity and the realization of tremendous benefits by consumers from innovative products and services. Technology providers should compete for consumers’ trust by providing secure, private, and safe products. The FTC can play an important role in facilitating healthy competition for consumers’ benefit while ensuring a fair playing field, by targeting organizations which seek to undermine the transparency and accountability necessary for consumers to understand the risks and to compare organizations on trustworthiness.

Technology organizations, which possess expertise in technology innovation and support ethical competitive practices, should be sought to assist the FTC, peer organizations, and academia in establishing a sustainable regulatory structure that maximizes market incentives to achieve the best possible optimization for protecting consumers.

Respectfully,

Matthew Rosenquist, Cybersecurity Strategist


How Apple Became a Force in the Semiconductor Industry
by Daniel Nenni on 02-05-2019 at 7:00 am

From our book “Mobile Unleashed”, this is the semiconductor history of Apple Computer:

Observed in hindsight after the iPhone, the distant struggles of Apple in 1997 seem strange, almost hard to fathom. Had it not been for the shrewd investment in ARM, Apple may have lacked the cash needed to survive its crisis. However, cash was far from the only ingredient required to conjure up an Apple comeback.
Continue reading “How Apple Became a Force in the Semiconductor Industry”


Getting to 56G Takes The Right Stuff
by Tom Simon on 02-04-2019 at 12:00 pm

During the 1940s, when aerospace engineers were attempting to break the sound barrier for the first time, they were confronting a slew of new technical issues that had never been dealt with before, and in some cases never seen before. In subsonic flight, airflow was predictable and well understood. In crossing the sound barrier, they were confronted with new physical effects and issues that had to be resolved. Today there is a similar challenge facing designers of chips and systems that communicate at data rates of 56G and 112G. Circuits operating at millimeter-wave (mmWave) frequencies are not unlike aircraft flying at supersonic speeds. New behaviors, including reduced margins, increased noise and electromagnetic effects, can dominate the performance of these circuits.

Designing SerDes for 56G and 112G is complex and challenging. In fact, there are only a few companies with working silicon. eSilicon is one of them and made an announcement at DesignCon this year that highlights another huge challenge for these designs – how to test them and measure their performance. What is necessary to test these new high speed circuits is a collaboration of companies that can all contribute essential pieces of the puzzle. eSilicon worked with Wild River Technology to design and build a test board that allows de-embedding up to and beyond 70GHz. Also involved were Keysight’s Advanced Design System and Samtec’s Bulls-Eye Test Point System.

eSilicon’s 56G PAM4 & NRZ DSP-based 7nm SerDes was used to drive the channels in the test system. Without proper de-embedding, the actual circuit operation cannot be accurately characterized. PAM4 introduces multi-level signaling, which decreases the noise margin, making characterization and validation of the SerDes more difficult. For eSilicon this marks an important milestone along the way to having a fully qualified, production-ready 56G SerDes.
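One way to quantify that reduced margin: at the same peak-to-peak swing, PAM4 carries two bits per symbol across four levels, so each of its three eyes has roughly one third of the NRZ eye height, a vertical-margin penalty of about

\[ 20\log_{10}\!\left(\tfrac{1}{3}\right) \approx -9.5\ \text{dB} \]

relative to NRZ, before any coding or equalization gains are counted.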

Central to this project is the upcoming IEEE P370 standard which addresses the quality of measured S-parameters for PCBs and related interconnect. Wild River has expertise in this area. Their founder and chief technologist, Al Neves, has contributed a great deal of time and effort to this developing standard. Within the standard there are three groups: test fixture, de-embedding, and S-parameter quality.

Prior to the announcement I had a chance to talk to Al Neves. He said they did a number of things to ensure their success. One of them was to create three teams to work on the problem to validate the results. He took inspiration from the Apollo program, where independent teams were given the assignment of calculating the launch trajectories. If they all agreed then the likelihood of success improved, as witnessed by their overall operational record.

Another important factor that Al identified was their tool selection. Projects like this live or die by their analysis tools. They used both ANSYS HFSS and Simbeor THz software, among others. Al said they push their tools pretty hard and often have to consult with the vendors to get fixes and confirm portions of the methodology.

eSilicon now plans to move to the next step with their 56G SerDes by building a test socket suited for 70GHz. While no test pilots’ lives are at stake, the risks are not insignificant. This stuff is hard, but some companies have the right stuff. The eSilicon website has more information on this announcement and other aspects of their 56G/112G SerDes project.


Jeephack Repercussions
by Roger C. Lanctot on 02-04-2019 at 7:00 am

Automotive cybersecurity is an intractable nightmare with significant though inchoate implications for consumers and existential exposure for auto makers. This reality became painfully clear earlier this month when the U.S. Supreme Court declined to hear Fiat Chrysler Automobiles’ appeal in a class action lawsuit over allegations of vulnerabilities in its Jeeps and other trucks.

The case will go to trial in March.

At the core of the lawsuit, arising from the infamous 2015 so-called “Jeephack” orchestrated by Chris Valasek and Charlie Miller (now General Motors employees), is the issue of FCA’s liability and responsibility for the hack or future hacks. The litigation raises the question of whether truck buyers can sue over hypothetical future injuries without being actual victims of a cybersecurity attack.

Approximately 200,000 FCA vehicle owners are parties to the class action, and the penalty they are asking the court to apply is $2,000 per vehicle. Obviously that means FCA’s exposure in the litigation is potentially $400M. In reality, given that 1.4M vehicles are implicated in the alleged FCA vulnerability, the actual exposure is $2.8B.
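The arithmetic behind those figures is simply

\[ 200{,}000 \times \$2{,}000 = \$400\text{M}, \qquad 1{,}400{,}000 \times \$2{,}000 = \$2.8\text{B}. \]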

The consumers have said that, had the defects been disclosed, they never would have purchased the vehicles in the first place or would have paid less for them. They also said the defects reduce their vehicles’ resale value. A U.S. District judge certified the class action for claims of fraudulent concealment, unjust enrichment and violation of various state and federal consumer protection laws.

The significance of the lawsuit is that it quantifies the potential value of cybersecurity to the consumer at $2,000/car. It also suggests that large numbers of consumers are starting to care enough and understand enough about cybersecurity to take legal action where and when it is found to be wanting.

The action is the latest step in the process of the automotive industry coming to grips with the implications of cybersecurity. Every month brings word of yet another vehicle hack. Usually these “reveals” are accompanied by a description of the remedy being offered by the car maker – often in the form of a software update delivered either wirelessly to the car or during a dealer visit.

The cost of a dealer visit for a software update can be between $200 and $300. The lawsuit increases by tenfold the understanding of that financial exposure and elevates cybersecurity to a board-level concern for most auto makers. Members of the Auto-ISAC, which is coordinating the industry’s reaction to the cybersecurity dilemma, acknowledge a steady flow of reported hacking attempts of cars.

But, as an industry, we appear to be whistling past the graveyard.

The simultaneous announcements from car makers of hacks and fixes reflect the reality that most automotive hacks, of late, have been conducted by ethical or white hat hackers. Hackers working as individuals or as employees of organizations such as IOActive, Lab Mouse or Tencent’s Keen Labs have used automotive hacks to build their reputations and relationships with auto makers.

A wide range of cybersecurity suppliers have also used hacks of auto makers or their suppliers as “door openers.” Multiple automotive cybersecurity companies have hacked cars and components as part of educating auto makers to the scope of the cybersecurity problem.

The choreographed nature of these hack-fix announcements further reflects the limited preparedness of the industry. The pending lawsuit calls into question the adequacy of the existing process of consumer notification regarding vehicle vulnerabilities.

The reality is that cars may never be certifiably secure. In fact, it is known that car companies’ enterprise operations have been hacked into via their connected cars and vice versa. Knowing that is enough to cause some lost sleep among senior auto executives while motivating others to elevate cybersecurity to a board-level responsibility.

The automotive industry has one of the most complex supply chains and is further compromised by its dependency on networks of franchise dealers. Add car sharing networks and autonomous vehicles into the mix and you have a gargantuan challenge.

If FCA can be sued on the grounds of potential vulnerability, successfully or not, the time has arrived to prioritize cybersecurity counter-measures. The challenge for auto makers is that consumers assume their vehicles are secure. The reality is that nearly any car can be hacked given enough time and determination on the part of the hacker.

The saving grace is that, to date, most hacks have required significant time and effort, including disassembly of vehicle systems and reverse engineering of firmware. It is also somehow reassuring that the potential opportunity derived from the hacking appears to boil down to vehicle or identity theft. Thus far terrorism remains nothing more than a boogeyman.

The onset of near-universal vehicle connectivity, along with collision avoidance, self-parking and other automated driving features, demands better cybersecurity preparedness in the industry. As to whether a lack of preparedness exposes auto makers to billions of dollars in liability, the courts will decide. One thing is clear: the Supreme Court of the United States chose not to dismiss the lawsuit – so the entire industry will be paying attention this March.

Roger C. Lanctot is Director, Automotive Connected Mobility in the Global Automotive Practice at Strategy Analytics. Roger will be participating in the Future Networked Car Workshop, Feb. 7, at the Geneva Motor Show – https://www.itu.int/en/fnc/2019.

More details about Strategy Analytics can be found here: https://www.strategyanalytics.com/access-services/automotive#.


MENTOR at DVCON 2019
by Daniel Nenni on 02-04-2019 at 7:00 am

The semiconductor conference season has started out strong, and the premier verification gathering is coming up at the end of this month. SemiWiki bloggers, myself included, will be at the conference covering verification so you don’t have to. Verification is consuming more and more of the design cycle, so I expect this event to be well worth our time, absolutely. Mentor, of course, is the premier verification company, so first let’s see what they are up to:

Mentor, a Siemens Business, will have experts presenting conference papers and posters, as well as hosting a luncheon, panel, workshop, and tutorial at DVCon 2019, February 25-28 in San Jose, CA. You’ll also find experts on the exhibit floor in booth #1005. Special guest Fram Akiki, VP Electronics Industry at Siemens PLM, will deliver Tuesday’s keynote, “Thriving in the Age of Digitalization”.

KEYNOTE
Thriving in the Age of Digitalization
Tuesday, February 26, 1:30pm-2:30pm
Presented by Fram Akiki, Siemens PLM
Semiconductor technology continues its relentless advance with shrinking geometries and increased capacity stressing design and verification of today’s most demanding System on Chip solutions. While these challenges alone are significant, future challenges are rapidly expanding as market demand for Internet of Things, Automotive electronics and autonomous systems, 5G communication, Artificial Intelligence and Machine Learning technologies – and more – show explosive growth. These advances add significant complications and drive exponential growth in design and verification challenges of these solutions. This exponential development not only expands the market; it sparks new competitive pressures as new companies look to challenge the more established businesses that have long defined the industry. These factors explain why it’s important to have an integrated digitalization strategy to succeed in today’s semiconductor market.

SPONSORED LUNCHEON
A Tale of Two Technologies: ASIC & FPGA Functional Verification Trends
Tuesday, February 26, 12:00pm-1:15pm
The IC/ASIC market in the early- to mid-2000s timeframe underwent verification process growing pains to address increased design complexity. Similarly, due to increased complexity, we find that today’s FPGA market is being forced to address its processes. What solutions are working? What separates successful projects from less successful ones? And how do you measure success anyway? At this luncheon we will address these and other questions. Please join Mentor, a Siemens Business, as we explore the latest industry trends and what successful projects are doing to address growing complexity.

PANEL
Deep Learning –– Reshaping the Industry or Holding to the Status Quo?
Wednesday, February 27, 1:30pm-2:30pm
Participating Companies: Advanced Micro Devices, Babblelabs, Arm, Achronix Semiconductor, NVIDIA
Moderator Jean-Marie Brunet from Mentor, a Siemens Business, will take panelists through various scenarios to determine how AI and deep learning will reshape the semiconductor industry. They will look carefully at the chip design verification landscape to assess whether it’s equipped to handle this new and potentially exciting area. Audience members will be encouraged to bring questions and opinions to ensure a lively and thought-provoking panel session.

WORKSHOP
It’s Been 24 Hours – Should I Kill My Formal Run?
Monday, February 25, 3:30-5:00pm
In this workshop we will show the steps you can take to make an informed decision to forge ahead, or cut your losses and regroup. Specifically, we will describe:

  • How you can set yourself up for success before you kick off the run by writing assertions, constraints, and cover properties in a “formal friendly” coding style
  • What types of logic in your DUT will likely lead to trouble (in particular, deep state space creators like counters and RAMs), and how to effectively handle them via non-destructive black boxing or remodeling
  • Matching the run-time multicore configuration and formal engine specifications to the available compute resources
  • Once the job(s) start, how to monitor the formal engines’ “health” in real time
  • Confirm the relevance of the logic “pulled in” by your constraints
  • Show how a secure mobile app can be employed to monitor formal runs when you are away from your workstation
  • Examine whether a run’s behavior is consistent with the expected alignment between the DUT’s structure and the formal engines’ algorithmic strengths
  • Leverage all of the above to make the final “continue or start over” decision

TUTORIAL
Next Gen System Design and Verification for Transportation
Thursday, February 28, 8:30am-11:30am
In this tutorial, Mentor experts will demonstrate how to use these next-generation IC development practices to build and validate smarter, safer ICs. Specifically, it will look at:

  • How to use High-Level Synthesis (HLS) to accelerate the design of smarter ICs
  • How to use emulation to provide a digital twin validation platform beyond just the IC
  • How to develop functionally safe ICs

CONFERENCE PAPERS
Tuesday, February 26, 3:00pm-4:30pm
5.1 UVM IEEE Shiny Object
5.3 Fun with UVM Sequences – Coding and Debugging

Wednesday, February 27, 10:00am-12:00pm
9.2 A Systematic Take on Addressing Dynamic CDC Verification Challenges
9.3 Using Modal Analysis to Increase Clock Domain Crossing (CDC) Analysis Efficiency and Accuracy
10.1 Unleashing Portable Stimulus Productivity with a PSS Reuse Strategy
10.2 Results Checking Strategies with the Accellera Portable Test & Stimulus Standard

Wednesday, February 27, 3:00pm-4:30pm
11.1 Supply Network Connectivity: An Imperative Part in Low Power Gate-level Verification
12.2 Formal Bug Hunting with “River Fishing” Techniques

EXHIBIT HALL – MENTOR BOOTH #1005
Mentor, a Siemens Business, has pioneered technology to close the design and verification gap to improve productivity and quality of results. Technologies include Catapult® High-Level Synthesis for C-level verification and PowerPro® for power analysis; Questa® for simulation, low power, VIP, CDC, formal, and support for UVM and Portable Stimulus; and Veloce® for hardware emulation and system-of-systems verification, unified with the Visualizer™ debug environment.

Join the Mentor booth theater sessions to watch technology experts discuss a broad range of topics including Portable Stimulus, emulation for AI designs, verification signoff with HLS, functional safety, accelerating SoC power analysis, and much more!

Theater sessions include:

  • Portable Stimulus from IP to SoC – Achieve More Verification with Questa inFact
  • Accelerate SoC Power, Veloce Strato – PowerPro
  • Exploring Veloce DFT and Fault Apps
  • Mentor Safe IC: ISO 26262 & IEC 61508 Functional Safety
  • Adding Determinism to Power in Early RTL Using Metrics
  • Scaling Acceleration Productivity beyond Hardware
  • Verification Signoff of HLS C++/SystemC Designs
  • An Emulation Strategy for AI and ML Designs
  • Advanced UVM Debugging

POSTER SESSIONS
Tuesday, February 26, 10:30am-12:00pm
Mentor experts will be representing the following poster sessions:

  • SystemC FMU for Verification of Advanced Driver Assistance Systems
  • Transaction Recording Anywhere Anytime
  • Multiplier-Adder-Converter Linear Piecewise Approximation for Low Power Graphics Applications
  • Verification of Accelerators in System Context
  • Introducing your Team to an IDE
  • Moving Beyond Assertions: An Innovative Approach to Low-power Checking using UPF Tcl Apps

DVCon is the premier conference for discussion of the functional design and verification of electronic systems. DVCon is sponsored by Accellera Systems Initiative, an independent, not-for-profit organization dedicated to creating design and verification standards required by systems, semiconductor, intellectual property (IP) and electronic design automation (EDA) companies. In response to global interest, in addition to DVCon U.S., Accellera also sponsors events in China, Europe and India. For more information about Accellera, please visit www.accellera.org. For more information about DVCon U.S., please visit www.dvcon.org. Follow DVCon on Facebook at https://www.facebook.com/DvCon or @dvcon_us on Twitter; to comment, please use #dvcon_us.