
AI Based Software Designing AI Based Hardware – Autonomous Automotive SoC Platform
by Mitch Heins on 10-10-2017 at 12:00 pm


For those of you who missed the NetSpeed Systems and Imagination Technologies webinar, “Alexa, can you help me build a better SoC,” you’ll be happy to hear that the session was recorded and can still be viewed (see link at the bottom of this page). I’ll warn you now, however, that this was a high-bandwidth session packed with information – so much so that I had to listen through it several times to absorb everything. Here’s a condensed version for those looking for the salient points.

SoCs for autonomous automotive applications are some of the most complex ICs designed on the planet. These designs must fuse data from multiple sensors (RADAR, LIDAR, ultrasonic, video cameras) in real time, while also dealing with ultra-high functional safety and security requirements. Real-time processing of multiple different data streams implies a heterogeneous mixture of CPUs, DSPs, GPUs, ISPs, and dedicated specialty hardware accelerators, all communicating with each other over a sophisticated network-on-chip (NoC) capable of handling coherent memory and coherent I/O access.

NetSpeed Systems and Imagination Technologies partnered to deliver a next-generation autonomous automotive platform that is now used by three of the top four auto-pilot players. NetSpeed Systems delivers the NoC while Imagination Technologies delivers the MIPS I6500-F processor architecture. Their autonomous automotive SoC platform enables designers to create a scalable, heterogeneous solution, using AI software techniques to synthesize an optimal hardware solution while trading off power, performance, area (PPA) and functional safety (FuSa).

Some key items used in this collaboration include:

  • NetSpeed’s GEMINI Coherent NoC, certified ASIL-D ready per the ISO 26262 standard.
  • NetSpeed’s NocStudio development environment, with machine learning-based interconnect synthesis that uses NetSpeed’s CRUX (streaming interconnect transport), ORION (AMBA and OCP bridging), GEMINI (NoC coherent cache and I/O management) and PEGASUS (NoC L2, L1, and LLC cache support) template libraries.
  • Imagination’s MIPS I6500-F, a variant of the MIPS I6500 CPU, certified ASIL-B(D) ready for automotive and IEC 61508 ready for industrial control applications, as a Safety Element out of Context (SEooC).
  • Enhanced heterogeneity using the MIPS IOCU ports and dedicated CPU threads to enable low latency paths through L2 cache between hardware accelerators and the CPU.
  • FuSa capabilities of the MIPS I6500-F including ECC across memories, redundant logic and parity protection, time-outs, and logic built-in self-test (LBIST) that checks the hardware both at boot-up and during processing cycles when CPUs are not busy (a toy ECC sketch follows this list).
  • FuSa capabilities of the NetSpeed NoC synthesis process including path redundancy and guaranteed fail-safe deadlock-free solutions.
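
To make the ECC bullet above concrete, here is a toy single-error-correct, double-error-detect (SECDED) sketch in Python – my own Hamming(7,4)-plus-parity illustration of the principle, not the I6500-F’s actual (hardware, wider-word) implementation:

```python
# Illustrative only: toy Hamming(7,4) + overall parity (SECDED-style).
# Real automotive ECC is implemented in hardware on wider words, but the
# principle -- correct single-bit errors, detect double-bit errors -- is the same.

def encode(d):
    """d: four data bits -> 8-bit codeword [p1, p2, d1, p3, d2, d3, d4, p_all]."""
    d1, d2, d3, d4 = d
    p1, p2, p3 = d1 ^ d2 ^ d4, d1 ^ d3 ^ d4, d2 ^ d3 ^ d4
    word = [p1, p2, d1, p3, d2, d3, d4]
    word.append(sum(word) % 2)          # overall parity for double-error detection
    return word

def check(word):
    """Return (status, possibly-corrected codeword)."""
    w = word[:7]
    s = ((w[0] ^ w[2] ^ w[4] ^ w[6]) * 1 |   # syndrome bit 1
         (w[1] ^ w[2] ^ w[5] ^ w[6]) * 2 |   # syndrome bit 2
         (w[3] ^ w[4] ^ w[5] ^ w[6]) * 4)    # syndrome = 1-based error position
    overall_ok = (sum(word) % 2 == 0)
    if s == 0 and overall_ok:
        return "ok", word
    if not overall_ok:                       # odd number of flips: single-bit error
        if s:                                # error in one of bits 1..7: fix it
            w[s - 1] ^= 1
            return "corrected", w + [word[7]]
        return "corrected", w + [word[7] ^ 1]  # the parity bit itself flipped
    return "uncorrectable", word             # double-bit error: flag to safety logic

cw = encode([1, 0, 1, 1])
cw[2] ^= 1                                   # inject a single-bit fault
print(check(cw))                             # -> ('corrected', [...])
```

The safety-relevant behavior is the last branch: a double-bit error cannot be corrected, only detected, so it must be escalated to the system’s safety logic.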


What makes this joint solution so fascinating is that it is truly scalable and programmable, enabling designers to customize an SoC to meet their specific requirements. NetSpeed’s NoC can manage up to 64 cache-coherent clusters and 250 I/O-coherent IPs, and the NoC is compatible with the popular ACE, CHI, AXI, AHB, APB and OCP protocols. Similarly, the MIPS I6500 architecture supports cache-coherent arrays of clusters with multi-threaded CPU cores.

Added to this, the design environment (per the title of the webinar) makes use of artificial intelligence (AI) techniques to help designers make intelligent PPA/FuSa trade-offs. Once the design has been iterated, NetSpeed’s NocStudio software synthesizes the resulting architectural RTL code.

A nice feature of the joint solution is that NocStudio can simulate system data traffic and the interactions between the processing units and the coherent caches and coherent I/Os. In so doing, NocStudio can score both PPA and FuSa results on a path-by-path basis. Additionally, each NoC path from master to slave is user-configurable in terms of its power, performance (latency and quality of service, or QoS), area, and functional safety requirements. NocStudio considers the requirements of each master-slave path along with higher-level constraints such as the expected data traffic on various paths, the physical locations of master-slave paths on the IC, the number of competing paths in the same area, and the priority of each path with respect to desired network redundancy. Based on the specifications for each path, NocStudio’s AI algorithms synthesize and optimize a NoC to meet the requested constraints.
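
To give a feel for what per-path constraints mean in practice, here is a purely hypothetical Python sketch – NocStudio’s real input format and ML-based synthesis are proprietary, so every name and number below is invented for illustration:

```python
# Hypothetical illustration only: the *kind* of per-path trade-off input the
# article describes, not NetSpeed's actual constraint format or algorithm.
from dataclasses import dataclass

@dataclass
class PathSpec:
    master: str
    slave: str
    max_latency_ns: float     # performance (QoS) budget for this path
    bandwidth_gbps: float     # expected data traffic on this path
    redundant: bool           # FuSa: synthesize a redundant route?
    power_weight: float       # relative priority of power vs. latency

def score(path, latency_ns, power_mw, area_mm2):
    """Toy cost function: penalize latency-budget violations, weight power,
    and charge extra area when a redundant FuSa route is requested."""
    cost = max(0.0, latency_ns - path.max_latency_ns) * 10.0
    cost += path.power_weight * power_mw
    cost += area_mm2 * (2.0 if path.redundant else 1.0)
    return cost

paths = [
    PathSpec("cpu_cluster0", "ddr_ctrl", 120.0, 25.6, redundant=True,  power_weight=0.2),
    PathSpec("isp0",         "l2_cache",  40.0, 12.8, redundant=False, power_weight=0.8),
]
for p in paths:
    print(p.master, "->", p.slave, "cost:",
          score(p, latency_ns=100.0, power_mw=3.5, area_mm2=0.05))
```

A real synthesis engine explores topologies against many such specifications at once; the point here is simply that every master-slave path carries its own PPA/FuSa budget.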


Many different FuSa trade-offs can be made with NocStudio for each master-slave path. Examples of this include end-to-end parity checks, ECC checks and checksums for packets as well as hop-to-hop checks and the synthesis of redundant paths to compensate for paths that may develop errors. Because NetSpeed Systems and Imagination Technologies have partnered on the solution, the NocStudio environment can also dovetail the parity checks applied by the MIPS I6500-F processor with the checks being done by the NoC, leading to system-level coverage for all components controlled by the MIPS processor.

The solution platform generates a plethora of data files and information that can then be used to implement the SoC including:

  • Synthesizable RTL
  • Verification checkers, monitors and scoreboards
  • Files to aid physical design (block placements in DEF, timing constraints in SDC, clock skew data, and scripts to run place and route)
  • SoC integration files such as IP-XACT, CPF/UPF, and an architectural manual
  • FuSa documents, including an FMEDA and a safety manual that can be used for ISO 26262 certification.

The most impressive part of this solution is that it is silicon-proven and in use today by several leading autonomous automotive IC providers, with successful implementations such as the one done by Mobileye (see SemiWiki article).

To dive deeper into what NetSpeed and Imagination Technologies have to offer in this space you can watch the full webinar at this link.

See Also:
NetSpeed Systems
Imagination Technologies


An IIoT Gateway to the Cloud
by Bernard Murphy on 10-10-2017 at 7:00 am

One lesson we all seem to have learned from practical consideration of IoT infrastructure is that no, it doesn’t make sense to ship all the data from an IoT edge device to the cloud and let the cloud do all the computational heavy lifting. On the face of it the idea seemed good – all those edge devices could be super cheap (silicon dust) and super-low power.


But the downsides can be severe. The latency in a round-trip to the cloud may not be acceptable if that edge device needs real-time response to adjust machine behavior. Power at the edge can actually be worse if you are burning it to ship large quantities of data uphill. Security is a problem, as much in industrial IoT (IIoT) devices as in personal devices thanks to a potentially significant attack surface, from the edge to the cloud. Even reliability is at the mercy of the link and the cloud, perhaps acceptable for a consumer device, but not necessarily for industrial applications.

Which is why a lot more compute is moving to the edge. You can do most of the (local) compute you need there (especially for real-time needs), you have more control over the attack surface, you can buffer communication if the uplink is misbehaving and, if you’re careful, you can still do all of this with low power.

Then there’s the question of implementation technology. In the semiconductor world, we tend to assume this means custom ASIC solutions but it’s not always clear that approach is the most effective, especially in the IIoT where needs may be extremely diverse across widely different applications and may need to evolve as standards evolve. You could allow for more flexibility in software on the edge node, but that again becomes more power hungry.

In many IIoT applications a better compromise platform is an FPGA. It’s obviously reconfigurable and can be very power-efficient; not down at the mA consumer-level but quite satisfactory for many industrial support functions. And from a security point of view, while bitstreams (for reconfiguration) are not immune to hacking they are arguably better defended in today’s platforms than software. Moving more of those custom needs to hardware can only reduce the potential attack surface.

Ultimately, whatever you are building has to connect both to sensors and actuators, and to the cloud. The sensor/actuator part of this will presumably be part of your secret sauce, as will be the sensor fusion and data processing. But you don’t really need to reinvent all that communication – to sensors and to the cloud – if you can get it wrapped up in a reference platform. And if you can add hardware extensions on FPGA mezzanine card (FMC) daughter boards, you can configure most of what you need before you have to add anything. Sure, it won’t be as cheap (unit cost) or as compact as an ASIC, but it will be field-configurable, there’s no NRE, and anyway you can have it up and running tomorrow.


Aldec recently hosted a webinar in which they showed how to connect such a solution to the widely popular AWS cloud services. They use their TySOM-1-7Z030 development board in the demo, based on the Xilinx Zynq processor with a dual-core ARM A9, along with DDR, SD, flash and other interfaces, USB 2.0 and 3.0, UARTs, Gb Ethernet, HDMI, mPCIe and camera interfaces. Daughter cards offer multiple standard wireless interfaces as well as industrial interfaces such as the RS standards and CAN, in a range of configurations covering ADAS, IoT, vision, industry and other applications.

Configuring this system follows an expected flow. You design the hardware part of the solution using Xilinx Vivado, along with Aldec Riviera-Pro for functional verification. The software part of the solution is built using the Xilinx SDK. Aldec don’t spend any time on this topic in the webinar (they have other resources you might find useful on those topics). The main focus of this webinar is on connecting your IIoT solution to AWS.

The webinar walks you through setting up an AWS account, connecting to AWS IoT, and registering and configuring your device. This requires a communication protocol, supplied by AWS, for which Aldec recommends the embedded C MQTT option – a lightweight messaging protocol for small sensors and mobile devices, optimized for high-latency, unreliable networks (remember those challenges). AWS then creates a connection kit and certificates for your “thing”, which you can add to your edge device.
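
For readers who want to see the shape of that connection, here is a minimal sketch. The webinar’s demo uses the AWS embedded C SDK; this Python version with the open-source paho-mqtt (1.x) client shows the same idea more compactly. The endpoint, topic and certificate file names are placeholders – AWS IoT issues the real ones when you register your thing:

```python
# Minimal sketch of publishing sensor data to AWS IoT over MQTT/TLS.
# Endpoint, topic and file names are placeholders from the AWS connection kit.
import json, ssl, time
import paho.mqtt.client as mqtt

ENDPOINT = "your-endpoint.iot.us-east-1.amazonaws.com"   # placeholder
TOPIC = "tysom/sensors"                                  # placeholder

client = mqtt.Client()
client.tls_set(ca_certs="AmazonRootCA1.pem",    # AWS root CA
               certfile="device.pem.crt",       # device certificate
               keyfile="private.pem.key",       # device private key
               tls_version=ssl.PROTOCOL_TLSv1_2)
client.connect(ENDPOINT, port=8883)             # AWS IoT MQTT/TLS port
client.loop_start()                             # run network loop in background

while True:
    # In the demo the values come from a temperature/humidity sensor on the
    # TySOM board; here we publish fixed values on a fixed schedule.
    payload = {"temperature_c": 23.4, "humidity_pct": 41.0}
    client.publish(TOPIC, json.dumps(payload), qos=1)
    time.sleep(60)
```

QoS 1 (at-least-once delivery) is a reasonable default over an unreliable uplink – exactly the conditions MQTT was designed for.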

Aldec provides demo examples of this type of build for TySOM boards so you can get started quickly with test cases and then iterate toward your particular solution needs. Starting from a demo setup, the TySOM board will publish processed sensor data to AWS. In the webinar, they show this in action, collecting and publishing temperature and humidity data on a fixed schedule.


The solution is scalable using multiple TySOM boards, each collecting different data from different sensors and each connecting to AWS. Looks like an interesting solution to get up and running quickly with an IIoT product/application. You can get more detail from the webinar HERE.


Webinar on Electrochemistry and how it affects Semiconductor devices
by Daniel Payne on 10-09-2017 at 12:00 pm

My educational background is Electrical Engineering and I’ve learned a lot since starting in the industry back in 1978, working on bipolar, NMOS and CMOS technology and designing DRAM, data controller and GPU devices. I continue to learn about the semiconductor industry through daily reading and attending trade shows like DAC. I’ll be attending a webinar on October 25th at 10AM (PDT) on the topic, TCAD Simulation of Ion Transport and Electrochemistry. These phenomena are essential for two types of modern designs:

  • Non-volatile memories
  • Solid-state batteries

There is also an unintended case where this type of TCAD simulation proves useful, and that is in finding degradation mechanisms. Silvaco is hosting this webinar, and their TCAD tool, Victory Device, has a new capability for electrochemistry simulations.

At this webinar you will learn the following points:

  • Why and when to consider electrochemistry in semiconductor devices
    • Phenomena
    • Applications
  • The equations solved in electrochemistry simulations (a representative form is sketched just after this list)
    • Ion transport
    • Chemical reactions
  • How to set up electrochemistry simulations in Victory Device
    • Definition of chemical species properties
    • Initialization of species concentrations
    • Specification of chemical reactions
    • Output of chemical species data
  • An example: Degradation in an IGZO TFT
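
For orientation ahead of the webinar, it may help to see the shape of the ion-transport equation such solvers work with. Victory Device’s exact formulation may differ, but drift-diffusion treatments of mobile ions are typically of Nernst–Planck form, coupled to the usual Poisson solve:

$$\frac{\partial c_i}{\partial t} = \nabla \cdot \left( D_i \nabla c_i + \frac{z_i q D_i}{k_B T}\, c_i \nabla \phi \right) + R_i$$

where $c_i$ is the concentration of ionic species $i$, $D_i$ its diffusivity, $z_i$ its charge number, $q$ the elementary charge, $\phi$ the electrostatic potential, and $R_i$ the net generation or consumption from the chemical reactions defined in the simulation.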


Presenter

Dr. Carl Hylin is a Senior Development Engineer in Silvaco’s TCAD Division. Since joining Silvaco in 2007, he has worked exclusively on Victory Device. In addition to the chemistry module, he is responsible for many of the trapping and radiation-damage models in Victory Device, as well as for much of the high-precision numerics.

Dr. Hylin holds an SB from MIT, an MS from the University of Illinois, and a Ph.D. from the University of Kentucky, all in mechanical engineering. He is a member of the Society for Industrial and Applied Mathematics (SIAM). His field of specialty is computational science and engineering, which he has applied to rocket engines and search engines as well as to TCAD.

Who Should Attend
Academics, engineers, and management interested in the simulation of semiconductor devices involving ion transport, degradation, and charge capture, including TFTs, non-volatile memories, and solid-state batteries.

Registration
There’s a brief online registration form here. I’ve attended several Silvaco webinars and they have been quite detailed and technical, so don’t expect any marketing fluff. At the end you will have time to type in questions and have them answered.


Enabling a New Semiconductor Revolution!
by Daniel Nenni on 10-09-2017 at 7:00 am

According to semiconductor trade statistics, 2017 will be the strongest market since 2010, easily recording double-digit gains and causing the SOX Semiconductor Index to outpace the NASDAQ and the other indexes. The question Wall Street people have now is: how much longer will semiconductors be an attractive investment? That question was answered at this year’s Global Semiconductor Alliance Executive Forum: Enabling a New Revolution. The US Executive Forum is an invitation-only event attended by leading semiconductor executives from around the world. You can see a list of attendees HERE if you don’t believe me. The next big event is the famed GSA Awards Dinner hosted by Wayne Brady!

Gene Munster, Managing Partner at Loup Ventures, kicked it off with a “Creating New Markets” keynote including his top technology breakthroughs: the Zeppelin (transportation), the movie projector (media), the Internet (business communication), and the smartphone (personal communication). Gene went through 115 slides in 15 minutes but the real focus was on AI, which includes robotics, automotive, augmented reality, and virtual reality as AI interfaces. Gene mentioned that if you are weirded out by any of this you are too old, which, as a father of four millennials, I agree with completely. This is a young person’s game; if you want to add value, lead or stay out of the way. Gene also stated that AI was mentioned by 11% of S&P 500 companies during recent investor calls. I would bet that number is much higher for semiconductor investor calls.

I have Gene’s slide deck if you want to discuss it in more detail in the comments section.

There was a “5G – The Next Generation” session with Jean-Francois Hebert of Dassault, Cristiano Amon of Qualcomm, Robert DiFazio of Interdigital Labs, Shireen Santosham of the City of San Jose, and Preet Virk of MACOM, which was very interesting. 5G is discussed at just about every conference I have attended this year, but when I ask, very few people know the difference between 5G and 4G in terms of transmission rates: 5G targets 10 Gbps versus 4G at 100 Mbps. If you recognize how profitable 4G has been for semiconductors, you can multiply that by 100x for 5G, in my opinion.

The next session, “AI for Real!”, was the best example of the future opportunity for semiconductors in a 5G world. David Edelman, former Obama technical advisor (MIT), Paul Daugherty, Accenture CTO, and Mark Papermaster, AMD CTO, presented slides. Afterwards the panel discussion was moderated by Aart de Geus, Chairman and Co-CEO of Synopsys.

David made some very strong points: AI is everywhere and nowhere, meaning that AI touches all of our lives today whether we recognize it or not and will continue to do so on a very large scale. David also quoted the third of futurist Arthur C. Clarke’s famous three laws, “Any sufficiently advanced technology is indistinguishable from magic,” and AI certainly is modern-day magic.

There was also a lot of discussion on how AI will change jobs and the skill levels of American workers. The consensus was that the skill level will increase as will the acceptance of AI and new technologies. Again, I have the slides and can talk more in the comments section.

Bottom line: Whatever AI and the future holds it will certainly have an insatiable compute and storage demand in both the cloud and edge devices. This heavy demand will continue to drive the semiconductor industry and specifically the SOX for years to come, absolutely.


Why we need new regulations to protect us from Facebook and Equifax
by Vivek Wadhwa on 10-06-2017 at 12:00 pm

The theft of an estimated 143 million Americans’ personal details in a data breach of consumer-credit reporting agency Equifax and the Russian hack of the U.S. elections through Facebook had one thing in common: they were facilitated by the absence of legal protection for personal data. Though the U.S. Constitution provides Americans with privacy rights and freedoms, it doesn’t protect them from modern-day scavengers who obtain information about them and use it against them. Our privacy laws were designed during the days of the telegraph and are badly in need of modernization. Much damage has already been done to our finances, privacy, and democracy—but worse lies ahead.

Credit bureaus have long been gathering information about our earnings, spending habits, and loan-repayment histories in order to determine our credit-worthiness. Tech companies have taken this one step further, monitoring our web-surfing habits, emails, and phone calls. Via social media, we have volunteered information on our friends and our likes and dislikes, and shared family photographs. Our smartphones know everywhere we go and can keep track of our health and emotions. Smart TVs, internet-enabled toys, and voice-controlled bots are monitoring what we do in our homes—and often are recording it.

In the land-grab for data, there were no clear regulations about who owned what, so tech companies staked claims to everything. Facebook required its users to grant it “a non-exclusive, transferable, sub-licensable, royalty-free, worldwide license to use any IP content” they posted to the site. It effectively required them to give it the right to use their family photos and videos for marketing purposes and to resell them to anybody. American laws are so inadequate that such companies are not even required to tell consumers what information they are gathering and how they will use it.

Unlike manufacturers liable for the safety of their products, tech companies gathering our data have practically no liability for compromising it; they can protect it as they choose and sell it to whomever they want to—regardless of how the third party will use it. No wonder Equifax had such lax security or that Russians and hate groups were able to target the susceptible with misinformation on Facebook.

The problem of data brokers not being required to provide industrial-strength security can possibly be fixed by the FTC. University of California at Berkeley law professor Pamela Samuelson says that the FTC has “statutory authority to regulate unfair and deceptive practices”; it can act on that authority by initiating claims against those who fail to maintain adequate security. She notes that the FTC has used these powers before, by nudging firms to have privacy and security policies. And when firms failed to comply with their own policies, the FTC treated that as an unfair and deceptive practice.

This would level the playing field by making data brokers as responsible for their actions as most product manufacturers are for theirs. We hold our car manufacturers responsible for the safety of their products; why shouldn’t the tech companies bear similar responsibility?

New legislation could be enacted too. But Samuelson says that the data holders would fight it even harder. And, though it will be a good step forward, it will only solve yesterday’s problems.

Its falling costs will soon make DNA sequencing as common as blood tests, and the tech companies that today ask us to upload our photos will tomorrow ask us to upload our genomic information. Technology will also be able to understand our mental states and emotions. These data will encompass everything that differentiates us as human beings, including our genetics and psychology. Whilst credit reports could result in the withholding of loans, corporate use of our genetic data could affect our jobs and livelihoods. We could be singled out for having genetic predispositions to crime or disease and find ourselves discriminated against in new ways.

The Genetic Information Nondiscrimination Act of 2008 prohibits the use of genetic information in health insurance and employment. But it provides no protection from discrimination in such matters as long-term care, disability, housing, and life insurance, and it places few limits on commercial use. There are no laws to stop companies from using aggregated genomic data in the same way that lending companies and employers use social-media data, or to prevent marketers from targeting ads at people with genetic defects.

Some states have begun passing laws to say that your DNA data is your property; but we need federal laws that stipulate that we own all of our own data, even if it takes an amendment to the Constitution. The right to decide what information we want to share and the right to know how it is being used are fundamental human rights in this era of advancing technologies.

Harvard Law School professor Lawrence Lessig has argued that privacy should be protected via property rights rather than via liability rules, which don’t prevent somebody from taking your data without your consent and paying for it later. A property regime would keep control of the data with the person holding the property right. “When you have a property right, before someone takes your property they must negotiate with you about how much it is worth”, argues Lessig. Imagine a website that allowed you to manage all of your data, including data generated by the devices in your house, and to charge interested companies license fees for its use. That is what would become possible.

Daniel J. Solove, Professor of Law at George Washington University Law School, has reservations about protecting privacy as a form of property right, because the “market approach has difficulty assigning the proper value to personal information”. He worries that although to an individual giving out bits of information in different contexts, each transfer may appear innocuous, the information could be aggregated and become invasive when combined with other information. “It is the totality of information about a person and how it is used that poses the greatest threat to privacy”, he says.

It isn’t going to be easy to develop the new systems for maintaining control of personal information, but it is imperative that we start discussing solutions. As Thomas Jefferson said in 1816: “Laws and institutions must go hand in hand with the progress of the human mind. As that becomes more developed, more enlightened, as new discoveries are made, new truths disclosed, and manners and opinions change with the change of circumstances, institutions must advance also, and keep pace with the times.”

For more, please read my book, Driver in the Driverless Car. It explains the choices we must make to create an amazing future.


Driver Distraction? Don’t Look at Me!
by Roger C. Lanctot on 10-06-2017 at 7:00 am

The American Automobile Association’s (AAA) ongoing battle with driver distraction, among other issues such as drowsy driving and teen driving, turns a new page with a report flagging 23 of 30 tested in-vehicle infotainment systems as demanding a high or very high level of attention from drivers performing tasks while the vehicle is in motion. The report arrives as car makers have almost completely abdicated responsibility for in-dash interfaces with the acceptance and adoption of Apple’s CarPlay and Alphabet’s Android Auto smartphone interfaces.

“New Cars Have More Distracting Technology on Board Than Ever Before” – Washingtonpost.com

A growing proportion of automotive infotainment systems come outfitted with one or both of the Silicon Valley-sourced solutions. Every head unit with Android Auto or Apple CarPlay has had to go through a certification process from one or both of those companies before coming to market. This has made Apple and Alphabet either driver distraction gatekeepers or enablers.

In fact, the Android operating system is steadily insinuating itself into these very same systems and will likely begin arriving in production vehicles as a native OS late in 2018. This means the dashboard screens in cars, and the user interfaces on those screens, will increasingly be dictated by Alphabet and Apple.

This has to be maddening to AAA, which has long stressed the cognitive distraction of smartphone use in cars, regardless of whether the phones are used hands-free. AAA has never been able to convince regulators to completely forbid smartphone use of any kind in moving vehicles, and the Apple CarPlay and Android Auto smartphone integration systems were seen as one way to mitigate distraction.

Now, it seems, these systems have actually opened up a Pandora’s box of tempting app-based distractions that may be undoing the intended prophylactic. The National Highway Traffic Safety Administration (NHTSA) estimates that 3,477 people were killed in crashes attributed to driver distraction in 2015 and that the toll is rising.

AAA claims that the current systems violate NHTSA distraction guidelines, which are voluntary, not compulsory. For car makers, though, the rising influence of Apple and Alphabet means an increasing abdication of responsibility for distraction. The temptation for car makers is to blame Apple and Alphabet – even though car makers are responsible for the interfaces that allow users to switch back to OEM-supplied interfaces or the car radio.

The AAA report arrives as car makers find themselves on a slippery artificial intelligence slope. While some auto makers are working on AI systems (like the IBM Watson-infused OnStar Go from GM) designed to provide personalized assistance built into the car, Apple and Alphabet are poised to step up their in-dash game with artificial intelligence systems of their own, designed to further mitigate distraction by emphasizing voice commands over touch-screen interfaces.

This battle will ultimately unfold over access to vehicle sensor data, something that automakers have strenuously sought to wall off from Apple’s and Alphabet’s smartphone platforms. But if pressure grows from regulators, car makers may be forced into the arms of Apple and Alphabet who may have greater resources to bring to the challenge of mitigating distraction based on the complete sensor-infused driving context.

Alphabet certainly has an edge here as Android begins penetrating dashboards as a native operating system. AAA would likely not approve given its opposition to smartphone use in vehicles of any kind. The only alternative may be to turn the entire car into a smartphone on wheels – which, come to think of it, is more or less what is happening. Or maybe revert to the regular old car radio. Yeah, right.


Presto Engineering – Outsourced Secure Provisioning – Even Their Secrets Have Secrets
by Mitch Heins on 10-05-2017 at 12:00 pm

When I first heard about Presto Engineering I was enamored by a statement on their web site claiming that one of their secured solutions included “the ability to incorporate your secrets without knowing them”. If Mr. Spock had been in the room, his eyebrow would certainly have been raised. Indeed, what does that statement mean?

It turns out that in the world of the Internet of Things (IoT), almost every device has a communications link associated with it and that link is vulnerable to attack. As a result, companies building IoT systems are working feverishly to incorporate security into their devices. While security can be “programmed” into your software, almost everyone is now using hardware features to make their IoT systems more secure. And, though there are several different types of hardware security measures that can be employed, almost all of them require some type of “provisioning”. Presto Engineering is one of the companies that really knows how to do this step well.

So what is this provisioning thing and why is it so important? This brings us full circle to the statement that raised our proverbial eyebrow. Provisioning is the process whereby the secrets necessary for security functions are incorporated into individual IoT devices. The trick here is that the chain of secrecy for this data must be such that even the people doing the provisioning of an IoT chip can’t know the secrets. Yep, you heard that right. The last thing you want to do is go to the trouble of building a highly secured IoT chip only to have your secure UIDs, transport keys, authorization certificates, etc. get intercepted and compromised before they are ever loaded into the chip. So the secrets being loaded have to be secret to everyone, including the company doing the actual physical provisioning.
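
How can you program a secret you are not allowed to see? A minimal sketch of the usual principle (my illustration, not necessarily Presto’s exact flow): the customer encrypts its device secrets to a public key whose private half lives only inside the provisioner’s hardware security module (HSM), so everything the people and servers in between handle is opaque ciphertext:

```python
# Illustrative only: why a provisioner can handle secrets without knowing them.
# Uses the 'cryptography' package; in a real flow the private key is generated
# inside an HSM and never leaves it.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Generated here only so the sketch runs end to end; a real flow generates
# this keypair inside the provisioner's HSM.
hsm_private = rsa.generate_private_key(public_exponent=65537, key_size=3072)
hsm_public = hsm_private.public_key()

device_key = b"\x13\x37" * 16                   # the customer's 32-byte secret
wrapped = hsm_public.encrypt(device_key, oaep)  # what the provisioner receives

# The provisioner stores and routes 'wrapped' -- opaque ciphertext. Only the
# HSM, at the moment of programming the part, recovers the plaintext.
assert hsm_private.decrypt(wrapped, oaep) == device_key
```

Everything upstream of the HSM – networks, databases, operators – sees only the wrapped blob, which is what makes the chain of secrecy auditable.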

Depending on the end IoT application, there will be different levels of secrecy and control required. In fact, the industry has set up procedures to provision secure chips in a way that can be audited against a set of international standards known as the Common Criteria for Information Technology Security Evaluation. These criteria specify what are known as Evaluation Assurance Levels (EALs), ranging from EAL1 (minimal) to EAL7 (government and military high security). EAL5 is typical for highly demanding commercial applications such as banking, payment, pay TV, secure access control systems, etc. The important part is that the “provisioner” uses rigorously controlled and auditable processes to securely handle its customers’ secrets while ensuring absolute integrity and confidentiality of the operation.

Security hardware can be provisioned at the wafer level during wafer probe; at the chip level, after packaging; or at the board level once the chips are placed on the board. Depending on where the provisioning is done, there will be processes that need to be in place to ensure that “secret data” can be securely transmitted to the provisioner. Additionally, the provisioner will also need to ensure that the data will remain secure until it can be physically encoded into the ICs. This implies secure connections and servers between the secret data supplier and the various manufacturing sites where the provisioning is done.

There are several strategies that can be employed to ensure data integrity at the required EAL. The higher the EAL, the costlier the strategies get. For a company like Presto Engineering, the trick is to be able to customize the offering, providing enough security for the requirement while minimizing the cost to the customer.

Presto, in fact, does just that. They have a comprehensive and flexible IT system that allows them to connect customers’ secret data with dedicated data storage rooms while complying with different EAL requirements. As an example, if the EAL is relatively low, Presto may be allowed to use virtual machines on shared servers to keep different customers’ data separated. By sharing servers, they can keep costs down.

Alternatively, if the EAL requirement is high, the customer may demand that their data be handled only on customized high-security servers (also known as hardware security modules, or HSMs). The HSMs may be owned by Presto or by the customer. In either case, the more dedicated and more secure the setup, the higher the cost and the greater the lead time to deploy the provisioning systems.

In addition to data storage, Presto has secured test floors (EAL5+/6) and secured warehouses where provisioned parts are kept until they are shipped via secure methods to their customers’ locations. Presto also has expert trained staff operating secured flows who can assist customers in preparing devices for standards certification, such as EMVCo (Europay, MasterCard and Visa) testing of secure element cards.

Many enterprise-level companies handle these provisioning tasks themselves; however, in the world of IoT there are large numbers of small and medium-sized enterprises (SMEs) for which this would be a daunting task. Nearly all the places where provisioning takes place are outside the walls of the SMEs, putting their secret data at risk. SMEs really need to look to outsourced provisioners to manage their costs, schedules and security risks.

Ideally, an outsourced provisioner should offer certain key capabilities:

  • A standardized and certified (EAL5+) secure process.
  • The ability to provision a wide range of device types, form factors and security technologies.
  • Competitive pricing at low and medium volumes with the ability to scale to larger volumes.
  • The ability to configure the provisioning process and infrastructure to meet varying requirements.

Presto Engineering offers all these capabilities and more.

They fill a significant void by giving IoT SMEs a trusted partner who can literally “incorporate their secrets without knowing them”. In the words of Spock, “Indeed” … to which we respond, “Presto Engineering, ahead, warp factor 5”.

See also:
Secure Provisioning White Paper
Presto Engineering Solutions Page

Presto Engineering, Inc. is a world-class turnkey production solution for IoT, secure and high-speed (5G) devices, helping chipmakers accelerate time-to-market and achieve high-volume manufacturing – without having to invest in operations teams and capital equipment. The company offers a global, flexible, and dedicated framework, with headquarters in Silicon Valley and operations in Europe and Asia.

Presto has operations in 7 locations worldwide. All secure provisioning facilities are certified EAL5+ and are audited annually by major bank and payment organizations. Presto has more than 50 secure provisioning experts on its technical staff. The company ships more than 100 million units annually and has securely provisioned more than a billion products. Customers include major access control, pay TV, telecom, banking, and networking companies.


An Informal Update
by Bernard Murphy on 10-05-2017 at 7:00 am

I mentioned back in June that Synopsys had launched a blog on formal verification, intended to demystify the field and provide help in understanding key concepts. It’s been a few months, time to check in on some of their more recent posts.


First up, it feels like they are finding their groove. Relaxed style, useful topics but now with a little more polish. Not marketing polish (heaven forbid), just good, sometimes even witty writing style. That makes these blogs (running now at about 2 per month) fun to read, which in turn should make them all the more effective in helping us become comfortable and more knowledgeable about the domain. Following is a quick (and incomplete) summary of a few that caught my eye.

Iain Singleton (a fellow Brit) blogged on abstractions and why we should feel perfectly comfortable with the idea. In formal it is often necessary to replace a complex block of logic with a simplified FSM, modeling just the interesting corner behaviors of that block for the purposes of the current verification objective. Iain draws a parallel with his commutes (in England) from Lancashire to the South East. He relates something every commuter will instantly understand – sometimes you look up and don’t know where you are or how to get where you are going. Yet you continue on auto-pilot without missing a beat. You can do that because you have subconsciously abstracted the route. Your brain doesn’t worry about intermediate details, it just remembers the principal milestones. Which is exactly what you are doing when you build an abstraction for a formal proof.

Abhishek Muchandikar wrote a blog on how to develop confidence that you can sign off in the presence of inconclusive proofs, an important question many ask about formal. Abhishek uses another analogy I like – the (ancient) Greek phalanx formation (don’t you love it when engineers undermine the nerd myth by both knowing this stuff and using it in their writing). The phalanx formation was designed to be close to impenetrable and unstoppable, but naturally was no stronger than its weakest soldier. A similar concept applies to formal signoff in the presence of inconclusives (bounded proofs). For these cases you can run a bounded coverage analysis to the same depth to understand what parts of the code were not reached in the bounded proof, and what constraints (if any) may have limited reachability. All bounded proofs will have a weakest soldier, but adequate analysis of that weakness can lead you to confidently assert that the weakness is acceptable.

Anders Nordstrom posted a blog on a couple of exotic usages of formal, taken from this year’s DAC. One was hardware/software co-verification using formal methods. I was able to find an open link to the first paper he names but not the second. The first method uses specialized formal tools which can do formal proving in both RTL and C, so it is perhaps a little out of reach for most of us. However, I could imagine that in some cases you might model the software through an (RTL FSM) abstraction which could then be included in the hardware proof. The second exotic application Anders mentions is formal verification of mixed-signal designs. There’s in fact a fairly rich list of papers around this domain. TI presented a paper at DAC on using abstract models for interface blocks when applying formal methods to verify interface behavior.

I’ll wrap up with one more blog, from Sean Safarpour, on the organic growth of formal verification. This is interesting in part to understand where formal adoption is growing but in some ways even more to understand how that is happening. Sean said that in previous trips to Asia he found that he was pushing formal, interest was high but progress between visits was limited. The climate was completely different on his most recent trip; he didn’t have to pitch, customers were now pitching him on their successes in using formal and how they now need help to further expand usage.

Sean makes some interesting points on how this happened. The myth that formal requires deep formal expertise still persists, yet Sean says that there are probably only a dozen or so such people in Asia. Hiring more expertise is therefore not a practical solution to expanding usage. Instead expertise has to be grown organically, yet companies can’t afford to wait for years for engineers to become deep experts. Instead they are seeding interest in verification groups (through conference attendance for example), encouraging a champion, growing a pilot group around that champion, starting with the simpler formal applications (apps) and socializing successes within engineering and also up the management chain. I suspect many other successful adoptions started in the same way. Good input for those of you still on the fence.

You can access the full set of blogs HERE.


Is there anything in VLSI layout other than pushing polygons? (2)
by Dan Clein on 10-04-2017 at 12:00 pm

One of the important changes that happened between 1984 and 1988 was in hardware platform development. Calma evolved from the mainframe S140, with 2 combined monitors per terminal, to the S280 with 2 individual monitors per terminal. This meant that we moved from noisy, darker rooms to quieter, better-lit rooms. We doubled the speed and the memory in our disks – we had 2 whopping disks of 512 MB each, and each cost 100K US dollars. We also had 2 powerful color plotters from Versatec. Part of the layout DRC was done by hand using plots at x1000 or x10000 scale and plastic rulers. These plotters needed a climate-controlled room at 20-21 Celsius and 40-65% humidity – a nice and refreshing room in which to chill when coming in from a tropical day outside in the Tel Aviv area. The room was also used to host the Calma mainframe and the console which we used for backups.

Even though Calma improved in speed, memory and terminals, there was no network connection between the 2 computers. You had to use tapes (!) to transfer data from the S140 to the S280. Later Calma tried individual workstations but, unfortunately, again with no network between them; you needed to write “cassette tapes” to move data between the workstations and the mainframe. So being a layout designer meant you needed to know how to write and read tapes, prepare daily and weekly backups, prepare data for IBM for verifications, align plotter paper and change ink, do maintenance on computers and plotters, etc. Suddenly the opportunity to expand one’s knowledge was there. The only thing you needed to do was volunteer.

At the same time Daisy decided to bring to market a layout tool to conquer the layout market, as they had the circuit world. As an experienced layout designer, I had the option to try it, so I volunteered. I went for a week to Daisy Corporation in Israel and got trained on ChipMaster 3.0, the crown jewel of Daisy at that time. Well, after one week of training and a few weeks of testing, the prognosis was bleak: they were missing important parts of the flow needed to use this software as an augmentation to Calma, or as a replacement. The biggest flaw was that it was not capable of generating and reading standard GDSII. For somebody wanting to use this software in production this was a showstopper. So I decided that we were not going to use them.

Guess what, I was right – Daisy had a short life afterwards. But again, it was time to look for something better than Calma, and the market had enough processors to do just that. Like Motorola Israel, the Austin team was looking at options, and 2 showed up around 1986: CAECO, a software from Silicon Compilers (a future part of Mentor Graphics), and SDA (part of the future CADENCE). After a few demos and benchmarks, Motorola went for CAECO, as the software had both circuit and layout design tools – still not linked over a network, but at least the design constraints were the same. We still used printed paper for schematics (mostly for approval controls) but in the following years we were able to open schematics and layout on the same screen. How is that for a technological revolution? What about moving from a pen to a mouse with 2 buttons – now we had select and functions in one device, a 100% time savings. This was a big jump in productivity, and Motorola migrated the MASKAP verification software from IBM to UNIX, so we were now capable of running verification locally on each machine! Talk about a productivity boost! Not only did we have unlimited licenses but, with local CAD, we developed additional layout-design-driven DRC verifications.

As we started standard cells, datapath cells and I/O cells, we invented specific design rules for each type of design. On an interesting note, the first SUN machines we used were powered by 68000 processors previously designed in Austin. We progressed as the processors progressed. I am a very proud layout designer as I knew the layout project leader for the 68030, Beverly Vann, and worked later with the 68040 leader, Geno Browning.

As we started to build new structures, we needed to build new tools. The CAD department grew to 10 Masters and PhDs in software in no time, and the ideas started pouring in. We had to build memories (in a multi-usage process – a kind of SRAM) and we needed a way to automatically code the YES (1) and NO (0) into them. Our chips had firmware that had to be loaded (coded) with contacts and diffusion. From the day the base layers were ready, almost every week we had a new netlist (coding). So in the last week before tapeout the firmware team would provide a final code (netlist) and we would run the generator scripts and final verifications. Guess what, I volunteered there also and learnt a few more things…

The next big thing I was involved in was testing of a structure called a programmable logic array (PLA). This is a kind of programmable logic block used to implement combinational logic circuits. Esher Haritan was the CAD guy who had to implement the new “tool” and I volunteered to generate for him all the combinational layouts to test this new “beast”. We had a lot of fun, and from that date we are still friends, 30 years later… I guess this was a great experience, as I started to work with CAD on roadmaps for future tools.

But process technology was moving forward and we migrated to 2 layers of metal (!). Now we could have “metal directions”. As we used metal 1 for local interconnect, we decided, like almost everybody in the world, to use metal 2 vertically to get out of the cells onto busses! Well, if we have 2 metals and we can plan to have some simple functions “ready” in layout, we, like others, invented our own standard cell (functional gate) library. At first a standard cell library was just a collection of simple gates: inverters, NANDs, NORs, flip-flops, buffers and spare cells. We built all these “gates” on a fixed height so they could later be used by all designers and the layout could be done much faster. As is done today, we built the cell length as a multiple of the VIA-to-VIA spacing in metal 2 (the PITCH).
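
In other words, every cell width snaps to the metal-2 routing grid. A tiny back-of-envelope sketch in Python (all numbers invented for illustration):

```python
# Toy illustration: standard cell widths as multiples of the M2 pitch.
# Numbers are invented, not from any real process.
import math

M2_PITCH = 1.6   # via-to-via spacing in metal 2, in microns (assumed)

def cell_width(raw_width_um):
    """Snap a cell's raw layout width up to the next multiple of the pitch."""
    return math.ceil(raw_width_um / M2_PITCH) * M2_PITCH

for name, raw in [("inv", 3.1), ("nand2", 5.9), ("dff", 18.2)]:
    w = cell_width(raw)
    print(f"{name}: {raw} um -> {w:.1f} um ({int(round(w / M2_PITCH))} M2 tracks)")
```

With every cell boundary on the grid, vertical metal-2 routes can cross any cell in a row without off-grid jogs.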

We did not have modeling or extraction of the cells in use, nor information about internal timing from input to output, but this was already a productivity improvement. I did cells for a few months, but blocks were schematic-driven and the placement and routing were manual. Tired of manual work, I started to ask around whether there were tools to place and route automatically. I heard from others that somebody in the UK had invented a Place & Route tool. It was called TanCell and it came from a company called Tangent. We got in touch with them and asked what was needed for a benchmark: humans versus the new software. We invited their representative to come and work inside Motorola Israel, so we had a vendor AE onsite.

After a false start with a poor AE we got Tommy Belpasso, who was at that time their best expert. As I was the owner of the block and the library, we spent 3 weeks together. The tool was crude, and to make it work you had to modify libraries specifically for the TOOL’s limitations. No more free imagination: now you had to create cells with pins in the center, as the tool was the “grandfather” of channel-based Place & Route. We made it, but in this battle the effort was too great. What I learnt in those 3 weeks was that a good AE can make a poor tool work by finding workarounds and solving issues on the spot. I was lucky to meet a few more AEs like Tommy later in my life…

One interesting development from this experiment was that we developed standard cells with the vertical metal 2 tracks included from top to bottom, at the VIA-to-VIA pitch, but hidden as a TEXT layer for M2. When we ran verifications, the CAD team wrote an “on the fly” script that translated M2 TEXT into M2 (temporarily, at GDSII creation) so we could verify at the cell level that when the routing later went over the cell, it would still be DRC clean. The lesson learnt, again, was that you can always take something new from another domain – in this case digital P&R – and apply it to improve your flow, methodology, etc. This is why volunteering to learn things outside my working duties was always appealing.
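
The trick is simple enough to sketch. Here is a toy version of that on-the-fly translation plus the spacing check it enables – plain Python over rectangles with invented layer numbers, not the actual Calma/CAECO scripting:

```python
# Toy sketch: promote hidden M2 "text" tracks to real M2, then min-spacing check.
# Layer ids and geometry are invented; real flows operate on GDSII data.
M2, M2_TEXT = 2, 102            # assumed layer ids: real metal 2, hidden tracks

# (layer, (x0, y0, x1, y1)) rectangles for one cell
shapes = [
    (M2,      (0.0, 0.0, 0.8, 10.0)),   # real metal-2 shape in the cell
    (M2_TEXT, (1.6, 0.0, 2.0, 10.0)),   # hidden routing track
    (M2_TEXT, (3.2, 0.0, 3.6, 10.0)),   # hidden routing track
]

# Step 1: the "on the fly" translation -- treat hidden tracks as real metal.
promoted = [(M2, box) if layer == M2_TEXT else (layer, box)
            for layer, box in shapes]

# Step 2: a trivial min-spacing DRC between M2 rectangles (x direction only).
MIN_SPACE = 1.0                  # assumed rule, microns
m2_boxes = sorted(box for layer, box in promoted if layer == M2)
for a, b in zip(m2_boxes, m2_boxes[1:]):
    space = b[0] - a[2]          # next left edge minus previous right edge
    status = "ok" if space >= MIN_SPACE else "VIOLATION"
    print(f"space {space:.2f} um: {status}")
```

Run it and the first pair reports a violation (0.80 um) while the second passes (1.20 um) – exactly the kind of cell-level check the hidden tracks made possible before the router ever drew real metal over the cell.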

More about how a layout designer can have “spice” in their profession next time.

Dan Clein
CMOS IC Layout: Concepts, Methodologies, and Tools

Also read: Is there anything in VLSI layout other than “pushing polygons”? (3)


Semiconductor IP on Fortune’s 2017 100 Fastest-Growing Companies List!
by Daniel Nenni on 10-04-2017 at 7:00 am

The semiconductor IP market has always been a big draw for SemiWiki readership and I expect that to continue. One of the more interesting companies we have covered over the past 6+ years is CEVA, who is now on Fortune’s 2017 100 Fastest-Growing Companies List. In fact, CEVA is the ONLY semiconductor IP company on the list, where they join semiconductor companies Silicon Motion, NVIDIA, Cirrus Logic, Microsemi, Skyworks, and IDT.

Gideon Wertheizer, CEO of CEVA, commented: “Fortune’s acknowledgment of CEVA as one of the fastest growing public companies over the past three years is a testament to our successful expansion strategy which has enabled us to become a technology leader. Our platform IPs for 5G, deep learning, computer vision, voice assistants, Bluetooth and Wi-Fi are critical building blocks for all smart and connected devices. We are very proud to feature on this list alongside some of the world’s most prominent companies.”

We are at the intersection of three trends that are making semiconductor IP even more interesting moving forward: the advent of systems companies making their own chips (systems companies now dominate the fabless semiconductor ecosystem and I expect that trend to continue), artificial intelligence being a boon to semiconductor IP (deep learning and computer vision, for example), and M&A – the mega-acquisition of ARM by Softbank last year, followed by the Chinese acquisition of Imagination Technologies this year.

CEVA stock, by the way, was around $15 when we started covering them and today it is over $40.

For the record, CEVA is the leading licensor of signal processing IP used for image and computer vision, deep learning, audio, voice, speech, and sensor fusion. CEVA also provides wireless communication and connectivity IP. The target markets include mobile, wearables, automotive, industrial, and consumer IoT, which just about covers every topic on SemiWiki.com, absolutely.

CEVA is also a very well-run company with more than 300 employees in the US, Israel, Europe and Asia. To date more than 8 billion CEVA-powered chips have been shipped worldwide. Given the exploding silicon growth in IoT, automotive, robotics, drones, mobile, wearables, and dozens of other vertical markets, we should hit a trillion CEVA-powered chips in the not-too-distant future.

Bottom line: Semiconductor IP is a critical enabler of the fabless semiconductor ecosystem.

About CEVA, Inc.
CEVA is the leading licensor of signal processing IP for a smarter, connected world. We partner with semiconductor companies and OEMs worldwide to create power-efficient, intelligent and connected devices for a range of end markets, including mobile, consumer, automotive, industrial and IoT. Our ultra-low-power IPs for vision, audio, communications and connectivity include comprehensive DSP-based platforms for LTE/LTE-A/5G baseband processing in handsets, infrastructure and machine-to-machine devices; advanced imaging, computer vision and deep learning for any camera-enabled device; and audio/voice/speech and ultra-low-power always-on/sensing applications for multiple IoT markets. For connectivity, we offer the industry’s most widely adopted IPs for Bluetooth (low energy and dual mode), Wi-Fi (802.11 a/b/g/n/ac up to 4×4) and serial storage (SATA and SAS). Visit us at www.ceva-dsp.com and follow us on Twitter, YouTube and LinkedIn.