
The Real Lesson from the AWS Outage
by Matthew Rosenquist on 03-08-2017 at 7:00 am

The embarrassing outage of Amazon Web Services this week should open our eyes to a growing problem. Complex systems are difficult to manage, and when they are connected in dependent ways, a fragile result emerges. Such structures are subject to unexpected malfunctions that can sprawl quickly. One of the most knowledgeable technology companies on the planet learned just such a lesson this week. Amazon’s star child, its cloud services, had a major disruption. It was not a nation-state attack, sophisticated teams of cyber-hackers, or even malicious insiders bent on destruction. Nonetheless, the lessons are telling, and their ramifications will be important to all of us.

Summary of the Amazon S3 Service Disruption: We’d like to give you some additional information about the service disruption that occurred in the Northern Virginia (US-EAST-1) Region on the morning of February 28th. The Amazon Simple Storage Service (S3) team was debugging an issue causing the S3 billing system to progress more slowly than expected. At 9:37AM PST, an authorized S3 team member using an established playbook executed a command which was intended to remove a small number of servers for one of the S3 subsystems that is used by the S3 billing process. Unfortunately, one of the inputs to the command was entered incorrectly and a larger set of servers was removed than intended…

It was one employee, mistyping a command input, who caused a significant outage across major portions of the Internet. Amazon worked furiously to contain and recover from the incident. It will have to rebuild trust with customers who were sold on the resiliency of ‘cloud’ services to avoid such events. Amazon has already stated it will learn from the event and will apply compartmentalization controls to lessen potential damage in the future. But there is a more significant realization to be made.

The greater lesson for us all is that when hugely sophisticated systems interconnect with each other, there is an exponential increase in complexity. Due to reliance, authority, and trust, these structures can fail in spectacular fashion. The AWS example shows how such a situation allows a series of cascading, unintended effects – which could not easily have been predicted – to occur and cause widespread impact. As bad as it may have appeared, this incident was not especially severe. If it had been an intentional attack from a capable, motivated, and sophisticated attacker, I believe the results would have been catastrophic.

With the AWS outage we can see the impact of an unintentional accident and the difficulty of recovering even when everyone is working together to resolve the issue. Now imagine what a malicious and focused cyber-threat could do while being stealthy, striving for maximum damage, and actively undermining the countermeasures and recovery actions of response teams.

If this were a malicious insider or professional hack, the damage would be a thousand times worse. We would still be picking up the shattered pieces. There would be tears falling from the AWS cloud.

This week it was cloud storage services making websites unavailable. What happens when it is a fleet of autonomous vehicles that puts lives at risk, or the complex national power-grid infrastructure?

We must take a fresh look at understanding threats, risks, countermeasures, and protection practices as individual pieces of the computing world grow much more complex and more connected. Traditional methods are not sufficient to understand how chain reactions can occur in the next generation of new technologies and services.

Interested in more? Follow me on Twitter (@Matt_Rosenquist), Steemit, and LinkedIn to hear insights and what is going on in cybersecurity.


Automotive OEMs Get Boost as NetSpeed NoC is Certified ISO 26262 Ready
by Mitch Heins on 03-07-2017 at 12:00 pm


Today I read with great interest news from NetSpeed Systems that both their Gemini and Orion NoC IPs have been certified ISO 26262 ASIL D ready. They were certified by SGS-TUV Saar GmbH, an independent accredited assessor. This is a big deal because until now it was left to the OEMs to do most of the heavy lifting to qualify their ICs’ interconnect for the ISO automotive functional safety standard. To be clear, they still do; however, if they use NetSpeed’s certified NoC IP, a significant burden has been lifted.

To compete in the automotive space, companies create SoC platforms and generate derivatives for market-segment differentiation. Many of the big blocks remain the same across derivatives, while new blocks are added and block configurations change. The interconnect, however, always changes with each derivative. Each time this happens, designers have to re-create a new NoC based on a new floorplan and different anticipated traffic patterns, QoS and safety/security requirements. Doing this by hand is a big burden for designers, especially when you factor in that they must make sure the new NoC meets all of the new QoS, power, performance and safety requirements and is once again ISO 26262 compliant to the required ASIL level.

NetSpeed’s synthesis capabilities make the task of creating a new NoC incredibly easy. Designers can quickly change constraints and then re-synthesize the NoC. The cool part is that NocStudio, the synthesis tool doing all of this work, now understands the ISO 26262 standard and can give designers an estimate of the new NoC’s ISO 26262 ASIL score and level before it is even synthesized.

At this point, it should be noted that the NetSpeed NoC IP has been certified ready for ASIL-B (90% SPFM) through ASIL-D (99% SPFM) levels depending on how the NoC is configured. It should also be noted that NetSpeed’s solution is the first coherent NoC IP to be certified ISO 26262 ready. This is especially important for state-of-the-art automotive SoCs targeted for autonomous vehicles. Those SoCs have complex interactions among heterogeneous CPU cores, clusters, vision processors and storage and the complexity has gotten to the point that it has become nearly impossible to build these types of interconnects by hand. NetSpeed takes on this challenge leveraging advanced machine learning algorithms to build correct-by-construction designs that can manage the complexity while also ensuring coherency and functional safety as part of the solution.

From the ISO 26262 point of view, NetSpeed’s architecture has safety built in at multiple levels, including defect checks for both end-to-end and hop-to-hop failures. Additionally, NetSpeed lets the designer fully specify NoC master-slave relationships not only in terms of QoS and security, but also for specific ASIL targets. Unlike other NoCs, NetSpeed’s NoC IP enables the designer to customize the NoC to be as heterogeneous as the design it serves. Master-slave relationships can be set up for varying ASIL coverage and secure and/or non-secure data transmission. Specific masters can also be blocked from specific address ranges that may include multiple slaves. This can be done at synthesis time, creating a hardwired firewall, or dynamically at run time, without the need to split the interconnect.

This brings me to my last point. As with most problems, the best solutions are those that take a problem into account holistically from the beginning, when early design trade-offs can be made with more degrees of freedom. Adding features like NoC coherency and functional safety onto an existing fixed architecture is extremely costly, both in terms of system performance and area. NetSpeed’s ability to synthesize in and optimize both of these functionalities at different levels of granularity makes a huge difference in the quality of the design generated.

A key point here is that NetSpeed is unique in its ability to optimize not only for specific QoS, power, performance and area metrics but also for specific ISO 26262 ASIL levels in different parts of the system. You can’t do this if you don’t look at the problem holistically.

Interestingly, the ISO standard reviews not only your design, but also your design team and how they do their work. The reason the NetSpeed team is now certified ready for ISO 26262 is that they think holistically and methodically, and it shows in their products.

See also
Press release link
NetSpeed Web Page


Perspective in Verification
by Bernard Murphy on 03-07-2017 at 7:00 am

At DVCon I had a chance to discuss PSS and real-life applications with Tom Anderson (product management director at Cadence). Tom is very actively involved in the PSS working group and is now driving the Cadence offering in this area (Perspec System Verifier), so he has a pretty good perspective on the roots, the evolution and practical experiences with this style of verification.


PSS grew out of the need to address an incredibly complex system verification problem, which users vociferously complained was not being addressed by industry-standard test-bench approaches (DVCon 2014 hosted one entertaining example). High on the list of complaints were the challenges of managing software- and use-case-based testing in hardware-centric languages, of reusing tests across diverse verification engines and across IP, sub-system and SoC testing, and of managing tests of complex constraints, such as varying power configurations layered on top of all that other complexity. Something new was obviously needed.

Of course, the hope in cases like this is “#1: make it handle all that additional stuff, #2: make it incremental to what I already know, #3: minimize the new stuff I have to learn”. PSS does a pretty good job with #1 and #3 but some folks may feel that it missed on #2 because it isn’t an incremental extension to UVM. But reasonable productivity for software-based testing just doesn’t fit well with being an extension to UVM. Which is not to say that PSS will replace UVM. All that effort you put into learning UVM and constrained-random testing will continue to be valuable for a long time, for IP verification and certain classes of (primarily simulation-based) system verification. PSS is different because it standardizes the next level up in the verification stack, to serve architects, software and hardware experts and even bring-up experts.

That sounds great, but some observers wonder if it is over-ambitious, a nice vision which will never translate to usable products. They’re generally surprised to hear that solutions of this type are already in production and have been in active use for a few years; Perspec System Verifier is a great example. These tools predate the standard, so the input isn’t exactly PSS, but the concepts are very similar. And as PSS moves towards ratification, vendors are busy syncing up, just as they have in the past for SVA and UVM. Tom told me that officially the standard should be released in the second half of 2017.

How does PSS work? For reasons that aren’t important here, the standard allows for two specification languages: a DSL and a constrained form of C++. I’m guessing many of you will lean to the DSL, so I’ll base my 10-cent explanation on that language (and I’ll call it PSS to avoid confusion). The first important thing to understand is that PSS is a declarative language, unlike most languages you have seen, which are procedural. C and C++ are procedural, as are SV, Java and Python. Conversely, yacc, Make and HTML are declarative. Procedural languages are strong at specifying exactly “how” to do something. Declarative languages expect a definition of “what” you want to do; they’ll figure out how to make that happen by tracing through dependencies and constraints, eventually getting down to leaf-level nodes where they execute little localized scripts (“how”) to make stuff happen. If you’ve ever built a Makefile, this should be familiar.

PSS is declarative and starts with actions which describe behavior. At the simplest level these can be things like receiving data into a UART or DMA-ing from the UART into memory. You can build up compound actions from a graph of simple actions and these can describe multiple scenarios; maybe some steps can be (optionally) performed in parallel, some must be performed sequentially. Actions can depend on resources and there can be a finite pool of resources (determining some constraints).

Then you build up higher-level actions around lower-level actions, all the way up to running multiple scenarios of receiving a call, Bluetooth interaction, navigating, power-saving mode-switching and whatever else you have in the kitchen sink. You don’t have to figure out scenarios through the hierarchy of actions; just as in constrained random, a tool-specific solver will figure out legal scenarios. Hopefully you begin to get a glimmer of the immense value in specifying behavior declaratively in a hierarchy of modules. You specify the behavior for a block, and that can be reused and embedded in successively higher-level models with no need for rewrites to lower-level models.
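To give a flavor of what this looks like, here is a small hand-written sketch in PSS-style DSL syntax (my own illustration based on the public PSS drafts, not taken from Perspec; the component and action names are invented):

    component uart_c {
        // Leaf-level actions describing simple behaviors
        action receive_a { }        // receive data into the UART
        action dma_to_mem_a { }     // DMA the received data into memory

        // Compound action: a graph of simpler actions. The solver,
        // not the user, works out legal orderings that satisfy
        // these activity constraints.
        action rx_then_store_a {
            activity {
                do receive_a;       // receive must complete first
                do dma_to_mem_a;    // then the DMA transfer runs
            }
        }
    }

Note there is no procedural control flow here; the activity block declares what must happen and in what partial order, and the tool derives concrete scenarios from it.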


Of course, I skipped over a rather important point in the explanation so far; at some point this must drop down to real actions (like the little scripts on Makefile leaf-nodes). And it must be able to target different verification platforms – where does all that happen? I admit this had me puzzled at first, but Tom clarified it for me. I’m going to use Perspec to explain the method, though the basics are standard in PSS. An action can contain an exec body. This could be a piece of SV, or UVM (maybe instantiating a VIP), or C; this is what ultimately will be generated as part of the test to be run. C might run on an embedded CPU, or in a virtual model connected to the DUT, or may drive a post-silicon host bus adapter. I’m guessing you might have multiple possible exec blocks depending on the target, but I confess I didn’t get deeper on this.
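To sketch that last point (again my own illustration, using the target-template form of exec blocks from the PSS drafts; the C driver call is hypothetical):

    action receive_a {
        // Target-template exec block: this text is emitted into the
        // generated test when targeting a C-based platform, e.g. code
        // running on an embedded CPU next to the DUT.
        exec body C = """
            uart_rx_start(UART0_BASE);  // hypothetical driver call
        """;
    }

The declarative layer stays platform-neutral; only these leaf-level bodies change per target.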

So in the Perspec figure above, once you have built a system-level test model with all kinds of possible (hierarchically-composed) paths, the Perspec engine can “solve” for multiple scenario instances (each spit out as a separate test), with no further effort on your part. And tests can be targeted to any of the possible verification engines. Welcome to a method that can generate system-level scenarios faster than you could hope to, with better coverage than you could hope to achieve, runnable on whichever engine is best suited to your current objective. (Maybe you want to take this test from emulation back into simulation for closer debug? No problem, just use the equivalent simulation test.)


We’re nearly there. One last question – where do all those leaf-level actions and exec blocks come from? Are you going to have to build hundreds of new models to use Perspec? Tom thinks that anyone who supplies IPs is going to be motivated to provide PSS models pretty quickly (especially if they also sell a PSS-based solution). Cadence already provides a library for the ARM architecture and an SML (system methodology library) to handle modeling for memories, processors and other components. They also provide a method to model other components starting from simple Excel tables. He anticipates that, as the leading VIP supplier, Cadence will be adding support for many of the standard interface and other standard components over time. So you may have to generate PSS models for in-house IP, but it’s not unreasonable to expect that IP and VIP vendors will quickly catch up with the rest.

This is well-proven stuff. Cadence already has public endorsements from TI, MediaTek, Samsung and Microsemi. These include customer claims for 10x improvement in test generation productivity (Tom told me the Cadence execs didn’t believe 10x at first – they had to double-check before they’d allow that number to be published.) You can get a much better understanding of Perspec and a whole bunch of info on customer experiences with the approach HERE.

More articles by Bernard…


IoT Device Designers Get Help from ARMv8-M Cores
by Mitch Heins on 03-06-2017 at 12:00 pm

Someone once said that IoT devices live in the wild. They must be able to withstand any number of attacks, whether communication-, physical- or software-based. The threats are real, and the consequences can range from simple irritants to life-threatening situations.

It’s because of these threats that IoT device designers are now blessed with the chance to show off their design skills by creating devices that can safely be deployed, used and updated while they live in the wild. I attended a webinar this week hosted by ARM entitled, ‘Security Principles for ARM TrustZone for ARMv8-M’. ARM does a really nice job of educating designers about their products and this webinar was no exception to the rule. The webinar gave a very nice overview of how ARM is helping IoT device designers meet the challenges of keeping their devices safe in the wild.

This webinar is one of many that can be found on ARM’s developer community web site and it primarily focused on how the ARMv8-M architecture helps designers guard against software-based attacks. At the center of ARM’s solution is their TrustZone technology. TrustZone has been around for quite some time but it takes on a different incarnation when used with the newer M33 and M23 cores.

The M33 is essentially the same as an M3 or M4 except for two very noticeable differences. The first is that the M33 adds a co-processor interface that can be used to manage the multiple sensors envisioned for an IoT system. The second is that there is no longer a dedicated hardware block for TrustZone. Instead, every block within the M33 has been restructured to natively handle TrustZone functionality and capabilities; the core takes on full responsibility for TrustZone protection. TrustZone-wise, the same is true for the M23, which is functionally equivalent to the M0/M0+ ultra-low-power cores, but again all of the functions have been re-engineered to handle TrustZone natively.

So what does that imply? In the new architecture, there are literally two parallel execution environments for the processor. One environment is secure, the other is non-secure. There is a new hardware block that is called the security attribution unit (SAU). The SAU is responsible for partitioning and managing all memory (instruction addresses and data) into one of these two buckets. As the processor executes, each address (of instructions or data) to be fetched has its security attribute checked. Those marked as secure are checked to make sure they are indeed in the secure address space and are being used by secure code. These instructions run in the secure execution environment. Those not so marked are checked that they are in the non-secure space and their instructions are run through the non-secure execution environment.

Each environment also has its own set of registers throughout the system, and these registers are managed by the SAU. The system boots first into the secure area, and once the system is reset it then runs code in the non-secure space for application boot-up. The system also allows for secure vs non-secure interrupts and includes additional instructions that software designers can use to check address validity in one environment vs the other. ARM has added instructions to the ACLE (ARM C language extensions) that enable software engineers to proactively check for things like buffers that try to cross secure/non-secure boundaries. This latter feature enables software designers to easily protect their code against a common attack vector known as buffer overrun.
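To make that concrete, here is a minimal C sketch (my own, not from the webinar) of how secure-side code might validate a buffer handed over from the non-secure world, using the CMSE intrinsics defined in the ACLE; the function and buffer names are illustrative:

    #include <arm_cmse.h>   /* CMSE intrinsics; compile with -mcmse */
    #include <stddef.h>
    #include <string.h>

    /* Illustrative secure-side service: copy a buffer supplied by
     * non-secure code. Before touching it, verify that the entire
     * range lies in non-secure memory, so a malicious caller cannot
     * trick secure code into reading secure data. */
    int secure_copy_from_ns(const void *ns_buf, size_t len, void *dst)
    {
        /* cmse_check_address_range() returns NULL unless the whole
         * [ns_buf, ns_buf + len) range is non-secure and readable. */
        if (cmse_check_address_range((void *)ns_buf, len,
                                     CMSE_NONSECURE | CMSE_MPU_READ) == NULL)
            return -1;  /* reject: range crosses into secure memory */
        memcpy(dst, ns_buf, len);
        return 0;
    }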

Memory and device configuration can be controlled in one of three ways. They can be hard wired at synthesis time before manufacturing, they can be programmatically changed through code in secure memory or they can be dynamically changed by the system stack using TrustZone security protocols and Root of Trust encryption using the IDAU (implementation defined attribution unit) interface. In fact, the system configuration can literally be a combination of all three of these methods.

The new architecture also supports legacy systems by overlaying two more modes (Trusted and Un-Trusted) on top of the ARMv7-M privileged and non-privileged modes. The CPU handles transitions between these modes automatically, based on the attribution mappings that have been set. The new cores can also handle cross-domain function calls, including calls from non-secure code that wants to use the services of code in the secure section. The good news for software developers is that this is handled automatically by the new architecture through the addition of a Secure Gateway (SG) instruction. The SG instruction tells the system that code from the non-secure side is asking for services from the secure code side. The hardware takes care of pushing secure registers onto the stack and zeroing out the registers before running tasks for the non-secure side. This ensures that non-secure code does not get a peek into the secure side’s memory. Once the service has completed its secure-side work, the hardware again automatically pops the secure-side register values so that work going on in the secure side before the non-secure call came in can continue.
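As a minimal sketch of how this looks to a programmer (my own example, not from the webinar): a secure function exported to the non-secure side is marked with a CMSE attribute, and the toolchain then emits the SG veneer and the register clearing described above.

    #include <arm_cmse.h>

    /* Illustrative secure service callable from non-secure code.
     * The cmse_nonsecure_entry attribute makes the compiler place an
     * SG-instruction veneer in non-secure-callable memory and clear
     * secure register state before returning to the caller. */
    int __attribute__((cmse_nonsecure_entry)) get_device_id(void)
    {
        return 0x1234;  /* placeholder: would read a secure resource */
    }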

This last scenario is a good example of what ARM is trying to achieve in general with the M33/M23 architecture: a system-wide approach to security that is highly configurable and yet makes devices easy to secure.

The main idea is to keep the existing software development paradigm in place for application developers while making it as easy as possible for IoT system designers to secure their devices in the wild.

See also:
ARM Developer Community
Webinar recording

ARM TrustZone


CEO Interview: Alan Rogers of Analog Bits
by Daniel Nenni on 03-06-2017 at 7:00 am

It has been incredible to watch the Semiconductor IP market grow from millions to billions of dollars during my career in Silicon Valley. In fact, more than half of my professional experience involves IP, so when I talk about what it takes to be successful, it is certainly worth a listen.

In my opinion the key ingredient for a successful IP company is engaging with your customers, partners, and ecosystem, which brings me to the #1 most engaging Semiconductor IP company. During my travels around the world, the IP company I run into the most – be it at conferences, customers, or the foundries – is Silicon Valley’s own Analog Bits, absolutely.

When was Analog Bits formed? What was your inspiration for creating the company?
Analog Bits has been at the forefront of mixed-signal IP for the last 20 years. In the early days at Sun and Fairchild I was a hands-on engineer; later it became more of a management function, and all we were doing was managing to cover up mistakes. I got tired of it and wanted to build a company with some of the brightest minds in the field, where I could do real engineering again, and to this date I still enjoy inventing clever transistor circuits. Early on we started as a consulting company, and from 2003 onward we transformed into an IP company centered around merchant foundries.

How have your products evolved? What has been the basis for your lasting success?
Analog Bits has always been about listening to the needs of our customers, many of whom are leaders in their respective fields. We started with PLLs, DLLs, I/Os and memory IP, and have expanded to include SERDES, PVT and POR. We are now servicing customers down to 7nm.

We’ve always heard about the importance of low power, small size, enabling customer differentiation and – of course – product quality. In addition to our list of repeat customers, we have also entered newer markets such as enterprise and automotive, where power is an important consideration.

Where is Analog Bits based?
We have grown as the semiconductor industry grew, but always in Silicon Valley. All of our team is based here, which gives us access to high-quality technical resources, places us close to many of our customers, and lets us serve international customers from one location. If customers need something special at the last minute, we are able to react quickly as a team.

How have you seen the market evolve?
A few years ago, we had companies using us in digital cameras, video games and communications satellites. We keep evolving along with the industry and are now servicing diverse markets such as enterprise, IoT and automotive – and even enterprise storage – around the world. As in any semiconductor business, change is the only constant, and we adapt quickly as a company.

Which of your products are most popular?

Our PLL products have been a standard for many, many customers. Our SERDES PHYs have been selected by many market leaders for use in their chips. Lately, our PVT and POR products have been seeing increased demand since they are so small and flexible. We provide both custom and off-the-shelf (OTS) products, with options for semi-custom as well.

Why do so many customers return to you?
We have a strong reputation for quality – in our products, with our customers and in our ecosystem partnerships. I think that quality and reliability have made us what we are today. Having said that, we enable product differentiation amongst our customers, which is all about power and size efficiency. “Works first time” helps keep customer costs and schedules in check – and we are very proud of that.

What’s next for Analog Bits?
We have some amazing new products coming out. Support for new process nodes is part of that, but so are smaller and lower-power IP solutions. Reducing the footprint of mixed-signal IP means more flexibility for our customers, and there is an economic advantage in not having to maintain an in-house team and instead depending on partners like us to develop the best.

About Analog Bits
Founded in 1995, Analog Bits, Inc. (www.analogbits.com), is the leading supplier of mixed-signal IP with a reputation for easy and reliable integration into advanced SOCs. Products include precision clocking macros such as PLLs & DLLs, programmable interconnect solutions such as multi-protocol SERDES and programmable I/O’s as well as specialized memories such as high-speed SRAMs and TCAMs. With billions of IP cores fabricated in customer silicon, from 0.35-micron to 16/14-nm processes, Analog Bits has an outstanding heritage of “first-time-working” with foundries and IDMs.

Also Read:

CTO Interview: Jeff Galloway of Silicon Creations

CEO Interview: Srinath Anantharaman of ClioSoft

CEO Interview: Amit Gupta of Solido Design


ESDA Event: Power and Policy in California
by Bernard Murphy on 03-04-2017 at 7:00 am

Apparently this event is now being postponed until later in the year. Stay tuned.

We spend a lot of our time with our heads down in the technical details, and when we look up at what we think is the big picture, it’s usually just a little bit bigger, often no more than a justification for immediate product directions. So wouldn’t it be interesting, once in a while, to look at the really big picture – to understand global energy objectives, how they drive power policy in the state of California and how that drives power regulation for electronic design?


REGISTER NOW for the event on Thursday, March 23rd, starting at 6pm

ESDA will host a panel on just this topic with an impressive line-up of speakers from the California Energy Commission and the Natural Resources Defense Council. Lip-Bu Tan (Cadence CEO) is on the panel, along with Shahid Sheikh from Intel, Vojin Zivojnovic from Aggios and Vic Kulkarni from Ansys. You’ll get to hear their views, and there will be a chance to network with speakers and other ESDA members.

If you’re getting a little burned out on the same old stories of why low power is important, here’s a rare opportunity to get a new and bigger perspective, and new material to refresh that aging pitch on the importance of low power. I plan to be there.

WHAT: The Electronic System Design Alliance will present an informational panel, “Energy Policy and Strategy for the IoT Era,” to outline the new rules for PCs set by the California Energy Commission (CEC). It will be moderated by Grant Pierce, chief executive officer (CEO) of Sonics, Inc. and chairman of the ESD Alliance board of directors.
WHEN: Thursday, March 23, beginning at 6 p.m. with networking, light snacks and drinks, concluding at 9 p.m.
WHERE: San Jose City Hall Rotunda, 200 East Santa Clara Street, San Jose, Calif.
The program will explain the CEC’s new energy efficiency rules and regulations for PCs and monitors, and give panelists a chance to provide their perspectives. A panel discussion and audience Q&A session will follow. Panelists include:
· Vojin Zivojnovic, founder and CEO of AGGIOS
· Dave Ashuckian, CEC’s deputy director of the Efficiency Division
· Pierre Delforge, director, High Tech Sector Energy Efficiency of the Natural Resources Defense Council (NRDC)
· Vic Kulkarni, ANSYS’ senior vice president and general manager of the RTL Power Business
· Shahid Sheikh, director in Government and Policy Group with Intel Corporation
· Lip-Bu Tan, Cadence’s president and CEO

Ashuckian and Delforge will explain how the rules came about and why they are necessary, how much energy they will save, when they will take effect and how they will be enforced. They will address what the rules mean for manufacturers and the supply chain and their implications for broader national and global energy efficiency standards for electronic products, particularly as it relates to the emerging IoT market.
Attendees will learn about potential new technical innovations in design and manufacturing, gain insights into energy efficiency, and hear what impact the rules will have on their companies’ and industries’ energy policies and strategies. Panelists will also attempt to determine how the new rules could affect the economy.
The panel is open free of charge to all ESD Alliance member companies. Non-members are welcome to attend for a fee of $40.

REGISTER NOW

More articles by Bernard…


MWC 2017: The 5G Emperor’s New Clothes
by Roger C. Lanctot on 03-03-2017 at 10:00 pm

A very odd phenomenon is sweeping the automotive and wireless industries and was in full flower at MWC 2017. The onset of 5G connectivity has wireless carriers excited over the high bandwidth, low latency and high availability applications inherent in this new network technology – to say nothing of network slicing for targeted applications. But the application segments promising the greatest growth lie outside the segment garnering the largest increase in connections.

Connected cars are delivering new network connections for carriers such as AT&T and Vodafone, among a few fortunate competitors, beyond the wildest expectations of even the most ardent IoT enthusiasts. Smarthomes and wearables may be hot, but the connected car is at the core of network connection expansion.

The only problem is that connected vehicles remain an elusive source of revenue. After paying the wireless and the cable bill – some of which consumers have been able to combine – there is little patience for a separate bill for the connected car! Most of the diagnostic and remote control (remote start, door unlock) applications for connected cars require minimal bandwidth.

The problem is simple to explain. There is nothing natural about connecting a car and for car makers it’s a nightmare. And, yet, consumer interest in car connectivity is high, according to Strategy Analytics’ own consumer survey and focus group data.

Twenty years ago General Motors created a compelling application in automatic crash notification (ACN) which the company was able to leverage to differentiate its cars, increase sales and drive subscription revenue. In the very earliest days of OnStar, dealers were free to charge extra for the feature – something which was quickly nipped in the bud.

Masked by GM in the deployment of OnStar was the sausage-making ugliness of wireless connections, network reliability, battery consumption, network credentials, and, fundamentally, consumer expectations – to say nothing of the potential privacy violation and cybersecurity implications. Consumers don’t understand the complexities of vehicle connectivity – they just want it to work the way their mobile phone works.

For car makers, introducing a wireless connection in the car with an emergency response responsibility carries heavy liability requirements to this day. Before any consumer has tapped into an embedded modem-based family finder app or Wi-Fi access, the car maker – in partnership with the carrier – must sell its soul as to the reliability of the on-board system in the event of a crash.

The onset of smartphones tamped down demand for ACN, but ubiquitous connectivity introduced the concept of apps in the dashboard, streaming audio, Wi-Fi, digital assistants, artificial intelligence and contextual awareness. Car companies quickly came to realize that location itself was a valuable and potentially monetizable proposition.

Now autonomous driving has usurped the attention of auto makers, shifting the focus to sensors and on-board systems capable of recognizing environmental elements in real time. Carriers are keen to capitalize on the autonomous driving craze and the billions of invested dollars, but the market leaders in self-driving technology have thus far eschewed connectivity.

Waymo, Uber, Tesla and a dozen or more startups have thus far treated wireless connections as irrelevant. In spite of the indifference of self-driving system developers, wireless carriers and infrastructure suppliers have soldiered on with tests and prototypes and proofs of concepts.

The more immediate concern of the wireless industry vis-à-vis auto makers is the emergence of V2V communications via dedicated short range communications (DSRC) technology. DSRC-based V2V promises an alternative form of vehicle connectivity capable of delivering content and safety.

Unfortunately for DSRC, commercial applications for the technology have been few and far between and so it has become almost entirely reliant on government mandates and funding – with the exception of companies such as Veniam that have focused on enterprise applications for the technology. In essence, wireless carriers have been forced into making the case for 5G based on it serving as an alternative to DSRC.

To focus 5G on safety applications is to remove the revenue opportunity. The reality is that 5G will enable new customer-service value propositions integrating virtual and augmented reality into the process of building and servicing vehicles and enhancing driving. The first hint of this brave new driving world was exhibited by Audi demonstrations at MWC showing “see through” technology, based on streaming video from one vehicle to a following vehicle, with the same application shown in the Qualcomm and Orange booths.

Of course, it makes no sense for a driver to observe the video – in real time – projected to his or her following vehicle. The message behind the demonstration was that 5G technology can deliver this level of low latency performance via the embedded connection in the car.

Will 5G enable collision avoidance? It’s possible. 5G connectivity will enable a 5G-equipped car to avoid a collision with another 5G-equipped car via inter-vehicle communications – but that event (momentous indeed!) is years away.

For now, vehicle-to-vehicle connections are solely built around alerts and require driver intervention to prevent a crash. So, sadly, the 5G hype for automotive applications at MWC was somewhat undermined by both solely sensor-based self-driving technology and the current conceptual limitations of V2V.

Where both LTE and 5G can have an impact is in leveraging multiple layers of vehicle connections for a more comprehensive real-time view of vehicle movements in space, particularly in urban settings. Collision-avoidance and self-driving systems taking advantage of these connections and data processing – including neural networks and machine learning – are currently deployed only in test mules and prototypes too expensive for mass deployment.

The sad truth is that vehicle connections are critical to carrier growth, but revenue growth from automotive comparable to mobile video or online gaming will be elusive in the short term. General Motors’ Global Connected Consumer division may be on the right track with the unlimited wireless data plans it enabled this week. The key to success remains making it as simple and easy as possible for consumers to add their car to their existing wireless plan. That will be a good first step forward – GM is in the lead here as well.

Strategy Analytics’ perspective on MWC 2017: “MWC 2017 Alters Connections Between Carriers and Car Companies” – tinyurl.com/zyejk3c


SPIE 2017: EUV Readiness for High Volume Manufacturing
by Scotten Jones on 03-03-2017 at 12:00 pm

The SPIE Advanced Lithography Conference is the world’s leading conference addressing photolithography. This year, on the opening day of the conference, Samsung and Intel presented papers summarizing the readiness of EUV for high volume manufacturing (HVM). In this article, I will begin by summarizing the EUV plans of the four leading logic producers, then touch on some general observations from several years of the conference, discuss the Samsung and Intel papers along with some additional observations, and conclude with the prospects for EUV in HVM.

Continue reading “SPIE 2017: EUV Readiness for High Volume Manufacturing”


TSMC Design Platforms Driving Next-Gen Applications
by Daniel Nenni on 03-03-2017 at 7:00 am

Coming up is the 23rd annual TSMC Technology Symposium, where you can get first-hand updates on advanced and specialty technologies, advanced backend capabilities and future development plans, and network with hundreds of TSMC’s customers and partners. This year the Silicon Valley event kicks off at the Santa Clara Convention Center. For more information, the Symposium landing page is HERE, but first let’s talk about design platforms.

The semiconductor design ecosystem, semiconductor companies, and TSMC are uniting around new methods to overcome chip design challenges by integrating the right tools and technologies into customized, powerful design platforms.

It is becoming apparent that the next growth driver for the IC industry is “ubiquitous computing” where data is generated, collected, filtered, processed and analyzed not just in the cloud or network, but also locally in smart devices all around us. To help its customers seize these opportunities, TSMC and its Open Innovation Platform® partners have developed four application-specific platforms for the next generation of high-growth applications: Mobile, High-Performance Computing (HPC), Automotive, and Internet of Things.

Smartphones occupied much of the last decade’s engineering resources and continue to grow at a healthy clip – Gartner reports 1.5 billion units sold in 2016 – pushing advanced semiconductor technology and design to new heights. However, it is now clear that mobile was just the beginning of a new silicon revolution as industry focus rapidly shifts to the optimization of advanced technology for automotive, HPC and IoT.

In mobile, growth in silicon content per device is driven by features such as dual camera, fingerprint sensors, AR/VR and migration to 4G, 4G+ and 5G. For HPC, artificial intelligence and deep learning will have significant impacts on many industries including healthcare, media and consumer electronics. On the automotive front, ADAS, night-vision, and smart energy for hybrid and electric vehicles promise to make driving more convenient, safe and green. Finally, IoT opens up a multitude of opportunities for ICs that will transform the way we live and improve how societies can be organized and managed through improved efficiency and pervasive communication.

Dr. Cliff Hou, TSMC Vice President of Research & Development, Design and Technology Platform, has pioneered the evolution of design ecosystems to design platforms and the application-specific design enablement that addresses distinct product requirements of each of these four segments. Dr. Hou asserts that application-specific design platforms deliver greatly enhanced solutions that simplify highly complex design activity, reducing the time and effort needed to bring products to market for these high-growth opportunities.

Each TSMC process- and packaging-optimized design platform includes reference subsystem designs to facilitate innovation; processor cores (CPU, GPU); standard interfaces and analog/mixed-signal IP; foundation IP that includes standard cells, SRAM and I/O; design flows, design guidelines and EDA tools; and PDKs and tech files. The goals and readiness of each platform are summarized below:

If you were lucky enough to get a golden ticket to this event, it would be a pleasure to meet you. SemiWiki bloggers Tom Dillinger, Tom Simon, and I will be there blogging live, and I will be giving away signed copies of our book on the history of ARM, “Mobile Unleashed,” in the Solido booth during the lunch break. If you would like to do a meet and greet and get a free book, stop by and say hello.

About TSMC
TSMC created the semiconductor Dedicated IC Foundry business model when it was founded in 1987. In 2015, TSMC served about 470 customers and manufactured more than 8,900 products for various applications covering a variety of computer, communications and consumer electronics market segments. Total capacity of the manufacturing facilities managed by TSMC, including subsidiaries and joint ventures, reached above 9 million 12-inch equivalent wafers in 2015. TSMC operates three advanced 12-inch wafer GIGAFAB™ facilities (fabs 12, 14 and 15), four eight-inch wafer fabs (fabs 3, 5, 6 and 8), one six-inch wafer fab (fab 2) and two backend fabs (advanced backend fabs 1 and 2). TSMC also manages two eight-inch fabs at wholly owned subsidiaries: WaferTech in the United States and TSMC China Company Limited. In addition, TSMC obtains 8-inch wafer capacity from other companies in which the Company has an equity interest.

TSMC’s 2015 total sales revenue reached a new high of US$26.61 billion. TSMC is headquartered in the Hsinchu Science Park, Taiwan, and has account management and engineering service offices in China, Europe, India, Japan, North America and South Korea.


Wireless 5G BTS Need Super DSP core… CEVA XC-12
by Eric Esteve on 03-02-2017 at 10:00 am

Once upon a time, a wireless base station (BTS) was expected to support one, and only one, wireless protocol, like GSM (2G), first deployed in Finland in 1991, or CDMAOne (also 2G), developed by Qualcomm and released through the TIA in 1995. To be precise: GSM modem speed reached 14.4 Kbps (with only 9.6 Kbps usable by the end-user), as did the competing CDMAOne technology…

Such a modem was already supported by a DSP core (Teak from CEVA or TI’s C54x), but now you have to figure out what kind of DSP is needed today to support the following requirements:

· Facilitate aggregation of various technologies such as LTE, LTE-A PRO, WiFi 11ac/ax, WiGig…
· Reduce latency (to best support V2X, VR or mobile gaming)
· Support huge numbers of users and IoT devices at once (you can translate this into supporting massive MIMO technology)
· Enable mission-critical usage such as Cellular V2X, eHealth and industrial IoT (reliability and encryption)

Supporting modem functions is still part of this DSP’s charter, but the speed of modems now ranges from 20 MHz to 800 MHz, depending on the protocol!

In fact, the wireless standard described above (5G) is so advanced compared to the previous generation that it needs a completely new processing approach to ensure its success…

In this picture, the key word is aggregation, as the CEVA-XC12 targets the base station segment rather than the smartphone itself, where the CEVA-XC4500 is powerful enough to support 5G modem gigabit requirements. In the BTS, multiple multi-gigabit technologies co-exist, and some of the protocols to be supported by the DSP are still to be finalized. In this case, a DSP-based approach is certainly the one providing the highest flexibility for implementing the modem baseband. As well, it’s likely that the digital RF will be implemented in FPGA technology.

On top of increased flexibility, another strong benefit of using the CEVA-XC12 is its reuse capability. Building a BTS is costly, as OEMs have to target high performance (IP and technology node) for the highest possible number of end-users. Selecting the XC-12 allows an OEM to add new hardware while keeping the previously installed system in place.

Let’s take a look at the new computing challenges associated with 5G adoption:

· Computational complexity is very high: higher throughput, reduced latency and massive MIMO usage
· Need for minimum mean square error (MMSE) equalization of huge matrices, requiring very efficient matrix operations (the standard formulation is sketched just after this list)
· High precision to handle large matrix inversion is essential for equalization and beamforming calculations
· Data symbol demodulation as high as 256-QAM if not 1024-QAM!
· The larger bandwidth, modulation and MIMO dimensions automatically lead to much higher bitrates
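To unpack the MMSE bullet (this is the standard textbook formulation, not something taken from the CEVA material): for a received vector y = Hx + n, with channel matrix H and noise variance sigma^2, the linear MMSE equalizer is

    \hat{x} = \left( H^{H} H + \sigma^{2} I \right)^{-1} H^{H} \, y

The (H^H H + sigma^2 I) term must be inverted, and its size grows with the MIMO dimension, so the inversion cost grows roughly as the cube of the matrix size; that is why wide MAC arrays and high-precision arithmetic are called out above.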

The CEVA-XC12 architecture has been specifically developed to address all the challenges listed above. Taking the CEVA-XC4500 as a reference, here are some improvements on the symbol plane: 128 MACs are used to boost 8×8, 16×16 or 32×32 MMSE-IRC by a 4X factor, 20-bit pseudo-floating-point boosts massive-MIMO precision by more than 20 dB, and automatic FFT scaling improves precision by 15 dB, enabling 4X performance…

At this stage, I suggest you attend the webinar to be held on March 29th, 2017, “How CEVA-XC12 solves the daunting computing and latency challenges of 5G NR” (you will find the link at the bottom of this blog).

Each of the four vector engines can run 32 MACs per cycle, for a total of 128 MACs per cycle, and supports 2048-bit load and store, again per cycle. The high-precision fixed- and floating-point arithmetic has been defined for up to 256×256 matrix processing (remember the MMSE equalization requirement). The vector engines also support a high-precision non-linear ISA and a 256- or even 1024-QAM demodulation ISA.

The scalar unit, completely new and based on the new CEVA-X framework (CEVA-X1, CEVA-X2), is designed for multi-RAT systems and management of massive numbers of users to best support 5G BTS. Each of the four Scalar Processing Units (SPUs) can optionally integrate a Floating Point Unit (FPU).

Compared with the CEVA-XC4500 scalar unit, the new scalar unit offers a 40% EEMBC improvement. (The EEMBC CoreMark benchmark reflects real-world applications and tests a processor’s basic pipeline structure, as well as basic read/write operations, integer operations, and control operations; it tends to replace the old Dhrystone MIPS benchmark.)

As the CEVA-XC12 has four SPUs (and four VPUs), it provides a 4X improvement for a complete BTS NR PHY. The SPU provides full RTOS support, offering ultra-fast context switching.

The above picture illustrates how the CEVA-XC12 is implemented. This cluster architecture allows the CEVA-X2 to control the complete system. The four processor cores are connected in pairs (Core 0 with Core 1, Core 2 with Core 3) using point-to-point Fast Interconnect buses (FIC master or slave). Each pair processes the same task in parallel, and because of the point-to-point (and fast) interconnect, latency is greatly improved, a precondition for efficiently addressing virtual reality (VR) or vehicle-to-everything (V2X) applications.

To conclude, let’s add that the CEVA-XC12 has been implemented in 10nm technology, where it can reach up to 1.8 GHz. For the multi-RAT control plane, the core delivers 4.4 CoreMark/MHz while consuming 50% less power than the CEVA-XC4500.

You will certainly learn from and benefit from the webinar, to be held on March 29th, 2017:
How CEVA-XC12 solves the daunting computing and latency challenges of 5G NR
Register at:

http://go.ceva-dsp.com/Webinar-XC12-29-3-17_1-LP.html

Product Brief:
http://www.ceva-dsp.com/assets/docs/downloads/CEVA-XC12-Product-Brief.pdf

Landing page url:
http://launch.ceva-dsp.com/CEVA-XC12/

Product page url:
http://www.ceva-dsp.com/CEVA-XC12


By Eric Esteve from IPnest