
Getting Ready for Bluetooth-5 Verification
by Bernard Murphy on 01-13-2017 at 7:00 am

Bluetooth has been very successful for many years, but arguably trapped in a niche, at least for us consumers, as a short-range wireless alternative to a wired connection – to connect your phone to a car or speakers, for example. (In fairness, I should add that the 4.2 version improved range and Bluetooth has already become quite successful in the IoT.) But the Bluetooth Special Interest Group has bigger ambitions, now apparent in the recently released Bluetooth 5 (BT5) standard.

Part of the improvement is of course in extending metrics – 2X the speed, 4X the range, 8X message capacity, all with (I am told) an even better low-energy profile than BLE. Great though those improvements are, they don’t really sum up the capabilities and importance of BT5.

Start with range. Practical range is expected to be up to 120 meters, easily matching WiFi. But it gets better. BT5 supports mesh networking: instead of the old-style point-to-point pairing familiar from earlier versions, registration is now “friend-to-friend”, so you can register with a network (or be pulled in as a friend) and then have access to all nodes in the mesh. All of which makes BT5 a serious contender for home automation, connecting your phone to the fridge, lights, thermostat and TV, and for office automation, connecting to printers, projectors and other assets.
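
To make that reachability point concrete, here is a minimal Python sketch (the topology and node names are invented for illustration, not taken from the BT5 spec): joining the mesh through a single friend node gives you a path to every node in the connected graph.

```python
from collections import deque

# Invented home-automation topology: each node lists its mesh neighbors.
mesh = {
    "phone":      ["thermostat"],                   # joined via one "friend"
    "thermostat": ["phone", "fridge", "lights"],
    "fridge":     ["thermostat", "tv"],
    "lights":     ["thermostat"],
    "tv":         ["fridge"],
}

def reachable(start, graph):
    """Breadth-first search: every node the mesh can relay our messages to."""
    seen, queue = {start}, deque([start])
    while queue:
        for neighbor in graph[queue.popleft()]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen

print(sorted(reachable("phone", mesh)))
# ['fridge', 'lights', 'phone', 'thermostat', 'tv'] -- one join, whole mesh
```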

Incidentally, this explains why it is increasingly common to find radios supporting both ZigBee/Thread and BT5 (the ARM Cordio radio offers this option). As a chip-maker, you don’t want to bet on ZigBee or Thread for home automation and then find popular demand just as interested in BT5.

BT5 also offers beaconing to support local position-awareness (such as in a mall) and local push-advertising (you’re looking for shoes, we have a great deal in our store, which is just on your right), where high-capacity support allows for pushing richer content. As far as I can tell, BT5 is the only contender today in this space, so is pretty much set to own this market.

So there’s huge potential for products in consumer, retail, emergency medical and logistics applications to name just a few, but of course all that functionality means there’s a lot more you have to verify to prove your design is compliant with the standard while also supporting the earlier 4.2 standard. You need a full spec compliance VIP and verification methodology to get there. Just to give you a hint, the spec approaches 3000 pages, so building your own test plan, coverage plan and test sequences is not a task for the faint of heart.

Cadence has a well-established history in providing VIPs and especially in keeping up with the latest protocols. Scott Jacobson (product marketing director for verification IP at Cadence) told me that given the Cadence focus on VIP they can afford to put a lot of effort into preparing full-spec VIPs ahead of the market. They provide a capability they call TripleCheck which includes a test suite, a coverage plan and a verification plan. These are all customizable.

TripleCheck provides an extensive library of test sequences to stimulate the design under test. The test library contains directed tests (providing quick checks for common protocol compliance issues) as well as constrained-random test sequences for exhaustive testing to detect corner-case bugs hidden in the design. Tests support error injection in each layer of the protocol stack to check operation of the design when faced with non-compliant stimulus. This combination of directed and constrained-random tests results in high functional coverage, right out of the box. I should also mention that you can model multiple radios, a capability that is apparently becoming increasingly common.
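
The actual VIP is written in SystemVerilog and ‘e’, but the constrained-random-plus-error-injection idea is easy to sketch in Python (field names and ranges below are invented for illustration; this is not the real Bluetooth packet format):

```python
import random

def random_packet(inject_error=False):
    """One constrained-random stimulus item. Hypothetical fields, not the
    actual Bluetooth 5 link-layer format."""
    packet = {
        "access_address": random.getrandbits(32),
        "pdu_length":     random.randint(0, 255),  # constrained to a legal range
        "channel":        random.randint(0, 39),   # BLE defines 40 RF channels
        "crc_ok":         True,
    }
    if inject_error:
        # Deliberately non-compliant stimulus to exercise the DUT's error paths.
        if random.random() < 0.5:
            packet["pdu_length"] = random.randint(256, 1023)  # illegal length
        else:
            packet["crc_ok"] = False                          # corrupted CRC
    return packet

# A test library mixes mostly-compliant traffic with occasional injected errors.
stimulus = [random_packet(inject_error=random.random() < 0.1)
            for _ in range(1000)]
```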

Coverage models are provided for both SystemVerilog and ‘e’. The coverage models are open and documented, allowing you to add application-specific extensions.

The solution also comes with a verification plan mirroring the protocol specification. All requirements in the specification are listed in the plan and organized according to (specification) chapter and paragraph hierarchy. The plan is linked to the coverage model and is provided in XML to ease portability between simulation environments. If you’re using Cadence vManager, the plan simply integrates into that environment.
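
The real XML schema is tool-specific, but the structure described above – requirements organized by spec chapter and paragraph, each linked to a coverage point – looks roughly like this toy Python sketch (invented schema and requirement names):

```python
import xml.etree.ElementTree as ET

# Invented schema and names -- actual vPlan XML is tool-specific.
plan_xml = """
<vplan spec="Bluetooth 5.0">
  <chapter id="6" title="Link Layer">
    <paragraph id="6.2" title="Air Interface Packets">
      <requirement id="LL-017" coverage="cov.ll.pdu_length"/>
      <requirement id="LL-018" coverage="cov.ll.channel_map"/>
    </paragraph>
  </chapter>
</vplan>
"""

# Walk the chapter/paragraph hierarchy and show each requirement's
# link into the coverage model.
root = ET.fromstring(plan_xml)
for req in root.iter("requirement"):
    print(req.get("id"), "->", req.get("coverage"))
```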

You can learn more about the Cadence Bluetooth 5 VIP solution HERE.

More articles by Bernard…


Making the Move from 28nm to FinFET!
by Daniel Nenni on 01-12-2017 at 12:00 pm

If you click FinFET in the SemiWiki.com Latest News navigation bar at the top of this page you will get a list of 86 blogs that have been viewed more than 600,000 times. If you go to the last blogs on the list, meaning the first blogs to be published, you will see a three-part series, “Introduction to FinFET Technology”, written by Tom Dillinger (ChipGuy), starting in March of 2012. That series has been viewed more than 60,000 times and is still getting traffic. Rumor has it Tom is writing a book on FinFETs to be published later this year, so the series continues (in print).

Even though we have had FinFETs in production for quite some time now, a significant amount of design work is still done at 28nm and above. Now that we have the cost-effective TSMC 16FFC process and the even more cost-effective (soon to be announced) TSMC 12nm, it’s time to get more competitive and say good-bye to planar devices, absolutely.

And ARM is going to help us do just that with their upcoming webinar:

Making the move from 28nm to 16nm FinFET: easy as POP!

Live Webinar: 9:00 am – 10:00 am PST and 5:00 pm – 6:00 pm PST
January 17, 2017

REGISTER HERE

The TSMC 16FFC process is a lower-cost FinFET option that targets a wide range of applications. Consequently, many ARM-based partners are interested in moving from a traditional planar CMOS manufacturing process to a FinFET process. However, designers are unsure of the challenges they may encounter when moving to FinFET.

To facilitate meeting these new process challenges, ARM’s physical design group developed implementation solutions in both TSMC 28HPC+ and TSMC 16FFC, to both optimize and accelerate the implementation of ARM-based SoC designs. Using the latest ARM Cortex®-A73 processor as a case study, this webinar will summarize deep technical findings collected from a variety of implementation trials. We will share and discuss process differences, power grid creation challenges, floor planning differences (due to fin pitch requirements), key enhancements in clock tree synthesis, and revised signoff criteria.

If you are thinking of making the move to a FinFET technology process, this is one webinar that you do not want to miss!

And if you are designing an SoC, ARM also has a webinar for you:

Three Tips to Maximize your SoC performance

Live Webinar: 9:00 am – 10:00 am PST and 5:00 pm – 6:00 pm PST
January 24, 2017

REGISTER HERE

CPU performance is highly dependent on choices such as processor speed, cache size, interconnect, memory speed, data ordering, data width and optimal integration of the IP blocks. In addition to focusing on the CPU, ARM also performs extensive system-performance analysis to ensure that designers choose the optimal configuration options.

Join this free webinar to understand more about the methodologies and analysis techniques used at ARM, plus how these link to CPU performance. This webinar will introduce some of the SoC design work carried out by ARM, with data for SoCs targeting mobile and server/networking applications.

If you can’t make it to the live versions, still register and they will send you a link to the replay. I can also have SemiWiki bloggers attend them so they can share their opinions, observations, and experiences.


Cyber Risks Compound with Technology Adoption
by Matthew Rosenquist on 01-12-2017 at 7:00 am

Just how reliant are we on computing infrastructure? Sometimes it takes just a little outage to get a taste of the interwoven dependency we don’t readily see.

It can be small. International travelers landing at U.S. airports on January 3rd, one of the busiest travel days of the year, found themselves stuck in long lines due to a temporary outage of the customs processing system. Mobs of disgruntled holiday travelers waited for the issue to clear up. Airports across the country reported delays from half an hour to over two hours.

Lines of Travelers
Source: pic.twitter.com/VGLUOUiaoP (with approval)

The technical issue, not attributed to a cyber attack, affected more than 30 flights into Miami International. Hartsfield-Jackson in Atlanta, the busiest airport in the world, was impacted for 90 minutes.

Even the most mundane things, like crossing a border, can be impacted when technology goes awry. Modern systems are built for efficiency and therefore have deeper dependencies on upstream components. Like dominoes, when one thing breaks, the ecosystem is not built to absorb the loss and instead comes to a grinding halt. The resulting backlog, like ripples in a pond, has far-reaching consequences.

Social without Internet
In October 2016, simple devices connected to the Internet were recruited by attackers to form a botnet and collectively send network traffic to Dyn, a Domain Name System (DNS) service provider, resulting in significant blackouts and slowness at major Internet sites like Twitter, Spotify, Reddit, the New York Times, Wired, GitHub, Etsy, and many others. It was a stinging Distributed Denial of Service (DDoS) attack the likes of which we have not seen in years.


Source: Downdetector.com

DDoS attacks are not new. They have been around for many years, and technology infrastructure and security services have evolved to protect against them, making them largely ineffective. Until recently, that is, when instead of using big powerful systems to cause damage, attackers shifted tactics and approached it like a colony of ants. Home routers, DVD players, consumer internet cameras, and a host of other small IoT devices were harvested like crops and their collective power was pointed at targets. The impact was unprecedented. A number of such attacks, starting in the back half of 2016, continue to pose a risk to online properties. Dyn was just one simple target. What happens if such attacks are made against critical systems?

Dominion Over Electricity
Ukraine has already suffered two separate power outages attributed to hackers, the latest in mid-December, which affected regions around its capital. Customers were without power for over an hour as personnel had to manually switch equipment to restore electricity. The attack was largely seen as sending a message rather than an attempt to cripple a nation.

President Obama, in his final press conference of 2016, stated the U.S. is more vulnerable than other potential adversaries: “Our economy is more digitized, it’s more vulnerable, partly because we’re a wealthier nation, and we’re more wired than other nations.”

Former FBI agent Austin Berglas stated: “A three-pronged attack against our power grid, transportation, and financial systems would be devastating and potentially viewed as a terrorist attack against the United States.”

Aggregate at Risk
The United States operates at an extremely high pace with huge efficiencies. We are a modern nation and have crafted our world to move at a dizzying pace where unimaginable systems work together like cogs to turn the engine of our everyday lives, foundations that are now becoming ever more dependent on computers, data, and the Internet. Our transportation, power, financial systems, healthcare, emergency services, fuel infrastructure, communication networks, food distribution networks, and government services are all reliant in some way on digital technology that can be targeted by attackers.

Much like the butterfly effect, situations in one small system may cascade to cause catastrophic impacts elsewhere. Who could have imagined that home cameras and DVRs left with default passwords by apathetic consumers would be the engine used to take down huge chunks of the Internet? Yet they were. Sadly, those were not even nation-state-level threats; they were likely just curious hackers looking to experiment with what they ‘could do’. The big players have much more powerful tools in the toolbox, designed for greater impact over a longer period of time.

As we go into 2017, entranced with the glistening of new technology, we must also understand there is a risk that accompanies it, which aggregates and compounds over time. Cybersecurity must play a part in the foundations of every step forward we make.

Interested in more? Follow me on Twitter (@Matt_Rosenquist), Steemit, and LinkedIn to hear insights and what is going on in cybersecurity.


Analog, Low-power Optimization at SMIC
by Daniel Payne on 01-11-2017 at 12:00 pm

Talking with actual IC designers is always fascinating to me, because these engineers are the unsung heroes that enable our modern day world of consumer and industrial electronics. Too often we only hear from the CEO or other C-level executives in the press about their own companies, products, services and vision. I recently had the pleasure to interact with Josh Yang, Director of the SMIC IP R&D Center about one specific IP block that his group was responsible for designing.

Q: Which analog IP block did your group design and optimize?

We recently finished silicon measurements of an ultra-low-power reference voltage design for IoT applications, optimized using MunEDA’s circuit optimization software. The bandgap reference circuit that we used is a standard topology.

Related blog – Three Steps for Custom IC Design Migration and Optimization

Q: What kind of process node are you designing with and what tools help you to optimize?

We developed the schematic in our 55nm PDK and used MunEDA’s variation-aware optimization tools to reduce power consumption, improve output voltage stability, and center the design in the process window for high yield. The bandgap structure we used is a conventional general-purpose design, chosen for high PSRR and quick transient response. We tuned it manually first, using only SPICE simulation and variation analysis tools, to achieve a low current consumption of 0.5 uA. For the final optimization, we had MunEDA WiCkeD’s optimization tools reduce the power consumption by another 40%, down to 0.3 uA.

Q: What are some of the engineering challenges that your analog IC designers faced with this IP block?

In low-power analog designs, more MOS devices operate closer to their weak-inversion region and may become more sensitive to process variation and mismatch. Variation effects must be taken into account when optimizing analog circuits for low power, in order to find a design solution that meets all goals: low power consumption, good performance, and high parametric yield. We found manual design with SPICE simulation and variation analysis tools alone sufficient to find average design solutions, but particularly in low-power analog design, the conventional design style leaves a lot on the table. With MunEDA’s tools for multi-objective, constrained parametric yield optimization we found significantly better solutions when minimizing power consumption under constraints on performance and yield.
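
MunEDA’s algorithms are proprietary, but the basic idea, minimizing power subject to a parametric-yield constraint estimated under variation, can be sketched in a few lines of Python (a toy model with invented numbers, not a real bandgap):

```python
import random

def simulate(i_bias_ua):
    """Toy stand-in for one SPICE Monte Carlo sample: the reference voltage
    (mV) spreads more at lower bias current, mimicking weak-inversion
    sensitivity. Invented model, for illustration only."""
    return random.gauss(1200.0, 2.0 / i_bias_ua)

def parametric_yield(i_bias_ua, n=5000, tol_mv=20.0):
    """Fraction of Monte Carlo samples meeting the output-voltage spec."""
    ok = sum(abs(simulate(i_bias_ua) - 1200.0) <= tol_mv for _ in range(n))
    return ok / n

# Minimize power (bias current) under a 99% yield constraint: sweep upward
# from an aggressive value and stop at the first feasible point.
for i_bias in [x / 100 for x in range(20, 60)]:       # 0.20 .. 0.59 uA
    y = parametric_yield(i_bias)
    if y >= 0.99:
        print(f"lowest feasible bias: {i_bias:.2f} uA (yield {y:.1%})")
        break
```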

Related blog – SRAM Optimization for 14nm and 28nm FDSOI

Q: What kind of simulation results did you achieve on this bandgap reference circuit with an optimized design?

The simulation results after optimization were very impressive.

Q: What kind of design centering results did you get?

The yield optimizer centered the design very well in the process window with good safety margins for all specs, and it reduced the variation in temperature compensation and in the average output voltage.


Q: The theoretical improvements from optimization look impressive, so what about the measured silicon results for this bandgap IP block?

We were eager to see the measured silicon results, because we knew that an optimizer can only be as good as the circuit models. We took care to characterize our SPICE models well for analog designers, covering small-signal parameters as well as the weak-inversion and moderate-inversion regions that are important for low-power analog design.

I can say that silicon results exceeded our expectations and matched simulation results very well. All test wafers passed the design spec with 100% yield. Median measured current consumption of the design optimized by MunEDA WiCkeD was 0.27 uA, just as predicted by simulation. This excellent agreement between the simulation and silicon data also confirms the high accuracy of the silicon models in our 55nm PDK, which is key for our customers to achieve similar results in their critical analog design applications.

Related blog – Tuning Analog IP for High Yield at SMIC

Q: Do you recommend using an automated approach for optimizing your analog IP blocks versus a manual tweaking approach to meet design requirements?

We are convinced now that this is the right way to implement advanced low power analog designs in advanced node technology: create good analog SPICE models, and use a good analog circuit optimizer that can handle variation, performance, and current consumption.

Summary
When I started out designing DRAM circuits at the transistor level back in 1978, we used manual device tweaking, simulation and iteration to optimize, and it was a very laborious and error-prone process. The project schedule dictated how much time we could spend on manual optimization, and we never felt like all of the circuit parameters had really been optimized. Today, transistor-level circuit designers have a much different world of choices available, because they can use a much more automated approach to optimizing their analog IP blocks, as SMIC just demonstrated with their 55nm bandgap reference circuit. The SMIC experience clearly showed that an automated optimization approach with MunEDA tools produced better results than manual device-size tweaking and iteration.


Tool Trends Highlight an Industry Trend for AMS designs
by Bernard Murphy on 01-11-2017 at 7:00 am

Archaeologists often use tools found in digs as a major indicator of trends in civilizations. The same could be said for trends in design, though we don’t have to reconstruct these design trends – we tend to see them ahead of us.

The trend in this case is the growing importance of sensors in designs, and there’s no better example of that trend than the evolution of smart cars. Sensor technology has become an integral part of the nervous system of many automobiles: almost everything a smart car does depends on sensing its behavior and surroundings, and sensors have found application in demanding areas such as dashboards, braking, vehicle response, safety and comfort. One common thread in all those systems is the use of analog/mixed-signal (AMS) designs, often with a mechanical or micro-mechanical front-end.

The usage of AMS is what is driving this important tool trend. AMS design typically has few overlaps with the tools used in digital design, but one area where needs are starting to overlap is design data management (DDM). Historically, small AMS design teams building more or less from scratch would see little value in DDM. But now even small, co-located design teams can see the merits of using design management for AMS designs. For medium to large design companies, with AMS designs becoming more complex and distributed and design flows growing more involved, ignoring DDM solutions is no longer an option. With analog design modules being integrated with digital and RF components, the need for an underlying data management solution is inevitable. In the automotive space, quality and reliability are exceedingly important criteria. With sensors expected to last the life span of the vehicle, it becomes important for companies to know which revision of a design was used for a sensor, along with the fixes made.

One can always argue that data management for the design of sensors and SoCs could be done using the traditional tools for digital SoCs. But the needs of AMS designs are quite different from digital designs. Digital design data prior to implementation is textual, so digital teams can rely on the same kinds of DDM tools used in software development. AMS design, however, is done using schematics and layout stored in binary formats. This has several consequences that do not fit well with conventional software-based DDM:

  • Since AMS designs are saved as binaries, branching and merging versions are very complex operations. Comparing two schematic or layout views requires specialized tools; software-based DDM of the kind used for digital design cannot do it. To manage parallel development of designs, it becomes necessary to incorporate strict locking mechanisms: checkout must lock an object against further checkouts, even by the designer who did the current checkout, until it has been checked back in (a minimal sketch of this locking scheme follows this list).
  • Hierarchy and view dependencies are trickier. A cell view comprises multiple files, which need to be checked in and out as an atomic unit, and hierarchical dependencies on other design objects (in other files) can’t be discovered easily without in-depth understanding of the implementation tool vendor’s database. This point alone explains why conventional DDM tools are probably never going to support AMS design.
  • The binary representation of design objects also requires more space on the file system than text-based data. Since every user has their own personal copy of the project state, this data must be stored and moved frequently, which creates additional costs and may become a performance bottleneck for remote sites. A DDM solution should provide mechanisms that reduce the amount of traffic with remote sites.
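
To make the locking requirement from the first bullet concrete, here is a minimal Python sketch of exclusive checkout for binary design objects (an invented API for illustration, not ClioSoft’s):

```python
class DesignVault:
    """Minimal sketch of exclusive checkout for binary design data.
    Invented API for illustration -- not ClioSoft SOS."""

    def __init__(self):
        self._locks = {}   # object path -> user currently holding the lock

    def checkout(self, obj, user):
        holder = self._locks.get(obj)
        if holder is not None:
            # Binary views can't be merged, so only one writable copy may
            # exist -- even the lock holder can't check out a second copy.
            raise RuntimeError(f"{obj} is locked by {holder}")
        self._locks[obj] = user
        return f"writable copy of {obj} for {user}"

    def checkin(self, obj, user):
        if self._locks.get(obj) != user:
            raise RuntimeError(f"{user} does not hold the lock on {obj}")
        del self._locks[obj]   # new version stored; lock released

vault = DesignVault()
vault.checkout("opamp/schematic", "alice")
# vault.checkout("opamp/schematic", "alice")  # raises: locked, even for alice
vault.checkin("opamp/schematic", "alice")
```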

But despite these restrictions, AMS designers still need the capabilities that digital designers expect: checkin and checkout control, ability to work with a local copy (isolating you from in-process changes in other areas), tagging at checkpoints and so on. An important factor for user acceptance is how well the DDM is integrated with the design tools. Unlike software developers, who are used to working with their favorite text editor command line tools, analog/mixed-signal designers work in a GUI environment and are expecting (a) a graphical visualization and (b) not having to switch between tools to handle and modify the design data.

ClioSoft’s SOS design management platform is one solution designed to meet these expanding needs. With over 200 customers developing a wide spectrum of SoCs, their robust solution is easily customizable to meet the requirements of any company. Moreover, with partnerships with all major EDA vendors, ClioSoft’s SOS is the only solution that can manage the design data for all types of designs – analog, digital and RF. What’s telling are their adoption stats, particularly in Europe, which is the center of gravity for a pretty significant percentage of automotive electronic development. Among others, SOS has already been adopted at companies such as Infineon, Elmos, Creative Chip and Micronas.

Elmos recently drafted a very informative white paper on why they chose ClioSoft SOS as their DDM solution. They highlighted support for local cache servers as an important differentiator, allowing very responsive access to all files being used at a site without requiring huge amounts of disk space. The white paper also has some pretty interesting discussion of how they managed co-development of analog and digital design in the same hierarchy, where the digital design is managed by Subversion. They also talk about using SOS to manage tool configuration across all users and to manage tapeout.

Perhaps this is all ho-hum to digital designers, but it has become essential in the world of sensor design, and that’s driving a pretty significant shift in what is important for AMS design data management. You can request the Elmos white paper HERE.

Also Read

Managing International Design Collaboration

Making your AMS Simulators Faster (webinar)

3 small-team design productivity challenges managed


Netspeed Gemini NoC Provides Coherent Fabric in Mobileye’s Next-generation EyeQ5 SoC
by Mitch Heins on 01-11-2017 at 7:00 am

Last week I wrote about NetSpeed’s network-on-chip (NoC) IP technology and its design environment, NocStudio. This week we see a real-life application of this technology, announced at CES by Imagination Technologies and NetSpeed. The companies have announced that Mobileye will use Imagination and NetSpeed IP in its next-generation ADAS and autonomous driving system-on-a-chip (SoC).

Mobileye is well known for its vision accelerators for deep-layered neural networks, and it plans to use Imagination’s I6500 MIPS CPUs along with NetSpeed’s Gemini NoC IP in its next-generation SoC, dubbed EyeQ5®. Autonomous driving is very compute-intensive, as it must deal with a myriad of simultaneous inputs to make complex real-time decisions. In 2015, Audi used a MIPS-based Mobileye SoC to complete a fully autonomous 560-mile drive from San Francisco to Bakersfield. The SoC used was the Mobileye EyeQ4®.

In this week’s announcement, we get a glimpse of Mobileye’s next-generation SoC, the EyeQ5®. The EyeQ5® is projected to be 8X faster than its predecessor, the EyeQ4®, and is expected to deliver more than 15 tera-operations per second (TOPS) while consuming less than 5W of power. To do this, the EyeQ5® will use a complex heterogeneous multi-core architecture with 8 configurable MIPS I6500 CPUs (the EyeQ4® used 4 MIPS CPUs) coherently combined with 18 of Mobileye’s next-generation vision processors (the EyeQ4® used 6). As part of the heterogeneous I6500 clusters, NetSpeed’s Gemini NoC provides the fabric that lets Mobileye’s engineers coherently mix on-chip configurations of processing clusters for high system efficiency.

The ability to configure every component of the interconnect in a coherent heterogeneous environment is a requirement for ADAS applications. Mobileye’s designers will be able to use NetSpeed’s NocStudio software with integrated machine learning capabilities to accurately model and simulate their system configurations to optimize for the best performance, power and silicon area trade-offs and then produce fully synthesis-ready RTL for SoC implementation.

The combination of Imagination’s highly scalable MIPS I6500 CPUs with NetSpeed’s deadlock-free coherent NoC fabric enables designers to implement optimized configurations of CPU cores or clusters of CPUs. Within a single cluster, designers can optimize power consumption and configure each CPU with different combinations of threads, different cache sizes, different frequencies, and even different voltage levels, all while remaining cache coherent.
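
As a hypothetical illustration of the configuration space this opens up (the parameter names and the power model below are invented, not NetSpeed’s), each cluster becomes a point in a searchable design space:

```python
from dataclasses import dataclass
from itertools import product

@dataclass(frozen=True)
class ClusterConfig:
    threads: int       # hardware threads per CPU
    l2_kb: int         # cache size in KB
    freq_mhz: int      # cluster clock
    volt_mv: int       # cluster voltage rail

# Enumerate candidate configurations for one coherent cluster.
candidates = [ClusterConfig(t, c, f, v)
              for t, c, f, v in product((1, 2, 4),      # threads
                                        (256, 512),     # cache KB
                                        (800, 1200),    # MHz
                                        (700, 900))]    # mV

def power_proxy(cfg):
    # Crude dynamic-power proxy, P ~ C * V^2 * f (invented scaling).
    return cfg.l2_kb * (cfg.volt_mv / 1000.0) ** 2 * cfg.freq_mhz

print(min(candidates, key=power_proxy))   # lowest-power candidate
```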

Mobileye’s use of IP from Imagination and NetSpeed IP is a valuable feather in the caps of both these IP providers as Mobileye is known to be a pioneer of heterogeneous SoC designs and they know how hard it is to get it right, especially in a coherent environment. Their SoCs are used by a majority of the world’s automakers including Audi, BMW, Fiat, Ford, General Motors, Honda, Nissan, Peugeot, Citroen, Renault, Volvo and Tesla. Mobileye’s use of these IPs is a testament to the strength of the offerings from Imagination and NetSpeed.

With 8X the computational performance of the EyeQ4®, it’s easy to imagine that the EyeQ5® will take on even more data fusion than its predecessor, which already accepted simultaneous inputs from 8 cameras as well as information from multiple radars and scanning-beam lasers. The real power of the Imagination-NetSpeed IP collaboration, however, is that it lets Mobileye’s designers tune the additional MIPS clusters to take on more tasks while simultaneously optimizing the overall system for power, performance and cost. This could give the Mobileye team the ability to quickly configure and synthesize multiple versions of the EyeQ5® SoC architecture for different automotive markets, which in turn could broaden their footprint in the automotive space and possibly take on more of the electronics functionality than just image processing.

See also:
Mobileye uses Imagination Technologies and NetSpeed Systems IP
MIPS core tackles multi-core, multi-cluster designs with up to 384 cores
NetSpeed releases Gemini 3.0 cache-coherent NoC


Analog Bits and TSMC!
by Daniel Nenni on 01-10-2017 at 12:00 pm

TSMC Wafer

As a long time semiconductor IP professional I can tell you for a fact that it is one of the most challenging segments of semiconductor design. Given the growing criticality of semiconductor IP, the challenges of being a leading edge IP provider are increasing and may be at a breaking point. The question now is: What does it take to be a successful leading edge semiconductor IP company?

First and foremost, you must have a high tolerance for pain! Not only do you compete with other IP companies, big and small, you compete with internally developed IP which is like selling shoes to a shoemaker.

Second, you have to have a VERY close “silicon proven” relationship with the foundries. All was well in the semiconductor IP business until FinFETs came along. Not only are FinFETs a significant design challenge requiring early access to leading-edge processes, the foundries have also locked down that early access. Do you remember back at 28nm and above when the foundry processes were all “T-like”? IP companies developed products at TSMC and ported them to UMC, SMIC, and Chartered, making it much easier to scale your IP development, right? That portability is now gone with FinFETs, and as we move down the process path to 7nm and 5nm the design challenges and security restrictions are growing rapidly, absolutely.

Third, your business model had better be mean and lean, with the ability to pivot at a moment’s notice. The good news is that silicon-proven commercial IP is much more attractive now that design cycles are tight, the tightest I have ever seen actually. I am also seeing more systems companies making their own chips using commercial IP. Then there is semiconductor company consolidation, which is a double-edged sword: good news if your customer takes over your competitor’s customer, and not so good news if it is the other way around. So you had better be nimble, you had better be quick, and that brings us to the poster child for a successful leading edge IP company: Analog Bits.

Founded in 1995 here in Silicon Valley, Analog Bits has taken zero external funding and has enabled billions of chips, from .25 micron down to FinFETs, via more than 350 customers worldwide and more than 70 unique processes. They are experts (first-time-right) at low-power mixed-signal IP and a pioneer in multi-protocol SERDES. Analog Bits is also an ardent TSMC supporter (which is where I know them from) and a member of the exclusive “TSMC Partner of the Year” club.

In fact, Analog Bits presented twice at the most recent TSMC OIP Ecosystem Forum. The first presentation was Silicon-proven, low power IP for TSMC 16nm FFC for Automotive to Datacenter SOC’s and the second was Design and Verification of 16nm FFC Low Power SERDES for Datacenter and Automotive Applications. The theme, of course, is leading-edge SoC design for two of the hottest semiconductor vertical markets. Clicking the links will take you to the abstracts on the TSMC site. To see the full presentations you will need a TSMC account, or you can contact Analog Bits and talk to them directly.

You can hear a bit more about Analog Bits in the recruitment video below. They are hiring big time:


Industrial IoT (IIoT) – Beyond Silicon Valley
by Brian Derrick on 01-10-2017 at 7:00 am

Industry 4.0, Smart Factory 1.0, and the Internet of Manufacturing are industry initiatives aimed at accelerating the Industrial IoT (IIoT). With current market forecasts exceeding $40 billion and projected to approach $100 billion by 2020, IIoT has everyone’s attention. Well, almost everyone’s. Turning volumes of factory data into actionable information, from the supply chain, to the floor, to operations, up to management, and potentially out to customers, is the key challenge of IIoT deployment. IIoT has evolved just as the integration of the back office, front office, and business intelligence evolved: point-to-point custom solutions built over decades.


Originally, equipment was connected to local Supervisory Control and Data Acquisition (SCADA) systems, and clever plant managers discovered ways to use this data to manage the shop floor more effectively. As manufacturing became more complex, specialized software was developed to support a class of manufacturing-management applications that allowed optimization across multiple lines, shop floors, and other locations connected to networks. With advances in sensor technology, network communication, and computation capability, the IoT is accelerating, with wild forecasts of economic returns. Intelligent factories are one of the largest areas of return for IoT and IIoT. With hundreds of thousands of sensors already deployed, factories connected to the Internet, and suppliers and customers already communicating electronically, the industry is a great starting point for IIoT. But, unlike the software and Internet evolution, the IIoT center of excellence is not rooted in Silicon Valley.

Market Observations
With a market worth billions of dollars in the near future, the big industrial automation companies are already heavily invested in bringing industrial solutions to market. According to an October 2016 Semicast Research report, General Electric, Siemens, and United Technologies are the largest IIoT OEMs (by revenue), yet even the top 15 OEMs combined capture only a third of the available market:


A look at the headquarters locations of the top 15 IIoT industrial automation companies shows that the only one in Silicon Valley is Applied Materials; the rest are mostly on the East Coast of the US, in Europe, and in Japan:

Industrial software that companies employ to control the plant or factory includes distributed control systems, human-machine interfaces, and SCADA infrastructure. The top industrial software companies (by revenue) are:

Large industrial automation companies are not the only entities chasing the IIoT market. Having long served the industrial automation and monitoring markets, sensor and actuator providers are positioned on the front lines of IIoT. As more computing and value moves to the edge of the IIoT, these critical sensor and actuator companies will become even more vital to realizing the intelligent factory. Here, Silicon Valley is represented by HP and Avago. The top sensor and actuator providers (by revenue) are:

Top cloud providers, like Amazon and Google, are highly-recognized companies that provide infrastructure, business processes, and application services to the industry. Here, Silicon Valley is represented by Google and VMware. The top cloud services providers (by revenue) are:

Leveraging the Cloud and rapidly expanding internet connectivity, Siemens MindSphere Cloud for Industry allows for improved asset management and energy efficiency through data analysis and simulation by collecting and analyzing large volumes of factory data. Similarly, the GE Predix software platform also connects industrial equipment, analyzes data, and delivers real-time insights.

The IIoT is similar to its more recognizable sibling, the IoT, in that the whole solution is based on connectivity. Long before the IoT, machines were connecting and communicating with each other. With the proliferation of WiFi, gateways have become a critical component of the IIoT; in fact, many smart sensors are gateways themselves. Here, Silicon Valley is represented by Cisco. The top IoT and intelligent gateway providers (by revenue) are:

Machine to Machine (M2M) communication (or sometimes Man to Machine) in factories is accomplished using wireless and cellular modules and terminals. M2M technology allows remote measurement, diagnostics, maintenance, monitoring, and reporting from the factory floor to a large audience within a company. The top M2M communication hardware companies (by revenue) are:

All the industrial giants in IIoT not only have relationships with every manufacturing company on the planet, they possess domain knowledge. Capturing, analyzing, and making decisions in real time from jet engines, offshore oil and gas rigs, or manufacturing plants around the world is vastly more difficult than analyzing ecommerce transactional data or tracking social media posts. This domain knowledge is captured over time, it is very different from one domain to another, and it is extremely valuable for analyzing business and making decisions.

Silicon Valley has its eye on the IIoT and companies there are not known to sit on the sidelines. Startups in Silicon Valley and around the world are already actively providing solutions:

The “big guys” are responding to the start-up pressure. For example, earlier this year Siemens set up a separate business unit called “Next47” to foster disruptive ideas more vigorously and to accelerate the development of new technologies. And, GE launched their “Fastworks” initiative to experiment rapidly like a startup and to discard ideas that do not gain traction.

The IIoT Payoff
It is clear that factory automation delivers products to market faster and cheaper, but companies need to make IIoT decisions based on how fast the return on investment (ROI) will take place. One way to do this is to look at public case studies in similar industries. For example:

  • A yogurt company implemented a fully-automated production line that increased capacity by 300 percent, lowered costs by 30 percent, and decreased lost batches of product by 95 percent.
  • An oil refinery installed wireless acoustic sensors and gas flow valves in flare stacks. The system paid for itself in five months, with an ROI of 271 percent annualized over 20 years. Plus, the solution saved over $3 million per year in hydrocarbon emission losses by quickly detecting and repairing faulty valves (see the quick sanity check after this list).
  • A company that makes sanitizing gel reduced production cost by 50 percent and achieved ROI in six weeks by adding a data analytics solution to their connected production machines.
  • A semiconductor foundry automated its customer quote system for custom ICs, achieving a 419% ROI in 3 months and over $4 million in savings per year.
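
As a quick sanity check of the refinery case above: $3 million per year of savings and a five-month payback imply an installed cost of roughly $1.25M (an inferred figure, not from the source; the quoted 271% annualized ROI presumably reflects discounting and other effects this toy calculation ignores).

```python
def payback_months(cost_usd, annual_savings_usd):
    """Months for cumulative savings to cover the installed cost."""
    return 12.0 * cost_usd / annual_savings_usd

annual_savings = 3_000_000
implied_cost = annual_savings * 5 / 12    # 5-month payback -> ~$1.25M assumed

print(f"implied cost:  ${implied_cost:,.0f}")
print(f"payback:       {payback_months(implied_cost, annual_savings):.1f} months")
print(f"simple ROI:    {annual_savings / implied_cost:.0%} per year")
```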


Looking Forward

The world of manufacturing is merging information technology, advanced IoT sensors, and analytics to create smart factories. Some companies are looking for fully-autonomous factories. Others see factory machines and manufacturing lines that automatically assemble themselves where customer orders turn into production data that drives the machine configurations. Others are exploring virtual reality, 3D printing, robot technology, and clever ways to exploit cheap sensor technology connected to the Internet. Manufacturing companies are on the leading edge of technology because small improvements to their systems add up to large savings over time. Consider this remarkable statistic provided by the oil industry: improving the productivity of existing oil well assets with IIoT solutions by 1 percent would increase the world’s output of oil by 80 billion barrels, which is the equivalent of three years of the global oil supply. Do you think that oil companies are motivated to join the IIoT evolution?

There is a blurry line between IoT and IIoT. Factory automation, commercial buildings, and in some cases healthcare fit nicely into the definition of industrial. Automotive manufacturing is an obvious industrial market segment with huge potential, and when smart cars are deployed into smart cities, the transportation market looks more like an IIoT solution. The markets, applications, and opportunities seem endless.

As the larger systems-of-systems world begins to adopt the design automation solutions and standards that semiconductor and electronic systems companies in Silicon Valley have employed for decades, the opportunity for design automation companies to create innovative solutions for new markets emerges.


CES: Carnival Corp Personifies Key to Monetizing IoT
by Mitch Heins on 01-09-2017 at 12:00 pm

When one thinks of CES, one typically thinks of the latest in virtual reality, huge super-high-resolution televisions, sophisticated drones and robots. However, what caught my eye this year came from a company you don’t typically associate with high-tech gadgets: Carnival Corporation. Yep, the company with all of the cruise lines. So what is the connection between Carnival and high tech?

Carnival is in the business of selling personalized vacation experiences that mostly happen on large cruise ships all over the world. They aren’t, however, selling a cruise; they are selling an ‘experience’, and there is a big difference. An experience implies something to which the consumer is emotionally connected before, during and after the event. In Carnival’s case, a ‘gadget’ is involved, but it goes much deeper than just a neat toy, and it may very well be the key to how we all need to think about monetizing the Internet of Things (IoT).

In a nutshell, what Carnival has done is create a wearable device known as the OCEAN Medallion. The medallion, as its name implies, is a small waterproof device, worn as a pendant or watch, that is personalized for each individual customer. The medallion uses both BLE (Bluetooth Low Energy) and NFC (Near Field Communication) technology to talk to thousands of sensors and portal devices placed throughout the cruise ship and port-of-call locations. A key difference, however, is that Carnival flips the typical BLE paradigm of fixed beacons and mobile readers: the medallions act as mobile beacons talking to fixed readers.
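
Flipping beacons and readers is easy to picture in code: the fixed readers know their own locations and simply record which medallion IDs they hear. A minimal Python sketch follows, with an invented data model (not Carnival’s actual system):

```python
import time

# Fixed readers know where they are; medallions just broadcast an ID.
READER_LOCATIONS = {
    "reader-17": "Deck 5, Atrium",
    "reader-42": "Deck 8, Pool Bar",
}

last_seen = {}   # medallion id -> (location, unix timestamp)

def on_advertisement(reader_id, medallion_id, rssi_dbm):
    """Invented callback fired when a fixed reader hears a medallion's BLE
    advertisement; a real system would filter and average signal strength."""
    if rssi_dbm > -75:   # ignore weak signals from distant medallions
        last_seen[medallion_id] = (READER_LOCATIONS[reader_id], time.time())

on_advertisement("reader-42", "medallion-0031", rssi_dbm=-60)
print(last_seen["medallion-0031"][0])   # Deck 8, Pool Bar
```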


The term OCEAN in OCEAN Medallion is an acronym for One Cruise Experience Access Network. As the name implies, OCEAN is a vast network of devices all working together to help Carnival deliver a more memorable ‘experience’ to its customers. The medallion serves multiple purposes: an ID card that lets passengers access their cabins, a form of payment for on-board purchases in shops, restaurants, spas and casinos, and a location device that works with interactive portals to guide passengers around the ship and track the locations of other members of a passenger’s traveling party.

So far this is all pretty standard stuff; however, Carnival goes a step further, starting with a program they call OCEAN Ready. OCEAN Ready ships the medallion to passengers before the cruise and lets each passenger update a profile that becomes known as a passenger genome. A passenger genome captures an individual passenger’s likes and preferences. Carnival uses these profiles in combination with advanced algorithms to anticipate what might make a passenger’s experience richer and more memorable. What makes the system unique is that the passenger genomes are not static: OCEAN learns on the fly by observing passengers’ behaviors and choices during the cruise and uses new information in real time to anticipate what might make each passenger’s experience better.

The user interface for this is an application known as COMPASS. Passengers interact with COMPASS through interactive portals that are distributed throughout the ship including the TV in the passenger’s cabin. There are no buttons to push. Instead, portals are activated by the mere fact that the passenger approaches it with their medallion. Passengers can also interact with COMPASS using their smart phones. Ship’s crew also carry portable COMPASS portals that interact directly with passenger’s medallions letting the crew know with whom they are speaking and what interests that passenger may have.

Carnival combines COMPASS and the medallions with other apps like OCEAN Concierge. Concierge enables passengers to order food, drinks or schedule activities through a COMPASS portal. The crew finds the passenger (again using the medallion) to deliver the requested items and the system learns more about the passenger as it attempts to anticipate their next need or desire. The Concierge app also makes suggestions and sends passengers invitations to events and activities on-board and ashore based on preferences indicated in the passenger’s genome.

All of this goes to what Carnival calls ‘Experience Intelligence’. OCEAN is a learning network that customizes the experience for each individual passenger and this is what really caught my eye. Carnival didn’t set out to make another gadget. Instead they started out with the idea of customer centricity and trying to discover how to personalize and customize every passenger’s experience. They looked at the problem holistically and developed a system of hardware and software that would be simple to use but provide a way to truly exceed passengers’ expectations and provide an experience that would be remembered well after the cruise was over.

In the end, Carnival will buy a lot of ‘gadgets’ to make the OCEAN Medallion program work but the payback will be passengers who will each get a customized and personalized experience that they will not soon forget. That hopefully, will translate into repeat customers and more word of mouth references for Carnival that will encourage more people to cruise on their ships.

The lesson to be learned by all of us is that for IoT to be successful, it’s all about having in mind the ‘experience’ with which we want to leave our customers. All of the gadgets in between are a means to an end, not the end itself and Carnival has figured that out.


NVIDIA on a Tear at CES
by Bernard Murphy on 01-09-2017 at 7:00 am

Jen-Hsun Huang, CEO of NVIDIA, gave the opening keynote at CES this year. That’s hardly surprising. From a company that operated on the fringes of mainstream awareness (those guys that do gamer graphics), they finished 2016 as the top-performing company in the S&P 500, with forecast revenue growth of 35%. That’s startup growth, and the same rate at which Amazon Web Services (the mighty Amazon cloud) is growing. Pretty impressive for a semiconductor company. And they are earning it. From the keynote alone, it’s obvious they are putting the same blistering level of innovation into their products that you’ll see at any of the FANG (Facebook, Amazon, Netflix, Google/Alphabet) companies.


Jen-Hsun kicked off with the PC gaming sector, which remains very important to NVIDIA. This business has doubled in the last 5 years to $31B, and NVIDIA provides the dominant game platform today, as represented by GeForce. They’re obviously very proud of this, but they’re looking at how they can grow it further. There are a few hundred million serious gamers today; most PC/Mac users (around a billion) play games at some level but can’t access the more advanced games and multi-player options because they don’t have the hardware. So NVIDIA has put GeForce for gaming in the cloud, called GeForce NOW, making it accessible to all users with an Internet connection. This apparently took some serious work to preserve the performance and low latency you expect in desktop gaming. Access will be available in March for early users and will be offered on-demand at $25 for 20 hours of play. Now you see part of why these guys are doing well – they’re expanding their market to casual users, from whom they’ll make money, and at least some of those casual users will like it so much they invest in their own GeForce-enabled desktop systems. 🙂


NVIDIA has also partnered with Google on Shield, their Android-based streaming device (same concept as Roku, Apple TV, etc). It serves up all the usual options – Netflix, Hulu and (unlike Apple TV) Amazon video – and, of course, gaming: games can stream from a GeForce system to the TV or from GeForce NOW in the cloud. More interestingly (for me; I’m not much of a gamer), Shield is tying in Google Assistant, providing natural speech control of the TV but also home automation, so you have a central hub for voice-activated control (including of the TV) of any smart home device. To make this a through-house ambient capability they are also introducing the NVIDIA Spot, a small AI microphone (with lots of cool tech) which plugs into a wall socket and communicates with the hub, so from anywhere in the house you can say “OK Google …” and have the Google Assistant respond. (I have to believe NVIDIA is talking with Amazon about Echo integration, though that didn’t come up in the keynote.) Shield starts at $199 and Spots are separately priced ($50 each, I hear).

Then of course there’s NVIDIA’s role in the automotive industry, which is already significant. This isn’t just about graphics, it’s also in a very big way about AI. Jen-Hsun makes the point that GPUs were a big part of what transformed AI from an academic backwater into a major industry, especially in deep learning. He calls GPUs the “big bang” of AI. Maybe I’d be more of a geek and call it the “Cambrian Explosion” (there was AI around before GPUs, it was just evolving slowly). Either way, NVIDIA saw this opportunity and ran with it – their solutions are a dominant platform in this field.


At the show, Jen-Hsun introduced their Xavier AI Car Supercomputer: an 8-core ARM64 CPU and a 512-core Volta GPU on a board that fuses sensor information, connects to CAN buses and HD maps, is designed to ASIL-D, and delivers 30 TOPS in 30W. NVIDIA created a car they call BB8 (for Star Wars fans) which can drive autonomously given voice directions. The example they showed was “Take me to Starbucks in San Mateo”, from which it figured out the best route and headed out. Interestingly, they see this more as a co-pilot (they call it AI CoPilot) than a fully autonomous intelligence; BB8 hands control back to the driver whenever it gets into situations it feels it can’t handle.

It also pays attention to the driver, looking for tiredness, inattention, perhaps having had a few too many drinks, and can warn the driver (or possibly take corrective action?). Even more interesting, it does this through facial recognition on the driver and gaze tracking. It also does lip-reading with 95% accuracy (they claim), much better than human experts. Why? Because cars can be noisy environments (music, traffic, passengers), so you want to pay special attention to driver commands, even when voice can’t get through.

Finally, Jen-Hsun announced new automotive partnerships. They have added ZF (the 5th largest automotive electronics supplier) as a partner, and Bosch (#1 tech supplier to the car industry) has announced a production drive computer partnered with NVIDIA. And they have announced a new partnership with Audi (my favorite car) to build a next-generation AI car by 2020. In fact, Audi was demoing a Q7 driving itself in a parking lot at CES after just 4 days of training. All of which reinforced that cars are still in many ways our favorite consumer devices, which is why CES is becoming as much a car show as an electronics show.

There’s a lot of detail I skipped here, such as Shield supporting 4K and HDR. You’ll have to watch the video HERE to get the full keynote. I was really impressed. This is a semiconductor company that has reinvented itself to play right alongside the consumer technology leaders of today, not just as an “NVIDIA inside” but in many cases as a very visible part of our consumer experience. Other semis should take note. NVIDIA has shown that there is still a path to greatness in hardware.

More articles by Bernard…