
CEO Interview: Jim Gobes of Intrinsix

by Daniel Nenni on 08-14-2017 at 7:00 am

Experience gives us the ability to make better decisions, and in a fast-moving industry like semiconductors, experience is critical. As chips get more integrated and complex, the number of design decisions that must be made increases at a dramatic rate. Take process technologies: never in the history of semiconductors have we had more processes to choose from. 28nm HP, HPM, LP, HPC, HPC+, or ULP? 22nm CMOS, FD-SOI, or FinFET? How about 16nm, 14nm, 12nm, 10nm, 8nm, 7nm, 6nm, and so on? And don't get me started on design IP! Given the number of decisions that need to be made, this CEO interview is a must-read, backed by more than 30 years of advanced semiconductor design services experience, absolutely.

How have the challenges of bringing an IC to market changed from when Intrinsix started 30 years ago to today?
30 years ago, in 1987, Intrinsix's first few customers were struggling with much the same things our current customers debate – questions such as "What can be integrated?", "How much will it cost?", and "How long will it take?" Since then, while everyone knows that SO much more can be integrated, two additional dimensions of complexity have been layered on top. First, the technology choices have expanded from a handful of options to dozens of processes and companies today – and we should include the extraordinary effect of the FPGA options – all of which has created a bewildering array of price-performance tradeoffs. Second, in the 1980s the industry was dominated by a small number of vertically integrated companies supplying solutions from design through to finished IC products and even systems, but disaggregation has changed all of that. Economic forces have driven specialization in each segment of the supply chain, from design to finished product. Options have proliferated, but customers of new or custom IC devices must now navigate a dynamic and challenging maze of suppliers, each with a focus or specialty, and each with a desire to maximize profit on its slice of the solution. Quite a different landscape when you add these factors or dimensions.

What barriers do you see for future growth and innovation?
The internet ensures an almost level playing field of information, which in turn offers a fantastic opportunity for growth and innovation – more than at any time in the past. The barriers of "who you know" and "what you know" continue to lower while the layers of hardware and software building blocks proliferate. Sure, it can be a challenge to choose and assemble the correct pieces of hardware and software to achieve something new, and you will likely need to customize some parts to differentiate your new product, but there are so many options to choose from! So the barriers to future growth and innovation are really now about access to money and assembling teams just in time to bring ideas to market faster than the competition. Still, many new and wonderful products don't make it to market because it is just so darn expensive and hard to get it all right the first time, and then to climb over the chasm of product introduction and make it to volume revenue and profits. From my experience, innovators should double their estimates for time and money – and then they will be about halfway to where they need to be.

What is it like working with Intrinsix?
The one word that describes what it is like for customers is consistency. For the last two decades, ever since we embraced project management as a necessary function, every engagement has had the same three interface points: Business Development, Technical Leadership, and Project Management. The three people may change, but there are always three, and their roles are synchronized in the way we discuss new potential engagements, how we frame the solutions, and the way in which we create functional teams and schedules that meet the needs of our customers. From initial contact through to chips that work reliably within their end-customers' systems, our customers get a consistent interface.

Can you talk more about your phased approach to project planning and execution?
A great question, and it follows perfectly on my comments about consistency. Here is the thing: if you are considering doing something that has never been done before – which may or may not use new tools, a new process, and new IP – it is best to do some serious planning and research in an early phase. Equally important, today's ICs tend to be super-complicated, so writing down what you need and want cannot be skipped… or you might get something you don't need or want. Finally, and here is where the rubber meets the road, a very specific plan for verification is required BEFORE design begins in earnest. A small set of our customers really does not need a phased approach, possibly because they have done all of this, but most are still talking to their end-customers, still deciding on trade-offs, and still wondering what kind of performance-power-schedule-expense options are best to consider. So, yes, Intrinsix customer success depends on a phased approach, regardless of the urgency. Going fast to the wrong place is not helpful. But throughout the early, middle, and late phases of any IC design, professional project management and savvy technical leadership keep things on track and well-communicated among all of the stakeholders.

What advice would you give to companies contemplating their first IC?
Creating a new integrated circuit is a terribly expensive thing to do. It may seem counter-intuitive for me to say this, but companies should look hard for alternatives, such as using existing chips or making things work in software, simply because the cost, especially at or near the state of the art, can be astronomical. Underestimation of the time and expense is endemic to the electronics industry for good reason – the numbers add up so fast that no one wants to believe it could take THAT long or cost THAT much. But it does. So the most important advice here is, whether or not you have hired experts who "know the industry," get out and talk to suppliers who have some independence and knowledge of alternative solutions within the IC space and within your own vertical markets. Companies such as Intrinsix can use any tool, work with any foundry, and manage the process from architectural hardware/software trade-offs to finished, packaged chips – and that vision and experience with today's offerings lets us help companies choose the path that optimizes their profitability over the short and long term.

Final thoughts for our readers?
Because Intrinsix is fortunate enough to have customers at leading-edge semiconductor companies as well as ground-breaking DoD entities like DARPA, we are uniquely positioned to see some aspects of the future of electronics – and I can tell you it is very exciting! Just as important as the technologies coming from those entities are the new ideas, research, and real-world solutions coming from foundries, EDA tool companies, IP providers, packaging firms, and others in this fast-moving industry. But let's not forget that as fast as this industry moves, the risks multiply (just like the transistors in a chip) with the number of moving parts and players. So I would encourage your readers to be excited yet wary. While fortune will always favor the bold in an industry this dynamic, I would suggest that the bold will stay fortunate by being aware that complexity breeds opportunities and risks at the same rate.

Also Read:

CEO Interview: Chris Henderson of Semitracks

CEO Interview: Stanley Hyduke, founder and CEO of Aldec

CEO Interview: Vincent Markus of Menta


SEMICON West – EUV Readiness Update

by Scotten Jones on 08-11-2017 at 12:00 pm

At the imec technology forum held at SEMICON West, Martin Van Den Brink, President and CTO of ASML presented on the latest developments on EUV. I also had an opportunity to sit down with Mike Lercel, ASML Director of Strategic Marketing for an interview.

Continue reading “SEMICON West – EUV Readiness Update”


Webinar: Fast-Track to Riviera-PRO

by Bernard Murphy on 08-11-2017 at 7:00 am

Whether you're right out of college and starting on your first design, a burn-and-churn designer thinking there must be a better way, or an ASIC designer wanting to do a little prototyping, this webinar may be for you. It's a fast start on using the Aldec Riviera-PRO platform for verification setup, run, and debug, and more. There are a lot of tools you have to learn in your job, and manuals are a painful way to get there. Fast-Track guidance is the best way to get up and running quickly.

REGISTER NOW for this Webinar on August 17th from 11:00AM to 12:00PM PDT

Abstract
Riviera-PRO™ Advanced Verification Platform addresses the verification needs of engineers crafting tomorrow’s cutting-edge FPGA and SoC devices. Riviera-PRO enables the ultimate testbench productivity, reusability, and automation by combining the high-performance simulation engine, advanced debugging capabilities at different levels of abstraction, and support for the latest Language and Verification Library Standards.

This first webinar of a two-part "Fast Track" series is designed to help functional verification engineers get up to speed quickly with design management and design entry in Riviera-PRO. It includes tips and tricks for easier design debugging, and also covers how to run simulations and handle waveforms.

REGISTER NOW for this Webinar on August 17th from 11:00AM to 12:00PM PDT

Agenda

  • Introduction
  • Live Demo
  • Spec-TRACER™ demo
  • Conclusion
  • Q&A

Presenter Bio
Sunil Sahoo provides support for customers exploring simulation tools as an Aldec Applications Engineer. His practical engineering experience includes digital design, functional verification, and wireless communications. He has worked in a wide range of engineering positions, including Digital Design Engineer, Verification Engineer, and Applications Engineer. He received his B.S. in Electronics and Communications Engineering from VIT University, India in 2008 and his M.S. in Computer Engineering from Villanova University, PA in 2010.

About Aldec
Aldec Inc., headquartered in Henderson, Nevada, is an industry leader in Electronic Design Verification and offers a patented technology suite including: RTL Design, RTL Simulators, Hardware-Assisted Verification, SoC and ASIC Prototyping, Design Rule Checking, CDC Verification, IP Cores, Requirements Lifecycle Management, Embedded, DO-254 Functional Verification and Military/Aerospace solutions. https://www.aldec.com/


Overcoming Mixed-Signal Design and Verification Challenges in Automotive and IoT Systems

by Daniel Payne on 08-10-2017 at 12:00 pm

At the recent DAC conference in Austin I attended a panel discussion over lunch where engineers from four companies talked about how they approached mixed-signal design and verification challenges for automotive and IoT systems. It seems like 2017 was the year of automotive at DAC, while in 2016 it was all about IoT. Both segments are attractive because of their size (Automotive) and growth potential (IoT). The panelists included:
Continue reading “Overcoming Mixed-Signal Design and Verification Challenges in Automotive and IoT Systems”


FPGA-Based Networking for Datacenters: A Deeper Dive

by Bernard Murphy on 08-10-2017 at 7:00 am

I've written before about the growing utility of FPGA-based solutions in datacenters, particularly around configurable networking applications. There I just touched on the general idea; Achronix have developed a white paper to expand on the need in more detail and to explain how a range of possible solutions based on their PCIe Accelerator-6D board can meet it.


You may know of Achronix for their embedded FPGA IP solution. In fact they launched their business with, and continue to enjoy success with, their Speedster22i FPGA family of devices optimized for wireline applications. And they now provide a complete board built around a Speedster FPGA for a more turnkey solution, ready to plug into datacenter applications.

Why is this important? Cloud services are a very competitive arena, especially among the 800-pound gorillas – AWS (Amazon), Azure (Microsoft), VMware/Dell, and Google. Everyone is looking for an edge, and the finish line keeps moving. You can't do much (at least today) to differentiate in the basic compute engines; Intel (and AMD) have too much of a lead in optimizing those engines. But networking between engines is a different story. Raw speed obviously is an important factor, but so is optimizing virtualization; there's always more you can do between blades and racks to provide better multi-cluster performance and reduce power. Then there's security. Networks are the highways on which malware and denial-of-service (DoS) attacks spread; correspondingly, they're also where these attacks can be stopped, if sufficiently advanced security techniques can be applied to transmitted data.

So each of these cloud service providers needs differentiated, and therefore custom, solutions. But the volume of parts they need, while high, is probably not sufficient to justify the NREs demanded by ASIC options. And more to the point, whatever they build this year may need to be improved next year, and the year after that. What they need is configurable network interface cards (NICs) – but so configurable that they can't buy cost-effective options off the shelf. That points fairly obviously to the advantages of an FPGA-based solution, which is exactly why Microsoft is using Intel/Altera FPGAs.

A good example of how a custom networking solution can optimize performance over standard NIC solutions is remote DMA (RDMA) access between CPUs. Think about a typical tree configuration for a conventional local area network (LAN). North-south communication in the LAN, directly from root to leaves or vice versa, is efficient, potentially requiring only a single hop. But east-west traffic, between leaves, is much less efficient, since each such transaction requires multiple hops. Standard NICs will always hand processing off to the system controller, and these transactions, which will be common in RDMA, create additional system burden and drag down overall performance. This is where a custom solution to handle east-west transactions can offload traffic from the system and deliver higher overall performance.
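As a rough illustration of that asymmetry, here is a toy hop-count model (the topology and tier counts are hypothetical, for illustration only – they are not from the Achronix white paper):

```python
# Toy model of hop counts in a tiered tree LAN. North-south traffic
# (root <-> leaf) pays one hop per tier; east-west traffic (leaf <->
# leaf) must climb to the common ancestor switch and come back down.

def hops_north_south(tiers):
    """Root to leaf: one hop per tier of switching."""
    return tiers

def hops_east_west(tiers_to_common_ancestor):
    """Leaf to leaf: up to the common ancestor, then back down."""
    return 2 * tiers_to_common_ancestor

# Leaves under the same switch still pay a round trip (2 hops);
# leaves in different racks of a two-tier tree pay 4 hops.
print(hops_north_south(2), hops_east_west(1), hops_east_west(2))
```

Every one of those extra east-west hops that a standard NIC bounces through the system controller consumes host cycles, which is precisely the overhead a custom FPGA-based NIC can absorb.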

Security is as much of a moving target as networking performance. Encryption of communication between virtual machines is becoming increasingly common, as are methods to detect intrusion and prevent data loss. These capabilities could be implemented in software running on the system host, but that burns system cycles which aren't going to the client application – cycles that can chip away at competitive advantage or fail to live up to committed service-level agreements. Again, a better solution in some cases can be to offload this compute into dedicated engines, not only to reduce the impact on system load but also to provide a level of development and maintenance independence (and possibly additional security) by separating security functions from the VM system.


Achronix also talk about the roles their solution can play in Network Functions Virtualization (NFV), from assisting with test and measurement in software-defined networking (SDN) scenarios to providing an offload platform for NFV computation.

This is an exciting and rapidly growing domain. As we offload more and more of our local computation to the cloud, we expect to get higher performance at reasonable cost and, in most cases, with very high security. Effectively managing communication within the cloud is a big part of what will continue to grow cloud appeal; configurable, differentiable solutions based on platforms like the Accelerator-6D will be an important component in delivering that appeal. You can read more about Achronix’ Accelerator-6D solution HERE.


Whitepaper: The True Costs of Process Node Migration

by Mitch Heins on 08-09-2017 at 12:00 pm

Mentor, a Siemens business, just released a new white paper entitled "The True Costs of Process Node Migration," written by John Ferguson. It is a good, quick read that highlights some of the key areas that are often overlooked when contemplating a shift of process nodes for your next design.

When considering a shift to a more advanced node we normally think of cost reductions brought on by smaller transistor sizes and pitches. The smaller the transistors and the tighter the pitch, the more logic you can fit on a die, essentially reducing the cost per transistor.

Over time, foundries further ingrained this idea by compounding savings through shifts to larger wafer sizes, usually associated with new process nodes. Wafer sizes have grown from 100mm in 1975 to the 300mm wafers used in today's state-of-the-art processes. The bigger the wafer, the more die that can be produced for the cost of that wafer. While prices for 300mm wafers are considerably higher than their smaller counterparts, the number of die that can be processed on a wafer goes up roughly with the square of the diameter, so the aggregate cost per die comes down. That's the good news.
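To make the wafer-size arithmetic concrete, here is a sketch using the common die-per-wafer approximation (gross dies minus an edge-loss term); the die area and wafer costs are hypothetical numbers for illustration, not figures from the white paper:

```python
import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Common approximation: wafer area / die area, minus an
    edge-loss term proportional to the wafer circumference."""
    r = wafer_diameter_mm / 2
    return int(math.pi * r**2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

die_area = 100.0  # mm^2, hypothetical
for diameter, wafer_cost in [(200, 2000.0), (300, 4000.0)]:
    n = dies_per_wafer(diameter, die_area)
    # Even if the larger wafer costs twice as much to process,
    # the cost per die still drops because die count grows faster.
    print(f"{diameter}mm wafer: ~{n} dies, ~${wafer_cost / n:.2f}/die")
```

The edge-loss term matters: small die waste little wafer edge, large die waste a lot, which is why the per-die advantage of bigger wafers is strongest for large chips.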

John's white paper goes on to expose many of the hidden costs that people often forget to consider. One hidden cost is that if the die size stays constant and the amount of logic increases, more interconnect will be required to connect that logic. This comes at the price of additional metallization layers, which adds to both the wafer processing costs and the non-recurring engineering (NRE) costs for the extra metallization masks. Additional metallization also implies additional chip power to push signals through the extra layers and vias, and more power can mean a more expensive package to handle the extra heat generated.
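A back-of-the-envelope per-die cost model shows how these hidden NRE costs can eat into the shrink savings (all figures here are hypothetical, chosen only to illustrate the shape of the trade-off):

```python
def cost_per_die(wafer_cost, dies_per_wafer, mask_set_nre, volume):
    """Unit cost = share of the processed-wafer cost plus the
    mask-set NRE amortized over total production volume."""
    return wafer_cost / dies_per_wafer + mask_set_nre / volume

# Hypothetical nodes: the newer one yields more dies per wafer but
# carries a pricier wafer and a much larger mask set (extra layers).
for volume in (1e6, 1e7):
    old = cost_per_die(3000.0, 400, 1.5e6, volume)
    new = cost_per_die(6000.0, 900, 5.0e6, volume)
    print(f"volume {volume:.0e}: old ${old:.2f}/die, new ${new:.2f}/die")
# At 1M units the amortized NRE keeps the new node more expensive
# per die; only at higher volumes does the shrink advantage win out.
```

The crossover volume is the number that matters: below it, staying on the older node is the cheaper choice even though the transistors are bigger.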

Another hidden cost comes from the fact that the geometric pitch on advanced nodes really hasn't decreased much since the transition to nodes requiring multi-patterning lithography (i.e. 20nm and below). The good news here is that transistor pitches are still shrinking, as we've turned the transistor on its proverbial side using FinFET technology. However, the wafer processing and mask NRE costs to perform these miracles are rising at a much higher rate than before, so the incremental cost savings expected from moving to a more advanced node are not as clear as they once were.

The most overlooked costs of moving to a more advanced process node seem to come from the impact that new manufacturing approaches have on design constraints and on the tools and flows used to implement and verify designs against them.

In general, advanced process nodes have significantly more design constraints and rules to which the electronic design automation (EDA) tools must adhere. Examples of this include advanced parasitic extraction and analysis tools to handle the more complex interconnects as well as detailed stress analysis that may be needed to ensure the behavior of each transistor.

Additionally, as multi-patterning is required for the super small geometries, more work must be done by the EDA tools to ensure that the layout of a dense layer can be successfully split into multiple masks.

These new constraints translate into the hidden costs of new tools as well as training for the designers who use them. As designs get bigger, design team sizes are growing and along with that comes the need for more EDA tools. While it may be unfair to blame these costs on a shift in process node, they are in fact real costs that must be considered when planning a new design.

Coupling the larger number of constraints and rules with larger teams and larger designs, the real kicker comes in the form of the extra hardware required to process all the design and verification tasks. Arguably, some of the highest compute demands come from the physical verification and modeling tools.

The more advanced the process technology, the more advanced the modeling and design verification methods required. Many of the design verification steps are not even feasible without some kind of hyper-scaling capability. Fortunately for us, companies like Mentor are fielding products that can take advantage of hyper-scaling – Calibre nmDRC can scale up to thousands of cores. Without this, we wouldn't be able to build the kinds of system-on-chip (SoC) circuits that we do today.
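A simple Amdahl's-law sketch (illustrative only; it says nothing about Calibre's actual scaling behavior) shows why scaling to thousands of cores only pays off when the verification workload is overwhelmingly parallel:

```python
def amdahl_speedup(parallel_fraction, cores):
    """Upper bound on speedup when only part of a job parallelizes."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / cores)

# Even with 99% of the run parallelizable, the serial 1% caps the
# speedup near 100x; driving the parallel fraction higher is what
# makes thousands of cores worthwhile.
for cores in (16, 256, 4096):
    print(f"{cores:5d} cores -> {amdahl_speedup(0.99, cores):.1f}x")
```

This is why hyper-scaling is as much a software-architecture achievement (eliminating serial bottlenecks) as a matter of buying more hardware.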

The hidden cost here is the cost to provision and maintain large compute farms on which all the tools can be run. Some companies choose to provision these compute farms internally while others are looking to leverage the cloud. Either way, the cost is real. Thankfully these costs can be amortized over many designs, design tasks and process nodes.

Another key issue is the worry about sunk EDA tool investments: will the company be able to continue to use the tools it purchased, or will it need entirely new tools for the next process node? In this vein, Mentor's Calibre nmDRC team has done a great job of segregating and layering their capabilities to ensure that a company's previous tool investments are not lost when migrating to a new process node. Calibre's capabilities and features are set up so that advanced functionality can be added on an as-needed basis. Core features required by all foundries are grouped and preserved in the base capabilities of the tool, while features such as multi-patterning, pattern matching, advanced fill technology, and advanced reliability checking can be readily added to the base features as they are needed.

In summary, the trade-offs for shifting to a new process node come down to comparing the higher NRE costs of the newer node against the expected return on investment derived from enhanced chip performance, a reduced bill of materials through integration, and lower overall unit prices. The good news for designers looking to move to more advanced process nodes is that EDA providers like Mentor continue to improve their tool capabilities and performance while making it economically feasible to add capabilities as you go.

See also:
Mentor White Paper for more details
Mentor Calibre nmDRC products


Introduction to Semiconductor Processing

by Daniel Nenni on 08-09-2017 at 7:00 am

We introduced you to Semitracks last week with an interview of their CEO, Chris Henderson. This week, we thought it might be worthwhile to continue that introduction with an overview of one of their more popular online and in-house courses:

Introduction to Semiconductor Processing.

One of the big challenges in our industry for new people is the breadth of knowledge required to do one's job effectively. While many of today's universities turn out electrical engineering graduates, many of those graduates will not have had coursework specifically focused on the semiconductor industry. Chris mentioned to me that in a recent poll he ran during an introductory class for engineering new-hires at a large US-based semiconductor company, only about 20% had taken a course in semiconductor processing (wafer fabrication), less than 10% had taken a course in semiconductor testing, and none had taken a course in semiconductor packaging. These are fundamental activities of our industry, and yet few people knew anything about them. This means that upwards of 90% of incoming engineers require additional training just to gain a basic understanding of how we make chips. Furthermore, although most engineers have had a course in statistics, they typically do not remember the material well enough for it to be useful in the analysis efforts that are so common in high-volume manufacturing.

Granted, there are universities that run research efforts in semiconductor fabrication, packaging and test, but the students that take these advanced courses or participate in the research efforts are graduate students (Masters and PhDs). The bulk of today’s hiring in our industry is in the form of product and test engineers, as well as engineers that can help manage the company’s interactions with the foundry. These roles are not typically occupied by PhD graduates, but instead by BSEEs. These individuals need to understand the manufacturing process, if for no other reason than to know how manufacturing impacts their work. In many instances, there are subtle interactions between wafer fabrication and design, wafer fabrication and test, wafer fabrication and packaging, and so on, that cause our industry great pains and grief. To even begin to address these problems, new product and test engineers need to have a basic understanding of these topics. They need to know the language, the terminology, the basic processes, and so on, to know what questions to ask.

A useful analogy is interacting in a foreign language. Imagine you are the concierge at a hotel in a foreign country. If you see two people arguing at a counter over a problem they're having and you don't know the language, you can't even begin to help them. Once you learn the language, though, you can begin to understand their concerns and come up with a way to help. The better you learn the language, the better equipped you'll be to provide help. The same is true in our industry. If you don't have a basic understanding of the manufacturing process, you won't be able to help solve the problems that your company experiences.

Semitracks can help you learn this language. If you're new to the industry, I would encourage you to look into "learning our language" by enrolling in a course like this one. It is like taking the first year of a foreign language: just as you begin to understand what people are saying after a year of study, you'll learn enough to begin to understand how our industry works. You'll need further education to become truly effective, but this type of course provides the basis for that understanding, absolutely.


IoT Value chains now and in the future!

by Diya Soubra on 08-08-2017 at 12:00 pm

Today, when a company decides to roll out an IoT solution to improve operational efficiency, the management hires a system integrator (SI) to develop and deploy the system. The SI will create and deliver a wholly owned end-to-end solution by externally sourcing all the components required to sense, analyse and decide. There is no room for third parties as active participants in the operation of the system or the data flow.

The SI will find the sensors, be it in the form of devices or modules. The SI will set up a network topology to collect the data and finally feed that data into the software that extracts the information used to drive decisions for that business. After system delivery, the complete IoT rollout is managed internally. This is a typical approach for any new technology across all industries. At first, only Fortune 500 companies have the capital to do such deployments, and they contract the design and deployment to Tier 1 system integrators, since there is no margin for error given the large investment required.

After a few such companies have built up reasonable sets of data, we will start to see the emergence of data cooperation, or even data transactions in which corporations exchange data for mutual benefit.

Up to this point there are no small players in the picture since there is no room for them. They are too small to be trusted with the task since their technology is not widely deployed yet.

As the technology matures it becomes more economical to no longer own all the parts of the system. (Think of all the in-house servers of the 1990s now happily living in the cloud.) In the case of IoT, the first logical part to be displaced is the sensing part. Imagine the case of a small company placing environmental sensors across a city and giving access to the output data to any buyer. This small company then removes the need for anyone else to develop and deploy such sensors. The value is not in the sensor but in the information extracted from the data acquired from the sensor.

This model maps directly to Fortune 500 companies. Think office plants. Many companies today rent plants to decorate their premises. For a fixed sum, an external company is responsible for supplying and maintaining a set of plants across the premises. The company is buying the fact that the premises are green and pleasant to be around; it is not buying plants and soil, and there is no need for a chief plants officer. Sensors fall into the same category. What the company requires is data, not sensors. A supplier of sensors will, for a fee, install and service units throughout the premises to supply data that feeds into the information system used to make decisions.

There will be a difference between what I refer to as captive sensors and free sensors. The data stream from captive sensors has only one predefined destination. For example, an environmental sensor in a company will send data only to that company. The data from a free sensor is available for anyone willing to pay for it.

Free sensors require a broker to set up the transaction, since the sensor/data supplier will be too small to transact directly with the companies wanting the data. Think of the Google AdWords broker model. Anyone with a web page can register with Google, the broker, to sell advertising space to any company wanting to place an advertisement on that page. Without Google playing the role of the broker, there is no means for the web page owner to transact with the advertiser and capture revenue from the content of their page.
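A minimal sketch of what such a "sense broker" would do (all names, prices, and the commission rate are hypothetical; a real broker would grant metered data access rather than return a stream ID):

```python
class SenseBroker:
    """Toy broker: sensor owners register data streams, buyers pay
    per access, and the broker takes a commission on each sale."""

    def __init__(self, commission=0.3):
        self.commission = commission
        self.streams = {}    # stream_id -> (owner, price)
        self.earnings = {}   # owner -> accumulated revenue

    def register(self, stream_id, owner, price):
        self.streams[stream_id] = (owner, price)

    def buy(self, stream_id):
        owner, price = self.streams[stream_id]
        payout = round(price * (1 - self.commission), 2)
        self.earnings[owner] = self.earnings.get(owner, 0.0) + payout
        return stream_id  # in reality: a data-access grant

broker = SenseBroker()
broker.register("weather-eu-042", owner="maker-anna", price=10.0)
broker.buy("weather-eu-042")   # e.g. a climate researcher in Japan
print(broker.earnings)         # the maker's cut of the sale
```

The broker's value is exactly the matchmaking and payment plumbing that an individual maker with one weather station could never build; Blockchain would eventually let the two parties transact without it.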

(This situation will change once Blockchain is deployed for web advertising, but that is the subject of a future blog. The same goes for sensor data: Blockchain will remove the need for a broker.)

The migration continues as the technology used to extract information from data also becomes widely established. The same model applies: one more component no longer needs to be wholly owned internally for its outputs to be accessible. Think of all the cloud analytics software available today. The data stream formats are defined, the type of analysis is selected, and out comes information that is then used to make decisions.

The real value resides in the decisions taken based on the information extracted from the sensor data. Until sensors are widely available, they need to be created internally. Until the data-to-information conversion is widely available, that function needs to be done internally. Once those two components are widely available, there is no need to divert internal resources to develop and maintain either.

The item that will not be externalised is the decision making, since this is the essence of the business and the reason the investment was approved in the first place.

I believe that the analytics part is in very good shape today, since it is a well-understood, complex but contained operation. The hard problem to solve is the availability of a sensor-as-a-service scheme. There are millions of makers today who are ready to deploy all sorts of sophisticated sensors if they can find a way to capture revenue from that deployment. Today, there is no such possibility.

Why is that bad for everyone? Well, because if no one can capture revenue from free sensors, the IoT market will remain on a healthy but controlled growth curve. Not all companies are able to deploy the end-to-end solution internally, hence IoT will take much longer to reach all companies. Say maybe 10% to 20% annual growth, which is healthy but not spectacular compared to what would happen if those millions of makers could capture revenue from their free sensors.

Here is a specific example: a smart, connected weather station that someone installed in Europe as a technology demonstrator project.


If there were a “sense” broker they could sign up with, they could sell that data on a global scale. Someone in Japan studying the butterfly effect could use it to refine their study of the global climate. That would be a very simple transaction if we had Blockchain, but we don’t, so we need a broker to handle the transaction instead. We need someone to play the Google of IoT sense nodes to make those transactions happen.
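What such a broker would actually do is simple to state. The sketch below is entirely hypothetical (the `SenseBroker` class, feed IDs, and prices are invented): sensor owners list feeds, buyers purchase readings, and the broker routes the data while crediting the owner.

```python
# Hypothetical sketch of the "sense broker" role: match sensor owners
# with data buyers and let the owners capture revenue per reading.
class SenseBroker:
    def __init__(self):
        self.feeds = {}       # feed_id -> (owner, price_per_read)
        self.earnings = {}    # owner -> accumulated revenue

    def list_feed(self, feed_id, owner, price_per_read):
        """A maker registers a sensor feed with the broker."""
        self.feeds[feed_id] = (owner, price_per_read)

    def buy_reading(self, feed_id, reading):
        """A buyer purchases one reading; the owner is credited."""
        owner, price = self.feeds[feed_id]
        self.earnings[owner] = self.earnings.get(owner, 0.0) + price
        return reading        # the broker passes the data through

broker = SenseBroker()
broker.list_feed("weather-eu-001", owner="maker-in-europe", price_per_read=0.01)
data = broker.buy_reading("weather-eu-001", {"temp_c": 18.2, "wind_kph": 12})
# The researcher in Japan now has the reading; the maker has captured revenue.
```

A Blockchain version would replace the central `earnings` ledger with on-chain micropayments; until then, someone has to run this ledger as a business.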

IoT platforms will not drive spectacular market growth unless they integrate revenue capture for free sensors. Otherwise IoT remains an extension to current IT infrastructure. A fantastic extension for sure but not the global revolution that we dream of.

Anyone want to step up and play IoT “sense” broker? The window of opportunity is still wide open since Blockchain for IoT is still far off in the future.


Here Come Holograms

by Bernard Murphy on 08-08-2017 at 7:00 am

A quick digression into a fun topic. A common limitation in present-day electronics is the size and 2D nature of the display. If it must be mobile this is further constrained, and becomes especially severe in wearable electronics, such as watches. Workarounds include projection and bringing the display closer to the eye, as in the newly re-emergent Google Glass and other AR applications. But even here it still seems we are constrained by a traditional 2D display paradigm, while sci-fi movies are almost routinely tempting us with unaided (no special viewing devices required) holographic projections – Iron Man and Star Wars provide just a couple of recent examples.


Of course sci-fi and reality don’t have to connect but the genre has been very productive in motivating real application advances, so why not here also? Following the movies, these could deliver room-sized projections (Death Star plans or the internals of Jarvis) or small projections (R2D2 projection of Princess Leia). By no longer tying image size to device size we could make compute devices as inconspicuous as we please while being able to deliver images of significantly greater size and richness (3D) than those we can view today on flat screens.


One nature-inspired development in this direction starts with fixed holograms, usually seen on credit cards and used for security. These are built on the way Morpho butterfly wings display color, not through pigment but by refraction and diffraction through scales on the wings, which causes constructive interference only for blue light. Instead of using the conventional split laser beam technique, this method combines images taken from multiple perspectives, through a computed combination sounding rather like a form of tomography, into a pattern of bumps and dips which can be written lithographically onto a sheet of plastic. The resulting hologram can be viewed in normal light, supports a full spectrum of colors and doesn’t require special viewing aids. This team (University of Utah) plans to extend their work to use liquid crystal modulators to support varying images. I imagine this direction could be interesting to the TI DLP group who already dominate the projection world.


HoloLamp introduced their AR projection system at CES this year, where they won an award. Again no special visual aids required but this already supports programmability and movement. There is some interesting and revealing technology in this solution. Apparently it uses face-tracking and motion parallax to create a 3D perception (not clear how that would work for multiple viewers?). They also claim it allows you to manipulate the 3D objects with your hands via stereo detection at the projector (an earlier report said that manipulation was only possible through the controller). Unsurprisingly, these guys use DLP ultra-short throw projectors (again TI, I assume). HoloLamp are running on one seed round raised last year.


Voxon Photonics with their VoxieBox has a different approach. They sweep a projection surface up and down very fast, so fast that the surface effectively disappears and all you see is the volume projection. (They also introduce some new terminology – instead of pixels they talk about voxels, a volume pixel, defining degrees of resolution and required refresh rates.) They are early stage, one seed round of funding so far and now looking for VC funding.
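The voxel terminology invites a back-of-envelope calculation of why volumetric display is so demanding. The figures below (grid resolution, volume refresh rate, bits per voxel) are illustrative assumptions, not Voxon specifications:

```python
# Back-of-envelope: raw data rate of a volumetric (voxel) display.
# All figures are illustrative assumptions, not vendor specs.
x, y, z = 200, 200, 200        # assumed voxel grid resolution
volume_hz = 15                 # assumed full-volume refresh rate
bits_per_voxel = 24            # RGB color

voxels_per_volume = x * y * z
bits_per_second = voxels_per_volume * bits_per_voxel * volume_hz
gigabits = bits_per_second / 1e9
print(f"{voxels_per_volume:,} voxels/volume, {gigabits:.2f} Gbit/s raw")
```

Even this modest 200-cubed grid implies a raw rate of nearly 3 Gbit/s, which is why the swept surface must move so fast and why refresh rate becomes a defining spec.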


Finally, how can we put holographic display on a watch or a phone? A team at RMIT University in Australia is working on an answer. The trick of course is to build a solution on something resembling modern semiconductor processes. But there’s a catch – visible light wavelengths range from ~390 to 700nm, rather larger than modern process feature sizes. That’s important in this application because holographic methods of the standard type use phase modulation to create an illusion of depth, significantly limiting possible phase shifts (and therefore 3D appearance) that can be generated in much smaller dimensions. The Australian team has solved this using a topological insulator material (based on graphene oxide), in a 25nm dimension, to build an optical resonant cavity to accumulate sufficient phase shifts for 3D projection. Images produced so far are low resolution but multi-color. This team also sees potential to overlay a resonant cavity film on a liquid-crystal substrate to support varying images.

Perhaps yet again, technology is catching up with the movies.


Your Car Sucks Data

by Roger C. Lanctot on 08-07-2017 at 12:00 pm

“Cars Suck up Data about You. Where Does It All Go?” This was the ill-informed headline in the NYTimes last month. The headline echoes the thoughts of Intel’s CEO Brian Krzanich (“Just one autonomous car will use 4,000 GB of data/day” – NetworkWorld) and Barclays’ Brian Johnson (“An ‘ocean of auto big data’ is coming, says Barclays” – CNBC), who asserted that a single autonomous car will be gathering 100 GB of data per minute.
There is a big difference between normal cars and autonomous cars, and that difference today is an enormous gulf of expensive hardware and software that will separate the average car from autonomy for many more years. There is also a difference between collecting data and storing or transmitting it. In the average car today precious little data is gathered, less is stored, and even less is transmitted anywhere.
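It is worth converting the two cited figures to a common unit to see how far apart they are. This is pure arithmetic on the quoted numbers:

```python
# Comparing the quoted autonomous-car data figures in GB per day.
intel_gb_per_day = 4_000                       # Krzanich: 4,000 GB/day
barclays_gb_per_min = 100                      # Johnson: 100 GB/minute

barclays_gb_per_day = barclays_gb_per_min * 60 * 24
print(barclays_gb_per_day)                     # 144000 GB/day
print(barclays_gb_per_day / intel_gb_per_day)  # 36.0x the Intel figure
```

The two headline-grabbing estimates disagree with each other by a factor of 36, which says something about how speculative the "ocean of data" numbers still are.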

This reality is good and bad. There is an ocean of value in the puddle’s worth of data currently collected by automobiles on the road. The data that is gathered is important and sometimes sensitive – but there isn’t a lot of it…yet.

Cars are increasingly equipped with event data recorders tuned to store vital parameters related to crashes – but you have to crash into something for this information to be recorded AND stored and thereby become useful to anyone. If you drive a GM vehicle and have a crash – and have paid your OnStar subscription – your car will automatically call for assistance and share your location and the vehicle vectors relevant to determining the severity of your crash. That’s a good thing.

(Conversely, over the years spouses have been known to call OnStar – and BMW Assist, Mercedes mbrace, etc. – for help locating their car, when what they really want to know is where their significant other is. We are also by now all familiar with law enforcement use of vehicle location information – e.g. the Boston Marathon bombers.)

Your smartphone connection in the car is likely storing your most recent calls and your contacts – which might be interesting to other people who use your car or subsequent users of your rental car. This is a privacy vulnerability that not all car makers have corrected, but it’s relatively minor.

Your previous destinations are stored on your embedded navigation system – which might also be interesting to significant others or law enforcement depending on what you do with your car. If your car is equipped with a telematics system you may be transmitting your location periodically to a traffic information service when you are using the navigation system. Your car may also send a relatively small payload of data every time you start your car communicating vehicle health and performance data which might translate to a service notification for an oil change or tire rotation.

Whatever data your car is gathering is likely covered in the fine print in your purchase, lease or rental agreement. It is likely that you have long ago signed away control of that information.

There are companies, like Otonomo and IBM, that are helping car companies create smartphone-like opt-in customer controls in the form of on/off sliders. In the future, these controls might be enabled via a smartphone app or Web portal connecting to the car.

If you are connecting your smartphone in your car, that device may be communicating far more information about you, your location, your musical tastes and your driving behavior. It is likely that much more information is transmitted via your connected phone than is communicated by your car.

While autonomous vehicles (currently consisting of slow moving shuttle buses or prototypes) are indeed capable of collecting and analyzing in real-time substantial amounts of data, little of this information is transmitted wirelessly – though much is stored for later analysis. This is the BIG difference between autonomous vehicles and existing vehicles.

Today, car companies still have a tortured relationship with data collection and wireless connectivity. Car companies like General Motors are currently fixated on reselling wireless access in the form of Wi-Fi and/or selling subscriptions to telematics services. What car companies have not yet grasped is that the real value lies in the vehicle data itself.

The opportunity lies in the vehicle performance data – which is something with which the average car driver/owner is not really concerned. The point is that car companies ought to be collecting vehicle performance data – a relatively low bandwidth activity – and selling it to suppliers to be analyzed.

If car companies paid closer attention to vehicle data they’d be more likely to avoid debacles like ignition switch failures, unintended acceleration and failing airbags. If car companies paid closer attention to vehicle data they’d be better equipped to save lives and money and avoid embarrassing appointments with the U.S. Congress…or the EU.

Suppliers should be paying for access to vehicle data from OEMs, so that consumers and vehicle owners shouldn’t have to pay for telematics subscriptions. So the issue isn’t really a question of intrusive personal data gathering. The issue is the responsibility of the car makers to gather vehicle data to better track and manage the health and performance of their vehicles on the road.

It doesn’t do anyone any good to create a boogie man out of vehicle data collection. I want my car company to be monitoring the performance of my vehicle. When something goes wrong I want my car company to know and to be obliged to inform me.

It is even less helpful to suggest that the collection of gigabytes or terabytes of data is relevant to current drivers of everyday passenger vehicles or, worse yet, to suggest that this mass of vehicle data will be transmitted wirelessly in real time. The massive amounts of data collected by prototype vehicles are for development and machine learning purposes – part of the incremental process inherent in designing autonomous driving systems.

There is one exception. Veniam, the creator of commercial vehicle connectivity systems using Wi-Fi, asserts that connected trucks and buses in Porto, Portugal using its system will transmit 1 TB of data/vehicle/day using its technology. What will make this possible, efficient and affordable is the use of Wi-Fi in the form of a city-wide mesh network thereby avoiding exorbitant wireless charges.
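A quick unit conversion shows what the Veniam figure implies as a sustained link rate, and why a city-wide Wi-Fi mesh makes it plausible where cellular would not:

```python
# What sustained link rate does 1 TB/vehicle/day imply?
# Simple unit conversion on the quoted figure, nothing more.
tb_per_day = 1.0
bits = tb_per_day * 1e12 * 8           # terabytes -> bits
seconds_per_day = 24 * 60 * 60
mbps = bits / seconds_per_day / 1e6
print(f"{mbps:.0f} Mbit/s sustained")  # ~93 Mbit/s
```

Roughly 93 Mbit/s around the clock is routine for Wi-Fi but would be ruinously expensive over metered cellular, which is exactly the economic argument Veniam is making.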

Veniam executives believe that a fully or partially connected autonomous fleet of cars will transmit a similar volume of data regarding status, heading, destination and speed among other things including video, a true data hog. Only time will tell if Veniam’s vision will come to pass. In the meantime, it’s time to stop worrying and love the data collection. It’s not about you. It’s about your car.

Roger C. Lanctot is Director, Automotive Connected Mobility in the Global Automotive Practice at Strategy Analytics. More details about Strategy Analytics can be found here: https://www.strategyanalytics.com/access-services/automotive#.VuGdXfkrKUk