Overcoming Mixed-Signal Design and Verification Challenges in Automotive and IoT Systems
At the recent DAC conference in Austin I attended a lunchtime panel where engineers from four companies discussed how they approach mixed-signal design and verification challenges for automotive and IoT systems. 2017 seems to have been the year of automotive at DAC, just as 2016 was all about IoT; both segments are attractive, automotive for its size and IoT for its growth potential.
FPGA-Based Networking for Datacenters: A Deeper Dive
I’ve written before about the growing utility of FPGA-based solutions in datacenters, particularly around configurable networking applications. There I only touched on the general idea; Achronix have since developed a white paper that examines the need in more detail and explains how a range of solutions based on their PCIe Accelerator-6D board can meet it.
You may know of Achronix for their embedded FPGA IP solution. In fact they launched their business with, and continue to enjoy success from, their Speedster-22i family of standalone FPGAs optimized for wireline applications. They now also provide a complete board built around a Speedster FPGA, a more turnkey solution ready to plug into datacenter applications.
Why is this important? Cloud services are a very competitive arena, especially among the 800-pound gorillas: AWS (Amazon), Azure (Microsoft), VMware/Dell and Google. Everyone is looking for an edge and the finish line keeps moving. You can’t do much (at least today) to differentiate in the basic compute engines; Intel (and AMD) have too much of a lead in optimizing those engines. But networking between engines is a different story. Raw speed obviously is an important factor, but so is optimizing virtualization; there is always more you can do between blades and racks to improve multi-cluster performance and reduce power. Then there’s security. Networks are the highways on which malware and denial-of-service (DoS) attacks spread, but they are also where these attacks can be stopped if sufficiently advanced security techniques can be applied to transmitted data.
So each of these cloud service providers needs differentiated, and therefore custom, solutions. But the volume of parts they need, while high, is probably not sufficient to justify the NREs demanded by ASIC options. More to the point, whatever they build this year may need to be improved next year and the year after that. What they need are configurable network interface cards (NICs), but so configurable that cost-effective off-the-shelf options don’t exist. That points fairly obviously to the advantages of an FPGA-based solution, which is exactly why Microsoft is using Intel/Altera FPGAs.
A good example of how a custom networking solution can outperform standard NICs is remote direct memory access (RDMA) between CPUs. Think about a typical tree configuration for a conventional local area network (LAN). North-south communication in the LAN, directly from root to leaves or vice versa, is efficient, potentially requiring only a single hop. But east-west traffic, between leaves, is much less efficient since each such transaction requires multiple hops. Standard NICs always hand processing off to the system controller, and these transactions, which are common in RDMA, add system burden and drag down overall performance. This is where a custom solution that handles east-west transactions can offload traffic from the system and deliver higher overall performance.
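To make the offload argument concrete, here is a rough back-of-the-envelope model; the packet rates, per-packet cycle counts and traffic mix are my own illustrative assumptions, not figures from the Achronix white paper.

```python
# Hypothetical model: host CPU cycles spent on packet processing for a given
# east-west / north-south traffic mix, with and without a smart NIC that
# offloads east-west RDMA handling. All constants are invented for illustration.

CYCLES_PER_PACKET_HOST = 3000      # assumed host cycles to process one packet
PACKETS_PER_SEC = 2_000_000        # assumed aggregate packet rate per server

def host_cycles(east_west_fraction: float, offload_east_west: bool) -> float:
    """Cycles per second the host spends on packet processing."""
    ns_packets = PACKETS_PER_SEC * (1.0 - east_west_fraction)
    ew_packets = PACKETS_PER_SEC * east_west_fraction
    cycles = ns_packets * CYCLES_PER_PACKET_HOST
    if not offload_east_west:
        cycles += ew_packets * CYCLES_PER_PACKET_HOST
    return cycles

if __name__ == "__main__":
    mix = 0.7  # east-west traffic often dominates inside a datacenter
    base = host_cycles(mix, offload_east_west=False)
    offloaded = host_cycles(mix, offload_east_west=True)
    print(f"host load without offload: {base / 1e9:.1f} Gcycles/s")
    print(f"host load with offload:    {offloaded / 1e9:.1f} Gcycles/s")
    print(f"host cycles freed:         {100 * (1 - offloaded / base):.0f}%")
```

Even this crude model shows why freeing the host from east-west packet handling translates directly into cycles returned to client workloads.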
Security is as much of a moving target as networking performance. Encryption of communication between virtual machines is becoming increasingly common, as are methods to detect intrusion and prevent data loss. These capabilities could be implemented in software running on the system host, but that burns system cycles which aren’t going to the client application, chipping away at competitive advantage or even at committed service level agreements. Again, a better solution in some cases is to offload this compute into dedicated engines, not only to reduce the impact on system load but also to provide a level of development and maintenance independence (and possibly additional security) by separating security functions from the VM system.
Achronix also talk about roles their solution can play in Network Functions Virtualization (NFV), from assistance with test and measurement in software defined networking (SDN) scenarios to providing an offload platform for NFV computation.
This is an exciting and rapidly growing domain. As we offload more and more of our local computation to the cloud, we expect to get higher performance at reasonable cost and, in most cases, with very high security. Effectively managing communication within the cloud is a big part of what will continue to grow cloud appeal; configurable, differentiated solutions based on platforms like the Accelerator-6D will be an important component in delivering that appeal. You can read more about Achronix’s Accelerator-6D solution HERE.
Whitepaper: The True Costs of Process Node Migration
Mentor, A Siemens Business, just released a new white paper entitled “The True Costs of Process Node Migration,” written by John Ferguson. It is a good quick read that highlights some of the key areas that are often overlooked when contemplating a shift of process nodes for your next design.
When considering a shift to a more advanced node we normally think of cost reductions brought on by smaller transistor sizes and pitches. The smaller the transistors and the tighter the pitch, the more logic you can fit on a die, essentially reducing the cost per transistor.
Over time, foundries further ingrained this idea by compounding savings through shifts to larger wafer sizes, again usually associated with new process nodes. Wafer sizes have grown from 100mm in the mid-1970s to the 300mm wafers used in today’s state-of-the-art fabs (450mm has long been discussed but has yet to reach production). The bigger the wafer, the more die that can be produced for the cost of that wafer. While prices for 300mm wafers are considerably higher than their 100mm counterparts, the number of die that can be processed on a wafer grows roughly with the square of the diameter, so the aggregate cost per die comes down. That’s the good news.
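As a quick illustration of that square-law scaling, the sketch below uses the standard die-per-wafer approximation; the 10mm x 10mm die size is my own assumption, not a number from the white paper.

```python
# Classic die-per-wafer estimate: gross die = wafer area / die area, minus an
# edge-loss term proportional to the circumference divided by the die width.
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    die_edge = math.sqrt(die_area_mm2)
    wafer_area = math.pi * (wafer_diameter_mm / 2.0) ** 2
    gross = wafer_area / die_area_mm2
    edge_loss = (math.pi * wafer_diameter_mm) / (math.sqrt(2.0) * die_edge)
    return int(gross - edge_loss)

if __name__ == "__main__":
    die_area = 100.0  # assumed 10mm x 10mm die
    for diameter in (100, 200, 300):
        print(f"{diameter}mm wafer: ~{dies_per_wafer(diameter, die_area)} die")
```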
John’s white paper goes on to expose many of the hidden costs that people often forget to consider. One is that if the die size stays constant while the amount of logic increases, more interconnect will be required to connect that logic. This comes at the price of additional metallization layers, which add to both the wafer processing costs and the non-recurring engineering (NRE) costs for the extra metallization masks. Additional metallization also implies additional chip power to push signals through the extra layers and vias, and more power can in turn mean a more expensive package to handle the extra heat.
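A simple way to see how these hidden costs interact is to amortize the mask-set NRE over the production run; the sketch below does that with numbers I invented purely for illustration, not data from the white paper.

```python
# Hypothetical cost-per-die model: recurring wafer cost per good die plus
# mask-set NRE amortized over the total production volume.

def cost_per_die(wafer_cost, dies_per_wafer, yield_frac, mask_nre, volume_die):
    recurring = wafer_cost / (dies_per_wafer * yield_frac)
    amortized_nre = mask_nre / volume_die
    return recurring + amortized_nre

if __name__ == "__main__":
    # Older node: cheaper wafers, fewer die per wafer, smaller mask set.
    old = cost_per_die(wafer_cost=4000, dies_per_wafer=500, yield_frac=0.90,
                       mask_nre=1_000_000, volume_die=5_000_000)
    # Advanced node: more die per wafer, but pricier wafers and extra masks.
    new = cost_per_die(wafer_cost=9000, dies_per_wafer=1200, yield_frac=0.80,
                       mask_nre=5_000_000, volume_die=5_000_000)
    print(f"older node:    ${old:.2f} per good die")
    print(f"advanced node: ${new:.2f} per good die")
```

Depending on volume, yield and mask count, the advanced node can easily come out more expensive per die, which is exactly the kind of surprise the white paper warns about.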
Another hidden cost comes from the fact that the geometric pitch on advanced nodes really hasn’t decreased much since the transition to nodes requiring multi-patterning lithography (roughly 20nm and below). The good news here is that the transistor pitches are still shrinking as we’ve turned the transistor on its proverbial side with FinFET technology. However, the wafer processing and mask NRE costs required to perform these miracles are rising much faster than before, so the incremental cost savings expected from moving to a more advanced node are not as clear as they once were.
The most overlooked costs of moving to a more advanced process node seem to come from the impact that new manufacturing approaches have on design constraints, and on the tools and flows used to implement and verify designs against them.
In general, advanced process nodes impose significantly more design constraints and rules to which the electronic design automation (EDA) tools must adhere. Examples include more advanced parasitic extraction and analysis tools to handle the more complex interconnects, as well as the detailed stress analysis that may be needed to accurately predict the behavior of each transistor.
Additionally, as multi-patterning is required for the super small geometries, more work must be done by the EDA tools to ensure that the layout of a dense layer can be successfully split into multiple masks.
These new constraints translate into the hidden costs of new tools, as well as training for the designers who use them. As designs get bigger, design teams grow, and with them the need for more EDA tools. While it may be unfair to blame these costs on a shift in process node, they are real costs that must be considered when planning a new design.
Couple the larger number of constraints and rules with larger teams and larger designs, and the real kicker comes in the form of the extra compute hardware required to process all the design and verification tasks. Arguably, some of the highest compute demands come from the physical verification and modeling tools.
The more advanced the process technology, the more advanced the modeling and design verification methods required. Many of the design verification steps are not even feasible without some kind of hyper-scaling capability. Fortunately for us, companies like Mentor are fielding products that can take advantage of hyper-scaling; Calibre nmDRC can scale up to thousands of cores. Without this we wouldn’t be able to build the kinds of system-on-chip (SoC) circuits that we do today.
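To see why that scaling matters, here is a minimal Amdahl’s-law sketch; the serial fractions are illustrative assumptions, not Calibre measurements.

```python
# Amdahl's law: thousands of cores only pay off if the serial fraction of a
# physical-verification run is driven very low.

def speedup(cores: int, serial_fraction: float) -> float:
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

if __name__ == "__main__":
    for serial in (0.05, 0.01, 0.001):
        row = "  ".join(f"{speedup(n, serial):7.1f}x" for n in (16, 256, 1000, 4000))
        print(f"serial fraction {serial:5.3f}: {row}")
```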
The hidden cost here is the cost to provision and maintain large compute farms on which all the tools can be run. Some companies choose to provision these compute farms internally while others are looking to leverage the cloud. Either way, the cost is real. Thankfully these costs can be amortized over many designs, design tasks and process nodes.
Another key issue is the worry of sunk EDA tool investments. Will the company be able to continue using the tools they purchased, or will they need entirely new tools to handle the next process node? In this vein, Mentor’s Calibre nmDRC team has done a great job of segregating and layering their capabilities to ensure that a company’s previous investment in their tools is not lost when migrating to a new process node. Calibre’s capabilities are set up so that advanced functionality can be added on an as-needed basis: core features required by all foundries are grouped and preserved in the base capabilities of the tool, while features such as multi-patterning, pattern matching, advanced fill technology, and advanced reliability checking can be readily added when and as they are needed.
In summary, the trade-off in shifting to a new process node comes down to comparing the higher NRE costs associated with the newer node against the expected return on investment from enhanced chip performance, a reduced bill of materials through integration, and lower overall unit prices. The good news for designers looking to move to more advanced process nodes is that EDA providers like Mentor continue to improve their tool capabilities and performance while making it economically feasible to add capabilities as you go.
See also:
Mentor White Paper for more details
Mentor Calibre nmDRC products
Introduction to Semiconductor Processing
We introduced you to Semitracks last week with an interview with their CEO, Chris Henderson. This week, we thought it might be worthwhile to continue that introduction with an overview of one of their more popular online and in-house courses:
Introduction to Semiconductor Processing.
One of the big challenges in our industry for new people is the breadth of knowledge required to do one’s job effectively. While today’s universities turn out plenty of electrical engineering graduates, many of them will not have had coursework specifically focused on the semiconductor industry. Chris mentioned to me that in a recent poll he ran during an introduction class for engineering new-hires at a large US-based semiconductor company, only about 20% had taken a course in Semiconductor Processing (Wafer Fabrication), fewer than 10% had taken a course in Semiconductor Testing, and none had taken a course in Semiconductor Packaging. These are fundamental activities of our industry, and yet few people knew anything about them. This means that upwards of 90% of all incoming engineers would require additional training just to have a basic understanding of how we make chips. Furthermore, although most engineers have had a course in statistics, they typically do not remember the material well enough for it to be useful in the analysis efforts that are so common in high-volume manufacturing.
Granted, there are universities that run research efforts in semiconductor fabrication, packaging and test, but the students who take these advanced courses or participate in the research are graduate students (Masters and PhDs). The bulk of today’s hiring in our industry is in the form of product and test engineers, as well as engineers who can help manage the company’s interactions with the foundry. These roles are typically filled not by PhD graduates but by BSEEs. These individuals need to understand the manufacturing process, if for no other reason than to know how manufacturing impacts their work. In many instances there are subtle interactions between wafer fabrication and design, wafer fabrication and test, wafer fabrication and packaging, and so on, that cause our industry great pain and grief. To even begin to address these problems, new product and test engineers need a basic understanding of these topics. They need to know the language, the terminology, the basic processes, and so on, to know what questions to ask.
A useful analogy is interacting in a foreign language. Imagine you are the concierge at a hotel in a foreign country. If you see two people arguing at the counter over a problem they’re having and you don’t know the language, you can’t even begin to help them. Once you learn the language, though, you can begin to understand their concerns and then come up with a way to help. The better you learn the language, the better equipped you’ll be to provide help. The same is true in our industry. If you don’t have a basic understanding of the manufacturing process, you won’t be able to help solve the problems that your company experiences.
This is where Semitracks can help you learn the language. If you’re new to this industry, I would encourage you to look into “learning our language” by enrolling in a course like this. It is like taking the first year of a foreign language: just as you begin to understand what people are saying after a year of study, you’ll learn enough to begin to understand how our industry works. You’ll need further education to become truly effective, but this type of course provides the basis for that understanding, absolutely.
IoT Value chains now and in the future!
Today, when a company decides to roll out an IoT solution to improve operational efficiency, the management hires a system integrator (SI) to develop and deploy the system. The SI will create and deliver a wholly owned end-to-end solution by externally sourcing all the components required to sense, analyse and decide. There is no room for third parties as active participants in the operation of the system or the data flow.
The SI will source the sensors, be they devices or modules. The SI will set up a network topology to collect the data and finally feed that data into the software that extracts the information used to drive decisions for that business. After system delivery, the complete IoT rollout is owned and managed internally. This is a typical approach for any new technology across all industries. At first only Fortune 500 companies have the capital for such deployments, and they contract the design and deployment to Tier 1 system integrators since the large investment leaves no margin for error.
After a few such companies start to have reasonable sets of data we will start to see the emergence of data cooperation or even data transactions in which there is data exchange between corporations in return for mutual benefit.
Up to this point there are no small players in the picture since there is no room for them. They are too small to be trusted with the task since their technology is not widely deployed yet.
As the technology matures it becomes more economical to no longer own all the parts of the system. (Think of all the in-house servers of the 1990s now happily living in the cloud.) In the case of IoT, the first logical part to be displaced is the sensing part. Imagine the case of a small company placing environmental sensors across a city and giving access to the output data to any buyer. This small company then removes the need for anyone else to develop and deploy such sensors. The value is not in the sensor but in the information extracted from the data acquired from the sensor.
This model maps directly to Fortune 500 companies. Think office plants. Many companies today rent plants to decorate their premises. For a fixed sum, an external company supplies and maintains a set of plants across the premises. The company is buying the fact that the premises are green and pleasant to be around; it is not buying plants and soil, and there is no need for a chief plants officer. Sensors fall into the same category. What the company requires is data, not sensors. A supplier of sensors will, for a fee, install and service units throughout the premises to supply data that feeds into the information system used to make decisions.
There will be a difference between what I refer to as captive sensors and free sensors. The data stream from captive sensors has only one predefined destination. For example, an environmental sensor in a company will send data only to that company. The data from a free sensor is available for anyone willing to pay for it.
Free sensors require a broker to set up the transaction, since the sensor/data supplier will be too small to transact directly with the companies wanting the data. Think of the Google AdWords broker model. Anyone with a web page can register with Google, the broker, to sell advertising to any company wanting to place an advertisement on that page. Without Google playing the role of the broker, there is no way for the web page owner to transact with the advertiser and capture revenue from the content of their page.
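To make the broker idea concrete, here is a purely hypothetical sketch of the matching step such a “sense” broker might perform; the class names, feeds, locations and fees are all invented for illustration.

```python
# Hypothetical "sense" broker: a registry that matches free-sensor data feeds
# with buyers and records the transaction. Everything here is invented.
from dataclasses import dataclass, field

@dataclass
class SensorFeed:
    owner: str
    kind: str             # e.g. "air_quality", "temperature"
    location: str
    price_per_day: float

@dataclass
class Broker:
    feeds: list = field(default_factory=list)
    ledger: list = field(default_factory=list)

    def register(self, feed: SensorFeed) -> None:
        self.feeds.append(feed)

    def buy(self, buyer: str, kind: str, location: str):
        """Match a buyer to the cheapest feed of the requested kind/location."""
        matches = [f for f in self.feeds if f.kind == kind and f.location == location]
        if not matches:
            return None
        feed = min(matches, key=lambda f: f.price_per_day)
        self.ledger.append((buyer, feed.owner, feed.kind, feed.price_per_day))
        return feed

if __name__ == "__main__":
    broker = Broker()
    broker.register(SensorFeed("weather_maker_42", "air_quality", "Porto", 0.50))
    deal = broker.buy("climate_lab_jp", "air_quality", "Porto")
    print(deal)
    print(broker.ledger)
```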
(This situation will change once Blockchain is deployed for web advertising but that is the subject of a different future blog. Same for sensor data. Blockchain will remove the need for a broker.)
The migration continues as the technology used to extract information from data also becomes widely established. The same model applies: one more component no longer needs to be wholly owned internally to access its outputs. Think of all the cloud analytics software available today. The data stream formats are defined, the type of analysis is selected, and out comes information that is used to make decisions.
The real value resides in the decisions taken based on the information extracted from the sensor data. Until the sensors are widely available they need to be created internally. Until data-to-information conversion is widely available, that function needs to be done internally. Once those two components are widely available, there is no need to divert internal resources to develop and maintain either.
The item that will not be externalised is the decision making, since this is the essence of the business and the reason why the investment was approved in the first place.
I believe the analytics part is in very good shape today since it is a well-understood, complex but contained operation. The hard problem to solve is the availability of a sensor-as-a-service scheme. There are millions of makers today who are ready to deploy all sorts of sophisticated sensors if they can find a way to capture revenue from that deployment. Today, there is no such possibility.
Why is that bad for everyone? Well, because if no one can capture revenue from free sensors then the IoT market will remain on a healthy but controlled growth curve. Not all companies are able to deploy an end-to-end solution internally, so IoT will take much longer to reach all companies. Say maybe a 10% to 20% annual growth rate, which is healthy but not spectacular compared to what would happen if those millions of makers could capture revenue from their free sensors.
Here is a specific example: a smart, connected weather station that someone installed in Europe as a technology demonstrator.
If there were a “sense” broker they could sign up with, they could sell that data on a global scale. Someone in Japan studying the butterfly effect could use it to refine their study of the global climate. It would be a very simple transaction if we had Blockchain, but we don’t, so we need a broker to handle it instead. We need someone to play the Google of IoT sense nodes to make those transactions happen.
IoT platforms will not drive spectacular market growth unless they integrate revenue capture for free sensors. Otherwise IoT remains an extension to current IT infrastructure. A fantastic extension for sure but not the global revolution that we dream of.
Anyone want to step up and play IoT “sense” broker? The window of opportunity is still wide open since Blockchain for IoT is still far off in the future.
Here Come Holograms
A quick digression into a fun topic. A common limitation in present-day electronics is the size and 2D nature of the display. If it must be mobile this is further constrained, and becomes especially severe in wearable electronics, such as watches. Workarounds include projection and bringing the display closer to the eye, as in the newly re-emergent Google Glass and other AR applications. But even here it still seems we are constrained by a traditional 2D display paradigm, while sci-fi movies are almost routinely tempting us with unaided (no special viewing devices required) holographic projections – Iron Man and Star Wars provide just a couple of recent examples.
Of course sci-fi and reality don’t have to connect but the genre has been very productive in motivating real application advances, so why not here also? Following the movies, these could deliver room-sized projections (Death Star plans or the internals of Jarvis) or small projections (R2D2 projection of Princess Leia). By no longer tying image size to device size we could make compute devices as inconspicuous as we please while being able to deliver images of significantly greater size and richness (3D) than those we can view today on flat screens.
One nature-inspired development in this direction starts with fixed holograms, usually seen on credit cards and used for security. These are built on the way Morpho butterfly wings display color, not through pigment but by refraction and diffraction through scales on the wings, which causes constructive interference only for blue light. Instead of using the conventional split laser beam technique, this method combines images taken from multiple perspectives, through a computed combination sounding rather like a form of tomography, into a pattern of bumps and dips which can be written lithographically onto a sheet of plastic. The resulting hologram can be viewed in normal light, supports a full spectrum of colors and doesn’t require special viewing aids. This team (University of Utah) plans to extend their work to use liquid crystal modulators to support varying images. I imagine this direction could be interesting to the TI DLP group who already dominate the projection world.
HoloLamp introduced their AR projection system at CES this year, where they won an award. Again no special visual aids required but this already supports programmability and movement. There is some interesting and revealing technology in this solution. Apparently it uses face-tracking and motion parallax to create a 3D perception (not clear how that would work for multiple viewers?). They also claim it allows you to manipulate the 3D objects with your hands via stereo detection at the projector (an earlier report said that manipulation was only possible through the controller). Unsurprisingly, these guys use DLP ultra-short throw projectors (again TI, I assume). HoloLamp are running on one seed round raised last year.
Voxon Photonics, with their VoxieBox, take a different approach. They sweep a projection surface up and down very fast, so fast that the surface effectively disappears and all you see is the volume projection. (They also introduce some new terminology: instead of pixels they talk about voxels, volume pixels, to define degrees of resolution and required refresh rates.) They are early stage, with one seed round of funding so far, and are now looking for VC funding.
Finally, how can we put a holographic display on a watch or a phone? A team at RMIT University in Australia is working on an answer. The trick of course is to build a solution on something resembling modern semiconductor processes. But there’s a catch: visible light wavelengths range from roughly 390 to 700nm, rather larger than modern process feature sizes. That matters because standard holographic methods use phase modulation to create the illusion of depth, and the phase shift that can be accumulated across much smaller dimensions (and therefore the 3D effect) is severely limited. The Australian team has addressed this using a topological insulator material (based on graphene oxide), in a 25nm dimension, to build an optical resonant cavity that accumulates sufficient phase shift for 3D projection. Images produced so far are low resolution but multi-color. This team also sees potential to overlay a resonant cavity film on a liquid-crystal substrate to support varying images.
Perhaps yet again, technology is catching up with the movies.
Your Car Sucks Data
“Cars Suck up Data about You. Where Does It All Go?” That was the ill-informed headline in the NYTimes last month. It echoes the thoughts of Intel’s CEO Brian Krzanich (“Just one autonomous car will use 4,000 GB of data/day,” per NetworkWorld) and Barclays’ Brian Johnson (“An ‘ocean of auto big data’ is coming, says Barclays,” per CNBC), who asserted that a single autonomous car will be gathering 100 GB of data per minute.
There is a big difference between normal cars and autonomous cars, an enormous gulf of expensive hardware and software that will separate the average car from autonomy for many more years. There is also a difference between collecting data and storing or transmitting it. In the average car today, precious little data is gathered, less is stored, and even less is transmitted anywhere.
This reality is good and bad. There is an ocean of value in the puddle’s worth of data currently collected by automobiles on the road. The data that is gathered is important and sometimes sensitive – but there isn’t a lot of it…yet.
Cars are increasingly equipped with event data recorders tuned to store vital parameters related to crashes – but you have to crash into something for this information to be recorded AND stored and thereby become useful to anyone. If you drive a GM vehicle and have a crash – and have paid your OnStar subscription – your car will automatically call for assistance and share your location and the vehicle vectors relevant to determining the severity of your crash. That’s a good thing.
(Conversely, over the years spouses have been known to call OnStar, BMW Assist, Mercedes mbrace and the like for help locating their car, when what they really want to know is where their significant other is. We are also by now all familiar with law enforcement use of vehicle location information, e.g. the Boston Marathon bombers.)
Your smartphone connection in the car is likely storing your most recent calls and your contacts – which might be interesting to other people who use your car or subsequent users of your rental car. This is a privacy vulnerability that not all car makers have corrected, but it’s relatively minor.
Your previous destinations are stored in your embedded navigation system, which might also be interesting to significant others or law enforcement, depending on what you do with your car. If your car is equipped with a telematics system, you may be transmitting your location periodically to a traffic information service whenever you use the navigation system. Your car may also send a relatively small payload of vehicle health and performance data every time you start it, which might translate into a service notification for an oil change or tire rotation.
Whatever data your car is gathering is likely covered in the fine print in your purchase, lease or rental agreement. It is likely that you have long ago signed away control of that information.
There are companies, like Otonomo and IBM, that are helping car companies create smartphone-like opt-in customer controls in the form of on/off sliders. In the future, these controls might be enabled via a smartphone app or a Web portal connecting to the car.
If you are connecting your smartphone in your car, that device may be communicating far more information about you, your location, your musical tastes and your driving behavior. It is likely that much more information is transmitted via your connected phone than is communicated by your car.
While autonomous vehicles (currently consisting of slow-moving shuttle buses or prototypes) are indeed capable of collecting and analyzing substantial amounts of data in real time, little of this information is transmitted wirelessly, though much is stored for later analysis. This is the BIG difference between autonomous vehicles and existing vehicles.
Today, car companies still have a tortured relationship with data collection and wireless connectivity. Car companies like General Motors are currently fixated on reselling wireless access in the form of Wi-Fi and/or selling subscriptions to telematics services. What car companies have not yet grasped is that the real value lies in the vehicle data itself.
The opportunity lies in the vehicle performance data – which is something with which the average car driver/owner is not really concerned. The point is that car companies ought to be collecting vehicle performance data – a relatively low bandwidth activity – and selling it to suppliers to be analyzed.
If car companies paid closer attention to vehicle data they’d be more likely to avoid debacles like ignition switch failures, unintended acceleration and failing airbags. If car companies paid closer attention to vehicle data they’d be better equipped to save lives and money and avoid embarrassing appointments with the U.S. Congress…or the EU.
Suppliers should be paying OEMs for access to vehicle data, so consumers/vehicle owners shouldn’t have to pay for telematics subscriptions. The issue isn’t really a question of intrusive personal data gathering; it is the responsibility of car makers to gather vehicle data to better track and manage the health and performance of their vehicles on the road.
It doesn’t do anyone any good to create a boogie man out of vehicle data collection. I want my car company to be monitoring the performance of my vehicle. When something goes wrong I want my car company to know and to be obliged to inform me.
It is even less helpful to suggest that the collection of gigabytes or terabytes of data is relevant to current drivers of everyday passenger vehicles or, worse yet, to suggest that this mass of vehicle data will be transmitted wirelessly in real time. The massive amounts of data collected by prototype vehicles is for development and machine learning purposes – it’s part of the incremental process inherent in designing autonomous driving systems.
There is one exception. Veniam, the creator of commercial vehicle connectivity systems using Wi-Fi, asserts that connected trucks and buses in Porto, Portugal using its system will transmit 1 TB of data per vehicle per day. What makes this possible, efficient and affordable is the use of Wi-Fi in the form of a city-wide mesh network, thereby avoiding exorbitant wireless charges.
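For a sense of scale, a quick back-of-the-envelope calculation (my arithmetic, not Veniam’s) shows what 1 TB per vehicle per day implies as a sustained data rate.

```python
# What does 1 TB per vehicle per day mean as a continuous data rate?
TB = 1e12                      # bytes (decimal terabyte)
SECONDS_PER_DAY = 24 * 3600

bytes_per_sec = TB / SECONDS_PER_DAY
megabits_per_sec = bytes_per_sec * 8 / 1e6

print(f"~{bytes_per_sec / 1e6:.1f} MB/s, or ~{megabits_per_sec:.0f} Mbit/s sustained")
# Roughly 11.6 MB/s (about 93 Mbit/s) around the clock: plausible on a
# city-wide Wi-Fi mesh, prohibitively expensive on metered cellular plans.
```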
Veniam executives believe that a fully or partially connected autonomous fleet of cars will transmit a similar volume of data regarding status, heading, destination and speed among other things including video, a true data hog. Only time will tell if Veniam’s vision will come to pass. In the meantime, it’s time to stop worrying and love the data collection. It’s not about you. It’s about your car.
Roger C. Lanctot is Director, Automotive Connected Mobility in the Global Automotive Practice at Strategy Analytics. More details about Strategy Analytics can be found here: https://www.strategyanalytics.com/access-services/automotive#.VuGdXfkrKUk
Is an ASIC Right for Your Next IoT Product?
According to a recent study by ARM, more than one trillion IoT devices will be built between 2017 and 2035. Based on research for an upcoming book on IoT devices and looking at SemiWiki IoT analytics I find that number to be reasonable, in fact, easily attainable. Even more interesting, the market for IoT devices and related services could be worth a staggering one trillion dollars per year by 2035! Clearly systems companies are in the best position to win this market as long as they can make their own IoT chips and that is where the tried and true ASIC business model comes back into play.
The ASIC business model came about in the 1980s and was the catalyst for what we now call the fabless semiconductor ecosystem. You no longer needed the massive capital and semiconductor expertise previously required to make a chip unique to your requirements and kept out of competitors’ hands. This ASIC revolution not only enabled a slew of fabless semiconductor companies that came to dominate the semiconductor industry, it also enabled a number of systems companies such as Apple, Cisco, Microsoft, and even Google to become what are now known as “fabless systems companies”: systems companies who control their silicon destiny.
While it is true that the pioneering ASIC companies (VLSI Technology and LSI Logic) have been assimilated, we are seeing a resurgence of ASIC activity specifically targeted at IoT devices. So if you are a systems company and want to control your silicon destiny, there are two important questions to ask yourself: First, is an ASIC right for your next IoT product? And second, how will you properly evaluate ASIC partners? Which brings us to an interesting eight-page whitepaper:
Is an ASIC Right for Your Next IoT Product?
This whitepaper from Presto Engineering, Inc., looks at factors that determine potential benefits of an ASIC solution relative to a discrete alternative and discusses ways to reduce costs and increase value by managing risk and complexity.
For those of you who don’t know, Presto Engineering has been providing engineering services and turn-key solutions for more than ten years and is a recognized expert in RF, analog, mixed-signal, and secured applications. Coincidentally, IoT chips are all about RF, analog, and mixed-signal design wrapped up in a secure package, right?
Bottom line: You can build an IoT ASIC for about $5M using proven process technologies by partnering with Presto Engineering. It really is all about reducing cost AND minimizing risk, so choose your IoT ASIC partner wisely.
About Presto Engineering, Inc.
Presto Engineering, Inc. provides outsourced operations for semiconductor and IoT device companies, helping its customers minimize overhead, reduce risk and accelerate time-to-market. The company is a recognized expert in the development of industrial solutions for RF, analog, mixed-signal and secured applications – from tape-out to delivery of finished goods. Presto’s proprietary, highly-secure manufacturing and provisioning solution, coupled with extensive back-end expertise, gives its customers a competitive advantage. The company offers a global, flexible, dedicated framework, with headquarters in the Silicon Valley, and operations across Europe and Asia. For more information, visit: www.presto-eng.com.
Augmented Reality Broke My iPhone!
As a semiconductor professional I am always looking for the “next big thing” that will spur growth in our industry. Mobile, IoT, Automotive, and AI are the current leaders we closely track on SemiWiki.com. Last year I delved into augmented reality via the Pokemon Go game and after a solid year of research (yes I finished the game) here is my report.
To be clear: Augmented Reality (AR) integrates a physical real-world view into a software application. In Pokemon Go’s case it uses the camera and overlays the game onto your surroundings. Even more important, Pokemon Go is a geolocation game that tracks your every move, provides a street map (via Google maps), and integrates other local information into your game. PokeStops for example are based on real places such as businesses (all Starbucks have PokeStops or PokeGyms), historical landmarks, places of interest, etc…
The first thing you will do when you play Pokemon Go is turn off the camera option because it is annoying and completely unnecessary. I can certainly see applications where this would work, but not with a game where your screen represents what you can see directly in front of you. The second thing you will do is turn off the music and sound effects because they are mind numbing, especially since you have to play this game for hours and hours to get anywhere.
The two big draws for this game are nostalgia for millennials who grew up on Pokemon and the social activity aspect. I have four children and suffered through years of PokeCards, PokeCostumes, and those horrible PokeCartoons. My whole family started playing Pokemon Go when it launched last July but have all quit, except for me, due to the time commitment required. My kids are full grown, have jobs, and are working on kids of their own. The social activity is the outdoor time (exercise) and the people you meet. Yes it is easy to spot Pokemon players and once you start seeing familiar faces a conversation is sure to follow. People of all ages play this game, some people go solo (like me) and some pair up with friends, parents, or spouses.
Now to my stats: over the last year I walked/biked 3,392km (north of 2,000 miles), racked up a total of 20,664,463 experience points (xp), and just last week I made it to level 40, the last level of the game. I had a minor advantage in that my local park has a 1km track that includes 24 PokeStops and 3 PokeGyms. The town I live in is also PokeRich, with 39 gyms and 100+ PokeStops. I travel quite a bit so I played Pokemon all over the world, and that was an experience in itself. Pokemon is played quite differently in other parts of the world, especially Asia, where they ride scooters instead of walking.
It may be interesting to note that, no matter where I go, of the three team colors blue is by far the most popular, then red, then yellow.
The good news: Pokemon Go engaged my mind and was an added incentive for my morning walks and afternoon bike rides. It was also fun meeting people and observing the different social aspects around the world. I have a competitive streak and enjoyed destroying the gyms and was the first person in my area to hit level 40. Pokemon Go is very data plan efficient with most everything running on the phone itself. You can tell because the phone literally heats up if you are power playing.
The bad news: Pokemon Go absolutely destroyed my iPhone battery. I had a 16GB iPhone 6 when I started last July. It was two years old, so I did not think much of it when the battery would not hold a charge for more than an hour or so. I got an external charger and was good to PokeGo. In December of last year I bought a new 32GB iPhone 6s and the same thing happened. Today my battery lasts at most two hours, unless I play Pokemon Go, in which case it is less than one hour. Seriously, you can watch the battery drain in real time while you play. Before you start saying how Apple sucks, the same thing happened to Android phones according to other PokePeople.
Worse news: Pokemon Go is also full of cheaters! Why people need to cheat at a game like this is beyond me; it really is a sad social commentary. They use software spoofing hacks and multiple accounts to gain advantages at the gyms and to collect all of the Pokemon without leaving the basement of their parents’ house. For a company that made a billion dollars in its first year, Niantic (the developer behind Pokemon Go) seems not to care about cheating at all. In fact, their customer service is nonexistent, so if cheaters bother you, don’t play the game. Personally I enjoyed battling against the cheaters, but that is just me.
Bottom line: AR apps are just starting out and have a ways to go but I do see value and believe that they will be one of the next big things for semiconductors. From what I am told, Apple’s next iPhone will be much more AR capable and that trend will continue with faster SoCs, custom silicon (AI/AR specific chips), more memory, and hopefully better batteries. Apple also released an ARKit to help developers with new apps. It is in Apple’s best interest to keep people on their phones and AR will definitely help, absolutely.
Machine Learning Optimizes FPGA Timing
Machine learning (ML) is the hot new technology of our time, so EDA development teams are eagerly searching for ways to optimize various facets of design using ML to distill wisdom from the mountains of data generated in previous designs. Pre-ML, we had little interest in historical data and would mostly look at localized comparisons with recent runs to decide what we felt were best-case implementations. Now, prompted by demonstrated ML value in other domains, we are starting to look for hidden intelligence in a broader range of data.
One such direction uses machine-learning methods to find a path to optimization. Plunify does this with their InTime optimizer for FPGA design. The tool operates as a plugin to a variety of standard FPGA design tools but does the clever part in the cloud (private or public at your choice), in which the goal is to provide optimized strategies for synthesis and place and route.
There is a very limited way to do this today, common in many design tools, in which the tool sweeps parameters around a seed value looking for an optimum. Perhaps this could be thought of as a primitive form of ML, but InTime takes a much wider view. It builds a database across multiple designs and multiple analyses of each design (and even looks across tool versions). Using this data and ML methods, it builds more generalized strategies to optimize PPA than can be achieved with sweep approaches, which are inevitably limited to the current incarnation of a particular design.
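As a rough illustration of the general concept (my own toy sketch, not Plunify’s algorithm or data), the code below learns from past build results to rank untried tool-option strategies; it assumes scikit-learn is available, and the option names, values and slack numbers are invented.

```python
# Toy example: learn a model mapping tool-option settings to achieved worst
# timing slack from past runs, then rank untried option combinations.
from itertools import product
from sklearn.ensemble import RandomForestRegressor

# Past runs: (effort_level, retiming_on, placement_seed_bucket) -> worst slack (ns)
past_options = [(1, 0, 0), (2, 0, 1), (3, 1, 2), (2, 1, 0), (3, 0, 1)]
past_slack = [-0.42, -0.15, 0.08, -0.05, 0.02]

model = RandomForestRegressor(n_estimators=50, random_state=0)
model.fit(past_options, past_slack)

# Rank all untried strategies by predicted slack and suggest the best few.
candidates = [c for c in product([1, 2, 3], [0, 1], [0, 1, 2]) if c not in past_options]
ranked = sorted(candidates, key=lambda c: model.predict([c])[0], reverse=True)
for c in ranked[:3]:
    print(f"try options {c}: predicted worst slack {model.predict([c])[0]:+.2f} ns")
```

A real tool would of course train on far richer features (design statistics, tool versions, full option sets) and far more runs, but the learn-then-rank loop is the essence of the approach.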
Naturally in this approach there is a learning phase and an inference/deployment phase (which Plunify calls Last Mile). InTime provides standard recipes for these phases, a sample of which is shown above. In the early phases of tool adoption, you’re building a training database but you still have to meet design deadlines so recipes help you get a running start through default and other strategies. As design experience and the training database grow, learning continues to improve and inferences will correspondingly improve.
Plunify note that one of the most immediate advantages of their technology is that you can get better results without needing to make design changes; the ML-based strategies are all about optimizing tool options to deliver improved results. You might still want or need to make design changes to improve further, but it’s good to know that the tool flow will then infer the best implementation based on that knowledge base of learned tool setup strategies. And why spend extra time on design tuning if the tool flow can get you to an acceptable solution without additional investment?
An obvious question: if ML can improve the implementation flow, why not also work on inferring design optimizations? Plunify are taking some steps in this direction with an early-release product using ML to suggest changes in RTL code which would mitigate timing and performance problems. Learning is sensitive not just to RTL issues but also to tool vendor differences, so learned optimizations may differ between tools.
Plunify is based in Singapore with offices in the US (Los Altos) and Chengdu (China). Crunchbase shows they raised a $2M Series A round last year, following earlier undisclosed seed rounds (they were founded in 2010). I have also been told that Lucio Lanza is an investor and Rick Carlson is an advisor. You can learn more about Plunify HERE.

