
Time for U.S. Fatality Reduction Targets
by Roger C. Lanctot on 09-01-2016 at 12:00 pm

Almost exactly a year ago I wrote a blog implicating the insurance industry in the high level of highway fatalities in the U.S. As part of that blog (“The Insurance Industry Has Blood on Its Hands”) I suggested that the National Highway Traffic Safety Administration ought to look into developing a fatality-reduction quota system for car makers as part of a Vision Zero implementation.


This strategy occurred to me in the context of the Corporate Average Fuel Economy requirement first enacted by Congress in 1975 in reaction to the then ominous Arab oil embargo. Thanks to that requirement, the overall average fuel efficiency of the automotive fleet on U.S. roads has almost doubled since that time.

U.S. Overall Average Fuel Efficiency

SOURCE: Wikipedia

For years, fuel efficiency has improved alongside the reduction in highway fatalities. But this tidy correlation came to an end in 2015, as highway fatalities climbed 7.2%, reversing a long-term downward trend.

Indications are that this upward trajectory is continuing into 2016, based on data published by the National Safety Council showing an 8.9% increase in highway fatalities for the first six months of 2016. The National Highway Traffic Safety Administration is sufficiently alarmed that it has opened up its Fatality Analysis Reporting System database to researchers, as the agency seeks help in better understanding where trouble may be revealed in the data.

http://tinyurl.com/zppda3d – Traffic Fatalities up Sharply in 2015

“Despite decades of safety improvements, far too many people are killed on our nation’s roads every year,” said U.S. Transportation Secretary Anthony Foxx. “Solving this problem will take teamwork, so we’re issuing a call to action and asking researchers, safety experts, data scientists, and the public to analyze the fatality data and help find ways to prevent these tragedies.”

The US Department of Transportation has a right to be proud of the progress made in reducing the rate and number of highway fatalities through passive safety systems. But this happy period of progress is over.

At the same time that fatalities are on the rise, vehicle sales have recovered to historically high levels and gasoline prices have plunged. The focus on fleet fuel efficiency suddenly seems out of touch with the need for safety systems which tend to drive up the cost and drive down the fuel efficiency of automobiles.

Is it too much to ask NHTSA to shift gears from fuel efficiency – especially at a time of ample fuel availability and rampant sales of SUVs and crossovers – to safety? Could saving lives actually be a higher calling than saving the planet? I ask these questions because it doesn’t look like we can have our cake and eat it, too.

What if CAFE requirements were suspended or extended in the context of a new regime of vision zero objectives targeted at reducing the carnage on U.S. highways? Given that the U.S. accounts for only about 3% of the 1.25M highway fatalities annually, such a nationwide effort to enhance safety and reduce fatalities might well vault the U.S. beyond its already strong leadership position in automotive safety.

More than a million global highway fatalities suggests a strong international market for vehicle safety innovations. That ought to serve as sufficient motivation to take on a vision zero agenda, but clearly something more is needed.

The time has arrived for NHTSA to set aside CAFE in favor of developing a Corporate Average Fatality Reduction system. Like CAFE, CAFR requirements will be applied to individual car companies, each of which will have its own fatality reduction targets. Unlike CAFE, car companies will not be allowed to exchange fatality reduction credits – the concept is too ghoulish to give serious consideration.

“The data tell us that people die when they drive drunk, distracted, or drowsy, or if they are speeding or unbuckled,” said NHTSA Administrator, Dr. Mark Rosekind. “While there have been enormous improvements in many of these areas, we need to find new solutions to end traffic fatalities.”

The truly transformative element of such a program is that it will force a change in how car companies interact with insurance companies and their own customers. Car companies will suddenly be forced to care how their cars are actually being used in the wild – putting pressure on insurance companies and licensing authorities and driving schools and dealers to raise the levels of expectation regarding driving acumen.

Rather than simply blaming drivers for the annual slaughter of drivers, passengers and pedestrians, car makers will suddenly be answerable for understanding how people drive and what can be done to mitigate bad driving. More importantly it will create a competitive environment intended to foster the implementation of new safety systems without specifying the nature of those safety systems.

In the end, car makers might come to the conclusion that they must band together to share data to better understand the vulnerabilities of the driving public and their own vehicles. All car companies would be dependent on one another for success in reducing overall fatalities. (Perhaps regional or state-level vision zero targets will be applied as well.)

In this environment, the sharing of data between car companies and the provision of intervehicle and vehicle-to-infrastructure communications in the interest of safety will be game-changing shifts in the industry. Will it be easy? Nah. Is it necessary? Yes.

NHTSA is taking a leadership position in opening up its crash data database, but this desperate plea for help suggests a need for desperate measures. It’s time for NHTSA and the USDOT to embrace the Vision Zero concept. After all, the U.S. should be leading the way in automotive safety, not highway fatalities. The U.S. is currently fourth in total fatalities behind China, India and Brazil. That’s just wrong.


Is Your Next Reality Going to be Augmented?
by Rick Tewell on 09-01-2016 at 7:00 am

John Lennon reportedly once said “Reality leaves a lot to the imagination…” and now we have the technology to do something about making our reality a lot more imaginative. Unless you have been living under a rock (and there is nothing wrong with that – I just haven’t found the right rock myself) there is a LOT going on these days in the virtual reality / augmented reality (VR / AR) markets. There is some confusion about the differences between virtual reality and augmented reality and which market is bigger and better. So first, let’s talk about the differences.

Augmented Reality (AR) is the overlaying or projecting of graphics content onto our view of the “real” world. While the real world might simply be what we see with our own eyes (i.e. wearing AR glasses), it could just as easily be the real world as seen by a camera (like your phone camera – ala Pokemon Go) or even previously recorded videos of the real world.

Virtual Reality (VR) is typically a completely computer generated world that you are an active participant in with little or no “real world” content. Ok – yeah – kind of like “The Matrix”. Generally speaking, the “killer” app for VR is video games – but there are a whole host of other uses as well from education, architecture, healthcare, etc. Eventually, the graphics content of VR will become so sophisticated that it might be hard to differentiate it from the real world – and then we might have to redefine the word “real”.

There is another concept called “mixed reality” out there – which, in my opinion, is just a significantly higher level of AR where extremely complex digital objects are inserted into your real world view. In the future, this higher level of digital object creation / insertion might become so sophisticated that you will possibly have a difficult time distinguishing it from actual reality.

I personally believe that AR is going to be a HUUGE market (surpassing VR by a pretty wide margin) and here’s why. AR is just so darn practical and can be made affordable to the point of almost being like those silly 3D glasses you wear at the movie theater.

I believe that there is a “sophistication hierarchy” for AR and that it goes something like the following – with each higher level incorporating the feature(s) of the levels below it.

AR Levels 0–1 – simple text + basic icon overlay – very inexpensive hardware – simple to build.

AR Level 2 – 2D graphics overlay – perhaps still images like photos or even movies. No “real time” generated 3D interactive content. Slightly more complex – perfect for places like museums or tourist attractions…

AR Level 3 – interactive 3D content graphics overlay – here things are starting to get VERY interesting… ala Microsoft HoloLens.

AR Level 4 – object recognition – yep – Terminator style.

AR Level 5 – real world object wrapping / overlay – way cool… I think this is the highest technology use level of AR – Tony Stark style.

From a basic electronics standpoint, Levels 0 and 1 require a simple projection unit and a very simple display controller. Something like our DCNano display controller! Very simple, very easy. The market for this is ENORMOUS all by itself. Think about simple turn-by-turn navigation or object labeling based on simple GPS data combined with compass data.
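
To make that Level 0–1 case concrete, here is a minimal sketch (the coordinates, field of view and display width are made-up illustration values) of how a headset could combine a GPS fix and a compass heading to decide whether a labeled point of interest is in view, and where on the display to draw its text:

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from the user (lat1, lon1) to a point of interest."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360.0

def label_position(user_lat, user_lon, heading_deg, poi_lat, poi_lon,
                   fov_deg=40.0, display_width_px=640):
    """Return the horizontal pixel at which to draw a text label, or None if the
    point of interest is outside the wearer's field of view."""
    offset = (bearing_deg(user_lat, user_lon, poi_lat, poi_lon) - heading_deg + 180) % 360 - 180
    if abs(offset) > fov_deg / 2:
        return None  # behind or beside the wearer; nothing to overlay
    # Map the angular offset linearly onto the display width.
    return int((offset / fov_deg + 0.5) * display_width_px)

# Example: a point of interest slightly east of north while the user faces north.
print(label_position(48.8566, 2.3522, 0.0, 48.8570, 2.3530))
```

Nothing in this sketch needs a GPU, which is part of why the Level 0-1 hardware can be so inexpensive.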

Level 2 requires the addition of a 2D GPU – either a simple raster engine or something more sophisticated like a 2D vector graphics processor. The level of sophistication that can be added here is quite impressive.

Level 3 requires the addition of a 3D GPU – now things are getting more serious. With a 3D GPU – you can project full 3D objects into your reality that you can interact with at some level. Again – think something like Pokemon Go except at a useful, practical level. OK – STOP IT! Pokemon Go is NOT a practical use of AR – although it is addictive and fun…

Level 4 requires the addition of a vision image processor (VIP). While you could handle this with a big enough GPU, a VIP will be MUCH lower power – and you will still need a GPU to project 3D graphics into your world.

Level 5 requires the addition of a VIP that has deep learning capabilities (like CNN / DNN features – see above!). So why do you want machine learning in this case? You need to be able to recognize an object with enough sophistication to overlay and wrap graphics around it without having to send data into the cloud for processing. The best way to do this is to have a “trainable” system so you don’t end up with an AR system that only recognizes cows until it can call home.

So – what are the killer use cases for all of this AR goodness? How vast is your imagination? I was recently at the Roman ruins in Glanum, France. It was very hard to imagine how incredible that site must have been (and it is a magnificent set of ruins). Imagine having AR glasses on that could project – in real time and from the vantage point where I was standing – the fully reconstructed ruins. Imagine walking into the market and seeing the stalls and maybe even people buying and selling. Imagine walking into the temple and seeing the magnificent statues and artwork that might have been there. Imagine just seeing the architecture in all its glory.

Another use case? How about something simple like looking at a sign in a foreign language and having it instantly translated for you? Or how about translating a menu for you from a foreign language?

More use cases? How about looking at something that needs to be serviced or repaired and having the step by step instructions projected, walking you through every step? How about combining IoT and having the device tell you what component needs replacing and showing you where it is and what to do to replace it – or even ordering it if you don’t have it with a simple gesture.

I could go on and on and on – but the bottom line is that AR done right will be extraordinarily affordable and will change our world significantly. I think that even John Lennon would have appreciated the ability to sprinkle imagination all over our reality…


AMD Zen and the Art of Microprocessor Maintenance
by Don Dingee on 08-31-2016 at 4:00 pm

AMD is a fantastic company with highly talented people, but for some reason just hasn’t managed to put a winning streak of microprocessor architectures back-to-back. It’s frustrating to watch: they ride like mad to catch up to or even pull slightly ahead of Intel, then fall back in the pack when they have to make an extended pit stop, then ride like mad again to close the gap. Continue reading “AMD Zen and the Art of Microprocessor Maintenance”


The Role of IP Selection and Integration in First-Time Silicon Success
by Daniel Nenni on 08-31-2016 at 12:00 pm

As IP expert Eric Esteve has written, Semiconductor IP has consistently outgrown the other design enablement segments and will continue to do so. This has been my personal experience as well during my EDA and IP career so we should all know how important Semiconductor IP is. We certainly know how valuable it is with ARM valued at $32B!

Also read: Design IP Growth Is Fueling 94% of EDA Expansion

Since the beginning of SemiWiki, Semiconductor IP has driven the most traffic. As of today there have been 640 IP blogs published on SemiWiki that have been viewed a total of 2,739,472 times, absolutely! Some of the top IP search terms we have seen are: IP Verification, IP Integration, IP Validation, Low Power IP and of course IP Selection, which brings us to the latest Open Silicon Webinar:

The Role of IP Selection and Integration in the Achievement of First-Time Silicon Success

This Open-Silicon webinar will address key considerations when selecting and integrating IP into ASIC/SoC designs. No longer can the procurement and integration of third-party IP be done in isolation as just an IP block. System issues associated with choosing the right hardware, appropriate firmware and optimum embedded software in which the ASIC/SoC will fit are now the biggest driver for third-party IP procurement. As a result, navigating the many challenges associated with blending diverse IP from multiple vendors, increasing software complexity, design challenges in process manufacturing, hardware implementation and emulation, trade-offs in system architecture, and maintaining compliance with ever-evolving standards is the key to successful integration and first-pass silicon.

Those joining the webinar will learn what third-party IP vendors and turnkey ASIC solutions providers are doing, from a system perspective down to the transistor level, to not only mitigate these challenges and facilitate seamless integration, but also reduce cost and incorporate greater flexibility and functionality while maintaining the system perspective using pre-verified and customized IP blocks. The panelists will delve deep into the architectural deliverables, trade-offs on IP selection, and quality benchmarks that enable the best performance of an ASIC/SoC for any specified application or operating condition. This includes compatibility assurances across all of the IP’s front/back-end views and deliverables within any specific tool flow, as well as the foundry processes. Other topics to be discussed include system level checklists, integration checklists, integration reviews, tape-out reviews, certifications, evaluation boards and more.



Speaker Biographies:

Elias Lozano
Senior Director, IP Sourcing, Open-Silicon
Elias serves as Senior Director of IP Sourcing for Open-Silicon. He helps procure and qualify a wide variety of analog, mixed-signal and digital IP. Elias has over 25 years of specialized experience in the analog, mixed-signal and digital design markets. He is well versed in project implementation working in the forefront of VLSI/mixed-signal/analog technology, design and back/front-end ASIC methodologies. Elias has demonstrated proven success in implementing complex custom SoCs with first time working silicon. Prior to joining Open-Silicon, Elias held various IP engineering and management roles at RAMBUS, National Semiconductor and LSI Logic. Elias holds a master’s degree in electrical engineering from Washington State University.

Vamshi Krishna
IP Solutions Manager, Open-Silicon
Vamshi serves as IP Solutions Manager for Open-Silicon. He is responsible for managing third-party IP function, which involves selection, procurement, quality check and integration of various enterprise and consumer application IPs. Prior to joining Open-Silicon, Vamshi was Hard IP Applications Engineer at Intel. Prior to that, he served as Product Applications Engineer at MosChip Semiconductor, where he was responsible for IP/product quality checks, delivery and support. Vamshi holds a bachelor’s degree in electronics and communications engineering from Kakatiya University, India.

About Open-Silicon
Open-Silicon transforms ideas into system-optimized ASIC solutions within the time-to-market parameters desired by customers. The company enhances the value of customers’ products by innovating at every stage of design — architecture, logic, physical, system, software and IP — and then continues to partner to deliver fully tested silicon and platforms. Open-Silicon applies an open business model that enables the company to uniquely choose best-in-industry IP, design methodologies, tools, software, packaging, manufacturing and test capabilities. The company has partnered with over 150 companies ranging from large semiconductor and systems manufacturers to high-profile start-ups, and has successfully completed 300+ designs and shipped over 120 million ASICs to date. Privately-held, Open-Silicon employs over 250 people in Silicon Valley and around the world. www.open-silicon.com


Three Steps for Custom IC Design Migration and Optimization
by Daniel Payne on 08-31-2016 at 7:00 am

Popular companies designing smart phones, CPUs, GPUs and Memory components all employ teams of custom IC designers to create the highest performance chips that are as small as possible, and at the lowest costs. How do they go about doing custom IC design migration and optimization when moving from one process node to another one? That’s a great question, so I took some time at #53DAC in Austin to listen to one EDA vendor share their approach inside the TSMC booth where Open Innovation Platform (OIP) companies were making presentations. Michael Pronath from MunEDA was able to present 22 slides in about 20 minutes, so I’ll give you my recap in this blog.

Semiconductor IP using full-custom design and layout techniques is typically found in high volume applications or high-performance functions like:

  • Memory (NVM, Flash, DRAM, SRAM)
  • Custom Cells (datapaths, register files, PHY, clock distribution)
  • RF (VCO, PLL, LNA, mixer)
  • AMS (voltage reference, amplifiers, data converters)

As management asks the design team to migrate from one node like 28nm to another node like 16nm, there’s a lot of work involved, because you have to consider the impact of process variation on the design yield, how aging affects reliability at the smaller node, and how to achieve the best PPAC (Power, Performance, Area, Cost) in the time allotted. In the pure digital world the designers typically use logic synthesis, try some floorplanning, run some STA and iterate until timing closure is reached. In the AMS and RF IP world life is not so simple when it comes to porting cells:

  • Change device sizes
  • Adjust geometries like MOS width and length
  • Update biasing
  • Meet new specifications
  • Verify new Vdd levels

An approach used at MunEDA to automate this process uses the concept of specification-driven IP porting. Here’s a quick flow showing how you can start this process with your old schematic and prior PDK models, adding the new PDK models and using the Schematic Porting Tool (SPT):

So this SPT software lets you define how to replace devices from the source PDK with their new counterparts in the target PDK, giving you flexible property mapping and automating the shrinking. This happens for all device types – MOS, R, C and others – and their properties. SPT even understands how to work with your hierarchical schematics. Why use an automated schematic porting process?

  • Correct and consistent replacement of device instances, no manual errors
  • Speed, 1,000s of devices migrated in just seconds
  • Multiple device types supported
  • Automated documentation on the porting results
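
As a rough illustration of what such a rule-driven replacement pass involves (this is a hypothetical sketch, not MunEDA's SPT; the device names and shrink factor are invented), each source-PDK device type can be mapped to a target-PDK counterpart plus a property transform:

```python
# Illustrative sketch of rule-driven schematic porting: each source-PDK device
# type maps to a target-PDK device plus a function that rewrites its properties.
SHRINK = 0.5  # hypothetical geometric shrink factor between the two nodes

def shrink_mos(props):
    # Scale W and L, but never go below an assumed minimum channel length.
    return {"w": props["w"] * SHRINK, "l": max(props["l"] * SHRINK, 0.016), "nf": props["nf"]}

def copy_passive(props):
    return dict(props)  # keep R/C values; only the model name is re-targeted

PORTING_RULES = {
    "nch_25_src": ("nch_16_tgt", shrink_mos),
    "pch_25_src": ("pch_16_tgt", shrink_mos),
    "rppoly_src": ("rppoly_tgt", copy_passive),
}

def port_instance(inst):
    """Replace one schematic instance according to the mapping table."""
    target_model, transform = PORTING_RULES[inst["model"]]
    return {"name": inst["name"], "model": target_model, "props": transform(inst["props"])}

netlist = [{"name": "M1", "model": "nch_25_src", "props": {"w": 2.0, "l": 0.04, "nf": 4}}]
print([port_instance(i) for i in netlist])
```

The point of automating this table-driven replacement is exactly the list above: the same rules are applied to thousands of instances with no manual slips.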

Related blog – IC Design Optimization for Radiation Hardening

At DAC Michael showed an example of migrating a source schematic in Virtuoso that used TSMC N40 bulk process and then converted it into a 16nm FinFET process (TSMC N16). As the title of the blog promised, here are the three major steps involved with automated porting:

  1. Schematic porting, IP re-use (circuit and process migration)
  2. Design assessment (topology adjustment, simulations)
  3. Sizing for sign-off (circuit analysis, optimization, verification)

    Circuit sizing and optimization is where device parameters like the width and length of MOS transistors are automatically selected in order to meet the circuit specifications for metrics like noise, jitter, speed, stability, power, area, robustness, and yield. You really want an automated approach here instead of a manual approach if you are under a deadline. MunEDA not only offers SPT, but it has an entire suite of tools called WiCkeD that are integrated and kind of push-button to operate.
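
To illustrate what “automated sizing” means in the simplest possible terms (this is a toy brute-force sketch around an invented analytic model, not how WiCkeD works, which in practice drives a simulator across corners with much smarter search), a sizing loop evaluates candidate W/L values and keeps the best one that meets the specs:

```python
import itertools

def evaluate(w, l):
    """Stand-in for a circuit simulation: returns (gain_db, power_mw) for one sizing.
    In a real flow this would be a SPICE run across process/voltage/temperature corners."""
    gain = 20 + 8 * (w / l) ** 0.5 - 0.002 * w   # toy analytic model, illustration only
    power = 0.05 * w / l
    return gain, power

def size_for_spec(min_gain_db=40.0, max_power_mw=1.0):
    """Search a small W/L grid and return the lowest-power sizing that meets the gain spec."""
    best = None
    for w, l in itertools.product([2, 4, 8, 16, 32], [0.06, 0.1, 0.2]):
        gain, power = evaluate(w, l)
        if gain >= min_gain_db and power <= max_power_mw:
            if best is None or power < best[2]:
                best = (w, l, power, gain)
    return best

print(size_for_spec())
```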

    So how does this theory apply to actual circuits? Michael shared an example of a VCO designed in a TSMC 65nm RF process that needed to have phase noise reduced, power consumption kept low, and the effects of transistor mismatch minimized. Using the WiCkeD software tools a fully automated sizing was done in a short time, reducing phase noise and predicting high yield.

    Related blog – SRAM Optimization for 14nm and 28nm FDSOI

    Another example given was for an I/O level shifter block in TSMC 10nm FinFET technology where the designers needed to reduce sensitivity to process variation and Vdd variation. Using the three-step approach they were able to optimize MOS widths to reduce corner spread of duty cycle and delays, running in under 10 minutes on one CPU. Corner spread was reduced by 50%.

    Summary
    My first job out of college was working at Intel and I had to manually migrate a DRAM design from one process node to a smaller one, taking me at least a man-year of effort. If only I had had tools like WiCkeD from MunEDA back then, I could have worked smarter instead of harder.

    Related blog – Design and Optimization of Analog IP is Possible


    A new world of 10nm design constraints
    by Beth Martin on 08-30-2016 at 4:00 pm

    Every time the industry transitions to a smaller process node, IC design software undergoes extensive updates.

    I talked to a couple of experts in physical design at Mentor Graphics about what is involved in making place-and-route software ready for a new node. This is what I learned from Sudhakar Jilla, the IC design marketing director, and Benny Winefeld, a senior product engineering manager. They said that for digital place and route software to deal with new and more complex routing rules, changes had typically involved upgrades to the router and DRC checker. However, the introduction of new constraint types, such as implant (submetal) rules, started to have a direct impact on other place and route subsystems.

    Physical violations that emerge right after the placement and legalization stages can be roughly classified into two major categories:

    • DRC errors on submetal layers, such as implants, including:

      • Width, spacing, and area DRC on implant layers
      • Jog rules, typically on Oxide Diffusion (transistor active area) layer
      • Prohibited Drain-Drain abutment

    • Problems on metal and via layers, including:

      • Direct DRC violations, including same mask spacing, between ports or blockages of adjacent lib cells
      • Violations between lib cell ports or blockages and preroutes, such as wires and vias in the power/ground grid
      • Unroutable cell ports
      • Pin alignment and track color matching

    To see a detailed description of the new constraints and how they affect various place and route engines, download Mentor’s new whitepaper Understanding Physical Design Constraints in the 10nm Era.

    Here are a few examples:

    Submetal rules—width, spacing, and area
    In the most basic of scenarios, standard cells contain just two shapes on submetal layers, dividing the cell vertically in half — one half for the N implanted area, another for the P. These shapes are usually expressed in LEF files as blockages and are often called implant layers. If such a submetal shape is too small, then it is flagged as a DRC violation (Fig 1).

    Submetal rules—oxide diffusion jogs
    Some 16/14nm technology flavors added two new types of submetal rules: minimum-jog and drain-drain abutment. Min-jog violations usually apply to the oxide diffusion (OD) layer (Fig 2).

    The placer can fix this by inserting a cell that matches the cell in the middle, or by inserting a gap that will later be filled (Fig 3).

    Metal and via layer rules—pin access and direct DRC with preroutes
    Pin access problems are not fundamentally new but are becoming more common. The figure below (Fig 4) shows a cut-spacing violation. The placer should be able to avoid this violation without running a full DRC during every cell move.


    Abutted cells can also cause pin blockages, but you don’t want to deploy a blanket prohibition of abutment between all cluster members. Mentor’s place and route tool, Nitro-SoC, takes a statistical/analytical approach and uses soft constraints to improve routability.

    Metal and via layer rules—Pin-to-track color matching
    This is a new placement constraint that emerged at 10nm because of self-aligned double patterning. It has a direct impact on cell placement, as cell ports must be centered on a routing track and the mask and track colors must match. The routing pitch is not always equal to 2x the site width, so a cell can easily land in locations where ports will either miss the track or will be on the opposite color (Fig 5).

    Nitro-SoC placer can figure out the discrete subset of legal locations for each library cell that will guarantee alignment and mask matching between cell ports and routing tracks.
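
A small sketch helps show why only a discrete subset of placement sites is legal when the routing pitch is not a multiple of the site width and track colors alternate. The pitches, port offset and color convention below are illustrative assumptions, not values from the whitepaper or from Nitro-SoC:

```python
# Illustrative numbers only: placement sites and routing tracks have different
# pitches, and alternating tracks carry opposite mask colors, so only some site
# offsets are legal for a cell whose port must sit on a mask-1 track.
SITE_PITCH = 48        # nm, hypothetical standard-cell site width
TRACK_PITCH = 64       # nm, hypothetical routing pitch (not 2x the site width)

def legal_sites(port_offset_nm, required_color, n_sites=20):
    """Return the placement-site indices where the port center lands exactly on a
    routing track of the required mask color (0 or 1)."""
    legal = []
    for site in range(n_sites):
        x = site * SITE_PITCH + port_offset_nm
        if x % TRACK_PITCH == 0:                   # port is centered on a track
            track_index = x // TRACK_PITCH
            if track_index % 2 == required_color:  # track colors alternate
                legal.append(site)
    return legal

print(legal_sites(port_offset_nm=16, required_color=1))  # only a few sites qualify
```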

    For technologies with triple-patterned M1, the required same-mask M1 spacing is very large. Both spreading cells and designing cells to prevent triple-pattern conflicts are too conservative. Instead, Nitro-SoC uses a hybrid approach of swapping masks wherever possible. This means that the placer will replace a cell with its “mirror” variant, where M1 mask1 shapes become mask2 and vice versa. Because this is not an actual cell movement, it has no impact on routability or timing. Only those violations that can’t be cured with mask swapping are repaired with spreading.

    If you want far more details and examples of violations at 10nm and below and how they are handled during physical implementation, download the new whitepaper Understanding Physical Design Constraints in the 10nm Era.


    Millennial Tyranny in the Connected Car
    by Roger C. Lanctot on 08-30-2016 at 12:00 pm

    Nielsen’s latest AutoTECHCAST study once again introduces confusion to the connected car debate, but it’s understandable and relates to a demographic gradient around technology. Young people are aware of and interested in so-called “brought-in” technologies, while the majority of (older) people who make the majority of car purchases are more interested in built-in technologies, according to Nielsen’s study.

    Because cars last so long and probably because it’s more fun marketing to younger people, car companies are permanently fixated on marketing their products to younger consumers. Lately this means that millennials rule marketing messages. It also means that “oldies” can feel a little alienated or out of touch.

    One of my middle-aged sisters (well, they’re older than me) railed at me recently over the increasingly confusing and, more importantly, distracting array of dashboard doodads in new cars. I was driving a Ford rental at the time.

    People like my sister are generally only interested in getting from point A to point B. In-car doodads are not universally interesting to all car buyers or drivers.

    Young people are more interested in technology and more aware and comfortable with technology, Nielsen tells us. This is precisely why car makers are offering mobile device integration systems such as Apple CarPlay and Alphabet’s Android Auto – and why car makers are offering their own proprietary integration systems. They want to connect with younger consumers and they don’t want to be left out of the latest tech wave.

    That’s important to understand because it reflects the fact that car makers are not introducing smartphone integration based on consumer demand. Car makers are introducing smartphone integration like CarPlay and Android Auto to mitigate distraction – and their potential liability exposure – and as a convenience.

    The problem is that young people prefer to use their devices without the help of an integration system – at least judging from my observations of my own twenty-something children and their driving and mobile device use behavior. They prefer to interact directly with their devices because that is the interface with which they are familiar. And let’s admit it – connecting a smartphone in a car is an unnatural act.

    The problem with the Nielsen study, and why it is so confusing, is that the study’s conclusions, based on a claimed completed survey base of more than 11,000, are, according to Automotive News:

    • The base familiarity of the 44 auto-related technologies included in the survey was 25%;
    • Consumers in connected cars prefer built-in systems over brought-in systems;
    • Millennials are the most interested in new technology;
    • Nearly one-third of respondents have never heard of connected car features such as access to the Internet and cloud services.

    The headlines reporting the Nielsen study emphasized either the fact that consumers were confused about connected car technology or that it was a very low purchasing priority. This is hardly a surprise because most people using or buying a car just need transportation. They are carrying their “connectivity” in their pockets.

    The oddest aspect of the survey is that it is published by Nielsen which acquired radio ratings company Arbitron more than a year ago. Arbitron had built a $500M business on estimating listening audiences for the purposes of producing ratings which serve as a crucial element in pricing radio advertising buys – in the same way that Nielsen meters determine the pricing of TV advertising.

    The irony is that neither Arbitron nor Nielsen has a good handle on in-car listening which easily accounts for more than 50% of total radio listening. The Nielsen survey serves to reinforce the fact that Nielsen simply doesn’t understand the technical or marketing environment that is redefining content consumption in the car.

    But it’s not just that Nielsen doesn’t understand connected car content consumption. Nielsen doesn’t understand that safety, fuel efficiency, cost of ownership, reliability and brand have been and likely will continue to be more important than connectivity.

    Older consumers in particular understand this more than most because they’ve owned and used cars for a longer period of time and therefore understand the value propositions that endure. Connectivity and infotainment have yet to prove their enduring value, but may look or sound cool to younger potential car buyers.

    To add further confusion to the mix, consumers may be interested in connecting their smartphones in cars, but may be confused as to why they’d want or need a built-in connection. Car companies have yet to define a compelling value proposition for the embedded connection other than selling data minutes for Wi-Fi, which actually doesn’t sound that attractive.

    In the absence of a compelling value proposition, it’s no wonder consumers are confused as to the “meaning” of car connectivity in the context of such a survey. What is happening is that cars themselves are becoming more confusing and consumers are becoming more distracted and less interested.

    It’s hardly a shock that, in this environment, confusion reigns and content consumption is fragmenting. In this context car companies have an opportunity and an obligation to take charge. Simplify the human machine interaction in the car, emphasize safety, and focus on value.

    Car connectivity won’t become a core automotive value proposition until car companies stop chasing millennials with in-dash apps and, instead, focus on leveraging car connections for enhancing safe driving and customer retention. Consumers are confused – and uninterested – because car makers are confused. Nielsen too. (Next time, Nielsen should ask about consumer interest in faster horses.)


    I3C Will Support MIPI Pervasion Beyond Mobile: IoT, Wearable, Automotive
    by Eric Esteve on 08-30-2016 at 7:00 am

    The MIPI I3C Draft Specification is now available to all MIPI Alliance members in First Draft Review, but we can be confident that I3C has already been implemented by some of the members. Before the I3C specification, the de facto communication standard for sensors in mobile and consumer applications was I²C, requiring only two signal lines (clock and data). But I²C has several shortcomings, including the inability for sensor slaves to initiate communication, protocol overhead that reduces throughput, and pull-up resistors that limit clock speed and increase power dissipation.

    Another commonly used standard is the serial peripheral interface, or SPI, which has a major disadvantage: SPI lacks a clearly defined standard, which has resulted in many different implementations. According to the MIPI Alliance: “I3C incorporates and unifies key attributes of I2C and SPI while improving the capabilities and performance of each approach with a comprehensive, scalable interface and architecture. The specification also anticipates sensor interface architectures that mobile, mobile-influenced, and embedded-systems industries will need in the future.”

    One picture is worth a thousand words, and the above picture from the MIPI Alliance clearly deals with the two main concerns for mobile, battery-powered systems: performance and power consumption. Performance is crucial in the mobile industry, the most competitive market segment today, where smartphone manufacturers fight to release new products offering better features every year. Looking at the right part of the picture is enough to be convinced that the I3C raw bitrate is far better than I2C (in fact you can’t even see the I2C value!) and that the I3C specification offers communication modes like HDR-DDR, HDR-TSP and HDR-TSL able to support the sensor interface architectures that the mobile industry will need in the future.

    Needless to say, for mobiles, IoT edge devices or wearables, power consumption is so important that the time between charges of a fitness wristband, for example, could make or break the product. On the left side of the picture, the energy consumption per Megabit is compared between I2C and the various I3C data rates (in blue at VDD=3.3V, in red at VDD=1.8V). We can talk about the (far better) energy efficiency of I3C, as the results are expressed per Megabit of data transferred with the sensors.
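
The metric on the left of that picture is easy to reproduce as a back-of-the-envelope calculation: energy per megabit is simply supply power divided by throughput. The currents and bitrates below are placeholder assumptions, not MIPI's measured numbers, but they show why a lower VDD and a higher raw bitrate both cut the energy cost of each bit:

```python
def energy_per_mbit_uj(vdd_v, avg_current_ma, raw_bitrate_mbps):
    """Energy in microjoules to transfer one megabit: (V * I) / throughput."""
    power_mw = vdd_v * avg_current_ma            # mW, i.e. millijoules per second
    return power_mw / raw_bitrate_mbps * 1000.0  # mJ per Mbit -> uJ per Mbit

# Placeholder values for illustration only:
print(energy_per_mbit_uj(3.3, 0.5, 1.0))    # slow I2C-class link at 3.3 V
print(energy_per_mbit_uj(1.8, 0.5, 12.5))   # faster I3C SDR-class link at 1.8 V
```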

    According to MarketsAndMarkets (March 2014), sensors are experiencing unprecedented growth: from $650M in 2012, and with an expected compound annual growth rate (CAGR) of 36.25% through 2020, the total global sensor market is expected to reach $154.4B by 2020. We know the major drivers: the adoption of low-cost, small form factor sensors in smartphones and tablets and the emergence of IoT and wearables applications, most of these integrating sensors.

    Once again with MIPI specifications, the very dynamic and competitive mobile industry is expected to drive I3C adoption, impacting the production price of I3C-equipped sensors. The higher the production level (and we are talking about a billion sensors manufactured every year), the lower the selling price. This is what we have called the “virtuous cycle” in blogs posted as early as 2011, the lower product price allowing adoption beyond mobile, in wearable, medical, industrial and IoT applications.

    That’s the reason why we can expect the first MIPI DevCon to be held on Sept. 14-15, 2016, in Mountain View to attract system architects, engineers, designers, or business and marketing executives from various industries and not only the mobile industry.

    This link is fully dedicated to I3C-related presentations at DevCon: http://resources.mipi.org/mipi-i3c-sensor-sessions-at-mipi-devcon. From this web page, a few sentences extracted from the introductions to the presentations clearly show that the goal is to provide the audience with practical information, on top of the description of the I3C various features:

    • Leveraging I2C as a foundation, many components of I3C will be familiar to implementers, but with guidance provided here, attendees will leave with a clearer understanding of MIPI I3C’s new innovative features, how they will improve their systems, and what considerations should be made to fully leverage them.
    • One of the most advanced features is the ability to operate in I3C High Data Rate modes, HDR-DDR, HDR-TSP and HDR-TSL, which provides the best performance in both speed and power.
    • Mobile stylus and touch applications commonly use proprietary interfaces to connect a variety of on-cell, in-cell and hybrid sensors to application processors. Next-generation systems will benefit from advances in a new MIPI touch standard family that leverages the MIPI I3C specification.
    • …provide a quick overview of the MIPI CSI-2 and I3C specifications and their key features that are important to meeting the required functionality, performance and power targets.

    These presentations are given by:

    Ken Foust, Sensor Technologist and Researcher with Intel Corp.
    Alex Passi, Software Engineering Manager with Cadence Design Systems
    Hezi Saar, Staff Product Marketing Manager at Synopsys, Inc.
    Dale Stolitzka, Principal Engineer at Samsung Electronics, Co.
    James Goel, Director – Technical Standards, Qualcomm Technologies, Inc.

    You will also notice that I3C is backed by the top three semiconductor leaders (Intel, Samsung and Qualcomm), as well as by the top two IP and VIP vendors supporting MIPI technology for a long time, Cadence and Synopsys. If you consider that two out of three of these semiconductor companies also provide foundry services, and that Samsung even manufactures mobile systems, you realize that I3C is a specification with a bright future!

    From Eric Esteve from IPNEST

    I3C-related presentations at DevCon: http://resources.mipi.org/mipi-i3c-sensor-sessions-at-mipi-devcon

    WHAT: MIPI DevCon: Moving Mobile Forward, the Alliance’s first annual developers conference

    DevCon agenda by speakers: Speaker List
    WHEN & WHERE: Sept. 14-15, 2016, at the Computer History Museum in Mountain View, Calif.

    WHO: The conference agenda is designed for system architects, engineers, designers, test engineers, engineering managers, and business and marketing executives. Members of the media and industry analysts are invited to attend with complimentary registration.

    MIPI Alliance working group leaders and other experts will lead sessions outlining implementation experiences, use cases, and application examples from a technical perspective, and select MIPI Alliance member companies will conduct product demonstrations.

    WHY: MIPI Alliance technology is driving new capabilities within mobile and impacting markets, such as the Internet of Things (IoT), automotive, wearables, industrial, and augmented/virtual reality. MIPI DevCon 2016 will provide the latest information on MIPI specifications for implementation in mobile and other emergent markets.

    TO REGISTER: Find more details and registration links at mipi.org/devcon, including a $49 “early bird” registration fee available until Aug. 19.

    PROGRAM DETAILS: The MIPI DevCon 2016 agenda features expert commentary and presentations from MIPI members representing the industry’s top companies working in mobile, IoT, automotive and other fast-growth industries. Four comprehensive informational tracks include:

    • Implementations and Use Cases for Beyond Mobile
    • MIPI I3C: Introduction and Impact on Cameras and Other Sensors
    • Verification and Debug
    • Camera and Display – Prototyping, Bridging and Compression

    Flex Logix validating EFLX on TSMC 40ULP
    by Don Dingee on 08-29-2016 at 4:00 pm

    Flex Logix has been heads-down for the last several months working toward customer implementations of their EFLX reconfigurable RTL IP cores. Today, they’ve announced a family of 10 hard IP cores ready in TSMC 40ULP, and provided an update to their roadmap for us. Continue reading “Flex Logix validating EFLX on TSMC 40ULP”


    Embedded Product Development – Make vs Buy
    by Prakash Mohapatra on 08-29-2016 at 12:00 pm

    Original Equipment Manufacturers (OEMs) face many questions before building any product. After they are convinced that there is business potential in their new product, next comes the crucial stage of project execution. They aspire to build the product on time, maybe before the competitors or better than the competing products, without compromising on their budget constraints. However, aspirations only occasionally match reality. Time-slips, production failures, budget overruns, etc. are associated with most projects. In this blog, I will attempt to show how using COTS (commercial off-the-shelf) platforms can help OEMs accelerate time-to-market along with reducing development cost and risk.

    Cost, performance, PCB designs, memory, time-to-market, technical support, casing, I/O configuration, size, procurement, enclosures, flexibility, scalability, component obsolescence, compliance, certifications. Whew, the list goes on! You will face many more questions while building an embedded product. Customers’ expectations for better performance, yet longer battery life, are making product development increasingly complex. Advances in technology are inevitable, and this forces OEMs to keep pace with technology and competitors. Complex designs on a small form factor add substantial design risk, which may further stretch the development time. However, using COTS platforms reduces your list of concerns substantially.

    Any embedded product has mostly similar components, both for software and hardware. Hardware includes SoC, memory, power circuitry, I/Os (USB, Ethernet, VGA, WiFi, BT, etc.) integrated over a printed circuit board (PCB). The software consists of device drivers, operating system, BSPs, GUI, application layer, 3rd-party apps, communication stack, etc.

    Make: Full-custom Development

    OEMs are more inclined to build products from scratch, as it offers total flexibility and better control over quality and cost. However, full-custom development has many constraints.

    • High NRE (Non-Recurring Engineering) cost: High investment in engineering resources, as the team needs diverse expertise in mechanical design, hardware layout, low-level firmware, application, etc. More testing and validation is needed, as the product is developed from scratch. Hardware design iterations add substantial project cost as well.
    • High BoM cost: Usually, the sales volume of embedded products is low. So, OEMs cannot leverage economies of scale with low-volume procurement of components, and thus the Bill of Materials (BoM) cost is high. However, if the sales volume exceeds 50-60K per year, then it makes more investment sense to pursue full-custom designs (see the break-even sketch after this list).
    • Long time-to-market: As the development happens from scratch, the project time increases, and thus long time-to-market. Multiple hardware design iterations compound the time-slip.
    • High development risk: With scratch development of hardware and software, there is a high probability that things may go wrong at any level. This adds significant risk to the project compromising time-to-market and development cost.
    • Questionable scalability: With Moore’s Law in action, silicon components such as the SoC are maturing in terms of performance, power-efficiency, and cost-effectiveness. However, it is difficult to scale up a full-custom platform to accommodate these advances. Upgrades to a platform based on future customer demands and the latest technologies may need a re-design.
    • Questionable Product Life: Although designers pursue multiple sourcing of components, once a critical component such as the SoC, RAM, or Flash reaches End of Life (EOL), a re-design will be needed to accommodate the substitute component. For industrial products, component obsolescence management is critical, as the product life is more than 10 years. So, each and every component used in the platform must be available for this extended period. This adds substantial overhead for designers in terms of supply-chain management.
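
The 50-60K units per year figure mentioned above is essentially a break-even point between a large one-time NRE and a higher per-unit BoM. Here is a simple sketch of that trade-off; the dollar figures are invented assumptions chosen only to land the crossover near that range:

```python
# Illustrative break-even sketch for the NRE / BoM trade-off discussed above.
# All figures below are made-up assumptions, not data from the article.
def total_cost(nre, unit_cost, volume):
    """Total cost of one approach for a given annual volume."""
    return nre + unit_cost * volume

def breakeven_volume(custom_nre, custom_unit, cots_nre, cots_unit):
    """Annual volume above which full-custom becomes cheaper than COTS
    (assumes COTS carries a higher per-unit cost but far lower NRE)."""
    if custom_unit >= cots_unit:
        return None  # custom never wins on unit cost, so there is no crossover
    return (custom_nre - cots_nre) / (cots_unit - custom_unit)

# Hypothetical numbers: $500k custom NRE vs $50k integration NRE, $30 vs $39 per unit.
print(breakeven_volume(500_000, 30.0, 50_000, 39.0))  # roughly 50,000 units per year
```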

    Buy: COTS Platforms
    Let us now explore COTS platforms and their advantages. COTS platforms such as Single Board Computers (SBCs) and Systems on Module (SoMs) are available with the hardware platform and low-level software including the operating system, BSPs and device drivers. OEMs can focus on enhancing the user experience with an awesome GUI, application-specific frameworks, etc., instead of engaging in generic board bring-up activities. There is no value addition in reinventing the wheel; once an operating system is supported on a SoC, the BSPs can be reused.

    Advantages over Full-custom Designs

    • Lower NRE cost: As the hardware and associated low-level software are already available, the scope of the project reduces. The focus will be on integration and application development. Thus, the resource cost, along with the validation effort, comes down significantly.
    • Lower BoM cost: COTS platform vendors leverage economies of scale in component procurement and manufacturing, with huge volumes. Thus, the platforms are cheaper than the sum of the individual parts’ costs.
    • Accelerate Time-to-Market: The project timeline becomes shorter, which accelerates time-to-market. Further, the low-level software and hardware are already matured, so bugs are limited mostly to the application layer.
    • Lower Development Risk: The platforms are validated by the vendors and numerous existing customers, thus the platforms are robust and mature.
    • Long Product Life: Usually, vendors guarantee platform availability over an extended time period. If any component reaches EOL, it is the vendor’s responsibility to ensure availability of the platform. OEMs do not have to worry about critical component obsolescence.
    • Less supply chain overhead: System designers need to deal with fewer component vendors, as most of the critical components are available on the COTS platform.
    • Access to latest technologies: Usually, market leaders in semiconductor parts such as SoCs, Flash, and RAM prefer to engage with customers who order in large volumes. However, most embedded products have low sales volumes. By using COTS platforms, OEMs can get access to the latest technologies from leading vendors, as COTS platform vendors engage in high-volume semiconductor business.

    Single Board Computer (SBC)
    A Single Board Computer (SBC) is used widely in the embedded computing industry to build a variety of products. SBCs are off-the-shelf, application-ready embedded platforms that host the processor, memory, power circuitry, and I/Os on a single printed circuit board (PCB), and come along with associated device drivers, operating systems and Board Support Packages (BSPs). So, product development becomes fairly simple. System designers can just build the application software and put the board in a nice enclosure, and then the product is ready.

    However, there are a few constraints to using SBCs for embedded product development.

    • Not scalable: In an SBC, the processing unit and I/O section are integrated over a single PCB. So, it is not possible to migrate to a newer processor with the same board. For migrating to the latest technologies or meeting customers’ future expectations, new SBCs have to be used.
    • Not flexible: Customizing an SBC based on the OEM’s requirements is not possible, as the CPU and surrounding I/O are closely coupled due to the single-board design. Usually, standard I/Os are part of the SBC. Additional peripherals can be added using interface boards; however, this may increase the size of the platform. Further, the I/O configuration is fixed, so it is challenging to build size-constrained products.

    System on Module
    An embedded platform can be represented as below:


    The ‘Application Agnostic’ part consists of essential design commodities, including the processing & memory requirements. This part may not differ much whether the end-product is a medical device or retail PoS device, assuming the processing and memory requirements are somewhat similar.

    This ‘Application Specific’ part constitutes both the hardware and software, depending on the end-product and OEM requirements. OEMs can enhance end-user experience by creating awesome UI, user application, etc.

    A Computer on Module (CoM) or System on Module (SoM) is an embedded computing solution that consists of the application-agnostic hardware and software. System designers can focus on the application-specific part by using an off-the-shelf SoM, and thus accelerate time-to-market. The combination of an application-agnostic SoM and application-specific carrier board, along with display and peripherals, offers a complete platform for building any end-products. OEMs can design carrier boards as per their size and I/O requirements. The SoM can be inserted into the carrier board through some standard connector such as SODIMM or MXM.

    In addition to the generic benefits of COTS platform, SoM also resolves the scalability and flexibility issues inherent to SBC.

    • Platform scalability: Most vendors offer pin-compatible SoMs. This means that a carrier board can be used with multiple SoMs without any hardware changes. Some application software changes may be needed. This ensures seamless migration to the latest technologies. For example, an ECG machine is launched in the market; however, after 2 years, due to market demands, the OEM intends to use a newer, faster processor. Without a platform redesign, the OEM can easily migrate to the latest technology by using a SoM based on the new processor on the existing carrier board. Thus, platforms remain future-proof. Further, product variants with different performance and price points can be launched without full-scale development for each variant.
    • Platform flexibility: Each application has specific requirements in terms of I/Os, size, performance, and power. The OEM can select an off-the-shelf SoM based on the performance and power needs. The carrier board can be custom-built as per the I/O and size requirements. Thus, SoM approach offers more flexibility than the SBC approach.

    Conclusion
    We can summarize that in the choice between ‘Make vs Buy’ for embedded product development, the ‘Buy’ option is more favorable in terms of time-to-market, development cost, obsolescence management, development risk, and scalability. Further, among the COTS platforms, SoMs are better equipped than SBCs to handle project demands.

    As usual, this post is constrained by my bounded rationality. Please share the improvement areas and flaws on this post.