
Cadence is again the best EDA company to work for!

by Daniel Nenni on 03-10-2016 at 7:00 am

We wrote about the history of Cadence in preparation for our book “Fabless: The Transformation of the Semiconductor Industry” in 2012. EDA played a key role in enabling the fabless semiconductor revolution, and Cadence was right there at the beginning. Famed EE Times editor Richard Goering helped us with the book, and with the Cadence chapter specifically, since at that time he worked for Cadence. Before joining Cadence, Richard had covered EDA since 1985, most notably as EE Times’ EDA editor for 17 years.

Cadence was the first EDA company to hire a professional editor like Richard, and I credit that strategy with its continued success in the media today. Coincidentally, Cadence was also the first EDA company to make Fortune magazine’s “100 Best Companies to Work For” list last year, citing a cultural transformation driving the company’s recent success. They are referring, of course, to the hiring of famed venture capitalist Lip-Bu Tan as CEO in 2009.

“Our company’s success is fueled by the open culture we have created, listening to our employees and empowering them to work together to delight our customers and make a difference to the future of electronics,” said Tan. “Being named to FORTUNE’s list of the 100 Best Companies to Work For is a tremendous honor, and achieving this recognition for the second year in a row speaks to the achievements of our employees and the strength of our culture.”

This year Cadence not only rejoined the ranks of the 100 Best Companies to Work For, it moved up a few notches to #52. So congratulations to the hardworking people at Cadence, and congratulations to EDA, because without us there would be no fabless semiconductor industry, absolutely.


You can see the Fortune 100 Best Companies to Work For list HERE, the Cadence Fortune page HERE, and the Cadence Great Place to Work page HERE.

About Great Place to Work

Great Place to Work® is the global authority on high-trust, high-performance workplace cultures. Through proprietary assessment tools, advisory services, and certification programs, including Best Workplaces lists and workplace reviews, Great Place to Work® provides the benchmarks, framework, and expertise needed to create, sustain, and recognize outstanding workplace cultures. In the United States, Great Place to Work® produces the annual Fortune “100 Best Companies to Work For®” list and a series of Great Place to Work® Best Workplaces lists including lists for Millennials, Women, Diversity, Small and Medium Companies and over a half dozen different industry lists.

About Fortune
Fortune is a global leader in business journalism known for its unrivaled access to industry leaders and decision makers. Founded in 1930, Fortune has transformed into a digital-first operation with nearly 17 million monthly unique visitors on Fortune.com as well as 3.4 million global readers in print. Fortune is home to some of the strongest business franchises, including: Fortune 500, Best Companies to Work For, World’s Most Admired Companies, Fastest Growing Companies and Most Powerful Women. The Fortune Conference Division extends the brand’s mission into live settings, hosting a wide range of annual conferences for top-level executives, including the FORTUNE Global Forum and the Most Powerful Women Summit.

About Cadence
Cadence enables global electronic design innovation and plays an essential role in the creation of today’s integrated circuits and electronics. Customers use Cadence software, hardware, IP and services to design and verify advanced semiconductors, consumer electronics, networking and telecommunications equipment, and computer systems. The company is headquartered in San Jose, Calif., with sales offices, design centers and research facilities around the world to serve the global electronics industry. More information about the company, its products and its services is available at www.cadence.com.


Creating a better embedded FPGA IP product

by Don Dingee on 03-09-2016 at 4:00 pm

Our introduction to Flex Logix and its embedded FPGA core IP drew several comments, predominantly along the lines of a few things like this have been tried before. In this installment, we dive into the EFLX cores, the FPGA toolchain, the roadmap, and a powerful integration feature. Continue reading “Creating a better embedded FPGA IP product”


e-Armageddon

by Bill Montgomery on 03-09-2016 at 12:00 pm

CNN: International, Final Report. Wednesday, December 25th, 2019: The events that have unfolded in the last 72 hours have devastated the entire civilized world, and have left society as we know it on the brink of collapse. I’m told that our networks are now shutting down, and that the report I am about to file will likely be the final one of its kind. So, I must consider this an historical record, and shall publish this report in that context.

In what has clearly been a well-coordinated attack by a global band of highly organized cyber-terrorists, the connected world has come crashing down, ushering in what many are calling the Internet of Things Armageddon.

The first attacks, which caused massive outages, focused on the EU, Russian, Chinese, and Western Hemisphere power grids. Efforts to restore electrical energy to the grid were thwarted as each successful restoration was quickly taken down with a new series of attacks that again turned the world’s most powerful and wealthy regions into seas of darkness. Early reports suggested that the attacks originated in the East, but more recent assessments concluded that a home-grown, highly organized terrorist element was responsible for the coordinated attack. In essence, machines programmed for mass destruction of the grid attacked machines woefully unequipped to protect it.

Back-up power supply was very quickly exhausted, and soon other countries throughout the planet were plunged into a state of permanent darkness.

Crime spiked dramatically but soon, as the impact of what was happening set in, thoughts turned to survival at any cost. It was not to be, as the cyberterrorist group next turned its attention to the world’s water supply. Easily penetrating security protocols that originated in the 1990s, hackers shut down water distribution and, in selected regions, diverted contaminants into the water supply, rendering our most vital resource unsafe to consume. Many who drank it anyway became critically ill, and the death toll mounted worldwide. Freshwater lakes and rivers, previously reserved for adventure tours, quickly became go-to survivor destinations and were eventually overwhelmed with migrant travelers, all of whom were fighting to survive.

Most arrived on foot, living off the land en route, as the world’s fuel supply had been shut down on many levels. Vehicles dotted the highways, out of fuel or energy and unable to transport their occupants to safe havens. Planes stopped flying. Trains stopped running.

With food production and distribution permanently suspended worldwide, only those with outdoor survival skills are expected to endure. Expectations are that the world’s population will dwindle from its current 8 billion to 500 million, levels last seen in the early 1600s.

As I write this I have been informed that the cyberterrorists have accessed major nuclear facilities and have selectively restored power at these locations, enough to launch missiles aimed at major cities in every corner of the planet.

It appears now that those often seen carrying placards with the ominous message, “The End is Near,” are finally right. And the irony is that the very thing that ushered in unfathomable advances in civilization – technology – is the thing that also will return our world to the dark ages.

If only we had listened to those who told us repeatedly that the lack of ironclad security in our connected world was a global Achilles heel…

Over and out…

Author’s Note: The above fictional account is intended to shock those who read it, and ideally to create a groundswell movement that says things need to change, and fast. We are collectively ignoring the dangerous vulnerabilities that characterize our connected world, rendering the horrible events depicted above a very real possibility. We cannot continue to rely on security protocols developed in the 1990s and repeatedly proven ineffective at keeping cyberterrorists at bay.

I’m not alone in my thinking. Late last year, US Director of National Intelligence James Clapper and several other U.S. Intelligence Community executives testified before a congressional committee on worldwide cyber threats and their national and economic security implications.

According to Clapper, cyber threats to the U.S. are increasing in frequency, scale, sophistication, and severity of impact, and nearly all information communication technologies and information technology networks and systems are at risk. “These weaknesses,” he explained, “provide an array of possibilities for nefarious activity by cyber threat actors.”

And the US is not the only part of the world under attack. The problem is global and must be addressed, and soon, before predictions made by esteemed leaders in the crypto community, like Napier University’s Professor Bill Buchanan, come true. Buchanan writes:

“One day, and I think it might be soon, we will wake up and RSA will be cracked. Either it will be super computers cracking the prime numbers, or it will be quantum computers, but when it happens there will be no proper identity on the Web and all the tunnels will be broken…”
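Buchanan’s scenario is easy to demonstrate at toy scale. The sketch below uses textbook-sized parameters (nowhere near real key lengths) to show that anyone who can factor the RSA modulus recovers the private key outright; Shor’s algorithm on a quantum computer would do the equivalent to 2048-bit keys:

```python
# Toy RSA: security rests entirely on the difficulty of factoring n = p*q.
# With tiny primes the whole scheme falls to trial division. Toy parameters
# only -- this is an illustration, not real cryptography.

p, q = 61, 53                        # the secret primes
n = p * q                            # public modulus (3233)
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+)

msg = 42
cipher = pow(msg, e, n)              # anyone can encrypt with (n, e)

# An attacker who can factor n rebuilds the private key from scratch:
f = next(i for i in range(2, n) if n % i == 0)   # trial-division "attack"
p2, q2 = f, n // f
d2 = pow(e, -1, (p2 - 1) * (q2 - 1))
recovered = pow(cipher, d2, n)

print(recovered)  # 42 -- plaintext recovered without ever seeing d
```

At this scale the “attack” is instant; the only thing protecting real RSA is that factoring a 2048-bit modulus is infeasible on classical hardware, which is exactly the assumption Buchanan expects to fail.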

If Buchanan is right, and I and many others believe he is, the companies that are best suited to fix this problem need to aggressively step up before it’s too late. I’m talking about the big guys who have long touted PKI as the technology that can safely secure our e-world.

It can’t. Not anymore.

It’s time to openly acknowledge the embedded vulnerabilities in this 1990s technology and stop using it, in favour of rapidly adopting crypto schemas that aren’t susceptible to outside threats. And if that means losing top-line revenues and hurting the bottom line while you transition to safe crypto schemas, that’s a price that I believe needs to be paid. In fact, it must be paid, as the consequences of maintaining the status quo are just too severe.

And now the good news…introducing IBE 3.0

Fortunately, there is proven technology, improved in recent years, that can protect our connected world from outside threats: Identity-Based Encryption (IBE) 3.0.

IBE 3.0 is an evolution of Identity-Based Encryption, standardized crypto technology developed by Adi Shamir (he is the ‘S’ in RSA) in 1984, improved in the late 1990s by Stanford research, and commercialized as IBE 2.0 by Voltage Security, now an HP company. IBE 3.0 was developed and then patented in 2014 by Connect in Private (CIP), and is offered under the brand CLAE.

Pascal Paillier, the former Head of Cryptography and Innovation at Gemalto Security Labs, states:

“CLAE achieves in a single cryptographic function all the ultimate functionalities that one can expect from a modern encryption mechanism. It supports authentication at no extra cost, and the certificate-less feature makes it easy to integrate in pre-existing applications. CLAE is basically what secure applications need, regardless of whether people are even aware that such technology exists and is available.”

CLAE is not a security solution, per se, but a “cryptographic ingredient” that can be baked into any offering. It is ideal for any company or service provider that is striving to economically and easily secure applications and services. CLAE provides all the benefits of IBE 2.0, and significantly more, including end-to-end security, authentication at the application layer, and greatly simplified set up and maintenance.

Most importantly, when RSA and ECC are cracked, CLAE will still be standing.

For more information on IBE 3.0/CLAE, please send me an email message – bill@connectinprivate.com


Quick History of the Internet of Things

by Bill McCabe on 03-09-2016 at 7:00 am

The internet isn’t that old if you really think about it. We’ve been working on it for decades, but in terms of public use it spans only about 20 years. In 1974, the TCP/IP architecture that we know today had its birth. It was not until ten years later that the first domain name system, or DNS, was introduced. The first website actually came online in 1991; the web that had been proposed just a scant two years earlier came crashing into our mainstream world. It was a technological awakening that had been a long time coming.


In no time the internet took over. By 1995, multiple websites and systems had come online. Entertainment by means of bulletin board systems began to appear. All of it came from the imaginings of others decades earlier.

The term “internet of things” or “IoT” is also not a new one. It’s frequently used and has been so for years, but in a survey it was revealed that even those who work in it every day are not at all conversant with the history of the IoT. That history—or at least the ideology—goes back a great deal further than most people know.

The first look at the internet of things—arguably—came from Nikola Tesla in 1926, when he commented in Collier’s: “When wireless is perfectly applied the whole earth will be converted into a huge brain, which in fact it is, all things being particles of a real and rhythmic whole… and the instruments through which we shall be able to do this will be amazingly simple compared with our present telephone. A man will be able to carry one in his vest pocket.” It was a comment that got him laughed at in some circles, but one which was remarkably accurate considering the state of computing at that time.

In 1998, Google incorporated. That same year, inTouch, a project developed at MIT by Scott Brave and Professor Hiroshi Ishii, was put into play; they announced, “…We then present inTouch, which applies Synchronized Distributed Physical Objects to create a ‘tangible telephone’ for long distance haptic communication.”

Also in 1998, the real IoT was touched by Mark Weiser, who developed a water fountain that amazed and delighted everyone who saw it: its flow rose and fell with the pricing trends and trading volume of stock on the NYSE.

1999 saw the term Internet of Things coined by Kevin Ashton, then executive director of the Auto-ID Center: “I could be wrong, but I’m fairly sure the phrase ‘Internet of Things’ started life as the title of a presentation I made at Procter & Gamble (P&G) in 1999. Linking the new idea of RFID in P&G’s supply chain to the then-red-hot topic of the Internet was more than just a good way to get executive attention.”

Business Week was the scene of the next big use of the term in 1999, when Neil Gross commented: “In the next century, planet earth will don an electronic skin. It will use the Internet as a scaffold to support and transmit its sensations. This skin is already being stitched together. It consists of millions of embedded electronic measuring devices: thermostats, pressure gauges, pollution detectors, cameras, microphones, glucose sensors, EKGs, electroencephalographs. These will probe and monitor cities and endangered species, the atmosphere, our ships, highways and fleets of trucks, our conversations, our bodies–even our dreams.”

IoT has continued to grow and evolve, and projections are bright for this new way of using the internet. The future of IoT is now, with devices coming online every day. The world is reliant upon connected cars, connected medical devices and even connected homes. Motorcycles, vehicles and even wearables are being connected daily. IoT homes and even utility monitors and meters are coming online daily across the country.

Companies today are scrambling to get their own IoT systems online and moving, and new recruits are being brought in every day to head up IoT systems in companies from small to large.

Today, 2016, IoT is growing by leaps and bounds. The vast majority of companies are seeking some way to get on board the IoT train. In many cases, IoT has solved problems, in other cases, such as the security aspects, IoT has caused as many issues as it has resolved.

Suffice it to say that IoT will continue to grow and to present challenges and benefits for the companies using it. IoT managers and CIoTOs are among the business employees most in demand today because of the vast and rapid growth of IoT. What is your company doing to find those new and necessary IoT workers today?


TSMC 2016 Technology Symposium and Apple SoCs!

by Daniel Nenni on 03-08-2016 at 4:00 pm

It is that time again, time for the originators of the pure-play foundry business to update their top customers and partners on the latest process technology developments and schedules. More specifically, all of the TSMC FinFET processes (16nm, 10nm, 7nm, and beyond), the TSMC IP portfolio (CMOS image sensor, Embedded Flash, Power IC, and MEMS), TSMC’s backend technology (InFO and CoWoS), and the latest update on the TSMC OIP Ecosystem.

The future of the semiconductor industry is promising with many growth opportunities ahead. To capture these opportunities, we need to continue to work as a collaborative innovation force. Together, we will help each other grow business and stay competitive. This vision is the foundation for the TSMC Grand Alliance. At TSMC, customers are always at the center of all our efforts. With this spirit, TSMC has become our customers’ TRUSTED technology and capacity provider along the way.

It will be interesting to hear more about TSMC’s FinFET market share and whether they really did double down on 16nm capacity. I would also like to know where 10nm stands. In my opinion it will be a quick transition node like 20nm, one that most companies (except for Apple) will skip so they can stay on the new and improved 16FFC until 7nm goes into production. My guess is that TSMC will spend much more time on 7nm than 10nm next week. It will also be fun to try to figure out what Apple is up to based on TSMC’s updates. For example, this comment from the last conference call tells me that Apple will be using a 16nm FFC variant for the iPhone 7 this fall:

As customer accelerated their technology migration into 16-nanometer node, we anticipate a significant demand drop in 20-nanometer in 2016. However, we also expect a continual ramp-up of 16-nanometer this year and expect it to contribute more than 20% of wafer revenue in 2016. We estimate our foundry market segment share of 16, 14-nanometer node increases from about 40% in 2015 to above 70% in 2016 exceeding the previous prediction we made in mid-2014.

The other Apple “tell” is the InFO packaging technology. Last year TSMC predicted that InFO will contribute more than $100 million in revenue by Q4 2016. If you consider that packaging contributes $2 or so per chip in revenue, that is a SIGNIFICANT amount of chip volume, which again points to Apple using TSMC for the A10 SoC.
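A quick back-of-envelope check on that inference, using the rough $2-per-chip assumption above:

```python
# If InFO packaging adds roughly $2 of revenue per chip, TSMC's predicted
# $100M+ of InFO revenue in Q4 2016 implies tens of millions of units per
# quarter -- iPhone-class volume. The $2/chip figure is a rough assumption.

info_revenue = 100e6        # predicted quarterly InFO revenue, USD
revenue_per_chip = 2.0      # assumed packaging revenue per chip, USD

chips = info_revenue / revenue_per_chip
print(f"{chips / 1e6:.0f} million chips")   # 50 million chips
```

Fifty million units a quarter is a volume very few products other than the iPhone could account for.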

There are four TSMC Technology Symposiums in the U.S., with others around the world after these:

Tuesday, March 15
San Jose McEnery Convention Center
San Jose, CA
Registration Opens at 8:30 a.m.

Tuesday, March 22
Boston Marriott Burlington
Burlington, MA
Registration Opens at 8:30 a.m.

Thursday, March 24
Four Seasons, Austin
Austin, TX
Registration Opens at 8:30 a.m.

If you are not one of the lucky golden ticket holders, Tom Simon and I will be there and will post our observations and opinions on SemiWiki shortly thereafter. If you are looking for specific information let us know in the comments section and we will do our best to get it.

Established in 1987, TSMC is the world’s first dedicated semiconductor foundry. As the founder and a leader of the Dedicated IC Foundry segment, TSMC has built its reputation by offering advanced and “More-than-Moore” wafer production processes and unparalleled manufacturing efficiency. From its inception, TSMC has consistently offered the foundry segment’s leading technologies and TSMC COMPATIBLE® design services.


Upcoming ARM & Open-Silicon Webinar on Custom SoCs for IoT

by Tom Simon on 03-08-2016 at 12:00 pm

IoT products call for a higher level of system integration than ever before. Companies seeking to go to market now have a much higher bar in terms of size, power, reliability and manufacturability. The first IoT devices evolved from embedded development boards, like the groundbreaking Arduino. These were fine for prototypes or proof of concept, but are often clunky and not truly optimized platforms.

Product differentiation in IoT devices comes from a combination of software and hardware. It’s not necessary to look any further than Apple, Nike or Fitbit for examples of nicely integrated hardware and software. While it is good to see the pendulum swing back from the focus on software only businesses, a lot of people with ideas for new products do not have the background for dedicated hardware development.

Still, there are a lot of impressive things that can be done with productized processor chips such as the ARM Cortex-M0 offered by many semiconductor companies. These can be combined with peripherals and sensors to build circuit boards. However, competing effectively in the market requires custom SoC development. Bringing the system onto a chip offers huge benefits, and ultimately a smaller BOM.

The good news is that there are readily available solutions for small and midsize companies needing the advantages that SoCs offer IoT-based products. Foremost among them is ARM’s initiative, which has applied radical new business models to enable adoption of custom SoCs. Still, even with easier access to IP for product development and simplified business models, the process of developing high-quality silicon can be daunting, especially for companies predominantly focused on the software side.

To meet the needs of these IoT companies, there are turnkey ASIC development companies that can design and manufacture custom SoCs. Open-Silicon is a leading vendor in this area and is going to host a webinar on March 23rd at 8AM Pacific Time on the topic of how to revolutionize IoT product development by utilizing turnkey ASIC development. The moderator of this webinar will be SemiWiki’s own Daniel Nenni.

ARM is collaborating with Open-Silicon in offering this event. The first speaker will be ARM’s Cortex-M Product Manager Tim Menasveta. The products he oversees are a key component of low-power IoT product development. He has deep experience in the embedded processor market in both development and management roles.

Open-Silicon’s Senior Solutions Architect Pradeep Sukumaran will follow to talk in depth about the product development process that can be used by IoT product developers to smoothly incorporate custom silicon into their products.

While it might seem a leap to go from designing circuit boards using off-the-shelf parts to achieving highly integrated finished products using custom SoCs, the speakers will review the business case in detail. They will also cover the range of available IP and SoC platforms that can help jump-start chip development. There will also be a discussion of the role played by turnkey ASIC developers in the product development process.

The information available in this webinar should be highly valuable to anyone developing IoT products, especially those looking for reduced overall product costs, higher reliability, lower power and product differentiation. It is especially interesting because of the participation of ARM and their partnership with Open-Silicon.

If you are interested in registering for this webinar on March 23rd at 8AM Pacific Time, there is more information available at this link.


Accellera and Portable Stimulus

by Bernard Murphy on 03-08-2016 at 7:00 am

I’ll start with a quick note on DVCon. This seems to be gaining momentum each year. In addition to the events in the US, Europe and India, a DVCon event is now planned for China, kicking off in Shanghai in 2017. At a time when we’re all bemoaning the future of EDA and EDA conferences, DVCon is booming internationally, no doubt reflecting the relentless growth of the verification challenge.

In standards development, Accellera is working in a number of areas; my focus for this blog is their work on the Portable Stimulus (PS) standard, in development under the Portable Stimulus Specification Working Group (or more simply PSWG). The purpose of the PSWG is to provide for sharing scenarios, results and coverage requirements across platforms (virtual, emulation, simulation, prototype and board) and teams (architect, hardware developer, software developer, analog developer, post-silicon debug, …), all the way from block level to system level, in a non-proprietary standard. Which sounds quite a bit bigger than portable stimulus, but they were allowed only four letters for the group acronym.

Development is about halfway towards version 1 (expected early 2017), and most of the key semiconductor, systems and EDA players are actively involved. They are starting with a baseline of a declarative specification (separating what should be tested from how it is tested, and moving towards a solver-based philosophy). This should be able to integrate with non-PS code and, in the baseline proposal, should fit with existing ecosystems. The preference for a declarative rather than the more common procedural approach is driven by the ability, especially at the system level, to trace paths, potentially formally prove properties, define coverage and more.

The working group is now looking at next steps. A joint contribution from Cadence and Mentor suggests a new domain-specific declarative language, also providing for integration with legacy C/C++/SV code. The Breker contribution suggests a declarative C++ along with value constraints and path constraints. The Vayavya contribution is a complementary syntax to generate register sequences and firmware and driver routines from a canonical hardware/software interface description.

The value in portability between platforms and teams is an obviously good thing in many ways, but what I find gets lost in that general message of standardization goodness is that this is standardization around an important shift in system-level testing and coverage. For me this started a DVCon or two ago, when there were fairly strident demands from the audience wanting to know why vendors weren’t providing tools to help improve system-level coverage as they had with constrained random (CR) for block-level coverage.

CR is too crude for system-level tests, but there seemed at that time to be no widely-accepted alternative to directed or software-driven test. Neither offers a systematic way to optimize hardware coverage. But graph-based/declarative approaches can give better guidance at the system-level both to the verification engineer and to solvers to construct/randomize sequences of actions and to determine a usable sense of coverage. For solvers, think of it as sequence-based constrained-random, where what is randomized is not just data but also paths through feasible sequences of system-level actions.
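As a rough illustration of the idea (hypothetical action names, not any proposed PS syntax), a scenario can be drawn as a random walk through a graph of legal action sequences, with coverage measured over the paths exercised rather than just the data values:

```python
import random

# Hypothetical system-level action graph: each action lists the actions
# that may legally follow it. A "scenario" is a random walk through legal
# sequences -- sequence-based constrained random, randomizing paths
# through feasible system-level actions, not just data.
actions = {
    "reset":      ["config"],
    "config":     ["dma_start", "irq_enable"],
    "dma_start":  ["dma_wait", "irq_enable"],
    "irq_enable": ["dma_start", "done"],
    "dma_wait":   ["done"],
    "done":       [],
}

def random_scenario(start="reset", rng=random.Random(0)):
    """Walk the graph from start, picking a legal successor at random."""
    path = [start]
    while actions[path[-1]]:                    # stop at a terminal action
        path.append(rng.choice(actions[path[-1]]))
    return tuple(path)

# Coverage here is over *paths*: which legal sequences of system-level
# actions have we actually exercised?
covered = {random_scenario() for _ in range(20)}
for p in sorted(covered):
    print(" -> ".join(p))
```

A solver-based PS tool would do far more than this sketch (constraints on data along each path, formal reachability, cross-platform retargeting), but the shift from randomizing stimulus values to randomizing legal action sequences is the core of it.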

I have seen solutions along these lines emerging from several of the EDA vendors and I’m sure some of the motivation for PS has been to get the representation of this capability standardized before it becomes frozen into proprietary formats. And that is really important because this system level aspect of verification is going to be the heart of the problem and the heart of valuable solutions.

You can learn more about the Portable Stimulus Working Group HERE.



Post-making new Things stand out on the IoT

by Don Dingee on 03-07-2016 at 4:00 pm

Sales says this next IoT project is going to be huge. Engineering isn’t so sure. Marketing says we should pilot it to find out. If it were just software, it might not be such a problem, but with hardware comes investment tradeoffs. Without guaranteed volumes of millions of units, are ASICs a realistic option to hit aggressive size, power, and cost targets? Continue reading “Post-making new Things stand out on the IoT”


Carol Burnett and Automotive Safety

by Roger C. Lanctot on 03-07-2016 at 12:00 pm

American television viewers of a certain age will remember the Carol Burnett Show and its star, Carol Burnett, and her customary ear tug at the end of each show. TV Guide tells us the “ear tug first made famous during the 1967-79 run of CBS’s Carol Burnett Show was a message to her grandmother, a way of saying, ‘Hello, I love you.’” Burnett later added her late daughter, Carrie Hamilton, as an intended recipient of the greeting.

I was reminded of the Burnett gesture while visiting with Fujitsu Labs executives at the recent Mobile World Congress in Barcelona. This team of researchers has come up with what they call FEELythm, a sensor intended to be attached to the ear of a commercial truck driver.

Introduced commercially more than a year ago, the FEELythm wearable sensor detects driver drowsiness based on pulse using a proprietary algorithm developed by Fujitsu. The device can also connect to digital tachographs and link to fleet-management systems for real-time monitoring and guidance.

The objective is to improve safe-driving management by predicting dangers related to fatigue, stress and tension and plotting those findings on a hazard map, the company says. Fujitsu claims that human error “not attributable to driving violations or skill has accounted for approximately 67% of all traffic accidents in recent years.”

The device is intended for commercial drivers including everyone from taxi drivers to long-haul truckers. The device has a battery life of up to five days and is capable of giving audio and physical alerts to those drivers. Fujitsu says the device is actively in use and the company’s sales goal for the device is 70,000 units over three years.

The FEELythm device comes to my mind in the context of the latest contradictory report from the Insurance Institute for Highway Safety (IIHS) in the U.S. IIHS researchers report that owners of Hondas with lane departure warning systems were frequently turning those systems off, perhaps in reaction to the annoying warnings, which can be triggered when a driver changes lanes without activating the turn signal.

“Lane Departure in Cars Gets Turned Off”

The IIHS report is significant for two reasons. First, IIHS had previously reported in 2012 that its research showed that cars with lane departure warning systems saw increased claim rates rather than reductions. Second, the new finding is in keeping with an ongoing theme promoted by IIHS that only active safety systems, that actually take control away from the driver, can be shown to reduce insurance claims.

“Crash Avoidance Features Reduce Crashes, Insurance Claims Study Shows” – IIHS

What this means to car-buying consumers is that IIHS, which is funded by the insurance industry, is withholding its endorsement of safety systems such as lane departure warning. Ultimately, IIHS is caught on the horns of a dilemma. Much of its research favors features such as automated braking and adaptive headlights, but the organization stops short of advocating that insurers offer discounts to consumers buying cars with these features.

It is actually quite difficult to find a U.S. insurer willing to reduce your insurance bill for buying a car laden with advanced driver assist systems. This is in contrast to Europe, where discounts for buying cars with such systems are widely available – just ask Volvo!

(It is worth noting the significantly lower crash and fatality rates in Europe though it is unclear whether this is due to better roads, better mass transit, higher licensing standards, or more expensive gas.)

The lane departure warning research takes the analysis a step further suggesting that driver warnings and alerts may not be enough – and, in fact, may be sufficiently annoying that drivers turn them off. The recommendation from IIHS is for either cleverer sensor integration or involuntary operation of these safety systems.

(My personal recommendation is a connected car system tied to a usage-based insurance program that would reward the driver for keeping the system turned on – something that could be verified by monitoring.)

The IIHS finding is significant in the context of the USDOT’s pursuit of a mandate for inter-vehicle wireless communication or V2V. The National Highway Traffic Safety Administration has strongly hinted at its intention to mandate a wireless device to broadcast vehicle location between cars. Such a system will generate alerts and warnings, the very same alerts and warnings that IIHS finds are annoying to drivers – maybe even distracting.

This isn’t the only safety-related agenda item where NHTSA and IIHS part company. The USDOT, at the direction of the US Congress, has mandated back-up cameras, yet another technology drivers are likely to misuse or abuse. But no one would argue these systems should not be adopted. In the same way, it is better to promote blind-spot detection and lane departure warning, even if some customers turn them off.

In the end, IIHS is clearly advocating for automated driving. Anything less is likely to fail in the eyes of these researchers. The human factor will always be the fatal flaw.

The irony is that the more automated driving becomes, the less insurance consumers will need. Maybe this explains the IIHS skepticism. Could it be that IIHS is rejecting and failing to advocate for the adoption of safety systems in the interest of insurance industry self-preservation? Insurers, it seems, are simply fed up with humans driving cars.

In this context it is hardly a surprise that Fujitsu would see fit to affix a sensor to the ears of drivers of commercial vehicles. As Google’s CEO Eric Schmidt said at TechCrunch in 2010: “Your car should drive itself. It’s amazing to me that we let humans drive cars. It’s a bug that cars were invented before computers.”

More articles from Roger…


Webinar from CEVA: Machine Type Communication

Webinar from CEVA: Machine Type Communication
by Eric Esteve on 03-07-2016 at 7:00 am

By 2020, ABI Research predicts that there will be more than 45 billion connected devices worldwide. More than half of these devices will incorporate multiple standards in the same device, such as Wi-Fi, 802.15.4g, GNSS and cellular communications.

This webinar will address the question: How To Design an LTE-Based M2M Asset Tracker SoC?

Mobile M2M devices will require cellular communications such as LTE MTC Cat-1, Cat-0 and the upcoming narrowband Cat-M, NB-IoT and LPWAN, as well as positioning technologies to track them. More than half of cellular-enabled M2M devices will also integrate GNSS (GPS), LTE-OTDOA or a combination of both to track devices indoors and outdoors.

How can system designers integrate LTE MTC, WiFi, GNSS and LTE-OTDOA on an SoC, quickly and reliably, while keeping power and area to a minimum?

Join CEVA, Galileo and Nestwave experts to hear about:

  • Overview and market trends in connectivity and positioning for IoT and M2M.
  • Introduction to the technology required to track assets outdoors, using cellular M2M standards such as LTE MTC (Cat-1, Cat-0, Cat-M) together with multiple satellite-based navigation systems, and indoors, using WiFi 802.11n together with LTE-OTDOA. Combined use cases will also be reviewed.
  • How to implement an asset tracker SoC that works both indoors and outdoors and remains flexible for future standards evolution and proprietary improvements.


Target Audience

Communication and systems engineers targeting multimode applications requiring emerging cellular protocols such as LTE MTC Cat-1, Cat-0 or Cat-M, and flexible WiFi for IoT combined with positioning standards such as GNSS and LTE-OTDOA.

To attend the webinar live on March 22, REGISTER HERE

Asset Tracker Application
Some applications, such as wearables, only require a battery life of a few days, but others, such as asset trackers, demand a battery life of 5-10 years. For system designers, addressing wearable-related challenges is completely different from designing an asset tracker. Today, some wearable devices are considered fashionable gadgets, and end users sometimes accept paying more than for a mainstream smartphone even though the battery life is only a day or so.

At the other end of the spectrum, if you are defining an asset tracker system specification, you may end up counting every cent, as you need to meet very stringent cost requirements, and the battery life is expected to last several years instead of days. Clearly, no single wireless communication standard can fit the demands of such different applications. But how do you select the right wireless standard for your system needs?
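To see why the duty cycle dominates this trade-off, here is a minimal back-of-the-envelope sketch of battery life as a function of sleep current, active current and daily transmit time. All figures below are illustrative assumptions for a generic LTE tracker, not CEVA or webinar specifications:

```python
# Rough battery-life estimate for a duty-cycled asset tracker.
# All numbers are hypothetical, chosen only to illustrate the trade-off.

def battery_life_years(capacity_mah, sleep_ua, active_ma, active_s_per_day):
    """Estimate battery life in years from a simple two-state duty-cycle model."""
    seconds_per_day = 86_400
    sleep_s = seconds_per_day - active_s_per_day
    # Time-weighted average current in mA (sleep current given in microamps).
    avg_ma = (sleep_ua / 1000 * sleep_s + active_ma * active_s_per_day) / seconds_per_day
    hours = capacity_mah / avg_ma
    return hours / (24 * 365)

# A tracker that wakes for six 10-second LTE position reports per day,
# drawing 50 mA when active and 5 uA in deep sleep, on a 2400 mAh cell:
life = battery_life_years(capacity_mah=2400, sleep_ua=5, active_ma=50, active_s_per_day=60)
print(f"{life:.1f} years")  # roughly 7 years under these assumptions
```

With these (assumed) numbers the tracker lands in the 5-10 year range, whereas a wearable streaming over a high-duty-cycle link for hours a day collapses to roughly a day of life, which is why one radio standard cannot serve both applications.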

Speakers

CEVA’s experts will emphasize the importance of the processor architecture in efficiently enabling multimode connectivity solutions. Finally, they will describe how to implement actual solutions for various IoT and M2M use cases using the latest communication DSPs.

REGISTER HERE

Eric Esteve from IPNEST