
RISC-V opens for business with SiFive Freedom
by Don Dingee on 07-11-2016 at 4:00 pm

When we talk about open source, free usually comes in the context of “freedom”, not as in “free beer”, and open IP often serves as a base layer of value add for commercialization. The creators of the RISC-V instruction set, now working at startup SiFive, have released specifications for their aptly-named Freedom processor IP cores looking for “enablement of great ideas”. Continue reading “RISC-V opens for business with SiFive Freedom”


How to Bring Coherency to the World of Cache Memory
by Tom Simon on 07-11-2016 at 12:00 pm

As the size and complexity of System on Chip (SoC) designs has rapidly expanded in recent years, the need to use cache memory to improve throughput and reduce power has increased as well. Originally, cache memory was used to prevent what was then a single processor from making expensive off-chip accesses for program or data memory. With the advent of multi-core processors, caches began to play an essential role in enabling rapid sharing and exchange of data between the cores. Without caches, many of the benefits of a multi-core architecture would be lost to the inefficiencies of off-chip memory access.

As a result, processors in multi-core chips are built with cache coherent memory interfaces. Over time many new IP blocks, such as PCIe, have been developed as part of SoC ecosystems, and many have support for cache coherency. There are, of course, multiple implementations of cache coherency, and even within a given interface there are parameters that can affect interoperability. In many cases there are good reasons for differing cache coherency protocols; however, this diversity of choices has stymied SoC architects and designers.

Recently I wrote about Arteris and their new Ncore cache coherency network, which can link together IP blocks that support a variety of cache coherency protocols. Naturally, Ncore supports ARM’s AMBA ACE protocol. ARM sees the Ncore offering from Arteris as an efficient means to link together IP that uses heterogeneous cache protocols. This is great for cache-coherent IP going into SoCs, but what about IP that is still necessary but has no cache support?

Well, some of the strongest interest in Ncore apparently has come from SoC companies that are faced with integrating non-cache coherent blocks into their designs. Next I’ll discuss how this can be done with Ncore to provide all the advantages of cache coherency to those blocks.

Ncore uses the Arteris FlexNoC as a transport layer for its coherent agents, which provides tremendous flexibility in allocating resources for cache data transfers. For blocks that already have a local cache, Ncore provides a protocol interface along with logic units for managing coherency. IP blocks with only a traditional memory interface can use a non-coherent bridge provided by Ncore, and proxy caches can be synthesized to meet each IP block’s needs.

The Ncore non-coherent bridge translates non-coherent transactions into IO-coherent ones. Multiple non-coherent data channels can be connected to a single bridge, allowing aggregation for more efficiency. Ncore proxy caches have read pre-fetch, write merging and ordering capability. The proxy caches are configurable up to 1MB per port. Both MSI and a subset of MEI coherence models are supported.
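The MSI model mentioned above can be pictured as a simple per-cache-line state machine. The following is an illustrative sketch of textbook MSI, not Arteris code; the class and method names are invented for the example:

```python
# Textbook MSI coherence, reduced to a per-line state machine:
# M (Modified): this cache holds the only, dirty copy.
# S (Shared):   this cache holds a clean copy; others may too.
# I (Invalid):  this cache holds no valid copy.

class MSILine:
    def __init__(self):
        self.state = "I"

    def local_read(self):
        # A read miss fetches the line into the Shared state.
        if self.state == "I":
            self.state = "S"

    def local_write(self):
        # A write always ends in Modified; other caches must invalidate.
        self.state = "M"

    def snoop_read(self):
        # Another agent reads: a Modified line is written back and shared.
        if self.state == "M":
            self.state = "S"

    def snoop_write(self):
        # Another agent writes: our copy becomes Invalid.
        self.state = "I"

line = MSILine()
line.local_read()   # I -> S
line.local_write()  # S -> M
line.snoop_read()   # M -> S (write back, keep a shared copy)
line.snoop_write()  # S -> I
```

The MEI subset Ncore also supports drops the Shared state, which is appropriate for agents whose lines are never read-shared.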

From the perspective of the non-cache-coherent IP, it is still talking to external SRAM. In actuality, however, Ncore presents the block to the rest of the coherent network as a fully cache coherent agent. Ncore allows the SoC architect to tune the parameters of the cache bridge to ensure optimal operation, and Ncore and FlexNoC come with a fully integrated, sophisticated design suite for tailoring the system to the SoC’s power, area, and performance requirements.

With the addition of Ncore, Arteris is now in the enviable position of offering IP for a unified SoC interconnect. Using one underlying transport layer for both coherent and non-coherent SoC data transfers lets architects build in the optimal interconnect resources, maximizing utilization of chip real estate while ensuring sufficient throughput for all data requirements. For more information on Arteris and their Ncore cache coherent network IP, go to their website.


Mainstream PCB Design Requires a Complete Tool Platform, Too
by Tom Dillinger on 07-11-2016 at 8:00 am

The EDA tool offerings for printed circuit board design commonly address one of three customer markets: (1) the enterprise design team, (2) the product development engineer, and (3) the “maker”.
Continue reading “Mainstream PCB Design Requires a Complete Tool Platform, Too”


Car Sharing, Ride Hailing on Collision Course
by Roger C. Lanctot on 07-10-2016 at 4:00 pm

Do car makers know what they are getting themselves into with car sharing? Car companies are lacing up their skates and venturing out onto the thin ice of car sharing. General Motors’ Maven, with fledgling efforts in New York City and Ann Arbor, Mich., is the latest incarnation of this movement. The movement is pervasive and growing, as are the questions regarding the ultimate outcome.

Strategy Analytics: “Automakers Explore Car Sharing, Ride Hailing and Other Connected Mobility Programs” – tinyurl.com/jr52f5a

Morgan Stanley released a report last week estimating that the global share of vehicle miles traveled by shared vehicles (including taxis and ride hailing services) will rise from an estimated 4% in 2015 to 26% in 2030. The report correctly identifies the current economics: a massive, currently under-utilized, privately owned global fleet of internal-combustion cars is blocking the adoption of more expensive electrified vehicles, whose cost could be rationalized by the higher levels of usage that sharing enables.

SOURCE: Morgan Stanley

Car sharing (think ZipCar, Car2Go) is not to be confused with ride hailing (think Uber, Lyft), but both are transforming transportation, and both are aimed at converting private automotive transportation into networked public transportation.

Car companies are B2B businesses, selling cars to and through dealers as well as to fleet operators. As such, rental car, car sharing and ride hailing companies are all fleets squarely within the car-selling comfort zone of car makers – which explains the multiple strategic relationships formed to provide vehicles to these service providers and their drivers.

But when car companies enter the car sharing business directly, they are moving into the realm of B2C, setting the stage for a transformation with unpredictable outcomes. Professor Larry Burns, former corporate vice president of research and development at GM, speaking at the recent Mentor Graphics IESF event, referenced University of Michigan research showing that 18,000 shared and networked vehicles “can provide the same mobility as 120,000 conventional vehicles.” In other words, local mobility needs can be met with 15% of the current volume of vehicles on the road – not good news for auto makers.

Working against shared use is the enduring romantic attachment to the metal. Morgan Stanley tips its hat to the deep ties Americans and others have to their cars: “private car ownership is deeply woven into the American cultural fabric.” Morgan Stanley also notes the aspirational nature of car ownership for high-growth auto sales regions in Southeast Asia including India and China.

Burns counters that the bloom is off the rose of car ownership. In his IESF presentation he asked:
Do you enjoy…

  • Shopping for a car?
  • Financing a car?
  • Insuring a car?
  • Buying and pumping gasoline?
  • Getting a car washed?
  • Maintaining a car?
  • Parking a car?
  • Driving a car?
  • Sitting in traffic?

Consumers are increasingly considering letting go of car ownership – but the industry and consumers are at a turning point. Recent focus groups conducted by Strategy Analytics found little consensus regarding the future of transportation – but car ownership was seen as remaining in the picture for the foreseeable future.

Strategy Analytics: “Millennials’ Choice of Car or Transit Mode is Driven by Cost and Usability” – http://tinyurl.com/h3ajvbo

The path to shared networked vehicles leads through car sharing and ride hailing services. Car sharing currently seems suited only to urban settings, with purpose-built, individually used vehicles increasingly available for one-way, ad hoc trips. (Maven is currently using existing GM vehicles solely for roundtrips.)

Ride hailing services, meanwhile, are caught up in a maniacal race to the bottom, burning cash while recruiting drivers, such that the more successful they are, the more money they lose and the more difficult it is for drivers to make a living. Worse still, Uber and Lyft drivers – along with drivers for similar services – are discovering the corollary proposition: they are contributing heavily to shared vehicle miles driven. It is not unusual for ride hailing drivers to notch 4,000-5,000 miles per month.

To counter these expensive business models, Uber, Lyft and their ilk are avidly pursuing autonomous driving technology to remove the key source of cost: the driver. This is obviously going to take years.

Oddly, GM has been at the forefront of vehicle networking (OnStar), electrification (EV1), and car sharing (EN-V), but has yet to successfully integrate these elements. It is still possible for GM to bring the vision together under the banner of Maven. So far, though, Maven has only seen fit to make existing, conventional GM vehicles available.

Like all car makers, GM is confronting the almost impossible turning radius required to shift from personal vehicle ownership (low utilization rate) to shared ownership (high utilization rate). The transition is already costing the Ubers and Lyfts of the world billions of dollars. (GM is now, of course, participating in Lyft’s financial burn with its recent $500M investment.)

Meanwhile, Local Motors has begun bringing its own self-driving Olli vehicles to market – targeting confined urban settings such as National Harbor outside Washington, DC, and the downtown area of Las Vegas (MOU signed). Local Motors hopes to have as many as 30 of its networked, shared, printed, self-driving Ollis cruising National Harbor before the end of 2016 – as well as elsewhere in the U.S. and Europe. (Local Motors will be remotely monitoring these driverless vehicles.)

Local Motors fulfills the final element of Professor Burns’ future mobility catechism: tailored. As he intoned at the IESF event, future cars will be characterized as: “Efficient, Connected, Coordinated, Driverless, Tailored.” The lower operational cost promised by this slowly unfolding transformation will become increasingly clear to the driving public. Millions of consumers around the world are doing the math, and that math increasingly (if slowly) favors shared and/or ad hoc vehicle use over ownership.

In the meantime, most of us will keep buying increasingly expensive cars and pumping cheap gas. Professor Burns asked a key question of the IESF audience – “Will change come in the form of evolution or transformation?” It looks like evolution wins. Car makers have a lot to lose if we all start sharing instead of owning. We can thank Professor Burns for clarifying that.

Roger C. Lanctot is Associate Director in the Global Automotive Practice at Strategy Analytics. More details about Strategy Analytics can be found here: https://www.strategyanalytics.com/access-services/automotive#.VuGdXfkrKUk


NTSB Entry Raises the Stakes of Tesla Probe
by Roger C. Lanctot on 07-10-2016 at 12:00 pm

The National Transportation Safety Board’s entry into the investigation of the first fatal crash of a Tesla Model S is a monumental turning point in the autonomous driving movement. While long-time observers of the NTSB note that it only gets involved in investigations where broader implications exist, the agency’s interest also reflects the fact that the National Highway Traffic Safety Administration lacks the technical ability to properly investigate a crash cause that is likely tied to a software failure.

As in the case of Toyota’s unintended acceleration fatalities, recalls and penalties, software is chiefly implicated in the fatal Tesla crash in Florida. In the Toyota case, NHTSA turned to the National Aeronautics and Space Administration for help, and NASA ultimately turned to outside experts, who criticized what they described as Toyota’s “spaghetti code.”

The source of the unintended acceleration in the Toyota Prius remains unresolved, but the primary learning from the experience was the realization of the investigatory limitations of the automotive industry’s primary regulatory agency. Those limitations, a legacy of the agency’s reduction in size going back to the Reagan Administration, remain uncorrected.

As a result, NHTSA lacks the fundamental expertise necessary not only to investigate crashes of autonomous vehicles but also to evaluate the performance of these vehicles or even to properly set guidelines. This is a big problem for the industry and for the motoring public, leaving individual state authorities in the awkward position of blindly cobbling together their own rules and guidelines for the operation of self-driving vehicles on local and interstate highways.

Tesla and Google have more or less been left in the position of regulating and policing themselves. This is less of a problem for Google since its vehicles are not made available to the general public and generally operate at low speeds on local roads. It’s a different matter for Tesla Motors.

Ironically, Google lobbied the California Department of Motor Vehicles to leave autonomous vehicle regulatory oversight and guidance to Federal authorities in the form of NHTSA. During last year’s California DMV hearings on the subject, Google lobbyists and executives (including at least one former NHTSA executive, Ron Medford) implored the California DMV to relinquish its authority over Google’s local on-road testing activities.

Google’s argument before the California DMV was that the state agency was incapable of comprehending let alone evaluating the self-driving software Google was developing and deploying. It is clear that Google’s pleas were a cynical play to shift control to an agency with which it felt it had greater influence – knowing all along that NHTSA, too, lacked the necessary expertise and resources.

But the cynicism of Tesla’s CEO, Elon Musk, puts Google’s cynicism to shame. Musk has opened up his own traffic court serving as judge, jury, witness and prosecution. With each new Model S crash, Musk is quick to provide his assessment of fault – nearly universally lying with the driver – absolving himself and his company of responsibility.

The fatal crash in Florida is the first instance of Tesla acknowledging a potential flaw in its software and sensing architecture. Still, Musk fell back on the various caveats for use of the Autopilot system – keeping hands on the wheel and so on – intended to release Tesla from any responsibility.

States such as California have begun insisting on full disclosure of self-driving car crash data – especially in the case of Google. Tesla is technically not offering a self-driving car, but state and Federal authorities may soon begin insisting on this same kind of sharing of crash data.

Transportation network companies (TNCs) such as Lyft and Uber, which operate, like Google and Tesla, outside the normal regulatory bounds, are also being asked to disclose data about their drivers, crashes and other incidents. It seems that the battle for the next generation of transportation technology is evolving into a battle for data.

The manner in which the Tesla fatal crash has exposed the software blindspot of NHTSA has wider implications for the government’s role in redefining transportation safety. Safety in transportation is increasingly being determined by software systems. NTSB’s decision to enter NHTSA’s investigation suggests that NHTSA itself may not be up to the very task it has given itself – of promoting collision avoidance and autonomous driving.

Ultimately, this calls into question its plans to mandate the implementation of vehicle-to-vehicle wireless communications for the purpose of crash avoidance as well as its ability to comprehend, provide guidelines for and regulate the process of self-driving software development and sensor fusion. The arrival of the NTSB on the Tesla crash scene is an acknowledgement within the regulatory community that NHTSA is out of its depth, unequal to the task.

Until such time as the NTSB, NHTSA or NASA can sort out which agency has the scope or expertise to oversee autonomous vehicle development and deployment we are likely to see an ongoing and expanding free-for-all on U.S. highways. Such a free-for-all may lead to more fatalities, technological advancement or simple chaos – maybe all of the above.

Someone in Washington needs to sort out government’s role and properly fund the relevant agencies such that progress is successfully and safely achieved. The alternative will be a widespread call from safety advocates that all autonomous driving testing, development and deployment cease. Since autonomous driving technology is intended to save us all from ourselves, that can’t be the outcome we want to see.

Roger C. Lanctot is Associate Director in the Global Automotive Practice at Strategy Analytics. More details about Strategy Analytics can be found here: https://www.strategyanalytics.com/access-services/automotive#.VuGdXfkrKUk


Hacking Your Way Across the Chasm
by Michael Tanner on 07-10-2016 at 7:00 am

Geoffrey Moore’s “Crossing the Chasm” remains one of the most useful and widely read business books in high technology. I say this not only because I spent a decade of my life working with Geoffrey and the rest of The Chasm Group as a consultant, but also because to this day I have yet to find another book that is so well written and that communicates the basics of market behavior in such a simple and straightforward way. And while Crossing the Chasm was originally born in 1991 from what were mostly B2B marketing challenges at the time, the market adoption and market development ideas within remain fundamental, proving themselves applicable across all forms of new innovation.

“Growth hacking” is a more recent concept. Coined by Sean Ellis[1] in 2010, growth hacking involves using technical approaches and analytics to test and then optimize marketing activities, getting around traditional approaches at a substantially lower cost. (Note: Neil Patel and Bronson Taylor wrote a great online reference to growth hacking – click here.) Twenty years ago, “hackers” were thought of as thieves who broke into networks and computers to steal things. Today, “hacker” is a more general piece of jargon describing someone who uses innovative technical or analytical techniques to overcome barriers.

The two ideas, growth hacking and chasm crossing, feel at odds at first glance. Where growth hacking involves a series of tactical approaches that broadly test what works, chasm crossing involves making strategic, more methodical choices about where, what, and how to sell, and then going after a single market beachhead with a vengeance. But growth hacking can also serve more than just tactical objectives. With a little foresight, these two concepts can intersect in some interesting ways that were not possible when Crossing the Chasm was originally authored.

In my experience, one of the big hurdles that teams face when making decisions related to chasm-crossing is the lack of comfort that comes from having insufficient facts. Growth hacking involves a process of trial and error to quickly identify facts about what works and what doesn’t. The challenge is to organize the growth-hacking tactics in a way that produces the types of information you need.

For example, the criteria for identifying potential chasm-crossing segments typically involve quantifying target segment attractiveness based upon a few things:

  • The availability of and access to a well-funded set of buyers
  • The urgency (rather than importance) of the need
  • The degree to which the required ‘whole product’ is complete, and
  • The referral leverage that one set of customers might have into another segment

Modern marketing and sales automation tools can give you some real insight here. For example, simply identifying visitors in an intelligent way through your website and through social media can help you find hot spots in the purchase process, starting with visits. Through reverse IP lookup, you can learn the names of companies visiting your website. If you have a general idea of the titles that might be good prospective buyers, including line-of-business buyers and infrastructure buyers, you can provide them to an outside service, which will in turn deliver a direct marketing list based upon your visitors, whether or not they have actually given you their names and email addresses. Over time, you can develop specific content sets that test different messages to see which value propositions return the highest open rates, click-through rates, trials, and eventual purchases by segment.

Using targeted content marketing or feeds from Twitter and blog posts, you might also float hypothetical solutions to see which resonate best across different segments, which titles in your list spend the most time looking through content, and which stimulate actual content downloads or product evaluation. Most lead scoring systems within marketing automation tools can be set up to automate this effort. You can then use the analytics generated, along with actual sales data, to help answer the key questions above. Moreover, if you manage to create some sort of referral mechanism, either built into the process or built into the actual product, you can methodically track where referrals are coming from and going to.
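The lead-scoring-by-segment idea can be sketched in a few lines. This is a hypothetical illustration, not any particular marketing automation tool; the event names, weights, and segments are all invented:

```python
# Hypothetical lead scoring: weight each engagement signal, score each
# visitor, then roll scores up by target segment to see which
# chasm-crossing candidate segment is responding.

WEIGHTS = {
    "page_view": 1,
    "content_download": 5,
    "trial_signup": 20,
    "referral": 10,
}

def score_visitor(events):
    """Sum the weighted engagement events for one visitor."""
    return sum(WEIGHTS.get(e, 0) for e in events)

def scores_by_segment(visitors):
    """Aggregate visitor scores per target segment."""
    totals = {}
    for v in visitors:
        totals[v["segment"]] = totals.get(v["segment"], 0) + score_visitor(v["events"])
    return totals

visitors = [
    {"segment": "medtech", "events": ["page_view", "content_download", "trial_signup"]},
    {"segment": "fintech", "events": ["page_view", "page_view"]},
    {"segment": "medtech", "events": ["referral", "page_view"]},
]
print(scores_by_segment(visitors))  # medtech scores well ahead of fintech
```

In practice the weights would be calibrated against actual sales outcomes, which is exactly the trial-and-error loop growth hacking is meant to drive.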

There are challenges to this approach, too. First, if you are an early stage business, there is a small investment in marketing automation tools and a learning curve; you will need to work with your outbound direct marketing team to set up analytics that help answer the key questions. Second, you will need to make sure you differentiate between the early-adopter tire-kickers, who spend lots of time evaluating without consummating a real deal, and the pragmatic buyer types, who have the ability to drive lifetime value. A careful study of titles, initial vs. follow-on purchases, and the up-sells achieved by segment can help distinguish these two types of buyers.

Third, while you can’t understand solution gaps completely with such a low-touch approach, you certainly can glean insight from targeted content marketing that features prospective solution elements. The key is to have some call to action that can be measured. Product managers and engineers who decide solution priorities may not be used to working directly with those who execute tactical marketing programs in this way, so here is where you may need a real “growth hacker” skill set on the team, someone who can span and integrate both cultures.

Fourth (and finally), you must be very sure that you do not confuse correlation with causation. But, all challenges aside, by putting just a few processes and tools in place early on, you can bring far more factual insight to bear on the chasm-crossing discussion than could have been done cost-effectively in the past. Today, facts can be quite plentiful with a little planning. The challenge is now more about sifting through what is relevant to arrive at real insight.

[1] Ellis, Sean (June 26, 2010). “Find a Growth Hacker for Your Startup.” startup-marketing.com


Are Smart things making us smarter?

by Prakash Mohapatra on 07-08-2016 at 12:00 pm

Nowadays, we don’t have to learn to drive a car well because systems in the car (automated braking, monitoring, etc.) take care of many things without our involvement. We don’t have to remember whether we switched off the lights before leaving the house; the smart home automation system will switch them off after detecting no sound or activity for some time.

Self-driving cars are the way of the future. People won’t have to hire a chauffeur: you simply board the car and tell it where to go, and it uses GPS to find the optimal route and takes you there, cruising through the traffic. You can step out, and the car will find an empty parking spot for itself. When you are leaving, you call the car from your smartphone to be at the entrance in five minutes, and it will be waiting for you with the rear door open, playing your favourite music and set to a comfortable temperature. Awesome!

Not only that: next come smart fridges. You can check whether there are beer cans in the fridge by sending it a message. On detecting that only a few cans are left, the fridge can order more from an online retailer, and the beer is delivered to your home without your involvement. Maybe we can term these “self-replenishing fridges”; the advertising gimmick would be “the fridge that never becomes empty.”

I may sound like a guy who loathes technology and doesn’t want it to enhance our lifestyle. However, I believe I am simply looking at things from a more conservative viewpoint. In my view, most companies are stuck in a red ocean, in which their only focus is creating competitive advantage. The pursuit of doing things better than competitors tends to widen the chasm between technology and customer utility.

I agree that the future of technology is all about convergence and integration, i.e., seamless migration from one environment to another for the end user. In a typical day, people spend the majority of their time in three environments: home, office and travel. I believe technology is about integrating all three, so that when a user moves from one environment to another, the devices are aware of the movement (contextual awareness) and take appropriate action.

For example, when I move from my home network to my office network, my smartphone should hide my personal profile and show my work profile. I believe consumer electronics giants such as Apple and Samsung intend to dominate each of these environments. Apple has already penetrated our homes with the iPod, iPhone and iPad. Building on that tremendous success in the home segment, Apple extended its offerings into the other two environments with enterprise (BYOD offerings) and automotive (iOS in the car).

Once a firm has a dominant position in one environment, it is easy to offer complementary products to penetrate the others. Similarly, Samsung is attempting to penetrate the enterprise segment with its Samsung Knox offering. I may have missed any tangible push by Samsung in the automotive segment; however, the Android ecosystem should work in Samsung’s favour, as most of its smartphones are based on Android.

With the growing prominence of automotive apps, Samsung is in a favorable position to penetrate the vehicle with its Android smartphones. I believe that rather than embedding intelligence in vehicles, it is more beneficial for both automotive OEMs and end users to extend the capabilities of the smartphone through automotive apps. This strategy decouples the mismatched product life cycles of automobiles and consumer electronics. However, many ADAS applications will still need embedded intelligence.

It is obvious that tech firms will keep innovating and creating new trends to penetrate each of these environments. It is also inevitable that we consumers will fall prey to these trends and depend more on these machines than on our own brains. So I really question whether these smart things are making us smarter, or just offering us the illusion that we are becoming smart.

What do you think?


Artificial Intelligence is Everything!

by Daniel Nenni on 07-08-2016 at 7:00 am

My first brush with AI was a LISP class during my undergraduate degree. LISP, which originated at MIT in 1958, was the language of choice for AI research and spawned a new class of computer hardware called LISP machines in the 1980s. My first personal experience with AI was the HAL 9000 system from the Stanley Kubrick film 2001: A Space Odyssey. Today I have my own personal AI systems (Amazon Echo and Apple Siri) that I rely on every day.

Most people don’t realize it, but AI is already an active part of our daily lives: in our cars, in our phones, and in our homes. In fact, when it comes to our cars, our lives will quite literally depend on AI. I also believe the collective intelligence of the human race is on a downward trend, so we will need all the help we can get!

The challenge of AI, of course, is compute power, which is good news for the semiconductor industry because that “need for speed” will consume leading edge silicon like there is no tomorrow. The fabless semiconductor ecosystem is already gearing up for deep learning on embedded systems, and this webinar is a quick example:


Summary

As Artificial Intelligence (AI) marches into almost every aspect of our lives, one of the major challenges is bringing this intelligence to small, low-power devices. This requires embedded platforms that can deliver extremely high neural network performance with very low power consumption. However, that’s still not enough.

Machine learning developers need a quick and automated way to convert and execute their pre-trained networks on such embedded platforms. In this session, we will discuss and demonstrate tools that complete this task within a few minutes, instead of months spent on hand porting and optimization.
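One representative step that such conversion flows automate is weight quantization, shrinking 32-bit float weights to 8-bit integers for an embedded target. The sketch below is generic, textbook symmetric quantization, not CEVA’s actual tool flow; the function names are invented:

```python
import numpy as np

# Symmetric linear quantization of a float weight tensor to int8,
# a common step when porting a pre-trained network to an embedded target.

def quantize_int8(weights):
    """Map floats to int8 using one scale per tensor (symmetric scheme)."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 tensor."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.0, 0.25, 0.9], dtype=np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)  # close to w, within one quantization step
```

Real tools layer much more on top of this (per-layer calibration, activation quantization, operator mapping), which is why doing it by hand can take months.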

REGISTER HERE

Join CEVA experts to hear about:

• Overview of the leading deep learning frameworks, including Caffe and TensorFlow
• Various topologies of neural networks, including MIMO, FCN, MLPL
• Overview of the most common neural networks, such as AlexNet, VGG, GoogLeNet, ResNet, SegNet
• Challenges in porting neural networks to embedded platforms
• CEVA “Push button” conversion approach from pre-trained networks to real-time optimized
• Programmer Flow for CNN Acceleration

Target Audience:
Computer vision engineers, deep learning researchers, project managers, marketing experts and others interested in embedded vision and machine learning.

Speakers:
Liran Bar, Director of Product Marketing, Imaging & Vision, CEVA
Erez Natan, Neural Network Team Leader, Imaging & Vision, CEVA



About CEVA, Inc.

CEVA is the leading licensor of signal processing IP for a smarter, connected world. We partner with semiconductor companies and OEMs worldwide to create power-efficient, intelligent and connected devices for a range of end markets, including mobile, consumer, automotive, industrial and IoT. Our ultra-low-power IPs for vision, audio, communications and connectivity include comprehensive DSP-based platforms for LTE/LTE-A/5G baseband processing in handsets, infrastructure and machine-to-machine devices, computer vision and computational photography for any camera-enabled device, audio/voice/speech and ultra-low power always-on/sensing applications for multiple IoT markets. For connectivity, we offer the industry’s most widely adopted IPs for Bluetooth (Smart and Smart Ready), Wi-Fi (802.11 a/b/g/n/ac up to 4×4) and serial storage (SATA and SAS). Visit us at www.ceva-dsp.com and follow us on Twitter, YouTube and LinkedIn.