EVS Codec: The Next Big Thing in Mobile Voice
by Majeed Ahmad on 08-18-2015 at 4:00 pm

What is the next big thing in LTE-based 4G mobile networks? Apparently, it’s Voice over LTE (VoLTE), especially now that 3GPP has released the Enhanced Voice Services (EVS) codec, which industry watchers call a breakthrough in audio and voice communications.

Long Term Evolution, or LTE, is the first cellular system developed from the ground up for data applications, so voice service has mostly been a dilemma for the LTE business. Initially, mobile operators moved voice calls back to 2G and 3G networks as a stop-gap measure. Then, in the early 2010s, operators like MetroPCS and SK Telecom started launching VoLTE services that used both AMR narrowband and wideband codecs.

The 2G cellular networks, such as GSM and CDMA, mostly use the adaptive multi-rate (AMR) codec, which operates on narrowband 200–3400 Hz signals at variable bit rates from 4.75 kbps to 12.2 kbps. However, this traditional voice standard, also known as AMR narrowband or AMR-NB, falls short in voice clarity and noise cancellation because it sacrifices voice quality to keep bandwidth low.


EVS is a major breakthrough for VoLTE service

The AMR wideband, or AMR-WB, standard improves speech quality and audio coding through a wider 50–7000 Hz bandwidth while consuming relatively little channel capacity, from 6.6 kbps to nearly 24 kbps. The AMR-WB codec, synonymous with HD voice, is paired with enhanced audio processing, multiple microphones and speakers, and improved echo cancellation to enhance voice quality and reduce background noise.

However, AMR-WB took a long time to reach commercial deployment, and the requirement that devices on both ends be HD voice-capable has limited its availability. Then, in 2014, 3GPP finalized the EVS codec, which goes well beyond AMR-WB in speech quality, frequency range, and bandwidth utilization.

The Physics of EVS

The EVS voice codec delivers the HD voice quality of AMR-WB in less bandwidth. So mobile operators like T-Mobile, now using 24 kbps for HD voice, can employ super-wideband EVS and get the same audio quality at 5.95 kbps to 7.36 kbps. Furthermore, EVS enables innovative music and audio applications such as live-to-air, studio-quality calls from mobile phones.

EVS covers audio bandwidths from 50 Hz up to 14 kHz, spanning narrowband, wideband and super-wideband voice communications, with a full-band mode extending to 20 kHz. It interoperates with the AMR-WB codec through a dedicated backward-compatible mode, and it can even be used on 2G and 3G networks to reduce bandwidth demands while maintaining the same voice quality. EVS also includes error-resilience mechanisms for both circuit-switched 2G and 3G voice services and packet-switched Voice over IP (VoIP) applications.
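The bit-rate arithmetic behind those savings is easy to sketch. Here is a rough back-of-the-envelope in Python using only the figures quoted in this article; the mode table and function are illustrative summaries, not excerpts from any 3GPP specification:

```python
# Codec figures as quoted in this article (illustrative summary only,
# not drawn from the 3GPP specifications).
CODEC_MODES = {
    # codec: (audio band in Hz, bit-rate range in kbps)
    "AMR-NB": ((200, 3_400), (4.75, 12.2)),
    "AMR-WB": ((50, 7_000), (6.6, 23.85)),
    "EVS":    ((50, 14_000), (5.95, 7.36)),  # rates cited for HD-equivalent quality
}

def bandwidth_saving(old_kbps: float, new_kbps: float) -> float:
    """Percent of channel capacity freed by moving to the lower bit rate."""
    return 100.0 * (1.0 - new_kbps / old_kbps)

# An operator now spending 24 kbps on HD voice could switch to EVS at 7.36 kbps:
print(f"~{bandwidth_saving(24.0, 7.36):.0f}% less bandwidth for the same quality")
```

At the article's numbers, that is roughly a two-thirds reduction in channel capacity per call, which is where the network-capacity argument for EVS comes from.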


EVS and the evolution of mobile voice
(Image credit: Qualcomm Inc.)

EVS is a robust codec that uses concealment techniques to minimize the impact of errors and recovers quickly from lost packets. It also boasts highly efficient jitter buffer management and a channel-aware mode (CAM) that adds partial redundancy. In addition, EVS features source-controlled variable bit-rate (VBR) adaptation, which delivers better speech quality than fixed-rate coding at the same average active bit rate.
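To make the jitter-buffer and concealment ideas concrete, here is a toy playout buffer in Python. It only reorders frames by sequence number and flags gaps for concealment; the real EVS jitter buffer management (and its channel-aware partial redundancy) is far more sophisticated, and all names below are illustrative:

```python
# Toy playout buffer: reorder frames by sequence number and mark
# missing frames so the decoder can conceal them. Illustrative only.
class PlayoutBuffer:
    def __init__(self):
        self.frames = {}          # sequence number -> payload
        self.next_seq = 0

    def receive(self, seq: int, payload: bytes) -> None:
        """Store a frame that may arrive out of order."""
        if seq >= self.next_seq:  # drop frames that arrive too late to play
            self.frames[seq] = payload

    def play(self) -> bytes:
        """Return the next frame in order, concealing it if it never arrived."""
        frame = self.frames.pop(self.next_seq, None)
        self.next_seq += 1
        if frame is None:
            return b"<concealed>"  # a real decoder synthesizes audio here
        return frame

buf = PlayoutBuffer()
buf.receive(1, b"frame1")  # arrives before frame 0 (network reordering)
buf.receive(0, b"frame0")
# frame 2 is lost in transit
print(buf.play(), buf.play(), buf.play())
```

The buffer plays frame 0, then frame 1, then a concealed placeholder for the lost frame, which is the basic contract a VoIP decoder relies on.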

That allows mobile operators to balance network capacity and voice call quality as desired for their service. However, the unprecedented quality EVS offers for speech, music and mixed content also makes it computationally intensive: according to technology watchers, it requires roughly six times the processing of AMR-WB.

Merits of a DSP Audio Solution

The EVS audio codec mandates a dedicated DSP solution designed specifically for voice processing. The super-wideband EVS codec, which provides excellent voice and audio quality on any mobile network, requires substantially higher signal processing power to run sophisticated multi-microphone noise reduction and echo cancellation algorithms.

CEVA, a supplier of DSP cores, is one of the firms proactively supporting the new voice codec. It recently announced the availability of the EVS voice codec for its TeakLite family of audio DSPs. CEVA provides EVS capability as an audio and voice software package that runs on the TeakLite DSPs.


TeakLite-4 processor is specially designed for codecs like EVS

A CPU that is not specifically designed for voice processing will simply consume too many MHz and too much power. Even a dedicated voice DSP like the TeakLite-4, which is designed for such codecs, needs a fair share of its MHz capacity. In fact, EVS plus appropriate dual-microphone noise reduction consumes no more than about 6 mW when optimized for the TeakLite-4 DSP.

It’s worth noting that the memory footprint of the EVS voice codec, for both code and data, is quite large, which mandates adequate memory mechanisms such as caches. The CEVA-TeakLite-4 DSP core provides these, with caches and advanced memory management.

Moreover, the TeakLite-4 has dedicated voice-processing functionality, not just the general DSP functionality of a processor like the M7. In a nutshell, a DSP offers better performance in less area than the M7 microcontroller because of its dedicated voice-processing ISA and memory management capabilities.

Also read:

CEVA-TeakLite-4 DSP Product Note

CEVA and LTE: Happy Together


CEVA achieves first certified Bluetooth 4.2 IP
by Don Dingee on 08-18-2015 at 8:05 am

SoC designers working on chips for the IoT and wearables now have access to cutting-edge certified Bluetooth Smart technology from CEVA. At Bluetooth ASIA in Shanghai, CEVA announced the RivieraWaves Bluetooth Smart 4.2 IP Platform has achieved full certification by the Bluetooth SIG to the Bluetooth 4.2 specification using the current, most stringent testing suite.


Jen-Tai Hsu Joins Kilopass and Looks to the Future of Memories
by Paul McLellan on 08-18-2015 at 7:00 am

Kilopass has a new VP of engineering, Jen-Tai Hsu. I sat down with him last week to find out where he came from and where he and Kilopass are going.

He grew up in Taiwan and went to National Taiwan University where he studied electrical engineering. Then he came to the US and went to Case Western Reserve University to get a masters degree, studying MEMS/Silicon sensors, and finally to UCLA for a PhD in EE device physics. That is enough education for anyone, so time for a real job.

He started working at National Semiconductor as a process integration engineer on flash memory for a couple of years. Then he went to Intel, where he ended up staying for 12 years. He continued to work on flash memory before moving into product engineering. Intel was the first company to get two bits into a cell using four voltage levels (multi-level cell, or MLC), with the project code name Voyager. After a couple of years he moved to what we would today call an IP group, working on the various forms of SERDES needed for the PC business: USB, PCIe, SATA and so on.

I think that it was at VLSI Technology that the first PC chipset was developed. We had IP blocks, which we called megacells, for all the chips that made up the rest of the motherboard beyond the processor and memory. Over a weekend a group threw together a chipset and it looked feasible. This became a huge business for VLSI over the next few years. Intel bundled our chipset, called Topcat, with their processor; I forget whether they stamped Intel logos on them. But we always knew that eventually it would be Intel’s business, since they knew all about next-generation processors ahead of anyone else, by definition. Anyway, that came true, and Jen-Tai spent 9 years working on chipsets at Intel until, in 2008, he left to join GUC.

GUC (Global Unichip Corporation) is a subsidiary of TSMC that does designs for customers and then uses TSMC as a foundry to manufacture them. As the ASIC business changed from being all about gates to being about IP too, every design services company needed access to IP, preferably developed in-house to give them some differentiation from all the other design services companies. Often this IP is a family of SERDES interfaces, since they are needed for a huge number of designs and are beyond the abilities of many design teams to do themselves. The way GUC was organized, the job was still very technical, since Jen-Tai was both the senior director of the group and the owner of the top-level design. GUC had been using Synopsys’s IP, but once the internal development was successful they switched to their internally developed IP. This enabled them to win heavyweight contracts from major networking manufacturers, major telecoms companies and more. IP revenue ramped up from nothing to tens of millions.

The next step was Pericom, as VP of engineering doing analog-intensive design: low-cost, high-margin chips competing with the usual suspects (NXP, TI, Maxim, IDT). Pericom had a number of design centers, each with its own methodology, which he unified so everything was much easier to integrate.

Finally, to where we are today, he joined Kilopass. He loves going to the fundamentals of technology and memories are the cutting edge of bringing process technology to life as products.

Kilopass is known for one-time-programmable (OTP) memories: both small registers used to hold encryption keys or capture which redundant DRAM columns should be disabled, and larger memories to hold code. Their current architecture has a read access time of around 30–40 ns; the next generation should be below 20 ns and consume just one-tenth of the power. This is obviously perfect for IoT-type designs that need to get power as low as possible.

Kilopass is also working on new memory technology for the SRAM/DRAM type market. So not one-time-programmable. However, they haven’t announced details yet so you’ll have to wait until the new year to unwrap that particular present.

The press release announcing Jen-Tai’s appointment is here.


My Top Ten Regrets if I were Dying?
by Daniel Nenni on 08-17-2015 at 4:00 pm

As birthday #55 rapidly approaches I say to myself: Self, if I were dying what would be my regrets? The first thing I did was ask The Google because I’m not coming up with anything really interesting myself. Also, it really isn’t a pressing problem for me as my life expectancy has increased quite a bit over the last 30 years, or so I’m told. Of course I could die from a Taiwan taxi ride this week so it is something to think about no matter what your circumstances are.

When my wife and I decided to start a family I purchased a very large life insurance policy. We both agreed that one of us should stay home and care for the children (we have two boys and two girls) so we would be single income for quite a while. I was a rising star in Silicon Valley so she stayed home. During the life insurance policy process they came up with a life expectancy for me of 68 years. At the time that sounded pretty good since my father died in his early forties and I lost a younger brother.

Fast forward to today: based on a recent study, my life expectancy is now about 15–20 years longer. Part of it is my fitness, diet, and family life, which is excellent. The other part is advancing medical science and being proactive with regular blood tests and check-ups. I had a very near-death experience and that put me on this journey of health and wellness, but I digress…

According to The Google here are the top ten regrets of dying people:


  • I should have pursued my dreams and aspirations
  • I worked too much and never made time for my family
  • I should have stayed connected with my friends
  • I should have said I love you more
  • I should have spoken my mind more
  • I should have resolved more conflicts
  • I should have had children
  • I should have saved more money for retirement
  • I should have had the courage to live truthfully
  • I should have let myself be happier

    Okay, that was no help because I have all of those pretty much covered. I have also traveled extensively and have but one thing left on my official bucket list (run with the bulls in Spain) but my wife won’t let me. I think my greatest concern is living long enough to enjoy my grandchildren with a reasonable quality of life and that brings us to semiconductors.

    Quite frequently I’m asked where semiconductor growth will come from after smartphones. IoT is an easy answer but specifically what in IoT? For me it is health and wellness. Picture this: You swallow a pill sized nanochip and the doctor uses a VR headset to see what is really going on inside your body. If he finds something, you swallow a nanobot to make the appropriate adjustments. Not too far-fetched if you watch Game of Thrones and see what kind of medical science was available back then.

    Circling back to the dying regrets thing, what I would really like is a five minute warning before I die because I think my biggest regret would be dying in a Starbucks or on the toilet like Elvis. Make an app for that and it would be the next Uber sized unicorn, absolutely.


    Designing for Variation
    by Paul McLellan on 08-17-2015 at 7:00 am

    There is a widespread phenomenon in designing chips that new effects creep up on you. First they are so small you can ignore them. Then you can add a little pessimism to your timing budget or whatever gets affected. But eventually the effects go from second order to first order. You certainly can’t ignore them, and the guard bands required to just be pessimistic use up everything there is. Finally, you have to be accurate.

    Variation is one of these areas. Of course we had variation in 90nm processes too, but it was too small to cause problems. As we get down below 28nm, though, with FinFETs, double patterning and the ultra-low voltages required by IoT, variation becomes too significant to ignore. Apparently a rule of thumb is that double patterning requires 20X as many SPICE simulations. I like to say “you can’t ignore the physics any more” but mostly because it makes it sound like I can remember it myself. Increasingly, design groups are trying to ensure that their design will yield under very large variation, 6 sigma or even 7 sigma, sometimes in strange combinations: 7 sigma for the bit-cell of a memory, 6 sigma for the sense amps, and 3 sigma for the digital periphery.

    It used to be that we genuinely had “corners” that were actually at the corner. Slow-slow, fast-fast and so on. But now the PVT (process, voltage, temperature) corners are exploding. It is not at all obvious which ones are important and which are covered by other simulations. Analog/RF and memory are perhaps the worst, but even such safe stuff as digital standard cells cannot ignore variation and the number of simulations required for characterization has exploded.

    The challenge is that to do this requires a lot of simulations. Thousands, or in some cases billions. Most of these simulations are wasted since they are not the ones at the extreme. What you would really like is a tool that ran the simulations that were necessary and ignored the ones that were not. Or used machine learning to optimize which simulations were required.
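The arithmetic behind “thousands, or in some cases billions” is easy to check. A back-of-the-envelope sketch in Python, assuming a single normally distributed parameter and a one-sided failure threshold (the function name is ours, purely illustrative):

```python
import math

def tail_probability(sigma: float) -> float:
    """One-sided tail probability of a standard normal beyond `sigma`."""
    return 0.5 * math.erfc(sigma / math.sqrt(2.0))

for s in (3, 6):
    p = tail_probability(s)
    # Brute-force Monte Carlo needs roughly 1/p samples to see even one failure.
    print(f"{s} sigma: fail probability ~{p:.1e}, ~{1.0 / p:.0e} samples")
```

At 3 sigma a few thousand samples suffice; at 6 sigma brute force needs on the order of a billion. That gap is exactly what a variation-aware tool tries to close by spending simulations only where the extremes are.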


    Well, that is basically what Solido does. Their Variation Designer tool takes in:

    • the netlist (or there is also direct interface to Cadence’s Virtuoso ADE)
    • PDKs from the foundry

    and then it runs the simulations in your SPICE engine (e.g. Spectre, HSPICE, BDA AFS), interfacing to SGE, LSF or RTDA, which are already managing the compute resources.

    Solido is one of those EDA companies that have been around for a long time, founded in 2005. If I had a dollar for every EDA company that was founded to address a problem too early, I’d be rich. But 28nm arrived and suddenly variation was a huge deal: instead of them knocking on reluctant design groups’ doors, their own doors were getting knocked on (their doors are in Canada). Initially it was the mobile people, who were driving fast to advanced nodes, but now the mainstream is coming through. They have thousands of users in a few dozen companies. Nobody these days likes their name being used as a reference, but they have about 8 of the top 12 semiconductor companies as customers. Their website lists MicroSemi, Applied Micro, Sidense, Cypress, Huawei, nVidia, Broadcom, and more. So right at the bleeding edge. As a private company they don’t publish all the numbers, but they are profitable: 40 people today, with plans to be 50 by the end of the year.

    The next easy time to see them is at TSMC’s OIP event coming up next month. Register here. The Solido website is here.


    GaN Technology for the Connected Car
    by Alex Lidow on 08-16-2015 at 4:00 pm

    GaN technology is disruptive, in the best sense of the word, making possible what was once thought impossible: eGaN® technology is 10 times faster, significantly smaller, and delivers higher performance at costs comparable to silicon-based MOSFETs. The inevitability of GaN displacing the aging power MOSFET becomes clearer as it comes to dominate existing applications and enable new ones.

    This posting highlights the contribution GaN technology is making to several automobile applications: the increasingly complex infotainment system, the all-important safety systems, and the emergence of electrically powered vehicles.

    Automotive Applications: Introduction and Overview
    The automotive industry understands the trend toward making the interior of the car a “living space,” and has begun to show its vision of the future for the fully mobile lifestyle. The dashboard is being taken over by the smartphone, while sensors and computers are being added to increase safety. Moving toward a longer-term goal, our vehicles are on a path to become fully electric, reducing our use of fossil fuels.

    These trends have a few things in common: they all involve batteries, a greater reliance on sensors, and wireless communications. As a result, there is growing pressure for faster sensors, more wireless bandwidth, and anything that will help us “un-tether” from the relentless recharging of our phones and other electronic devices, including, one day, our cars. Let’s take a closer look.

    Infotainment: Smartphone and Wireless Power Throughout the Cabin
    Mobility has become a major theme for the consumer. Smartphones allow us to take our music, games, movies, television shows, contacts, and “the internet” with us at all times…even in our automobiles! Applications such as Google Maps give us directions, tell us about traffic conditions, and provide street and satellite images of our destination. We want our vehicles to be completely in sync with our smartphones, tablets, laptops, and desktops.

    A rapidly emerging technology that helps the batteries in our electronic devices keep up with the demands added by the vehicle’s infotainment system is wireless power transfer. The latest techniques enable wireless charging of multiple devices without contact with the power transmission unit (PTU), at efficiencies similar to wired chargers.

    Wireless phone charging in a car is becoming more critical as the smartphone itself becomes the information receiver and router for the dashboard infotainment center. Several automotive manufacturers are adopting operating system standards that enable seamless Android or iOS interfaces to dashboards, which become “slaves” to the information and entertainment available on the driver’s and other occupants’ smartphones.

    The Rezence® wireless power transmission standard, developed by a consortium of electronics industry leaders such as Samsung, Qualcomm, Intel, and EPC, is undergoing rapid adoption in mobile phone and tablet charging applications. To implement this standard, several automotive manufacturers are developing embedded wireless charging stations in the center console of the vehicle so smartphones, as well as other mobile devices, can remain charged while the automobile is in operation, despite intense and continuous usage.

    Given that the Rezence standard transmits power at 6.78 MHz, a stretch for aging silicon power devices, GaN technology is the heavy favorite for adoption over the slower and less efficient silicon power MOSFET in both mobile and automotive applications.

    Beyond using wireless power transfer technology to charge devices, some visionary designers in the automotive industry are exploring ways to use it to reduce or eliminate the wiring harnesses throughout the car, thus reducing cost, weight, and fire hazards.

    In addition to wireless charging becoming commonplace within the car’s cabin, it is becoming available to charge fully electric cars or plug-in hybrids. With a “charging mat” as the power transmitter, you will merely have to place the mat on the floor of your garage, park the car over the mat and off you go – no need to “connect the car to an outlet.”

    Safety: Sensing and Autonomous Control
    To ensure safety and prevent collisions, it is critical that a vehicle be aware of its surroundings at all times. The higher the speed of the vehicle, the more rapidly the “situational awareness” system needs to sense, and the more precisely it needs to interpret the distance to the potential hazard.

    Today automotive manufacturers use a variety of sensors in these safety-related functions, including ultrasonic sensing, short-range microwave radar, and video pattern recognition. Light Detection and Ranging (LiDAR) sensors have recently begun to emerge in automotive sensing applications.

    Although we anticipate broad adoption in automotive, LiDAR sensors were initially used to generate three-dimensional digital topographical maps for landscape mapping and navigation software by companies such as Google and Nokia NAVTEQ-Bing. Because LiDAR chases the speed of light to improve resolution, eGaN® power transistors, with about a 10 times advantage in switching speed over silicon MOSFETs, have been used almost exclusively in these mobile applications.
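The time-of-flight arithmetic shows why “chasing the speed of light” makes switching speed so valuable. A quick sketch in Python; the numbers are generic physics, not tied to any particular sensor:

```python
# Back-of-the-envelope LiDAR time-of-flight arithmetic: every nanosecond
# of timing error corresponds to about 15 cm of range error, which is
# why fast GaN switching edges matter for depth resolution.
C = 299_792_458.0  # speed of light, m/s

def round_trip_distance(delay_s: float) -> float:
    """Target distance for a measured round-trip delay."""
    return C * delay_s / 2.0

print(round_trip_distance(100e-9))  # 100 ns round trip -> target ~15 m away
print(round_trip_distance(1e-9))    # 1 ns of timing error -> ~0.15 m of range error
```

A transistor that switches ten times faster can resolve correspondingly finer timing, and hence finer depth, which is the eGaN® advantage the article describes.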

    The imaging speed and depth resolution have become so good using eGaN® FETs that manufacturers experimenting with autonomous vehicles are using similar LiDAR sensors for driverless navigation systems. In addition, several automakers are incorporating eGaN® FET-based LiDAR sensors in their vehicles for general collision avoidance and blind spot detection. LiDAR has a very exciting future, since it is the detection and guidance system being used for “driverless cars.”

    Electric Drive: Automotive Freedom From Fossil Fuels
    The inevitable evolution – from an internal combustion engine, to hybrid vehicles, plug-in hybrids, and, finally, to fully electrically powered cars – is potentially a very large market for GaN technology. The demand for electrical power grows in proportion to the amount of propulsion handled by the electric motor; for example, the Tesla S delivers 416 hp, or 310 kW of electrical power to the rear wheels. Delivering more power to propel a vehicle requires higher voltages in order to keep the current levels flowing through the motor windings with minimum conduction losses. Today the dominant transistor in electric or hybrid vehicle propulsion systems is the insulated gate bipolar transistor (IGBT) in voltages ranging from 500 V to 1200 V.
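As a quick sanity check on the Tesla S figure quoted above, here is the mechanical-horsepower-to-kilowatts conversion (1 hp = 745.7 W); the helper function is ours, for illustration:

```python
def hp_to_kw(hp: float) -> float:
    """Convert mechanical horsepower to kilowatts (1 hp = 745.7 W)."""
    return hp * 745.7 / 1000.0

print(round(hp_to_kw(416)))  # 310, matching the ~310 kW figure in the text
```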

    However, wide bandgap (WBG) transistors made using either silicon carbide (SiC) or GaN technology hold great promise for this high power application, since they have higher efficiency at lower switching frequencies and possess the ability to operate at much higher temperatures.

    The requirements for electric motor drives sit at the interface between GaN, SiC and IGBT technologies. Ultimately, the cost and reliability of the electric drive system will determine the winner for this application, but for now, it is too soon to call.

    Summary: GaN Technology for the Connected Car
    In 2013, 65 million cars were manufactured worldwide. This presents a huge potential market for any technology that can improve the customer’s automotive experience. Infotainment mobility through wireless charging, and autonomous vehicles enabled by LiDAR sensors, are two areas that will emerge within the automotive world over the next few years. Both of these applications rely on the higher speed and lower cost of GaN transistors.

    In the future, as electric vehicles gain acceptance and become more ubiquitous, motor control for the powertrain has the potential to become an enormous market for GaN transistors. The deciding issue among the competing technologies – GaN, SiC and IGBT – will be cost.

    The automotive industry is undergoing a technological disruption and is taking advantage of high-performance gallium nitride technology. GaN devices are appearing in an ever-increasing number of systems, and the future looks even more promising. As discussed above, several areas are clearly emerging:

    • Infotainment – where electronic devices such as phones and GPS systems can be powered wirelessly
    • Safety – LiDAR sensing and autonomous control of the vehicle is leading to safer driving with more precise avoidance control systems
    • Electric Drive – electric vehicle propulsion putting us on the path to “freedom from fossil fuels”
    • Autonomous Vehicles – LiDAR sensing and electronic control systems are available and being tested throughout the world

    Gallium nitride is displacing silicon as the fundamental material used for power conversion with the promise to displace silicon not just in power transistors, but in analog and digital integrated circuits as well. EPC is pursuing this $350B combined power transistor, analog and digital IC semiconductor market, and the reason is simple – GaN technology is faster, smaller, and now, price competitive with MOSFETs.

    Also read: Four Things a New Semiconductor Technology Must Have to be Disruptive


    My Candid Conversation with Karen Bartleson
    by Pawan Fangaria on 08-16-2015 at 7:30 am

    If you don’t know about Karen Bartleson, before I get into details, let me tell you that she was the President of the IEEE-SA for the past two years and has been nominated by the IEEE Board of Directors as a candidate for IEEE President-Elect for 2016. The IEEE is an organization I admire, as it plays a key role in advancing technology and innovation, particularly in the electronics and semiconductor industry. Since electronics has entered a larger sphere of our lives, including communications, electrical systems, transportation, home, healthcare, and even governance, the IEEE’s role widens to lead and make a bigger contribution to our lives. Hence, I found it a great opportunity and pleasure to talk to Karen and learn her views on some of the contentious issues our semiconductor industry is facing, as well as some general global issues, and what the IEEE could do to help get them resolved.

    Before I enter into the conversation, here is a brief background on Karen Bartleson. She has 35 years of experience in the semiconductor industry, mostly in EDA. She started her career with Texas Instruments’ Design Automation group and currently is Senior Director responsible for Corporate Programs and Initiatives at Synopsys. In between, she had several encounters with proprietary and industry-standard tools and formats at other companies. Her passion for standards led her to the presidency of the IEEE Standards Association for the past two years. She also chaired the IEEE Internet Initiative and currently is a member of IEEE Global Public Policy.

    With Karen’s IEEE profile rightly pointing toward some of the key areas of our modern age, I was especially interested in talking to her about IEEE’s initiatives in IoT, patent policy, policies on global issues, and so on. Here is the conversation –

    Q: Karen, you have been IEEE-SA president and spearheaded several standards. In today’s internet age, the industry needs to converge on some standards for IoT (Internet of Things) verticals as well as horizontals. Currently market forces are driving that effort. How do you think, IEEE can catalyze the effort for an early convergence on IoT standards?

    A: It’s typical that in an emerging market, standards are fragmented and numerous. A good example is from our own EDA industry. As techniques and tools for low-power design were developed, each had a different way of expressing low-power design intent. Designers created in-house solutions and EDA vendors created tool-specific ones. As the market progressed, designers realized the inefficiency and error-prone nature of dealing with multiple ways of representing low-power design intent. A group of leading design companies made the conscious decision to work together and demand that EDA vendors cooperate to produce a single standard. Thus, IEEE Standard 1801, the Unified Power Format (UPF), was created.

    The Internet of Things is an emerging market too, so it’s no surprise to see a myriad of standards proposals coming forward. As a leader in market-driven standards, the IEEE can provide its proven and well-respected platform for standards development to the IoT developers. Actually, this is already happening. As part of the IEEE’s IoT community, the IEEE Standards Association has been developing new standards for IoT as well as leveraging its famous standards like 802.11 (Wi-Fi). There are continuing workshops and other activities from the IEEE-SA to raise awareness of existing standards and to unify the market around a common platform for IoT standards development. We are maintaining a dedicated website for IoT related projects, standards, studies, and so on.

    Q: Often we see disputes in royalty rates on standards’ patents, and we have seen several court cases around that. In fact, recently a US court asked for IEEE recommendation on royalty payments for standard-essential patents. Do you think IEEE can pro-actively come up with detailed guidelines for royalty payments in case of patents for different types of standards? Can these be followed to stop undue expenses of money, time and resources in long running patent lawsuits?

    A: This is definitely a serious issue, not only in the US. The European Commission has been struggling with it too. There are infamous cases in which there have been three or more orders of magnitude difference in amounts that patent licensors and licensees believe are “reasonable”.

    The IEEE worked hard for the past couple of years to update its standards patent policy, in light of the request you mentioned as well as others. In February of 2015, I’m proud to say that the IEEE approved updates to its patent policy.

    The updates bring greater clarity in four areas: i) the meaning of a “reasonable” rate, ii) non-discrimination through the definition of “Compliant Implementation”, iii) the availability of Prohibitive Orders, and iv) permissible demands for reciprocal licenses. The IEEE patent policy protects both patent holders and implementers by clearly describing participants’ obligations and providing assurance to implementers. While the policy is voluntary – it is not a law – for participants in IEEE standards development, it does offer a solid framework in which to work. The policy does not specify dollar amounts, as that is not a prerogative of the IEEE; it recommends that royalties be based on the “smallest sellable unit”. I am hopeful that the updated policy will improve the standard-essential patent landscape all over the world.

    Q: On the Internet Initiative in general, what can IEEE do to maintain and promote Net Neutrality across the world, and in what ways?

    A: The IEEE Internet Initiative aims to bring the voice of technologists to policy makers in the areas of cyber-security, cyber-privacy, and Internet governance. Net Neutrality falls under the Internet governance area and is, of course, a controversial contemporary issue. The IEEE has not issued position statements and has not taken sides. Instead, through the IEEE Internet Initiative, the conversation about Net Neutrality can be brought to forums such as ETAP – Experts in Technology and Policy – which has been held in San Jose, CA and Tel Aviv, Israel. In addition, IEEE publications such as the award-winning Spectrum magazine have been publishing articles about Net Neutrality to educate technologists and policy makers. Cyber-security and cyber-privacy are also front and center in the initiative, as is preventing the Balkanization of the Internet into a “Splinternet”.

    Q: On a global scale, IEEE is definitely a global, professional organization. However, a more involved participation from developing and underdeveloped world is lacking. Can we see more low-fee IEEE conferences, lectures, resource sharing, other ways of motivation, etc. in these regions? How would you do that?

    A: Yes, over 50% of IEEE members are outside the US, and participation from developing and underdeveloped regions will benefit everyone. For traditional activities such as conferences, funding is of course required. The IEEE does fund humanitarian activities and supports local sections, but as with everything, there is never enough money to do all the things everyone wants. One way to bring more local conferences to these areas would be for industry to sponsor them; when industry recognizes an emerging economy, it often brings resources to bear. The IEEE would like to get closer to industry, which we have done successfully in standards. Governments, too, can provide funding given the right incentives. As the IEEE becomes more involved in global public policy, it can show governments how supporting IEEE activities can help build a thriving academic climate and a technically capable workforce.

    As for resource sharing and other motivation, the IEEE’s new platform, Collabratec, will enable IEEE members everywhere to build online communities. These communities can enable all kinds of things, such as mentoring, technical discussions, and education. Certainly this means that developing and underdeveloped areas need Internet access and connected devices. Both Google and Facebook are working on this, and surely these companies are full of IEEE members.

    But I want to mention the young professionals of IEEE. I have met many of them at regional meetings this year and through Facebook – really! They are full of enthusiasm and positive attitudes about the future. The young professionals in developing and underdeveloped regions are engaging and helping each other to shape the future of the IEEE. As they continue their paths with the IEEE, they will surely bring greater participation all over the world.

    Q: Any thoughts on rural education and introduction of technologies to underdeveloped world?

    A: Education is the most important thing a human being needs after air, food, water, clothing, and shelter. Education can lead to introducing new technologies to underdeveloped parts of the world. These can dramatically improve the quality of life. This is why I believe in the IEEE’s mission to foster technological innovation and excellence for the benefit of humanity – in short, advancing technology for humanity.

    The most obvious, but not necessarily the easiest, way to bring education and new technologies to the underdeveloped world is via the Internet. The challenges include infrastructure development and readily available electronic communication devices. The IEEE works in many areas of technology that can help build up that infrastructure and bring cost-effective devices to market, which can certainly improve the lives of many.

    Q: Nowadays, there is semiconductor technology in most medical equipment, healthcare instruments, and so on. Do you think the IEEE can join with global healthcare organizations such as the WHO (World Health Organization) to prevent serious ailments like cancer and AIDS? So far, the effort by health organizations has mostly been toward creating awareness of these diseases.

    A: Wow. If the IEEE could prevent disease, that would be incredible. IEEE members contribute to the advancement of technology for humanity, which includes electronic devices, power grids, computers, communication systems, standards, and a wide variety of technologies used by researchers for healthcare and disease prevention. The IEEE holds conferences on relevant topics that include speakers and participants from the WHO. The WHO also holds conferences that include IEEE experts, and it leverages IEEE standards; for example, the WHO provides a list of medical devices for Ebola care, and some of those devices conform to IEEE standards. As for a deeper partnership with the WHO, that is entirely doable, provided a program can be defined that leverages each organization’s strengths and is mutually beneficial.

    Q: Coming to increasing value to members of IEEE, what are the initiatives you are taking, particularly towards creating a platform for jobs for students and career advancement for professionals?

    A: The IEEE already has platforms for career development and jobs, both for students and professionals. They include things like a resume builder, job listings, and continuing education. I think these need to be publicized more, which would increase their effectiveness. The Collabratec platform also promises to enhance the IEEE’s current offerings.

    Q: How do you see the growing importance of social media? Today, it is seen as an alternative avenue for knowledge development, finding answers to any of your needs, sharing technology, and so on.

    A: Social media is a part of everyday life for a significant percentage of the world’s population. Today’s estimate is that over 3 billion people use the Internet – almost half the world. Using the Internet to communicate with people everywhere, nicknamed social media, has become as common as the telephone. So it definitely will continue to grow in importance for knowledge sharing, cultural development, and social awareness all over the globe. It is generally felt that the younger generations rely more on social media than the older ones. If that is true, then I am a member of the younger generation.

    Q: Social media has proliferated across the world including developing countries. How can IEEE leverage that for sharing knowledge and technology with a larger section of society (beyond its members) the world over?

    A: The IEEE can leverage social media in many ways. The IEEE Facebook page has 1.3 million followers, and the IEEE Communications Society has 1 million. These and other IEEE pages are quite active, sometimes with 1,000 shares of a post. They are not closed to IEEE members only, so society as a whole is able to access them. For those who prefer LinkedIn, the IEEE main page has 74,000 followers and is also open for anyone to view, and there are a variety of LinkedIn groups available for all kinds of special interests. The new Collabratec platform will also be available to non-IEEE members.

    However, I think the IEEE can use social media to post valuable content in places that are not just within the technical realm. For instance, we can participate in conversations on Reddit, and we can continue working with popular media outlets to create content of interest to the general public; I contributed to a CNN article about IoT, for example. We have a Public Visibility Committee that is exploring new ways to get the message out about the value of the engineering profession and other subjects of interest to people beyond the engineering sphere.

    Q: Okay, the last question. I can see that there is a long list of items to be done. What’s the first thing you would focus on if you are elected IEEE President?

    A: My biggest interest is in the IEEE becoming more involved in global public policy. During the past couple of years, the IEEE Board of Directors has determined its priorities and areas of focus that will position us for the future. Because the term of the IEEE President is only one year, I believe it’s important to focus on keeping initiatives and programs moving forward to completion. This requires a strong partnership among the Past President, President, and President-elect. I have the utmost respect for the current President and President-elect, and I fully support their direction. One area of focus identified by the Board that I would concentrate on the most is the IEEE’s public imperative, which includes having the IEEE become more involved in global public policy. By uniting technologists and policy makers, I think we can significantly and positively change the world.

    This was a greatly inspiring discussion with Karen. My hour-long conversation with her revealed her great energy, motivation, and enthusiasm for changing things for the betterment of lives around the world. I was happy to learn that she was directly involved in framing the patent policy in February this year, and I found her well informed about the demographics of different parts of the world. She really does belong to the younger generation. I hope she comes through with flying colors and does wonders in the short span of one year, if elected President of the IEEE.

    By the way, the balloting for the IEEE President’s election starts on 17th August 2015. Visit the IEEE Election Page for more details.

    Also review some of the IEEE pages on technical topics discussed in this article:
    IEEE-SA IoT website – http://standards.ieee.org/innovate/iot/
    IEEE ETAP Forum on Internet Governance, Cybersecurity and Privacy – http://etap.ieee.org/
    IEEE professional networking platform, Collabratec – https://ieee-collabratec.ieee.org/
    The CNN article on IoT with statements by Karen

    Pawan Kumar Fangaria
    Founder & President at www.fangarias.com


    Snapdragon 820 SoC Finds Qualcomm at Crossroads

    Snapdragon 820 SoC Finds Qualcomm at Crossroads
    by Majeed Ahmad on 08-16-2015 at 4:00 am

    Qualcomm’s new system-on-chip (SoC), the Snapdragon 820, has been unveiled with a few technical details, and it’s already making waves with its impressive GPU features and a powerful camera engine. At the same time, however, a couple of industry news bites have clouded the Snapdragon 820 launch fanfare.

    First, Apple’s new iPhone, expected to be launched in September 2015, will be using Intel’s LTE modem chips in some product versions instead of Qualcomm’s Gobi modem platform. Second, Samsung, which has just launched Galaxy Note 5 and Galaxy S6 Edge+, is using its in-house baseband chips based on CEVA DSP cores.


    Snapdragon 820 marks another generation leap in SoC technology

    Now both Apple and Samsung have their own application processors, and they are replacing Qualcomm’s baseband chips in some of their smartphone models. Another top-tier smartphone maker, Huawei, is also developing in-house application processor and baseband chips through its chip unit HiSilicon.

    Then there is Asustek, a rising smartphone star with its ZenFone handsets, which uses Intel’s mobile SoC solutions for both the application processor and the baseband. Other notable handset makers like Motorola and Xiaomi also seem inclined toward MediaTek for more cost-effective solutions.

    That’s the challenging backdrop against which the Snapdragon 820 is going to enter the mobile market. Some of the challenges are technical – for instance, the heat-related issues that marred its predecessor, the Snapdragon 810. But other challenges are based on pure market dynamics, regardless of how good the chip is. Nevertheless, the Snapdragon 820 comes with some notable hooks that might be especially attractive to mid-range smartphones.

    GPU Plus Image Processing Angle

    So far, Qualcomm has provided only a few details about the new chip, and they mostly relate to imaging and video features. For instance, Qualcomm claims that the Adreno 530 GPU is 40 percent faster than its Adreno 430 predecessor on graphics benchmarks while consuming 40 percent less power. Moreover, a standalone GPU power manager lets the graphics power be turned on and off more quickly, which significantly improves power savings when the GPU is idle.
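    Taken at face value, the two 40 percent claims compound into a much larger gain in performance per watt. A quick back-of-the-envelope check, using only the article’s figures (not official Qualcomm benchmark data), runs as follows:

```python
# Back-of-the-envelope check of the claimed Adreno 530 vs. Adreno 430 gains.
# The 1.40 and 0.60 factors come from the article's 40%-faster / 40%-less-power
# claims; real silicon behavior will vary by workload.
perf_gain = 1.40    # 40% faster on graphics benchmarks
power_ratio = 0.60  # 40% less power means 60% of the predecessor's draw

# Performance per watt scales as performance divided by power.
perf_per_watt = perf_gain / power_ratio
print(f"Relative performance per watt: {perf_per_watt:.2f}x")  # ~2.33x
```

    In other words, if both claims hold simultaneously, the Adreno 530 would deliver roughly 2.3 times the work per unit of energy, which is the kind of headroom the idle power-gating feature builds on.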

    On the camera side, Qualcomm has brought forth the 14-bit Spectra image signal processor, which supports three cameras at a time and is claimed to offer DSLR-like photographic quality. The Snapdragon 820 can handle one 25-megapixel camera or two 12-megapixel image sensors, the latter usable as depth-sensing cameras.


    Snapdragon 820: An attempt at bringing intensive graphics processing to mobile

    That’s a major shift in the smartphone camera landscape, where lenses and image sensors have so far been the main criteria. Qualcomm has raised the bar in image processing in a quest to take the smartphone camera to next-generation applications such as object recognition and virtual reality, and to enable them with a low-power footprint.

    The powerful combination of the Adreno 530 GPU and the Spectra camera engine clearly aims at boosting the user experience for computational photography, computer vision, and virtual reality. The GPU-plus-image-processor angle also marks a crossroads for SoC devices, which have mostly been focusing on CPU might and core count.

    According to industry reports, the Snapdragon 820 will have a new custom-designed 64-bit quad-core CPU called Kryo, based on the ARMv8 architecture, though the San Diego, California–based chipmaker hasn’t provided any details on the CPU side. Moreover, the Snapdragon 820 chipset is going to be built on Samsung’s 14nm FinFET process and is expected to be available in the first half of 2016.

    Also read:

    Why Qualcomm Lost Samsung and Will Get Them Back!

    3 Key Frontiers for Samsung’s Next Mobile SoC

    Majeed Ahmad is author of books Smartphone: Mobile Revolution at the Crossroads of Communications, Computing and Consumer Electronics and The Next Web of 50 Billion Devices: Mobile Internet’s Past, Present and Future.


    The Intel Apple Deal is a Nothingburger!

    The Intel Apple Deal is a Nothingburger!
    by Daniel Nenni on 08-15-2015 at 12:00 am

    The latest Intel rumor that the pro-Intel media are flogging is that Intel modems will be in some of the new iPhones. The deal is estimated at around $1B. An “estimated” value of a “rumored” deal is quite funny in itself, but let’s take a deeper look at what we are gossiping about here.

    Intel got into the 3G/LTE business after acquiring the Infineon wireless division back in 2011. In 2013, Intel also acquired Fujitsu Wireless. “Rumor” has it that the combined cash outlay for these two deals is an “estimated” $2B. I also heard a rumor that Intel bought smaller companies for its wireless effort, including $25M for an LTE company in Dresden that was recently shut down. These acquisitions included hundreds of talented engineers (estimate), some of whom no longer work at Intel (rumor).

    The Apple deal was leaked by analyst Gus Richard with Northland Capital Markets and repeated by dozens of “media” outlets:

    Intel’s Modem Wins at Apple: Apple has been evaluating Intel’s modem for a while. We now believe that Intel will capture roughly 50% of Apple’s modem business in the upcoming iPhones due to launch September 9th. Further, assuming a 50% share of modem business in the new iPhones, we estimate that this win could represent $750M to $1.25B in revenue for Intel in CY16. This is a marquee win for Intel and would go a long way to reducing the mobile business losses.

    I’m all for this by the way. Competition is the foundation of the mighty fabless semiconductor ecosystem, so good for Intel, if it is true. I do have some observations worth considering:

    The modem in question is manufactured by TSMC using a 28nm process. Why? Because Infineon, Fujitsu, and just about every other wireless chip company use TSMC at 28nm. In addition to the lengthy design and manufacturing process a modem must go through, further qualification by the regional carriers is required.

    I’m sure Intel had planned on making a 14nm version of the modem using its second-generation FinFET process, but I question those plans given that in the not-too-distant future these modems will be integrated into the SoC. By not-too-distant future I mean today in some cases and next year in most of the others. I know for a fact that Apple has assembled a talented modem team here in Silicon Valley. Qualcomm and MediaTek already have SoCs with an integrated modem for LTE Category 7 (leading-edge modems are now Category 10).

    Intel is a long-time TSMC customer, by the way, but I highly doubt it gets Apple- or Qualcomm-sized wafer discounts. So you have to ask yourself: Self, how much money is Intel really going to make on this deal? $1B of revenue is nice, but not if the margins are significantly lower than the Intel corporate norm. And certainly not if you are getting out of the mobile business altogether, which I think Intel should, absolutely.

    Interesting to note, TSMC reported a serious revenue spike in July: month-over-month revenue increased 35%, year-over-year revenue increased 25%, and January–July year-over-year revenue increased 28%. Any thoughts on where this revenue spike came from? I will share my observations, opinions, and experience in the comments section, and please do the same. I’m in Taiwan next week, so I should be able to get a good answer in a couple of days. Sound reasonable?
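    For readers who want to reproduce the growth arithmetic behind those percentages, here is a minimal sketch. The revenue figures below are hypothetical placeholders chosen only to illustrate the calculation, not TSMC’s actual reported numbers:

```python
# Period-over-period growth as used in the article (percent change).
# The NT$ figures below are hypothetical placeholders, not TSMC's actual revenue.
def growth_pct(current, previous):
    """Percent change from `previous` to `current`."""
    return (current / previous - 1) * 100

june_rev = 70.0              # hypothetical June revenue, NT$ billions
july_rev = june_rev * 1.35   # a 35% month-over-month jump, as reported

print(f"MoM growth: {growth_pct(july_rev, june_rev):.0f}%")  # 35%
```

    The same function applies to the year-over-year comparisons: swap in July 2014 revenue as `previous` for the 25% figure, or the January–July 2014 cumulative total for the 28% figure.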


    DVCon India

    DVCon India
    by barun on 08-14-2015 at 12:00 pm

    After its successful launch last year, the Design and Verification Conference & Exhibition India (DVCon India) will be held on September 10–11 in Bangalore. The event has two primary tracks: ESL and DV. The ESL track covers electronic system level (ESL) design and verification, including virtual prototypes of electronic systems and SoCs, pre-silicon software development and debug, power and performance analysis with realistic use cases, architectural exploration, high-level synthesis, and interoperability standards for system models. The DV track covers design and verification (DV), including design and verification languages, simulation methodologies based on SystemVerilog – including the Universal Verification Methodology (UVM) – and complementary technologies such as formal verification, hardware acceleration, in-circuit emulation (ICE), and prototyping. This year there will be keynotes from industry veterans, including Harry Foster, Chief Scientist, Design Verification Technology Division, Mentor Graphics; Manoj Gandhi, Executive Vice President and General Manager, Verification Group, Synopsys; and Vinay Shenoy, Managing Director, Infineon Technologies India and Chairman, IESA.

    DVCon India is always focused on emerging trends. This year, we will again discuss key trends such as formal analysis and software-driven verification in the DV track, and virtual platforms for verification and performance assessment as well as high-level synthesis in the ESL track.

    The key topics to be discussed under the ESL track are:

    • Transaction-level modeling of systems and SoC
    • Verification techniques using SystemC-UVM or other C/C++ testbenches
    • High-level synthesis techniques to reduce power and increase performance
    • Hardware-software co-development and co-verification
    • Links between ESL and embedded systems software
    • ESL extensions to handle modeling and verification of analog/mixed-signal (AMS) designs

    The key topics to be discussed under the DV track are:

    • Multi-language and other extensions to the UVM
    • Management of verification process, resources, and metrics
    • Formal techniques, assertion automation/synthesis, and static verification
    • Software-driven verification using C/C++ embedded test cases
    • Debug automation, including identification of error sources
    • DV extensions to handle verification of analog/mixed-signal (AMS) designs

    In 2014, the very first DVCon India was held in Bangalore. Two parallel tracks were identified for the conference – Design and Verification (DV) and Electronic System Level (ESL) – based on the experience gained from the Indian SystemC Group, with sponsorship from Accellera and DVCon US. There was an overwhelming response at every stage, right from the call for abstracts. Every abstract and tutorial proposal was reviewed by more than three members and finalized for selection after internal discussion. The Technical Program Committee welcomed the authors’ thoughts on their papers and gave them the flexibility to present in a style that would best reach the audience. Dr. Walden C. Rhines, CEO of Mentor Graphics; Dr. Mahesh Mehendale, CTO, MCU at Texas Instruments; Janick Bergeron, Synopsys Verification Fellow; and Mr. Vishwas Vaidya, AGM, Electronics at Tata Motors, delivered the keynote speeches. Initial expectations were that a small number of delegates would attend, but DVCon India 2014 managed to bring together over 450 attendees from more than 80 companies and universities. Feedback from attendees on the two-day event, especially on the technical program, was very encouraging and positive.

    We expect the 2015 conference to be a huge success given the strong content and the strenuous efforts put in by the planning team.

    You can register for DVCon India 2015 at: http://dvcon-india.org/registration/