
Atmel’s L21 MCU for IoT Tops Low Power Benchmark
by Majeed Ahmad on 03-30-2015 at 7:30 am

The Internet of Things (IoT) juggernaut has unleashed a flurry of low-power microcontrollers, and in that array of energy-efficient MCUs one product has claimed the crown as the lowest-power Cortex-M based solution, with power consumption down to 35µA/MHz in active mode and 200nA in sleep mode.

How do we know whether Atmel Corp.’s SAM L21 microcontroller can actually claim leadership in the ultra-low-power processing movement? The answer lies in the EEMBC ULPBench power benchmark introduced last year. It ensures a level playing field by having each MCU perform 20,000 clock cycles of active work once a second and sleep for the remainder of the second.
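To see why this duty cycle matters, a back-of-the-envelope estimate is useful. The sketch below uses the headline figures quoted in this article plus an assumed 4MHz clock (my assumption, not an official ULPBench parameter) to show how the average current splits between the active burst and the sleep floor:

```python
# Rough average-current estimate for a ULPBench-style duty cycle.
# Figures from the article: 35 uA/MHz active, 200 nA in sleep.
# The 4 MHz clock speed is an assumption for illustration only.

ACTIVE_UA_PER_MHZ = 35.0
SLEEP_NA = 200.0
CLOCK_MHZ = 4.0
ACTIVE_CYCLES = 20_000          # ULPBench: 20,000 cycles of work per second

active_time_s = ACTIVE_CYCLES / (CLOCK_MHZ * 1e6)   # time awake each second
sleep_time_s = 1.0 - active_time_s

active_current_ua = ACTIVE_UA_PER_MHZ * CLOCK_MHZ   # current while awake
sleep_current_ua = SLEEP_NA / 1000.0                # 200 nA = 0.2 uA

avg_current_ua = (active_current_ua * active_time_s
                  + sleep_current_ua * sleep_time_s)
print(f"awake {active_time_s*1000:.0f} ms/s, average {avg_current_ua:.3f} uA")
```

At these duty cycles the sleep floor contributes roughly a fifth of the total, which is why vendors fight over nanoamps.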


ULPBench shows the SAM L21 is lower power than any competing M0+-class chip

Atmel has just released the ultra-low-power SAM L21 microcontroller it demonstrated at electronica in Munich, Germany in November 2014. Architectural innovations in the SAM L21 MCU family enable low-power peripherals—including timers, serial communications and capacitive touch sensing—to remain powered and running while the rest of the system is in a reduced power mode. That further reduces power consumption for always-on applications such as fire alarms, healthcare and medical devices, and connected wearables.

Next, the 32-bit ARM-based MCU portfolio combines ultra-low power consumption with Flash and SRAM large enough to run both the application and the wireless stack. Together, these features make up the basic recipe for battery-powered mobile and IoT devices, extending battery life from years to decades and reducing the number of times batteries need to be changed across a plethora of IoT applications.

Low Power Leap of Faith

Atmel’s SAM L21 microcontrollers have achieved a staggering ULPBench score of 185.8, well ahead of the runner-up, TI’s SimpleLink C26xx microcontroller family, which scored 143.6. The SAM L21 microcontrollers consume less than 940nA with full 40kB SRAM retention and the real-time clock and calendar running, and 200nA in the deepest sleep mode. According to an Atmel spokesperson, that comes to one-third the power of competing solutions.
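Those nanoamp figures translate directly into battery life. As a rough illustration (the 940nA retention figure is from the article; the 225mAh coin-cell capacity is my assumed round number, and battery self-discharge is ignored):

```python
# What a sub-microamp retention current means for battery life.
# 940 nA is the SAM L21 figure quoted above; the 225 mAh CR2032
# coin-cell capacity is an assumed round number for illustration.

SLEEP_CURRENT_MA = 940e-6       # 940 nA expressed in mA
BATTERY_MAH = 225.0             # typical CR2032 capacity (assumption)

hours = BATTERY_MAH / SLEEP_CURRENT_MA
years = hours / (24 * 365)
print(f"{years:.1f} years (ignoring self-discharge and active bursts)")
```

In practice self-discharge and the active bursts dominate long before the arithmetic limit, but the headroom is what makes decade-scale deployments thinkable.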

Markus Levy, President and Founder of EEMBC, credits Atmel’s low-power feat to its proprietary picoPower technology and the company’s low-power expertise in utilizing DC-DC conversion for voltage monitoring. Atmel’s picoPower technology employs flexible clocking options and short wake-up time with multiple wake-up sources from even the deepest sleep modes.


ULPBench aims to provide developers with a reliable methodology to test MCUs

In other words, Atmel has taken the low-power game beyond architectural improvements to the CPU, optimizing nearly every peripheral to operate in standalone mode using a minimum number of transistors to complete the given task. Most low-power ARM chips simply disable the clock to various parts of the device. The SAM L21 microcontroller, on the other hand, removes power from those parts of the chip entirely, so the thousands of transistors they contain draw no leakage current.

Here is a brief highlight of Atmel’s low-power development efforts that now encompass almost every peripheral in an MCU device:

Sleep Modes
Sleep modes not only gate away the clock signal to stop switching consumption, but also remove the power from sub-domains to fully eliminate leakage. Atmel also employs SRAM back-biasing to reduce leakage in sleep modes.

Consider a simple application where the temperature in a room is monitored using a temperature sensor with the analog-to-digital converter (ADC). In order to reduce the power consumption, the CPU would be put to sleep and wake up periodically on interrupts from a real-time counter (RTC). The measured sensor data is checked against a predefined threshold to decide on further action. If the data does not exceed the threshold, the CPU will be put back to sleep waiting for the next RTC interrupt.
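The loop described above can be sketched as a small simulation. The readings and threshold below are invented for illustration; on real silicon each iteration would be an RTC interrupt followed by a wait-for-interrupt sleep, not a Python loop:

```python
# Simulation of the RTC-driven sample-and-compare loop described above.
# Each list entry stands for one reading taken on an RTC wake-up interrupt.

THRESHOLD = 30.0                          # degrees C, arbitrary for the demo

def run_monitor(readings, threshold=THRESHOLD):
    """Return (wakeups, alarms): every tick wakes the CPU, but further
    action is only taken when the reading exceeds the threshold."""
    wakeups, alarms = 0, 0
    for temp in readings:                 # one reading per RTC interrupt
        wakeups += 1                      # CPU leaves sleep mode
        if temp > threshold:
            alarms += 1                   # threshold exceeded: take action
        # otherwise: go straight back to sleep until the next interrupt
    return wakeups, alarms

print(run_monitor([21.5, 22.0, 31.2, 24.8]))
```

Note that even in this scheme the CPU wakes on every interrupt just to do the comparison; the Event System described below removes that cost too.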

Sleepwalking
Sleepwalking is a technology that enables peripherals to request a clock when they need to wake up from sleep modes and perform tasks, without having to power up the CPU, Flash and other support systems. For instance, Atmel’s ultra-low-power capacitive touch-sensing peripheral can run in all operating modes and supports wake-up on a touch.

For the temperature monitoring application mentioned above, this means that the ADC’s peripheral clock will only be running while the ADC is converting. When the ADC receives the overflow event from the RTC, it requests its generic clock from the generic clock controller, and the peripheral clock stops as soon as the ADC conversion is completed.

Event System
The Event System allows peripherals to communicate directly without involving the CPU and thus enables peripherals to work together to solve complex tasks using minimal gates. It allows system developers to chain events in software and use an event to trigger a peripheral without CPU involvement.

Again, taking the temperature monitor as a use case, the RTC must be set to generate an overflow event, which is routed to the ADC by configuring the Event System. The ADC must be configured to start a conversion when it receives an event. By using the Event System, an RTC overflow can trigger an ADC conversion without waking up the CPU. Moreover, the ADC can be configured to generate an interrupt if the threshold is exceeded, and only that interrupt will wake up the CPU.
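A toy model makes the benefit concrete. The code below (illustrative names only, not Atmel's API) counts CPU wake-ups with and without the event routing just described:

```python
# Toy model of the Event System flow described above: the RTC overflow
# event triggers an ADC conversion directly, and the CPU is only woken
# when the ADC's threshold interrupt fires. All names are illustrative.

THRESHOLD = 30.0

def event_system_run(readings, threshold=THRESHOLD):
    """RTC -> ADC via the Event System; CPU wakes only on threshold."""
    cpu_wakeups = 0
    for temp in readings:          # each reading = one RTC overflow event
        # the ADC converts autonomously; no CPU involvement here
        if temp > threshold:       # ADC compare raises an interrupt
            cpu_wakeups += 1
    return cpu_wakeups

def interrupt_only_run(readings, threshold=THRESHOLD):
    """Without the Event System the CPU wakes on every RTC interrupt."""
    return len(readings)

samples = [21.5, 22.0, 31.2, 24.8, 23.3, 29.9]
print(event_system_run(samples), "vs", interrupt_only_run(samples))
```

With the Event System, the CPU wakes once in six samples instead of six times: the comparison work has moved into the peripheral.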


SAM L21 MCU board

Low Power MCU Use Case

Paul Rako describes a sensor monitor in a recent post on Atmel’s Embedded Design World blog. In the post, titled “The SAM L21 pushes the boundaries of low power MCUs”, Rako writes about a sensor monitor that is asleep 99.99 percent of the time, waking up once a day to take a measurement and send it wirelessly to a host. Such tasks can be conveniently handled by an 8-bit device.

However, IoT applications involve protocol stacks and the number crunching that goes with them, and that requires a faster ARM-class 32-bit chip. So, for battery-powered IoT applications, Rako makes the case for a 32-bit ARM-based chip that can wake up, do its thing, and go back to sleep. If a high-current chip wakes up 10 times faster but uses twice the power, it will still use less energy and less charge than the slower chip.
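The arithmetic behind that claim is worth spelling out, since charge is just current multiplied by time. With made-up numbers matching the 10x-faster, 2x-current scenario:

```python
# Energy/charge comparison: a chip that draws twice the current but
# finishes the task ten times faster uses one-fifth the charge.
# The absolute numbers are invented; only the ratios matter.

slow_current_ma, slow_time_ms = 1.0, 100.0     # hypothetical 8-bit MCU
fast_current_ma, fast_time_ms = 2.0, 10.0      # hypothetical 32-bit MCU

slow_charge = slow_current_ma * slow_time_ms   # mA x ms = microcoulombs
fast_charge = fast_current_ma * fast_time_ms

print(fast_charge / slow_charge)               # the faster chip wins
```

The faster chip draws five times less charge per task, before even counting the extra sleep time it earns by finishing early.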

Next, Rako presents a sensor-fusion hub as a case study: instead of using the radio chip to send the data from each sensor separately, the device saves power by having the ARM-based microcontroller do the math and pre-processing to combine the raw data from all sensors, then assemble the result into a single compact chunk of data.

Atmel has scored an important design victory in the ongoing low-power game now prevalent in the rapidly expanding IoT market. Atmel already boasts credentials in the connectivity and security domains—the other two key IoT building blocks. Its connectivity solutions cover multiple wireless arenas—Bluetooth, Wi-Fi, ZigBee and 6LoWPAN—to enable IoT communications.

Likewise, Atmel’s CryptoAuthentication devices come with protected hardware key storage and are available with SHA256, AES128 or ECC256/283 cryptography. The IoT triumvirate of low power consumption, a broad connectivity portfolio and crypto engineering puts Atmel in a strong position in the promising new IoT market, which increasingly demands low-power MCU portfolios matched with high performance.

Also see:

Atmel’s New Car MCU Tips Imminent SoC Journey

Atmel’s Ready to Wear Sensor Hubs

Majeed Ahmad is the author of the books Smartphone: Mobile Revolution at the Crossroads of Communications, Computing and Consumer Electronics and The Next Web of 50 Billion Devices: Mobile Internet’s Past, Present and Future.


embARC for a Free Ride
by Eric Esteve on 03-30-2015 at 3:27 am

It’s probably the first time Synopsys has offered such direct access to free and open-source software. The goal is to support customers developing application code for IoT and embedded devices based on the ARC IP core family. The designer can select the real-time operating system (RTOS) that best meets the system requirements, unlike with ARC’s well-known competitor. The open-source software also includes drivers, core services and middleware. Because the dynamic power consumption of the ARC EM processors can be as low as 3 µW/MHz, the ARC IP core family is a preferred solution for IoT. embARC includes commonly used components for the Internet of Things (IoT) such as the MQTT and CoAP internet protocols as well as the FreeRTOS and Contiki operating systems, helping to jump-start IoT development.


Synopsys has decided to offer a dedicated web portal (www.embarc.org) allowing ARC developers to freely download examples and documentation. The portal is also a forum where users can interact and help each other, and, even more, a central repository giving easy access to tools and embedded software that run on ARC EM processors. As far as I know, it’s the first time one of the big three EDA/IP vendors has offered such a central repository. The central repository is a concept shared by large semiconductor companies as well as open-source software users and developers, who often believe their work should be shared with the design community for free. This approach can help a design team starting to develop embedded software for an IoT application to benefit from ready-to-use (open-source) software functions, RTOSes and drivers, and greatly accelerate time-to-market (TTM).

The embARC software can be split into middleware, libraries running on an RTOS, drivers and core services. The design team can freely download middleware such as IoT communications, networking, file system or GUI components, the standard toolchain and cloud libraries (C library, math library or Xively library), and select the RTOS of its choice. The benefits are multiple: cheaper development, faster TTM and the guarantee that the most appropriate solution will be selected to support embedded software development. An ARC EM-based hardware development platform is a must-have to explore various design tracks, optimize, and eventually validate the software.

The ARC EM Starter Kit from Synopsys helps get software development going quickly, with plenty of examples to start from. The development board includes timers, watchdog timers, UARTs, SPI, I2C, micro USB, an SD card slot, a 20-pin JTAG connector and more. The ARC MetaWare toolkit complements this hardware offering, providing a rich DSP software library and a C/C++ compiler as well as GNU tools support.


Availability and Resources
The embARC Open Software Platform is available now, at no cost at www.embarc.org.

embARC.org is a dedicated website that provides developers centralized access to free and open-source software, drivers, operating systems and middleware supporting the embARC Open Software Platform. The website also provides documentation and a forum-based community where developers can share their resources, expertise and code to help speed deployment of ARC-processor based embedded systems.

The ARC EM Starter Kit and the MetaWare Development Toolkit are also available now from the websites below:

From Eric Esteve from IPNEST


Life Without EUV: SPIE Day 2
by Scotten Jones on 03-29-2015 at 11:00 pm

I previously published a summary of day 1 of SPIE and I wanted to follow up with observations from successive days.

SPIE, the international society for optics and photonics, was founded in 1955 to advance light-based technologies. Serving more than 256,000 constituents from approximately 155 countries, the not-for-profit society advances emerging technologies through interdisciplinary information exchange, continuing education, publications, patent precedent, and career and professional growth. SPIE annually organizes and sponsors approximately 25 major technical forums, exhibitions, and education programs in North America, Europe, Asia, and the South Pacific. www.spie.org
Tuesday 2/24 – day 2

Optical lithography with and without NGL for single-digit nanometer nodes – Burn Lin, TSMC
The paper began with a discussion of the growth in cost per node, from a 1.15x increase per node to 1.4x for the latest node. Issues with overlay were really the heart of this paper. The author discussed the many issues that can lead to poor overlay: warpage, back-side particles, wafer nonlinearity due to uneven heating, mask flatness and particles, lens heating and others.

The move to multi-patterning is making overlay an even bigger issue. Overlaying multi-patterning on single patterning, or multi-patterning with three photo-and-etch steps on multi-patterning with two photo-and-etch steps, creates higher-order overlay issues.

The author sees ArFi single exposure as limited to approximately an 80nm pitch. Multi-patterning gains resolution by pitch splitting but creates cost and overlay issues. He sees EUV with an NA > 0.33 as difficult and expensive to achieve, and k1 < 0.4 as also difficult, yielding a pitch of 32.4nm that is marginal for 7nm and may need some double patterning.
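Those pitch numbers follow from the standard scaling relation half-pitch = k1 × λ / NA. A quick check with the EUV wavelength (a sketch of the arithmetic, not the author's exact calculation):

```python
# Minimum pitch from the Rayleigh-style scaling relation
# half-pitch = k1 * lambda / NA, so pitch = 2 * k1 * lambda / NA.

LAMBDA_EUV_NM = 13.5            # EUV wavelength

def pitch_nm(k1, na, wavelength_nm=LAMBDA_EUV_NM):
    return 2 * k1 * wavelength_nm / na

# k1 = 0.4 at NA = 0.33 gives a pitch of ~32.7 nm, close to the
# 32.4 nm quoted in the talk (which implies a k1 just under 0.4).
print(f"{pitch_nm(0.4, 0.33):.1f} nm")
```

This is why the talk treats both a higher NA and a lower k1 as the only two levers for single-exposure EUV, and why both are hard.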

The key to solving the overlay issue, according to the author, is to go to a single exposure and etch at all layers. A table was presented indicating that the least expensive option at N7 is single exposure with multi-beam e-beam (MEB). Interestingly, this opinion appears to contrast with the more mainstream TSMC approach of using EUV.

The author also discussed directed self-assembly (DSA) as a very useful option to reduce costs, but one that still has CD-uniformity, placement and defect issues.

Also read: EUV Makes Progress and Other Observations From SPIE 2015

Evolving optical lithography without EUV – Donis G. Flagello, Nikon
Nikon has stopped working on EUV, leaving it to ASML. I came to this paper very interested to hear Nikon’s roadmap for moving forward without EUV. I have to say this was one of the more disappointing papers I attended at SPIE: long on quirky “Latin-like” nomenclature but short on an actual roadmap.

Relative costs for 193i double and triple patterning versus EUV were presented, showing EUV to be more expensive. Continued progress in ArFi scanner throughput was noted, with additional increases in wafers per day forecast through 2018. Whereas 3,500 wpd was common in 2010, today 4,000 wpd is common, and by 2018 6,000 wpd is forecast.

The author also offered that gains for 450mm are better for ArFi than for EUV.

There was a fair amount of discussion of resolution enhancement techniques being used in optical microscopy and the potential to apply them to lithography. Immersion lithography, after all, is a lithography application of immersion techniques that have been used in microscopy for decades. This, however, struck me as a proposal for a research program rather than a practical roadmap.

Integration of NAND flash memory ISO multilayer etching to improve productivity – Chang-kwon Oh, SK Hynix
3D NAND is an area of intense interest for me and I am currently working on a blog discussing the impact I expect it to have on the industry.

Some of the key takeaways from this paper were that 2D NAND costs take off at the 1x, 1y and 1z generations. Cell-to-cell crosstalk is also an increasing issue for 1x and subsequent generations.

SK Hynix expects to introduce 3D NAND in 2015.

3D NAND offers improved density, write speed, endurance and power efficiency. The trade-offs are productivity, yield and complexity.


The Earth is Not Flat; Neither is IP
by Paul McLellan on 03-29-2015 at 7:00 pm

Chip design is largely about assembling pre-designed IP, either developed by other groups in the same company, brought in from a third party, or occasionally developed within the SoC design group itself. But that makes it sound like a bunch of blocks linked together with some interconnect; of course, another important aspect of real-world SoCs is hierarchy. Just like the real world, the SoC world is not flat.

For example, an SoC might instantiate two Ethernet ports, but each port actually consists of an Ethernet PHY and an Ethernet MAC. And the MAC might consist of bus interfaces, transmitter control, receiver control and more. Or an SoC might contain several USB ports, but each one consists of a USB PHY and a USB controller.


There are several advantages of hierarchical IPs:

  • Abstraction: It is much easier to integrate fully capable functional blocks into your design than a laundry list of individual IPs. This is much quicker and more intuitive
  • Compatibility: Does version 10 of the USB controller really work with version 7 of the PHY? Should I use the latest version of both blocks and hope for the best? These kinds of questions can be eliminated by simply choosing the USB subsystem, a proven configuration of versions that work with each other
  • Dependency management: Bringing in a subsystem automatically brings in all the dependencies that are needed. There is no need for cumbersome dependency discovery, since the subsystem pulls in everything it requires
  • Discovery: Looking at the hierarchies in which a particular IP of interest is used can help a team more easily discover the various components available to it. For example, if the USB PHY is used in a hierarchy that contains other I/O interface blocks, it is very useful to be able to discover the context in which the block is frequently used. This can aid the design process immensely

    Methodics’ IP management suite ProjectIC has been built from the ground up to handle hierarchical IP subsystems. Each “IP” can be either a genuine standalone design object or a complete hierarchy.
    ProjectIC has many features that support hierarchical IPs:

    • Every IP can have a list of resources—other IPs that are in the system. These resources can also have resources of their own, and so on. Building a hierarchical IP in ProjectIC is as simple as including another IP as a resource. The tool automatically figures out any other resources that are implied by including this IP
    • Building a workspace with a hierarchical IP is handled seamlessly. All the resources that are part of the hierarchy are automatically instantiated in the workspace
    • Resource locations inside a workspace can be controlled with fine granularity
    • The tool automatically checks for circular dependency between resources, resource conflicts etc
    • Container IPs can be used to create hierarchy levels and compatible bundles
    • Each resource in a hierarchy can be independently moved from one version to another. This can be done in multiple different ways—from the command line, from the GUI etc—allowing for maximum flexibility in interacting with resources
    • Several other features like hierarchical releases, property inheritance, IP hierarchy traversal, tree views etc are also fully supported
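The dependency behavior in the list above (transitive resource pull-in, circular-dependency checks) boils down to a depth-first graph walk. The sketch below is generic and illustrative, not ProjectIC's actual API; the IP names are invented:

```python
# Generic sketch of hierarchical-IP resource resolution: a depth-first
# traversal that collects transitive dependencies and rejects cycles.
# The IP names and the dict-based catalog are invented for illustration.

def resolve(ip, catalog, _stack=None, _seen=None):
    """Return all IPs needed for a workspace containing `ip`,
    dependencies first, raising ValueError on a circular dependency."""
    stack = _stack if _stack is not None else []   # current descent path
    seen = _seen if _seen is not None else []      # resolved, in order
    if ip in stack:
        raise ValueError(f"circular dependency: {' -> '.join(stack + [ip])}")
    if ip in seen:
        return seen                                # already resolved
    stack.append(ip)
    for resource in catalog.get(ip, []):           # resources of this IP
        resolve(resource, catalog, stack, seen)
    stack.pop()
    seen.append(ip)                                # post-order: deps first
    return seen

catalog = {
    "usb_subsystem": ["usb_phy", "usb_ctrl"],
    "usb_ctrl": ["ahb_if"],
}
print(resolve("usb_subsystem", catalog))
```

Requesting the subsystem alone is enough to populate the whole workspace, which is exactly the "no cumbersome dependency discovery" point made above.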

    On a different topic, earlier this month the Methodics Industry Survey results were pulled together. The big-picture results: more than 90% of respondents said IP is reused within their design projects, and yet 31.7% are still managing IP manually through spreadsheets, and another 39.7% are managing IP through home-grown solutions. Additionally, 28.3% of respondents said they have no IP defect-tracking solution in place.

    This sort of result reminds me of the story of two Victorian-era shoe-salesmen being sent to Africa. The first replies “no business here, nobody wears shoes.” The second salesman replies “huge opportunity, nobody wears shoes.”

    Only 30% of respondents use a 3rd party IP management solution. If you are not one of them, Methodics has some shoes for you.

    Full survey results are here


    Passage of Time with Watches
    by Pawan Fangaria on 03-29-2015 at 7:00 am

    During my childhood in my native place in India, although there were good watches around from Seiko, Citizen and some Indian companies, I used to see old men and women who never used a watch yet were fairly accurate in perceiving time just by watching the position of the sun or moon, or the shadow cast by a certain object. Well, if we delve into what we call ‘sundials’, used by the Egyptians, Greeks, Chinese and others back in 1500 B.C. or before, they worked on the principles of movement of celestial bodies. They were used not only to observe the time of day, but also to record days, weeks, months and years, and to determine the arrival and departure of the seasons. The credit for inventing the sundial goes to the Egyptians.


    Here, I would like to recall a memory and illustrate an Indian sundial which I had personally seen in Delhi. Back then, I had just passed my 10th class and had visited Delhi as a tourist. Above is a picture of the ‘Samrat Yantra’, popularly known as ‘Jantar Mantar’, in Delhi; there are some more structures attached to it. Although I couldn’t understand the complex geometry used to calculate time from the shadows on these structures, I was told that the accuracy was within seconds. They could find the shortest and longest days of the year. No wonder even today we have the concept of ‘daylight saving’ to adjust our watches to alarm us at the right time. We have evolved a great deal in terms of watches, but the guiding principles for determining time are the same, based on the movement of the Earth, Sun, Moon and perhaps other celestial bodies!

    The evolution of the watch industry has been very long, if not the longest. And interestingly, many older-generation watches are still being used. The first spring-driven mechanical clock came in the 15th century and was used as a stationary timepiece. Then pocketable watches, table watches and stationary pendulum clocks, still driven by springs, came between the 15th and 17th centuries. The ‘Nuremberg Egg’ shown in the picture was made in 1510 by Peter Henlein, who is considered the inventor of the watch. With every new generation, watches improved in timekeeping accuracy. In 1761, the British government rewarded John Harrison with 20,000 pounds for improving the accuracy of clocks after the “Scilly naval disaster of 1707”, which was due to inaccuracy in calculating the positions of the warships.

    Well, if we look at the early evolution of watches, it appears to have taken centuries for small incremental updates. The first wristwatch appeared in the 18th century from the houses of Breguet and Patek Philippe. And then Rolex took another century to make waterproof watches. It goes on and on with improvements such as automatic self-winding mechanisms, battery-driven and quartz-driven movements, and additional functions such as date, calendar, and so on. We could also see some intelligent functions done through watches, as in the old James Bond movies! However, the basic mechanism was mechanical and the main function of all those watches was timekeeping. It seems monotonous, with so many centuries spent on a particular type of machine with time as its only function, doesn’t it? So, naturally, style and jewellery became the quotients embedded in high-end watches. Many high-end watch brands carried over from the 17th and 18th centuries are still with us today, joined by many new brands. Their primary function is to tell time, but they are worn conspicuously to reflect the personalities of their owners.

    A shift in watch technology came in the 1970s and 1980s when electronic digital watches appeared with LCD and LED displays. The first digital watch, the Pulsar, was introduced by Hamilton in 1970, and then other companies such as Casio, Seiko, Intel and Texas Instruments jumped into the fray. Run on quartz, they brought a revolution in the watch industry, sending the old mechanical watches out of business. However, indicating time remained the prime function of watches, although a few built-in databases, dictionaries and the like were added. Time displayed on LCD and LED screens became so ubiquitous that it appeared on everything: your pen, key ring, car dashboard, office desk, computer screen, and so on; time was displayed on most noticeable objects. That trend has remained to date. The electronic LCD watches eventually became boring and their business soon fell out of favour.

    By the 1990s, it was perhaps felt that more functions needed to be added to watches to keep them functionally relevant. With the arrival of mobile technology, the most obvious candidates were phone communication and some amount of computing done through the watch. Multiple technology companies including IBM, AT&T and Samsung started R&D in this area, with AT&T patenting a wristwatch phone in 1993. In 1998, Prof. Steve Mann of Electrical and Computer Engineering at the University of Toronto invented and designed the world’s first Linux wristwatch. For this work, Prof. Mann received the honour of “The Father of Wearable Computing” at IEEE ISSCC 2000 in February 2000.

    Soon after this invention, in 1999, Samsung launched the world’s first commercial watch phone, the SPH-WP10. It had an integrated speaker and microphone, about 90 minutes of talk time and a monochrome LCD screen. Then IBM developed a wristwatch running Linux 2.2 with Bluetooth, 8MB of memory, an accelerometer and a fingerprint sensor. In the following years several other watch phones were launched by different companies, e.g. Fossil’s Wrist PDA, Microsoft’s SPOT, Samsung’s next watch phone the S9110, Sony Ericsson’s MBW series, and so on. However, the technology appeared too clumsy for watch phones and took a backseat.

    What was happening in parallel was the rise of mobile phones. With bigger screens than watches, mobile phones were perceived to be better platforms for integrating other functions: time, computing, music, video, phone calls, conferencing, and so on. Mobile phones swept into the market, re-invented themselves as smartphones and stole the show. They snatched the market not only from the budding watch phones but also from PCs and notebooks. Watch phones were again left in the lurch.

    However, time doesn’t stand still; watches were yet to see their day. The mobile phone industry, with several functions integrated into its devices, gave a significant push to miniaturization in semiconductor ICs and SoCs, and built a vast mobile network for data exchange. This ecosystem built by mobile phones offered a ripe platform for watches to re-invent themselves in the modern context and try their luck!

    The actual journey of watches into the realm of what we call the ‘smartwatch’ began in 2012 when Pebble was unveiled. This smartwatch is compatible with iOS (the operating system used in the iPhone) and Android (the Google-driven OS used in other smartphones) devices. It’s laden with features such as call alerts, SMS, iMessage, calendar, activity tracking, gaming, and so on. Interestingly, Pebble generated the initial funds for the project through crowd-funding. Today we have more advanced smartwatches, with the Apple Watch to the fore and many others in the fray. And I believe the time for smartwatches has arrived.

    If we look at the overall journey of watches through the passage of time, not even from the sundial but just from a spring-driven watch in the 15th century to a smartwatch in the 21st century, what we have covered in the last three decades is many times more than what we covered in the previous five centuries! The world is moving much faster now!


    Intel to Buy Altera?
    by Paul McLellan on 03-28-2015 at 1:05 pm

    You may already have heard today’s big news in the semiconductor fabless ecosystem: Intel is apparently in talks to buy Altera. I embarrassed myself predicting that Samsung was in talks to buy Freescale (which, of course, it might have been, but NXP won that particular race). But this time it is definite enough that the WSJ covered it too. Altera had a market cap of $10.4B, so this is a big acquisition, up there with the aforementioned NXP/Freescale merger.

    The Wall Street types (by which I mean people who work in finance, not the WSJ people) are all trying to predict the effect this will have on TSMC, since they assume that if the deal is done tomorrow morning, Intel will be making all Altera FPGAs from, maybe, tomorrow evening onwards.

    But here is the reality. Altera used TSMC down to 20nm. Then it famously switched to Intel’s foundry for 14nm. It is right now working on taping out those first parts. On their earnings call they admitted that this was slipping from Q1 to Q2 (to be fair, Xilinx said the same thing about their parts in TSMC 16FF+). I would not be in the least bit surprised to find that they slip further still, since I have heard that the program is not going smoothly. Switching from a foundry that truly knows what it is doing, TSMC, to one that is just starting in the business, Intel, was never going to be easy.

    Once those parts tape out (let’s be generous and keep to the Q2 estimate) they need to be prototyped, and then early production parts shipped to customers to design into things like LTE base stations or routers; then those systems need to go into manufacturing and reach volume, and only then will Altera get volume orders. That will be in 2017 or 2018. Until then, every FPGA that Altera ships will be manufactured by TSMC. If that sounds a little unlikely, it is just the same as Intel’s LTE modem line, which is also manufactured by TSMC rather than in-house at Intel, and is unlikely to move until 2017 it seems. And that is a part they would dearly love to bring inside, since it means they could then integrate it with their application processors for tablets and mobile.

    In the meantime, every part shipped by Xilinx will also be manufactured by TSMC (or maybe there are some very old parts still shipping from UMC, Xilinx’s old foundry until 28nm) and every Lattice part will ship from UMC.

    This makes it all sound very important but actually the number of wafers that FPGA companies need is not that high compared to anything going into mobile devices.

    In my opinion this is really negative for Intel’s foundry business. The implication is that Intel cannot compete in foundry without owning its customers. The only other publicly announced foundry partners for Intel I know of are Tabula (which shut down recently) and Achronix. Both have Intel as an investor, so Intel partially owns those customers too. They were also both in the FPGA business and so competed with Altera. I’m guessing that Achronix will not be happy if this happens.

    See also Tabula Closes Its Doors

    Another wrinkle: Altera has a close relationship with ARM. In fact its next-generation products (the ones that Intel will make) contain ARM processors. That was unlikely enough when Intel was the foundry but Altera was the company designing, marketing and selling the parts. If Intel buys Altera, then Intel will be designing, marketing and selling ARM parts. Given Intel’s obsession with Atom (see Intel’s failed mobile strategy) I wonder how that will play out.

    See also Pigs Fly. Altera Goes with ARM on Intel 14nm

    I talked to Xilinx, but after thinking for a bit they decided not to comment. And kudos to Kevin Morris at EE Journal for his piece last year, When Intel Buys Altera.


    CEVA Eyes DSP Scale in China’s $65 LTE Handsets
    by Majeed Ahmad on 03-27-2015 at 8:00 pm

    China Mobile’s bid to go for 3-mode Long-Term Evolution (LTE) has led to the first major breakthrough, $65 LTE handsets, and the baseband and application processors provided by chipmakers like Leadcore Technology, MediaTek and Spreadtrum Communications all have one thing in common: DSP cores from CEVA Inc.

    The advent of $65 LTE handsets in the world’s largest mobile phone market could reinvigorate the smartphone boom. Here, it’s important to note that signal processing is the centerpiece of two of the major building blocks of LTE technology: orthogonal frequency-division multiplexing (OFDM) and multiple-input multiple-output (MIMO). Not surprisingly, therefore, DSP core supplier CEVA views China’s expected 300 million new LTE subscribers in 2015 with a lot of hope and excitement.
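    To see why OFDM is such a natural fit for a DSP core, note that an OFDM transmitter is essentially an inverse DFT mapping data symbols onto orthogonal subcarriers, plus a cyclic prefix to absorb multipath delay. The sketch below uses a naive O(N²) DFT in pure Python for clarity; a real baseband DSP would of course use an FFT:

    ```python
    import cmath

    def idft(symbols):
        """Naive inverse DFT: map per-subcarrier symbols to a time-domain OFDM symbol."""
        n = len(symbols)
        return [sum(x * cmath.exp(2j * cmath.pi * k * t / n) for k, x in enumerate(symbols)) / n
                for t in range(n)]

    def dft(samples):
        """Naive forward DFT: recover per-subcarrier symbols at the receiver."""
        n = len(samples)
        return [sum(x * cmath.exp(-2j * cmath.pi * k * t / n) for t, x in enumerate(samples))
                for k in range(n)]

    def ofdm_modulate(symbols, cp_len):
        """IDFT plus cyclic prefix: copy the tail of the symbol in front of it."""
        time = idft(symbols)
        return time[-cp_len:] + time

    def ofdm_demodulate(samples, cp_len):
        """Strip the cyclic prefix and transform back to subcarrier symbols."""
        return dft(samples[cp_len:])

    # QPSK symbols on 8 subcarriers, round-tripped through the OFDM chain
    tx = [complex(i, q) for i, q in [(1, 1), (-1, 1), (-1, -1), (1, -1)] * 2]
    rx = ofdm_demodulate(ofdm_modulate(tx, cp_len=2), cp_len=2)
    assert all(abs(a - b) < 1e-9 for a, b in zip(tx, rx))
    ```

    The heavy lifting, large FFTs at LTE symbol rates plus MIMO matrix math, is exactly the workload a vector DSP core is built for.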

    The CEVA-XC soft-modem for LTE baseband chips

    In 2014, there was an interesting twist to China’s LTE story when China Mobile decided to reduce the LTE format specifications to 3-mode products. Within the LTE standard domain, 3-mode products support the GSM, TD-SCDMA and TD-LTE cellular standards for 2G, 3G and 4G wireless communications, respectively. These wireless standards are predominantly used in China, so inevitably, 3-mode LTE is more suitable for China’s chipmakers and smartphone manufacturers.

    The international version—5-mode LTE—supports GSM, W-CDMA, TD-SCDMA, TD-LTE and FDD LTE and favors chipmakers like Qualcomm who have a global footprint. Apparently, China Mobile’s bet to stick with TD-LTE has started to show results with the launch of inexpensive smartphones. And China Mobile, one of the three large cellular operators in mainland China, is expected to consume 50 percent to 60 percent of these 3-mode LTE phones.

    CEVA’s Smartphone Sockets

    In January 2015, the trade media in China reported Xiaomi launching a $65 LTE phone that sported a mobile chipset from Leadcore Technology. Leadcore, which uses DSP cores from CEVA, has actually replaced Qualcomm’s Snapdragon 410 that Xiaomi used in its earlier Redmi 2 handset. Qualcomm uses its proprietary Hexagon DSP cores in Snapdragon chips.

    Likewise, TCL’s upcoming LTE phone for China Mobile is reported to cost $65 and is based on MediaTek’s quad-core MT6582 application processor and LTE MT6290 modem chip. Again, MediaTek licenses CEVA-X DSP cores and subsystems from CEVA. Next up, Spreadtrum, another licensee of CEVA DSP cores, is supplying SC9620 LTE baseband chips for Coolpad and Lenovo handsets. Spreadtrum claims to have shipped 30 million CEVA-powered baseband chipsets.

    Lenovo a388t phone features a DSP core from CEVA

    Winning DSP sockets in China’s volume 4G wireless market could be a vital breakthrough for CEVA, but the DSP core licensor is not putting all its eggs in China’s 3-mode LTE basket. CEVA has also scored an important design win in Samsung’s Galaxy phones.

    Samsung is trying to reinvigorate its Exynos SoC with the help of LTE application processors. According to Forward Concepts, which focuses on DSP-centric wireless communications market research, Samsung’s quad-core Exynos ModAP is the Korean firm’s first-generation integrated LTE modem-application processor solution with multimode LTE connectivity. The second is the Exynos 300 modem, which supports LTE-Advanced. Both modem chips are based on CEVA DSP cores.

    CEVA claims its DSP cores shipped in more than 1 billion chips in 2013, and around 40 percent of these chips went into mobile phones. Wireless baseband chips inside smartphones are a key volume market for CEVA, and China’s great baseband game could well bring the next big growth opportunity for the DSP licensor. According to industry research firm Strategy Analytics, CEVA licensees MediaTek and Spreadtrum rank second and third, respectively, after Qualcomm in the global mobile baseband chip market.

    About DSP Socket in Smartphones

    DSP cores are now predominantly used in system-on-chip (SoC) solutions for the communication and consumer markets, and here mobile phones constitute the largest segment. All baseband chips carry one or two DSP cores. On the baseband side, the voice signal needs to be digitized and compressed, modulated onto a wireless signal, transferred through the wireless infrastructure to the other end of the call, and decompressed again.

    According to a recent newsletter from Forward Concepts, even application processors have now started to deploy DSP functionality, either as co-located DSP cores or as SIMD extensions to the CPU instruction set. On the application processing side of a mobile phone, data files containing video, images and audio need to be decoded and sent to the device’s screen, speakers and headset—all very specific DSP tasks.


    CEVA’s DSP solution for mobile handsets

    The CEVA DSP cores allow both hybrid and soft-modem approaches to developing mobile baseband chips. For the hybrid approach, which mixes hardwired design with a programmable processor, the CEVA-X family of multi-purpose DSP cores enables a high level of concurrent instruction processing as well as low power consumption.

    The CEVA-XC family of DSP cores, built on the CEVA-X processors, offers a complete soft-modem implementation, supporting multiple wireless standards concurrently on the same chip in software. They use a single engine for all wireless processing and thus eliminate the need for multiple baseband co-processors. That way, the CEVA-XC DSP cores reduce power consumption and die size related to additional memories, data buffers and overall data traffic.
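    The soft-modem idea—one programmable engine running each standard’s PHY in software instead of a dedicated co-processor per air interface—can be caricatured in a few lines. This is purely an illustrative sketch; the standard names are real, but the processing stubs are obviously hypothetical stand-ins for the actual algorithms:

    ```python
    # One "engine" dispatches per-standard PHY routines in software,
    # rather than routing each air interface to its own hardware co-processor.
    def process_lte(samples):
        return ("LTE", len(samples))      # stand-in for OFDM demod, turbo decoding, ...

    def process_wcdma(samples):
        return ("W-CDMA", len(samples))   # stand-in for despreading, rake combining, ...

    def process_gsm(samples):
        return ("GSM", len(samples))      # stand-in for GMSK demod, equalization, ...

    PHY_ROUTINES = {"LTE": process_lte, "W-CDMA": process_wcdma, "GSM": process_gsm}

    def soft_modem(bursts):
        """Run whichever standard each burst belongs to on the same engine."""
        return [PHY_ROUTINES[std](samples) for std, samples in bursts]

    # Two bursts from different standards handled by one engine
    print(soft_modem([("LTE", [0] * 1200), ("GSM", [0] * 156)]))
    ```

    Because only one engine (and one set of memories and buffers) is instantiated, silicon area and data traffic shrink, which is exactly the power and die-size saving the article describes.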

    Also see CEVA and LTE: Happy Together

    Majeed Ahmad is the author of the books Age of Mobile Data: The Wireless Journey To All Data 4G Networks and Essential 4G Guide: Learn 4G Wireless In One Day.


    Full-chip Multi-domain ESD Verification

    Full-chip Multi-domain ESD Verification
    by Paul McLellan on 03-27-2015 at 7:00 am

    ESD stands for electrostatic discharge and deals with the fact that chips have to survive in an electrically hostile environment: people, testers, assembly equipment, shipping tubes. All of these can carry electric charge that has the “potential” (ho-ho) to damage the chip irreversibly. Historically this was a problem only for I/O pads, which had to take care to dump the unwanted influx of charge without harming any of the on-chip transistors. There are three models for the aggressor, almost always identified just by their acronyms: human body model (HBM), machine model (MM) and charged device model (CDM). They all inject charge in various well-specified ways, using circuits involving capacitors, resistors and inductors.
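    The HBM, for instance, is standardized as a 100 pF capacitor discharged through a 1.5 kΩ series resistor into the pin. Ignoring tester parasitics, the discharge is a simple RC exponential, which gives a feel for the current scales involved:

    ```python
    import math

    R = 1.5e3      # HBM series resistance, ohms
    C = 100e-12    # HBM storage capacitance, farads
    V = 2000.0     # 2 kV pre-charge, a common HBM test level

    def hbm_current(t):
        """Ideal HBM discharge current: I(t) = (V/R) * exp(-t / (R*C))."""
        return (V / R) * math.exp(-t / (R * C))

    tau = R * C                 # time constant = 150 ns
    peak = hbm_current(0.0)     # peak current = V/R, about 1.33 A at 2 kV
    print(f"tau = {tau * 1e9:.0f} ns, peak = {peak:.2f} A")
    ```

    Real HBM testers also include series inductance that shapes the rise time, so this ideal RC model only shows the order of magnitude: over an amp of peak current lasting on the order of 100 ns, which the protection network must steer safely to the rails.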

    In modern chips, with thinner gate oxides and multiple power domains, ESD is not an issue confined to the pad-ring. ESD protection devices need to be included in the core. Of course many chips are bumped and in that case the pads are often not confined to the “pad-ring” since there is none, but even chips where the pads are at the edge of the chip can suffer internal failures from ESD. The ultimate way to check ESD is with a real chip and real ESD test equipment, but obviously, except in the case of a test-chip, that is too late to address any issues.

    ESD cells and devices such as diodes, transistors, clamps, and so on, consist of a large number of elementary devices that are interconnected by metal layers to provide sufficient ESD protection. Detailed understanding of the current flow and potential distributions in these interconnects and devices is important to optimize the device layouts and to ensure a balanced current distribution, low resistance, and efficient connection of devices to power nets. Standard parasitic extraction and simulation approaches are inadequate to describe these effects.

    Silicon Frontline’s ESRA (ElectroStatic Reliability Analysis) fills this gap and provides a full-chip ESD analysis solution. It delivers extraction, analysis and debugging capability in one integrated environment with the capacity to analyze the full chip. Highlighted violations permit designers to perform corrections at any time in the design process.


    ESRA builds on production-proven technologies, including fast and guaranteed accurate parasitic extraction and circuit-proven, high-capacity matrix solvers. Layout based, full-chip visualization and debugging of current density and potential distribution is included, and the whole solution is seamlessly integrated within existing layout flows.

    ESRA automates verification of ESD protection networks for electrical connectivity, resistance, and current density checks. It:

    • replaces manual ESD checks with well-defined automated checks
    • offers a new verification methodology that quickly identifies issues in the layout and analyzes weak elements of the ESD network
    • provides detailed (mesh-based) simulation and analysis of ESD protection devices and network elements, ensuring the efficiency of electrical connections and their compliance with current density and resistance rules
    • enables early capture of ESD protection problems, avoiding expensive silicon re-designs and re-spins

    Problems can be displayed graphically, annotated onto the layout with problem areas highlighted in color.
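    The current-density rule check at the heart of such a tool can be illustrated with a toy calculation (the numbers and the limit below are hypothetical, not Silicon Frontline’s actual rule deck): the density through a rectangular metal cross-section is J = I / (width × thickness), compared against a foundry limit for the ESD pulse:

    ```python
    def current_density(i_amps, width_m, thickness_m):
        """Current density (A/m^2) through a rectangular metal cross-section."""
        return i_amps / (width_m * thickness_m)

    def check_segment(name, i_amps, width_m, thickness_m, j_limit):
        """Flag a segment whose density under the ESD pulse exceeds the rule limit."""
        j = current_density(i_amps, width_m, thickness_m)
        status = "VIOLATION" if j > j_limit else "ok"
        return (name, j, status)

    # Toy example: a 1.33 A HBM-scale pulse through two metal widths
    J_LIMIT = 2e12  # hypothetical peak-current density limit, A/m^2
    for seg in [
        check_segment("vdd_strap", 1.33, 10e-6, 0.5e-6, J_LIMIT),  # wide strap
        check_segment("io_finger", 1.33, 1e-6, 0.5e-6, J_LIMIT),   # narrow finger
    ]:
        name, j, status = seg
        print(f"{name}: J = {j:.2e} A/m^2 -> {status}")
    ```

    The narrow finger fails while the wide strap passes, which is the kind of per-segment verdict ESRA annotates back onto the layout; the real tool, of course, solves for the current distribution across an extracted mesh rather than assuming it is uniform.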


    In summary, ESRA verifies that ESD design guidelines are met, highlights weak areas of the design, and reports current density violations and high-resistance paths. Details of ESRA are on Silicon Frontline’s website here.


    Medicals Marriage with Semis

    Medicals Marriage with Semis
    by Pawan Fangaria on 03-26-2015 at 7:00 pm

    I remember a couple of decades ago, my father used to go to a nearby doctor’s clinic to get his blood pressure and sugar levels checked. In the 1990s, small electronic kits became available to measure these everyday health indicators and instantly display the numbers; I bought a few for my father then. Today, the scene is very different. Even your ECG (electrocardiogram) can be done at your home, office, or wherever you are through a small, portable and very affordable ECG machine, and there are many other such examples. The ubiquity of such small and powerful healthcare systems has been made possible by the infusion of semiconductor ICs and sensors into these systems. Semiconductor chips have disrupted prices not only in computing and consumer electronics (e.g. PCs, mobile phones and household appliances), but also in medical and healthcare systems. Along with making prices affordable, semiconductor chips have made these systems automated and easy to use for healthcare personnel, patients and healthy persons alike for preventive health check-ups.

    It’s a very healthy sign that the medical semiconductor market is continuously growing. ICs and sensors, specifically those used in small, powerful medical systems, are driving the sales of ICs for medical purposes. The technology has advanced to the point that a biotechnologically treated pill can carry micro-sensors which, after reaching your stomach, transmit signals about its condition to the associated healthcare system (or even your smartphone) and then pass out of the body harmlessly.

    IC Insights’ Medical Semiconductor Market Forecast report shows worldwide medical semiconductor sales growing at a CAGR (compound annual growth rate) of ~12.3% between 2013 and 2018, with total sales reaching $8.2 billion by the end of 2018; the CAGR was ~6.9% between 2008 and 2013. Since a medical semiconductor system may consist of optoelectronic, sensor/actuator and discrete (O-S-D) components along with ICs, the report further suggests that the O-S-D portion can rise at a CAGR of 20.3% while the IC portion rises at 10.7%, although the IC portion by the end of 2018 remains higher at $6.6 billion compared to $1.6 billion for O-S-D.
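    As a quick sanity check on those figures, the CAGR relation is final = base × (1 + CAGR)^years, so the 2018 forecast implies a 2013 base of roughly $4.6 billion:

    ```python
    def implied_base(final, cagr, years):
        """Back out the starting value implied by a final value and a CAGR."""
        return final / (1 + cagr) ** years

    # $8.2B by end of 2018 at ~12.3% CAGR over the 5 years from 2013
    base_2013 = implied_base(8.2, 0.123, 5)
    print(f"implied 2013 medical-semiconductor sales: ${base_2013:.1f}B")
    ```

    That back-of-the-envelope base is consistent with the IC-plus-O-S-D split the report gives for 2018 ($6.6B + $1.6B = $8.2B).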

    O-S-D components are frequently used in optical imaging and diagnostic equipment. Advancements in SoCs, MEMS (micro-electro-mechanical systems) and analog front-end data-converter technology have given rise to portable, smaller healthcare equipment that can be used in places other than hospitals. Since prices have also come down significantly, the equipment has become more affordable. This has opened up a new market for semiconductor medical ICs, embedded sensors and systems.

    Today, small imaging systems can cost one-tenth the price of large diagnostic systems (such as MRI or CT scanners) installed in hospitals and can be used in doctors’ offices, clinics, or elsewhere. Wearable devices such as fitness bands, sleep-pattern monitors and cardiac monitors are adding another dimension to the medical semiconductor market. In this market, software apps are as important as the hardware medical systems.

    Development is not limited to portable medical systems; more powerful and integrated large medical systems that can be deployed at scale are also emerging. These systems can lower the cost of treating severe ailments such as cancer—treatments that were either not possible or out of reach for ordinary people due to prohibitive costs. Detecting diseases earlier, and preventive treatment including less invasive surgery, have become possible with the use of semiconductors. Total medical electronics systems sales are expected to reach ~$70 billion by the end of 2018.

    Semiconductors and medical systems complement each other: semiconductors make medical systems available and affordable, while medical systems drive the growth of the semiconductor market. A happy marriage!


    Verification IP for PCIe and AXI4

    Verification IP for PCIe and AXI4
    by Daniel Payne on 03-26-2015 at 2:00 pm

    Engineers love acronyms, and my latest blog post has three in the title alone, so hopefully you are doing, or considering, SoC designs with the AMBA AXI4 (Advanced eXtensible Interface 4) interface specification along with PCI Express (Peripheral Component Interconnect Express). One big motivation for using semiconductor IP and verification IP along with standards is that you can get your new product to market faster, with fewer bugs and minimum engineering effort. When you hear the phrase “Verification IP” your mind may jump to vendors like Cadence or Synopsys; however, Mentor Graphics is in this business as well. A quick Google search on the phrase “Verification IP” turned up these three EDA vendors, along with SemiWiki in the #4 position:

    Mentor produces the Verification Horizons newsletter, where I read an article by David Aerne and Ankur Jain, “Fast Track to Productivity Using Questa Verification IP”. Here’s what to look for in any verification IP:

    • Proven by multiple customers
    • Checks for compliance to each protocol
    • Has a compliance test suite
    • Gives engineers coverage analysis

    Related – Virtual Emulation Extends Debugging Over Physical

    Integration

    Let’s say that your DUT (Design Under Test) is a PCIe Root Complex (RC). The design IP along with the verification IP would look like this:

    QVIP stands for Questa Verification IP, a Mentor product name. QVIP has wrapper modules for each use case, making integration easier. Interface types supported by the PCIe QVIP include serial, PIPE, PIE8 and MPCIe.

    Configuration

    Verification engineers can quickly configure each QVIP to model a PCIe End Point (EP) or Root Complex (RC) using a descriptor. This descriptor approach is quicker than writing UVM (Universal Verification Methodology) code to create analysis ports.
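    The descriptor idea—declaring the role and link parameters in one data structure and letting the VIP expand it, rather than hand-writing procedural setup code—looks roughly like this. This is a Python sketch of the pattern only; the field names and helper are illustrative, not Questa’s actual API:

    ```python
    from dataclasses import dataclass

    @dataclass
    class PcieVipDescriptor:
        """Illustrative descriptor for configuring one PCIe VIP instance."""
        role: str          # "EP" (End Point) or "RC" (Root Complex)
        lanes: int         # link width, e.g. 1, 4, 8, 16
        gen: int           # PCIe generation, which fixes the data rate
        serial_if: bool    # serial interface vs. PIPE-level interface

    def build_vip(desc: PcieVipDescriptor):
        """Expand one descriptor into the settings a procedural setup would need."""
        assert desc.role in ("EP", "RC")
        rate = {1: "2.5GT/s", 2: "5GT/s", 3: "8GT/s"}[desc.gen]
        return {"role": desc.role,
                "link": f"x{desc.lanes} @ {rate}",
                "interface": "serial" if desc.serial_if else "PIPE"}

    # One declarative object drives the whole bring-up
    cfg = build_vip(PcieVipDescriptor(role="RC", lanes=4, gen=3, serial_if=False))
    print(cfg)
    ```

    The payoff of the descriptor style is that swapping an EP testbench for an RC one, or changing lane count or generation, is a one-field edit rather than a rewrite of procedural UVM configuration code.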

    Related – UVM Debugging Made Easy & Productive in Questa

    Here’s what an example PCIe QVIP configuration looks like:

    Further automation allows you to bring up a PCIe test bench using a sequence at a high level, shown in this code fragment:

    Starter Kits

    Buying design IP and getting it to work with Mentor’s QVIP is enabled through quick-starter kits that let you install, instantiate, configure and bring up QVIP within a work day:

    Related – A Functional Verification Framework Spanning Simulation to Emulation

    APIs

    Generic APIs are provided that let you use read and write commands across all of the ARM AMBA protocols: AHB, AXI3, AXI4, ACE and CHI.

    This generic API approach makes it easier to verify each SoC that uses ARM AMBA protocols.

    Summary

    Mentor Graphics offers verification IP, called QVIP, that makes SoC verification easier to bring up for the most popular protocols, such as the AMBA AXI4 and PCIe bus interfaces. Connectivity modules, configuration descriptors, quick-starter kits and portable utility sequences help automate verification tasks. Monitors within QVIP ensure protocol compliance, and for analysis you get scoreboards and coverage collectors. Your verification team can track and achieve coverage goals using the test suites and functional test plans.

    Automation is your ally for verification, and Mentor’s QVIP can help. Read the full newsletter article here.