
ARMmbed? IoT dedicated ARM OS!

by Eric Esteve on 11-21-2014 at 12:10 pm

The #1 IP vendor, leading the pack with revenues more than twice those of its closest competitor, has to position itself in the new IoT market, especially because ARM's main product line is its processor IP family, and an MCU or CPU is certainly at the heart of the SmarCoT: the "Smart" part. In fact, ARM's customers have the freedom to develop any chip addressing the Internet of Things (as long as they pay the license fees and royalties). But ARM's marketers have done serious homework to define how they see the IoT market and to build solutions around the ARM Cortex-M family to address it.

It's all about ecosystem, and ARM knows the concept pretty well; we can even say that they introduced it to the semiconductor industry! You need partners to build an ecosystem, and ARM has defined three partner classes: Cloud partners (providing services), Tools partners (for development tools) and Silicon partners who bring the technology. These three groups form the mbed Partner Ecosystem (mbed is in fact the operating system developed by ARM to support IoT).


What is interesting in the above scheme is that you don't see any IC or any CPU or MPU IP, even if you can easily guess that you will find them in the IoT devices on the right side… but not only there. If you dig into the new web site fully dedicated to ARM mbed, you quickly identify the silicon, for example in the Smart Home solution:


The multiple home appliances and controls developed around ARM Cortex-M running mbed OS are the smart things (SmarCoT) connected to the Internet via a single gateway (integrating a Cortex-A CPU running Linux). In fact mbed™ OS enables low-power wireless devices with IPv4 or IPv6, and integrates with home/mobile gateways and electric meters to provide Internet routing. One of the new standards for home automation is Thread: an IPv6-based standard that brings IP to the edge, consumes minimal power and allows mesh networking among home appliances. It runs over a low-power radio (802.15.4) MAC and PHY.
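To see why minimal payloads matter for these low-power wireless nodes, here is a hedged sketch of a compact sensor report: the field layout, scaling factors and device IDs are invented for illustration, not taken from Thread or mbed OS, but the principle (a few packed bytes means less radio on-time and less power) carries over.

```python
import struct

# Hypothetical compact payload for a battery-powered 802.15.4-class node.
# Layout (9 bytes): device id (uint16), sequence (uint16),
# temperature in centi-deg C (int16), humidity in 0.1% (uint16), battery % (uint8).
PAYLOAD_FMT = ">HHhHB"

def encode_report(device_id, seq, temp_c, humidity_pct, battery_pct):
    """Pack a sensor report into a few bytes; small frames mean less radio on-time."""
    return struct.pack(PAYLOAD_FMT, device_id, seq,
                       round(temp_c * 100), round(humidity_pct * 10), battery_pct)

def decode_report(frame):
    device_id, seq, temp, hum, batt = struct.unpack(PAYLOAD_FMT, frame)
    return {"device_id": device_id, "seq": seq,
            "temp_c": temp / 100, "humidity_pct": hum / 10, "battery_pct": batt}

frame = encode_report(0x1A2B, 7, 21.5, 48.2, 93)
assert len(frame) == 9   # fits easily in one 802.15.4 frame (127-byte MTU)
assert decode_report(frame)["temp_c"] == 21.5
```

A gateway running the full IP stack would unpack such frames and forward them upstream as ordinary IPv6 traffic.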

If you are interested in the Smart City, the same mbed OS would be used to support large-scale, secure IoT deployments in the street:


Analogous to a web server that accepts connections from mobile phones or web browsers, a Device Server handles the connections from Internet of Things (IoT) devices (or "Little Data"). This Device Server is a key enabler for cloud service providers, operators and enterprises to access the growing IoT market with production deployments, bringing end node devices into the world of web services. The Device Server can be used to connect the Little Data world of IoT to the Big Data applications.
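The Little-Data-to-Big-Data bridge can be sketched in a few lines. This is an illustrative toy, not the mbed Device Server API: class and method names are made up, and a real server would of course speak CoAP/HTTP over the network rather than take in-process calls.

```python
# Sketch of the "Device Server" idea: accept small reports from many end
# nodes ("Little Data") and expose them as web-service-style records.
class DeviceServer:
    def __init__(self):
        self.latest = {}    # device_id -> most recent report
        self.history = []   # append-only stream for Big Data analytics back-ends

    def handle_report(self, device_id, report):
        self.latest[device_id] = report
        self.history.append((device_id, report))

    def query(self, device_id):
        """What a cloud application would call, analogous to an HTTP GET."""
        return self.latest.get(device_id)

server = DeviceServer()
server.handle_report("meter-17", {"kwh": 4.2})
server.handle_report("meter-17", {"kwh": 4.5})
assert server.query("meter-17") == {"kwh": 4.5}   # latest state for web clients
assert len(server.history) == 2                   # full stream for analytics
```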


ARM has already built a specific policy for mbed OS partners who:

  • contribute technically to the platform,
  • use it within their own solutions for developers,
  • work with ARM on marketing,

and finally benefit from being part of the ecosystem.

Technical benefits include early access rights to the platform source, enabling partners to port code and integrate solutions before the source goes public, and support from the team creating the platform itself. Partners can also get development licenses for the latest versions of Device Server. Marketing benefits include being featured on the mbed Developer Website and social media channels, involvement with press releases, guest blogs and videos, and participation at ARM® mbed events.

ARM claims a developer community already over 70,000 strong, and if you look at the partner logos, you can see that almost all the big names of the MCU space (Atmel, Freescale, NXP, Renesas, ST) are mbed OS partners. Complemented by Big Data service providers such as Alcatel-Lucent, IBM and Ericsson, to name a few, this initiative could support the creation of the new industry required to develop IoT. ARM is very good at building ecosystems, and such an ecosystem helps facilitate deeper relationships between new and non-traditional customers and partners.

If you are interested in discussing becoming a partner, please contact partnership@mbed.org

Eric Esteve from IPNEST


Global Foundries and IBM, More Details

by Paul McLellan on 11-21-2014 at 7:00 am

Now that the dust has started to settle on the GlobalFoundries acquisition of IBM’s semiconductor business it is possible to look into another level of detail about what GlobalFoundries will be acquiring in the way of technology and IP. Of course, the deal hasn’t formally closed yet so this won’t all happen instantly. Estimates are that the deal may take as long as a year to close, and the rules are quite strict on how closely people can work together on an unclosed deal so it is going to be a challenge to manage the transition.


Firstly, there is additional capability in the specialized foundry business, meaning anything other than regular digital SoC type manufacturing. GlobalFoundries already has a good capability in this area, primarily running in the old Chartered fabs in Singapore, some of which have been upgraded to 300mm. But with IBM they gain:

  • PA/FEM (power amplifiers, front-end modules) and transceivers
  • High performance RF and AMS with SiGe
  • High voltage and power management
  • A specialty foundry business to address growth opportunities in mobile RF, which is expected to grow fast (see graph above)

This all runs in the old IBM 200mm fab in Essex Junction, Vermont, which is about a three-hour drive north of GlobalFoundries' Fab 8 in Malta, NY. The capacity is around 40K wafer starts per month.


They are also taking over IBM’s commercial ASIC business which is focused on network and computing infrastructure, in particular wired communications, wireless communications infrastructure (base stations) and storage. This runs in the IBM 300mm fab in East Fishkill, New York (a couple of hours south of fab8). The ASIC business is expected to grow with a CAGR of 6.5% (see graph above). East Fishkill has a capacity of around 14K wafer starts per month.

Fab8 has a capacity of 60,000 300mm wafers per month (or roughly 120,000 200mm equivalents).


There is major investment in technology in the northeast, with the College of Nanoscale Science and Engineering (CNSE) in Albany in the area too. And clearly GlobalFoundries now has a world-class technology development team.

So the bottom line is that the acquisition:

  • Reinforces GlobalFoundries’ long-term commitment to manufacturing and technology leadership
  • Provides R&D expertise to give a path to 10nm and beyond
  • Expands segment growth in RF and ASIC
  • Makes GlobalFoundries IBM's sole-source foundry partner
  • Gives them strategic relationships with top OEM industry suppliers

GlobalFoundries have a presentation deck covering the acquisition here.


More articles by Paul McLellan…


HLS Tools Coming into Limelight!

by Pawan Fangaria on 11-20-2014 at 10:00 pm

For about a decade I have been looking forward to seeing more of system-level design and verification, including high-level synthesis (HLS), virtual prototyping and system modeling, come into the mainstream of SoC design. Although the progress has been slow, I see it accelerating as more and more tools address the typical pain points in designing and verifying at the system level. Naturally, if you can't confidently verify a design done in a certain way, you wouldn't design it that way. So the message is clear: close the gap between what a designer wants to do and what the tools provide; the closer the gap, the faster the adoption.

I was particularly happy to see the sixth annual HLS survey report conducted by Calypto among SoC, IC and FPGA design engineers and managers. Remarkably, only 4% of the 750 engineers who responded to the survey do not use HLS. That means the majority of the respondents were active users of HLS and knew the actual problems they face while using it. What if these problems were solved to delight them? Let's see what these pain points are:

It's clear that proving C and RTL equivalence is a major challenge; the RTL structure depends on the constraints given to the synthesis tool, and hence its sequential behavior can vary significantly with those constraints. So C-to-RTL formal verification tools must take the design intent into account. The other major challenges, tracking mismatches between C and RTL models and the lack of test coverage for the generated RTL, signify an acute need to generate an equivalent RTL testbench along with the RTL model.
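The sequential-behavior point can be illustrated with a toy simulation-based check. This is a hedged analogy, not how a formal SLEC tool works: here an untimed "C" reference is compared against a latency-2 pipelined "RTL-like" model of the same operation, and the comparison only passes once the pipeline latency (a synthesis-tool decision) is accounted for.

```python
import random

def c_model(samples):
    """Untimed reference: one result per input, immediately."""
    return [x * 3 + 1 for x in samples]

class RtlModel:
    """Same function, but results appear two cycles later (pipeline registers)."""
    def __init__(self):
        self.pipe = [None, None]
    def clock(self, x):
        out, self.pipe = self.pipe[0], [self.pipe[1], x * 3 + 1]
        return out

def check_equivalence(trials=100):
    for _ in range(trials):
        samples = [random.randint(-100, 100) for _ in range(20)]
        rtl = RtlModel()
        # Drive all inputs, then two flush cycles to drain the pipeline:
        outputs = [rtl.clock(x) for x in samples] + [rtl.clock(0), rtl.clock(0)]
        # Align by discarding the 2-cycle latency before comparing:
        if [o for o in outputs if o is not None][:len(samples)] != c_model(samples):
            return False
    return True

assert check_equivalence()   # equivalent, but only after latency alignment
```

Compare the raw, unaligned streams cycle by cycle and they disagree, which is exactly why equivalence checkers must model the design intent rather than naively diff waveforms.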

Which C source code errors were hardest to identify during HLS? As usual, detection and removal of dead code in any software is a major pain point. The point important to HLS is that such errors (e.g. uninitialized memory reads, array-bounds reads (ABR), etc.) in C source code can show their effects differently in different contexts, affecting the consistency of results and the re-usability of the code. These errors must be removed early in the design process to keep the code quality high and the code re-usable and consistently synthesizable.
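As a hedged analogy (the article's context is C source feeding an HLS tool, not Python), here is a tiny unreachable-statement finder: anything after an unconditional return/raise in the same block is dead code, and flagging it statically, before synthesis, is the kind of early check the paragraph argues for.

```python
import ast

def find_dead_code(source):
    """Report line numbers of statements that follow an unconditional exit
    in the same block (first such region per block)."""
    dead = []
    for node in ast.walk(ast.parse(source)):
        body = getattr(node, "body", None)
        if not isinstance(body, list):
            continue
        for i, stmt in enumerate(body[:-1]):
            if isinstance(stmt, (ast.Return, ast.Raise, ast.Break, ast.Continue)):
                dead.extend(s.lineno for s in body[i + 1:])
                break
    return dead

src = """
def scale(x):
    return x * 2
    x = x + 1   # never executed
"""
assert find_dead_code(src) == [4]   # line 4 is unreachable
```

Real C linters and HLS front-ends do considerably more (reaching-definitions, bounds analysis), but the principle of walking the syntax tree for provably-unreachable statements is the same.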

The question on power reduction was interesting; we are already seeing major power reduction at RTL through the various RTL-level tools on the market. Power reduction at the system level, in C++/SystemC before HLS, is widely perceived as the best place to start. HLS tools can optimize the micro-architecture to minimize power and also feed RTL power optimization tools to produce power-optimized RTL in one go.

What hardware types are being designed using HLS? Clearly the major concentration is in wireless, video, imaging, graphics etc. However, it is interesting to see the other 25%; it means the advantages of using HLS in more kinds of designs are being recognized by the design community.

Look at the HLS report at the Calypto website for more details. I like this process of Calypto's: gathering inputs from the design community and then incorporating them into their tools is a great way to accelerate closing the gap between designers and tools. So what is Calypto doing to address the key points in their HLS tool Catapult and their Sequential Logic Equivalence Checking (SLEC) tool? We need to watch for that. Stay tuned to hear more on specific HLS improvements from Calypto to provide a superior experience to designers.

More Articles by Pawan Fangaria…


Using HAPS-DX for system-level deep trace debug

by Don Dingee on 11-20-2014 at 4:00 pm

Debugging an ASIC design in an FPGA-based prototyping system can be a lot like disciplining a puppy. If you happen to be there at the exact moment the transgression occurs and understand what led up to that moment, administering an effective correction might be possible.

Catching RTL in the act requires the right tools. Faults in a complex design are rarely obvious, more likely rooted in a sequence of events sourced from multiple IP blocks partitioned across FPGAs and clock domains. For example, debugging a USB protocol fault calls for capturing a deep trace buffer on the port itself, correlated with streams from other test points in IP blocks and interconnects.


Most FPGA-based prototyping systems are not equipped for deep trace debug operations. FPGAs do not contain enough internal memory to capture very long traces. The Hawthorne effect also enters the equation: configuring FPGA resources for debug consumes those same resources and can affect the outcome of the observation, or other operations, in unpredictable ways.

Of course, test points never seem to be in the right places at the right time, especially as the root cause of a problem traces farther back into multiple blocks. Having to rebuild an entire RTL design partitioned across several FPGAs just to change the test point configuration for debugging a problem is time consuming and risky.

Most design philosophies use a cascade-up approach: debug the details at the IP block level, then abstract functional blocks as black boxes at system integration. This not only greatly reduces the test point loading, but also reduces system test time and fosters IP reuse – if it all goes as planned.

When Synopsys designed the HAPS-70 Series of FPGA-based prototyping systems, they hit system-level partitioning and interconnect head on, and addressed much of the system-level debugging capability. It became clear that IP block testing needed a similar approach, and to scale down and enable more teams, Synopsys introduced the HAPS-DX. Developers then could design and prototype IP blocks on a smaller, more cost effective platform, and pass artifacts directly up to the HAPS-70 platform for SoC integration.

A big feature of the HAPS-DX is the detailed, deep trace debug capability. HAPS-DX has an 8GB DDR3 SDRAM SODIMM and a suite of logic analysis tools. It can grab 128 signals at 140 MHz, for a full five seconds of data. Synopsys Verdi and Siloti visualization tools can be used to view the results.


A new, short video from Troy Scott and Peter Zhang of Synopsys shows how the HAPS-70 can be used non-invasively with the debug features of the HAPS-DX and ProtoCompiler. Effectively, the HAPS-DX serves as storage and control for debug operations, connected to watchpoints on the HAPS-70 via high-speed serial links.

This approach leaves the HapsTrak 3 connectors wide open for daughter cards and FPGA interconnect – the partitioning on the HAPS-70 is unaffected. The HAPS-DX captures data, and shares it with a host workstation for display over the high bandwidth UMRBus.


With the capture mechanism in place, ProtoCompiler is used to set triggers. Its RTL Instrumentor allows navigating the RTL design hierarchy visually. A few clicks can set watchpoints or triggers by signal name. A run-time utility then takes over sampling, and can export data in an FSDB format for display and analysis. Results are fully correlated and can be annotated back to RTL source.
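The trigger-and-capture flow can be sketched abstractly. This is not ProtoCompiler's actual interface: signal names, the trace format and the window sizes below are invented, but the mechanic (scan a captured stream for the first cycle where a watched condition holds, then slice a window around it for display) is the essence of trigger-based debug.

```python
# Simplified sketch of trigger-based deep trace debug.
def find_trigger(trace, signal, condition):
    """Return the first cycle where condition(signal value) holds, else None."""
    for cycle, sample in enumerate(trace):
        if condition(sample[signal]):
            return cycle
    return None

def capture_window(trace, cycle, before=2, after=2):
    """Slice the samples around the trigger cycle for analysis/display."""
    return trace[max(0, cycle - before): cycle + after + 1]

# Hypothetical 5-cycle trace of two watched signals:
trace = [{"usb_err": 0, "fifo_lvl": n} for n in (3, 5, 9, 12, 7)]
trace[3]["usb_err"] = 1                     # fault appears at cycle 3

t = find_trigger(trace, "usb_err", lambda v: v == 1)
assert t == 3
assert [s["fifo_lvl"] for s in capture_window(trace, t)] == [5, 9, 12, 7]
```

In the real flow the equivalent of `capture_window` lands in the 8GB trace buffer and is exported as FSDB, correlated back to the RTL source.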

This video, “Synopsys ProtoCompiler for RTL Debug with HAPS Systems”, appears with other helpful FPGA-based prototyping videos from the Synopsys marketing and engineering teams.



Don’t be an “ID-IoT”

by Bill Boldt on 11-20-2014 at 8:00 am


Let’s just come out and say it: Not using the most robust security to protect your digital ID, passwords, secret keys and other important items is a really, really bad idea. That is particularly true with the coming explosion of the Internet of Things (IoT).



The identity (i.e. “ID”) of an IoT node must be authenticated and trusted if the IoT is ever to become widely adopted. Simply stated, the IoT without authenticated ID is just not smart. This is what we mean when we say don’t be an ID-IoT.

It seems that every day new and increasingly dangerous threats are infecting digital systems. Vulnerabilities and exploits such as Heartbleed, Shellshock, POODLE, and BadUSB have put innocent people at risk in 2014 and beyond.

Because the digital protection mechanisms themselves have become targets, and because the IoT multiplies the number of targets, those targets must be hardened.

It is not hard to see that trust in the data communicated via a ubiquitous (and invasive) IoT will be necessary for it to be widely adopted. Without trust, the IoT will fail to launch. It's as simple as that. In fact, Vint Cerf, the recognized inventor of the Internet, completely agrees, saying that the Internet of Things requires strong authentication. In other words: no security, no IoT for you!

A bracing reason that data security is so important is that money now is simply electronic data, so everyone and every company is at risk of financial losses stemming directly from data breaches. Data banks are where the money is now kept, so data is what criminals attack. While breaches are, in fact, being publicized, there has not been much open talk about their leading to significant corporate financial liability. That liability, however, is real and growing. CEOs should not be the least bit surprised when they start to face significant shareholder and class action lawsuits stemming from security breaches.

Companies are, however inadvertently, exposing the identities and sensitive financial information of millions of customers and, unfortunately, may not be taking all the necessary measures to ensure the security and safety of their products, data, and systems. Both exposure of personal data and the risk of product cloning can translate to financial damages. Damages translate to legal action.

The logic of tort and securities lawyers is that if proven methods to secure against hacking and cloning already exist, then it is the fiduciary duty of the leaders of corporations (i.e. the C-suite occupants) to embrace such protection mechanisms (like hardware-based key storage), and more importantly, not doing so could possibly be argued as being negligent. Agree or not, that line of argumentation is viable, logical, and likely.

A few CEOs have already started to equip their systems and products with strong hardware-based security devices… but they are doing it quietly and not telling their competitors.

Software, Hardware, and Hackers

Why is it that hackers are able to penetrate systems and steal passwords, digital IDs, intellectual property, financial data, and other secrets? It’s because until now, only software has been used to protect software from hackers. Hackers love software. It is where they live.


The problem is that rogue software can see into system memory, so it is not a great place to store important things such as passwords, digital IDs, security keys, and other valuable things. The bottom line is that all software is vulnerable because software has bugs despite the best efforts of developers to eliminate them. So, what about storing important things in hardware?

Hardware is better, but standard integrated circuits can be physically probed to read what is on the circuit. Also, power analysis can quickly extract secrets from hardware. Fortunately, there is something that can be done.

Several generations of hardware key storage devices have already been deployed to protect keys with physical barriers and cryptographic countermeasures that ward off even the most aggressive attacks. Once keys are securely locked away in protected hardware, attackers cannot see them and they cannot attack what they cannot see. Secure hardware key storage devices employ both cryptographic algorithms and a tamper-hardened hardware boundary to keep attackers from getting at the cryptographic keys and other sensitive data.

The basic idea behind such protection is that cryptographic security depends on how securely the cryptographic keys are stored. But, of course it is of no use if the keys are simply locked away. There needs to be a mechanism to use the keys without exposing them — that is the other part of the CryptoAuthentication equation, namely crypto engines that run cryptographic processes and algorithms. A simple way to access the secret key without exposing it is by using challenges (usually random numbers), secret keys, and cryptographic algorithms to create unique and irreversible signatures that provide security without anyone being able to see the protected secret key.
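The challenge-response idea described above can be sketched with plain HMAC-SHA256 (a stand-in for the actual CryptoAuthentication command set and algorithms, which this example does not reproduce): the key never leaves the "device"; only an irreversible digest of the challenge and the key crosses the wire.

```python
import hmac, hashlib, secrets

# The secret provisioned into protected hardware at manufacture (illustrative):
SECRET_KEY = secrets.token_bytes(32)

def device_respond(challenge, key=SECRET_KEY):
    """What the crypto engine computes inside the hardware boundary."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def host_verify(challenge, response, key=SECRET_KEY):
    """The host knows the same key and checks the signature."""
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)   # constant-time compare

challenge = secrets.token_bytes(32)   # fresh random number, never reused
assert host_verify(challenge, device_respond(challenge))
assert not host_verify(secrets.token_bytes(32), device_respond(challenge))
```

Note that an eavesdropper who captures the challenge and the response learns nothing useful: replaying the response fails against the next (fresh) challenge, and the digest cannot be inverted to recover the key.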

Crypto engines make running complex mathematical functions easy while at the same time keeping secret keys secret inside robust, protected hardware. The hardware key storage + crypto engine combination is the formula to keeping secrets, while being easy-to-use, available, ultra-secure, tiny, and inexpensive.

http://www.youtube.com/watch?v=zZsOjyo26tg


Bill Boldt, Sr. Marketing Manager, Crypto Products, Atmel Corporation



Is Your Washing Machine a Connected Thing?

by Eric Esteve on 11-20-2014 at 4:00 am

In fact, the question could be about your watch, thermostat or other smart appliance, as soon as the "thing" relies on one or more sensors to function. In that case, we are close to calling this thing an IoT device (or SmarCoT); we just need to add WiFi, BTLE or ZigBee connectivity. Sensors are ubiquitous, integrated into smartphones, cars, thermostats, home appliances and much more. If you want to define IoT, you should start by listing the many things that use sensors. When a "thing" uses sensor(s), a CPU or DSP (or a combination of both) is not far away to process the raw sensor data; it becomes a "smart thing". Connecting a smart thing looks like a good idea? Just do it and you will have built a SmarCoT (Smart Connected Thing), or IoT!

The next question is which processor to select, especially when the thing will be battery powered and the electronic system has to be smart in terms of power consumption. We are not talking about a device you need to charge every day or even every week, but rather every several weeks. We have known for quite a long time that integration is always a good path to low power: moving from external I/Os to internal connections greatly helps to save power. Thus, using a CPU IP subsystem that can be integrated into a larger chip sounds good.

Synopsys DesignWare Sensor and Control IP Subsystem is optimized to process the extensive amount of data in sensor fusion applications. The subsystem includes a rich library of off-the-shelf DSP functions supporting filtering, correlation, matrix/vector, decimation/interpolation and complex math operations. Designers can implement these sensor-specific DSP functions in hardware using a combination of native DSP instructions within the EM5D or EM7D processor and tightly coupled hardware accelerators to boost performance efficiency and reduce power consumption by up to 85 percent compared to discrete solutions.
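To make two of the listed DSP kernels concrete, here is a pure-Python sketch of FIR filtering and decimation as used in sensor pre-processing. This is only an illustration of the math: in the real subsystem these map to native DSP instructions or tightly coupled hardware accelerators, not software loops, and the tap values below are just a moving average.

```python
def fir_filter(samples, taps):
    """Direct-form FIR: each output is a dot product of taps and recent inputs."""
    out = []
    history = [0.0] * len(taps)
    for x in samples:
        history = [x] + history[:-1]          # shift register of past inputs
        out.append(sum(t * h for t, h in zip(taps, history)))
    return out

def decimate(samples, factor):
    """Keep every Nth sample after filtering, lowering the data rate."""
    return samples[::factor]

# A 4-tap moving average smooths sensor noise, then 2x decimation halves the rate:
smoothed = fir_filter([8.0, 8.0, 8.0, 8.0, 8.0, 8.0], [0.25, 0.25, 0.25, 0.25])
assert smoothed[3:] == [8.0, 8.0, 8.0]   # filter has settled to the input level
assert decimate(smoothed, 2) == [smoothed[0], smoothed[2], smoothed[4]]
```

Doing this filtering and rate reduction close to the sensor is exactly what lets the rest of the system sleep longer, which is where the power savings come from.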

Synopsys has built a real subsystem, as you can see in the above picture, integrating an ARC CPU (EM4, EM5D, EM6 or EM7D) tailored to the computation needs. A computation-hungry application may require instruction and data caches, but you can decide to integrate only the right quantity of embedded SRAM. In both cases, the solution will exhibit minimized power consumption, as there is no power-hungry internal bus like AMBA AHB or the like. This architecture sounds like a main differentiator compared with the competition, even against low-performance CPUs like the ARM Cortex-M, for example.

But a low-power architecture doesn't mean low capability: this IP subsystem is expected to interface with sensors, so it offers a full set of tightly coupled peripheral interfaces (see above), including both digital and analog interfaces, as well as a Pulse Width Modulator (PWM) interface. The lower the cost of a sensor, the more DSP manipulation will be needed, and Synopsys proposes various DSP accelerators, either hard-wired (EM4 or EM6) or via native instructions in the EM5D and EM7D versions. Even a floating point unit can be added as a licensable option if needed.

Claiming that your solution is low power is one thing, demonstrating this claim in a real case is better!

Synopsys has benchmarked the ARC sensor IP subsystem running the same 9-D sensor fusion application, in terms of cycle counts as well as energy consumption, against two competitor solutions. The Synopsys IP subsystem is integrated into an ASIC built on 40LP, while the X and Y solutions are based on standard parts (from well-known microcontroller manufacturers). The cycle count reduction, by a factor of 5 (against competitor X) or 4 (against Y), is impressive. Such an 80% reduction in cycle count, plus the smarter bus-free architecture, explains why the Synopsys solution exhibits more than 85% savings in energy consumption!
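As a quick sanity check on the figures: a 5x (or 4x) cut in cycle count corresponds to an 80% (or 75%) reduction, and any additional per-cycle energy saving (from the bus-free architecture) compounds on top of that. The 0.7x per-cycle factor below is a made-up number purely to show the compounding, not a Synopsys figure.

```python
def reduction_pct(factor):
    """A k-x speedup removes (1 - 1/k) of the work."""
    return 100 - 100 // factor

assert reduction_pct(5) == 80   # vs competitor X
assert reduction_pct(4) == 75   # vs competitor Y

# Hypothetical: 5x fewer cycles AND 0.7x energy per cycle compound to >85% savings.
energy_saving_pct = (1 - (1 / 5) * 0.7) * 100
assert energy_saving_pct > 85
```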

In fact, this ARC IP subsystem is like sensors: it's ubiquitous. It's not possible to describe in a short paper all the potential configurations, ranging from a pure RISC CPU to 25% RISC/75% DSP and every implementation in between. You will get a very good overview, and also some in-depth information, by listening to this webinar:

Webinar: Simplify Sensor and Actuator Functionality for your IoT Solution

Eric Esteve from IPNEST


Qualcomm Enters Server CPU Market

by Paul McLellan on 11-19-2014 at 6:00 pm

Fresh from the leaked memo that Intel is merging its mobile business into its PC client group, Qualcomm is going the other way and has confirmed that it is entering the ARM server CPU market, an announcement made at its analyst day earlier today.

This is a major trend that I reported from the Linley microprocessor conference less than a month ago. You can go and read the whole thing, but the money quote is: "ARM has a tiny share. But as I reported last year, that is all set to change. The 64-bit ARMv8 instruction set has opened up new markets and almost all embedded vendors are moving their future investment to ARM. However, the time to design in, ship and ramp equipment in a conservative market means that the crossover will take 5-10 years," but:

  • AppliedMicro shipping X-Gene and sampling X-Gene2
  • Cavium plans to sample Thunder in Q4 (their current products are MIPS based)
  • Freescale sampling LS1 and plans to sample LS2 this quarter
  • LSI/Avago/Intel shipping ARM version of Axxia (although presumably this will be short lived now Intel owns that business)
  • AMD sampling Hierofalcon for embedded market
  • Broadcom shipping StrataGX and developing Vulcan CPU

Now we can add Qualcomm to this list. Since Qualcomm is pretty much doing the most advanced SoCs on TSMC’s most advanced processes, and given that it has its own ARM processor already, this should put it in a good position. As Qualcomm’s CEO Steve Mollenkopf argued, their ability to quickly adopt next-gen manufacturing processes will give it an edge.

It still remains to be seen if “ARM servers” are really a market. It is supposedly driven by demand from internet giants but until Facebook or Google announce that they are building datacenters at scale using ARM-based CPUs the jury is still out. It is also clear that the companies building ARM-based server CPUs cannot all be successful. I would expect only one or two of these companies to achieve true scale. But there is clearly a value proposition. For some tasks, maximum single thread performance is the most important thing and Intel is untouchable there. But for many tasks, such as servicing hundreds of thousands of simultaneous users on the web, raw performance of a thread is probably less important than aggregate performance of the datacenter against the important metrics of power, cost and physical size. Good performance at 10% of the cost, 10% of the power and 10% of the physical volume sounds pretty compelling. The total cost of ownership of a datacenter includes a high electricity bill to deliver power to the servers and another high electricity bill to power the air-conditioning to get the heat out. Reducing that may be even more important than the cost of the server chips.
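The aggregate-performance argument above is easy to put into numbers. The figures in this sketch are entirely made up for illustration (they are not benchmarks of any real Xeon or ARM server part): the point is only that under a fixed rack power budget, many modest nodes can beat fewer fast nodes on total throughput even when each is slower on a single thread.

```python
def datacenter_throughput(nodes, perf_per_node, watts_per_node, power_budget_w):
    """Throughput achievable within a rack's power budget."""
    usable_nodes = min(nodes, power_budget_w // watts_per_node)
    return usable_nodes * perf_per_node

POWER_BUDGET = 10_000  # watts available to the rack (illustrative)

# Fast, power-hungry nodes: great single-thread performance...
big_core = datacenter_throughput(nodes=100, perf_per_node=100,
                                 watts_per_node=200, power_budget_w=POWER_BUDGET)
# ...vs many modest, efficient nodes:
small_core = datacenter_throughput(nodes=1000, perf_per_node=30,
                                   watts_per_node=20, power_budget_w=POWER_BUDGET)

assert big_core == 5_000     # only 50 nodes fit the power budget
assert small_core == 15_000  # 500 nodes fit: 3x the aggregate throughput
```

For latency-sensitive single-thread work the calculation flips, which is why the two architectures are likely to coexist rather than one displacing the other.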

But Facebook is at least talking the talk. Jay Parikh, Facebook's vice president of infrastructure engineering, said: "Qualcomm-based ARM servers give us the ability to rethink the way that we have built certain parts of our infrastructure."

So Intel is trying to get into Qualcomm’s primary business and now Qualcomm is trying to get into Intel’s.

The Wall Street Journal article is here.

See also Will ARM Rule the World?

Qualcomm also announced the latest version of its Gobi LTE modem series, the 5th generation, named 9x45. It will be out next year. I'm guessing it is built in a TSMC 20nm process. It supports speeds of up to 450Mbps, which is pretty amazing (for comparison, when commercial Ethernet was first introduced it ran at just 10Mbps). This requires carrier aggregation, which means that the mobile device communicates simultaneously over several channels, while retaining backwards compatibility for devices that can only use a single channel. It is supposedly lower power than the previous 300Mbps version, the 9x35, and requires less board space (presumably it is a smaller die in a smaller package).
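The arithmetic behind carrier aggregation is simple, and it also shows the backwards-compatibility point: the headline rate is the sum of the per-carrier rates, while a single-channel legacy device still gets one carrier's worth. The 3 x 150Mbps split below is an illustrative decomposition of the quoted 450Mbps, not a confirmed 9x45 configuration.

```python
# Illustrative carrier-aggregation arithmetic:
carriers_mbps = [150, 150, 150]      # three aggregated LTE carriers (assumed split)

peak_rate = sum(carriers_mbps)
assert peak_rate == 450              # the quoted headline rate

legacy_rate = carriers_mbps[0]       # a single-channel device uses one carrier
assert legacy_rate == 150
```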

See also Gobi, the Jewel in Qualcomm’s Crown


More articles by Paul McLellan…


MIPS CPU and Newton2 Platform for Wearables

by Eric Esteve on 11-19-2014 at 11:00 am

I have written recently about SmarCoT (Smart Connected Things), and smartwatches are one of these numerous smart and connected applications that some still refer to as IoT. Imagination Technologies is working hard to be part of the SmarCoT ecosystem, and Ingenic, an IMG customer, has recently launched a MIPS-based chip (M200) and the Newton2 platform addressing high-end applications like wearables. Ingenic expects MIPS 64-bit CPUs to be integrated in target applications like:

  • Infotainment: smartwatches, augmented reality headsets, smart glasses, smart cameras
  • Healthcare: wearable healthcare monitors
  • Fitness and wellness: fitness bands, activity trackers, smart clothing, sleep sensors

And such products are likely to address the needs of middle- and upper-class customers, able to spend a couple hundred dollars or more for a gadget. These applications are expected to be a more effective driver for SmarCoT development than your electricity meter: they are more fashionable, rank you among the early adopters and immediately flatter the buyer. Just like listening to an iPod in the 2000s.

I have learned from this Imagination Technologies blog, Ingenic introduces MIPS-based M200 chip and Newton2 platform for wearables and IoT, that "The new GEAK Watch 2 uses an Ingenic wearable chip and delivers over 15 days of battery life". Here we come to one of the most important factors when dealing with wearables: power consumption. If the chip maker is responsible for the product's power consumption, it's clearly better to select the right CPU IP core, delivering the best power efficiency, and to architect the SoC for ultra-low power from the beginning. IMG has rethought its CPU portfolio with respect to the different markets, high-end or ultra-affordable mobile, smart TV and STB, networking and finally wearables, as you can see in the spider diagram below:

Thanks to the spider diagram, the result is clear: the wearable segment can live with a lower performance level and fewer features, but needs low power. This translates into implementing a power-saving hardware architecture where a high-performance MIPS CPU clocked at 1.2 GHz tackles most of the heavy lifting, while less demanding tasks are handled by a secondary low-power 300 MHz MIPS CPU. The multimedia department sees the addition of a 3D graphics engine that supports OpenGL ES 2.0. The M200 also integrates a dedicated, multi-standard video engine for low-power decoding and encoding of popular codecs like H.264 and VP8 (up to 720p at 30 fps). The chipset also includes an ISP for image pre-processing that supports a range of vital features for camera vision applications. In full operating mode the M200 chip consumes only 150mW, and standby power consumption for Newton2 is less than 3mW, allowing devices to work for twice as long.
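The two-CPU split described above amounts to a dispatch policy: route heavy jobs to the fast core, and keep light housekeeping on the small one so the big core can stay asleep. The threshold, task names and load figures in this sketch are invented for illustration; they are not Ingenic's scheduler.

```python
# Hedged sketch of a big.LITTLE-style dispatch decision.
BIG_CORE, LITTLE_CORE = "mips_1200mhz", "mips_300mhz"

def dispatch(load_mcycles_per_s):
    """The little core sustains up to ~300 Mcycles/s (assumed); beyond that,
    wake the big core rather than miss deadlines."""
    return LITTLE_CORE if load_mcycles_per_s <= 300 else BIG_CORE

tasks = {"sensor_poll": 20, "step_count": 50,
         "ui_render": 900, "video_decode_720p": 1100}
placement = {name: dispatch(load) for name, load in tasks.items()}

assert placement["sensor_poll"] == LITTLE_CORE        # big core stays asleep
assert placement["video_decode_720p"] == BIG_CORE     # heavy lifting wakes it
```

Keeping the always-on workload on the 300 MHz core is what makes multi-day battery life plausible despite the application-processor-class feature set.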

The above Ingenic M200 chip architecture sounds like an application processor design (for a late-2000s smartphone), as it's pretty complex, integrating the MIPS flavor of big.LITTLE, 512 KB of L2 cache, a DDR3/LPDDR2 memory controller and PHY, a video PU, a graphics PU, an ISP, an audio codec and plenty of interfaces, MIPI DSI and CSI and USB OTG, to name a few. Reaching an active power dissipation of 150 mW has certainly been challenging. And if you take a look at the complete system, a 15mm x 30mm board, it also integrates a PMU IC, a camera IC, a display IC, a GPS and sensor IC, a 9-axis gyroscope, a huge eMCP memory chip and a Broadcom WiFi + Bluetooth 4.1 chip!

As far as I am concerned, I have no problem ranking Newton2 within the IoT category; I just would rank it in the high-end segment, with smart glasses and the like. Thus, we may expect to see sales reaching (several) million units, but the billion-unit step is more questionable. This type of wearable (probably sold for several hundred dollars) could be among the flagship products driving mass-market customers to the SmarCoT concept (even if, for cost reasons, they don't massively buy this particular product), with end users expected to massively buy the multiple, yet-to-be-developed, SmarCoT products…

By Eric Esteve from IPNEST

More Articles From Eric Esteve


Atmel, IoT and CryptoAuthentication
by Paul McLellan on 11-19-2014 at 7:00 am

One of the companies best positioned to supply components into the IoT market is Atmel. For the time being, most designs will be done using standard components, not massive integration on an SoC targeted at a specific market. The biggest issue in the early stage of market development will be working out what the customer wants, so the big premium will be on getting to market early and iterating fast, not on premature cost optimization for a market that might not be big enough to support the design/NRE of a custom chip.

Atmel has microcontrollers, literally over 500 different flavors, across two families: the AVR family and a broad selection of ARM-based microcontrollers/processors. They have wireless connectivity. They have strong solutions in security.

Indeed last week at Electronica in Germany they announced the latest product in the SmartConnect family, the SAM W2 module. It is the industry’s first fully-integrated FCC-certified Wi-Fi module with a standalone MCU and hardware security from a single source. The module is tiny, not much larger than a penny. The module includes Atmel’s recently-announced 2.4GHz IEEE 802.11 b/g/n Wi-Fi WINC1500, along with an Atmel | SMART SAM D21 ARM Cortex M0+-based MCU and Atmel’s ATECC108A optimized CryptoAuthentication engine with ultra-secure hardware-based key storage for secure connectivity.


That last item is a key component for many IoT designs. Security is going to be a big thing, and with so many well-publicized breaches of software security, the algorithms, and particularly the keys, are moving quickly into hardware. That component, the ATECC108A, provides state-of-the-art hardware security including a full turnkey Elliptic Curve Digital Signature Algorithm (ECDSA) engine using key sizes of 256 or 283 bits – appropriate for modern security environments without the long computation delay typical of software solutions. Access to the device is through a standard I²C interface at speeds up to 1 Mb/s, compatible with standard serial EEPROM I²C interface specifications. Compared to software, the device is:
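To illustrate what an ECDSA engine actually computes, here is a minimal pure-Python sketch of signing and verification over the NIST P-256 curve. This is for illustration only, not the ATECC108A's implementation: the device performs these operations in hardware with protected key storage, and the fixed nonce used here for repeatability would leak the private key in real use.

```python
# Toy ECDSA over NIST P-256 (illustration only; a hardware engine
# like the ATECC108A does this with protected keys and random nonces).
import hashlib

# NIST P-256 domain parameters
p  = 0xffffffff00000001000000000000000000000000ffffffffffffffffffffffff
a  = p - 3
n  = 0xffffffff00000000ffffffffffffffffbce6faada7179e84f3b9cac2fc632551
Gx = 0x6b17d1f2e12c4247f8bce6e563a440f277037d812deb33a0f4a13945d898c296
Gy = 0x4fe342e2fe1a7f9b8ee7eb4a7c0f9e162bce33576b315ececbb6406837bf51f5
G  = (Gx, Gy)

def inv(x, m):
    return pow(x, -1, m)  # modular inverse (Python 3.8+)

def point_add(P, Q):
    # Elliptic-curve point addition; None is the point at infinity.
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None
    if P == Q:
        s = (3 * x1 * x1 + a) * inv(2 * y1, p) % p
    else:
        s = (y2 - y1) * inv(x2 - x1, p) % p
    x3 = (s * s - x1 - x2) % p
    return (x3, (s * (x1 - x3) - y1) % p)

def scalar_mult(k, P):
    # Double-and-add scalar multiplication.
    R = None
    while k:
        if k & 1:
            R = point_add(R, P)
        P = point_add(P, P)
        k >>= 1
    return R

def sign(priv, msg, k):
    # INSECURE: k must be random and secret in real use; fixed here
    # only to make the example reproducible.
    z = int.from_bytes(hashlib.sha256(msg).digest(), "big")
    r = scalar_mult(k, G)[0] % n
    s = inv(k, n) * (z + r * priv) % n
    return (r, s)

def verify(pub, msg, sig):
    r, s = sig
    z = int.from_bytes(hashlib.sha256(msg).digest(), "big")
    w = inv(s, n)
    P = point_add(scalar_mult(z * w % n, G), scalar_mult(r * w % n, pub))
    return P is not None and P[0] % n == r
```

A signature made with `sign(priv, msg, k)` verifies against the public key `scalar_mult(priv, G)`; flipping a single message byte makes verification fail, which is exactly the tamper-evidence property the hardware engine provides at I²C speeds.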

  • higher performance (faster encryption)
  • lower power
  • much harder to compromise

Atmel has a new white paper out, "Integrating the Internet of Things: Necessary Building Blocks for Broad Market Adoption." Depending on whose numbers you believe, there will be 50 billion IoT edge devices connected by 2020.


As it says in the white paper: "On first inspection, the requirements of an IoT edge device appear to be much the same as any other microcontroller (MCU) based development project. You have one or more sensors that are read by an MCU, the data may then be processed locally prior to sending it off to another application or causing another event to occur such as turning on a motor. However, there are decisions to be made regarding how to communicate with these other applications. Wired, wireless, and power line communication (PLC) are the usual options. But, then you have to consider that many IoT devices are going to be battery powered, which means that their power consumption needs to be kept as low as possible to prolong battery life. The complexities deepen when you consider the security implications of a connected device as well. And that's not just security of data being transferred, but also ensuring your device can't be cloned and that it does not allow unauthorized applications to run on it."


For almost any application the building blocks for an IoT edge node are the same:

  • Embedded processing
  • Sensors
  • Connectivity
  • Security
  • and, while not really a building block, ultra-low power, especially for always-on applications

My view is that the biggest of these issues will be security. After all, even though Atmel has hundreds of different microcontrollers and microprocessors, there are plenty of other suppliers. The same goes for connectivity solutions. But strong cryptographic solutions implemented in hardware are much less common.

The new IoT white paper is available for download here.


More articles by Paul McLellan…


Arteris on a winning streak in 2014
by Don Dingee on 11-19-2014 at 3:00 am

When Arteris sold key network-on-chip intellectual property and most of its human assets to Qualcomm earlier this year, it was big news. We suggested the bigger news after a restaffing effort would be a next-generation NoC release, and a new round of design wins.

Some developments were already in the pipeline. Continue reading “Arteris on a winning streak in 2014”