
Smartphone Trends Revealed
by Daniel Payne on 11-20-2015 at 4:00 pm

I’ve been using cell phones since the 1980s and I’ll never forget my first one, the Motorola DynaTAC (aka Brick Phone). The phone plan was paid for by my EDA employer, and it did make me more productive because clients, prospects and co-workers could get hold of me by simply dialing, without having to go through a secretary to leave me a message. It seems like every couple of years I have been upgrading my cell phone, and with AT&T I’m on a two-year cycle where I purchase a replacement phone, typically a refurbished phone at a discounted price. That’s my experience, but what is happening in the smartphone industry today?

Thankfully we have sources like IC Insights that dutifully collect and report on such business trends. I’ve just read their most recent research bulletin titled, “Strong Smartphone Growth in a Saturated Cellphone Market”. Their analysis shows that in 2015 some 89% of new cell phones are being sold to existing customers, while 10 years ago that number was only 56%.

1.675 billion, now that’s a lot of cell phones being sold in 2015, and the percentage sold to existing customers is predicted to inch up from 89% to 91% in 2016. Vendors like Apple, Samsung and Xiaomi are hoping to convince consumers like me to upgrade to a new cell phone model often, ideally at least once per year.

The cell phone market has two types of phones: feature phones and smartphones. The feature phones have pre-built apps and cannot add new apps, while smartphones use Apple iOS or Android (and some Windows) to allow nearly unlimited apps. Let’s look at the growth rate of the total cellphone market versus Smartphones and other cellphones:

As expected all of the growth is in the Smartphone segment, while all of the decline is in the other cellphone category. Smartphone shipments were over 50% of the total quarterly cellphone shipments starting back in Q1 of 2013. In Q4 of 2015 the estimate is for 80% of total cellphone shipments to be a Smartphone. By 2019 the forecast is for Smartphones to command 95% of total cellphone shipments.

The two biggest smartphone vendors, Apple and Samsung, sold some 504 million devices in 2014 for a 40% market share; in 2015 they are expected to ship 560 million smartphones, though their combined share slips slightly to 39%. Samsung is in trouble because its shipments are expected to decline by 1% this year, while Apple is flying high with a projected increase of 20%, thank you iPhone 6s.

Who is causing Samsung to decrease in unit shipments? It’s the Chinese companies: the seven largest vendors in China will now reach a combined 31% market share. Remember the Chinese company names Huawei and Xiaomi.

Read the full research bulletin here.



Security Coprocessor Marks a New Approach to Provisioning for IoT Edge Devices
by Majeed Ahmad on 11-20-2015 at 12:00 pm

The advent of the security coprocessor, which offloads the provisioning task from the main MCU or MPU, is opening new possibilities for Internet of Things (IoT) product developers to secure edge devices at lower cost and power points, regardless of scale.

Hardware engineers often like to say that there is no such thing as software security, and they point to Apple, which has all the money in the world and an army of software developers. The maker of the iPhone still chose a secure element (SE)-based hardware solution when it put together the Apple Pay mobile commerce service. With a hardware solution, engineers keep the ecosystem fully under control.

Security is the basic building block of the IoT bandwagon, and there is a lot of talk about securing the access points. So far, the security stack has largely been integrated into the MCUs and MPUs serving the IoT products. However, tasks like encryption and authentication take a lot of battery power, a precious commodity in the IoT world.


Security coprocessor for provisioning: A new milestone in the IoT arena

Atmel’s solution: a coprocessor that offloads security tasks from the main MCU or MPU. The Atmel ATECC508A security coprocessor uses elliptic curve cryptography (ECC) capabilities to create secure hardware-based key storage for IoT markets such as home automation, industrial networking and medical.

The ATECC508A crypto-authentication chip comes at a manageable cost (50 cents in low volumes) and consumes very low power. The chip makes provisioning, the process of generating a security key, a viable option for small and mid-sized IoT product developers.

A New Approach to Provisioning

It’s worth noting that security breaches rarely involve breaking the encryption code; hackers mostly use techniques like spoofing to steal the ID. So the Atmel ATECC508A crypto engine focuses on tasks such as key generation and authentication. The chip uses ECC math to perform sign-verify authentication and subsequently to verify the key agreement.
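
To make those two operations concrete, here is a minimal software sketch of ECDSA sign-verify and ECDH key agreement using the Python cryptography package. The curve choice, key handling and names are illustrative assumptions; the ATECC508A performs the equivalent math in hardware with keys that never leave the chip.

```python
# Illustrative sketch only: sign-verify authentication plus ECDH key agreement
# done in software. The ATECC508A does the equivalent in hardware; the curve
# and names here are assumptions, not Atmel's implementation.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Key pairs (on the real chip the device key is generated and stored internally)
device_key = ec.generate_private_key(ec.SECP256R1())
host_key = ec.generate_private_key(ec.SECP256R1())

# Sign-verify authentication: host sends a challenge, device signs it,
# host verifies the signature against the device's public key.
challenge = b"random-nonce-from-host"
signature = device_key.sign(challenge, ec.ECDSA(hashes.SHA256()))
device_key.public_key().verify(signature, challenge, ec.ECDSA(hashes.SHA256()))

# Key agreement: both sides derive the same shared secret via ECDH,
# then run it through a KDF to get a session key.
shared_secret = device_key.exchange(ec.ECDH(), host_key.public_key())
session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"session").derive(shared_secret)
print("session key established:", session_key.hex()[:16], "...")
```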

The IoT security—which includes the exchange of certificates and other trusted objects—is implemented at the edge node in two steps: provisioning and commissioning. Provisioning is the process of loading a unique private key and other certificates to provide identity to a device while commissioning allows the pre-provisioned device to join a network. Moreover, provisioning is carried out during the manufacturing or testing of a device and commissioning is performed later by the network service provider and end-user.


What can happen to an unsecure node?

Presently, snooping threats are mostly countered through a hardware security module (HSM), a mechanism to store, protect and manage keys, which requires a centralized database approach and entails significant upfront costs in infrastructure and logistics. On the other hand, the ATECC508A security coprocessor simplifies the deployment of secure IoT nodes through pre-provisioning with internally generated unique keys, associated certificates and certification-ready authentication.

It’s a new approach to provisioning that not only prevents over-building, a weakness of the HSM-centric techniques, but also prevents cloning for the gray market. The key is controlled by a separate chip like the ATECC508A security coprocessor, so if there are 1,000 IoT systems to be built, there will be exactly 1,000 security coprocessors for them.

Certified-ID Security Platform

At the ARM TechCon held in Santa Clara, California on 10-12 November 2015, Atmel went a step further when it announced the availability of its Certified-ID security platform, which lets IoT entry points such as edge devices acquire certified and trusted identities.

The Atmel Certified-ID platform leverages the internal key generation capabilities of the ATECC508A security coprocessor to deliver distributed key provisioning for any device joining the IoT network. In this way it enables decentralized secure key generation and eliminates the upfront cost of building provisioning infrastructure for IoT setups deployed at smaller scales.


Toolkits allow system design houses to provision prototypes and first-run devices

Atmel, a pioneer in Trusted Platform Module (TPM)-based secure microcontrollers, is now working with cloud service providers like Proximetry and Exosite to turn its ATECC508A coprocessor-based Certified-ID platform into an IoT edge node-to-cloud turnkey security solution. TPM chips, which have roots in the computer industry, aren’t well-positioned to meet the cost demands of low-price IoT edge devices.

The San Jose, California-based microcontroller supplier also announced the availability of two provisioning toolkits for low-volume IoT systems at the ARM TechCon. The AT88CKECCROOT toolkit is a ‘master template’ that creates and manages a certificate root of trust in any IoT ecosystem. The AT88CKECCSIGNER, on the other hand, is a production kit that allows designers and manufacturers to generate tamper-resistant keys and security certificates for their IoT applications.

Also read:

4 Reasons Why Atmel is Ready to Ride the IoT Wave

Atmel’s L21 MCU for IoT Tops Low Power Benchmark

6 Memory Considerations for IoT Designs Built Around Cortex-M7 MCUs

Majeed Ahmad is the author of the book The Next Web of 50 Billion Devices.


Globalfoundries 22FDX Technology Shows Advantages in PPA over 28nm Node
by Tom Simon on 11-20-2015 at 7:00 am

I really enjoy ARM TechCon when it rolls around every year because it has such a wide range of topics and exhibits. You can find maker gadgets, IoT information, small boards for industrial control, software development kits, and semiconductor IP vendors as well as the big EDA players and foundries. This year, after perusing the exhibit floor, I attended a talk sponsored by Cadence on using the Globalfoundries 22nm FD-SOI process to implement a quad-core ARM Cortex-A17. Joerg Winkler and Tamer Ragheb of Globalfoundries discussed the rationale for choosing their 22FDX technology for this project, and then went into the specifics of using Cadence Innovus for the actual physical design.

We all know that the 28nm node is popular because of its relatively low cost and ease of implementation. To reduce power or increase performance beyond this, companies typically need to make the leap to FinFET nodes, which comes with a big increase in cost and complexity. FD-SOI is increasingly becoming a choice to explore for companies looking for lower power and better performance. FD-SOI is already a low-leakage process because of the fully depleted channel that is insulated from the body silicon.

One of the most interesting and appealing aspects of FD-SOI is the ability to dynamically apply forward or reverse body bias to the devices without causing more leakage. Adding body bias on FD-SOI does not draw current through the source or drain. Forward body bias (FBB) can reduce the voltage needed to make the gate switch and also produces a faster transition; both of these behaviors save significant power and boost performance. Reverse body bias (RBB) can reduce leakage and is useful for reducing static power consumption when the gates are not needed.

The Globalfoundries presentation at ARM TechCon discussed their unique 22FD-SOI node, which is comparable to 28nm bulk in price but offers better power/performance when FBB and RBB are designed in. Admittedly, extra work is required, and the presentation described in some detail how the body-biasing domains were partitioned and routed. However, the work is comparable to the effort required for power and voltage domains, which are in widespread use. The Cadence Innovus reference flow that was used for the paper is shown below.

Globalfoundries has also implemented an ARM Cortex-A9 Neon in a number of different processes for the purpose of comparing PPA, again using Cadence Innovus. The 28nm implementation has a single operating point, i.e. nominal frequency and total power. Thanks to the body-bias capability, the 22FDX design can operate over a range of frequencies and corresponding power values. The following chart shows the results. The 22FDX design was implemented twice, with LVT for the FBB case and with RVT for the RBB case, which gave overlap in their operating ranges. The 22FDX designs are a win any way you look at it: the area is lower, and power and performance improve, depending on which way the body bias is applied.

I remember back when low-Vt cells were introduced, at the time when leakage was starting to grow rapidly relative to dynamic power. Back then I worked with several companies to devise methods for selectively placing high- and low-Vt cells to squeeze out performance while keeping the low-Vt cell count to a minimum. A solution like 22FDX with body biasing would have been a revelation. 22FDX is like getting a knob to tune the design after it is completed.

It’s good that companies are getting more attractive choices for keeping design and fabrication costs low while also improving PPA.
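
For readers who want the device physics behind that knob, a textbook first-order expression for how body bias shifts the threshold voltage is shown below. It uses the standard MOS body-effect parameters and is an illustrative approximation, not a Globalfoundries device model.

```latex
% First-order body-effect approximation (illustrative, not a foundry model).
% V_{SB} > 0 (reverse body bias) raises V_T and cuts leakage;
% V_{SB} < 0 (forward body bias) lowers V_T and speeds up switching.
V_T = V_{T0} + \gamma\left(\sqrt{2\phi_F + V_{SB}} - \sqrt{2\phi_F}\right)
```

Here V_{T0} is the zero-bias threshold voltage, gamma is the body-effect coefficient, phi_F is the Fermi potential, and V_{SB} is the source-to-body voltage set by the bias generator.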


Migrating legacy USB Stack to USB Type-C platforms
by Rajaram Regupathy on 11-19-2015 at 4:00 pm

There is a great deal of buzz around the new USB Type-C connector and its power delivery specifications. Industry leaders like Intel, Google, and Apple are leading the way by integrating this new connector into products like MacBook and Chromebook. The new connector will soon find its way into smartphones and many other types of products.

Just as USB slowly replaced legacy ports, USB Type-C may slowly replace most of the connectors on your PCs or notebooks. This next-generation connector is powerful and flexible. It can provide up to 100 W of power and support multiple functions over the same lines in a standard way. This sharing of signal lines and the negotiation of higher power are achieved using a protocol defined in the USB Power Delivery specification, carried over the Configuration Channel (CC) signal line of the USB Type-C connector.

This article explores some of the key changes that need to be considered when planning to integrate a legacy USB stack with USB Type-C support.

VBUS Session/Power Role Swap:
One of the important changes in the new specification involves the VBUS line. Over and above the support for higher power, there are other changes that will affect a legacy USB system. For example, the new specification enables a USB device to power up the host, and it defines scenarios where removal of VBUS does not indicate disconnection. During a session, a device can change the direction of VBUS using the Power Role Swap command, during which VBUS will drop close to 0 V. VBUS management is handled outside of the USB system, so legacy USB systems must handle such scenarios in tandem with the CC module.

Role Negotiation – Data Role Swap – No OTG:
One of the more complex implementations in a legacy USB system is the On The Go (OTG) state and timing implementation. The new specification decides the role of a device over the CC channel using simple protocol commands, or bases the role on the initial state of the CC channel. Thus, a legacy USB system now has to migrate from OTG to the CC protocol and base its role on events from the CC module.

USB Connection/Disconnection:
Connection and disconnection of USB functionality with legacy OTG connectors was implemented primarily using the ID pin or VBUS. In the new USB Type-C environment, connection and disconnection are based on the state of the CC channel. The voltage level of the CC signal line tells the CC module the connection status, which then has to be communicated to the legacy USB system. This CC signal line also helps determine how much current can be consumed when the system is not in power delivery mode. Thus, a legacy USB system has to wait for events from the CC module for connection and disconnection, as well as for power capabilities.
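
As a rough illustration of how a CC module might translate the measured CC voltage into a connection state and current capability for the legacy stack, here is a minimal Python sketch. The threshold values are simplified approximations of the Type-C specification's detection ranges, and the function and event names are hypothetical.

```python
# Illustrative sketch of CC-based attach detection on a sink (device) port.
# Thresholds are simplified approximations of the USB Type-C detection ranges;
# consult the specification for the normative values. Names are hypothetical.
from typing import Optional

def classify_cc_voltage(cc_volts: float) -> str:
    """Map the voltage seen on the CC pin (through Rd) to the source's
    advertised current capability."""
    if cc_volts < 0.2:
        return "disconnected"                      # no Rp pull-up detected
    elif cc_volts < 0.66:
        return "attached: default USB power"
    elif cc_volts < 1.23:
        return "attached: 1.5 A at 5 V"
    else:
        return "attached: 3.0 A at 5 V"

def cc_event_for_legacy_stack(prev_state: str, cc_volts: float) -> Optional[str]:
    """Generate the connect/disconnect event the legacy USB stack now waits
    for, instead of watching VBUS or an ID pin."""
    new_state = classify_cc_voltage(cc_volts)
    if prev_state == "disconnected" and new_state != "disconnected":
        return "CONNECT (" + new_state + ")"
    if prev_state != "disconnected" and new_state == "disconnected":
        return "DISCONNECT"
    return None

# Example: a source advertising 1.5 A attaches, then detaches.
print(cc_event_for_legacy_stack("disconnected", 0.9))             # CONNECT
print(cc_event_for_legacy_stack("attached: 1.5 A at 5 V", 0.0))   # DISCONNECT
```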

Alternate Modes:
The new USB Type-C connector supports multiple functions over and above USB functionality on its USB signal lines. For example, if a system enters a four-lane DisplayPort alternate mode, the DisplayPort interface consumes all the SuperSpeed signal lines of the USB Type-C port. As a consequence, no USB 3 functionality is available during that alternate mode. This multiplexing of the USB signal lines is handled outside the USB software subsystem, namely by the CC module, and the legacy USB subsystem needs to be adapted for such changes.

Billboard Class:
A new class of USB device, the Billboard class, has been defined by the USB Implementers Forum (USB-IF) for devices that support alternate modes. As the designation suggests, the Billboard class is an informative class and is used to communicate the alternate modes supported by a device to the host. It is important to note that this class does not support any USB functions and thus does not consume any endpoints. This change requires a new driver to read and interpret the descriptors of this new class and present them to the user.

This article has explored some of the key changes that need to be taken care of in a legacy USB host or device stack when migrating to the new USB Type-C connector. Though the change is at the USB connector, it affects USB software behavior as well. Developers also have to keep in mind that with the new USB Type-C connector, the cable itself plays a role in determining system behavior.


Intel Analyst Day – More Capex-Less Losses- PCs Slow/Stabilizing- More M&A?
by Robert Maire on 11-19-2015 at 12:00 pm

Like other semi stocks we could see a relief rally, as the analyst day is likely to be better than previous news flow - not much new to tell. Almost all of the bad news has been wrung out of the stock - 10nm delays, slowing PCs, tablet losses - and the bar has been reset on most issues to “beatable levels”.
Continue reading “Intel Analyst Day – More Capex-Less Losses- PCs Slow/Stabilizing- More M&A?”


Wearables + IoT = Internet of People
by Nick Langston on 11-19-2015 at 7:00 am

The Internet of Things (IoT) promises sensor-equipped devices in constant communication with the cloud. The ‘Things’ are all manner of devices measuring and sensing, ceaselessly clicking away and transforming the amorphous Sturm und Drang of life into a rich river of digits and data. It’s estimated that by 2020 there will be more than four such devices for every person on Earth.

So what?

If the best that the IoT can offer is a refrigerator that can update my shopping list automatically then who really cares?

It’s when Wearables are leveraged along with all other facets of the IoT that we begin to see the true value to individuals and communities. This is the beginning of the Internet of People.

Today, nearly one billion people are carrying smartphones. Besides the tremendous processing power packed into even an older phone, these devices also include the ability to sense orientation, location, altitude, proximity and motion. Of course, they can also transmit data to the cloud. Complement these sensors with additional biometric sensors contained in wearable devices or smart garments and you have the ability to put personal measurement data in a new context – time of day, location, route, activity, companions.

Imagine what is possible as other parts of the IoP begin to emerge like the Smart Home or Connected Car. Your smart bedding can measure your sleep invisibly. With soft sensors in the seats and steering wheel your car can track your heart rate and temperature as well as fatigue. The smartphone can be the primary device to aggregate all of this data.

But again, so what?
Wearables today are for the most part descriptive – they quantify what we’ve already done. The opportunity to create real value is in allowing wearables to become prescriptive – telling us what we should do. This is less draconian than it sounds. Already some devices (the Spire and Garmin products come to mind) notice a period of inactivity and buzz us with a message to get up and move around. But it’s when all of this contextual data – from our wearables, homes, cars and workplaces – is analyzed to find out when we’re at our best that true insight can be delivered.

Imagine for a minute that the great day or week that you experienced could be quantified. Imagine the same for the tough day or week you had. By understanding what you did and experienced leading up to that great week, the IoP can help you stay on track.

All of these sensing devices working in concert with constantly improving data science will deliver us something that technology has long promised: a metric for happiness.

You can imagine a personal dashboard with an arrow pointing your status toward the green or red end of the dial and prompting you with specific actions to move it into the green.
This is where the technology wants to go.

Here’s a question for you: How would you feel knowing all this information was being collected about you – even if it was used to help you?




Vector DSP IP charts course for IoT/M2M
by Don Dingee on 11-18-2015 at 4:00 pm

For some time, we’ve been talking about ideas for IoT-specific chips, evolved from garden-variety MCUs or mobile SoCs. I sat in on a fascinating talk from an MCU vendor at ARM TechCon 2015 regarding multi-protocol radio silicon, and a question kept coming from the audience: what about software-defined modems? The somewhat vague response from the presenter was that software-defined modems at IoT/M2M scale are possible, but expensive in terms of silicon and power consumption.

That’s not a new problem. One of the motivating factors for adding fast multiplier capability to the ARM7 core way back when was to enable some lighter algorithms to run without a full-blown DSP. There are still jobs that need a high performance DSP, like 4G LTE, and the drive from CEVA, Qualcomm, and others has been to streamline DSP cores for efficiency.

A lot of territory exists between those extremes, much of it on the IoT and in M2M applications. LTE has been a two-edged sword for M2M. GSM bands are being repurposed for use in LTE as crowded spectrum becomes scarce. AT&T, with a burgeoning 4G base to worry about, is sunsetting its 2G network at the end of 2016. LTE has covered the globe, literally, at the expense of some other once-promising standards such as WiMAX. For those that need increased mobile bandwidth and a long future ahead, LTE is hard to beat in M2M.

Let’s suppose your ideal application doesn’t resemble a head-shrunken smartphone with a high performance LTE DSP core. There are a lot of IoT/M2M protocols to choose from. Some are quite solid, like Bluetooth, GNSS, Wi-Fi, and ZigBee. Some are gaining attention rapidly, most notably LoRa and Thread. Some are still dripping wet specifications, like Narrowband-IoT. Some are dedicated to IoT networks, like SIGFOX. The use cases vary widely.

Several questions arise. Who knows, for sure, which of these specifications will win, and in what exact variant of the specification? Maybe more importantly, will some edge devices be required to operate on more than one network, either simultaneously or with one side always-on and the other sleepy? What about chips for multi-protocol gateways? If one commits to a network, or combination of networks, and something changes, a hardware-centric design can blow up.


CEVA is challenging the idea that vector DSP can’t scale to these lower power IoT/M2M devices. Two new cores, the CEVA-XC5 and CEVA-XC8, go after these software-defined modems in style. Compared to the previous XC323 low-end, the new IP brings up to 70% lower dynamic power consumption, 40% die area reduction, and 20% lower memory usage.

There are certainly use cases where a tuned hardware implementation of a mature standard can be better than a software-defined modem. Bluetooth Smart comes to mind, and CEVA’s Emmanuel Gresset admits that if BLE is all a design needs, CEVA’s hardware-based implementations are better. Software on a vector DSP is the play where more advanced IoT/M2M protocols and some flexibility are required, perhaps in regional deployments or where specifications wax and wane in popularity over a long life cycle. There are some details in antenna design that need to be addressed when combining radios, but those also exist in hardware-based designs.

I go back to the unfortunate story of WiMAX, a specification that was supposed to dominate wireless deployments and instead lost silicon supporters rapidly to LTE. While backers of all these new IoT/M2M specs are optimistic, history may repeat itself for things like Weightless and 802.22. The XC5 and XC8 support LTE Cat-0 and Cat-M, including long DRX and power saving mode (PSM). On the other hand, SIGFOX or LoRa or Narrowband-IoT may take off and be the next big thing. The name of this game is flexibility.


The primary difference between the XC5 and XC8 is the MAC capability – 16 per cycle in the XC5, 32 per cycle in the XC8, presumably with some area and power delta. Both feature 8-way unrestricted VLIW, and 4-way set associative non-blocking program cache. AXI integration is a given. The power scaling unit allows multiple clock sources and multiple voltage domains, allowing flexible SoC power management strategies.
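
To make the MAC numbers concrete, here is a hypothetical back-of-envelope sketch estimating cycles per output sample for an FIR filter on a 16-MAC versus a 32-MAC engine. The filter length and the perfect-utilization assumption are illustrative, not CEVA figures.

```python
# Back-of-envelope MAC throughput comparison. Assumes perfect MAC utilization
# and ignores load/store and control overhead; tap count is a made-up example.
import math

def fir_cycles_per_sample(num_taps: int, macs_per_cycle: int) -> int:
    """An N-tap FIR filter needs N multiply-accumulates per output sample."""
    return math.ceil(num_taps / macs_per_cycle)

taps = 64  # hypothetical channel filter length
for name, macs in (("16 MACs/cycle (XC5-class)", 16),
                   ("32 MACs/cycle (XC8-class)", 32)):
    cycles = fir_cycles_per_sample(taps, macs)
    print(name + ":", cycles, "cycles per output sample for a", taps, "tap FIR")

# At a fixed clock, halving cycles per sample either doubles the sustainable
# sample rate or lets the core finish sooner and power down, at some area cost.
```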

This kind of power management capability also pushes this DSP IP down into the wearable space, where some designers have chosen tethered operation to save power instead of providing a direct LTE connection on the wearable itself. It also opens the possibility of multi-protocol software-defined modems, as in this example with LTE Cat-0 and GNSS. LTE-OTDOA is also gaining momentum, using basestation signals for positioning.


One reason software-defined modems haven’t featured much in this IoT/M2M conversation is that, until now, power-efficient vector DSP IP hasn’t existed. The fact that the microcontroller vendors are looking long and hard at multi-protocol radios for IoT devices is significant, and they may get it right on some point implementations. For many others working with specifications that are in flux or of varying regional importance, a soft vector DSP-based implementation may be the best way to get there quickly and cost effectively.

For more on the CEVA-XC5 and CEVA-XC8 DSP cores, visit:

CEVA-XC5 / CEVA-XC8 Communication DSPs

More articles from Don…


The Internet of Challenges Bumps Along
by Bernard Murphy on 11-18-2015 at 12:00 pm

More from ARM TechCon. Great show as always, high-energy and a reminder that systems and solutions are where it’s at. There was a very big focus on Internet of Things in all its many guises, from devices to detect whether a garbage container is full, to a child’s necklace to store immunization and other health data, to new ways to push entertainment directly to whatever screen we happen to have in hand. But for those who think the IoT is an easy path to fame and fortune, there were reminders that this domain still has plenty of challenges ahead.

Let’s start with venture investment. That sexy new wearable you think will blow the VCs away? Forget it. According to Eric Klein, a partner and VC at Lemnos Labs, the Consumer IoT market is “very saturated”. That aligns with a couple of frequently overlooked points in consumer economics. First, we only have so much disposable income. Even if a carrier is nominally prepared to underwrite the cost of a device, they are digging deep into your wallet for that plan. If they find a way to bundle an additional device into the plan, they’ll dig even deeper. Second, we all hit gadget fatigue at some point, even the uber-geeks. Life (for most of us) offers too many other distractions. Eric’s take-away – focus your creative energy on enterprise and commercial applications. Businesses don’t care about sexy; they pay to reduce costs and increase revenues.

Next up, security. An expert panel (Paul Kocher – Cryptography Research, Eduardo Montañez – Freescale and Zack Colby – ARM) discussed where they really think we’re at. ARM has done a lot of good work to build a turnkey security solution at the device level: TrustZone® with CryptoCell as the basis for a trusted platform, secure communication over mbed TLS, secure code compartments through the mbed OS uVisor, and secure lifecycle management through mbed ID, Config and Update. And GlobalPlatform will certify complete Trusted Execution Environments. All of these are good steps toward a unified approach to security in the IoT.

But another observation was that this only works if the people at the end of the value chain turn it on. There is evidence that at least some of those folks find that step too difficult, or too costly, or just not very important – until of course there’s a breach, at which point it will be your fault. So we have to deal not only with bad actors but also careless actors – in buying, installing and maintaining. Changing these behaviors won’t be as easy as standardizing software and hardware.

Last and very far from least, a few observations from Colt McAnlis’ keynote. He’s a developer advocate at Google and a very entertaining speaker. His main point is simply stated and sobering – that if we’re not careful, all the entrepreneurs who are busy developing zillions of new solutions are going to screw up the IoT. One example is obvious (after he presented it). You walk through a shopping mall; each business has B2C (biz-to-consumer) “things” to ping your phone as you walk by, alerting you to their great sale on jeans. Two problems here: the annoyance of constant pings and your battery draining thanks to that WiFi/BT communication. Then you click on the message/app which takes you to a website with a rich graphic (or even video) experience, requiring an LTE session, which drains your battery even more. Now amplify that to add automated parking stations, automated hotel doors, automated checkouts… Pretty soon you’ll need a backpack battery to keep your phone charged. And then someone (Colt’s daughter in the keynote) will develop an app to block all that stuff and B2C IoT (two acronyms in a row, a personal best) will die a quick death. That’s what will happen if we don’t coordinate to avoid annoying consumers.

Colt had more good stuff on not defaulting to using internet “classic” technologies and data on the IoT. An example he gave was JSON. Great method on the standard internet to pass around structured packages of data. All text, human readable, easily debugged, what’s not to like? The answer is a massively bloated format burning unnecessary power to communicate and pack/unpack on a cell phone. There are more compact formats like FlatBuffers which are much more efficient. Another “classic” use-model example is widespread use of pictures. We’re graphics junkies now but they’re very expensive to download. That’s not a problem if you can pick and choose but see above. In our website development kits, we need a new target “format”, along with laptop, tablet and phone. We need an “only marginally annoying IoT format” which will skip downloading ads, cute Flash videos (and while we’re at it, will download just the one page you need, in minimal text, not the home page, or a page where you have to scroll forever to get to what you need).
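
To illustrate Colt's point about text formats, here is a small Python comparison of the same sensor record encoded as JSON versus a fixed binary layout. The record fields are made-up examples, and plain struct packing stands in for compact schemes such as FlatBuffers, which add schemas and zero-copy access on top.

```python
# Size of one hypothetical sensor reading as JSON text vs. a fixed binary layout.
import json
import struct

reading = {"device_id": 1042, "timestamp": 1447891200,
           "temp_c": 21.5, "battery_pct": 87}

json_bytes = json.dumps(reading).encode("utf-8")

# Binary layout: u32 device id, u32 timestamp, f32 temperature, u8 battery %
binary_bytes = struct.pack("<IIfB",
                           reading["device_id"],
                           reading["timestamp"],
                           reading["temp_c"],
                           reading["battery_pct"])

print("JSON:  ", len(json_bytes), "bytes")    # roughly 80 bytes of text
print("Binary:", len(binary_bytes), "bytes")  # 13 bytes
```

Every extra byte has to be radioed, parsed and unpacked on a battery-powered phone, which is exactly the overhead Colt was warning about.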

Colt made us hold up our phones at the end and swear we were not going to screw up IoT. If we want it to thrive, we’d better listen.

More articles by Bernard…


Intelligent Devices for Internet Of Things (IDIOT’s), Software Defined Networks and BotNets
by Arun Majumdar on 11-18-2015 at 7:00 am

One conversation topic I hear these days is: the Internet of Things is coming and all the devices will be intelligent. It will be achieved by embedding some kind of AI technology or machine learning or reasoning or whatever into the devices themselves.
Continue reading “Intelligent Devices for Internet Of Things (IDIOT’s), Software Defined Networks and BotNets”


Breaking the Limits of SoC Prototyping
by Pawan Fangaria on 11-17-2015 at 12:00 pm

Earlier this month, during my conversation with Dr. Walden C. Rhines, he emphasized the need for our next-generation designers to think at the system level and design everything with the system view in mind. Verification will go through a major transformation at the system level. I can see FPGA prototyping systems already in place for large SoCs: designers sitting at multiple sites can access an FPGA prototyping system remotely and prototype an IP, subsystem or SoC utilizing one or more FPGAs without any issue.

It takes a lot more work than just increasing FPGA capacity to handle large SoCs; both combinational logic and sequential blocks need innovative methods to accommodate different types of logic structures, clock structures, modes of operation, test structures, I/O optimization, flexible interconnects, and so on, while maintaining high performance. The configurability and flexibility of hardware extension determine the scalability of an FPGA system, but even more important is the software support that provides ease of design, verification, debug, and rework. The potential of a technology can only be realized after making it easy to design and operate.

Recently, S2C announced its single-module UltraScale VU440 Prodigy Logic Module for FPGA-based prototyping. Today, it was a pleasure to see another press release from S2C extending its Xilinx Virtex UltraScale FPGA prototyping board family with the Dual VU440 Prodigy Logic Module. Now a larger design can be partitioned and fitted onto two VU440 FPGAs without any need for cabling, thus improving prototype reliability and performance.


S2C has a clear lead in terms of scalability and ease of prototyping large SoCs on FPGAs. The Dual VU440 LM is very compact (280mm x 250mm) on a single board and can handle up to 88 million gates. It can be used as a standalone board or inside the Cloud Cube offered by S2C. The Cloud Cube is an enterprise-class prototyping system that can accommodate up to 16 such logic modules, thus scaling the SoC design capacity to more than a billion gates.


The two FPGAs are connected through 518 direct interconnects and 12 GTH transceivers. There are 1200 general-purpose I/Os and 64 GTH transceivers on 12 high-speed connectors that are compatible with S2C’s Prodigy Daughter Cards. The system has 177.2Mb of internal memory, plus DDR4 SO-DIMM and DDR3 SO-DIMM sockets that can support up to 16GB of high-speed memory. The clock management scheme can be set for standalone as well as Cloud Cube mode. The Prodigy connector I/O voltages can be adjusted through the runtime software GUI, with four on-board status LEDs to indicate the I/O voltage. The system supports 30,000+ design interconnections between the two FPGAs with LVDS running at 1.2GHz.
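
As a rough back-of-envelope, the jump from 518 physical interconnects to 30,000+ logical design interconnections comes from time-domain multiplexing the pins. The sketch below shows the arithmetic under an assumed design clock; the multiplexing ratio is illustrative, not an S2C specification.

```python
# Back-of-envelope: pin multiplexing turns physical FPGA-to-FPGA wires into
# many more logical design interconnections. The design clock is an assumption
# for illustration, not an S2C figure.
physical_pairs = 518        # direct interconnects between the two VU440s
lvds_rate_mhz = 1200        # LVDS running at 1.2 GHz (from the announcement)
design_clock_mhz = 20       # assumed prototype design clock
mux_ratio = lvds_rate_mhz // design_clock_mhz   # signals time-sliced per wire

logical_signals = physical_pairs * mux_ratio
print("~", mux_ratio, "signals per wire ->", logical_signals, "logical interconnections")
# With a 20 MHz design clock this lands around 31,000, consistent with the
# "30,000+ design interconnections" figure quoted for the Dual VU440 module.
```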

The system is supported by the state-of-the-art Prodigy Player Pro Runtime software, which can import the design, partition it and run the P&R software. The runtime software sets up clocks, resets, I/O voltages, self-test, and hardware monitoring. Monitoring of the hardware, including cable setups between connectors and daughter cards, can be done from a remote location. S2C also provides other software for design implementation, a Multi-Debug system for multi-FPGA deep-trace debug, and ProtoBridge AXI software for interconnect.

The Prodigy ProtoBridge AXI software links the system-level simulation environment to the FPGA-based prototyping platform, thus allowing managed traffic flow. The abstraction of the interconnect also renders the IP blocks reusable.

The Dual VU440 LM is S2C’s 6th-generation SoC prototyping system; it is quite sophisticated compared to previous-generation FPGA systems and allows easy IP-based prototyping for large SoCs. Multiple FPGA configuration options are possible through the Ethernet port, USB port, JTAG, and micro SD card. There is also an on-board battery-charging circuit that makes FPGA bin file encryption easy.

Read the press release HERE. The datasheet for the Dual VU440 Prodigy Logic Module is HERE.

Also read:
S2C ships UltraScale empowering SoFPGA
Taking a Leap Forward to Prototype Billion Gate Designs

Pawan Kumar Fangaria
Founder & President at www.fangarias.com