
GSA Award Nominees Announced

by Paul McLellan on 11-04-2013 at 4:32 pm

Today GSA announced the award nominees for the 2013 awards. They will be presented at the GSA Award Dinner on Thursday December 12th at the Santa Clara Convention Center. The keynote will be given by Steve Forbes.

Recently it was announced that the 2013 Dr. Morris Chang Exemplary Leadership Award winners are Dr. Sehat Sutardja, CEO and Chairman, and Ms. Weili Dai, President and Co-founder, of Marvell Technology Group Ltd. (Marvell).

The evening’s program will recognize leading semiconductor companies that have exhibited market growth through technological innovation and exceptional business management strategies. The award categories and nominees (in alphabetical order) are as follows:

Start-Up to Watch Award

  • GEO Semiconductor, Inc. (GEO)
  • Quantenna Communications, Inc.
  • Tabula, Inc.

Most Respected Private Semiconductor Company Award

  • Aquantia Corporation
  • Cortina Systems
  • SiTime Corporation

Most Respected Emerging Public Semiconductor Company Award (Achieving $100 to $250 Million in Annual Sales):

  • Ambarella, Inc.
  • Cavium, Inc.
  • InvenSense, Inc.

Most Respected Public Semiconductor Company Award (Achieving $251 Million to $1 Billion in Annual Sales)

  • Dialog Semiconductor
  • Microsemi Corporation
  • Silicon Labs

Most Respected Public Semiconductor Company Award (Achieving Greater than $1 Billion in Annual Sales)

  • MediaTek Inc.
  • QUALCOMM Incorporated
  • Xilinx, Inc.

Best Financially Managed Semiconductor Company Award (Achieving Up to $500 Million in Annual Sales):

  • Audience, Inc.
  • InvenSense, Inc.
  • RDA Microelectronics

Best Financially Managed Semiconductor Company Award (Achieving Greater than $500 Million in Annual Sales)

  • Maxim Integrated Products, Inc.
  • Semtech
  • Xilinx, Inc.

Analyst Favorite Semiconductor Company Award Nominees (chosen by analyst Joseph Moore of Morgan Stanley)

  • Ambarella, Inc.
  • Avago Technologies
  • Cavium, Inc.

Analyst Favorite Semiconductor Company Award Nominees (chosen by analyst Quinn Bolton of Needham & Company, LLC)

  • Ambarella, Inc.
  • Inphi Corporation
  • MaxLinear Inc.

Outstanding Asia Pacific Semiconductor Company Award

  • MediaTek Inc.
  • Samsung Electronics, Co., Ltd.
  • Spreadtrum Communications Inc.

Outstanding EMEA Semiconductor Company Award

  • CSR plc
  • Dialog Semiconductor
  • NXP Semiconductors

You can make reservations to attend the Awards Dinner here.


More articles by Paul McLellan…


Addressing Power at Architectural and RTL Levels

by Paul McLellan on 11-03-2013 at 4:30 pm

Major power reductions are possible by reducing power at the RTL and system levels, not just at the gate and physical levels. In fact, as is so often the case in design, changes have much more impact when made at a higher level, even though at that point in the design there is less accurate feedback about their effect; later, the impact of a change is known much more accurately, but the difference any change can make is smaller. Indeed, 80% of chip power is determined at the RTL level and above, and the maximum difference that clever synthesis and clock-tree gating can make is 10-20%.
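To make that leverage concrete, here is a toy calculation (my own illustration, not from the article) using the standard dynamic-power model P = aCV²f. An architectural knob like voltage/frequency scaling compounds quadratically in V, while gate-level clock gating only trims switching activity:

```python
# Toy model of CMOS dynamic power: P = activity * C * V^2 * f.
# All numbers below are hypothetical, chosen only to illustrate scale.

def dynamic_power(activity, cap, vdd, freq):
    """Dynamic power in watts (alpha * C * V^2 * f)."""
    return activity * cap * vdd ** 2 * freq

base = dynamic_power(0.2, 1e-9, 1.0, 1e9)

# Architectural decision: scale Vdd and f down 30% (DVFS-style).
dvfs = dynamic_power(0.2, 1e-9, 0.7, 0.7e9)

# Gate-level decision: clock gating trims switching activity by ~15%.
gated = dynamic_power(0.2 * 0.85, 1e-9, 1.0, 1e9)

print(f"DVFS saves {1 - dvfs / base:.0%}")    # ~66%
print(f"Gating saves {1 - gated / base:.0%}") # ~15%
```

The quadratic voltage term is why the 10-20% ceiling quoted for gate-level techniques is plausible while system-level changes can do far better.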


Power is, of course, a huge issue in SoC design. Not just for mobile and other battery powered devices, but also for tethered devices like servers and routers (a lot of the cost of a datacenter is cooling) or home DVRs and televisions (where fans are not acceptable). And all chips have potential thermal issues if the power is too high, from reliability to package cost.

There are many changes that can be made at the RTL level or above. Here are some of the most important ones:

  • System architecture level

    • SW-HW partitioning
    • OS/firmware-level APIs for standby/sleep modes
    • Single core vs Multi cores
    • Bus and memory architecture
    • Communication vs computation tradeoffs
  • Micro-architecture level

    • Frequency and voltage scaling
    • Memory/register file banking
    • Auto-inferencing of appropriate FIFOs and other communication channels
  • RTL level

    • Combinational clock gating
    • Sequential clock gating
    • Power gating

Below the RTL level, the main optimizations are multiple voltage domains, multiple-threshold libraries (high performance on the critical path, low power otherwise) and clock network optimization.


On Tuesday November 19th Calypto is hosting a webinar. Abhishek Ranjan, a senior director of engineering, will present how to use Calypto’s HLS product Catapult along with the PowerPro RTL-level power optimization tool to reduce power at the architectural and RTL levels. The webinar is titled Techniques for Reducing Power at Various Levels and will last about an hour, starting at 11am Pacific Time. He will discuss dynamic voltage and frequency scaling (DVFS), power gating, bus-data encoding, low-power arithmetic architectures, memory banking, sequential clock/memory gating and other micro-architectural techniques.

Details and registration are here. November 19th at 11am Pacific.

And a reminder about a Calypto webinar next week, How to Maximize the Verification Benefit of High Level Synthesis with SystemC, at 11am on Tuesday November 5th. Details and registration here. And for anyone outside the US (or in Arizona!): the US comes off daylight saving time a couple of days before, so make sure to log in at the correct time.


More articles by Paul McLellan…


Fabless: The Transformation of the Semiconductor Industry

by Daniel Nenni on 11-03-2013 at 4:00 pm


As I have mentioned before, Paul McLellan and I are writing a book on the history of the fabless semiconductor industry. There is a preview available HERE; it will initially be sold as an e-book on SemiWiki and put into print early next year. Working with Paul McLellan and Beth Martin on this was an amazing experience. The research, the writing, and the “constructive” criticism of everyone who participated made it time consuming and exhausting at times, but absolutely worth the effort. We truly wrote this book for the greater good of the fabless semiconductor industry.

Like the fabless ecosystem itself, writing this book was a collaboration of monumental proportions with contributed chapters from the leading companies that made all of the cool mobile electronics stuff we have today possible. The book starts with the invention of the transistor and chronicles the evolution of the fabless semiconductor ecosystem up to where we are now. The final chapter is forward looking so we need your help (crowdsourcing):

WHAT’S NEXT FOR THE SEMICONDUCTOR INDUSTRY?

We’ve talked a lot about the history of the semiconductor industry, from its nascent beginning with the invention of the transistor and integrated circuit, through the changing business models and technological innovations that shaped the world of electronics we have today. But where are we heading?

Currently, smart phones and tablets powered by highly-integrated SoCs are the largest market driver for semiconductor technology. Even so, over the past 5 years the semiconductor industry has seen relatively flat revenue growth. The following passages are from industry luminaries sharing their vision of what will take the semiconductor industry to the next level of innovation and financial success.

This is your chance to be part of a best-selling book chronicling the transformation of the semiconductor industry. Be an industry luminary, send me a maximum of 300 words and be part of history! If we include your passage in the book you will get not just fame but also good fortune (a free copy of the book). Sound reasonable? I need them by November 29th and you know where to find me.

Now available on Amazon.com


Webinar: IP Lifecycle Management: What is it, what problems does it solve?

by Daniel Nenni on 11-03-2013 at 11:00 am

SoCs are now dominated by IP blocks sourced either from third parties or internal design teams. This means that IP is now critical to the success of the SoC, yet it is the part of the design that teams have the least control over, or visibility into. Most design teams use at best ad-hoc methods to manage this IP, and the few that use some form of formal process tend to limit it to management of the underlying IP data.

IP Lifecycle Management follows IP from creation through qualification and distribution into final SoC integration. As the IP passes through each stage it is tracked and managed to give a very high-level of visibility into the design and the IP status. Advanced analytics integrated throughout enable potential problems to be identified early and resolved quickly.

Formalizing and codifying this IP management process significantly reduces the risk of bad IP impacting the final design, eliminates unnecessary rework (significantly reducing design and verification resource requirements), and improves internal design reuse.

In this webinar, IP Lifecycle Management will be defined, and each aspect of the lifecycle will be introduced together with the problems it solves and how it benefits design teams. The webinar will use practical examples running on the ProjectIC platform to demonstrate the benefits of IP Lifecycle Management. In addition to the examples, there will be an opportunity for Q&A with the presenter.

ProjectIC is an IP Lifecycle Management platform that is methodology agnostic and can be easily integrated into any design flow. It is built on top of Methodics’ industry-leading, proven IP Data Management platform to deliver the capabilities needed to manage IP-driven SoC designs.

The webinar will take place on Tuesday 5th of November at 1PM Pacific Standard Time – to register for the webinar please visit http://www.methodics.com/11052013-webinar.


More Articles by Daniel Nenni…..


SEMICO Impact 2013 Next Wednesday

by Paul McLellan on 11-01-2013 at 5:54 pm

Semico’s IMPACT 2013 IP event is next Wednesday November 6th at the DoubleTree Hilton in San Jose.

Here’s what you get if you attend. Keynotes from:

  • Kurt Shuler of Arteris. Give him some hard questions about Qualcomm who have just acquired their technology and engineering team
  • Chris Rowen of Tensilica, recently acquired by Cadence
  • Steve Teig of Tabula, building FPGAs with Intel 22nm as their foundry
  • John Koeter of Synopsys
  • Robert Krohn of Cisco

A panel on IP Ecosystem Solutions for Complex Systems moderated by Mahesh Tirupattur of Analog Bits with panelists Jason Polychronopoulos of Mentor, Warren Savage of IPextreme, Chris Rowen of Tensilica/Cadence and Suk Lee of TSMC.

A panel on Designing for New World Applications moderated by Kent Shimasaki of Infinitedge with panelists Ron Moore of ARM, Grant Pierce of Sonics, Steve Singer of Inside Secure and John O’Neill of Skyworks.

A technical track hosted by Constellations (IPextreme) including talks from (surprise) IPextreme, Recore Systems, Ridgetop Group, Certus Semiconductor and Atrenta.

And not only is there such a thing as a free lunch, there is a free breakfast and a networking reception afterwards, including a chance to win an iPad mini and a Nest thermostat.

The full agenda is here. The registration page is here ($75 registration closes Monday at 5pm; a few registrations will be accepted at the door for double, $150).


More articles by Paul McLellan…


Using OTP Memories for High-performance Video

by Paul McLellan on 11-01-2013 at 4:15 pm

One of the most demanding applications for semiconductors is digital video, from tablet computers to home entertainment. The iPad with a retina display is already at high-definition (HD) resolution (2048×1536), and all indications are that video is racing towards what is known as 4K resolution, also known as ultra high definition: 3840×2160 pixels, which is roughly four times the pixels of HD and so four times as demanding.
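A quick sanity check on those pixel counts (my arithmetic, not the article's): the "four times" figure is exact against 1920×1080 full HD; against the iPad's 2048×1536 retina panel, 4K is closer to 2.6 times the pixels.

```python
# Pixel counts for the resolutions quoted in the text.
hd_1080p = 1920 * 1080     # "full HD", ~2.07 Mpixels
ipad_retina = 2048 * 1536  # retina iPad, ~3.15 Mpixels
uhd_4k = 3840 * 2160       # 4K / ultra HD, ~8.29 Mpixels

print(uhd_4k / hd_1080p)     # 4.0
print(uhd_4k / ipad_retina)  # ~2.64
```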

One of the leaders in digital TV processing (and other home control and connectivity applications) is Sigma Designs. Coincidentally they were also our lead beta customer when I was CEO at Envis. Doing high performance video is hard enough, but doing it within a tight power budget is a real challenge. Our power-reduction tool Chill wasn’t compelling enough for them to adopt it, but they have just announced their selection of an OTP (one-time-programmable) memory supplier, and it is Sidense.

Sigma have signed a multi-year license to use Sidense SHF OTP macros. These are used in advanced processes from 40nm down to 16nm FinFET. Sigma will start by using SHF in a 40nm implementation, which has already been qualified in G and low-power/low-leakage variants, for set-top-box (STB) and digital TV applications. In some sense this is a continuation of an existing relationship: Sigma have been a customer of Sidense since 2008 and have products in production using older technologies.

The factors that make Sidense attractive for these applications are:

  • small area (so low cost)
  • no mask or process changes to standard digital process (so low cost)
  • high security: there is no visible difference between a 0 and 1 bit cell, even etching the die down, and no charge is held on the bitcell
  • advanced node coverage (20nm now, 16nm in qualification)
  • high performance at low power (both active and standby)


An SHF module consists of an OTP core (the bitcell array), a charge-pump hard macro for in-field programming (generating the non-standard voltages required), a device access port (DAP) providing access through a 16/32-bit parallel bus, and SBPI, which provides serial and byte-wide interfaces with SPI-compatible protocols. Read access times are as low as 20ns, depending on configuration and process, and write speed (at 28/20nm) is 1us/bit.
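As a back-of-envelope illustration of what the 1us/bit figure means in practice (the 4-kbit macro size here is my hypothetical example, not a Sidense spec):

```python
# In-field programming time at the quoted 1 us/bit write speed.
bits = 4 * 1024             # hypothetical 4-kbit OTP macro
write_time_s = bits * 1e-6  # 1 us per bit, programmed bit-serially

print(f"{write_time_s * 1e3:.1f} ms")  # 4.1 ms
```

Even at milliseconds for a whole macro, this is a one-time cost, which is acceptable for OTP uses like keys, trim and configuration data.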


At the recent TSMC OIP meeting, Sidense revealed the advanced process roadmap for design, characterization and qualification (above). FinFET structure aligns well with Sidense OTP implementation (which is antifuse and basically depends on forming a tiny crystal in the gate-oxide, technically known as dielectric breakdown induced epitaxy). For all nodes down to 40nm, IP9000 qualification is complete. It is in progress for 28nm and 20nm. 16nm is at the test chip stage.

There is more information about Sidense SHF memories here.


Is FD-SOI Smarter than Moore?

by Eric Esteve on 11-01-2013 at 12:03 pm

If you have read the excellent article from Paul McLellan, you know about FDSOI as a technology, so I will not revisit the FDSOI device or the comparison with FinFET in terms of device topology, doping level and so on. If you missed it, I recommend reading that article, as well as the many comments (all of them relevant). It’s good to know that SemiWiki readers are so smart! Let’s look at the FDSOI features that make the technology a smart choice, smarter than bulk at the same technology node:

  • First, FDSOI is cheaper than bulk, as you need fewer mask levels to process FDSOI devices. Some people still think it is more expensive, because they have in mind the roughly 10% extra cost of the SOI wafer. But when the wafer has been completely processed, the final cost is lower.

  • FDSOI is faster than bulk. If you take some time to decipher the above picture, you will see that, for the same power budget, an ARM processor will reach slightly above 1.2 GHz on 28LP, 1.4 GHz on 28G and almost 1.6 GHz on 28FDSOI technology.

  • FDSOI is cooler than bulk. If you want your processor to consume as little power as possible while still delivering good performance, say 1.5 GHz, you will compare 28FDSOI @ 0.9V with 28G @ 0.85V (28LP is already “out”) and see that there is almost an order of magnitude difference in leakage power.

So FDSOI is clearly an attractive technology, especially for wireless or multimedia application processors, as it allows drastically reducing the power budget (by almost an order of magnitude for leakage power) or increasing the processor core frequency. In fact, using FDSOI is equivalent to designing one technology node back (28nm instead of 20nm) while benefiting from lower mask cost and process complexity.

As I told you before, Semiwiki readers are pretty smart, and I have extracted two comments:
“Silicon-proven IP is prerequisite to the success of any technology. So, ST needs close collaboration with fabless companies, or maybe even opening up some of their designs or at least their experiment in designing with FDSOI to other parties.”

“One of the main pre-requisite for success of FD-SOI or FinFET will be availability of IPs. Most of the cases the selection of foundry, even process nodes depend on the availability of silicon proven IP.”

Because STMicroelectronics is a chip maker, they know how important it is to have the right IP portfolio available for SoC design on FDSOI technology. They have managed IP migration to support their own SoC designs, and propose the following approach, extracted from the white paper “Planar fully depleted silicon technology to design competitive SOC at 28nm and beyond”:
At SOC level, migrating an existing design from bulk to planar FD represents an effort comparable to half-node migration, for example from 45nm to 40nm. In other words, it brings very worthwhile benefits at reasonable efforts. A typical approach could be:

  • CPU and GPU: the main objective is maximum peak performance and the design is re-worked, making the most of FBB;
  • Other SOC blocks: the main objective is power savings, by reaching the target operating frequencies at lower Vdd; there is no change to block design, Timing Analysis is re-run and ECO (Engineering Change Order) is performed to fix violations if needed.
  • Other IP such as IOs and PHY blocks are swapped for their planar FD counterpart.

As far as I am concerned, I think that the availability of the right IP on FD-SOI will be very important for the adoption of this technology. STM seems to be in line with this position, as Giorgio Cesana, Director of Marketing and Communication at STM, will present at IP-SoC Grenoble on November 6th a paper titled “FD-SOI Technology for Efficient SoC: IP Development Examples”. I definitely plan to attend, and I will report back on it!

From Eric Esteve from IPNEST

More Articles by Eric Esteve …..



Using Formal to Find Bugs in ARM Microprocessors

by Paul McLellan on 11-01-2013 at 12:35 am

2.5x ROI vs simulation. 25% of bugs found for only 10% of the overall verification cost. 36% of bugs in a current CPU project. These impressive results for formal analysis are what ARM’s Laurent Arditi reported at JUG 2013 after painstaking recording of metrics over several production programs.


As you can see from the above graph, adoption has proceeded in a two-steps-forward, one-step-back fashion, but now seems to be on track for increasingly wide use. The hesitancy was due to a perceived lack of ROI on the investment in tools and training in the early years. So Laurent’s approach is twofold:

  • demonstrate that common statements about formal verification (it is limited in size, can’t handle the designs we do, doesn’t find real bugs) are false
  • show that there are approaches that drive down verification cost (of formal and simulation) and increase the benefits

So what are the aspects of a design to focus on?

  • embedded assertions/properties are primarily written for simulation and so can be used for formal at no extra cost
  • X-propagation is low hanging fruit since simulation has issues with it and formal can do it with very few hand-written properties
  • complex clocking schemes are hard to verify with simulation but formal has found many corner case bugs and a major bug on the Cortex-A12
  • use Jasper ProofKits
  • reduce the cost of simulation: correlate formal coverage with simulation coverage, and don’t try to do things like X-propagation in simulation, removing a big effort from simulation and from humans
  • use simulation tricks (like reducing FIFO depths, changing arbitration) to reduce formal proof times too


The Cortex-A12 made heavy use of formal. It accounted for less than 10% of the verification cost and found 18% of the real bugs (maybe even 25%), so the ROI for formal is 2.5X compared to simulation. On another (unidentified) CPU in development, 36% of all bugs were caught by formal (see the graph above).
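The headline ROI appears to follow directly from those two percentages; a one-line check (my reading of the arithmetic, not Laurent's exact methodology):

```python
# Formal found up to 25% of real bugs for under 10% of verification cost.
formal_cost_share = 0.10
formal_bug_share = 0.25

print(formal_bug_share / formal_cost_share)  # 2.5, the "2.5x ROI" headline
```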


Another phenomenon is that with enough simulation most bugs get found anyway, but with formal they are found much earlier, meaning less RTL churning, especially late in the design cycle when it is hard to cope with. This is partially because formal can start before simulation testbenches are ready.

How do you discuss formal with managers? First, don’t raise expectations: it is not the case that nothing happens for a while and then suddenly the entire design is formally proven; that’s just not the way the world works (and probably never will be). Don’t forget that although you may care about what is proven, managers do not. Highlight how coverage is increasing and the number of problems found. Finding bugs early saves a lot of simulation time, and each bug formal catches late shows how it finds corner cases that simulation misses.

There is more in the presentation. If you are a Jasper user (not necessarily one who attended JUG) then you can download the presentations, including this one, here.


More articles by Paul McLellan…


I could show you the FPGA, but then I’d have to configure you

by Don Dingee on 10-31-2013 at 6:00 pm

One of the present ironies of the Internet of Things is that as it seeks to connect every device on the planet, we are still designing those things with largely unconnected EDA tools. We may share libraries and source files on a server somewhere, but that is just the beginning of connection.

It is not surprising that synthesis tools from Altera, Xilinx and other FPGA vendors are vastly different in terms of where they put files and how they are configured. This becomes painfully evident to design teams as soon as they try to target FPGAs from two or more vendors. IP written in RTL that is theoretically “portable” and “synthesizable” can become lost in a forest of files, and have build and simulation settings applied that shake unexpected errors loose.

A team working with one FPGA architecture may have become used to the idiosyncrasies of that tool set. New designers, even those familiar with the synthesis and simulation tool itself, may find a steep learning curve in the details of reviewing designs and getting known-good IP to work. In many cases, the learning from the learning curve isn’t written down anywhere.

The problem is magnified when teams are distributed, with differences of distance, time, and language. The old adage “it takes longer to show someone how to do it than it does to actually do it” comes into play, which is a drag on productivity and a deterrent to scalability. Design teams know they have to share files, but often miss sharing the configuration details.

As FPGA designs have gotten larger and more numerous, and expertise comes from all over the globe, the problem is getting more urgent. Aldec and Synopsys each have vendor-independent FPGA synthesis and simulation tools, but Aldec is taking the next step in distributed team-based design management with their new release. I had a few minutes with Satyam Jani, product manager for Aldec Active-HDL, for some insight on what drove the latest improvements.

Based on feedback from actual users, the latest Active-HDL 9.3 release supports a user-defined folder structure. This ensures that designers have a consistent methodology for placing files and prevents the problem of IP getting lost among the trees, especially when IP needs to be retargeted, since there is no need to relocate files to match the other tool. It also facilitates the design review process, because teams customize the structure to meet their needs exactly and know where to look for each type of information.

Part of that customized structure is a mix of file types: HDL files, schematics, text, waveforms, and scripts. When the project with the HDL files is loaded, startup scripts can be executed to set the working directory, initialize local variables, set debug preferences, set the underlying standard level (for instance, VHDL 2008 or VHDL 2002), and other parameters. This allows teams to establish build consistency automatically, without written cookbooks a designer has to follow and the possibility (probability?) that different team members take different steps.

Also handy is the team category applied with an .adf file, which controls simulation. At different stages, designs are put through different tasks. For instance, initially a waveform viewer may be utilized. When issues are found, a debugger is brought in to isolate the problems, and finally a code coverage tool is applied. Each of these modes usually requires the simulator to be reconfigured manually, but with the team category the desired settings are defined and available in a pull-down menu, capturing the learning curve for everyone to use.

There are several other minor changes in this Active-HDL release. One I find fascinating is the ability to place JPG, PNG, and BMP files on a schematic. This has two uses: watermarking a design with a logo, and annotating a design visually to indicate a point of emphasis. The waveform viewer has also been enhanced, with saved settings and new comparison files, and support for floating point values. These and several other enhancements came directly from user inputs on making the tool more connected.

Aldec Active-HDL FPGA Design Creation and Simulation

I’m totally convinced that the path forward for technology innovation in the near term is not in creating yet-another-standard seeking to disrupt the norm, but instead including things in a framework that allow various approaches to work together. That is not an easy task, but it has tremendous value, and I think the folks at Aldec are doing a remarkable job of creative inclusion.

More articles by Don Dingee…




ARM and the Internet of Things

by Paul McLellan on 10-31-2013 at 6:00 pm

I was at ARM TechCon earlier this week and attended the keynote by Simon Segars (CEO of ARM for the last four months) that opened the second day. A theme of his speech was that just as innovation continues in so-called mature industries like automobiles, the same will happen in mobile. One particular area of focus for ARM, and for everyone else, is what has come to be known as the Internet of Things (IoT). This has been talked about for years but will start to become real over the next few years (and in some areas, like smart meters, smart thermostats and Bluetooth-enabled door locks, it already is).

ARM commissioned a study from the Economist Intelligence Unit (an arm of The Economist, the magazine that insists on calling itself a newspaper). The report is freely available to download from the ARM website here (pdf).

It turns out that almost every business is thinking about IoT. 95% of C-level executives expect their employees to be using IoT within 3 years. 76% expect to be using IoT for internal operations or processes. 74% expect to be using IoT externally in their own products and services.


As the report says: Kevin Ashton coined the term the “Internet of Things” (IoT) in 1999 while working at Procter & Gamble. At that time, the idea of everyday objects with embedded sensors or chips that communicate with each other had been around for over a decade, going by terms such as “ubiquitous computing” and “pervasive computing”. What was new was the idea that everyday objects—such as a refrigerator, a car or a pallet—could connect to the Internet, enabling autonomous communication with each other and the environment. He is currently a general manager at Belkin, a US manufacturer of consumer electronics. Looking back, he says: “I was incredibly excited and optimistic about the Internet of Things, but compared to my optimism, progress seemed incredibly slow. It was quite frustrating. We were dealing with a lot of senior executives who had grown up long before the age of email, and it just wasn’t clicking with them.”

So the term is over a decade old, but finally things are starting to move. But it requires a lot of coordination, as Simon pointed out. IoT devices tend to be extremely low power with limited on-board compute. They communicate through networks to find their way back to the cloud, where the compute resources, databases, and interconnectivity to other devices reside. Just as cars have a lot of standards (when was the last time you got into a rental car and couldn’t find the accelerator, or found the gas pump too big to fit?), IoT will require a lot of standards if it is really to take off. Otherwise it will be what Simon called the internet of silos: devices with their own network protocols, their own cloud back-ends and so on.


Many of the devices will be very small and very low power, perhaps scavenging power from their environment or with batteries intended to last the whole life of the device. In general they will not be using state-of-the-art wireless technology since they don’t need that much bandwidth and can’t afford the power. They may only need a few bits of data per second of bandwidth for instance. Or only communicate to a local reader (like Walmart or FedEx using RFID to automatically track every unit in a shipment).

Of course ARM hopes and expects to get their unfair share of the IoT market, both in the devices themselves and, increasingly, in the networks and server farms where power will be at a premium and servicing billions of devices is more important than having the absolute highest single-thread performance (which is Intel’s sweet spot).

Once again the report is here.


More articles by Paul McLellan…