
Intel Got Fit At CES 2016 And Even Reached Some New Heights At X-Games

by Patrick Moorhead on 04-24-2016 at 8:00 pm

You may have noticed this weekend that Intel was all over the X-Games. You couldn't turn on the TV, web video or Twitter without seeing the company on and around the X-Games. Intel's love affair with sports started when Brian Krzanich took the reins as Intel's CEO and has been amplified at nearly every corporate event since. Krzanich has had a BMX bike soar over his head a few times at CES, which to me is a physical embodiment of just how committed Intel's CEO is to changing the company's perception. At this year's CES, Intel devoted a lot of time to their latest technologies and how they enable four key experience areas: sports, health and wellness, creativity and what they're calling the "human experience". In fact, Intel has spent the past few CESs and IDFs (Intel Developer Forums) showing how the company is diversifying its computing capabilities and platforms beyond just the PC. What we're seeing at Intel is part brand campaign to improve its perception amongst millennials, but ultimately an effort to get younger developers to choose Intel for their IoT projects without hurting its brand in PCs and the datacenter.


IoT: making up for mobile
As Intel has said on numerous occasions, they "missed" the mobile market entry window and have been over-investing ever since. They don't want to miss the window on IoT. Even though nearly all of Intel's profits come from their datacenter and PC chip and platform franchises, the company is making major investments in its IoT (Internet of Things) offerings, whose end points include the company's low-power Curie modules with Quark processors inside. Intel drove these processors into many big-brand fitness and sports applications as modules or wearables that allow athletes to gather more information about their exercise and to improve using big-data analytics. Now let me talk about what Intel is doing.

X-Games
At CES 2016, Intel showed off some interesting new technologies as well as major announcements. One of the biggest announcements Intel made at CES was the partnership with the X-Games, which just happened this past weekend. At the X-Games, Intel helped measure real-time data for the Men's Snowboard Slopestyle and Men's Snowboard Big Air events, with Curie modules capturing things like speed, air time and height. This gives both the riders and the viewers more data than ever before and makes the X-Games experience more modern and data-driven. Oh, and every time you saw the real-time stats, you saw that they were brought to you by Intel.

Red Bull

In addition to the X-Games partnership, Intel also announced a new partnership with Red Bull Media House. This relationship should give Red Bull and all of their various sponsored athletes the ability to collect tons of valuable data about their performance. And because Red Bull Media House is one of the leaders in implementing new technologies in sports, it is not much of a stretch to see them using Curie technology in ways that enhance the viewing experience as well. Imagine a space-walk in virtual reality. That would be cool.

Curie IoT end points
The Curie modules used in these extreme sports scenarios include a low-power 32-bit Intel Quark micro-controller, 384KB flash memory and 80KB of SRAM. It also has a low-power DSP sensor hub with what Intel is calling “proprietary pattern matching”. For connectivity, it is using Bluetooth Low Energy (BLE), which helps give it long battery life and the ability to share data. It also has a 6-axis combo sensor with accelerometer and gyroscope, something you would expect to be standard for tracking someone’s movement. Last but not least, it also has a PMIC for battery charging built into the Curie to enable smart charging capabilities.
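To make the data pipeline concrete, here is a hypothetical sketch (my own illustration, not Intel's actual firmware) of how a module like Curie could derive two of the stats shown at the X-Games, air time and jump height, from raw 6-axis accelerometer samples: in free fall the accelerometer magnitude drops near 0 g.

```python
# Hypothetical sketch: estimating air time and jump height from
# accelerometer samples. All names and thresholds are invented here.
import math

SAMPLE_HZ = 100            # assumed sampling rate
FREEFALL_G = 0.3           # magnitude below this is treated as "airborne"

def air_time_and_height(samples):
    """samples: list of (ax, ay, az) in g. Returns (air_time_s, height_m)."""
    airborne = [math.sqrt(ax*ax + ay*ay + az*az) < FREEFALL_G
                for ax, ay, az in samples]
    t_air = sum(airborne) / SAMPLE_HZ
    # For a symmetric jump, rise time is t_air/2, so h = g * t_air^2 / 8.
    height = 9.81 * t_air ** 2 / 8
    return t_air, height

# Simulated second of data: 0.2 s on snow, 0.6 s of flight, 0.2 s landing
data = [(0, 0, 1.0)] * 20 + [(0, 0, 0.05)] * 60 + [(0, 0, 1.0)] * 20
t, h = air_time_and_height(data)
print(f"air time: {t:.2f} s, est. height: {h:.2f} m")  # 0.60 s, 0.44 m
```

Real firmware would filter sensor noise, fuse the gyroscope data, and run on the module's DSP sensor hub; the height formula assumes a symmetric jump.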

Oakley face wearable
In addition to the major sports announcements, Intel at CES also talked about some new wearable fitness technologies they helped build. Intel had three-time Ironman Champion Craig Alexander talk about the Oakley Radar Pace. The Oakley Radar Pace is essentially a wearable activity tracker and coach that is designed to track and train the user in real-time using voice-activated commands and embedded computing. It monitors a user's performance as they go along and provides feedback on their technique in order to improve their training in whichever sport they are competing in. Intel and Oakley did not give details about the internal components of the Radar Pace, but it will be available later this year.

Marketing and branding with a long-term point
Intel continues to push forward on its IoT strategy, products and marketing, giving us a better view into how Intel plans to place its chips in the ever-growing world of wearables. Intel is using sports and exercise as its primary, visible entry into the wearable space, a sub-segment of IoT. Let me be clear: these products don't all have 100% Intel silicon inside, some have none, and if you are wondering about that, you could be missing the point. This very visible sports effort is a brand play with ties to some real products today, with the objective of attracting developers to use Intel Curie and data platforms for their future products. It's also to look really cool to the millennials who Intel believes it needs to attract for its future growth.

Future sports and fitness data play?
With their new partnerships with the X-Games and Red Bull Media House, Intel should also learn even more about what athletes and fitness junkies need at all levels. They already own Basis, a maker of some of the best wearable fitness trackers and heart rate sensors, but it appears clear that Intel wants to make further investments and improvements to their position in the wearable space. These investments may be how Intel plans to gather data about the human body and our capabilities, to better understand how to gather and interpret that data. After all, if Intel can learn things in the most extreme conditions, pushing the human body to its absolute limits, there's no telling what they could do with data from day-to-day activities. Oh, and there's a lot of value in that.


More from Moor Insights and Strategy


Intel And Qualcomm Partner (Yes, Really)

by Patrick Moorhead on 04-24-2016 at 4:00 pm

For the longest time, the 802.11ad space, also known as WiGig, was a conglomeration of different 60 GHz Wi-Fi technologies. Many companies have announced technologies utilizing 60 GHz Wi-Fi, including Intel, Nitero, Peraso, Qualcomm, Samsung Electronics and SiBEAM. Even though many of these companies are members of the Wireless Gigabit Alliance, which has a certification process, there is still a certain level of proprietary technology that most of these companies don't share with each other. However, today, Qualcomm and Intel, the two biggest leaders in 802.11ad 60 GHz Wi-Fi, announced multi-gigabit interoperability between each other's devices.


Qualcomm’s Mark Grodinsky, product management director, shows off Intel-Qualcomm WiFi AD interoperability at industry analyst event

What makes this partnership all the more interesting is that Intel and Qualcomm have been at one another's throats for many years in the smartphone space. This competition was not limited to smartphones; once Qualcomm bought Atheros, the two also became competitors in the Wi-Fi space. But the reality is that both companies realize the importance of making 802.11ad 60 GHz Wi-Fi an interoperable technology that can be considered reliable enough to be truly commercialized beyond a couple of docking and display solutions. Intel and Qualcomm haven't announced any new products that utilize WiGig as a result of this announcement; however, there were a few announced at CES. Those announcements from Qualcomm included the LeTV Le Max Pro smartphone, which features Qualcomm's Snapdragon 820, as well as a router from TP-Link and a laptop from Acer.

This is probably the biggest Wi-Fi announcement of 2016 because it means that 802.11ad 60 GHz Wi-Fi can finally become a broadly available commercial technology. WiGig, or 802.11ad, is no longer a multitude of different Wi-Fi silos with each company creating its own vertical solutions. The reason Intel and Qualcomm partnering is such a big deal is that both companies own a significant share of the Wi-Fi connectivity market today. Also, both companies were the first to ship commercial WiGig solutions to their customers, solutions that can actually be used for wireless docking and streaming today.

With Intel and Qualcomm now working together to deliver interoperability, Intel's 60 GHz WiGig in laptops and tablets can find its way onto a network with an access point utilizing Qualcomm's 60 GHz 802.11ad. It also means that smartphones using Qualcomm's 60 GHz Wi-Fi solution can communicate with docks or displays that utilize Intel's 60 GHz 802.11ad Wi-Fi solution. And vice versa. Future solutions that utilize 60 GHz gigabit wireless, like wireless displays, AR and VR headsets and other low-latency, high-resolution solutions, finally have the ability to exist outside of certain companies' chipset silos. The breaking down of these technology silos means that 802.11ad can stop being just a bunch of technology demos and narrowly commercialized solutions and become a broadly adopted consumer and enterprise solution.

Thanks Intel and Qualcomm for making this very good decision.




Quantum Code-Cracking Takes Another Hit: Lattice-based Cryptography

by Bernard Murphy on 04-24-2016 at 12:00 pm

Public-key crypto-systems rely these days on approaches founded in mathematical methods which are believed to be computationally hard to crack. The easiest to understand requires factorization of a key based on the product of two large prime numbers. Much has been made recently of the ability of quantum computers to crack this style of encryption. A more complex method requires solving b[SUP]k[/SUP] = g, where b and g are elements of a finite group and k is an integer. This is the discrete logarithm problem underlying elliptic curve cryptography. A quantum computing algorithm has also been developed for this case. Therefore, in theory, widely known public key methods are crackable unless perhaps the key is unmanageably large.
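To see the asymmetry these systems depend on, here is a toy sketch (illustrative parameters only, nothing like cryptographic sizes): exponentiation in the group is fast via square-and-multiply, while recovering the exponent takes exhaustive search.

```python
# Toy illustration of the discrete logarithm asymmetry. The modulus and
# exponent here are tiny, invented values; real systems use huge groups.
p, b = 1019, 2               # small prime modulus and base (toy-sized)
k_secret = 735
g = pow(b, k_secret, p)      # fast: built-in modular exponentiation

def discrete_log(b, g, p):
    """Brute-force search for k with b^k = g mod p; infeasible at scale."""
    acc = 1
    for k in range(p):
        if acc == g:
            return k
        acc = acc * b % p
    return None

print("recovered exponent:", discrete_log(b, g, p))  # 735
```

For a toy modulus the search finishes instantly; doubling the bit-length of p roughly squares the work for this attack, which is why the forward direction stays cheap while the inverse becomes hopeless.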

But encryption systems are now turning to another method – lattice-based cryptography with noise. The approach rests in effect on solving linear equations – a very well studied problem for which excellent solutions exist – but then adds noise to the values. It turns out that Gaussian elimination, the foundation to any of these solutions, is very brittle in the presence of even small amounts of noise in the sense that it is difficult to extract a correct or even approximate solution in these cases.
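A toy sketch of that brittleness (my own illustration with tiny parameters, not a real cryptosystem): Gaussian elimination over Z_q recovers a secret exactly from noiseless linear equations, but fails as soon as each equation carries even a small error.

```python
# Toy demonstration: exact linear systems mod q are easy to solve, but a
# tiny per-equation error makes the same attack recover garbage -- the
# core intuition behind Learning with Errors (LWE).
import random

q = 97          # small prime modulus (toy-sized; real schemes use much larger q)
n = 4           # secret dimension
secret = [random.randrange(q) for _ in range(n)]

def sample(noisy):
    """One equation: b = <a, secret> (+ small error) mod q."""
    a = [random.randrange(q) for _ in range(n)]
    b = sum(ai * si for ai, si in zip(a, secret)) % q
    if noisy:
        b = (b + random.choice([-1, 1])) % q   # small nonzero error
    return a, b

def solve_mod_q(rows):
    """Gaussian elimination over Z_q (q prime)."""
    m = [list(a) + [b] for a, b in rows]
    for col in range(n):
        piv = next(r for r in range(col, n) if m[r][col])  # StopIteration if singular
        m[col], m[piv] = m[piv], m[col]
        inv = pow(m[col][col], -1, q)          # modular inverse of the pivot
        m[col] = [x * inv % q for x in m[col]]
        for r in range(n):
            if r != col and m[r][col]:
                f = m[r][col]
                m[r] = [(x - f * y) % q for x, y in zip(m[r], m[col])]
    return [row[-1] for row in m]

def attack(noisy):
    """Draw n equations and solve; redraw if the system happens to be singular."""
    while True:
        try:
            return solve_mod_q([sample(noisy) for _ in range(n)])
        except StopIteration:
            continue

print("no noise   -> secret recovered:", attack(noisy=False) == secret)  # True
print("with noise -> secret recovered:", attack(noisy=True) == secret)   # False
```

With exact equations the solver returns the secret every time; with errors it returns the secret plus an amplified error term, so the answer is wrong even though each individual equation is off by only 1.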

The method is based on something called Learning with Errors, which was derived in the course of studying a machine-learning problem. This has been adapted to something even more cryptically called Ring Learning with Errors, which operates over a ring of polynomials in a finite field (which, it turns out, is related to solving optimization problems on lattices, which, it turns out, is related to the linear equation problem). Public key exchange involves exchanging two polynomials: a(x) and b(x) = a(x).s(x) + e(x), where s(x) is the secret and e(x) is a small random error polynomial. In a return exchange, the two parties can come to agreement on the key. I'm not even going to attempt to explain the detail of the exchange here – I'm still inching my way through the paper.
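For a flavor of the machinery, here is a hedged toy sketch (tiny illustrative parameters, nothing like deployment sizes) of forming just the public polynomial b(x) = a(x)·s(x) + e(x) in the quotient ring Z_q[x]/(x^n + 1); the full key agreement adds a reconciliation step I won't attempt here.

```python
# Toy sketch of a Ring-LWE public key. Parameters are invented for
# illustration; real schemes use much larger n and q.
import random

n, q = 8, 257                     # toy ring degree and modulus

def poly_mul(f, g):
    """Multiply in Z_q[x]/(x^n + 1): x^n wraps around with a sign flip."""
    out = [0] * n
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            k = i + j
            if k < n:
                out[k] = (out[k] + fi * gj) % q
            else:                  # x^(n+t) == -x^t in this ring
                out[k - n] = (out[k - n] - fi * gj) % q
    return out

def small_poly():
    """Small coefficients, as used for secrets and errors."""
    return [random.choice([-1, 0, 1]) % q for _ in range(n)]

a = [random.randrange(q) for _ in range(n)]   # public, uniformly random
s = small_poly()                              # secret polynomial s(x)
e = small_poly()                              # error polynomial e(x)
b = [(bs + be) % q for bs, be in zip(poly_mul(a, s), e)]   # public key b(x)

print("public key b(x) coefficients:", b)
```

The point of the ring structure is efficiency: one polynomial of degree n packs the information of n separate LWE equations, while the error term e(x) provides the noise that makes recovering s(x) hard.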

Cracking lattice-based methods is provably as hard as some other hard problems in lattice theory, and you can dial in ever higher levels of difficulty by increasing the degree of the polynomials and other factors. I haven't seen comparisons with the complexity of factoring large numbers, but I assume you can dial up the lattice method to a point where it becomes just as computationally hard to solve. But what is most important is that quantum computing has not been shown to offer any advantage in speeding up attacks on this style of encryption (some believe it may be impossible for QC to provide any speedup, though this has not been proven). In effect, before quantum computing has had a chance to make a dent in encryption code-cracking, it has quite probably become obsolete (for this purpose).

This is no longer limited to academic research. Quantum-hardened encryption was added to OpenSSL in 2014 and a freeware version is available on GitHub so it’s reasonable to assume that more implementations are out there.

If you are determined to wade through the math (as I said earlier, I am still inching my way through this article), click HERE. A broader view of post-quantum cryptography is HERE.

More articles by Bernard…


10 Predictions for the Future of IoT

by Ahmed Banafa on 04-24-2016 at 7:00 am

A Google search for the term "Internet of Things" reveals over 280,000,000 results. Thanks to the media making the connection between the smart home, wearable devices, and the connected automobile, IoT has begun to enter the popular parlance. But that's not the complete picture, according to Gartner's Nick Jones, vice president and distinguished analyst: "The IoT demands an extensive range of new technologies and skills that many organizations have yet to master." He added, "A recurring theme in the IoT space is the immaturity of technologies and services and of the vendors providing them. Architecting for this immaturity and managing the risk it creates will be a key challenge for organizations exploiting the IoT. In many technology areas, lack of skills will also pose significant challenges."

In the coming years, IoT will look completely different than it does today. IoT is a greenfield market. New players, with new business models, approaches, and solutions, can appear out of nowhere and overtake incumbents. But business is the key market. While there is talk about wearable devices and connected homes, the real value and immediate market for IoT is with businesses and enterprises. The adoption of IoT will be much more similar to the traditional IT diffusion model (from businesses to consumers) than the consumer-led adoption of social media and personal mobility.


Source: dzone.com

The top 10 trends of IoT:

1. Platforms. The platform is the key to success. The "things" will get increasingly inexpensive, applications will multiply, and connectivity will cost pennies. Keep in mind that IoT platforms bundle many of the infrastructure components of an IoT system into a single product. The services provided by such platforms fall into three main categories:


    • Low-level device control and operations such as communications, device monitoring and management, security, and firmware updates.
    • IoT data acquisition, transformation and management.
    • IoT application development, including event-driven logic, application programming, visualization, analytics and adapters to connect to enterprise systems.

    2. Standards and Ecosystems. Gartner noted that as IoT devices proliferate, new ecosystems will emerge, and there will be “commercial and technical battles between these ecosystems” that “will dominate areas such as the smart home, the smart city and healthcare. Organizations creating products may have to develop variants to support multiple standards or ecosystems and be prepared to update products during their life span as the standards evolve and new standards and related APIs emerge,” according to Gartner. There will be a battle for IoT application mind share. With billions of devices projected to be spewing out petabytes of data, application developers will have a field day launching thousands, or even millions, of new and cool apps. But, similar to the smartphone world, all of these apps will be fighting for mind share, and only a few will rise to the top to be valued by businesses and consumers.


    Source: Booz Allen

    3. Event Stream Processing. According to Gartner: “Some IoT applications will generate extremely high data rates that must be analyzed in real time. Systems creating tens of thousands of events per second are common, and millions of events per second can occur in some telecom and telemetry situations. To address such requirements, distributed stream computing platforms (DSCPs) have emerged. They typically use parallel architectures to process very high-rate data streams to perform tasks such as real-time analytics and pattern identification.”
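    The kind of per-partition pattern check a DSCP runs can be sketched as follows (a minimal illustration with invented names and thresholds, not any particular platform's API): flag sensors whose event rate in a sliding time window exceeds a limit.

```python
# Minimal sliding-window rate check of the sort a stream processor would
# run per partition. Sensor IDs, window size and threshold are invented.
from collections import defaultdict, deque

WINDOW_S = 1.0
MAX_EVENTS_PER_WINDOW = 3

windows = defaultdict(deque)     # sensor_id -> event timestamps in window

def on_event(sensor_id, ts):
    """Process one event; return True if this sensor is over its rate limit."""
    w = windows[sensor_id]
    w.append(ts)
    while w and ts - w[0] > WINDOW_S:   # evict events outside the window
        w.popleft()
    return len(w) > MAX_EVENTS_PER_WINDOW

stream = [("s1", 0.1), ("s1", 0.2), ("s2", 0.3), ("s1", 0.4), ("s1", 0.5)]
alerts = [sid for sid, ts in stream if on_event(sid, ts)]
print(alerts)   # ['s1'] -- the fourth s1 event within one second trips the check
```

    A real DSCP distributes exactly this kind of per-key state across many parallel workers, which is how it scales to millions of events per second.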

    4. Operating Systems. There’s a wide range of systems out there that have been designed for specific purposes.

    5. Processors and Architecture. Designing devices with an understanding of those devices’ needs will require “deep technical skills.”

    6. Low-Power, Wide-Area Networks. Current solutions are proprietary, but standards will come to dominate. According to Gartner: “Traditional cellular networks don’t deliver a good combination of technical features and operational cost for those IoT applications that need wide-area coverage combined with relatively low bandwidth, good battery life, low hardware and operating cost, and high connection density. The long-term goal of a wide-area IoT network is to deliver data rates from hundreds of bits per second (bps) to tens of kilobits per second (Kbps) with nationwide coverage, a battery life of up to 10 years, an endpoint hardware cost of around $5, and support for hundreds of thousands of devices connected to a base station or its equivalent. The first low-power wide-area networks (LPWANs) were based on proprietary technologies, but in the long term emerging standards such as Narrowband IoT (NB-IoT) will likely dominate this space.”

    7. Low-Power, Short-Range IoT Networks. Short-range networks connecting IoT devices will be convoluted. There will not be a single common infrastructure connecting devices.

    8. Device (Thing) Management. IoT things that are not ephemeral — that will be around for a while — will require management like every other device (firmware updates, software updates, etc.), and that introduces problems of scale.

    9. Analytics. According to Gartner, IoT will require a new approach to analytics. “New analytic tools and algorithms are needed now, but as data volumes increase through 2021, the needs of the IoT may diverge further from traditional analytics,” according to Gartner. The currency of IoT will be “data.” But, this new currency only has value if the masses of data can be translated into insights and information which can be converted into concrete actions that will transform businesses, change people’s lives, and effect social change.

    Source: SIA

    10. Security. According to Gartner, threats extend well beyond denial-of-sleep attacks: those are attacks using malicious code, propagated through the Internet of Things, aimed at draining the batteries of your devices by keeping them awake. According to Gartner, “The IoT introduces a wide range of new security risks and challenges to the IoT devices themselves, their platforms and operating systems, their communications, and even the systems to which they’re connected. Security technologies will be required to protect IoT devices and platforms from both information attacks and physical tampering, to encrypt their communications, and to address new challenges such as impersonating ‘things’ or denial-of-sleep attacks that drain batteries. IoT security will be complicated by the fact that many ‘things’ use simple processors and operating systems that may not support sophisticated security approaches.”


    Source: Security Intelligence

    What is next?

    The market is endless. It’s exciting, but you need to build great software and hardware with a sophisticated backend and multiple levels of security, bring order and sophistication to data, and understand that security is an art that involves cryptography. Most companies don’t have the talent they need to develop secure products.


    Samsung 10nm and 7nm Strategy Explained!

    by Daniel Nenni on 04-23-2016 at 7:00 am

    Samsung Foundry had an intimate gathering recently for 200 customers and partners, which I missed, but I know several people who attended. This event was a precursor to #53DAC, where Samsung has the largest foundry presence. I was able to clarify what I had heard via a phone call with Kelvin Low, so here is my version of what is important:

    Samsung is all in on the foundry business
    Samsung is opening up their 200mm fabs, internal IP, design methodologies (i.e. low power), and related services (packaging) to foundry customers. To me this is a definitive statement as to their foundry commitment. Samsung is not, however, going into the captive ASIC business like TSMC (GUC), UMC (Faraday), GlobalFoundries (Invecas), and SMIC (Brite Semiconductor). Samsung could easily buy an established ASIC supplier like eSilicon, Open-Silicon, or Verisilicon, but Samsung is choosing not to compete with their ASIC partners, which makes complete sense since the other foundries do. I would bet Samsung will get a much larger share of the ASIC business in the not-too-distant future (it’s a safe bet since I have already asked my ASIC friends about this).

    Samsung Foundry is continuing to focus on 28nm FD-SOI
    I saw this at the FD-SOI symposium where Kelvin presented “28FDS – Industry’s first mass produced FDSOI technology for IoT era, with single platform benefits.” Unfortunately the slides are not up yet; I will let you know when they are posted. For China, Samsung is FD-SOI enabling their ASIC partners, which is a great strategy; Verisilicon, for example, is very active in China.

    Key FD-SOI takeaways from the Symposium:

    Proven manufacturability

    • Variability lower than bulk
    • No reliability concerns – all WLR and PLR completed
    • No FD-SOI specific in-line defect generation and systematic failure
    • Proven performance benefits on silicon

    28FDS commercial products are in production

    • Technology deployed in actual products
    • 12 tapeouts in 2015 and >10 tapeouts so far in 2016

    Full foundry support from design to manufacturing

    • Samsung Foundry supports foundation and basic IP
    • Other IP by 3rd party vendors (ARM, Synopsys, etc…)
    • Regular MPWs available for design validation

    28FDS will be a long-lived node

    • Derivative offerings including RF and eNVM
    • Increase reach into new markets (Auto, IoT, Industrial, etc…)

    Samsung Foundry is offering a low cost version of 14nm
    This was not surprising at all given the TSMC 16FFC announcement last year, but I am told that Samsung Foundry LPC (the cost-down version) offers process simplifications (fewer masks) without compromising performance. LPC is also PDK-compatible with LPP for seamless design migration. Thus far Samsung has shipped more than 0.5M 14nm wafers, making them the largest holder of FinFET foundry market share today, and that’s a fact.

    Samsung Foundry 10nm will be in production by the end of 2016
    Samsung is approaching 10nm differently than TSMC. Rather than doing a quick node transition from 10nm to 7nm, Samsung will focus on 10nm as a full node by building out different versions targeted at multiple markets. According to Samsung a “true” 10nm can be done using double patterning thus saving the cost of triple or quad patterning. Samsung does use triple patterning on one of the metal layers but still allows bidirectional routing which is easier to design to.

    Samsung Foundry 7nm will use EUV for cost reduction
    As I was told at SPIE, Samsung will use EUV for 7nm logic before using EUV for memory. An executive from ASML EUV (Dr. Hans Meiling) even presented at the Samsung event to bring everyone up to date. Given that Samsung 10nm will be a full node, delaying 7nm until 2020 (EUV ETA) should not be a problem.

    Bottom line: Samsung is showing significant foundry leadership skills again with FD-SOI and FinFETs. Not only does this greatly benefit the fabless semiconductor ecosystem by giving us more innovative foundry choices, it also benefits the semiconductor industry by continuing to push the cost per gate to affordable levels.


    Enterprise Design Management Engineered for SoCs

    by Don Dingee on 04-22-2016 at 4:00 pm

    In my initial look at ClioSoft’s design management system created from the ground up for the semiconductor industry, I made the opening case for managing and reusing IP across an ASIC design organization. Let’s for a moment say we agree on the need for an enterprise software package to do design management.


    Static Timing Analysis Keeps Pace with FinFET

    by Daniel Payne on 04-22-2016 at 12:00 pm

    At SemiWiki we’ve been blogging for several years now about the semiconductor design challenges of FinFET technology and how it requires new software approaches to help chip designers answer fundamental questions about timing, power, area and design closure. When you mention the phrase Static Timing Analysis (STA), probably the first commercial EDA tool that pops into mind is PrimeTime from Synopsys. I learned more about what’s been recently updated in PrimeTime by talking by phone with Robert Beanland of Synopsys; we’ve kept in touch over the years since we both worked at Viewlogic in the ’90s.

    Synopsys engineers have focused on three major areas of improvement with the latest release of PrimeTime 2015.12: Performance, Accuracy and Productivity.

    Performance
    Like most EDA tools, there is a never-ending demand from engineers to see results quickly, like within the same work day instead of waiting multiple days. One way to get faster results from STA is to exploit multiple CPUs, so the Holy Grail is linear scalability when going from 1 to 2, 4, 8 and 16 cores. Clever engineers at Synopsys have figured out how to eke out a further 2X overall speedup in PrimeTime by using up to 16 cores with the 2015.12 release. With the latest version, comparing 1 core to 16 cores you can expect a speed improvement of 10-15X, pretty close to the ideal speedup.

    Another important performance metric for EDA tools is RAM usage, because running flat STA on a big SoC (designs in the range of 1 billion transistors) can consume 1TB of RAM. A technique called HyperScale allows for less RAM usage, something quite helpful for large designs because HyperScale supports partitioning your design into smaller pieces and distributing them across multiple smaller machines.

    Accuracy
    Faster timing results are great, but only if the accuracy is acceptable. With STA tools there have been two main approaches: Graph-Based Analysis (GBA) and Path-Based Analysis (PBA). GBA produces full-coverage results across the entire timing graph with some pessimism. PBA is more accurate, but it runs on a path-by-path basis, requiring longer run times. From a methodology viewpoint you typically start out running STA with a GBA approach to get results quickest, then near the end of your project use PBA to get the highest accuracy on critical timing paths. With the 2015.12 release, the accuracy of GBA has been improved using Parametric On-Chip Variation, bringing timing results even closer to PBA and, ultimately, HSPICE results.
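    The GBA idea can be sketched in a few lines (a toy illustration with an invented netlist, not PrimeTime's algorithm): propagate worst-case arrival times through the timing graph in topological order, then check slack at the endpoint against the clock period.

```python
# Toy graph-based timing analysis. The netlist and delays are invented;
# real STA adds setup/hold checks, derating, and on-chip variation.
edges = {                      # node -> list of (successor, delay_ns)
    "in1": [("g1", 0.2)],
    "in2": [("g1", 0.3), ("g2", 0.1)],
    "g1":  [("g2", 0.4), ("out1", 0.5)],
    "g2":  [("out1", 0.6)],
    "out1": [],
}

def arrival_times(edges, starts):
    """Worst-case (longest-path) arrival time at each node, GBA-style."""
    order, seen = [], set()
    def dfs(n):                # topological order via depth-first search
        if n in seen:
            return
        seen.add(n)
        for m, _ in edges[n]:
            dfs(m)
        order.append(n)
    for s in starts:
        dfs(s)
    at = {n: 0.0 for n in edges}
    for n in reversed(order):  # relax edges in topological order
        for m, d in edges[n]:
            at[m] = max(at[m], at[n] + d)
    return at

at = arrival_times(edges, ["in1", "in2"])
clock_period = 1.5
slack = clock_period - at["out1"]
print(f"arrival at out1: {at['out1']:.1f} ns, slack: {slack:.1f} ns")  # 1.3 ns, 0.2 ns
```

    The pessimism mentioned above comes from taking the max at every node regardless of which input actually drives the worst path; PBA re-times individual paths to remove it, at the cost of longer run times.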

    Productivity
    Engineers are constantly being given changes to the spec in terms of features and requirements, which leads to the practice of Engineering Change Orders (ECOs). As designs get close to tapeout, one key to managing the tapeout schedule is to tightly control any changes that are introduced to the design and only permit changes which move the design closer to meeting PPA (Power, Performance, Area) targets. Achieving the lowest possible power use can drive ECO changes right up until tapeout. In the area of power ECO capabilities for 14nm FinFET, I was impressed to see that Samsung was able to get a 20% total power reduction using signoff timing to reduce power. This release also supports downsizing, where cells with smaller transistors replace initial cells, plus techniques like Vth swapping. HiSilicon reported that on a recent 16nm FinFET tapeout, PrimeTime provided fast, accurate and predictable design closure, helping them reach performance and power targets.


    Summary
    There are plenty of challenges in designing SoCs with FinFET technology, and users of STA tools like PrimeTime can benefit from the latest release to help meet those challenges with improved performance, accuracy and productivity. Even early designs for 10nm FinFET will benefit from support for the new and complex placement rules that work together between PrimeTime and IC Compiler.

    Related Blogs


    Webinar: How to Implement an ARM Cortex-A17 Processor in 22FDX 22nm FD-SOI Technology

    by Daniel Nenni on 04-22-2016 at 7:00 am

    Who doesn’t like a good webinar? I certainly do, as it is one of the most time-efficient ways to interact with the fabless semiconductor ecosystem, absolutely. Especially when it addresses two of the top trending topics on SemiWiki: ARM and FD-SOI. Here is a quick summary of what you will learn:

    GLOBALFOUNDRIES
    Technical Webinar Series:

    How to Implement an ARM Cortex-A17 Processor in 22FDX 22nm FD-SOI Technology

    Moore’s Law has progressed unabated for decades, pushing the laws of physics and helping to power unprecedented innovation throughout the world. Soon science fiction will become reality, as the fastest, most computationally powerful devices will have transistors consisting only of a molecule and a few atoms. However, the best solution isn’t always the biggest chip with the smallest, fastest transistors. For the mobile, pervasive and intelligent computing space, other factors such as ultra-low-power consumption and RF integration have equal or higher priority. For these applications, GLOBALFOUNDRIES 22FDX platform with 22nm fully depleted silicon-on-insulator (FD-SOI) technology offers optimized, differentiated solutions, with an optimal combination of performance, low power and cost.

    One of the essential building blocks of these apps is a high-performance, low-power processor. This webinar outlines the physical architecture considerations and physical design steps of implementing an ARM Cortex-A17 quad-core processor in 22FDX FD-SOI technology, including:

    • Digital implementation flow with industry-standard EDA tools
    • Application of body-bias for specific design intents and scenarios
    • Initial PPA results of an ARM Cortex sub-module
    • Analysis of details and results, including comparison to a 28nm implementation

    Adopting a technology platform usually includes a new design flow. Not in this case, since the 22FDX digital design flow is similar to the bulk flow with support from all of the major EDA vendors. The flows use EDA techniques (implant-aware, source/drain-aware, double patterning, UPF support) which have been deployed on earlier nodes. This case uses the Cadence tool suite from initial design creation to signoff.

    GLOBALFOUNDRIES design IP for the ARM Cortex-A17 processor includes standard cell base cells, power management cells, and cache memory instances, each with support for body-biasing. Strategic use of software-controlled, dynamic body-biasing enables specific application scenarios and optimization criteria to be applied on a block-by-block basis, resulting in optimized tradeoffs of performance and power. Sample scripts show how this is done.
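    The idea of software-controlled, dynamic body-biasing can be sketched in a few lines. The scenario names, voltage values, and function below are purely illustrative assumptions for this post, not GLOBALFOUNDRIES' actual scripts or silicon parameters:

```python
# Hypothetical sketch of per-block body-bias selection by scenario.
# Forward body bias (FBB) raises speed at the cost of leakage;
# reverse body bias (RBB) cuts leakage at the cost of speed.
# All names and voltages are illustrative, not real 22FDX settings.

SCENARIOS = {
    "burst_performance": {"bias": "FBB", "vbb_volts": 0.8},
    "nominal":           {"bias": "none", "vbb_volts": 0.0},
    "standby":           {"bias": "RBB", "vbb_volts": -0.8},
}

def bias_for_block(block_name: str, scenario: str) -> dict:
    """Return the body-bias setting a power-management routine
    might apply to one block for the given scenario."""
    setting = SCENARIOS[scenario]
    return {"block": block_name, **setting}

# A standby routine could reverse-bias the CPU cluster to cut leakage:
print(bias_for_block("cpu_cluster", "standby"))
```

    The point is simply that the bias target is a per-block, per-scenario software decision, which is what lets performance and power be traded off block-by-block.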

    The concept of an optimizable technology platform is great, but PPA results are what really count. How do the performance and power consumption of 22FDX 22nm FD-SOI with body-bias compare to 28nm bulk technologies? Optimized for speed, this implementation shows ~30% higher frequency at the same power; optimized for power reduction, it shows ~45% lower power at the same frequency. Both optimizations show ~45% area reduction. The implementation of an ARM Cortex-A9 sub-module based on an initial release of the Invecas 8-track continuous RX standard cell library shows a significant boost in frequency and power efficiency compared to 28SLP.
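    To make those ratios concrete, here is a back-of-the-envelope calculation. The 28nm baseline numbers are arbitrary placeholders; only the percentage improvements come from the quoted results:

```python
# Arbitrary 28nm bulk baseline (placeholder values, not real silicon data)
freq_28nm_mhz = 1000.0
power_28nm_mw = 500.0
area_28nm_mm2 = 2.0

# Optimized for speed: ~30% higher frequency at the same power
freq_22fdx_mhz = freq_28nm_mhz * 1.30

# Optimized for power: ~45% lower power at the same frequency
power_22fdx_mw = power_28nm_mw * (1 - 0.45)

# Both optimizations: ~45% smaller area
area_22fdx_mm2 = area_28nm_mm2 * (1 - 0.45)

print(round(freq_22fdx_mhz), round(power_22fdx_mw), round(area_22fdx_mm2, 2))
```

    With a 1 GHz / 500 mW / 2 mm² baseline, the quoted ratios work out to roughly 1.3 GHz at the same power, or 275 mW at the same frequency, in about 1.1 mm².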

    The 22FDX platform is ready to adopt for new designs, with the starter kit of the 22FDX digital design flow available now. More information, including webinars and white papers, is available at GLOBALFOUNDRIES.com/22fdx.

    Also Read: ARM and FD-SOI are like Peanut Butter and Jelly!


    Feeding the Startup Cycle

    Feeding the Startup Cycle
    by Zach Shelby on 04-21-2016 at 12:00 pm

    I am a technologist, an entrepreneur and most recently an angel investor. As I have announced my investments in promising young companies over the last couple of years, many people have asked me why. Isn’t the stock market easier (well…), isn’t that risky (yep), what does that mean for your role at ARM (business as usual), how do you choose a company, and so on? Maybe a little background first.

    For me, being an entrepreneur was a natural choice, something picked up from my father and a drive to succeed at building new things. I spent the first decade of my career creating new technologies for the Internet of Things, and like most technologists, became frustrated when big companies in the mid-2000s did nothing with the technology. My solution was to go and deploy the technology myself, and my first technology startup Sensinode was born in 2005. We had a big vision – bring Internet and Web technology to embedded devices, and really create a scalable platform of innovation instead of the silos of lock-in we had in automation systems at the time.

    What I didn’t realise at the time was that the resistance we were experiencing to the adoption of IoT was a disruption point. Startups are a great way to change an industry, and in the best cases change the world, through the application of new technologies or business models that the status quo isn’t ready for (crossing the disruption point). We succeeded with Sensinode by being an early innovator, not growing too fast, and having plenty of luck. In 2013 we had a successful exit to ARM. For me, the most exciting and fulfilling thing was being able to help realize a vision and then find a home where it could scale. In our case it was helping to realize the Internet of Things, for which ARM mbed is an awesome home.

    It took a little inspiration before I built up the courage to become an angel investor – could I really help other startups? Was the time and risk manageable? Why? That all changed for me in December 2014, at the Nokia Foundation awards, when I heard Jorma Ollila (former CEO of Nokia) explain why he personally donated millions of Euros to provide grants for technology graduate students. His logic was simple and inspiring: as a university student, a similar grant had allowed him to do graduate studies in the UK, which he felt contributed to the success of his career. He was feeding a positive circle; kiitos (thank you), Jorma. I realised that the angel investors in my own company, in particular Vesa Raudaskoski (Nokia, Elektrobit, Eden Rock), played a key part in helping us succeed (and in keeping our sanity).

    For me investing in startups is about playing my part in the positive startup cycle of the technology industry (and it keeps life exciting!). If I can help startups succeed through encouragement, my experience and early funding, then I’m helping what makes Silicon Valley such a powerful centre for innovation. My startups have their roots in Finland with international plans, as I find Finland to be one of the best startup scenes on the planet. Great technical resources, reliable people, a positive attitude to startups, reasonable cost, and right-sized VC. And hey, we’re the home of Slush, the biggest startup event in the world 🙂

    My first investment was in Augumenta, a company bringing natural gesture recognition to Augmented Reality (AR) industrial applications. Where VR can be compared to the PC, AR is the mobile phone of a new age of computing. More recently, I invested in CubiCasa, who created a technology platform for indoor floor plans and have already achieved a scalable business – content that will some day be used in navigation, AR and VR. Just last week I closed on an investment in a fast-growing company helping to save energy for entire industries (stealth for now).


    For my next project I plan on realising something a little bigger and more personal. As a kid I had the opportunity to play with a lot of technology, building solid-state electronics and sensors in the garage, running a BBS and coding my much-beloved C64. I would like to bring that same opportunity to every child in Finland – something that fellow technologist Jaakko Ala-Paavola, CTO of Espotel, and I are actively working on. Stay tuned!