
Windows 10 Anniversary Update May Finally Bring New Consumer Hardware Sales
by Patrick Moorhead on 09-25-2016 at 4:00 pm

Ever since Microsoft launched Windows 10, they have primarily been focused on getting people to upgrade their existing machines from Windows 7 or 8 to Windows 10, in addition to promoting their Surface line. The upgrade program allowed users of both operating systems to upgrade to Windows 10 for free for a year. That all ended on July 29th, just a few days ago. In that year, Microsoft took Windows 10 from zero to 350 million monthly active users. That is a pretty astonishing number, but still not good enough for Microsoft to hit their goal of 1 billion monthly active devices, which has since been revised.


Windows Anniversary Update focuses on PC hardware upgrades
For Microsoft to achieve their goal of 1 billion monthly active devices, they needed to change their strategy from encouraging upgrades of software to encouraging upgrades of hardware. Sure, some users will switch from one Windows 10 device to another, but there is also a good chance that a buyer's current device is not running Windows 10 quite yet. Microsoft plans to entice more people to switch to Windows 10 with a major new update that drives users to upgrade their hardware and not just the operating system itself.

These capabilities arrive with Microsoft's Anniversary Update to Windows 10, which marks the operating system's first year in existence (that anniversary is tomorrow) and brings a major update to many components of the operating system. This update is part of a series of software releases that Microsoft has been working on since the announcement of Windows 10, and it follows a cadence that I expect the company will continue in order to make Windows 10 the best operating system possible. This is also another milestone in Windows as a Service for consumers.

Focus on usability, security, productivity, design

The Windows 10 Anniversary Update will be available for free to everyone running Windows 10, now and into the future. The update improves usability, security, productivity and design. There are many upgrades across the board, and many of them appear to encourage users to upgrade their hardware as a means to enable these new Windows 10 functionalities. Microsoft is making major upgrades to things like Cortana, Edge, Xbox gaming, Tablet Mode, the Windows Store, Action Center, the Start Menu and many other areas. Many of these upgrades are going to entice users to upgrade their hardware to more capable systems that best utilize these new functionalities within Windows 10.

Windows Ink

One of the biggest features being added to Windows 10 that could drive hardware upgrades is Windows Ink. Windows Ink is a new feature that pairs digital pen technology with the latest touch screen technology to enable new Windows experiences and functions. Windows Ink is something that Microsoft showed me many months ago when they first teased this Anniversary Update. Back then it showed a lot of promise, but now I can see how it will make the user experience more natural by enabling the use of a pen for quick note taking and annotations in productivity applications like Excel, Word and PowerPoint.

The Windows Ink capability of drawing on applications will not be limited to Office applications either; it will be available in first-party Windows applications like OneNote, Maps and Edge. Microsoft would also have its feet held to the fire if it didn't enable third-party applications, which currently include Adobe Illustrator, Drawboard and Fluidmath. Having strong first- and third-party titles means that Microsoft can show users different applications of the Windows Ink capability and entice users and developers to want to own Ink-capable devices to create new experiences and content. As a result, Windows Ink has huge potential to increase the demand for pen-enabled devices and make touch screens a must. I am not a huge pen user, but research suggests that for 2-in-1s, pen is a very desired (top 3) feature for buyers.

Windows Hello improvements
Another feature of the Anniversary Update for Windows 10 is the set of improvements to security through Windows Hello. Previously, Windows Hello was mostly limited to logging in to a computer and enabled biometric or facial recognition on certain devices. These capabilities were limited to a few devices, but they are going to see increased adoption from Windows OEMs thanks to the new Windows 10 Anniversary Update features. Some of the new features include the ability to work in applications and websites like iHeartRadio and Dropbox, plus the ability to authenticate with websites that are FIDO-compliant through the Edge browser.

There will also be the ability to unlock your PC through nearby devices like the Microsoft Band, where a user may simply need to come near a computer to unlock it. The improved usability and security capabilities that Windows Hello gains in the Anniversary Update are poised to give users more reasons to upgrade their hardware, since they are unlikely to have a Windows Hello-compliant fingerprint sensor or camera. We could actually see these new features driving improved attachment rates for things like Intel RealSense cameras and Synaptics fingerprint devices like its SecurePad, external security adapter and “Iron Veil” mouse. To get the full appreciation for Windows Hello, you need to try it for yourself. It is transformational and makes the PC experience much more enjoyable... and potentially safer.

Gaming improvements
There are other improvements like Microsoft’s unification of Xbox and Windows 10 gaming, which is part of the Windows 10 Anniversary Update. As we saw at E3, games will no longer be tied to being Xbox exclusives; titles will be available as ‘Xbox and Windows 10’ exclusives, which means that users will be able to own one title and play it freely across different platforms. This may encourage gamers who are away from their Xboxes to want to play those same games on the go. That will mean they need to upgrade their hardware and software, very likely with a new Windows 10 PC with DirectX 12 capabilities. The new Windows 10 machines will be able to support these new capabilities and deliver console-class gaming on the go without any loss of experience to the user. Net-net, if you want to play DirectX 12 titles enabled by Windows 10, you need new hardware that supports it.

Cortana improvements
Even the improvements coming to Cortana, including the ability to work above the lock screen, may require new hardware with better microphones. This update finally delivers a lot of the features that may drive users to consider upgrading their machine even if it is already running Windows 10. There are a lot of new and innovative designs that enable these new experiences and capabilities, be they touch, convertible 2-in-1, biometric security or just a thinner form factor. Many users might not even think about it, but they will very likely also experience much better performance and battery life by upgrading to these new machines built for Windows 10. The Windows 10 Anniversary Update may finally be the version of Windows 10 users and OEMs have been waiting for. To have the best Cortana experience, you will need microphones tuned for the experience so that Cortana is useful from far-field or even mid-field distances.

Wrapping Up
So far on the consumer side, Microsoft has spent most of their effort on Windows 10 getting consumers to upgrade their operating systems from Windows 7, 8, and 8.1 to 10 on older hardware and spent most of their promotion dollars getting consumers to buy Surface.

For Windows 10 Anniversary Edition, Microsoft has invested more time in providing or improving features that work better with new consumer hardware, like Windows Ink, Windows Hello, and Cortana. To be fully successful and drive consumer hardware sales, the key will be Microsoft and PC OEM marketing programs and spend that makes the case to buy the new hardware based on the new or improved experiences. This has been tough going so far, hence the depressed consumer PC sales, but if what I am hearing is correct, at least the marketing spend will be there.

I’m optimistic about new consumer PC sales based on what I have personally experienced with Anniversary Edition in the Windows 10 Insider Program and talking with Microsoft and PC OEMs, but am not “Babe Ruthing” it, either. With Windows Anniversary Edition, this is the best opportunity in 5 years to drive new consumer PC sales.


The renegade whose dream started the latest space race
by Vivek Wadhwa on 09-25-2016 at 12:00 pm

Elon Musk’s company SpaceX and Jeff Bezos’s Blue Origin have grabbed the headlines in the space race, both of them building rockets and spacecraft. But there is a fascinating backstory to how this race and the private space industry came into existence. It is the tale of a renegade entrepreneur, Peter Diamandis, who founded the XPRIZE foundation to incentivize the building of rockets in order to find a way into space himself.

Journalist Julian Guthrie’s brand-new book, How to Make A Spaceship: A Band of Renegades, An Epic Race, and the Birth of Private Spaceflight (Penguin, Sept. 20, 2016), tells that story. It reads like a thriller, and it reveals many secrets.

In 2011, Diamandis recruited me to be head of academics at a futuristic think tank, Singularity University, that he and Ray Kurzweil had founded. I am in awe of him and have a bias. Nonetheless, I have no doubt that the lessons in the book will resonate with today’s entrepreneurs, engineers, and adventurers, because they illustrate how the impossible can be made possible.

As Guthrie narrates the story, and as I have heard from Peter Diamandis on several occasions, it all started with Apollo 11’s landing on the Moon, in July, 1969. Watching this, Diamandis became determined to fly into space himself and organized his life around that dream. He completed two degrees from MIT and a medical degree from Harvard — not so he could practice medicine, but to enhance his chances of getting into the Astronaut Corps. Then he founded a national student space club that Jeff Bezos was chapter head of at Princeton; an international space university; and a rocket company. When NASA began to wind down manned space flight, Diamandis realized the government wouldn’t get him where he wanted to go and he would have to do it on his own.

He found his inspiration for a modern-day space race in an unlikely place: During the golden age of aviation, French hotelier Raymond Orteig offered a $25,000 prize to the first person to fly non-stop between New York and Paris. Several unsuccessful attempts were made before an American airmail pilot named Charles Lindbergh won the competition in 1927 with his plane, the Spirit of St. Louis, galvanizing creation of the commercial airline industry. Nine teams competed for the $25,000. Between them, they spent around $400,000 — 16 times the value of the cash prize. Diamandis marveled: “Orteig didn’t spend one cent backing the losers. By using incentives, he automatically backed the winner … great return on his money”.

In May 1996, Diamandis launched a $10 million prize for the first non-governmental team to build and fly a manned rocket to space twice within two weeks. When he announced the prize, tentatively titled the XPRIZE, he didn’t have the money. He did what entrepreneurs do — made a big promise with the hope all would work out. It took him years to raise the money, and it came from the most unlikely of sources: an Iranian woman, Anousheh Ansari, who had just sold her company and shared his dream of going into space.

When the XPRIZE was announced, only the world’s three largest governments — those of the U.S., Russia, and China — had launched people into space. The XPRIZE soon had 24 teams from more than a dozen countries competing. Across the globe, engineering students scraped together money and resources to try to build a manned space program. Space scientists ignored ribbing from colleagues who said the dream of private space was impossible. Retirees, working in rice fields in Texas, built engines and rockets. A famous programmer named John Carmack (now CTO of Oculus Rift) decided to try to do for aerospace what he had done for video games.

And in the Mojave Desert, an airplane designer named Burt Rutan, who had secretly attracted funding from a billionaire — Microsoft cofounder Paul Allen — began his covert space program. He had fewer than 30 engineers working on the spaceship. As with breakthroughs that came later with the Internet, personal computers, and smartphones, wherein failures were expected and iterations were the norm, Rutan began by throwing foam models off the Mojave tower, created on the basis of doodles on a napkin. With every new type of plane, Burt plotted and planned and worked out hundreds of details in his mind before testing anything in a computer. There was never an epiphany, a single “aha” moment; only iteration after iteration, layer after layer. How these foam models led to the world’s first private spaceship, SpaceShipOne, is one of the great entrepreneurial adventure stories of our time.

Rutan won the $10 million XPRIZE in 2004. Richard Branson bought the rights to the technology and is developing SpaceShipTwo to fly paying passengers to the edge of space. Elon Musk, who met Diamandis in the spring of 2001 and was inspired by the XPRIZE, has hit one milestone after another and hopes to take NASA astronauts to space beginning next year. Bezos, who met with Diamandis in the early days of the XPRIZE, is also making history with his own suborbital spacecraft.

The story sounds incredible — from the pages of science fiction. And it has a happy ending. But as all entrepreneurial ventures go, nothing went according to plan: It was riddled with failure and disappointment, ugly battles broke out between friends and founders, the world often looked like it was coming to an end, and Diamandis had to gamble everything he had.

Most interesting is an observation Branson makes in the book’s foreword: There isn’t much of a difference between being an adventurer and an entrepreneur. As an entrepreneur, you push the limits and try to protect the downside. As an adventurer, you push the limits, and protect the downside — which can be your life.

For more, follow me on Twitter: @wadhwa and visit my website: www.wadhwa.com


Network vs. Platform
by Sudeep Kanjilal on 09-25-2016 at 7:00 am

So, how do we define these terms? What is the core difference? Why does it matter – what are the implications for business models? And finally, which firms today embody these models?

First, the basics. Digital business models are inherently exponential in terms of value generation. Data drives a powerful flywheel effect, and these business models often ‘tip over’ the markets and industries they operate in. I wrote about this at length in my previous blog (Digital – what is different about it?).

There are, however, three kinds of ‘exponential growth models’ corresponding to different digital business models. It is important that firms get that distinction right as they pursue their digital strategy. And I think this is particularly important for established Fortune 500 firms, as they have a large existing consumer base, an established set of service/product offerings, and established digital channels. For them, it’s not just a product strategy – it’s a product, vendor, supply-chain and channel strategy all rolled into one!

A digital firm can therefore opt for one of three basic models – Network, Marketplace and Platform. The basic difference between them is the nature of the interactions between the firm and its consumers, and among its consumers.

Network is the simplest model, with value determined by broadcast reach. Telecommunication networks, Skype, and social networks like Facebook are the simplest forms of network: they are typically homogeneous, operate in an N×N mesh, and are bidirectional. The value of these networks is determined by Sarnoff’s law: V = f(N).

Marketplace is the next evolution of this model. These digital models are heterogeneous, operate in a one-to-many mode, are bidirectional, and provide a foundation for multiple businesses to come together and operate. The value generation of this model is a scale higher than the simple network model (Metcalfe’s law): V = f(N^2).

Microsoft (or rather, WinTel) was the original founder of this model (hence their ‘tipping over’ of the personal computing market in their favor, cornering 95% of the industry profit). eBay, Craigslist and others then proceeded to create these marketplaces on the web. Facebook extended its original network model to the Marketplace model with Facebook Marketplace, and so has Google Shopping. The most famous example of this strategy is, of course, iTunes – which Apple successfully leveraged to tip the smartphone market in its favor (and walk away with 95% of the industry profit).

Platform is, however, the highest form of evolution of digital business models. It builds on the marketplace model but creates marketplaces on top of marketplaces: sub-groups of users come together, homogeneously and/or heterogeneously, to create their own marketplaces. It requires an open platform, so that the user community can extend the original platform with new functionalities and add new capabilities. The economic value generation is exponential (Reed’s law): V = f(2^N).
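To make the scaling difference concrete, here is a minimal sketch (my own illustration, not from the original post) comparing how value grows under the three laws, using the simplest proportionality f(x) = x:

```python
# Illustrative only: compare how "value" scales under the three network laws
# discussed above, using the simplest possible proportionality (f(x) = x).

def sarnoff(n: int) -> int:
    """Network model: value grows linearly with users, V ~ N."""
    return n

def metcalfe(n: int) -> int:
    """Marketplace model: value grows with possible pairings, V ~ N^2."""
    return n ** 2

def reed(n: int) -> int:
    """Platform model: value grows with possible sub-groups, V ~ 2^N."""
    return 2 ** n

for n in (10, 20, 30):
    print(f"N={n:>2}  Sarnoff={sarnoff(n):>4}  Metcalfe={metcalfe(n):>5}  Reed={reed(n):>13,}")
```

Even at modest N, the group-forming (Reed) model dwarfs the linear and quadratic ones, which is the essence of why platforms tend to tip markets.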

The early adopters of this model are WhatsApp, Facebook Messenger, and WeChat in China. Google, a firm I would normally expect to lead this race, is absent. Apple is trying its hand at this, but early results are desultory, as this will require Apple to open its platform, something it does not do well.

Where does this leave the established consumer-facing Fortune 500 firms? Frankly, except for a very few – nowhere. Even early pioneers like Chase and Capital One are really struggling to define the true nature of the digital platform they are attempting to develop and roll out, and I suspect they are reticent about the scale of operational change they will have to make in the way they roll out products, manage channels, and interact with partners and clients. But some are trying.

The change will come, make no mistake about it. It has already come in retail (Amazon), telecommunication (Apple), digital advertising (Google), and digital media (Facebook) – in each of these verticals, the market tipped over and got reshaped by one dominant firm. It has not yet played out fully in these verticals, as each of these firms has not yet transitioned from the Marketplace to the Platform model. Perhaps they will make that transition and retain leadership status; perhaps a new startup will move faster and displace them. Current size and market dominance, however imposing, is no defense against new digital models – just ask Nokia.

But the most wrenching change will come to the other consumer verticals – banking, various retail categories, hospitality, travel, etc. Most of these firms are not yet digital, or operate only in the most rudimentary digital business model. Banks, in particular, believe they are ‘protected’ by regulations – and yet…

We are in for a very interesting ride ahead!


Mentor Webinar on Power Exploration for Optimizing Power
by Bernard Murphy on 09-23-2016 at 8:00 pm

There are a lot of clever techniques to automatically find and even implement register gating and memory gating, but the bulk of power saving still depends on designer and architect insight based on the expected range of use of a device, complemented by practical use-case simulations. Of course, this team needs to be able to see total power and breakdowns by type (switching, leakage, etc.) and by region in the design in order to understand which areas need help and which fixes might provide the most useful reductions.

This webinar is on September 27th at 9am Pacific. Register now!

The design team needs to be able to experiment quickly with possible power-saving options to see which might have the most significant impact. This requires the ability to run through a lot of experiments in a short time to compare and contrast options, especially since some options will have performance, area or latency impact. Obvious experiments include the following (a rough what-if sketch appears after the list):

  • Changing the Vt mix to reduce leakage in a block which is not very performance-sensitive
  • Adding power-switching or DVFS to a block which use-cases show can handle the latency incurred in powering back on, or the reduction in performance at lower V/f.
  • Adding higher-level clock gating to power down more logic in inactive modes.
  • Adding gating to data and address ports on memories, from which a surprising level of power savings can be milked.
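As an illustration only (the numbers and option names below are hypothetical, not output from Mentor's tools), this kind of coarse what-if comparison can be sketched as follows:

```python
# Hypothetical, back-of-the-envelope what-if comparison of power-saving options.
# All numbers are made up for illustration; a real flow would use RTL power
# estimates from a tool such as PowerPro, not constants like these.

baseline = {"dynamic_mW": 120.0, "leakage_mW": 40.0}

def apply_option(power, dynamic_scale=1.0, leakage_scale=1.0):
    """Scale dynamic and leakage power by the estimated effect of an option."""
    return {
        "dynamic_mW": power["dynamic_mW"] * dynamic_scale,
        "leakage_mW": power["leakage_mW"] * leakage_scale,
    }

options = {
    "higher-Vt mix (non-critical block)": apply_option(baseline, leakage_scale=0.5),
    "coarse clock gating in idle modes":  apply_option(baseline, dynamic_scale=0.7),
    "memory address/data port gating":    apply_option(baseline, dynamic_scale=0.85),
}

for name, p in options.items():
    total = p["dynamic_mW"] + p["leakage_mW"]
    saving = (sum(baseline.values()) - total) / sum(baseline.values()) * 100
    print(f"{name:38s} total={total:6.1f} mW  saving={saving:4.1f}%")
```

The point is not the specific numbers but the turnaround: each candidate option should be cheap enough to evaluate that many can be compared before committing to one.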

These experiments can show significant opportunities for power saving but are generally too complex to be proved formally. That’s why these decisions generally require design judgment based on a broad sampling of simulation use-cases and what the design team believes should bound normal usage. That said, I know at least some design teams will make these fixes and also build in software-controlled bail-out options to selectively disable gating if “normal” usage at a customer proves to exceed the bounds they expected 😎

The central value of a solution like this is sufficiently accurate power estimation (within 15% of gate-level estimates), good visualization of power broken down by type, by location in the design and with some granularity in time, and fast turnaround to get delta improvements based on what-if choices in the design architecture.

REGISTER HERE

Overview
Power consumption impacts multiple applications and markets such as handheld, workstations, and servers, to name a few. Because the greatest opportunities for optimizing power are at the micro-architecture and RTL design stages, the PowerPro Platform provides an interactive approach to power exploration and power reduction at RTL. This platform uses a unique architecture that eliminates iterations through simulation and synthesis, giving immediate power feedback. This web seminar will show how PowerPro easily identifies where power is wasted, from the micro-architectural level to memory, register and combinational elements. The platform provides “what-if” analysis, interactively assessing the impact on power due to potential design transformations.

What You Will Learn

· Survey results: power reduction – who is doing it and when
· Overview of the PowerPro platform
· Interactive RTL analysis and exploration environment
· 3 detailed examples of RTL power reduction guidance
· Optimizing further with design exploration
· Examples and customer results

ABOUT THE PRESENTER

Stuart Clubb
Before re-joining Mentor Graphics in 2015, Stuart managed the North American FAE team for Calypto Design Systems. Moving from the UK in 2001 to work at Mentor Graphics, Stuart held the position of Technical Marketing Engineer, initially on the Precision RTL synthesis product for 6 years and later on Catapult for 5 years. He has held various engineering and application engineering roles in ASIC and FPGA RTL hardware design and verification. Stuart graduated from Brunel University, London, with a Bachelor of Science.
Who Should Attend
· RTL designers
· Power architects
· Project managers
Products Covered
· PowerPro RTL Low-Power


The Virus of Car Ownership
by Roger C. Lanctot on 09-23-2016 at 4:00 pm

What if we all looked at driving as less of a right and more of an addiction, a disability, or a disease to be avoided, cured or overcome? What if driving were seen as a menace to society draining lives, money and time from the economy? What would our public policy priorities become in this new context?

Sweden isn’t waiting to find out. The country concluded its first experiment in vehicle-less living at the end of last year and is pondering the second phase of its exploration of a less vehicle-focused way of life.

While Lyft CEO John Zimmer recently chimed in that automated driving will obviate the need for car ownership within five years, neither Zimmer nor Uber’s Travis Kalanick have demonstrated yet that they can make money with their ride hailing model with or without drivers. It’s not likely that they will be successful with automating driving – certainly not profitably.

The beauty of Sweden’s UbiGo test is its emphasis on paying for whatever transportation you use or need via a single mobility subscription. Car companies are attempting to embrace this vision, as in the case of Daimler’s Moovel, but they may not perceive the ultimate consequences.

Within the UbiGo pilot in the Swedish city of Gothenburg, 70 paying households relied on the test version of UbiGo for their everyday travel for six months. The UbiGo service combines public transport, car sharing, a rental car service, and a taxi and bicycle system – all in one app, all on one invoice, and with 24/7 support and bonus points for sustainable choices.

Some cities have seen fit to offer incentives for residents to use ride hailing services, suggesting that cities, like San Francisco, see the merit in prying human beings away from and out of their cars by paying them. But UbiGo takes this process to another level.

Unlike congestion charging used in cities like Stockholm and London, where drivers simply pay more to bring their cars into designated urban zones, UbiGo creates a system of incentives to reward non-drivers. (Truly, the next step is support groups where determined drivers will be treated like addicts, severely sanctioned or penalized or perhaps actively shunned by “cured” no-longer-drivers.)

Today car makers are playing along with investments in ride hailing and car sharing services seeking to increase the transportation options for city dwellers and visitors. But the likely long-term outlook is an urban environment ruled by shared public transportation resources.

In five years, Uber and Lyft may no longer be around as existing taxi providers – newly appified and reliably profitable – regain command of ad hoc transportation with or without drivers. The real revolution, though, lies in looking at driving as an addiction, a disability or a disease.

Cities around the world are overwhelmed with cars. Multiple-day traffic jams of the type seen in China will soon swamp the likes of Sao Paulo, New York, Mumbai and Paris. There has to be a better way and Sweden is in the forefront of this innovation.

You can learn more here and see for yourself whether it changes your thinking:
http://www.ubigo.se/published-papers/


The CIA, NSA, and Pokemon Go?
by Kevin Kostiner on 09-23-2016 at 12:00 pm

So it’s finally out, the truth about Pokemon Go (and probably the rest of the app-based mobile gaming world), and it’s a shocking, painful truth that will pretty much destroy the industry and force people back to their sedentary, solitary lives in front of their computers. And just as the average weight of an American was starting to drop. So sad! Before continuing, you need to read the article below for context.

“The CIA, NSA and Pokemon Go” by Bryan Lunduke, Network World (from IDG)

So Pokemon Go is really a CIA covert op. Personally, I’m shattered. I had finally come to accept that it was safe, once again, to be outside and involved in the augmented reality world that we live in.

So from the shocking facts exposed in this article about Pokemon Go, one can infer all of the following:


  • The US military can recruit on high school campuses…whether parents agree to this or not! To be different, the CIA is using mobile gaming to secretly identify potential recruits…so what’s wrong with that? Imagine 13 year old covert operatives working in Iran, how innovative!
  • Pokemon Go is actually a covert means for the CIA to track your past and present movements while learning all about your daily habits.
  • Edward Snowden’s trove of NSA data actually includes the great Pokemon Go conspiracy, he’s just too busy hunting for a Charizard in Russia to release those pages.
  • The CIA can read, modify and delete any files on your device through the game. So what’s wrong with that? You have too many photos on it anyway. The government pretty much takes care of everyone anyway, so why not let them manage your storage issues as well!
  • The camera on your device is now an extension of the CIA’s vast monitoring network. Consider it your civic duty in helping protect your community as you capture another Mewtwo in the neighborhood park.
  • Since the CIA can also see what you’re looking at right now while knowing exactly where you are, consider the importance of spending more time in front of your computer and NOT outside! Maybe being inactive and overweight has its advantages. Thank you, CIA!
  • The CIA will know what you look like…all the time. So no more going the entire day without brushing your hair and making yourself presentable. Don’t be selfish, consider the impact on CIA case agents having to see your unkempt face every day.
  • While the CIA will monitor everything about you through the game, they will also have intimate knowledge of where the best Pokemon are located, since they’re monitoring them as well. So follow your phone’s lead and it will take you to all the right places.
  • It’s good manners to show gratitude to the CIA every time you level up in the game. So periodically whisper a thank you into your phone….they’ll hear you!

    Pokemon Go is only one of many mobile gaming apps. Consider how busy the CIA must be right now trying to manage their covert operations across so many platforms. You may want to take seriously their recruiting efforts and apply for a job at the agency. Who knows, you might get the master key to all Pokemon as you spy on gamers around the world. Now that would be fun!

    Postscript: Note to CIA monitor who just read this post….I’m joking!!! Hopefully the CIA has a sense of humor!


    Demystifying IoT – The 15 key building blocks of an IoT solution
    by Padraig Scully on 09-23-2016 at 7:00 am

    IoT solution development is complex. In many cases, development entails combining expertise from a number of different areas such as embedded system engineering, connectivity solution design, big data handling, application development, and data encryption techniques. Each area demands a specific array of competences and proficiency to function within its own realm. Furthermore, a varied skillset with diverse knowledge is required to develop a complete solution that blends offerings across all of these areas. But within these areas what are the key building blocks of an IoT solution?


    The breakdown of IoT solutions into key building blocks was recently analyzed as part of an industry white paper published by IoT Analytics with the title “Guide to IoT solution development”. In the paper, the analysts discuss the IoT Solution development process across 5 major phases:

  • Business case
  • Build vs. Buy Decision
  • Proof of Concept
  • Piloting
  • Commercial Deployment

    According to the paper, developing end-to-end IoT Solutions involves multiple layers that fuse together various components. In many cases OEMs are unaware of the complexity in IoT Solution Development.

    “When we started our IoT implementation effort we had no clue what we needed and who to approach – to be honest, we didn’t even know what we were looking for.” IoT Project Manager at a Machinery OEM.

    The paper outlines how IoT needs to be thought through from end-to-end or device-to-cloud. On a high level there are 5 major layers of an IoT solution including one cross-layer: Device, Communication, Cloud Services, Applications, and Security.

    1. Device layer:
    Adding MCUs and firmware to basic hardware (e.g., sensors and actuators) creates “simple” connected devices. Adding MPUs and OSs makes these connected devices “smart”.

    2. Communication layer:
    Enabling communication to the outside world through various connectivity networks gives the devices a “voice”.

    3. Cloud Services layer:
    Ingesting, analyzing and interpreting the data at scale through cloud technologies generates “insights”.

    4. Application layer:
    Connecting and enhancing these insights to the greater ecosystem through a system of engagements enables “action” through a vast range of new applications and connected services.

    5. Security cross-layer:
    Securing an IoT solution is an element of such importance that it merits an established “foundation” in each of the other building blocks.

    Each layer is made up of components that bring the end-to-end solution seamlessly together.
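    As a purely illustrative sketch (all names and values here are hypothetical and not taken from the white paper), the layering can be pictured as a single sensor reading travelling from device to cloud:

```python
# Hypothetical end-to-end sketch of the layered IoT model described above.
# The device layer produces a reading, the communication layer serializes and
# "transmits" it, and the cloud-services layer ingests and analyzes it.
import json
import statistics
from dataclasses import dataclass, asdict

@dataclass
class Reading:          # Device layer: a simple sensor sample
    device_id: str
    temperature_c: float

def transmit(reading: Reading) -> str:
    """Communication layer: serialize for the connectivity network
    (a stand-in for MQTT/HTTPS over a secured channel)."""
    return json.dumps(asdict(reading))

cloud_store = []        # Cloud-services layer: storage/database stand-in

def ingest(payload: str) -> None:
    """Cloud-services layer: ingest one transmitted payload."""
    cloud_store.append(json.loads(payload))

def analyze() -> float:
    """Basic analytics: average temperature across ingested readings."""
    return statistics.mean(r["temperature_c"] for r in cloud_store)

for t in (21.5, 22.1, 23.0):
    ingest(transmit(Reading("sensor-001", t)))

# Application layer: present the insight (a dashboard would go here).
print(f"Average temperature: {analyze():.2f} C")
```

    In a real deployment each of these stand-ins would be one of the commercial components compared in the white paper, with the security cross-layer applied to every hop.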

    IoT Analytics’ 2016 IoT platforms market report reveals that some companies offer more components than others and together with their partner ecosystem some can provide complete end-to-end IoT solution support. However, with 360+ competing platform providers in the market today it can be difficult to understand what they really offer. To assist companies in better understanding the offerings of IoT solution providers, the IoT Analytics white paper showcases a high-level comparison of 8 major IoT solution providers including Microsoft, Amazon, IBM, Intel, GE, Google, PTC and SAP. The comparison breaks down each layer into components and highlights examples to create a clearer picture, for example:

    1. Device:
    Operating System: Offers low-level system software managing hardware and software resources and providing common services for running system applications e.g., Windows 10 IoT.
    Modules and Drivers: Offers adaptable modules, drivers, source libraries that reduce development and testing time e.g., AWS IoT Device SDKs.
    MPU / MCU: Offers multi-purpose programmable electronic devices at microprocessor or microcontroller level e.g., Intel Atom processors.

    2. Communication:

    Connectivity Network / Modules: Offers connectivity network / hardware modules enabling air interface connectivity e.g., AT&T M2M, Telit IoT Modules.
    Edge Analytics: Enables time-sensitive decisions, local compute, analytics on a smart / edge device e.g., Cisco Fog Data Services.
    Edge Gateway (hardware based): Enables manageability, security, identity, interoperability based on a Cloud enabled hardware device e.g., Dell Edge Gateway 5000.

    3. Cloud Services:

    Storage / Database: Cloud based storage and database capabilities (not including on premise solutions) e.g., Azure SQL.
    Device Management: Enables remote maintenance, interaction and management capabilities of devices at the edge e.g., Azure IoT Hub.
    Event Processing & Basic Analytics: Processes events and handles big data analytics e.g., Azure HDInsight.
    Advanced Analytics: Performs advanced stream analytics and machine learning e.g., Azure Machine Learning.

    4. Application:
    Visualization: Presents device data in rich visuals and/or interactive dashboards e.g., MS Power BI.
    Business System Integration: Enables integration with existing business systems e.g., Azure Logic Apps.
    Development Environment: Offers an integrated development environment with comprehensive SDKs for creating applications and services e.g., MS Visual Studio.

    5. Security:

    Physical Protection, Firmware Attestation: Protects / verifies the integrity of peripherals / firmware and detects malicious changes e.g., Intel Trusted Platform Module.
    E2E Encryption of Data & Communication: Secures data / communication through digital certificates and public-key encryption e.g., Symantec SSL, TLS, X.509 certificates.
    Privacy Management, Data at Rest: Encryption software that protects information that cannot be deciphered easily by unauthorized users e.g., Azure Disk Encryption, Key Vault.
    Application Identity & Access Management: Set of processes and services that stores directory data and manages communication between users and domains, including user logon processes, authentication, and directory searches e.g., Active Directory, Identity Manager.

    Understanding exactly what is required on a component level for your IoT solution can ease development and integration issues for your connected solution. However, as IoT Analytics’ database of 640+ enterprise IoT projects shows, there is clearly no one-size-fits-all approach to successful IoT solution development.

    For a consistent methodology to steer your organization through the challenging process as well as other best practices for OEMs, ODMs, and device manufacturers check out the IoT Analytics’ “Guide to IoT solution development” white paper which is available for download free of charge.

    Footnotes:
    § Smart Device: Enables edge analytics, time-sensitive decisions & local compute. Maximizes security, manageability, interoperability and solution reliability, and reduces bandwidth costs. In many cases, cloud-enabled smart devices are equipped with a natural user interface. Note: MPU = Microprocessor.
    † Edge Gateway: May also be classed as a Smart Device.
    ‡ Simple Device: Generates data, performs instant actions & transmits data. Typically has constrained resources, low hardware costs, basic connectivity, basic security/identity, and no/light manageability. Note: MCU = Microcontroller.

    https://iot-analytics.com/product/guide-to-iot-solution-development/?utm_source=semiwiki&utm_medium=blog&utm_campaign=keybuildingblocks


    RTL Design Restructuring Explained
    by Daniel Payne on 09-22-2016 at 4:00 pm

    Modern SoC designs can use billions of transistors where transistors are grouped into gates, then gates grouped into cells, then cells grouped into blocks, blocks grouped into modules, and so on, creating a complex hierarchy. What a front-end designer conceives of logically for a hierarchy will differ from how an optimized physical hierarchy appears in order to meet physical implementation constraints in the back-end of the design process. Reasons to use a different physical hierarchy include:

    • Reduce die size and therefore costs
    • Increase utilization
    • Improve power consumption

    Floor planning is the step in a design flow where the physical hierarchy is controlled and realized, however making changes to the design hierarchy has some side effects like:

    • Iterations of re-design
    • Functional verification required

    Let’s define some terms first.

    RTL Building
    RTL building is the process of organizing all of the blocks in a design into levels of hierarchy and defining what the connectivity between blocks should look like.

    In this simple example we’ve created two levels of hierarchy using four blocks. New logic blocks can be added into your design because of connectivity re-routing, feedthrough wires, power-related logic, or DFT issues like clock-gating or on-chip test controllers.

    RTL Restructuring
    Restructuring happens when we want to change our cell instances and connectivity in order to optimize our design or just meet some physical design constraints. In the following example we move block C from underneath B to be under A:

    Ungrouping is where we remove an existing level of hierarchy, like taking blocks C and D from underneath B and placing them at the same level as block A:


    The opposite of ungroup is to group cells, so here we take instances C and D then combine them into a single group:

    Related blog – A Versatile Design Platform with Multi-language APIs

    Partitioning is where we combine group, ungroup and move actions during our optimization process, like partitioning the top level:

    A final restructuring concept is known as Clean, where we remove some design logic like cell instances, connections or process statements.
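    As a toy illustration (my own sketch, not Defacto's tool), the restructuring operations above can be thought of as edits on a hierarchy tree:

```python
# Toy model of hierarchy restructuring: each block is a node with children.
# This only illustrates the group/ungroup/move concepts discussed above;
# a real tool also rewrites the RTL, UPF and SDC to match the new hierarchy.

class Block:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

    def find(self, name):
        """Depth-first search for a block by name."""
        if self.name == name:
            return self
        for c in self.children:
            if (hit := c.find(name)):
                return hit
        return None

    def show(self, depth=0):
        print("  " * depth + self.name)
        for c in self.children:
            c.show(depth + 1)

def _walk(node):
    yield node
    for c in node.children:
        yield from _walk(c)

def move(top, block_name, new_parent_name):
    """Move a block under a different parent (the 'restructure' example)."""
    blk = top.find(block_name)
    for node in _walk(top):
        if blk in node.children:
            node.children.remove(blk)
            break
    top.find(new_parent_name).children.append(blk)

# TOP contains A and B; C and D start out under B.
top = Block("TOP", [Block("A"), Block("B", [Block("C"), Block("D")])])
move(top, "C", "A")   # move C from underneath B to be under A
top.show()
```

    Group and ungroup are the analogous edits of inserting or removing an intermediate node; the hard part a real tool solves is keeping the ports, connectivity and side files consistent after every such edit.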

    Approaches to Restructuring
    For each new SoC you could just cobble together some customized scripts per project, or just start making manual edits to source files for restructuring. This approach doesn’t force you to buy anything, however it will cost you time to develop and perform so many manual edits, likely introducing bugs along the way.

    Automation sounds like a more powerful approach that would scale well into the millions of instances, save engineering time and allow for more floorplan iterations to reach something optimal.

    Defacto Solutions
    Fortunately for the SoC design community there is an EDA vendor named Defacto Solutions that does have a tool in this important space, and they’ve named their tool STAR. The design flow for using STAR is shown below: it takes input files like your RTL code or even gate-level netlists; within the tool you do all of the restructuring (group, ungroup, move, remove, add); and finally it outputs your restructured RTL code along with any gate-level netlists:

    Popular RTL languages like SystemVerilog, Verilog and VHDL are accepted as inputs. Your restructured RTL code is automatically created, saving you time, plus there will be comments added that explain all of the automated edits. Original comments, indentation and pre-compilation directives are maintained so that you can still read the code.

    Related blog – A Brief History of Defacto Technologies

    When you restructure your RTL design then the UPF file for power intent is also automatically updated for you, and you can even do a coherency analysis to double-check that the RTL and UPF are consistent with each other. Restructuring RTL also updates SDC files that you have for timing constraints. So STAR is really part of a unified flow for your design, beyond just RTL.

    STAR Usage
    Moving from theory to practice, here’s what actual users of STAR are doing with the tool:

    Notice how IP-XACT files can be used as inputs along with RTL and connectivity files. IP-XACT became a standard in 2009 as a way to enable automated configuration and integration using the XML file format. This usage flow has five major steps to it:


  • Parse the design to extract all hierarchy and connectivity
  • Create the top-level
  • Restructure or partition as needed to reach physical requirements
  • Review DRC reports to check for warnings or errors
  • Output the restructured design

    Automated Restructuring Benefits
    Now that we know about what restructuring is, and how it helps optimize the physical hierarchy, just how useful is it in the real world? Users of STAR have shared:

    • Saving 10% in die area – SOCIONEXT (Japan)
    • Remove the need to manually edit code (DAC customer)
    • Reducing project development schedule by several man-months
    • Eliminate long loops, optimize loopbacks, and remove redundant ports and many equivalent features in minutes
    • Build “correct-by-construction” sub-system at RTL level, dramatically reducing debug time

    Summary
    Stop using custom scripts and manual editing of RTL files for your SoC design restructuring, and instead consider using an automated approach from a commercial EDA vendor that has lots of happy customers.


    ESL Architectural Power Estimation Support from TSMC — yes, TSMC
    by Tom Dillinger on 09-22-2016 at 11:00 am

    Electronic system level (ESL) modeling for system architecture exploration is rapidly gaining momentum. The simulation performance requirements for hardware/software co-design are demanding — an abstract model for SoC IP cores is required. Typically, soft IP will include a number of model configuration parameters. The SoC architect needs to optimize performance, power, and area (PPA) through evaluation of various design alternatives. Some soft IP cores include the capability to define a configurable instruction set architecture (ISA), for optimum performance of specific algorithm code.

    ESL-based design is benefiting from several standardization activities. The SystemC language definition has become the norm for model description, driven by the Open SystemC Initiative (OSCI). The definition of a synthesizable SystemC language subset has also guided IP core releases — with high-level synthesis support from EDA vendors, SoC designers can realize both efficient model simulation and optimized cell-based implementations. The emphasis on transaction-level modeling (TLM) for core interface abstraction has provided architects with performance insights without requiring implementation detail. A set of SystemC libraries released by the OSCI as part of the TLM2.0 standard has facilitated SystemC IP model interoperability in a complex verification environment.

    Parenthetically, verification engineers approaching ESL model simulation from an RTL background are likely dealing with an unfamiliar time-base representation. SystemC descriptions may be untimed, loosely timed, or approximately timed. A loosely-timed model reflects non-pipelined transactions — e.g., a complete, atomic read/write access operation has a corresponding timing interval, applying a blocking communication interface. In a loosely-timed model, there are two timing points — i.e., start of transaction, end of transaction. An approximately-timed model breaks transactions into individual steps, with a non-blocking interface. For example, in an approximately-timed model, there would be start/end request and start/end response timing points for each operation, which enables pipelined transaction simulation detail.

    SoC architects are rapidly adopting ESL modeling for system performance analysis. Yet, power dissipation is also a crucial optimization objective. How does an SoC architect integrate power estimation into the design exploration phase (long before physical implementation), with technology-based accuracy?

    I recently had the opportunity to chat with the team at TSMC who are working on this problem. They described a unique and innovative project underway at TSMC with key partners to address the SoC architect’s dilemma. “As much as 50% of the power may be saved if optimization and analysis is done at the early system level, whereas barely 10% or less of the power can be saved through late gate-level optimization. Optimization at the system level gives the earliest opportunity and greatest gain in system low-power design.” they noted.

    “Our customers and IP partners approached us, requesting assistance to define an ESL-based power modeling methodology.”, they highlighted.

    Initially, I was admittedly a bit surprised at this initiative — however, as they described the TSMC System-PPA methodology, it became evident to me that TSMC is an ideal innovator to spearhead this activity. TSMC has an extremely close relationship with IP vendors, who develop/qualify/release their designs on TSMC process shuttles.

    The TSMC team briefly described the System-PPA IP power model generation flow — please refer to the figure below.

    IP vendors typically release a SystemC model for their IP, using an approximate-timing reference. A set of power-state APIs into the model is written. (This is a relatively low resource effort, according to TSMC.) This code is incorporated into the TLM 2.0 wrapper template developed by TSMC. IP power characterization is executed, and a power data look-up table (LUT) with specified PVT conditions is generated. To support this flow, TSMC has developed a Baseline Virtual Platform (BVP), where IP vendors and system developers can plug in ESL-level power models and perform power analysis and optimization using the TSMC-developed Virtual Platform Analyzer.
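    To picture how such a power look-up table might be consumed during virtual-platform simulation, here is a purely hypothetical sketch (the state names, corner names and values are invented, not TSMC data):

```python
# Hypothetical power-state LUT lookup, in the spirit of the flow described
# above: a characterized table maps (power state, PVT corner) to power, and
# the simulation accumulates energy from the time spent in each state.

# (state, corner) -> power in mW; values are invented for illustration.
power_lut = {
    ("ACTIVE", "TT_0p8V_25C"): 45.0,
    ("IDLE",   "TT_0p8V_25C"):  6.0,
    ("SLEEP",  "TT_0p8V_25C"):  0.4,
}

# A trace of (state, duration in ms) as might be reported through the IP
# model's power-state API during an approximately-timed simulation.
trace = [("ACTIVE", 2.0), ("IDLE", 10.0), ("ACTIVE", 1.5), ("SLEEP", 50.0)]

corner = "TT_0p8V_25C"
energy_uJ = sum(power_lut[(state, corner)] * ms for state, ms in trace)  # mW * ms = uJ
print(f"Estimated energy at {corner}: {energy_uJ:.1f} uJ")
```

    The value of the approach is that the architect can re-run such an analysis for different use cases and IP configurations long before any gate-level data exists.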

    Cadence/Tensilica, source of configurable DSP cores, and Arteris, source of Network-on-Chip (NoC) IP, have teamed up with TSMC to collaborate on the early System-PPA implementation activity.

    The goal of the TSMC System-PPA methodology is to provide a general, extendible TLM2.0 framework, where individual SoC IP cores each include the API wrapper, and can collectively be presented to the Virtual Platform Analyzer application.

    TSMC will be collaborating with additional IP partners in the future, and will be working with EDA vendors to help build momentum for this approach.

    With today’s system power design requirements, an ESL platform provides the most efficient and effective method for early system architecture exploration. It is essential that power optimization be an integral part of this analysis. TSMC’s System-PPA power modeling methodology enables effective and accurate power analysis during ESL definition.

    -chipguy


    Solutions for Variation Analysis at 16nm and Beyond
    by Tom Simon on 09-22-2016 at 7:00 am

    Variation is still the tough nut to crack for advanced process nodes. The familiar refrain of lower operating voltages and higher performance requirements makes process variation an extremely important design consideration. As far back as the early 2000s, design teams have been looking for a better approach to modeling variation than simply adding margin, which just meant trading performance for yield. Back then it was thought that statistical static timing analysis (SSTA) would provide a viable solution. However, that did not pan out right away; the sign-off tools available then simply were not up to the task.

    Another approach in use is Advanced OCV (AOCV), which attempts to take path depth into consideration by modeling chains of the cell in question with inserted parasitic elements. AOCV suffers from not including a number of significant design factors: foremost among these, it does not look at all of the timing arcs, and it also ignores side inputs. Compared to SSTA, AOCV tends to be either extremely optimistic or extremely pessimistic.

    According to a recent white paper from Cadence, a Statistical OCV approach offers the best solution for modeling variation. Without the compute and data expense of SSTA, statistical OCV takes into consideration pin to related-pin dependencies, input slew, and output load, and provides the variation information needed for the signoff flow. The Cadence paper is authored by Ahmed Elzeftawi, Sr. Principal Product Manager, and Ken Tseng, Software Engineering Group Director, at Cadence.

    The paper goes on to say that the Liberty Technical Advisory Board has created a unified Liberty Variance Format (LVF) document which includes OCV modeling coupled with timing, noise and power models. The Liberty Technical Advisory Board represents a broad consortium consisting of design tool providers, foundries and semiconductor companies. By taking advantage of statistical mean and sigma values, it is possible for tools using this method to report timing values as probabilities or as discrete representations.
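    As a simplified illustration of the statistical idea (not the LVF format itself; the numbers are invented), per-arc mean and sigma values can be combined along a path so that delay is reported as a probability rather than a single flat-margin worst case:

```python
# Simplified statistical-OCV-style calculation: each cell arc on a path has a
# mean delay and a sigma; assuming independent random variation, the path mean
# is the sum of means and the path sigma is the root-sum-square of the sigmas.
import math

# (mean_ps, sigma_ps) per arc along a hypothetical path; values are invented.
arcs = [(50.0, 4.0), (35.0, 3.0), (60.0, 5.0), (42.0, 3.5)]

path_mean  = sum(m for m, _ in arcs)
path_sigma = math.sqrt(sum(s * s for _, s in arcs))

# A flat-margin estimate adds 3 sigma to every arc individually, which is
# more pessimistic than the statistical 3-sigma delay for the whole path.
flat_margin = sum(m + 3 * s for m, s in arcs)
statistical = path_mean + 3 * path_sigma

print(f"mean={path_mean:.1f} ps  sigma={path_sigma:.2f} ps")
print(f"flat 3-sigma margin: {flat_margin:.1f} ps  vs  statistical 3-sigma: {statistical:.1f} ps")
```

    The gap between the two numbers is the pessimism that flat margining leaves on the table, and it grows with path depth, which is why the statistical approach pays off most on long paths.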

    Producing these models requires looking at every transistor in the cells and deciding which ones contribute most to variation. Each timing arc within a cell must be analyzed. Additionally, the impact of input slew and output load must be included in the resulting models.

    Cadence has extensive offerings for characterization and modeling that can be applied to standard cells, IOs, memories and mixed-signal blocks. Using Monte Carlo simulations as a reference, they see very good correlation with their own characterization technology. The Cadence Virtuoso Liberate characterization suite has many elements. The foundation tool in the suite, Virtuoso Liberate, provides fast library characterization for standard cells and complex IOs. The Virtuoso Liberate LV solution is useful for library validation, providing functional equivalence and data consistency checking.

    To handle variation, they offer Virtuoso Variety which provides modeling of random and systematic process variation. Virtuoso Variety can generate Advanced OCV, Statistical OCV and LVF models. In addition, Virtuoso Liberate MX is useful for custom and compiled memories and Virtuoso Liberate AMS provides mixed signal characterization.

    The Cadence Innovus Implementation System can take advantage of these models to speed up timing verification and improve performance. At the end of the paper they provide as an example a 1 GHz design with the setup and hold slack for the top 200 paths. It’s pretty plain to see that there’s an average improvement of 150 picoseconds for setup and 200 picoseconds for hold.

    It was inevitable that statistical approaches would be used to deal with variation. I remember having discussions with design managers 10 years ago about the promise of statistical approaches. It’s nice to see now that they have come to fruition. Certainly at nodes beyond 16nm this technology will be more than a “nice to have”. If you are interested in reading the entire white paper, it can be found here.