
Two New Announcements at ITC from Synopsys
by Daniel Payne on 10-22-2014 at 4:00 pm

Each year at the International Test Conference (ITC) we hear about the latest testability advances from both EDA vendors and academics. This year Aart de Geus, Chairman and Co-CEO of Synopsys, delivered a keynote speech titled "Testing Positive, for Complexity". Yesterday I spoke by phone with Robert Ruiz and Savita Banerjee of Synopsys to glean insight into two of their new technology advances:

Synopsys has been offering test automation tools and IP for many years now, and along with competitor Mentor Graphics it dominates this space.

SemiWiki readers have heard much about FinFET technology on the design side, and it turns out that FinFET transistors introduce new types of silicon defects that must then be detected during test, such as opens in FinFET multi-finger gates. Even at the 16 nm node, the non-planar topology brings new failure mechanisms to take into account, like bridging faults.

It’s no longer sufficient to assume that the only failures in an IC design are between cells, so we must be able to detect all possible defects inside of each cell too.

A defect on just one of the many fins in a FinFET transistor will slow down the output transition time while the cell still functions logically:

To address these latest test concerns, Synopsys has added new technology dubbed "slack-based cell-aware test". Defects internal to a given cell are now detected during ATPG (Automatic Test Pattern Generation) by defining, either automatically or manually, a sequence of vectors per cell.

To understand slack-based test, consider the following example where an XOR gate with an internal defect (marked with a red dot) causes that cell to perform more slowly, taking an extra 30 ps to transition. The closest flip-flop to receive this slower XOR signal has a slack time of 50 ps, so the extra 30 ps delay is not detected.

There's a second, much longer path connected to this same XOR gate, and its slack is only 10 ps, so the 30 ps slower XOR signal will now be detected at the flip-flop shown in green:
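The detection rule at work here is simple: a delay defect is observable on a path only when the extra delay it adds exceeds that path's timing slack. A minimal sketch, using the numbers from the example above:

```python
def fault_detected(extra_delay_ps, path_slack_ps):
    """A delay defect is caught only on paths whose slack is
    smaller than the extra delay the defect introduces."""
    return extra_delay_ps > path_slack_ps

# The XOR cell defect adds 30 ps; test both paths from the example.
extra_delay = 30
short_path_slack = 50   # closest flip-flop: 50 ps slack
long_path_slack = 10    # longer path: 10 ps slack

print(fault_detected(extra_delay, short_path_slack))  # False: escapes
print(fault_detected(extra_delay, long_path_slack))   # True: detected
```

This is why targeting the longest (minimum-slack) path through each cell, as slack-based cell-aware test does, catches small delay defects that a short path would let escape.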

An actual 28 nm TSMC customer design with this scenario was run through TetraMAX to catch and identify the fault.

Synopsys Tool Flow
If you own all Synopsys tools, then the flow to setup and run a slack-based cell-aware test includes four EDA tools:

You can still use TetraMAX with other EDA vendor tools for: extraction, circuit simulation and static timing analysis. Some scripting is required.

Test Results
For that 28 nm customer design block, cell-aware testing found 2,347 faults while slack-based cell-aware testing found 3,323 faults, over 40% more faults detected. The number of patterns did increase from 263 to 439, roughly 67% more, to get the improved coverage, so there is increased test time involved.
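The percentages are easy to reproduce from the raw counts reported for that block:

```python
# Reproduce the coverage and pattern-count arithmetic for the
# 28 nm customer block (fault and pattern counts from the article).
cell_aware_faults = 2347
slack_based_faults = 3323
patterns_before, patterns_after = 263, 439

fault_gain = (slack_based_faults / cell_aware_faults - 1) * 100
pattern_growth = (patterns_after / patterns_before - 1) * 100

print(f"{fault_gain:.0f}% more faults detected")  # ~42%
print(f"{pattern_growth:.0f}% more patterns")     # ~67%
```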

Avago reported that using slack-based cell-aware test versus plain slack-based test improved the fault coverage inside cells from 16.4% to 67.4%, while delay-fault coverage improved from 81% to 89.6%, all by adding just 2% more patterns.

Test Compression
Synopsys calls its test compression approach DFTMAX Ultra, and it enables higher defect coverage with lower test costs.

BIST for Embedded Flash
Next up was Savita Banerjee, who introduced their new BIST technology for embedded flash, now part of the DesignWare STAR Memory System. The Flash IP is shown in green, connected to a wrapper, which is controlled by a test processor in purple (SMS Processor):

Without BIST for Flash you would have to use an expensive tester, adding time and cost to your project. Using the BIST for Flash approach you can decrease test costs by 20%, use a standard IEEE 1149.1 tester interface, and even do in-field diagnostics compliant with ISO 26262. UMC is the first foundry to support this BIST technology at their 55 nm node, so stay tuned for more foundries to become supported as well.

Summary
Test coverage is critical to ensuring that our consumer and commercial products are defect-free, and the engineers at Synopsys have added new technologies like slack-based cell-aware test and BIST for embedded Flash to keep pace with market and technology demands.


SecureCore: secure MPU for IoT
by Eric Esteve on 10-22-2014 at 8:44 am

Estimates of the number of connected (IoT) systems by 2020 vary depending on the source, but we should see on the order of 30 billion IoT devices by that date. We already know some of the basic requirements: such a device will have to be connected, low cost, ultra-low power and… secure. The first three are key enablers, but we can accept some trade-offs; for example, paying a small extra cost for a device that is really low power, so you only change the battery after three years instead of one. But do we really want to trade off security? I think there is a consensus that an IoT device will have to be protected against external aggression, hackers and other malware. While some developments will certainly be made at the software level to increase the security of such IoT systems, it's probably a good idea to start at the hardware level with a secure MPU core, like the ARM SecurCore SCx00 family, which currently serves secure markets like smart cards for pay TV, banking or SIM applications.

The SecurCore family consists of the SC000™, the SC100™ and the SC300™ processors.

If we zoom in on the secure ARM core area (pictured below: a modern application processor combining a TrustZone® based TEE and hypervisor), we see that the TrustZone vertical band builds a physical and logical separation between the "trusted execution environment" (on the right), where only trusted apps are allowed to run on a trusted OS, and the left part of the core, where you can run enterprise or personal apps. If you want to know more about the various features developed to guarantee that the core is secure, you should read this white paper from ARM:

"GlobalPlatform based Trusted Execution Environment and TrustZone ready" here: TrustZone WP

Among others, you will learn:
Processor Security Controls Limit Access and cannot be Bypassed
Legacy systems can often provide unnecessary modes of operation which, when misconfigured or left enabled, can allow security controls to be bypassed. An example of this, cited by CESG, is the 'System Management Mode' (SMM) of x86 architectures. The TrustZone® security extensions have a monitor mode which provides a single point of entry into the Trusted World that cannot be bypassed.

Direct Memory Access (DMA) is Limited and Controlled
The TrustZone® security extensions place an additional bit, known as the NS (or Non-secure) bit, on the AXI system bus. This bit indicates whether a transaction is secure, reflecting the processor state when the transaction was requested. For instance, when executing within the Trusted Execution Environment the transaction will be secure, and it will be non-secure during normal operation while executing within Android, Linux or any other conventional operating system.

DMA from External Devices is Additionally Protected
ARM strongly recommends that the NS bit, if taken off chip, force all transactions from external masters to be non-secure, so that secure peripherals, whether RAM, fuses, or IO, can only be controlled by on-chip bus masters. If the NS line were exposed off chip with access to the rest of the AXI system bus, an attacker who could force the line to the secure state could force a secure address access.
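These two rules can be pictured as a simple transaction filter. The following is a toy model only, not the real AXI protocol, and the peripheral names are hypothetical:

```python
# Toy model of TrustZone-style NS-bit filtering (illustrative only,
# not the actual AXI protocol). A transaction carries an NS bit and
# a flag saying whether it comes from an on-chip or external master.

SECURE_PERIPHERALS = {"fuse_bank", "key_ram"}  # hypothetical names

def allow(transaction):
    target = transaction["target"]
    ns_bit = transaction["ns"]
    if transaction["external"]:
        ns_bit = 1  # off-chip masters are forced non-secure
    if target in SECURE_PERIPHERALS and ns_bit == 1:
        return False  # non-secure access to a secure peripheral: reject
    return True

# On-chip Trusted World access succeeds; an external DMA master
# claiming NS=0 is still forced non-secure and rejected.
print(allow({"target": "key_ram", "ns": 0, "external": False}))  # True
print(allow({"target": "key_ram", "ns": 0, "external": True}))   # False
```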

Secure Credential Storage
The Trusted World can be used to securely store and manage private keys. The actual encryption or decryption can then occur within the Trusted World at the request of applications in the Normal World, without ever compromising the security of the keys.

Thanks to Silicon Valley Bank (SVB), you can get an idea of the segmentation of the IoT market into ten segments. I have made a tentative ranking in terms of security needs for each segment. The goal is to rank each into "Life Critical", "Industry" (sensitive data at the company level) or "Wallet" (linked to the need to protect your day-to-day payments). If we consolidate these data, we can see that no less than 33% of the companies have a security need at the "Life Critical" level, 64% deal with company-sensitive data and only 6% with data directly impacting your "Wallet". In fact, this means that all of the IoT players will need to build secure systems, and some of them, dealing with life-critical data, may need to add an extra level of security (like redundancy, for example) to process and protect the data.

I hope that you agree with the equations: IoT = secure… and secure H/W = SecureCore.

Eric Esteve
from IPNEST


SEMI Strategic Materials Conference and SEMICON Japan
by Paul McLellan on 10-22-2014 at 7:01 am

I attended the SEMI Strategic Materials Conference earlier this month. I cover a pretty broad range of stuff on SemiWiki from embedded software and system-level design on down. But I usually stop at lithography and TCAD, which have a major impact on design and the whole fabless ecosystem. I hadn’t really thought about the material supply chain. When I read, say, that Intel is using Hafnium in its transistors I have never really stopped to think about where the Hafnium comes from. When did someone decide to use it? How do you get it? How do you ensure that it is almost unimaginably pure?

Tim Hendry gave the keynote on the second day. He is Vice President of the Technology and Manufacturing Group at Intel, and the keynote was titled Strategies & New Models for Creating an Affordable Material Supply Chain. He is tasked with ensuring that Intel has a supply chain for all the materials it needs. This is becoming increasingly critical, since chemical and gas costs per wafer are going up and the materials are getting increasingly exotic. Many materials used in semiconductor manufacturing just don't have any other uses.


Intel has a three-step development process. Research is done internally but also in collaboration with external organizations such as universities and research institutes; the output of this stage is the set of options that can be selected among. There is then a pathfinding stage in which the process is developed and taken to medium volume with high yield. It is then transferred into manufacturing for high-yield, high-volume production, usually in several fabs in different geographical areas. Some materials are acquired in small quantities (in bottles or gas cylinders), but for some the quantities involved are full ISO containers.

The challenge for the supply chain is to work with suppliers during the pathfinding stage so that a full supply chain is in place when high-volume manufacturing starts. There are typically multiple levels of supplier (suppliers to Intel's suppliers, and so on), and cooperation may well be required deep down the supply chain to ensure that material is available in the required volumes, meeting qualifications such as purity, when needed.


Changing topic (well, still with SEMI), the next big SEMICON coming up is in Japan. It will take place from December 3rd to the 5th at Tokyo Big Sight (a new location for the show). There is a special focus on The World of IoT, a show within a show. In fact, the opening keynote is The Future Brought by IoT, with Chikatomo Hodo of Accenture, Donald Jones of the Scripps Translational Science Institute, Tokuhisa Nomura of Toyota and Yuzuru Utsumi of ARM.

There are over 100 hours of tech and biz programs, making SEMICON Japan the one-stop information source for the microelectronics supply chain. Plus, as usual, an extensive exhibit hall. Special sessions cover women in business, an IT forum, an IoT forum, a SEMI market forum, a 2.5D/3D forum, a GSA forum and a manufacturing innovation forum.

To register for SEMICON Japan click here.


More articles by Paul McLellan…


There's good news about BadUSB
by Bill Boldt on 10-22-2014 at 4:00 am

The good news about the recently revealed BadUSB is that there actually is a cure: hardware crypto engines were invented to protect software, firmware and hardware from exactly these types of attacks, among many others. These uber-tiny, ultra-secure hardware devices can be easily and cost-effectively added to USB sticks (and other peripherals). Once installed in the peripherals, devices such as Atmel CryptoAuthentication will block the bad code. Period.

BadUSB is Bad for More Than Just USB

All systems with processors are vulnerable to bad code, which can do bad things to good systems. All it takes is a way to transfer bad code from one processor to another… and, that happens all the time. USB sticks, USB accessories, the Internet, wireless links like Wi-Fi or Bluetooth — you name it — can be vectors for diseased code. What BadUSB has revealed to us is that all embedded systems, unless equipped with robust protection mechanisms, are now vulnerable to catching diseased code. (Embola?)

Embola
Once contracted, a machine infected with Embola can send private and sensitive information to bad guys, or let them take over your system for ransom or other mal-purposes. They can turn on cameras and microphones to spy, grab your photos and bank account information, or even mess with your car. Whatever they want, they can have, and you most likely will never know it.

So, what can you do to protect against Embola? The answer is twofold:
1. Don’t let the bad code in, and
2. If it does get in don’t let it run.

While this sounds pedantically simplistic, these steps are NOT being taken. They are described here:

Secure Download

Secure download uses encryption to ensure that the code received by the embedded system is kept away from hackers. The code is encrypted with an algorithm such as the Advanced Encryption Standard (AES) using an encryption key. That encryption key is created using a secret that is shared only with the target system. The encrypted code is sent to the target embedded system to be decrypted and loaded for use.


There is another step that can be taken to add even more security: authentication using a digital signature. If the decrypted code has not been altered, the signature made over the digest of that decrypted code with the signing key will exactly match the signature that was sent over during download. Once authenticated, the code can be safely run on the target system. What does this mean? No risk of Embola!
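The flow can be sketched in a few lines. This is a toy illustration using Python's standard library: a SHA-256 keystream stands in for AES, and an HMAC over the code digest stands in for the digital signature; a real device would use AES and an asymmetric signature scheme, and the secret shown is hypothetical.

```python
import hashlib
import hmac

SHARED_SECRET = b"device-unique-secret"   # provisioned at manufacture

def keystream(key, n):
    """Toy stream cipher: SHA-256 in counter mode (stand-in for AES)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(code):
    ks = keystream(SHARED_SECRET, len(code))
    return bytes(a ^ b for a, b in zip(code, ks))

decrypt = encrypt  # XOR with the same keystream is its own inverse

def sign(code):
    # HMAC over the code digest, standing in for a digital signature
    digest = hashlib.sha256(code).digest()
    return hmac.new(SHARED_SECRET, digest, hashlib.sha256).digest()

firmware = b"new firmware image"
blob, signature = encrypt(firmware), sign(firmware)  # sender side

received = decrypt(blob)                             # target side
ok = hmac.compare_digest(sign(received), signature)
print(ok)  # True: untampered code authenticates and may run
```

A single flipped byte in the downloaded blob would change the recomputed signature, and the check would fail.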

Secure Boot

Secure boot also uses digital signatures to ensure that the code to be booted when the target system starts up matches the code that the manufacturer intended and has not been tampered with. It works in a similar way to secure download. If the code to be booted has been altered, then the signature made by signing the digest of that code with a secret signing key will not match the signature from the manufacturer. If they don't match, the code will not load.
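A minimal sketch of that boot-time gate, again with HMAC-SHA256 standing in for the manufacturer's digital signature (the key material and image contents are hypothetical):

```python
import hashlib
import hmac

SIGNING_KEY = b"manufacturer-signing-key"  # hypothetical key material

def make_signature(image):
    digest = hashlib.sha256(image).digest()
    return hmac.new(SIGNING_KEY, digest, hashlib.sha256).digest()

def secure_boot(image, manufacturer_signature):
    """Boot only if the image's signature matches the one shipped
    by the manufacturer; otherwise refuse to load the code."""
    if not hmac.compare_digest(make_signature(image),
                               manufacturer_signature):
        return "halt: image rejected"
    return "boot: image verified"

good = b"bootloader v1.2"
sig = make_signature(good)

print(secure_boot(good, sig))            # boot: image verified
print(secure_boot(good + b"\x00", sig))  # halt: image rejected
```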

http://www.youtube.com/watch?v=bvaHLp1BXaM

Bill Boldt, Sr. Marketing Manager, Crypto Products Atmel Corporation


Crossfire on Continuous Path of Improvement
by Pawan Fangaria on 10-21-2014 at 10:00 pm

In an ever-growing world of IPs, it's essential that a tool which promises to simplify the designer's job of IP development and help improve its quality remains versatile enough to encompass the various formats, databases, common data models, standard libraries, scripting and so on that are used in developing IPs and exchanging them between vendors. It also needs to present a common environment for easily and efficiently viewing a design, validating design consistency, analyzing and debugging the design, and producing consistent reports; it even needs to check error and warning messages in the reports for consistency with the design. And the tool needs to be customizable for a typical IP provider as well as extensible to accommodate newer formats, databases, scripts, GUI infrastructure and so on.

From what I have seen of Crossfire from Fractal Technologies, it's a perfect tool (amid the growing complexity, size and heterogeneity of IPs) to ease the burden on designers in many respects. It has continuously added new features to enable faster, easier design and to cater to the growing sizes of IPs.

Crossfire reads all industry-standard formats from front-end to back-end design flows, including any user-defined format in an ASCII, HTML or PDF (converted to text) file. It employs most of the available parsers and database interfaces, including any custom readers, and constructs a common data model and APIs to maintain consistency and accuracy across different formats. It has proven legacy checks implemented in the system, native checks for various data models, and uses available custom scripts for various purposes. The common data model is comprehensive enough to accommodate vendor-proprietary data sheets, characterization data or any other information. There are several kinds of validation checks at the library level (e.g. terminal name consistency, timing arcs) as well as the design level (e.g. pin compatibility of layout, schematic and the underlying format or database). Since an IP can be in the form of a hard macro (GDS) or a synthesizable IP (RTL), Crossfire makes sure that only relevant checks are done at a particular level, thus eliminating unnecessary checks and making the validation efficient.

The report presentation, review and analysis deserve special mention, as they are tightly coupled with the design details and give designers the opportunity to act on the design based on the report. I recently reviewed an online demo of Fractal's Diagnose tool, which has an excellent GUI that helps designers analyze rules, formats, messages, errors and so on in sync with the design. There is a bind-key implementation to control various operations in the GUI. As an IP can be large, there can be an overwhelming number of messages; to keep them within viewable limits there is a provision for setting a message level, so that only messages at the set level and below are displayed. Also, the messages can be searched with keywords, just like in text editors, and filtered to keep only those matching the filter.

The reports can also be generated in HTML, where rule and format names along with their details can be displayed. There are tool-tips which automatically appear when the mouse hovers over certain locations. Errors, warnings, waivers and so on can be viewed for each rule or format. Analysis of errors is done very intelligently.

An error can be selected by dragging the mouse over it; if the format is text based, an editor opens up and you can click 'Analyze'. In the image above, the schematic and symbol are displayed, revealing the missing pin in the symbol. In the report, the error for the symbol pin will be flagged red.

Waivers are another important feature: you can waive an error in the report if you see fit. Just select the error(s) and click 'Waive', and a dialog box opens where you add the waivers and the values to waive; it's necessary to provide reasons for waiving. In the messages, the waivers are tagged with G:Lines. After waiving an error, if you decide to un-waive it, you can easily do so by un-checking the respective waive rule in the dialog box and clicking 'Apply'.

These features for moving backward and forward between report and design make designers' lives easier, letting them trace cause and effect to achieve their goals.

As designs get more complex and their sizes increase, customers ask for newer features, and that keeps Crossfire up to date with the latest trends in the semiconductor design industry. The latest features added are:

· Checks for new rules: condition-based checks for values and 0 values in the tables; double-defined voltage maps; 0-sized polygons; pin-layer-based attributes in .lib files.
· HTML report with user defined colors.
· Reading all formats directly as .gz files.
· Possibility to run checks on more than 1500 .lib files.
· Speedup in layout viewer by 50%.

Crossfire is a very useful tool for IP and design development; it helps maintain consistency between different formats, databases, libraries, legacy data and so on, and keeps the quality of the design in check throughout the design flow, thus delivering quality-by-construction.

More Articles by Pawan Fangaria…..


Processor for Internet-of-Things (IoT)
by barun on 10-21-2014 at 4:00 pm

With the increasing proliferation of sensors in our everyday lives, the evolution of IoT is natural. The mix of different building blocks with different speed-power-performance constraints makes IoT the hottest upcoming application area for semiconductor IP vendors. The System-on-Chips (SoCs) coming up in this area typically feature a central processor which needs to manage an ever-increasing number of sensors, process more complex data, accommodate faster response times and so on. But the application focus of IoT has some unique characteristics which make it important to rethink the use of standard processors.

POWER
The most important of these is power. A large portion of IoT devices need to run on energy harvested from the external environment, or on a battery that goes without replacement for the device's whole lifetime, which may be more than 10 years in certain scenarios. Hence power consumption is the most critical area a processor needs to address. A generic processor wakes up, processes the data as fast as possible and then goes back to sleep as quickly as possible. But that may not be the ideal scenario for IoT processors. Fast processing does not necessarily imply minimum power consumption, and drawing substantial power from the battery within a short span of time may reduce battery life. The processing speed of an IoT device can be tuned to ensure minimum energy consumption, particularly in applications where the response time of the device is not critical, for example an IoT device used with electronic appliances. One of the ways this can be achieved is to run the processor at near-threshold voltage, which reduces power consumption while allowing optimum performance for the application.

Another big contributor to power consumption is processor-memory transactions. Fetching data from flash memory consumes a substantial amount of power, so an IoT processor needs to be designed to reduce memory transactions without sacrificing performance. This may also improve response time, since flash memory transactions are typically slower.

The amount of hardware used in any DSP operation is directly proportional to the precision of the operands: the higher the precision, the lower the error in the final result, but the higher the power and area consumption. Allowing the processor to change precision dynamically, depending on the energy available as well as the criticality of the operation, will ensure optimum use of energy in an IoT SoC.

SECURITY

Security is another key area in IoT applications. A security breach in an IoT device can reveal information about one's bank accounts, financial status, location, health condition and more. It can also allow the wrong person to get access to products (like automobiles, medical devices, home entertainment) used by another person, as well as critical infrastructure like power plants, manufacturing facilities, transport networks and so on.

There are several mechanisms to enhance the security of a processor. The first of them is bus encryption. A bus in a system has a higher chance of attack; for example, a hacker can tap the processor-memory bus on a PCB and extract critical information. In the bus-encryption model a processor takes instructions from memory in encrypted form, decrypts them internally, processes them, and encrypts the results again before putting them on the bus. Hence decrypted values are not accessible outside the processor. To enhance security further, data as well as addresses can be encrypted before being put on the processor-memory bus.
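As a toy illustration of the idea, a simple XOR keystream stands in here for a real block cipher, and the per-chip key is hypothetical; the point is only that plaintext never appears on the external bus:

```python
from itertools import cycle

BUS_KEY = b"\x5a\xa5\x3c"  # hypothetical per-chip bus key

def bus_xfer(data):
    """Toy bus encryption: everything on the external bus is XORed
    with a per-chip key, so a probe on the PCB never sees plaintext."""
    return bytes(a ^ b for a, b in zip(data, cycle(BUS_KEY)))

instruction = b"\x12\x34\x56\x78"    # plaintext exists inside the CPU only
on_the_wire = bus_xfer(instruction)  # what a PCB probe would capture
inside_cpu = bus_xfer(on_the_wire)   # decrypted again inside the core

print(on_the_wire != instruction)  # True: the wire never carries plaintext
print(inside_cpu == instruction)   # True: the core recovers the instruction
```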

Another common way a hacker attacks a processor is power analysis. In this process the attacker measures the current drawn by the processor with high precision and correlates it with the computation performed, in order to identify the value of the cryptographic key. One of the easier defenses is to insert NOP (no-operation) instructions randomly, which injects noise into the power spectrum and makes the analysis tougher. Another effective way is to change the key used for encryption and decryption over time, which renders the power analysis useless.
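The random-NOP countermeasure can be sketched as follows; the instruction names are hypothetical, and a real implementation would do this in hardware or at the firmware level:

```python
import random

def insert_random_nops(program, nop="NOP", p=0.3, rng=None):
    """Randomly interleave NOPs into the instruction stream so the
    power trace no longer lines up with the real computation."""
    rng = rng or random.Random()
    out = []
    for instr in program:
        while rng.random() < p:   # possibly several NOPs in a row
            out.append(nop)
        out.append(instr)
    return out

program = ["LOAD r0", "XOR r0,key", "STORE r0"]
hardened = insert_random_nops(program, rng=random.Random(42))
print(hardened)
```

The real instructions still execute in order; only their timing relative to the power trace becomes unpredictable, which is what breaks the attacker's correlation.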

To prevent malevolent users from getting control of the system, authentication of the IoT system is a key requirement. Authentication can be done by implementing a key-based cryptographic protocol, which in some cases relies on randomness in the device manufacturing process, converting that randomness into system/device-specific uniqueness.
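A sketch of such challenge-response authentication: an HMAC over a fresh nonce stands in for the protocol, and a fixed byte string stands in for the manufacturing randomness (e.g. a PUF); all names here are illustrative.

```python
import hashlib
import hmac
import os

class Device:
    """Toy device whose secret stands in for manufacturing randomness
    (e.g. a PUF); the verifier learned the same secret at enrollment."""
    def __init__(self, secret):
        self._secret = secret

    def respond(self, challenge):
        return hmac.new(self._secret, challenge, hashlib.sha256).digest()

def authenticate(device, enrolled_secret):
    challenge = os.urandom(16)  # fresh nonce per attempt, defeats replay
    expected = hmac.new(enrolled_secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(device.respond(challenge), expected)

secret = b"device-unique-randomness"
print(authenticate(Device(secret), secret))           # True: genuine device
print(authenticate(Device(b"cloned-guess"), secret))  # False: impostor
```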

P.S. I am grateful to Dr. Anupam Chattopadhyay of NTU, Singapore, who has helped me with suggestions regarding a few sections of the above blog.


How Lucio Lanza Got Into EDA
by Paul McLellan on 10-21-2014 at 7:01 am

Lucio Lanza is this year’s recipient of the Kaufman award. Unlike most recipients, Lucio worked closely with Phil Kaufman earlier in his career. I met with him at his office in Palo Alto to hear the story.

Even if you have never met him, it would be a reasonable guess from his name that Lucio Lanza is Italian. And you'd be right. He grew up in Milan and went to Politecnico di Milano. Through high school he had read lots of philosophy but decided that if he was going to affect the world he had better study engineering. So he got a degree in electronics, doing work on satellite communication. In those days the politecnico was short of funds, and of the 1,000 people in each year's intake only 25 were allowed to study electronics due to the cost of the labs. But as a result they had their pick of jobs in Italy.

Lucio graduated and decided to go to Olivetti. In a weird coincidence, many years later, they would do a big strategic agreement with VLSI Technology, where I worked, and I would find myself in Ivrea a couple of times, essentially an Olivetti company town outside Turin/Milan. Back then they were most famous for typewriters and the company owned hotel in town (long since closed) even looked like one.

Olivetti decided to move aggressively into electronics. It was a huge company with 65,000 employees at the time. They set up a group of 17 people in downtown Milan with Lucio. He designed the 16-bit CPU that made Olivetti a leader in machines for banking and stole business from NCR, Siemens, Ericsson and others. The group, by then 65 people strong, was moved to corporate headquarters in Ivrea and put in charge of all electronics.

A big decision was who should be the semiconductor partner. They split their microprocessor into parts and then asked companies if they could build it. The first partner was Monolithic Memories (after Intel, Motorola and others had turned them down). Then AMD wanted to become a second source for military applications, and that became the well-known Am2900 bit-slice technology: Lucio's processor, chopped up.

They started to work with Intel. But the processor itself was only 5% of the system cost; they needed a portfolio of peripherals too. Olivetti standardized on Intel's 8080, so for a year or two Lucio worked for Intel, paid by Olivetti (with no Intel stock!). In 1977 Lucio realized this was stupid and joined Intel, working on peripherals. Then he was in charge of the 186, and then strategy for microprocessors. But at Intel, strategy does not mean you make decisions; you just run the process by which decisions are made. One key person was the head of the engineering team, Phil Kaufman (yes, the Kaufman award guy).

Next was Ethernet. At this point Xerox had created Ethernet but was basically its only user. Communication protocols were all controlled by the phone companies, but Lucio realized that if they could make Ethernet a standard then they would control the protocol and bypass the phone companies. They needed Xerox (the customer) and Intel as the semiconductor manufacturer, where they both worked, but also a system company. Phil and Lucio flew to Italy and unsuccessfully tried to convince Olivetti to be that company; Lucio already knew everyone there. They flew back to the US despondent. The very next day they called Gordon Bell (in another weird coincidence, he was chairman of the board at Ambit, where I would later work) and asked if DEC would be the system company, and he said yes on the spot. It was a big gamble for everyone, since at the time Ethernet had precisely two customers: Xerox PARC and the University of Hawaii, which had developed the original Aloha protocol on which Ethernet was based. So, a market of two. It seems Phil Kaufman had an amazing ability to convince people. And that was how the original DIX (DEC-Intel-Xerox) "blue book" Ethernet standard happened, and the start of what made Ethernet such a dominant networking technology.

Another Intel colleague, Aryeh Finegold, had left to found Daisy Systems. Phil Kaufman left for Silicon Compilers. After a little dance, Daisy convinced Lucio to join, which is how he got into EDA (although back then we called it CAE, computer-aided engineering). He started in a fairly junior position, since he had to earn his stripes, and a few months later was VP of marketing. He stayed for three years.

Daisy realized that they couldn't sell into the core design groups of semiconductor companies, since those already had their own workstations and software. In those days, every big semiconductor company had a huge internal EDA group. The people who would accept them were the designers in system companies who didn't really have IC design expertise. So Daisy started a huge program to make it easy to go to foundries by supporting all their libraries. They educated system companies to ask the question "Which CAE workstation supports your library/foundry?", and Daisy had 12, which was a lot more than anyone else. Once they had traction there, they started to get into the semiconductor companies too.

So that is the story of how Lucio Lanza got into EDA.


Power Management Policies for Android Devices
by Daniel Payne on 10-20-2014 at 10:00 pm

I’ll never forget the shock when I upgraded from a Feature Phone to my first Android-powered SmartPhone, because all of a sudden my battery life went from 6 days down to only 1 day between charges. As a consumer, I really want my battery to last much longer than one day, so the race is on for mobile phone companies to design their devices with the maximum possible battery life. The combination of operating system, power management policies and hardware are what determine the battery life in a modern, Android-based device. On the EDA side there are companies like DOCEA Power that focus on modeling power at the Electronic System Level (ESL), before RTL coding has even started.

ESL power modeling and simulation addresses the key factors in the evaluation of power management policies, as you can develop, update and maintain power models from the IP block level to the complete system inclusive of the process technology-dependent factors.

With the DOCEA Power approach, you can build power models bottom up and model power states and power events from the application and software from the top down. You can automate the power model generation, clock and voltage connections and parametrize the models to also include PVT and implementation level details necessary for accurate power analysis. Models can be refined with characterization data as the design matures to track the power and performance targets.

Power models contain system power, thermal, and performance states for computational elements (PUs, or processing elements), interconnect, and both active and idle power states. Dynamic power calculations account for task load and task consumption, while idle-power modeling includes temperature-dependent leakage as well as clock-gated and power-gated idle power reduction techniques.

DOCEA Power provides the ability to model power as a function of power state residency, system activity, and power state event transitions from realistic workloads and applications.
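The idea of residency-based power modeling can be sketched in a few lines of Python. The states, power numbers, and transition costs below are illustrative assumptions for the sketch, not DOCEA Power data or its actual model format:

```python
# Sketch of a state-residency power model (illustrative numbers only).
# Energy = sum(P_state * t_state) + sum(E_transition per state change).

POWER_MW = {"active": 450.0, "idle": 25.0, "sleep": 2.0}   # assumed state power
TRANSITION_UJ = {("sleep", "active"): 120.0,               # assumed wake-up cost
                 ("idle", "active"): 15.0}

def energy_mj(trace):
    """trace: list of (state, duration_in_seconds) tuples."""
    energy = 0.0
    prev = None
    for state, dt in trace:
        energy += POWER_MW[state] * dt                     # mW * s = mJ
        if prev is not None and (prev, state) in TRANSITION_UJ:
            energy += TRANSITION_UJ[(prev, state)] / 1000  # uJ -> mJ
        prev = state
    return energy

# A bursty workload: mostly asleep, two short active bursts.
trace = [("sleep", 10.0), ("active", 0.5), ("idle", 1.0), ("active", 0.2)]
print(f"{energy_mj(trace):.2f} mJ")
```

Refining such a model then amounts to replacing the assumed constants with characterization data, and adding temperature-dependent leakage terms, as the design matures.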

Power Management Policies
Let’s survey the power management policies, implemented as Linux CPU frequency governors, typically found on Android devices.

Performance Governor
This locks the phone’s CPU at maximum frequency, producing impressive benchmark results and a faster race-to-idle. Race-to-idle is the process by which a phone completes a given task, such as syncing email, and returns the CPU to an extremely efficient low-power state.

Related – ESL Tool Update from #51DAC

Conservative Governor
This governor biases the CPU toward the lowest possible clock speed as often as possible. The conservative governor can introduce choppy performance; however, it can be good for battery life.

OnDemand Governor
This governor has a hair trigger for boosting clock speed to the maximum set by the user. If the CPU load drops, the OnDemand governor slowly steps back down through the kernel’s frequency table until it settles at the lowest possible frequency, or the user executes another task that demands a ramp.
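The hair-trigger ramp-up and gradual step-down can be sketched as a toy simulation. The frequency table and load threshold below are illustrative assumptions, not the kernel’s actual tunables:

```python
# Toy model of OnDemand-style frequency scaling (illustrative values only).
FREQ_TABLE_MHZ = [300, 600, 1000, 1500, 2000]  # assumed kernel frequency table
UP_THRESHOLD = 0.80                            # load above this -> jump to max

def next_freq(current_mhz, load):
    """Return the next frequency given CPU load in [0.0, 1.0]."""
    if load > UP_THRESHOLD:
        return FREQ_TABLE_MHZ[-1]              # hair trigger: straight to max
    i = FREQ_TABLE_MHZ.index(current_mhz)
    return FREQ_TABLE_MHZ[max(i - 1, 0)]       # otherwise step down one notch

# A burst of load ramps to max; idle samples walk back down the table.
freq = FREQ_TABLE_MHZ[0]
for load in [0.9, 0.3, 0.3, 0.3, 0.3, 0.95]:
    freq = next_freq(freq, load)
    print(load, freq)
```

The asymmetry (jump up, step down) is the essence of OnDemand; the Interactive governor described below differs mainly in how quickly it reacts to a load burst.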

Userspace Governor
This governor allows any program executed by the user to set the CPU’s operating frequency, something more common for servers or desktop PCs where an application (like a power profile app) needs privileges to set the CPU clock speed.

Powersave Governor
The opposite of the Performance governor is the Powersave governor, and it locks the CPU frequency at the lowest frequency set by the user.

Related – Power Modeling and Simulation of System Memory Subsystem

Interactive Governor
Similar to the OnDemand governor, the Interactive governor dynamically scales CPU clock speed in response to the workload placed on the CPU by a user. Interactive is significantly more responsive than OnDemand, because it’s faster at scaling to maximum frequency. Interactive is the default governor of choice for today’s smartphone and tablet manufacturers.


Android battery life using the Interactive governor

InteractiveX Governor
Created by kernel developer “Imoseyon,” the InteractiveX governor is based heavily on the Interactive governor, enhanced with tuned timer parameters to better balance battery vs. performance. The InteractiveX governor’s defining feature, however, is that it locks the CPU frequency to the user’s lowest defined speed when the screen is off.

Hotplug Governor
The Hotplug governor performs very similarly to the OnDemand governor, with the added benefit of being more precise about how it steps down through the kernel’s frequency table as the governor measures the user’s CPU load. However, the Hotplug governor’s defining feature is its ability to turn unused CPU cores off during periods of low CPU utilization. This is known as “hotplugging.”

Related – Power and Thermal Analysis of Data Center and Server ICs

All of these power management policies can be quite complex, and the policies that promise added battery life may carry significant tradeoffs in design and validation complexity, as well as in functionality and performance. Evaluating power management policies and algorithms should take into account how the hardware interacts with the application and software behavior:

Traditional power management that improves battery-life benchmarks may assume the system is idle 70-90% of the time, so a “race to halt” and optimizations for idle power reduction are key. Applications that require a high quality of service, such as media playback, communications, and isochronous support, need policies that deliver low latency and sustained performance.
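The race-to-halt argument can be checked with back-of-the-envelope arithmetic. The power and time numbers below are assumptions for illustration, not measured data; note the result hinges on how deep the idle state is and how much power the slower operating point actually saves:

```python
# Compare two policies completing the same work over a 1-second window
# (illustrative power/time numbers, not measured data).
P_MAX_MW, P_HALF_MW, P_IDLE_MW = 800.0, 450.0, 20.0

# Race-to-idle: run at max frequency for 0.2 s, then idle for 0.8 s.
race = P_MAX_MW * 0.2 + P_IDLE_MW * 0.8

# Slow-and-steady: run at half speed for 0.4 s, then idle for 0.6 s.
steady = P_HALF_MW * 0.4 + P_IDLE_MW * 0.6

print(f"race-to-idle: {race:.0f} mJ, slow-and-steady: {steady:.0f} mJ")
```

With these assumed numbers, racing to idle wins because the idle state is very cheap and half speed does not halve the power; with a shallower idle state or more aggressive voltage scaling, the slow-and-steady policy can win instead, which is exactly why such policies need workload-driven evaluation.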

Summary
Android devices may use many different power management policies, and we only covered a handful of possible policies in this blog. My follow-on blog will describe the methodology proposed by DOCEA Power to model, simulate and optimize power prior to RTL coding.


i.am, I said

i.am, I said
by Don Dingee on 10-20-2014 at 4:00 pm

The tie between rock artists and technology isn’t new. One of the first prominent rockers-turned-entrepreneurs is Tom Scholz of Boston, an engineer with a couple of MIT degrees and several patents to his name. Neil Young is currently out with Pono, attempting to make a higher-resolution audio format based on FLAC encoding to get past the overly compressed, clipped fare that is standard with MP3s.

When Intel announced Will.i.am as director of creative innovation at CES 2011, a few heads turned.


A Complete Scalable Solution for IP Signoff

A Complete Scalable Solution for IP Signoff
by Pawan Fangaria on 10-20-2014 at 7:00 am

In an SoC world driven by IP, where a single SoC can integrate hundreds of IP blocks (sourced not only from 3rd parties but also from internal business units, which can carry a lot of legacy), it has become essential to have a comprehensive, standard method to verify and sign off the IP. These checks must be performed objectively and quickly, without requiring experts for each type of testing such as power, timing, CDC, DFT, or physical congestion. The key challenge for SoC designers and integrators has been verifying the IP and ensuring it is worthy of integration into the SoC. Because IP varies widely in size and complexity, and the spectrum of IP suppliers keeps growing, the solution must be flexible and adaptable to the specific needs of a particular IP, and scalable enough to cover the whole spectrum as well as address emerging future needs.

Although I knew about Atrenta’s IP Kit being used at TSMC for soft IP qualification, I didn’t know about the versatility of this comprehensive solution until I attended the webinar presented by Robert Beanland, Sr. Director, Corporate Marketing at Atrenta.

The IP Kit performs the extensive verification provided by the SpyGlass platform on an IP (including RTL, library files, constraints such as SDC and UPF/CPF, and waivers). It cleans up the SDC, CDC, and UPF/CPF constraints, applies the supplied waivers on rules, and delivers a clean IP with dedicated reports on power dissipation (including power domains), fault coverage, SDC coverage, and clocks & timing, along with handoff reports (datasheet and dashboard) and standard reports including signoff. It provides a complete package of all reports, constraints, waivers, RTL, libraries, and SpyGlass Abstract Models. The SpyGlass Abstract Models are smart models unique to Atrenta; I will talk about these a little later. The quality of an IP can be easily ascertained from these reports.

The use-model is quite simple (although rigorous work is done under the hood), so anyone can perform the IP signoff easily. It takes just three commands: 1) ‘aipk_read’ reads the design and supporting files and performs design setup and goal checks; 2) ‘aipk_run’ performs the design analysis; and 3) ‘aipk_pack’ packages the design once all the goals have been met successfully. The process can also be tuned to a specific customer’s needs.

An extensive dashboard allows you to define rules for the IP quality report, such as pass/fail criteria, customized tests, and power and CDC checks, per company requirements. Additionally, IP specs can define how a rule should behave, for example stuck-at conditions or false-path propagation. Details of any particular test or rule in the report can be navigated as shown above, and trend lines for any failing or passing rule are generated automatically.

TSMC has put the Atrenta IP Kit through extensive use in qualifying all soft IP, and has set up an online portal through which any IP can be run through the IP Kit to generate a quality matrix for the IP at hand. Any validated IP can be re-validated against a new configuration for re-use. In several of the IP qualifications at TSMC, problems such as index out of range, unconstrained I/O ports, unsynchronized CDC paths, and many more were observed that were unknown to the IP providers or SoC integrators. Pinpointing the exact problems this way helps resolve them faster and eliminates longer loops later during SoC integration.

Now is the time to talk about the SpyGlass Abstract Models! The IP Kit can also produce an Abstract Model with high capacity and performance for effective, fast integration into an SoC. The model is enhanced so that issues internal to the IP are minimized, and it is focused on the connectivity and configuration information relevant to SoC integration. At the SoC level, the connected IP are signed off together, producing the dashboard for SoC Signoff.

The hierarchical SoC abstraction flow makes billion-plus-gate SoC results available in a few hours instead of weeks. Specifically, compared to flat design analysis, these models offer a substantial reduction in memory (5-10x), a massive performance improvement (15-50x), and noise reduction (i.e. the number of violations) of 10-100x. SoC design houses can build a repository of qualified IP by passing all 3rd party as well as internal IP through the IP Signoff process provided by the IP Kit. The Smart Abstract Models can be built from these qualified IP through IP Acceptance (i.e. validating IP assumptions in the SoC context) and used in SoCs.

With fewer and shorter iterations, design convergence becomes faster, which can shorten the design schedule by up to 60%. IP Signoff with standardized rule sets and processes, seamless integration into the SoC flow, and extensibility to new technologies and 3rd party tool reports is extremely beneficial for IP suppliers as well as SoC integrators, for both internal and external usage.

What’s more? Atrenta is further strengthening the IP Kit by adding a new verification capability that automatically incorporates a large number of assertions in an IP, which can then be used at the SoC level to determine whether an issue relates to configuration, connectivity, or the IP itself. This will give an additional high-productivity boost to SoC integration and signoff. Stay tuned for this new capability!

Atrenta provides a complete RTL platform for IP as well as SoC Signoff, flexible use-model, high impact and low noise methodology along with high quality management reports. You can learn more about this complete offering in detail by attending the recorded on-line webinar here.

This reminds me of my discussion with Piyush Sancheti, VP, Product Marketing at Atrenta, earlier this year, when we saw an acute need for a standardized sign-off process at both the IP and SoC level.
Read more on that – RTL Sign-off – At an Edge to become a Standard.

More Articles by Pawan Fangaria…..