
Silvaco TCAD Webinar

Silvaco TCAD Webinar
by admin on 01-26-2015 at 4:50 pm

TCAD is a somewhat specialized area since not that many people design semiconductor processes compared to the number who design chips. But without TCAD there would be no chips. One area where the two domains intersect is SEE (single-event effects), where neutrons (mainly) can cause a flop or a memory bit to change state. We live on a radioactive planet, that is not going away, and the smaller the transistor, the less energy it takes to flip it.
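
To see why shrinking transistors makes upsets easier, here is a back-of-envelope sketch of the underlying charge-collection arithmetic (a simplified illustration only, not the physics Silvaco's simulator solves; the LET, track length, node capacitance and supply voltage below are made-up example values): a particle strike deposits charge in proportion to its linear energy transfer, and the bit flips when that charge exceeds the node's critical charge, which shrinks along with the device.

```python
# Back-of-envelope SEU estimate (illustrative only -- not the Silvaco TCAD model).
# The node capacitance, supply voltage, collection depth and LET are hypothetical.

E_PAIR_EV = 3.6          # energy to create one e-h pair in silicon, eV
RHO_SI_MG_CM3 = 2330.0   # silicon density, mg/cm^3
Q_E = 1.602e-19          # electron charge, C

def deposited_charge_fC(let_mev_cm2_mg, path_um):
    """Charge deposited along a particle track of length path_um (microns)."""
    # energy per micron: LET [MeV*cm^2/mg] * density [mg/cm^3] -> MeV/cm -> MeV/um
    mev_per_um = let_mev_cm2_mg * RHO_SI_MG_CM3 * 1e-4
    pairs = mev_per_um * 1e6 / E_PAIR_EV * path_um   # e-h pairs generated
    return pairs * Q_E * 1e15                        # charge in fC

def critical_charge_fC(c_node_fF, vdd):
    """Crude critical charge: charge needed to flip a node, Qcrit ~ C*Vdd."""
    return c_node_fF * vdd                            # fF * V = fC

q_dep  = deposited_charge_fC(let_mev_cm2_mg=10.0, path_um=1.0)
q_crit = critical_charge_fC(c_node_fF=1.0, vdd=0.8)
print(f"deposited {q_dep:.1f} fC vs critical {q_crit:.1f} fC -> upset: {q_dep > q_crit}")
```

Even a modest LET over a one-micron collection depth dwarfs the femtocoulomb-scale critical charge of a deeply scaled node, which is why this kind of simulation matters more with every process generation.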

Silvaco has a webinar coming up on the topic, titled Simulating Total Dose, Prompt Dose, Damaging Fluence and SEU Using TCAD. It runs Tuesday, February 17th from 10-11am Pacific time. Here is what it will cover:

  • Introduction to a newly available and recently declassified Total Dose model.
  • Description of the physical mechanisms accounted for in the Total Dose model, including radiation-induced de-trapping of trapped oxide holes.
  • How certain bias conditions during irradiation can reduce the trapped hole concentration in radiation-hardened oxides, leading to radiation-induced threshold voltage recovery (this is NOT the normal “rebound” effect caused by the slow formation of interface traps).
  • How to simulate a particle fluence that creates damage in the semiconductor.
  • How to simulate transient, very high dose rate “prompt” events.
  • Simulating other, more traditional high-energy Single Event Effects (SEE).
  • Examples including threshold voltage shift and inter-device leakage from Total Dose oxide charging, image sensor damage from a fluence of protons, Prompt Dose effects on a circuit, Single Event Burnout (SEB) of a power PiN diode, and Single Event Upset (SEU) of a 22nm SRAM.

The presenter is Derek Kimpton, Principal Applications Engineer at Silvaco, who spent four years characterizing radiation effects on devices at Plessey Semiconductors in Lincoln, England. While there he published a paper in Solid-State Electronics on a new and predictive total dose oxide charging model, which is the basis for the code implemented in Silvaco’s latest TCAD Victory Device simulator.

Who should attend? Well, everyone in the radiation effects community with an interest in the simulation of radiation effects on electronic devices using physics-based (TCAD) tools. But given that all chips are affected by radiation, everyone else is at least peripherally affected too, so this is an area of increasing importance even to people who think it doesn’t affect them.

More details and registration are here.


More articles by Paul McLellan…


30+ Years of Semiconductors – The base matters!

30+ Years of Semiconductors – The base matters!
by Pawan Fangaria on 01-25-2015 at 10:00 am

Although CMOS technology in semiconductors was patented in the 1960s, commercial ICs and electronic systems based on CMOS ICs started picking up in the 1970s, and the real growth came with the personal computer (PC) market in the 1980s. Intel microprocessors then started dominating the semiconductor market with ever-increasing processing speed and a correspondingly high demand for memory. Catalyzing that growth from the software side was Microsoft Windows; we have all heard the “Wintel” jargon that ruled the PC market, with PCs running Windows on Intel processors. The market grew most rapidly in the 1980s (a CAGR of 16.8% in 1989) and 1990s (a CAGR of 13.6% in 1999) before two recessions hit the market in the first decade of the new century, bringing the CAGR down to 0.5% in 2009.

Thanks to IC Insights for providing these statistics in its report. The CAGR in this decade is expected to be 4.1% (less than the 9% averaged over 30 years), even though wireless networks and IoT connections are expected to grow at a maximum CAGR of ~19% to ~22.5%, while the cellphone market is expected to maintain a CAGR of around 9%.

The point to note here is that even after the global recessions of 2001 (the dot-com bust) and 2008 (the subprime crisis), baseline semiconductor revenue has remained steady and has been rising in this decade. True, the 1980s and 1990s showed the fastest growth, but the base was very low, ~$10B to $43B in the 1980s, rising to ~$139B in the 1990s. Today, in 2014, it is expected to be $333B. So even a 4.1% CAGR over this whole decade is not bad (in my opinion; of course I would be happy if it were more!), because the base is what matters. Any further growth from IoT and smartphones would be icing on the cake! IoT is expected to see the maximum growth, but from a low base of ~$3.3B, and wireless from ~$9.4B.
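
To make the “base matters” point concrete, a quick bit of arithmetic with the figures quoted above (purely illustrative) shows that 4.1% growth on today’s base adds more absolute revenue per year than the headline double-digit CAGRs of the 1980s did:

```python
# Quick arithmetic behind "the base is what matters" (illustrative, reusing the
# approximate market sizes and CAGRs quoted above).

def annual_dollar_growth(base_billion, cagr_percent):
    """Absolute revenue added in one year at the given growth rate."""
    return base_billion * cagr_percent / 100.0

print(annual_dollar_growth(43, 16.8))    # late 1980s base: ~ $7.2B added per year
print(annual_dollar_growth(333, 4.1))    # 2014 base:       ~ $13.7B added per year

# Compounding a modest 4.1% over a decade on a $333B base:
print(333 * (1 + 0.041) ** 10)           # ~ $498B
```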

It’s a common phenomenon: a small-cap company grows much faster than a mid-cap, and a mid-cap faster than a large-cap. There are several pitfalls on the way from mid-cap to large-cap, but after attaining large-cap status growth becomes steady. The semiconductor industry is in a sweet spot; not company-wise but sector-wise, it is showing the large-cap pattern. So it will certainly not see double-digit growth, but the base is high enough that it will have to sweat just to retain single-digit growth.

Today, semiconductors have entered every aspect of our lives and become an essential ingredient. Why did the semiconductor market stay more or less steady during the recessions, with no growth or only minimal decline in particular years? The essentials had to be maintained: expansions were curtailed and business consolidation halted growth, but people still had to keep their PCs running on the network, office equipment working, cars with all their electronics and home appliances working, and medical and healthcare systems taking care of them. Semiconductors have become a daily consumption item for people, like food.

It will not go down from here. Yes, it can rise with new growth drivers like IoT. Okay, in this decade IoT can rise at ~22% from a low base, but in the next decade we may see the same story: the base will steadily increase, with a high CAGR initially that moderates and then declines! Let’s see.

More Articles by Pawan Fangaria…


IBM to Humiliate 20% of Workforce?

IBM to Humiliate 20% of Workforce?
by Daniel Nenni on 01-25-2015 at 7:00 am

There are four big technology companies that I grew up with: Intel, Apple, Microsoft, and IBM. I still follow all four but it is sometimes hard to watch. Last week there was talk of a massive layoff at IBM and I have just confirmed it with my Upstate sources. According to an article in Forbes it will be 25% of the more-than-400,000 workforce. According to my sources it will be closer to 20%, but that is still more than 80,000 people who will lose their jobs. IBM is calling it a voluntary Transition to Retirement (T2R), which is a nice name for being terminated, I guess. You can read about IBM’s 11th quarter of declining revenue HERE, but take some aspirin first.

Let’s take a look at what happened after the x86 Server Division acquisition by Lenovo last October. Here is a comment from Alliance@IBM (the official IBM Employees’ Union website) that suggests IBM “stuffed” the acquisition with employees who should not have been included:

Comment 01/06/15: Just heard from a former co-worker who was transitioned to Lenovo last year… As they returned to work for 2015 on 1/5/15 approximately 70% of the IBM’ers who transitioned to Lenovo as part of the System X buyout were told that they would be offered a package if they leave voluntarily. The majority of folks affected are said to be in the Development group. Folks in development who have been with IBM for 15 years are being offered one year of salary plus a year of health benefits. Folks outside development who had been with IBM for 25 years are being offered the same package. People who received the notice have one week to decide, if they volunteer to leave then their last day will be January 30th. If not enough people volunteer to leave then Lenovo will likely do mandatory layoffs. Lenovo is blaming IBM for sending too many duplicate and non-essential positions over with the buyout and not adequately reporting on what folks actually did relating to System X. I believe this to be accurate based on things I saw during the phase to “determine if you were in scope for the transition”. Our manager at the time told us you had to work on System X for at least 51% of your time to even be considered, but friends in other divisions were saying that people on their teams who didn’t even touch System X were being sent over. The common factor (anecdotal) was the person’s age. The theory at the time: “let the layoff be on Lenovo’s books and spare IBM from further claims of ageism”

You gotta love open forums! They are the best sources of information, absolutely. If you read some of the more recent comments you will see other claims of IBM abusing senior staff with Performance Improvement Programs as a motivation for the T2R “volunteer” program.

Comment 01/23/15: I’m also on T2R —- was planning to retire at the end of this year. I also got a 3 today without warning and told I would need to be on a PIP. Totally shocked. According to the T2R documentation, we are exempt from resource actions but not performance based termination. I think IBM is trying to figure out a way to get rid of us all early.

The question I have is: How is this T2R going to affect the IBM Semiconductor Operations acquisition by GlobalFoundries? If I were GF, I would be VERY wary of employee stuffing!


Industrial Internet “In-Security” – Awaiting a Cyber Pearl Harbor?

Industrial Internet “In-Security” – Awaiting a Cyber Pearl Harbor?
by Charles DiLisio on 01-24-2015 at 7:00 pm

You feel violated when internet intruders (hackers) cause digital harm (theft of social security numbers, credit cards, logins, e-mails or addresses); however, it’s frightening when organized cyber attacks destroy critical physical infrastructure (disrupting water, power or gas). It’s annoying having to update passwords or get a new credit card. How unnerved would you be if the power were out for weeks or you didn’t have gas for your car? This is the new age of cyber-terrorism awaiting us as we connect more of our critical infrastructure to the Internet.
Continue reading “Industrial Internet “In-Security” – Awaiting a Cyber Pearl Harbor?”


Measuring Metastability

Measuring Metastability
by Jerry Cox on 01-24-2015 at 7:00 am

Measuring metastability is just 50 years old this year. In 1965 my colleague Tom Chaney took a sampling ’scope picture of an ECL flip-flop going metastable. S. Lubkin had mentioned the phenomenon over a decade before that, but at that time most engineers were unaware of it or did not believe it actually existed. Later, many who saw the sampling ’scope picture doubted the method’s validity. Eventually, flip-flop output-voltage traces, patiently photographed by Tom in a darkened room, began to turn the tide. This led to a paper that was rejected because one reviewer (perhaps an electrical engineer) saw a simple analog circuit that he felt was old and uninteresting, and another reviewer (perhaps a computer scientist) said metastability could never occur, so the paper should be rejected. Later, in 1973, the classic Chaney and Molnar paper was accepted for publication in the IEEE Transactions on Computers.


Reliable measurement of metastability in synchronizers and arbiters has been hard to realize. Many subtle problems deceive the unwary and adequate simulation tools and models became available only in the 1980s. Here is a list of several of the major problems:

  • Measurements in silicon can only be made in circuits specially designed for that purpose. Simulation in advance of fabrication should be done for any other circuit and is preferable to avoid product re-spins.
  • Shorting the metastable nodes, either in silicon or in simulation, seems painless, but actually yields erroneous results. This is because the method is only valid if the circuit behaves symmetrically. However, hardly any synchronizers really do.
  • Most synchronizers today are master-slave designs and the recovery characteristics of the master and slave latches are usually quite different. This requires measurement of the characteristics of both latches and the calculation of an effective settling time-constant.
  • The effective settling time-constant is a function of the clock waveform and duty-cycle.
  • The load on the output of a synchronizer affects its behavior so care must be taken to include the subsequent circuit in simulations.
  • The component stages in a multistage synchronizer interact with each other, invalidating the simple concept of multiplying their failure probabilities (see the sketch after this list).
  • Besides clock domain crossings, the potential for metastability hides in many surprising places: initialization logic, flip-flop reset signals, memory interfaces and analog input circuits.
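
For reference, the failure rate of an idealized single-stage synchronizer is usually summarized by the textbook MTBF formula sketched below. This is a generic illustration with made-up parameter values, not the MetaACE analysis; the list above explains why real master-slave and multistage designs cannot simply reuse it by multiplying per-stage probabilities.

```python
# Classic single-stage synchronizer MTBF model (a textbook sketch, not MetaACE's
# analysis): MTBF = exp(t_r / tau) / (T_w * f_clk * f_data).
# The parameter values below are hypothetical, purely for illustration.
import math

def mtbf_seconds(t_r, tau, t_w, f_clk, f_data):
    """Mean time between synchronization failures for one flip-flop stage.

    t_r    -- settling time available before the output is sampled (s)
    tau    -- metastability settling time constant of the latch (s)
    t_w    -- effective metastability aperture/window (s)
    f_clk  -- clock frequency (Hz)
    f_data -- rate of asynchronous data transitions (Hz)
    """
    return math.exp(t_r / tau) / (t_w * f_clk * f_data)

# One clock period of settling at 1 GHz, tau = 20 ps, window = 20 ps, 100 MHz data:
print(mtbf_seconds(t_r=1e-9, tau=20e-12, t_w=20e-12, f_clk=1e9, f_data=100e6))
```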

Synchronizer designers need a tool for metastability analysis that overcomes these problems, has been proved in silicon and is easy to use. MetaACE from Blendics fits that need, but does not help with the collateral need for an educational tool that makes it easy for synchronizer designers, SoC engineers and engineering students to learn about metastability in its many manifestations.

To meet this educational need, Blendics is announcing MetaACE LTD, a version of MetaACE that limits the number of netlist nodes it can analyze to 250 or less. This node limitation does not otherwise narrow the functionality of the tool. MetaACE LTD is sufficient to handle most unextracted netlists and many netlists with capacitance-only extractions. Doing capacitance-only simulations also improves the run-time with only a small loss in accuracy. Other than the node limit, the two tools are the same and share GUIs and file formats.


MetaACE LTD is available for free download. There will be a webinar describing its use on Wednesday, 18 Feb 2015 at 11 AM PT. Also, a public synchronizer is soon to be available for download, including an extracted netlist and transistor model. This public synchronizer was developed as a master’s thesis project at Southern Illinois University Edwardsville. It can provide a benchmark for comparison with your favorite synchronizer circuit and is a great way to try out MetaACE LTD.


How Imagination tested the PowerVR Series6XT

How Imagination tested the PowerVR Series6XT
by Don Dingee on 01-23-2015 at 10:00 pm

We have been hearing for some time about the Synopsys HAPS-70 and how Synopsys has co-created the hardware and software architecture for FPGA-based prototyping with its customers. Now we see details published by Synopsys on how it collaborated with Imagination on the design of the PowerVR Series6XT GPU.

The first thing to come to grips with is just what a beast the PowerVR Series6XT GPU is. With up to eight unified shader clusters and an array of diverse co-processor units, testing all the configurations and concurrent execution of IP blocks pre-silicon is a tall order. The danger, as designs get larger and larger, is making an error in partitioning the design onto a prototype. This hazard multiplies when customers put the PowerVR Series6XT GPU into their own designs with other IP around it.


Synopsys and Imagination worked together to tackle the partitioning of a basic two-shader cluster, some of the GPU logic, and test logic allowing synchronization of stimuli from DDR3 storage and a connection to a PC host. This spanned four Virtex-7 FPGAs on a HAPS-70 S48. The biggest part of the two-week, manual effort was iterating the partitioning to get the right combination of logic and I/O multiplexing. The result was a prototype running at 8 MHz, which allowed 7000 regression tests to be run successfully – all pre-silicon.

When attempting to scale up to the full Series6XT GPU design, it became evident that the test logic that swallowed 90% of an FPGA in the initial prototype was going to exceed 100% quickly. The logical choice would be repartitioning again, but issues with I/O multiplexing using the “manual” synthesis rules would cut the system performance to 2 MHz. This would make evaluation of the full-up GPU with live video output excruciatingly slow.

Automation came to the rescue. ProtoCompiler has the ability to synthesize code versus HAPS-aware constraints, including interconnect. The teams upped the FPGA count to six, dialed in constraints including keeping FPGA utilization to 80%, and selected a pin-muxing strategy. By using the abstraction flows feature to explore FPGA-to-FPGA interconnects quickly, typically in less than a minute, ProtoCompiler was able to pick the best possible multiplexing ratio. The result was a full-up live video analysis prototype in five, not six, FPGAs running at 7.3 MHz.
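
To get an intuition for why the multiplexing ratio dominates prototype speed, here is a rough, idealized estimate (this is not ProtoCompiler’s actual timing analysis, and the signal counts, pin counts and transfer rate are hypothetical): each physical FPGA-to-FPGA trace is time-shared by several logical signals, and all of the serialized transfers must complete within one system clock period.

```python
# Rough illustration of why the pin-multiplexing ratio caps prototype speed
# (an idealized estimate, not ProtoCompiler's timing engine; all numbers are
# hypothetical).
import math

def est_system_clock_mhz(signals_to_route, physical_io_pins, io_transfer_mhz):
    """Each physical trace carries ceil(signals/pins) logical signals, so the
    system clock can be at most the per-pin transfer rate divided by that ratio."""
    mux_ratio = math.ceil(signals_to_route / physical_io_pins)
    return io_transfer_mhz / mux_ratio

print(est_system_clock_mhz(6000, 1000, 100))   # 6:1 muxing  -> ~16.7 MHz ceiling
print(est_system_clock_mhz(6000,  200, 100))   # 30:1 muxing -> ~3.3 MHz ceiling
```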

One more performance tweak would make the difference. With the partitioning set, the chance to optimize interconnects came into play. The HAPS-70 supports a high-speed time-domain multiplexing feature on all connectors. ProtoCompiler understands how to assign source synchronous clocks, split multi-source nets, and other details to use the HSTDM feature. After a day of exploration of an HSTDM scheme, full-up performance was 12 MHz.


This successful effort retains all of the benefits of FPGA-based prototyping. Executing design changes in RTL is quick and easy. A host connection and debug tools allow control and visibility into the design and the test environment, facilitating sophisticated tests such as video analysis via a compressor/decompressor and frame buffer. The power of a synthesis environment that has detailed knowledge of the prototyping platform also shows the potential.

Synopsys published these results via a presentation at the SNUG Japan sessions in September 2014, and a short article in the 4Q2014 edition of Synopsys Insight (on page 7). The author, Andy Jolley of Synopsys, who worked directly with the Imagination teams, is presenting a live webinar to discuss his findings on February 4, 2015; the event is now open for registration:

Successful GPU IP Implementation on Synopsys HAPS Platforms using ProtoCompiler

Whether you are looking to use the PowerVR Series6XT GPU or just facing a design of similar complexity, the lessons learned from this development are worth a look.


Will the Apple A9 Fall Flat?

Will the Apple A9 Fall Flat?
by Robert Maire on 01-23-2015 at 12:00 pm

Several months ago we suggested that we were concerned Apple’s A9 processor would wind up being 20nm planar (or maybe 14nm planar) rather than the expected 14nm FinFET. As we are now under 9 months from a likely launch time for Apple’s next-generation iPhone, the timing for getting a 14nm FinFET processor on board the phone looks much more difficult. The generally held expectation is that the A9 will be 14nm FinFET, closely following on the heels of Intel’s 14nm FinFET release last year and a significant upgrade from the A8, which itself was a huge uptick in transistor count, density and overall performance over the prior A7 processor and helped make the iPhone 6 a significant hit.

Also Read:
The TSMC iPhone 6!

The math doesn’t add up…
If we assume a September rollout and work back from there, adding up the time to produce the processors, get them tested and shipped in volume, soldered onto circuit boards and assembled into phones, we are likely talking about volume production of the A9 by the end of the June quarter. Given the ongoing news about slow spending by TSMC and Samsung and the recent delay at GloFo, it seems hard to put together enough capacity at a high enough yield in the time left (maybe 5 months at best) to satisfy an Apple rollout of a new phone and the associated volumes. While we wouldn’t rule it out completely, it seems increasingly difficult to get 14nm up to snuff in time without a huge risk that Apple is unlikely to want to take, given the potential embarrassment and fallout of supply issues.

KLAC is a leading indicator…
Last night’s lackluster guidance for foundry spending in H1 2015 continues the trend and underscores the sluggish rollout of 14nm FinFET at the foundries. Remember that 14nm FinFET stumped even the great Intel, so it’s no surprise that it has slowed everyone else down as well. It’s hard to get a 14nm process up to yield without yield management tools (in significant volume). Right now a ramp in mid-2015 is dubious.

A “6S” would fit Apple’s “tick-tock” pattern…
Much like Intel’s “tick-tock” practice of “shrink & exploit,” Apple seems to come out with a more significant upgrade on every other iPhone model, and the iPhone 6 was a biggie, which suggests that the one this fall will be less significant. This would seem to imply that the next model will be a “6S” rather than an “iPhone 7.”

20nm capacity could fit the situation…
Both Samsung and GloFo have nicely working 20nm capacity, with GloFo supplying Qualcomm out of Malta. News reports over the last several months point to Samsung winning the lion’s share of the A9, along with sidekick GloFo and a potential TSMC chaser (already building the A8 at 20nm). We found those reports curious, as it seemed to make little sense for Apple to commit so early in the process. However, those reports make more sense if Apple gave up on the thought of going to 14nm FinFET in time for September and instead settled on the safe and readily available 20nm capacity of today.

Also Read: Who will Manufacture Apple’s Next SoC?

This would also further explain why the foundries don’t appear to be in as much of a hurry over 14nm spending, if the A9 deal is already done with existing technology and capacity. Furthermore, it would support the back-filling of 28nm and 20nm capacity that has been talked about. Though there is the potential for 14nm planar, we don’t think that is a likely scenario (though stranger things have happened). The pieces all seem to fit together…

Core Wars…
Recently there has been a lot of buzz about the number of cores in the processors of Android phones, which are now touting 8-core designs. Maybe, rather than a 14nm shrink and shift to FinFET, Apple could stick with 20nm planar and increase the die size a bit to squeeze in more cores? Though Apple probably does not want to be seen as following Android, there may not be a choice here. Obviously the Apple OS would have to be capable of using more cores (something we have no real clue about). It could be, but we just don’t know… it makes some sense, though.

Apple counting on things Big and Small???
It may be that Apple is counting on the iWatch and the foot-long iPad Pro to carry the momentum in 2015 rather than an iPhone refresh. Logically this makes sense, as the iWatch will likely roll from the spring into the fall holiday selling season as 2015’s holiday gift idea, while the iPad Pro attacks the business market. If this is the case, it may take the pressure off needing a big iPhone refresh in 2015. Better to wait until 2016 for the iPhone 7 and a jump step in processor power.

Slowing Moore’s Law forces choices…
It feels to us that 28nm was the last “good node,” as cost per transistor increased from there, its first upward excursion ever after the long downward curve of Moore’s Law. 20nm has been OK, albeit with increasing costs due to multi-patterning. But 14nm FinFET looks to be a major cost dislocation, causing a significant jump in wafer and per-transistor costs that will set the industry on its ear and cause heartache. Most of the delay can be laid at the feet of the delay in EUV and next-generation litho improvements that would allow shrinks without as much of the multi-patterning we are now facing. While not the only issue in the continuation of Moore’s Law, it is clearly the core culprit.

Given what we heard from KLAC last night, that actinic (“at wavelength”) mask inspection will not be available until 2020, it underscores the view that EUV will not be ready for high-volume manufacturing for another 5 years, into the 7nm node, forcing more pain again at 10nm. (Don’t cry for ASML; they are more profitable with current tool sales even with the EUV delays.) End users such as Apple and Qualcomm will have to deal with the previously reliable cadence of Moore’s Law slowing down and figure out how to roll out new and better products to an ever more discerning consumer base that always wants the next great thing.

Robert Maire
Semiconductor Advisors LLC


How the iPhone Ended Nokia’s Reign!

How the iPhone Ended Nokia’s Reign!
by Daniel Nenni on 01-23-2015 at 7:30 am

The origin of ARM’s success in the mobile phone space is largely traced to Symbian’s decision to exclusively support the ARM Instruction Set Architecture (ISA). This in turn was the consequence of a mid-1990s decision by Texas Instruments to use ARM in its mobile phone ASICs for Nokia, the driving force behind the inception of the Symbian smartphone project.

When the GSM cellular standard was about to enter the commercial arena, TI’s Gilles Delfassy sat in a sauna with executives of Nokia, then a troubled conglomerate, and agreed on a DSP-centric approach to build the upcoming digital cell phones. Digital signal processors, or DSPs, which later became the foundation of TI’s growth, were developed unnoticed at its European division until this meeting took place in Helsinki in 1992. By sealing a business pact to supply specialized chips for Nokia’s cellular products, Delfassy placed TI’s DSP technology squarely in the middle of the emerging GSM products.

What happened next at TI was reminiscent of Nokia’s own blossoming into a telecommunications specialist from a messy electronics giant. TI had just about sewn up the mobile handset silicon market by devoting vast engineering resources to Nokia for development of platforms based on its chipsets. On the other hand, the transformation of Nokia from a Victorian-era industrial conglomerate to a wireless powerhouse was a Finnish fable in its own right.

Also Read: New book untangles the Internet of Things (IoT)!

Fast forward to 2010 and the Nokia fairy tale had come down to earth. What happened to one of the most celebrated corporate champions from tiny Finland? According to Henry Blodget, former research analyst and founder of news blog Business Insider, the iPhone happened.

How did the Finnish mobile phone giant reach this crossroads? Is Nokia the next Kodak? A new book chronicles Nokia’s lost decade in which the venerable handset champion found itself in the clutches of a vicious cycle. “Nokia’s Smartphone Problem: The End of an Icon?” delves into one strategic blunder after another to provide a vivid account of this tale of management indecision. It provides a riveting look at how this comedy of errors took one of the world’s most global companies to a near-death experience.

“Nokia’s Smartphone Problem” is written to educate and inform managers in the IT, wireless, semiconductor, and consumer electronics industries. It’s a groundbreaking book that exposes the past, present, and future of Nokia and the smartphone business at large to find all the pertinent answers regarding smartphone product development cycles. That translates into a detailed treatment of the smartphone industry’s business models and basic building blocks like hardware, operating systems, apps, and ecosystems. And that makes the book a must-read for managers tasked with formulating a mobile strategy for their businesses.

The Nokia story is engulfed in a plethora of misconceptions. A lot of information about the mobile phone pioneer is cluttered, and a number of facts are not in place. “Nokia’s Smartphone Problem” aspires to clear the air, develop a comprehensible picture, and thus set the record straight. Nokia is no longer the master of the mobile game, but it is still an important company. The book digs deep into Nokia’s heritage, strategy blunders, major stumbling blocks, and bailout efforts. That way, it attempts to recollect notes from this epic moment in Nokia’s life and create an authentic document that not only recounts Nokia’s breathtaking transformation, but also provides a discourse on the Finnish company’s turnaround plan.

The book was first published in May 2013 at the height of Nokia’s chaotic relationship with Microsoft. The second edition of Nokia’s Smartphone Problem, published in October 2014, covers Nokia’s formal exit from the smartphone business while Microsoft takes over its mobile phone unit to carry on with its unfinished business of reinvigorating the Windows-based smartphones.

The book takes a microscopic look at Nokia’s turbulent relationship with Microsoft and provides an insider look into Nokia’s multi-layer tie-up with the Redmond, Washington based software giant. It further reconstructs how Nokia is aiming to reinvent itself in the mobile infrastructure business.

The book also argues that chipmakers, a crucial part of the smartphone value chain, wouldn’t want the market to become polarized between Apple’s iPhone and Samsung’s Android handsets. Semiconductor firms are an important source of smartphone innovation and they have a crucial stake in the mobile game.

“Nokia’s Smartphone Problem” features 20 images to highlight defining moments in the company’s smartphone and post-smartphone era. The book is available on Amazon and Barnes & Noble in both paperback and e-book formats.


Windows on a TV

Windows on a TV
by Daniel Payne on 01-23-2015 at 12:00 am

This month I upgraded my TV setup at home with a 40″ LED set from Samsung, a Denon AV receiver and a Samsung Blu-ray player. Being a Google fan, I also bought a Chromecast device.

At CES there were multiple announcements from Intel, and one that caught my eye was the Intel Compute Stick because it reminded me of the Google Chromecast device by plugging into a TV set.

This consumer electronics area is filled with devices from many manufacturers that connect to a TV, and Intel wants to offer us Windows 8.1 apps on a TV with this new Compute Stick. Convergence between the Internet and TV has been quite the rage for years now. It’s hard to compete with Chromecast, which is priced at $35.00; I got mine on sale at Best Buy for just $29.00, and then Google sweetened the offer with a $20.00 Google Play credit, so in reality I paid only $9.00 for my Chromecast device.

Here’s what’s inside of the Intel Compute Stick:

  • Quad-core Atom Processor
  • 2 GB of RAM
  • 32 GB of storage
  • MicroSD support
  • WiFi
  • Bluetooth 4.0
  • USB connector
  • Mini-USB for power
  • Windows 8.1 or Linux

With such a device connected to a TV you could:

  • Browse the web with Bing
  • Social networking
  • Stream content: Netflix, Hulu
  • Play games
  • Run Windows Remote Desktop

Details are still sparse from Intel at the moment, but the retail price is set at $149 with actual product release later this year. I can see that geeks will be more interested in using Linux than Windows 8.1, while most consumers will opt for the Windows version because it is most familiar. On the Linux side the Intel device will cost just $89 and come with 8 GB of storage and 1 GB of RAM. The Compute Stick even reminds me of the popular Raspberry Pi computer aimed at hobbyists and DIY makers, as they both run Linux and have an HDMI connector.

Related: ARM + Broadcom + Linux = Raspberry Pi

To really use Windows 8.1 on a TV would require a bluetooth mouse and keyboard combination, so I look forward to the first shipment of the Intel Compute Stick and I plan to try one out at my local Best Buy store.


Tracing Insight into Advanced Multicore Systems

Tracing Insight into Advanced Multicore Systems
by Pawan Fangaria on 01-22-2015 at 7:00 am

Having learned about the challenges involved in validating multicore systems, and the domains of system-level and application-level tracing, from Don Dingee’s article “Tracing methods to multicore gladness” (based on the first part of the Mentor Embedded multicore whitepaper series), it’s time to take a deeper look at what has to be considered and done for effective tracing of a multicore system and its application software.

Software tracing can be based on static instrumentation or dynamic breakpoints, while hardware-based tracing uses probing technology that requires extra hardware. It is not only technical merit that needs to be considered when selecting a tracing approach; strategic decisions have to be made, often combining approaches, depending on the target system and its architecture. The parameters to be considered in the trade-off include intrusiveness, performance, capacity, granularity and the availability of hardware and software resources for the trace infrastructure.

The tracing cycle consists of several steps performed on the trace event data: collection of the data, import of the data into an analysis and visualization tool, analysis, exploration and post-processing of the collected data, and identification of deficiencies and improvement. Event data collection includes trace instrumentation, configuring tracing options, and starting and stopping the applications running on the target system. The collected data is loaded into a trace analysis/visualization tool on a host system, where it is generally transformed and optimized into a manageable visual representation, according to the tool’s preference.

Mentor’s Sourcery Analyzer supports trace event data generated by the LTTng framework. It also extracts symbolic names from the binary image of the user application that has been traced and maps these to trace event data, thus enabling the look-up of source code from trace data. Sourcery Analyzer supports the import of custom trace event data into its native event format by text importer facilities and JavaScript scripted data imports.

The analysis tools abstract the raw trace event data into user-understandable forms such as a function hit count, a heat map, or a top time-in-function chart, as shown above. In order to extract the resource usage per CPU in a multicore system, an analysis is required that can deliver such meta-information from the raw trace event data.
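
As a toy illustration of the kind of meta-information such an analysis derives (a generic sketch over a made-up event format, not how Sourcery Analyzer is implemented), per-CPU busy time can be accumulated from scheduler switch events:

```python
# Toy derivation of per-CPU busy time from raw scheduler trace events.
# The (timestamp_ns, cpu, next_task) tuple format here is hypothetical --
# real traces (e.g. LTTng sched_switch events) carry more fields.
from collections import defaultdict

def cpu_busy_time_ns(events):
    """events: iterable of (timestamp_ns, cpu, next_task) switch records, sorted
    by time. The interval between a switch to a real task and the next switch on
    the same CPU counts as busy; time spent running the idle task does not."""
    busy = defaultdict(int)
    last = {}                              # cpu -> (timestamp, task) of last switch
    for ts, cpu, task in events:
        if cpu in last:
            prev_ts, prev_task = last[cpu]
            if prev_task != "idle":
                busy[cpu] += ts - prev_ts
        last[cpu] = (ts, task)
        # any interval after the final event on each CPU is ignored
    return dict(busy)

events = [
    (0,    0, "app"),  (400,  0, "idle"), (900, 0, "app"), (1000, 0, "idle"),
    (100,  1, "idle"), (600,  1, "app"),  (950, 1, "idle"),
]
print(cpu_busy_time_ns(sorted(events)))    # {0: 500, 1: 350}
```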

The visualization tools can represent the data in appropriate forms such as logs, graphs, charts, and tables as per the need; a log view can provide details about event sequences and associated information; a graph can provide trends, patterns, min/max information and so on; and a chart can provide statistical information about main factors such as producers, consumers, polluters, etc.

Then comes the most important part: exploration of the data, where correlation and synchronization of trace data from different domains, such as kernel and user space (via a common time axis), takes place. The data section corresponding to any anomaly or peculiarity needs to be located and its initiator identified, perhaps through user-space traces.

The accuracy and success of the analysis of a complex multicore system depends heavily on the availability of solutions to correlate trace data from the different domains recorded during execution of the examined application code. Generally, system and application time sources are not synchronized, and time is counted at different resolutions by different methods, such as time-stamp counter registers (in x86 architectures) and memory-mapped external timers (in embedded architectures), which vary in their handling of numeric overflow, linearity and resolution. The above illustration shows time-correlated kernel and user-space traces, scroll-synchronized through a selection cursor (the red vertical bar).

Sourcery Analyzer supports a palette of facilities to correlate traces that are time-synchronized (scroll-synchronized by placing dedicated synchronization cursors at the desired time-stamps) as well as time-offset traces (which can be rebased to align their time scales).
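
A minimal sketch of what such rebasing amounts to (again generic, not the Sourcery Analyzer implementation): map one trace’s timestamps onto another’s time base with a linear transform derived from a pair of matching synchronization points.

```python
# Generic sketch of rebasing one trace's timestamps onto another time base.
# Two pairs of matching synchronization points (e.g. a marker event visible in
# both traces) define a linear mapping; the values are made up for illustration.

def make_rebase(src_points, ref_points):
    """src_points, ref_points: (t0, t1) timestamps of the same two events as
    seen by the source trace and by the reference trace, respectively."""
    (s0, s1), (r0, r1) = src_points, ref_points
    scale = (r1 - r0) / (s1 - s0)            # corrects differing tick resolution
    return lambda t: r0 + (t - s0) * scale   # then aligns the offset

rebase = make_rebase(src_points=(1_000, 5_000), ref_points=(20_000, 28_000))
print([rebase(t) for t in (1_000, 3_000, 5_000)])   # [20000.0, 24000.0, 28000.0]
```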

The abstracted data from the analysis and visualization tools provides a good basis for the trace tools to perform further measurements and computations to determine system behavior, for example which event blocks spend the most execution time.

Measurement and calculation tools for post-processing the data are seamlessly integrated into Sourcery Analyzer with an easy and useful user interface. The above illustration shows frame events of different durations with annotations applied by the measurement tool; the ‘PulseWidth’ graph indicates the absolute frame duration values across the x-axis.

From the visualization graphs, the calculation tool can extract and compute derived data such as load curves for individual CPUs and load trend over all CPUs obtained from moving averages of individual CPU load graphs.

Finally, the user should be able to look up the source code (through an accurate and comprehensive source code information display) from the trace event data for any problem or concern encountered, and apply the appropriate improvement.

Trace data formats and customization options are other criteria to look at when choosing a particular tracing tool and technology. Mentor Embedded Sourcery Analyzer provides a versatile platform for trace data analysis, visualization, extraction, correlation, measurement and computation. It supports, but is not limited to, LTTng-instrumented Linux kernels and software-trace-instrumented user applications. Read the second part of the Mentor Embedded multicore whitepaper series, “Software Tracing Tools and Techniques for Advanced Multicore Development,” to learn more; Manfred Kreutzer provides a great, detailed description of all the procedures required for tracing.

More Articles by Pawan Fangaria…