
Hybrid Memory Cube Shipping

by Paul McLellan on 09-25-2013 at 4:37 pm

Today Micron announced that it is shipping 2GB Hybrid Memory Cube (HMC) samples. The HMC is actually 5 stacked die connected with through-silicon vias (TSVs). The bottom die is a logic chip, manufactured for Micron in an IBM 32nm process (and it doesn't have any TSVs). The other 4 die are 4Gb DRAM die manufactured by Micron itself, with TSVs for vertical communication. The target market is next-generation routers and high-performance servers, which need memory with very high bandwidth and low power. The HMC can deliver 160GB/s of bandwidth with a power saving of up to 70% compared to conventional DRAM.

I talked to Mike Black, technology strategist for Micron. He said the work started about 6 years ago, when they decided to look at advances in the technology. They knew they would want to stack die, but doing so required mastering not just the TSVs themselves but a whole manufacturing chain: thinning the wafers (without breaking them), bonding everything together, and so on. On top of that came the decision to build a system architecture around a powerful logic chip, much more powerful than the type of logic that can be built on a DRAM wafer line.

I think that the announcement today is probably somewhat arbitrary since Micron has been shipping the HMC to some partners already. Full production is scheduled for the second half of 2014.

I think the announcement is interesting for two reasons. One is that the HMC is an interesting device in and of itself. The basic interface is not proprietary to Micron and is defined by the HMC Consortium. Besides Micron, Samsung and SK Hynix are contributors (which covers pretty much the whole DRAM market these days), as are Altera, Xilinx and ARM.


But perhaps even more important for those of us who follow the semiconductor industry in general is that this is a comparatively high-volume part using TSVs, and so one of the first 3D chips. Of course many people know that Xilinx has been shipping very high-end FPGAs using a 2.5D approach on a silicon interposer. But that is not a high-volume device, and apparently it has a price point in the tens of thousands of dollars. Micron is not saying what the price point of the HMC is, but it is clearly a performance, power, cost-of-ownership type of sale, meaning that it will cost a lot more than just buying the same amount of DRAM in conventional form.

Analysts who follow the 3D chip numbers seem to think that use of TSVs is going to grow from 1% to 10% of the market by 2017. Since the semiconductor market is roughly $400B, that is $40B in 3D chips. Micron reckons its internal numbers are in line with that, at least for the memory markets where it is involved.

Memory turns out to be an ideal pipe-cleaner for 3D since it has redundancy, error correcting codes (ECC), and in this incarnation has a powerful logic chip that can, for example, reconfigure the memory to cope with an open TSV. Also, DRAM doesn’t have the acute power and thermal issues of, say, microprocessors so an HMC is more tolerant of problems and doesn’t stress every aspect of the technology simultaneously.


Intel 14nm versus Samsung 14nm

by Daniel Nenni on 09-25-2013 at 4:15 am

The legend of Intel being two process nodes ahead of the rest of the industry is quickly coming to an end. To come to terms with this you need to do an apples-to-apples comparison, which is what I will do right here, right now.

First and foremost let’s compare SoC silicon delivery since SoCs are driving the semiconductor industry and will continue to do so for years to come. In regards to microprocessors Intel is a monopoly and I don’t see that changing, ever. For this comparison I will use a smartphone SoC because apparently Intel has not yet figured out how to make an SoC that fits both phones and tablets like the rest of the industry (sarcasm).

In regards to processes, I’m talking about 14nm. Which 14nm process is better technically? I don’t think we will ever get agreement on that so let’s just say that Samsung and Intel will have something equivalent. As you read in Samsung 28nm Beats Intel 22nm, Samsung definitely has the process technology to challenge Intel. In regards to 14nm wafer cost I give that to Samsung hands down. Remember, Samsung is the number one memory maker and they know how to minimize wafer costs. Samsung also has displays, memory, and other products in the smartphone BOM. Additionally, Samsung is the number one smartphone/tablet systems company so they could easily give away 14nm wafers for free and not even notice it in their financials.

According to Intel sources, Morganfield, the first Intel 14nm smartphone SoC, will be delivered in the first half of 2015. According to the semiconductor equipment manufacturers Intel is having problems with variability at 14nm so this could easily slip. Intel did not do double patterning at 22nm so new process steps have been introduced. The more process steps, the more times you touch a wafer, the higher the variability, simple as that. And that is your Intel F.U.D. for today.

According to my trusted sources the top fabless companies are designing to 14nm right now. The 1.0 (production) version of the PDKs (process design kits) are now available and 14nm tape-outs will happen in Q4 2013, which means production silicon in the first half of 2015. Since 14nm is a half node of 20nm I do not see any F.U.D. here. The foundries used the same metal fabric for 20nm and 14nm with double patterning so they have seen and solved the variability issues. 20nm silicon is out now and will be in production the first half of 2014, absolutely.

Why did I use Samsung 14nm instead of TSMC or Globalfoundries? Technically they are equivalent, I just thought I would take less grief since I have been more critical of Samsung since they became a foundry. Personally I favor TSMC and GF as pure-play foundries since they do not compete with customers, or get sued by them. (Oh snap).

Tonight I’m at the GSA Semiconductor Executive Forum featuring Dr. Condoleezza Rice and semiconductor luminaries from around the world. Next Tuesday I will be at the TSMC Open Innovation Ecosystem Forum networking with 1,000+ fabless semiconductor professionals. If you ask the right questions at these events you will get the answers you need to be an “internationally recognized industry expert” like myself. I have also been called an “industry luminary” which has a nice ring to it too. Just don’t call me a “thought leader” because that sounds like some kind of creepy mind control thing. Of course my blogs do sometimes make people sleepy…….. you are getting sleepy….. Okay now quack like a duck.


Another Major Consolidation in Semiconductor Space!

by Pawan Fangaria on 09-25-2013 at 4:00 am


This time it is between suppliers of semiconductor manufacturing equipment, and they are among the top-ranked global peers. Applied Materials Inc., which held the numero uno position in sales of chip manufacturing equipment in 2012, agreed to acquire Tokyo Electron Ltd, third in that ranking. Gary Dickerson of Applied Materials will be the CEO and Tetsuro Higashi of Tokyo Electron will be the Chairman of the combined entity. Details of the merger, which is primarily a stock swap, can be found at http://www.bloomberg.com/news/2013-09-24/applied-materials-shares-slip-as-sales-miss-estimates.html

| 2012 Rank | 2011 Rank | Company | 2012 Revenue ($M) | 2012 Market Share (%) | 2011 Revenue ($M) | 2011-2012 Growth (%) |
|---|---|---|---|---|---|---|
| 1 | 2 | Applied Materials | 5,513 | 14.4 | 5,877 | -6.2 |
| 2 | 1 | ASML | 4,887 | 12.8 | 6,790 | -28.0 |
| 3 | 3 | Tokyo Electron | 4,219 | 11.1 | 5,098 | -17.2 |
| 4 | 5 | Lam Research | 2,835 | 7.4 | 2,314 | 22.5 |
| 5 | 4 | KLA-Tencor | 2,464 | 6.5 | 2,507 | -1.7 |
| 6 | 6 | Dainippon Screen | 1,484 | 3.9 | 1,810 | -18.0 |
| 7 | 9 | Advantest | 1,423 | 3.7 | 1,162 | 22.5 |
| 8 | 11 | Hitachi High-Technologies | 1,138 | 3.0 | 986 | 15.4 |
| 9 | 7 | Nikon | 1,007 | 2.6 | 1,378 | -27.0 |
| 10 | 8 | ASM International | 965 | 2.5 | 1,332 | -27.5 |

As the Gartner data in the table shows, growth was negative for both giants in 2012, a steep -17% for Tokyo Electron. Since both are struggling, this merger may help them share the burden, and it creates a bigger giant of about $29B in market value, leaving the other players in the market far behind. However, it remains to be seen how the two entities complement each other to create larger economic value, helping the combined company as well as the macro-economy. Nevertheless, it is a joining of two global forces on opposite hemispheres, which is bound to bring global synergy to the semiconductor equipment business.

In the recent economic downturn, most of these companies have experienced upward cost pressure from developing ever more sophisticated machines for shrinking technology nodes, and at the same time reduced spending on production equipment by semiconductor manufacturers. Also, the market for semiconductor production equipment is more or less consolidated among Intel, TSMC, Samsung and a few other buyers in that category. Naturally, that invites mergers to manage cost (by complementing each other and eliminating overlaps) and to increase capital and revenue, provided there is a right fit.

Reportedly, Applied Materials and Tokyo Electron do not have much overlap, and after completion of the merger they expect to save about $250M of cost in the first year and about $500M by the third year. However, it must also be noted that there are other competitors in the same space. It also remains to be seen whether any antitrust issue arises, since this is a merger of major forces which could lead to a concentration of economic power. Considering the current economic situation of this business, I guess not.

So, what do we infer from this story? Will there be more mergers in this space? I guess yes, because the others, who are left significantly behind, may need to join forces in order to provide healthy competition to this combined entity. Comments welcome!


SystemVerilog Assertions and Functional Coverage

by Daniel Payne on 09-24-2013 at 8:26 pm

Ashok Mehta has designed processors at DEC and Intel, managed ASIC vendor relationships, verified networking SoCs, directed engineers at AMCC, and used SystemVerilog since its inception. He recently authored a book: SystemVerilog Assertions and Functional Coverage. The book is available in both hardcover and Kindle formats at Amazon.



But I Never Have Seen a Synchronizer Failure

by Jerry Cox on 09-24-2013 at 8:00 am

You may say, “Why should I worry about synchronizer failures when I have never seen one fail in a product?” Perhaps you feel that the dual-rank synchronizer used by many designers makes your design safe. Furthermore, those chips that have occasional unexpected failures never show any forensic evidence of synchronizer failures. Why worry?

There have been cases of synchronizer failure over the years, and there are contemporary ones. In fact, there have been many more than can be listed, both because firms designing digital systems are reluctant to call attention to their failures and because the infrequent and evanescent nature of these failures makes them hard to locate and describe. A few cases are listed here. To indicate the time span of the documented cases, let's look at the first and the most recent.

ARPAnet (1971): The Honeywell DDP 516 was used at a switching node in the original ARPA network, but in early tests it failed randomly and intermittently within hours or days. Persistent efforts to capture the symptoms were unsuccessful. Eventually, Severo Ornstein diagnosed the problem based on his prior experience with metastability. He then remedied it by halving the synchronizer clock rate (effectively doubling the resolution time). Honeywell did not accept this solution so each DDP 516 had to be modified before installation in the original experimental network.

Technion (May 2013): Scientists at the Technion in Israel reported on a case of metastability in a commercial 40nm SoC that failed randomly after fabrication. Normally, there would have been no forensic evidence that metastability was the cause of these failures. However, by use of infrared emission microscopy they identified a spot on the chip that correlated with the failure events in both time and location. The spot contained a synchronizer with a transient hot state that confirmed its role in the failures. Because the system employed coherent clock domains, the synchronizer MTBF was sensitive to the ratio of frequencies used in the two clock domains to be synchronized. The original, unfortunate choice of this ratio led to the failures and a more favorable choice improved the MTBF by two orders of magnitude. For the application at hand, this was an acceptable solution, but it was a highly expensive and time-consuming way to resolve the problem.

Another difficulty in reporting metastability failures is their infrequency. A Poisson process models the failure rate well, as demonstrated by the correlation between simulations and measurements in silicon. Suppose a product has a failure rate such that there is a 50% chance that a problem will be detected in a test of 30 days' duration. Further, suppose that no failure happened to be detected during that test period. If product life is 10 years and 10,000 times as many products are to be sold as were tested, the expected number of failures would be on the order of a million. Many would be benign, but in safety-critical systems some could lead to fatal results. Even if the probability of failure during test is orders of magnitude less than 50%, the multipliers associated with the longer life and greater numbers in service can make the failure risk significant.

These considerations make it clear that physical tests of products after fabrication are necessary, but insufficient. Only simulations of synchronizer performance, best done before fabrication, can verify the lifetime safety of a multi-synchronous system. This conclusion is of increasing importance as synchronizer performance becomes more dependent upon process, voltage and temperature conditions and more variable within and among chips.



Xilinx’s Vivado HLS Will Float Your FPGA

by Luke Miller on 09-23-2013 at 8:30 pm

Very rarely does the FPGA designer, especially with respect to RADAR, think of the FPGA as a floating-point processor. Just to be sure I asked my 6-year-old and she agreed. But you know what, the Xilinx FPGAs float. Go try it, order some up and fill up the tub.

Anyway, I propose a duel to the avid VHDL coder. I want you to design me a sine(x) function in VHDL: you provide the angle x in radians, and the output is sin(x) in single-precision floating point. It must produce the answer within 32 clocks at 150MHz. It also needs to be pipelined, so that after the initial 32 clocks, data is continually produced, provided there is an input every clock cycle to keep the pipe full.

So how long would that take you? Well, I did this very experiment using Xilinx's Vivado HLS and was done in about 10 minutes, and that includes my son's diaper change. I used the Taylor series expansion out to the 13th term. I removed all the factorial divides by using LUTs to store the 1/3!, 1/5! … terms. What we are left with is the very simple code below:

What's even grander is that I did not have to run a hundred RTL simulations to verify my design. Why? Because the input to HLS is C/C++, so you are running an executable that completes in a second to verify your math. Think about it: why do you run an RTL simulation for a module you design? You need to verify the math, the latency and the boundary conditions. It is an iterative process. So is the HLS flow, but you move much faster, because the time between code tweaks is much shorter in the C/C++ domain than in the RTL simulation domain. Here is how we did with respect to performance and device usage:

In HLS you're mainly trading off FPGA device area, clock speed and latency. You no longer have to wait for MAP to complete to see device usage and Fmax. Think of that next time you are sizing FPGAs for a proposal. The C/C++ you write can be added to a library and reused again and again. Now some of you may be asking, why don't I just use the sin(x) function in math.h? That works too, and it would take only 10 seconds to design using Xilinx's Vivado HLS, but the latency will be 48 clocks. We needed to hand-code the C function because I needed a pipelined, 32-clock solution. Go download a free trial from Xilinx and play with Vivado HLS today! You won't regret it…



A Brief History of Silvaco

by Daniel Nenni on 09-23-2013 at 5:00 pm

Silvaco is the leading supplier of TCAD software, and a major supplier of EDA software for circuit simulation and design of analog, mixed-signal and RF integrated circuits.

The company was founded in 1984 by Dr. Ivan Pesic. The initial product, Utmost, quickly became the industry standard for parameter extraction, device characterization and modeling. It was followed in 1985 by SmartSpice, bringing Silvaco into the SPICE circuit simulation market. SmartSpice was later complemented by a family of circuit simulation products for analog, mixed-signal, and RF.


In 1987 Silvaco entered into the TCAD market and by 1992 had become the dominant TCAD supplier with the Athena process simulator and Atlas device simulator. These would later evolve into a complete family of 2D and 3D process, device and stress simulation products.

In 1997 Silvaco entered the analog IC CAD market with a suite of EDA tools for schematic capture, layout, physical verification (DRC, LVS and LPE) and parasitic extraction. In the same year Silvaco also began offering its tools for 3D physics-based interconnect modeling for passive components and interconnect parasitics.

In 2004, Silvaco entered the digital market, offering tools for cell/core library characterization, place and route, and Verilog simulation.

In 2006 Silvaco began to offer foundry-specific PDKs to enable designers to streamline their design flows all the way to fabrication. This began with the TSMC 0.18um process and has since grown to over ninety PDKs for eighteen foundries, and the number of supported PDKs continues to grow.

In October 2012, after a long battle with cancer, Dr. Pesic passed away. Ownership of Silvaco remains in the Pesic family, with Dr. Pesic's son, Iliya Pesic, now Chairman of the Board. David Halliday, a veteran with over twenty years of experience at Silvaco, has been appointed interim CEO. The company continues to move forward and is a proactive member of the EDA and TCAD business communities through such industry associations as Si2, SEMATECH, CMC, and GSA.

The company has always been privately held, financially strong, and debt-free, growing steadily on retained earnings to become the largest privately-held EDA company.

In addition to its headquarters in Santa Clara, California, Silvaco maintains US sales and support offices in Boston, Massachusetts, and Austin, Texas. Silvaco also has international sales and support offices in Japan, Korea, Taiwan, Singapore, and the United Kingdom. In addition, Silvaco employs local distributors in China, India and Malaysia.

Silvaco’s broad customer base includes leading foundries, fabless semiconductor companies, integrated device manufacturers, universities, research institutions and IC design houses.

Silvaco’s mission is to remain the leading TCAD supplier while becoming the leading EDA supplier delivering best-in-class tools, complete design tool flows, expert technical support, and professional services.

Silvaco has a proven track record of providing leading edge and innovative solutions and will continue to build on its success having all the attributes required to provide best-in-class tools for the ever more challenging requirements imposed by future generations of semiconductor devices and technologies.

Silvaco is determined to remain independent, financially stable and debt free to enable them to be a long term premier alternative to the ‘old guard’ EDA companies. Silvaco will focus on providing excellent tools and outstanding support to continue to grow its already extensive, established and loyal customer base.


Apple: "It's The Sales Channel, Stupid!"

by Ed McKernan on 09-23-2013 at 1:00 pm

Apple's decision to launch the iPhone 5C as a "high priced" device, as opposed to a $300 entry-level mass-market consumer play, appears to be intertwined with a much more overriding strategic plan that is beginning to play out in the market. Many analysts who pushed for the low-cost device saw it as necessary to save the ecosystem but were wary of the possible margin erosion that could damage Apple's earnings and brand image. Tim Cook, though, has put in play a contrarian channel strategy that looks for carriers to compete for customers with $0 down and lower-cost data plans, as well as instituting buyback programs on used iPhones that can more than pay for the $199 fee of a new device. The net effect will be higher sales to existing customers as they upgrade on a yearly basis, and a broadening of the developing-world user base as Apple resells the "pre-owned" devices in the $300 range that analysts were hoping for all along.

No sooner had DoCoMo signed up as the last major carrier to handle iPhones in Japan than it initiated a price war with KDDI and Softbank to win back customers lost over the past few years. In a recent article in the WSJ, DoCoMo was noted to be offering the iPhone 5S to customers for free on a two-year contract, thereby undercutting the cost by $199. Meanwhile KDDI and Softbank are offering users $63 to $100 if they purchase the iPhone 5C with a two-year contract. As late as August, it was noted that DoCoMo had lost 145,000 customers to its competitors due to not having the iPhone in its stores. Even more stunning is the fact that DoCoMo signed an agreement with Apple that guarantees a minimum of 40% of its overall device sales will be iPhones. In other words, Apple is nearly guaranteed to own over half the Japanese market at its own margins, at the expense of the carriers' margins. With China Mobile expected to come on board in November, the same scenario is likely to play out in a market that is more than twice as large as the US.

After the initial weeks of undersupply, it is quite likely that in all three of the world's major markets (the US, Japan and China) a price war will break out among carriers that effectively moves the iPhone 5S price from $649 to $450 and the iPhone 5C from $549 to anywhere from $350 to $400. Moreover, the Apple program to funnel customers into its stores to receive rebates on used phones will have a further leveling effect on the market, as users receive cash for their phones and carriers lose the power to redirect customers to other subsidized phones. I expect Samsung to be most impacted as they lose the high and mid-range markets, but all of Android will be affected.

The sales channel effect that Apple seeks to impose is very similar to how BMW and Mercedes were able to increase market share over the past 30 years while maintaining premium brand values. Leasing a new BMW was attractive to many first-time buyers because it took into account the high residual value that existed after three years, typically calculated at 50-60% of the list price. The next tier of buyers, and there were many, could be counted on to be happy buying a pre-owned vehicle in a price range much more closely aligned with new American cars (substitute the words Android phone here).

In this case, though, Apple has gone one better in that it is able to retain 50% of the value of the phone after one year, which is incredible for any high-technology product given the steep Moore's Law cost curve. Perhaps the continuous upgrades of iOS help maintain the high value. If Apple pulls this off, then carriers risk losing their up-front $199, which makes up a significant amount of their profit. Then the giant sucking sound you hear will be Apple taking hold of the majority of the customer's wireless bill. Apple would then have successfully taken the place of Wintel, while carriers take on the role of PC OEMs.

While every analyst spent the last year haranguing Apple about the need for a low cost iphone, the real story was about the creation of competition in the carrier oligopoly ecosystem which in turn will drive sales to much higher levels while maintaining margins and revenue growth. To paraphrase James Carville, It’s the sales channel, stupid!



Develop A Complete System Prototype Using Vista VP

by Pawan Fangaria on 09-22-2013 at 6:00 pm

Yes, it means complete hardware and software integration, debugging, verification, and optimization of performance, power and all other operational aspects of an electronic system in semiconductor design. In modern SoCs, several IPs, RTL blocks, software modules, firmware and so on sit together on a single chip, making it almost impossible to validate the whole system by traditional means. In such a scenario, nothing can be better than a complete platform which enables designers to connect these components together at various abstraction levels (even before their RTL implementation), optimize the architecture in terms of power and performance, validate the whole system, and produce a prototype that can guide the actual implementation without major problems, significantly improving the total turnaround time of system development.

It was a pleasant surprise coming across the Vista Virtual Prototyping solution, which is part of the Vista™ Platform of Mentor Graphics. I read Mentor's whitepaper, "Vista Virtual Prototyping", with rapt attention, as it gives a good level of detail about the working of the system and how it can solve the big problem of optimizing power and performance and deciding on the overall architecture of the system. I am summarising some of those points below, but would recommend reading the paper for the actual details; it's an interesting and engaging read.

Primarily the system has two main parts: i) a TLM (Transaction Level Modelling) based platform for creating the models of the Virtual Prototype, and ii) a platform for using the Virtual Prototype to integrate and validate software and firmware on the whole system. Here are some of the important and easy-to-use components of the platform –


[Vista Schematic Block Diagram Editor]

The block diagram editor provides a simple schematic creation by linking graphic symbols of various TLM models. It’s possible to view and modify the SystemC source code of any TLM instance. On each save operation, the structural SystemC code of the schematic view is automatically generated and saved.

After the design phase, the Virtual Prototype can be used by software and firmware engineers for software development as well as several verification tasks such as HW/SW analysis, HW/SW co-simulation and debugging. It supports UIs, application stacks, firmware and drivers running on top of an OS such as Linux, Android or Nucleus, as well as bare-metal mode. It provides facilities to develop Linux kernels and to boot the OS quickly. It can also be linked to physical devices of the host workstation such as terminals, displays, USB ports, Ethernet connections, etc.

Although Vista Virtual Prototype can be invoked from the command line, the platform provides an easy-to-use Sourcery CodeBench IDE environment which enables better control of the simulation with direct visibility and control of hardware objects (such as registers and values stored in them), tight HW/SW debugging, and file system interactions.


[Sourcery CodeBench HW/SW Debug GUI]

In the tightly coupled HW/SW debugging, hardware simulation can be controlled by setting breakpoints on accesses to a hardware object, resetting devices and/or cores, setting the simulation mode, etc. There are viewing and analysis facilities for various entities on the display, such as the hierarchical path to the breakpoint, SystemC simulation time, CPU core tracing, the state of DMI (Direct Memory Interface), and the mode of simulation. Vista Virtual Prototyping supports TLM modelling at the LT (Loosely Timed) and AT (Approximately Timed) levels of hardware timing. Accordingly, there are two modes of simulation: Functional Mode, which corresponds to LT, is fast, and concentrates on integrating, debugging and validating the software; and Performance Mode, which corresponds to AT, is slower, and concentrates on analyzing and optimizing performance and power consumption. Users can select and switch between these timing modes at run time.

The platform provides excellent viewing and analysis capabilities, with reports in graphical, textual or tabular form, varying degrees of granularity, and display control on each view as desired by the user. Analyses such as Throughput, Latency, Power (static and dynamic), Power Distribution, Bus Throughput, Contention on the bus model (Address phase and Data phase), and Arbitration time (Address and Data) are performed easily and automatically by the Vista Virtual Prototyping Analyzer.
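To make the analysis types concrete, here is a minimal sketch of how throughput and latency can be derived from a transaction trace. The `Txn` record and both functions are hypothetical illustrations, not the Analyzer's actual data model.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical record of one completed bus transaction.
struct Txn { double start_ns, end_ns; std::size_t bytes; };

// Average latency (ns) over a trace.
double avg_latency(const std::vector<Txn>& trace) {
    if (trace.empty()) return 0;
    double sum = 0;
    for (const Txn& t : trace) sum += t.end_ns - t.start_ns;
    return sum / trace.size();
}

// Throughput (bytes/ns) over the whole observation window.
double throughput(const std::vector<Txn>& trace) {
    if (trace.empty()) return 0;
    double first = trace.front().start_ns, last = trace.front().end_ns;
    std::size_t total = 0;
    for (const Txn& t : trace) {
        if (t.start_ns < first) first = t.start_ns;
        if (t.end_ns > last) last = t.end_ns;
        total += t.bytes;
    }
    return total / (last - first);
}
```

The real tool computes such metrics automatically from the simulation, with per-view granularity control, rather than requiring user code.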


[Power distribution comparison of two architectures]

Multiple simulation sessions can be compared to determine the effects of system configuration changes, protocol selection, and software changes on the design behaviour and its performance and power attributes.

The embedded Sourcery Analyzer performs software analyses such as CPU State and Statistics, File System Activity, Function Calls and Statistics, Lock Wait and Hold Times, Process and Thread State, Page Fault Rate, Memory Usage and Cache Hit/Miss Ratio.


[Native Unified Software IDE across Hardware Evolution]

Vista Virtual Prototyping is integrated into an embedded software design flow that spans validating and optimizing the software on an abstracted simulation model, through emulation and FPGA prototyping during the pre-silicon stage, to the final product at the post-silicon stage. Users can easily switch the underlying hardware model from Vista Virtual Prototypes to hardware prototypes to boards while staying within the same native Sourcery IDE.

Click here to download the whitepaper and learn more.


What Do Sports and NoCs Have in Common?
by Randy Smith on 09-22-2013 at 11:00 am

As an Oakland Raiders season ticket holder, I attend as many Raiders home games as possible. If you have ever attended a live sporting event at a large stadium and travelled by car, you are probably familiar with the traffic problems that occur at the end of the game, when everyone wants to leave the stadium parking lot at the same time. At most venues you will find law enforcement on hand to control the traffic. On the surface, their role seems to be to help traffic merge more efficiently, since there are typically many lanes funneling down to fewer lanes, over multiple stages, to reach the exit and/or a major highway. But they are also there to give higher priority to certain vehicles: police and emergency vehicles come first, followed perhaps by limos and buses, and then the low-priority traffic (me in my car). This situation actually looks a lot like the traffic problems in a modern SoC.

As design content increases, the data moving about our designs travels relatively longer distances and competes with other traffic in the system. To manage this, buffers and arbiters are inserted along the way. The most obvious merge pattern occurs on the path to request data from memory, typically off-chip DRAM. In a modern SoC this traffic flow is managed by the Network on Chip (NoC) IP. As the traffic cop, the NoC's responsibility is to make sure messages of each priority are handled properly. Making sure that all requests are handled on time is called managing the Quality of Service (QoS). Not all NoC implementations do this in the same way, though, and it is important to understand the differences when selecting a NoC.

Many NoC architectures (e.g., ARM, Arteris) use a mechanism where the initiator of a message sets the message's priority (Initiator-Based QoS). Simply put, the traffic cop believes the priority you tell him you have and sends you through accordingly, without regard to what is happening downstream. If he has no higher-priority traffic, he simply sends all of the cars straight through. The problem is that this traffic cop doesn't have a radio. The flood of traffic he is sending forward may completely fill the lanes at the next intersection, and no one is telling him about it. So higher-priority traffic coming from another direction will not be able to get through the next intersection, because it is full of lower-priority traffic blocking the way. The solution: use the police radio!

In addition to Initiator-Based QoS, Sonics has NoC IP that also supports Target-Based QoS. In this approach, the decisions on handling priorities are made closer to the target (or endpoint) of the path, so arbitration decisions are made with knowledge of all of the traffic headed to the target. In short, the traffic cops are talking to each other over the radio (i.e., the system will not select an arbiter that cannot make forward progress). The resulting decisions can minimize latency, improve memory efficiency, and allow dynamic allocation of the network bandwidth. This works hand-in-hand with other mechanisms employed by Sonics, including virtual channels (aka threads), non-blocking flow control, and other configuration parameters. Click the image below to see a more detailed description.
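The difference between the two schemes can be sketched with a toy admission model (plain C++; this is purely illustrative and not Sonics' implementation — the bounded buffer stands in for the "next intersection" on the way to the memory target).

```cpp
#include <cassert>
#include <cstddef>

// A bounded downstream buffer on the shared path toward the target.
struct Link {
    std::size_t capacity;
    std::size_t used = 0;
    bool try_admit() { if (used < capacity) { ++used; return true; } return false; }
};

// Initiator-based QoS: the local arbiter admits low-priority packets
// whenever there is room, blind to downstream occupancy. Returns whether
// an urgent packet arriving afterwards can still get through.
bool initiator_qos(Link& link, int low_packets) {
    for (int i = 0; i < low_packets; ++i) link.try_admit();
    return link.try_admit();   // urgent packet: blocked if the link is full
}

// Target-aware QoS: the arbiter sees occupancy at the target and throttles
// low-priority traffic so the link never fills completely with it
// ("the cops talk over the radio").
bool target_qos(Link& link, int low_packets) {
    for (int i = 0; i < low_packets; ++i)
        if (link.used + 1 < link.capacity)   // keep headroom in reserve
            link.try_admit();
    return link.try_admit();   // urgent packet takes the reserved slot
}
```

With a capacity-3 link and a burst of five low-priority packets, the initiator-based scheme leaves the urgent packet stuck behind a full buffer, while the target-aware scheme admits it immediately — the blocked-intersection scenario from the traffic-cop analogy.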


I am glad the Oakland Police Department uses their radios. It helps me exit the area quickly without impacting ambulances or other higher-priority traffic. Sonics can do the same for SoC designs. For more details, check out SonicsSX® and SonicsGN®.