
Should Intel Offer Foundry Services?

by Daniel Nenni on 12-15-2013 at 11:00 am

This has been a heated topic since Intel announced, more than a year ago, that it would open its manufacturing facilities to the fabless ecosystem. I for one think it is a colossal mistake, and I'm not surprised that many others share this view. IDMs offering excess manufacturing capacity to semiconductor design companies is what started the fabless revolution, so this is déjà vu all over again. Those same IDMs are now fabless, fab lite, or out of business. Right?

Intel's first foundry customers made complete sense. Achronix and Tabula are emerging FPGA companies, meaning Intel can closely control the flow of process information. Achronix was an equity investment using the ASIC model, where Intel does all of the heavy lifting so the process recipe never leaves the building. Ditto for Netronome. Altera, however, is more of a traditional foundry engagement, and this is where Intel crossed the insanity line, absolutely.

In order for Altera to compete with the likes of Xilinx, it will need full access to Intel's process technology and the ability to co-develop a process tuned for FPGAs. Altera and TSMC did this for many years, up until Xilinx joined TSMC at the 28nm node. As a result, there is going to be a big shake-up in the FPGA market-share numbers at 28nm. Historically Altera and Xilinx have been close, but my guess is that at 28nm Xilinx will run away with 70% market share, which is exactly why Altera moved to Intel (they really had no choice).

From what I am told, the first versions of the Intel 14nm design rule manual delivered to Altera were redacted like something from the FBI. That has changed over time and the fact of the matter is that Intel’s process recipes are now available in the mainstream fabless semiconductor ecosystem which might as well be public domain. There are no secrets in Silicon Valley, especially now with semiconductor social media websites like SemiWiki.com, believe it.

For example: I know what the yield problem really was at Intel 14nm. It was not double patterning, as many had guessed, and technically it was a defect density problem, as Intel had suggested, but there was much more to it than that. Simply stated, increased CMP slurry at the wafer's edge caused excessive curvature, which significantly limited yield.

Okay, back to the foundry question. Intel will not make money from Altera for many years to come and will be lucky to break even considering what a demanding customer Altera is. Same goes for Tabula and Achronix so this whole “filling the fabs” thing is a crock. Intel is just biding time until they can land a big SoC foundry fish which I don’t see happening. I work with SoC companies and they are not going to Intel, believe it.

Qualcomm Chairman Dr. Paul Jacobs said as much at a recent analyst conference:

Qualcomm Inc. (QCOM) BMO Technology, Media & Entertainment Conference December 11, 2013

The transcript is a good read if you are into the SoC business, but here are the Intel-as-a-foundry parts:

The other part of the question was about Intel, what do we see about Intel as a potential source for foundry, and I mean we're certainly open to it. They have expressed interest in going that way. Obviously TSMC and the other fabless guys have a different model right now for how they build their fabs; they're very flexible. They can run multiple different products through them simultaneously, and it's all software controlled where the cartridges of wafers go. Intel is famous, they have been known, for having a copy-exact model, so they need very large volumes of a particular chip to run through that.

But I mean people change and so forth, so I mean it's certainly interesting. I am glad to hear that they're interested in going that way, and we'll see how that plays out. But right now the fundamental of that in the foundry is that typically these leading-edge chips go into a set of iconic phones, and an iconic phone is almost like a movie launch. Things happen in the first small period of time, so you have to ramp incredibly fast and build a lot of capacity on the front end, which means that after that first wave has passed through, there is a lot of foundry capacity left over.

And the fabless model has to be there in order for other people, the later waves, to absorb that capacity. So I think the fabless model is very well suited to the mobile space and to these kinds of leading-edge designs, and so I think it's sort of necessary as a way of participating in the market.

Spin it any way you want, but based on my experience with QCOM and the other top fabless companies, there is no way they will be able to work within the rigid process requirements and the strict business demands of Intel. The foundry business just doesn't work that way. Unless, of course, they are desperate like Altera.

My advice: if Intel wants to fill its fabs, instead of giving away $1B of chips for free (Baytrail) and whoring out its excess capacity, Intel should acquire fabless companies in emerging markets to better diversify and do what Intel does best: make chips!

The Intel foundry guys sat at the table next to me at the GSA Awards last week, which recognized the leading fabless companies around the world. Hopefully they made a shopping list of companies to acquire that can fill their fabs and return Intel to financial growth on par with the rest of the semiconductor industry (30% versus Intel's 0%).



Mentor Buys Oasys

by Paul McLellan on 12-14-2013 at 1:24 pm

Mentor is acquiring Oasys, subject to all the usual caveats about shareholder and regulatory approval. The shareholder paperwork went out earlier this week. The common stock is valueless so presumably the price is low (and Mentor historically has not paid high prices for its acquisitions).

So what is going to happen with the technology? Oasys has synthesis, placement and floorplanning technology called RealTime Designer that works at the RTL level. It turns out that optimizing at the RTL level, instead of at the gate level as other synthesis tools do, is extremely fast.

Mentor has a place and route system called Olympus that is built on technology that they acquired from Sierra and from Pyxis. However, they lacked good SoC synthesis technology. Mentor has always (well, for a long time) had FPGA synthesis technology but that is a different market and largely a different technology under the hood.

My prediction is that Mentor will graft the Oasys floorplan compiler onto Olympus and thus make it competitive with offerings from Synopsys, Cadence and Atoptech. Whether this is enough to make the product really successful remains to be seen. In EDA most of the profit goes to the #1 supplier (I think that is Synopsys at the moment) and some to the #2 (either Atoptech or Cadence, although I've heard lots of rumors about Cadence P&R not being competitive at the moment; these things go in waves). It is hard to make good money if you are #4, although since physical design sells for such a high price there may be more profit to be divided up.

Mentor also has a major ownership stake (probably a controlling interest) in Calypto who have sequential equivalence checking and sequential power reduction (which they developed themselves) and the Catapult high-level synthesis product (that Mentor put into Calypto when they did the deal). I expect they will also end up getting the Oasys synthesis technology to put under the hood of Catapult so that it goes all the way from C/C++ down to placed gates with accurate timing. One of the problems of high level synthesis is that the metrics used for quality of results are very coarse (two multipliers are bigger than one etc) making it hard for the tool to make good choices. With the very fast synthesis from Oasys added in, they should be able to make better decisions and thus produce better results. Oasys did some work with AutoESL before Xilinx acquired them and it looked promising (and it is an open secret that the synthesis engine in Xilinx’s Vivado is based on Oasys’s technology). Of course Calypto may also be a sales channel for RealTime Designer as a standalone synthesis tool.

So we will have to wait and see what actually happens. There are various rumors about Joe Costello trying to roll up several EDA companies into one and thus produce another “big” EDA company overnight but I don’t see this being anything like that. But Costello is the chairman of the board of Oasys, of course, so is involved at least that much.

Anyway, it looks like a good deal for Mentor: not much money, make their P&R competitive, give their HLS a competitive advantage and, perhaps, a new product line.


More articles by Paul McLellan…


Verification of Multirate Systems with Multiple Digital Blocks

by Daniel Payne on 12-13-2013 at 8:27 pm

Our popular smart phones have a whole slew of RF-based radios in them for: Bluetooth, WiFi, LTE, GSM, NFC. Using just a single clock frequency for a DSP function or SoC is a thing of the past, so the design of multirate systems is here to stay. So now the challenge on the design and verification side is to use a methodology that supports:

  • Multiple levels of design abstraction
  • RF blocks and digital blocks interacting
  • Multirate simulation together with HDL simulation
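As a toy illustration of the idea behind the third bullet (not any vendor's tool; the structure and numbers here are invented), multirate co-simulation can be reduced to one event loop in which a fast block takes several steps for every step of a slow block:

```python
# Toy two-rate co-simulation: a "fast" RF-rate block runs at 8x the
# rate of a "slow" digital block inside a single event loop.
# Purely illustrative; not how a commercial simulator is built.

def multirate_sim(n_slow_cycles, oversample=8):
    fast_samples = []
    slow_samples = []
    acc = 0.0
    for slow_t in range(n_slow_cycles):
        # Fast block: 'oversample' steps per slow clock cycle.
        for k in range(oversample):
            s = (slow_t * oversample + k) % 4 / 4.0  # stand-in fast waveform
            fast_samples.append(s)
            acc += s
        # Slow block: samples the averaged fast activity once per cycle.
        slow_samples.append(acc / oversample)
        acc = 0.0
    return fast_samples, slow_samples

fast, slow = multirate_sim(4)
print(len(fast), len(slow))  # 32 4
```

The key point is that the two rates advance in lockstep inside one scheduler, which is what lets RF-style and HDL-style blocks interact in the same run.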

Continue reading “Verification of Multirate Systems with Multiple Digital Blocks”


AMD Goes 3D

by Paul McLellan on 12-13-2013 at 7:16 pm

I attended the 3D packaging conference in Burlingame this week. The most interesting presentation to me was by Bryan Black of AMD. He argued very convincingly that Moore’s Law is basically over for the PC microprocessor business and the way forward is going to be 3D. AMD are clearly working on all this.

Increased density and performance/power at each node are great, but in the end it is lower-cost transistors that drive the transition for everyone except the leading edge.

Problem #1: for PCs, everything that can be integrated in the same process has already been integrated: cache, FPU, multimedia, GPU, etc. All that is left is in different processes, such as DRAM, MEMS, storage and optics, which can't be integrated onto a mainline digital process.

Problem #2: we are going to go backwards. Next generation processes will no longer support mixed functionality such as analog, cache, high performance, low power.

Problem #3: the cost of next generation process may never cross over against current processes so the economic rationale for moving process will stall.

Problem #4: yield is dropping as mask count increases, and the sweet-spot die size is dropping dramatically, further lowering yields for microprocessors, which are large, challenging die.

Problem #5: more and more of the power budget is going to memory access, leaving less for actual computation.
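Problems #3 and #4 come down to simple yield arithmetic. A first-order Poisson yield model, a standard textbook approximation (the defect density and die areas below are made-up illustrative numbers), shows how quickly large die lose yield as area grows:

```python
import math

def poisson_yield(die_area_cm2, defect_density_per_cm2):
    """Classic first-order Poisson die-yield model: Y = exp(-A * D)."""
    return math.exp(-die_area_cm2 * defect_density_per_cm2)

# Hypothetical numbers for illustration only.
d0 = 0.25  # defects per cm^2 (assumed)
for area in (0.5, 1.0, 2.0):  # cm^2: small SoC up to a large microprocessor
    print(f"{area} cm^2 -> yield {poisson_yield(area, d0):.1%}")
```

Doubling die area compounds the loss, which is why a shrinking sweet-spot die size hits large processors hardest.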

Solution: High-bandwidth memory (HBM) is the first step to getting this under control. It lowers the power needed for memory, freeing up more power for performance, and it can continue to scale for years. It can also move analog off-die and save a dozen mask steps, and it can reduce die size to hit the reduced sweet spot.

What happens next? Designs using TSVs, stacked die and interposers are currently running at 100K or so units per year. Next year it will be a million or two. Once the numbers are in the millions, costs will drop dramatically, and a lot of integration will make more sense at the packaging level rather than on big SoCs as we have been doing. 2014/15 is when 3D ICs become real and start to ship in high volume.



Taming The Interconnect In Real World For SoCs

by Pawan Fangaria on 12-13-2013 at 1:30 pm

Interconnect plays a significant role in SoC design; if it is not architected and handled well, it can lead to an overdesigned SoC, hurting power, performance and area. Since an SoC generally contains multiple IPs requiring different data paths to satisfy varying latency and performance requirements, it has become extremely difficult to architect an interconnect for the overall SoC that provides the best throughput as well as the best latency. A single cycle of mismatch can lead the interconnect fabric to admit low-priority traffic and block high-priority traffic. Load balancing is extremely important: high performance on critical paths, and traffic management for slower, less performance-intensive paths. For example, the bus fabric between the CPU and main memory, through the memory controller, must be configured such that memory bandwidth is maximized while latency is minimized.
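To make the single-cycle-mismatch point concrete, here is a toy fixed-priority arbiter (an illustration only; it is not how FlexNoC or any real fabric is implemented, and the requester names are invented):

```python
def arbitrate(requests_per_cycle):
    """Toy fixed-priority bus arbiter: each cycle, grant the
    highest-priority requester (0 = highest priority).
    Input: a list of sets of requesting priorities, one set per cycle."""
    return [min(reqs) if reqs else None for reqs in requests_per_cycle]

# The CPU (priority 0) arrives one cycle later than planned, so a
# low-priority DMA engine (priority 3) takes the first slot instead,
# and the CPU's critical-path read is pushed out.
timeline = [{3}, {0, 3}, {0}]
print(arbitrate(timeline))  # [3, 0, 0]
```

Even this trivial model shows how a one-cycle slip in when a master presents its request changes who owns the fabric, which is exactly the kind of behaviour cycle-accurate exploration is meant to expose.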

The problem doesn't end with a good, optimized architecture; the real challenge comes when performance-critical paths need fine tuning in the real design and the architecture prohibits it. So, what's the way out? I'm delighted to see a novel approach provided by Carbon through its state-of-the-art virtual prototyping tool, SoC Designer. It progressively balances architectural optimizations against real, accurate performance as design decisions are made over the course of the overall SoC design.


[Architectural Exploration – CAXITG and Memory models connected through Arteris FlexNoC]

It's a two-phase approach. In the first phase (architectural exploration), Carbon IP Exchange is used to configure a 100% cycle-accurate interconnect model and quickly isolate performance bottlenecks with traffic generators and flexible memory sub-systems. The picture above shows how the Carbon AXI Traffic Generator (CAXITG) and memory models are configured with a 100% cycle-accurate Arteris FlexNoC interconnect. CAXITG enables performance analysis of AMBA AXI-based systems in Carbon SoC Designer. Traffic characteristics such as reads/writes, pipelining, random data rates, throughput and latency can be profiled. Transaction tracing and back-pressure analysis can be done, waveforms can be viewed within SoC Designer to identify performance bottlenecks, and wait states can be used to simulate low-performing memory sub-systems. Cost-versus-performance trade-off analysis can be done for the interconnect fabric; it's wise to use only as much bandwidth and latency as required, to conserve power.


[Arteris FlexNoC with ARM Cortex A9 dual core CPU]

The second phase involves optimizing the system with implementation-accurate IP blocks and real-world workloads. Virtual prototypes make this reuse easy. Above is an example where the traffic generator has been replaced with an ARM Cortex A9 processor. Design trade-offs can be revisited and re-validated against actual multicore processor traffic. Similarly, the platform can be further updated with actual DMA and memory controller IPs.

Another complexity in interconnect optimization comes from hardware-based cache coherency. The AXI Coherency Extensions (ACE) to the AMBA bus protocols introduce the extra complexity of commands complying with the ACE specification, maintaining cache state to ensure coherent and legal operation, and correctly executing barrier transactions across the multiple cores issuing them.


[Carbon A15 bare metal CPAK multi-processor reference platform]

The A15 Bare Metal CPAK can be used very effectively to analyse cache coherency and its performance implications with an ARM Cortex A15 and ARM CCI-400 Cache-Coherent Interconnect.

SoC Designer has in-built ACE monitoring capabilities. A comparison of non-coherent and coherent workloads can be made instantly.


[Non-coherent A15 application workload – Average read latency ~9.6 cycles]


[Coherent A15 workload (same application) – Average read latency ~16.5 cycles]

The SoC Designer profiling helps quantify the cost of coherency in the broader context of overall system performance. This can also provide clear understanding of design partitioning across multiple networks on chip.
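Using the two average read latencies reported in the profiles above, the cost of coherency for this workload is easy to quantify (a back-of-the-envelope calculation from those two numbers, not a figure taken from Carbon's material):

```python
# Average read latencies from the two SoC Designer profiles above.
non_coherent = 9.6   # cycles
coherent = 16.5      # cycles, same application run coherently

overhead_cycles = coherent - non_coherent
overhead_pct = overhead_cycles / non_coherent * 100
print(f"Coherency adds {overhead_cycles:.1f} cycles (~{overhead_pct:.0f}% per read)")
# Coherency adds 6.9 cycles (~72% per read)
```

A roughly 72% per-read penalty is exactly the kind of number a designer needs in order to decide which masters genuinely need hardware coherency and which can live outside the coherent domain.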

A detailed description of the various capabilities of SoC Designer, and of the procedures for optimizing interconnect performance for SoC designs without wasting power or area, is given in a whitepaper at the Carbon website. It's worth reading to learn what this virtual prototyping platform can do for you.

More articles by Pawan Fangaria…



Mobile Maturity Leads to Extremes

by Bill Boldt on 12-12-2013 at 3:00 pm

The smartphone is becoming a commodity, a lifecycle stage where the strong get stronger, the weak get weaker, and the products standardize and start to look alike. This dynamic is driving innovation in existing products to extremes and spawning a new class of wearable devices.

Today two major players are leading the mobile hardware market, chased by a gaggle of wannabes fighting desperately for third place. The weaker players are getting acquired or are facing worse fates. The “insanely great” inventions of the Jobs Era (i.e., iPhone, iPad, and Mac Air) are being aggressively emulated. The before-and-after picture below tells the story.

Smartphones before (left) and after the Apple iPhone


Meaningful differentiation in smartphones and tablets is extremely difficult now for three simple reasons. Operating systems have narrowed to a small handful of solutions that basically do the same thing. Processor makers are engaged in a performance arms race, but the practical result has been hardly more than a narrow range of mostly similar products. Radios, by definition, adhere to standards, leaving OEMs with implementation advantages that quickly evaporate.

In this situation, most competitors are pursuing or will pursue a two-track mobile strategy. They are creating completely new platforms such as wearables, while they take the existing black rectangle format to extremes.

Emerging wearable devices are gaining momentum with new and somewhat exotic form factors. Popping up before our eyes is an array of watches, glasses, fitness bands, smart-clothing, sensors in shoes, sensors that are swallowed, tattooed sensors, and implanted sensors.

We are probably on the verge of a Cambrian explosion in the wearable market where a wide diversity of products may swamp the market and have to fight for survival. Due to diversity, the wearable market will likely be highly fragmented, making it difficult for any one type of device to re-create the growth of the smartphone market. Only the most aggressive market forecasts show the wearable market being up to 20 percent the size of the smartphone market in five years.

The likely evolution of wearables will be that they become part of an integrated user platform with the smartphone at the center acting like a personal hub of whatever the user bears or wears. The elements of such an integrated wearable platform are already appearing.

A personal, mobile ecosystem is evolving. Watches will be used to control glasses and phones and act as skin-contact biosensors collecting real-time data on the wearer. Images from the glasses will be transferred to the phone for processing, or the glasses will connect directly to the cloud, depending on where the radio is integrated. Messages received by the phone will be displayed in the glasses or on the watch.


High-end phones and tablets will likely remain the main targets for innovation as they evolve together with wearables. Phone and tablet makers will drive the features of the black rectangle design to extremes to make them more secure, run faster, last longer, sound better, look cooler, provide higher resolution, support all kinds of new accessories, and in general push the limits of human imagination.

Many of the new features will be integrated with, and be portals to, new services. Two excellent examples of imaginative new services on the way are augmented reality and context awareness. Another coming service will surely be sensor-based, highly accurate indoor location-based services. Yet another is likely to be bio-sensor-based telemedicine, as the shortage of non-primary-care specialists is expected to roughly double, from a projected 33,100 in 2015 to 64,800 by 2025.

Bill Boldt is a marketing executive and market researcher.





Xilinx Pulls Back the 20nm UltraScale Curtain

by Luke Miller on 12-12-2013 at 10:00 am

This week Xilinx has announced that “The Xilinx 20nm All Programmable UltraScale™ portfolio is now available with detailed device tables, product documentation, design tools and methodology support.”

Do you know what 20nm is? It's small, tiny. Think about it this way: I just learned today that one nanometer is about as long as your fingernail grows in one second, so in 20 seconds, there you go, another channel length goes by. I know what you all just did... you just looked at your fingernails, didn't you?? Maybe some of you chewed on them; I looked.

Also read: Xilinx Begins Shipping TSMC 20nm FPGAs!

Before I continue, let me explain some language that has changed slightly, so as not to confuse my dear readers. We are used to the Virtex-5, 6, 7 nomenclature, but the '8' is not to be. It has morphed into UltraScale, as this 20nm node is not simply a device shrink. UltraScale 3D FPGAs contain a step-function increase in both the amount of connectivity resources and the associated inter-die bandwidth in this second-generation 3D IC architecture. The big increase in routing and bandwidth, together with the new 3D IC wide-memory-optimized interface, ensures that next-generation applications can achieve their target performance at extreme levels of utilization. The other change is that the DSP-rich devices are in the Kintex family, and the Virtex family will be GT-rich. Bottom line: there is an UltraScale FPGA that is going to meet your massive, insatiable, intense design, or maybe your not-so-large design. Either way Xilinx has you covered, as always. Below is what the Kintex UltraScale has to offer. Click here to get the whole story.

The power of the Xilinx FPGA is its reprogrammability. Yes, all FPGAs are reprogrammable, but Xilinx has innovated in how one programs BILLIONS of transistors to produce a particular function or design. That is not a trivial task. A decade ago, when FPGAs were considerably smaller, using VHDL or Verilog followed by RTL simulations was reasonable. That model is no longer effective if one is to beat one's competitors to market. Think about it: can we really hand-code, in a reasonable amount of time, an FPGA that has billions of transistors? I believe not; it is not reasonable in QoR, cost or schedule. Not that there are not great FPGA designers (is that a double negative?), but the problem lies with what is happening in the industry. Systems that used to contain racks and racks of FPGA boards are reduced to a handful of FPGA boards, and the interface layer collapses: memory, power, system integration time and cost, etc. So how did Xilinx get here, being the leader in FPGA, ASIC-class devices?

If Xilinx had just created a 20nm device and made no changes to the tools that program and route it, we would have a very flashy paperweight. In parallel with the silicon design, Xilinx has retooled and created the 'Vivado' design tool suite. ISE is no more for the UltraScale FPGAs; Vivado is the center of UltraScale FPGA programmability. What this means for FPGA designers is the confidence that you can really utilize an UltraScale FPGA to 90%+ and still reach timing closure, in hours rather than a day.

For us FPGA designers, that is a big deal. As if that were not enough, Xilinx has opened up how one can program the FPGA: C/C++, SystemC, OpenCL, MATLAB, etc., and of course still VHDL/Verilog when necessary, for example for top-level design wrappers. Vivado HLS, as I have written before, will allow the user to explore and design the UltraScale FPGA faster than ever. I encourage you to check out all the papers, videos, IP, cores, libraries and resources about Xilinx's groundbreaking 20nm UltraScale FPGAs at Xilinx.com; you will not be disappointed, though you will be wishing for an evaluation board today.

More articles by Luke Miller…



Impact Conference: Focus on the IP Ecosystem

by Daniel Payne on 12-11-2013 at 7:07 pm

Jim Feldhan, President of Semico Research, presented earlier this month at the Impact Conference on the topic "Focus on the IP Ecosystem". I've reviewed his 19-page presentation and summarize it as follows:

  • End markets like smart phones and tablets are dominant
  • Growth drivers include the Internet of Things (IoT)
  • World semi forecast of $355 billion in 2014
  • IP growth healthy at 20% or so

Continue reading “Impact Conference: Focus on the IP Ecosystem”


Known Unknowns and Unknown Unknowns

by Paul McLellan on 12-11-2013 at 3:18 pm

Donald Rumsfeld famously divided what we don't know into known unknowns and unknown unknowns. In a chip design, those unknown unknowns can bite you and leave you with a non-functional design, perhaps even with intermittent failures, which can be among the hardest problems to debug.

Chips are too big for any sort of full gate-level simulation, so a more flexible approach to X-propagation and detection is needed, using static timing analysis, RTL lint tools and various flavors of formal verification. There are two fundamental problems with analysis of unknowns: excessive pessimism and excessive optimism.

When an X signal is propagated, the algorithm can choose to set the value to 0 or 1 based on some heuristic, which leads to excessive optimism, since the signal may actually have the other value. But if X signals are all propagated as unknowns, there is excessive pessimism, and sometimes the whole design can degenerate into a mess of unknown signals. Furthermore, signal paths can re-converge, and in some circumstances the output is well defined whether the signal is 0 or 1, yet still shows up as a (false) unknown.
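The optimism/pessimism trade-off can be sketched with a tiny three-valued logic (purely illustrative; this is not how any commercial simulator is implemented):

```python
# Minimal three-valued (0 / 1 / 'X') logic. 'X' means unknown.
X = 'X'

def t_and(a, b):
    # AND is well defined whenever either input is a known 0, even if
    # the other is X; a naive "X in, X out" rule would be pessimistic here.
    if a == 0 or b == 0:
        return 0
    if a == X or b == X:
        return X
    return 1

def t_not(a):
    return X if a == X else 1 - a

# Pessimism on a re-convergent path: s AND (NOT s) is 0 for any real
# value of s, but value-by-value X propagation still reports X.
s = X
print(t_and(s, t_not(s)))  # X  (a false unknown; the true answer is 0)

# Optimism is the opposite failure: a simulator that resolved s to a
# concrete 0 or 1 would silently hide that s was never initialized.
```

Resolving the re-convergent case correctly requires reasoning about the path as a whole (what formal tools do), not just propagating values gate by gate.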

In some cases, static tools can handle all these issues satisfactorily, and when they can, they tend to be preferable to simulation. But in other cases, when validating RTL synthesis or final timing for example, the potential for optimism in the 'x' semantics of RTL simulation remains an issue that must be resolved dynamically.

Most teams validate 'x' propagation in gate-level simulation, but gate simulations are time-consuming, tedious to debug and overly pessimistic with respect to 'x' on re-convergent paths, which can result in simulation failures that do not represent real bugs. Gate-level simulation can also only be performed later in the design cycle, since one needs a gate-level netlist, meaning that this time-consuming methodology for resolving X-propagation issues often delays the critical path to tapeout.

Low power designs have additional x-related issues. When blocks are powered down or voltages are scaled then outputs may go unknown as a result. Almost all designs use some of these low power techniques these days. In mobile they are necessary for extending battery life and in tethered systems it is often not thermally manageable to have the whole chip powered up at once.

Synopsys’s VCS simulator has added technology called Xprop which eliminates ‘x’ optimism at RTL to enable correlation with hardware design behavior. Xprop can be used to reduce and potentially eliminate gate-level simulations for ‘x’ validation.

There is a recent webinar presented by Rebecca Lipon and Bruce Greene of Synopsys. The webinar covers:

  • Review the pros and cons of existing methodologies for x-validation
  • Explain how VCS Xprop eliminates ‘x’ optimism in advanced simulation flows (such as VCS-NLP)
  • Demonstrate how to debug ‘x’-related issues identified by VCS-NLP and Xprop using Verdi Power-Aware Debug

A link with more details and a replay of the webinar is here: 45 minutes plus 15 minutes of questions.




Designing a DDR3 System to Meet Timing

by Daniel Payne on 12-11-2013 at 12:00 pm

My first thought on hearing about HSPICE is IC simulation at the transistor level; however, it can also be used to simulate package or PCB interconnect very accurately, as in the PCB layout of a DDR3 system where timing is critical. I attended a webinar this morning, jointly presented by Zuken and Synopsys, entitled: Eliminate DDR3 Timing Errors with HSPICE and Zuken Constraint-based PCB Routing.

The two speakers were Griff Derryberry of Zuken, and Hany Elhak of Synopsys. The last time that I saw Griff he was attending an HSPICE SIG event in San Jose. I blogged about Hany back in February: Modeling TSV, IBIS-AMI and SERDES with HSPICE.


Griff Derryberry, Zuken


Hany Elhak, Synopsys

Designing a DDR3 system requires that you meet multi-gigabit-per-second data rates while taking signal integrity issues into account, because the waveforms begin to look like deformed sine waves. The critical timing measurements are shown below: tVAC, tDS (setup), tDH (hold), and tR (ringback).

The placement and routing of each DRAM package directly impacts performance, so you need to do matched routing on the differential clock pair and limit the mismatch between route lengths.
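As a sketch of what checking measurements against those constraints looks like (the limit and measured values below are invented for illustration and are not taken from the DDR3 specification or the webinar):

```python
# Hypothetical constraint check: tDS/tDH are the DDR3 setup/hold
# measurement names discussed above; all numbers here are made up.
def check_timing(measured_ps, limits_ps):
    """Return the names of constraints that fail (measured < required)."""
    return [name for name, req in limits_ps.items()
            if measured_ps.get(name, 0) < req]

limits = {"tDS": 75, "tDH": 100}    # assumed requirements, in ps
measured = {"tDS": 60, "tDH": 140}  # e.g. values from an HSPICE .measure
print(check_timing(measured, limits))  # ['tDS'] -> setup margin fails
```

In the real flow this comparison is what the constraint manager automates across every net in the DDR3 byte lanes.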

Live Demo

Griff presented a live demo to show the steps used in the design and PCB layout of a DDR3 system. He used the following five EDA tools:

  • Schematic Capture (Zuken)
  • Constraint Manager (Zuken)
  • Place and route editor (Zuken)
  • HSPICE circuit simulation (Synopsys)
  • Waveform results (Synopsys)

The schematic capture for the DDR3 system was done in a Zuken tool:

The memory controller is on the far left side, and the DRAM chips are in the middle. The first simulation was using an ideal transmission line, without actual PCB parasitics:

The eye diagram looks pretty good at this very early stage when viewing the HSPICE results in Custom WaveView, showing an aperture of 360ps:

The next step was to do a quick placement and routing, then get a netlist with interconnect effects:

Now when this is simulated and measured, there is an aperture of just 158ps, and several constraints are not met: tVAC, tDS, tDH and tR.

The final step was to change the routing, this time using constraints to drive it. The requirement of 1.270mm maximum skew was met at 1.15mm by adding meanders to the routing:

With the new routing a netlist was extracted, HSPICE run, and results compared in the viewer showing an aperture of 326ps and no timing violations:
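For a rough sense of what that residual length mismatch means in time, length can be converted to delay; the picoseconds-per-millimeter figure below is an assumed typical FR4 value, not a number from the webinar:

```python
# Converting routed length mismatch to timing skew. Propagation delay
# per mm depends on the stackup; ~6-7 ps/mm is typical for FR4
# (an assumption for illustration, not a value from the demo).
PS_PER_MM = 6.6  # assumed FR4 propagation delay

def skew_ps(length_mismatch_mm, ps_per_mm=PS_PER_MM):
    return length_mismatch_mm * ps_per_mm

# The demo met its 1.270 mm maximum skew budget at 1.15 mm:
print(f"{skew_ps(1.15):.1f} ps of residual skew")  # ~7.6 ps
```

A few picoseconds of residual skew is comfortably inside the 326ps aperture reported above, which is consistent with all the timing constraints passing.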

Q&A

Q: How do you control skew in the constraint manager?
A: We use the Constraint Manager for any of our nets, making a set.

Q: What simulation model did you use for traces on the board?
A: We know the traces, stacks, dielectrics, etc. It all gets extracted into the W element in HSPICE.

Q: What were the derating values?
A: The derating values are automatically loaded into Custom WaveView for you.

Summary
The methodology presented in this webinar shows that you can realize a DDR3 design and meet all of the tight timing requirements by using constraint management, controlling physical placement and routing, and performing signal integrity analysis. EDA tools from both Zuken and Synopsys were used. To see the complete 33-minute webinar, view it here.
