
Apple: "It’s The Sales Channel, Stupid!"
by Ed McKernan on 09-23-2013 at 1:00 pm

Apple's decision to launch the iPhone 5C as a "high priced" device, as opposed to a $300 entry-level mass-market consumer play, appears to be intertwined with a much broader strategic plan that is beginning to play out in the market. Many analysts who pushed for the low-cost device saw it as necessary to save the ecosystem, but were wary of the possible margin erosion that could damage Apple's earnings and brand image. Tim Cook, though, has put in play a contrarian channel strategy that looks for carriers to compete for customers with $0 down and lower-cost data plans, as well as instituting buyback programs on used iPhones that can more than pay for the $199 fee of a new device. The net effect will be higher sales to existing customers as they upgrade on a yearly basis, and a broadening of the developing-world user base as Apple resells the "pre-owned" devices in the $300 range that analysts were hoping for all along.

No sooner had DoCoMo signed up as the last major carrier to offer iPhones in Japan than it initiated a price war with KDDI and Softbank to win back customers lost over the past few years. In a recent article in the WSJ, DoCoMo was reported to be offering the iPhone 5S to customers for free on a two-year contract, thereby undercutting the usual price by $199. Meanwhile KDDI and Softbank are offering users $63 to $100 if they purchase the iPhone 5C with a two-year contract. As late as August, it was noted that DoCoMo had lost 145,000 customers to its competitors by not having the iPhone in its stores. Even more stunning is the fact that DoCoMo signed an agreement with Apple that guarantees a minimum of 40% of its overall device sales will be iPhones. In other words, Apple is nearly guaranteed to own over half the Japanese market at its own margins and at the expense of the carriers' margins. With China Mobile expected to come on board in November, the same scenario is likely to play out in a market that is more than twice as large as the US.

After the initial weeks of undersupply, it is quite likely that in all three of the world's major markets (US, Japan and China) a price war will break out among carriers that effectively moves the iPhone 5S price from $649 to $450 and the iPhone 5C from $549 to anywhere from $350 to $400. Moreover, the Apple program to funnel customers into its stores to receive rebates on used phones will have a further leveling effect on the market, as users receive cash for their phones and carriers lose the power to redirect customers to other subsidized phones. I expect Samsung to be most impacted as it loses the high- and mid-range markets, but all of Android will be affected.

The sales channel effect that Apple seeks to impose is very similar to how BMW and Mercedes were able to increase market share over the past 30 years while maintaining premium brand values. Leasing a new BMW was attractive to many first-time buyers because it took into account the high residual value that existed after three years, typically calculated at 50-60% of the list price. The next tier of buyers, of which there were many, could be counted on to happily buy a pre-owned vehicle at a price much more closely aligned with new American cars (substitute "Android phone" here).

In this case, though, Apple has gone one better in that it is able to retain 50% of the value of the phone after one year, which is incredible for any high-technology product given the steep Moore's Law cost curve. Perhaps the continuous upgrades of iOS help maintain the high value. If Apple pulls this off, then carriers risk losing their up-front $199, which makes up a significant amount of their profit. Then the giant sucking sound you hear will be Apple taking hold of the majority of the customer's wireless bill. Apple would then have successfully taken the place of Wintel, while carriers take on the role of the PC OEMs.

While every analyst spent the last year haranguing Apple about the need for a low-cost iPhone, the real story was the creation of competition in the carrier oligopoly ecosystem, which in turn will drive sales to much higher levels while maintaining margins and revenue growth. To paraphrase James Carville: it's the sales channel, stupid!




Develop A Complete System Prototype Using Vista VP
by Pawan Fangaria on 09-22-2013 at 6:00 pm

Yes, it means complete hardware and software integration, debugging, verification, and optimization of performance, power and all other operational aspects of an electronic system. In modern SoCs, several IPs, RTL blocks, software modules, firmware and so on sit together on a single chip, making it almost impossible to validate the whole system by traditional means. In such a scenario, nothing can be better than a complete platform that enables designers to connect these components together at various abstraction levels (even before their RTL implementation), optimize the architecture in terms of power and performance, validate the whole system, and produce a prototype that can guide the actual implementation without major problems, significantly improving the total turnaround time of system development.

It was a pleasant surprise coming across the Vista Virtual Prototyping solution, which is part of the Vista™ Platform from Mentor Graphics. I read Mentor's whitepaper, "Vista Virtual Prototyping", with rapt attention, as it gives a good level of detail about how the system works and how it can solve the big problem of optimizing power and performance and deciding on the overall architecture of the system. I am summarizing some of those points below, but would recommend reading the paper for the actual details; it's an interesting and engaging read.

The system has two main parts: i) a TLM (Transaction Level Modeling) based platform for creating the models of the virtual prototype, and ii) a platform for using the virtual prototype to integrate and validate software and firmware on the whole system. Here are some of the important and easy-to-use components of the platform:


[Vista Schematic Block Diagram Editor]

The block diagram editor provides a simple schematic creation by linking graphic symbols of various TLM models. It’s possible to view and modify the SystemC source code of any TLM instance. On each save operation, the structural SystemC code of the schematic view is automatically generated and saved.

After the design phase of the virtual prototype, it can be used by software and firmware engineers for software development as well as several verification tasks such as HW/SW analysis, HW/SW co-simulation and debugging. It supports UIs, application stacks, firmware and drivers running on top of operating systems such as Linux, Android and Nucleus, as well as a bare-metal mode. It also provides facilities for developing Linux kernels and fast booting of the OS, and it can be linked with physical devices of the host workstation such as terminals, displays, USB ports, Ethernet connections, etc.

Although Vista Virtual Prototype can be invoked from the command line, the platform provides an easy-to-use Sourcery CodeBench IDE environment which enables better control of the simulation with direct visibility and control of hardware objects (such as registers and values stored in them), tight HW/SW debugging, and file system interactions.


[Sourcery CodeBench HW/SW Debug GUI]

In the tightly coupled HW/SW debugging, hardware simulation can be controlled by setting a breakpoint on the access of a hardware object, resetting devices and/or cores, setting the simulation mode, etc. There are also viewing and analysis facilities for various entities on the display, such as the hierarchical path to the breakpoint, SystemC simulation time, CPU core tracing, the state of DMI (Direct Memory Interface), the mode of simulation, etc. Vista Virtual Prototyping supports TLM modeling at the LT (Loosely Timed) and AT (Approximately Timed) levels of hardware timing. Accordingly, there are two modes of simulation: Functional Mode, which corresponds to LT, is fast, and concentrates on integrating, debugging and validating the software; and Performance Mode, which corresponds to AT, is slower, and concentrates on analyzing and optimizing performance and power consumption. Users can select and switch between these timing modes at run time.
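
To make the LT/AT distinction concrete, here is a minimal, purely conceptual sketch in Python. The delay numbers and the transaction list are invented for illustration; this is not Vista's implementation, just the general idea of why a loosely-timed view simulates quickly while an approximately-timed view yields the timing data needed for performance and power analysis:

```python
# Conceptual sketch only (not Vista internals); delay numbers are invented.

def run(transactions, mode="LT"):
    """transactions: list of (initiator, nbytes). Returns total simulated time in ns."""
    time_ns = 0
    for initiator, nbytes in transactions:
        if mode == "LT":
            # Functional Mode / LT: the access completes immediately, so simulation
            # is fast but yields little performance or power detail.
            delay = 0
        else:
            # Performance Mode / AT: request, data and response phases each cost time,
            # which is slower to simulate but feeds performance and power analysis.
            delay = 5 + 2 * nbytes + 10
        time_ns += delay
    return time_ns

traffic = [("cpu", 64), ("dma", 256), ("gpu", 128)]
print("LT total:", run(traffic, "LT"), "ns;  AT total:", run(traffic, "AT"), "ns")
```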

The platform provides excellent viewing and analysis capabilities with reports in graphical, textual or tabular form with varying degrees of granularity and display control on each view as desired by the user. Analysis types like Throughput, Latency, Power (static and dynamic), Power Distribution, Bus Throughput, Contention on bus model (Address phase and Data phase), and Arbitration time (Address and Data) are easily and automatically performed by Vista Virtual Prototyping Analyzer.


[Power distribution comparison of two architectures]

Multiple simulation sessions can be compared to determine the effects of system configuration changes, protocol selection, and software changes on the design behaviour and its performance and power attributes.

The embedded Sourcery Analyzer performs software analyses such as CPU State and Statistics, File System Activity, Function Calls and Statistics, Lock Wait and Hold Times, Process and Thread State, Page Fault Rate, Memory Usage and Cache Hit/Miss Ratio.


[Native Unified Software IDE across Hardware Evolution]

Vista Virtual Prototyping is integrated into an embedded software design flow that spans validating and optimizing the software on an abstract simulation model, through emulation and FPGA prototyping at the pre-silicon stage, to the final product at the post-silicon stage. Users can easily change the underlying hardware model from Vista virtual prototypes to hardware prototypes to boards while staying within the same native Sourcery IDE.

Click here to download the whitepaper and learn more.



What Do Sports and NoC Have in Common?
by Randy Smith on 09-22-2013 at 11:00 am

As an Oakland Raiders season ticket holder, I attend as many Raiders home games as possible. If you have ever attended a live sporting event at a large stadium and you travelled by car, you are probably familiar with the traffic problems that occur at the end of the game when everyone wants to leave the stadium parking lot at the same time. At most venues you will find law enforcement on hand to control the traffic. On the surface, it seems their role is to help traffic merge more efficiently, since there are typically many lanes funneling down to fewer lanes over multiple stages to get to the exit and/or onto a major highway. But you should also understand that they are there to give higher priority to certain vehicles. Police and emergency vehicles have the highest priority, followed perhaps by limos and buses, and then the low-priority traffic (me in my car). This situation actually looks a lot like the traffic problems in a modern SoC.

As design content increases, we find that the data moving about our designs travels relatively longer distances and competes with other traffic in the system. To manage this, buffers and arbiters are inserted along the way. The most obvious merge pattern occurs on the path to request data from memory, typically off-chip DRAM. In a modern SoC this traffic flow is managed by the Network-on-Chip (NoC) IP. As the traffic cop, the NoC's responsibility is to make sure each priority of message is handled properly. Making sure that all requests are handled on time is called managing the Quality of Service (QoS). Not all NoC implementations do this in the same way, though, so it is important to understand the differences when selecting a NoC.

Many NoC architectures (e.g., ARM, Arteris) use a mechanism where the initiator of a message sets the message's priority (Initiator-Based QoS). Simply put, the traffic cop believes the priority you tell him you have and sends you through accordingly, without regard to what is happening downstream. If he has no higher-priority traffic, he simply sends all of the cars straight through. The problem is that this traffic cop doesn't have a radio. The flood of traffic he is sending forward may completely fill the lanes at the next intersection, and no one is telling him about it. So higher-priority traffic coming from another direction will not be able to get through the next intersection, because it is already full of lower-priority traffic blocking the way. The solution: use the police radio!

In addition to Initiator-Based QoS, Sonics has NoC IP that also supports Target-Based QoS. In this approach, the decisions on handling priorities are made closer to the target (or endpoint) of the path. Arbitration decisions are made with knowledge of all of the traffic headed to the target. In short, the traffic cops are talking to each other over the radio (i.e. the system will not select an arbiter that cannot make forward progress). The resulting decisions can then minimize latency and improve memory efficiency, and this allows dynamic allocation of the network bandwidth. This works hand-in-hand with other mechanisms employed by Sonics, including virtual channels (aka threads), non-blocking flow control, and other configuration parameters. Click the image below to see a more detailed description.
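
To see the difference in miniature, here is a toy Python sketch of the traffic-cop analogy. It models a single FIFO in front of the memory controller; the buffer depth, burst size and credit rule are invented, and it is purely illustrative, not Sonics' (or any vendor's) actual arbitration logic:

```python
from collections import deque

BUF_DEPTH = 4   # FIFO in front of the target; it drains one request per cycle

def simulate(max_low_in_buffer):
    """Release a burst of 8 low-priority requests at cycle 0 and one high-priority
    request at cycle 2; return the cycle at which the high-priority one is served."""
    buf = deque()
    low_pending, hi_queued = 8, False
    for cycle in range(100):
        # The target drains the head of the FIFO each cycle.
        if buf and buf.popleft() == "HI":
            return cycle
        # The high-priority request takes the next free slot once it has arrived.
        if cycle >= 2 and not hi_queued and len(buf) < BUF_DEPTH:
            buf.append("HI")
            hi_queued = True
        # The low-priority initiator pushes as much as its credit allows.
        while low_pending and len(buf) < min(BUF_DEPTH, max_low_in_buffer):
            buf.append("LO")
            low_pending -= 1
    return None

# Initiator-based: nothing stops low-priority traffic from filling the buffer.
print("initiator-based QoS: HI served at cycle", simulate(max_low_in_buffer=4))
# Target-aware: the target limits how deep low-priority traffic may fill its buffer.
print("target-aware QoS:    HI served at cycle", simulate(max_low_in_buffer=1))
```

In this toy run the high-priority request queues behind a buffer full of low-priority traffic in the first case (served at cycle 6) and sails through in the second (cycle 3); the credit limit plays the role of the police radio.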


I am glad the Oakland Police Department uses their radios. It helps me exit the area quickly without impacting ambulances or other higher-priority traffic. Apparently, Sonics can do the same for SoC designs. For more details, check out SonicsSX® and SonicsGN®.



Dassault’s Simulation Lifecycle Management
by Paul McLellan on 09-21-2013 at 4:29 pm

The first thing to realize about Dassault's Simulation Lifecycle Management platform is that in the non-IC world where Dassault primarily operates, simulation doesn't just mean functional verification or running Spice. It is anything during the design that produces analytical data. All of that data is important if you are strictly tracking whether a design meets its requirements. So yes, functional coverage and Spice runs, but also early timing data from synthesis, physical verification, timing and power data from design closure, and so on. All of this is what the automotive, aerospace and other worlds call "simulation." To them, in the mechanical CAD (MCAD) world, anything done on the computer, as opposed to on a machine tool, is simulation. Similarly, with that world view, anything done with a chip design other than taping it out and fabricating it is simulation.

So Simulation Lifecycle Management (SLM) is an integrated process management platform for semiconductor new product introduction. The big idea is to take the concepts and processes used in MCAD to design cars and planes and push them down into the semiconductor design process. In particular keeping track of pretty much everything.

In automotive, for example, there is ISO 26262 (wikipedia) which covers specification, design, implementation, integration, verification, validation, and production release. In practice this means that you need to focus in on traceability of requirements:

  • document all requirements
  • document everything that you do
  • document everything that you did to verify it
  • document the environment that you used to do that
  • keep track of requirement changes and document that they are still met

That's a lot of documenting, and the idea is to make almost all of it happen automatically as a byproduct of the design process. To do that, SLM needs to be the cockpit from which the design process is driven.

There are really two halves to the process. One is used primarily by management to define processes and keep track of the state of the design. The core management environment has three primary functions:

  • dynamic traceability: the heart of documenting what you did, how you know you did it, and the environment you used to do it
  • process management: knowledge that everything that has been done is, in fact, documented
  • work management: see results, keep track of where you are in metrics like functional coverage, percentage of paths meeting timing during timing closure and so on.

The other half, used primarily by the engineers actually doing the design, running tools and scripts and making judgment calls, is called the integrated process architecture. It also consists of three parts:

  • process capture: a way to standardize and accelerate design processes and flows for specific tasks
  • process execution: integrating the processes and flows into the load-leveling environment, server farms, clouds or whatever the execution environment(s) in use are
  • decision support: automation of what-if analysis for tasks like area-performance tradeoffs where many runs at many different points may need to be created and then the mass of data analyzed to select the best tradeoff

There is obviously a lot more detail depending on which task you drill down into. But to get a flavor of it, the above screen capture shows some of the requirements traceability. A requirement may generate many more sub-requirements that, in turn, generate tasks that are required to demonstrate that the requirement is met (while keeping track of everything needed to reproduce the run too). Again, don't forget that where the above diagram says "simulation" it might mean keeping track of how many DRC violations remain to be fixed.
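
As a rough illustration of what such a traceability record might capture, here is a small Python sketch. The class and field names are invented purely for illustration; Dassault's actual SLM data model is certainly much richer than this:

```python
from __future__ import annotations
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class VerificationRun:
    task: str           # e.g. "functional coverage" or "signoff DRC"
    environment: str    # tool versions, libraries and scripts, so the run can be reproduced
    result: str         # "pass", "fail" or a metric such as "coverage 97%"
    timestamp: datetime

@dataclass
class Requirement:
    req_id: str
    text: str
    children: list[Requirement] = field(default_factory=list)
    evidence: list[VerificationRun] = field(default_factory=list)

    def is_met(self) -> bool:
        """Met when every sub-requirement is met and, for a leaf requirement,
        at least one recorded verification run passed."""
        if self.children:
            return all(child.is_met() for child in self.children)
        return any(run.result == "pass" for run in self.evidence)

# Example: a top-level requirement with one sub-requirement backed by a passing run.
leaf = Requirement("R-101.1", "All paths pass STA at the slow corner",
                   evidence=[VerificationRun("STA", "toolX 2013.06, lib v1.2", "pass", datetime.now())])
top = Requirement("R-101", "Device meets setup timing at 1.2 GHz", children=[leaf])
print(top.is_met())   # True, and the stored environment records how to reproduce the evidence
```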

Subsequent blogs will go into more detail of just how SLM works in practice.



Designing Power Management ICs
by Paul McLellan on 09-20-2013 at 5:49 pm

With all the focus in design on SoCs in the latest sexy process (Hi-K Metal Gate! FinFETs!) it is easy to forget all the other chips that go into a system. When we say "system on a chip," there are actually very few systems that really get everything onto a single chip. One of the big areas that usually cannot go on the latest sexy process is the power management ICs that deliver very precise voltages to those SoCs, starting from typically noisy power coming out of whatever plugs into the wall outlet, or from battery power that isn't so noisy but changes its characteristics as the battery runs down. One of the big design requirements for power management ICs is to do their work without wasting much of the power. In your smartphone, for example, wasted power shows up as a hotter phone to hold and shorter battery life, neither of which the end user wants. It is especially important that the power management ICs consume only tiny amounts of power when the associated SoC is largely shut down, as your smartphone is for much of the time it spends in your pocket.

These power management ICs are usually built in processes like 0.13um or 0.18um, which sound really outdated to the SoC designer but are actually the state-of-the-art processes for a lot of analog, mixed-signal and power designs.

Not surprisingly, the design process for a power IC is very different from that of an SoC. It is an important market: higher growth than the overall semiconductor market, very competitive, and with an increasing focus on power efficiency, delivering almost all the power taken in as input in whatever form is required as output, and consuming almost none in the power management IC itself.

One of the leaders in power management ICs is Richtek Technology. They have a large portfolio of parts that deliver innovative power management solutions that improve the performance of consumer electronics, computers, and communications equipment. Founded in 1998, the Company is headquartered in Taiwan with additional offices in Asia, the U.S., and Europe.

K C Chang is the VP of Technology Development at Richtek. On October 3rd he will present a webinar, along with Andy Biddle of Synopsys, on some aspects of their design flow, their EDA tool selection criteria and some recent results. Andy will discuss the Galaxy Implementation Platform highlighting some of the recent capabilities that help power management IC designers bring highly efficient products to market earlier. They will present the key challenges and trends with latest power management integrated circuits and discuss recent EDA tool innovations to shorten development time and maximize quality of results.

If you are involved with the design of power management ICs then you should attend this webinar Power Management ICs – Efficient Design: A Richtek and Synopsys Perspective. The live webinar is on October 3rd at 10am Pacific Time. For more details and to register go here. The same link will work to view it after the event. It is scheduled to last 50 minutes plus Q&A.



Using OTP Memories To Keep SoC Power Down
by Paul McLellan on 09-20-2013 at 1:43 pm

Virtually all SoCs require one-time programmable (OTP) memory. Each SoC is different, of course, but two main uses are large memories for holding boot and programming code and small memories for holding encryption keys and trimming parameters, such as radio tuning information and so on.

There are alternatives to putting an OTP on-chip. The data can be held off-chip in some sort of programmable memory (or, perhaps, ROM). But this obviously has the disadvantage of requiring the cost of an extra chip. In smartphones it is not just the cost of another chip that is a problem, but the additional volume taken up by two chips. There is just not a lot of room inside a smartphone to fit everything.

Another alternative to OTP memory is flash memory. This has a big advantage, which is that a flash memory can be reprogrammed many times. However, this comes with a big disadvantage in terms of added process complexity and, thus, the cost of the silicon. Even when off-chip flash memory already exists, security reasons may make using it for holding critical data impractical and running code out of flash memory may, in fact, require data from the flash to be copied to SRAM on the chip, which is both an added cost and yet another increase of unwanted power.

OTP memory has the advantage that code can be executed in place and does not need to be copied from external memory into on-chip SRAM. It is fast enough, and low-power enough, that copying data out to SRAM is unnecessary.

The Sidense one-transistor OTP (1T-OTP) architecture is especially area efficient since it uses a single transistor per bit cell. Furthermore, it does not depend on charge storage and so once programmed, it cannot be un-programmed by environmental or electrical upsets. The patented Sidense 1T-Fuse™ antifuse technology works by permanently rupturing the gate-oxide under the bit-cell’s storage transistor in a controlled fashion, obviously something irreversible.

Another big advantage of the Sidense antifuse approach is that it uses an unmodified digital process. No additional masks or process steps are required, so nothing is added to the wafer manufacturing cost. The per-chip cost rises due to the area occupied by the OTP, but since the 1T-OTP macros are very area-efficient this increase is usually very small. Additionally if the 1T-OTP is programmed at the tester, the increase in test time will also result in some extra cost.

The Sidense 1T-OTP memory uses a low read voltage, which further keeps the power of the memory down. The Sidense memory does require some non-standard voltages internally, especially during programming, but these are created using embedded charge pumps and are hidden from the user. The OTP memory can simply be hooked up to the chip’s power supply network just like any other memory block.

Another option with the Sidense solution, to lower the power even more, is to use differential bit storage. This technique uses two transistors to represent each bit of information: one storing a 0 and one storing a 1. This makes sensing the state simpler, and as a result the voltage required for the memory can be lower still, along with the associated power. Obviously this comes at the cost of an increase in area, since the number of transistors required to represent a given amount of data within the memory macro is doubled.
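
A back-of-the-envelope sketch of why the pair-wise comparison helps, with all voltage numbers invented purely for illustration (they are not Sidense's specifications):

```python
# Each data bit uses two antifuse cells programmed to opposite states; the sense
# amp compares the pair instead of comparing one cell against a fixed reference.

def sense_single_ended(cell_mv, ref_mv=200):
    return cell_mv > ref_mv            # needs an absolute reference level

def sense_differential(true_mv, comp_mv):
    return true_mv > comp_mv           # only the relative difference matters

programmed, blank = 350, 60            # made-up nominal read levels for a '1' and a '0' cell
scale = 0.4                            # lowering the read voltage scales both levels down

print(sense_single_ended(programmed * scale))                  # False: misread against the fixed reference
print(sense_differential(programmed * scale, blank * scale))   # True: the pair still resolves correctly
```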

Read the white paper Using Sidense 1T-OTP in Power-sensitive Applications here.



Who is Blogging at Cadence?
by Daniel Payne on 09-20-2013 at 1:31 pm

As a blogger in the EDA industry I get to write every week; however, I also end up reading every blog on SemiWiki plus multiple other sites to keep current on what's happening in our business. I thought it would be informative to look at Cadence Design Systems and how they are using blogging to talk not just about their own EDA tools but about our industry as well.


Continue reading “Who is Blogging at Cadence?”



Process Variation is a Yield Killer!
by Daniel Nenni on 09-20-2013 at 11:00 am

With the insatiable wafer appetites of the fabless semiconductor companies in the mobile space, yield has never been more critical. The result is better EDA tools every year, and this blog highlights one of the many examples. It has been a pleasure writing about Solido Design Automation and seeing them succeed amongst the foundries and their top customers. Here is a Q&A with Amit Gupta, president & CEO of Solido, to get more details on the new Solido Variation Designer 3.0 release:

Q: What is Solido Variation Designer used for?

Solido Variation Designer is variation analysis and design software for custom ICs. Our users run Variation Designer to achieve maximum yield and performance on their designs. It boosts SPICE simulator efficiency while increasing design coverage.

Q: Who are the customers of Solido Variation Designer?

Variation Designer is being used by the world’s top semiconductor companies and foundries to design memory, standard cell, analog/RF and custom digital designs at leading design nodes including TSMC, GLOBALFOUNDRIES and Samsung 130nm, 90nm, 65nm, 40nm, 28nm, 20nm, 16nm and 14nm.

Q: What specific customer challenges does Solido Variation Designer 3.0 address?

Variation Designer 3.0 is based on user input from a wide range of semiconductor companies designing anywhere from 130nm to the most advanced process nodes. In general, we are seeing our customers increasingly being hit by variation issues resulting in sub-optimal performance and yield compared to what the manufacturing process allows for. Variation Designer 3.0 gives our users the ability to address the following:

  • PVT corner design. PVT variation includes process (e.g. FF, SS, FS, SF, TT model corners that can be device specific), voltage, temperature, load and parasitic-based variation. When taking all the combinations of these parameters, our customers end up having 1000s or 10,000s of corner combinations to simulate. The challenge is that simulating all the corner combinations is accurate but very slow, while guessing which corners to simulate is faster but inaccurate.

Our customers use Solido Variation Designer Fast PVT to automatically figure out the worst-case corners while simulating only a fraction of the corner combinations. This results in far fewer simulations than brute-force PVT corner analysis without compromising accuracy.
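
For a feel of the numbers, here is a generic Python sketch: it enumerates a toy corner list, then runs a simple greedy neighborhood search instead of brute force. The delay model and the search heuristic are invented for illustration (and only work nicely because the toy model is well behaved); this is not Solido's Fast PVT algorithm:

```python
import itertools, random

process     = ["TT", "FF", "SS", "FS", "SF"]
voltage     = [0.9, 1.0, 1.1]
temperature = [-40, 25, 125]
load        = [0.5, 1.0, 2.0]
parasitics  = ["cbest", "typ", "cworst"]
axes = [process, voltage, temperature, load, parasitics]

corners = list(itertools.product(*axes))
print(len(corners), "corner combinations")   # 405 in this toy list; real lists run into the 1000s

def spice_delay(corner):
    """Stand-in for a SPICE run; in reality each call costs minutes to hours."""
    p, v, t, l, c = corner
    return (100 + {"TT": 0, "FF": -5, "SS": 8, "FS": 3, "SF": 3}[p]
            + 40 * (1.1 - v) + 0.05 * t + 10 * l
            + {"cbest": -3, "typ": 0, "cworst": 4}[c])

# Greedy alternative to brute force: simulate a random seed set, then keep
# exploring the one-axis neighbours of the worst corner found so far.
random.seed(0)
simulated = {c: spice_delay(c) for c in random.sample(corners, 20)}
while True:
    worst = max(simulated, key=simulated.get)
    new = [tuple(v if i == axis else worst[i] for i in range(len(axes)))
           for axis, choices in enumerate(axes) for v in choices]
    new = [c for c in new if c not in simulated]
    if not new:
        break
    simulated.update({c: spice_delay(c) for c in new})

print("true worst corner:", max(corners, key=spice_delay))
print("greedy search    :", max(simulated, key=simulated.get),
      "found with", len(simulated), "of", len(corners), "simulations")
```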

  • 3-sigma Monte Carlo design. The process model corners that foundries like TSMC, GLOBALFOUNDRIES and Samsung release in their PDKs are not well suited to individual designs. They are either overly conservative, leading to overdesign, or overly optimistic, leading to yield loss. Consequently, foundries are now releasing local and global statistical variation models so that designers can run Monte Carlo analysis on their designs. However, brute-force Monte Carlo SPICE simulation is slow, inefficient and time consuming.

Our customers use Solido Variation Designer Fast Monte Carlo to cut down the number of simulations to achieve 3-sigma design without compromising accuracy, and to extract design specific 3-sigma corners to design to.
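
A generic sketch of what 3-sigma Monte Carlo verification looks like, using an invented stand-in for the SPICE measurement and invented variation numbers (this is not Solido's Fast Monte Carlo algorithm):

```python
import random, statistics

def simulate_offset_mv(rng):
    """Stand-in for a SPICE run: comparator offset (mV) under global + local variation."""
    return rng.gauss(0, 2.0) + rng.gauss(0, 3.0)

rng = random.Random(1)
samples = [simulate_offset_mv(rng) for _ in range(3000)]
mu, sigma = statistics.mean(samples), statistics.stdev(samples)
spec_mv = 12.0                                   # hypothetical spec limit

print(f"mean = {mu:.2f} mV, sigma = {sigma:.2f} mV, 3-sigma worst case = {mu + 3 * sigma:.2f} mV")
print("samples beyond spec out of 3000:", sum(abs(x) > spec_mv for x in samples))
# The parameter set that lands the output at mu + 3*sigma is the design-specific
# "3-sigma corner" that can then be reused instead of rerunning full Monte Carlo.
```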

  • High-sigma Monte Carlo design. To design to 6-sigma, 5 billion Monte Carlo sample simulations would be needed, which would take years and is therefore impractical. Alternatively, designers are designing to 3-sigma and extrapolating to 6-sigma, but this methodology is inaccurate. Some companies have developed internal importance sampling techniques, but these don’t scale and suffer from accuracy issues.

Our customers use Solido Variation Designer High-Sigma Monte Carlo to get the accuracy of 5 billion Monte Carlo samples in only a few thousand simulations. This is a dramatic reduction in SPICE simulations and improvement in design coverage. Solido High-Sigma Monte Carlo is fast, accurate, scalable and verifiable. Example designs being run include memory bit cells, memory sense amps, memory columns/sub-arrays, analog designs (e.g. SerDes, data converters), and standard cell library designs (e.g. flip-flops).
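
The general idea behind most high-sigma approaches can be illustrated with textbook importance sampling: bias the sampling distribution toward the failure region, then re-weight each sample so the estimate stays unbiased. The sketch below is that textbook illustration on a single standardized variable, not Solido's High-Sigma Monte Carlo:

```python
import math, random

random.seed(0)
FAIL_AT = 6.0        # failure when the standardised variation exceeds 6 sigma
SHIFT   = 6.0        # sample from N(SHIFT, 1) instead of N(0, 1)
N       = 5000

def normal_pdf(x, mu=0.0):
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2.0 * math.pi)

estimate = 0.0
for _ in range(N):
    x = random.gauss(SHIFT, 1.0)                           # biased draw near the failure region
    if x > FAIL_AT:                                        # would the design fail here?
        estimate += normal_pdf(x) / normal_pdf(x, SHIFT)   # likelihood-ratio weight
estimate /= N

exact = 0.5 * math.erfc(FAIL_AT / math.sqrt(2.0))          # true N(0,1) tail probability, ~1e-9
print(f"importance-sampling estimate: {estimate:.2e}   exact tail: {exact:.2e}")
# Plain Monte Carlo would need on the order of a billion samples to observe even a
# handful of failures this far out in the tail.
```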

  • Variation debug. If the design is failing PVT corners, 3-sigma or 6-sigma Monte Carlo verification steps, designers need to identify the design sensitivities to variation and figure out how to fix the design, making it robust to variation. Manually changing the device sizes and running PVT or Monte Carlo analysis to check whether the changes fix the design is tedious and time consuming.

Our customers use Solido Variation Designer DesignSense to automatically identify design sensitivities to variation, which enables them to quickly make necessary design changes and verify that it’s meeting specifications.
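
Conceptually, this kind of sensitivity ranking can be done with simple finite differences: perturb one device at a time, rerun the measurement, and rank the deltas. The sketch below uses an invented toy model of a comparator offset; it illustrates the concept only and is not the DesignSense algorithm:

```python
def comparator_offset(widths_um):
    """Stand-in for a SPICE measurement as a function of device widths (toy model, mV)."""
    return 30.0 / widths_um["M1_M2"] + 8.0 / widths_um["M3_M4"] + 1.5 / widths_um["M5"]

nominal = {"M1_M2": 2.0, "M3_M4": 1.0, "M5": 4.0}
base = comparator_offset(nominal)

sensitivities = {}
for device, w in nominal.items():
    perturbed = dict(nominal, **{device: w * 1.05})        # +5% width on one device at a time
    sensitivities[device] = comparator_offset(perturbed) - base

for device, delta in sorted(sensitivities.items(), key=lambda kv: abs(kv[1]), reverse=True):
    print(f"{device}: {delta:+.2f} mV per +5% width")
# The devices at the top of the ranking are the ones worth resizing first.
```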

  • Cell optimization. As with variation debug, where the design is failing PVT corner, 3-sigma or 6-sigma Monte Carlo verification steps, or is simply not optimized against spec, manually changing device sizes and running PVT or Monte Carlo analysis to check whether the design is optimal is also tedious and time consuming.

Our customers use Solido Variation Designer Cell Optimizer to automatically vary device sizes within any design and PDK sizing constraints, to optimize the design against PVT and 3-sigma to 6-sigma Monte Carlo variation.


Q: How have your customers deployed Solido Variation Designer in their production flows?

Solido Variation Designer has been established in the signoff flow of most world leading semiconductor companies and foundries.

Users input designs into Solido Variation Designer through the integration we have with Cadence Virtuoso Analog Design Environment or simply by feeding it a netlist. Variation Designer then automatically iterates with the user’s SPICE simulator (we integrate with Cadence Spectre/APS, Synopsys HSPICE/XA/HSIM/FineSim, BDA AFS, Mentor Graphics Eldo and Agilent GoldenGate) to run Fast PVT, Fast Monte Carlo, High-Sigma Monte Carlo, DesignSense and Cell Optimizer tasks. We also support all PDKs that contain process corner or Monte Carlo variation data, and we are qualified by various foundries like TSMC and GLOBALFOUNDRIES.

Some example benefits our customers have seen after adopting Solido Variation Designer:

| Solido Variation Designer App | Customer Design | Customer Challenge | Benefit of Adopting Solido Variation Designer |
|---|---|---|---|
| Fast PVT | 28nm DAC | 1215 corners take too long to run; guessing which are worst-case is error prone; no standardized methodology. | Correctly found worst-case corners for all outputs in only 296 simulations (4.1x simulation reduction); standardized on Solido Fast PVT methodology. |
| Fast Monte Carlo | 20nm folded cascode amplifier | 3000 Monte Carlo simulations take too long to run; running only 100 Monte Carlo simulations doesn't verify to 3 sigma; no standardized methodology. | Verified to 3 sigma in only 300 simulations (10x simulation reduction); standardized on Solido Fast Monte Carlo methodology. |
| High-Sigma Monte Carlo | 16nm memory column | Verifying to 6 sigma would take 5 billion simulations, which is impractical; extrapolating to 6 sigma is inaccurate. | Verified to 6 sigma in only 4500 simulations; run was fast, accurate, scalable and verifiable; standardized on Solido High-Sigma Monte Carlo methodology. |
| DesignSense | 40nm comparator | Determining device sensitivities to PVT corner and statistical variation is difficult; no standardized methodology. | Automatically determined device sensitivities to variation, making the design robust to variation; standardized on Solido DesignSense methodology. |
| Cell Optimizer | 28nm flip-flop | Optimizing specifications across PVT and statistical variation is time consuming and uses too many simulations. | 24.1% improvement in flip-flop setup time performance in only 2.75 minutes; standardized on Solido Cell Optimizer methodology. |

Q: What’s new in Solido Variation Designer 3.0?
Lots, this is Solido’s biggest release ever. Highlights include:

  • Significantly increased capacity
  • New features, enhancements and performance improvements in every application
  • Re-engineered GUI and full command-line interface for all apps
  • Expanded simulator support and third-party tool integration

Q: What detailed features did you add to Solido Variation Designer 3.0?
Solido Variation Designer 3.0 Fast PVT enhancements:

  • Increased capacity by 10x
  • Support for custom string-based variables in the netlist
  • 2D scatterplots
  • Interactive impacts

Solido Variation Designer 3.0 Fast Monte Carlo enhancements:

  • Faster 3-sigma verification with density-based stopping
  • “Simulate-and-predict” mode for up to 20x faster 3-sigma runtimes
  • Improved accuracy and robustness of density estimates
  • Enhanced results visualization when running multiple corners
  • Verified capacity increased by 10x for both number of devices and number of samples

Solido Variation Designer 3.0 High-Sigma Monte Carlo enhancements:

  • 20x faster algorithms for large designs
  • 10x increase in variable capacity
  • Process variable impacts
  • Support for binary and multi-modal output measurements
  • Support for high-sigma global+local analysis

Solido Variation Designer 3.0 Cell Optimizer enhancements:

  • Improved, faster cell optimization algorithm
  • Support for Spectre netlists

Solido Variation Designer 3.0 integration enhancements:

  • Mixed-language netlist support
  • Spectre netlist-in support
  • Support for Agilent GoldenGate
  • Native Mentor Graphics Eldo support
  • Runtime Design Automation NetworkComputer support
  • Faster, more scalable, and more robust Cadence Virtuoso ADE integration

Solido Variation Designer 3.0 general enhancements:

  • Up to 100x faster load times on large circuits/netlists
  • Way better performance with large Cadence designs; especially extracted views
  • Re-engineered, even more responsive GUI
  • New command-line interface for all apps
  • New report generation system with customizable templates
  • Re-designed, more robust netlist parser
  • Updated and more comprehensive documentation
  • TSMC 16nm / TMI2 support
  • Hundreds of minor quality, reliability, usability and performance improvements

Q: How can our readers get more information?
You can visit our website at www.solidodesign.com for more information. You can also contact us at info@solidodesign.com for an in-person or WebEx demo.




Apple’s 64 Bit Plan to Finish Off Android
by Ed McKernan on 09-20-2013 at 10:00 am

Many people are underestimating the speed and the magnitude of the transition that is about to take place with the tandem rollout of iOS 7 and the 64-bit A7 processor. While the former provides a nice visual upgrade to the entire ecosystem, the latter will be used to collect accolades and drive application development that will result in a complete, robust 64-bit environment for all Apple users by next Labor Day at the latest. When this transition completes, what happens to mobile, 32-bit computing? The likely guess is that it withers, taking many players with it. At this moment, roadmaps across the globe are being torn up as development teams must aim for a more aggressive marketplace with not much time to execute. The clock is ticking until Apple goes fully 64-bit by this time next year.

As a measure of comparison, the 286 to 386 and the 32-bit Pentium class to 64-bit x86 Xeon server processor transitions took roughly 5 years in hardware terms alone. When Andy Grove unleashed the 386 Red X advertising campaign in 1990 there were only a handful of apps that ran in 32-bit mode. The folks in Redmond didn't get around to a full-blown 32-bit operating system until the launch of Windows NT in July 1996. It makes one appreciate what Apple is trying to accomplish over this coming year with a user base and software community that is over an order of magnitude larger. How will Google, Samsung, Microsoft, Intel and others respond to this coordinated drive to leave behind all that is 32-bit? All of the above-named companies certainly have money to stay in the game for the long run. It is the smaller ARM mobile chip vendors that are most at risk. It is possible to envision a scenario where all the players scatter to different corners of the market. Microsoft, for one, will likely get closer to Intel in order to save its corporate business, but in doing so may underfund Nokia for a successful consumer push. Google could decide to make peace with Apple on smartphones and concentrate on its wearables while letting Android lag in forked 32-bit land, which may be fine for Amazon, but what about Samsung and the other China players?

The common threat that Apple imposes with its 64-bit processor and iOS platform has to have disparate mobile players considering alliances so that they can close the technology and capability gap by the fall of 2014. Apple's rollout was intended to shock its competitors, with the primary goal of testing how fragile the Android market is when the future is incremental. The mobile TAM could very likely consolidate around Apple at the high end and China clones at the low end, with Samsung stretched trying to serve all. Without a concerted alliance with Google, can Samsung really force Apple into single-digit market share like Microsoft imposed on them in the 1990s? It is doubtful.

One could paint a scenario in the late summer of 2014 where Android phones are relegated to the sub-$100 space along with 7” tablets. Larger screens and improved cameras would not be able to overcome the "32-bit" processor and allow pricing to even approach whatever becomes the equivalent of the iPhone 5C next year. This is partially subjective, but it is based on what I observed in the PC market in the 1990s. Intel consistently obsoleted its processors within a matter of months so that competitors could not gain a profitable foothold by offering something equivalent. The one caveat was during periods of allocation. Thus AMD and the cloners were stuck selling processors for an average price of $60-$70 while Intel enjoyed prices that were on average 3-4 times higher.

The sucking sound you will hear is Apple leveraging technology with a branding campaign that will create separation in the marketplace. Expect to see Apple impose a price floor that is much higher than the ceiling of competitors. You will know when capitulation begins when the subject invariably turns to mobile companies spending more time focusing on the future promise of tens of billions of IoT devices.



Interface PHY IP supporting Mobile Applications on TSMC 20nm? Available!
by Eric Esteve on 09-20-2013 at 8:42 am

If we check the many articles published daily on SemiWiki, I am sure that Moore's Law has been mentioned every single day. There is a good reason why we constantly write about new technologies and advanced features like FinFET, FD-SOI, 450 mm wafers or double patterning: all of these are new challenges that the semiconductor industry will have to take up. As designs migrate to smaller process nodes, such as 20-nm and 16-nm FinFET, the technology challenges to extend Moore's Law become increasingly complex. TSMC has implemented double patterning mask technology on its 20SoC process, utilizing two photo masks, each with half of a pattern, to enable printing of images below the node's minimum spacing design rules. We know in 2013 that the most wonderful technology would be useless if it were not supported by IP vendors developing the "LEGO" blocks you need to successfully design an SoC. Those who read my articles know how crucial it is for the semiconductor industry to benefit from high-quality PHY IP supporting high-speed serial interface protocols such as USB, DDR, PCI Express®, and MIPI®.

The picture above comes from Synopsys' TSMC 20-nm test chip characterization of the PHY (here a USB 2.0 PHY), and is representative of the quality of the design. We call it an eye diagram: if the signal generated by the PHY is well built by the on-chip circuitry, then the eye will be wide open, so you can insert the red mask within it, and that is the guarantee that the interface will work as specified (at 480 Mbps in this case). Porting an existing PHY design validated on an older technology node (larger gate length and different design rules) is absolutely not straightforward; it may happen that a complete redesign is the shorter path. Synopsys' development of DesignWare IP and interface PHY at 20-nm focused on minimizing yield and manufacturability issues while adhering to the standards' specifications, as well as TSMC's advanced layout and design rules for manufacturability with double patterning technology. The result of these efforts can be seen in this eye diagram for the PCIe 2.0 PHY IP:
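
To make the idea of "inserting the red mask" concrete, here is a minimal sketch of what an eye-mask check does: fold the measured waveform onto one unit interval and confirm that no sample lands inside the keep-out region. The diamond mask dimensions below are invented for illustration and are not the USB 2.0 or PCIe template values:

```python
UI_PS = 2083.0   # one unit interval at 480 Mbps, in picoseconds

def inside_mask(t_ps, v, half_width_ps=600.0, half_height_v=0.2):
    """Diamond-shaped keep-out region centred mid-eye at v = 0 (dimensions invented)."""
    t_centre = abs((t_ps % UI_PS) - UI_PS / 2.0)
    return (t_centre / half_width_ps + abs(v) / half_height_v) < 1.0

def eye_passes(samples):
    """samples: iterable of (time_ps, differential_voltage) pairs from the waveform."""
    return not any(inside_mask(t, v) for t, v in samples)

# eye_passes(measured_points) == True means the mask fits inside the measured eye.
```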

A very interesting point is made by John Koeter: “As the leading provider of physical IP with more than 80 test chip tape-outs in 20- and 28-nm, Synopsys is focused on developing IP in the most advanced process nodes to help designers take full advantage of the processes speed and power characteristics while implementing high-quality, proven IP,” said John Koeter, vice president of marketing for IP and systems at Synopsys. “By offering a broad portfolio of IP for the 20-nm process, Synopsys enables designers to more easily meet their goals of creating differentiated products with less risk and faster time to volume production, while also reducing the risks associated with moving to the 16-nm FinFET process.”

What type of applications will be targeted by SoCs designed in TSMC 20 nm, and later 16-nm FinFET, processes? Most probably mobile, smartphone or media tablet, for three main reasons: chip cost (area), performance and power consumption. Synopsys claims that TSMC's 20SoC process enables designers to reduce power consumption by up to 25 percent or increase performance by 30 percent. Mobile applications are known to be MIPI friendly, which is why the last eye diagram is for the MIPI D-PHY:

Availability
The Synopsys DesignWare USB 2.0 PHY, USB 3.0 PHY, DDR4 multiPHY, PCI Express 2.0 PHY, and MIPI D-PHY for the TSMC 20SoC process are available now; just click to get more information about silicon-proven Synopsys DesignWare USB, DDR, PCI Express and MIPI PHY IP.

If you want to see an eye diagram that is an analog designer's nightmare, this one is a good example:

Eric Esteve from IPNEST
