
Semiconductor market negative in 2012
by Bill Jewell on 11-15-2012 at 12:30 am

September WSTS data shows the 3Q 2012 semiconductor market increased 1.8% from 2Q 2012. The year 2012 semiconductor market will certainly show a decline. 4Q 2012 would need to grow 11% to result in positive growth for 2012. The outlook for key semiconductor companies points to a 4Q 2012 roughly flat with 3Q 2012. The table below shows the midpoint of revenue guidance for 4Q 2012 versus 3Q 2012.


Guidance for 4Q 2012 varies widely, from double-digit declines at TI and Infineon to a 20% increase at Qualcomm. Most companies expect a flat-to-down 4Q 2012, with generally weak end-market demand apart from the possible exception of mobile communications. We at Semiconductor Intelligence are forecasting the 4Q 2012 semiconductor market will be up 0.5% from 3Q 2012, resulting in a 2.5% decline for the year 2012.

What is the outlook for 2013? The overall economic outlook is uncertain. The latest forecast from the International Monetary Fund (IMF) calls for worldwide GDP growth of 3.6% in 2013, a slight improvement from 3.3% in 2012. The advanced economies are expected to grow 1.5% in 2013, up from 1.3% in 2012. The IMF expects the Euro Zone to begin a slow recovery from the downturn caused by the debt crisis. Emerging and developing economies will be the major growth drivers with 5.6% growth in 2013. China GDP growth should increase slightly in 2013 after slowing in 2011 and 2012.

We at Semiconductor Intelligence have developed a forecast model based on GDP. Since semiconductors are at the low end of the electronics food chain, the market tends to follow the acceleration or deceleration of GDP growth rather than the rate of GDP growth. The 0.3 percentage point acceleration in GDP growth from 2012 to 2013 indicates 2013 semiconductor market growth of around 8% to 10%.
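To make the model concrete, here is a minimal sketch of a GDP-acceleration forecast in Python. The article gives only one calibration point (a 0.3 percentage point acceleration mapping to roughly 8% to 10% semiconductor growth), so the `baseline` and `slope` values below are hypothetical illustrations, not Semiconductor Intelligence's actual model parameters.

```python
# Illustrative sketch of a GDP-acceleration forecast model.
# The baseline and slope are hypothetical values chosen so that a
# 0.3 pt acceleration lands in the article's 8%-10% range.

def semiconductor_growth(gdp_prev, gdp_next, baseline=6.0, slope=10.0):
    """Map the change (acceleration) in world GDP growth, in percentage
    points, to semiconductor market growth, in percent."""
    acceleration = gdp_next - gdp_prev
    return baseline + slope * acceleration

# IMF figures quoted above: 3.3% world GDP growth in 2012, 3.6% in 2013.
forecast = semiconductor_growth(3.3, 3.6)
print(f"2013 semiconductor growth estimate: {forecast:.1f}%")
```

With these illustrative parameters the 0.3 point acceleration yields a 9% estimate, consistent with the forecast quoted below.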

The electronics market outlook is mixed. Business and consumer spending on PCs is weak. However smartphones and media tablets are continuing to show healthy growth. Inventory adjustments are being made in the semiconductor supply chain. Thus the semiconductor market should turn up quickly when end demand picks up. Based on these factors, we at Semiconductor Intelligence are forecasting 9% growth in the semiconductor market in 2013. The chart below compares our forecast with other recent forecasts.


Creating Plug-and-Play IP Networks in Large SoCs with IEEE P1687 (IJTAG)
by glforte on 11-14-2012 at 2:15 pm

Until now, integrating and testing the IP blocks used in large SoCs has been a manual, time-consuming design effort. A new standard for IP plug-and-play integration, IEEE P1687 (or “IJTAG”), is emerging to simplify these tasks, and EDA tools are emerging to support it. Last week Mentor announced Tessent IJTAG, which simplifies connecting any number of IJTAG-compliant IP blocks into an integrated, hierarchical network, allowing access to them from a single point. IJTAG will save engineering time by automating design tasks, and can potentially reduce the length of the aggregated test sequence for all the IP blocks in an SoC, which translates into reduced test time and smaller tester memory requirements. For more information on the standard and Tessent IJTAG, click here.

BACKGROUND
In the world of chip design, there are de facto standards for what information must be included in the library package delivered with a reusable design block, also referred to as an “IP” (i.e., a block of intellectual property). Unfortunately, most IP packages include no information on how to communicate test features or other commands to the block, nor any predefined way to integrate specific test features into the overall design. Some IPs, like embedded processors and SerDes I/O interfaces, may come with built-in self-test (BIST) circuitry, but hardly any IP you buy off the shelf comes with a complete, ready-to-go test solution. In addition, there has been no common protocol for interfacing to these blocks, so each block requires additional learning and customization to communicate with it.

Consequently, integrating, testing, and communicating with the IP blocks used in large-scale semiconductor devices has been, to a large extent, a manual design effort. Our customers tell us that integrating hundreds of IP blocks into a comprehensive test plan for a large SoC is becoming vastly more difficult and time consuming. Many of these IP blocks are stand-alone discrete functions such as clock controllers, power management units, thermal sensors, and more, and they must be controlled during testing. Challenges involved in integrating these IP blocks include frequent changes to the design, test engineers being unfamiliar with the blocks, an increasing number and diversity of IP blocks, and complicated operational and test setup sequences to manage.


Figure 1. The IEEE P1687 (IJTAG) standard defines two languages, ICL and PDL, that allow the interfaces to a reusable block of IP to be defined in a common manner that enables plug-and-play integration. Tessent IJTAG provides automation support enabling operations on any IP block in a design to be initiated from a single top-level access point.

A NEW STANDARD FOR IP INTEGRATION AND TESTING
To address these IP integration and testing challenges, the IEEE P1687 committee has proposed the IJTAG standard, which builds on the popular IEEE 1149.1 (JTAG) board and card-level test access standard. IJTAG provides a set of uniform methods to describe and access chip internal IP blocks, creating a basis for design, test, or system engineers to easily reuse IP in their products. IJTAG is being defined by a broad coalition of IP vendors, IP users, major ATE companies, and all three of the largest EDA vendors, and is expected to be rapidly and widely adopted by the semiconductor industry.

In the context of IJTAG, the IPs we are referring to are pre-defined blocks of analog, digital, or mixed-signal circuitry performing particular functions, such as a clock generator, an interface to an external measurement probe, a radio tuner, an analog signal converter, a digital signal processor, or one of hundreds of other functions. These so-called “embedded instruments” may be internally designed circuit blocks reused across a product line, or third-party IP purchased from an external source. Typically, they have a digital control interface, such as a command register, associated commands and synchronization signals, and a data path. To use the instrument functionality, the designer needs to ensure that defined commands (bit sequences) are sent to the instrument control register in order to reset or initialize it, set a mode, monitor a state, perform a debug action, or elicit any other behavior within the instrument’s repertoire. The interface for communicating these control sequences, and the sequences themselves, are defined by suppliers in a wide variety of styles with little commonality.

So designers need to create unique logic to integrate each embedded instrument (IP block) into an overall design. For SoCs that often have literally hundreds of instruments from a variety of sources with disparate interface styles, this is a major undertaking that requires lots of engineering time.

This is the problem that IJTAG is designed to solve by providing a method for plug-and-play IP integration enabling communication to all the instruments (IP blocks) from a single test access point (TAP). IJTAG describes a standardized way to define the interface and pattern sequences used by an instrument, and to retarget those commands to any point within an IJTAG-compliant hierarchy. Mentor’s new product, Tessent IJTAG, provides the automation support to implement IJTAG in large, complex designs. Tessent IJTAG can be used to integrate embedded instruments at the die, package, board or system level, providing a single access point at any desired level.

Overview of IEEE P1687
A reusable IP block with a compliant IJTAG digital interface is called an embedded “instrument.” The IJTAG description of an embedded instrument includes its I/O ports and register details as well as the specific sequences needed to control and access the instrument. Both parts are described through P1687 languages defined by the standard: Instrument Connectivity Language (ICL), and Procedural Description Language (PDL).

ICL provides an abstract, hierarchical definition of the signals and registers necessary to control the instrument. ICL does not include details of the inner workings of instruments, but only the I/O ports, registers, bit fields, and enumerations of data elements that are necessary to carry out instrument operations. ICL also describes the network connecting all the different instruments. The standard only describes how to access and control the instruments, not the details of the instruments themselves.

PDL describes how to operate the instrument through commands and data written to an instrument’s port, and how to monitor the instrument by reading data from the port. These operations are described with respect to the boundaries of the instrument. PDL is compatible with the popular Tcl scripting language.

With the combination of the ICL and PDL descriptions for an instrument and its operations, the instruments can be easily ported between designs. The IJTAG standard ensures that a PDL defined sequence of operations written and verified at the instrument level can be used without modification after that instrument has been included inside a larger design. The automated process of retargeting translates and merges PDL sequences from the instrument’s instance level to the top level of the design (or any other level in between). In other words, Tessent IJTAG reads all the instruments’ PDL definitions and generates new PDL appropriate for the top-level of the design in which the instruments are embedded (Figure 1). Tessent IJTAG can also translate the top level PDL sequences into other formats, such as bit patterns for automatic test equipment (ATE), Verilog files for verification, or system level PDL used for integrating a subsystem into a larger design.
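The retargeting idea can be sketched in a few lines of Python. This is a heavily simplified illustration: real retargeting also computes the scan-path configuration needed to reach each instrument through the IJTAG network, while this sketch only shows the renaming of instrument-level operations into top-level ones. The instance names and the `(op, register, data)` tuple encoding are hypothetical, not Tessent IJTAG syntax.

```python
# Simplified sketch of PDL "retargeting": operations written and verified
# at the instrument level are rewritten so they apply at the top level of
# the design. Only the hierarchical renaming step is shown; the real flow
# also resolves the network path to each instrument.

def retarget(pdl_ops, instance_path):
    """Prefix every register reference with the instrument's instance
    path, turning instrument-level PDL into top-level PDL."""
    retargeted = []
    for op, reg, value in pdl_ops:
        retargeted.append((op, f"{instance_path}.{reg}", value))
    return retargeted

# Instrument-level PDL for a hypothetical sensor, as (op, register, data):
sensor_pdl = [("iWrite", "R", "0x2A"), ("iRead", "R", "0xFF")]

# The same operations, retargeted to a hypothetical instance u_core.u_sensor:
top_pdl = retarget(sensor_pdl, "u_core.u_sensor")
```

The key property the standard guarantees is visible even in this toy version: `sensor_pdl` itself is never edited, so the instrument-level sequence remains reusable in any design.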

Figure 2 shows the ICL for a typical instrument. It defines an 8-bit shift register named “R”, which receives the command patterns, as well as input and output lines and several control signals. Figure 3 shows PDL describing actions performed on the instrument. For example, the line “iWrite R $value” causes an 8-bit pattern named $value to be written to register R. Similarly, “iRead R 0xFF” causes the R register to be read and compared to the value 0xFF (i.e., all 8 bits should be 1’s).
Note that the user (e.g., a design or test engineer) simply has to declare the instrument interface and the commands to be given to it. The behavior of all the interfaces complies with IJTAG, so the designer only needs to learn one way to route sequences to the instrument interfaces, and does not have to know the internal details of every instrument as in the past.


Figure 2. The ICL description of a simple embedded instrument interface. Note that IJTAG is not concerned with any of the internal implementation details of the instrument (reusable IP block) itself.


Figure 3. An excerpt of PDL describing some actions to be performed on the embedded instrument from Figure 2, such as writing a value to its command register and reading a value from it.

IJTAG Automation
To fully enable and automate the use of IJTAG, the Tessent IJTAG tool performs four tasks:

  • Reads all the ICL and PDL files for a design.
  • Performs design rule checks to validate that the instruments and other P1687 components are properly connected to the top-level access point.
  • Retargets the PDL description of an instrument to the top-level access point.
  • Translates the resulting retargeted 1687 PDL into IEEE 1364 Verilog test bench language and standard test vector formats like WGL, STIL or SVF (Serial Vector Format).

Using the ICL and PDL descriptions, Tessent IJTAG will “retarget,” or transform the instrument level PDL patterns to whatever is required at another level in the IJTAG hierarchy. Retargeting to a higher level merges the PDL patterns for multiple IP blocks in a highly efficient manner that results in minimum cycle counts for IP access within a reconfigurable 1687 network. At the top level, the retargeted PDL can be translated into a variety of pattern formats to support further integration, verification and production testing.

SIMPLE EXAMPLE
To make this a bit more tangible, consider the simple design in Figure 4. Here five instruments, or IP blocks, have been connected together with paths to a top-level access point shown on the left. Each of the five IPs is provided with its own ICL file describing its control interface, and a PDL file describing the operations that can be performed on it. The designer connects these together through a simple network of registers that connect back to a top-level IEEE 1149.1-compliant external test access point, or TAP. The form of these registers and preferred ways to connect them together are recommended in the IJTAG standard, although these mechanisms are very flexible.


Figure 4. Instruments (IP blocks) connected through an IJTAG network to a top level test access point (TAP). Tessent IJTAG automatically retargets the instrument level definitions to the top TAP, enabling control of all IPs from a single access point.

Tessent IJTAG reads all the ICL and PDL files included with the IP blocks and verifies that all instruments are IJTAG compliant. Tessent IJTAG also extracts ICL descriptions for the interconnecting paths to the top level TAP from either a gate level or RTL netlist, and verifies that they are also correct and IJTAG compliant. Tessent IJTAG then uses the ICL and PDL definitions to merge and retarget the instrument PDLs so that all the operations provided by each instrument can be controlled from the top level TAP. From the user’s perspective, causing an operation to happen in any selected instrument is simply a matter of writing a command to the instrument in the top level PDL. Pattern translations, merging and routing happen automatically.

The user can also translate the top level PDL into other types of patterns, such as test vectors to be applied to the top level TAP from automatic test equipment (ATE), or simulation test benches that verify the ICL against the IP’s Verilog description.

To improve ease-of-use and accelerate IJTAG-based test development, the Tessent IJTAG tool has the unique ability to extract (generate) all the ICL network descriptions using the netlist of the overall design. Tessent IJTAG also supports automated creation of the ICL network description to be added to the overall design in case the design does not already contain this network. Using this facility, the user can define how all IJTAG IP will be connected together to a common access point.

SIDE NOTE: JTAG PATTERNS VS. SCANTEST PATTERNS
It is important to differentiate the patterns we are discussing in the IJTAG context from automatic test program generation (ATPG), which typically refers to test patterns designed to stimulate structural or “scan chain” testing within a digital IC. Scan tests are used to detect defects within a manufactured IC by inserting a pattern of ones and zeros that propagate through the logic, and comparing the resulting internal states to the expected result. A mismatch indicates a defect in the implementation of the logic circuitry (a design defect) or in the manufacture of the device (a manufacturing defect). Scan test patterns are automatically derived from the actual circuit logic during the ATPG process and are distinct from the command sequences applied to an instrument interface to set a mode or initiate an operation in the block.
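The scan-test concept being contrasted here can be illustrated with a toy sketch: shift a pattern into the logic, capture the response, and compare it to the expected result. This is purely conceptual (real ATPG works on gate-level netlists and scan chains, not Python lambdas); all names are illustrative.

```python
# Toy illustration of the scan-test idea contrasted with IJTAG command
# sequences: a stimulus pattern is applied to logic and the captured
# response is compared to the expected one. A mismatch flags a defect.

def scan_test(logic_fn, stimulus, expected):
    """Apply a scan pattern bit by bit to a logic function and compare
    the captured response with the expected response."""
    response = [logic_fn(bit) for bit in stimulus]
    return response == expected

# A healthy inverter stage inverts every scanned-in bit...
good = scan_test(lambda b: 1 - b, [1, 0, 1, 1], [0, 1, 0, 0])

# ...while a stuck-at-0 defect produces a mismatch against the same
# expected pattern.
bad = scan_test(lambda b: 0, [1, 0, 1, 1], [0, 1, 0, 0])
```

The point of the side note is that none of this pattern derivation is IJTAG's job; IJTAG only carries the command sequences that set the chip up so such tests can run.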

In the IJTAG context, a test engineer may perform a test on an instrument by using the IJTAG interface, for example, by putting it into a known state and then using a monitoring command to read back a value from some point inside the instrument. However, this is quite different from a scan pattern set.

Tessent IJTAG can be used together with ATPG because scan testing typically requires that the logic under test be initialized and certain modes be set before scan testing can commence. IJTAG can be used to perform these setup and mode control functions. For example, test setup may require enabling power and clock domains across the design, programming PLL output waveforms, putting instruments into a “bypass” mode, or switching instruments in and out of the test scope, to name just a few of the possible usages. All these tasks can be done efficiently through an IJTAG network before the ATPG patterns are computed (when the test patterns are being generated) or applied (during production testing). In this scenario, the command patterns defined by the retargeted top-level PDL could be combined with the scan-test ATPG pattern set as a single test vector file. However, generating the scan patterns themselves is not done by Tessent IJTAG, but in a separate operation performed by Tessent FastScan or TestKompress.


Adesto Acquisition of Atmel Serial Flash: Strange Bedfellows?
by Ed McKernan on 11-14-2012 at 1:00 pm

On October 1, Adesto Technologies announced that it had acquired Atmel’s DataFlash and Serial Flash business groups. At first sight, this seemed a rather counterintuitive move for one of the most aggressive (and visible) companies in the emerging memory field. The purchase raised many questions for those, not least the moderator of this blog, who have followed Adesto and its development of CBRAM (Conductive Bridging RAM). Was this a case of the company refocusing its attention towards flash, or is there more to the acquisition than meets the eye? Is it possible for a relatively young start-up to develop a successor technology while keeping customers happy (and supplied) with products based on the very technology it aims to replace? More at www.ReRAM-Forum.com


The logic of trusting FPGAs through DO-254
by Don Dingee on 11-13-2012 at 8:15 pm

Any doubters of the importance of FPGA technology to the defense/aerospace industry should consider this: each Airbus A380 has over 1000 Microsemi FPGAs on board. That is a staggering figure, especially considering the FAA doesn’t trust FPGAs, or the code that goes into them.

Continue reading “The logic of trusting FPGAs through DO-254”


Jasper User Group Keynotes
by Paul McLellan on 11-13-2012 at 1:31 pm

I attended the Jasper User Group meeting this week, at least the keynotes: the first by Kathryn Kranen, CEO of Jasper, and the second by Bob Bentley of Intel.

Kathryn went over some history, going back to when the company was started (under the name Tempus Fugit) in August 2002 with a single product for protocol verification. Since Q3 2010 Jasper has had 10 quarters of profitability, and a growth rate of 35% since 2008. The company is private, so it doesn’t publish real revenue numbers, but Kathryn did say that the company just passed the 100-employee mark, so you can make your own guesses.

Kathryn went on to talk about the multi-app approach where she feels they have cracked the code. This makes it easier to work with lead customers on specific apps with joint customer/AE/R&D initiatives and then do what she calls massification, making it widely deployable. A new white paper on JasperGold Apps is here.

Bob Bentley told the story of formal verification within Intel. His basic philosophy is that proving correctness is much better than testing for correctness. As Dijkstra said in the context of software development, “testing shows the presence of bugs, not their absence.” Bob started off by stating Intel’s policy of not endorsing vendors, and thus that nothing he said should be taken that way. In fact Intel uses a mixture of internal tools and commercial tools.


Formal approaches suddenly gained a lot of traction after the 1994 Pentium floating-point divide bug, which caused Intel to take a $475M charge against earnings and prompted management to decree “don’t ever let this happen again.” In 1996 they started proving properties of the Pentium processor FPU.

Then in 1997 a bug was discovered in the FIST instruction (that converts floating point numbers to integers) in the formally verified correct Pentium Pro FPU. It was a protocol mismatch between two blocks not accounted for in the informal arguments. Another escape.

So they went back to square one, and during 1997-98 they verified the entire FPU against high-level specs so that mismatches like the FIST bug could no longer escape. During 1997-99 the Pentium 4 processor was verified and there were no escapes.


That formed the basis of work done at Intel in the 2000s as they generalized the approach and also scaled it out to other design teams and simplified the approach so that it was usable “by mere mortals” rather than formal verification gods.

They also extended the work to less datapath-dominated parts of designs such as out-of-order instruction logic or clock gating for power reduction.

Going forward they want to replace 50% of unit-level simulation with formal approaches by 2015. This is a big challenge, of course, but it will spread the word, democratizing formal as an established part of the verification landscape and systematizing it.


Going forward they also want to extend the work to formally verifying security features and firmware, improving test coverage while reducing vector counts, performing symbolic analysis of analog circuits (including formally handling variation), and pre-validating IP.


Analog FastSPICE AMS — Simple, Fast, nm-Accurate Mixed-Signal Verification
by Daniel Nenni on 11-12-2012 at 7:00 pm


Verification and AMS are top search terms on SemiWiki so clearly designers have a pressing need for fast and accurate verification of today’s mixed-signal SoCs that include massive digital blocks and precision analog/RF circuits. They need simulation performance to verify the mixed-signal functionality, and they need nanometer SPICE accuracy to ensure the SoC meets tight analog/RF specifications.

Current mixed-signal verification solutions are severely compromised. The co-simulation approach worked well for older and simpler designs, but for tightly integrated analog and digital (big-A and big-D circuits) it has significant flow and feature limitations. Newer tools based on Verilog-AMS have a well-deserved reputation of being very hard to set up, needing expert-level support. Debugging in Verilog-AMS is often very difficult for SPICE guys, who don’t like programming languages – they prefer to see schematics, netlists, and waveforms – and digital guys don’t want to have to work from an analog design environment.

BDA brings its powerful nanometer circuit verification platform to this problem along with an innovative approach to Verilog-AMS simulation. By focusing on designers’ use models, the BDA solution lets users stay in their preferred flow. Digital designers follow their well-known text-based Verilog use model; analog designers follow their well-known schematic-based SPICE use model. The underlying simulation and verification capabilities are shared, but designers access them through their usual way of working, without needing lots of training or a switch of operating paradigm. With this approach it’s very easy for a digital designer to simulate a design using a Verilog-based flow and replace modules of interest with SPICE netlists. Similarly, analog designers use a SPICE-based flow but easily replace some modules with Verilog or Verilog-AMS netlists.

For years AMS tool providers have claimed that “single-kernel” implementations are needed for fast AMS. BDA has disproven that notion. AFS AMS uses the standard Verilog API to interface to the Verilog simulator. The Verilog simulator is so much faster than even Analog FastSPICE that the API is not a bottleneck. In fact, AFS AMS is blowing away the performance of existing “single-kernel” implementations. The big deal here is not just the performance – it’s that the digital guys can keep using their existing Verilog simulator with the original HDL and testbench, along with all of their Verilog simulator’s bells and whistles.

BDA may have broken the logjam in Verilog-AMS verification by making an AMS product so straightforward to set up, so fast to run and so easy to use that it can be useful as an everyday tool.

Also read: A Brief History of SPICE


Cadence sets the Global Standards in VIP for AMBA based SoC
by Eric Esteve on 11-12-2012 at 11:48 am

In a previous SemiWiki post focusing on interface standards like SuperSpeed USB and PCI Express, we showed how strong Cadence’s position is in Verification IP (VIP). But IP-based functions are used everywhere in a SoC, not only to interface with the external world, and need to be verified as well, AMBA-based functions included. Cadence has worked closely with ARM to ensure its VIP solutions support the ARM CoreLink™ CCI-400 Cache Coherent Interconnect and CoreLink NIC-400 Network Interconnect using the AMBA 4 protocols. Using a Network on Chip (NoC) is now common in SoC design, even though the concept is no more than 10 years old, and using a cache coherent interconnect is recommended when the SoC uses multiple processor cores. Hence the need for a proven, flexible, and highly differentiated verification solution for ARM CoreLink interconnect IP, including the most advanced AMBA specifications such as AXI4 and AXI Coherency Extensions (ACE). Cadence addresses this with various verification products dedicated to non-coherent interconnect (AXI4, AHB, and APB VIP) as well as cache coherent fabric with ACE VIP.

Looking at the customer list for a specific product often tells you more than the product brief itself. For example, Cadence proudly mentions three customers, HiSilicon, Faraday, and CEVA, for the ACE, AXI4, and AXI VIP, each designing SoCs for a specific application and each with its own careabouts.

HiSilicon is the chip design company affiliated with Huawei, involved in leading-edge network processor and set-top box SoC design, requiring high computational power and often multi-core based. As most of you probably know, Huawei is now one of the leaders in this market, and HiSilicon’s demand is for a stable, proven VIP solution to verify the performance of its SoCs. Standard VIP was used, making it possible to verify a complex design as fast as possible for the best time to market.

CEVA, the market leader in DSP IP, had a different need. As we can see in the picture above, CEVA was developing a complete subsystem including its XC4000 core, plus program and data memory subsystems with the related L1 program and data caches, and related emulation functions (ICE). To best optimize this XC4000 architecture, CEVA was using an internally modified AXI protocol. Because the interconnect IP was modified, the standard verification IP from Cadence could not be used as-is, but Cadence and CEVA worked together to modify the AXI VIP so that verification could run on the complete XC4000 DSP subsystem. Cadence’s flexibility allowed an effective VIP solution to be derived for CEVA’s specific needs.

Faraday, which provides ASIC design services and subcontracted SoC design to various customers, is another example of a successful partnership, with Cadence bringing its AMBA AXI VIP product and Faraday designing various types of SoCs for customers targeting UMC technology.

Coming back to the image at the top, we notice on the right side a block named “Interconnect Validator,” which sounds like a tool out of Star Wars. In fact, Cadence Interconnect Validator verifies the interconnect fabrics that connect IP blocks and subsystems within an SoC. Whereas a principal aim of verification IP (VIP) is to verify that IP blocks follow a given communication protocol, Interconnect Validator verifies the correctness and completeness of data as it passes through the interconnect. It’s as if R2-D2 will soon work in place of the designer! Because it automates a critical yet difficult and time-consuming task, Interconnect Validator greatly increases verification productivity at the subsystem and SoC levels. This is the type of tool expected to greatly speed up time to market for complex SoCs, reducing the time dedicated to verification, known to be longer than the pure design task, probably in a two-thirds to one-third proportion. If you want to have a look, just go to the Interconnect Validator page.

Features

  • AMBA protocol support: ACE, AXI4, AXI3, AHB and APB
  • OCP 2.0 protocol support
  • Supports verification at the subsystem and SoC levels
  • Supports any number of master ports, slave ports, and interconnect
  • Enables verification of hierarchical / cascaded fabrics
  • Enables verification of non-standard interconnect

Interconnect Validator works in conjunction with VIP components to model and monitor all ports on an SoC’s interconnect. Sophisticated algorithms track data items as they are transported through the interconnect to their destinations. Arbitration of traffic is accounted for as well as data transformations such as upsizing, downsizing, and splitting.
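The kind of bookkeeping such a tracker must do can be sketched with one of the transformations mentioned above, downsizing: a wide transfer from a master must reappear at the slave as several narrower beats that reassemble into the original data. The function names and the checking style below are illustrative, not Cadence Interconnect Validator's actual API.

```python
# Conceptual sketch of an interconnect scoreboard check for "downsizing":
# one wide write is split into narrower beats as it crosses into a
# narrower bus domain, and the beats observed at the destination must
# reassemble into exactly the data sent at the source.

def downsize(data_bytes, beat_width):
    """Split one wide transfer into several narrower beats."""
    return [data_bytes[i:i + beat_width]
            for i in range(0, len(data_bytes), beat_width)]

def check_transfer(sent, received_beats, beat_width):
    """Scoreboard check: beats seen at the slave port must match the
    downsized form of the data sent at the master port."""
    return received_beats == downsize(sent, beat_width)

# A 64-bit (8-byte) write crossing into a 32-bit (4-byte) domain:
sent = b"\x11\x22\x33\x44\x55\x66\x77\x88"
beats = [b"\x11\x22\x33\x44", b"\x55\x66\x77\x88"]
ok = check_transfer(sent, beats, 4)
```

Upsizing and splitting are the same idea with different reassembly rules, and arbitration means the tracker must also tolerate beats from different transfers interleaving in time.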

Cadence has built a page dedicated to AMBA AXI4 Verification IP, where you will find some of the customer testimonials mentioned in this post, as well as nice videos, one of them from Mirit Fromovich, in charge of the worldwide deployment of AMBA Verification IP, whom I thank for her support in helping me better understand these complex VIPs…

Eric Esteve from IPnest


Next Generation FPGA Prototyping
by Paul McLellan on 11-12-2012 at 7:00 am

One technology that has quietly gone mainstream in semiconductor design is FPGA prototyping: using an FPGA version of the design to run extensive verification. There are two approaches to doing this. The first is simply to build a prototype board, buy some FPGAs from Xilinx or Altera, and do everything yourself. The other is to buy a HAPS system from Synopsys, a more general-purpose solution. Today, over 70% of ASIC designs use some form of FPGA prototyping.

Synopsys has just announced major upgrades to HAPS: the HAPS-70 series and the associated software technologies.


Firstly, the performance of the prototype itself is increased by as much as 3X by the enhanced HapsTrak I/O technology with high-speed time-domain multiplexing (HSTDM), which gives transfer rates between FPGAs in the system of up to 1 Gbps. Since all I/Os support HSTDM, thousands of signals can be transferred between FPGAs, overcoming the limitation that, when a design is partitioned, there are often too few I/O pins for the number of signals crossing the partitions.
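The core idea of time-domain multiplexing is simple enough to sketch: many logical signals share one physical pin by taking turns in time slots. The real HSTDM implementation (framing, clocking, the 1 Gbps signaling) is far more involved; this is only the multiplexing concept, with illustrative names.

```python
# Sketch of the time-domain multiplexing idea behind HSTDM: N logical
# inter-FPGA signals share one physical trace, one value per time slot.

def tdm_send(signal_values):
    """Serialize one sample of each logical signal onto a single pin:
    time slot i carries the value of signal i."""
    return list(signal_values)

def tdm_receive(slots, num_signals):
    """Recover the logical signals from the received time slots."""
    assert len(slots) == num_signals
    return {f"sig{i}": v for i, v in enumerate(slots)}

# Eight logical signals multiplexed over one physical pin:
pin_stream = tdm_send([1, 0, 0, 1, 1, 1, 0, 1])
recovered = tdm_receive(pin_stream, 8)
```

The trade-off is that each logical signal is sampled once per frame rather than every cycle, which is why a fast physical link matters: the faster the pin, the less the multiplexing slows the prototype down.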

The system is modular. A single module (containing a single Xilinx FPGA) supports 12M ASIC gates. A layer in the chassis can have two or four of these, supporting 24M or 48M gates respectively, and up to three layers extend the capacity to 144M ASIC gates. The low end is good for IP validation and the high end for whole SoCs (and if the IP was validated using HAPS, much of that work automatically rolls over into the SoC design setup).
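The capacity figures are a straightforward multiplication, which this small check makes explicit using only the numbers stated above:

```python
# HAPS-70 capacity arithmetic from the figures above: 12M ASIC gates per
# FPGA module, 2 or 4 modules per layer, up to 3 layers per system.

GATES_PER_MODULE = 12  # million ASIC gates

def haps_capacity(modules_per_layer, layers):
    """Total capacity in millions of ASIC gates."""
    return GATES_PER_MODULE * modules_per_layer * layers

print(haps_capacity(2, 1))   # one layer of two modules
print(haps_capacity(4, 1))   # one layer of four modules
print(haps_capacity(4, 3))   # fully populated: three layers of four
```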

One of the challenges of a system like this, once the design no longer fits in a single FPGA, is partitioning the design across multiple FPGAs. Many designs don’t have natural partition lines, such as boundaries between IP blocks. Enhanced Certify software automates the multi-FPGA partitioning to accelerate system bring-up on HAPS; in experiments, 90% of designs could be partitioned automatically.

Another development is that a combination of the FPGA internal memory, external memory and the Identify software can increase debug visibility by as much as 100 times. This is one of the big challenges of FPGA prototyping: you don't necessarily know in advance which signals will turn out to be the important ones to monitor, and there are too many to monitor them all. The more data you can collect easily, the more likely you have captured what you need when an anomaly is seen.


Why are AMS designers turned off by Behavioral Modeling?

by Ian Getreu on 11-11-2012 at 8:10 pm

Analog Mixed-Signal (AMS) behavioral models have not caught on with the AMS designer community. Why? I suspect a significant reason (but certainly not the only one) is the way they are presented.

First, what is AMS behavioral modeling?

I define it as “a set of user-defined equations that describe the terminal behavior of a component”. [Without “user-defined” in there, it would apply to every SPICE model.] When some people talk about behavioral modeling, they immediately start talking about AMS languages. That is the equivalent of discussing the English language in a discussion of the latest John Grisham book: important for its creation, but irrelevant to the content.
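To make the definition concrete, here is the smallest possible behavioral model: a diode described purely by the equation at its terminals, with nothing about its internal structure. This is written as illustrative Python rather than an AMS language, precisely to keep the focus on the equations rather than the language.

```python
# A minimal "user-defined equation describing terminal behavior":
# a diode modeled only by its I-V equation. Nothing about geometry,
# doping or internal structure appears; only the equation the modeler
# chose to write. (Illustrative Python, not an AMS language.)
import math

def diode_current(v, i_s=1e-14, v_t=0.02585):
    """Terminal behavior: I = Is * (exp(V/Vt) - 1)."""
    return i_s * (math.exp(v / v_t) - 1.0)

# At 0 V the model conducts no current; at 0.7 V it conducts strongly.
assert diode_current(0.0) == 0.0
assert diode_current(0.7) > 1e-3
```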

Behavioral modeling is simply a technique for creating a model – one arrow in the modeler’s quiver. Another technique is Macromodeling – the use of previously defined blocks to create a new block. A superb example of macromodeling is the well-known Boyle-Solomon op amp model [1] that simply puts together SPICE elements based on a deep, thorough understanding of the op amp’s structure and behavior.

AMS designers are comfortable with macromodeling, because it uses interconnected building blocks just like a typical schematic – it is a straightforward extension of what they do every day.

Behavioral Modeling, on the other hand, requires learning that damn language and developing a set of equations. It is intimidating for at least 3 reasons:

  • AMS designers are not linguists; they are primarily superb assemblers of components
  • Developing a set of equations is a lot harder (and takes longer – a real problem in today’s environment) than assembling pre-existing blocks
  • Behavioral modeling (actually, any type of modeling) tests the designer’s real understanding of the device – often more deeply than they want to admit

In summary, it is off the beaten track for an AMS designer to develop a behavioral model from scratch – for some, way off the beaten track. The way around this is to work with previously created behavioral models as a starting point – either make simple modifications or use one as a template. The more popular way around it is to have someone else (a modeler) develop the model to the designer’s specifications, turning it into another building block that the designer can use.

Show me a good AMS designer and I’ll show you a good AMS modeler.

I believe AMS designers would love to talk about models – what’s in them, their accuracy, their deficiencies and how they can be improved. But I suspect discussions of AMS languages, when designers are expecting a discussion about models, are of little interest to most of them – and may even be a turn-off. Am I right?

[1] G. R. Boyle, B. M. Cohn, D.O. Pederson, J. E. Solomon, “Macromodeling of Integrated Circuit Operational Amplifiers”, IEEE Journal of Solid-State Circuits, Vol SC-9, No. 6, December 1974, pp.353-364.


Static Timing Analysis for Memory Characterization

by Daniel Payne on 11-11-2012 at 6:18 pm

Modern SoC (System on Chip) designs contain a large number of RAM (Random Access Memory) instances, so how do you know the speed, timing and power for any given instance? There are a couple of approaches:

  • Trust the IP supplier to give you models that use polynomial equations to curve-fit the performance numbers based on a sample of memory instances. Pros – results are calculated quickly. Cons – results are less accurate.
  • Analyze each instance to get the exact performance numbers. Pros – results are most accurate. Cons – characterization run times can be lengthy.
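The first approach trades accuracy for speed: a handful of instances are characterized, and everything in between is predicted from a fitted model. The sketch below uses linear interpolation between hypothetical characterized points; the delay numbers are made up for illustration.

```python
# Sketch of approach 1: characterize a few memory instances, fit a
# simple model, then predict the rest by interpolation. The access
# times here are hypothetical illustration, not real silicon data.

# "Measured" access times (ns) for a few instance depths (words)
samples = {256: 0.90, 1024: 1.20, 4096: 1.80}

def predict_delay(depth):
    """Linear interpolation between the two nearest characterized points."""
    pts = sorted(samples.items())
    for (d0, t0), (d1, t1) in zip(pts, pts[1:]):
        if d0 <= depth <= d1:
            return t0 + (t1 - t0) * (depth - d0) / (d1 - d0)
    raise ValueError("depth outside characterized range")

# Fast, but only as accurate as the fit: a 2048-word instance is
# guessed from its neighbors, not measured.
est = predict_delay(2048)
assert 1.20 < est < 1.80
```

Any instance that doesn't sit on the fitted curve inherits the fit's error, which is exactly the accuracy concern the second approach addresses.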

Ken Hsieh of Synopsys recently authored a white paper on this subject called The Benefits of Static Timing Analysis Based Memory Characterization. In this blog I’ll cover the second approach: analyzing each memory instance to get accurate performance numbers quickly.

Static Timing Analysis (STA) is applied to the transistor-level netlist of each RAM instance, as shown in the following diagram, to quickly identify the slowest and fastest paths:

The benefit of the STA approach is that it can quickly find these worst-case paths without having to supply input stimulus or wait for SPICE circuit simulation results. Here’s the design and characterization flow using a transistor-level STA tool, along with SPICE and FastSPICE circuit simulators:
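The reason STA needs no stimulus is that it works on the timing graph itself: worst-case arrival times propagate through the graph in topological order, so every path is covered implicitly. The toy graph and delays below are hypothetical; real Tx-STA operates on the transistor-level netlist.

```python
# The core of static timing analysis: longest-path search over a
# timing graph, needing no input vectors. This toy DAG and its delays
# are hypothetical illustration.

# edges: (from, to, delay in ps)
edges = [("in", "a", 30), ("in", "b", 50),
         ("a", "out", 40), ("b", "out", 30)]
order = ["in", "a", "b", "out"]  # topological order of the DAG

def worst_arrival(edges, order):
    """Propagate worst-case arrival times in topological order."""
    arrival = {n: 0 for n in order}
    for src in order:
        for u, v, d in edges:
            if u == src:
                arrival[v] = max(arrival[v], arrival[u] + d)
    return arrival

times = worst_arrival(edges, order)
assert times["out"] == 80  # slowest path: in -> b -> out (50 + 30)
```

Because the traversal visits every edge, the slowest and fastest paths fall out of one pass over the graph, versus simulating enough input patterns to hope the critical path gets exercised.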


At the top, shown in orange, a RAM architecture is designed and a memory netlist created. The purple rectangle in the middle denotes the transistor-level STA (Tx-STA) tool, which quickly identifies any timing violations per instance and then sends that info to either the SPICE or FastSPICE simulator for further analysis. If the timing and noise results do not meet spec, the designer goes back to the memory architecture and modifies the netlist. This flow generates a memory library model called CCS (Composite Current Source) that is within 5% of SPICE results.

Another flow diagram is shown below, after the Memory Compiler has been fully characterized and released into production:

Here the IP provider creates the transistor-level STA config file and netlist. The IP user can then quickly run the STA tool on the transistor-level netlist in the context of their entire design (input slopes and output loads). This tool flow is quite fast because the IP user is not required to create input stimulus or determine what the critical paths are.
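The "in context" step amounts to selecting a delay from the characterized data using the actual input slope and output load seen in the user's design. A simple bilinear interpolation over a hypothetical two-by-two table sketches the idea; real CCS models are far richer than a single lookup table.

```python
# Sketch of the "in context" step: the IP user's input slope and
# output load select a delay from characterized data. Bilinear
# interpolation over hypothetical numbers; CCS models are richer.

slews = [0.1, 0.5]          # input transition times (ns)
loads = [10.0, 50.0]        # output loads (fF)
delay = [[0.20, 0.40],      # delay[i][j] for slews[i], loads[j] (ns)
         [0.30, 0.60]]

def lookup_delay(slew, load):
    """Bilinear interpolation inside the characterized corner box."""
    x = (slew - slews[0]) / (slews[1] - slews[0])
    y = (load - loads[0]) / (loads[1] - loads[0])
    top = delay[0][0] * (1 - y) + delay[0][1] * y
    bot = delay[1][0] * (1 - y) + delay[1][1] * y
    return top * (1 - x) + bot * x

# Corners reproduce the table exactly; interior points interpolate.
assert lookup_delay(0.1, 10.0) == 0.20
assert abs(lookup_delay(0.3, 30.0) - 0.375) < 1e-9
```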

The Synopsys tool for Tx-STA is called NanoTime, and it has features to perform both setup and hold time checks in an exhaustive manner. Using the CCS models you can get to full-chip SoC signoff.

Summary
Synopsys has an STA-based approach to characterizing and using memory compiler instances that can be completed quickly and accurately (within 5% of SPICE) by both IP providers and IP users. Alternative approaches that rely solely upon dynamic circuit simulation require much longer characterization and design times, and you’re never quite sure that you found all of the worst-case paths.

Further Reading
White Paper: The Benefits of Static Timing Analysis Based Memory Characterization