A Brief History of Mobile: Generations 3 and 4
by Paul McLellan on 10-18-2012 at 8:30 pm

The early first-generation analog standards all used a technique known as Frequency Division Multiple Access (FDMA). All this means is that each call was assigned its own frequency band in the radio spectrum. Since each band was only allocated to one phone, there was no interference between different calls. When a call finished, the band could be re-used for another call; the allocation wasn’t permanent.

GSM uses a technique called Time Division Multiple Access (TDMA). Despite the mistaken marketing of GSM as providing CD-quality sound just because it was digital (it certainly does not), the real advantage of the 2G standards was being able to get four times (initially; up to eight times later) as many calls into the same radio bandwidth, which over time would drive down call costs. TDMA works by allocating each call not just to a particular frequency band, as with FDMA, but also to specific time slots within that band. The phones and base station only communicate with each other in those slots, leaving the other slots free for other calls. With the distances and speeds involved, speed-of-light considerations come into play, and the power and precise timing of communication need to be carefully controlled to ensure that one call does not step on another in the neighboring slot.
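To make the FDMA-plus-TDMA idea concrete, here is a toy allocation sketch in Python (purely illustrative: the two carriers and their names are assumptions, and a real GSM scheduler is far more involved). Each call gets a (frequency channel, time slot) pair, and the pair goes back into the pool when the call ends, just as FDMA bands were re-used.

```python
# Toy FDMA + TDMA allocation: a call occupies one (carrier, slot) pair.
SLOTS_PER_CHANNEL = 8                  # GSM carries 8 time slots per carrier
CHANNELS = ["f1", "f2"]                # assumed: a cell with two carriers

free_slots = [(ch, s) for ch in CHANNELS for s in range(SLOTS_PER_CHANNEL)]
active_calls = {}

def place_call(call_id):
    """Assign the next free (channel, slot) pair, or fail if the cell is full."""
    if not free_slots:
        raise RuntimeError("network busy: every (channel, slot) pair is in use")
    active_calls[call_id] = free_slots.pop(0)
    return active_calls[call_id]

def end_call(call_id):
    """Release the pair so another call can re-use it."""
    free_slots.append(active_calls.pop(call_id))

print(place_call("alice"))   # ('f1', 0)
print(place_call("bob"))     # ('f1', 1) -- same carrier, different time slot
end_call("alice")            # ('f1', 0) is now free for the next call
```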

Most of the other technologies that were adopted in competition with GSM were dead-ends, either technically or simply from a business scale point of view. But one technology, used by Verizon and Sprint in the US and all carriers in South Korea, turned out to be very significant: CDMA.

CDMA stands for Code Division Multiple Access. The original version is also known as IS-95 but several subsequent versions were known as CDMA-2000. An explanation of how CDMA works sounds a bit preposterous. Basically all phones transmit in the same band of frequencies at the same time. Since the bandwidth used for the transmission is much larger than the bandwidth being transmitted (compressed voice) it is called a spread-spectrum technology.

So how does a phone pick out the one transmission meant for it from the noise of all the other simultaneous transmissions? That is where the “code” in CDMA comes in. Each phone is allocated a unique code and that code is XORed with the data. The rate of the code is much higher than the data rate, so several bits of code get XORed with each bit of data. The cleverness is that the codes are all mutually orthogonal. Without going into an in-depth mathematical analysis of what that means precisely, the effect is that if a phone attempts to correlate a call with a different code, it correlates to zero, and if it attempts to correlate a call with its allocated code, it recovers the original signal.
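Here is a minimal sketch of that idea (an idealized, noise-free illustration, not the actual IS-95 algorithm): four transmitters share the channel using mutually orthogonal Walsh codes, and each receiver recovers only its own bit by correlating against its own code.

```python
import numpy as np

# Four mutually orthogonal 4-chip Walsh codes (rows of a Hadamard matrix).
codes = np.array([
    [1,  1,  1,  1],
    [1, -1,  1, -1],
    [1,  1, -1, -1],
    [1, -1, -1,  1],
])

# One data bit per "phone", encoded as +1 or -1.
data_bits = np.array([+1, -1, -1, +1])

# Spreading: each bit is multiplied by its code (the +/-1 equivalent of XOR),
# and everything is transmitted on the same band at the same time.
channel = sum(bit * code for bit, code in zip(data_bits, codes))

# Despreading: correlate the combined signal with one phone's code.
# The other codes correlate to zero, so only the wanted bit survives.
for i, code in enumerate(codes):
    recovered = np.dot(channel, code) / len(code)
    print(f"phone {i}: sent {data_bits[i]:+d}, recovered {recovered:+.0f}")
```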

CDMA is so elegant that it seems like one of those ideas that might be nice mathematically but fail in the real world. After all, signals take different times to reach the phone depending on how far away the base station is, there are reflections off nearby buildings and so on. So the transmission really has to be sought out in the received radio signal. In fact, received wisdom is that it takes a DSP running at 100 MIPS or more to decode a CDMA signal. The first implementations of CDMA were, indeed, not very reliable.

One of the big challenges is that the power levels of all the radios need to be constantly adjusted so that one with high power doesn’t overwhelm ones with lower power, like everyone at a party trying to talk louder than everyone else. The code approach only causes partial rejection of incorrect signals, and an excessively powerful one may get through.
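A minimal sketch of the kind of closed-loop power control this implies (illustrative only: the path gains, noise and SIR target below are made-up numbers, and real CDMA systems send power up/down commands hundreds of times per second). Each phone scales its transmit power until its signal-to-interference ratio at the base station hits a common target, so all calls arrive at roughly equal strength.

```python
import numpy as np

gains = np.array([1.0, 0.25, 0.05])   # assumed path gains: near, mid and far phone
power = np.ones(3)                     # everyone starts at the same transmit power
target_sir = 0.4                       # assumed per-call signal-to-interference target
noise = 0.01

for _ in range(50):
    received = gains * power
    for i in range(3):
        interference = received.sum() - received[i] + noise
        sir = received[i] / interference
        power[i] *= target_sir / sir   # classic multiply-by-target/actual update

print("transmit powers:", np.round(power, 3))          # the far phone shouts loudest
print("received levels:", np.round(gains * power, 3))  # but all arrive about equal
```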

CDMA is a technology created from whole cloth by one company, Qualcomm, based in San Diego (actually La Jolla). They created the technology, patented it, licensed it to semiconductor manufacturers and cell-phone manufacturers, and even at the beginning had a joint-venture with Sony to manufacture phone handsets to kick start the market.

In practice, Qualcomm was the company that understood CDMA and had all the rights, so it was hard to build CDMA phones except by buying chips from Qualcomm. Riding this wave, Qualcomm has risen to be a top-10 semiconductor company, still fabless. Today TSMC manufactures most of their chips.

The reason that Qualcomm and CDMA have turned out to be so important is that the 3G standards are largely based on Qualcomm’s patents. CDMA makes more efficient use of wireless spectrum (which is the bottleneck resource) through the way the power levels dynamically adjust. TDMA, in comparison, cannot adapt: it cannot pack five calls into a channel instead of four when radio conditions are good and there are no channels left.

W-CDMA (Wideband CDMA) is a generic term for a number of wireless technologies all based on Qualcomm’s fundamental technology, although initially developed by NTT DoCoMo in Japan. It is the basis of all European and US 3G standards. The Chinese TD-SCDMA is also based on the same approach, although supposedly designed to get around Qualcomm’s patents and thus avoid the royalties that all other manufacturers pay. Qualcomm claims it still infringes many patents, but since the phones only work on one network in China, China Mobile, and have no export market, there is little Qualcomm can do.

The big change in the 3G era was the arrival of smartphones. Responsive data access suddenly became important, not just the capability to make voice calls. Data is very different from voice in a couple of ways. Firstly, voice has a fixed data rate and there is not really any advantage to transmitting it faster, just more efficiently. Data is not like that: everyone really wants gigabits of bandwidth to their phone if they could get it. Secondly, the reliability requirements are higher. If a packet of voice fails to get through it is not worth retransmitting it; better to have a few milliseconds of silence (or comfort noise) in the middle of the call. But data is not like that, and usually every packet needs to be retransmitted until it is successfully received.

As a result, in 2/3G standards, voice is circuit switched and a dedicated special channel is set up for each call, whereas data is packet switched without a dedicated radio resource for each data circuit.

There were expected to be a number of 4G technologies, in particular Qualcomm’s successor to CDMA2000 called UMB (Ultra Mobile Broadband). But Qualcomm stopped development of the technology and threw their weight behind LTE.

LTE stands for Long Term Evolution (only an international committee could pick a name like that). Actually what current marketing by cellular operators calls 4G is often called 3.9G inside the mobile industry. In fact there are so many standards with different capabilities that it is almost arbitrary where they are broken into generations. So the current generation is now called 4G and the next generation is meant to be called “true 4G” but don’t hold your breath.

LTE is an evolutionary development of the GSM standard by way of W-CDMA. It is incompatible with 2G and 3G systems and thus needs dedicated radio-spectrum. Initially CDMA operators were expected to have their own 4G evolution, but in the end they too have decided to migrate to LTE.
Until LTE, all standards were a sort of hybrid, with digitized voice handled differently from digital data such as internet access. LTE is a flat IP-based approach where voice is compressed into digital data as before, but no longer has a dedicated circuit-switched mode of transmission; it is simply transmitted over the data channel like a “voice-over-IP” phone service such as Skype.

The transition to LTE is complicated by the need to keep phones working in all areas as the LTE build-out proceeds. The most common approach is to use LTE for data when it exists and fall back to the 3G data when it does not. Meanwhile, voice calls are still circuit switched through the existing 3G system (GSM or CDMA). Depending on the architecture of the handsets and the network, it may or may not be possible to both make a voice call and have data access at the same time.

Eventually, when all areas have LTE base stations and all handsets support LTE, it should be possible to shut down the legacy circuit switched infrastructure and use the freed-up spectrum for more LTE bandwidth.

That is where we are today. In large metropolitan areas LTE is up and running. Smaller markets will transition more slowly. State-of-the-art smartphones such as the Samsung Galaxy S3 and the iPhone 5 have LTE data access but still circuit-switch the voice, unless you use an over-the-top (OTT) voice service such as Skype that simply re-routes calls through the data channel (and bypasses the carrier’s billing for a voice call).

One challenge for carriers is that they have become used to charging much more for a voice call (and a text message) than for the equivalent amount of data. For example, a GSM Enhanced Full Rate vocoder compresses voice into about 12kb/s, and into nothing when you are not talking, which is about half the time (because you are listening). A 3GB/month data subscription costs about $20-30 but can handle about 1000 hours of calls (as data) without exceeding the data cap. But 1000 hours is more hours than there are in a month: you literally cannot exceed a gigabyte-sized data cap with voice calls. Yet a user making several thousand minutes of voice calls per month has, until now, been paying roughly ten times as much.
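The arithmetic behind that claim is worth spelling out (a back-of-the-envelope sketch; all figures are approximate):

```python
# Roughly how many hours of vocoded voice fit inside a typical data cap.
vocoder_rate_kbps = 12    # GSM Enhanced Full Rate, ~12 kb/s while talking
talk_fraction = 0.5       # about half the call you are listening, not talking
data_cap_gb = 3           # a typical 3 GB/month plan

bytes_per_hour = vocoder_rate_kbps * 1000 / 8 * talk_fraction * 3600
hours_of_voice = data_cap_gb * 1e9 / bytes_per_hour
print(f"~{bytes_per_hour / 1e6:.1f} MB per hour of voice")              # ~2.7 MB/hour
print(f"~{hours_of_voice:.0f} hours of calls fit in {data_cap_gb} GB")  # ~1100 hours
# A month only has 744 hours, so voice alone cannot blow through the cap.
```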

Also see: A Brief History of Mobile: Generations 1 and 2


Virtuoso Has Twins
by Paul McLellan on 10-18-2012 at 6:01 pm

Cadence has apparently announced that, going forward, the Virtuoso environment is going to be split in two and offered as two separate code streams: the current IC6.x and a new IC12.x. The idea is to introduce a new product with features that were specifically developed for new technologies, such as double-patterning-aware layout design and checking, use of local interconnect, and FinFET enhancements.

I’m sure part of this is that Cadence wants to charge more for these features to the people who need them, without getting caught up in endless negotiations with people who don’t use them and so have no reason to pay extra, a problem that every EDA company faces when it tries to get value for the incremental R&D required to keep on the process-node treadmill.

I ran Custom IC at Cadence for a year or so and one of the biggest problems I had was that we had large numbers of very conservative semiconductor companies who would not upgrade to new versions of Virtuoso and, in fact, stayed on versions which we officially no longer supported. Then, to make it even worse, they would find they needed some feature that we had wisely added to a later release and insist that we back-port it into the unsupported release that they were still using. Even though they would happily (well, probably unhappily, but they had no choice) pay for this, it was a huge distraction for the engineering team. To add insult to injury, those same semiconductor companies’ CTOs would give keynote speeches about how EDA companies need to get their engineering out ahead of the process roadmap so that Virtuoso (and other tools) were ready when their most advanced groups needed the features.

So I see this announcement (actually I’ve not seen it officially announced, but it does seem to be real) as a rational response to this sort of behavior by semiconductor companies. Their most advanced groups need advanced features and will put up with some instability and a fast release cycle to get them. But other groups treat Virtuoso like a good malt whisky: much better if you ignore it and let it mature for 10 years before use. This is not entirely irrational behavior: advanced groups do need advanced features, while many groups use only the most basic features and see upgrading as more of a cost than a benefit. The groups also have different sensitivity to price and different interest in taking a real look at the competition (shrinking at the moment, since rumor has it that SpringSoft Laker is not going to survive for long after its assimilation into the Borg of Synopsys).


iPhone5 Versus Samsung S3: the Key Question
by Paul McLellan on 10-18-2012 at 8:29 am

In all the discussion about iPhone versus Samsung, the profit leader and the volume leader in the handset business, there is way too much discussion of boring stuff like how many MIPS the A6 chip has, whether the maps are any good on the iPhone (no), and whether there is enough 28nm capacity for Qualcomm. Boring.

The real question that everyone wants to know the answer to is: will it blend?


TSMC dilemma: Cadence, Mentor or Synopsys?
by Eric Esteve on 10-18-2012 at 4:49 am

Looking at the press release (PR) flow, it was interesting to see how TSMC has solved a communication dilemma. First, let’s be clear: the #1 silicon foundry has to work with each of the big three EDA companies. As a foundry, you don’t want to lose any customer, so you support every major design flow. Choosing any other strategy would be stupid.

The first PR came on October 12, about Chip on Wafer on Substrate tape out, here is an extract: “TSMC today announced that it has taped out the foundry segment’s first CoWoS™ (Chip on Wafer on Substrate) test vehicle using JEDEC Solid State Technology Association’s Wide I/O mobile DRAM interface… A key to this success is TSMC’s close relationship with its ecosystem partners to provide the right features and speed time-to-market. Partners include: Wide I/O DRAM from SK Hynix; Wide I/O mobile DRAM IP from Cadence Design Systems; and EDA tools from Cadence and Mentor Graphics.”

As you can see, design tools from both Cadence and Mentor are mentioned, and Cadence can feel honored: the test vehicle is based on the company’s Wide I/O mobile DRAM IP. We will look at Wide I/O in more depth soon in this blog.

Cadence and Mentor? Looks like one is missing!

Then, today, the industry learned that Synopsys has “received TSMC’s 2012 Interface IP Partner of the Year Award for the third consecutive year. Synopsys was selected based on customer feedback, TSMC-9000 compliance, technical support excellence and number of customer tape-outs. Synopsys’ DesignWare Interface IP portfolio includes widely used protocols such as USB, PCI Express, DDR, MIPI, HDMI and SATA that are offered in a broad range of processes from 180 nanometer (nm) to 28nm.”

If you want to know more about the Interface IP market, worth over $300 million in 2011, you should take a look at this post.

The PR about Chip on Wafer on Substrate (CoWoS) from TSMC shows that Cadence is investing in the memory controller technology of the near future, to be used for 3D-IC in mobile applications. I suggest you read this excellent article from Paul McLellan, so you will understand how CoWoS works from a silicon technology standpoint.

I will focus instead on the Wide I/O memory controller. Here are the key features, as described by Cadence:
Key Features

  • Supports Wide I/O DRAM memories compliant with JESD229
  • Supports typical 512-bit data interface from SoC to DRAM (4 x 128 bit channels) over TSV at 200MHz offering more than 100Gbit/sec of peak DRAM bandwidth
  • Independent controllers for each channel allow optimization of traffic and power on a per-channel basis
  • Supports 3D-IC chip stacking using direct chip-to-chip contact
  • Supports 2.5D chip stacking using silicon interposer to connect SoC to DRAM
  • Priority and quality-of-service (QoS) features
  • Flexible paging policy including autoprecharge-per-command
  • Two-stage reordering queue to optimize bandwidth and latency
  • Coherent bufferable write completion
  • Power-down and self-refresh
  • Advanced low-power module can reduce standby power by 10x
  • Supports single- and multi-port host busses (up to 32 busses with a mix of bus types)
  • Priority-per-command (AXI4 QoS)
  • BIST algorithm in hardware enables high-speed memory testing and has specific tests for Wide I/O devices

It’s amazing! During the last ten years we have seen a massive move from parallel to serial interfaces: think of PCI moving to PCI Express, or PATA being completely replaced by SATA in storage applications in less than 5 years, and the list is long. With the Wide I/O concept, we see that a massively parallel (512-bit) interface running at 200 MHz (compare LPDDR3 at 800 MHz DDR) can offer both better bandwidth, up to 17 GB/s, and better power-per-transfer than an LPDDRn solution.
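As a sanity check on those numbers, here is the back-of-the-envelope arithmetic (assuming single-data-rate transfers at the quoted 200 MHz clock; higher figures such as 17 GB/s presumably assume a faster clock or a later generation of the standard):

```python
# Peak bandwidth implied by the Cadence feature list above.
bus_width_bits = 512      # 4 channels x 128 bits
clock_hz = 200e6          # 200 MHz

peak_bits_per_s = bus_width_bits * clock_hz
print(f"{peak_bits_per_s / 1e9:.1f} Gbit/s")    # ~102.4 Gbit/s, i.e. "more than 100"
print(f"{peak_bits_per_s / 8 / 1e9:.1f} GB/s")  # ~12.8 GB/s
```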

Is there anything magic here? The higher bandwidth is easily explained: adding enough 64-bit-wide busses will let you pass LPDDR3 performance. The reason why the power per transfer is better is more subtle: because it’s a 3D technology, the connection between the SoC and the DRAM is made in the 3rd (vertical) dimension, as shown in the picture from Qualcomm, so the connection length is shorter than any connection made on a board. Moreover, the capacitance (due to the bumping or bonding material and to the track on the PCB) is minimized with a 3D connection, and so is the power per bit transferred at a given frequency. I have not checked how this was computed, but I am not shocked by the result…
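The power argument comes down to switching energy scaling with load capacitance. Here is a toy comparison using the usual E ≈ α·C·V² rule of thumb (the capacitance and voltage values below are invented placeholders, not measured data):

```python
# Energy per bit for an off-chip PCB trace versus a short 3D (micro-bump + TSV) link.
def energy_per_bit_pj(c_farads, v_swing, activity=0.5):
    # ~ activity * C * V^2, returned in picojoules
    return activity * c_farads * v_swing**2 * 1e12

pcb_trace_c = 5e-12    # assumed: package pin + board trace, a few picofarads
tsv_bump_c = 0.5e-12   # assumed: micro-bump + TSV, roughly an order of magnitude less

print("PCB route:", round(energy_per_bit_pj(pcb_trace_c, 1.2), 2), "pJ/bit")
print("3D route: ", round(energy_per_bit_pj(tsv_bump_c, 1.2), 2), "pJ/bit")
```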

So the Wide I/O memory controller looks like a superb new technology developed by Cadence, and the mobile market is healthy enough (an understatement!) to justify introducing it, but, as mentioned by Qualcomm in the above picture, “Qualcomm want this but also competitive pricing”…

Eric Esteve from IPNEST


Xilinx Programmable Packet Processor
by Paul McLellan on 10-17-2012 at 5:19 pm

At the Linley conference last week I ran into Gordon Brebner of Xilinx. He and I go a long way back. We had adjacent offices in Edinburgh University Computer Science Department back when we were doing our PhDs and conspiring to network the department’s Vax into the university network over a two-week vacation. We managed to do it. I think Gordon and I (and another co-conspirator called Fred whose name wasn’t really Fred, but that’s another story) drank some beer together at times, but I’m a bit hazy about that.

I came to the US and Gordon remained in Edinburgh and eventually became head of the computer science department. Then he decided to join Xilinx, move over here, and do research on novel things to do with FPGAs. One area goes back to that networking stuff: how to use FPGAs to build a custom network processor that has exactly the required functionality and power/performance.

At the evening exhibit session, they were demonstrating this at 100G line rates showing that FPGAs do have the necessary performance. And devices are now large enough that you can build significant architectures and systems.

The normal alternative is to use a network processor unit (NPU) or else a generic multicore CPU and try and get the right mix of resources, interconnect, ports etc in a fixed architecture.

So how do you actually build one of these network processors? The user describes the packet processing requirements in the PX++ language, which has an object-oriented programming style. The PX++ compiler generates a customized micro-coded architecture, described in RTL, which is then synthesized for the FPGA. Subsequent changes to the PX++ description, unless very large, can then be compiled to microcode updates only, with no re-synthesis being required. The compiler also takes throughput and latency requirements separately, and customizes the generated architecture to meet them, giving a scalable approach. This results in trade-offs: higher throughput means more FPGA resource use, and lower latency means less run-time programmability.

In the future I think we will see more of this. Today, when you think of programming an algorithm you think of implementing it in software. But for really high performance, compiling it into gates can be more effective, either using general high-level synthesis (such as AutoESL, which Xilinx acquired last year) or something like PX++ that is completely focused on a specific but important problem.


8 Reasons Why I Hate My iPhone 5
by mbriggs on 10-17-2012 at 8:05 am

Not really, but I regret upgrading from my Verizon-based HTC Incredible to the iPhone 5. If you are on the fence between an iPhone 5 and a Samsung S3, consider reading this post.

I’ve been an anti-Apple fanboy of sorts for the duration, but have gradually been sucked into the Apple ecosystem. I really like my iPad (3) and my “recliner laptop” is an old MacBook Pro. I have found the process of making calls on my Incredible (power button, swipe, call) very trying and thought an Apple device would do a much better job with many usability issues.

The 8 reasons are:

  • Voice recognition. I’m a terrible phone typist and like to utilize voice recognition for email and text messages. Android is far better at transcribing my random babblings. Siri does a fine job with “call dan cell”, or “text harry”, but that’s about it. Siri seems to understand “F*ck You Siri” and gives an interesting response. My money says it’s in the top 5 of all phrases uttered to Siri.
  • Camera. I’ve taken 20 pictures thus far, none of them good. I see much better photos from my wife’s HTC Thunderbolt and my friend’s iPhone 4s. I’ve seen the purple flare problem even when it didn’t seem like the sun should be a factor. I can accept that issue, given it appears on other phones, but my pictures are fuzzy.
  • Data Plan. Verizon forced me off my unlimited plan. I shouldn’t be using much data as I do zero video and minimal audio streaming, but I’ve gotten a warning message two months in a row that I’m nearing my Verizon threshold. (Yes, I fixed the wifi bug immediately, but still got a warning this month.)
  • Photo Sharing. I can’t run Photo Stream on my MacBook Pro as a later version of Mac OS X is required. I decided to bite the bullet and pay $20 to Apple to upgrade to Mountain Lion, which annoys me. Then I found out that my 5-year-old machine (2.16 GHz Intel Core 2 Duo) is not upgradable to Mountain Lion. Ironically, Photo Stream runs nicely via iCloud on my Windows 7 machine.
  • Maps. I bet you are tired of hearing about this one. I’m dependent on voice-based navigation. I really miss Google Maps. I find the best alternatives, MapQuest and Nokia Maps, unsatisfactory.
  • Cost. It’s not a $200 upgrade. Add in tax, $30 upgrade charge, cheap case and voila, it’s $300. I get that the S3 would be the same.
  • App installation. Oftentimes when I try to press the “INSTALL APP” button I need to press a half dozen times. The problem comes and goes. I don’t know if I have a flaky screen or something else is amiss. It seems to help if I kill all the apps that I am not using, but this, too, is annoying. I’m sure it will work fine if I take it to the genius bar or the Verizon store.
  • Password. Apple has forced me to use a secure password (upper and lower case, number, special character) which is hard to type. I don’t install apps in bunches, I install them periodically – and need to type in the d*mn password every time. Google still lets me use my old, lower case only, password.

So, I lied. I don’t really HATE my iPhone 5, but if I had to do it over again I’d get a Samsung S3. Surf to Mike’s Blog.


IP-SoC 2012 Conference: don’t miss keynote talks from Cadence, Synopsys, STMicroelectronics…
by Eric Esteve on 10-17-2012 at 4:47 am

… Mentor Graphics, Design & Reuse or Gartner. The IP-SoC conference in Grenoble was the very first conference 100% dedicated to Design IP, created by Gabriele Saucier 20 years ago, when “reuse” was more a concept than a reality within design teams, and when Design IP was far from being a sustainable business.

Professor Gabriele Saucier had the intuition that this concept would turn into a real business, and created “Design & Reuse”, the well-known IP portal, some years after the conference started. Now, in 2012, Design IP and Computer Aided Engineering (CAE), both market segments as defined by EDAC, weigh in at almost the same size:

EDAC reports revenues of $2,292 million for CAE in 2011 and $1,580 million for Design IP.

But if you look at the Design IP results as reported by Gartner, the segment reaches $1,910 million. And my guess is that some Design IP sales are not counted by either EDAC or Gartner, such as those sold directly by the silicon foundries or by certain ASIC design houses like Global Unichip or Open Silicon, or even IP traded directly between chip makers. Thus, estimating the IP market at two billion dollars plus ($2.2B?) looks realistic.

IP and EDA are both essential building blocks for the semiconductor industry. It was not clear back at the end of the ’90s that IP would become essential: at that time, the IP concept was devalued by products with poor quality and inefficient technical support, leading program managers to be very cautious about deciding to buy at all. Making the function in-house was sometimes more efficient… In the meantime, the market has been cleaned up, the poor-quality suppliers have disappeared (gone bankrupt or sold for their assets), and the remaining IP vendors have learned the lesson.

Today, no vendor launching a protocol-based (digital) function would take the chance of shipping a product that has not passed an extensive verification program, and mixed-signal IP vendors know that only silicon-proven functions will really sell. This leaves very little room for low-quality products. The IP market is now mature and consolidating, even if some newcomers are doing pretty well, especially in PHY (Cosmic Circuits, VSemiconductor…) and chip infrastructure (Arteris going from $5M in 2010 to $15M in 2011), and some promising companies, like Imagination Technologies or CEVA, are finally making it, thanks to the smartphone explosion!

The IP-SoC conference lasts two days, December 4 and 5, so the program is wide; here is an extract:

Keynote Talks
“IP Business: Status and Perspectives” by Gabriele Saucier, CEO, Design And Reuse

“Semiconductor IP market overview” by Ganesh Ramamoorthy, Research Director, Gartner Inc

“Managing the IP Sourcing Process: an IDM Perspective” by Philippe Quinio, Group Vice President of IP Sourcing & Strategy, STMicroelectronics

“Cloud and Mobility: Disrupting the IP Ecosystem” by Martin Lund, Senior Vice President, Research and Development, SoC Realization Group, Cadence

“Keynote Talk” by Joachim Kunkel, Senior Vice President and General Manager, Solutions Group, Synopsys

“Power is now a Software Issue” by Colin Walls, Mentor Graphics

Invited Talks

“Duopoly, Monopoly → Opportunity” by Marc Miller, Sr. Director of Marketing, Tabula

“µfluidic applications: an upcoming Eldorado for µelectronic?” by CEA

If you want to register, just go to the Design & Reuse web site, here.

If you come, we should meet, as I plan to attend the conference and present a paper. The topic? I will let you know in a future blog. The presentation will certainly be IP-centric, and you will most probably hear about Mobile Express, PHY IP, MIPI… just stay tuned.

Eric Esteve


12m FPGA prototyping sans partitioning
by Don Dingee on 10-16-2012 at 9:30 pm

FPGA-based prototyping brings SoC designers the possibility of a high-fidelity model running at near real-world speeds – at least until the RTL design gets too big, when partitioning creeps into the process and starts affecting the hoped-for results.

The average ASIC or ASSP today is on the order of 8 to 10M gates, and that includes things up to an ARM Cortex-A9 processor core comfortably. However, that size has until recently swamped FPGA technology, forcing an RTL model to be partitioned artificially across several FPGAs before it can fit into an FPGA-based prototyping system. After spending a bunch of time integrating verified RTL IP blocks into a single functional design, it seems a bit counter-productive to split it back up to see if it really works at the validation stage. Depending on the skills of the partitioner, the diamond that was a nice RTL design can be reduced to rubble quickly.

That risk has kept many designers from using FPGA-based prototyping for large and fast designs, opting instead for virtual platform and simulation techniques which can handle very large models today. These are both good approaches to verify functional integrity, but more and more designs are unearthing IP issues that only appear when running with faster I/O and real software (which could take WEEKS in a simulation platform). If a design team doesn’t crank things up and stress the RTL with an at-speed look, there’s a bigger chance of failure on the first silicon pass, and that can get brutally expensive in time, money and missed markets.

We’ve seen one major development on the FPGA-based prototyping front recently – from Aldec – and we’ve been pre-briefed on another one coming from another vendor shortly. (Insert #ICTYBTIHTKY hashtag here. You’ll read it here as soon as we can talk about it.) Let’s dig a bit into why the Aldec approach gets my attention.

We first learned about the Aldec HES-7 in an earlier post from Daniel Payne about a month ago. I’ve been digging through the white paper co-authored by Aldec and Xilinx, looking beyond the headline that the HES-7 system now goes to 96M gates. While that’s an impressively large size, utilizing that capability requires a design to be partitioned across 8 FPGAs, in pairs separated by a PCI Express interconnect.

As the Aldec-Xilinx white paper describes, when you partition RTL to fit FPGA-based prototyping environments, you suddenly need to worry a lot about the clock tree, balancing resources between the FPGA partitions, dealing with which part of the logic gets the memory interface and I/O pins, and more. Some of you out there may be very comfortable with your partitioning skills and might have developed a formula that splits your gigantic RTL design reliably into 8 FPGA-sized pieces without side effects – I’d be thrilled to hear of a real-world example we could share here, especially how much effort this takes.

But let’s face it, the reason Chevy puts Corvettes into showrooms is to sell Silverados, so most people can get real work done while they dream of someday needing a lot more horsepower. Not many SoC designers need 96M gates. I’m betting that the vast majority of SoC designers would love to have a 12M gate platform running at around 50 MHz in a single FPGA without RTL partitioning. That is exactly the value proposition the Aldec HES-7 XV2000 offers. Insert one ARM Cortex-A9 based design, no partitioning, and a lot less waiting for results.

There’s an interesting study John Blyler blogged recently on a survey asking why designers turn to FPGA prototyping for SoC design. That HW/SW co-verification bar in the chart he shows is huge. He’s also hinting at the issue we talked about of verified IP falling down during validation when integrated in a larger design.

What are your thoughts on the state of FPGA-based prototyping? Does the ability to put an entire 12M gate design in a single large FPGA on a prototyping system open up the methodology for more SoC designers? Or does it just push the envelope so larger RTL designs can fit now and partitioning will still be required? Are the results of FPGA prototyping worth the effort of partitioning? Does the ability to validate with real software in a much shorter time offset the investment in the methodology?


ReRAM Cell Modeling and Kinetics
by Ed McKernan on 10-16-2012 at 8:55 pm

Introducing the first ReRAM-Forum movie!! In part 2 of their recently published papers in the IEEE Transactions on Electron Devices, Professor Ielmini’s group describes the modeling of resistive switching in bipolar metal-oxide ReRAM. Like part 1, the paper is a collaboration with David Gilmer of Sematech, who provided the hafnium-oxide-based ReRAM samples. The numerical model solves the drift/diffusion equations for ion migration, allowing the evolution of the conductive filament to be viewed in ‘real time’. The model reproduces the abrupt set and gradual reset transitions along with the kinetics of cell behavior observed experimentally. The reset process is shown in the embedded movie. See more at ReRAM-Forum.com.
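For readers wondering what “solving the drift/diffusion equations for ion migration” involves, here is a heavily simplified 1D finite-difference sketch of that kind of update (an illustration only, with arbitrary normalized coefficients; it is not the model from the paper):

```python
import numpy as np

nx = 100
D, v, dt, dx = 0.1, 0.05, 1.0, 1.0   # normalized diffusivity, drift velocity, steps

n = np.zeros(nx)                      # ion concentration along the filament
n[nx // 2] = 1.0                      # ions initially piled up in the middle

for _ in range(500):
    diffusion = D * (np.roll(n, 1) - 2 * n + np.roll(n, -1)) / dx**2
    drift = -v * (n - np.roll(n, 1)) / dx   # upwind difference, periodic ends
    n += dt * (diffusion + drift)

print("peak concentration after migration:", round(float(n.max()), 4))
```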


Current Timing Closure Techniques Can’t Scale – Requires New Solution
by Daniel Nenni on 10-16-2012 at 8:30 pm

There’s a nice article on timing closure by Dr. Jason Xing, Vice President of Engineering at ICScape Inc., on the Chip Design website. Not familiar with ICScape? Paul McLellan called ICScape The Biggest EDA Company You’ve Never Heard Of and Daniel Payne did Schematic, IC Layout, Clock and Timing Closure from ICScape at DAC, just to get you started.

Current IC designs have advanced quickly from 65 and 45 nanometers, down to 28, 20, and below. This progression to ever-smaller geometries has brought significant challenges in achieving timing closure to meet production deadlines and market windows. Engineering teams often struggle to efficiently perform late-stage ECOs (engineering change orders) to meet their design as well as time-to-market objectives.

In the current methodology, engineers are forced to fix ECOs using two or more different tools in the flow, iterating far too many times, often just to meet production deadlines and market windows. This method of handling ECOs will get worse with each new process node. This brings up a dire need for a solution that will allow efficient and effective handling of ECOs and hence design closure.

What are the challenges with timing closure?

Jason Xing is co-founder and Vice President of Engineering at ICScape, where he architected the clock and timing closure products. Jason has over 15 years of EDA research and development experience. In 1997, he joined Sun Labs after receiving his PhD in Computer Science from the University of Illinois at Urbana-Champaign. At Sun Labs, Xing did research on physical and logical concurrent design methodologies and shape-based routing technologies. In 2001, he joined the Sun Microsystems internal CAD development team before he started ICScape in 2005. Jason holds another PhD, in Mathematics, from the University of Louisiana.

ICScape is a leader in developing and delivering fast and accurate design closure solutions for today’s complex SoC designs, and a complete suite of analog and mixed-signal design, implementation and verification solutions. ICScape’s tools have been successfully used to design and deliver integrated circuits for a variety of application areas including storage, wireless, baseband, data communications, multimedia, graphics, chipset and power management design. The solutions are silicon-proven through well over 100 tape-outs. The SoC design closure solutions fit into existing flows, complementing signoff static timing analysis and physical design tools. The analog/mixed-signal (AMS) tools form a complete solution, while individual tools fit into existing AMS tool flows, preserving your current investment in tools.

ICScape now has a landing page on SemiWiki, so you will be reading more about them soon. With all the EDA consolidation, ICScape is the one to watch for both digital and analog solutions.