IP-SoC 2011 Trip Report: IP again, new ASSP model, security, cache coherence and more
by Eric Esteve on 12-13-2011 at 9:05 am

For the 20th anniversary of IP-SoC we had about ten presentations, most of them genuinely interesting; overall the conference provided a very good level of information, with speakers coming from various places such as China, Belarus, the University of Aizu (Japan), the University of Sao Paulo (Brazil), the Silesian and Warsaw Universities of Technology (Poland), BNM Institute of Technology (Bangalore, India) and, obviously, western Europe and the USA. I have been going to IP-SoC for more than five years now, and I am glad to see that there is no more room for the insipid “marketing” presentations some IP vendors used to give. This was real information, and if you attended a presentation focused on security, like the one given by Martin Gallezot (one of my former colleagues at PLDA), you really needed to know a bit about the topic (physical attacks on cryptographic devices) to fully understand it… but that’s exactly what you expect when you go to a conference, isn’t it? To learn something new.

And, obviously, networking within this niche part of the industry that is IP. IP-SoC was the right place to network, and I did it as much as possible, as well as finding new customers in the IP vendor community. Sorry, I can’t give you names; we need to close the deals first!

Starting with the 20th anniversary special talks, we had (as usual) a “Semiconductor Design IP Market Overview” from Gartner; if you remember my blog in January about Gartner, they were very good at forecasting… the past. This year Gartner has improved: they now give a year-over-year design IP revenue growth forecast of 10% for 2011, 4% for 2012, and 8% for 2013 and beyond, which is more in line with what we can expect from the IP market, compared with the few percent they gave last year. Also interesting were the results of a survey Gartner ran with IP consumers. In particular:

  • To the question “What are the most important criteria to select a specific IP?”, 90% answered “It must be silicon proven and tested”.
  • To the question “Why do you use design IP?”, 70% answered “To improve time-to-market”.

Nothing surprising in these answers, but rather the confirmation that it’s really difficult to enter the IP market: even if your product is wonderful, it will not be silicon proven, and the first sale will be very difficult to make!

The conclusion from Gartner was, at least in my opinion, rather funny: “78% of the semiconductor growth in 2012 will come from smartphones and media tablets, so you should sell into this market.” Funny, because if you are not already selling into this market, it’s probably too late in December 2011 to modify your portfolio to attack it in 2012…

One last piece of information you may value: the IP market was worth $325M in 1998 and is now worth $1.7B; this represents 0.12% of the value of the end equipment served by these IP. Impressively low, isn’t it?

Another presentation (http://www.design-reuse.com/ipsoc2011/program/) was given by Marc Miller from Tabula. I have known Tabula (and Marc) since 2005, when we decided at PLDA to support them with our PCI Express IP. At that time, only a few people really understood what exactly their product was. I think that in the meantime a lot more people understand Tabula’s new concept, based on “dynamically reconfigurable hardware logic, memory, and interconnect at multi-GHz rates”. That’s a pretty good idea: the same piece of silicon, say a LUT since we are in the FPGA world, is used to support different logic functions within the same system clock period! Two remarks: first, the silicon should run as fast as possible, which is why Tabula has invested in 40nm technology; second, how on earth can we use the existing design (EDA) tools? The answer is: no way! So Tabula is 50% an EDA company, designing its own toolset, and 50% a hardware FPGA vendor. According to Marc, the “official” release of the tools is coming very soon; I say “official” because Tabula is already claiming to have customers. Will Tabula successfully compete with the duopoly? I don’t know, but their product is real innovation!
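
To make the time-multiplexing idea concrete, here is a minimal C sketch of my own (an illustration of the concept described above, not Tabula’s actual architecture or toolset): one physical 4-input LUT, which is just a 16-bit truth table, is given a different configuration in each sub-cycle, so the same silicon evaluates several different logic functions within one user clock period.

```c
#include <stdint.h>
#include <stdio.h>

/* A 4-input LUT is just a 16-bit truth table indexed by its inputs. */
static int lut4(uint16_t config, int a, int b, int c, int d)
{
    int index = (d << 3) | (c << 2) | (b << 1) | a;
    return (config >> index) & 1;
}

int main(void)
{
    /* Hypothetical example: three different functions packed onto ONE
       physical LUT, selected in successive sub-cycles of a single user
       clock period. The config values are ordinary truth tables. */
    uint16_t configs[3] = {
        0x8000, /* sub-cycle 0: 4-input AND (only index 15 is 1)  */
        0xFFFE, /* sub-cycle 1: 4-input OR  (only index 0 is 0)   */
        0x6996, /* sub-cycle 2: 4-input XOR (odd parity of inputs)*/
    };

    int a = 1, b = 1, c = 0, d = 1;

    /* One user clock period = three sub-cycles on the same silicon. */
    for (int sub = 0; sub < 3; sub++) {
        int out = lut4(configs[sub], a, b, c, d);
        printf("sub-cycle %d: out = %d\n", sub, out);
        /* In real hardware the output would be latched and fed to the
           next sub-cycle's logic; here we just print it. */
    }
    return 0;
}
```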

I realize that one blog is too short to cover all the other interesting presentations (cache coherence and 3D-IC for a butterfly NoC, to name a few), so I will have to come back in a second blog. Before leaving you I will just mention… my own presentation: “Interface IP Market Birth, Evolution and Consolidation, from 1995 to 2015. And further?” That was the first time I saw people standing during the show, not to leave the room but to better see the slides! Among the compliments I got after the presentation, one was especially precious to me, as it came from a semiconductor, and even IP, veteran: Jonathan Edwards, now IP Manager at ST-Ericsson. In fact, Jonathan started his career back in the 70’s working with GEC Plessey, then INMOS in the UK, and then STMicroelectronics when they bought INMOS, and he has stayed in a technical expert role the whole time. Thanks again Jonathan!

By Eric Esteve from IPnest – Interface IP Survey available here.


Learning About MEMS
by Daniel Payne on 12-12-2011 at 6:34 pm

My automobile has an air bag system that uses a MEMS (Micro Electro Mechanical System) sensor to tell it when to deploy, and I’ve read headlines talking about MEMS over the years so I decided it was about time to learn more by attending a Webinar on Wednesday, December 14th at 8AM Pacific Time.

The EDA company hosting the Webinar is Tanner EDA, and I’ve read specific customer examples about MEMS design in four different applications, including:

  • Hymite – MEMS packaging
  • Knowles – Microphone
What I’ve learned so far about MEMS:

  • You can write macros in C or C++ to control your MEMS layout in the L-Edit tool, and they’re called T-Cells. This reminds me of Cadence Pcells or other IC layout approaches like PyCells (see the rough sketch after this list).
  • Some MEMS chips need to be hermetically sealed in order to function properly.
  • Arcs and circles are typical MEMS layout shapes, unlike most IC designs, which are strictly rectangular.
  • Visualizing MEMS layout in 3D helps shorten product development times.
  • Accelerometers are what goes into an airbag system, and MEMS are ideal for this application.
  • MEMS can be used to barcode micro-particles used in medical testing.
  • The realms of IC design and mechanical design still use separate analysis software but common layout tools.

After the webinar on Wednesday I’ll blog about what I’ve learned.
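
As a rough illustration of the T-Cell idea, here is a self-contained C sketch of a parameterized layout generator. The add_box helper and the layer name are made up for the example (a real T-Cell would call L-Edit’s own API instead), but the pattern of turning a handful of parameters into repetitive geometry, here the interdigitated fingers of a comb structure, is the point.

```c
#include <stdio.h>

/* Hypothetical stand-in for a layout-database call; a real T-Cell would
   use L-Edit's own API here instead of printing rectangles. */
static void add_box(const char *layer, double x, double y, double w, double h)
{
    printf("%-6s box at (%.1f, %.1f), size %.1f x %.1f um\n", layer, x, y, w, h);
}

/* Parameterized comb-finger generator: change the parameters and the whole
   structure regenerates, which is the point of a T-Cell. */
static void comb_fingers(int n_fingers, double finger_w, double finger_len,
                         double gap, double x0, double y0)
{
    double pitch = finger_w + gap;
    for (int i = 0; i < n_fingers; i++)
        add_box("POLY", x0 + i * pitch, y0, finger_w, finger_len);
}

int main(void)
{
    /* Arbitrary example parameters: 8 fingers, 2 um wide, 40 um long, 2 um gap. */
    comb_fingers(8, 2.0, 40.0, 2.0, 0.0, 0.0);
    return 0;
}
```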


    View from the top: Ajoy Bose
    by Paul McLellan on 12-12-2011 at 4:13 pm

    I sat down yesterday with Dr. Ajoy Bose, CEO of Atrenta, to get his view of the future of EDA – looking through a high-power “spyglass” of sorts. I first met Ajoy when he was at Software & Technologies. I was then the VP of Engineering for Compass Design Automation and we were considering off-shoring some development. We eventually dipped a toe in the water with small groups in both India (at Hyderabad) and Moscow. The feel of India from inside a high-tech company building is very different from the feel outside on the street, but that’s a story for another day.

    Ajoy believes that what Cadence calls SOC Realization in their EDA360 white paper is a transformation of how design is done that is as great as the move from schematics to RTL, although just as then, it is a transformation that takes years to complete.

    We talked about the fact that chips are no longer really designed at the RTL level – they are assemblies of IP blocks. But IP quality and other design meta-data is lacking in standard representations in the design flow, which is getting to be a big problem. Hmm, sounds like the interoperability forum I was at earlier in the week.

    Ajoy believes the Holy Grail is early exploration, making sure a design meets its targets for power, performance and area well before you commit to detailed implementation. This process requires more collaboration within the ecosystem, along with standards for IP creation, IP assembly and SoC assembly. One opportunity is solving problems early and only once in the design cycle, which requires additional information in the form of constraints and waivers.

    We also talked about the fact we need a much better way to abstract a block. For physical design, we can take a block and “black-box” it, just needing to know where the terminals are. But for IP blocks it’s not that easy. You can’t do IP-based design the way you used to be able to take your TI 7400 series databook and do printed circuit board design. IP doesn’t work like that for timing, or for testability. Much more detailed information needs to be processed to get a useful answer. The same is true for power consumption.

    Of course, another big change is that software forms more and more of the design. This is seen most clearly at Apple, where software is king. Apple builds chips specifically optimized to run exactly the required software load.

    Ajoy reckons that about 10% of design groups are taking all this into account and doing design that starts with the software and then designing the hardware using the software to focus that development. The other 90% design the chip and then let the software guys have at it, which is much slower and less predictable.

    From an EDA business point of view it is clear that the system companies are taking more control. These types of companies seem to be prepared to pay for tools that deliver value. Since they are not already making enormous purchases of less differentiated stuff they seem less inclined to insist that everything is simply rolled into an all-you-can-eat deal for next to nothing.

    There is a chance that design handoff will move to this block level, the level that specifies the virtual platform for the software people and the IP to be assembled for the SoC team. It is still early, but Ajoy and Atrenta believe the change is coming.


    Intel Proves Last Year’s Conventional Wisdom Wrong
    by Ed McKernan on 12-11-2011 at 7:00 pm

    Back in the 1990’s, Richard Branson, the legendary entrepreneur and investor, was asked how to become a millionaire, and he allegedly responded, “There’s really nothing to it. Start as a billionaire and then buy an airline.” I think the same principle can be applied to a large part of the semiconductor industry as we witness another major downturn that has been in the works since this summer and cuts across memory, analog and logic vendors. The one true shelter in the storm has been Intel, whose stock saw a major upswing following the September 13th announcement by Microsoft that Windows 8 would need an x86 processor to ensure backward compatibility – a necessary requirement in the business world. Not surprisingly, ARM’s stock has declined since then.

    Forecasting and controlling for variables can be tricky as anyone can argue that the semiconductor industry could be reflecting the slowdown in Europe or the Thailand Flood that took out 30% of the world’s hard drive production. Last week I had the opportunity to meet a number of customers in China focused on the consumer electronics business. It was quite shocking to hear the magnitude of the price drops that have occurred in the memory and microcontroller market since June. The Thailand flood caused DRAM vendors to dump product immediately, not waiting for the PC cutbacks expected in Q1. Wafers were then reallocated to build more Flash and MCUs, which led to price declines there as well. Recent semiconductor forecasts show DRAM down nearly 30% year over year. The bright spots were in x86 processors and NAND flash.

    The viewpoint I have been trying to communicate is much longer term: what should we expect over the next two to four years? At the beginning of 2011, the conventional wisdom (CW) was that Apple’s growth in tablets would spill over to other vendors and, as a result, PCs would see a major slowdown to the benefit of ARM suppliers, despite Intel communicating to the world that it was significantly upping its capital expenditure to nearly $11B to retrofit 4 fabs for 22nm and to build 2 fabs for 14nm. ARM and nVidia’s stocks raced ahead on the expected tablet boom and the follow-on Windows 8 driven “ARM PCs” coming in 2H 2012.

    Few analysts thought for a moment to look into the reasoning behind Intel’s massive CapEx, which was followed by an even greater stock buyback combined with increasing dividend payouts. It turns out that Intel has known for more than a year that Microsoft’s Windows 8 was going to have to be split in order to have a lightweight O/S for consumers and a heavy-duty version for corporate. Furthermore, the data center build-out with $4000 Xeon processors and double-digit emerging market growth would overcome any PC cannibalization in the Western world due to tablets. In the end, Otellini could write the checks and still sleep at night.

    It is true that Intel would wish to have a competitive tablet processor to close any pathway for ARM to build on its Smartphone success. But from all the presentations that Intel has given this year, it is apparent that they believe they just need to get to 14nm production with Finfets in 2014 and then they will be All Alone with a 4-year process lead. Doors will close on competitors and foundry partnerships will be established – particularly with Apple and probably Qualcomm and one of the large FPGA vendors.

    From our current observation point, we can see that Intel has a greater wind at its back today as compared to 12 months ago. The tablet market is Apple’s and Amazon’s based on a $10 processor. Intel will field a $10 part for the purpose of forcing nVidia, Qualcomm, TI and others to play in the mud. I expect many ARM CPU vendors will re-evaluate the worthiness of playing in the tablet and smartphone markets at such a low price and return on investment.

    AMD has fallen off the radar screen in the near term giving Intel sole ownership of the ultrabook market. Intel will look to convert 70%+ of the mobile PC market into ultrabooks because in 18-24 months (after Haswell) they could own it all and diminish nVidia’s graphics business that thrives today on attachments to Sandy Bridge.

    Finally, in 2011, Intel figured out that the right way to look at tablets and smartphones was as the drivers of the cloud that is built on $4000 Xeon processors. Intel now expects its server and storage business to double in the next 5 years to $20B. I think this is conservative. Regardless, it is rare to hear of a large Fortune 500 company growing an 80%+ Gross Margin business at double-digit rates.


    During a Question and Answer segment at the recent Credit Suisse Investors Conference, Paul Otellini was confident as he explained the economics of today’s Fabs and the $10B ones coming with 450mm in 2018. The dwindling number of players who are able to afford the price tag and the 4-year process lead with 14nm coming in 2014 should make competitors shudder. The capital-intensive airline business model that Richard Branson spoke about may be about to come to most of the semiconductor industry, with the likely exception of Intel.

    FULL DISCLOSURE: I am Long AAPL and INTC


    Synopsys Eats Magma: What Really Happened with Winners and Losers!
    by Daniel Nenni on 12-10-2011 at 6:00 pm

    Conspiracy theories abound! The inside story of the Synopsys (SNPS) acquisition of Magma (LAVA) brings us back to the 1990’s tech boom with shady investment bankers and pump/dump schemes. After scanning my memory banks and digging around Silicon Valley for skeletons with a backhoe here is what I found out:

    The Commission brings this action against defendant Credit Suisse First Boston LLC, f/k/a Credit Suisse First Boston Corporation (“CSFB”) to redress its violation of provisions of the Securities Exchange Act of 1934 (“Exchange Act”) and pertinent rules thereunder, and rules of NASD Inc. (“NASD”) (formerly known as the National Association of Securities Dealers) and the New York Stock Exchange, Inc. (“NYSE”).

    Investment banker Frank Quattrone, formerly of Credit Suisse First Boston (CSFB), took dozens of technology companies public including Netscape, Cisco, Amazon.com, and coincidentally Magma Design Automation. Unfortunately CSFB got on the wrong side of the SEC by using supposedly neutral CSFB equity research analysts to promote technology stocks in concert with the CSFB Technology Group headed by Frank Quattrone. Frank was also prosecuted personally for interfering with a government probe.

    6. The undue and improper influence imposed by CSFB’s investment bankers on the firm’s technology research analysts caused CSFB to issue fraudulent research reports on two companies: Digital Impact, Inc. (“Digital Impact”) and Synopsys, Inc. (“Synopsys”). The reports were fraudulent in that they expressed positive views of the companies’ stocks that were contrary to the analysts’ true, privately held beliefs.

    The full complaint is HERE; it is an interesting read.

    To make a long story short: Frank Quattrone went to trial twice: the first ended in a hung jury in 2003 and the second resulted in a conviction for obstruction of justice and witness tampering in 2004. Frank was sentenced to 1.5 years in prison before an appeals court reversed the conviction. Prosecutors agreed to drop the complaint a year later. Frank didn’t pay a fine, serve time in prison, nor did he admit wrongdoing! Talk about a clean getaway! Quattrone is now head of merchant banking firm Qatalyst Partners, which, coincidentally, handled the Synopsys acquisition of Magma on behalf of Magma. Qatalyst is staffed with Quattrone cronies and former CSFB people. Disclosure: This blog is opinion, conjecture, rumor, and non-legally-binding nonsense from an internationally recognized industry blogger who does not know any better. To be clear, this blog is for entertainment purposes only.

    Okay, here’s what I think happened: Qatalyst went to Magma CEO Rajeev Madhavan with a doom and gloom Magma prediction for 2012 and a promise of a big fat check from Synopsys. In parallel, Qatalyst went to one or more Synopsys board members and suggested that investors want to see a return on the $1B+ pile of cash Synopsys was hoarding, adding that “if you don’t buy Magma your competition will”. The rest is in the press releases.

    A couple of interesting notes: Synopsys will have to pay Magma $30M if the acquisition does not go through. I can assure you there are some people who definitely do NOT want this merger to go through so there is a possibility it will not pass regulatory scrutiny. Frank Quattrone’s involvement may not help this process assuming he has some regulatory enemies from his legal victory.

    Magma will have to pay Synopsys $17M if they get a better offer and back out of the deal. Mentor only has $120M in cash so they are in no position for a bidding war, even though I think that is the rightful home for the Magma products. Cadence has $700M in cash but I don’t think they could outbid Synopsys even if they wanted to, which from what I have been told they don’t.

    “Bringing together the complementary technologies of Synopsys and Magma, as well as our R&D and support capabilities, will help us deliver advanced tools earlier, thus, directly impacting our customers demand for increased design productivity.” Aart J. de Geus Synopsys (SNPS) Q4 2011 Earnings Call November 30, 2011 5:00 PM ET

    If “complementary technologies” means “overlapping products”, I agree with Aart. Daniel Payne did a nice product comparison table on the SemiWiki Synopsys Acquires Magma!?!?!? Forum thread. 10,000+ people have viewed it thus far, which would be considered “going viral” on our little EDA island.

    Winners and Losers?

    Synopsys is the biggest winner. Magma has been undercutting EDA pricing since day one so expect bigger margins for Synopsys! Aart also gets to write the final Magma chapter which has gotta feel pretty good. Kudos to Synopsys on this one.

    Emerging EDA companies like Berkeley Design Automation and ATopTech are big winners. One of Magma’s biggest attractions was that they were NOT Synopsys/Cadence. Big EDA customers and semiconductor foundries do not like product monopolies and will search out innovative alternatives.

    Magma is a winner with a very nice exit. Being dog number four in a three dog race is not much fun. Magma’s accomplishments are notable, no shame there, and they do have some excellent people/technology.

    Cadence is a winner/loser. A winner because they do not have to compete with Magma anymore; a loser because they are now even farther behind Synopsys in just about everything. Magma customers are losers. If history repeats, Synopsys will raise prices and move the overlapping Magma products to legacy status, as they did with EPIC, NASSDA, etc…

    Mentor is the biggest loser. If Mentor had acquired Magma (as I blogged), Mentor would be the #2 EDA company hands down. Carl Icahn really missed a great opportunity to make history. You really let me down here, Carl. Comments will not be allowed on this blog.

    Please share your thoughts on the Synopsys Acquires Magma!?!?!? Forum thread. Send all personal attacks and death threats to me directly at: idontcare@semiwiki.com.


    Atrenta’s users. Survey says….
    by Paul McLellan on 12-09-2011 at 7:32 pm

    Atrenta did an online survey of their users. Of course, Atrenta’s users are not necessarily completely representative of the whole marketplace, so it is unclear how the results would generalize to the bigger picture; your mileage may vary. About half the respondents were design engineers, a quarter CAD engineers, and the rest split between test engineers, verification and other roles.

    There are some questions that focus on the use of Atrenta’s tools that I don’t think are of such wide interest, so I’ll focus on the things that caught my eye.

    Firstly, the method to design your RTL. Do you create it from scratch or modify existing RTL? It is now a 40:60 split with 40% of designers writing their own RTL and 60% modifying existing RTL.

    When it comes to the top-level RTL, there is a split between doing it manually (57%), with scripts (57%) and using a 3rd party EDA tool (12%). Yes, those numbers total more than 100%; some people obviously use more than one technique.

    On the main limitations of their current approach, designers had a litany of woes. Missing design manipulation features (35%), support not consistent and reliable (26%) and ECOs hard to handle (34%). But clearly the #1 problem is the difficulty of debugging design issues at 49%. There were many other things listed from missing IP-XACT files, IP being unqualified, to just plain “error prone”.


    The final question was about what aspects of the design flow were most critical to improve. The choices for each feature were critical, very important, nice to have, not important and don’t know. So let’s take the critical and very important groups and see what the top concerns were.

    First was reducing verification bugs due to connectivity problems. The next three are all facets of a similar problem: rapidly adapting legacy designs, the effort to integrate 3rd party IP, and the effort to make updates when 3rd party IP is in use. Slightly behind that is reducing the time and effort to create test benches.


    Challenges in 3D-IC and 2½D Design
    by Paul McLellan on 12-09-2011 at 5:18 pm

    3D IC design, and what has come to be known as 2½D IC design with active die on a silicon interposer, require new approaches to verification, since through silicon vias (TSVs) and the fact that several different semiconductor processes may be involved create a new set of design challenges.

    The power delivery network is a challenge in a modern 2D (i.e. normal) die, but designing the power delivery network is more challenging still with TSVs passing the power up from the die at the bottom of the stack (or the interposer) to the higher die. There are possibilities of inter-die noise and other issues. There are two approaches to handling this. The first approach, which can be used if all the die data is available, is to simulate everything concurrently. The second approach is to use models, where a chip power model (CPM) is generated for any die whose data is missing and co-analyzed with the available data.
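
    As a back-of-the-envelope illustration of why the stacked power delivery network is harder (my own sketch with made-up numbers, not Apache’s methodology): the supply current for every die above a given TSV segment has to flow through that segment, so the IR drop accumulates toward the top of the stack.

```c
#include <stdio.h>

int main(void)
{
    /* Illustrative numbers only: per-die supply current (A) for a 4-die
       stack, and the effective resistance (ohm) of the TSV array feeding
       each level from the level below. */
    double die_current[4] = { 2.0, 1.5, 1.0, 0.8 };
    double tsv_res[4]     = { 0.002, 0.002, 0.002, 0.002 };
    double vdd = 1.0;

    double v = vdd;
    for (int level = 0; level < 4; level++) {
        /* Current through this TSV segment = current of this die plus
           every die above it. */
        double i_seg = 0.0;
        for (int k = level; k < 4; k++)
            i_seg += die_current[k];
        v -= i_seg * tsv_res[level];
        printf("die %d sees %.4f V (drop so far %.1f mV)\n",
               level, v, (vdd - v) * 1000.0);
    }
    return 0;
}
```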

    Another specific power-related problem is thermal and thermal-induced stress failures. The IC power is very temperature-dependent, especially leakage power. In a 3D design the thermal dissipation is more complex. Similar to CPM, a chip thermal model (CTM) can be generated for each die in the design, including temperature dependent power and per-layer metal density. The CTM is used for accurate prediction of power and temperature distribution.
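
    Here is a tiny sketch of the electrothermal loop a CTM has to capture, using my own first-order model and assumed coefficients: leakage grows roughly exponentially with temperature, temperature grows with total power, so the two have to be iterated to a self-consistent operating point.

```c
#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Illustrative first-order model with assumed coefficients. */
    double p_dyn   = 2.0;   /* dynamic power, W (temperature independent)   */
    double p_leak0 = 0.5;   /* leakage at the reference temperature, W      */
    double t_ref   = 25.0;  /* reference temperature, C                     */
    double k       = 0.02;  /* exponential leakage sensitivity, 1/C         */
    double r_th    = 10.0;  /* junction-to-ambient thermal resistance, C/W  */
    double t_amb   = 25.0;  /* ambient temperature, C                       */

    double t = t_amb;
    for (int iter = 0; iter < 20; iter++) {
        double p_leak = p_leak0 * exp(k * (t - t_ref));
        double p_tot  = p_dyn + p_leak;
        double t_new  = t_amb + r_th * p_tot;
        printf("iter %2d: T = %6.2f C, leakage = %.3f W\n", iter, t_new, p_leak);
        if (fabs(t_new - t) < 0.01) break;   /* converged */
        t = t_new;
    }
    return 0;
}
```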

    From a signal integrity point of view, a new problem is jitter noise analysis for wide-I/O applications. In an interposer design, which is a lot less pin limited than a regular package, a parallel bus might have as many as 8K bits which, apart from skew considerations, can introduce significant jitter due to simultaneous switching.

    So it is clear that a new approach is required: a comprehensive analysis of power, noise, and reliability to ensure successful tape-out of 3D and silicon-interposer-based designs.

    This is a summary of a paper by Dr Norman Chang of Apache presented at the IEEE/CPMT society 3D-IC workshop held in Newport Beach on December 9th. The conference website is here.


    Low power techniques
    by Paul McLellan on 12-08-2011 at 5:49 pm

    There was recently a forum discussion about the best low power techniques. Not surprisingly we didn’t come up with a new technique nobody had ever thought of but it was an interesting discussion.

    First there are the techniques that by now have become standard. If anyone wants more details on these, then two good resources are the Synopsys Low Power Methodology Manual (LPMM) and the Cadence/Si2 Practical Guide to Low Power Design. The first emphasizes UPF and the second CPF, but there is a wealth of background information in both books that isn’t especially tied to either power format.

    • Clock gating
    • Multiple Vt devices
    • Back/forward biasing of devices
    • Power gating for shutdown
    • State retention
    • Multi-voltage supplies
    • Dynamic voltage scaling
    • Dynamic frequency scaling (Dynamic Voltage and Frequency Scaling – DVFS)

    A lot can be done at the system/architectural level, essentially controlling chip level power functionality from the embedded software, such as powering down the transmit/receive logic in a cell-phone when no call is taking place.
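
    As a minimal sketch of what that software control looks like in practice (the register, its bits and the call-state hook are hypothetical, but the pattern of firmware gating an idle block through a power control register is the common one):

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

/* Stand-in for a memory-mapped power control register; on real hardware
   this would be a volatile pointer to a fixed (chip-specific) address. */
static uint32_t pwr_ctrl_reg;

#define PWR_RF_TX_EN   (1u << 0)   /* transmit chain power enable */
#define PWR_RF_RX_EN   (1u << 1)   /* receive chain power enable  */

/* Called by the firmware's call-state machine: gate the RF chains off
   whenever no call is in progress. */
static void rf_set_call_active(bool active)
{
    if (active)
        pwr_ctrl_reg |=  (PWR_RF_TX_EN | PWR_RF_RX_EN);
    else
        pwr_ctrl_reg &= ~(PWR_RF_TX_EN | PWR_RF_RX_EN);
}

int main(void)
{
    rf_set_call_active(true);
    printf("in call: ctrl = 0x%02X\n", (unsigned)pwr_ctrl_reg);
    rf_set_call_active(false);
    printf("idle:    ctrl = 0x%02X\n", (unsigned)pwr_ctrl_reg);
    return 0;
}
```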

    Asynchronous logic offers potential for power saving, especially the ability to take silicon variation in the form of lower power rather than binning for higher performance. After all, the clock itself consumes around 30% of many SoCs’ power budget. But there are huge problems with the asynchronous design flow, since synthesis, static timing, timing-driven place & route, scan test and so on are all inherently built on a synchronous model and break down when there is no clock. These are solvable problems if enough people wanted to use asynchronous approaches, but a lot of tools need to be fixed all at once (although, to be fair, this was the case with the introduction of CPF and UPF too). It definitely has a feel of “you just have to boil the ocean.”

    With more powerful tools, such as those from Calypto, more clock gating can be done than the simple cases that synthesis handles (replacing muxes that recirculate values with a clock gating cell). If a register doesn’t change on this clock cycle, then the downstream register won’t change on the next clock cycle. Some datapaths have a surprising number of these sorts of structures that can be optimized, although the actual power savings are usually data dependent.
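
    To spell out that argument with a toy cycle-by-cycle model (my own illustration, not Calypto’s tool): if stage 1 held its value on cycle t, then stage 2’s input is unchanged on cycle t+1, so stage 2’s clock can be gated that cycle without changing the computed results.

```c
#include <stdio.h>

int main(void)
{
    /* Toy two-stage pipeline: r2 <= f(r1), with f = x + 1 for illustration. */
    int input[8] = { 3, 3, 3, 7, 7, 2, 2, 2 };   /* values fed into stage 1 */
    int r1 = 0, r2 = 0;
    int r1_changed_last_cycle = 1;               /* conservative at reset   */
    int gated_cycles = 0;

    for (int t = 0; t < 8; t++) {
        /* Stage 2 only needs a clock edge if stage 1 changed last cycle;
           otherwise its input is the same value and reloading r2 would be
           wasted switching activity. */
        if (r1_changed_last_cycle)
            r2 = r1 + 1;                         /* f(r1)                   */
        else
            gated_cycles++;                      /* clock gated, r2 holds   */

        /* Stage 1 captures the new input and notes whether it changed;
           this stability bit becomes stage 2's clock enable next cycle. */
        r1_changed_last_cycle = (input[t] != r1);
        r1 = input[t];

        printf("t=%d  r1=%d  r2=%d  (stage-2 clock %s next cycle)\n",
               t, r1, r2, r1_changed_last_cycle ? "needed" : "can be gated");
    }
    printf("stage-2 clock gated on %d of 8 cycles\n", gated_cycles);
    return 0;
}
```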

    As we have gone down through the process nodes, leakage power has gone from an insignificant part of total power dissipation to being over half in some chips. Some of the new Finfet transistor approaches look like they will have a big positive effect on leakage power too, but there is a lot that can be done with any process using libraries containing both low-leakage and high-performance cells and using the high-performance cells only on the most critical paths.

    The real bottom line is that power requires attention at all levels. The embedded software ‘knows’ a lot about which parts of the chip are needed and when. For example, the iPad supposedly has multiple clock rates for the A5 chip and only goes up to the full 1 GHz when that performance is needed for CPU-intensive operations. Architectural-level techniques such as the choice of IP blocks can have a major impact. Low power synthesis helps with clock gating and multiple libraries. Then there are circuit design techniques. And finally there is process innovation that keeps power under control as the transistors get smaller and smaller.


    Driving in the bus lane
    by Paul McLellan on 12-08-2011 at 1:16 pm

    Modern microprocessor and memory designs often have hundreds of datapaths that traverse the width of the chip, many of them very wide (over one thousand signals). To meet signal timing and slope targets for these buses, designers must insert repeater cells to improve the speed of the signal. Until now, the operations associated with managing large numbers of large buses have been manual: bus planning, bus routing, bus interleaving, repeater cell insertion and so on. However, the large and growing number of buses, especially in multi-core microprocessor designs, means that a manual approach is both too slow and too inaccurate. So Pulsic have created an automated product to handle this increasingly critical task, Unity Bus Planner.
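
    To give a feel for the arithmetic behind repeater planning, here is the classic first-order Bakoglu-style estimate of repeater count and size for a long RC wire, with purely illustrative numbers; this is textbook sizing, not Pulsic’s algorithm.

```c
#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Illustrative numbers only: a long cross-chip route. */
    double r_wire = 2000.0;     /* total wire resistance, ohm        */
    double c_wire = 2.0e-12;    /* total wire capacitance, F         */
    double r_drv  = 1000.0;     /* minimum-size driver resistance    */
    double c_in   = 2.0e-15;    /* minimum-size gate input cap, F    */

    /* Classic Bakoglu first-order optimum for a repeated RC line. */
    double k_opt = sqrt((0.4 * r_wire * c_wire) / (0.7 * r_drv * c_in));
    double h_opt = sqrt((r_drv * c_wire) / (r_wire * c_in));

    printf("optimal repeater count ~ %.1f\n", k_opt);
    printf("optimal repeater size  ~ %.1f x minimum\n", h_opt);
    return 0;
}
```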

    Another problem that automation helps to address is that the first plan is never the last. As von Moltke said: “No campaign plan survives first contact with the enemy.” In just the same way, no bus plan survives once the detailed placement of the rest of the chip gets done. Repeater cells, since they involve transistors, don’t fly over the active areas but have to interact with them, so as the detailed layout of the chip converges the bus plan has to be constantly amended.

    During the initial floorplanning stage, designers do block placement and power-grid planning followed by initial bus and repeater planning. Buses that cross the whole chip need to be carefully planned. At the end of this stage initial parasitic data is available for simulating critical parts of the design.

    During bus planning itself, designers fit as many buses as possible through dedicated channels to avoid timing issues. Very fast signals require shielding with DC signals (such as ground) to prevent crosstalk noise issues. Often architects interleave buses so that they provide shielding for each other rather than using valuable resources for dedicated shielding. But planning, interleaving and routing hundreds of very wide buses is slow and error-prone. Internal tools created to do this are often unmaintainable and inadequate for the new generation of still larger chips.

    Signals on wide buses need to arrive simultaneously with similar slopes so that they can be correctly latched. This means that the paths must match in terms of metal layers, vias, repeaters and so on, a very time-consuming process, especially when changes inevitably need to be made as the rest of the design is completed.

    With interactive and automated planning, routing and management, designers can complete bus and repeater-cell planning in minutes or hours rather than days or weeks. Automation also makes the inevitable subsequent modifications faster and more predictable.

    The Unity Bus Planner product page is here.