
Notes from Common Platform: Collaborate or Die
by Beth Martin on 02-07-2013 at 2:16 pm

FinFETs are hot, carbon nanotubes are cool, and collaboration is the key to continued semiconductor scaling. These were the main messages at the 2013 Common Platform Technology Forum in Santa Clara.

The collaboration message ran through most presentations, like the afternoon talk by Subi Kengeri of GLOBALFOUNDRIES and Joe Sawicki of Mentor Graphics.

Subi talked about technology development, identifying the market drivers of technology (mobile), the technology challenges (power density, metrology for FinFETs), and the future device architectures (III-V, SiGe, carbon nanotubes). Listening to him lay out the technology landscape, one starts to understand why cooperation between Common Platform Alliance members (IBM, Samsung, GLOBALFOUNDRIES) is so important.

While the industry was once full of vertically integrated semiconductor companies who could conceive of, design, and manufacture their ICs under one roof (“real men have fabs”), today, the fabless semiconductor industry depends on the “ecosystem” of IP, EDA, manufacturing, test and packaging. The EDA part of the equation, design enablement and manufacturing ramp, was covered by Joe Sawicki.

Joe emphasized that DFM today involves pulling manufacturing knowledge into the design flow as early as possible. Mentor is keenly aware of the importance of manufacturing on all stages of design, so much so that they built a new website focused on foundry solutions.

To get a manufacturable design, you have to go far beyond simply reading in a technology file with basic spacing rules. You need enough information flowing between tools for a true design/manufacturing co-optimization. From the EDA point of view, this means merging physical verification with place & route (Calibre InRoute), fusing DFM with test and yield analysis, streamlining final verification (pattern matching, DFM scoring), improving metal fill, developing technologies to improve circuit reliability (PERC), and improving test coverage for low- or 0-defect applications (cell-aware ATPG). It begins to look as if the nice, distinct boxes of the IC design flow are escaping their boundaries, blending together. Is it chaos? Is it cats and dogs living together? No, it’s the future; everything working better through mashups and collaborations.

In fact, my analysis of word usage in the day’s presentations revealed that “collaboration” was the most frequently used term of the day. Keep in mind that this reporter’s analysis is based on general recollections during happy hour, but still, I stand by it.


No EUV before 7nm?
by Paul McLellan on 02-07-2013 at 1:31 pm

I was at the Common Platform Technology Forum this week. One of the most interesting sessions is IBM’s Gary Patton giving an overview of the state of semiconductor fabrication. Then, at lunchtime, he is one of the people that the press can question. In this post, I’m going to focus on Extreme Ultra-Violet (EUV) lithography.

You probably know that the biggest challenge is the light source. This is actually made with droplets of tin. One laser shapes the drop and then another, high-powered laser zaps the drop to create plasma and EUV light. The light is collected, bounces off six mirrors with very low efficiency plus a reflective mask, and very little of it, maybe 5%, actually makes it to the wafer to expose the photoresist. Right now the power of the light source is up to about 30W, but it needs to be more like 250W, so it is off by nearly a factor of ten.
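As a rough sanity check on those numbers, here is a back-of-the-envelope sketch of that optical path. The per-mirror and mask reflectivities are assumptions (about 70% each), not measured values, but they show why only a few percent of the source light survives the trip and how large the power gap is:

    # Back-of-the-envelope EUV throughput estimate (illustrative numbers only).
    # Assumes ~70% reflectivity per multilayer mirror, six mirrors plus a
    # reflective mask in the optical path.
    mirror_reflectivity = 0.70   # assumed per-mirror reflectivity at 13.5 nm
    num_mirrors = 6
    mask_reflectivity = 0.70     # the mask is reflective too (assumption)

    transmission = (mirror_reflectivity ** num_mirrors) * mask_reflectivity
    print(f"Fraction of source light reaching the wafer: {transmission:.1%}")
    # -> roughly 8%, in the same ballpark as the ~5% quoted in the talk

    source_power_now = 30.0      # W, roughly where sources were in early 2013
    source_power_needed = 250.0  # W, roughly what production throughput needs
    print(f"Power shortfall: about {source_power_needed / source_power_now:.0f}x")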

Gary also mentioned two other problems that I’ve written about before but which don’t get mentioned nearly so much. After all, these are only problems if we get a usable EUV light source that works reliably ten billion times a day with adequate power.


The two other problems are that masks will not be defect free and that it is not possible to put a pellicle on the mask (since it would absorb all the EUV).

First, the defect problem, which really has two parts. The mask will have defects that print, so either the design needs to be redundant enough that they don’t matter, or the defects need to sit on parts of the mask where printing them does no harm. Either way, you have to know where the defects are, and that is the second part of the problem. You can’t just look for them with visible light since they are too small, so the mask has to be scanned. But the scale of a mask versus the scale of a defect is huge: it is equivalent to searching 10% of California to make sure there are no golf balls there.
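That analogy holds up to a quick order-of-magnitude check. The mask and defect dimensions below are assumptions chosen for illustration, not spec values:

    # Rough scale check of the "golf balls in 10% of California" analogy
    # (all dimensions are assumptions for illustration, not spec values).
    import math

    mask_area = 0.104 * 0.132          # m^2, assumed printable area of a mask
    defect_size = 20e-9                # m, assumed critical defect dimension
    mask_ratio = mask_area / defect_size**2

    california_area = 423_970e6        # m^2 (~424,000 km^2)
    search_area = 0.10 * california_area
    golf_ball_area = math.pi * (0.0427 / 2) ** 2
    golf_ratio = search_area / golf_ball_area

    print(f"mask area / defect area    ~ {mask_ratio:.1e}")
    print(f"10% of CA / golf ball area ~ {golf_ratio:.1e}")
    # Both land around 3e13, so the analogy is the right order of magnitude.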

In refractive optics (what we use today) the mask is covered with a thin transparent film called a pellicle. One purpose of the pellicle is that any contamination that would otherwise land on the mask ends up on the pellicle instead. It is thus out of the focal plane of the stepper and so doesn’t print. EUV masks cannot have a pellicle. That means any contamination will print. Worse, it won’t be obvious that this is happening, and it will continue to print until the mask is cleaned. This is potentially very expensive since hundreds more fabrication steps may take place on a wafer that is actually already a dud. The whole wafer. Every die.

Some people have said that getting EUV to work is “just engineering.” Gary says this is not the case. “It is not just hard engineering work, it is hard physics problems too.” At lunch someone asked Gary if it was like 80% engineering with 20% of physics problems to be resolved. He said he wasn’t really sure how you put a number on it like that but his guess would be 60% physics problems and 40% engineering.

If EUV becomes workable, IBM thinks that it will be for the 7nm node. Not 14nm or 10nm. It seems that “EUV will eventually work because it has to.” But reality doesn’t always cooperate on that basis.


Using Soft IP and Not Getting Burned
by Daniel Payne on 02-07-2013 at 10:11 am

The most exciting EDA + semiconductor IP company that I ever worked at was Silicon Compilers in the 1980s, because it allowed you to start with a concept and implement it all the way to physical layout using a library of parameterized IP. The big problem was verifying that all of the IP combinations were in fact correct. Fast forward to today and our industry still faces the same dilemma: how do you assemble a new SoC designed with hard and soft IP, and know that it will be functionally and physically correct?

They say that it takes a village to raise a child; in our SoC world it takes collaboration between foundry, IP providers and EDA vendors to raise a product. One such collaboration is between TSMC, Atrenta and Sonics.

These three companies are hosting a webinar on Tuesday, March 5, 2013 at 9AM, Pacific time to openly discuss how they work together to ensure that you can design SoCs with Soft IP and not get burned.

Agenda

  • Moderator opening remarks
    Daniel Nenni (SemiWiki) (5 min)

  • The TSMC Soft IP Alliance Program – structure, goals and results
    (Dan Kochpatcharin, TSMC) (10 min)

  • Implementing the program with the Atrenta IP Kit
    (Mike Gianfagna, Atrenta) (10 min)

  • Practical results of program participation
    (John Bainbridge, Sonics) (10 min)

  • Questions from the audience (10 min)

Speakers

Daniel Nenni
Founder, SemiWiki
Daniel has worked in Silicon Valley for the past 28 years with computer manufacturers, electronic design automation software companies, and semiconductor intellectual property companies. Currently Daniel is a Strategic Foundry Relationship Expert for companies wishing to partner with TSMC, UMC, SMIC, Global Foundries, and their top customers. Daniel’s latest passion is the Semiconductor Wiki Project (www.SemiWiki.com).


John Bainbridge
Staff Technologist, CTO office, Sonics, Inc.
John joined Sonics in 2010, working on System IP, leveraging his expertise in the efficient implementation of system architecture. Prior to that John spent 7 years as a founder and the Chief Technology Officer at Silistix commercializing NoC architectures based upon a breakthrough synthesis technology that generated self-timed on-chip interconnect networks. Prior to founding Silistix, John was a research fellow in the Department of Computer Science at the University of Manchester, UK where he received his PhD in 2000 for work on Asynchronous System-on-Chip Interconnect.


Mike Gianfagna
Vice President, Corporate Marketing, Atrenta
Mike Gianfagna’s career spans 3 decades in semiconductor and EDA. Most recently, Mike was vice president of Design Business at Brion Technologies, an ASML company. Prior to that, he was president and CEO for Aprio Technologies, a venture funded design for manufacturability company. Prior to Aprio, Mike was vice president of marketing for eSilicon Corporation, a leading custom chip provider. Mike has also held senior executive positions at Cadence Design Systems and Zycad Corporation. His career began at RCA Solid State, where he was part of the team that launched the company’s ASIC business in the early 1980’s. He has also held senior management positions at General Electric and Harris Semiconductor (now Intersil). Mike holds a BS/EE from New York University and an MS/EE from Rutgers University.


Dan Kochpatcharin
Deputy Director IP Portfolio Marketing, TSMC
Dan is responsible for overall IP marketing as well as managing the company IP Alliance partner program.
Prior to joining TSMC, Dan spent more than 10 years at Chartered Semiconductor where he held a number of management positions including Director of Platform Alliance, Director of eBusiness, Director of Design Services, and Director of Americas Marketing. He has also worked at Aspec Technology and LSI Logic, where he managed various engineering functions.

Dan holds a Bachelor of Science degree in electrical engineering from UC Santa Barbara, a Master of Science in computer engineering, and an MBA from Santa Clara University.

Registration
Sign up here.


Semiconductors Down 2.7% in 2012, May Grow 7.5% in 2013
by Bill Jewell on 02-06-2013 at 10:29 pm


The world semiconductor market in 2012 was $292 billion – down 2.7% from $300 billion in 2011, according to WSTS. The 2012 decline followed a slight gain of 0.4% in 2011. Fourth quarter 2012 was down 0.3% from the third quarter. The first quarter of 2013 will likely show a decline from 4Q 2012 based on typical seasonal patterns and the revenue guidance of key semiconductor companies.

Intel, Texas Instruments (TI), STMicroelectronics (ST), and Broadcom all have similar guidance – with the low end ranging from declines of 9% to 11%, the midpoint a 6% or 7% decline, and the high end a decline of 2% to 4%. AMD’s guidance is slightly more negative, ranging from -12% to -6%. Qualcomm, Toshiba, Renesas Electronics and Infineon expect 1Q 2013 revenues to increase from 4Q 2012, ranging from Qualcomm’s midpoint of 1% to Toshiba’s 15% guidance for its Electronics Devices segment. The major memory companies did not give specific guidance. Samsung expects a seasonal decline. SK Hynix sees strong demand for mobile applications but weak demand for PC applications. We at Semiconductor Intelligence estimate a 2% increase for Micron Technology based on its projections of bit growth and price trends. We estimate the overall semiconductor market will decline 1% to 3% in 1Q 2013.

What is the outlook for the years 2013 and 2014? The latest economic forecast from the International Monetary Fund (IMF) calls for World real GDP growth to accelerate from 3.2% in 2012 to 3.5% in 2013 and 4.1% in 2014. U.S. GDP growth is expected to slow slightly from 2.2% growth in 2012 to 2.0% in 2013 before accelerating to 3.0% in 2014. The Euro Area will continue to work through its debt crisis in 2013 with a GDP decline of 0.2% in 2013 and then recover to 1.0% growth in 2014. Japan’s GDP declined 0.6% in 2011 due to the devastating earthquake and tsunami. Rebuilding boosted Japan’s GDP 2.0% in 2012, but growth is expected to moderate to 1.2% in 2013 and 0.7% in 2014. Asia will continue to be the major driver of World economic growth. China’s GDP is forecast to accelerate from 7.8% in 2012 to 8.2% in 2013 and 8.5% in 2014. The IMF groups South Korea, Taiwan, Singapore and Hong Kong in the category of Newly Industrialized Asia (NIA). NIA GDP growth should pick up from 1.8% in 2012 to 3.2% in 2013 and 3.9% in 2014.



Semiconductor market growth is closely correlated to GDP growth. Our proprietary model at Semiconductor Intelligence uses the GDP growth rate and the change in GDP growth rate (acceleration or deceleration) to predict semiconductor market growth. Other major factors in determining semiconductor market growth are key electronic equipment drivers (such as media tablets and smartphones), inventory levels, fab utilization and semiconductor capital spending. Our prior forecast in November 2012 called for 9% semiconductor market growth in 2013 and 12% growth in 2014. Based on the slight decline in the market in 4Q 2012 and the expected decline in 1Q 2013, we have revised the 2013 forecast to 7.5%. We are holding our 2014 forecast at 12%. The chart below compares our forecast with other recent forecasts for 2013 and (where available) 2014.


At the low end of the 2013 forecasts are WSTS and Gartner at 4.5% and IDC at 4.9%. Mike Cowan’s January model adjusted to the final 2012 data yields a forecast of 6.1%. IHS iSuppli is the highest at 8.3%, slightly higher than our 7.5%. The available forecasts for 2014 are 5.2% from WSTS, 9.9% from Gartner and 12% from Semiconductor Intelligence. Gartner and Semiconductor Intelligence both show significant growth acceleration from 2013 to 2014 while WSTS has only slight acceleration.
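The Semiconductor Intelligence model itself is proprietary, but as a toy illustration of the structure described above (market growth driven by GDP growth plus its acceleration), a sketch might look like the following. The coefficients are invented purely for illustration and are not the real model:

    # Toy illustration of a GDP-driven semiconductor market forecast.  The
    # real Semiconductor Intelligence model is proprietary; the coefficients
    # below are invented to show the structure only: a GDP-level term plus a
    # GDP-acceleration term.
    def semi_growth_estimate(gdp_growth, prior_gdp_growth,
                             k_level=3.0, k_accel=8.0, offset=-5.0):
        """Hypothetical linear model, all values in percent."""
        acceleration = gdp_growth - prior_gdp_growth
        return offset + k_level * gdp_growth + k_accel * acceleration

    # IMF world GDP growth figures cited above (percent)
    print(semi_growth_estimate(3.5, prior_gdp_growth=3.2))  # 2013 estimate
    print(semi_growth_estimate(4.1, prior_gdp_growth=3.5))  # 2014 estimate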

***
Semiconductor Intelligence does consulting and custom market & company analysis. See our website at:
http://www.semiconductorintelligence.com/




RTL Clock Gating Analysis Cuts Power by 20% in AMD Chip!
by Daniel Nenni on 02-06-2013 at 10:00 pm

Approximately 25% of SemiWiki traffic originates from search engines and the key search terms are telling. Since the beginning of SemiWiki, “low power design” has been one of the top searches. This is understandable since the mobile market has been leading us down the path to fame and fortune. Clearly lowering the power consumption of consumer products and networking centers is an important design consideration and this effort begins with the chips used in these devices.

Semiconductor design innovators like AMD wanted to improve on previous generation designs in terms of faster performance in a given power envelope, higher frequency at a given voltage, and improved power efficiency through clock gating and unit redesign.

The AMD low-power core design team used a power analysis solution (PowerPro® from Calypto®) that helped analyze pre-synthesis RTL clock-gating quality, find opportunities for improvements, and generate reports that the engineering team could use to decrease the operating power of the design.

By targeting pre-synthesis RTL, power analysis can be run more often and over a larger number of simulation cycles — more quickly and with fewer machine resources than tools that rely on synthesized gates. The focus on clock gating and the quick turnaround of RTL analysis allowed AMD to achieve measurable power reductions for typical applications of a new, low-power X86 AMD core.
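For intuition about why clock-gating quality matters so much, here is an illustrative calculation (not AMD’s data) using the standard dynamic-power relationship; the capacitance, voltage, frequency and gating figures are all assumed:

    # Illustrative only: how clock-gating efficiency maps to dynamic power,
    # using P_dyn ~ alpha * C * V^2 * f.  The numbers below are invented,
    # not AMD's data.
    def dynamic_power(alpha, c_eff, vdd, freq):
        """Switched-capacitance dynamic power estimate (watts)."""
        return alpha * c_eff * vdd**2 * freq

    c_eff = 2.0e-9   # F, assumed effective switched capacitance of clock loads
    vdd = 0.9        # V, assumed supply
    freq = 1.6e9     # Hz, assumed clock frequency

    # alpha is the fraction of cycles the clock actually toggles at the flops,
    # i.e. 1 minus the clock-gating efficiency.
    baseline = dynamic_power(alpha=1 - 0.60, c_eff=c_eff, vdd=vdd, freq=freq)
    improved = dynamic_power(alpha=1 - 0.68, c_eff=c_eff, vdd=vdd, freq=freq)

    print(f"baseline clock-load power : {baseline:.2f} W")
    print(f"after gating improvements : {improved:.2f} W")
    print(f"reduction                 : {1 - improved / baseline:.0%}")

With these made-up numbers the arithmetic lands at a 20% reduction; the point is simply that a modest improvement in the fraction of gated cycles translates directly into dynamic power saved.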

This article by Steve Kommrusch of AMD describes the power analysis methodology AMD used to improve clock-gating efficiency and identifies the key features and advantages that the tool delivered. Quantitative results are interpreted and presented in graphs and tables. Comparative data between PowerPro results and PTPX post-synthesis results show that doing power analysis at the RTL stage, rather than waiting until after gate-level synthesis, was very useful.

Ultimately, even given instructions per clock (IPC) and frequency improvements, PowerPro helped achieve an approximately 20% reduction in typical dynamic application power compared to an already-tuned low-power X86 CPU. You can read the whole article HERE.

Calypto Design Systems leads the industry in technologies for ESL hardware design and RTL power optimization. These technologies empower designers to create high quality and low power electronic systems for today’s most innovative electronic products.


UVM: Lowering the barrier to IP reuse
by Don Dingee on 02-06-2013 at 2:00 am

One of my acquaintances at Intel must have some of the same viewing habits I do, based on a recent Tweet he sent. He was probably watching “The Men Who Built America” on the History Channel and thinking as I have a lot recently about how the captains of industry managed to drive ideas to monopolies in the late 1800s and early 1900s.

Difference between 1800’s & today is that barriers to entry r so low & marketplace is so varied that monopolists have very narrow domains.

The comment on technological barriers being lowered is true, especially in a new semiconductor industry that relies more and more on merchant foundries and commercial IP, now the building blocks of choice for most teams. Variation in the market is also a truism, and it is forcing former deadly rivals to cooperate so that all may prosper – a distinct shift from the winner-take-all thinking that dominated so much of the Industrial Age.

In an EDA industry marked by at least three distinct approaches, the shift in the landscape is driven by a huge problem: IP has only been reusable under carefully controlled conditions, usually meaning adoption of a particular tool chain and verification methodology. Paradoxically, as more IP has been developed, the problem has worsened. Not only does new IP require design and test, but steps are being retraced to reengineer and integrate existing IP into new environments. This mix eventually becomes overwhelming, if not for design resources then certainly for test resources.

The genesis of the Universal Verification Methodology (UVM) fascinated a lot of people, who wondered why Cadence, Mentor, and Synopsys would cooperate, or even be seen in the same photo. Unifying the disparate approaches to IP verification lowers a major barrier to IP integration and reuse, and UVM provides a better and faster way to test using coverage-driven verification.

More people are getting interested in UVM and SystemVerilog, including participants here at SemiWiki – here are a couple samples of recent forum contributions:

Evolution of the test bench – part 2
SVA : System Verilog Assertion is for Designer too!!

As with any new standard, it takes time for people to understand and embrace the technology. Both the hardware IP designer and the test/verification engineer should take note. The definitive source document is the UVM 1.1 User Guide, free for open download at Accellera.

A new learning resource has debuted this week. Aldec has launched their Fast Track ONLINE training portal, with a series of modules planned on UVM. These self-paced modules give practical examples of concepts related to SystemVerilog, transaction level modeling (TLM), and more on the standard and how to implement it.

Registration is simple, and Fast Track training modules are free.

Creating and leveraging truly reusable IP is one of the keys to getting more designs done, tested, and launched. Success in reusing IP – hardware, software, design, test, everything that goes into the final integrated product – frees up resources for innovative breakthroughs and differentiation of platforms. UVM is a positive change for the EDA industry, and should be a major help for designers willing to embrace it.


Could “Less than Moore” be better to support Mobile segment explosion?
by Eric Esteve on 02-05-2013 at 4:52 am

If you take a look at the explosion of the mobile segment, linked with the fantastic worldwide adoption of smartphones and media tablets, you clearly see that the SC industry’s evolution over at least the next five years will be intimately correlated with the mobile segments. Not really a surprise, but the question I would like to raise is: will this explosion of SC revenue in the mobile segment only be supported by applying Moore’s law (the race for integration, more and more functionality in a single chip targeting the most advanced technology nodes), or could we imagine that some mobile sub-segments could be served by less integrated solutions, offering much faster TTM and, in the end, a better business model and more profit?

At first, let’s identify today’s market drivers, compared with those of the early 2000s. At that time, the PC segment was still the largest, we were living in a monopolistic situation, and Intel was benefiting from Moore’s law, technology node after technology node. To schematize, the law says that you can choose between dividing the price by two (for the same complexity) or doubling the complexity (for the same price), and in both cases increase the frequency. Intel clearly decided to increase the complexity, keep the same die size… and certainly not decrease the price! TTM was not really an issue for two reasons: Intel was (and still is) the technology leader, always the first to support the most advanced node, and the company was in a quasi-monopolistic situation.

The explosion of mobile has changed the deal: performance is still a key factor, but the most important is power consumption (say, MIPS per Watt). Price has become a much more important factor too, even if performance is still key, in such a competitive market, with more than 10 companies addressing the application processor market. And TTM is also becoming a key factor in such a market.
To summarize, we move from a PC market where performance (and, to some extent, TTM) was the key factor to a mobile market where MIPS per Watt, price and TTM are key. Unfortunately, I don’t think that following Moore’s law in a straight line can efficiently address these three parameters…

  • Leakage current is becoming such an important issue, as highlighted in this article from Daniel Payne, that moving to the next node will still help increase CPU/logic performance, but may in the end decrease power efficiency (MIPS per Watt)! This has forced design teams to use power management techniques, but the induced complexity has a great impact on IC development and validation lead-time…
  • Price: we talk about IC average selling price (ASP), but chip makers think first in terms of cost, and only then ASP when they sell the IC. There are two main factors affecting this cost: IC development cost (design resources, EDA, IP budget, masks, validation…) and chip fabrication cost in production (wafer, test, packaging). If you target an IC ASP in the $10-20 range, like an application processor for a smartphone, you quickly realize that, if your development cost is in the $80 million range (for a chip in 20nm), you must sell… quite a lot of chips! More likely, the break-even point is around 40 or 50 million chips sold – see the quick sketch after this list.
  • Time-To-Market (TTM): once again, we are discovering that, for each new technology node, the time between design start and release to production (RTP, when you start to get a return on your investment) is longer and longer. It would take a complete article to list all the reasons, from the engineering gap to longer validation lead-times to increased wafer fab lead-times, but you can trust me: strictly following Moore’s law directly induces a longer overall lead-time to RTP!
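A quick sketch of that break-even arithmetic, with a per-chip gross margin that is purely an assumption (the article only gives the ASP range and the development cost):

    # Quick break-even sketch for the numbers in the article.  The gross
    # margin per chip is an assumption; the article only gives the $10-20
    # ASP range and the ~$80M development cost for a 20nm design.
    development_cost = 80e6     # $ (design, EDA, IP, masks, validation)
    asp = 15.0                  # $, mid-point of the $10-20 range
    gross_margin = 0.12         # assumed fraction of ASP left after unit costs

    margin_per_chip = asp * gross_margin
    break_even_units = development_cost / margin_per_chip
    print(f"Break-even volume: about {break_even_units / 1e6:.0f} million chips")
    # With these assumptions, roughly 44 million chips -- in line with the
    # 40-50 million figure above.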

Does that mean that Qualcomm, for example, is wrong when proposing the above three-step road-map, ending with a single-chip solution? Certainly not… but they are Qualcomm, the emerging leader (and for a long time, in my opinion) in the mobile segment, offering the best technical solution, with a TTM advantage. But, if you remember, more than two billion systems will ship in the mobile segment by 2016, which means about 20 billion ICs… We can guess that Qualcomm will NOT serve all the mobile sub-segments, which leaves the competition able to enjoy some good pieces of business! This article is addressed to the Qualcomm (and Samsung, or even Apple) followers: “Less than Moore” could be a good strategy too! I realize that it will take another post to describe the possible strategies linked to “Less than Moore”, so please be patient, I will release this in a later article…

From Eric Esteve from IPNEST


Sanjiv Kaul is New CEO of Calypto
by Paul McLellan on 02-04-2013 at 11:15 am

Calypto announced that Sanjiv Kaul is the new CEO. I first met Sanjiv many years ago, when he was still at Synopsys and I interviewed for a position there, around the time I transitioned out of Compass and went back to the parent company VLSI. I forget what the position was. Then, about three or four years ago when I did some work for Oasys, he was on the board there, serving as executive chairman and helping them with marketing part time. Funnily enough, Oasys got a new CEO too just before the end of last year.

Sanjiv was senior VP and GM for Physical Compiler when he was at Synopsys. Prior to that he was the marketing director for the launches of PrimeTime and Formality. Since then he has been involved in many startups, not all EDA, as advisor, board member or in operational roles.

Apparently this has been very closely held. Calypto’s PR agency only found out today, and the Calypto website hadn’t been updated when this was written. UPDATE: now it has been.

And here he is again, looking rather less CEO-like, along with Paul van Besouw and Joe Costello from Oasys’s DAC video four years ago.

Press release is here.

Also Read:

Atrenta CEO on RTL Signoff

CEO Interview: Jason Xing of ICScape Inc.

CEO Interview: Jens Andersen of Invarian


Software Driven Power Analysis
by Paul McLellan on 02-03-2013 at 8:15 pm

Power is a fundamentally hard problem. When you have finished the design, you have accurate power numbers but can’t do anything about them. At the RTL level you have some power information but it is often too late to make major architectural changes (add an offload audio-processor, for example). Early in the design, making changes is easy but you often lack accurate data to use to guide what changes make sense.

In many systems, power depends on what the SoC is doing and, in turn, that depends on the software. For example, a cell phone obviously consumes different amounts of power when you are making a call than when it is just sitting in your pocket. In fact, the power changes dramatically depending on whether you are talking (lots of data to transmit) or listening (the transmitter, a huge power hog, is mostly idle). What the system is doing depends largely on what the software on the phone has enabled and disabled.

Assuming you have a virtual platform (and there are all sorts of reasons why you should, which have been covered in earlier blogs), then you can use it to do power analysis. The reality in many SoC designs is that the power consumption depends on the contents of a comparatively small number of registers that enable or disable various functions, maybe disable their clocks, maybe power them down completely.

Architectural power analysis proceeds in three steps. First, identify the critical registers that alter the system behavior in ways that have a big impact on power. For example, going back to the cellphone, there are probably registers that enable and disable the transmit and receive logic for the radio. If it is a multi-mode phone (GSM and CDMA) for sure there are registers that disable the logic associated with the unused standard. There may be registers that change the clock frequency of DSPs or control processors.

Second, having identified the registers, the blocks concerned need to be analyzed for power for each register setting, using traditional EDA power analysis, probably at the RTL level unless more detailed information is available. The more “typical” the vectors you can get your hands on, the better. Functional verification vectors are usually not very good for this since they are trying to exercise as many corner cases as possible, which, by definition, don’t happen very often.

So now you have a table of registers, and for each setting of those registers you have average power numbers. Next, instrument the virtual platform with callbacks. Each time the value of one of those registers changes, the change needs to trigger a callback into the virtual platform infrastructure and ultimately be logged along with the time or the clock cycle.

Now you have a fully instrumented system. How do you exercise it? You run the software load. There are probably various software scenarios (boot, standby, listening to mp3, making a call, sending a text, surfing the net, using GPS and so on). You can run as many of these as is appropriate. You will end up with a logfile that shows when every register that has a major effect on power changes. It is straightforward to work out what the average power for each block was and for how long, and then to add them up to get a total power number for that scenario. Obviously the total power for a system like a cellphone depends on what assumptions you make about how it is used. A teenager listening to mp3s all day and sending lots of text messages will have very different power consumption (aka battery life) from a salesman who practically lives on the phone making calls.
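To make that bookkeeping concrete, here is a minimal sketch of the last step, assuming a characterized power table and a register-change log in the form described above. All block names, modes and numbers are hypothetical:

    # Minimal sketch: characterized average power per (block, mode) from
    # up-front RTL power analysis, plus a timestamped log of power-mode
    # register changes from the virtual platform.  Everything is hypothetical.
    block_power_mw = {
        ("radio_tx", "on"): 350.0, ("radio_tx", "off"): 2.0,
        ("radio_rx", "on"): 120.0, ("radio_rx", "off"): 1.5,
        ("dsp", "full_clk"): 80.0, ("dsp", "half_clk"): 45.0,
    }

    # (time_in_seconds, block, new_mode) -- as logged by the register callbacks
    register_log = [
        (0.0, "radio_tx", "off"), (0.0, "radio_rx", "on"), (0.0, "dsp", "half_clk"),
        (2.0, "radio_tx", "on"),  (2.0, "dsp", "full_clk"),
        (5.0, "radio_tx", "off"), (5.0, "dsp", "half_clk"),
    ]
    scenario_end = 10.0  # seconds in the "making a call" scenario, say

    def scenario_energy(log, end_time):
        """Integrate block power over the intervals between register changes."""
        state, last_time, energy_mj = {}, 0.0, 0.0
        for time, block, mode in log + [(end_time, None, None)]:
            energy_mj += (time - last_time) * sum(block_power_mw[key]
                                                  for key in state.items())
            if block is not None:
                state[block] = mode
            last_time = time
        return energy_mj

    energy = scenario_energy(register_log, scenario_end)
    print(f"Scenario energy : {energy:.0f} mJ")       # mW * s = mJ
    print(f"Average power   : {energy / scenario_end:.1f} mW")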

This doesn’t work for every SoC. In particular, if the power depends more on the data than on the mode the chip is in, then it is very important to have realistic data streams. Many SoCs are not like that, though. A cellphone’s power depends much more on whether you are speaking and hardly at all on what you say.

Download the Carbon whitepaper here.


Help, my IP has fallen and can’t get up
by Don Dingee on 02-03-2013 at 8:10 pm

We’ve been talking a lot about the different technologies for FPGA-based SoC prototyping here on SemiWiki. On the surface, the recent stories all start off pretty much the same: big box, Xilinx Virtex-7, wanna go fast and see more of what’s going on in the design. This is not another one of those stories. I recently sat down with Mick Posner of Synopsys, who led off with this idea:
Continue reading “Help, my IP has fallen and can’t get up”