
If requirements ask for it, it had better be there
by Don Dingee on 01-29-2014 at 8:00 pm

Engineers are known for their attention to detail and precision in thinking, but they can still struggle during compliance audits. This is especially true as a list of requirements grows longer, particularly unstructured lists kept in spreadsheets and on Post-It notes.

It gets even more complicated because, in defense circles with standards like DO-254, one has to understand both the process and “the customer”.


A Brief History of Qualcomm
by Paul McLellan on 01-29-2014 at 12:48 pm

Qualcomm is the largest fabless semiconductor company in the world. If you have a smartphone, there is a good chance you have a Qualcomm chip in your pocket. It is headquartered in San Diego with offices pretty much everywhere.

Qualcomm’s roots are in Linkabit, which was founded by Irwin Jacobs and Andrew Viterbi. They, along with other Linkabit alumni, founded Qualcomm in 1985. The story I heard is that one of the motivations came from Viterbi, who invented the Viterbi decoder in the 1960s. The decoder is widely used in cell-phones and disk drives, but he felt they never really made money from licensing it, given how widespread it was, and he wanted to create a company that could license technology much more profitably. Whether that story is true or not, it is certainly one aspect of how things played out.

Qualcomm started with a CDMA-based satellite radio system for truckers called OmniTracs, which remained part of Qualcomm until late last year. You might remember that in that era they also supplied the Eudora email client, which was part of OmniTracs but also available separately.

A couple of years later they developed all the technology for CDMA cell-phones and entered both the base-station business and (in a joint venture with Sony) the cell-phone business. They also sold CDMA chips to other manufacturers and licensed CDMA technology to other chip makers. They were pretty much a one-stop shop for CDMA. In 1993 the US Telecommunications Industry Association adopted Qualcomm’s CDMA as an industry standard. Initially Sprint and Verizon were both using CDMA while most other operators were using GSM.

Qualcomm is today around a $25B company, split into two main parts: Qualcomm Technology Licensing, the patent-licensing division, and Qualcomm Technologies Inc., which runs engineering and, in particular, the fabless semiconductor business.

I negotiated a technology licensing deal with Qualcomm in around 1997 when I was at VLSI Technology. Just to show you how fast the company has grown, given that it is $25B today, here is the story. We had been unable to get what we considered reasonable terms and had walked. Qualcomm wanted a royalty from us, which was reasonable. But they also wanted a royalty from anyone we sold chips to, for use of the same patents we had already licensed. We felt that put us at too severe a disadvantage competing against Qualcomm’s own chipsets. At the end of Q2, Qualcomm needed $2M to make their quarter. They significantly lowered the royalties and caved on some other conditions, provided we could pay them $2M in non-refundable pre-paid royalties (so they could recognize the revenue that day; this really was the end of the quarter). VLSI Technology was in the CDMA business as well as GSM, where we already had a strong presence. My guess is that the license was inherited first by NXP and then by the now-defunct ST-Ericsson. Anyway, the point is that back then $2M was make or break for their quarter; now that is about what they make in an hour.

In the late 1990s, Qualcomm got out of both the base station and handset businesses and focused completely on technology licensing and fabless SoC development.

3G and 4G wireless air interface standards all depend on various aspects of CDMA and so require patent licenses from Qualcomm, who have continued to innovate and develop more advanced CDMA technologies ahead of the competition. I believe it is not possible to build a cellphone SoC without a patent license (well, unless you are in China, where you can claim the patents are not violated). Just this week Qualcomm acquired a further portfolio of patents from HP, including the Palm patents and others.

Since 2007, the current line of SoCs has been sold under the name Snapdragon. Qualcomm have an ARM architectural license and design their own CPUs using the ARM instruction set; Krait is the name of the latest incarnation. They also design their own graphics processor (GPU), called Adreno, and digital signal processor (DSP), called Hexagon. They recently purchased Arteris’s technology and engineering group, whose network-on-chip (NoC) technology they had been using.

Snapdragon chips integrate the application processor (AP) and modem, unlike many of their competitors who use two separate chips. More recent Snapdragon chips also have on-chip WiFi and Bluetooth. They are used in a huge variety of cell-phones, including the Samsung Galaxy, Xiaomi phones and other market leaders. Although Apple builds its own Ax application processors, it uses Qualcomm modems.

Almost all their chips are built in the TSMC 28nm LP process, although they are sampling chips in TSMC 20nm too and will presumably ramp those to volume during 2014.


High Quality PHY IPs Require Careful Management of Design Data and Processes
by Pawan Fangaria on 01-29-2014 at 10:05 am

In the last few years, IP design has grown significantly compared to the rest of the semiconductor industry. Newer IP start-ups are opening across the world, particularly in India and China. Amid this rush, I wanted to understand the actual dynamics pushing this business and whether all of these IPs follow quality standards. Quality is a must considering IP integration into high-end SoCs. I found a very nice opportunity talking to Ritesh Saraf, CEO at OmniPhy. OmniPhy develops specialized IPs such as SerDes PHYs for top-tier companies, including HDMI 2.0, Ethernet, USB, PCIe and SATA PHYs.

What I learned from Ritesh is that there are a few major reasons for the growth of IP business:
a) The number of protocols, their complexity, and the required speed of execution have all grown. This has forced SoC vendors to source IPs from third parties and integrate them into their SoCs rather than develop everything themselves. Only a few players still develop IPs in-house.
b) Emerging economies like China and India have proved their mettle in making successful IPs at lower cost. Also, there is good availability of talent in these regions — one can find designers with 6-8 years of experience in AMS design which is generally difficult in the USA. This often tips the scales for a “buy vs. make” strategy.

Earlier, SoC vendors were satisfied with off-the-shelf IPs from third-party vendors, but in recent times they are also demanding differentiation and customization at a faster pace.

Considering the gold rush towards developing IPs with new entrants, short cycles and the desire to have them at lower cost, I was concerned about the quality of these IPs: Is it being sacrificed somewhere? It was interesting to learn from Ritesh that to lower the cost of IP, vendors may cut costs somewhere in the development process, verification process, the tools used for design management and so on. It’s important to have designers experienced with taking designs through production, otherwise there can be failure either before production (initiating re-spins) or later in the field.

So I wanted to know what OmniPhy does to maintain the quality of their IPs. Ritesh described in detail various aspects of their quality process, such as controlled design management to use the correct versions of cell views in the entire design flow, diligent design reviews, various levels of testing and signing-off through a comprehensive checklist of procedures with extensive rules. They require their designers to have extensive experience in order to make effective decisions during the design process.

Ritesh said the use of effective quality tools makes a difference. For digital design, they have been using open-source tools such as SVN for design management. But analog designs have a different flow: development and verification go hand-in-hand between different designers on the team, and that needs much tighter control of design revisions. AMS designs involve both analog and digital designers, who think differently. So there is a need for an intelligent design management tool that eases the pressure of check-in/check-out synchronization and the sharing of cell views between designers, and that ensures the correct views are used at higher levels of the design while being seamlessly integrated into the AMS design flow. A lot of bugs appear during top-level verification, and procedural diligence is needed to flush them out.

Ritesh then recalled the earlier days of small analog designs (such as an IO or ADC with 10-20 cells), when designers managed them manually, something not possible today. Not using a good design management (DM) tool is a big risk. OmniPhy has analog designs with thousands of cells, with 20-25 designers working on one design at a time. It’s imperative to have an integrated DM and control solution for analog IP design to ensure quality.

OmniPhy uses ClioSoft’s SOS for design management. SOS is a good vehicle to control the design flow: the verification team does not need to wait, they can check out the DUT from the system, get all information about the changes (who, what, why…) and continue with the verification process. The tool tracks the changes to be verified and the resolution of all issues. At tape-out time, management can use SOS to freeze the design (i.e., make the completed cells read-only). Any change at that time would be based on management’s decision. In other words, it’s a nice control on creeping elegance! The DM system by ClioSoft provides a greater level of confidence in the state of the design.
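The freeze behavior described above can be sketched in a few lines. This is a toy model for illustration only; the class and method names are invented and this is not ClioSoft’s actual API:

```python
# Toy model of design-management "freeze" semantics (illustrative only;
# these names are invented and are not ClioSoft's actual API).
class DesignLibrary:
    def __init__(self):
        self.cells = {}       # cell name -> latest revision number
        self.frozen = set()   # cells made read-only at tape-out

    def check_in(self, cell, *, override=False):
        """Record a new revision; frozen cells need a management override."""
        if cell in self.frozen and not override:
            raise PermissionError(f"{cell} is frozen; management sign-off required")
        self.cells[cell] = self.cells.get(cell, 0) + 1

    def freeze(self, cell):
        """Make a completed cell read-only (a guard against creeping elegance)."""
        self.frozen.add(cell)

lib = DesignLibrary()
lib.check_in("serdes_top")                  # rev 1
lib.freeze("serdes_top")                    # tape-out freeze
# lib.check_in("serdes_top")               # would raise PermissionError
lib.check_in("serdes_top", override=True)   # rev 2, with management sign-off
```

The point of the pattern is that after the freeze, every change becomes an explicit management decision rather than a silent edit.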

Ritesh was kind enough to share some of the screenshots of their actual designs and flows.


[Analog PHY IP – Data Management complexity]

An HDMI 2.0 PHY design like that above has 8-10 schematic designers, 8-10 layout designers and about 4 verification designers.


[Data Flow for AMS PHY IP]

The analog design flow uses Cadence Virtuoso and ClioSoft SOS, whereas the digital flow uses the SVN open-source version control system. The digital flow is easy to maintain because there is a clear distinction between development and test, but in the case of analog design, an integrated DM is a must.

This is the top-level assembly of a PHY. All custom design views are managed in ClioSoft SOS via DFII (the integration of SOS into Virtuoso). Digital PnR blocks are checked into the same library. The DM system assures that the blocks at the top level have passed final verification and ensures a stable state of the design data at the top level.

Considering this complexity in designing IPs, I asked Ritesh whether the ROI justifies the cost of the DM tool. Ritesh happily said, “It doesn’t cost at all, considering the savings in re-spins. If you talk in monetary terms, it’s just ~2% of our total EDA spend.”

Also Read

ClioSoft at Arasan

Data Management in Russia

Managing Multi-site Design at LBNL


The Biggest Supplier in the Biggest Mobile Market is a Company You Have Never Heard Of
by Paul McLellan on 01-29-2014 at 10:05 am

If you live in the Bay Area it is easy to conclude that Apple has huge market share and is in a very strong position in the mobile market. Everyone has an iPhone.

But the truth is less flattering. Yes, Apple continues to make large profits and it made record iPhone shipments. However, it shipped only 51M units when Wall Street expected nearly 55M. The problem is that Apple is growing much more slowly than the overall market: Apple grew 7% year-on-year in Q4, but the market grew by about 50%. Apple’s market share is down to 16% in Q4, from 22% in Q4 of 2012. With (probably) no new models for a couple of quarters, I think that market share will continue to shrink.
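The share arithmetic above can be sanity-checked with the approximate figures quoted in this article:

```python
# Approximate Q4 figures quoted in this article (units in millions).
apple_q4_2013 = 51.0      # iPhone shipments, Q4 2013
apple_growth = 0.07       # Apple's year-on-year growth
market_growth = 0.50      # overall smartphone market growth

# Back out last year's numbers from the growth rates.
apple_q4_2012 = apple_q4_2013 / (1 + apple_growth)
# If Apple had 22% share in Q4 2012, the total market then was:
market_q4_2012 = apple_q4_2012 / 0.22
market_q4_2013 = market_q4_2012 * (1 + market_growth)

share_2013 = apple_q4_2013 / market_q4_2013
print(f"Implied Q4 2013 share: {share_2013:.0%}")  # close to the 16% quoted
```

Growing 7% in a market growing 50% really does take share from 22% down to about 16% in a single year.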

The other big company is Samsung. They are #1 in unit shipments at around 90M, not far off twice Apple’s volume. While not as profitable as Apple due to its product mix, it is still a very profitable business. Second-tier players like Sony, LG and Nokia are all struggling: not cheap enough to be competitive at the low end, but without the fashion cachet of an iPhone or Samsung Galaxy. Meanwhile, other Chinese names (Huawei, ZTE and Lenovo) continue to do well.

The biggest market for smartphones is China, and the growth is all at the low end. Yes, Apple finally has a deal with China Mobile, but the iPhone is really too highly priced to achieve the kind of market share it has in the West. Samsung used to be #1 in pretty much every market (it is #1 worldwide), but in China last quarter the leader looks to be Xiaomi. That’s right: a company you have almost certainly never heard of, founded in 2010 and shipping its first smartphone only in 2011, overtook Samsung in the biggest smartphone market of all. This is unit sales, of course. Samsung has products at every price point and presumably sold its share of the high-end Galaxy phones that compete pretty much head-on with the iPhone, so it probably made the most money in China. That is a Xiaomi phone above (the first two characters are xiaomi, “small rice”; the next two are shouji, “hand machine”, which is what they call mobile phones. End of your Chinese language trivia for the day).

The reality is that the market is pretty mature. Android has leveled the playing field, so the user experience is very similar on all phones. The growth numbers in $ terms are slowing too, which reflects both the maturing of the market and the transition to cheaper phones. Roughly, the market grew 40% last year and is expected to grow a little over 20% this year, half the rate.

Mobile will continue to be the biggest market for chip suppliers. I think the Internet of Things (IoT) is at the overhype stage right now. Sure, Google just bought Nest for a lot of money, and wearables were the big thing at CES. But these are not going to sell in the billions of units in 2014 (there were over a billion smartphones sold in 2013), and they are not going to sell at $600 price points like the high end of the smartphone market.

2014 should be an interesting year.


More articles by Paul McLellan…


The Changing Semiconductor Foundry Landscape!
by Daniel Nenni on 01-29-2014 at 8:00 am

The foundry landscape is changing again and it is definitely something that should be discussed. There are some people, mostly influenced by Intel, who feel the foundry business has hit the wall at 20nm, which couldn’t be further from the truth. After spending 30 years working in Silicon Valley, I have experienced a lot of change, which is why I founded SemiWiki.com and co-authored a book on the fabless semiconductor revolution. Chronicling this change and looking towards the future is for the greater good of the semiconductor industry, absolutely.


A big change happened at 28nm, when TSMC was the only foundry to yield, which resulted in wafer shortages and fab capacity issues. Of course TSMC did not initially build capacity for 90% market share; what semiconductor company would (with the exception of Intel)? Fabless companies such as Qualcomm, Broadcom, and Marvell that were used to multiple manufacturing sources were limited to a single source at 28nm, which was not a comfortable position for them at all. Pricing and delivery are everything in this business, thus the multiple-manufacturing-source business model. As it stands today, 20nm looks to be the same, with TSMC in a dominant market position.

The top fabless companies will make a correction at 14nm and use both TSMC and Samsung for competitive pricing and delivery. There really was no other choice, since GlobalFoundries does not yet have the capacity to source a QCOM or Apple and Intel 14nm failed to make a passing foundry grade. With the exception of Altera, NONE of the top fabless semiconductor companies will use Intel at 14nm, which is one of the reasons why the Intel fab #42 in Arizona is being shuttered, in my opinion. If fabless companies had the choice between Samsung/Intel and GlobalFoundries they would choose GF without a doubt. Working with an IDM/foundry that competes with you is a last resort for sure.

This change is of great help to the fabless semiconductor ecosystem in regards to jobs and design enablement (EDA and IP, for example). Due to ultra-strict security measures and process differences, it will require many more engineers, tools, and IP to manufacture at both TSMC and Samsung at 14nm. This cost of course will be offset by cheaper wafers, thanks to the pricing pressure competition brings.

If you want a more detailed understanding of the changing foundry landscape there are three very good sources of information:


  • IC Insights’ McLean Report
  • GSA 2014 Foundry Almanac
  • Me

    Why me? Because pound for pound I have access to more reports, attend more conferences, and talk to more semiconductor people than anyone else in this industry, believe it. I am connected to 17,962 semiconductor professionals on LinkedIn so if I don’t know the answer to your question I most certainly know someone that does. Generally I make people buy me lunch for a discussion on the foundry business but now that my book “Fabless: The Transformation of the Semiconductor Industry” is out, if you buy the book I would be happy to take your call or email and answer whatever questions you may have. Connect with me on LinkedIn, if you haven’t already, and let’s talk.

    More Articles by Daniel Nenni…..



    CDNLive World Tour
    by Paul McLellan on 01-28-2014 at 11:00 pm

    CDNLive is becoming a real worldwide event, starting in March in San Jose and ending in November in Tel Aviv, Israel.

    The complete schedule is:

    • March 11-12th, Santa Clara, California
    • May 19th-21st, Munich, Germany
    • July 15th, Seoul, Korea
    • August 15th, Shanghai, China
    • August 7th, Hsinchu, Taiwan
    • August 11-12th, Bangalore, India
    • September 16th, Boston, Massachusetts
    • September 18th, Austin, Texas
    • November 3rd, Tel Aviv, Israel

    As always, a lot of the content is presentations by users of Cadence tools, or in-depth ‘techtorials’. Cadence knows that users don’t come to hear marketing guys present a lot of PowerPoint; they want to hear stories from the trenches, from either Cadence customers or Cadence’s own black-belt application engineers.

    The call for papers for CDNLive Silicon Valley closed in December, so you are too late if you want to present this year (although how about Germany in May or Austin in September for a change of scenery?).

    To give you a better idea of how this works out, here are the highlights of the 2013 CDNLive in Silicon Valley. Of course the speakers will be different and some of the details will probably change, but the basic format this year will be the same, with a lot of parallel special-interest tracks, keynotes, exhibits and more.

    • Technical Sessions: Sessions took place in nine tracks over the two full days of the conference. Close to 100 presentations were delivered, including a wide variety of user-authored papers addressing all aspects of design and IP creation, integration, and verification. Attendees discovered how others are using Cadence technologies and techniques to realize silicon, SoCs, and systems—efficiently and profitably.
    • Keynote speakers: Attendees heard from industry leaders, Lip-Bu Tan (Cadence President & Chief Executive Officer), Young Sohn (Samsung President & Chief Strategy Officer) and Martin Lund (Cadence Sr. VP Research & Development) about industry trends in silicon, SoC, and system realization.
    • Designer Expo: More than 35 exhibitors participated in the Designer Expo, and highlighted the collaborative ecosystem available to support you. Cadence and our partners enjoyed lunch and an evening reception mingling with customers and exploring joint solutions.
    • Networking opportunities: The R&D luncheon offered an informal atmosphere to engage in stimulating technology discussions with Cadence technologists and industry peers.


    If you have a Cadence account you can download the full proceedings from last year here.

    Details of this year’s CDNLive events can be found on this page, which has links to more details for each of the individual events (and you can download proceedings for any of the other CDNLives). The CDNLive page is here.


    More articles by Paul McLellan…


    A Brief History of the Apple iPod
    by Daniel Payne on 01-28-2014 at 9:58 pm

    In January 2001 we had a new American president, George W. Bush, I was working at Mentor Graphics, and Apple introduced an MP3 player called the iPod, with a hard drive capable of holding 1,000 songs. In the previous decades we had enjoyed portable music from tape-based, CD, or mini-CD devices like the Sony Walkman. The first several generations of the iPod used two ARM7TDMI-derived CPUs, clocked at just 90 MHz to keep battery life reasonable. [1] The iTunes software helped you organize the music, but at first it only worked on Apple computers.


    iPod, 1st generation, 2001 [2]

    Subsequent generations of the iPod continued to use ARM architecture chips, while the audio chip came from either Wolfson Microelectronics or Cirrus Logic. iTunes added support for Windows in 2002, and the iTunes Music Store launched in 2003 with some 200,000 titles to buy, so buying music became easier and more affordable. The iPod family began to branch out in 2004 with the iPod mini, which achieved a smaller size by using the Microdrive from Hitachi and Seagate.


    iPod Mini, 2004

    Just one year later, in 2005, the iPod nano line basically replaced the iPod mini. It is best noted for its use of Flash memory instead of a hard drive, so we now had the first solid-state music player by Apple, driving volume for Flash memory chips. The entry-level iPod shuffle debuted that same year; it had no display and also used Flash for music storage.


    iPod Nano, 2005


    iPod Shuffle, 2005

    Competitors to the iPod like the Microsoft Zune tried to enter the market; the Zune first sold in 2006 but was out of the market 5 years later. [3] Microsoft just never had the marketing buzz, ease of use, or online store success that Apple created.

    2007 was the year that Apple announced both the iPhone and iPod Touch products, where the iPod Touch was similar to the iPhone in terms of size, glass display and multi-touch surface, but without a cell phone radio. All the cool kids in middle school could now own an iPod Touch and look like an adult without having to pay a monthly cell-phone bill.


    iPod Touch, 2007

    Sales of the iPod family of products grew slowly from 2002-2003, then rose briskly in 2004. iPod volumes peaked in 2009, then declined due to increased competition, market saturation, and the trend towards smartphones offered both by Apple and a growing list of Android competitors.

    Semiconductors used inside the iPod touch include: [4]

    • Apple – ARM-based CPU (similar in iPhone)
    • Toshiba – NAND Flash (Samsung Flash in the iPhone)
    • Wolfson – audio chip (shared in iPhone)
    • Samsung – DRAM
    • Marvell – WiFi chip
    • Apple – Communications chip
    • Broadcom – touch screen controller chip
    • STMicroelectronics – motion sensor chip
    • Texas Instruments – video driver chip

    Notice how, by 2007, Apple had started designing its own ARM-based chips and was headed down the path of a fabless design company. Apple and ARM go together all the way back to 1990, when Advanced RISC Machines Ltd was created as a joint venture between Acorn Computers, Apple Computer and VLSI Technology.

    When Apple doesn’t design its own chips for these high-volume, consumer-oriented products, it often pits one semiconductor vendor against another for commodity parts like: [5]

    • Flash (Toshiba, Samsung, Hynix Semiconductor, Micron Technology)
    • DRAM
    • Audio
    • Radios

    Today you have four choices of iPod, each aimed at a slightly different usage:

    • iPod shuffle – tiniest size, highest portability, lowest price.
    • iPod nano – small size, portable, small display.
    • iPod touch – largest display, like an iPhone without the cell phone.
    • iPod classic – greatest storage capacity, small display.

    With all of the Apple success in designing their own processors for the iPod, iPad and iPhone devices you have to wonder if Intel’s days of supplying processors for the iMac, MacBook Pro and Mac Pro computers are indeed numbered.

    Full Disclosure
    Our family household owns: iPod, iPad, iPhone, iMac, MacBook Pro and iTunes. We also enjoy: Kindle PaperWhite, Samsung Galaxy S3, Samsung Galaxy Note 2, HTC One, Google Nexus, HP laptop, Dell laptop and custom gaming PCs.

    References
    1. Wikipedia
    2. Apple
    3. Wikipedia
    4. IFIXIT
    5. Bloomberg Businessweek



    Xilinx’s Mixed Signal FPGA
    by Luke Miller on 01-28-2014 at 10:00 am

    Something gets lost in all the Xilinx chatter about UltraScale 20nm, 16nm, massive numbers of gigabit transceivers, DSP blocks, RAM, HLS, and rapid design closure… and that is Xilinx’s mixed-signal capability. I do not mean mixed signals when you are talking with the wife (remember: listen!), but a wonderful block that lives within the 7 series of Xilinx FPGAs. It is called the ‘XADC’. What it does is marvelous, and once again using Xilinx allows more of the board design, integration, cost and time to market (the FPGA blob strikes again…) to be absorbed by the Xilinx FPGA. Let me explain…

    In the ‘Internet of things’ and everything else around us is a desire for sensor fusion, and we live in an analogue world. Everything needs to be monitored: power, current, voltage, temperature, humidity, velocity, acceleration, jerk, jolt, jounce, kids’ attitudes, audio streams, heart rates, blood pressure, glucose levels, distance, light intensity, flow meters; the list can keep going. For all the above sensors, unless we are doing some RADAR/EW, LTE or medical design, we do not need 3 GHz sample rates and beyond. So just for a few moments think, my friend, about all that you can do! In one Xilinx part, no less.

    Below is the block diagram of the XADC in the Xilinx 7 series FPGAs. It is a very powerful core and important as we can now think of the FPGA not as just a piece of digital processing but truly mixed signal processing. That is not a trivial task.

    Above you will see dual 12-bit, 1 mega-sample-per-second (MSPS) ADCs. The dual ADCs support a range of operating modes, such as externally triggered and simultaneous sampling on both ADCs, and various analog input signal types, for example single-ended. The ADCs can access up to 17 external analog input channels, all of which are available to your FPGA design. Of course, the channels are time-interleaved; they can be triggered or continually sampled round-robin. If all this does not excite you, please wire the ADCs to measure your heart rhythm, as you may be dead; make sure to filter out any potential 60 Hz noise or higher frequencies from the fluorescent lights. You could use Vivado HLS for an adaptive filter algorithm using QR decomposition. Sorry, I digressed. As for me and my fellow nerds, we are excited!
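As a toy illustration of the 60 Hz filtering mentioned above, here is a plain biquad notch filter in Python. The 1 kHz sample rate is an assumption (well under the XADC's 1 MSPS), the coefficients are the standard textbook biquad-notch formulas, and the input is a synthetic signal, not real XADC data; it is not the QR-based adaptive filter either:

```python
import math

def notch_coeffs(f0, fs, q=30.0):
    """Standard biquad notch coefficients for center f0 at sample rate fs."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1.0, -2 * math.cos(w0), 1.0]           # zeros on the unit circle at f0
    a = [1 + alpha, -2 * math.cos(w0), 1 - alpha]
    return [bi / a[0] for bi in b], [ai / a[0] for ai in a]  # normalize a[0]=1

def biquad_filter(x, b, a):
    """Direct-form-I filtering of the sequence x."""
    y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for xn in x:
        yn = b[0]*xn + b[1]*x1 + b[2]*x2 - a[1]*y1 - a[2]*y2
        y.append(yn)
        x2, x1 = x1, xn
        y2, y1 = y1, yn
    return y

fs = 1000.0                       # assumed sample rate (Hz)
b, a = notch_coeffs(60.0, fs)     # notch out the mains hum
t = [n / fs for n in range(2000)]
# A 5 Hz "heart-rate-like" tone plus 60 Hz fluorescent-light hum.
signal = [math.sin(2*math.pi*5*ti) + 0.5*math.sin(2*math.pi*60*ti) for ti in t]
clean = biquad_filter(signal, b, a)
```

After the filter settles, the 60 Hz component is gone (the notch has zeros exactly on the unit circle at 60 Hz) while the 5 Hz tone passes essentially untouched.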

    So you want to try working with the XADC; I know you do! Once again, here is where Xilinx shines. There is a vast amount of resources and reference designs to get you started in a matter of minutes. I always like video before reading, so here is a great Xilinx video to watch. All the details for the XADC are found in UG480, Zynq included. Now for the bread and butter, the reference design… Here you go, click here; you will need to sign up for a Xilinx account. In this design Xilinx shows the user how to perform analogue simulations, which is really, really cool, and hopefully our minds will start thinking that Xilinx is more than just digital!

    More articles by Luke Miller…



    TSMC OIP presentations available!
    by Beth Martin on 01-27-2014 at 6:27 pm

    Are you a TSMC customer or partner? If so, you’ll want to take a look at these presentations from the 2013 TSMC Open Innovation Platform conference:

    Through close cooperation between Mentor and Synopsys, Synopsys Laker users can check with Calibre “on the fly” during design to speed creation of design-rule correct layout, including electrically-aware voltage-dependent DRC checks.

    • Verify TSMC 20nm Reliability Using Calibre PERC (Mentor Graphics)
      Calibre PERC was used in close collaboration with TSMC IO/ESD team to develop an automatic verification kit to verify CDM ESD issues for the N20 node.

    • EDA-Based DFT for 3D-IC Applications (Mentor Graphics)
      Testing of TSMC’s 2.5D/3D ICs implies changes to traditional Built-In Self-Test (BIST) insertion flows provided by commercial EDA tools. Tessent tools provide a number of capabilities that address these requirements while reducing expensive design iterations or ECOs, which ultimately translates to a lower cost per device.

    • Advanced Chip Assembly & Design Closure Flow Using Olympus-SoC (Mentor Graphics & NVIDIA)
      Mentor and NVIDIA discuss the chip assembly and design closure solution for TSMC processes, including concurrent MCMM optimization, synchronous handling of replicated partitions, and layer promotion of critical nets for addressing variation in resistance across layers.

    More articles by Beth Martin…


    Compositions allow NoCs to connect easier
    by Don Dingee on 01-27-2014 at 6:00 pm

    I blame it on Henry Ford, William Levitt, and the NY State Board of Regents, among others. We went through a phase with this irresistible urge to stamp out blocks of sameness, creating mass produced clones of everything from cars to houses to students.

    Thank goodness, that’s pretty much over. The thinking of simplifying system design to quickly produce products of uniform quality had its run – everywhere that is, except the semiconductor industry. Reflected in slogans like “Copy Exactly” and the combined physics and economics of transistor theory demanding replication by the billions, semiconductors rely on sameness for viability in production.

    Levittown, PA, circa 1959 – courtesy Wikipedia

    System designers feel no such constraints, however. The last I checked with the folks at Semico Research, the number of unique IP blocks in a large SoC design is approaching 100. Diversity in blocks is on the increase, with CPUs, GPUs, memory, network interfaces, display interfaces, camera interfaces, and more – each with their own unique requirements for interconnect.

    The problem lies where diversity of design meets parity of production, a process we oversimplify into the term “integration”. To get IP blocks working together in the design, they obviously have to connect somehow, and hardware teams went to work on optimizing interconnects by the type of block to get the most performance in the least space. For visibility into design, test teams demand that everything be accessible with the same interconnect. For programming, software teams demand that everything be accessible using as few protocols as necessary.

    Keeping the interconnect simple proved to be more difficult than thought. We tried the bus approach; it worked when the IP block count was fairly small, but quickly resulted in conflicts as multiple devices vied for limited resources. We tried JTAG, which served the needs of test but didn’t help with performance. We tried the crossbar matrix; it achieved performance but became so complex in and of itself, it was difficult to implement for larger designs in smaller geometries.

    The network-on-chip (NoC) was born, to provide an abstraction between IP blocks using an initiator-target strategy. As hardware designers got familiar with the approach, different NoC implementations evolved. This meant one of three things had to happen in larger SoC implementations: 1) design teams had to agree on a NoC, and adapt each IP block into it, meaning in some cases a lot of work; or 2) design teams were restricted in the IP they could select to only blocks using the NoC of choice; or 3) a top-level NoC layer communicating between disparate NoC layers had to evolve, adding latency with a second layer.

    None of those are great choices for most designs. Arteris thinks they have the solution in a new strategy: NoC compositions. FlexNoC uses the connectivity and address maps from all NoCs in a system to derive a connectivity and address map of each target, as seen from each initiator, in each mode of operation. It also builds a top-level model of the interconnect, which allows a full SoC simulation, and is able to check for degenerate routing loops that could cause deadlocks.
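To make the two checks concrete, here is a toy Python sketch: deriving which targets each initiator can reach across a composed interconnect, and detecting routing loops. The data model and names (noc_a.r0, the bridge link, etc.) are invented for illustration and are not FlexNoC's actual format:

```python
# Toy model of a NoC composition (hypothetical names, not FlexNoC's format).
# Edges are directed links between initiators, routers, and targets.
links = {
    "cpu":      ["noc_a.r0"],
    "gpu":      ["noc_a.r1"],
    "noc_a.r0": ["noc_a.r1", "dram"],
    "noc_a.r1": ["noc_b.r0"],           # bridge between the two NoCs
    "noc_b.r0": ["sram", "noc_a.r0"],   # back-link creates a potential loop
}

def reachable_targets(initiator, targets):
    """DFS: every target this initiator can reach through the composition."""
    seen, stack = set(), [initiator]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        stack.extend(links.get(node, []))
    return seen & targets

def find_routing_loop():
    """Return True if the link graph contains a cycle (a deadlock risk)."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in links}
    def dfs(n):
        color[n] = GRAY
        for m in links.get(n, []):
            c = color.get(m, BLACK)   # leaf targets have no outgoing links
            if c == GRAY:             # back-edge: we are inside a cycle
                return True
            if c == WHITE and dfs(m):
                return True
        color[n] = BLACK
        return False
    return any(color[n] == WHITE and dfs(n) for n in list(links))

print(reachable_targets("cpu", {"dram", "sram"}))  # cpu reaches both targets
print(find_routing_loop())  # the r0 -> r1 -> b.r0 -> r0 loop is flagged
```

A real tool does far more (address decoding, per-mode maps, QoS), but the core ideas of per-initiator reachability and cycle detection over the composed link graph are the same shape.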

    This approach eliminates the dreaded second layer NoC, and doesn’t require additional bridging which would add further delays. Background on the NoC composition strategy is available in a new white paper authored by Jonah Probell, senior solutions architect at Arteris.

    Playing Well with Others: How NoC Compositions Enable Global Team Design

    Rather than enforcing stiff rules on IP design teams in creating interconnects, or limiting their choices of third-party IP, NoC compositions could ease the process of generating high-performance interconnect between disparate IP subsystems within an SoC.

    More Articles by Don Dingee…..
