
Once Upon A Time… ASIC designers developed ICs for a Supercomputer in the '80s
by Eric Esteve on 07-07-2011 at 10:41 am

Last weekend, I had the pleasant surprise of meeting one of my oldest friends, Eric, who reminded me of the old days when we were working together as ASIC designers on… a supercomputer project.

It happened in France, at a French company (Thomson CSF) active in the military segment and able to spend what was, at that time, a fortune ($25M) to develop a supercomputer from scratch. And when I say from scratch, I mean that we had to invent almost everything, except the ASIC design methodology and the EDA tools, both provided by VLSI Technology Inc. To be honest, we were very lucky that a French supplier (like Matra Harris Semiconductor or Thomson Composant Speciaux) had not been chosen, which could have happened for obscure political reasons. We had in our hands what was considered the Rolls-Royce for ASIC designers in 1987: the whole design team was equipped with Sun workstations, and the design tool set from VLSI was really user friendly… except that it was missing a synthesis tool. But none of us knew Synopsys, then an obscure start-up, so we were pretty happy to start. Just for your information, I will describe the type of work done by a two-engineer team over an 18-month period.

Just a word about the project itself. The supercomputer's chief architect was a talented university professor, but this was his first contact with the industrial world. He had defined the machine architecture around three main areas: the CPU boards (based on off-the-shelf CPU chips, the Weitek Abacus), the FIFO-based interconnect network, and the memory area, along with six different ASIC devices. It was a "superscalar" architecture. The task Eric and I were assigned was to design all the functions that would be reused across the different ASIC designs: the FIFOs, the test functions, and the clock distribution inside the chips.

The first one was the easiest, as we only had to define the specification of a FIFO compiler; the compiler itself, a full-custom design at transistor level, would be subcontracted to VLSI Technology. We just took pen and paper and defined the memory cell, transistor by transistor, and the FIFO behavior… in writing. No simulations (SPICE was not part of our EDA package), just discussions with Michel Gigluielmetti, our interface at VLSI. VLSI was in charge of the compiler design and model generation, as we had to be able to start designing and integrating FIFOs long before seeing any working silicon. When I look back, it was pretty risky, wasn't it?
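To make the contract concrete, here is a minimal behavioral sketch of such a FIFO in modern Python, purely for illustration: the real deliverable was a transistor-level design generated by VLSI's compiler, and the class and method names here are my own, not the project's.

```python
from collections import deque

class Fifo:
    """Minimal behavioral FIFO model: ordered push/pop with
    full/empty flags, the kind of contract we wrote out on paper."""

    def __init__(self, depth):
        self.depth = depth
        self.data = deque()

    @property
    def empty(self):
        return len(self.data) == 0

    @property
    def full(self):
        return len(self.data) == self.depth

    def push(self, word):
        # Pushing into a full FIFO is a protocol violation.
        if self.full:
            raise OverflowError("push on full FIFO")
        self.data.append(word)

    def pop(self):
        # Popping an empty FIFO is equally illegal.
        if self.empty:
            raise IndexError("pop on empty FIFO")
        return self.data.popleft()
```

A depth-4 instance accepts four pushes, reports full, and then returns the words in arrival order, which is exactly the behavior the written specification had to pin down.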

The test strategy was based on the newly introduced JTAG IEEE 1149.1 standard for "Standard Test Access Port and Boundary-Scan Architecture". This part was not that difficult, as everything was defined in the standard.

Then we looked at the clock distribution. Remember, this was 1987, and there was no "Clock Distribution Macro" we could use. The clock was a magic signal, running at 20 MHz (such a high speed!), that designers used to run their simulations, a perfect signal with no skew… How do you manage the clock distribution in a chip counting 2,000 or 3,000 flip-flops? Starting inside the chip, we then thought about the inter-chip communications, wrote down a couple of equations… and discovered that we had a real issue! How would the entire system work, with chips communicating from board to board, boards that could be located a meter apart? What about the flight time through the interconnect? And so on… The most surprising thing is that we (two beginners) discovered this issue, while none of the seasoned engineers working at a higher management level had even thought about it!

So Eric and I sent a note, copying all the management, to raise the issue. Then started one of the most amazing, creative periods, after the project leader decided to assign us full time to the clock distribution within the machine, inside and outside the ASICs. First, we defined the basic equations:

When you send data from a "slow" device, you have to comply with:

Temission_slow + Tinterconnect + Tset-up + Skew < Clock_cycle

But, when the emitter is “fast”, the equation becomes:

Temission_fast + Tinterconnect > Thold + Skew

It was at that step that we discovered that two identical ASIC devices could exhibit timing variations of 1 to 3 once voltage, process, and temperature induced variations were taken into account! Our managers' guess was 10 or 20%… So we defined the clock distribution in the machine, selecting external buffers as fast as possible (for the minimum transit time, the specification was… 0 ns), trying to minimize the impact of the buffering. But in doing so, we realized it could not work in every case, even if we increased the clock period (and decreased the frequency, which is not really what you want to do when you design a supercomputer…). With the help of VLSI Technology, we defined a kind of delay-locked loop (DLL), so that the ASIC could self-calibrate (a fast device would delay the time at which the data was emitted, to guarantee the hold time). We also defined different phases within the clock period, so we could artificially enlarge the clock cycle and receive the data with no set-up problem. In other words, we had to reinvent the wheel, even if I am sure the designers working at Cray Research did it before us! When I see the size of a team working today on a single device (an OMAP5 or equivalent), I think we were very lucky to discover ASIC design in such a way.
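The two inequalities above can be checked mechanically. Here is a small sketch, using made-up numbers in nanoseconds (not the project's actual figures), just to show why the slow and fast corners pull in opposite directions:

```python
def setup_ok(t_emit_slow, t_interconnect, t_setup, t_skew, t_cycle):
    # Slow corner: data must settle before the next clock edge,
    # i.e. Temission_slow + Tinterconnect + Tset-up + Skew < Clock_cycle.
    return t_emit_slow + t_interconnect + t_setup + t_skew < t_cycle

def hold_ok(t_emit_fast, t_interconnect, t_hold, t_skew):
    # Fast corner: data must not arrive before the hold window closes,
    # i.e. Temission_fast + Tinterconnect > Thold + Skew.
    return t_emit_fast + t_interconnect > t_hold + t_skew

# A 20 MHz clock gives a 50 ns cycle. Illustrative numbers only:
cycle = 50.0
print(setup_ok(30.0, 10.0, 3.0, 4.0, cycle))  # True: 47 < 50
print(hold_ok(10.0, 10.0, 2.0, 4.0))          # True: 20 > 6
# A device 3x faster than nominal, on a short interconnect,
# violates the hold constraint even though setup is trivially met:
print(hold_ok(3.0, 1.0, 2.0, 4.0))            # False: 4 > 6 fails
```

Note that a longer clock period only relaxes the first inequality; the hold-side inequality has no clock-cycle term at all, which is why slowing the machine down could never fix every case and a self-calibrating delay was needed.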

Because there is a moral in every story, I must say that the project was suddenly ended by Thomson CSF top management when it appeared that the machine would never work, at least not at 20 MHz, the official reason being Weitek's difficulties shipping the CPU. Then the engineering manager of the software team moved to another Thomson subsidiary, in charge of developing tools for the stock market. Eric moved to Australia for a year, to learn to surf. By the way, he is still living there! As far as I am concerned, I stayed in ASIC design, doing chips for aircraft motors or analogue simulation for the TGV, and finally was an ASIC FAE for TI, where my largest customer was the Advanced Computer Research Institute (ACRI), designing… a supercomputer! But that is another story…
By Eric Esteve


TSMC Financial Status Plus OIP Update!

by Daniel Nenni on 07-05-2011 at 8:00 am

Interesting notes from my most recent Taiwan trip: Taiwan unemployment is at a record low. Scooters once again fill the streets of Hsinchu! TSMC will be passing out record bonuses to a record number of people. TSMC fab expansions are ahead of schedule. The new Fab 15 in Taichung went up amazingly fast, with equipment moving in later this year. When was the last time you saw a fab built ahead of schedule and under budget? Simply amazing! Taiwan is also ready to overtake Japan as the world's largest semiconductor materials market. The Taiwan market grew from $6.9 billion in 2009 to an estimated $9.1 billion in 2010, roughly 32% growth. Go Taiwan!

The Motley Fool did a nice TSMC financial article with pretty pictures. I like pretty pictures. The bottom line is that not only is TSMC the largest semiconductor foundry, TSMC is also the most profitable. The important point here is margins. Margins translate into pricing flexibility as supply outpaces demand, which is coming, believe it! Semiconductor manufacturing capacity utilization today is running at 90%+ in most segments. With all the new fab space coming online from TSMC, Samsung, Intel, and GlobalFoundries in 2012, it may be a different story. Either way, TSMC wins.

Unfortunately Motley Fool does not know semiconductors as they listed NVIDIA and LDK Solar as industry peers/competitors! DOH! One of the most amusing things I do for money is consult with Wall Street types and explain exactly what the semiconductor market is and who the real players are. I also slip in some EDA and Semi IP information whenever possible. Even with the recent acquisitions, Wall Street simply does not care about EDA, but I digress.

The one semi-relevant example Motley Fool uses is the number four foundry, SMIC. TSMC's gross margin is 49.6%, versus SMIC at 20.8%. UMC, the number two foundry, is at 27.5%. GlobalFoundries' financials are private, but I will see what I can find out. Intel and Samsung will never reveal foundry capacity or margin numbers, so I shouldn't even be mentioning them in the same paragraph as the real foundries.

Coming this fall from TSMC is the new and improved Open Innovation Platform Ecosystem Forum. TSMC is preparing a massive design ecosystem event on Tuesday, October 18th at the San Jose Convention Center. A call for papers has already gone out; 18 papers will be presented to an open forum of industry executives from TSMC, ecosystem partners, and customers. This is a DO NOT MISS event! There will be focused breakout sessions on all manner of design issues AND a pavilion with around 80 TSMC Design Ecosystem partners showing their wares. Plus, I will be there (free food), such a deal. The food is always good at TSMC events!

The Open Innovation Platform® is the substantiation of TSMC’s Open Innovation model that brings together the thinking of customers and partners under the common goal of shortening design time, minimizing time-to-volume and speeding time-to-market, and ultimately time-to-money.

No doubt this event will be sold out. Follow SemiWiki.com for TSMC OIP updates coming soon.



Two More Transistor-Level Companies at DAC

by Daniel Payne on 07-02-2011 at 8:38 pm

In my rush on Wednesday at DAC I had almost overlooked the last two companies I talked with: Invarian and AnaGlobe. For these last two I had only hand-written notes on paper, so I just got to the bottom of my inbox tonight to write up the final trip reports.

Invarian
Jens Andersen and Vladimir Schellbach gave me an overview of tools that perform temperature, package and analog layout analysis:

  • Models actual component temperature
  • Identifies electromigration
  • Finds hotspots
  • Solves full 3D heat transfer equation
  • Accounts for block layout impact
  • Accounts for power dissipation

The Invarian tool, named InVar, works with a SPICE simulator like Berkeley Design Automation's Analog FastSPICE. It supports both analog and digital design flows. The only other competitor in this space would be Gradient DA.

Summary
Watch this startup: even with fewer than a dozen people split between Moscow and Silicon Valley, they have an interesting focus on temperature variation that the big four in EDA haven't started serving yet. Their IR drop and EM analysis have plenty of competitors.

AnaGlobe
How would you load an IC layout that was 180GB in size? At AnaGlobe they use the Thunder chip assembly tool and get the design loaded in under two hours. Yan Lin gave me a quick overview of their tools.

GOLF is a new PCell design environment based on OA.

PLAG is another OA tool for flat panel layout.

Summary
AnaGlobe is certainly a technology leader for large IC database assembly. Their GOLF tool competes with Virtuoso, Ciranova and SpringSoft. PLAG looks to have little competition. Big name design companies use AnaGlobe tools: Nvidia, Marvell, SMIC, AMCC.


Apache Design Automation acquired by Ansys

by Daniel Payne on 06-30-2011 at 2:52 pm

We all knew that Apache had filed for an IPO earlier and were just waiting for the timing and price to be revealed. Rumors have been circulating about an acquisition, and today we know that the rumors were true: Ansys paid $310 million in cash for Apache.

Ansys stock has surged some 35% over the past twelve months.

Products
This acquisition looks totally complementary in terms of products. Ansys also purchased Ansoft back in 2008, so they now have a good mix of software tools across multiple disciplines:

  • Low-Power IC Design (Apache)
  • Electromagnetics
  • Explicit Dynamics
  • Fluid Dynamics
  • Multiphysics
  • Structural Mechanics

It will be interesting to see if Ansys creates a division just for Apache tools or merges it into the Electromagnetics division.


Cadence to launch PCIe gen-3 (8 GT/s) IP and VIP: fruit of Denali acquisition

by Eric Esteve on 06-28-2011 at 10:59 am

The recent announcement from Cadence, officially launching the PCI Express 3.0 Controller IP as well as the associated Verification IP (VIP), made up of the Compliance Management System (CMS), which provides interactive, graphical analysis of coverage results, and PureSuite, which provides the associated PCIe test cases, clearly demonstrates that the acquisition of Denali is bearing more fruit, after the DDR4 Controller IP. Maybe some history will help. Back in 2006, Denali was known for their VIP products for interface functions like PCIe, USB, or SATA, when they first launched a PCI Express (gen-1 at that time) Controller IP. It was quite surprising, especially for their former partners, who suddenly became their competitors! Nevertheless, they found a place in the market, positioning on the high-end (and expensive) side, supporting Root Port or Endpoint and soon Single Root I/O Virtualization (SR-IOV), a solution targeting the PC server market, while Synopsys and PLDA were positioned on the mainstream PCIe IP market. Then the PCIe 2.0 specification was issued, in 2007, and Denali was still in the race.
With the launch of this PCIe 3.0 solution, still in the emerging phase and probably reserved, for the moment, for high-end, advanced applications like storage, supercomputing, enterprise, and networking, Cadence/Denali is following the same strategy: high-end, high-margin IP, as opposed to mainstream solutions, which are well covered by the competition. The customer mentioned by Cadence in their press release, PMC-Sierra, with its 6Gb/s SAS Tachyon protocol controller ASSP integrating this IP, is clearly in this market segment.

Features

The PCIe core includes these features:

Single-Root I/O Virtualization
The PCIe core provides a Gen 3, 16-lane architecture with full support for the latest Address Translation Services (ATS) specification and the Single-Root I/O Virtualization (SR-IOV) specification, including Internal Error Reporting, ID-Based Ordering, TLP Processing Hints (TPH), Optimized Buffer Flush/Fill (OBFF), Atomic Operations, Re-Sizable BAR, Extended TAG Enable, Dynamic Power Allocation (DPA), and Latency Tolerance Reporting (LTR). SR-IOV is an optional capability that can be used with PCIe 1.1, 2.0, and 3.0 configurations.
Dual-mode operation
Each instance of the core can be configured as an Endpoint (EP) or Root Complex (RC).
Power management
The core supports PCIe link power states L0, L0s and L1 with only the main power. With auxiliary power, it can support L2 and L3 states.
Interrupt support
The core supports all three options for implementing interrupts in a PCIe device: Legacy, MSI, and MSI-X modes. In Legacy mode, it communicates the assertion and de-assertion of interrupt conditions on the link using Assert and De-assert messages. In MSI mode, the core signals interrupts by sending MSI messages upon the occurrence of interrupt conditions; in this mode, the core supports up to 32 interrupt vectors per function, with per-vector masking. Finally, in MSI-X mode, the controller supports up to 2048 distinct interrupt vectors per function, with per-vector masking.
Credit Management
The core performs all the link-layer credit management functions defined in the PCIe specifications. All credit parameters are configurable.
Configurable Flow-Control Updates
The core allows flow control updates from its receive side to be scheduled in a flexible manner, thus enabling the user to make tradeoffs between credit update frequency and its bandwidth overhead. Configurable registers control the scheduling of flow-control update DLLPs.
Replay Buffer
The Controller IP incorporates fully configurable link-layer replay buffers for each link, designed for low latency and area. The core can maintain replay state for a configurable number of outstanding packets.
Host Interface
The datapath on the host interface is configurable to 32, 64, 128, or 256 bits. It may be an AXI or Host Application Layer (HAL) interface.

If we take a more in-depth look at this PCIe gen-3 controller, we see that Cadence has based the architecture on a 128-bit datapath. This means, for 8-lane PCIe running at 8 GT/s, that the core is clocked at 500 MHz (simply calculate 8 * 8000 / 128), which probably requires technology nodes below 65 nm, as the core is, I guess, in the 500K-gate range; this technology selection is consistent with the supercomputing and networking markets, and with rather high-end storage. Another remark we can make is that Cadence, as far as we can see, does not provide the PHY Interface for PCI Express (PIPE). This version of the PIPE can run at 500 MHz for a 16-bit interface (or 250 MHz for a 32-bit one), and Cadence decided on the 16-bit (per lane) controller interface, allowing the PIPE and the controller to stay in the same clock domain.
Apparently Cadence lets the PHY IP supplier take care of the PIPE 3.0 design, which makes sense, as it may be necessary to position the PIPE carefully with respect to the PHY in terms of chip topology.
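The bandwidth arithmetic behind that 500 MHz figure generalizes easily. This small sketch reproduces it, ignoring 128b/130b line-encoding overhead just as the back-of-the-envelope calculation above does:

```python
def core_clock_mhz(lanes, rate_gt_s, datapath_bits):
    # Aggregate link bit rate (lanes x GT/s, converted to Mb/s)
    # divided by the internal datapath width gives the clock
    # frequency the controller logic must sustain.
    return lanes * rate_gt_s * 1000 / datapath_bits

# The article's case: 8 lanes at 8 GT/s on a 128-bit datapath.
print(core_clock_mhz(8, 8, 128))   # 500.0 MHz
# A full 16-lane Gen 3 link on the same datapath would need 1 GHz,
# which hints at why wider datapaths appear at higher lane counts:
print(core_clock_mhz(16, 8, 128))  # 1000.0 MHz
```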

Offering the PCIe 3.0 VIP is a must, as Cadence is clearly and strongly positioned in verification. Consider the "size" of the PCIe 3.0 specification: more than 1,000 pages, many new features compared with PCIe 2.0, and the latest engineering change notices (ECNs) such as ID-Based Ordering, Re-Sizable BARs, Atomic Operations, Transaction Processing Hints, Optimized Buffer Flush/Fill, Latency Tolerance Reporting, and Dynamic Power Allocation. Running a verification campaign on such an emerging product is a "must do". Denali has always been very active in PCIe VIP, so Cadence has probably benefited from Denali's long experience (seven years or so) with this protocol.

Eric Esteve



SOC Realization

by Paul McLellan on 06-27-2011 at 5:28 pm

There are some very interesting comments on the last entry on SoC Realization and how more and more chips are actually assembled out of IP. There was clearly a lot of discussion in this area at DAC, although most people (Atrenta being an exception) don't use the term SoC Realization, presumably because it originated with Cadence. But just because there is a dysfunctional gap between what Cadence talks about in EDA360 and what they currently deliver doesn't mean that the basic ideas are wrong. I've argued before that both Synopsys and Mentor are currently doing a better job of implementing the EDA360 vision than Cadence itself.

One of the points several commenters made was that IP reuse is expensive, in the sense that if you design a block to be reused, rather than just used in the chip you are working on, it takes more effort. Reuse seems to work best at the very low level (standard cells, memory blocks, etc.) and at the high functional level; the higher, the better. Microprocessors obviously, but also large blocks like GPS or cable modems.

One of the biggest challenges with reuse is that it really is expensive. Long before any of us had even thought about IP in semiconductors, Fred Brooks, in his book The Mythical Man-Month, reckoned that a reusable piece of operating-system code that had to run on a range of machines and be reliable was 9 times more expensive than the quick-and-dirty bit of code a programmer could come up with to solve their specific issue: a factor of 3 for flexibility and a factor of 3 for making it industrial-strength. At a seminar in Cambridge (the real one!), where he was on sabbatical back then (I guess this must have been 1975), he said that he thought the two factors of 3 should have been two factors of 5, so reuse was actually 25 times more expensive.

When I was at VLSI Technology, we had blocks of reusable silicon that we called FSBs, for Functional System Blocks. We jokingly called them Fictional System Blocks since, in most cases, the work to make them reusable (models, vectors, etc.) didn't yet exist. The blocks did genuinely exist, and had seen silicon, but it would have been too expensive to make them all reusable without a customer. Instead, we did a sort of just-in-time reusability, making a block reusable once someone showed up and wanted to use it. Making all the blocks truly available at all the process nodes would have made the whole idea unprofitable.

3rd-party IP, of course, typically comes with everything required to use the block. The idea is that the cost of making blocks reusable is amortized across a large number of customers/designs. One problem is that often this is not the case: the block is used in just a very small number of designs in different processes, with the result that many IP companies really have more of a service business model (porting IP) than one of stamping out lots of standardized product.

Nonetheless, most of the content of most chips these days is IP that came either from a previous chip or from a 3rd party. Very little is RTL written just for that one chip. SoC Realization, by whatever name, is the way many chips are designed.


ARM and Mentor Team Up on Test

by Daniel Payne on 06-27-2011 at 2:31 pm

Introduction
Before DAC I met with Stephen Pateras, Ph.D., Product Marketing Director in the Silicon Test Solutions group at Mentor Graphics. Stephen has been at Mentor for two years, having come in with the LogicVision acquisition. He joined LogicVision early and went through their IPO; before that he was at IBM in the mainframe processor group.



TSMC Versus Intel: The Race to Semiconductors in 3D!

by Daniel Nenni on 06-26-2011 at 4:00 pm

While Intel is doing victory laps in the race to a 3D transistor (FinFET) @ 22nm, TSMC is in production with 3D IC technology. A 3D IC is a chip in which two or more layers of active electronic components are integrated both vertically and horizontally into a single circuit. The question is: which 3D race is more important to the semiconductor industry today?

Steve Leibson did a very nice job in his blogs: Are FinFETs inevitable at 20nm? "Yes, no, maybe" says Professor Chenming Hu (Part I) and (Part II). Dr. Chenming Hu is considered an expert on the subject and is currently the TSMC Distinguished Professor of Microelectronics at the University of California, Berkeley. Prior to that, he was the Chief Technology Officer of TSMC. Hu coined the term FinFET 10+ years ago, when he and his team built the first FinFETs and described them in a 1999 IEDM paper. The name FinFET came about because the transistors (technically known as field-effect transistors) look like fins. Hu didn't register patents on the design or manufacturing process, to make it as widely available as possible, and was confident the industry would adopt it. Well, it looks like he was right!

In May of this year, Intel announced Tri-Gate (FinFET) 3D transistor technology at 22nm for the Ivy Bridge processor, citing significant speed gains over traditional planar transistor technology. Intel also claims the Tri-Gate transistors are so impressively efficient at low voltages that they will make the Atom processor much more competitive against ARM in the low-power mobile internet market. Intel has a nice "History of the Transistor" backgrounder HERE in case you are interested.

Time will tell but I think this could be another one of Intel’s billion dollar mistakes. A “significant” speed-up for Ivy Bridge I will give them, but a low power competitive Atom? I don’t think so. TSMC’s 3D IC technology on the other hand is said to achieve performance gains of about 30% while consuming 50% less power. Intel already owns the traditional PC market so trading the speed-up of 3D transistor technology for lower power planar transistors is a mistake. A mistake that will allow ARM to continue to dominate the lucrative smartphone and tablet market.

Intel also does not mention 22nm Tri-Gate manufacturing costs, which are key if they are serious about the foundry business. I still say they are not serious, and this is another supporting data point. Foundry capacity will soon outpace demand, so low manufacturing costs will be a critical competitive advantage.

TSMC has chosen to wait until 14nm to bring 3D transistor technology to the foundry business. Given that TSMC is the undisputed foundry champion and the father of the FinFET is a "TSMC Distinguished Professor of Microelectronics at University of California", my money is again on TSMC. I won my previous bet of gate-last HKMG (TSMC) versus gate-first (IBM/Samsung), so I'm letting it ride on 14nm being the correct node for FinFETs.



HDMI vs DisplayPort?… DiiVA is the answer from China!

by Eric Esteve on 06-26-2011 at 9:40 am

During the early 2000s, when OEMs started to question the use of LVDS to interface with display devices, two standards emerged: High-Definition Multimedia Interface (HDMI) and DisplayPort. HDMI was developed by Silicon Image, surfing on the success of the Digital Video Interface (DVI), and was strongly supported by a consortium of "founders" counting Hitachi, Ltd., Panasonic Corporation, Philips Consumer Electronics International B.V., Silicon Image, Inc., Sony Corporation, Technicolor S.A. (formerly known as Thomson), and Toshiba Corporation. A pretty impressive list, at least in the consumer electronics market! The Video Electronics Standards Association (VESA) decided to launch a competing standard, DisplayPort. DisplayPort was backed by companies linked to the PC market, like HP, AMD, Nvidia, Dell, and more… The competition looked promising, as both standards exhibit the same high-level features, the most significant being high-speed differential serial signaling, a packet-based protocol, a layered architecture, and the ability to increase bandwidth by using 1 to 3 lanes (HDMI) or 1 to 4 (DisplayPort).

But the expected big fight was a disappointment. Is it because HDMI was promoted with strong energy by a single company, which had to succeed or die, while VESA was acting as a non-profit organization? Or because the initial market for HDMI, consumer electronics, allowed the standard to spread into the wireless mobile and mobile PC segments, putting huge numbers of HDMI-powered devices in consumers' hands (and generating a strong cash flow for Silicon Image, based on a $0.04 per-port royalty)? In any case, when DisplayPort started to reach the market in 2008, with a mere 10 million ports, HDMI already exhibited a strong 250+ million!

This forecast from iSuppli, issued in 2007, leaves no doubt about the winner! Even more interesting is the latest data from iSuppli, dated March 2011: "High-Definition Multimedia Interface (HDMI), the de facto connection standard in liquid crystal display TVs and other advanced digital equipment, will be present in nearly 630 million consumer electronic items shipped during 2011…". In the meantime, HDMI has become the "de facto connection standard" (OK, LCD TVs alone looks restrictive, but the "other digital equipment" covers wireless handsets, set-top boxes, notebooks, and more… so "de facto" looks appropriate!). Amazingly, these two forecasts from iSuppli, made four years apart and spanning a major worldwide financial crisis, are pretty consistent: 615 million forecast in 2007, compared with 630 million in 2011, for the items shipped in 2011. It's not so often that analysts are right, so we should recognize it when it happens!
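Those shipment numbers make the royalty stream easy to estimate. A back-of-the-envelope sketch, using only the figures quoted above and treating each shipped item as one royalty-bearing port (an approximation, since some devices carry several ports):

```python
royalty_per_port = 0.04   # USD per port, as quoted earlier in the article
ports_2008 = 250e6        # HDMI ports already shipped when DisplayPort arrived
items_2011 = 630e6        # iSuppli's forecast of HDMI items shipped in 2011

# Rough royalty revenue implied by each figure:
print(ports_2008 * royalty_per_port)  # about $10M
print(items_2011 * royalty_per_port)  # about $25.2M per year
```

Even at four cents a port, volume at this scale turns a licensing program into a meaningful, recurring cash flow, which is exactly the dynamic the next paragraphs describe.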

As of today, the status is that DisplayPort is finally emerging, almost exclusively in the PC segment (and with a threat coming from MIPI DSI in mobile devices like media tablets, netbooks…). OK, Apple has integrated DisplayPort in its mobile PCs, Intel has launched Thunderbolt, which will use both DisplayPort and PCI Express, and major chipset manufacturers are supporting DisplayPort (but also HDMI…), but HDMI-powered items will pass the billion mark in 2013… while DisplayPort, according to a ResearchAndMarkets survey from February 2011, will see "External DisplayPort Device Shipments to Increase 100% From 2009 to 2014"… to reach 10% of the HDMI number!

So, apparently, HDMI has won the battle. But adopters of HDMI technology are concerned. First, by the impact on their bottom line when selecting HDMI. The rights to the HDMI standard belong to HDMI Licensing, LLC… which is 100% owned by Silicon Image. Moreover, when a chip maker or an OEM wants HDMI certification, it has to submit its product for compliance testing to an HDMI Authorized Testing Center (ATC). The ATC list includes 11 companies… 7 of which are Silicon Image subsidiaries. Should we mention that Silicon Image also markets HDMI-based ASSPs? Thus, when you license HDMI you have to pay a per-port royalty (to SI), and if you decide to develop your own solution, you have to obtain compliance for your IP or your ASSP from… Silicon Image again.

This monopolistic situation has made several chip makers and OEMs pretty nervous. Amazingly, the riposte came from China: along with Sony Corp. (an original founder and supporter of the HDMI interface) and Samsung Electronics Co., several Chinese TV manufacturers have banded together in a consortium to develop the Digital Interactive Interface for Video and Audio (DiiVA). DiiVA charter members (called Promoters) include leading CE and home appliance manufacturers Changhong Electric Co., Haier Co., Hisense Electric Co., Konka Group, Panda Electronics Co., Samsung Electronics, Skyworth Group, Sony Corporation, SVA Information Industry Co., TCL Corporation, and chip developer Synerchip Co., Ltd. As a side remark, DiiVA is defined as a bidirectional protocol, unlike DisplayPort and HDMI. Another good point for the end user, the consumer!

A robust alternative that would provide equivalent or even expanded high-definition capability while avoiding HDMI license costs, DiiVA remains on the horizon as a potential competitor to HDMI from 2011 onward, within China and potentially other Asian domestic markets. While still relatively early in the product launch phase, DiiVA remains a technology to watch closely … The Chinese market has shown a demonstrable propensity to gravitate to its own technical standards, and the DiiVA interface is likely to receive a boost from foreign brands looking to grow their share within the China market. And though the support enjoyed by HDMI among devices in general is impossible to ignore, the possibility that a China government agency mandates the use of DiiVA—even if only to promote regional interests in the technology sector—could spell trouble for HDMI in the years to come within the rapidly expanding China consumer market.

From an IP vendor perspective, HDMI is still a very promising market (evaluated by IPnest to grow to up to $100M by 2014, including per-port royalties), DisplayPort is taking off and will generate a decent market (not yet evaluated in detail, but the DP IP market could represent $25M by 2014), and DiiVA is just another market opportunity to develop sales of high-speed serial interface IP, with a market size still to be evaluated.

Eric Esteve



Smartphones in the BRICs

by Paul McLellan on 06-24-2011 at 2:45 pm

The latest edition of GSA Forum has an article by Aveek Sarkar of Apache on system design for emerging-market needs. The BRIC countries (Brazil, Russia, India, China) are characterized by a small rich segment, a large and growing middle class, and a large poor segment. One big trend is that smartphone use is expanding very fast. These countries don't have a large installed base of PCs, and almost certainly never will, so smartphones are the primary way people there access the internet. Smartphones are growing over four times as fast as the overall mobile industry.

But smartphones there are not just the same phones we use in the US, since different markets need different features. For example, mobile payments are huge in Kenya (search for M-Pesa for details). In general, there is no bank-card infrastructure, and phones are going to be the way financial transactions get done. Much of the Chinese market requires two SIM-card slots. And, although it is purely a software issue, different markets require different languages.

These markets move much faster, with new designs tumbling over each other, so time to market is extremely critical. But the price point is probably the most critical factor: $600 is simply way too high. The way phones are largely designed is through collaboration between a chipset vendor like Mediatek, Broadcom, or ST-Ericsson and a system integrator. Communication is very important.

Lack of communication leads to one of two problems: the phones either don't work or are unreliable, or else they are over-engineered and unnecessarily expensive. Because of time-to-market pressure, designs tend to get padded: the chip designers have to assume the worst about the board design (corners will probably be cut), and the board designers over-engineer the board because they don't know enough about the internals of the chips. Or, worse, they don't. If both groups over-engineer, the design is more expensive than necessary. If both groups under-design, the design may not work at all, due, for example, to power-supply droop from a lack of decoupling capacitors. As clock rates continue to increase into the GHz range and voltage margins decrease, these issues are getting worse, not better.

One solution to this problem is for board and chip designers to provide models to each other, so that reliability can be analyzed and the product can come to market on time and at the lowest possible price point. This allows a full analysis of the system and avoids the $100M disaster of shipping a phone that is too unreliable to be successful.

Of course, these problems of modeling chips, boards, and packages, and of analyzing noise and power distribution, are all areas that Apache has been working on for years.

Aveek’s article is here.