
I Love DAC
by Paul McLellan on 04-13-2012 at 1:16 pm

For the fourth year Atrenta, Cadence and Springsoft are jointly sponsoring the “I LOVE DAC” campaign. In case you have been hibernating all winter, DAC is June 3-7th in San Francisco at the Moscone Center.

There are two parts to “I LOVE DAC”. First, if you register by May 15th (and the passes haven’t all gone) then you can get a free 3-day exhibit pass for DAC. In fact this pass entitles you not just to the exhibits but also to the pavilion panels (which take place in the exhibit hall), the three keynotes and the evening receptions after the show closes.

Secondly, if you go to the Atrenta, Cadence or Springsoft booths you can get an “I LOVE DAC” badge. Each day somebody walking the show floor wearing one of the badges will be randomly given a new iPad (known, though not by Apple, as the iPad 3). If you still have an “I LOVE DAC” badge from any of the previous 3 years, you can wear that one and still be eligible.

The “I LOVE DAC” page on the DAC website, where you can register, is here.


EDPS: 3D ICs, part II
by Paul McLellan on 04-12-2012 at 10:00 pm

Part I is here.

In the panel session at EDPS on 3D ICs, a number of major issues got highlighted (highlit?).

The first is the problem of known-good-die (KGD), which is what killed off the promising multi-chip-module approach, perhaps the earliest type of interposer. The KGD problem is that with a single die in a package it doesn’t make much sense to invest a lot of money at wafer sort. If the process is yielding well, then identify the bad die cheaply and package up the rest. Some will fail final test due to bonding and other packaging issues, and some die weren’t good to begin with (so you are chucking out a bad die after having spent a bit too much on it). With a stack of just 4 die and a wafer sort that is 99% effective (only 1% of bad die get through), the stack yields only about 95%, and the discarded stacks do not just contain one bad die: there are (almost always) 3 good die and an expensive package too. Since these die are not going to be bonded out, they don’t automatically have bond pads for wafer sort to contact, and it is beyond the state of the art to put a probe on a microbump (at 1 g of force on a 20 µm bump, that is enormous pressure), so preparing for wafer sort requires some thought.
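
As a rough illustration of the arithmetic (a minimal sketch, reading the 99% figure as meaning that 1% of the die that pass wafer sort are still bad; everything else follows from that assumption):

```python
# Rough stacked-die yield estimate: if a fraction `escape_rate` of the die
# that pass wafer sort are in fact bad, a stack is good only when every die
# in it is good.
def stack_yield(escape_rate: float, n_die: int) -> float:
    per_die_good = 1.0 - escape_rate
    return per_die_good ** n_die

escape_rate = 0.01  # fraction of sorted die that are still bad (from the text)
n_die = 4           # four-die stack (from the text)

y = stack_yield(escape_rate, n_die)
print(f"stack yield     : {y:.1%}")      # ~96%, roughly the 95% quoted above
print(f"discarded stacks: {1 - y:.1%}")  # each discard throws away ~3 good die
                                         # plus an expensive package
```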

The next big problem is who takes responsibility for what; in particular, when a part fails, who is responsible? Everyone is terrified of the lawyers. The manufacturing might be bad, the wafer test may be inadequate, the microbump assembly may be bad, the package may be bad, and in general assigning responsibility is harder. It looks likely that there will end up being two manufacturers responsible: the foundry, which does the semiconductor manufacturing, the TSV manufacturing and (maybe) the microbumps; and the assembly house, or OSAT (outsourced semiconductor assembly and test) as we are now meant to call them, which puts it all together and does final test.

The third big problem is thermal analysis: not just the usual question of how hot the chip gets and how that affects performance, but also the fact that the different coefficients of thermal expansion can cause all sorts of mechanical failures of the connections in the stack. This was one of the biggest challenges in getting surface-mount technology for PCBs to work reliably: parts kept falling off the board because the different materials reacted differently to thermal stress. Not good if it was in your plane or car.
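
A back-of-the-envelope view of why the mismatch matters (a hedged sketch: the CTE values and temperature swing below are typical textbook figures chosen for illustration, not numbers from the panel):

```python
# Differential thermal expansion between two bonded materials:
# strain = (CTE_a - CTE_b) * delta_T; over a span L, the joints between the
# two materials must absorb a relative displacement of strain * L.
CTE_SILICON = 2.6e-6    # 1/degC, typical value (assumed for illustration)
CTE_COPPER  = 16.5e-6   # 1/degC, typical value (assumed for illustration)
DELTA_T     = 100.0     # degC swing, e.g. power cycling (assumed)
SPAN_UM     = 5000.0    # 5 mm across a die (assumed)

strain = (CTE_COPPER - CTE_SILICON) * DELTA_T
displacement_um = strain * SPAN_UM

print(f"differential strain            : {strain:.2e}")
print(f"displacement over a 5 mm span  : {displacement_um:.1f} um")
# roughly 7 um of relative movement that 20 um microbumps have to survive
```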

Philip Marcoux had a quote from the days of surface mount: “successful design and assembly of complex fine-pitch circuit boards is a team sport.” And 3D chips obviously are too. The team is at least:

  • the device suppliers (maybe more than one for different die, maybe not)
  • the interposer designer and supplier (if there is one)
  • the assembler
  • the material suppliers (different interconnects, different TSVs, different device thicknesses will need different materials, solder, epoxy…)
  • an understanding pharmacist or beverage supplier (to alleviate stresses)

His prescription for EDA:

  • develop a better understanding of the different types of TSV (tungsten vs copper; via-first/middle/last, etc.)
  • coordinate with assembly equipment suppliers to create an acceptable file exchange for device registration and placement
  • create databases of design guidelines to help define the selection of assembly processes, equipment and materials
  • encourage and participate in the creation of standards
  • develop suitable floorplanning tools for individual die
  • develop 3D chip-to-chip planning tools
  • provide thermal planning tools (chips in the middle get hot)
  • provide cost modeling tools to address designer-driven issues such as when to use 3D vs a 2.5D interposer vs a single big chip (a toy illustration follows this list)
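
To illustrate the kind of designer-facing trade-off that last item refers to, here is a toy cost model (a hedged sketch: the yield model is the standard Poisson approximation, and every number below is a made-up placeholder rather than real foundry or OSAT data):

```python
import math

# Toy comparison: one big monolithic die versus the same logic split across
# two smaller die on a 2.5D interposer. Splitting helps because smaller die
# yield better, but it adds interposer and assembly cost.
def cost_per_good_die(area_mm2, d0_per_mm2, wafer_cost, wafer_area_mm2=70000):
    yield_ = math.exp(-d0_per_mm2 * area_mm2)   # Poisson yield model
    gross_die = wafer_area_mm2 / area_mm2
    return wafer_cost / (gross_die * yield_)

WAFER_COST = 5000.0   # $ per processed wafer (placeholder)
D0 = 0.0025           # defects per mm^2 (placeholder)

monolithic = cost_per_good_die(400, D0, WAFER_COST)        # one 400 mm^2 die
split_die  = 2 * cost_per_good_die(200, D0, WAFER_COST)    # two 200 mm^2 die
interposer_and_assembly = 12.0                              # $ (placeholder)

print(f"monolithic 400 mm^2 die : ${monolithic:6.2f}")
print(f"2.5D, 2 x 200 mm^2 die  : ${split_die + interposer_and_assembly:6.2f}")
```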

It is unclear to me whether these are all really the domain of EDA. Process cost modeling is its own domain and not one where EDA is well-connected. Individual semiconductor companies and assembly houses guard their cost models as tightly as their design data.

Plus, one of the challenges with standards is knowing when to develop them. Successful standards require that you already know how to do whatever is being standardized, and as a result most successful standards start life as de facto standards whose known rough edges are then filed off.

As always with EDA, one issue is how much money is to be made. EDA tools make money partly based on how valuable they are, but also largely on how many licenses large semiconductor companies need. In practice, the tools that make money either run for a long time (STA, P&R, DRC) or you sit in front of them all day (layout, some verification). Other tools (high-level synthesis, bus/register automation, floorplanning) suffer from what I call the “Intel only needs one copy” problem: they don’t stimulate license demand in a natural way (although rarely in such an extreme way that Intel really only needs a single copy, of course).


Doing what others don’t do
by Paul McLellan on 04-12-2012 at 2:56 pm

Wally Rhines’ keynote at U2U, the Mentor users’ group meeting, was about Mentor’s strategy of focusing on what other people don’t do. This is partly a defensive approach, since Mentor has never had the financial firepower to have the luxury of focusing all its development on sustaining its products and then acquiring startups to get new technology. Even when they have acquired startups, they have tended to be ones in which nobody else was very interested.

In his keynote at DAC in 2004, Wally pointed out that every segment basically grows fast as it gets adopted and then goes flat, despite the significant investment required to keep products up to date (for example, there has been no growth in the PCB market despite the enormous amount of analysis that has been added since that early market phase). Once there are no new users moving into a product segment, the revenue goes flat. Consequently, all the growth in EDA has come from new segments. Eight years ago Wally predicted that the growth would come from DFM, system-level design and analog/mixed-signal. DFM has grown at 12% CAGR since then, ESL at 11%, and formal verification at 12%. But mainstream EDA grew at just 1%.

So that raises the question: what next? Which areas does Mentor see as adding growth?

First, low power design at higher levels. Like so much in design, power suffers from the fact that you only have accurate data when the design is finished and you have the least opportunity to change it, whereas early in the design you lack good data but it is comparatively easy to influence the outcome. Embedded software increasingly has a large effect on power and performance, but the environments for hardware design are just not optimized for embedded software. Mentor has put a lot of investment into Sourcery CodeBench to enable software development on top of virtual platforms, emulators, hardware and so on. To give an idea of just how different the scale is in embedded software versus IC design, there are 20,000 downloads per month.

Second, functional verification beyond RTL simulation. Most simulation time is spent simulating things that have already been simulated. By being more intelligent about directing constrained random simulation, Mentor is seeing reductions of 10 to 50 times in the amount of simulation required to achieve the same coverage. With server clock rates static and multicore giving only limited scalability, emulation is the only way to do full-chip verification on the largest designs, and increasingly, surrounding an emulator with software peripherals makes it available for dozens of designers to share.
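
A toy illustration of why so much stimulus is redundant (a hedged sketch: the bin count and the assumption that each random test hits one uniformly chosen coverage bin are invented for illustration, and have nothing to do with how Mentor’s tools actually work):

```python
import random

# Coupon-collector style experiment: once most coverage bins have been hit,
# purely random stimulus spends nearly all its time re-hitting old bins.
random.seed(0)
N_BINS = 1000
hit, tests = set(), 0

for target in (0.50, 0.90, 0.99):
    while len(hit) / N_BINS < target:
        hit.add(random.randrange(N_BINS))   # each random test hits one bin
        tests += 1
    print(f"{target:.0%} coverage after {tests:5d} random tests")

# Typically ~700 tests reach 50%, ~2300 reach 90% and ~4600 reach 99%:
# the last few percent cost more than everything before them, which is the
# redundancy that coverage-directed stimulus generation tries to avoid.
```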

Third, physical verification beyond DFM. Calibre’s PERC (programmable electrical rule checking) allows much more than simple design rules to be checked: power, ESD, electromigration, or whatever you program. 3D chips also require additional rule checking capability to ensure that bumps and TSVs align correctly on different die and so on.

Fourth, DFT beyond just compression. Integrating BIST with compression and driving compression up to 1000X. Moving beyond the stuck-at model and looking inside cells for all the possible shorts and opens, which catches a lot more faulty parts that pass the basic scan test. 3D chips, again, require special approaches to test to get the vectors to the die that are not directly connected to the package.

Fifth, system design beyond PCB. This means everything from ESL and the Calypto deal, to chip-package-board co-design.

Mentor also has products even further off the beaten track: wiring design for automotive and aerospace, heat simulation, thermal analysis of LEDs. Golf club design?

Well, something is working. Mentor have gone from having leading products in just 3 of Gary Smith EDA’s categories to 17 today, on a par with Synopsys and Cadence. And, of course, last year was Mentor’s first $1B year, making Mentor the #2 EDA company.


Cadence support for the Open NAND Flash Interface (ONFI) 3.0 controller and PHY IP solution + PCIe Controller IP opening the door for NVM Express support
by Eric Esteve on 04-11-2012 at 10:19 am

The press release about ONFI 3.0 support was issued by Cadence at the beginning of this year. It was a good illustration of Denali’s, and now Cadence’s, long-term commitment to NAND flash controller IP. The ONFI 3 specification simplifies the design of high-performance computing platforms, such as solid state drives and enterprise storage solutions, and consumer devices, such as tablets and smartphones, that integrate NAND flash memory. The new specification defines speeds of up to 400 mega-transfers per second. In addition to the new ONFI 3 specification, the Cadence flash controller and PHY IP also support the Toggle 2.0 specification.

“NAND flash is very dramatically growing in the computing segment and is no longer just for storing songs, photos, and videos,” said Jim Handy, director at Objective Analysis. “The result is that the bulk of future NAND growth will consist of chips sporting high-speed interfaces. Cadence support of ONFI 3 and other high-speed interfaces is coming at the right time for designers of SSDs and other systems.”

If you look at the size of this IP segment and compare the design-start count with DDRn controller IP design starts, it has so far been one order of magnitude smaller. Looking at the design wins made by Cadence on the IP market, you can see that the Denali products have generated 400+ design wins for DDRn memory controllers, while the flash memory design wins are in the 50+ range. To be clear, we are talking about the flash-based memory products used in:

  • Data centers to support Cloud computing (high IOPS need)
  • Mobile PC or Tablets to support “instant on” (SSD replacing HDD)
  • NOT the eMMC and various flash cards

The latter market segment certainly generates many more IP sales, but at only a fraction of the license cost of a flash controller managing the NVM used in a data center or SSD. The Cadence flash memory controller IP family targets the high end of the market.

It’s also interesting to note that Synopsys, which covers most of the interface protocol IP, including DDRn memory controllers where the company enjoys good market share alongside Cadence, does not support flash memory controllers. You may argue that this market segment is pretty small, so why should Synopsys care about it? Simply because it could be the future of the storage market! If you look at storage, you probably think “SATA” and hard disk drives (HDDs)… All HDDs shipped for use inside a PC are SATA enabled, as are the very few SSDs integrated to replace HDDs in the Ultrabook market. That’s right. But, as a matter of fact, SATA as a standalone storage protocol has reached a limit, a technology limit: SATA 3.0, based on a 6 Gbps PHY, will be the last SATA PHY.

We can guess that SATA, as a protocol stack, will survive, since features like Native Command Queuing (NCQ) are unique to SATA and very efficient at optimizing storage access (whether HDD or SSD). But the PHY is expected to be PCI Express based in the future, under the name “SATA Express”, at least for the PC (desktop, enterprise, mobile) and media tablet segments, where a single lane of PCIe gen-3 will offer 1 GB/s of bandwidth, compared with 0.48 GB/s for SATA 3.0.

Still in storage, but flash based, the current solution for high-IOPS (I/O operations per second) applications is a NAND flash memory controller integrated with an interface protocol, which could in theory be SATA 3.0, USB 3.0 or PCI Express, but which in practice is PCIe, for example x4 PCIe gen-2 offering 20 Gb/s of raw bandwidth, or about 2 GB/s effective.
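
The raw-versus-effective numbers quoted above follow from the published line rates and encoding overheads; here is the arithmetic (a minimal sketch that ignores protocol overhead above the line encoding):

```python
# effective bandwidth (GB/s) = lanes * line rate (GT/s) * encoding efficiency / 8
def effective_gb_per_s(lanes, line_rate_gtps, payload_bits, coded_bits):
    return lanes * line_rate_gtps * (payload_bits / coded_bits) / 8

links = {
    "SATA 3.0 (1 lane, 6 GT/s, 8b/10b)":  (1, 6.0, 8, 10),
    "PCIe gen-2 x4 (5 GT/s, 8b/10b)":     (4, 5.0, 8, 10),
    "PCIe gen-3 x1 (8 GT/s, 128b/130b)":  (1, 8.0, 128, 130),
}
for name, args in links.items():
    print(f"{name:37s} ~{effective_gb_per_s(*args):.2f} GB/s")

# SATA 3.0 works out to ~0.6 GB/s at the line-encoding level; the 0.48 GB/s
# quoted above presumably also folds in higher-level protocol overhead.
# PCIe gen-2 x4 gives 20 Gb/s raw and ~2 GB/s effective; PCIe gen-3 x1 ~1 GB/s.
```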

Here, the emerging standard is NVM Express, which will ratify the solution already in use and probably define a roadmap to support the higher bandwidth needs associated with the growth of cloud computing.

Using NAND flash devices has a cost: accessing a specific memory location gradually degrades the device (at that location), especially for multi-level cell (MLC) flash. The effect is amplified in flash devices manufactured at smaller technology nodes, and it gets worse as you move to higher-capacity devices, since those are built on the smallest nodes. In other words, the more you use an SSD, the greater the risk of generating an error. Cadence implements sophisticated, highly configurable error correction techniques to further enhance performance and deliver enterprise-class reliability. Delivering advanced configurability, low-power capabilities and support for system boot from NAND, the Cadence solution is scalable from mobile applications to the data center. The IP is backward-compatible with existing ONFI and Toggle standards. The existing Cadence IP offering supports the ONFI 1, ONFI 2, Toggle 1 and Toggle 2 specifications, and also provides asynchronous device support. Cadence also offers supporting verification IP (VIP) and memory models to ensure successful implementation.

The move from SATA-based storage to SATA Express-compliant HDDs and NVM Express-compliant SSDs will certainly change the storage landscape, as well as the positioning of IP vendors. Synopsys is well positioned in the SATA IP and PCI Express IP segments, while Cadence does not support SATA IP but does support NAND flash and PCI Express controller IP. With the emergence of “SATA Express” and “NVM Express”, it will be a new deal for IP vendors, and interesting to monitor!

By Eric Esteve from IPNEST


Analog Automation – Needs Design Perspective
by Pawan Fangaria on 04-11-2012 at 7:00 am

Recently I was researching the keynote speeches of the ISQED (International Symposium on Quality Electronic Design) 2012 and saw the very first presentation, a great one: “Taming the Challenges in Advanced Node Design” by Tom Beckley, Sr. VP at Cadence. I know Tom very well, as I have worked with him, and I admire his knowledge, authority and leadership in the analog and mixed-signal domain. Inspired by his presentation, I became curious and went on to read his detailed speech, written up as blogs on the EDA360 page. It was a great pleasure going through the astounding collection of details and ideas there. I read it twice, very closely.

As custom IC, AMS and physical design have been my core expertise, it reminded me of one of my articles, “Need and Opportunity for Higher Analog Automation”, from early February. In that article my emphasis was on systematic variation and layout-dependent effects at lower nodes, which are primarily analog-specific, depending on device parameters and their relative placement, making it essential to automate the detection and correction of such effects early in the design cycle. It is heartening to see Cadence coming up with a rapid prototyping methodology to develop design building blocks called ‘modgens’ which account for layout-dependent effects, parasitics and new P&R rules at 20nm; the methodology even uses double patterning technology to increase the routing pitch.

It is a great methodology that can address mega-function generation, employing automatic detection of analog structures such as current mirrors and differential pairs from the schematic and generating layout, followed by extraction and verification. Using these building blocks for higher-level design can serve the purpose, but not in all cases. Preserving the placement constraints to re-construct the layout after changes is fine up to the ECO level, but it cannot be employed for large changes. If we look at the problem from a design perspective, in today’s context at 20nm a typical analog IP block can be big, in the range of 40,000 to 50,000 transistors. Knowing that analog design can be an ocean of secrets, all of that may not fit neatly into the scheme of rapid-prototyping building blocks abstracted and then assembled. Even if 75% to 80% of it fits into building-block-level automation, the remaining 20% (about 10,000 transistors) needs to be done by hand, which is a substantial task considering the effects and complexities of each transistor at 20nm that we talked about. At 14nm it is going to be tougher. This leaves us in the same place we were, with manual looping between circuit and layout.

Considering the design perspective, what is needed is a general approach to placing and routing the analog design with due attention to 20nm issues. In my article I had also talked about the need for a general approach to analog automation based on an open-standard analog constraint format which includes design constraints for symmetry, matching, shielding, placement, floorplanning, routing, clocking, timing, electrical behavior and so on. Of course, the abstraction approach solves the problem to a great extent, but for completeness the design needs general automation applicable to the whole design. Once that becomes available, it can take centre stage.

Comments from design and EDA community are welcome. I would be happy to know if there are more new ideas for analog automation.

By Pawan Kumar Fangaria
EDA/Semiconductor professional and Business consultant
Email:Pawan_fangaria@yahoo.com


EDPS: 3D ICs, part I
by Paul McLellan on 04-10-2012 at 10:00 pm

The second day (more like a half-day) of EDPS was devoted to 3D ICs. There was a lot of information, too much to summarize in a few hundred words. The keynote was by Riko Radojcic of Qualcomm, who has been a sort of one-man-band attempting to drive the EDA and manufacturing industries towards 3D. Of course it helps if you don’t just have a sharp arrow but the wood of Qualcomm behind it. Curiously, though, Qualcomm themselves have been cautious in actually using 3D IC technology. Possibly they have done some test chips but I don’t know of any parts that they have in production. Other presentations were by Altera, Mentor and Cadence, plus a panel discussion.

Those who have been in hi-tech for years know about the hype cycle, which I believe originated at Dataquest. Gartner, Dataquest’s heir, reckons TSVs are now on the slope of enlightenment, where things become real. At the top of the hype cycle are nanotube electronics, wireless power and Occam processes (whatever they are).

There are a number of varieties of 3D IC, with different challenges for both EDA and manufacturing. If you include package-on-package (POP) then you probably have a 3D IC in your pocket: almost all smartphone SoCs, including Apple’s A4/A5 series, have memory and logic in the same package. POP does not require through-silicon vias (TSVs) since it is all wire-bonded inside the package.

One thing I learned is that while everyone talks about copper TSVs, the small TSVs that are so far in use are actually tungsten. Nobody knows how to make copper TSVs that small. There is actually a reasonable possibility that we will end up with two types of TSV: tungsten for signal, small and tightly spaced, and copper for power supply and perhaps clock, much bigger and with higher current capacity.

One big development earlier this year was the finalization of the JEDEC Wide I/O standard, which allows memory to be stacked on top of logic, so-called memory-on-logic (MOL). With much shorter distances, lower capacitance and a 512-bit-wide interface, this allows a much higher bandwidth, lower power connection between the processor and memory. The memory is also designed to be stacked, with more than one memory die. In fact, all the memory suppliers have already announced various forms of memory stacks.
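
For a sense of scale, the bandwidth follows directly from interface width and clock rate (a minimal sketch; the 512-bit width is from the paragraph above, while the 200 MHz single-data-rate clock and the LPDDR2 comparison point are commonly quoted figures assumed here for illustration):

```python
# peak bandwidth (GB/s) = width_bits * clock (MHz) * transfers per clock / 8 / 1000
def peak_gb_per_s(width_bits, clock_mhz, transfers_per_clock=1):
    return width_bits * clock_mhz * transfers_per_clock / 8 / 1000

wide_io = peak_gb_per_s(512, 200)      # Wide I/O: 512 bits, 200 MHz SDR (assumed)
lpddr2  = peak_gb_per_s(32, 533, 2)    # 32-bit LPDDR2-1066 via PoP (assumed)

print(f"Wide I/O MOL : ~{wide_io:.1f} GB/s")   # ~12.8 GB/s
print(f"LPDDR2 PoP   : ~{lpddr2:.1f} GB/s")    # ~4.3 GB/s, at higher I/O power
```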

Interposer based designs, so called 2.5D, have the big advantage of not requiring TSV except through the interposer itself (unless there is a memory stack). As everyone who watches 3D ICs probably knows, Xilinx is shipping a high-end FPGA using silicon interposer technology, perhaps the only high(ish) volume product in production.

All camera chips are also 3D of some sort, with the CCD device on top of the image processing chip. There was an inconclusive debate as to how many of these use TSVs and how many use another technology whereby the signals are brought around the edge of the thinned die rather than going through it.

Thermal is going to be a big issue. Mobile chips are low power but they are not that big, so the power density and thus many of the thermal problems are the same as for server CPUs. IBM is looking at liquid cooling through microchannels. That has its own set of challenges and may or may not work out, but for sure that is not going to be the solution in your phone.

The motivation for 3D is a mixture of economics and what has come to be called “More than Moore”. We can no longer continue to scale CMOS the way we have been doing, and we are falling off the 30%-per-year cost reduction treadmill. It is still an open question whether we have a post-optical lithography process that we can manufacture economically, EUV being the most promising and e-beam the only other serious alternative. High-density TSVs and 3D stacking offer an alternative way to keep Moore’s law on track (think of Moore’s law as being about transistors per volume; it is just that we only used one die for the first 50 years). It is the first of several disruptive technology changes that substitute for simply scaling our lithography.

But, as with any disruptive technology, you don’t get it for free. It is disruptive. You need new tools, a new supplier ecosystem. There are new sorts of interactions you need to worry about such as where TSVs can be placed, how they affect silicon through stress, managing electrical and thermal coupling in the stack and…and…

The roadmap, at the highest level, goes through POP (in production today), to wide-I/O MOL, and eventually to logic-on-logic (LOL).


The best graphics chip is the one seen the most
by Don Dingee on 04-10-2012 at 2:48 pm

If I say “graphics chip”, most techies will say NVIDIA or AMD. But in the new post-PC world, neither of these players holds the key to the future. One that does is a little company making 43 cents on every latest-version iPad and iPhone. Another is designing its own approach. Should you care what graphics is in your phone? Continue reading “The best graphics chip is the one seen the most”


MEMS and IC Co-design
by Daniel Payne on 04-10-2012 at 11:37 am

This morning I attended a webinar about MEMS and IC co-design from a company called SoftMEMS along with Tanner EDA. I learned that you can co-design MEMS and IC either in a bottom-up or top-down methodology, and that this particular flow has import/export options to fit in with your mechanical simulation tools (Ansys, Comsol, Open Engineering) as well.
Continue reading “MEMS and IC Co-design”


Oasys Gets Funding from Intel and Xilinx
by Paul McLellan on 04-10-2012 at 8:00 am

Oasys announced that it closed its series B funding round with investments from Intel Capital and Xilinx. The fact that any EDA company has closed a funding round is newsworthy these days; companies running out of cash and closing the doors seems to be a more common story.

Oasys has been relatively quiet, which some people have taken to mean that nobody is using RealTime Designer, their synthesis tool. But in fact they have announced that the #2-4 US semiconductor companies, namely Texas Instruments, Qualcomm and Broadcom (via its acquisition of Netlogic), are customers. As is Xilinx, the #1 FPGA vendor, or vendor of programmable platforms as they seem to want to be known. Now, with Intel, Oasys have filled out the enviable position of having relationships with the top 4 US semiconductor vendors and the top FPGA vendor. These are the companies doing many of the most advanced designs today.

On the SoC side, Oasys have tapeouts at both 45nm and 28nm already. Ramon Macias of Netlogic (now part of Broadcom) said publicly nearly a year ago that they had already taped out their first 45nm design and were by then using RealTime Designer on 28nm designs. On the programmable platform side, Xilinx licensed Oasys’s technology a couple of years ago and has been using it internally. They have “achieved excellent results across a wide range of designs.”

Chip Synthesis is a fundamental shift in how synthesis is applied to the design and implementation of integrated circuits (ICs). Traditional block-level synthesis tools do a poor job of handling chip-level issues. RealTime Designer is the first design tool for physical register transfer level (RTL) synthesis of 100-million gate designs and produces better results in a fraction of the time needed by traditional logic synthesis products. It features a unique RTL placement approach that eliminates unending design closure iterations between synthesis and layout.


EDA Industry Talks about Smart Phones and Tablets, Yet Their Own Web Sites are Not Mobile-friendly
by Daniel Payne on 04-09-2012 at 12:55 pm


As a blogger I write weekly about the EDA industry, and certainly our industry enables products like smartphones and tablets to exist at all. But if we really believe in these mobile devices, what should our own web sites look like on a mobile device?

It’s a simple question, yet I first must define mobile-friendly before sharing what I discovered. Here’s what I consider to be a mobile-friendly web site:

  • On a Smart Phone or Tablet it means maximum content and a minimum of graphics, because I don’t want to go over my data plan limit.
  • On a Smart Phone or Tablet I only want to scroll vertically, not horizontally.
  • I don’t want to pinch, zoom, scroll, pinch, zoom, scroll, double-tap. That is too much work.
  • Navigation should be near the top of the browser page, and large enough that my fingers select the correct menu.
  • Search is essential to allow me to get the content I need quickly, because on mobile I’m in a hurry.
  • The site looks and works well in either landscape or portrait orientations.

Mobile-friendly web site design is not:

  • An identical experience to the desktop.

Now that we know what mobile-friendly web site design is all about, let’s see if any EDA company has optimized their web site so that mobile visitors have a pleasant experience. Today I visited the following sites and can report that NO major EDA company has a mobile-friendly web site. What a disappointment. Kudos to ARM for leading the way on supporting mobile devices with their web site.

EDA and IP sites checked for mobile friendliness:

  • www.synopsys.com
  • www.cadence.com
  • www.mentor.com
  • www.ansys.com
  • www.atrenta.com
  • www.tannereda.com
  • www.agilent.com
  • www.aldec.com
  • www.arm.com
  • www.apsimtech.com
  • www.atoptech.com
  • www.berkeley-da.com
  • www.calypto.com
  • www.chipestimate.com
  • www.ciranova.com
  • www.eve-team.com
  • www.forteds.com
  • www.gradient-da.com
  • www.helic.com
  • www.icmanage.com
  • www.jasper-da.com
  • www.lorentzsolution.com
  • www.methodics-da.com
  • www.nimbic.com
  • www.oasys-ds.com
  • www.pulsic.com
  • www.realintent.com
  • www.sigrity.com
  • www.solidodesign.com
  • www.verific.com
I could’ve researched more EDA and IP sites, but I think you can see the clear trend here: we talk about the mobile industry but don’t apply mobile-friendly design to our own web sites.

What about the media sites that write about our industry? They didn’t fare much better on being mobile-friendly:

EDA media sites checked for mobile friendliness:

  • www.semiwiki.com
  • www.eetimes.com
  • www.garysmitheda.com
  • www.marketingeda.com
  • www.eejournal.com/design/fpga
  • www.edacafe.com
  • www.deepchip.com
  • www.chipdesignmag.com
  • www.dac.com
The one media site that is mobile-friendly belongs to me, and I converted it in about one hour of effort.

High-volume sites are mostly optimized, with a few notable exceptions:

Popular web sites checked for mobile friendliness:

  • www.apple.com
  • www.tsmc.com
  • www.intel.com
  • www.samsung.com
  • www.cnn.com
  • www.google.com
  • www.facebook.com
  • www.twitter.com
  • www.techcrunch.com
  • www.engadget.com
Making a Web Site Mobile Friendly
Web sites that are dynamic and template-driven can be quickly adapted using Cascading Style Sheets (CSS) to detect the size of the browser and then serve up pages that are formatted specifically for it. For those of you who are curious and a bit geeky, the basic approach is to use a style sheet that detects the orientation and size of the browser for three configurations: smartphone, tablet and desktop.

Beyond CSS, there is one more trick you have to add in the header of each web page to force the mobile browser to report its real dimensions:
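
What follows is a hedged reconstruction rather than the exact snippet: the trick being described is, in all likelihood, the viewport meta tag, shown here alongside media queries of the kind just discussed (the breakpoint values are illustrative assumptions, not any particular site’s stylesheet):

```html
<!-- Illustrative sketch, not an exact copy of any production site. -->
<!-- In the page <head>: make the mobile browser report its true width -->
<!-- instead of emulating a ~980px-wide desktop screen.                -->
<meta name="viewport" content="width=device-width, initial-scale=1">

<style>
  /* Hypothetical breakpoints for the three configurations: desktop styles
     are the default, then tablet and smartphone overrides. */
  @media (max-width: 1024px) {      /* tablet */
    img { max-width: 100%; height: auto; }
    nav a { font-size: 1.2em; }     /* larger touch targets */
  }
  @media (max-width: 480px) {       /* smartphone */
    .sidebar { display: none; }     /* drop heavy graphics and side columns */
    body { margin: 0; }
  }
</style>
```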

Hopefully I can report back one year from now and show some marked improvement from the EDA and IP industries in getting their web sites to be mobile-friendly.