
Google Datacenter
by Paul McLellan on 10-22-2012 at 5:42 pm

In my blog about Intel’s latest results I linked to an interesting article in Wired about Google’s datacenters.

I happened to be browsing some websites in the Netherlands (actually I don’t speak a word of Dutch; a Dutch friend pointed it out to me) and there is an article showing how the pictures that accompany the Wired article have been photoshopped. You don’t need to be able to read Dutch to get the basic idea: the pictures are animated to show where one side of most of them was cut and pasted (after reflection) from the other side.

You can run the whole article through (irony of ironies) Google Translate to get a version in bad English (double Dutch?).


Hybrids on BeO then, 3D-IC in silicon now
by Don Dingee on 10-21-2012 at 8:10 pm

Once upon a time (since every good story begins that way), I worked on 10 kg, 70 mm diameter things that leapt out of tubes and chased after airplanes and helicopters. The electronics for these things were fairly marvelous, in the days when surface mount technology was in its infancy and was having reliability problems in some situations.

One of the problems with surface mount in the early going was the coefficient of thermal expansion (CTE), or more accurately the difference in CTE between the ceramic packages needed for defense-style temperature range requirements (-55 to +125C) and the FR-4 fiberglass most printed circuit boards were constructed from. With a few heating and cooling cycles, the ceramic packages would grow or shrink at a different rate than the board underneath them, stressing the solder joints and causing cracks or breaks. BGA, solder balls, and other fine-pitch techniques were yet to be invented.
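
For a sense of scale, here is a back-of-the-envelope calculation of that mismatch. The CTE and dimension values are typical handbook numbers I'm assuming for illustration, not figures from any specific program:

```python
# Thermal-expansion mismatch between a ceramic package and an FR-4
# board. CTE values are typical handbook figures (assumed here):
# alumina ceramic ~6.5 ppm/C, FR-4 in-plane ~16 ppm/C.
alpha_ceramic = 6.5e-6   # 1/C
alpha_fr4     = 16.0e-6  # 1/C
span          = 25e-3    # m, distance across the outermost solder joints
delta_t       = 180.0    # C, the full -55C to +125C swing

mismatch = (alpha_fr4 - alpha_ceramic) * span * delta_t
print(f"relative displacement per cycle: {mismatch * 1e6:.0f} um")
# ~43 um of shear that the solder joints must absorb every cycle,
# which is how the cracks start.
```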

The solution for dense electronics in small places with wicked temperature extremes was hybrid microelectronic assemblies, and with some improvements in materials and process it still is today. The “for dummies” (and I resemble that remark) version:

1) Print the circuit on a ceramic substrate. At the time, the technology for substrates was beryllium oxide, viciously toxic in particle form when inhaled, but quite safe made into non-porous substrates. (One designer I worked with had a BeO coffee mug he drank from every day to prove the point.) BeO also has super high thermal conductivity, providing a conduction cooling path. Today, you’re more likely to find aluminum nitride (AlN) in use.

2) Drop chips in raw die form onto the substrate in their proper locations.

3) Bond the pads on each chip to the corresponding pads on the substrate with thin gold wires – pretty much the same thing done inside a single IC package, except on a much larger scale with a lot of various dies and connections.

4) Put the finished circuit substrate into a Kovar metallic case, with I/O pins, and seal the edge with a weld so it’s hermetic.

5) Solder several hybrids to a flex harness providing interconnects between hybrids and connectors to other subsystems to make up the final assembly.

The lesson from electronics history is good ideas don’t go away when they are supplanted by innovation; they come back when a similar problem arises again on a smaller scale.

The idea of 3D-IC has been percolating for some time, and it’s the modern version of hybrids. The scale and materials are different, but as the TSMC name suggests – CoWoS, chip on wafer on substrate – it’s the same concept, minus wires and metal cases, and implemented completely in an EDA flow. This isn’t just about getting more stuff in less space by better utilizing the Z axis, as the 3D name would imply. It’s about using the right process for the right function. Using silicon micro-bumping and through-silicon vias (TSVs), a complete subsystem in proven silicon can be installed on a newly designed piece of 20nm digital logic. The EDA breakthrough will be making that a smooth flow instead of manual design and extra process steps.

With all the chatter about 28nm, 20nm, 14nm, and beyond, many folks might have lost sight of the fact that analog processes are nowhere near those geometries, and they don’t need to be. They are built on mature, low-risk, low-noise process nodes. While analog is obviously involved in A/D and D/A converters, there are also MEMS sensors, networking PHYs, and wafer-scale cameras and microphones that can all take advantage of a 3D process without having to be redesigned into a cutting-edge geometry. Sematech has summarized this point nicely.

Memory subsystems are also becoming decidedly more analog in their signaling characteristics as speeds increase. Our Eric Esteve wrote about this earlier in a post discussing Cadence’s JEDEC Wide I/O mobile DRAM IP and its target of 100Gbit/sec of DRAM bandwidth. Taiwan’s Industrial Technology Research Institute (ITRI) and TSMC both recently reported working with Cadence to tape out Wide I/O designs and prove out the new CoWoS flow.

If you missed the first round of hybrids, the idea is back, and it’s all in silicon this time. 3D-IC opens up a whole new range of possibilities for SoC design, not unlike what we’ve already seen at the microcontroller level on less aggressive process nodes with integrated mixed-signal EDA flows. A microcontroller-on-steroids, with a much faster digital core, memory subsystems, and multiple analog I/O systems, blending mature analog process nodes with advanced digital nodes, is close at hand.


Why Blog on SemiWiki.com?
by Daniel Nenni on 10-21-2012 at 7:00 pm

The Semiconductor Wiki Project, the premier semiconductor collaboration site, is a growing online community of professionals involved with the semiconductor design and manufacturing ecosystem. Since going online on January 1st, 2011, more than 400,000 unique visitors have landed at www.SemiWiki.com, viewing more than 3M pages of blogs, wikis, and forum posts. WOW!

Anybody can blog on SemiWiki and quite a few people do for personal fulfillment and professional enrichment.

Today everything and everyone is connected and crowd-sourced. In fact, all social media, from blogs to forums and wikis, has a profound impact on how people communicate, search for information, and make decisions. Both personally and professionally, social media is no longer an experiment or a moonlighting function; it is now an integral part of how we communicate.

Blogging is a life-changing experience. Blogging is personal branding and can take you from relative obscurity to internationally recognized industry professional. Blogging is mind-expanding and develops communication skills you may never have known you had. One warning, however: blogging is addictive. Once you turn it on it is very hard to turn off.

For me, blogging is where I think, plan, and reflect. Blogging encourages me to research, gather credible information, and test hypotheses. Blogging is sometimes dangerous and nurtures my risk-taking side, but it is also extremely collaborative and provides a real-time feedback loop never before possible. Most importantly, my wife reads my blogs, and now, after 30+ years together, she actually knows what I do for a living besides sitting in my La-Z-Boy with my laptop. She now knows what a semiconductor is, what EDA means, and why semiconductor IP is so important to our everyday lives.

As a SemiWiki blogger you will get personal invitations to industry conferences, seminars, and webinars. You will get exposure and access to semiconductor professionals at all levels, from CEOs to CTOs to engineers, marketing, sales, and public relations people, the entire semiconductor ecosystem at your fingertips.

5 things you should know about SemiWiki:

1) SemiWiki is global. Your experience here will be from around the world with an incredible amount of information at your fingertips. Make sure you connect and interact, and make sure you engage at all levels.

2) Build relationships and network. You can truly connect here with people whom you have not met. Make friends and create a global support system for your professional life.

3) Take the good and the bad. Distinguish between fact and opinion, objective and subjective. People will either like or dislike your posts, and there is something to be learned from both.

4) Don’t be evil. Top influencers have one thing in common: they use their influence for the greater good.

5) Be yourself. Impersonating others online is a crime, so just be yourself. Share your knowledge, share your profession, share your passion. You don’t have to be an expert or industry icon to be a top influencer on SemiWiki.

Take control of your social media destiny, join SemiWiki, and start blogging today! If you need further convincing, feel free to contact me directly on LinkedIn: http://www.linkedin.com/in/danielnenni It would be a pleasure to link with you and share my 13,000+ connections.


TSMC OIP Forum 2012 Trip Report!
by Daniel Nenni on 10-21-2012 at 6:00 pm

The second annual TSMC Open Innovation Platform (OIP) Ecosystem Forum was last week and let me tell you it was excellent: a great update on the TSMC process technology roadmaps, great for networking within the fabless semiconductor ecosystem, great for seeing what’s new in EDA and IP, and great for SemiWiki. It was time well spent for sure. You can see my TSMC OIP 2011 trip report HERE for reference.


The opening video was excellent this year! It was all about collaboration, of course, and an orchestra is a perfect example. My wife played first-chair violin so this theme really clicked with me. (Last year’s theme was a rowing team, which did not.) You can see the symphony video HERE.

    First up was Rick Cassidy. Rick is President of TSMC North America. Prior to joining TSMC in 1997 Rick was Vice President and General Manager of National Semiconductor’s Military and Aerospace Division. He joined National in 1979. Before that, Rick was an officer in the U.S. Army. He earned his Bachelor of Science degree from the United States Military Academy at West Point.

According to Rick, attendance was up from last year, which matches what I saw. I counted 1,008 seats in the main auditorium and estimate that 95% of them were taken. This does not include the partners manning the booths in the exhibition room next door.

    Rick presented the TSMC vision and mentioned some interesting numbers:

• TSMC has more than 5,000 silicon-validated IP blocks available today. WOW! I have been through the TSMC silicon validation process many times and let me tell you it is rigorous, to say the least.

• TSMC has invested $1.5B in design enablement thus far in 2012!

• In 1987, TSMC had one fab, a $20M CAPEX, 30 products, and shipped 3,600 wafers.

• In 2012, TSMC has 11 fabs, 5,498 different technologies, 12,569 products, a $50B CAPEX, 615 customers, and a 15.3M wafer capacity!

Rick mentioned that his decision to join the semiconductor industry was based on the opportunity to change the world. I wish I could say the same. 30 years ago I was a starving college student and my decision was financial: I knew there was big money to be made in Silicon Valley and I wanted some. Looking back, however, we did change the world, and there is still plenty of money to be made in doing so.

Next up was Dr. Mark Liu. Mark is TSMC’s Executive Vice President and Co-Chief Operating Officer. He joined TSMC in 1993 as an Engineering Manager. Prior to that, Mark served in a number of technical capacities, first with AT&T Bell Laboratories as a principal investigator in High Speed Electronics Research and later at Intel Corporation, where he developed process technologies for Intel’s 32-bit microprocessors and flash memory products. Mark is a member of the Board of Directors of Silicon System Manufacturing Company in Singapore. He received his Ph.D. in electrical engineering and computer science from the University of California, Berkeley.

I met Mark when I toured Fab 12 in 2010; I blogged about it HERE. A memorable experience for sure. Mark ramped up TSMC’s first 200mm fab in 1993 and has been building fabs for TSMC ever since. Mark talked about “The Internet of Things” and what 2030 will look like. Mark also stated that:

• The TSMC 20nm design ecosystem (EDA and IP) is available today

• 20nm is close to complete and will be in production next year

• TSMC will have three fabs for 20nm

Next up was Dr. Cliff Hou, Vice President of R&D. Cliff’s door and mind are always open for new technology discussions and debates on the future of the semiconductor ecosystem. Cliff joined TSMC in 1997 and was appointed TSMC’s Vice President of Research and Development (R&D) in 2011. He was previously Senior Director of Design and Technology Platform, where he established the company’s technology design kit and reference flow development organizations. He also led TSMC’s in-house IP development teams from 2008 to 2010. Cliff holds 20 U.S. patents and serves as a board member of Global Unichip Corp. He received his Ph.D. in electrical and computer engineering from Syracuse University.

    Cliff added that:

• 20nm engagements with partners and customers started much earlier

• TSMC overcame 20nm challenges through collaboration

• 16nm FinFET will require even deeper collaboration

Cliff also mentioned that at 40nm, partners and customers started design work when the PDK was at release 0.5; at 28nm, design work started at PDK 0.1; at 20nm, at PDK 0.05; and at 16nm it will start at PDK 0.01. The 20nm PDK 1.0 and 20nm foundation IP are silicon validated and available today, with customer tape-outs expected in Q1 2013. The 16nm PDK 0.1 will be available in Q1 2013, with the production version, PDK 1.0, scheduled for Q4 2013.

The most interesting thing for me was the FinFET discussions, and there were plenty of them, which I will blog about separately. For those of you who don’t know about FinFETs, start with the FinFET Wiki. 2013 will be the year of the FinFET, absolutely!


A Brief History of Aldec
by Daniel Payne on 10-20-2012 at 5:31 pm

Dr. Stanley Hyduke founded Aldec in 1984, and its first product, delivered in 1985, was SUSIE (Standard Universal Simulator for Improved Engineering), a gate-level, DOS-based simulator. SUSIE was priced lower than tools from the big three EDA vendors: Daisy, Mentor, and Valid (aka DMV). Aldec maintains a global network of regional offices and is the only EDA company to have its corporate headquarters located in Nevada.


DAC: It’s the Last Week for Many Submissions
by Paul McLellan on 10-19-2012 at 2:36 pm

The deadline is coming up at the end of next week (technically on Monday, October 29th, for those of you who like real brinkmanship) for several aspects of DAC: not the submission of papers for the conference itself, but most of the less academically oriented things.

    Proposals for:

    • Special Sessions
    • Tutorials
    • Panel sessions (in the conference itself)
    • Pavilion panel sessions (in the exhibit hall)
    • Workshops

    must all be submitted by the 10/29 cutoff.

One of the things that I think has been a really big positive for DAC is the Pavilion in the exhibit hall. The Pavilion panel sessions are usually really interesting, and they are often standing room only. I’m sure next year will be the same, and if you have good ideas for Pavilion panel sessions then next week is the time to get them down on paper.

For those readers who are also DAC exhibitors, there is also an optional meeting a week or so later, on Wednesday, November 7th. DAC is in Austin next year (you knew that, right?) but this meeting is 3:30-4:30 on 11/7 in the San Carlos Room at the Hilton San Jose (the one by the convention center). They’ll tell you what is planned and take suggestions for other things they should be planning.

And for those of you who really like to plan ahead, the 51st and 52nd DACs are both in San Francisco, June 1-5, 2014 and June 8-12, 2015. Put them on your calendar. That gives me plenty of time to do extensive research to update my blogs on the best bars and restaurants to visit during DAC.


Intel Quarterly Report: Needs to Do Better
by Paul McLellan on 10-19-2012 at 11:51 am

Intel announced its quarterly results a couple of days ago. It had previously downgraded its third-quarter sales estimates but managed to beat the downgraded numbers. If you look at the transcript of the call (I didn’t listen live) you’ll see very little mention of mobile and Atom. This is bad news for Intel. Its core business is the PC, and the PC business is going nowhere: not going away, but not going to be the catalyst for future growth.

Intel’s sales in Q3 were down 5% from 2011Q3 and flat from last quarter. There is some short-term hope of Windows 8 driving an imminent corporate PC upgrade cycle, but overall PC sales are forecast to fall in 2012 versus 2011. Some of that can be blamed on the weak economy (“not many sales in Greece”, although actually nobody said that).

Looking under the PC hood, datacenter revenue is up 6% on 2011Q3 but down from Q2. The client part (notebooks, desktops, etc.), however, is down 8% from 2011Q3. In fact, someone told me anecdotally that the biggest end-user of Intel chips is Google. And it wouldn’t surprise me if Amazon, Apple, Oracle, Salesforce, and the rest of the big datacenter crowd occupy the other top spots. They are increasingly worried about the energy efficiency of computation, and a general-purpose high-power processor is not the sweet spot; it is just the easiest spot to manage.

Intel’s big challenge is that it can see that the PC market is in secular decline outside of the datacenter. There are also storm clouds over the datacenter that might eventually impact even that revenue. Read the Wired article about going inside Google’s datacenter (they’ve fixed the funny captioning where they called a cooling plant the server room and vice-versa) and you’ll see this quote:

“So far, though, there’s one area where Google hasn’t ventured: designing its own chips. But the company’s VP of platforms, Bart Sano, implies that even that could change. ‘I’d never say never,’ he says. ‘In fact, I get that question every year. From Larry.’”

    If your largest end-user is thinking about designing you out, you worry. And there are similar stories about Apple designing their own microprocessors, perhaps even picking up AMD for a bit of its spare change to avoid legal hassles.

Intel’s Ultrabook program (MacBook Airs that run Windows) doesn’t seem to be getting a lot of traction yet, although some of them look very…well, just like a MacBook Air. But they are not compellingly cheaper. I wouldn’t want to pretend that a hipster coffee shop in the Mission in San Francisco is representative of the world, but you never see anything but Apples in there. It is not clear whether they are competing with a MacBook Air or an iPad anyway.

Intel’s big problem is mobile. And the problem is two-fold. Firstly, Intel isn’t yet a force in mobile, although it does have a few wins. Secondly, even if it won large market share, I don’t see how it can survive on the margins it would get. This is made worse by the fact that Apple and Samsung take all the profit in handsets. Samsung is not going to stop building its own chips, so it is ARM (or at least not Intel) forever. Apple is probably never going to change from ARM, but even if it did, it would not be on the basis that Intel gets its traditional PC margins on whatever Ax chip it is.

So what about Intel’s manufacturing lead? It is certainly real at 22nm: they are shipping product in volume and nobody else is. I know nothing about Intel’s costs, but in the merchant foundry industry all the evidence is that 20nm is going to be much more expensive than 28nm. Perhaps 4X the cost for a wafer, so around 2X the cost for the same functionality. That will come down over time, probably, with process learning and yield improvement, but I doubt the ratio will get compellingly below 1, meaning 28nm will remain roughly the same cost per function. If Intel’s costs are similar, that is fine for the datacenter business, which can withstand the cost to get the functionality, but it will not work for the smartphone business, especially at the low end, the sub-$100 smartphone.
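
For anyone who wants to check that arithmetic, here is a minimal sketch. Both inputs are assumptions, not Intel or TSMC numbers: the 4X wafer cost is the rough estimate above, and the 2X density gain is the usual full-node scaling rule of thumb:

```python
# Rough cost-per-function arithmetic for 20nm vs 28nm.
# Both inputs are estimates, not measured data.
wafer_cost_20_vs_28 = 4.0   # a 20nm wafer costs ~4x a 28nm wafer (estimate above)
density_20_vs_28    = 2.0   # ~2x the transistors per mm^2 (full-node assumption)

cost_per_function = wafer_cost_20_vs_28 / density_20_vs_28
print(cost_per_function)    # 2.0 -> the same functionality costs ~2X at 20nm
```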

Fundamentally, Intel assumed that whatever came after the PC, Windows binary compatibility, especially Microsoft Office, would be the key to the future, and only Intel owned the lock. But that doesn’t seem to be true. As I wrote a few years ago (in pre-iPad days, when the post-PC device was still being called a netbook): “My gut feel is that the netbook will be more like a souped-up smartphone than a dumbed-down PC and so Atom will lose to ARM. The smartphone and netbook markets will converge. Microsoft will lose unless it ports to ARM. There will be no overall operating system winner (like smartphones).”

Apart from swapping out the term “netbook,” there is not much to change about that, and some of it has already come to pass.


A Brief History of Mobile: Generations 3 and 4
by Paul McLellan on 10-18-2012 at 8:30 pm

The early first-generation analog standards all used a technique known as Frequency Division Multiple Access (FDMA). All this means is that each call was assigned its own frequency band in the radio spectrum. Since each band was allocated to only one phone, there was no interference between different calls. When a call finished, the band could be re-used for another call; the allocation wasn’t permanent.

GSM uses a technique called Time Division Multiple Access (TDMA). Despite GSM mistakenly being marketed as providing CD-quality sound just because it was digital (it certainly does not), the real advantage of 2G standards was getting four times (initially; up to eight later) as many calls into the same radio bandwidth, which over time would drive down call costs. TDMA works by allocating each call not just to a particular frequency band, as with FDMA, but also to specific time slots within that band. The phones and base station communicate with each other only in those slots, leaving the other slots free for other calls. With the distances and speeds involved, speed-of-light considerations come into play, and the power and precise timing of communication need to be carefully controlled to ensure that one call does not step on another in the neighboring slot.
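
As a minimal sketch of that allocation idea (a toy model of my own, not a real GSM scheduler; the carrier names and counts are made up for illustration):

```python
# Toy TDMA allocator: each carrier frequency is divided into repeating
# frames of 8 time slots, so the unit of allocation is a (carrier, slot)
# pair rather than a whole carrier as in FDMA.
SLOTS_PER_FRAME = 8
carriers = ["carrier-0", "carrier-1"]          # illustrative names

free = [(c, s) for c in carriers for s in range(SLOTS_PER_FRAME)]
active = {}

def place_call(call_id):
    """Assign the next free (carrier, slot) pair to a new call."""
    active[call_id] = free.pop(0)

def end_call(call_id):
    """Return the slot to the pool; the allocation isn't permanent."""
    free.append(active.pop(call_id))

place_call("alice")
place_call("bob")
print(active)          # both calls share carrier-0, in slots 0 and 1
end_call("alice")      # slot ('carrier-0', 0) goes back in the pool
```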

    Most of the other technologies that were adopted in competition with GSM were dead-ends, either technically or simply from a business scale point of view. But one technology, used by Verizon and Sprint in the US and all carriers in South Korea, turned out to be very significant: CDMA.

CDMA stands for Code Division Multiple Access. The original version is also known as IS-95, and several subsequent versions were known as CDMA2000. An explanation of how CDMA works sounds a bit preposterous: basically, all phones transmit in the same band of frequencies at the same time. Since the bandwidth used for the transmission is much larger than the bandwidth of the signal being transmitted (compressed voice), it is called a spread-spectrum technology.

So how does a phone pick out the one transmission meant for it from the noise of all the other simultaneous transmissions? That is where the “code” in CDMA comes in. Each phone is allocated a unique code, and that code is XORed with the data. The rate of the code is much higher than the data rate, so several bits of code get XORed with each bit of data. The cleverness is that the codes are all mutually orthogonal. Without going into an in-depth mathematical analysis of what that means precisely, the effect is that if a phone attempts to correlate a call with a different code, it correlates to zero, and if it attempts to correlate a call using its allocated code, then it recovers the original signal.
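
Here is a minimal numerical sketch of that trick (my own illustration, using Walsh/Hadamard rows as the orthogonal code set and the usual +/-1 representation, in which XOR becomes multiplication):

```python
import numpy as np

def walsh_codes(n):
    """Build a 2^n x 2^n Hadamard matrix; its rows are mutually
    orthogonal +/-1 code sequences."""
    h = np.array([[1]])
    for _ in range(n):
        h = np.block([[h, h], [h, -h]])
    return h

codes = walsh_codes(3)                 # 8 codes, 8 chips each

# Two phones each spread one data bit with their own code and
# transmit at the same time in the same band; the air sums them.
bit_a, bit_b = +1, -1
rx = bit_a * codes[1] + bit_b * codes[5]

chips = codes.shape[1]
print(np.dot(rx, codes[1]) / chips)    #  1.0 -> recovers phone A's bit
print(np.dot(rx, codes[5]) / chips)    # -1.0 -> recovers phone B's bit
print(np.dot(rx, codes[2]) / chips)    #  0.0 -> wrong code correlates to zero
```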

CDMA is so elegant that it sounds like one of those ideas that are nice mathematically but fail in the real world. After all, signals take different times to reach the phone depending on how far away the base station is, there are reflected signals off nearby buildings, and so on. So the transmission really has to be sought out in the received radio signal. In fact, received wisdom is that it takes a DSP running at 100 MIPS or more to be able to decode a CDMA signal. The first implementations of CDMA were, indeed, not very reliable.

One of the big challenges is that the power levels of all the radios need to be constantly adjusted so that one with high power doesn’t overwhelm those with lower power, like everyone at a party trying to talk louder than everyone else. The code approach provides only partial rejection of mismatched signals, and an excessively powerful one may get through.
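
That near-far effect is easy to see numerically. A toy continuation of the sketch above, using random codes that are only approximately orthogonal (roughly the situation on a CDMA uplink, where perfect orthogonality cannot be maintained):

```python
import numpy as np

rng = np.random.default_rng(42)
N = 128                                # chips per data bit

# Pseudo-random +/-1 codes have small but nonzero cross-correlation,
# so a much stronger signal can swamp a weaker one (the near-far
# problem that power control exists to solve).
code_a = rng.choice([-1, 1], size=N)
code_b = rng.choice([-1, 1], size=N)

bit_a, bit_b = +1, -1
for gain_b in (1, 10, 100):            # step up the interferer's power
    rx = bit_a * code_a + gain_b * bit_b * code_b
    decision = np.dot(rx, code_a) / N  # despread with phone A's code
    print(f"interferer x{gain_b}: correlator output {decision:+.2f}")
# At x1 the leakage is negligible; at x100 it can flip the decision.
```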

CDMA is a technology created from whole cloth by one company: Qualcomm, based in San Diego (actually La Jolla). They created the technology, patented it, licensed it to semiconductor manufacturers and cell-phone manufacturers, and at the beginning even had a joint venture with Sony to manufacture phone handsets to kick-start the market.

In practice, Qualcomm was the company that understood CDMA and had all the rights, so it was hard to build CDMA phones except by buying chips from Qualcomm. Riding this wave, Qualcomm has risen to be a top-10 semiconductor company, still fabless. Today TSMC manufactures most of its chips.

The reason that Qualcomm and CDMA have turned out to be so important is that 3G standards are largely based on Qualcomm’s patents. CDMA makes more efficient use of wireless spectrum (which is the bottleneck resource) through the way its power levels dynamically adjust. TDMA, in comparison, cannot adapt: it cannot pack five calls into a channel instead of four when radio conditions are good and no channels are left.

W-CDMA (Wideband CDMA) is a generic term for a number of wireless technologies all based on Qualcomm’s fundamental technology, although initially developed by NTT DoCoMo in Japan. It is the basis of all European and US 3G standards. The Chinese TD-SCDMA is also based on the same approach, although supposedly designed to get around Qualcomm’s patents and thus avoid the royalties that all other manufacturers pay. Qualcomm claims it still infringes many patents, but since the phones work only on one network in China, China Mobile, and have no export market, there is little Qualcomm can do.

The big change in the 3G era was the arrival of smartphones. Responsive data access suddenly became important, not just the capability to make voice calls. Data is very different from voice in a couple of ways. Firstly, voice is a fixed data rate, and there is not really any advantage to transmitting it faster, just more efficiently. Data is not like that: everyone would take gigabits of bandwidth to their phone if they could get it. Secondly, the reliability requirements for data are higher. If a packet of voice fails to get through, it is not worth retransmitting; better to have a few milliseconds of silence (or comfort noise) in the middle of the call. Data is not like that: usually every packet needs to be retransmitted until it is successfully received.

    As a result, in 2/3G standards, voice is circuit switched and a dedicated special channel is set up for each call, whereas data is packet switched without a dedicated radio resource for each data circuit.

There were expected to be a number of 4G technologies, in particular Qualcomm’s successor to CDMA2000, called UMB (Ultra Mobile Broadband). But Qualcomm stopped development of the technology and threw its weight behind LTE.

    LTE stands for Long Term Evolution (only an international committee could pick a name like that). Actually what current marketing by cellular operators calls 4G is often called 3.9G inside the mobile industry. In fact there are so many standards with different capabilities that it is almost arbitrary where they are broken into generations. So the current generation is now called 4G and the next generation is meant to be called “true 4G” but don’t hold your breath.

    LTE is an evolutionary development of the GSM standard by way of W-CDMA. It is incompatible with 2G and 3G systems and thus needs dedicated radio-spectrum. Initially CDMA operators were expected to have their own 4G evolution, but in the end they too have decided to migrate to LTE.

Until LTE, all standards were a sort of hybrid, with digitized voice handled differently from digital data such as internet access. LTE is a flat IP-based approach where voice is compressed into digital data as before, but no longer has a dedicated circuit-switched mode of transmission; it is simply transmitted over the data channel like a “voice-over-IP” phone service such as Skype.

    The transition to LTE is complicated by the need to keep phones working in all areas as the LTE build-out proceeds. The most common approach is to use LTE for data when it exists and fall back to the 3G data when it does not. Meanwhile, voice calls are still circuit switched through the existing 3G system (GSM or CDMA). Depending on the architecture of the handsets and the network, it may or may not be possible to both make a voice call and have data access at the same time.

    Eventually, when all areas have LTE base stations and all handsets support LTE, it should be possible to shut down the legacy circuit switched infrastructure and use the freed-up spectrum for more LTE bandwidth.

That is where we are today. In large metropolitan areas LTE is up and running; smaller markets will transition more slowly. State-of-the-art smartphones such as the Samsung Galaxy S3 and the iPhone 5 have LTE data access but still circuit-switch the voice, unless you use an over-the-top (OTT) voice service such as Skype that simply re-routes calls through the data channel (and bypasses the carrier’s billing for a voice call).

One challenge for carriers is that they have become used to charging much more for a voice call (and a text message) than for the equivalent amount of data. For example, a GSM Enhanced Full Rate vocoder compresses voice to 12kb/s, and to nothing when you are not talking, which is about half the time (because you are listening). A 3GB/month data subscription costs about $20-30 but can handle about 1,000 hours of calls (as data) without exceeding the data cap, and 1,000 hours is more hours than there are in a month. You literally cannot exceed a gigabyte-sized data cap with voice calls. Yet a user making several thousand minutes of voice calls per month has, until now, been paying about ten times as much.
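
The arithmetic is easy to check; a quick sketch using the numbers above (the 50% talk/listen duty cycle is the approximation from the text):

```python
# Can voice calls ever hit a 3 GB/month data cap? Using the numbers
# in the text: ~12 kb/s while talking, roughly zero bits while
# listening, which is about half the time.
vocoder_bps = 12_000            # GSM Enhanced Full Rate, approx.
duty_cycle  = 0.5               # you talk about half the call
cap_bits    = 3 * 8 * 1e9       # 3 GB expressed in bits

call_seconds = cap_bits / (vocoder_bps * duty_cycle)
print(f"{call_seconds / 3600:,.0f} hours of voice as data")  # ~1,111 hours
# A 31-day month has only 744 hours, so the cap is unreachable.
```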

Also see: A Brief History of Mobile: Generations 1 and 2


Virtuoso Has Twins
by Paul McLellan on 10-18-2012 at 6:01 pm

Cadence has apparently announced that, going forward, the Virtuoso environment is going to be split in two and offered as two separate code streams: the current IC6.x and a new IC12.x. The idea is to introduce a new product with features specifically developed for new technologies, such as double-patterning-aware layout design and checking, use of local interconnect, and FinFET enhancements.

I’m sure part of this is that Cadence wants to charge more for these features to the people who need them, without getting caught up in endless negotiations with people who don’t use them and so have no reason to pay extra: a problem every EDA company faces when it tries to get value for the incremental R&D required to stay on the process-node treadmill.

I ran Custom IC at Cadence for a year or so, and one of the biggest problems I had was that we had large numbers of very conservative semiconductor companies who would not upgrade to new versions of Virtuoso and, in fact, stayed on versions which we officially no longer supported. Then, to make it even worse, they would find they needed some feature that we had wisely added to a later release and insist that we back-patch it into the unsupported release they were still using. Even though they would happily (well, probably unhappily, but they had no choice) pay for this, it was a huge distraction for the engineering team. To add insult to injury, those same semiconductor companies’ CTOs would give keynote speeches about how EDA companies need to get their engineering out ahead of the process roadmap so that Virtuoso (and other tools) would be ready when their most advanced groups needed the features.

So I see this announcement (actually I’ve not seen it officially announced, but it does seem to be real) as a rational response to this sort of behavior by semiconductor companies. Their most advanced groups need advanced features and will put up with some instability and a fast release cycle to get them. But other groups treat Virtuoso like a good malt whisky: much better if you ignore it and let it mature for 10 years before use. This is not entirely irrational behavior: advanced groups do need advanced features, while many groups use only the most basic features and see upgrading as more of a cost than a benefit. The groups also have different sensitivity to price and different interest in taking a real look at the competition (shrinking at the moment, since rumor has it that SpringSoft Laker is not going to survive long after its assimilation into the Borg of Synopsys).


iPhone5 Versus Samsung S3: the Key Question
by Paul McLellan on 10-18-2012 at 8:29 am

In all the discussion about the iPhone versus Samsung, the profit leader and the volume leader in the handset business, there is way too much discussion of boring stuff like how many MIPS the A6 chip has, whether the maps are any good on the iPhone (no), and whether there is enough 28nm capacity for Qualcomm. Boring.

    The real question that everyone wants to know the answer to is: will it blend?