Microprocessors: Will ARM Rule the World?
by Paul McLellan on 10-30-2014 at 12:00 pm

Last week was the Linley Microprocessor Conference. Not the mobile one, which I find the most interesting since smartphones are such a big part of what drives process technology these days, but the one focused on networking and servers. Increasingly, though, both markets are being driven by the same thing, namely mobile data. In fact smartphones are growing so fast that they are already the primary way the internet is accessed, and that trend is only going to accelerate; desktop and notebook computers are a comparative niche. Jag Bolaria (of Linley) gave the opening keynote, an overview of this part of the industry. Reading their data is a little difficult since they have a, to me, slightly weird definition of embedded, namely processors that stay in service for a long time. I think they really mean anything other than a standard PC, which has the odd effect of showing Intel losing market share: the more that is done with standard server chips, the less of it counts in the embedded market, as opposed to selling chips that go into routers, for example. Another complication is that Avago purchased LSI and then sold the Axxia part of that business to Intel. For this year, Linley continued to track LSI as a separate company.

Overall market share is led by Intel, followed by Freescale. To my surprise, AMD increased market share against Intel in this market, since suppliers are looking for lower costs and an alternative to Intel. Cavium is also doing well, selling its Octeon SoCs into many segments. Of course ARM doesn't appear on this list since it doesn't sell processors. However, when we look at share by instruction-set architecture (ISA), ARM does show up, but with a surprisingly low share. x86 slightly trails the Power architecture (which Freescale, Applied Micro and, for now, Axxia all use).


ARM has a tiny share. But as I reported last year, that is all set to change. The 64-bit ARM v8 instruction set has opened up new markets and almost all embedded vendors are moving their future investment to ARM. However, the time to design-in, ship and ramp equipment in a conservative market means that the crossover will take 5-10 years, but:

  • AppliedMicro shipping X-Gene and sampling X-Gene2
  • Cavium plans to sample Thunder in Q4 (their current products are MIPS based)
  • Freescale sampling LS1 and plans to sample LS2 this quarter
  • LSI/Avago/Intel shipping ARM version of Axxia (although presumably this will be short lived now Intel owns that business)
  • AMD sampling Hierofalcon for embedded market
  • Broadcom shipping StrataGX and developing Vulcan CPU

Linley believes that a dual architecture strategy (such as MIPS and ARM) will not endure since qualifying two architectures is too expensive. So, provided the ARM parts are not rejected by the market, then MIPS and Power will lose market share.


As I said above, the market is driven by cellular data. Next year there will be 2 billion connected devices shipped, around half of which will use high-data-rate LTE (a big part driven by Chinese deployment, although based on Xilinx conference calls and other information this is somewhat pushed out). A new technology like LTE forces the operators to focus first on coverage (so that they can sell handsets) and then on capacity expansion as the transition happens and handsets use more and more data. Capacity expansion means small cells, which is a big growth opportunity.

In the server market, no surprises: Intel has 96% market share. Their strategy is to leave no holes. No matter which price/performance point you are interested in, Intel has a processor for you: Xeon E7, E5, E3, Atom and even customized processors. But ARM and its partners have this market in their sights. As I said above, AppliedMicro is in production, with AMD, Cavium, Broadcom and more to follow. Calxeda was a casualty, having only 32-bit ARMv7 and not enough money to do 64-bit. Jag also said “expect more entrants”, which makes me suspect he knows some non-public stuff that I don't.

But there are challenges for ARM in servers. Unlike with Intel, there is no common platform and each vendor has its own uniqueness. OS vendors are sitting on their hands, waiting for volume before they invest. Of course they are attacking a walled city, but Intel's very high pricing (good for Intel) creates a market opportunity. It is hard for ARM processors to match Xeon E5/E7 performance, which matters for some markets (where single-thread performance counts) but not others (where cost, power, physical size and general TCO are what matter, such as, say, Facebook data centers).

Linley himself talked about the Internet of Things (IoT). Industrial applications are already broadly deployed, with, for example, 300M smart meters installed. For that market a discrete solution that costs $20 is OK. Large consumer goods are next, with an SoC desired at a cost under $10. But the really big market is small things, with an SoC at under $3. I'm personally not convinced, since the market is so fragmented. Also, I'm not convinced that IoT devices have minimal compute needs, since they also have very low power budgets, which means they cannot be uploading huge amounts of data to the cloud continuously for processing. But it seems some vendors are developing IoT ASSPs integrating CPU, memory, analog and wireless to drive the cost down.


Finally, a little bit of process roadmap. One message is that cost-focused processors will stay on 28nm LP indefinitely. Jag pointed out that 16nm reduces power but increases cost, and the density is the same as 20nm (it uses the same metal stack). For high-end servers that is an acceptable combination, but for IoT and other lower-end applications, 28nm will be around for a long time.




Adding a Digital Block to an Analog Design
by Daniel Payne on 10-30-2014 at 7:00 am

My engineering background includes designing at the transistor level, so I was drawn to attend a webinar today presented by Tanner EDA and Incentia about adding a digital block to an analog design. Many of the 30,000 users of Tanner tools have been doing AMS designs, so adding logic synthesis and static timing analysis from Incentia makes sense to fill out the design flow.

Jeff Miller from Tanner EDA presented the overall AMS tool flow, including Incentia tools for big A, little D designs.

Steve Lin from Incentia talked about their digital design flow from RTL to gates with timing and power closure.

Logic synthesis is provided by the Incentia tool called DesignCraft, and it uses industry standard file formats like:

  • Verilog, VHDL
  • Synopsys .LIB, CCS Library
  • SDC for timing constraints
  • VCD, SAIF and FSDB files for switching activity
  • SDF – standard delay format
  • Tcl scripting

Incentia also supports a DFT methodology with their TestCraftDFT tool, which then interfaces to ATPG tools from multiple vendors: Mentor, Synopsys and SynTest.

Related – Affordable AMS EDA Tools at DAC

Following logic synthesis the next step is Place and Route using the HiPer Place and Route tool from Tanner, typically handling designs up to 50K gates. Standard file formats are supported, like:

  • LEF, DEF
  • GDS
  • Liberty for cell timing
  • Verilog gate-level netlist

Placement, clock routing, and n-layer auto routing steps were reviewed, and the finished layout has SDF for use in timing analysis.

Related – A New Digital Place and Route System

For static timing analysis, the Incentia tool is called TimeCraft, and it accepts the SDF file from P&R. Signal integrity (SI) issues are accounted for, along with IR drop. Multi-mode, multi-corner (MMMC) analysis is supported, allowing you to distribute timing analysis across a grid or LSF to reduce run times. The largest design run through TimeCraft has been a 100M-instance tape-out.

At 65nm and smaller geometries, on-chip variation (OCV) can start to produce overly pessimistic timing analysis results. Location-based OCV (LOCV) can also be taken into account with the TimeCraft tool. With the LOCV approach you will have fewer timing violations and smaller worst-case negative slack, making timing closure happen more quickly.

The Incentia tool called ECOCraft supports engineering change orders (ECOs) to fix setup or hold time violations after P&R. Leakage power can be reduced by running the ECOCraft Power tool, which optimizes the use of multi-Vt cells and cell sizing.

A live demo was performed on a small block with an 8-bit ADC circuit including a finite state machine, written in Verilog. After logic synthesis was run using a script, the layout editor was invoked and P&R was set up and run (cell placement, clock buffer placement, de-coupling capacitors, filler cells, clock routing, signal routing, power routing).

An SDF file was created, then used in static timing analysis (STA) with TimeCraft using another script, and the STA results were displayed.


Summary
I cannot remember the last time that an EDA vendor actually ran their tools live during a webinar, so kudos to Jeff at Tanner EDA for doing this today as proof of how fast this tool flow actually is. Tanner users will be pleased that they can complete their big A, little D designs using this AMS implementation flow with Incentia tools. The GUI and scripts looked easy to run and learn, so expect a quick ramp-up time.

The full 60 minute, archived webinar is available online after a short registration process.


Viva the New Industrial Revolution! What Etsy, 3D Printing, and Kickstarter Mean to Semiconductor Companies
by Charles DiLisio on 10-29-2014 at 4:00 pm

The world is changing and IC companies need to adapt to stay competitive, moving to systems (hardware and software) versus just products (hardware). Three key trends underway change the way IC vendors need to think about their customers and their customers' customers:

  • Markets are Fragmenting: We are moving away from the homogeneous baby boomer generation to a heterogeneous world of multi-generational, multicultural wants and needs. These new consumers are seeking products that are uniquely crafted and of high quality; think Etsy. Design becomes as important as functionality. Young consumers want something that is unique to their interests: not volume, but high value.

  • Means and Barriers to Manufacturing are Declining: 3D printing and cloud fabs or prototype manufacturing are changing the way things are made. Design software like Autodesk 360 is becoming virtually free. In the near future, you will be able to make parts or whole products right at your home. Mass Customization will be the rule, not the exception. Even electronic circuits are readily modularized for makers in things like LittleBits or TinyCircuits.

  • Crowd-funding for New Products: Kickstarter and other sites provide early funding for new products of all types. Even venture capital firms are asking start-ups to first seek crowd-funding, and if the product gets funded then they look at follow-on investments.

The Da Vinci 3D All-In-One Scanner/Printer

The above trends will force IC companies to act faster with respect to IC design and manufacturing. Also, the idea behind Moore's Law (the learning curve and reducing costs in conjunction with greater volumes) doesn't work in a world of Mass Customization and 3D printing. Which brings me to this observation: 3D printing is changing the way we look at product development. The new Da Vinci All-In-One 3D Scanner/Printer is now on the scene at a price point of $799, significantly below the Stratasys MakerBot Replicator 3D printer. Makers and designers now have the capability to scan (copy) objects and then print them on one machine. This further reduces the barriers to manufacturing in a very important way.

Why Etsy, 3D Printing and Kickstarter Are Important to IC Vendors

IC vendors: don't think volume, think systems. More integration requested by the customer seems logical, but can you get paid for it, and how long will it take to develop the IC? Think programmability, as in microcontrollers like Arduino (Atmel), mbed (ARM), or PIC (Microchip). Programmability gives you, the IC vendor, the ability to modify IC functionality using software, improve time to market, and not depend solely on volume.

Second, how can you improve or enhance your offering using software? Can you make your IC offering more flexible, responsive and easy to use with software? Think of National Semiconductor's Simple Switcher, which allowed the engineer to develop a power management solution with simple inputs, but at a much higher level. See the links to LittleBits or TinyCircuits above.

Third, IC vendors need to look to new markets and not depend solely on existing ones. Leaders in one platform, say PCs (Intel/Microsoft), don't necessarily lead in the next platform, smartphones (Qualcomm, Apple, Android). If Mass Customization through the Internet of Things (IoT) becomes a new platform, new leaders will likely emerge. Will, or can, you become a leader?

Therefore, IC vendors need to seek out new potential product areas. I suggest attending some IoT Meetup groups like PnP Thursdays or IoT Business Meetup SV; if you don't know what a Meetup group is, you are already behind. Get Peter Thiel's book “Zero to One”. One of Peter's thoughts: “Figure out something that nobody else is doing and look to create a monopoly in some area that's been underdeveloped.”


Cadence Mixed Signal Technology Forum
by Paul McLellan on 10-29-2014 at 7:00 am

Yesterday was Cadence's annual mixed-signal technology forum. There was a definite theme running through many of the presentations, namely that wireless communication of one kind or another is on a sharp rise, with more and more devices needing to connect to WiFi, Bluetooth and so on. This was most obvious during the panel session after lunch, which was on the ecosystem needed for the internet of things. However, the way to design radios (and analog interfaces in general) is increasingly to design the smallest possible analog blocks and then use digital, even quite complex digital, to calibrate the analog.

David Su of Qualcomm gave the opening keynote on designing WLAN SoCs. He started with a history of wireless LANs, pointing out the huge increase of three orders of magnitude in data rates. The effect of this has been that data cost has been declining about 2X per year even though the cost of a wireless router has been roughly static. We went from 1MHz to 160MHz channels, and future standards will allow 4 or 8 channels to be used. He is a big fan of minimizing the analog and using digital to calibrate, what he (and Cadence) call digitally assisted analog design. Of course there are still big challenges. The biggest problem, beyond simply power, is digital interference with the analog on the SoC. There are various strategies for coping with this: minimize the aggressor (the digital logic) with techniques like clock gating and avoiding switching large registers on one clock cycle; strengthen the victim (analog) with wells and robust analog design; and then try to minimize the coupling by spacing the blocks apart and even potentially some process tricks such as deep wells.

Ken Kundert (who used to work at Cadence and was the principal author of Spectre) made a plea that the way analog engineers design needs to modernize, or designing this kind of digitally trimmed analog is almost impossible. If designs are to be done in a reasonable time then the digital and analog need to be designed in parallel, and the most promising way to do that is to start from a spec of the analog block and use that to produce a model and a self-checking testbench. This isn't as hard as it seems. Digital design has huge state so verification is hard, but synthesis and place & route make implementation fairly straightforward. Analog is the other way around: specifications are simple and there is little state, but implementation is hard. Hence the need to use models, since the schematic comes too late and simulates too slowly for the digital design team.

Wilbur Luo of Cadence gave an overview of the mixed-signal offering. Increasingly the methodology revolves around having Virtuoso and Encounter Digital Implementation (EDI) running on a common OpenAccess database, able to share the same semantics without losing things like constraints in implementation. There are even lower-capacity versions of Encounter, Tempus, Voltus etc. which run inside Virtuoso to enable digital design.

After lunch there was a panel with Rob Consaro of Freescale, Ron Moore of ARM, Ian Dennison from Cadence Scotland and Doug Patulio from TSMC. Rob related a tale of how the tools and methodologies now mean that power methodology is a solved problem compared to a few years ago, when he tried and failed to design a chip with lots of power domains in the days before CPF/UPF. But noise is the big challenge going forward. Ron said in some cases ARM is going back to a single power domain to keep the area down, which makes it a challenge to handle leakage. Doug pointed out that TSMC has always been in the business of helping customers get to the next node, but he feels that 28nm is the last time that will happen. Some people will move to 16FF+ of course, but TSMC is putting a lot of effort into re-engineering older processes using what it has learned from the advanced processes, especially for ultra-low power aimed specifically at IoT. Ian talked about putting sensors, analog and digital all on the same die, perhaps using TSVs or other 3D technologies; there is a mismatch at present between the processes that sensors and other MEMS devices need for fabrication and the digital technologies that make most sense.

One question from the audience asked about software: how do we get analog models running on emulators so that we can use virtual platform technology to do early software development? There is some possibility that using real-number models will help, but there was no clear answer.

Oh, and I won a copy of the Mixed-Signal Methodology Guide.


Who Really Needs USB 3.1?
by Eric Esteve on 10-29-2014 at 4:58 am

USB is certainly the most ubiquitous of the interface protocols. I would bet that everybody uses USB every day (I mean activates a USB connection, just as we also use PCIe or SATA even if we don't realize we do), but which applications will benefit from the 10 Gbps delivered by USB 3.1? Before answering that question precisely, let's review some facts:

  • More video/images are created as the video standard moves from 4K to 8K
  • End-user consumption per day (video, audio, social media) is growing, estimated at 63 gigabytes per day per person
  • Storage device prices are dropping, for both HDD and SSD, leading to larger devices being used (for the same price)

These facts make it easy to see where USB 3.1 adoption will come first: in the storage and digital office segments, to support applications like hubs, docking stations, host add-in cards or SSDs.


As an analyst, I tried to model SuperSpeed USB adoption back in 2009. There is a consensus about adoption behavior, coming from the “Innovation Theory”: it looks like a Gaussian curve (with Innovator, Early Adopter, Early Majority, Late Majority and Laggard categories). By the way, this Gaussian curve and the related categories are exactly the same as for an epidemic, Ebola or the flu virus! Let's come back to electronics. This theory tells us that the adoption rate is low at the beginning (innovators and early adopters), then reaches the mainstream (early majority and late majority) and finally the laggards.

For an IP vendor, the $1 million question is the market dynamic: how long will it take for USB 3.1 to reach the mainstream in each specific market segment? Synopsys has built the above picture, which is the sum of five Gaussian curves: Storage, Digital Office, Cloud, Mobile and Digital Home. According to John Koeter, vice president of marketing for IP and prototyping at Synopsys, the company has “extensive knowledge in developing USB IP, more than 3,000 USB design wins”. Thus, from its experience selling USB 3.0 IP, Synopsys knows the key factor: the adoption rate.
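To make the shape of that model concrete, below is a minimal Python sketch of the idea: each segment's adoption is modeled as a Gaussian curve and the segments are summed into one aggregate ramp. The peak years, spreads and volumes are invented purely for illustration; they are not Synopsys's or IPnest's data.

    # Toy model: aggregate USB 3.1 adoption as the sum of per-segment Gaussian
    # adoption curves. Peak years, spreads and peak volumes are invented for
    # illustration only; they are not forecast data from Synopsys or IPnest.
    import math

    # (segment, peak year, spread in years, peak units in millions) -- hypothetical
    segments = [
        ("Storage",        2016, 1.5, 120),
        ("Digital Office", 2017, 1.5, 150),
        ("Cloud",          2018, 2.0,  80),
        ("Mobile",         2019, 2.0, 400),
        ("Digital Home",   2020, 2.5, 200),
    ]

    def gaussian(year, peak, spread, height):
        """Bell-curve adoption rate for one segment in a given year."""
        return height * math.exp(-((year - peak) ** 2) / (2 * spread ** 2))

    for year in range(2015, 2023):
        total = sum(gaussian(year, p, s, h) for _, p, s, h in segments)
        print(f"{year}: {total:6.0f}M units  " + "#" * int(total / 25))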

The USB 3.0 specification was released at the end of 2008, products started to sell in 2009, and it took five years for the new standard to reach all its potential applications. If you take a look at the picture above, Synopsys built it by duplicating for USB 3.1 what happened with USB 3.0, and I fully agree with this approach. We can consider that the early adopters of USB 3.0 will be the early adopters of USB 3.1 (and so on for the other categories). In other words, the applications currently integrating USB 3.0 will certainly move to USB 3.1. This is a “semi-empirical theory”, based on past behavior, but with only one unknown: the USB 3.1 adoption rate in Intel PC chipsets. If this adoption comes faster than it did for USB 3.0 (which took too long, by the way), we may even see quicker sales of USB 3.1 IP than USB 3.0 IP.


Synopsys claims to offer the most comprehensive USB 3.1 solution, and there is no doubt about this assertion! IPnest produced the IP vendor ranking for USB 3.0 IP in 2013, and Synopsys enjoys a 75% market share. In fact, USB 3.1 is just an enhancement of USB 3.0, running at twice the data rate (10 Gbps instead of 5 Gbps), and we can see that most of the architecture can be reused.

Synopsys Controller IP operates at USB 3.1/3.0/2.0 speeds

  • Interoperates with all USB generations
  • Synopsys customers can use their existing USB 3.0 drivers
  • Configurable by user

    • Address all markets – storage to mobile phones
    • AXI speeds and bus width
    • More memory for higher performance or less area for cost savings

Synopsys also offers VIP:

  • Virtual model of DesignWare® USB 3.1 IP
  • Develop SW up to 9 months ahead of HW availability
  • USB 3.1 IP VDK configured to behave identically to real RTL for accurate driver development
  • Includes model of multi-core ARM® Cortex®-A57 Versatile Express board

It will be quite a challenge for Synopsys' competitors to win share in the USB 3.1 IP market, as the company has so far enjoyed about 160-180 USB 3.0 design wins, out of a total (including internally designed IP) of 300 to 350 design starts since the release of the standard.

Also Read: USB 3.1: Physical, Link, and Protocol Layer Changes

From Eric Esteve from IPNEST


IBM leaves semiconductors – end of an era
by Bill Jewell on 10-29-2014 at 2:00 am

IBM last week agreed to transfer its semiconductor business to GlobalFoundries. GlobalFoundries will acquire wafer fabs in East Fishkill, New York and Essex Junction, Vermont; IBM’s commercial microelectronics business, which includes ASIC and foundry; over 10,000 IBM patents related to semiconductor manufacturing; and over 5000 fab and ASIC employees. GlobalFoundries will supply all IBM’s 22nm, 14nm and 10nm ICs for the next 10 years. IBM will take a $4.7 billion pre-tax charge to write down the assets of the semiconductor business and to cover paying GlobalFoundries $1.5 billion over the next three years. IBM will focus on fundamental semiconductor research for next generation computing.

IBM began semiconductor manufacturing for internal demand, which was huge when IBM was the world’s dominant computer company. Although exact numbers are not available, IBM was almost certainly the world’s largest semiconductor manufacturer for many years. As IBM became less dominant in computers, its semiconductor division had extra capacity. In 1993 IBM entered the merchant semiconductor market as a top 10 company with $2.5 billion in sales. IBM sold DRAMs (which were invented at IBM), ASICs and microprocessors. IBM withdrew from the DRAM business in 1999 but continued to sell ASICs and foundry services.

IBM leaving the semiconductor business is the end of an era. IBM was one of 34 original licensees of AT&T's transistor patent in 1952, according to Bo Lojek in History of Semiconductor Engineering. We at Semiconductor Intelligence examined the original 34 licensees to see what became of them. The original 34 companies were from the U.S., U.K., West Germany and the Netherlands. Sony was the first Japanese company to license the AT&T patent, but was not one of the original 34.

It appears only 22 of the 34 companies developed and marketed transistor products. Of those 22, most either went out of business or were absorbed by other companies in the 1950s and 1960s, and 12 became meaningful suppliers in the semiconductor business. What happened to those 12 companies and to transistor inventor AT&T?

  • AT&T – semiconductor business was part of the Lucent Technologies spinoff in 1996. Lucent spun off its semiconductor business as Agere Systems in 2002. Agere merged with LSI Corp. in 2007. LSI was bought by Avago Technologies in 2014.
  • General Electric – sold its semiconductor business to Harris in 1988. Harris Semiconductor was spun off as Intersil in 1999.
  • IBM – divesting its semiconductor business to GlobalFoundries.
  • IT&T Corp. – divested its semiconductor business over the years. Most of the remains of IT&T Semiconductor are now part of Vishay and Micronas.
  • L.M. Ericsson – sold most of its semiconductor business to Infineon in 2002. Ericsson exited the modem IC business in September 2014 (previously part of a joint venture with STMicroelectronics).
  • Microwave Associates – now M/A-COM, still makes microwave semiconductor devices.
  • Minneapolis Honeywell – now Honeywell, still makes semiconductor sensors.
  • N.V. Philips – spun off its semiconductor business as NXP Semiconductors in 2006. NXP is still a top 20 semiconductor company.
  • National Cash Register Company – now NCR. The remains of its semiconductor business are now part of NetApp.
  • Raytheon Manufacturing – sold its semiconductor business to Fairchild Semiconductor in 1997.
  • Siemens and Halske – now Siemens. Spun off its semiconductor business as Infineon Technologies in 1999. Infineon spun off its memory business in 2006 as Qimonda (now out of business). Infineon remains a top 20 semiconductor company.
  • Sprague Electric Company – sold its semiconductor business to Sanken Electric in 1990.
  • Texas Instruments – divested most non-semiconductor businesses in the 1990s. Remains a top 10 semiconductor company.

Of AT&T and the original 34 patent licensees, only Texas Instruments remains as the same company and a significant player. If Siemens's Infineon spinoff and Philips' NXP spinoff are included, three of the original 34 licensees are still major semiconductor suppliers today. However, compared to the changes in the semiconductor industry over the last 60 years, the changes in suppliers are not surprising. The semiconductor market first exceeded $1 billion in the mid-1960s, when the major customers were mainframe computer makers (IBM), the U.S. military and the U.S. space program. Today the market is over $300 billion and the major applications include smartphones and tablet computers, devices unknown until about 20 years ago. The market has gone from single-transistor devices in the 1950s to billions of transistors on an IC today.


The GF IBM Deal Explained!
by Daniel Nenni on 10-28-2014 at 10:00 pm

I have it on pretty good authority that IBM has in fact come to terms with GlobalFoundries on the sale of their semiconductor business, or so I blogged last month. Did I mention that my grandparents and their many siblings settled in Upstate NY in the early 1900s from Italy via Ellis Island? So yes, I do qualify as an insider:

Insider says IBM and GlobalFoundries reach deal
Posted on September 17, 2014 | By Larry Rulison
A post to SemiWiki.com by industry author and blogger Dan Nenni says that IBM and GlobalFoundries have a “handshake deal” in place to take over IBM’s chip manufacturing.

The official announcement was last week and the slide deck is HERE in case you are interested. Since I was in Taiwan at the time and missed the official briefing, I was afforded a quick one-on-one with Sr. VP Gregg Bartlett at the beautiful new GF HQ. Rather than regurgitate what everyone else has been feeding you, I will try to offer an insider's view:

  • The IBM ASIC business is exactly what GF needed to get into the system houses
  • The IBM IP portfolio is exactly what GF needed to differentiate in the fabless semiconductor ecosystem
  • The IBM talent (5,000+ employees) is exactly what GF needed to create an East Coast semiconductor dynasty
  • The IBM patents (10,000+) are exactly what GF needed to secure their semiconductor legacy
  • The IBM acquisition is exactly what GF needed to secure a nice investor exit (IPO)

Growing up in Silicon Valley, I have always viewed IBM as a third-world company. It was not just an East Coast versus West Coast thing; IBM was the status quo versus the entrepreneurs of Silicon Valley. Clearly Silicon Valley won, but now IBM Semiconductor is in the hands of some very capable entrepreneurs, which should turn out to be a very powerful combination.

Also Read: GlobalFoundries and IBM

Final approval on this will probably take the better part of a year, but let's talk about that for a minute. GF will be making a filing with the Committee on Foreign Investment in the United States (CFIUS), which consists of 16 departments and agencies including Homeland Security. The majority of cases submitted proceed without investigation, which is what I believe will happen here. Remember, IBM sold its PC business to Lenovo and GF bought AMD's manufacturing business, so this is not their first CFIUS rodeo. In the meantime the integration plans have already begun, which will be like integrating peas and carrots at a family dinner.

The final and probably most important point is that GF will now own the IBM semiconductor process recipes moving forward. In the past, the process architecture was done in Albany and the implementation in Fishkill. At 28nm it was a “copy exact” type of deal out of Fishkill. At 14nm it is an architectural licensing deal out of Albany which is why Samsung has a different 14nm implementation than the other ex-Common Platform members. Moving forward 10nm will also be an architectural licensing arrangement. 7nm is unknown at this time.

Bottom line: This is an accretive deal for GF and puts them into the same league as Intel and TSMC, absolutely!


ECDH Key Exchange is Practical Magic
by Bill Boldt on 10-28-2014 at 7:00 pm


What if you and I want to exchange encrypted messages? It seems like something that will increasingly be desired given all the NSA/Snowden revelations and all the other snooping shenanigans. The joke going around is that the motto of the NSA is really “Yes We Scan,” which sort of sums it up.

Encryption is essentially scrambling a message so only the intended reader can see it after they unscramble it. By definition, scrambling and unscrambling are inverse (i.e. reversible) processes. Doing and undoing mathematical operations in a secret way that outside parties cannot understand or see is the basis of encryption/decryption.

Julius Caesar used encryption to communicate privately. The act of shifting the alphabet by a specific number of places is still called the Caesar cipher. Note that the number of places is kept secret and acts as the key. Before Caesar, the Spartans used a rod of a certain thickness that was wrapped with leather and written upon, with the spaces that were not part of the message filled with decoy letters, so only someone with a rod of the right diameter could read the message. This was called a scytale. The rod thickness acts as the key.
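Just to make the key-versus-algorithm distinction concrete, here is a minimal sketch of the Caesar cipher in Python: the substitution rule is the (public) algorithm, and the shift amount is the secret key.

    # Minimal Caesar cipher: the shift amount is the secret key; the
    # substitution rule itself is the (public) algorithm.
    def caesar(text, key, decrypt=False):
        shift = -key if decrypt else key
        out = []
        for ch in text:
            if ch.isalpha():
                base = ord('A') if ch.isupper() else ord('a')
                out.append(chr((ord(ch) - base + shift) % 26 + base))
            else:
                out.append(ch)            # leave spaces and punctuation alone
        return "".join(out)

    ciphertext = caesar("attack at dawn", key=3)          # -> 'dwwdfn dw gdzq'
    assert caesar(ciphertext, key=3, decrypt=True) == "attack at dawn"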

A modern-day encryption key is a number that is used by an encryption algorithm, such as AES (Advanced Encryption Standard) and others, to encode a message so no one other than the intended reader can see it. Only the intended parties are supposed to have the secret key. The interaction between a key and the algorithm is of fundamental importance in cryptography of all types. That interaction is where the magic happens. An algorithm is simply the formula that tells the processor the exact, step-by-step mathematical functions to perform and the order of those functions. The algorithm is where the magical mathematical spells are kept, but those are not kept secret in modern practice. The key is used with the algorithm to create secrecy.

If the secret key is kept secret, a message processed with that algorithm will remain secret from unintended parties even though the algorithm itself is public. This is called Kerckhoffs' principle and it is worth remembering, since it is the heart of modern cryptography. What it says is that you need both the mathematical magic and secret keys for strong cryptography.
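As a small illustration of that principle, the sketch below uses the third-party Python cryptography package (my choice for the example; any AES implementation would do) to encrypt and decrypt with AES-GCM. The algorithm is completely public, yet without the key the ciphertext is useless to an eavesdropper.

    # AES-GCM with the widely used third-party 'cryptography' package
    # (pip install cryptography). The algorithm is public; only the key is secret.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=128)   # the secret both parties must hold
    nonce = os.urandom(12)                      # unique per message, not secret

    ciphertext = AESGCM(key).encrypt(nonce, b"meet at the usual place", None)
    plaintext = AESGCM(key).decrypt(nonce, ciphertext, None)
    assert plaintext == b"meet at the usual place"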

Key Agreement

There needs to be a way to exchange the key during the session where the encrypted message is to be sent.

ECDH key agreement is a way to establish the secret key without either side actually having to meet the other. EC stands for elliptic curve. Elliptic curves have the property that given two points on the curve (P and Q) there is a third point, P+Q, on the curve, and this operation displays the properties of commutativity, associativity, identity, and inverses when applying elliptic curve point-multiplication. Point-multiplication is the operation of successively adding a point along an elliptic curve to itself repeatedly. Just for fun, the shape of such an elliptic curve is shown in the diagram.

The thing that makes this all work is that EC point-multiplication is doable, but the inverse operation is not. Cryptographers call this a one-way or trap-door function. (Trap doors go only one way, see?) In regular math, with simple algebra, if you know the values of A and A times B you can find the value of B very easily. With elliptic curve point-multiplication, if you know A and A point-multiplied by B, you cannot figure out what B is. That is the magic.

Now for some Math:

To best explain key agreement with ECDH, let's say that everyone agrees in advance on a number called G (in practice, G is a point on the curve). Now we will do some point-multiplication math. Let's call the sender's private key PrivKeySend. (Note that each party can be a sender or a receiver, but for this purpose we will name one the sender and the other the receiver, just to be different from the typical Alice and Bob nomenclature used by most crypto books.)

Each private key has a mathematically related and unique public key that is calculated using the elliptic curve equation. Uniqueness is another reason why elliptic curves are used.

If we point-multiply the number G by PrivKeySend we get PubKeySend. Let's do the same thing for the receiver, who has a different private key called PrivKeyReceive, and point-multiply that private key by the same number G to get the receiver's public key, called PubKeyReceive. The sender and receiver can then exchange their public keys with each other over any network, since the public keys do not need to be kept secret. Even an unsecured email is fine.

Now, the sender and receiver can make computations using their respective private keys (which they are securely hiding and will never share) and the public key from the other side. Here is where the commutative law of point-multiply will work its magic.

The sender point-multiplies the public key from the other side by his or her stored private key. This equates to:

PubKeyReceive point-multiplied by PrivKeySend = G point-multiplied by PrivKeyReceive point-multiplied by PrivKeySend

The receiver does the same thing using his or her private key and the public key just received. This equates to:

PubKeySend point-multiplied by PrivKeyReceive = G point-multiplied by PrivKeySend point-multiplied by PrivKeyReceive.

Because point-multiply is commutative these equations have the same value!

And the rabbit comes out of the hat: the sender and receiver now have the exact same value in their possession, which can be used as the new encryption key for AES. No one besides them can get it, because doing so would require one of the private keys and they cannot get them. This calculated value can now be used by the AES algorithm to encrypt and decrypt messages. Pretty cool, isn't it?
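To see the whole exchange end to end, here is a toy Python sketch over a deliberately tiny, insecure textbook curve (my own example, not anything from Atmel, and nowhere near the key sizes used in practice). It only demonstrates why the commutativity of point-multiplication hands both sides the same value.

    # Toy ECDH over the tiny textbook curve y^2 = x^3 + 2x + 2 (mod 17).
    # Everything here is deliberately small and insecure; real ECDH uses
    # standardized curves such as NIST P-256 with ~256-bit private keys.
    P, A = 17, 2                 # field prime and curve coefficient 'a'
    G = (5, 1)                   # generator: 1^2 = 5^3 + 2*5 + 2 = 137 = 1 (mod 17)

    def point_add(p1, p2):
        """Add two curve points; None represents the point at infinity."""
        if p1 is None:
            return p2
        if p2 is None:
            return p1
        (x1, y1), (x2, y2) = p1, p2
        if x1 == x2 and (y1 + y2) % P == 0:
            return None                                       # P + (-P) = infinity
        if p1 == p2:
            lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P  # tangent slope
        else:
            lam = (y2 - y1) * pow(x2 - x1, -1, P) % P         # chord slope
        x3 = (lam * lam - x1 - x2) % P
        y3 = (lam * (x1 - x3) - y1) % P
        return (x3, y3)

    def point_mul(k, point):
        """Point-multiplication: add 'point' to itself k times (double-and-add)."""
        result = None
        while k:
            if k & 1:
                result = point_add(result, point)
            point = point_add(point, point)
            k >>= 1
        return result

    priv_send, priv_recv = 5, 7                # the two secret keys, never shared
    pub_send = point_mul(priv_send, G)         # public keys, exchanged in the clear
    pub_recv = point_mul(priv_recv, G)

    # Each side combines its own private key with the other side's public key.
    shared_send = point_mul(priv_send, pub_recv)
    shared_recv = point_mul(priv_recv, pub_send)
    assert shared_send == shared_recv          # same point -> same AES key material
    print("shared secret point:", shared_send)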

Below is a wonderful video explaining the modular mathematics and discrete logarithm problem that create the one-way, trapdoor function used in Diffie-Hellman key exchange. (Oh yeah, the “DH” in ECDH stands for Diffie and Hellman, two of the inventors of this process.)

Bill Boldt, Sr. Marketing Manager, Crypto Products Atmel Corporation


G.fast on the copper quick road for broadband
by Don Dingee on 10-28-2014 at 3:00 pm

After a four year gestation period typical of global communications standards, G.fast has reached the point where chipset makers can implement parts against stable specifications. Formal approval of the physical layer spec, G.9701, is expected by the end of 2014. G.9700, dealing with power spectral density issues, was approved earlier this year.

What does G.fast do? Conceptually, it is the evolution of DSL, delivering broadband over copper. It clears the way for a theoretical aggregate bandwidth of as much as 1Gbps, on standard loops of telephone-grade twisted pair wire already in place to most homes and businesses.

Like its predecessors, such as the current VDSL2, G.fast begs the question of FTTx – fiber to the cabinet (FTTC), distribution point (FTTdp), premises (FTTP), node (FTTN), or other entity close enough to the subscriber to complete a short run in existing copper. This is in contrast to fiber to the home (FTTH), which requires similar points of presence but also ripping up yards, driveways, junction boxes, garage walls, and other pieces of a dwelling to provide service. For the last 200m, G.fast can be far cheaper than FTTH, and provide substantial bandwidth.

Faster signals bring with them quality and interference challenges. G.fast is relatively short range: compared to VDSL2, which can handle several thousand meters with some degradation, G.fast handles at most 250m and really only performs at shorter ranges. Fortunately, most lines are shorter; BT says 80% of copper lines in the UK are less than 66m. To gain performance, G.fast ups the frequency, operating in a 106 MHz profile with a future 212 MHz profile, overlapping FM radio bands.

With concerns in both power spectral density shaping and far-end crosstalk reduction, DSP comes into play with vectoring algorithms. In the recently announced DP3000 DPU chipset and CP1000 CPE chipset, Sckipio makes heavy use of the CEVA-XC DSP core for G.fast vectoring.

VDSL2 introduced vectoring, and G.fast takes it to a new level of effectiveness. With a wider frequency range, G.fast vectoring engines require advanced algorithms and more calculations. To keep complexity realistic, G.fast limits bit loading to 12 bits per frequency carrier, compared to 15 in VDSL2. The gains in crosstalk reduction using G.fast vectoring are massive.
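For a rough feel of what a vectoring engine does, here is a toy Python/NumPy sketch of downstream crosstalk cancellation using simple zero-forcing precoding on a single carrier. The channel values are invented, and real G.fast vectoring (including whatever runs on the CEVA-XC in Sckipio's chipset) is far more sophisticated, but the core idea of pre-distorting the transmit signals so that crosstalk cancels at the receivers is the same.

    # Toy illustration of downstream vectoring as zero-forcing precoding.
    # H models one frequency carrier shared by three copper pairs: diagonal
    # entries are the direct paths, off-diagonal entries are far-end crosstalk
    # (FEXT). All values are invented for illustration.
    import numpy as np

    H = np.array([[1.00, 0.15, 0.08],
                  [0.12, 1.00, 0.10],
                  [0.05, 0.18, 1.00]])     # rows: receivers, columns: transmitters

    symbols = np.array([1.0, -1.0, 1.0])   # one data symbol per line

    # Without vectoring, each receiver sees its own symbol plus crosstalk.
    received_plain = H @ symbols

    # With vectoring, the DPU pre-distorts the transmit vector with H^-1 so the
    # crosstalk cancels at the receivers (zero-forcing precoding).
    precoded = np.linalg.solve(H, symbols)
    received_vectored = H @ precoded

    print("no vectoring :", np.round(received_plain, 3))
    print("vectoring    :", np.round(received_vectored, 3))   # recovers the symbols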

Among equipment providers, Alcatel-Lucent and Huawei are both pushing G.fast. As with any advanced standard, the driving factor is integrated chipsets to provide performance and reduce costs for volume rollouts. Sckipio has the lead here with G.fast vectoring built in to their solution, with Broadcom still working on multi-chip solutions.

Will G.fast be broadly adopted? The telcos are pitted against DOCSIS 3.1 and cable operators, working a hybrid fiber coax (HFC) architecture. Some say with the need to roll fiber to the last 200m, G.fast may be simply a stop-gap to FTTH. Others say that last 200m is a 20 year project to cover an entire country, and G.fast speeds without tearing up copper are worth it.

It may take longer in the US, which lags in broadband with large distances to cover and a lack of competition in most markets. By any measure, Akamai or the UN Broadband Commission, US broadband penetration and speed generally sucks. (The UN conclusion that WiMAX would be most cost effective is rather hilarious – in the words of Dr. McCoy: “It's dead, Jim.”)

Alternatives complicate the picture. Broadband in the US is two-thirds cable. Google Fiber is fast but very selective with build-on-demand instead of metro-wide rollouts – effectively, you have to move to the “fiberhood” to take advantage of it. VDSL2 may represent “good enough” performance with the ability to span longer distances, especially in rural contexts.

It will likely be at least another year of trials before we see if G.fast gets wide scale traction. It seems well suited for dense urban markets in Europe and Asia even if North America is slow to adopt it for other reasons. Improved speed over VDSL2 and the appeal of consumer self-installation and quicker deployment compared to FTTH should spur uptake.


Imec and Coventor Partner Up
by Paul McLellan on 10-28-2014 at 7:00 am

Today imec and Coventor announced a joint development project for 10nm and 7nm process development. Imec, which is in Leuven Belgium, is a partner with pretty much all the semiconductor companies that are planning work at these advanced nodes. It mostly does pre-competitive research and development. This type of research is very expensive for any one semiconductor company to carry out so it makes sense to share the investment across the entire industry. It will be interesting to see whether the GlobalFoundries acquisition of IBM’s semiconductor business changes the landscape since historically IBM has done a lot of research themselves, and those researchers now work for GF. Imec is a big operation with a staff of over 2000 people, including 670 industrial residents and guest researchers.

To adopt the 7nm node, the industry needs to select the optimal layout, as well as optimize process step performance and control methodology. Using Coventor’s SEMulator3D platform, engineers from imec and Coventor are working together to reduce silicon learning cycles and development costs by down selecting the options for development of next-generation manufacturing technologies. The SEMulator3D platform is an integrated set of modeling tools with enhanced visibility, accuracy and performance that enables engineers to interactively model and simulate a wide range of manufacturing effects in software before committing to expensive test chips.

At imec, process and integration experts have connected imec’s own optical lithography simulations with Coventor’s SEMulator3D virtual fabrication platform to explore FinFET scaling to the 7nm node and to compare the process window marginalities in several dense SRAM designs using Spacer Assisted Quadruple Patterning and either multiple immersion or EUV patterning cut/keep solutions. Moreover, a Spacer-Assisted Quad Patterning scheme for 7nm dense interconnect was devised using SEMulator3D, and process window marginalities for an immersion based multiple block patterning solution were analyzed.

Additional collaboration will focus on the predictive modeling of Directed Self-Assembly for advanced patterning. This is one of the great hopes (“lithography in a bottle”) for creating patterns at these very advanced nodes without requiring uneconomic numbers of process steps, assuming that EUV doesn't come on line in time for 7nm (if it ever does).


Obviously at one level this provides imec the capability to do virtual fabrication using SEMulator3D, but the tool itself is not static and needs to keep adding capabilities for 10nm and 7nm as development of these process nodes proceeds. As David Fried, Coventor's CTO, said: “Imec is the premier semiconductor research center, and this collaboration allows us to synchronize our modeling roadmap with one of the industry's most advanced process roadmaps, as well as to speed the development of their 10nm and 7nm technology. Working together with imec on novel integration schemes, designing SEMulator3D-specific structures for imec's test sites, and then calibrating advanced models to imec's wafer processing is an extremely effective and valuable way for Coventor to optimize our virtual fabrication platform for emerging market requirements.”

See also Imec’s Process Secret Decoder Ring and What Comes After FinFET?

More information on SEMulator3D is here.

