
EUV Slips a Year Per Year…Or More
by Paul McLellan on 04-19-2014 at 1:54 am

I was at EDPS in Monterey the last couple of days. It is one of the most interesting conferences to attend. Go next year, since you already missed it this year. It is not big, but the quality of the content is high. Historically the dinner in the middle of the conference is at the Monterey Yacht Club and there is a keynote speech. A few years ago the keynote was mine, but this year there were two upgrades: eSilicon sponsored the dinner, so we got some free wine and better food, and Wally Rhines gave the keynote instead of me.

Dan is going to blog about the keynote itself (which was also the keynote at U2U last week, but I couldn’t make it since I was at the GSA event, and if I wasn’t there I should have been at the SEMI event on Internet of Things…guys, you need to coordinate). But part of Wally’s keynote touched on one of the topics I like to go out on a limb about: EUV. It’s not happening.


As Wally amusingly pointed out (well, we had all had some free eSilicon wine so we were easily amused), in 2003 we predicted we would move to 157nm F[SUB]2[/SUB] in 2007 and EUV (14nm) in 2009. As I hope everyone who reads SemiWiki knows, we are currently stuck with 193nm off-axis immersion lithography, and the fact that we can print features on an 80nm pitch without double patterning, and about 50nm with it, is totally amazing. But it is not cheap. There really seem to be no technical barriers to getting down to maybe 7nm using these techniques, although we will be up to quadruple and maybe octuple patterning and…show me the money.


However, by 2005, two years later, the predictions had moved out…two years. Meaning no progress. F[SUB]2[/SUB] was now 2009 and EUV would be 2011. That would have been nice.


By 2006, only one year later, everything had slipped another two years. Definitely going backwards. EUV was 2013, yeah. It would save the day at 10nm.


157nm F[SUB]2[/SUB] dropped off the radar. EUV moved out again. Now scheduled for 2015. Maybe.


I don’t want to be too negative; there has been real progress in EUV. But it is a Rube Goldberg technology. I’ve related this before, but here is how it works. Firstly, EUV is absorbed by air, so the entire optical path has to be in a high vacuum, unlike traditional lithography. And it’s not only air that absorbs EUV but pretty much everything else, so the masks have to be reflective, not refractive (think patterned mirrors, not photographic negatives). Even ordinary mirrors like the one in your bathroom (or even in a state-of-the-art astronomical telescope) absorb EUV, so all the other optics have to be Bragg mirrors (a sort of interference mirror formed of alternating layers of silicon and molybdenum) that reflect only about 70% of the incident light and absorb the other 30%. So they get hot. Well, not very hot, the light source isn’t that powerful yet haha.

So you drop molten tin, hit each droplet with one laser pulse to shape it and then hit it with a 100MW laser so that about 1% of the energy is emitted as secondary radiation. The current best value is 80W of EUV emission. And since those 100MW lasers are maybe 5% or 10% efficient, you need a couple of GW of power in the subfloor under the fab to drive them. A nuclear power plant would be nice. Then come about 8 mirrors at 70% efficiency each. So you start from 2GW and end up with a few watts hitting the photoresist.
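
The mirror-train arithmetic can be put in one place as a back-of-the-envelope sketch. The 80W source figure, the ~70% reflectivity and the count of about 8 mirrors are the rough numbers quoted above, not tool specifications:

```python
# Back-of-the-envelope EUV mirror-train arithmetic, using the rough
# figures from this post: ~80W of EUV at the source, ~8 Bragg mirrors,
# each reflecting only ~70% of the incident light.

euv_at_source_w = 80.0      # best reported EUV source emission
reflectivity = 0.70         # per-bounce Bragg mirror reflectivity
num_mirrors = 8             # mirrors between source and wafer

# Each bounce keeps ~70%, so eight bounces keep 0.7^8 of the light.
at_wafer_w = euv_at_source_w * reflectivity ** num_mirrors
print(f"EUV reaching the photoresist: {at_wafer_w:.1f} W")
```

Eight 70% bounces keep under 6% of the light, which is why “a few watts hitting the photoresist” is the right order of magnitude.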

Everyone is very focused on the light source power, which really does determine whether EUV will be economically more attractive than multiple patterning. But another issue that I think people underestimate is the pellicle. Nobody talks about this much (except me, although experts tell me it is truly important when I bring it up, so I think I’m onto something). In refractive optics a thin layer is put on top of the mask (think saran wrap) so that contaminants (think dust, although much smaller) are not in the focal plane but a tiny distance away. With EUV’s reflective optics a pellicle would absorb the EUV too, so there isn’t one. Any contaminant that lands on the mask is therefore in the focal plane and will print. So the masks need to be cleaned regularly. I read somewhere, although I can’t find it now, that Intel said they will not use EUV until there is a pellicle. And masks cannot be cleaned an infinite number of times; the pattern starts to degrade. The only interesting pellicle material appears to be crystalline silicon, which apparently isn’t totally EUV absorbent. I shall be at SEMICON West in July at the Lithography seminar to find out what has happened in the last year. If you have any interest in this stuff, that is the place to be in July.

As Lars Liebmann said at a Common Platform (is that still a thing?) Symposium a couple of years ago, “EUV is not as far along as X-ray lithography was when we decided it wouldn’t work.” At IBM they built cyclotrons and all sorts of weird stuff to attempt to make X-ray lithography (roughly 1nm “light”) work. I asked him at ICCAD last year whether that was still true and he said he thought it was closer. Only later did I realize that perhaps he’d meant closer to realizing it wasn’t going to work. I remain a skeptic: EUV will not only not be ready for Xnm (pick your X), it will never work economically.


More articles by Paul McLellan…


Samsung ♥ GLOBALFOUNDRIES
by Daniel Nenni on 04-18-2014 at 11:00 pm

Had I not been briefed personally I might not have believed it. Samsung and GLOBALFOUNDRIES will work closely together to satisfy 14nm wafer demand while sharing Samsung’s FinFET secret sauce. This tells me two things: Samsung has more 14nm design wins than I had originally reported, and the new GF CEO is serious about the pure-play foundry business, which I had not reported, so I’m 0 for 2 on this one:

Samsung and GLOBALFOUNDRIES Forge Strategic Collaboration to Deliver Multi-Sourced Offering of 14nm FinFET Semiconductor Technology. Shared technology allows global capacity for 14nm FinFET fabrication in the U.S. and Korea

Seoul, Korea and Santa Clara, Calif. April 17, 2014 – Samsung Electronics Co., Ltd. and GLOBALFOUNDRIES today announced a new strategic collaboration to deliver global capacity for 14 nanometer (nm) FinFET process technology. For the first time, the industry’s most advanced 14nm FinFET technology will be available at both Samsung and GLOBALFOUNDRIES, giving customers the assurance of supply that can only come from true design compatibility at multiple sources across the globe…

“This unprecedented collaboration will result in a global capacity footprint for 14nm FinFET technology that provides AMD with enhanced capabilities to bring our innovative IP into silicon on leading-edge technologies,” said Lisa Su, senior vice president and general manager of Global Business Units at AMD. “The work that GLOBALFOUNDRIES and Samsung are doing together will help AMD deliver our next generation of groundbreaking products with new levels of processing and graphics capabilities to devices ranging from low-power mobile devices, to next-generation dense servers to high-performance embedded solutions.”

“This strategic collaboration extends the value proposition of a single GDSII multi-sourcing to the FinFET nodes. With this true multi-source platform, Samsung and GLOBALFOUNDRIES have made it easy for fabless semiconductor companies to access FinFET technology and increase first-time silicon success,” said Dr. Stephen Woo, president of System LSI business, device solutions, Samsung Electronics Division. “Through this collaboration, we are advancing the foundry business and support model to satisfy what customers have been asking for.”

“Today’s announcement is further proof of the importance of collaboration to enable continued innovation in semiconductor manufacturing,” said GLOBALFOUNDRIES CEO Sanjay Jha. “With this industry-first alignment of 14nm FinFET production capabilities, we can offer greater choice and flexibility to the world’s leading fabless semiconductor companies, while helping the fabless industry to maintain its leadership in the mobile device market.”

This announcement was embargoed until now so I have not had a chance to speak with my closest 18,225 LinkedIn friends about it but I’m guessing there will be quite a bit of discussion in the Silicon Valley coffee houses. Even better, I will be at the EDPS Workshop in Monterey today and tomorrow with professionals from all walks of semiconductor life. This will be the most questioned topic for me, absolutely, since I’m the so-called foundry expert.

An interesting thing: On one side of the briefing table was Ana Hunter, Vice President of GLOBALFOUNDRIES, formerly Vice President Foundry, Samsung Semiconductors. On the other side was Kelvin Low, Senior Director, Foundry Marketing Samsung, formerly Director Product Marketing, GLOBALFOUNDRIES. It’s a small world after all.

Take a look at the Samsung-GLOBALFOUNDRIES 14nm Collaboration slide deck HERE and let me know what you think in the comments section. Personally, I think this is a real game changer for the fabless semiconductor ecosystem.

More Articles by Daniel Nenni…..



Power and Thermal Simulation in ESL Verification Flows
by Daniel Payne on 04-18-2014 at 8:11 pm

At the recent DVCon there was a keen focus on design verification and validation. Much of the attention is on logic/circuit design verification, UVM, and IP verification. At the system level, functional verification has improved to comprehend complex hardware and software interaction using Virtual Platforms/SystemC and Transaction Level Model simulation that complements software development and debug activities. Backend validation remains a complex task, which often delays successful product launch and adoption. Designs must now also satisfy use-case-specific and scenario-dependent power and thermal targets. These requirements are defined by marketing and the customers at inception, and successful designs must converge on these targets much earlier in the development process. Complete system-level verification and software validation must also comprehend the power and thermal validation flows. UVM and IP verification are good and necessary; however, complex SoC verification also requires dynamic and coupled simulation to meet power targets, verify power management and validate thermal management algorithms and policy.

If your new SoC design uses software-based policy decision making or autonomous embedded power/thermal management controllers, hardware, behavioral sensors and micro-sequencers to implement power and thermal management, then these become part of the verification flow. The question then becomes: when do I verify power and thermal, early at the ESL level or later at the RTL level? My hunch is that the ESL level makes more sense, because at that early stage you can still make trade-offs, while at the RTL stage it is too late. Typical questions that need to be answered about power verification include:

  • Will my design meet the power specified in the customer requirements and component data sheet?
  • What is Idd across frequency, voltage and temperature for a range of conditions?
  • How does power and temperature vary for each and every power state/power mode: for example Turbo, Active, Idle, Idle with clock gating, Idle with power gating, retention and sleep states?
  • Can I guarantee items like MP3 playback, talk time, video encode/decode and associated battery life?
  • What invokes the power state transitions, their entry and exit latencies and how does this affect the resulting power/Idd calculations or estimation?
  • How do software device drivers, core/micro-controller micro-code and system firmware affect power?
  • How does OS-directed power management affect power?
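
Questions like these can be posed against a very simple abstract power model long before RTL exists. As a toy illustration (every state name, power number, residency and battery capacity below is hypothetical, invented for this sketch, and not from any real design or from Docea’s tools):

```python
# Minimal ESL-style use-case power estimate: residency-weighted average
# power across power states, and the battery life it implies.

# Hypothetical power draw per state, in milliwatts.
state_power_mw = {
    "turbo": 1500.0,
    "active": 600.0,
    "idle_clock_gated": 60.0,
    "idle_power_gated": 5.0,
    "sleep": 0.5,
}

# Hypothetical residency for an "MP3 playback" use case:
# fraction of time spent in each state (must sum to 1.0).
mp3_residency = {
    "active": 0.10,
    "idle_clock_gated": 0.20,
    "idle_power_gated": 0.65,
    "sleep": 0.05,
}

def average_power_mw(residency):
    """Residency-weighted average power for a use case."""
    return sum(state_power_mw[s] * frac for s, frac in residency.items())

def battery_life_hours(residency, battery_mwh=5000.0):
    """Hours of this use case a battery of the given capacity sustains."""
    return battery_mwh / average_power_mw(residency)

avg = average_power_mw(mp3_residency)
print(f"MP3 playback average power: {avg:.1f} mW")
print(f"Battery life: {battery_life_hours(mp3_residency):.1f} h")
```

Even a model this crude lets you ask “what if idle spends 20% less time power-gated?” months before the design is complete, which is the point of doing this at the ESL level.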

To get any of these answers requires early power modeling at the ESL level, and a company called Docea Power is focused on such power modeling and simulation.

Power Simulation

Docea Power has an ESL tool for power simulation and validation called Aceplorer. Here’s what the flow looks like for Aceplorer:

With Aceplorer your SoC power model can account for both dynamic and statistical use cases, allowing you to explore and then optimize the best power management strategy considering both hardware and software. This task is well-suited for an ESL approach. Power models can be created and refined as your design progresses but not gated by RTL and circuit level details. Imagine being able to run your operating system and apps to see how they impact power before your design is completed.

Thermal Simulation

Knowing the thermal response of your SoC before silicon is quite important in meeting specs, however it does require both power and thermal modeling and a coupled power-thermal simulator. Thermal sensors placed within an SoC and the system design provide dynamic feedback to your system when thermal limits are reached. Software and hardware policy engines can control the thermal mitigation scheme which can alter the system operating point frequency and voltage in response to the resource demands of the application and the environmental/physical conditions.
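
The reason power and thermal must be simulated together is that they feed each other: leakage power rises with temperature, and temperature rises with power. A toy model makes the coupling concrete (this is not Docea’s Aceplorer or Aceplorer’s solver; every coefficient below is invented for illustration):

```python
# Toy coupled power-thermal model: leakage grows exponentially with
# temperature, temperature grows linearly with total power through a
# thermal resistance, so the steady state must be solved iteratively.

def leakage_w(temp_c, leak_25c=0.5, doubling_deg=20.0):
    """Leakage power, doubling every `doubling_deg` degrees above 25C."""
    return leak_25c * 2.0 ** ((temp_c - 25.0) / doubling_deg)

def steady_state(dynamic_w=2.0, ambient_c=25.0, r_th=10.0, iters=50):
    """Fixed-point iteration: T = ambient + R_th * (dynamic + leakage(T))."""
    temp = ambient_c
    for _ in range(iters):
        total = dynamic_w + leakage_w(temp)   # power at current temperature
        temp = ambient_c + r_th * total       # temperature at that power
    return temp, total

temp, power = steady_state()
print(f"steady state: {temp:.1f} C at {power:.2f} W")
```

With these made-up numbers the loop converges to 65.0°C and 4.0W: leakage ends up four times its room-temperature value, which is exactly the kind of effect a power-only or thermal-only simulation would miss.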

Docea Power also has an ESL tool that creates compact thermal models, which can be used to analyze power and temperature over time, solve power as a function of temperature and help optimize power and thermal management policies.

With the Docea approach you can use SystemC and TLM models to drive the power and functional models, not only calculating power and temperature but also tracking the power and thermal event triggers and the resultant change in behavior of an application. This approach isn’t just for software debug and software development; rather it’s a way to verify complex power state and power event transitions, converge on power targets and account for design-specific parameters that affect most IP:

  • Power supply and voltage network
  • Clock networks
  • Control of the voltage and clocking, plus the interaction with software and applications

Co-simulation with a power and thermal ESL model enables rapid simulation speed, ease of configuration and multi-case testing. This answers three basic questions:

  • Did I converge on my power targets for all use cases?
  • Did I verify the firmware, software and drivers with the hardware interaction for all power and thermal state modes and state transitions?
  • Does this have a noticeable or negative impact on performance, latency, responsiveness or QoS of the applications required for proper use of the IP or system?

Summary
For your next SoC design, why not consider modeling and simulating both power and thermal at the ESL level?



Signoff Accurate Timing Analysis at Improved Run-time &amp; Capacity
by Pawan Fangaria on 04-18-2014 at 4:30 pm

Semiconductor design sizes these days can easily be of the order of several hundred million cells, adding to the complexity of verification. Amid ever-growing design sizes, it’s a must that timing verification is done accurately. Normally Static Timing Analysis (STA) is done to check whether all clocks and signals in the circuit at every stage are properly timed. While introducing hierarchy is inevitable in architecting such large semiconductor designs, every level of hierarchy leads to abstraction in timing, causing a certain amount of loss in accuracy. EDA vendors struggle to strike a balance between memory usage, run-time and accuracy by using different hierarchical methods of timing analysis; a flat analysis provides the most accurate results, as it runs through each leaf-level cell and the wires through it, but it consumes a large amount of memory and needs a very long run-time.
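
At its core, what flat STA does is propagate worst-case arrival times through the full timing graph and compare them against required times. A minimal sketch of that propagation (the tiny netlist, delays and clock constraint below are hypothetical, and this is of course not Tempus’s implementation, just the textbook computation):

```python
# Minimal flat-STA sketch: latest-arrival-time propagation through a
# timing graph, then a slack check at the output.

from collections import defaultdict

# Timing graph: edges (driver, receiver, delay_ns).
edges = [("in", "g1", 0.2), ("in", "g2", 0.3),
         ("g1", "g3", 0.5), ("g2", "g3", 0.4),
         ("g3", "out", 0.1)]

def arrival_times(edges, start="in"):
    """Worst-case (latest) arrival time at each node."""
    fanin = defaultdict(list)
    for u, v, d in edges:
        fanin[v].append((u, d))
    arrival = {start: 0.0}
    # Nodes are listed in topological order for this small example.
    for node in ["g1", "g2", "g3", "out"]:
        arrival[node] = max(arrival[u] + d for u, d in fanin[node])
    return arrival

at = arrival_times(edges)
required_ns = 1.0                    # hypothetical clock constraint
slack_ns = required_ns - at["out"]
print(f"arrival at out = {at['out']:.2f} ns, slack = {slack_ns:.2f} ns")
```

A flat run does this over every leaf cell in the design, which is where the accuracy, and the memory and run-time cost, both come from; hierarchical methods replace subgraphs with abstractions of exactly this computation.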

The Extracted Timing Model (ETM) is a common hierarchical analysis approach which replaces blocks with delay and timing-constraint arcs to speed up timing analysis and reduce memory consumption. However, this approach has a major limitation: multiple constraint modes need to be merged into a single ETM for the top-level analysis, and merging across corners is not possible. Hence, MMMC (multi-mode multi-corner) analysis requires extra time to extract ETMs that comprehensively cover all modes and corners. Moreover, advanced on-chip variation (AOCV) requires multiple characterizations (for multiply instantiated blocks) for the same view in analysis. SI-aware ETMs, path exception modeling, multiple-master clock modeling for a generated clock and waveform-propagation-aware ETMs are some of the other limitations and complexities in the ETM approach. Also, an ETM is good only in a given context. When the context changes (which is typically the case, because the context is not known exactly until the end of the design cycle), ETMs must be regenerated.

Another common approach to model a block is the Interface Logic Model (ILM), where only the connections from the inputs to the first stage of flip-flops and from the last stage of flip-flops to the outputs remain in the model, along with the clock tree driving these flip-flops. All the other internals of the block are wiped out. While this approach can deliver highly accurate results significantly faster at lower memory consumption, it fails to comprehend over-the-block routing, constraint mismatches, latch-based designs, and pessimism in arrivals due to timing-window CPPR (Common Path Pessimism Removal). Also, the many ILMs needed to cover all modes and corners for each timing analysis view would require huge data storage and management.

To overcome the issues of the ILM and ETM methods, concurrent analysis of hierarchical blocks has been attempted, in which design partitions are timed independently and dependencies between them are resolved by asserting constraints and synchronizing other data at the block boundaries, with the aim of converging iteratively by asserting new constraints at each iteration. However, this approach also has its limitations.

I was very impressed with Cadence’s Tempus Timing Signoff Solution, which provides full flat timing analysis (naturally the most accurate) and uses massive distributed processing to improve and control run-time and capacity. To my pleasant surprise, in order to make the solution more powerful for full-chip analysis, Cadence has introduced a new concept called Scope-based Analysis into its Tempus solution.

In this approach, only those portions of the design that a user wants to analyze (due to a change) are dynamically abstracted within the full-chip context. The user defines the change space at a granularity equal to physical/logical block boundaries. Once the blocks or top-level scope is provided to the Tempus solution, dynamic abstraction of the design is done under the hood. Analysis of the resulting design then takes significantly less run-time and a smaller memory footprint.

This approach is consistent with flat timing analysis and provides several major benefits at both the user and design level. The tool keeps operating with the same user scripts and constraints that were used for flat timing analysis, without any change in the use model. The reports produced are also consistent in format and content with those of flat timing analysis, saving the user from unnecessary debugging of deviations. The added value of this approach is 2-3 times faster analysis at significantly lower peak memory than a full flat analysis. Also, each scope-based analysis can be run in parallel, a key strategy of the Tempus solution. The approach is fully compatible with MMMC analysis without requiring any change to the user-script setup for defining constraints and analysis views.

Since flat-level timing analysis is necessary to achieve accuracy, the Tempus Timing Signoff Solution, with its innovative Scope-based Analysis approach for accurately and efficiently analyzing the portions of the design affected by a change, offers a compelling full-chip timing analysis flow for designs of the order of hundreds of millions of cells. One can read a whitepaper describing this and other methods of hierarchical timing analysis in detail.

More Articles by Pawan Fangaria…..



Sensor clusters at edge call for NoCs nearby
by Don Dingee on 04-17-2014 at 6:30 pm

In his recent blog on EETimes, Kurt Shuler of Arteris took a whimsical look at the hype surrounding the IoT, questioning the overall absence of practicality and a seemingly misplaced focus on use cases at the expense of a coherent architecture. I don’t think it is all that bleak, but when it comes to architecture, Kurt is right, and here is the case in terms of sensor clusters.


Does Processor IP still get the Lion’s share in 2013?
by Eric Esteve on 04-17-2014 at 1:00 pm

I think the answer is pretty obvious, but the interesting point is to figure out which processor type, and which part of revenues: up-front license or royalties? One of my customers, let’s call him Mr. X, asked me to clarify this point. Mr. X has bought the excellent report from Gartner, “Market Share: Semiconductor Design Intellectual Property, Worldwide, 2013,” in which most of the IP vendors (at least all the large ones) are ranked by IP segment. He wants to size the processor IP market, but the first question is: what should we call a processor IP?

In fact, Gartner ranks both “Microprocessor” IP and “Digital Signal Processor” (DSP) IP into the “Processor” category, and I tend to agree, except that I would have added two other segments to the Processor category:

  • Graphic IP
  • Fixed Function Signal Processor

Let’s take a look at the market weight of this processor category. According to Gartner, the overall design IP market size is $2.45 billion, and the processor category weighs in at $1095.8 million, or 44.7%. Not really a surprise, the #1 is ARM Ltd., with an 82% share. The real surprise comes with the company ranked second with $65 million: Cadence!
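
The percentages are easy to sanity-check. A quick sketch, using only the figures quoted from the Gartner report above (the ARM revenue line is merely what the 82% share implies, not a separately reported number):

```python
# Sanity check of the Gartner figures quoted in the text, in $M.
total_ip_market_m = 2450.0     # $2.45B overall design IP market
processor_segment_m = 1095.8   # "Processor" category
arm_share = 0.82               # ARM's share of the processor segment

share = processor_segment_m / total_ip_market_m
print(f"processor category share: {share:.1%}")   # matches the 44.7% quoted

arm_revenue_m = processor_segment_m * arm_share
print(f"implied ARM processor IP revenue: ${arm_revenue_m:.0f}M")
```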

Clearly, the Tensilica acquisition has been credited here, and the pretty high acquisition price ($380 million in cash) seems to be justified, especially when you consider that Tensilica revenues grew by 45% in 2013, after 25% growth in 2012… It’s not completely crazy to foresee processor IP revenue from Cadence reaching $100 million in 2015, and $200 million in 2020!

A very interesting behavior of the processor IP segment is the relative share of royalties. Reminder: the processor IP segment weighs in at $1095.8 million (I love the $0.8 million precision, it makes you feel the error rate could be less than 0.01%), and the royalty level is higher than $600 million, definitely more than 50%. This is typical of the processor IP business model, where royalty is always part of the equation. How many IP vendors dream of building such a recurring model, where your company still makes money even after you have moved to Hawaii to surf! In fact, the processor IP segment is one of the very few segments where an IP vendor can ask for royalties. Even the vendors selling leading-edge hard IP, like a 12 Gbps PHY or a 100 Gbit/s ADC, can’t do it. A business model involving royalty payment is reserved for patent-protected functions, like processor or GPU IP cores, network-on-chip and the like. A vendor may always ask for royalties, but the only way to be sure of getting them is when the vendor has invented the function, or at least benefits from the patent property, as opposed to protocol-standard-based IP, or the well-known mixed-signal or digital IP.

The Processor IP segment is clearly the leader with 45% of the IP market (and almost 60% if we aggregate “Graphic” and “Fixed Function Signal Processor” IP, which makes sense in my opinion). The clear leader is still ARM Ltd., which should lead both the Processor segment and the overall IP market for(ever) a long time! But if you plan to start a processor IP company tomorrow, your chances are very close to zero, as ARM’s customers have invested a huge amount of resources, time and money to develop the software around ARM processors, so you would have to offer them a VERY good reason to change! Many start-ups have tried, and only Tensilica (offering a completely new concept with “Dataplane”), CEVA (offering DSP IP cores) and Imagination Technologies (offering GPU IP cores instead of CPUs) have been successful. MIPS was a long-time direct ARM competitor, but does the MIPS CPU still really compete with ARM?

Eric Esteve from IPNEST

Do you need an accurate interface IP market survey? In the “Interface IP Survey” you will find not only detailed IP sales results by protocol (DDRn, USB, PCIe, SATA, MIPI, Ethernet, HDMI, DP) and by vendor, but also market intelligence (IP vendor competitive analysis, market trends) by protocol, and a 5-year forecast.


Customization can add extraordinary power to your tool
by Pawan Fangaria on 04-16-2014 at 4:30 pm

In the EDA arena we often find companies providing customization platforms along with the tools they offer. I admire such companies because they equip the end users of a tool to extend its functionality to suit their environment, increasing designer productivity significantly. During my job at Cadence I witnessed some expert, creative users of Cadence tools who made very powerful customized tools based on SKILL scripting. What reminded me about Cadence and SKILL is a nice video demo from Concept Engineering posted at the EDA Direct website. Concept Engineering provides a versatile platform for visualizing a semiconductor design at all levels – RTL, block, gate, transistor and Spice – which helps greatly in debugging a design. They provide a customization platform based on Tcl/Tk scripting which can extend the functionality of these tools to a large extent, as per individual customer needs.

I learnt from the demo that the StarVision tools from Concept Engineering provide a rich set of Tcl/Tk-based APIs which can be used to access the design database as well as the GUI, to make modifications or produce reports in any desired format. The tool provides a good level of flexibility: user-ware scripts can be loaded from the command line at the start of design loading, or later directly from the GUI or console window. The demo shows all three methods of loading user-ware scripts, generating three different types of example reports.

When using the command-line method, the design file and Tcl script file are provided on the command line for batch processing. After loading the design, all user-ware scripts are executed. A typical example command for automatically printing design schematics of all modules to PDF files is:

C:….xxx>spicevisionpro g185.sp -userware ..xxprintPDF.tcl
where g185.sp is the design file and printPDF.tcl is the Tcl script with APIs for printing schematics.

The schematics at the transistor level and the top level as generated above are printed in individual PDF files. The top level schematic can run into multiple pages and the page size can be controlled through the script. Similarly, the schematics of all other modules generated at various intermediate levels in the design are printed in individual PDF files.

In another example, the Spice netlist was loaded through the GUI and the report generated by loading the report.tcl script from the API directory. The Spice netlist report produced was identical.

In yet another method, Tcl/Tk commands and APIs are executed through the console window, typically via a source command. The procedures in the script are executed immediately.

In the above example, after loading the design, the Tcl script is executed through the source command at the console. The report generated contains all modules with their interface ports and port directions.

There are different types of user-ware APIs for various purposes, such as customized visualization, ERC checking, etc., which users can employ as per their requirements.

In the above form there is a list of APIs which can be used for modifying GUIs, traversing design database and modifying as desired, adjusting the design hierarchy and so on.

The source code of the user-ware APIs is simple enough; above is a small snippet. Users can develop new APIs on their own by writing similarly simple code to suit their customization needs. Nevertheless, Concept Engineering ships hundreds of user-ware APIs along with complete documentation, which customers can easily adopt for customizing their tools and use as examples for developing new APIs.

It was a nice experience going through the video, which interested readers can access here.

More Articles by Pawan Fangaria…..



Intel Lost $1B in Mobile Last Quarter
by Paul McLellan on 04-16-2014 at 8:00 am

Intel announced their quarterly results today. Revenue was $12.8B, up 1% from a year ago with operating income of $2.5B also up 1% from last year.

Since the future of the world is mobile and not desktop/laptop, the mobile results are the most interesting. Mobile sales fell 61% to $156M. This includes mobile products and anything Atom. They lost $929M in mobile, close enough to $1B for the headline of this blog. As Brian Krzanich said in his Reddit AMA: “we wanted the world of computing to stop at PCs and the world, as it never does, didn’t stop innovating…we missed the mobile move.”

Looks like they are still missing it.

PC and datacenter volumes were up 1% and 3% respectively. PC prices were down 3% year on year and datacenter prices up 8%. That meant PC revenue was $7.94B, $2.8B of which was profit. Datacenter was $3.09B, of which $1.32B was profit.
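
The segment margins drop straight out of those numbers, and they are worth a quick look, because they show just how asymmetric the businesses are (figures as quoted in this post, in $B):

```python
# Operating margins implied by the segment revenue/profit figures
# quoted above (all in $B).
segments = {
    "PC":         {"revenue_b": 7.94,  "profit_b": 2.80},
    "datacenter": {"revenue_b": 3.09,  "profit_b": 1.32},
    "mobile":     {"revenue_b": 0.156, "profit_b": -0.929},
}

for name, s in segments.items():
    margin = s["profit_b"] / s["revenue_b"]
    print(f"{name}: operating margin {margin:.0%}")
```

PC comes out around 35% and datacenter above 40%, while mobile loses several times its own revenue, which is the real story behind the headline.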

Internet of Things (IoT) revenue rose 32% to $482M. I’m not completely sure what is in that business, but at nearly half a billion dollars it is a significantly sized business.

Intel reduced headcount by 1,000 people (to 106,000) and their cash position fell by $1B during the quarter, although they still have $19B, of which $11B is trapped offshore, presumably to avoid corporate income tax.

So, bottom line, server sales are healthy but mobile sales are coughing up blood. Of course, in some sense Intel participates in mobile through server sales. The paradigm of the future is a mixture of mobile devices such as smartphones along with cloud datacenters which, despite ARM’s efforts, are still an Intel stronghold.

They are predicting revenue of $13B in Q2, up a little from this quarter. In the CFO’s commentary on the conference call they said: “we are forecasting the midpoint of the gross margin range for the second quarter to be 63%, a 3% increase from the first quarter. This is primarily driven by lower factory startup costs as we ramp 14nm, higher platform volume write offs as we qualify the first 14nm products. This is partially offset by the increase in tablet volume and related contra revenue dollars.”

I think that means they are saving money by putting a fab in Arizona on ice and running 14nm in an existing fab, giving them a 3% increase in gross margin, which is significant. Plus they are selling more tablets (which I assume are counted in PC, not mobile), although they are starting from a pretty small base.


More articles by Paul McLellan…


Xilinx Showcases World’s First ‘High Performance’ Analogue FPGA
by Luke Miller on 04-16-2014 at 7:00 am

Last February Xilinx presented a prototype device at the 2014 IEEE International Solid-State Circuits Conference (ISSCC) in a paper titled “A Heterogeneous 3D-IC Consisting of Two 28nm FPGA Die and 32 Reconfigurable High-Performance Data Converters”; click here to get a copy of the paper. Let me just share the intro, my dear reader…

“In this paper, we demonstrate an aggregate using sixteen 16-b DAC instances running at 1.6GS/s with an FPGA-to-die interface power of 0.3mW/Gb/s. We introduce a reconfigurable receive system that allows channel count to trade with system sample rate. Specifically we demonstrate a 500MS/s ADC by interleaving four 125MS/s units.”
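
The time-interleaving trick in that last sentence is simple and worth seeing spelled out: four units each sample at 125 MS/s, but their sampling instants are staggered by a quarter of a unit period, so together they produce one 500 MS/s stream. A sketch with ideal converters (pure illustration; nothing below is Xilinx’s design):

```python
# Time-interleaved ADC sketch: four 125 MS/s units, phase-offset,
# combine into a single 500 MS/s sample stream.
import math

F_UNIT = 125e6               # per-unit sample rate (S/s)
N_UNITS = 4                  # interleave factor
F_TOTAL = F_UNIT * N_UNITS   # aggregate rate: 500 MS/s

def interleaved_samples(signal, n_samples):
    """Sample `signal` at F_TOTAL; unit k takes samples k, k+4, k+8, ..."""
    period = 1.0 / F_TOTAL
    return [(n % N_UNITS, signal(n * period)) for n in range(n_samples)]

tone = lambda t: math.sin(2 * math.pi * 10e6 * t)  # 10 MHz test tone
stream = interleaved_samples(tone, 8)
for unit, value in stream:
    print(f"unit {unit}: {value:+.4f}")
```

Each unit still only runs at 125 MS/s; the aggregate rate, and the channel-count-versus-sample-rate trade the paper describes, comes purely from how the units are phased and multiplexed.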

This device is, in my humble opinion, the most innovative silicon device of the decade. It demonstrates so much of what Xilinx is: the premier leader in programmable logic, no one else comes close, and this is a glimpse of what goes on behind the wonderful doors at Xilinx. Since pictures speak a thousand words, the block diagram of the REAL silicon device is shown below. Xilinx is not in the propaganda business, and once again here is proof. The device is a fully programmable analogue FPGA. It is absolutely astounding.

This ‘AFPGA’ (Analogue FPGA) has integrated ADCs, DACs, PCIe, gigabit transceivers, and the kitchen sink. Think about it: this ‘chip’ would have taken a few circuit boards to build a few years ago. Xilinx demonstrated its ground-breaking 3D IC technology/Stacked Silicon Interconnect once again by integrating two Virtex-7 350Ts to build the most advanced programmable device in existence. Think about what you could do with this device.

Since I am a RADAR/EW fella, I immediately think of Digital RF Memory. How about software-defined radio on a chip? An awesome FPGA for UAVs, and for TR modules in AESA RADAR arrays. The bottom line is that all systems have analogue sensor data that eventually feeds FPGAs. This major step in system integration not only reduces the BOM, but drops power as well.

“This solution completely integrates the FPGA-data converter interface with a measured power of only 0.3mW/Gb/s, which is about 2 orders of magnitude improvement compared to discrete data converter interfaces.”

Can you say wow? I mean, I am giddy over this FPGA; it’s like meeting the wife all over again. Do not tell her I said that…

This is no sissy device, by the way; it performs very well, as shown below.

Above is a picture of the real device. What does all this mean? Well, the FPGA blob strikes again and, as expected, more devices are consumed by the FPGA. Xilinx devices truly are open programmable hardware. While this prototype device may never see a real design, like the concept cars at car shows, Xilinx’s innovation cannot be denied. This is not an ADI, TI or E2V device, but that is not the point. Someone had to take the risk of building and integrating such a device. It’s simple to design in PowerPoint, but certainly much different to build this wonder in the fab. May I say: Xilinx, well done, and keep the innovation coming. Now if I’m lucky I’ll get my hands on one of these and perhaps make my ‘Bird RADAR’ dream a reality…



What, SD doesn’t have enough pins?
by Don Dingee on 04-16-2014 at 6:00 am

I was in a Twitter conversation over the weekend with some very smart people, and one of the discussion points was how slow and painful the formal standardization process can be. One suggestion was that IoT companies should “just do it”, creating specification-by-implementation.