
Writing the unwritten rules with ALINT-PRO-CDC
by Don Dingee on 02-09-2015 at 11:30 am

EDA verification tools generally do a great job of analyzing the written rules in digital design. Clock domain crossings (CDCs) are more like the unwritten rules in baseball; whether or not you have a problem remains uncertain until later, when retaliation can come swiftly out of nowhere.

Rarely as overt or dramatic as a bench-clearing brawl, metastability and other issues due to CDCs can be very hard to spot. Static timing analysis is of little help. Functional simulation may or may not have executed enough timing scenarios. At-speed testing in real silicon is an expensive and late way to discover a problem. Is there an alternative, pre-silicon?

The new release of Aldec ALINT-PRO-CDC 2015.01 gives designers a way to capture experience debugging CDC issues and to head them off before they start. Linting, or design rule checking, sifts through HDL code looking for constructs that match or violate a set of rules. This provides a way to automate code review, highlighting areas that may lead to problems.


Which rules will stop hard-to-discover CDC problems is a problem in itself. This is where experience and the previously unwritten rules come in. ALINT-PRO-CDC accepts a set of design constraints, and a key new feature is the ability to read Synopsys Design Constraints (SDC) 2.0 files. This brings in information used to aid synthesis, such as:

  • create_clock, create_generated_clock – specify clock network sources and the relations between them
  • set_clock_groups – defines groups of clocks that are asynchronous to each other
  • set_input_delay, set_output_delay – relate input and output signals to design clocks
  • get_clocks, get_pins, get_nets – commands used to access design elements

Without knowing what constraints were applied prior to synthesis, this information can be difficult to glean from HDL code alone. Most EDA platforms work with SDC 2.0 files.
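To make that concrete, here is a minimal sketch, in Python, of how a CDC tool might pull asynchronous clock-group information out of the SDC commands listed above. It is purely illustrative (not Aldec's implementation); the constraint fragment and clock names are hypothetical, and only the simple braced form of -group is handled.

```python
import re
from itertools import combinations

# Hypothetical SDC 2.0 fragment using the commands listed above.
sdc_text = """
create_clock -name sys_clk -period 10.0 [get_ports sys_clk]
create_clock -name usb_clk -period 20.8 [get_ports usb_clk]
create_generated_clock -name sys_clk_div2 -source [get_ports sys_clk] -divide_by 2 [get_pins u_div/q]
set_clock_groups -asynchronous -group {sys_clk sys_clk_div2} -group {usb_clk}
set_input_delay 2.0 -clock sys_clk [get_ports din]
"""

def async_clock_pairs(sdc):
    """Return (clock_a, clock_b) pairs declared asynchronous to each other,
    i.e. the crossings a CDC checker would need to examine."""
    pairs = []
    for line in sdc.splitlines():
        if "set_clock_groups" in line and "-asynchronous" in line:
            groups = [g.split() for g in re.findall(r"-group\s+\{([^}]*)\}", line)]
            for ga, gb in combinations(groups, 2):
                pairs.extend((a, b) for a in ga for b in gb)
    return pairs

print(async_clock_pairs(sdc_text))
# -> [('sys_clk', 'usb_clk'), ('sys_clk_div2', 'usb_clk')]
```

Each reported pair is a clock-domain crossing where a synchronizer, or a matching constraint extension, should be present.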

Aldec has gone a step further, allowing teams to add constraint extensions. This can help describe elements such as custom synchronizers, encrypted IP, behavioral models, FPGA vendor primitives, descriptions of reset networks, and other constructs. If particular areas of a design, or certain approaches, have led to CDC issues in prior debug activity, that information can be captured in the form of constraints.

Once constraints are established, ALINT-PRO-CDC goes to work with several engines. Its synthesis engine examines clocked elements and performs conditional analysis. A pattern-matching engine validates synchronizer structures and finds forbidden netlist patterns. Clocks and resets are detected automatically, and clock domains are extracted.


The strategy reaches beyond static checks: ALINT-PRO-CDC integrates with Aldec Riviera-PRO for dynamic checks during simulation. This greatly enhances metastability insertion, precisely targeting areas of the design rather than randomly hunting around. Additional assertions and coverage statements help make sure CDC issues are exposed.

Linting is a fabulous tool (perhaps most famously applied in the Toyota sudden-acceleration investigations) for finding and highlighting subtle problems in code that may lead to defects. HDL design is philosophically no different from software design, and it is surprising that linting is not used more broadly in EDA circles.

Aldec is hoping not only to help designers prevent CDC-related bugs, but also to add another tool to the formal verification process for safety-critical designs, such as that called for in DO-254. With the increase in third-party IP and growing design complexity, the burden on designers reviewing code without automation is becoming dangerously high.

Writing the unwritten CDC rules can prevent problems later in the design season. For more information, including an overview presentation, see the Aldec ALINT-PRO-CDC home page.

A Public Synchronizer
by Jerry Cox on 02-09-2015 at 7:00 am

You might ask yourself, "Why would anyone want a public synchronizer available to download?" Usually designers just grab a flip-flop from their company's or a standard-cell vendor's library. However, are these handy solutions the best course of action today? Current SoC designs have numerous clock domains, providing many opportunities for metastability mischief at the crossings between those domains. Using handy solutions without fully understanding their reliability is dangerous for the design of safety-critical products.

Modern flip-flop designs use high-Vth transistors to reduce power while maintaining low clock-to-Q delay, but ignore synchronizer performance. Some firms have developed specialized synchronizer standard cells with a high mean time between failures (MTBF). This measure of reliability depends on the synchronizer's recovery time constant tau and vulnerability window Tw. In safety-critical designs, synchronizer MTBF can be improved substantially by reducing tau at the expense of power and clock-to-Q delay. Such specialized designs provide a competitive advantage and are usually considered confidential IP that must be kept hidden.
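The article does not spell out the formula, but the widely used first-order model behind those two parameters (stated here for clarity, not quoted from Blendics) is:

```latex
\mathrm{MTBF} \;\approx\; \frac{e^{\,t_r/\tau}}{T_w \, f_c \, f_d}
```

where t_r is the resolution time available before the synchronizer output is sampled, f_c is the clock frequency and f_d is the data toggle rate. The exponential dependence on t_r/tau is why shaving even a little off tau, or adding another stage's worth of resolution time, moves MTBF by many orders of magnitude.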

Keeping the design and performance of these private synchronizers under wraps makes it impossible to compare their performance or establish benchmarks, and it hinders engineering students' and researchers' understanding of synchronizer subtleties.

To overcome these drawbacks our colleagues at Oracle and Southern Illinois University Edwardsville have developed a public synchronizer that provides many benefits:

  • Engineers can use the public synchronizer’s extracted netlist to compare a synchronizer MTBF obtained with their in-house analysis tool against that obtained with MetaACE, a Blendics tool verified in silicon.
  • Designers can layout the public synchronizer in their current process so that it serves as a benchmark for new designs.
  • Researchers and engineers can use the public synchronizer to investigate the effect of changes in semiconductor processes (P), supply voltages (V), or junction temperatures (T) on MTBF and do this without exposing details of their private synchronizer design.
  • Students can use the public synchronizer's extracted netlist and the associated FreePDK to study synchronizer design issues.

The above benefits sound useful, so let's look at the Public Synchronizer design. Two cascaded level-sensitive multiplexer-based latches, a master and a slave, were chosen as the basic design, and for good testability Level-Sensitive Scan Design (LSSD) was also included. Although a synchronizer and a data flip-flop use this same circuit, the characteristics to be optimized are very different. Ian W. Jones of Oracle Labs suggested a design based on a standard textbook circuit. George Engel and Sam Dunham, both at Southern Illinois University Edwardsville, optimized transistor sizing for synchronizer service and completed the layout.

This layout and fully extracted netlist were obtained using the FreePDK, a purposely non-manufacturable, Free Open-Access 45nm Process Design Kit and Standard Cell Library from North Carolina State University. The public synchronizer circuit occupies an area of 16 μm² and possesses a t(CLK−Q) of 55ps. An analysis by the Blendics tool, MetaACE, gave a tau(eff) of 13ps and a Tw of 43fs when the synchronizer was operated from a 1.0 Volt supply. For clock and data rates of 1.5 GHz and a 50% duty cycle, the synchronizer MTBF is 1.6 × 10^6 years assuming 85% of a full clock period is available for synchronizer resolution. A two-stage synchronizer with the same clock rate, data rate and resolution-time assumption would have an MTBF of 5.5 × 10^37 years, a much more prudent value for a safety-critical application with a production volume of a million or more units.
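As a rough cross-check, here is a back-of-the-envelope calculation in Python using the first-order model above and the rounded figures quoted in this article (my own plug-in, not MetaACE output). With tau rounded to 13ps the result lands within a factor of two of the 1.6 × 10^6-year figure, which is about the sensitivity you would expect from the exponential; that same exponential is what pushes the two-stage design to the astronomically larger number.

```python
import math

# Rounded figures quoted above (illustrative plug-in to the first-order
# single-stage MTBF model, not a MetaACE result).
tau = 13e-12          # effective recovery time constant, seconds
Tw = 43e-15           # vulnerability window, seconds
fc = 1.5e9            # clock rate, Hz
fd = 1.5e9            # data rate, Hz
t_r = 0.85 / fc       # 85% of one clock period available for resolution

mtbf_s = math.exp(t_r / tau) / (Tw * fc * fd)
print(f"{mtbf_s / (365.25 * 24 * 3600):.1e} years")   # ~2.8e6 years
```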

Try the public synchronizer using your process. How does its MTBF compare to what you have been using? If you don't have a convenient way to measure MTBF, download MetaACE LTD and attend the Blendics webinar.


Webinar: How IoT Designs Are Driven by Cost, Power, and Security
by admin on 02-08-2015 at 8:30 pm

SoCs being developed for the fast-growing Internet-of-Things market will sell for a small fraction of the price, and operate on a small fraction of the power, of mobile devices' chips. More importantly, IoT SoCs will be far more vulnerable to hacker attacks than the much better protected chips in portable devices. As a result, designers developing SoCs targeting IoT applications face a set of challenges that require computing capability unique to this class of devices: (1) extensive power management functionality, (2) sensor data and network protocol stack processing, (3) detecting and thwarting security attacks, and (4) enabling all these functions in a silicon footprint no larger than an 8-bit alternative.

Andes Technology Corporation will host a webinar on Tuesday, February 10, 2015 at 10:00 AM Pacific Time that will detail how IoT designs are driven by cost, power, and security. One IoT device example that will be described, the smart meter, illustrates the importance of these design considerations. It contains an analog interface from a sensor providing voltage, current, and temperature readings. A microcontroller performs the compute functions for the design. In addition, there is a communications port: power-line communications, ZigBee, or some form of RF interface. For programming and debug, the design typically comes with an interface to an external PC.

The 8-bit processors first used in IoT applications have a simple CPU architecture and instruction set, developed in the early 1970s and suited to the control applications of 30 to 40 years ago. With the rise of the smartphone, computing requirements changed dramatically and demanded 32-bit architectures, designed in the late 1980s, able to run on rechargeable batteries. Both of these processor classes are being applied to the new Internet-of-Things devices now coming on the market, but neither provides the computing architecture and instruction set this next generation of products requires.

The 32-bit embedded processors, of which there are a number of alternatives, provide the compute power, but they suffer from being designed for the applications that were the major market drivers of their day: the PC, the set-top box, the mobile phone and tablet, and a variety of consumer devices such as cameras and audio recorders. The functionality in these 32-bit processors has yet to deliver a hit IoT end product comparable to the smartphone. Activity trackers and smart watches, for example, fall short on power and end-user capability.

What the Internet-of-Things requires is a 32-bit processor with an architecture that serves the demand for high performance, while providing the power savings needed to last long periods between recharge or to run on harvested power. This webinar presents one such 32-bit embedded processor system. Designed in 2005 from the ground up, the Andes Technology N8 MCU plus AE210 peripherals will be used to illustrate how new architectural features can achieve both performance and power savings in a gate count comparable to an 8-bit CPU. Two features that will be described to drive home the point are frequency scaling and flash acceleration, neither supported directly on existing 32-bit embedded CPUs.

To address the demand for enhanced security, the presentation will also describe hardware functionality built into the new 32-bit architecture: data and address scrambling, and differential power analysis protection. The first provides protection from hacks that target the interface between CPU and memory. The second protects against attacks that infer program behavior by observing the power-use signature of the CPU.

Please join the webinar on Tuesday, February 10, 2015 10:00 AM – 11:00 AM PST. To register, click here.

By Emerson Hsiao, Senior VP, Sales and Technical Service, North America Operations


Integrated Spec Design & Documentation for SoC
by Daniel Payne on 02-08-2015 at 1:00 pm

One challenge in SoC projects is maintaining consistency between the specification, design and documentation throughout the product lifecycle. Imagine the chaos if your specification for power is 300 mW, the design actually comes in at 350 mW and the documentation promises 250 mW. Traditionally the design and documentation processes are separate and unrelated, creating opportunities for discontinuities. One company that has decided to focus on keeping specification, design and documentation consistent is Magillem, and they delivered a capability called ISDD (Integrated Specification, Design & Documentation) back in May 2014.

In this methodology you use a Magillem front-end capture tool to describe the common parts of the design (instances, configurability, interfaces, hierarchy, partitions, hardware-software interfaces), and all of the representations are then generated automatically from that single source. There is an Accellera standard called IP-XACT (IEEE 1685) that uses an XML schema to define IP blocks. What the Magillem approach does is use that schema to link the specification of design elements captured in IP-XACT with a set of XML fragments for documentation content. As your design changes you can propagate updates to the associated documentation. Using standards like XML, you can also aggregate design data with any external product information.
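To illustrate the single-source idea, here is a small sketch (not Magillem's tooling; the XML is a simplified, hypothetical stand-in for a real IP-XACT component, which would carry IEEE 1685 namespaces and far more structure). A script reads the design description and regenerates a documentation fragment from it, so the documentation cannot drift from the captured specification.

```python
import xml.etree.ElementTree as ET

# Simplified, hypothetical IP-XACT-style component description.
component_xml = """
<component>
  <vendor>example.com</vendor>
  <library>peripherals</library>
  <name>uart_lite</name>
  <version>1.2</version>
  <description>Lightweight UART with APB interface</description>
  <busInterfaces>
    <busInterface><name>APB</name></busInterface>
    <busInterface><name>IRQ</name></busInterface>
  </busInterfaces>
</component>
"""

def doc_fragment(xml_text):
    """Build a documentation fragment from the design description, so the
    docs always reflect the current single-source data."""
    c = ET.fromstring(xml_text)
    vlnv = ":".join(c.findtext(tag, "") for tag in
                    ("vendor", "library", "name", "version"))
    interfaces = ", ".join(b.findtext("name", "") for b in c.iter("busInterface"))
    return "\n".join([f"Block {c.findtext('name')} ({vlnv})",
                      c.findtext("description", ""),
                      f"Interfaces: {interfaces}"])

print(doc_fragment(component_xml))
```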

Related – TLM Modeling Environment and Methodology Goes Commercial

So your entire SoC becomes a coherent set of hardware descriptions, software and documentation, all tied together through XML. Product Lifecycle Management (PLM) tools from companies like Dassault Systemes connect with the Magillem Content Platform providing you with:

  • IP catalog
  • Defect management
  • Revision control
  • Configuration control

Customers like ST Microelectronics are using this ISDD approach from Magillem in their SoC development process.

Magillem has a lot of experience with XML-based tools, and with their Content Assembly Platform they have served diverse markets such as legal and technology. They are also members of Accellera, Cadence Connections, OCP-IP, the ARM Connected Community, EDAC and ARTEMISIA.

Related – A Brief History of Magillem

Summary

It is now possible to maintain consistency between all of the representations in an SoC by using a single source that keeps everything updated. Hardware, firmware and documentation can all be connected, instead of separated and disjointed.


FD-SOI at Samsung
by Paul McLellan on 02-08-2015 at 7:00 am

Various foundries have made announcements about licensing FD-SOI technology from ST Microelectronics and then fallen quiet. GlobalFoundries made an announcement a couple of years ago. Samsung made an announcement just before DAC last year. But neither company has said much since. Of course the big noise at 14/16nm is all around FinFET, but the reality is that the number of designs moving to those process nodes is relatively small. Many designs are remaining at 28nm or larger processes (TSMC has re-architected their 45nm and 65nm processes to have ultra-low-power versions, for example). FD-SOI is seen as a good way to extend 28nm by giving it most of the characteristics of 20nm, and even lower power if body biasing is used, at a slightly reduced cost since it is cheaper to manufacture than bulk planar. In particular, I don't see IoT designs being 14/16nm SoCs; older processes, where the analog and RF are easier, are probably going to be the workhorses for those markets (I hesitate to call IoT a market, but it is certain that lots of devices will be connected to the internet in the coming years).

As if to emphasize this, Yongjoo Jeon titled his presentation 28FD-SOI Cost Effective Low Power For Long-lived 28nm.

Yongjoo started with a history of technology migration. Down to 130nm we scaled everything, including the gate-oxide thickness, but that ran out of steam. Since then we have had copper interconnect at 90nm, low-k dielectric at 65nm, stress engineering at 45nm and high-k metal gate (HKMG) at 32/28nm: basically the era of material innovation. At 20nm planar hit the gate-length scaling limit, and the two structural innovations going forward are FinFET (at 22nm if you are Intel, 16/14nm if you are not) and FD-SOI (initially as a retrofit to 28nm processes).

One of the challenges faced by FD-SOI has been the perception that it is only available from, and only used by, ST. Customers want alternative sources. Of course they need other things too, such as low cost per transistor, IP support, performance and, above all, low power. With Samsung, the world's #2 (or #3, depending on how you count) foundry, behind it, the process has a lot more credibility.

Last month the FD-SOI and RF-SOI Forum was held in Tokyo, and Samsung presented on FD-SOI. Another interesting-looking presentation is by Sony, who are using 28nm FD-SOI for RF design; their presentation is not yet available so I don't have details, but the fact that customers (as opposed to foundries) are starting to endorse the process is more good news for the ecosystem.


Samsung emphasized the cost aspects: 28nm FD-SOI has fewer process steps than bulk 28HKMG, and the BEOL (metal) is the same. The simpler process helps to offset the fact that the SOI substrate is more expensive than bulk.

Compared to 28HKMG the performance is better, the power is lower and the area is the same. And it is much better in all dimensions than either 45nm bulk or 28nm poly/SiON.


Samsung emphasized that the better short-channel control means a shorter channel length and more gate bias can be used. This gives two knobs to control performance and leakage: gate CD-biasing, which is physical, and body-biasing, which is electrical and can be used to reduce leakage when performance is not required or the circuit is idling. But where FD-SOI really shines is that the voltage can be reduced down to 0.63V with reasonable performance and much lower power.


Samsung completed a full qualification of the process in September 2014, and both the PMOS and NMOS transistors passed everything.

The business model on the IP side is that IP vendors will supply everything except the foundation libraries which will be delivered by Samsung themselves. I don’t know if any of this work is shared with ST or if Samsung have a completely separate ecosystem.


Has the Semiconductor Industry Gone Mad?
by Daniel Nenni on 02-07-2015 at 7:00 pm

The weather in Taiwan last week was very strange. It was so cold I tried to turn on the heat in my hotel room only to find out it was not possible. If you want more heat they bring a portable heater because who needs central heat in Hsinchu? Even stranger is all of the media hyperbole on the next process nodes:

Intel CFO: We’re so far ahead that Apple has no choice but to work with us

What he actually said is that Intel is so far ahead of the competition when it comes to PC processors that Apple (and just about every other PC maker) has no choice but to use Intel chips. True as that may be, I'm not sure reminding everyone that you have a monopoly on the PC business is such a great idea. In regard to Apple, it is hard to tell what they will do for semiconductors. At one time the media thought that Apple would no longer do business with their competitor (Samsung) after successfully moving to TSMC at 20nm. Now the media has "affirmed" that Apple is using Samsung 14nm exclusively for the iPhone and iPad this year:

Apple affirmed to return to Samsung for 14nm ‘A9’ chips for next iPhones, iPads

As I have said before, no one likes a monopoly, so I find it highly unlikely that Apple will use just one foundry if at all possible moving forward. Given that they make two different chips, one for the iPhone and a larger, more powerful one for the iPad, using two foundries is that much easier. You should also know that Samsung 14nm is LP (low power) while TSMC 16nm FF+ has a higher performance range, so making the A9 at Samsung and the A9x at TSMC is much more believable.

The other thing you should ask yourself is why Samsung and GlobalFoundries REALLY did the 14nm licensing deal last year. The answer is because customers "suggested" they do so. And by customers I mean the two largest wafer customers, which are of course Apple and Qualcomm. I remember Paul McLellan and me being briefed on this last spring and thinking to myself, "Has the semiconductor industry gone completely mad?"

Samsung ♥ GLOBALFOUNDRIES

In a recent conference call TSMC called GlobalFoundries "Samsung's accessory," which was funny but also has a much deeper meaning. Given the choice of a single manufacturing source for a specific process node or a source with an "accessory," Apple or Qualcomm will choose the latter, which is what they have done at 14nm. There have been no announcements as to whether Samsung and GlobalFoundries will again work together (copy exact) on 10nm, but if Apple and QCOM say so they will, absolutely. You have to follow the money trail in the fabless semiconductor ecosystem for sure.

The other question I asked myself at the end of this trip was: “Self, how long until UMC becomes TSMC’s accessory?” And if this trend catches on who will be Intel’s foundry accessory?


Product Review: Bose – SoundTrue Around-Ear Headphones
by Daniel Payne on 02-07-2015 at 7:00 am

My old headphones with microphone lost a channel, so it was time to upgrade, and I went shopping for something that had high fidelity and fit over my ears instead of on my ears. After some online research I opted for the Bose headphones, because that brand has been around for decades, they offer many models to choose from, and they are readily found at nearby stores like BestBuy. The model I bought is called the SoundTrue Around-Ear headphones.

Pros
Comfortable enough for all-day sessions, while keeping sounds outside to a minimum. Faithful sound.

Cons
Using the microphone to speak with around-ear headphones sounds very funny because your own voice is muffled, so if you do a lot of talking with apps like Skype then buy an on-ear headset instead.

Summary
At $179.00 these headphones live up to the Bose reputation for faithfully producing sound, without over-emphasizing the bass like so many other brands do these days.

Commentary
At my local BestBuy store in Tualatin, Oregon I listened to several brands of headphones:

  • Bose QuietComfort Acoustic Noise Canceling Headphones
  • Beats by Dr. Dre
  • Skullcandy
  • Bose SoundTrue Around-Ear

There certainly are dozens of makes and models to choose from when it comes to headphones, so a lot of it for me came down to several factors:

  • Color – I prefer white
  • Fit, comfort
  • Brand reputation
  • Sound fidelity – for classical music and movies on Netflix
  • Build quality
  • Price – under $200
  • Features – microphone

I was quite used to on-ear headsets, which sound great when making Skype calls because you can hear your own voice in a natural fashion. The headphones by Dr. Dre and Skullcandy were OK, but they seemed aimed at the youth market where the bass is pumped up, while the Bose just felt, sounded and looked right to my aesthetic taste. My Acura RL also has a Bose sound system, so I was probably already a loyal Bose fan when looking for headphones.

When I first placed the headphones on in the store I was impressed by how much of the ambient store sound was instantly cut off, letting me really concentrate on the demo tracks playing. There's a volume control built into the cord, so I could easily adjust by clicking up or down. The Bose model with noise canceling was high-tech and attractive to me, but I didn't really need that feature for my office use. Once I got home and listened to classical music with a symphony orchestra I was impressed, because I could hear each of the instrument sections playing clearly; there were no muddled frequencies or exaggerated tones.

I’ve enjoyed listening with these headphones connected to my MacBook Pro, iPad and Samsung Galaxy Note 2 devices. I can hear all of the conversations and directional sounds from movies, each instrument in classical music, all while keeping sounds in the room blocked out. It’s a real immersive experience for me.

Making a Skype or VOIP phone call was a surprise to me with this over-ear unit, as all of a sudden my voice was now being muffled by the headset itself, so I don’t recommend trying to use any microphone app with these.

Bose does provide a padded carrying case and a detachable cable with this product. My son has literally worn out a gaming headset before; it didn't have a detachable cable, and the cable was the first thing to break for him, although it wasn't the Bose brand. The cable is 66″ long, which is just the right length when I watch Netflix on my laptop or listen to music.


Why Would You Leave Yahoo to Go Into EDA?
by Paul McLellan on 02-06-2015 at 7:00 am

I sat down this afternoon with Peter Theunis, the CTO of Methodics. Conveniently their office is about a 15 minute walk from where I live so we could chat face to face.

Peter started programming when he was 8 and his first “product” was a weather system for orchards where sensors in the orchards would send information back to a weather program that would advise the farmers when and what to spray the orchards with.

Peter came from Belgium as an exchange student at Berkeley, got a job at a job-fair and ended up staying in the US (I came for a couple of years, over 30 years ago, it happens to many of us). He worked for a couple of start-ups that didn’t start and then decided he had better join a larger company and get his green card. So he worked for a Telco in San Diego for a couple of years until they outsourced all their engineering to Shenzhen.

In 2006 he joined Yahoo and worked there for 8 years in a number of different jobs, including 2 years working for Marissa Mayer when she joined from Google to become CEO. One thing that he found there is that nothing off the shelf works for Yahoo. Vendors would come by, they would be asked "will it scale to 600M people," and the vendor would have to admit that it wouldn't really. He also learned the importance of requirements and, for a company like Yahoo, latency, throughput, scalability and capacity.

He had known Simon, the CEO of MethodICs, for a while and even advised them on occasion. Simon caught him at the right moment when he asked him to give MethodICs a shot and come and be CTO. Since Peter (well, his wife) was pregnant, the attractiveness of walking to work rather than spending 3 hours a day on Yahoo’s buses from San Francisco was also a big plus. He joined Methodics in June last year.

The prospect of making more of a difference was also attractive. I asked him what he meant. “The purpose of every major site on the internet, such as Yahoo, is to get you to click on ads. That is the metric. How many ads got clicked on.” At MethodICs, while it is not curing cancer, it is enabling the semiconductor industry which in turn has made an enormous difference to the lives of almost everyone in the world over the last few decades.

When he arrived, he was surprised, shocked even, to see the way that the design pipeline works. It is very inefficient, with ad hoc processes involving things like email. It was obvious that MethodICs could add a huge amount of value by standardizing and automating processes, and moreover providing metrics that allow the processes to be improved over time.

He is overseeing the development of ProjectIC to make it even more secure and reliable, “enterprise grade”. It needs to be robust and easy to operate so that it can be installed, forgotten about and will run for a decade (as opposed to internet software where 2 years is the maximum life before upgrading it completely). Simon asked him to “think outside the box and bring some of the Yahoo approach to the table.”

One big area is testing. Yahoo has hundreds of packages to be deployed to, perhaps, a thousand hosts. Simultaneously. There is a need to test combinations, multiple versions, automate the testing. IP has a lot of the same issues with multiple versions, complex interactions and lots of views. Software testing has changed from throwing it over the wall to Q/A to where testing (unit testing) is part of the development methodology with analytics to improve the flow.

Since semiconductor companies have often grown through acquisition, they often have multiple flows and, often, installation procedures that are not automated but are manual. It is not possible to change these flows or processes, especially in the middle of a project, so MethodICs needs to be flexible, to automate what can be automated but also support manual processes.

Peter has lots of ideas for the future as MethodICs brings on board a number of world-class computer scientists. We will all be watching.


Cadence 2014 Results
by Paul McLellan on 02-05-2015 at 7:01 pm

Cadence announced their Q4 and 2014 results yesterday. They are the only one of the big 3 EDA companies whose fiscal year is the calendar year, so Synopsys and Mentor will not be joining them in announcing results this week.

I won't go into the numbers in detail; you can find them all easily enough. But it is a pity that statements like this can't be backed up with the company names: "We had over 10 full flow digital wins in 2014. We also had segment share gains at several leading customers, including a global marquee company and most recently at a major fabless semiconductor company."

Lip-Bu said that Tempus/Voltus/Quantus had more than 50 tapeouts which is impressive considering how new these products are. Of course it is possible for Synopsys to claim the same tapeouts, perhaps. I’ll bet risk-averse design groups (that would be all of them) run PrimeTime as well as Tempus, for example.

Palladium emulation seems to be doing well. This is a particularly sensitive area since margins on a product like Palladium are lower than on a pure software product and the market has recently become very competitive. For once we have a name, and a good one: MediaTek doubled their Palladium capacity last quarter. Cadence has its next-generation emulator (presumably still called Palladium) shipping in the second half of this year. In the questions it was admitted that the product is late ("took a bit longer") but that it will have higher margins than the current product line.

IP grew 40% in 2014 vs 2013, led by DDR (the outgrowth of the Denali acquisition), including HiSilicon's world-first 16FF tapeout earlier in the year. Tensilica had the largest number of new licenses ever last quarter and also passed the 2B cores/year run rate in production. Then in the questions it became clear Cadence is having some major success at 10nm in IP too: "In the most advanced 10-nanometer we won and in fact the largest IP contract to-date and with one of our largest top customer."

You might remember that the Virtuoso product line got split into the original version for mature nodes and Virtuoso for advanced nodes used for 20nm and below (with support for double patterning, FinFET etc). Over 40 customers are using the advanced node version.

For a number of reasons, Palladium just being one of them, margins were guided down in the first half of next year. Other factors called out in the questions: social security payments run in the first half of the year until they meet the cap; vacation is mostly taken in the second half of the year. They also said that margins are under pressure from their investment in technology and in customer support for their major new customer wins. They are forecasting a 6% top line growth for 2015, slightly faster than the overall EDA industry and faster than the world economy.

Lip-Bu was asked about the extra investment and margins, and whether margins would expand in the future as the investment started to earn a return. Lip-Bu gave one of his inscrutable answers from which it is not clear whether the answer is yes or no: "First of all, I think the investments are highly leveragable across customers. That will provide the leadership and market share as a key driver for success for our business going forward. And I am confident if we execute and then successful in the long-term and are proliferating our product more in the winning customer, marquee customer and also the next level of customer will be also embraced, that in the end, it will benefit in our shareholders."

Another interesting comment in the questions was about 10nm being a long-lived node: "clearly the 10-nanometer, the main reason as it is going to be a long node because 7 and 5 is unclear and EUV timing is unclear."

SeekingAlpha transcript of the call is here.


Google Glass: The Second Coming and a Brief History
by Majeed Ahmad on 02-05-2015 at 4:00 pm

Google Glass is dead; long live Google Glass. That's how Ori Inbar described the recent closure of the Google Glass beta-test project in his report titled "Smart Glasses Market 2015: Towards 1 Billion Shipments," released by www.augmentedreality.org.

Inbar says that Google, a smart glass pioneer, not only compromised its status in the promising wearable devices market by abruptly ending the program, but also hurt its Glass Certified Partners. He adds that despite privacy and cultural concerns, the project has raised public awareness about smart glasses to an unprecedented level.

Inbar contends that Google Glass is the best thing that has happened to augmented reality since the iPhone. However, he acknowledges that it also drew harsh criticism from technophobes, ethics pundits, privacy defenders and fashionistas alike, and that the trade press picked up on that negative buzz and readily crafted catchphrases like 'Glasshole' and 'Glass Half Empty'.

And now that Tony Fadell, the iPod pioneer and CEO of Nest, has taken charge of the Google Glass work, Inbar quotes reliable sources on the launch of the next version of Google Glass later this year.

Google Glass History

In 2010, the Internet giant's top-secret projects lab Google X began the development of this camera- and Internet-equipped wearable computer. The project was announced on Google+ by Babak Parviz, an electrical engineer who specialized in the interface between biology and technology and had worked on putting displays into contact lenses.


Parviz made the first Glass demo along with Sergey Brin
(Photo courtesy of Entrepreneur)

Steve Lee, a veteran product manager who specialized in location and mapping technologies, was also involved in the project’s initial development. Lee had earlier worked on Latitude, a Google app that enabled users to broadcast their GPS location to friends.


Lee was an early contributor to the Glass project
(Image credit: USA Today)

Thad Starner, a Georgia Tech professor who had been building and wearing head-mounted computers since the early 1990s, eventually became the technical lead for Project Glass. Back in 2003, Starner had shown Google founders Larry Page and Sergey Brin a clunky version of a wearable computer that he had built at Georgia Tech.


Starner claims to have coined the term augmented reality

Glass was a pet project of Google’s co-founder Sergey Brin. The Internet-connected eyewear was released for developers in February 2013 and became available for consumers later in 2014. Google Glass was, in fact, a do-everything computer and information portal that boasted augmented reality technology and epitomized the next wave of disruption in mobile computing.

It represented a new class of wearable and embedded computers that first absorbed the smartphone capabilities and then promised to offer even more. Many industry watchers called Glass the next iPhone. It was a great idea that encouraged people to imagine and to create innovative new applications and spawn the brand new wearable industry.

The technology behind the Glass was game-changing. However, on the other hand, Glass was a product ahead of its time. It was a mini-computer on your face with a social twist; consumers at large were wary of it being a somewhat creepy device that secretly searched information for its owners.

Moreover, the product design of Glass didn't go over well in the fashion-conscious consumer electronics world, where it was imperative for a personal device to look cool. The US$1,500-per-pair price tag of Glass didn't help either when it went on sale for just one day on April 15, 2014.

The second part of the article about Google Glass history is based on excerpts from the book The Next Web of 50 Billion Devices: Mobile Internet’s Past, Present and Future. The book is available in both paperback and e-book formats.