
The Infamous Intel FPGA Slide!
by Daniel Nenni on 03-11-2014 at 10:30 am

As I have mentioned before, I’m part of the Coleman Research Group so you can rent me by the hour to better understand the semiconductor industry. Most of the conversations are by phone but sometimes I do travel to the East Coast, Taiwan, Hong Kong, and China for face-to-face meetings. Generally the calls are the result of an event that needs further explanation or just a quarterly update. Again, as an active semiconductor professional I share my experiences, observations, and opinions so rarely will I agree with the analysts or journalists who rely on Google for information.

In 2003, Kevin Coleman founded Coleman Research to give investors a better way to access industry knowledge. Coleman helps thousands of clients get answers to their most critical questions, without leaving their desks. Rather than spending hours reading research reports, or traveling to meet people at conferences, we connect clients directly with industry experts, to hear immediate, relevant insights.


The Intel analyst meeting last November was full of surprises and resulted in a series of phone consultations. The Intel 14nm superior density claim slides were the most talked about and were absolutely crushed by TSMC, which I wrote about in “TSMC Responds to Intel 14nm Density Claims”. The other slide that caused a flurry of calls is the one above comparing Altera and Xilinx planar to FinFET. After talking to dozens of people (including current and former Altera, Intel, and Xilinx employees) I have concluded that this slide is an absolute fabrication. Get it? Fabrication? Hahahahaaaa….


I did a comparison of the Altera and Xilinx analyst meetings and found the slide above which supports my point. Clearly silicon does not lie so when the competing FPGA FinFET versions are released we will know for sure, but my bet is that Altera/Intel will lose this one. It also goes to my point that the transistor is not everything in modern semiconductor design and Intel’s claims of process superiority are a paper tiger when it comes to finished products.


There are thousands of FPGA and semiconductor process professionals reading SemiWiki so I’m hoping for a meaningful discussion in the comments section. If any of you would like to post a rebuttal blog I’m open to that as well. SemiWiki is an open forum for the greater good of the fabless semiconductor ecosystem, absolutely.

The most recent event that caused a flurry of calls was the JP Morgan Report: Meetings at MWC – Intel Mobile Effort Largely a Side Show, but Some Problems in Foundry a Concern. The press really had a field day with this one:

Some issues popping up with foundry business – we are concerned. Our checks indicate there have been some problems with Intel’s foundry efforts centered on design rules and service levels. It appears Intel is being inflexible on design rules and having trouble adapting to a service (foundry) model. Our J.P. Morgan Foundry analyst, Gokul Hariharan, wrote today that Altera has re-engaged TSMC.

This resulted in a handful of tabloid worthy articles taking the JP Morgan report completely out of context:

Altera to switch 14nm chip orders back to TSMC, says paper (Commercial Times, March 4; Steve Shen, DIGITIMES, Wednesday 5 March 2014)

While I appreciate the consulting business this generated I really do question the motives of Steve Shen. The first “Altera leaving Intel” rumor started HERE and I’m sure this won’t be the last but I’m still not buying it and neither should you.

More Articles by Daniel Nenni…..


Effective Verification Coverage through UVM & MDV
by Pawan Fangaria on 03-10-2014 at 5:00 pm

In the current semiconductor design landscape, the size and complexity of SoCs have grown enormously, supported by stable tools and technologies that can take care of integrating many IPs together. With that mammoth growth in designs, verification flows are evolving continuously to tackle verification challenges at various levels. Today, verification is not a single continuous flow; it is done from several different angles, including formal verification; h/w, s/w and h/w-s/w co-simulation; acceleration; emulation; assertion-based verification; and so on. VIPs (Verification IPs) for standard components in SoCs have come to the fore to ease the pressure on verification teams.

In such a scenario, it’s evident that SoC verification in every organization must be a continuous improvement and coverage-building process: coverage from the various verification engines is added up and accumulated; testcases, testbenches and verification plans are maintained and re-used within and across projects to the largest extent possible; interoperability is maintained between different verification engines; and the best quality is obtained with optimum utilization of resources.

Cadence has established a novel and effective verification methodology called MDV (Metric-Driven Verification). It uses the well-known standard UVM (Universal Verification Methodology), which is based on OVM (Open Verification Methodology), itself an evolution of eRM (e Reuse Methodology). UVM supports both e and SystemVerilog and thus enjoys widespread use in the semiconductor industry.

The MDV methodology advocates a planned, step-by-step approach that moves from test coverage and code coverage through advanced verification up to planned verification closure of design features. The Cadence Incisive Verification Kit codifies verification planning within the tool (test structures against specific features can be specified in an Excel spreadsheet). The vPlanner feature of Incisive vManager can also be used to identify abstract features and hierarchies of features that closely resemble the hierarchy of the specification. A vPlan can be hierarchical, integrating the vPlans of other features; it becomes executable when it can re-direct verification activities, is updated dynamically, and has its highest-priority items executed first. Coverage is accumulated progressively as the various verification engines execute.

Code coverage is RTL-centric and can take the form of block, expression, toggle or finite-state-machine coverage, and assertion-based functional coverage adds checks of intended behavior. Constrained-random testing, although it does not by itself keep track of what has been tested, is very effective at finding bugs. Tracking and visualizing what has been tested is the job of the coverage-driven approach; however, that can generate huge amounts of data, which limits usability and scalability. In the generic, plan-based MDV approach, the overall verification effort is organized in a verification plan with milestones set feature-wise or design-hierarchy-wise, capturing what has been tested through all of these means. Feature hierarchies are organized by the executable vPlan, which contains many-to-many relationships between features and tests and thereby also helps with traceability back to the specification.
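To make the plan-based idea concrete, here is a minimal, illustrative sketch in plain Python. It is not Cadence vManager code; the feature and test names are hypothetical. It only shows how an executable plan can hold many-to-many feature-to-test relationships and roll accumulated coverage up a feature hierarchy:

```python
# Illustrative sketch only: a toy "vPlan"-style coverage roll-up, not Cadence vManager.
from dataclasses import dataclass, field

@dataclass
class Feature:
    name: str
    tests: dict = field(default_factory=dict)      # test name -> coverage fraction (0.0..1.0)
    children: list = field(default_factory=list)   # sub-features

    def record(self, test: str, coverage: float):
        """Accumulate coverage from a regression run; keep the best result per test."""
        self.tests[test] = max(coverage, self.tests.get(test, 0.0))

    def coverage(self) -> float:
        """Roll up: average of this feature's best test coverage and its children's roll-ups."""
        local = [max(self.tests.values())] if self.tests else []
        subs = [child.coverage() for child in self.children]
        scores = local + subs
        return sum(scores) / len(scores) if scores else 0.0

# Hypothetical feature hierarchy resembling a spec outline
uart = Feature("uart", children=[Feature("tx_path"), Feature("rx_path")])
uart.children[0].record("smoke_tx", 0.60)
uart.children[0].record("random_tx", 0.85)      # one feature covered by several tests
uart.children[1].record("random_tx", 0.40)      # one test also covering another feature
print(f"uart coverage: {uart.coverage():.0%}")  # prints "uart coverage: 62%"
```

In a real flow the roll-up would of course come from the tool reading coverage databases, not from hand-entered numbers; the sketch only illustrates the many-to-many mapping and hierarchical accumulation described above.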

The Incisive Verification Kit includes several real-world testbench examples that enable design verification engineers to plan and embrace this verification methodology and to scale productivity through re-use of verification plans, testbenches and several other components. Above is a testbench architecture that shows how UVCs (Universal Verification Components) are hooked up to a UART DUT. There is also a cluster-level testbench for the kit’s APB system with major re-use of the serial interface and APB UVCs. This modular and layered approach creates a user-friendly plug-and-play environment where hardware and software verification components can easily be re-used from block to cluster to chip to system, and between multiple projects and platforms. Each of these components has its own pre-defined executable vPlan, which can be plugged into the master vPlan of the SoC. Cadence also has a rich portfolio of commercial VIPs for standard interfaces (e.g. USB, PCI Express, AXI, etc.).

The intensity of re-use in this verification platform plays a vital role in accelerating testbench development and scaling verification for large SoCs through a manageable, repeatable and closed-loop process. Not only are the verification components re-used, but also the vPlans, sequence libraries and register definitions.

MDV provides analysis of the coverage contribution of each regression run against a specific feature. The random seeds and tests that contribute most to overall coverage can be identified and run as often as possible to make simulations more effective.
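As a rough illustration of that idea, here is a short Python sketch (hypothetical run names and coverage bins, not a vManager feature) that ranks regression runs by the incremental coverage each one adds:

```python
# Illustrative only: greedy ranking of regression runs by incremental coverage,
# where each run (test name, random seed) maps to the set of coverage bins it hit.
runs = {                                   # hypothetical data
    ("random_tx", 1): {"b0", "b1", "b2"},
    ("random_tx", 7): {"b2", "b3"},
    ("smoke_tx", 1): {"b0"},
}

covered, ranking = set(), []
while True:
    # pick the run that adds the most not-yet-covered bins
    best = max(runs.items(), key=lambda kv: len(kv[1] - covered))
    gain = best[1] - covered
    if not gain:
        break
    covered |= gain
    ranking.append((best[0], len(gain)))

print(ranking)   # [(('random_tx', 1), 3), (('random_tx', 7), 1)]
```

The highest-ranked runs are then the ones worth giving the most simulation time in subsequent regressions.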

It’s interesting to note the results from a real Cadence customer project, where the project timeframe was reduced from 12 months to 5 months and more bugs were found with fewer resources.

The Incisive Verification Kit provides comprehensive hands-on workshops (covering both planning and execution techniques) for verification engineers getting up to speed with the MDV platform. It includes material for several design paradigms such as low power and mixed-signal. A subset of the workshops is available through Cadence online support in the Cadence Rapid Adoption Kits. Read the Cadence whitepaper for a more detailed description of this powerful and effective SoC verification methodology.

More Articles by Pawan Fangaria…..



Intelligent Sensors
by Paul McLellan on 03-10-2014 at 3:48 pm

Wearables are clearly one of the hot areas of the Internet of Things (IoT), and a big part of that market is sensors of one sort or another. Andes’ low-power microprocessors are a good fit for this market, which requires both 32-bit performance and ultra-low power. Performance is needed because an IoT device, by definition, has internet access in some way (perhaps piggybacking on your phone or wireless hub), which requires enough performance to run a network stack. Low power is required because many IoT applications need to last a very long time and, perhaps, never have their battery replaced.

Intelligent sensors are the fusion of a microprocessor, sensor, and communications interface into a single physical unit. Fueled by the automotive, wearables, medical equipment, and industrial automation markets, innovative new intelligent sensors are expected to grow at a CAGR of 9.8% from 2012-2018 with sales to reach $6.9 billion by 2018.

On March 22nd at 10am Pacific, Andes Technology is presenting a webinar entitled Intelligent Sensors. It will be presented by Dr. Emerson Hsiao, Director of Field Application Engineering.

Andes, one of EE Times’ Silicon 60: Hot Startups to Watch, is a leading provider of embedded microprocessor IP for intelligent sensor vendors and SoC developers. In this webinar, they will discuss the processing, power consumption, security, and cost requirements for integrating an embedded microprocessor core into an intelligent sensor. Andes provides a portfolio of cores ranging from a low-end 32-bit MCU positioned as an 8051 upgrade to a high-end 1 GHz 32-bit microprocessor, addressing all the needs of the intelligent sensor design community. Attendees will learn how to use the complete Andes embedded microprocessor design solution, including a graphical SW development tool (IDE), a rich operating system and application software stack, easy-to-use debugging tools, and convenient in-circuit evaluation and simulation platforms.

Attendees Will Learn:

  • The breadth of processing requirements for intelligent sensors in product applications ranging from wearables to automotive to medical equipment to industrial automation.
  • The tradeoffs of power and performance in selecting a microprocessor core.
  • How to implement security features in an embedded microprocessor core.
  • The complete embedded microprocessor design solution from Andes.

Dr. Hsiao has an extensive background in the ASIC and IP industry. Prior to Andes, he worked at Kilopass Technology as VP of Marketing. Before that he held the General Manager position at Faraday Technology USA, where he spent several years in field applications in various locations including Taiwan, Japan and the USA. Prior to Faraday, Dr. Hsiao was a visiting scholar at UC Santa Barbara. He received his Ph.D. in Electrical Engineering from National Taiwan University.

Register for the webinar here.


More articles by Paul McLellan…


EDAC Update: Elections, Kaufman and More
by Paul McLellan on 03-10-2014 at 3:24 pm

I wrote recently about the EDAC mixer in Mountain View. Due to college basketball there won’t be one in March, the next one will be in April. Details later in the month.

The EDA Consortium (EDAC) is seeking nominations for the Board of Directors for the two-year term beginning May 29, 2014. Voting member companies are entitled to nominate their CEO, president or COO to serve on the consortium’s Board of Directors. Nine of the nominees will be elected to serve a two-year term on the EDA Consortium Board. The deadline for nominations is Monday, March 31st at 5pm Pacific.

DATE is in Dresden in two weeks, from March 24-28th. You are cutting it fine but EDAC members get a discount on exhibit booth space at DATE (and at DAC which is in San Francisco the first week of June).

Meanwhile, here is some news from some of the groups that work on various issues of general interest to EDA companies.

The Emerging Companies Group promotes the interests of companies whose annual revenue is less than $5M. The Emerging Companies Committee is planning another exciting year of events with information especially useful for start-ups. Upcoming events include Marketing Best Practices, and more installments of the popular EDAC – Jim Hogan series, including Investments Fueled by the Upcoming Energy Boom in the spring and Alternative Sources of Funding for Emerging Companies scheduled for the fall. Previous installments of the EDAC – Jim Hogan series are available in the EDAC media library.

The Export and Government Relations Group supports the interests of EDA in matters pertaining to product export. In September 2013, the Export Committee, in cooperation with the Emerging Companies Committee, held an informative seminar on export regulations at the new EDAC offices in San Jose. Many smaller companies do not have the resources to track these complex and ever changing sets of export regulations, but ignorance of the laws is not an excuse. Non-compliance can be costly, with penalties ranging from significant fines to imprisonment. This valuable seminar was recorded, and is available to members on the EDAC web site. A publicly available preview is available; EDAC members can view the full presentation. Members can also visit the Export Compliance Update, which reviews key export information for EDA companies.

The License Management & Anti-Piracy Group provides a forum for members to identify and solve software licensing and piracy problems common to EDA vendors and their customers. The committee met with Flexera in October 2013 to discuss our thoughts on external license reclamation. (External License Reclamation is a tool or procedure that attempts to force an application program to give up its licenses by applying an external action like process suspension or stoppage.) After some discussion, we concluded this is a very risky practice, and in general is not supported by EDA vendors. This document details the issues surrounding external license reclamation, and why the LMA committee does not support the practice.

There is more at EDAC. Interoperability. Market Statistics Service. The Kaufman Award. All at the EDAC website here. EDAC members have full access to all the supporting materials, videos, presentations etc.

More articles by Paul McLellan…


WordPress and EDA Software, How Do They Compare?
by Daniel Payne on 03-09-2014 at 8:34 pm

I first started using WordPress in 2008 after having written my own Content Management System (CMS) to build and manage web sites. WordPress is the number one CMS in the world, is just 10 years old, and is used by over 70 million users. What got me thinking about WordPress and EDA software companies was a recent book by Scott Berkun, The Year Without Pants: WordPress.com and the Future of Work. In the book Scott talks about his experience working at Automattic (the company behind WordPress.com) as one of its first-ever engineering leads and contrasts it with working at Microsoft, a traditional software company.

Let’s start off with a quick comparison between WordPress and most EDA software companies:

                        WordPress               Most EDA Companies
  Pricing               Freemium                Expensive
  Licensing             Open source             Proprietary, leased software
  Users                 70+ million             About 200,000
  Release cycle         Every 2 weeks           Maybe twice per year
  Cloud-based           Yes                     Limited
  Schedules             None, really            Bureaucratic, elaborate
  Sales & marketing     Word of mouth           About 30% of total revenue
  Web volume            #19 in the world        #61,421 in the world (mentor.com)
  Adding features       29,827 plugins          Scripts: Tcl, Skill, C, API, etc.
  Customer support      Team Happiness          Thankless job
  Management decisions  Bottom up               Top down
  Employee locations    Remote                  Centralized offices
  Communication         IRC, Skype, blogs       Email
  Formal meetings       None                    Often, unproductive
  Rating, ranking       Not used                Annual spectacle

OK, I know that WordPress is not as technically sophisticated as a formal analysis tool in EDA, but it does contain 248,090 lines of code, and installing it involves some 1,100 files. Did you see how often they release a new version of WordPress? Every two weeks! Now that’s what I call being responsive to customers.

When I worked at Mentor Graphics in 2003 our product team worked on a FastSPICE circuit simulator and released a new version every month, or 12 times per year. I think that many EDA start-up companies release frequently because they are adding new features at a rapid rate, and customers are thrilled with the added automation. What tends to happen at large EDA companies is that products can get stuck into dependencies on other products or a framework, so that all products must be released at the same time instead of being autonomous.

If you need a feature not included in WordPress, you quickly search a centralized repository of plugins; in the EDA world we really don’t have a place to share our scripts, mostly because management doesn’t want to share any automation with a potential competitor. I figure that most large EDA users have a way to share their own scripts internally, but not with the world at any price.

At Automattic, all new employees spend their first month in Team Happiness, a customer-support role, in order to find out how real users are using WordPress, where they hit bugs, and where they struggle to get their work done. In the EDA world I recall how Model Technology (acquired by Mentor Graphics) assigned its developers to answer the customer support phones each week in order to stay close to the customer and better understand how ModelSim was actually being used and what users really wanted in their functional simulator.

What shocked me most about WordPress was that the founder Matt Mullenweg sets general directions about what should be done, but doesn’t order any person or team to go out and do it. Teams were formed, and they simply divided up the work and got it done, all without using formal scheduling or deadlines. I’m not sure how the chaos style used at WordPress would work in an EDA world, however I bet that most EDA startups follow a similar approach of informal project management.

Transparency was another huge part of the success at WordPress, where they use internal blogs to document what is happening within each team, so that the entire company is privy to your development progress and requests for help. The culture at Automattic is certainly unique and maybe the stodgy folks in EDA could loosen up a bit and get back to their start-up roots and become more productive at the same time.

Summary

I recommend Scott Berkun’s book about the inner workings of WordPress because it answered many of my questions about what the culture at WordPress is, how they got started, what they believe in, and how working at a software company can be fun and rewarding. The only negative point I could think of about Scott Berkun is that he used to work at Microsoft on the team that created Internet Explorer, the least standards-compliant web browser on the planet and one loathed by web developers.



Internet of Things and the Wearable Market
by Daniel Nenni on 03-09-2014 at 8:00 pm

My wife and I drove to Southern California last week in search of information on the wearable computing market. After stops in Irvine, San Diego, and some play time in La Jolla we returned in time for the CASPA Symposium: “The Wearable Future: Moving Beyond the Hype; the Search for the Holy Grail and Practical Use Cases”. CASPA is the Chinese American Semiconductor Professionals Association and their Spring Symposium was at the Intel HQ Auditorium in Santa Clara with a standing room only crowd.


The big attraction for me was the keynote speaker Dr. Reza Kazerounian, SVP & GM of the Microcontroller Business Unit at Atmel. I originally ran across his name during my research for “A Brief History of STMicroelectronics” (the piece I did last week), as he was CEO of ST Americas from 2000 to 2009. It was truly an honor to hear Dr. Kazerounian speak.

The Internet of Things (IoT) is opening up fresh horizons for a new generation of intelligent systems that leverage contextual computing and sensing platforms, creating new markets. One of these platforms is the wearable category of devices, where the combination of sensors using low-power sensor fusion platforms, and short-range wireless connectivity, are giving rise to a variety of exciting end markets. From self-quantification to a variety of location-based applications, to remote health monitoring, wearables are becoming the harbinger for a whole host of services. With the right set of biometric sensors combined with local fast data analytics, wearables have the potential to revolutionize the health care industry. These devices can provide real-time data and contextual information along with all the health care requirements, improving the quality of care, and lowering the overall cost of care. This discussion will review the underlying technologies needed to make the “always-on health care revolution” happen, and explore how the future of medicine is being shaped by wearable devices.

Contextual computing is the key term here and, yes, I had to look it up. The application I’m most interested in, besides fitness, is security. I want my smartphone to know it is me holding it by my movements, voice, and usage. I remember back when my credit card kept getting security flagged when I started traveling internationally. Once Visa profiled my usage it never happened again. As the smartphone takes over our financial lives, security will be even more critical, absolutely.

There are three keys to wearable-market silicon: low power, low cost, and small area. Billions of these devices will be deployed over the next 10 years, so the market will far exceed smartphones. The wearable market will be very fragmented, which opens up opportunities for entrepreneurs around the world. In fact, Dr. Kazerounian predicted that 15% of those devices will come from companies that are less than 3 years old, a prediction I agree with wholeheartedly.

One of the big challenges is low power connectivity. For now these devices will be talking to our smartphones and that means ultra-low power connectivity. Coincidentally Atmel just announced a new SmartConnect family that combines Atmel’s ultra-low power MCUs with its wireless solutions and complementary software into a single package:

“Ultra-low power wireless connectivity is critical for embedded applications in the era of the Internet of Things,” said Reza Kazerounian, Sr. Vice President and General Manager, Microcontroller Business Unit, Atmel Corporation. “Atmel’s SmartConnect technology is about simplifying the use of embedded wireless connectivity technologies and enabling users to accelerate their time-to-market. This simplicity allows all players to participate in the IoT market, fueling the innovation needed to accelerate adoption.”

Celebrating their 30th year, Atmel is an IoT market leader with an interesting history that you can read about HERE.

More Articles by Daniel Nenni…..



Semiconductor Strategy – From Productivity to Profitability
by Pawan Fangaria on 03-08-2014 at 8:30 am

The semiconductor industry may be the most challenged of all in terms of the cost of error: a delay of 3 months in a product development cycle can reduce revenue by about 27%, and a delay of 6 months can cut it almost in half. Competition is rife, pushing products to the next generation (with more functionality, lower power, higher performance, smaller footprints, better graphics and much more) at short intervals. This trend has clearly segmented the semiconductor market into design creators (IP vendors focusing on the most PPA-optimized IP) and design integrators (SoC vendors focusing on overall quality, cost and time-to-market), while fabs, with ever-shrinking technology nodes and growing complexity, remain concentrated among the few players with large capital investment capability.

Considering these challenges, semiconductor companies are focused on improving their processes, developing expertise to handle new complexities, increasing verification coverage to improve quality, and so on. While these initiatives definitely improve productivity, today’s business environment demands a greater focus on improving profitability. Higher costs and reduced profitability have led several organizations into mergers even though they were productive. It’s no secret that, to remain profitable, companies have to be close to their customers, collaborate, produce what customers require quickly and re-use whatever they can. But how?

Last week I was talking to Michael Munsey, Director of Semiconductor Strategy at Dassault Systemes. I was very impressed with the strategy and the broader solution framework they are putting in place to address productivity, and its transformation into profitability, in the semiconductor industry. Out of curiosity I also looked into Dassault’s own profitability, and that was impressive too: with ~$2.6B in revenue and ~31.6% operating margin (non-IFRS), it’s no wonder Forbes keeps the company on its list of the world’s most innovative companies!

The idea Dassault has put together seems genuinely innovative: it maps onto the current semiconductor market segmentation and aligns itself with it so that every stakeholder in the design chain (or rather, the complete product cycle) can gain maximum value from the product. How is that possible?

Considering this global business reality, Dassault’s 3DEXPERIENCE platform focuses on best-in-class design creation, flawless integration and manufacturing optimization, which together lead to profitability. In order to create the best devices, the ‘Product Engineering’ framework provides requirement specification management and New Product Introduction (NPI) to develop what the customer (or, more broadly, the market) needs, plus continuous defect tracking and resolution to remain relevant. ‘Design Engineering’ then manages IPs, their protection, the integration of IPs through collaboration between various teams, and verification of the complete design against the specifications. Finally, ‘Manufacturing Engineering’ works through device configurations, packaging simulations, and analyzing and optimizing yield. The overall platform is geared towards rapid design integration of the best devices through a highly collaborative environment for analysis, prototyping and optimization, followed by cost-optimized, risk-reduced manufacturing that can generate profit.

The 3DEXPERIENCE platform has four major solution spaces. Design Collaboration solution provides ‘Semiconductor Collaborative Design’ that includes issues & defects tracking and change management; requirement, traceability & test; and project & portfolio management. Enterprise IP Management provides ‘Semiconductor IP Management’ that includes issues, defects & change management; requirement, traceability & test; and project & portfolio management. Requirement Driven Verification provides ‘Semiconductor Verification & Validation’ that includes the whole Collaborative Design and IP management pieces along with the common issues, defects & change management; requirement, traceability & test; and project & portfolio management. Then there is Manufacturing Collaboration which provides ‘Semiconductor Manufacturing Configuration’ that includes Semiconductor Packaging Simulation, Semiconductor Manufacturing Process Improvement, project & portfolio management and requirements, traceability & test.

These solution pieces together form a system that keeps a check on wasted resources and effort, the cost of NPIs, quality processes, misalignments between product and requirements, and so on, in order to reduce re-spins, increase re-use and optimize resources, resulting in a profitable business. In future posts I will say more about these individual pieces and share stories about how these solutions together address global semiconductor design and manufacturing challenges. Stay tuned!

More Articles by Pawan Fangaria…..



IC Layout with Interactive or Batch DRC and LVS Results
by Daniel Payne on 03-07-2014 at 6:27 pm

IC designers have a long tradition of mixing and matching EDA tools from multiple vendors, mostly because they want best-in-class tools, or because they purchased each EDA tool at a different time and asked for them to work together. Such is the case with IC layout tools from Silvaco and DRC/LVS tools from Mentor Graphics. Pawan Fangaria blogged about the Results Viewing Environment (RVE) of Calibre back in October 2013. Today I learned that the IC layout tool from Silvaco is called Expert, and that it has an integration with Calibre RVE.

Continue reading “IC Layout with Interactive or Batch DRC and LVS Results”


Key Ingredients for ESL Power Modeling, Simulation, Analysis and Optimizations
by Daniel Payne on 03-07-2014 at 6:00 pm

There’s a French EDA company named DOCEA Power that is uniquely focused on power analysis at the ESL level, and I had a chance to interview Ridha Hamza to get new insight on ESL design challenges and their approach. Ridha started out doing SRAM design at STMicroelectronics in the 1990s, moved into the emerging field of MEMS, and joined DOCEA Power four years ago.


Continue reading “Key Ingredients for ESL Power Modeling, Simulation, Analysis and Optimizations”


On-Chip Clock Generation beyond Phase Locked Loop
by Daniel Nenni on 03-07-2014 at 8:00 am

Inside today’s typical VLSI system there are millions of electrical signals. They make the system do what it is designed to do. Among them, the most important is the clock signal. From an operational perspective, the clock is the timekeeper of the electrical world inside the chip or system. From a structural perspective, the clock generator is the heart of the chip, the clock signal is the blood, and the clock distribution network is the vessels.

Timekeepers have played, and still play, a critical role in human life. History shows that the progressive advancement of our civilization was made possible only by the steady refinement of the timekeeper: the clock and the watch. The same is true for VLSI systems. The purpose of a VLSI system is to process information, and the efficiency of that task depends strongly on the time scale used. This time scale is controlled by the clock signal. It has two key aspects: its size (the absolute clock frequency) and its resolution (the capability of differentiating nearby frequencies, or the frequency and time granularity). In addition, another characteristic is also important: the speed at which the time scale can be switched from one setting to another (the speed of clock frequency switching). The Phase Locked Loop (PLL) has traditionally been used as the on-chip generator of the clock signal. It is a beautiful blend of digital and analog circuits in one piece of hardware: from a reference time scale, it can generate other time scales. However, because of its compare-then-correct feedback mechanism, the choice of time scales that can be produced is limited. Equally harsh is the problem that changing the time scale (frequency switching in a PLL) takes a very long time. Although the PLL has played a key role in making today’s VLSI systems magnificent, these two problems limit the chip architect’s capability for further innovation.

The source of the problem is the fact that electrical circuits are not born for handling time, but magnitude (or level). Inside a circuit, information is represented by the medium of electrons. It is encoded in the magnitude of electron flow, using proportional (analog) or binary (digital) relationships. Time is created indirectly, through a voltage level crossing a predetermined threshold. Therefore, the task of building a timekeeper inside a VLSI system is inherently difficult, since it relates two basic properties of the universe: time and force. In implementation, another fact has made the task of creating time inside a circuit even more challenging: since the first day the clock signal was introduced into VLSI design, it has been assumed that all the pulses inside a particular clock pulse train have to be equal in length. This presupposition has limited our options in the creation of timekeeper circuits. Consequently, our current solution is not completely satisfactory: 1) we cannot generate any arbitrary frequency we want, and 2) we cannot switch frequencies quickly.

Since the timekeeper controls the VLSI system’s operating pace through the clock-driving circuit, a fundamental question can be asked: do all the pulses in a clock pulse train have to be equal in length? This question is equivalent to asking: what does clock frequency really mean? In 2008 a novel concept, Time-Average-Frequency, was introduced. It removes the constraint that all pulses (or clock cycles) must be equal in length. It is based on the understanding that clock frequency indicates the number of operations executed (or events that happen) within a time window of one second. As long as the specified number of operations is completed successfully in the specified time window (such as one billion operations within one second for a 1 GHz CPU), the system does not care how each individual operation is carried out. This breakthrough in the clock frequency concept is crucial: it frees our hands in building the clockwork.
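To see how this can work numerically, here is a minimal Python sketch (my own illustration of the Time-Average-Frequency idea, not code from the book, with a hypothetical 100 ps base time unit): a pulse train is built from only two period lengths, TA and TB, yet its time-average frequency can be set almost arbitrarily and changed immediately, because no feedback loop has to settle.

```python
# Illustrative sketch of Time-Average-Frequency: interleave two period lengths
# TA = i*DELTA and TB = (i+1)*DELTA so their weighted average matches a requested period.
DELTA = 0.1e-9          # hypothetical base time unit (100 ps)

def taf_periods(f_target_hz, n_cycles):
    """Yield n_cycles periods, each either TA or TB, averaging (over many cycles) to 1/f_target_hz."""
    t_target = 1.0 / f_target_hz
    i = int(t_target / DELTA)          # shorter period is i*DELTA
    r = t_target / DELTA - i           # fraction of cycles that must use the longer period
    acc = 0.0
    for _ in range(n_cycles):
        acc += r
        if acc >= 1.0:                 # accumulator carry-out selects the longer period
            acc -= 1.0
            yield (i + 1) * DELTA
        else:
            yield i * DELTA

periods = list(taf_periods(313e6, 10_000))   # an 'arbitrary' target frequency: 313 MHz
print(f"average frequency: {len(periods) / sum(periods) / 1e6:.2f} MHz")
# Switching frequency is just a matter of calling taf_periods() with a new target;
# there is no loop settling time as in a PLL.
```

The trade-off, not shown here, is that the instantaneous period now alternates between the two values, a deterministic jitter that the system and timing analysis must be able to tolerate.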


Figure: Clock as a technology.

From the day of Robert Noyce and Jack Kilby’s first integrated circuit in 1959 to today’s systems with billions of transistors on a chip, the art of integrated circuit design can be roughly divided into three key areas: processor technology, memory technology and analog technology. Processor technology focuses its attention on how to build efficient circuits to process information; using transistors to perform logic and arithmetic operations with high efficiency is its highest priority. Memory technology is the study of storing information in circuits; its aim is to store and retrieve information in large amounts and at high speed. Analog technology focuses its effort on circuits that interface the electrical system with humans. Inside a VLSI system, information is processed in binary fashion; once outside, it is used by us in proportional style, since our five senses are built upon proportional relationships. Analog circuits are the bridge in between. During the past several decades, the advancements in these three circuit technologies have made today’s VLSI systems very powerful. However, the driver of these three technologies, the clock, has not seen fundamental improvement. The time scale is not flexible: the available clock frequencies are limited, and switching between frequencies is slow.

To further improve VLSI systems’ information-processing efficiency, the next opportunity lies in the method of clocking: 1) we need a flexible on-chip clock source, and 2) it needs to be available to chip designers at a reasonable cost. Now is the time for the clock to be recognized as a technology in its own right, as illustrated in the figure above. In this field there are four key issues: high clock frequency, low noise, arbitrary frequency generation and fast switching. The first two have been studied intensively by researchers; the last two have not drawn much attention, for two reasons. The first is that arbitrary frequency generation and fast frequency switching are difficult to achieve, especially simultaneously (in contrast, arbitrary voltage generation and fast voltage switching are easy to do). The second is that chip and system architects have not asked for them, so circuit designers have had no motivation. These two factors are cause and effect of each other: the system architect does not know that it can be done; the circuit designer does not know that it is needed. The goal of this article is to break this deadlock, to provide a vision that it can be done and that it is useful. The aim of Time-Average-Frequency is to provide the means of making a flexible on-chip clock source available to chip designers. This concept and technology is a link between circuit and system: a circuit-level enabler for system-level innovation.

The book “Nanometer Frequency Synthesis Beyond the Phase-Locked Loop” introduces a new way of thinking about the fundamental concept of clock frequency. It presents a new circuit architecture for frequency synthesis, Time-Average-Frequency based Direct Period Synthesis, and proposes a new circuit component, the Digital-to-Frequency Converter (DFC). Its influence can go beyond clock signal generation; it is a new frontier for electronic system design.

Nanometer Frequency Synthesis Beyond the Phase-Locked Loop (IEEE Press Series on Microelectronic Systems) by Liming Xiu
