

Is Facebook causing the end of happiness?
by Vivek Wadhwa on 04-15-2018 at 7:00 am

For the past 30 years, most of us around the globe have welcomed modern technology with few questions and fewer reservations. We have treated each new product as a “solution” and paid little attention to its accompanying problems.

The past six months, though, have seen a rapid change of opinion in the United States, as many in the technology elite have called GAFA (Google, Apple, Facebook, Amazon) and other tech giants to account. One of the most outspoken of Silicon Valley’s moguls, Roger McNamee, who was a mentor to Mark Zuckerberg, has published several articles highly critical of Facebook and launched a campaign, “Truth About Tech”, to educate the world about the evils of Big Tech and strategies for healthier interactions with technology.

I was not surprised by this turn of events, because I had begun work more than a year ago, with Alex Salkever, on a new book on precisely this topic: technology’s impacts on all of us. In the forthcoming Your Happiness Was Hacked, we were fashioning a narrative in which technology companies’ interactive products have been robbing us of fulfillment and connection by deliberately limiting our choices, using sophisticated manipulation to entice us into ever more consumption of their wares.

This may at first be counter-intuitive. The promise of the Internet, the smartphone, social media, and virtual and augmented realities is of enrichment and improvement of our lives by the additional choices they offer. But it is a mirage. Though the Internet may seem to offer an endless range of applications, content, and communication tools, the unhappy reality is that the options available are rapidly decreasing in utility and reward and increasingly herding us into habits of mindless consumption.

Witness what has become of Google. The search engine that originated as a means of finding the most relevant answers to search queries has degenerated into a massive online advertising medium that heavily prioritizes whatever others pay it to promote. A search on a mobile phone — say, for the best hotel in Mumbai — yields a page of results in which every one of the top 10 has either been paid for specifically or represents a giant media or hotel company.

Facebook too manipulates the information we would imagine it supplying unfiltered. Its deep detective work into our individual lives is its basis for manipulating our news feeds with the aim of maximizing our clicks and taps — without actually asking us whether we enjoy the endless array of pictures of our friends’ weddings. (We must, because we spend time there, right?)

Then there are the incessant beeps, noises, and interruptive alerts of WhatsApp. Intrusions of this type are now common to most communication applications, and they take a large toll on our well-being. They make it harder for us to do our jobs in a concentrated or thoughtful fashion. We accomplish less, which makes us miserable. Economists are even suggesting that the very technologies that we suppose make all of us so productive have, through the distractions they create, instead become responsible for a plateau in the growth of worker productivity in the past decade.

Yet we find ourselves unable to break the habit: we are afraid of missing out; we are expected to respond quickly to friends, relatives, and co-workers; and all of these technologies embed addictive characteristics — the most obvious being psychological rewards such as “likes” — that use the same techniques of beguilement as casinos’ computerized gambling machines do to ensnare us.

The raw truth is that smartphones and applications foster psychological addictions without consideration of the human cost or of design principles that might be less profitable for them but healthier for people in the long run.

How can we alter our technology lives such that we enjoy real choice, understand the trickery of enticements, and regain the agency necessary to human happiness? How can we make the tech companies back off and allow us to establish our own cadence in our use of their tech?

Pushed to operate ethically, smartphone makers could allow us, on their phones’ home screens, to select a “focus mode” that would disable all notifications and social media, even taking the additional step of reverting them to greyscale to reduce the attractiveness of their screens’ brightly colored notification bubbles. YouTube could ask us whether we wish to always play another video automatically when we first sign up for the service, in order to help us avert binge watching.

As for our own defenses, we will need to work hard to insert pauses into periods of thoughtless enthrallment. Turning off most applications’ alerts, checking e-mail only in batches at designated times, and using our phones to call family and friends and talk to them rather than sending them incessant smartphone messages would help most of us make a great start on rejoining the living.

For more, you can preorder my next book, Your Happiness Was Hacked, which will show you how you can take control and live a more balanced technology life.

This article is one in a series related to the 10th Global Peter Drucker Forum, with the theme “management. the human dimension”, taking place on November 29 & 30, 2018 in Vienna, Austria. #GPDF18



Enabling A Data Driven Economy
by Alex Tan on 04-13-2018 at 12:00 pm


The theme of this year's CDNLive Silicon Valley keynote, given by Cadence CEO Lip-Bu Tan, revolved around data and how it is driving Cadence to transition from System Design Enablement (SDE) to Data Driven Enablement (DDE). Before elaborating further, he noted some CDNLive conference statistics: 120 sessions, 84% presented by users, 1,200 registered attendees, and, for the first time, a two-day event.

Lip-Bu provided snapshots of data growth hitting a volume of 5-8 zettabytes. He indicated that in a data-driven economic cycle, we need to understand and address how data gets created, stored, transmitted and analyzed, as illustrated in Figure 1.

Admitting to having a more financially inclined perspective, Lip-Bu shared his upbeat take on how data has driven the economic cycle. Last year's growth was 22%, crossing the $400 billion mark for the first time, and he noted encouraging strength going forward. The enablers, which he coined 'key waves', are mobile, automotive, machine learning, edge computing and the data center.

For 2018-2019, these segments will bring in growth ranging from a 4.2% CAGR in cellular (5G, 3D sensing) and an 11.4% CAGR in automotive (ADAS, infotainment, etc.) to a 13.1% CAGR in IoT (with distributed edge clouds closer to the user, shorter latency and a different way to compute). He also reiterated growth coming from hyperscale web services and data centers, and shared research data on AI-related Venture Capital (VC) funding topping $14 billion across 1,600 deals, with a projected 42% CAGR for Deep Learning chipsets over the 2016-2025 period.
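For reference, a compound annual growth rate (CAGR) of r sustained over n years multiplies the starting value by (1+r)^n. A minimal sketch of that arithmetic, applied to the deep-learning chipset figure quoted above (the function is generic; it is not tied to any of the cited research):

```python
def compound(start, rate, years):
    """Grow a starting value at a fixed annual rate (CAGR) for `years` years."""
    return start * (1 + rate) ** years

# A 42% CAGR over 2016-2025 (nine growth years) is roughly a 23x expansion:
print(compound(1.0, 0.42, 9))  # ~23.5
```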

The opportunities are there, spanning from sensors and devices feeding data to the intelligent edge (where protocol translation and device management take place), through ML and neuromorphic processing, and eventually ending in the cloud. On the horizon, Lip-Bu pointed to emerging disruptive technologies such as silicon photonics, neuromorphic computing, quantum computing, nanotubes, and blockchain as future growth drivers. There will be a push toward 400Gb/s and 800Gb/s interface speeds; augmenting quantum computing with AI to gain stability and performance; neuromorphic applications in ultra-low-power environments; blockchain-assisted transactions through semiconductors/GPUs; and brain-related applications (wake or sleep controls).

AI and Hardware Design
Halfway into his presentation, Lip-Bu introduced two guest speakers addressing hardware solutions optimized for machine learning and data analytics applications. The first was Rodrigo Liang, a former Oracle SPARC hardware executive turned CEO, who had just received a first round of funding for his startup, SambaNova Systems. “Semi(conductor) is capital intensive effort”, he said. He believes semiconductors (silicon) are at the center of AI: “We need to consider the software stack, what the software wants”.

Rodrigo replayed the evolution of computing from scale-up, to scale-out, and eventually to AI-oriented computing. Each domain has its own unique bottleneck to tackle: from CPU instruction sets and network latency or bandwidth to today's memory bandwidth and capacity limits. Furthermore, each is also characterized by its own business challenges (such as power, cooling and cost constraints) and technology issues (memory; implementation platform: FPGA vs custom ASIC; new software development: neural network types).

The second guest speaker was Gopal Raghavan, CEO of Eta Compute, a startup founded in 2015 to deliver an enabling solution for intelligent IoT devices. He showcased an embedded platform for low-power, machine-learning audio/speech and visual/image recognition, allowing training to be done on the edge. This approach was intended to avoid the need to transmit high-volume data over power-intensive RF networks.

The hardware design was asynchronous and utilized several Cadence tools (JasperGold formal verification, Modus test insertion, Virtuoso ADE, Variety and Tempus Statistical). The power consumed by the demonstrated device was between 1.0 and 1.5 mW, and it needed only a low-cost 55nm process technology.

Enabling Cadence Solution Offering
In the second half of his talk, Lip-Bu showed more growth data on design starts by technology node, including 29.2% for 10nm and below. EDA is projected to grow at a 6.2% CAGR for 2017-2022 (up from 2.1% in 2011-2016). He noted that, as proof of Cadence's culture of innovation, more than 25 new organically developed tools were introduced between 2015 and 2017. He stressed three areas in moving from SDE to DDE, namely system integration, package and board, and CHIP (core EDA).


He announced an enhanced Virtuoso Design Platform to support advanced process nodes including 5nm (more coverage on this in my subsequent blog). He highlighted solution support for photonics (high-speed) and packaging (2.5D, 3D); the ongoing AI/ML augmentation of the implementation fabric (from design creation, physical implementation and electrical signoff to physical signoff); and coverage of mixed-signal, low-power and safety across the verification spectrum (from formal/static, simulation and emulation to prototyping). His take on the key technologies to address the uncertainty of design intent: parallelization, optimization, and ML/data analytics.

Closer to the design ecosystem, the IP segment shows 18% growth and tends to be more vertically focused (HPC, automotive, mobile/communications). Cadence has a comprehensive IP portfolio, including for advanced nodes, with further work in PCIe, USB and memory-related areas. Commenting on the recent nuSemi acquisition as enabling hyperscale data centers to address high-speed I/O connectivity needs, he alluded to the Star-IP notion as applied to Tensilica. He described the Tensilica processor as an ideal core to power various kinds of applications, such as the upcoming sentiment analysis, song analysis, etc. Its accompanying software stack includes the Xtensa Neural Network Compiler on top of the Xtensa C/C++ compiler.

In his closing remarks, Lip-Bu argued that the existing design ecosystem, comprising four spheres (foundry, IP, EDA/Cadence, customer), should now include additional, smaller spheres (software, channel, standards and compliance, design tools). It is a $400 billion IT industry with a new frontier of requirements. “It is not a sunset industry”, he quipped.



Intel Based FPGA Prototyping Webinar Replay
by Daniel Nenni on 04-13-2018 at 7:00 am

Due to the overwhelming response, here is the first part of the webinar that I did with S2C, along with a link to the replay. Richard Chang, Vice President of Engineering at S2C, did the technical part of the webinar. Richard has a Master's degree in Electrical Engineering from the University at Buffalo and more than 20 years of experience designing chips, including two US patents. Here is the agenda:


Achieve High-performance & High-throughput with Intel based FPGA Prototyping

FPGAs have been used for ASIC prototyping since their inception in the 1980s, allowing hardware and software designers to work in harmony developing, testing, and optimizing their products. High-density FPGAs, the Intel Stratix 10 and Arria 10, are available now, with Stratix 10 FPGAs delivering breakthrough advantages in performance, density, and system integration using a single logic die on the Intel 14nm Tri-Gate process. In this webinar, we will highlight the advantages of using Intel FPGAs for prototyping and walk through the implementation flow for both single and multi-FPGA boards.

  • Stratix 10 & Arria 10 FPGA Highlights
  • S2C S10 & A10 Prototyping Platforms
  • Single FPGA Design and Debug Flow
  • Multi-FPGA Design and Debug Flow
  • Demonstration – Implementing DDR4
  • Q&A

It really did bring me back to the good old Altera vs Xilinx days, when they used to beat each other up and provide customers with the most cost-competitive products. Based on what I have learned working with S2C over the past few months, Intel/Altera is now superior to Xilinx for FPGA prototyping, absolutely.

Webinar: Intel's latest Stratix-10 and Arria-10 FPGAs have considerably improved FPGA prototyping applications. Using the Intel 14nm process, Stratix-10 FPGA performance is more than twice the speed, and capacity more than five times larger, than the previous generation. Today, we will start the webinar with highlights of Stratix-10 and Arria-10 features for FPGA prototyping. We will then introduce the new S2C Intel-based product line. We will also illustrate the compile flows for both single and multi-FPGA designs. Finally, we will walk through a quick design implementation using a DDR4 reference design, followed by questions and answers.

    Intel is now shipping the production version of its flagship Stratix-10 2800 FPGA devices. The 2800 is about 3 times the density of the Stratix-5 generation, which makes design fitting and partitioning much easier. In addition, the Intel Stratix-10 FPGA uses a single logic die architecture versus multiple dies, which enables higher utilization and better performance. Intel is also planning to ship the Stratix-10 5500 device, which will almost double the capacity of the 2800. Additionally, the 5500 will have a package footprint that allows easy upgrading from the 2800.

    The Intel 14nm process also makes a big difference in performance. The maximum frequency has increased from 174MHz to 427MHz compared with the previous Stratix-5 generation. There is also significant improvement in Stratix-10 FPGA I/O and high-speed transceivers. LVDS is now fully configurable and can run at 1.6GHz, making pin-multiplexing between FPGAs more efficient. The high-speed transceivers can run at up to 58G, which is more than enough for most SoC prototyping applications such as video streaming and high-speed data transfer.
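    To put the pin-multiplexing point in perspective, a rough sketch of the arithmetic: when several logical signals share one physical LVDS pair, each design-clock cycle must serially transfer all of the multiplexed bits plus some synchronization overhead. The overhead figure below is an assumed illustrative value, not an S2C or Intel specification:

```python
def max_design_clock_mhz(lvds_rate_mhz, mux_ratio, sync_overhead_bits=2):
    """Estimate the achievable prototyping clock for time-multiplexed FPGA pins.

    Each design-clock cycle serially transfers `mux_ratio` bits plus a few
    synchronization bits (assumed value; varies by pin-mux implementation).
    """
    return lvds_rate_mhz / (mux_ratio + sync_overhead_bits)

# 1.6 GHz LVDS with 16:1 multiplexing between FPGAs:
print(f"{max_design_clock_mhz(1600, 16):.0f} MHz")  # ~89 MHz design clock
```

    The practical takeaway: doubling the LVDS rate roughly doubles either the achievable design clock or the number of signals each trace can carry.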

    The Arria-10 has most of the features of the Stratix-10 except that it is smaller. The largest Arria-10 device, the 1150, is about half the size of the Stratix-10 2800. With its attractive entry price point, the 1150 is suitable for a variety of small to mid-sized IoT/SoC applications. The Arria-10 has abundant internal memory and lots of DSP cores. In fact, the DSP cores are the industry's only hardened floating-point DSP blocks, making the Arria-10 the top choice for computation-intensive applications.

    Many of today's applications, such as AI, IoT, computer vision, and autonomous driving, require intensive software and firmware development, so having the ability to deploy an array of pre-silicon platforms for software development and compatibility testing dramatically increases the chance of a successful product launch. With affordable pricing, the Arria-10 1150 FPGA is the ideal candidate for those applications.

    Next I will introduce S2C's complete FPGA prototyping solution for Stratix-10 and Arria-10 FPGAs, but first a quick overview of S2C. S2C is a worldwide leader in providing both hardware and software solutions for FPGA prototyping. The 60+ member S2C team is fully dedicated to delivering FPGA prototyping solutions and has served over 400 customers in the past 15 years. S2C is headquartered in San Jose, CA with direct support centers in Shanghai, Beijing, Hsinchu, Shin-Yokohama and Seoul.

    S2C offers a wide range of Intel Stratix-10 and Arria-10 based FPGA prototyping hardware. For the S10 series, S2C offers Single, Dual, and Quad Prodigy Logic Modules that can go from 28M gates to 220M gates once the 5500 is available from Intel. The 2800 Dual and Single Prodigy Logic Modules are shipping now, and the 2800 Quad Prodigy Logic Module will be available in July. For small and medium-sized designs, the A10-1150 is a good alternative with two form factors to choose from: a standard expandable chassis with flexible I/O, or the PCIe finger form factor.

    The S10 and A10 Prodigy Logic Modules are S2C's 6th-generation FPGA prototyping systems; they are easy to expand for different applications, scale to different design sizes, and can be reused across projects. Next is a one-minute video that highlights the key features of the new S10 and A10 Prodigy Logic Module chassis system…

    Another key feature of S2C’s S10 and A10 FPGA prototyping systems is the many off-the-shelf daughter cards that are available. The use of daughter cards for FPGA prototyping is an important concept as it allows flexibility in case design specs change, expandability for design growth, and reusability for future designs.

    S2C provides 80 different types of memory, interface, and accessory cards for customers to quickly put together prototyping platforms that closely resemble final products. Some examples are ARM processors, PCIe, Ethernet, USB, DDR4, Flash memories, HDMI, and many others.

    S2C also provides daughter card design guidelines in case you prefer to develop your own application daughter cards, and daughter card design services if you choose not to build your own but still want a customized, application-specific daughter card.

    Next Richard will explain the FPGA prototyping software flows for Intel Stratix-10 and Arria-10 FPGAs….




HCM Is More Than Data Management
by Alex Tan on 04-12-2018 at 12:00 pm

While tracking Moore's Law has become a more expensive and difficult endeavor in HPC design, the mobile SOC design space is also increasingly heterogeneous and complex. Strict safety guidelines, such as ISO-26262 being imposed on automotive applications, further exacerbate the situation.

Looking closer at the design ecosystem, we could view the segregated landscape as being occupied by four key players: foundry, EDA, IP and design service providers. For example, the first ADAS computer vision SOC tapeout in February last year was the result of a collaboration among three IP companies (Dreamchip, ARM, Arteris), an EDA vendor (Cadence), a design service provider (INVECAS) and a foundry (GlobalFoundries). It is intuitively clear that collaboration should serve as a common denominator in order to ensure a seamless design implementation and a successful product rollout.

Design realization involves taking its formulation into different levels of abstraction, which then get optimized, verified, analyzed and aligned with foundry requirements. All of this implies frequent and occasionally massive data generation, in binary and ASCII formats alike. Key to a proper handshake among these ecosystem players is a formal process or policy for data and version control management. Last month, the use of ClioSoft's Hardware Configuration Management (HCM) platform, SOS7, as an embedded agent in various underlying point tools and flow interfaces was discussed in this blog. In this article, we will expand on its usage scenarios within the ecosystem.

Foundry Files
When a new or derivative process node is introduced, it is normally accompanied by the foundry's Process Design Kit (PDK). A PDK is a collection of foundry-specific data files and script files used with EDA tools in a chip design flow. The PDK's main components are models, symbols, technology files, parameterized cells (PCells), and rule files. Any process-related fine-tuning and controlled variations could result in an incremental release of the PDK. Timing models and their related parameters, on the other hand, are captured and released as SPICE models, as illustrated in Figure 2.

Once the PDK is passed to the foundry's customers, the chain reaction starts. The design and IP teams must decide which design steps in the flow need a respin. PDK changes usually impact routing vias and metal stacks, parasitic parameters or extraction setups, though they may or may not be relevant to the integrity of the standard cell library. SPICE model updates, however, would trigger a library recharacterization and a timing respin. With ClioSoft's HCM platform SOS7, each PDK update can be captured as a separate reference project, allowing easy retrieval for correlating with prior versions and tracking trade-offs in chosen design metrics. It is normal to expect between 4 and 6 iterations for a new process. For example, TSMC annually releases between 500 and 700 techfiles and 50 to 70 PDK updates across all supported processes.

Aside from the PDK, there is usually a validated reference flow accompanying each foundry process node rollout. A reference flow is adopted by the foundry and IP providers such as ARM to address critical design challenges associated with the new process technology and to pipe-clean the flow so it is ready for performance, power and area optimization.

Packaging
Other variations that might require creating different design implementation scenarios come from the IP and packaging selections. Depending on the market segment (automotive, IoT or mobile), form factor, power or thermal requirements may drive the package selection. Figure 5 shows various package technologies versus market segments. With stringent requirements such as ISO-26262 and the availability of advanced packaging analysis, it is becoming common to analyze the impact of the project's targeted packaging on the system's silicon. For example, FOWLP (Fan-Out Wafer-Level Packaging) is known for its low cost and high performance and is selected for low-power, high-performance mobile applications; a thermal-stress analysis can be performed to assess its reliability. Another example is System-in-Package (SiP), which is targeted at IoT wearables, RF and automotive. Each of these packaging analyses, such as 3D electromagnetic simulation and thermal and stress analysis, needs to be aligned and synchronized with the upstream system silicon. Since the SOS7 platform is methodology-agnostic, data management from this downstream analysis can be folded into the ecosystem.

IP Reuse and Management
We often discuss design reuse as it applies to both internal and third-party IPs. The steps in generating, maintaining and propagating design changes, as well as user experience as manifested in scripts, documents, or other file formats, are daunting, especially with increasingly shortened deliverable schedules to meet time-to-market. ClioSoft SOS7 addresses most of these requirements. It helps the design team streamline IP development and management, ensuring efficient collaboration while dealing with many design collaterals.

ClioSoft's SOS7 platform can be easily integrated with different applications. The development environment is separated from the release environment. Normal procedural access steps (such as checkout, modify, check-in) are enforced with either a corresponding locking mechanism or concurrent checkout (with merging capability), similar to Software Configuration Management (SCM) features. Several other neat SOS7 features include:

  • Customizable triggers as conditions, e.g., no check-in prior to a clean code-linting run (see the sketch after this list).
  • The use of symbolic labels/tags on revisions to communicate a revision's status.
  • Customizable composite objects, treating multiple files as a single object.
  • Sandboxes for local workspaces, while SOS7 monitors and sends periodic project-level updates.
  • Rewind and snapshot features that add the flexibility to move along progress or debug timelines.
  • Simplified IP release through a script, copying collaterals from the development to the release environment.
  • Tool-level 'diff'-ing of two revisions.
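As an illustration of the trigger concept, here is a minimal sketch of a pre-check-in hook that blocks a check-in until linting passes. It shows only the gating logic; it is not ClioSoft's actual trigger API, and `run_lint` is a stand-in for whatever linter a project mandates:

```python
import subprocess

def pre_checkin_trigger(files):
    """Hypothetical pre-check-in hook: veto the check-in if any file fails lint.

    A real SOS7 trigger would be registered through the tool's own hook
    mechanism; this sketch only illustrates the policy being enforced.
    """
    for path in files:
        # 'run_lint' is a placeholder for the project's mandated linter.
        result = subprocess.run(["run_lint", path], capture_output=True, text=True)
        if result.returncode != 0:
            print(f"check-in rejected: {path} failed lint:\n{result.stdout}")
            return False  # the tool would abort the check-in here
    return True  # all files clean; check-in proceeds
```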

Unlike Software Configuration Management (SCM), which may be confined to a distinct set of files and formats, Hardware Configuration Management (HCM) involves handling many design parts and formats. ClioSoft SOS7 offers an integrated development and management platform for not only design data but also design knowledge.

For more info on ClioSoft HCM SOS7 please check HERE

Also Read

ClioSoft and SemiWiki Winning

IoT SoCs Demand Good Data Management and Design Collaboration

ClioSoft’s designHUB Debut Well Received



A Turnkey Platform for High-Volume IoT
by Bernard Murphy on 04-12-2018 at 7:00 am

Innovation in smart homes, smart buildings, smart factories and many other contexts differentiates in sensing, in some cases actuation, certainly in implementation (low power, for example), and in rolling up data to the cloud. It isn't in the on-board CPU, and I doubt any of those entrepreneurs want to create their own Bluetooth or Wi-Fi (though they may want to optimize power or add some features). They mostly want the CPU and the communication to do their job as transparently as possible, with minimum design overhead and cost, requiring them only to add the special-sauce hardware and application software to differentiate their solution.

CEVA is already very well established in providing the communication part of this package. They are inside 9B+ devices shipped across multiple protocols, from BT/BLE and Wi-Fi up through all the cellular standards, and now offer 5G; specifically, Bluetooth and Wi-Fi solutions are provided through their RivieraWaves family. So it's probably safe to assume they have the communication part of the solution all wrapped up.

The standard choice for a CPU would of course be an ARM Cortex-M-class core: a safe bet with a big supporting ecosystem. But of course there's a cost in licensing and royalties; this hasn't historically been a big issue in premium devices, but it could be a problem in price-competitive IoT devices. Which is one reason that RISC-V is attracting a lot of interest across the price spectrum. Briefly, RISC-V is an open-source instruction-set architecture (ISA) developed originally at UC Berkeley and now available in open-source implementations from Berkeley, ETH Zurich & University of Bologna, and in commercial implementations from Codasip, Cortus, Andes and SiFive (among others).


CEVA already provides the communications part of the software stack to run on an ARM platform, but given this growing interest in RISC-V, they now also offer turnkey hardware platforms including the "Zero-riscy" open-source CPU implementation from ETH Zurich & University of Bologna, with FreeRTOS and communication stacks running on it (in this example for Wi-Fi). All you have to add is RF, sensor and peripheral interfaces, memory as needed, a real-time clock and your application software. All for a lower price than would be achievable with the standard platform, which, says Franz Dugand (Director of Sales and Marketing for Connectivity at CEVA), is one reason this platform is attracting a lot of attention.

According to Franz, the majority of customers they have today are using a more extended architecture in which this subsystem services all the communication functions, communicating through AXI with an application processor subsystem for more extensive processing. He tells me the Wi-Fi solution is scalable all the way from 802.11n up to ac/ax for big access points. What differs between these solutions is the implementation: clock frequency and memory size. The modems also differ from one Wi-Fi version to another, from pure hardware implementations through software-defined ones running on a DSP (naturally a core strength for CEVA).

The turnkey solution for Bluetooth looks quite similar, with support for both low-energy and dual-mode operation and proven with both RivieraWaves and 3rd-party RF.

CEVA provides FPGA-based evaluation boards hosting the Zero-riscy implementation of the RISC-V core along with, I believe, both Wi-Fi and BT/BLE options. They have run Wi-Fi benchmarking for their implementation against both Cortus and Cortex-M0-based solutions. Running each at the same clock frequency, they have been able to show comparable performance across all three implementations.

One interesting point Franz made – he said they don’t ship ARM cores with their reference boards for the same cost reasons that customers may encounter. An obvious question for me was why they don’t use one of the SoC FPGAs which includes a built-in ARM core. His answer was revealing – they use low-end FPGAs (Spartan) to keep the board cost down. Using an SoC FPGA like Zynq would dramatically increase this cost. Also the SoC versions tend to use big processors (Cortex-A), where the CEVA target applications will more commonly be based on Cortex-M-class processors. Now with a Zero-riscy core, all those problems go away; the reference board and software are truly turnkey, and at a much more accommodating price-point.

Franz wrapped up with a compelling datapoint on what it took for them to move from an ARM-based implementation to a RISC-V implementation:

  • 1 week to build a new hardware platform (replace the CPU, run simulation, generate new FPGA binary)
  • 1 week to port the software
  • 1 week of system level validation

Three weeks is not a big investment to enable cutting your costs. You can learn more about CEVA RISC-V-based solutions HERE. There’s also an interesting viewpoint on how RISC-V is changing the game for IoT HERE.



Is there anything in VLSI layout other than “pushing polygons”? (8)
by Dan Clein on 04-11-2018 at 12:00 pm

The year is 1999 and I decided it was time to try something else in layout. In 1989 in Israel I was part of the biggest chip in the world, the Motorola DSP9600. In 1998 in Canada I was part of the biggest synchronous DRAM in the world; it was time to try analog/mixed-signal/RF projects.

The opportunity came from PMC Sierra, which already had a digital team in Ottawa and wanted to build a Mixed Signal team here. Tad Kwasniewski, Bill Bereza and I started a new local group for Mixed Signal Design and Layout. It was back to hiring and training people, setting up a new group, etc.

The rest of PMC was on a 0.18-micron process and I needed to ensure that we had a proper setup, flow, tools, verification, etc. The success of our first local chip, the CRSU-10G (OC-192), was in jeopardy without a proper setup in a new 0.13-micron process, with a pretty aggressive project in mind for the year 2000. Having had a solid system for electromigration at MOSAID, thanks to Graham Allan, I knew what had to be done; I had shared the concept in the original 1999 version of my book. The complexity of electromigration is much bigger when you have to deal with huge buffers that drive 32 mA outputs. In this case the CML cells had source devices of 3200-micron width in 0.13 microns, so the metal widths and the via array counts were crucial to get right. Extracting information from the SPICE model files, I built a new 0.13-micron electromigration table; yes, the layout guy 😊.
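The principle behind such a table is worth spelling out: each metal layer has a maximum sustained current density from the foundry's reliability data, so the required wire width and via count scale linearly with the current a net carries. A minimal sketch of the lookup, with invented limits purely for illustration (real numbers come from the foundry data and the SPICE models):

```python
import math

# Illustrative electromigration limits only; real values are foundry data.
MAX_MA_PER_UM = {"metal1": 1.0, "metal2": 1.5}   # mA per micron of wire width
MAX_MA_PER_VIA = {"via1": 0.5}                   # mA per via cut

def min_wire_width_um(layer, current_ma):
    """Minimum metal width (um) keeping current density within the EM limit."""
    return current_ma / MAX_MA_PER_UM[layer]

def min_via_count(via_layer, current_ma):
    """Minimum number of cuts in a via array for the given current."""
    return math.ceil(current_ma / MAX_MA_PER_VIA[via_layer])

# The 32 mA output buffer mentioned above, routed on metal2 through via1:
print(min_wire_width_um("metal2", 32.0))  # ~21.3 um of metal width
print(min_via_count("via1", 32.0))        # 64 via cuts
```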

However, when I wanted to release it for use, Colin Harris advised me to get it approved by the reliability department. I shared the file with Khai Nguyen, who was our PhD in reliability. After a few simulations he accepted my table, and it became law for layout and design, but only for 0.13 microns. Two years later, when we had a major hiccup in another project, PMC decided to treat electromigration much more seriously. Jurgen Hissen, one of the mixed-signal designers with a flair for programming, wrote an entire software tool to check it, a novelty in 2002. Peter O'Shea, the new reliability PhD, prepared a training course and electromigration became law for all Mixed Signal Design and Layout. More about this in the next book revision coming this year.

New design types bring new challenges. In MOSAID my problem was verifying big memory chips with millions of devices; in PMC the output of Mixed Signal Layout was actually small. Our blocks had up to 100k devices, so a hierarchical tool had no specific value. We were using Diva for online checks during development and Calibre for final GDSII. This meant that for every process we needed to qualify and maintain two verification tools from two different vendors. I invited Carey Robertson and Dan Chapman to talk to me in Vancouver. I knew that Calibre was a two-licence software, one flat and one hierarchical, so I wanted to ask for a solution to my "IP level" verification. I explained to them that if they could cut a licence "limited by number of devices", they could sell even more Calibre, as people would replace DIVA. The calculation was simple: if the user can use a single verification deck for small blocks as well as for the full chip, they can sell more licences and we (the users) need to qualify/calibrate only one deck per process. In this case I even built their business case, so it was a no-brainer…

I had to reach Joseph Sawicki to get the ball rolling, but by the end of 2000 Calibre CB (Cell & Block), with various device limitations, was born. PMC Sierra ditched DIVA and the rest is history… How is that for a layout designer's extracurricular activity?

I always liked competition and I knew that if only one company had a tool for my world, they would stop improving. I volunteered to work with all EDA vendors on perfecting their tools. One of my old friends from MOSAID, Jean Crepeau, now at Synopsys, got me another interesting engagement. A team from Victor, NY needed help to improve COSMOS, the Virtuoso competitor. For many quarters they drove a 700 km round trip for one day in Ottawa. They brought with them a computer disk, which we hooked up to a PMC desktop, and we spent the time reviewing features, ideas, and the actions and results from the previous visit. We had a lot of fun and the software was ready for market release, but politics killed it and nobody was there to save even the team. All that knowledge was lost, and Synopsys invented the new IC Designer / Galaxy / etc.… A novel feature available in COSMOS in 2002 was resistance and capacitance calculated as you route a signal, in this case based on a table like the electromigration one. It knew how to calculate the number of vias and the metal width, table-based. I will talk more about this when we reach EAD software from Cadence.

In 2002 Cadence decided that their verification team working on DIVA, Assura, Vampire, etc. could benefit from some training on flows. I worked with Gregg to update the training material done for Mentor and worked with Beverly Higazi to organize the Cadence visit. This time the training was actually challenging: for 5 days in the same class I had PhDs in physics or software alongside Bachelors of Arts, people with 20 years' experience and new hires. Our material in this case was "too little" or "too much", so the training room got very tense on the first day. I agreed with Beverly to work "overtime" and clarify some physics notions and concepts for the people who had never had to deal with terms like resistance and capacitance. It was another success story, and I have only good memories and a lot of pictures from this experience. We all learnt a few new things. I found again that training may be one of my future hobbies, along with the need for simple explanations.

I participated in the Design Automation Conference (DAC) for 20 years and followed all their announcements for tutorials, workshops, etc. A new initiative came from Synopsys: Karen Bartleson, who was at that time a Marketing Director, was presenting at DAC a 2-hour tutorial called "Introduction to Chips and EDA for a non-technical audience." Her intention was to train people outside engineering, lawyers, financial people, etc., giving a 10,000-foot view of the VLSI industry and its relation to EDA organizations.

I always had to battle with support organizations to explain what we really do in VLSI, why I sometimes want to hire aggressive people with poor "soft skills", and how important "thinking outside the box" is for our success. Karen agreed to let me audit the training and later provided me with the original material on the condition that it be used internally and for free. I added a lot of company-specific information, like job descriptions and a few pages from my book, to make it more specific.

I used this course many times in the last 10 years to train HR, Finance, IT, purchasing, document control, etc. In one of the pilots, a financial controller who had worked in the VLSI industry for 20 years wrote to me:

“Thank you for your course, now I can explain to my family what the company is actually doing and what my personal contribution is to the company's success.”

The course is free but the rewards are “priceless”.

More to come, so stay tuned…

Dan Clein, view the rest of the series…



Embracing Architectural Intent
by Alex Tan on 04-10-2018 at 12:00 pm

During DVCon 2018 in San Jose, one widely covered topic was the necessity of describing and capturing intent. Defining our design intent up front is crucial to the overall success of a design implementation. This is not limited to applying process-level intent, such as verification intent via embedded assertions in code or optimization intent via constraints captured in SDC or UPF; it should also be done at the architectural level. With the shift-left trend being touted at many design forums, having an architectural intent should reduce the chance of ambiguity and potential failures.

Magillem is an EDA platform provider for configuring, integrating and verifying IPs. Its product, the Magillem Platform Design Solution, comprises four stages: specification, design, documentation and data analytics. The following table captures all the stages and their adjoining product solutions.


In the current design environment, system architects are prone to be disconnected from the design teams, as the facilities used to capture architectural intent are not integrated into the overall flow. For example, a hardware system description is pushed down to the design team to be recaptured through a logical implementation that involves changes which do not get fed back to the architect. This usually occurs when the system abstraction involves basic drawings without any semantic value attached to the elements.

Magillem has introduced Magillem Architecture Intent, or MAI for short, as a front-end design environment for system architecture inception. It bridges the gap between software intent and hardware refinement. The inputs can originate from either a software map (software intent flow) or a hardware map (block diagram). This product fits at the top of the Magillem tool chain as captured in the above table. MAI ensures the coherency of design views remains intact as further refinement takes place along the design implementation.

The main features for MAI include the following:

  • On the software side, it captures a given system from a software system map and generates an early hardware description. The existing product, called Magillem Registers Engine (MRE), provides an advanced register language that allows one to develop, elaborate, and compile register descriptions from various formats such as Excel, SystemRDL and CMSIS (Cortex Microcontroller Software Interface Standard), and generates IP-XACT format output (a toy flavor of this output is sketched after this list).
  • On the hardware side, it captures a given system from hardware block diagram, tracks and synchronizes any hardware refinement or software interface updates.
  • Allows exploration of the design schematic and traversal across hierarchies, with the added flexibility of filtering only the schematics of interest, auto-placing components, or graphically duplicating specific schematics.
  • Provides granular viewing of design entities (component instances, bus interfaces, connections, bus and interface parameters, etc.).
  • Generates an IP-XACT description of the captured design.
  • Includes APIs and an editor for design refinements.
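To make the register-to-IP-XACT step concrete, here is a toy sketch that emits a simplified IP-XACT-flavored register block from a small register map. Element names follow the spirit of IEEE 1685 (IP-XACT) but are heavily trimmed; Magillem's actual generated output is far richer than this:

```python
import xml.etree.ElementTree as ET

def registers_to_ipxact(regs):
    """Emit a simplified IP-XACT-style description of a register block.

    `regs` is a list of dicts such as {"name": ..., "offset": ..., "width": ...},
    as might be imported from an Excel sheet or a SystemRDL source.
    """
    block = ET.Element("addressBlock")
    for r in regs:
        reg = ET.SubElement(block, "register")
        ET.SubElement(reg, "name").text = r["name"]
        ET.SubElement(reg, "addressOffset").text = hex(r["offset"])
        ET.SubElement(reg, "size").text = str(r["width"])
    return ET.tostring(block, encoding="unicode")

# A two-register map, invented here purely for illustration:
print(registers_to_ipxact([
    {"name": "CTRL",   "offset": 0x00, "width": 32},
    {"name": "STATUS", "offset": 0x04, "width": 32},
]))
```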

There are many ways of capturing architectural intent. In a top-down design flow, an integrated capture facility such as MAI may prevent incoherencies in both design efforts and contents.

To learn more about MAI and the associated environment, please refer to these publication links: MAI – Datasheet; MAI – Press Release



Emulation Outside the Box
by Bernard Murphy on 04-10-2018 at 7:00 am

We all know the basic premise of emulation: hardware-assisted simulation running much faster than software-based simulation, with comparable accuracy for cycle-based 0/1 modeling, decently fast setup, and comparably fine-grained debug support. Pretty obvious value for running big jobs with long tests. But emulators tend to be pricey, so you really don't want them idling waiting for the next big job; how can you leverage that resource so it's delivering value round the clock? Certainly through virtualization, where multiple verification jobs can share a common emulation resource, but also by expanding use models beyond the standard "big verification" tasks.

Many of these applications are also familiar – ICE, simulation acceleration, power analysis and co-modeling with software for example. All good use-models in principle, but how are they working out in live projects? I talked with Frank Schirrmeister at DVCon last month to get insight into some customer applications.

I'll start with simulation acceleration (SA), a use-model where part of the verification task runs in simulation, part runs in emulation and the two parts communicate/synchronize as needed. MicroSemi described their use of this approach at a 2017 DAC session. They had an interesting challenge in moving to an SA configuration since packet-switching within their SoC is controlled by 3rd-party firmware which is often not available during the design phase. They work around this in their UVM testbench (TB) by randomizing packet-switching to cover as many switching scenarios as possible. With this setup, in SA they found a 20X speedup in run-times over pure simulation, not quite as exciting as they expected. They subsequently traced this problem to a high level of communication between the UVM TB and the emulation DUT. Putting a little work into optimizing randomization to lower communication boosted the gain to 40X. As they stepped up design size, they saw even bigger gains. The moral here is that SA can be a big win for simulation workloads if you're careful to manage communication overhead between the TB and the DUT (which of course should be transaction-based).
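The arithmetic behind that result is essentially Amdahl's law: the emulator accelerates only the DUT portion, so testbench time and TB-to-DUT synchronization quickly come to dominate. A toy model with illustrative time fractions (not MicroSemi's actual numbers):

```python
def sa_speedup(dut_fraction, emu_speedup, comm_fraction):
    """Overall simulation-acceleration speedup, Amdahl-style.

    dut_fraction:  share of pure-simulation runtime spent inside the DUT
    emu_speedup:   raw emulator speedup on the DUT portion
    comm_fraction: TB<->DUT communication cost, as a share of original runtime
    """
    tb_fraction = 1.0 - dut_fraction
    new_runtime = tb_fraction + dut_fraction / emu_speedup + comm_fraction
    return 1.0 / new_runtime

# With a chatty testbench the communication term caps the gain; halving it helps:
print(f"{sa_speedup(0.97, 1000, 0.03):.0f}x")   # ~16x
print(f"{sa_speedup(0.97, 1000, 0.015):.0f}x")  # ~22x
```

Note how the relative win grows with design size: the larger the DUT fraction, the more any residual communication dominates whatever runtime remains.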

Frank also mentioned another interesting acceleration application, reported by Infineon. Gate-level simulation is becoming very important for signoff in a number of areas, yet it is often timing-based, where emulation can't help. But emulation can help get through initialization, beyond which the interesting problems usually appear. Runs can hot-swap from an emulation start to timing-based simulation, greatly accelerating this signoff analysis. Infineon reported that this mixed flow reduced initialization run-times from 3 days to 45 minutes, an obvious win. I would imagine that even in simulation applications where you don't need timing but you do need 4-state modeling or simply interactive debug, a fast start through emulation would be equally valuable.

At an earlier DAC, Alex Starr of AMD talked about using emulation for power intent verification, by which he meant verifying that the design still works correctly as the design operates in or transitions through many power-state sequences (power-down, power-up, etc.). Alex made the point, common to many power-managed designs today, that verification has to consider all possible sources of power switching and DVFS – firmware-driven, software-driven and hardware-driven – requiring a very complex set of scenarios to be tested. What you want to watch out for is, for example, cases where the CPU gets stuck trying to communicate with a powered-down block, or cases where retention logic states are not correctly restored on power-on.

AMD still does some of this testing in simulation, but where emulation really shines is being able to run many passes through many power sequences where simulation might be limited practically to testing one power sequence. Why is this important? Power state sequencing and mission-mode functionality are largely independent, at least in principle, so to get to good coverage across a useful subset of the product of both you need to run many mission mode behaviors against many sequences. Alex stressed that being able to run an emulation model against an external C++ stimulus agent gave them the confidence they needed to a level of coverage which would have been impossible to reach in simulation.

In a different application, when we think of emulation support for firmware we think of development and debug, but Mellanox have used Palladium emulation to help them also profile firmware against the developing hardware. To enable this analysis, they captured instruction pointers, per processor, from their verification runs. Since cycle counts are easily recovered from the run data, they could then run a post process on the emulation results to build the kind of information we normally expect from code profiling (e.g. prof, gprof):

  • Map instruction addresses to C code (in the F/W) through e.g. the ELF
  • Build a flat profile for each function with how many cycles it consumed, versus line of code
  • Build a hierarchical profile showing time consumed by parent/child relationships, versus (hierarchical) lines of code
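A minimal sketch of the flat-profile step, assuming we already have (cycles, instruction-pointer) samples from the emulation run and function address ranges recovered from the ELF (the symbol entries below are invented purely for illustration):

```python
import bisect
from collections import Counter

# Function symbol table as (start_address, name) pairs, recovered in practice
# from the firmware ELF; these entries are invented for illustration.
SYMBOLS = sorted([(0x1000, "main"), (0x1400, "rx_packet"), (0x1900, "tx_packet")])
STARTS = [addr for addr, _ in SYMBOLS]

def function_at(ip):
    """Map an instruction pointer to the enclosing function's name."""
    i = bisect.bisect_right(STARTS, ip) - 1
    return SYMBOLS[i][1] if i >= 0 else "<unknown>"

def flat_profile(samples):
    """samples: iterable of (cycles_spent, instruction_pointer) pairs."""
    profile = Counter()
    for cycles, ip in samples:
        profile[function_at(ip)] += cycles
    return profile.most_common()

# e.g. three captured samples from the emulation trace:
print(flat_profile([(120, 0x1408), (80, 0x1910), (400, 0x1450)]))
# [('rx_packet', 520), ('tx_packet', 80)]
```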

Mellanox noted that they were able to fully profile and optimize their firmware before hardware was available, while also having full visibility down to the cycle level to debug.

I have only touched on a few customer examples here. You can read about a hardware-performance profiling example HERE and another simulation acceleration example HERE. All of these cases highlight ways that Palladium Z1 emulation can be exploited beyond the core use-model (run verification fast). Worth thinking about when you want to maximize the value you can get out of those systems.



Cleaning Trends for Advanced Nodes
by Scotten Jones on 04-09-2018 at 12:00 pm

I was invited to give a talk at the Business of Cleans Conference held by Linx Consulting in Boston on April 9th. I am not a cleans expert, but rather was invited to give an overview talk on process technology trends and their impact on cleans. In this write-up I will discuss my presentation, covering each of the three main leading-edge technology segments: DRAM, Logic and NAND.
Continue reading “Cleaning Trends for Advanced Nodes”



SPIE Advanced Lithography 2018 – ASML Update on EUV
by Scotten Jones on 04-09-2018 at 7:00 am

At the SPIE Advanced Lithography Conference in February, ASML gave an update on their EUV systems; in this blog I will provide a summary of what they presented. I have also written about my impressions of EUV across the overall conference here.
Continue reading “SPIE Advanced Lithography 2018 – ASML Update on EUV”