
Verification 3.0 Holds Its First Innovation Summit
by Randy Smith on 03-26-2019 at 5:00 am

Last week I attended the first Verification 3.0 Innovation Summit, held at Levi’s Stadium in Santa Clara, along with about 90 other interested engineers and former engineers (meaning marketing and sales people, like me). There was a great vibe and feel to the event as it exuded an energy level that I have not felt at an EDA event in years. The attendees included longtime EDA veterans as well as a few newcomers. Perhaps more importantly, the list of participating companies was quite long, including speakers from Avery, Breker, Metrics, TV&S, Imperas, OneSpin, Vayavya, Agnisys, Concept, Methodics, Vtool, and Verifyter. Blue Pearl, Willamette HDL, and XtremeEDA were also supporting the event. Quite a collection of verification experts. All these companies gave presentations, spoke to attendees in a tabletop gathering at the end of the event (with great food!), or did both.

EDA industry luminary, Jim Hogan, who has been a driving force behind the Verification 3.0 effort, kicked off the event. Jim is involved in several of the companies supporting this effort as a consultant, investor, and board member. Unfortunately, due to traffic, I missed Jim’s remarks, but we did get a chance to talk at the reception later in the evening where Jim told me, “There are some major themes that Joe outlined in his talk. It’s time to take a new approach to verification, that’s why we called it verification v3.0. We outlined this in an article last year.” Herding so many start-ups is quite a challenge, but Jim is off to a terrific start.

Next up was Joe Costello, former Cadence CEO and the “Tony Robbins of EDA”. I worked at Cadence during Joe’s tenure and his infectious smile, positive attitude, and fervent enthusiasm were clearly all still in effect. Joe laid out a clear case for the likely path of verification solutions over the next five years. He discussed the macroeconomic factors and the design trends that are driving a new approach to verification solutions and then suggested a target opportunity for the participating companies.

The first macroeconomic factor mentioned by Joe is the move to cloud computing. The cloud computing market is already on the same order of magnitude as the entire semiconductor market, measured in the hundreds of billions of dollars per year. Yet, most EDA companies have been slow to make use of these services. Cloud-based EDA solutions would free semiconductor designers from also needing to be experts at running their own massive compute farms. This goes hand-in-hand with the second macroeconomic factor, SaaS (software as a service). Deploying EDA tools as a service is far simpler to do in a cloud environment where the use of both the hardware AND the software can be measured. This allows users to pay only for the tool time they actually use, rather than paying for (and trying to predict) their maximum needed license capacity.

So, you might be thinking that these are just infrastructure issues, not the next algorithm or paradigm to solve verification problems. What I can tell you is that the biggest hurdle in semiconductor design today is the COST of verification. That cost is in licenses and hardware – but especially in headcount. Having spent some time over the last few years helping firms with their recruiting challenges, I know for certain that there are not enough verification engineers available to meet the semiconductor industry’s current needs. So, improving efficiency in verification is critical to improving the results of verification as well as reducing its costs.

Improving the efficiency of verification can also mean building more platforms that are specific to certain types of designs. Joe specifically mentioned the fledgling market for domain-specific processors, which are emerging due to the end of Moore’s Law and Dennard scaling, as well as concerns about efficiently scaling solutions. Building processors for specific applications is an approach to improving the efficiency and results of designs built for specific problems. Joe cited RISC-V as an example of open processors enabling this approach. No doubt ARM could also go down this path to some extent.

Which leads us to this: If you are going to have domain specific architectures, then can’t you develop specific verification environments to aid in the design and verification of those designs? For example, why not build an environment around a specific processor (that also supports extensions) including the IP and verification best practices specific to that application – an environment supporting ISO 26262 for functional safety? An environment knowledgeable of video/audio codec standards? The list is long. Beyond the changes this can drive in the verification industry, the opportunities in semiconductor IP are enormous.

This event was well worth attending, and if you are interested in verification, it would be worth your time to attend the next Verification 3.0 event, whenever that may be. You can check out the website at https://verification30.com/. I heard slides might be online next week.



Self-Certification Insufficient?
by Roger C. Lanctot on 03-26-2019 at 12:00 am

The crash of Ethiopian Airlines Flight 302 may have a negative impact on the development of autonomous vehicle technology. The Federal Aviation Administration (FAA) is now forced to reconsider the “self-certification” process used for the Boeing 737 Max 8 airplane involved.

Self-driving car developers have been seeking the same self-certification for their own systems. The FAA’s failure suggests that may not be good enough. The U.S. Department of Transportation’s National Highway Traffic Safety Administration is taking public comments regarding a regulatory framework for autonomous vehicles, which are already cruising U.S. highways. The challenge for air and land travel governed by software is the same: Understanding and regulating the algorithms and code inside black boxes.

Feds move to consider cars with no steering wheels, brakes – Automotive News

Regulating surface transportation is complicated by the role of local regulators in the U.S. – the 50 states. It is further complicated by the fact that many of these states view automated driving technology as the killer app capable of easing congestion, reducing vehicle emissions, and enhancing mobility for disadvantaged or disabled populations.

But the killer app might itself become a killer. With each new fatality attributed to an autonomous vehicle come investigations to determine what sort of algorithmic failure led to the crash. Thus far, from Florida to Arizona, the source of the software shortcoming seems to have been successfully located – but that may not always be the case.

In the case of Ethiopian Airlines 302 the story appears to be even worse, as multiple reports suggest that a software update was either in the works – to correct the failure experienced by Lion Air Flight 610 – or ready to be implemented, but failed to deploy in time. Another layer to the story derives from the self-certification itself where, according to reports in the Washington Post, Boeing employees were more or less deputized to act as FAA representatives as part of the process.

Further still, reports have emerged that pilots and airline representatives who were shown the new system in the Boeing 737 Max 8 identified multiple areas requiring further training and preparation.

For cars it is not a case of training drivers. It is a case of training machines to drive. We go from the forensic “black box” of the airline industry to the emerging A.I. black box of the self-driving car industry. The question is whether we are inclined to put our “faith” in the A.I. of self-driving car developers or in the regulators. We already know the limitations of the regulators.

Placing our faith in the A.I. black box of the self-driving car reminded me of the A.I. challenges currently facing the health care industry. In the words of one interviewee, Dr. Eric Topol, cardiologist and founder and director of the Scripps Research Translational Institute, quoted in the New York Times last week:

“There’s no shortage of deep liabilities for A.I. in health care. The liabilities include breaches of privacy and security, hacking, the lack of explainability of most A.I. algorithms, the potential to worsen inequities, the embedded bias and ethical quandaries.”

Self-certification in the airline industry – necessitated no doubt by expenses and staffing limitations at the FAA – will now come under renewed scrutiny. Will self-certification be good enough for self-driving cars?

The incompetence and failure at Boeing and the FAA raise unavoidable questions regarding the regulation of transportation. The latest fatal Tesla crash (with a semi-trailer), just a few weeks ago, and these two 737 Max 8 crashes are testing the tolerance of transportation users.

U.S. safety agencies to investigate fatal Tesla crash in Florida – CNBC

The debate calls to mind a presentation I gave last week at a security conference put on by the Metropolitan Police in the U.K. I concluded by noting the likelihood that regulators will require the ability to remotely control autonomous vehicles. In other words, regulators will not allow autonomous vehicles unless there is a provision to control them remotely.

Not surprisingly, some of the law enforcement members in the audience wanted to discuss the topic in further detail. The story of vehicle remote control is both old and new.

General Motors and Hyundai Motor America offer remote vehicle slow-down functions as part of the stolen vehicle tracking and recovery solution in their telematics offerings for passenger vehicles. Brazil attempted to mandate vehicle immobilizer technology several years ago, but abandoned the effort over privacy and security concerns. Finland’s regulatory authority requires a driver for a certified autonomous vehicle, but the driver need not be IN the vehicle.

Remote control is the main differentiator between airline and surface transportation, which is far more deadly than flying. For now, the airline industry and its regulatory authorities have determined that the risks of remote control for airplanes are greater than the rewards. For cars, it is increasingly looking like remote control will be essential.

Even with remote control, though, the challenge of certification and regulation remains. In the U.S., states are opting for less regulation, not more. An audience at the Future Networked Car Symposium at the Geneva Motor Show voted by a show of hands slightly in favor of more regulation – perhaps reflecting the presence of executives from multiple European regulatory authorities.

Ten years from now we will somehow arrive at the nirvana of autonomous vehicle technology, where we are saving lives, reducing congestion and emissions, and eliminating parking garages completely. There will be bumps along the way. Fasten your seat belt.

Also read: Surviving in the Age of Digitalization



Update on SystemC for High-Level Synthesis
by Tom Dillinger on 03-26-2019 at 12:00 am

The scope of current system designs continues to present challenges to verification and implementation engineering teams. The algorithmic complexity of image/voice processing applications needs a high-level language description for efficient representation. The development and testing of embedded firmware routines (commonly written in ‘C’) are driving the trend toward SW/HW “virtual prototyping” verification strategies. And, to be sure, the time-to-market (TTM) pressures are extreme – despite the increased scope and diversity, there is little relief in design schedules. To improve design productivity and verification throughput, hardware models must be represented at higher levels of abstraction, while also providing a well-defined synthesis flow to implementation.

The SystemC hardware description language was originally conceived to help address these design modeling and verification pressures. Yet, SystemC adoption has been slow. The overall language semantics were well-defined, but the modeling guidelines for implementation synthesis were unclear. And, significantly, the influence and support of a “standards” organization for SystemC modeling was lacking. At the recent DVCon (Design and Verification Conference) in San Jose, a workshop session focused on SystemC provided a very positive update on the issues above – indeed, I would anticipate an acceleration in adoption by system designers.

First, a standards update…

Mike Meredith from Cadence Design Systems described the initiatives within Accellera to define and document SystemC usage guidelines. The list of active working groups is impressive:

  • SystemC Language
  • SystemC Synthesis
  • SystemC Verification
  • SystemC Datatypes
  • SystemC Analog/Mixed-Signal
  • SystemC Configuration, Control, and Inspection

The other workshop participants added to Mike’s overview, with encouraging comments.

“The Accellera initiatives are expanding beyond the base language definition, with use case examples covering ‘what to model’ and ‘how to model’.”

“The Accellera activities have focused on clarifying the relationship and distinctions between SystemC (v2.3.3) and C++ (v11/v14).”

“A draft of the SystemC library integration with the Universal Verification Methodology (UVM) has been prepared – for example, how to adapt a SystemVerilog constrained random testbench to exercise SystemC models.”

“The unique nature of automotive system designs requires both the productivity of SystemC and AMS simulation integration. The Accellera working group has been updating the SystemC/AMS user guide and regression test suite, describing in detail the synchronization activity between the (continuous domain) analog and (discrete event) digital models.”

“The high-level synthesis semantics of SystemC assertions is a focus area, in support of assertion-based verification (ABV) environments.”

Mike expanded upon the last comment above, to describe the main emphasis of the Accellera SystemC synthesis working group, namely the development of a SystemC modeling standard for high-level synthesis (HLS).

HLS for SystemC involves a sequence of algorithms to realize an implementation-based model:

  • elaboration
  • input synthesis directives and constraints
  • characterization of the hardware resources for all operations
  • scheduling operations to clocks
  • generation of the RTL model

To enable SystemC synthesis, additional “hardware-centric” features were needed – e.g., modules, ports, signals, processes, bit-accurate datatypes, communication channels, and clocks. SystemC synthesis directives are also unique, offering (optional) user guidance on:

  • program loop interpretation (e.g., “UNROLL_LOOP”)
  • resource allocation (i.e., binding operations to resources)
  • cycle scheduling (e.g., pipelining evaluations, latency)
  • allocation and mapping to registers
  • reset behavior
  • creation of finite-state machine states and transitions
  • definition of data channels (e.g., point-to-point interfaces, FIFOs, etc.)
  • pin-level protocols for data communication (with SystemC function calls through an event on a “SC_port”)

Even with cycle-accurate definitions for protocols and controls, the base algorithm models are still abstract – SystemC for HLS maintains its design and verification productivity.

Mike went into detail on the most significant updates to SystemC modeling for HLS, specifically how the model structure is defined, and how (implicit) clocking is incorporated. His illustrations used the concept of a SystemC “thread” (process).

The figure below illustrates the SystemC module structure for HLS (from Mike’s DVCon presentation). The structure contains elements familiar to RTL designers – e.g., ports for hierarchical connectivity and signal communication.

The definition of a concurrent sequential process is fundamental to RTL modeling, and is reflected in SystemC (for HLS) as an “SC_METHOD” or “SC_THREAD”. The figures below illustrate the features of these processes, and a brief coding example, applicable to both verification and synthesis.

The SC_THREAD and SC_CTHREAD include both a reset preamble and a “wait()” function to represent clocked evaluation. (Briefly, the SC_THREAD process is sensitive to any event, whereas the SC_CTHREAD is suspended by a clock signal – the SC_CTHREAD process is used to define an FSM in the output RTL model.)

The SC_METHOD does not include any wait() suspension of execution control. Note that the verification flow is directed to execute the reset code by “registering” the thread/process, as opposed to relying on the semantics of a constructor function. (A constructor would only be evaluated once, whereas a model reset may be re-executed as part of verification.)
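
To make that process structure concrete, here is a minimal sketch of a clocked SC_CTHREAD with a reset preamble and a wait()-based evaluation loop. The module name, ports, and accumulator behavior are my own illustrative assumptions, not taken from Mike’s presentation:

    // Minimal SC_CTHREAD sketch: reset preamble, then one loop iteration per clock.
    #include <systemc.h>

    SC_MODULE(Accumulator) {
        sc_in<bool>          clk;   // clock driving the clocked thread
        sc_in<bool>          rst;   // active-high reset
        sc_in<sc_uint<8> >   din;   // bit-accurate input
        sc_out<sc_uint<16> > sum;   // accumulated output

        void run() {
            sum.write(0);           // reset preamble, re-executed on every reset
            wait();                 // wait for the first clock edge out of reset
            while (true) {          // clocked loop; becomes the FSM in the RTL output
                sum.write(sum.read() + din.read());
                wait();             // suspend until the next rising clock edge
            }
        }

        SC_CTOR(Accumulator) {
            SC_CTHREAD(run, clk.pos());   // register the clocked thread
            reset_signal_is(rst, true);   // registering the reset re-runs the preamble
        }
    };

An SC_METHOD version of the same logic would omit the wait() calls and instead be made sensitive to its inputs or a clock edge, running to completion each time it triggers.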

Mike also described the SystemC standards activity for synthesis of variable datatypes, using 2’s complement evaluation – see the figure below.

“Not all users will need the full datatype width of the standard,” Mike highlighted. “For more efficient hardware implementation through synthesis, other bit-width datatypes are available in the Accellera SystemC library. For non-integer numeric datatypes, user-defined behavior for the saturation and rounding of a calculation is provided.”
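
As an illustration of those bit-accurate and fixed-point datatypes, here is a small sketch; the widths, rounding mode, and saturation mode below are my own example choices, not figures from the talk:

    // Illustrative SystemC datatypes: bit-accurate integers and a saturating,
    // rounding fixed-point type. Widths and modes are example choices only.
    #define SC_INCLUDE_FX          // enable the fixed-point types
    #include <systemc.h>

    int sc_main(int, char*[]) {
        sc_dt::sc_int<12>  a = 1500;   // 12-bit 2's-complement integer (range -2048..2047)
        sc_dt::sc_uint<6>  b = 63;     // 6-bit unsigned integer

        // 8 total bits, 4 integer bits, round (SC_RND) and saturate (SC_SAT)
        sc_dt::sc_fixed<8, 4, sc_dt::SC_RND, sc_dt::SC_SAT> f = 9.73;

        std::cout << "a + b = " << a + b << std::endl;     // prints 1563
        std::cout << "saturated f = " << f << std::endl;   // clamps to the max value, 7.9375
        return 0;
    }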

I left the DVCon SystemC workshop very enthused about the progress of the Accellera working groups, and the standards activity to define SystemC semantics for HLS. Admittedly, only two EDA vendors are currently providing SystemC HLS support. Regardless, I expect the interest in SystemC modeling to grow and the adoption rate to increase.

Design and verification engineers interested in learning more about SystemC (and ideally, participating in the working groups) are encouraged to go to the Accellera web site (link).

For more information on the Cadence Stratus HLS offering, please go to this link.

-chipguy



Device-as-a-Service – a Market for the Future
by Krishna Betai on 03-25-2019 at 5:00 am

There is an emerging market in the world of IoT, and service providers have yet to realize its potential. With x-as-a-Service — x taking the shape of software, platform, or infrastructure — already in play, it is only a matter of time before Device-as-a-Service (DaaS) becomes a cash cow for these providers.

Like its predecessors, DaaS too is based on the pay-per-use model, which has been around for a while and has become popular among consumers, convenience being the selling point. From basic household utilities such as electricity to music and video streaming services like Spotify and Netflix, the PPU model has become a ubiquitous force that has changed the definition of consumption and is likely to end the concept of product ownership.

When the pay-per-use model is applied to smart appliances, the result is a Device-as-a-Service model. A consumer would be able to use a smart washing machine, dryer, refrigerator, or air conditioner without actually paying its heavy price tag; instead, they would pay a monthly fee based on their usage. Imagine paying a monthly bill for using a washing machine, the amount of which would vary depending on whether the user washes a handful of clothes or an entire load of several pounds. The “units consumed” for a refrigerator would shoot up if its door is kept open longer than required, resulting in a spike in the monthly bill. Furthermore, harnessing the “smart” features of an air conditioner, consumers could be advised to operate the appliance at a specific temperature or range to keep the monthly bill within a certain amount and avoid unnecessary surcharges. This, of course, would depend on external factors such as changes in seasons and climate.

There are several benefits of the Device-as-a-Service model:

From the viewpoint of service providers, the DaaS model would provide them with steady, predictable business after the initial installation of the appliance. Rather than receiving a lump sum amount for the outright purchase of an appliance, they would generate revenue from the monthly payments made by the consumer, regardless of the number of times the appliance is actually used in the 30-day period. Moreover, service providers can monitor the daily wear and tear of the appliances from the large volumes of data generated, thus enabling predictive maintenance. This would help in optimizing repair costs and inventory management of spare parts.

From the consumer viewpoint, convenience and flexibility are the highlights of the DaaS model. In the event that a consumer moves into a new home, they can do away with the hassle of shifting heavy appliances, and can simply inform their service provider of the change and continue their usage, uninterrupted. This would be possible due to the contractual nature of the model.

Environmentally, DaaS would avoid wasteful consumption of energy, aid in the timely maintenance of the appliance thereby extending its useful life, and optimize its usage in a way that would not harm the environment.

Companies like HP and Amazon already follow a pricing model that resonates with DaaS in some way. HP sells its printers at an economical price, charging more for the ink cartridges that its users purchase almost monthly. The online retail giant launched the Kindle, a revolutionary reading device, at a mouth-watering price, fully aware that the millions of e-book purchases would generate more revenue and turn out to be profitable in the long run. Carriers like Verizon and AT&T also follow a similar pricing model, dishing out subsidized smartphones in exchange for contractual agreements with a typical timespan of 24 months. These examples are a testament to the fact that data-driven consumption and billing is the future.

While IoT sensors are heavily used in automobiles and medical devices, their scope can be expanded to smart appliances as well. Whirlpool, for instance, has already taken the lead; the company can track the usage of its smart washing machines and even order detergent for its users before they run out. The only obstacle to the DaaS model is whether or not current 4G networks would be able to manage so many connected appliances at a given point in time. Thanks to the rapid development of the faster, more stable, and more reliable 5G network, this does not seem like much of a hurdle.

As companies put together the pieces of the IoT puzzle and become better equipped to handle vast amounts of data over superfast networks, the market of Device-as-a-Service seems to be a viable and obvious next step towards an IoT-enabled future.



A Smarter Way to Do Multi-Board PCB Systems
by Daniel Payne on 03-23-2019 at 2:15 pm

Many electronic product ideas start out as sketches on the back of a napkin, then migrate over to diagrams drawn in Visio or PowerPoint, and are finally entered into EDA-specific tools. With that methodology there’s a big disconnect between the diagrams drawn with a purely graphical tool and the EDA tools, because there’s no data linkage happening, so there’s no consistency and no automation when a change is made to the specification. Necessity is the mother of invention, so I recently spoke with Gary Hinde at Cadence to learn how this need for system-level capture was turned into a new product, named Allegro System Capture, announced earlier this year.

Q: How new is System Capture?

It’s been in development for a few years now, and it’s a platform for hardware design of systems with multiple boards, packages, cables and harnesses.

Q: Why was System Capture created in the first place?

I’ve had previous roles at Cadence as an AE and AE director, visiting customers across Europe that design PCB systems for automotive, industrial, mil-aero, networking and even formula racing. These teams often started out their designs with PowerPoint or Visio to capture the big picture, but there was never any linkage to the electrical system and requirements. So we created System Capture as a way to automate the graphical diagrams that also include connectivity, and to partition a system into multiple boards.


Electronic system definition that drives the detailed implementation

Q: Does System Capture work with any schematic or PCB layout tool?

Our System Capture tool works with the Cadence Allegro and OrCAD tools only at this point.

Q: Can you give me an example of how using your approach is beneficial?

Sure, consider a two-board system with an RF board and a digital board. With this approach you can partition your system into the two boards, then have design engineers assigned to each board working in parallel, while the interconnect between the two boards is entered first in System Capture, maintaining consistency of that interconnect between the boards.

Q: What types of engineers would be using your new tool?

It really depends on the project; typical users would be system architects, hardware architects, lead engineers, senior engineers, EEs, some MCAD users, PCB designers, SI experts, and even manufacturing engineers.

Q: What problems does this new approach help mitigate?

Well, it eliminates surprises that happen when bringing together two or more boards. Let’s say that something changed, like pin positions or pin names; with this approach you catch these changes so there aren’t surprises. The system-level connectivity is defined from the top down, and there are consistency checks as you create the PCB layout for each board.


System connectivity mismatches identified and highlighted for users to resolve

Q: How much time can an engineer expect to save with this approach?

With the connectivity already defined in System Capture you can expect a 2X to 5X speedup during schematic capture. Placing decoupling capacitor rails is now up to 10X faster on schematic pages.

Q: What types of simulation analysis are supported?

We’ve got two types: signal integrity analysis is done with the Allegro Sigrity SI tool, and power integrity with Allegro Sigrity PI.

Q: Where can I learn more about System Capture?

On the web there’s a product page and data sheet; you can also contact your local Cadence AE, or come visit us at PCB West in Santa Clara.




Semiconductor Market Downturn in 2019
by Bill Jewell on 03-23-2019 at 5:00 am

The global semiconductor market grew 13.7% in 2018, according to World Semiconductor Trade Statistics (WSTS). Each year, we at Semiconductor Intelligence review semiconductor forecasts and compare them to the final WSTS data. We used projections which were publicly released from late 2017 through early 2018, prior to the release of January 2018 WSTS data in March 2018. These forecasts ranged from 5.9% from Mike Cowan to 21.3% from Future Horizons. Most were in the 6% to 8% range. Our Semiconductor Intelligence projection in February 2018 was 12%, the closest to the final number of 13.7%.

We were set to award ourselves the (virtual) trophy for forecast accuracy. However, in researching recent forecasts for 2019, we found Objective Analysis posted a report on its forecast accuracy since 2008. The December 2017 video from VLSI Research shows a chart with the Objective Analysis statement “strong start supports 10%+ growth” for 2018. But in the video Jim Handy of Objective Analysis said their forecast was 14%. Thus, Objective Analysis wins the (virtual) trophy for 2018. The 2017 semiconductor market grew 21.6%. Last year we awarded our (again virtual) trophy to Future Horizons for its 11% projection. However Objective Analysis would have won that year also, with a forecast of ~20%.

What is the outlook for 2019? The 2018 semiconductor market finished weak, with an 8.2% decline in the fourth quarter from the third, according to WSTS. The first quarter of 2019 will be even weaker. Most major semiconductor companies are expecting up to double-digit declines in 1Q 2019 from 4Q 2018. The exceptions are Qualcomm, which expects a 0.9% increase (9.3% at the high end), and Infineon, which sees a flat 1Q 2019. Weak end demand and inventory adjustments are cited as key factors in the declines. Memory companies are the hardest hit, with Samsung down 24.3% in 4Q 2018 and SK Hynix down 13.0%. Micron just reported a 26.3% revenue decline in its fiscal quarter ended February 28, 2019. Micron’s outlook for the quarter ending May 31 is a 17.7% decline.

The global economic outlook points to slower growth in 2019. The International Monetary Fund (IMF) January 2019 forecast is for world GDP growth to slow from 3.7% in 2018 to 3.5% in 2019. The decline is led by the advanced economies, with the U.S. slowing from 2.9% in 2018 to 2.5% in 2019 and the Euro area slowing from 1.8% to 1.6%. China is expected to drag down growth in the emerging/developing economies as its GDP growth decelerates from 6.6% in 2018 to 6.2% in 2019. On the positive side, India continues to show growth of over 7% and accelerating, the ASEAN-5 (Indonesia, Malaysia, the Philippines, Thailand and Vietnam) exhibit steady growth of around 5%, and Latin America is recovering. Key factors cited by the IMF for the slowdown are trade tensions (especially between the U.S. and China) and the uncertainty of the U.K.’s exit from the European Union (Brexit). The outlook for 2020 shows slight improvement, with acceleration to 3.6% world GDP growth led by the emerging/developing economies.

The outlook for key end equipment is also bleak. IDC in March forecast a 0.8% decline in smartphone unit shipments in 2019 and a 3.3% decline in combined PC and tablet unit shipments.

Recent 2019 semiconductor market forecasts are generally negative. Our latest projection from Semiconductor Intelligence is a 10% decline. Several forecasts are in the -5% to -1% range. Objective Analysis has a chance for a three-peat forecast trophy in 2019, but would have to share it with Morgan Stanley if -5% is closest to the final number. IC Insights expects a slight 1.6% gain for the IC market while Gartner projects a 2.6% gain for semiconductors. Memory is the weakest link in 2019. IC Insights projects the IC market excluding memory will grow 6.7%. WSTS expects 2.6% growth for semiconductors excluding memory. Our Semiconductor Intelligence forecast is for a 2% decline in semiconductors excluding memory.

The current outlook for the semiconductor industry for 2019 assumes lower memory prices, slower electronic equipment demand, inventory corrections, and slower growth for the global economy. Despite all the uncertainty, few analysts expect a global recession in 2019. The expectations for the 2020 semiconductor market are mixed. VLSI Research and Gartner forecast a rebound in 2020 of 7.0% and 8.1% respectively. IC Insights projects a 1.9% decline in the 2020 IC market. Our preliminary 2020 forecast from Semiconductor Intelligence is 5% to 10% growth.



SPIE Advanced Lithography Conference – ASML EUV Update
by Scotten Jones on 03-23-2019 at 12:00 am

At the SPIE Advanced Lithography Conference ASML gave an update on both the current 0.33NA system and the 0.55 high-NA system development. I saw the presentations and got to sit down with Mike Lercel (Director of Strategic Marketing).
Continue reading “SPIE Advanced Lithography Conference – ASML EUV Update”



Attend Parts of DAC For Free, Really
by Daniel Payne on 03-22-2019 at 5:00 am

The Design Automation Conference (DAC) is the must-see annual event for semiconductor professionals who design chips, use EDA software, and buy semiconductor IP. Like all conferences there’s an entrance fee, but for the 11th year now you can get a free pass, courtesy of three sponsors: Avatar Integrated Systems, ClioSoft, and Truechip. The free pass is part of the I Love DAC promotion going on now, but you must act before the deadline of May 17th. DAC is in Las Vegas this year from June 2-6, so make your airline and hotel arrangements early to get the best deals.

Here’s what you’re going to experience with the I Love DAC pass:

  • Four daily Keynote sessions
  • Access to the Exhibition Floor with 170+ Exhibitors
  • Access to two pavilions with daily presentations
  • DAC Pavilion, sponsored by Cadence

    • SKYTalks (mini-keynotes)
    • Industry leader discussions
    • Hot industry topic panels
    • Tear-downs
  • Design-On-Cloud Pavilion in Design Infrastructure Alley

    • Daily presentations focused on cloud-based and IP Topics
  • Chip Essentials Village

    • Demonstrations from leading companies providing essentials to SoC design.
  • Daily networking receptions Sunday – Wednesday

Design Infrastructure Alley
In 2018 there were IT and specialty vendors galore, with familiar names like:

  • Google Cloud
  • Microsoft
  • Cadence
  • Amazon Web Services
  • Metrics
  • Alibaba Cloud
  • IBM
  • Dell EMC
  • Univa
  • Footprintku
  • PureStorage
  • Rescale
  • Six Nines
  • Altair
  • Suse
  • ICmanage

These companies provide the hardware, software, and services needed to run EDA tools, manage licensing, store massive amounts of data, maintain security, and even support cloud-based flows.

Chip Essentials Village
If your company wants to exhibit at DAC on a budget, check out the Chip Essentials Village: it offers an exhibit kiosk at a value price, presentation time in a theater, and more.

DAC Experience
I’ve been attending the DAC conference since 1987 and I always come away filled with new insights learned from the Keynotes, foundries, EDA and IP vendors. You’ll rub elbows with system designers, architects, RTL designers, circuit designers, IC layout designers, CAD engineers, researchers, C-level executives, and of course the team of SemiWiki bloggers. There are about 60 technical sessions to attend, exhibits to peruse, and many networking opportunities.

About DAC
The Design Automation Conference (DAC) is recognized as the premier event for the design of electronic circuits and systems, and for electronic design automation (EDA) and silicon solutions. A diverse worldwide community of more than 1,000 organizations attends each year, represented by system designers and architects, logic and circuit designers, validation engineers, CAD managers, senior managers and executives as well as researchers and academicians from leading universities. Nearly 60 technical sessions selected by a committee of electronic design experts offer information on recent developments and trends, management practices and new products, methodologies and technologies. A highlight of DAC is its exhibition and suite area, with approximately 200 of the leading and emerging EDA, silicon, and intellectual property (IP) companies and design services providers. The conference is sponsored by the Association for Computing Machinery (ACM) and the Institute of Electrical and Electronics Engineers (IEEE), and is supported by ACM’s Special Interest Group on Design Automation (SIGDA).



Narrow-Band IoT Adoption Grows as IP Options Narrow
by Bernard Murphy on 03-22-2019 at 12:00 am

Cellular as a method to communicate with the IoT is on a tear for obvious reasons. It’s long-range with no concerns about the lesser reach of Bluetooth or Wi-Fi, it needs no added infrastructure since it already works with 2G/3G/4G (and ultimately 5G, I presume), and it’s designed for ultra-low power, supporting those devices expected to run on a coin-cell battery for 10 years. Commercial cellular IoT networks are blossoming across the world, with a total of 69 launches by 33 operators in 34 countries as of Q4 2018, and NB-IoT represents 80% of all deployments.

For the big cellular players with in-house communications design expertise this is just another direction to grow. But this is IoT, with lots of new silicon design teams, so the market is likely to be more fragmented than more familiar mobile markets. Many of these players, not all new ventures, lack silicon communications expertise so depend on proven IP to handle the modem.

There used to be a number of providers in the NB-IoT space. CEVA, still very much active, has well-established expertise in cellular and introduced their first Dragonfly NB-IoT solution early last year and their eNB/Rel 14 release of that product more recently. ARM was pursuing NB-IoT with its Cordio platform but announced late last year that they would no longer pursue this direction. Commsolid, another IP supplier in this space, was acquired by Goodix and now makes chips rather than IP. When you’re building an IoT solution, modem chips are one way to go, of course, but if you want ultra-low power and ultra-low cost (which you generally do for high-volume edge devices), it’s a lot more attractive to look at integrated ASIC solutions with the modem as an IP block.

Which puts CEVA in an enviable position in serving this expanding market. In their eNB-IoT release they have also added multi-constellation GNSS positioning support, satisfying a need for location services in the majority of new IoT products, whether mobile or fixed (an interesting market wrinkle in itself; I have written about this before). A report from DNB Markets (on Nordic Semiconductor following MWC 19) confirms this. DNB are confident in cellular IoT prospects based on what they saw at the event and noted CEVA’s enabling position in driving competition in this space, citing interest coming from semiconductor companies who don’t have cellular expertise, but also from non-semiconductor companies who want to build their own chipsets and modules.

Nurlink, a China-based IC design company specializing in cellular IoT wireless communications, recently announced the introduction of their NK6010 eNB-IoT SoC powered by CEVA-Dragonfly. This supports all eNB-IoT frequency bands and major global carriers, as required to support certification of devices on any eNB-IoT commercial network around the world. Nurlink’s goal is to drive adoption of their chip in IoT devices such as smart meters, wearables, asset trackers and industrial sensors. They added that they’re now engaged with (mobile network) operators worldwide to certify their SoC.

That certification step shouldn’t be ignored. To be allowed onto the networks, you have to prove your device will play well with others in real life (not just in the lab), according to MNO expectations. If you’re not already a communications expert, this can be daunting. CEVA works hard to make this transition as smooth as possible. While MNOs will not certify IP, CEVA have built their own silicon based on the IP, which they have been running through test trials at Vodafone’s IoT Future Lab in Düsseldorf, Germany. Using those open lab facilities, which provide a realistic end-to-end live environment for NB-IoT technology, CEVA connected to the Vodafone NB-IoT network and demonstrated end-to-end IP connectivity with its test chip running an eNB-IoT compliant software stack. This provides a “pre-certification” – not an official signoff, but getting as close to compliance as possible short of proving it in the end product – which should simplify certification for product developers.

Lastly, how low can you go on power? Integrating the modem into your ASIC automatically reduces power compared to a multi-chip solution. On top of that, Dragonfly is designed for additional power reduction, down to a few micro-amps in sleep mode, through dedicated instructions to support power-saving mode (LTE PSM) and through support for LTE eDRX (extended discontinuous reception). Since communication should be relatively infrequent for applications intended for eNB-IoT, getting to 10-year battery life should be achievable as long as you don’t hog power in your application or sensors.
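
To see why a 10-year figure is plausible, here is a back-of-envelope sketch; every number in it (battery capacity, sleep and active currents, duty cycle) is an illustrative assumption of mine, not a CEVA or operator figure:

    // Rough battery-life estimate for an eNB-IoT node. All values are assumptions
    // for illustration only: a ~1000 mAh coin cell, a few uA of sleep current,
    // and ten seconds of active modem time per day.
    #include <cstdio>

    int main() {
        const double battery_mAh  = 1000.0;  // large coin cell (CR2477-class)
        const double sleep_uA     = 3.0;     // PSM/eDRX sleep current
        const double active_mA    = 60.0;    // modem current while awake/transmitting
        const double active_s_day = 10.0;    // seconds of activity per day

        // Duty-cycled average current in micro-amps
        double avg_uA = sleep_uA + (active_mA * 1000.0) * (active_s_day / 86400.0);

        double hours = (battery_mAh * 1000.0) / avg_uA;   // uAh divided by uA
        double years = hours / (24.0 * 365.0);
        std::printf("average current: %.1f uA -> roughly %.1f years\n", avg_uA, years);
        return 0;
    }

With these assumptions the average drain works out to roughly 10 µA, or a bit over a decade on the assumed cell; heavier traffic or a hungrier sensor quickly eats into that margin.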

Want to learn more about CEVA Dragonfly? Click HERE.



ARM, NXP Share Usage, Challenges at Synopsys Lunch
by Bernard Murphy on 03-20-2019 at 7:00 am

Synopsys runs an “Industry verifies with Synopsys” lunch at each DVCon, which isn’t as cheesy as the title might suggest. The bulk of the lunch covers user presentations on their use of Synopsys tools, which I find informative and quite open, sharing problems as much as successes. This year, Eamonn Quiqley, FPGA engineering manager from ARM, and Amol Bhinge, R&D emulation and verification HW director from NXP, shared their experiences.

Eamonn hails from Ireland where they are great spellers but terrible pronouncers as I think the saying goes (half of my relatives are from the Cork area); pronouncing his name challenged most of the other speakers (it’s “Aymon” by the way). He talked about providing enterprise-class FPGA-based verification at ARM at their Trondheim, Redhill and Austin facilities. Here FPGA means FPGA-prototyping using HAPS.

I’m guessing this isn’t the only enterprise-scale use of FPGA prototyping, but it’s the first I have seen and it’s pretty impressive. We’re getting more familiar with datacenter-based emulation, but this is HAPS prototyping in long aisles of cabinets (I counted at least 12 per side in one image), each with multiple bays of prototyping systems. Looks just like a regular datacenter aisle but without the flashing lights on the cabinets (all the flashing lights are on the systems inside).

The goal, of course, is to provide global access and resource sharing with resilience (reliability, maintainability), to optimize use of resources, and to provide flexibility in how these systems can be used. The trick in meeting the flexibility goal is to provide configurability within a controlled, limited range of options. This they accomplish through a number of widely-used (for them) configurations, from 1 to 16 FPGAs. The most heavily used configuration has 4 FPGAs, with each FPGA connected to the others. They add another S104 system to this to extend support to 8 FPGAs, which he said was designed to cover many needs and could be adapted if needed. They use these configs most commonly for CPU debug. For GPU debug they double this up again, allowing for up to 16 FPGAs. Cabling and configurations are designed to support multi-design mode (MDM) to maximize usage at all times.

Debug on FPGA prototypes is always tricky; after all, they’re designed for speed rather than deep and broad visibility. Eamonn said that they find deep-trace debug works really well if you know what you want to look at, capturing up to 2k signals at 17MHz, whereas global state visibility, running at 100k cycles/hour, works well if you know roughly when you want to look but not where.

Amol (NXP) opened with an interesting stat. Did you know that every new car contains at least 100 NXP products? I didn’t but it doesn’t sound unreasonable given the level of automation we’re now seeing even in entry-level cars. Rather than talking about specific verification objectives, Amol provided an entertaining and enlightening tour through challenges he still sees in SoC verification.

He kicked off with an interesting statement. Verification tools provide many flavors of coverage, but in his view it is already difficult to address just one type at a reasonable level across multiple domains. He views coverage closure as a long pole for multiple reasons: exclude files for IPs are not as reusable as they should be, it is difficult to deal with tie-offs, constants and parameters (he suggested these need added focus in verification flows) and they’re still struggling to get coverage on IPs.

He made an interesting point – there should be more investment in coverage for IO muxing. I know this is an area already covered as an app by formal tools (in fact he mentioned this area when he discussed formal tools), but I also know that IO muxing architectures can be highly custom, even from design group to design group within a company. I wonder how much effort is required to configure these apps for custom structures? Perhaps so much that many verification groups still resort to simulation-based signoff, in which case coverage metrics would certainly be interesting.

Amol said they worry particularly about false passes, where checkers, assertions or VIPs may themselves contain errors or may overlook certain possibilities. He noted they had found parameter errors and tie-off errors which should have been caught but were not. He particularly likes Certitude, Z01X and the VC Formal FTA App for tracking down problems of this nature.

Gate-level verification continues to be important (thanks to automotive I believe) and they have found defects at this level which escaped RTL verification. A problem here is turn-around time, in tool run-time (he mentioned running 44 gate-level test cases took many months) and also in debug. He likes shaking out possible bugs earlier in RTL, and he cited VC Formal FXP as a useful tool in this area. But he still sees need for more work in tools and methodologies.

Amol wrapped up with a request for more support in performance verification, particularly along targeted paths such as PCIe to DDR or core to DDR. He mentioned the need for more standardization and innovation in this area.

Overall, the lunch offered entertainment and insight into what is possible for enterprise-level FPGA prototyping and where yet more development is needed. And a free lunch – what more could you ask for? To watch the event, click HERE.