
FPGA Prototyping Speeds Design Realization for the Internet of Things
by Daniel Nenni on 04-02-2018 at 7:00 am

When we talk about the Internet of Things (IoT), it isn’t a stretch to say that every intelligent device we interact with will become connected, sharing vast amounts of data with one another to make our lives more efficient. It isn’t only consumers of smart home, infotainment, and wearable technologies that are driving the demand, but also industrial, military, and government applications such as smart cities and factories that are changing the connectivity landscape.

Webinar April 4: Achieve High-performance & High-throughput with Intel-based FPGA Prototyping

From the Very Small to the Behemoth
When we explore IoT from this perspective, we understand that these devices can range from the smallest designs, a handful of sensors and actuators adding up to only a few million gates, to extremely complex machines containing hundreds of sensors and billions of gates. Whatever the size and complexity, these smart systems require a great deal more software and real-environment testing, especially when integrating commercial IP. All of the IoT examples mentioned mandate interoperable connectivity, sophisticated control, and test efficiency, forcing design teams to rethink their development strategies. Add to this the time-to-market pressures that consumer IoT devices face, and it becomes clear that engineers need adequate solutions to address these issues.

Getting Confidence in Your Design Early
FPGA-based prototyping is specifically geared toward meeting the design and verification demands created by the complexities of IoT devices. Prototyping technology advances in areas such as partitioning and multi-FPGA debug have allowed FPGA-based prototyping to scale, addressing not only small multi-million-gate designs but also designs of up to 1.5 billion gates. FPGA-based prototyping allows designers to develop and test their systems and gives software developers early access to a fully functioning hardware platform long before silicon is available. The hardware prototype is the only solution that can be employed early enough for practical software development and testing; software models simply don’t offer the speed and capacity, or the accuracy and reliability, that hardware platforms provide.

Even the smallest designs must negotiate very complex software issues and require an enormous amount of rigorous testing. The nature of this testing could run any design into the ground, missing crucial time-to-market windows. Although prototyping can take weeks to set up, the sheer number of tests that can then be executed in a short amount of time leaves other solutions far behind. Even at a modest speed (just 5 MHz) and after four weeks of setup, FPGA prototyping can complete a staggeringly higher number of tests than other solutions within days of coming online. For more information, check out the Emulation vs. Prototyping Cross Over Calculator at http://www.s2cinc.com/prototyping-calculator.
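To make that crossover argument concrete, here is a minimal back-of-the-envelope model in Python. The setup times and per-day test throughputs are illustrative assumptions, not figures from S2C's calculator; the point is simply that a slower bring-up can still win once a higher execution rate kicks in.

```python
# Illustrative crossover model: emulation vs. FPGA prototyping.
# All numbers below are assumptions for demonstration only.

def tests_completed(days, setup_days, tests_per_day):
    """Total tests finished 'days' after the project starts."""
    run_days = max(0, days - setup_days)
    return run_days * tests_per_day

# Assumed platform characteristics (not vendor data):
emulation = {"setup_days": 7,  "tests_per_day": 200}    # fast bring-up, slower execution
prototype = {"setup_days": 28, "tests_per_day": 1000}   # ~4-week bring-up, faster execution

for day in range(0, 71, 7):
    e = tests_completed(day, **emulation)
    p = tests_completed(day, **prototype)
    marker = "  <-- prototype ahead" if p > e else ""
    print(f"day {day:2d}: emulation {e:6d} tests, prototype {p:6d} tests{marker}")
```

With these assumed numbers, the prototype overtakes the emulator about a week after its four-week bring-up, which is the crossover effect the calculator quantifies.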

Transactors Are Key to IoT Design Success
FPGA-based prototyping is well suited to designs that are fully rendered in RTL and can be mapped to an FPGA. However, many IoT designs cannot yet be completely mapped to an FPGA and may still exist partly as behavioral models in languages such as C++ or SystemC. In these cases, transaction-level interfaces play a critical role in bridging the abstraction gap between behavioral models and live hardware. These transactors provide a way for software running on a host to communicate with an FPGA-based prototyping platform that often includes memories, processors, and high-speed interfaces.

A system that can harness the power of transactors allows designers to reap the benefits of FPGA-based prototypes much earlier in the design project, for algorithm validation, IP design, simulation acceleration, and corner-case testing. A prototype combined with a transactor interface makes a range of interesting applications possible throughout the design flow.
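As a rough illustration of the concept only (the class and method names below are hypothetical and do not represent any vendor's actual API), this Python sketch shows the shape of a transactor: an untimed host-side model issues abstract read/write transactions, and a thin bridge turns them into operations on the prototype.

```python
# Conceptual transactor sketch: bridging an abstract behavioral model
# and an FPGA-based prototype. All names here are hypothetical.

from dataclasses import dataclass

@dataclass
class Transaction:
    """Abstract bus transaction produced by the host-side behavioral model."""
    kind: str      # "read" or "write"
    address: int
    data: int = 0

class FakeFpgaLink:
    """Stand-in for the physical link to the prototype (e.g. PCIe, USB, Ethernet)."""
    def __init__(self):
        self.memory = {}
    def send(self, txn: Transaction) -> int:
        if txn.kind == "write":
            self.memory[txn.address] = txn.data
            return 0
        return self.memory.get(txn.address, 0)

class Transactor:
    """Converts untimed transactions into operations on the prototype."""
    def __init__(self, link):
        self.link = link
    def write(self, addr, data):
        return self.link.send(Transaction("write", addr, data))
    def read(self, addr):
        return self.link.send(Transaction("read", addr))

# Host-side behavioral model driving the prototype through the transactor.
xact = Transactor(FakeFpgaLink())
xact.write(0x1000, 0xDEADBEEF)
assert xact.read(0x1000) == 0xDEADBEEF
print("transaction round-trip OK")
```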

As we move into the next stage of connectivity, devices will need to undergo exhaustive testing, and FPGA prototyping is a key technology that will play an effective role in this process.

Webinar April 4: Achieve High-performance & High-throughput with Intel-based FPGA Prototyping


Intel to buy Micron – Trump Blocks IC Equipment Sales to China – Broadcom Fighting for QCOM
by Robert Maire on 04-01-2018 at 12:00 pm

It has been reported over the holiday weekend that Intel is in talks with Micron over a proposed merger that would value Micron at $70 per share in a deal combining stock and cash for Micron shareholders. It is said that the boards of both companies have already approved the deal.

Intel’s CEO Brian Krzanich said, “We see enormous upside in the memory market over the next decade with the continued move to all things data. We view this as a perfect fit with our position in the data center and new data-centric applications.”

BK went on to explain why the deal was being done now: “We saw a value opportunity created as our stock price increased on declining sales while Micron’s stock price declined on increasing sales; this aberration in the stock market created an opportunity we could not ignore.”

It was also reported that part of the unspoken reason for Intel’s acquisition was that Intel was under pressure from the administration to cease its cooperation with China, particularly the new memory fab it was planning on transferring technology to.

In related news, the Trump administration has called for a halt in chip equipment sales to China. The administration is using a unique combination of CFIUS and the War Powers Act to block sales of advanced chip technology to China, citing the military and economic threat.

In separate news, Hock Tan has vowed to keep up the fight to convince the administration to allow his bid for Qualcomm to continue.

Broadcom/Qualcomm failure motivated Intel/Micron
An unnamed source at Intel said, “Once the Broadcom/Qualcomm deal was dead we (Intel) were free to pursue other deals without the fear of a new chip behemoth. We feel the Micron deal is a regulatory slam dunk as it’s two US companies in the face of foreign competition. We get an added bonus of bragging rights against Samsung by becoming the world leader in chips once again.”

Merger Moves
There are some other concerns about the merger. Part of the pre-arranged deal with the government included a move of the combined company’s headquarters to Boise, Idaho, which would support the administration’s goal of bringing more jobs to the American heartland and out of California.

A White House spokesperson said, “We are supportive of the move of the US chip industry back to its roots and the source of its core ingredient in Idaho.”

Intel going back to its beginning
By merging with Micron, Intel is going back to its early roots as a memory manufacturer. Intel produced the first commercially available DRAM chip in 1970, the 1103, a one-kilobit memory chip. In 1974 it controlled over 80% of the DRAM market before losing out to foreign competition. Mr. Krzanich said, “Our goal is to get back to a very high share of the memory market… if at first you don’t succeed, try, try again…”

Chip Equipment sales to China Halted
As part of the back and forth in the trade tussle between China and the US, the administration took a pre-emptive move by halting “any chip equipment sales to China that includes unique American technology”.

President Trump hailed the decision as part of a comprehensive package aimed at limiting China’s economic and military aspirations.

Press secretary Sarah Sanders went into further detail about specific companies and products: “Included in the chip equipment ban are laser anneal systems made by Veeco, which ‘bake’ chips; etching systems made by Lam Research, which are used to create patterns of ‘waves or Ruffles’ on chips using SADP (self-aligned double patterning) printing techniques; and Applied Materials deposition equipment, which was cited for its ability to deposit unique layers of material flavorings including BBQ and Sour Cream.” Sanders went on to say, “We are also protecting our chip packaging industry by limiting the sales of equipment related to chip stacking as currently employed by Pringles, which allows for very dense stacking of chips in a unique singular, protected package.”

Trump went on to say, “The American public was kept in the dark about the real purpose of all this ‘chip’ technology and its sales to China. We will protect the American chip industry and protect its workers so that Americans will be free to savor chips made in the US for future generations.”

Further “Americanization” of Broadcom
As part of its efforts to gain approval to move ahead with its Qualcomm bid, Broadcom has accelerated the move of its headquarters and legal domicile back to the United States. This was also part of the agreement with the US agency CFIUS that allowed the Brocade acquisition to go through.

In a little noticed SEC filing Broadcom announced that Hock Tan, CEO of Broadcom, has legally changed his name to “Harold Thomas”.

Separately, it was announced that Trump would be having an Oval Office “photo op meeting” with Thomas. Trump was quoted as saying, “I have never met Thomas before, but any American bringing jobs back from Asia will have my full support in whatever he wants to do.” CFIUS was said to be re-evaluating its stance given those comments.

Happy April First!!!


Nvidia: What the Doctor Ordered

Nvidia: What the Doctor Ordered
by Roger C. Lanctot on 04-01-2018 at 7:00 am

Nvidia has an affinity for taking on computational challenges. Whether it be diagnostic tools derived from medical imaging, mapping the earth’s core, or defining the structure of HIV, Nvidia is there with GPU computational resources to amp up the teraflops and shrink the computing time. In fact, according to Nvidia’s most recent earnings report, the leading creators of supercomputers in the U.S. and Japan are increasingly turning to GPUs.


The automotive industry has brought Nvidia the toughest test yet, in the words of Nvidia CEO Jensen Huang, in the form of mastering automated driving. In the midst of regulatory and legislative battles over self-driving technology, Nvidia has stepped forward with a clear path toward solving the problem and closing the gap between current and future safe autonomous vehicle performance.

Regulators and politicians may pontificate about autonomous technology but Nvidia is getting down to business. Nvidia is in a unique position to solve the problem as a public company at the center of an ecosystem of 370 development partners (and growing) all working on the autonomous driving challenge. Nvidia brings to that challenge a portfolio of hardware, software, servers, code libraries and technicians all working toward the same objective in real-time in the real world.

The implications of Nvidia’s market position are most significant because the challenge shared by all is getting, in the words of Nvidia CEO Jensen Huang, “from 1 million miles to 1 trillion miles.” To achieve any level of confidence in an automated driving system, developers will need to drive or simulate driving hundreds of billions of miles. If there is a company on the planet that knows anything about simulation, it is Nvidia.
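A quick back-of-the-envelope calculation makes the point; the fleet size and average speed below are purely illustrative assumptions.

```python
# Why "1 million to 1 trillion miles" forces simulation.
# Fleet size and average speed are illustrative assumptions.
fleet_size   = 100          # test vehicles on the road
avg_speed    = 30           # miles per hour, mixed urban driving
hours_per_yr = 24 * 365     # assume round-the-clock operation

miles_per_year = fleet_size * avg_speed * hours_per_yr
target_miles   = 1_000_000_000_000  # one trillion

print(f"fleet covers {miles_per_year:,.0f} miles/year")
print(f"years to reach a trillion miles: {target_miles / miles_per_year:,.0f}")
# -> tens of thousands of years, hence the need for massive simulation
```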

In his keynote at GTC 2018 in San Jose this week, the Nvidia CEO picked up the automated driving gauntlet signifying Nvidia’s intention to conquer that challenge with new tools in the form of its DRIVE Constellation – a computing platform based on two different servers.

The first server runs Nvidia DRIVE Sim software to simulate a self-driving vehicle’s sensors, such as cameras, lidar and radar. The second contains a powerful Nvidia DRIVE Pegasus AI car computer that runs the complete autonomous vehicle software stack and processes the simulated data as if it were coming from the sensors of a car driving on the road.
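Conceptually this is a hardware-in-the-loop cycle. The toy Python loop below (all names and numbers are invented, and no Nvidia APIs are used) sketches the shape of it: one side synthesizes sensor data for a virtual world, the other runs the driving stack and returns control commands that update that world.

```python
# Toy hardware-in-the-loop cycle in the spirit of a two-server setup:
# a simulator produces sensor data, a driving stack consumes it and
# returns actuation commands. Names and numbers are illustrative only.

def simulate_sensors(world_state):
    """Server 1 role: render camera/lidar/radar data for the current scene."""
    return {"obstacle_distance_m": world_state["obstacle_m"] - world_state["ego_m"]}

def driving_stack(sensor_frame):
    """Server 2 role: the AV software stack deciding what to do."""
    return {"brake": sensor_frame["obstacle_distance_m"] < 30.0}

world = {"ego_m": 0.0, "obstacle_m": 120.0, "speed_mps": 20.0}

for step in range(10):                  # ten simulated time steps of 0.5 s
    frame = simulate_sensors(world)     # simulated sensors -> AV computer
    cmd = driving_stack(frame)          # AV computer -> control command
    if cmd["brake"]:
        world["speed_mps"] = max(0.0, world["speed_mps"] - 5.0)
    world["ego_m"] += world["speed_mps"] * 0.5
    print(f"t={step*0.5:4.1f}s pos={world['ego_m']:6.1f}m "
          f"speed={world['speed_mps']:4.1f}m/s brake={cmd['brake']}")
```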

Autonomous vehicle creators like Waymo and Cruise have been able to rack up single digit millions of miles from the dozens or hundreds of test vehicles they have been able to put on roads. But the weakness of this brute force method is manifest in the individual on-road mishaps and, in the case of Uber just last week, fatalities that result from unanticipated scenarios or system failures.

Automotive industry safety standards allow little or no room for failure and here, too, Nvidia is a leader. Huang says the Nvidia lineup represents “the first lineup of computers that Nvidia has ever made that stands up to the highest standard of functional safety ISO 26262 and ASIL-D. It is the first time that we have delivered this level of software and hardware complexity.” In fact, Huang describes the earlier announced Xavier as “the most complex SoC the world has ever created.”

One might wonder what Nvidia’s reward has been for the millions of dollars the company is pouring into the development of these self-driving compute platforms. The reality is that Nvidia’s shift in focus from automotive infotainment systems to autonomous driving technology has been almost as challenging as the process of creating those new systems.

Gaming, datacenters and professional visualization remain the dominant portions of Nvidia’s business. Revenue growth for automotive in the most recent quarter was a disappointing 3% and down sequentially as a result of this transition from infotainment to safety. Driving automation appears to be the correct bet for Nvidia, but it’s clearly a long-term play.

Nvidia’s commitment, though, is clear and deep – maybe that’s because the company sees a trillion-dollar opportunity, according to Huang. Presenters at the GTC event in San Jose are sharing papers on everything from object detection to driver monitoring with the common threads being the application of artificial intelligence and deep learning with Nvidia’s help. It would be hard to find a more technically oriented event focusing on automated driving anywhere in the world.

The scope of Nvidia’s comprehension of the task and CEO Huang’s grasp of the nitty-gritty details were manifest in the closing segment of his keynote yesterday, which focused on a virtualization exercise enabling a remote driver to guide a driverless vehicle. Huang called the audience’s attention to a fact that few have grasped: autonomous vehicles will require remote operation and control, a problem for which Nvidia has already created a solution.

The only issue unaddressed by Nvidia’s impressive presentation was the special role the company is playing in the automation of driving. At a time when car companies and their suppliers are just beginning to come to grips with the need to share information and even vehicle data, Nvidia is in a unique position to create a data sharing platform to further accelerate the advancement of autonomous vehicle technology.

Nvidia’s Huang gave only the slightest hint of such a prospect. With 370 partners – and growing – it will be hard for the company to ignore the opportunity. It may even be something that partners seek out or that regulators require. Shrinking the 1M-to-1T gap will require more than simulation – it will require collaboration and data sharing. Huang made a single comment regarding the prospect of “opening up” Nvidia’s data sets during an analyst Q&A at the end of the day yesterday. Time will tell whether this is part of Nvidia’s solution to what may be its greatest challenge ever.


FlexE at SoC IP Days with Open Silicon
by Daniel Nenni on 03-30-2018 at 12:00 pm

On Thursday, April 5th, the Design and Reuse SoC IP Days continue in Santa Clara at the Hyatt Regency (my favorite hangout). SemiWiki is a co-sponsor and I am Chairman of the IP Security Track. More than 400 people have registered thus far and I expect a big turnout; if you look at the program you will see why. You should also know that registration is free and the food at the Hyatt is exceptional, absolutely!

In addition to RISC-V, IP Management, Automotive IP, Design Automation, Security IP, eFPGA, NVM, and Video IP, there is an Innovative IP section that features the talk “FlexE IP – A New Flexible Ethernet Client Interface Standard,” presented by Vasan Karighattam, VP of Engineering, SoC and SSW, at Open-Silicon. Prior to Open-Silicon, Vasan led the development of next-generation architectures for MDSL/HDSL transceivers and service framers, and was a key participant in the standards activity for the IEEE 802.17 Resilient Packet Ring MAC in the Optical Components group at Intel Corporation.

Here is the presentation abstract:

The traditional hierarchical network model, with several explicit layers of devices, is evolving into a simpler model with a cloud services layer and a transport layer. FlexE (Flexible Ethernet) is a communications protocol published by the Optical Internetworking Forum (OIF). FlexE enables equipment to support new Ethernet connection types, simplifying packet transport, and allows data center providers to utilize optical transport network bandwidth in more flexible ways. FlexE dissociates the Ethernet rate on the client side from the actual physical interface (also called the server) by introducing a new shim between the IEEE-defined MAC and PCS layers. FlexE supports bonding, sub-rating, and channelization capabilities.

Open-Silicon’s FlexE IP is fully compliant with the OIF FlexE standard, supporting various MAC client rates. Built upon a flexible and robust architecture, Open-Silicon’s FlexE IP core is compatible with various MACs supporting different rates. The FlexE IP supports FlexE-aware, FlexE-unaware, and FlexE-terminate modes of mapping over the transport network. Designed to be easily synthesizable into many ASIC technologies, the Open-Silicon FlexE IP is uniquely built to work with off-the-shelf MACs from leading technology vendors. Using vendor-specific, proven MACs allows for fast and seamless integration of the FlexE IP into the technology of choice.

If you need a quick primer on FlexE here is an animated chalk talk from Sebastien Gareau, Hardware Systems Architect at Ciena:

https://www.youtube.com/watch?v=UQf1nFt3bdc

You can also see a deep dive FlexE presentation from the Internet Engineering Task Force from March 2017:

What is Flexible Ethernet

“FlexE refers to a generic mechanism defined in OIF-FLEXE-01.0 implementation agreement for supporting a variety of Ethernet MAC rates e.g.: – 200G MAC through bonding of 100GBASE-R PHYs – sub-rate of 50G MAC over a 100GBASE-R PHY. The FlexE group refers to a group of from 1 to 254 bonded 100G Ethernet PHYs. FlexE utilizes the FlexE group framework to provide the aforementioned flexible MAC rates.”
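To make the bonding and sub-rating arithmetic tangible, here is a small illustrative Python sketch. It assumes the OIF FlexE 1.0 calendar granularity of 20 slots of 5G per 100G PHY; the helper functions are hypothetical and are not Open-Silicon's IP interface.

```python
# Illustrative FlexE client-to-calendar arithmetic (assumes the OIF FlexE 1.0
# calendar: each 100G PHY contributes 20 slots of 5 Gb/s). Not a vendor API.

SLOT_GBPS = 5
SLOTS_PER_PHY = 20

def slots_needed(client_gbps):
    """Calendar slots a FlexE client of the given rate occupies."""
    return -(-client_gbps // SLOT_GBPS)   # ceiling division

def phys_needed(total_client_gbps):
    """100G PHYs that must be bonded into the FlexE group."""
    total_slots = slots_needed(total_client_gbps)
    return -(-total_slots // SLOTS_PER_PHY)

# Examples from the text:
print(phys_needed(200), "bonded 100G PHYs for a 200G MAC client")   # -> 2
print(slots_needed(50), "of", SLOTS_PER_PHY,
      "slots used when sub-rating a 50G client over one 100G PHY")  # -> 10

# Channelization: several clients sharing one group's calendar.
clients = [25, 50, 10]
print(sum(slots_needed(c) for c in clients), "slots for clients", clients)
```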

In addition to the rich technical content, more than 20 of your favorite IP companies will be on hand for networking, demos, and giveaways. Open-Silicon will showcase their Comprehensive IP Subsystem Solution for High-End Networking Applications and demonstrate its Comprehensive High Bandwidth Memory (HBM2) IP Subsystem Solution for 2.5D ASICs using FinFET technologies.

Remember, this is a free event and the food is exceptional. I hope to see you there!


Meeting the Challenges of National Defense Strategy
by Alex Tan on 03-30-2018 at 7:00 am

In February this year, the Department of Defense (DoD) submitted a $686.1 billion budget request for 2019 and published a National Defense Strategy outlining the overall spending for defense and military programs. The recently signed $1.3 trillion US spending bill included part of the funding. According to the DoD Defense Budget Overview document, defense-related program spending is anticipated to increase gradually over the next five years (refer to fig. 1).

As illustrated in fig. 2, the DoD’s strategic approach is to expand the competitive space through creative operating concepts, interoperability, an accelerated cycle of innovation, and revitalized alliances and new partnerships.

As a solution provider, Cadence has a strong presence in defense-sponsored programs and has established good relationships with relevant parties in both the congressional and DoD ecosystems. Notable examples include Cadence’s active participation in DARPA programs such as CRAFT (Circuit Realization At Faster Timescales), CHIPS (Common Heterogeneous Integration and IP Reuse Strategies), and ACT (Arrays at Commercial Timescales). Cadence holds a DoD Facility Security Clearance, offers “cleared” engineering resources, and maintains commercial relationships with virtually every company that designs or provides electronics to the DoD. Cadence’s ability to leverage commercial best practices within the confines and requirements of national defense has been key to its success in navigating this continuously evolving environment. In the public research domain, over 250 US universities (over a thousand worldwide) participate in the Cadence Academic Network program, which exposes roughly thirty thousand students annually to industry best practices and design technology.

Cadence Solution and Enablement Strategy
With its design automation tools, semiconductor IP, and solution flows, Cadence’s systems design enablement rests on three tenets:
-Acquire internal expertise on end-market requirements.
-Work directly with leading systems companies in developing products.
-Partner with other industry leaders to provide comprehensive solutions.

System Prototyping and Digital Twin
A glimpse into the DoD Defense Budget Overview details finds this statement: “Prototyping and experimentation should be used prior to defining requirements and commercial-off-the-shelf systems… A rapid, iterative approach to capability development will reduce costs, technological obsolescence, and acquisition risk.” Along this line, the concept of a digital twin, a digital replica of a physical product, can be applied. The digital twin captures a complete digital footprint of a product from design and development through the end of the product life cycle. The basic elements of a digital twin for electronics include the physical electronic system in real space, the system prototypes in emulation space, and the data connections that tie the virtual and real systems together. The success of a digital twin relies upon mission data being collected and transmitted to it; proper boundary conditions and capture of the mission profiles and environmental characteristics are therefore crucial to prevent unintentionally blocking critical insights needed to improve system efficacy or to avoid field failures. To support this, Cadence offers an array of capabilities for electronics systems development program execution, including:

– A common environment for chip, package, and board system development.
– Incremental parasitic extraction as physical layout progresses.
– Tools with foundry-approved signoff engines, yielding fewer iterations.
– A metric-driven verification methodology that tracks sign-off progress.
– Integrated hardware and software development to reduce integration time.
– Support for industry-proven IP to speed the design cycle and lower costs.


Early prototyping is also necessary to prevent surprises. System function, size, weight, and power are the measured metrics. Cadence has developed a new and improved system prototyping methodology that folds in emulation and analysis steps as well as an explicit go/no-go decision before committing an idea to a downstream, physical prototyping step. Emulation provides capacity, accuracy, performance, and a link to physical analysis; it enables running application software on hardware designs resident in the emulator. The Cadence Palladium Z1 Enterprise Emulation Platform provides the enterprise-level reliability and scalability to accelerate system verification at different levels (IPs, subsystems, chips).

Hardware System Co-design
Taking further note from the DoD document: “Platform electronics and software must be designed for routine replacement instead of static configurations that last more than a decade… will realign incentive and reporting structures to increase speed of delivery, enable design tradeoffs in the requirements process…”

Within this context, traditional design implementation takes a segmented approach (e.g., chip, package, PCB), and such disjointed development normally yields subpar results. With the Cadence Virtuoso System Design Platform, design teams can co-design and co-analyze across the different domains, saving time and effort while producing higher-performance, cost-competitive systems. For example, in today’s analog and RF designs, failing to account for PCB- or package-level effects greatly increases the chance of chip failure; with Virtuoso, such system-level assessment can be done before the chip is completed.

The new National Defense Strategy provides challenging guidelines to revamp national defense programs and reiterates the impact of rapid technological advancement on the security environment. According to the DoD, technological advantage depends on a healthy and secure innovation base that includes both traditional and non-traditional defense partners, and the department continues to streamline processes so that new entrants and small-scale vendors can provide cutting-edge technologies. Cadence has been a direct performer on several DoD projects, has collaborated with many DoD suppliers, and can act as a key partner in meeting the demanding requirements of this agenda. For more details on such engagements and Cadence solutions, please refer to the whitepaper HERE.


Edge Devices Rely on Intelligent MEMS Based Sensors
by Tom Simon on 03-29-2018 at 12:00 pm

MEMS sensors play a huge role in intelligent systems these days. Mobile and IoT devices would essentially be blind if not for the rich variety of MEMS sensors integrated into them. The MEMS sensor market is growing rapidly, topping $10B in 2016 and slated to exceed $20B by 2020. MEMS is also growing in the RF market, where it provides alternatives to passive and active electronic devices. Some surveys show that RF MEMS surpassed 6 billion units in 2016; in comparison, the next most popular MEMS devices are microphones, which shipped 4.5 billion units in the same year.

A pair of white papers from Mentor discuss the growing importance of the MEMS market and the design considerations necessary to integrate MEMS into intelligent systems. MEMS are used in almost every market: telecom, medical, industrial, defense, consumer, automotive, and aeronautics, among others. As you might imagine, the consumer market accounts for the largest share, coming in at $28B in 2016. Impressive, yet not surprising considering how many are used in phones and other gadgets.

An important trend in MEMS-sensor-based systems is combining sensors into one package and implementing sensor fusion there as well. Sensor fusion not only merges sensor data so it is easier to process downstream, it also helps weed out artifacts and reduce noise in sensor data. It’s not uncommon to have 3-axis gyroscopes, 3-axis accelerometers, and temperature, barometric, and magnetic sensors all in one unit.

MEMS sensors are also outperforming their predecessors. One of the most interesting examples is an accelerometer from Memsic that has no moving parts. It relies on detecting the motion of heated gas molecules induced by acceleration, and consists of a minute flat chamber with a heater at the center and several distributed thermal sensors. In the absence of motion, the thermal sensors see a uniform temperature gradient, which becomes disrupted by acceleration. As you can imagine, the processing to calibrate and convert the temperature values to acceleration is an essential part of the total solution. This device has high vibration immunity and up to 50,000 g of shock tolerance, specifications that mechanical alternatives would be hard-pressed to match.
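As a purely illustrative sketch of the signal processing involved (not Memsic's actual algorithm; the linear model and every constant below are assumptions), the Python below converts a differential temperature reading from two thermal sensors on opposite sides of the heater into an acceleration estimate using a calibration fit.

```python
# Illustrative conversion of thermal-accelerometer readings to acceleration.
# The linear model and all constants are assumptions for demonstration only;
# a real device needs device-specific calibration and compensation.

def calibrate(points):
    """Fit delta-temperature -> acceleration as a line through the origin.
    points: list of (delta_temp_C, accel_g) calibration measurements."""
    num = sum(dt * g for dt, g in points)
    den = sum(dt * dt for dt, _ in points)
    return num / den        # sensitivity in g per degree C of asymmetry

def accel_from_temps(t_left_C, t_right_C, sensitivity_g_per_C):
    """Acceleration estimate from the two thermal sensor readings."""
    delta = t_right_C - t_left_C      # zero when the gas profile is symmetric
    return delta * sensitivity_g_per_C

# Hypothetical calibration points: 0 g and 1 g along the sensing axis.
k = calibrate([(0.00, 0.0), (0.80, 1.0)])
print(f"sensitivity ~ {k:.2f} g/degC")
print(f"estimated acceleration: {accel_from_temps(25.10, 25.50, k):.2f} g")
```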

Succeeding with any MEMS-based intelligent sensor design requires the highest level of integration possible. This applies not only to the design itself, but also to the methodology used to deliver the completed design. Ideally, the MEMS and circuit layout should be created in the same environment and then simulated as one.

Mentor’s Tanner tools make it possible to create a schematic for the MEMS and circuit together, referencing the appropriate models for each. The MEMS models can be defined with the System Model Builder, using analytical equations in SPICE or Verilog-A to capture the electrical behavior of that portion of the design. The combined schematic can then be simulated to verify design functionality and performance.

The Mentor white papers I mentioned above describe a flow that enables design and simulation from a common platform. It’s well understood that 3D silicon shapes are derived from the 2D layers created in layout tools. However, for mechanical designs, such as MEMS, people often start with a 3D model. The drawback for MEMS flows is that while you can perform 3D simulation on such a model, it is cumbersome to convert it into a 2D representation suitable for mask preparation. Masks are needed to complete the MEMS fabrication process in just the same way they are needed for semiconductor fabrication.

Mentor’s Tanner tools take a unique approach to solve this dilemma. They have designers start by defining a series of fabrication steps for the MEMS design. Next comes creating the masks in the layout tool, after which the fabrication process can be ‘simulated’ to ensure the resulting structure matches the intended design. At the same time the 3D model created can be exported to 3D modeling tools for that phase of the analysis.
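The fabrication-first flow can be pictured as building a 3D layer stack out of 2D mask geometry, one process step at a time. The Python sketch below is only a conceptual stand-in with invented step names, materials, and thicknesses; it is not the Tanner tools' actual interface.

```python
# Conceptual model of a fabrication-step-driven MEMS flow: 2D mask shapes
# plus an ordered list of process steps yield a simple 3D layer stack.
# Step names, materials, and dimensions are illustrative assumptions.

process = [
    {"step": "deposit", "material": "poly-Si", "thickness_um": 2.0},
    {"step": "etch",    "material": "poly-Si", "mask": "PROOF_MASS"},
    {"step": "deposit", "material": "oxide",   "thickness_um": 0.5},
    {"step": "etch",    "material": "oxide",   "mask": "RELEASE"},
]

masks = {   # 2D rectangles (x0, y0, x1, y1) in microns, as drawn in layout
    "PROOF_MASS": [(0, 0, 100, 100)],
    "RELEASE":    [(10, 10, 90, 90)],
}

def simulate(process, masks):
    """Walk the process steps and report a (very simplified) resulting stack."""
    stack = []
    for s in process:
        if s["step"] == "deposit":
            stack.append({"material": s["material"], "thickness_um": s["thickness_um"]})
        elif s["step"] == "etch":
            # Pattern the top layer with the named mask's 2D openings.
            stack[-1]["etched_with"] = s["mask"]
            stack[-1]["openings"] = masks[s["mask"]]
    return stack

for layer in simulate(process, masks):
    print(layer)
```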

The two white papers from Mentor on the Tanner tools for MEMS-based circuits describe the flow more thoroughly. MEMS-based IC designs have brought about dramatic innovation in every sector where they are used, from drones to automobiles and even rockets, all of which rely on robust and compact sensors. Novel sensors and applications will continue to become available, and the companies that can most efficiently integrate and deliver solutions using them will be the next round of winners in the ongoing evolution of intelligent, sensor-based products.


The Future of Verification Management
by Bernard Murphy on 03-29-2018 at 7:00 am

One of the great aspects of modern hardware verification is that we keep adding new tools and methodologies to support different verification objectives (formal, simulation, real-number simulation, emulation, prototyping, UVM, PSS, software-driven verification, continuous integration, …). One of the downsides to this proliferation of verification solutions is that it becomes increasingly onerous to consolidate all the results from these multiple sources (and multiple verification teams) into the unified view you ultimately need: have we covered all testplan objectives to an acceptable level?

You could administer the combination of all this data through mounds of paperwork and presentations (not advisable) or, more realistically, through scripting, spreadsheets, and databases. I’m sure some of the biggest engineering organizations have the time, skills, and horsepower to pull that off. For everyone else this task can be daunting. It isn’t as simple as rolling up dynamic coverage from RTL sims. Consider that some of the testing may be formal-based; how do you roll that up with dynamic coverage? (Answer: by showing combined coverage results, dynamic and formal, so that as you move up through the hierarchy you see a combined coverage summary.) Some results may come from emulation, some from mixed-signal sims, some may be for power verification, some for safety goals, some from HLS, and some may relate to application software coverage. You can see why traditional RTL sim rollup scripting may struggle with this range.
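A minimal illustration of the rollup idea is sketched below in Python; the testplan items, engines, and numbers are invented for the example and do not reflect vManager's data model.

```python
# Illustrative rollup of coverage results from multiple verification engines
# against one executable testplan. All names and numbers are invented.

results = [
    # (testplan item, engine, covered bins, total bins)
    ("pcie_link_training", "simulation", 180, 200),
    ("pcie_link_training", "formal",      15,  20),   # assertions proven
    ("low_power_entry",    "emulation",   40,  60),
    ("low_power_entry",    "simulation",  55,  60),
    ("safety_lockstep",    "formal",       8,  10),
]

def rollup(results):
    plan = {}
    for item, engine, covered, total in results:
        entry = plan.setdefault(item, {"by_engine": {}, "covered": 0, "total": 0})
        entry["by_engine"][engine] = (covered, total)
        entry["covered"] += covered          # combined summary, not a true merge
        entry["total"] += total
    return plan

for item, entry in rollup(results).items():
    pct = 100 * entry["covered"] / entry["total"]
    engines = ", ".join(f"{e}: {c}/{t}" for e, (c, t) in entry["by_engine"].items())
    print(f"{item:20s} {pct:5.1f}% combined   ({engines})")
```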

Larry Melling (Director product management and marketing at Cadence) shared with me how vManager automates this objective and how some customers are already using the solution. Briefly, vManager is a system to gather functional verification results from a variety of platforms and objectives and score progress against an executable testplan. You can tie in all the Cadence verification platforms, from formal to prototyping, and objectives from functional to safety, to power and application software coverage. The system can also tie into requirements tracing and functional spec annotation, so you can see side-by-side how well you are meeting expectations for the product definition and provide spec updates/errata where these may deviate.

Larry pointed to several examples where vManager is in active use. Infineon already uses the tool to combine formal and simulation coverage results. I’ve written about this before. I don’t know of any solution (yet) to “merge” these coverage results but seeing them combined is a good enough way to judge whether the spirit of good coverage has been met. Infineon also notes that seeing these results together helps them optimize the split between formal and simulation effort, so that over time they improve the quality of results (more formal, where appropriate) while also reducing effort (less simulation, where appropriate).

ST, who are always very careful to quantify the impact of new tools and methodologies, showed that adoption of vManager with metric-driven verification ultimately was able to reduce verification effort by more than 50% over their original approach. They added that verification productivity increased, they had higher quality RTL and that tracking multi-site project status was much easier than in previous projects.

One last very interesting example. In software product development there have been aggressive moves to agile design and continuous integration (CI) methods. You might not think this has much relevance to hardware design; however, RAD, a telecom access solutions provider based in Israel, thinks differently. They do most of their development in FPGAs, where lab testing plays a big role and regression data is not always up to date with the latest source-code fixes. To overcome this problem, RAD has switched to a CI approach based on Jenkins for its ability to trigger regressions automatically (at scheduled times) on source-code updates.

RAD uses vManager together with Jenkins to run nightly regressions on known-good compilations, thus ensuring that regression results are always up-to-date with the latest source-code changes. They commented on a number of advantages, but one will resonate with all of us – the impact of the well-known “small” fix (trust me – this won’t break anything). A bug caused by such a fix previously could take an indeterminate amount of time to resolve. Now it takes a day – just look at the nightly regression. That’s pretty obvious value.
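The pattern is simple enough to sketch. The Python below is illustrative only (the build-record file and regression command are hypothetical, not RAD's actual setup); it shows the essence of a nightly job that regresses only the latest known-good compilation so results always track current source.

```python
# Sketch of a nightly regression trigger keyed to the last known-good build.
# The build-record file and the regression command are illustrative only.
import json

def latest_known_good(build_log="builds.json"):
    """Return the newest build record whose compile and smoke test passed."""
    try:
        with open(build_log) as f:
            builds = json.load(f)            # e.g. [{"id": 1412, "passed": true}, ...]
    except FileNotFoundError:
        return None
    good = [b for b in builds if b["passed"]]
    return max(good, key=lambda b: b["id"]) if good else None

def run_nightly_regression():
    build = latest_known_good()
    if build is None:
        print("no known-good build tonight; skipping regression")
        return
    # A real flow would invoke the regression manager here and publish the
    # results for the morning review; we only print the hypothetical command.
    print(f"would run: run_regression --build {build['id']} --suite nightly")

if __name__ == "__main__":
    run_nightly_regression()
```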

A quick note on who might need this and how disruptive adoption might be. Larry acknowledges that if you’re building IoT sensor solutions (100K gates), the full scope of vManager (multi-engine, multi-level) is probably overkill. Solutions like this are for big SoCs supported by large development and verification teams. Also, this solution is designed to fit comfortably into existing infrastructures. No need to throw out what you already have; you can adopt vManager just to handle those components for which you don’t have a solution in place – such as combining formal and simulation coverage results.

You can learn more about vManager HERE and read a useful white paper about metric-driven verification HERE.


Webinar: Fastest Lowest-Cost Route to Developing ARM based Mixed Signal SoCs
by Daniel Nenni on 03-28-2018 at 7:00 am

When it comes to building edge devices for the Internet of Things (IoT), you don’t want to break the bank prototyping an idea before diving into the deep water. At the same time, if your idea is to shrink an edge device down to its smallest dimensions, lowest power, and lowest cost, you really want to be able to prototype your design with more than a collection of discrete components on a printed circuit board. You want to know that your design can be cost-effectively implemented on a single SoC and you want to know that it’s going to work. So, what’s an inventor / entrepreneur to do?

With the advent of the IoT, Arm started working hard on a vision they call ‘the path to a trillion connected devices’. As part of this vision, Arm knew it had to vastly increase the number of designs, and thus designers, using Arm processors. One of their first strategies to enable this (circa 2015) was to announce a Fast-track license to make it easier for designers to use their Cortex-M0 processor. The M0 is Arm’s smallest core, which also happens to be ideal for low-power, low-cost IoT edge devices.

Around the same time, Mentor Graphics acquired Tanner EDA, a software company specializing in low total-cost-of-ownership (TCO) design automation software for analog and mixed-signal designs. To Arm and Mentor, this was a match made in heaven.

By 2016 the parties were working together to offer a design solution that would enable companies, regardless of size, to quickly and cheaply design, prototype, and implement IoT SoC devices. In 2017, Arm went on to announce a zero-dollar license fee for its Cortex-M0 and Cortex-M3 processors and subsystems and added design services to help companies that had never done an IC design before, paving the way for individuals and smaller companies to get their feet wet in the growing IoT market space.

The beauty of the Arm-Mentor relationship is that the IP provided by Arm is well known and proven by hundreds of companies, with over 20 billion Cortex-M0 and Cortex-M3 processors shipped to date. Similarly, Tanner EDA has a long history of developing CAD tools for analog/mixed-signal designers, dating back to its start as a business unit of Tanner Research in 1988. From the beginning, their vision has been to provide designers with easy-to-learn, highly interoperable tools at a low TCO. A perfect match for design teams wanting to break into the IoT market.

If you’ve got a great IoT idea and this is starting to whet your appetite, you should view a recently recorded webinar hosted by Arm and Mentor (link below). The webinar begins with Phil Burr, senior product marketing manager at Arm, describing some of the key attributes of both the Cortex-M0 and Cortex-M3 processors and associated subsystems that let designers quickly combine analog sensors with Arm processors and embedded software.

Phil goes on to explain Arm’s DesignStart Eval and DesignStart Pro programs. The DesignStart Eval program is used to design and then prototype an SoC using an FPGA-based solution with real hardware/software debug capabilities. Assuming success, the next move is to use the DesignStart Pro program, which takes the design into a real SoC. In both cases, there are no up-front licensing fees for using the Arm processors (M0 or M3); business is done through a royalty-based model once the design reaches certain volumes, indicating success.

The second half of the webinar is presented by Jeff Miller, lead strategist managing Tanner’s analog/mixed-signal products at Mentor, a Siemens business. Jeff picks up where Phil left off and takes the audience through a demonstration of how one might put together a typical IoT edge device using the Tanner tools and Arm processors. Jeff starts with how to use the Tanner tools to design an analog MEMS-based pressure sensor and then integrate it with the Arm Cortex-M0 processor and subsystem using the Arm Advanced Peripheral Bus (APB). He gives a nice step-by-step overview of how the various parts (the MEMS device and associated analog circuitry, analog amplifier, A-to-D converter, and digital control blocks) are modeled and assembled as a subsystem, and then how that subsystem is integrated and simulated with the Cortex-M0 core and embedded software. Jeff’s presentation focuses mainly on design and verification of the system, presumably leaving implementation for a future webinar.

Core to Tanner’s design solution is the ability to use a variety of different modeling techniques that let designers quickly assemble and test their ideas through simulation. Jeff makes use of SoftMEMS’s finite element modeling (FEM) software to simulate the MEMS-based pressure sensing device. The output from this simulation is then used to create a Verilog-A model, which is employed in a mixed-mode simulation using Tanner’s T-Spice simulator and Mentor’s ModelSim logic simulator.

T-Spice handles the MEMS Verilog-A model and the analog circuitry (amplifier and A-to-D converter) while ModelSim handles the Verilog RTL model for the sensor’s controller block. Finally, this subsystem is replaced with a ‘stand-in’ behavioral model that allows it to be simulated with the Arm Cortex-M0 core and an embedded C program, all within ModelSim. It’s a very nice demo of how complex systems can be assembled with relative ease, absolutely.
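To give a flavor of the kind of behavioral stand-in used at that top level, here is a Python sketch of a pressure-sensor front end and A-to-D converter. It is illustrative only: the webinar flow uses Verilog-A and Verilog models, and the transfer function and parameters below are invented.

```python
# Illustrative behavioral model of a pressure-sensor front end and ADC,
# of the kind that stands in for the detailed analog/MEMS subsystem at
# system level. The transfer function and parameters are invented.

def mems_bridge_mV(pressure_kPa, sensitivity_mV_per_kPa=0.05, offset_mV=1.0):
    """MEMS sensing element: pressure to a small differential voltage."""
    return offset_mV + sensitivity_mV_per_kPa * pressure_kPa

def amplifier(v_mV, gain=200.0):
    """Analog front-end amplifier; returns volts."""
    return v_mV * gain / 1000.0

def adc(v, vref=3.3, bits=12):
    """Ideal A-to-D converter: clamp and quantize."""
    v = min(max(v, 0.0), vref)
    return round(v / vref * ((1 << bits) - 1))

# What firmware on the Cortex-M0 would read over the APB for a few pressures:
for p in (0.0, 50.0, 101.3, 200.0):
    code = adc(amplifier(mems_bridge_mV(p)))
    print(f"pressure {p:6.1f} kPa -> ADC code {code:4d}")
```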

The nice part is that all of this can be done on a reasonable budget and could be used to prototype your design to the point where it could be shown to investors for your first round of funding. From there the sky’s the limit. I would encourage you to watch the webinar for yourself and learn just how easy it is to get your first IoT SoC up and running. In the meantime, if you are interested in exploring further what Arm and Mentor have to offer, you can check out the links below for more information.

See Also:

Webinar: Fastest, Lowest-Cost Route to Developing Mixed Signal SoCs
Arm Cortex-M products and services
Mentor Tanner products and services


Uber’s Epic Fail Changes Everything
by Roger C. Lanctot on 03-27-2018 at 4:00 pm

This morning at Nvidia’s AI and Deep Learning Conference, GTC 2018, CEO Jen-Hsun Huang will give a keynote in which he will tout the company’s extraordinary progress in fostering and advancing the cause of artificial intelligence and deep learning along with the correlated autonomous driving industry. Hundreds of companies of all kinds are leveraging Nvidia’s Drive PX platform and technology as part of their efforts to master automated driving.

– Nvidia CEO Jen-Hsun Huang keynote – 9 a.m. PT – March 27 – live Webcast

To hear Nvidia tell it, you’d think Nvidia is single-handedly enabling automated driving, but of course it takes an ecosystem, and there are hundreds of established and startup companies and Nvidia competitors contributing to the cause. What you won’t hear in this morning’s keynote is the fact that Nvidia has paused its own autonomous vehicle testing activities in the wake of the fatal crash of an Uber autonomous vehicle in Tempe, Ariz., last week.

The news of the testing pause was revealed on the first day of GTC 2018 yesterday as part of the automotive track at the event – though the announcement was not formal. Nvidia took the step out of respect for the woman who was killed in the fatal crash and out of concern for what the post-crash investigation might find.

The fatal crash in Tempe is evolving into an epic fail for Uber with wide-ranging implications for the broader autonomous vehicle testing industry. Uber is an Nvidia customer. While Nvidia quietly “paused” its own autonomous vehicle testing, executives at Velodyne, a company which provided Lidar laser sensing technology used on the Uber test vehicle, noted that they were cooperating with investigators as part of the effort to determine what went wrong.

The significance of the Uber crash is that it is widely considered to be a failure of either the hardware or the software within the Uber system. Dashcam video from the crash captures the safety driver distracted at the point of impact with the pedestrian while showing the pedestrian located prominently in the middle of the road walking a bicycle laden with bags.

The conclusion of technical and non-technical observers is that the Uber system ought to have perceived the pedestrian and either issued a warning to the safety driver or made an attempt to slow down – neither of which occurred. Ergo, the failure has been attributed to the machine NOT the driver or the pedestrian – even though the driver should not have been distracted and the pedestrian should not have been in the middle of the road.

This is a seismic development for suppliers like Nvidia and Velodyne. While it is most likely that Uber’s software is the chief suspect in the incident, it is not possible to immediately exonerate the hardware providers. Uber has done nothing less than make Nvidia and Velodyne accessories to manslaughter.

No amount of regulation or legislation can fix this. Nvidia, and maybe Velodyne, can be expected to review their policies with regard to their developer programs – as will competitors. Nvidia, in particular, goes above and beyond the call of duty in offering hardware and software developer kits and related databases and server resources. There is no doubt that the epic Uber failure will lead to a reassessment of those programs with an eye toward helping partners avoid a repetition of Uber’s fatal event.

Nvidia has no choice but to take steps in the context of the fear, uncertainty and outrage likely to be stimulated by a robot car killing a human being. This is precisely the type of event that is capable of slaying a nascent industry in the crib.

Nvidia is not alone in fostering autonomous vehicle testing. Startups, universities, car companies, semiconductor companies, wireless carriers, Tier 1 auto industry suppliers, Baidu, Alibaba and maybe some day Apple and Amazon have all entered the self-driving car race – but Uber, alone, has blood on its hands.

Uber has hereby introduced a substantial coterie of investors to the liability exposure inherent in this form of testing – sending a frisson of fiduciary fright up the spines of investors in competing platforms and systems. This widespread concern arrives even as viewers of the video continue to shake their heads at the failure of the Uber system to perceive the pedestrian – suggesting jaw-dropping incompetence is to blame somewhere along the value chain.

The New York Times reports that Uber’s new CEO, Dara Khosrowshahi, was in the process of re-evaluating or terminating the self-driving car program. Maybe he knew something we all should have known, that the program was fatally flawed.

It is remarkable that it was a well-funded organization like Uber that brought this setback to pass, rather than a tiny startup like Comma.ai or any of the dozens of companies fitting aftermarket kit to normal vehicles. With dozens of autonomous driving platforms competing to solve the automated driving challenge, one can expect widespread efforts within the industry to improve oversight and development tools.

Companies like Nvidia and Velodyne cannot count on regulators and legislators to “figure out” what is needed. Regulators and legislators are more likely to get this one wrong than to propel the industry forward with new understanding. Expect the autonomous vehicle supplier community to step forward and provide enhanced support and guidance to partners. No one wants to be party to the next Uber event – which may well be inevitable.

Given the growing recognition of the importance of autonomous driving simulation, there will be growing interest in greater sharing of data and “learnings” from events such as the Tempe crash. The other important takeaway, though, is that fellow Tempe tester Waymo, which claims billions of miles of simulated driving and more than five million actual on-road miles traveled, may well be the least inclined to share what it has learned. Uber’s failure amplified for the entire developer community the size of the development delta between Waymo and the rest of the industry. Without help from suppliers like Nvidia, and in the context of weak-kneed investors, that delta is only likely to grow in the wake of Uber’s epic fail.


Configurability for Embedded FPGA Hard IP
by Tom Dillinger on 03-27-2018 at 12:00 pm

IP providers need to evaluate several complex engineering problems when addressing customer requirements – perhaps the most intricate challenge is the degree of IP configurability available to satisfy unique customer applications.
Continue reading “Configurability for Embedded FPGA Hard IP”