
Top Three Reasons to Attend the Synopsys Fusion Compiler Event!

by Daniel Nenni on 11-22-2019 at 10:00 am

As a professional semiconductor event attendee, I can pretty much tell if an event will be successful by looking at the agenda. What I look for is simple: customer presentations. Not company presentations or partner presentations, but actual customer case studies presented by name-brand companies. For this event, Google, Intel, and Samsung stand out for me.

Intel because they have gone through some major disruptions in the last year. Example: hiring Jim Keller as senior vice president in the Technology, Systems Architecture and Client Group (TSCG) and general manager of the Silicon Engineering Group (SEG). Jim is a very disruptive personality, and that is exactly what Intel needed in the design ranks, in my opinion.

Google because they are doing some extremely clever stuff! The whole Google approach to chip design is also very disruptive. If you ever get a chance to participate in a Google chip project, do it. If you ever get to hear a Google chip person speak, do not miss it. Seriously, I speak from experience on both counts.

Samsung is a bleeding-edge company in both logic and memory chips. They design a very wide spectrum of silicon and systems and literally go where no chip designers have gone before. Always worth listening to Samsung.

Synopsys’ Fusion Compiler was announced a year ago, and from what I have heard it is doing quite well, delivering on the promises Synopsys made from the beginning. I know of a very large SoC that was taped out recently using Fusion Compiler, and there were no complaints, which is very rare in this business. In fact, I was told that Synopsys support was excellent for this project.

Fusion Compiler Technology Symposium
Wednesday, December 4, 2:00 PM, Synopsys Building 1

Since its launch one year ago, Synopsys’ Fusion Compiler™ RTL-to-GDSII product has delivered on its promise to help digital designers efficiently bring their differentiated products to market faster, realizing their Simply Better PPA™ goals.

But you don’t have to take our word for it.

Come hear from industry leaders including Arm, Google, Intel, Renesas, and Samsung at the Fusion Compiler Technology Symposium as they discuss today’s design challenges and how these challenges are being solved with Fusion Compiler.

AGENDA

2:00 PM Registration and Refreshments

3:00 PM Presentations

5:00 PM Networking Reception and Entertainment

LOCATION

Synopsys – Building 1 at the New Pathline Park Complex

800 N. Mary Ave.

Sunnyvale, CA 94085

About Synopsys
Synopsys, Inc. (Nasdaq: SNPS) is the Silicon to Software partner for innovative companies developing the electronic products and software applications we rely on every day. As the world’s 15th largest software company, Synopsys has a long history of being a global leader in electronic design automation (EDA) and semiconductor IP and is also growing its leadership in software security and quality solutions. Whether you’re a system-on-chip (SoC) designer creating advanced semiconductors, or a software developer writing applications that require the highest security and quality, Synopsys has the solutions needed to deliver innovative, high-quality, secure products. Learn more at www.synopsys.com.


U.S.-China trade war continues

by Bill Jewell on 11-22-2019 at 6:00 am

Electronics production

The trade dispute between the U.S. and China continues to drag on. According to Reuters, U.S. President Donald Trump recently threatened to raise tariffs further on Chinese imports if no deal is reached. Tariffs affecting most consumer electronics imports from China are scheduled to go into effect on December 15, according to a timeline from China Briefing.

The trade war has already had a significant impact on U.S. electronics imports. In the first three quarters of 2019, total U.S. electronics imports dropped 6% versus the first three quarters of 2018. Imports from China dropped 12%. China is still by far the largest source of electronics imports, accounting for 54% in 1Q-3Q 2019. Imports from the second largest source, Mexico, dropped 3%. Two countries benefiting from the U.S.-China dispute are Vietnam (third largest) and Taiwan (fourth largest): U.S. electronics imports are up 59% from Vietnam and 64% from Taiwan versus a year ago. All other significant sources of U.S. electronics imports were down from a year ago, with the biggest declines coming from South Korea (down 32%) and Malaysia (down 29%).

Numerous companies have shifted production out of China in recent months. Samsung ended mobile phone production in China, moving to countries such as Vietnam and India. Inventec Corp. plans to shift production of notebook PCs (including HP-branded PCs) for the U.S. market from China to Taiwan. A CNBC article cites Vietnam, Taiwan and Thailand as the biggest beneficiaries of the production shifts.

Electronics production data by country demonstrates the shifting production. China’s electronics production grew 12% to 15% year-over-year in each month of 2018; in 2019, growth has ranged from 7% to 11%. Taiwan’s production has boomed in 2019, reaching 24% three-month-average growth versus a year ago in August. Vietnam has experienced accelerating electronics growth in 2019, reaching 12% in October. U.S. electronics production showed modest growth in the 5% to 7% range for most of 2018 and 2019 but slipped to 2% in September. Thus, it appears the U.S.-China trade dispute has not been a significant boost to U.S. electronics manufacturing. Other major electronics-producing countries have been weak lately: South Korea, Japan and the 28 countries of the European Union (EU28) have been flat to negative for most of 2019.

Although the shift of electronics production from China to other Asian countries has been accelerated by the current trade dispute, the trend has been in place over the last few years. Multinational companies are moving production to Vietnam and other countries due to lower labor costs, favorable trade conditions and openness to foreign investment.

How is the trade dispute affecting overall electronics in 2019? Key electronic equipment markets remain weak. Gartner projects combined unit shipments of PCs and tablets will decline 3.1% in 2019, followed by a 2.4% decline in 2020. IDC forecasts a 2.2% drop in smartphone units in 2019. Smartphones are expected to grow 1.6% in 2020, helped by the emerging 5G market. The impact of the trade dispute on PC, tablet and smartphone shipments is difficult to measure. These are mature markets which have been weak the last few years.

How will the U.S.-China trade dispute affect the economy and electronics going forward? Goldman Sachs estimated the trade dispute has cut 2019 GDP by 0.5% in the U.S. and 0.7% in China. The Consumer Technology Association (CTA) estimates tariffs on China have cost the U.S. consumer technology industry almost $12 billion since July 2018.

U.S. consumers have not yet seen tariff-driven price increases on most electronics. However, unless a resolution is reached, on December 15 a 15% tariff will be applied to U.S. imports from China of mobile phones, TVs, digital cameras, set-top boxes, laptop PCs, tablets, video monitors, headphones, video game consoles, smartwatches, fitness trackers and other consumer products. Consumers are conditioned to expect a general trend of lower prices and higher functionality for electronics. If implemented, the 15% tariff will not affect the 2019 holiday season, but it will negatively impact U.S. demand for consumer electronics in 2020.


MIPI gaining traction in vehicle ADAS and ADS

by Tom Simon on 11-21-2019 at 10:00 am

I am old enough to remember when cars did not come with air conditioning unless you purchased it as an option. Of course, now you can’t even find a car that doesn’t come with air conditioning. So it goes with advanced driver assistance systems (ADAS): they are becoming more and more common and will certainly become baseline features in cars of the future. In all likelihood, autonomous driving systems (ADS) will follow the same path as they become more feasible and affordable. Both of these systems require video data from sensors, heading either to an internal display or to a computer for processing.

The automotive environment brings with it a number of specialized requirements for these systems, such as low power and high reliability in a challenging physical environment. They must also be cost-effective. System designers for ADAS and ADS have been turning to existing standards for transferring video information in mobile systems, which share many of the same requirements. Specifically, there has been a lot of interest in MIPI® Alliance specifications. The proven technology found in the well-established D-PHY℠ for connecting high-resolution cameras, vision processors and displays has become a popular solution for in-vehicle video needs.

Mixel, a leading provider of mixed-signal mobile IP, has published an article discussing the application of their D-PHY IP in GEO Semiconductor’s GW5 CVP product family. MIPI D-PHY is a source-synchronous PHY that uses one clock lane and a varying number of data lanes, with two differential pins per signal. It is a widely adopted standard that has been in use since 2009. D-PHY can be used with MIPI CSI-2℠, DSI℠ and DSI-2℠ to connect to cameras and displays. Mixel’s D-PHY v2.1 TX and RX IPs can handle 2.5Gbps per lane, up to 4 lanes, to achieve 10Gbps. The TX and RX IPs are AEC-Q100 compliant for auto-grade 0/1/2 temperature ranges.

In GEO Semiconductor’s product, they used D-PHY v1.1 TX and RX with 4 lanes at 1.5 Gbps, for a total of 6Gbps. The GEO GW5400 includes in-camera vision processing to enable ADAS functionality. The GEO GW5 supports up to 8-megapixel sensors and includes GEO’s eWARP® geometric processor, an innovative High Dynamic Range (HDR) Image Signal Processor (ISP), and 2D graphics functionality. The GEO GW5 has 2 RX interfaces, supporting dual sensors; however, virtual channels can be used to connect more sensors. An HDR feature allows each RX interface to receive images from multiple HDR sensors and combine them into a single high-dynamic-range video stream.
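As a quick sanity check of the throughput figures above, aggregate D-PHY bandwidth is simply the per-lane rate multiplied by the number of data lanes (a back-of-the-envelope sketch using the lane counts and rates quoted in the article):

```python
# Aggregate MIPI D-PHY bandwidth: per-lane rate (Gbps) x number of data lanes.
def aggregate_gbps(per_lane_gbps: float, lanes: int) -> float:
    return per_lane_gbps * lanes

# Mixel D-PHY v2.1: 2.5 Gbps/lane across up to 4 lanes
print(aggregate_gbps(2.5, 4))  # -> 10.0 (Gbps)

# GEO GW5 configuration, D-PHY v1.1: 1.5 Gbps/lane across 4 lanes
print(aggregate_gbps(1.5, 4))  # -> 6.0 (Gbps)
```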

The Mixel PHY IP comes with a BIST engine that can be used for IC, board or system tests. Mixel has had silicon success across multiple nodes at a variety of foundries and reports widespread deployment of their IP in ADAS and ADS chipsets.

MIPI interfaces will increasingly play a major role in ADAS and ADS systems. In the future, in addition to radar, LIDAR and video sensor input, ADAS and ADS will also rely on data links between vehicles, and between vehicles and their surroundings. Sensor data rates and resolutions will increase over time as well. From reading the Mixel article, it is pretty clear that they intend to stay at the forefront of the technology. The article, which can be found here, also goes into more detail about the specifics of their offering and the GEO Semiconductor products that employ their IP.

Also Read:

A MIPI CSI-2/MIPI D-PHY Solution for AI Edge Devices

FD-SOI Offers Refreshing Performance and Flexibility for Mobile Applications

New Processor Helps Move Inference to the Edge


Mustang Mach-E!

by Roger C. Lanctot on 11-21-2019 at 6:00 am

Ford Motor Company detonated an epochal explosive in the form of an electrified Mustang SUV on the eve of the Los Angeles Auto Show last night. The move marked an industry altering turning point as auto makers commence the process of electrifying their internal combustion engine line-ups in anticipation of a global market embracing electrification.

The move came three days ahead of a rumored electrified pickup truck announcement expected from Tesla Motors and follows by one year Rivian’s announcement of plans for its own electrified pickup truck. Of course, the significance of a rush to electrify pickup trucks cannot be lost on Ford, which makes the F-150 – the best-selling vehicle of any kind in the U.S. for the past 36 years.

Ford sells nearly a million F Series pickup trucks every year and has tipped its plans for a full electric version sometime in late 2020 or early 2021. That is about the same timing that Rivian (in which Ford is an investor) has discussed for its own EV pickup – i.e. end of 2020. For its part General Motors asserted that it is in the process of refitting its Hamtramck plant to make electric pickup trucks – though a specific timeframe for delivery is unclear.

The electric Mustang Mach-E likely represents the first domino to fall in a sweeping shift in sports car propulsion of domestic makes from internal combustion to EV tech. Ford’s introduction of an electric Mustang SUV likely points to the eventual arrival of Cadillac and Corvette equivalents and, perhaps further down the road, an EV Camaro and EV FCA Challenger.

With sales of sedans and sports cars in decline the shift to SUV form factors with EV propulsion suddenly seems like a no-brainer. But the boldness and courage required by Ford to make this move ought not to be underestimated.

Ford (with the Focus) and GM (with the Volt and Bolt) have flirted with EVs in the past, but these expensive endeavors failed to fire up consumers to the point of putting up impressive sales figures. These models had the trappings of “regulatory” offerings intended to fulfill California zero-emission requirements or federal Corporate Average Fuel Economy (CAFE) standards. Dealers were unenthusiastic about these early EV models, and advertising dollars in support of the effort were scarce.

The launch of the Mustang Mach-E moves Ford’s EV effort to center stage and the announcement, coming at the L.A. Auto Show with a significant dealer audience in attendance, marks a pivotal moment for the industry. The iconic Mustang will now stand as the fulcrum of a committed EV marketing effort that will reshape Ford’s relationship with its customers, its dealers, and its suppliers.

Ford dealers will now be on the front lines of the new-vehicle sales proposition of marketing both ICE and EV vehicles on the same showroom floor. The software and connectivity elements of the Mach-E, with over-the-air software updates and an exceptionally nimble infotainment system, will present a substantial contrast to existing in-vehicle systems – at least until elements of the Mach-E can be extended across the other vehicles in the Ford lineup.

As important as the shift of domestic marques from ICE to EV will be as it unfolds, the shift of the pickup sector will be even more powerful and momentous. The vehicle volumes and profits at stake in the pickup sector are more critical to the automakers involved – GM, Ford and FCA – and the change in performance characteristics and expectations will require different means of communication.

At last week’s Fleet Forward event, put on by Bobit Media, an industry analyst from Vincentric noted the total cost of ownership advantage of high-mileage EVs. A Cox Automotive executive, also speaking at Fleet Forward, noted the growing number of EVs making their way to the market … and with greater range. (The Mustang Mach-E has a 300-mile range, according to Ford.)

SOURCE: Cox Automotive

Shifting sports cars to EV propulsion is an almost pure enhancement – if you ignore the loss of soothing engine growls and roars. Shifting pickups along the same path may require some demonstration and convincing – though Ford has already taken the first steps in this direction with its stunt of towing a railroad train with an EV propelled F Series truck earlier this year.

The transformative impact of the Mustang Mach-E launch cannot have been lost on attendees of last night’s press event, Ford dealers, or Ford competitors. Along with the Mach-E comes a comprehensive software update solution, a new in-house-developed infotainment user interface (dubbed “Menlo”), smartphone-based keyless vehicle access via the FordPass app, and a global fast- and regular-speed charging network – in conjunction with multiple partners including Shell’s Greenlots.

It’s a new day for Ford, a rebirth for the Mustang, and a turning point for the industry. It will be interesting to see what impact electrified pickups will have following the arrival of a Mustang rendered silent but deadly with its electrified powertrain.


WEBINAR REPLAY: AWS (Amazon) and ClioSoft Describe Best Cloud Practices

by Randy Smith on 11-20-2019 at 10:00 am

ClioSoft has been working with the leading cloud computing providers for a while now, running experiments on various EDA cloud architectures. One example was a project with Google that I previously wrote about in For EDA Users: The Cloud Should Not Be Just a Compute Farm. Since then, ClioSoft has also teamed up with Amazon Web Services (AWS) to show examples and talk about best practices for designing in the cloud. This information was shared in a webinar on Thursday, October 17th, 2019. You can sign up to view the replay of that webinar here.

All of us have heard about the advantages of on-demand computing. Some of the EDA companies have now come along and offered licensing solutions to accommodate that. However, there are multiple ways to architect EDA solutions in the cloud. I think it is important that everyone understands the trade-offs with various cloud architectures. Design Data Management tools, such as those that come from ClioSoft, provide additional benefits to cloud architectures, though it is not the case that “one size fits all” when it comes to implementing your cloud architecture. In fact, at a high level, there are at least two dimensions to the architectural choices – the tool architecture and the data architecture.

When considering the tool architecture in a cloud environment, we are describing where tools will run. Today that even applies to interactive tools. Cloud services are giving us ever-decreasing latencies, and since we can render full-motion video over the internet, it should not be a problem to render interactive EDA tools over the internet. However, to work optimally, we need to have the correct hardware for each tool. We also need to understand EDA tool workloads – how many resources, for how long, at what point in the design flow?

Data architecture is also critical to your efficiency and cost. You need to decide where you will keep each type of data. Much more than that, though, modern solutions involve caching data, and you also want to consider persistent storage in the cloud. Where are the master copies of each type of data (e.g., library data, design data, simulation results)? Where are the caches? There are lots of decisions, and depending on your tools, it may be difficult to change your architectural choice later. The benefits are tremendous, but you also want to be as correct as possible in your initial implementation. To do that, you need information on all the optimization parameters at your control on AWS – Amazon EC2 Instance Types, Operating System Optimization, Networking, Storage, and Kernel Virtual Memory. There is a lot to learn about and control. Do you know what an AMI is?

In addition to superior design data management solutions, this kind of information is exactly what ClioSoft has been preparing for its customers. The information shared in the previously mentioned blog was quite helpful, and ClioSoft has now followed it up with this webinar collaboration with AWS. Of course, ClioSoft is in the AWS partner network.

Speaking for AWS in the webinar is David Pellerin, the AWS Head of Worldwide Business Development. Dave has an interesting background. He has been with Amazon for more than seven years. He has worked in a variety of fields, including accelerated and reconfigurable computing, data center and cloud services, HPC software development tools, field-programmable gate arrays, financial computing, life sciences, and health IT. Dave has also authored several books related to programming and design, including VHDL Made Easy. Clearly, he understands EDA, too.

Also speaking in the webinar is Karim Khalfan, VP of Application Engineering at ClioSoft. I have known Karim for a very long time, and I appreciate that he not only has a deep understanding of design data management but also a knack for making these complex issues easy to understand. Combined with Dave’s textbook-writing experience, I think everyone will be able to learn a lot from this webinar.

Also Read

WEBINAR REPLAY: ClioSoft Facilitates Design Reuse with Cadence® Virtuoso®

WEBINAR: Reusing Your IPs & PDKs Successfully With Cadence® Virtuoso®

For EDA Users: The Cloud Should Not Be Just a Compute Farm


NXP Pushes GHz Performance in Crossover MCU

by Bernard Murphy on 11-20-2019 at 6:00 am

RT1170 system

I first heard about NXP crossover MCUs at the 2017 TechCon. I got another update at this year’s TechCon, this time on their progress on performance and capability in this family. They’ve been ramping performance – a lot – now to a gigahertz, based on a dual-core architecture with an M7 and an M4. They position this as 2 to 9X faster than competitive solutions, certainly a major performance advantage.

A quick refresher on why they’re doing this. MCUs used to be the staid but inexpensive and reliable cousins of the flashier processors you’d find in your phone. What you wanted in your car, appliances, printers and many other applications wasn’t a lot of flash and features; you wanted reliability and low cost. Now, thanks to the explosion in expectations for what everything and anything should be able to do, we want very high performance and very low power, communications, human-machine interfaces, voice recognition and face ID everywhere. Still at very low cost.

Perhaps you could do this by scaling advanced processors down to MCU price levels (a few dollars), but that’s a big stretch. And there are other considerations besides cost. Many MCU apps depend on real-time support, and for that they have to run real-time operating systems rather than the Linux OS used by their up-market cousins. On top of that, requiring a large base of MCU application developers to switch OS would be impractical. Also, while product teams using MCUs want to take advantage of AI capabilities, they have limited resources and expertise. For all these reasons, NXP argues that it’s best to start with architectures built for MCU developers and grow them into supporting advanced features. Makes sense to me.

The dual-processor approach follows a familiar big.LITTLE kind of theme, in which a high-performance (1GHz) M7 core handles advanced applications that only need to run intermittently, such as smart-speaker functions (audio pre-processing through echo cancellation, noise suppression, beamforming, etc.). A lower-performance (400MHz), more power-efficient M4 core can handle lighter-weight and standby tasks such as wake-word processing. In fact, the M4 can handle more than that, according to Gowri Chindalore, Head of Strategy for embedded processing. He told me it can also handle fingerprint sensing, quick voice recognition and quick face ID. The two cores are in separate power domains; on detecting a wake-word or gesture, the M4 wakes the M7 for phrase recognition, perhaps “hey, it’s dark in here, turn on the lights”. Gowri said the system can support between 100 and 150 phrases.
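The handoff described here can be pictured as a simple state machine: the always-on low-power core screens audio for a wake-word, and only then powers up the high-performance core for full phrase recognition. A minimal Python sketch of that control flow (all names and the event model are illustrative, not NXP APIs):

```python
# Illustrative sketch of a big.LITTLE-style wake flow: a low-power core (M4)
# screens events, and a high-performance core (M7) wakes only when needed.
class DualCoreController:
    def __init__(self, phrases):
        self.phrases = set(phrases)  # recognized command phrases (100-150 on the RT1170)
        self.m7_awake = False        # high-performance core starts powered down

    def on_audio_event(self, event):
        if not self.m7_awake:
            # M4 duty: cheap, always-on wake-word screening
            if event == "wake-word":
                self.m7_awake = True  # power up the M7 domain
            return None
        # M7 duty: full phrase recognition, then return to sleep
        result = event if event in self.phrases else "unrecognized"
        self.m7_awake = False
        return result

ctrl = DualCoreController(["turn on the lights"])
ctrl.on_audio_event("turn on the lights")         # ignored: M7 still asleep
ctrl.on_audio_event("wake-word")                  # M4 wakes the M7
print(ctrl.on_audio_event("turn on the lights"))  # -> turn on the lights
```

The design point this illustrates is that the expensive recognizer never runs until the cheap screener has seen a trigger, which is exactly why the two cores sit in separate power domains.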

NXP became one of the pioneers in machine learning (ML) programmability across a range of platforms when they introduced their eIQ ML software development environment. This can start from any of the standard ML trained network representations and map to differing targets, optimizing the mapping as needed to best suit the resources of a given target. All this without having to understand all the technical details of TensorFlow Lite, Glow and other models. Another plus for MCU developers who want the capability without a lot of extra training.

There are a few more important features. The RT1170 hosts a 2D GPU, an addition over earlier processors, so it can generate complex graphics for appliance and industrial systems. It’s also automotive-qualified, so think cockpit displays and graphical steering-wheel controls. This MCU also provides a hardware root of trust (HRoT) through their EdgeLock subsystem (HRoTs are the way all serious hardware security is going now). EdgeLock provides secure boot, a range of cryptography options and a secure real-time clock, useful in many contexts, e.g. forcing a timeout after an unreasonable delay.

One more point that has raised some questions: the RT1170 doesn’t use embedded flash but rather 2MB of embedded SRAM. This decision was apparently customer-driven; customers didn’t want the performance hit or the cost of flash. To ensure there is no security problem as a result of this change, data is stored encrypted in SRAM and is decrypted on the fly, with zero cycle overhead, as it’s read into the MCU.

The RT1170 is built on 28nm FDSOI technology, providing all the low-power management features you need (power islands, low leakage) but still at a much more price-conscious level than you’d find in application processors or GPUs at more advanced FinFET nodes.

NXP sees this platform having multiple applications: industrial and retail (factory automation controllers, unmanned vehicles, building access controls, retail display controllers), consumer and healthcare (smart home, professional audio applications and patient monitoring systems) and automotive applications (in-vehicle HMIs and 2-wheeler instrument clusters). Lots of opportunities – we can’t all build our own custom devices, in fact most of us can’t; we need more solutions like this. You can learn more about the RT1170 HERE.


Webinar – IP for securing automotive systems

by admin on 11-19-2019 at 10:00 am

Modern cars have about as much in common with their predecessors as modern cell phones have with dial-up landline phones. Cars are now loaded with a bevy of electronics, some of which serve the convenience of the driver while others are essential for vehicle operation and occupant safety. With the introduction of sophisticated electronics comes the potential for security threats.

Cars often already have built-in cellular connections and will be expanding their interactions with external devices in the form of electronic keys, other vehicles, roadway instrumentation, traffic and routing information, over the air updates and more. With each of these communication channels comes the potential for vulnerability. Defending these systems against compromise is essential for preventing theft, nuisance, damage, and collision – with the threat of bodily injury or death.

Automotive system designers need to address the challenges of hardening and securing vehicles against these threats. Fortunately, there is IP available that can help provide the kinds of protection that vehicle systems need. Silvaco, a leading supplier of IP, will be offering a free webinar on the topic of “IP Solutions for Secure Autonomous Driving” on December 3, 2019 at 10 AM PST. The presenter is Conor Culhane, Senior Application Engineer at Silvaco. He specializes in embedded security solutions and holds a BS in Computer Engineering from the Georgia Institute of Technology.

His presentation will cover many aspects of building secure SoCs for automotive applications. Connected vehicles present a variety of attack surfaces. To help counter this, software upgrades must be secure and critical vehicle networks need to be physically isolated. Part and parcel of this is hardware identification for authentication and secure cryptographic key management.

The webinar will also cover security IP solutions from Silvaco that address the needs of this market. They offer security processors, cryptography, hashing and secure key management. They also offer modules that serve as runtime integrity checkers and DRAM protection. Lastly, Silvaco has cipher engines for secure AES and public-key engine accelerators.

While thieves may no longer be able to jump in and rub two wires together to steal cars, clever malicious actors will be looking for other ways to steal or damage cars, or cause worse problems. System designers need to stay apprised of the latest developments in automotive security. This webinar will go a long way toward providing this kind of information. Registration is available on the Silvaco web site.


S2C Delivers FPGA Prototyping Solutions with the Industry’s Highest Capacity FPGA from Intel!

by Daniel Nenni on 11-19-2019 at 6:00 am

In 2016 we published our book “Prototypical: The Emergence of FPGA-Based Prototyping for SoC Design” which began an incredible journey through ASIC prototyping. While we are working on an update to that book there is some recent Prototyping news that is worthy of praise.

First and foremost, S2C Inc. has just announced the densest FPGA prototyping boards the industry has ever seen. Based on the new Intel Stratix 10 GX 10M FPGA, S2C has announced single, dual and quad FPGA configurations.

UPCOMING WEBINAR: Prototyping with Intel’s New 80M FPGA and S2C!

The Stratix 10 GX 10M FPGA is the newest addition to Intel’s 14nm Stratix 10 family and features up to 80 million ASIC gates (2.5x denser than Xilinx). Imagine that, more than 320M ASIC gates on a single board, wow!
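The per-board figure follows directly from the quad configuration: four of these FPGAs on one board. A quick arithmetic sketch (raw announced capacity only; effective usable capacity in a real prototype is lower and is not an S2C claim here):

```python
# Raw capacity arithmetic for S2C's single, dual and quad configurations.
GATES_PER_FPGA = 80_000_000  # Intel Stratix 10 GX 10M: up to 80M ASIC gates

for fpgas in (1, 2, 4):
    total_m = fpgas * GATES_PER_FPGA // 1_000_000
    print(f"{fpgas} FPGA(s): {total_m}M ASIC gates")
# -> 1 FPGA(s): 80M ASIC gates
# -> 2 FPGA(s): 160M ASIC gates
# -> 4 FPGA(s): 320M ASIC gates
```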

And now that Intel 10nm is in high volume manufacturing I expect to see even higher density FPGAs coming out in 2020. The legendary Xilinx vs Intel (Altera) FPGA wars are back on, absolutely!

S2C Product Highlights:

  • Supports designs up to 80 million ASIC gates with a single FPGA, simplifying the prototyping effort for complex designs.
  • Prodigy Logic System hardware facilitates comprehensive out-of-the-box prototyping, reducing time-to-prototyping.
  • Complete Player Pro prototyping software-stack streamlines Quartus-based FPGA design compilation, reducing prototype configuration time.
  • Supported by Prodigy MDM debug module, accelerating design debug.
  • Supported by a rich portfolio of Prototype Ready IP in the form of plug-and-play daughter cards, enabling rapid prototype platform bring-up.

The Single 10M Prodigy Logic System is optimized and trimmed to assure signal integrity and enable the best performance, supporting up to 1.4 Gbps for general-purpose I/O and up to 16 Gbps for the high-speed transceivers. Remote management capabilities are supported over USB or Ethernet, including FPGA configuration, power on/off/recycle, virtual UART for debugging, system monitoring, identification of the presence of specific Prodigy daughter cards, and remote test with auto-detection technology.

“Intel’s Stratix 10 GX 10M FPGA is approximately 2.5 times larger than the current largest commercially available FPGA and is likely to be the highest-capacity single FPGA for the next 2 to 3 years.  Using the Stratix 10 GX 10M FPGA will significantly increase current SoC/ASIC design prototyping capacity, simplify the prototyping process and achieve a much lower cost per gate”, commented Toshio Nakama, CEO of S2C.  “Our immediate availability of the Single 10M Prodigy Logic System marks our strong commitment to deliver the best prototyping solutions to accelerate their software development and design validation.”

Single S10 10M Prodigy™ Logic System

The other interesting piece of Prototyping news is that Synopsys officially acquired DINI last week. Mike Dini has been a fixture on the prototyping scene for as long as I can remember. He could be seen at various conferences reading the newspaper in his 10×10 booth. Unfortunately, Dini spent most of 2018 in a legal battle with Cadence. The first filing was in June of 2017 and the resolution (in DINI’s favor according to my sources) was on 10/18/2018. David vs Goliath legal battles can really take the wind out of David’s sails/sales, been there done that. Synopsys now has one less competitor to worry about and Mike Dini has the Cadence settlement and the Synopsys acquisition cash. Congratulations on the exit Mike!

Also Read:

AI Chip Prototyping Plan

WEBINAR: How ASIC/SoC Rapid Prototyping Solutions Can Help You!

Are the 100 Most Promising AI Start-ups Prototyping?


ITC shines light on new Mentor Test announcements

by Tom Simon on 11-18-2019 at 10:00 am

The 50th International Test Conference was just held in Washington, DC, where papers, sessions, workshops, and announcements addressing the increasing complexity and expanding use of semiconductors showed that innovations in test are crucial to design and product success. Test methodologies, and even the scope of test, have expanded over the lifetime of this event. If test methods had grown linearly with design size and complexity, today’s massive designs would be effectively untestable. At the same time, test activity has moved from being a manufacturing step to something necessary throughout the life of the design in many applications.

The one major message here is that the scale and scope of test are expanding, and the industry is working to keep up with and track these changes. Evidence of this is provided in announcements by Mentor during the ITC. The first of these deals with Mentor’s Tessent Connect, which provides much-needed automation in hooking up hierarchical test elements. The benefits of hierarchical test are well understood. Each core can have test added during design. The result is easier scan insertion, better observability, and quicker test pattern generation. Also, top-level resources are conserved by applying IJTAG, based on IEEE 1687. When there are design iterations, and there always are, only the affected blocks need test changes.

The downside is that a lot of manual effort is required to connect each core into the chip-level test implementation. Tessent Connect helps automate the process of making these connections. Designers using Tessent Connect work at a higher level of abstraction that focuses on intent rather than the details of stringing together individual wires. This is especially useful when working in cross-team environments. To help facilitate adoption, Mentor has also created a quickstart program for Tessent Connect to help with flow assessment and provide implementation services.

The second Mentor announcement at ITC was the introduction of the Tessent Safety ecosystem. They describe it as a comprehensive portfolio of best-in-class automotive IC test solutions from Mentor and links to its industry-leading partners. In applications such as automobiles, test now plays a major role during system operation. This has led to expanded use of Logic BIST, which can be used in chips throughout their life. For instance, ISO 26262 calls for regular and repeated testing of automotive systems during operation to detect failures so corrective action can be taken. These tests must be performed quickly and in such a way as to not interfere with overall system operation.

Mentor’s Tessent Safety ensures that tests are non-destructive to system operation and that tests are run much faster than alternative approaches. One new technology they are using is called Observation Scan Technology (OST), which includes IP that can be inserted selectively to boost observability. This translates into a 10X improvement in performance and helps reduce layout congestion. Mentor is also adding close links to their Austemper SafetyScope and KaleidoScope products.

Mentor is participating in the ARM Functional Safety Partnership Program, leveraging ARM Safety Ready IP, like the Cortex-R52 processor. There are many other aspects to the Tessent Safety ecosystem. A partial list includes analog test capabilities, memory BIST – at RTL or gate level, automotive grade ATPG and transistor level defect simulation. The level of rigor in the Tessent Safety ecosystem comes as no surprise given their long experience with automotive applications and their test expertise. The Tessent Connect and Tessent Safety announcements from this year’s ITC are available on the Mentor website.


AMAT last to confirm foundry-led recovery

by Robert Maire on 11-18-2019 at 6:00 am

Good end to a weak fiscal year, and an end to the down cycle
As expected, and well telegraphed by TSMC, LRCX, ASML, and KLAC, AMAT put up a good quarter and guide as the last to report that the industry has turned the corner on the down cycle. While not a rip-roaring recovery, it’s better to return to growth than to continue a downward trend.

Results were at the high end of guidance, coming in at EPS of $0.80 versus the street’s $0.76, and revenues of $3.75B versus $3.68B. More importantly, guidance for the January quarter is for $0.87 to $0.95 on revenues of $4.1B ±$150M.

No surprise here: all driven by TSMC
It’s quite clear that the hockey-stick uptick in TSMC spending, concentrated at the end of the year, is the primary reason for AMAT’s strong outlook. While Intel has been tepid at best, memory is still dead, and display is going nowhere, it’s TSMC that is carrying the entire load of the recovery. Their uptick is so strong it has been able to offset the weakness in other areas.

We would remind investors that Applied has one of the strongest relationships with TSMC of any equipment supplier, TSMC having been called “the house that Applied built”. It is somewhat funny that whereas in the past TSMC needed Applied to be a force in the chip industry, now the tables are turned and Applied has TSMC to thank for the recovery.

Running on 5 of 8 cylinders
Applied repeated what we have been saying for many months now: this up cycle will be driven by foundry/logic. Memory remains virtually dead and display is treading water at best.

The question is when do those three cylinders (DRAM, NAND, and display) start firing again? We certainly agree with Applied that it’s a question of when, not if.

On the call the company was very careful, multiple times, not to comment on the shape or size of the recovery. The company also demurred on the question of a potential 2020 recovery of NAND and left out entirely the timing of a DRAM recovery.

Given that capacity has still been coming offline recently in memory, it will be a fairly long time before memory spending starts up again. In our view, it’s unclear whether NAND will recover before the end of 2020.

Who benefits most from a Foundry/Logic recovery?
Given that this up cycle is very much led by foundry/logic, with memory stuck in neutral, we think it’s appropriate to revisit who has the best exposure to foundry/logic among the big four semi equipment makers.

We think it’s clear that KLAC likely has the highest exposure to foundry/logic and historically has been viewed as the anti-memory play. TSMC is spending a lot of money on process control to keep to its annual improvement cadence, and KLA gets the lion’s share of that.

Applied is likely second in line for foundry/logic exposure, with a long history of supporting TSMC, but it is still very dependent on memory for its business. Applied will see a mild recovery but really needs memory to kick back in.

ASML is likely third, as it gets most of its EUV business from TSMC but still relies on DUV, with memory makers as volume buyers of scanners accounting for a huge share of purchases and most of its current profitability.

LRCX is fourth, as it has historically been the poster child for the memory industry and saw huge upside from memory’s spending sprees in past up cycles. That said, even Lam is seeing an upturn given TSMC’s huge uptick.

The stocks
We have been positive on the stocks in the semi equipment sector calling for strong upside prior to the quarterly reports and urging investors to get in before Lam was the first to report.

We had also suggested at the time that after the stocks had their run up due to the positive reaction of the turn in the down cycle to an up cycle that we would be inclined to take some money off the table.

We think that, post the Applied quarter, lightening up may be prudent.

We have seen a strong run-up across the board in all the stocks. The stocks are at all-time highs in many cases, and at all-time-high P/E ratios in almost all cases. This is despite the fact that the recovery will be weak and slow, with memory still dead.

The market seems to be pricing in a normal rip-roaring semiconductor recovery when in fact we have a half-baked, foundry/logic-only recovery without any clear sight lines to a full recovery.

We are also concerned that there is no other near-term upside catalyst for the stocks until they report the current quarter. One could argue that the next upside surprise will be a memory recovery, but we are going into the seasonally weak Q1, when memory is at its normal nadir, so a memory recovery is several quarters away at a very minimum.

At this point, given that the stocks are priced to perfection in a less-than-perfect environment, we also have the macro risk of China and trade still hanging over us. The China issue seems to have gotten marginally worse of late, as the deal we thought we had now seems more elusive and the intellectual property transfer issues seem less than settled.

Applied has just hit an all-time high, jumping almost 10% on well-known “news” of a recovery. We have no problem taking some of our profits from the turn in the cycle off the table until we see a better upside/downside risk/reward profile. We think Applied remains a premier company in the space and put up a very good report; however, the valuation is a bit ahead of reality.