MEMS Actuation and the Art of Prototyping

by Bernard Murphy on 11-27-2019 at 5:00 am

Thermal actuator prototyping

I mentioned a while back that I’m really getting into the role that sensors play in our new hyper-connected world – in the IoT, intelligent cars, homes, cities, industry, utilities, medicine, agriculture and more. If we can think of a way to sense it and connect it, someone is probably already doing it. But there’s more to these devices at the edge than sensing. One of the really neat things about MEMS is that you can also build machines – actuators – at the micro-level, delivering a breathtaking range of possibilities, already used in camera autofocus, laser-based 3D scanning using micro-mirrors, the tiny speakers in earbuds, pumps for biomedical analysis and many other applications.

These machines can get pretty complicated so John Stabenow (Mentor/Tanner) and Mary Ann Maher (SoftMEMS) eased me into it with a simple example – a thermal actuator, a device which causes an arm to bend (in this example) thanks to thermal expansion due to resistive heating. These are not toy examples; such devices have multiple real uses, to tilt micro mirrors for barcode reading or biomedical imaging, to tune lasers and to control adaptive optics for sight correction.

The idea is not difficult to understand. Two pads (blue above) are anchored to the substrate, and from these extend two arms, one narrow and one mostly wide (though it starts narrow, to not impede flexing I would guess). These arms are connected together at the far end. Now apply a voltage across the pads. Current flows through the arms, resistively heating each; obviously the narrow arm heats up more (higher resistance) and expands more than the wide arm. Since they’re connected at the far end, the structure bends upwards. Presto – you have a micro-machine with an arm which will bend upwards when you apply a voltage across the pads.
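To get a feel for the magnitudes involved, here is a first-order, back-of-envelope estimate of the tip deflection in Python. All of the dimensions, temperatures and material constants below are invented for illustration (the article doesn’t give any), and this single-formula model ignores the physics a real FEA run would capture:

```python
# First-order estimate of a thermal actuator's tip deflection.
# All numbers below are illustrative assumptions, not from the article.
ALPHA = 2.6e-6        # thermal expansion coefficient of polysilicon (1/K)
ARM_LEN = 250e-6      # arm length (m)
GAP = 2e-6            # gap between hot and cold arms (m)
dT_hot, dT_cold = 400.0, 150.0   # temperature rise of each arm (K)

dL_hot = ALPHA * ARM_LEN * dT_hot    # elongation of the narrow (hot) arm
dL_cold = ALPHA * ARM_LEN * dT_cold  # elongation of the wide (cold) arm

# Small-angle approximation: the differential elongation rotates the
# linked tip about the cold arm, scaled by arm length over twice the gap.
deflection = (dL_hot - dL_cold) * ARM_LEN / (2 * GAP)

print(f"differential elongation: {(dL_hot - dL_cold) * 1e9:.1f} nm")
print(f"estimated tip deflection: {deflection * 1e6:.2f} um")
```

Even this crude model shows why the geometry matters: in this approximation the deflection scales inversely with the gap between the arms, so halving the gap roughly doubles the motion.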

An interesting point here is that the design of these devices is pretty unconstrained. Where conventional semiconductor devices follow well-defined recipes and rules, teams who build these kinds of devices can, up to a point, do whatever they want, as long as their MEMS foundry is willing to support them. I’ve seen examples using polymers rather than polysilicon as part of the arm structure, for better bending characteristics at lower voltages. Less dramatically, all dimensions and materials (typically less exotic) are completely open for experimentation.

Clearly this means that design must be tested through a lot of virtual prototyping and analysis, though here that prototyping and analysis uses tools that wouldn’t look out of place in turbine design – drafting, 3D modeling and finite element analysis (FEA) of thermal and mechanical behaviors.

Mentor/Tanner, SoftMEMS and OnScale have partnered to provide a front-to-back solution for this prototyping. Tanner provides the tools to define parametrizable structures, a layer at a time. SoftMEMS converts these to 3D models, given additional input on material choices and other parameters. OnScale then provides cloud-based FEA for elastically scalable thermal and mechanical analyses. The results are directly viewable back in the Mentor and SoftMEMS tools.

The flow is set up so that you can easily run multiple trials in parallel to compare and contrast different options. The Tanner tools allow for parametrization with no need for coding in some cases, and with an option to code in more complex cases. Using these capabilities, you can parametrize the length of the actuator arms, for example. Parametrized structures are called T-cells (think of them as a type of P-cell). SoftMEMS understands T-cells and can carry these through 3D modeling to generate multiple variants for prototyping analysis.
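To make the parameterized-cell idea concrete, here is a hypothetical sketch of what a generator might look like. This is not Tanner’s actual T-cell API – the class, layer names and default dimensions are all invented – but it captures the pattern: one parameterized function, many layout variants:

```python
# Hypothetical sketch of a parameterized-cell generator in the spirit of
# a T-cell. Names, layers and dimensions are invented for illustration.
from dataclasses import dataclass

@dataclass
class Rect:
    """An axis-aligned layout rectangle on a named layer (units: um)."""
    layer: str
    x0: float
    y0: float
    x1: float
    y1: float

def thermal_actuator(arm_length_um: float, hot_width_um: float = 2.0,
                     cold_width_um: float = 14.0, gap_um: float = 2.0):
    """Return the layout rectangles for one actuator variant."""
    top = hot_width_um + gap_um + cold_width_um
    return [
        Rect("poly", 0.0, 0.0, arm_length_um, hot_width_um),          # hot arm
        Rect("poly", 0.0, hot_width_um + gap_um, arm_length_um, top), # cold arm
        Rect("poly", arm_length_um - hot_width_um, 0.0,
             arm_length_um, top),                                     # tip link
    ]

# Sweep the arm-length parameter to generate variants for parallel analysis
variants = {length: thermal_actuator(length) for length in (150.0, 200.0, 250.0)}
```

Each variant could then be handed off for 3D modeling and analysis, which is essentially what the T-cell flow automates.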

You might be thinking at this point (because I was) “OK, all very good and necessary, but how do I know that simulated behavior will correlate with manufactured behavior? Geometries and physical properties (resistance, thermal expansion coefficients, stiffness) can all be influenced by defects and manufacturing variability.”

When you’re building regular semiconductors, you have access to PDKs with lots of rules, primitive devices and P-cells, all qualified by the foundry. These are what you rely on to know that your analysis of your design will correspond reasonably with what is finally built.

John and Mary Ann told me that the state of this art is not so advanced for MEMS, in part because of the huge variability in the design of these devices, and also because foundries, customers and tool makers are still figuring out how best to optimize yields. Currently it is typical to correlate 3D models with as-processed devices and to run statistical simulations to model variation, so that reasonable correspondences between modeling and manufacturing can be developed.

This enables some platforms to offer PDKs for a stable range of cell types; however, it is still necessary to support customers who want to experiment outside those constraints. In such cases the team finds it is best to work with the foundry and customers to build PDKs on the fly to suit that customer’s special needs. Mary Ann cited both Bosch and ST as examples where this is quite common. She believes the Mentor-Tanner/SoftMEMS/OnScale partnership is differentiated from other prototyping options through its strengths in supporting such flows.

Very cool technology. I’m hoping to write about more examples over time. You can learn more about the collaboration HERE.


Mentor unpacks LVS and LVL issues around advanced packaging

by Tom Simon on 11-26-2019 at 6:00 am

Innovations in packaging have played an important role in improving system performance and area utilization. Advances like 2.5D interposers and fan-out wafer-level packaging (FOWLP) have allowed mixed dies to be used in a single package and have dramatically reduced the number of connections that need to go all the way to the PCB level. Mixed dies allow for mixing process nodes and combining different types of chip, or chiplets, in a single package. Also, every time a net needs to travel to a PCB, there are issues with delay, coupling and transmission line effects, among other things. Yet despite their advantages, these high-density advanced packaging (HDAP) technologies introduce more complexity into package verification. By moving what used to be on-chip or on-board signals into the package, complexity goes up greatly. At the same time, the mature approaches for PCB or IC LVS and LVL cannot easily be applied to this problem.

Mentor has recently published a technical paper entitled “A deep dive into HDAP LVS/LVL verification,” written by Tarek Ramadan, that looks closely at the verification challenges which arise when HDAP is used. There are a host of issues that stem from how new much of the technology is.

The ownership of design and verification for HDAP can vary from organization to organization and from chip to chip. Often, interposer designs are considered more chip centric and the responsibility can fall to the silicon teams. For FOWLP the opposite can be the case, and packaging teams may be tasked with verification.

Because of the interdependence between the ICs and the package interconnect, package verification might have to wait for die information to stabilize and be delivered, which can delay the entire design. The paper describes the methods that can be used to permit parallel work to avoid schedule impacts.

Another complicating factor is that, unlike IC processes, the ‘stack up’ for a package may vary from design to design due to the specifics of the chips and how they can be most efficiently combined. This prevents the use of off-the-shelf PDK-like information for layer and pin definitions. The tools used in these flows must be flexible and able to adapt easily to design-specific configurations.

To help readers understand the possibilities, the Mentor paper goes through several cases that illustrate how LVS and LVL verification can be completed in various scenarios. The first case deals with what happens when there is no explicit schematic for the interconnect in the package. Of course, with simpler technologies spreadsheets sufficed for determining correct package connections. The efficacy of spreadsheets goes down with the increased complexity of HDAP. The Mentor paper describes how labels on the geometry can be used to overcome these issues and help detect shorts and opens in these designs.
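As a toy illustration of how labels on geometry can drive shorts/opens detection, here is a minimal sketch in Python. Real assembly-level LVS operates on extracted layout and is far more involved, but the core bookkeeping reduces to a union-find over physically connected shapes; all shape IDs, net names and connections below are invented:

```python
# Toy label-based shorts/opens check. Geometry is pre-reduced to
# connectivity edges between shape IDs; labels name the intended net
# of a shape. All data here is invented for illustration.
from collections import defaultdict

def find(parent, x):
    """Find the root of x with path compression."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def check(shapes, edges, labels):
    """Return (shorts, opens) given shapes, touching pairs and labels."""
    parent = {s: s for s in shapes}
    for a, b in edges:                      # union physically connected shapes
        parent[find(parent, a)] = find(parent, b)

    groups = defaultdict(set)               # root -> net names seen on it
    for shape, net in labels.items():
        groups[find(parent, shape)].add(net)
    shorts = [nets for nets in groups.values() if len(nets) > 1]

    by_net = defaultdict(set)               # net name -> distinct roots
    for shape, net in labels.items():
        by_net[net].add(find(parent, shape))
    opens = [net for net, roots in by_net.items() if len(roots) > 1]
    return shorts, opens

# Example: VDD and GND end up shorted through shape 2; SIG is in two pieces
shorts, opens = check(
    shapes=[1, 2, 3, 4, 5],
    edges=[(1, 2), (2, 3)],
    labels={1: "VDD", 3: "GND", 4: "SIG", 5: "SIG"},
)
```

A short shows up as a connected group carrying more than one net name; an open shows up as a net whose labeled shapes fall into more than one group.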

The other limitation that is encountered when trying to use chip level LVS and LVL tools is that there are no devices per se in the package netlist. The paper talks about how assembly level tools can work around this issue by creating placeholders for the pins in the package. There are a number of variations where the data needed is not directly available and the flows must accommodate this.

Based on their understanding of the issues enumerated in the paper, Mentor offers several tools that facilitate LVL and LVS verification of HDAP designs. Their Xpedition™ Substrate Integrator (xSI) tool performs HDAP system-level connectivity management and planning. Calibre 3DSTACK HDAP, used in conjunction with xSI, can run all the flows outlined in the paper, providing solutions for most cases. The paper makes good reading and can be downloaded from the Mentor website.


Where has the ASIC Business Gone?

by Daniel Nenni on 11-25-2019 at 10:00 am

Delta ASIC Design Services

As the traditional ASIC business disappears before our eyes with the recent divestitures and acquisitions, I have been asking questions amongst the fabless semiconductor ecosystem and am getting few answers.

Who or what is going to step in to enable start-ups and new-to-silicon systems companies with application specific chips?

Interestingly, I met Gert Jørgensen at the TowerJazz Symposium last week. Gert is the VP of Sales and Marketing at the ASIC company DELTA Microelectronics. Gert has been with DELTA since 1982, where he worked as a test engineer, design engineer and project manager before moving to business development.

Just in case you missed that, Gert has been doing ASICs for 37 years at the same company. He is the closest thing to a chip design unicorn that I have ever seen in my 35-year semiconductor career, absolutely.

Coincidentally, or not, Gert and I are doing a webinar on “Choosing the Right ASIC Manufacturing Model for Your Business” next week:

ASIC production is a part-science, part-art discipline which requires extensive knowledge. The many available options, which combine various 3rd party services and internal resources, require an understanding of the technical intricacies, the pros and cons, and the financial implications of each option. The more knowledge you have, the cheaper ASIC production can be for your company.

This webinar examines three common business models for hardware implementation including IC production and the financial impact of each. Using a real-life project case, it then identifies production volume break even points, distinguishing where one production model has an obvious financial benefit over another.
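The break-even idea the webinar describes can be sketched with simple arithmetic: whichever model has lower NRE wins at low volume, and the crossover sits where the total-cost lines intersect. The two models and all cost figures below are invented for illustration, not taken from DELTA or the webinar:

```python
# Illustrative break-even comparison between two hypothetical ASIC
# production models. All cost figures are invented for the example.
def total_cost(nre: float, unit_cost: float, volume: int) -> float:
    """Total cost of producing `volume` units under one model."""
    return nre + unit_cost * volume

# Model A: shared/MPW-style run - low NRE, higher per-unit cost
# Model B: dedicated mask set   - high NRE, lower per-unit cost
NRE_A, UNIT_A = 50_000.0, 12.0
NRE_B, UNIT_B = 400_000.0, 4.0

# Volume at which the dedicated run becomes the cheaper option
break_even = (NRE_B - NRE_A) / (UNIT_A - UNIT_B)
print(f"break-even volume: {break_even:,.0f} units")

for v in (10_000, 43_750, 100_000):
    a = total_cost(NRE_A, UNIT_A, v)
    b = total_cost(NRE_B, UNIT_B, v)
    print(f"{v:>7} units: model A ${a:,.0f}  model B ${b:,.0f}")
```

Below roughly 44,000 units the low-NRE model wins in this made-up example; above it, the dedicated mask set pays for itself, which is exactly the kind of crossover the webinar case study identifies.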

The webinar is on Tuesday, December 3rd at 10am PST. SemiWiki webinars are generally 30 minutes but, more importantly, they give you access to key people inside the semiconductor industry even if you are not able to attend live. Just register and you will be sent a link to the replay when it is finished. I always ask the presenters to give their contact information at the end of the presentation and they do. I hope to see you there!

About Delta
DELTA Microelectronics is a European company. We offer services ranging from design (front and back end) and development of test solutions to production testing of components, wafer probing, failure analysis and logistics for the supply of components, including purchasing of wafers and packaging.

We allow the customer to get the most cost-effective combination of services.

History
DELTA has been supporting microelectronics development since 1976, providing services to hundreds of successful integrated circuit projects for some of the world’s best-known OEMs/IDMs and fabless semiconductor suppliers. We are a business unit of DELTA Danish Electronics, Light & Acoustics that was established in 1941.

DELTA Microelectronics is headquartered in Hørsholm, Denmark, and has an office in South Wales, UK.

Partners and in-house capabilities
A range of European and Far Eastern wafer and packaging partners enable DELTA to provide a full supply chain solution. DELTA has a large semiconductor test department where we can test wafers and components. Our test engineers ensure that the test hardware and software are customized to your chip. DELTA’s experienced ASIC design team is specialised in very low power chips, payment systems, RFID designs, sensor interfaces and optical chips.


Could TSMC’s spend be part of the seasonal pattern?

by Robert Maire on 11-25-2019 at 5:00 am

Is there more downside than upside in stocks?
Entering a seasonally weak period, then what?
Does China trade come back to haunt industry?
Cycle is past the bottom-But what kind of up cycle?

The most recent up cycle in the industry was a huge one, driven by a huge spend on NAND as SSDs sucked up a seemingly infinite number of devices. DRAM spend wasn’t too shabby either, and logic/foundry kept up a fairly good pace.

In our long history of following the industry it was clearly one of the stronger up cycles…the kind of strength that makes less experienced management and industry analysts say that the industry is no longer cyclical. While the down cycle was not like the down cycles of old in which everyone lost money and some companies went out of business, it was still a significant cut in revenues and earnings for a full year (or more).

No two cycles have ever been the same, so it’s impossible to predict the size, shape, slope & length of the current up cycle with any accuracy.

Not firing on all cylinders
It’s clear that memory is still weak, and even Applied Materials, on its recent earnings release, didn’t want to comment on the timing of a NAND recovery; DRAM seems out of the range of even wild speculation at this point.

However, foundry/logic seems to have been enough to get the industry off the bottom it was bouncing along on.

Our concern is that the memory recovery seems far enough away and uncertain enough that we have to discount it significantly when looking at valuation. If DRAM doesn’t recover until 2021, and NAND at the end of 2020 at best, what is that worth?

The excess capacity of idled tools sitting in fabs suggests that the memory recovery slope will be fairly shallow when it happens. It will almost certainly not be the huge tsunami of spend that we saw in the last up cycle. Memory makers still have a significant hangover from the drunken spending binge and will likely be a bit more conservative in the coming up cycle.

Could China come back to the forefront of concerns?
Everyone did an excellent job of kicking the trade can down the road for quite a long time as we suggested would be the case.  The problem is that nothing was ever solved or accomplished.  If anything we are in worse shape now than when we started as both sides have had time to dig into positions.

China is less likely in our view to concede very much as they likely view the current administration as either a short timer or in a strategically weaker position or both.  The truce/business as usual seems to be the current status so not much has happened over the prior 3 years and all China has to do is stick it out for another year or maybe less.

We had written many groundbreaking articles about both Taiwan and Hong Kong risks over several years and those issues seem to be coming to a head, increasing the difficulty of a clean solution.

With elections less than a year away it seems less likely that a trade war that could hurt farmers and consumers would be started in earnest.  Trump has all but promised Tim Cook that Apple will be exempted (well played Tim..).

The delay of an EUV tool destined for China may be the beginning of a different approach to trade. We would also not be surprised for Huawei to become a target again.

We don’t see a “real” China trade solution
At best the administration will do some hand waving, declare victory and hope that everyone forgets prior promises.

At worst the administration needs a diversion from impeachment and starts a trade war to rally people around the flag and the administration.

The bottom line is that we see more downside than upside for chip stocks now than several months ago, related to China trade. An end to the trade war seems priced in but is now less certain.

Q1 seasonality may soften investors’ chip optimism
The calendar first quarter has almost always been the weakest quarter for chip stocks. The industry is in a post-partum depression after the strong holiday selling season for electronics, and certainly after the new iPhone cycle in September.

On a historical basis memory tends to be at its lowest price point based on seasonally lower demand.

Chinese new year always takes a week or two bite out of the quarter as well. In general Q1 is always weak for semiconductors.

Could TSMC’s spend be part of the seasonal pattern?
TSMC announced a huge uptick in spending which is the main driver of the “recovery” of the industry.  The spending seems clearly focused around an end-of-year “hockey stick” as TSMC gears up for next year’s 5nm production.

From a seasonal timing perspective, TSMC has to order and receive new equipment in Q4 and Q1 to get the process ready in Q2 and production ready for the next iPhone in Q3.

TSMC moves less equipment into the fab in Q3 as it usually has been in ramp mode.

Basically TSMC is now in an annual spending pattern based on release dates of the new iPhone by its biggest and best customer, Apple.

This suggests that after a couple of strong quarters of tool orders and shipments, TSMC will likely slow going into the summer of 2020.

The question at hand is whether memory will come back before TSMC’s spending spree slows down, or whether the industry will see a plateau or air pocket.

We think it fairly likely that TSMC will not increase spend from where it is now.  The probability is higher that TSMC will slow from this peak spend period spanning Q4 and Q1.

The Stocks
After recommending that investors get into the stocks prior to the quarterly reporting season, which we predicted would hold an upside surprise, we suggested after Applied reported last week that investors would be better off taking some profits off the table at the end of earnings season.

So far that appears to be the case, as we have seen stock price weakness since we made that call following Applied’s rounding out of a great quarterly earnings season.

The stocks still seem to have a bit of air in them as they are still trading at historically high valuations for the industry on a P/E basis, yet fundamentals are not at historically high levels, nor does it look like we are getting there soon given memory’s uncertainty.

In short, we think downside beta remains higher than upside beta in current circumstances.  We think stock prices could stick around here or go lower, but we are hard pressed to find a motivation for them to go higher in the near term. There remain a lot of cross currents and risks over the next quarter or two which are not priced in, whereas a strong recovery has already been priced in.

We think our negative call last week remains the appropriate position on the stocks….


Top Three Reasons to Attend the Synopsys Fusion Compiler Event!

by Daniel Nenni on 11-22-2019 at 10:00 am

As a professional semiconductor event attendee I can pretty much tell if an event will be successful by looking at the agenda. What I look for is simple, customer presentations. Not company presentations or partner presentations but actual customer case studies presented by name brand companies. For this event Google, Intel, and Samsung stand out for me.

Intel, because they have gone through some major disruptions in the last year. Example: hiring Jim Keller as senior vice president in the Technology, Systems Architecture and Client Group (TSCG) and general manager of the Silicon Engineering Group (SEG). Jim is a very disruptive personality and that is exactly what Intel needed in the design ranks, in my opinion.

Google because they are doing some extremely clever stuff! The whole Google approach to chip design is also very disruptive. If you ever get a chance to participate in a Google chip project do it if at all possible. If you ever get to hear a Google chip person speak do not miss it. Seriously, I speak from experience here on both parts, absolutely.

Samsung is a bleeding-edge company with regard to logic and memory chips. They design a very wide spectrum of silicon and systems and literally go where no chip designers have gone before. Always worth listening to Samsung.

Synopsys’ Fusion Compiler was announced a year ago and from what I have heard it is doing quite well, delivering on the promises Synopsys made from the beginning. I know of a very large SoC that was taped out recently using Fusion Compiler and there were no complaints, which is very rare in this business. In fact, I was told that Synopsys support was excellent for this project.

Fusion Compiler Technical Symposium
Wednesday, December 4, 2:00 PM, Synopsys Building 1

Since its launch one year ago, Synopsys’ Fusion Compiler™ RTL-to-GDSII product has delivered on its promise to help digital designers efficiently bring their differentiated products to market faster, realizing their Simply Better PPA™ goals.

But you don’t have to take our word for it.

Come hear from industry leaders including Arm, Google, Intel, Renesas, and Samsung at the Fusion Compiler Technology Symposium as they discuss today’s design challenges and how these challenges are being solved with Fusion Compiler.

AGENDA

2:00 PM Registration and Refreshments

3:00 PM Presentations

5:00 PM Networking Reception and Entertainment

LOCATION

Synopsys – Building 1 at the New Pathline Park Complex

800 N. Mary Ave.

Sunnyvale, CA 94085

About Synopsys
Synopsys, Inc. (Nasdaq: SNPS) is the Silicon to Software partner for innovative companies developing the electronic products and software applications we rely on every day. As the world’s 15th largest software company, Synopsys has a long history of being a global leader in electronic design automation (EDA) and semiconductor IP and is also growing its leadership in software security and quality solutions. Whether you’re a system-on-chip (SoC) designer creating advanced semiconductors, or a software developer writing applications that require the highest security and quality, Synopsys has the solutions needed to deliver innovative, high-quality, secure products. Learn more at www.synopsys.com.


U.S.-China trade war continues

by Bill Jewell on 11-22-2019 at 6:00 am

Electronics production

The trade dispute between the U.S. and China continues to drag on. According to Reuters, U.S. President Donald Trump recently threatened to raise tariffs further on Chinese imports if no deal is reached. Tariffs affecting most consumer electronics imports from China are scheduled to go into effect on December 15, according to a timeline from China Briefing.

The trade war has already had a significant impact on U.S. electronics imports. In the first three quarters of 2019, total U.S. electronics imports have dropped 6% versus the first three quarters of 2018. Imports from China dropped 12%. China still is by far the largest source of electronics imports, accounting for 54% in 1Q-3Q 2019. The second largest source, Mexico, dropped 3%. Two countries benefiting from the U.S.-China dispute are Vietnam (third largest) and Taiwan (fourth largest). U.S. electronics imports versus a year ago are up 59% from Vietnam and 64% from Taiwan. All other significant sources of U.S. electronics imports were down from a year ago, with the biggest declines coming from South Korea (down 32%) and Malaysia (down 29%).

Numerous companies have shifted production out of China in recent months. Samsung ended mobile phone production in China, moving to countries such as Vietnam and India. Inventec Corp. plans to shift production of notebook PCs (including HP-branded PCs) for the U.S. market from China to Taiwan. A CNBC article cites Vietnam, Taiwan and Thailand as the biggest beneficiaries of the production shifts.

Electronics production data by country demonstrates the shifting production. China’s electronics year-to-year growth was in the 12% to 15% range in each month of 2018. In 2019, growth has ranged from 7% to 11%. Taiwan’s production has boomed in 2019, reaching 24% three-month-average growth versus a year ago in August. Vietnam has experienced accelerating electronics growth in 2019, reaching 12% in October. U.S. electronics production has shown modest growth in the 5% to 7% range for most of 2018 and 2019 but slipped to 2% in September. Thus, it appears the U.S.-China trade dispute has not been a significant boost to U.S. electronics manufacturing. Other major electronics producing countries have been weak lately. South Korea, Japan and the 28 countries of the European Union (EU28) have been flat to negative for most of 2019.

Although the shift of electronics production from China to other Asian countries has been accelerated by the current trade dispute, the trend has been in place over the last few years. Multinational companies are moving production to Vietnam and other countries due to lower labor costs, favorable trade conditions and openness to foreign investment.

How is the trade dispute affecting overall electronics in 2019? Key electronic equipment markets remain weak. Gartner projects combined unit shipments of PCs and tablets will decline 3.1% in 2019, followed by a 2.4% decline in 2020. IDC forecasts a 2.2% drop in smartphone units in 2019. Smartphones are expected to grow 1.6% in 2020, helped by the emerging 5G market. The impact of the trade dispute on PC, tablet and smartphone shipments is difficult to measure. These are mature markets which have been weak the last few years.

How will the U.S.-China trade dispute affect the economy and electronics going forward? Goldman Sachs estimated the trade dispute has cut 2019 GDP by 0.5% in the U.S. and 0.7% in China. The Consumer Technology Association (CTA) estimates tariffs on China have cost the U.S. consumer technology industry almost $12 billion since July 2018.

U.S. consumers have not yet seen tariff driven price increases on most electronics. However, unless a resolution is reached, on December 15 a 15% tariff will be applied to U.S. imports from China of mobile phones, TVs, digital cameras, set-top boxes, laptop PCs, tablets, video monitors, headphones, video game consoles, smartwatches, fitness trackers and other consumer products. Consumers are conditioned to expect a general trend of lower prices and higher functionality for electronics. If implemented, the 15% tariff will not affect the 2019 holiday season, but going forward it will negatively impact the U.S. demand for consumer electronics in 2020.


MIPI gaining traction in vehicle ADAS and ADS

by Tom Simon on 11-21-2019 at 10:00 am

I am old enough to remember when cars did not come with air conditioning unless you purchased it as an option. Of course, now you can’t even find a car that doesn’t come with air conditioning. So it goes with advanced driver assistance systems (ADAS). They are becoming more and more common and will certainly become baseline features in cars of the future. In all likelihood autonomous driving systems (ADS) will follow the same path as they become more feasible and affordable. Both of these systems require video data from sensors, heading either to an internal display or to a computer for processing.

The automotive environment brings with it a number of specialized requirements for these systems, such as low power and high reliability in a challenging physical environment. They must also be cost-effective. System designers for ADAS and ADS have been turning to existing standards for transferring video information in mobile systems, which share many of the same requirements as ADAS and ADS. Specifically, there has been a lot of interest in MIPI® Alliance specifications. The proven technology found in the well-established D-PHY℠ for connecting high resolution cameras, vision processors and displays has become a popular solution for in-vehicle video needs.

Mixel, a leading provider of mixed-signal mobile IP, has published an article discussing the application of their D-PHY IP in GEO Semiconductor’s GW5 CVP product family. MIPI D-PHY is a source-synchronous PHY that uses one clock lane and a varying number of data lanes. It is a widely adopted standard that has been in use since 2009. There are two differential pins per signal. D-PHY can be used with MIPI CSI-2℠, DSI℠ and DSI-2℠ to connect to cameras and displays. Mixel’s D-PHY v2.1 TX and RX IPs can handle 2.5Gbps per lane, with up to 4 lanes to achieve 10Gbps. The TX and RX IPs are AEC-Q100 compliant for auto-grade 0/1/2 temperature ranges.

In GEO Semiconductor’s product they used D-PHY v1.1 TX and RX with 4 lanes of 1.5Gbps, for a total of 6Gbps. The GEO GW5400 includes in-camera vision processing to enable ADAS functionality. The GEO GW5 supports up to 8-megapixel sensors and includes GEO’s eWARP® geometric processor, innovative High Dynamic Range (HDR) Image Signal Processor (ISP), and 2D graphics functionality. The GEO GW5 has 2 RX interfaces, supporting dual sensors; however, virtual channels can be used to connect to more sensors. There is an HDR feature that allows each RX interface to receive images from multiple HDR sensors and combine them into a single high dynamic range video stream.
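A quick back-of-envelope check shows how the quoted lane rates map to real video loads. The lane counts and per-lane rates come from the article; the 8-megapixel stream parameters are assumptions for illustration, and protocol overhead is ignored:

```python
# Back-of-envelope D-PHY bandwidth check. Lane rates are from the
# article; the sensor stream parameters are assumed for illustration
# and protocol overhead is ignored.
def aggregate_gbps(lanes: int, gbps_per_lane: float) -> float:
    """Raw aggregate link bandwidth across all data lanes."""
    return lanes * gbps_per_lane

dphy_v21 = aggregate_gbps(4, 2.5)   # Mixel D-PHY v2.1: 4 lanes at 2.5Gbps
dphy_v11 = aggregate_gbps(4, 1.5)   # GEO GW5 config:   4 lanes at 1.5Gbps

# Hypothetical sensor: 8 MP at 30 fps, 12 bits per raw pixel
required = 8e6 * 30 * 12 / 1e9      # required Gbps for the raw stream

print(f"v2.1 link: {dphy_v21} Gbps, v1.1 link: {dphy_v11} Gbps")
print(f"stream needs ~{required:.2f} Gbps")
```

Under these assumptions an 8-megapixel, 30 fps raw stream needs just under 3Gbps, comfortably inside both the 6Gbps v1.1 configuration GEO uses and the 10Gbps v2.1 link.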

The Mixel PHY IP comes with a BIST engine that can be used for IC, board or system tests. Mixel has had silicon success across multiple nodes at a variety of foundries, and reports widespread deployment of their IP in ADAS and ADS chipsets.

MIPI interfaces will increasingly play a major role in ADAS and ADS systems. In the future, in addition to radar, LIDAR and video sensor input, ADAS and ADS will also rely on data links between vehicles and between vehicles and their surroundings. Sensor data rates and resolution will increase over time as well. From reading the Mixel article it is pretty clear that they intend to stay at the forefront of the technology. The article, which can be found here, also goes into more detail about the specifics of their offering and the GEO Semiconductor products that employ their IP.

Also Read:

A MIPI CSI-2/MIPI D-PHY Solution for AI Edge Devices

FD-SOI Offers Refreshing Performance and Flexibility for Mobile Applications

New Processor Helps Move Inference to the Edge


Mustang Mach-E!

by Roger C. Lanctot on 11-21-2019 at 6:00 am

Ford Motor Company detonated an epochal explosive in the form of an electrified Mustang SUV on the eve of the Los Angeles Auto Show last night. The move marked an industry-altering turning point as auto makers commence the process of electrifying their internal combustion engine line-ups in anticipation of a global market embracing electrification.

The move came three days ahead of a rumored electrified pickup truck announcement expected from Tesla Motors and follows by one year Rivian’s announcement of plans for its own electrified pickup truck. Of course, the significance of a rush to electrify pickup trucks cannot be lost on Ford, which makes the F-150 – the best-selling vehicle of any kind in the U.S. for the past 36 years.

Ford sells nearly a million F-Series pickup trucks every year and has tipped its plans for a full electric version sometime in late 2020 or early 2021. That is about the same timing that Rivian (in which Ford is an investor) has discussed for its own EV pickup – i.e., end of 2020. For its part, General Motors asserted that it is in the process of refitting its Hamtramck plant to make electric pickup trucks, though a specific timeframe for delivery is unclear.

The electric Mustang Mach-E likely represents the first domino to fall in a sweeping shift in sports car propulsion of domestic makes from internal combustion to EV tech. Ford’s introduction of an electric Mustang SUV likely points to the eventual arrival of Cadillac and Corvette equivalents and, perhaps further down the road, an EV Camaro and EV FCA Challenger.

With sales of sedans and sports cars in decline, the shift to SUV form factors with EV propulsion suddenly seems like a no-brainer. But the boldness and courage required of Ford to make this move ought not to be underestimated.

Ford (with the Focus) and GM (with the Volt and Bolt) have flirted with EVs in the past, but these expensive endeavors have failed to fire up consumers to the point of putting up impressive sales figures. These models had the trappings of “regulatory” offerings intended to fulfill California zero-emission requirements or federal Corporate Average Fuel Economy (CAFE) standards. Dealers were unenthusiastic about these early EV models and advertising dollars in support of the effort were scarce.

The launch of the Mustang Mach-E moves Ford’s EV effort to center stage and the announcement, coming at the L.A. Auto Show with a significant dealer audience in attendance, marks a pivotal moment for the industry. The iconic Mustang will now stand as the fulcrum of a committed EV marketing effort that will reshape Ford’s relationship with its customers, its dealers, and its suppliers.

Ford dealers will now be on the front lines of the new-vehicle sales proposition of marketing both ICE and EV vehicles on the same showroom floor. The software and connectivity elements of the Mach-E, with over-the-air software updates and an exceptionally nimble infotainment system, will present a substantial contrast to existing in-vehicle systems – at least until elements of the Mach-E can be extended across the other vehicles in the Ford lineup.

As important as the shift of domestic marques from ICE to EV will be as it unfolds, the shift of the pickup sector will be even more powerful and momentous. The vehicle volumes and profits at stake in the pickup sector are more critical to the automakers involved – GM, Ford and FCA – and the change in performance characteristics and expectations will require different means of communication.

At last week’s Fleet Forward event, put on by Bobit Media, an industry analyst from Vincentric noted the total cost of ownership advantage of high-mileage EVs. A Cox Automotive executive, also speaking at Fleet Forward, noted the growing number of EVs making their way to the market … and with greater range. (The Mustang Mach-E has a 300-mile range, according to Ford.)
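The total-cost-of-ownership argument for high-mileage EVs is simple arithmetic: a higher purchase price can be overcome by lower per-mile running costs. Here is a toy comparison; every number below is invented for illustration, not taken from the Vincentric or Cox Automotive presentations:

```python
# Toy total-cost-of-ownership (TCO) comparison for a high-mileage
# fleet vehicle. All figures are hypothetical.

def tco(purchase, cost_per_mile, annual_miles, years):
    """Purchase price plus lifetime running cost (fuel/energy,
    maintenance, etc., folded into a single per-mile figure)."""
    return purchase + cost_per_mile * annual_miles * years

# Hypothetical vehicle driven 30,000 miles/year for 5 years.
ice = tco(purchase=35_000, cost_per_mile=0.25, annual_miles=30_000, years=5)
ev  = tco(purchase=45_000, cost_per_mile=0.125, annual_miles=30_000, years=5)
print(ice, ev)  # 72500.0 63750.0 -- the EV's higher sticker price is
                # outweighed by its lower per-mile cost at high mileage
```

At low annual mileage the ordering flips, which is why the analyst's point was specifically about high-mileage fleets.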

SOURCE: Cox Automotive

Shifting sports cars to EV propulsion is an almost pure enhancement – if you ignore the loss of soothing engine growls and roars. Shifting pickups along the same path may require some demonstration and convincing, though Ford has already taken the first steps in this direction with its stunt of towing a railroad train with an EV-propelled F-Series truck earlier this year.

The transformative impact of the Mustang Mach-E launch cannot have been lost on attendees of last night’s press event or Ford dealers or Ford competitors. Along with the Mach-E comes a comprehensive software update solution, a new in-house developed infotainment user interface (dubbed “Menlo”), smartphone-based keyless vehicle access via the FordPass app, and a global fast- and regular-speed charging network – in conjunction with multiple partners including Shell’s Greenlots.

It’s a new day for Ford, a rebirth for the Mustang, and a turning point for the industry. It will be interesting to see what impact electrified pickups will have following the arrival of a Mustang rendered silent but deadly with its electrified powertrain.


WEBINAR REPLAY: AWS (Amazon) and ClioSoft Describe Best Cloud Practices

WEBINAR REPLAY: AWS (Amazon) and ClioSoft Describe Best Cloud Practices
by Randy Smith on 11-20-2019 at 10:00 am

ClioSoft has been working with the leading cloud computing providers for a while now, running experiments on various EDA cloud architectures. One example of that was a project with Google that I previously wrote a blog about: For EDA Users: The Cloud Should Not Be Just a Compute Farm. Since then, ClioSoft has also teamed up with Amazon Web Services (AWS) to show examples and talk about best practices for designing in the cloud. This information was shared at a webinar on Thursday, October 17th, 2019. You can sign up to view the replay of that webinar here.

All of us have heard about the advantages of on-demand computing, and some EDA companies now offer licensing solutions to accommodate it. However, there are multiple ways to architect EDA solutions in the cloud, and it is important that everyone understands the trade-offs among them. Design data management tools, such as those from ClioSoft, provide additional benefits to cloud architectures, though it is not the case that “one size fits all” when it comes to implementing your cloud architecture. In fact, at a high level, there are at least two dimensions to the architectural choices: the tool architecture and the data architecture.

When considering the tool architecture in a cloud environment, we are describing where tools will run – today that even applies to interactive tools. Cloud services offer ever-decreasing latencies, and since full-motion video can be rendered over the internet, rendering interactive EDA tools remotely should not be a problem either. However, to work optimally, we need the correct hardware for each tool, and we need to understand EDA tool workloads: how many resources, for how long, at what point in the design flow?

Data architecture is also critical to your efficiency and cost. You need to decide where you will keep each type of data. However, much more than that, modern solutions involve caching data. You also want to consider persistent storage in the cloud. Where are the master copies of each type of data (e.g., library data, design data, simulation results)? Where are the caches? There are lots of decisions. Depending on your tools, it may be difficult to change your architectural choice later. The benefits are tremendous, but you also want to be as correct as possible in your initial implementation. To do that, you need information on all the optimization parameters under your control on AWS: Amazon EC2 instance types, operating system optimization, networking, storage, and kernel virtual memory. There is a lot to learn about and control. Do you know what an AMI is?

Such information, in addition to superior design data management solutions, is exactly what ClioSoft has been preparing for its customers. The information shared in the previously mentioned blog was quite helpful, and now ClioSoft has followed it up with this webinar collaboration with AWS. Of course, ClioSoft is in the AWS partner network.

Speaking for AWS in the webinar is David Pellerin, the AWS Head of Worldwide Business Development. Dave has an interesting background. He has been with Amazon for more than seven years. He has worked in a variety of fields, including accelerated and reconfigurable computing, data center and cloud services, HPC software development tools, field-programmable gate arrays, financial computing, life sciences, and health IT. Dave has also authored several books related to programming and design, including VHDL Made Easy. Clearly, he understands EDA, too.

Also speaking in the webinar is Karim Khalfan, VP of Application Engineering at ClioSoft. I have known Karim for a very long time, and I appreciate that not only does he have a deep understanding of design data management, but he also has a knack for making these complex issues easy to understand. Adding in Dave’s experience as a textbook author, I think everyone will be able to learn a lot from this webinar.

Also Read

WEBINAR REPLAY: ClioSoft Facilitates Design Reuse with Cadence® Virtuoso®

WEBINAR: Reusing Your IPs & PDKs Successfully With Cadence® Virtuoso®

For EDA Users: The Cloud Should Not Be Just a Compute Farm


NXP Pushes GHz Performance in Crossover MCU

NXP Pushes GHz Performance in Crossover MCU
by Bernard Murphy on 11-20-2019 at 6:00 am

RT1170 system

I first heard about NXP crossover MCUs at the 2017 TechCon. I got another update at this year’s TechCon, this time on their progress in performance and capability in this family. They’ve been ramping performance – a lot – now to a gigahertz, based on a dual-core architecture with an M7 and an M4. They position this as between 2X and 9X faster than competitive solutions, certainly a major performance advantage.

A quick refresher on why they’re doing this. MCUs used to be the staid but inexpensive and reliable cousins of the flashier processors you’d find in your phone. What you wanted in your car, appliances, printers and many other applications wasn’t a lot of flash and features; you wanted reliability and low cost. Now, thanks to the explosion in expectations for what everything and anything should be able to do, we want very high performance and very low power, communications, human-machine interfaces, voice recognition and face ID everywhere. Still at very low cost.

Perhaps you could do this by scaling advanced processors down to MCU price levels (a few dollars), but that’s a big stretch. And there are other considerations besides cost. Many MCU apps depend on real-time support, and for that they have to run real-time operating systems rather than the Linux used by their up-market cousins. On top of that, requiring a large base of MCU application developers to switch OS would be impractical. Also, while product teams using MCUs want to take advantage of AI capabilities, they have limited resources and expertise. For all these reasons, NXP argues that it’s best to start with architectures built for MCU developers and grow them into supporting advanced features. Makes sense to me.

The dual-processor approach follows a familiar big.LITTLE kind of theme, in which a high-performance (1GHz) M7 core handles advanced applications only needing to run intermittently, such as smart speaker functions (audio pre-processing through echo cancellation, noise suppression, beamforming, etc). A lower-performance (400MHz), more power-efficient core (M4) can handle lighter-weight and standby tasks such as wake-word processing. In fact, the M4 can handle more than that, according to Gowri Chindalore, Head of Strategy for embedded processing: it can also handle fingerprint sensing, quick voice recognition and quick face ID. The two cores are in separate power domains; on detecting a wake-word or gesture, the M4 wakes the M7 for phrase recognition, perhaps “hey, it’s dark in here, turn on the lights”. Gowri said the system can support between 100 and 150 phrases.
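The M4/M7 division of labor described above can be sketched as a simple state machine. This is purely a hypothetical illustration; the class and method names are invented for this sketch and are not NXP APIs:

```python
# Hypothetical model of the always-on M4 listening path waking the
# high-performance M7 core. Names are invented for illustration.

class DualCoreController:
    def __init__(self):
        self.m7_awake = False  # high-performance core starts powered down

    def m4_listen(self, event):
        """Always-on M4 path: lightweight wake-word / gesture detection."""
        if event == "wake-word":
            self.wake_m7()
            return "M7 handling phrase recognition"
        return "M4 standby"

    def wake_m7(self):
        # In the real part the cores sit in separate power domains;
        # here we just flip a flag to model the M7 powering up.
        self.m7_awake = True

ctrl = DualCoreController()
print(ctrl.m4_listen("background noise"))  # M4 standby
print(ctrl.m4_listen("wake-word"))         # M7 handling phrase recognition
print(ctrl.m7_awake)                       # True
```

The point of the split is that the expensive M7 power domain stays off until the cheap M4 path has already decided something interesting happened.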

NXP became one of the pioneers in machine learning (ML) programmability across a range of platforms when they introduced their eIQ ML software development environment. This can start from any of the standard ML trained network representations and map to differing targets, optimizing the mapping as needed to best suit the resources of a given target. All this without having to understand all the technical details of TensorFlow Lite, Glow and other models. Another plus for MCU developers who want the capability without a lot of extra training.

There are a few more important features. The RT1170 hosts a 2D GPU, an addition over earlier processors, so it can generate complex graphics for appliance and industrial systems. It’s also automotive-qualified, so think cockpit displays and graphical steering wheel controls. This MCU also provides a hardware root of trust (HRoT) through their EdgeLock subsystem (HRoTs are the way all serious hardware security is going now). EdgeLock provides secure boot, a range of cryptography options and a secure real-time clock, useful in many contexts, e.g. forcing a timeout after an unreasonable delay.

One more point that has raised some questions: the RT1170 doesn’t use embedded flash but rather 2MB of embedded SRAM. This decision was apparently customer-driven; customers didn’t want the performance hit or the cost of flash. To ensure this change introduces no security problem, data is stored encrypted in SRAM and decrypted on the fly, with zero added cycles, as it’s read into the MCU.
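Conceptually, encrypted-at-rest memory with transparent decryption on read looks like the toy model below. A real HRoT does this with a hardware cipher sitting in the fetch path at wire speed; the SHA-256-based XOR keystream here is my own stand-in to show the idea, not NXP’s actual EdgeLock scheme:

```python
# Toy model of SRAM that stores only ciphertext and decrypts
# transparently on read. Illustrative only -- not the EdgeLock design.
import hashlib

def keystream(key: bytes, nonce: int, length: int) -> bytes:
    """Derive a deterministic per-address keystream from the key."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce.to_bytes(8, "big") +
                              counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

class EncryptedSRAM:
    def __init__(self, key: bytes):
        self.key = key
        self.cells = {}  # address -> stored ciphertext

    def write(self, addr: int, plaintext: bytes):
        ks = keystream(self.key, addr, len(plaintext))
        self.cells[addr] = bytes(p ^ k for p, k in zip(plaintext, ks))

    def read(self, addr: int) -> bytes:
        # Decryption happens "on the fly" as part of every read.
        ct = self.cells[addr]
        ks = keystream(self.key, addr, len(ct))
        return bytes(c ^ k for c, k in zip(ct, ks))

ram = EncryptedSRAM(b"device-unique-key")
ram.write(0x1000, b"firmware image")
assert ram.cells[0x1000] != b"firmware image"  # at-rest data is ciphertext
print(ram.read(0x1000))                        # b'firmware image'
```

The takeaway is that software above the memory controller never sees ciphertext, while anyone probing the SRAM contents directly does.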

The RT1170 is built on 28nm FDSOI technology, providing all the low-power management features you need (power islands, low leakage) but still at a much more price-conscious level than you’d find in application processors or GPUs at more advanced FinFET nodes.

NXP sees this platform having multiple applications: industrial and retail (factory automation controllers, unmanned vehicles, building access controls, retail display controllers), consumer and healthcare (smart home, professional audio applications and patient monitoring systems) and automotive applications (in-vehicle HMIs and 2-wheeler instrument clusters). Lots of opportunities – we can’t all build our own custom devices, in fact most of us can’t; we need more solutions like this. You can learn more about the RT1170 HERE.