What Mary Meeker Missed
by Roger C. Lanctot on 06-01-2018 at 12:00 pm

It must be a measure of the dim view Silicon Valley takes of the automotive industry that, of the 294 slides in Mary Meeker’s annual Internet Trends presentation delivered at this year’s Code Conference, fewer than 10 refer to transportation. The Kleiner Perkins Caufield & Byers partner even managed to avoid using the word “car.”

The main headline of the much-anticipated report was that the globe has reached peak smartphone sales, with little new growth to be had – yet the opportunity to recognize the inexorable upward march of vehicle sales was overlooked. (An analysis of the report and the slides can be found here: https://tinyurl.com/ydbk7qf7 – courtesy of Recode.)

This oversight is especially notable given the report’s focus on data collection. The car has rapidly emerged as the new frontier for data gathering as autonomous vehicle technology begins to take hold. Concerns surrounding data privacy are swirling across the worlds of social media and mobile devices, while car companies are beginning to lay the groundwork for an opt-in culture tuned to the needs of an increasingly connected transportation industry.

Dim view or not, the A-Team – Apple, Alphabet and Amazon – are all keenly interested in exploiting the unexplored world of vehicle data for marketing purposes, with voice-based digital assistants capable of converting the car into a browser on wheels. But leading players in the world of transportation, such as Uber and Tesla, recognize deeper stores of value in using vehicle data to improve transportation experiences, business models and processes.

The car is the ultimate mobile device and it happens to serve the most inefficient business in the world: transportation. Massive amounts of metal are used to move people and goods across crumbling infrastructure at great cost and with relatively low utilization rates (at least of the vehicles themselves).

Upping the efficiency of the transportation network will require oceans of data – not only to optimize for ad hoc, on-demand services such as ride hailing, but also to improve public transportation options and to refine subscription-based vehicle usage models. At the core of transportation inefficiency is the increasingly tenuous vehicle ownership proposition.

In her talk, Meeker does highlight the growing inclination of consumer households to rely on on-demand transportation options in lieu of vehicle ownership. This behavior is reflected in the declining portion of household spending devoted to transportation even as cars continue to become more expensive.

Meeker highlights the shift to cheaper ad hoc transportation options and lauds Uber’s algorithmic acumen. So she steps up to the edge of the question of potentially declining vehicle ownership, but steps back before making the logical leap. It is almost as if cars are irrelevant in her calculations – or at least a major blind spot.

In developed markets such as North America and Europe, vehicle ownership does appear to be in some peril. Ride hailing and car sharing options – along with proliferating public transportation options – are putting pressure on the ownership proposition. There are also active measures to limit the use of cars in traffic-clogged or pollution-choked cities (Europe) or to use licensing limits to make it difficult to obtain a car (China).

In spite of these obstacles, global vehicle sales are forecast to climb at a compound annual rate of 2.5% through at least 2025, rising to more than 112M vehicles annually and on a trajectory to 120M. While the developed world may be bumping up against peak vehicle sales and ownership, the developing world is only just beginning to explore its very own love affair with cars.
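
A quick sanity check of that arithmetic, sketched in Python (the ~95M base for 2018 is my assumption for illustration; the forecast above gives only the 2.5% rate and the 112M endpoint):

```python
# Compound annual growth: sales_n = base * (1 + rate) ** years.
# The 95M 2018 base is an assumed figure for illustration only.
base_2018 = 95e6
cagr = 0.025
sales_2025 = base_2018 * (1 + cagr) ** (2025 - 2018)
print(f"Projected 2025 sales: {sales_2025 / 1e6:.1f}M vehicles")
# ~112.9M, consistent with the "more than 112M" forecast
```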

Data is at the core of preserving vehicle relevance in both worlds. Data is also at the heart of the market disruptions wrought by Uber and Tesla.

Both Uber and Tesla capture transportation-related data, and for both the customer opt-in is more or less a requirement. How do they do it, and why do consumers accept it? Because the trade-off is implicit.

Just as Google provides free search in exchange for using your search behavior as a basis for advertising (and for the privilege of manipulating your search results), Uber provides a discounted taxi service in exchange for the data it collects. The data that Uber collects – regarding a substantial proportion of local ad hoc transportation activity – is gold.

By now Uber knows the most popular routes in and around hundreds of major cities around the world. The company knows how to manage the supply and demand of transportation services, and it also knows a lot about driver and passenger behavior. Uber is in a position to advise municipalities regarding multimodal transportation infrastructure, parking, and traffic, and may even be able to advise on congestion mitigation – even though Uber itself is a major contributor to that congestion.

The ocean of data gathered daily by Tesla is rapidly advancing the company toward the autonomous vehicle future by providing essential information regarding driving behavior and driving conditions. Tesla has also managed to convert the thousands of Tesla drivers using its Autopilot feature into human guinea pigs, helping Tesla understand when and where humans have to take control of the driving task from the system – and even when and why the system fails.

Tesla’s data collection activity is providing the company with advanced electric vehicle insights regarding the number and type of trips taken, the impact of driving behavior on battery performance and the long-term performance of the vehicle batteries. Tesla by now knows far more than any other electric vehicle company about the ideal locations for charging stations and consumer behavior associated with the charging activity.

Just as Uber provides discounted taxi rides in exchange for data, Tesla’s value exchange is in the form of software updates. Tesla will continue to enhance the value of your “ride” as long as you’re still opted into the data sharing proposition – though it’s not as if you have a choice.

Uber and Tesla are creating and delivering value related to the driving experience in exchange for the data they collect. Notably, both companies have eschewed leveraging vehicle data for annoying and intrusive marketing messages.

A quick side note here: Waze is another transportation-centric application that has built its brand on data aggregation. It is worth noting that Waze seems to be losing its way of late, with an increasing emphasis on exactly the kind of distracting advertising and marketing messages during navigation that Tesla and Uber have skirted.

Some analysts have suggested that the value of vehicle data is such that cars ought to be or will someday be free to consumers in exchange for access to the vehicle data. The closest approximation to this value proposition is Hyundai’s WaiveCar program which allows Hyundai Ioniq drivers to use the car for free as long as they allow WaiveCar to load the car with external advertising signage.

The idea of a free car is no less compelling than the idea of a free smartphone – but neither proposition is likely. What is more likely is an exchange of value, in the case of the car, that enhances the safety or efficiency of the driving experience. An example: Tesla just updated its braking software over the air to shorten the Model 3’s stopping distance and get back into the good graces of Consumer Reports.

The greater existential problem facing the automotive industry, though, is efficiency. Cars are only used 3%-4% of the time – roughly an hour a day. This fundamental inefficiency is spurring consumer consideration of less expensive alternatives such as Uber, Lyft and Car2Go.

Car companies seeking a long-term future beyond selling large volumes of vehicles primarily in emerging markets will do well to shift their focus to leveraging vehicle data to enhance the efficiency of vehicle usage by enabling and supporting networked transportation services such as car sharing, ride hailing and subscription-based “ownership.” These new forms of networked customer engagement will help preserve the relevance of cars in a world of inefficient automobile-centric transportation.


Imec Technology Forum: Gary Patton of GLOBALFOUNDRIES
by Scotten Jones on 05-31-2018 at 12:00 pm

The imec technology forum was held in Belgium last week. At the forum I had a chance to sit down with Gary Patton, the CTO of GLOBALFOUNDRIES (GF), for an interview; he also presented “Enabling Connected Intelligence – Technology innovation: Enablers for an intelligent future.” In this article I will discuss what I see as the key points from the presentation and interview.


CEO Interview: Jason Oberg of Tortuga Logic
by Bernard Murphy on 05-31-2018 at 7:00 am

I first met Jason Oberg, CEO and one of the co-founders of Tortuga Logic, several years ago when I was still at Atrenta. At that time Jason and Jonny Valamehr (also a co-founder and the COO) were looking for partners. The timing wasn’t right, but we’ve stayed in touch, for my part because their area of focus (security) is hot and likely to remain hot.

Jason holds a PhD in hardware security from UCSD, focused on timing side-channels and supervised by Ryan Kastner. Jonny also holds a PhD in hardware security, from UCSB, supervised by Tim Sherwood. They formed the company in the summer of 2014, funded by an NSF grant and a Small Business Innovation Research (SBIR) grant. They started in an incubator in San Diego and have now branched out, moving their head office to San Jose, where we met for this interview:

What does Tortuga Logic do?
Everything related to securing hardware, which needs to cover a lot of security approaches, from the hardware/software boundary, through secure boot, to protection of privileged assets, resources and more. We believe strongly in the need for a secure design lifecycle, spanning the design and verification chain and extending even into post-silicon. Of course there are a number of security offerings today, from IP to formal-based tools, each of which has an important role to play, but on their own these are insufficient to provide high assurance of system security. Our role is to complement these, as I’ll explain shortly.

Incidentally, Tortuga doesn’t currently look at physical security concerns, such as power side-channels, PUFs and TRNG validation.

We have seen a lot of hardware security vulnerabilities recently. What’s your view on why and what are the challenges and opportunities in this space?
First let’s get rid of a possible misunderstanding. In big companies at least, this isn’t because designers are careless or don’t know what they are doing; many of these companies have large and very expert security teams. The real issue is that the range of possible problems is effectively unbounded, and therefore practical security, like safety, must be driven by an ROI calculation. What are the reasonably conceivable problems against which you are willing to defend? You have to draw a line somewhere because you can’t imagine all possibilities; even correcting for those you can imagine will have some consequences in cost, performance and power. Like it or not, you’ll have to trade off security in some areas against other objectives.

Based on the judgement of your internal experts, you draw that line and make a conscious decision that whatever is on the other side of the line is too difficult to exploit. Over time, exploits become more sophisticated and what was once inconceivable or too difficult to exploit becomes conceivable. This is just the nature of security.

Smaller companies and teams don’t have armies of security experts, so face bigger challenges. First, they don’t have as much expertise in where to draw the line or what might lurk on the other side. And second, while they can and should use proven security IP and tools, those alone do not guarantee their systems will be secure. You can use security features incorrectly just as you can use any other feature incorrectly, and some of those bugs can be quite subtle. So the burden is still on the system designer to verify that their system is secure. Formal plays an important role in validating some integration characteristics but it can’t uncover potential problems in hardware/software interaction for example.

Certainly an opportunity here is to provide metrics and tools to analyze the vulnerabilities of a design, especially to simplify/standardize this task. The ideal is to create a security verification plan and ultimately an audit trail to be able to demonstrate your path to security. You could imagine a goal to demonstrate some standardized level of security in an auditable process, not unlike what we do today for safety. Microsoft has one good example (for software) in their Security Development Lifecycle (SDL) where they do up-front threat-modeling followed by threat model-aware architecture, design and verification.

How are Tortuga Logic’s solutions different from other offerings in this space?
A Tortuga user starts with threat modeling. A very important point to understand is that this doesn’t mean listing known or possible vulnerabilities. It means rather building a description of all the things you want to protect, e.g. assets and keys, along with all the possible ways those assets can be accessed. Think of this as a way to characterize the complete problem space before drilling down to discover and analyze specific vulnerabilities, which is where many system security methods start today. Based on that top-down analysis, Tortuga can then build analytics to assess vulnerabilities, to determine the effectiveness of protection methods against attacks you know about, also to help you understand what you haven’t considered.

The product next takes as input your design files and security rules based on this threat modeling and generates a security monitor model in the form of synthesizable IP which can be inserted in the design. There are several reasons we took this approach. First, this model sits in the existing verification environment, monitoring for potential weaknesses. It requires no change to the verification environment such as testbenches, so adds minimal overhead for the verification team. Because it is synthesizable, it can be run in emulation and FPGA prototyping, which means you can continue to monitor for weaknesses during HW/SW testing. You can even use our models with formal. This is an end-to-end solution across the verification flow.
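
To make the monitoring idea concrete, here is a toy Python sketch of information-flow (taint) tracking, the general class of analysis such security monitors perform. This is not Tortuga Logic’s actual rule language, product API, or generated IP; the signal names are hypothetical:

```python
# Toy sketch of information-flow (taint) tracking over a simulation
# trace: tag an asset as tainted and flag any cycle in which taint
# reaches an untrusted observation point. This is NOT Tortuga Logic's
# rule language or generated monitor; signal names are hypothetical.

def run_monitor(trace, asset, untrusted_sinks):
    """trace: per-cycle lists of (dest_signal, [src_signals]) assignments."""
    tainted = {asset}
    for cycle, assignments in enumerate(trace):
        for dest, srcs in assignments:
            if tainted & set(srcs):          # taint propagates through use
                tainted.add(dest)
        leaks = tainted & untrusted_sinks
        if leaks:
            return f"cycle {cycle}: asset flowed to {sorted(leaks)}"
    return "no flow from asset to untrusted sinks observed"

trace = [
    [("crypto_out", ["aes_key", "plaintext"])],  # cycle 0: key used
    [("debug_bus", ["crypto_out"])],             # cycle 1: leak to debug
]
print(run_monitor(trace, "aes_key", {"debug_bus"}))
```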

Who is using your products?
We’re bound by the same constraints as other suppliers, so we’re limited in sharing names. I can tell you that we have been working for some time with government research labs, also large aerospace and defense primes. We have publicly announced a relationship with Xilinx who are working with us on both internal and external security objectives. We are also working with large processor companies exposed to the Meltdown and Spectre problems.

Where can people learn more about Tortuga?
Check out the whitepapers on our website and of course feel free to contact us (we’re very actively staffing up so welcome new engagements). We’ll also be at DAC. We won’t have a booth, but we plan to present at some of the major EDA vendor booths.

Also Read:

CEO Interview: YJ Su of Anaglobe

CEO Interview: Ramy Iskander of Intento Design

CEO Interview: Rene Donkers of Fractal Technologies


Top 10 Highlights from the Samsung Foundry Forum
by Tom Dillinger on 05-30-2018 at 9:00 am

Samsung Foundry recently held their annual technology forum in Santa Clara CA. The forum consisted of: presentations on advanced and mainstream process technology roadmaps; the IP readiness for those technology nodes; a review of several unique package offerings; and, an informal panel discussion with IP designers and EDA flow developers describing their recent collaborations with Samsung Foundry. Here are the (very subjective) highlights of the presentations and panel discussion.


Webinar: Custom SoCs for Narrowband IoT
by Daniel Nenni on 05-30-2018 at 7:00 am

This joint CEVA and Open-Silicon webinar, moderated by myself, will elaborate on Narrowband IoT (NB-IoT) custom SoC solutions that are based on the CEVA-Dragonfly IP subsystem, and serve a wide range of cost- and power-sensitive IoT applications. Those joining the webinar will learn about the CEVA-Dragonfly NB1 IP subsystem, which pre-integrates the CEVA-X1 processor, optimized RF, baseband, and protocol software to offer a complete NB-IoT modem IP solution that can be extended seamlessly with GNSS and sensor fusion functionality.

Registration
Date: Tuesday, June 19, 2018
Time: 8 a.m. PDT / 11 a.m. EDT
Duration: 60 mins

The webinar will also address Open-Silicon’s NB-IoT custom SoC platform and software SDK, and how they enable customers to differentiate within the silicon through robust security and proprietary accelerator features, with reduced risk, development schedule and cost.

The panelists will discuss the role of turnkey custom SoCs in lowering entry barriers, reducing time-to-market, increasing performance, adding security, and facilitating customization and scalability. The panelists will present sample use case platforms and explain how custom SoCs can enable product differentiation and total cost of ownership (TCO) savings for the next generation of NB-IoT applications.

This webinar is ideal for hardware designers and system architects of NB-IoT equipment/modules.

Speakers:

Emmanuel Gresset
Business Development Director, CEVA
Emmanuel is a Business Development Director in CEVA’s Wireless BU. For the last 30 years, Mr. Gresset has worked on signal processing, wireless modems, and processor and system-on-a-chip architecture at systems and semiconductor companies including Octasic, STMicroelectronics, Philips, VLSI Technology, Spectral Innovations and Thomson. He received his M.Eng from the Ecole Supérieure d’Electricité in Paris.

Pradeep Sukumaran
Director, Systems & Software, Ignitarium
Pradeep Sukumaran is Director, Systems & Software at Ignitarium, a front-end design and software consulting company of Open-Silicon. Ignitarium offers high end VLSI and SW solutions to customers, with a strong focus on IoT and Vision Intelligence technology. Pradeep has over 17 years of experience in the embedded software and systems domain. Prior to Ignitarium, he was Senior Solutions Architect at Open-Silicon.

Naveen HN
Engineering Manager, Open-Silicon
Naveen HN is an engineering manager for Open-Silicon. He oversees board design, post-silicon validation and system architecture. He also facilitates Open-Silicon’s SerDes Technology Center of Excellence and is instrumental in the company’s strategic initiatives. He has over 16 years of experience in various domains of embedded systems design. Naveen is an active participant in the IoT for Smart City Task Force, which is an industry body that defines IoT requirements for smart cities in India. He received his M. Tech from SJCE, Mysore.

About CEVA, Inc.
CEVA is the leading licensor of signal processing platforms and artificial intelligence processors for a smarter, connected world. We partner with semiconductor companies and OEMs worldwide to create power-efficient, intelligent and connected devices for a range of end markets, including mobile, consumer, automotive, industrial and IoT. Our ultra-low-power IPs for vision, audio, communications and connectivity include comprehensive DSP-based platforms for LTE/LTE-A/5G baseband processing in handsets, infrastructure and machine-to-machine devices, advanced imaging and computer vision for any camera-enabled device, audio/voice/speech and ultra-low power always-on/sensing applications for multiple IoT markets. For artificial intelligence, we offer a family of AI processors capable of handling the complete gamut of neural network workloads, on-device. For connectivity, we offer the industry’s most widely adopted IPs for Bluetooth (low energy and dual mode) and Wi-Fi (802.11 a/b/g/n/ac/ax up to 4×4). To learn more, visit us at www.ceva-dsp.com

About Open-Silicon, Inc.
Open-Silicon transforms ideas into system-optimized ASIC solutions within the time-to-market parameters desired by customers. The company enhances the value of customers’ products by innovating at every stage of design — architecture, logic, physical, system, software and IP — and then continues to partner to deliver fully tested silicon and platforms. The company has partnered with over 150 companies ranging from large semiconductor and systems manufacturers to high-profile start-ups, and has successfully completed 300+ designs and shipped over 135 million ASICs to date. To learn more, visit www.open-silicon.com


ISO 26262 First – ASIL-D Ready Vision Processor IP Available
by Tom Simon on 05-29-2018 at 12:00 pm

Synopsys made a pretty major announcement regarding their new ASIL-B, -C and -D ready embedded vision processor IP. This matters because the design elements and features needed to achieve these ASIL levels cannot be bolted on later, and safety-ready IP of this kind is essential for ADAS and other safety-critical systems in automobiles. These features have to be baked into the architecture, and the tools necessary to support them also need to be available. At the same time, Synopsys has gone to great lengths to ensure that the added safety features have minimal impact on performance.

So, what exactly are some of these features? Their press release itemizes them: “lockstep capabilities, ECC memories, error checking on core registers and safety-critical registers, a dedicated safety monitor, and a windowed watchdog timer for each core. An optional dedicated safety island monitors and executes safety escalations and diagnostics within the SoC and protects system bring-up”.

However, a list of safety features is not enough; a few more essential elements are needed to successfully implement a vision-processor-based SoC for a system that complies with ISO 26262. Performance and capability are the first two major items to check off. The other necessary piece is development tools that are also ASIL ready. Let’s talk about these in order. I was fortunate enough to chat with Gordon Cooper, Product Marketing Manager at Synopsys for EV processors, about these topics. Much of what he discussed went beyond the press release and made it easier to understand the latest announcement.

Gordon told me that the crux of the announcement is that they have added the Safety Enhancement Package to their EV6x family of vision processors. They have also added vector processing that runs up to 1.2GHz with a 10-stage pipeline. The 1.2GHz speed, he pointed out, is at worst case for standard automotive conditions of -40 to 125°C in 16FF. Performance like this is much harder to achieve under those conditions, but they represent what is commonly found in automotive operating environments. Gordon emphasized that benchmarking for these applications is extremely important; it’s not enough to read a spec sheet and try to make a decision.

Gordon talked about data processing requirements for these systems as well. They are seeing 3-4 megapixel image sizes at frame rates of 30 fps, and increasing. By 2020 there will be over 24 cameras per vehicle, as well as radar, all with higher resolutions. Yet different tasks demand different kinds of processors. For instance, monitoring a driver’s face to detect distracted driving is a far different task from pedestrian or object detection. Synopsys offers different configurations of its EV6x family for each of these categories of tasks.
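
Rough arithmetic makes the bandwidth point concrete. This Python sketch assumes 2 bytes per raw pixel (my assumption; actual sensor formats vary):

```python
# Back-of-envelope bandwidth implied by the figures above.
# 2 bytes/pixel is an assumption (e.g., 10-12-bit raw packed into 16 bits).
megapixels = 4
bytes_per_pixel = 2
fps = 30
cameras = 24
per_camera = megapixels * 1e6 * bytes_per_pixel * fps   # bytes/s
total = per_camera * cameras
print(f"Per camera: {per_camera / 1e6:.0f} MB/s; "
      f"{cameras} cameras: {total / 1e9:.1f} GB/s")
# ~240 MB/s per camera, ~5.8 GB/s across 24 cameras -- before radar
```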

Gordon says that after thorough examination they have decided that 8- or 12-bit precision is preferable to the 16-bit precision often used. TensorFlow coefficients start out as 32-bit floats, but they are quantized to 12 or 8 bits for recognition applications, with 8 bits being sufficient in most instances.
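
As an illustration of that step, here is a minimal Python/NumPy sketch of symmetric linear quantization from float32 to int8. Production flows (calibration, per-channel scales, zero-points) are considerably more involved; this just shows the basic idea:

```python
import numpy as np

# Symmetric linear quantization: map the largest weight magnitude to 127,
# round everything else onto the int8 grid, and keep the scale so values
# can be approximately recovered.
def quantize_int8(weights: np.ndarray):
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)   # stand-in for trained weights
q, s = quantize_int8(w)
print("max abs error:", np.abs(w - dequantize(q, s)).max())
```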

On the tools side, they have invested in creating the documentation necessary to facilitate ASIL-D qualification. The tool chain has EV runtime and libraries, including OpenVX and OpenCV kernel libraries. They support C/C++ and OpenCL for developing applications and vision kernels. In addition, there is comprehensive debugging tool support. Synopsys also has a CNN graph mapping tool that helps mapping to the CNN engine. System level simulation support is available with system level models for host and EV processors.

Synopsys already has a large presence in the automotive market. Their processors are used in almost every application within that space. They already have customers who have taken some of the ASIL-D ready EV6x family to silicon. Availability of this IP will help accelerate ISO 26262 certification. Gordon also made it clear that they are going to be supplying their customers with a steady stream of updates to ensure they benefit from the latest research in vision processing. There is extensive material available on their website about the EV6x vision processor family and the Safety Enhancement Package that provides ASIL-D readiness.


Innovation in a Commodity Market
by Bernard Murphy on 05-29-2018 at 7:00 am

Logic simulation is a victim of its own success. It has been around for at least 40 years, has evolved through multiple language standards and has seen significant advances in performance and major innovations in testbench standards. All that standardization and performance improvement has been great for customers but can present more of a challenge for suppliers. How do you continue to differentiate when seemingly everything is locked down by those standards? Some may be excited by the potential of freeware alternatives; however, serious product companies continue to depend on a track record in reliability and support, while also expecting continuing improvements. For them and for the suppliers, where do opportunities for evolution remain?

Performance will always be hot. Progress has been made on a bunch of fronts, from parallelism in the main engine (e.g. Xcelium) to co-modeling with virtual prototyping on one side (for CPU+SW) and emulation on the other (for simulation acceleration). However, I was struck by a couple of points Cadence raised in an SoC verification tutorial at DVCon 2018, which I would summarize as: raw simulator performance only delivers if you use it effectively. Some of this comes down to algorithms, especially in testbenches. It’s easy to write correct but inefficient code; we’ve all done it. Being intelligent about limiting complex calculations, and using faster algorithms and better data structures – these are all performance optimizations under our control. Coding for multi-core is another area where we really shouldn’t assume tools will rescue us from ourselves. (You can check out the tutorial when these are posted by DVCon.)
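
A contrived Python analogue of the “correct but inefficient” trap: two functionally identical scoreboards, one backed by a list (O(n) membership check per transaction) and one by a set (O(1)). The same pattern shows up in SystemVerilog testbenches as linear queue searches every cycle:

```python
import time

# Two functionally identical scoreboards: membership in a list is O(n)
# per transaction, membership in a set is O(1). Both are correct, but
# one silently dominates the run-time.
expected = list(range(100_000))
expected_set = set(expected)
lookups = range(0, 100_000, 100)          # 1,000 "transactions"

t0 = time.perf_counter()
assert all(txn in expected for txn in lookups)       # linear search
t_list = time.perf_counter() - t0

t0 = time.perf_counter()
assert all(txn in expected_set for txn in lookups)   # hash lookup
t_set = time.perf_counter() - t0

print(f"list scoreboard: {t_list:.3f}s, set scoreboard: {t_set:.6f}s")
```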

We can optimize what we have to repeat on each run. I’ve written before about incremental elaboration – rebuilding the simulation run-time image as fast as possible given design changes. Incremental compile is easy, but elaboration (where modules and connections are instantiated) has always been the bottleneck. Incremental elaboration allows for large chunks of the elaborated image to remain untouched while rebuilding just those parts that must be changed. Save/Restart is another widely used feature to minimize rework, since getting through setup can often take 80% of the run-time. However, this capability has historically been limited to understanding only the simulation model state. Now that we have test environments reading and writing files and working with external code (C/C++/SystemC), that basic understanding has limited checkpointing to “clean” states, which can be very restrictive. The obvious refinement is to save total model state in the run, including read and write pointers and the state of those external sims. Which you now can.
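
A minimal sketch of that difference in Python: checkpoint the model together with each open file’s offset so a restart can resume mid-stream. The function names are illustrative only, not any simulator’s actual save/restart API:

```python
import pickle

# "Total model state" checkpointing sketch: alongside the simulation
# model, record each open file's read/write offset so a restart resumes
# exactly where the run left off, not just at a "clean" state.
def save_checkpoint(path, model_state, open_files):
    file_positions = {f.name: f.tell() for f in open_files}
    with open(path, "wb") as ckpt:
        pickle.dump({"model": model_state, "files": file_positions}, ckpt)

def restore_checkpoint(path):
    with open(path, "rb") as ckpt:
        snap = pickle.load(ckpt)
    files = {}
    for name, pos in snap["files"].items():
        fh = open(name, "r")
        fh.seek(pos)               # rewind to the checkpointed offset
        files[name] = fh
    return snap["model"], files
```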

An obvious area for continued innovation is around AMS support, and one especially interesting domain here is power modeling in mixed-signal environments. This gets a little more complicated than in a digital UPF view since now you have to map between voltage values in the analog and power states in the UPF, among other things. The basics are covered in the standards (UPF and Verilog-AMS) but there’s plenty of room to shine in implementation. After all, (a) there aren’t too many industry-hardened mixed-signal simulators out there and (b) imagine how much power you could waste in mixed-signal circuitry if you don’t get it right. Cadence has notes on a few updates in this domain here, here and here.

X-propagation is another area related to power. Perhaps you thought this was all wrapped up in formal checks? Formal is indeed helpful in X-prop, but it can only go so far. Deep-sequence checks are obviously much more challenging, potentially unreachable in many cases. These issues are particularly acute around (switched) power-state functions. Missing isolation on outputs from such a function should be caught in static checks, but checking that isolation remains enabled until the block is fully powered up and ready to communicate ultimately requires dynamic verification.

However, there’s room to be clever in how this is done. Simulation can be pessimistic (always X when possible) or somewhat more optimistic, propagating only the cases that seem probable. Maybe this seems unnecessary; why not just code X’s into the RTL for unexpected cases? It seems the LRM can be overly optimistic (in at least some cases?), whereas X-prop handling through the simulator (no need to change the RTL) gives you more control over optimism versus pessimism. You can learn more about how Cadence handles X-prop in simulation here.
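
A toy Python illustration of the optimism/pessimism trade-off for a single 2-input AND gate, with 'X' standing for unknown (a sketch of the general idea, not any simulator’s implementation):

```python
# Pessimistic vs. optimistic X handling for a 2-input AND, with 'X' for
# unknown. The pessimist returns X whenever any input is unknown; the
# optimist notices that 0 AND anything is 0, so an X on the other input
# cannot matter. Simulator X-prop modes let you choose along this axis
# without changing the RTL.
def and_pessimistic(a, b):
    if a == 'X' or b == 'X':
        return 'X'
    return a & b

def and_optimistic(a, b):
    if a == 0 or b == 0:       # a controlling value dominates an unknown
        return 0
    if a == 'X' or b == 'X':
        return 'X'
    return a & b

print(and_pessimistic(0, 'X'))  # X -- may flag spurious unknowns
print(and_optimistic(0, 'X'))   # 0 -- matches 4-state gate semantics
```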

So yes, the innovation beat goes on, even in simulation, a true veteran of EDA. Which is just as well since it still dominates functional verification and is likely to do so for a long time yet 😎


China Chips Taiwan and Technology
by Robert Maire on 05-28-2018 at 12:00 pm

Three critical China issues: Trade, Taiwan & Technology. China is a “double-edged sword” of risk and opportunity, and these issues greatly impact stock valuations. We recently gave a presentation at both the SEMI ASMC conference in Saratoga Springs and The ConFab conference in Las Vegas. Both conferences include senior management of the semiconductor industry covering a wide variety of topics.


For those who read our newsletter, you know we have opined on China and trade many times; of late the subject has come to the forefront of general news, so this has turned out to be a very timely topic.

In our view it is very clear that the semiconductor industry sits at the very heart of the China trade issue, which can either negatively or positively impact the industry in a huge way. Investors and industry participants must pay particular attention, as the issue has come to a head and the stocks and fortunes of these companies will be greatly impacted.

Right now we see more downside beta than upside. The mere threat of a trade war has likely turned the momentum in the relationship between China and the US negative. Over the last several weeks we have seen a rollercoaster of reversals on trade that has left everyone spinning and confused.

Technology is also at the heart of trade: who has the technology, who wants it, and how they get it all have huge implications. We have already seen some early warning signs of technology ownership issues.

Finally, Taiwan has not been mentioned much, but everyone seems to forget that Taiwan is a short missile flight away from China – which has recently raised the Taiwan issue again by forcing airlines to name Taiwan as part of China. While this may seem petty, it is a more ominous message from China about the future of Taiwan, and with it TSMC and all the semiconductor operations on the runaway island.

Below is a link to the slide deck of the presentation we have given as we think it will be of interest to investors and industry participants alike…

China Chips- Trade Taiwan & Technology

Conclusion: Resistance is Futile- Join the Movement!

*Much like Japan, Taiwan & Korea before it, China’s entry into the semiconductor industry is a normal progression of modernization

*The US will also need alternative suppliers like Micron & GloFo

*The US can participate and profit in China – A huuuuge market

*Everyone must participate with eyes wide open to risks

*The US government can help level the playing field of trade & IP concerns

*China will likely be faster than Japan, Korea or Taiwan in build up

*US must promote & protect & invest in new tech – AI, VR, IoT etc…

*China remains a very sharp double edged sword that cuts both ways


Should EDA Follow a Foundry Model?
by Daniel Nenni on 05-28-2018 at 7:00 am

There is an interesting discussion in the SemiWiki forum about EDA and the foundry business model which got me to thinking about the next disruptive move for the semiconductor industry. First let’s look at some of the other disruptive EDA events that I experienced firsthand throughout my 30+ year career.

When I started in 1984, EDA was dominated by what we called DMV (Daisy, Mentor, Valid). Before that it was Calma running on Data General minicomputers. Back then EDA was a systems business where software was bundled with hardware. Sun Microsystems and Cadence changed that by putting workstations on engineers’ desks, allowing them to pick and choose the software tools they used. EDA then became a software-centric business selling perpetual licenses with yearly maintenance contracts. Software subscriptions soon followed, which caused a bit of financial indigestion for EDA companies, but clearly it was disruption for the greater good.

The most recent EDA disruption is Siemens acquiring Mentor. We are now seeing the effect it is having on the ecosystem, a very positive effect. We now have three VERY competitive EDA companies going upstream from chip to software development to complete systems. It really is an exciting time to be in EDA!

Meanwhile, back at the castle, the majority of commercial software is now in the cloud via a SaaS business model, resulting in gold mines of data and analytics – except, of course, EDA software.

The forum discussion Should EDA Follow a Foundry Model? was started by longtime SemiWiki member Arthur Hanson. Arthur is a hardcore investor who came to SemiWiki looking for semiconductor knowledge to supplement his stock portfolio. Arthur and I have met; we talk on the phone and email. I was just starting to work with Wall Street at the time and found his investor insight quite helpful. Remember, when an outsider asks a question you need to understand what he is asking and why he is asking it.

“Just like a semi foundry takes knowledge in executing making chips for a variety of customers and shares it, yet keeps each customer’s information separate and private, should not an EDA firm be set up in its own cloud so the expertise it develops from monitoring a large number of separate processes for different customers can be used to improve the processes for all its customers? TSM has done an excellent job of keeping individual customer IP separate and private but uses the improvement in process information to the benefit of all. Would not this process, if applied to EDA, speed up the evolution of the design process to the benefit of all through the use of big data? If TSM can keep proprietary information separate and confidential while spreading process improvements, couldn’t EDA firms use the same structure to benefit their customers as well? Auditing the process on a real-time basis could assure security while giving the customer the best practices on a real-time basis. This could also be done on a virtual-machine basis with most of the process done at the customer’s site, although this would be unwieldy and cumbersome compared to a private cloud. Any thoughts, comments or observations on this are appreciated and solicited.”

The resulting discussion is quite interesting, so check it out when you have time. More than five thousand people have viewed it thus far, which is a pretty big discussion if you think about it – and I have. SemiWiki is made up of all levels of semiconductor professionals, from A to C level, and we know who reads what, when, and where, so I can tell you this discussion is resonating at all levels of the ecosystem, absolutely.

My personal opinion is that disruption is again coming to EDA and that disruption will be in the cloud. We did a “Do you want your EDA Tools in the cloud” poll and again the interesting part was who voted and where they were in the ecosystem. The $10B question is: Who is trusted enough to implement EDA in the cloud? The answer is towards the end of the forum discussion:

Originally Posted by count
I think it would be interesting if the foundries, ie TSMC, got into the EDA game and charged a wafer royalty on it as you said. Better yet, a cloud based EDA tool that could also be used for ordering after designs are validated. If it could be integrated in a sort of design to manufacturing workflow, that would be amazing. Especially for smaller customers who are designing IoT chips and are focused on time to market, something like that seems like it could be valuable.

Originally Posted by KevinK
Why would a TSMC or Samsung even consider this option given Cadence’s or Synopsys’ current market caps and revenues? Given the stock premium that an acquisition would cost, either foundry could build two leading-edge fabs for the same price. I don’t know Samsung’s internal economics, but TSMC’s typical return on invested capital (ROIC) runs around 30-40%. Even though an EDA acquisition wouldn’t be “capital” per se, I’m sure that the foundries would use their ROIC as a hurdle rate for other major uses of money. Neither EDA company offers close to that rate, even before considering the revenue haircut an EDA/IP company would suffer once tied to a single foundry.

Originally Posted by Daniel Nenni
One word: Disruption
Do you actually think Intel Foundry or Samsung Foundry or any other IDM foundry for that matter has a chance at catching up with TSMC while playing by TSMC’s rules? Much less beating them? It’s not gonna happen. Intel or Samsung could buy Cadence or make a significant investment and cut a wafer royalty deal in the cloud exclusive to their customers. Foundries, better than EDA companies, could pull off EDA in the cloud, absolutely.

Just my opinion of course…