DAC56 Keynotes and SKYtalks – The Big Picture
by Daniel Payne on 04-11-2019 at 12:00 pm

Many of us have engineering degrees and are well paid to maintain a deep but narrow focus on a specific domain, but what about the big picture, like industry trends and emerging challenges? Well, DAC56 has just the thing to give us a front-row seat to the big picture, and it’s contained in both the Keynotes and SKYtalks.

Hors D’Oeuvres from Chaos

In the 1970s, while living at home listening to the radio, I stumbled upon a new type of music made with a synthesizer: it was played by Walter Carlos, it was called Switched-On Bach, and I was hooked on both synthesizers and classical music. Likewise, Thomas Dolby has had a colorful musical career that also started out with synthesizers, and now he has a popular YouTube channel. What if music were to embrace principles from AI and deep learning? Experience his keynote on Tuesday, June 4th at 9:20AM at the Keynote booth 1145.

From student project to tackling the major challenges in realizing safe & sustainable electric vehicles

As an avid cyclist I also feel a bond with motorcycle riders, because we share a common two-wheel ethos and are experiencing the gradual electrification of our transportation. Bas Verkaik and a group from the Eindhoven University of Technology actually built an electric motorcycle, then rode it around the world in just 80 days. After completing such an epic feat, the group spun off and created SPIKE Technologies. Find out how this energetic group created their own electric motorcycle and what they learned along the way: Wednesday, June 5th at 9:20AM at the Keynote booth 1145.

Reverse Engineering Visual Intelligence

I love watching SciFi movies where the plot involves getting inside the human mind and creating an alternate reality; think The Matrix. James DiCarlo, MD, PhD, from MIT is studying Human Intelligence (HI) and Artificial Intelligence (AI), and has an approach that uses neural network models to form the next generation of computing. How we humans use vision will be the focus of this keynote, held on Thursday at 9:20AM at the Keynote booth.

Cutting Edge AI: Fundamentals of Lifelong Learning and Generalization

If you’ve been reading SemiWiki over the past 12 months, then you’ll recall that most of the recent VC money is pouring into AI startups doing new chips and software to tackle AI challenges. Hava Siegelmann from DARPA is presenting a Sky Talk on Monday at 1PM in the DAC Pavilion, booth 871. Find out what she’s learned about AI by studying nature: Super-Turing computation, stochastic and asynchronous communication, adaptivity and interactive computation.

The Memory Futures

I started out my IC design career doing DRAM chips, so I love all things memory related. The two biggies in memory today are NAND Flash and DRAM, so find out from Micron expert Gurtej Sandhu, Ph.D. how memory companies keep squeezing even greater densities out of each new process node. How in the world do they keep scaling to 5nm and smaller? His Sky Talk is on Tuesday, June 4th at 1PM in the DAC Pavilion.

Incorporation of Security into Chip Design

I first met Serge Leef while at Mentor in Wilsonville, Oregon, and have kept in touch with him over the years; since August 2018 he’s been with DARPA as a program manager in the Microsystems Technology Office (MTO). Literally every week I read headlines about cybersecurity threats and data breaches, so his talk will delve into automating security within the actual chip design process. This Sky Talk is on Tuesday, June 4th at 2:15PM in the DAC Pavilion.

Summary

Plan on attending DAC this year, June 2-6 in sunny Las Vegas: get the big picture and inspiration from the Keynotes and Sky Talks, meet these interesting speakers, and say hello to some of us SemiWiki folks as we walk around the exhibit area, learning more about our semiconductor industry up close. Expect around 6,000 attendees and over 170 companies in the exhibit area, along with a strong technical program.


A Collaborative Driven Solution
by Alex Tan on 04-11-2019 at 7:00 am

Last week TSMC announced the availability of its complete 5nm design infrastructure, which enables SoC designers to implement advanced mobile and high-performance computing applications for the emerging 5G and AI-driven markets. This fifth-generation 3D FinFET design infrastructure includes technology files, PDKs (Process Design Kits), tools, flows and IP, all of which have been developed and validated on multiple silicon test vehicles through earlier collaboration with leading EDA and IP vendors.

Normally, each process node shift is expected to deliver significant improvements in one or more of the PPAC (Performance, Power, Area or Cost) design metrics. For example, the innovative scaling features of the full-fledged EUV 5nm node deliver 1.8X the logic density and a 15% speed gain on an ARM® Cortex®-A72 core testcase. While process refreshes arrive on a regular cadence (about every 18 to 24 months), the intricacies imposed by each new process technology keep rising, and their direct impact on the EDA space is felt first and foremost by the physical verification and circuit simulation tools.

Mentor, a Siemens Business, has been the industry leader in physical verification through its Calibre physical verification (PV) platform, which includes Calibre nmDRC and Calibre nmLVS. For a design signoff tool, the three most sought-after criteria in PV are accuracy, reliability and performance, all of which are attainable through tight collaboration with both the targeted foundry and alpha customers. Rigorous foundry trials, such as TSMC’s double-blind QA procedure, have helped facilitate tool and design flow readiness.

Design Density, Performance and Rule Complexity
Physical verification has evolved around design rule development and checking, and rule complexity is directly proportional to the device and interconnect technology of the underlying process. Despite the slowdown of Moore’s law, design density is still increasing, driven by the relentless demand for compute power to process data in the cloud and at the edge. Historically, transistor count has been the classic metric for this forward trend. Recent multi-core designs and the inclusion of ever more IP have driven transistor counts up, pushing the number of design rules, and the operations needed to implement those rules, upward with them. This non-linear growth in DRC rules challenges design teams’ timely adoption of each new process node.
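To make “rules and operations” concrete, here’s a toy Python sketch of what a single minimum-spacing check involves (illustrative only, not Calibre’s actual rule syntax); a production signoff deck at an advanced node contains thousands of such operations, each far more intricate than this one:

```python
from itertools import combinations

# A toy spacing "operation" (illustrative Python, not Calibre rule syntax).
# Rectangles are (x1, y1, x2, y2) in nm on a single metal layer.
def spacing(a, b):
    """Edge-to-edge distance between two axis-aligned rectangles."""
    dx = max(b[0] - a[2], a[0] - b[2], 0)
    dy = max(b[1] - a[3], a[1] - b[3], 0)
    return (dx * dx + dy * dy) ** 0.5

MIN_SPACE = 32  # nm, a hypothetical metal spacing rule

shapes = [(0, 0, 100, 40), (120, 0, 220, 40), (0, 60, 100, 100)]
for a, b in combinations(shapes, 2):
    s = spacing(a, b)
    if 0 < s < MIN_SPACE:
        print(f"spacing violation: {a} vs {b} -> {s:.1f} nm < {MIN_SPACE} nm")
```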

Deep Collaboration and EDA Tool Certifications
A key success criterion for tool certification is incorporating new functionality based on foundry requirements in the early stages of process node development. During this development stage, the foundry needs to step through its learning curve, bootstrapping experience from prior nodes to improve the overall ramp time. Over the years, Mentor has participated in repeated successful collaborations with multiple foundries covering the three main physical verification areas (DRC, LVS, xACT/xRC).

Having foundries use Calibre tools internally as they develop a new process provides the most valuable return, as it allows earlier identification and simultaneous fine-tuning of foundry design requirements while hardening the verification tools with any needed rules. For example, Mentor Calibre has been part of the TSMC EDA tool certification.

“TSMC’s 5-nanometer technology offers our customers the industry’s most advanced logic process to address the exponentially growing demand for computing power driven by AI and 5G,” said Cliff Hou, Vice President of Research & Development/Technology Development at TSMC. “5-nanometer technology requires deeper design-technology co-optimization. Therefore, we collaborate seamlessly with our ecosystem partners to ensure we deliver silicon-validated IP blocks and EDA tools ready for customer use. As always, we are committed to helping customers achieve first-time silicon success and faster time-to-market.”

At a very advanced node such as TSMC 5nm, deeper design-technology co-optimization is indeed necessary. Such early collaborative efforts among the foundry, the EDA provider and alpha customers culminate in a number of pilot tapeouts and the start of the silicon risk-production cycle. For example, the flurry of 5nm pilot tapeouts over the last few quarters will be followed by silicon bring-up in the second half of 2019.

Tool Capacity, Memory and Runtime
Tool scalability involves several variables, such as code vectorization and an optimal memory footprint. Memory usage is a key metric that also ties to tool performance. The diagram in Figure 2 shows the normalized Calibre engine performance trend resulting from continuous speed improvements over several process nodes. Comparing two recent Calibre nmDRC versions across six different 7nm designs, Mentor reported a consistent 40-50% decrease in memory usage as the underlying data structures and memory management techniques were improved.

Calibre facilitates both pre- and post-run physical validation by providing easy-to-use interfaces for navigating and visualizing complex verification errors. Without proper integration and planning, completing a verification task can incur significant post-run analysis time. This can be minimized by enabling the many available Calibre features to configure, launch, review and debug within the designer’s chosen flow, since Calibre is built to accommodate many third-party and internal design team flows. For example, Calibre uniquely uses special debug layers for double-patterning debugging, and automated waiver processing to mask out IP errors during chip integration debugging.


The immense challenges of a process node shift strain all of the silicon ecosystem’s stakeholders: foundries, designers and EDA companies. Beyond having ample solution expertise and commitment, EDA companies such as Mentor have turned to deep collaboration and partnership with foundries and designers to perform early process exploration and enable successful deployment of the needed toolset, including the Calibre physical verification tools.

Check HERE for more discussion of the Mentor Calibre physical verification tools for advanced process nodes.


Functional Verification using Formal on Million Gate Designs
by Daniel Payne on 04-10-2019 at 12:00 pm

Verification engineers are the unsung heroes making sure that our smart phone chips, smart watches and even smart cars function logically, without bugs or unintended behavior. Hidden bugs are important to uncover, but what approach is best suited for this challenge?

With the Universal Verification Methodology (UVM) there’s the constrained-random approach, which can find bugs that designers or verification engineers never thought of. The downsides of constrained-random are its limitation to smaller DUTs, not covering all state spaces, missing corner-case bugs and not finding all Trojan paths.
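For readers outside the verification world, the idea is to generate stimulus at random, but only within legal bounds. Here’s a minimal plain-Python sketch of the concept (the real thing lives in SystemVerilog/UVM, and the packet fields and constraints here are made up for illustration):

```python
import random

# Constrained-random stimulus in plain Python (the real thing lives in
# SystemVerilog/UVM); the packet fields and constraints are hypothetical.
def random_packet(rng):
    return {
        "addr": rng.randrange(0x0000, 0x1000, 4),        # constraint: word-aligned
        "len":  rng.choice([1, 2, 4, 8]),                # constraint: legal burst sizes
        "op":   rng.choice(["READ", "WRITE", "ATOMIC"]), # constraint: valid opcodes
    }

rng = random.Random(2019)  # seeding makes any failing test reproducible
for _ in range(3):
    print(random_packet(rng))
```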

Continue reading “Functional Verification using Formal on Million Gate Designs”


Lip-Bu Keynote at CDNLive 2019
by Bernard Murphy on 04-10-2019 at 7:00 am

Cadence CEO Lip-Bu Tan is always an interesting guy to listen to for his broader technology industry overview and his insight into emerging tech through his Walden International investments. Though we’re usually heads-down in challenging technical problems, it’s good to look up from time to time to check whether what we are working on is hot or not – not always a secure bet in the rapidly-changing markets of today.

Lip-Bu gave the opening keynote at CDNLive Silicon Valley this year, starting with a theme of the data-driven economy, which is driven by 5G, machine-learning (ML), the cloud and Industry 4.0. His views here were pretty much in sync with what you’ll hear from the infrastructure equipment guys and many others, that demand is all about data creation, processing, transmission and storage.

He echoed a number of points I have written about before, which is at least validation for me that I’m not making this stuff up. First, the McKinsey analysis of the impact of AI on semiconductors: better hope that AI isn’t a passing fad, because McKinsey projects the CAGR for AI-based silicon will be 5X that of non-AI silicon up through 2025, though Lip-Bu dialed this down to 3X. He also supported the view that more systems companies are building silicon: the hyperscalers certainly, but also infrastructure equipment makers coming back to silicon design, new silicon startups and more industries moving to (more) electronification (is that a word?).

Where does he put his money? The Walden portfolio includes investments from the cloud to the edge to devices. In the cloud, the focus is on scale-out, security and micro-verticals for AI in healthcare and robotics, automotive, smart devices and vision.

What I found most interesting is how Lip-Bu and his team are positioning Cadence to meet these needs, which they call SDE 2.0, for intelligent system design. In these areas, Lip-Bu always stresses humility; they’re not trying to become something radically different. They’re instead building on their core competency in design excellence through computational software (EDA+IP) while extending into adjacent areas through system innovation and pervasive intelligence. In system innovation, they see opportunities in system analysis, embedded software and security. In pervasive intelligence, they see opportunities in their Tensilica platform and algorithmic know-how in verticals. All of this will continue to be heavily supported by a culture of organic innovation: end-to-end tool rewrites, massive parallelism, machine learning and cloud enablement. Importantly for Cadence, and for EDA in general, Lip-Bu sees the potential for this expansion to break through the $10B ceiling and reach $30B.

He also talked about the Cadence cloud strategy. I know that Dan has already written about this, so I won’t spend much time on this topic. What did strike me was their CloudBurst introduction, a hybrid cloud, making it easier for teams to burst excess workload onto a cloud (AWS or Azure) while still taking advantage of their in-house compute assets. Makes a lot of sense to me, at minimum as a half-step to broader cloud-based deployment.

Particularly interesting to me is the Cadence partnership with Green Hills Software. “Partnership” understates the relationship as Cadence has taken a 16% ownership stake in the company and Lip-Bu sits on the board. This might be viewed as a counter to the Synopsys direction in software, though I see it as more like the Mentor direction, but stronger. Green Hills has strong safety and security software IP, such as their RTOS, and is well-established with automotive, aerospace and defense customers. The partnership offers an opportunity to bridge safety and security assurance across the hardware/software divide.

Interesting directions and interesting insights. Cadence clearly understands where the industry is headed and seems to be making some aggressive moves to follow these trends.


IC Integrity Thesis
by Jim Hogan on 04-09-2019 at 12:00 pm

Most of my investments are associated with large changes in the semiconductor industry. These changes create opportunities for new and disruptive technologies. I also look to find solutions that provide a compelling reason to adopt a new technology or approach. When talking about a new approach, it often takes longer to overcome the status quo.
Continue reading “IC Integrity Thesis”


IP-XACT The Answer for IP Reuse
by Tom Simon on 04-09-2019 at 7:00 am

To a lawyer, the term intellectual property means just about anything intangible that has value. However, when you bring that term up in the context of semiconductor design, it means something pretty specific to most people. Yet the implied meaning of the term intellectual property (IP) within the semiconductor field has changed over the years. In my estimation, the term IP has applied to things that can be reused, especially in the form of saleable design content. Initially IP was used to refer to explicitly reusable things like libraries and hard macros that were sold to other enterprises.

Over many years the meaning has expanded in two dimensions: more for design content used internally and for higher levels of abstraction such as RTL, simulation views, and system or architectural level specifications. Not surprisingly, the definition has shifted as a result of improvements in the ability to actually reuse design content.

The industry began making meaningful steps toward improving IP reuse back in 2003, when the SPIRIT Consortium started developing the IP-XACT standard. The SPIRIT Consortium was made up of companies that use EDA tools and companies that develop them; its board included ARM, Cadence, Freescale, LSI, Mentor, NXP, ST, Synopsys and TI. The composition of this board speaks to the perceived importance that IP and its reusability had gained. Later, in 2009, SPIRIT merged with Accellera, another highly regarded EDA standards organization, which continued to develop IP-XACT for enabling IP reuse.

In 2010 IP-XACT became an IEEE standard, IEEE 1685. According to Accellera: “The IP-XACT forms that are standardized include: components, systems, bus interfaces and connections, abstractions of those buses, and details of the components including address maps, register and field descriptions, and file set descriptions for use in automating design, verification, documentation, and use flows for electronic systems. A set of XML schemas of the form described by the World Wide Web Consortium (w3c) and a set of semantic consistency rules (SCRs) are included. A generator interface that is portable across tool environments is provided. The specified combination of methodology-independent meta-data and the tool-independent mechanism for accessing that data provides for portability of design data, design methodologies, and environment implementations.”

The net result is that now when we apply the term IP to semiconductors, it can mean almost any facet of each stage of a design. This is no accident. The goal has always been to be able to reuse previous design efforts on new projects without starting over from scratch. IP-XACT has made a meaningful contribution to this.

One interesting consequence of this is that there are now more consumers of IP-related information. Originally there were “customers” who received a well-defined package as a commercial product. I do not want to underplay the challenge in doing this, but now there can be multiple internal and external recipients for various views of IP. IP-XACT can help greatly with this process.

Of course, IP-XACT is useful for creating designs. It systematically allows designers to define interfaces, registers, address maps, etc. If everyone is working from the same playbook, fewer errors will occur during integration. It also enables earlier communication about system elements. The actual design, be it Verilog, VHDL, netlist or another EDA-tool-specific representation, is held in its native format. IP-XACT can provide an ideal mechanism to distribute a specific subset, or representation, of the design to each group that needs it.
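To give a flavor of what a machine-readable register description buys you, here’s a small Python sketch that derives an address map from a simplified, IP-XACT-style fragment (real IP-XACT is namespaced XML per IEEE 1685, and this component and its registers are invented for illustration):

```python
import xml.etree.ElementTree as ET

# A simplified, IP-XACT-style fragment (real IP-XACT is namespaced XML
# per IEEE 1685; this component and its registers are invented).
COMPONENT = """
<component name="uart">
  <addressBlock name="regs" baseAddress="0x4000">
    <register name="CTRL"   offset="0x0" size="32"/>
    <register name="STATUS" offset="0x4" size="32"/>
    <register name="BAUD"   offset="0x8" size="32"/>
  </addressBlock>
</component>
"""

root = ET.fromstring(COMPONENT)
block = root.find("addressBlock")
base = int(block.get("baseAddress"), 16)
for reg in block.iter("register"):
    addr = base + int(reg.get("offset"), 16)
    print(f"{reg.get('name'):8s} @ 0x{addr:04X} ({reg.get('size')} bits)")
```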

Magillem, a leader in IP-XACT solutions, has created a group of tools for working with IP-XACT. One of them is aimed at generating design distributions that are tailored to the exact needs of a specific consumer. It is easy to see that a verification team might need simulation views and the design hierarchy. Or, at the other end of the spectrum, a customer might need only the hard macros, but not the RTL or netlist used to create them.

In Magillem’s solution, a design consists of views that point to various file sets. The IP-XACT representation understands the design hierarchy and can be used to selectively traverse it to assemble all the necessary elements. Views can be generated individually or as part of a fully elaborated hierarchical assembly.

I recently had a conversation with Vincent Thibaut, Chief Strategy Officer at Magillem, about their generation capability. He talked about the Magillem IP-XACT Packager (MIP), which takes legacy IP that lacks IP-XACT information and creates an IP-XACT certified description, making it ready for use in an IP-XACT system. Magillem customers can then use the Magillem Generator Studio (MGS) to create the views needed by any consumer. Data sheets for both the Magillem IP-XACT Packager and the Magillem Generator Studio are available for download on their website. In addition, Magillem offers a full suite of IP-XACT tools for managing every aspect of the IP-XACT process.


Cloud-based Functional Verification
by Daniel Payne on 04-08-2019 at 12:00 pm

The big three EDA vendors are constantly putting more of their tools in the cloud in order to speed up the design and verification process for chip designers, but how do engineering teams approach using the cloud for functional verification tests and regressions? At the recent Cadence user group meeting (CDNLive) there was a presentation by Vishal Moondhra from Methodics, “Creating a Seamless Cloudburst Environment for Verification.”

Here’s a familiar challenge: how to run thousands of tests and regressions in the shortest amount of time using the hardware, EDA software licenses and disk space on hand. Well, if your existing local resources aren’t producing results fast enough, then what about scaling all or part of the work into the cloud?

Regression testing is well suited for cloud-based simulations because the process is non-interactive. The hybrid approach taken by Methodics uses both on-premise and cloud resources as shown below:
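As a rough sketch of what that split could look like, here’s a minimal Python dispatch policy; the slot and license counts are hypothetical, and this shows the general idea rather than Methodics’ actual scheduler:

```python
# Hypothetical slot/license counts; the policy idea, not Methodics' scheduler.
def dispatch(jobs, local_slots, licenses):
    concurrent = jobs[:licenses]      # EDA licenses cap total concurrency
    local = concurrent[:local_slots]  # on-premise machines fill up first
    cloud = concurrent[local_slots:]  # the overflow bursts to the cloud
    queued = jobs[licenses:]          # the rest waits for a free license
    return local, cloud, queued

jobs = [f"test_{i:04d}" for i in range(1000)]
local, cloud, queued = dispatch(jobs, local_slots=200, licenses=600)
print(len(local), "local,", len(cloud), "cloud,", len(queued), "queued")
```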

OK, so seamlessly running regression jobs in the cloud looks possible in theory, but what are the obstacles?

1. Synchronizing data
2. Design environment in the cloud
3. Optimizing cloud resources
4. Getting results quickly and efficiently

Synchronizing Data
Many large IC teams have engineers in multiple regions around the globe, doing work 24/7 and making constant changes to design files. A modern project can have thousands of files, taking up hundreds of GB of space, containing both ASCII text and binary files, simulation log files, even Office documents. Synching all of this data could be kind of slow.

At Methodics they’ve tackled this issue with a product named WarpStor, which under the hood uses instantaneous snapshots of your design database, along with deltas from previous snapshots. All file types are properly handled with this method.

You get a new snapshot when there’s a new project release using the Percipient IP Lifecycle Management (IPLM) tool, or with a simple rule – maybe every 10 commits in the project.
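Here’s a generic Python sketch of the snapshot-plus-delta idea suggested by that description; it illustrates the technique, not Methodics’ implementation:

```python
import hashlib, os

# Generic snapshot-plus-delta sync, the technique the description above
# suggests; this sketch is not Methodics' actual implementation.
def snapshot(root):
    """Map each file path to a content hash (text and binary alike)."""
    state = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as fh:
                state[path] = hashlib.sha256(fh.read()).hexdigest()
    return state

def delta(prev, curr):
    """Only files changed since the previous snapshot need to be shipped."""
    return [p for p, h in curr.items() if prev.get(p) != h]

before = snapshot("design_db")  # e.g. taken at the last project release
# ... engineers keep committing changes ...
after = snapshot("design_db")
print(f"{len(delta(before, after))} files to sync to the cloud")
```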

Design environment in the cloud
Testing an SoC involves lots of details: the OS version and patch level, EDA vendor tools and versions, environment variables, configuration files, design files, model files, input stimulus, foundry packages. You get the picture.

Your CAD and IT departments are familiar with these environment details, which are created with scripts and vendor frameworks, but what if they presume that all files are on NFS? Methodics has come up with a tool called Arrow to control how tests are executed in the cloud for you, so the right version dependencies are used, the proper OS version is selected, and all of the synched design files are in the cloud ready to run.

Optimizing cloud resources
Sure, you have to pay for CPU cycles and disk storage in the cloud, so the Arrow orchestration engine acquires just the right compute and storage resources while minimizing overhead. Arrow brings up hypervisors and container instances for each of your tests; each container is returned to the pool when its test completes and passes, while Arrow keeps your failing tests around in live containers, tidying up by removing logs and other un-needed files from passing tests.

Containers get provisioned quickly, in under a second, using recipes, and your workspaces are built from a ‘snapshot’ using WarpStor.

A job manager like Jenkins can then be invoked by Arrow whenever your container and workspace are ready.
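That container lifecycle boils down to a simple policy, sketched below in Python with a hypothetical Container stand-in (the concept, not Arrow’s code):

```python
# The per-test container policy described above: recycle containers from
# passing tests after pruning logs, keep failing tests live for debug.
# The Container class is a hypothetical stand-in, not Arrow's API.
class Container:
    def __init__(self, name):
        self.name = name

    def prune(self):
        print(f"{self.name}: logs and scratch files removed")

def reap(results, pool, live_failures):
    """results maps test name -> (container, passed)."""
    for test, (c, passed) in results.items():
        if passed:
            c.prune()                 # tidy up after a passing run
            pool.append(c)            # container goes back to the pool
        else:
            live_failures[test] = c   # kept alive for interactive debug

pool, failures = [], {}
reap({"t_001": (Container("c1"), True),
      "t_002": (Container("c2"), False)}, pool, failures)
print(f"pooled={len(pool)}, live failures={list(failures)}")
```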

Getting results quickly and efficiently
Running all of those functional verification tests is sure to create GBs of output data: log files, coverage results and values extracted by scripts. You decide exactly which information should be tagged and analyzed later, while un-needed data is purged.

Summary
IC design teams are stretching the limits of on-premise design and verification computing resources, so many are looking to add cloudburst capabilities to help with demanding workloads like functional regression testing. Methodics has thought through this challenge and come up with the WarpStor Sync and Arrow tools, which work together to enable cloud-based functional verification.

Read the 10-page White Paper online, after a brief registration process.



The Answer to Why Intel PMOS and NMOS Fins are Different Sizes
by Jerry Healey on 04-08-2019 at 7:00 am

Like many others, we have often wondered why the PMOS fins on advanced microprocessors from Intel are narrower than the NMOS fins (6nm versus 8nm). This unusual dimensional difference first occurred at the 14nm node, and it coincided with the introduction of Solid State Doping (SSD) of the fins at that node.


We have concluded that the difference in fin dimensions occurs as a consequence of the SSD process. In the SSD process the PMOS fins experience a total of five etch operations, whereas the NMOS fins experience only two. Each of these etches, especially the ones that remove the Boron-doped glass, requires a slight silicon etch to ensure complete removal of the doped glass from the surface of the fins, and each results in a slight thinning of the silicon fins.

As a consequence of the PMOS fins receiving five etches to the NMOS fins’ two, the PMOS fins are slightly thinner than the NMOS fins.

The SSD process begins after the P-Well and N-Well formation and the fin etch. These operations are followed by the deposition of a thin 5nm layer of Boron-doped glass. The P-Well is then masked and the Boron-doped glass is etched away from the N-Well region (refer to Figure #1). This etch involves a slight silicon etch that thins the PMOS fins slightly. The NMOS fins will not see this etch (recall that the PMOS transistor fins are located in the N-Well and the NMOS transistor fins in the P-Well).

The PMOS and NMOS fins are then encased in a thick layer of oxide that is CMPed and etched back to the boundary between the undoped portion of the fin and the well (refer to Figure #2). This is the first etch that the NMOS fins experience and the second seen by the PMOS fins. However, because this etch mainly removes undoped glass, it is unlikely to thin the silicon fins.

The wafers are then annealed to drive the Boron into the P-Well along the lower edges of the fins. The Boron glass has already been removed from the N-Wells, so they do not see this extra dopant.

All of the glass is then removed from the fins, including the layer of Boron-doped glass along the base of the P-Well (refer to Figure #3). This is the second etch that the NMOS fins experience and the third seen by the PMOS fins. Since Boron-doped glass is being removed, this etch also slightly etches both the PMOS and the NMOS fins.

Next, a double layer of oxide (2nm thick) and SiON (2nm thick) is deposited across the wafers. The P-Well is then masked and this double layer is removed from the N-Well. This operation is followed by the deposition of a 5nm layer of Phosphorus-doped glass.

A thick layer of undoped glass is then deposited, encasing the fins. This oxide is polished and then etched back to the boundary between the undoped portion of the fin and the well (refer to Figure #4). During this etch, Phosphorus-doped glass is removed from the undoped portion of both the PMOS and the NMOS fins down to the well boundaries. However, the NMOS fins are still covered by their protective double layer of oxide + SiON and do not experience this etch. Since these protective layers have been removed from the PMOS fins, they fully experience this etch and are thinned as a result. This is the fourth PMOS fin etch.

The wafers are then annealed to drive the Phosphorus dopant into the N-Well along the edges of the fins. The P-Well does not experience this dopant drive because the thin protective layers of oxide + SiON shield it from any dopant diffusion.

All of the oxide (doped and undoped) is then removed from between the fins (refer to Figure #5). The NMOS fins are still protected by the thin layers of oxide + SiON, so they do not experience this etch. However, the PMOS fins do experience this etch, and it thins them. This is the fifth PMOS fin etch.

Finally, two thin layers of oxide + SiON are deposited, followed by a thick layer of STI oxide. The STI oxide and the thinner layers of oxide + SiON are etched down to the undoped fin/well boundary, leaving STI oxide between all of the fins. Since this etch removes undoped glass, the fins are unaffected.

So the difference in the dimensions of the PMOS and NMOS fins is the result of the SSD process subjecting the two fin types to a different number of etch operations designed to remove doped glasses.
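To see how the arithmetic could work out, here’s a back-of-the-envelope Python tally of the thinning etches described above; the per-etch silicon loss and as-patterned fin width are assumptions chosen only to reproduce the observed ~8nm/~6nm widths:

```python
# Assumed numbers, chosen only to reproduce the observed ~8nm NMOS and
# ~6nm PMOS fin widths; etches of undoped glass are excluded because,
# as noted above, they leave the silicon fins essentially untouched.
THINNING_ETCHES = [
    ("Boron glass strip from the N-Well (Figure #1)", {"PMOS"}),
    ("All-glass removal (Figure #3)",                 {"PMOS", "NMOS"}),
    ("Phosphorus glass etch-back (Figure #4)",        {"PMOS"}),
    ("Doped-oxide removal (Figure #5)",               {"PMOS"}),
]
LOSS = 0.67    # nm of fin width lost per doped-glass etch (assumption)
START = 8.67   # nm as-patterned fin width (assumption)

for fin in ("NMOS", "PMOS"):
    n = sum(1 for _, exposed in THINNING_ETCHES if fin in exposed)
    print(f"{fin}: {n} thinning etch(es) -> ~{START - n * LOSS:.1f} nm")
```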


For more information on this topic, and for detailed information on the entire process flows for the 10/7/5nm nodes, attend the course “Advanced CMOS Technology 2019,” to be held on May 22-24 in Milpitas, California.

https://secure.thresholdsystems.com/Training/AdvancedCMOSTechnology.aspx

Written by: Dick James and Jerry Healey


Which Way is Up for Lyft, Uber?
by Roger C. Lanctot on 04-07-2019 at 7:00 am

Lyft’s initial public offering was expected to be the biggest tech offering in two years. A public offering is very much like an elevator, and everyone getting on the elevator wants to go up. It’s worth noting that as the doors open on the Lyft IPO elevator, General Motors is likely to be getting off – and they are not alone.

Why would anyone get off this elevator if the party is upstairs in the penthouse? Probably because Lyft is making its grab for cash long after most of the meatiest global markets have been sewn up. As if to emphasize the point, Uber made an 11th-hour bid for Careem to lock up the Middle East, one of the few remaining under-served and potentially lucrative markets.

There are a couple of reasons to raise cash with a public offering. The most important one is to stimulate growth. In Lyft’s case that can only come from international expansion – a prospect that promises to deepen the financial crater wherein Lyft already resides.

Didi, Yandex, Uber, Ola, Gett. These are the companies that have corralled the largest global ride-hailing market opportunities. More importantly, these are the companies that have done battle with the regulators, worked out the licensing, fought off the taxi drivers and established their market presence.

This is no party for a fashionably late arrival. The low-hanging fruit has been picked. The battles have been fought. The lessons have been learned.

While Uber has had a tough go of it here and there, conceding China to DiDi, for example, the company has crafted a survival strategy that is already altering market conditions and growth prospects globally. The key to Uber’s strategy – where it has been forced to innovate – has been sub-contracting with existing taxi drivers in markets ranging from the Middle East to Eastern Europe, Scandinavia and Japan.

This go-along-to-get-along approach is helping to keep Uber in business – i.e. customers can request an Uber even though it is a taxi that will provide the ride. It has kept Uber in play while changing the underlying path to profit.

At least Uber has a path forward. Lyft has nothing but market barriers and an endless vista of loss-producing opportunities. As if to further drive the point home, Uber has reduced driver per-mile compensation in some markets, stirring up resentment among its drivers.

The tweak to compensation only emphasizes the reality that Uber can readily undercut Lyft at any time – especially given how many Lyft drivers also drive for Uber. At the touch of a button, Uber can have Lyft on its knees in the U.S., while steadily inching its way to profitability overseas.

In spite of this, the Lyft IPO is poised for liftoff. No doubt the early investors, like GM, will cash out along with a mass of insiders. In the absence of a Google- or Facebook-like upside, these investors will clearly be counting on either an acquisition exit or an autonomous vehicle breakthrough. The prospects of either of these outcomes are slender.

I will happily continue to use Lyft in the U.S., but I won’t be entering that elevator. Too many people will be getting off it when markets open today. There is wisdom in that decision.


The ESD Alliance Welcomes You to an Evening with Jim Hogan and Paul Cunningham
by Bob Smith on 04-05-2019 at 7:00 am

An informal “Fireside Chat” like no other, featuring Jim Hogan, managing partner of Vista Ventures, LLC, and Paul Cunningham, Cadence’s corporate vice president and general manager of the system verification group, is in the works for Wednesday, April 10.

Hosted by the ESD Alliance, a SEMI Strategic Association Partner, at the SEMI corporate headquarters in Milpitas, Calif., the evening will offer plenty of time for networking, dinner and insights from Jim and Paul.

Naturally, Jim and Paul have more than enough to talk about in just one hour-long discussion. Paul’s experiences are varied and include being an entrepreneur as founder and CEO of Azuro, which was acquired by Cadence in 2011. He now manages Cadence’s system functional verification activities, after moving over from physical design tools development 18 months ago.

Attendees can expect to hear about the verification challenges ahead, as well as open-source architectures and the necessary development platforms. Paul also worked in artificial intelligence before it was an industry trend and may be asked about this as well.

The evening begins at 6:30pm with dinner and runs until 9pm. SEMI is located at 673 S. Milpitas Boulevard in Milpitas, Calif., and has ample free parking.

Everyone from the electronic system and semiconductor design ecosystem is welcome to attend. The event is free of charge to all ESD Alliance and SEMI member companies; non-members and guests can attend for a fee of $40. Registration information can be found at: https://bit.ly/2Ot4azb

Please plan to join us and bring your questions and observations. Be sure to ask us about our newest initiative, ES Design West, co-located with SEMICON West 2019, July 9-11 at San Francisco’s Moscone Center South Hall.

About Jim Hogan and Paul Cunningham
Jim Hogan doesn’t really need an introduction from me. He is well-known as an experienced senior executive and tireless advocate for our electronic system and semiconductor design ecosystem who has worked in the semiconductor design and manufacturing industry for more than 40 years. Jim currently serves as a board director for electronic design automation, intellectual property, semiconductor equipment, material science and IT companies.

Paul Cunningham’s product responsibilities at Cadence include logic simulation, emulation, prototyping, formal verification, Verification IP and debug. Previously, he was responsible for Cadence’s front-end digital design tools, including logic synthesis and design-for-test. Cunningham joined Cadence through the acquisition of Azuro, a startup developing concurrent physical optimization and useful-skew clock tree synthesis technologies, where he was a co-founder and CEO. He holds a Master of Science degree and a Ph.D. in Computer Science from the University of Cambridge, U.K.

About the Electronic System Design Alliance
The Electronic System Design (ESD) Alliance, a SEMI Strategic Association Partner representing members in the electronic system and semiconductor design ecosystem, is a community that addresses technical, marketing, economic and legislative issues affecting the entire industry. It acts as the central voice to communicate and promote the value of the semiconductor design ecosystem as a vital component of the global electronics industry. Visit www.esd-alliance.org to learn more.