
TSMC in the Cloud Update #56thDAC 2019

by Daniel Nenni on 06-13-2019 at 10:00 am

During my Taiwan visit, prior to Las Vegas, I was fortunate to spend time with Willy Chen and Vivian Jiang to prepare for the cloud panel I moderated at #56thDAC. Willy and Vivian are part of the ever-important Design Infrastructure Marketing Division of TSMC, which includes the internal and external cloud efforts. TSMC first announced their external cloud offering last year ("TSMC Announces Initial Availability of Design-in-the-Cloud via OIP VDE and OIP Ecosystem Partners"), has made follow-up announcements with all of the key vendors, and participated in multiple cloud panels last week in Las Vegas. Make no mistake, TSMC is a semiconductor cloud pioneer, absolutely.

There are, however, a couple of things I would like to point out as an objective semiconductor cloud insider. I first heard of TSMC seriously considering the cloud more than 10 years ago. Back then the big hurdle was customer security, and having been through TSMC’s security protocol for EDA and IP vendors many times, I can tell you TSMC is all about security. But TSMC is also all about enabling customers of all types and getting high-quality wafers to the masses, and today that means cloud.

Another interesting point in the semiconductor cloud transformation is that systems companies are driving the leading edge foundry business instead of traditional fabless chip companies. Some of these systems companies are actually cloud-based companies (Google, Microsoft, Amazon, and Facebook), so there is no security concern there. In fact, cloud security is above and beyond anything we have ever seen in the semiconductor industry, and TSMC knows this by direct experience with their cloud customers.

As more systems companies use the cloud for chip design, the fabless companies have no choice but to follow. The cloud company chip designers are the extreme case. They can do simulations and verification in hours versus days or weeks. Imagine being able to run a SPICE simulation or characterization run in an hour versus overnight.

As I mentioned before, investment in fabless chip companies more than tripled in 2017 and doubled again in 2018. Similar to the fabless transformation, where semiconductor companies no longer had to build fabs, today’s fabless companies don’t have to buy computers and tools; they just go to the cloud, where TSMC and the EDA vendors are already waiting for them, absolutely.

One of the more interesting cloud events at #56thDAC was the Mentor Calibre Luncheon (FREE FOOD). SemiWiki blogger Tom Simon sat in front of me and will blog this in more detail, so spoiler alert: Willy Chen was on the panel and he talked about TSMC cutting the DRC runtime of an N5 testchip from 24 hours to 4 hours using the Azure cloud. AMD was on the panel and they talked about doing the same thing with their N7 products, scaling to 4,000 CPU cores using Microsoft Azure VMs (which are AMD EPYC based servers).
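A 24-hour run dropping to 4 hours on thousands of cores only works if almost none of the job is serial. A back-of-envelope sketch using Amdahl's law (the serial fractions below are illustrative assumptions, not Mentor, TSMC, or AMD data) shows how quickly the non-parallelizable portion caps the achievable runtime:

```python
# Back-of-envelope Amdahl's-law model of cloud DRC scaling.
# Serial fractions and core counts are illustrative assumptions.

def speedup(serial_fraction: float, cores: int) -> float:
    """Amdahl's law: overall speedup given the fraction of the job
    that cannot be parallelized and the number of cores."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

baseline_hours = 24.0
for f in (0.01, 0.05, 0.15):
    for n in (100, 1000, 4000):
        print(f"serial={f:.0%} cores={n:5d} -> "
              f"{baseline_hours / speedup(f, n):5.2f} h")
```

At a 15% serial fraction the 6X reduction is barely reachable even with 4,000 cores, which suggests these DRC jobs are overwhelmingly parallel.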

Admittedly the AMD presentation was a little self-serving but my takeaway was that AMD partnering with TSMC and pivoting to the cloud for chip design before their much bigger competitors do is a VERY big deal.


SpaceX: Starlink to Carlink

by Roger C. Lanctot on 06-13-2019 at 5:00 am

When SpaceX launched 60 satellites into orbit last month – the first of a planned fleet of 12,000 such satellites which will ultimately deliver terrestrial Internet access in concert with earth-bound stations – astronomers were alarmed at the apparent impact on earth-based observatories from light pollution. SpaceX CEO Elon Musk subsequently assured one and all that higher orbits and re-orientation of hardware would mitigate the problem. Of course, he added some Musk-ular snark by suggesting that space observations in the future should take place from orbit.

The launch and spectacular light show, though, underlined a turning point likely to soon impact multiple industries, not least of which will be the automotive industry. A new form of Internet connectivity is on the way, and SpaceX is but one player: OneWeb is in the works as well, while legacy players like Intelsat evaluate their options in a new world order where satellite connectivity is suddenly on the menu for passenger cars.

This has significant relevance to the automotive industry, which struggles today with unreliable cellular signals at a time when cybersecurity, software updates, and autonomous operation demand reliable connections. On February 26th of this year, at Mobile World Congress in Barcelona, 25 attendees representing 13 industry-leading companies (Microsoft, SoftBank, Intelsat, and OneWeb, among others) sat down to dinner to discuss this prospect.

From that dinner emerged the World Connected Car Alliance, an organization focused on enabling hybrid satellite-cellular connected vehicles with universal and constant connectivity solutions from both space and ground. The WCCA shows promise of bringing together satellite and cellular industries for the first time to embrace technologies that will enable the connected vehicle of today, and the autonomous vehicles of tomorrow.

Two distinct models for the architecture of autonomous vehicles (AVs) have emerged. One is for a self-contained, mostly unconnected car with all sensors and crucial systems onboard, which only exchanges data with the Internet when necessary. The other represents an always-connected vehicle that relies heavily on the computing power and real-time driving experience of other vehicles provided by the cloud.

It’s likely that most AVs will fall somewhere between these two extremes, and the decision about the degree of reliance on the Internet will be influenced by such considerations as safety, security, cost, and most of all, the reliability and ubiquity of the off-board communications system. However, there is no doubt that all AVs will be powerful computer systems with all of the software refresh capabilities that that implies. AVs will also be an extension of the living room, delighting passengers with various entertainment models that are starting to emerge. All AVs will need robust communications.

Before there was General Motors’ OnStar, Volvo Cars had set in motion a plan for a hybrid cellular-satellite connected car system working with Orbcomm. Of course, this was back in the days of analog cellular technology and before Orbcomm had filed for bankruptcy. Needless to say, this Volvo system never made it to market, but its existence reflected the automotive industry’s preference for a belt-and-suspenders approach to connectivity that could guarantee the vehicle connection.

Satellite has historically sought out the desperate and wealthy to deliver the connectivity of last resort. This meant, among others, the rural, maritime, oil, gas, and mining verticals. It also meant that the ever-diminishing part of the planet not covered by terrestrial LTE was the target of satellite. Traditional satellite companies feared, above all other things, commoditization. This is ironic considering they are a commodity, and learning to operate as one is what will make them successful.

The WCCA approach brings satellite into the mainstream of competitive communications. This means that satellite will augment and support autonomous fleets and vehicles. There are vast stretches of the Autobahn that have no connectivity, and wireless access along the M40 or the U.S. Interstate system is often non-existent. There are holes in terrestrial networks that leave many suburban and secondary roads all over the world disconnected. If satellite is blended and integrated with terrestrial communications so that an IP address persists, and the application session holds while switching between satellite and terrestrial, then a major piece of the ‘reliable and ubiquitous’ puzzle is solved.
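The "IP address persists, session holds" idea can be sketched as a session whose identity is decoupled from the underlying link, loosely in the spirit of multipath transport protocols such as MPTCP. Everything below (the class, method names, payloads) is a hypothetical toy, not a real WCCA or 3GPP interface:

```python
# Toy sketch of session persistence across link handover. Names are
# illustrative assumptions, not any real WCCA or 3GPP API.

class HybridSession:
    """A session identified independently of the underlying link, so the
    application connection survives satellite <-> terrestrial handover."""

    def __init__(self, session_id: str):
        self.session_id = session_id   # stable identity, not tied to a link
        self.active_link = None
        self.delivered = []

    def attach(self, link: str) -> None:
        # Handover: swap the underlying link; session state is untouched.
        self.active_link = link

    def send(self, payload: str) -> dict:
        if self.active_link is None:
            raise RuntimeError("no link attached")
        record = {"session": self.session_id,
                  "link": self.active_link,
                  "payload": payload}
        self.delivered.append(record)
        return record

# The application keeps one session while the vehicle roams between networks.
s = HybridSession("veh-42")
s.attach("terrestrial-lte")
s.send("telemetry-1")
s.attach("satellite")       # coverage hole: fail over to satellite
s.send("telemetry-2")
s.attach("terrestrial-5g")  # back in coverage
s.send("telemetry-3")
```

The point of the sketch is simply that the application never sees the handovers: every record carries the same session identity even though the link underneath changed twice.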

SOURCE: Kymeta

The 3GPP core sees the satellite as just another cell tower. Pricing is competitive with terrestrial. 5G and LTE extend their reach and coverage with satellite infrastructure. Hybrid connectivity would mean an enhanced level of security and safety for autonomous vehicles by delivering constant contact.

Hybrid satellite-cellular connectivity is rapidly becoming a reality. Companies interested in learning more or, indeed, joining the effort can visit: www.wcca.car


#56DAC – What’s New with Custom Design Platform

by Daniel Payne on 06-12-2019 at 10:00 am

Dave Reed, Synopsys

TSMC attends DAC every year and they do something very savvy: a theatre where they invite all of their EDA and IP partners to present something of interest, followed by a drawing for a prize. At the end of the day they even have a nice prize, like a MacBook Air, which I didn’t win. On Wednesday I watched Dave Reed of Synopsys present an update on the Custom Design Platform.


Tools in the Custom Design Platform include Custom Compiler, FineSim, and HSPICE, among others discussed below.

Back in April they announced that these tools were certified by TSMC for the 5nm FinFET process node, which is always a big deal for IC design teams pushing the bleeding edge because you need your EDA tools ready.

The integration between these tools is tight, hence the phrases DRC Fusion and Extraction Fusion, because users don’t want to wait hours streaming data out of one tool and making it compatible with the next tool in the flow. Adoption of the Custom Compiler tool increased this past year, now with 100 logos and 3,000 users, and all internal Synopsys IP designs use their own tools.

Analog IC designers know that layout parasitics will affect the performance, accuracy and reliability of their circuits, so the Synopsys flow allows for early use of parasitic estimates, followed by partial extraction and fully extracted netlists.

Each new, smaller process node brings an increase in circuit simulations and an increase in parasitics, so FineSim can be used to simulate circuits like SERDES, PLLs and ADCs, now about 3X faster than before. Plus, they’ve added RF simulation to FineSim, so you have another choice besides HSPICE.

Automating the layout of analog design is a noble quest, tried by many vendors in the past, so Synopsys continues with a template-based approach where expert layout designers capture their best practices, allowing for transistor size changes. All this is done without having to write code or become a computer science major. I asked Dave about the CiraNova technology that they acquired years ago, and it’s still being used under the hood for layout automation.

IC designers used to work in either a digital or an analog environment, with tedious file interchanges between them, but not anymore, because Synopsys allows seamless movement between the digital flow of IC Compiler II and the analog flow of Custom Compiler. DRC checking can be done either in batch mode or even interactively, saving you time.

Summary

If the Custom Design Platform can be used internally at Synopsys for creating all of their own IP in TSMC nodes at 28nm all the way down to 5nm, then it’s going to work for your project too. This is a competitive market segment and Synopsys keeps plugging away, year by year, making it easier to reach design closure through clever automation.



Intelligence in the Fog

by Bernard Murphy on 06-12-2019 at 5:00 am

By now, you should know about AI in the cloud for natural language processing, image ID, recommendation, etc. (thanks to Google, Facebook, AWS, Baidu and several others) and AI on the edge for collision avoidance, lane-keeping, voice recognition and many other applications. But did you know about AI in the fog? First, a credit: my reference for all this information is Kurt Shuler, VP Marketing of Arteris IP. I really like working with these guys because they keep me plugged in to two of the hottest domains in tech today, AI and automotive. That and the fact that they’re really the only game in town for a commercial NoC solution, which means that pretty much everyone in AI, ADAS and a bunch of other fields (e.g. storage) is working with them.

Now back to the fog. In simpler times we had the edge, where we are sensing and actuating and doing a little compute, and the cloud to which we push all the heavy-duty compute and which serves feedback and updates back to edge nodes. It turns out that this two-level hierarchy isn’t always enough, especially as we introduce 5G. That standard will introduce all sorts of new possibilities in electronification, but it doesn’t have the same range as LTE. We can no longer depend solely (or even mostly) on the cell base stations with which we’re already familiar; to be effective 5G requires mass deployment of small-cell stations connecting to edge nodes and handling backhaul either through the core wireless network or via satellite. These small-cell nodes are the fog.

AI has already gained a foothold in these fog nodes to better optimize the quality of MIMO communication with mobile (or even stationary) edge nodes. MIMO quality depends on beamforming between multiple antennae at the base station and the user equipment (UE). Figuring out how to optimize this at any given time through link adaptation, and how to best schedule transmissions at the base station to minimize interference between channels, are complex problems which increasingly look like a good fit for AI. There are other AI applications too, in managing intermittent reliability problems and in intelligently automating network slicing.

Once you have AI support in a fog node, it’s not a big leap to imagine providing support to the edge nodes it services. But haven’t we all been arguing that AI is moving to the edge? Why do we need support in the fog? Yes, AI is moving to the edge but it’s a constrained form of AI. In voice command recognition for example, an edge node can be trained to recognize a catalog of commands, even phrases if they’re relatively short (I’ve heard up to ~10 words). If you want natural language recognition for more open-ended command possibilities, you have to go to the cloud, which can handle the complexity of the task but has its own downsides – latency and security among others. Handling tasks of intermediate complexity in the fog (without needing to go to the cloud) could look like an attractive proposition, certainly to the operators who will probably charge for use of that capability.

All interesting background, but what does this have to do with us in the design world? Network equipment makers are increasingly returning to custom design to provide at least some of these capabilities (and indeed other needs in the rapidly booming 5G domain, such as supporting end-to-end private wireless networks). The chip suppliers who feed these companies are racing ahead too. Nokia, Ericsson and Qualcomm all have positions on 5G and AI. Which means that AI-centric design will boom in this area.

I don’t know if the operator equipment companies will use standard AI chips (Wave Computing, Movidius, ..), adapted over time to their needs or will build their own. Either way, I do expect a boom in 5G-centric AI applications, especially for these fog nodes. Which will mean increased demand for AI-centric SoC design with need for highly customizable on-chip networks within accelerators, cache coherence between the accelerator(s) and IPs on the SoC and super-fast connectivity to off-chip high-bandwidth memory or GDDR6. In other words, all the capabilities that AI leaders like Wave Computing, Movidius, Baidu, Google, etc. etc. (it’s a long list) have been building on and continue to build on with Arteris IP capabilities. Such as Ncore NoCs, the AI package, FlexNoC and CodaCache. Check them out.


The Complexity of Block-Level Placement @ 56thDAC

by Tom Dillinger on 06-11-2019 at 10:00 am

The recent Design Automation Conference in Las Vegas was an indication of how the electronics industry is evolving.  In its formative years, DAC was focused on the fundamental algorithms emerging from academic research and industrial R&D, that enabled the continuation of the Moore’s Law complexity curve.  (Indeed, the most prestigious award at this year’s DAC recognized the scan-based Design for Test implementation that has become the de facto standard throughout digital semiconductor design.)  More recently, DAC added the “Design Track” and the “IP Track” sessions, to highlight the innovative methodologies that designers were using, leveraging the advances in EDA capabilities.

The evolution of electronic design applications was a prevalent theme at this year’s conference, with a multitude of sessions covering diverse topics:

  • optimization of machine learning architectures for both data-center and edge inference engines (especially, resilience approaches to reduce the susceptibility to malicious attacks)
  • opportunities for in-memory/near-memory processing on large datasets (a significant change to the traditional von Neumann architecture)
  • advances in packaging technology for heterogeneous integration, especially for a “chiplet-based” design implementation

Nevertheless, the importance of ongoing innovations in fundamental EDA algorithms and products remains vital to the industry.  (The Best Paper award at the conference was presented to the authors of a unique approach to cell placement.)

At the conference, I had an opportunity to talk with Vinay Patwardhan, Product Management Director in the Digital & Signoff Group at Cadence.  We discussed the characteristics of current SoC designs for these new applications, and the demands they were placing on existing design implementation flows.  The insights (and customer data) that Vinay shared were very enlightening.

“Customers are pursuing extremely high cell count designs, enabled by advanced process nodes.  Good examples are cloud-scale ASICs,” Vinay explained.  “The amount of memory integrated into these designs exceeds the logic and IP reuse functionality, and the trend is growing.”

In the figure below, the red bars represent the % SoC die area occupied by memory over time, the dark grey bars are the % area associated with IP reuse, and the light grey bars are the % area associated with new logic. (Source:  Semico Research)

“From a physical design tool perspective, the implications of this design trend are two-fold.  The number of cell instances in a floorplanned design block is growing.  And, in particular, the sheer number and diversity of macrocells is exploding – for example, small SRAM buffers, register files, and cell relative placement groups,” Vinay continued.

“Exploding?” I asked somewhat skeptically.  “Can you give me an example?”

Vinay shared the data for a set of design blocks from a recent collaboration between Broadcom and the Cadence Innovus R&D team – the figure below includes several examples of mixed cell and macro placement block design netlists.

(From:  Jack Benzel, Broadcom, “Concurrent Placement of Macros and Standard Cells with the Mixed Placer”, CDNLive Silicon Valley conference, April 2, 2019.)

623 macros in a 6.6M instance block design.  Wow.

In the “olden days”, there were a few (relatively large) macrocells associated with a block netlist.  The SoC block micro-architect and physical design engineer worked briefly to pre-place these macros within the block floorplan.  Typically, they were placed (manually) around the block periphery to minimize the internal routing track blockage, as depicted below.

If the macros were auto-placed, they typically ended up at the block periphery as well, due to their characteristically low pin counts and thus low contribution to the overall netlist wirelength optimization measures.  Only if the timing paths through the macro were super-critical would an internal placement be selected.
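The wirelength measure driving placement is typically half-perimeter wirelength (HPWL): the placer minimizes the summed bounding-box half-perimeters of all nets. A minimal sketch with made-up coordinates shows why a macro touching only a couple of nets barely moves that total:

```python
# Minimal half-perimeter wirelength (HPWL), the standard placement cost
# metric. Net names and pin coordinates are made-up toy data.

def hpwl(pins):
    """HPWL of one net: half the perimeter of the bounding box of its pins."""
    xs = [x for x, _ in pins]
    ys = [y for _, y in pins]
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

# A placement is scored by summing HPWL over all nets; a macro on only a
# few short nets barely moves this total, so the placer has little
# incentive to pull it into the middle of the block.
nets = {
    "clk":  [(0, 0), (10, 4), (3, 9)],   # many-pin standard-cell net
    "mac1": [(10, 0), (12, 1)],          # low-pin-count macro connection
}
total = sum(hpwl(p) for p in nets.values())
print(total)
```

In this toy netlist the macro net contributes 3 of 22 units, so moving the macro to the periphery is nearly free as far as the cost function is concerned.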

Clearly, many current SoC designs are pursuing a new paradigm.  The SoC design examples above are no longer amenable to a seeded manual macro pre-placement.  (Fortunately, the increasing number of metallization layers available in advanced process nodes enables allocation of block-level routing layers over the macro hard IP implementation, enabling a relaxed set of constraints for macro placement.)

Vinay described the recent “mixed-placement” innovations in Innovus (an apt product name).  He explained, “We re-architected the solver algorithms within Innovus placement to address the unique characteristics of current block netlists, potentially integrating many macros.  Macro-rich designs introduce a new set of constraints – for example, appropriate spacing between macros may need to be maintained for allocation of signal repowering buffers across the block and for cell insertion algorithms to address hold time issues.  And, although the mixed placement is automatic, micro-architects will want to provide (relative placement) grouping constraints to guide the algorithms to a more optimal solution.” 

Examples of potential Innovus placement constraints are given in the figure below, along with a methodology flow diagram.  Both the “mixed-placement” and “place_opt” steps in the flow diagram have been optimized for a macro-rich design netlist.

SoC designs are changing to address new (data-centric) applications – that evolution is certainly not new.  The macro-dominated nature of SoC design blocks was surprising to me – 600+ macros in a netlist exceeds my prior experience.  The roles of the micro-architects and PD engineers are changing, from focusing on macro pre-placement to defining the (minimal) set of design constraints to guide mixed-placement algorithms.  Kudos to Cadence on the recent Innovus product release, to enable more optimal implementations of these design blocks.

https://www.cadence.com/content/cadence-www/global/en_US/home/tools/digital-design-and-signoff/soc-implementation-and-floorplanning/innovus-implementation-system.html

-chipguy


Wally Rhines Keynote @ #56thDAC!

by Daniel Nenni on 06-11-2019 at 5:00 am

One of the perks of blogging on SemiWiki is the events you get to attend for FREE and the amazing people you get to meet, and Wally Rhines is certainly one of those people. You will not find a more intelligent, innovative, and genuinely nice person, in my experience. Having traveled the world meeting thousands of people, I can tell you that I have never been the smartest person in a meeting, not even close. Viva la semiconductor industry.

The nice thing about Wally’s keynotes is that not only does he send me slides, he also answers questions in a way that even non-semiconductor professionals (my wife) can understand. My beautiful wife and I caught Wally’s keynote in the DAC Pavilion and here are the points that stuck out to her:

Acceleration of Semiconductor Revenue Growth. This slide was a little bit of a shock to my wife. For most of my thirty-five-year career semiconductors have been a steady low single digit growth industry. Certainly it was enough for my wife and me to put four kids through college, so no complaints there. The last two years however saw 22.2% and 15.5% growth, but as Wally explained it was mainly due to high memory pricing. Last year memory was a record 39% of semiconductor revenue while five years ago it was 23%. Thankfully my wife does not correlate her spending with industry growth.

Integrated Circuits Are Capturing an Increasing Share of Electronic System Product Value. The easiest example is cars. The good news is that the semiconductor content in cars is increasing dramatically but as my wife pointed out that is cause for concern since maintenance prices are also on the rise. She reminded me that we recently replaced a tire sensor on her Toyota Sienna for $300+.

Systems companies are the fastest growing customers (5-Year CAGR +70%). I would say that also applies to TSMC for leading edge process nodes. Mobile companies with SoCs and cloud companies with domain specific processors are great examples. Remember, Apple alone is about 20% of TSMC’s leading edge revenue.

Automotive Industry Drives Large Growth in IC Design. According to Wally, and he would know, hundreds of companies are developing electric and autonomous vehicles. When is that bubble going to burst and who is going to get caught in the automotive downturn? As Wally said, they all have to buy EDA tools but only a few will actually go into high volume manufacturing, so the downstream supply chain is at risk for sure. I’m not sure when my wife and I will buy new cars, but when we do they will be EVs and hopefully somewhat autonomous.

Merger & Acquisition Activity Has Decreased. The semiconductor industry had a bubble in 2015 with close to $100B in acquisitions (Altera, Freescale, Broadcom, etc.). 2016 was $65B (ARM, Linear), 2017 $29B, 2018 $26B, and at the time of Wally’s presentation 2019 was at $9B. During the conference it was announced that Infineon would acquire Cypress Semiconductor for $10B, so the decline continues but not as steeply, and who knows what 2H 2019 will bring. We have not heard from Hock Tan (CEO of Broadcom) this year (Broadcom bought CA Technologies for $19B last year). In my opinion semiconductor and EDA M&A will continue at the usual pre-bubble pace. There aren’t many EDA companies left to acquire, but IP companies are sprouting up everywhere, so I would expect much more IP M&A in the very near future (look to Synopsys, Cadence, and Silvaco).

Venture Capital Investment in Fabless Semiconductor Startups Was on a Steady Decline Until 2017. It really is a relief to see investment come back to semiconductors. It seems the investment community was so focused on unicorns (Uber, Airbnb, SpaceX, WeWork, Pinterest, etc.) that they forgot about the all-important semiconductor community. That changed in 2017 when VC funding for fabless companies more than tripled to $1.5B, and it more than doubled again in 2018. 2019 is starting out a bit slow at about $1B thus far, so we may be slowing down already. China of course is outspending the US (9x in 2017 and 3x in 2018), but from what I have experienced the money is used “less effectively” in China, so those numbers look bigger than reality. Nevertheless, China is outspending the rest of the world and that should be of great concern.

Majority of VC Funded Startups are Focused on AI and Machine Learning. Wally made a couple of points here that my wife and I talked about in more detail. Health care is of great concern to us baby boomers, and it seems like medical science is not keeping up with the rest of the sciences. AI and ML could greatly advance medicine in all regards, but it would require complete participation and collaboration from everyone involved, and that will be much more difficult in the US than in China. China also has a billion more people than we do, so much more data is “readily” available.

Wally had a dozen more AI/ML slides and ended with “How Long Before the Silicon Transistor Runs Out of Gas?” but I need to stop here. If you have more questions, post them in the comment section and Wally will probably reply. He is the most engaging man in EDA, absolutely.


The RISC-V Revolution is Sweeping Across the APAC Region and Australia

by Daniel Nenni on 06-10-2019 at 9:14 pm

Join SiFive Tech Symposiums in Tokyo, Daejeon, Pangyo, Hsinchu, Singapore and Sydney

As we make our way around the world meeting and engaging with others in the semiconductor and hardware design community, we are seeing an increased interest in RISC-V based hardware innovation. This is due in large part to the emergence of market-ready RISC-V core IP, development tools and silicon solutions based on cloud-based design platforms that facilitate the creation of custom SoC solutions for edge computing, AI, IoT, wearable, server and other target vertical markets.

Having just completed an exciting tour through Europe, we’re gearing up for our visit to the APAC region and Australia, which will take place throughout the month of June. If you live in the region, or will be visiting, you won’t want to miss this. Here is a glimpse of what’s in store for attendees. For more details, and to register to attend any of our great symposiums, please visit www.sifivetechsymposium.com

 

Tokyo Highlights

Software Hardware Consulting Group, a company specializing in FPGA and SoC microcontroller IP solutions, embedded security, voice, speech, wired and wireless connectivity, will be our co-host. This symposium will take place on Tuesday, June 11 and includes keynotes by Huzefa Cutlerywala, VP of sales (APAC) for SiFive, and Shumpei Kawasaki, CEO of Software Hardware Consulting Group. There will also be presentations by our customer, QuickLogic, and ecosystem partners, including DTS Insight, IAR Systems and Rambus. Attendees will also have an opportunity to see demonstrations and learn about the latest design platforms for RISC-V based SoCs, development boards, IP, software and more. For more information, and to register for the symposium in Tokyo, please visit: https://sifivetechsymposium.com/agenda-tokyo/

 

Daejeon Highlights

The Korea Advanced Institute of Science and Technology (KAIST) will be our co-host. This symposium will take place on Monday, June 17 and will include presentations from Yunsup Lee, an alumnus of KAIST and co-founder and CTO of SiFive, and Keith Witek, SVP corporate development and strategy at SiFive. There will be presentations by our partners, IAR Systems, UltraSoC and OpenEdges Technology, as well as other companies, including Samsung and FuriosaAI. Attendees will also have an opportunity to see demonstrations and learn about the latest design platforms for RISC-V based SoCs, development boards, IP, software and more. For more information, and to register for the symposium in Daejeon, please visit: https://sifivetechsymposium.com/agenda-daejeon/

 

Pangyo Highlights

The Korea Semiconductor Industry Association (KSIA) will be our co-host. This symposium will take place on Tuesday, June 18, and will include presentations from Yunsup Lee, co-founder and CTO of SiFive, and Keith Witek, SVP corporate development and strategy at SiFive. There will also be presentations by many ecosystem partners, including UltraSoC, Rambus, OpenEdges Technology, Hancom MDS and IAR Systems, as well as other companies, including Samsung and FuriosaAI. Attendees will also have an opportunity to see demonstrations and learn about the latest design platforms for RISC-V based SoCs, development boards, IP, software and more. For more information, and to register to attend the symposium in Pangyo, please visit: https://sifivetechsymposium.com/agenda-pangyo/

 

Hsinchu Highlights

Microchip and ACTT are our co-hosts in Hsinchu. This event will take place on Tuesday, June 18, and features a great lineup of speakers. There will be keynotes by Thomas Xu, the CEO of SiFive China, Jianjun Xiang, CEO of ACTT, Vishakh Rayapeta, applications engineer at Microchip, Christopher Moezzi, VP and GM AI/ML Solutions BU of SiFive, and Jimmy Hu, VP of R&D at SiFive China. There will also be presentations by Imagination Technologies, Industrial Technology Research Institute and Rambus. Attendees will also have an opportunity to see demonstrations and learn about the latest design platforms for RISC-V based SoCs, development boards, IP, software and more. For more information, and to register to attend the symposium in Hsinchu, please visit: https://sifivetechsymposium.com/agenda-hsinchu/

 

Singapore Highlights

This symposium will take place on Wednesday, June 19, and will be hosted at the prestigious National University of Singapore. Some of the highlights include a keynote presentation by Huzefa Cutlerywala, VP of sales APAC at SiFive, and presentations by Anand Bariya, VP of engineering at SiFive, and Trevor Carlson, assistant professor at the National University of Singapore. There will also be myriad presentations by industry veterans and luminaries. Attendees will also have an opportunity to see demonstrations and learn about the latest design platforms for RISC-V based SoCs, development boards, IP, software and more. For more information, and to register to attend the symposium in Singapore, please visit: https://sifivetechsymposium.com/agenda-singapore/


Sydney Highlights

Morse Micro will be our co-host in Sydney. This symposium will take place on Friday, June 21. Some of the highlights include keynote presentations by Huzefa Cutlerywala, VP of sales APAC at SiFive, and Michael De Nil, co-founder of Morse Micro. There will also be a presentation by Anand Bariya, VP of engineering at SiFive and a member of the faculty at the research lab at the University of Sydney, as well as myriad presentations by industry veterans and luminaries. Attendees will also have an opportunity to see demonstrations and learn about the latest design platforms for RISC-V based SoCs, development boards, IP, software and more. For more information, and to register for the symposium in Sydney, please visit: https://sifivetechsymposium.com/agenda-sydney/


Cadence on 5G Intelligent System Design #56thDAC

Cadence on 5G Intelligent System Design #56thDAC
by Daniel Nenni on 06-10-2019 at 10:00 am

As much as I love all EDA vendors I must say Cadence did the best DAC this year. Great booth, great location, excellent content, and of course a great party. The 5G presentation in the Cadence booth by Ian Dennison was of great interest to me as I am still trying to wrap my head around this whole 5G thing. I was able to meet with Ian privately and he sent me his slides. It is a very detailed presentation covering a lot of data so it is a multi-blog sort of deal.

5G is a bit controversial in my neighborhood. There is a NIMBY (not in my back yard) movement that opposes the towers used in the current 4G implementation, which are admittedly a bit unsightly. As a result, we have a big dead spot in my area where I cannot get 4G, and I can only imagine what 5G will bring to our quiet little hamlet.

I had not met Ian before so it was a pleasure. Ian is a 33-year semiconductor veteran who came to Cadence through an acquisition in 1994 and has resided there ever since. He works in the Cadence Intelligent System Division in Edinburgh researching systems design, analog design, future flows, and emerging markets. Ian’s presentation “Cadence 5G Intelligent System Design” fits right into that description, absolutely.

If you look at the transformation of EDA over the last 35 years you will see a clear path to systems design which is where we are today. Mobile, Industrial IoT, Datacenter, Automotive, Aero/Defense, and Health Care are all critical systems that revolve around semiconductor technology. Bottom up design includes Analog, RF, Digital, device support, Embedded Software and Security, and Systems Analysis. Machine Learning is now an important part of the systems design equation with the massive amounts of data at our disposal.

5G is an easy example of the complexities we face as an industry. No matter how much coverage and bandwidth we provide, it will be quickly used and abused. The critical part of modern semiconductor design is creating chips and systems that can thrive inside the infrastructure of the United States. China is another story of course since they don’t have NIMBYs.

The slide above is a breakout of a 5G Subsystem. The existing 4G towers with a mast radiohead and baseband (placed every 2-3km) will continue to provide 4G and slower 5G services. Much less invasive 5G mmWave radioheads can then be embedded in populated areas (every 200m). Yes, you can put one on the streetlight in front of my house, no problem.

The invasive infrastructure here of course is the edge computing subsystem, which is about the size of a shipping container, so NIMBYs beware. With the coming onslaught of AI for literally everything we do it will not be possible for our edge devices (smartphones, autonomous cars, delivery drones, etc…) to manage ALL of the computation AI will require. Nor will our 4G/5G/6G networks ever be able to handle ALL of the data generated from an autonomous edge.

In my neighborhood we have taken to making exposed infrastructure into art. Hopefully we can do the same for the Edge Computing Subsystems because, again, we will never have enough compute power and if we are to be competitive as a country we will have to both innovate and accommodate.

For further investigation Cadence has a 5G Landing Page. It talks about 5G Systems and Subsystems, 5G Handset, 5G Radioheads, 5G Baseband and Edge Computing, plus 5G Front Haul and Back Haul. Also listed are Cadence related products for each of the landing pages, definitely worth a look.


The Integrated Circuit

The Integrated Circuit
by John East on 06-10-2019 at 5:00 am

The “20 Questions with John East” series continues

Noyce and the rest of the traitorous eight left Shockley without a clue as to what they would do next.  They believed in semiconductors and knew that they were the very best semiconductor guys in the world.  Their hope was to find a company that would hire them en masse.  After some false starts, Noyce was introduced to Sherman Fairchild. Fairchild was a scientist / engineer who had turned entrepreneur.  Among the many companies he had started was Fairchild Camera.  The company was very successful:  during World War II more than 90% of the military reconnaissance cameras bought by the American forces were made by Fairchild Camera.  Sherman Fairchild saw the potential of semiconductors.  He and Noyce put together a deal.  In 1957 Fairchild opened a new business segment:  Fairchild Semiconductor.  It was located in Palo Alto, populated initially only by the traitorous eight, and its president was Bob Noyce.  Bob worked for a man named John Carter, Fairchild’s CEO, who was based in the corporate headquarters in Long Island, New York.  Then — the event that changed the world.  In 1959, Noyce invented the integrated circuit.  At just about the same time, Jack Kilby, a Texas Instruments engineer, also invented the integrated circuit.  This led to some interesting times.

I knew both Bob and Jack.  They were probably the two nicest men I have ever met.  Over the years, they were quite complimentary about each other.  Jack Kilby was as kind and gentle a man as you will ever meet.  That was a good thing because he was huge.  I’d guess six feet eight or six nine with a big frame.  When I shook his hand, I felt as though he could crush mine if he chose to.  Luckily he didn’t.  Bob Noyce was the greatest “people” guy that you ever met.  After one meeting with him, you walked away feeling like you were his best friend.  He was a really likeable guy with tons of charisma.

Kilby’s patent specified putting all the components on the same piece of germanium, but he interconnected them with wire bonding techniques.  Obviously this was totally impractical with respect to making a real IC.  But Bob Noyce got it right.  His patent put all the components on the same silicon chip and interconnected them with deposited, patterned, and etched metal  — just the way we still make them today.  The Noyce patent is displayed on a plaque in front of the Charleston Road building where he made the invention.  Kilby beat Noyce to the punch (and to the patent office) by six months, but you couldn’t make an IC without Noyce’s metallization.  After a big legal battle, TI and Fairchild cross-licensed their patents and that was the end of it except for the ongoing arguments over which state should get the credit — Texas or California.  Since I was born in Texas and live in California, I feel confident that, if this thing is ever really settled, I’ll come out on the winning side.

While I’m at it, a little bit about Moore’s law. Everybody knows about Moore’s Law.  But here’s some perspective from 1965.  When Gordon Moore first articulated his “law”, he was predicting that the number of components on a chip would double from a starting point of 32 components in 1964 to a gargantuan 64 components in 1965.  The logarithmic graph that he published was so bold as to predict a time when there would be 64 thousand components on a chip.  To the guys in the fab trying to make these things, that seemed crazy!!  I know it seemed crazy to me when I got there in 1968!  Gordon firmly believed in the 64K number, but if you had asked him about 64 million, he probably would have thought you were nuts.  Today we’re doing on the order of 64 billion.  Wow.  Sounds like more Alice in Wonderland stuff, doesn’t it?
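The arithmetic behind those jumps is easy to check. Here is a minimal Python sketch — the `components` helper and the strict doubling-every-year cadence are illustrative assumptions (Moore himself later stretched the period to roughly two years, which is why reality took decades longer to reach the big numbers):

```python
# Sanity-check the component counts in the story under Moore's original
# 1965 observation: roughly one doubling per year, starting from
# 32 components in 1964.

def components(year, base_year=1964, base_count=32):
    """Component count assuming a strict doubling every year."""
    return base_count * 2 ** (year - base_year)

assert components(1965) == 64  # the "gargantuan" 1965 prediction

# At one doubling per year, 64K components arrive by 1975, 64M by 1985,
# and 64B by 1995 -- a pace the industry could not actually sustain.
for target in (64_000, 64_000_000, 64_000_000_000):
    year = 1964
    while components(year) < target:
        year += 1
    print(f"{target:,} components by {year}")
```

Run it and the strict-annual-doubling model reaches 64 thousand, 64 million, and 64 billion components in 1975, 1985, and 1995 respectively, which makes it easy to see why the 64K prediction alone sounded like Alice in Wonderland to the people in the fab.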

Fairchild moved to a bigger facility in Mountain View in 1960. They had started on Charleston Road in Palo Alto.  That building is still there.  It’s a Historical Landmark. The new facility comprised a headquarters building at 313 Fairchild Drive and an adjacent manufacturing building at 545 Whisman Road in Mountain View.  Later they added the “Rust Bucket” (the metal framed structure was painted in an orange color that looked like rust) on Ellis Street.  When I reported to work on September 9, 1968, I headed to 545 Whisman Road where I spent the next eight years.

Eventually, working for John Carter drove Noyce nuts.  Carter didn’t see the potential of semiconductors.  He didn’t want to spend the money to do the job right.  He had an east coast management style which was very, very different from the style that had developed in the Bay Area semiconductor business.  He didn’t believe in stock options for the rank and file.  This friction and other issues eventually led to Carter’s resignation, but Sherman Fairchild decided not to give the top job to Noyce, who thought he deserved it.

Noyce and Moore decided to leave.  Andy Grove, who was running part of the Palo Alto R&D labs, heard about it and asked to join them.   (More about Andy later) They left in June 1968.  I had accepted my job in May.  They left in June.  Hence, all the craziness in my “Day One” and “Off with their heads” chapters.  In order to fill Noyce’s position, Sherman Fairchild recruited Lester Hogan.  Les had been running the semiconductor division of Motorola. Earlier in his career, Hogan had worked under Bill Shockley at Bell Labs.  Hogan agreed to join, but only if he could bring eight of his best people along with him.  Fairchild agreed.  In they came.

And the stuff hit the fan!!

See the entire John East series HERE.

Pictured: The contenders for the title of “Inventor of the integrated circuit”.  On the left Jack Kilby.  On the right Bob Noyce.  In my view the title goes to Bob, but Jack was a really smart guy and the nicest man I ever met.

Getting Real about Vehicle Data

Getting Real about Vehicle Data
by Roger C. Lanctot on 06-09-2019 at 9:56 pm

There is a lot of talk about data being the new oil fueling the automotive industry. Industry interest began in earnest in 2016 when McKinsey published a report – “Monetizing Car Data” – that noted in the executive summary that the car data market could be as large as $450B-$750B by 2030.

McKinsey: Monetizing Car Data – https://tinyurl.com/y85pk8nu

There’s nothing like big numbers to get the attention of a large, slow-moving industry. Since the publication of that report, car companies and their suppliers have been in a rush to figure out how to capture this value, while regulators have moved to put privacy barriers in place and warn of cybersecurity risks.

Strategy Analytics is hosting a discussion on the topic of data monetization myths this week in Tel Aviv, Israel, with executives representing industry leaders including Otonomo, General Motors, Harman International, and Continental Corporation.  These executives will address the range of issues related to vehicle data including my particular bugbear: the fact that vehicle data is not only more interesting and valuable to car makers than it is to consumers, but also that car makers should feel an obligation to collect vehicle data, thereby taking responsibility for vehicle performance.

Seminal though McKinsey’s report may have been, it failed to take into account the emerging cross-OEM data sharing and aggregation that will be necessary to tap into the deepest veins of vehicle data value. It is also blind to the massive value aggregation occurring daily at Tesla Motors as the company hoovers up data related to autonomous driving edge cases.

The rich world of new mobility services and smart cities seen to be on the horizon will not arrive without vehicle connections and data collection.  Precisely how that is coming to pass today will be discussed this Wednesday, June 12th, at the Crowne Plaza City Center in Tel Aviv.  There are a few seats left for the event. If you are in Tel Aviv and are interested in attending, please contact my colleague, Serge Rozenblum, at srozenblum@strategyanalytics.com.

Event details:

Fireside Chat: Managing and Monetizing Vehicle Data – Dispelling the Myths on Wed, June 12th, 1:30-3:00.

At Crowne Plaza City Center (Azrieli) (Tel Aviv) – Hall A – floor 11

Strategy Analytics’ Director of Mobility, Roger Lanctot, moderating a discussion with:

Ben Volkow, Otonomo, Founder and CEO

Dr. Barak Hershkovitz, General Motors, Director Future Mobility Engineering, Director Global EV Customer Experience

Hadas Topor Cohen, Harman, Senior Director, Head of Products Software Platforms Product Unit

Dr Karoline Bader, Continental, Senior Manager Business Development & Strategy

We have limited seating for this event, so kindly RSVP soon by sending an email.

We look forward to welcoming you.