AI and the Domain Specific Architecture
by Daniel Nenni on 10-03-2018 at 7:00 am

Last month I attended the 2018 U.S. Executive Forum where Wally Rhines gave one of the keynotes. I was also lucky enough to have lunch with Wally afterwards to talk about his presentation in more detail, and he sent me his slides, which are attached to the end of this blog.

The nice thing about Wally’s presentations is that they are not company specific, while a lot of keynotes are company pitches in disguise. The other thing is that his slides are very detailed and tell a story, so reading them really is the next best thing to being there.

When I first started in Silicon Valley in the 1980s we all designed and manufactured our own CPUs which I consider domain specific architectures. Intel then came around with a more general architecture and the personal computing revolution began. Fabless semiconductor companies then restarted domain specific computing with GPUs and SoCs that are now replacing Intel chips at an alarming pace. System companies (Apple) then took the lead with custom SoCs and now even software companies (Google) are making their own domain specific chips (TPU). There are also IoT and automotive domain specific chips flooding the markets.

We have a front row seat to this transformation on SemiWiki because we see the domains that read our site. The first IoT blogs started in 2014 and we now have over 400 that have been read close to 2 million times. Automotive also started for us in 2014 and we now have more than 300 blogs that have been read more than 1 million times. AI started for us in 2016 and now we have close to 100 blogs that have been read more than 250 thousand times. IoT wins but AI has just begun.

Wally has some interesting slides on AI, Automotive, VC Funds, and the China semiconductor initiative. Definitely worth a look. Here is his summary slide for those who are short of time:


Here are the other keynotes. I have access to the slides and will blog about them when I have time, but since Wally sent me his slides he goes first. I will end this blog with the perilous thoughts I had on this subject during my long and dark drive home.

Opening Keynote: Looking To The Future While Learning From The Past
Presentation by Daniel Niles / Founding Partner / AlphaOne Capital Partners

Keynote: Convergence of AI Driven Disruption: How multiple digital disruptions are changing the face of business decisions
Presentation by Anthony Scriffignano / Senior Vice President & Chief Data Scientist / Dun & Bradstreet

AI and the Domain Specific Architecture Revolution

Presentation by Wally Rhines / President and CEO / Mentor, a Siemens Business

AI Led Security

Presentation by Steven L. Grobman / Senior Vice President and CTO / McAfee

AI is the New Normal – 3 key trends for the path forward
Presentation by Kushagra Vaid / General Manager & Distinguished Engineer – Azure Infrastructure / Microsoft

Innovating for Artificial Intelligence in Semiconductors and Systems
Presentation by Derek Meyer / CEO / Wave Computing

The Evolution of AI in the Network Edge
Presentation by Remi El-Ouazzane / Vice President and COO, Artificial Intelligence Products Group / Intel

GSA Expert Panel Discussion
Moderated by Aart de Geus / Chairman and Co-CEO / Synopsys

Keynote: Long Term Implications of AI & ML
Presentation by Byron Reese / CEO, Gigaom / Technology Futurist / Author

The semiconductor industry (EDA included) has posted some very nice gains in the past two years, but how long can that continue? Take a look at this graph and you will see a pattern that will no doubt repeat itself; the question is how low will we go?

One thing I can tell you is that EDA is definitely in a bubble. Look at the VC money and all of the fabless startups that are buying tools, especially the ones in China. At some point the money will run out and only a fraction of these companies will continue to expand and buy more tools. Someone else can run the numbers, but my bet is that the EDA bubble will pop in 2019, absolutely.


Synopsys Seeds Significant SIM Segue
by Tom Simon on 10-02-2018 at 12:00 pm

It turns out that consumers are not alone in their love-hate relationship with SIM cards. SIM cards save us from increasingly widespread cellphone cloning. However, if your experience is anything like mine, it seemed that every new phone needed a new SIM card format. Furthermore, people travelling overseas who want to avoid roaming charges often find themselves trying to buy SIM cards in foreign countries (and languages) and then juggling these new cards with their original card to make calls, read texts and, more recently, pay for purchases.

Consumer gripes aside, there are bigger drawbacks with SIM cards as we know them. Manufacturers encounter higher costs due to the additional parts and the reliability issues associated with the slot and removable card. Also, the IoT has changed the needs for subscriber verification. It’s not just for phones anymore. Expect to see cars, watches, appliances, sensor hubs and almost all manner of connected devices needing unique and secure subscriber identification. Already, it should be clear that these things cannot have SIM card slots that require manually adding or changing cards every time they are put into service or undergo a service provider change.

So what is a SIM card anyway? It is really a software application running on a secure processor, plus a set of security identifiers in the form of a subscriber ID number and an associated encryption key. There is more to it, but for the purposes of this discussion this will suffice. The physical hardware in the card is known as a Universal Integrated Circuit Card (UICC). It contains a CPU, memory and interface hardware. The software that runs on the UICC is called the Universal Subscriber Identity Module (USIM) application.
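
To make that a bit more concrete, here is a minimal sketch of the kind of challenge-response exchange a USIM application performs with the network. The real operator algorithms (e.g. MILENAGE) and key handling are quite different; HMAC-SHA256 simply stands in for the operator-specified function, and the IMSI and key values below are made up.

```python
import hashlib
import hmac
import os

# Illustrative stand-ins: a real USIM stores an operator-provisioned IMSI and
# secret key Ki inside the UICC's secure processor and never exposes Ki.
IMSI = "001010123456789"   # hypothetical subscriber ID
KI = os.urandom(16)        # hypothetical 128-bit subscriber key

def auth_response(ki: bytes, rand: bytes) -> bytes:
    """Compute a response to the network's random challenge.

    Operators use their own algorithms (e.g. MILENAGE); HMAC-SHA256 is only a
    placeholder to show the shape of the exchange.
    """
    return hmac.new(ki, rand, hashlib.sha256).digest()[:8]

# Network side: issue a challenge, then compare the device's response with the
# one computed from its own copy of Ki in the authentication centre.
rand = os.urandom(16)
device_response = auth_response(KI, rand)   # computed inside the UICC
expected = auth_response(KI, rand)          # computed by the network
print("subscriber", IMSI, "authenticated:",
      hmac.compare_digest(device_response, expected))
```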

In order to eliminate the need to insert, and subsequently replace, SIM cards, a method is needed to securely bootstrap a permanently built-in UICC and perform a secure Over the Air (OTA) transfer of the subscriber and security information. More than just handset-side software and hardware is required to accomplish this. The GSMA has developed a standard for remote SIM provisioning. This has enabled the birth of the eSIM, which uses a UICC that is soldered to the mobile/remote device circuit board. However, a separate component may still be an issue for some applications because it still takes up valuable board real estate and adds to the BOM.

The next logical step in this evolution is the incorporation of the UICC hardware and software into the system SOC. Synopsys has announced their iSIM, which is a UICC and USIM that can be integrated into any SOC design. Synopsys already has a large portfolio of secure and security related IP, so the addition of this innovative offering will dovetail nicely with their related IP. I recently spoke to Rich Collins, Senior Marketing Manager at Synopsys, about this.

He emphasized that the demands of IoT require moving from a standalone SIM card. IoT designs are already combining the application processor and modem, so including SIM functionality fits the requirements of these IoT systems. To ensure they are offering a complete solution he outlined for me their partnership with Truphone. Rather than leave it to their customers to find and integrate carrier side software to handle the network and OTA provisioning, Synopsys is offering a complete solution that includes the integration with Truphone’s worldwide network. Importantly, end users will have the freedom to choose any local carrier they wish, and Truphone will enable the handoff.

Synopsys is combining many of its mission critical IP components to facilitate iSIM functionality. Among these is their True Random Number Generator (TRNG) which is a vital element to avoid compromised security. Synopsys is integrating Truphone’s eSIM software stack (which includes Javacard OS) in a new tRoot HSM implementation for iSIM.

With the proliferation of IoT devices and the advent of 5G, combined with the many benefits of completely integrated SIM hardware, it is clear that the next year or two will see big changes in SIM implementations. Synopsys is well positioned to help SoC and device designers navigate the process. End users and IoT deployment operators should be pleased with the end results. For more information on the complete Synopsys iSIM offering, visit the product page on their website.


GLOBALFOUNDRIES Pivot Explained
by Scotten Jones on 10-02-2018 at 7:00 am

GLOBALFOUNDRIES (GF) recently announced they were abandoning 7nm and focusing on “differentiated” foundry offerings in a move our own Dan Nenni described as a “pivot”, a description GF appears to have embraced. Last week GF held their annual Technology Conference and we got to hear more about the pivot from new CEO Tom Caulfield including why GF abandoned 7nm and what their new focus is.

GLOBALFOUNDRIES Pivoting away from Bleeding Edge Technologies

Background
GF was created in 2008 in a spin-out of the fabs formerly owned by AMD. In 2010 GF acquired Chartered Semiconductor, the number three foundry in the world at that time and in 2015 GF acquired IBM’s microelectronics business. Figure 1 illustrates the key milestones in GF’s history.


Figure 1. GLOBALFOUNDRIES Milestones

GF is owned by Mubadala Development Company (MDC). MDC financials include the technology segment made up of GF. Based on Mubadala financial disclosures, from 2016 to 2017 GF grew revenues by 12.4% but saw their operating loss widen from 8.0% of revenue in 2016 to 27.2% of revenue in 2017, calling into question the sustainability of GF’s business model.

On March 9, 2018, Tom Caulfield became the new CEO of GF with a mandate to build a sustainable business model.

7nm History
In the early 2010s GF was working on development of their own 14nm process technology, but realizing they were falling behind their competitors, they ultimately abandoned in-house development and licensed 14nm from Samsung. The licensed 14nm process was launched in 2014 in Fab 8 (see Figure 1). GF has continued to improve on that process, adding process options and more recently launching a shrunk 12nm version. The 14nm and newer 12nm versions have been utilized by AMD for microprocessors and graphics processors, by GF for their FX-14 ASIC platform, and by other customers.

With the IBM Microelectronics acquisition in 2015, GF received a significant infusion of researchers including Gary Patton who became the CTO of GF. Beginning around 2016, the combined GF and IBM research teams started to develop their own in-house 7nm technology. The initial version was planned to be based on optical exposures with GF also planning an EUV based follow-on version.

By all accounts, development was proceeding well. In a July 2017 SemiWiki exclusive, GF disclosed their key 7nm process density metrics, and at IEDM in December 2017 GF disclosed additional process details. My write-up of GF’s process density metrics is available here and a comparison of GF’s 7nm process to Intel’s 10nm from IEDM is available here. GF’s 7nm process appeared to be a competitive process. I have also written about the leading-edge 7nm and beyond processes here.

One concern I have had about GF 7nm for a long time is scale. GF was reportedly installing only 15,000 wafers per month (wpm) of 7nm capacity. The average 300mm foundry fab had 34,213 wpm of capacity at the end of 2017 and is projected to reach over 40,343 wpm by the end of 2020 and 43,584 wpm by the end of 2025 [1]. Newer leading-edge fabs are even larger and are what is driving up the average. At the leading edge, wafer cost is roughly 60% depreciation, and larger fabs have better equipment capacity matching and therefore higher capital efficiency and lower costs. Figure 2 illustrates the wafer cost versus fab capacity for a wafer fab in the United States running a 7nm process, calculated using the IC Knowledge – Strategic Cost and Price Model – 2018 – revision 03 for a greenfield fab.


Figure 2. Wafer Cost Versus Fab Capacity for 7nm Fab in the United States

Even though 15,000 wpm is past the steepest part of the curve, there is still a several-hundred-dollar per-wafer cost advantage for larger-capacity wafer fabs.
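
For intuition on why that curve flattens, here is a toy cost model (not the IC Knowledge model) that splits wafer cost into a variable component plus depreciation amortized over monthly wafer starts; all of the numbers and the scaling exponent are assumptions chosen only to illustrate the shape of the effect.

```python
# Toy illustration (not the IC Knowledge model) of why cost per wafer falls
# as fab capacity grows: depreciation is largely fixed per tool set, and a
# bigger fab matches tool capacities better. All figures below are assumptions.

def cost_per_wafer(wpm, variable_cost=2500, monthly_depreciation_base=45e6,
                   utilization=0.9):
    """Variable cost per wafer plus depreciation spread over monthly starts.

    Depreciation is assumed to grow sub-linearly (capacity^0.9) to mimic the
    better equipment matching of larger fabs."""
    depreciation = monthly_depreciation_base * (wpm / 15000) ** 0.9
    return variable_cost + depreciation / (wpm * utilization)

for wpm in (5000, 15000, 35000, 45000):
    print(f"{wpm:>6} wpm -> ~${cost_per_wafer(wpm):,.0f} per wafer")
```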

Tom Caulfield also mentioned that GF needed $3 billion of additional capital to get to 12,000 wpm. They could only fund half of it through cash flow; they would have to borrow the other half, and the projected return wasn’t good.

Customer Inputs
When Tom took over as CEO he went out on the road and visited GF’s customers. What he found was a lack of commitment to GF’s 7nm process in the customer base. Many customers were never going to go to 7nm and of the customers who were, GF wouldn’t have enough capacity to meet their demands. There was also concern in the customer base that 7nm would take up all the R&D and capital budgets and starve the other processes they wanted to use of investment.

What Did GF Give Up?
By exiting the 7nm and smaller wafer market, GF has given up some opportunity. Figure 3 illustrates the total available market (TAM) for foundry wafers in 2018 and 2022. Even in 2022 the forecast is for 7nm to be less than 25% of the market, and the TAM for >=12nm to increase from $56 billion in 2018 to $65 billion in 2022.


Figure 3. Foundry Market

In terms of specific markets, GF is conceding some of the computing, graphics processing and data center opportunity. Currently AMD is GF’s largest customer and long term that business will presumably shrink as AMD moves to smaller geometries.

What Now?
GF will be focused on four major “differentiated – feature rich” offerings going forward.

  • FinFET – GF will continue to offer 14nm and 12nm FinFET based processes and they are continuing to add to these offerings with RF and analog capabilities, improved performance (10-15%) and density (15%), embedded memory options, enhanced MIM capacitors and advanced packaging options.
  • RF – this is a segment where GF has a clear leadership position, as I discussed in another article available here. With the pivot away from 7nm, GF is increasing investment in this segment with more capacity. At the Technology Conference GF said “If you think RF, think GF” and I agree that is an apt slogan.
  • FDSOI – GF’s FDX processes, 22FDX today and 12FDX to follow, lead the emerging FDSOI space, as I discussed in another recent article available here. FDSOI shows great potential in the IoT and automotive markets. If FDSOI really takes off this could be a huge win for GF, and they have already announced $2 billion of design wins for the 22FDX process.
  • Power/AMS (Power, Analog and Mixed Signal) – this segment combines Bipolar/CMOS/DMOS (BCD), RF, mmWave, embedded non-volatile memory and Micro-Electro-Mechanical-Systems (MEMS) for the consumer space, such as high-speed touch interfaces.

Conclusion
GF’s pivot away from 7nm has aligned the company’s R&D and capital spending more closely with their customers’ needs. Whether GF can build a sustainable business model on the four business segments they are now focused on remains to be seen, but more closely aligning your company’s focus with your customers’ needs certainly appears to be a step in the right direction.

[1] IC Knowledge – 300mm Watch Database – 2018 – revision 02


    Make Versus Buy for Semiconductor IP used in PVT Monitoring
    by Daniel Payne on 10-01-2018 at 12:00 pm

As an IC designer I absolutely loved embarking on a new design project, starting with a fresh, blank slate, not having to use any legacy blocks. In the early 1980s we really hadn’t given much thought to re-using semiconductor IP because each new project typically came with a new process node, so there was no IP even ready for re-use, at least not at the IDM where I worked. In 2018, by stark contrast, we have a thriving IP economy providing IC designers with everything from simple logic functions at the low end all the way up to processors at the high end, plus every kind of AMS function that you can imagine. My former Intel co-worker Chris Rowen once famously stated, “The processor is the new NAND gate.”

So let’s say that your next SoC has a power budget, timing specifications and thermal reliability metrics. You naturally want PVT (Process, Voltage, Temperature) monitors placed at strategic locations around your chip so that you can measure and control everything. But should you create your own IP from scratch or just buy something off the shelf? Great question.

    Let’s make a quick list of what it might take to develop your own PVT blocks and start using them:

    • Analog IC design skills (do we need to hire someone?)
    • Expertise to achieve high accuracy from in-chip monitors, with smallest die size and robust operation
    • Awareness of legal positioning (patents, design rights, trademarks)
    • Budget for EDA tools for analog IC design
    • Design time and effort
    • Budget for test chip and mask costs
    • Understanding of fabrication and device packaging timescales
    • Awareness of silicon validation and debug time
    • Contingencies for test chip iteration before production use, allowance for refining the design to meet specifications
    • Understanding and expertise to know where to place each monitor and how many monitor instances to place on an SoC for a given application

    According to the website www.payscale.com the total pay for an analog IC design engineer ranges between $74,378 and $182,461 per year, with a median pay at $105,979.


    You also want that Analog IC design engineer to have experience making PVT monitors from 40nm down to FinFET process nodes, with working silicon as proof.

Each PVT instance has to be accurate enough to feed data back to a digital controller that then makes decisions about DVFS (Dynamic Voltage Frequency Scaling), clock throttling, or changing VDD levels to meet specs and enhance reliability. If the accuracy of the sensor is off, then the control decisions will be inefficient or, in the worst case, will harm chip operation or fail to meet the specs.
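
As a rough illustration of that point, the sketch below (not Moortec's controller, and using assumed temperature limits and sensor error) shows how the guardband in a DVFS decision has to absorb the worst-case sensor error, so a less accurate monitor forces earlier throttling or risks violating the limit.

```python
# Minimal sketch of how a DVFS decision depends on sensor accuracy: the
# guardband must absorb the worst-case sensor error. All limits are assumed.

T_MAX_C = 105.0          # assumed junction temperature limit
SENSOR_ERROR_C = 2.0     # assumed +/- accuracy of the thermal monitor

def choose_operating_point(measured_temp_c, current_freq_mhz):
    """Throttle when the worst-case *true* temperature could exceed the limit."""
    worst_case_temp = measured_temp_c + SENSOR_ERROR_C
    if worst_case_temp >= T_MAX_C:
        return current_freq_mhz * 0.8, "throttle clock / lower VDD"
    if worst_case_temp < T_MAX_C - 15.0:
        return current_freq_mhz * 1.1, "headroom: raise frequency"
    return current_freq_mhz, "hold"

for temp in (80.0, 95.0, 104.0):
    freq, action = choose_operating_point(temp, 1000.0)
    print(f"measured {temp:5.1f} C -> {action} (next freq {freq:.0f} MHz)")
```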

Other IP vendors have created their own PVT monitors and may have patented them, so you need to ensure that your novel IP designs and techniques aren’t infringing an existing patent. There were 141 semiconductor patent suits filed in US District Courts in 2013.


    Source: Jones Day

    The lawyers get rich in patent disputes and both parties drain their precious financial savings downwards until a victor is established. In many cases the patent victor is able to batter the losing company down enough to either bankrupt them or cause them to be acquired, not a pretty sight.

    EDA tools for an analog IC designer include:

    • Schematic Capture
    • Circuit Simulation (SPICE)
    • Layout Tool
    • Schematic-driven Layout
    • DRC/LVS
    • Parasitic Extraction
    • Reliability analysis
    • Transistor Sizing
    • Design centering with Monte Carlo analysis
    • IR Drop analysis
    • Electromigration analysis

    For the digital controller portions you’ll likely need:

    • HDL entry
    • Logic synthesis
    • Static Timing Analysis
    • Place & Route
    • DFT tools

    The PVT sensors themselves are largely analog while the controller is digital, so more tools for co-simulation will be required:

    • AMS simulation and verification

Getting all of the PVT monitor blocks designed and implemented is going to take time, probably on the order of many man-years of effort, so add that figure into your total calculation for making the IP.

Mask costs are highly dependent on the process node that you’re at: 40nm masks are about $900K while 28nm masks are about $1.5M, and the costs get steeper from there. You’re going to need a test chip with the new PVT monitors on it to really be certain.


    Source: AnySilicon

    Test chip costs depend on the process node, die size and foundry partner that you choose. Contact your local account manager and get a figure to work with.

    Fabrication time depends on the foundry, their capacity at the moment, and whether you are using a multi-project wafer or not. Think weeks to months of time just waiting, when you could certainly be doing something more productive with your engineering staff.

The magic moment comes when packaged parts or a raw wafer are delivered and you get to debug your first silicon. Engineers and test engineers huddle around making frantic measurements and debugging their test program, taking maybe several days to determine if your new IP is working properly across the voltage and temperature range. Worst case, you’ll find some functional bugs or see that the design doesn’t quite meet your on-chip monitor accuracy requirements, so a re-spin is required, sending you back through more weeks of design, verification and fabrication.

    Even when your PVT IP is working, you now have to be judicious in where to place each sensor and the controller portion in order to optimize SoC-level performance or to fully enhance device lifetime.

If the process outlined above sounds laborious, error-prone, engineering-heavy, expensive and at odds with your corporate time-to-market goals, then know that the alternative is to purchase your PVT in-chip monitoring IP from a trusted vendor like Moortec. I’ve been talking with these folks over the past year and am rather impressed because of these factors:

    • 8 years of commercial experience with PVT monitoring subsystems
    • 60+ customers to date using their IP

      • Consumer Electronics (Digital TV, Mobile, Notebooks, SSD)
      • Datacenter (AI, Networking, Enterprise, Cloud Computing, HPC)
      • IOT (Wearables, Smart Home, Smart City)
      • Automotive (Infotainment, Collision Avoidance, Autonomous Driving)
      • Crypto-currency Mining (Bitcoin, Litecoin, Ethereum)
    • IP working in 40nm down to 7nm

    Here’s a high-level view of their PVT subsystem:

    Source: Moortec

    Summary
Every SoC design project has the same decision to make about PVT in-chip monitoring: make or buy. Hopefully you do some back-of-the-envelope calculations on the make side, then give the folks at Moortec a call to help complete the comparison. Most markets are moving so fast that efficient deployment of your internal design teams, combined with the ever-present time-to-market pressures, dominates business decisions. So, using trusted IP from a vendor like Moortec sounds like the lowest-risk, fastest route to market.
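
As a starting point for that back-of-the-envelope exercise, here is an illustrative tally of the "make" side using the figures quoted earlier in this article (median analog designer pay, 40nm mask cost) plus assumed values for everything else; plug in your own numbers.

```python
# Back-of-the-envelope "make" estimate. Figures marked "from the article" are
# quoted above; everything else is an assumption for illustration only.

engineer_cost_per_year = 106_000 * 1.4   # median pay (from the article) plus assumed ~40% overhead
engineering_years = 3                    # assumed total analog + digital + verification effort
eda_tools_per_year = 150_000             # assumed analog + digital tool budget
mask_set_40nm = 900_000                  # from the article
test_chip_and_package = 250_000          # assumed MPW/test-chip, packaging, bring-up
respins = 1                              # assumed one iteration before production

make_cost = (engineer_cost_per_year * engineering_years
             + eda_tools_per_year * engineering_years
             + (mask_set_40nm + test_chip_and_package) * (1 + respins))

print(f"Rough 'make' cost at 40nm: ~${make_cost / 1e6:.1f}M, plus schedule risk")
```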



    SURGE 2018 Silvaco Update!
    by Daniel Nenni on 10-01-2018 at 7:00 am

    The semiconductor industry has been very good to me over the past 35 years. I have had a front row seat to some of the most innovative and disruptive things like the fabless transformation and of course the Electronic Design Automation phenomenon, not to mention the end products that we as an industry have enabled. It is truly amazing that my iPhone X has more processing power than NASA and Neil Armstrong had when landing on the moon, absolutely.


    Electronics healthy but trade wars loom
    by Bill Jewell on 10-01-2018 at 7:00 am

    Production of electronic equipment is healthy based on July and August data. China, the largest electronics producer, showed three-month-average change versus a year ago (3/12) of 13.8% in August. Growth for China has been in the 12% to 15% range since January 2017, picking up from the 8% to 11% range in 2016. South Korea’s electronics production has been more volatile, but was over 12% in June and July. The United States has shown accelerating growth since turning negative in 2016. U.S. electronics 3/12 change has been above 6% for the last four months (May through August), the highest since February 2004. European Union (EU) total industrial production growth has been in the 2% to 3% range from April to August, slowing from 5% earlier in the year. Our Semiconductor Intelligence global index of electronics production has been relatively robust in the last two years, in the 7% to 9% range since January 2017.
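
For readers unfamiliar with the 3/12 metric used here, it is simply the average of the latest three months divided by the average of the same three months a year earlier, minus one; the sketch below shows the calculation on a made-up monthly production index.

```python
# How a "3/12" change is computed: the average of the latest three months
# divided by the average of the same three months a year earlier, minus one.
# The monthly index values below are made up for illustration.

def three_twelve_change(monthly_index, month):
    """monthly_index: monthly values; month: 0-based index, needs >= 15 months of history."""
    recent = monthly_index[month - 2 : month + 1]
    year_ago = monthly_index[month - 14 : month - 11]
    return sum(recent) / sum(year_ago) - 1.0

# 20 months of a hypothetical electronics-production index
index = [100 + 1.1 * m for m in range(20)]
print(f"3/12 change at month 19: {three_twelve_change(index, 19):.1%}")
```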

Electronics production change in key Asian countries has been mixed. India and Vietnam have led the way, with India’s electronics growth accelerating from 15% in 2017 to 27% year-to-date through July 2018. Vietnam has decelerated from 33% growth in 2017 to a still robust 18% year-to-date. India and Vietnam should continue to outperform other Asian countries because of their low labor rates and improving infrastructures. China and Thailand have maintained growth in the 12% to 14% range over the last year and a half. Malaysia and South Korea have picked up from 4% in 2017 to 10% and 8%, respectively, year to date. Japan electronics growth is stuck in the low single digits while Taiwan has seen declines.

Total electronics exports in 2017 were $1.8 trillion while imports were $2.1 trillion, according to the World Trade Organization (WTO). China continued as the largest electronics exporting country in 2017 at $592 billion. However, the other Asian countries combined became larger than China at $627 billion. China was also the largest importing country with $408 billion in 2017. The U.S. was second at $336 billion.

The U.S. and China are currently in a trade war, with each country imposing tariffs on imports. The impact of the trade war on U.S. electronics and semiconductor companies is explained in a New York Times article. The effect on Asian companies is covered in an article in the Nikkei Asian Review. In addition, the U.S. is renegotiating the North American Free Trade Agreement (NAFTA). A preliminary agreement was reached with Mexico and discussions are ongoing with Canada.

Without getting into the politics of these trade issues, it is important to note the significance of China and Mexico to U.S. electronics imports. According to the U.S. Census Bureau, the majority of U.S. electronics imports in 2017 came from China at 59%. Mexico was the second largest source of imports at 18%. Most of the rest, 19%, came from other Asian countries.

    Electronics and semiconductors are truly global industries. Free trade is necessary to keep these industries running efficiently. Hopefully the current trade issues can be resolved in a reasonable time.


    Design Automation and the Engineering Workstation
    by Daniel Nenni on 09-28-2018 at 7:00 am

    This is the seventeenth in the series of “20 Questions with Wally Rhines”

Several common aspects have existed throughout what is now the modern Electronic Design Automation (EDA) industry. When I joined TI in 1972, the company was very proud of its design automation capability as a competitive differentiator. Much of the success of TTL for TI came from the ability to crank out one design per week with automated mask generation using the “MIGS” system. Other semiconductor companies had their own EDA capability. So it was somewhat revolutionary when Calma introduced its automated layout system at about the same time that Computervision and Applicon did the same, all based upon 32 bit minicomputers. Computervision, Calma and Applicon became the big three of the first generation of layout tools.

TI considered these newcomers a threat. Development proceeded on a system based upon the TI 990 minicomputer. Meanwhile, our competition largely moved to Calma and, in some cases, Applicon. Design engineers were rebelling when I arrived in Houston in 1978. We were using the TI 990 based “Designer Terminal” with Ramtek displays to do the layout editing with a light pen. Designs were digitized on home-grown systems based upon layouts that were created by draftspersons (mostly draftsmen) on a grid matched to the design rules of the chip. Our people wanted to buy a Calma system. However, just as we got our first Calma, GE acquired Calma and quickly destroyed the company. With the introduction of the Motorola 68K in early 1979, a host of companies, including Apollo, began developing a new generation of engineering workstations.

By now, the management of TI’s Design Automation Department (DAD) realized the limitations of its approach. The 1982 Design Automation Conference in Las Vegas further affirmed the need to move to a next-generation approach. So TI became one of the first major semiconductor companies to commit to the newly introduced Mentor Idea Station product based upon the Apollo workstation. Internal support groups in large corporations don’t usually surrender their corporate roles, despite their competitive disadvantage, and TI was no exception. A plan existed to complement the Mentor software with DAD-developed software. Mentor readily agreed since TI was a very large customer win. TI’s management accepted the whole strategy because of DAD’s strong history of success in maintaining a competitive design advantage for integrated circuit design.

    Subsequently, the Daisy-Mentor-Valid competition ensued. Mentor turned out to be a good choice because it was based upon the Apollo workstation and Mentor resources were not tied down to developing new hardware, as were the Daisy and Valid teams. But TI was not one of Mentor’s most desirable customers. The DAD engineers were experts in design software and they wanted to tweak the system capabilities as well as to add major new functionality. Mentor had realized major success with systems companies in aerospace, defense and automotive industries and was rapidly becoming a worldwide standard, especially in Europe. Meanwhile, my role in the TI Semiconductor Group changed. I was appointed President of the TI Data Systems Group, TI’s $700 million revenue business in minicomputers and portable terminals and was moved to Austin, Texas. While I was away over the next three years from late 1984 through mid 1987, a decision was made to divorce TI from Mentor and port our own TI software to the Apollo workstations. In mid-1987, I returned to Dallas as Executive VP of the Semiconductor Group only to find that the original move to commercially available design automation products had been reversed. TI was once again an island in an industry that was building upon broad innovation from a diverse set of designers working with commercial EDA suppliers. Unlike the early semiconductor history when TI had nearly 40% market share, TI now had 10% to 15% share and the economies of scale didn’t justify custom design tools.

One of the entities that reported to me initially as EVP of the Semiconductor Group was DAD. I appointed Kevin McDonough, who later became Engineering VP of Cyrix Semiconductor, to assess our position in design automation and recommend a solution. Kevin did the evaluation and came back with an answer: Adopt the Silicon Compiler Systems design automation platform and move ahead to “RTL based” design. And so we did. We committed $25 million to SCS and started a conversion of the MOS portion of our design business to SCS. Few people even remember that SCS, which was an outgrowth of an AT&T Bell Labs spin-out called Silicon Design Labs, or SDL, actually developed the entire language based top-down design methodology before VHDL and Verilog even existed. TI and Motorola were among the first adopters. The TI sale gave SCS the credibility for an acquisition by Mentor Graphics at a premium price. Why would Mentor acquire SCS when they already had a strong IC Station product that could effectively compete against Cadence Virtuoso predecessor products? The answer: a top-down, language-based methodology was clearly the direction of the future for the semiconductor industry. The problem: the two methodologies (top-down, language based versus detailed layout) were disruptively different. Traditional designers viewed RTL design as the province of “computer programmers”. “Real” IC designers knew how transistors worked and could craft superior ICs with a detailed design and layout system.

What evolved was internecine warfare. Hal Alles, VP of IC Design at Mentor, veteran genius developer from Bell Labs and founder of SDL, had the undesirable challenge of convincing the two groups to work together. They didn’t. Step by step, the SCS designers denigrated the traditional IC design approach and Mentor’s message was bifurcated. The result: a window for Cadence to become the clear leader in traditional IC detailed design. Meanwhile, other companies exploited the fact that SCS had a closed system for language based design using two languages, L and M, which were proprietary. VHDL, and much later Verilog, became public domain languages for top-down design. DAD was one of the organizations that reported to me at TI during the 1982 through 1984 period. We initiated a research proposal to create an open language for top-down design called VHDL, or VHSIC Hardware Description Language. In 1983, we were granted a contract to develop VHDL with IBM and Intermetrics as co-developers. I was one of five speakers at a special event to announce the plan in 1983. In 1987, VHDL became IEEE standard 1076. In 1985, Prabhu Goel formed a company, Gateway Design Automation, that subsequently developed an even simpler language called Verilog. The company was acquired by Cadence.

    The final result: Engineers who were accustomed to schematic capture were gradually displaced by language based developers. The EDA industry went into one of its major discontinuities, the transition from schematic capture to RTL based design. Mentor lost a lot of momentum and SCS never really became a standard for RTL based design although it might have been if it had made its languages open.

    The 20 Questions with Wally Rhines Series


    Custom SoC Platform Solutions for AI Applications at the TSMC OIP
    by Daniel Nenni on 09-27-2018 at 12:00 pm

    The TSMC OIP event is next week and again it is packed with a wide range of technical presentations from TSMC, top semiconductor, EDA, and IP companies, plus long time TSMC partner and ASIC provider Open-Silicon, a SiFive Company. You can see the full agenda HERE.

    AI is revolutionizing and transforming virtually every industry in the digital world. Advances in computing power and deep learning have enabled AI to reach a tipping point toward major disruption and rapid advancement. However, these applications require much higher memory bandwidth. ASIC platforms enable AI applications through training in deep learning and high speed inter-node connectivity, by deploying high speed SerDes, a deep neural network DSP engine, and a high speed high bandwidth memory interface with High Bandwidth Memory (HBM) within a 2.5D system-in-package (SiP). Open-Silicon’s implementation of a silicon-proven ASIC platform with TSMC’s FinFET and CoWoS® technologies is centrally located within this ecosystem.

Open-Silicon’s first HBM2 IP subsystem in 16FF+ is silicon-proven at a 2Gbps data rate, achieving bandwidths up to 256GBps, and is being deployed in many ASICs. The data-hungry, multicore processing units needed for machine learning require even greater memory bandwidth to feed the processing cores with data. Keeping pace with the ecosystem, Open-Silicon’s next generation HBM2 IP subsystem is ahead of the curve at 2.4Gbps in 16FFC, achieving bandwidths of more than 300GBps.

    This 7nm ASIC platform is based on a PPA-optimized HBM2 IP subsystem supporting 3.2Gbps and beyond data rates, achieving bandwidths up to >400GBps. It supports JEDEC HBM2.x and includes a combo PHY that will support both JEDEC standard HBM2 and non-JEDEC standard low latency HBM. High speed SerDes IP subsystems (112G and 56G SerDes) enable extremely high port density for switching and routing applications, and high bandwidth inter-node connections in deep learning and networking applications. The DSP subsystem is responsible for detecting and classifying camera images in real time. Video frames or images are captured in real time and stored in HBM, then processed and classified by the DSP subsystem using the pre-trained DNN network.
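
The bandwidth figures quoted in the last two paragraphs follow directly from the 1024-bit HBM2 stack interface: bandwidth per stack is the per-pin data rate times 1024 bits, divided by 8.

```python
# Bandwidth per HBM2 stack: 1024 data pins * per-pin rate (Gb/s) / 8 bits per byte.

HBM2_STACK_WIDTH_BITS = 1024

def hbm_bandwidth_gbytes(per_pin_rate_gbps):
    return HBM2_STACK_WIDTH_BITS * per_pin_rate_gbps / 8  # GB/s per stack

for rate in (2.0, 2.4, 3.2):
    print(f"{rate} Gb/s per pin -> {hbm_bandwidth_gbytes(rate):.0f} GB/s per stack")
# -> 256, 307 and 410 GB/s, matching the 256GBps, >300GBps and >400GBps figures above.
```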

Implementation challenges for AI ASICs include design methodologies for advanced FinFET nodes, physical design of large ASICs (>300 mm2) running at GHz speeds, power and timing closure, and system-level power, thermal and timing signoff. Open-Silicon has overcome these challenges with advanced implementation strategies that enable an Advanced On-Chip Variation (AOCV) flow for physical design and timing closure, correlation between implementation and signoff that results in faster design convergence, advanced-node power planning and validation techniques, and system-level signal and power integrity signoff for a complete 2.5D SiP. Additionally, various in-house development tools help debug and analyse the design data through the physical design phases, thus speeding convergence of complex designs.

Open-Silicon’s DFT methodology addresses the test and debug challenges in large ASIC designs by incorporating methods such as core wrappers, hierarchical BIST/scan, compression, memory repair, power-aware ATPG and wafer probing to ensure quality KGD before 2.5D assembly, interconnect test between the ASIC and HBM, and design practices recommended by TSMC CoWoS® to improve 2.5D SiP manufacturing and yield.

    Open-Silicon’s ASIC design and test methodology, low area high performance HBM2 IP subsystem, and its experience in high speed SerDes integration and DSP subsystem implementation, offer best-in-class custom silicon solutions for next generation AI and high performance networking applications.

    Who: Bhupesh Dasila, Engineering Manager – Silicon Engineering group, Open-Silicon
    What: Custom SoC Platform with IP Subsystems Optimized for FinFET Technologies Enabling AI Applications
    When: Wednesday, October 3 2018, 1:00 pm
    Where: EDA/IP/Services Track, Santa Clara Convention Center

Open-Silicon is exhibiting at Booth #907

The TSMC Open Innovation Platform® (OIP) Ecosystem Forum is a one-of-a-kind event that brings together the semiconductor design chain community and approximately 1,000 director-level and above TSMC customer executives. The OIP Forum features a day-long, three-track technical conference along with an Ecosystem Pavilion that hosts up to 80 member companies.


    Mesh Networks, Redux
    by Bernard Murphy on 09-27-2018 at 7:00 am

    It isn’t hard to understand the advantage of mesh networking (in wireless networks). Unlike star/tree configurations in which end-points connect to a nearby hub (such as phones connecting to a conventional wireless access point), in a mesh nodes can connect to nearest neighbors, which can connect to their nearest neighbors and so on, therefore allowing for multiple possible paths to route data to a target device. Important characteristics of this approach are (in theory) reliability and lower maintenance cost. If some node in the network fails, communication can automatically route around the failure until the bad node is repaired or replaced.


    Investment in this area has focused strongly on Wi-Fi, for example the solution supported by Google Home, and the Zigbee and Thread standards. Clearly these have seen some traction but there’s an obvious question – if mesh is so great, why aren’t we seeing it everywhere? Apparently the standards aren’t quite standard enough to ensure full interoperability between solutions from different vendors. So your phone, fitness tracker, thermostat, smart speaker and laptop may not collaborate seamlessly in that mesh. More annoyingly, they may appear to collaborate but fail from time to time in mysterious ways. So much for reliability and low maintenance.

    A better solution has to start from a better standard which closes off any wiggle room for incompatibilities. Which is where Bluetooth gets interesting. Incidentally, I’m coming to view Bluetooth as the stealth technology of wireless communication. Just a humble little capability to connect your phone to earbuds and your car infotainment center, right? Then we got Bluetooth Low Energy (BLE) which now looks pretty interesting for many IoT edge nodes. Then the standard added broadcasting for indoor beacons, and more recently we got Bluetooth mesh, a capability defined to fix the interoperability problem. Not such a junior partner in communications anymore.

BT mesh defines a full stack all the way from the low-level radio up to the application layer, eliminating wiggle room; every solution built on BT mesh should be interoperable. Mesh also requires that a device adopt and stay with a mesh model (similar to BT profiles); once a model for the behavior of a node in a network is adopted, it can never change. So, for example, a BT mesh light switch purchased this year should be able to control a BT mesh light bulb purchased 30 years from now.

Mesh is supported through a software stack on top of BLE, so adding mesh support is just a software upgrade on top of BLE. Your smartphone should therefore be able to join a BT mesh and control any mesh device (given appropriate privileges). It’s less clear that this is in the roadmap for ZigBee-enabled devices.

    Since BT mesh is built on BLE, it is naturally aligned with low-energy usage. In a mesh network, it may seem like this is no better than a theoretical advantage; all that peer-to-peer communication will neutralize any advantage in BLE, surely? The standards group thought of that, allowing for different classes of node in a mesh. A low-power node can operate on a low duty-cycle and on wake-up can poll an identified Friend node rather than having to broadcast to find a nearby node. A Friend node will also buffer messages addressed to its low-power friend and will forward when that node wakes up.
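
Here is a highly simplified model (not the Bluetooth SIG stack) of the Friend/Low Power Node relationship just described: the friend buffers messages while the low-power node sleeps and hands them over when the node wakes up and polls.

```python
# Simplified model of BT mesh Friend / Low Power Node behavior, for illustration
# only: the friend queues messages while the low-power node sleeps.

class FriendNode:
    def __init__(self):
        self.buffered = {}                 # low-power node id -> queued messages

    def receive_for(self, lpn_id, message):
        self.buffered.setdefault(lpn_id, []).append(message)

    def poll(self, lpn_id):
        return self.buffered.pop(lpn_id, [])

class LowPowerNode:
    def __init__(self, node_id, friend):
        self.node_id = node_id
        self.friend = friend               # established friendship: poll a known node, no broadcast

    def wake_and_poll(self):
        return self.friend.poll(self.node_id)

friend = FriendNode()
sensor = LowPowerNode("lpn-1", friend)
friend.receive_for("lpn-1", "set report interval = 60s")   # arrives while the sensor sleeps
print(sensor.wake_and_poll())   # -> ['set report interval = 60s']
```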

    Friend nodes and Relay nodes will commonly be served by main power, so are not as constrained in power usage, though still quite power efficient thanks to BLE. So mesh really can be low-power friendly. Mesh is also designed to be very secure, providing end-to-end government-grade security from the sender to the target application. Even though other nodes in the mesh are receiving and retransmitting messages, they can’t read those messages. And finally, all of this builds on a long-established and widely-deployed technology (Bluetooth), minimizing adoption problems, need for major upgrades and maintenance.

    What are the primary use models? Definitely in-building applications, such as controlling lighting, heating or window shades in a smart home or office or factory. Even more interesting is use for location services, especially indoors; this is where beacons become popular. We’ve already seen promos showing how we could find a store in a mall or products in a supermarket (that can’t come soon enough for me). The same technology can help track assets in a factory, patients and medical equipment in a hospital or concession stands in a stadium. The range of (non-mesh) BLE is already quite good; mesh extends that even further. There’s interest now in mesh to enable smart-city functions.

All of this sounds good, but what’s the ground reality? I’m told by Paddy McWilliams (Eng Dir at CEVA) and Franz Dugand (Sales and Mktg Dir, Connectivity, at CEVA) that in China the battle is pretty much over. Alibaba for example has specified that, of the mesh standards, they are going with BT mesh. Others in China are apparently coming to similar conclusions. In one case I heard of, after carefully comparing ZigBee and talking to customers, a services provider told their product makers they expected BT mesh support (perhaps Alibaba’s choice had something to do with that). Curiously, the US and EU seem to be behind in adoption at present, but China is a huge market. It seems unlikely we can simply ignore their direction.

    CEVA has been active and very successful in Bluetooth for a long time. They already have a mesh stack which will run on top of their BLE4.2 and BLE5 IPs, so you probably should check them out when you’re thinking about how you can get in on this action.


    Crossfire Baseline Checks for Clean IP at TSMC OIP
    by Daniel Nenni on 09-26-2018 at 12:00 pm

IP must be properly qualified before it is used in any IC design flow; one cannot wait to catch issues further down the chip design cycle. Waiting for issues to appear during design verification poses extremely high risks, including schedule slippage. Consider, for example, connection errors in transistor bulk terminals: timing and power closure will work regardless, and the issue would only be uncovered during final SPICE netlist checks. Another potential problem is a LEF view that does not match the GDS, which slips through the cracks all the way through synthesis and would only be caught during chip-level DRC or LVS. That would ultimately require updates to the IP as well as re-synthesis (more slippage).

How can one avoid these potential problems? Simple: with Fractal’s Crossfire QA suite. Fractal is your specialized partner for IP qualification. Crossfire can help you deal with design view complexity, the increasing number of checks required to correctly QA an IP, and the difficulty of handling excessive volumes of data.

    Crossfire supports over 30 standard design formats, from front-end to back-end, including simulation and schematic views, binary databases such as Milkyway, OpenAccess, and NDM, documentation, and custom formats such as Logic Vision and Ansys APL. Any other ASCII based custom formats can also be easily integrated into the tool.

    Getting back to the scope of this article, the recommended baseline of checks can be separated into three sections: cell and pin presence for all formats, back-end checks, and front-end related checks.

    Cell and Pin Presence Checks
Although consistency checks such as cell and pin presence may sound trivial (and for the most part they are), one cannot sweep such an important task under the rug. Don’t be surprised if an IP or standard cell library from a well-known IP vendor is delivered with inconsistencies between the various formats, including cell and pin names, port directions, and hierarchy differences.
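
For illustration only (this is not Crossfire itself), a cell and pin presence check reduces to comparing the cell and pin sets extracted from each format view of the library; the hypothetical LEF and Liberty views below show the shape of such a check.

```python
# Toy cell/pin presence check: every format view of a library should expose
# the same cells and pins. The views below are hypothetical examples.

views = {
    "lef":     {"INVX1": {"A", "Y", "VDD", "VSS"}, "NAND2X1": {"A", "B", "Y", "VDD", "VSS"}},
    "liberty": {"INVX1": {"A", "Y"},               "NAND2X1": {"A", "B", "Y"}},
}

def check_presence(views, power_pins=frozenset({"VDD", "VSS"})):
    cell_sets = [set(v) for v in views.values()]
    mismatched_cells = set.union(*cell_sets) - set.intersection(*cell_sets)
    issues = [f"cell set mismatch: {mismatched_cells}"] if mismatched_cells else []
    for cell in set.intersection(*cell_sets):
        # ignore power/ground pins, which abstract views may omit
        pin_sets = {frozenset(views[fmt][cell] - power_pins) for fmt in views}
        if len(pin_sets) > 1:
            issues.append(f"{cell}: pin mismatch across formats")
    return issues or ["clean"]

print(check_presence(views))
```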

    Back-end Checks
    Ensuring layout related consistencies across all back-end related formats is an important part of the IP QA qualification. Pin labels and shape layers must match across all layout and abstract formats. All layout formats such as GDS, Oasis, Milkyway CEL, NDM and OpenAccess layout views must directly match across the board. When comparing a layout to an abstract format such as LEF, Milkyway FRAM or NDM frame, one must ensure that all layer blockages correctly cover un-routable areas in the layout. On top of that, pin shapes and layers must match in order to guarantee a clean DRC/LVS verification down the line.

    Other important checks to consider include area attribute definitions for non-layout formats which must match the area defined by the boundary layers for various layout formats. IP and standard cell pins must be accessible by the router and for non-standard cell related IP, pin obstruction needs to be checked in order to ensure accessibility. In some cases, ensuring that all pins are on a pre-defined grid can also be a necessary task. In the end, these checks will ensure a quicker and less error-prone P&R execution.

    Front-end Checks
Front-end checks can be broken into seven separate sections: timing arc, NLDM, CCS, ECSM/EM, NLPM, functional characterization, and functional verification. In this blog we’ll cover the latter two, which relate to functional checks. The first five sections, which relate to characterization, deserve an article all their own and will be covered in an upcoming blog.

Functional characterization checks ensure the timing arcs are defined correctly when compared to the given Boolean functions for formats like Liberty, Verilog, and VHDL. Other checks include power-down function correctness, ensuring related power and ground pins are defined correctly when compared to SPICE netlists or UPF models (the correct pins are extracted from SPICE by traversing the circuits defined in the SPICE format). We also recommend checking related bias pins and whether input pins are correctly connected to gates or antenna diodes.

    When dealing with standard cell libraries, it is important to establish the Boolean equivalence of all formats that describe the behavior of a cell. This will ensure that all formats behave in the same manner when dealing with functionality during various front-end related timing simulations.
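
As a minimal sketch of what such a functional check involves, for small standard cells an exhaustive truth-table comparison between two views is enough; the AOI21 functions below are hypothetical examples, not output from any particular tool.

```python
# Minimal Boolean-equivalence check between two views of a small cell:
# compare the functions exhaustively over all input combinations.

from itertools import product

def equivalent(f, g, n_inputs):
    """Return True if f and g agree on every input combination."""
    return all(f(*bits) == g(*bits) for bits in product((0, 1), repeat=n_inputs))

# Hypothetical AOI21 cell: a Liberty function "!((A1 & A2) | B)" versus a
# behavioral model of the same cell.
liberty_fn = lambda a1, a2, b: int(not ((a1 and a2) or b))
behavioral_fn = lambda a1, a2, b: 1 - ((a1 & a2) | b)

print("AOI21 views equivalent:", equivalent(liberty_fn, behavioral_fn, 3))
```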

    What else can Crossfire do?
Crossfire is technology independent. From a tool perspective, the differences from one node to the next include:

    • Exponential data size growth (up to 2x when compared to previous node)
    • Introduction of new design formats (i.e. NDM)
    • Number of corners increasing drastically in newer nodes (i.e. FinFET based)

    As a tool, Crossfire only has to differentiate between standard cell libraries and all other IP (memories, digital, analog, mixed-signal, etc.). Some checks, such as abutment or functional verification, are designed specifically for standard cell libraries.

    Crossfire is a proven validation tool used by various Tier 1 customers. All checks and formats supported by Crossfire are based upon direct cooperation with our customers. Customers moving from “old” to “new” technology nodes automatically get all the checks and format support developed for and used by Tier 1 customers. This cycle of shared knowledge is passed on from one technology node to another.

    Conclusion
    IP qualification is an essential part of any IC design flow. A correct-by-construction approach is needed since fixing a few bugs close to tapeout is a recipe for disaster. Given that, IP designers need a dedicated partner for QA solutions that ensures the QA needs of the latest process nodes are always up-to-date. In-house QA expertise increases productivity when integrated with Crossfire. All framework, parsing, reporting, and performance optimization is handled by the tool. On top of that, with a given list of recommended baseline checks, we ensure that all customers use the same minimum standard of IP validation for all designs.

    TSMC OIP
    The Crossfire team and I will be at a booth in the TSMC OIP exhibit hall next week giving out free copies of our Fabless book, discussing the need for IP qualification, and demonstrating the latest Crossfire software. I hope to see you there!