Konica Minolta Talks About High-Level Synthesis using C++
by Daniel Payne on 07-11-2019 at 8:00 am

In the early days of chip design, circa the 1970s, engineers would write logic equations and then manually reduce that logic using Karnaugh maps. Next came the first generation of logic synthesis in the early 1980s, which read in a gate-level netlist, performed logic reduction, then output a smaller gate-level netlist. Logic synthesis then added the capability to move a gate-level netlist from one foundry to another. In the late 1980s logic synthesis allowed RTL designers to write Verilog code and produce a gate-level netlist from it. Ever since then our industry has been searching for a design methodology even more productive than RTL coding, because a design entry above the RTL level could simulate more quickly, offer higher capacity, and reach a larger audience of system-level users who don't want to be encumbered with the low-level semantics of RTL coding.

High-Level Synthesis (HLS) is an accepted design paradigm now, and the engineers at Konica Minolta have been using C++ as their design entry language for several years while designing multi-functional peripherals, professional digital printers, ultra-sound equipment for healthcare and other products.

The original C++ design flow, shown below, is built around the Catapult tool from Mentor, a Siemens business, and delivers benefits like 100X faster simulation times than RTL:

Even with this kind of C++ flow, there are some extra steps and issues, like:

  • Manually inspecting algorithm code takes too much time.
  • Code coverage with GCOV provided no insight for synthesizable C++ code, and offered no expression, toggle or functional coverage analysis.
  • Manual waivers were required to close coverage.

The Catapult family of tools extends beyond just C++ synthesis, so more of these tools were added, as shown below highlighted in green:

Let me explain what some of these boxes are doing in more detail:

  • Catapult Design Checker – uncovers coding bugs using static and formal approaches.
  • Catapult Coverage – hardware-aware C++ coverage analysis.
  • Assertion Synthesis – auto-generation of assertions in the RTL.
  • SCVerify – creates a smoke test and sets up co-simulation of C++ and RTL, comparing results for differences.
  • Questa CoverCheck – finds unreachable code using formal RTL coverage analysis.

Checking C++ Code

So, this newer flow looks pretty automated, yet there can still be issues. For example, the C++ is untimed, while RTL has a notion of clock cycles, so during RTL simulation it's possible for a mismatch to arise. The Catapult Design Checker comes into play here, and when run on several Konica Minolta designs the tool detected some 20 violations of the Array Bound Read (ABR) rule, where an array index is out of bounds. Here's an ABR violation example:
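To make the bug class concrete, here is a hypothetical C++ fragment (invented for this illustration, not taken from the Konica Minolta design), in which a loop bound lets the index step one past the end of a fixed-size array:

#include <cstdint>

// Hypothetical illustration only: a fixed-size coefficient table read with an
// index that can step past the end of the array.
static const uint8_t coeff[8] = {1, 2, 4, 8, 8, 4, 2, 1};

uint32_t weighted_sum(const uint8_t* pixels, unsigned ntaps) {
    uint32_t acc = 0;
    for (unsigned tap = 0; tap <= ntaps; ++tap)   // off-by-one: '<=' lets tap reach ntaps
        acc += pixels[tap] * coeff[tap];          // ABR when ntaps >= 8: coeff[8] is out of bounds
    return acc;
}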

The fix is to add assertions to the C++ code:
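Continuing the hypothetical fragment above (still an invented example, not the actual design), a C++ assert makes the legal index range explicit; it fires during C++ simulation if the bound is ever violated, and the same intent can be carried into the RTL by Assertion Synthesis:

#include <cassert>
#include <cstdint>

// Hypothetical fix for the fragment above: constrain the caller and guard the index.
static const uint8_t coeff[8] = {1, 2, 4, 8, 8, 4, 2, 1};

uint32_t weighted_sum(const uint8_t* pixels, unsigned ntaps) {
    assert(ntaps <= 8);                            // caller contract, checked during C++ simulation
    uint32_t acc = 0;
    for (unsigned tap = 0; tap < ntaps; ++tap) {   // '<' keeps the index strictly below ntaps
        assert(tap < 8);                           // array-bound guard that can also become an RTL assertion
        acc += pixels[tap] * coeff[tap];
    }
    return acc;
}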

With the C++ assertions in place you will see any violations during simulation, and Assertion Synthesis will generate PSL code, as shown below, that is used during RTL testing.

Code Coverage

The Catapult Coverage (CCOV) tool understands hardware, while the original GCOV tool doesn’t, so CCOV supports coverage of:

  • Statement
  • Branch
  • Focused Expression
  • Index Range
  • Toggle Coverage
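To make the difference between these metrics concrete, consider a small made-up fragment (not from the Konica Minolta design); the comments note what each metric asks of the compound condition:

// Made-up fragment used only to illustrate the coverage metrics listed above.
bool fifo_push_allowed(bool enable, bool full, bool flush) {
    // Statement coverage: the return statement must simply be executed.
    // Branch coverage: the expression must be observed both true and false.
    // Focused expression coverage: each operand (enable, full, flush) must be shown
    // to independently flip the result while the other operands hold it steady.
    // Index range coverage would similarly record which values an array index actually takes.
    // Toggle coverage checks that the corresponding hardware signals switch both 0->1 and 1->0.
    return enable && (!full || flush);
}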

One big question remains, though: how close is C++ coverage to the actual RTL coverage? The SCVerify tool was used on 10 designs to compare statement and branch coverage results, which show close correlation below, with an average statement coverage of 97% and branch coverage of 93% for CCOV.

Unreachable Code

Having any unreachable code is an issue, so using the Questa CoverCheck tool helps to identify it and then selectively remove it from the Unified Coverage Database (UCDB). Here's what an engineer would see after running CoverCheck; the items shown in yellow are unreachable:

Once a designer sees the unreachable code, they decide whether it is a real bug or can be waived; if the element is actually reachable, they create a new test for it.

Closing Coverage

During high-level verification the LSI engineers are trying to reach coverage goals, and they can ask the algorithm developers to add more tests. In the future the algorithm developers could use CCOV to reach code coverage goals, while the LSI engineers use the remaining Catapult tools to reach RTL closure.

Conclusions

Takashi Kawabe's team at Konica Minolta has been successfully using Catapult tools in a C++ flow over the years to bring products to market more rapidly than with traditional RTL entry methods. By using the full suite of Catapult tools they are simulating 100X faster in C++ than at the RTL level, and have shown that C++ level signoff is now possible.

The design world has come a long way since the 1970s, and using C++ level design and verification is here to stay. There's an 11-page white paper authored by Kawabe-san on this topic, and you can download it online here.

WEBINAR: GPU-Powered SPICE – The Way Forward for Analog Simulation
by Randy Smith on 07-10-2019 at 9:37 am

Several years ago, I was a consultant to a company called Gauda, Inc.  I enjoyed working with Gauda as the technology was interesting. On June 3, 2014, Gauda, Inc. was acquired by D2S, Inc. so their technology lives on. Gauda was focused on optical proximity correction (OPC) and optical proximity verification solutions utilizing GPUs, rather than CPUs, as the main processing engine. To do this, one must excel in parallelizing computationally intense algorithms and then properly map those algorithms onto the very different architecture of a GPU. It seems that Empyrean has been able to do this in order to produce a GPU-powered SPICE simulator. I am really interested to learn more about Empyrean’s approach and results and now we will all get the chance to do just that!

 

On August 8th, 2019 at 10:00 am PDT, Empyrean will discuss ALPS-GT™, the EDA industry's first commercial GPU-Powered SPICE simulator, during a SemiWiki Webinar Series event. ALPS-GT has already been adopted by some top SOC design houses. The discussion will be led by Chen Zhao of Empyrean. Chen will provide concrete comparative numbers on some of the most challenging designs against Empyrean's own traditional CPU-driven ALPS™, which was voted by users as the "Best of DAC 2018" in SPICE simulation for performance (3X to 8X faster than other industry-standard parallel SPICE simulators for post-layout simulation in 2018). ALPS and ALPS-GT together cover a large pool of users and types of designs. ALPS-GT was showcased at DAC 2019 and received lots of attention. Empyrean has been working closely with some key early customers in the use of ALPS-GT for several designs in the 7nm process technology node and now seems to be ready for the broader market.

Chen Zhao is an application engineering manager at Empyrean Software. He has been responsible for application engineering and customer support since he joined Empyrean in 2014. Zhao has extensive experience in full custom IC design and SPICE simulation. Zhao received a BS degree in electrical engineering from New York University and an MS degree in electrical and computer engineering from Johns Hopkins University.

This webinar is the first in this year's SemiWiki Webinar Series, which is anticipated to include many more webinars. Each webinar is expected to last between 30 and 45 minutes. There will be a brief introduction, followed by 20 to 30 minutes of technical presentation and/or demonstrations, followed by a Q&A period. As we expect many attendees, we may not get to all questions during the webinars, but we will be sure to ask the presenting companies to get back to you by phone or email to answer all your questions. I will be moderating many of these webinars, which are expected to be quite informative in showcasing the latest developments in EDA, semiconductor IP, and related markets.

Click here to register for this webinar. Once registered, you will also receive a few reminders.

About Empyrean Software
Founded in 2009, Empyrean Software is an Electronic Design Automation (EDA) and intellectual property (IP) technology leader delivering fast and true physically aware, design closure and optimization solutions for timing, clock and power of systems on chip (SoCs). The company also offers a high-performance accurate circuit simulator and is an analog IP and fast SerDes IP provider. For details, go to http://www.empyrean-tech.com/


Smart Hearing is Heating Up
by Bernard Murphy on 07-10-2019 at 6:00 am

A lot of the attention in intelligent systems is on object detection in still or video images, but there's another very active area: smart audio. Amazon and Google smart speakers may be the best-known applications, but there are more obvious (and perhaps less novelty-driven) applications in enhancing the hearing devices we already use, in headphones, earpods and hearing aids, and in adding voice control as a new dimension to human/machine interfaces.

Knowles IA8201

I talked to Jim Steele, VP of technology strategy at Knowles, a company that may not be very familiar to my readers. Knowles has been working in the audio space for around 70 years and is now addressing mobile, hearable and IoT markets. They provide, for example, microphones and smart microphones, audio processors, and components for hearing aids and earpods. They're inside the Amazon Echo, cellphones and many hearing aids. It's not surprising that they claim "Knowles inside" for many audio experiences.

About 4 years ago, Knowles acquired Audience, a company that specialized in mobile voice and audio processing. Audience was already used in a number of brand-name mobile phones and was apparently the first proponent of using multiple microphones together with auditory intelligence (I would guess beamforming, acoustic echo cancellation [AEC], etc.) to suppress background interference in noisy environments. Combining Knowles and Audience technologies provided a pretty rich set of capabilities, which they recently spun into their IA8201 chip, the latest product in their AISonic family of audio edge processors. This is a variant on their IA8508 core, right-sized especially for ultra-low power always-on applications with trigger-word detection.

The design (block diagram pictured above) is based on three Tensilica cores, adapted with Knowles customization. The DeltaMax (DMX) core does the heavy lifting in beamforming, AEC, barge-in (you want to give a command while music is playing), noise suppression and multiple other functions. The HemiDelta handles ultra-low power wake-word detection and the Tensilica HiFi 3 core provides audio and voice post processing for leveling, equalization and other functions.

There’s a lot of technology here packed into a small space. This device supports up to 4 microphones which, together with beamforming and other features, should provide excellent speaker discrimination in most environments. This will be a real boon for the hard of hearing (yes, there are already hearing aids on the market with multiple microphones in each device).

Trigger-word detection can be as low as 1-2mW, allowing for extended use between charges. Certainly useful for a battery-operated smart speaker, but also useful for using your earbuds to make a call through a Bluetooth connection to your phone. Just say "Hey Siri, call my office". Barge-in lets the command through, the trigger word is recognized, and microphones in the earbuds pick up your voice through bone conduction (sounds creepy but that's the way it works).

Voice recognition for command processing runs on the DMX core, through temporal acceleration via 16-way SIMD with an instruction set optimized for machine learning. Wake-word training can be supported by Knowles for OEMs, and user-based training is also supported for command words and phrases.

Jim added that applications are not just about voice support. A growing area of interest for products in this space is contextual awareness: listening for significant sounds like sirens (while you're driving, maybe you need to pull over), a baby crying, a dog barking or glass breaking (while you're not at home). All of these can provide important alert signals. Of course you don't want to be bothered with false alarms. Jim said that false accepts are down to 1 in 100 (for wake-word detection also) and can improve with training. Also, for alarm-type signals, OEMs might send a video snippet with the alert to help quickly determine if there is cause for further action.

Jim sees a lot of applications for this device, for hearables, for home safety and for IoT voice-control applications in appliances, TVs and other home automation devices. Lots of opportunity to get away from annoying control panels or phone apps, rather moving right to the way we want to control these devices – through direct commands. You can learn more HERE.


The Nanometrics – Rudolph Technology Merger: What Was Nanometrics Thinking?
by Robert Castellano on 07-09-2019 at 10:00 am

On June 24, 2019, Nanometrics and Rudolph Technology announced they will combine in an all-stock merger of equals transaction. The companies say the combination increases the SAM (served available market) opportunity to approximately $3B.

This article attempts to analyze the two companies in their different business segments, to compare these companies against their peers, and to ascertain the validity of the claim that the SAM will increase to approximately $3B. Finally, I will attempt to show that this merger is a mistake for NANO.

Financial Parameters

Table 1 shows financial parameters for NANO and RTEC between 2013 and 2018. Revenues for both companies are for ONLY the metrology/inspection market where both compete, so I deleted RTEC's revenues coming from other areas such as lithography. This table also compares their revenues with the global wafer front end (WFE) market, so I eliminated RTEC's back end metrology/inspection revenues. I will discuss these markets later in this article.

Of note is that these companies had revenues that are extremely small compared to the entire WFE market of $57.3B in 2018. As a percentage of revenue compared to WFE, NANO was never greater than 0.50% of WFE and RTEC never greater than 0.26%.

In Chart 1, I plot YoY growth of the revenues listed in Table 1. NANO (orange) beat WFE (blue) in three of the five years, while RTEC (grey) beat WFE in only two of the five years.

Equally important, the WFE average YoY growth was 15.2%. This compares to NANO’s average five year growth of 21.8%, while RTEC has been erratic with an average five year growth of just 0.9%.

Metrology/Inspection Metrics

Metrology/inspection equipment is critical to assuring high yields in semiconductor manufacturing while acting as an alarm if processing conditions go out of sync and result in defective chips.

Chart 2 shows that market share for these companies is almost at the noise level of the $6B market, according to our report “Metrology, Inspection, and Process Control in VLSI Manufacturing.” Factor in KLA with a 50% share, and Hitachi High Tech and Applied Materials with 10% each, and it is implausible for this merged NANO-RTEC to increase its SAM in the overall metrology/inspection market.

We can hone in on various segments of the metrology/inspection market to see if these companies have a leading market share. Chart 3 shows the Wafer Inspection market, which makes up about 50% of the total metrology/inspection market according to our report. NANO is not a participant in this segment, and RTEC held only a 1% share on revenues of just $30.6M.

Chart 4 presents market shares for the Thin Film Metrology segment, which is about $1B. In this case, RTEC has a share in the noise level, but NANO held a 27.9% share in 2018. Here, NANO had a positive revenue increase of 28.1% YoY in 2018, based on sales of equipment to China for 3D NAND production. But in this segment, KLAC is a dominant player with nearly a 50% share, which will make it difficult for NANO to gain more share in 2019.

Advanced Packaging Market

RTEC sells both inspection and lithography equipment for advanced packaging of semiconductor devices. I refer the reader to my June 24, 2019 SemiWiki article entitled "Lithography For Advanced Packaging Equipment." Chart 5 shows that although RTEC held a 28% share of the market in 2018, its revenues grew only 10.0% YoY, while competitors grew two to four times faster. This subsegment is less than $400 million.

Lithography For Advanced Packaging Equipment

I presented in the above SemiWiki article my forecast that the advanced packaging market, primarily WLP, will grow at a compound annual growth rate of only 6.8% between 2016 and 2022, yes 2022 which is important as I’ll explain later. In the article, I also noted:

“The top three companies – Canon, Veeco, and EV Group held a 70% share of the market, and if we include SUSS, these companies held an 85% share of the market.”

RTEC is not included in the top four companies and NANO is not a participant in the market.

Flat Panel Display (FPD) Market

RTEC sells lithography equipment for flat panel displays. Unfortunately, the market for displays is dominated by Canon (CAJ) and Nikon (OTCPK:NINOY). Table 2 presents a forecast of FPD lithography equipment shipments. According to our report entitled “OLED and LCD Markets: Technology, Directions, and Market Analysis,” shipments in 2019 are forecast to drop 50% YoY and grow only 22% in 2020.

Lithography equipment for 10.5G panels (for TVs) will represent nearly 50% of shipments in 2019 and 2020, and RTEC does not make a system that large. Equipment for 8G panels (also for TVs) will be next, and RTEC doesn't make a system that large either. RTEC's focus is 6G for smartphones (a flat market) and microdisplays; the microdisplay segment, although a nascent industry, is led by EV Group with its nanoimprint technology.

Takeaway

According to the NANO-RTEC press release:

“Each company currently has a semiconductor industry SAM of at least $1B, with additional SAM expansion opportunities of $400M to $500M per company. The combination is expected to expand the companies’ served market opportunity to approximately $3B.”

Thus, the press release spins the SAM to $3B, but no time frame is given. I expect little or no SAM growth in the Inspection/Metrology market for NANO-RTEC. For advanced packaging inspection and lithography (as discussed in my above SA article), a CAGR of 6.8% between 2016 and 2022 implies the advanced packaging market will only increase 40% between 2019 and 2022, significantly less than the 300% indicated for the SAM to increase from $1B to $3B.

In my opinion, NANO made a mistake in this merger. It has a larger market share than RTEC in Metrology/Inspection. NANO would be better served by acquiring or investing in metrology/inspection startups, thereby entering different segments of the market and thereby increasing its own SAM. In a December 31, 2018 SemiWiki article I noted:

“There are several startups gearing to compete against market leader KLAC. FemtoMetrix (Irvine, CA), uses Optical Second Harmonic Generation (SHG), a non-destructive, contactless, optical characterization method to characterize surfaces, interfaces, thin-films, as well as bulk properties of materials. Already, FemtoMetrix has completed its first round of equity financing in a deal led by Samsung’s Venture Division and SK Hynix Ventures, and announced a license agreement with Boeing. This type of new technology will eventually compete against KLA-Tencor.”

Semiconductor Metrology Inspection Outpacing Overall Equipment Market in 2018

RTEC has invested in a lot of applications, but in my opinion it is drilling 12 one-inch holes rather than one 12-inch hole, and as a result it has minimal market share in markets led by large companies that are already firmly entrenched.

RTEC has a smaller market share than NANO in their core metrology/inspection market. So, another surprise is why the RTEC CEO will become the NANO-RTEC CEO.


Automotive Market Pushing Test Tool Capabilities
by Tom Simon on 07-09-2019 at 8:00 am

It’s easy to imagine that the main impetus for automotive electronics safety standards like ISO 26262 is the emergence of autonomous driving technology. However, even cars that do not offer this capability rely heavily on electronics for many critical systems. These include engine control, braking, crash sensors, and stability and traction control. A failure of any of these systems can endanger human life or safety. As a result, there is a focus on implementing ISO 26262 across the board in products aimed at the automotive market. At the same time there is increasing complexity in many chips coupled with utilization of more advanced nodes, both of which have major implications for safety.

Meeting the requirements of ISO 26262 includes activity at every stage of chip development, starting during specification and going right through to design, manufacture, test and out into the field. Test in particular plays a significant role in implementing ISO 26262. After all, failure detection is essential. I recently had the opportunity to read an interesting white paper by Mentor on how their Tessent test tools can be helpful in meeting ISO 26262 requirements. The paper talks about three areas where test has evolved to meet the needs of ISO 26262 in the automotive market. They are on-line testing during chip and system operation, new types of testing to comprehensively detect a wider range of failures and making analog circuits testable.

Beyond just testing chips at the time of manufacture, or perhaps at power-on, ISO 26262 calls for continuous and periodic testing. Fortunately, many pieces of the manufacturing test elements embedded in the chip can be used for testing during operation. Mentor's Tessent MissionMode architecture provides test access to individual IP and memory elements during chip operation. Tessent MissionMode in turn can be driven by previously stored test vectors or externally through the system bus. IEEE 1687 IJTAG is used to wrap IP blocks for test and can be accessed by Tessent MissionMode. However, Tessent MissionMode can also access any other type of test interface, as needed.

One intriguing capability of Tessent MissionMode is non-destructive memory BIST. Testing is done in short bursts, accessing small ranges of memory. This is non-destructive because the memory contents are saved and restored, making the test operation transparent to the running system. Because of the low overhead of each individual burst, this testing can cover the entire memory piecewise without contention with system operation.
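Conceptually, a single burst might look like the sketch below. This is only an illustration of the save/test/restore idea, not the Tessent MissionMode implementation; the window size, patterns and memory model are invented for this example:

#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Conceptual sketch of one non-destructive BIST burst: save a small window of
// memory, run a simple write/read-back pattern test on it, then restore the
// original contents so the running system never sees the change.
// Assumes start + len <= mem.size().
bool bist_burst(std::vector<uint32_t>& mem, std::size_t start, std::size_t len) {
    std::vector<uint32_t> saved(mem.begin() + start, mem.begin() + start + len);

    bool pass = true;
    for (uint32_t pattern : {0xAAAAAAAAu, 0x55555555u}) {       // alternating-bit patterns
        for (std::size_t i = start; i < start + len; ++i) mem[i] = pattern;
        for (std::size_t i = start; i < start + len; ++i)
            if (mem[i] != pattern) pass = false;                // a stuck or coupled bit was caught
    }

    std::copy(saved.begin(), saved.end(), mem.begin() + start); // restore original contents
    return pass;
}

// Covering the whole memory piecewise, e.g. in idle slots:
//   for (std::size_t a = 0; a + 64 <= mem.size(); a += 64) bist_burst(mem, a, 64);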

Mentor also has some innovations in Logic BIST, where they apply a hybrid solution that utilizes ATPG and compression in Logic BIST, which can be used during power-on, power-off or for on-line testing. The benefits of this hybrid approach include area savings, because the scan chains are utilized for tester runs and self-testing. An issue with BIST during functional operation is increased power consumption due to high toggle rates. Tessent offers the ability to scale toggle rates to limit transitions, trading this off with some increase in pattern count.

The Mentor paper on Tessent also discusses how test methodologies have to move beyond ‘netlist’ level faults and add checking for cell level issues. The paper says that by some estimates cell level faults account for almost half of all circuit level defects. Traditional fault models are not targeting these faults. This means that many of these defects are found only by chance. Tessent now includes a Cell-aware methodology that digs deeper into the circuits to look for these harder to find defects.

Despite the well-established methodologies for digital fault analysis, analog designs have gone without until now. The Tessent DefectSim fault simulator is the first commercial solution for analog designs. A key enabler is the development of analog simulation tools that run orders of magnitude faster than before. It is now possible to efficiently look at parametric variations to help find defects. The paper describes some interesting techniques that are being applied to make analog test more effective. A majority of field failures in mixed-signal chips occur due to issues with the analog portion. Automated analog test generation will be a tremendous help in securing ISO 26262 compliance in mixed-signal chips.

With every innovation in chip design, there comes an added challenge for test. The added requirements for meeting ISO 26262 add even more difficulty in the test domain. However, this Mentor white paper on how Tessent features can be applied shows that with innovation the test challenges for automotive systems can be more easily addressed. It also points out once again how the demands of developing automotive electronics are providing motivation for many of the current improvements in electronic design. The full white paper, entitled "MEETING ISO 26262 REQUIREMENTS USING TESSENT IC TEST SOLUTIONS", makes good reading and is easily available from the Mentor website.


Early IP Block Error Detection is Critical!
by Daniel Nenni on 07-08-2019 at 10:00 am

The rising complexity of modern SoC designs, as enabled by advancing manufacturing technology, leads to an increasing validation challenge, as the only way to manage the increase in complexity is to re-use more pre-designed IP blocks. These IP blocks are provided by various suppliers such as a foundry partner, internal design groups, open-source IP and third-party IP companies. This is driving the trend of increasing IP qualification costs, which is part of an exponential growth of total SoC design cost.

Cost in terms of IP qualification and design verification is mostly found in design time spent in verification runs and resolving the issues found. A well-known design principle is that the closer an error is detected to its point of creation, the less costly fixing it becomes. That is why design teams deploy various techniques before accepting an IP block into their SoC design flow. Any issue found during IP qualification or incoming inspection can be directly fixed within the IP by the IP supplier. Failing early detection, design teams are faced with the tedious path of tracing an issue in the final design back to a single IP block, after which, of course, the IP still needs to be repaired, re-released and re-integrated into the final design. IP qualification methods have evolved from self-certified questionnaires, through home-grown IP qualification scripting, into an industry-standard IP qualification solution known as Fractal Crossfire.

An IP qualification solution will certify that an IP release is complete, has internal consistency and will exhibit predictable trends within and over characterization corners. All these are must-have properties for an IP block before it can be included in any SoC design. We argue that for a design team that is approaching tape-out, having the IP qualified is necessary, but not enough. In the scenario where a design team is receiving regular incremental revisions of an IP block, it is also essential that these revisions gradually converge to a steady state where changes to the IP block are limited to only the bare minimum. If in a late stage of the design an IP block is shipped that has a large delta with respect to the previous release, this can pose a huge risk to the final design schedule, even when the IP is successfully qualified.

Introducing IPdelta

The arrival of a new IP release close to tape-out is the high-risk scenario where IPdelta is indispensable. Anything that has ended up in this new IP release is a danger to the verification level already achieved. IPdelta will analyze the delta between the two IP revisions. Regardless of object ordering within files or databases, it will compare contents of all individual databases, models and formats and categorize the differences it finds.
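Fractal does not publish IPdelta's internals, but the basic idea of an order-insensitive content comparison can be sketched at the file level as follows; the program structure and categories here are invented for illustration, while a real tool is view- and format-aware:

#include <filesystem>
#include <fstream>
#include <iostream>
#include <map>
#include <set>
#include <string>

namespace fs = std::filesystem;

// Read a file's lines into a multiset so that ordering inside the file is ignored.
static std::multiset<std::string> load_lines(const fs::path& p) {
    std::multiset<std::string> lines;
    std::ifstream in(p);
    for (std::string line; std::getline(in, line); ) lines.insert(line);
    return lines;
}

// Map every file in a release to its order-insensitive contents, keyed by relative path.
static std::map<fs::path, std::multiset<std::string>> snapshot(const fs::path& root) {
    std::map<fs::path, std::multiset<std::string>> snap;
    for (const auto& e : fs::recursive_directory_iterator(root))
        if (e.is_regular_file()) snap[fs::relative(e.path(), root)] = load_lines(e.path());
    return snap;
}

int main(int argc, char** argv) {
    if (argc != 3) { std::cerr << "usage: ipdiff <old_release> <new_release>\n"; return 1; }
    auto oldrel = snapshot(argv[1]);
    auto newrel = snapshot(argv[2]);

    for (const auto& [path, contents] : newrel) {
        auto it = oldrel.find(path);
        if (it == oldrel.end())           std::cout << "ADDED     " << path << '\n';
        else if (it->second != contents)  std::cout << "MODIFIED  " << path << '\n';
    }
    for (const auto& kv : oldrel)
        if (!newrel.count(kv.first))      std::cout << "REMOVED   " << kv.first << '\n';
    return 0;
}

Running a sketch like this on two release trees prints one line per added, removed or modified file; an IPdelta-class tool goes much further, correlating the different views and helping the user zoom in on changes that actually threaten timing and power closure.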

Certain changes are expected, and should therefore be complete in their implementation throughout the different views. Some, or many, changes can be unexpected, indicating that the IP release contains updates that are not necessarily needed for the current design and require further investigation. As the number of changes can be huge, IPdelta offers a sophisticated delta-browsing interface that allows users to quickly zoom in on changes that pose a real threat to timing and power closure.

By comparing IP releases, IPdelta provides the confidence that only the expected updates are present in the new IP release so that insertion in the final design is safe.

Conclusion
It can be concluded that an IP qualification sign-off is a necessary but not a sufficient condition for safe insertion of a new IP release into an existing SoC design. No matter how thorough an IP qualification tool is, it is only made to judge an IP release in isolation. When receiving a new IP release, designers need to ensure that the delta between versions is kept to the bare minimum necessary.


Texas Instruments and the TTL Wars
by John East on 07-08-2019 at 8:00 am

The “20 Questions with John East” series continues

Most people in the IC business understand very well that TTL products dominated our industry for 30 years or so.  They'll also probably know that TI was the king of TTL.  But, if you ask those people what TTL is, most won't have any idea.  If you're one of those people, rest easy.  You're about to find out.

There were three IC companies that really mattered in the early days.  Fairchild, Motorola, and Texas Instruments (TI).  There were a handful of Wannabe's as well — National, Signetics, Amelco, Siliconix to name a few — but Fairch, Mot, and TI were the big guys.  Intel had not yet arrived on the big scene.  The first standard logic family was Fairchild's RTL — Resistor-Transistor Logic.  (Inputs came in through Resistors.  Outputs went out through Transistors.  They were Logic devices.)  It was built using bipolar transistors, as was almost everything in those days.  Micrologic was the name they gave the family.  Sadly, RTL was brain dead.  Fairchild had a great head start.  The Noyce patent really worked.  The Kilby patent didn't.  But the RTL circuit that Fairchild chose had big problems — namely fan in, fan out, noise margin, and speed.  Sadly, those pretty much captured everything that mattered in those days.  So Fairchild switched to DTL — Diode-Transistor Logic.  (The inputs came in through Diodes.  The outputs went out through Transistors.  It was Logic.)  DTL had been invented by others earlier.  IBM had used versions of it in their 360 series of mainframe computers.  It was a much better design than RTL.  It solved all of the RTL problems except speed.  Fairchild introduced their DTL family in 1964.

TECHNO-BABBLE ALERT.  If you’re not a circuit geek, skip the next paragraph!!

DTL gates included back-to-back diodes.  The cathode of one diode (called the input diode) went to the input pad and the cathode of the other diode (called the level setting diode) went to the base of a transistor that we called the phase splitter.  The diode anodes were common.  Why not put those two diodes in the same isolation region?  Instead of two diodes, use a single bipolar NPN transistor!  The input diode would be the emitter-base junction of the transistor.  The level setting diode would be the collector-base junction of the transistor.  (For a two input gate, the transistor would have two emitters.  For a four input gate, it would have 4 emitters.  That schematic diagram looks a little weird, but it's no problem to make.)  Why was that better?  When the input pad went low, the transistor turned on and yanked the charge off of the base of the phase splitter.  That shortened the time required to turn off the phase splitter, which was the number one problem facing the engineers who were trying to speed up the circuits.  As with most inventions, it isn't always completely clear who deserves the credit.  A real, working IC version of a TTL gate had a lot more to it than my simplified description above.  Tom Longo, working at Sylvania, was the first to put it all together into a commercially successful IC.

It was a big breakthrough!!  After working through a few "pet" names (Longo called it SUHL and James Buie of TRW called his version TCTL), it ended up being called TTL (Transistor-Transistor Logic).  Whatever you called it, TTL was better!  It was faster than DTL with similar costs.  Longo and a few others may have invented TTL, but it was Texas Instruments who made hay with it.  TI introduced their 5400 TTL family in 1964 — the same year Fairchild announced their DTL family.  But TTL was better!  TI soon put their TTL products in an inexpensive plastic package which they had developed.  The plastic package gave them lower costs than anyone else.  Early on TI sold TTL gates for $1.00 (actually twenty-five cents per gate for the quad two-input NAND gate; there were four gates in each package).  That was an outrageously low price for the times.  They stole a march on everyone.  TTL took over the world and TI became the king of that world.

Why didn’t Longo et al get the credit they deserved?  TI took such a huge, early lead in TTL that everybody thought it was a TI invention.  It wasn’t.  TI recognized a good thing when they saw it and they jumped on it.  They pounded the market with it.  There’s a Harvard case study wrapped up in this somewhere.

And Tom Longo?  He later moved to Transitron and then to Fairchild.  He was a very smart guy and a butt kicker extraordinaire!  I couldn’t decide if that was a good thing or a bad thing.  I worked several levels below him at Fairchild so I didn’t interface with him on a daily basis.  The butt-kicking was sort of fun to watch  — unless it was your butt being kicked.  (Of course,  now and then I took my turn.) Speaking of butt-kicking, though, Tom never liked the fact that TI was kicking our butts with the product that he had invented.

Fairchild, seeing the success that TTL was experiencing, later tried to get into the TTL market — at first by introducing a proprietary family of TTL products (the 9000 family) instead of by second-sourcing the 5400 family.  But there was a problem: we didn't really understand collector-emitter leakage (Iceo).  One thing we knew about Iceo, though, was we had plenty of it!  Iceo was our dominant yield problem.  This was exacerbated by gold doping.  The big problem with respect to speed those days was turning off bipolar transistors.  Turning them on was easy, but to turn them off you had to wait for all the minority carriers to exit the base region.  That took forever.  Somebody had figured out earlier that a few gold atoms in the base region would help the minority carrier lifetime problem.  We called that gold doping.  You had to gold dope in order to make the speed requirements.  The problem?  Gold made the Iceo problem worse.  When you gold doped, leakage went up, yields went down, and costs went up.

So — circa 1969 — our yields were bad and costs were high.  The solution?  Some of the Motorola guys brought in by Hogan in turn imported what they thought was the Motorola process (sadly, it wasn't quite) and installed it in one of the Mountain View fabs.  The yield was pretty good right out of the chute so we went into production.  We didn't do HTRB (High Temperature Reverse Bias, a simplified form of Life Testing).  There were no formal qual requirements at Fairch in those days.  We just went straight into production.  Big mistake!!!!!  We had screwed up!!!  There was an Op Life problem that we would have caught if only we'd done HTRB.  Several months of our early production wafers were unreliable!  After we figured out that we had a problem, we burned in some of the units we had made but threw out the rest.  It wasn't any fun!!  Heads rolled.  The red queen was still lurking.

Off with their heads was still operative.

Tom Longo went on to become founder and CEO of Performance Semiconductor.

Next week:  The curse of P&L management

See the entire John East series HERE.


Banks and ATMs Under Cyber Attack
by Matthew Rosenquist on 07-06-2019 at 5:00 am

The Silence hacking crew, mostly attributed to a group of very crafty Russian hackers, has struck again, pulling in over $3 million in cash from ATMs.

At least 3 banks have been attacked in the latest campaign, with Dutch Bangla Bank being the largest. The criminal hackers first compromised the bank’s card management infrastructure then undermined the integrity of the approval systems allowing co-conspirators to use bank ATMs to withdraw large sums of cash totaling over $3 million.

This hacking crew originated in Eastern Europe around 2016 and first started attacking financial institutions in Russia, Ukraine, and Poland before expanding into the Asia Pacific region. Banks in India, Bangladesh, and Sri Lanka are the most recent targets.

Silence is considered a top tier cyber-criminal group. Their methods are technically sophisticated and they operate with a good degree of patience. There is speculation they may have deep experience in penetration, reverse engineering, application development, and even security practices.

With custom software, effective hacking skills, and a network of money mules, this band is able to victimize banks at incredible levels. Their success in stealing large sums of cash will only promote more attacks.

Customers cannot stop such attacks. Because Silence targets the banking and ATM infrastructures, there is little the everyday user can do to protect themselves other than bank with a reputable institution that will actively work to prevent, detect, and respond to such attacks.

From a risk perspective, Silence ranks high among hacking groups. A new breed of highly capable and technically savvy threat groups is emerging. Some are direct appendages of, or supported by, nation states, while others are smart organized criminals looking to leverage the opportunities in the digital world for their own profit. Regardless, these groups are at the forefront of developing stealthy and effective exploits, driving malware capability evolution, and pulling off some of the biggest heists against the financial community. Success allows for reinvestment in tools, capabilities, and reach. This makes them stronger and makes it more difficult for cross-border authorities to track them and take down those responsible.

This is not the last we have heard from Silence or others of their ilk. I predict that by the end of 2019 we will see a number of significant breaches at APAC banks, which will stir great concern across the global financial landscape, affecting both traditional banking and emerging cryptocurrency exchanges.

The cybersecurity firm Group-IB has been tracking the activities of the Silence group for several years and has a good write-up of their profile and evolving techniques.


Chapter 1 – Predicting Trends in the Semiconductor Industry
by Wally Rhines on 07-05-2019 at 6:00 am

Figure 1 is the most basic of all the predictable parameters of the semiconductor industry, even more so than Moore's Law.  It is the learning curve for the transistor.  Since 1954, the revenue per transistor (and presumably the cost per transistor, if we had the data from the manufacturers) has followed a highly predictable learning curve.  Before Moore's Law, the learning curve provided a guiding light for the semiconductor industry.  Texas Instruments used it for strategic advantage and shared its data with Boston Consulting Group, which published a book called "Perspectives on Experience" [1].  In the days of germanium and silicon discrete transistors, companies like TI could use the learning curve, for example, to predict what the unit cost would be after 100,000 units were produced, based upon the actual cost per unit of the first 1,000 units produced.  They could then price the particular transistor product at a loss initially to gain leading market share and therefore achieve higher profitability and market influence when they reached future high unit volume sales.  TI didn't create the technology of learning curves.  It was developed in 1885 [2] and has been used in industries like aviation, even before the transistor was invented, to predict the future cost per airplane when a certain cumulative unit volume was achieved.  TI's unique approach for semiconductors lay in the use of the learning curve to drive a pricing strategy early in the life of a new component.

Figure 1.  Learning curve for the transistor from 1954 to 2019

Figure 2 shows how the learning curve works.  The vertical axis is the logarithm of the cost per unit of anything that is produced.  The product can be a good or service; anything that benefits from the experience of doing the same thing, or making the same product, again and again.  Published learning curves typically use the revenue per unit because companies are unwilling to divulge their cost data.  The companies, however, know their costs and, over the history of the semiconductor industry, have used that data to strategically position themselves against competition.  The horizontal axis of the learning curve is the logarithm of the cumulative number of units of a product or service that have been produced throughout history. When the data is plotted, it results in a straight line with a downward slope.  Cost per unit decreases monotonically as we develop more experience, or “learning”.  Since the learning curve is a “log/log” graph, the data generates a line rapidly initially as the small number of cumulative units doubles in a short period of time.  As time goes on, movement of the straight line to the right slows since it takes longer to double the total cumulative number of units.  Every time the cumulative number of units produced doubles, the line reflects a decrease in the cost per unit by a fixed percentage.  The percentage is different for different products but tends to be similar across a broad range of products in an industry like semiconductors.

Figure 2.  The learning curve is a log/log plot of cost per unit vs cumulative units manufactured
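In formula form (a standard experience-curve relation stated here as a sketch, not a fit to the actual data in Figure 1): if every doubling of cumulative volume N scales the unit cost by a fixed factor f, then

C(N) = C(N_0) \cdot (N / N_0)^{\log_2 f}

so the curve plots as a straight line with slope \log_2 f on log/log axes. As a purely hypothetical worked example, take f = 0.75 (unit cost falls to 75% at each doubling): going from 1,000 to 100,000 cumulative units is \log_2(100) \approx 6.6 doublings, so the unit cost falls to roughly 0.75^{6.6} \approx 0.15 of its value at 1,000 units. That is the kind of projection TI could make before the volume existed.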

More broadly, learning curves can be applied to any good or service where the cost per unit of production can be measured. We are just not as aware of the phenomenon today because the measurement applies only when cost is measured in constant currency.  A deflator must therefore be applied to the cost numbers to remove the portion of the price change that is caused by governmentally driven inflation.  In addition, the learning curve only applies in free markets. Tariffs, trade barriers, taxes and other costs must be removed before actual cost comparisons can be made.  The reason that learning curves have been so valuable in the semiconductor industry is that it is one of the few industries that has operated for over sixty years in a relatively free worldwide market, with minimal regulation and tariffs as well as a very low cost of freight between regions.

One of the great things about semiconductor learning curves is that they will be applicable as long as transistors, or equivalent switches, are produced. While Moore’s Law is quickly becoming obsolete, the learning curve will never be.  What will happen, however, is that the cumulative number of transistors produced will stop moving so quickly to the right on the logarithmic scale.  Then the prices will not decrease as rapidly as they have in the past. The visible effect of improved learning will diminish.  At some point, monetary inflation will be larger than the manufacturing cost reduction and transistor unit prices may actually increase with time in absolute dollars even though they are decreasing in constant currency. In the meantime, the learning curve is a useful guidepost for predicting the future. Currently, in 2019, the revenue per transistor is decreasing about 32% per year.

Those who purchase microprocessor or "system on chip" (SoC) components may recognize that, in 2017, the price per transistor was decreasing at a slower rate than 32% per year. Figure 3 explains this. The 32% number applies to the total of all semiconductor components produced in 2017. However, the overall cost per transistor is an aggregate across different kinds of semiconductor components — memory, logic, analog, etc.  It becomes apparent from Figure 3 that the semiconductor industry is producing far more transistors in discrete memory components, particularly NAND FLASH nonvolatile memories, than in other types of semiconductors.  When the memory learning curve (consisting mostly of NAND FLASH and DRAM) is separated from the non-memory learning curve, it is evident that cost per transistor and cumulative unit volume for memory are way ahead of non-memory.  That's okay because the learning curve doesn't specify how the decreasing cost per transistor is achieved – only that it will happen as a function of cumulative transistors produced.

Figure 3.  Cumulative unit volume of transistors used in memory components is increasing much faster than unit volume of transistors in other types of chips.

Another aspect of interest in Figure 3 is the set of data points near the end of the curve that were generated by data from 2017 and 2018.  The data points are above the learning curve trend line. How can this happen if the learning curve is a true law of nature?  Very simply, the period from 2016 through 2018 was one of memory shortages, particularly DRAM.  Prices per transistor increased instead of decreasing because market demand exceeded supply.  Won't this cause a long-term deviation from the learning curve?  No.  Whenever a market supply/demand imbalance occurs, the cost per transistor moves above or below the long-term trend line of the learning curve.  This is always a temporary move.  When supply and demand come back in balance, the cost per transistor will move to the other side of the learning curve.  Area generated above the learning curve will normally be compensated by a nearly equal area below the learning curve and vice versa.  This is another useful benefit of the learning curve because it allows us to predict the general trend of future prices even when short term market forces cause a perturbation.
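To illustrate how such a trend line supports projection, here is a minimal sketch that fits a straight line in log/log space by least squares and extrapolates it; the data points are made up and are not the industry data behind the figures in this chapter:

#include <cmath>
#include <cstdio>
#include <vector>

struct Point { double cum_units; double cost_per_unit; };   // hypothetical observations

// Fit log(cost) = a + b*log(cum_units) by ordinary least squares.
static void fit_loglog(const std::vector<Point>& pts, double& a, double& b) {
    double sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (const auto& p : pts) {
        double x = std::log(p.cum_units), y = std::log(p.cost_per_unit);
        sx += x; sy += y; sxx += x * x; sxy += x * y;
    }
    const double n = static_cast<double>(pts.size());
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx);   // slope of the log/log trend line
    a = (sy - b * sx) / n;
}

int main() {
    // Made-up data: cost per unit observed at increasing cumulative volume.
    std::vector<Point> pts = {{1e3, 1.00}, {1e4, 0.45}, {1e5, 0.21}, {1e6, 0.11}, {1e7, 0.06}};
    double a = 0, b = 0;
    fit_loglog(pts, a, b);
    double projected = std::exp(a + b * std::log(1e9));     // extrapolate to 1e9 cumulative units
    std::printf("cost multiplier per doubling: %.2f, projected cost at 1e9 units: %.4f\n",
                std::pow(2.0, b), projected);
    return 0;
}

Fitting in log/log space is what makes the relation linear, matching the straight line of Figure 2, and short-term excursions above or below the line show up as residuals around the fitted trend rather than redirecting the long-term projection.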

While I’ve focused on transistors in this discussion of learning curves, it should be noted that we could just as easily use electrical “switches” as our unit of measure.  The same learning curve would then work for mechanical switches, vacuum tubes and transistors as seen in Figure 5 of Chapter 3.  This figure also shows another attribute of the learning curve.  In this case, the metric on the vertical axis is revenue per MIP (or millions of computer instructions per second) for various types of electrical switches.  Learning curves can be used to predict improvements in performance, reliability (in FITS), power dissipation and many other parameters that benefit from the cumulative unit volume of production experience.

Learning curves also provide a useful tool for predicting “tipping points” for new technology adoption.  A good example is the introduction of “compression technology” in the semiconductor test industry in 2001.  In hindsight, a major innovation like this was inevitable just by examining the learning curve for the cost of testing a transistor in an integrated circuit (Figure 4). The ATE cost learning curve was not parallel to the silicon transistor learning curve and had a less steep slope.  Industry ATE cost was not decreasing fast enough.

The ATE industry should have seen that change was inevitable.  Pat Gelsinger, in his Design Automation Conference keynote address in 1999, highlighted his prediction that "in the future, it may cost more to test a transistor than to manufacture it".  That prediction might well have come true had it not been for compression technology (also called "embedded deterministic test"), which started out in 2001 with a 10X improvement in the number of "test vectors" required to achieve the same level of test and then progressed to nearly 1000X by 2018 [3].

Figure 4.  Until 2001, the revenue per transistor tested by the automated test equipment industry was decreasing at a slower rate than the revenue per transistor produced by their customers, the semiconductor component industry.

Introduction of “embedded deterministic test”, or test compression, in 2001 significantly reduced the number of testers required and, by 2012, reduced the revenue of the ATE industry by $25B per year.

 

[1] Boston Consulting Group, "Perspectives on Experience", Boston, MA, 1970.

[2] https://en.wikipedia.org/wiki/Learning_curve#In_machine_learning

[3] Rajski, J., Tyszer, J., Kassab, M. and Mukherjee, N., "Embedded Deterministic Test", IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, Vol. 23, Issue 5, May 2004.

Read the completed series


Where Have You Gone, Lee Iacocca
by Roger C. Lanctot on 07-04-2019 at 8:00 am

The automotive industry is a funny business. It is simultaneously ruled by ego-driven "visionaries" and penny-pinching bean counters. (Don't believe me? Just ask Bob Lutz.) This id-superego tension plays out in business section headlines every day, most recently with the death of FCA's Sergio Marchionne, who passed nearly a year ago, and today's news of the demise of legendary Chrysler Chairman Lee Iacocca.

At a time when two former CEOs, Martin Winterkorn of Volkswagen and Carlos Ghosn of Renault-Nissan, are facing criminal charges, it’s worth considering the absence of product-focused industry leadership from the C-level of most car companies. General Motors’ CEO Mary Barra’s brief pre-CEO flirtation with maverick behavior (“No more crappy cars!”) evaporated after her appointment and the resulting shift in emphasis to financial concerns.

Today's CEOs are universally focused on financial issues and meeting or battling regulatory requirements, while the actual engagement with consumer desires and the marketplace is left to the marketing and advertising team. It's no surprise, then, that Ford, FCA, and GM are all abandoning key passenger car segments now dominated by import marques. Bigger cars promise fatter margins and safer, bigger customer bull's-eyes.

Iacocca's passing closes the books on bold product-oriented statements emanating from the CEO's office. The closest the automotive industry comes today to a CEO-driven approach to the market resides entirely with Tesla Motors CEO Elon Musk, whose company coincidentally defied all skeptics by shipping a record number of Tesla EVs in its latest quarter, reported yesterday.

Still, Musk isn’t making cars for the masses just yet. Iacocca and Chrysler, in his day, were delivering a wide range of cars – including convertibles! – to a very wide audience indeed. At a time when the bean counters increasingly rule the industry it is refreshing to recall the Iacocca of old, circa 1984.

Listening to Iacocca is a nearly incomprehensible blast from the past. Take a listen to his turn as a Chrysler spokesperson in a 1984 television advertisement: https://tinyurl.com/yxaqsssy – Youtube

Here is the transcript – no surprise he was considered a potential presidential candidate at the time.

“A lot of people think America can’t cut the mustard anymore, that quality counts for nothing, and hard work for even less, and commitment, that went out with the hula hoop.

“Well, when you’ve been kicked in the head like we have, you’ll learn pretty quick to put first things first. And in the car business product comes first, and product is what brought us back to prosperity; high mileage, front wheel drive, quality products.

“By the way, with the best safety recall record over the last two full model years of any American cars. Convertibles, they said, nobody wanted, but everybody copied. Sports cars and luxury cars and turbo, so powerful, so efficient, you’ll never go back to V8 again. And a wagon, so versatile, so right for America today, we can’t build enough of them. Not bad for a company that had one foot in the grave.

“Today every man and woman at Chrysler has a commitment, to build cars that will take on the best.

“We will build two sedans this fall, LeBaron GTS and Dodge Lancer that will challenge BMW, Audi, even Mercedes, for thousands less. And next year we will build a small car right here in America with quality that we’re determined will beat the Japanese at their own game and we will build the best backed American cars with five-year or 50,000 mile protection.

“Quality, hard work, commitment, the stuff America was made of. Without them there is no future. I have one and only one ambition for Chrysler, to be the best. What else is there?”

Two car companies do stand out for the innovative products already on the road or in their development pipelines: Volkswagen and Renault. It is notable that Volkswagen is talking of an alliance with Ford Motor Company and FCA has talked of aligning with Renault. Both conversations could open up the North American market to a stream of innovative and unique small cars – including electrified models – that will lead to further erosion of Detroit dominance of the U.S. market.

We miss you, Lee. We miss your vision, your gumption and your tagline: “If you can find a better car, buy it!”