
ESD Alliance and IEEE CEDA Announce a New Recognition Program – the Phil Kaufman Hall of Fame

by Mike Gianfagna on 02-03-2021 at 10:00 am

Phil Kaufman Award Winners

Anyone even remotely associated with the EDA industry will know about the Phil Kaufman Award. Every industry has its ultimate recognition – the Academy Awards and the Grammys are familiar ones in pop culture. The Nobel Prize gets a bit geekier and the Morris Chang award from the GSA is geekier still. If you’re an EDA geek, the Phil Kaufman Award is the ultimate recognition. There have been 25 recipients of this prestigious honor since its inception in 1994, pictured above. There’s one problem though. Deceased members of the community are not eligible to receive the Phil Kaufman Award, a policy set by the IEEE. A recent decision has changed that. When I saw the ESD Alliance and IEEE CEDA announce a new recognition program – the Phil Kaufman Hall of Fame, I got very interested.

The new program posthumously recognizes individuals who made significant and noteworthy contributions through creativity, entrepreneurism and innovation to the electronic system design industry and were not recipients of the Phil Kaufman Award.  Some of the Kaufman Award recipients above are no longer with us. Each was a significant force in our industry; they are missed. I think it’s a great idea to allow other deserving and high-impact contributors who are no longer with us to be recognized. The Phil Kaufman Hall of Fame allows this to occur.

“Many individuals made significant contributions to the semiconductor design industry and helped it grow to where it is today, underpinning the entire global semiconductor and electronic products markets,” said Bob Smith, executive director of the ESD Alliance. “Unfortunately, many of these contributors died but should be recognized for their efforts that were instrumental in shaping our community. The Phil Kaufman Hall of Fame is intended to change that.”

Nominations for the Phil Kaufman Hall of Fame are now open. Submissions will be reviewed by the ESD Alliance and IEEE CEDA Kaufman Award review committees and approved nominees will be honored for their contributions and achievements in 2021. Nominations will remain open through Friday, March 26, 2021. You can learn more about the program and download the nomination form here. Anyone can submit a nomination and the form is relatively short, so think about deserving professionals who are no longer with us. This is a great way to keep their memory alive.

Inductees will be announced in early April. A special Phil Kaufman Hall of Fame page on the ESD Alliance and IEEE CEDA websites will host their photos, citations and tributes.

A little background on the Phil Kaufman Award and the organizations that support it would be useful. The Phil Kaufman Award honors individuals who have had a demonstrable impact on the field of electronic system design through technology innovations, education/mentoring, or business or industry leadership. It was established as a tribute to Phil Kaufman, the late industry pioneer who turned innovative technologies into commercial businesses that have benefited electronic designers.

After many years as a design engineer and manager at companies including Intel, Phil became chairman and president of Silicon Compiler Systems, an early provider of high-level EDA tools.

Subsequently, Phil became CEO of Quickturn Design Systems, a pioneer in emulation.  Phil passed away from a heart attack during a business trip in Japan in 1992. The ESD Alliance (previously the EDA Consortium) founded the Phil Kaufman Award to honor his memory and contributions to the electronic design industry.

The Electronic System Design (ESD) Alliance, a SEMI Technology Community representing members in the electronic system and semiconductor design ecosystem, is a community that addresses technical, marketing, economic and legislative issues affecting the entire industry. It acts as the central voice to communicate and promote the value of the semiconductor design ecosystem as a vital component of the global electronics industry.

The IEEE Council on Electronic Design Automation (CEDA) provides a focal point for EDA activities spread across seven IEEE societies (Antennas and Propagation, Circuits and Systems, Computer, Electron Devices, Electronics Packaging, Microwave Theory and Techniques, and Solid-State Circuits). The Council sponsors or co-sponsors over a dozen key EDA conferences including: the Design Automation Conference (DAC), Asia and South Pacific Design Automation Conference (ASP-DAC), International Conference on Computer-Aided Design (ICCAD), Design Automation and Test in Europe (DATE), and events at Embedded Systems Week (ESWEEK).  

The Council also publishes IEEE Transactions on Computer-Aided Design of Integrated Circuits & Systems (TCAD), IEEE Design & Test (D&T), and IEEE Embedded Systems Letters (ESL). The Council boasts a prestigious awards program in order to promote the recognition of leading EDA professionals, which includes the A. Richard Newton, Phil Kaufman, and Ernest S. Kuh Early Career Awards. The Council welcomes new volunteers and local chapters.

Download the nomination form and get involved as the ESD Alliance and IEEE CEDA announce a new recognition program – the Phil Kaufman Hall of Fame. You’ll be glad you did.


Qualcomm Takes the Wheel

by Roger C. Lanctot on 02-03-2021 at 6:00 am


Qualcomm took center stage in the automotive industry this week to state its intention to dominate future dashboard infotainment systems. Long known for its wireless connectivity presence, Qualcomm took the wraps off a ramping list of infotainment design wins for its third-generation Snapdragon platform while revealing its fourth-generation Snapdragon solution.

The range of announcements, which included multiple strategic collaborations with car makers and suppliers, highlighted Qualcomm’s successful convergence of connectivity, safety, and infotainment technology into a single device, a nod to the architectural transformation sweeping the automotive industry. Qualcomm is leaning into the cockpit domain controller movement, which is integrating functionality to enhance driving safety and pleasure.

Suitably, Qualcomm thrust General Motors to the forefront of its announcement. GM is the go-to partner to highlight infotainment innovation as the company provides the optimal combination of volume vehicle deliveries with innovation and risk taking in dashboard designs.

GM provided the added impetus this year of highlighting its own thrust into electrification – with the launch of the Ultium EV platform – and its aggressive moves into driving automation, also a highlight of Qualcomm’s announcements. Qualcomm’s partnership with GM had the added benefit of emphasizing the importance of China’s automobile market, the largest in the world. For years, GM has sold more cars in China than it has in the U.S. (GM is second only to Volkswagen among foreign auto makers.)

An important element of Qualcomm’s launch was the inclusion of its Snapdragon Ride Platform portfolio of safety-grade systems-on-chips (SoCs) designed for automotive safety integrity level D (ASIL-D) systems. These chips embody the essential integration of safety, connectivity, and artificial intelligence capabilities suited to fulfill requirements for New Car Assessment Program (NCAP) Level 1 advanced driver assistance systems (ADAS) and Level 2 automation systems. Seeing Machines, Arriver, and Valeo (Park4U) were all mentioned as Qualcomm strategic partners.

A further roster of essential partners mentioned as part of Qualcomm’s announcement were key tier collaborators including Garmin, Google, Harman International, Joynext Technology, LG Electronics, Panasonic, AlpsAlpine, Maxim Integrated, Micron, and many others.

Qualcomm’s multifaceted announcement marks a critical changing of the guard in the automotive industry and the conclusion of a decades-long battle to tackle the automotive infotainment market. Prior to Qualcomm’s late arrival with high profile design wins across the globe, the company watched from the sidelines as first Intel and then Nvidia sought to conquer the automotive infotainment opportunity.

Intel notched strategic wins with BMW, GM, FCA (Stellantis), Volvo and Tesla while Nvidia touted wins at Audi. Both companies ultimately shifted their focus almost exclusively toward advanced driver assistance and autonomous drive systems.

Qualcomm’s climb to the top was also complicated by its failed $44B acquisition of fellow automotive semiconductor supplier NXP – a deal initiated in October 2016 on the cusp of President Donald Trump’s election and finally abandoned in the face of Chinese objections more than a year later. The rejected acquisition may or may not have delayed Qualcomm’s rise, but its official arrival was completed this week.

(Similar roadblocks have emerged to SoftBank Group’s attempt to sell U.K. chip designer Arm to U.S. chipmaker Nvidia, according to Nikkei.com. The proposed $40B acquisition “is hitting regulatory roadblocks in major markets, as the blockbuster deal has raised antitrust and national security concerns among policymakers.”)

The onset of in-dash Snapdragon solutions marks the simultaneous rise of Qualcomm and China in the automotive market. It will be interesting to see Qualcomm’s performance in the automotive market both inside and outside China – given the company’s massive intellectual property portfolio.

As a major advocate of protecting intellectual property, Qualcomm stands at the fulcrum of the world’s largest automotive market as a rising force in automotive and wireless semiconductors, setting the stage for unique opportunities and outcomes. Given the substantial contributions to Qualcomm’s announcement from Chinese partners, it is clear the company is committed to a massive in-market presence.

Making Qualcomm’s announcements this week even more important is the potential impact on vehicle electrification, connectivity, and driving automation. Converging all of these experiences in the fourth generation Snapdragon platform sets the stage for entirely new driving experiences – in mass market vehicles – intended to save lives, fuel, and time while enhancing the overall in-vehicle experience.

Announcements and endorsements highlighted in Qualcomm’s Automotive Redefined Technology Showcase 2021 event are available here: https://www.qualcomm.com/news/media-center/press-kits/automotive-redefined-technology-showcase-2021


Do You Care About What You’re Measuring? Part 2: Cloud Data Centers

by Steve Logan on 02-02-2021 at 10:00 am


When I think about servers and data centers, I think about multiple-core/high-power CPUs, Intel’s domination over the years and GPUs coming on strong in recent years. I think about very fast digital interfaces, such as PCI Express connections and the latest DDR memory interface. Precision analog isn’t something that first comes to mind. But it’s there, with one of the larger cloud computing companies, if you look closely enough.

One of my favorite aspects of 20 years in the semiconductor industry is the job of new product definition. Finding a market need for a new analog mixed signal device, especially in this era with tens of thousands of integrated circuits, is definitely not easy. In this case study, our product definition started with a great relationship between our sales team and the customer. Based on the level of feedback we were able to get, the account manager and field apps engineer clearly earned the trust of the lead designer at this cloud computing customer.

In data centers, “metered power” has become one of the common ways of charging end customers for usage. Fixed costs for power allocation by the rack are common. In more recent years, a new “pay for use” pricing model has emerged. By pricing for allocated power and reserved cooling capacity, cloud computing companies and their end customers can reduce overbuilding and overpaying.

In our customer example, we learned this designer needed to accurately measure power so that the cloud computing company could accurately bill its end customers down to the milliseconds (or maybe even microseconds) of server power usage. As the business manager of the amplifier product line, I got pulled into the conversation. We initially discussed using a 1-milliohm shunt resistor, an existing current sense amplifier and measuring the amplifier output with a microcontroller’s ADC. The MCU also had an integrated multiplexer in front of the ADC. The designer’s original thought was to use our current sense amplifier through one mux channel and measure the voltage with another channel.

Measure current. Measure voltage. Calculate power in the digital domain. Simple, right?

But not so fast. After some reflection by my product definer, along with the customer, we thought about the use case of transient response. With those complex CPU functions, GPU functions, memory reads and writes and high-speed digital interfaces, the transients and load steps could be pretty nasty. With each ADC measurement and power calculation, they’d have to account for the delay through the multiplexer between the voltage and current readings. With a slow-moving or steady-state current load, they’d be fine. But with fast-moving transients, both sides came to the conclusion that a power multiplier/amplifier was needed. The power amplifier calculates instantaneous power by multiplying the load current (sensed across a shunt resistor) by a fraction of the voltage set by an external resistive divider, producing a true power output from the amplifier.
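To make the mux-delay issue concrete, here is a minimal numeric sketch. All waveforms, timing numbers and names here are invented for illustration, not taken from the actual design: sampling current and voltage sequentially through a shared mux misreads power whenever the load moves between the two samples, while an instantaneous multiply does not.

```python
# Hypothetical illustration: power error from sampling V and I through a
# shared mux with a delay between channels, around a fast load step.

def v_load(t_us):
    """Supply voltage (V): droops briefly after the load step at t = 50 us."""
    return 12.0 - (0.5 if 50 <= t_us < 60 else 0.0)

def i_load(t_us):
    """Load current (A): steps from 10 A to 40 A at t = 50 us."""
    return 40.0 if t_us >= 50 else 10.0

MUX_DELAY_US = 8  # assumed settle time between the current and voltage channels

def muxed_power(t_us):
    # Current sampled first, voltage sampled one mux settle later
    return i_load(t_us) * v_load(t_us + MUX_DELAY_US)

def true_power(t_us):
    # What an analog power multiplier produces: instantaneous V * I
    return i_load(t_us) * v_load(t_us)

# Steady state: the two methods agree
assert abs(muxed_power(0) - true_power(0)) < 1e-9

# Just before the step, the delayed voltage sample already sees the droop
# while the current sample does not, so the muxed estimate is off
t = 45
err = muxed_power(t) - true_power(t)
print(f"error at t={t} us: {err:.1f} W")  # 10 A * (-0.5 V) = -5.0 W
```

With a steady load the two estimates match exactly; the error only appears during the transient, which is precisely the regime this customer cared about.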

Another interesting product differentiation was providing a current out of the power amplifier. Nearly every other amplifier and current sense amplifier provides a voltage output. In this customer’s case, the power amplifier was located on the server board a long distance away from the MCU with the integrated ADC. Similar to the principle behind 4-20mA industrial current loops, the current output from the power amplifier eliminates any errors caused by voltage drops across the parasitic resistance of the PCB, which is often significant for high-current systems. With a simple resistor to ground at the mux/ADC input, the customer could convert the current to voltage and get accurate, fast readings representing instantaneous power.
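A back-of-the-envelope sketch of why the current output helps (the gains, resistor value and ground-shift figure below are all hypothetical): a ground-referenced voltage signal picks up any IR-induced ground offset between the amplifier and the far-away ADC, while a current signal converted to voltage locally at the ADC does not.

```python
# Hypothetical numbers: voltage-output vs. current-output signaling across a
# high-current PCB with a ground offset between amplifier and ADC.

P_TRUE_W = 300.0        # instantaneous power the amplifier represents
GAIN_V_PER_W = 0.01     # assumed voltage-output scaling: 10 mV per watt
GAIN_A_PER_W = 1e-5     # assumed current-output scaling: 10 uA per watt
R_CONV_OHM = 1000.0     # resistor to ground at the mux/ADC input
GROUND_SHIFT_V = 0.05   # 50 mV IR-drop offset between amp and ADC grounds

# Voltage-output amp: the ground offset adds directly to the reading
v_read = P_TRUE_W * GAIN_V_PER_W + GROUND_SHIFT_V
p_voltage_out = v_read / GAIN_V_PER_W  # reads 305 W instead of 300 W

# Current-output amp: the same current flows regardless of ground offsets
# and is converted to voltage locally at the ADC by R_CONV_OHM
v_at_adc = (P_TRUE_W * GAIN_A_PER_W) * R_CONV_OHM
p_current_out = v_at_adc / (GAIN_A_PER_W * R_CONV_OHM)  # reads 300 W

print(p_voltage_out, p_current_out)  # 305.0 300.0
```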

In the end, the data center company could accurately measure the power drawn and bill their customers accordingly. They designed in our power multiplier. And they clearly cared about what they were measuring.

Part I The Question That Has Guided My Analog Mixed Signal Career


Falsely Vilifying Cryptocurrency in the Name of Cybersecurity

by Matthew Rosenquist on 02-02-2021 at 6:00 am


I get frustrated by shortsighted perceptions, which are misleading and dangerous. It is far easier to vilify something people don’t fully understand.

Here is another article, titled Bitcoin is Aiding the Ransomware Industry, published by Coindesk, implying cryptocurrency is the cause of digital crime.

This is one of many such pieces littering the Internet. I am a cybersecurity expert and have spent over 30 years fighting theft, fraud, and misuse of computers. I find such articles to be shortsighted and lacking the strategic picture which must consider both the benefits and drawbacks of any technology.

Correlation is not causation.

One small negative aspect is not the whole picture and shouldn’t be used to undermine great innovation (such tactics also tried to stop the automobile, electricity, and the Internet).

Let’s not only focus on skewed statements but instead, think critically about how technology can be leveraged for the greater benefit.

If narrow viewpoints, like claiming that Bitcoin is aiding ransomware, are considered, then it would be equally logical to contemplate the following absurdities:

  • Email is aiding the Phishing Industry
  • Web browsers are aiding the Online Fraud Industry
  • Operating Systems are aiding the Malware Industry
  • Networks are aiding the Denial-of-Service Industry
  • Cash is aiding every Crime Industry
  • Communications are aiding the Disinformation Industry
  • Credit cards are aiding the Fraudulent Purchasing Industry

Is it justifiable to blame digital technology for every societal issue? Technology innovation can be greatly beneficial and should not be flatly vilified for a small negative contribution.

For the record, less than 1% of crypto transactions are related to crime. Percentage-wise, cash is used more often in criminal activities, as it is the preferred monetary instrument and store of value. If impacting crime is the motivator, then perhaps the conversations should be about eliminating cash. From a cybersecurity perspective, moving to a digital system has far more merits than moving away from cryptocurrency.

Fear of the Unknown

Fear or a lack of understanding is the key to this line of false-logic being perpetuated. Nobody is advocating the elimination of the Internet, computers, electricity, automobiles, or credit cards because society already understands their value. They are willing to accept the accompanying risks because they want the benefits.

Most of the people I encounter, who are quick to shun digital currency, often are unable to describe the long-term benefits to themselves or the global community. Only knowing the downsides makes for a highly biased position. History repeats itself. Years ago, people rallied to outlaw the emergence of automobiles, favoring a world reliant on animal power. They only saw the obnoxious sound, smell, and mechanical dangers but failed to see the benefits of a global transportation and logistics network that fueled massive economic growth, social liberties, preservation of freedoms, and expansion of personal independence.

We all must look beyond the hype.

Crime, fraud, and theft existed long before cryptocurrency, digital networks, and computers. Crypto and digital currencies also bring tremendous benefits, including the amplification of innovation, presently and especially in the future.

Tech is a tool and it can be used for good or bad. See both sides before making a judgment.


Trust, but verify. How to catch peanut butter engineering before it spreads into your system — Part 1: Validation.

by Raul Perez on 02-01-2021 at 10:00 am


I will address this topic with two blog posts: validation (i.e. post silicon) — Part 1, and verification (pre-silicon) — Part 2 (coming soon!). In this blog post, I will focus on validation.

One of the upsides of using catalog chips that have been in the market for a long time and have ramped in substantial volumes is that other system companies have already found a lot of the bugs. The chip supplier has had an opportunity to fix them, screen or calibrate defective parts with automated test equipment (ATE), withdraw the chip from the market, or at least warn new users about bugs with an errata. Your system may be different, and you may still get bitten by some bug that is exposed by your unique operational conditions. But generally, catalog parts that have ramped for some time in volume provide a certain herd immunity to the system companies that use them.

Unfortunately, catalog chips are also going to cost you significantly more than a custom silicon chip at larger volumes, the footprint will be significantly bigger considering all the components needed, and they leave system designers with a more inflexible set of options. But when you go the custom silicon route, YOU are the first and possibly the ONLY user of this chip. So how do you prevent silicon bugs from getting into your system?

First, let’s talk about the risks:

  • Peanut butter engineering at your chip supplier.

This refers to the reality that your chip supplier is in the business of making as many chips as they can in as short a period of time as they can with a fixed amount of resources. Strong engineering culture at your supplier is a mitigation in general. But given the commercial pressures chip companies are under they need to deliver revenue. And the pressure from management is generally to produce more with the same engineering resources; to spread the peanut butter so to speak over all the chips they are working on. So how does peanut butter engineering manifest itself in real life during validation?

Here are some ways:

  1. Very liberal (i.e. watered-down) interpretations of JEDEC standards.
  2. Over-leveraging previous chip data. Examples: qualifying by similarity (QBS) more than they should; using old chip data to reduce how much is validated in a new chip under the claim that it’s “the same circuit,” even though the layout is different and it is not exactly the same circuit or surrounded by the same circuits/noise; bench validation with too small a sample size; no corner-material ATE testing; etc.
  3. Un-root-caused ECOs: shotgun engineering and opening spec limits to get the silicon out the door without knowing why something is not behaving as worst-case simulations predict.
  • Automated test equipment (ATE) program changes over the lifetime of the product.

The J-STD-46 standard defines some of the reasons why a chip supplier must inform customers that a major change has been made, provided they have purchased components up to 2 years prior and have the applicable contractual obligations. Annex A of J-STD-46, under datasheet changes, lists “Elimination of final electrical measurement or burn-in (if specifically stated in the datasheet as being performed)” as an example of a major change that requires a PCN (product or process change notice) to be issued. However, most datasheets I’ve seen don’t explicitly state what is ATE tested and what isn’t. So the supplier, absent some other agreement with the system company, will not issue a PCN for a test program change. In my experience, suppliers remove tests over the lifetime of the chip without issuing a PCN in order to respond to market pressure to reduce the price of the product while trying to maintain profit margins. That is, they lower costs by removing tests based on “historical data” for that chip. You may get a lot that is very different from historical data, and that can cause a problem that goes undetected at ATE because the tests have been removed.

  • Supplier development teams in different business units (BUs) don’t necessarily have the same engineering methodologies and same rigor.

You may have a good relationship and good working knowledge of how one team at a chip supplier works, and with which you’re very satisfied. But when it comes to validation and verification, each team tends to do its own thing. Sometimes justifiably so depending on what products they make. But many times simply because the teams have different engineering cultures, and there may not be a truly unified methodology in practice for all of the supplier’s development teams. It often happens that smaller chip companies are acquired through time by larger ones, and they continue on doing things the way they have always done them. Which may have been great 10 or 20 years ago but may no longer be so great. So every time you start working with a new chip dev team at one of your known suppliers, you need to raise your guard and check as if this is a new supplier you’ve never worked with before.

All of these risks have mitigations, which are fairly straightforward to implement as long as the chip supplier is cooperative and the system company has the right specialists on its side.

The following are the main mitigations for the risks above:

  • Contracts. This is not legal advice, and I am not a lawyer, so please make sure to seek legal counsel to draft good contracts.

Require in your contracts with custom silicon suppliers the following:

  1. Establish your PCN requirements in your supplier agreement. Make sure to cover changes to the ATE program. This can get messy though since these are changed often by suppliers. They’d have to issue a new datasheet now showing what parameters are no longer covered by the datasheet assuming you implement point (2) below.
  2. Require the supplier to provide you a datasheet that describes how every parameter will be verified/guaranteed. Usually, the main ways to guarantee a parameter in a datasheet are by design/simulation, by bench evaluation (30 or more units), by corner-lot evaluation, by ATE, by qualification testing (JESD47, JESD22, and JESD17), or by a combination of the above. Notice that asking for the datasheet to show how each parameter is validated/guaranteed (including ATE) will require the supplier to issue a PCN whenever the test program is changed to reduce coverage, per J-STD-46.
  3. Require the supplier to provide you with the validation reports used to guarantee the parameters in the datasheet. This is important since you want to see that the data was actually collected, for how many units, and what the CPK and confidence interval for the data are. The supplier should provide you with detailed bench testing reports for all the blocks and interfaces, detailed 30-unit-or-more bench evaluation reports for all the parameters guaranteed in the datasheet by bench evaluation (and for whichever guaranteed-by-design parameters it is possible to characterize), and a detailed report showing corner samples, at a minimum, passing all parameters guaranteed by ATE in the datasheet.
  • Review all validation reports in detail.

Check the following in the reports:

  1. Look at the plots to check that the data points truly add up to the sample size the supplier says it used to calculate CPK, and also to check that they actually took the data for your chip and didn’t just re-use old data from a different chip.
  2. Check what tests are failing for corner parts. Do you care about the parameter that is failing for corner parts? Is it one of the corners that broke your system tests during the corner build? Chip suppliers may simply say not to worry, that corners don’t really happen in real life, or offer some other justification for doing nothing about the issue. They can do this because they have many other chips on their work list to worry about, so your chip’s corner fails are not their worry. But when the system engineer has the corner-sample test fixture results, he may realize that the corner parameter failing ATE at the supplier correlates with the tests showing issues at the system factory tester in the corner-build data. So while those corner units are being filtered out by ATE at the supplier, if later in the chip’s lifetime that ATE test is removed to cost down the chip testing, you will start seeing those corner units making it into system builds and showing up as DPPM issues. Bottom line: the chip supplier needs to provide these reports with data plots using the same parameter naming as the datasheet, so that the system engineers can check against their factory test fixture data and flag which ATE tests should never be removed by the chip supplier.
  3. Check the CPK for the bench validation data. Parameters in the datasheet that are guaranteed by bench validation or guaranteed by design are NOT ATE tested, which means whatever data was taken by the supplier to generate those reports is the data that will forever guarantee those parameters. Is the sample size big enough? How many lots of units did the supplier use for the bench evaluation? Do the data plots look bi-modal?
  4. Check the qual report to make sure the chip supplier is following the JEDEC standards previously mentioned.
  5. Complete a correlation between your system factory test fixture and the chip supplier’s ATE tests for system-critical parameters. I’ve worked on a lot of PMICs, and one classic test for this is end of charge: once the charger enters the final, voltage-mode phase of charging, the time to 100% is heavily influenced by the resistance in the system charge path, which is system specific. There are many different types of test correlations that could be applicable to your system.
  • ECO reviews.

When the chip is evaluated, some bugs may be found. It is critical that the system company has chip specialists helping to review the ECOs proposed by the chip supplier, to make sure that proper root causing is completed. Chip specialists that may be needed, depending on what types of bugs are found, include: DV and AMS verification engineers, analog chip designers, digital chip designers, RF chip designers, package engineers, foundry engineers, and others. Chip suppliers are sometimes under pressure to tape out quick fixes, or to simply hand-wave away issues and ask for spec limit changes. This is a very serious danger to your custom chip program, as you can end up in a run-break-fix cycle with multiple tape-outs due to bad or incomplete root causing, which will put your system schedule at risk. There are many tools available to debug chip issues, such as FIBs and other FA techniques. You must check that proper methods are being used to root cause your chip’s bugs, and not accept incomplete root causes for your ECOs. This is why it is vital that your system company has chip technical experts on your side, to ensure your project is not the one where the chip supplier spreads resources thin and you end up getting peanut butter engineering that adds risk to your system launch.

  • Request corner samples for the custom chip, and build some of your systems with them.

Not all suppliers, especially the ones that have their own internal fabs, will want to do this without some pushing from the system company side. But it’s really the only way to check whether your system will have issues when you ramp to higher volumes than your EVT and DVT builds. Usually you can build 100 samples of each corner and see what the CPK looks like for your factory tests with those units. On a mostly digital chip, using the slow NMOS/slow PMOS (SS), fast NMOS/slow PMOS (FS), slow NMOS/fast PMOS (SF), and fast NMOS/fast PMOS (FF) corners will give you good insight into whether you will have a problem. As you know, a CPK of 1 means you have a problem, so you want to see a CPK of 1.33 or better at your factory tests with corner samples. This check is part of the validation phase of the custom silicon dev process we run at customsilicon.com.
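For reference, CPK is straightforward to compute from factory-test data. Here is a minimal sketch; the parameter, spec limits and sample readings below are invented purely for illustration:

```python
import statistics

def cpk(samples, lsl, usl):
    """Process capability index: distance from the mean to the nearer spec
    limit, in units of three standard deviations."""
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    return min((usl - mu) / (3 * sigma), (mu - lsl) / (3 * sigma))

# Hypothetical factory-test readings (e.g. a rail voltage in volts) for
# ~100 systems built with nominal parts and with slow-slow (SS) corner parts.
nominal_build = [3.300 + 0.002 * ((i % 11) - 5) for i in range(100)]
ss_corner_build = [3.285 + 0.004 * ((i % 11) - 5) for i in range(100)]

LSL, USL = 3.25, 3.35  # assumed system spec limits for this parameter

print(f"nominal CPK:   {cpk(nominal_build, LSL, USL):.2f}")
print(f"SS corner CPK: {cpk(ss_corner_build, LSL, USL):.2f}")
# A corner-build CPK below ~1.33 flags a parameter that will surface as
# DPPM issues at volume if the supplier's screening ATE test is ever removed.
```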

Some suppliers will tell you that corner units will be filtered out at ATE to try to avoid having to provide the corner samples. But that is not a valid argument because many parameters in a datasheet are not ATE tested, and even the ones that are may not be tested in the future as the supplier starts to remove testing over the lifetime of the product. So you need to know if your system will be sensitive to process corners using system factory testers before you ramp to mass production. If you do all your builds with mostly nominal material you may not see an issue. But once you hit volume you will start seeing DPPM issues if your system is sensitive to some of the chip supplier process corners. It’s better to catch this early and either spin the chip to fix them, change OTP trim, change the ATE to fix the issue by calibration or filter them out at the chip supplier’s ATE so you never receive the parts. Parameters that you find are critical like this should be highlighted to the supplier as a “never remove ATE testing” parameter.

  • Trust, but verify.

Custom system silicon, when done with the assistance of silicon experts, puts the system company in control of its own destiny. It’s important to note that when purchasing catalog parts for your system, unless you perform due diligence similar to what is described above, you’re trusting but not verifying that your components will be of good quality and unlikely to cause yield or other issues when you ramp to high-volume production.

For more information contact us.


Pitching Without a Net. Look Ma, No Slides!

Pitching Without a Net. Look Ma, No Slides!
by Bernard Murphy on 02-01-2021 at 6:00 am

Book Cover min

It’s a given in the business world that whenever you need to communicate to a group you need a slide deck. Yet we vigorously agree that most pitches are miserably bad, for all the usual reasons. All about the presenter’s product, not audience needs. A firehose of technical detail designed to drown any possible objection. A script to hide behind so the presenter won’t forget any points they want to make, and a convenient shield against anything the audience might have to say. Take a risk – consider pitching without a net.

When slides are a bad idea

We all nod, recognizing well these sins in others. Then we race off to commit the very same mistakes in our own pitches. Evidently knowing what not to do is not enough. Sometimes we need to reset, to ask what our audience wants from this interaction. Sometimes they want data, and a pitch may be as good a way as any to deliver that information.  Sometimes slides aren’t the right answer.

When your audience wants a discussion about their needs, especially if you’ve not been delivering to those needs, then slides are like a red rag to a bull. Whatever spin you put on them, slides say “I know how to fix the mess I made. You should listen attentively because I’m going to explain.” The worst possible place to rebuild a damaged relationship is to walk through a tedious exposition of your solution to a problem that, based on the evidence, you clearly don’t understand.

Ditching the pitch

I have to admit that I’ve messed up – many times. I’ve had to face angry customers, rebuild confidence that we were still the right choice. Like most of us, I’ve always prepared very carefully, slide deck at the ready, knowing that in some way I would have to explain our under-performance and our suggestions for climbing back out of the hole.

The real challenge is in knowing how to use that information in the meeting. Marching through the slides would be suicidal, see above. A more rational approach would be to have a discussion, let your customer vent, figure out their most pressing concerns and show a slide or two that might be relevant. Or maybe show a scaled back version of the pitch, trimmed to address your now improved understanding. Sometimes that does the trick. But I have also tried a different approach which can work even better, if you have the stomach for it. I give my pitch without slides.

It’s easier than you think

Two points here. First, I imagine you recoiled in horror at the thought of losing your precious slides, your safety net. Without that net you’ll surely fail. But as we constantly remind our customers, we shouldn’t let fear of failure outweigh the upside. And I’m not suggesting you shouldn’t build slides – only that maybe you shouldn’t present them.

Second, this doesn’t take a superhuman feat of memory. When I pitch in this way, I remember the main flow and some features from my slides. But I haven’t memorized them, and I don’t recite a mental walk-through. Instead, I tell a story of how we’ve been working to meet this valued customer’s needs, weaving in key points I remember from my deck.

What’s the upside? Without slides, the audience can’t read ahead. They have to listen to you. They have no time to misinterpret what you’re about to say or trip you up on ambiguities. You’re looking at them, so you see body language. The format is inherently interactive. If someone has a question, you can deal with it quickly. That builds trust – you’re paying attention to their feedback, you’re tracking what they care about, not what you care about. At the end of one memorable talk I gave, my initially hostile audience were thanking me for a great discussion. That’s an outcome that might be worth the risk.

Want to know more? I tell that story in The Tell-Tale Entrepreneur, along with several other stories on the power of storytelling in business settings.


Examining a technology sample kit: IBM components from 1948 to 1986

Examining a technology sample kit: IBM components from 1948 to 1986
by Ken Shirriff on 01-31-2021 at 10:00 am

box opened w700

I recently received a vintage display box used by IBM to illustrate the progress of computer technology. This display case, created by IBM Germany1 in 1986, included technologies ranging from vacuum tubes and magnetic core memory to IBM’s latest (at the time) memory chips and processor modules. In this blog post, I describe these items in detail and explain how they fit into IBM’s history.

An IBM display box, showing components and board from different generations of computing. Click this (or any other photo) for a larger image.

First-generation computing: tube module

IBM is older than you might expect. It was founded (under the name CTR) in 1911 and produced punched card equipment for data processing, among other things. By the 1930s, IBM was producing complex electromechanical accounting machines for data processing, controlled by plugboards and relays.

The so-called first generation of electronic computers started around 1946 with the use of vacuum tubes, which were orders of magnitude faster than electromechanical systems. Appropriately, the first artifact in the box is an IBM pluggable tube module. The pluggable module combined a vacuum tube along with its associated resistors and capacitors. These modules could be tested before being assembled into the system, and also replaced in the field by service engineers. Pluggable modules were also innovative because they packed the electronics efficiently into three-dimensional space, compared to mounting tubes on a flat chassis.

Tube module from an IBM 604 Electronic Calculating Punch.

 

The pluggable tube module is from an IBM 604 Electronic Calculating Punch (1948). This large machine was not quite a computer, but it could add, subtract, multiply, and divide. It read 100 punch cards per minute, performed operations, and then punched the results onto new punch cards. It was programmed through a plugboard and could perform up to 60 operations per card. The IBM 604 was a popular product, with over 5600 produced. A typical application was payroll, where the 604 could compute various tax rates through multiplication.

The IBM 604 Electronic Calculating Punch behind a Type 521 Card Reader/Punch. Photo from IBM.

 

The 604 used many different types of tube modules. A typical module implemented an inverter, which could be used in an OR or AND gate.2 The tube module in the display box, however, is a thyratron driver, type MS-7A. The thyratron tube isn’t exactly a vacuum tube since it is filled with xenon. This tube acts as a high-current switch; when activated, the xenon ionizes and passes the current. In the 604, thyratron tubes were used to drive relay coils or magnet coils in the card punch.3

A thyratron tube, type 2D21. This tube is from the pluggable module in the box.

 

Although the 604 wasn’t quite a computer, IBM went on to build various vacuum-tube computers in the 1950s. These machines used larger pluggable tube modules that each held 8 tubes.4 The box didn’t include one of these modules—probably due to their size—but I’ve included a photo below because of their historical importance.

A key-debouncing module from an IBM 705. Details here.

 

Second generation: transistors and SMS (Standard Modular System) card

With the development of transistors in the 1950s, computers moved into the second generation, replacing vacuum tubes with smaller, more reliable devices. IBM based its transistorized computers on pluggable cards called Standard Modular System (SMS) cards. These cards were the building blocks of IBM’s transistorized computers, including the compact IBM 1401 (1959) and the larger 7000-series mainframe systems. A computer used thousands of SMS cards, manufactured in large numbers by automated machines.

The photo below shows the SMS card from the box.5 The card is a printed circuit board, about the size of a playing card, with components and jumpers on one side and wiring on the back. A typical SMS card had a few transistors and implemented a simple function such as a gate. The cards used germanium transistors in metal cans as silicon transistors weren’t yet popular. I’ve written about SMS cards before if you want more details.

The SMS card in the technology box, type AXV.

Third generation: SLT (Solid Logic Technology)

In 1964, IBM introduced the System/360 line of mainframe computers. The revolutionary idea behind System/360 was to use a single architecture for the full circle (360°) of applications: from business to scientific computing, and from low-end to high-end systems. (Prior to System/360, different models of computers had completely different architectures and instruction sets, so each system required its own software.) The System/360 line was highly successful and cemented IBM’s leadership in mainframe computers for many years.

Although other manufacturers used integrated circuits for their third generation computers, IBM used modules called SLT (Solid Logic Technology), which were not quite integrated circuits. Each thumbnail-sized SLT module contained a few discrete transistors, diodes, and resistors on a square ceramic substrate. An SLT module was capped with a square metal case, giving it a distinct appearance. Although an SLT module doesn’t achieve the integration of an IC, it provides a density improvement over individual components. Each small SLT module was roughly equivalent to a complete SMS card, but much more reliable.7 By 1966, IBM was producing over 100 million SLT modules per year at a cost of 40 cents per module.6

The board below is a logic board using 24 SLT modules. These modules implement AND-OR-INVERT logic gates, the primary logic circuit used in System/360. This board was probably part of the CPU.

A logic board using SLT modules. (The display box labeled this as an MST board though.)

 

The photo below shows the circuitry inside an SLT module. This module has four transistors (the tiny gray squares). SLT modules typically include thick-film resistors, but none are visible in this module.

Closeup of an SLT module showing the tiny silicon dies mounted on the ceramic substrate.

 

The box also has an SLT card with analog circuitry (maybe for the computer’s core memory or power supply). This card has one SLT module, a simple one containing four transistors (number 361457). I don’t know why this board uses so many discrete transistors; perhaps they handle more power than the transistors inside SLT modules could.

A card using an SLT module (the metal square in the lower left).

Integrated circuits: MST (Monolithic System Technology)

For a few years, IBM used SLT modules while other computer manufacturers used integrated circuits. Eventually, though, IBM moved to integrated circuits, which they called Monolithic System Technology (MST). An MST module looks like an SLT module from the outside, but inside it contains a monolithic die (i.e. an integrated circuit) rather than the discrete components of SLT. MST was first used in 1969 for the low-end System/3 computer.

An MST module looks like an SLT module from the outside, but has an integrated circuit die inside.

 

The photo above shows the box’s MST module. The silicon die is the tiny shiny rectangle in the middle, connected to the 16 pins of the module. The chip was mounted upside down, soldered directly to the substrate. This upside-down mounting is unusual; most other manufacturers used ceramic or plastic packages for integrated circuits, with the silicon die connected to the pins via bond wires.

Core memory

The box contains a core memory plane; most computers from the 1950s until the early 1970s used magnetic core memory for their main memory.8 This plane holds 8704 bits and is from a System/360 Model 20, the lowest-cost and most popular computer in the System/360 line.9

Core plane from a System/360 Model 20.

 

In core memory, each bit is stored in a tiny magnetized ferrite ring. The ferrite rings are organized into a matrix; by energizing a pair of wires, one bit is selected for reading or writing. Multiple core planes were stacked together to store words of data. Because each bit required a separate ferrite ring, magnetic core memory was limited in scalability. This opened the door for alternative storage approaches.
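The coincident-current selection just described can be mimicked with a toy model. This is a pure software sketch of the addressing idea, not tied to any actual IBM circuit:

```python
# Toy model of coincident-current core selection. Each core sits at the
# crossing of one X wire and one Y wire; driving half the switching
# current on one X line and one Y line flips only the core where the
# two half-currents coincide.
class CorePlane:
    def __init__(self, rows, cols):
        self.bits = [[0] * cols for _ in range(rows)]

    def write(self, x, y, value):
        # Energizing X line x and Y line y selects exactly one core.
        self.bits[x][y] = value

    def read(self, x, y):
        # Reading is destructive: the sense winding detects whether the
        # selected core flipped, which leaves it cleared to 0, so the
        # controller must rewrite the value afterward.
        value = self.bits[x][y]
        self.bits[x][y] = 0
        return value

plane = CorePlane(128, 68)   # the Model 20 plane's 128x68 grid
plane.write(5, 7, 1)
value = plane.read(5, 7)     # returns 1 and clears the core
plane.write(5, 7, value)     # the rewrite cycle restores the bit
```

The destructive-read-plus-rewrite cycle is why core memory timing is quoted as a full read/write cycle rather than a simple access time.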

Closeup of the core plane, showing the wires through the tiny ferrite cores.

Semiconductor memory

IBM was an innovator in semiconductor memory and this is reflected in the numerous artifacts in the box that show off memory technology.10 Modern computers use a type of memory chip called DRAM (dynamic RAM), storing each bit in a tiny capacitor. DRAM was invented at IBM in 1966 and IBM continued to make important innovations in semiconductor memory.

Although magnetic core memory was the dominant RAM storage technique in the 1960s, IBM decided in 1968 to focus on semiconductor memory instead of magnetic core. The first computer to use semiconductor chips for its main memory12 was the IBM System/370 Model 145 mainframe (1970). Each chip in that computer held just 128 bits, so a computer might need tens of thousands of these chips.11 Fortunately, memory density rapidly increased, as shown by the dies below. I’ll discuss the 2-kilobit chip in detail; my die photos of the others are in the footnotes13.
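As a sanity check on “tens of thousands,” here’s the arithmetic for an illustrative 512-kilobyte configuration, assuming a parity bit per byte (the notes mention parity storage; the exact configuration here is my own illustration):

```python
# 512 KB of 9-bit storage (8 data bits + parity, an assumption for this
# illustration) built from 128-bit chips of the System/370 Model 145 era.
bits_needed = 512 * 1024 * 9
chips = bits_needed // 128
assert chips == 36864  # indeed tens of thousands of chips
print(chips)
```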

The box includes a display with four memory dies: 2 K-Bit, 64 K-Bit, 256 K-Bit, 1 Megabit.

 

The photo below shows the 2-kilobit die14 under a microscope. It is a static RAM chip from 1973, not as dense as DRAM since it uses six transistors per bit. The tiny white lines on the chip are the metal layer on top of the silicon, wiring the circuitry together. Around the outside of the die are 26 solder bumps for attaching the chip to the substrate. Note that this chip is mounted upside down (“flip-chip”) on the substrate, unlike most integrated circuits that use bond wires. The chip is covered with a protective yellowish film, except where the solder bumps are located.

Die photo of the 2-kilobit chip.

 

To increase the density of storage, four of these chips were mounted in a two-layer MST module, yielding an 8-kilobit module. The module in the box (below) has the square metal case removed, showing the silicon dies inside. These memory modules provided the main memory for the IBM System/370 models 115 and 125, as well as the memory expansion for the models 158 and 168 (1972).

The memory module has chips on two levels. This is an 8-kilobit module composed of four 2-kilobit chips.

 

Each memory card (below) contained 32 of these modules to provide 32 kilobytes of storage. In the photo below, you can see the double-height memory modules along with shorter modules for support circuitry. A four-megabyte main memory unit held 144 of these cards in a frame about 3 feet × 3 feet × 1 foot, so semiconductor memory was still fairly bulky in 1972.
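The capacity arithmetic above works out exactly if the ninth bit of each byte is parity (my inference; the text doesn’t spell it out):

```python
# Each card: 32 modules x 8 kilobits = 262,144 bits of raw storage.
bits_per_card = 32 * 8 * 1024

# 144 cards storing 9-bit bytes (8 data bits plus parity, my inference)
# yields exactly the four megabytes quoted above.
total_bytes = 144 * bits_per_card // 9
assert total_bytes == 4 * 1024 * 1024
```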

The memory board contains regular MST modules and double-height modules that hold the memory chips.

 

Moving along to some different memory chips, the box includes two silicon wafers holding memory dies, a 5″ wafer and a 4″ wafer.

The two silicon wafers.

 

The smaller four-inch wafer (1982) holds 288-kilobit dynamic RAM chips, an unusual size as it isn’t a power of 2.15 The explanation is that the chip holds 32 kilobytes of 9-bit bytes (8 + parity). In the die photo, you can see that the memory array is mostly obscured by complex wiring on top of the die. This wiring is due to another unusual part of the chip’s design: for the most efficient layout, the memory bit lines have a different spacing from the bit decode lines. As a result, irregular wiring is required to connect the parts of the chip together, forming the pattern visible on top of the chip. Because this die is on the wafer, you can see the alignment marks and test circuitry around the outside of the chip.
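The byte arithmetic behind the odd size is easy to verify:

```python
# 32 K of 9-bit bytes (8 data + parity) is exactly 288 kilobits.
bits = 32 * 1024 * 9
assert bits == 288 * 1024  # 294,912 bits
```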

Die photo of the 4″ wafer.

 

The five-inch wafer holds 1-megabit memory chips16 that were used in the IBM 3090 mainframe17 (1985). This computer used circuit cards with 32 of these chips, providing four megabytes of storage per card, a huge improvement over the 32-kilobyte card described earlier. The 3090 used multiple memory cards, providing up to 256 megabytes of main storage. The die photo below shows how the chip consists of 16 rectangular subarrays, each holding 64 kilobits.

Die photo of the 1-megabit DRAM chip on the 5″ wafer. The dark circles are dirt, not solder balls.

 

The photo below shows how this die is mounted upside-down on the ceramic substrate with the solder bumps connected to the 23 pins of the module. This module (not part of the box) was used in the IBM PS/2 personal computer.18 The die below looks green, unlike the die above, but that’s just due to the lighting.

Construction of an IBM memory module. This module was not part of the box, but the die is the same as the 5″ die. Photo courtesy of Antoine Bercovici.

 

The photo below compares three memory modules from the technology box. The first module is the 8-kilobit module containing four 2-kilobit chips, described earlier. The second module is a much wider 512-kilobit module, built from four 128-kilobit dies. The third module contains a 1-megabit chip (the one in the 4-chip display, not from the wafer). These megabit modules were used in the IBM 3090 mainframe’s secondary storage.

Three memory modules: 8-kilobit, 512-kilobit, and 1-megabit.

Disk platter

The box contains a segment of a 14″ IBM disk platter, used in disk storage systems from minicomputers to mainframes. IBM was a pioneer in hard disks, starting with the IBM RAMAC (1956), which weighed over a ton and held 5 million characters on a stack of 24″ platters. IBM switched to 14″ platters in 1961, and by 1980 the IBM 3380 disk system held up to 2.5 gigabytes in a large cabinet of 14″ platters.19 The 14″ platter was also popular in the low-cost, removable disk cartridges (1965) used with many minicomputers. The 14″ disk platter was finally replaced by an 11″ platter with the introduction of the IBM 3390 disk drive in 1989. Nowadays, laptops typically use 2.5″ platters; amazingly, disk capacity kept increasing even as platter diameter steeply decreased.

Section of a 14″ disk platter from the display box.

Artifacts from the IBM 3090

At the time of the box’s creation, the 3090 mainframe was IBM’s new high-performance computer (below), so the box has several artifacts that show off the technology in this computer. Although the IBM 3090 (1985) had top-of-the-line performance at the time, by 1998 an Intel Pentium II Xeon microprocessor had comparable performance,20 illustrating the remarkable improvements of microprocessor technology.

An IBM 3090 data center. Photo from the IBM 3090 brochure.

 

In 1980, IBM introduced the thermal conduction module (TCM), an advanced way to package integrated circuits at high density, while removing the heat that they generate.21 A TCM starts with a multi-chip module with about 100 high-speed integrated circuits mounted on a ceramic substrate, as shown below. This substrate contains dozens of wiring layers to connect the integrated circuits.22 To remove the heat, the ceramic substrate is packaged in a TCM, which has a metal piston contacting each silicon die. These pistons are surrounded by helium (which conducts heat better than air), and the whole TCM package is water-cooled. Finally, nine TCMs are mounted on a printed circuit board.

The hierarchy of components in the IBM 3090: chips are mounted on a ceramic substrate, which is assembled into a TCM. A board holds nine TCMs.

 

This incredibly complex heat-removal system was required because the 3090 used emitter-coupled logic (ECL), the same type of circuitry used in the Cray-1 supercomputer. Although ECL is a very fast logic family, it is also power-hungry and generates much more heat than the MOS transistors used in microprocessors.

The ceramic substrate for a TCM, from the box. It is fairly small, measuring 11×11.7 cm. This substrate holds 100 silicon dies; one is visible near the middle.

 

The photo above shows the ceramic substrate. Normally, the substrate has 100 silicon dies mounted on it, but this sample has just a single die. The box also includes a cross-section slice of the ceramic substrate (below). This shows the 38 layers of wiring inside the substrate, as well as the pins on the underside.

Cross-section of the ceramic substrate, showing the multiple layers of internal wiring.

 

Each TCM had 1800 pins so it could be plugged into a printed circuit board and connected to the rest of the system. Each board held 9 TCMs and was powered with an incredible 1400 amps. The box includes a PCB sample, showing its multi-layer construction (below), and the dense grid of holes to receive the ceramic substrate.

Closeup of the printed circuit board used in the IBM 3090. The routed groove shows the multi-layer construction.

 

Finally, here’s a nice cutaway of a TCM from the detailed IBM 3090 brochure. At the bottom, it shows the silicon dies mounted on the ceramic substrate. The dies are contacted by the heat sink pistons in the middle. The connections on top are for the cooling water.

This cut-away image from IBM shows the internal construction of a TCM.

 

Conclusion

This technology exhibit box was created 35 years ago. Looking at it from the present provides a perspective on the history of both IBM and the computer industry. The box’s date, 1986, marks the peak of IBM’s success and influence,23 right before microcomputers decimated the mainframe market and IBM’s dominance. What I find interesting is that the technology box focuses on mainframes and lacks any artifacts from the IBM PC (1981), which ended up having much more long-term impact.24 This neglect of microcomputers reflects IBM’s corporate focus on the mainframe market rather than the PC market (which, ironically, IBM created).

In the bigger historical picture, the technology box covers a time of great upheaval as electromechanical accounting machines were replaced by three generations of computers in rapid succession: vacuum tubes, then transistors, and finally integrated circuits. In contrast to this period of rapid change, nothing has replaced integrated circuits over the past 50 years. Instead, integrated circuits have remained, improving by many orders of magnitude, as described by Moore’s Law. (Compared to the room-filling IBM 3090 mainframe, an iPhone has 1000 times the performance and 50 times the RAM.) Will integrated circuits continue their dominance for the next 50 years or will some new technology replace them? It remains to be seen.

Thanks to Cyprien for providing this amazing box of artifacts. I announce my latest blog posts on Twitter, so follow me @kenshirriff. I also have an RSS feed.

Notes and references

  1. The box was apparently created in Stuttgart, Germany. The components are protected by a piece of plexiglass, with labels in German for all the components, such as Mehrschicht-Keramikträger for multi-layer ceramic substrate. The labels are listed here if you’re interested.
    The box is labeled in German: “Computertechnologie”.

     

    The box originally included several German books on computer technology but since they are missing I had to do some research and come up with my own narrative.

  2. For more information on the pluggable tube modules, see the schematics of IBM’s pluggable units (which lack the box’s MS-7A module). (I suspect the MS-7A was selected for the box because it is more compact than most of the pluggable modules, having one layer of circuitry below the tube, rather than two.)
  3. The IBM 604 service manual says that the thyratron tube modules are designated TH, but the module in the box is designated MS-7A. I don’t know why the designations don’t match up.
  4. People sometimes think that an 8-tube module held a byte. This is wrong for two reasons. First, bytes didn’t exist back then. IBM’s early scientific computers used 36-bit words, while the business computers were based on characters of 6 bits plus parity. Second, 8 tubes didn’t correspond to 8 bits because circuits often required multiple tubes. For instance, a tube module could implement three bits of register storage.
  5. The SMS card in the box is type AXV, a complementary emitter follower circuit used in the IBM 1443 printer and other systems.
  6. SLT was controversial, since other companies used more-advanced integrated circuits rather than hybrid modules. In typical IBM fashion, the vice president in charge of SLT was demoted in 1964, only to be reinstated in 1966 when SLT proved successful. My view is that integrated circuit technology was too immature when the System/360 was released, so IBM’s choice to use SLT made the System/360 possible. However, it only took a year before integrated circuits became practical, as shown by their use in competing mainframes. I think IBM stuck with SLT modules longer than necessary. Integrated circuits rapidly increased in complexity (Moore’s Law), while SLT modules could only increase density through hacks such as putting resistors on the underside (SLD) and using two layers of ceramic (ASLT).
  7. Curiously, this card is labeled in the box as an MST card, but checking the part numbers shows it has SLT modules. Specifically, it contains the following types of SLT modules (click for details): 361453 AND-OR-Invert, 361454 inverters, 361456 AND-OR-extender, and 361479 inverters. The SLT modules are also documented in IBM’s manual.
    Schematic of one of the SLT modules on the board (361453 AND-OR-INVERT (AOI) gate) from the IBM manual.

     

    The schematic above shows one of the SLT modules. (IBM had their own symbol for transistors; T1 is an NPN transistor.) This gate is built from diode-transistor-logic, so it’s more primitive than the TTL logic that became popular in the late 1960s. The “Extend” pins are used to connect modules together to build larger gates, so the modules provide a lot of flexibility. This module inconveniently requires three voltages. This SLT module contained one transistor die, three dual-diode dies, and three thick-film resistors. During manufacturing, the resistors were sand-blasted to obtain accurate resistances, an advantage over the inaccurate resistances on integrated circuit dies.

  8. The System/360 line was designed as a single 32-bit architecture for all the models. The Model 20, however, is a stripped-down, 16-bit version of System/360, incompatible with the other machines. (Some people don’t consider the Model 20 a “real” System/360 for this reason.) But due to its low price, the Model 20 was the most popular System/360 with more than 7,400 in operation by the end of 1970.
  9. This core memory plane from a System/360 Model 20 is a 128×68 grid. Note that this isn’t a power of 2: the plane provided 8192 bits of main memory storage as well as 512 bits for registers. Using the same core plane for memory and registers hurt performance but saved money. The computer used five of these planes to make a 4-kilobyte memory module, or 10 planes for an 8-kilobyte module. For details, see the Model 20 Field Engineering manual.
  10. For an extensive list of references on DRAM chips, see the thesis Impact of processing technology on DRAM sense amplifier design (1990). For a history of memory development at IBM through 1980, from ferrite core to DRAM, see Solid state memory development in IBM.
  11. The System/370 Model 145 was the first computer with semiconductor main memory. Each thumbnail-sized MST module held four 128-bit chips; 24 modules fit onto a 12-kilobit storage card. A shoebox-sized Basic Storage Module held 36 cards, providing 48 kilobytes of storage with parity. By modern standards this storage is incredibly bulky, but it provided twice the density of the magnetic core memory used by contemporary systems. The computer’s storage consisted of up to 16 of these boxes in a large cabinet (or two), providing 112 kilobytes to 512 kilobytes of RAM.
    Photos showing the 512-bit memory module, the 12-kilobit memory card, and the 48-kilobyte basic storage module. Photos from IBM 370 guide.
  12. IBM had used monolithic memory for special purposes earlier, holding the “storage protect” data in the IBM 360/91 (1966) and providing a memory cache in the System/360 Model 85.
  13. I wasn’t able to find exact details on the 64-kilobit, 256-kilobit, and 1-megabit chips from the display, but I took die photos.
    Die photo of the 64k memory chip.

     

    The 64-kilobit chip is shown above. The solder balls are the most visible part of the chip. The article A 64K FET Dynamic Random Access Memory: Design Considerations and Description (1980) describes IBM’s experimental 64-kilobit DRAM chip, but the chip they describe doesn’t entirely match the chip in the box. There were probably some significant design changes between the prototype chip and the production chip.

    Die photo of the 256-kilobit RAM, roughly 1985.

     

    The 256-kilobit die is shown above. The diagonal lines on the die are similar, but not identical, to the die in A 256K NMOS DRAM (1984). That chip was designed at IBM Laboratories in Böblingen, Germany, and could provide 1, 2, or 4 bits in parallel.

    Die photo of the 1-megabit memory chip.

     

    The 1-megabit die is shown above. IBM was the first company to begin volume production of 1-megabit memory chips and the first company to use them in mainframe computers. This chip was used in the IBM 3090 mainframe, but was later replaced by the faster and smaller “second-generation” 1-megabit chip on the 5″ wafer. One interesting feature of this die is the “eagle” logo, shown below.

    The eagle chip art on the 1-megabit RAM chip, slightly scratched.

     

    The box includes a 1-megabit MST module (below) that uses this chip. Because the chip’s solder balls are along its center, the module omits the center three pins to make room for the connections to the chip.

    The 1-megabit chip mounted in an MST module.

     

  14. This memory card and its 2-kilobit chips are described in detail in A High Performance Low Power 2048-Bit Memory Chip in MOSFET Technology and Its Application (1976). These modules were used in the main memory of the IBM System/370 models 115 (1973) and 125 (1972) as well as upgraded memory for the models 158 (1972) and 168 (1972). The IBM System/370 Model 138 (1976) and Model 148 (1976) also used 2K MOSFET chips, presumably the same ones. The 2-kilobit chip was developed at IBM Laboratories in Böblingen, Germany; this may have motivated its inclusion in this German display box.
    Closeup of the 2-kilobit RAM chip.

     

    The closeup of the 2-kilobit die shows some of the decoder circuitry (left) and the storage cells (right). Two solder balls are in the lower left; the rest of the die is covered with a protective yellow film, probably polyimide. Each storage cell consists of six transistors. The chip is built with metal-gate NMOS transistors.

  15. The 288-kilobit chip is described in detail in A 288Kb Dynamic RAM.
    Closeup of the IBM 288-kilobit memory chip showing the programmable fuses.

     

    The closeup die photo above shows some of the memory cells (at the top and bottom), wired into bit lines. One unusual feature of this chip is that it has redundancy to work around faults. In particular, four redundant word lines can be substituted for faulty ones by blowing configuration fuses. I think the large boxes with circles in the middle are four of the fuses.

    The part number on the 4″ die: OITETR02I IBM 032 BTV.

     

    The photo above shows the chip’s part number; BTV refers to IBM’s Burlington / Essex Junction, VT semiconductor plant where the chip was designed. This plant was acquired by GlobalFoundries in 2015. This photo also shows the complex geometrical wiring, unlike the regular matrix in most memory chips.

  16. Note that there are two 1-megabit chips in the box. The chip on the 4-chip display is an older chip than the one on the 5″ wafer. The 1-megabit memory chip on the wafer is described in An Experimental 80-ns 1-Mbit DRAM with Fast Page Operation (1985). It uses a single 5-volt power supply. The chip is structured as four 256-kbit quadrants, each subdivided into four 64-kbit subarrays. It has two redundant bit lines per quadrant for higher yield. The horizontal solder balls through the middle of the chip are the common connections for each quadrant, while the vertical connections along the left and right edges provide the signals specific to each quadrant. This quadrant structure allows the chip to be accessed as 256K×4 or 1M×1.
  17. IBM’s overview of the 3090 family provides details on the hardware, including the memory and TCM modules. Page 10 discusses IBM’s memory technology as of 1987 and has a picture of their “second generation” 1-megabit chip, which matches the die on the 5″ wafer.
  18. The 1-megabit memory chips were used in the IBM 3090 mainframe, but I think the faulty ones were used in the IBM PS/2 personal computers. You can see the unusual metal MST packages on many PS/2 cards. Specifically, if one of the four quadrants in the memory chip had a fault, the memory chip was used as a 3/4-megabit chip. These had four part numbers, depending on the faulty quadrant: 90X0710ESD through 90X0713ESD (ESD probably stands for Electrostatic Sensitive Device). The PS/2 2-megabyte memory card (90X7391) had 24 chips providing 2 megabytes with parity. The board used chips with alternating bad banks so the memory regions fit together.
  19. Since several of the artifacts in the box came from the IBM 3090 mainframe, and the 3380 disk system was used with the 3090 mainframe, my suspicion is that the platter is from the 3380 disk system, shown below.
    An IBM 3380E disk storage system, holding 5 gigabytes. The disk platters are center-left, labeled “E”. Photo taken at the Large Scale Systems Museum.

     

  20. It’s difficult to precisely compare different computers, especially since the 3090 supported multiple processors and vector units. I looked at benchmarks from 2001 comparing various computers on a linear algebra benchmark. The IBM 3090 performed at 97 to 540 megaflops for configurations of 1 to 6 processors respectively. An Intel Pentium II Xeon performed at 295 megaflops, a bit faster than the 3-processor IBM 3090. To compare clock speeds, the IBM 3090 ran at 69 MHz, while the Pentium II ran at 450 MHz. An IBM 3090 cost $4 million while a Pentium II system was $7,000 to $20,000. The IBM 3090 came with 64 to 128 megabytes of RAM, while people complained about the Pentium II’s initial 512-megabyte limit. The point of this is that while the IBM 3090 was a powerful mainframe in 1985, microprocessors caught up in about 13 years, thanks to Moore’s Law.
  21. The table below compares characteristics of the Thermal Conduction Modules used in the IBM 3081 (1980), IBM 3090 (1985), and IBM S/390 (1990) computers. The board-level technology progressed similarly. For instance, a 3081 board took up to 500 amps, while a 3090 board took 1400 amps, and an S/390 board took 3400 amps.

     

    The IBM 4300-series processors (1979) used a ceramic multi-chip module that held 36 chips, but it used an aluminum heat sink and air cooling instead of the more complex water-cooled TCM. The IBM 4381‘s smaller multi-chip module is often erroneously called a TCM by online articles, but it’s a multilayer ceramic multichip module (MLC MCM). For more information about IBM’s chip packaging, see this detailed web page.

  22. For more information on TCMs, see the EEVblog teardown.
  23. Desktop computer sales first exceeded mainframe computer sales in 1984. Counting the number of employees, IBM peaked in 1985 and declined until 1994 (source). 1985 was also a peak year for IBM’s revenue and profits, according to The Decline and Rise of IBM. By 1991, IBM’s problems were discussed by the New York Times. After heavy losses, IBM regained profitability and growth in the 1990s, but never regained its dominance of the computer industry.
  24. Perhaps one reason that the technology box ignores IBM’s personal computers is that these computers didn’t contain IBM-specific hardware that they could show off: Intel built the 80×86 processor, while companies such as Texas Instruments built the memory and support integrated circuits. The lack of IBM-specific technology in these personal computers is one factor that led to IBM losing control of the PC-compatible market.
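The quadrant organization described in note 16 can be illustrated with a small address-decoding sketch in Python. The bit layout below is a hypothetical assumption for illustration; the article does not say which address bits select the quadrant or subarray on IBM's actual chip:

```python
# Hypothetical address decoding for a four-quadrant, 1-megabit DRAM.
# Assumed layout: 2 quadrant bits, 2 subarray bits, 16 cell bits.

def decode_1m_x1(addr):
    """1M x 1 mode: a 20-bit address selects a single cell."""
    assert 0 <= addr < 2**20            # 1 megabit = 2^20 cells
    quadrant = addr >> 18               # top 2 bits pick one of 4 quadrants
    subarray = (addr >> 16) & 0b11      # next 2 bits pick a 64-kbit subarray
    cell = addr & 0xFFFF                # low 16 bits address 64K cells
    return quadrant, subarray, cell

def decode_256k_x4(addr):
    """256K x 4 mode: an 18-bit address hits the same cell in every quadrant,
    so the four quadrants together supply one 4-bit word."""
    assert 0 <= addr < 2**18            # 256K word addresses
    subarray = (addr >> 16) & 0b11
    cell = addr & 0xFFFF
    return [(q, subarray, cell) for q in range(4)]
```

Under this assumed layout, the subarray and cell address are common to all four quadrants in ×4 mode, which is at least consistent with the shared connections running through the middle of the chip.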
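The price-performance comparison in note 20 can be sanity-checked with a little arithmetic: the quoted benchmark and price figures imply a doubling time close to the classic Moore's Law cadence. A back-of-the-envelope sketch (the dollar figures are the rough numbers quoted above, not exact list prices):

```python
import math

# IBM 3090 (1985): up to 540 megaflops (6 processors), roughly $4 million.
# Pentium II Xeon system (circa 1998): 295 megaflops, roughly $20,000.
mainframe_mflops = 540
mainframe_price = 4_000_000
micro_mflops = 295
micro_price = 20_000

mf_perf_per_dollar = mainframe_mflops / mainframe_price
pc_perf_per_dollar = micro_mflops / micro_price

improvement = pc_perf_per_dollar / mf_perf_per_dollar  # about 109x
years = 1998 - 1985
doubling_time = years / math.log2(improvement)         # about 1.9 years

print(f"price-performance improved {improvement:.0f}x in {years} years")
print(f"implied doubling time: {doubling_time:.1f} years")
```

A doubling time of about two years is just what Moore's Law would predict.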

How Airshield Can Save Transportation

How Airshield Can Save Transportation
by Roger C. Lanctot on 01-31-2021 at 6:00 am

How Airshield Can Save Transportation

The COVID-19 pandemic has devastated public transportation of every variety, from buses and taxis to airplanes and trains. The combination of remote work and evolving economic shutdowns impacting restaurants, entertainment venues, schools, and tourism has sapped transportation demand, while mitigation measures have reduced supply.

Restoring the supply of transportation as economies emerge from the coronavirus crisis in the wake of widespread vaccine deployment will call for a corresponding restoration of confidence. Returning public transportation users on trains, planes, and other means of conveyance will be looking for accommodations intended to combat present and future viral transmission among passengers. Many bus drivers, for example, have seen protective barriers installed.

Much has been made of research studies showing the prophylactic effect of airflow in taxis, trains, buses and airplanes. But these studies tend to look at the prevailing airflow – normally ceiling-to-floor and front-to-back in shared transit situations – the nature of air filtration and the frequency of cabin air replacement. (Taxis or shared cars are more complicated.)

Some of these studies consider the disruption of the prevailing airflow due to the presence and movement of human beings – the passengers and/or attendants.  Little effort has gone into actually modifying the airflow in order to use it as a more active defense against viral transmission.

Airshield, a retrofit device for airplanes, is intended to use airflow itself as a barrier to viral transmission. Developed by Teague, a design firm focused on user experiences in the transportation industry, Airshield is an inexpensive adaptation of existing cabin air exchange systems that provides individualized and unobtrusive protection to airplane passengers – even those sitting in close proximity.

I have flown three domestic flights since the onset of the pandemic. I can personally attest to the fact that flights during the COVID-19 pandemic are almost always completely full, and the airlines have done little to modify the in-cabin experience to inspire passenger confidence.

The airline I fly most frequently is United, which touts its award-winning United CleanPlus program, saying: “United is the first airline among the four largest U.S. carriers to be awarded Diamond status by APEX Health Safety powered by SimpliFlying for our cleanliness and sanitation efforts.”

United CleanPlus addresses the cleanliness of the airplane. It does not address the real threat of airborne viral transmission in flight.

United is not alone. The manufacturers of the airplanes themselves appear to be in a bit of denial. Writes Boeing:

“What happens when someone coughs next to other passengers on an airplane?  New Boeing research shows the cabin environment significantly reduces and removes those cough particles from the air.

“In fact, Boeing researchers say the design of the cabin and the airflow system create the equivalent of more than 7 feet (2 meters) of physical distance between every passenger—even on a full flight. The findings, along with the use of face coverings, enhanced cleaning and other safeguards lower the risk of passengers contracting COVID-19 during air travel.”

Boeing’s claims fly in the face of existing research. Notes a comment on the MIT Medical Website:

“Still, the design of air-handling systems on commercial aircraft makes it unlikely that you’ll be breathing in air from anyone more than a few rows away. In fact, a 2018 study that examined the transmission of droplet-mediated respiratory illnesses during transcontinental flights found that an infectious passenger with influenza or another droplet-transmitted respiratory infection was highly unlikely to infect passengers seated farther away than two seats on either side or one row in front or in back.”

These findings do not inspire confidence. The real missing piece, of course, is research into infections traceable specifically to the flights themselves – especially given the challenges of segregating the behaviors and conditions associated with getting to and from the airplane itself.

It would be nice, though, to know and see active mitigation measures in place in airplanes. Teague’s Airshield offers that solution. Like other researchers, Teague has studied and modeled the airflow on airplanes and identified weaknesses in the current configurations of systems that were never intended to combat a pandemic.

SOURCE: Teague illustration of existing unmodified airflow on a Boeing 737

Teague’s analysis can be found here: https://teague.com/work/airshield-cabin-air-safety-device

Teague claims a 76% reduction in shared air particles with Airshield. The company also claims its own studies show that 86% of passengers would choose to fly on a plane with Airshield over one not so equipped.

SOURCE: Teague illustration of Airshield installation.

Airshield itself requires a two-minute installation over existing air vents, according to the company. I personally expect there are ways to implement Airshield in other forms of public transportation – though airplanes are the best suited to its adoption. For me, if nothing else, the adoption and installation of Airshield can demonstrate an active effort at affording some level of safety for airline travel.

There is still widespread fear of flying, especially given the reality that many passengers are known to be traveling while infected. No level of airplane sanitation can prevent transmission in a closely contained environment where air is more or less freely exchanged. Actively using airflow as a physical barrier is a measure engineers at Teague are putting at our disposal with Airshield. It seems like a good idea to me. (Disclosure: Teague is not a Strategy Analytics client.)


Podcast EP5: Verification, Evolution and Revolution

Podcast EP5: Verification, Evolution and Revolution
by Daniel Nenni on 01-29-2021 at 10:00 am

Dan and Mike are joined by Dr. Bernard Murphy. Bernard has recently published a book on entrepreneurship and the importance of storytelling. In this podcast, Bernard talks about his journey from a PhD in Nuclear Physics at Oxford University to a storied career in EDA and verification. Bernard discusses a fundamental shift in verification that occurred around 2000 and provides a thoughtful perspective on verification approaches, both today and tomorrow.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.


CEO Interview: Tuomas Hollman of Minima Processor

CEO Interview: Tuomas Hollman of Minima Processor
by Daniel Nenni on 01-29-2021 at 6:00 am

Tuomas Hollman Minima CEO

Tuomas is an experienced senior executive, with proficiency that ranges from strategy to product development and business management. He began his semiconductor industry career at Texas Instruments, serving for 15 years in increasingly important roles, including general management and profit-and-loss responsibility for multiple product lines. From Texas Instruments, Tuomas joined Exar Corporation as division vice president of power management and lighting with oversight of strategy, product development and marketing. Tuomas joined Minima Processor from MaxLinear, where he continued to lead the power management and lighting products after MaxLinear’s acquisition of Exar Corporation. Tuomas holds a Master of Science degree in Microelectronics Design from Helsinki University of Technology and a Master of Science degree in Economics and Business Administration from the Helsinki School of Economics, both in Finland.

What is unique about the approach of Minima to ultra-low power digital design?

Real-time adaptivity is a fundamental feature of the Minima approach to silicon design. Real-life systems have several tasks to do, but even today custom silicon is designed for a single task (operating point), while all the others are trade-offs. Custom silicon is already a megatrend, spearheaded by Apple’s increased use of its own optimized silicon products. The next obvious megatrend is to have the silicon adapt to each task at hand individually and find its optimum operating point across the wide range of tasks it needs to do in the end product. For that, the system needs to be “aware of itself” and adapt in real time, which other current silicon design methodologies do not support.

Why did Minima get started in Finland?
To create a truly adaptive system, you need an understanding of silicon as well as the rest of the embedded system, including the software. Once upon a time, not too long ago, there was a company in Finland that had the full range of talent, from silicon to system to software engineering, namely Nokia Mobile Phones. Nokia also supported academia, where our founder, Lauri Koskinen, laid the foundation for the Minima technology, which attracted companies like Texas Instruments, where I gained most of my understanding of the real-life silicon business. Also, Finland and the EU have a variety of early-stage financial instruments to support new deep-tech companies, which is essential in semiconductors due to long development cycles. We both have strong connections to Silicon Valley, too, as I worked there for 5 years and Lauri spent a one-year term at the UC Berkeley Wireless Research Center on a prestigious Fulbright Finland grant.

What applications are a fit for Minima’s real-time adaptive approach?
Always-on, sensing-type applications such as hearables and wearables are a great fit. You get the benefit of minimum-energy-point, near-threshold operation for the extended periods of time the system is just monitoring its environment but has no actual user input to process. And when it does, our ultra-wide dynamic voltage and frequency scaling (DVFS) allows the same core to run 10x or 20x faster to process the user input, be it a spoken keyword or any other type of user input.

Why the emphasis on energy vs low power? And why near-threshold voltage and not sub-threshold voltage?
Batteries hold a certain amount of energy. Doing one operation consumes a certain small fraction of that. How many operations you can do is what you really want to maximize. If you only look at power numbers, then slower operation or sleeping part of the time would seem to lower them, but that does not get you more computation cycles from your limited energy source; only the speed changes. To truly get more operations done with the same energy, you have to change what impacts it the most, and that is your supply voltage. This is why you need to get to near/sub-threshold operation. The terms near- and sub-threshold have been used almost interchangeably, so let’s just say near/sub-threshold voltages are the goal.
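The reasoning above can be put in numbers: dynamic switching energy per operation scales roughly as C x V^2 (switched capacitance times supply voltage squared), so halving the supply voltage quarters the energy of each operation. A minimal sketch with made-up capacitance and battery figures, ignoring leakage, threshold effects, and the accompanying frequency penalty:

```python
# Energy per operation under the simple dynamic-power model E = C * V^2.
# The capacitance and battery values below are made up for illustration.

def energy_per_op(capacitance_farads, vdd_volts):
    return capacitance_farads * vdd_volts ** 2

battery_joules = 100.0   # hypothetical battery energy budget
c = 1e-12                # hypothetical switched capacitance per operation (1 pF)

ops_at_1v0 = battery_joules / energy_per_op(c, 1.0)
ops_at_0v5 = battery_joules / energy_per_op(c, 0.5)  # near-threshold supply

print(f"operations per charge at 1.0 V: {ops_at_1v0:.2e}")
print(f"operations per charge at 0.5 V: {ops_at_0v5:.2e}")  # 4x as many
```

The quadratic dependence is why lowering the supply voltage, rather than simply sleeping more, is what stretches a fixed energy budget across more operations.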

Your company name includes “processor,” so do you sell processor IP?
Our technology can be flexibly applied on any pipelined logic, so it may be a processor, HW accelerator, NPU or any type of custom logic. Maybe we should change our name to Minima SoC!

What is your business model?
The core of our business model is licensing and implementation of our Dynamic Margining IP, which consists of semiconductor IP and a supporting software task/driver. In addition, we help our customers make the most of our system and near-threshold operation by analyzing their application and use cases to define optimal operating points for their design and reach the lowest possible energy.

With an IP business, silicon validation is critical. How’s that coming along?
We have validated our IP on silicon and we have a customer ramping into volume production.

What do the next 12 months have in store for Minima?
It will be very exciting, as we will see Minima Dynamic Margining enabled customer SoCs hitting the market. Internally, we will be working hard to serve more customers, which will be enabled both by additional investment and further development of our IP delivery methodologies.

What predictions do you have for the semiconductor industry in 2021?
Things are going beyond what you see in the catalog…devices are becoming more and more application-specific. More and more vertical integration, more and more building blocks, and more adaptivity. Apple has demonstrated how powerful optimized SoCs are, first in True Wireless Stereo (TWS) headsets and now even in laptops! It’s a megatrend. It’s behind NVIDIA buying Arm. That’s how you pack that kind of performance into a user-friendly form factor. And this will happen to more and more products, not just the most obvious ones with small batteries. If you want to do next-generation devices, it will require specialized and optimized silicon. And the generation after that will adapt to the tasks at hand. That’s why we aren’t a chip company…one size doesn’t fit all.

https://minimaprocessor.com/

Also Read:

CEO Interview: Lee-Lean Shu of GSI Technology

CEO Interview: Arun Iyengar of Untether AI

CEO Interview: Tony Pialis of Alphawave IP