Agile SoC Design: How to Achieve a Practical Workflow
by Andy Tan on 02-10-2022 at 6:00 am


“The only sustainable advantage you can have over others is agility, that’s it.” (Jeff Bezos)

The best workflow is the one that has been working until it doesn't. In their quest to tape out the next design, SoC teams rely on their proven workflow to meet design constraints and project milestones. As is often the case, design constraints or project milestones change during different phases of a workflow, and prior workflows may be rendered obsolete by new technology nodes that add new challenges. That's why many SoC teams have had to constantly tweak their workflows in order to meet their elusive tape-out milestones and constraints.

Agile was first introduced in the '90s (yes, it's been around for that long) to improve software development teams' responsiveness to change, productivity, predictability, and quality. Since then, 25 years of data show that Agile projects are nearly 2X more likely to succeed than Waterfall projects. Moreover, Agile has replaced Waterfall as the project management approach of choice for software development, with more than 70% of companies using it (Djurovic, 2020). It has also built a strong support community, such as the Agile Alliance, a non-profit organization with over 72,000 members providing global resources for its members.

Unfortunately, the success of Agile in the hardware domain is much less heralded than in software. As Paul Cunningham suggested in his EE Times article "Agile Verification for SOC Design," agility trends in hardware are perceived to be limited, likely because they are not explicitly labeled as such. Another likely reason is that Agile started as a software development concept, and its plethora of jargon such as Scrum, Kanban, Extreme Programming, FDD, the Agile Manifesto, etc. can be intimidating to SoC teams that just want to tape out their projects ASAP, not to adopt seemingly academic software development concepts.

So, although a typical SoC workflow consists of multiple distinct tasks with short cycles relative to the project timeline, serves internal customers who expect flawless and timely handoff, uses IPs developed by different teams, needs multiple ECO flows to tape out, requires collaboration with many teams often in different geographies, and is likely to encounter many plan changes, what we often hear is that Agile is not suitable for hardware development.

Instead of focusing on the potential benefits of Agile such as responsiveness to change, a better project success rate, better collaboration, and better-quality deliverables, critics tend to focus on the differences between software and hardware projects, as if adopting Agile were an all-or-nothing proposition. The truth of the matter is that adopting Agile is a journey that often starts small, driven by practical needs such as improving current processes that are no longer patchable.

With such a well-established track record in software development, focusing on the similarities and benefits of Agile for hardware development through incremental, practical steps seems like a good idea. The good news is that even in what appear to be very sequential, waterfall-like SoC design workflows, there are opportunities for improvement by adopting Agile principles. The following figure depicts a typical high-level workflow for a worldwide SoC development team.

The front-end design team (1) is in US-West and uses soft IPs developed by a team in the US-Central region (3). The back-end design team is in India (2) and uses hard IPs developed by a team in Europe (4). Each team’s compute resources can be fully on-premises, entirely in the cloud, or using a hybrid on-premises/cloud configuration (5).

The front-end design team's workflow goes through distinct phases such as design specification and modeling in HDL (Hardware Description Language), followed by a verification phase where functional simulation and formal verification are performed, followed by logic synthesis and testability. This design phase typically involves running thousands of jobs in parallel and managing an equally large number of relatively small files, iterating through each of the phases before finally handing over to the back-end team, likely after running out of allocated time.

Although it typically must deal with much larger files, including binary files, hard IPs, macros, and memories, the back-end team also operates in distinct, iterative phases. Physical design tasks such as floorplanning, place & route, and ECO (Engineering Change Order) are iterative in nature. Likewise, parasitic extraction, DRC/LVS (Design Rule Checks/Layout Vs. Schematic), timing ECO, and power and signal integrity analysis typically must be done multiple times prior to the signoff phase.

Keeping in mind that each of the IP teams likely will have its own iterative workflows, and each team can be in different locations, it’s not hard to imagine that a common design and collaboration platform with features to help make iterations and changes safe and easy, naturally scalable through cloud computing, and integrated with productivity tools to help fuse teams together can be very beneficial as a foundation of a practical Agile SoC workflow.

This is where HCM (Hardware Configuration Management) platforms come into the picture. HCM has been around for a while and many SoC design teams are already using it in their projects. However, instead of using its full potential to improve the team’s agility, HCM may just be relegated to a specific task such as creating a Bill of Materials (BoM), or revision control. This is a pity because leading HCM platforms will continue to advance in response to the latest technology needs and every new release brings features that make them increasingly suitable for a first step towards adopting Agile in an SoC workflow. Consider the following figure.

Using the same high-level SoC workflow example previously mentioned, the figure depicts an Agile-friendly HCM platform. The platform offers code, documentation, and graphics review capability through integration with Review Board, a widely popular review collaboration tool (1). Integration with open-source automation tools such as Jenkins allows the platform to easily be used to drive iterative tasks (2). It connects to SCM tools such as Git, Perforce, and Subversion to allow the use and management of IPs created in the SCM tool’s repository (3). To leverage cloud computing and enable virtually unlimited horizontal scalability the platform is well supported by top cloud providers with the flexibility to be deployed fully in the cloud, on-premises, or in hybrid mode (4). Finally, the platform also connects to popular issue tracking tools so that any issue found during design can be automatically reported and tracked (5).
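To make the automation part of this picture concrete, here is a minimal sketch of the kind of glue such an integration implies. It is hypothetical and simplified: the regression script name, issue-tracker URL, project key, and credentials are placeholders, and it is not Cliosoft's or Jenkins' actual API; it simply assumes a regression can be launched from a shell script and that issues are filed through Jira's standard REST endpoint.

```python
#!/usr/bin/env python3
"""Hypothetical check-in hook: run a regression, file an issue if it fails.

Illustrative only. The script name, server URL, project key and credentials
are placeholders, not any vendor's actual interface.
"""
import subprocess

import requests  # third-party HTTP client (pip install requests)

JIRA_URL = "https://jira.example.com"        # placeholder issue-tracker URL
JIRA_PROJECT = "SOC"                         # placeholder Jira project key
AUTH = ("ci-bot", "api-token-placeholder")   # placeholder credentials


def run_regression(label: str) -> bool:
    """Launch the team's regression script against a labeled design state."""
    result = subprocess.run(["./run_regression.sh", label], text=True)
    return result.returncode == 0


def file_issue(summary: str, description: str) -> None:
    """Create an issue via Jira's REST API (POST /rest/api/2/issue)."""
    payload = {
        "fields": {
            "project": {"key": JIRA_PROJECT},
            "summary": summary,
            "description": description,
            "issuetype": {"name": "Bug"},
        }
    }
    resp = requests.post(f"{JIRA_URL}/rest/api/2/issue",
                         json=payload, auth=AUTH, timeout=30)
    resp.raise_for_status()


if __name__ == "__main__":
    label = "rtl_drop_2022_02"               # placeholder design-state label
    if not run_regression(label):
        file_issue(f"Regression failed for {label}",
                   "Automated report from the check-in regression hook.")
```

In practice, a Jenkins job would typically own this orchestration, triggered by the HCM platform on each check-in or label event, so that every iteration is exercised the same way for every team member.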

As with software, adopting Agile requires more than just tools. However, tools that automate processes, interpret them consistently for all team members, are conducive to frequent changes and workflow improvements, and encourage collaboration go a long way in helping adoption. It has been said that without the right tools adopting Agile can feel like driving a race car on a dirt road: sooner or later the driver will blame the car.

An Agile-friendly HCM platform such as Cliosoft SOS is an ideal first step to a practical Agile SoC workflow as it will ensure a positive adoption experience. Key characteristics include:

  1. Built from the ground up for hardware designers instead of bolted on top of an SCM built for software development purposes. A native HCM platform is built with hardware designers in mind and offers key features friendly to hardware design practices, such as:

A. Efficiently handles all the data types and sizes present in an SoC design, from millions of small text or binary files to GB-size files, layout blocks, composites, etc.

B. Ability to configure design phases into project views, and the flexibility to easily support branching and merging of projects and files when needed

C. Can easily tag/label data to communicate design state readiness and facilitate iterations during various SoC workflow phases such as design, verification, and test

D. Can group (abstract) objects into composites to allow hierarchical data management

E. Secure yet flexible work area configuration that supports checkout-based and writable work area use models under full ACL control

  2. Comprehensive CLI suitable for all phases of SoC design, and an intuitive GUI developed with simplicity and ease of use in mind for GUI-centric design tasks and to accelerate adoption
  3. Integration with tools from major EDA vendors to further minimize user ramp-up time
  4. Integration with leading automation tools such as Jenkins to orchestrate tasks such as simulation and test, and with the Review Board code and document review collaboration tool
  5. Integration with popular issue tracking tools such as Bugzilla, Jira, Redmine, and Trac to automatically log/report issues from the design management platform
  6. Connection to other SCM tools such as Git, Perforce, and Subversion to enable working with blocks and IP stored in diverse repositories and manage the entire design in progress
  7. Ability to scale from managing activities for a co-located team of a few people to a worldwide organization with multiple teams and hundreds of members collaborating on projects
  8. Horizontal scalability to improve compute efficiency through cloud computing, supporting hybrid and full cloud deployments with major cloud computing providers
  9. Continuous improvements such as "sparsely populating" work areas, allowing users to populate them instantaneously and significantly reduce disk and inode usage, and the ability to programmatically configure projects, which is conducive to process automation (a sketch of this idea follows this list)
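As a flavour of what programmatically configuring projects can look like, here is a minimal, purely illustrative sketch. The hcm command names below are hypothetical placeholders, not the actual Cliosoft SOS CLI; the point is only that a scripted, repeatable bring-up is what makes short, frequent iterations cheap for every member of the team.

```python
#!/usr/bin/env python3
"""Hypothetical project bring-up for an HCM-managed SoC block.

The 'hcm' commands are illustrative placeholders for whatever CLI the chosen
platform provides; they are not real Cliosoft SOS commands.
"""
import subprocess

PROJECT = "soc_top"
BLOCKS = ["cpu_cluster", "noc", "ddr_phy"]   # placeholder block names


def hcm(*args: str) -> None:
    """Run one (hypothetical) HCM CLI command and stop if it fails."""
    subprocess.run(["hcm", *args], check=True)


if __name__ == "__main__":
    # Sparsely populate a work area so only referenced files use disk/inodes.
    hcm("create-workarea", PROJECT, "--sparse")
    # Check out only the blocks this engineer is iterating on.
    for block in BLOCKS:
        hcm("checkout", f"{PROJECT}/{block}")
    # Label the starting design state so downstream teams know what they get.
    hcm("tag", PROJECT, "--label", "synth_ready_2022_02")
```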

In addition to being easy for SoC team members to adopt, these characteristics ease Agile adoption by offering practical benefits for the workflow. In other words, although how they are deployed together matters in building an Agile SoC workflow, each of them will bring practical benefits to the project.

Jeff Bezos of Amazon once said, "In today's era of volatility, there is no other way but to re-invent. The only sustainable advantage you can have over others is agility, that's it…" This line of thought resulted in Amazon's Day 1 principles on being agile, which have guided the company for over two decades. It took Amazon over three years after the 2015 acquisition of Annapurna Labs to reach full SoC development in the cloud. Even though Amazon owns cloud infrastructure, this transformation did not happen overnight.

Adopting Agile for SoC design may seem daunting. However, as with everything else, taking the first step is critical. A practical Agile SoC workflow can be achieved using a leading HCM platform such as Cliosoft SOS by deploying pertinent features and enhanced by regularly upgrading to the latest release to benefit from improvements. So why wait?

Also read:

Cliosoft and Microsoft to Collaborate on the RAMP Program

DAC 2021 – Cliosoft Overview

Cliosoft Webinar: What’s Needed for Next Generation IP-Based Digital Design


The Roots Of Silicon Valley
by Malcolm Penn on 02-09-2022 at 6:00 am


The transistor was successfully demonstrated on December 23, 1947, at Bell Laboratories in Murray Hill, New Jersey, the research arm of American Telephone and Telegraph (AT&T).  The three individuals credited with its invention were William (Bill) Shockley Jr., the department head and group leader, John Bardeen and Walter Brattain.  Shockley continued to work on the development at Bell Labs until 1955 when, having foreseen the transistor's potential, rather than continue to work for a salary, he quit to set up the world's first semiconductor company, becoming the industry's de facto father.

The Men … The Legend … The Legacy
Graphic Attribution: Dr Jeff Software
“It wasn’t scary, when you are in your late twenties, you don’t know enough to be scared. We just did it. We just knew what we had to do, and we did it.” – Jay Last

 

William Shockley Jr.  Shockley was born in London, England on February 13, 1910, the son of William Hillman Shockley, a mining engineer born in Massachusetts, and his wife, Mary (née Bradford), who had also been engaged in mining as a deputy mineral surveyor in Nevada.

The family returned to the United States in 1913, setting up home in Palo Alto, California, when Mary joined the large Mining Engineering Department faculty at Stanford University.  But for this twist of fate, given that both Shockley’s parents were mining engineers, the family could have easily settled in Colorado, Nevada or West Virginia instead.

In the event, William Jr. was educated in California, taking his BSc degree at the California Institute of Technology (CalTech) in 1932, before moving to the East Coast to study at the prestigious Massachusetts Institute of Technology (MIT) in Cambridge, Massachusetts, under Professor J.C. Slater.  He obtained his PhD there in 1936, submitting a thesis on the energy band structure of sodium chloride, and joined Bell Telephone Laboratories where he remained until his resignation in 1955.

On leaving Bell Labs, Shockley moved back to Palo Alto, where his sick and aging mother still resided, initially as a visiting professor at Stanford University but with the vision to establish his own semiconductor firm making transistors and four-layer (Shockley) diodes.  Had he decided instead to stay on the East Coast, close to Bell Labs (New Jersey), MIT (Cambridge) or IBM (Burlington), then Silicon Valley might well have developed on the East rather than the West Coast of America, with almost certainly a very different DNA and personality.

On moving back to Palo Alto, Shockley found a sponsor in Raytheon, but Raytheon discontinued the project after a month.  Undeterred, Shockley, who had been one of Arnold Beckman’s students at CalTech, turned to him for advice on how to raise the one million dollars seed money needed.  Beckman was an American chemist, inventor, entrepreneur, founder and CEO of the hugely successful Beckman Instruments, and now also a budding financier, who believed that Shockley’s new inventions would be beneficial to his own company so, rather than pass the opportunity to his competitors, he agreed to create and fund a laboratory under the condition that its discoveries should be brought to mass production within two years.

He and Shockley signed a letter of intent to create the Shockley Semi-Conductor Laboratory (the hyphenation was common practice back then) as a subsidiary of Beckman Instruments, under William Shockley's direction. The new group would specialise in semiconductors, beginning with the automated production of diffused-base transistors.  Shockley's original plan was to establish the laboratory in Palo Alto, close to his mother's home, but this changed when Frederick Terman, provost at Stanford University, offered him space in Stanford's new industrial park at 381 San Antonio Road, Mountain View.  Beckman bought licenses on all necessary patents for US$25,000 and the firm was launched in February 1956.

The seeds for Stanford's hi-tech relationship with industry were sown much earlier, in 1936, when Sigurd and Russell Varian, together with William Hansen, Russell's ex-college roommate and by then a professor at Stanford, approached David Webster, head of Stanford's Physics Department, for help in developing the Varian brothers' idea of using radio-based microwaves for aircraft detection in poor weather conditions and at night.  Webster agreed to hire them to work at the University in exchange for lab space, supplies, and half the royalties from any patents they obtained.  The group's work eventually led to the development of the Klystron in August 1937, subsequently adopted by Sperry, and a decade later, in 1948, the formation of Varian Associates.

In 1938, shortly after the Klystron's development, Bill Hewlett and David Packard, who had graduated three years earlier with degrees in electrical engineering from Stanford University, formed Hewlett Packard in their home garage at 367 Addison Avenue in Palo Alto under the mentorship of Stanford professor Frederick Terman.  In some circles this garage has been celebrated as the "Birthplace of Silicon Valley", which, whilst not wishing to undermine the importance of Hewlett Packard's contribution to the industry, understates both the role Stanford played in creating the catalytic environment for Californian hi-tech ventures, and the explosive role Shockley Semiconductors would subsequently play.  From a semiconductor perspective, 381 San Antonio Road in Mountain View is more appropriately the real Silicon Valley birthplace, as recognized by the IEEE.

Shockley Semiconductors.  Given his own high IQ, Shockley embarked on an ambitious hiring campaign, seeking to employ the smartest and brightest scientists available; not just PhDs, but PhDs from the finest universities at the very top of their class, bringing together a veritable brain trust of brilliant people.  The hiring process was not that straightforward, however, given that the majority of electronics-related companies and professionals were at that time based on the East Coast, thus requiring ads to be posted in The New York Times and the New York Herald Tribune.  He did initially try to recruit from his Bell Labs peers but, knowing his reputation as a difficult manager, no-one would join him.

Early respondents included Sheldon Roberts of Dow Chemical, Robert Noyce of Philco, and Jay Last, a former intern of Beckman Instruments.  Each candidate was required to pass a psychological test followed by an interview.  Julius Blank, Jay Last, Gordon Moore, Robert Noyce, and Sheldon Roberts started working in the April-May timeframe, and Eugene Kleiner, Victor Grinich, and Jean Hoerni during the summer; by September 1956, the lab had 32 employees, including Shockley.

Although never medically diagnosed by psychiatrists, Shockley's state of mind has been characterised as paranoid or autistic. All phone calls were recorded, and staff were not allowed to share their results with each other, which was not exactly feasible since they all worked in a small building.  At some point, he sent the entire lab for a lie detector test, which everyone refused.  He also lacked experience in business and industrial management and unilaterally decided that the lab would research an invention of his own, the four-layer diode, rather than developing the diffused silicon transistor that he and Beckman had agreed upon.

Barely six months in, discontent boiled over, leading seven of the employees to voice their concerns to Arnold Beckman, not to get rid of Shockley but to put a more rational boss between him and them.  The seven in question were Julius Blank, Victor Grinich, Jean Hoerni, Eugene Kleiner, Jay Last, Gordon Moore and Sheldon Roberts.  Their request might well have been granted had Shockley's Nobel prize not been announced in November, fanning the flames of Shockley's fame and already inflated ego.  Rather than rock the boat, Beckman chose not to interfere, instead telling the seven to accept things as they were.  At that time, Noyce and Moore stood on different sides of the argument, with Moore leading the dissidents and Noyce standing behind Shockley trying his best to resolve conflicts.  Shockley appreciated that and considered Noyce his sole support in the group, but the team started to lose its members, starting with Jones, a technologist, who left in January 1957 due to a conflict with Grinich and Hoerni.

In March 1957, Kleiner, who was also beyond Shockley's suspicions, asked permission ostensibly to visit an exhibition in Los Angeles.  Instead, he flew to New York to seek investors for a new company that he and the six others were by now contemplating.  Kleiner's father, who was involved in investment banking, introduced Eugene to his broker, who in turn introduced Kleiner to Arthur Rock at Hayden Stone & Co.  The team's original idea was to join an existing company, and Rock, who had already developed a side interest in investing in new companies, what today would be called startups, together with Alfred Coyle, also from Hayden Stone, took a strong interest in Kleiner's proposition of a seven-strong, pre-packaged team, believing that trainees of a Nobel laureate were destined to succeed.  Finding prospective investors, however, proved to be very difficult, given the US electronics industry was at that time concentrated on the East Coast and the California Group, as the seven became known, wanted to stay near Palo Alto.  Rock presented the group to 35 prospective employers, but no one was interested.

With the task of finding a backer proving hard, as a last resort on May 29, 1957, the group, led by Moore, presented Arnold Beckman with an ultimatum – solve the ‘Shockley problem’ or they would leave.  Moore suggested finding a professor position for Shockley and replacing him in the lab with a professional manager.  Beckman again refused, believing that Shockley could still improve the situation, later regretting this decision.

In June 1957, Beckman finally put a manager between Shockley and the team but by then it was too late as the seven were now emotionally committed to leave and embark on Plan B, namely creating their own startup.  Recognising, however, that they were followers not leaders, the group persuaded Bob Noyce, a born leader, to join them.  The now enlarged California Group met up with Rock and Coyle at the Hill Hotel in California and these ten people became the core of a new company.  Coyle, a ruddy-faced Irishman with a fondness for ceremony, pulled out 10 newly minted US$1 bills and laid them carefully on the table.  “Each of us should sign every bill”, he said.  “These dollar bills covered with signatures would be our contracts with each other.”

In August 1957, in a final throw of the funding dice, Rock and Coyle met with the inventor and businessman Sherman Fairchild, founder of Fairchild Aircraft and Fairchild Camera & Instrument Co.  Sherman, son of a rich entrepreneurial father who had made his fortune as a big investor in IBM, was a bright and equally entrepreneurial engineer who had amassed a small fortune during the war selling cameras for reconnaissance planes and suchlike.  Given that he had already developed a curious interest in semiconductors, Sherman sent Rock to meet his deputy, Richard Hodgson, who, risking his reputation, accepted Rock’s offer and within a few weeks all the paperwork and funding for the new company, Fairchild Semiconductor, had been sorted.

The capital was divided into 1,325 shares with each member of the eight receiving 100 shares, 225 shares went to Hayden, Stone & Co and the remaining 300 shares were held back in reserve.  Fairchild provided a loan of US$1.38 million and, to secure the loan, the eight gave Fairchild the voting rights on their shares with the right to buy them back at a fixed total price of US$3 million.

The eight left Shockley on September 18, 1957, and Fairchild Semiconductor was born.  Whilst there is no documentary evidence that he ever used the term, the group quickly became known as 'The Traitorous Eight'.  Shockley never understood the reasons for their defection, considering it to have been a betrayal, and allegedly never talked to Noyce or the others again.

With the help of a new team, Shockley brought his own diode to mass production the following year but, by then, time had been lost and competitors were already close to the development of integrated circuits (ICs).  In April 1960, Beckman sold the unprofitable Shockley Labs to the Clevite Transistor Company based in Waltham, Mass, bringing his association with the semiconductor industry to an end.

On July 23, 1961, Shockley was seriously injured in a car crash and, after recovery, left the company and returned to teaching at Stanford.  Four years later, Clevite was acquired by ITT who, in 1969, decided to move the Labs to West Palm Beach, Florida where it had an already established semiconductor plant.  When the staff refused to move, the lab ceased to exist.

Fairchild Semiconductors.  Founded in intrigue and setting up shop at 844 East Charleston Road, on the border of Mountain View and Palo Alto, Fairchild has a long history of innovation, having produced some of the most significant technologies of the second half of the twentieth century.  It quickly grew into one of the top semiconductor industry leaders, spurred on by the successful development of the silicon planar transistor.

Transistors, however, were already starting to develop their own 'tyranny of numbers' problem.  If you wanted to make a simple flip-flop, it needed four transistors but around ten wires to connect them up.  If two flip-flops were then interconnected, this needed not only twice the number of transistors and wires but also four or five additional wires to connect the two together.  So, four transistors needed ten wires, eight needed 25, and 16 needed 60-70 wires.  In other words, as the transistor count increased linearly, the number of connections grew super-linearly, roughly following a power law with an exponent between one and two.
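(As a rough, illustrative fit of those quoted numbers, not a calculation from the original sources: taking wires ≈ k × N^a, the points (4 transistors, 10 wires) and (8, 25) give a = log2(25/10) ≈ 1.3, and extrapolating to 16 transistors predicts about 10 × 4^1.3 ≈ 62 wires, consistent with the quoted 60-70.)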

Whilst transistors were relatively easy to mass produce, connections were much more difficult as the wires had to be soldered together by hand and took up a lot of space.  The industry's desire to build bigger and more complex systems was being held back by the difficulty in wiring everything together.  Up until then, no-one had really paid much attention to wiring, but connections were soon to become public enemy number one, driving the need for the integrated circuit.

Jack Kilby, at the rival Dallas-based semiconductor firm Texas Instruments, demonstrated in 1958 the possibility of building two transistors in the same piece of semiconductor material, but his transistors were wire bonded together, with no practical solution for the connection problem at the time.  That problem was solved by Bob Noyce with the help of Jean Hoerni (who provided the technique) and Jay Last (who eventually made it work).

Jean Hoerni had been working on a solution for stopping transistors going bad, due to the fact the transistor surface inside the package was unprotected, allowing particles to contaminate and degrade the device over time.  His solution was to protect the transistor surface with a passivation (protection) layer of silicon dioxide (SiO2), grown or deposited on top of the structure.  He then further realized that, rather than depositing the emitter and base regions on top of the semiconductor substrate, as with the current Mesa process, if the surface was completely covered with silicon dioxide, the emitter and base areas could then be selectively diffused.  The net result was a much flatter surface and a more readily automated process.

This (Planar) technology, announced in January 1959, would become the second most important invention in the history of microelectronics, after the invention of the transistor, and laid the blueprint for all future integrated circuits.  But in 1959, it went virtually unnoticed, other than to Noyce who recognised that such a layer of glass was an insulator, opening the door for the connecting wires to be laid on top and patterned just like a printed circuit board.

When Noyce filed his patent in April 1959, it triggered a corporate patent battle between Texas Instruments and Fairchild, though not between Kilby and Noyce, who were friends and held each other in high regard.  Texas Instruments claimed that Kilby's patent wording 'electrically conducting material such as gold laid down on the insulating material to make the necessary connections' was a pre-existing description of Noyce's patent claims and that Kilby had only used wire bonds as the quickest way to a prototype.  Had this assertion been upheld by the court, Noyce's later-dated patent would have been declared invalid.  As it transpired, Texas Instruments lost the argument, both patents were declared valid, and a cross-licensing agreement was reached between the two firms.

Kilby, by nature, was a very humble person and, even though his patent pre-dated Noyce’s, he generously announced publicly that he felt both he and Noyce jointly invented the integrated circuit, contrary to Texas Instruments’ management position.

In 1959, Sherman Fairchild exercised his right to purchase the founding members' shares, an event that turned former entrepreneurs and partners into ordinary employees, thereby destroying the team spirit and sowing the seeds of future tension.

There was still, however, one big problem yet to solve before integrated circuits could become a commercial reality, namely isolation; how to stop adjacent transistors interfering with each other.  Noyce delegated this thorny problem to Jay Last, who was running the R&D group at the time.  It was no easy task, taking some eighteen months before the first working device was produced on September 27, 1960.

Development also met with strong internal political resistance, with Tom Bay, the then Vice President of Marketing at Fairchild, accusing Last of squandering money. In November 1960, Bay demanded termination of the project, with the money saved to be spent on transistor development instead.  Moore refused to help, and Noyce declined to discuss the matter, leaving Last to fight the battle on his own.  Timing-wise, the conflict had flared up barely a month after Fairchild had announced the transition of its transistor production from Mesa to Planar technology, but Moore had refused to credit this achievement to Hoerni, fanning the flames of the already developing tensions between the eight founding partners.

Last continued to develop six more parts but these conflicts were the last straw and, flushed with their planar and isolation process success, he and Hoerni left Fairchild on January 31, 1961, to set up Amelco in Mountain View, California, with financing from Teledyne Corporation arranged by Arthur Rock.  Their plan was to build integrated circuits in support of Teledyne’s military business.  Kleiner and Roberts joined the pair a few weeks later.  With this high-level defection, the eight founding members had been split into two groups of four.

With just seven parts, Fairchild announced the world's first standard logic family of integrated circuits, direct-coupled-transistor-logic (DCTL), in March 1961, based on Hoerni and Last's resistor-transistor logic (RTL) planar process and sold under the µLogic trademark.  One of these devices, the µL903 3-input NOR gate, became the basic building block of the Apollo guidance computer.  Designed by MIT and built by Raytheon, it needed 5,000 devices and was the first major integrated circuit application.

Fairchild’s lead, however, was to prove short-lived as David Allison, Lionel Kattner and some other technologists also left at around the same time as Hoerni and Last to start up Signetics (Signal Network Electronics).  One year later, in 1962, the firm announced a much-improved, second-generation, logic family, the SE100 Series diode-transistor-logic (DTL).  Fairchild quickly counter-attacked with their own DTL family, the 930 series, undercutting Signetics and rendering them unable to compete against Fairchild’s marketing strength.

Signetics’ most famous legacy part was the NE555 timer.  Designed in 1971, the 555, along with the ubiquitous TTL 7400 Quad 2-input NAND Gate, was probably the most popular integrated circuit ever sold.  Signetics was subsequently bought by Philips in 1975.

Early integrated circuits were housed mainly in either TO-5 or TO-18 adapted metal can transistor packages.  These worked fine for 3-lead devices but scaling them up to provide more and more connections proved to be limiting, given the can could only be made so large and the radial leads could only be packed so tight.  Ten leads were about the practical limit and that would not support the more complicated integrated circuits in the pipeline.  It fell to Fairchild’s Don Forbes, Rex Rice, and Bryant “Buck” Rogers to provide a solution to this problem in 1964, via the invention of the now familiar dual-in-line package (DIP), the little oblong ‘millipedes’ that would crawl across circuit boards for the next 40 years.

The idea for the package came from the ceramic flatpack design devised in 1962 by Yung Tao, a Texas Instruments engineer, as an industry standard for surface-mount integrated circuits for the US military.  This concept was adapted for through-hole, rather than surface mounting, with an eye for ease of handling for electronics manufacturers, ease of PCB layout design getting power to the ever-increasing number of integrated circuits and routing their signals around the board, and low cost, given the growing consumer integrated circuit market.  The 0.1″ (2.54 mm) package pin spacing left plenty of room for PCB tracks to be routed between pins and the 0.3″ (7.62 mm) spacing between rows of pins offered room for other tracks.

Fairchild launched the dual in-line package in 1965, originally in ceramic, but it took off with a vengeance when Texas Instruments introduced a plastic resin version, driving the unit cost down dramatically.  As a result of great design, low cost and support for increasingly complex integrated circuits, the plastic dual-in-line package became the mainstay industry standard, with its basic 14-pin design extended to support more leads, up to 64 pins in a 0.6” wide form factor, and more complex integrated circuits.  It was eventually surpassed by second-generation surface mount devices in the late 2000s as integrated circuit complexity and pin count requirements surpassed the dual-in-line package’s capability.

In the early days, with as many as 15,000 die on a single wafer, wafer fab costs were not as significant as assembly and test, hence the need to find ways to reduce labour costs as a matter of survival.  After some early failed ventures in the US, e.g. at a Navajo Indian reservation in Shiprock, New Mexico, and early attempts at automation, it was the Far East that ultimately proved successful, in what was also Fairchild's third innovative move, when Bob Noyce, who had an investment position in a small radio company in Hong Kong, suggested to Charlie Sporck that he and Jerry Levine take a look at Hong Kong.

Attracted by the low labour cost, non-unionised facilities, low-cost western-educated technicians, good engineering schools and favourable government and tax incentives, Fairchild set up the industry's first Far East assembly and test operation in 1963, in an old rubber shoe factory on the Kowloon side of Hong Kong, under the direction of Ernie Freiburg and run by Norm Peterson, previously manager of Fairchild's crystal-growing operation.  Hong Kong also had the added benefit that any fall-out from testing could be sold to the local toy industry.  Other semiconductor manufacturers subsequently followed Fairchild to the Far East, mostly in Malaysia.

Blank, Grinich, Moore and Noyce stayed with Fairchild throughout most of the 1960s but in March 1968, Moore and Noyce decided to leave, turning to Arthur Rock for funding, setting up NM Electronics in the summer of 1968.  One year later, NM Electronics bought the trade name rights from the hotel chain Intelco and took the name of Intel.

Grinich also left in 1968, first for a short sabbatical and then to teach at Berkeley and Stanford, where he published the first comprehensive textbook on integrated circuits.  But he never really lost the startup buzz and quit academia in 1985 to co-found and run several new companies, including Escort Memory Systems developing industrial RFID tags.

Blank, the last of the Eight, eventually left Fairchild in 1969 to become a consultant to new startup companies.  Having also grown tired of this, seeking a more hands-on role, he too quit and co-founded Xicor in 1978 to make EEPROMs.

As for the original four defectors, Hoerni headed Amelco until the summer of 1963 when, after a conflict with the Teledyne owners, he left for Union Carbide Electronics.  In July 1967, supported by the watch company Société Suisse pour l’Industrie Horlogère (SSIH), the predecessor of Swatch Group, he went on to found Intersil, pioneering the market for low-power custom CMOS circuits, some of which were developed for Seiko, kick-starting the Japanese electronic watch industry.

Hoerni then went on to set up a European version of Intersil, called Eurosil, financed in great part by SSIH's desire to build a fab in Munich, not far from the Swiss watch manufacturing centres.  Eurosil was eventually sold to Diehl in late 1975 and Hoerni left in 1980, returning to the West Coast to form a new startup called Telmos, to produce specialised semicustom products covering the linear interface between sensors and the microprocessor and digital logic core, as well as high-voltage, high-current drivers.

Last continued at Amelco, taking on a twelve-year tenure as Vice President of Technology at Teledyne, Amelco's parent, before founding Hillcrest Press, specialising in art books, in 1982.  Roberts also left to set up his own business and from 1973 to 1987 served as a trustee of the Rensselaer Polytechnic Institute.

That left just Kleiner, who also left to pursue a career in financing the many early-stage entrepreneurial firms that were starting to spring up on the West Coast of America, teaming up with Thomas (Tom) Perkins, head of R&D at Hewlett Packard, to form Kleiner Perkins with an office on Sand Hill Road in Menlo Park, an area that would become the Venture Capitalist's home.  Thus, whilst Arthur Rock and Hayden Stone could arguably be credited with setting up the first Venture Capitalist firms, Kleiner Perkins was the first Venture Capitalist to have a physical office in Silicon Valley.  The firm would go on to fund Amazon, Compaq, Genentech, Intuit, Lotus, Macromedia, Netscape, Sun Microsystems, Symantec and dozens of other companies.

As for today, Amelco, the original Fairchild spinout, after numerous mergers, acquisitions and renaming, no longer exists, but its remnant IP has survived and is now owned by Microchip.

Silicon Valley.  Last, Roberts, Kleiner and Hoerni’s collective decision to leave and compete against Fairchild, just over three years after the company was founded, was the first of what would be many subsequent defections and spinouts, eventually known as ‘Fairchildren’, directly or indirectly creating dozens of corporations, including Intel and AMD.  In doing so, Fairchild sowed the seeds of innovation across multiple companies in an area that would eventually become known as Silicon Valley.

Local pubs, restaurants, and social gathering hot spots played a key role in the ‘work hard, play hard’ Silicon Valley ethos at the time, where industry folk would head after work to have a drink, gossip, brag, trade war stories, talk shop, exchange ideas, change jobs, party and develop new business ventures.  Key venues included the Wagon Wheel, Lion & Compass and Ricky’s, along with the Peppermill and Sunnyvale Hilton.

Stanford University, or more accurately Frederick Terman, also played a huge catalytic role, propelled by his farsighted vision for academia to develop a new relationship with the science and technology-based industries dependent on brain power as their main resource.  More than that, he further recognised the need to develop local industry, not just by building a community of interest between the faculties and industry but also by encouraging new enterprises, what today we would call startups, to cluster around the university via the provision of low-cost premises, often rent-free other than the local property taxes.

Whilst it is unclear who came up with the Silicon Valley name, Don Hoefler, a technology news reporter for the industry tabloid Electronic News, is credited with popularising the name in a column he wrote in 1971 about the valley’s semiconductor industry.  He also played a fundamental role in promoting the area’s innovative qualities and was one of the first writers to describe the Northern Californian technology industry as a community.

The Fairchild Legacy.  Throughout the first half of the 1960s, Fairchild was the undisputed semiconductor leader, setting the bar for others across all aspects of the industry, be it design, technology, production or sales.  Early sales and marketing efforts had been relatively small and military-oriented, but that changed in 1961 when Noyce and Bay recruited a group of bright and aggressive salesmen and marketing specialists including Jerry Sanders III and Floyd Kvamme.  These two newcomers transformed Fairchild's sales and marketing departments into some of the most effective in the industry.

One of the industry’s pivotal moments was Fairchild’s dramatic entry into the consumer TV market.  Attracted by the high-volume potential, Jerry Sanders wanted to replace the then tube (valve) CRT driver with a transistor, but the target price needed was US$1.50.  Transistors at that time were selling to the military for US$150.  In what can only be regarded as a massive leap of faith, Noyce’s instruction to Sanders was “Go take the order Jerry, we’ll figure out how to do it later.  Maybe we’ll have to build it in Hong Kong and put it in plastic, but right now let’s just do it.”

In 1963, Fairchild hired Robert (Bob) Widlar to design analog operational amplifiers using Fairchild's digital IC process.  Despite its unsuitability, Widlar, in partnership with process engineer Dave Talbert, succeeded and went on to adapt the process to produce two revolutionary parts, the world's first operational amplifiers, the µA702 in 1964 and µA709 in 1965.  With these two parts, Fairchild now dominated the analog integrated circuit market to go with its digital lead, built first on its µLogic RTL family and then on its 930 series DTL.  In April 1965, Gordon Moore famously published his article 'Cramming More Components onto Integrated Circuits' in Electronics Magazine.  Later to be known as Moore's Law, this was basically an extrapolation of four points on a graph of IC transistor density over time.

Fairchild's digital technology lead was, however, being overtaken by Texas Instruments who, having fallen behind in RTL and DTL, had decided to copy Sylvania's Ultra High Performance (SUHL) transistor–transistor logic (TTL) circuit design and adapt it to its own process to counter the announcement of Fairchild's third generation 9000 series TTL logic.

Headed up by Stewart Carrell, Texas Instruments set up a ‘design factory’ that could churn out several new designs a week, mostly by guessing the W/L ratios, laying out the circuits, correcting them if the prototypes did not work, and zeroing in on a specification that manufacturing could support.  The design factory was supported by an optical photomask generator, as opposed to manual rubylith layout, that could create a photographic chip layout very quickly, and a ‘quick-turn’ fab line to rapidly turn out parts.

To strengthen their attack, Texas Instruments masterminded a marketing coup over Fairchild by persuading other semiconductor firms to second source its TTL rather than Fairchild's competing product.  In this one masterly move, Texas Instruments established its 74 Series version of TTL as the de facto third generation industry standard, leaving Sylvania's SUHL, Fairchild's 9000 series and other proprietary alternatives behind.  It then proceeded to masterfully neutralise the entire second-source movement by providing every engineer with a copy of its ubiquitous orange book (The TTL Data Book) and its twice-yearly 'must attend' TTL seminars in all major cities, not just in the US but globally, supported by an aggressive new product introduction programme.

By always ensuring any bill of materials (BOM) included at least one TTL part that was only available from it, Texas Instruments was able to stay one step ahead of the competition and ‘own’ the TTL market for the best part of 30 years, until standard logic eventually fell victim to the 1980’s Application Specific Integrated Circuit (ASIC) revolution.

In the meanwhile, with the semiconductor division starved of CapEx, Noyce's position on Fairchild's executive staff was consistently being compromised by Sherman Fairchild's corporate interference and lack of company support.  Many of the Fairchild management team were increasingly upset by Sherman's corporate focus on unprofitable ventures at the expense of the semiconductor division.  The firm then suffered its ultimate humiliation in July 1967 when the semiconductor industry fell victim to the first of its legendary recessions, during which time the company both became unprofitable and was forced to concede its technology leadership to Texas Instruments.

Charles Sporck, Noyce's Operations Manager and reputed to run the tightest operation in the world, together with Pierre Lamond, left in early 1968 to join the already departed Widlar and Talbert at National Semiconductor, both having grown disillusioned with the way things were going.  This triggered Noyce and Moore's departure from the firm later that same year and was to prove a pivotal moment in its eventual demise.  The collective exodus of Sporck, Noyce and Moore, along with so many other iconic executives, signalled the end of an era and prompted Sherman Fairchild to bring in a new management team, led by C. Lester Hogan, then vice president of Motorola Semiconductor.  Of the eight original founders only Julius Blank now remained, although he too would be gone within a year.

Hogan’s arrival, and the subsequent displacement of Fairchild managers, demoralised the firm even further, prompting a further exodus of employees to start up a host of new companies.  Nicknamed ‘Hogan’s Heroes’, the ultra-conservative Motorola executives immediately clashed with Jerry Sanders III who, with his boisterous flamboyant style, was responsible for Fairchild’s sales.

Whilst initially slow to respond to the changing market, under Sanders' direction Fairchild had embarked on a strategy of leapfrogging Texas Instruments by focusing on more complex large scale (LSI), 30-plus gate parts, instead of simpler small and medium scale (SSI/MSI), under-30 gate devices, a strategy that was proving popular and successful with engineers, forcing Texas Instruments to recognize the threat and copy all of Fairchild's 9300 series parts under 74 series numbers; for example, the 9300 became the 74195 and the 9341 the 74181.

Sanders' whole strategy collapsed, however, when Hogan capitulated to Ken Olsen, founder and CEO of Digital Equipment Corporation (DEC) and a key Fairchild customer.  Olsen wanted Fairchild to give up on its proprietary TTL technology and second source Texas Instruments' 74 Series TTL instead.  Against Sanders' wishes, Hogan agreed, signing the death warrant for Fairchild's TTL strategy.  Sanders was, understandably, absolutely livid.  "You've just killed the company Ken", Sanders fumed.

Hogan’s betrayal was the last straw for Sanders and he, together with a group of Fairchild engineers, quit to start Advanced Micro Devices (AMD).  With Sanders installed as President, one of his first moves was to declare the mantra ‘people first, revenues and profits will follow’ and give every employee stock options in the new company, an innovation at the time.

In a subsequent boardroom coup, Wilf Corrigan, who had moved with Lester Hogan as director of Discrete Product Groups, succeeded Hogan as President and CEO in 1974, but Fairchild’s fate continued to decline, dropping to sixth place in the semiconductor industry by the end of the decade.

In the summer of 1979, with the semiconductor market riding high on its fourth year of successive double-digit growth, Fairchild fell victim to a hostile takeover bid from Gould, a major US producer of electrical and electronic equipment, hell-bent on a diversification strategy.

Unable to fight off the buy-out, Corrigan elected instead to seek the best price for the shareholders and the firm was eventually sold to Schlumberger, a French oil services company, for US$350 million or US$66 per share vs. the Gould US$54 (later increased to US$57) offer.

Schlumberger, however, proved unable to inject vitality into the deteriorating company and it continued to lose money.  Corrigan departed in February 1980 and, once his one-year non-compete severance obligation was over, he and Rob Walker co-founded ASIC pioneer LSI Logic Corporation in 1981.

Schlumberger initially replaced Corrigan with one of its own managers, Tom Roberts, who unsuccessfully tried to run the firm like a heavy equipment company.  Two years later, in 1983, the firm finally called in Donald W. Brooks, a Texas Instruments veteran, to reverse its decline but by then Fairchild Semiconductor was a legend in trouble, lagging in leading-edge technologies and losing money, even as the rest of the semiconductor industry was booming.

The firm was eventually sold to National Semiconductor in 1987 for one-third of the price paid by Schlumberger eight years earlier.  With the Fairchild brand now dead, Brooks left, and the company was back in the hands of former Fairchild General Manager, Charlie Sporck.

Kirk Pond became COO at National Semiconductor in 1994 and went on to lead the successful management buyout in 1997.  With the Fairchild name revived, Pond continued as President and CEO until 2005, when he became Chairman, before retiring a year later in 2006.

He was succeeded by Mark Thompson until the firm was acquired by ON Semiconductor in September 2016.  ON Semiconductor was the discrete, standard analog and standard logic device division spun out from Motorola’s Semiconductor Components Group in 1999.

The Silicon Valley Legacy.  The three key inventions that changed the world in the 1960s were the integrated circuit, startup fever and venture capital.  No doubt these inventions would have happened somewhere else in the world, at some other time, by somebody else, but the fact they all occurred within a short space of time, in the Palo Alto region, driven by the entrepreneurial spirit of the traitorous eight and the many other key contributors, along with the Stanford University ethos, is what made Silicon Valley so special and unique.

But what if Shockley's parents had moved to Colorado, Nevada or West Virginia to pursue their mining careers on their return to the United States from London rather than Palo Alto?  Would Silicon Valley have developed there instead?

What if Shockley had chosen to set up Shockley Semiconductors on the East Coast, where there was an already well-developed infrastructure, rather than Palo Alto, which had none?  From an infrastructure perspective, the East Coast was far better positioned to have hosted Silicon Valley.

What if the Russians, Europeans or Japanese had invented the integrated circuit first?  These regions were known to be working on this at the time.  Could Silicon Valley have sprung up in the USSR, Europe or Tokyo instead?

What if Frederick Terman had not had the foresight to develop a community of interest between Stanford's faculties and industry, and to encourage new enterprises to cluster around the university?

What would the world look like today had any of these scenarios happened?

Clearly, fate played a role in bringing Shockley and semiconductors to Palo Alto, but the West Coast proved a far more fertile environment for the risk-taking entrepreneurial spirit of the traitorous eight and their peers than the more risk-averse and mature East Coast business and financial community.

All eight of the original founders eventually left Fairchild and went on to become serial entrepreneurs, co-founding between them a wide variety of new startups, both in semiconductors and venture capital, surrounded by brilliant engineers who wanted to start new companies, prove themselves and change the world, stoking the startup fever boom driven by Shockley Semiconductor as the embryo, Fairchild Semiconductor as the incubator and the Palo Alto infrastructure as the catalyst.  The rest, as they say, is history.

The Lunch That Changed The World

On February 14, 1956, Arnold O. Beckman and William B. Shockley announced to a luncheon audience of scientists, educators, and the press at San Francisco’s St. Francis Hotel that they were founding Shockley Semiconductor Laboratory in Palo Alto, California.

The entrepreneurial spirit of the Valley, and the rise and fall of Fairchild, is best summed up by the following comment from Rob Walker, co-founder of LSI Logic: “It’s amazing what a few dedicated people can accomplish when they have clear goals and a minimum of corporate bullshit.”

 

Malcolm Penn

22 Dec 2021

With acknowledgment and gratitude to my many industry colleagues, old and new, who proofed early drafts and offered much-appreciated additional insights, fact checks and clarifications.  Happy 74th birthday!

Also read:

Future of Semiconductor Design: 2022 Predictions and Trends

The Semiconductor Ecosystem Explained

Are We Headed for a Semiconductor Crash?


Upcoming Webinar: Optimized Chip Design with Main Processors and AI Accelerators
by Kalar Rajendiran on 02-08-2022 at 10:00 am

Expedera DLA IP Benefits

Using the right tool for the job can be extremely important. Well, maybe not in the case of the famed chef Martin Yan who is notorious for using just one knife—a razor sharp wide blade cleaver that doubles as a spatula—for preparing anything and everything he cooks. For the rest of us, though, the right tools can make all the difference.

The wrong choice of tool has stymied the prospects of many a product. Maybe there were justifiable reasons for the choice. Maybe the product concept was ahead of its time and too early for the market. Maybe the ecosystem at that time did not offer a better option. Perhaps you have your own list of such products. One in particular comes to mind that many of you may not be aware of.

More than a decade before Apple launched the iPad, a similar product was conceived at National Semiconductor (now part of TI). It was called the WebPad, an always-on wireless tablet device. For practical purposes, the intended use cases for the WebPad were similar to the iPad's. National had developed the reference design and manufactured a whole bunch of samples for its OEM customers to test and evaluate. National's goal was to create traction for this product so the company could sell more chips. There was serious interest from many customers. But the Achilles heel of the product was the processor. X86-based processors were available in-house, since National had acquired Cyrix, an x86-architecture-based processor company, a few years earlier. So, that was the processor of choice. From a PPA perspective for the intended application, it scored well on the performance metrics. But on power and area, not so well. The sample devices were power hungry and bulky. There are probably any number of reasons why the WebPad died on the vine, but the choice of processor makes for an interesting case study. For a product that is supposed to be an always-on mobile tablet, weight, form factor and battery life per charge are of paramount importance and play a deciding role in the product's market viability.

Could a different processor have been considered for the WebPad? Maybe. Arm was nascent at that time and was just beginning its expansion into the mobile market. Arm may not have matched the x86 on performance in those days. But the applications were not that demanding, and x86 was likely overkill. And Arm would have done well on the power and area metrics. Fast forward to current times, and applications are extremely demanding on all three metrics of PPA. AI-driven edge applications pose stringent requirements in terms of latency, deterministic responses, energy efficiency, memory resources and maximum throughput. As there are many options to choose from, there is no excuse for undermining a great product idea by making the wrong processor choice.

For today’s and future AI-enabled applications, is the main processor still the best fit in every case? Can custom instructions extensions breathe new life into main processing? When does it make sense to use a hybrid core architecture with a main processor along with AI accelerators? You will find the answers to these questions at an upcoming webinar hosted by Expedera and Andes Technology.

Expedera

Expedera provides scalable neural engine semiconductor IP that enables major improvements in performance, power, and latency while reducing cost and complexity in AI-inference applications. Expedera’s Origin™ deep learning accelerator (DLA) products are easily integrated, readily scalable, and can be customized to application requirements. The solutions also reduce the memory requirement, which is very important for embedded devices at the edge.

While its DLA products can work with any CPU architecture, they deliver better efficiency alongside processors that support custom instructions.

Andes Technology

Andes Technology is one of the world’s leading embedded processor intellectual property suppliers. Andes offers high-performance, low-power 32/64-bit processors and associated SoC platforms that serve the rapidly growing market for embedded systems applications.

Its processor cores, including RISC-V cores that support custom extensions, can fulfill the requirements of many AI applications. In other cases, an architecture that pairs RISC-V cores with an Expedera DLA core leads to a better end solution.

 

 

Also read:

CEO Interview: Da Chaung of Expedera

A Packet-Based Approach for Optimal Neural Network Acceleration

The Roots Of Silicon Valley


Accellera at DVCon U.S. 2022 in the Metaverse!

Accellera at DVCon U.S. 2022 in the Metaverse!
by Daniel Nenni on 02-08-2022 at 6:00 am

Gather.Town

The premier verification conference and exhibition is coming up and of course Accellera plays an important role. This year DVCon will again be virtual, which is unfortunate, but I must say, as a long-time attendee, that this year’s program really stands out. In fact, there is a new addition worth mentioning: the Metaverse without the headset. I tried the demo, it is very cool, and it will be interesting to see it in play:

“DVCon U.S. 2022 is pleased to be partnering with Gather.Town to enhance the exhibit hall and networking experience for companies and attendees. The virtual pages used in 2021 will still be available for our sponsors/exhibitors to upload supplemental documents for on-demand viewing and to chat with attendees at any time. The addition of Gather.Town will make spending time with attendees just as easy as in real life. Allowing attendees to walk in and out of conversations in a natural and seamless way. ”

And here are the Accellera related events. I hope to virtually see you there!

Portable Stimulus Working Group Tutorial:

PSS in the Real World

Monday, February 28 9:00-11:00am

The tutorial will highlight the power and flexibility of Accellera’s Portable Stimulus Standard by walking through several real-world examples. Beginning with a brief overview of the standard, presenters will show how to use PSS to model stimulus for a variety of applications, from which multiple target-specific test implementations may be generated.

UVM-AMS Working Group Workshop:

An Update on the Accellera UVM-AMS Standard

Monday, February 28 11:30am-12:30pm

The UVM-AMS Working Group was formed with a charter to develop a standard that will provide a unified analog/mixed-signal verification methodology based on UVM, with a major focus on transient analysis. The UVM-AMS standard will provide a comprehensive and unified analog/mixed-signal verification methodology based on UVM to improve analog mixed-signal (AMS) and digital mixed-signal (DMS) verification of integrated circuits and systems. This will encourage support by tool and IP providers, offering ready-to-use analog/mixed-signal verification IP that can be integrated easily into a UVM-AMS testbench. It will raise the productivity and quality of analog/mixed-signal verification across projects and applications, thanks to the reuse of proven verification components and stimuli. In this workshop, the working group will share the findings, requirements and ideas collected so far, along with the plan for the next steps in developing the proposed standard. Aspects under consideration for the UVM-AMS standard will be discussed at a high level in this workshop.

In addition, an example will be provided to illustrate how UVM-AMS may be deployed to easily augment an existing UVM environment to verify an Analog/Mixed-Signal device under test.

Presenters will conclude with an opportunity for attendees to ask questions and comment on the proposed standard.

IP Security Assurance Working Group Workshop:

An Overview of the Security Annotation for Electronic Design Integration (SA-EDI) Standard

Monday, February 28 1:00-2:00pm

The importance of security in the electronic systems many of us rely on has become obvious to semiconductor design and manufacturing companies, but most hardware security assurance practices in industry are still performed manually using proprietary methods. This approach is expensive, time consuming, and error prone given the ever-increasing complexity of systems. To address the issue, the Accellera IP Security Assurance (IPSA) Working Group was formed in 2018 by a team of security and EDA experts to develop a general, portable IP security specification standard that describes an IP’s security concerns (threat model) and guides EDA vendors on how to produce security assurance collateral and use it to automate security verification. The specification was approved as an Accellera standard for Security Annotation for Electronic Design Integration (SA-EDI) in 2021.

During this workshop, the working group will give an overview of the standard, covering the related collateral, the methodology, a case study of the standard’s application, and the standard’s roadmap.

Functional Safety Working Group Workshop:

An Update on Accellera’s Emerging Functional Safety Standard

Monday, February 28 2:30-3:30pm

This workshop presents an update on the work performed by Accellera’s Functional Safety Working Group over the past year and gives a preview of the white paper the group is planning to publish in 2022. The presentation first introduces the formalization of the Failure modes, effects, and diagnostic analysis (FMEDA) process and how it has led to the initial high-level definition of the data model, which will be the basis for the emerging functional safety standard.

The workshop will then provide detail on the data model and describe the necessary attributes to perform an FMEDA, followed by a description of some of the methodology discussions that are captured or assumed in the data model.

The workshop will also explore some directions connected to the development of the Functional Safety data format standard that the working group has identified and that will form the basis for the next steps for the working group.

UVM Working Group Birds of a Feather:

Wednesday, March 2 1:00-2:00pm

During the UVM Birds of a Feather meeting at DVCon U.S. 2021, the Accellera UVM Working Group heard from users how backward compatibility issues held back migration to the latest library.  The Working Group is preparing to release a new library version (targeted for summer 2022) that reduces these issues greatly. At this meeting, the Working Group will present the expectations for this library, including the few remaining situations that may require user code updates, to again get feedback from the user community. There should also be time remaining for an open Q&A. Attendance to the Birds of a Feather is free, but registration through DVCon is required to access the platform.

About Accellera

Accellera Systems Initiative is an independent, not-for-profit organization dedicated to creating, supporting, promoting, and advancing system-level design, modeling, and verification standards for use by the worldwide electronics industry. We are composed of a broad range of members that fully support the work of our technical committee to develop technology standards that are balanced, open, and benefit the worldwide electronics industry. Leading companies and semiconductor manufacturers around the world are using our electronic design automation (EDA) and intellectual property (IP) standards in a wide range of projects in numerous application areas to develop consumer, mobile, wireless, automotive, and other “smart” electronic devices. Through an ongoing partnership with the IEEE, standards and technical implementations developed by Accellera Systems Initiative are contributed to the IEEE for formal standardization and ongoing governance.

Also read:

Accellera Unveils PSS 2.0 – Production Ready

Functional Safety – What and How

An Accellera Update. COVID Accelerates Progress


Silicon Catalyst Fuels Worldwide Semiconductor Innovation

Silicon Catalyst Fuels Worldwide Semiconductor Innovation
by Mike Gianfagna on 02-07-2022 at 10:00 am

Silicon Catalyst Fuels Worldwide Semiconductor Innovation

Silicon Catalyst just announced the addition of six new companies to its semiconductor industry incubator. The focus areas for these companies are worth noting, as is the broad geographic footprint of the group. I’ll get to this detail in a moment, but first I’d like to step back a bit and take a closer look at this remarkable organization and what it has accomplished. We all know semiconductors are fueling the development of life-changing and world-changing technology. Without this innovation pipeline, many of the amazing new products we enjoy would simply not exist. If you want more on this, just Google chip shortage. I had the opportunity recently to speak with Pete Rodriguez, Silicon Catalyst CEO. Before I get into the recent announcement, let me share Pete’s perspective on how Silicon Catalyst fuels worldwide semiconductor innovation.

“Best Time Ever to Be in the Semiconductor Industry”

Pete Rodriguez

This is the comment that began my conversation with Pete. He was very pleased with Silicon Catalyst’s growth in 2021 and is very bullish about the outlook for the Incubator in 2022. The term “explosive growth” was used. Pete pointed out that Silicon Catalyst is the only incubator in the world (among thousands) that is focused exclusively on semiconductors. Given how hot semiconductors are across the globe, it’s great to have such a unique source of startups.

Pete ran down some of the recent additions to the organization, including Matrix Capital Management and Sony Semiconductor Solutions as Strategic Partners. Beyond the current nine, there will be several more Strategic Partners in 2022. Pete also pointed out that the Silicon Catalyst Advisor Network is second to none, being two orders of magnitude larger than the leading incubators in the world (from a semiconductor perspective). This organization has grown to over 220 members and includes Wally Rhines, the recipient of this year’s Morris Chang Exemplary Achievement Award from the Global Semiconductor Alliance. Their ecosystem of In-Kind Partners grew from 14 four years ago to over 54 today. These are the organizations that provide preferred access to the tools, technologies and services semiconductor startups need to get to market. This is the primary mission of Silicon Catalyst. It’s what they do best. There will be more In-Kind Partners announced soon.

From an international perspective, there was a successful launch of Silicon Catalyst UK, covered by SemiWiki here. This adds to the joint venture in China and the team in Israel. Pete explained the organization also has a university program with 30 institutions that will increase this year to over 50. There are now 46 companies in the Silicon Catalyst incubator domestically and 35 in China. These organizations are getting funded and getting to a product. Pete reported that the gross market value of the portfolio is now over $1.25 billion, starting from zero about 6 ½ years ago. Fantastic progress in a very important area.

Six New Entrants to the Incubator

Now a bit about the recent announcement. Six new companies have been admitted to the Silicon Catalyst incubator. They are located in Argentina, Canada, the US, Israel, Singapore, and Switzerland, so Silicon Catalyst has quite a worldwide footprint. The application areas of these six companies also show a lot of diversity. Here is a summary:

  • ApLife Biotech- Argentina “Becoming World Leaders in Discovery for Biosensors”

Aplife Biotech manufactures synthetic DNA-derived molecules and large combinatorial libraries in predefined locations for mass-screening of important biological molecules.

  • Lemurian Labs – Canada “Building a next-gen AI Accelerator to enable deep learning on the edge”  

At Lemurian, our goal is to make deep learning affordable and available for everyone, from the individual researcher to industry.

  • NanoHydro Chem – USA “Energy Storage Solutions”

NanoHydroChem is an advanced materials company developing and commercializing nanomaterials for energy storage applications.

  • RAAAM – Israel “Providing the highest-density embedded memory in any standard CMOS technology”

RAAAM offers the highest-density embedded memory in any standard CMOS process, requiring no additional process steps or cost.

  • Siloxit – Singapore “Zero-touch security that works”

Siloxit was founded in 2020, focused on delivering IoT devices and systems for secure high-value, high-volume infrastructure applications.

  • Synthara.AI – Switzerland “Delivering server-class, rapidly-customizable AI accelerators for the next-generation of edge inference applications”   

Synthara offers highly scalable, rapidly customizable and energy-efficient AI accelerators for extreme edge applications such as hearing aids, wearables and bio-medical monitoring.

That’s quite a lineup. Now you know how Silicon Catalyst fuels worldwide semiconductor innovation. You can learn more about Silicon Catalyst and their unique programs here.

Also read: 

CEO Interview: Pete Rodriguez of Silicon Catalyst

Webinar: Investing in Semiconductor Startups

 


Future of Semiconductor Design: 2022 Predictions and Trends

Future of Semiconductor Design: 2022 Predictions and Trends
by Kalar Rajendiran on 02-07-2022 at 6:00 am

IP Management Tools Survey

Predictions and trends create the forces that accelerate innovations and keep the industry moving forward. We are all used to hearing of important issues and challenges, usually in the context of solutions offered by various vendors. The SemiWiki forum plays its role in bringing awareness of all of the above to its audience. For example, many companies make presentations on a regular basis about design related challenges and solutions and SemiWiki covers many of those. But a recent webinar by the Methodics division of Perforce is different. It is different because it presented key insights gathered from a broad cross-section of the industry.

The webinar titled “Future of Semiconductor Design: 2022 Predictions and Trends” was presented by Robin Butler, General Manager of Methodics at Perforce. Robin reported the top issues, trends, challenges and solutions learned from a survey of the industry. The value of such a survey depends on how well the industry is represented in it. The following is the breakdown of the representation.

Roles: Engineering Management (32%), Design Engineering (34%), CAD Management (13%), IT Management (1%), Executive Management (10%) and others such as Functional Safety Managers.

Experience level: from 0-2 years (13%), 3-5 years (10%), 6-10 years (14%), and 11 years or more (62%).

Companies: under $500 million annual revenue (44%), $501 million to $5 billion (26%), and over $5 billion (30%).

No matter what role you play within the semiconductor ecosystem, you are likely to find the results of the survey interesting, whether as (a) inspiration to enhance an existing product, build a new product to bridge a gap, ride a trend or solve an important issue, or (b) guidance for adopting best design practices and asset management tools and techniques.

This post is a synthesis of the salient points I garnered from the webinar.

Most Important Issues

The two most important issues companies are facing are time-to-market pressure and effective reuse of IP. While IP reuse is an effective way to accelerate time to market, companies need to implement formal IP reuse strategies. The lack of such formal reuse strategies, processes and supporting tools is impeding the growth of semiconductor design productivity. This is compounded by the fact that semiconductor design capacity is increasing at a rapid rate. According to a recent study by the University of Michigan, semiconductor design productivity is increasing at a rate of 28% annually, but semiconductor design capacity is increasing even faster, at 58% annually.

A formal IP reuse strategy is becoming a must to deliver on time-to-market demands and close that gap between design capacity and design productivity. 
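To put those two growth rates in perspective, here is a minimal back-of-the-envelope sketch (mine, not from the webinar) that compounds the 28% productivity figure against the 58% capacity figure over five years:

```python
# Back-of-the-envelope illustration (not from the survey): compound the quoted
# 28% annual growth in design productivity against the 58% annual growth in
# design capacity to see how quickly the gap widens without better IP reuse.
productivity_growth = 0.28
capacity_growth = 0.58

productivity = capacity = 1.0  # both normalized to today's level
for year in range(1, 6):
    productivity *= 1 + productivity_growth
    capacity *= 1 + capacity_growth
    print(f"Year {year}: capacity has pulled {capacity / productivity:.2f}x ahead of productivity")
```

After five years the ratio is roughly 2.9x, which is the shortfall that formal reuse strategies, processes and tooling would have to absorb.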

Trends

Companies are increasingly required to meet ISO 26262, ITAR, and other compliance/functional safety standards. This is understandable given that many of the markets driving semiconductor growth are for consumer-oriented applications. A design and implementation tools suite that can enable, automate and ensure traceability for semiconductor design is becoming critical.

The global embedded system market is expected to grow by 6.3% to $137.31 billion by 2027. Embedded software is becoming part and parcel of many of today’s products, requiring hardware and software to be bundled together. As hardware designers and software developers collaborate to create the product offering, configuration management is essential to handle the interdependencies.

A majority of the survey respondents stated that 40+% of the die in a typical chip is made up of custom circuitry. And analog component reuse is becoming more common to expedite the design of complex mixed-signal SoCs. In other words, more analog is getting integrated into SoCs.

A significant portion of survey respondents indicated that more than half of their job requires IP integration. A comprehensive IP lifecycle management platform would make the IP integration job easier by helping find, qualify and integrate the optimal IP for the job.

What a difference a year or two makes. Implementation of 2.5D designs is trending upward. A little more than a third of the survey respondents are considering or already implementing 2.5D designs for their products. 2.5D designs are becoming more feasible and offer a way to maintain the level of SoC integration as Moore’s Law slows.

Challenges

Finding relevant IP for reuse is a challenge. Many survey respondents either reuse IP from a previous project or ask a coworker for recommendations. While this approach works, it may or may not yield the optimal IP for the project at hand. A more formal, data-driven approach to finding relevant IP would increase design productivity and deliver a better product.

Survey respondents stated that finding relevant IP for their design takes a day or longer. They then need to qualify the IP for inclusion in their design. Nearly 75% of survey respondents reported difficulty in determining the context of an IP and its quality. Tracking and determining the quality of IPs is important for traceability.

An efficient way of cataloging IP using metadata from various qualification tools within the design ecosystem is an area of opportunity. A platform that can determine if requirements are met and where an IP is used can provide teams with the quality metric and state of the IP.
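As a rough sketch of the kind of metadata-driven catalog described above (the field names and checks here are hypothetical, not an actual Methodics or Perforce schema), each IP version could carry its qualification results and where-used links so that quality and context can be queried directly:

```python
# Minimal sketch of a metadata-driven IP catalog (hypothetical fields, not an
# actual IPLM schema): each IP version carries qualification results and
# where-used links so quality and context are queryable.
from dataclasses import dataclass, field

@dataclass
class IPRecord:
    name: str
    version: str
    qualification: dict = field(default_factory=dict)  # e.g. {"lint": "pass", "cdc": "pass"}
    used_in: list = field(default_factory=list)         # projects that instantiate this IP

    def is_qualified(self, required_checks):
        """True only if every required check has been run and passed."""
        return all(self.qualification.get(check) == "pass" for check in required_checks)

catalog = [
    IPRecord("serdes_phy", "2.1", {"lint": "pass", "cdc": "pass"}, ["soc_a", "soc_b"]),
    IPRecord("ddr_ctrl", "1.0", {"lint": "pass"}, ["soc_a"]),
]

# Surface IP that is both proven in a prior project and fully qualified.
for ip in catalog:
    if ip.used_in and ip.is_qualified(["lint", "cdc"]):
        print(f"{ip.name} v{ip.version}: reuse candidate, used in {', '.join(ip.used_in)}")
```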

Tools

IP Management

Although companies have embraced IP-centric design practices, the use of a commercial IP Lifecycle Management (IPLM) platform is still in the early stages of adoption. As you can see from the figure below, 81% of the survey respondents are not using a commercial IPLM platform. While 19% stated they are using Methodics IPLM, 28% said they use internal or other solutions; that 28% could include other commercial IPLM platforms, since the survey was promoted within the Perforce customer base.

From an opportunity perspective for IPLM platform vendors, there is an opportunity with at least 53% of the pie below (the 81% not using a commercial platform, minus the 28% using internal or other solutions).

Data Management and Version Control

Data management and version control solutions come from the software development space, and for Perforce they go back to its early days as a company. These solutions provide a backbone for IP management: they can support the tracking of IPs and provide the metadata engineers need to make informed decisions. 36% of respondents indicated they are using Perforce Helix Core for data management, followed by 17% using Subversion (SVN) and another 17% using Git.

From an opportunity perspective for data management/version control tools vendors, there is an opportunity with at least 16% of the pie below.

Summary

A formal IP reuse strategy is essential to make the most of one’s IP investments. It is a must to deliver on time-to-market demands and close that gap between design capacity and design productivity. With an increasing requirement for semiconductor products to meet compliance and/or functional safety standards, traceability represents a major challenge. An effective IP management platform helps designers locate, qualify and manage the release of IP. Using such a platform to manage the IP enables reuse across projects and also enables traceability.

The survey indicates that there is opportunity to maximize the potential of an IP-centric design approach with the use of the right management tools. And there is opportunity for tools vendors to tap into the prospective market potential for these tools.

To watch a recording of the webinar, click here.

The detailed results of the survey are included in a Perforce report titled “Semiconductor Report – The State of the Industry.” To get a copy of the report, click here.

 

Also read:

Webinar – SoC Planning for a Modern, Component-Based Approach

You Get What You Measure – How to Design Impossible SoCs with Perforce

Achieving Scalability Means No More Silos


The Semiconductor Ecosystem Explained

The Semiconductor Ecosystem Explained
by Steve Blank on 02-06-2022 at 6:00 am

TSMC Ecosystem Explained

The last year has seen a ton written about the semiconductor industry: chip shortages, the CHIPS Act, our dependence on Taiwan and TSMC, China, etc.

But despite all this talk about chips and semiconductors, few understand how the industry is structured. I’ve found the best way to understand something complicated is to diagram it out, step by step. So here’s a quick pictorial tutorial on how the industry works.


The Semiconductor Ecosystem

We’re seeing the digital transformation of everything. Semiconductors – chips that process digital information — are in almost everything: computers, cars, home appliances, medical equipment, etc. Semiconductor companies will sell $600 billion worth of chips this year.

Looking at the figure below, the industry seems pretty simple. Companies in the semiconductor ecosystem make chips (the triangle on the left) and sell them to companies and government agencies (on the right). Those companies and government agencies then design the chips into systems and devices (e.g. iPhones, PCs, airplanes, cloud computing, etc.), and sell them to consumers, businesses, and governments. The revenue of products that contain chips is worth tens of trillions of dollars.

Yet, given how large it is, the industry remains a mystery to most. If you think of the semiconductor industry at all, you may picture workers in bunny suits in a fab clean room (the chip factory) holding a 12” wafer. But it is a business that manipulates materials an atom at a time, and its factories cost tens of billions of dollars to build. (By the way, that wafer has two trillion transistors on it.)
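As a rough sanity check on that two-trillion figure (the transistor density below is my own assumption for a leading-edge logic process, not a number from the article):

```python
# Rough sanity check on the "two trillion transistors per wafer" figure.
# The density is an assumed leading-edge logic value, not from the article,
# and edge loss / scribe lines are ignored.
import math

wafer_diameter_mm = 300          # a 12-inch wafer
transistors_per_mm2 = 30e6       # assumed leading-edge logic density

wafer_area_mm2 = math.pi * (wafer_diameter_mm / 2) ** 2   # ~70,700 mm^2
total_transistors = wafer_area_mm2 * transistors_per_mm2  # ~2.1e12
print(f"~{total_transistors:.1e} transistors on a fully patterned 300 mm wafer")
```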

If you were able to look inside the simple triangle representing the semiconductor industry, instead of a single company making chips, you would find an industry with hundreds of companies, all dependent on each other. Taken as a whole it’s pretty overwhelming, so let’s describe one part of the ecosystem at a time.  (Warning –  this is a simplified view of a very complex industry.)

Semiconductor Industry Segments

The semiconductor industry has seven different types of companies. Each of these distinct industry segments feeds its resources up the value chain to the next until finally a chip factory (a “Fab”) has all the designs, equipment, and materials necessary to manufacture a chip. Taken from the bottom up these semiconductor industry segments are:

  1. Chip Intellectual Property (IP) Cores
  2. Electronic Design Automation (EDA) Tools
  3. Specialized Materials
  4. Wafer Fab Equipment (WFE)
  5. “Fabless” Chip Companies
  6. Integrated Device Manufacturers (IDMs)
  7. Chip Foundries

The following sections provide more detail about each of these seven semiconductor industry segments.

Chip Intellectual Property (IP) Cores

  • The design of a chip may be owned by a single company, or…
  • Some companies license their chip designs – as software building blocks, called IP Cores – for wide use
  • There are over 150 companies that sell chip IP Cores
  • For example, Apple licenses IP Cores from ARM as a building block of the microprocessors in its iPhones and computers

Electronic Design Automation (EDA) Tools

  • Engineers design chips (adding their own designs on top of any IP cores they’ve bought) using specialized Electronic Design Automation (EDA) software
  • The industry is dominated by three U.S. vendors – Cadence, Mentor (now part of Siemens) and Synopsys
  • It takes a large engineering team using these EDA tools 2-3 years to design a complex logic chip like a microprocessor used inside a phone, computer or server. (See the figure of the design process below.)

  • Today, as logic chips continue to become more complex, all Electronic Design Automation companies are beginning to insert Artificial Intelligence aids to automate and speed up the process

Specialized Materials and Chemicals

So far our chip is still in software. But to turn it into something tangible we’re going to have to physically produce it in a chip factory called a “fab.” The factories that make chips need to buy specialized materials and chemicals:

  • Silicon wafers – and to make those they need crystal growing furnaces
  • Over 100 gases are used – bulk gases (oxygen, nitrogen, carbon dioxide, hydrogen, argon, helium), and other exotic/toxic gases (fluorine, nitrogen trifluoride, arsine, phosphine, boron trifluoride, diborane, silane, and the list goes on…)
  • Fluids (photoresists, top coats, CMP slurries)
  • Photomasks
  • Wafer handling equipment, dicing
  • RF Generators

Wafer Fab Equipment (WFE) Make the Chips

  • These machines physically manufacture the chips
  • Five companies dominate the industry – Applied Materials, KLA, Lam Research, Tokyo Electron and ASML
  • These are some of the most complicated (and expensive) machines on Earth. They take a slice of an ingot of silicon and manipulate its atoms on and below its surface
  • We’ll explain how these machines are used a bit later on

 “Fabless” Chip Companies

  • Systems companies (Apple, Qualcomm, Nvidia, Amazon, Facebook, etc.) that previously used off-the-shelf chips now design their own chips.
  • They create chip designs (using IP Cores and their own designs) and send the designs to “foundries” that have “fabs” that manufacture them
  • They may use the chips exclusively in their own devices e.g. Apple, Google, Amazon ….
  • Or they may sell the chips to everyone e.g. AMD, Nvidia, Qualcomm, Broadcom…
  • They do not own Wafer Fab Equipment or use specialized materials or chemicals
  • They do use Chip IP and Electronic Design Software to design the chips


Integrated Device Manufacturers (IDMs)

  • Integrated Device Manufacturers (IDMs) design, manufacture (in their own fabs), and sell their own chips
    • They do not make chips for other companies (this is changing rapidly – see here.)
    • There are three categories of IDMs – Memory (e.g. Micron, SK Hynix), Logic (e.g. Intel), Analog (e.g. TI, Analog Devices)
  • They have their own “fabs” but may also use foundries
    • They use Chip IP and Electronic Design Software to design their chips
    • They buy Wafer Fab Equipment and use specialized materials and chemicals
  • The average cost of taping out a new leading-edge chip (3nm) is now $500 million

 Chip Foundries

  • Foundries make chips for others in their “fabs”
  • They buy and integrate equipment from a variety of manufacturers
    • Wafer Fab Equipment and specialized materials and chemicals
  • They design unique processes using this equipment to make the chips
  • But they don’t design chips
  • TSMC in Taiwan is the leader in logic, Samsung is second
  • Other fabs specialize in making chips for analog, power, RF, displays, secure military applications, etc.
  • It costs $20 billion to build a new-generation (3nm) chip fabrication plant

Fabs

  • Fabs are short for fabrication plants – the factories that make chips
  • Integrated Device Manufacturers (IDMs) and Foundries both have fabs. The only difference is whether they make chips for other companies or for their own products.
  • Think of a Fab as analogous to a book printing plant (see figure below)
  1. Just as an author writes a book using a word processor, an engineer designs a chip using electronic design automation tools
  2. An author contracts with a publisher who specializes in their genre and then sends the text to a printing plant. An engineer selects a fab appropriate for their type of chip (memory, logic, RF, analog)
  3. The printing plant buys paper and ink. A fab buys raw materials; silicon, chemicals, gases
  4. The printing plant buys printing machinery, presses, binders, trimmers. The fab buys wafer fab equipment, etchers, deposition, lithography, testers, packaging
  5. The printing process for a book uses offset lithography, filming, stripping, blueprints, plate making, binding and trimming. Chips are manufactured in a complicated process manipulating atoms using etchers, deposition, lithography. Think of it as an atomic level offset printing. The wafers are then cut up and the chips are packaged
  6. The printing plant turns out millions of copies of the same book. The fab turns out millions of copies of the same chip

While this sounds simple, it’s not. Chips are probably the most complicated products ever manufactured.  The diagram below is a simplified version of the 1000+ steps it takes to make a chip.

Fab Issues

  • As chips have become denser (with trillions of transistors on a single wafer) the cost of building fabs has skyrocketed – now >$10 billion for one chip factory
  • One reason is that the cost of the equipment needed to make the chips has skyrocketed
    • Just one advanced lithography machine from ASML, a Dutch company, costs $150 million
    • There are ~500+ machines in a fab (not all as expensive as ASML)
    • The fab building is incredibly complex. The clean room where the chips are made is just the tip of the iceberg of a complex set of plumbing feeding gases, power, liquids all at the right time and temperature into the wafer fab equipment
  • The multi-billion-dollar cost of staying at the leading edge has meant most companies have dropped out. In 2001 there were 17 companies making the most advanced chips.  Today there are only two – Samsung in Korea and TSMC in Taiwan.
    • Given that China believes Taiwan is a province of China this could be problematic for the West.

What’s Next – Technology

It’s getting much harder to build chips that are denser, faster, and use less power, so what’s next?

  • Instead of making a single processor do all the work, logic chip designers have put multiple specialized processors inside of a chip
  • Memory chips are now made denser by stacking them 100+ layers high
  • As chips get more complex to design, which means larger design teams and longer time to market, Electronic Design Automation companies are embedding artificial intelligence to automate parts of the design process
  • Wafer equipment manufacturers are designing new equipment to help fabs make chips with lower power, better performance, optimum area-to-cost, and faster time to market

What’s Next – Business

The business model of Integrated Device Manufacturers (IDMs) like Intel is rapidly changing. In the past there was a huge competitive advantage in being vertically integrated i.e. having your own design tools and fabs. Today, it’s a disadvantage.

  • Foundries have economies of scale and standardization. Rather than having to invent it all themselves, they can utilize the entire stack of innovation in the ecosystem and focus solely on manufacturing
  • AMD has proven that it’s possible to shift from an IDM to a fabless model. Intel is trying to adapt as well: it plans to use TSMC as a foundry for some of its own chips while also setting up its own foundry business

What’s Next – Geopolitics

Controlling advanced chip manufacturing in the 21st century may well prove to be like controlling the oil supply in the 20th. The country that controls this manufacturing can throttle the military and economic power of others.

  • Ensuring a steady supply of chips has become a national priority. (China’s largest import by dollar value is semiconductors – larger than oil)
  • Today, both the U.S. and China are rapidly trying to decouple their semiconductor ecosystems from each other; China is pouring $100+ billion of government incentives into building Chinese fabs, while simultaneously trying to create indigenous supplies of wafer fab equipment and electronic design automation software
  • Over the last few decades the U.S. moved most of its fabs to Asia. Today we are incentivizing bringing fabs and chip production back to the U.S.

An industry that previously was only of interest to technologists is now one of the largest pieces in great power competition.

https://steveblank.com/

Also read:

Samsung Keynote at IEDM

TSMC Earnings – The Handoff from Mobile to HPC

Intel Discusses Scaling Innovations at IEDM

 


Podcast EP60: Knowing your bugs can make a big difference to elevate the quality of verification

Podcast EP60: Knowing your bugs can make a big difference to elevate the quality of verification
by Daniel Nenni on 02-04-2022 at 10:00 am

Dan is joined by Philippe Luc, director of verification at Codasip. Philippe has spent over 20 years in verification, including an extensive and successful career at Arm, where his significant engineering achievements included:

  – Design and verification of coherent caches for the first multiprocessor core from Arm (Cortex-A9)

  – Led development of the random test bench for L1 & L2 caches, used on most A- and R-class processors

  – Initiated and led the development of one of the major random generators used on all application processors

  – Verification lead for the Cortex-A17 core

Today, Philippe leads Codasip’s growing verification team from France, a key part of Codasip’s increasingly global team. His mission is to focus on boosting the quality of RISC-V processor IP, and to do so efficiently. Dan explores why bug tracking is so important with Philippe and how the process can impact the quality of designs.

The views, thoughts, and opinions expressed in these podcasts belong solely to the speaker, and not to the speaker’s employer, organization, committee or any other group or individual.

Codasip on SemiWiki.com


CEO Interviews: Kurt Busch, CEO of Syntiant

CEO Interviews: Kurt Busch, CEO of Syntiant
by Daniel Nenni on 02-04-2022 at 6:00 am

Syntiant Busch Headshot

Named Ernst & Young’s Entrepreneur of the Year® 2021 Pacific Southwest – Orange County, Kurt Busch is a tech industry veteran with extensive experience in product development, having driven the successful launch of new products ranging from SaaS and semiconductors for telecom and broadcast video to consumer electronics and data center systems. Prior to founding Syntiant Corp., Busch was president, CEO and a member of the board of directors at Lantronix (NASDAQ: LTRX), a global provider of secure data access and management solutions for the Internet of Things (IoT) and information technology (IT). He is an engineering hall of fame inductee of the University of California at Irvine, where he earned bachelor’s degrees in electrical engineering and biological science. He also holds an MBA from Santa Clara University.

Can you tell us a little about Syntiant?

We founded Syntiant in 2017 with the idea of building a new kind of processor that would bring artificial intelligence to almost any edge device. At the time, AI was the domain of cloud computing, and no one was thinking of putting significant deep learning processing into devices that operated at the edge. Today, we have shipped more than 20 million of our Neural Decision Processors worldwide, making edge AI a reality for always-on voice, sensor and image applications in a range of consumer and industrial use cases, free from cloud connectivity and ensuring privacy and security.

What is unique about the company and its product technology?

We designed our technology as a complete turnkey system by combining purpose-built silicon with an edge-optimized data platform and training pipeline. Syntiant’s devices typically offer more than a 100x efficiency improvement, while providing a greater than 10x increase in throughput over current low-power MCU solutions, thereby enabling larger networks at significantly lower power. Using at-memory compute and built in standard CMOS processes, Syntiant devices directly process neural network layers from platforms such as TensorFlow without the need for any secondary compilers, which shortens time to market and offers unprecedented performance for solutions that require under 1mW power consumption.
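For a sense of scale, here is a generic TensorFlow/Keras sketch of the kind of tiny always-on keyword-spotting network such edge devices typically run; this is not Syntiant’s actual model, and the vendor-specific step that maps the layers onto the NDP is not shown:

```python
# Generic illustration of a tiny always-on keyword-spotting network defined in
# TensorFlow/Keras. This is NOT Syntiant's model or deployment flow; the
# vendor-specific step that maps these layers onto an NDP is not shown.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(49, 40)),            # ~0.5 s of 40-bin log-mel audio frames
    tf.keras.layers.Conv1D(32, 5, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(64, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(8, activation="softmax"),    # e.g. 8 wake words / commands
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()  # on the order of 10^4 parameters - small enough for milliwatt-class hardware
```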

What industries are Syntiant addressing? 

Syntiant’s deep neural network processors are being designed into all kinds of end uses, from earbuds to automobiles. We are working with about 80 customers globally across market segments including consumer, medical and industrial IoT. Our NDP100 and NDP101 are being used for always-on voice applications, the NDP102 for sensor processing, the NDP120 for speech and sensor fusion, and the NDP200 for vision and image recognition. We went from just offering voice to an expanded product line that includes sensor, audio and image processing, as well as offering the data and training too, providing customers with low-cost, low-latency, end-to-end solutions that quickly deliver production-grade deep learning models in a variety of domains.

What problems/challenges are you solving?

We’re moving AI from the cloud to the edge. Production deep learning models require significant data and training expertise, as well as significant processing power. The lack of clean data, training expertise and sufficient processing power has created fundamental blockers for mass edge AI deployments. Syntiant has tackled these fundamental challenges. First, with custom silicon delivering best in class performance while still meeting size, power and cost constraints for massive edge deployments. Second, the ability to collect, clean, align and generate data for ML training, and lastly, providing a training pipeline, optimized for edge applications, that can go from raw data to production quality machine learning models in an economical manner.

What’s new?

There is a lot of discussion about the democratization of AI, enabling most anyone to utilize the benefits of machine learning and not just the big Internet companies. While we usually deal with large-volume customers, we also want to expand the reach and availability of AI. That’s why we launched our new TinyML Development Board for building low-power voice, acoustic event detection and sensor ML applications. This collaboration with Edge Impulse now enables anyone, from individual developers and hardware engineers to small companies, to design, build and deploy highly accurate ML applications that respond to speech, sounds and motion with minimal power consumption. Whether it is for a wearable, an industrial product or even to assist people with disabilities, the possibilities are endless with our new TinyML board, which provides a full solution for bringing the power of artificial intelligence to almost any device.

What’s next for AI at the edge?

We’ve just begun to scratch the surface on how AI will impact people’s everyday lives. Using Syntiant technology, devices can hear, speak, see and feel, making natural interfaces the path to the future. Advances in AI already are having a profound impact on many societal issues, including how voice technology can help those with disabilities and the elderly, as well as those in remote parts of the world with limited or no Internet access. As AI pervasiveness grows globally, so do myriad applications for public health like our collaboration with Canary Speech, a leader in the voice digital biomarker industry. Our joint deep learning solution enables real-time patient monitoring to detect health conditions such as Alzheimer’s disease, anxiety, depression, as well as a complex voice energy measurement. We’ve also seen AI play a big part in the industrial IoT landscape. Until now, predictive maintenance and condition-based monitoring usually has been done in the cloud. That said, we just announced a collaboration with Ceramic Speed for their Bearing Brain project, which moves prediction and forecasting down to the battery-powered sensor device itself to reduce or eliminate unforeseen maintenance costs. Our technology can continuously monitor sounds, vibrations and even temperature with minimal drain on power consumption, extending battery life by months or years, while improving performance, productivity and efficiency across a wide range of manufacturing applications.

Also read:

CEO Interview: Mo Faisal of Movellus

CEO Interview: Fares Mubarak of SPARK Microsystems

CEO Interview: Pradeep Vajram of AlphaICs


Waymo Collides with Transparency

Waymo Collides with Transparency
by Roger C. Lanctot on 02-03-2022 at 10:00 am

Waymo Collides with Transparency

Anyone looking to U.S. Transportation Secretary Pete Buttigieg to forthrightly assert a path-setting policy vision to guide autonomous vehicle development in the U.S. during his CES 2022 keynote was sorely disappointed. There was no guidance from the Secretary.

The issue has gained new urgency now that Waymo has sued the California Department of Motor Vehicles for allegedly sharing some Waymo-specific operational data with an unspecified inquiring third party. Outraged, Waymo is seeking an end to the sharing of its data relevant to how its autonomous vehicles operate or cope with specific circumstances.

Waymo complaint: https://www.courthousenews.com/wp-content/uploads/2022/01/waymo-calif-dmv-complaint.pdf

The lawsuit represents an important turning point in autonomous vehicle regulation. California lays claim to some of the most rigorous reporting requirements in relation to what is likely the largest group of licensed AV operators in the world.

The primary philosophy behind California’s autonomous vehicle regulatory policy is one of disclosure. Operators are obliged to report all disengagement events – where the safety driver has had to take over from the AV system. This, in turn, has created a battle among licensed operators to show the greatest distance traveled, on average, between disengagement events.

Waymo has used California’s reporting framework as a marketing tool to advertise its performance advantages over the numerous competitors operating in the state. Observers have grown frustrated that the disengagement-centric system is skewing AV development priorities in favor of favorable operating environments including location and time of day.

What is missing in the California regulatory regime is a minimum set of performance requirements, standards, or tests that operators must meet to receive their operating license. The AV regulation is performance based, but only in retrospect: it calls for mitigation in the event of failures, and it is the functional disclosures the State seeks about those failures that have allegedly been shared.

Ironically, since each licensed operator is generally pursuing its own bespoke path to autonomous operation it is unclear that any could benefit from learning about specific corrective measures that any other operator might have taken. All operators are presumably using similar mathematics, but each is using a unique portfolio of sensors and each has its own philosophical approach to writing its AV code.

The lawsuit highlights the lack of an adequate performance-based licensing or regulatory regime for AV operation on public roads. Each of the 50 U.S. states has pursued its own approach – as have countries around the world.

The U.S. came close to establishing an AV regulatory regime at the end of the Obama administration, but fell short after unresolved disputes emerged over the number of AVs that would be exempted from Federal Motor Vehicle Safety Standards requirements such as brake pedals and steering wheels.

It is fairly clear that the Federal government is not in a position to establish a single path to autonomous operation. In this regard it is worth noting that the first AV operator to be granted an FMVSS waiver was Nuro – the maker of delivery bots.

What might work, as part of a process of setting AV operational standards, would be a series of operational tests that AV prototypes will have to pass – such as recognizing and responding to obstacles and other vehicles. Such an approach can be calibrated to establish some basic performance characteristics without giving an advantage to any particular operator or strategic approach.

It is worth noting that in the current global environment characterized by the existing regulatory vacuum, Mobileye, alone, has a unique advantage in putting forth its Responsibility-Sensitive Safety (RSS) framework. Mobileye says RSS “has advanced its way into both IEEE and ISO standards efforts recently.  Intel Senior Principal Engineer and Mobileye VP of Automated Vehicle Standards Jack Weast is chairing the IEEE effort to adopt a formal technical standard known as IEEE P2846: A Formal Model for Safety Considerations in Automated Vehicle Decision Making.”

Alone among operators, Mobileye is working to turn transparency into a competitive advantage. No competing operator has yet come forward to offer an equivalent vision – though Nvidia tried, and failed, with its Safety Force Field (SFF) alternative, which was quickly set aside.

While Mobileye touts RSS, competitors are left with smoke and mirrors. And Waymo clearly wants to keep that smoke and those mirrors in place – resisting requirements that it share elements of its disengagement mitigation. Waymo may be getting something of a comeuppance in California, where General Motors’ Cruise may report exceptionally low disengagement figures – surpassing even Waymo – after operating exclusively at night.

It’s time for U.S. regulators to put forward some minimum performance requirements. The U.S. DOT’s National Highway Traffic Safety Administration has spent decades crashing cars. Isn’t it about time they started figuring out how to prevent cars from crashing in the first place?

I think it is. The Waymo lawsuit is a sign of the times and the time has come for change. The framework for regulation should be less focused on disclosure than it is on performance testing. Regulators should define the objectives and measure and monitor their achievement – anything less is an abdication of responsibility.

Also read:

Apple and OnStar: Privacy vs. Emergency Response

Musk: Colossus of Roads, with Achilles’​ Heel

RedCap Will Accelerate 5G for IoT