For many years, 2D NAND drove lithography for the semiconductor industry, with the smallest printed dimensions and yearly shrinks. As 2D NAND shrank down to the mid-teens nodes (16nm, 15nm and even 14nm), the cells became so small that only a few electrons were stored in each cell, and cross-talk issues made further shrinks very difficult and uneconomical.
Overcoming the Challenges of Creating Custom SoCs for IoT
As Internet of Things (IoT) opportunities continue to expand, companies are working hard to bring System-on-Chip (SoC) solutions to market in the hopes of garnering market share and revenue. However, it's not as easy as it may first seem. Companies are running into a series of issues that stand between them and capturing the market.
I had the chance to sit in on a panel session at the 54th Design Automation Conference (DAC) that tried to address how companies can overcome these challenges. The panel was chaired by Ed Sperling of Semiconductor Engineering and was held at the Mentor booth on the DAC exhibit floor. On the panel were Mike Eftimakis from ARM, Jeff Miller from the Mentor Tanner group and John Tinson of Sondrel. Ed started off the discussion by asking the panelists what they felt the main challenges were for IoT SoC designers.
Mike Eftimakis responded that ARM sees three main challenges for SoCs targeted at IoT applications.
Mike sees ARM helping designers meet these challenges by offering simple processors for edge devices that can be manufactured in high volumes and at low cost. Having one of these processors in the edge device gives the SoC designer the ability to enable customization for derivative markets while at the same time doing sensor fusion and some analysis to cut down on the amount of data that must be sent to the Cloud.
To be able to build these SoCs quickly and to be able to pivot with the changing IoT market, ARM also believes that having pre-verified design elements and subsystems will enable designers to rapidly build prototypes that can be tested in the market. By modularizing the SoC architecture, designers could quickly customize the same base SoC with different interfaces for different end applications.
Jeff Miller of Mentor agreed with Mike's assessment and suggested that another enabling technology is electronic design automation (EDA) software that can be used for analog and mixed-signal designs. Jeff pointed out that almost all edge devices will be dealing with the real world, which is, in fact, analog. Edge devices will need to be able to integrate and possibly control analog signals while also analyzing and converting their data into digital representations that can easily be sent to the Cloud. Many of these devices will also be communicating with gateways using wireless technologies, which means that the design and associated EDA tools will need to comprehend RF technologies as well.
John Tinson of Sondrel pointed out that designing these types of SoCs is a daunting task as it cuts across multiple engineering domains. While this might be manageable for some of the larger enterprise-level SoC providers, it is not so easy for cash-strapped start-up companies. Time to market will be key for these companies, and Sondrel's mission is to help them by providing design services to reduce their time to market. Sondrel brings with it a significant amount of experience in putting together SoCs, which can help to reduce and mitigate risks for start-up companies and their investors.
Ed Sperling pointed out that while pre-assembled and verified design blocks are useful, he wondered whether the IoT market is mature enough for designers to know which blocks should be offered. He also questioned whether it was really feasible to reuse designs across such widely differing end-use markets. Mike from ARM responded that there are definitely sub-segments of the IoT application space for which design reuse would certainly be possible, and he made the point that these are indeed the market segments that people should target to ensure they get a return on their investment. Jeff from Mentor agreed and pointed out that the key would be to start with a set of building blocks that are fundamental to all designs and then add more over time as more functions become integrated at the edge devices. A couple of good examples of these types of blocks would be security hardware for the edge devices and communications interfaces.
Mike from ARM also pointed out that because IoT is still not well defined, it makes a lot of sense to build flexibility into your SoC designs so that you can pivot with the market when needed. An example of this would be the requirement to be able to do over-the-air updates to the firmware of the edge devices.
Ed Sperling brought the group around to another hot topic for all things IoT: security. Ed pointed out that the threat surface is continually changing and asked how companies should deal with this challenge. Mike from ARM suggested that designers think about segregating their design domains to have a clean, watertight separation between the critical functions of the device (boot-up, firmware memory, communications and control) and the applications side of the device. The Cortex-M processors from ARM with built-in TrustZone hardware help designers do this when using ARM-based processors in their designs.
Jeff from Mentor agreed. He suggested what he called defense in depth, including making sure the design company is in control of the SoC as it moves through different stages of the ecosystem. Once the design has left your company to be fabricated, packaged and tested it can be vulnerable to hacking. Every stop along the way is a possible attack point and designers need to have test suites to ensure that the final packaged devices are not doing something that they were not intended to do. These kinds of checks must be engineered into the system before the SoC design is started.
All in all, this was an excellent panel session. A job well done goes out to Ed, Mike, Jeff and John who covered a lot of information in a short amount of time (a lot of which I wasn’t even able to capture in this short article). Double kudos go to Mentor for sponsoring the event.
See Also:
ARM TrustZone
Mentor/Tanner AMS Solution
Sondrel Design Services
Memories for the Internet
In 1969 the Internet was born at UCLA when a computer there sent a message to a computer at Stanford. By 1975, there were 57 computers on the 'internet'. Interestingly, in the early seventies I actually used the original Xerox Sigma 7 connected to the internet in Boelter Hall at UCLA. A similar vintage computer now sits in that room commemorating the first internet message on October 29, 1969. Internet traffic has of course skyrocketed, with the major impetus coming from web usage. Statistics from back in 1991 showed global internet traffic of 100 GB per day. In 2016 it was 26,000 GB per second, and in 2020 it is estimated to be 105,800 GB per second.
According to Cisco, in 2015 there were estimated to be 3 billion users, with 16.3 billion connected devices. Video is already 70% of all internet traffic, and it is expected to grow to 82% by 2020. The internet started out using Internet Protocol Version 4 (IPv4) around 1981. This familiar system uses 32 bits of addressing, providing for 4.3 billion unique addresses. Despite its surprisingly long run, IPv4 is running out of steam, even though it is still widely used.
In the early 1990s, work began on IPv4's replacement, IPv6. By 1996, RFC 1883 had been approved, the first in a series of RFCs covering IPv6. IPv6 uses 128 bits and therefore provides an address space of 3.4×10^38 addresses. The protocol is not compatible with IPv4, so many devices need dual-protocol processing capabilities. Additionally, many nodes must provide tunneling to permit interoperability.
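To put those numbers in perspective, here is a quick back-of-the-envelope check of the two address spaces; this is plain arithmetic in Python using the device count cited below, nothing protocol-specific:

```python
# Address-space arithmetic for IPv4 (32-bit) vs IPv6 (128-bit).
ipv4_space = 2 ** 32     # ~4.3 billion addresses
ipv6_space = 2 ** 128    # ~3.4e38 addresses

print(f"IPv4: {ipv4_space:.2e} addresses")   # 4.29e+09
print(f"IPv6: {ipv6_space:.2e} addresses")   # 3.40e+38

# Even the 16.3 billion connected devices Cisco counted in 2015
# occupy a vanishingly small fraction of the IPv6 space.
devices = 16.3e9
print(f"Fraction of IPv6 space in use: {devices / ipv6_space:.1e}")  # ~4.8e-29
```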
Wikipedia states that as of 2014 IPv4 was still used for 99% of worldwide web traffic. However, in June 2017 almost 20% of the users accessing Google did so using IPv6. Also, mobile networks have adopted it wholeheartedly. IPv6 growth is real and accelerating.
What does all this mean for network switch designers? At DAC this year in Austin I had a chance to sit down with Lisa Minwell, Senior Marketing Director for eSilicon's IP Business Unit. She told me that the growth in data rates, connected devices and address space – courtesy of IPv6 – is creating an unprecedented need for optimized memory IP of all kinds.
Data center chips can have total areas of over 400 mm^2, with over 900Mb of embedded SRAM. Data centers require high clock rates and low power to avoid cooling issues or thermal stress. eSilicon sees a wide palette of solutions for use by chip architects, among them larger die, High Bandwidth Memory (HBM), TCAM, advanced FinFET nodes, dense multiport memory, high-speed interfaces, and 2.5D and other complex packaging techniques.
eSilicon marshals all these technologies to deliver some of the most complex data center chips available today. Lisa talked about a chip they recently put into production that supports 3.6 Terabits per second, in 60 lanes of 28Gbps. There is over 40Mb of TCAM in this particular design.
Indeed, for these packet-handling chips, TCAM is the silver bullet. While IPv6 streamlined some aspects of packet inspection and routing, its larger addresses still mean larger and more complex searches. eSilicon has TCAM memory compilers that are proven at 28HPM, 16FF+GL, 16FF+LL and 14LPP. Lisa explained that the development and validation of their memory compilers can take over a year. As a result, eSilicon works with chip architects very early to discuss needs and options for future generations of chips, well in advance of their implementation. Lisa said this kind of interaction is highly beneficial because the availability of specific memory configurations can create significant architectural advantages.
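For readers new to TCAM: each entry stores a value plus a per-bit care/don't-care mask, and the hardware compares an incoming key against every entry in parallel, which is exactly the shape of a longest-prefix routing lookup. Below is a minimal software model of that matching behavior, purely illustrative and unrelated to eSilicon's actual compiler output:

```python
# Toy model of a TCAM longest-prefix-match lookup.
# An entry is (value, mask, result); a 1 bit in the mask means "must match".
# Real TCAM evaluates all entries in parallel; we emulate priority by
# keeping the table sorted longest-prefix-first.

WIDTH = 32  # 128 for IPv6, which is why IPv6 needs bigger TCAM entries

def entry(prefix_bits: str, next_hop: str):
    plen = len(prefix_bits)
    value = (int(prefix_bits, 2) if plen else 0) << (WIDTH - plen)
    mask = ((1 << plen) - 1) << (WIDTH - plen)
    return (value, mask, next_hop)

table = sorted(
    [entry("1100", "port A"), entry("11", "port B"), entry("", "default")],
    key=lambda e: e[1],  # a longer prefix mask sorts first
    reverse=True,
)

def lookup(addr: int) -> str:
    for value, mask, next_hop in table:
        if addr & mask == value:  # ternary match: compare only cared-for bits
            return next_hop
    return "drop"

print(lookup(0b1100 << 28))  # port A (hits the /4 entry)
print(lookup(0b1110 << 28))  # port B (falls through to the /2 entry)
print(lookup(0))             # default
```

Moving from IPv4 to IPv6 widens the key from 32 to 128 bits, so the same routing table needs roughly four times the TCAM bits per entry, which is the capacity pressure Lisa described.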
Internet data growth is a given, so larger and faster data center chips are going to be a necessity. Memory IP, and related IP for data transfer, play a central role. Expect SRAM to continue to be a major percentage of chip area, and expect special-purpose memories, such as TCAM and multiport, to be major contributors to system-level performance. For more information on the IP building block technology offered by eSilicon, look at their website.
DSRC: The Road to Ridiculous
Stupid has a home and that home is in Macomb County, Michigan. It is here, we learn from The Detroit News, that General Motors Co. has decided to test the use of wireless technology in conjunction with roadside QR code signs to transmit vital traffic information to passing cars. Those messages will only be communicated to cars equipped with Wi-Fi-based Dedicated Short Range Communication (DSRC) technology, which is being contemplated for mandated fitment in U.S. cars by the U.S. Department of Transportation beginning as soon as 2019 and is currently available only in the MY17 Cadillac CTS.
– GM Testing Smart Road Tech with MDOT, Macomb County – The Detroit News
Macomb County and GM are describing the technology as a safety feature in spite of the fact that it will introduce a distracting alert message into the dashboards of passing DSRC-equipped MY17 Cadillac CTS vehicles. The Detroit News tells us the first connected construction zone in the nation, on Interstate 75 in Oakland County, will allow test cars to read roadside bar codes which communicate approaching lane closures. Additionally, reflective strips on workers' safety vests contain information identifying them as people instead of traffic barrels, according to the Detroit News report.
This technology is expected to speed the development of self-driving cars by enabling vehicle-to-infrastructure and vehicle-to-vehicle communications. It's worth noting that none of the leaders in autonomous vehicle development are currently exploring the use of DSRC.
This non-cellular wireless tech qualifies the Michigan implementation as “smart” in the words of The Detroit News even though cellular technology is not being used to transmit the same information to traffic apps like Waze, Telenav Scout, HERE, TomTom, Google Maps or NNG’s iGo. For some reason the Michigan Department of Transportation and Macomb County believe that talking to cars in a specialized language using specialized and expensive hardware is “smart.”
The multimillion-dollar exercise in exclusivity raises many questions. The most important is why the State and the County have seen fit to share what they claim to be potentially life-saving information solely over a private network accessible to a single new car model, instead of opening up the broadcast to all traffic-related communication platforms.
This extraordinary feat of transportation exclusion extends beyond this highway work-zone alert solution. The Detroit News tells us the state has established at least 100 miles of "connected" highway corridors with roadway sensors, with plans for 350 more miles, all speaking in specialized wireless electronic tongues rather than cellular.
The technology is also being used to communicate the signal phase and timing of traffic signals, though, again, only to an appropriately equipped MY17 Cadillac CTS or so-equipped test vehicles. This "smart" approach to connecting cars to infrastructure ignores the fact that no more than 200 traffic lights nationwide make use of DSRC technology, while thousands of traffic lights are connected using cellular technology and are accessible using the ConnectedSignals EnLighten app, which works on any smartphone and is integrated in the BMW Apps platform in most new BMWs. Audi of America offers a similar solution via the embedded cellular connection.
More importantly, for fixed communications opportunities between infrastructure and cars, such as the Oakland County work zone scenario, cellular is the superior solution. A growing chorus of states is rising up against the USDOT, which is insisting on the use of DSRC for most transportation projects incorporating connectivity. That is some USDOT regulatory over-reach we can all do without.
To be clear, there is nothing smart about sending valuable construction zone and traffic light information exclusively via a communication channel requiring expensive hardware with limited availability. The system as currently deployed is not even integrated with emergency responders and law enforcement, to say nothing of commercial vehicles.
Were Macomb and Oakland counties and the State of Michigan to transmit the same information via cellular, the solution would not only be smart, but revolutionary. It would also align the State with the growing cadre of cities and states around the world that are sharing vital roadside traffic information over existing wireless networks for consumption via widely available consumer devices and in-vehicle integration platforms.
In this context, the roadside QR codes are the latter-day equivalent of the clever Burma-Shave signs from the middle of the last century. Modern networks and cloud service delivery platforms have enabled edge computing technology such that alerts regarding approaching highway hazards and traffic information can be communicated at sufficiently low latency to be useful to drivers without requiring any additional infrastructure.
The onset of wireless technologies such as LTE Advanced Pro and 5G means that collision avoidance applications will be enabled via embedded modems within five years, allowing direct communications without the network and at no cost. In this context, the creation of an expensive, dedicated network unsupported by any consumer device technology is a path to saving lives that is narrow indeed. Worse, it is a road to ridiculous and a waste of taxpayer dollars. There's nothing smart about that.
Roger C. Lanctot is Director, Automotive Connected Mobility in the Global Automotive Practice at Strategy Analytics. More details about Strategy Analytics can be found here:
https://www.strategyanalytics.com/access-services/automotive#.VuGdXfkrKUk
ARM, Infineon, Synopsys, SK Hynix talk AMS Simulation
Every SoC that connects to an analog sensor or device requires AMS (Analog Mixed-Signal) circuit simulation for design and verification, so this year at #54DAC the organizers at Synopsys hosted another informative AMS panel session over lunch on Monday. What makes this kind of panel so refreshing is that the invited speakers are all users of EDA circuit simulators, responsible for AMS IP or chip design. The panel moderator was Farhad Hayat of Synopsys; he gave a brief overview of the SPICE and FastSPICE circuit simulators and how Custom Designer is being used for IC layout. The mantra for 2017 at Synopsys for AMS design is:
- Physically aware IC design (early layout parasitics into SPICE)
- Visually assisted IC layout (templates make you more productive)
- Reliability aware (Monte Carlo simulations, EM and IR analysis)
SK Hynix
Sibaek Jung from the CAD Engineering Group was the first panelist, presenting on the challenges of DRAM design. SK Hynix is #2 in the DRAM market, behind #1 Samsung and ahead of #3 Micron. The company also designs HBM (High Bandwidth Memory), NAND storage and CMOS image sensors.
Circuit design challenges for DRAM include:
- Coupling capacitors (198.1M parasitic capacitors)
- Slower run times with SPICE
- Simulation of an 8Gb design is 2.3X slower than of a 4Gb design
- Power-up simulations can take up to 168 hours
To meet these challenges they used the FineSim circuit simulator (which came to Synopsys with the Magma acquisition announced in November 2011) and traded off simulation speed versus accuracy using the ccmodel simulator setting. Running FineSim on 8 cores they saw speedups of 2X to 3.4X and were able to get long simulation run times down to a more manageable 17 hours. Even the power-up simulations that used to take 4 days can now be sped up by 4X to 10.9X using partitioning and event control.
ARM
ARM is most famous for its processor IP business; Tom Mahatdejkul presented a technique called Large Scale Monte Carlo (LSMC) used with the HSPICE circuit simulator.
LSMC is a new feature in HSPICE to manage and dramatically reduce the amount of data created during Monte Carlo runs. At ARM they use this feature for circuit simulations with under 1 million runs.
The accuracy of LSMC versus non-LSMC is similar, and so are the run times; the big difference is the amount of disk space consumed by the output files. ARM reported a roughly 30,000X smaller total file size (3.3GB vs 112KB for a logic test cell with 26 transistors, 68 nodes and one .meas statement).
On a memory cell they saw the LSMC output file size reduced to just 707KB, versus the non-LSMC size of 2.8GB.
Standard cell libraries are characterized with HSPICE across multiple PVT corners, so disk space is a big deal, and ARM really needs data efficiency for a Monte Carlo approach to be viable. With LSMC they could characterize a 100-cell library using only 78.4MB of disk space, versus the previous approach which bloated out to 2.31TB of disk usage.
With LSMC they are able to run more cells under more corners than before; they can now launch 1M to 10M SPICE runs using LSMC.
Measurements showed that they use the same amount of RAM with LSMC as without it. LSMC still provides full statistical results; it simply does not save as many per-run data points.
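Tom didn't describe how LSMC works internally, but the behavior he reported (full statistics retained, far fewer stored data points) is what streaming accumulation gives you: update running statistics after each run instead of writing every run's measurement records to disk. A sketch of that general idea in Python, not of the actual HSPICE implementation:

```python
import random

# Online (Welford) accumulation: after N Monte Carlo runs we keep a handful
# of numbers per measurement instead of N full result records on disk.
class RunningStats:
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
        self.lo, self.hi = float("inf"), float("-inf")

    def add(self, x: float):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        self.lo, self.hi = min(self.lo, x), max(self.hi, x)

    @property
    def std(self):
        return (self.m2 / (self.n - 1)) ** 0.5 if self.n > 1 else 0.0

delay = RunningStats()
for _ in range(1_000_000):                   # one measurement per MC run
    delay.add(random.gauss(100e-12, 5e-12))  # stand-in for a .meas result

print(f"n={delay.n} mean={delay.mean:.3e} std={delay.std:.3e}")
# Disk footprint: a few floats per measurement, regardless of run count.
```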
Synopsys
There's an internal physical IP group at Synopsys, and Marco Oliveira talked about their CAD flows and methodologies to support 2,700 engineers worldwide. Marco's background includes analog-to-digital and digital-to-analog converters.
For high-yield design they need to simulate across multiple process corners; however, Monte Carlo simulations just take too long, so instead they limit their sample size and extrapolate the results. Their approach is called sigma scaling. In one example, for an RX termination circuit, they did 1,000 Monte Carlo runs with no scaling, then re-ran it needing just 200 runs with sigma scaling.
As a best practice they use sigma scaling with a factor of up to 2 and at least 200 simulation runs. This technique works with all of their circuit simulators: HSPICE, FineSim and CustomSim.
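The talk didn't spell out the math behind sigma scaling, but the description (inflate the variation so rare tail failures show up within a couple hundred runs, then map the results back) resembles classic importance sampling with a widened sampling distribution. Here is a hedged one-parameter sketch of that general technique, not Synopsys's algorithm:

```python
import math, random

def tail_prob(threshold=4.0, runs=200, scale=2.0):
    """Estimate P(x > threshold) for x ~ N(0,1) by sampling from the
    widened N(0, scale^2) and re-weighting each hit back to the true
    distribution via the likelihood ratio. Noisy at 200 runs, but
    plain Monte Carlo of that size would see essentially no hits."""
    total = 0.0
    for _ in range(runs):
        x = random.gauss(0.0, scale)  # exaggerated process variation
        if x > threshold:             # stand-in for "circuit fails"
            # ratio of true pdf to sampling pdf at x
            w = scale * math.exp(-x * x / 2 + x * x / (2 * scale**2))
            total += w
    return total / runs

est = tail_prob()
exact = 0.5 * math.erfc(4.0 / math.sqrt(2))  # true 4-sigma tail, ~3.2e-5
print(f"estimate {est:.1e} vs exact {exact:.1e}")
```

With a scale factor of 2, roughly 2% of the 200 samples land beyond 4 sigma, enough to form an estimate of an event that unscaled sampling would almost never observe.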
Infineon
Our final panelist was Haiko Morgenstern, a mixed-signal (MS) verification engineer from Infineon, based in Munich, Germany. Infineon is a large company of some 40,000 people, with products in mobility (automotive), security (ID cards, NFC) and energy efficiency.
One of their big challenges is how to verify MS designs. They have used UVM testbenches, real number modeling, and UPF for implementation and verification. For verification they are running Monte Carlo simulations with CustomSim and VCS AMS.
On a recent ADC verification there were up to 1 million elements; CustomSim handles this capacity and can simulate the design in about 30 minutes of run time. The verification engineers can define which block uses which modeling abstraction (transistor, SPICE behavioral, Verilog-A, RTL). Variation block Monte Carlo is now available in CustomSim runs.
The output results include statistical values as text along with the typical waveforms. Infineon uses scripts to automate the simulation process and LSF to distribute jobs across a compute farm. They can do 200 Monte Carlo runs overnight using LSF on a 1-million-element netlist, but they haven't tried sigma scaling yet.
Summary
Synopsys has a family of three circuit simulators: HSPICE, FineSim and CustomSim. The SPICE and FastSPICE market continues to be fiercely competitive, so to stay viable each vendor has to show constant improvement with each new release of their tools. HSPICE got started back in the 1980s, and over the decades has stayed relevant amidst newer tools by adding new features and refactoring the code to be more efficient. FineSim was an early SPICE simulator to exploit parallelism, and CustomSim is the newest simulator offered by Synopsys in the FastSPICE space.
TSMC Unveils More Details of Automotive Design Enablement Platform
At this year's Design Automation Conference (DAC), TSMC unveiled more details about the design enablement platforms that were introduced at their 23rd annual TSMC Technology Symposium earlier this year. I attended a presentation on TSMC's Automotive Enablement Platform held at the Cadence Theater, where TSMC's Tom Quan gave a great overview of their status. Before diving into automotive, as a quick review, Tom updated us on all four of the segments covered by their enablement platforms: Mobile, High Performance Computing, Automotive and Internet of Things. Compound annual growth rates of wafer revenue from these areas were 7%, 10%, 12% and 25% respectively. Mobile consumes wafers from 28HPC+, 16FFC and 10nm, and is now seeing some 7nm starts. HPC is in production at 16FF+ with newer designs targeting 7nm. IoT has the broadest range of wafer usage, including 90nm, 55ULP, 40ULP and 28HPC+, with 7nm ready for design starts.
Automotive, the subject of Tom's presentation, is ready for design starts using the 16FFC process. Tom started his presentation by giving a quick overview of the different types of ICs now being used in the automotive space. The biggest driver of platform complexity comes from infotainment and the growing space of ADAS (Advanced Driver Assistance Systems). ADAS alone has several categories of applications and associated ICs, including vision, radar and audio capabilities for detection and avoidance, varying degrees of autonomous driving features, voice recognition, natural language interfaces, vision enhancement, and the list goes on. Overlaid on the traditional areas of power-train, engine control, chassis and suspension, communications and infotainment are now safety and security. All these functions are represented by more than 40 customers who have done over 600 tape-outs at TSMC, with more than 1 million 12-inch-equivalent wafers worth of ICs shipped.
TSMC has put a tremendous amount of work into capturing this market, building upon their successful Open Innovation Platform, better known to many of us as TSMC OIP. The whole idea of OIP is to bring together the thinking of customers and partners to enable an ecosystem that speeds time-to-market and ultimately shortens time-to-money for all involved. TSMC OIP boasts over 16 years of collaboration with more than 100 ecosystem partners and spans 13 technology generations, including over 14,000 IPs, 8,200+ tech files and 270 PDKs for 90+ EDA tools. The enablement platforms build on this foundational work, ensuring that all of the right building blocks and tools are in place to enable designs in a given end market, in this case automotive.
As an example, and since TSMC was presenting at the Cadence Theater, we can look at the collaboration between TSMC and Cadence. Their collaboration in automotive started in 2015 with a focus on identifying needs and solutions to ensure conformance with the two main standards in this space, AEC-Q100 and ISO 26262. Functional safety was a key area of collaboration, and Cadence and TSMC started by training their engineers on functional safety requirements for the automotive space. Within the last two years, Cadence alone has trained over 100 engineers, many of whom have been officially certified by an outside agency. Together, TSMC and Cadence have engaged with customers doing automotive ICs and IPs, and as a result Cadence developed a portfolio of interface IPs in TSMC's 16FFC process supporting those customers. Many of these IPs already meet AEC-Q100 requirements for the Grade 2 temperature range, and Cadence has committed to qualifying their controller IPs to be ISO 26262 ASIL-ready.
With respect to design tools and flows, in the second half of 2016, TSMC and Cadence worked to define a methodology for fault injection simulation and functional safety campaign management. In that time frame Cadence gained ISO 26262 tool compliance on 30+ tools in analog-mixed-signal, digital verification and front-end digital implementation and signoff flows. This work has also now prompted the collaboration to work on ‘reliability-centric’ design flows for 16nm and below including features such as aging simulations, self-heating, electro-migration analysis, FIT (failures in time) rate calculations and yield simulations.
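Since FIT rate calculations are mentioned, a quick refresher: FIT is failures per 10^9 device-hours, so converting observed (or simulated) failure data is one line of arithmetic. The numbers below are illustrative placeholders, not TSMC or Cadence data:

```python
# FIT = failures per 1e9 device-hours, the standard reliability unit.
failures = 2          # failures observed in a (hypothetical) stress test
devices = 100_000     # devices under test
hours = 5_000         # test duration per device

fit = failures / (devices * hours) * 1e9
print(f"{fit:.0f} FIT")  # -> 4 FIT

# For scale: ISO 26262 budgets random hardware failures for an ASIL D
# system on the order of 10 FIT, so component targets sit well below that.
```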
TSMC wraps this effort up under another TSMC umbrella called TSMC9000. TSMC9000 and associated programs for TSMC Library and IP are quality management programs that aim to provide customers with a consistent, simple way to review a set of minimum quality requirements for libraries and IP designed for TSMC process technologies. The TSMC9000 team monitors ongoing IP quality and their requirements are documented and constantly revised to keep IP quality requirements up-to-date. TSMC IP Alliance members submit required data to TSMC for assessments. Assessment results are posted online so that customers can see the results and scores and understand the IP confidence level and/or risk of using a given IP. Having these assessment results readily available can significantly shorten design lead time and lower total cost of ownership for automotive IC and systems providers.
TSMC9000A (A for automotive) is based on requirements from ISO 26262 and AEC-Q100 to cover IP quality, reliability and safety assessment. It includes automotive-grade IP at the 16FFC node targeted at automotive ADAS and infotainment applications. Most of the current automotive IP has completed technology qualification for AEC-Q100 Grade 1 up to 150°C (Tj) and has been re-qualified with automotive-specific DRC/DRM decks. These IPs are also ISO 26262 ASIL-ready, including safety manuals, FMEA/FMEDA, and ASIL B(D) certification.
In summary, TSMC's automotive design enablement platform on 16FFC is ready to go. It will be interesting to see by the next DAC how far this platform has progressed, both in terms of content and usage, as the world moves toward autonomous vehicles.
See also:
TSMC Design Platforms Driving Next-Gen Applications
DAC 2017: How Oracle does Reliability Simulation when designing SPARC
Last week at #54DAC there was a talk by Michael Yu from the CAD group of Oracle who discussed how they designed their latest generation of SPARC chips, with an emphasis on the reliability simulations. The three features of the latest SPARC family of chips are:
- Security in silicon
- SQL in silicon
- World’s fastest microprocessor
New Concepts in Semiconductor IP Lifecycle Management
Right before #54DAC I participated in a webinar with Methodics on "New Concepts in Semiconductor IP Lifecycle Management" with Simon Butler, CEO of Methodics, Michael Munsey, Vice President of Business Development and Strategic Accounts, and Vishal Moondhra, Vice President of Applications. The webinar introduced Percipient and how it will not only extend IP lifecycle management but allow for the modeling of the entire design ecosystem. Percipient was then featured in the Methodics booth at #54DAC with demos and presentations.
The premise is that IP lifecycle management and workspace management need to evolve as SoCs become more and more complex. If you look at complex system design, like automotive and aerospace systems, those industries have evolved their ecosystems to keep pace with the complexity of the systems they have been designing. Today's SoC designs are truly systems, with complexity rivaling the most complex systems in other industries.
Not only does IP lifecycle management need to keep pace with the increasing complexities of system design, but the ability to model the entire ecosystem for SoC design must be accounted for as well. IP must be tracked not only as building blocks within an SoC, but as part of the entire ecosystem. A design team must be "percipient", that is, one that perceives not only the IP in a design but the entire ecosystem it is designing in. Engineering systems used by SoC design teams and the infrastructure of those design teams must be considered along with the IP building blocks.
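To make "tracking IP as part of the entire ecosystem" concrete, the traceability being described boils down to a versioned dependency graph linking IP blocks to everything that consumes them. A toy Python data model of that idea; the schema is hypothetical, not Percipient's actual one:

```python
from dataclasses import dataclass

# Each IP version records what it depends on, so "where-used" queries
# (who is affected if serdes_phy 2.1 changes?) fall out of the same graph.

@dataclass(frozen=True)
class IPVersion:
    name: str
    version: str
    deps: tuple = ()

def where_used(catalog, target):
    """All catalog entries that directly or transitively consume target."""
    def consumes(ip):
        return target in ip.deps or any(consumes(d) for d in ip.deps)
    return [ip for ip in catalog if consumes(ip)]

phy = IPVersion("serdes_phy", "2.1")
mac = IPVersion("eth_mac", "1.0", deps=(phy,))
soc = IPVersion("soc_top", "0.3", deps=(mac,))

print([f"{ip.name}@{ip.version}" for ip in where_used([phy, mac, soc], phy)])
# -> ['eth_mac@1.0', 'soc_top@0.3']
```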
The webinar is now up for replay:
New Concepts in Semiconductor IP Lifecycle Management
Today’s complex SoC design requires a new level of internal and external design traceability and reuse by tightly coupling IP creators with IP consumers. Join us for the introduction of an exciting new platform that allows companies to provide the transparency and control needed to streamline collaboration by providing centralized cataloging, automated notifications to design teams, flexible permissions across projects, and integrated analytics across diverse engineering systems. Come see how companies are realizing substantial cost and time to market savings by adopting IP lifecycle management methodologies.
About Methodics
Methodics delivers state-of-the-art IP Lifecycle Management, Design Data Management, and Storage and Workspace optimization and acceleration tools for analog, digital, SoC, and software development design teams. Methodics’ customers benefit from the products’ ability to enable high-performance collaboration across multi-site and multi-geographic design teams. The company is headquartered in San Francisco, California, and has additional offices and representatives in the U.S., Europe, China, Taiwan, and Korea. For more information, visit http://www.methodics.com
Open-Silicon SerDes TCoE Enables Successful Delivery of ASICs for Next-generation, High-Speed Systems
With 5G cellular networks just around the corner, there is an ever-increasing number of companies working to bring faster communications chips to market. Data centers are now deploying 100G to handle the increased bandwidth requirements, typically in the form of four 28Gbps channels, and that means ASIC designers are looking to integrate Serializer/Deserializer (SerDes) solutions that can reliably handle these speeds.
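As a sanity check on those lane rates: 100 Gigabit Ethernet actually runs its four lanes at 25.78125Gbps once 64b/66b encoding overhead is included, which is why 28Gbps-class SerDes are the standard building block. The arithmetic:

```python
# Why "28Gbps-class" lanes: 100GE line-rate arithmetic with 64b/66b coding.
payload_gbps = 100.0
encoding_overhead = 66 / 64   # 64b/66b line coding adds 2 bits per 64
lanes = 4

line_rate = payload_gbps * encoding_overhead
per_lane = line_rate / lanes
print(f"aggregate line rate: {line_rate:.3f} Gbps")  # 103.125
print(f"per lane: {per_lane:.5f} Gbps")              # 25.78125 -> 28G SerDes
```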
Few companies have the experience to do this correctly, so designers quite often look to outside sources to help with this part of their ASIC design. The hard work you put into your ASIC will be all for naught if you can't get those high-speed signals on and off the die correctly, through the package and the board.
The choice of package can make or break your ASIC design, both in terms of technical system specifications for the SerDes interface and in terms of cost and reliability trade-offs. These are tough decisions that can largely determine project success, and possibly your career. Many customers therefore turn to veteran ASIC design-services companies like Open-Silicon to mitigate their risk and help them make the proper package and board trade-offs that ensure final system success.
Open-Silicon has an impressive amount of experience in this area, having already integrated SerDes interfaces into more than 100 ASIC designs for high-speed systems used in the networking, telecom, computing and storage markets. In addition to their own experienced people, Open-Silicon also works with silicon-qualified SerDes IP providers who offer solutions that span multiple technology nodes, foundries and communication protocols. Open-Silicon provides design services through their Technology Center of Excellence (TCoE), where they offer ASIC designers services such as:
- Channel Evaluation: identifying the right SerDes solution by evaluating the channels and applications intended for the system
- PCS and Controller Solutions: evaluating the PCS and controller/MAC requirements for the interface to the core and optimizing for interoperability of hard and soft macros
- Physical Integration: evaluating the metal stack compatibility, special layer and threshold voltage requirements, placement of SerDes on chip, and bump plan for physical verification and packaging
- Package/Board Design: collaboratively working on packaging and board design, including 3D parasitic extraction, crosstalk and simultaneous switching output noise analysis, and signal/power integrity, along with other system-level considerations
- Silicon Bring-up: close coordination with design-for-test and test teams for final design bring-up and quick assessment on automatic test equipment
I recently conversed with H. N. Naveen and Abu Eghan of Open-Silicon, who did a case study on integrating high-speed SerDes into an ASIC design. The first thing they pointed out is that interactions between the ASIC and its environment have a tremendous impact on the success of your ASIC within the system. Interactions between die, package and printed circuit board must be considered and optimized to get a solid, reliable, high-speed interface. Normally these are three very different domains handled by different people, but when it comes to a high-speed SerDes interface, a company must think differently. All these areas must be co-designed to ensure correct performance at these high frequencies.
In the Open-Silicon case study, a 28nm SerDes IP was chosen for a 4-channel, 28Gbps-per-channel communications ASIC intended to be used in a 100G Ethernet backplane. Naveen and Eghan's task was to ensure that the package and board design were tuned to work with the chosen SerDes IP.
The first package choice was a high-performance Low Temperature Co-fired Ceramic (LTCC) flip-chip substrate. Their analysis included examining the return loss, insertion loss and package crosstalk of this and other packages. They also looked at the PCB stack to make trade-offs regarding surface roughness, pin assignments, signal escape and routing, edge conditions and signal loss at the board level.
The final package design was selected and optimized through simulations to meet targets culled from the CEI specs, including pair-to-pair isolation and substrate insertion loss. The main drivers for their selection and analysis process included overall performance of the SerDes through to the board connections, cost effectiveness of the package, the ability to handle the high-speed signals over a wide bandwidth, and consistency across fabricated products to ensure manufacturability.
As part of the analysis they also determined they could meet the system requirements using an alternative High Density Build-Up (HDBU) package that had lower performance but would be somewhat cheaper. In the end, the LTCC version (HiTCE ceramic) was chosen to take advantage of the extra margin in its loss characteristics.
Signal integrity analysis is one of the most important activities to be performed on SerDes signal lines on a printed circuit board. The PCB materials, vias, copper surface roughness and so on become very critical when the signal speeds are very high (>10Gbps). A channel model is created which represents the complete channel, comprising transmitter and receiver (die), package, PCB and connectors. The losses and signal quality are analyzed, and the various parameters of the channel are fine-tuned to obtain the optimum operating conditions to meet the prescribed standards. Every component of the channel is critical and has to be optimized to ensure successful functioning of the system.
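To give a flavor of that budgeting step: insertion losses of the cascaded channel segments add in dB at the frequency of interest (14 GHz, the Nyquist frequency of a 28Gbps NRZ lane), and the sum is checked against the spec budget. All numbers below are made-up placeholders; in practice each value comes from extracted S-parameter models:

```python
# Illustrative end-to-end insertion-loss budget check at Nyquist.
NYQUIST_GHZ = 28.0 / 2  # 14 GHz for a 28Gbps NRZ lane

segment_loss_db = {     # placeholder values, not case-study data
    "die bump + redistribution": 0.8,
    "package substrate": 2.5,
    "PCB breakout + trace": 4.0,
    "connector": 1.2,
}

budget_db = 10.0        # e.g. a VSR-class channel budget (placeholder)

total = sum(segment_loss_db.values())
print(f"total insertion loss at {NYQUIST_GHZ:.0f} GHz: {total:.1f} dB")
print(f"margin vs budget: {budget_db - total:+.1f} dB")
```

Positive margin of this kind is what drove the package choice above: the HiTCE/LTCC substrate's lower loss buys extra headroom.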
The end result of the case study was an optimized high-bandwidth package and board design for 28Gbps that met all of the test requirements without any issues. As noted above, these kinds of trade-offs are not easy, and Open-Silicon has demonstrated that they have the expertise and experience to help system designers overcome the challenges of choosing and implementing next-generation high-speed 56Gbps SerDes for ASIC designs targeted at future networking, telecom, computing and storage applications.
Since completing the case study, Open-Silicon has productized their work into a 28Gbps SerDes evaluation platform for ASIC development enabling rapid deployment of chips and systems for 100G networks. The Open-Silicon SerDes platform includes a full validation board with a packaged 28nm test chip, software and characterization data. The chip integrates a 28Gbps SerDes quad macro, using physical layer (PHY) IP from Rambus, and meets the compliance needs of the CEI-28G-VSR, CEI-25-LR and CEI-28G-SR specifications.
For more details on the case study and the resulting evaluation platform from Open-Silicon I encourage the reader to follow the links at the end of the article.
The Real Reason Siemens Bought Mentor!
The Siemens purchase of Mentor last year for a premium $4.5B was a bit of a shock to me, as I have stated before. I had an inkling a Mentor acquisition was coming, but Siemens was not on my list of suitors. The reviews have been mixed and the Siemens commitment to the IC EDA market has been questioned, so I spent some time on this at #54DAC.
First stop: the keynote by Chuck Grindstaff (Executive Chairman, Siemens PLM Software Inc.), after which we were afforded a Q&A session with Chuck and Wally. I also had a chat with Chuck in the hallway.
The Age of Digital Transformation
EDA has continually moved to higher levels of abstraction, changing how electronics are designed and created. Now we are seeing the need in the industrial world for further digitalization and virtualization. In his keynote, Chuck Grindstaff, Executive Chairman of Siemens PLM Software, discussed the global impact of this digital transformation. EDA pioneered this revolution and paved the way for today's digital industrial revolution that is transforming and disrupting all industries. For system companies, products are evolving into advanced systems of systems. As a result, SoCs and application software are now the core differentiating and enabling technologies. This is spurring growth and opportunity for IC designers in the convergence of semiconductors and systems. Siemens and Mentor together are setting the vision for this new era of digital transformation.
(Chuck's talk starts 5:30 minutes into the video.)
The Q&A was interesting but it was mostly press people. My question was: Who called who first on the acquisition? Chuck said he made the first call but Wally added that they had their first acquisition talk nine years ago when Cadence (Mike Fister’s last stand) tried a hostile takeover. Siemens was on Wally’s list of white knights. I then asked Chuck point blank if he was going to sell the Calibre unit since it is not part of the Siemens core business and competition for that market segment is stronger than ever before. Chuck bluntly said no, why would he? My reply was money and I added that I wasn’t in the market to sell my house but if someone offered a billion dollars for it I would sell. I also said the Mentor Calibre unit is worth more today than it probably will ever be. My bet is that there would be a feeding frenzy between Synopsys, Cadence, and a Chinese holding company. Chuck disagreed but did say maybe for the right price, then he reminded me of how big Siemens is so their right price is probably higher than one might expect. By the way, Siemens is an $89 billion company while Synopsys is $2.4 billion, Cadence is $1.8 billion, and Mentor is $1.3 billion.
The hallway chat was even more revealing. Chuck and I had a blunt conversation and my feeling is that Chuck is a true competitor like Mentor management has never seen before. Remember, Chuck has CAD running through his veins from his many years at Unigraphics which was acquired by Siemens and he IS from Texas…
Bottom line: Disruption is good and EDA was overdue. For those of you who think Mentor will be assimilated into Siemens and the Calibre group will die a slow and silent death I think you will be seriously disappointed. For those of you who compete with Mentor watch your back, absolutely.