In 1969 the ARPANET, the precursor to today's Internet, was born at UCLA when a computer there sent a message to a computer at the Stanford Research Institute. By 1975, there were 57 computers on this early ‘internet’. Interestingly, in the early seventies I actually used the original Xerox Sigma 7 connected to the network in Boelter Hall at UCLA. A computer of similar vintage now sits in that room commemorating the first message, sent on October 29, 1969. Internet traffic has of course skyrocketed, with the major impetus coming from web usage. Statistics from back in 1991 showed global internet traffic of 100 GB per day. In 2016 it was 26,000 GB per second, and in 2020 it is estimated to reach 105,800 GB per second.
According to Cisco, in 2015 there were an estimated 3 billion internet users and 16.3 billion connected devices. Video already accounts for 70% of all internet traffic, and it is expected to grow to 82% by 2020. The internet started out using Internet Protocol version 4 (IPv4) around 1981. This familiar system uses 32-bit addresses, providing 4.3 billion unique values. Despite its surprisingly long run and continued wide use, IPv4 is running out of steam.
In the early 1990s, work began on IPv4’s replacement, IPv6. By 1996, RFC 1883, the first in a series of RFCs covering IPv6, had been approved. IPv6 uses 128-bit addresses and therefore provides an address space of 3.4×10^38 addresses. The protocol is not backward compatible with IPv4, so many devices need dual-stack processing capabilities, and many nodes must provide tunneling to permit interoperability.
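To put the two address spaces side by side, here is a short Python sketch. The specific addresses are made-up examples; Python's standard `ipaddress` module parses both formats through the same API, which is handy when writing dual-stack code:

```python
import ipaddress

# IPv4: 32-bit addresses -> 2**32 unique values (~4.3 billion)
print(f"IPv4 address space: {2 ** 32:,}")       # 4,294,967,296

# IPv6: 128-bit addresses -> 2**128 unique values (~3.4e38)
print(f"IPv6 address space: {2 ** 128:.1e}")    # 3.4e+38

# The same function handles a 32-bit dotted quad and a
# 128-bit colon-hex address (both addresses are examples).
v4 = ipaddress.ip_address("172.16.254.1")
v6 = ipaddress.ip_address("2001:db8::1")
print(v4.version, int(v4))      # 4, and the raw 32-bit integer
print(v6.version, v6.exploded)  # 6, and the full 8-group form
```

The 128-bit address is where the quadrupled lookup width for routing hardware comes from: a longest-prefix match must now consider up to 128 bits of key instead of 32.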
Wikipedia states that as of 2014, IPv4 still carried 99% of worldwide internet traffic. However, by June 2017 almost 20% of the users accessing Google did so over IPv6, and mobile networks have adopted it wholeheartedly. IPv6 growth is real and accelerating.
What does all this mean for network switch designers? At DAC this year in Austin I had a chance to sit down with Lisa Minwell, Senior Marketing Director for eSilicon’s IP Business Unit. She told me that the growth in data rates, connected devices and address space – courtesy of IPv6 – is creating an unprecedented need for optimized memory IP of all kinds.
Data center chips can have total die areas of over 400 mm^2, with over 900 Mb of embedded SRAM. Data centers require high clock rates and low power to avoid cooling issues or thermal stress. eSilicon sees a wide palette of solutions for chip architects – among them larger die, High Bandwidth Memory (HBM), TCAM, advanced FinFET nodes, dense multiport memory, high-speed interfaces, and 2.5D and other complex packaging techniques.
eSilicon marshals all these technologies to deliver some of the most complex data center chips available today. She talked about a chip they recently put into production that supports 3.6 terabits per second across 60 lanes of 28 Gbps. There is over 40 Mb of TCAM in this particular design.
Indeed, for these packet-handling chips, TCAM is the silver bullet. Although IPv6 streamlined some aspects of packet inspection and routing, its longer addresses mean larger and more complex searches. eSilicon has TCAM memory compilers that are proven at 28HPM, 16FF+GL, 16FF+LL and 14LPP. Lisa explained that developing and validating their memory compilers can take over a year. As a result, eSilicon works with chip architects very early to discuss needs and options for future generations of chips, well in advance of implementation. Lisa said this kind of interaction is highly beneficial because the availability of specific memory configurations can create significant architectural advantages.
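To illustrate why TCAM matters here, below is a minimal software model of the longest-prefix match a router performs on each packet. A real TCAM compares the key against every stored value/mask entry in parallel in a single cycle; this sketch merely scans a list to show the matching semantics, and the routes and next-hop names are hypothetical:

```python
import ipaddress

class TcamModel:
    """Software sketch of a TCAM doing IPv4 longest-prefix match.

    Each entry is a (value, mask) pair: a key matches when
    key & mask == value. Hardware checks all entries at once;
    here we scan and keep the most specific (longest) match.
    """

    def __init__(self):
        self.entries = []  # (value, mask, prefix_len, next_hop)

    def add_route(self, cidr, next_hop):
        net = ipaddress.ip_network(cidr)
        self.entries.append(
            (int(net.network_address), int(net.netmask),
             net.prefixlen, next_hop)
        )

    def lookup(self, addr):
        key = int(ipaddress.ip_address(addr))
        best = None
        for value, mask, plen, hop in self.entries:
            if key & mask == value and (best is None or plen > best[0]):
                best = (plen, hop)
        return best[1] if best else None

tcam = TcamModel()
tcam.add_route("10.0.0.0/8", "core-1")    # hypothetical routes
tcam.add_route("10.1.0.0/16", "edge-7")
print(tcam.lookup("10.1.2.3"))   # edge-7 (the more specific /16 wins)
print(tcam.lookup("10.9.9.9"))   # core-1
```

The "ternary" in TCAM refers to the per-bit don't-care state the mask provides, which is exactly what prefix matching needs; extending the key from 32 to 128 bits for IPv6 is one reason the TCAM arrays in these chips have grown so large.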
Internet data growth is a given, so larger and faster data center chips are going to be a necessity. Memory IP, and related IP for data transfer, will play a central role. Expect SRAM to continue to occupy a major percentage of chip area, and expect special-purpose memories, such as TCAM and multiport, to be major contributors to system-level performance. For more information on the IP building-block technology offered by eSilicon, look at their website.