Lee-Lean Shu co-founded GSI Technology in March 1995 and has served as President and Chief Executive Officer and as a member of the Board of Directors since its inception. In October 2000, Mr. Shu became Chairman of the Board of GSI Technology. From July 1980 to March 1995, Mr. Shu held various management positions in SRAM and DRAM design at Sony Microelectronics Corp and AMD. Mr. Shu holds a B.S. degree in Electrical Engineering from Tatung Institute of Technology and an M.S. degree in Electrical Engineering from the University of California, Los Angeles. He received the 2017 Inventor of the Year award from SVIPLA.
Tell me more about GSI Technology?
GSI Technology was started in 1995 and quickly became a leader in the global high-performance SRAM market. In 2015, we acquired a very early-stage startup in the AI space, which enabled us to combine its technology and software with our advanced hardware design team to create and deliver our Associative Processing Unit (APU) chip. The first-generation device is called Gemini-I.
What challenge is your APU solving?
At a high level, the APU addresses the challenge that Von Neumann architectures present when attempting to increase compute performance on big data workloads. Processing cores keep getting faster, but they still use the Von Neumann architecture, so the limits of Moore's law, power dissipation, and, even more importantly, the I/O limits imposed by constantly moving workspace data in and out yield diminishing system-level benefits. The prevailing design approach to the Von Neumann problem is to concentrate more of the same processing into less space rather than truly increasing native processing capability – essentially putting more cores on the chip. However, this only miniaturizes the big-server problem. It does not eliminate the Von Neumann I/O bottleneck, because external data still has to move into and back out of those cores. The current treadmill of shrinking a core-and-memory combination and then massively duplicating it delivers more of the same compute in a smaller space, with only limited power savings and ultimately little end-to-end system improvement.
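To make that bottleneck concrete, here is a minimal back-of-envelope sketch. The bandwidth and throughput figures are illustrative assumptions, not GSI or APU specifications; the point is only that once data movement is the fixed cost, adding cores stops helping.

```python
# Illustrative-only model of the Von Neumann I/O bottleneck.
# All figures below are hypothetical assumptions, not GSI or APU specifications.

DATA_BYTES = 100e9          # 100 GB working set that must cross the I/O boundary
IO_BANDWIDTH = 50e9         # 50 GB/s of external memory/I/O bandwidth, shared by all cores
OPS_PER_BYTE = 4            # operations performed per byte fetched
CORE_THROUGHPUT = 100e9     # 100 Gop/s per core

def end_to_end_time(num_cores: int) -> float:
    """Total seconds when compute scales with cores but data movement does not."""
    compute_time = (DATA_BYTES * OPS_PER_BYTE) / (CORE_THROUGHPUT * num_cores)
    io_time = DATA_BYTES / IO_BANDWIDTH   # moving the data in and out is a fixed cost
    return compute_time + io_time

for cores in (1, 8, 64, 512):
    print(f"{cores:>4} cores: {end_to_end_time(cores):6.2f} s")
# The times flatten near the 2.0 s I/O floor: duplicating cores no longer
# improves end-to-end results once data movement dominates.
```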
Why is the GSI APU important to data processing?
By removing or reducing the I/O cycles used at a system level, the clock speed can be reduced, providing a dual benefit of faster results and lower power. This increases processing efficiency.
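As a rough illustration of that dual benefit (the numbers are hypothetical, not measured APU figures): dynamic power scales roughly with clock frequency, so if in-memory processing removes most of the I/O cycles, a job finishes sooner even at a lower clock, and energy per job falls on both counts.

```python
# Hypothetical comparison of a conventional pipeline vs. an in-memory approach.
# Not measured data; a sketch of how fewer I/O cycles plus a lower clock can
# yield both lower latency and lower energy per job.

def job_metrics(compute_cycles, io_cycles, freq_hz, dyn_power_per_ghz_w, static_power_w):
    """Latency (s) and energy (J) for one job under a simple cycle-count model."""
    total_cycles = compute_cycles + io_cycles
    latency = total_cycles / freq_hz
    power = dyn_power_per_ghz_w * (freq_hz / 1e9) + static_power_w
    return latency, latency * power

# Conventional: high clock, but most cycles spent moving data in and out.
conv = job_metrics(compute_cycles=2e9, io_cycles=8e9, freq_hz=3e9,
                   dyn_power_per_ghz_w=20, static_power_w=30)

# In-memory: lower clock, I/O cycles largely eliminated.
in_mem = job_metrics(compute_cycles=2e9, io_cycles=0.5e9, freq_hz=1e9,
                     dyn_power_per_ghz_w=20, static_power_w=30)

print(f"conventional: {conv[0]:.2f} s, {conv[1]:.0f} J")   # slower and hotter
print(f"in-memory   : {in_mem[0]:.2f} s, {in_mem[1]:.0f} J")  # faster at a lower clock
```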
What are the benefits of using Gemini?
The Gemini technology delivers significant performance increases for inference workloads (reducing big data processing times from seconds to milliseconds, for example). The Gemini is akin to having a memory with RISC Boolean processing capability (note: NOT a number of RISC processors with memory). Its in-memory compute is programmable on a cycle-by-cycle basis. Because of this flexibility, it also improves performance on workload functions that can be efficiently decomposed into Boolean operations. These benefits apply particularly to very large data set problems.
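To illustrate the kind of workload that decomposes naturally into Boolean operations, here is a small, purely illustrative example in plain Python (not GSI's APU programming model): a nearest-neighbor search over binary signatures, where similarity reduces to XOR and bit counting, operations that map well to bit-level, in-memory parallel compute.

```python
# Illustrative only: a similarity search decomposed into Boolean operations
# (XOR + population count). Plain Python integers stand in for bit vectors;
# this is not GSI's APU API, just the class of workload described above.
import random

SIG_BITS = 1024  # bits per binary signature

def random_signature() -> int:
    return random.getrandbits(SIG_BITS)

def hamming_distance(a: int, b: int) -> int:
    """Distance between two signatures using only XOR and a bit count."""
    return (a ^ b).bit_count()   # Python 3.10+; use bin(a ^ b).count("1") on older versions

# A toy "database" of binary signatures and a query to match against it.
database = [random_signature() for _ in range(100_000)]
query = random_signature()

# Every comparison is a Boolean (XOR) operation over the full bit width,
# which is why this style of search suits bit-level in-memory processing.
best_index = min(range(len(database)),
                 key=lambda i: hamming_distance(database[i], query))
print("closest record:", best_index)
```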
What is the roadmap for Gemini?
Gemini-I is available today for shipment. We will be offering more options on the Leda boards in the next couple of quarters. The next-generation chip, Gemini-II, will be available in 2022. In this new chip, we have increased the L1 memory by 8x and will also double the clock frequency. This will allow us to further penetrate the big data market.
What applications do you see Gemini powering in the next 5-10 years?
Currently, much effort across the industry is being expended on improving training, because training is performed not only in the initial off-line evaluation of a problem but also whenever completely new information arrives. Data is only increasing in volume, and demand for search results is only growing as more users are brought on. The Gemini is currently well suited to inference operations, as opposed to the one-time training effort. Also, the goal of all this massive data collection is multi-faceted: data can provide information, which can provide insights, which can be used to improve a criterion or change a behavior. That final output, improvement, is being sought in all industries. The Gemini can accelerate getting this information in real time for those markets where a fast response from a huge data store enables improvement. Some examples include medical research, personal medical assistance, facial recognition, NLP, e-commerce, space-based environmental monitoring, prevention systems (cybersecurity, IIoT, physical security), and traffic intelligence systems.
ABOUT GSI TECHNOLOGY
Founded in 1995, GSI Technology, Inc. is a leading provider of semiconductor memory solutions. GSI’s resources are focused on new products that leverage the strengths of its legacy SRAM business. The Company recently launched radiation-hardened memory products for extreme environments and the Gemini APU, a memory-centric associative processing unit designed to deliver performance advantages for diverse AI applications. The APU’s architecture features massive parallel data processing with two million-bit processors per chip. The massive in-memory processing reduces computation time from minutes to milliseconds, even nanoseconds, while significantly reducing power consumption with a scalable format. Headquartered in Sunnyvale, California, GSI Technology has 172 employees, 114 engineers, and 92 granted patents. For more information, visit gsitechnology.com.