Clay Johnson has decades of executive experience in computing, FPGAs and development flows, including serving as vice president of the Xilinx (now AMD) Spartan Business Unit. He has a vision to enable the next phase of computing.
Tell us about CacheQ.
CacheQ is a little over six years old and we have about 10 people. We focus on application acceleration to simplify development of data center and edge-computing applications executing on processors, GPUs and FPGAs. Our QCC Acceleration Platform reduces development time and increases performance, enabling application developers with limited hardware architecture knowledge to implement heterogeneous compute solutions across processors, GPUs and FPGAs.
My business partner and I envisioned CacheQ’s ideas and technology about 15 years ago. At that point, we recognized the need for performance beyond clock rate improvements in CPUs.
We continue to develop our software technology while engaging with customers to solve complex performance and technical challenges. Our customers have applications that run single threaded on a CPU and need the higher performance of running multithreaded. For performance beyond that, we can target accelerators like GPUs.
Tell us a little bit about yourself.
My career has been around technology, both in the semiconductor space and development platforms associated with complex semiconductor technology. I started off in EDA development tools, schematic capture and simulation. I worked for Xilinx (now AMD) for a long time. The last position I had was vice president of the Spartan business unit, a well-known product family from Xilinx at the time. I managed that along with the auto sector and the aerospace and defense sector. I left to run a company that developed a secure microprocessor with a high degree of encryption, decryption, anti-tamper technology and key management technology. We sold that company and I was involved in another security company. Then my business partner, who I’ve worked with for decades, and I started CacheQ.
What problems is CacheQ solving?
CacheQ accelerates application software. CacheQ targets a range of platforms from the absolute highest performance GPUs to embedded systems. Each platform has its own unique performance requirements. With embedded systems, we can target monolithic platforms comprising CPUs and an accelerator fabric. An example of that is Jetson from Nvidia. In addition to performance, there is a requirement to reduce the complexity of application acceleration. Our development platform abstracts away the most complex steps to deliver application acceleration.
What application areas are your strongest?
Our customers deal with complex numerical problems in medicine, computational physics and video, as well as large government entities. Essentially, they are high-technology companies with software developers implementing applications that solve complex numerical problems. Our target customers are software developers who traditionally write in high-level languages like C or C++.
While it’s not a specific application or vertical, our tools are used for numerically intensive applications that require a lot of compute power to execute. Examples include molecular analysis, encryption and decryption. Computational fluid dynamics is a big area used across many industries.
CacheQ has done various projects in weather simulation, which ranges from traditional forecasting to tsunami code that projects what will happen when a tsunami hits.
What keeps your customers up at night?
Driving performance is daunting because of the many technical hurdles to overcome while accelerating an application, especially those that are unanticipated.
Application acceleration can be compute bound or memory bound. At times it is unclear what target hardware to use: a CPU, a GPU or, where performance demands it, an FPGA. Another question is whether certain vendors offer better development platforms and tools.
In many cases, the code written by an application developer runs single threaded and may need to be restructured to get higher performance to run in parallel. Attempting to accelerate an application includes plenty of unknowns. For example, we have code that runs faster on Intel processors. In other cases, it runs faster on AMD processors. It’s a multi-dimensional complex problem that’s not easy to solve.
What does the competitive landscape look like and how do you differentiate?
We are not aware of any company that does what we do. Our largest obstacle is customers who stay with existing solutions. It’s a challenging problem. Customers are familiar with techniques that put pragmas or directives into their code. An example in acceleration is CUDA from Nvidia, which is used to write code that runs on Nvidia GPUs.
It’s a complex tool chain: understanding the software and how it applies to hardware takes time and energy. That is where our real competition lies: getting over the hump of traditional software development and looking at new technology. Most developers have not been able to solve the problems that we solve, so when we explain our technology and its complexity, they are skeptics.
Once we show developers our technology and demonstrate it, we typically hear, “Wow, you can do that.” Our competition is the existing mindset and existing platforms, not any company doing exactly what we’re doing.
What new features/technology are you working on?
We continue to push our existing technology forward with further optimization to produce better results on target platforms. As new hardware comes out, we update our software to support it, and we can always improve the results we deliver on the platforms we already support. We continually look at target GPUs from AMD and Nvidia. With code that comes in from customers, we run it through our tools, analyze the results and continually drive up the performance we deliver across various applications.
Beyond that, our technology supports heterogeneous computing, the ability to look at various technologies and split the task across these technologies. For example, most top-end acceleration is done with PCI-attached accelerator cards. Some code runs on the CPU and some on the GPU. We figure out what code needs to run where. It’s heterogeneous. At the same time, the world is looking at machine learning (ML). It is where everything is going and it’s dynamic.
Companies are investing significant amounts of capital to develop ML solutions across silicon systems, software frameworks like PyTorch, cloud services and platforms, new models, operational models and new languages. It’s broad and deep. We have technology to target multiple hardware platforms and remove the need to code for specific platforms or vendors. I previously mentioned CUDA, the standard way to get acceleration from ML models executing on GPUs, and the reason Nvidia dominates.
CUDA is specific to Nvidia GPUs; developers can’t run CUDA code on different GPUs. Coding in CUDA is challenging and, at the same time, ties the code to executing on an Nvidia GPU. Our technology removes the need to write ML libraries in CUDA. Developers write standard high-level languages like C or C++ and target CPUs and GPUs from both Nvidia and AMD.
Combining that with technology that allows access to low-cost cloud resources creates an environment that reduces model development time and delivers performance along with low-cost access to high-performance GPUs. Anyone in the ML space knows the downside of GPUs today is cost. The most recent GPUs from Nvidia are $30,000, and many companies cannot afford or access that kind of technology in the cloud.
The direction we’re taking our technology for ML offers standard languages without vendor-specific code along with a development platform that allows developers access to low-cost GPUs. Customers tell us that’s a huge advantage.
What was the most exciting high point of 2023 for CacheQ?
Our customer engagements, our technology development and recognition of how our technology could enable ML are high points. We believe there’s a huge opportunity and challenge around ML development platforms.
Companies need much better software development platforms, ones that encompass not just a single vendor but multiple vendors. A significant number of models are being developed, and those models need to run across a variety of hardware platforms.
No development platform offers what we bring to the market. 2023 was a year where we were engaging with customers and solving their complex problems. At the same time, we were working on our ML strategy. Our overall strategy really came together in 2023.
What was the biggest challenge CacheQ faced in 2023?
Traditional compute acceleration is not a growth area. Compute technology is being used for ML. The opportunity in 2023-2024 and beyond is various technologies around ML. Transitioning from a compute-focused company to include an ML offering was our biggest challenge in 2023.
How do customers normally engage with your company?
To learn more, visit the CacheQ Systems website at www.cacheq.com or email info@cacheq.com.