Actually, our solution ends up solving both challenges.
As to GigaIO and Enfabrica, they seem to continue down the road of doing more to achieve less power and less overhead - it's almost contradictory. It's actually quite remarkable that doing more can result in less power and less latency. Enfabrica implements RDMA and CXL. One of the strengths of CXL is in managing cache; we are looking to remove caching altogether. Normally that can cause issues around timing and misconnects, but we have a solution for that. EnchargeAi has some very sophisticated integrated chip designs. Don't look for complexity from me - I'm not well trained or experienced enough to go chasing complex designs.
I have been able to reduce latency and power by removing nearly everything that gets in the way of efficient computing. So yes - we are looking at addressing the underlying issues of power, latency and computing bottlenecks, but our target is 90-95% savings, not 50%. And we expect to achieve it by doing much, much less.
Right now we are focused on building a consumer data platform, because the sales cycle for convincing an engineer to unthink 70-some years of proven designs is too long. Presently, we can switch 5 Gbps across 4 ports, consuming 3 mW with a 40 ns switch latency. (The FPGA is limited to 100 MHz; increasing the clock to 1 GHz drops switch latency to 4 ns.) Adding more ports does not increase latency, though it will add some additional power draw. Still, nothing is as efficient or fast. A side benefit of the platform is that we can arrange resource transfers between the connected systems transparently, which addresses the distributed compute challenge. Achieving these objectives required us to take a very different approach to data exchange - one that is super simple and completely unexpected.
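The latency figures above are consistent with a fixed number of clock cycles per switch traversal: 40 ns at 100 MHz and 4 ns at 1 GHz both imply a 4-cycle switch path. A minimal sketch of that arithmetic, assuming the 4-cycle count (inferred from the stated numbers, not confirmed by the author):

```python
# Switch latency scales inversely with clock frequency when the
# cycle count of the switch path is fixed. CYCLES_PER_SWITCH = 4 is
# an assumption back-calculated from 40 ns @ 100 MHz.
CYCLES_PER_SWITCH = 4

def switch_latency_ns(clock_hz: float) -> float:
    """Latency in nanoseconds for one switch traversal."""
    return CYCLES_PER_SWITCH / clock_hz * 1e9

print(switch_latency_ns(100e6))  # 100 MHz FPGA clock -> 40 ns
print(switch_latency_ns(1e9))    # 1 GHz clock -> 4 ns
```

This also illustrates why raising the clock from 100 MHz to 1 GHz cuts latency by exactly 10x: the cycle count stays the same, only the cycle time shrinks.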
I'm working on publishing some early work in the coming months. We just have a lot on the go.