eFPGA

Webinar – FPGA Native Block Floating Point for Optimizing AI/ML Workloads
by Tom Simon on 02-25-2020 at 10:00 am

(Image: block floating point example)

Block floating point (BFP) has been around for a while but is only now starting to be seen as a very useful technique for performing machine learning operations. It’s worth pointing out up front that bfloat is not the same thing. BFP combines the efficiency of fixed-point operations with the dynamic range of full floating… Read More
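
The teaser above contrasts BFP with bfloat and with plain fixed point. As a rough, hypothetical sketch of the underlying idea only (not Achronix’s implementation and not taken from the webinar), the Python below stores one shared exponent per block of values and keeps the values themselves as signed fixed-point mantissas; the function names and the 8-bit mantissa width are assumptions for illustration.

    import numpy as np

    def bfp_quantize(block, mantissa_bits=8):
        # One shared exponent for the whole block, chosen so the largest
        # magnitude roughly fills the signed mantissa range (hypothetical sketch).
        block = np.asarray(block, dtype=np.float64)
        max_abs = float(np.max(np.abs(block)))
        if max_abs == 0.0:
            return np.zeros(block.shape, dtype=np.int32), 0
        shared_exp = int(np.floor(np.log2(max_abs))) - (mantissa_bits - 2)
        scale = 2.0 ** shared_exp
        max_mag = 2 ** (mantissa_bits - 1) - 1
        mantissas = np.clip(np.round(block / scale), -max_mag, max_mag).astype(np.int32)
        return mantissas, shared_exp

    def bfp_dequantize(mantissas, shared_exp):
        # Reconstruct approximate values: mantissa * 2**shared_exp.
        return mantissas.astype(np.float64) * (2.0 ** shared_exp)

    # Eight activations share a single exponent, so arithmetic within the block
    # reduces to integer (fixed-point) operations while the exponent carries range.
    block = [0.012, -0.5, 0.25, 0.031, -0.125, 0.004, 0.75, -0.062]
    m, e = bfp_quantize(block)
    print(m, e)
    print(bfp_dequantize(m, e))

Within a block the mantissas are plain integers, which is why the multiplies and accumulates map to efficient fixed-point hardware, while the shared exponent preserves dynamic range across blocks.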


Insights from the Next FPGA Platform Event
by Daniel Nenni on 02-11-2020 at 10:00 am

(Image: The Next FPGA Platform event)

Unfortunately, I missed this event since I was in China. Fortunately, Manoj Roge, VP of Strategic Planning and Business Development at Achronix, participated and did a nice write-up. Manoj has more than 20 years of experience in the programmable business with Cypress Semiconductor, Xilinx, Altera, and now Achronix, so you should definitely… Read More


FPGAs in the 5G Era!
by Daniel Nenni on 01-27-2020 at 6:00 am

(Image: New family of FPGAs, Speedster7t)

FPGAs, today and throughout the history of semiconductors, play a critical role in design enablement and electronic systems, which is why we included the history of FPGAs in our book “Fabless: The Transformation of the Semiconductor Industry” and added a new chapter in the 2019 edition on the history of Achronix.

In a recent blog… Read More


Network on Chip Brings Big Benefits to FPGAs
by Tom Simon on 12-19-2019 at 10:00 am

(Image: NAPs provide connection to the high-speed NoC)

The conventional thinking about programmable solutions such as FPGAs is that you have to be willing to make a lot of trade-offs for their flexibility. This has certainly been the case in many instances. Even just getting data across the chip can eat up valuable routing resources and add a lot of overhead. These problems are exacerbated… Read More


Characteristics of an Efficient Inference Processor
by Tom Dillinger on 12-11-2019 at 10:00 am

The market opportunities for machine learning hardware are becoming more distinct, with the following (rather broad) categories emerging:

  1. Model training: models are evaluated at the “hyperscale” data center, utilizing either general-purpose processors or specialized hardware, with typical numeric precision of 32-bit
Read More

New Generation of FPGA Based Distributed Accelerator Cards Offer High Performance and Adaptability
by Tom Simon on 12-05-2019 at 10:00 am

(Image: Achronix FPGA used on BittWare accelerator card)

We have learned from nature that two characteristics are helpful for success: diversity and adaptability. The same has been shown to be true for computing systems. Things have come a long way from when CPU-centric computing was the only choice. Much heavy lifting these days is done by GPUs, ASICs, and FPGAs, with CPUs in a support … Read More


Achronix Announces New Accelerator Card at Linley Fall Processor Conference – VectorPath
by Randy Smith on 11-04-2019 at 10:00 am

This is my second blog from this year’s Linley Fall Processor Conference. The first two blogs focused on edge inference solutions. Achronix’s discussion was much broader than just AI/ML; it was about where FPGAs have been going and culminated with a product announcement preview. I’ll get to the announcement in a moment, … Read More


BittWare PCIe server card employs high throughput AI/ML optimized 7nm FPGA
by Tom Simon on 10-31-2019 at 6:00 am

Back in May I wrote an article on the new Speedster7t from Achronix. This chip brings together Network on Chip (NoC) interconnect, high-speed Ethernet and memory connections, and processing elements optimized for AI/ML. Speedster7t is a very exciting new FPGA that can be used effectively to accelerate a wide range of processing… Read More


Efficiency – Flex Logix’s Update on InferX™ X1 Edge Inference Co-Processor
by Randy Smith on 10-30-2019 at 10:00 am

Last week I attended the Linley Fall Processor Conference held in Santa Clara, CA. This blog is the first of three I will be writing based on things I saw and heard at the event.

In April, Flex Logix announced its InferX X1 edge inference co-processor. At that time, Flex Logix said that the IP would be available and that a chip,… Read More


Free webinar – Accelerating data processing with FPGA fabrics and NoCs
by Tom Simon on 10-14-2019 at 10:00 am

FPGAs have always been a great way to add performance to a system. They are capable of parallel processing and have the added bonus of reprogrammability. Achronix has helped boost their utility by offering on-chip embedded FPGA fabric for integration into SoCs. This has had the effect of increasing data rates through these systems… Read More