Hi all – Is there a significant cost difference between designing and manufacturing a clean-sheet ASIC vs. a clean-sheet CPU SoC to perform the same tasks?
For example, take a VoIP telephone's compute hardware. Say you need to encode and decode real-time audio streams in a specified audio compression format. A modest CPU can do it, since audio is typically not a huge compute task. And an ASIC can do it, probably a very small ASIC. Would you expect a significant difference in the cost to design and manufacture the CPU vs. the ASIC?
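To show why I say the CPU load is modest, here's my back-of-envelope math. The per-sample op count is just my guess for a simple narrowband codec, not a vendor figure:

```python
# Rough per-stream compute estimate for narrowband VoIP audio.
# Assumed numbers (my guesses, not measured): 8 kHz telephony sampling,
# a generous 50 ops per sample for a simple codec, encode + decode at once.
sample_rate_hz = 8_000
ops_per_sample = 50
streams = 2  # one encode + one decode

mips_needed = sample_rate_hz * ops_per_sample * streams / 1e6
print(f"~{mips_needed:.1f} MIPS")  # ~0.8 MIPS, a rounding error for any modern core
```

Even if my op count is off by 10x or 100x, it still seems like a tiny fraction of what a modest CPU can do.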
I'm interested in the engineering and computational aspects here. So the likelihood of a team choosing to develop an all-new chip for a VoIP telephone is out of scope (though maybe a supplier of VoIP chips – if "VoIP chips" per se exist – would actually face something like this decision every now and again).
I'm wondering if there are interesting engineering factors and constraints when you've got a well-understood computational task. Is there anything about ASICs or CPUs that makes them more or less expensive to design and build? I assume transistor count wouldn't be constant here, and that an ASIC would have far fewer. Does that make ASICs decisively cheaper? Is it just going to be about area? Are there any countering factors?
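To make the area question concrete, here's a crude per-die cost sketch. Every number below is an assumption I made up for illustration (wafer cost, usable area, yield, and both die sizes), not a foundry quote:

```python
# Crude die-cost sketch: does "fewer transistors / smaller die" translate
# into a decisive cost win? All inputs are assumed, illustrative numbers.
wafer_cost_usd = 3_000    # assumed mature-node 300 mm wafer price
wafer_area_mm2 = 70_000   # approx. usable area of a 300 mm wafer
yield_fraction = 0.9      # assumed defect-limited yield

def die_cost(die_area_mm2: float) -> float:
    """Cost per good die, ignoring edge loss, test, and packaging."""
    dies_per_wafer = wafer_area_mm2 / die_area_mm2
    return wafer_cost_usd / (dies_per_wafer * yield_fraction)

# Hypothetical sizes: a tiny fixed-function codec ASIC vs. a small CPU SoC
print(f"ASIC    (2 mm^2):  ${die_cost(2):.3f} per die")
print(f"CPU SoC (20 mm^2): ${die_cost(20):.3f} per die")
```

If my numbers are even roughly right, the silicon difference is cents per chip at this scale, which makes me suspect the fixed design costs (masks, verification, licensing) matter more than area here; that's part of what I'm asking about.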
Another example: networking chips, especially switch chips. All the really high-end stuff, the multi-terabit-per-second switches, use ASICs like Broadcom's Tomahawks and similar parts from Cisco, Innovium, et al. But I see a lot of mid-range stuff using CPUs for some reason, where the vendor just keeps adding cores as you go up the product range to get more throughput. Some of these are priced in the hundreds of dollars, instead of thousands or hundreds of thousands. Yet I thought I heard there were inexpensive switch ASICs from Marvell or MediaTek or some such, so I'm not sure why OEMs like MikroTik would use CPUs.
If you needed a switch that handled 100 Gbps of throughput, would you expect a clean-sheet ASIC or CPU to cost more? (This is a low-end category, where the complete switch would cost maybe $200 or $300.)
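Part of why I ask: my rough math says the worst-case packet rate at 100 Gbps is brutal for a CPU. This just applies the standard Ethernet framing overhead (64-byte minimum frame plus 20 bytes of preamble and inter-frame gap):

```python
# Worst-case packet rate on a 100 Gbps link with minimum-size frames.
# 64 B frame + 8 B preamble + 12 B inter-frame gap = 84 B = 672 bits on the wire.
link_bps = 100e9
wire_bits_per_min_frame = (64 + 8 + 12) * 8  # 672 bits

pps = link_bps / wire_bits_per_min_frame
print(f"{pps / 1e6:.1f} Mpps worst case")  # ~148.8 Mpps
```

At ~150 Mpps a core gets only a few nanoseconds per packet, so I'd guess CPU-based switches rely on larger average frame sizes or simply don't hit line rate, but I'd like to understand how that trades off against ASIC design cost.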
If you needed to add networking to the VoIP telephone, like 1 GbE or 100 Mbps Ethernet, and wanted it all on the same SoC, does that change the ASIC vs. CPU cost scenario?
Thanks.