Advanced Audio Tightens Integration to Implementation
by Bernard Murphy on 10-22-2024 at 6:00 am

You might think that in the sensing world all the action is in imaging and that audio is a backwater. While imaging features continue to evolve, audio innovations may be accelerating even faster to serve multiple emerging demands: active noise cancellation, projecting a sound stage from multiple speakers, 3D audio and ambisonics, voice-activated control, break-through to hear priority inputs, and in-cabin audio communication to keep driver attention on the road. Audio is especially attractive to product builders, offering more room for big quality and feature advances at premium pricing ($2k for premium headphones, as an example). Such capabilities are already feasible but depend on advanced audio algorithms which must run on an embedded DSP, and today that option can add considerable complexity and cost to product development.

What makes advanced audio difficult?

Most audio algorithm developers work in MATLAB/Simulink to design their algorithms. They build and profile these algorithms around predefined function blocks and new blocks they may define themselves. When done, MATLAB/Simulink generates C code which can run on a PC or a DSP, though that code massively underexploits the capabilities of modern DSPs and will fail to meet performance and quality expectations for advanced audio.

Getting past this hurdle takes a team of DSP programming experts to optimize the original algorithm to take full advantage of the DSP's unique strengths. For example, vectorized processing will pump through streaming audio much faster than scalar processing, but effective vectorization demands expert insight into when and where it can be applied. Naturally this DSP software team must work with the MATLAB/Simulink algorithm developers to make sure intent is carried through faithfully into the DSP code implementation. Equally, they must work together to validate that the DSP audio streams match the audio streams from the original algorithm with sufficient fidelity.
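To make the scalar-versus-vectorized distinction concrete, here is a minimal sketch in plain C of a Q15 fixed-point gain stage. The names and the 4-wide unrolling are illustrative only (not actual HiFi intrinsics): the point is that a DSP expert restructures the loop so that wide multiply-accumulate hardware can be used, while both forms must produce identical samples.

```c
#include <assert.h>
#include <stdint.h>

/* Scalar form: one 16-bit sample per iteration, the shape typical
   machine-generated C takes. gain_q15 is a Q15 fixed-point gain. */
void apply_gain_scalar(const int16_t *in, int16_t *out,
                       int n, int16_t gain_q15) {
    for (int i = 0; i < n; i++)
        out[i] = (int16_t)(((int32_t)in[i] * gain_q15) >> 15);
}

/* "Vectorized" sketch: four samples per iteration, the loop shape a
   DSP expert would aim for so the compiler or SIMD intrinsics can
   issue wide operations. Assumes n is a multiple of 4. */
void apply_gain_vec4(const int16_t *in, int16_t *out,
                     int n, int16_t gain_q15) {
    for (int i = 0; i < n; i += 4) {
        out[i]     = (int16_t)(((int32_t)in[i]     * gain_q15) >> 15);
        out[i + 1] = (int16_t)(((int32_t)in[i + 1] * gain_q15) >> 15);
        out[i + 2] = (int16_t)(((int32_t)in[i + 2] * gain_q15) >> 15);
        out[i + 3] = (int16_t)(((int32_t)in[i + 3] * gain_q15) >> 15);
    }
}
```

Validating that the restructured loop is bit-exact against the scalar reference is exactly the kind of check the joint team must automate.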

MathWorks supports rich libraries to build sophisticated audio algorithms, making it the design platform of choice for serious audio product builders. Yet this automation gap between design and implementation remains a serious drawback, both in staffing requirements and in competitive time to market.

Streamlining the link from design to implementation

Cadence and MathWorks (the company behind MATLAB and Simulink) have partnered over several years to accelerate and simplify the path from algorithm development to implementation and validation, without requiring a designer to leave the familiar MathWorks environment. They accomplish this through a toolbox Cadence calls the Hardware Support Package (HSP), which together with MATLAB and Simulink provides an integrated flow to drive optimized implementation prototyping, verification, and performance profiling, all from within the MathWorks framework.

Advanced Audio MathWorks Integration

Through this facility, HSP will map MATLAB/Simulink function blocks and code replacement candidates to highly optimized DSP equivalents (this mapping could be one-to-one, one-to-many, or many-to-one). Additionally, it will map functions supported in the HSP NatureDSP library (trig, log, exponentiation, vector min/max/stddev). These mapping steps can largely eliminate the need for DSP software experts, except perhaps for specialized functions developed during algorithm development. Equivalents for these, where required, can be built by a DSP expert and added to the mapping library.
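A mapping of this kind can be pictured as a lookup table from model blocks to optimized kernels. The sketch below is purely hypothetical; the struct, the block names, and the kernel names are invented for illustration and are not the actual HSP API or NatureDSP symbols.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical block-to-kernel mapping table. Each entry pairs a
   Simulink function block (or code replacement candidate) with an
   optimized DSP kernel; real mappings may also be one-to-many or
   many-to-one. All names here are illustrative. */
typedef struct {
    const char *block_name;   /* block as seen in the model          */
    const char *dsp_kernel;   /* optimized equivalent in the library */
} kernel_mapping;

static const kernel_mapping mapping_table[] = {
    { "FIRFilter", "dsp_fir_q15"   },
    { "VectorMax", "vec_max_q15"   },
    { "Log10",     "scl_log10_q23" },
};

/* Look up the optimized kernel for a block. A NULL result marks a
   specialized function for which a DSP expert must still supply a
   custom equivalent and add it to the table. */
const char *lookup_kernel(const char *block_name) {
    size_t n = sizeof(mapping_table) / sizeof(mapping_table[0]);
    for (size_t i = 0; i < n; i++)
        if (strcmp(mapping_table[i].block_name, block_name) == 0)
            return mapping_table[i].dsp_kernel;
    return NULL;
}
```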

Integration can also handle dynamic mode switching. If you want to support multiple audio processing chains in one platform – voice command pickup, playing a songlist, or phone communication, each with its own codecs – you must manage switching between these chains as needed. The Xtensa Audio Framework (XAF) handles that switching, and this capability too can be managed from within the MATLAB/Simulink framework.
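In spirit, chain switching amounts to selecting one complete processing pipeline at a time. The sketch below shows that idea with plain C function pointers; the enum, the chain functions, and the selector are all invented for illustration and bear no relation to the actual XAF API.

```c
#include <assert.h>

/* Illustrative modes only, not XAF types. Each mode owns a complete
   processing chain (its own codec plus effects). */
typedef enum { MODE_VOICE_COMMAND, MODE_MUSIC, MODE_PHONE } audio_mode;

/* A chain processes n samples and, for this sketch, returns its mode
   id so a caller can confirm which chain ran. */
typedef int (*process_fn)(const short *in, short *out, int n);

static int process_voice(const short *in, short *out, int n) {
    for (int i = 0; i < n; i++) out[i] = in[i];  /* stand-in for AEC + wake-word front end */
    return MODE_VOICE_COMMAND;
}
static int process_music(const short *in, short *out, int n) {
    for (int i = 0; i < n; i++) out[i] = in[i];  /* stand-in for decoder + EQ */
    return MODE_MUSIC;
}
static int process_phone(const short *in, short *out, int n) {
    for (int i = 0; i < n; i++) out[i] = in[i];  /* stand-in for speech codec + noise suppression */
    return MODE_PHONE;
}

static const process_fn chains[] = { process_voice, process_music, process_phone };

/* Switching modes is just swapping which chain the stream feeds. */
process_fn select_chain(audio_mode mode) { return chains[mode]; }
```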

Once a prototype has been mapped, it can be compiled and validated through a processor-in-the-loop (PIL) engine based on the HiFi DSP instruction set simulator. All these steps can be launched from within the MathWorks framework. An algorithm developer can then feed this output, together with the original algorithm output, into whatever comparison they consider appropriate to assess the quality of the implementation. Wherever a problem is found, it can be debugged during PIL evaluation, again from within the MathWorks framework with additional support as needed from the familiar gdb debugger.

Finally, the HiFi toolkit also supports performance profiling (provided as a text output), which an algorithm developer can use to guide further optimization.

What about AI?

AI is playing a growing role in audio streams, in voice command recognition and active noise cancellation (ANC) for example. According to Prakash Madhvapathy (Product Marketing and Product Management Director for the Audio and Voice Product Line at Cadence), AI-based improvements in ANC are likely to be the next big driver for consumer demands in earbuds, headphones, in car cabins and elsewhere, making ANC enhancements a must-have for product developers.

HiFi DSPs provide direct support for AI acceleration compatible with audio streaming speeds. The NeuroWeave platform provides all the necessary infrastructure to consume standard networks in ONNX or TFLM formats, with the goal, Prakash tells me, of ensuring “no code” translation from whatever standard-format model you supply to a mapping onto the target HiFi DSP. Support for integrating the model is currently outside the HSP integration.

Availability

The Hardware Support Package integration with MathWorks is available today. No-code AI support through NeuroWeave is available for HiFi 1s and HiFi 5s IPs today.

This integration looks like an important time to market accelerator for anyone working in advanced audio development. You can learn more about the HSP/MathWorks integration in this blog from Cadence and from the MathWorks page on the integration.
