

AI Deployment Trends Outside Electronic Design
by Bernard Murphy on 12-11-2025 at 6:00 am

In a field as white-hot as AI it can be difficult to separate cheerleading from reality. I am as enthusiastic as others about the potential, but not about the “AI everywhere in everything” message that some emphasize. So it was interesting to find a survey that looks at the deployment reality outside our narrow domain of electronic and systems design, covering nearly 800 businesses worldwide that are applying generative AI in financial services, government, and healthcare. There will be some differences from our own usage and plans, but there should be enough in common that we ought to pay attention to the important challenges they find in scaling beyond early trials. The report is quite detailed on several topics; I am picking just a few points that caught my interest.


Adoption rates are significant, usage still not widespread

Within the survey set, about 30% of employees are using GenAI daily (one or several times a day) and about 50% at least once per week. Perhaps these limits simply reflect corporate restrictions on access to generative tools; perhaps they reflect a learning curve, especially in changing habits. Both reasons are entirely understandable. Then again, 64% of employees said they don’t see value in using AI in their work. Maybe that is an education problem, but it is certainly a barrier to overcome in plans to deploy AI more widely.

Data quality/accuracy remains a problem

Nearly 70% of respondents said they have delayed rollouts due to issues with accuracy, which they attribute to outdated or irrelevant data or to hallucinations. Many said that half of their data was more than 5 years old. They continue to add new data without flushing out old data, which inevitably leads to data rot (redundant, obsolete, or trivial/low-value data), especially in data used for training. This might sound familiar to anyone tasked with pruning regression datasets.

They also point out that this problem is compounded by data generated by GenAI itself, growing by 22-40% per year. They don’t comment further on this point, but I would guess that a non-trivial percentage of that generated data might also be considered rot.
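Compounded over even a few years, growth at those rates snowballs. A quick back-of-the-envelope calculation (the 22% and 40% rates are from the report; the five-year horizon and unit starting volume are my own illustration):

```python
# Back-of-the-envelope: compound growth of GenAI-generated data volume.
# Rates (22% and 40%/yr) come from the survey; the 5-year horizon and
# starting volume of 1 unit are illustrative assumptions.

def compound(rate: float, years: int) -> float:
    """Multiplier on the starting volume after `years` of growth at `rate`."""
    return (1 + rate) ** years

for rate in (0.22, 0.40):
    print(f"{rate:.0%}/yr for 5 years -> {compound(rate, 5):.1f}x the starting volume")
```

Even at the low end, the generated-data pile roughly triples in five years; at the high end it more than quintuples, so any rot in that stream compounds along with it.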

One personal experience here. I have a recent-model robot vacuum/floor mop and wanted to know how to remove the mop. The Google AI Overview at the top of my search results pointed me to a video as a RAG endpoint, except the video was about replacing the head on a hand floor mop. A complete miss, which surprised me. I usually think of RAG (a search leading to an endpoint in human-generated text or video) as reliable. But it is only as reliable as the search leading to that endpoint, it appears. (In a previous blog I pointed to a paper which aims to improve accuracy in RAG relevance.)
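The failure mode is easy to see in skeleton form: a RAG pipeline is only as trustworthy as its retrieval step, because the generation step answers faithfully from whatever document retrieval hands it. A toy sketch with a hypothetical two-document corpus and a naive keyword scorer (nothing here reflects how Google or any real product actually ranks):

```python
# Minimal RAG skeleton: if retrieval ranks the wrong document first,
# generation confidently answers from the wrong source.
# The corpus and keyword-overlap scorer are toy illustrations.
CORPUS = {
    "robot-vacuum-manual": "To remove the mop pad, press the release tab under the robot vacuum.",
    "hand-mop-video": "To replace the head on a hand floor mop, unscrew the old head.",
}

def retrieve(query: str) -> str:
    """Naive keyword-overlap retrieval; real systems use embeddings and rerankers."""
    words = set(query.lower().split())
    return max(CORPUS, key=lambda doc_id: len(words & set(CORPUS[doc_id].lower().split())))

def answer(query: str) -> str:
    doc_id = retrieve(query)
    return f"[from {doc_id}] {CORPUS[doc_id]}"

# This phrasing retrieves the right document...
print(answer("how to remove the mop from my robot vacuum"))
# ...but this phrasing overlaps more with the hand-mop video, so the
# pipeline answers from the wrong source.
print(answer("how do I replace the mop head"))
```

Real retrievers score with embeddings rather than keyword overlap, but the gating effect is the same: a near-miss in ranking becomes a confident answer from the wrong document.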

Confidence in quality is more important than speed

There is significant concern (67% of respondents) that employees will lose the ability to distinguish truth from fiction in material produced by GenAI tools. Or at least they may become more careless in checking GenAI outputs. If I ask a tool to generate an email response to a customer request, will I check it carefully, line by line, or just scan to make sure it looks OK?

Aside from potential damage to the business caused by generation errors, frequent mistakes will damage in-house confidence in the AI initiative. The report leans toward at least balancing quality against speed. I would go further and say that quality should get more emphasis: it is more important to build a solid base than to grow deployment quickly. I continue to believe that the best applications are those which intrinsically support robust cross-checks.

Still an exciting journey, but one that requires continued oversight and caution.
