The combination of advancing AI/ML/automation will apply to everything, including just about every skill set there is, from manual labor to the professions, politics, even law enforcement of both blue-collar and white-collar crime. The automation of medicine, which is twenty percent of the US economy, is not only open to massive disruption but advancing at an ever-faster rate due to automated labs and medical procedures. This will all happen, and is happening, at a speed and breadth once considered unimaginable. I feel this forum and its readers as a group have the knowledge and skills to guide this trend to its most positive and productive uses. Left unguided, whether by deliberate action or random accident, it is far too dangerous. Creating and guiding these trends will need to be carefully managed, and the SemiWiki community has many of the skill sets to make the coming transitions in a safe and beneficial manner. Mishandling these coming great powers should not be an option. Any thoughts or comments appreciated.
To understand the long-term impact, I think we need to understand two things:
1) Where is it working well today? Give examples. Is it eliminating jobs? Is it changing how jobs are done? Is it creating new jobs?
Coding, quick summaries of internet information, quick help with basic business information, quantitative summaries of financial information (but they are wrong ~10% of the time).
Music, artwork, logos
Written summaries of meetings (with 5% errors)
2) where is it not working and not adding value?
The bot that "helps" me in my accounting software is generally not helpful except on the basics, which people already know. It often misleads me with instructions that are incorrect.
Are people willing to pay for Copilot?
Fake information on the internet.
I had it do summaries of a legal contract..... but it missed items and misunderstood others. This is far worse than not having a summary.
List specific examples of each and we will get a better understanding.
My wife is an attorney with AI tools experience (LexisNexis, Westlaw), and her opinion is that for legal work only AI tools trained with curated legal databases are professionally trustworthy. Using any general-purpose AI tool (Copilot, ChatGPT, etc.) trained on the public internet is going to produce obvious errors and hallucinations. Presenting non-existent precedents to a judge in a legal argument (as happened to opposing counsel in one of her cases) by having ChatGPT generate your brief will be very embarrassing, and an attorney who wastes the court's time like that may get sanctioned.
Interesting experience, and pretty stupid by the opposing counsel if they didn't cross-check at all.
I can also see the scenario where a hallucination from a 'well trained' AI may be much harder to detect and more insidious in nature (the output looks more plausible, and assumed trust is higher).
The "professional" AI tools for attorneys are better than the general internet tools, since they also use Retrieval Augmented Generation, but they still hallucinate substantially. They're just "better". Nonetheless, a study I read of these professional legal tools said this:
That said, even in their current form, these products can offer considerable value to legal researchers compared to traditional keyword search methods or general-purpose AI systems, particularly when used as the first step of legal research rather than the last word. Semantic, meaning-based retrieval of legal documents may be of substantial value independent of how these systems then use those documents to generate statements about the law. The reduction we find in the hallucination rate of legal RAG systems compared to general-purpose LLMs is also promising, as is their ability to question faulty premises.
But until vendors provide hard evidence of reliability, claims of hallucination-free legal AI systems will remain, at best, ungrounded.
My point was that without these professional tools, attorneys probably spend more time verifying and correcting AI-generated information than they would if they just did the research manually. With the specialized tools they are at least a work-saver.
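For readers unfamiliar with the Retrieval Augmented Generation mentioned above, here is a minimal toy sketch of the idea in Python: instead of letting the model answer from its training data alone, the system first retrieves relevant passages from a curated corpus and constrains the prompt to them, which is what cuts down on fabricated citations. The corpus, function names, and keyword-overlap scoring here are illustrative stand-ins; real legal RAG systems use semantic vector search over licensed databases.

```python
# Toy Retrieval-Augmented Generation (RAG) pipeline: retrieve, then ground
# the prompt in the retrieved passages. Everything here is a simplified
# placeholder for illustration, not any vendor's actual implementation.

def tokenize(text):
    """Lowercase and split into word tokens, stripping basic punctuation."""
    for ch in ",.?()":
        text = text.replace(ch, " ")
    return set(text.lower().split())

def retrieve(query, corpus, k=2):
    """Rank documents by keyword overlap with the query (a crude stand-in
    for the semantic, meaning-based retrieval a real system would use)."""
    q = tokenize(query)
    scored = sorted(corpus, key=lambda doc: len(q & tokenize(doc)), reverse=True)
    return scored[:k]

def build_prompt(query, corpus):
    """Assemble a grounded prompt that tells the model to answer ONLY from
    the retrieved passages -- the grounding step that reduces hallucination."""
    context = "\n".join(f"- {p}" for p in retrieve(query, corpus))
    return (f"Answer using ONLY these passages; say 'not found' otherwise.\n"
            f"Passages:\n{context}\n"
            f"Question: {query}")

# A tiny invented "curated corpus" of legal snippets.
corpus = [
    "Smith v. Jones (2019) held that the statute of limitations is tolled during arbitration.",
    "The 2021 amendment changed the filing deadline from 30 to 60 days.",
    "Corporate bylaws must be ratified by a majority of the board.",
]

prompt = build_prompt("What is the filing deadline after the amendment?", corpus)
print(prompt)
```

The key design point is the instruction to answer only from the supplied passages: the generator is steered away from inventing precedents, at the cost of sometimes saying "not found."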
Agreed, but that is why you have specialized LLMs. My wife is a banker, and her bank is using localized AI in practice; it is a huge time saver and does a much better job with fraud detection.
I’m working with AI EDA applications that are speeding up DV and debug dramatically. 10-30x on big designs and this is just the beginning. Verification is a big part of the design cycle for complex SoCs so this is significant.
The "specialized LLMs" are better, and that's probably the real monetizable aspect of LLM technology. EDA applications also look like a promising area for LLM use. IMO, you also need to either be trained in using LLMs properly, or simply understand how they work to direct your queries, to get the most from any LLM. Specificity and limiting the answer domain are critical for effective LLM use, and a lot of professionals using application-targeted LLMs eventually figure this out (though IMO technical understanding is still more efficient). Asking a very specific question has a far greater chance of getting you a worthy answer. Likewise, limiting the domain, as in "summarize these meeting notes," is an easier problem to solve than an open-ended question, and more likely to get you an answer of value.
Long ago and far away I came from a database management background, and even with "just databases", knowing how to structure a query to get an efficiently generated answer was important even in the 1980s and 90s. There was a big difference between knowing SQL syntax, for example, and knowing how the database management system really processed information. I think that domain professionals (like attorneys) who really understand AI tools are going to have lucrative professional opportunities ahead of them.
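The database analogy above can be made concrete. A sketch in Python using SQLite (table and column names invented for illustration): two queries that look almost identical to someone who only knows SQL syntax execute very differently, because wrapping an indexed column in a function defeats the index and forces a full table scan. Knowing the engine, not just the syntax, is what the post is describing.

```python
# Two nearly identical SQL queries with very different execution plans.
# SQLite's EXPLAIN QUERY PLAN shows whether the index is actually used.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE clients (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE INDEX idx_name ON clients(name)")
conn.executemany("INSERT INTO clients(name) VALUES (?)",
                 [("alice",), ("bob",), ("carol",)])

def plan(sql):
    """Return SQLite's query-plan description for a statement."""
    rows = conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return " ".join(row[-1] for row in rows)  # last column is the detail text

# Sargable predicate: the optimizer can seek directly into idx_name.
fast = plan("SELECT id FROM clients WHERE name = 'bob'")

# Non-sargable predicate: applying a function to the column defeats the
# index, so SQLite falls back to scanning the whole table.
slow = plan("SELECT id FROM clients WHERE lower(name) = 'bob'")

print(fast)  # reports a SEARCH using idx_name
print(slow)  # reports a full-table SCAN
```

The same principle carries over to LLM use: how you phrase the request determines how efficiently the system can find the answer.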
My boat neighbor is an attorney. She uses ChatGPT for drafts and other research work paralegals might do. According to her, ChatGPT does miss things, but it sometimes catches things she might not, so it is a win as a support tool, and it gets better the more they use it. They have a business ChatGPT subscription, which is better than what we free users get. They are cutting support staff through hiring freezes due to AI, but they are not cutting attorneys; new-hire attorneys are, however, required to be AI-fluent. AI is definitely happening in the legal business.
I see professional AI/ML programs moving to a subscription service like many other business models. This will happen at an ever-accelerating rate as training becomes more automated, with AI doing quality control on other AIs. We are in the very early stages of AI/ML, and the transition across the commercial environment worldwide is only going to accelerate. I also feel we are in the very early stages of combining robotics with AI/ML.