
AI/ML Needed for Unbiased Solutions

Arthur Hanson

Well-known member
AI/ML can work far faster than any human and generate more options than any human, even though it was created by humans. Removing bias from AI/ML will be the greatest challenge in building the tool we need to guide us through the difficulties ahead. AI/ML gives us power we have never had in human history, and it is the tool the tech sector needs to address our social, commercial, financial, governance, and environmental challenges in a world that is closer to its breaking points and points of no return than it has ever been. Automation applied to solutions is our best answer.
 
Although AI/ML is progressing very quickly, it still takes time to reach critical mass.
 
Training is an essential stage of AI implementation. Human bias is probably impossible to eliminate completely unless many diverse human inputs are used, which takes a lot of man-hours.

AI training AI has been considered, but it has been shown to produce errors after some iterations: https://arxiv.org/abs/2307.01850
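The recursive-training failure mentioned above (often called "model collapse") can be illustrated with a toy simulation. This is only a hedged sketch of the general idea, not the experimental setup of the linked paper: each "generation" fits a Gaussian to samples drawn from the previous generation's fitted model, and the fitted spread tends to shrink until the model has drifted far from the original data.

```python
import random
import statistics

# Toy "model collapse" sketch (hypothetical, not the paper's setup):
# each generation is trained purely on the previous generation's outputs.
random.seed(0)
n_samples = 10       # small samples make the degradation fast and visible
generations = 1000

real_data = [random.gauss(0.0, 1.0) for _ in range(n_samples)]
mu, sigma = statistics.fmean(real_data), statistics.stdev(real_data)

sigmas = [sigma]
for _ in range(generations):
    # Sample from the current model, then refit on those synthetic samples.
    synthetic = [random.gauss(mu, sigma) for _ in range(n_samples)]
    mu, sigma = statistics.fmean(synthetic), statistics.stdev(synthetic)
    sigmas.append(sigma)

print(f"fitted sigma: start {sigmas[0]:.3f}, "
      f"after {generations} generations {sigmas[-1]:.3g}")
```

Because estimation error compounds and nothing ever pulls the model back toward the real data, the fitted sigma collapses toward zero over many generations, which is the kind of degradation the thread is describing.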
 
Good day! So agree! I've been diving into the ethical considerations surrounding AI-applied solutions lately, especially in the context of my data science career change as described here: https://aw.club/global/en/blog/work/career-transition-to-data-science, and it's been quite eye-opening. One aspect that really stands out to me is the importance of ensuring that AI systems are developed and used ethically, with a focus on fairness, transparency, and accountability. Understanding these ethical basics is crucial for me to navigate the complexities of AI responsibly. It's not just about building powerful algorithms; it's about using them in ways that benefit society while minimizing potential harm. I believe that by incorporating ethical principles into our data science practices, we can not only drive innovation but also ensure that AI technologies are used for the greater good. Have you encountered any ethical dilemmas in your own AI projects or career transitions? Let's chat about it!
 
I think that ensuring, in the sense of enforcing, ethical use of AI is impossible. The evidence? The continuing existence of malware. Like it or not, malware is some of the most innovative software ever created. Though billions of dollars per year are spent on detecting malware and protecting against it, mitigation is all that has been achieved. Breaches are common, and common IT tools are often the tools of breaches. Trying to regulate AI is useless and will simply hobble development, especially compared to countries without regulations.

Many companies will want to use AI responsibly, or even ethically by the latest definition, for business reasons. The big cloud computing companies will likely do this on their own. Regulations won't stand in the way of bad actors.
 
Trying to cheat, manipulate, and distort is just part of human nature; we even do it when looking at ourselves, to our own detriment. Security and accuracy will always be a cat-and-mouse game. All we can do is try to limit bad actors, or our own misdirected actions, to the point where we still have positive outcomes.
 
IMHO: all LLM AI today is biased in one way or another. AI is programmed to do what its programmers want it to achieve. I asked about living in my home town, and it gave a summary of all the great things about my town and none of the bad, because it is programmed to say nice things when asked about a town. It is not unbiased or based only on the facts.

This is all before we get into what counts as "responsible", "ethical", "morally correct", "problematic"... which is built into all LLMs today.

And this is before we talk about uses, i.e. having it predict security risks, likelihood of financial default, or who should be hired.

just an opinion
 
Bias can be in the programming and can be in the training data set. The problem is exacerbated in generative AI, which uses vastly larger and more differentiated datasets than those used in classical machine learning, so they are even more difficult to screen and control. That is why ChatGPT and others put some hard rules on the answers to certain questions.
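As a toy, hypothetical illustration of how a skewed training set produces skewed answers (nothing here reflects any real model's internals), consider a trivial "classifier" that just learns the majority label it saw during training:

```python
from collections import Counter

# Hypothetical toy: a "model" that predicts the majority training label.
def train(examples):
    """examples: list of (text, label) pairs. Returns a predictor."""
    counts = Counter(label for _, label in examples)
    majority = counts.most_common(1)[0][0]
    return lambda text: majority  # ignores the input entirely

# 9 glowing reviews, 1 negative one: the kind of skew a scraped corpus can have.
skewed_data = [("great parks", "positive")] * 9 + [("bad traffic", "negative")]
predict = train(skewed_data)

print(predict("How is the traffic?"))  # prints "positive"
```

The model answers "positive" no matter what is asked, simply because that is what its data looked like; no malicious programming is required for the bias to appear, which is why screening the dataset matters as much as auditing the code.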
And it is not something that will go away, as it is intrinsic to any learning process, even the biological ones that lead to adaptation to local environments, for example. At a higher level, each of us has some kind of bias coming from location, education, and personal history. That is the reason we have laws, regulations, and social rules to even things out and smooth across a population. Perfect balance would require not only a perfect system but also perfect knowledge, and that is something that doesn't exist.
"Quis custodiet ipsos custodes?" will remain true with AI around, too.
 