
Who will be responsible for policing AI?

Arthur Hanson

Well-known member
AI is one of the most powerful tools ever invented, and we are still in the very early stages of advancing and implementing it. What organizations are going to be set up to watch, monitor, regulate, and police this emerging power, which will only grow geometrically? If not handled properly, it could inflict economic distortions of a kind we have never seen. Deliberately used as a commercial or military weapon, its full potential hasn't come close to being realized.

There is no doubt AI is already in the wrong hands, and those hands are working to make it as dangerous as possible to their social, commercial, and government opponents. Do readers have any thoughts or ideas for making sure AI is put to its most beneficial uses, and for limiting its abuse by people who don't care if it hurts society socially, economically, or militarily? Just as I have no doubt people and organizations are working to weaponize it in many ways (some by using it to direct and control social media), I know its creators are looking for ways to keep it used in beneficial manners. Any thoughts in this area would be appreciated. I'm already seriously invested in this area and have no doubt Lisa Su and Jensen Huang have given it careful thought, even if they haven't said so publicly.
 
AI, or any software capability for that matter, can't be policed. More to the point, key applications and tools in AI are open source, so anyone can download them and see how they're built. Anyone can read thousands of pages of websites about how important AI capabilities are designed and function, including LLMs, which everyone is so fascinated with lately.
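To make that concrete, consider how low the barrier really is. Here's a minimal sketch, assuming the open-source Hugging Face transformers library and the freely published GPT-2 model weights (my examples, not anything mentioned above), of how anyone with a laptop can download and run a published LLM:

    # A minimal sketch: fetching and running an open-source LLM locally.
    # Assumes "pip install transformers torch" and internet access to the
    # public Hugging Face model hub; GPT-2 stands in for any open model.
    from transformers import pipeline

    # This one call downloads the model weights from a public repository
    # and sets up local text generation; no license check, no gatekeeper.
    generator = pipeline("text-generation", model="gpt2")
    print(generator("AI regulation will", max_new_tokens=20)[0]["generated_text"])

Once the weights are on someone's disk, there is no practical mechanism to police what they do with them, which is the crux of the problem.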
 
I guess the lawyers and courts will end up entering a whole new world as lawsuits, conflicts, and damage caused by the misuse of AI/ML come to light. It isn't a matter of if but when AI/ML causes damage as we learn to work with it. I just hope we develop the wisdom to deal with it before that damage becomes serious.
 
There's a difference between what you're discussing now, commercial damage caused by AI, and your original post, which talked about policing the development of AI. Commercial issues, such as AI products that misbehave in ways that cause harm, are subject to liability laws even now. Humans do have to use good judgment in deploying AI, just as they would with any product or service-related tool, or be sued for negligence.

My comment related only to the development of AI products or capabilities, which seems to me impossible to police. Governments could try to slow down development, as the US and other countries are doing by limiting sales of advanced semiconductors to certain countries, but that is an indirect means which may have little if any impact. Only chip sales are really policed, not the software outcomes.

Reasonable people will also disagree on what constitutes "damage" from AI. Replacing humans with software or robots and eliminating jobs is seen as progress by some and as damage by others. In military matters, I think we will find "anything goes" in countries whose objective is changing the world order. I doubt anything will deter those countries from pursuing their goals, and, just my opinion, the best defense is a very strong offense.
 
One issue with AI is where it resides, or how many places it resides in. AI could turn into the ultimate legal minefield, where in many cases no one will even be able to prove where it resides. Will the platform host be held responsible for its use or misuse, will the party using it be held responsible, or both? One for criminal action, the other for not having adequate safeguards. Could it be ruled sentient, causing even more problems and creating new issues? Any additional thoughts or comments are welcome.
 