Will GPT Democratize the Power of AI? Progress and Danger now on Steroids

Arthur Hanson

Well-known member
The question is: will GPT democratize the power of AI? Since you don't have to be a programmer to use it, it could unleash a new wave of progress by letting people without programming skills explore their current areas of expertise and expand them on a scale unprecedented in history. It literally gives many people the means to expand and accelerate their existing skill sets in ways that were not even imaginable a few years ago. Even limited, specialized uses will have the power and speed to change everything.

As people increase their use of GPT, I hope it is used for the benefit of all. Since AI/ML carry great power, society will have to carefully weigh not only the benefits but also the dangers; like any great power, GPT will bring dangers alongside progress. Any thoughts on how we can derive the greatest benefit at the least risk would be appreciated. I come from the finance and business side and can clearly see both the benefits and the dangers.
 
I'm not a programmer by trade at all, but after testing ChatGPT's ability to write code, I think that for the foreseeable future GPT is going to be more of an 'extension of existing skills' tech than an 'enabling of new skills' tech. That is, if you are already knowledgeable in an area, GPT can help you cut the time it takes to produce certain code, but its output is often flawed or bad enough that you really need to be a (good) developer to turn it into something productive. For example:
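To illustrate the kind of flaw I mean, here's a made-up Python sketch (my own illustration, not actual ChatGPT output): a function that looks correct at a glance next to the fix a developer has to spot.

    def moving_average(values, window):
        # Plausible-looking generated code with a classic off-by-one bug:
        # the range stops one short, so the final window is never averaged.
        return [sum(values[i:i + window]) / window
                for i in range(len(values) - window)]

    def moving_average_fixed(values, window):
        # The correction: include the last valid starting index.
        return [sum(values[i:i + window]) / window
                for i in range(len(values) - window + 1)]

    # moving_average([1, 2, 3, 4], 2)       -> [1.5, 2.5]       (silently drops 3.5)
    # moving_average_fixed([1, 2, 3, 4], 2) -> [1.5, 2.5, 3.5]

A non-developer would likely ship the first version without noticing anything wrong, and that's exactly the gap I mean.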

There’s also a major confirmation bias at play with GPT, which is why you’re seeing some large tech companies attempt to create ‘trusted AI’ front ends for technology like GPT.

I think that for the next few years, the biggest “dangers” are going to be the usual human trappings: people using GPT to pretend to be something they’re not, or ‘learning facts’ that are actually logical fallacies, reinforced by the inherent confirmation bias of GPT’s training and output model. It’s useful, but it’s also a powerful echo-chamber tool.

EDIT: This response is full of confirmation bias, but was not written by ChatGPT.
 