Semi Knowledge and Applications to go Nuclear, Massive Danger and Benefit both.

Arthur Hanson

Well-known member
As most of you know, a nuclear chain reaction compounds on itself at a scale otherwise found only in stars. Something similar is about to happen with AI: it will soon be compounding knowledge on itself, taking the speed of progress in just about everything it touches to levels unprecedented in human history. AI will feed on itself as fast as better processors and memory allow, and as it improves those two areas and others in turn, we are about to see progress in just about everything at a rate that is now unimaginable to most. Hopefully we will be able to harness it; as with any great power, it can be as dangerous as it can be beneficial. I just hope the right people keep some control of this trend, for in the wrong hands it could become very, very dangerous. This is not an if, but a when and how.

Any thoughts, comments, or additions are sought and welcome. I feel this race will be as fast as the race to build nuclear arsenals, and just as dangerous.
 
One of the significant challenges in further scaling AI is the limitation of input channels (akin to human senses) and the availability of comprehensive training data. Determining which datasets are relevant for AI development raises further questions: Who decides what data are incorporated, and what perspectives or biases might this introduce? What viewpoints is the AI permitted to represent?

Expanding AI’s capabilities beyond text-based data to include modalities like video, audio, sensory, olfactory inputs, and complex scientific datasets from physics and chemistry introduces additional complexities. In a scientific context, scaling AI becomes particularly intricate when physical experimentation is required. Simulations of chemical or physical reactions have inherent limitations, and without the ability to conduct physical tests, AI may struggle to consistently achieve the highly anticipated breakthroughs in these areas.
 
Thank you for the excellent observations. I feel AI/ML will be integrated with sensors from lab experiments and will eventually run labs itself, as it has already done in more than a few cases. The integration of AI/ML into lab and observation systems began a few years ago and is still in its early stages; that is just the beginning.
 
Hm… I’m not sure about this. At its core, AI appears to be some kind of pattern-recognition machine. Can it then predict something entirely new that it has never seen before?

One example is superconductivity in condensed matter physics. If you understood everything about a material right down to its superconducting transition temperature, could you predict that its resistance will drop to zero just below Tc?

I suppose theoretical physicists make bold predictions like this all the time, but they remain predictions until some experimentalist goes and tests them out. And that is something no AI can do on its own?
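To make the extrapolation point concrete, here is a minimal sketch in Python (all numbers are synthetic and purely illustrative; the Tc and resistance values are made up, loosely inspired by a YBCO-like material). A curve-fitting model trained only on normal-state resistance data extrapolates smoothly below Tc and completely misses the zero-resistance transition:

import numpy as np

# Synthetic, illustrative data: resistance vs. temperature for a
# hypothetical superconductor with Tc = 90 K.
Tc = 90.0
T_train = np.linspace(100, 300, 50)   # training data: normal state only (T > Tc)
R_train = 0.002 * T_train + 0.1       # roughly linear metallic resistance (ohms)

# Fit a simple pattern model (quadratic polynomial) to the normal-state data.
coeffs = np.polyfit(T_train, R_train, deg=2)

# Extrapolate below Tc, where the real material's resistance is exactly zero.
T_test = np.array([Tc - 5, Tc - 20, Tc - 40])
R_predicted = np.polyval(coeffs, T_test)  # smooth, nonzero extrapolation
R_actual = np.zeros_like(T_test)          # superconducting state: R = 0

for T, pred, actual in zip(T_test, R_predicted, R_actual):
    print(f"T = {T:5.1f} K  predicted R = {pred:.4f} ohm  actual R = {actual:.1f} ohm")

The fit is smooth by construction, so no amount of normal-state data will make it produce the discontinuity at Tc; that information has to come from theory or from an actual measurement.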
 