Like a nuclear chain reaction, will we be able to create AI/ML ecosystems that feed on themselves? I see no reason AI systems could not be designed to accumulate data and train themselves, or train each other, without human intervention: an ecosystem that gives them access to as much data as possible from the environments they work in, so they can advance themselves on their own. Is there any reason this would not be possible? Also, could an AI system that accesses such an ecosystem ever become independent to the point that we could not control it, or could we simply lose control of it? As we move forward, I feel we must tread carefully, for with great power comes great danger if not handled properly, and AI is an entirely new frontier for man. There might also be some who don't understand what they are working with, creating a myriad of problems and dangers. Security of AI systems must be a consideration from inception. Any thoughts or comments appreciated.
It will just be garbage in, garbage out. I don't think there exists enough well-documented SystemVerilog code in this world to train an AI to write a chip on its own.