New research has revealed a surprising and critical flaw in generative AI systems: large language models such as ChatGPT and Gemini, trained on vast amounts of data, can in effect ‘poison’ themselves.
While tech industry leaders believe that training AI systems on massive datasets will eventually allow them to surpass human capabilities, researchers from Oxford University and elsewhere have warned in Nature that training generative AI on synthetic data, that is, content produced by other AI models, can lead to a significant decline in performance, ultimately rendering these systems useless.
A Bloomberg report found that when AI systems are trained on data generated by other AI systems, their performance deteriorates significantly. This phenomenon is known as ‘model collapse’.
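A toy way to see the mechanism is to repeatedly fit a simple model to its own output. The sketch below is a minimal Python illustration, not the setup used in the Nature study; the Gaussian model, the sample size of 50, and the number of generations are assumptions chosen only to make the effect visible. Each round fits a Gaussian to the current data, then replaces the data with samples drawn from that fit, the analogue of training a model on AI-generated content instead of human-made content.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Generation 0: "real" data drawn from a standard normal distribution.
data = rng.normal(loc=0.0, scale=1.0, size=50)

for generation in range(101):
    # "Train" a model on the current data: here, simply fit a Gaussian
    # by estimating its mean and standard deviation.
    mu, sigma = data.mean(), data.std()

    if generation % 20 == 0:
        print(f"generation {generation:3d}: mean={mu:+.3f}, std={sigma:.3f}")

    # Build the next generation's training set entirely from the fitted
    # model, discarding the previous data -- the analogue of training
    # on synthetic, AI-generated content.
    data = rng.normal(loc=mu, scale=sigma, size=50)

# The printed std tends to drift toward zero: each finite-sample fit
# loses a little of the distribution's tails, and resampling from the
# fit compounds that loss across generations.
```

The small per-generation sample size is deliberate: it exaggerates the estimation error so the drift shows up within a hundred generations. With more data per generation the collapse is slower, but in this toy setting the tails still erode over time.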