Researchers at the OpenAI institute, who accidentally created an artificially intelligent writer capable of producing sophisticated fake news stories, have decided to withhold the technology for fear it might be used for “malicious” purposes.
The researchers were attempting to create an algorithm that could produce natural-sounding text based on extensive research and language processing, but soon realised that it could generate fake news stories, taking cues from the 8 million web pages it trawled to learn about language, according to ITPRO.
“We have observed various failure modes, such as repetitive text, world modelling failures (e.g. the model sometimes writes about fires happening under water), and unnatural topic switching,” the research team told the BBC.
Sometimes the system spits out passages of text that do not make a lot of sense structurally, or contain laughable inaccuracies, they said.
For example, the BBC cited one story in which the AI wrote about a protest march organised by a man named “Paddy Power” – recognisable to many in the UK as a chain of betting shops.
Should AI applications be used carefully?
OpenAI now believes that its technology will lead to a debate about how such AI should be used and controlled.
“[We] think governments should consider expanding or commencing initiatives to more systematically monitor the societal impact and diffusion of AI technologies, and to measure the progression in the capabilities of such systems,” the team was quoted as saying.