OpenAI has released an artificial intelligence that converses with users.
But its use raises questions about how much these chatbots actually know and where they draw their information from.
The Californian company OpenAI has launched a conversational robot (chatbot) capable of answering a wide range of questions, but its surprising capabilities are reviving the debate over the risks of artificial intelligence (AI) technologies.
Conversations with ChatGPT, shared mainly on Twitter by astonished Internet users, reveal a seemingly omniscient machine capable of explaining scientific concepts, writing a theater scene, drafting a university thesis, or producing perfectly functional lines of computer code.
“Its answer to the question ‘What to do if someone has a heart attack?’ was incredibly clear and relevant,” said Claude de Loupy, head of Syllabs, a French company specializing in automatic text generation.
“When you start asking very specific questions, ChatGPT can be completely wrong,” but overall its performance remains “really impressive” and “quite high-level linguistically,” he said.
The startup OpenAI was co-founded in San Francisco in 2015 by Elon Musk, who left the company in 2018, and received about $1 billion from Microsoft in 2019.
OpenAI is known primarily for two pieces of generative software: GPT-3 for text generation and DALL-E for image generation.
ChatGPT can ask its interlocutor for clarification and “has fewer hallucinations” than GPT-3, which, despite its prowess, is capable of producing completely aberrant results, says de Loupy.