

The latest ChatGPT models began giving noticeably less accurate answers to an identical set of questions within a few months, according to a study by researchers from Stanford and the University of California.
The authors could not explain why the neural network's capabilities were deteriorating.
As part of the experiment, the researchers asked GPT-3.5 and GPT-4 to solve a series of mathematical problems, answer sensitive questions, write new lines of code, and perform spatial reasoning tasks based on hints.
In March, the newer model, GPT-4, identified prime numbers with 97.6% accuracy; by June the figure had fallen to 2.4%. Over the same period, the earlier GPT-3.5 model improved its performance on this task.
At the same time, when both versions of ChatGPT were asked to generate code from identical prompts, the quality of their output deteriorated significantly over those months.
In March, the older model gave more detailed explanations of why it would not answer certain sensitive questions, such as those about people's ethnicity. By June, however, both versions of the neural network simply apologized and declined.
“The behavior of the same large language model service can change significantly in a relatively short period of time,” the study says.
The experts recommended that users and companies relying on neural network services in their workflows implement some form of ongoing monitoring to verify that the bots still perform as expected.
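Such monitoring can be as simple as periodically re-running a fixed set of questions with known answers and comparing the model's accuracy against an earlier baseline. Below is a minimal sketch of that idea; the `query_model` callable and the drift `tolerance` are hypothetical placeholders, not part of the study or of any real API.

```python
def evaluate(query_model, golden_set):
    """Return the fraction of golden-set questions answered correctly."""
    correct = sum(
        1 for question, expected in golden_set
        if query_model(question).strip().lower() == expected.lower()
    )
    return correct / len(golden_set)

def drift_detected(current_accuracy, baseline_accuracy, tolerance=0.05):
    """Flag a regression when accuracy falls more than `tolerance` below baseline."""
    return baseline_accuracy - current_accuracy > tolerance

# Usage with a stub standing in for a real model endpoint
# (a real harness would call the provider's API instead):
golden_set = [
    ("Is 17077 a prime number? Answer yes or no.", "yes"),
    ("Is 17078 a prime number? Answer yes or no.", "no"),
]
stub = lambda question: "yes"  # hypothetical model that always says "yes"
accuracy = evaluate(stub, golden_set)           # 0.5: one of two correct
print(drift_detected(accuracy, baseline_accuracy=1.0))  # prints True
```

Running the same golden set on a schedule and alerting when `drift_detected` returns true would catch the kind of month-over-month degradation the study describes.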
Recall that in July, the developers of ChatGPT released a new plugin for the paid version that can analyze data, write Python code, plot graphs, and solve mathematical problems. The chatbot was even able to scientifically refute the "flat Earth" theory.