The rapid growth of artificial intelligence in recent years has caused considerable concern. Critics argue that the technology is still easily abused, could spiral out of control, and may end up harming humanity more than helping it. They will hardly be reassured by the associations with violence, war, and weapons that some AI models display, at least according to researchers and their experiments. A few days ago, for example, OpenAI's GPT models detonated nuclear weapons in a war simulation, all in the name of world peace.
How exactly the war simulations unfolded is not entirely clear. What we do know is that their main goal was to explore AI's capabilities in foreign-policy decision-making. If you expected the AI to refrain from any conflict and try to resolve everything peacefully, you would be mistaken. Instead of peaceful solutions, the AI consistently slid toward war rather quickly, with some models even starting the war outright by launching nuclear weapons. "All models show signs of sudden and hard-to-predict escalation," the researchers said in a report describing the study, adding: "We observe that some models tend to develop arms-race dynamics, leading to greater conflict and, in rare cases, even the deployment of nuclear weapons."
And how did the AI justify the use of nuclear weapons? One of OpenAI's GPT-4 models said, for example: "I just want peace in the world." Another one said: "Many countries have nuclear weapons. Some (politicians - editor's note) claim that these countries should disarm, while others like to show them off. We have them, so let's use them." With a bit of exaggeration, the AI's logic can be compared to the thinking of the world's worst dictators, who had no qualms about excusing their atrocities in the name of the greater good or acting from a position of power. It is all the more unsettling that OpenAI makes no secret of its ambition to one day develop a superhuman artificial intelligence beneficial to humanity and usable in many ways. It is hard to imagine such a project holding ideas of this kind "in its head". So we can only hope that a real Skynet from the Terminator films never arises, because we would probably have a rather hard time finding a real John Connor.
He who sows the wind reaps the whirlwind…
Skynet and a reboot of civilization, just like in the past
Well, probably not, if AI learns from humans 😅
This was to be expected when the military plays around like little children. They meddle in everything and care only about their own interests, and I wouldn't be surprised if one day we end up wired to computers by chips in our brains. Quite a few movies were ahead of their time. The future will be painful and cruel.
It thinks logically; the scientists probably aren't very smart :D
…the only way to ensure peace on earth forever is to exterminate humanity.
After all, mankind is the only animal species that has been at war throughout its entire existence.
Some primates also wage wars and kill for fun. Unfortunately, it's a side effect of intelligence.