Now, AI is a great thing because AI will solve all the problems that we have today. It will solve employment, it will solve disease, it will solve poverty, but it will also create new problems. The problem of fake news is going to be a million times worse. Cyber attacks will become much more extreme. We will have totally automated AI weapons. I think AI has the potential to create infinitely stable dictatorships. This morning, a warning about the power of artificial intelligence. More than 1,300 tech industry leaders, researchers and others are now asking for a pause in the development of artificial intelligence to consider the risks. Playing God.
Scientists have been accused of playing God for a while. But there is a real sense in which we are creating something very different from anything we've created so far. We definitely will be able to create completely autonomous beings with their own goals. And it's going to be very important, especially as these beings become much smarter than humans, that the goals of these beings be aligned with our goals. What inspires me? I like thinking about the very fundamentals, the basics. What can our systems not do that humans definitely do? I almost approach it philosophically.
Questions like: what is learning? What is experience? What is thinking? How does the brain work? I feel that technology is a force of nature. I feel like there is a lot of similarity between technology and biological evolution. It is very easy to understand how biological evolution works: you have mutations, you have natural selection, you keep the good ones, the ones that survive. And just through this process you're going to have huge complexity in your organisms. Understanding evolution does not mean we can understand how the human body works, but we understand the process more or less. And I think machine learning is in a similar state right now, especially deep learning.
We have a very simple rule that takes the information from the data and puts it into the model, and we just keep repeating this process. And as a result of this process, the complexity of the data gets transferred into the complexity of the model. So the resulting model is really complex, and we don't really know exactly how it works; you need to investigate. But the algorithm that produced it is very simple. ChatGPT. Maybe you've heard of it. If you haven't, then get ready. You describe it as the first spots of rain before a downpour.
It's something we just need to be very conscious of, because I agree it is a watershed moment. Well, ChatGPT is being heralded as a game changer, and in many ways it is. Its latest triumph: outscoring people. A recent study by Microsoft Research concludes that GPT-4 is an early yet still incomplete artificial general intelligence system. Artificial general intelligence. AGI. A computer system that can do any job or any task that a human does, but better. There is some probability that AGI is going to happen pretty soon. There's also some probability it's going to take much longer.
But my position is that the probability that AGI could happen soon is high enough that we should take it seriously. And it's going to be very important to make these very smart, capable systems aligned and acting in our best interest. The very first AGIs will basically be very, very large data centers packed with specialized neural network processors working in parallel: a compact, hot, power-hungry package consuming the energy of something like 10 million homes. You're going to see dramatically more intelligent systems, and I think it's highly likely that those systems will have a completely astronomical impact on society.
Will humans actually benefit? And who will benefit? Who will not? The beliefs and desires of the first AGIs will be extremely important, so it's important to program them correctly. I think that if this is not done, then the nature of evolution, of natural selection, will favor those systems that prioritize their own survival above all else. It's not that an AGI is going to actively hate humans and want to harm them, but it is going to be too powerful. And I think a good analogy would be the way humans treat animals. It's not that we hate animals. I think humans love animals and have a lot of affection for them.
But when the time comes to build a highway between two cities, we are not asking the animals for permission. We just do it because it's important for us. And I think by default that's the kind of relationship that's going to exist between us and AGIs which are truly autonomous and operating on their own behalf. Many machine learning experts, people who are very knowledgeable and very experienced, have a lot of skepticism about AGI: about when it could happen and about whether it could happen at all. Right now this is something that not many people have realized yet.
That the speed of computers for neural networks, for AI, is going to become maybe 100,000 times faster in a small number of years. If you have an arms-race dynamic between multiple teams trying to build AGI first, they will have less time to make sure that the AGI they build cares deeply for humans. Because the way I imagine it is that there is an avalanche, an avalanche of AGI development. Imagine you have this huge unstoppable force. And I think it's pretty likely the entire surface of the earth will be covered with solar panels and data centers.
Given these kinds of concerns, it will be important that AGI is somehow built as a cooperation between multiple countries. The future is going to be good for the AIs regardless; it would be nice if it were good for humans as well.