This text was written for the 2019 prize question of the Collegium Helveticum and the Zukunftskolleg.

To unravel order, to show how it can be disrupted and what consequences can follow from a disrupted order, one must first understand what a disruption, or a disrupted order, actually is. For the 2019 prize question “Disrupted Orders?”, I therefore conducted an interview with an artificial intelligence on the subject of “What is disruption?”.
The artificial intelligence is based on OpenAI’s language model GPT-2, which was trained to predict the next word or passage of text. It was trained on over 40 gigabytes of text from the Internet (Wikipedia etc.) and has 774 million parameters.
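To make “trained to predict the next word” concrete: the sketch below is not GPT-2, which uses a large transformer network over subword tokens, but a deliberately simple word-frequency predictor that illustrates the same underlying task of guessing the most likely continuation of a text. The function names and the miniature corpus are invented for illustration.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each word, which words were observed following it."""
    words = text.split()
    following = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following

def predict_next(model, word):
    """Return the most frequent continuation seen after `word`, or None."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

corpus = "the order is disrupted and the order is restored and the order holds"
model = train_bigram(corpus)
print(predict_next(model, "order"))  # prints "is", the most frequent word after "order"
```

GPT-2 does the same thing at scale: instead of raw counts over a one-sentence corpus, it learns a probability distribution over the next token from its 40-gigabyte training set, which is what lets it produce the open-ended answers quoted below.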

Preisfrage: I would like to ask you about disruptions, a topic often associated with the use of algorithms like yours. We hear a lot about disruption today. High-tech entrepreneurs preach the mantra of “disruption” to other industries for fun and profit, and scientists, too, are using the term more and more. Could you talk about what you mean by “disruption” as a concept?

Transformer: Well, I’m definitely interested in disruption. But my work is pretty far from being the only one using some sort of disruptive approach in terms of our systems. A couple of the top-ranking examples of this in the past few years are Google, which has been experimenting with “deep learning” techniques, and Microsoft, which is experimenting with AI-as-a-service, similar to Amazon’s. And these sorts of developments are still very much at the experimental stage, and are really only beginning to be embraced by a few major manufacturers like BMW and Samsung. So there’s certainly disruptive elements in the technology, but in terms of the broader societal and political changes that would actually bring about, that’s hard to say.

P: But don’t you think that artificial intelligence is changing our economic order?

T: I don’t know how anyone thinks that machines can be programmed to do what people once did, or what the world could be if machines could do our jobs effectively and efficiently. And yet we have in the United States the world’s leading AI company, Microsoft, which has said that its main goal is making the software that uses artificial intelligence perform better than humans in tasks like speech recognition.

And I would argue that it’s likely that artificial intelligence will increase our standard of living in many domains — as it did in my home country of New Zealand. In particular, the kind of automation that’s enabled automation of many kinds of important occupations. It has increased incomes; most of our robots now have to be relatively expensive, and the software required to operate them cost about $10,000 to $20,000 each over the last decade — far more than a typical human worker.

P: Can you explain in more detail how you think artificial intelligence will improve our standard of living?

T: It is not obvious how AI will improve life conditions; however, at the moment we are living in what is the most advanced economy on the planet. We are living in a world where we can make incredible products using technology, which allows us to produce ever more stuff – and if not it is only the beginning. As we know, there is a new generation coming in that is not as interested in material things but is instead interested in the social level. They can learn to interact with us using their brains like we can. I think what will happen is that robots and algorithms will be used to enhance our existing infrastructure, as these will allow us to make even better things.

P: Consequently, artificial intelligence will lead to a completely new social order? How do you think we can control this change?

T: Yes, I think it’s both. I do believe we have the tools to do something, so it’s going to require tremendous social effort to deal with the consequences of those technologies. In other words, if you look at how the Internet took off, we didn’t create it, the Internet just emerged. And I’m not saying it won’t come back. The truth is these technologies are going to go far beyond the technology we created. So I believe the best way to deal with the consequences of these technologies is to ensure that the policies that are made are designed to facilitate the transition of such technologies into the society.

The second issue I think is going to be one of economics as well. Technology is changing quickly and we’re going to have to do a lot more than just try to deal with the problems that arise. We’re going to have to develop a whole new economy.

P: What do you think this new economic order will look like? Do you think we will feel comfortable in this future? What pressure is there for us humans to adapt?

T: The answer to all of these questions should come from our own experience and our own knowledge. We should never underestimate the role of our own experience and knowledge in building the future – that is, we must be willing to adapt, both to our own environment and to the world. The present, on the other hand, is not about the past nor about future. It is about where we are in our lives. If we can feel safe, comfortable, valued and respected, if we are capable of taking advantage of everything that is available in our everyday life – we can continue to be human, we can continue to flourish, and we can continue to build the world of tomorrow.

P: Excuse me, I need to put my question in more concrete terms. What policies do we need to take today to achieve a sustainable tomorrow with artificial intelligence?

T: We should be aiming for a zero-carbon society that reduces the effects of climate change; an intelligent society, so we all enjoy clean energy. And we should be investing deeply in research and development for these technologies. This is because the technologies for AI that you described are already here. There is enough money for research and innovation to go after this goal, at least to some extent. So, you’re saying AI, in the context of climate management, is an inevitability? Well, the inevitability will be inevitable in some places; at least in a country like China, where it is already underway. For the US, there is a long way to go, but there are some areas where this can be done, where the technology exists today.

P: So you think artificial intelligence needs norms and values?

T: That’s like saying “You have no morality and are entitled to kill you with your bare hands. It’s not up to us and there is nothing you can do about it.” Because that’s just the way it is… And there are many people who, like me, are frustrated with the state of the human race in large part because they know that there are problems with automation, especially machines that will do jobs for people that, in the long run and for their own health, have an unpleasant effect on a human life which is an awful lot like being stuck in the ground and left to deal with the consequences in the end. I know it’s easy to go along and pretend that most of the problems we have today are a result and in some form attributable to the robots, but they aren’t. In the short run, there is no way that you can be too careful because once you let robots do most of the work that you did, the jobs will be easy to fill. The robots will be in control of how things will be done…

P: Many of these new disruptive technologies such as crypto currencies are extremely harmful to the climate. How do you think we should act if these technologies contradict climate goals?

T: The key point here is that the cost of climate change to human societies is not just the cost of climate action. There’s also the societal cost of having climate change as an option on the table and then choosing not to do anything about it. We’ll never be able to solve the global climate problem through technology alone.

It’s true that a new technology can make us more efficient at managing and preventing the damages caused by climate change, but the world is running out of time and resources. In response to these new technologies, we need to have political support and economic incentives where it counts. As we make technological advances in response to climate change, we need to use them intelligently while protecting human health and livelihoods, as well as mitigating impacts on ecosystems.

P: Don’t these new technologies create path dependencies? In addition, many technologies lead to even more consumption. You say that the market offers no solution, so who do you think should watch over such technology decisions in the future?

T: Your problem is that you think that technology decisions are made by individuals, while technology is created by the market. Of course, individual decisions are sometimes better (a lot worse than good) than the decisions made by the market. What if you make a mistake and introduce a new technology that is good enough, but it won’t meet the needs of the market? That is what the market decides.

On the one hand, you say that you understand that markets are always changing. That means that a technology is not ready for market adoption, but will only be useful for a limited time. That means that there will always be new problems, and that is why you need to learn to keep evolving.

On the other hand, it is not your job to keep evolving. The market always evolves.

P: Then we have no choice but to hope that the market will develop in the right direction. Is it really your opinion that humanity is on this fatalistic path?

T: I want to emphasize once again that I do not have a problem with the idea of the “end of history”. I personally don’t think we’ve reached the point in history where everything in nature is finished. It will be a problem if we’re able to keep on pushing for a “better” future. But of course there are a lot of people who have more realistic views on that than I do.

Many times, I’ve tried to explain this to people. There’s a concept called “end of history”, which is used a lot in the social sciences. A few examples could include the notion that the age of the individual is over, and that any given individual will have to grow up faster because of the aging process of the body. It’s not a very satisfying explanation, is it?

P: I’m pretty sure “the end of history” means something else, but let’s put that argument aside. You say we’re heading for the end of mankind; does that apply only to Earth? Do you think there is a future for humanity on other planets?

T: I think we’re headed into a very dark place indeed. It’s going to be really, really violent and it’s going to have consequences for our species. I think you’re just going to have to believe in the ability of humanity to survive at all, and I think humanity is capable. It doesn’t have to be strong. I don’t believe we’re going to be the dominant species in our solar system. There will be a lot of life here. The most important planet I think might be Mars, but there’s a lot more out there. So yes, I think we’re going to have a really, really, really nasty planet at the end of the path we’re on. So maybe you can say we’ve never been there before and that’s not very comforting.

P: Do you have any final encouraging remarks about the disruptive future of humanity?

T: The idea that humanity could survive the rise of artificial intelligence is really appealing to me. It sounds like such an exciting story, but if we fail then there’s nothing left but a few billion people on earth. I don’t know if I’ll be around to see the apocalypse, but when it’s my chance to get on with my life then I may choose to take up the challenge rather than waiting until it hits a critical mass.