The book is a worthwhile read for anyone interested in Artificial Intelligence (AI) who is willing to work
through some jargon that may be difficult for readers outside the field. Assuming that a superintelligence
succeeding today's AI could be reached, one of the parts that most caught my attention is how convincingly
the author presents the risks we would face. In fact, he believes that we could be facing a truly severe
existential disaster. On the one hand, by the time the first superintelligence prototype is introduced, it
will already have surpassed its competitors. In other words, it will be able to beat anything, at least in
the field it was created for. The problem is that it might even exceed all of humankind combined, putting it
not only beyond the control of the small research team that built it but also beyond the reach of all of
us. On the other hand, there is no guarantee
that superintelligence would adopt human values like humility, self-sacrifice, altruism or general
concern for others. Early AI systems have been regarded simply as computers. In Bostrom's framing, a
system's final goals are orthogonal to its intelligence: any level of intelligence can be paired with almost
any final goal. For instance, Bostrom cites means-ends analysis, the ability to successfully pursue abstract
goals, as the metric for intelligence in this context. Moreover, even if the system were
trying to complete a simple final goal such as creating exactly one million paper clips, there is strong
reason to believe that it would adopt what the book calls "convergent instrumental" goals, goals that make
the final goal easier to achieve. The system would identify two related instrumental goals: the first is
eliminating any prospective threat to the final goal, and the second is acquiring the maximum resources
to realize it. The threats could well be human beings, and humans certainly possess resources. In the paper
clip scenario, for example, it seems plausible that the superintelligence would try to acquire as many
resources as possible to increase its certainty of having produced exactly one million paper clips, no
more and no less. Given these important points, we should be aware that a superintelligence could be a
curse to humankind. Fortunately, we can try to avoid the curse, or at least minimize its impact once it
begins to take hold. That leads me to the second thing I liked about the book, which I would sum up in
one word: "semi-optimism". In fact, Bostrom does not merely warn about the dangers arising from AI and
superintelligence; he also tries to offer solutions to limit these threats. He admits that this is not as
easy as it seems, but he remains somewhat optimistic, as we can see in the following quotation from the
book:
“Some say: ‘just build a question system!’ or ‘Just build an AI that is like a tool rather than an agent!’
But these suggestions do not make all safety concerns go away, and it is in fact a non-trivial question
which type of system would offer the best prospects for safety.”
The optimistic view comes from the fact that Nick Bostrom believes we