How can artificial intelligence affect our future?
Technology is evolving at an astonishing speed, reshaping both convenience and entertainment. But what effects might these technological advancements have on our daily lives and environment in the future?
In this episode of the Ikigai Podcast, Nick and his guest, Dr. Soenke Ziesche, discuss the probable impact of technology on our daily lives and how that can influence ikigai.
- Extended Reality. At 1:59, Soenke talks about the term XR and a few other enhanced realities, and how these can possibly impact our lives in the years to come.
- "How did you stumble upon ikigai?" At 4:23, Soenke discusses with Nick why he chose to study the ikigai concept in relation to AI and XR.
- ‘Introducing The Concept Of Ikigai To The Ethics Of AI And Of Human Enhancements’. At 8:46, Soenke explains to Nick why he decided to research and write this paper together with his co-author, Roman Yampolskiy.
- AI can possibly eliminate some ikigai sources. Nick asks Soenke at 10:23 about the probability of AI disrupting ikigai activities in the future.
- "The Risks". At 14:18, Soenke touches on the two categories of risk that experts have defined, as well as "i-risk", a term he coined himself.
- The unknown ikigai activities of the future. Nick and Soenke talk about new possible opportunities to experience ikigai at 17:23.
- Eliminating the suffering. At 24:48, Nick asks Soenke about AI eliminating people's suffering and its risks regarding ikigai.
- Ethical guidelines. Soenke talks about his point that ikigai research should be a critical component for framing ethical guidelines in AI at 27:59.
- Ikigai optimization. Nick and Soenke discuss the four areas where AI will be able to assist us with ikigai optimization at 32:17.
- Ikigai designer. At 38:04, Soenke talks about his idea of the 'ikigai designer', and how it might become a profession in the future.
- Ikigai and AI safety. At 42:39, Soenke talks about the AI value alignment problem, and how ikigai might help in aligning AI values to human values.
- AI welfare. Soenke discusses the welfare of potential future sentient beings at 45:36.
- "What is your ikigai?". At 53:28, Soenke explains to Nick what his ikigai is.
Dr. Soenke Ziesche
Dr. Soenke Ziesche has a Ph.D. in Computer Science from the University of Hamburg and has worked for over 20 years for the United Nations in the humanitarian and recovery sector. He also worked as a senior researcher on artificial intelligence at the Maldives National University and is currently based in New Delhi, India, as an independent consultant for the United Nations.
Definition of XR
Soenke’s area of research and expertise is artificial intelligence (AI), the intelligence demonstrated by machines. In his paper, he also mentions extended reality (XR), an umbrella term collectively referencing several different types of enhanced reality:
- Augmented reality (AR) - a view of the real world enhanced by computer-generated information.
- Virtual reality (VR) - a fully simulated experience, which may resemble the real world or differ from it completely.
- Mixed reality (MR) - a blend of physical and digital worlds.
Soenke believes these types of enhanced reality could become increasingly important over the next 10 to 20 years. These technologies have progressed considerably over the past few decades, and that progress is likely to continue. People use them for many opportunities and positive outcomes, but he thinks it is equally important to consider the risks they may pose.
How did you stumble upon ikigai?
Soenke shares that his Japanese wife introduced him to the word ikigai. He is deeply interested in the concept and believes it plays a vital role in living a fulfilling life in a world potentially transformed by these technological advancements. While closely monitoring developments in AI and XR, he felt something was missing and worried about a possible lack of fulfilling activities in the future; he thinks that connecting ikigai with these technologies could be the solution.
AI may eliminate some ikigai sources
Soenke shares his investigations into time-use research, which allows researchers to check how much time people spend on certain activities. Typically, time can be categorised in four different ways:
- Necessary time - the time required to maintain oneself
- Contracted time - the time allocated to work or study
- Committed time - the time spent with family or at home
- Free time - the time remaining after the other three types are subtracted
He thinks that the time people spend on certain activities is likely to change in the future: because of new technologies, people might end up with more free time and less contracted and committed time. This led him to conclude that the accumulation of free time may diminish or even eliminate activities people consider their ikigai.
The risks
Two categories of risk have previously been identified in association with AI and XR:
- Existential risk (X-risk) - a term coined by the philosopher Nick Bostrom, referring to risks that could lead to the extinction of life entirely.
- Suffering risk (S-risk) - a term proposed by the Foundational Research Institute, referring to scenarios in which people survive but experience increased suffering.
It's not sufficient to be alive and not to suffer if we want a sense of purpose in life. - Dr. Soenke Ziesche
Soenke shares that ever since X-risk and S-risk were proposed, people have looked for ways to avoid or minimize them. However, he feels that his own term, i-risk, addresses a gap: even if people survive and avoid suffering, something may still be missing. For him, it is not sufficient to be alive and free of suffering if people have no purpose in life.
Nick says the term i-risk reminds him of a conversation with a previous podcast guest: people can be mentally stable and healthy, but without a sense of life satisfaction and purpose they will still suffer, so it is crucial for people to have ikigai.
The unknown ikigai activities of the future
According to Soenke, much research is being done on how to ensure that AI will be human-friendly, a field highly relevant to ikigai. AI can be ikigai-friendly in the sense that it can gather data to support humans in finding ikigai and in pursuing activities they may not have considered. Soenke also believes that AI could positively nudge people towards activities that benefit their environment.
Eliminating suffering
Soenke says that as technologies advance and improve, they are increasingly likely to change people's lives fundamentally; for example, there will likely be developments that enhance people's physical and mental capabilities. He thinks that if people enhance themselves, it may be possible for them to achieve bliss; technologies that act on people's brains and emotions, for instance, may lead to the eradication of depression. However, this also concerns Soenke: he worries that if people only ever experience happiness, their lives will lack balance -- which brings us back to the importance of having ikigai.
If we are forever blissful and happy, we might get sick of it. It might become a source of suffering in and of itself. - Nicholas Kemp
Nick points out that if people are forever blissful and happy, they might get sick of it, and it might become a source of suffering in itself.
Ethical guidelines
AI ethics is a system of moral principles and techniques intended to inform the responsible use of AI technology. According to Soenke, its development has been delayed because it is hard to foresee what will happen as the research progresses. AI ethics is also linked to the field of AI policy: to what extent should governments be involved, and how should these areas be regulated? Here, too, Soenke finds something missing: any mention of ikigai, and of how people should spend their lives in the future. He believes this is an important area for future research.
Ikigai optimization
Soenke points out several areas where AI can assist with ikigai optimization; for example, AI may help people find and create new types of ikigai, and identify ikigai activities that benefit society and the world as a whole. The idea is to build AI that encourages people to pursue activities that are also good for the planet. Ikigai optimization gives people the freedom to pursue more ikigai activities, because people are expected to have far more free time in the future.
Ikigai designer
Soenke thinks that in the future we might see the emergence of 'ikigai designers' -- people who create ikigai for others, much as video game designers today develop and design games for other people's enjoyment.
Ikigai is something personal, something people find for themselves. However, Soenke thinks that with enough data about what people's ikigai are, it would be feasible to design ikigai for others.
Ikigai and AI safety
To ensure that AI/ikigai research is safe, Soenke thinks it is important to ask how we can guide AI to pursue goals and values aligned with those of humans. He explains that AIs have no feelings for humans, but they do have goals. Attempts to solve the value alignment problem by teaching AI human values have so far failed: the AI simply persists with its original goals. Failing to solve this may constitute an existential threat to humanity, since there is no reason to assume an AI will turn out to hold values aligned with ours.
AI welfare
AI welfare concerns the possibility that sentient digital beings exist and may suffer to some degree. Soenke says this is a highly philosophical area of exploration, but it is important to consider: even if there is only a small probability that suffering digital sentient beings exist, the risk, and therefore the potential impact, could be high.
What is your ikigai?
Soenke's ikigai is his love of learning and of anticipating the future through the lens of ethics. He devotes time to ethical questions and issues that have not yet been considered, imagining and contemplating them, and he wants to contribute to those questions by validating his ideas through research and sharing them with others.
Technology is a vital part of our lives, making them easier, improving our work, and even helping to make the world better -- but it is also important to be aware of the risks we may encounter as we develop new technologies. We don't yet know how people will live in the future, or what the future will look like more generally. As Soenke has found, AI research holds a lot of promise, but we need to keep in mind how it might affect ikigai -- and whether that impact will be positive, and help us live more meaningfully.