Artificial intelligence is clearly making rapid progress. Technological advances have always brought concerns and anxieties, but so far we have enjoyed their benefits while dealing with each risk as it arose. This record of success has given us confidence in, and justification for, the progress of science and technology.
However, can we continue to cope if scientific and technological progress proceeds in this manner? Dealing with individual technologies and their risks is partly a matter of trial and error: some consequences cannot be understood until we actually face them. Yet when we view the advancement of science and technology from a broader perspective, it seems unlikely that we, as individuals or as a society, can keep absorbing it indefinitely.
This article argues that the advancement of science and technology faces limits, arising from the bounds of human capacity and from society's capacity to handle risk.
The evolution of artificial intelligence not only poses a direct threat in itself; by accelerating progress across many scientific fields, it also threatens to overwhelm society's risk capacity within a much shorter period. The pace of scientific and technological progress has therefore become a more pressing issue.
Intellectual and Moral Limits
In economics and sociology, individuals are often modeled as rational actors seeking to maximize their own interests, allowing for analysis of society as a whole.
To make a rational choice, one must be able to predict which option will maximize one's interests. When choices are too complex or unpredictable, however, even a subjectively rational choice can be objectively irrational. Since individual abilities vary, this means society has an intellectual limit: a bound on the complexity of decisions its members can handle rationally.
Human behavior is also shaped by conscience, socialization, education, and moral awareness, and people's actions can be guided not only by spontaneous morality but also by penalties and incentives. When problems are overly complex, what subjectively seems morally right, or seems to avoid punishment, may be objectively immoral. These variations in judgment and moral sense point to an analogous moral limit.
The Necessity of Facing Limits
Education, ethical thought, and public enlightenment can expand these limits over time, but we cannot expect dramatic gains, just as the world record in the 100 m dash does not see substantial reductions.
Our society aims to accept a wide range of intellectual and moral differences, rooted in the idea of basic human rights. We must avoid endorsing exclusions or elitism based solely on ability, except in cases of criminality or antisocial intent.
Thus, we must consider the intellectual and moral limits of society in our deliberations, accepting that objectively irrational or immoral actions will occur.
Intellectual and Moral Limits in Economic Systems
Capitalism's success over socialism can be explained in terms of intellectual and moral limits. Central planning asked planners and citizens to exceed those limits: to process vast amounts of economic information and to act for the collective good without direct personal incentive. Capitalism, by contrast, relies neither on high morality nor on high intelligence for rational decision-making; each actor need only pursue their own local interest. This modest demand on human capacity is key to capitalism's success.
Average and Marginal Limits
We should distinguish between the limits of people with average abilities and the limits of those at the low end of the ability distribution. I will call the former 'average limits' and the latter 'marginal limits.'
For instance, the success or failure of capitalism versus socialism should be evaluated based on average limits, as the actions of the majority significantly impact the economy. However, when considering moral transgressions or criminal decisions, we should focus on marginal limits. Even those with high intellectual capabilities but near-marginal moral capacities might be tempted to make criminal choices.
Individual Responsibility and Social Responsibility
Actions that amount to crimes or moral violations can be seen as matters of individual responsibility. From the victims' perspective, however, society also bears some responsibility for failing to deter them. This viewpoint requires acknowledging that society has a duty to minimize the impact of crime and moral violations.
Without this perspective, we get a society that focuses on setting rules and punishing those who break them. If the rules lie within the average limits of intelligence and morality, and violating them causes no great harm, society may not deteriorate much. But if complying with the rules requires exceeding the average limits, crime and moral violations become widespread, harming many people.
Even rules within the average limits may still exceed the marginal limits, making them too difficult for some members to follow and driving those members to criminal or morally wrong actions. The cumulative harm caused by these violations, together with the cost of punishing the offenders, is the negative impact society must bear.
When introducing new systems or structures, it’s a societal responsibility to consider not just their benefits but also the negative impacts arising from exceeding intellectual and moral limits.
The Impact of Exceeding Marginal Limits
When considering new systems or structures, we typically focus on the average limits. If a system is too complex for most to understand or comply with, it’s evident that it won’t function well. However, what’s often overlooked is the impact of exceeding marginal limits.
This impact must be weighed from two perspectives. First, if the negative impact exceeds the benefits, there is no rational case for implementing the system. Second, even if the benefits outweigh the negative impact, the system still cannot be implemented if that impact exceeds what society can absorb.
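These two conditions can be stated compactly. As a minimal formalization (the symbols are mine, introduced purely for illustration), let $B$ denote a system's expected benefit, $D$ the negative impact arising from exceeding intellectual and moral limits, and $C$ society's capacity to absorb that impact. A system is then admissible only if

$$B > D \quad \text{and} \quad D \le C.$$

The second condition is the decisive one: it can fail even when the benefits comfortably exceed the harm.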
Minimizing the Cost of Man-Made Catastrophes
I refer to negative impacts so excessive that they disrupt society itself as 'man-made catastrophes': nuclear war, for example, or environmental destruction that threatens the continuation of current societies.
As technology advances, the variety of potential man-made catastrophes increases, and some of them require less effort or money to set in motion. In other words, the minimum cost required to trigger such a catastrophe tends to decrease with technological progress.
For instance, advances in biotechnology raise concerns about engineered pandemics, and progress in AI points toward AGI or ASI surpassing human intelligence, raising fears of domination by a superior AI. These concerns could plausibly materialize as man-made catastrophes at costs far lower than those of nuclear war or large-scale environmental destruction.
Rules and regulations are expected to prevent misuse of these technologies, but they are typically designed around the average limits. Ensuring that these technologies are never misused by those at the marginal limits is exceedingly difficult.
There are, after all, many highly intelligent individuals with low moral consciousness, and under stress or suffering people can be driven to decisions that are irrational even by their own interests.
For ordinary crimes or moral violations, society can compensate the victims and punish the offenders. But if such individuals collaborate to cause a man-made catastrophe, the damage cannot be compensated, and punishing the offenders is beside the point.
The lower the minimum cost of a man-made catastrophe, the higher the risk. Viewed through the framework of intellectual and moral marginal limits and the minimum cost of man-made catastrophes, a dilemma comes into view: technological progress may eventually exceed society's capacity.
Addressing the Dilemma’s Challenges
As long as technological advancement is predicated on free development, the minimum cost of man-made catastrophes will keep decreasing over time, while the marginal limits of intelligence and morality remain almost immutable. The inevitable conclusion is that society will eventually face catastrophes exceeding its capacity.
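This can be sketched as a simple crossing argument (again, the notation is mine and only illustrative). Let $c(t)$ be the minimum cost of triggering a man-made catastrophe at time $t$, and let $r$ be the resources realistically available to actors at the marginal limits of morality. If free development drives $c(t)$ steadily downward while $r$ stays roughly constant, there is some time $t^{*}$ after which

$$c(t) < r \quad \text{for all } t \ge t^{*},$$

and from that point on, a catastrophe lies within reach of precisely the actors whom regulation cannot reliably deter.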
Part of the difficulty is that this dilemma is rarely discussed at all, which makes it hard to understand and harder still to counter.
Commonly proposed solutions, such as enhanced ethical and moral education, safety technologies to counteract misuse, regulation of technology development and use, and greater openness and accountability in development, may seem viable. But these general approaches have limited efficacy against the dilemma discussed here.
Ethical and moral education might slightly raise the average moral limit, but it does little to raise the marginal limit. Safety technology seems promising, yet defensive technology is often harder to develop than destructive or offensive technology, and it is uncertain whether safety measures can keep pace with the technologies of misuse they must counter.
Regulation works on individuals near the average limits of intelligence and morality, but it fails against those with high intelligence and low morality; stopping them would require comprehensive control of technology development, including underground activity.
Promoting openness and accountability in technology development works for those near the average moral limits, but it is counterproductive for those near the marginal limits, since openness multiplies the opportunities for misuse.
In short, general measures are insufficient once the marginal limits are taken into account. Facing a future in which man-made catastrophes exceed society's capacity, we must therefore contemplate a society that is not predicated on free technological advancement, and ways of achieving it.
Side Risks of Artificial Intelligence
As noted earlier, AI technology itself poses a risk of man-made catastrophe. But even if AI is controlled well enough never to be a direct cause, it accelerates technological progress across many fields, and that acceleration could drive down the minimum cost of man-made catastrophes within a much shorter time frame.
This discussion is urgent as of January 2024, given the pace of AI development. Had AI developed more slowly, we would have had more time to deliberate on future societal structures in light of the minimum cost of catastrophes and the marginal limits of intelligence and morality.
At its current pace, however, AI may produce visible societal risks before we have had time for that discussion, and likely in technological areas we do not expect.
We therefore urgently need to advance the discussion of the problems caused by free technological progress. This goes beyond assessing specific technological risks and responses; it requires comprehensive measures for technological advancement as a whole, including responses to risks we cannot yet identify.
In Conclusion
The starting point of this discussion is acknowledging the limits of human capacity and the ceiling on society's capacity, which forces us to recognize that we cannot sustain all of our current values and ideals. At the very least, societal safety and free technological progress are incompatible.
Faced with this dilemma, many may fall into despair or resignation, or retreat into the optimistic hope that the problems will somehow be avoided or that someone else will find the solutions. Difficult as it is, however, we must continue to seek solutions and deepen our understanding of these issues.
Serious deliberation might require sacrificing some values we’ve taken for granted, like limiting free technological development or strictly controlling access to cutting-edge scientific information and materials.
Deeper examination might lead to reconsidering the trade-offs between individual freedom and societal capacity, and the balance of power between democracy and elite management, delving into the fundamentals of human rights and social governance.
A major obstacle to this discussion is that no one is designated to lead it and no one is an established specialist in it. Unlike problems with clear responsible parties or recognized experts, changes to society's frameworks and values have no internal party who can be assigned responsibility for them, and no single field of expertise covers all their facets.
That means anyone can contribute to this discussion. If everyone avoids it on the grounds that they lack expertise or responsibility, the risks will keep growing while the discourse stands still. It falls, then, to conscientious individuals to deepen this discussion.