Should We Be Afraid of Superintelligent AI?
From the US Atomic Energy Commission's 1958 plan to use nuclear explosions for large-scale construction to Google Glass, history is littered with short-lived projects and technologies. Artificial intelligence (AI), however, is clearly not one of them: it continues to assert its presence.
Recently, the British journal Nature Outlook published a lengthy article describing how AI is gradually permeating our daily lives, while people are both enjoying and fearing it. Amid this sentiment, technologists driving the next automation revolution need to confront a serious question: what will the public want next?

Is anyone still unaware of AI?
The problem of sorting spam arose as soon as the first digital mail was delivered, and the technology for picking legitimate email out of the vast sea of spam is still improving today. Clever spammers quickly found ways to circumvent simple filters, which prompted email platforms to turn to AI.
Now, after extensive training, AI can effectively sort emails without human intervention. The fears people had at the beginning of this century that spam would suffocate legitimate email haven't materialized. Machine learning algorithms are remarkably adept at identifying the "tricks" of mass email.
We still need to check our spam folders from time to time, however, because machine learning isn't perfect: if its training data isn't kept up to date, it can be defeated by spammers' new tricks. Even so, things have become much easier, and machine learning is arguably the best tool we've ever had for the job.
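As a rough illustration of what such a filter learns, here is a toy Naive Bayes spam classifier; the training messages and word counts are invented, and real spam filters use far larger corpora and many more features than bare word frequencies:

```python
from collections import Counter
import math

def train(messages):
    """Count word frequencies per class from (text, label) pairs."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in messages:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Pick the class with the higher log-probability (Laplace smoothing)."""
    vocab = set(counts["spam"]) | set(counts["ham"])
    best, best_score = None, float("-inf")
    for label in ("spam", "ham"):
        score = math.log(totals[label] / sum(totals.values()))  # class prior
        n = sum(counts[label].values())
        for word in text.lower().split():
            # +1 smoothing so unseen words don't zero out the probability
            score += math.log((counts[label][word] + 1) / (n + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

training = [
    ("win free money now", "spam"),
    ("free prize claim now", "spam"),
    ("meeting notes attached", "ham"),
    ("lunch tomorrow with the team", "ham"),
]
counts, totals = train(training)
print(classify("claim your free money", counts, totals))  # → "spam"
```

Keeping the filter effective against new tricks amounts to refreshing `training` with recent examples, which is exactly the "database kept up to date" point above.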
Email systems are just one example. With machine learning, streaming services can recommend movies, online retailers can suggest items you might order next, and photo apps can identify people, and even flowers, in your pictures. Yes, this is all AI.
AI is creeping in, but it is no longer invisible. Many of us interact with computers through voice every day, and Google's AlphaGo, which used machine learning to defeat a human champion at the roughly 3,000-year-old game of Go, is particularly impressive.
Why are people afraid of AI?
Organizations and companies are now investing heavily in applying machine learning to self-driving cars. Clearly, this is a far more ambitious and riskier project than identifying spam patterns, and in the process developers must confront a serious question that every researcher driving the automation revolution faces head-on: what is the public thinking?
In the field of AI, science fiction has dominated public perception for the past century. The image of AI in fiction has profoundly influenced public opinion, leading to a widespread reaction to its growing prominence: fear.
Part of this fear may stem from the idea that machines could possess cognition not unlike our own. The way AI research is reported can also cause panic. For example, in June 2017, Facebook AI researchers reported that two chatbots had begun using code words in their conversations, and some news reports portrayed the researchers as hastily terminating the experiment to keep the situation from getting out of control.
To ensure that AI develops in line with human interests and values, efforts can be made on several fronts:
Technical Level
- Security by Design:
- Developers should build security mechanisms into AI systems at the design stage. For example, encryption should be used to protect data and prevent malicious attacks that could leak data or tamper with the system. A comprehensive access control system should also be established to ensure that only authorized personnel can access and operate AI systems.
- Establish a strict security audit mechanism for AI systems to regularly check system security and compliance, and promptly identify and address potential security vulnerabilities.
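The access-control point above can be sketched as a minimal role-based check; the role names, permissions, and actions below are hypothetical examples for illustration, not any particular product's API:

```python
# Minimal role-based access control (RBAC) sketch.
# Roles, actions, and the deny-by-default policy are illustrative.
PERMISSIONS = {
    "admin":    {"read_model", "update_model", "view_audit_log"},
    "operator": {"read_model"},
}

def is_allowed(role, action):
    """Return True only if the role explicitly grants the action."""
    return action in PERMISSIONS.get(role, set())

assert is_allowed("admin", "update_model")
assert not is_allowed("operator", "update_model")  # deny by default
assert not is_allowed("guest", "read_model")       # unknown roles get nothing
```

The design choice worth noting is the deny-by-default rule: anything not explicitly granted is refused, which is also what makes a later security audit tractable.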
- Design for Explainability:
- Strive to improve the explainability of AI algorithms. Currently, many deep learning algorithms are considered "black boxes," making their decision-making processes difficult to understand. By developing explainable AI technologies, people can understand how AI systems make decisions and better assess their rationality and fairness.
- For example, in the medical field, doctors need to understand the factors underlying the diagnosis of AI diagnostic systems so that they can verify and further analyze the results.
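One common route to the explainability described above is to use a model whose prediction decomposes into per-feature contributions, so a doctor can see which factors drove a result. The sketch below uses a toy linear risk score; the features and weights are invented, not drawn from any real diagnostic system:

```python
# Toy interpretable risk score: a linear model whose output can be
# broken down feature by feature. Weights are hypothetical.
WEIGHTS = {"age": 0.03, "blood_pressure": 0.02, "smoker": 0.8}

def explain(patient):
    """Return the total score and each feature's contribution to it."""
    contributions = {f: WEIGHTS[f] * patient[f] for f in WEIGHTS}
    return sum(contributions.values()), contributions

score, why = explain({"age": 60, "blood_pressure": 140, "smoker": 1})
for feature, value in sorted(why.items(), key=lambda kv: -kv[1]):
    print(f"{feature}: {value:+.2f}")
print(f"total score: {score:.2f}")
```

A deep network would not decompose this cleanly, which is why "black box" models need separate explanation techniques; the point here is only what an explanation has to deliver: a verifiable account of each factor's weight in the decision.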

Ethical Code Development
- Establish Industry Standards:
- The technology industry should jointly develop ethical and industry standards for AI. These standards should clearly define the principles that must be followed in developing and applying AI, such as not harming humans, protecting privacy, and ensuring fairness and impartiality.
- Establish dedicated ethics review bodies to examine AI projects and ensure they comply with ethical standards. For example, conduct an ethics assessment before a new AI product is released, and require products that fail the assessment to be revised or barred from the market.
- International Cooperation:
- Countries should strengthen international cooperation on AI ethics. Given the global impact of AI, it is necessary for all countries to work together to develop unified ethical standards and norms.
- Organize international AI ethics seminars to promote exchanges and cooperation among countries, share experiences and best practices, and jointly address the ethical challenges posed by AI.
Education and Public Participation
- Professional Education:
- Strengthen AI ethics education in higher education. Offer AI ethics courses for students in computer science, engineering, and other related majors to cultivate their ethical awareness and sense of responsibility.
- Encourage universities to conduct AI ethics research to provide theoretical support and guidance for the development of AI. For example, research should be conducted on how to integrate ethical values into AI systems and how to resolve ethical dilemmas posed by AI.
- Public Education:
- Conduct AI education activities for the public to enhance their awareness and understanding of AI. Educate the public about the potential risks and ethical issues surrounding AI, enhancing their risk awareness and oversight capabilities.
- For example, through popular science lectures, exhibitions, and media outreach, educate the public about basic knowledge and ethical issues surrounding AI, encouraging public participation in its development and oversight.
Legal Regulation
- Legislation:
- The government should formulate comprehensive laws and regulations to regulate the development and application of AI. These laws should cover all areas of AI, such as data protection, privacy, algorithmic transparency, and accountability.
- For example, a data protection law could be enacted to clearly define the rules companies must follow when collecting, using, and storing personal data, protecting user privacy and security.
- Regulatory Mechanism:
- Establish a robust AI regulatory mechanism to strengthen oversight of AI companies and projects. Regulators should possess professional technical expertise and regulatory capabilities to evaluate and oversee AI systems.
- For example, AI projects involving important areas such as public safety and healthcare should be strictly regulated to ensure compliance with laws, regulations, and ethical standards.