Is Musk's Call for a Moratorium on AI Development Just Empty Talk?
Author: Liu Yongmou, Professor at the School of Philosophy, Renmin University of China, and Researcher at the National Academy of Development and Strategy
In March 2023, thousands of experts, including Elon Musk, signed an open letter calling for a pause of at least six months in the training of artificial intelligence (AI) models more powerful than GPT-4. The news immediately set the global internet and media ablaze, and soon drew opposition from Andrew Ng and other AI experts. In the end, nothing more came of the matter, and the call to suspend GPT-4's successors was suspected of being a wave of hype around OpenAI.
In April, Italy was reported to be banning ChatGPT outright; in the end, the matter was settled with a fine.
In May, U.S. President Biden and Vice President Harris joined in as well, meeting with the CEOs of top AI companies (Alphabet, Google's parent company, along with Microsoft, OpenAI, and Anthropic) to press them to implement safeguards around AI, and voicing support for new regulation or legislation to mitigate the potential harms of AI technologies.
It would seem that generative AI (GAI), represented by ChatGPT, Midjourney, and DALL-E 2, has spawned new and more serious social risks, fundamentally changing the overall situation of human social development and AI application, and that new strategies, measures, and methods must therefore be adopted, such as completely suspending the research and development of new AI technologies.
Is it really so? No.
First, GAI produces no genuinely new risks; the problems it may cause have long been discussed and flagged by academia. Second, no new methods are needed to deal with GAI risks; the key to managing AI risk has always been implementation.
On April 11, the Cyberspace Administration of China publicly released the "Measures for the Management of Generative Artificial Intelligence Services (Draft for Comment)", responding to GAI governance with unprecedented speed. This shows that the application of GAI has enormous social impact and must be studied carefully and responded to quickly.
01 Guarding against AI running out of control: the "political correctness" of the global technology field
GAI's possible negative social impacts fall into three categories.
The unemployment problem: it may put large numbers of knowledge workers out of work, including copywriters, illustrators, industrial designers, programmers, media practitioners, and translators.
The education problem: it may disrupt the existing education and research system; for example, students can have ChatGPT do their homework for them.
The information security problem: GAI automatically generates huge volumes of AI-generated content (AIGC) whose authenticity is hard to verify, whose stances are suspect, whose ownership is unclear, and for which accountability is difficult; it may even become a dangerous tool for challenging mainstream values and ideology.
AI risks have long drawn the attention of governments around the world and have become a focus of public concern in recent years. Clearly, the three major impacts of GAI listed above are not new problems; academia and governments were already studying them well before ChatGPT went viral.
Over the past decade, from the Internet of Things to big data, cloud computing, blockchain, the metaverse, and ChatGPT, each wave of AI enthusiasm has been accompanied by voices warning of its risks.
The social impact of new technologies is my professional concern, and I have spent a great deal of energy reminding people of the technological risks AI may pose. For example, I have written monographs such as "General Theory of Technology Governance", "The Internet of Things and the Coming of the Ubiquitous Society", "I Have No Cure for Technology Sickness", "The Metaverse Trap", and "Fourteen Lectures on Science, Technology and Society", alerting non-specialists to the social impacts and technological risks of information and communication technology (ICT).
It is fair to say that vigilance toward AI development and the prevention of AI running out of control are becoming the "political correctness" of the global science and technology field. In this atmosphere, voices openly declaring that "AI has no forbidden zones" have been greatly suppressed. As a proponent of the selective control of technology, I fully endorse this.
But as things stand, what serious risks does ChatGPT pose that must be met with such "heavy-handed" measures as a suspension of at least six months?
Some say ChatGPT heralds the emergence of super AI, and that if it is not stopped now (note: stopped, not merely paused), then once this "Pandora's box" is opened, humanity will soon be ruled or even driven extinct by super AI. It is hard to take such fanciful ideas seriously. In particular, many professionals do not regard ChatGPT as general AI, let alone as a prototype of super AI.
Super AI may indeed be a problem, but when it comes to preventing human extinction, many risks, such as nuclear war, climate change, the spread of deadly viruses, and the abuse of biotechnology, rank well ahead of super AI. Should we stop all of those first? Even if they cannot be stopped, surely they could be paused, or at the very least we could call for a pause.
Since nothing here is new, there is no need for the drastic measure of pausing development. Why?
First, a suspension cannot be justified. Some say that because the responses to ChatGPT's risks have not been fully thought through, development should be paused first. Wrong! The problem is not that the responses have not been thought through, but that they have not been implemented.
Second, a moratorium cannot actually be achieved: some AI companies would inevitably violate it, and the result would be unfair competition.
Third, a suspension would not solve the problem. Even if all AI companies really did pause GAI research, would the risks be resolved? No. Unless large language models (LLMs) are completely shut down and banned, the risks will not disappear, and upon restarting, they would still have to be faced head-on.
Today, new technology is the most important tool for human survival and development, and it cannot be abandoned. GAI's power to boost social productivity is already clearly evident. Why give up eating for fear of choking, rather than use it in a controlled way?
In the 20th century, some argued that the technology of the day was enough for humanity and that further development would only create new problems, so technology should be halted and scientists should stop their research. Such extreme and irrational ideas have never been taken seriously by society.
02 Suspending GAI research and development is unworkable: it is nothing but empty talk
AI may lead to various social risks, which academia has long pointed out, along with a range of countermeasures. The question now is therefore how to implement AI governance measures in light of national conditions, not whether to suspend AI research and development. The facts show that the idea of suspending GAI R&D is crude and simplistic, of little effect, and impossible to realize: it is pure empty talk.
Take AI-driven unemployment as an example. The AI unemployment problem, that is, ever more people losing their jobs as AI advances, is the biggest issue among AI's social impacts and involves comprehensive institutional innovation across society. The existing literature on AI unemployment is voluminous, and many concrete countermeasures have been proposed: career planning for students, improving workers' AI literacy, social security and re-employment services for the unemployed, upgrading and transforming the industrial structure, and so on.
The long-term strategic plans are equally striking: systematically reducing working hours (some places are already trialing a four-day work week); levying an AI tax (AI is the crystallization of human intelligence, so AI companies should be taxed heavily for the benefit of all); a flexible retirement system (workers could retire briefly several times over a lifetime); and so on.
The biggest impact of AI applications on contemporary governance lies in expanding economic freedom, increasing leisure time, and greatly changing the preconditions of public governance, thereby transforming the fundamental character of how society operates. But this impact also means the "AI unemployment problem" will grow ever more serious, posing severe challenges to the governance of society as a whole that must be handled prudently. The industrialization of AIGC will once again demonstrate its seriousness: without overall policy arrangements, ChatGPT will certainly cause large-scale "AI unemployment", which in turn will hinder ChatGPT's further application.
Solving the "AI unemployment problem" requires attention to both the long term and present realities.
From a long-term perspective, solving the "AI unemployment problem" involves a fundamental transformation of the human social system; it cannot be solved by the development of intelligent technology and intelligent governance alone. According to the basic principles of Marxism, the "AI unemployment problem" reflects the contradiction between the development of scientific and technological productive forces and existing relations of production. That robots can replace human labor does not mean they actually will, because such replacement would mean abolishing the exploitative system in which a minority, through institutional arrangements, forces the majority to work. In essence, solving the "AI unemployment problem" requires continually reducing laborers' working hours, giving people more leisure time, and ultimately eliminating the system of exploitation altogether. The labor history of the 20th century shows that the application of modern technology in production keeps reducing society's total necessary labor time, and has pushed more and more countries to adopt the eight-hour workday and the weekend.
From a practical point of view, the evolution of the social system takes a long time; it must be advanced gradually and steadily and must await the continued development of intelligent technology. The most urgent task is therefore to find new jobs for workers displaced by AI and to ensure that they share in the material wealth created by technological progress.
Therefore, at the outset of the AIGC industrialization boom, the state, government, and society should conduct in-depth research, plan holistically, and respond actively to the unemployment and employment pressures that may arise.
For example: improve unemployment insurance and provide re-employment training services; strengthen career planning and creativity development for young people; adjust the orientation of talent cultivation in schools, especially in the humanities, the arts, and similar disciplines; and appropriately promote industrial upgrading to create new jobs. In short, facing the unemployment risks that may accompany AI, we must abandon either-or extremes, mitigate the risks wherever possible, and adjust in real time.
03 How should China govern AI?
Building a digital China is an important engine for Chinese-style modernization in the digital era and a strong support for forging new advantages in national competition.
In building a digital China, we must promote "people-centered" AI governance so that AI truly benefits society. To that end, we must first call on the whole of society to pay attention to AI risks. Government, enterprises, NGOs, the scientific and technological community, and the public should, according to their respective responsibilities, improve policies and measures, strengthen resource integration and coordination, and form a joint force so that plans for controlling AI risks are actually implemented.
In addition, the following principles may be considered in governing AI risks:
First, apply the principle of limited tools. We must see the role of intelligent governance clearly: it is no all-purpose "perfect weapon"; in many cases it will fail, and may even end up impeding social efficiency. We should therefore recognize what intelligent technology can do to improve governance efficiency in certain fields, for certain problems, and on certain occasions, and give priority to technical means of governance where appropriate, while always bearing in mind the limits of intelligent technology. Maintain technological modesty, guard against "big data superstition", adopt a case-by-case, context-specific attitude of review, and attend to feedback on actual effects and to risk control in governance activities.
Second, insist on utilization and control in equal measure. We must bring intelligent governance into play while also keeping specific instances of it in check, preventing the power of intelligent platforms and technical-governance experts from spinning out of control. At the same time, institutional and technical methods should be used to avert the social risks that intelligent governance itself may cause.
Third, properly handle the "AI unemployment problem". The application of intelligent technology expands economic freedom, increases leisure time, and greatly changes the preconditions of public governance, thereby transforming the fundamental character of how society operates; but it also means the "AI unemployment problem" will grow ever more serious, posing severe challenges to the governance of society as a whole. It demands society-wide attention and prudent handling.
Fourth, closely integrate "smart city" and "digital village" construction. Running cities scientifically is an important strategic measure of contemporary technology governance, and the smart city is the advanced form of the scientific city. Since people today mostly live in cities, especially large and very large ones, current intelligent governance is mainly advanced around, and carried by, "smart city" construction. At the same time, building a digital China cannot neglect the construction of digital villages, and efforts must be made to narrow the urban-rural gap through digitization.
Fifth, attend to the integration of intelligent technology and people. In technological governance, the better technology and people are combined, the higher the efficiency of governance. Strengthening this integration in intelligent governance requires systematic reflection on the characteristics of various intelligent technologies and on related questions of ethics, law, psychology, and crisis management; advancing institution building, technology R&D, and talent reserves; and strengthening organizational leadership, expert consultation, and practical drills, so as to continuously and systematically improve China's intelligent governance capabilities.
Sixth, distinguish governance from manipulation in specific contexts. Intelligent governance has limits; beyond them it becomes intelligent manipulation and violates citizens' basic rights. The future development of intelligent governance must weigh the limits of each application concretely. This involves not only the goals of governance but also the means employed, and it can only be judged calmly, objectively, and cautiously in specific social contexts. To keep intelligent governance from sliding into intelligent manipulation, one crucial point is that counter-governance behavior must be tolerated to a certain degree.