An open letter has been published calling for independent regulators to ensure the safety of future AI systems, arguing that the giant AI experiments currently underway are dangerous.
The letter has been signed by a number of prominent figures, including Elon Musk and several leading AI researchers, who urge AI labs worldwide to pause the development of large-scale AI systems, citing profound risks to society and humanity.
The letter, published by the nonprofit Future of Life Institute, states:
Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders.
Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects. OpenAI’s recent statement regarding artificial general intelligence, states that “At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.” We agree. That point is now.
Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.
The list of signatories includes famous names such as author Yuval Noah Harari, Apple co-founder Steve Wozniak, Skype co-founder Jaan Tallinn, politician Andrew Yang, and several renowned AI researchers and CEOs, including Stuart Russell, Yoshua Bengio, Gary Marcus, and Emad Mostaque. However, there have been reports of some names being added as a prank, so new additions should be viewed with skepticism; OpenAI CEO Sam Altman, who is widely seen as having contributed to the current race dynamics in AI, reportedly appeared among them. The complete list of signatories is available on the Future of Life Institute’s website.
The letter is unlikely to change the current landscape of AI research, in which companies such as Google and Microsoft are rushing to release new products, often sidelining earlier commitments to safety and ethics. Nevertheless, it signals growing resistance to the “ship it now and fix it later” mentality, a resistance that may eventually reach the political arena and draw the attention of lawmakers.
The letter also highlights that OpenAI itself has acknowledged the possible need for an “independent review” of forthcoming AI systems to ensure they conform to safety standards; the signatories argue that the time for such a review has arrived.
In their view, AI labs and independent experts should use the pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development, rigorously audited and overseen by independent outside experts, to ensure that systems adhering to them are safe beyond a reasonable doubt.