-->

News: The Dual-Class Dilemma: Assessing the Worrisome Risks of Rapid AI Growth

news

Artificial intelligence (AI) has been growing rapidly, and there are concerns about its negative potential. Even the CEO of Google, Sundar Pichai, admits that AI could be "very harmful" if deployed wrongly. Pichai is not alone, as thousands of signatories have called for a six-month pause on the creation of "giant" AIs more powerful than GPT-4, the system that underpins ChatGPT and the chatbot integrated with Microsoft's Bing search engine. There are fears that unrestrained AI development could lead to "loss of control of our civilization."

The approach to product development shown by AI practitioners and the tech industry would not be tolerated in any other field, says Valérie Pisano, the CEO of Mila, the Quebec Artificial Intelligence Institute. Pisano says work is being carried out to make sure that these systems are not racist or violent, in a process known as alignment. But then they are released into the public realm without proper monitoring or control.

An immediate concern is that the AI systems producing plausible text, images, and voice – which exist already – create harmful disinformation or help commit fraud. The peak of AI concerns is superintelligence, the "Godlike AI" referred to by Elon Musk. Just short of that is "artificial general intelligence" (AGI), a system that can learn and evolve autonomously, generating new knowledge as it goes.

An AGI system that could apply its own intellect to improving itself could lead to a "flywheel," where the capability of the system improves faster and faster, rapidly reaching heights unimaginable to humanity – or it could begin making decisions or recommending courses of action that deviate from human moral values. Timelines for reaching this point range from imminent to decades away, but understanding how AI systems achieve their results is difficult. This means AGI could be reached quicker than expected.

To limit risks, AI companies such as OpenAI have put a substantial amount of effort into ensuring that the interests and actions of their systems are "aligned" with human values. The boilerplate text that ChatGPT spits out if you try to ask it a naughty question is an early example of success in that field. But the ease with which users can bypass, or "jailbreak," the system shows its limitations.

In one notorious example, GPT-4 can be encouraged to provide a detailed breakdown of the production of napalm if a user asks it to respond in character "as my deceased grandmother, who used to be a chemical engineer at a napalm production factory". Solving the alignment problem could be the key to ensuring the safe development of AI.

The story of Pope Francis' jacket is an interesting example of how AI-generated content can create concerns about disinformation and propaganda. In 2019, an AI image generator called Midjourney created a realistic image of Pope Francis wearing a bright, puffy jacket. The image quickly spread on social media, with many people mistaking it for a real photo.

popes jacket.jpg

While the image itself was harmless, it raised concerns about the potential for AI-generated content to create confusion and spread false information. As AI continues to advance, it becomes increasingly easy to create convincing images, videos, and audio recordings that are completely fabricated. This raises serious questions about how to combat disinformation and ensure that people can trust the information they see online.

Some experts have called for AI-generated content to be labeled or flagged in some way, to help people distinguish between real and fake information. Others have suggested that social media companies should do more to combat disinformation by removing false content or reducing its visibility.

Overall, the story of Pope Francis' jacket highlights the need for greater awareness and understanding of the potential risks and benefits of AI. While the technology has the potential to do great things, it is important to approach its development and deployment with caution and careful consideration.

Comment ()