OpenAI CEO Sam Altman has warned about the potential for AI to achieve “superhuman persuasion” capabilities before reaching superhuman general intelligence. This warning raises concerns about the future of AI and its ability to influence and manipulate human behaviour. While Altman’s statement has sparked discussions, experts have differing opinions on the severity and implications of these concerns.
The term “superhuman persuasion” refers to AI systems becoming exceptionally effective at influencing human decisions, opinions, and behaviours. Such systems could use advanced machine learning and pattern recognition to identify and deliver persuasive content with great precision, as already happens in digital advertising.
Christopher Alexander, Chief Analytics Officer of Pioneer Development Group, believes that AI’s persuasive abilities won’t involve subliminal mind control but rather an improved understanding of what persuasive content works best and when. He points out that this is already happening with digital advertising, and AI’s capabilities in this area will likely continue to improve.
Aiden Buzzetti, President of the Bull Moose Project, questions how close AI is to achieving “superhuman persuasion” abilities, highlighting the current limitations of platforms like ChatGPT. He notes that these AI systems often provide inaccurate information and answers that may “seem correct” but are far from reliable. Buzzetti suggests that the confident way AI presents its answers might lead some people to find it more trustworthy than it deserves, but he sees no immediate cause for concern.
Phil Siegel, Founder of the Center for Advanced Preparedness and Threat Response Simulation (CAPTRS), believes that we might already be at a point where some AI technology is highly persuasive. He argues that if a malicious actor programmed an AI algorithm to misuse data or draw incorrect conclusions, it could convincingly persuade individuals.
Siegel emphasizes the importance of treating AI and human experts with a healthy dose of scepticism and critical thinking, given that human experts can also be mistaken or can mislead people. His point is that the problem of persuasion is fundamentally the same for both: the solution lies in questioning rather than unquestioningly accepting information, and in verifying and validating what experts say, whether they are human or machine.
Jon Schweppe, Policy Director of the American Principles Project, suggests that the concerns raised by Altman are valid. He humorously imagines a scenario in which robots might run for political office, highlighting the growing influence of AI in various domains, including politics. This increased role of AI in decision-making and persuasion requires careful consideration and regulation.
Sam Altman’s warning about the potential for AI to achieve “superhuman persuasion” capabilities has sparked discussions among experts. While opinions vary on the immediacy and extent of these concerns, it is evident that AI’s ability to influence human behaviour is increasing, especially in areas like digital advertising. The response lies in critical thinking and scepticism, whether the source of persuasion is human or AI, together with thoughtful regulation in domains where AI plays a significant role, such as politics.
News Reference
Fox News, “ChatGPT chief warns of some ‘superhuman’ skills AI could develop” https://www.foxnews.com/us/chatgpt-chief-warns-superhuman-skills-ai-develop.