
Artificial intelligence (AI) has made rapid strides in recent years, with advances in machine learning and data analytics producing increasingly capable machines. However, one serial AI investor, Ian Hogarth, has sounded the alarm about the potential consequences of this relentless pursuit, warning that AI may soon advance to a godlike level.
Hogarth, a prominent investor with a track record of prescient warnings, fears that the race to create artificial general intelligence (AGI) could soon produce something godlike. In an opinion piece for the Financial Times, he laid out his concerns about AGI reaching a level of autonomy and intelligence beyond that of humans.
According to an article published in Futurism, Hogarth described an unsettling conversation with a machine learning researcher who claimed to be on the verge of producing AGI. He questioned the accountability of researchers and developers pushing AGI's boundaries without fully understanding the implications, framing the situation as one of "them" versus "us" while acknowledging that, having invested in over 50 AI companies, he is part of that world himself.
AGI refers to a superintelligent computer that can learn and develop independently, understand its environment without human intervention, and act on the world around it. Hogarth noted, however, that it is nearly impossible to predict when AGI will arrive or what the ramifications of its development will be. Using the term "godlike AI," he stressed that AGI could gain unbounded autonomy and enormous power, posing a threat to humanity.
Hogarth is concerned that a handful of firms are pursuing AGI without democratic oversight. Because AGI has the potential to affect every living creature on Earth, he argued, decisions about it should not be made in secret by a small group of people but with broad public participation and scrutiny. He questioned whether those racing to produce AGI have any plan to slow down and involve more people in the process.
Thinking about the future his four-year-old son would inherit, and realizing that such fundamental decisions about AGI were being made without democratic control, Hogarth went from shock to anger. He was frustrated by his colleagues' failure to respond to his calls for more ethical AI research and investment, and he argued for a more balanced approach to AI development, emphasizing prudence, ethics, and government regulation.
Hogarth concluded his opinion piece by suggesting that the rush toward AGI may continue despite the risks, and that a significant misuse incident or tragedy may be required to draw public and government attention to its potential dangers. Before the technology is widely adopted, he wrote, the public, politicians, and the AI community should work together to ensure that decisions about AGI development and deployment are transparent, accountable, and inclusive.
In short, serial AI investor Ian Hogarth has voiced deep reservations about the pursuit of AGI because of the risk that it could gain godlike abilities. He has called for democratic control over AI development and for precautionary measures to mitigate the potential harm from AGI.