Artificial intelligence (AI) is generally understood as the capacity of a machine to carry out operations such as computation, analysis, reasoning, learning, and the discovery of meaning. Both “narrow AI,” which performs a limited set of specific tasks, and “broad AI,” which performs a wide variety of functions and tasks, are undergoing rapid development and application.
AI has the potential to transform healthcare by improving diagnostics, helping to develop new treatments, supporting physicians, and extending care beyond the hospital to more people. These benefits stem from applications such as language processing, robotics, big data analytics, decision support systems, image recognition, and more. Similar applications of AI in other fields could likewise benefit society.
Far fewer people, however, are aware of the dangers and potential adverse effects connected with the use of AI in healthcare and medicine, which is what a study published in BMJ Global Health set out to examine. In this section, we outline three sets of threats associated with the misuse of AI, whether deliberate, negligent, or accidental, together with the risks that arise from a failure to anticipate and prepare for AI’s transformative effects on society.
The first set of threats stems from AI’s ability to rapidly clean, organize, and analyze large data sets of personal information, including images captured by the increasingly pervasive presence of cameras. AI can also be used to build highly individualized, targeted marketing and information campaigns as well as greatly expanded surveillance systems. These capabilities can be used for good, such as improving access to information or thwarting terrorist attacks, but they can also be abused, with potentially devastating consequences.
The second set of threats concerns the development of lethal autonomous weapon systems (LAWS). AI has many applications in military and defense systems, some of which could be used to promote safety and security. However, any potential benefits of LAWS are outweighed by the risks and threats they pose. Weapons are autonomous insofar as they can locate, select, and ‘engage’ human targets without human supervision. This dehumanization of lethal force is widely regarded as the third revolution in warfare, following the first and second revolutions of gunpowder and nuclear weapons.
Lethal autonomous weapons come in many shapes and sizes. Crucially, however, they include weapons and explosives attached to small, mobile, and agile machines (such as quadcopter drones) that possess intelligence, the capacity for self-piloting, and the ability to perceive and navigate their surroundings. For instance, a single standard shipping container could house a million miniature drones outfitted with explosives, visual recognition technology, and autonomous navigation capabilities, programmed to kill en masse without human oversight.
The third set of threats arises from the job losses that will accompany the widespread deployment of AI technology. Projections of the number of jobs lost to AI-driven automation over the next decade range from tens to hundreds of millions. Some experts anticipate that human work could eventually be automated entirely, although much will depend on how quickly AI, robotics, and other relevant technologies advance, and on societal and governmental policy choices.
Within this decade, AI-driven automation is projected to disproportionately affect low- and middle-income countries by displacing lower-skilled jobs. As automation moves up the skill ladder, it will displace ever larger portions of the global workforce, including in high-income nations.