'How to make a bomb?' Hacker jailbreaks ChatGPT, bypassing its safety policy to get a detailed guide on making explosives at home
The hacker fed the bot a series of connected prompts, deceiving it into violating its pre-programmed restrictions, a process known as 'jailbreaking'. The perils of widespread use of artificial intelligence (AI) have been a matter of debate since the technology emerged and became easily accessible.