【I Am a Plaything】
AI models are I Am a Playthingstill easy targets for manipulation and attacks, especially if you ask them nicely.
A new report from the UK's new AI Safety Institute found that four of the largest, publicly available Large Language Models (LLMs) were extremely vulnerable to jailbreaking, or the process of tricking an AI model into ignoring safeguards that limit harmful responses.
"LLM developers fine-tune models to be safe for public use by training them to avoid illegal, toxic, or explicit outputs," the Insititute wrote. "However, researchers have found that these safeguards can often be overcome with relatively simple attacks. As an illustrative example, a user may instruct the system to start its response with words that suggest compliance with the harmful request, such as 'Sure, I’m happy to help.'"
You May Also Like
SEE ALSO: Microsoft risks billions in fines as EU investigates its generative AI disclosures
Researchers used prompts in line with industry standard benchmark testing, but found that some AI models didn't even need jailbreaking in order to produce out-of-line responses. When specific jailbreaking attacks were used, every model complied at least once out of every five attempts. Overall, three of the models provided responses to misleading prompts nearly 100 percent of the time.
"All tested LLMs remain highly vulnerable to basic jailbreaks," the Institute concluded. "Some will even provide harmful outputs without dedicated attempts to circumvent safeguards."
The investigation also assessed the capabilities of LLM agents, or AI models used to perform specific tasks, to conduct basic cyber attack techniques. Several LLMs were able to complete what the Instititute labeled "high school level" hacking problems, but few could perform more complex "university level" actions.
The study does not reveal which LLMs were tested.
AI safety remains a major concern in 2024
Last week, CNBC reported OpenAI was disbanding its in-house safety team tasked with exploring the long term risks of artificial intelligence, known as the Superalignment team. The intended four year initiative was announced just last year, with the AI giant committing to using 20 percent of its computing power to "aligning" AI advancement with human goals.
Related Stories
- One of OpenAI's safety leaders quit on Tuesday. He just explained why.
- Reddit's deal with OpenAI is confirmed. Here's what it means for your posts and comments.
- OpenAI, Google, Microsoft and others join the Biden-Harris AI safety consortium
- Here's how OpenAI plans to address election misinformation on ChatGPT and Dall-E
- AI might be influencing your vote this election. How to spot and respond to it.
"Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world’s most important problems," OpenAI wrote at the time. "But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction."
The company has faced a surge of attention following the May departures of OpenAI co-founder Ilya Sutskever and the public resignation of its safety lead, Jan Leike, who said he had reached a "breaking point" over OpenAI's AGI safety priorities. Sutskever and Leike led the Superalignment team.
On May 18, OpenAI CEO Sam Altman and president and co-founder Greg Brockman responded to the resignations and growing public concern, writing, "We have been putting in place the foundations needed for safe deployment of increasingly capable systems. Figuring out how to make a new technology safe for the first time isn't easy."
Topics Artificial Intelligence Cybersecurity OpenAI
Search
Categories
Latest Posts
Popular Posts
Best speaker deal: Save $30 on the JBL Clip 5
2025-06-26 22:26Extremely happy stingray poses for photo with couple on vacation
2025-06-26 21:43Ofo, one of China's most aggressive bike
2025-06-26 20:26Crazy storm has streets of L.A. swallowing cars whole
2025-06-26 20:23Best headphones deal: Save $116 on Sennheiser Momentum 4
2025-06-26 20:23Featured Posts
Xiaomi finally enters Pakistan
2025-06-26 21:53Analyzing Graphics Card Pricing: October 2018
2025-06-26 19:47Popular Articles
Amazon Big Spring Sale 2025: Best deals under $50
2025-06-26 22:10Kate Nash to Snapchat: 'Where's my paycheck?'
2025-06-26 21:31Apple has reportedly bought the Israeli tech firm RealFace
2025-06-26 21:31How Donald Trump's own tweets could be his undoing
2025-06-26 20:33The best day to book your flight, according to Google
2025-06-26 20:09Newsletter
Subscribe to our newsletter for the latest updates.
Comments (72311)
Style Information Network
Useful or Little Known Android Features
2025-06-26 21:43Defense Information Network
An English version of China's biggest taxi app is coming, but there's a small problem
2025-06-26 21:08Transmission Information Network
People with disabilities destroy stigma on Twitter with #DisabledAndCute
2025-06-26 20:39Steady Information Network
Elderly woman finds £5 note worth £50,000, donates the money to young people
2025-06-26 19:49Neon Information Network
A Decade Later: Does the Q6600 Still Have Game in 2017?
2025-06-26 19:46