OpenAI may be paving the way toward exploring its AI's military potential.
First reported by the Intercept on Jan. 12, a new company policy change has removed previous language that banned "activity that has high risk of physical harm," including the specific examples of "weapons development" and "military and warfare."
As of Jan. 10, OpenAI's usage guidelines no longer include a prohibition on "military and warfare" uses within the language obligating users to prevent harm. The policy now bans only the use of OpenAI technology, such as its large language models (LLMs), to "develop or use weapons."
Subsequent reporting on the policy edit pointed to the immediate possibility of lucrative partnerships between OpenAI and defense departments seeking to utilize generative AI in administrative or intelligence operations.
In Nov. 2023, the U.S. Department of Defense issued a statement on its mission to promote "the responsible military use of artificial intelligence and autonomous systems," citing the country's endorsement of the international Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, an American-led set of "best practices" announced in Feb. 2023 and developed to monitor and guide the development of military AI capabilities.
"Military AI capabilities includes not only weapons but also decision support systems that help defense leaders at all levels make better and more timely decisions, from the battlefield to the boardroom, and systems relating to everything from finance, payroll, and accounting, to the recruiting, retention, and promotion of personnel, to collection and fusion of intelligence, surveillance, and reconnaissance data," the statement explains.
AI has already been used by the American military in the Russia-Ukraine war and in the development of AI-powered autonomous military vehicles. Elsewhere, AI has been incorporated into military intelligence and targeting systems, including a system known as "The Gospel," which Israeli forces use to pinpoint targets and reportedly "reduce human casualties" in their attacks on Gaza.
AI watchdogs and activists have consistently expressed concern over the increasing incorporation of AI technologies in both cyber conflict and combat, fearing an escalation of armed conflict in addition to long-noted biases in AI systems.
In a statement to the Intercept, OpenAI spokesperson Niko Felix explained the change was intended to streamline the company's guidelines: "We aimed to create a set of universal principles that are both easy to remember and apply, especially as our tools are now globally used by everyday users who can now also build GPTs. A principle like ‘Don’t harm others’ is broad yet easily grasped and relevant in numerous contexts. Additionally, we specifically cited weapons and injury to others as clear examples."
An OpenAI spokesperson further clarified the change in an email to Mashable: "Our policy does not allow our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property. There are, however, national security use cases that align with our mission. For example, we are already working with DARPA to spur the creation of new cybersecurity tools to secure open source software that critical infrastructure and industry depend on. It was not clear whether these beneficial use cases would have been allowed under “military” in our previous policies. So the goal with our policy update is to provide clarity and the ability to have these discussions."
OpenAI now introduces its usage policies with a simpler refrain: "We aim for our tools to be used safely and responsibly, while maximizing your control over how you use them."
UPDATE: Jan. 16, 2024, 12:28 p.m. EST This article has been updated to include an additional statement from OpenAI.