Nvidia is close to finalizing a major investment in OpenAI, as the United States Department of Defense expands access to commercial artificial intelligence systems, including a new agreement with xAI.
Speaking on February 26, 2026, during Nvidia’s earnings call, Chief Executive Officer Jensen Huang said the company is nearing completion of its previously announced investment in OpenAI.
“We continue to work with OpenAI toward a partnership agreement, and believe we are close,” Huang said. “We are thrilled with our ongoing partnership with OpenAI, a once-in-a-generation company we’ve had the pleasure of partnering with since their first days.”
Nvidia had earlier disclosed plans to invest up to $100 billion in OpenAI, although the transaction has not yet closed.
Nvidia supplies the advanced graphics processing units used to train and run large AI models, and its hardware also powers other developers, including xAI and Anthropic.
In a separate development, the Pentagon has signed an agreement with xAI, the artificial intelligence company founded by Elon Musk, to allow its Grok chatbot to operate on classified government systems.
According to Axios, the agreement permits the use of Grok for “all lawful use.” The deal expands the Defense Department’s access to privately developed AI tools for sensitive operations.
Until recently, Anthropic was the only AI developer whose system had been cleared for the Pentagon’s most sensitive work.
Discussions between the Defense Department and Anthropic reportedly focused on whether the company would allow its AI model to be used in connection with autonomous weapons or domestic surveillance activities.
A meeting was scheduled between Defense Secretary Pete Hegseth and Anthropic Chief Executive Officer Dario Amodei to address the issue.
The Pentagon is also reported to be in talks with OpenAI and Google about deploying their AI systems within classified environments. Officials have asked the companies to agree to the same “all lawful use” condition before access is granted.
The wider use of artificial intelligence in military operations has raised concerns among policy groups and researchers. Diplo, a nonprofit organization focused on digital policy, warned about what it described as “black-box decisions,” referring to outputs that cannot be clearly explained.
“Black-box decisions refer to unexplainable outputs by an AI system,” Diplo stated. “For example, an AI may assign features to a target or calculate a score for suspect analysis without understandable logic for the system’s conclusion.”
The organization also addressed the risk of bias in systems trained on surveillance and behavioral data.
“Bias in AI systems is inevitable,” Diplo said. “In a military context, AI is usually trained on data from surveillance footage, behavioural patterns, and biometric databases, which can be skewed by profiling based on race, religion, or geography.”
Meanwhile, Apple is facing legal and operational challenges related to its artificial intelligence efforts.
On Thursday, February 26, Apple asked a federal judge to dismiss a proposed class action lawsuit alleging that the company misled shareholders about progress in adding AI capabilities to its voice assistant, Siri.
“It is no secret that Apple faces challenges and weathered ups and downs in its stock price in 2025, like many major companies,” Apple said in a court filing. “But plaintiff takes a massive and unsupported leap by claiming that securities fraud caused the temporary price drops.”
The lawsuit also references Apple’s compliance with a 2021 injunction issued in litigation brought by Epic Games concerning App Store payment policies.
Separately, planned AI upgrades for Siri have faced delays, with some features initially expected earlier in 2026 now projected for release in later software updates.