    Google’s AI safety promises under scrutiny after Gemini report

By Desk | August 6, 2025

    Google shared a safety paper on its flagship artificial‑intelligence model, Gemini 2.5 Pro. Experts say it leaves key risks unexplained.

    The company posted the technical paper on Thursday, several weeks after it released Gemini 2.5 Pro to customers. The document lists the internal tests Google ran on the model but offers few facts on how the system behaves under overload or misuse. Researchers who reviewed the paper told TechCrunch the missing details make it hard to judge whether Gemini 2.5 Pro is truly safe for broad use.

    Technical reports are one of the main ways the public learns what advanced AI systems can and cannot do. A thorough report often shows where a model fails and where it might be misused. Many AI researchers treat these papers as honest efforts to back up a company’s safety claims.

    Google handles safety reporting differently.

    Google releases a report only after a model is no longer tagged “experimental,” and it moves certain “dangerous capability” findings into a separate audit that is not published at once. As a result, the public paper does not cover every threat Google has tested for.

    Several analysts said the new Gemini 2.5 Pro document is a stark case of limited disclosure. They also noticed that the report never refers to Google’s Frontier Safety Framework, or FSF, a policy the company announced last year to spot future AI powers that could cause “severe harm.”

    “This report is very sparse, contains minimal information, and arrived weeks after the model went public,” said Peter Wildeford, co‑founder of the Institute for AI Policy and Strategy. “It is impossible to confirm whether Google is meeting its own promises, and therefore impossible to judge the safety and security of its models.”

    Thomas Woodside, co‑founder of the Secure AI Project, said he was glad any paper had appeared at all, yet he doubted Google’s plan to release steady follow‑ups. He pointed out that the last time the firm shared results from dangerous‑capability tests was June 2024, and that paper covered a model announced in February of the same year.

    Confidence slipped further when observers saw no safety paper for Gemini 2.5 Flash, a slimmer and faster model Google revealed last week. A company spokesperson said a Flash paper is “coming soon.”

    “I hope this is a real promise to start giving more frequent updates,” Woodside said. “Those updates should include results for models that have not yet reached the public, because those models may also pose serious risks.”

    Google now falls short on transparency

Google is not the only lab pulling back: Meta’s safety note for its new Llama 4 models runs only a few pages, while OpenAI chose not to publish any report at all for its GPT‑4.1 series.

The shortage of detail comes at a tense time. Two years ago, Google told the U.S. government it would post safety papers for every “significant” AI model “within scope.” The company made similar pledges to officials in other countries, saying it would offer “public transparency” about its AI products.

    Kevin Bankston, senior adviser on AI governance at the Center for Democracy and Technology, called the releases from leading labs a “race to the bottom” on safety.

    “Combined with reports that rival labs like OpenAI have cut safety‑testing time before release from months to days, this meager documentation for Google’s top model tells a troubling story of a race to the bottom on AI safety and transparency as companies rush their models to market,” he added.

    Google says much of its safety work happens behind closed doors. The company states that every model undergoes strict tests, including “adversarial red teaming,” before any public launch.

    Source: Cryptopolitan / Digpu NewsTex
