    OpenAI launches new o3, o4-mini AI reasoning models

    By Desk | August 7, 2025
    OpenAI has just announced two new AI models that make ChatGPT even smarter. The more advanced of the two is called o3, which OpenAI says is its best model so far for advanced reasoning.

    This means it’s much better at things like solving math problems, writing code, understanding science, and even making sense of images.

    Alongside it, OpenAI also introduced a smaller and faster version called o4-mini. While it’s not as powerful as o3, it’s designed to give quick, cost-effective responses for similar types of tasks.

    These updates come shortly after OpenAI launched its GPT-4.1 models, which already offered faster and more accurate responses.

    The big highlight now is that both o3 and o4-mini can understand and reason using images, not just text. This means ChatGPT can now “think with images.”

    For example, it can look at a photo, analyze it, zoom in, crop it, or adjust it to get more useful information.

    This ability can help ChatGPT give better and more accurate answers based on what it sees, not just what you write.

    This new image understanding feature works together with other tools ChatGPT already uses, like web browsing, writing code, or analyzing data.

    OpenAI believes this combination of skills could help build even more powerful AI tools in the future.

    In practical use, you can now upload things like messy handwritten notes, flowcharts, or photos of real-world objects, and ChatGPT will understand what is in the image even if you don’t explain it fully in words.
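
    To make that concrete, here is a minimal sketch (not from the article) of what sending a photo plus a question to one of these models could look like using the OpenAI Python SDK. The model name "o4-mini", the file name, and the prompt are illustrative assumptions, not details from the announcement.

    # Minimal sketch: ask an image-capable model about a local photo.
    # Assumes the OpenAI Python SDK (v1.x) and an OPENAI_API_KEY in the environment.
    import base64
    from openai import OpenAI

    client = OpenAI()

    # Encode a local photo (e.g. a page of handwritten notes) as a base64 data URL.
    with open("handwritten_notes.jpg", "rb") as f:  # hypothetical file name
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="o4-mini",  # assumed model ID for illustration
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Summarize the key points in these notes."},
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"},
                    },
                ],
            }
        ],
    )

    print(response.choices[0].message.content)

    The same request shape would apply to a flowchart or an object photo; only the image file and the question change.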

    This brings ChatGPT closer to other AI systems, such as Google’s Gemini, which can understand live video.

    However, these advanced models aren’t available to everyone. Right now, they’re only accessible to ChatGPT Plus, Pro, and Team users.

    Business and education customers will get access soon, while free users will get limited access to o4-mini when they click the “Think” button in the chat box.

    OpenAI is being cautious about how widely these features roll out, likely to avoid a repeat of the overwhelming demand it faced with Ghibli-style image requests.

    Source: KnowTechie / Digpu NewsTex