- AI Report by Explainx
- Posts
- OpenAI Levels Up with New Modes and Memory
OpenAI Levels Up with New Modes and Memory
GPT‐5 adds new Auto/Fast/Thinking modes; Skywork AI launches open‐source Matrix‐Game 2.0 for real‐time 3D worlds; Anthropic adds Claude safeguards with live risk detection and expert oversight.
The AI world is moving fast here are the big updates you shouldn’t miss this week
🤖 GPT‑5 gets a major capability boost
Now offering a massive 196K‑token context window, three distinct response modes (Auto, Fast, Thinking) for balancing speed and depth, and the return of GPT‑4o for all paid users. OpenAI is also rolling out deeper per‑user customization, a friendlier default personality, and continued support for legacy model access.
🧠Skywork AI launches Matrix‑Game 2.0
the world’s first open‑source, real‑time interactive world model capable of generating smooth, physics‑accurate 3D scenes at 25 FPS for minutes‑long sequences. Running efficiently on a single GPU, it opens new possibilities for open‑world games, virtual humans, and AI‑driven simulations.
🎥 Anthropic strengthens Claude’s defenses
Introducing a multi‑layered safety framework combining fine‑tuned training for sensitive topics, real‑time harmful content detection, and active misuse monitoring. Backed by expert review, external policy testing, and a bug bounty program, the initiative aims to keep AI interactions secure and trustworthy.
Let’s dive into the innovations shaping the future. 👇️
OpenAI Adds New Modes, More Context to GPT‑5 in ChatGPT Update

OpenAI has introduced new ChatGPT updates with GPT‑5, adding three response modes—Auto, Fast, and Thinking—to give users control over speed and reasoning depth. GPT‑5 Thinking now supports a 196K‑token context and a limit of 3,000 messages per week, with overflow handled by GPT‑5 Thinking mini. GPT‑4o is back for all paid users, alongside a “Show additional models” toggle that unlocks o3, 4.1, and GPT‑5 Thinking mini, while GPT‑4.5 remains Pro‑only due to GPU costs. Work is underway on a warmer, less intrusive GPT‑5 personality and expanded per‑user customization, allowing individuals to tailor tone and style to their preferences. These updates aim to combine flexibility, long‑context capability, and improved user experience across writing, coding, and analysis workflows, while ensuring legacy models remain accessible. OpenAI notes that rate limits and features may evolve over time to balance user demand with system performance.
Skywork AI Unveils Matrix‑Game 2.0, First Open‑Source Real‑Time World Model

Skywork AI has launched Matrix‑Game 2.0, the first fully open‑source interactive world model for real‑time, long‑sequence video generation, delivering stable 25 FPS performance for minutes‑long sequences across complex environments with accurate physics and scene semantics. Built on a vision‑driven architecture with 3D Causal VAE compression, a multimodal diffusion transformer, and a real‑time interaction module, it enables users to explore, manipulate, and build virtual worlds via simple commands. Its novel autoregressive diffusion framework with KV‑cache supports unlimited‑length, low‑latency generation on a single GPU, opening applications in gaming, virtual humans, embodied AI, and open‑world simulations like GTA and Minecraft.
Anthropic Outlines New Safeguards for Claude

Anthropic has detailed a multi‑layered safeguards strategy for Claude, aimed at preventing misuse while maintaining helpful, creative interactions. Its dedicated Safeguards team develops and tests usage policies with external experts, informs fine‑tuning for nuanced handling of sensitive topics, and rigorously evaluates models for safety, risk, and bias before release. Once deployed, real‑time classifier models detect and steer harmful outputs or enforce account actions, while ongoing monitoring and threat intelligence track emerging misuse patterns. Anthropic stresses collaboration with researchers, policymakers, and the public, including a bug bounty program, to continually strengthen protections as AI capabilities advance.
Hand Picked Video
bgblur is an AI-powered tool that automatically detects and blurs license plates in traffic videos, dashcam recordings, and surveillance footage within seconds. Supports automated plate detection, works with various video formats, enables quick batch processing, and delivers high-quality exports — making privacy protection fast and effortless for creators, drivers, and security professionals.

Top AI Products from this week
BG Remover Video – AI-powered online tool that automatically removes and replaces video backgrounds without green screens, supporting various formats and resolutions for quick, high-quality edits with easy background customization.
Macaron – Personal AI agent that grows with you using a unique personal test and Deep Memory, creating tailored real-life tools on request and remembering what matters to help you live better, not just work harder.
CoSupport AI – All-in-One AI customer service platform that automates up to 90% of inquiries with highly accurate, brand-tailored AI agents. It offers full customization of tone, behavior, and conversation logic, supports 40+ languages, integrates seamlessly with popular help desks, and provides enterprise-grade security with transparent pricing and unlimited user access.
Snowglobe – AI simulation platform for testing and improving chatbots by running hundreds of realistic, judge-labeled conversations in minutes, uncovering failures and generating high-quality eval and fine-tuning datasets.
Anything – AI-powered platform that instantly turns natural language prompts into fully functional mobile and web apps with seamless backend integration, enabling creators to build and launch projects without coding.
Ally Glasses – Accessible AI assistant integrated into smart glasses designed for people who are blind or have low vision, offering hands-free, real-time support with personalized voice interaction, scene description, text reading, and environmental awareness to enhance independence and daily life.
This week in AI
Roblox – open-sourcing Sentinel AI to detect grooming and child endangerment, scanning 6B+ chats daily and flagging 1,200 potential cases in 2025, 35% caught proactively. Adapts to detect coded language and now available for integration by other platforms.
Apple – Apple is planning major smart home expansion with AI-powered devices slated for 2026‑27, including a wall‑mounted smart display (J490) running a new multi‑user OS “Charismatic” with facial recognition and an enhanced Siri powered by large language models.
Cleveland Clinic and startup Piramidal – are building an AI model trained on nearly a million hours of EEG brainwave data to detect seizures, consciousness changes, and brain function decline in ICUs in real time, replacing manual reviews that take hours, aiming for a controlled rollout within eight months before hospital-wide use, with future applications in epilepsy and sleep monitoring.
AI in math problem-solving – 2025 saw AI reach gold-medal level at the International Mathematical Olympiad, with OpenAI’s reasoning model and DeepMind’s Gemini+DeepThink solving problems in natural language under contest conditions. The tech now aids mathematicians by automating theorem proving, finding novel patterns, and advancing research and education.
AI Companion Apps Market Growth – AI companion apps like Replika and Character.AI are surging, with 337 active apps generating $82M in H1 2025 and projected to top $120M by year-end. Downloads hit 220M, up 88% YoY, with most revenue driven by romantic-themed companions.
Paper of The Day
PacifAIst is a benchmark for testing whether large language models (LLMs) prioritize human safety over their own goals in high‑stakes scenarios. It introduces a taxonomy of Existential Prioritization, covering dilemmas around self‑preservation, resource conflicts, and deceptive behaviors, with 700 curated scenarios. Evaluations on eight leading LLMs reveal varied alignment, with Gemini 2.5 Flash scoring highest and GPT‑5 lowest, and highlight differing safety strategies such as premise rejection or cautious refusal. By focusing on behavioral alignment rather than just content safety, PacifAIst offers a rigorous framework for detecting and mitigating misaligned instrumental goals in AI systems.
To read the whole paper, go to here.