Forget GPT-4o 😲 Use This AI Instead 🔥

Ovis2.5-9B boosts multimodal reasoning, TimeCapsule LLM revives authentic 19th-century language, and OpenAI reinforces strict rules on equity transactions.

The AI field continues to push boundaries with new innovations and key industry updates.

Ovis2.5-9B has been introduced by AIDC-AI as a powerful open-source multimodal model. With native-resolution image processing and a "thinking mode" for advanced reasoning, it delivers state-of-the-art results while balancing efficiency and capability.

TimeCapsule LLM is exploring history in a new way by training models only on texts from specific eras. Its latest version, built on writings from 1800–1875 London, recreates authentic Victorian language and thought through Selective Temporal Training.

On the business side, OpenAI has reiterated that all equity transactions remain strictly restricted. Any unauthorized sales or tokenized offers carry no legal value and may violate U.S. securities laws, reinforcing the importance of compliance.

Together, these updates showcase how AI is advancing across technology, culture, and governance, reshaping both innovation and responsibility in the field.

Ovis2.5-9B: Advanced Multimodal Reasoning Model

Ovis2.5-9B is a state-of-the-art open-source multimodal large language model by AIDC-AI, designed for native-resolution image processing, deep reasoning, and broad task coverage. Powered by a NaViT vision encoder, it preserves fine details and global layout without lossy tiling, making it highly effective for charts, documents, video, and OCR tasks. With an optional "thinking mode" that supports self-checking, reflection, and a token budget for advanced reasoning, it achieves 78.3 on OpenCompass benchmarks (SOTA among models under 40B parameters). Available in both lightweight (2B) and high-performance (9B) versions, Ovis2.5 balances efficiency and capability, supporting use cases from visual grounding to text-only reasoning. Licensed under Apache 2.0, it caters to research, development, and real-world multimodal applications.
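The "thinking mode" described above pairs self-checking with a token budget that caps how long the model reflects before answering. The sketch below is a hedged, toy illustration of that control pattern only; the function names and callbacks are hypothetical and are not Ovis2.5's actual API.

```python
def generate_with_budget(reason_step, answer_step, budget: int):
    """Illustrative control loop: spend at most `budget` tokens on
    intermediate 'thinking', then force a final answer.

    `reason_step` yields one reasoning token per call (or None once the
    model decides it has reflected enough); `answer_step` produces the
    final answer from the reasoning so far. All names are assumptions
    for illustration, not the real Ovis2.5 interface.
    """
    thoughts = []
    for _ in range(budget):          # hard cap on reasoning tokens
        tok = reason_step(thoughts)
        if tok is None:              # model self-terminated its reflection
            break
        thoughts.append(tok)
    return answer_step(thoughts)


# Toy stand-ins: "reason" by emitting digits, stopping after three.
reason = lambda ts: str(len(ts)) if len(ts) < 3 else None
answer = lambda ts: f"answer after {len(ts)} thought tokens"

print(generate_with_budget(reason, answer, budget=10))
# three reasoning tokens, then the answer
```

A smaller budget simply truncates the reflection earlier, which is the efficiency/capability trade-off the benchmark numbers hint at.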

Time-Traveling AI: Bringing the Past to Life

TimeCapsule LLM is an experimental open-source project by Hayk Grigorian that trains language models from scratch exclusively on historical texts from specific places and eras to minimize modern bias and emulate authentic period language and worldview. The current versions are trained on London writings from 1800–1875, sourced from books, newspapers, and legal documents. Early models (v0 and v0.5) focused on replicating Victorian writing style but struggled with coherence, while v1 (700M parameters) showed progress by connecting real historical figures and events, marking a step toward accurate temporal reasoning. Built initially with nanoGPT and later on Microsoft’s Phi 1.5, the models apply a method called Selective Temporal Training (STT), ensuring all data remains era-specific. Licensed under MIT, the project aims to scale with larger, cleaner corpora and expand into other regions and timeframes, ultimately offering AI that reasons within a historically authentic context.
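The core of Selective Temporal Training is keeping the corpus strictly era- and region-specific before any training happens. This is a minimal sketch of that filtering idea; the real project's pipeline, field names, and data schema are assumptions here, not its published code.

```python
from dataclasses import dataclass

@dataclass
class Doc:
    text: str
    year: int
    place: str

def selective_temporal_filter(corpus, start: int, end: int, place: str):
    """Keep only documents from the target era and region, so the
    training set contains no post-period language or modern bias.
    A hypothetical sketch of the STT idea, not the project's pipeline."""
    return [d for d in corpus if start <= d.year <= end and d.place == place]

corpus = [
    Doc("Whereas the party of the first part...", 1832, "London"),
    Doc("The telegraph line to Dover is now open.", 1851, "London"),
    Doc("Stock prices surged on the NASDAQ today.", 1999, "New York"),
]
era = selective_temporal_filter(corpus, 1800, 1875, "London")
print(len(era))  # 2
```

Everything downstream (tokenizer, pretraining) then only ever sees 1800–1875 London text, which is what lets the model emulate a period worldview rather than paraphrase a modern one.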

OpenAI Equity Transactions

All OpenAI equity is subject to strict transfer restrictions, meaning it cannot be sold, pledged, or transferred directly or indirectly without OpenAI’s prior written consent. Any unauthorized attempt, including sales of equity, SPV investments, tokenized interests, or forward contracts, is invalid and may also violate U.S. securities laws, exposing both buyers and sellers to potential liability and rescission. OpenAI does not endorse or participate in such transactions and warns that offers claiming exposure to OpenAI equity outside approved channels may carry no legal or economic value. OpenAI intends to strictly enforce its restrictions and urges anyone contacted with such offers to report them to [email protected].

Hand Picked Video

In this video, we look at GPT-OSS, OpenAI's unexpected open-source model that rivals o3 performance and features built-in web search, and show how to test it yourself locally.

Top AI Products from this week

  • AI Elements by Vercel – AI Elements is an open source library of customizable React components for building interfaces with the Vercel AI SDK. Built on shadcn/ui, it provides full control over UI primitives like message threads, input boxes, reasoning panels, and response actions.

  • Qoder – Qoder transforms how AI understands real software. Beyond snippets, it grasps your entire architecture: dependencies, patterns, and history. Chat naturally for multi-file edits or delegate tasks to AI. From invisible complexity to transparent collaboration.

  • Trace – A workflow automation platform that routes tasks to the right agent – human or AI. By connecting tools like Slack, Jira, and Notion, Trace breaks down existing workflows, spots automation opportunities, and embeds AI agents into repetitive tasks.

  • Onlook for Web – End the design-dev handoff nightmare. Onlook is an open-source builder that helps product teams ship faster by building production-ready apps visually. Use code as the source of truth – zero translation errors, instant collaboration across all roles.

  • Command A Reasoning – Command A Reasoning by Cohere is an advanced model for enterprise reasoning tasks. Designed for private deployment, it outperforms models like gpt-oss-120b while running on a single H100. A user-controlled token budget lets you balance performance and cost.

  • Thr8 – thr8.dev helps development teams find security vulnerabilities before deployment.

This week in AI

  • YouTube’s AI Experiment - YouTube is quietly testing AI video enhancements on Shorts, altering creators’ content with sharper, plastic-like visuals, sparking trust and authenticity concerns.

  • Meta x Midjourney - Meta partners with Midjourney to license its AI image and video tech, aiming to boost future AI models and compete with leaders like OpenAI’s Sora and Google’s Veo.

  • Bot vs Bot Future - As AI agents multiply, bots will dominate the internet, negotiating prices, trading, and even hacking, turning online life into a machines-first ecosystem with humans sidelined.

  • Amazon’s AI Agent Push - Amazon is investing heavily in AI agents through its AGI lab, aiming to move beyond chatbots to reliable task-completing systems as it races to catch up with rivals.

Paper of The Day

AgentScope 1.0 presents a flexible, developer-focused framework designed to enhance agent-based AI applications. It offers modular tool-based interaction, unified interfaces, and advanced infrastructure to facilitate the development, deployment, and management of AI agents. The framework aims to simplify building complex multi-agent systems with varying functionalities and supports scalable, customizable AI workflows.
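The "modular tool-based interaction" and "unified interfaces" described above follow a common pattern: tools are registered under a name and invoked through one entry point. The sketch below shows that generic pattern only; the class and method names are hypothetical and are not AgentScope's actual API.

```python
from typing import Callable, Dict

class ToolAgent:
    """Minimal agent with modular, tool-based interaction: callables are
    registered under a name and invoked through one unified interface.
    A generic sketch of the pattern, not AgentScope's real classes."""

    def __init__(self) -> None:
        self.tools: Dict[str, Callable] = {}

    def register(self, name: str, fn: Callable) -> None:
        # Tools stay modular: any callable can be plugged in by name.
        self.tools[name] = fn

    def call(self, name: str, *args, **kwargs):
        # One uniform entry point for every tool the agent knows.
        if name not in self.tools:
            raise KeyError(f"unknown tool: {name}")
        return self.tools[name](*args, **kwargs)

agent = ToolAgent()
agent.register("add", lambda a, b: a + b)
agent.register("upper", str.upper)
print(agent.call("add", 2, 3))    # 5
print(agent.call("upper", "ok"))  # OK
```

Keeping the interface uniform is what makes multi-agent systems composable: agents can hand each other tool names instead of code.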

To read the whole paper, go here.