AI That Buys Groceries for You 🧺🛍️🥒

OpenAI x Instacart brings grocery shopping to ChatGPT, Integral AI unveils the world’s first AGI-capable model, and Mistral AI releases Devstral 2 and Vibe CLI for open-source coding.

AI is blasting into a new era, where chat turns into commerce, models learn like humans, and coding becomes fully automated. Here’s what just dropped:

🛒 Instacart x OpenAI – Shopping Inside ChatGPT
Meal ideas → cart → checkout, all in one chat. Instacart becomes the first to launch a full grocery shopping app inside ChatGPT using the Agentic Commerce Protocol.

🧩 Integral AI – First AGI-Capable Model
A system that learns new skills autonomously, safely, and energy-efficiently—bringing true general intelligence a step closer to reality.

💻 Mistral Devstral 2 + Vibe CLI
A frontier open-source coding model scoring 72.2% on SWE-bench and up to 7× cheaper than major rivals. Paired with Vibe CLI, it turns your terminal into a fully automated coding agent.

AI isn’t slowing down; it’s rewriting what’s possible, one breakthrough at a time.

Instacart x OpenAI: Grocery Shopping in ChatGPT

OpenAI and Instacart have partnered to bring grocery shopping directly into ChatGPT, creating a seamless, AI-powered experience that transforms how users interact with their daily shopping needs. Consumers can now chat with ChatGPT to get meal inspiration, build a grocery cart with real-time in-stock items from local retailers, and complete their purchase with Instant Checkout—all within a single conversation, without switching apps. This integration, built on the Agentic Commerce Protocol, leverages OpenAI’s frontier models and Instacart’s extensive grocery network, offering personalized recommendations and a frictionless shopping journey. Instacart is the first grocery partner to launch a fully embedded, end-to-end shopping app inside ChatGPT, making it easier than ever to go from meal ideas to doorstep delivery. The collaboration also extends to internal workflows, with Instacart using OpenAI’s API, ChatGPT Enterprise, and Codex to enhance productivity and shopper experiences. This marks a significant milestone in AI-driven commerce, redefining convenience and personalization in grocery shopping.
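To make the Agentic Commerce Protocol idea concrete, here is a minimal sketch of the kind of structured checkout message an agent could hand to a merchant. The field names and message shape are illustrative assumptions for this sketch, not the published protocol spec:

```python
# Hypothetical sketch of an ACP-style checkout request an agent might assemble.
# Field names and structure are illustrative assumptions, not the published spec.

def build_checkout_request(items, fulfillment_address):
    """Assemble a structured checkout payload from the agent's cart state."""
    return {
        "type": "checkout.create",
        "line_items": [
            {"sku": sku, "quantity": qty} for sku, qty in items
        ],
        "fulfillment": {
            "method": "delivery",
            "address": fulfillment_address,
        },
    }

request = build_checkout_request(
    items=[("organic-basil-1oz", 1), ("roma-tomatoes-lb", 2)],
    fulfillment_address={"city": "Chicago", "postal_code": "60601"},
)
print(request["type"])             # checkout.create
print(len(request["line_items"]))  # 2
```

The point of a protocol like this is that the chat model never touches payment rails directly; it emits structured intents that the merchant backend validates and executes.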

Integral AI Unveils World’s First AGI-Capable Model

Integral AI has unveiled the world’s first AGI-capable model, marking a breakthrough in machine intelligence. This AGI autonomously learns new skills in novel domains, masters them safely and efficiently, and does so with energy consumption comparable to humans, setting a new standard for generality and reliability in AI systems. The model is built on a rigorous definition of AGI, focusing on autonomous skill acquisition, safety, and efficiency—addressing the limitations of current AI that rely on massive human-generated datasets. Integral’s AGI architecture integrates growth, abstraction, and action, enabling scalable, safe intelligence that can master any task without catastrophic failures or unintended side effects. The approach moves beyond traditional black-box AI, creating explicit, hierarchical abstractions that mirror human reasoning and enable robust world modeling across modalities. Integral’s roadmap to superintelligence includes Universal Simulators for world modeling, Universal Operators for planning and tool use, and a scalable backend and frontend infrastructure. Early demonstrations show the AGI’s ability to autonomously learn robotics skills and generate novel software, proving its potential to outperform human-machine collaboration in diverse domains. This milestone represents a foundational shift toward scalable, safe general intelligence.

Mistral AI Launches Devstral 2 and Vibe CLI for Open-Source Coding

Mistral AI has launched Devstral 2, a next-generation open-source coding model family, including Devstral 2 (123B) and Devstral Small 2 (24B), both permissively licensed for distributed intelligence. Devstral 2 achieves 72.2% on SWE-bench Verified, outperforming larger competitors while remaining highly cost-efficient—up to 7x cheaper than Claude Sonnet at real-world tasks. The model supports a 256K context window and enables production-grade workflows, such as codebase exploration, architecture-level changes, bug fixing, and legacy system modernization. Devstral Small 2 offers similar capabilities in a compact form, deployable locally on consumer hardware, and supports image inputs for multimodal agents. Mistral also introduces Vibe CLI, a native, open-source command-line agent for end-to-end code automation, supporting file manipulation, code searching, version control, and command execution in natural language. Vibe CLI integrates with popular IDEs and offers persistent history, autocompletion, and customizable themes. Both models are available via Mistral’s API, with Devstral 2 currently free to use and Devstral Small 2 ideal for on-prem and local deployment.
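Since Devstral 2 is served through Mistral’s standard chat completions API, calling it looks like any other Mistral model. The model id `devstral-2` below is an assumption; check Mistral’s model list for the exact name. This sketch only builds the request body and does not send it:

```python
# Sketch of a Devstral 2 request via Mistral's chat completions endpoint.
# The model id "devstral-2" is an assumed name; verify against Mistral's docs.
# Sending the request would require a MISTRAL_API_KEY; nothing is sent here.
import json

API_URL = "https://api.mistral.ai/v1/chat/completions"

def build_coding_request(task: str, model: str = "devstral-2") -> dict:
    """Build the JSON body for a code-oriented chat completion."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a coding agent."},
            {"role": "user", "content": task},
        ],
    }

body = build_coding_request("Fix the off-by-one error in pagination.py")
print(json.dumps(body, indent=2))
```

With Devstral Small 2 the same payload can target a locally hosted endpoint instead, which is the on-prem deployment path the announcement highlights.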

Hand-Picked Video

Unlock the future of mobile AI - learn how to run powerful open-source Language Models right on your Android phone! No cloud services, no subscriptions, just pure local AI power in your pocket.

Top AI Products from this week

  • taag.app - Now powered by Vision AI - Simplifying product sharing for content creators. taag.app streamlines how creators share product links using AI, increasing both engagement and monetization opportunities.

  • GLM-4.6V - GLM-4.6V is GLM's newest open-source multimodal model with a 128k context window. It features native function calling, bridging visual perception with executable actions for complex agentic workflows like web search and coding.

  • Ark Empowerment - Ark unites your co-founders, documents, and AI Agents into one workspace. Trained on real startup advice, Ark helps you make better decisions, create faster, and stay in sync as you build.

  • The Record - The Record is a new media company built on a transparent truth engine that verifies every claim with leading AI models and community review. Fact check any claim. Check source credibility with our Trust Battery. Read our weekly newsletter distilling trending topics into their most truthful narrative.

  • Anchored Growth - Anchored Growth isn’t just another outbound tool; it’s an Intelligent Outbound Command Center that removes the frustration and guesswork from outbound. It learns how your ICP wants to be engaged, surfaces nuanced patterns humans miss, and guides messaging that speeds up campaign creation and boosts performance.

  • rationalGO AI - We’re launching the first piece of our agentic, local-first AI OS. Unlike cloud AI tools, Rationalgo runs fully on your desktop — fast, private, and secure. Your data never leaves your device. Our agents don’t just “assist”; they execute tasks end-to-end, and soon you’ll be able to build and monetize your own. Slides Agent is our first example of what agentic, zero-cost AI can look like. 

This week in AI

  • Android Show XR Edition New AI-Powered Updates - Google unveiled new AI-driven features for Android Show XR Edition, enhancing immersive experiences with smarter voice assistants, real-time translation, and interactive AR tools for developers and users.

  • Claude Code in Slack AI-Powered Coding - Anthropic brings Claude Code to Slack, letting developers delegate coding tasks, fix bugs, and generate pull requests directly from chat threads, marking a major shift toward AI-embedded collaboration in software workflows.

  • SAE Latent Attribution Explained - OpenAI's SAE latent attribution method identifies which internal features (latents) in a language model causally drive specific behaviors by comparing activation patterns in aligned vs. misaligned completions, using attribution to pinpoint the features that actually caused the output.

  • Slop Evader Browse Pre-AI Internet - Slop Evader is a browser extension that filters search results to show only content published before November 30, 2022, helping users avoid AI-generated "slop" and synthetic media, promoting more authentic, human-created information online.

  • Self-Improving VLM Judges - A new method trains Vision-Language Model (VLM) judges without human annotations, using self-synthesized data to improve accuracy in correctness, reasoning, and safety, outperforming much larger models in multimodal evaluation tasks.
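The SAE latent attribution idea above can be illustrated with a toy sketch: encode residual-stream activations through a sparse autoencoder and rank latents by how much their activation differs between an aligned and a misaligned completion. Everything below uses random stand-in weights and activations, not a real model, and the difference-based ranking is a simplified proxy for the full attribution method:

```python
# Toy illustration of SAE-style latent attribution. Random stand-in data;
# ranking latents by activation difference is a simplified attribution proxy.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_latents = 16, 64
W_enc = rng.normal(size=(n_latents, d_model))  # SAE encoder weights (stand-in)
b_enc = rng.normal(size=n_latents)

def sae_latents(resid: np.ndarray) -> np.ndarray:
    """Encode a residual-stream activation into sparse latents (ReLU encoder)."""
    return np.maximum(W_enc @ resid + b_enc, 0.0)

aligned = rng.normal(size=d_model)                           # aligned completion
misaligned = aligned + rng.normal(scale=0.5, size=d_model)   # misaligned one

# Proxy attribution: latents whose activation changes most between the two.
delta = sae_latents(misaligned) - sae_latents(aligned)
top = np.argsort(-np.abs(delta))[:5]
print("top candidate latents:", top)
```

The real method goes further, using attribution (not just activation deltas) to test which latents causally drive the behavior, but the compare-and-rank loop above is the core intuition.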

Paper of The Day

This paper argues that foundation models need native multi-agent intelligence—beyond single-agent skills—to thrive in collaborative, interactive environments. It identifies key capabilities like understanding, planning, communication, and adaptation, and shows that scaling single-agent abilities does not automatically yield robust multi-agent competence. The authors propose new research directions for building, evaluating, and training models to excel in multi-agent settings, emphasizing intentional design and safety considerations.

To read the whole paper 👉️ here