Runway Aleph Lets You Change Camera Angles🎦🎬
Runway Aleph enables AI-based video edits like angle shifts & object removal. HRM solves puzzles with brain-like reasoning. Qwen3-235B excels at logic, math & long-context AI tasks.
Welcome to this week’s edition of your AI innovation digest — where the cutting edge becomes practical. We're diving into the latest breakthroughs shaping the future of video creation and AI reasoning.
Runway's Aleph redefines post-production with AI-powered video editing that lets you change camera angles, relight scenes, and even remove objects — all from a single piece of footage.
Sapient's Hierarchical Reasoning Model (HRM) introduces a compact, brain-inspired architecture that solves complex tasks with stunning efficiency — no pretraining needed.
Qwen3-235B-Thinking, a state-of-the-art open-source model, pushes the boundaries of logical and mathematical reasoning, with massive context support and "thinking mode" outputs.
Let’s break down what these mean for creators, developers, and the future of intelligent systems 👇
Instantly Change Camera Angles in AI-Generated Video with Runway Aleph

Runway Aleph is a next-generation AI-powered video editing platform designed to transform how filmmakers handle post-production. Unlike earlier AI tools that generated new videos from scratch, Aleph focuses on manipulating real, existing footage through a single interface driven by simple text prompts. Its feature set includes generating new camera angles, seamless shot continuations, custom style transfers, and environmental changes (weather, lighting, or time of day), along with object manipulation capabilities such as adding or removing objects, changing their appearance, recoloring elements, and relighting entire scenes. Aleph’s technology could enable filmmakers to create “endless coverage” from minimal source material, streamlining workflows and reducing the need for large crews or extensive reshoots. It is not yet publicly available; early access is rolling out to enterprise and creative partners, with broader availability expected soon. Industry experts remain cautious about integration with professional workflows and quality consistency across diverse content, even as they acknowledge that Aleph represents a significant leap for multi-task video editing and production.
Hierarchical Reasoning Model: Efficient AI for Complex Problem Solving

The Hierarchical Reasoning Model (HRM) by sapientinc is a novel AI architecture designed for complex, goal-oriented reasoning tasks. Inspired by how the human brain processes information, it uses two interconnected recurrent modules: a high-level module for slow, abstract planning and a low-level module for fast, detailed computation. With only 27 million parameters, HRM achieves high performance on challenging tasks such as complex Sudoku puzzles, maze navigation, and the Abstraction and Reasoning Corpus (ARC), using just 1,000 training samples. It requires no pre-training or chain-of-thought data and significantly outperforms much larger models with longer context windows. This efficiency, stability, and ability to perform sequential reasoning in a single forward pass highlight HRM's potential as a transformative step toward general-purpose AI reasoning systems. The GitHub repository includes setup instructions for CUDA and PyTorch, example training scripts for the puzzle tasks, evaluation procedures, and pre-trained checkpoints to facilitate research and experimentation.
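To make the two-timescale idea concrete, here is a minimal PyTorch sketch of a recurrent model with a slow high-level planner and a fast low-level worker. The module choices (GRU cells), sizes, and update schedule are illustrative assumptions for this newsletter, not sapientinc's actual architecture; the real implementation and training code are in the HRM GitHub repository.

```python
import torch
import torch.nn as nn

class HierarchicalReasoner(nn.Module):
    """Toy two-timescale recurrent model: a slow, abstract planner and a
    fast, detailed worker, loosely in the spirit of HRM. All sizes and the
    update schedule are illustrative assumptions, not the real HRM design."""

    def __init__(self, input_dim: int, hidden_dim: int = 128,
                 num_classes: int = 10, slow_period: int = 4):
        super().__init__()
        self.slow_period = slow_period                               # high-level update interval
        self.low = nn.GRUCell(input_dim + hidden_dim, hidden_dim)    # fast, detailed computation
        self.high = nn.GRUCell(hidden_dim, hidden_dim)               # slow, abstract planning
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, steps, input_dim)
        batch, steps, _ = x.shape
        h_low = x.new_zeros(batch, self.low.hidden_size)
        h_high = x.new_zeros(batch, self.high.hidden_size)
        for t in range(steps):
            # The low-level module runs every step, conditioned on the high-level plan.
            h_low = self.low(torch.cat([x[:, t], h_high], dim=-1), h_low)
            # The high-level module updates only periodically, summarizing low-level work.
            if (t + 1) % self.slow_period == 0:
                h_high = self.high(h_low, h_high)
        return self.head(h_low)                                      # predict from final low-level state

# Example: classify a batch of 16 random 12-step sequences in one forward pass.
model = HierarchicalReasoner(input_dim=32)
logits = model(torch.randn(16, 12, 32))
print(logits.shape)  # torch.Size([16, 10])
```

The point of the sketch is the scheduling: detailed computation happens every step, while the planner state changes only every few steps, which is the hierarchical structure the HRM paper builds on.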
Qwen3-235B-Thinking: Advanced AI for Complex Reasoning

Qwen3-235B-A22B-Thinking-2507 is a powerful large-scale causal language model from the Qwen team, designed to excel at complex reasoning tasks such as logical reasoning, mathematics, science, coding, and advanced academic benchmarks. This version brings significant improvements in reasoning quality and depth, better general capabilities including instruction following and tool usage, and support for an extremely long context of 262,144 tokens, making it well suited to highly complex problems. The model has 235 billion parameters in a mixture-of-experts design with 128 experts (8 activated per token) and requires Hugging Face transformers >= 4.51.0. It operates in a dedicated “thinking mode,” where outputs include the model’s reasoning, delimited by thinking tags, before the final answer. The model can be deployed locally or served via frameworks such as SGLang and vLLM, and it integrates with Qwen-Agent for tool use and agentic workflows. Best practices include sampling with temperature 0.6 and top-p 0.95, allowing generous output lengths (up to 81,920 tokens for difficult tasks), and standardizing the output format through prompting. The model is open source under the Apache 2.0 license and achieves state-of-the-art performance among open-source thinking models. For more information, see the Qwen team’s technical report, blog, and Hugging Face page.
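As a rough illustration of local use, the sketch below loads the model with the standard Hugging Face transformers API (>= 4.51.0) and generates with the sampling settings recommended above. The prompt and token budget are placeholder assumptions, and in practice a model of this size needs multi-GPU hardware or a serving framework such as SGLang or vLLM.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-235B-A22B-Thinking-2507"

# Requires transformers >= 4.51.0; shown for illustration -- a 235B MoE model
# needs multiple GPUs or a served backend (vLLM / SGLang) in practice.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",   # pick an appropriate dtype automatically
    device_map="auto",    # shard across available GPUs
)

# Chat-style prompt; the question itself is just a placeholder example.
messages = [{"role": "user", "content": "Prove that the square root of 2 is irrational."}]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Sampling parameters recommended for thinking-mode outputs.
outputs = model.generate(
    inputs,
    max_new_tokens=32768,   # generous budget; up to 81,920 for the hardest tasks
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
)

# Decode only the newly generated tokens (reasoning followed by the final answer).
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```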
Promote your videos with the Video BG Remover! Quickly remove or replace backgrounds—no green screen needed. Simple, AI-powered, and perfect for standout, professional videos in minutes.

Top AI Products from this week
CopyCat - CopyCat is a no-code platform for building browser automations. Using the CopyCat editor, you can automate any web task by combining AI prompts with reliable step-by-step actions.
Doco - Doco is an AI writing assistant built natively into Microsoft Word. It combines the power of Grammarly, Cursor, and Co-Pilot—optimized for structured document workflows. Reference any file, create custom projects and workflows - all without leaving Word.
Nitrode - Nitrode is an AI game engine that empowers devs to vibe code a playable 3D game in a matter of hours. Build the game you’ve always dreamed of but never got the chance to.
Singify AI Vocal Remover - Singify AI Vocal Remover uses advanced 10-stem separation to isolate vocals, drums, bass, piano, guitar, and more. Fast, free, and easy to use, it delivers high-quality results with minimal artifacts—perfect for creators, remixers, and music lovers.
Chive - Chive is a native Mac app that shows you what all your Claude Code instances are doing at a glance. If you’re running multiple sessions across different projects, it’s easy to lose track of what’s working, what’s waiting, and what’s looking for something to do.
Aeneas - Aeneas is a new open-source AI model from Google DeepMind that helps historians restore, date, and place fragmentary ancient inscriptions. It analyzes both text and images to provide new context on ancient texts.
This week in AI
Alibaba's Quark AI Glasses - Alibaba’s AI-powered Quark glasses offer hands-free payments, shopping, navigation, and more, powered by a Snapdragon AR1 chip; launch is expected in late 2025 in China.
HunyuanWorld 1.0: 3D World Generation - Tencent's framework generates immersive, explorable 3D worlds from text/images using panoramic proxies, layered mesh representation, and object separation for VR/gaming.
GLM-4.5 Powerful Hybrid Reasoning AI - GLM-4.5 is a 355B-parameter open-source AI with hybrid reasoning for complex tasks, excelling in coding, reasoning, and agent use, licensed under MIT for commercial use.
Anthropic Limits Claude Code Usage - Anthropic is introducing weekly rate limits on Claude Code for heavy users starting in August to control costs and ensure service quality; extra usage is purchasable on the Max plan.
Microsoft Edge Copilot Mode - Edge’s new Copilot Mode uses AI to streamline browsing with chat, voice commands, multi-tab context, and task automation—free, opt-in on Windows and Mac now.
Paper of the Day
This survey presents the first comprehensive framework for self-evolving agents: AI systems that continuously adapt and improve through real-world interactions, moving beyond static large language models toward autonomous intelligence. The authors organize the field around three key dimensions: what evolves (models, memory, tools, architecture), when evolution occurs (during vs. between tasks), and how agents evolve (through rewards, imitation, or population-based methods). These agents demonstrate promising capabilities across domains like coding, healthcare, and education, but face critical challenges in personalization, safety, and generalization. The work establishes a structured roadmap for advancing from current static AI toward Artificial Super Intelligence (ASI) through agents that can autonomously evolve while remaining aligned with human values and maintaining robust performance across diverse, dynamic environments.
To read the whole paper, go here.