Automate your Business with Gemini Enterprise💼🏢

AI in business is leveling up Google’s Gemini Enterprise and Amazon’s Quick Suite redefine workplace automation, while Anthropic warns even tiny data leaks can compromise massive AI models.

In partnership with

The AI world keeps evolving at enterprise speed, and this week, it’s all about intelligence, integration, and security.

🏢 Google launches Gemini Enterprise, a unified AI platform that brings Gemini models to every department through a secure, chat-based interface — helping teams automate workflows, analyze documents, and build custom agents without code.

⚙️ Amazon unveils Quick Suite, an agentic AI tool designed to supercharge productivity with features like Quick Index, Quick Research, and Quick Automate — connecting seamlessly across over 1,000 business apps and AWS services.

🔒 Anthropic’s latest study warns that even a handful of poisoned samples — as few as 250 — can backdoor large language models, emphasizing the urgent need for stronger data integrity and model governance.

From smarter enterprise ecosystems to new AI security frontiers, this week’s updates prove that the race for intelligent, trustworthy, and scalable AI is just getting started.

Google’s new AI platform for enterprises

Google has launched Gemini Enterprise, a unified AI platform for businesses that brings Google’s advanced Gemini models to the workplace through an intuitive chat interface, enabling every employee to access and orchestrate AI-powered agents for tasks like deep research, document analysis, and automation across departments such as marketing, finance, and HR. The platform features a no-code workbench for building custom agents, offers pre-built Google AI agents, and securely connects to business data from sources like Google Workspace, Microsoft 365, Salesforce, and SAP, all managed under a central governance framework to ensure security and compliance. Priced from $21 to $30 per user per month, Gemini Enterprise is designed to streamline workflows, drive smarter business outcomes, and provide a single, enterprise-grade environment for deploying the full power of Google AI across organizations.​

Get things done faster with Amazon Quick Suite

Amazon Quick Suite is an agentic AI application from AWS designed to transform workplace productivity by helping employees find insights, conduct deep research, automate tasks, visualize data, and take actions seamlessly across multiple applications. It connects securely to enterprise information repositories like wikis, intranets, and popular apps, as well as AWS services such as S3 and Redshift, and integrates with over 1,000 apps using Model Context Protocol (MCP). Quick Suite offers users an intuitive chat-based interface, enabling requests, questions, or task automation as easily as chatting with a teammate. Key features include Quick Index to unify data sources, Quick Sight for business intelligence, Quick Research for deep, accurate investigations, and Quick Automate and Quick Flows to streamline both simple and complex workflows. Trusted by thousands of Amazon employees and customers such as DXC Technology and Jabil, Quick Suite drastically cuts time on repetitive tasks, improves decision-making with AI-driven insights, and supports secure, scalable automation—ushering a new era of AI-enhanced work experiences in enterprises.​

Find out why 100K+ engineers read The Code twice a week

Staying behind on tech trends can be a career killer.

But let’s face it, no one has hours to spare every week trying to stay updated.

That’s why over 100,000 engineers at companies like Google, Meta, and Apple read The Code twice a week.

Here’s why it works:

  • No fluff, just signal – Learn the most important tech news delivered in just two short emails.

  • Supercharge your skills – Get access to top research papers and resources that give you an edge in the industry.

  • See the future first – Discover what’s next before it hits the mainstream, so you can lead, not follow.

Small Number of Samples Can Poison Any Large Language Model

A recent study by Anthropic and partners reveals that large language models (LLMs) can be backdoored or poisoned by injecting a surprisingly small number of malicious documents—around 250—into their training datasets, regardless of the model’s size or the volume of clean training data. This means even massive LLMs like those with 13 billion parameters, trained on vast amounts of data, are vulnerable to backdoor attacks triggered by specific phrases inserted into a tiny fraction of the training corpus. The research challenges prior assumptions that poisoning attacks require controlling a significant percentage of the training data, showing instead that the absolute number of poisoned samples matters most. These attacks can induce undesirable model behaviors such as generating gibberish text on command. The findings highlight an urgent need for developing scalable defenses against such poisoning risks as AI systems continue to grow in size and importance. The work suggests that protecting AI models effectively will require governance and layered verification beyond relying on model scale alone.​

Hand Picked Video

In this video, we'll look at BGBlur's AI-powered license plate blur feature that automatically detects and blurs number plates in traffic videos, dashcam footage, and surveillance content in seconds.

Top AI Products from this week

  • Bgremover.video - BGRemover.video is a free, AI-powered tool that instantly removes video backgrounds without green screens. It supports multiple formats, batch processing, and lets users add custom backgrounds easily.

  • Holodeck - Holodeck is a multiplayer story game which uses Apple Intelligence to create interactive stories and uses ImagePlayground to illustrate them. Multiplayer mode via Game Center allows you and a friend can take turns contributing to the same story together.

  • DINNA AI - DINNA is an entire marketing ecosystem that helps small and medium businesses cut marketing costs up to 95%. One workspace that: - Acts like a seasoned CMO using proven HBS/AMA frameworks.

  • Qrent - Qrent helps international students in Australia find housing smarter and faster. It uses AI to analyze commute, budget, and area data to recommend the best rentals — all in one place, with easy filtering and booking.

  • Priorit.AI - Task management, simplified. Effortlessly manage your task in no time with our cutting-edge AI technology. Say goodbye to hard and manual prioritization. No time wasted. Work on what truly matters for you.

  • Compyle - Compyle is the coding agent that actually collaborates. It doesn’t just guess what you mean - it asks, plans, and checks in before writing a single line of code. That means you can trust it with bigger and more ambiguous projects than any other coding agent.

This week in AI

  • Stitch by Google Prompted Variants - Guide UI design with prompts—adjust layouts, set goals, or let Stitch reimagine screens for fast, creative iterations.

  • Claude Code Plugins - Claude Code plugins package slash commands, subagents, MCP servers, and hooks into easy installs to customize and share enhanced coding workflows. Public beta is live

  • GPT-5 Pro ARC-AGI Leader - GPT-5 Pro tops ARC-AGI benchmark with 70.2% on ARC-AGI-1 at $4.78/task and 18.3% on ARC-AGI-2 at $7.41/task, holding the highest verified frontier LLM score.

  • OpenAI on Political Bias in LLMs - OpenAI's political bias evaluation measures 500 prompts across 100 topics with 5 bias axes, showing GPT-5 models reduce bias by 30% and less than 0.01% of responses exhibit bias, mostly in emotionally charged cases.

  • Copilot on Windows Update - Microsoft Copilot on Windows now links OneDrive, Gmail, and more for natural language search and creates/export Word, Excel, PDF, or PowerPoint files by simple prompts.

Paper of The Day

The paper "CaRT: Teaching LLM Agents to Know When They Know Enough" introduces CaRT, a method to train large language models (LLMs) to effectively decide when to stop gathering information in multi-step tasks, such as medical diagnosis and math problem solving. CaRT uses a novel training approach combining "hard negative" counterfactual examples—pairs of similar information sequences where termination is either correct or premature—and explicit verbal reasoning traces that explain why the model should terminate or continue. This trains the model to recognize the point at which it has sufficient information to act, improving efficiency and accuracy. Experiments demonstrate CaRT outperforms baseline methods, achieving better termination decisions that boost task success while reducing unnecessary computation or interaction. The method also generalizes well to out-of-distribution scenarios and offers improved transparency and reliability in termination behavior.​


To read the whole paper 👉️ here.