"Safe" Superintelligence, seriously?

Sutskever launched Safe Superintelligence Inc. to build safe superintelligence. Meta shared new FAIR research, including multimodal generation models and datasets. ERASE keeps the knowledge of retrieval-augmented LLMs up to date.

A month back, Ilya Sutskever, chief scientist of OpenAI, left the company. After leaving, he posted that he was working on an as-yet-unnamed project that was "very personally meaningful" to him. Now, one month later, he has founded a startup called Safe Superintelligence Inc. (SSI) with his former OpenAI colleague Daniel Levy and Daniel Gross, founder of the YC-backed startup Cue.

Sutskever aims to solely focus on developing safe superintelligence through SSI, a challenge he views as critical for the future.

Why “SAFE”?

Isn’t safety something an AI should naturally have built in?

Well no.

For starters, superintelligence, just as it sounds, is an AI smarter than a human being. But building a model that is smarter than a human is not only hard, it is also potentially unsafe. One of the biggest fears is misalignment: the fear that such machines will not align with human values and could ultimately cause human extinction.

This is a real fear.

Which is why OpenAI called its AGI safety team "Superalignment": its goal was, basically, a superintelligent AI that stays aligned with human values.

Which is also why Ilya started “Safe” superintelligence.

Again, a lot of AGI news this week. Look out for our AGI course :)

Let’s read more about this.

Meta FAIR announced Chameleon for unified multimodal generation, Multi-Token Prediction for more efficient LLM training, the JASCO text-to-music model, AudioSeal for watermarking AI-generated speech, and work on geographical diversity in text-to-image generation.

Finally, there's a new approach called ERASE for keeping our LLMs up to date. It updates a retrieval-augmented LLM's knowledge base by rewriting or deleting outdated entries whenever new documents are added, boosting Q&A accuracy.

Stuff you should know

Safe Superintelligence Inc (SSI)

Ilya Sutskever, a co-founder of OpenAI who recently stepped down as chief scientist, has announced a new startup called Safe Superintelligence Inc. (SSI). The company is focused on building safe superintelligence, which Sutskever calls "the most important technical problem of our time." SSI aims to advance AI capabilities as fast as possible while ensuring safety remains ahead, tackling safety and capabilities together through engineering breakthroughs. Sutskever is joined by his former OpenAI colleague Daniel Levy and Apple's former AI lead Daniel Gross as co-founders. The startup claims it will pursue safe superintelligence with "one focus, one goal, and one product" in a "straight shot" approach without distractions. This continues Sutskever's work from when he was part of OpenAI's superalignment team tasked with controlling powerful AI systems, which was disbanded after his departure.

Meta FAIR's New AI Innovations

  • Meta Chameleon - A unified architecture that can generate combinations of text and images from text and image inputs using tokenization.

  • Multi-Token Prediction - An approach for training large language models to predict multiple future words simultaneously, improving training efficiency and inference speed (see the sketch after this list).

  • JASCO - A text-to-music model that accepts symbolic inputs (chords, beats) alongside text for controlled music generation.

  • AudioSeal - An audio watermarking technique for localized detection of AI-generated speech segments within longer audio.

  • Geographical Diversity in Text-to-Image - Code and annotations to evaluate and improve the cultural and geographical diversity of text-to-image systems.

  • PRISM Dataset - Maps diverse users' preferences and demographics to their feedback from conversations with different language models, supporting more inclusive AI development.
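For the curious, here is a minimal sketch of what multi-token prediction looks like in training code. This is an illustrative toy, not Meta's released implementation: the `MultiTokenLM` class, the GRU trunk, and the `n_future` parameter are my own stand-ins. The idea is that a shared trunk encodes the sequence and several small output heads each predict the token k steps ahead, with their cross-entropy losses averaged.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTokenLM(nn.Module):
    """Toy language model with multiple output heads, each predicting
    the token k positions ahead (k = 1 .. n_future). Illustrative only."""

    def __init__(self, vocab_size=1000, d_model=128, n_future=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.trunk = nn.GRU(d_model, d_model, batch_first=True)  # stand-in for a transformer trunk
        self.heads = nn.ModuleList(
            [nn.Linear(d_model, vocab_size) for _ in range(n_future)]
        )

    def forward(self, tokens):
        h, _ = self.trunk(self.embed(tokens))     # (batch, seq, d_model)
        return [head(h) for head in self.heads]   # one logits tensor per future offset

def multi_token_loss(model, tokens):
    """Average cross-entropy over all heads: head k is trained to predict
    the token k steps ahead of each position."""
    logits_per_head = model(tokens)
    loss = 0.0
    for k, logits in enumerate(logits_per_head, start=1):
        pred = logits[:, :-k, :]                  # positions that have a target k steps ahead
        target = tokens[:, k:]
        loss = loss + F.cross_entropy(
            pred.reshape(-1, pred.size(-1)), target.reshape(-1)
        )
    return loss / len(logits_per_head)

# usage with fake token ids
model = MultiTokenLM()
batch = torch.randint(0, 1000, (2, 16))
loss = multi_token_loss(model, batch)
loss.backward()
```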

ERASE - Rewriting Knowledge for Evolving Language Models

When the world changes, so does the text that humans write about it. How do we build language models that can be easily updated to reflect these changes? One popular approach is retrieval-augmented generation, in which new documents are inserted into a knowledge base and retrieved during prediction for downstream tasks. Most prior work on these systems has focused on improving behavior during prediction through better retrieval or reasoning. This paper introduces ERASE, which instead improves model behavior when new documents are acquired, by incrementally deleting or rewriting other entries in the knowledge base each time a document is added. On two new benchmark datasets evaluating models' ability to answer questions about a stream of news articles or conversations, ERASE improves accuracy relative to conventional retrieval-augmented generation by 7-13% (Mixtral-8x7B) and 6-10% (Llama-3-8B) absolute.
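To make the idea concrete, here is a rough sketch of an ERASE-style update step, written from the description above rather than from the authors' code. The `related_entries`, `still_consistent`, and `rewrite` helpers, and the `llm` callable, are hypothetical: when a new document arrives, retrieve the knowledge-base entries most related to it, ask the model whether each is still accurate, and rewrite or drop the stale ones before inserting the document.

```python
from dataclasses import dataclass

@dataclass
class Entry:
    text: str

def related_entries(kb, doc, top_k=5):
    """Hypothetical retriever: return the top_k KB entries most similar to doc.
    A real system would use embeddings; this is naive keyword overlap."""
    def overlap(entry):
        return len(set(entry.text.lower().split()) & set(doc.lower().split()))
    return sorted(kb, key=overlap, reverse=True)[:top_k]

def still_consistent(llm, entry, doc):
    """Ask the LLM whether an existing entry is still accurate given the new document."""
    prompt = (f"New document:\n{doc}\n\nExisting fact:\n{entry.text}\n\n"
              "Is the existing fact still accurate? Answer yes or no.")
    return llm(prompt).strip().lower().startswith("yes")

def rewrite(llm, entry, doc):
    """Ask the LLM to rewrite a stale entry so it reflects the new document."""
    prompt = (f"New document:\n{doc}\n\nOutdated fact:\n{entry.text}\n\n"
              "Rewrite the fact so it is consistent with the new document.")
    return Entry(llm(prompt).strip())

def add_document(kb, doc, llm):
    """ERASE-style write path: before inserting the new document, keep unrelated
    entries, keep related entries that are still true, and rewrite stale ones."""
    related = {id(e) for e in related_entries(kb, doc)}
    updated = []
    for entry in kb:
        if id(entry) not in related:
            updated.append(entry)                     # unrelated: untouched
        elif still_consistent(llm, entry, doc):
            updated.append(entry)                     # related but still true: keep
        else:
            updated.append(rewrite(llm, entry, doc))  # stale: rewrite (or drop)
    updated.append(Entry(doc))                        # finally add the new document
    return updated
```

The read path is unchanged: at question-answering time the knowledge base is queried exactly as in standard retrieval-augmented generation; the method only changes what happens when documents are written.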

Hand Picked Video

I generated short video Ads with AI.

Top AI Products from this week 

  • La Growth Machine - Struggling to get replies from your leads? La Growth Machine lets you create personalized, multi-channel conversations at scale on LinkedIn, Email, Voices, Calls & X.

  • Genspark - An AI Agent engine where specialized AI agents perform research and generate custom pages called Sparkpages. Free from biases and SEO-driven content, Sparkpages synthesize trustworthy information, offering more valuable results and saving users time.

  • Augie Studio - Augie is a video creation and editing studio that enables businesses to leverage the power of video-first marketing.

  • AI Logo Reveals - Effortlessly bring static logos to life with our AI-powered generation tool 🪄 Captivating animations include mesmerizing smoke 💨, lightning ⚡️ and more ✨ - all in your browser.

  • DataSquirrel.ai - DataSquirrel saves you time, stress, and pain when making sense of your data. Automatically create insights, clear visuals and dashboard reports for any data!

  • AI Patient Intake - Conversational patient intakes take over nurses' tasks by asking patients coherent, dynamically contextualized questions to extract all the relevant information doctors need to make better decisions.

  • Hedra - Imagine worlds, characters and stories with complete creative control

  • Document AI by Playmaker - Fetch documents via email, API or manual upload (PDF, XLS or TXT). 🤖 Extract data from documents, including contracts, invoices and more, using document AI.

This week in AI

  • Open Interpreter new Improvements - Local III update for Open Interpreter brings major improvements to local AI. Includes local model explorer, deep integration with inference engines like Ollama, optimized profiles for open models like Llama3 and Codestral, local vision via Moondream, and free hosted 'i' model endpoint serving Llama3-70B to contribute to open-source language model training for computer control. Aims to provide personal, private access to machine intelligence locally.

  • Meta AI Opens Up, Google Restricts Election Queries in India - Meta AI has lifted restrictions on election queries in India, allowing its AI models to respond to prompts about elections and politics. However, Google maintains limits on such queries globally, citing responsible AI development. The contrasting approaches by Meta and Google highlight differing views on AI's role in providing information during democratic processes.

  • Runway releases Gen-3 Alpha - Runway has released Gen-3 Alpha, a new video generation AI model trained on new infrastructure for large-scale multimodal training. The model is capable of generating highly detailed videos with complex scene changes, a wide range of cinematic choices, and detailed art direction.

  • Ex-Snap engineer launches an AI social network - A former Snap engineer has launched Butterflies, a social network where humans and AI avatars coexist. Butterflies allows users to create AI personas that can interact with others. The platform aims to explore the future of human-AI relationships as AI becomes more advanced. Butterflies has raised $12 million in seed funding.