Google's Free AI Coding Assistant, Diffusion LLMs & Sora’s New Rival

Google’s free coding AI, Mercury’s game-changing speed, and Alibaba’s Wan2.1 shaking up video generation.

Welcome back to Daily AI Skills.

Here’s what we are covering today:
1. Google’s FREE Coding Assistant
2. Inception Labs Mercury - Diffusion LLM
3. Alibaba Wan 2.1 - Better than Sora?

Google’s Gemini Code Assist: Free AI Coding Help for Everyone

Google has launched Gemini Code Assist for individuals, offering free AI-powered coding assistance with unmatched usage limits. Here’s what you need to know:

  • AI for All: Previously, only well-funded teams had access to top-tier AI coding tools. Now, students, freelancers, and startups can use Gemini Code Assist for free.

  • Massive Usage Limits: Where some competitors cap free users at 2,000 code completions per month, Gemini offers up to 180,000 per month, far more than most developers will ever need (see the quick back-of-envelope comparison after this list).

  • Code Review in GitHub: AI-powered reviews flag stylistic issues and bugs and suggest fixes, streamlining pull requests.

  • Supports All Languages: Works with all programming languages in the public domain.

  • Seamless IDE Integration: Available in Visual Studio Code, JetBrains IDEs, Firebase, and Android Studio for real-time coding help.

  • Large Context Window: Supports 128,000 input tokens, allowing deeper code understanding.
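
To put those numbers in perspective, here is a rough back-of-envelope sketch. The 180,000 and 2,000 completions and the 128,000-token window come from the announcement; the 30-day month and the tokens-per-line estimate are purely illustrative assumptions.

```python
# Back-of-envelope math for the limits above. The 180,000 / 2,000 completions
# and the 128,000-token window are from the announcement; the 30-day month and
# ~10 tokens per line of code are rough illustrative assumptions.
GEMINI_MONTHLY = 180_000
TYPICAL_FREE_TIER = 2_000
DAYS_PER_MONTH = 30
CONTEXT_WINDOW_TOKENS = 128_000
TOKENS_PER_LINE = 10  # rough average; varies by language and coding style

print(f"Gemini Code Assist: ~{GEMINI_MONTHLY // DAYS_PER_MONTH:,} completions/day")   # ~6,000
print(f"Typical free tier:  ~{TYPICAL_FREE_TIER // DAYS_PER_MONTH:,} completions/day") # ~66
print(f"That is {GEMINI_MONTHLY // TYPICAL_FREE_TIER}x the monthly allowance")         # 90x
print(f"The context window fits roughly {CONTEXT_WINDOW_TOKENS // TOKENS_PER_LINE:,} lines of code")
```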

Mercury: The First Commercial-Scale Diffusion LLM

A new era of AI is here! Mercury, a diffusion large language model (dLLM), promises to be up to 10x faster and cheaper than current LLMs.


Here’s why it’s a game-changer:

  • Revolutionary Speed: Mercury runs at over 1000 tokens/sec—far surpassing GPT-4o Mini and Claude 3.5 Haiku.

  • Diffusion-Based AI: Unlike traditional LLMs that generate text one token at a time, Mercury refines the entire output in parallel over a handful of passes, which drives its speed and, per Inception Labs, improves reasoning and error correction (see the toy sketch after this list).

  • Mercury Coder for Developers: A specialized code model that outperforms speed-optimized models like GPT-4o Mini while being up to 10x faster.

  • Real-World Impact: Enterprise clients are already cutting costs and improving performance by switching from standard LLMs to dLLMs.

  • Seamless Integration: Available via API and on-premise, fully compatible with existing AI pipelines (an example API call appears after the playground link below).
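
To make the parallel-refinement idea concrete, here is a toy Python sketch, not Mercury's actual algorithm: an autoregressive decoder needs one step per token, while a diffusion-style decoder starts from a fully masked sequence and fills in many positions per pass.

```python
import random

# Toy contrast between autoregressive decoding and diffusion-style parallel
# refinement. Purely illustrative; this is not Mercury's algorithm.
TARGET = "the quick brown fox jumps over the lazy dog".split()

def autoregressive_decode(target):
    """One token per step: a 9-token sentence takes 9 sequential steps."""
    out = []
    for tok in target:              # each step conditions on everything before it
        out.append(tok)
    return out, len(target)

def diffusion_style_decode(target, passes=3, seed=0):
    """Start fully masked; each pass 'denoises' a batch of positions in parallel."""
    rng = random.Random(seed)
    seq = ["<mask>"] * len(target)
    for p in range(passes):
        masked = [i for i, tok in enumerate(seq) if tok == "<mask>"]
        if not masked:
            break
        k = max(1, len(masked) // (passes - p))   # last pass reveals the rest
        for i in rng.sample(masked, k):
            seq[i] = target[i]      # a real dLLM would predict these jointly
    return seq, passes

_, ar_steps = autoregressive_decode(TARGET)
_, dl_passes = diffusion_style_decode(TARGET)
print(f"autoregressive: {ar_steps} sequential steps")   # 9
print(f"diffusion-style: {dl_passes} parallel passes")  # 3
```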

Try out the model here: https://chat.inceptionlabs.ai/
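
If you want to call Mercury programmatically rather than through the playground, the sketch below assumes an OpenAI-style chat-completions interface; the base URL, model id, and environment variable are placeholders, so check Inception Labs' API documentation for the real values.

```python
import os
import requests

# Hedged sketch: assumes an OpenAI-compatible chat-completions interface.
# The base URL, model id, and env var below are placeholders, not confirmed values.
API_KEY = os.environ["INCEPTION_API_KEY"]
BASE_URL = "https://api.inceptionlabs.ai/v1"    # assumption: verify in the docs

resp = requests.post(
    f"{BASE_URL}/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "mercury-coder",               # placeholder model id
        "messages": [
            {"role": "user",
             "content": "Write a Python function that reverses a string."},
        ],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```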

Alibaba’s Wan2.1: A New Leader in Open-Source Video Generation

Alibaba’s Tongyi Lab just launched Wan2.1, a cutting-edge open-source video generation suite that outperforms even proprietary models like Sora—all while being 2.5x faster.

Video generated by Wan 2.1 (compressed GIF; the original video is higher quality)

Here’s why it’s a breakthrough:

  • SOTA Performance: Wan2.1-T2V-14B tops the VBench leaderboard, excelling in complex motion, real-world physics, and text rendering.

  • Versatile Generation: Supports text-to-video, image-to-video, and video-to-audio, plus English & Chinese text rendering—a first in AI video.

  • Powerful Editing Tools: Features video inpainting, outpainting, multi-image referencing, and character consistency.

  • Consumer-Friendly Model: The 1.3B version runs on an RTX 4090, generating 5-sec 480P clips in just 4 minutes.
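
For reference, here is a minimal sketch of running that 1.3B text-to-video model locally through the Hugging Face diffusers integration; the model id, class names, and generation settings follow the public release but may change, so treat them as assumptions and check the model card.

```python
import torch
from diffusers import AutoencoderKLWan, WanPipeline
from diffusers.utils import export_to_video

# Sketch of local text-to-video with the 1.3B model via diffusers.
# Model id, classes, and settings follow the public release; verify against
# the model card before relying on them.
model_id = "Wan-AI/Wan2.1-T2V-1.3B-Diffusers"
vae = AutoencoderKLWan.from_pretrained(model_id, subfolder="vae", torch_dtype=torch.float32)
pipe = WanPipeline.from_pretrained(model_id, vae=vae, torch_dtype=torch.bfloat16).to("cuda")

video = pipe(
    prompt="A corgi running along a beach at sunset, cinematic lighting",
    height=480,            # 480P output, as in the RTX 4090 example above
    width=832,
    num_frames=81,         # roughly 5 seconds of footage
    guidance_scale=5.0,
).frames[0]

export_to_video(video, "wan21_clip.mp4", fps=16)
```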

Try out Wan 2.1 here: https://wan21ai.com/

The AI race is heating up—whether you're coding, generating text at lightning speed, or crafting cinematic magic, the future is faster, smarter, and more open than ever. Stay ahead, or stay outdated!

📩 Forward it to people you know are keeping pace with the changing AI world and stay tuned for the next edition to stay ahead of the curve!