Why is everyone going crazy over Zuck's Llama 4?

PLUS: Backend Structure Doc Template for Vibe Coding

Welcome back to Daily AI Skills.

Here’s what we are covering today:
1. MoCha by Meta - New Breakthrough in AI Video Gen
2. The collab you didn’t expect - Intel x TSMC
3. Llama 4 - The New Best Open-source model

+ How to make a Backend Structure Doc for Vibe Coding - Template

MoCha by Meta - Truly Photorealistic Video Generation

Meta has unveiled MoCha, an innovative AI system developed in collaboration with the University of Waterloo, designed to transform text prompts into movie-grade animated characters with synchronized speech and lifelike movements, pushing the boundaries of AI-driven video generation.

The details:

  • MoCha leverages a diffusion transformer model with 30 billion parameters, capable of producing photorealistic, high-definition video clips approximately five seconds long at 24 frames per second.

  • The system incorporates "Speech-Video Window Attention" technology to ensure precise lip-syncing and can animate multiple characters simultaneously based on user-defined prompts.

  • It was trained on 300 hours of meticulously curated video content, supplemented with text-based sequences to enhance expressiveness and interaction variety, though the source material remains undisclosed.

  • While MoCha has outperformed competitors in lip-sync quality and natural movement in internal tests, Meta has not yet confirmed a public release timeline for the technology.

Intel and TSMC Explore White House-Backed Joint Venture

Intel is reportedly entering a strategic partnership with rival chipmaker TSMC, according to The Information. The deal would involve a joint venture aimed at reviving Intel’s struggling manufacturing operations.

Here are the key points:

  • The White House is said to have facilitated the talks between the two companies, with TSMC potentially taking a 20% stake in the joint venture.

  • Rather than investing cash, TSMC would provide manufacturing know-how and training to help boost Intel’s production capabilities.

  • The proposed partnership is facing pushback from within Intel, with executives worried about potential layoffs and the implications for the company’s own manufacturing technology.

  • Intel’s new CEO, Lip-Bu Tan, is reportedly driving the initiative as part of a broader overhaul, following the company’s $16 billion loss in 2024.



Llama 4 - The best truly open source model right now

Meta has launched Llama 4, its latest family of open-source AI models. The Scout and Maverick variants are available now, with the colossal Behemoth still in training, and Meta claims the top spot among open-source models by surpassing DeepSeek’s R1 and V3 on key performance metrics.

The details:

  • Llama 4 Scout, with a 10-million-token context window, runs on a single GPU and outshines Google’s Gemma 3 and Mistral 3.1, while trumping DeepSeek V3 in efficiency and benchmark scores like MMLU and MATH-500.

  • Llama 4 Maverick, a 17-billion-active-parameter “mixture of experts” model, excels in multimodal tasks—chat, vision, coding, and reasoning—edging out DeepSeek R1, GPT-4o, and Gemini 2.0 Flash in head-to-head tests.

  • The forthcoming Llama 4 Behemoth, a titan with 288 billion active parameters, is poised to challenge DeepSeek’s anticipated R2 and even GPT-4.5 in STEM reasoning, with a reveal slated for Meta’s LlamaCon on April 29, 2025.

  • Meta’s AI assistant, powered by Llama 4, has rolled out across WhatsApp, Messenger, and Instagram in 40 countries, outpacing DeepSeek’s adoption by leveraging Meta’s vast ecosystem.

Try out the model through Open Router here: https://openrouter.ai/meta-llama/llama-4-maverick:free
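If you prefer to call it programmatically, here is a minimal sketch of querying Llama 4 Maverick through OpenRouter's OpenAI-compatible chat completions endpoint. It assumes you have an OpenRouter API key exported as `OPENROUTER_API_KEY`; the helper names (`build_request`, `ask_llama`) are illustrative, not part of any official SDK.

```python
# Minimal sketch: calling Llama 4 Maverick via OpenRouter's
# OpenAI-compatible chat completions API (standard library only).
# Assumes an OpenRouter API key in the OPENROUTER_API_KEY env var.
import json
import os
import urllib.request

API_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(prompt: str,
                  model: str = "meta-llama/llama-4-maverick:free") -> dict:
    """Build the JSON payload for a single-turn chat completion."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def ask_llama(prompt: str) -> str:
    """POST the payload with a bearer token and return the reply text."""
    payload = build_request(prompt)
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Example (requires a valid key and network access):
# print(ask_llama("Summarize mixture-of-experts models in one sentence."))
```

The `:free` suffix selects OpenRouter's free-tier routing for the model; drop it if you are on a paid plan and want standard rate limits.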

How to make a Backend Structure Doc for Vibe Coding - Template

📩 Forward it to people you know who are keeping pace with the changing AI world, and stay tuned for the next edition to stay ahead of the curve!