The Daily AI Briefing - 29/07/2025
Welcome to The Daily AI Briefing! I'm your host, bringing you the most significant AI developments on this Tuesday. In today's episode, we'll cover Z.ai's powerful new open-source agentic model, Microsoft's integration of Copilot directly into the Edge browser, a tutorial for customizing AI video voices, and Alibaba's impressive open-source video generation model. We'll also highlight trending AI tools and job opportunities in the sector. Let's dive into what these developments mean for the industry and users alike.

First up, Chinese startup Z.ai, formerly known as Zhipu, has released GLM-4.5, a remarkable open-source agentic AI model family. With 355 billion parameters, the flagship model combines reasoning, coding, and agentic abilities while undercutting competitors on pricing. Z.ai claims it's now the top open-source model globally, ranking just behind industry leaders like o3 and Grok 4 in overall performance. What makes this particularly noteworthy is that Z.ai has not only released these models with open weights but also published its 'slime' training framework for others to build upon.

Moving to Microsoft, the tech giant has introduced 'Copilot Mode' in Edge, bringing AI assistance directly into the browsing experience. The feature lets Copilot search across open tabs, handle tasks, and proactively suggest actions. It's free initially on Windows and Mac, though Microsoft hints at eventual subscription pricing. The integration goes deeper than previous AI assistants: with user permission, Copilot will eventually be able to access browser history and credentials, enabling it to complete bookings and errands autonomously.
For content creators, a new tutorial demonstrates how to replace the default AI-generated voice in a video with a custom one. The process involves creating a video with Google Veo, extracting the audio, using ElevenLabs' Voice Changer to generate new speech, and then combining everything in CapCut. This technique opens up new possibilities for personalization in AI-generated content, letting creators maintain consistent character voices across their projects.

In the video generation space, Alibaba's Tongyi Lab has launched Wan2.2, an advanced open-source video model that brings cinematic capabilities to both text-to-video and image-to-video generation. Using a dual "expert" approach, Wan2.2 lays out the overall scene with one expert while the other efficiently adds fine detail. The model reportedly surpasses competitors in aesthetics, text rendering, and camera control, having been trained on significantly more images and videos than its predecessor, and users can fine-tune many aspects of their videos for greater control over the final output.

Among today's trending AI tools are Runway Aleph for video content creation, Alibaba's Qwen3-Thinking with enhanced reasoning, Tencent's Hunyuan3D World Model for open-world generation, and Google's Aeneas for restoring ancient texts. For those pursuing careers in AI, notable openings include positions at Databricks, Parloa, UiPath, and xAI.

That wraps up our Daily AI Briefing for today. We've seen significant advances in open-source models, browser integration, content creation, and video generation, showcasing the rapid pace of AI innovation. These developments continue to democratize powerful AI capabilities while raising important questions about the future of human-computer interaction. Join us tomorrow for another update on the ever-evolving world of artificial intelligence. Until then, stay curious and stay informed.
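Show notes: the voice-swap workflow from today's tutorial segment can be sketched in code. This is a minimal illustration under assumptions not stated in the briefing: ffmpeg is installed, you have an ElevenLabs API key, and the endpoint used is assumed to be ElevenLabs' public speech-to-speech ("Voice Changer") API; `VOICE_ID` is a hypothetical placeholder, and the final titling/assembly in CapCut is a GUI step outside the script.

```python
# Sketch of the voice-swap pipeline: Veo clip -> extract audio
# -> ElevenLabs Voice Changer -> remux new voice over the video.
import subprocess


def extract_audio_cmd(video_in: str, audio_out: str) -> list[str]:
    # ffmpeg command that strips the default AI voice track as WAV (-vn = no video).
    return ["ffmpeg", "-y", "-i", video_in, "-vn",
            "-acodec", "pcm_s16le", audio_out]


def remux_cmd(video_in: str, new_audio: str, video_out: str) -> list[str]:
    # ffmpeg command that pairs the original video stream with the new
    # voice track, copying video as-is (-c:v copy) to avoid re-encoding.
    return ["ffmpeg", "-y", "-i", video_in, "-i", new_audio,
            "-map", "0:v:0", "-map", "1:a:0", "-c:v", "copy",
            "-shortest", video_out]


def change_voice(audio_in: str, audio_out: str,
                 api_key: str, voice_id: str) -> None:
    # Assumed endpoint: ElevenLabs speech-to-speech, i.e. the Voice Changer.
    import requests  # third-party: pip install requests
    url = f"https://api.elevenlabs.io/v1/speech-to-speech/{voice_id}"
    with open(audio_in, "rb") as f:
        resp = requests.post(url, headers={"xi-api-key": api_key},
                             files={"audio": f})
    resp.raise_for_status()
    with open(audio_out, "wb") as out:
        out.write(resp.content)
```

A typical run would call `subprocess.run(extract_audio_cmd("veo_clip.mp4", "original.wav"), check=True)`, then `change_voice("original.wav", "custom_voice.mp3", API_KEY, VOICE_ID)`, then `subprocess.run(remux_cmd("veo_clip.mp4", "custom_voice.mp3", "swapped.mp4"), check=True)`, before importing the result into CapCut for final assembly.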