Discover how to maintain character and style consistency in AI-generated videos. Explore Google Veo 3's 'High-Quality Chaining' for prosumers, and how indie filmmakers combine Midjourney, Kling, and Runway for narrative coherence. Learn how professional studios use layered ComfyUI pipelines, and about effective tools like Suno for music and ElevenLabs for voice synthesis. Get tips for crafting viral short videos, from image concepts to audio editing, tailored for every level of creator.
INSIGHT
Google Veo 3 for Instant AI Video
Google Veo 3 is the best starting point for instant AI video production, with all-in-one features like high-quality video and integrated audio.
Its weakness in multi-clip extension requires workarounds like high-quality chaining for longer videos.
ADVICE
Use GPT-4o for Characters
Create a character sheet using GPT-4o with a locking prompt to ensure visual consistency for video.
Generate multiple poses and expressions conversationally to build a reliable reference for video generation.
ADVICE
Maintain Quality with High-Quality Chaining
Use Veo 3 high-quality chaining: generate each video clip independently and use the last frame of one as the first frame of the next.
This avoids the quality drops from Veo 3's native clip-extend feature and maintains video fidelity.
How to maintain character consistency, style consistency, and more in an AI video. Prosumers can use Google Veo 3's "High-Quality Chaining" for fast social media content. Indie filmmakers can achieve narrative consistency by combining Midjourney V7 for style, Kling for lip-synced dialogue, and Runway Gen-4 for camera control, while professional studios gain full control with a layered ComfyUI pipeline to output multi-layer EXR files for standard VFX compositing.
Music: Use Suno for complete songs or Udio for high-quality components for professional editing.
Sound Effects: Use ElevenLabs' SFX for integrated podcast production or SFX Engine for large, licensed asset libraries for games and film.
Voice: ElevenLabs gives the most realistic voice output. Murf.ai offers an all-in-one studio for marketing, and Play.ht has a low-latency API for developers.
Open-Source TTS: For local use, StyleTTS 2 generates human-level speech, Coqui's XTTS-v2 is best for voice cloning from minimal input, and Piper TTS is a fast, CPU-friendly option.
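For the local route, a minimal Coqui XTTS-v2 voice-cloning sketch, assuming the open-source `TTS` package; the reference clip, narration text, and output path are placeholders:

```python
# pip install TTS  (Coqui TTS)
import torch
from TTS.api import TTS

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load the multilingual XTTS-v2 model (downloads the weights on first use).
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2").to(device)

# Clone a voice from a few seconds of clean reference audio and synthesize locally.
tts.tts_to_file(
    text="This narration was generated entirely on a local machine.",
    speaker_wav="reference_voice.wav",  # placeholder reference clip to clone
    language="en",
    file_path="narration_01.wav",
)
```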
I. Prosumer Workflow: Viral Video
Goal: Rapidly produce branded, short-form video for social media. This method bypasses Veo 3's weaker native "Extend" feature.
Toolchain
Image Concept: GPT-4o (API: GPT-Image-1) for its strong prompt adherence, text rendering, and conversational refinement.
Video Generation: Google Veo 3 for high single-shot quality and integrated ambient audio.
Soundtrack: Udio for creating unique, "viral-style" music.
Assembly: CapCut for its standard short-form editing features.
Workflow
Create Character Sheet (GPT-4o): Generate a primary character image with a detailed "locking" prompt, then use conversational follow-ups to create variations (poses, expressions) for visual consistency (an API sketch follows this workflow).
Generate Video (Veo 3): Use "High-Quality Chaining."
Clip 1: Generate an 8s clip from a character sheet image.
Extract Final Frame: Save the last frame of Clip 1 (an ffmpeg sketch follows this workflow).
Clip 2: Use the extracted frame as the image input for the next clip, using a "this then that" prompt to continue the action. Repeat as needed.
Create Music (Udio): Use Manual Mode with structured prompts ([Genre: ...], [Mood: ...]) to generate and extend a music track.
Final Edit (CapCut): Assemble clips, layer the Udio track over Veo's ambient audio, add text, and use "Auto Captions." Export in 9:16.
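A minimal sketch of the character-sheet step through the OpenAI Images API, assuming the `openai` Python SDK and the GPT-Image-1 model; the character description, refinement prompt, and file names are placeholders:

```python
# pip install openai
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A "locking" prompt: pin down every identifying detail so later shots can refer back to it.
LOCKING_PROMPT = (
    "Character sheet, front view: 'Mara', a woman in her 30s with short copper hair, "
    "round glasses, olive trench coat; flat studio lighting, neutral grey background, "
    "photorealistic."  # placeholder character description
)

first = client.images.generate(model="gpt-image-1", prompt=LOCKING_PROMPT, size="1024x1024")
with open("mara_front.png", "wb") as f:
    f.write(base64.b64decode(first.data[0].b64_json))

# Conversational refinement: feed the first render back in as the edit input so the
# new pose stays locked to the same character.
variation = client.images.edit(
    model="gpt-image-1",
    image=open("mara_front.png", "rb"),
    prompt="Same character, same outfit and lighting, now in profile view, smiling.",
)
with open("mara_profile.png", "wb") as f:
    f.write(base64.b64decode(variation.data[0].b64_json))
```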
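And a minimal sketch of the last-frame extraction behind High-Quality Chaining, assuming ffmpeg is installed; clip and frame file names are placeholders:

```python
import subprocess

def extract_last_frame(video_path: str, frame_path: str) -> None:
    """Save the final frame of a clip so it can seed the next Veo 3 generation."""
    # -sseof -0.1 seeks to ~0.1 s before the end of the file; -update 1 keeps
    # overwriting the single output image, so the last decoded frame wins.
    subprocess.run(
        ["ffmpeg", "-y", "-sseof", "-0.1", "-i", video_path, "-update", "1", frame_path],
        check=True,
    )

extract_last_frame("clip_01.mp4", "clip_01_last_frame.png")
# Upload clip_01_last_frame.png as the image input for Clip 2's "this then that" prompt.
```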
II. Indie Filmmaker Workflow: Narrative Shorts
Goal: Create cinematic short films with consistent characters and storytelling focus, using a hybrid of specialized tools.
Toolchain
Visual Foundation: Midjourney V7 to establish character and style with --cref and --sref parameters.
Dialogue Scenes: Kling for its superior lip-sync and character realism.
B-Roll/Action: Runway Gen-4 for its Director Mode camera controls and Multi-Motion Brush.
Voice Generation: ElevenLabs for emotive, high-fidelity voices.
Edit & Color: DaVinci Resolve for its integrated edit, color, and VFX suite and favorable cost model.
Workflow
Create Visual Foundation (Midjourney V7): Generate a "hero" character image. Use its URL with --cref --cw 100 to create consistent character poses and with --sref to replicate the visual style in other shots. Assemble a reference set.
Create Dialogue Scenes (ElevenLabs -> Kling):
Generate the dialogue track in ElevenLabs and download the audio (an API sketch follows this workflow).
In Kling, generate a video of the character from a reference image with their mouth closed.
Use Kling's "Lip Sync" feature to apply the ElevenLabs audio to the neutral video for a perfect match.
Create B-Roll (Runway Gen-4): Use reference images from Midjourney. Apply precise camera moves with Director Mode or add localized, layered motion to static scenes with the Multi-Motion Brush.
Assemble & Grade (DaVinci Resolve): Edit clips and audio on the Edit page. On the Color page, use node-based tools to match shots from Kling and Runway, then apply a final creative look.
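A minimal sketch of the ElevenLabs dialogue step via its REST text-to-speech endpoint; the voice ID, model ID, voice settings, and dialogue line are placeholders, and the API key is assumed to be in the environment:

```python
# pip install requests
import os
import requests

ELEVEN_API_KEY = os.environ["ELEVENLABS_API_KEY"]
VOICE_ID = "your-voice-id"  # placeholder: pick or clone a voice in the ElevenLabs UI
URL = f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}"

resp = requests.post(
    URL,
    headers={"xi-api-key": ELEVEN_API_KEY, "Content-Type": "application/json"},
    json={
        "text": "You shouldn't have come back here.",  # placeholder dialogue line
        "model_id": "eleven_multilingual_v2",          # placeholder model choice
        "voice_settings": {"stability": 0.4, "similarity_boost": 0.8},
    },
    timeout=120,
)
resp.raise_for_status()

# The response body is the rendered audio; save it for Kling's Lip Sync step.
with open("dialogue_line_01.mp3", "wb") as f:
    f.write(resp.content)
```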
III. Professional Studio Workflow: Full Control
Goal: Achieve absolute pixel-level control, actor likeness, and integration into standard VFX pipelines using an open-source, modular approach.
Toolchain
Core Engine: ComfyUI with Stable Diffusion models (e.g., SD3, FLUX).
Workflow
Train Character LoRA: Train a custom LoRA on a 15-30 image dataset of the actor in ComfyUI to ensure true likeness.
Build ComfyUI Node Graph: Construct a generation pipeline in this order (a minimal API-format sketch follows this workflow):
Loaders: Load base model, custom character LoRA, and text prompts (with LoRA trigger word).
ControlNet Stack: Chain multiple ControlNets to define structure (e.g., OpenPose for skeleton, Depth map for 3D layout).
IPAdapter-FaceID: Use the Plus v2 model as a final reinforcement layer to lock facial identity before animation.
AnimateDiff: Apply deterministic camera motion using Motion LoRAs (e.g., v2_lora_PanLeft.ckpt).
KSampler -> VAE Decode: Generate the image sequence.
Export Multi-Layer EXR: Use a node like mrv2SaveEXRImage to save the output as an EXR sequence (.exr). Configure for a professional pipeline: 32-bit float, linear color space, and PIZ/ZIP lossless compression. This preserves render passes (diffuse, specular, mattes) in a single file (an OpenEXR sketch of these settings follows this workflow).
Composite in Fusion: In DaVinci Resolve, import the EXR sequence. Use Fusion's node graph to access individual layers, allowing separate adjustments to elements like color, highlights, and masks before integrating the AI asset into a final shot with a background plate.
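A stripped-down sketch of the node graph above in ComfyUI's API (prompt) format, queued over the local HTTP endpoint. Only core nodes are shown; the ControlNet, IPAdapter-FaceID, and AnimateDiff stages come from custom node packs and are omitted to keep the sketch self-contained, and the checkpoint, LoRA, and trigger-word names are placeholders:

```python
# pip install requests  -- assumes a ComfyUI server running at 127.0.0.1:8188
import requests

workflow = {
    # Loaders: base model plus the custom character LoRA.
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "sd3_medium_incl_clips.safetensors"}},
    "2": {"class_type": "LoraLoader",
          "inputs": {"model": ["1", 0], "clip": ["1", 1],
                     "lora_name": "actor_likeness.safetensors",
                     "strength_model": 0.9, "strength_clip": 0.9}},
    # Prompts: the positive prompt carries the LoRA trigger word.
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["2", 1],
                     "text": "actorTriggerWord, cinematic portrait, soft key light"}},
    "4": {"class_type": "CLIPTextEncode",
          "inputs": {"clip": ["2", 1], "text": "blurry, deformed, extra fingers"}},
    "5": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    # KSampler -> VAE Decode -> save.
    "6": {"class_type": "KSampler",
          "inputs": {"model": ["2", 0], "positive": ["3", 0], "negative": ["4", 0],
                     "latent_image": ["5", 0], "seed": 42, "steps": 28, "cfg": 5.0,
                     "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0}},
    "7": {"class_type": "VAEDecode",
          "inputs": {"samples": ["6", 0], "vae": ["1", 2]}},
    "8": {"class_type": "SaveImage",
          "inputs": {"images": ["7", 0], "filename_prefix": "studio_shot"}},
}

requests.post("http://127.0.0.1:8188/prompt", json={"prompt": workflow}, timeout=30)
```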
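And a small sketch of the EXR export settings (32-bit float channels, PIZ lossless compression) using the `OpenEXR` Python bindings; the resolution and file name are placeholders, and real render-pass data would replace the zero array:

```python
# pip install OpenEXR
import numpy as np
import OpenEXR
import Imath

height, width = 1080, 1920
rgba = np.zeros((4, height, width), dtype=np.float32)  # placeholder linear-light pixels

header = OpenEXR.Header(width, height)
float_chan = Imath.Channel(Imath.PixelType(Imath.PixelType.FLOAT))  # full 32-bit float
header["channels"] = {c: float_chan for c in ("R", "G", "B", "A")}
header["compression"] = Imath.Compression(Imath.Compression.PIZ_COMPRESSION)  # lossless

exr = OpenEXR.OutputFile("shot010_0001.exr", header)
exr.writePixels({c: rgba[i].tobytes() for i, c in enumerate(("R", "G", "B", "A"))})
exr.close()
```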