FAQ
What video models does Vertical Motion use?
The Vertical Motion Playground offers over 50 AI models, including VEO 3, Sora 2, Grok Imagine, Kling V3 (Kling O3 & Kling O1), Wan 2.6, and Vidu Q3.
The AI Director mainly uses Kling O3 for video generation. When creating a project, you choose between Standard (13 credits/second) and Pro (34 credits/second) — this determines which Kling O3 tier generates your scene videos. The Playground includes 50+ additional models across video, audio, image, and text. More video models are added regularly.
Can I switch from Standard to Pro after creating a project?
No. Video quality is locked at project creation. Standard uses Kling O3 Standard, Pro uses Kling O3 Pro. If you need Pro quality, choose it when you create the project.
What settings can I change after creating a project?
Everything in the chat box dropdowns can be changed at any time:
Mode — Auto, Plan, Build, or Ask
AI Model — Swift ($), Smart ($$), or Genius ($$$)
Scenes — number of scenes to plan
Flow — new scenes (cinematic cuts) or continuous (smooth transitions)
The only thing locked at creation is the video quality tier (Standard or Pro).
What's the difference between Swift, Smart, and Genius?
These control the AI model powering the Director's text intelligence — the brain behind planning, scene structure, and prompt writing. Swift ($) is fast and budget-friendly. Smart ($$) is the recommended default. Genius ($$$) handles complex narratives and nuanced creative direction. Switch between them anytime from the chat dropdown.
What do the Director modes do?
Auto
Director picks the right mode based on your message — recommended default
Plan
Creates a structured plan without building — review first, execute later
Build
Immediately creates elements, environments, and scenes on the canvas
Ask
Brainstorming only — explores ideas and answers questions, no building
Do I need to know how AI video models work?
No. The AI Director and Playground assistant have that knowledge built in. You describe what you want — they handle the technical details.
How does character consistency work?
Characters are created as elements with a main image and up to 3 angle references (frontal, left, right). The video model uses these views to maintain identity — same face, same proportions, same outfit — across every scene. You can generate the angle references automatically from the main image, or upload your own.
Can I use my own photos as elements?
Yes. Click an element card and upload a photo from your computer or asset library instead of generating one. Then click to generate the angle references — the model creates them from your uploaded image. This is how you put yourself, your product, or any real-world object into AI video with consistent identity.
Can I edit the canvas directly?
No. The canvas is controlled exclusively through the AI Director chat. You can click cards to view details, generate images, and upload assets — but adding, removing, or rearranging scenes, elements, and connections happens through conversation. This keeps prompts, assignments, and connections in sync automatically.
Can I edit scene prompts manually?
Yes. Click any scene card and click Edit to manually change the prompt or switch to multi-shot mode. However, the easiest way is to talk to the Director — it knows all constraints and best practices. Tell it what you want changed and it updates everything automatically. If you do edit manually, remember to click Save after making changes.
What are the multi-shot constraints?
If you're manually editing scenes in multi-shot mode:
Shot total must equal scene duration — If your scene is 5 seconds, all shots combined must total 5 seconds (e.g., 3s + 2s = 5s)
Maximum multi-shot duration is 15 seconds — The total duration of all shots in a multi-shot scene cannot exceed 15 seconds
The Director knows these rules automatically, so if you use chat to control scenes, you don't need to worry about them.
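For readers who script or sanity-check their edits, the two rules above can be expressed as a short check. This is an illustrative sketch only — the function and names are hypothetical, not part of Vertical Motion's product or API:

```python
# Hypothetical helper illustrating the two multi-shot rules from this FAQ.
# Not a Vertical Motion API; names are invented for illustration.
MAX_MULTISHOT_SECONDS = 15

def validate_multishot(scene_duration: int, shot_durations: list[int]) -> None:
    """Raise ValueError if a manually edited multi-shot scene breaks a rule."""
    total = sum(shot_durations)
    # Rule 1: shot durations must add up exactly to the scene duration.
    if total != scene_duration:
        raise ValueError(f"Shots total {total}s but scene is {scene_duration}s")
    # Rule 2: a multi-shot scene cannot exceed 15 seconds overall.
    if total > MAX_MULTISHOT_SECONDS:
        raise ValueError(f"Total {total}s exceeds {MAX_MULTISHOT_SECONDS}s limit")

validate_multishot(5, [3, 2])  # passes: 3s + 2s = 5s
```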
How many elements can a scene have?
Each scene can have up to 3 elements. Each element should have a main image plus angle references — minimum is main + 1 angle, but main + 3 angles gives the best consistency.
How can I get more style control?
Ask the Director to add more environments. Multiple environment references give you finer control over the visual look of your scenes. The Director can add environment references based on your description, or you can upload your own from your asset library or computer.
Can I restore previous versions of a scene?
Yes. When you regenerate a scene after making adjustments, the Director stores previous versions. Click the scene card and you'll see all previous versions — click to restore any of them. This lets you experiment with different prompts or shot configurations without losing your work.
Can I use elements for products and props?
Yes. Elements work for anything that needs to stay visually consistent: characters, products (watches, shoes, bottles), and props (phones, swords, bags).
What are reference images?
Reference images define the visual atmosphere and style for your scenes — the environment, lighting, mood, and color palette. They lock the look so every scene stays cohesive.
Critical rule: Reference images should contain NO subjects or characters. They define the empty environment only. If you include people, objects, or characters in reference images, the video generation tool can get confused about what to generate.
Good reference: Empty studio with soft lighting, minimalist backdrop, warm glow
Bad reference: Studio with a person or product visible in it
Can one reference be used for multiple scenes?
Yes. One reference can be connected to multiple scenes for visual consistency. This is especially useful when you want several scenes to share the same environment or style. Ask the Director to connect a reference to multiple scenes — for example: "Use the studio reference for scenes 1, 3, and 5" or "Apply the futuristic city environment to all outdoor scenes."
How do I get reference images?
Reference images can come from three sources:
Generate in Playground — Create a reference image using any image model, then add it to your project from Assets
Upload from Assets — Use an existing image from your asset library
Upload from desktop — Drag or browse an image from your computer
When you click a reference card on the canvas, you'll see options to generate, browse from library, or upload.
What's the difference between AI Director and Playground?
AI Director plans and directs full multi-scene video productions with consistency. Playground gives you direct access to 50+ models for quick, one-off generations.
Both have AI assistants trained on the available models.
Can I edit my video after generating?
Yes. Studio is a built-in video editor with timeline, audio tracks, transitions, text overlays, and export. One-click transfer from Director to Studio.
What can I export?
MP4 (H.264) at up to 1920×1080 (YouTube HD). Custom resolutions are also supported.
How long can my video be?
Individual scenes are 3–10 seconds. Chain as many scenes as you need. Typical projects range from 15 seconds to 5+ minutes.
What audio tools are available?
The Playground includes AI models for music generation, sound effects, and voice synthesis. Studio lets you layer multiple audio tracks on your video timeline.
How are credits used?
Credits are consumed when you generate content — videos, images, or audio. Previewing and editing in Studio don't cost credits. Video generation costs depend on your project's quality tier: Standard at 13 credits/second, Pro at 34 credits/second.
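To estimate costs before generating, multiply your tier's rate by total video seconds. The snippet below is a back-of-envelope sketch using the rates stated in this FAQ — the helper itself is illustrative, not a Vertical Motion API:

```python
# Illustrative credit estimate using the per-second rates from this FAQ.
# The function and names are hypothetical, not part of the product.
RATES = {"standard": 13, "pro": 34}  # credits per second of generated video

def estimate_credits(tier: str, scene_seconds: list[int]) -> int:
    """Total generation credits for a project, given each scene's duration."""
    return RATES[tier] * sum(scene_seconds)

# A three-scene Standard project totaling 15 seconds:
estimate_credits("standard", [5, 5, 5])  # 13 * 15 = 195 credits
```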