Montage Editor, Prompt Enhance, and Seedance 2 Support: Changelog March 28 – April 3, 2026
A big week. The Montage video editor moved from branch to production. A new LLM-powered Prompt Enhance button replaced the old toggle. Seedance 2 multi-endpoint video support landed. Team invites no longer redirect new users to a separate signup page. Here is everything that shipped March 28 through April 3.
Montage Video Editor Is Now in Production
The Montage editor merged to main on March 31 via PR #293. It is a storyboard-based video composition tool built into Platform Studio. You generate or import individual video clips, arrange them on a timeline, and export a single stitched video.
What you can do in Montage
- Timeline with zoom: Pixel-based ruler, zoom to cursor with Cmd/Ctrl+scroll, ghost playhead hover, click-to-seek.
- Clip editing: Drag-to-reorder, split-clip tool (scissors cursor), per-clip speed (1x to 2x), per-clip mute toggle.
- Add clips two ways: Generate a new video clip with AI, or import an existing video from your Files. The choice screen appears when you add a new clip.
- Project persistence: Your storyboard auto-saves to localStorage. Refresh or close the tab, and your clips are still there. Ongoing generations reconnect automatically.
- Export: Set aspect ratio and resolution (up to 2160p) before exporting. The export goes through the same async generation system as image generation, with WebSocket-based progress and auto-download on completion.
- Undo: Up to 50 steps with Ctrl+Z or the dedicated button.
The first frame of each clip shows the actual trimIn offset, not frame 0, so thumbnails match what will appear in the final video. Export is disabled until all clips are in READY state.
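As a rough illustration of the readiness rule, the export gate can be modeled as a pure check over clip states. The names below are hypothetical sketches, not the actual Studio code:

```typescript
// Hypothetical clip model: each timeline clip tracks its generation state.
type ClipState = "GENERATING" | "READY" | "FAILED";

interface Clip {
  id: string;
  state: ClipState;
  trimIn: number; // seconds trimmed from the clip's start; the thumbnail uses this frame
  speed: number;  // per-clip playback speed, 1x to 2x
  muted: boolean;
}

// Export stays disabled until the timeline has at least one clip
// and every clip is READY.
function canExport(clips: Clip[]): boolean {
  return clips.length > 0 && clips.every((c) => c.state === "READY");
}
```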
Prompt Enhance Button: Now Powered by an LLM
The old feature-flag-controlled enhance toggle is gone. In its place: a sparkle button directly in the prompt input. Click it, and the current prompt is sent to an LLM that rewrites it with model-specific context.
The enhance is context-aware. For Seedance 2, the system prompt enforces multi-shot structure: three shots by default, evenly divided durations, one motion verb per shot, camera specified as [move+speed+stability], secondary physics required (fabric, hair, drape), and a Constraints line for cross-shot consistency. Because Seedance 2 does not accept face reference photos, the LLM now describes the model's appearance (age, build, hair, skin tone) so the output is consistent across shots.
The enhance button shows a shimmer loading state while the LLM responds. It is wired into all PromptInput locations: Studio, Batch Generation, Image Edit, Video Generation, and Storyboard.
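To make the model-specific behavior concrete, here is a hedged sketch of how an enhance request could be assembled. The real system prompt and endpoint are internal; the rules below only paraphrase the Seedance 2 constraints described above, and all names are hypothetical:

```typescript
interface EnhanceRequest {
  systemPrompt: string;
  userPrompt: string;
}

// Hypothetical builder: adds Seedance 2 multi-shot rules to the system prompt.
function buildEnhanceRequest(modelId: string, prompt: string): EnhanceRequest {
  let systemPrompt =
    "Rewrite the prompt for video generation. Keep the user's intent.";
  if (modelId === "seedance-2") {
    systemPrompt += [
      "",
      "Structure the prompt as three shots with evenly divided durations.",
      "Use exactly one motion verb per shot.",
      "Specify the camera as [move+speed+stability] in each shot.",
      "Include secondary physics (fabric, hair, drape).",
      "End with a Constraints line for cross-shot consistency.",
      "Describe the model's appearance (age, build, hair, skin tone);",
      "face reference photos are not supported.",
    ].join("\n");
  }
  return { systemPrompt, userPrompt: prompt };
}
```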
Inline Prompt Expansion
The expand button on PromptInput no longer opens a separate modal. It now expands the textarea inline as an absolute overlay: minimum 200px tall, maximum 400px, click outside to collapse. The ExpandedPromptModal component and its CSS have been deleted.
Video models without first-frame support now require a prompt before the Generate button activates. Those models also default to always-expanded mode.
Video Generation: Multi-Endpoint Model Support
Seedance 2 uses multiple inference endpoints, and each endpoint accepts different inputs. The video generation UI now adapts to this.
What changes based on the selected model
- First frame input: Shown only when the model's endpoints support it. If no endpoint accepts both first frame and reference images, selecting one hides the other.
- Aspect ratio selector: Shown when the model has aspect ratio capability and no first frame is selected.
- Resolution picker: Hidden when only one resolution is available. Video resolutions like 720p stay lowercase (fixed an API rejection caused by uppercase normalization).
- Model-specific placeholder text: Seedance 2 shows: "No people in ref images, add clothing and location refs, and describe the model's look, shots and camera moves." Other models get a sound-aware default.
A frontend model access gate also shipped alongside this. It restricts model visibility by company type, email domain, and billing country. It is currently a soft-launch gate with placeholder config for Seedance 2. Backend enforcement is coming separately.
Team Invites: One Step Instead of Six
Previously, a new user receiving a team invite with no existing account had to leave the invite page, sign up, verify email, then return. Six steps.
The invite page now handles account creation directly. New users fill in first name, last name, and password without leaving the page. Google OAuth signup is also available inline. The page shows the reserved email masked, fetches the password policy from the API, and creates the account and accepts the invite in a single submission. This shipped March 31 via UWE-276.
Batch Generation: Reference Images Now Supported
Batch prompt inputs now support attaching reference images. The option appears only when the selected AI model supports image references. The reference URLs pass through to the batch request payload for both clothing and outfit item types.
The Asset Picker itself also got a design system migration this week. The old MUI Modal, Grow, and useMediaQuery components are replaced with DSModal, DSTabs (with sliding indicator and responsive overflow handling), DSInput, DSSkeleton, DSCheckbox, DSFileUpload, and DSChip. The clothing sidebar in the picker is now collapsible: hover to expand or collapse, with animation. Search clears on collapse if a query was active.
Fix: Results from Agent and API Now Show Up
The results page was silently filtering to only show generations with origin=web. Anyone who generated via the agent or API saw an empty page. That filter is removed. All results show regardless of origin.
Report Issue Button with Auto-Screenshot
The side nav now has a Report Issue button. Clicking it captures an html2canvas screenshot of the current viewport and attaches it automatically to the support form. Object URLs are revoked on close to prevent memory leaks, and a double-screenshot guard prevents duplicate uploads.
Platform Stability Fixes
- Safari bulk download fixed (again). A linter revert had undone the prior fix. The download_url casing is back to camelCase, and the fetch+blob approach replaces direct anchor-tag downloads, which Safari blocks on cross-origin S3 URLs.
- Auth session isolation. The OAuth callback now forces a full window.location.replace instead of SPA navigation, and AuthStore calls rootStore.clearAllData() on login and logout. Switching accounts in the same tab no longer leaks cached entities from a prior profile.
- Resolution dropdown smart fallback. Switching AI models now finds the closest smaller resolution rather than defaulting to the first option. A dependency-array bug that left the dropdown empty on async model load is fixed.
- Asset picker cache filter. The picker was returning incomplete results when a kind/clothing/avatar filter was active in the main results view. It now does a full refetch on open.
- Legacy Edit type filter fixed. Results with the old Enhance kind were invisible when filtering by Edit. Normalized to Edit throughout.
- Asset picker z-index fixed. Modal z-index tokens were below the sidebar (999) and nav (1200). Bumped to 1300 so the picker no longer renders behind the navigation.
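The "closest smaller resolution" fallback can be sketched as a small pure function (names and shapes are assumptions, not the production code): when the newly selected model does not offer the current resolution, pick the largest available resolution that does not exceed it, and only fall back to the smallest if nothing smaller exists.

```typescript
// Resolutions are lowercase strings like "480p", "720p", "2160p".
function fallbackResolution(current: string, available: string[]): string {
  const toPixels = (r: string) => parseInt(r, 10); // "720p" -> 720
  const sorted = [...available].sort((a, b) => toPixels(a) - toPixels(b));
  if (sorted.includes(current)) return current;
  const smaller = sorted.filter((r) => toPixels(r) <= toPixels(current));
  // Closest smaller resolution, else the smallest the model offers.
  return smaller.length > 0 ? smaller[smaller.length - 1] : sorted[0];
}
```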
All Platform Updates Are Live
Montage editor, prompt enhance, Seedance 2 support, team invite improvements, and all stability fixes are in Uwear Studio now.
Questions or feedback? Reach out. We read everything.