UGR | 2025 | Virtual Backgrounds - CHI-CityTech/BSP-graphic-imagery GitHub Wiki

@ Samuel Cheung

Background Archive: https://drive.google.com/drive/u/1/folders/1UhG4TP0GG4qqV0njPhHWcRm3oKod-fG-

This semester, we’ve been building on the work we started last year, taking what worked and pushing it further to create a more immersive experience for the audience. The goal has been to make the world of the story feel more real and connected, not just through the puppets themselves but through the space they move in. We focused on improving structure and adding more dynamic visual elements to bring scenes to life. A big part of that was using technology—specifically AI tools and editing software—to help us create consistent, animated backgrounds that feel like part of the same world. It’s still a hands-on process, but these tools have helped us shape a more complete and engaging environment for the story to unfold in.

Tools Used in This Process

https://www.scenario.com/ (Artistic Collaboration Tool)

Pika Labs via Discord (image-to-video generator)

https://vmake.ai/video-watermark-remover/upload (Video Watermark Remover)

1. Enhancing Visual Consistency with Scenario AI

For the animated backgrounds, Scenario AI was used to refine and develop the world designs needed for our production. Scenario proved invaluable because of its ability to maintain artistic consistency: unlike general-purpose generators such as ChatGPT, which produce each background independently, Scenario can be anchored to a specific art style, so every generated piece responds to its prompt while still adhering to that style. This ensures that all generated pieces blend seamlessly and maintain a cohesive visual identity within the same genre.

1.1 Adapting Visual Assets for Scene Versatility

Removing backgrounds and applying color corrections also played an important role in making the scenes more versatile—especially when switching between day and night settings. Some of the generated scenes ended up looking the opposite of what we originally intended, like a night scene appearing too bright or a daytime scene feeling too dark. But with these editing tools, we were able to adjust and correct them without starting over, which helped us keep the visual style consistent while still using the same assets across different times of day.
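As a rough illustration of this kind of day/night correction, the same brightness and saturation adjustments can be expressed as an ffmpeg `eq` filter. This is a hedged sketch of an equivalent command-line approach, not the editing workflow the team actually used; the filenames and correction values are placeholders, and ffmpeg must be installed for the command to run.

```python
def eq_filter(brightness=0.0, saturation=1.0, gamma=1.0):
    """Build an ffmpeg eq filter string for simple day/night correction."""
    return f"eq=brightness={brightness}:saturation={saturation}:gamma={gamma}"

def correction_cmd(src, dst, **eq_args):
    """Assemble the full ffmpeg command as an argument list."""
    return ["ffmpeg", "-i", src, "-vf", eq_filter(**eq_args), dst]

# Pull an over-bright "night" render down and mute its colors slightly.
cmd = correction_cmd("night_scene.mp4", "night_scene_fixed.mp4",
                     brightness=-0.15, saturation=0.85)
```

Because the adjustment is just a filter string, the same asset can be corrected in both directions (brightened for day, darkened for night) without regenerating it.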

2. Animating with Pika Labs

After selecting final images, Pika Labs was used to convert still images into short animated video clips. Through text prompts, we could control exactly what elements moved—such as tree branches or flowing water—allowing for selective animation that didn’t overwhelm the composition. Prompts were duplicated to generate multiple variations of each scene, giving us flexibility to choose the version that fit best.

3. Removing Logos for Cleaner Presentation

A major obstacle with Pika Labs was the default logo watermark stamped on every generated clip. To fix this, we used Remove Logo AI to clean up the animations. Although the tool only allowed five-second previews under the free plan, that was sufficient for our purposes, since most of our background loops were short. This simple fix made a big difference in visual polish.

4. Editing and Looping in Adobe Premiere Pro

The final animation clips were brought into Adobe Premiere Pro for editing. One issue we faced in previous semesters was glitchy looping, where frames would visibly jump or reset. To solve this, we carefully trimmed each clip just before those glitches occurred and prioritized backgrounds with limited but purposeful movement. This helped the loops run more smoothly and naturally.
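The trim described above was done by hand in Premiere Pro, but the same cut can be sketched as an ffmpeg command for anyone scripting the step. This is an assumed command-line equivalent, with placeholder filenames and times; note that fast seeking with stream copy snaps to the nearest keyframe rather than the exact frame.

```python
def trim_cmd(src, dst, start, end):
    """Cut [start, end) seconds out of src without re-encoding.
    Note: -ss before -i combined with -c copy seeks on keyframes."""
    return ["ffmpeg", "-ss", str(start), "-to", str(end),
            "-i", src, "-c", "copy", dst]

# Keep only the clean portion of the loop, ending just before the glitch.
cmd = trim_cmd("pond_loop.mp4", "pond_loop_trimmed.mp4", 0, 7.4)
```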

5. Clean Transitions Between Loops

To reduce the visibility of cuts, we applied clean transition techniques—like cross-dissolves and opacity fades—at loop points. These transitions softened the visual jump, making the repeat cycles feel less noticeable to the audience.

6. Layering with Overlays

To add depth, we blended the AI-generated backgrounds with stock video overlays sourced from free video libraries. These overlays introduced real-world textures, like smoke, mist, or subtle lighting shifts, which helped the animated backgrounds feel more dynamic and immersive—giving puppets a more grounded space to move within.
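The overlay compositing described above can likewise be sketched as an ffmpeg `blend` filter call; `screen` mode keeps only the bright parts of the overlay (smoke, mist, light leaks), which is a common choice for this kind of texture pass. This is an assumed equivalent of the Premiere layering step, with placeholder filenames; both inputs must share the same resolution for `blend` to work.

```python
def blend_cmd(base, overlay, dst, mode="screen", opacity=0.5):
    """Composite a stock texture overlay onto the background clip.
    'screen' mode brightens with the overlay's light areas only."""
    filt = f"[0:v][1:v]blend=all_mode={mode}:all_opacity={opacity}"
    return ["ffmpeg", "-i", base, "-i", overlay, "-filter_complex", filt, dst]

cmd = blend_cmd("village_night.mp4", "mist_overlay.mp4", "village_night_mist.mp4")
```

Lowering `opacity` keeps the texture subtle so it reads as atmosphere rather than a separate layer.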

7. Extended Duration for Better Flow

This semester, we extended background clip durations to five minutes, which allowed for longer, uninterrupted scenes before a loop was necessary. This gave performers more flexibility and helped maintain immersion during live sequences without having to worry about frequent restarts.
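Extending a short seamless loop to a five-minute clip can also be scripted: ffmpeg's `-stream_loop` option replays the input a fixed number of extra times, and `-t` caps the output length. As above, this is a hedged command-line sketch with placeholder filenames, not the team's actual Premiere workflow.

```python
import math

def extend_cmd(src, dst, clip_len, target=300.0):
    """Repeat a short loop until it covers ~5 minutes (300 s).
    -stream_loop N replays the input N extra times; -t caps the output."""
    extra_loops = math.ceil(target / clip_len) - 1
    return ["ffmpeg", "-stream_loop", str(extra_loops), "-i", src,
            "-t", str(target), "-c", "copy", dst]

# A clean 30-second loop repeated out to five minutes.
cmd = extend_cmd("river_loop.mp4", "river_5min.mp4", clip_len=30.0)
```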
