Extending Text2Video-Zero for Multi-ControlNet
dc.contributor.advisor | Shi, Humphrey | |
dc.contributor.advisor | Mossberg, Barbara | |
dc.contributor.author | Backen, Ben | |
dc.date.accessioned | 2023-08-18T15:48:20Z | |
dc.date.available | 2023-08-18T15:48:20Z | |
dc.date.issued | 2023 | |
dc.description | 15 pages | en_US |
dc.description.abstract | This paper presents an extension to the Text2Video-Zero (T2V0) generative model, improving video synthesis from text and video inputs. The project enhances the functionality and accessibility of T2V0 by integrating Stable Diffusion's (SD) support for multiple ControlNets, implementing frame-wise masking for selective ControlNet application, and introducing memory optimizations that allow the model to run on consumer-grade hardware. The paper also provides a high-level overview of SD, explores experimental features, and offers practical tips for generating videos with these tools. Additionally, we include a demonstration video showcasing T2V0 with Multi-ControlNet, which highlights the early potential of text-to-video models for storytelling. Ultimately, the study strives to expand the capabilities and accessibility of T2V0, increasing users' control over generated outputs while upholding the democratic principles of open-source AI. | en_US |
dc.identifier.orcid | 0009-0005-0548-7369 | |
dc.identifier.uri | https://hdl.handle.net/1794/28647 | |
dc.language.iso | en_US | |
dc.publisher | University of Oregon | |
dc.rights | CC BY-NC-ND 4.0 | |
dc.subject | text-to-video | en_US |
dc.subject | Stable Diffusion | en_US |
dc.subject | ControlNet | en_US |
dc.subject | machine learning | en_US |
dc.subject | generative models | en_US |
dc.title | Extending Text2Video-Zero for Multi-ControlNet | |
dc.type | Thesis/Dissertation |
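Since the record above only summarizes the contributions, a brief sketch may help readers picture them. The snippet below uses Hugging Face diffusers' Multi-ControlNet interface (passing a list of ControlNets to a Stable Diffusion pipeline) together with two of the library's standard memory optimizations. The model IDs, image file names, prompt, and the apply_frame_mask helper are illustrative assumptions; in particular, the frame-masking function is only a hypothetical rendering of the idea the abstract describes, not the thesis's actual code.

```python
# Sketch of Multi-ControlNet with Stable Diffusion via Hugging Face diffusers.
# Assumptions: model IDs, conditioning images, and apply_frame_mask are
# illustrative, not the thesis's implementation.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# Two ControlNets conditioned on different signals (pose and edges).
controlnets = [
    ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
    ),
    ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
    ),
]

pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnets,  # a list enables diffusers' Multi-ControlNet path
    torch_dtype=torch.float16,
)

# Memory optimizations that make consumer GPUs viable.
pipe.enable_model_cpu_offload()  # keep idle submodules in CPU RAM
pipe.enable_attention_slicing()  # trade speed for lower peak VRAM

pose_image = load_image("pose.png")    # placeholder conditioning inputs
canny_image = load_image("canny.png")

result = pipe(
    "an astronaut dancing on the moon, cinematic lighting",
    image=[pose_image, canny_image],
    controlnet_conditioning_scale=[1.0, 0.7],  # per-ControlNet strength
    num_inference_steps=20,
).images[0]
result.save("frame.png")


# Hypothetical illustration of frame-wise masking: in a frame-batched video
# pipeline such as T2V0, each frame occupies one batch slot, so ControlNet
# residuals can be zeroed per frame before they are added to the UNet.
def apply_frame_mask(residuals, frame_mask):
    """Scale each residual tensor (shaped [num_frames, ...]) by a 0/1 mask."""
    scale = frame_mask.view(-1, *([1] * (residuals[0].ndim - 1)))
    return [r * scale.to(r.device, r.dtype) for r in residuals]
```

Under these assumptions, a mask like torch.tensor([1.0 if 8 <= i < 24 else 0.0 for i in range(32)]) would let a ControlNet steer only frames 8 through 23 of a 32-frame clip while the remaining frames are generated unconstrained, which is one plausible reading of "selective ControlNet application" in the abstract.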