
Re-evaluating prompt makes ComfyUI crash #10676

@elemich

Description


Custom Node Testing

Your question

I'm on Colab free (12 GB RAM, 16 GB VRAM), with a fresh install at every run (models and persistent files are on Google Drive).
I'm using:

the Wan2.1 rapid GGUF model, about 7.5 GB
umt5-xxl-encoder-Q3_K_M.gguf (2.85 GB)
a LoRA (600 MB)
the Wan VAE (roughly 200-300 MB)

I'm making an image-to-video workflow. I can rerun the workflow cyclically many times with a plain prompt, but if the prompt needs to be re-evaluated (for example, if it contains {|} for a random choice), ComfyUI crashes.
In the terminal I have a very compact RAM/VRAM resource viewer: RAM is at 46% and VRAM at 74%.

Now, if I run main.py with no VRAM flags, ComfyUI crashes. With --highvram it shows a popup but does not crash, and in that case, if I use "Unload models and execution cache" from the Comfy menu and restart the workflow, after a short wait it runs again without error until the next re-evaluation of the prompt.

Is this a bug? How can I solve it? Would --highvram combined with an "unload models and execution cache" node fix the issue? How can I trigger "unload models and execution cache" from a node automatically after each generation?
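One way to trigger the same cleanup the menu button performs is to POST to the running server's /free endpoint; ComfyUI exposes this HTTP route, though the exact field names below are assumptions based on how the menu action calls it. A hedged sketch:

```python
import json
import urllib.request

# Default local ComfyUI address; adjust for your setup (assumption).
COMFY_URL = "http://127.0.0.1:8188"

def build_free_payload(unload_models: bool = True,
                       free_memory: bool = True) -> bytes:
    """Build the JSON body for ComfyUI's /free endpoint.

    Field names are assumed from the server API; verify against
    your ComfyUI version.
    """
    return json.dumps(
        {"unload_models": unload_models, "free_memory": free_memory}
    ).encode()

def free_comfy_memory(base_url: str = COMFY_URL) -> int:
    """POST to /free, asking the server to unload models and
    drop its execution cache; returns the HTTP status code."""
    req = urllib.request.Request(
        f"{base_url}/free",
        data=build_free_payload(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

A small script or custom node could call free_comfy_memory() after each generation, but note this only frees memory between runs; it would not prevent a spike during the re-encode itself.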

Can someone explain the precise problem to me?

Logs

Other

No response

Metadata

Labels

User Support: A user needs help with something, probably not a bug.
