
[Issue]: 'OnnxRawPipeline' object is not callable #3352

Open · 2 tasks done · AdmiralTriggerHappy opened this issue Aug 1, 2024 · 4 comments
Labels: question (Further information is requested)

@AdmiralTriggerHappy

Issue Description

When setting up SD.Next using Olive/ONNX as per the instructions, I get 'OnnxRawPipeline' object is not callable.
It's also happening with another SD platform on my machine, so I suspect it's due to a change in one of the dependencies they both share, but I can't narrow it down.

Version Platform Description

Windows 11
Checked out the latest from Git today

16:30:25-344558 INFO Starting SD.Next
16:30:25-347832 INFO Logger: file="C:\Users\UsernameHere\sd\automatic\sdnext.log" level=DEBUG size=65 mode=create
16:30:25-349834 INFO Python version=3.10.6 platform=Windows bin="C:\Users\UsernameHere\sd\automatic\venv\Scripts\python.exe" venv="C:\Users\UsernameHere\sd\automatic\venv"
16:30:25-512603 INFO Version: app=sd.next updated=2024-07-24 hash=a874b27e branch=master url=https://github.com/vladmandic/automatic/tree/master ui=main
16:30:26-103683 INFO Platform: arch=AMD64 cpu=AMD64 Family 25 Model 33 Stepping 2, AuthenticAMD system=Windows release=Windows-10-10.0.22631-SP0 python=3.10.6
16:30:26-105683 DEBUG Setting environment tuning
16:30:26-106683 INFO HF cache folder: C:\Users\UsernameHere\.cache\huggingface\hub
16:30:26-107685 DEBUG Torch allocator: "garbage_collection_threshold:0.80,max_split_size_mb:512"
16:30:26-108683 DEBUG Torch overrides: cuda=False rocm=False ipex=False diml=False openvino=False
16:30:26-109684 DEBUG Torch allowed: cuda=True rocm=True ipex=True diml=True openvino=True
16:30:26-122549 INFO Using CPU-only Torch
16:30:26-288087 INFO Verifying requirements
16:30:26-293607 INFO Verifying packages
16:30:26-334580 DEBUG Repository update time: Thu Jul 25 06:16:33 2024
16:30:26-335581 INFO Startup: standard
16:30:26-336584 INFO Verifying submodules
16:30:28-374541 DEBUG Submodule: extensions-builtin/sd-extension-chainner / main
16:30:28-445303 DEBUG Submodule: extensions-builtin/sd-extension-system-info / main
16:30:28-516571 DEBUG Submodule: extensions-builtin/sd-webui-agent-scheduler / main
16:30:28-615484 DEBUG Git detached head detected: folder="extensions-builtin/sdnext-modernui" reattach=main
16:30:28-616483 DEBUG Submodule: extensions-builtin/sdnext-modernui / main
16:30:28-707030 DEBUG Submodule: extensions-builtin/stable-diffusion-webui-rembg / master
16:30:28-775338 DEBUG Submodule: modules/k-diffusion / master
16:30:28-870221 DEBUG Git detached head detected: folder="wiki" reattach=master
16:30:28-871221 DEBUG Submodule: wiki / master
16:30:28-912235 DEBUG Register paths
16:30:28-997725 DEBUG Installed packages: 191
16:30:28-999727 DEBUG Extensions all: ['Lora', 'sd-extension-chainner', 'sd-extension-system-info', 'sd-webui-agent-scheduler', 'sdnext-modernui', 'stable-diffusion-webui-rembg']
16:30:29-245228 DEBUG Running extension installer: C:\Users\UsernameHere\sd\automatic\extensions-builtin\sd-webui-agent-scheduler\install.py
16:30:29-694641 DEBUG Running extension installer: C:\Users\UsernameHere\sd\automatic\extensions-builtin\stable-diffusion-webui-rembg\install.py
16:30:30-040482 DEBUG Extensions all: []
16:30:30-041482 INFO Extensions enabled: ['Lora', 'sd-extension-chainner', 'sd-extension-system-info', 'sd-webui-agent-scheduler', 'sdnext-modernui',
'stable-diffusion-webui-rembg']
16:30:30-042996 INFO Verifying requirements
16:30:30-043996 DEBUG Setup complete without errors: 1722493830
16:30:30-046997 DEBUG Extension preload: {'extensions-builtin': 0.0, 'extensions': 0.0}
16:30:30-048995 DEBUG Starting module: <module 'webui' from 'C:\Users\UsernameHere\sd\automatic\webui.py'>
16:30:30-049995 INFO Command line args: ['--debug'] debug=True
16:30:30-051502 DEBUG Env flags: []
16:30:35-605268 INFO Load packages: {'torch': '2.4.0+cpu', 'diffusers': '0.29.1', 'gradio': '3.43.2'}
16:30:36-163068 DEBUG Read: file="config.json" json=32 bytes=1440 time=0.000
16:30:36-165074 INFO Engine: backend=Backend.DIFFUSERS compute=cpu device=cpu attention="Scaled-Dot-Product" mode=no_grad
16:30:36-166073 INFO Device:
16:30:36-167076 DEBUG Read: file="html\reference.json" json=45 bytes=25986 time=0.000
16:30:36-567590 DEBUG ONNX: version=1.18.1 provider=CPUExecutionProvider, available=['AzureExecutionProvider', 'CPUExecutionProvider']
16:30:36-779814 DEBUG Importing LDM
16:30:36-797863 DEBUG Entering start sequence
16:30:36-799869 DEBUG Initializing

Relevant log output

16:36:18-785816 ERROR    gradio call: TypeError
╭──────────────────────────────────────────────────────────────────────── Traceback (most recent call last) ─────────────────────────────────────────────────────────────────────────╮
│ C:\Users\UsernameHere\sd\automatic\modules\call_queue.py:31 in f                                                                                                                  │
│                                                                                                                                                                                    │
│   30 │   │   │   try:                                                                                                                                                              │
│ ❱ 31 │   │   │   │   res = func(*args, **kwargs)                                                                                                                                   │
│   32 │   │   │   │   progress.record_results(id_task, res)                                                                                                                         │
│                                                                                                                                                                                    │
│ C:\Users\UsernameHere\sd\automatic\modules\txt2img.py:91 in txt2img                                                                                                               │
│                                                                                                                                                                                    │
│   90 │   if processed is None:                                                                                                                                                     │
│ ❱ 91 │   │   processed = processing.process_images(p)                                                                                                                              │
│   92 │   p.close()                                                                                                                                                                 │
│                                                                                                                                                                                    │
│ C:\Users\UsernameHere\sd\automatic\modules\processing.py:191 in process_images                                                                                                    │
│                                                                                                                                                                                    │
│   190 │   │   │   with context_hypertile_vae(p), context_hypertile_unet(p):                                                                                                        │
│ ❱ 191 │   │   │   │   processed = process_images_inner(p)                                                                                                                          │
│   192                                                                                                                                                                              │
│                                                                                                                                                                                    │
│ C:\Users\UsernameHere\sd\automatic\modules\processing.py:312 in process_images_inner                                                                                              │
│                                                                                                                                                                                    │
│   311 │   │   │   │   │   from modules.processing_diffusers import process_diffusers                                                                                               │
│ ❱ 312 │   │   │   │   │   x_samples_ddim = process_diffusers(p)                                                                                                                    │
│   313 │   │   │   │   else:                                                                                                                                                        │
│                                                                                                                                                                                    │
│ C:\Users\UsernameHere\sd\automatic\modules\processing_diffusers.py:122 in process_diffusers                                                                                       │
│                                                                                                                                                                                    │
│   121 │   │   else:                                                                                                                                                                │
│ ❱ 122 │   │   │   output = shared.sd_model(**base_args)                                                                                                                            │
│   123 │   │   if isinstance(output, dict):                                                                                                                                         │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
TypeError: 'OnnxRawPipeline' object is not callable
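For anyone reading the traceback: processing_diffusers.py invokes shared.sd_model as a function, which only works if the loaded pipeline implements __call__. When an ONNX conversion fails partway, sd_model can be left holding a raw wrapper object instead of a finished pipeline, and calling it raises exactly this TypeError. A minimal sketch of the failure mode (the class body below is illustrative, not SD.Next's actual implementation):

# Illustrative stand-in, not SD.Next's actual OnnxRawPipeline class.
class OnnxRawPipeline:
    """A wrapper that holds a model path but defines no __call__."""
    def __init__(self, path):
        self.path = path

sd_model = OnnxRawPipeline("models/ONNX/cache/some-model")
try:
    sd_model(prompt="a photo of a cat")  # mirrors shared.sd_model(**base_args)
except TypeError as err:
    print(err)  # 'OnnxRawPipeline' object is not callable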

Backend: Diffusers
UI: Standard
Branch: Master
Model: StableDiffusion 1.5

Acknowledgements

  • I have read the above and searched for existing issues
  • I confirm that this is classified correctly and it's not an extension issue
@timhagen

I had the same issue. The text encoder and VAE appeared to compile, but it hung on the model itself. After a restart, I received the above message.

@vladmandic (Owner)

cc @lshqqytiger

@lshqqytiger lshqqytiger self-assigned this Aug 28, 2024
@lshqqytiger (Collaborator) commented Aug 28, 2024

This occurs when a broken cache remains under models/ONNX/cache.
A full log of the previous conversion (or optimization) is needed to figure out why it failed.
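If a stale cache is the culprit, one way to recover is to delete the broken conversion output so the next run regenerates it from scratch. A minimal sketch, assuming the models/ONNX/cache location mentioned above, relative to the SD.Next install directory (adjust the path for your setup):

import shutil
from pathlib import Path

# Assumed default cache location from the comment above; adjust as needed.
cache_dir = Path("models") / "ONNX" / "cache"

if cache_dir.exists():
    shutil.rmtree(cache_dir)  # remove the broken conversion cache
    print(f"Removed {cache_dir}; the next run will re-convert the model")
else:
    print(f"No cache found at {cache_dir}")

Re-running the Olive/ONNX conversion afterward should either succeed or produce the full failure log requested above.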

@vladmandic vladmandic added the question Further information is requested label Aug 29, 2024
@vladmandic (Owner)

@AdmiralTriggerHappy what is the status of this issue now that lshqqytiger has provided an update?
