Unfortunately, the docker method failed. Searching for "stable diffusion docker with web ui" turns up instructions which say that
sudo docker compose --profile download up --build
downloads roughly 12 GB of data. (I had to run everything with a sudo prefix.)
Unfortunately, after 2 hours of downloading, running the Automatic1111 profile with
sudo docker compose --profile auto up --build
failed with
Error response from daemon: could not select device driver "nvidia" with capabilities: [[compute utility]]
So, probably I need --profile auto-cpu.
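For the GPU route, that "could not select device driver" error usually means Docker cannot see the GPU because the NVIDIA Container Toolkit is missing on the host. A sketch of the usual fix on Ubuntu-based systems (package and image names are the common ones, not verified on this machine; the CPU profile sidesteps this entirely):

```shell
# Install NVIDIA's container toolkit so Docker can pass the GPU through
# (assumes NVIDIA's apt repository is already configured on the host).
sudo apt-get install -y nvidia-container-toolkit
# Register the nvidia runtime with Docker and restart the daemon.
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
# Sanity check: should print the same table as nvidia-smi on the host.
sudo docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```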
But the auto-cpu profile gave an error,
File "/stable-diffusion-webui/webui.py", line 13, in <module>
etc
ImportError: cannot import name 'TypeIs' from 'typing_extensions' (/opt/conda/lib/python3.10/site-packages/typing_extensions.py)
Deleting the dir, cloning the repo and starting again meant downloading the 12 GB once more - and it failed again:
sudo docker compose --profile auto-cpu up --build
auto-cpu-1 | Mounted styles.csv
auto-cpu-1 | Mounted ui-config.json
auto-cpu-1 | mkdir: created directory '/data/config/auto/extensions'
auto-cpu-1 | Mounted extensions
auto-cpu-1 | Installing extension dependencies (if any)
auto-cpu-1 | Traceback (most recent call last):
auto-cpu-1 | File "/stable-diffusion-webui/webui.py", line 13, in <module>
auto-cpu-1 | initialize.imports()
auto-cpu-1 | File "/stable-diffusion-webui/modules/initialize.py", line 23, in imports
auto-cpu-1 | import gradio # noqa: F401
auto-cpu-1 | File "/opt/conda/lib/python3.10/site-packages/gradio/__init__.py", line 3, in <module>
auto-cpu-1 | import gradio.components as components
auto-cpu-1 | File "/opt/conda/lib/python3.10/site-packages/gradio/components/__init__.py", line 3, in <module>
auto-cpu-1 | from gradio.components.bar_plot import BarPlot
auto-cpu-1 | File "/opt/conda/lib/python3.10/site-packages/gradio/components/bar_plot.py", line 7, in <module>
auto-cpu-1 | import altair as alt
auto-cpu-1 | File "/opt/conda/lib/python3.10/site-packages/altair/__init__.py", line 649, in <module>
auto-cpu-1 | from altair.vegalite import *
auto-cpu-1 | File "/opt/conda/lib/python3.10/site-packages/altair/vegalite/__init__.py", line 2, in <module>
auto-cpu-1 | from .v5 import *
auto-cpu-1 | File "/opt/conda/lib/python3.10/site-packages/altair/vegalite/v5/__init__.py", line 2, in <module>
auto-cpu-1 | from altair.expr.core import datum
auto-cpu-1 | File "/opt/conda/lib/python3.10/site-packages/altair/expr/__init__.py", line 11, in <module>
auto-cpu-1 | from altair.expr.core import ConstExpression, FunctionExpression
auto-cpu-1 | File "/opt/conda/lib/python3.10/site-packages/altair/expr/core.py", line 6, in <module>
auto-cpu-1 | from altair.utils import SchemaBase
auto-cpu-1 | File "/opt/conda/lib/python3.10/site-packages/altair/utils/__init__.py", line 14, in <module>
auto-cpu-1 | from .plugin_registry import PluginRegistry
auto-cpu-1 | File "/opt/conda/lib/python3.10/site-packages/altair/utils/plugin_registry.py", line 13, in <module>
auto-cpu-1 | from typing_extensions import TypeIs
auto-cpu-1 | ImportError: cannot import name 'TypeIs' from 'typing_extensions' (/opt/conda/lib/python3.10/site-packages/typing_extensions.py)
auto-cpu-1 exited with code 1
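The root cause is visible in the last frames: altair imports TypeIs, which only exists in typing_extensions 4.10.0 and later (it implements PEP 742), while the conda environment baked into the image ships an older release. A small sketch of the version logic; the upgrade command in the comment is one plausible workaround, not the project's official fix:

```python
# TypeIs was added in typing_extensions 4.10.0, so any older pin makes
# "from typing_extensions import TypeIs" fail exactly as in the log above.
def has_typeis(te_version: str) -> bool:
    """Return True if this typing_extensions version should export TypeIs."""
    major, minor = (int(p) for p in te_version.split(".")[:2])
    return (major, minor) >= (4, 10)

print(has_typeis("4.9.0"))   # → False: would raise the ImportError above
print(has_typeis("4.12.2"))  # → True
# Inside the container, `pip install -U typing_extensions` would be the
# corresponding (untested) workaround.
```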
Then I thought I would try chaiNNer - https://chainner.app/download
gives a deb file, for ease of installation. It has a dependency manager also.
But the dependency manager does not install stable diffusion! So, trying stable-diffusion-webui natively instead,
(The current python version is 3.11.5 on this Linux Mint based on Ubuntu 24.04 - which may run into problems, since we're supposed to run on Python 3.10 with python3.10-venv.)
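Since the webui targets Python 3.10, one way around the 3.11.5 mismatch would be a dedicated 3.10 virtualenv. A sketch only - on an Ubuntu 24.04 base, the 3.10 packages are not in the default repos, so a PPA such as deadsnakes would typically be needed:

```shell
# Get a Python 3.10 interpreter plus venv support (package names assumed;
# on Ubuntu 24.04 these usually come from the deadsnakes PPA).
sudo add-apt-repository -y ppa:deadsnakes/ppa
sudo apt-get update && sudo apt-get install -y python3.10 python3.10-venv
# Create and activate an isolated environment for the webui.
python3.10 -m venv ~/sd-venv
source ~/sd-venv/bin/activate
python --version   # should report Python 3.10.x
```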
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB. GPU 0 has a total capacty of 3.94 GiB of which 31.50 MiB is free. Including non-PyTorch memory, this process has 3.73 GiB memory in use. Of the allocated memory 3.60 GiB is allocated by PyTorch, and 77.43 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Stable diffusion model failed to load
I can try the --lowvram flag, since we have the NVIDIA GTX 1060 with its limited VRAM -
export COMMANDLINE_ARGS="--lowvram"
./webui.sh
Success. It does a 512x512 generation in ~1 minute, and a 4x ESRGAN upscale resize in ~30 sec.
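Had --lowvram alone not been enough, the OOM message itself points at another knob: PYTORCH_CUDA_ALLOC_CONF with max_split_size_mb to limit fragmentation. A combination worth trying (the 64 MB value is illustrative, not a tested setting):

```shell
# Combine webui's low-VRAM mode with the allocator hint that the
# PyTorch error message suggests; max_split_size_mb=64 is a guess.
export COMMANDLINE_ARGS="--lowvram"
export PYTORCH_CUDA_ALLOC_CONF="max_split_size_mb:64"
./webui.sh
```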
This has a listing of how to make prompts.
My primary use case was upscaling. I took a picture which I had clicked with the mobile camera in its "50 megapixel" mode, which shows quite a bit of "softening" when viewed at 100% - then cropped it to 4096x4096, scaled it down to 512x512 (1k images were causing out-of-VRAM errors), and then scaled it up to 4096x4096. Interesting to see that the results for a jpg show lots of ringing, while the results for a png seem to make things sharper.
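The crop-and-resize preparation above can be sketched with Pillow (a synthetic image stands in for the actual photo; filenames are hypothetical):

```python
from PIL import Image

# Stand-in for the 4096x4096 crop from the phone's 50 MP shot.
crop = Image.new("RGB", (4096, 4096), "gray")

# Downscale to 512x512 - larger (1k) inputs ran out of VRAM on the 1060.
small = crop.resize((512, 512), Image.LANCZOS)

# Save both variants that were fed to the 4x upscaler:
small.save("input_512.png")              # lossless: upscale came out sharper
small.save("input_512.jpg", quality=90)  # lossy: upscale showed ringing
print(small.size)  # → (512, 512)
```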
A portion of jpg input on the left, with output on the right, processed as
scale down to 512 px, save as jpg, and then upscale to 4x - this shows ringing. Click on the image to view larger size.
A portion of jpg input on the left, with output on the right, processed as
scale down to 512 px, save as png, and then upscale to 4096 - this shows an interesting pointillistic effect, and has made edges of the image sharper - clearly seen in the foreground trees. Click on the image to view larger size.
I'll write a separate post on Easy Diffusion, since this post has already become quite long.