Eval bug: b4547 doesn't see GPUs #11401
Comments
C:\Users\Admin\Documents\llamacpp>llama-cli --version
Everything works OK with CUDA 12.4 on b4525.
This is likely caused by the line breaks added to …
@anunknowperson Likely you don't have the CUDA drivers; you may have installed the NVIDIA drivers without CUDA. You could try running a mining program to check.
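As a sanity check independent of llama.cpp, the CUDA driver can be probed directly through the CUDA runtime API. This is a minimal sketch, assuming the CUDA toolkit is installed so it can be compiled with nvcc:

```cpp
// check_cuda.cpp - minimal CUDA driver sanity check (illustrative).
// Build: nvcc check_cuda.cpp -o check_cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess) {
        // A broken or missing driver typically fails here, e.g.
        // "CUDA driver version is insufficient for CUDA runtime version".
        printf("cudaGetDeviceCount failed: %s\n", cudaGetErrorString(err));
        return 1;
    }
    printf("CUDA devices visible: %d\n", count);
    for (int i = 0; i < count; i++) {
        cudaDeviceProp prop;
        if (cudaGetDeviceProperties(&prop, i) == cudaSuccess) {
            printf("  %d: %s\n", i, prop.name);
        }
    }
    return 0;
}
```

If this program lists the Tesla P40 and the 3090, the driver side is fine and the problem is in how the llama.cpp build loads its CUDA backend.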
Yes, this is the problem: https://github.com/ggerganov/llama.cpp/actions/runs/12956723171/job/36143642760#step:7:30
I think the line breaks on Windows have to be different than …
Should be fixed in the next builds. I didn't know that the line breaks are different for the various shells.
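For context on why shell-dependent line breaks can make backends disappear: if a build step writes a list of names with one convention (CRLF) and a consumer reads it expecting another (LF), each entry picks up a trailing carriage return, and lookups or file loads fail silently. A minimal C++ sketch of reading such a list defensively; the file layout and helper name are hypothetical and this is not the actual fix that landed:

```cpp
// read_lines.cpp - line-ending-tolerant list reading (illustrative).
#include <fstream>
#include <string>
#include <vector>

// std::getline splits on '\n'; on a file written with CRLF endings each
// entry keeps a trailing '\r', which corrupts any name compared later.
std::vector<std::string> read_lines(const std::string & path) {
    std::vector<std::string> lines;
    std::ifstream in(path);
    std::string line;
    while (std::getline(in, line)) {
        if (!line.empty() && line.back() == '\r') {
            line.pop_back(); // strip the CR left over from CRLF endings
        }
        lines.push_back(line);
    }
    return lines;
}
```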
Name and Version
C:\Users\Admin\Documents\llamacpp>llama-cli --version
version: 4547 (c5d9eff)
built with MSVC 19.29.30158.0 for
C:\Users\Admin\Documents\llamacpp>
Operating systems
Windows
GGML backends
CUDA
Hardware
Tesla P40 + 3090
Models
No response
Problem description & steps to reproduce
I tried both CUDA 11.7 and 12.4. I downloaded
llama-b4547-bin-win-cuda-cu12.4-x64.zip and
cudart-llama-bin-win-cu12.4-x64.zip and extracted them into the same folder.
C:\Users\Admin\Documents\llamacpp>llama-server --list-devices
Available devices:
C:\Users\Admin\Documents\llamacpp>
llama-server doesn't see any devices and doesn't offload layers to the GPU with -ngl.
It was working with previous versions. The latest koboldcpp also works (though it doesn't support R1 distilled models, which is why I'm trying to run them with llama.cpp).
CUDA_VISIBLE_DEVICES is not set; setting it doesn't help either.
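For anyone debugging this locally: --list-devices reports what ggml's backend registry discovered, and the same enumeration can be reproduced in a few lines. A minimal sketch against the public ggml-backend.h API, assuming a build of llama.cpp/ggml with dynamically loaded backends:

```cpp
// list_devices.cpp - enumerate ggml backend devices (illustrative).
// Link against ggml; with dynamic backends, the CUDA library must be
// found and loaded at runtime, which is exactly what fails here.
#include <cstdio>
#include "ggml-backend.h"

int main() {
    // Scans for and loads all available backend shared libraries
    // (e.g. ggml-cuda.dll on Windows).
    ggml_backend_load_all();

    size_t n = ggml_backend_dev_count();
    printf("Available devices: %zu\n", n);
    for (size_t i = 0; i < n; i++) {
        ggml_backend_dev_t dev = ggml_backend_dev_get(i);
        printf("  %s: %s\n",
               ggml_backend_dev_name(dev),
               ggml_backend_dev_description(dev));
    }
    return 0;
}
```

If ggml_backend_load_all() cannot locate or load the CUDA backend library (ggml-cuda.dll on Windows), the list comes back empty or CPU-only, which matches the empty output above.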
First Bad Commit
No response
Relevant log output