README.md: 6 additions & 1 deletion
# Llama-cpp-python custom pre-built wheels

All artifacts are uploaded as binaries to the release tagged v0.0.1.
- Current pre-built wheels do not support the Tesla V100 because of the CMAKE args used in the llama-cpp-python CUDA wheel release process. The uploaded artifacts override those CMAKE args to build CUDA-compatible pre-built wheels (ref: git@github.com:abetlen/llama-cpp-python.git).
- Also aims to fix a musl Linux bug in the llama-cpp-python CPU pre-built wheel.
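As a rough sketch of the kind of override described above (the exact flags used in this repo's workflow are not shown here, so treat the values as assumptions): llama-cpp-python reads a `CMAKE_ARGS` environment variable at build time, and the Tesla V100 is CUDA compute capability 7.0, so a source build targeting it might look like:

```shell
# Hypothetical CMake override for a V100-compatible build.
# CMAKE_CUDA_ARCHITECTURES=70 targets compute capability 7.0 (Tesla V100).
export CMAKE_ARGS="-DGGML_CUDA=on -DCMAKE_CUDA_ARCHITECTURES=70"

# Force a from-source build instead of the upstream pre-built wheel.
pip install llama-cpp-python --no-binary llama-cpp-python
```

The pre-built wheels produced by this repo exist precisely so end users do not have to run this build themselves.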
# Instructions for a new CUDA release
- Run `git submodule update --remote --merge` to update to the latest llama_cpp version, then commit and push the change
- Update the workflow to choose your preferred CUDA/Python/OS versions
- Create a new release with a release description that includes the latest llama_cpp version, for posterity
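The steps above could be scripted roughly as follows. This is a sketch, not the repo's actual release script: the submodule path `llama_cpp`, the tag `v0.0.2`, and the use of the GitHub CLI (`gh`) are all assumptions.

```shell
# 1. Bump the llama_cpp submodule to its latest upstream commit.
git submodule update --remote --merge
git commit -am "Bump llama_cpp submodule"
git push

# 2. (Manual step) edit the workflow file to pick CUDA/Python/OS versions.

# 3. Record the bundled llama_cpp version in the release notes for posterity.
#    Assumes the submodule lives at ./llama_cpp and has upstream tags.
LLAMA_CPP_VERSION=$(git -C llama_cpp describe --tags)
gh release create v0.0.2 --notes "Built against llama_cpp ${LLAMA_CPP_VERSION}"
```

Creating the release should then trigger the workflow that builds and uploads the wheel artifacts.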