Open-source large language models, run locally on your CPU and nearly any GPU-Slackware
Finally managed to build Qt6, so I was testing GPT4All.SlackBuild, only to find it's not one that runs in the normal way. It uses curl/wget to download dependencies from sites I can't reach that way, namely GitHub (for a few months I, and many other users/regions, have been blocked from accessing GitHub via traceroute, 'git clone', and curl/wget, and can only reach it with major web browsers like Chrome, Firefox, and Safari). All dependencies should be listed on the proper line of a properly done GPT4All.info so people can fetch them however they can, and then run the script without it downloading anything at all.
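For illustration, an SBo-style .info file puts every remote file on the DOWNLOAD line (extra sources go there too, space-separated), so a user can fetch everything by hand before running the SlackBuild. A hypothetical sketch, with all values illustrative rather than taken from any real GPT4All.info:

```shell
# Hypothetical SBo-style gpt4all.info -- every value here is illustrative.
# .info files are plain shell variable assignments, so they can be sourced.
PRGNAM="gpt4all"
VERSION="2.5.4"
HOMEPAGE="https://gpt4all.io/"
DOWNLOAD="https://github.com/nomic-ai/gpt4all/archive/v2.5.4/gpt4all-2.5.4.tar.gz"
MD5SUM="replace-with-real-md5sum"
REQUIRES="qt6"
MAINTAINER="someone"
EMAIL="someone@example.com"
```

With everything on DOWNLOAD, a user can pre-fetch the sources however they are able (even with a web browser), drop them next to the SlackBuild, and the build script itself never needs network access.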
I don't know which is the normal way to download a source file from GitHub and build it, and honestly I don't care if there is a "normal way."
sbopkg
slpkg
slapt-get
slackrepo
slackpkg+
and all the other package managers use wget, curl, lftp, etc.
BTW: the way slackbuilds.org builds packages with package.info files is not "the normal way", it's the SBo way... It works fine for SBo, but it's not a mandatory way to build packages in Slackware.
In other words, if you don't like my SlackBuild, don't use it, or take it, remove my name, and make it better, as I mentioned in an older post...
I didn't say there's a normal SlackBuild download method; rather, there should be no downloading at all (downloading isn't a step within building). And it's not just SlackBuilds.org (SBo) that uses package.info files: Slackware team member RLWorkman does, and so do many other repositories.
It's not that I don't like GPT4All.SlackBuild; I just wasn't sure whether I could use the same URLs or where to put the dependencies. I tried again and was surprised GitHub allowed wget this time (it almost never does anymore), so I built it!
One thing could be improved: it installs a directory to /usr/local/bin, which is meant for single binaries; directories holding binaries are supposed to go to /usr/local/lib64 (or lib for 32-bit), or perhaps somewhere else such as /usr/local/share or a similar location under /var.
I started reading the documentation but couldn't find whether there's a command-line version to start testing with, or whether I have to go through the whole Qt graphical project building/configuring steps, which I plan to try tomorrow.
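One way a SlackBuild could arrange that (just a sketch; the "demo-pkg" staging path and "gpt4all-chat" names are hypothetical, not what the current build installs): put the program tree under /usr/local/lib64 and expose only a symlink (or tiny wrapper) in /usr/local/bin:

```shell
# Sketch: stage a package tree where bin/ holds only a launcher symlink.
# "demo-pkg" and the "gpt4all" directory name are illustrative.
PKG=./demo-pkg
mkdir -p "$PKG/usr/local/lib64/gpt4all" "$PKG/usr/local/bin"
: > "$PKG/usr/local/lib64/gpt4all/chat"        # stand-in for the real binary
# Relative symlink so it still resolves after the package is installed to /
ln -sf ../lib64/gpt4all/chat "$PKG/usr/local/bin/gpt4all-chat"
```

That keeps /usr/local/bin flat (single entries only) while the application's private files live together under lib64.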
Thanks for making this SlackBuild!
There is a CLI script, but you have to install some Python packages via pip3. It's not exactly like the Qt6 GUI, but it's still good. It's Christmas here and I'm away from my PC; when I'm back I'll upload documentation for the CLI. You can see a demo of how the CLI looks here: https://youtu.be/A9oB2Bhpg_E
It didn't appear in my X menu, but I found the .desktop file, so I ran the GUI version (I don't need the command-line version, but it would be nice).
There are many models one can use commercially, but most of the others say you can't: does that just mean you can't sell them (which I don't plan to), or that you can't even use the output/results commercially?
I have used sbopkg and slpkg for SBo, though not very often since I prefer to build my own packages, but they both do the job. sbopkg is exclusively for SBo; slpkg defaults to SBo (or ponce's repository) and also supports several other repositories.
As for slackpkg+, it does not build packages from SBo. It does have the ability to search SBo (and ponce's repository) for builds, though.
For the CLI version there are several HOWTOs.
What I prefer is this:
Code:
#!/bin/bash
# rizitis 2023
#
# THIS SOFTWARE IS PROVIDED BY THE AUTHOR ''AS IS'' AND ANY EXPRESS OR IMPLIED
# WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
# MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO
# EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
# OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
# WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
# OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
# ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
cd "$(dirname "$0")" || exit ; CWD=$(pwd)
## Requires: gpt4all and typer (Python packages) plus app.py (the GPT4All CLI)
python3 -m pip install --user --upgrade gpt4all typer
wget -c https://raw.githubusercontent.com/nomic-ai/gpt4all/main/gpt4all-bindings/cli/app.py
### ^^^ COMMENT OUT THESE 2 LINES AFTER THE FIRST RUN ###
# This is the path where your .gguf models are stored. Edit it for your needs...
DIR="/Path/Of/Models"
# Here you select the model you want to start
select MODEL in "$DIR"/*.gguf; do break; done
python3 "$CWD"/app.py repl --model "$MODEL" || exit
wait
cd "$DIR" || exit
rm -f ./*.chat
clear
# Print "Slackware" in binary, one character at a time, then a smiley
text="01010011 01101100 01100001 01100011 01101011 01110111 01100001 01110010 01100101"
smiley=";)"
color="\e[1;32m"
for ((i=0; i<${#text}; i++)); do
    echo -n -e "$color${text:$i:1}"
    sleep 0.05
done
echo ""
echo -e "\e[1;33m$smiley\e[0m"
Prepare:
1) Download this script anywhere you like (a folder where you can set execute permission); I have it in /usr/local/bin/.
2) Give the script a name, say gpt4all-slackware-cli, and chmod +x /usr/local/bin/gpt4all-slackware-cli
3) Create an alias in .bashrc: alias ai='bash /usr/local/bin/gpt4all-slackware-cli'
4) Requires: gpt4all, typer, and app.py (the script will download and install them the first time you run it, BUT after that you must comment out those 2 lines).
Run:
Open a terminal and run the command ai.
TIPS:
If you have the Qt6 GPT4All GUI installed, you can use its models directory as $DIR (the path holding your .gguf models).
If you don't have it, and you don't have any model on your system at all, a nice light model that also runs on an old CPU is this:
I see that you call it by a stable/decimal version number while actually fetching the latest git testing/commit-hash version, which may not be correct: sometimes developers bump the decimal version number in a commit but don't release that as a decimal version yet.
The previous version built easily without git clone...
In the new version they added submodules within submodules and made it more complicated to build from source...
I built from the main branch at the time they released the new version, but that's not the right way for a stable/decimal version.
I like that it gets git versions as long as we know those are stable, but until you try new git versions you might not know... I tried building the version you listed above (though I had to wget it (wget https://github.com/nomic-ai/gpt4all/...973a83b.tar.gz) instead of cloning: GitHub blocks git and often even curl/wget for me and many others), but I got the following errors.
Code:
-- Found Vulkan: /usr/lib64/libvulkan.so (found version "1.3.268") found components: glslc glslangValidator
CMake Warning at /tmp/rtz/gpt4all/gpt4all-backend/llama.cpp.cmake:307 (message):
Kompute not found
Call Stack (most recent call first):
/tmp/rtz/gpt4all/gpt4all-backend/CMakeLists.txt:46 (include)
-- CMAKE_SYSTEM_PROCESSOR: x86_64
-- x86 detected
-- Configuring ggml implementation target llama-mainline-default in /tmp/rtz/gpt4all/gpt4all-backend/llama.cpp-mainline
-- x86 detected
-- Configuring model implementation target llamamodel-mainline-default
-- Configuring model implementation target gptj-default
-- Configuring model implementation target bert-default
-- Configuring ggml implementation target llama-mainline-avxonly in /tmp/rtz/gpt4all/gpt4all-backend/llama.cpp-mainline
-- x86 detected
-- Configuring model implementation target llamamodel-mainline-avxonly
-- Configuring model implementation target gptj-avxonly
-- Configuring model implementation target bert-avxonly
CMake Warning (dev) at /usr/lib64/cmake/Qt6Core/Qt6CoreMacros.cmake:2768 (message):
Qt policy QTP0001 is not set: ':/qt/qml/' is the default resource prefix
for QML modules. Check https://doc.qt.io/qt-6/qt-cmake-policy-qtp0001.html
for policy details. Use the qt_policy command to set the policy and
suppress this warning.
Call Stack (most recent call first):
/usr/lib64/cmake/Qt6Qml/Qt6QmlMacros.cmake:468 (__qt_internal_setup_policy)
/usr/lib64/cmake/Qt6Qml/Qt6QmlMacros.cmake:716 (qt6_add_qml_module)
CMakeLists.txt:92 (qt_add_qml_module)
This warning is for project developers. Use -Wno-dev to suppress it.
-- Configuring done (2.8s)
CMake Error at /tmp/rtz/gpt4all/gpt4all-backend/llama.cpp.cmake:592 (add_library):
Cannot find source file:
llama.cpp-mainline/ggml.c
Tried extensions .c .C .c++ .cc .cpp .cxx .cu .mpp .m .M .mm .ixx .cppm
.ccm .cxxm .c++m .h .hh .h++ .hm .hpp .hxx .in .txx .f .F .for .f77 .f90
.f95 .f03 .hip .ispc
Call Stack (most recent call first):
/tmp/rtz/gpt4all/gpt4all-backend/CMakeLists.txt:75 (include_ggml)
CMake Error at /tmp/rtz/gpt4all/gpt4all-backend/llama.cpp.cmake:623 (add_library):
Cannot find source file:
llama.cpp-mainline/llama.cpp
Tried extensions .c .C .c++ .cc .cpp .cxx .cu .mpp .m .M .mm .ixx .cppm
.ccm .cxxm .c++m .h .hh .h++ .hm .hpp .hxx .in .txx .f .F .for .f77 .f90
.f95 .f03 .hip .ispc
Call Stack (most recent call first):
/tmp/rtz/gpt4all/gpt4all-backend/CMakeLists.txt:75 (include_ggml)
-- Generating done (0.0s)
CMake Warning:
Manually-specified variables were not used by the project:
LIB_SUFFIX
CMake Generate step failed. Build files cannot be regenerated correctly.
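Those "Cannot find source file" errors are consistent with a tree unpacked from a GitHub tarball: tarballs don't include git submodules, so gpt4all-backend/llama.cpp-mainline/ ends up empty. A small sketch (the checked path comes from the error log above; the clone commands in the comments assume git access to GitHub actually works, which isn't a given here):

```shell
# If you can clone, submodules come along with:
#   git clone --recurse-submodules https://github.com/nomic-ai/gpt4all.git
#   (or, inside an existing checkout: git submodule update --init --recursive)
# A tarball has no .git directory, so submodule sources must be fetched
# separately and unpacked into place. This helper reports which case applies:
check_gpt4all_tree() {
    if [ -f "$1/gpt4all-backend/llama.cpp-mainline/ggml.c" ]; then
        echo "submodule sources present"
    elif [ -d "$1/.git" ]; then
        echo "run: git submodule update --init --recursive"
    else
        echo "tarball without submodules - fetch submodule sources separately"
    fi
}
```

Running it against the unpacked tarball before invoking cmake would flag the missing llama.cpp sources instead of failing at the generate step.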