LinuxQuestions.org
LinuxQuestions.org > Forums > Linux Forums > Linux - Distributions > Slackware
Slackware This Forum is for the discussion of Slackware Linux.

Old 12-25-2023, 08:39 AM   #16
dchmelik
Senior Member
 
Registered: Nov 2008
Location: USA
Distribution: Slackware, FreeBSD, Illumos, NetBSD, DragonflyBSD, Plan9, Inferno, OpenBSD, FreeDOS, HURD
Posts: 1,075



Finally managed to build Qt6, so I was testing GPT4All.SlackBuild, only to find it's not one that runs in the normal way. It uses curl/wget to download dependencies from sites I can't reach that way, namely GitHub (for a few months I and many other users/networks have been blocked from accessing GitHub via traceroute, 'git clone', and curl/wget, and can only access it with major web browsers like Chrome, Firefox, or Safari). All dependencies should be listed on the appropriate line of a properly done GPT4All.info so people can fetch them however they can, then run the script without it downloading anything at all.
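For reference, the SBo-style .info format mentioned here lists dependencies on its REQUIRES line. A purely hypothetical sketch of what a GPT4All.info could look like (every value below is illustrative, not taken from any real file):

```shell
# Hypothetical GPT4All.info sketch in the SlackBuilds.org format.
# DOWNLOAD would list every source tarball (including any pinned
# submodule snapshots), so nothing needs fetching at build time.
PRGNAM="gpt4all"
VERSION="2.6.1"
HOMEPAGE="https://gpt4all.io/"
DOWNLOAD="https://github.com/nomic-ai/gpt4all/archive/v2.6.1/gpt4all-2.6.1.tar.gz"
MD5SUM=""
DOWNLOAD_x86_64=""
MD5SUM_x86_64=""
REQUIRES="qt6"
MAINTAINER="(maintainer name)"
EMAIL="(maintainer email)"
```

With the sources listed this way, a user who can't run wget against GitHub can fetch them in a browser and drop them next to the script.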

Last edited by dchmelik; 12-25-2023 at 08:41 AM.
 
Old 12-25-2023, 10:01 AM   #17
rizitis
Member
 
Registered: Mar 2009
Location: Greece,Crete
Distribution: Slackware64-current, Slint
Posts: 686

Original Poster
Blog Entries: 1

I don't know which is the normal way to download a source file from GitHub and build it, and honestly I don't care if there is a "normal way."
sbopkg
slpkg
slapt-get
slackrepo
slackpkg+
and all the other package managers use wget, curl, lftp, etc.
BTW: the way slackbuilds.org builds packages with package.info files is not "the normal way"; it's the SBo way... It works fine for SBo, but it's not a required way to build packages in Slackware.

In other words, if you don't like my SlackBuild, don't use it, or take it, remove my name, and make it better, as I mentioned in an older post...
 
Old 12-25-2023, 10:42 AM   #18
dchmelik
Senior Member
 
Registered: Nov 2008
Location: USA
Distribution: Slackware, FreeBSD, Illumos, NetBSD, DragonflyBSD, Plan9, Inferno, OpenBSD, FreeDOS, HURD
Posts: 1,075


I didn't say there's a normal SlackBuild download method; rather, a normal SlackBuild doesn't download at all (downloading isn't a step within building). Not just SlackBuilds.org (SBo) uses package.info files; Slackware team member RLWorkman and some/many other repositories do too.
        It's not that I don't like GPT4All.SlackBuild; I just wasn't sure if I could use the same URLs or where to put dependencies. I tried again and was surprised GitHub allowed wget this time (it almost never does anymore), so I built it!
        One thing could be improved: it installs a directory to /usr/local/bin, which is for single binaries; directories holding binaries are supposed to go in /usr/local/lib64 (or lib for 32-bit), or maybe somewhere else like /usr/local/share or a similar place under /var.
        I started reading the documentation but couldn't find whether there's a command-line version to start testing with, or whether I have to go through the whole Qt graphical project building/configuring steps, which I plan to try tomorrow.
Thanks for making this SlackBuild!

Last edited by dchmelik; 12-25-2023 at 10:45 AM.
 
Old 12-25-2023, 12:05 PM   #19
rizitis
Member
 
Registered: Mar 2009
Location: Greece,Crete
Distribution: Slackware64-current, Slint
Posts: 686

Original Poster
Blog Entries: 1

There is a CLI script, but you have to install some Python stuff via pip3. It's not exactly like the Qt6 GUI, but it's still good. It's Christmas here and I'm away from my PC; when I'm back I'll upload documentation for the CLI stuff. You can see a demo of how the CLI looks here: https://youtu.be/A9oB2Bhpg_E
 
1 member found this post helpful.
Old 12-25-2023, 09:34 PM   #20
dchmelik
Senior Member
 
Registered: Nov 2008
Location: USA
Distribution: Slackware, FreeBSD, Illumos, NetBSD, DragonflyBSD, Plan9, Inferno, OpenBSD, FreeDOS, HURD
Posts: 1,075

It didn't appear in my X menu, but I found the .desktop file, so I ran the GUI version (I don't need the command-line version, but it would be nice).
        There are many models one can use commercially, but most/others say you can't: does that just mean you can't sell them (which I don't plan to), or that you can't even use the output/results commercially?
 
Old 12-25-2023, 11:34 PM   #21
chrisretusn
Senior Member
 
Registered: Dec 2005
Location: Philippines
Distribution: Slackware64-current
Posts: 2,980

Quote:
Originally Posted by rizitis View Post
I dont know which is the normal way to download a source file from github and build it, and honestly I dont care if there is a "normal way."
sbopkg
slpkg
slapt-get
slackrepo
slackpkg+
and all others package manager using (wget,curl,lftp etc...)
BTW: the way that slackbuilds.org use to build packages with package.info files is not "the normal way", is the SBo way... It works fine for SBo but its not "a must way" to build packages in Slackware.

In other words, if you dont like my slackbuild dont use it, or take it remove my name and make it better as I mentioned in older post...
I have used sbopkg and slpkg for SBo, though not very often; I prefer to build my own, but they both do the job. sbopkg is exclusively for SBo; slpkg has SBo (or ponce) as a default, as well as several other repositories.

As for slackpkg+, it does not work with SBo for packages. It does have the ability to search SBo (ponce's too) for builds, though.
 
Old 01-12-2024, 03:18 PM   #22
rizitis
Member
 
Registered: Mar 2009
Location: Greece,Crete
Distribution: Slackware64-current, Slint
Posts: 686

Original Poster
Blog Entries: 1

A new version is out.
If you upgrade an older version to the new one, after the package build finishes just do the old-time classic:
Code:
upgradepkg /tmp/gpt4all-2.6.1-x86_64-1_rtz.txz
This way your old models and LocalDocs will stay in place.
 
1 member found this post helpful.
Old 01-14-2024, 09:26 AM   #23
rizitis
Member
 
Registered: Mar 2009
Location: Greece,Crete
Distribution: Slackware64-current, Slint
Posts: 686

Original Poster
Blog Entries: 1

For the CLI version there are several HOWTOs.
What I prefer is this:

Code:
#!/bin/bash

# rizitis 2023
#
# THIS SOFTWARE IS PROVIDED BY THE AUTHOR ''AS IS'' AND ANY EXPRESS OR IMPLIED
# WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
# MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO
# EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
# OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
# WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
# OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
# ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

cd "$(dirname "$0")" ; CWD=$(pwd)

## requires: gpt4all typer app.py

 python3 -m pip install --user --upgrade gpt4all typer
 wget -c https://raw.githubusercontent.com/nomic-ai/gpt4all/main/gpt4all-bindings/cli/app.py
### ^^^ COMMENT THESE 2 LINES AFTER FIRST RUN ###

# This is the path where your .gguf models are stored. Edit it for your needs...
DIR="/Path/Of/Models"


# Here you select the model you want to start
select MODEL in $(ls "$DIR" | grep '\.gguf$'); do break; done
python3 "$CWD"/app.py repl --model "$DIR"/"$MODEL" || exit
wait
cd $DIR || exit 
rm -f ./*.chat
clear
text="01010011 01101100 01100001 01100011 01101011 01110111 01100001 01110010 01100101"
smiley=";)"

color="\e[1;32m"

for ((i=0; i<${#text}; i++)); do
    echo -n -e "$color${text:$i:1}"
    sleep 0.05
done
echo ""
echo -e "\e[1;33m$smiley\e[0m"
Prepare:
1) Download this script anywhere you like; I have it in /usr/local/bin/.
2) Give the script a name, say gpt4all-cli-slackware, and chmod +x /usr/local/bin/gpt4all-cli-slackware.
3) Create an alias in .bashrc: alias ai='bash /usr/local/bin/gpt4all-cli-slackware'
4) It requires gpt4all, typer, and app.py (the script will download and install them the first time you run it, BUT after that you must comment out those 2 lines).

Run:
Open a terminal and run the command ai.

TIPS:
If you have the Qt6 GUI gpt4all installed, you can point $DIR at its models directory (/Path/Of/Models).
If you don't have it and don't have any model on your system at all, a nice light model that runs even on an old CPU is this:
Code:
https://huggingface.co/TheBloke/Mistral-7B-OpenOrca-GGUF/blob/main/mistral-7b-openorca.Q4_0.gguf
Just don't forget to set the path where you downloaded it in gpt4all-cli-slackware.
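One caveat about the script above: the `select MODEL in $(ls $DIR | grep .gguf)` line breaks on model filenames that contain spaces, because it word-splits the output of ls. A glob-based sketch (DIR here is a hypothetical default, not the script's real path) avoids that:

```shell
#!/bin/bash
# Sketch: collect .gguf models with a glob instead of parsing ls,
# so filenames with spaces survive intact.
DIR="${DIR:-$HOME/models}"   # hypothetical default; same role as DIR above
shopt -s nullglob            # empty glob -> empty array, not a literal pattern
models=("$DIR"/*.gguf)
echo "found ${#models[@]} model(s) in $DIR"
# The script's select line could then become:
#   select MODEL in "${models[@]}"; do break; done
# Each entry is already the full path, so later it would be used as:
#   python3 "$CWD/app.py" repl --model "$MODEL"
```

The array holds full paths, so no later quoting of "$DIR"/"$MODEL" is needed.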

Last edited by rizitis; 01-14-2024 at 09:29 AM.
 
Old 01-21-2024, 11:49 PM   #24
dchmelik
Senior Member
 
Registered: Nov 2008
Location: USA
Distribution: Slackware, FreeBSD, Illumos, NetBSD, DragonflyBSD, Plan9, Inferno, OpenBSD, FreeDOS, HURD
Posts: 1,075

I see that you give it a stable/decimal version number while fetching the latest git testing/commit-hash version, which may not be correct: sometimes developers increase the decimal version number in a commit but don't yet release that as a decimal version.
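One common packaging convention for keeping these apart (an assumption about style, not something this SlackBuild does) is to embed the pinned short commit hash in the package VERSION, so a git snapshot can never be mistaken for a released decimal version:

```shell
#!/bin/bash
# Sketch: derive a snapshot VERSION string from a pinned commit hash.
# BASE_VERSION and COMMIT are illustrative values, not the real build's.
BASE_VERSION="2.6.1"
COMMIT="a9c5f535629a362e1bddda7d71018a61c973a83b"
VERSION="${BASE_VERSION}_git${COMMIT:0:7}"   # e.g. 2.6.1_gita9c5f53
echo "$VERSION"
```

The resulting package name then records exactly which commit was built.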
 
Old 01-22-2024, 08:02 AM   #25
rizitis
Member
 
Registered: Mar 2009
Location: Greece,Crete
Distribution: Slackware64-current, Slint
Posts: 686

Original Poster
Blog Entries: 1

You are absolutely right.

The previous version built easily without git clone...
In the new version they added submodules within submodules and made it more complicated to build from source...
I built from the main branch at the time they released the new version, but that's not the right way for a stable/decimal version.

I fixed it.
Code:
git clone https://github.com/nomic-ai/gpt4all
cd $PRGNAM/gpt4all-chat
git checkout a9c5f535629a362e1bddda7d71018a61c973a83b
git submodule update --init --recursive
Thanks a lot for pointing it out!
 
Old 01-22-2024, 08:47 PM   #26
dchmelik
Senior Member
 
Registered: Nov 2008
Location: USA
Distribution: Slackware, FreeBSD, Illumos, NetBSD, DragonflyBSD, Plan9, Inferno, OpenBSD, FreeDOS, HURD
Posts: 1,075

Quote:
Originally Posted by rizitis View Post
you are absolutely right.[...]
I like that it gets git versions as long as we know those are stable, but until you try new git versions you might not know... I tried building the version you listed above (though I had to 'wget' it (wget https://github.com/nomic-ai/gpt4all/...973a83b.tar.gz) instead of cloning: GitHub blocks git and often even curl/wget for me and many others), but I got the following errors.

Code:
-- Found Vulkan: /usr/lib64/libvulkan.so (found version "1.3.268") found components: glslc glslangValidator 
CMake Warning at /tmp/rtz/gpt4all/gpt4all-backend/llama.cpp.cmake:307 (message):
  Kompute not found
Call Stack (most recent call first):
  /tmp/rtz/gpt4all/gpt4all-backend/CMakeLists.txt:46 (include)


-- CMAKE_SYSTEM_PROCESSOR: x86_64
-- x86 detected
-- Configuring ggml implementation target llama-mainline-default in /tmp/rtz/gpt4all/gpt4all-backend/llama.cpp-mainline
-- x86 detected
-- Configuring model implementation target llamamodel-mainline-default
-- Configuring model implementation target gptj-default
-- Configuring model implementation target bert-default
-- Configuring ggml implementation target llama-mainline-avxonly in /tmp/rtz/gpt4all/gpt4all-backend/llama.cpp-mainline
-- x86 detected
-- Configuring model implementation target llamamodel-mainline-avxonly
-- Configuring model implementation target gptj-avxonly
-- Configuring model implementation target bert-avxonly
CMake Warning (dev) at /usr/lib64/cmake/Qt6Core/Qt6CoreMacros.cmake:2768 (message):
  Qt policy QTP0001 is not set: ':/qt/qml/' is the default resource prefix
  for QML modules.  Check https://doc.qt.io/qt-6/qt-cmake-policy-qtp0001.html
  for policy details.  Use the qt_policy command to set the policy and
  suppress this warning.

Call Stack (most recent call first):
  /usr/lib64/cmake/Qt6Qml/Qt6QmlMacros.cmake:468 (__qt_internal_setup_policy)
  /usr/lib64/cmake/Qt6Qml/Qt6QmlMacros.cmake:716 (qt6_add_qml_module)
  CMakeLists.txt:92 (qt_add_qml_module)
This warning is for project developers.  Use -Wno-dev to suppress it.

-- Configuring done (2.8s)
CMake Error at /tmp/rtz/gpt4all/gpt4all-backend/llama.cpp.cmake:592 (add_library):
  Cannot find source file:

    llama.cpp-mainline/ggml.c

  Tried extensions .c .C .c++ .cc .cpp .cxx .cu .mpp .m .M .mm .ixx .cppm
  .ccm .cxxm .c++m .h .hh .h++ .hm .hpp .hxx .in .txx .f .F .for .f77 .f90
  .f95 .f03 .hip .ispc
Call Stack (most recent call first):
  /tmp/rtz/gpt4all/gpt4all-backend/CMakeLists.txt:75 (include_ggml)


CMake Error at /tmp/rtz/gpt4all/gpt4all-backend/llama.cpp.cmake:623 (add_library):
  Cannot find source file:

    llama.cpp-mainline/llama.cpp

  Tried extensions .c .C .c++ .cc .cpp .cxx .cu .mpp .m .M .mm .ixx .cppm
  .ccm .cxxm .c++m .h .hh .h++ .hm .hpp .hxx .in .txx .f .F .for .f77 .f90
  .f95 .f03 .hip .ispc
Call Stack (most recent call first):
  /tmp/rtz/gpt4all/gpt4all-backend/CMakeLists.txt:75 (include_ggml)


-- Generating done (0.0s)
CMake Warning:
  Manually-specified variables were not used by the project:

    LIB_SUFFIX


CMake Generate step failed.  Build files cannot be regenerated correctly.

Last edited by dchmelik; 01-22-2024 at 08:50 PM.
 
Old 01-23-2024, 06:11 AM   #27
brobr
Member
 
Registered: Oct 2003
Location: uk
Distribution: Slackware
Posts: 977

A similar error blocked an upgrade at my end as well; that was with gpt4all-2.6.1 and llama.cpp-b1851, or the original llama.cpp-b1350.
 
Old 01-23-2024, 11:08 AM   #28
rizitis
Member
 
Registered: Mar 2009
Location: Greece,Crete
Distribution: Slackware64-current, Slint
Posts: 686

Original Poster
Blog Entries: 1

We can't build gpt4all any more without
Code:
git submodule update --init --recursive
because in
Code:
https://github.com/nomic-ai/gpt4all/tree/main/gpt4all-backend
there is
Code:
 @ https://github.com/nomic-ai/llama.cpp/tree/01307d86bbe980128308c36b64c494fb9dbaa5bf
and inside that there is
Code:
 @ https://github.com/nomic-ai/kompute/tree/4565194ed7c32d1d2efa32ceab4d3c6cae006306
But even if you download them separately and install them in the correct place, it still doesn't build...

The only solution for me is this script; is it working for you? https://raw.githubusercontent.com/ri...all.SlackBuild
 
1 member found this post helpful.
Old 01-23-2024, 12:20 PM   #29
brobr
Member
 
Registered: Oct 2003
Location: uk
Distribution: Slackware
Posts: 977

Yip, it did for me, thanks a lot.

Last edited by brobr; 01-23-2024 at 12:21 PM.
 
Old 05-14-2024, 02:04 AM   #30
rizitis
Member
 
Registered: Mar 2009
Location: Greece,Crete
Distribution: Slackware64-current, Slint
Posts: 686

Original Poster
Blog Entries: 1

This is a personally created AppImage.
In theory it should run on any Linux system.
If you test it, please give feedback.

There are small models to download, like Mini Orca (small), which needs only 2GB of free space and 4GB of RAM.
 
  

