A command for making this organization from a dir with a lot of files?
Hello,
I have a directory called file.set which contains 1000 files (the number may vary), and we may assume no subfolders exist. I want to organize these files into new subfolders created inside file.set, each holding at most 50 files (this number may also vary), with names like "001.050", "051.100", and so on.
Do you know a command for this? Or is writing a shell script the best option?
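For reference, here is one possible sketch in plain shell. The directory name file.set and the batch size come from the question; the demo below uses generated files and a small batch size so the result is easy to inspect, and the exact range-style folder names are an assumption about the format wanted:

```shell
#!/bin/sh
# Sketch: split the files in a directory into subfolders of at most N files.
# "file.set" is the name from the question; the demo files are generated here.
dir=$(mktemp -d)/file.set
mkdir -p "$dir"
i=0
while [ $i -lt 7 ]; do touch "$dir/$(printf 'f%02d' $i)"; i=$((i + 1)); done

per=3   # at most 3 files per subfolder here (50 in the original question)
n=0
for f in "$dir"/*; do
  [ -f "$f" ] || continue            # skip the subfolders we create below
  if [ $((n % per)) -eq 0 ]; then
    # name each folder after the range it will hold, e.g. 001.003
    sub=$(printf '%03d.%03d' $((n + 1)) $((n + per)))
    mkdir "$dir/$sub"
  fi
  mv "$f" "$dir/$sub/"
  n=$((n + 1))
done
ls "$dir"    # 001.003  004.006  007.009
```

The glob is expanded once before the loop body runs, so the subfolders created inside the loop never re-enter the file list.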
I am creating a script now using Bash. I know (and have used before) that I can write a 'for' loop like this, to iterate over each file in a directory, ordered by name:
Code:
# list the $path dir
for fileName in "$path"/*; do
    echo "$fileName"
done
This 'for' is not appropriate for the algorithm I am using, though.
How do I obtain an array with each file in a position?
files=$path/*
and
files="$path/*"
do not work.
Or is my problem how I am trying to access the file names? I am using:
Code:
echo -e "mv $files[$n] $dest" # where n is a variable holding a number
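For what it's worth, the glob can be captured directly into a Bash array, and the `$files[$n]` form fails because array subscripts need the full `${files[$n]}` syntax. A minimal sketch (the `$path` variable is the one from the post; the demo directory is generated here):

```shell
#!/bin/bash
path=$(mktemp -d)               # stand-in for the real directory
touch "$path/a" "$path/b" "$path/c"

files=("$path"/*)               # glob expands into one array element per file
echo "${#files[@]} files"       # prints: 3 files
n=1
echo "mv ${files[$n]} dest"     # element access needs ${files[$n]}, not $files[$n]
```

Without the parentheses, `files=$path/*` just stores the literal pattern in a plain string variable, which is why both earlier attempts appeared not to work.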
Is there something better than iterating through all the files and building an array one element at a time?
A script version, but it has a problem with the last set of files
I have made a script, but it has a problem I could not solve; maybe you can help.
This is its code:
Code:
#!/bin/bash
# org: Organizes a folder containing many files, creating several subfolders
# with a maximum number of files each
# Version: 2019,04,05,19,25,00 UTC-3
# Version that only prints the commands that would be done
# Print help
ajuda()
{
echo Usage:
echo -e "\torg [folder]\n"
echo -e "Argument:\n"
echo -e "\t[folder]\t\tthe folder that contains the files to be organized.\n\t"
return
}
# Process the argument, possibly deciding the chosen directory
if (( $# == 0 )); then
ajuda
exit 0
fi
if (( $# == 1 )); then
if [[ $1 == '-h' ]] ||
[[ $1 == '--help' ]] ||
[[ $1 == '--ajuda' ]];
then
ajuda
exit 0
else
dir="$1"
fi
else
dir="."
fi
#echo dir=$dir
# Is there no better way to make an array with each file in dir?
arqs=()
for i in $dir/*; do
arqs+=("$i")
done
#numArqs=${#arqs[@]}
#echo $numArqs files
#arqs=$dir/*
n=0 # In each folder, the number of a file
#qPorPasta=50; # Files per dir (max)
qPorPasta=5; # Files per dir (max)
#echo $qPorPasta
max=$qPorPasta
for (( max=qPorPasta; max<${#arqs[@]}; max+=qPorPasta )); do
dest=`printf "%03i" $n`
dest="$dest."
dest="$dest"`printf "%03i" $max`
#echo dest = \"$dest\"
echo mkdir $dest
#mkdir $dest
for((; n<max; n++)); do
if [[ $n -lt ${#arqs[@]} ]]; then
echo -e "mv ${arqs[n]} $dest"
#mv ${arqs[n]} $dest
else
echo max=$max
for(( max+=qPorPasta; n<max; n++)); do
if [[ $n -lt ${#arqs[@]} ]]; then
echo -e "mv ${arqs[n]} $dest"
#mv ${arqs[n]} $dest
fi
done
break 2;
fi
done
done
exit 0
And these commands may help you set up a test like mine:
Code:
cd /dev/shm; mkdir t; cd t
# 23 files named '00' to '22':
for (( i=0; i<23; i++ )); do touch `printf "%02d" $i`; done
Executing the script in the t dir does not move the last set of files. The output (the commands it would run) is this:
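One way to avoid the fix-up loop that trips up the last set is to clamp the upper bound of each batch, so the final partial batch goes through the same code path as full batches. A sketch of the same array-based approach (variable names `arqs` and `qPorPasta` kept from the script above; the 23 test files are generated here):

```shell
#!/bin/bash
# Sketch: organize files into subfolders of at most qPorPasta files,
# clamping the last batch instead of special-casing it.
dir=$(mktemp -d)
for (( i = 0; i < 23; i++ )); do touch "$dir/$(printf '%02d' $i)"; done

arqs=("$dir"/*)
qPorPasta=5
for (( n = 0; n < ${#arqs[@]}; n += qPorPasta )); do
  max=$(( n + qPorPasta ))
  if (( max > ${#arqs[@]} )); then max=${#arqs[@]}; fi   # clamp the final batch
  dest="$dir/$(printf '%03d.%03d' "$n" "$max")"          # e.g. 000.005 ... 020.023
  mkdir "$dest"
  for (( k = n; k < max; k++ )); do
    mv "${arqs[k]}" "$dest"
  done
done
ls "$dir"    # 000.005  005.010  010.015  015.020  020.023
```

Because the loop condition is checked against the total file count rather than the next batch boundary, the three leftover files end up in 020.023 instead of being skipped.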
This is a script I use to divide up x amount of dirs, putting all of the parent's sub-directories into a set of up to 4 other parent dirs.
Code:
#!/bin/bash
working_dir=/run/media/userx/1TB/ARedoneMusic_Sorted
move=( /run/media/userx/1TB/Music_2_do1 /run/media/userx/1TB/Music_2_do2 /run/media/userx/1TB/Music_2_do3 /run/media/userx/1TB/Music_2_do4 )
#move=( /run/media/userx/2TBInternal/Music_2_do1 /run/media/userx/2TBInternal/Music_2_do2 /run/media/userx/2TBInternal/Music_2_do3 /run/media/userx/2TBInternal/Music_2_do4 )
t=0
# remove empty directories from working_dir
while read -r d
do
rmdir -v "$d"
done <<< "$(find "$working_dir" -mindepth 1 -type d -empty)"
mkdir -pv /run/media/userx/1TB/Music_2_do1
mkdir -pv /run/media/userx/1TB/Music_2_do2
mkdir -pv /run/media/userx/1TB/Music_2_do3
mkdir -pv /run/media/userx/1TB/Music_2_do4
echo
totalDir="$(find "$working_dir" -maxdepth 1 -mindepth 1 -type d | wc -l)"
divDir=$(($totalDir / 4))
echo "Total Dir Div/4
$totalDir $divDir"
countDn=0
while read -r f
do
if [[ $countDn -le $divDir ]] ; then
{
mv -f "$f" "${move[$t]}"
((countDn++))
}
fi
echo "t $t"
[[ $countDn -eq $divDir ]] && { ((t++)) ; countDn=0 ; }
[[ $t -eq 4 ]] &&
{ echo "$t $countDn $divDir" ; exit ; }
done <<< "$(find "$working_dir" -maxdepth 1 -mindepth 1 -type d)"
Basically, what you want to do is something like this: run a loop with a count on files; when x amount of files is reached, create a new dir and start putting files into that one; then reset the count and loop until the same x amount is reached again, create another dir, and repeat the process.
Code:
#!/bin/bash
working_dir=/whatever/
move2=/destination
count=0
newDir=0
xamount=100
#create 1st dir
mkdir "$move2"_"$newDir"
while read -r f
do
mv -f "$f" "$move2"_"$newDir"
((count++))
#reset count add 1 to dir name, create new dir
[[ "$count" -eq $xamount ]] && { count=0 ; ((newDir++)) ; mkdir "$move2"_"$newDir" ; }
done <<<"$(find "$working_dir" -type f )"
This adds a number to the end of whatever you name the dirs you want your files in.
Not tested.
Rereading that last post, a question:
You are reading all of the files first, putting each path/name into an array, then using that array to take x amount of files and move them into a different directory, then getting the next set of files and doing the same until you're out of files?
The way I wrote it, it just goes through the dir, putting the files into dirs created dynamically as it loops through them; when it gets to the end it stops no matter how many files are left, because it runs until EOF.
I would suggest adding a counter to the loop so that a remainder test can be used for branching.
Code:
j=50
for i in {1..501}; do
    if (( i % 50 == 0 )); then
        j=$(( i + 50 ))
    elif (( i % 50 == 1 )); then
        echo
        echo "$i-$j"
        # make next directory here
    fi
    echo -n "$i "
done