[SOLVED] Bash, maximum file/folder listing, ls -a? Why no /run/5000/?
In Bash I have a script that recursively descends into directories and calls "ls -a" on each one to list everything in it, until there is nothing left to find. With the "find" command I can see that a folder "/run/5000/" exists, but it never showed up in the recording from my "ls -a" script, even though I ran the script as sudo with no errors and, according to the program, no unfound directories or files. Is there a better way to do this, or am I stuck with "ls -a"? Is there some super-root or what? "ls -a" seems like it should work. I know there may be permission denied, with "find" at least. I am on Ubuntu Desktop 22.04.3 LTS, but I am aiming for all ext2-to-ext4 Linux systems for my set of programs. I just want it to list all files and folders in every directory it is called on. If it helps, I was running it on the / directory, like "sudo ./myProgramXE.sh /". Do I maybe need "sudo ls -a" inside my program?
Basic problem: no access to "/run/5000/" or anything inside it, even as sudo. "ls -a" seems not to list it, and "find" gets permission denied, both as sudo. My program records timestamps, inodes, and paths, and it displays each path in the terminal as it goes. That directory was never displayed or recorded. What should I try? This seems like a bug to me, so should I report it as such? X E.
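The recursive approach described in the post can be sketched as follows. This is an illustrative rewrite, not the poster's actual script: it uses find instead of parsing "ls -a" (which is fragile with unusual filenames), and the -printf directives are GNU find extensions available on Ubuntu.

```shell
#!/bin/sh
# Hypothetical sketch: list every file and directory under a starting
# path, recording mtime (seconds), inode, and path, as the post describes.
# -xdev keeps find from crossing into other mounted filesystems
# (tmpfs like /run, plus /proc and /sys), which matters if only
# ext2/3/4 content is wanted. Unreadable directories go to stderr.
list_all() {
    find "$1" -xdev -printf '%T@ %i %p\n' 2>/dev/null
}

list_all /tmp | head -n 3
```

Run as root, this traverses everything on the starting filesystem; run unprivileged, mode-700 directories belonging to other users are silently skipped because of the 2>/dev/null.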
/run and everything within it is usually a tmpfs filesystem, which exists only in memory. If you are only targeting ext2/3/4 filesystems, then don't include temporary filesystems.
I would guess the actual path is /run/user/<user id>, which has owner-only permissions (drwx------), so running without the right privileges you should see a permission denied warning.
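A quick way to verify the two claims above on the machine in question. This is a generic check, not something from the thread, and it assumes GNU coreutils stat:

```shell
# Ask what filesystem /run actually lives on; on a typical Ubuntu
# install this prints "tmpfs" (contents exist only in memory).
stat -f -c %T /run

# Show the per-user runtime directories. Each /run/user/<uid> entry is
# mode drwx------ and owned by that user, which is why an unprivileged
# "find" reports "Permission denied" on other users' entries.
ls -l /run/user 2>/dev/null
```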
I am certain it was "/run/5000/", and "find" reports permission denied on it every time. With recursive "ls -a" it is never listed, but I know it exists. X E.
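The symptom described (a directory that "ls -a" of the parent shows, but that cannot be read inside) is ordinary Unix permission behavior and can be reproduced with a throwaway directory. The paths here are illustrative, not from the thread:

```shell
# Create a directory that nobody can read, search, or write.
mkdir -p /tmp/demo/secret
chmod 000 /tmp/demo/secret

# The name still appears in the parent's listing...
ls -a /tmp/demo

# ...but reading *inside* it fails for a normal user (root bypasses
# the permission check, so as root this succeeds).
ls -a /tmp/demo/secret 2>&1 | head -n 1

# Clean up.
chmod 700 /tmp/demo/secret
rm -rf /tmp/demo
```

So a recursive lister based on "ls -a" would print the entry in the parent directory but never descend into it, which matches the report.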
Do you want code or what? All you should need to know is that "ls -a" never lists it, "find" cannot access it, and I would like everything on this computer listed. Programs are run like "sudo ./program.sh /". I do not use sudo inside programs. X E.
Last edited by maybeJosiah; 01-20-2024 at 11:24 AM.
Reason: sudo maybe
Please forget "something like this". Quote the actual command/script and clearly describe your problem/question.
A fun fact: /run is a product of Lennart Poettering. It is temporary storage; its contents are volatile.
For someone who started out by saying you're "like advanced Linux user and programmer.", you seem to be ignoring everything you're being told, and you don't seem to understand the answers you get. There aren't any 'bugs' to report in your original post in this thread, and again: you *DO NOT NEED* timestamps/ctime/mtime/whatever-time to do a system restore. If you don't understand that the file *IS* the data, and the time on it doesn't matter, there isn't much we can do to help you. If I back up a file with my Apache configuration (which is plain text), open it, insert a single space/comment and save it, then REMOVE my edits and save it again, the file's timestamps are totally different, but the DATA is the same. When I go to start Apache, do you honestly think it's going to error out because of a timestamp? No, it doesn't, nor does anything else.
Again, if you back up your config files and your /home folder (along with whatever else you edit/add/think good), you're done. That's it...really. No one wants or needs a program that looks at timestamps for backups, because they're useless. True backup programs that do versioning already keep track of these things, which lets you have many copies of the same file at different times/dates, so you can roll back if needed.
As other experienced admins and I have told you (and we have DECADES of real-world experience to draw upon), what you're doing is pointless. Reload the operating system and copy your data back from your backup set. Simple. I have restored servers with multiple RAID/SAN disks attached to them in 20 minutes with zero problems. If you need something for data-security purposes to keep track of file changes, such tools exist, but *NOT* for backups, because they're not needed there.
Be clear: what is the EXACT PROBLEM you're trying to solve with all this???
norvel@norvel4-ThinkPad-T460:~$ sudo ls -ld /run/5000
[sudo] password for norvel:
Sorry, try again.
[sudo] password for norvel:
ls: cannot access '/run/5000': No such file or directory
norvel@norvel4-ThinkPad-T460:~$