Appending incrementing number to directory name.
Quote:
Yes, this is exactly the idea. I figured out (I think) how to "transplant" all directories at level 10 to level 3 (or whatever) -- with a simple mv. But the problemo, as you observe, is name duplicates. I'd prefer to rename by appending an incrementing number, though merging would probably work, too. I've been using this, which unfortunately does not address duplicates:
Code:
find . -maxdepth 10 -mindepth 10 -type d -exec echo mv -i {} ...
If you can suggest how to modify directory names to avoid collisions, that would help a lot. Thanks much for your help with this. Eric |
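One way to handle the collisions the quote asks about is to pick the destination name in Python before each move. This is only a sketch of the idea, not code from the thread; the function names, the depth argument, and the target directory are all illustrative.

```python
import shutil
from pathlib import Path

def unique_dest(target_dir: Path, name: str) -> Path:
    """Return target_dir/name, appending _1, _2, ... until the name is free."""
    dest = target_dir / name
    n = 1
    while dest.exists():
        dest = target_dir / f"{name}_{n}"
        n += 1
    return dest

def flatten(root: Path, depth: int, target_dir: Path) -> None:
    """Move every directory exactly `depth` levels below root into target_dir,
    renaming on collision instead of overwriting or merging."""
    # Collect the victims first so we never walk into a directory mid-move.
    victims = [d for d in root.glob("/".join(["*"] * depth)) if d.is_dir()]
    for d in victims:
        shutil.move(str(d), str(unique_dest(target_dir, d.name)))
```

Unlike `mv -i`, this never prompts: the first `x` keeps its name, the next ones become `x_1`, `x_2`, and so on.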
THANK YOU ALL FOR YOUR HELPFUL REPLIES.
I actually wrote a more detailed reply for each of your various approaches, but unfortunately it got zapped by accident, and it's too late for me to redo it. I'm going with appending an incrementing number to avoid directory-name collisions. If that doesn't work, I might also try combining path and filename into one filename. Thank you again for your help, and please excuse my absence for the last month. Eric |
Quote:
I agree symbolic links present an effective solution. However, I've had links break in the past -- not sure how it happened. I'm afraid it will happen again, and I'd like a solution that doesn't require me to remember there are symbolic links that can break. Not sure if it was your suggestion, but appending an incrementing number to a directory name before moving it to a higher level seems like a good idea. |
Quote:
I'm afraid at some point in the future it might become an issue -- perhaps for some other OS, or some other backup system. |
Quote:
Right now I just do a manual backup every so often -- at the risk of losing anything changed since the previous backup. I've tried rsync, but it doesn't copy all the files for some reason. I've just recently learned how to file-share, and I'd like to set this up with some rsync-like backup solution. Placing the backup on a separate system may be more secure. Also, backing up slows file operations down -- sometimes it seems like it gets stuck. |
Quote:
Also, copying deeply nested directory structures results in incomplete copying -- I think. It can be hard to tell, since you don't know whether the system has just slowed down or stopped altogether. |
to Astrogeek
I wasn't sure if I could continue a thread from 5 years ago.
I figured it would be most likely seen if posted anew. Thanks for letting me know. |
Quote:
-mindepth 15 -maxdepth 15 or something similar. But this posed two problems: 1) It didn't check for duplicate names before moving to the target directory. Recently (five years after the OP) I learned how to append incrementing numbers in Python. 2) It didn't determine a file's nesting depth. I can use the awkward -maxdepth & -mindepth from bash, guessing at the depth and then seeing what happens, but this flattened only part of a huge directory. More recently, I've researched how to find path length in Python. This guy on reddit/learnpython describes my experience:
Quote:
I'd like this output:
Code:
longest path in directory structure is 5
This may make the directory structure easier to search, back up, remove duplicates from, and transfer to other file systems. There's also a variation where the number returned is not the number of components in the directory path, but rather the number of characters (alphanumeric, punctuation, delimiters) in the entire path. So the character count for
Code:
/a/bb/FN1.ext
is 13. Then I can stay within a file system's path-length limit (e.g. 4096 characters). I can use my script to move empty folders to a target folder -- but instead of testing whether a folder is empty, I'd test whether its path length exceeds some limit.
Code:
import os
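Both measurements described above are short with pathlib (which the next reply also recommends). This is a sketch, not the poster's script; the function names are mine.

```python
from pathlib import Path

def longest_depth(root: str) -> int:
    """Greatest number of path components below root, over all entries."""
    r = Path(root)
    return max((len(p.relative_to(r).parts) for p in r.rglob("*")), default=0)

def longest_path_chars(root: str) -> int:
    """Greatest character count of any full path under root."""
    r = Path(root)
    return max((len(str(p)) for p in r.rglob("*")), default=len(str(r)))
```

For example, `print(f"longest path in directory structure is {longest_depth('.')}")` prints the kind of one-line summary the post asks for; `longest_path_chars` gives the character-count variation instead.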
Quote:
Anyway, this isn't difficult to do in Python, and based on your posting history, you're more than capable. Use this: https://docs.python.org/3/library/pathlib.html
THIS IS THE CODE THAT FINALLY WORKED. THANKS TO ALL
Code:
import os
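Only the first line of the final script survives above. As a rough reconstruction -- not Eric's actual code -- the approach the thread converged on (find directories nested past some depth, transplant them into a target directory, and append an incrementing number on name collisions) might look like this. `ROOT`, the target, and the depth limit are illustrative, and the target is assumed to be outside the tree being walked.

```python
import os
import shutil

def unique_name(target, name):
    """name, or name_1, name_2, ... -- the first that is free in target."""
    candidate = os.path.join(target, name)
    n = 1
    while os.path.exists(candidate):
        candidate = os.path.join(target, f"{name}_{n}")
        n += 1
    return candidate

def flatten_deep_dirs(root, target, max_depth):
    """Move directory subtrees nested deeper than max_depth under root into
    target, appending an incrementing number when the name is taken."""
    os.makedirs(target, exist_ok=True)
    for dirpath, dirnames, _ in os.walk(root, topdown=True):
        if dirpath == root:
            continue
        depth = os.path.relpath(dirpath, root).count(os.sep) + 1
        if depth >= max_depth:
            for name in list(dirnames):
                shutil.move(os.path.join(dirpath, name),
                            unique_name(target, name))
            dirnames.clear()  # don't descend into subtrees we just moved
```

Because `topdown=True` and `dirnames.clear()`, `os.walk` never descends into a subtree after it has been transplanted, so each deep directory is moved exactly once.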