Old 12-16-2009, 02:57 AM   #1
suganthi
LQ Newbie
 
Registered: Dec 2009
Posts: 21

Rep: Reputation: 15
Question: Linux file system permissions


Dear All,

I am new to Linux. I am trying to copy data from one disk to another.
I have already mounted /dev/sdf on rdf, but when I try to create a directory inside rdf I am getting:
1. "cannot create directory: Read-only file system"
2. Also, I am not able to umount rdf.

I have some other queries:

1. How do I list the processes currently running in Linux (so that I can terminate a process that is consuming too much CPU)?
2. How do I check my Linux server's processor/RAM details?
3. Taking the backup is currently taking a week. I need to reduce that time. What can be done to improve the process?
 
Old 12-16-2009, 06:00 AM   #2
raju.mopidevi
Senior Member
 
Registered: Jan 2009
Location: vijayawada, India
Distribution: openSUSE 11.2, Ubuntu 9.0.4
Posts: 1,155
Blog Entries: 12

Rep: Reputation: 92
If you are the superuser, then you can change those file system permissions.

Become the superuser (root); then you will have write permission and will be able to mount or unmount the file system.
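
For the read-only error, as root you can also check how the file system was mounted and try remounting it read-write. A minimal sketch, assuming rdf is your mount point (adjust the path to your system):
Code:
$ mount | grep rdf                  # look for "ro" among the mount options
$ mount -o remount,rw /path/to/rdf  # as root, try remounting read-write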


1. To see all the processes running on your system, use:
Code:
$ ps aux | less
2. To see your CPU info, use this command:
Code:
$ cat /proc/cpuinfo
3. Use a backup utility: compress your data with cpio and move it to a safe location.
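For example, a rough sketch of a cpio + gzip backup (the source and destination paths are only placeholders):
Code:
$ find /data -depth -print | cpio -ov | gzip > /mnt/backup/data.cpio.gz   # create a compressed archive
$ gunzip -c /mnt/backup/data.cpio.gz | cpio -idv                          # restore it later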
 
Old 12-16-2009, 06:14 AM   #3
raju.mopidevi
Senior Member
 
Registered: Jan 2009
Location: vijayawada, India
Distribution: openSUSE 11.2, Ubuntu 9.0.4
Posts: 1,155
Blog Entries: 12

Rep: Reputation: 92
To see your RAM details:

Code:
$ dmesg | grep ^Memory
Memory: 500080k/514892k available (1953k kernel code, 14152k reserved, 1676k data, 264k init, 0k highmem)
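If you just want current memory usage rather than the boot-time summary, free works as well (not distribution-specific):
Code:
$ free -m   # memory and swap, in megabytes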
 
Old 12-16-2009, 08:55 AM   #4
XavierP
Moderator
 
Registered: Nov 2002
Location: Kent, England
Distribution: Debian Testing
Posts: 19,192
Blog Entries: 4

Rep: Reputation: 475
Moved: This thread is more suitable in Linux-General and has been moved accordingly to help your thread/question get the exposure it deserves.
 
Old 12-16-2009, 07:52 PM   #5
chrism01
LQ Guru
 
Registered: Aug 2004
Location: Sydney
Distribution: Rocky 9.2
Posts: 18,356

Rep: Reputation: 2751
You can get a summary of the current cpu/mem/disk usage by running 'top'.
As for backups, we'd need to know a lot more detail, e.g. what your system specs are, what and how much you are trying to back up, and what technique/code you are currently using for the backup.
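If you want a one-shot snapshot instead of the interactive display, top can also run in batch mode (a sketch; exact options can vary slightly between top versions):
Code:
$ top -b -n 1 | head -n 20   # print one iteration and show the first 20 lines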
 
Old 12-17-2009, 12:53 AM   #6
suganthi
LQ Newbie
 
Registered: Dec 2009
Posts: 21

Original Poster
Rep: Reputation: 15
Regarding backup:

We are doing the backup with Amanda. The total size is around 6 TB, but the process is taking about a week. Your suggestions please.

Any idea about roaming profiles?
We just migrated from Linux AD to Windows, creating a new domain. But when I try to log in with the same (old domain's) user in the new domain, I am not able to get the roaming profile. The error is "could not locate the server copy of your roaming profile", and Windows now logs in with a temporary profile.
Any idea how to make it work properly, without any issues?
 
Old 12-18-2009, 07:24 AM   #7
choogendyk
Senior Member
 
Registered: Aug 2007
Location: Massachusetts, USA
Distribution: Solaris 9 & 10, Mac OS X, Ubuntu Server
Posts: 1,197

Rep: Reputation: 105
With respect to backing up 6tb -- this will depend on the configuration and tuning of your system. Amanda uses native utilities. I presume you've configured it to use gnutar and are not trying to compress or encrypt? Have you tried just doing a straight copy or gnutar to see how fast it goes? If you describe your hardware and configuration, perhaps someone could make some suggestions.
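For example, one rough way to measure raw read/archive throughput with Amanda out of the loop (the path is a placeholder; piping through wc avoids GNU tar's shortcut of not reading files when writing directly to /dev/null):
Code:
$ time tar cf - /path/to/data | wc -c   # bytes archived; divide by elapsed time for throughput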
 
Old 12-18-2009, 07:28 AM   #8
raju.mopidevi
Senior Member
 
Registered: Jan 2009
Location: vijayawada, India
Distribution: openSUSE 11.2, Ubuntu 9.0.4
Posts: 1,155
Blog Entries: 12

Rep: Reputation: 92
The CMS Automatic Backup System Plus V2 is a 1.5 TB external hard disk drive. Compared to other external hard drives on the market it is expensive, at around $225. It connects via either FireWire or USB 2.0 for increased flexibility, is a 7200 rpm drive, and comes with a 32 MB buffer.
 
Old 12-21-2009, 06:43 AM   #9
choogendyk
Senior Member
 
Registered: Aug 2007
Location: Massachusetts, USA
Distribution: Solaris 9 & 10, Mac OS X, Ubuntu Server
Posts: 1,197

Rep: Reputation: 105
Hmm. Not much to go on here. That's just the external drive. It ought to be able to take the backup faster than you reported earlier. However, you originally indicated you were trying to back up 6 TB, and then you report that you are using a 1.5 TB drive to do it. Which is right?
(Edit: My mistake. I thought the OP was coming back with information on his drive. Still waiting to hear.)

What is the system you are backing up from? What else is it doing? Have you tried just doing copies or tars to see how fast it can go?

Last edited by choogendyk; 12-21-2009 at 07:39 PM.
 
Old 12-21-2009, 07:22 AM   #10
raju.mopidevi
Senior Member
 
Registered: Jan 2009
Location: vijayawada, India
Distribution: openSUSE 11.2, Ubuntu 9.0.4
Posts: 1,155
Blog Entries: 12

Rep: Reputation: 92
Even though compression takes some time, it is better to compress and then copy.

Compressed files can be copied from one system to another more easily.
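For example, a minimal sketch with tar and gzip (the paths and host name are placeholders):
Code:
$ tar czf /tmp/data.tar.gz /path/to/data           # compress into a single archive
$ scp /tmp/data.tar.gz user@backuphost:/backups/   # copy the archive to another system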
 
Old 12-21-2009, 11:17 PM   #11
suganthi
LQ Newbie
 
Registered: Dec 2009
Posts: 21

Original Poster
Rep: Reputation: 15
Actually, I am trying to take a backup of a Linux server that holds around 6 TB. For that we are using 4 HDDs of 1.5 TB each.
Now when I start the backup it takes around 74 hours; normally it should not take this much time.
 
Old 12-22-2009, 11:37 AM   #12
choogendyk
Senior Member
 
Registered: Aug 2007
Location: Massachusetts, USA
Distribution: Solaris 9 & 10, Mac OS X, Ubuntu Server
Posts: 1,197

Rep: Reputation: 105
You still aren't telling us much about your configuration. But it sounds like a potentially tight squeeze: you're backing up 6 TB and using 4x1.5 = 6 TB of space to back it up. How is that configured? What are the original disklist entries? What sort of values do you have for dumpcycle, runspercycle and tapecycle? It sounds like you probably have no holding disk, right? Have you broken up the source volumes to be backed up into smaller pieces in the disklist file? And what is the hardware configuration? How are the drives connected? What kind of drives are they? What sort of server?
 
Old 12-22-2009, 11:51 PM   #13
suganthi
LQ Newbie
 
Registered: Dec 2009
Posts: 21

Original Poster
Rep: Reputation: 15
It's an AMD server. We are using a holding disk.
The backup cycle is 15 days... dumpcycle???

Please check the attached file for the other details. Right now we have connected the external HDD to the Amanda server, so the backup from the client (the Linux server) goes onto that HDD. Since it runs over the network, it is taking quite a long time.

I need your suggestions to reduce the backup time.
Attached Thumbnail: untitled.GIF (13.0 KB)
 
Old 12-23-2009, 07:06 AM   #14
choogendyk
Senior Member
 
Registered: Aug 2007
Location: Massachusetts, USA
Distribution: Solaris 9 & 10, Mac OS X, Ubuntu Server
Posts: 1,197

Rep: Reputation: 105
You can find those parameters (e.g. dumpcycle) in your amanda.conf file in the configuration directory, which might be someplace like /etc/amanda/DailySet1/amanda.conf. Or, you could use amgetconf (do a man page `man amgetconf`) to list configuration parameters.
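For example, assuming your configuration is named DailySet1 (substitute your own configuration name):
Code:
$ amgetconf DailySet1 dumpcycle
$ amgetconf DailySet1 runspercycle
$ amgetconf DailySet1 tapecycle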

So, you're pulling 6TB over the network, and you have an AMD linux server with 2G memory. Still not giving us much to go on. Is your network 10Mb, 100Mb or 1000Mb (GigE)? Is there lots of other traffic or congestion on it? Have you tested other transfers for performance using ftp or scp? Are you running the backup through an ssh pipe? How fast are the cpu's on the servers? How much overall disk space? How large is the holding disk? And, again, what does your Amanda disklist look like? Are you breaking up the 6TB into smaller pieces for backup?

That external drive you've connected to the Amanda server -- is that the holding disk? Or is that being used as virtual tapes for backup? How large is it, and how is it configured?

There are a lot of questions that need answers before we can help you. However, making guesses, and assuming you have no fundamental problems with network performance or server capabilities, the way I would try to back up 6 TB across the network would be to break it up into a large number of smaller pieces by defining them in Amanda's disklist. Then I would have a dumpcycle of somewhere between a week and a month, with nightly backups. The runspercycle would be the number of days in a dumpcycle. The tapecycle would be something larger than the dumpcycle. This way, the full backups of individual parts of the 6 TB would be distributed over the dumpcycle and not all done at once.

When you start up a configuration like that, add a few disklist entries to the disklist every night until it gets going. Otherwise, it will need to start out with all fulls, since incrementals don't make sense without a full.

Also, if it is on a secure internal network, I would not push it through ssh, since that adds significant overhead to the transfer. And, as far as compression goes, that would depend on where you have the cpu cycles. If your backup server is faster and isn't doing other things, do server side compression. If the backups were coming from a larger number of other servers, all of which had significant cpu capacity, and the backup server was more limited, then I would do client side compression.
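
As a rough illustration of that idea (these values, host names, paths, and the dumptype are placeholders, not your actual configuration), the relevant amanda.conf parameters and a disklist broken into smaller pieces might look something like this:
Code:
# amanda.conf (excerpt)
dumpcycle 2 weeks      # each disklist entry gets one full backup per cycle
runspercycle 14        # one amdump run per night
tapecycle 20 tapes     # keep more tapes/vtapes than runs per cycle

# disklist: split the 6 TB into several smaller entries
fileserver1  /export/home      comp-user-tar
fileserver1  /export/projects  comp-user-tar
fileserver2  /data/archive     comp-user-tar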

Anyway, maybe that will give you something to work with.
 
Old 12-24-2009, 12:49 AM   #15
suganthi
LQ Newbie
 
Registered: Dec 2009
Posts: 21

Original Poster
Rep: Reputation: 15
I use ZMC to schedule the backups and check their status.
Network speed is 100 Mbps.
Holding disk space: 15 vtapes and 225 TB (max space used).
The total backup size is 6 TB, which comprises 2 file servers, but I tried with only one server and that alone took 74 hrs. I am not breaking it up; I am just adding all the directories via the ZMC "Backup What" page and scheduling the backup.
No idea how the disk was configured; it might be through LVM (four separate HDDs were configured as one BigDisk).

I tried amgetconf daily logdir but I am getting "no such file or directory".
 
  

