Other *NIX: This forum is for the discussion of any UNIX platform that does not have its own forum. Examples would include HP-UX, IRIX, Darwin, Tru64 and OS X.

Old 02-11-2014, 06:21 AM   #1
ksri07091983
Member
 
Registered: Nov 2007
Location: Chennai,TamilNadu,India
Distribution: RedHat,SuSE
Posts: 65

Rep: Reputation: 15
wiping out data for server decommission


Hi all,

I have a task to wipe out all the data on a server that is about to be decommissioned. I came up with the following approach:

* Identify the volume group information from the bdf output

* Use vgdisplay -v to find the disks under the root volume group

* Use the 'dd' command to wipe the information on those disks (nohup dd if=/dev/zero of=/dev/rdsk/${DISK} bs=8192k &)


I would like some advice: is this the right approach, or are there better methods to achieve this task?
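
For reference, this is roughly the sequence I have in mind (only a sketch; the volume group name and the disk name below are placeholders and would come from the actual bdf / vgdisplay output on the server):

Code:
# Identify the root volume group from the mounted filesystems
bdf

# List the physical volumes (disks) that belong to that volume group
vgdisplay -v /dev/vg00 | grep "PV Name"

# Wipe one of those disks with zeros (repeat per disk);
# c0t0d0 is only an example device name
DISK=c0t0d0
nohup dd if=/dev/zero of=/dev/rdsk/${DISK} bs=8192k &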

Thank you in advance!
 
Old 02-12-2014, 02:40 AM   #2
TenTenths
Senior Member
 
Registered: Aug 2011
Location: Dublin
Distribution: Centos 5 / 6 / 7
Posts: 3,481

Rep: Reputation: 1553
You could boot from a live CD and then dd /dev/zero over the whole of the disk.

Within our company the policy is to have the drives physically destroyed after use; basically, a company turns up with a van and shreds the drives on-site.

The whole "must over-write X times with zeros or random values" thing is pretty sensationalist; once with zeros is enough to deter anyone except those with access to tunnelling electron microscopes from trying to reconstruct the data.
 
Old 02-12-2014, 03:42 AM   #3
syg00
LQ Veteran
 
Registered: Aug 2003
Location: Australia
Distribution: Lots ...
Posts: 21,140

Rep: Reputation: 4123
On a couple of occasions I've just used mkfs.ntfs. Unless you force a quick format, it'll zero out every block after formatting the metadata.
Nice and convenient, but no quicker.
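
As a minimal sketch of that (assuming the mkntfs / ntfs-3g tools are installed; /dev/sdX1 is a placeholder partition):

Code:
# Full (non-quick) NTFS format: every cluster gets zeroed as it is written.
# Do NOT pass -Q / -f, which would do a quick format and skip the zeroing.
mkfs.ntfs /dev/sdX1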

EDIT: just noticed this is under "other *NIX" - I've only done this in Linux.

Last edited by syg00; 02-12-2014 at 03:45 AM.
 
Old 02-12-2014, 09:25 AM   #4
ksri07091983
Member
 
Registered: Nov 2007
Location: Chennai,TamilNadu,India
Distribution: RedHat,SuSE
Posts: 65

Original Poster
Rep: Reputation: 15
Thanks for your suggestions, TenTenths and syg00.

I will boot the server from a CD image and write zeros over the whole disk.

However, I have a question (that may sound silly): isn't it possible to write the zeros directly from the operating system shell? I understand that the command may not return to the prompt, as there won't be a tty to display the output.

But would the following command keep running from memory and write zeros to all of the disk's sectors, or would it stop as soon as it overwrites the data block that contains /dev/zero?

Code:
 dd if=/dev/zero of=<current root disk>
 
Old 02-12-2014, 09:33 AM   #5
pan64
LQ Addict
 
Registered: Mar 2012
Location: Hungary
Distribution: debian/ubuntu/suse ...
Posts: 21,930

Rep: Reputation: 7321
You should not (try to) destroy the filesystem(s) containing the running OS (that one will fail), but all the others can be cleaned that way.
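
As a rough sketch (Linux/LVM2 syntax rather than HP-UX; /dev/sdb is a placeholder):

Code:
# List which physical volumes belong to which volume group;
# leave the disks behind the running OS alone
pvs -o pv_name,vg_name

# Any disk that the running system does not depend on can be zeroed in place
nohup dd if=/dev/zero of=/dev/sdb bs=1M &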
 
1 member found this post helpful.
Old 02-12-2014, 12:51 PM   #6
salasi
Senior Member
 
Registered: Jul 2007
Location: Directly above centre of the earth, UK
Distribution: SuSE, plus some hopping
Posts: 4,070

Rep: Reputation: 897
Quote:
Originally Posted by TenTenths View Post
The whole "must over-write X times with zeros or random values" thing is pretty sensationalist, once with 0's is enough to deter anyone except those with access to tunnelling electron microscopes to try and reconstruct data.
While, generally, I agree with that:
  • If someone has written a procedure, I'll probably follow it (before I tell them that I think they've done it wrong...but that's just me)
  • If I wanted to select something that I thought was pretty bulletproof, I'd do a pass of zeroes, a pass of ones, and then random data; even if you could recover anything through the random data, it would probably only be either the ones or the zeroes written in the first two passes, or at least it would be almost impossible to prove that it wasn't (hence putting some variation into the data in the first two passes)
  • I'm not sure that you can recover anything useful from the data after a couple of passes; knowing that the previous user had written at least one 'zero' isn't going to get you anywhere, and you'd need to recover much, much more to do anything useful with it
  • DBAN, or something similar; using a live CD(/DVD) allows you to 'nuke' the OS.
  • Normally, people don't scramble the partition table, and I don't know why; it takes little time, and unless you can get the partition table back, it is going to be difficult to put any context to your data (see the sketch after this list)
  • I'm not sure that a tunnelling electron microscope actually helps...at least the micrographs I've seen didn't help me
  • While there are people who will tell you that 26 passes are actually necessary, I don't believe it. On the other hand, that's probably just 26 overnight sessions, and that doesn't really cost anything except a month of elapsed time
  • If you are a three-letter agency, you'll want the full professional job, no question, and I wouldn't blame you. The 'costs' of getting this wrong are potentially immense
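
On the partition table point, a rough sketch of scrambling it (GNU/Linux tools; /dev/sdX is a placeholder, and remember that GPT keeps a backup copy at the end of the disk):

Code:
# Overwrite the first MiB, which holds the MBR/GPT header and primary partition table
dd if=/dev/urandom of=/dev/sdX bs=1M count=1

# GPT also stores a backup table at the very end of the disk, so overwrite that too
SECTORS=$(blockdev --getsz /dev/sdX)
dd if=/dev/urandom of=/dev/sdX bs=512 seek=$((SECTORS - 2048)) count=2048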
 
Old 02-12-2014, 01:52 PM   #7
TB0ne
LQ Guru
 
Registered: Jul 2003
Location: Birmingham, Alabama
Distribution: SuSE, RedHat, Slack,CentOS
Posts: 26,685

Rep: Reputation: 7972
Quote:
Originally Posted by ksri07091983 View Post
Hi all,
I have a task to wipe out all the data on a server that is about to be decommissioned. I came up with the following approach:

* Identify the volume group information from the bdf output
* Use vgdisplay -v to find the disks under the root volume group
* Use the 'dd' command to wipe the information on those disks (nohup dd if=/dev/zero of=/dev/rdsk/${DISK} bs=8192k &)

I would like some advice: is this the right approach, or are there better methods to achieve this task?
While I agree in principle with the suggestions offered previously, I don't know if it's what I'd use. You just say "decommissioned"...but don't say if you're SELLING the server(s) to someone else (another company/private buyer), or if you're re-deploying them elsewhere at your company. So if you're keeping them within your company, any of the suggestions here will work fine.

If you're SELLING them, I'd settle for nothing short of a sledgehammer and pound the hard drives into scrap. Anyone who buys them *COULD* (theoretically) still read the data from the drives...government agencies just have an easier time of it. You CAN recover good bits of data, even with the ones/zeros method. DBAN is MUCH harder, but experienced data forensics people can recover data. It's expensive, and the results may be hit or miss, but it can be done. It may sound paranoid, but you have NO IDEA what someone else will do, and if it's your company's data/client records, do you really want to take the chance?
 
Old 02-12-2014, 02:24 PM   #8
metaschima
Senior Member
 
Registered: Dec 2013
Distribution: Slackware
Posts: 1,982

Rep: Reputation: 492
A sledgehammer is worse than zeroing the drive, because you can still recover info from the pieces. A PRNG that is not cryptographically secure is no better than zeroing the drive. If the data is "top secret" then use a cryptographic PRNG that is well seeded or encrypt the drive.
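
As a minimal sketch of a single well-seeded random pass (on Linux, /dev/urandom is a kernel-seeded CSPRNG; /dev/sdX is a placeholder):

Code:
# One pass of cryptographically strong pseudo-random data over the whole device.
# Slower than writing zeros, but leaves no predictable pattern behind.
dd if=/dev/urandom of=/dev/sdX bs=1M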
 
Old 02-12-2014, 02:32 PM   #9
TB0ne
LQ Guru
 
Registered: Jul 2003
Location: Birmingham, Alabama
Distribution: SuSE, RedHat, Slack,CentOS
Posts: 26,685

Rep: Reputation: 7972
Quote:
Originally Posted by metaschima View Post
A sledgehammer is worse than zeroing the drive, because you can still recover info from the pieces. A PRNG that is not cryptographically secure is no better than zeroing the drive. If the data is "top secret" then use a cryptographic PRNG that is well seeded or encrypt the drive.
Really?? Mind telling us how, exactly, you can recover data from PIECES of a hard drive? Especially since the coating on the platters will flake into particles about the size of grains of sand?
 
Old 02-12-2014, 04:11 PM   #10
metaschima
Senior Member
 
Registered: Dec 2013
Distribution: Slackware
Posts: 1,982

Rep: Reputation: 492
The data density of HDDs is quite high, and please don't tell me that a sledgehammer will turn an HDD into sand.

http://www.xbitlabs.com/news/storage..._Analysts.html
NOTE: values are in gigabits

Let's say we currently have 100 GB per square inch; that is about 0.15 GB per square millimetre (one square inch is roughly 645 mm²), and a square millimetre is about the size of a grain of sand or salt. So that grain of sand could contain a lot of data.

Oh, and if you can turn an HDD into sand using a sledgehammer, please do post a video of it on YouTube, as that would be entertaining.

Last edited by metaschima; 02-12-2014 at 04:14 PM.
 
Old 02-12-2014, 04:43 PM   #11
TB0ne
LQ Guru
 
Registered: Jul 2003
Location: Birmingham, Alabama
Distribution: SuSE, RedHat, Slack,CentOS
Posts: 26,685

Rep: Reputation: 7972
Quote:
Originally Posted by metaschima View Post
The data density for HDDs is quite high, and please don't tell me that a sledgehammer will turn a HDD into sand
No, but please DO read what was actually said. I said the COATING will flake off into bits as fine as a grain of sand. Since the coating is what holds the data, how, exactly, do you read data from it???
Quote:
NOTE: values are in gigabits. Let's say we currently have 100 GB per square inch; that is about 0.15 GB per square millimetre, and a square millimetre is about the size of a grain of sand or salt. So that grain of sand could contain a lot of data.
Right...and we're back to: tell us how you READ THE DATA FROM IT in that state?
Quote:
Oh, and if you can turn a HDD into sand using a sledgehammer, please do post a video of it on youtube as that would be entertaining.
Again, go back and re-read what was said.
 
Old 02-12-2014, 05:08 PM   #12
metaschima
Senior Member
 
Registered: Dec 2013
Distribution: Slackware
Posts: 1,982

Rep: Reputation: 492
I would not be able to read data from HDD scraps, but using a microscope and sophisticated methods, one might be able to, just like one is able to see individual bits on a HDD platter. Of course, I'm referring to three letter agencies and not the layman, who would not have access to such methods.

I sense you are angry; maybe you should go out and pulverize an HDD to let off some steam. As for me, I've said what I wanted to say, and I'll go off to another thread.
 
1 member found this post helpful.
Old 02-12-2014, 07:55 PM   #13
TB0ne
LQ Guru
 
Registered: Jul 2003
Location: Birmingham, Alabama
Distribution: SuSE, RedHat, Slack,CentOS
Posts: 26,685

Rep: Reputation: 7972
Quote:
Originally Posted by metaschima View Post
I would not be able to read data from HDD scraps, but using a microscope and sophisticated methods, one might be able to, just like one is able to see individual bits on a HDD platter. Of course, I'm referring to three letter agencies and not the layman, who would not have access to such methods.

I sense you are angry; maybe you should go out and pulverize an HDD to let off some steam. As for me, I've said what I wanted to say, and I'll go off to another thread.
Uhh, sorry....you're wrong. There is NOT a microscope that can view magnetic fields. HDD platters use magnetic principles to store data. You can't 'view' them.

And I'm not angry a bit...but if you want to advocate against something, you should have proof of what you say. You don't.
 
Old 02-13-2014, 07:29 AM   #14
ksri07091983
Member
 
Registered: Nov 2007
Location: Chennai,TamilNadu,India
Distribution: RedHat,SuSE
Posts: 65

Original Poster
Rep: Reputation: 15
Thanks Everyone for participating and providing valuable insight.

I will take the suggestion to wipe the hard disks and also physically destroy them if they are not going to be used in the same environment any further.

As for those disks which will be re-purposed in the same environment, I will go with zeroing the whole disk and filling it with random numbers.

Thanks once again!
 
Old 02-13-2014, 06:23 PM   #15
salasi
Senior Member
 
Registered: Jul 2007
Location: Directly above centre of the earth, UK
Distribution: SuSE, plus some hopping
Posts: 4,070

Rep: Reputation: 897
There is some work on this subject that you should read:

At cmrr.ucsd.edu there is a Data Sanitization Tutorial (PDF). Among the highlights are a discussion of how some forensic techniques and some data-destruction techniques have become impractical over time, and a discussion of the various legal requirements and penalties that might exist under the different applicable (US) laws. This might concentrate the attention in some circumstances.

One particular issue that is easy to overlook is that of unused blocks, in circumstances where the number of user-accessible blocks is lower than the drive's native capacity. In some cases you might not be that worried about ancient data escaping the erasure process, but you probably shouldn't be that lax.

There is also a discussion at SANS of the different microscopic techniques that can be used (and again, how progress has made this more difficult).

Some other bits and pieces:
http://www.nber.org/sys-admin/overwr...a-gutmann.html
http://us.simsrecycling.com/Business...tion-Standards (probably better dealt with in the pdf at cmrr)
https://en.wikipedia.org/wiki/Data_remanence
https://www.anti-forensics.com/disk-...h-screenshots/
(And, of course, SSD erasure is a different issue.)
 
  

