Slackware: This forum is for the discussion of Slackware Linux.
Location: Northeastern Michigan, where Carhartt is a Designer Label
Distribution: Slackware 32- & 64-bit Stable
Posts: 3,541
Years ago, working on a Unix box (System V) and logged in as root, I forgot that I was root and in the root directory, and typed...
Code:
rm -r *
Everything went to the great byte bucket in the sky.
Reinstalled the operating system from the DC-600 distribution tape cartridge and, fortunately, had a DC-600 with all the user accounts on it, plus another with the /usr/local and /opt trees, and some other stuff on another tape cartridge and a couple of 8" double-sided floppies.
Got it all back and working in about six hours (DC-600s are not speed demons); didn't get out of there until about midnight.
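For anyone wanting a guard against exactly this, here is a hypothetical wrapper (not anything from the post above) that refuses a recursive rm when the working directory is / or $HOME:
Code:

```shell
# Hypothetical guard, a sketch only: refuse to run a recursive,
# interactive rm from / or from $HOME, the two places where a
# forgotten "where am I?" does the most damage.
saferm() {
    case "$PWD" in
        /|"$HOME")
            echo "saferm: refusing to run in $PWD" >&2
            return 1 ;;
    esac
    rm -ri "$@"    # -i prompts per file; GNU rm also offers -I (one prompt)
}
```

Modern GNU rm additionally refuses `rm -r /` by default (--preserve-root), but that would not have helped here, since the command was `rm -r *`, not `rm -r /`.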
A long time ago (the '80s?) in an operating system far away (MS-DOS), my wife asked for the command to back up a couple of hours' work from the working directory to the backup directory. Being half asleep, I gave her the directions to copy the old backup directory over the working directory.
Eons ago, we had issues on NFS-mounted filesystems when the permissions on the local mount points weren't correct. Rather than unmounting the filesystems, the quicker fix was to remount / elsewhere to check and/or fix the mount-point permissions. I was tasked with checking a dozen or so boxes, so I scripted it and, without thinking things through, remounted / inside /tmp/blah and didn't include an umount. All was well until skulker kicked in that night and started pruning anything in /tmp older than the boxes' uptime, which nuked a bunch of stuff.
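The fix described above can be sketched with the missing cleanup added. The scratch path below is made up for illustration, and the bind mount only runs as root; the EXIT trap supplies the umount the original script forgot, so nothing is left under /tmp for a nightly pruner to walk into:
Code:

```shell
#!/bin/sh
# Re-mount / at a scratch location so the real permissions of the
# mount points (hidden underneath the NFS mounts) can be inspected.
TMPMNT="/tmp/rootcheck.$$"
mkdir -p "$TMPMNT"
# Cleanup runs no matter how the script exits.
trap 'umount "$TMPMNT" 2>/dev/null; rmdir "$TMPMNT" 2>/dev/null' EXIT

if [ "$(id -u)" -eq 0 ]; then
    mount --bind / "$TMPMNT"     # Linux bind mount; the old SysV boxes did this differently
    ls -ld "$TMPMNT/mnt"         # check/fix the underlying mount-point perms here
fi
```
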
1. Plugged an unsupported CPU into the motherboard (a triple-core AMD Athlon; the TDP mismatch got a warning in POST).
2. Ran make -j4 in the latest kernel sources.
3. Ran startx, launched Firefox, and surfed some websites.
Result: the motherboard died in 15 minutes, probably in the power-supply circuits.
Since I had bought it only a few days before, I had it replaced under warranty the next day.
Distribution: Void, Linux From Scratch, Slackware64
Posts: 3,150
On my old Mac I had a script to make a bootable backup on an external FireWire drive; the script deleted the old backup first. That worked until the day I got the source and destination drives mixed up and deleted my system.
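A guard along these lines might have caught the swapped drives. The marker-file name and the function are hypothetical, not from the actual script: the idea is to refuse to delete anything unless the destination carries a marker that only the real backup drive has.
Code:

```shell
# Hypothetical guard, a sketch only: before wiping the old backup,
# require a marker file that exists only on the genuine backup drive.
# Swapped source/destination then fails loudly instead of silently
# deleting the system disk.
wipe_old_backup() {
    dest="$1"
    if [ ! -e "$dest/.backup-drive" ]; then
        echo "refusing: $dest does not look like the backup drive" >&2
        return 1
    fi
    rm -rf "$dest/old-backup"
}
```
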
While still learning Linux I did 'rm -r /media/*' and wiped a 1 TB drive.
Thought I would encrypt my drive after that, then forgot the password; bang went another 1 TB of data.
I now keep at least two full backups, on different drives in different locations, plus some stuff in the cloud and on GitHub. I am definitely a 'belt and braces' man now.
Wrote a CGI script to detect Apache vulnerability probes and block the attacking IP address. It worked fine, until I tested it from localhost and locked myself out of my own desktop. (But SSH from a remote worked like a charm, and let me in to fix it.)
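The localhost lockout suggests an allow-list check before blocking. This is a sketch under the assumption that the blocker shells out to iptables, which the post doesn't actually say; the function name is made up:
Code:

```shell
# Sketch only, not the poster's script: never add loopback (or your
# own management address) to the block list, whatever the probe
# looked like. Everything else gets dropped at the firewall.
block_ip() {
    case "$1" in
        127.*|::1)
            echo "block_ip: skipping loopback $1" >&2
            return 0 ;;
    esac
    iptables -I INPUT -s "$1" -j DROP    # assumes an iptables firewall
}
```
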
This wasn't "my" computer, but late '70s in a basement facility of a major financial institution in a major American city...
I and another engineer were bringing some large infrastructure online overnight, on a weekend. Not everything in the room was "ours".
Some of our equipment included half a megawatt of parallel UPSs powering our computer center on an upper floor of the building. Next to one of the UPS cabinets was a large enclosure with a single small red light and a button labelled simply, "Test". During a coffee break while planning our next tasks, we discussed what the "Test" button was for, so naturally one of us (which one will never be known with certainty)... pushed the button...
BANG! Lights out, except for the UPSs... as we discussed what to do next a couple of guards entered the room asking what we had done... "nothing", we implied. Their biggest concern was a call from the equally major airport wondering why the clearance lights had gone out on the tallest two buildings in the city...
After they left in a panic, by flashlight we opened the enclosure and found it was distribution control for an entire city block or two, the button had been a ground fault test on something like a 5,000 amp high-voltage switch... we figured out how to re-energize it and decided no guts, no glory and set it in motion... the motor drives whirred and clicked and - BANG! The lights were back on.
The guards returned later, along with others, again asking what we had done... nothing at all... through the night there were a lot of people and lots of activity working to figure out what had happened, and to restore unknown numbers of systems affected by the outage.
It turned out that our new facility, powered by those beautiful, new UPSs was the only thing that had not gone down!
How about stupid things that your IT Department does?
Not me but our farmed out IT department at work:
I call IT to report problem "X". IT writes a ticket. Then the next day I get a phone call from IT telling me that there is a user who has problem "X" and can I contact that user and help them. I tell them "that user was me!". They respond "Oh. Well then is it ok for me to close the ticket?" Arrrgh! This happens several times a year.
Not really Linux, nor my computer, but very recently I got a call through our ticket system for a decommission. All fine, I was thinking, although a thought did cross my mind: why did they want the only server that hosts our library software gone? (Maybe they built a new one, I thought.) But it wasn't really my problem; I just do as I'm told, so I went through with the decommission. About ten minutes later lots of calls came through: no books could be checked in or out, the library software wasn't working, and so on. In the space of ten minutes I had gone through the complete decommission:
1: Removed from NAGIOS
2: Powered off Server
3: Pulled Server
4: HDD's pulled and crushed
5: DNS removed
Although there is also a plus: since I deal with backups, it gave me a chance to try restoring a physical server to a VM, a feature of our software I had never used. Lesson of the day: listen to your brain and question things. Now to attend my MIR (Major Incident Review) tomorrow.
I also once pulled a few cables on our EMC equipment... that was also a fun day; no storage for a while.
Distribution: Debian Sid AMD64, Raspbian Wheezy, various VMs
Posts: 7,680
I'll share this even though it wasn't my computer just something silly I did with a computer:
In a past job I was responsible for creating and decommissioning IT accounts on the Windows domain. I was emailed Word documents with the account details and the date the person started, left, transferred or went on maternity leave. This was my first real job in IT* so I was keen as mustard, but my boss was a tall blond cockney with a short temper, so I always felt rushed.
So, one day I received a deletion request for a guy called Paul*, whom I did ask for assistance from time to time but didn't really know. It was dated a few weeks in the future, so I printed it, put it in its place in the pile and forgot about it.
A week or so later, having just rushed to complete the creation of half a dozen accounts for the Hong Kong office (needed by some time the previous night), I came across the form again. Eager to get on, I disabled the domain account, deleted the mailbox, logged the ticket and went on to the next.
About two minutes later, whilst I was working out the best time to transfer an account from Singapore to Manchester, I heard a commotion nearby in the office. It turned out that I had processed the form a month too early and Paul had just lost his email and domain access.
I had to apologise to Paul then taped backups and the like were used, permissions reinstated, mailbox reconstructed and all was well.
Three weeks later I did the same thing again!
Luckily Paul saw the funny side and when he returned to the firm he bore me no ill will and was always great to work with.
Apologies for typo's -- still need to set up spoil chocker for Android. edit: Apparently Android thinks it is beneath spell checkers and the like because everybody must use autocorrect or else!
*first and last, as it turned out but that's not relevant.
We were heroes!
HA HA HA HA HA HA HA HA !!!! Excellent !!!! You made my day !!!!