In general, viruses, ransomware, malware and all the other crapware affect only Windows -- the user clicks on something nasty and the code is downloaded and installed with permission to run.
Unix/Linux systems prevent ordinary users from elevating software to system-level (root) permissions, and that protects both the user and the system from the bad stuff.
Software written for Windows will not run natively on a Unix/Linux system in any event; the two platforms are completely incompatible with each other (you can't run Linux software on Windows, and you can't run Windows software on Linux).
But the main thing is that if you can't be root, you can't do damage; that's why you don't log in and work as root.
This is not to say that you don't pay attention -- you need good passwords (never dictionary words, for example), you need a firewall to keep out bad actors, and all the other details of reasonable security.
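For the firewall part, a minimal sketch using ufw (assuming it's installed; many distros ship other iptables/nftables front-ends instead):
Code:
# Drop all unsolicited inbound traffic, allow outbound, then enable.
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw enable
# Verify what is being enforced.
sudo ufw status verbose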
You can have a ransomware problem on Linux: a Samba share with write access for Windows computers can be affected. Keep your share read-only for Windows clients to protect it from ransomware.
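A minimal sketch of such a read-only share in smb.conf (the share name and path are examples):
Code:
[data]
    ; Windows clients can read but never modify this share
    path = /srv/samba/data
    read only = yes
    guest ok = no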
Quote:
Originally Posted by Keruskerfuerst
Make a backup regularly.
This! If you are affected by ransomware then your backup strategy is wrong.
I was talking to somebody yesterday who was hit by ransomware because, sadly, he thought that a backup to a network share was safe. Be creative with backups.
Also, use something like uBlock (like AdBlock Plus but not sponsored by advertisers) and NoScript. A lot of these things are spread by malicious adverts served through ad networks on otherwise trusted websites, as well as by web forms on badly maintained blogs and the like.
Don't install software from random sources -- it's not as necessary as on Windows, thanks to the repositories, but some people still do it.
Then, yes, it's less likely that malware will be targeted at Linux, so we are a little less likely to have issues with this kind of thing.
Quote:
I was talking to somebody yesterday who was hit by ransomware because, sadly, he thought that a backup to a network share was safe. Be creative with backups.
You don't need to be 'creative', you only need to do it the right way.
A backup that is reachable as-is by a regular user is not a backup, as you have stated. A backup should be on a different device or medium, with no direct access (requiring a login or, better, kept off-line). As long as one can access the data on another disk or across the network (including with SSH keys), nothing is really in a safe place.
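One way to arrange that, sketched below: have the backup machine pull from the workstation over SSH, so the workstation holds no credentials for the backup host (hostnames, user and paths are examples):
Code:
# Run ON the backup host. Ransomware on the workstation cannot
# reach these copies because the trust only runs one way.
rsync -a --delete backupuser@workstation:/home/ /srv/backups/workstation/home/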
Quote:
Originally Posted by Ellendhel
You don't need to be 'creative', you only need to do it the right way.
A backup that is reachable as-is by a regular user is not a backup, as you have stated. A backup should be on a different device or medium, with no direct access (requiring a login or, better, kept off-line). As long as one can access the data on another disk or across the network (including with SSH keys), nothing is really in a safe place.
I would say that, for most people, using SSH for file transfer is a little "creative" -- it's obvious to people who've messed with Linux, but it's not something that would immediately spring to mind. There's also the issue that if backups aren't automated they're easy to forget, and if they are automated they could end up backing up files already encrypted by ransomware. The problem the guy I was talking to had, and I think many people have this, is that he didn't make offline backups often enough. With his data he didn't actually have much of an excuse, but for people who generate data as they work, keeping offline backups up to date requires some thought.
Personally, I'm lucky in that I generate very little data that isn't already somewhere else (for example email, which remains on the server for a while), so I just make online, nearline and offline backups every so often when I realise data has changed. If I started to produce data more regularly (writing code, creating game maps and the like), I'd have to change my strategy though.
Ransomware encrypts everything it has read/write access to and then deletes everything it has write access to.
What kind of access does it have? That depends on which user account was breached.
It's easy to test. From the command line, run something like this:
Code:
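# List every path exampleuser can write to; anything shown here
# is fair game for ransomware running as that user.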
sudo -u exampleuser find / -writable 2> /dev/null | less
If you see your backups in there, you have a problem.
Unfortunately, the list will also include any web-based PHP backups.
Having your backup on a different device doesn't, by itself, make it secure, nor does keeping it on the same device make it insecure. What matters is that the breached user does not have write access to the backup. After all, ransomware running on your computer implies the possibility of a backdoor being set up for an attacker to log in, letting them examine your ~/.ssh/config and ~/.ssh/known_hosts files (you hash that file, right?) to see what else they can reach.
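Hashing known_hosts, for reference, is one command:
Code:
# Replace plain-text hostnames in known_hosts with hashes;
# ssh-keygen keeps the original as ~/.ssh/known_hosts.old.
ssh-keygen -H -f ~/.ssh/known_hosts
rm ~/.ssh/known_hosts.old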
There are many web pages that describe in detail how to create rsync-based backups similar to Apple's venerable Time Machine. There are also plenty of open-source projects.
The most important feature of any such system is that the backup directory is owned by a special user-id that can't be logged on to. Only this user has read/write access to the directory in which the backups are maintained.
The backups should run very frequently ... at least once an hour.
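A minimal sketch of such a scheme using rsync's --link-dest (the paths and the dedicated backup user are examples, and that user is assumed to have read access to the source; run it hourly from that user's crontab, e.g. "0 * * * * /usr/local/bin/snapshot.sh"):
Code:
#!/bin/sh
# Time-Machine-style snapshots: unchanged files are hard-linked
# against the previous run, so every snapshot is browsable in
# full but only changed files consume extra space.
SRC=/home/
DEST=/srv/backups
NEW="$DEST/$(date +%Y-%m-%dT%H%M)"
# On the very first run 'latest' won't exist yet; rsync warns
# and simply copies everything.
rsync -a --delete --link-dest="$DEST/latest" "$SRC" "$NEW"
ln -sfn "$NEW" "$DEST/latest"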
On my Macintosh, I'm always surprised at just how much information Time Machine selects to back up ... that is, when I actually bother to look. The point is simply that "it just works, and it's always working." Backups are always very-current and very-available with no attention whatsoever from me.
Actually, ransomware that affects Linux computers has existed for two years or so. For Mac too. And they're quite advanced, as far as I've heard.
@Emerson - suggesting read-only shares is like suggesting using your legs for typing. It's almost a cliché, but most shares need write access. And while we're at it, Windows' shadow copy system is pretty good protection against this. I suppose snapshots could be a similar solution on Linux.
And the best backup would probably entail physically unplugging the device after the backup is finished. For instance, you could have two external HDDs that are swapped at regular intervals.
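A sketch of that rotation, assuming the disks carry filesystem labels like backup-a and backup-b and the commands run as root:
Code:
# Mount whichever rotation disk is currently plugged in,
# copy, then unmount so nothing can touch it afterwards.
mount LABEL=backup-a /mnt/backup
rsync -a --delete /home/ /mnt/backup/home/
umount /mnt/backup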
And some things can run in your home folder and mess up whatever YOUR normal user can read/write to,
but for the most part system files are safe --
unlike on MS Windows, where they are mostly UNSAFE.
Actually, most things are still denied execute permission... downloaded files are never marked executable. Shell scripts can still get executed indirectly (you have to invoke bash with the shell script as its input), but then you have to have access to the fork/exec system calls (usually covered by ASLR, but even that can be gotten around with a good bit of effort).
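A quick illustration of that difference (the filename is an example):
Code:
$ ./downloaded.sh        # fails: the file isn't marked executable
bash: ./downloaded.sh: Permission denied
$ bash downloaded.sh     # runs anyway: bash just reads it as input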
Quote:
Originally Posted by vincix
Actually, ransomware that affects Linux computers has existed for two years or so. For Mac too. And they're quite advanced, as far as I've heard.
For the most part they appear to affect servers rather than desktop machines, exploiting holes in web hosting platforms and the like.
I suppose that shows another way Linux users can protect themselves against attacks -- don't open any ports to the internet unless you know exactly what you're doing. That could be as simple as just not hosting a web server on your desktop machine, or more complex, depending upon your network setup.
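Checking what's actually exposed takes one command:
Code:
# Show every listening TCP/UDP socket with the owning process.
sudo ss -tulnp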
Quote:
For the most part they appear to affect servers rather than desktop machines, exploiting holes in web hosting platforms and the like.
I suppose that shows another way Linux users can protect themselves against attacks -- don't open any ports to the internet unless you know exactly what you're doing. That could be as simple as just not hosting a web server on your desktop machine, or more complex, depending upon your network setup.
These require shell-level access to do anything -- and are relatively easy to recover from with decent management (content management systems), as the shell access gained should not be able to reach backups of the data. One place I worked could recover from such an attack by disabling the virtual machine and recreating it from scratch, plus the CMS, in about 15 minutes from the time the attack was identified.
The disabled VM could then be used for forensics.
The CMS data was never built from the server itself, but from isolated test servers used to stage data from an isolated development server (using a different-level CMS database). Only the development server created the data... it then used the CMS to create the test server, and only after management approval was the data from the test server used to update the production CMS, which then passed the updates to the public servers.
The entire flow was designed to block reverse propagation.
Quote:
The CMS data was never built from the server itself, but from isolated test servers used to stage data from an isolated development server (using a different-level CMS database). Only the development server created the data... it then used the CMS to create the test server, and only after management approval was the data from the test server used to update the production CMS, which then passed the updates to the public servers.
The entire flow was designed to block reverse propagation.
While mirroring/regenerating is an excellent method for static sites, for more dynamic ones (blogs*, forums) I have to, at some point, accept changes made by external (hopefully verified) users into a backup. Because of this, I'd almost prefer ransomware, because it's so obvious. Otherwise some worm could crawl in, lie dormant for months until my backups have fully integrated it, and then strike.
* There are excellent static site generators, such as Pelican, which make this approach workable for blogs.
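One hedge against that scenario, sketched here: keep dated snapshots going back far enough that a copy from before the infection still exists (the directory layout follows the earlier snapshot script; the 180-day window is an example):
Code:
# Prune only snapshots older than ~6 months; everything newer
# stays, so a copy predating a dormant worm survives.
find /srv/backups -maxdepth 1 -type d -name '20*' -mtime +180 -exec rm -rf {} +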