Linux - Security
Does anybody know of a script or tool to scan a specific HTTPS page for non-secure content? It should be callable from the command line, and recursive support would be a plus.
Before our site goes to production, we want to verify that it contains no non-secure content when served over HTTPS. The script/tool will be run by our monitoring system (Sensu), which will execute it, say, every minute. Something like the Nagios check_http plugin, for example.
A Nagios plugin for this doesn't exist as far as I know. I'd use a spider; you could even run 'wget' recursively, then grep the stored files for any 'http://' strings.
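Something along these lines might do as a first cut. This is only a sketch: the URL, recursion depth, and scratch directory are placeholders, and the exit codes follow the usual Nagios/Sensu plugin convention (0 = OK, 2 = CRITICAL):

    #!/bin/sh
    # Mirror the site (placeholder URL and depth), then grep the saved
    # files for plain-http references.
    SCRATCH=/tmp/sitescan
    rm -rf "$SCRATCH"
    wget --quiet --recursive --level=5 --page-requisites --no-parent \
         --directory-prefix="$SCRATCH" https://www.example.com/

    if grep -rq "http://" "$SCRATCH"; then
        echo "CRITICAL: insecure (http://) references found"
        exit 2
    fi
    echo "OK: no insecure references found"
    exit 0

Sensu can then run this on whatever interval you configure, the same way it would run any Nagios plugin.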
Don't browsers warn about potential issues on an HTTPS page?
Yes, they do. But I need an automatic way to check the site that reports back to me when it contains non-secure content. I don't want to open a browser 100 times a day, and with recursive pages it would be a nightmare.
That wget approach looks like the best solution right now, using the --spider option.
Not to make it more complicated, but grepping for 'http://' isn't enough, because embedded content can be served (not that it should be) without the protocol being specified, for example an iframe with src='domain.com/some-content.html'.
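For what it's worth, you can widen the grep a little (the pattern is illustrative only). Protocol-relative URLs ('//host/...') inherit the page's scheme, so they are usually safe over HTTPS, but flagging them is a conservative choice; a truly scheme-less reference like the iframe example above really needs an HTML parser to catch reliably:

    # Flag explicit http:// and protocol-relative values in src/href
    # attributes of the mirrored files; this still misses scheme-less
    # references such as src='domain.com/some-content.html'.
    grep -rnE "(src|href)=[\"'](http://|//)" /tmp/sitescan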
You are right. Is it possible to start a firefox/chrome session with a URL as an argument and then somehow extract the console information about non-secure content?
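One possibility, assuming a reasonably recent Chrome/Chromium with headless support; the flags and the exact wording of the console messages may vary between versions, so treat this as a sketch:

    # Run headless, route Chrome's logging (which includes the console's
    # "Mixed Content:" warnings) to stderr, and grep for them. The URL
    # is a placeholder.
    google-chrome --headless --disable-gpu \
        --enable-logging=stderr --v=1 \
        https://www.example.com/ 2>&1 | grep -i "mixed content"

A non-empty result would indicate the browser reported insecure content on the page.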