Linux - General: This Linux forum is for general Linux questions and discussion.
If it is Linux Related and doesn't seem to fit in any other forum then this is the place.
Does anybody know if there exists a "free to use" grid/distributed computing network? As in, a cluster of volunteers, like the ones who contribute to something like seti@home, but one that is free to use by any of the people who contribute to it? If not, would there be any way to create a networked grid of computers, with thousands of nodes all across the internet, where each user runs a server on their computer that donates spare processor cycles, memory, storage space, etc.? The nodes would all be clustered together in some useful fashion, and any contributor could log onto the network in a unix/linux/whatever environment and run processor-, memory-, or storage-intensive programs, with each user's access to the network's processors scheduled against however much time they put into the project.
I was just wondering if this has ever been done before, or if it's even feasible, or whether bandwidth bottlenecks, leeches, and the like would clobber it to the point where the end user wouldn't get any gain out of being on the network. Also, if it has been done, could somebody point me in the direction of a project of this nature?
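The "scheduled against however much time they put in" idea could be sketched as a simple credit ledger: nodes earn credit for CPU time they donate and spend it when they submit jobs, which also starves out the leeches the poster worries about. This is only an illustrative sketch; the class and method names are invented here, not from any real project.

```python
# Hypothetical credit-ledger sketch of the contribution-based scheduling
# described above. All names are invented for illustration.

class CreditLedger:
    def __init__(self):
        self.credits = {}  # node id -> earned-but-unspent CPU-hours

    def record_contribution(self, node, cpu_hours):
        """Credit a node for cycles it donated to the grid."""
        self.credits[node] = self.credits.get(node, 0.0) + cpu_hours

    def try_schedule(self, node, cpu_hours_needed):
        """Admit a job only if the submitter has earned enough credit.
        A node with zero credit (a leech) gets nothing scheduled."""
        if self.credits.get(node, 0.0) >= cpu_hours_needed:
            self.credits[node] -= cpu_hours_needed
            return True
        return False

ledger = CreditLedger()
ledger.record_contribution("alice", 10.0)
print(ledger.try_schedule("alice", 4.0))   # True: 10 hours banked, 4 needed
print(ledger.try_schedule("alice", 7.0))   # False: only 6 hours left
```

The hard part in practice isn't this bookkeeping; it's trusting the contribution reports, as the reply below points out.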
It's like driving with your feet. It's possible but probably dumb.
Have a think about why a lot of p2p networks are such viral minefields, and why seti@home keeps having people publish fraudulent clients. It will start to dawn on you that these types of projects are not only complicated, but also plagued with the problems that come from involving a lot of people with very different levels of computing expertise.
Distributed computing is a pretty complicated area that large groups of people invest a lot of money in. I've looked around to see if there is anything like an open-source DC project of this kind, and there isn't. I'm pretty sure I understand why.