Linux - Networking
This forum is for any issue related to networks or networking. Routing, network cards, OSI, etc. Anything is fair game.
01-12-2014, 03:21 PM | #1
LQ Guru
Registered: Dec 2007
Distribution: Centos
Posts: 5,286
Windows share capacity problem
Is there some practical way to limit file opens inside a mount.cifs mount to avoid problems on the Windows side?
I need to do frequent builds on a Linux system of a massive codebase with all source code on a Windows share.
I use a build tool that runs many compilers in parallel (the Linux system has two 6-core CPUs, hyperthreaded, so running 24-way parallel compiles is faster than running fewer). But at the start of each major block of compiles, several opens of .h files fail.
I'm not 100% certain, but I'm pretty sure the Windows system limits the number of file open requests it will queue. So if a burst of open requests arrives at once (more requests coming in before the last has finished opening), it fails the requests.
Once 24 worker threads are busy compiling on the Linux side, new compiles start out of sync and all the file opens succeed. But at the start, and at a few choke points in the build process, 24 compiles start at once, all of them open include files at the same moment, and several file opens fail.
Inside the cifs code on the Linux side, the number of outstanding requests to the Windows side must be known (how many requests have been sent but not yet replied to). Is there a way to limit the number of outstanding requests: delay sending the next request until some previous request has been answered?
Or any other suggestions?
As things stand, it is faster to run the whole build 24-way parallel with a few parts failing and then rerun just the parts that failed than to run with less parallelism to begin with. But it would be better to get 24-way parallelism working right from the start.
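If the failures really do come from all 24 compiles hitting the share in the same instant, one workaround that does not touch cifs at all might be to stagger compiler start-up so the initial bursts of .h opens are spread out. Below is a minimal sketch in C of a hypothetical wrapper; the lock-file path and the 200 ms delay are arbitrary assumptions, and it only helps if the build tool lets you substitute a wrapper for the compiler command.
Code:
/* stagger-cc.c: hypothetical wrapper that spaces out compiler start-up so
 * that 24 parallel compiles do not all hit the CIFS share at the same instant.
 * Build:  gcc stagger-cc.c -o stagger-cc
 * Usage:  stagger-cc gcc -c foo.c -o foo.o
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/file.h>
#include <sys/wait.h>
#include <unistd.h>

#define LOCKFILE "/tmp/stagger-cc.lock"   /* shared by all wrapper instances (assumed path) */

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s compiler [args...]\n", argv[0]);
        return 2;
    }

    /* Only one compile may be in its start-up burst at a time. */
    int fd = open(LOCKFILE, O_CREAT | O_RDWR | O_CLOEXEC, 0666);
    if (fd >= 0)
        flock(fd, LOCK_EX);

    pid_t pid = fork();
    if (pid == 0) {
        execvp(argv[1], &argv[1]);        /* hand off to the real compiler */
        perror("execvp");
        _exit(127);
    }

    usleep(200 * 1000);                   /* let the child's first .h opens go out */
    if (fd >= 0) {
        flock(fd, LOCK_UN);               /* next compile may start its burst */
        close(fd);
    }

    if (pid < 0)
        return 1;

    int status = 0;
    waitpid(pid, &status, 0);             /* pass the compiler's exit status back */
    return WIFEXITED(status) ? WEXITSTATUS(status) : 1;
}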
01-12-2014, 04:52 PM | #2
Senior Member
Registered: Dec 2005
Location: Florida
Distribution: CentOS/Fedora/Pop!_OS
Posts: 2,983
What MS OS are you running that is holding the data, and why hasn't the data been moved to a Linux NFS server?
01-12-2014, 05:23 PM | #3
LQ Newbie
Registered: Sep 2009
Location: Woodshed, CA
Distribution: Debian
Posts: 11
Is this a Windows server or a desktop edition? Some dev libraries have limits of their own, too. From what I'm seeing, early versions of Windows had a limit of 512 open files, but on current Windows Server the number is 16,383. I wonder whether, instead of hitting a limit on the number of open files, trying to open so many files at once is creating an I/O bottleneck that causes the opens to time out.
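One way to tell those two explanations apart would be to fire a burst of simultaneous opens at the mounted share and see how many fail and with which error. Below is a rough probe in C; the directory on the share and the thread count are passed as arguments, and nothing here is specific to cifs.
Code:
/* open-burst.c: open N files on the share at (roughly) the same instant and
 * report which opens fail. Leaves burst-*.tmp files behind; delete afterwards.
 * Build:  gcc -pthread open-burst.c -o open-burst
 * Usage:  ./open-burst /mnt/share/testdir 24
 */
#include <errno.h>
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

static pthread_barrier_t barrier;
static const char *dir;

static void *worker(void *arg)
{
    long id = (long)arg;
    char path[4096];

    snprintf(path, sizeof(path), "%s/burst-%ld.tmp", dir, id);
    pthread_barrier_wait(&barrier);            /* all threads open at once */

    int fd = open(path, O_CREAT | O_WRONLY, 0644);
    if (fd < 0)
        fprintf(stderr, "open %s failed: %s\n", path, strerror(errno));
    else
        close(fd);
    return NULL;
}

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <dir-on-share> <nthreads>\n", argv[0]);
        return 2;
    }
    dir = argv[1];
    long n = atol(argv[2]);
    if (n < 1)
        return 2;

    pthread_t *t = calloc(n, sizeof(*t));
    pthread_barrier_init(&barrier, NULL, n);

    for (long i = 0; i < n; i++)
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (long i = 0; i < n; i++)
        pthread_join(t[i], NULL);

    pthread_barrier_destroy(&barrier);
    free(t);
    return 0;
}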
01-12-2014, 05:29 PM | #4
LQ Guru
Registered: Dec 2007
Distribution: Centos
Posts: 5,286
Original Poster
Windows 7 Desktop system.
I have some pretty good reasons to keep the codebase on the Windows system. But I would prefer not to discuss that aspect.
I was really hoping for some way to limit the Linux side from queuing up too many requests at once.
I don't really understand the failure, but I don't think it relates to the number of files open at the same time. It seems to relate to the number of files for which an open has been requested but which are not yet open.
I suspect there is an anti-virus aspect to the problem. I am required to have really vicious anti-virus software running, and that causes most Windows file I/O to be single-threaded and quite slow.
Last edited by johnsfine; 01-12-2014 at 05:33 PM.
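If the goal is specifically to cap how many opens the Linux side has in flight at once, and the opens happen inside the compiler processes themselves, one approach, sketched below, would be an LD_PRELOAD shim that gates open() through a POSIX named semaphore shared by all compile processes. This is only a rough, untested sketch: the semaphore name and the cap of 8 are assumptions, and a toolchain built with large-file support may call open64() or openat() instead of open(), which would need the same treatment.
Code:
/* open-throttle.c: rough sketch of an LD_PRELOAD shim that limits how many
 * open() calls, across all preloaded processes, can be in flight at once.
 * Build:  gcc -shared -fPIC open-throttle.c -o open-throttle.so -ldl -lpthread
 * Usage:  LD_PRELOAD=/path/open-throttle.so <your build command>
 */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <errno.h>
#include <fcntl.h>
#include <semaphore.h>
#include <stdarg.h>
#include <sys/stat.h>

#define SEM_NAME  "/cifs-open-throttle"   /* assumed name; sem_unlink() it to reset */
#define MAX_OPENS 8                       /* assumed cap on concurrent opens */

typedef int (*open_fn)(const char *, int, ...);
static open_fn real_open;
static sem_t *sem = SEM_FAILED;

static __attribute__((constructor)) void init(void)
{
    real_open = (open_fn)dlsym(RTLD_NEXT, "open");
    /* Created with MAX_OPENS slots the first time; later processes attach. */
    sem = sem_open(SEM_NAME, O_CREAT, 0644, MAX_OPENS);
}

int open(const char *path, int flags, ...)
{
    mode_t mode = 0;
    if (flags & O_CREAT) {                /* mode argument exists only with O_CREAT */
        va_list ap;
        va_start(ap, flags);
        mode = va_arg(ap, mode_t);
        va_end(ap);
    }

    if (!real_open)                       /* in case we run before the constructor */
        real_open = (open_fn)dlsym(RTLD_NEXT, "open");

    if (sem != SEM_FAILED)
        sem_wait(sem);                    /* at most MAX_OPENS opens in flight */

    int fd = real_open(path, flags, mode);
    int saved = errno;                    /* keep the real open()'s errno */

    if (sem != SEM_FAILED)
        sem_post(sem);

    errno = saved;
    return fd;
}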
01-12-2014, 11:54 PM | #5
Senior Member
Registered: Dec 2005
Location: Florida
Distribution: CentOS/Fedora/Pop!_OS
Posts: 2,983
Win7 what version? Again, please be specific. The issue could be Win7 itself: if it is less than Pro you are fubar and there will be nothing you can do. If it is Win7 Pro or Ultimate, you might be able to manipulate something on the Win7 box.
If you have AV that is blocking access to files, there is nothing you can do other than adjusting/excluding those files from your AV for these types of processes. Again, you are best off running/sharing Linux files via a Linux server, not a hodgepodge broken OS like Microsoft's.