Old 01-12-2014, 03:21 PM   #1
johnsfine
LQ Guru
 
Registered: Dec 2007
Distribution: Centos
Posts: 5,286

Rep: Reputation: 1197
Windows share capacity problem


Is there some practical way to limit file opens on a mount.cifs share, to avoid problems on the Windows side?

I need to do frequent builds on a Linux system of a massive codebase with all source code on a Windows share.

I use a build tool that runs many compilers in parallel (the Linux system has two 6-core hyperthreaded CPUs, so running 24-way parallel compiles is faster than running fewer). But at the start of each major block of compiles, several opens of .h files fail.

I'm not 100% certain, but pretty sure the Windows system limits the number of file open requests it will queue. So if a burst of open requests arrives at once (more requests coming in before earlier ones have finished opening), it fails the requests.

Once 24 worker threads are busy compiling on the Linux side, new compiles start out of sync with each other and all the file opens succeed. But at the start, and at a few choke points in the build process, 24 compiles start at once, all of them opening include files at the same moment, and several file opens fail.

Inside the cifs code on the Linux side, the number of outstanding requests to the Windows side must be known (how many requests have been sent but not yet replied to). Is there a way to limit the number of outstanding requests, i.e. delay sending the next request until some previous request has been answered?

Or any other suggestions?

As things stand, it is faster to run the whole build 24-way parallel with a few parts failing and then rerun to redo just the parts that failed, compared to running less parallel to begin with. But it would be better to get 24-way parallel to work right from the start.
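One knob that looks relevant, though I have not confirmed it on this kernel, is the cifs module parameter cifs_max_pending, which caps how many requests the client will keep outstanding to the server at once. A minimal sketch, assuming the parameter exists on your kernel and is read-only at runtime (the value 16 and the mount point are only placeholders):

Code:
# check whether the running cifs module exposes the parameter
cat /sys/module/cifs/parameters/cifs_max_pending

# on many kernels it can only be set at module load time
# (16 is just an example value; pick something below the Windows-side limit)
echo "options cifs cifs_max_pending=16" > /etc/modprobe.d/cifs-maxpending.conf

# unmount the share, reload the module, remount (path is a placeholder)
umount /mnt/winshare
modprobe -r cifs && modprobe cifs
mount /mnt/winshare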
 
Old 01-12-2014, 04:52 PM   #2
lleb
Senior Member
 
Registered: Dec 2005
Location: Florida
Distribution: CentOS/Fedora/Pop!_OS
Posts: 2,983

Rep: Reputation: 551
what MS OS are you running that is holding the data, and why has the data not been moved to a Linux NFS server?
 
Old 01-12-2014, 05:23 PM   #3
docbop
LQ Newbie
 
Registered: Sep 2009
Location: Woodshed, CA
Distribution: Debian
Posts: 11

Rep: Reputation: 0
Is this a Windows server or a user edition? Some dev libraries also have limits. From what I'm seeing, Windows early on had a limit of 512 open files, but on current Windows Server the number is 16,383. I wonder whether, instead of the number of open files, the problem is that trying to open so many files at once creates an I/O bottleneck that causes the opens to time out.
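A rough way to probe that from the Linux side (purely illustrative; the path, the file count, and the 24-way parallelism are placeholders) is to fire a burst of parallel opens at the share and see whether any fail:

Code:
# open ~200 headers on the share, 24 at a time, and report any failed opens
cd /mnt/winshare/src     # placeholder path
find . -name '*.h' | head -n 200 | \
    xargs -P 24 -n 1 sh -c 'head -c 1 "$0" > /dev/null || echo "open failed: $0"'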
 
Old 01-12-2014, 05:29 PM   #4
johnsfine
LQ Guru
 
Registered: Dec 2007
Distribution: Centos
Posts: 5,286

Original Poster
Rep: Reputation: 1197
Windows 7 Desktop system.

I have some pretty good reasons to keep the codebase on the Windows system. But I would prefer not to discuss that aspect.

I was really hoping for some way to limit the Linux side from queuing up too many requests at once.

I don't really understand the failure, but I don't think it relates to the number of files open at the same time. It seems to relate to the number of files for which an open has been requested but which are not yet open.

I suspect there is an anti-virus aspect to the problem. I am required to have really vicious anti-virus software running and that causes most aspects of the Windows file I/O to be single threaded and quite slow.
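In the meantime, a crude workaround, and this is only a sketch rather than something I have in place, would be a small wrapper that retries a compile when it fails, so the transient failed opens do not force a manual rerun of the whole build (gcc here just stands in for whatever compiler the build tool invokes):

Code:
#!/bin/sh
# retry the real compiler up to 3 times; point the build tool's CC/CXX here
tries=0
until gcc "$@"; do
    tries=$((tries + 1))
    if [ "$tries" -ge 3 ]; then
        exit 1
    fi
    sleep "$tries"    # small back-off before retrying
done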

Last edited by johnsfine; 01-12-2014 at 05:33 PM.
 
Old 01-12-2014, 11:54 PM   #5
lleb
Senior Member
 
Registered: Dec 2005
Location: Florida
Distribution: CentOS/Fedora/Pop!_OS
Posts: 2,983

Rep: Reputation: 551
Win7 what version? Again, please be specific. The issue could be Win7 itself: if it is less than Pro you are fubar and there will be nothing you can do. If it is Win7 Pro or Ultimate, you might be able to manipulate something on the Win7 box.

If you have AV that is blocking access to files, there is nothing you can do other than adjust/exclude those files from your AV for these types of processes. Again, you are best off sharing files for Linux via a Linux server, not a hodgepodge broken OS like Microsoft's.
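For reference, if the tree ever can live on the Linux box, the NFS export itself is only a couple of lines (the path and client network below are just examples):

Code:
# /etc/exports entry on the Linux server
/srv/codebase   192.168.1.0/24(rw,sync,no_subtree_check)

# then reload the export table
exportfs -ra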
 