I just went through a similar problem that seems to have the same solution. I was wondering if anyone has more detail on what that parameter in W7 actually does.
Microsoft pages I read while researching the issue (but now can't find again) say something like: the default value of 1 optimizes the system for local use and 3 optimizes for use as a server.
My W7 system will be used almost exclusively as a local workstation and only occasionally have intensive use of a share by Windows or Linux computers on the LAN. I don't want to leave a setting that will reduce performance as a workstation.
This system has 8 cores and 12 GB of RAM. In prior versions of Windows, there were settings that tuned for better "server" performance that were described as giving worse "workstation" performance, but actually only gave worse performance on systems that were seriously short of physical RAM. If this setting in W7 is like that, I don't need to worry about it, because this system has enough physical RAM. But I would have thought that in 64-bit Windows 7, Microsoft would no longer be expecting most users to have too little RAM.
I have a large project directory with tens of thousands of source files, plus a complicated combination of shell script and build system that compiles it all with any one of a large selection of tool sets (combinations of compiler and target architecture). The intermediate files and binaries are generated into subdirectories unique to each tool set, so the project can be built any combination of different ways, coexisting in one directory tree from one set of source code.
Most of the tool sets run on the local Windows system. But I occasionally mount that entire Windows directory tree (via SMB) into Linux (running elsewhere in the LAN) and build with a tool set on Linux, but with all the build scripts, source files, intermediate files and final binaries kept within that directory on Windows.
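For reference, the Linux-side mount looks roughly like this. This is a sketch, not my exact command; the host name, share name, user, and mount point are placeholders:

```shell
# Mount the Windows project share over SMB/CIFS.
# //winbox/projects, /mnt/projects, and builduser are placeholder names.
sudo mount -t cifs //winbox/projects /mnt/projects \
    -o user=builduser,uid=$(id -u),gid=$(id -g)
```

The uid/gid options map the files to the local Linux build user so the build scripts can write intermediate files back to the share.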
After switching from a much less powerful computer running XP64 to the current computer running W7, that Linux-side build stopped working. The script started properly and ran about a thousand compiles, then failed with
/bin/bash: ./makesim.sh: Cannot allocate memory
On subsequent restarts, it made significantly less progress each time before hitting that error, until eventually it was making no progress at all.
From that same time, dmesg on the Linux machine showed many errors like
kernel: CIFS VFS: Unexpected lookup error -12
(-12 is the kernel's -ENOMEM, which matches the "Cannot allocate memory" error from bash.)
After a lot of research, I changed the registry value
"HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters\Size" to 3, then restarted the "Server" service.
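For anyone else hitting this, the change can be made from an elevated command prompt roughly as follows (a sketch of what I did via regedit and services.msc, not a command sequence I ran verbatim):

```shell
:: Set the LanmanServer "Size" value to 3 (run from an elevated prompt)
reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters" /v Size /t REG_DWORD /d 3 /f

:: Restart the Server service so the new value takes effect
net stop server
net start server
```

Stopping the Server service will briefly disconnect anyone using the machine's shares, so on a workstation it's easiest to do this when nothing is mounted remotely.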
The problem seems to be fixed. I ran a build session that averaged three times as much simultaneous file I/O as the original failing session and even that works.
So my main question is whether to set this registry setting back to 1 for the likely multi-week period before the next time I need to mount that directory on Linux, or whether it is OK to just leave it set to 3 all the time.