LinuxQuestions.org
Go Back   LinuxQuestions.org > Forums > Linux Forums > Linux - General
Old 09-22-2014, 11:51 AM   #1
johnsfine
LQ Guru
 
Registered: Dec 2007
Distribution: Centos
Posts: 5,286

What limits number of threads per system or per user?


I think I checked the obvious limits and none were low enough to explain the symptom.

Under an automated QA process, several of the programs being tested failed with the message:
Code:
OMP: Error #35: System unable to allocate necessary resources for the monitor thread:
OMP: System error #11: Resource temporarily unavailable
OMP: Hint: Try decreasing the number of threads in use simultaneously.
I don't have an exact count, but I'm pretty sure that user had between 100 and 200 threads active across about 40 processes. The rest of the system was very lightly loaded.

ulimit -a shows that the user has a limit of 1024 "max user processes".

The system had 210 GB of RAM free and only 41 GB used (including buffers and cache), as well as 105 GB of swap free and less than 1 GB used. So this is an absurdly underloaded system, and memory limits should not be relevant.

What other loads or limits should I investigate to understand that error?
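For reference, the candidate limits can be enumerated in one place. A quick sketch (plain shell; nothing here is specific to the failing programs):

```shell
# Per-process / per-user resource limits for the current shell's user.
ulimit -a
# System-wide ceiling on the total number of tasks (threads) the kernel allows.
cat /proc/sys/kernel/threads-max
# Largest PID/TID value; this also bounds how many tasks can exist at once.
cat /proc/sys/kernel/pid_max
```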

Last edited by johnsfine; 09-22-2014 at 01:19 PM.
 
Old 09-22-2014, 12:01 PM   #2
smallpond
Senior Member
 
Registered: Feb 2011
Location: Massachusetts, USA
Distribution: CentOS 6 & 7
Posts: 3,412

I've hit the total open files limit (-n), which defaults to 1024.
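For what it's worth, RLIMIT_NOFILE (the -n limit) is enforced per process, not per user. A quick way to see both the enforced soft value and the hard ceiling:

```shell
# Soft limit: the value actually enforced when a process opens files.
ulimit -Sn
# Hard limit: the ceiling an unprivileged user may raise the soft limit to.
ulimit -Hn
```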
 
Old 09-22-2014, 12:04 PM   #3
metaschima
Senior Member
 
Registered: Dec 2013
Distribution: Slackware
Posts: 1,982

See:
Quote:
Another possible scenario that can lead to an error is if the virtual address space limit is set too low. For example, if the virtual address space limit is 1 GiB and the thread stack size limit is set to 512 MiB, then the OpenMP run-time would try to allocate 512 MiB for each additional thread. With two threads one would have 1 GiB for the stacks only, and when the space for code, shared libraries, heap, etc. is added up, the virtual memory size would grow beyond 1 GiB and an error would occur:

Set the virtual address space limit to 1 GiB and run with two additional threads with 512 MiB stacks (I have commented out the call to omp_set_num_threads()):
Code:
$ ulimit -v 1048576
$ KMP_STACKSIZE=512m OMP_NUM_THREADS=3 ./a.out
OMP: Error #34: System unable to allocate necessary resources for OMP thread:
OMP: System error #11: Resource temporarily unavailable
OMP: Hint: Try decreasing the value of OMP_NUM_THREADS.
forrtl: error (76): Abort trap signal
... trace omitted ...
zsh: abort (core dumped)  OMP_NUM_THREADS=3 KMP_STACKSIZE=512m ./a.out
In this case the OpenMP run-time library fails to create a new thread and notifies you before it aborts the program.
http://stackoverflow.com/questions/1...is-openmp-code
 
Old 09-22-2014, 01:18 PM   #4
johnsfine
LQ Guru
 
Registered: Dec 2007
Distribution: Centos
Posts: 5,286

Original Poster
Thank you for both suggestions, but I had already considered them.

1) Each process has few open files, and there are only about 40 of the processes. I never confirmed whether that 1024 open-file limit is per user or per process, but even if it were per user it would not be hit.

2) The virtual size is unlimited, and the same processes run perfectly well when there are fewer of them, so no per-process limit could be doing this. It must be a per-user or per-system limit, but I don't know WHAT limit.
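One per-user limit that does count threads is RLIMIT_NPROC: on Linux the "max user processes" value from ulimit -u is checked against the user's total number of tasks, i.e. threads, not just whole processes (see the setrlimit man page). A sketch that sums the user's thread count straight from /proc so it can be compared with ulimit -u:

```shell
#!/bin/sh
# Sum the "Threads:" field of /proc/<pid>/status for every process owned
# by the current user. This total is what RLIMIT_NPROC is checked against.
uid=$(id -u)
total=0
for status in /proc/[0-9]*/status; do
  # A process may exit between the glob and the read; ignore such races.
  owner=$(awk '/^Uid:/ {print $2; exit}' "$status" 2>/dev/null)
  [ "$owner" = "$uid" ] || continue
  t=$(awk '/^Threads:/ {print $2; exit}' "$status" 2>/dev/null)
  total=$((total + ${t:-0}))
done
echo "threads owned by uid $uid: $total"
echo "ulimit -u: $(ulimit -u)"
```

If that total is anywhere near the 1024 "max user processes" value, pthread_create would fail with exactly this EAGAIN ("Resource temporarily unavailable") inside the OpenMP runtime.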
 
Old 09-22-2014, 02:00 PM   #5
metaschima
Senior Member
 
Registered: Dec 2013
Distribution: Slackware
Posts: 1,982

The first thing I would try is increasing OMP_STACKSIZE.

I don't think this is a thread limit, but rather a stack or memory limit. Either way, I don't know of any per-user thread limit, but the system-wide maximum is at:
Code:
cat /proc/sys/kernel/threads-max
I have gotten this error before when putting too many things on the stack with OpenMP. pthreads is not as conservative/restrictive with stack size; the same program written both ways will seg fault with OpenMP but not with pthreads. Increasing OMP_STACKSIZE should fix it.
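For completeness, the variable takes a size with a k/m/g suffix and is set per invocation. The binary name and the 16m value below are placeholders, not recommendations:

```shell
# Per-thread stack size for OpenMP worker threads (k/m/g suffixes).
# "./my_omp_program" is a hypothetical binary name.
OMP_STACKSIZE=16m ./my_omp_program

# Intel's runtime also reads its own variable, which takes precedence
# over OMP_STACKSIZE when both are set:
KMP_STACKSIZE=16m ./my_omp_program
```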

Last edited by metaschima; 09-22-2014 at 02:02 PM.
 
Old 09-22-2014, 02:20 PM   #6
johnsfine
LQ Guru
 
Registered: Dec 2007
Distribution: Centos
Posts: 5,286

Original Poster
I'm pretty sure OMP_STACKSIZE is not the problem, because the processes run fine when there are fewer of them and only fail when about 40 processes (1 to 32 threads per process, but on average only a few threads per process) are running at the same time. Any virtual memory or stack-size problem would be completely internal to each process and unaffected by other processes.

It must be some per user or per system limit, not any per process limit.

I had already checked:
Code:
$ cat /proc/sys/kernel/threads-max 
4132206
That obviously is big enough.
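As a system-wide sanity check against that maximum, the kernel reports the current total number of scheduling entities (threads) in the fourth field of /proc/loadavg:

```shell
# Fourth field is "running/total": total counts every thread on the system.
awk '{print $4}' /proc/loadavg
# Just the total, split off after the slash:
awk '{split($4, a, "/"); print a[2]}' /proc/loadavg
```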
 
Old 09-22-2014, 03:19 PM   #7
metaschima
Senior Member
 
Registered: Dec 2013
Distribution: Slackware
Posts: 1,982

The only other things I can think of:

1) Check OMP_THREAD_LIMIT, though this is per program.
2) Check the RAM with memtest86.
3) cgroups would be the only other way to limit resources besides the system method, but you would need to activate and configure it for it to limit anything.
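On the cgroups point, it is easy to check whether any controllers are even mounted before digging further. A minimal sketch:

```shell
# List mounted cgroup hierarchies (empty output means none are mounted).
mount -t cgroup 2>/dev/null
# Show which cgroup(s) the current shell belongs to; on an unconfigured
# system every process typically sits in the root cgroup ("/").
cat /proc/self/cgroup 2>/dev/null
```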
 
Old 09-22-2014, 04:23 PM   #8
smallpond
Senior Member
 
Registered: Feb 2011
Location: Massachusetts, USA
Distribution: CentOS 6 & 7
Posts: 3,412

You might have more files open than you think. A way to tell (approximately) is:
Code:
lsof -u <username> |wc -l
I have 159 open files on a system just due to 6 xterm windows. root has 3000.
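To see which processes hold the descriptors, the same lsof output can be grouped by PID; a small extension of the command above (assumes lsof is installed):

```shell
# Open-file count per PID for one user, busiest process first.
# Column 2 of lsof output is the PID; NR > 1 skips the header line.
lsof -u "$USER" 2>/dev/null | awk 'NR > 1 {print $2}' | sort | uniq -c | sort -rn | head
```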
 
Old 09-22-2014, 04:30 PM   #9
astrogeek
Moderator
 
Registered: Oct 2008
Distribution: Slackware [64]-X.{0|1|2|37|-current} ::12<=X<=14, FreeBSD_12{.0|.1}
Posts: 5,444
Blog Entries: 11

Quote:
Originally Posted by smallpond View Post
You might have more files open than you think. A way to tell (approximately) is:
Code:
lsof -u <username> |wc -l
I have 159 open files on a system just due to 6 xterm windows. root has 3000.
Never thought to look at that - I now have 2322, root has 272. Very interesting data point!
 
  

