LinuxQuestions.org
Welcome to the most active Linux Forum on the web.
LinuxQuestions.org > Forums > Non-*NIX Forums > Programming
Programming: This forum is for all programming questions. The question does not have to be directly related to Linux, and any language is fair game.
Old 03-01-2005, 01:20 AM   #1
maxfacta
Member
 
Registered: Aug 2004
Location: Perth, Australia
Distribution: Debian @ home + work :)
Posts: 66

Rep: Reputation: 22
C: Socket weirdness: select() -> fork() -> accept()


I have written a server which sets up and listens on a socket. It has business to attend to once per second, and then uses select() to serve connection requests during the interval.

Now, if I accept() the connection and then fork(), all is well (though I must remember to close the new socket in both the child and the parent). It made more sense to me to fork() and then accept(), since only the child needs to deal with the new socket. This "works", in that the client receives the information it asked for, but it comes at the cost of massive system resources! Invariably, the server will spit out multiple

"accept: Resource temporarily unavailable"

errors, and it starts to hog the system.

I do not understand why this is so. Can anyone shed some light?

Here is the code:

Code:
    // Serve multiple requests during the idle second
    //
    while (w_time.tv_sec != 0 || w_time.tv_usec != 0) {

        if (select(orig_sock + 1, &read_fd, (fd_set *) NULL,
                   (fd_set *) NULL, &w_time) < 0) {
            perror("read socket select");
            clean_up(orig_sock, NAME, shmid, 1);
            exit(3);
        }

        if (FD_ISSET(orig_sock, &read_fd)) {

            clnt_len = sizeof(clnt_addr);
            if ((new_sock = accept(orig_sock, (struct sockaddr *) &clnt_addr,
                                   &clnt_len)) < 0) {
                perror("accept error");
                clean_up(orig_sock, NAME, shmid, 1);
                exit(3);
            }

            pid_t pid = fork();
            if (pid < 0)
                perror("fork error");

            if (pid == 0) {
                // Child: handle this one client, then exit.
                // read() does not null-terminate, so do it ourselves
                // before the strcmp()s (leave room for the '\0').
                ssize_t n = read(new_sock, sock_request, 1023);
                if (n < 0)
                    n = 0;
                sock_request[n] = '\0';

                if (strcmp(sock_request, "START") == 0)
                    start_session(new_sock, registered, stack);
                else if (strcmp(sock_request, "GET") == 0)
                    time_remaining(new_sock, registered);
                else if (strcmp(sock_request, "STOP") == 0)
                    terminate_session(new_sock, registered, stack);
                else
                    write(new_sock, "EBADREQ", sizeof("EBADREQ"));

                close(new_sock);
                exit(0);
            }

            // Parent: the child owns the connection now.
            close(new_sock);
        }
    }

This works; move the accept() into the fork() block and things get messy.
The socket was set up like this:

Code:
  // Set up listening socket
  //
  if ((orig_sock = socket(PF_UNIX, SOCK_STREAM, 0)) < 0) {
      perror("generate error");
      clean_up(orig_sock, NAME, shmid, 0);
      exit(3);
  }

  // flag is a non-zero int: FIONBIO puts the socket in non-blocking
  // mode, so accept() can fail with EAGAIN when nothing is pending.
  if (ioctl(orig_sock, FIONBIO, &flag) < 0) {
      perror("SERVER ioctl");
      clean_up(orig_sock, NAME, shmid, 1);
      exit(3);
  }

  serv_addr.sun_family = AF_UNIX;
  strcpy(serv_addr.sun_path, NAME);
  unlink(NAME);                       // In case it's hanging around..

  mask = umask(0000);                 // Squid must talk to this socket.

  if (bind(orig_sock, (struct sockaddr *) &serv_addr,
           sizeof(serv_addr.sun_family) + strlen(serv_addr.sun_path)) < 0) {
      perror("bind error");
      clean_up(orig_sock, NAME, shmid, 1);
      exit(3);
  }

  umask(mask);

  if (listen(orig_sock, 2) < 0) {     // There are 2 redirectors
      perror("listen error");
      clean_up(orig_sock, NAME, shmid, 1);
      exit(3);
  }
 
Old 03-01-2005, 01:33 PM   #2
aluser
Member
 
Registered: Mar 2004
Location: Massachusetts
Distribution: Debian
Posts: 557

Rep: Reputation: 43
I'm going to guess that what happened is this:
  • select() reports the master socket is ready to read.
  • You fork() a child; it is put on the run queue but doesn't get scheduled yet, and the parent keeps running.
  • The parent reaches the top of the loop, and select() reports the master socket is still readable, because none of the children has accept()ed the pending connection yet.
  • So the parent forks another child, and another... Once one child finally accept()s the connection, every other child's accept() on the non-blocking socket fails with EAGAIN, which is exactly your "Resource temporarily unavailable".

Sounds like the easiest fix is to just do the accept() in the parent.

Another strategy, which is only tangentially related to your question, would be to prefork several children that just sit blocked in accept() loops on the master socket. The parent would then not need to do any select()ing at all. Of course, if you get more requests than you have children to process them, some requests will have to wait until a child finishes its current one. You get to decide whether that's a problem; if the processing for each request is either short or not very variable, it probably isn't.
 
  


