LinuxQuestions.org > Forums > Non-*NIX Forums > Programming
Programming: This forum is for all programming questions. The question does not have to be directly related to Linux, and any language is fair game.
Old 06-22-2006, 01:56 AM   #1
alred
Member
 
Registered: Mar 2005
Location: singapore
Distribution: puppy and Ubuntu and ... erh ... redhat(sort of) :( ... + the venerable bsd and solaris ^_^
Posts: 658
Blog Entries: 8

Rep: Reputation: 31
problems with multi-threading and/or multi-processing for TCP networking in Java ...


Probably a few newbie Java doubts/problems jammed into one thread ... please bear with me and my terminology ...

Between multi-threading and multi-processing, which one is better? I mean on both *nix and Windows systems ...

Here's something basic I wrote in Java recently:
Code:
new thread1
new thread2
new thread3
new thread4
new thread5

Each of them sits in its own ServerSocket.accept(), but on the same ServerSocket ... is this the correct way to thread ServerSocket.accept()?

Is it true that TCP itself guarantees (as a kind of built-in free tool) multi-threaded networking when I'm in Java? Can TCP really "cycle" through the threads and use one "automatically"?

How do I judge how many threads I need to create? Or should I just create a single thread for the socket accept() and that's it?

Mine is a single-processor machine, or does that even matter?



//not really urgent, but I just wanted to know something more from others' experience ...

//thanks in advance ...


.

Last edited by alred; 06-22-2006 at 01:58 AM.
 
Old 06-23-2006, 04:04 AM   #2
dom83
LQ Newbie
 
Registered: Jan 2006
Posts: 20

Rep: Reputation: 1
Quote:
is this the correct way to thread a ServerSocket.accept() ??
I hope I understand you right: you start, for example, 5 threads. So with your implementation you can only accept 5 connections!? If I misunderstood you, please ignore the rest.

This is how I learned server/client programming: in your "main" server method, you listen on the ServerSocket. When a client connects, you get a new "client" socket and start a new thread with that client socket.
For example:

Listening for new connections:
Code:
ServerSocket server = new ServerSocket(9000);

while (true)
{
    Socket socket = server.accept();          // blocks until a client connects
    new RequestProcessor(socket).start();     // hand the connection to a new thread
}
Do something with the request:
Code:
class RequestProcessor extends Thread
{
    private final Socket socket;

    public RequestProcessor(Socket socket)
    {
        this.socket = socket;   // keep this constructor fast (see below)
    }

    public void run()
    {
        // read the request from this.socket, write the response, close the socket
    }
}
The only downside of this method: keep the constructor of RequestProcessor fast, because while you are creating a new instance of RequestProcessor you can't accept a new connection ... at least I think so.
 
Old 06-23-2006, 01:36 PM   #3
alred
Member
 
Registered: Mar 2005
Location: singapore
Distribution: puppy and Ubuntu and ... erh ... redhat(sort of) :( ... + the venerable bsd and solaris ^_^
Posts: 658
Blog Entries: 8

Original Poster
Rep: Reputation: 31
I was thinking of something like this:

Code:
// from main()
for (int i = 0; i < 10; i++) {
    Thread th_ = new fishK(serverSocket);
    th_.start();
    System.out.println(th_);
}

// the fishK class
class fishK extends Thread {

    ServerSocket serverSocket;
    Socket clientSocket;

    ...

    public void run() {
        while (true) {
            // note: accept() throws IOException, so in real code
            // this call needs a try/catch inside run()
            clientSocket = serverSocket.accept();
            System.out.println(currentThread());

            ...
        }
    }
}
According to you, is it OK to do that? I'm feeling kind of uncertain about it ...

But I have tried yours and it runs great ... probably I will stick with your sample, thanks ...


.

Last edited by alred; 06-23-2006 at 01:37 PM.
 
Old 06-23-2006, 01:51 PM   #4
reshojaei
LQ Newbie
 
Registered: Oct 2005
Location: toronto
Posts: 8

Rep: Reputation: 0
It is always good practice to put some restriction on the number of connections your server can accept. You can do it easily just by using a counter in dom83's code. But never use a fixed for-loop of accept threads to implement a server.
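One way to implement that counter (a sketch only; the port 9000 matches dom83's example, and the limit of 50 is an arbitrary illustration) is a counting Semaphore wrapped around the accept loop:

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.Semaphore;

public class BoundedServer {
    // the "counter": at most 50 connections being handled at once
    static final Semaphore slots = new Semaphore(50);

    public static void main(String[] args) throws IOException, InterruptedException {
        ServerSocket server = new ServerSocket(9000);
        while (true) {
            slots.acquire();                 // blocks when 50 handlers are running
            Socket socket = server.accept();
            new Thread(() -> {
                try (Socket s = socket) {
                    // handle the request
                } catch (IOException ignored) {
                } finally {
                    slots.release();         // free the slot when this handler is done
                }
            }).start();
        }
    }
}
```

When the limit is reached, new clients simply wait in the OS-level accept backlog instead of each getting a thread.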
 
Old 06-23-2006, 01:58 PM   #5
sundialsvcs
LQ Guru
 
Registered: Feb 2004
Location: SE Tennessee, USA
Distribution: Gentoo, LFS
Posts: 8,457
Blog Entries: 4

Rep: Reputation: 2920
The so-called "flaming arrow" approach ... where one thread is created for each request, and it flies up into the air, processes the request, and then flames-out ... is fine for very light workloads, but do be aware that it will crumple badly under more serious loads.

Suppose you suddenly get deluged with 1,000 requests per second by someone who's trying to do a denial-of-service attack. His attack will succeed, because suddenly the system dispatcher has 1,000 new threads to contend with, and all of them are fighting for the same resources and taking the same locks and so-on. Basically they're all getting in each other's way... just as a thousand hungry people would if they all raced into the kitchen.

A different approach would have a pool of worker-threads who are waiting on a single queue of incoming requests. The workers dequeue a request, process it, then "clean up, change clothes, and take a quick shower," and go back to waiting for the next request to arrive.

The worker-threads don't die. There's usually a "monitor" thread that periodically checks to make sure that the workers are still alive and that they're not somehow "stuck" on a particular request. Yet another thread might do nothing but watch the monitor.

There is a single input-thread which is running the accept() loop previously described, but it is then placing those incoming requests on a queue. (And, probably, observing the size of that queue so that it is not permitted to grow too large... some incoming requests might just have to be refused.)

There might be yet another thread whose job is to finish-up the requests, to do statistical logging and so-forth.

At any moment, then, only a fixed maximum number of requests will be "in process." This makes the throughput of the system predictable ... even with a flood of 1,000 requests, only (say) 20 will be active, and so, regardless of the queue size at the moment, we can say "this system has a worst-case throughput of (say) 200 requests/sec," and we know that the backlog will clear within five seconds.

There is, you see, a rather infamous characteristic of computer systems: the "knee-shaped curve" or "hitting the wall." (Smack!) Performance degrades more-or-less linearly up to a certain point, where it abruptly becomes exponential(ly bad). If you maintain a throttle upon the amount of work that the system attempts to carry out simultaneously, you never hit that wall. Queues build up, but the work keeps moving.

A "request," in such a system, isn't a thread or a process or anything known to the system dispatcher. It is a thing; an object.

Some systems are even more sophisticated, with a certain number of "job analysis" threads taking the initial request, deciding what stage(s) need to be performed to complete it, and then brokering out the stages to those threads .. or even machines .. that are dedicated to each stage. (What I'm now describing is a transaction-processing monitor. These are available off-the-shelf. I'm being loose with my terminology here.) Sounds fancy, but you see it being done at any fast-food restaurant, where we've got "the fry guy" and "the drink guy" and "mister burger-man" and so on.
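In Java, the worker-pool shape described above (one accept thread, a bounded queue, a fixed set of workers that never die) maps almost directly onto java.util.concurrent. A rough sketch; the class name, port, pool size, and queue capacity are all illustrative, not anything from this thread:

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PooledServer {

    // A fixed pool of workers draining a bounded queue of requests.
    // CallerRunsPolicy means that when the queue is full, the accept
    // thread itself handles the request, which throttles accept().
    static ThreadPoolExecutor newPool(int workers, int queueCapacity) {
        return new ThreadPoolExecutor(
                workers, workers, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(queueCapacity),
                new ThreadPoolExecutor.CallerRunsPolicy());
    }

    static void handle(Socket socket) {
        try (Socket s = socket) {
            // read the request, write the response
        } catch (IOException e) {
            // log and move on; the worker thread survives
        }
    }

    public static void main(String[] args) throws IOException {
        ThreadPoolExecutor pool = newPool(20, 100);
        ServerSocket server = new ServerSocket(9000);
        while (true) {                              // the single input thread
            Socket socket = server.accept();
            pool.execute(() -> handle(socket));     // enqueue; a worker dequeues it
        }
    }
}
```

A "request" here is just a Runnable sitting on the queue, exactly as described: an object, not anything the system dispatcher knows about until a worker picks it up.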

Last edited by sundialsvcs; 06-23-2006 at 02:12 PM.
 
Old 06-23-2006, 03:21 PM   #6
dom83
LQ Newbie
 
Registered: Jan 2006
Posts: 20

Rep: Reputation: 1
Thank you sundialsvcs for your reply! I have one small question about the input thread:

Quote:
Originally Posted by sundialsvcs
There is a single input-thread which is running the accept() loop previously described, but it is then placing those incoming requests on a queue. (And, probably, observing the size of that queue so that it is not permitted to grow too large... some incoming requests might just have to be refused.)
What should I do when the queue has reached its limit? Let the input thread sleep for a second?

Whether the thread sleeps or not, I will lose "good" requests. But if I refuse requests without sleeping, a denial-of-service attack will eat up my CPU resources.

So how would you handle this exceptional case?
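For what it's worth, a third option besides sleeping and spinning is to make the refusal itself nearly free: let the enqueue fail immediately and just close the socket (or write a short "busy" reply) in the rejection path. With a ThreadPoolExecutor that is a RejectedExecutionHandler; the class names and sizes below are illustrative, not from sundialsvcs's post:

```java
import java.io.IOException;
import java.net.Socket;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class RefusingPool {

    // When the queue is full, the handler runs in the accept thread:
    // closing the client socket is far cheaper than serving the request,
    // so a flood gets shed quickly instead of eating CPU.
    static ThreadPoolExecutor newPool(int workers, int queueCapacity) {
        RejectedExecutionHandler refuse = (task, pool) -> {
            if (task instanceof SocketTask) {
                ((SocketTask) task).closeQuietly();
            }
        };
        return new ThreadPoolExecutor(workers, workers, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(queueCapacity), refuse);
    }

    static class SocketTask implements Runnable {
        final Socket socket;

        SocketTask(Socket socket) { this.socket = socket; }

        public void run() {
            try (Socket s = socket) {
                // read the request, write the response
            } catch (IOException ignored) { }
        }

        void closeQuietly() {
            try { socket.close(); } catch (IOException ignored) { }
        }
    }
}
```

The accept thread never sleeps: it only blocks in accept() itself, and a rejected request costs one close() call.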
 
  

