Programming: This forum is for all programming questions. The question does not have to be directly related to Linux, and any language is fair game.
11-16-2008, 03:17 PM

#1

Senior Member
Registered: Sep 2004
Distribution: FreeBSD 9.1, Kubuntu 12.10
Posts: 3,078
Rep:

plotting difference equations which are dependent on linear equations
I am working on a server application that queues incoming requests using multiple IPC-monitoring threads. Each thread only receives input from clients, parses it, and queues it in the form of a requested action. The main thread periodically removes a batch of requests for execution so that the input threads can continue to queue new requests while those removed are being executed.
The queue of requests has an upper limit, and based on that limit I've developed an equation to determine the number of requests to be pulled from the queue at one time. This helps the server run more smoothly and attempts to minimize impending backlogs. As far as I know the equation can't be represented linearly because it uses the change in state of the server at a given point to calculate an adjustment to make for the next interval. Basically, it derives a value normally ranging from −1 to 1 (essentially a base-X log of the estimated change before the next calculation) denoting what the load seems to be doing, and that value is used in conjunction with the time elapsed since the last calculation to adjust the percentage of pending requests that are pulled for execution. This incidentally causes each iteration to depend on the previous calculation and its results in a way I'm afraid isn't solvable.
There are really only four things that affect performance of the server under ideal conditions: the rate of requests, the amount of time to queue a request, the amount of time to remove a request, and the amount of time to execute it. In general, the last three can be seen as constant in an ideal system; therefore, I'd like to simulate the server's load based on a varying rate of requests. Unfortunately the load-adjustment equation I use is a difference equation and the rate of requests is a linear equation (to be created arbitrarily for simulation), which would need to be integrated across the individual periods between load adjustment calculations.
I found links to several load simulators, but it looks like all are meant to simulate connections to a server that's actually running. I can do that; that's how I developed the equation. The problem with that is the context of the system varies each run, so I have certain anomalies that I'd like to know a little more about so I might be able to improve the equation. Thanks.
ta0kira
PS I meant to put this in the software forum, but I initially thought of putting it in programming (where I accidentally posted it). It doesn't matter to me which one it's in, though.
PPS I'm leaning toward writing this in C with a simple plugin-based calculation of the integrals. That actually sounds pretty simple! Let me know if you'd like me to post what I come up with.
Last edited by ta0kira; 11-16-2008 at 04:45 PM.



11-17-2008, 03:01 PM

#2

ReliaFree Maintainer
Registered: Aug 2004
Location: Kalamazoo, Michigan
Distribution: Slackware-current, Cross Linux from Scratch, Gentoo
Posts: 2,703

What is the model you have developed thus far? Since three of your explanatory variables (time to queue, time to remove, and time to execute) can be considered constants for a given server, you have only the one variable left (rate of requests). At this point, you can view your server requests as a non-homogeneous Poisson process (assuming the rate of requests varies with time).
If you have existing data about the number of requests per unit of time, you can use that to fit a model for the rate of request. Always look for the simplest model that reasonably describes your data. You may find that a piecewise model works depending on the source of the requests.
And, take a look at GSL (the GNU Scientific Library) for functions to calculate integrals.



11-17-2008, 07:31 PM

#3

Senior Member
Registered: Sep 2004
Distribution: FreeBSD 9.1, Kubuntu 12.10
Posts: 3,078
Original Poster
Rep:

Yes, the outcome will only vary based on the rate of requests, which can be described by a non-homogeneous Poisson process. The state of the queue must be discretely calculated, however. Here is my best generalization of the load (i.e. number of queued requests) inherent to the server's design:
Code:
L_n(Δt) = L_{n-1} + A(t_n) − T(S_{n-1})

           t_n
A(t_n) =   ∫    λ_r(t) dt
          t_{n-1}

T(S_n) = L_n · δη(S_n, Δt(T))

t    ≡ time
L_n  ≡ queued requests
A    ≡ added requests
λ_r  ≡ request enqueue rate
T    ≡ removed requests
δη   ≡ load compensation calculation
S_n  ≡ instantaneous dynamic queue state (L ⊂ S, S(t ∈ [t_n, t_{n+1})) = S_n)
The problem I see is that λ_r can't be expected to integrate symbolically, meaning each iteration will probably involve a rough Σ (numerical summation) instead.
Something I'm particularly interested in is whether oscillations in L_n will eventually attenuate with a constant λ_r. Thanks.
ta0kira
PS λ_r(t) will be an arbitrary simulation which could possibly consist of a system of equations.
Last edited by ta0kira; 11-17-2008 at 08:27 PM.



11-18-2008, 12:24 PM

#4

ReliaFree Maintainer
Registered: Aug 2004
Location: Kalamazoo, Michigan
Distribution: Slackware-current, Cross Linux from Scratch, Gentoo
Posts: 2,703

Well, you can't conclude that lambda won't integrate symbolically unless you know the function that lambda represents. Commonly the lambda in an NHPP follows a power-law model or a log-linear model (at least in the applications where I use NHPP models). Both are symbolically integrable.
Oscillations in L_n will attenuate with a constant lambda as long as T is a constant and T = A. But the problem you've presented prohibits a constant lambda. L_n should attenuate (within a range of values) even with a varying lambda; if it doesn't, your system is unoptimized and out of control. My understanding of your problem is that you want to stabilize L_n under a varying lambda. Thus, if L_n is oscillating, you haven't solved your problem.
The key is to shoot for an acceptable RANGE of L_n values and only vary T as needed to maintain L_n within that range. Attempting to maintain L_n at a single, constant value will never be stable; your T would be changing constantly.
If lambda is a system of equations, the solution will be a system of solutions. Just use matrices to do the math.



11-18-2008, 08:43 PM

#5

Senior Member
Registered: Sep 2004
Distribution: FreeBSD 9.1, Kubuntu 12.10
Posts: 3,078
Original Poster
Rep:

Yes, I suppose you're right about the λ integration, except if some sort of randomness is included. The overall goal is to prevent queue overflow with the smallest possible Δt despite seemingly chaotic input. For example, I've been testing the system with a set of 8 client programs, each sending off 512 requests in a row with random 10ms intervals thrown in, then a random pause and another 512 requests each. The objective is for the number of requests pulled at once to be relatively uniform, yet conforming to the essential pattern of input. To do this it needs to recognize the signs of a spike, yet not overreact to a change that might be very short-lived, to prevent massive oscillations.
ta0kira



11-21-2008, 06:58 PM

#6

ReliaFree Maintainer
Registered: Aug 2004
Location: Kalamazoo, Michigan
Distribution: Slackware-current, Cross Linux from Scratch, Gentoo
Posts: 2,703

I'm visualizing from your description a pulse train of requests. Each pulse varies in width because you're breaking up the 512 requests with pseudo-randomly occurring 10ms intervals (and always 10ms). That should be easily integrable, as each pulse is a rectangle.
In order to "recognize" a spike and not freak out, you could use a moving average which tends to smooth the input. Another approach would be to add negative feedback to the system. Not sure how you would include negative feedback in your system. It would be interesting though.



11-22-2008, 10:08 PM

#7

Senior Member
Registered: Sep 2004
Distribution: FreeBSD 9.1, Kubuntu 12.10
Posts: 3,078
Original Poster
Rep:

Right now the calculation is split into two parts. The first is the exponent of a negative ratio, where absolute change in queue load, queue capacity, and average execution time increase it, while remaining queue capacity, change in time, and the last number of executed requests decrease it. After subtracting from 1 and taking the sign of the change in queue load, this becomes the "differential state" of the queue.
The second part is the calculation of the bulk execution percentage. The old percentage is combined with a new one calculated with the value found above, weighted based on elapsed time.
Because elapsed time increases the first value while also decreasing the weight given to the new percentage when combining it with the old, a sharp but extremely short change has less impact than a sharp but somewhat longer one, while changes spread over a period of several seconds get little consideration.
I would post the equations, but I'm not patient enough to put them into LQ format. I might put them into Maxima for that.
ta0kira
PS Something I have noticed is that the rate of request enqueueing and the rate of request execution will almost always add up to 2k/second; therefore, I need to do some scheduling manipulation, also.
Last edited by ta0kira; 11-23-2008 at 03:20 AM.


