
ta0kira 11-16-2008 03:17 PM

plotting difference equations which are dependent on linear equations
 
I am working on a server application that queues incoming requests using multiple IPC-monitoring threads. Each thread only receives input from clients, parses it, and queues it in the form of a requested action. The main thread periodically removes a bulk of requests for execution so that the input threads can continue to queue new requests while those removed are being executed.

The queue of requests has an upper limit, and based on that limit I've developed an equation to determine how many requests to pull from the queue at one time. This helps the server run more smoothly and attempts to minimize impending backlogs. As far as I know the equation can't be represented linearly, because it uses the change in the server's state at a given point to calculate an adjustment for the next interval. Basically, it derives a value normally ranging from -1 to 1 (essentially a base-X log of the estimated change before the next calculation) denoting what the load seems to be doing, and that value is used together with the time elapsed since the last calculation to adjust the percentage of pending requests pulled for execution. This incidentally causes each iteration to depend on the previous calculation and its results in a way I'm afraid isn't solvable analytically.

There are really only four things that affect performance of the server under ideal conditions: the rate of requests, the amount of time to queue a request, the amount of time to remove a request, and the amount of time to execute it. In general, the last three can be treated as constant in an ideal system; therefore, I'd like to simulate the server's load based on a varying rate of requests. Unfortunately, the load-adjustment equation I use is a difference equation and the rate of requests is a linear equation (to be created arbitrarily for simulation), which would need to be integrated across the individual periods between load-adjustment calculations.

I found links to several load simulators, but it looks like they're all meant to simulate connections to a server that's actually running. I can do that; that's how I developed the equation. The problem is that the system's context varies with each run, so there are certain anomalies I'd like to know a little more about in case I can improve the equation. Thanks.
ta0kira

PS I meant to put this in the software forum, but I initially thought of putting it in programming (where I accidentally posted it). It doesn't matter to me which one it's in, though.

PPS I'm leaning toward writing this in C with a simple plug-in-based calculation of the integrals. That actually sounds pretty simple! Let me know if you'd like me to post what I come up with.
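
For the curious, here's roughly the shape I have in mind (all names are placeholders, and the real version would load rate functions as plug-ins rather than hard-coding one):
Code:

#include <stdio.h>

/* plug-in type for an arbitrary request-rate function λr(t) */
typedef double (*rate_fn)(double t);

/* example plug-in: a constant rate of 100 requests/second */
static double constant_rate(double t)
{ (void) t; return 100.0; }

/* approximate the integral of 'rate' over [t0, t1] with a midpoint sum */
static double integrate_rate(rate_fn rate, double t0, double t1, int steps)
{
    double width = (t1 - t0) / steps, sum = 0.0;
    int i;
    for (i = 0; i < steps; i++)
        sum += rate(t0 + (i + 0.5) * width) * width;
    return sum;
}

int main(void)
{
    /* requests added between t = 0s and t = 0.25s */
    printf("added: %f\n", integrate_rate(&constant_rate, 0.0, 0.25, 64));
    return 0;
}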

weibullguy 11-17-2008 03:01 PM

What is the model you have developed thus far? Since three of your explanatory variables (time to queue, time to remove, and time to execute) can be considered constants for a given server, you have only one variable left (the rate of requests). At this point, you can view your server requests as a nonhomogeneous Poisson process (assuming the rate of requests varies with time).

If you have existing data about the number of requests per unit of time, you can use that to fit a model for the rate of request. Always look for the simplest model that reasonably describes your data. You may find that a piecewise model works depending on the source of the requests.
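
If you end up generating synthetic arrivals from whatever rate model you fit, one common technique is thinning (Lewis-Shedler): draw candidate arrivals from a constant rate that bounds lambda(t), and keep each one with probability lambda(t)/lambda_max. A rough sketch (the rate function and bound here are made up):
Code:

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* hypothetical time-varying request rate (requests/second) */
static double lambda_r(double t)
{ return 500.0 + 400.0 * sin(t); }

/* anything that bounds lambda_r(t) from above on the simulated window */
#define LAMBDA_MAX 1000.0

/* uniform random number in (0, 1) */
static double uniform(void)
{ return (rand() + 1.0) / (RAND_MAX + 2.0); }

int main(void)
{
    double t = 0.0, t_end = 10.0;
    int accepted = 0;

    while (t < t_end)
    {
        /* candidate arrival from a homogeneous process at LAMBDA_MAX */
        t += -log(uniform()) / LAMBDA_MAX;
        if (t >= t_end) break;
        /* keep the candidate with probability lambda_r(t) / LAMBDA_MAX */
        if (uniform() <= lambda_r(t) / LAMBDA_MAX) accepted++;
    }

    printf("simulated %d arrivals in %.1f seconds\n", accepted, t_end);
    return 0;
}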

And take a look at the GSL (GNU Scientific Library) for functions to calculate integrals numerically.
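
For example, something like this with gsl_integration_qags() (the integrand here is just a placeholder; link with -lgsl -lgslcblas -lm):
Code:

#include <stdio.h>
#include <gsl/gsl_integration.h>

/* placeholder rate function; 'params' is unused here */
static double rate(double t, void *params)
{ (void) params; return 100.0 + 50.0 * t; }

int main(void)
{
    gsl_integration_workspace *w = gsl_integration_workspace_alloc(1000);
    gsl_function F;
    double result, error;

    F.function = &rate;
    F.params   = NULL;

    /* integrate the rate over [0, 2] with absolute/relative tolerances */
    gsl_integration_qags(&F, 0.0, 2.0, 1e-7, 1e-7, 1000, w, &result, &error);
    printf("requests added: %f (estimated error %g)\n", result, error);

    gsl_integration_workspace_free(w);
    return 0;
}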

ta0kira 11-17-2008 07:31 PM

Yes, the outcome will only vary based on the rate of requests, which can be described by a nonhomogeneous Poisson process. The state of the queue must be calculated discretely, however. Here is my best generalization of the load (i.e. the number of queued requests) inherent to the server's design:
Code:

L_n(Δt) = L_{n-1} + A(t_n) - T(S_{n-1})

A(t_n)  = ∫_{t_{n-1}}^{t_n} λ_r(t) dt

T(S_n)  = L_n · δη(S_n, Δt(T))


t    ≡ time
L_n  ≡ queued requests
A    ≡ added requests
λ_r  ≡ request enqueue rate
T    ≡ removed requests
δη   ≡ load compensation calculation
S_n  ≡ instantaneous dynamic queue state (L ⊂ S; S(t) = S_n for t ∈ [t_n, t_{n+1}))

The problem I see is that λr can't be expected to integrate symbolically, meaning each iteration will probably involve a rough Σ (numerical sum) operation.
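
Concretely, I expect to approximate the added-requests term each iteration with something like a K-step sum:
Code:

A(t_n) = \int_{t_{n-1}}^{t_n} \lambda_r(t)\,dt
       \approx \sum_{k=0}^{K-1} \lambda_r\left(t_{n-1} + k\,\frac{\Delta t}{K}\right)\frac{\Delta t}{K},
       \qquad \Delta t = t_n - t_{n-1}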

Something I'm particularly interested in is whether oscillations in Ln will eventually attenuate with a constant λr. Thanks.
ta0kira

PS λr(t) will be arbitrary for the simulation and could possibly consist of a system of equations.

weibullguy 11-18-2008 12:24 PM

Well, you can't conclude that lambda won't integrate symbolically unless you know the function that lambda represents. Commonly the lambda in an NHPP follows a power law model or a loglinear model (at least in the applications where I use NHPP models). Both are symbolically integrable.
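
For reference, with the usual parameterizations both have closed-form integrals (assuming beta != 0 in the loglinear case):
Code:

\text{power law:}\quad \lambda(t) = \lambda\beta t^{\beta-1}
  \;\Rightarrow\; \int_{t_{n-1}}^{t_n} \lambda(t)\,dt = \lambda\left(t_n^{\beta} - t_{n-1}^{\beta}\right)

\text{loglinear:}\quad \lambda(t) = e^{\alpha+\beta t}
  \;\Rightarrow\; \int_{t_{n-1}}^{t_n} \lambda(t)\,dt = \frac{e^{\alpha+\beta t_n} - e^{\alpha+\beta t_{n-1}}}{\beta}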

Oscillations in Ln will attenuate with a constant lambda as long as T is constant and T = A. But the problem you've presented prohibits a constant lambda. Ln should attenuate (within a range of values) even with a varying lambda; if it doesn't, your system is unoptimized and out of control. My understanding is that your goal is to stabilize Ln with a varying lambda. Thus, if Ln is oscillating, you haven't solved your problem.

The key is to shoot for an acceptable RANGE of Ln values and only vary T as needed to maintain Ln within that range. Attempting to maintain Ln at a single, constant value will never be stable. Your T will constantly be changing.
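
In other words, something along these lines (the thresholds and names are arbitrary, just to illustrate the band idea):
Code:

/* hypothetical band controller: only change the bulk size T when the
   queue length L leaves an acceptable range; thresholds are arbitrary */
#define L_LOW  2048
#define L_HIGH 6144

static unsigned adjust_bulk_size(unsigned queued, unsigned current_bulk)
{
    if (queued > L_HIGH) return current_bulk + (queued - L_HIGH) / 4;
    if (queued < L_LOW && current_bulk > 16) return current_bulk - current_bulk / 4;
    return current_bulk; /* inside the band: leave T alone */
}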

If lambda is a system of equations, the solution will be a system of solutions. Just use matrices to do the math.

ta0kira 11-18-2008 08:43 PM

Yes, I suppose you're right about the λ integration, unless some sort of randomness is included. The overall goal is to prevent queue overflow with the smallest possible Δt despite seemingly chaotic input. For example, I've been testing the system with a set of 8 client programs, each sending off 512 requests in a row with random 10ms intervals thrown in, then a random pause and another 512 requests. The objective is for the number of requests pulled at once to be relatively uniform, yet conforming to the essential pattern of the input. To do that it needs to recognize the signs of a spike without overreacting to a change that might be very short-lived, to prevent massive oscillations.
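
For reference, the timing pattern of each test client is roughly this (the real clients send actual requests over IPC rather than printing; everything else here is simplified):
Code:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* stand-in for sending one request over IPC */
static void send_request(int n)
{ printf("request %d\n", n); }

int main(void)
{
    int burst, n;
    for (burst = 0; burst < 2; burst++)
    {
        for (n = 0; n < 512; n++)
        {
            send_request(n);
            if (rand() % 8 == 0) usleep(10000);  /* occasional 10ms gap */
        }
        usleep((rand() % 900 + 100) * 1000);     /* random pause between bursts */
    }
    return 0;
}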
ta0kira

weibullguy 11-21-2008 06:58 PM

I'm visualizing from your description a pulse train of requests. Each pulse varies in width because you're breaking up the 512 requests with pseudo-randomly occurring 10ms intervals (and always 10ms). That should be easily integrable as each pulse is a rectangle.
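
So for a piecewise-constant rate, the integral over any window is just a sum of rectangle areas, e.g. (names are arbitrary):
Code:

/* piecewise-constant rate: 'edges' holds the segments+1 boundaries of the
   pulses and 'levels' the rate within each; integrate over [t0, t1] */
static double integrate_pulses(const double *edges, const double *levels,
                               int segments, double t0, double t1)
{
    double total = 0.0;
    int i;
    for (i = 0; i < segments; i++)
    {
        double lo = edges[i]     > t0? edges[i]     : t0;
        double hi = edges[i + 1] < t1? edges[i + 1] : t1;
        if (hi > lo) total += levels[i] * (hi - lo);
    }
    return total;
}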

In order to "recognize" a spike and not freak out, you could use a moving average, which tends to smooth the input. Another approach would be to add negative feedback to the system. I'm not sure how you would include negative feedback in your system, but it would be interesting.
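
For the moving-average idea, an exponentially-weighted average is about the cheapest version (the alpha value would need tuning):
Code:

/* exponentially-weighted moving average of the observed arrival rate;
   smaller alpha = smoother but slower to react, larger = more responsive */
static double smooth_rate(double previous_avg, double new_sample, double alpha)
{ return alpha * new_sample + (1.0 - alpha) * previous_avg; }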

ta0kira 11-22-2008 10:08 PM

Right now the calculation is split into two parts. The first is the exponent of a negative ratio, where the absolute change in queue load, the queue capacity, and the average execution time increase it, while the remaining queue capacity, the change in time, and the last number of executed requests decrease it. After subtracting from 1 and taking the sign of the change in queue load, this becomes the "differential state" of the queue.

The second part is the calculation of the bulk execution percentage. The old percentage is combined with a new one calculated with the value found above, weighted based on elapsed time.

The combination of elapsed time increasing the first value and that same elapsed time decreasing the weight when combining the new percentage with the old prevents a sharp but extremely short change from having as much impact as a sharp but somewhat longer change, while a change spread over several seconds gets little consideration.

I would post the equations, but I'm not patient enough to put them into LQ format. I might put them into Maxima for that.
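
As a rough stand-in, here's approximately the shape of the two parts in C (this is my loose reading of the description above, not the actual equations; the grouping of terms and all names are guesses, and there are no guards against zero denominators):
Code:

#include <math.h>

/* part 1: "differential state" of the queue, roughly in [-1, 1] */
static double differential_state(double delta_load, double capacity,
                                 double remaining, double exec_time,
                                 double delta_t, double last_executed)
{
    /* the ratio grows with |change in load|, capacity, and execution time,
       and shrinks with remaining capacity, elapsed time, and the last bulk
       size; the final value takes the sign of the change in load */
    double ratio = (fabs(delta_load) * capacity * exec_time) /
                   (remaining * delta_t * last_executed);
    double value = 1.0 - exp(-ratio);
    return delta_load < 0.0? -value : value;
}

/* part 2: blend the old bulk-execution percentage with a new one derived
   from the differential state, weighted by elapsed time */
static double new_percentage(double old_pct, double state, double delta_t)
{
    double candidate = old_pct * (1.0 + state);
    double weight    = 1.0 / (1.0 + delta_t); /* the old value carries less
                                                 weight after a longer gap */
    return weight * old_pct + (1.0 - weight) * candidate;
}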
ta0kira

PS Something I have noticed is that the rate of request enqueueing and the rate of request execution will almost always add up to 2k/second; therefore, I need to do some scheduling manipulation as well.

