<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
<div class="moz-cite-prefix">Hello Otto,</div>
<div class="moz-cite-prefix"><br>
</div>
<div class="moz-cite-prefix">as I mentioned before ... I made some
assumptions:</div>
<div class="moz-cite-prefix"><br>
</div>
<div class="moz-cite-prefix">- I do not have sticky sessions, so a
request may run on any of the waiting processes. Gemstone/S is
pretty fast with its shared memory - an advantage over the
typical mainstream solutions (at least the ones my colleagues are
using).<br>
</div>
<div class="moz-cite-prefix"><br>
</div>
<div class="moz-cite-prefix">- all my applications are entirely
API-driven<br>
</div>
<div class="moz-cite-prefix"><br>
</div>
<div class="moz-cite-prefix">- Up to now I categorized all calls
into different groups, mostly according to their memory usage and
execution time, so that "fast" calls do not have to wait behind a
"long" call. In practice this worked well (most of the time).
Sometimes I had to change the URL of a call between releases,
because changes in the answering code altered its execution time
so strongly that it had to request a different category.<br>
</div>
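<p>The category routing sketched above could look roughly like this (a hypothetical illustration in Python; the URL prefixes and category names are invented for the sketch, not taken from the actual setup):</p>

```python
# Hypothetical sketch: pick a worker-pool category for an incoming
# API call based on a prefix encoded in its URL. The prefixes and
# category names below are invented for illustration.

CATEGORY_BY_PREFIX = {
    "/api/fast/": "fast",    # short execution time, low memory
    "/api/long/": "long",    # long execution time, moderate memory
    "/api/heavy/": "heavy",  # memory-intensive (and often also long)
}

def category_for(url: str) -> str:
    for prefix, category in CATEGORY_BY_PREFIX.items():
        if url.startswith(prefix):
            return category
    return "fast"  # default pool for uncategorized calls

print(category_for("/api/long/report"))  # -> long
```

<p>Changing a call's category between releases then means changing its URL prefix, which is exactly the drawback described above.</p>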
<div class="moz-cite-prefix"><br>
</div>
<div class="moz-cite-prefix">- most of my API calls are fast calls;
fewer are long calls with moderately higher memory usage, and
fewer still are memory-intensive calls (which, perhaps for that
reason, are also long API calls).<br>
</div>
<div class="moz-cite-prefix"><br>
</div>
<div class="moz-cite-prefix">- using categories also has a
disadvantage (which I am really not happy with): the long-running
and memory-intensive waiting processes spend a lot of time doing
nothing. In that time they could answer all the fast calls ...
before the next long or memory-intensive call comes up.</div>
<div class="moz-cite-prefix"><br>
</div>
<div class="moz-cite-prefix">- so it would be better to have
processes that can answer any request equally.<br>
</div>
<div class="moz-cite-prefix"><br>
</div>
<div class="moz-cite-prefix">So in my topaz processes (which handle
"events" based on RabbitMQ) I do the following:</div>
<div class="moz-cite-prefix"><br>
</div>
<div class="moz-cite-prefix">-> (a) ask RabbitMQ for a new
message; the topaz process then blocks for up to a specific
amount of time (here: 30 seconds), or returns at once if a
message is available</div>
<div class="moz-cite-prefix"><br>
</div>
<div class="moz-cite-prefix">-> (b) if it returns without a new
message, I run some Smalltalk housekeeping code to handle
Gemstone/S-specific background work, then return to (a)</div>
<div class="moz-cite-prefix"><br>
</div>
<div class="moz-cite-prefix">-> (c) otherwise it returns with the
message. I handle this message and tell RabbitMQ that the work
has been done (in combination with Gemstone/S optimistic locking
and transaction handling), send an "answering" event to RabbitMQ,
and return to (a)<br>
</div>
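<p>The loop above can be sketched roughly in Python (an illustration only; the real implementation is Smalltalk running under topaz, and the broker interaction is simulated here with a plain in-process queue standing in for RabbitMQ):</p>

```python
import queue

def event_loop(broker, handle_message, run_background_work,
               max_iterations, timeout=30):
    """Sketch of the worker loop described above:
    (a) block for a message with a timeout,
    (b) on timeout run housekeeping and poll again,
    (c) otherwise handle the message and acknowledge it."""
    handled = []
    for _ in range(max_iterations):
        try:
            # (a) wait up to `timeout` seconds for a new message
            message = broker.get(timeout=timeout)
        except queue.Empty:
            # (b) nothing arrived: do Gemstone/S-style background work
            run_background_work()
            continue
        # (c) process the message, then tell the broker it is done
        handled.append(handle_message(message))
        broker.task_done()
    return handled

# Tiny demonstration with an in-process queue standing in for RabbitMQ.
broker = queue.Queue()
broker.put("event-1")
broker.put("event-2")
print(event_loop(broker, str.upper, lambda: None, max_iterations=2))
# -> ['EVENT-1', 'EVENT-2']
```

<p>With a real broker, the acknowledgement in step (c) would only be sent after the Gemstone/S transaction commits, so an aborted transaction leaves the message on the queue for redelivery.</p>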
<div class="moz-cite-prefix"><br>
</div>
<div class="moz-cite-prefix">Pretty efficient and
programming-language independent.<br>
</div>
<div class="moz-cite-prefix"><br>
</div>
<div class="moz-cite-prefix">Marten<br>
</div>
<div class="moz-cite-prefix"><br>
</div>
<div class="moz-cite-prefix">On 27.12.23 06:57, Otto Behrens wrote:<br>
</div>
<blockquote type="cite"
cite="mid:CAOJutV65n=vNDt57HFYkQWHv52ChsBEt8QTfqWF5ttsnPHrfCQ@mail.gmail.com">
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
<div dir="ltr">Marten, thank you for your response.
<div><br>
</div>
<div>Yes, maybe the answer is to try a totally different way of
delivering requests. How would, for example the RabbitMQ
library, help to distribute the load better? If Apache or
nginx does not have enough knowledge to decide how to
distribute the requests, why would another delivery
mechanism have better knowledge?</div>
<div><br>
</div>
<div>I am assuming your thinking of using an MQ based approach
is based on the idea that there is a single queue that
channels requests into a server that will then decide how to
service these requests optimally. Is that correct? By
optimally here I mean distribute the work fairly and process
in parallel so that a full queue of requests is handled as
quickly as possible. Do you think that it is necessary for
such a server to know something about the requests so that it
can handle them differently?</div>
<div><br>
</div>
<div>What I would like is to have a single queue, a pool of
processes that can handle requests and a distributor that
picks any process that is not busy to handle a request. If all
processes are busy, wait for the first one that becomes
available and give it the next request to handle. Easier said
than done, I suppose.</div>
</div>
<div class="gmail_quote">
<blockquote class="gmail_quote"
style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
</blockquote>
</div>
</blockquote>
<p><br>
</p>
<p>That IS the way RabbitMQ works ... <br>
</p>
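<p>The single-queue, competing-consumers pattern described in the quoted message can be sketched as follows (a minimal in-process illustration, with Python threads standing in for the RabbitMQ consumers; with a real broker, setting the per-consumer prefetch count to 1 gives the same "next free worker takes the next message" behaviour):</p>

```python
import queue
import threading

def worker(tasks, results, lock):
    # Every worker competes on the same shared queue: whichever
    # worker is free takes the next message - the fair distribution
    # RabbitMQ provides when each consumer's prefetch count is 1.
    while True:
        task = tasks.get()
        if task is None:          # sentinel: shut this worker down
            tasks.task_done()
            return
        with lock:
            results.append(task * 2)  # stand-in for real request handling
        tasks.task_done()

tasks = queue.Queue()
results, lock = [], threading.Lock()
pool = [threading.Thread(target=worker, args=(tasks, results, lock))
        for _ in range(4)]
for t in pool:
    t.start()
for n in range(10):
    tasks.put(n)                  # one shared queue feeds all workers
for _ in pool:
    tasks.put(None)               # one shutdown sentinel per worker
tasks.join()
for t in pool:
    t.join()
print(sorted(results))            # -> [0, 2, 4, ..., 18]
```

<p>No distributor process is needed: the queue itself does the "pick any idle worker" part, and a busy worker simply does not ask for the next message until it is done.</p>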
</body>
</html>