[Glass] load balancer configuration

Marten Feldtmann m at feldtmann.online
Wed Dec 27 00:14:59 PST 2023


Hello Otto,

As I mentioned before, I made some assumptions:

- I do not have sticky sessions, so a request may be handled by any 
waiting process. Gemstone/S is pretty fast with its shared memory - an 
advantage over the typical mainstream solutions (at least the ones my 
colleagues are using).

- all my applications are totally API driven

- Up to now I assumed that I categorize all calls into different groups, 
mostly according to their memory usage and execution time, to avoid the 
problem that "fast" calls have to wait behind a "long" call. 
Practically this worked well (most of the time). Sometimes I had to 
change the URL of a call between releases, because its execution time 
changed strongly after changes in the answering code and the call had 
to move to a different category (see the nginx sketch after this list).

- most of my API calls are fast calls; fewer are long calls with 
moderately higher memory usage, and fewer still are memory-intensive 
calls (which, perhaps for that reason, are also long API calls).

- using categories also has a disadvantage (which I am really not happy 
with): the processes waiting for long and memory-intensive calls sit 
idle a lot of the time, doing nothing. In that time they could be 
answering all the fast calls before the next long or memory-intensive 
call comes in.

- so it would be better to have processes that can answer all calls 
equally.
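
To make the category scheme concrete: it can be expressed directly in 
the load balancer configuration, routing by URL prefix to separate 
worker pools. A minimal nginx sketch (the pool names, ports and URL 
prefixes here are invented for illustration; Apache can do the same 
with mod_proxy_balancer):

    # Hypothetical: one upstream pool of worker processes per call category.
    upstream fast_pool   { server 127.0.0.1:8081; server 127.0.0.1:8082; }
    upstream long_pool   { server 127.0.0.1:8091; }
    upstream memory_pool { server 127.0.0.1:8095; }

    server {
        listen 80;
        # Route each category of API call to its own pool of processes.
        location /api/fast/   { proxy_pass http://fast_pool; }
        location /api/long/   { proxy_pass http://long_pool; }
        location /api/memory/ { proxy_pass http://memory_pool; }
    }

This is also why a call sometimes has to change its URL between 
releases: moving it to another category means moving it to another 
location block.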

So in my topaz processes (which handle "events" delivered via RabbitMQ) 
I do the following:

-> (a) ask RabbitMQ for a new message; the topaz process then either 
returns at once with a message or sleeps for a specific amount of time 
(here: 30 seconds)

-> (b) if it returns without a new message, I run some Smalltalk 
waiting code to handle Gemstone/S-specific background work and return 
to (a)

-> (c) otherwise it returns with a message. I handle the message and 
tell RabbitMQ that the work has been done (in combination with 
Gemstone/S optimistic locking and transaction handling), send some 
"answering" event back to RabbitMQ, and return to (a)

Pretty efficient and programming language independent.
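
Since the pattern is language independent, here is a minimal sketch of 
that (a)/(b)/(c) loop in Python with the pika client (the original runs 
as Gemstone/S Smalltalk in topaz; the queue names, the handler and the 
housekeeping hook below are assumptions, not the original code):

    import pika

    QUEUE = "api-events"   # hypothetical queue name

    def background_housekeeping():
        # (b) placeholder for the Gemstone/S-specific background work;
        # the details are not part of this post.
        pass

    def handle(body):
        # (c) hypothetical handler; in the original this runs under
        # Gemstone/S optimistic locking and transaction handling.
        return b"done"

    connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
    channel = connection.channel()
    channel.queue_declare(queue=QUEUE, durable=True)
    channel.queue_declare(queue="answers", durable=True)
    channel.basic_qos(prefetch_count=1)   # one unacked message per worker

    # (a) wait up to 30 seconds for a message; on timeout the generator
    # yields (None, None, None) and we loop around again.
    for method, properties, body in channel.consume(QUEUE, inactivity_timeout=30):
        if method is None:
            background_housekeeping()   # (b) no message arrived in time
            continue
        result = handle(body)                    # (c) process the message
        channel.basic_ack(method.delivery_tag)   # tell RabbitMQ it is done
        channel.basic_publish(exchange="",       # send the "answering" event
                              routing_key="answers", body=result)

Each worker process runs this same loop against the same queue, which 
is what lets the setup scale by simply starting more topaz processes.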

Marten




On 27.12.23 06:57, Otto Behrens wrote:
> Marten, thank you for your response.
>
> Yes, maybe the answer is to try a totally different way of delivering 
> requests. How would, for example, the RabbitMQ library help to 
> distribute the load better? If Apache or nginx does not have enough 
> knowledge to decide how to distribute the requests, why would another 
> delivery mechanism have better knowledge?
>
> I am assuming your thinking of using an MQ-based approach is based on 
> the idea that there is a single queue that channels requests into a 
> server that will then decide how to service these requests optimally. 
> Is that correct? By optimally here I mean distribute the work fairly 
> and process in parallel so that a full queue of requests is handled 
> as quickly as possible. Do you think that it is necessary for such a 
> server to know something about the requests so that it can handle them 
> differently?
>
> What I would like is to have a single queue, a pool of processes that 
> can handle requests and a distributor that picks any process that is 
> not busy to handle a request. If all processes are busy, wait for the 
> first one that becomes available and give it the next request to 
> handle. Easier said than done, I suppose.
>

That IS the way RabbitMQ works ...
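
With one queue and a pool of consumers, RabbitMQ delivers each message 
to a consumer and, once every consumer is busy with an unacknowledged 
message, holds the next one back until somebody becomes free. The key 
setting (also used in the sketch above) is the prefetch count; in pika:

    # Fair dispatch: never give a worker a new message before it has
    # acknowledged the current one, so work always goes to a free process.
    channel.basic_qos(prefetch_count=1)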