[Glass] load balancer configuration

Otto Behrens otto at finworks.biz
Tue Dec 26 21:57:57 PST 2023


Marten, thank you for your response.

Yes, maybe the answer is to try a totally different way of delivering
requests. How would, for example, the RabbitMQ library help to distribute
the load better? If Apache or nginx does not have enough knowledge to
decide how to distribute the requests, why would another delivery
mechanism have better knowledge?

I am assuming your thinking of using an MQ-based approach is based on the
idea that there is a single queue that channels requests into a server that
will then decide how to service these requests optimally. Is that correct?
By optimally I mean: distribute the work fairly and process in parallel so
that a full queue of requests is handled as quickly as possible. Do you
think that it is necessary for such a server to know something about the
requests so that it can handle them differently?

What I would like is to have a single queue, a pool of processes that can
handle requests, and a distributor that picks any process that is not busy
to handle a request. If all processes are busy, wait for the first one that
becomes available and give it the next request to handle. Easier said than
done, I suppose.
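
In pseudo-Python terms, this is roughly what I am after (only a sketch;
handle() is a made-up stand-in for servicing a request, and real process
management and error handling are omitted):

    import queue
    import threading

    requests = queue.Queue()        # the single queue of incoming requests

    def handle(req):
        ...                         # stand-in for actually servicing a request

    def worker():
        while True:
            req = requests.get()    # blocks until a request is available
            handle(req)
            requests.task_done()

    # the pool: whichever worker is free pulls the next request, so the
    # "distributor" falls out of the blocking queue for free
    for _ in range(8):
        threading.Thread(target=worker, daemon=True).start()

With a blocking queue, a full pool simply leaves requests waiting until the
first worker becomes free, which is exactly the behaviour described above.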

Otto Behrens

+27 82 809 2375
www.finworks.biz

Disclaimer & Confidentiality Note: This email is intended solely for the
use of the individual or entity named above as it may contain information
that is confidential and privileged. If you are not the intended recipient,
be advised that any dissemination, distribution or copying of this email is
strictly prohibited. FINWorks cannot be held liable by any person other
than the addressee in respect of any opinions, conclusions, advice or other
information contained in this email.


On Fri, Dec 22, 2023 at 10:45 AM Marten Feldtmann <m at feldtmann.online>
wrote:

> Hi,
>
> this is a very interesting topic, and I have actually not found any good
> solution under Apache (or nginx). I mentioned this in my talk about
> Gemstone/S at the London User Group last month: Apache (and perhaps also
> nginx) does not have the knowledge to deliver the http requests to
> Gemstone in a good way.
>
> I was more or less lucky in that the modelling tool I use for Gemstone
> allows me to categorize the API (HTTP) calls into different categories:
> normal, long and memory. This works most of the time using the standard
> balancer of Apache. It gets more complicated when the usage of a server
> grows higher and higher ... then there is theoretically a point where the
> whole communication collapses (noticeable to the end user as more and
> more UI errors showing up).
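>
> To illustrate the idea (ports and paths here are invented), the Apache
> side looks roughly like this: one balancer pool per category, and the
> call's category decides which pool serves it:
>
>     <Proxy "balancer://normal">
>         BalancerMember "http://127.0.0.1:9001"
>         BalancerMember "http://127.0.0.1:9002"
>     </Proxy>
>     <Proxy "balancer://long">
>         BalancerMember "http://127.0.0.1:9003"
>     </Proxy>
>     # the category of the API call selects the pool
>     ProxyPass "/api/long" "balancer://long"
>     ProxyPass "/api"      "balancer://normal"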
>
> Even with these categories, implementing background tasks seems to be a
> generally suitable pattern in API-oriented Gemstone/S solutions: the UI
> defines a background task with parameters and then waits for an event
> signalling that the work has been done.
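>
> As a sketch of that pattern (Python-ish, just to show the shape;
> run_long_job() is a made-up placeholder for the actual work):
>
>     import queue, threading, uuid
>
>     tasks = {}                  # task id -> result (None while pending)
>     work = queue.Queue()
>
>     def run_long_job(params):
>         ...                     # stand-in for the actual long work
>
>     def submit(params):
>         task_id = str(uuid.uuid4())
>         tasks[task_id] = None
>         work.put((task_id, params))
>         return task_id          # the UI keeps this and waits/polls on it
>
>     def worker():
>         while True:
>             task_id, params = work.get()
>             tasks[task_id] = run_long_job(params)
>
>     threading.Thread(target=worker, daemon=True).start()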
>
> The solution to this problem would be a totally different way of
> delivering the http requests to Gemstone. Around 2012/15 there was an
> experimental http server, Mongrel2, available, which works as an http
> server with a zeromq backend, so the backend processes pull their next
> request (however pull is implemented). With the software available today,
> it would perhaps be more useful to use the RabbitMQ library from Gemstone
> and write a mapper between http and rabbitmq.
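>
> Such a mapper could look roughly like this (a sketch in Python with the
> pika client, just to show the shape; the queue names are invented, and the
> backend gems would consume 'requests' and publish their answer to the
> reply_to queue):
>
>     import uuid
>     import pika
>
>     conn = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
>     ch = conn.channel()
>     ch.queue_declare(queue='requests')
>     reply_q = ch.queue_declare(queue='', exclusive=True).method.queue
>
>     def handle_http_request(body):
>         corr_id = str(uuid.uuid4())
>         ch.basic_publish(
>             exchange='', routing_key='requests',
>             properties=pika.BasicProperties(reply_to=reply_q,
>                                             correlation_id=corr_id),
>             body=body)
>         # block until whichever gem pulled the request replies
>         for _method, props, resp in ch.consume(reply_q):
>             if props.correlation_id == corr_id:
>                 ch.cancel()
>                 return resp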
>
> As I mentioned above: I have a working solution, but not a good solution -
> but it has been working for more than 8 years now.
>
> My Gemstone/S solutions are all API-oriented, sessions are persistent (so
> available in all processes), and the UI is written in JS.
>
> Marten
>
> On 20.12.23 13:04, Otto Behrens via Glass wrote:
>
> Hi,
>
> We are using nginx to load balance in front of GemStone, which runs a
> Seaside application. Some of our requests run too long (we are working hard
> to cut them down); in general, the time it takes to service a request in
> our application varies between 0.1 and about 4 seconds, and we are
> improving towards the lower end of that range.
>
> Because of this, we use the least_conn directive, and we persist session
> state so that any of our GemStone upstream sessions can service a request.
> Requests are generally load balanced to idle sessions, and in theory no
> request waits for another to be serviced. Perhaps this is not optimal and
> you have better suggestions. It has worked ok for a long time, but should
> we consider another approach?
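>
> For reference, the relevant part of our nginx setup is essentially this
> (the ports here are placeholders):
>
>     upstream gemstone {
>         least_conn;
>         server 127.0.0.1:9001;
>         server 127.0.0.1:9002;
>         server 127.0.0.1:9003;
>     }
>
>     server {
>         listen 80;
>         location / {
>             proxy_pass http://gemstone;
>         }
>     }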
>
> When our code misbehaves and a request takes, let's say, 60 seconds to
> handle, things go pear-shaped (yes, we want to eliminate these cases). The
> user clicks "back" in the browser or closes the browser, and nginx picks
> it up with:
> "epoll_wait() reported that client prematurely closed connection, so
> upstream connection is closed too while sending request to upstream"
>
> We suspect our problem is this: when that happens, it appears as if nginx
> then routes new requests to that same upstream, which is unable to handle
> them because it is still busy with the previous request (the one taking
> too long), even though some upstream sessions are sitting idle. Some users
> then end up with no response.
>
> Ideally, we would like to catch the situation in the GemStone session and
> stop processing the request (when nginx closes the upstream connection).
> Alternatively, we could set timeouts long enough so that if the browser
> prematurely closes the connection, nginx does not close the upstream
> connection.
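>
> The nginx directives that look relevant here (we have not tried this yet;
> the timeout value is a placeholder) would be something like:
>
>     location / {
>         proxy_pass http://gemstone;
>         # do not close the upstream connection when the client
>         # prematurely closes its side; let the gem finish cleanly
>         proxy_ignore_client_abort on;
>         # give slow requests time to complete
>         proxy_read_timeout 120s;
>     }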
>
> Do you have a suggestion for handling this? Does it make sense to align
> timeouts (which ones?) so that this does not happen?
>
> Thanks a lot
>
> Otto Behrens
>
> +27 82 809 2375
> www.finworks.biz
>
> Disclaimer & Confidentiality Note: This email is intended solely for the
> use of the individual or entity named above as it may contain information
> that is confidential and privileged. If you are not the intended recipient,
> be advised that any dissemination, distribution or copying of this email is
> strictly prohibited. FINWorks cannot be held liable by any person other
> than the addressee in respect of any opinions, conclusions, advice or other
> information contained in this email.
>
> _______________________________________________
> Glass mailing list
> Glass at lists.gemtalksystems.com
> https://lists.gemtalksystems.com/mailman/listinfo/glass
>
>

