[Glass] load balancer configuration

Otto Behrens otto at finworks.biz
Wed Dec 20 23:01:06 PST 2023


Hi Lourens,

Thanks for the reply.

> You probably need to look at a multi-thread/auto-scaling solution. Forcing
> a timeout will probably give the client a bad experience if it happens
> often.
>

I think this is what happens anyway: there are timeouts in the nginx setup
(e.g. proxy_read_timeout and proxy_connect_timeout) that time out the
upstream connections, and this causes a bad experience for the user. We
want to manage this better.
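For reference, the proxy timeouts in question look like this (the values
here are illustrative, not our production settings):

```nginx
location / {
    proxy_pass http://gemstone_upstreams;

    # Time allowed to establish a TCP connection to the upstream session.
    proxy_connect_timeout 5s;

    # Maximum time between two successive reads from the upstream;
    # a slow request that exceeds this is cut off with a 504.
    proxy_read_timeout 60s;

    # Maximum time between two successive writes to the upstream.
    proxy_send_timeout 60s;
}
```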


> Usually each socket has a queue to “hold new requests” while the first one
> is being processed. (Making that 1 will cause problems on systems under
> load).
>
> Thus GemStone needs to be able to process more than 1 request on that
> queue; if there are 5 requests on the queue and no. 1 is a slow one, nos.
> 2 - 5 will wait until no. 1 is complete.
>
>

We start up a limited number of sessions, and at this point it is OK if
some requests wait in a queue. Nginx routes each request to the upstream
with the fewest active connections, so the idea is that requests go to
queues that get serviced quickly. Yes, under load it may happen that some
requests get stuck behind a process that takes too long. We are working on
those.
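Our balancing setup is essentially the following (the upstream name and
ports are illustrative; one server entry per topaz session):

```nginx
upstream gemstone_upstreams {
    # Route each new request to the server with the fewest
    # active connections.
    least_conn;

    server 127.0.0.1:9001;
    server 127.0.0.1:9002;
    server 127.0.0.1:9003;
    server 127.0.0.1:9004;
}
```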

The particular problem I'm trying to prevent is that requests are routed to
a queue whose GemStone process is still busy (and unable to handle the
request) while other queues are empty (and able to handle requests). This
happens when a browser starts a heavy request and then navigates away
(which is what I tried to explain in my original email).
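One direction we are considering (untested, just a sketch): cap each
upstream at one in-flight connection with max_conns, so nginx stops
treating a busy session as available, and/or tell nginx not to abandon the
upstream connection when the client goes away, so least_conn keeps counting
the busy session as busy:

```nginx
upstream gemstone_upstreams {
    least_conn;
    # max_conns caps in-flight requests per session; excess requests
    # go to another server instead of queueing behind a busy one.
    # (With all servers at their limit, open-source nginx returns an
    # error rather than queueing, so this needs care.)
    server 127.0.0.1:9001 max_conns=1;
    server 127.0.0.1:9002 max_conns=1;
}

location / {
    proxy_pass http://gemstone_upstreams;
    # Keep the upstream connection open even when the browser
    # disconnects prematurely, instead of closing it mid-request.
    proxy_ignore_client_abort on;
}
```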


> Now if the GemStone process can process 2 requests per socket, requests
> 2 - 5 will not wait for call no. 1, so the fast calls will be processed as
> normal.
>
>
>
> If 2 slow requests sit in a row, you will have the same issue: 3 - 5 will
> need to wait. This is where autoscaling comes in. GemStone needs to notice
> that request 1 is slow and start a 3rd thread in case no. 2 is also a slow
> request. If no. 2 is slow, then start a 4th thread, etc. And if things
> speed up again, reduce the threads. Or always keep 3 threads and scale up
> when 1 request shows slowness.
>

>
> This will also mean each thread has its own logged-in session so that
> one thread does not persist 50% of another call's data.
>

This may be something that we can investigate later. We currently start up
multiple sessions that do not use threading. I have not encountered a setup
where a GemStone thread could have its own logged-in session (are there
references you can give for this, please?). I have also not encountered
"autoscaling" in the documentation or other literature (in this kind of
context) before. Can you refer me to where I can read more about this?


> In the interim you could increase the ports on GemStone and nginx if
> possible. This will not remove the problem but hopefully reduce the
> occurrence.
>

We are running 4 to 8 GemStone sessions (topaz) for web users (depending on
how busy the instance is). They seem to handle the load well, except in the
situation that we're struggling with.


>
>
> Regards
>
>
>
> Lourens
>
>
>
> *From:* Glass <glass-bounces at lists.gemtalksystems.com> * On Behalf Of *Otto
> Behrens via Glass
> *Sent:* Wednesday, December 20, 2023 2:04 PM
> *To:* glass at lists.gemtalksystems.com
> *Cc:* Iwan Vosloo <iwan at finworks.biz>
> *Subject:* [Glass] load balancer configuration
>
>
>
> Hi,
>
>
>
> We are using nginx to load balance in front of GemStone that runs a
> Seaside application. Some of our requests run too long (we are working hard
> to cut them down) and in general, the time it takes to service a request in
> our application varies between 0.1 and about 4 seconds. We are improving
> and getting more towards the lower end of that.
>
>
>
> Because of this, we use the least_conn directive and we persist session
> state so that any of our GemStone upstream sessions can service a request.
> Requests are generally load balanced to idle sessions, so in theory no
> request waits for another to get serviced. Perhaps this is not optimal and
> you have better suggestions. It has worked OK for a long time, but should
> we consider another approach?
>
>
>
> When our code misbehaves and a request takes, let's say, 60 seconds to
> handle, things go pear-shaped (yes, we want to eliminate these). The user
> clicks "back" in the browser or closes the browser, and nginx picks it up
> with:
>
> "epoll_wait() reported that client prematurely closed connection, so
> upstream connection is closed too while sending request to upstream"
>
>
>
> We suspect our problem is: when this happens, it appears as if nginx then
> routes requests to that same upstream, which is unable to handle them
> because it is busy handling the previous request (which is taking too
> long), even with some upstream sessions sitting idle. Some users then end
> up with no response.
>
>
>
> Ideally, we would like to catch the situation in the GemStone session and
> stop processing the request (when nginx closes the upstream connection).
> Alternatively, we could set timeouts long enough so that if the browser
> prematurely closes the connection, nginx does not close the upstream
> connection.
>
>
>
> Do you have a suggestion to handle this? Does it make sense to get
> timeouts (which ones?) to align so that this does not happen?
>
>
>
> Thanks a lot
>
> *Otto Behrens*
>
> +27 82 809 2375
>
> [image: FINWorks] <http://za.linkedin.com/in/waltherbehrens>
>
> www.finworks.biz
>
>
>
>
>
> Disclaimer & Confidentiality Note: This email is intended solely for the
> use of the individual or entity named above as it may contain information
> that is confidential and privileged. If you are not the intended recipient,
> be advised that any dissemination, distribution or copying of this email is
> strictly prohibited. FINWorks cannot be held liable by any person other
> than the addressee in respect of any opinions, conclusions, advice or other
> information contained in this email.
>
>
>
