[Glass] load balancer configuration

Otto Behrens otto at finworks.biz
Thu Dec 21 10:57:43 PST 2023


Hi Lourens,

> Least-connection load balancing on nginx does not mean each upstream will
> only get one call while the rest go to the other open ports. Under load,
> and depending on the setting below on each upstream, multiple calls will
> go to each upstream.
>

I'm not quite following you. I did not intend to say that each upstream will
get only one call; sorry if I was misleading. What I understand is that if an
upstream is busy with "one call", other upstreams should receive subsequent
calls. The upstream with the least number of connections waiting in the
queue should receive more requests than the others. At least, this is how I
understand it works. If I have it wrong, let me know.
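
For reference, a least-connection setup of the kind discussed here looks
roughly like this (the upstream name and ports are made up for illustration;
this is a sketch, not our actual configuration):

```nginx
upstream gemstone_pool {
    least_conn;                 # route to the upstream with the fewest active connections
    server 127.0.0.1:8001;     # hypothetical GemStone/Seaside session ports
    server 127.0.0.1:8002;
    server 127.0.0.1:8003;
}

server {
    listen 80;
    location / {
        proxy_pass http://gemstone_pool;
    }
}
```

If I understand the behaviour correctly, least_conn counts nginx's active
connections to each upstream, not work still in progress inside GemStone; so
once nginx closes an upstream connection, that upstream can look idle again
even if a slow request is still running there.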


>  Each socket/port (say, e.g., port 8001) has its own queue; see:
>
>
>
> makeListener: queueLength
>
> "Turns the receiver into a listening socket.
> The queueLength argument specifies the size of the listen backlog queue for
> incoming connections.
> Returns the receiver or nil if an error occurred."
>
>
>
> In my experience, nginx will give each port multiple requests until the
> above queue is full; thus fast calls can get stuck behind slow calls.
> Making this parameter 1 will in essence force nginx to give each port only
> one request, so if 10 ports are available and 20 requests come in, things
> will start to go pear-shaped if some are slow requests.
>
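
The backlog idea in the quoted makeListener: comment can be sketched with a
plain TCP socket; socket.listen(backlog) plays the same role as
queueLength here (ports and names are illustrative only):

```python
import socket

# A listening socket with a backlog of 1: the OS queues at most one
# pending connection on the port until the server calls accept.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick any free port
server.listen(1)                # backlog of 1, like makeListener: 1
port = server.getsockname()[1]

# A client connects; until accept is called, the connection sits in the
# backlog queue -- this is where a fast call can wait behind a slow one.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
conn, addr = server.accept()    # dequeue the waiting connection
conn.close()
client.close()
server.close()
```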

If many slow calls are requested simultaneously, then yes, I agree, the
distribution can be unfortunate. If there are fewer slow calls and some
upstreams manage to serve faster calls, at least some calls will go
through. It is not perfect, yes.

That is not our problem, though. We see calls routed to an upstream that is
unable to handle the call.


>  See: https://github.com/jgfoster/WebGS for a possible idea about the
> threading.
>

Maybe.


>  Regarding autoscaling, see the Kubernetes concepts around this.
>
>
This is not in our scope and does not make sense for us as a solution to
this problem.


>
> Regards
>
> Lourens
>
> *From:* Otto Behrens <otto at finworks.biz>
> *Sent:* Thursday, December 21, 2023 9:01 AM
> *To:* Lourens van Nieuwenhuizen <LvNieuwenhuizen at momentum.co.za>
> *Cc:* Iwan Vosloo <iwan at finworks.biz>; glass at lists.gemtalksystems.com
> *Subject:* Re: [Glass] load balancer configuration
>
>
> Hi Lourens,
>
> Thanks for the reply.
>
> You probably need to look at a multi-threaded/auto-scaling solution.
> Forcing a timeout will probably give the client a bad experience if it
> happens often.
>
>
>
> I think this is what happens anyway: there are timeouts in the nginx
> setup (e.g. proxy_read_timeout and proxy_connect_timeout) that will time
> out the upstream connections. And this causes a bad experience for the
> user. We want to manage this better.
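
For concreteness, these are the kinds of directives meant here (the upstream
name and the values are examples only, not our production settings):

```nginx
location / {
    proxy_pass http://gemstone_pool;   # hypothetical upstream name
    proxy_connect_timeout 5s;          # give up if the upstream won't accept the connection
    proxy_read_timeout    60s;         # give up if the upstream stops sending a response
}
```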
>
>
>
> Usually each socket has a queue to “hold new requests” while the first one
> is being processed. (Making that 1 will cause problems on systems under
> load.)
>
> Thus GemStone needs to be able to process more than 1 request on that
> queue; otherwise, if there are 5 requests on the queue and no. 1 is a slow
> one, nos. 2–5 will wait until no. 1 is complete.
>
>
>
> We start up a limited number of sessions, and at this point it is OK if
> there are some requests waiting in a queue. Nginx decides which upstream
> has the least number of connections and routes the request to that one. So
> the idea is that requests are routed to queues that get serviced quickly.
> Yes, under load it may happen that some requests get stuck behind a process
> that takes too long. We are working on those.
>
>
>
> The particular problem that I'm trying to prevent is that requests are
> routed to a queue where the GemStone process is still busy (and unable to
> handle the request), while other queues are empty (and able to handle
> requests). This is caused by the fact that a browser starts a heavy process
> and then navigates away (this is what I tried to explain in my original
> email).
>
>
>
> Now if the GemStone process can process 2 requests per socket, requests
> 2–5 will not wait for call no. 1, and the fast calls will be processed as
> normal.
>
>
>
> If 2 slow requests sit in a row, you will have the same issue: nos. 3–5
> will need to wait. This is where autoscaling comes in. GemStone needs to
> note that request 1 is slow and start a 3rd thread in case no. 2 is also a
> slow request. If no. 2 is slow, then start a 4th thread, etc. And if things
> speed up again, reduce the threads. Or always keep 3 threads and scale up
> when 1 request shows slowness.
>
>
>
> This will also mean each thread has its own logged-in session, so that
> one thread does not persist 50% of another call’s data.
>
>
>
> This may be something that we can investigate later. We currently start up
> multiple sessions that do not use threading. I have not encountered a setup
> where a GS thread could have its own logged-in session (are there
> references that you can give for this, please?). I have not encountered
> "autoscaling" in the documentation or other literature (for this kind of
> context) before. Can you refer me to where I can read more about this?
>
>
>
> In the interim you could increase the number of ports on GemStone and
> nginx if possible. This will not remove the problem, but it should reduce
> its occurrence.
>
>
>
> We are running 4 to 8 GemStone sessions (topaz) for web users (depending
> on how busy the instance is). They seem to handle the load well, except in
> the situation that we're struggling with.
>
>
> Regards
>
> Lourens
>
>
> *From:* Glass <glass-bounces at lists.gemtalksystems.com> *On Behalf Of *Otto
> Behrens via Glass
> *Sent:* Wednesday, December 20, 2023 2:04 PM
> *To:* glass at lists.gemtalksystems.com
> *Cc:* Iwan Vosloo <iwan at finworks.biz>
> *Subject:* [Glass] load balancer configuration
>
>
>
> Hi,
>
>
>
> We are using nginx to load balance in front of GemStone that runs a
> Seaside application. Some of our requests run too long (we are working hard
> to cut them down) and in general, the time it takes to service a request in
> our application varies between 0.1 and about 4 seconds. We are improving
> and getting more towards the lower end of that.
>
>
>
> Because of this, we use the least_conn directive, and we persist session
> state so that we can use any of our GemStone upstream sessions to service
> a request. Requests are generally load balanced to idle sessions, and in
> theory no request waits for another to be serviced. Perhaps this is not
> optimal and you have better suggestions. It has worked OK for a long time,
> but should we consider another approach?
>
>
>
> When our code misbehaves and a request takes, let's say, 60 seconds to
> handle, things go pear-shaped (yes, we want to eliminate these). The user
> clicks "back" in the browser or closes the browser, and nginx picks it up
> with:
>
> "epoll_wait() reported that client prematurely closed connection, so
> upstream connection is closed too while sending request to upstream"
>
>
>
> We suspect our problem is this: when this happens, it appears as if nginx
> then routes requests to that same upstream, which is unable to handle them
> because it is still busy with the previous request (which is taking too
> long), even with some upstream sessions sitting idle. Some users then end
> up with no response.
>
>
>
> Ideally, we would like to catch the situation in the GemStone session and
> stop processing the request (when nginx closes the upstream connection).
> Alternatively, we could set timeouts long enough so that if the browser
> prematurely closes the connection, nginx does not close the upstream
> connection.
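
On that alternative: as I read the nginx documentation, there is a directive
that keeps the upstream request running even when the client closes its
connection (a sketch only, with a hypothetical upstream name; we have not
tried this):

```nginx
location / {
    proxy_pass http://gemstone_pool;   # hypothetical upstream name
    proxy_ignore_client_abort on;      # keep the upstream connection open when the client disconnects
}
```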
>
>
>
> Do you have a suggestion for handling this? Does it make sense to align
> the timeouts (which ones?) so that this does not happen?
>
>
>
> Thanks a lot
>
> *Otto Behrens*
>
> +27 82 809 2375
>
> <http://za.linkedin.com/in/waltherbehrens>
>
> www.finworks.biz
>
> Disclaimer & Confidentiality Note: This email is intended solely for the
> use of the individual or entity named above as it may contain information
> that is confidential and privileged. If you are not the intended recipient,
> be advised that any dissemination, distribution or copying of this email is
> strictly prohibited. FINWorks cannot be held liable by any person other
> than the addressee in respect of any opinions, conclusions, advice or other
> information contained in this email.
>
>
>
>
>
> This electronic message, including any attachments, is intended only for
> the use of the individual or entity to which it is addressed and may
> contain information that is privileged or confidential. If you are not the
> intended recipient, please notify us immediately by replying to this
> message and then delete this message from your system. Any use,
> dissemination, distribution or reproduction of this message or any
> attachments by unintended recipients is unauthorised and may be unlawful.
> We have taken precautions to minimise the risk of transmitting software
> viruses, but we advise you to perform your own virus checks on any
> attachment to this message. We do not accept liability for any loss or
> damage caused by software viruses.