[Glass] load balancer configuration
Brodbeck Andreas
andreas.brodbeck at mindclue.ch
Wed Dec 20 23:34:17 PST 2023
Hi Otto
(This is probably just a side note, and not a solution to your GemStone specific problem).
I totally "outsourced" the load balancing to HAProxy (www.haproxy.org) behind nginx. Nginx was not configurable enough, in my opinion (at least the open-source community version). So I chained HAProxy into the stack and never looked back.
HAProxy is a beast of a rock-solid load balancer, and it does its job extremely well when balancing across topaz instances. It has tons of options to tweak for different critical situations and can be tailored to the capabilities of the GemStone internal web server. And it comes with detailed logging and a web interface for a statistics overview, too.
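To illustrate, here is a minimal, untested haproxy.cfg sketch of the kind of setup I mean. All names, ports, paths, and timeout values are examples, not my actual configuration; the key idea is `maxconn 1` per gem, so a single-threaded topaz gem never gets a second request while busy:

```
# nginx proxies to this frontend; HAProxy balances across the gems.
defaults
    mode http
    timeout connect 5s
    timeout client  60s
    timeout server  60s

frontend seaside_in
    bind 127.0.0.1:8000
    default_backend topaz_gems

backend topaz_gems
    balance leastconn
    # one in-flight request per gem; further requests queue in HAProxy
    server gem1 127.0.0.1:9001 maxconn 1 check
    server gem2 127.0.0.1:9002 maxconn 1 check
    server gem3 127.0.0.1:9003 maxconn 1 check

# built-in statistics page
listen stats
    bind 127.0.0.1:8404
    stats enable
    stats uri /stats
```

With `maxconn 1`, a request that takes 60 seconds ties up only its own gem; new requests queue in HAProxy and go to the next gem that frees up, rather than being sent to a busy one.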
Cheers!
Andreas
-- -- -- -- -- -- -- --
Andreas Brodbeck
Softwaremacher
www.mindclue.ch
> On 20.12.2023, at 13:04, Otto Behrens via Glass <glass at lists.gemtalksystems.com> wrote:
>
> Hi,
>
> We are using nginx to load balance in front of GemStone, which runs a Seaside application. Some of our requests run too long (we are working hard to cut them down), and in general the time it takes to service a request in our application varies between 0.1 and about 4 seconds. We are improving and moving more towards the lower end of that range.
>
> Because of this, we use the least_conn directive and we persist session state so that any of our GemStone upstream sessions can service a request. Requests are generally load balanced to idle sessions, so in theory no request waits for another to be serviced. Perhaps this is not optimal and you have better suggestions. It has worked OK for a long time, but should we consider another approach?
>
> When our code misbehaves and a request takes, let's say, 60 seconds to handle, things go pear-shaped (yes, we want to eliminate these long requests). The user clicks "back" in the browser or closes it, and nginx picks it up with:
> "epoll_wait() reported that client prematurely closed connection, so upstream connection is closed too while sending request to upstream"
>
> We suspect our problem is this: when that happens, it appears as if nginx then routes requests to that same upstream, which cannot handle them because it is still busy with the previous request (the one taking too long), even while other upstream sessions sit idle. Some users then end up with no response.
>
> Ideally, we would like to catch the situation in the GemStone session and stop processing the request (when nginx closes the upstream connection). Alternatively, we could set timeouts long enough so that if the browser prematurely closes the connection, nginx does not close the upstream connection.
>
> Do you have a suggestion to handle this? Does it make sense to get timeouts (which ones?) to align so that this does not happen?
>
> Thanks a lot
> Otto Behrens
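P.S. On your timeout question: in nginx the client-abort behaviour and the upstream timeouts can be decoupled. An untested sketch (upstream names, ports, and timeout values here are just examples):

```
upstream gemstone {
    least_conn;
    server 127.0.0.1:9001;
    server 127.0.0.1:9002;
}

server {
    location / {
        proxy_pass http://gemstone;
        # keep the upstream request running even if the browser
        # closes the connection early
        proxy_ignore_client_abort on;
        # let slow requests finish instead of timing out
        proxy_read_timeout 120s;
        proxy_connect_timeout 5s;
        # do not replay a request against another gem on failure
        proxy_next_upstream off;
    }
}
```

`proxy_ignore_client_abort on` addresses exactly the "client prematurely closed connection" case: nginx finishes the upstream exchange instead of closing the upstream connection. It does not, however, free the busy gem any sooner, which is why I moved the balancing itself to HAProxy.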