[Glass] load balancer configuration

Otto Behrens otto at finworks.biz
Thu Dec 21 02:26:44 PST 2023


Thanks, Andreas. That may well be the solution.

Why do you use nginx at all then? Can you not use HAproxy as a web server?

How do you configure nginx in this setup? Do you have a single upstream
that points to HAProxy?
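
To make sure I understand the chaining, something like this is roughly what
I picture on the nginx side (the upstream name, addresses, ports and timeout
values below are placeholders I made up, not taken from your setup):

    upstream seaside {
        # HAProxy does the actual balancing, so a single entry
        # pointing at the local HAProxy frontend is enough here
        server 127.0.0.1:8000;
    }

    server {
        listen 80;
        server_name app.example.com;

        location / {
            proxy_pass http://seaside;
            proxy_set_header Host $host;

            # long enough for the slowest legitimate request; with
            # proxy_ignore_client_abort on, nginx keeps the upstream
            # request running even if the browser disconnects early
            proxy_read_timeout 120s;
            proxy_ignore_client_abort on;
        }
    }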

Kind regards

Otto Behrens

+27 82 809 2375
www.finworks.biz

Disclaimer & Confidentiality Note: This email is intended solely for the
use of the individual or entity named above as it may contain information
that is confidential and privileged. If you are not the intended recipient,
be advised that any dissemination, distribution or copying of this email is
strictly prohibited. FINWorks cannot be held liable by any person other
than the addressee in respect of any opinions, conclusions, advice or other
information contained in this email.


On Thu, Dec 21, 2023 at 9:34 AM Brodbeck Andreas <
andreas.brodbeck at mindclue.ch> wrote:

> Hi Otto
>
> (This is probably just a side note, and not a solution to your GemStone
> specific problem).
>
> I totally "outsourced" the load balancing to HAProxy (www.haproxy.org)
> behind nginx. Nginx was not really configurable enough, in my opinion (at
> least, the open-source community version). So I chained HAProxy into the
> stack and never looked back.
>
> HAProxy is a beast of a rock-solid load balancer, and it does its job
> extremely well for balancing to topaz instances. It has tons of options to
> tweak for different critical situations, tailored to the capabilities of
> the GemStone internal web server. And it comes with detailed logging and a
> web interface for a statistics overview, too.
>
> Cheers!
> Andreas
>
>
> -- -- -- -- -- -- -- --
> Andreas Brodbeck
> Softwaremacher
> www.mindclue.ch
>
>
> > On 20.12.2023 at 13:04, Otto Behrens via Glass <
> glass at lists.gemtalksystems.com> wrote:
> >
> > Hi,
> >
> > We are using nginx to load balance in front of GemStone, which runs a
> Seaside application. Some of our requests run too long (we are working hard
> to cut them down), and in general the time it takes to service a request in
> our application varies between 0.1 and about 4 seconds. We are improving
> and moving more towards the lower end of that range.
> >
> > Because of this, we use the least_conn directive and we persist session
> state so that any of our GemStone upstream sessions can service a request.
> Requests are generally load balanced to idle sessions, so in theory no
> request has to wait for another to be serviced. Perhaps this is not optimal
> and you have better suggestions. It has worked OK for a long time, but
> should we consider another approach?
> >
> > When our code misbehaves and a request takes, let's say, 60 seconds to
> handle, things go pear-shaped (yes, we want to eliminate those cases). The
> user clicks "back" in the browser or closes it, and nginx picks it up with:
> > "epoll_wait() reported that client prematurely closed connection, so
> upstream connection is closed too while sending request to upstream"
> >
> > We suspect our problem is this: when that happens, nginx appears to keep
> routing requests to that same upstream, which cannot handle them because it
> is still busy with the previous, long-running request, even while other
> upstream sessions sit idle. Some users then end up with no response.
> >
> > Ideally, we would like to catch the situation in the GemStone session
> and stop processing the request (when nginx closes the upstream
> connection). Alternatively, we could set timeouts long enough so that if
> the browser prematurely closes the connection, nginx does not close the
> upstream connection.
> >
> > Do you have a suggestion for handling this? Does it make sense to align
> the timeouts (and which ones?) so that this does not happen?
> >
> > Thanks a lot
> > Otto Behrens
>
>
>
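
PS: Based on your description, a backend along these lines is roughly what I
imagine on the HAProxy side (the gem ports, instance count, maxconn and
timeout values are guesses on my part, not your actual configuration):

    defaults
        mode http
        timeout connect 5s
        timeout client  130s
        timeout server  120s

    frontend seaside_in
        # nginx proxies to this address
        bind 127.0.0.1:8000
        default_backend seaside_gems

    backend seaside_gems
        balance leastconn
        # drop a request still sitting in the queue if the client
        # has already closed the connection
        option abortonclose
        # maxconn 1: a gem busy with a long request never gets a
        # second request queued behind it on that particular server
        server gem1 127.0.0.1:9001 check maxconn 1
        server gem2 127.0.0.1:9002 check maxconn 1
        server gem3 127.0.0.1:9003 check maxconn 1

    # the statistics web interface mentioned above
    listen stats
        bind 127.0.0.1:8404
        stats enable
        stats uri /stats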