[Glass] load balancer configuration

Paul Baumann plbaumann at gmail.com
Wed Dec 20 05:48:20 PST 2023


Use VSD to see if a tempObjSpace improvement coincides with the delay (indicating a reclaim). Even if your application code isn't creating and disposing of many objects, it is a traditional GemStone issue that iterating with a complex block will. GemStone since 3.0 is supposed to have improved this, but I've never verified that. My experience was more with building a framework that allowed application code to be changed to use only simple blocks; once ALL complex blocks were eliminated from tuned code, execution time often dropped by over 90% (and the occasional slowness disappeared).
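To illustrate the distinction above (a minimal Smalltalk sketch, not code from the original post): a "complex" block references state outside itself, such as an outer temporary or self, and so must carry a context, while a "simple" block is self-contained and cheaper to execute repeatedly.

```smalltalk
"Hedged example: 'factor' is a hypothetical outer temporary."
| factor complexBlock simpleBlock |
factor := 3.
complexBlock := [:each | each * factor].   "complex: captures the outer temp 'factor'"
simpleBlock  := [:each | each * 3].        "simple: references nothing outside itself"
#(1 2 3) collect: complexBlock.
#(1 2 3) collect: simpleBlock.
```

In tight iteration over large collections, rewriting the first form into the second is the kind of change the framework mentioned above enabled.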



On December 20, 2023 7:04:20 AM EST, Otto Behrens via Glass <glass at lists.gemtalksystems.com> wrote:
>Hi,
>
>We are using nginx to load balance in front of GemStone sessions running a
>Seaside application. Some of our requests run too long (we are working hard
>to cut them down); in general, the time it takes to service a request in our
>application varies between 0.1 and about 4 seconds, and we are steadily
>moving towards the lower end of that range.
>
>Because of this, we use the least_conn directive and we persist session
>state so that any of our GemStone upstream sessions can service a request.
>Requests are generally load balanced to idle sessions, and in theory no
>request waits behind another to be serviced. Perhaps this is not optimal
>and you have better suggestions. It has worked ok for a long time, but
>should we consider another approach?
>
>When our code misbehaves and a request takes, say, 60 seconds to handle,
>things go pear-shaped (yes, we want to eliminate such requests). The user
>clicks "back" in the browser or closes the browser, and nginx reports:
>"epoll_wait() reported that client prematurely closed connection, so
>upstream connection is closed too while sending request to upstream"
>
>We suspect our problem is this: when that happens, it appears as if nginx
>then routes requests to that same upstream, which is unable to handle them
>because it is still busy with the previous (overly long) request, even
>while other upstream sessions sit idle. Some users then end up with no
>response.
>
>Ideally, we would like to catch the situation in the GemStone session and
>stop processing the request (when nginx closes the upstream connection).
>Alternatively, we could set timeouts long enough so that if the browser
>prematurely closes the connection, nginx does not close the upstream
>connection.
>
>Do you have a suggestion for handling this? Does it make sense to align
>timeouts (which ones?) so that this does not happen?
>
>Thanks a lot
>
>Otto Behrens
>
>+27 82 809 2375
>www.finworks.biz
>
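The load-balancing setup described in the post might be sketched roughly as follows (a minimal nginx configuration; the upstream name, ports, and the max_conns cap are assumptions, not taken from the original message):

```nginx
upstream gemstone_seaside {
    least_conn;                          # route to the session with the fewest active requests
    server 127.0.0.1:9001 max_conns=1;   # cap in-flight requests per gem (assumed ports)
    server 127.0.0.1:9002 max_conns=1;
    server 127.0.0.1:9003 max_conns=1;
}

server {
    listen 80;
    location / {
        proxy_pass http://gemstone_seaside;
    }
}
```

With max_conns=1, nginx will not send a second request to a gem that is still busy, which addresses the "routes requests to that same upstream" symptom. Note that in open-source nginx the max_conns limit is tracked per worker process unless a shared memory zone is configured.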
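The timeout-alignment idea raised in the post could look something like this (directive values are illustrative assumptions; proxy_ignore_client_abort keeps nginx from closing the upstream connection when the browser disconnects prematurely):

```nginx
location / {
    proxy_pass http://gemstone_seaside;     # upstream name is a placeholder
    proxy_ignore_client_abort on;           # don't close the upstream when the client goes away
    proxy_connect_timeout 5s;
    proxy_send_timeout 75s;
    proxy_read_timeout 75s;                 # set longer than the worst-case (60s) request
    proxy_next_upstream error timeout;      # retry another gem only on connect errors/timeouts
}
```

This does not cancel the long-running request in the GemStone session; it only prevents the premature client close from cascading into a closed upstream connection and a misrouted retry.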

