[Glass] load balancer configuration

Iwan Vosloo iwan at reahl.org
Wed Dec 20 23:57:25 PST 2023


Just an idea - I am not sure whether it is even possible to implement:

Is there a way to detect, inside a running hyper (via a signal or 
similar), that nginx has closed the connection to that backend hyper?

I'm just thinking that if this can be detected, the orphaned/misbehaving 
hyper could be killed off and restarted instead of continuing to consume 
resources unnecessarily.

On 2023/12/21 08:03, Otto Behrens via Glass wrote:
> Thanks, Paul.
> 
> Indeed, we've been improving a lot around block complexity, especially 
> when iterating through large collections. Our code is often just not 
> optimally written, and this is where the significant gains are. The 
> system is big and it will take time to fix all these issues. In the 
> meantime, we would just like to handle the situation better and avoid 
> requests being routed to the wrong upstream (GS session).
> 
> On Wed, Dec 20, 2023 at 3:48 PM Paul Baumann <plbaumann at gmail.com 
> <mailto:plbaumann at gmail.com>> wrote:
> 
>     Use VSD to see whether an improvement in tempObjSpace (caused by a
>     reclaim) coincides with the delay. Even if your application code
>     isn't creating and disposing of many objects, it is a traditional
>     GS issue that iterating with a complex block will. GS is supposed
>     to have made improvements to this since 3.0, but I've never
>     verified that. My experience was more with building a framework
>     that allowed application code to be changed to use only simple
>     blocks, often giving over a 90% reduction in execution time (and no
>     more occasional slowness) once ALL complex blocks were eliminated
>     from the tuned code.
> 
> 
> 
>     On December 20, 2023 7:04:20 AM EST, Otto Behrens via Glass
>     <glass at lists.gemtalksystems.com
>     <mailto:glass at lists.gemtalksystems.com>> wrote:
> 
>         Hi,
> 
>         We are using nginx to load balance in front of GemStone that
>         runs a Seaside application. Some of our requests run too long
>         (we are working hard to cut them down) and, in general, the
>         time it takes to service a request in our application varies
>         between 0.1 and about 4 seconds. We are improving and moving
>         closer to the lower end of that range.
> 
>         Because of this, we use the least_conn directive, and we
>         persist session state so that any of our GemStone upstream
>         sessions can service a request. Requests are generally load
>         balanced to idle sessions, so in theory no request has to wait
>         for another to be serviced. Perhaps this is not optimal and you
>         have better suggestions. It has worked ok for a long time, but
>         should we consider another approach?
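> 
>         For reference, here is a minimal sketch of the kind of upstream
>         block we mean (the ports and number of sessions are made up for
>         illustration; our real configuration differs):
> 
>         upstream seaside_backend {
>             least_conn;             # pick the session with the fewest active connections
>             server 127.0.0.1:9001;  # GemStone/Seaside session (hypothetical port)
>             server 127.0.0.1:9002;
>             server 127.0.0.1:9003;
>         }
> 
>         server {
>             listen 80;
>             location / {
>                 proxy_pass http://seaside_backend;
>             }
>         }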
> 
>         When our code misbehaves and a request takes, say, 60 seconds
>         to handle, things go pear-shaped (yes, we want to eliminate
>         such requests). The user clicks "back" in the browser or closes
>         the browser, and nginx picks it up, logging:
>         "epoll_wait() reported that client prematurely closed
>         connection, so upstream connection is closed too while sending
>         request to upstream"
> 
>         We suspect our problem is that when this happens, nginx then
>         routes subsequent requests to that same upstream, which cannot
>         handle them because it is still busy with the previous
>         (long-running) request, even though other upstream sessions are
>         sitting idle. Those users then end up with no response.
> 
>         Ideally, we would like to detect in the GemStone session that
>         nginx has closed the upstream connection and then stop
>         processing the request. Alternatively, we could set the
>         timeouts long enough that, even if the browser prematurely
>         closes its connection, nginx does not close the upstream
>         connection.
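> 
>         By way of illustration, a rough sketch of the directives
>         involved (the timeout values are placeholders, and
>         proxy_ignore_client_abort is only one way to stop nginx from
>         closing the upstream connection when the client disconnects;
>         whether it is appropriate here is part of the question):
> 
>         location / {
>             proxy_pass http://seaside_backend;
> 
>             # Keep waiting for the upstream response even if the
>             # browser has gone away, so the busy session stays
>             # connected and least_conn keeps counting it.
>             proxy_ignore_client_abort on;
> 
>             # Align these with the worst-case request time so nginx
>             # does not give up on the upstream before the gem finishes.
>             proxy_connect_timeout 5s;
>             proxy_send_timeout    90s;
>             proxy_read_timeout    90s;
>         }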
> 
>         Do you have a suggestion for handling this? Does it make sense
>         to align timeouts (and if so, which ones?) so that this does
>         not happen?
> 
>         Thanks a lot
> 
>         Otto Behrens
> 
>         +27 82 809 2375
> 
>         FINWorks <http://za.linkedin.com/in/waltherbehrens>
>         www.finworks.biz <http://www.finworks.biz/>
> 
>         Disclaimer & Confidentiality Note: This email is intended solely
>         for the use of the individual or entity named above as it may
>         contain information that is confidential and privileged. If you
>         are not the intended recipient, be advised that any
>         dissemination, distribution or copying of this email is strictly
>         prohibited. FINWorks cannot be held liable by any person other
>         than the addressee in respect of any opinions, conclusions,
>         advice or other information contained in this email.
> 
> 
> _______________________________________________
> Glass mailing list
> Glass at lists.gemtalksystems.com
> https://lists.gemtalksystems.com/mailman/listinfo/glass
