<div dir="ltr"><div dir="ltr">Thank you for your response, Johan. Compliments of the season to you, and may the coming year be the best.<div><br></div><div><div><div dir="ltr" class="gmail_signature"><div dir="ltr"><div>I would like to understand this better. Because of the disadvantages that you list (and in the article), we were always under the impression that our app would do better without session affinity and with a fair load balancer (because we know we have requests that take too long and we do not want to block unlucky users). The major disadvantage of our approach is that we persist session state so that it can be shared across sessions. This must be a heavyweight approach because the stacks are deep, and with Ajax calls this becomes worse.</div><div><br></div><div>What I do not understand is how our application can work if a session fires asynchronous Ajax requests. We conceptually have to process an Ajax response as it re-renders and replaces part of the document, which could end in a mess if multiple requests are handled in parallel. I don't understand how the app would be faster if these requests are sent to the same GS session, unless the session state is only kept temporarily in the session. </div><div><br></div><div>We prevent multiple clicks on the same button with a bit of JavaScript (as soon as you click a button, we replace it with text). I must also admit that I don't see how the locking problem you describe in the article would manifest under normal circumstances. From what we've seen, it happens when a user randomly clicks around in the browser (including clicking the browser's back button). The user would be delayed. 
But does this mean that other requests routed to the same session while it "delays for a bit" would also be blocked?</div></div></div></div></div></div><br><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div>Mind that with Seaside, requests for the same session cannot be processed in parallel. </div><div>This is why, at Yesplan, I use sticky sessions to route all requests for a single session to the same upstream. An old write-up about that approach is still online (and we still do it this way): [1].</div></blockquote><div><br></div><div>How do you get requests for the same session in parallel? I suspect that with JavaScript in the browser you cannot prevent concurrent requests without explicitly queuing them yourself. I don't understand enough of this to give an opinion. I just don't see how the app will work properly, because we replace the same HTML tree in the document with different Ajax responses.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div>We use the Seaside session URL query parameter (‘_s’) to hash requests to an upstream. Depending on the hash distribution, this may have the downside that load is not evenly distributed across all upstreams.<br></div><div>To possibly solve that, we have been thinking of letting the Seaside application add another parameter to the generated URLs based on how many sessions exist and, as such, letting Seaside control the Nginx load balancing distribution.</div></blockquote><div><br></div><div>Does this imply you will have to somehow share session state across sessions? 
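For concreteness, the ‘_s’-based hashing described above could be sketched in nginx roughly like this (upstream names, ports, and counts are invented for illustration; only the hashing idea is taken from the thread):

```nginx
# Sketch only: hash requests on Seaside's '_s' query parameter so that
# all requests of one session reach the same upstream.
upstream seaside_backends {
    # nginx exposes the query parameter '_s' as the variable $arg__s
    hash $arg__s consistent;
    server 127.0.0.1:8383;
    server 127.0.0.1:8384;
    server 127.0.0.1:8385;
}

server {
    listen 80;
    location / {
        proxy_pass http://seaside_backends;
    }
}
```

One caveat: requests without an ‘_s’ parameter (such as the first request of a new session) all hash to the same bucket, so the overall distribution is only as even as the distribution of the parameter values themselves.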
How would you do that (if persisting is too expensive)?</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div>Also, in our experience, when one upstream times out, Nginx will re-route the request to another upstream. This, of course, still means the end user is waiting for the request longer than necessary.<br></div><div>Having an nginx configuration that does sticky sessions, but fails over when a request is not accepted within a specified amount of time, would be the ideal situation imho. </div></blockquote><div><br></div><div>Yes, this sounds good.</div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><br></div><div>Johan</div><div><br></div><div>[1] <a href="https://jbrichau.github.io/blog/when-to-use-http-session-affinity-in-glass" target="_blank">https://jbrichau.github.io/blog/when-to-use-http-session-affinity-in-glass</a></div><div><br><blockquote type="cite"><div>On 20 Dec 2023, at 13:04, Otto Behrens via Glass <<a href="mailto:glass@lists.gemtalksystems.com" target="_blank">glass@lists.gemtalksystems.com</a>> wrote:</div><br><div><div dir="ltr">Hi,<div><br></div><div>We are using nginx to load balance in front of GemStone, which runs a Seaside application. Some of our requests run too long (we are working hard to cut them down) and, in general, the time it takes to service a request in our application varies between 0.1 and about 4 seconds. We are improving and getting closer to the lower end of that range. </div><div><br></div><div>Because of this, we use the least_conn directive and we persist session state so that we can use any of our GemStone upstream sessions to service a request. Requests are generally load balanced to idle sessions, so in theory no request waits for another to be serviced. Perhaps this is not optimal and you have better suggestions. 
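A minimal sketch of such a least_conn setup, together with the nginx directives that govern the timeout, retry, and client-abort behaviour discussed in this thread (all addresses and values are illustrative, not taken from any actual configuration):

```nginx
# Sketch only: least_conn balancing with explicit timeout handling.
upstream gemstone_backends {
    least_conn;                 # prefer the upstream with the fewest active connections
    server 127.0.0.1:8383;
    server 127.0.0.1:8384;
}

server {
    listen 80;
    location / {
        proxy_pass http://gemstone_backends;
        proxy_connect_timeout 5s;           # give up connecting to an upstream quickly
        proxy_read_timeout 60s;             # how long to wait for a response
        proxy_next_upstream error timeout;  # retry another upstream on failure
        # Let the upstream finish its work even if the browser closes
        # its connection prematurely:
        proxy_ignore_client_abort on;
    }
}
```

With proxy_ignore_client_abort on, nginx keeps waiting for the upstream instead of closing the upstream connection when the client goes away; this may avoid the premature-close scenario, at the cost of tying up the upstream session for the full duration of the request.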
It has worked ok for a long time, but should we consider another approach?</div><div><br></div><div>When our code misbehaves and a request takes, say, 60 seconds to handle, things go pear-shaped (yes, we want to eliminate such requests). The user clicks "back" in the browser or closes the browser, and nginx picks it up with: </div><div>"epoll_wait() reported that client prematurely closed connection, so upstream connection is closed too while sending request to upstream"<br></div><div><br></div><div>We suspect our problem is this: when that happens, it appears as if nginx then routes subsequent requests to that same upstream, which is unable to handle them because it is still busy with the previous request (which is taking too long), even with some upstream sessions sitting idle. Some users then end up with no response.</div><div><br></div><div>Ideally, we would like to catch the situation in the GemStone session and stop processing the request (when nginx closes the upstream connection). Alternatively, we could set timeouts long enough so that if the browser prematurely closes the connection, nginx does not close the upstream connection. </div><div><br></div><div>Do you have a suggestion to handle this? Does it make sense to get timeouts (which ones?) 
to align so that this does not happen?</div><div><br></div><div>Thanks a lot</div><div><div><div dir="ltr" class="gmail_signature"><div dir="ltr"><div><div dir="ltr"><table width="600" cellpadding="0" cellspacing="0" border="0" style="font-family:Times;font-size:medium"><tbody><tr><td width="400" valign="bottom"><div style="margin:0px;padding:0px"><span style="font-size:18px;color:rgb(146,148,151);font-family:Calibri,sans-serif;font-weight:700">Otto Behrens</span><br></div><div style="font-size:18px;font-weight:700;color:rgb(146,148,151);font-family:Calibri,sans-serif;margin:0px;padding:0px"><span style="font-size:14px;font-weight:300;margin:0px;padding:0px">+27 82 809 2375</span></div></td><td width="200" valign="middle"><img src="https://www.finworks.biz/signature/finworks-signature-logo.png" width="200" height="38" alt="FINWorks" style="display: block; border: 0px; width: 200px; height: 38px; margin: 0px; padding: 0px;"></td></tr></tbody></table><table width="600" cellpadding="0" cellspacing="0" border="0" style="font-family:Times;font-size:medium"><tbody><tr><td height="5"></td></tr></tbody></table><table width="600" cellpadding="0" cellspacing="0" border="0" style="font-family:Times;font-size:medium;border-bottom:1px solid rgb(200,28,36)"><tbody><tr><td height="15"></td></tr></tbody></table><table width="600" cellpadding="0" cellspacing="0" border="0" style="font-family:Times;font-size:medium"><tbody><tr><td height="20"></td></tr></tbody></table><table width="600" cellpadding="0" cellspacing="0" border="0" style="font-family:Times;font-size:medium"><tbody><tr><td width="15" valign="top" style="display:inline-block"><a href="http://za.linkedin.com/in/waltherbehrens" style="color:rgb(17,85,204)" target="_blank"><img src="https://www.finworks.biz/signature/finworks-linkedin-logo.png" width="15" height="15" alt="FINWorks" style="display: inline-block; border: 0px; width: 15px; height: 15px; margin-top: 1.5px; padding: 0px;"></a></td><td width="250" valign="top" 
style="display:inline-block"><a href="http://www.finworks.biz/" style="color:rgb(200,28,36);font-family:Calibri,sans-serif;margin-left:10px;margin-top:0px;padding-top:0px;font-size:11pt;display:inline-block" target="_blank">www.finworks.biz</a></td></tr></tbody></table><table width="600" cellpadding="0" cellspacing="0" border="0" style="font-family:Times;font-size:medium"><tbody><tr><td height="10"></td></tr></tbody></table><table width="600" cellpadding="0" cellspacing="0" border="0" style="font-family:Times;font-size:medium"><tbody><tr><td><p style="font-size:10px;color:rgb(146,148,151);font-family:Calibri,sans-serif;text-align:justify">Disclaimer & Confidentiality Note: This email is intended solely for the use of the individual or entity named above as it may contain information that is confidential and privileged. If you are not the intended recipient, be advised that any dissemination, distribution or copying of this email is strictly prohibited. FINWorks cannot be held liable by any person other than the addressee in respect of any opinions, conclusions, advice or other information contained in this email.</p></td></tr></tbody></table></div></div></div></div></div></div></div>
_______________________________________________<br>Glass mailing list<br><a href="mailto:Glass@lists.gemtalksystems.com" target="_blank">Glass@lists.gemtalksystems.com</a><br><a href="https://lists.gemtalksystems.com/mailman/listinfo/glass" target="_blank">https://lists.gemtalksystems.com/mailman/listinfo/glass</a><br></div></blockquote></div><br></blockquote></div></div>