[Glass] How to deal with timeouts, fastcgi and what is the expected behavior?

Mariano Martinez Peck marianopeck at gmail.com
Fri Jul 18 08:58:05 PDT 2014


On Fri, Jul 18, 2014 at 12:54 PM, Mariano Martinez Peck <
marianopeck at gmail.com> wrote:

>
>
>
> On Fri, Jul 18, 2014 at 12:08 PM, Dale Henrichs <
> dale.henrichs at gemtalksystems.com> wrote:
>
>>
>>
>>
>> On Fri, Jul 18, 2014 at 7:36 AM, Mariano Martinez Peck <
>> marianopeck at gmail.com> wrote:
>>
>>> Hi guys,
>>>
>>> I have a strange situation with timeouts under nginx/FastCGI and I am
>>> not sure what the expected behavior is. I am executing a Seaside WATask,
>>> where one of its methods takes (for sure) more than the nginx/FastCGI
>>> timeout. The relevant piece of my nginx setup is something like this:
>>>
>>> location @seasidemariano {
>>>     include fastcgi_params;
>>>     fastcgi_param REQUEST_URI $uri?$args;
>>>     fastcgi_pass seasidemariano;
>>>     # all three timeouts are 180 seconds
>>>     fastcgi_connect_timeout 180;
>>>     fastcgi_send_timeout 180;
>>>     fastcgi_read_timeout 180;
>>>     # on error, bad header, timeout or HTTP 500, retry the next gem
>>>     fastcgi_next_upstream error invalid_header timeout http_500;
>>> }
>>>
>>>
>>> So... as you can see, I have a timeout of 180 and I tell nginx to go to
>>> the next upstream (gem) on any error, including timeout. Now... say I have
>>> this method being executed and it takes more than 180 seconds. What happens
>>> is that the user gets an nginx 504 Gateway Time-out in the browser. OK.
>>> But... I have some questions:
>>>
>>> 1) What happens with the gem that was executing the task (the one that
>>> took more than 180 seconds)? Does the execution finish even if nginx gives
>>> a timeout and passes the request to the next gem? Or is the gem's execution
>>> aborted? Why do I ask? Because... I put a log to a file inside my method...
>>> and it looks as if the method was called 3 times rather than 1. And from a
>>> domain point of view... it is not good that such a method is executed 3
>>> times...
>>>
>>
>> It does sound like nginx is redispatching the HTTP request on timeout...
>>
>>>
>>>
> Exactly. And it should, as my configuration is:
>
> fastcgi_next_upstream error invalid_header *timeout* http_500;
>
> So yes...upon a gem timeout, nginx forwards the request to the next gem.
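>
> I guess the simplest fix for the re-execution problem would be to drop
> *timeout* from that directive, so nginx retries only on connection errors
> (i.e. a dead gem) and a slow-but-alive gem just produces a 504 instead of
> the request being dispatched again. Something like this (untested sketch):
>
>     fastcgi_next_upstream error invalid_header;
>
> With that, a request exceeding fastcgi_read_timeout should fail with a 504
> but should not be re-sent to another gem.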
>
>
>>> 2) If I put a larger timeout... say 1500... it works correctly: the
>>> method is executed only once, no timeout. Same if I use Swazoo. So it is
>>> definitely something to do with the timeouts and FastCGI.
>>>
>>
>> In general I try to avoid timeouts ... it seems that timeouts fire more
>> often because the system is slow than for any other reason, and the
>> standard answer is: increase the timeout ...
>>
>> So I guess I would wonder why the operation is taking so long ... if the
>> operation is slow because the system is overloaded, then a longer timeout
>> _is_ called for, but then what is a good value for the timeout ...
>>
>>
> The operation takes long because I need to call an HTTPS API (using Zinc
> through a local nginx tunnel) many times, posting an XML document and
> getting a large XML response back each time. The time it takes depends on
> how many "items" have been selected, so it is hard to estimate in advance.
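>
> (For reference, the "tunnel" is just a local nginx server that lets Zinc
> speak plain HTTP while nginx takes care of the TLS side. Roughly, with the
> API host name made up:
>
>     server {
>         listen 127.0.0.1:8081;
>         location / {
>             proxy_pass https://api.example.com;
>         }
>     }
>
> so each Zinc call goes to 127.0.0.1:8081 and nginx forwards it over HTTPS.)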
>
>
>> I guess the real question to ask is what is the purpose of the timeout?
>>
>>
> If a gem went down, I would like nginx to forward requests to the other
> (available) gems.
>
>
>> You might want the gem itself to decide to terminate a request if it is
>> "taking too long"; then you wouldn't need a timeout at the nginx level?
>>
>>
>>> 3) 3 times... why? It seems to be because I have 3 gems. I did an
>>> experiment and configured only 2 gems in the nginx FastCGI upstream. And
>>> yes, the method was executed 2 times rather than 3.
>>>
>>
>> It does sound like nginx is sending the request again upon a timeout ...
>> could that be?
>>
>
> Yes, it is that. But I don't know how to properly solve both things: be
> able to have a large timeout (as in this scenario), yet handle the scenario
> of gems going down. Imagine I don't care and I put a timeout of, say, 1
> hour. Then imagine I have a gem down and a web user connects to the site.
> nginx might assign the request to the gem that went down. Therefore... the
> web user would be left waiting in the browser for 1 hour until nginx
> answers with a timeout... Is this correct?
>
> The biggest issue is that nginx thinks the gem timed out and then forwards
> the request to the next gem. However... the gem was not dead... it was
> simply too busy with a time-consuming request ;) Is there no way I can make
> the gem answer nginx "I am fine, don't worry, just busy, continue to the
> next gem" hahaha?
>
>
If we assume that when a gem goes down it is normally because the process was
aborted (rather than an unresponsive HTTP server), then it would be nice if
nginx could check whether the process is alive (using the PID or whatever) in
order to forward the request to another upstream, rather than detecting it
via an HTTP timeout ...
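
In the meantime, I suppose the passive checks nginx already does come close:
if connecting to a gem fails, nginx can mark that server down for a while. A
sketch of the upstream block (the gem ports are made up):

    upstream seasidemariano {
        server 127.0.0.1:9001 max_fails=1 fail_timeout=30s;
        server 127.0.0.1:9002 max_fails=1 fail_timeout=30s;
        server 127.0.0.1:9003 max_fails=1 fail_timeout=30s;
    }

A gem whose process died refuses the connection, so nginx would take it out
of rotation for 30 seconds after the first failed attempt, without relying
on the HTTP read timeout at all.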



> Probably the real solution is the service VM, as you and Paul have pointed
> out several times. But I didn't have time to take a look at it yet :(
>
>
>
>
>
>>
>>>
>>> So... how do people normally deal with this? Of course, the immediate
>>> workaround seems to be to increase the timeout... but that seems risky to
>>> me: if for some reason (like a GC running or whatever) one particular
>>> request takes longer than the timeout, then my "backend code" could be run
>>> more than once...
>>>
>>> Thanks in advance,
>>>
>>> --
>>> Mariano
>>> http://marianopeck.wordpress.com
>>>
>>> _______________________________________________
>>> Glass mailing list
>>> Glass at lists.gemtalksystems.com
>>> http://lists.gemtalksystems.com/mailman/listinfo/glass
>>>
>>>
>>
>
>
> --
> Mariano
> http://marianopeck.wordpress.com
>



-- 
Mariano
http://marianopeck.wordpress.com