[Glass] out of resources
Dale Henrichs
dale.henrichs at gemtalksystems.com
Thu May 8 08:07:19 PDT 2014
Otto,
The way we use shared memory resources, we have things set up so that
the resources are deallocated when the last process detaches from the
cache.
So I would look around the system for rogue stoned, topaz or gem processes
that may be hung or refusing to quit ... if you find some hanging around,
note their process ids and track down their log files ... there should be
information there as to why they are hung ...
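The check above can be done from a shell. The process names below are the usual GemStone executables, but treat them as assumptions and adjust for your install:

```shell
# Look for leftover GemStone processes. The [s]toned bracket trick keeps
# the egrep command itself out of the match.
ps -ef | egrep '[s]toned|[t]opaz|[g]em'

# For each pid reported, find that process's log file and look for an
# indication of why it is hung before deciding how to stop it.
```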
Another thing: if you 'kill -9' the shrpc monitor process, the shared
memory segment will not be cleaned up when the last process detaches,
and the shared memory/semaphores will be left around.
While not recommended, it is "safe" to kill -9 the stoned process as a last
resort. You will lose any transactions that are in progress and will need
to restore from tranlogs on restart, but kill -9 on stoned should not
corrupt the db ... the shrpc monitor process is really the only process
that is not "safe" to use kill -9 on, but even then the db will not be
corrupted; as with killing the stone, you will lose any transactions in
progress AND you will leave shared memory resources around to be cleaned up
manually ...
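The manual cleanup can be scripted. A sketch, assuming the Linux (util-linux) ipcs output format, where data rows start after a three-line header and the sixth field of `ipcs -m` is the attach count (nattch):

```shell
# Print ids of shared memory segments that no process is attached to
# (nattch == 0) -- candidates for manual removal.
ipcs -m | awk 'NR > 3 && $6 == 0 { print $2 }'

# After double-checking that an id really belongs to a dead cache, remove it:
#   ipcrm -m <id>
# Stale semaphore arrays have no attach count; inspect each with
# 'ipcs -s -i <id>' to see whether the referenced pids are dead, then:
#   ipcrm -s <id>
```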
Dale
On Thu, May 8, 2014 at 7:38 AM, Otto Behrens <otto at finworks.biz> wrote:
> Hi,
>
> With GS 3.1 we're running out of semaphores and shared memory because
> the system resources are not freed up.
>
> When we run ipcs, we get lists of shared memory segments (about 1GB
> each) that report no attached processes. When we use ipcrm -m <id>,
> the memory is free.
>
> ipcs -s shows a long list of semaphore arrays. When using ipcs -s -i
> <array id>, we see that the referenced processes are dead.
>
> This happens on our Jenkins machines where we start & stop GS a lot.
> (Jobs running tests restore from a built GS backup.) We think we are
> using stopstone properly (with waitstone to make sure it is stopped,
> etc.). We need to investigate properly and make sure the Jenkins jobs
> do this in the way we are expecting.
>
> I was hoping someone could give us some ideas on this. Perhaps there's a
> GS flag or an OS setting that we're missing. Your insights are
> appreciated.
>
> Thanks
> Otto
> _______________________________________________
> Glass mailing list
> Glass at lists.gemtalksystems.com
> http://lists.gemtalksystems.com/mailman/listinfo/glass
>
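For CI setups like the one Otto describes, a teardown step can fail the job loudly when a shutdown leaves processes behind, instead of letting resources leak silently. A minimal sketch; the process names ('stoned', 'shrpcmonitor', 'topaz') are assumptions for a typical GemStone/S install and should be checked against yours:

```shell
#!/bin/sh
# Post-shutdown sanity check for a CI job: after stopstone completes,
# verify that no GemStone processes remain before the next build starts.
for name in stoned shrpcmonitor topaz; do
  if pgrep -x "$name" > /dev/null; then
    echo "ERROR: $name still running after stopstone" >&2
    exit 1
  fi
done
echo "no GemStone processes remain"
```

A nonzero exit makes Jenkins mark the build failed, which surfaces the leak at the job that caused it rather than when the machine later runs out of semaphores.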