[Glass] cannot allocate memory

Otto Behrens via Glass glass at lists.gemtalksystems.com
Tue Jul 7 11:35:15 PDT 2015


Thanks guys. It must have been the fork call that failed; it looks like
a memory / swap problem. We have a lot of memory on the machine, but
running 8 topaz sessions eats it up over time.

We run various things with performOnServer:. We use pdftk a lot to
manipulate PDF documents. We also run topaz sessions in the background
to do background jobs (and we start them with performOnServer:, yes),
and a few other things.
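When fork starts failing with errno 12 like this, the first things worth checking are the kernel's overcommit policy, the per-process limits, and how much memory and swap are actually free. A minimal check, assuming a Linux box (as in this thread); the specific settings to look at come from Norm's reply below in the thread:

```shell
# Kernel overcommit policy: under mode 2 (strict accounting) a fork() of a
# multi-GB gem can be refused even though the child would immediately exec
# a tiny shell.
cat /proc/sys/vm/overcommit_memory   # 0 = heuristic, 1 = always, 2 = strict

# Per-process limits in effect for processes started from this shell.
ulimit -v                            # virtual memory limit (KB, or "unlimited")

# Total RAM and swap actually available right now.
free -m
```

If overcommit_memory is 2 and swap is small, the commit limit is roughly swap plus a fraction of RAM, which can be exhausted late in the day as the gems grow.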

On Tue, Jul 7, 2015 at 7:01 PM, Norm Green via Glass
<glass at lists.gemtalksystems.com> wrote:
> Otto,
>
> I am not able to trivially reproduce this.  If I run code like this in
> topaz, the memory usage of the gem (from VSD and top) does not grow:
>
> [true] whileTrue:[  System performOnServer: 'echo "abc" >foo' .
>             System performOnServer: 'rm foo' ]
>
>
> So I don't think there's a memory leak in performOnServer per se. What
> exactly are you doing in your performOnServer: calls ?
>
> performOnServer does a fork to create a new shell, so I think what you are
> seeing is a failure in the fork() call in the gem, which could indicate the
> gem process is exceeding its memory limit.  Do you have ulimit set (ulimit
> -a) ?  The memory of the gem process will grow over time as it allocates the
> temp obj cache in a lazy fashion.  Is there enough memory in the box for all
> your gems to allocate 400 MB of temp obj cache?  This is what could be
> happening at the end of your business day.
>
> There is some info on Linux memory over commit here:
> http://stackoverflow.com/questions/15608347/fork-failing-with-out-of-memory-error
>
>
> Norm
>
>
>
> On 7/7/15 09:02, Otto Behrens via Glass wrote:
>>
>> More details. Sorry to dump it like this; please ignore if not interested.
>>
>> Below is what top reports for one of our topaz sessions serving
>> Seaside, for example. This one now crashes every time we call
>> performOnServer:.
>>
>>     PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
>>   72134 wonka     20   0 4926m 3.9g 1.9g S    1 12.4  10:19.72 topaz
>>
>> And here are the parameters this process is started with:
>>
>> DUMP_OPTIONS = TRUE;
>> GEM_GCI_LOG_ENABLED = FALSE;
>> GEM_ABORT_MAX_CRS = 0;
>> GEM_FREE_FRAME_CACHE_SIZE = -1;
>> GEM_FREE_FRAME_LIMIT = -1;
>> GEM_FREE_PAGEIDS_CACHE = 200;
>> GEM_HALT_ON_ERROR = -1;
>> GEM_KEEP_MIN_SOFTREFS = 0;
>> GEM_MAX_SMALLTALK_STACK_DEPTH = 1000;
>> GEM_NATIVE_CODE_ENABLED = 2;
>> GEM_PRIVATE_PAGE_CACHE_KB = 960KB;
>> GEM_PGSVR_COMPRESS_PAGE_TRANSFERS = FALSE;
>> GEM_PGSVR_FREE_FRAME_CACHE_SIZE = -1;
>> GEM_PGSVR_FREE_FRAME_LIMIT = -1;
>> GEM_PGSVR_UPDATE_CACHE_ON_READ = FALSE;
>> GEM_READ_AUTH_ERR_STUBS = FALSE;
>> GEM_REPOSITORY_IN_MEMORY = FALSE;
>> GEM_RPC_KEEPALIVE_INTERVAL = 0;
>> GEM_RPCGCI_TIMEOUT = 0;
>> GEM_RPC_USE_SSL = TRUE;
>> GEM_SOFTREF_CLEANUP_PERCENT_MEM = 50;
>> GEM_TEMPOBJ_AGGRESSIVE_STUBBING = TRUE;
>> GEM_TEMPOBJ_CACHE_SIZE = 400000KB;
>> GEM_TEMPOBJ_MESPACE_SIZE = 0KB;
>> GEM_TEMPOBJ_OOPMAP_SIZE = 0;
>> GEM_TEMPOBJ_SCOPES_SIZE = 2000;
>> GEM_TEMPOBJ_POMGEN_SIZE = 0KB;
>> GEM_TEMPOBJ_POMGEN_PRUNE_ON_VOTE = 50;
>> GEM_TEMPOBJ_POMGEN_SCAVENGE_INTERVAL = 1800;
>> GEM_TEMPOBJ_START_ADDR not used on this platform
>> LOG_WARNINGS = TRUE;
>> SHR_NUM_FREE_FRAME_SERVERS = -1;
>> SHR_PAGE_CACHE_LARGE_MEMORY_PAGE_POLICY = 0;
>> SHR_PAGE_CACHE_LOCKED = FALSE;
>> SHR_PAGE_CACHE_NUM_PROCS = 4089;
>> SHR_PAGE_CACHE_NUM_SHARED_COUNTERS = 1900;
>> SHR_PAGE_CACHE_PERMISSIONS = 660;
>> SHR_PAGE_CACHE_SIZE_KB = 2000000KB;
>> SHR_TARGET_FREE_FRAME_COUNT = -1;
>> SHR_WELL_KNOWN_PORT_NUMBER = 0;
>> (vmGc spaceSizes: eden init 2048K max 74944K , survivor init 448K max
>> 12544K,
>>   vmGc    old max 299968K, code max 80000K, perm max 40000K, pom 10 *
>> 33344K = 333440K,
>>   vmGc    remSet 8068K, meSpace max 382460K oopMapSize 2097152  max
>> footprint 1251M)
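A rough back-of-envelope check against the configuration dump above, taking the 8-session count from earlier in the thread and assuming (pessimistically) that every gem reaches its maximum footprint:

```python
# Worst-case commit estimate from the settings printed above.
# The session count of 8 is from earlier in this thread; whether each
# gem actually reaches its maximum footprint is an assumption.
sessions = 8
max_gem_footprint_mb = 1251              # "max footprint 1251M" (vmGc line)
shared_page_cache_mb = 2000000 // 1024   # SHR_PAGE_CACHE_SIZE_KB = 2000000KB
ram_mb = 31892                           # from the topaz banner

worst_case = sessions * max_gem_footprint_mb + shared_page_cache_mb
print(worst_case, "MB worst case vs", ram_mb, "MB RAM")
```

That sum is well under the 31 GB of RAM, so steady-state usage is not the problem; but fork() of a gem showing 4.9 GB VIRT still momentarily asks the kernel to commit another ~4.9 GB, which is where errno 12 can bite.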
>>
>> _____________________________________________________________________________
>> |             GemStone/S64 Object-Oriented Data Management System
>> |
>> |                   Copyright (C) GemTalk Systems 1986-2015
>> |
>> |                            All rights reserved.
>> |
>> |                           Covered by U.S Patents:
>> |
>> |            6,256,637 Transactional virtual machine architecture
>> |
>> |              6,360,219 Object queues with concurrent updating
>> |
>> |                  6,567,905 Generational Garbage Collector.
>> |
>> | 6,681,226 Selective Pessimistic Locking for a Concurrently Updateable
>> Database
>>
>> +-----------------------------------------------------------------------------+
>> |    PROGRAM: topaz, Linear GemStone Interface (Linked Session)
>> |
>> |    VERSION: 3.2.6, Fri Mar 20 15:37:57 2015
>> |
>> |      BUILD: gss64_3_2_x_branch-35651
>> |
>> |  BUILT FOR: x86-64 (Linux)
>> |
>> |       MODE: 64 bit
>> |
>> | RUNNING ON: 8-CPU luke.finworks.biz x86_64 (Linux 3.2.0-75-generic
>> #110-Ubuntu
>> | SMP Tue Dec 16 19:11:55 UTC 2014) 31892MB
>> |
>> | PROCESS ID: 72134     DATE: 07/07/2015 00:03:05 SAST
>> |
>> |   USER IDS: REAL=wonka (1000) EFFECTIVE=wonka (1000)
>> |
>>
>> +-----------------------------------------------------------------------------+
>>
>> On Tue, Jul 7, 2015 at 5:35 PM, Otto Behrens <otto at finworks.biz> wrote:
>>>
>>> Hi,
>>>
>>> We call "System class >> performOnServer:" often from our seaside
>>> server sessions. We get an error "HostPerform failed; errno 12, Cannot
>>> allocate memory".
>>>
>>> This happens at the end of the day, while sessions have been running
>>> all day, and have been quite busy.
>>>
>>> It appears as if there is a memory leak in this function; I don't
>>> really understand what memory it is trying to allocate here.
>>>
>>> We're assuming the System class >> performOnServer: is the way in
>>> GemStone to execute shell commands. Should we be using another way?
>>>
>>> Any ideas?
>>>
>>> Thanks
>>> Otto
>>>
>>> I attach a stack output with more details as an example.
>>>
>>> In this example, we call pdftk to fill in the fields of a pdf document.
>>
>> _______________________________________________
>> Glass mailing list
>> Glass at lists.gemtalksystems.com
>> http://lists.gemtalksystems.com/mailman/listinfo/glass
>
>

