[Glass] Explanation to "too many failed pom_gen scavenges" in this context??
Richard Sargent via Glass
glass at lists.gemtalksystems.com
Mon Mar 7 13:11:41 PST 2016
GLASS mailing list wrote
> Dale, this is the gem log I get:
>
>
> topaz>
> topaz> display oops
> topaz> iferror where
> topaz> login
> successful login
> fileformat is now utf8
> sourcestringclass is now Unicode16
> topaz 1>
> topaz 1> run
>
> Transcript disableLoggingToObjectLogForSession.
> Transcript enableLoggingToGemLogFileForSession.
> 12350651905 asObject sessionId: ( (System descriptionOfSession: System
> session) at: 10 ).
> System commit.
> System _cacheName: (#BackgroundProcess asString, 12350651905 asString ).
> 12350651905 asObject runInForeground
> %
> -----------------------------------------------------
> GemStone: Error Fatal
> VM temporary object memory is full
> , too many failed pom_gen scavenges
> Error Category: 231169 [GemStone] Number: 4067 Arg Count: 1 Context : 20
> exception : 20
> Arg 1: 20
> topaz > exec iferr 1 : where
> WHERE can't be used prior to logging in.
> topaz>
> topaz>
>
>
> This is a "background job" gem and the code is actually invoked via
> "*12350651905
> asObject runInForeground".*
>
> Anyway, as you can see, there is NOTHING printed in the gem log, nor is
> there anything in the object log. So I cannot even get a list of instances...
>
> Any clues?
Hi Mariano, the following is an excerpt of a message I sent to a developer
at one of our other customers. He had much the same kind of question.
Richard wrote
> Q: Is there anything we can do to see what the image was doing when it
> ran out of memory?
>
> Yes.
> There are a number of things one can control, and these are documented in
> gemnetdebug (/usr/local/GemStone/sys/gemnetdebug).
>
> # Print Smalltalk stack and instance counts when OutOfMemory error occurs
> # GS_DEBUG_VMGC_VERBOSE_OUTOFMEM=1
> # export GS_DEBUG_VMGC_VERBOSE_OUTOFMEM
>
>
> I recommend setting this environment variable all the time before running
> the netldi, so that all gems get it. I cannot imagine a scenario in which
> your session dies with an out of memory error and you would not want to
> know this.
>
>
> Obviously, this does not solve the actual problem of why the session uses
> so much temporary object space. It will give you the information necessary
> to understand that.
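> As a quick sanity check, you can confirm from a logged-in topaz session that
> a gem actually inherited the setting. This is only a sketch; it assumes
> System class >> gemEnvironmentVariable: is available in your GemStone
> version:
>
>   "answers the value string, or nil if the variable is not set for this gem"
>   System gemEnvironmentVariable: 'GS_DEBUG_VMGC_VERBOSE_OUTOFMEM'
>
> If that answers nil, the netldi was probably started without the variable
> exported and the gems will not produce the extra output.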
> Thanks
>
>
>
> On Mon, Mar 7, 2016 at 5:39 PM, Mariano Martinez Peck <marianopeck@> wrote:
>
>> Also... I do not get the instance counts written to either the object log
>> (could be the bug I mentioned in a previous email) or the gem log
>> file... shouldn't I get that? At least I remember getting it once upon a
>> time.
>>
>> Thanks!
>>
>> On Mon, Mar 7, 2016 at 5:11 PM, Mariano Martinez Peck <marianopeck@> wrote:
>>
>>> BTW Dale.... I think the line "System _vmInstanceCounts: 3" in
>>> #installAlmostOutOfMemoryStaticHandler: is wrong; it should instead be
>>> "System _vmInstanceCountsReport: 3", and it will also fail at the
>>>
>>> sort: [ :a :b | (a value at: 2) > (b value at: 2) ]
>>>
>>> Was this fixed somewhere after 3.2.9 ???
>>>
>>>
>>> Cheers,
>>>
>>>
>>>
>>>
>>> On Mon, Mar 7, 2016 at 4:57 PM, Mariano Martinez Peck <marianopeck@> wrote:
>>>
>>>>
>>>>
>>>> On Mon, Mar 7, 2016 at 3:18 PM, Dale Henrichs via Glass <glass at .gemtalksystems> wrote:
>>>>
>>>>> Mariano,
>>>>>
>>>>> The handler for AlmostOutOfMemory relies on being able to resume after
>>>>> a successful commit, but a GemStone vm will sometimes cross the
>>>>> threshold in the middle of "random C code" called by a primitive ... in
>>>>> these cases we have to defer the signalling of AlmostOutOfMemory until
>>>>> we reach a point where we can safely resume ... The implication is
>>>>> that, depending upon the details of your call, it is possible to
>>>>> physically run out of memory before the deferred signal can be raised ...
>>>>>
>>>>> If you lower the threshold, you should be able to find a limit that
>>>>> gives you enough room to finish the memory-hungry primitive call and get
>>>>> the deferred AlmostOutOfMemory exception signalled.
>>>>>
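>>>>> For example, here is a minimal sketch of the resume-after-commit pattern
>>>>> (this is essentially what the MCPlatformSupport static handler does for
>>>>> you; System class >> signalAlmostOutOfMemoryThreshold: and the
>>>>> AlmostOutOfMemory class are assumed to be present in a 3.2.x image, and
>>>>> processHugeCsv is just a stand-in for your own code):
>>>>>
>>>>>     System signalAlmostOutOfMemoryThreshold: 5.  "signal earlier, leaving more headroom"
>>>>>     [ self processHugeCsv ]
>>>>>         on: AlmostOutOfMemory
>>>>>         do: [ :ex |
>>>>>             "free temp space by committing what has been done so far"
>>>>>             System commitTransaction
>>>>>                 ifTrue: [ ex resume ]
>>>>>                 ifFalse: [ ex return ] ].  "commit failed: stop retrying"
>>>>>     System signalAlmostOutOfMemoryThreshold: 0.  "0 should disable the signal again"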
>>>>>
>>>> Yes, but I am already running with a threshold of 10% and a 700 MB temp
>>>> space... how much lower should it be? mmmm
>>>>
>>>>
>>>>> It is also possible that your temp obj cache is filling with objects
>>>>> that have not yet been connected to the persistent root ...
>>>>>
>>>>>
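>>>>> As an illustration of what "connected to the persistent root" means here
>>>>> (a sketch only; UserGlobals is a persistent root, while csvRows and
>>>>> importRow: are hypothetical stand-ins for your own code):
>>>>>
>>>>>     | results count |
>>>>>     results := OrderedCollection new.
>>>>>     UserGlobals at: #CsvImportResults put: results.  "now reachable from a persistent root"
>>>>>     System commitTransaction.
>>>>>     count := 0.
>>>>>     csvRows do: [ :row |    "csvRows / importRow: stand in for your own import code"
>>>>>         results add: (self importRow: row).
>>>>>         (count := count + 1) \\ 10000 = 0
>>>>>             ifTrue: [ System commitTransaction ] ].
>>>>>     System commitTransaction.
>>>>>
>>>>> Objects that are reachable only from temporaries are not written to disk
>>>>> by a commit, so they keep occupying the temp obj cache no matter how
>>>>> often the handler commits.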
>>>> mmm OK I will check this. But now I get this same error in another
>>>> cron
>>>> job. Both were running correctly a few weeks/months ago.
>>>>
>>>>
>>>>> If you want to see what's happening with respect to the exceptions
>>>>> that
>>>>> are being signalled, you could add logging to MCPlatformSupport
>>>>> class>>installAlmostOutOfMemoryStaticHandler: ...
>>>>>
>>>>>
>>>> mmmm I am a bit lost there. What kind of logging could I add?
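>>>> Something as simple as a line written to the gem log from inside the
>>>> handler block would probably already help, I suppose. A rough sketch
>>>> (assuming GsFile class >> gciLogServer: and System class >>
>>>> _tempObjSpacePercentUsed are available in 3.2.9):
>>>>
>>>>     "inside the handler block, before it commits"
>>>>     GsFile gciLogServer: 'AlmostOutOfMemory fired at ', DateTime now printString,
>>>>         ' with ', System _tempObjSpacePercentUsed printString, '% of temp space used'.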
>>>>
>>>> Thanks in advance,
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>> Dale
>>>>>
>>>>>
>>>>>
>>>>> On 03/07/2016 08:35 AM, Mariano Martinez Peck via Glass wrote:
>>>>>
>>>>> Hi guys,
>>>>>
>>>>> I am running some code that processes a huge CSV file and inserts
>>>>> persistent data into GemStone. This is GemStone 3.2.9 with 1 GB of SPC,
>>>>> a GEM_TEMPOBJ_CACHE_SIZE of 700 MB, and GEM_TEMPOBJ_POMGEN_PRUNE_ON_VOTE
>>>>> at 100.
>>>>>
>>>>> What is funny is that my code runs inside commitOnAlmost...
>>>>>
>>>>> I have this method:
>>>>>
>>>>> FAGemStoneCompatibility >> commitOnAlmostOutOfMemoryDuring: aBlock threshold: aThreshold
>>>>>     [ MCPlatformSupport installAlmostOutOfMemoryStaticHandler: aThreshold.
>>>>>       aBlock value ]
>>>>>         ensure: [ MCPlatformSupport uninstallAlmostOutOfMemoryStaticHandler ]
>>>>>
>>>>> And this is how I use it:
>>>>>
>>>>> System commitTransaction.
>>>>> FACompatibilityUtils current
>>>>>     commitOnAlmostOutOfMemoryDuring: [
>>>>>         WhateverClass whateverThingThatNeedsMemory ]
>>>>>     threshold: 10.
>>>>> System commitTransaction.
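>>>>> One way to see how fast the block eats temp space would be to print the
>>>>> usage around the call (again just a sketch; it assumes System class >>
>>>>> _tempObjSpacePercentUsed and GsFile class >> gciLogServer: exist in 3.2.9):
>>>>>
>>>>>     GsFile gciLogServer: 'before: ', System _tempObjSpacePercentUsed printString, '%'.
>>>>>     WhateverClass whateverThingThatNeedsMemory.
>>>>>     GsFile gciLogServer: 'after: ', System _tempObjSpacePercentUsed printString, '%'.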
>>>>>
>>>>>
>>>>> And even with a threshold of 10... I am getting a
>>>>>
>>>>> VM temporary object memory is full
>>>>> , too many failed pom_gen scavenges
>>>>>
>>>>>
>>>>> Any idea what could be going on?
>>>>>
>>>>> Thanks in advance,
>>>>>
>>>>> --
>>>>> Mariano
>>>>> http://marianopeck.wordpress.com
>>>>>