[Glass] [GLASS] Seaside - growing extent - normal?

Dale Henrichs via Glass glass at lists.gemtalksystems.com
Fri Mar 27 09:48:56 PDT 2015


Larry,

Yeah, the bulk of the progress that you've made should be committed ... 
we should probably add a final Transcript show in the gemstoneReap method 
... but when your `Expire progress` matches the preceding `finished 
scan`, that chunk of work won't be done again ...
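
For instance (an untested sketch), one more line just before the final 
`^ expired size` would make the end of the expire pass visible:

  Transcript cr; show: 'finished expire: ' , count printString.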

At this point in time, I suggest that you make a backup in case we hit a 
problem (like running out of extent space or something) that would make 
it difficult to proceed ... this is just precautionary, but 
prudent ...
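
If it helps, the precautionary backup can be kicked off from a workspace 
with something along these lines (the path is just an example ... pick a 
location with enough free space):

  System abortTransaction.
  SystemRepository fullBackupTo: '/opt/gemstone/backups/seaside-pre-reap.dat'.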

I am interested in what caused the out of memory, so checking the gem 
log for a smalltalk stack from when you ran out of memory would help ... it's 
possible that we'll just hit the same situation right off the bat when 
we start the process again, so we might as well try to understand this one...

Dale

On 03/27/2015 09:31 AM, Lawrence Kellogg wrote:
>
>> On Mar 27, 2015, at 12:08 PM, Dale Henrichs 
>> <dale.henrichs at gemtalksystems.com 
>> <mailto:dale.henrichs at gemtalksystems.com>> wrote:
>>
>> Larry,
>>
>> Not sure ... I guess I'm curious whether the system is in iowait or 
>> burning cpu ... top should give you that information ... I'd also be 
>> interested to know if you've run out of disk space ... check the 
>> stone log for that information ...
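>>
>> From a workspace, the following should also give a quick picture of 
>> extent size versus free space (if I remember the selector right), in 
>> case that's easier than digging through the stone log:
>>
>>   Transcript cr; show: SystemRepository fileSizeReport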
>>
>> Based on what you find out we'll go on from there ...
>>
>
>
> The struggle continues; eventually I ran out of temporary object memory:
>
>
>
> Disk space is at 63%. Can I run it again? Did any of that stick?
>
> Larry
>
>
>
>
>
>> Dale
>>
>> On 03/27/2015 07:43 AM, Lawrence Kellogg wrote:
>>> Well, I put in Transcript calls to show the count all the time, not just 
>>> at mod 1000, and I made it all the way through, except it doesn’t 
>>> seem to want to come back to me. The image is frozen.
>>>
>>> Is it hung in the final commit?
>>>
>>> Here is a screen shot of the last part of the display. Yes I’m 
>>> running it in GemTools. I don’t care if it’s slow as long as it works.
>>>
>>> <Mail Attachment.png>
>>>
>>>
>>> Larry
>>>
>>>> On Mar 26, 2015, at 1:01 AM, Dale Henrichs 
>>>> <dale.henrichs at gemtalksystems.com 
>>>> <mailto:dale.henrichs at gemtalksystems.com>> wrote:
>>>>
>>>> Hmmm, I think we need to set a breakpoint in there ... I might be 
>>>> tired, but I can't see how that can happen ... but then that's what 
>>>> a debugger is for ...
>>>>
>>>> Are you running this expression from GemTools? If so, the 
>>>> `Transcript show:` might be awfully expensive (lots of round trips 
>>>> over the WAN) ... you might use GsFile gciLogServer: and then tail 
>>>> the gem log file on the server to see progress ...
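>>>>
>>>> For example (just a sketch), the progress lines could be written as
>>>>
>>>>   count \\ 1000 == 0
>>>>     ifTrue: [ GsFile gciLogServer: 'Scan progress: ' , count printString ].
>>>>
>>>> and then you can `tail -f` the gem log on the server instead of waiting 
>>>> on round trips to the GemTools Transcript.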
>>>>
>>>> Need to find out where the 0 is coming from first...
>>>>
>>>> Dale
>>>>
>>>> On 3/25/15 7:34 PM, Lawrence Kellogg wrote:
>>>>> Hello Dale,
>>>>>
>>>>>   Well, I replaced the WACache>>gemstoneReap method and ran the 
>>>>> code from before but it just shows Scan Progress: 0 over and over 
>>>>> again.
>>>>>
>>>>>   I let it run a few hours but had to kill it as the computer is 
>>>>> in my bedroom and I can’t sleep if it makes noise all night.
>>>>>
>>>>>   Do I try again tomorrow?
>>>>>
>>>>>   best,
>>>>>
>>>>>   Larry
>>>>>
>>>>>
>>>>>> On Mar 25, 2015, at 6:23 PM, Dale Henrichs 
>>>>>> <dale.henrichs at gemtalksystems.com 
>>>>>> <mailto:dale.henrichs at gemtalksystems.com>> wrote:
>>>>>>
>>>>>> Hmm, I scanned the code in WACache>>gemstoneReap and saw that a 
>>>>>> commit was being done every 100 entries, but presumably you blew 
>>>>>> up while building the collection of entries to expire ... sorry 
>>>>>> about that ... There should be a stack dump in the gem log 
>>>>>> (depending upon where you ran the expressions) where we can verify 
>>>>>> my assumption.
>>>>>>
>>>>>> It looks like we need to replace WACache>>gemstoneReap with the 
>>>>>> following:
>>>>>>
>>>>>> gemstoneReap
>>>>>>   "Iterate through the cache and remove objects that have expired."
>>>>>>
>>>>>>   "In GemStone, this method is performed by a separate maintenance VM,
>>>>>>    so we are already in transaction (assumed to be running in #autoBegin
>>>>>>    transactionMode) and do not have to worry about acquiring the
>>>>>>    TransactionMutex. Since we are using reducedConflict dictionaries in
>>>>>>    the first place, we will remove the keys and values from the existing
>>>>>>    dictionaries without using the mutex."
>>>>>>
>>>>>>   | expired count platform |
>>>>>>   expired := UserGlobals at: #'ExpiryCleanup' put: OrderedCollection new.
>>>>>>   platform := GRPlatform current.
>>>>>>   platform doCommitTransaction.
>>>>>>   count := 0.
>>>>>>   objectsByKey
>>>>>>     associationsDo: [ :assoc |
>>>>>>       (self expiryPolicy isExpiredUpdating: assoc value key: assoc key)
>>>>>>         ifTrue: [
>>>>>>           self notifyRemoved: assoc value key: assoc key.
>>>>>>           count := count + 1.
>>>>>>           expired add: assoc.
>>>>>>           count \\ 100 == 0
>>>>>>             ifTrue: [ platform doCommitTransaction ].
>>>>>>           count \\ 1000 == 0
>>>>>>             ifTrue: [ Transcript cr; show: 'Scan progress: ' , count printString ] ] ].
>>>>>>   Transcript cr; show: 'finished scan: ' , count printString.
>>>>>>   count := 0.
>>>>>>   (UserGlobals at: #'ExpiryCleanup')
>>>>>>     do: [ :assoc |
>>>>>>       count := count + 1.
>>>>>>       objectsByKey removeKey: assoc key ifAbsent: [  ].
>>>>>>       keysByObject removeKey: assoc value ifAbsent: [  ].
>>>>>>       count \\ 100 == 0
>>>>>>         ifTrue: [ platform doCommitTransaction ].
>>>>>>       count \\ 1000 == 0
>>>>>>         ifTrue: [ Transcript cr; show: 'Expire progress: ' , count printString ] ].
>>>>>>   platform doCommitTransaction.
>>>>>>   UserGlobals removeKey: #'ExpiryCleanup'.
>>>>>>   platform doCommitTransaction.
>>>>>>   ^ expired size
>>>>>>
>>>>>> This implementation should be more resistant to an out of memory 
>>>>>> condition and I've got some logging in there as well ... the 
>>>>>> `Transcript show:` output should show up in the gem log and/or 
>>>>>> the Transcript ...
>>>>>>
>>>>>> I haven't tested this but if there's a problem in the scan loop 
>>>>>> it will fail quickly. If there's a problem in the expire loop, 
>>>>>> you can skip the scan loop, and just run the expire loop ...
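>>>>>>
>>>>>> For instance (untested sketch, the method name is just illustrative), 
>>>>>> the expire half could be packaged on its own as:
>>>>>>
>>>>>> gemstoneExpireOnly
>>>>>>   "Hypothetical helper: run only the expire loop of #gemstoneReap, using
>>>>>>    the associations already collected into UserGlobals by the scan loop."
>>>>>>
>>>>>>   | count platform |
>>>>>>   platform := GRPlatform current.
>>>>>>   count := 0.
>>>>>>   (UserGlobals at: #'ExpiryCleanup')
>>>>>>     do: [ :assoc |
>>>>>>       count := count + 1.
>>>>>>       objectsByKey removeKey: assoc key ifAbsent: [  ].
>>>>>>       keysByObject removeKey: assoc value ifAbsent: [  ].
>>>>>>       count \\ 100 == 0
>>>>>>         ifTrue: [ platform doCommitTransaction ].
>>>>>>       count \\ 1000 == 0
>>>>>>         ifTrue: [ Transcript cr; show: 'Expire progress: ' , count printString ] ].
>>>>>>   platform doCommitTransaction.
>>>>>>   UserGlobals removeKey: #'ExpiryCleanup'.
>>>>>>   platform doCommitTransaction.
>>>>>>   ^ count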
>>>>>>
>>>>>> Sorry about that ... hopefully the second time will be the charm ...
>>>>>>
>>>>>> Dale
>>>>>>
>>>>>> On 3/25/15 2:36 PM, Lawrence Kellogg wrote:
>>>>>>> Hello Dale,
>>>>>>>
>>>>>>>   Well, Step 1 ran for hours and halted in the debugger with 
>>>>>>> this error:
>>>>>>>
>>>>>>> Error: VM temporary object memory is full, almost out of memory, 
>>>>>>> too many markSweeps since last successful scavenge
>>>>>>>
>>>>>>>
>>>>>>> What do I do now?
>>>>>>>
>>>>>>> Best,
>>>>>>>
>>>>>>>   Larry
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>> On Mar 25, 2015, at 2:15 PM, Dale Henrichs 
>>>>>>>> <dale.henrichs at gemtalksystems.com 
>>>>>>>> <mailto:dale.henrichs at gemtalksystems.com>> wrote:
>>>>>>>>
>>>>>>>> Okay here's the sequence of steps that I think you should take:
>>>>>>>>
>>>>>>>>   1. expire all of your sessions:
>>>>>>>>
>>>>>>>>   | expired |
>>>>>>>>   Transcript cr; show: 'Unregistering...' , DateAndTime now printString.
>>>>>>>>   expired := WABasicDevelopment reapSeasideCache.
>>>>>>>>   expired > 0
>>>>>>>>     ifTrue: [ (ObjectLogEntry trace: 'MTCE: expired sessions' object: expired) addToLog ].
>>>>>>>>   Transcript cr; show: '...Expired: ' , expired printString , ' sessions.'.
>>>>>>>>   System commitTransaction
>>>>>>>>
>>>>>>>>   2. initialize your object log:
>>>>>>>>
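>>>>>>>>      (sketch ... the selector really is spelled `initalize`, as noted 
>>>>>>>>      in the earlier Object Log message ... the commit just makes the 
>>>>>>>>      reset stick)
>>>>>>>>
>>>>>>>>   ObjectLogEntry initalize.
>>>>>>>>   System commitTransaction
>>>>>>>>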
>>>>>>>>   3. run MFC
>>>>>>>>
>>>>>>>>   [
>>>>>>>>   System abortTransaction.
>>>>>>>>   SystemRepository markForCollection ]
>>>>>>>>     on: Warning
>>>>>>>>     do: [ :ex |
>>>>>>>>       Transcript
>>>>>>>>         cr;
>>>>>>>>         show: ex description.
>>>>>>>>       ex resume ]
>>>>>>>>
>>>>>>>>   4. Then do a backup and restore ... you can use GemTools to do the
>>>>>>>>      restore, but you should read the SysAdmin docs[1] for the detailed
>>>>>>>>      restore instructions (I've enclosed a link to the 3.2 docs; the
>>>>>>>>      procedure and commands should be pretty much the same, but it's
>>>>>>>>      best to look up the docs for your GemStone version[2] and follow
>>>>>>>>      those instructions)
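>>>>>>>>
>>>>>>>>      For reference, the Smalltalk side of this is roughly the following
>>>>>>>>      (paths are illustrative, and the restore lines assume the stone has
>>>>>>>>      been restarted on a copy of the virgin extent ... follow the
>>>>>>>>      SysAdmin docs for your version):
>>>>>>>>
>>>>>>>>   "backup, run against the current repository"
>>>>>>>>   SystemRepository fullBackupTo: '/opt/gemstone/backups/seaside.dat'.
>>>>>>>>
>>>>>>>>   "restore, run after restarting on $GEMSTONE/bin/extent0.seaside.dbf"
>>>>>>>>   SystemRepository restoreFromBackup: '/opt/gemstone/backups/seaside.dat'.
>>>>>>>>   SystemRepository commitRestore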
>>>>>>>>
>>>>>>>> As I mentioned earlier, it will probably take a while for each 
>>>>>>>> of these operations to complete (object log will be fast and 
>>>>>>>> the backup will be fast, if the mfc tosses out the majority of 
>>>>>>>> your data) and it is likely that the repository will grow some 
>>>>>>>> more during the process (hard to predict this one, tho).
>>>>>>>>
>>>>>>>> Step 1 will touch every session and every continuation so it is 
>>>>>>>> hard to say what percent of the objects are going to be touched 
>>>>>>>> (the expensive part); still, there are likely to be a lot of 
>>>>>>>> those puppies and they will have to be read from disk into the 
>>>>>>>> SPC ...
>>>>>>>>
>>>>>>>> Step 3 is going to scan all of the live objects and again it's hard 
>>>>>>>> to predict exactly how expensive it will be ...
>>>>>>>>
>>>>>>>> Dale
>>>>>>>>
>>>>>>>> [1] 
>>>>>>>> http://downloads.gemtalksystems.com/docs/GemStone64/3.2.x/GS64-SysAdmin-3.2/GS64-SysAdmin-3.2.htm
>>>>>>>> [2] http://gemtalksystems.com/techsupport/resources/
>>>>>>>>
>>>>>>>> On 3/25/15 10:18 AM, Lawrence Kellogg wrote:
>>>>>>>>> Hello Dale,
>>>>>>>>>
>>>>>>>>> Thanks for the help. I’m a terrible system admin when it comes 
>>>>>>>>> to maintaining a system with one user, LOL.
>>>>>>>>>
>>>>>>>>> I’m not running the maintenance VM and I haven’t been doing 
>>>>>>>>> regular mark for collects.
>>>>>>>>>
>>>>>>>>> I’m trying to do a fullBackupTo: at the moment, we’ll see if I 
>>>>>>>>> get through that. Should I have done a markForCollection 
>>>>>>>>> before the full backup?
>>>>>>>>>
>>>>>>>>> I’ll also try the ObjectLog trick.
>>>>>>>>>
>>>>>>>>> I guess I need to start from a fresh extent, as you said, and 
>>>>>>>>> the extent file will not shrink. I’m at 48% of my available 
>>>>>>>>> disk space but it does seem slower than usual.
>>>>>>>>>
>>>>>>>>> Best,
>>>>>>>>>
>>>>>>>>> Larry
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>> On Mar 25, 2015, at 12:58 PM, Dale Henrichs via Glass 
>>>>>>>>>> <glass at lists.gemtalksystems.com 
>>>>>>>>>> <mailto:glass at lists.gemtalksystems.com>> wrote:
>>>>>>>>>>
>>>>>>>>>> Lawrence,
>>>>>>>>>>
>>>>>>>>>> Are you doing regular Mark for collects? Are you running the 
>>>>>>>>>> maintenance vm along with your seaside servers?
>>>>>>>>>>
>>>>>>>>>> Seaside produces persistent garbage (persistent session state 
>>>>>>>>>> that eventually times out) when it processes requests, so if 
>>>>>>>>>> you do not run the maintenance vm the sessions are not 
>>>>>>>>>> expired, and if you do not run mfc regularly the expired 
>>>>>>>>>> sessions are not cleaned up ...
>>>>>>>>>>
>>>>>>>>>> Another source of growth could be the Object Log ... (use 
>>>>>>>>>> `ObjectLogEntry initalize` to efficiently reset the Object 
>>>>>>>>>> Log ... pay attention to the misspelling ... that's another 
>>>>>>>>>> story). If you are getting continuations saved to the object 
>>>>>>>>>> log, the saved stacks can hang onto a lot of session state 
>>>>>>>>>> that, even though expired, will not be garbage collected, 
>>>>>>>>>> because references from the continuations in the object log 
>>>>>>>>>> keep it alive ...
>>>>>>>>>>
>>>>>>>>>> The best way to shrink your extent (once we understand why it 
>>>>>>>>>> is growing) is to do a backup and then restore into a virgin 
>>>>>>>>>> extent ($GEMSTONE/bin/extent0.seaside.dbf)...
>>>>>>>>>>
>>>>>>>>>> Dale
>>>>>>>>>>
>>>>>>>>>> On 3/25/15 8:34 AM, Lawrence Kellogg via Glass wrote:
>>>>>>>>>>> Well, Amazon sent me a note that they are having hardware 
>>>>>>>>>>> trouble on my instance, so they shut it down. It looks like 
>>>>>>>>>>> they’re threatening to take the thing offline permanently so 
>>>>>>>>>>> I’m trying to save my work with an AMI and move it somewhere 
>>>>>>>>>>> else, if I have to.
>>>>>>>>>>>
>>>>>>>>>>> I finally got Gemstone/Seaside back up and running and 
>>>>>>>>>>> noticed these lines in the Seaside log file. These kinds of 
>>>>>>>>>>> messages go on once a day for weeks. Is this normal?
>>>>>>>>>>>
>>>>>>>>>>> --- 03/07/2015 02:44:14 PM UTC ---
>>>>>>>>>>>   Extent = 
>>>>>>>>>>> !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
>>>>>>>>>>>     has grown to 22528 megabytes.
>>>>>>>>>>>   Repository has grown to 22528 megabytes.
>>>>>>>>>>>
>>>>>>>>>>>   Extent = 
>>>>>>>>>>> !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
>>>>>>>>>>>     has grown to 22544 megabytes.
>>>>>>>>>>>   Repository has grown to 22544 megabytes.
>>>>>>>>>>>
>>>>>>>>>>> --- 03/08/2015 03:31:45 PM UTC ---
>>>>>>>>>>>   Extent = 
>>>>>>>>>>> !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
>>>>>>>>>>>     has grown to 22560 megabytes.
>>>>>>>>>>>   Repository has grown to 22560 megabytes.
>>>>>>>>>>>
>>>>>>>>>>>   Extent = 
>>>>>>>>>>> !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
>>>>>>>>>>>     has grown to 22576 megabytes.
>>>>>>>>>>>   Repository has grown to 22576 megabytes.
>>>>>>>>>>>
>>>>>>>>>>> --- 03/10/2015 03:19:34 AM UTC ---
>>>>>>>>>>>   Extent = 
>>>>>>>>>>> !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
>>>>>>>>>>>     has grown to 22592 megabytes.
>>>>>>>>>>>   Repository has grown to 22592 megabytes.
>>>>>>>>>>>
>>>>>>>>>>> --- 03/10/2015 03:46:39 PM UTC ---
>>>>>>>>>>>   Extent = 
>>>>>>>>>>> !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
>>>>>>>>>>>     has grown to 22608 megabytes.
>>>>>>>>>>>   Repository has grown to 22608 megabytes.
>>>>>>>>>>>
>>>>>>>>>>>   Extent = 
>>>>>>>>>>> !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
>>>>>>>>>>>     has grown to 22624 megabytes.
>>>>>>>>>>>   Repository has grown to 22624 megabytes.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> My extent has now grown to
>>>>>>>>>>>
>>>>>>>>>>> -rw------- 1 seasideuser seasideuser 23735566336 Mar 25 
>>>>>>>>>>> 15:31 extent0.dbf
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> I don’t get a lot of traffic so I’m a little surprised at 
>>>>>>>>>>> the growth. Should I try to shrink the extent?
>>>>>>>>>>>
>>>>>>>>>>> I suppose I should also do a SystemRepository backup, if I 
>>>>>>>>>>> can remember the commands.
>>>>>>>>>>>
>>>>>>>>>>> Best,
>>>>>>>>>>>
>>>>>>>>>>> Larry
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
