[Glass] [GLASS] Seaside - growing extent - normal?

Dale Henrichs via Glass glass at lists.gemtalksystems.com
Tue Mar 31 09:41:53 PDT 2015


Larry,

I'm just going over the old ground again, in case we missed something 
obvious ... I would hate to spend several more days digging into this 
only to find that an initial step hadn't completed as expected ...

So it looks like the object log is clear. Next I'd like to double check 
and make sure that the session state has been expired ...

So let's verify that `UserGlobals at: #'ExpiryCleanup'` is no longer 
present, and I'd like to run the WABasicDevelopment reapSeasideCache one 
more time for good luck.
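
In workspace form, the double check might look like this (just a sketch; 
I've used `includesKey:` so nothing blows up if the entry is already gone):

   "Remove the expiry cleanup entry if it is still registered, then
    reap the Seaside cache once more and commit."
   (UserGlobals includesKey: #'ExpiryCleanup')
     ifTrue: [ UserGlobals removeKey: #'ExpiryCleanup' ].
   Transcript cr; show: 'Expired: ' ,
     WABasicDevelopment reapSeasideCache printString , ' sessions.'.
   System commitTransaction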

Assuming that neither of those turns up anything of use, the next step is 
to find out what's hanging onto the unwanted objects ...

Since I think we've covered the known "object hogs" in the Seaside 
framework, there are a number of other persistent caches in GLASS that 
might as well be cleared out. You can use the workspace here[1] to clean 
them up ... I don't think that these caches should be holding onto 23G 
of objects, but run an MFC afterwards to be safe ...
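
For reference, the MFC doit is the same one from the earlier round, 
resuming the out-of-session warning:

   [
   System abortTransaction.
   SystemRepository markForCollection ]
     on: Warning
     do: [ :ex |
       Transcript
         cr;
         show: ex description.
       ex resume ]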


At this point there are basically two directions that we can take:

   1. Top down. Start inspecting the data structures in your application 
      and look for suspicious collections/objects that could be hanging 
      onto objects above and beyond those absolutely needed.

   2. Bottom up. Scan your recent backup and get an instance count 
      report[2] that will tell you which class of object is clogging up 
      your database .... Perhaps you'll recognize a big runner or two 
      and know where to look to drop the references. If not, we'll have 
      to pick a suspicious class, list the instances of that class, and 
      then use Repository>>listReferences: to work our way back to a 
      known root and then NUKE THE SUCKER:)
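
To give you a feel for the bottom-up hunt, a sketch (`SuspectClass` is a 
placeholder for whatever class the instance count report points at):

   "List the instances of the suspect class, then ask the repository
    what references them; repeat on the referencing objects to walk
    back toward a known root."
   | instances refs |
   instances := SystemRepository listInstances: (Array with: SuspectClass).
   refs := SystemRepository listReferences: (instances at: 1).
   refs inspect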

Dale

[1] https://code.google.com/p/glassdb/wiki/ClearPersistentCaches
[2] https://programminggems.wordpress.com/2009/12/15/scanbackup-2/
On 03/31/2015 05:35 AM, Lawrence Kellogg wrote:
>
>> On Mar 30, 2015, at 6:24 PM, Dale Henrichs 
>> <dale.henrichs at gemtalksystems.com> wrote:
>>
>> The initial MFC gave you (pre-backup):
>>
>>   390,801,691 live objects with 23,382,898 dead
>>
>> The second MFC gave you (post-backup):
>>
>>   391,007,811 live objects with 107 dead
>>
>> Which means that we did not gain nearly as much as anticipated by 
>> cleaning up the seaside session state and object log ... so something 
>> else is hanging onto a big chunk of objects ...
>>
>> So yes at this point there is no need to consider a backup and 
>> restore to shrink extents until we can free up some more objects ...
>>
>> I've got to head out on an errand right now, so I can't give you any 
>> detailed pointers to the techniques to use for finding the nasty boy 
>> that is hanging onto the "presumably dead objects" ...
>>
>> I am a bit suspicious that the Object log might still be alive and 
>> kicking, so I think you should verify by inspecting the ObjectLog 
>> collections ... poke around on the class side ... if you find a big 
>> collection (and it blows up your TOC if you try to look at it), then 
>> look again at the class-side methods and make sure that you nuke the 
>> RCQueue and the OrderedCollection .... close down/logout your vms, 
>> and then run another mfc to see if you gained any ground …
>
>
> Well, the ObjectLog collection on the class side of ObjectLogEntry is 
> empty, and the ObjectQueue class variable has:
>
>
>
> Is it necessary to reinitialize the ObjectQueue?
>
> Is there some report I can run that will tell me what is holding onto 
> so much space?
>
> Best,
>
> Larry
>
>
>>
>> Dale
>>
>> On 03/30/2015 02:57 PM, Lawrence Kellogg wrote:
>>>
>>>> On Mar 30, 2015, at 12:28 PM, Dale Henrichs 
>>>> <dale.henrichs at gemtalksystems.com> wrote:
>>>>
>>>> Okay,
>>>>
>>>> I guess you made it through the session expirations okay, and 
>>>> according to the MFC results it does look like you got rid of a 
>>>> big chunk of objects... Presumably the backup was made before the 
>>>> vote on the possible dead was finished, so the backup would not have 
>>>> been able to skip all of the dead objects (until the vote was 
>>>> completed) .... there's also an outside chance that the vm used to 
>>>> expire the sessions would have voted down some of the possible dead 
>>>> if it was still logged in when the backup was made ...
>>>>
>>>> So we need to find out what's going on in the new extent ... so do 
>>>> another mfc and send me the results
>>>
>>>
>>> Ok, I made it through another mark for collection and here is the 
>>> result:
>>>
>>> <Mail Attachment.png>
>>>
>>>
>>>
>>>
>>> Am I wrong in thinking that the file size of the extent will not 
>>> shrink? It certainly has not shrunk much.
>>>
>>>
>>>
>>>
>>>>
>>>>  In the new extent, run the MFC again, and provide me with the 
>>>> results ... include an `Admin>>DoIt>>File Size Report`. Then logout 
>>>> of GemTools and stop/start any other seaside servers or maintenance 
>>>> vms that might be running ...
>>>>
>>>
>>> Here is the file size report before the mark for collection
>>>
>>> Extent #1
>>> -----------
>>>    Filename = 
>>> !TCP at localhost#dir:/opt/gemstone/product/seaside/data#log://opt/gemstone/log/%N%P.log#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf 
>>>
>>>    File size =       23732.00 Megabytes
>>>    Space available = 3478.58 Megabytes
>>>
>>> Totals
>>> ------
>>>    Repository size = 23732.00 Megabytes
>>>    Free Space =      3478.58 Megabytes
>>>
>>> and after
>>>
>>> Extent #1
>>> -----------
>>>    Filename = 
>>> !TCP at localhost#dir:/opt/gemstone/product/seaside/data#log://opt/gemstone/log/%N%P.log#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf 
>>>
>>>    File size =       23732.00 Megabytes
>>>    Space available = 3476.47 Megabytes
>>>
>>> Totals
>>> ------
>>>    Repository size = 23732.00 Megabytes
>>>    Free Space =      3476.47 Megabytes
>>>
>>>
>>>
>>> I await further instructions.
>>>
>>> Best,
>>>
>>> Larry
>>>
>>>
>>>
>>>
>>>
>>>> By the time we exchange emails, the vote should have a chance to 
>>>> complete this time... but I want to see the results of the MFC and 
>>>> File SIze Report before deciding what to do next ...
>>>>
>>>> Dale
>>>>
>>>> On 03/30/2015 07:30 AM, Lawrence Kellogg wrote:
>>>>> Hello Dale,
>>>>>
>>>>> Well, I went through the process as described below, but have not 
>>>>> seen my extent shrink appreciably, so I am puzzled.
>>>>> Here is the screenshot after the mark for collection. Do I have to 
>>>>> do something to reclaim the dead objects? Does the maintenance gem 
>>>>> need to be run?
>>>>>
>>>>>
>>>>> <Mail Attachment.png>
>>>>>
>>>>> After the ObjectLog init, and mark, I did a restore into a fresh 
>>>>> extent.
>>>>>
>>>>> Here is the size of the new extent vs the old, saved extent:
>>>>>
>>>>> <Mail Attachment.png>
>>>>>
>>>>>
>>>>>
>>>>> Thoughts?
>>>>>
>>>>> Larry
>>>>>
>>>>>
>>>>>
>>>>>> On Mar 25, 2015, at 2:15 PM, Dale Henrichs 
>>>>>> <dale.henrichs at gemtalksystems.com> wrote:
>>>>>>
>>>>>> Okay here's the sequence of steps that I think you should take:
>>>>>>
>>>>>>   1. expire all of your sessions:
>>>>>>
>>>>>>   | expired |
>>>>>>   Transcript cr; show: 'Unregistering...' , DateAndTime now 
>>>>>> printString.
>>>>>>   expired := WABasicDevelopment reapSeasideCache.
>>>>>>   expired > 0
>>>>>>     ifTrue: [ (ObjectLogEntry trace: 'MTCE: expired sessions' 
>>>>>> object: expired) addToLog ].
>>>>>>   Transcript cr; show: '...Expired: ' , expired printString , ' 
>>>>>> sessions.'.
>>>>>>   System commitTransaction
>>>>>>
>>>>>>   2. initialize your object log
>>>>>>
>>>>>>   3. run MFC
>>>>>>
>>>>>>   [
>>>>>>   System abortTransaction.
>>>>>>   SystemRepository markForCollection ]
>>>>>>     on: Warning
>>>>>>     do: [ :ex |
>>>>>>       Transcript
>>>>>>         cr;
>>>>>>         show: ex description.
>>>>>>       ex resume ]
>>>>>>
>>>>>>   4. Then do a backup and restore ... you can use GemTools to do 
>>>>>> the restore,
>>>>>>       but then you should read the SysAdmin docs[1] for 
>>>>>> instructions to do the restore
>>>>>>       (I've enclosed link to 3.2 docs, but the procedure and 
>>>>>> commands should pretty
>>>>>>       much be the same, but it's best to look up the docs for 
>>>>>> your GemStone version[2]
>>>>>>       and follow those instructions)
>>>>>>
>>>>>> As I mentioned earlier, it will probably take a while for each of 
>>>>>> these operations to complete (object log will be fast and the 
>>>>>> backup will be fast, if the mfc tosses out the majority of your 
>>>>>> data) and it is likely that the repository will grow some more 
>>>>>> during the process (hard to predict this one, tho).
>>>>>>
>>>>>> Step 1 will touch every session and every continuation, so it is 
>>>>>> hard to say what percentage of the objects is going to be touched 
>>>>>> (the expensive part); still, there are likely to be a lot of those 
>>>>>> puppies, and they will have to be read from disk into the SPC ...
>>>>>>
>>>>>> Step 3 is going to scan all of the live objects, and again it's 
>>>>>> hard to predict exactly how expensive it will be ...
>>>>>>
>>>>>> Dale
>>>>>>
>>>>>> [1] 
>>>>>> http://downloads.gemtalksystems.com/docs/GemStone64/3.2.x/GS64-SysAdmin-3.2/GS64-SysAdmin-3.2.htm
>>>>>> [2] http://gemtalksystems.com/techsupport/resources/
>>>>>>
>>>>>> On 3/25/15 10:18 AM, Lawrence Kellogg wrote:
>>>>>>> Hello Dale,
>>>>>>>
>>>>>>>   Thanks for the help. I’m a terrible system admin when it comes 
>>>>>>> to maintaining a system with one user, LOL.
>>>>>>>
>>>>>>>   I’m not running the maintenance VM and I haven’t been doing 
>>>>>>> regular mark for collects.
>>>>>>>
>>>>>>>   I’m trying to do a fullBackupTo: at the moment; we’ll see if I 
>>>>>>> get through that. Should I have done a markForCollection before 
>>>>>>> the full backup?
>>>>>>>
>>>>>>>   I’ll also try the ObjectLog trick.
>>>>>>>
>>>>>>>   I guess I need to start from a fresh extent, as you said, and 
>>>>>>> the extent file will not shrink. I’m at 48% of my available disk 
>>>>>>> space but it does seem slower than usual.
>>>>>>>
>>>>>>> Best,
>>>>>>>
>>>>>>> Larry
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>> On Mar 25, 2015, at 12:58 PM, Dale Henrichs via Glass 
>>>>>>>> <glass at lists.gemtalksystems.com> wrote:
>>>>>>>>
>>>>>>>> Lawrence,
>>>>>>>>
>>>>>>>> Are you doing regular mark for collects? Are you running the 
>>>>>>>> maintenance vm along with your seaside servers?
>>>>>>>>
>>>>>>>> Seaside produces persistent garbage (persistent session state 
>>>>>>>> that eventually times out) when it processes requests, so if you 
>>>>>>>> do not run the maintenance vm the sessions are not expired, and 
>>>>>>>> if you do not run mfc regularly the expired sessions are not 
>>>>>>>> cleaned up ...
>>>>>>>>
>>>>>>>> Another source of growth could be the Object Log ... (use 
>>>>>>>> `ObjectLogEntry initalize` to efficiently reset the Object Log 
>>>>>>>> ... pay attention to the misspelled selector ... that's another 
>>>>>>>> story). If you are getting continuations saved to the object 
>>>>>>>> log, the stacks that are saved can hang onto a lot of session 
>>>>>>>> state that, even though expired, will not be garbage collected, 
>>>>>>>> because the references from the continuations in the object log 
>>>>>>>> keep it alive ...
>>>>>>>>
>>>>>>>> The best way to shrink your extent (once we understand why it 
>>>>>>>> is growing) is to do a backup and then restore into a virgin 
>>>>>>>> extent ($GEMSTONE/bin/extent0.seaside.dbf)...
>>>>>>>>
>>>>>>>> Dale
>>>>>>>>
>>>>>>>> On 3/25/15 8:34 AM, Lawrence Kellogg via Glass wrote:
>>>>>>>>> Well, Amazon sent me a note that they are having hardware 
>>>>>>>>> trouble on my instance, so they shut it down. It looks like 
>>>>>>>>> they’re threatening to take the thing offline permanently so 
>>>>>>>>> I’m trying to save my work with an AMI and move it somewhere 
>>>>>>>>> else, if I have to.
>>>>>>>>>
>>>>>>>>> I finally got Gemstone/Seaside back up and running and noticed 
>>>>>>>>> these lines in the Seaside log file. These kinds of messages go 
>>>>>>>>> on once a day for weeks. Is this normal?
>>>>>>>>>
>>>>>>>>> --- 03/07/2015 02:44:14 PM UTC ---
>>>>>>>>>   Extent = 
>>>>>>>>> !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
>>>>>>>>>     has grown to 22528 megabytes.
>>>>>>>>>   Repository has grown to 22528 megabytes.
>>>>>>>>>
>>>>>>>>>   Extent = 
>>>>>>>>> !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
>>>>>>>>>     has grown to 22544 megabytes.
>>>>>>>>>   Repository has grown to 22544 megabytes.
>>>>>>>>>
>>>>>>>>> --- 03/08/2015 03:31:45 PM UTC ---
>>>>>>>>>   Extent = 
>>>>>>>>> !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
>>>>>>>>>     has grown to 22560 megabytes.
>>>>>>>>>   Repository has grown to 22560 megabytes.
>>>>>>>>>
>>>>>>>>>   Extent = 
>>>>>>>>> !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
>>>>>>>>>     has grown to 22576 megabytes.
>>>>>>>>>   Repository has grown to 22576 megabytes.
>>>>>>>>>
>>>>>>>>> --- 03/10/2015 03:19:34 AM UTC ---
>>>>>>>>>   Extent = 
>>>>>>>>> !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
>>>>>>>>>     has grown to 22592 megabytes.
>>>>>>>>>   Repository has grown to 22592 megabytes.
>>>>>>>>>
>>>>>>>>> --- 03/10/2015 03:46:39 PM UTC ---
>>>>>>>>>   Extent = 
>>>>>>>>> !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
>>>>>>>>>     has grown to 22608 megabytes.
>>>>>>>>>   Repository has grown to 22608 megabytes.
>>>>>>>>>
>>>>>>>>>   Extent = 
>>>>>>>>> !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
>>>>>>>>>     has grown to 22624 megabytes.
>>>>>>>>>   Repository has grown to 22624 megabytes.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> My extent has now grown to
>>>>>>>>>
>>>>>>>>> -rw------- 1 seasideuser seasideuser 23735566336 Mar 25 15:31 
>>>>>>>>> extent0.dbf
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> I don’t get a lot of traffic so I’m a little surprised at the 
>>>>>>>>> growth. Should I try to shrink the extent?
>>>>>>>>>
>>>>>>>>> I suppose I should also do a SystemRepository backup, if I can 
>>>>>>>>> remember the commands.
>>>>>>>>>
>>>>>>>>> Best,
>>>>>>>>>
>>>>>>>>> Larry
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> _______________________________________________
>>>>>>>>> Glass mailing list
>>>>>>>>> Glass at lists.gemtalksystems.com
>>>>>>>>> http://lists.gemtalksystems.com/mailman/listinfo/glass
>>>>>>>>
>>>>>>>> _______________________________________________
>>>>>>>> Glass mailing list
>>>>>>>> Glass at lists.gemtalksystems.com
>>>>>>>> http://lists.gemtalksystems.com/mailman/listinfo/glass
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
