[Glass] extent0.dbf grows

Mariano Martinez Peck via Glass glass at lists.gemtalksystems.com
Wed Aug 5 14:33:55 PDT 2015

On Wed, Aug 5, 2015 at 5:41 PM, Dale Henrichs via Glass <
glass at lists.gemtalksystems.com> wrote:

> On 07/31/2015 08:58 AM, Trussardi Dario Romano via Glass wrote:
> James,
> On Jul 31, 2015, at 3:37 AM, Trussardi Dario Romano via Glass <
> glass at lists.gemtalksystems.com> wrote:
> Hi,
> I have a deployment system based on GemStone version ''
> I use GemTools and the tODE client to do some work on the repository:
> updating code, saving the repository, and so on.
> When I log in with the tools I notice that the repository grows
> significantly, and the corresponding full backups grow significantly as well.
> To reclaim the space I need to clear the Object Log with the corresponding
> command (ol clear in tODE).
> Do you think that the Object Log grows significantly during your time
> logged in with the tools? What is its size at the beginning and end of your
> session?
> I don't have exact data on how extent0.dbf changes in size between the
> beginning and end of a GemTools session.
> But the full backup size before the Object Log clear is 4 258 529 280 bytes,
> and after the Object Log clear it is 1 376 124 928 bytes.
> The SystemRepository freeSpace size before the Object Log clear is
> 1 092 419 584 bytes; after the Object Log clear (and one hour of system
> background work) it is 5 267 701 760 bytes.
> Okay, as James has mentioned, it definitely looks like the Object Log is
> the source of the "excess data" ... it would be interesting to understand
> what is in the object log ... My guess would be that you are having
> recurring errors, and the continuations created to record those errors can
> easily cause a bunch of data to be stored in your extent ... if the errors
> are providing useful data, then this can be viewed as a "cost of doing
> business" and you will just need a regularly scheduled task for clearing
> the object log ... you could arrange to regularly clear continuation
> entries from the object log that are more than a day or a week (or ?) old;
> then at least you would expect to reach a steady state for extent size ...
> then you would only need to consider shrinking the extent when you had an
> anomaly where a series of unexpected errors cropped up ....
I do exactly that as part of my daily cleanup:

cleanObjectLogEntriesDaysAgo: aNumberOfDays
	| log |
	log := self objectLogEntries: true.
	log ifNil: [ ^ self ].	"lock was denied or dirty"
	(log select: [ :ea | (Date today - ea stamp asDate) asDays >= aNumberOfDays ])
		do: [ :ea | log remove: ea ifAbsent: [] ].
	System commitTransaction.

objectLogEntries: shouldLock

	shouldLock
		ifTrue: [
			System
				writeLock: ObjectLogEntry objectQueue
				ifDenied: [
					Transcript show: 'ObjectLogEntry objectQueue lock denied'.
					^ nil ]
				ifChanged: [
					System addToCommitOrAbortReleaseLocksSet: ObjectLogEntry objectQueue.
					Transcript show: 'ObjectLogEntry objectQueue lock dirty'.
					^ nil ].
			System addToCommitOrAbortReleaseLocksSet: ObjectLogEntry objectQueue ].
	^ ObjectLogEntry objectLog

Maybe we could add this directly to the class side of ObjectLogEntry so that
others can benefit from it?
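A class-side convenience might look roughly like the sketch below. Note that
the selector removeEntriesOlderThanDays: is hypothetical, not an existing
ObjectLogEntry API, and this version skips the lock handling shown above:

```smalltalk
"Hypothetical class-side method on ObjectLogEntry -- a sketch only.
 Removes object log entries older than the given number of days and
 answers how many were removed."
removeEntriesOlderThanDays: aNumberOfDays
	| log stale |
	log := self objectLog.
	stale := log select: [ :each |
		(Date today - each stamp asDate) asDays >= aNumberOfDays ].
	stale do: [ :each | log remove: each ifAbsent: [] ].
	System commitTransaction.
	^ stale size
```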

> As Mariano and James point out, there are other possible sources of extent
> growth that are related to a commit record backlog, e.g. because you have a
> GemTools/tODE/topaz session open (and idle) on a production system that is
> "busy committing" ... commit record backlog data is transient data that is
> stored in the extent and will cause the extent to grow, but once the
> session or sessions causing the commit record backlog log out, commit, or
> abort, the transient data is no longer needed by the system and will
> "magically turn into free space" at checkpoint boundaries ....
> The key point here is that object log data is persistent "user data" and
> will not turn into free space until 1) you break the link to the persistent
> root (object log clear) and 2) you run an MFC. Commit record backlog data
> is "system data" and will turn into free space as soon as the sessions
> abort/commit/logout and a checkpoint is hit, i.e., no MFC is needed ... The
> default checkpoint interval is 5 minutes, I think ...
Dale, it is not clear to me what the checkpoint interval is. I understood
checkpoints happened at abort/commit/logout... so how is this interval
related? Is there a way I can check the stone configuration parameter for
this (the 5 minutes)?
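For what it's worth, the checkpoint interval is governed by the stone's
STN_CHECKPOINT_INTERVAL configuration parameter (300 seconds by default). I
believe it can be inspected from a logged-in session along these lines,
assuming System class >> stoneConfigurationAt: is available in your version
and that the runtime key for the parameter is #StnCheckpointInterval (check
System stoneConfigurationReport if the key differs):

```smalltalk
"Sketch: read the stone's checkpoint interval, in seconds."
| interval |
interval := System stoneConfigurationAt: #StnCheckpointInterval.
Transcript
	show: 'Checkpoint interval (seconds): ';
	show: interval printString;
	cr.
```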

So for the "system data" to turn into free space, the other gems need to
abort/commit/logout, AND the space only becomes free after the next
checkpoint (up to 5 minutes later)? I ask because I am scheduling some batch
jobs to run at night, and as soon as they all finish, I run a GC... and
sometimes it seems I do not really reclaim as much as I should have...
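The nightly sequence I have in mind is roughly the following sketch. The
ordering is the point: clear persistent roots and commit first, then run the
MFC (SystemRepository markForCollection is the standard GemStone entry
point); the "clear roots" step is a placeholder for whatever cleanup applies:

```smalltalk
"Sketch of a nightly cleanup sequence, run after the batch jobs finish."
System abortTransaction.	"drop this session's old view of the repository"
"... clear the object log / other persistent roots here ..."
System commitTransaction.	"make the cleared roots visible to the MFC"
SystemRepository markForCollection.	"run the MFC to find the dead objects"
```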


