[Glass] Automatic increment policies of extent0.dbf size

Dale Henrichs via Glass glass at lists.gemtalksystems.com
Sat Feb 11 11:27:13 PST 2017


Paul and Mariano,

I'm curious how the repository grows so large in the first place. It 
seems that with regular MFCs (markForCollection garbage collections) you 
should be able to keep the extent from getting that big. The primary 
cause of explosive repository growth is having a large commit record 
backlog...
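For example, a manual MFC from topaz or a workspace looks roughly like 
this (a sketch only: run it as a user with GarbageCollection privilege, 
and keep in mind that the space an MFC finds is only reclaimed once no 
session is still holding on to an old commit record):

    System abortTransaction.          "make sure this session has no uncommitted changes"
    GsFile gciLogServer: SystemRepository fileSizeReport.   "size/free space before"
    SystemRepository markForCollection.                     "sweep for dead objects"
    "reclaim runs in the background; re-check the report later"
    GsFile gciLogServer: SystemRepository fileSizeReport.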

This can happen when you have a single session sitting in transaction 
for a "very long time" while other sessions are busy committing away ... 
This kind of thing can happen by accident: an idle but logged-in 
development session (tODE, topaz, etc.); or it can be the result of 
legitimate, long-running operations. Not much can be done about the 
accidents, but you should be able to minimize repository growth from 
legitimate, long-running operations: take a look at the "Disk Space and 
Commit Record Backlogs" section of the System Administration Guide [1] 
for different techniques you can use to manage disk space during such 
operations.
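The basic technique for the legitimate case is to have the long-running 
job commit (or abort) every so often, so that it stops pinning an old 
commit record. A minimal sketch (the collection, the per-item work, and 
the batch size below are made up for illustration):

    | count |
    count := 0.
    myLargeCollection do: [:each |
        each recomputeSomething.                    "hypothetical per-item work"
        count := count + 1.
        (count \\ 10000) = 0
            ifTrue: [ System commitTransaction ] ]. "frees the older commit record"
    System commitTransaction.

(In real code you would also want to check the result of 
commitTransaction and handle commit conflicts.)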

If you are not sure when or why your disk space is growing, then you 
will want to use statmon to monitor your repository growth and determine 
when and why it is happening ... basically, identify any accidental or 
legitimate operations that are causing your repository to grow and fix 'em.
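statmon gives you the full picture over time, but a quick Smalltalk-side 
spot check is also handy: log the extent report and the current sessions 
every so often (say, from the maintenance gem) and line that up with what 
the application was doing at the time. A rough sketch:

    "timestamp, extent size/free space, and who is logged in"
    GsFile gciLogServer: DateTime now printString.
    GsFile gciLogServer: SystemRepository fileSizeReport.
    GsFile gciLogServer: System currentSessionNames.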

There is also a Tech Tip, "Causes of Repository Growth" [2], that is 
worth looking at.

Dale


[1] https://downloads.gemtalksystems.com/docs/GemStone64/3.3.x/GS64-SysAdminGuide-3.3/4-RunningGemStone.htm#pgfId-970470
[2] https://gemtalksystems.com/techsupport/techtip-causes-of-repository-growth/

On 2/11/17 11:01 AM, Mariano Martinez Peck via Glass wrote:
>
>
> On Sat, Feb 11, 2017 at 3:47 PM, Paul DeBruicker via Glass 
> <glass at lists.gemtalksystems.com> wrote:
>
>     I just had an out-of-disk error with another service that shares a
>     disk with my 3.3.1 stone, because my extent0.dbf file had grown to
>     ~23GB in size. After going through the "shrink the repository"
>     procedure the extent is now 680MB.
>
>     Is there a way to prevent the extent from growing indefinitely with
>     empty space, or should I just make a plan to go through the "shrink
>     the repo" process on a regular basis?
>
>
>
> I do that. As far as I know, the extent will never decrease in size 
> automatically. It may increase the "free" space within it, but AFAIK 
> growing the extent is expensive, so I guess it is not shrunk 
> automatically in order to avoid having to grow it again later.
> So... I compact the extent on a regular basis.
> I have a GsDevKit_home script for that if you want it.
>
> Cheers,
>
>
>     Thanks
>
>
>     Paul
>
>
>
>
>
>
>     GLASS mailing list wrote
>     > On 11/13/15 6:10 AM, Trussardi Dario Romano via Glass wrote:
>     >> Hi,
>     >>
>     >> I have a 3.1.0.6 extent0.dbf in deployment.
>     >>
>     >> Some months ago, when I was working with GemTools and tODE, the
>     >> repository grew up to 5991563264 bytes,
>     >> with a freeSpace of 5286739968.
>     >>
>     >> Now, day after day, the system freeSpace decreases by about 50MB,
>     >>
>     >> and today the freeSpace is about 1311784960.
>     >>
>     >> My question is to understand (in broad terms) when and how the
>     >> repository size grows.
>     >>
>     >> When will my repository next increase in size?
>     >>
>     >> Thanks for any considerations.
>     >>
>     >> Dario
>     >>
>     >>
>     >>
>     > The broad answer is that it depends upon your application ... You
>     > are using Seaside, and Seaside stores session state as persistent
>     > objects. If you don't run the maintenance vm and expire the Seaside
>     > sessions (WAGemStoneMaintenanceTask class>>maintenanceTaskExpiration)
>     > on a regular basis, then your database will grow. Seaside also adds
>     > error continuations to the object log, and this means that a copy of
>     > the process that was running at the time of the error is persisted,
>     > so all temp vars and objects reachable from the process are
>     > persisted. If you don't periodically clean up the errors in your
>     > object log, then the object log could be another source of growth ...
>     >
>     > I think that we had covered those two possibilities, along with a
>     > couple of other recommendations, in the previous email. If you are
>     > regularly running the session expiration and regularly pruning your
>     > object log and your db is still growing, then it is likely that your
>     > data structures are the source of the growth, and we would want to go
>     > through the process of a) characterizing the types of objects that
>     > are accumulating in the repository, b) answering the question of
>     > whether the accumulated objects are reasonable within the context of
>     > your application, and if not, then c) trying to understand
>     > (Repository>>findReferencePathToObject: and friends) what is causing
>     > those objects to continue to live ...
>     >
>     > In 3.2.x we have a better API for finding reference paths
>     > (Repository>>findAllReferencePathsToObject: and friends), so if it
>     > becomes difficult to find the meaningful reference path in 3.1.0.6,
>     > you could upgrade a copy of your extents to 3.2.x and use the
>     > findAllReferencePathsToObject: variant ...
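>     >
>     > For reference, the usage is roughly (assuming you already have a
>     > suspect object in hand from your analysis, here called
>     > suspectObject):
>     >
>     >     | path |
>     >     path := SystemRepository findReferencePathToObject: suspectObject.
>     >     path printString
>     >
>     > ... which reports a chain of references that is keeping the object
>     > alive.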
>     >
>     > I hope this is what you were looking for,
>     >
>     > Dale
>     >
>
>
>
>
> -- 
> Mariano
> http://marianopeck.wordpress.com
>
>
