[Glass] Backup procedure

Dale Henrichs via Glass glass at lists.gemtalksystems.com
Tue Oct 20 12:38:34 PDT 2015



On 10/20/2015 09:37 AM, Trussardi Dario Romano via Glass wrote:
> Ciao,
>
>>
>>
>> On 06/08/2015 12:41 PM, Mariano Martinez Peck wrote:
>>>
>>>
>>> On Mon, Jun 8, 2015 at 4:27 PM, Dale Henrichs via Glass 
>>> <glass at lists.gemtalksystems.com 
>>> <mailto:glass at lists.gemtalksystems.com>> wrote:
>>>
>>>     Yeah, the support code surrounding
>>>     GEM_TEMPOBJ_POMGEN_PRUNE_ON_VOTE has evolved over time ...
>>>
>>>     Back in 2.4.x when the original problem with seaside gems was
>>>     discovered, I think that there might have been a few bugs in the
>>>     implementation of GEM_TEMPOBJ_POMGEN_PRUNE_ON_VOTE that led to
>>>     the initial recommendation of cycling gems (that way you get
>>>     100% GEM_TEMPOBJ_POMGEN_PRUNE_ON_VOTE). Also the max value of
>>>     GEM_TEMPOBJ_POMGEN_PRUNE_ON_VOTE was limited to 90%, so there
>>>     was quite a bit of room for some references to be kept around.
>>>     Here's the comment about GEM_TEMPOBJ_POMGEN_PRUNE_ON_VOTE from
>>>     the 2.4.4.1 $GEMSTONE/data/system.conf file:
>>>
>>>     #=========================================================================
>>>     # GEM_TEMPOBJ_POMGEN_PRUNE_ON_VOTE: Percent of pom generation area
>>>     #   to be thrown away when voting on possible dead objects.
>>>     #   Only subspaces of pom generation older than 5 minutes are thrown away.
>>>     #   The most recently used subspace is never thrown away.
>>>     #
>>>     # If this value is not specified, or the specified value is out of range,
>>>     # the default is used.
>>>     #
>>>     # Runtime equivalent: #GemPomGenPruneOnVote
>>>     # Default: 50
>>>     #    min: 0  max: 90
>>>     # GEM_TEMPOBJ_POMGEN_PRUNE_ON_VOTE = 50;
>>>
>>>
>>>     Over time the implementation has changed to the point where
>>>     you are allowed to specify a GEM_TEMPOBJ_POMGEN_PRUNE_ON_VOTE of
>>>     100%, so the need to cycle gems can be completely eliminated.
>>>     Here's the comment from the $GEMSTONE/data/system.conf file for
>>>     2.4.6 (3.3 is identical):
>>>
>>>
>>>
>>> Hi Dale,
>>>
>>> Thanks for telling us. Now, if you were to guess, would you prefer
>>> 1) to stop/start the Seaside gems and leave a value of 90% or so, or
>>> 2) to set 100% and not start/stop gems at all?
>>>
>>> Thanks in advance,
>>>
>>
>> If you're using a version of GemStone that supports the 100% option
>> then I'd be inclined to go that route ... starting and stopping gems
>> just requires extra machinery and can interrupt users ... Now if I
>> happen to have a gem or two that are doing batch processing and have
>> the potential to reference a bunch of persistent objects, I might think
>> twice and consider the cost of restarting the (presumably long
>> running) batch process and refilling the cache for those gems versus
>> the amount of repository growth I might encounter ...
>
> I work with a gsDevKit 3.1.0.6 repository.
>
> Does it support setting GEM_TEMPOBJ_POMGEN_PRUNE_ON_VOTE to 100%?
I don't know offhand ... take a look in $GEMSTONE/data/system.conf ...
there are comments for every possible conf setting that fully describe
the ranges, default values, etc.
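
For example, something like this will show the documented range and
default for the version you have installed (just a sketch; the exact
comment text varies between GemStone versions):

  # show the doc block for the prune-on-vote setting in the stock conf file
  grep -n -B 2 -A 10 GEM_TEMPOBJ_POMGEN_PRUNE_ON_VOTE $GEMSTONE/data/system.conf
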
> If I understand correctly, when I have gems without batch processing I
> don't have a problem with the GEM_TEMPOBJ_POMGEN_PRUNE_ON_VOTE parameter.
Every active Gem could end up voting down a dead object because it has
kept a reference in its head ... I mention the long running batch
process because it could have a "valid" reference to the "dead object"
obtained in a transaction before the object became "dead" ...
>
> When I have a gem with batch processing (lasting more than a few
> minutes) I do see the problem with the GEM_TEMPOBJ_POMGEN_PRUNE_ON_VOTE
> parameter.
>
> But I understand that the wisest solution would be to wait for the
> batch processing to finish, and only then start the garbage collection
> mechanism.
Well, I think the wise answer is to not worry too much about when a dead
object actually disappears from your repository ... if you have gems
that will be running for weeks at a time, then set
GEM_TEMPOBJ_POMGEN_PRUNE_ON_VOTE to the max value allowed for the
GemStone version that you are running and then let "nature take its
course" ... it may take more time than you think for the object to go
away, but it will go away eventually.
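
For example, assuming your version allows 100, the gem's conf file entry
would look something like this (the file name below is just a
placeholder for whatever conf file your seaside gems actually read):

  # e.g. in seaside.conf -- 100 = prune the entire pom generation area
  # when the gem votes, so no old subspaces are kept around
  GEM_TEMPOBJ_POMGEN_PRUNE_ON_VOTE = 100;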

If you are under repository size pressure then the guaranteed method to 
avoid voting down dead objects is to restart ALL of the gems in the 
system before running the MFC ... but I think this is an extreme 
solution that should only be used when you are under critical repository 
size pressures ...
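
If you do end up going that route, the MFC itself can be kicked off from
topaz once the gems have been restarted; a rough sketch (the stone name
and credentials below are placeholders for your own setup):

# after all gems have been restarted, start a markForCollection
topaz -l <<'EOF'
set gemstone gs64stone
set username DataCurator
set password swordfish
login
run
SystemRepository markForCollection
%
logout
exit
EOF
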
>
> Is that right?
>
> If yes, how can I manage it?
>
>
>
> Some other questions:
>
> I have an Ubuntu server with 8GB of memory and
> SHR_PAGE_CACHE_SIZE_KB set to 2097152.
>
> Now the system is loaded with some data and I think it has used all of
> the SHR_PAGE_CACHE.
Why do you think that the full SHR_PAGE_CACHE is used? Are you looking 
at statmonitor output with vsd?
>
> But when I log in to the system, the shell reports:     memory usage: 9%
I think it depends upon what command you use to see the memory usage,
and on Linux I don't think there is a good way to get memory usage
information when shared memory is involved; some bits of memory can get
paged out and so on ...
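
If you want a rough idea of what is actually sitting in shared memory,
the kernel-level numbers are more useful than the login banner; for
example (a Linux-specific sketch):

  # list the shared memory segments, their sizes and attach counts
  ipcs -m
  # kernel-wide total of shared memory currently allocated
  grep -i shmem /proc/meminfo
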
>
> Doesn't the memory usage figure take the kernel.shm* parameters into account?
>
> The ipcs -lm report:
>
> ------ Shared Memory Limits --------
> max number of segments = 4096
> max seg size (kbytes) = 6291456
> max total shared memory (kbytes) = 6291456
> min seg size (bytes) = 1
>
I would say that you should run statmonitor and use vsd to look at the
results ... once you've started statmonitor, use vsd (an X application)
to look at the stat file; I recommend "Template > New Template Chart >
CacheMix" to start with ... there are hundreds of stats, and I just
don't have the bandwidth to coach folks on the interpretation of the
results, but you can check "Show Statistics Info", which will give you
documentation to read about each stat when it is selected in the chart ...
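
A minimal way to get started looks something like this (flags and output
file naming differ a bit between GemStone versions, so check
statmonitor -h or the System Administration Guide):

  # sample cache/gem statistics for the stone (stone name is a placeholder);
  # statmonitor runs until stopped and writes its samples to a data file
  statmonitor gs64stone
  # then point vsd at the file statmonitor produced and open a CacheMix chart
  vsd <statmonitor-output-file> &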

Dale