[Glass] Backup procedure

Trussardi Dario Romano via Glass glass at lists.gemtalksystems.com
Tue Oct 20 09:37:28 PDT 2015


Hi, 

> 
> 
> On 06/08/2015 12:41 PM, Mariano Martinez Peck wrote:
>> 
>> 
>> On Mon, Jun 8, 2015 at 4:27 PM, Dale Henrichs via Glass <glass at lists.gemtalksystems.com> wrote:
>> Yeah, the support code surrounding GEM_TEMPOBJ_POMGEN_PRUNE_ON_VOTE has evolved over time ... 
>> 
>> Back in 2.4.x, when the original problem with Seaside gems was discovered, I think there might have been a few bugs in the implementation of GEM_TEMPOBJ_POMGEN_PRUNE_ON_VOTE that led to the initial recommendation of cycling gems (cycling a gem effectively gives you 100% pruning). Also, the max value of GEM_TEMPOBJ_POMGEN_PRUNE_ON_VOTE was limited to 90%, so there was quite a bit of room for some references to be kept around. Here's the comment about GEM_TEMPOBJ_POMGEN_PRUNE_ON_VOTE from the 2.4.4.1 $GEMSTONE/data/system.conf file:
>> 
>> #=========================================================================
>> # GEM_TEMPOBJ_POMGEN_PRUNE_ON_VOTE: Percent of pom generation area
>> #   to be thrown away when voting on possible dead objects.
>> #   Only subspaces of pom generation older than 5 minutes are thrown away.
>> #   The most recently used subspace is never thrown away.
>> #
>> # If this value is not specified, or the specified value is out of range,
>> # the default is used.
>> #
>> # Runtime equivalent: #GemPomGenPruneOnVote
>> # Default: 50
>> #    min: 0  max: 90
>> # GEM_TEMPOBJ_POMGEN_PRUNE_ON_VOTE = 50;
>> 
>> 
>> Over time the implementation has been changed to the point where you are allowed to specify a GEM_TEMPOBJ_POMGEN_PRUNE_ON_VOTE of 100%, so the need to cycle gems can be completely eliminated. Here's the comment from the $GEMSTONE/data/system.conf file for 2.4.6 (3.3 is identical):
>> 
>> 
>> 
>> Hi Dale,
>> 
>> Thanks for telling us. Now, if you had to guess, which would you prefer: 1) stop/start the Seaside gems and leave the value at 90% or so, or 2) set it to 100% and not start/stop the gems at all?
>> 
>> Thanks in advance,
>> 
> 
> If you're using a version of GemStone that supports the 100% option then I'd be inclined to go that route ... starting and stopping gems just requires extra machinery and can interrupt users ... Now, if I happened to have a gem or two doing batch processing, with the potential for referencing a bunch of persistent objects, I might think twice and weigh the cost of restarting the (presumably long running) batch process and refilling the cache for those gems against the amount of repository growth I might encounter ... 
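As a concrete sketch of that route (the file location is the standard default from the thread; whether your particular version accepts 100 is something to verify first):

```
# In $GEMSTONE/data/system.conf, or in the conf file used by the
# seaside gems: prune 100% of the eligible pom generation subspaces
# when voting, removing the need to cycle gems.
GEM_TEMPOBJ_POMGEN_PRUNE_ON_VOTE = 100;
```

The conf comment quoted above names #GemPomGenPruneOnVote as the runtime equivalent, so an already-running gem could presumably be adjusted with `System configurationAt: #GemPomGenPruneOnVote put: 100` instead of a restart.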

	I work with a gsDevKit 3.1.0.6 repository.

	Does it support GEM_TEMPOBJ_POMGEN_PRUNE_ON_VOTE set to 100%?
	
	If I understand correctly, when I have gems without batch processing I don't have a problem with the GEM_TEMPOBJ_POMGEN_PRUNE_ON_VOTE parameter.

	When I have a gem doing batch processing (running for more than a few minutes) I run into the problem with the GEM_TEMPOBJ_POMGEN_PRUNE_ON_VOTE parameter.

	But I understand that the wisest solution would be to wait for the batch processing to finish, and only then start the garbage collection mechanism.

	Is that right?

	If so, how can I manage it?
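One way to sequence it, as a sketch (the session details are assumptions, not from this thread: run it from a separate topaz session as a user with the GarbageCollection privilege, such as DataCurator):

```smalltalk
"Assumed sketch: start the repository-wide GC only after the
 long-running batch gem has committed its work and logged out,
 so its temporary references no longer keep possible-dead
 objects alive during the subsequent voting phase."
SystemRepository markForCollection.
```

markForCollection computes the possible-dead set; the gems then vote on it at their next commit or abort, which is exactly where GEM_TEMPOBJ_POMGEN_PRUNE_ON_VOTE takes effect.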



	Another question:

		I have an Ubuntu server with 8 GB of memory and SHR_PAGE_CACHE_SIZE_KB set to 2097152.

		Now the system is loaded with some data and I think it has used all of the SHR_PAGE_CACHE.

		But when I log in to the system, the shell reports:     memory usage: 9%

		Doesn't the memory usage figure take the kernel.shm* parameters into account?

		ipcs -lm reports:

			------ Shared memory limits --------
			max number of segments = 4096
			max seg size (kbytes) = 6291456
			max total shared memory (kbytes) = 6291456
			min seg size (bytes) = 1
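For what it's worth, 2097152 KB is 2 GB, i.e. 25% of 8 GB, so a fully resident and fully counted cache should push the figure well above 9%. Two likely explanations (assumptions, not confirmed by the thread): simple login banners often compute usage from MemTotal/MemFree alone and ignore SysV shared memory, and shm pages are typically allocated lazily, so the segment may not be fully resident yet. To see what is actually attached and resident on Linux:

```shell
# List active SysV shared-memory segments; the GemStone shared page
# cache shows up here once a stone or gem has attached it, with its
# size and attach count.
ipcs -m

# The Shmem line in /proc/meminfo counts resident shared memory,
# which naive MemTotal/MemFree-based "memory usage" banners miss.
grep -E '^(MemTotal|MemFree|Shmem):' /proc/meminfo
```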

	Thanks for any considerations,

		Dario

> 
> With the ability to flush all persistent objects you're basically stopping/restarting with respect to POM ...
> 
> Dale
> _______________________________________________
> Glass mailing list
> Glass at lists.gemtalksystems.com
> http://lists.gemtalksystems.com/mailman/listinfo/glass


