[Glass] Gemstone 2.4.4.1 repository full

Lyn Headley laheadle at gmail.com
Thu Jan 16 10:53:19 PST 2014


On this note, I'd be interested to hear about the results of any
investigation of why these Pier instances ended up consuming 4G of space,
which, if I have been reading correctly, was a surprise in this context.


On Thu, Jan 16, 2014 at 10:25 AM, Dale Henrichs <
dale.henrichs at gemtalksystems.com> wrote:

> Ah ... a keyfile issue ... no problem, I'll mail you a new keyfile that is
> updated to match the more recent free version license restrictions (i.e.,
> unlimited space) ...
>
> The MFC (mark for collect) process is run by the maintenance vm and it
> does repository wide garbage collection ...
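>
> As a quick check (just a sketch, assuming the standard System
> currentSessionNames report is available in this GemStone version), you can
> log in to topaz as DataCurator or SystemUser and list the logged-in
> sessions; the maintenance vm and the GcUser garbage-collection sessions
> should show up there if repository-wide GC is able to run:
>
>     run
>     System currentSessionNames
>     %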
>
> Dale
>
>
> On Thu, Jan 16, 2014 at 9:16 AM, Dario Trussardi <
> dario.trussardi at tiscali.it> wrote:
>
>>
>> Ciao,
>>
>> thank Martin, Dale,
>>
>>
>> Dario,
>>
>> I second Martin's suggestions...
>>
>> You say that "repository increased to max size". Do you mean that you ran
>> out of disk space on your machine or that you had specified an extent size
>> limit in your system.conf file?
>>
>>
>> I don't have a disk space problem.
>>
>>
>>
>> I ask because if you've specified an extent size limit in system.conf and
>> you have available disk on your system, you can bump up the extent size
>> limit to take advantage of the available disk ...
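>>
>> For illustration (a sketch only, with made-up paths and values, not your
>> actual settings), the relevant lines in a system.conf usually look
>> something like this; raising or clearing the size entry lets the extent
>> grow further, up to whatever the keyfile allows:
>>
>>     DBF_EXTENT_NAMES = /opt/gemstone/data/extent0.dbf;
>>     # maximum size of the extent in MB; an empty entry means no per-extent limit
>>     DBF_EXTENT_SIZES = 4096;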
>>
>> If you've literally run out of disk space on your system, then you will
>> need to find more disk space ... is the extent the cause of your "out of
>> disk" condition or has some other file or file9s) consumed disk space ...
>> if you can free up disk space by deleting other files (or copying them to
>> another system) the stone will start working again ...
>>
>> Before freeing up disk space you want to stop any of the seaside gems
>> and/or batch gems, because as soon as you free up disk space the stone will
>> start processing transactions again and you don't want to give the process
>> that has consumed the disk space so far a chance to eat up any disk space
>> that you are able to free up...
>>
>> Once you've freed up some space, you want to shut down your stone and find
>> additional disk space ...
>>
>> If you can add another drive to your system you can tell the stone to add
>> an extent on this new disk and restart ... you will probably need to move
>> your tranlog locations to the new disk as well ....
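>>
>> Concretely (a sketch with hypothetical paths; adjust them to your own
>> layout), that usually means adding a second entry to DBF_EXTENT_NAMES and
>> pointing STN_TRAN_LOG_DIRECTORIES at the new disk before restarting the
>> stone:
>>
>>     # one name per extent; add a matching entry to DBF_EXTENT_SIZES if you use it
>>     DBF_EXTENT_NAMES = /opt/gemstone/data/extent0.dbf, /newdisk/gemstone/extent1.dbf;
>>     STN_TRAN_LOG_DIRECTORIES = /newdisk/gemstone/tranlogs/, /newdisk/gemstone/tranlogs/;
>>
>> If the stone is still running, an extent can also be added online from
>> topaz with something like "SystemRepository createExtent:
>> '/newdisk/gemstone/extent1.dbf'".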
>>
>>
>> But I have a gemstone.key:
>>
>> # GemStone version: 2.4.4.1, Tue Jul 13 15:19:49 2010
>> # Customer license: Free 4GB GS/S Web Edition Beta
>> # Host processor type: Linux x86
>> # Customer permissions:
>> #   NO_SUNSET STONE GEM PGSVR NETLDI NO_GEMCOPY NO_GciTraversal
>> # Stone Session limit: 10000 (max possible for executable)
>> # Repository size limit: 4096 MB
>> # Repository object limit: 1024 million objects
>> # Shared cache size limit: 1024 MB
>> # CPU affinity: limited to 1 CPUs
>> # Customer name: GemStone Seaside Community
>>
>> Can I have a new key to increase the size of the extent and do the
>> garbage collection?
>>
>> Once you've got some extra headroom with disk space, you will want to
>> run an MFC and see how much of the data is garbage ... if you gain a lot of
>> space from the MFC, then a backup and restore will shrink the size of the
>> extent files ...
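>>
>> In topaz, logged in as a user with garbage-collection privileges
>> (DataCurator or SystemUser), that sequence is roughly the following (a
>> sketch, with a made-up backup path):
>>
>>     run
>>     "mark dead objects repository-wide; the reclaim gems then free the space"
>>     SystemRepository markForCollection.
>>     %
>>
>>     run
>>     "once reclaim has finished, a full backup contains only live objects"
>>     SystemRepository fullBackupTo: '/backups/glass-full-backup.dat'.
>>     %
>>
>> Restoring that backup into freshly created extents (restoreFromBackup: on
>> an otherwise empty repository) is what actually shrinks the extent files
>> on disk.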
>>
>>
>> I don't know MFC.
>>
>> Where can I find information about it?
>>
>> Thanks,
>>
>> Dario
>>
>>
>> This isn't necessarily the complete process, but hopefully you have
>> enough to go on ...
>>
>> Dale
>>
>>
>> On Thu, Jan 16, 2014 at 7:27 AM, Dario Trussardi <
>> dario.trussardi at tiscali.it> wrote:
>>
>>> Hi,
>>>
>>>         I have a GLASS instance based on GemStone version 2.4.4.1.
>>>
>>>         It manages only two Pier instances and nothing else.
>>>
>>>         I find it strange, but the repository has grown to its maximum size,
>>>
>>>         and now the web requests don't work.
>>>
>>>         GemTools sometimes answers with:
>>>
>>>                         Error: Your GemStone session has been forcibly
>>> terminated, , repository full
>>>
>>>
>>>         Now I open a topaz session in this environment.
>>>
>>>
>>>         A)      When I log in as GcUser,
>>>
>>>                                 the system reports:
>>>
>>>                                 GemStone: Error         Fatal
>>>                                 The Repository is full and can no longer
>>> be expanded.,
>>>                                 Error Category: [GemStone] Number: 4002
>>> Arg Count: 1
>>>                                 Arg 1: 20
>>>
>>>
>>>         B)      As SystemUser I can log in and
>>> submit the run command:
>>>
>>>                         SystemRepository markForCollection
>>>
>>>
>>>                 It works fine for some minutes, but then topaz
>>> reports:
>>>
>>>                         GemStone: Error         Fatal
>>>                         Your GemStone session has been forcibly
>>> terminated, , repository full
>>>                         Error Category: [GemStone] Number: 4059 Arg
>>> Count: 1
>>>                         Arg 1: 20
>>>
>>>
>>>         B1)     If I submit the run command:
>>>
>>>                         SystemRepository reclaimAll
>>>
>>>                 topaz reports:
>>>
>>>                         GemStone: Error         Nonfatal
>>>                         A reclaimAll operation was attempted but at
>>> least one GC session is
>>>                         not running.  Ensure all reclaim sessions and
>>> the Admin GC session
>>>                         are running and try the operation again.
>>>                         Error Category: [GemStone] Number: 2395 Arg
>>> Count: 1
>>>                         Arg 1: a Repository
>>>                           name            SystemRepository
>>>                          dataDictionary  nil
>>>                          #1 a Segment
>>>                          #2 a Segment
>>>                           #3 a Segment
>>>                           #4 a Segment
>>>                           #5 a Segment
>>>                          #6 a Segment
>>>                          #7 a Segment
>>>                          #8 a Segment
>>>
>>>         What can I do now to fix the database?
>>>
>>>
>>>         Thanks for any considerations.
>>>
>>>
>>>                 Dario
>>
>>
>
> _______________________________________________
> Glass mailing list
> Glass at lists.gemtalksystems.com
> http://lists.gemtalksystems.com/mailman/listinfo/glass
>
>