[Glass] [GLASS] Seaside - growing extent - normal?
Dale Henrichs via Glass
glass at lists.gemtalksystems.com
Thu Apr 2 09:15:03 PDT 2015
On the one hand, Richard is correct, but on the other hand ... there is
no good reason to have that many counters hanging around ... the
RcCounters are only supposed to survive as long as a WASession, and that
should normally be 10 minutes or so ... the counters are a side effect
of the deeper "object leak".
Dale
On 04/02/2015 09:09 AM, Richard Sargent via Glass wrote:
> The number of counter elements suggests you should perform some
> maintenance on your RcCounters.
>
> There is an instance method #cleanupCounter which has the following
> comment:
> "For sessions that are not logged in, centralize the individual session
> element's values to the global session element (at index 1). This
> may cause
> concurrency conflict if another session performs this operation."
>
> I believe the counter manages the potential conflict by holding a
> value per session. You may have a counter element for every session
> there ever was!
> (Although, I find it hard to believe you have a thousand sessions per
> counter and also have 338,617 counters!)
>
> You should track down the reference paths for a sampling of those
> counters and find out what's holding on to them.
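>
> A minimal sketch of that maintenance (assuming Repository>>listInstances:
> answers one collection of instances per class passed in, and that
> #cleanupCounter is the RcCounter instance method quoted above):
>
>     | counters |
>     counters := (SystemRepository listInstances: (Array with: RcCounter)) first.
>     counters do: [ :each | each cleanupCounter ].
>     System commitTransaction
>
> With 338,617 counters you may want to commit every few thousand
> iterations rather than in one giant transaction, and run it from a
> single session to avoid the concurrency conflict the comment warns about.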
>
>
> On Thu, Apr 2, 2015 at 8:32 AM, Dale Henrichs via Glass
> <glass at lists.gemtalksystems.com
> <mailto:glass at lists.gemtalksystems.com>> wrote:
>
> Okay,
>
> This is good information because it does give us some clues ...
> I'll look into the RcCounterElement ... some of the Rc Collections
> can hold onto data with the goal of eliminating/reducing conflicts
> and that appears to be the case here ...
>
> Dale
>
> On 04/01/2015 07:05 PM, Lawrence Kellogg wrote:
>>
>>> On Mar 31, 2015, at 12:41 PM, Dale Henrichs
>>> <dale.henrichs at gemtalksystems.com
>>> <mailto:dale.henrichs at gemtalksystems.com>> wrote:
>>>
>>> Larry,
>>>
>>> I'm just going over the old ground again, in case we missed
>>> something obvious ... I would hate to spend several more days
>>> digging into this only to find that an initial step hadn't
>>> completed as expected ...
>>>
>>> So it looks like the object log is clear. Next I'd like to
>>> double check and make sure that the session state has been
>>> expired ...
>>>
>>> So let's verify that `UserGlobals at: #'ExpiryCleanup'` is no
>>> longer present, and I'd like to run WABasicDevelopment
>>> reapSeasideCache one more time for good luck.
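>>>
>>> A minimal workspace sketch for both checks (the #ifAbsent: guard is
>>> only there in case the key has already been removed):
>>>
>>>     | cleanup expired |
>>>     cleanup := UserGlobals at: #'ExpiryCleanup' ifAbsent: [ nil ].
>>>     Transcript cr; show: 'ExpiryCleanup: ' , cleanup printString.
>>>     expired := WABasicDevelopment reapSeasideCache.
>>>     Transcript cr; show: 'Expired: ' , expired printString , ' sessions.'.
>>>     System commitTransaction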
>>>
>>
>> Yes, the collection at UserGlobals at: #ExpiryCleanup is empty.
>>
>> I ran the reapSeasideCache code again.
>>
>>
>>
>>> Assuming that neither of those turn up anything of use, the next
>>> step is to find out what's hanging onto the unwanted objects ...
>>>
>>> Since I think we've covered the known "object hogs" in the
>>> Seaside framework, there are a number of other persistent caches
>>> in GLASS that might as well be cleared out. You can use the
>>> workspace here[1] to clean them up ... I don't think that these
>>> caches should be holding onto 23G of objects, but run an MFC
>>> afterwards to be safe ...
>>>
>>
>> I cleared the caches.
>>
>> I ran another MFC
>>
>>
>>
>>>
>>> At this point there are basically two directions we can take:
>>>
>>> 1. Top down. Start inspecting the data structures in your application
>>>    and look for suspicious collections/objects that could be hanging
>>>    onto objects above and beyond those absolutely needed.
>>>
>>> 2. Bottom up. Scan your recent backup and get an instance count
>>>    report[2] that will tell you what class of object is clogging up
>>>    your database ... Perhaps you'll recognize a big runner or two and
>>>    know where to look to drop the references. If not, we'll have to
>>>    pick a suspicious class, list the instances of that class, and then
>>>    use Repository>>listReferences: to work our way back to a known
>>>    root (see the sketch below) and then NUKE THE SUCKER :)
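>>>
>>> For that last step, a rough sketch of the bottom-up walk
>>> (SomeSuspectClass is just a placeholder for whichever class the
>>> report flags; this assumes listInstances: and listReferences: each
>>> answer one collection per element of their argument):
>>>
>>>     | instances sample referers |
>>>     instances := (SystemRepository listInstances: (Array with: SomeSuspectClass)) first.
>>>     sample := instances copyFrom: 1 to: (10 min: instances size).
>>>     referers := SystemRepository listReferences: sample.
>>>     referers inspect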
>>
>>
>> Ok, here is my instance count report. RcCounterElement is a huge
>> winner here; I have no idea why. #63 PracticeJournalLoginTask
>> and #65 PracticeJournalSession also come up a lot, so perhaps
>> these are being held onto somewhere.
>>
>> rank  instances  class
>>   1  338,955,617  RcCounterElement
>>   2  17,607,121  RcCollisionBucket
>>   3  7,683,895  Association
>>   4  2,142,624  String
>>   5  2,126,557  WAValueHolder
>>   6  1,959,784  VariableContext
>>   7  1,629,389  CollisionBucket
>>   8  1,464,171  Dictionary
>>   9  1,339,617  KeyValueDictionary
>>  10  1,339,616  Set
>>  11  1,243,135  OrderedCollection
>>  12  1,116,296  Array
>>  13  951,872  ComplexBlock
>>  14  943,639  ComplexVCBlock
>>  15  781,212  IdentityDictionary
>>  16  673,104  IdentityCollisionBucket
>>  17  666,407  WAUserConfiguration
>>  18  664,701  WAAttributeSearchContext
>>  19  338,617  RcCounter
>>  20  338,617  WARcLastAccessEntry
>>  21  332,017  RcKeyValueDictionary
>>  22  230,240  WAValueCallback
>>  23  226,002  WARequestFields
>>  24  226,002  WAUrl
>>  25  223,641  GRSmallDictionary
>>  26  221,821  GRDelayedSend
>>  27  221,821  GRUnboundMessage
>>  28  220,824  GsStackBuffer
>>  29  219,296  WAImageCallback
>>  30  187,813  Date
>>  31  176,258  MCMethodDefinition
>>  32  146,263  WAActionCallback
>>  33  113,114  WARenderCanvas
>>  34  113,039  WAMimeType
>>  35  113,003  WADocumentHandler
>>  36  113,003  WAMimeDocument
>>  37  113,001  WARenderVisitor
>>  38  113,001  WAActionPhaseContinuation
>>  39  113,001  WACallbackRegistry
>>  40  113,001  WARenderingGuide
>>  41  113,001  WARenderContext
>>  42  112,684  WASnapshot
>>  43  110,804  IdentityBag
>>  44  110,720  TransientValue
>>  45  110,710  WAToolDecoration
>>  46  110,672  TransientMutex
>>  47  110,670  WAGemStoneMutex
>>  48  110,670  WARcLastAccessExpiryPolicy
>>  49  110,670  WACache
>>  50  110,670  WANoReapingStrategy
>>  51  110,670  WACacheMissStrategy
>>  52  110,670  WANotifyRemovalAction
>>  53  110,640  WATimingToolFilter
>>  54  110,640  WADeprecatedToolFilter
>>  55  110,489  WAAnswerHandler
>>  56  110,422  WADelegation
>>  57  110,412  WAPartialContinuation
>>  58  110,412  GsProcess
>>  59  109,773  UserPersonalInformation
>>  60  109,712  Student
>>  61  109,295  WATaskVisitor
>>  62  109,285  UserLoginView
>>  63  109,285  PracticeJournalLoginTask
>>  64  109,259  WAValueExpression
>>  65  109,215  PracticeJournalSession
>>  66  56,942  Time
>>  67  54,394  GsMethod
>>  68  53,207  MCVersionInfo
>>  69  53,207  UUID
>>  70  45,927  MethodVersionRecord
>>  71  41,955  MethodBookExercise
>>  72  37,223  Symbol
>>  73  29,941  MCInstanceVariableDefinition
>>  74  21,828  MCClassDefinition
>>  75  19,291  SymbolAssociation
>>  76  18,065  PracticeDay
>>  77  17,218  GsMethodDictionary
>>  78  16,617  MusicalPiece
>>  79  16,609  SymbolSet
>>  80  11,160  FreeformExercise
>>  81  8,600  SymbolDictionary
>>  82  7,537  DateAndTime
>>  83  6,812  Duration
>>  84  6,288  Month
>>  85  6,288  PracticeMonth
>>  86  4,527  WAHtmlAttributes
>>  87  4,390  DateTime
>>  88  4,247  Metaclass
>>  89  4,190  WAGenericTag
>>  90  4,142  SimpleBlock
>>  91  4,136  WATableColumnTag
>>  92  4,136  WACheckboxTag
>>  93  4,029  Composer
>>  94  3,682  RcIdentityBag
>>  95  3,428  ClassHistory
>>  96  3,010  PracticeSession
>>  97  2,185  MCClassVariableDefinition
>>  98  2,017  CanonStringBucket
>>  99  1,986  MethodBook
>> 100  1,974  WARenderPhaseContinuation
>> 101  1,965  PurchaseOptionInformation
>> 102  1,843  AmazonPurchase
>> 103  1,796  GsDocText
>> 104  1,513  GsClassDocumentation
>> 105  1,508  209409
>> 106  1,425  WASession
>> 107  1,218  UserInformationInterface
>> 108  1,134  WAValuesCallback
>> 109  1,125  WACancelActionCallback
>> 110  751  DepListBucket
>> 111  738  Pragma
>> 112  716  LessonTaskRecording
>> 113  693  UserForgotPasswordView
>> 114  629  MusicalPieceRepertoireItem
>> 115  524  PracticeYear
>> 116  524  Year
>> 117  483  MCOrganizationDefinition
>> 118  480  Repertoire
>> 119  467  MCPackage
>> 120  440  MultiplePageDisplayView
>> 121  403  MethodBookExerciseRepertoireItem
>> 122  352  UserCalendar
>> 123  334  MetacelloValueHolderSpec
>> 124  333  MCVersion
>> 125  333  MCSnapshot
>> 126  313  TimeZoneTransition
>> 127  307  Color
>> 128  269  NumberGenerator
>> 129  216  UserCommunityInformation
>> 130  206  IdentitySet
>> 131  200  RcQueueSessionComponent
>> 132  199  FreeformExerciseRepertoireItem
>> 133  191  WAHtmlCanvas
>> 134  187  PackageInfo
>> 135  182  InvariantArray
>> 136  176  MCRepositoryGroup
>> 137  175  MCWorkingCopy
>> 138  175  MCWorkingAncestry
>> 139  157  PracticeSessionInputView
>> 140  149  MetacelloPackageSpec
>> 141  139  MetacelloRepositoriesSpec
>> 142  132  WAMetaElement
>> 143  131  MCClassInstanceVariableDefinition
>> 144  117  MetacelloMergeMemberSpec
>> 145  106  YouTubeVideoResource
>> 146  101  MusicalPieceRepertoireItemInputView
>> 147  99  UserCommentsView
>> 148  96  LessonTasksView
>> 149  96  LessonTaskView
>> 150  94  WATableTag
>> 151  91  MetacelloMCVersion
>> 152  87  PracticeSessionView
>> 153  81  SortedCollection
>> 154  78  MetacelloMCVersionSpec
>> 155  78  MetacelloVersionNumber
>> 156  78  MetacelloPackagesSpec
>> 157  77  DateRange
>> 158  77  PracticeSessionsView
>> 159  70  MethodBooksView
>> 160  67  UserCalendarView
>> 161  67  PracticeJournalMiniCalendar
>> 162  66  PracticeDayView
>> 163  65  MetacelloAddMemberSpec
>> 164  64  WATextInputTag
>> 165  61  Teacher
>> 166  61  MCPoolImportDefinition
>> 167  61  MCHttpRepository
>> 168  60  CheckScreenNameAvailability
>> 169  58  UserRepertoireItemsView
>> 170  58  UserRepertoireView
>> 171  58  PrivateLesson
>> 172  57  UserRepertoireItemsSummaryView
>> 173  53  MetacelloMCProjectSpec
>> 174  53  MetacelloProjectReferenceSpec
>> 175  52  WAListAttribute
>> 176  48  TimedActivitiesInformationServer
>> 177  46  PracticeSessionTemplate
>> 178  46  WriteStream
>> 179  44  WAFormTag
>> 180  39  UserInstrumentsInputView
>> 181  39  MetacelloRepositorySpec
>> 182  35  UserInstrumentsInputViewGenerator
>> 183  35  CreateLessonTaskRecordingInterface
>> 184  32  WASelectTag
>> 185  32  WADateInput
>> 186  30  WAApplication
>> 187  30  UserComment
>> 188  30  WAExceptionFilter
>> 189  29  WADispatchCallback
>> 190  29  WARadioGroup
>> 191  28  DecimalFloat
>> 192  27  JSStream
>> 193  26  MethodBookExerciseRepertoireItemInputView
>> 194  26  WAStringAttribute
>> 195  24  WAOpeningConditionalComment
>> 196  24  WAScriptElement
>> 197  24  WAClosingConditionalComment
>> 198  24  PracticeSessionTemplateInputView
>> 199  24  WALinkElement
>> 200  23  UserInformationView
>>
>>
>> Larry
>>
>>
>>>
>>> Dale
>>>
>>> [1] https://code.google.com/p/glassdb/wiki/ClearPersistentCaches
>>> [2] https://programminggems.wordpress.com/2009/12/15/scanbackup-2/
>>> On 03/31/2015 05:35 AM, Lawrence Kellogg wrote:
>>>>
>>>>> On Mar 30, 2015, at 6:24 PM, Dale Henrichs
>>>>> <dale.henrichs at gemtalksystems.com
>>>>> <mailto:dale.henrichs at gemtalksystems.com>> wrote:
>>>>>
>>>>> The initial MFC gave you (pre-backup):
>>>>>
>>>>> 390,801,691 live objects with 23,382,898 dead
>>>>>
>>>>> The second MFC gave you (post-backup):
>>>>>
>>>>> 391,007,811 live objects with 107 dead
>>>>>
>>>>> Which means that we did not gain nearly as much as anticipated
>>>>> by cleaning up the seaside session state and object log ... so
>>>>> something else is hanging onto a big chunk of objects ...
>>>>>
>>>>> So yes at this point there is no need to consider a backup and
>>>>> restore to shrink extents until we can free up some more
>>>>> objects ...
>>>>>
>>>>> I've got to head out on an errand right now, so I can't give
>>>>> you any detailed pointers, to the techniques to use for
>>>>> finding the nasty boy that is hanging onto the "presumably
>>>>> dead objects" ...
>>>>>
>>>>> I am a bit suspicious that the Object Log might still be alive
>>>>> and kicking, so I think you should verify by inspecting the
>>>>> ObjectLog collections ... poke around on the class side ... if
>>>>> you find a big collection (and it blows up your TOC if you try
>>>>> to look at it), then look again at the class-side methods and
>>>>> make sure that you nuke the RcQueue and the OrderedCollection
>>>>> ... close down/log out your vms, and then run another mfc to
>>>>> see if you gained any ground ...
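>>>>>
>>>>> A rough workspace sketch for that check (the class-side accessor
>>>>> names here are my guess at the usual ObjectLogEntry pattern, so
>>>>> use whatever the class-side browser actually shows):
>>>>>
>>>>>     Transcript cr; show: 'log: ' , ObjectLogEntry objectLog size printString.
>>>>>     Transcript cr; show: 'queue: ' , ObjectLogEntry objectQueue size printString.
>>>>>     "if either is huge, reset both and commit
>>>>>      (note the historical spelling of the selector):"
>>>>>     ObjectLogEntry initalize.
>>>>>     System commitTransaction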
>>>>
>>>>
>>>> Well, the ObjectLog collection on the class side of
>>>> ObjectLogEntry is empty, and the ObjectQueue class variable has:
>>>>
>>>> <Mail Attachment.png>
>>>>
>>>>
>>>> Is it necessary to reinitialize the ObjectQueue?
>>>>
>>>> Is there some report I can run that will tell me what is
>>>> holding onto so much space?
>>>>
>>>> Best,
>>>>
>>>> Larry
>>>>
>>>>
>>>>>
>>>>> Dale
>>>>>
>>>>> On 03/30/2015 02:57 PM, Lawrence Kellogg wrote:
>>>>>>
>>>>>>> On Mar 30, 2015, at 12:28 PM, Dale Henrichs
>>>>>>> <dale.henrichs at gemtalksystems.com
>>>>>>> <mailto:dale.henrichs at gemtalksystems.com>> wrote:
>>>>>>>
>>>>>>> Okay,
>>>>>>>
>>>>>>> I guess you made it through the session expirations okay, and
>>>>>>> according to the MFC results it does look like you did get
>>>>>>> rid of a big chunk of objects ... Presumably the backup was
>>>>>>> made before the vote on the possible dead was finished, so
>>>>>>> the backup would not have been able to skip all of the dead
>>>>>>> objects (until the vote was completed) ... there's also an
>>>>>>> outside chance that the vm used to expire the sessions would
>>>>>>> have voted down some of the possible dead if it was still
>>>>>>> logged in when the backup was made ...
>>>>>>>
>>>>>>> So we need to find out what's going on in the new extent ...
>>>>>>> so do another mfc and send me the results
>>>>>>
>>>>>>
>>>>>> Ok, I made it through another mark for collection and here is
>>>>>> the result:
>>>>>>
>>>>>> <Mail Attachment.png>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> Am I wrong in thinking that the file size of the extent will
>>>>>> not shrink? It certainly has not shrunk much.
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>>
>>>>>>> In the new extent, run the MFC again, and provide me with
>>>>>>> the results ... include an `Admin>>DoIt>>File Size Report`.
>>>>>>> Then logout of GemTools and stop/start any other seaside
>>>>>>> servers or maintenance vms that might be running ...
>>>>>>>
>>>>>>
>>>>>> Here is the file size report before the mark for collection
>>>>>>
>>>>>> Extent #1
>>>>>> -----------
>>>>>> Filename =
>>>>>> !TCP at localhost#dir:/opt/gemstone/product/seaside/data#log://opt/gemstone/log/%N%P.log#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
>>>>>>
>>>>>> File size = 23732.00 Megabytes
>>>>>> Space available = 3478.58 Megabytes
>>>>>>
>>>>>> Totals
>>>>>> ------
>>>>>> Repository size = 23732.00 Megabytes
>>>>>> Free Space = 3478.58 Megabytes
>>>>>>
>>>>>> and after
>>>>>>
>>>>>> Extent #1
>>>>>> -----------
>>>>>> Filename =
>>>>>> !TCP at localhost#dir:/opt/gemstone/product/seaside/data#log://opt/gemstone/log/%N%P.log#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
>>>>>>
>>>>>> File size = 23732.00 Megabytes
>>>>>> Space available = 3476.47 Megabytes
>>>>>>
>>>>>> Totals
>>>>>> ------
>>>>>> Repository size = 23732.00 Megabytes
>>>>>> Free Space = 3476.47 Megabytes
>>>>>>
>>>>>>
>>>>>>
>>>>>> I await further instructions.
>>>>>>
>>>>>> Best,
>>>>>>
>>>>>> Larry
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>> By the time we exchange emails, the vote should have a
>>>>>>> chance to complete this time ... but I want to see the
>>>>>>> results of the MFC and File Size Report before deciding what
>>>>>>> to do next ...
>>>>>>>
>>>>>>> Dale
>>>>>>>
>>>>>>> On 03/30/2015 07:30 AM, Lawrence Kellogg wrote:
>>>>>>>> Hello Dale,
>>>>>>>>
>>>>>>>> Well, I went through the process as described below, but
>>>>>>>> have not seen my extent shrink appreciably, so I am puzzled.
>>>>>>>> Here is the screenshot after the mark for collection. Do I
>>>>>>>> have to do something to reclaim the dead objects? Does the
>>>>>>>> maintenance gem need to be run?
>>>>>>>>
>>>>>>>>
>>>>>>>> <Mail Attachment.png>
>>>>>>>>
>>>>>>>> After the ObjectLog init and mark, I did a restore into a
>>>>>>>> fresh extent.
>>>>>>>>
>>>>>>>> Here is the size of the new extent vs the old, saved extent:
>>>>>>>>
>>>>>>>> <Mail Attachment.png>
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> Thoughts?
>>>>>>>>
>>>>>>>> Larry
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>> On Mar 25, 2015, at 2:15 PM, Dale Henrichs
>>>>>>>>> <dale.henrichs at gemtalksystems.com
>>>>>>>>> <mailto:dale.henrichs at gemtalksystems.com>> wrote:
>>>>>>>>>
>>>>>>>>> Okay, here's the sequence of steps that I think you should
>>>>>>>>> take:
>>>>>>>>>
>>>>>>>>> 1. expire all of your sessions:
>>>>>>>>>
>>>>>>>>>     | expired |
>>>>>>>>>     Transcript cr; show: 'Unregistering...' , DateAndTime now printString.
>>>>>>>>>     expired := WABasicDevelopment reapSeasideCache.
>>>>>>>>>     expired > 0
>>>>>>>>>       ifTrue: [ (ObjectLogEntry trace: 'MTCE: expired sessions' object: expired) addToLog ].
>>>>>>>>>     Transcript cr; show: '...Expired: ' , expired printString , ' sessions.'.
>>>>>>>>>     System commitTransaction
>>>>>>>>>
>>>>>>>>> 2. initialize your object log
>>>>>>>>>
>>>>>>>>> 3. run MFC
>>>>>>>>>
>>>>>>>>> [
>>>>>>>>> System abortTransaction.
>>>>>>>>> SystemRepository markForCollection ]
>>>>>>>>> on: Warning
>>>>>>>>> do: [ :ex |
>>>>>>>>> Transcript
>>>>>>>>> cr;
>>>>>>>>> show: ex description.
>>>>>>>>> ex resume ]
>>>>>>>>>
>>>>>>>>> 4. Then do a backup and restore ... you can use GemTools to do
>>>>>>>>>    the restore, but you should read the SysAdmin docs[1] for
>>>>>>>>>    instructions on doing the restore (I've enclosed a link to
>>>>>>>>>    the 3.2 docs, but the procedure and commands should be
>>>>>>>>>    pretty much the same; it's best to look up the docs for your
>>>>>>>>>    GemStone version[2] and follow those instructions). A sketch
>>>>>>>>>    of the backup call follows below.
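>>>>>>>>>
>>>>>>>>> For the backup half, something like this should do (the target
>>>>>>>>> path is only an example; point it at a filesystem with enough
>>>>>>>>> room for the backup file):
>>>>>>>>>
>>>>>>>>>     SystemRepository fullBackupTo: '/opt/gemstone/backups/extent0.full.bak'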
>>>>>>>>>
>>>>>>>>> As I mentioned earlier, it will probably take a while for
>>>>>>>>> each of these operations to complete (object log will be
>>>>>>>>> fast and the backup will be fast, if the mfc tosses out
>>>>>>>>> the majority of your data) and it is likely that the
>>>>>>>>> repository will grow some more during the process (hard to
>>>>>>>>> predict this one, tho).
>>>>>>>>>
>>>>>>>>> Step 1 will touch every session and every continuation, so
>>>>>>>>> it is hard to say what percentage of the objects are going
>>>>>>>>> to be touched (the expensive part); still, there are likely
>>>>>>>>> to be a lot of those puppies and they will have to be read
>>>>>>>>> from disk into the SPC ...
>>>>>>>>>
>>>>>>>>> Step 3 is going to scan all of the live objects, and again
>>>>>>>>> it's hard to predict exactly how expensive it will be ...
>>>>>>>>>
>>>>>>>>> Dale
>>>>>>>>>
>>>>>>>>> [1]
>>>>>>>>> http://downloads.gemtalksystems.com/docs/GemStone64/3.2.x/GS64-SysAdmin-3.2/GS64-SysAdmin-3.2.htm
>>>>>>>>> [2] http://gemtalksystems.com/techsupport/resources/
>>>>>>>>>
>>>>>>>>> On 3/25/15 10:18 AM, Lawrence Kellogg wrote:
>>>>>>>>>> Hello Dale,
>>>>>>>>>>
>>>>>>>>>> Thanks for the help. I’m a terrible system admin when
>>>>>>>>>> it comes to maintaining a system with one user, LOL.
>>>>>>>>>>
>>>>>>>>>> I’m not running the maintenance VM and I haven’t been
>>>>>>>>>> doing regular mark for collects.
>>>>>>>>>>
>>>>>>>>>> I’m trying to do a fullBackupTo: at the moment; we’ll
>>>>>>>>>> see if I get through that. Should I have done a
>>>>>>>>>> markForCollection before the full backup?
>>>>>>>>>>
>>>>>>>>>> I’ll also try the ObjectLog trick.
>>>>>>>>>>
>>>>>>>>>> I guess I need to start from a fresh extent, as you
>>>>>>>>>> said, and the extent file will not shrink. I’m at 48% of
>>>>>>>>>> my available disk space but it does seem slower than usual.
>>>>>>>>>>
>>>>>>>>>> Best,
>>>>>>>>>>
>>>>>>>>>> Larry
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>> On Mar 25, 2015, at 12:58 PM, Dale Henrichs via Glass
>>>>>>>>>>> <glass at lists.gemtalksystems.com
>>>>>>>>>>> <mailto:glass at lists.gemtalksystems.com>> wrote:
>>>>>>>>>>>
>>>>>>>>>>> Lawrence,
>>>>>>>>>>>
>>>>>>>>>>> Are you doing regular Mark for collects? Are you running
>>>>>>>>>>> the maintenance vm along with your seaside servers?
>>>>>>>>>>>
>>>>>>>>>>> Seaside produces persistent garbage (persistent session
>>>>>>>>>>> state that eventually times out) when it processes
>>>>>>>>>>> requests, so if you do not run the maintenance vm the
>>>>>>>>>>> sessions are not expired, and if you do not run an mfc
>>>>>>>>>>> regularly the expired sessions are not cleaned up ...
>>>>>>>>>>>
>>>>>>>>>>> Another source of growth could be the Object Log ...
>>>>>>>>>>> (use `ObjectLogEntry initalize` to efficiently reset the
>>>>>>>>>>> Object Log ... pay attention to the misspelling of the
>>>>>>>>>>> selector ... that's another story). If you are getting
>>>>>>>>>>> continuations saved to the object log, the stacks that
>>>>>>>>>>> are saved can hang onto a lot of session state that, even
>>>>>>>>>>> though expired, will not be garbage collected, because
>>>>>>>>>>> references from the continuations in the object log keep
>>>>>>>>>>> it alive ...
>>>>>>>>>>>
>>>>>>>>>>> The best way to shrink your extent (once we understand
>>>>>>>>>>> why it is growing) is to do a backup and then restore
>>>>>>>>>>> into a virgin extent ($GEMSTONE/bin/extent0.seaside.dbf)...
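>>>>>>>>>>>
>>>>>>>>>>> Roughly, once the stone is restarted on a copy of that
>>>>>>>>>>> virgin extent, the restore side looks like this (example
>>>>>>>>>>> path; check the SysAdmin docs for your version before
>>>>>>>>>>> running it):
>>>>>>>>>>>
>>>>>>>>>>>     SystemRepository restoreFromBackup: '/opt/gemstone/backups/extent0.full.bak'.
>>>>>>>>>>>     "and once the restore has finished:"
>>>>>>>>>>>     SystemRepository commitRestore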
>>>>>>>>>>>
>>>>>>>>>>> Dale
>>>>>>>>>>>
>>>>>>>>>>> On 3/25/15 8:34 AM, Lawrence Kellogg via Glass wrote:
>>>>>>>>>>>> Well, Amazon sent me a note that they are having
>>>>>>>>>>>> hardware trouble on my instance, so they shut it down.
>>>>>>>>>>>> It looks like they’re threatening to take the thing
>>>>>>>>>>>> offline permanently so I’m trying to save my work with
>>>>>>>>>>>> an AMI and move it somewhere else, if I have to.
>>>>>>>>>>>>
>>>>>>>>>>>> I finally got GemStone/Seaside back up and running and
>>>>>>>>>>>> noticed these lines in the Seaside log file. These kinds
>>>>>>>>>>>> of messages have gone on once a day for weeks. Is this normal?
>>>>>>>>>>>>
>>>>>>>>>>>> --- 03/07/2015 02:44:14 PM UTC ---
>>>>>>>>>>>> Extent =
>>>>>>>>>>>> !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
>>>>>>>>>>>> has grown to 22528 megabytes.
>>>>>>>>>>>> Repository has grown to 22528 megabytes.
>>>>>>>>>>>>
>>>>>>>>>>>> Extent =
>>>>>>>>>>>> !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
>>>>>>>>>>>> has grown to 22544 megabytes.
>>>>>>>>>>>> Repository has grown to 22544 megabytes.
>>>>>>>>>>>>
>>>>>>>>>>>> --- 03/08/2015 03:31:45 PM UTC ---
>>>>>>>>>>>> Extent =
>>>>>>>>>>>> !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
>>>>>>>>>>>> has grown to 22560 megabytes.
>>>>>>>>>>>> Repository has grown to 22560 megabytes.
>>>>>>>>>>>>
>>>>>>>>>>>> Extent =
>>>>>>>>>>>> !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
>>>>>>>>>>>> has grown to 22576 megabytes.
>>>>>>>>>>>> Repository has grown to 22576 megabytes.
>>>>>>>>>>>>
>>>>>>>>>>>> --- 03/10/2015 03:19:34 AM UTC ---
>>>>>>>>>>>> Extent =
>>>>>>>>>>>> !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
>>>>>>>>>>>> has grown to 22592 megabytes.
>>>>>>>>>>>> Repository has grown to 22592 megabytes.
>>>>>>>>>>>>
>>>>>>>>>>>> --- 03/10/2015 03:46:39 PM UTC ---
>>>>>>>>>>>> Extent =
>>>>>>>>>>>> !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
>>>>>>>>>>>> has grown to 22608 megabytes.
>>>>>>>>>>>> Repository has grown to 22608 megabytes.
>>>>>>>>>>>>
>>>>>>>>>>>> Extent =
>>>>>>>>>>>> !#dbf!/opt/gemstone/GemStone64Bit2.4.4.1-x86_64.Linux/seaside/data/extent0.dbf
>>>>>>>>>>>> has grown to 22624 megabytes.
>>>>>>>>>>>> Repository has grown to 22624 megabytes.
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> My extent has now grown to
>>>>>>>>>>>>
>>>>>>>>>>>> -rw------- 1 seasideuser seasideuser 23735566336 Mar 25
>>>>>>>>>>>> 15:31 extent0.dbf
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> I don’t get a lot of traffic so I’m a little surprised
>>>>>>>>>>>> at the growth. Should I try to shrink the extent?
>>>>>>>>>>>>
>>>>>>>>>>>> I suppose I should also do a SystemRepository backup,
>>>>>>>>>>>> if I can remember the commands.
>>>>>>>>>>>>
>>>>>>>>>>>> Best,
>>>>>>>>>>>>
>>>>>>>>>>>> Larry
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
>