[Glass] Transaction Logs so large ....

Dale Henrichs via Glass glass at lists.gemtalksystems.com
Mon Oct 24 10:26:53 PDT 2016



On 10/24/2016 04:39 AM, Otto Behrens via Glass wrote:
>> If you already know this stuff then your question would be related to
>> checkpoint commits not happening in a timely manner. How that can happen and
>> how to improve that is a much deeper discussion, but is usually remedied by
>> changing gems to not stay with old views of the database. That can mean more
>> frequent aborts for example, and starting a transaction a short time before
>> making changes to commit. Just one gem staying in-transaction for a long
>> time can hold up checkpoint commits. A developer logged into (and
>> in-transaction) a very active database can hold up checkpoints causing a
>> backlog of transactions not yet applied to extent files.
> I understand that if there are gems holding onto old views with other
> gems committing a lot, it may happen that more garbage than usual will
> end up in the extent and the gems will take longer to commit because
> they have more work to determine the write set union etc. I suppose if
> things start to overflow one could create bigger and bigger
> transaction logs because a gem would be committing temporary objects
> to the repository.
>
> If one installs proper SigAbort handlers for the gems, then the gems
> should be catching them and aborting, unless there are far more
> commits than the gems can react to via sig aborts? Is it in this
> case that the condition may occur?
>
> In our application, we start a number of gems that service Seaside
> requests. These gems (as well as the other gems we run in the
> background) all have sig abort handlers. The load balancing also
> helps to ensure that the Seaside-based application distributes work
> to idle gems. So in my view, the gems should be cooperating well.
>
> In spite of this, we still see 10GB of tranlogs created in a few hours
> under low load. (The db is around 23GB).
>
> I suppose the only way to really analyse what's going on is to run
> statmonitor and understand the output. From my previous attempts
> though, it appears as if the Seaside infrastructure is really heavy on
> transaction logs.

Otto, you are absolutely right ... one of the things that makes GemStone 
fast is that session state (mostly continuations) is not held in 
transient memory but written to disk --- at a cost in tranlog size --- 
and then MFC is used to clean up the mess ...
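
If it helps as a reference point, the cleanup step is the usual 
markForCollection run from a maintenance gem --- a minimal sketch only, 
with the preconditions (transaction mode, other sessions giving up 
their old views) left to the System Administration Guide for your 
version:

    "Give up our own view first, then run the repository-wide marking
     pass; the dead objects it finds are reclaimed afterwards by the
     stone's reclaim gems."
    System abortTransaction.
    SystemRepository markForCollection.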

It is also possible that the large tranlogs are being caused by updates 
to large sequenceable collections (like OrderedCollection and 
SortedCollection) and others ... we ship some basic tools with the 
product that can be used to analyze the content of tranlogs and, with a 
bit of work, tell us what is actually in them (continuations will show 
up as a lot of GsProcess instances, for example) so we can start to get 
a handle on the problem ...
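
As a rough illustration of the sequenceable-collection case (again a 
sketch only --- how much of the body actually lands in the tranlog 
depends on how the collection is laid out in your version, and #BigColl 
is just a made-up name):

    | big |
    "A small logical change to a big OrderedCollection can dirty far
     more than one slot: addFirst: shifts every element, so most of the
     body can end up in the next tranlog entry."
    big := (1 to: 100000) asOrderedCollection.
    UserGlobals at: #BigColl put: big.
    System commitTransaction.    "baseline commit of the whole collection"

    big addFirst: 0.             "one element added at the front ..."
    System commitTransaction.    "... can still produce a large tranlog entry"

    "Appending with add:, or using a reduced-conflict class such as
     RcIdentityBag where ordering is not needed, usually dirties much
     less per commit."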

Again, as a group effort I can help with instructions on doing the 
analysis and then we can see what can be done ... we have done some work 
on reducing the size of tranlogs for some types of data, but not all, so 
getting information from you guys would help us understand where to put 
our emphasis --- whether it be on adding additional features to GemStone, 
fixing bugs, or working on notTranlogged session state for Seaside.
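
In the meantime, for the SigAbort handling that Otto asks about above, 
here is a rough sketch of one way a gem's main loop can be set up. It 
is written from memory for GemStone/S 64 3.x --- in particular the 
TransactionBacklog exception class, the #manualBegin transaction mode, 
and the System selectors should be checked against the Programming 
Guide for your version:

    "Sketch: run the gem in manual transaction mode so it does not sit
     on an old view between requests, begin a transaction only shortly
     before making changes, and abort promptly when the stone signals."
    System transactionMode: #manualBegin.
    System enableSignaledAbortError.

    [ true ] whileTrue: [
        [ "wait for the next request while outside of a transaction" ]
            on: TransactionBacklog
            do: [ :ex |
                System abortTransaction.          "drop the old view"
                System enableSignaledAbortError.  "re-arm for the next signal"
                ex return ].
        System beginTransaction.                  "start just before the changes"
        "... service the request, modify persistent objects ..."
        System commitTransaction ].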

Dale

