[Glass] Transaction Logs so large ....
Dale Henrichs via Glass
glass at lists.gemtalksystems.com
Mon Oct 24 12:06:47 PDT 2016
Well, Richard pointed out that I did the math wrong at several steps ---
it's good to show your work (part of being clear :), so that others can
point out your mistakes.
The math should have been:
1GB in 5 minutes is about 3.4 MB of tranlog data per second, or roughly
170 KB per event (at 20 events/second).
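
Spelled out as a quick workspace check (treating 1 GB as 1024*1024*1024
bytes and 5 minutes as 300 seconds):

    | perSecond perEvent |
    perSecond := (1024 * 1024 * 1024) / 300.0.  "roughly 3.4 MB of tranlog data per second"
    perEvent := perSecond / 20.0.               "roughly 170 KB per event at 20 events/second"
    Array with: perSecond with: perEvent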
That may or may not be excessive, depending upon what you are doing in
your application...
Dale
On 10/24/2016 11:34 AM, Dale Henrichs wrote:
> Marten,
>
> To be clear.
>
> 1GB in 5 minutes is 60Mb of tranlog data per second. At 20
> events/second you are producing 3Mb per event ... is that reasonable
> given the application data you are saving per event from an
> application perspective?
>
> If you still think that is excessive, then I think submitting an HR
> is called for. Tech support can walk you through the tranlog analysis
> steps and help determine whether what you are experiencing is a
> problem that has already been fixed (the other customer with tranlog
> growth problems) or whether you are seeing "normal growth" --- we are
> aware of additional optimizations that can be made for tranlogs, but
> those optimizations are not yet available. There is also the
> possibility of switching to another data structure, or you may even
> be persisting more data than you expect --- without analysis we can
> only guess.
>
> I'm not sure that you need to change the SigAbort handler that we
> talked about at ESUG. I am under the impression that you are not
> concerned about extent growth, and if so, then that SigAbort handler
> is doing its job.
>
> Dale
>
>
> On 10/24/2016 11:12 AM, Dale Henrichs wrote:
>>
>>
>> On 10/24/2016 06:12 AM, Marten Feldtmann via Glass wrote:
>>>
>>> Concerning transactions: well, all Topaz scripts (starting the Zinc
>>> server) are started in #manualBegin transactionMode. If they are not
>>> in that state, handling the SigAbort handlers is more difficult.
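>>>
>>> In outline that setup is something like this (a minimal sketch):
>>>
>>>     "switch this Gem to manual transaction mode before starting the
>>>      Zinc server; after this the Gem stays outside a transaction
>>>      until an explicit System beginTransaction"
>>>     System transactionMode: #manualBegin.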
>>>
>> SigAbort handling has nothing to do with tranlog growth, so don't
>> mess with this ...
>>>
>>> Other than that: after an incoming request a beginTransaction is
>>> done, the work is done, and a commit or abort is then done ... then
>>> the Gem is again out of a transaction - and then it should not be a
>>> candidate for the SigAbort handler (I'm not sure of that last
>>> statement).
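>>>
>>> Roughly like this (a simplified sketch; #handleRequest: is a
>>> stand-in for the real Zinc dispatch):
>>>
>>>     | result |
>>>     System beginTransaction.
>>>     [ result := self handleRequest: aRequest.   "stand-in for the real work"
>>>       System commitTransaction
>>>         ifFalse: [ System abortTransaction ] ]
>>>             on: Error
>>>             do: [ :ex | System abortTransaction. ex return: nil ].
>>>     "after the commit or abort the Gem is outside a transaction again"
>>>     ^ result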
>>>
>> Again ... SigAbort handling does not affect the amount of data that
>> is dumped to a tranlog.
>>>
>>> Conflicts - yes, initially lots of them. After that we introduced
>>> retries on the server side and now conflicts are not very frequent -
>>> though the external programmers should always account for them
>>> (... and it's not done).
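>>>
>>> The retry is essentially this (a sketch; the attempt limit and
>>> #performWork are illustrative):
>>>
>>>     | attempts committed |
>>>     attempts := 0.
>>>     committed := false.
>>>     [ committed or: [ attempts >= 5 ] ] whileFalse: [
>>>         attempts := attempts + 1.
>>>         System beginTransaction.
>>>         self performWork.   "stand-in for the actual request work"
>>>         committed := System commitTransaction.
>>>         committed ifFalse: [ System abortTransaction ] ].   "refresh the view before retrying"
>>>     ^ committed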
>>>
>>> Other conflicts cannot be solved in this way and need a different
>>> API behaviour (normally implemented with an RcQueue and worker Topaz
>>> processes on the server side). The offered API then returns a TASK
>>> id and the programmer has to watch their task ... long-running logic
>>> tasks can then be programmed in a conflict-free way on the server
>>> side.
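>>>
>>> In outline (a sketch only; TaskQueue, TaskRecord, #taskId and #run
>>> are illustrative names, and it assumes RcQueue's add:/removeAll
>>> protocol):
>>>
>>>     "API Gem: enqueue the work and hand the caller a task id"
>>>     | task |
>>>     task := TaskRecord new.   "illustrative task object"
>>>     TaskQueue add: task.      "an RcQueue held in a well-known global"
>>>     ^ task taskId
>>>
>>>     "worker Topaz Gem: periodically drain the queue and run the tasks"
>>>     TaskQueue removeAll do: [ :each | each run ]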
>>>
>>> For logic conflicts we have a revision counter in the objects.
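>>>
>>> For example (a sketch; #revision and the error text are
>>> illustrative):
>>>
>>>     incomingRevision = anObject revision
>>>         ifFalse: [ ^ self error: 'stale object - re-read and retry' ].
>>>     anObject revision: anObject revision + 1.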
>>>
>>> (We use an API/HTTP/REST/RPC-oriented way of programming the
>>> GemStone/S database when interacting with it from C#, Python or
>>> JavaScript.)
>>>
>>
>> Since you are not using Seaside we cannot blame continuations, so we
>> have to look for other sources of tranlog growth ...
>>
>> Do you have large SortedCollections or other SequenceableCollection
>> subclasses where you insert data into the middle of the collection?
>> If you do, we write tranlog entries for the shift of the data in
>> these kinds of collections ...
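>>
>> For example (illustrative names), an add that lands in the middle of
>> a large SortedCollection shifts the elements behind it, and that
>> shift is what gets written to the tranlog:
>>
>>     "timeline is a large SortedCollection sorted by timestamp"
>>     timeline add: eventWithAnOldTimestamp.
>>     "the new element sorts into the middle, so the elements after it
>>      are shifted"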
>>
>> If you use indexes, it is possible for the RcIndexDictionary
>> collision buckets to get large, and they behave like a
>> SequenceableCollection when it comes to inserts ...
>>
>> I think the best bet here is to first identify what's being dumped
>> into your tranlogs and then go from there ... we've recently had
>> another customer with tranlog growth issues, and the engineer who
>> handled that is out of the office ... This would actually be a good
>> case for submitting an HR and then working directly with tech support
>> on this issue, so they can help you identify the source of the
>> tranlog growth and work out strategies for reducing or otherwise
>> addressing the problem.
>>
>> Dale
>