[Glass] Run-away tranlogs

Dale Henrichs via Glass glass at lists.gemtalksystems.com
Tue Dec 22 11:30:23 PST 2015


If you aren't changing code and you haven't changed versions of 
GemStone, then the increased tranlog volume has to be a result of your 
traffic ...

I see that you are expiring something like 30 session objects/minute, 
which comes to roughly 45k continuations/day (30/minute * 60 * 24 is 
about 43k).

With 5GB of tranlog per day and assuming 100 bytes/object, you get 
something like 50M objects written to tranlogs/day ... divide that by 
45k continuations/day and you get ~1000 objects/continuation ...

Now, considering that a continuation is basically a copy of a Smalltalk 
stack including all temps ... 1000 objects referenced from a stack is 
not unreasonable ...

Coming at it from a different direction, each MFC (markForCollection) 
appears to yield between 2M and 10M dead objects. You didn't include 
timestamps, but if an MFC is running once an hour and you average 2M 
dead objects/hour, then you get roughly 50M dead objects per day and 
5GB of tranlogs ....

It doesn't look like your extent is growing quite as fast as the tranlog 
data, so given the MFC yields, you appear to be generating more 
transient data than you used to ....

If your traffic hasn't significantly increased (by a factor of 20?), 
then something must have changed in your application or in the way that 
the application is being used?

Dale


On 12/22/2015 05:35 AM, Tobias Pape via Glass wrote:
> Dear all
>
> I've got an old GemStone64Bit2.4.4.7-x86_64.Linux installation which,
> with a few updates, has been running since 2010. Recently I ran into the
> problem of run-away transaction logs. That is, I now get around 4..5 1GB
> tranlogs _per day_, whereas just two months ago I got at most around
> 2..3 1GB tranlogs per week, typically less.
>
> -rw-r--r-- 1 gemstone nogroup  1048571392 Dec 17 04:39 tranlog10.dbf
> -rw-r--r-- 1 gemstone nogroup  1048432640 Dec 17 07:31 tranlog11.dbf
> -rw-r--r-- 1 gemstone nogroup  1048516096 Dec 17 10:27 tranlog12.dbf
> -rw-r--r-- 1 gemstone nogroup  1048549888 Dec 17 13:06 tranlog13.dbf
> -rw-r--r-- 1 gemstone nogroup  1048527360 Dec 17 22:35 tranlog14.dbf
> -rw-r--r-- 1 gemstone nogroup  1048424960 Dec 18 01:21 tranlog15.dbf
> -rw-r--r-- 1 gemstone nogroup  1048459776 Dec 18 04:09 tranlog16.dbf
> -rw-r--r-- 1 gemstone nogroup  1048424960 Dec 18 06:56 tranlog17.dbf
> -rw-r--r-- 1 gemstone nogroup  1048440832 Dec 18 11:56 tranlog18.dbf
> -rw-r--r-- 1 gemstone nogroup  1048524800 Dec 19 00:24 tranlog19.dbf
> -rw-r--r-- 1 gemstone nogroup  1048436224 Dec 19 03:55 tranlog20.dbf
> -rw-r--r-- 1 gemstone nogroup  1048543744 Dec 19 07:04 tranlog21.dbf
> -rw-r--r-- 1 gemstone nogroup  1048472576 Dec 19 13:33 tranlog22.dbf
> -rw-r--r-- 1 gemstone nogroup  1048532992 Dec 20 03:27 tranlog23.dbf
> -rw-r--r-- 1 gemstone nogroup  1048542720 Dec 20 08:38 tranlog24.dbf
> -rw-r--r-- 1 gemstone nogroup  1048436736 Dec 20 18:29 tranlog25.dbf
> -rw-r--r-- 1 gemstone nogroup  1048503296 Dec 20 21:02 tranlog26.dbf
> -rw-r--r-- 1 gemstone nogroup  1048551936 Dec 20 23:23 tranlog27.dbf
> -rw-r--r-- 1 gemstone nogroup  1048484352 Dec 21 01:43 tranlog28.dbf
> -rw-r--r-- 1 gemstone nogroup  1048532992 Dec 21 04:09 tranlog29.dbf
> -rw-r--r-- 1 gemstone nogroup  1048475136 Dec 21 06:39 tranlog30.dbf
> -rw-r--r-- 1 gemstone nogroup  1048557568 Dec 21 08:52 tranlog31.dbf
> -rw-r--r-- 1 gemstone nogroup  1048433664 Dec 21 11:13 tranlog32.dbf
> -rw-r--r-- 1 gemstone nogroup  1048544256 Dec 21 13:35 tranlog33.dbf
> -rw-r--r-- 1 gemstone nogroup  1048478208 Dec 21 21:07 tranlog34.dbf
> -rw-r--r-- 1 gemstone nogroup  1048562688 Dec 21 23:47 tranlog35.dbf
> -rw-r--r-- 1 gemstone nogroup  1048491520 Dec 22 02:08 tranlog36.dbf
> -rw-r--r-- 1 gemstone nogroup  1048559104 Dec 22 04:31 tranlog37.dbf
> -rw-r--r-- 1 gemstone nogroup  1048495616 Dec 22 07:06 tranlog38.dbf
> -rw-r--r-- 1 gemstone nogroup  1048512512 Dec 22 09:21 tranlog39.dbf
> -rw-r--r-- 1 gemstone nogroup  1048538112 Dec 22 11:40 tranlog40.dbf
> -rw-r--r-- 1 gemstone nogroup   662727168 Dec 22 13:10 tranlog41.dbf
>
>
> Also, I recently found that I got a lot of #'Write-Write' aborts due to
> Seaside's cache reaping strategy, which used a plain count variable (I
> replaced it with an RcCounter, see attached). This reduced the number of
> aborted transactions, but I nevertheless got a 7,500-entry ObjectLog in
> just a week. Mind that 90% of those are just maintenance entries for
> expiring sessions, but the ObjectLog just keeps growing.
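>
> In essence the change is of this shape (a simplified sketch, not the
> exact attached code; everything except RcCounter itself is a made-up
> name):
>
>    "Before: a plain integer count; two gems reaping concurrently
>     get a write-write conflict on commit"
>    count := count + 1.
>
>    "After: a reduced-conflict counter; concurrent increments are
>     merged at commit time instead of conflicting"
>    count := RcCounter new.    "once, at initialization"
>    count incrementBy: 1.      "in the reaping code"
>    count value                "to read the current count"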
>
> See attached logs.
>
> How should I start investigating?
>
> Best regards
> 	-Tobias
>
>
>
> _______________________________________________
> Glass mailing list
> Glass at lists.gemtalksystems.com
> http://lists.gemtalksystems.com/mailman/listinfo/glass
