[Glass] Time to respond varies greatly (performance problems)

Dale Henrichs via Glass glass at lists.gemtalksystems.com
Thu Dec 7 14:21:23 PST 2017


Marten,

It really seems like you are running into disk i/o issues ... a detailed 
review of your statmon files should provide a pretty clear picture of 
exactly where your bottleneck(s) are occurring ... Unfortunately I don't 
really have the spare time to do a detailed analysis of your statmon 
files to pinpoint the bottlenecks ...

Assuming that disk i/o is the issue ... I would say that it is worth 
trying an 8GB cache to see if your problems are resolved --- at least 
disk i/o should be eliminated as a culprit, but there are always 
additional layers to the performance issues -- swap space, machine 
memory, number of disk partitions, etc.
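
Since swap space and machine memory are on that list: an 8GB SPC on an 
8GB machine would itself risk pushing the box into swap, so it is worth 
sanity-checking the arithmetic first. A minimal sketch (the gem count 
and per-gem memory figures are illustrative assumptions, not measured 
values):

```python
import os

def phys_ram_bytes():
    """Total physical RAM via POSIX sysconf (works on Linux)."""
    return os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")

def spc_fits(spc_gb, gem_count=8, per_gem_gb=0.5, headroom_gb=1.0):
    """Rough check: SPC + gem memory + OS headroom should stay under
    physical RAM, or the cache itself may cause swapping."""
    need = (spc_gb + gem_count * per_gem_gb + headroom_gb) * 2**30
    return need <= phys_ram_bytes()

print(phys_ram_bytes() // 2**30, "GB RAM;",
      "8GB SPC fits" if spc_fits(8) else "8GB SPC risks swapping")
```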

Speaking of disk partitions, I have seen performance issues that were 
resolved by making sure that the tranlogs and extents are on separate 
disk partitions (even if those disk partitions are virtual partitions) 
.... the underlying issue has to do with the fact that Linux prioritizes 
disk writes over disk reads, so on a system that is doing commits at a 
fast pace, the disk reads that load pages from disk into the SPC will 
be delayed --- and this phenomenon can be significant. A large SPC (at 
least as large as the DB) should fix the problem, but simply putting 
the tranlogs and extents on separate partitions can also address the 
problem...
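
One quick way to verify the separation: two paths are on the same 
partition exactly when their stat() device IDs match. A small sketch 
(the directory paths are hypothetical placeholders -- substitute your 
actual extent and tranlog locations):

```python
import os

def same_partition(path_a, path_b):
    """True iff both paths live on the same device/partition."""
    return os.stat(path_a).st_dev == os.stat(path_b).st_dev

# Hypothetical locations -- replace with your real directories.
extent_dir = "/gemstone/extents"
tranlog_dir = "/gemstone/tranlogs"
# print(same_partition(extent_dir, tranlog_dir))  # want False
```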

Another trick that may work is to increase the TOC for your Gems (you 
can have a TOC that is larger than your SPC) ... once the working set of 
objects has been faulted into a gem, there is no need to hit disk again 
to refresh the working set (except to refresh those objects changed by 
other transactions) ... so the effectiveness of this technique will be a 
function of how often the objects in your working set are changed by 
other transactions ... the downside to this approach is that it can be 
RAM hungry --- as I said there are no magic bullets and each approach 
has its downsides ...
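
For reference, both sizes are set in the GemStone configuration files; 
a sketch of the relevant parameters (the values are purely illustrative 
-- check the System Administration Guide for your version before 
changing anything):

```
# stone config: shared page cache (SPC), in KB
SHR_PAGE_CACHE_SIZE_KB = 2097152;    # 2GB

# gem config: temporary object cache (TOC), in KB --
# this may be set larger than the SPC
GEM_TEMPOBJ_CACHE_SIZE = 3000000;
```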

Dale


On 11/29/17 1:12 PM, Marten Feldtmann via Glass wrote:
>
> I tried a new license with a 4GB cache, but that did not help at all. 
> The extent is around 7GB. I noticed that the topaz processes 
> are very often in the "D" state, which means that I/O is being done - 
> when I execute that statement very often, the responding topaz process 
> does not need to go into "D" and I get the full expected speed. That's 
> an interesting point of learning.
>
> Marten
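
[The "D" (uninterruptible disk wait) state mentioned above can be 
polled from /proc on Linux; a small sketch, where the pid would be 
whichever topaz process you are watching:]

```python
import os

def proc_state(pid):
    """Return the one-letter state ('R', 'S', 'D', ...) from
    /proc/<pid>/stat (Linux only)."""
    with open(f"/proc/{pid}/stat") as f:
        # Field 3 is the state; field 2, (comm), may contain spaces,
        # so split after the closing paren rather than on whitespace.
        return f.read().rsplit(")", 1)[1].split()[0]

print(proc_state(os.getpid()))
```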
>
>> Marten Feldtmann via Glass <glass at lists.gemtalksystems.com> wrote 
>> on 29 November 2017 at 20:48:
>>
>> Well, yes, I could benefit from that - but considering that I can 
>> sort that number of items in 3 seconds on my machine when nothing 
>> else happens, while the time goes up dramatically when I work on 
>> other parts of the database and come back, I do not assume that an 
>> index would help that much!?
>>
>> Marten
>>
>>> Richard Sargent <richard.sargent at gemtalksystems.com> wrote on 29 
>>> November 2017 at 18:27:
>>>
>>> Marten, when you write "sort 300000 addresses", that is a good 
>>> indicator that you may benefit from indexes on your collection(s). I 
>>> think the Programming Guide has an entire chapter on indexes.
>>>
>>>
>>> On Nov 29, 2017 09:00, "Marten Feldtmann via Glass" 
>>> <glass at lists.gemtalksystems.com 
>>> <mailto:glass at lists.gemtalksystems.com>> wrote:
>>>
>>>     Considering the fact that I am not an expert in interpreting
>>>     the system statistics, I noticed that the time is
>>>     indeed needed for reloading pages to get access to all data
>>>     needed for the computation (PageIOCount, PageLocateCount, PageReads).
>>>
>>>     In one case I sort around 300000 addresses and this needs at
>>>     least 3 seconds - when lots of data has to be loaded (from disk)
>>>     it goes up to more than 20 seconds (and these are only early
>>>     observations) - and this on a system without load. I think on a
>>>     system with heavy load this will go up even higher.
>>>
>>>     Marten
>>>
>>>>     Marten Feldtmann via Glass <glass at lists.gemtalksystems.com
>>>>     <mailto:glass at lists.gemtalksystems.com>> wrote on 27 November
>>>>     2017 at 21:49:
>>>>
>>>>     This is a typical "any idea" question :-)
>>>>
>>>>     I'm now in the process of doing heavy performance tests and I
>>>>     notice a strange effect - the response time for a query
>>>>     varies very much and I have no idea how to find out where the
>>>>     reason for the performance problems is.
>>>>
>>>>     I have a system of 8 responding topaz processes answering http
>>>>     requests (2-core license on a 4-core CPU with 8GB RAM). The
>>>>     load tests run at around 50 transactions/second. Normally a
>>>>     specific query can be answered within 1-2 ms, but when this
>>>>     query is not executed for some time, the time needed for an
>>>>     answer increases, and I found answer times of up to
>>>>     12000 ms. The system is located on an SSD and has been defined
>>>>     to have a 2GB cache. The system has lots of transactions
>>>>     (commit and abort). If I have no transactions, the system
>>>>     answers the query within 1-2 ms. So I assume that this
>>>>     association is thrown out of the shared cache (even though a
>>>>     2GB cache is pretty much) - but how can I prove this?
>>>>
>>>>     Any further ideas on statmonitor and/or how to interpret
>>>>     the results?
>>>>
>>>>     Marten
>>>>
>>>
>>>>     _______________________________________________
>>>>     Glass mailing list
>>>>     Glass at lists.gemtalksystems.com
>>>>     http://lists.gemtalksystems.com/mailman/listinfo/glass
>>>
>
