[Glass] Trying to NOT run out of gem space while having a decent profiling sample resolution

Mariano Martinez Peck via Glass glass at lists.gemtalksystems.com
Mon May 29 15:29:09 PDT 2017


Hi guys,

Allen Otis suggested a nice idea to me: use #computeInterval: so that I can
get a decent number of samples based on the approximate total time rather
than on a fixed interval value. The problem is, of course, that the higher
the sampling rate, the more gem temp space I need. I have already increased
my gem temp space quite a bit, but for certain scenarios I am still running
out of memory.
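
In other words, instead of hard-coding an interval I derive it from the
estimated run time, something like this (a minimal sketch; I am assuming
the interval is in nanoseconds given #intervalNs:, and the 5000 is just an
example estimate):

  | profMon |
  profMon := ProfMonitorTree new.
  "computeInterval: answers an interval meant to yield a reasonable number
  of samples for the estimated run time; multiplying by 4 trades some
  resolution for less gem temp space"
  profMon intervalNs: (profMon class computeInterval: 5000) * 4.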
When memory does run out, the profiler basically outputs:

There was an error trying to profile: *a Error occurred (error 2517), 2
failed attempts to signal AlmostOutOfMemory*

So... besides the obvious solution of increasing and increasing and
increasing the gem temp space, I tried something similar to what I do with
sixx: basically, make that temp space persistent and do the work inside a
memory handler that commits under pressure.
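
The handler pattern is roughly this (a simplified sketch of the idea, not
our exact FACompatibilityUtils code; note that GemStone disables the
AlmostOutOfMemory signal once it is delivered, so the handler has to
re-arm it):

commitOnAlmostOutOfMemoryDuring: aBlock threshold: percent
  "Commit whenever temp object memory crosses the given percentage, so
  that anything reachable from a persistent root is flushed out of temp
  space."
  System signalAlmostOutOfMemoryThreshold: percent.
  ^ [ aBlock
      on: AlmostOutOfMemory
      do: [ :ex |
        System commitTransaction.
        "re-arm: the signal is disabled each time it is delivered"
        System signalAlmostOutOfMemoryThreshold: percent.
        ex resume ] ]
    ensure: [ System signalAlmostOutOfMemoryThreshold: -1 "I believe -1 disables it again" ]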

My code looks like this:

profile: aBlock estimatedTotalCpuTime: totalCpuTime tallyThreshold: tallyThreshold writingReportOn: aStream
  "Profiles aBlock, deriving the sampling interval from the estimated
  totalCpuTime, and writes the resulting report (limited to tallyThreshold
  invocations of a given method) onto aStream."

  | profMon startTime endTime persistentKey persistentDict |
  persistentKey := ('PROFMONITOR' , Object new identityHash asString) asSymbol.
  persistentDict := UserGlobals
    at: #'ProfMonitorRoots'
    ifAbsentPut: [ RcKeyValueDictionary new ].
  [
  FACompatibilityUtils current
    commitOnAlmostOutOfMemoryDuring: [
      startTime := System _timeGmtFloat.
      profMon := ProfMonitorTree new.
      "Reference the monitor from a persistent root so that a commit can
      move its samples out of gem temp space."
      persistentDict at: persistentKey put: profMon.
      profMon intervalNs: (profMon class computeInterval: totalCpuTime) * 4.
      profMon startMonitoring.
      [ aBlock value ]
        ensure: [
          "This #ensure: is very important because the closure we are
          profiling may raise a signal (like Seaside request processing,
          which uses notifications), so without the #ensure: we would not
          write the report anywhere."
          [
          endTime := System _timeGmtFloat.
          profMon stopMonitoring.
          profMon gatherResults.
          aStream
            nextPutAll: 'Total time: ' , ((endTime - startTime) * 1000) asInteger asString;
            cr;
            cr.
          aStream
            nextPutAll: (profMon reportDownTo: tallyThreshold);
            cr.
          profMon removeResults ]
            on: Error
            do: [ :ex | aStream nextPutAll: 'There was an error trying to profile: ' , ex printString ] ] ]
    threshold: 80 ]
    ensure: [
      persistentDict removeKey: persistentKey ifAbsent: [  ].
      System commit ]
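
And this is the kind of call site I use (the block and the numbers here
are just an example):

  | report |
  report := WriteStream on: String new.
  self
    profile: [ 1000000 timesRepeat: [ OrderedCollection new ] ]
    estimatedTotalCpuTime: 5000
    tallyThreshold: 10
    writingReportOn: report.
  report contents "the report, or the error text if profiling failed"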


What I cannot understand is this: WITHOUT the
#commitOnAlmostOutOfMemoryDuring:threshold: logic, I only get the
out-of-memory problem in a few scenarios (the really big ones).

Now, WITH the #commitOnAlmostOutOfMemoryDuring:threshold: logic, I thought
I should (ideally) get no out-of-memory problems at all, since the samples
are now persistent. Yet the funny part is that I am getting the above error
for every single profile I run, even the small ones. WTF!

I am clearly doing something wrong, but I cannot see it.

What exactly does the exception "2 failed attempts to signal
AlmostOutOfMemory" mean? How can signaling an exception fail? What can I
do so that it doesn't fail?


Thanks a lot in advance,


-- 
Mariano
http://marianopeck.wordpress.com