[Glass] Understanding temp memory used by seaside gems and 'ps' output

Martin McClure via Glass glass at lists.gemtalksystems.com
Thu Jul 2 20:15:44 PDT 2015


On 06/19/2015 01:22 PM, Mariano Martinez Peck via Glass wrote:
>     
> *So... the first question is why process 2563 is showing more than
> 700MB (786MB in this case).*
> 
> Anyway... if I cycle over each of my seaside gems and print the
> result of *"System _tempObjSpacePercentUsed" for each gem, I usually
> get between 5% and 20%*, which would mean between 35MB and 140MB. This
> is FAR from the 786MB reported by ps. So... am I misunderstanding
> something?
> 
> Yes... my OS has little free memory and I am trying to check that I am
> not holding on to unnecessary memory...
> 

Linux is very precise about what it reports, but not very obvious,
since it avoids using memory whenever it can get away with not using it.

/proc/meminfo gives the best information about the entire system, but
you have to read a bunch of documentation (and sometimes kernel code) to
understand what all the lines mean.

As I understand it, the VSZ of a process is all of the virtual memory
pages that have been allocated by the process. This might be the total
of everything that shows up in /proc/<pid>/maps. It includes memory that
is in use, memory that is shared, and memory that is swapped out, but
also pages that have never been touched, perhaps never will be, and so
never consume actual memory. For instance, every gem will probably
report the entire shared page cache as part of its VSZ.
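
If you want to see this for yourself, here's a rough Python sketch
(nothing GemStone-specific, just reading /proc) that sums the address
ranges in /proc/<pid>/maps and compares the total to the VmSize (VSZ)
and VmRSS (RSS) lines in /proc/<pid>/status:

    import sys

    def mapped_total_kb(pid):
        # Sum the sizes of all address ranges in /proc/<pid>/maps.
        total = 0
        with open("/proc/%s/maps" % pid) as f:
            for line in f:
                start, end = line.split()[0].split("-")
                total += int(end, 16) - int(start, 16)
        return total // 1024

    def status_kb(pid, field):
        # Read one of the "VmSize: 123 kB" lines from /proc/<pid>/status.
        with open("/proc/%s/status" % pid) as f:
            for line in f:
                if line.startswith(field + ":"):
                    return int(line.split()[1])

    pid = sys.argv[1] if len(sys.argv) > 1 else "self"
    print("sum of maps : %d kB" % mapped_total_kb(pid))
    print("VmSize (VSZ): %d kB" % status_kb(pid, "VmSize"))
    print("VmRSS  (RSS): %d kB" % status_kb(pid, "VmRSS"))

The two VSZ numbers should agree closely; run it against one of your
gem pids and you can see how far apart VSZ and RSS really are.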

The RSS of a process is the subset of VSZ that is actually mapped to
real memory at a point in time. This includes shared memory, so every
gem reports some portion of the SPC in its RSS. Pages in the SPC are
mapped into the process on demand (this is a Linux kernel thing, not a
GemStone thing) and never unmapped once mapped. So the RSS of a gem will
be larger or smaller depending on which pages in the SPC it has touched
since it was launched. So the RSS, by itself, doesn't tell you much.
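
You can get a bit further by splitting a gem's RSS into shared and
private pages using the per-mapping counters in /proc/<pid>/smaps; for
a gem, the shared portion is dominated by whatever SPC pages it has
touched. A rough sketch along the same lines as the one above:

    import sys
    from collections import Counter

    def smaps_totals_kb(pid):
        # Sum every "Name: 123 kB" counter across all mappings.
        totals = Counter()
        with open("/proc/%s/smaps" % pid) as f:
            for line in f:
                parts = line.split()
                if len(parts) == 3 and parts[2] == "kB":
                    totals[parts[0].rstrip(":")] += int(parts[1])
        return totals

    t = smaps_totals_kb(sys.argv[1])
    shared = t["Shared_Clean"] + t["Shared_Dirty"]
    private = t["Private_Clean"] + t["Private_Dirty"]
    print("Rss: %d kB  shared: %d kB  private: %d kB"
          % (t["Rss"], shared, private))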

The amount of free memory in the system doesn't tell you much either.
Any Linux system that has been running a while will have low free
memory, unless some process recently exited that was using a lot of
anonymous memory (memory which is not backed by a file). This is because
when Linux reads a file into memory, it keeps the in-memory copy of that
file around until there is demand for memory: freeing that memory is
cheap and easy, re-reading the contents from disk is expensive, and
someone might want that file again.

So much for things that don't tell you much. What *does* tell you useful
things?

Recent kernels have a field in /proc/meminfo called "MemAvailable",
which is an estimate of how much memory you could allocate before running
out. At this moment my desktop system is reporting 1.8G free, but 18G
available. If you don't have the MemAvailable stat in your kernel, it's
approximately MemFree + Active(file) + Inactive(file).
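
That approximation is easy to script; a minimal sketch that reports
MemAvailable when the kernel provides it and falls back to the sum
above when it doesn't:

    def meminfo_kb():
        # Parse /proc/meminfo into a dict of kB values.
        info = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, rest = line.split(":", 1)
                info[key] = int(rest.split()[0])
        return info

    m = meminfo_kb()
    if "MemAvailable" in m:
        avail = m["MemAvailable"]
    else:
        # Older kernels: approximate as described above.
        avail = m["MemFree"] + m["Active(file)"] + m["Inactive(file)"]
    print("MemFree: %d kB  available: %d kB" % (m["MemFree"], avail))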

The iotop tool can tell you which processes are doing disk I/O, when
they're doing it, and whether and how much swapping is going on. Using a
small amount of swap space is not a problem, and neither are occasional
small reads and writes to swap; they indicate that the system is
swapping out things that hardly ever get used, freeing up memory for
more frequently accessed stuff. But frequent swap in/out activity is an
indicator of memory pressure, so if you see that, you should look at
decreasing memory demand or adding memory to the system.
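
If you don't want to keep iotop running, the pswpin/pswpout counters in
/proc/vmstat give the same signal; a small sketch that samples them and
prints the swap traffic over each interval (Ctrl-C to stop):

    import time

    def swap_counters():
        # pswpin/pswpout count pages swapped in/out since boot.
        counters = {}
        with open("/proc/vmstat") as f:
            for line in f:
                key, value = line.split()
                counters[key] = int(value)
        return counters["pswpin"], counters["pswpout"]

    prev_in, prev_out = swap_counters()
    while True:
        time.sleep(5)
        cur_in, cur_out = swap_counters()
        print("pages swapped in: %d  out: %d (last 5s)"
              % (cur_in - prev_in, cur_out - prev_out))
        prev_in, prev_out = cur_in, cur_out

Sustained nonzero rates here are the memory-pressure signal; occasional
blips are normal.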

If you see pauses that correspond to disk I/O that is not swap I/O, then
things like Dale's hint apply. Some of the Linux I/O schedulers do
prioritize reads over writes. Others don't, and which scheduler you
choose can make a difference. Some schedulers may be tunable. There are
ways to tell which I/O schedulers your kernel supports, but I'll let you
look that up if you think you need it. :-)
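
For reference, sysfs lists the supported schedulers per block device,
with the active one shown in square brackets (e.g. "noop deadline
[cfq]"); a minimal sketch that prints them all:

    import glob

    # The active scheduler is the one shown in square brackets.
    for path in glob.glob("/sys/block/*/queue/scheduler"):
        dev = path.split("/")[3]
        with open(path) as f:
            print("%s: %s" % (dev, f.read().strip()))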

Regards,

-Martin

