[Glass] Problem with #fork and #performOnServer?

Mariano Martinez Peck via Glass glass at lists.gemtalksystems.com
Tue Jul 4 04:42:30 PDT 2017


On Tue, Jul 4, 2017 at 5:31 AM, Petr Fischer via Glass <
glass at lists.gemtalksystems.com> wrote:

> I am also interested in this: in the free GemStone version, all GemStone
> processes have CPU affinity to the first 2 CPU cores (licensing).
> Does this also apply to your own sub-processes (performOnServer: or
> OSSubprocess)?
>
>
As far as I understand, yes to both questions.
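
You can double-check that from the gem itself. A minimal sketch, assuming a
Linux box with taskset installed ($$ is the pid of the shell that
performOnServer: spawns, so the answer shows the affinity mask a child
process inherits):

    "should answer something like: pid 12345's current affinity list: 0,1"
    System performOnServer: 'taskset -cp $$'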


>  In the free version it is possible to run 10-20 gems, but only on 2 cores,
> quite a bottleneck...
>
>
Well... a bottleneck on CPU, yes, but you may still be able to take advantage
of other resources (I/O etc.). What I mean is... imagine you are serving a
website: I would rather have 10/20 gems, even if split across 2 cores, than
only 2 gems.




>  pf
>
>
> > On Mon, Jul 3, 2017 at 4:52 PM, Petr Fischer via Glass <
> > glass at lists.gemtalksystems.com> wrote:
> >
> > > > Hi guys,
> > > >
> > > > I am trying to accomplish something easy: I have a main gem that
> > > > iterates some "reports" and exports each report into a PDF by calling a
> > > > unix lib. This PDF export takes a few seconds. So what I wanted to do is
> > > > something like this pseudo code:
> > > >
> > > > self reports do: [:aReport |
> > > >     [ System performOnServer: (self pdfExportStringFor: aReport) ] fork ].
> > > >
> > > > What I wanted was for each unix process running the PDF tool to be
> > > > executed on a different CPU core than the gem (my current GemStone
> > > > license does allow using all cores). However, I am not sure I am getting
> > > > that behavior. It looks like I am still using 1 core and running
> > > > sequentially.
> > > >
> > > > Finally, I couldn't even reproduce what I had in mind with a simple
> > > > test case:
> > > >
> > > > 1 to: 6 do: [:index |
> > > >     [ System performOnServer: 'tar -zcvf test', index asString,
> > > >         '.tar.gz /home/quuve/GsDevKit_home/server/stones/xxx_333/extents' ] fork.
> > > > ].
> > > >
> > > > I would have expected those lines to burn my server and use 6 CPU cores
> > > > at 100%. But no, nothing happens. What is funny is that if I call the
> > > > very same line without the #fork, I do get the 100% CPU process:
> > >
> > > Just a note:
> > > tar/gzip is not written with multicore support IMHO, so you will always
> > > utilize only a single core at 100%. But there is "parallel gzip" (pigz),
> > > which definitely gets the CPU fans spinning.
> > > Usage: tar --use-compress-program=pigz ...
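> > >
> > > For example (assuming pigz is installed), the tar test from above would
> > > become something like:
> > >
> > >     System performOnServer: 'tar --use-compress-program=pigz -cvf test1.tar.gz /home/quuve/GsDevKit_home/server/stones/xxx_333/extents'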
> > >
> >
> > Sure, that was a dummy example to see CPU usage and test my thinking (it is
> > not the real unix command I call).
> >
> >
> > >
> > > Is performOnServer: really non-blocking for the whole image/gem (or does
> > > the whole VM simply wait for the command to complete)?
> > >
> > >
> > Yeah, that's the thing. I think you are right: #performOnServer: may be
> > blocking at the gem level.
> >
> > For OSSubprocess I was able to support specifying/managing non-blocking
> > pipes for the standard streams.
> >
> > And now I am reading GsHostProcess, and it also supports non-blocking
> > streams!!
> >
> > I see that #_waitChild does indeed call waitpid(), so I should be able to
> > do a busy wait around #childHasExited.
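> >
> > Something along these lines, maybe (a rough sketch, completely untested; I
> > still have to check the exact instance-creation protocol of GsHostProcess,
> > so the class-side #fork: below is an assumption, and #reports /
> > #pdfExportStringFor: are the same placeholders as in my pseudo code above):
> >
> >     | children |
> >     "start one child per report; assuming #fork: launches the command
> >      without blocking the gem"
> >     children := self reports collect: [:aReport |
> >         GsHostProcess fork: (self pdfExportStringFor: aReport) ].
> >     "busy wait, polling #childHasExited, until every child is done"
> >     [ (children detect: [:child | child childHasExited not] ifNone: [nil]) isNil ]
> >         whileFalse: [ (Delay forSeconds: 1) wait ].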
> >
> > BTW, for OSSubprocess I added a SIGCHLD-based kind of waiting to avoid
> > polling [2]... but of course you must be careful, because you may need to
> > force reading from the streams (depending on how much the process writes).
> >
> > [2] https://github.com/marianopeck/OSSubprocess#semaphore-based-sigchld-waiting
> >
> >
> > Thanks!
> >
> > --
> > Mariano
> > http://marianopeck.wordpress.com



-- 
Mariano
http://marianopeck.wordpress.com