lucy-dev mailing list archives

From Marvin Humphrey <mar...@rectangular.com>
Subject Re: [lucy-dev] Parallel compilation
Date Tue, 16 Nov 2010 17:25:35 GMT
On Tue, Nov 16, 2010 at 07:51:43AM -0500, Robert Muir wrote:
> Sorry, I am a perl dummy, so forgive me if I interpreted your
> statements/code wrong :)

Over time, we should expect to migrate a lot of the build structure to
Makefiles.  I hate make, but it's the lowest common denominator.

> why is it 10 C files each?

It was originally 1 C file each. :)  That incurred waaaay too much fork()
overhead and CPU contention -- it was slower than a single-threaded build!

I hacked each fork to compile 10 files, and that was good enough to
effectively eliminate contention as a concern, achieve some nice gains, and
present the results to the list as a draft.  But 10 was an arbitrary number.

> wouldn't this require some wasted fork() overhead respawning many children?
> couldn't you instead just split all the .c files into 4 pieces, and
> only have 4 children up front?

I like it. :)  I think we should do as you propose, and spawn a fixed number
of child processes -- perhaps deriving the number of children from the CPU
count when we can figure it out and falling back to 4 when we can't.
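
For the CPU probe, maybe something along these lines.  This is only a
sketch -- the num_cpus() helper is a name I just made up, and the probing
commands are examples rather than a settled approach:

    # Guess the number of CPUs, falling back to 4 if we can't tell.
    sub num_cpus {
        my $count;
        if ( -r '/proc/cpuinfo' ) {    # Linux
            open( my $fh, '<', '/proc/cpuinfo' ) or return 4;
            $count = grep { /^processor\s*:/ } <$fh>;
            close $fh;
        }
        else {                         # BSDs, Mac OS X
            my $out = `sysctl -n hw.ncpu 2>/dev/null`;
            $count = $1 if defined $out and $out =~ /(\d+)/;
        }
        return ( $count and $count > 0 ) ? $count : 4;
    }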

We can then have each child communicate back to the parent process by writing
a status file upon successful exit.  The parent process can monitor the
children once per second or so, and terminate if a child exits without
communicating success.

There are other IPC methods we could use for monitoring how compilation is
proceeding in the child processes, but a combination of fork() and the file
system seems like it would be easiest to grok and maintain.
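
To make that concrete, here's roughly what I'm picturing.  Sketch only:
compile_batch() is a hypothetical helper (assume it dies on failure),
$num_children and @batches are assumed to be set up already, and the
status directory name is a placeholder:

    use File::Spec::Functions qw( catfile );
    use POSIX qw( WNOHANG );

    my $status_dir = '.build_status';    # placeholder name
    mkdir $status_dir unless -d $status_dir;

    # Fork the workers.  Each child writes a status file just before
    # a successful exit.
    my %pid_to_id;
    for my $id ( 0 .. $num_children - 1 ) {
        my $pid = fork();
        die "Can't fork: $!" unless defined $pid;
        if ( $pid == 0 ) {    # child
            compile_batch( $batches[$id] );    # hypothetical helper
            open( my $fh, '>', catfile( $status_dir, "child_$id" ) )
                or exit(1);
            close $fh;
            exit(0);
        }
        $pid_to_id{$pid} = $id;    # parent bookkeeping
    }

    # Reap children, checking that each one reported success.
    while (%pid_to_id) {
        my $pid = waitpid( -1, WNOHANG );
        if ( $pid > 0 ) {
            my $id = delete $pid_to_id{$pid};
            die "Child $id exited without reporting success\n"
                unless -e catfile( $status_dir, "child_$id" );
        }
        else {
            sleep 1;    # poll once per second or so
        }
    }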

> p.s. we do a similar thing in Lucene-java with running tests, and a
> problem can be balancing the workload with the children.

Yes, that was what got me thinking.  I watched "top" and confirmed that GCC
wasn't taking advantage of multiple cores on its own.

> one heuristic about how long a test will take to run, or file will
> take to compile, is its length in bytes.
> it might be useful to sort the list of files by their size in bytes,
> and use mod to divide them up.

For now, I think modulus will be good enough.  It also has the advantage of
compiling files in roughly the same order that they are compiled in now --
which is useful, because the files most likely to fail are up front.
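
In other words, with @c_files already in the current compile order,
something like:

    # Deal the .c files out round-robin, which preserves the current
    # compile order within each batch.
    my @batches;
    my $i = 0;
    for my $file (@c_files) {
        push @{ $batches[ $i++ % $num_children ] }, $file;
    }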

There's only one giganto file, the C file which holds all the XS bindings.
It takes about 30 seconds to compile on my laptop, and exhausts the memory on
some systems.  Until we break up that file, it should only be compiled in the
parent process, so that it's not competing for resources with anything else.
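
Concretely, that could be as simple as pulling it out of the batches and
letting the parent compile it by itself, with no children running.  The
file name and both helpers below are stand-ins, not real names:

    # Compile the giant XS-generated C file in the parent, solo.
    my $xs_file = 'lib/Lucy.c';    # stand-in for the real XS output file
    my @rest = grep { $_ ne $xs_file } @c_files;
    compile_batch( [$xs_file] );    # hypothetical helper; parent only
    run_children( \@rest );         # then fork the workers for the rest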

Marvin Humphrey

