Tuesday, October 28, 2008

How To Backport Multiprocessing to 2.4 and 2.5

Just let these guys do it for you.

My hat's off to them for this contribution to the community. It is much appreciated and will find use quickly, I'm sure. I know I have some room for it in my toolbox. Hopefully, the changes will be merged back into the 2.6 line, so that any future bugfixes benefit both stock Python and the backport.

So, if you don't follow 2.6/3.0 development, you might not be aware of multiprocessing, the result of integrating the pyprocessing module into the standard library. The module was cleaned up and improved as part of its inclusion, so it's really nice to have the result available to the larger Python user base that is still on 2.5 and 2.4. Although some edge cases might still need to be covered, the work is stabilizing quickly.
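A nice property of the backport is that it installs under the same multiprocessing name as the 2.6 standard library module, so code written against it should need no changes when you upgrade. Here's a minimal sketch of how I'd take advantage of that; the guarded import, the fallback, and the work() function are my own illustration, not something from the backport's docs:

```python
# Guarded import: picks up the stdlib module on 2.6+, or the installed
# backport on 2.4/2.5, and degrades gracefully if neither is available.
try:
    from multiprocessing import Process
except ImportError:
    Process = None  # no multiprocessing available; fall back below

def work():
    # This runs in a separate interpreter when Process is available.
    print('hello from a child process')

if __name__ == '__main__':
    if Process is not None:
        p = Process(target=work)
        p.start()
        p.join()
    else:
        work()  # last resort: run inline in the current process
```

The `if __name__ == '__main__'` guard matters here: child processes may re-import your main module, and the guard keeps them from spawning children of their own.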

Here's an overview in case you don't know it, so you can see whether it would be useful for your own purposes. Starting out, I think this backport may reach more users than the original multiprocessing module, since so much of the user base is still on older Pythons. Thus, I hope a few people find this introduction useful.

>>> from multiprocessing import Process, Pipe
>>>
>>> def f(conn):
...     conn.send([42, None, 'hello'])
...     conn.close()
...
>>> parent_conn, child_conn = Pipe()
>>> p = Process(target=f, args=(child_conn,))
>>> p.start()
>>> print parent_conn.recv()   # prints "[42, None, 'hello']"
[42, None, 'hello']
>>> p.join()

This is an example from the multiprocessing docs, using its Pipe abstraction. The original idea was to emulate the threading model. The provisions are basic, but they give you what you need to coordinate other Python interpreters. Besides pipes, there are also queues, locks, and worker pools.

If you're working on a multicore system with a problem that can be broken up for multiple workers, you can stop complaining about the GIL and dispatch your work to child processes. It's a great solution, and this backport makes it a lot easier, giving the anti-thread crowd a nice boost in validation and an easier time convincing others. That's a good thing for all of us, because it means software that takes advantage of our new machines, and more people who can write that software without the problems threading has always given us. Of course, some constructs, like locks, can still be problematic in the wrong situation, so don't think I'm calling anything a silver bullet. The point is, it improves. Nothing perfects, and I know that.
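To show the worker-pool side of that, here's a minimal Pool sketch. Pool itself is part of the module; the square() function and the pool size are just my own illustration:

```python
from multiprocessing import Pool

def square(x):
    # Runs in a worker process; must be defined at module level
    # so it can be pickled and sent to the workers.
    return x * x

if __name__ == '__main__':
    pool = Pool(processes=2)             # two worker processes
    results = pool.map(square, range(5)) # dispatched across the pool
    pool.close()
    pool.join()
    print(results)                       # [0, 1, 4, 9, 16]
```

Pool.map keeps the familiar shape of the builtin map(), so a serial inner loop can often be parallelized with very little rewriting.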

2 comments:

Jesse said...

It wasn't that much of a contribution!

In reality, the multiprocessing backport is simply a revision of pyprocessing (original project: http://pyprocessing.berlios.de/), which was included in 2.6. We wanted to make it available with the updated docs/APIs and tests. A big drawback is that the 2.6 trunk version of multiprocessing relies on changes to python-core that are not in 2.4/2.5, which affects stability.

Thanks for the plug :) There's a lot of work still to be done, and as recent traffic on python-list shows, there's still some education and improvement needed as well.

I will be doing a talk on the new package and threaded programming at PyWorks in Atlanta in November, and hopefully a talk at PyCon 2009.

sevenseeker said...

What is a good way to communicate with foreign systems that you wish to share processing with, in addition to the multicore box you are running your multiprocessing goodness on?

What are some things to avoid? What are good guidelines (if any yet) for integrating these solutions?

I write here about programming, how to program better, things I think are neat and are related to programming. I might write other things at my personal website.

I am happily employed by the excellent Caktus Group, located in beautiful and friendly Carrboro, NC, where I work with Python, Django, and Javascript.
