Thursday, October 30, 2008

How To Call It A Day

This week hasn't been great for my productivity. It has been a series of days overshadowed by one thing after another. Between standing in line at the DMV, computer issues, and today helping my brother-in-law with a very sudden move, typing feels like an unfamiliar act. (Unless it's on the T-Mobile G1, which I'll be reviewing this weekend.)

Today, I helped load a seventeen-foot U-Haul truck, made a few last-minute stops, drove said truck just over an hour south, and helped unload it into a storage shed. In all the moves I've made over the years, I've never loaded and unloaded a complete truck in one day; I was always able to stretch it over two days, with a nice sleep in the middle. After all that, I had to drive the truck back to drop it off. I barely made it. My dear wife hit traffic on her way to pick me up, so I sat and I waited. I listened to the mechanic at the drop-off location declare that "North Carolina is McCain country," which was informative of him. I enjoyed some Mike and Ike candies.

So, after finally getting home, eating dinner, and putting my son to sleep, I sit down at this familiar, glowing box. What code can I get out before it's time to call it a day? What debugging and planning can I get in before consciousness must be suspended? How do I make use of what day I have left?

I did a lot today, even if I couldn't work today. Sometimes, knowing when to quit is the best productivity choice you can make. I'll see you in the morning, Internet.

Tuesday, October 28, 2008

How To Backport Multiprocessing to 2.4 and 2.5

Just let these guys do it for you.

My hat's off to them for this contribution to the community. It is much appreciated and will find use quickly, I'm sure. I know I have some room for it in my toolbox. Hopefully, the changes will be taken back into the 2.6 line, so that any bugfixes that come along will help both stock Python and the backport.

So, if you don't follow 2.6/3.0 development, you might not be aware of multiprocessing, the evolution of integrating the pyprocessing module into the standard library. It was cleaned up and improved as part of its inclusion, so it's really nice to have the result available to the larger Python user base that is still on 2.5 and 2.4. Although some edge cases might still need to be covered, the work is stabilizing quickly.

Here's an overview in case you don't know the module, so hopefully you can see whether it would be useful for any of your own purposes. I think, starting out, there is more potential audience for this backport than for the original multiprocessing module. Thus, I hope a few people find this introduction useful.

>>> from multiprocessing import Process, Pipe
>>>
>>> def f(conn):
...     conn.send([42, None, 'hello'])
...     conn.close()
...
>>> parent_conn, child_conn = Pipe()
>>> p = Process(target=f, args=(child_conn,))
>>> p.start()
>>> print parent_conn.recv()   # prints "[42, None, 'hello']"
[42, None, 'hello']
>>> p.join()

This is an example from the multiprocessing docs, utilizing its Pipe abstraction. The original idea was emulating the threading model. The provisions are basic, but give you what you need to coordinate other Python interpreters. Aside from pipes, there are also queues, locks, and worker pools provided. If you're working on a multicore system with a problem that can be broken up for multiple workers, you can stop complaining about the GIL and dispatch your work out to child processes. It's a great solution, and this backport makes it a lot easier, giving the anti-thread crowd a nice boost in validation and ease of convincing. That's a good thing for all of us, because it means software that takes advantage of our new machines and more people who can write that software without the problems threading always gave us. Of course, some of these tools, like locks, can be problematic in the wrong situation, so don't think I'm calling anything a silver bullet. The point is, it improves. Nothing perfects, and I know that.
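
For a quick taste of the worker pools, here is a minimal sketch along the lines of the standard example (the process count and the function are just illustrative):

from multiprocessing import Pool

def square(x):
    return x * x

if __name__ == '__main__':
    # Four worker processes, each a real OS process with its own GIL.
    pool = Pool(processes=4)
    print pool.map(square, range(10))   # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]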

Monday, October 27, 2008

How To Review Memiary in 5 Easy Steps

This is how to review Memiary in 5 easy steps:
  1. Forget what you did yesterday. Check!
  2. Decide that all problems can be solved not just with software, but by adding new software just for that purpose. Check!
  3. Get written about on the popular ReadWriteWeb so people find you. Check!
  4. Be nifty enough to grab someone's attention when they try out the new service. Check!
  5. Surpass a plain text file in convenience, flexibility, privacy, and install base. Damn! Maybe next time.
The best way to solve a problem is to avoid needing to solve it in the first place.

How to Underestimate Google App Engine

Yeah, App Engine has been around for a while. That doesn't make my general App Engine article less timely. Hey, I don't just write about stuff because it's hip. In a few months, I'll announce what Google Chrome means for the web landscape. Seriously.

Although a lot of people believe Google App Engine is a very big thing and extremely important to the landscape of the web, I get the strong impression from outside the camp that it's seen as more of a toy, and I want to address that. As with my quick review of App Engine itself, it's hard to make real calls when everything is still beta, but we're working with what we've got here. The people who see the real potential of App Engine feel it, and the people who just think it's neat Just Don't Get It. What is there to get that so many developers are missing, and why do those of us who do get it think it's important enough to evangelize, as I'm doing right now?

Once again, making any claims or arguments in this discussion has to start by defining what we're talking about in the first place. Are we talking about the choice of first runtime (Python) and included libraries (namely, the Django templates)? Is Google's design of the Datastore API, and the other service APIs they provide, the important factor to praise or ignore? Perhaps the details of their hosting plan make it all worth gold, regardless of what software they put on top of all that iron? Going with my previous post on App Engine, it all comes down to the experience, and that is what we need to discuss.

For Newbies This Means...

People just starting out with Python, web development, or even programming at all have a great opportunity here. They can focus on writing code in a very low-barrier environment and not worry about a lot of the details of deployment and hosting that got in the way before. The river between the Field of Writing Code and the Field of Running Your Website has been reduced to a trickle that can easily be waded across.

Few deny the benefit of the lower barrier here for the uninitiated, but there may be some misplaced valuation. There is more to benefit these individuals than staving off their eventual need to understand how to manage and deploy to their own hosting solution. New developers are in an amazing position that none of the rest of us are privy to: they may go entire careers without knowing how to set up a webserver. This is wonderful. Does it mean they are incapable of it, or that we'll get a flood of developers who are less able to perform? I believe not; rather, we are seeing a change in the way we learn. We're narrowing disciplines, and we aren't wasting mental cycles and man hours having every gear understand the entire machine. If you write code well, then learn that and just that. Ignore the rest.

Every coder today knows something about designing, even if they aren't good at it. Anyone who has written a line of code, HTML, or CSS for the web has probably configured a database at some point. We take these as normal and expected, just rites of passage. We fit ourselves into specialties as we gain experience in our niche, but we all have an expectation of knowing a little about a lot. We seem averse to the idea that the next generation will know how to do their own jobs well, and not all the jobs we learned but don't do on a regular basis.

For Experienced Hobbyists This Means...

Even when a developer reaches the point that hosting things themselves, managing servers, and configuring databases are feasible, none of it is necessarily worth the effort. All of that is time that could be spent solving the real problem at hand, with your family, or on your real job. The ability to do something doesn't negate the cost in time and effort of doing it. Being able to handle a problem when it arises does not make it meaningless to avoid the possibility of that adversity in the first place.

For Serious Ventures This Means...

With the flood of tiny little apps being launched on App Engine, the question of a large-scale app being rolled out on the platform is a big one. Will anyone really build businesses hosted on App Engine? Can Google be trusted with your code? Will this platform offer the real power and opportunity needed to meet the demands of a growing business? None of these are particularly interesting to me in this context, because the deeper question is whether the benefits that work for tinkerers and hobbyists extend to "serious" work. I think they do.

Sunday, October 26, 2008

How To Test Django Template Tags - Part 2

In Part 1 I wrote about my method of testing the actual tag function for a custom Django template tag. I said I would follow it with a Part 2 on testing the rendering of the resulting Node. This is that follow-up post, for any of you who were waiting for it.

Testing the rendering poses more problems than our little tag function did. Rendering is going to do a bit more, including loading the template, resolving any variables in question, doing any processing on those results (like looking up a record from the database based on the variable value), and finally populating a new context to render the tag's template with. How do we test all of that without actually doing any of it? That is the goal we're reaching for here with unit tests. We want each test to be so specific that we test what something will do without actually relying on the things it does. We aren't testing any of those things, just that our render() method does them.
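
To ground the discussion, here is roughly the shape of the render() method under test. This is my sketch, not the verbatim code, and the template path is invented, but it does the essential steps described above:

from django import template
from django.template import loader

def render(self, context):
    # Resolve the email variable against the calling context.
    email = self.email.resolve(context)
    # Load this tag's own template and render it in a fresh context
    # that inherits the caller's autoescape setting.
    tag_template = loader.get_template('tags/link_to_email.html')
    return tag_template.render(template.Context({
        'email': email,
        'obfuscate': self.obfuscate,
    }, autoescape=context.autoescape))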

What can we mock easily? get_template() is an easy call, so we can patch that to return a mock inside of our test. Our render() needs to load the template, do its processing, and then render the template. We can assert the rendering was done properly afterwards, thanks to the mock template.

So far...

@patch('django.template.loader.get_template')
def test_link_to_email_render(self, get_template):
    node = LinkToEmail(obfuscate=False, email=Mock())
    node.email.resolve.return_value = 'bob@company.com'

    ...

But now we get to our problem. We have to call our render method to test it, and it's expecting a Context to be passed. Normally, we want to mock anything we aren't directly testing, but that isn't always easy.

As of mock 0.4.0 the Mock class does not support subscripting, and contexts are dict-like objects. My first inclination? Just pass a dictionary. Unfortunately, the context also has an important attribute, autoescape, which needs to be inherited by the context we use inside the render() method, and dictionaries don't have this.

class ContextMock(dict):
    autoescape = object()

@patch('django.template.loader.get_template')
def test_link_to_email_render(self, get_template):
    node = LinkToEmail(obfuscate=False, email=Mock())
    node.email.resolve.return_value = 'bob@company.com'

    context = ContextMock({})

We're making progress, and we're at the point where we need to actually call the render() method. Now, after its basic processing, it's going to create the Context in which to render the template. For the sake of limiting what "real things" we invoke during our test, this is something we want to mock, too.

class ContextMock(dict):
    autoescape = object()

# Note: stacked patch decorators pass their mocks in bottom-up order.
@patch('django.template.Context')
@patch('django.template.loader.get_template')
def test_link_to_email_render(self, get_template, Context):
    template = get_template.return_value
    node = LinkToEmail(obfuscate=False, email=Mock())
    node.email.resolve.return_value = 'bob@company.com'

    context = ContextMock({})

    node.render(context)
    template.render.assert_called_with(Context.return_value)

    args, kwargs = Context.call_args
    assert kwargs['autoescape'] is context.autoescape
    assert args[0]['email'] is node.email.resolve.return_value
    assert args[0]['obfuscate'] is node.obfuscate

The testing itself is pretty basic. We want to make sure the mocked context is given to the template to use in rendering, and that the new context properly inherits the autoescape property. We also test that the new context carries the values our node resolved. In the end, this was pretty easy. I actually cleaned up the code I based this on in response to writing the article and discovering cleaner ways to do it.

We need to put some thought into our tests. Often we are tempted to take shortcuts. We might write a unit test which simply calls the function, maybe checks the result, and call it a day. We need to test the different conditions under which a function is called. We need to ensure we are testing reliably, and using things like mocks helps us ensure that when our test calls the function, we know what the world looks like to that function. Mocks are our rose-colored glasses.

This two-parter on testing Django template tags is hopefully the start of more similar writings on specific testing targets. Many of them will likely focus on Django, for two reasons. Firstly, I think there is a lack of good testing practices in the Django world, from where I'm looking. Secondly, I'm in the process of adding tests to a not-small codebase, and these posts both document my journey and guide me.

How To Test

This is an index of different articles I've written covering techniques for testing specific software components. The number is small, but will grow in time. Initially, expect a heavier lean towards Django topics.

  • How To Test Django Template Tags Parts One and Two

Saturday, October 25, 2008

How To Test Django Template Tags - Part 1

I'm involved in a project that has gone a long time without tests, and everyone involved knows tests are rilly rilly important. There is a point where acknowledged best practices simply meet the reality of the development mind, and it doesn't always work out like you'd hope. We know tests are important, but we need to resolve this ticket right freaking now. You understand. The point was reached where this just couldn't continue and the costs of fixing the same bugs over and over were way too obvious to ignore. Tests are now popping up for things we're fixing and new things we're writing. As it happens, I came across my first real need to create a custom template tag. Of course, I wanted to test it. So how do you test something that is as entrenched in the Django processing pipeline as a template tag?

Incidentally, I'm just going to assume you either know all about testing and Django template tags or you can follow along just fine.

Testing breaks down to individual functions, and we try to keep them individually small, so they're easier to test and less likely to break. The simpler something is, the more likely you actually understand it. So our custom template tag is really two functions: the tag parser and the renderer. The first is the function we actually tell Django to call when it needs to parse our tag. The second is the render() method of a Node subclass.

Here is an example of the kind of tag we might be working with. It creates a link to an email address, and can optionally obfuscate it. For example, the obfuscate flag might come from whether the page is being viewed by an anonymous user or a friend.

{% link_to_email "bob@company.com" do_obfuscate %}

The parsing comes first, which I do in LinkToEmail.tag(), a classmethod.

...
@classmethod
def tag(cls, parser, token):
    # The raw tag contents: ('link_to_email', email, [obfuscate])
    parts = token.split_contents()
    email = template.Variable(parts[1])
    try:
        # The optional second parameter is kept as-is, not resolved.
        obfuscate = parts[2]
    except IndexError:
        obfuscate = False
    return cls(email, obfuscate)

So we have two conditions that can happen here. Either the tag is used with just an email and we default to not obfuscating, or the optional second tag parameter tells us to obfuscate. To simplify this post, the second parameter is simply given or not. If it's given, we obfuscate; we don't resolve it as a variable like we do the email.

So we need to test this function getting called when the parser gives us the different possible sets of tokens we're dealing with. Mocking comes in handy.

@patch('django.template.Variable')
def test_tag(self, Variable):
    parser = Mock()
    token = Mock(methods=['split_contents'])

    token.split_contents.return_value = ('link_to_email', 'bob@company.com')

Now we actually call the tag method to test it.

    node = LinkToEmail.tag(parser, token)

    self.assertEqual(node.email, Variable.return_value)
    assert not node.obfuscate

This is the axiom of good testing: we're only testing one thing at a time. We don't actually invoke any template processing to test our one little tag. We don't even let the function we're testing do anything else that might break, except for a pretty innocent creation of an instance of our node. That's OK, because it can't break:

def __init__(self, email, obfuscate):
    self.email = email
    self.obfuscate = obfuscate

The only things it does outside of the function we're testing are calling split_contents() to parse the parameters and creating a template.Variable instance, and we mock both. We control what split_contents() returns, instead of relying on actually parsing a template. We replace template.Variable with a Mock instance, so it doesn't do anything other than record that it was called, letting us test how it was called and what the tag() method did with the result.

We'll also want a second test where split_contents() returns three items, verifying the obfuscate parameter is handled properly; a sketch of it follows.
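
That second test might look something like this (the test name is my own):

@patch('django.template.Variable')
def test_tag_obfuscate(self, Variable):
    parser = Mock()
    token = Mock(methods=['split_contents'])
    token.split_contents.return_value = (
        'link_to_email', 'bob@company.com', 'do_obfuscate')

    node = LinkToEmail.tag(parser, token)

    self.assertEqual(node.email, Variable.return_value)
    # The raw token is stored as-is, so truthiness is all we check.
    assert node.obfuscate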

In an effort to remember that I don't usually read any blog post longer than this, I'm not making this longer. So, I'll make it two parts. Tomorrow, I'll write about the larger issue of testing the template renderer, while trying to keep our test as clean as possible. It is a little trickier.

Monday, October 20, 2008

How To Limit Your Possibilities

So, this was going to be a post about the Python module, subprocess. I'm a big fan of subprocess and there are a lot of problems that are easier to solve by using it. We reduce thirteen distinct facilities into one class. We reduce a diverse ecosystem of interfaces into one, uniform interface. The subprocess module is good, both by itself and as a symbol for what Python stands for. I won't be writing my original post about subprocess.
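
Before I set it aside, though, the kind of consolidation I mean looks something like this (a minimal sketch):

import subprocess

# One interface for what os.system, os.popen, and the popen2 module
# each did in their own way: run a command and capture its output.
process = subprocess.Popen(['ls', '-l'], stdout=subprocess.PIPE)
output, _ = process.communicate()
print output

# Or just run a command and take its exit status.
status = subprocess.call(['ls', '-l'])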

It isn't that subprocess isn't important, or that I don't think I can express myself properly, but that it brought up something else I should write about right now: What should I write about?

Is this a blog about software development or is this a blog about Python development? Does it need to be only one? I'm looking for my direction here. I'm not going to stretch this out, because if I do, you won't read it. And truth be told, I want you to read it. I want you to enjoy reading what I write. At heart, I am a writer. I take no shame in admitting that I love watching my graph in Google Analytics rise with every post I make. But this is also about expressing myself as a developer, and that self is no more a Python developer than a software developer. I can't abstract everything I write.

The final answer to what my direction is? I don't have one, and that's just fine.

Saturday, October 18, 2008

How To Recognize a Bad Codebase

We learn to recognize a bad bit of code quickly as our code-fu grows. Arbitrary side effects smell bad and crazy one-liners frustrate us. It becomes easier to identify which lines of a codebase you might want to clean up to improve the overall quality of the work.

There is a line between codebases with bad code in them and bad codebases. When do we learn to recognize this, and what are the signs that the problem is far-reaching, not localized? A bad codebase is an expensive codebase. It is difficult to work with and difficult to collaborate with others on. Identifying what makes a codebase bad is key to knowing when, where, and why to improve it. Improving the overall code quality reduces the overall code cost. I'm thinking about software in economic terms these days, and I'm hoping we can turn the recession to our favor by pushing the mantra Bad Code is Expensive Code.

Costs of code come from three actions: adding features, fixing bugs, and understanding. Adding features is an obvious source of code cost; every time you want to expand a product's abilities, you're going to pay appropriately. Fixing bugs is both obvious and subtle. While it's obvious that you need to fix the bugs you see, it can be very subtle when costs are added that you can't actually detect (more on this later). Understanding the code, to most minds, might be entirely subtle and never obvious. New developers, existing developers moving to new areas, and users trying to understand the behavior emerging from the collection of code all need to understand these things, and the more expensive the code is to understand, the less likely they will.

I feel no need to expand on the cost of adding to a codebase. What will hit us are the subtle points. The cost of bugs explodes against subtle misunderstandings, leading to the conclusion that a lack of understanding of the code is the single greatest source of increasing its cost. This comes partly through the obvious need to understand the code, and partly through the subtler costs misunderstanding adds to fixing bugs, and even to properly expanding the feature set. The problems manifest as actual bugs in the software.

The sign of a bad codebase is a difficult to debug codebase.

Now we only need to know the causes of difficult debugging to know the signs of a bad codebase.

Does the codebase lack tests? No tests mean you can't be sure a change doesn't break more than you intended to fix. Locating the source of a problem is hugely expensive when you're manually verifying correctness instead of verifying via automated testing. There are fantastic techniques of binary debugging, narrowing a changeset range down to the exact change that introduced a bug. This is so expensive with manual testing that it might as well be impossible, while with tests it's one of the greatest debugging tools you could ever have at your disposal: it can automatically tell you exactly what code caused your bug. It can debug for you, but only in a codebase that started out good.
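
Here is a minimal sketch of that binary search over revisions. The checkout() and run_test() helpers are hypothetical stand-ins for your version control system and test runner:

import subprocess

def checkout(revision):
    # Hypothetical: update the working copy to the given revision.
    subprocess.check_call(['svn', 'update', '-r', str(revision)])

def run_test():
    # Hypothetical: returns 0 when the test suite passes.
    return subprocess.call(['python', 'runtests.py'])

def find_breaking_revision(good, bad):
    # Each iteration halves the range of suspect revisions.
    while bad - good > 1:
        middle = (good + bad) // 2
        checkout(middle)
        if run_test() == 0:
            good = middle
        else:
            bad = middle
    return bad  # the first revision where the test fails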

Does the codebase lack documentation? If your understanding of the code comes mostly from trial and error or from asking other developers, then you lack documentation, or enough clear code to self-document. Every time you add a feature or fix a bug, you're debugging more than the code: you're debugging your understanding of how it functions. Clear code, concise comments, and good documentation let you focus on the breakage of the code, and not the breakage of your understanding of its design.
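
Even a short docstring can spare the next reader a trip through the implementation. A toy example, in the spirit of the template tag from the earlier posts:

def obfuscate_email(address):
    """Return address with each character entity-encoded, so naive
    scrapers can't harvest it while browsers still render it normally.
    """
    return ''.join('&#%d;' % ord(c) for c in address)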

Does the codebase grow or shrink? We might think a growing codebase is a universally good sign, but it's not so. A shrinking codebase can be a great sign, and it tells us two things. First, it can mean an increase in quality, when the amount of code shrinks while the value (not to be confused with cost) of the code holds or grows. For example, if you can make a function clearer by finding more concise ways of expressing the same ideas, you reduce how much code there is to understand to get the same job done. A shrinking codebase also tells you that the code is understandable enough to be refactored, which is a little circular: the better the quality of your code, the easier it becomes to improve that quality even further.
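
A toy illustration of shrinking code without losing meaning:

# Before: more code than the idea requires.
def active_names(users):
    names = []
    for user in users:
        if user.is_active:
            names.append(user.name)
    return names

# After: the same idea, stated once.
def active_names(users):
    return [user.name for user in users if user.is_active]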

Take this as a three-point test. How do your current projects score?