I want to say a few things for my own benefit. Maybe that's the only thing I do here. As always, I hope something I have is useful to someone else. In this case, if you're in any position to deal with a big surge on a small site, you might get something useful from, or at least enjoy, what I have to say about my experience getting a bump from some guy named Mike Arrington with a little blog called TechCrunch.
This is about reaction: the right and wrong ways to react to the impact of a week's traffic arriving in a couple of hours. Had our typical traffic grown to these levels naturally (and time will bring this), the means to handle it on a day-to-day basis would already have been in place.
The sudden increase began to time out our FastCGI processes, and I was alerted to it quickly. I confirmed the timeouts, and my first response was to initiate a restart cycle, restarting each process in turn, which did nothing to help. I then brought up a new instance on EC2 and prepared to roll it out as the new production machine, using the same steps I follow for every software update rollout. The new instance ran fine, so I initiated the rollout, associating our public IP with the new instance so it would begin taking traffic. Immediately, the staging machine, now in production, stumbled and began behaving exactly the same way.
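For what it's worth, the cutover step amounts to re-pointing the site's public address at the freshly staged machine. A minimal sketch of that, assuming an Elastic IP managed through boto3 in a VPC (the instance ID, allocation ID, and region are placeholders, and the original rollout may well have used different tooling):

    # Sketch: point the site's Elastic IP at a freshly staged instance.
    # Assumes boto3 and a VPC Elastic IP; all identifiers below are hypothetical.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

    NEW_INSTANCE_ID = "i-0123456789abcdef0"   # hypothetical: the freshly staged machine
    EIP_ALLOCATION_ID = "eipalloc-0abc123"    # hypothetical: the site's public Elastic IP

    # Re-associating moves public traffic to the new instance almost immediately,
    # which is why a struggling backend simply "follows" the IP in an event like this.
    ec2.associate_address(
        InstanceId=NEW_INSTANCE_ID,
        AllocationId=EIP_ALLOCATION_ID,
        AllowReassociation=True,  # detach from the old instance if currently associated
    )

The speed of that switch is exactly why it told me nothing: the new machine inherited the same flood of traffic the moment the IP moved.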
My next thought was the obvious thing both machines shared: the database. I started looking at any metrics I could, and with nothing obvious and the site already failing to respond, restarting the database seemed a safe bet. After some comments from the fine folks in ##postgresql, it seemed possible that badly terminated transactions had been hanging processes, and I was advised to restart PostgreSQL, which is a disruptive action. When it finally cycled, my staging machine seemed fine and I deployed it, only to watch it start to suffer once again.
This was when I got a message that we had gotten the bump from Mike Arrington over at TechCrunch. Everything suddenly made sense, and dropping into the logs showed me a huge surge in traffic. There are things I could probably improve about our setup, but I'm mostly satisfied with its progress. Still, this surge was well beyond what the site was prepared for at the rate it was coming in, and it would be unreasonable to expect a site this size to scale that quickly for such a large and relatively short burst (a few hours).
In the end, my call is that the biggest problem was that I didn't have the information in front of me to see that it was the traffic, and not the system, causing the problem. Everything I did only made things worse, and my best course of action would have been to step back and cross my fingers. I'm now looking at short-term reports I can consult for a better overview of recent activity: traffic rates over the last hour and server error ratios that can tell me what's going on without spending too much time digging into it. The more time it takes to figure out what's going on, the more likely someone is to jump to a conclusion in an attempt to get a solution moving as quickly as possible.
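As a sketch of the kind of report I mean, assuming a combined-format access log at a hypothetical path (not necessarily how I'll end up building it):

    # Sketch: requests per minute over the last hour and the 5xx error ratio from an access log.
    # Assumes nginx/Apache "combined" log format; the path is a placeholder.
    import re
    from collections import Counter
    from datetime import datetime, timedelta, timezone

    LOG_PATH = "/var/log/nginx/access.log"  # hypothetical path
    LINE = re.compile(r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]*)" (?P<status>\d{3}) ')

    cutoff = datetime.now(timezone.utc) - timedelta(hours=1)
    per_minute = Counter()
    total = errors = 0

    with open(LOG_PATH) as log:
        for line in log:
            m = LINE.search(line)
            if not m:
                continue
            ts = datetime.strptime(m.group("ts"), "%d/%b/%Y:%H:%M:%S %z")
            if ts < cutoff:
                continue
            total += 1
            per_minute[ts.strftime("%H:%M")] += 1
            if m.group("status").startswith("5"):
                errors += 1

    print(f"requests in last hour: {total}")
    print(f"5xx ratio: {errors / total:.1%}" if total else "no traffic seen")
    for minute, count in sorted(per_minute.items()):
        print(f"{minute}  {count:>6}")

A one-glance readout like that would have told me "it's traffic, not the system" in the first five minutes instead of the last.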