
New Job, Fun Projects, and Amazon S3

I haven't posted in a while, but plenty has been going on, so I thought I'd write about some of the more interesting parts. I've recently begun a fairly regular contracting arrangement with an interesting company, though I'll have to stay a little vague on some aspects because of NDAs and such.

During one of my usual nights of helping the Pythoners of #python on irc.freenode.net, I discussed a project someone was trying to complete, and the long debate about the various routes that could be taken led to me being contracted for the job, which I've had fun with. I've been contracted to build a FUSE filesystem package that uses Amazon's S3 as its storage mechanism. S3 is a fun system to work with because of its simplicity and its challenging limitations. For example, all operations are redundant but non-atomic: the same data can be changed at the same time, and it's unpredictable how the changes will propagate across Amazon's redundant network. Mostly this hasn't been an issue, because you place almost the same trust in file locks on a local system anyway; the only real problems have been ensuring integrity within directory entries and file node chains.
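To make the integrity problem concrete, here is a minimal sketch of the sort of defense I mean; the names are mine for this post, not the package's actual API. A directory entry that carries a checksum of its own serialized listing lets a reader at least detect a torn or partially propagated update, even though it cannot prevent one:

    import hashlib
    import json

    def pack_listing(names):
        """Serialize a directory listing and prefix it with a checksum."""
        body = json.dumps(sorted(names)).encode('utf-8')
        return hashlib.sha1(body).hexdigest().encode('ascii') + b'\n' + body

    def unpack_listing(blob):
        """Refuse a listing whose checksum does not match its contents."""
        digest, _, body = blob.partition(b'\n')
        if hashlib.sha1(body).hexdigest().encode('ascii') != digest:
            raise ValueError('directory entry failed its integrity check')
        return json.loads(body)

Detection is the cheap half of the problem; deciding which concurrent writer wins is the part the network's propagation behavior makes interesting.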

This aspect of the work is to be released under the GPL upon completion, and hopefully I can factor out some of the things I've developed for this project that will be useful for other uses of S3. I'll try to factor out modules for the following features:
  • Classes that represent a type of data stored in an S3 entry (see the first sketch after this list)
  • Easy definition of meta-attributes, with coercers and defaults
  • Unique IDs generated for entries
  • A "sub-entry" concept, where one entry is owned by another
  • Caching of data both on disk and over memcache, with an open API for implementing other cache types, such as local-memory caches or even other web services (second sketch below)
  • Node entries, which can span data across multiple entries for more efficient (and cost-effective) reads and writes that do not involve the entire data buffer (third sketch below)
  • Test facilities for both BitBucket (a Python S3 access package I use) and Python-MemCached, which I use for offline testing. Both mirror all (read: most) of the functionality of the related projects, so code can be tested against them without actual network use (fourth sketch below)
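First, to give a flavor of the entry classes, meta-attributes, and unique IDs, a minimal sketch under names I've made up for this post, not the package's real API: a meta-attribute can be an ordinary Python descriptor that coerces on assignment and falls back to a default, and the unique ID is generated once per entry:

    import uuid

    class MetaAttribute(object):
        """Descriptor for one piece of entry metadata, with coercer and default."""
        def __init__(self, name, coerce=str, default=None):
            self.name = name
            self.coerce = coerce
            self.default = default
        def __get__(self, obj, objtype=None):
            if obj is None:
                return self
            return obj.__dict__.get(self.name, self.default)
        def __set__(self, obj, value):
            # Coerce on the way in, so reads are always the right type.
            obj.__dict__[self.name] = self.coerce(value)

    class Entry(object):
        """A type of data stored in an S3 entry."""
        content_type = MetaAttribute('content_type', default='application/octet-stream')
        size = MetaAttribute('size', coerce=int, default=0)
        def __init__(self):
            self.key = uuid.uuid4().hex  # unique ID generated for the entry

With this, entry.size = "42" stores the integer 42, and attributes that were never set read back as their defaults.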
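Second, the open caching API could be as small as a two-method backend interface that disk, memcache, local-memory, and web-service implementations all satisfy. A hypothetical sketch, with the dict-backed case as the trivial example:

    class CacheBackend(object):
        """Minimal interface any cache type implements."""
        def get(self, key):
            raise NotImplementedError
        def put(self, key, data):
            raise NotImplementedError

    class LocalMemoryCache(CacheBackend):
        """The simplest possible backend: a dict in process memory."""
        def __init__(self):
            self._store = {}
        def get(self, key):
            return self._store.get(key)
        def put(self, key, data):
            self._store[key] = data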
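Third, spanning a file's data across multiple entries means a read or write only touches the chunks it overlaps rather than the whole buffer, which is what saves both time and S3 transfer costs. The index arithmetic is the heart of it; a sketch assuming fixed-size chunks and hypothetical names:

    CHUNK_SIZE = 1024 * 1024  # one S3 entry per megabyte of file data

    def chunks_for_range(offset, length):
        """Yield (chunk_index, start, end) triples covering a byte range,
        so only the touched entries are fetched or rewritten."""
        first = offset // CHUNK_SIZE
        last = (offset + length - 1) // CHUNK_SIZE
        for index in range(first, last + 1):
            base = index * CHUNK_SIZE
            start = max(offset, base) - base
            end = min(offset + length, base + CHUNK_SIZE) - base
            yield index, start, end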
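Fourth, the test facilities follow the familiar mock pattern: an in-memory object that answers the same calls the real network client does. The method names below are illustrative only, not BitBucket's actual interface:

    class FakeBucket(object):
        """In-memory stand-in for an S3 bucket, for offline tests."""
        def __init__(self):
            self._objects = {}
        def put(self, key, data):
            self._objects[key] = data
        def get(self, key):
            if key not in self._objects:
                raise KeyError('no such key: %r' % key)
            return self._objects[key]
        def delete(self, key):
            self._objects.pop(key, None)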
My work on this project has led to the beginning of a long-term working relationship with the company, which I am very excited about. I can't talk about the specifics of the work I'll be doing until the company launches in a few months. As soon as that happens, I'll be blogging extensively about the aspects I can divulge, and about any additional software that might be released freely (I don't know yet if there will be any).

If you are interested, look forward to the S3 packages I'll be wrapping up this weekend. Hopefully, someone will find them useful.

Comments

Anonymous said…
you rock
