Tuesday, April 21, 2009

The Truth About Consumer Bandwidth Pricing

There's been a lot of noise made recently about Time Warner instituting bandwidth caps. Everyone was angry at Time Warner, while Time Warner claimed it's losing money because of a few people hogging all the bandwidth, that usage-based pricing is more fair and also necessary to pay for building out their networks, and that all of this BitTorrent traffic and streaming video is killing their networks and needs to be capped.

I have an inside perspective on this matter because when I was the Director of Product Management at BitTorrent, we often spoke with ISPs. We knew that Comcast was throttling BitTorrent traffic well before it made it into the news, and I flew down to Comcast headquarters in Philadelphia to discuss the situation. I was surprised when they told me that they had plenty of bandwidth and that BitTorrent wasn't anywhere close to crushing their network. Their problem was that they don't want to sell bandwidth, a commodity with a price racing to zero. They want to sell entertainment services, which have a higher profit margin. They are therefore threatened by online video because it competes with cable TV.

The consumer ISP strategy thus has a twofold purpose: raise the price of bandwidth, and at the same time make the Internet a less appealing way to watch video. Both of these purposes are accomplished by bandwidth caps. Additionally, the new pricing models make it complicated to determine exactly how much you're going to be paying for bandwidth, allowing the ISPs to raise prices covertly. If they were to just declare that prices were going up because they felt like it, people would be very angry indeed, and it might lead to government regulation of pricing.

In order to unravel the mystery of the new pricing models, I've made some graphs that show how much you will pay in dollars for a number of total gigabytes transferred in a month. I was very surprised by the results.

To start, here is a graph of a lot of different plans, such as various Time Warner plans, AT&T DSL, and the main 3G mobile carriers.

On the bottom is gigabytes and on the left is dollars. Yes, dollars. 300 GB would cost you $140,000 on AT&T 3G. You'll notice that only the 3G providers show up at all, everything else being squished into a single line on the bottom. This is because while Time Warner charges overages of $1/GB, Sprint is charging $50/GB, Verizon $280/GB, and AT&T a ridiculous $480/GB after you exceed the 5GB cap. Everyone is mad about the Time Warner caps, but it's really the 3G caps that are totally insane. Every iPhone user is on AT&T, so when Hulu for iPhone comes out it's going to be crazy.

So don't use more than 5G of 3G per month or else you're getting ripped off. Let's compare some ISPs just in the 1-5G range to see how they stack up.

Amazon S3 is included here at the bottom just to show how much more expensive consumer bandwidth is than hosting bandwidth. The bottom tier of Time Warner service is a clear winner here, followed by the original capper Comcast. 3G services are in the middle, with premium tier cable and DSL services losing. In this bandwidth bracket, you don't really get much benefit from upgrading your service.

Now let's look at ISP choices excluding 3G.

The lowest Time Warner tier wins again if you use little bandwidth, and then Comcast wins everything else up to 250G, where they have put a hard cap.

Now let's look in depth at just the Time Warner tiers.

The graph is interesting because Time Warner imposes an overage fee cap of $75. This causes the lowest tier to come out best for both low and high numbers of gigabytes. The lowest tier charges $15/month for 1GB and $2/GB for each additional GB, up to $75 in overages, meaning that your total bill is capped at $90. You therefore get unlimited bandwidth for $90 with that plan. Their highest tier plan, on the other hand, is $75 for 100 GB and then $1/GB after that, up to $75 in overage charges. You get unlimited bandwidth for $150 with this plan. So the lowest tier wins and the highest tier loses. The middle tiers only come into play for medium amounts of bandwidth.
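The capped-overage math is easy to get wrong by eyeball, so here's a small sketch of it in Python. The plan numbers are the ones quoted above; the function itself is mine, not Time Warner's:

```python
def monthly_bill(base, included_gb, overage_per_gb, usage_gb, overage_cap=75):
    """Base price plus per-GB overages, with total overage charges capped."""
    overage = max(0, usage_gb - included_gb) * overage_per_gb
    return base + min(overage, overage_cap)

# Lowest tier: $15 for 1 GB, then $2/GB, overages capped at $75.
print(monthly_bill(15, 1, 2, 1000))    # tops out at $90, i.e. de facto unlimited
# Highest tier: $75 for 100 GB, then $1/GB, same $75 overage cap.
print(monthly_bill(75, 100, 1, 1000))  # tops out at $150
```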

So, let's look at medium amounts of bandwidth where the multiple tiers come into play.

This graph shows a situation similar to the one pitched by Time Warner. There are multiple tiers and you get the best deal by choosing the right tier for the amount of bandwidth you use. However, note that the goal is not to avoid overages. The goal is to avoid having your overage charges cost more than the monthly charge of the next plan up. So while the lowest tier only includes 1GB/month, it's the best plan up to around 10GB/month. Similarly, the standard plan will be better than an upgrade up to 50GB/month. The highest tier is only good for people who use >80 GB/month. And Time Warner Business Class is, as shown on all of the graphs, always just a terrible deal.
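Picking the right tier is just minimizing the bill across all the plans. Here's a sketch of that: the lowest and highest tiers use the numbers quoted above, while the middle tier's numbers are hypothetical, made up purely for illustration:

```python
def monthly_bill(base, included_gb, overage_per_gb, usage_gb, overage_cap=75):
    """Base price plus per-GB overages, with total overage charges capped."""
    overage = max(0, usage_gb - included_gb) * overage_per_gb
    return base + min(overage, overage_cap)

plans = {
    "lowest":  (15, 1, 2),     # $15 for 1 GB, $2/GB over (from the post)
    "middle":  (40, 20, 1.5),  # hypothetical middle-tier numbers
    "highest": (75, 100, 1),   # $75 for 100 GB, $1/GB over (from the post)
}

def best_plan(usage_gb):
    """Return the cheapest plan name for a given monthly usage."""
    return min(plans, key=lambda name: monthly_bill(*plans[name], usage_gb))
```

Because of the overage cap, the lowest tier comes out cheapest again at very high usage, which is exactly the counterintuitive shape the graph shows.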

It was just discovered that AT&T DSL is implementing bandwidth caps. They have a different model because they don't have a cap on overage fees. That sounds like it would probably be a worse deal than Time Warner. Let's take a look, first at just the different AT&T DSL tiers.

This is the more classical model that you'd expect with overages. Since there are no caps on overage fees, you get the best deal by choosing a plan matched to your usage. If you guess incorrectly, you overpay. The ordering of plans from cheapest to most expensive becomes inverted from low usage to high usage.

Now let's compare the various AT&T DSL plans to the various Time Warner cable plans.

There are a lot of lines on this graph, but you only need to look at the bottom. The lowest tier of Time Warner again wins for low bandwidth. After that, successive AT&T DSL plans win. Although AT&T's pricing structure is worse, their actual prices are better than Time Warner's as long as you're good at guessing how much bandwidth you're going to use. If you're bad at guessing, only the lowest two tiers of Time Warner could ever possibly be better than AT&T DSL, and only for a small range of usage. So if you're bad at guessing your usage, your best bet is to get the highest tier of AT&T DSL.


I was surprised by the outcome of these charts. The Time Warner caps are not that big of a deal and the AT&T caps are even less of a big deal. What you really need to watch out for is the 3G caps. Those are just totally off the rails.

The best deal for consumer Internet is AT&T DSL, even with the caps and overage fees. If you know how much bandwidth you're going to use, buy the appropriate tier. If you don't know how much bandwidth you're going to use, you're safest buying the highest tier.

If you're going to go with Time Warner, the lower tiers are a better deal. Go with the lowest tier you can and only upgrade if your overage fees are costing you more than the next tier. Never buy the highest tier or business class, they are ripoffs.

3G is a terrible deal. If you use less than 5G a month, all the 3G providers are priced the same and are not a very good deal for Internet. Use the lowest tier of Time Warner instead. Under no circumstances use more than 5G of 3G in a month, or you will get ripped off big time.

Also, Hulu for iPhone is going to be a train wreck.

Friday, February 27, 2009

Diakonos: A Programmer's Text Editor in Ruby

A text editor (or for some an IDE) is the most important tool a programmer has, other than the programming language itself. Religious wars over editors are inevitable because people spend so much time with their editor. Some people flip-flop, but many people become both functionally and emotionally attached. No one wants to spend time learning new keybindings when they could be programming instead.

Personally, I use nano. This is not out of ignorance, mental damage, or a deep moral perversion as my friends that use emacs and vi insist. I want an editor which is small and quick to install. It must be available on all platforms and easy to install (if there's no Debian/Ubuntu package in the main repositories, forget it). I'm not going to mess around with configuring it. And I basically just don't like vi. So nano has been winning the war for my soul for many years. However, like all programmers, I dream of a better world. I wouldn't mind a slightly (or even somewhat) better editor, but everything I've ever tried lacked the beautiful simplicity of nano. With more features comes more hassle.

Then I found Diakonos. It's a console-based text editor (which I like because I ssh into my server and edit things as much as I edit them locally), and it's written in Ruby. It has the modern features, such as multiple buffers, syntax highlighting, and syntax-aware indentation. It's scriptable, either through the Ruby interface or through external programs (in any language) which are fed the old buffer on stdin and output new buffer contents on stdout.
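As a taste of the external-program scripting, here's what such a filter could look like: the editor pipes the current buffer to the program's stdin and replaces the buffer with whatever comes out on stdout. This particular filter is my own sketch, not something from the Diakonos docs; it strips trailing whitespace from every line:

```python
import sys

def strip_trailing(buffer_text):
    """Remove trailing whitespace from every line of the buffer."""
    return "\n".join(line.rstrip() for line in buffer_text.splitlines()) + "\n"

if __name__ == "__main__":
    # Old buffer contents arrive on stdin; new contents go out on stdout.
    sys.stdout.write(strip_trailing(sys.stdin.read()))
```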

Like all editors under my consideration, it has packages in the main repositories of both Debian and Ubuntu. It also has Windows and OS X binaries (also a Ruby gem for you Ruby guys). It's as quick and easy to install as nano, and though it has lots more features, they are not obtrusive. The keybindings are the "standard" Windows-style ones (ctrl-x cut, ctrl-c copy, ctrl-v paste). You can of course configure it to emacs or whatever style you want, but I am personally happy to use a similar set of keys across my editor and web browser.

I am particularly excited about finally having an editor that's not written in C. This is a personal issue. Many people like C, but I just think it's time for us to move on as a society. I have a T-shirt that says "I would code in C for love, but not for money." While you may love C, autoconf, and make, I am personally very excited about an editor both written in and scriptable in Ruby. It seems like a step towards the future. It's also nice to have a fresh codebase which doesn't inherit several decades of design decisions.

My apologies for insulting your favorite text editors and programming languages, my Internet friends. I meant no harm. Just check out Diakonos for a bit and see what you think. It has a feel which is both fresh and yet somehow also classic. A "modern classic" if you will. And it's fun. In a way I can't really articulate, it's just enjoyable to use. Also, the author is a really nice guy and the IRC channel isn't full of obnoxious jerks (#mathetes on freenode), just good folks like you and me, hacking on code. I'll see you there!

Friday, February 20, 2009

Startup Camp Austin, Feb 28th

Next Saturday, Feb 28th, from 1pm-6pm, is the second annual Startup Camp Austin!

Last year's Startup Camp Austin was pretty great. A lot has changed since then in the Austin Startup Scene. It's really quite booming. With events like SXSW Accelerator and the CapitalFactory application deadline coming up at the beginning of March, we decided that now was a good time to get together again and talk about the ongoing developments of interest to Austin startups.

There are still a few slots left, so if you'd like to do a presentation, pitch, or demo, or lead a roundtable discussion, sign up on the wiki and I'll save you a slot in the program. Also feel free to just add discussion topics and we can discuss whatever anyone feels like discussing.

Also, please RSVP on the Facebook event so that we know how much food to provide.

I have to say, I'm pretty excited about this camp. The first Startup Camp was kind of scary because we'd never put on an Unconference before and I had just moved back to Austin and started my own startup. I really wanted to help make Austin a great place for startups, but it was just one person's dream. Since then, things have become so exciting! There are lots of events for startups now, from SD2020 to SXSW Accelerator. There are several new experiments in funding going on, including a startup incubator and a startup organized as a coop. Coworking spaces and BarCamps have become hot items, sprouting up in Dallas, Houston, and San Antonio as well. So many of my friends have lost or quit their jobs due to the economic turmoil and instead of feeling down about it have decided that now is a great time to start a startup. It's really a very optimistic time for startup entrepreneurs as we see opportunities in every problem.

So if you're currently at a startup, are interested in starting one, or just curious about how things are going in the Austin startup community, come to the ACTLab next Saturday. It's located on the UT Campus in the Communications Building (CMB) on the 4th floor, in Studio 4B. The Communications Building is on the southeast corner of Dean Keaton and Guadalupe, across from Madam Mam's. There's a parking lot right across the street (south of Madam Mam's) which is usually $6 to park all day.

I hope to see you there!

Friday, January 30, 2009

Austin Gets Its Own Startup Incubator

I love having a startup in Austin. I think it's a great place to do a startup right now. At the Tech Happy Hour last night, one early stage investor likened Austin to a gasoline soaked pile of rags just waiting for a spark. Indeed!

For a while I've felt that the missing element in the Austin startup scene is an early stage, small investment startup incubator in the spirit of Y Combinator and TechStars. Austin is a great place to bootstrap, and angel and VC funding are available, but for many young entrepreneurs the best way to get started is with a startup incubator. You get to meet people with startup experience, you get to pitch, and you get some press. It's one of the best ways to get started, especially if you're on the engineering side and you need to meet people with business experience.

Austin finally has such a venture, and it's called Capital Factory. You can read their press release to get the sales pitch, but let me just break down the numbers for you. If you're one of the three companies picked, you get $20,000 for 5%, giving you a $380,000 valuation, which is comparable or slightly better than YC and TechStars in terms of valuation. There are of course many intangibles to compare between the various incubators, but it basically comes down to where you want to start your company: the Bay Area, Boulder, or Austin. For myself, I choose Austin!

They're also still looking for a few good investor-mentors, so if you want to help the Austin startup scene and you've got some time and money to invest, check them out. I can't wait until pitch day to see what new startups are started!

Friday, January 23, 2009

P2P Money with App Engine, OAuth, and QR Codes

In honor of National Service Day, I decided to take a day off from my regularly scheduled Ringlight hacking and work on some community service hacking. In Austin we have a complementary currency called the Austin Time Exchange Network (ATEN). There's a lot to say about complementary currency and its role in helping economies during a downturn. However, I want to delve mainly into the technical details of my hack, so if you're interested I recommend reading Bernard Lietaer. The basic idea is that you can pay people for their time in ATEN currency, denominated in hours, rather than dollars. This is quite good for situations where no one has dollars they want to spend, but they do have work they want to do and get done, such as the current economy. There's no shortage of needs or workers, only a shortage of money. So let's make our own money! Problem solved! You'll still use dollars to pay taxes, your mortgage, and Wal-Mart, but you can use ATEN hours to buy local goods and services from people in Austin that accept this currency.

The goal of this project, named Austin Time Machine (ATM) is to provide a means to withdraw electronic currency into a physical paper form (cash) and later deposit paper to an electronic account. This is particularly useful for the sorts of situations which are normally "cash only", for instance festivals where it's unreasonable to expect all of the booths to have computers and Internet. Since the paper currency is backed by a separate online currency (in this case OpenSourceCurrency.org), the ATM service doesn't need to manage things like account balance. It only needs to keep track of bill serial numbers and manage authentication to the "bank" so that it can transfer credits to and from user accounts.

So on to the technical details. The first interesting bit is that OpenSourceCurrency.org supports OAuth for authenticating users. Additionally, I implemented the whole service on App Engine, which is wonderful because I don't have to run it on my server or manage uptime. However, this meant that I had to port the Python OAuth library to use the App Engine API. In particular, I had to replace all of the use of httplib with App Engine's urlfetch service. This code will be useful to anyone attempting to authenticate to external services from inside an App Engine application. This app also provides a handy example of how to write an OAuth client. It's a little bit more complicated than it needs to be, but it's not that bad if you use an OAuth library to generate the signatures and such. It basically involves just POSTing some fields to a few URLs and providing callback URLs that the website will POST back to. You pass some tokens around this way and end up with a token which, when included in a call to whatever web service you're trying to access, will serve to authenticate you as acting on the behalf of the user.
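The library takes care of the signing, but it helps to see what that step actually does. Here's a sketch of OAuth 1.0's signature base string and HMAC-SHA1 signature in plain Python, written here for illustration rather than taken from the ported library:

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def _enc(value):
    # OAuth percent-encoding: escape everything except unreserved characters.
    return quote(str(value), safe="-._~")

def signature_base_string(method, url, params):
    """METHOD & encoded-URL & encoded, sorted key=value pairs."""
    pairs = sorted((_enc(k), _enc(v)) for k, v in params.items())
    normalized = "&".join("%s=%s" % pair for pair in pairs)
    return "&".join([method.upper(), _enc(url), _enc(normalized)])

def sign(base_string, consumer_secret, token_secret=""):
    """HMAC-SHA1 over the base string, keyed by the two secrets."""
    key = ("%s&%s" % (_enc(consumer_secret), _enc(token_secret))).encode()
    digest = hmac.new(key, base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()
```

The resulting signature gets attached to the request along with the token, which is what lets the service verify you're acting on the user's behalf.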

The next component of the app is the storage of serial numbers when you withdraw bills and verification of serial numbers when you deposit. Nothing particularly exciting here. I created an App Engine Model for each bill and save and access them using the standard App Engine ORM API. This is worth checking out if you haven't used App Engine before though because it's a simple example of how it works, and it's very different than SQL. Basically you need to assign a unique (string) key to each object and this is how you access them. The mechanisms you might expect from SQL, such as the UNIQUE keyword, are absent.
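To show the shape of that serial-number bookkeeping without the App Engine SDK, here's a sketch where a plain dict stands in for the datastore; in the real app each entry would be a Bill entity fetched by its unique string key:

```python
import uuid

# A dict stands in for the datastore: serial number -> bill record.
bills = {}

def withdraw(amount):
    """Issue a new bill and record its unique serial number."""
    serial = uuid.uuid4().hex  # a unique string key, as the datastore requires
    bills[serial] = {"amount": amount, "deposited": False}
    return serial

def deposit(serial):
    """Redeem a bill once; reject unknown or already-deposited serials."""
    bill = bills.get(serial)
    if bill is None or bill["deposited"]:
        return None
    bill["deposited"] = True
    return bill["amount"]
```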

With all of the nitty gritty storage and OAuth stuff taken care of, the bulk of the application is very simple. OpenSourceCurrency.org is a Rails app and so exposes a simple REST and JSON (or XML) API to do transactions. There are a couple of gaps in the API (from the perspective of this particular app) which I work around in this code. The API only lets you transfer money from the current user to a specified destination user, and you need the userid of the destination user. For withdrawal it's easy: I transfer money from the authenticated user to my own account, since I happen to know my userid. For deposit, I perform a tricky maneuver. I charge the user a 0.1 hour fee, transferring it from their account to mine just like in a withdrawal. The result of that call includes their userid in the JSON output. I then take that userid and have the ATM service log into my own account (specifying credentials via HTTP Auth, not OAuth) and transfer from my account to the account of the user, specified by their userid. A bit complicated! However, I'm working with Tom Brown, creator of the OpenSourceCurrency.org API, to create a simpler API.

Finally, once you've made a withdrawal, the bill needs to be generated so you can print it. This is currently done with just a little bit of HTML. A PDF export would be nice for printing multiple bills on one page, but for the prototype HTML was of course the fastest. The QR code generation turned out to be extremely simple because the Google Chart API recently added QR code support. So the QR code is just a single HTML img tag with a URL which will automatically generate a QR code. Nice!
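Building that img tag is literally one line of string construction. A sketch (the cht/chs/chl parameters are the Chart API's QR-code parameters; the function name and size default are my own):

```python
from urllib.parse import urlencode

def qr_img_tag(data, size=200):
    """Build an <img> tag whose src asks the Google Chart API for a QR code."""
    params = urlencode({"cht": "qr", "chs": "%dx%d" % (size, size), "chl": data})
    return '<img src="http://chart.apis.google.com/chart?%s" />' % params
```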

Feel free to play with all this stuff. Check out Tom's screencast on using the ATM, the live ATM site, and of course the source code (also available as a zip).