Friday, April 11, 2014

Haskell is Not PHP And That's Okay

It's interesting what people think programming is and how thinking in Haskell has changed that for me. I remember seeing something about how Mark Zuckerberg wanted to teach kids 0-based counting so they could be computer programmers.

Well guess what, I don't use 0-based counting anymore!

In Haskell, instead of x = arr[0], you say something like (x:xs) = arr. You could also say x = head arr, but I find the former idiom happens more often in my code. Getting the first element of a list is just one small use of destructuring, which is as pervasive in Haskell code as array indices are in PHP.
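For comparison, python grew a similar idiom with starred unpacking. Here's a quick sketch (pure illustration, not from any particular codebase):

```python
# Haskell's (x:xs) = arr splits a list into its head and tail.
# Python 3's starred unpacking is the closest analogue:
arr = [10, 20, 30]

x, *xs = arr      # x = 10, xs = [20, 30], like (x:xs) = arr
first = arr[0]    # the PHP/C-style 0-based index, for comparison

assert x == first == 10
assert xs == [20, 30]
```

Same result either way, but the unpacking version names both the head and the rest without any index arithmetic.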

I mention this not because how you get items out of a list in different languages is interesting in itself, but because I used to think that 0-based math is one of the things that makes you a programmer. It turns out I don't need it and I don't miss it.

Other things I am surprised that I don't miss are multi-assignment variables and loops.

I used to think that these were core elements of what it means to program. I used to think that languages without loops were just toys (and a lot of them still are).

Things I still use are if statements and function calls. These seem to be really quite fundamental to what it means to program a computer. There are languages which lack these. For instance, you can use SKI calculus to write computations, and that is a language without an if. However, you end up recreating some form of branching combinator if you try to write programs this way.
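To make "branching combinator" concrete, here's a sketch of Church-style booleans written as python lambdas. The encoding is mine, for illustration: TRUE behaves like SKI's K combinator and FALSE like K applied to I.

```python
# A Church-style boolean IS the branching combinator: it takes two
# branches and returns one of them. No if statement required.
TRUE  = lambda a: lambda b: a   # selects the "then" branch (like K)
FALSE = lambda a: lambda b: b   # selects the "else" branch (like K I)

# "if cond then x else y" becomes cond(x)(y):
def church_if(cond, then_val, else_val):
    return cond(then_val)(else_val)

assert church_if(TRUE, "yes", "no") == "yes"
assert church_if(FALSE, "yes", "no") == "no"
```

One caveat: in an eager language like python both branches are evaluated before the selection happens, which is exactly the kind of place where laziness changes the picture.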

Things I do miss are strict evaluation and dynamic types.

Lazy evaluation can do some cool things, but it can make your programs more confusing. This especially comes up when debugging things like why your program is running out of memory. Sometimes I am implementing a straightforward, standard eager-evaluation algorithm and it would be nice to not have to convert it into a lazy-friendly form. I think this might be more fundamentally difficult than converting iterative functions to recursive ones. At least it has been for me so far.
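Python makes a decent sandbox for feeling out the difference, since its generators are opt-in laziness. A rough sketch:

```python
import itertools

# An infinite "list" is fine under lazy evaluation (like [1..] in Haskell):
naturals = itertools.count(1)

# take 5 naturals -- only five elements are ever produced:
first_five = list(itertools.islice(naturals, 5))
assert first_five == [1, 2, 3, 4, 5]

# An eager version, like [n for n in some_huge_range], would have to
# materialize everything up front -- which is why some eager algorithms
# need rethinking before they play nicely with lazy evaluation.
```

In Haskell this is the default everywhere rather than opt-in, which is where both the cool tricks and the debugging confusion come from.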

Static typing is great and part of what makes Haskell work well, but sometimes it's a pain. Parsing data structures from dynamically typed languages, for instance JSON, is painful in Haskell. JSON has mixed-type arrays and Haskell doesn't, so it's hard to translate between the two, whereas in python you just call loads(jsondata) and you get a native python data structure. So fun and easy! Another place where dynamic typing is nice is function/operator overloading. Haskell has this for numbers with type classes. You can do a + b and as long as they are the same type it will work whether a and b are of type Integer or Float or whatever. However, Haskell also has three kinds of strings (String, ByteString, ByteString.Lazy) and I have to use three different append functions (append, ByteString.append, ByteString.Lazy.append) to concatenate them. Python has two string types (str and bytes) and + works on either (as well as on numbers).
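Here's the python side of that comparison, sketched with the standard json module:

```python
import json

# A mixed-type JSON array maps directly onto a python list:
data = json.loads('[1, "two", 3.0, null, {"nested": true}]')
assert data == [1, "two", 3.0, None, {"nested": True}]

# And + concatenates str with str, or bytes with bytes, with no
# per-type append function to remember:
assert "foo" + "bar" == "foobar"
assert b"foo" + b"bar" == b"foobar"
```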

In summary, I used to have some ideas about what programming was and what was important in a language. The more languages I learn, the more I realize that the things I thought were really important are not the important things after all. I used to care a lot about surface-level features like whether a language had lambdas or its support for objects. I've come to realize that when you really get down to it, high-level languages are just ways to glue together what are essentially chunks of C code underneath. I'm willing to try out different sets of abstractions until I find the one that gets me to my goal with minimum time and effort. Haskell has proven to be a very practical language for getting things done, but python is still ahead of the game for messing around with data in text files.

Wednesday, March 19, 2014

Why Does My Haskell Program Keep Running Out of Memory?

Recently I've been doing some "big data" or "data science" work. I define data as being "big" when there's too much of it for me to look at the actual data and instead I am forced to only look at the results of computations derived from the data. I consider this work to be "science" because people offer hypotheses about what the data might say based on their understanding of the real world situations that generated the data and then I can look at the results of computations on the data and provide evidence for or against these hypotheses. The hypotheses lets me know what kind of computations to write and the results inform the formulation of new hypotheses. Fun stuff!

I originally wrote all of these data processing computations in python. It seemed a sensible choice because all of the data was in the form of CSV files and slicing up text is easy and fun in python. Unfortunately, python was too slow, even when I rewrote my code to be multicore. I was spending a lot of time waiting around for results and it was slowing down the rate of scientific progress.

I rewrote all of the code in Haskell and it's much faster! Unfortunately, it also crashes. A lot! It keeps running out of memory. Maybe this has happened to you too. So let me tell you why my (and possibly your) Haskell program keeps running out of memory.

First of all, I am on Windows, and if you install Haskell Platform for Windows, it is still 32-bit. There is a 64-bit GHC for Windows, but you will have to install it manually, and who has time for that when there's science to do? If you're on Linux, and are lucky enough to have packages which aren't ancient, then you might have a 64-bit GHC already. If you're on Windows, you're out of luck until they release a new Haskell Platform, and it's been a while since the last one. Being stuck in 32-bit means that Haskell programs are limited to 4GB of memory. It's worse than that though, because of the way GHC compiles things, you're actually limited to only 2GB of memory. Actually it seems like it's more like 1.7GB. Pretty bad for big data work. It's too bad for me because I have a powerful high-memory multi-core desktop I built for crunching data and for some reason it's running Windows.

  • Tip #1 - run Haskell programs on Linux.

Of course, you shouldn't actually need to load the whole data set into memory. That's the beauty of lazy evaluation and garbage collection, you can create computations on streams of data and write your code like there's just one big in-memory list, but actually the whole file is not in memory at once. Right??? Well yes, but only if you do it right. There are a number of ways to do it wrong.
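Here's roughly what "streaming, not loading" looks like, sketched in python with a generator standing in for Haskell's lazy list (the simulated in-memory "file" is just for illustration):

```python
# A generator stands in for Haskell's lazy list: records are produced
# one at a time, so the whole "file" is never in memory at once.
def records(lines):
    for line in lines:
        yield int(line)

# Simulated file contents; a real program would iterate over open(path).
fake_file = (str(n) for n in range(1_000_000))

total = 0
for value in records(fake_file):
    total += value  # constant memory: no full list is ever built

assert total == 499999500000
```

This is the happy path. The ways to fall off it are what the next tips are about.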

The key to having lazy evaluation work for you is to only evaluate a lazy list once. If you evaluate it twice, the first evaluation will load the whole list into memory and it can't be garbage collected because it has to stay around for the second evaluation. There are two ways I can think of to deal with this. You can sequence your evaluations or you can fuse them. The classic example is computing an average. The naive way to compute an average is to first compute the sum of the elements of the list (first evaluation), then compute the length of the list (second evaluation), and then divide the sum by the length. Naively computing the average of a 2GB list of numbers will crash your Haskell program due to the dual evaluation interfering with garbage collection. In the sequential approach you could load the list from a file and compute the sum. Then load the list from a file again (new list) and compute the length. Each instance of the list will be garbage collected separately. This is inefficient, but may be your only option if you are using computations you didn't write. For instance, I use the stddev function from Math.Statistics and I don't want to write that myself, so sequential is the best option there. A more efficient approach is fusion, where you evaluate the list once and compute everything you want in a single recursive function. In the case of average, you could write a function which evaluates the list once, keeping both a running sum and a running length. At the end you have the sum and length and you can compute the average. If you are writing your own computations then this is a great option.
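The fused average, sketched in python so the single pass is explicit (same idea as the Haskell recursive function with a running sum and length):

```python
# Naive average: two passes over the data (sum, then length). If the
# data is a one-shot stream, the first pass forces it all into memory.
# Fused average: one pass keeps a running sum and a running count.
def fused_average(stream):
    total, count = 0, 0
    for x in stream:
        total += x
        count += 1
    return total / count

# Works even on a generator that can only be consumed once:
assert fused_average(x for x in [2, 4, 6, 8]) == 5.0
```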

  • Tip #2 - Only evaluate a large list once - sequence or fuse your computations
That works great if all of your data can be processed sequentially. However, some of my computations actually require that I load the whole dataset into memory at once or else come up with some fancy workarounds. An example of this sort of computation is sorting. It's much easier to sort a list if you have the whole list in memory at once. Of course this only works if your data is less than 1.7GB (or you're on Linux). I thought my data was small enough, but again the crashing, out of memory. Well it turns out that all the Haskell types you know and love are actually quite terrible from a memory use perspective. How big do you think a String is? An Integer? Too big, is the correct answer. Fortunately, there are more efficient alternatives that are less popular among Haskell code examples, such as ByteString (strict if you have small strings, lazy if you have big ones) and Int. Not only are these more memory efficient, but they can also be faster when your Haskell code is compiled down into C or assembly. Ints, for instance, can be loaded directly into CPU registers.
  • Tip #3 - Use Int instead of Integer and ByteString instead of String
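The boxed-versus-unboxed gap isn't unique to Haskell. Here's a rough python illustration of the same effect, with the array module playing the part of unboxed Ints (the exact byte counts are CPython-specific, so the code only asserts the ratio):

```python
import sys
from array import array

n = 10_000
boxed = list(range(n))         # a list of individually boxed int objects
packed = array('q', range(n))  # one flat buffer of 64-bit machine ints

boxed_bytes = sys.getsizeof(boxed) + sum(sys.getsizeof(i) for i in boxed)
packed_bytes = sys.getsizeof(packed)  # includes the backing buffer

# The flat representation wins by a wide margin:
assert packed_bytes < boxed_bytes / 3
```

The Haskell story is analogous: Integer and String carry per-element boxing overhead, while Int and ByteString are close to the machine representation.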
So those are my tips so far. I've managed to crunch some big datasets since making these changes to my Haskell code. There is great potential for Haskell in the world of big data, which is still ruled by Java tools. Sometimes though the convenience of Haskell being a high-level language hides some of the things that are happening under the hood, which can affect whether your program succeeds with startling speed or runs out of memory.

If anyone wants to build a high-performance cloud for Haskell-based big data processing, let me know, I have some cool ideas for how to make that work.

Wednesday, February 26, 2014

Bitcoin's Failure to Scale

The golden age of Bitcoin is over, but it's not because of the reason you'd think. The recent drop in the price of Bitcoin after the collapse of MtGox is irrelevant because as I've said before the price of Bitcoin is irrelevant. However, the MtGox collapse and the joint statement regarding this collapse from major players in the Bitcoin industry highlights that Bitcoin has taken a wrong turn and is now plowing ahead in the wrong direction.

Why is Bitcoin useful? If you've read my post "The Price of Bitcoins is Irrelevant" you know that I consider Bitcoin to be useful for one important function: transferring money online for the purpose of buying and selling goods and services. In the past, we've had decentralized currency exchange in the form of cash and centralized electronic currency exchange in the form of ACH transfers and credit card payments through banks, but we haven't had a good means for currency exchange which is both electronic and decentralized. Bitcoin is useful because it provides exactly these properties, or at least it used to.

The problem with the modern Bitcoin economy is that it is becoming less and less decentralized. Much of the Bitcoin exchange is now happening through a handful of services such as MtGox and Coinbase. They are essentially taking on the role of unregulated banks and are starting to act in much the same way that banks did before regulation. The collapse of MtGox is a tale as old as money, with an origin before the rise of modern banks. Before banks as we know them existed, there were goldsmiths that would hold onto your gold and other valuables while you were off on the crusades. They provided physical security for your physical wealth in exchange for a fee. You got a paper receipt as proof of your deposit of valuables. Of course eventually someone showed up to get his gold back and found out that his piece of paper was worthless because the king had raided the vaults to finance his war efforts.

The problem with banks arises when they replace your actual money with virtual money in the form of an account balance. This is a promise from the bank that they will give you back an equal amount of money as you gave them to hold. Of course, a promise is only worth anything if it's fulfilled. An account balance denominated in Bitcoins is no better than a handwritten IOU. There's no way to know whether the vaults are in fact empty.

It's not necessary for Bitcoin companies to implement their services this way, by converting your actual Bitcoin assets into account balances. Bitcoin holdings are independently verifiable by examining the blockchain. Therefore the responsible way to operate is for Bitcoin companies to merely act as proxies. Rather than running the Bitcoin client yourself, a company such as Coinbase can run it for you, providing an easy to use interface, dollar/Bitcoin exchange services, and secure backups of your private keys. However, the majority of Bitcoin services don't operate this way. They do not actually keep your Bitcoins for you, instead the Bitcoins you deposit or receive as payment go into that company's account and in exchange you only get a promise. They can at any time freeze your account, become insolvent, or otherwise break their promise and thereby steal your money. As an example of this problem, look at the Coinbase et al. joint statement in the part where it calls for Bitcoin companies to have "clear policies to not use customer assets for proprietary trading or for margin loans in leveraged trading". The fact that they even have the option to do this means that Bitcoin has failed to live up to its potential. A decentralized currency shouldn't have these problems or else we're just using banks again, unregulated banks, prone to all of the failures and abuses we've come to know and fear.

You might argue that this has nothing to do with Bitcoin. You can still run the client and be fully decentralized. People are free to build these centralized bank-like services on top of Bitcoin and other people are free to ignore them. However, there's a reason that people use services like MtGox and Coinbase. Bitcoin has some flaws which make it a usability nightmare and the centralized services fix those flaws. Let's discuss some aspects of the Bitcoin design and why they failed to scale:
  • Mining as the means of issuance
  • Requiring the full transaction history to make transactions
  • Fluctuating exchange rate
Mining as the means of issuance is one of the key innovations of Bitcoin and it worked quite well in the early days to ensure fairness in a fully decentralized way. However, as mining has scaled, it has failed to remain decentralized. Mining power is now concentrated in just a few mining pools. This is a direct effect of the way mining is done, with the difficulty being set by the hashrate. As the difficulty increases, the ability for individuals to successfully mine declines, forcing consolidation into pools. To be fair, the founders of Bitcoin could not have anticipated dedicated ASICs for mining. In the early days, the difficulty was discussed as something which would go up and down over time, not something that would go forever upwards.

Requiring the full transaction history to make transactions is a straightforward scaling problem. The size of the transaction history grows over time with the number of transactions. For existing clients, dealing with this is just a matter of storage space and keeping up with new transactions. For new clients, the entire history must be downloaded, which delays their introduction into the network. This has also caused recentralization. In the early days of Bitcoin, everyone ran the client, in fact there was no other option. This was fully decentralized, the way Bitcoin was meant to be used. More and more users are migrating to Bitcoin services which manage the transaction history for you. The signup for these can be instant and they are especially useful for mobile users that don't have sufficient resources to run a full Bitcoin client all the time. This once again re-centralized Bitcoin use.

In the early days of Bitcoin, the exchange rate was low but stable. Bitcoins were often obtained through mining instead of purchase as mining was something anyone could do. The exchange rate was not something to worry too much about as it changed slowly, and mostly upward. As the interest in Bitcoin grew, the volatility of the exchange rate increased to the point that it is a significant consideration for customers and vendors that would like to transact using Bitcoins. This has led to a desire for holding account balances in dollars. Services like Coinbase and Bitpay will let you transact entirely in dollars. This is not in itself a bad thing, but it means that once again you have to use a centralized service as the Bitcoin client has no way of converting your Bitcoins into dollars for storage.

So three aspects of the Bitcoin design that some would say are integral to its character as a cryptocurrency have all failed to maintain decentralization as Bitcoin has scaled. I argue that in fact these characteristics create pressure to centralize at scale. This is very bad for Bitcoin as it means that as it scales it will lose more and more of its advantage over traditional online payment methods.

Here is my three-point plan for getting back to decentralized cryptocurrencies:
  • Don't use services that give you an account balance instead of holding your actual Bitcoins
    • Blockchain seems like the only viable choice right now
    • In the past I have supported Coinbase, but unfortunately I must suggest moving off of it
  • Build services that maintain the decentralized operation of Bitcoin
    • Most of the services provided by companies like MtGox could be offered without account balances, storing actual Bitcoins for users
    • Bitcoin clients in the cloud are a good compromise
    • These services should be independently auditable by looking at the Blockchain
  • Build a new cryptocurrency without these scaling problems
    • Mining was a cool idea, but it must be replaced
    • Clients should be able to connect quickly without the full transaction history
    • A stable exchange rate
    • While we're at it, no transaction malleability
Let me know if you're interested in working on these things with me. I have several ideas for experiments that I think might be good to test the water.

Friday, February 14, 2014

The Price of Bitcoins is Irrelevant

Much of the press coverage and discussion of Bitcoin has focused on the price of a Bitcoin, which has fluctuated greatly. It was $15 when I first started getting interested in Bitcoins. I had previously done some mining when Bitcoins were worth about $0.01 each, but I found the whole user experience at the time to be unusable. The $15 price point was when I discovered Coinbase and determined that maybe one day it would be feasible to actually exchange Bitcoins for goods and services. Since then, the price has gone up exponentially and this has caused a lot of emotion: excitement from speculators, bitterness from people that missed out, disdain from people calling Bitcoins beanie babies for nerds, even outright hatred from Charles Stross. Suddenly everyone is asking me about Bitcoins. My aunt even asked me about them at Thanksgiving. Now that the price has gone down somewhat from its high at around $1000 per Bitcoin, people are proclaiming that this heralds the end of Bitcoins, that they knew all along it was a fad, and that they were smart for not investing. Every fluctuation in the exchange rate is viewed as a portent which confirms the feelings of the commentator.

I can see why people like to talk about the price of Bitcoins, especially the press. It's an indicator that's easy to track and easy to visualize. The swooping curves, either upwards or downwards, are visually engaging. It's fun to talk about Bitcoin millionaires, and it's fun to talk in a schadenfreude sense about people losing their life savings in foolish Bitcoin investments. It's all very exciting and makes for good entertainment news.

I have a different perspective, and it is that the price of bitcoins is irrelevant. The focus on price is due to a misunderstanding of what Bitcoins are, what they're good for, and why they're interesting. Let me break it down for you.

What is money?

Money is a:
  1. unit of measurement
  2. store of value
  3. vehicle for speculation
  4. means of exchange
A given type of money can be any or all of these. People think Bitcoin is all of them and this is the root of the confusion as Bitcoin is bad at 1&2 while being good at 3&4. Therefore some people think Bitcoin is "good money" and some people think it's "bad money". It's both!

Unit of Measurement
We use money as a unit of measurement every day. When you say things like "$X is too much for a cup of coffee" or "My time is worth $X an hour", you are measuring the value of things in terms of dollars. A common misconception people have about Bitcoin is that Bitcoin is a unit of measurement and that something would cost, for instance, 1 Bitcoin. In actuality, when you see a price in Bitcoins it is calculated dynamically from a price in $. Some Bitcoin payment processing services like Bitpay will do this for you automatically based on the current exchange rate. Otherwise the vendor can calculate prices daily based on the current exchange rate. So prices are actually measured in dollars and just displayed in Bitcoins. The reason is that vendors have to spend dollars to acquire the products they sell you. Having a fixed cost in dollars to acquire goods and then selling them at a fixed price in Bitcoins while the dollar-Bitcoin exchange rate fluctuates is a nonsensical situation for vendors. You can hypothesize about a world in which vendors buy their stock of products with Bitcoins and then sell them for Bitcoins, but this is an imaginary world and not the one we are currently operating in. We do not live in a Bitcoin-based economy. We live in a dollar-based economy where Bitcoin only fills a small link in that chain between the customer and the vendor. Therefore, at the present time, Bitcoin is not a good unit of measurement. Posting prices in Bitcoin is mostly just a marketing tactic to let people know you accept Bitcoins.
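The dynamic price calculation is trivial; here it is sketched in python with made-up numbers (both exchange rates are invented for illustration):

```python
# Prices are fixed in dollars and converted to Bitcoin at sale time.
def btc_price(usd_price, usd_per_btc):
    return usd_price / usd_per_btc

coffee_usd = 4.00
rate = 500.0  # hypothetical dollars per Bitcoin

assert round(btc_price(coffee_usd, rate), 6) == 0.008

# Tomorrow the rate moves, the dollar price stays put, and the
# displayed Bitcoin price is simply recomputed:
assert round(btc_price(coffee_usd, 400.0), 6) == 0.01
```

The dollar amount is the real price; the Bitcoin amount is just its current shadow.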

Store of Value
Bitcoin is, simply put, a terrible store of value because its value is measured in dollars (see above). Obviously if the dollar value decreases then that's bad, but a store of value which increases in value is also not a very good store of value. A good store of value maintains its value (in dollars) consistently over time. Since the price of Bitcoins in dollars is determined by a market-based exchange rate, it is a poor store of value. Of course dollars aren't a particularly good store of value, even when they are stored in banks, due to inflation. Some national currencies might be an even worse store of value than Bitcoins if they are undergoing hyperinflation. However, if the choice is between Bitcoins and dollars, dollars are a superior store of value with significantly less fluctuation in value in the short term and a good track record for holding their value in the long term. The best store of value is a diversified investment portfolio. If you want to put some Bitcoins in the mix that's fine, but often when people buy Bitcoins they buy too many Bitcoins and are not sufficiently diversified. Consider mutual funds, real estate, and small business investments along with high-risk speculative investments such as Bitcoins and Internet startup stock options.

Vehicle for Speculation
When discussing what money is good for, people sometimes forget to include that it is a vehicle for speculation. This is true of all currencies, including dollars, because of currency exchange markets. Bitcoins are an excellent vehicle for speculation. Unlike the currency exchange markets, the Bitcoin-dollar exchange has a low barrier to entry and low overhead. You can start your exchange rate speculation today with only $1 of starting capital. The high volatility of the exchange rate offers many opportunities to buy low and sell high. With the multiple exchange markets there are also ample opportunities for arbitrage. It's a day trader's dream. Of course I'm not saying that Bitcoin is a good speculative investment or that you will make money playing the market. Successful speculation involves guessing when the prices are going to go up and when they are going to go down, which is where the fun and the risk come in. So while Bitcoin is a fun and easy way to speculate on currency exchange markets, the actual price of Bitcoin is unimportant to speculation. All that matters is that the price keeps going up and down with sufficient frequency that you have opportunities to place bets on which direction it's going to go.

It's important to understand the difference between speculation and other types of investment. Long-term investments are like farming. With farming, seeds cost less than crops, so you invest in buying the seeds with the belief that if you wait a while the seeds will grow into crops and you can sell them for more than you put in. Value is created with time and effort. Businesses do this as well, so when you invest in a business you are buying a share of the larger value that's going to be created in the future. Speculative investment is more like betting on the result of a coin flip. You buy at a certain price hoping that the price will go up rather than down. Unlike the crops produced by farming or the value produced by businesses, Bitcoins do not naturally become more valuable over time. The rise in the price of Bitcoins has been based entirely on fluctuations in demand, which makes Bitcoins a speculative investment. So when people compare Bitcoins to beanie babies or tulip bulbs, there is a meaningful parallel there when speaking of Bitcoins as an investment. People are going to get rich and people are going to lose their shirts. That's how gambling works. While fun and perhaps the most discussed monetary feature of Bitcoin, I consider speculation to be the least interesting aspect of Bitcoin. The real value of Bitcoins is as a means of exchange.

Means of Exchange
Where Bitcoin really shines is as a means of exchange, even though for some reason the press never seems to cover this aspect of Bitcoin. The great feature of Bitcoin is that you can buy things with it! Also, you can sell things in exchange for Bitcoins! Buying and selling things is perhaps the oldest feature associated with money and a very handy feature indeed. Cash is a form of money that provides this feature but also has some downsides, particularly for Internet sales. Credit and debit cards work for Internet sales, but also have some undesirable features which are actually pretty weird if you think about them. Bitcoin offers an improvement over credit and debit cards for buying and selling goods and services, both in person and over the Internet.

Here are some great features of Bitcoin as compared to credit and debit cards:

  • Security - The customer only authorizes a specific transaction with the merchant, so the merchant can't steal from the customer or leak information that would allow hackers to steal from the customer.
  • Privacy - The customer doesn't provide any private information to the merchant such as their home address. Due to the increased security, this private information is not necessary to verify transactions.
  • Cost - Credit card processing is expensive for the merchant and this is reflected in higher prices for the customers. All transactions carry what is essentially a sales tax, but instead of being used to build schools and roads it just adds to the profit of the banks. Merchants have to pay swipe fees, a percentage of the sale, monthly fees, setup fees, and monthly minimums they must pay even if they end up making no sales. Bitcoin is much cheaper for the merchant as there are only per-transaction fees and they are comparatively very low. Think about this: why do credit card transactions have percentage-based fees when a digital currency transaction requires the same amount of work for the processor whether it's for $1 or $1000?
  • Ease of use - Bitcoin is the easiest way to accept money on the Internet. No special equipment is required and you don't need a credit card processor or even a bank account. You can start accepting Bitcoins as payment right now for no cost. The easiest way is to set up a Coinbase account. It takes a couple of minutes and it will provide you with some HTML code you can put on your website to start accepting payments. It's really that easy!
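To put rough numbers on the fee comparison: the 2.9% + $0.30 card rate and the flat $0.05 digital-currency fee below are assumptions for illustration, not anyone's actual published pricing.

```python
# Illustrative numbers only: both fee schedules are assumed, not quoted.
def card_fee(sale_usd, pct=0.029, flat=0.30):
    return sale_usd * pct + flat

def flat_fee(sale_usd, fee=0.05):
    return fee  # same work for the processor regardless of sale size

# The card fee scales with the sale; the flat fee doesn't.
assert round(card_fee(10), 2) == 0.59
assert round(card_fee(1000), 2) == 29.3
assert flat_fee(10) == flat_fee(1000) == 0.05
```

On a $1000 sale the percentage-based fee is hundreds of times the flat fee, for the same amount of processing work.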
So Bitcoin is a terrible form of money for all uses except for buying and selling things. For buying and selling things it's great and really quite revolutionary. Here's the thing about using Bitcoins for buying and selling goods and services: the price of Bitcoins is irrelevant. There is no reason to hold onto Bitcoins because they are a terrible store of value and they're a terrible investment unless you just like to gamble. Vendors do not price in Bitcoins (although they may display prices in Bitcoins) because we do not live in a Bitcoin-based economy. So if you want to buy something with Bitcoins, you purchase just the exact number of Bitcoins you need to match the price in dollars at the current exchange rate and send them to the merchant. The merchant then converts those Bitcoins immediately into dollars and deposits them in a bank account. The Bitcoin-dollar exchange rate only matters for the duration of the transaction, a short enough time period that the price is stable. Services for merchants like Coinbase and Bitpay automate this whole process for you so that you can use Bitcoin as a medium of exchange but only ever deal with dollars on either side.

So if the price of Bitcoins is irrelevant, how can we track the rise of Bitcoin and tell how adoption is going? The real measure of value is in the amount of goods and services being transacted using Bitcoins. Every time a new vendor starts accepting Bitcoins, that's when the real value of Bitcoins goes up. Unfortunately, there's not a handy chart of this, so it will probably never be reported on by the press. However, if you're in Austin for SXSW, hit me up and I'll show you where you can buy tacos with your Bitcoins.

Saturday, February 9, 2013

Space Party: Space Captain

My game studio, Hot Trouble, is working on a local co-op video game called Space Party. It's inspired by Artemis, Space Team, FTL, and Puzzle Pirates. In this game you and your friends all take on different roles crewing a spaceship. Each role has its own minigame that you have to play to do your job and keep the spaceship running.

We're releasing each of the minigames as a standalone game as part of this initiative. The first one is out now and it's called Space Captain. You pilot a ship around different sectors of the galaxy looking for an Earth-like planet to colonize. Watch out for the other planets though, as they're inhabited by hostile aliens that will chase you down and destroy your ship. You don't have any weapons, so your only option is to run.

Friday, January 25, 2013

JSON: It's Time to Move On

I love JSON. I love it because it's not XML. I used to think XML was a pretty good idea compared to unpacking structs, but the more it was used for generic tasks like RPC and config file formats, the more it became clear that it was really only suitable for documents. This makes sense, as that's what it was designed to do. XML was being used to represent data structures, and the problem with that is that there is a mismatch between what XML is good at expressing and the sort of data structures you generally want to encode for computational tasks.

JSON is obviously a better choice for a number of common data types and structures such as floats, strings, maps, and lists. The syntax is easier to read and more concise for encoding these types. More importantly, however, is that there is a clear mapping between the data structures and their encoding. This was something you had to invent in XML or use one of a number of incompatible standards, such that XML became a proliferation of different languages speaking about the same things.

JSON has served us well, but much like XML, as it's been used for more and more things, its shortcomings are becoming apparent. JSON suffers from essentially the same problem as XML: a lack of universal mappings for common items that need to be encoded and decoded consistently.

The missing type that most commonly causes me trouble with JSON is byte strings. Javascript only has one string type, while other languages often have two: one for byte strings and one for unicode strings. To be honest, I'm not totally sure whether Javascript strings are supposed to be unicode strings. String literals can include unicode escape sequences, but I'm not clear on whether you can have pure byte strings (i.e. with invalid unicode sequences), and I don't know, for instance, whether String.charAt(x) counts bytes or unicode characters. In any case, most JSON encoders assume all strings to be unicode. In practice, then, JSON has only a unicode string type and does not support byte strings.
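Here's a quick sketch of the problem, using Python's standard json module as a stand-in for a typical JSON encoder (the exact error varies by language, but rejection is the norm):

```python
import json

# Typical JSON encoders only accept unicode text; a raw byte string
# (here, Python's bytes type) is rejected outright.
try:
    json.dumps({"payload": b"\xff\x00raw bytes"})
    rejected = False
except TypeError:  # "Object of type bytes is not JSON serializable"
    rejected = True
```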

Many applications, however, have byte strings. The most common solution is to base64-encode your byte strings into ASCII and store the results in JSON as unicode strings. In addition to being slower, this adds semantic complexity: both the sender and receiver of the JSON now need to know where the base64-encoded strings are in the nested JSON data structure so that they can be converted between byte strings and base64.
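A minimal sketch of that workflow in Python (the "avatar" field name is just an example; the real point is that both sides need out-of-band agreement about which fields hold base64):

```python
import base64
import json

# Both sides must agree, outside the protocol, that "avatar" is base64.
record = {"name": "taco", "avatar": b"\x89PNG\x00\xff"}

# Sender: base64-encode the byte string before JSON encoding.
wire = json.dumps({
    "name": record["name"],
    "avatar": base64.b64encode(record["avatar"]).decode("ascii"),
})

# Receiver: decode, knowing which field holds base64 data.
decoded = json.loads(wire)
decoded["avatar"] = base64.b64decode(decoded["avatar"])
assert decoded["avatar"] == record["avatar"]
```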

This has caused people to invent their own protocols on top of or around JSON. For instance, you can tag every string as to whether it needs to be base64 decoded or not. Another solution is to remove all byte strings from the JSON and instead include tagged offsets. The binary data can then be appended to the end of the JSON data as a packed binary blob and the offsets used to extract individual byte strings. A very simple solution I've seen is to encode the whole data structure using a binary-friendly format such as BSON or MessagePack, base64 encode the entire result, and send it as a single JSON string.
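Here's a rough Python sketch of the first approach, tagging byte strings with a wrapper object (the "__bytes__" tag name is invented for illustration; any convention works as long as both ends agree on it):

```python
import base64
import json

def tag_bytes(obj):
    # Recursively wrap byte strings in a tagged object so the decoder
    # can find them without out-of-band knowledge of the schema.
    if isinstance(obj, bytes):
        return {"__bytes__": base64.b64encode(obj).decode("ascii")}
    if isinstance(obj, dict):
        return {k: tag_bytes(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [tag_bytes(v) for v in obj]
    return obj

def untag_bytes(obj):
    # Caveat: a legitimate map whose only key is "__bytes__" would be
    # misread as a tag, which is exactly the kind of wart these ad hoc
    # protocols accumulate.
    if isinstance(obj, dict):
        if set(obj) == {"__bytes__"}:
            return base64.b64decode(obj["__bytes__"])
        return {k: untag_bytes(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [untag_bytes(v) for v in obj]
    return obj

data = {"id": 7, "blobs": [b"\x00\x01", b"hello"]}
roundtrip = untag_bytes(json.loads(json.dumps(tag_bytes(data))))
assert roundtrip == data
```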

The advantage to building something on top of or around JSON is that the encoder and decoder do all of the work of analyzing the data structure and patching incompatibilities with standard JSON. The disadvantage is that now you're using a nonstandard protocol which is going to need to be implemented for both the sender and receiver, for every language you want to use.

The best solution overall is to recognize the limitations of JSON and decide on a new protocol which fixes them. There are several alternatives to JSON already, but they focus more on the efficiency of encoding and decoding than on the more fundamental semantic mismatch issues. Of the binary formats I've looked at (BSON, BJSON, MessagePack), only BSON has separate data types for unicode strings and byte strings. I'm not specifically advocating BSON, but at least they have the right idea on that front.

This new protocol doesn't even necessarily need to be a binary protocol. It just needs to support byte strings as a semantic type. In the end, everything needs to be JSON-compatible in order to be browser-compatible, so building something on top of JSON would probably be a fine solution. People are already doing this, as I mentioned above. The next step is to give it a name and release it on github so that everyone can use it and start adding support for more languages.

Here is my minimum feature list for a new encoding:
  • It should be JSON-style where you just give it a data structure and it serializes it, and you give it a string and it deserializes it into a data structure. (As opposed to Protobuf/Thrift style with schemas)
  • Support for all the JSON datatypes - string, float, map, list, boolean, null
  • Add support for byte strings in addition to unicode strings
  • Add support for integers in addition to floats
  • Add support for dates
  • Browser-compatible, which probably means encoded as JSON between the client and server
Nice-to-have optional features:
  • Sets as well as lists
  • Ordered maps as well as unordered maps
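As an example of why dates make the list: stock JSON encoders reject them outright, and the usual workaround (an ISO-8601 string via an encoder hook, sketched here in Python) loses the type on the way back, so the receiver needs out-of-band knowledge yet again:

```python
import json
from datetime import datetime, timezone

def encode_default(obj):
    # Fallback hook: render datetimes as ISO-8601 strings, since JSON
    # has no date type of its own.
    if isinstance(obj, datetime):
        return obj.isoformat()
    raise TypeError(f"not JSON serializable: {obj!r}")

doc = {"created": datetime(2013, 1, 25, tzinfo=timezone.utc)}
wire = json.dumps(doc, default=encode_default)

# The receiver gets back a plain string, not a date; it has to know
# that this particular field should be parsed as one.
assert json.loads(wire)["created"] == "2013-01-25T00:00:00+00:00"
```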
In the meantime, I've switched to using BSON when not in browsers and I'm still using JSON in the browser. This is not a good solution, but it's the best available at the moment that doesn't require inventing a custom protocol.

Monday, October 1, 2012

Adventure Time Game Jam

I was recently fortunate enough to participate in the Adventure Time Game Jam, sponsored by Fantastic Arcade. They managed to get licensing rights from Pendleton Ward and Cartoon Network to use Adventure Time characters in games, under the condition that we could only distribute our games through the game jam site, and that Cartoon Network could post the ones they like on their own site.

There were about 700 participants, and approximately 100 games were produced. The winning game was by indie studio Vlambeer. It was such a great game too!

My own team consisted of myself as programmer, Corie Johnson as UI/UX/graphic designer, and Celine Suarez as voice actress and graphic artist. Corie also recorded the opening theme song and composed an original rap which she performed for the ending screen.

It was a unique experience. The game jam took place in an abandoned yoga studio next to the Alamo Drafthouse South Lamar. When we first arrived, there were no chairs. Our Internet was stolen from the Drafthouse. There was a man in the corner with an Einstein's Arcade t-shirt making ethernet cables, and each time he finished one, one more person got to get online. In another corner, Vlambeer were sitting on the ground playing Infinite Swat with xbox controllers on a laptop.

For some reason pizza and beer kept arriving from unknown origins for 48 hours. All of the audio was recorded on iPhones in the shower at the space where we were doing the game jam. The ending rap was composed and the main theme recorded in the car driving to and from the space. There was no time to waste on second guessing decisions as the clock was constantly ticking. In the end I think we had one of the most finished games. You can download it from the site. Also check out how it was mentioned in the top 8 coolest games from the jam on Wired!

For me it was great working with such a talented team. I basically just hacked code nonstop. I did the whole game in KineticJS, which is a great HTML5 graphics framework, and I used Buzz for the sound. These libraries saved me a lot of time and I learned a lot about the affordances and limitations of HTML5 games.