Blog Archives

Water Heater Cost / Payback Calculator

For the last few months my partner and I have been trying to decide on a new water heater. After moving into our new place, we realized that the existing electric tank water heater wasn’t working right: the temperature of our showers steadily got colder. It was suggested that one of the heating elements was busted, but I wasn’t interested in getting it repaired since the heater was already well past the expected lifetime of an electric heater. However, there are a lot of choices for a replacement. Another electric tank water heater would be cheap; a gas tank heater would be cheaper to run but would require running a gas line; and tankless water heaters are much more expensive but are cheaper to operate and don’t have to keep a whole tank of water heated all the time for the few times you use it.

There are a number of ways to figure out how the cost of purchase and installation balances against the cost of operation over time. You can always build your own Excel spreadsheet, or you can use calculators like this one from energy.gov. However, all the web payback calculators I’ve seen have clunky 90s interfaces, don’t take all the variables into account, and most importantly, don’t let you compare multiple types of heaters at the same time. So, like any good software developer, I built my own.

My water heater calculator is based on the same calculations used on the Federal Energy Management Program site, with the addition of inputs for your hot and cold water temperature. It’s also more flexible about how you enter your water usage. But the best part is that you can enter as many different water heaters as you want and they’ll all be graphed against each other, taking into account the lifetime of the unit. Get multiple bids, try different models, compare gas and electric. By displaying them as a graph of total cost over time, you can see where each heater breaks even with each other, and how much savings you’re getting by the end.

As a bonus, the calculator will also calculate how much you may be able to claim as part of the Energy Star Federal Tax Credit program. It’s smart enough to know the rules about the credits (gas heaters ≥ 0.82 efficiency only, 30% of total cost up to $1500), and you can choose not to use the rebate if you’ve already used it up this year or don’t plan on applying it to your heater.
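
For the curious, the gas-heater rule boils down to something like this (a minimal JavaScript sketch of the logic described above - the names are mine, not the calculator’s actual code):

// Sketch of the credit rule for a gas heater: it needs an efficiency (energy
// factor) of at least 0.82 to qualify, and the credit is 30% of the total
// cost, capped at $1500.
function energyStarCredit(totalCost, efficiency) {
  if (efficiency < 0.82) {
    return 0;
  }
  return Math.min(0.30 * totalCost, 1500);
}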

You can get started with the calculator by filling in values for your water usage and resource costs, or accept the defaults. Then add as many heaters as you like, entering in the cost for purchase and installation, the Energy Factor (which should be in the documentation for the heater), and the estimated lifetime of the heater. The more accurate you can make the numbers, the better your cost projection will be. Then check out the graph to see what your total expenditure will be after every year. If you’re comparing a new heater with the option of keeping your existing heater, just set the Cost to $0 and reduce the lifetime to how long you expect your existing heater to last.
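
To give a rough idea of the math behind the graph, here’s a simplified sketch of a cost-over-time projection for an electric heater (my own approximation, not the calculator’s actual code - it uses the usual rule of thumb that a gallon of water weighs about 8.34 lb, one BTU heats a pound of water by 1°F, and a kWh is about 3412 BTU):

// Approximate annual electricity use for heating water, in kWh.
function annualKWh(gallonsPerDay, hotTempF, coldTempF, energyFactor) {
  var btuPerYear = gallonsPerDay * 365 * 8.34 * (hotTempF - coldTempF) / energyFactor;
  return btuPerYear / 3412;
}

// Cumulative cost after each year, buying a replacement unit when the lifetime runs out.
function costOverTime(heater, dollarsPerKWh, gallonsPerDay, hotTempF, coldTempF, years) {
  var operating = annualKWh(gallonsPerDay, hotTempF, coldTempF, heater.energyFactor) * dollarsPerKWh;
  var totals = [];
  var total = 0;
  for (var year = 1; year <= years; year++) {
    if ((year - 1) % heater.lifetime === 0) {
      total += heater.cost; // purchase plus installation
    }
    total += operating;
    totals.push(total);
  }
  return totals;
}

// Hypothetical example: a $450 electric tank (EF 0.90, 13-year lifetime) at $0.10/kWh,
// 60 gallons/day heated from 55°F to 120°F, projected over 15 years.
var projection = costOverTime({ cost: 450, energyFactor: 0.90, lifetime: 13 },
                              0.10, 60, 120, 55, 15);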

Hopefully this little tool will be helpful to anyone else looking to replace their water heater. I filled it out for a combination of several electric, gas tank, gas tankless, and heat-pump-based water heaters, and it gave me a much better picture of what was worth it and what wasn’t. In the end, even though the graphs told me that the increased efficiency of a gas tankless heater wouldn’t ever pay back the cost difference versus an electric tank water heater, we ended up going with one. The promise of infinite hot water (long showers after a hike!) and no chance of a burst water heater outweighed the additional cost. But at least we were well-informed!

Glowback - Arduino-powered glowing ceramic creature

While I spend most of my time in front of a keyboard and monitor, my partner Eva Funderburgh spends her time sculpting amazing, imaginary ceramic creatures. Her beasts are assembled out of different clays and wood-fired. About a year ago she enlisted my help in building a new type of beast with egg-shaped domes on its back. The idea was to have the domes glow and pulse with an organic, bioluminescent light. (Note: This was way before we’d seen Avatar!) Eva had already built and fired the beast a few months earlier, using thin shells of translucent Southern Ice porcelain for the domes. She left a few of the domes unattached so we could get lights inside after the firing.

The start of the Glowback

We decided to use the open-source Arduino microcontroller platform to drive LEDs inside the domes - that way we could have a bunch of independently-controlled lights and set their behavior with software. We chose the Boarduino Arduino clone from Adafruit Industries because it’s cheap, easy to assemble, and much smaller than the full-size Arduinos. Soldering it together only took an hour or so.

Completed Boarduino

After that we connected a total of 11 superbright LEDs (ordered from DigiKey) to the Boarduino. Since the Boarduino only has 6 PWM pins (which can be used to “fade” LEDs in and out), we put 5 really bright LEDs on their own PWM pins (for the big domes) and wired the remaining LEDs (slightly less blindingly bright ones) in parallel to the 6th pin. The LEDs are unbelievably bright - even after covering them in an anti-static bag they are tough to look at directly.

Franken Beast, glowing

At this point we had to sketch up some software to actually control the lights. Eva wanted a random, organic pulsing, so I started by having each light animate through 360 degrees and used trigonometric functions to create a smooth curve of lighting and fading. We tried a whole bunch of different speeds, patterns, brightnesses, and randomization (some different tests: 1 2 3 4) before settling on the final code. The code is a bit messy because of all the things that got changed around. I ended up using 1 - abs(sin(θ)) as the main brightness function, which gave the lights a sort of “breathing” effect.

1 - abs(sin(θ))

The 0-1 values from that function got converted into a brightness from 0-255 for the PWM output. Actually, the brightnesses were always between a set minimum and maximum brightness, so they never quite go all the way out. Each cycle the speed of the fade gets randomly modified, so the lights never line up in any pattern - it’s pretty hypnotic to stare at.
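
In rough terms, the math for each light looks like this (sketched in JavaScript for readability - the real code is an Arduino sketch, and the constants here are made up):

// Never let the lights go fully dark or fully saturate (illustrative values).
var MIN_BRIGHTNESS = 20;
var MAX_BRIGHTNESS = 255;

// 1 - |sin(θ)| gives a smooth 0-1 "breathing" curve, which gets scaled into the PWM range.
function brightnessAt(theta) {
  var level = 1 - Math.abs(Math.sin(theta));
  return Math.round(MIN_BRIGHTNESS + level * (MAX_BRIGHTNESS - MIN_BRIGHTNESS));
}

// Each light advances its own angle; the speed is re-randomized every cycle so the
// lights never settle into a pattern.
function stepLight(light) {
  light.theta += light.speed;
  if (light.theta >= Math.PI) {
    light.theta -= Math.PI;
    light.speed = 0.01 + Math.random() * 0.05;
  }
  return brightnessAt(light.theta);
}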

the belly of the beast.

After this Eva had the unenviable task of stuffing the whole works into the beast. She built little foam stoppers for each LED and pushed one up into each dome. Then she carefully crammed the wires, the Boarduino, a switch, and the 9V battery inside. It ended up being way too cramped, resulting in a lot of broken wires, resoldering, and hot glue burns. Lesson learned - the next glowing beast will be bigger, with more open access to the inside.

Play Video: Glowback

The end result is really captivating. Eva ended up displaying it at Gallery Madeira in Tacoma, WA along with some of her other creatures. Given how much personal attention the two of us put into the Glowback, and the fact that all the hairy wiring inside makes it sort of “high maintenance”, we decided to keep it for ourselves instead of offering it for sale. However, Eva’s not done with the idea of lit beasts containing microcontrollers.

Eva has written up a post on the Glowback from her perspective on her own blog - I suggest checking it out to get more detail on the concept and lineage of the piece.

Speeding up jQuery's each function

Note: This post is from 2010, and browsers/libraries have changed a lot since then. Please don’t use any of this information to make decisions about how to write code.

In my previous post, Investigating JavaScript Array Iteration Performance, I found that among a selection of different array iteration methods, jQuery’s each function was the slowest. It’s worth mentioning again that these investigations are pretty academic - array iteration and looping speed is unlikely to be the source of performance problems compared to actual program logic, DOM manipulation, string manipulation, etc. I just found it interesting to poke into how things work in different browsers. That said, with the recent release of jQuery 1.4 emphasizing performance so much, I wanted to see what, if anything, could be done to speed up each (which is used all over the place inside jQuery), and whether it would make much of a difference.

Again, the details are after the jump.

For reference, here’s the original implementation of jQuery.each from jQuery 1.3.2 (it hasn’t changed much for 1.4):

function( object, callback, args ) {
  var name, i = 0, length = object.length;

  if ( args ) {
    ... omitted ...
  } else {
    if ( length === undefined ) {
      ... omitted ...
    } else
      for ( var value = object[0]; i < length && callback.call( value, i, value ) !== false; value = object[++i] ){}
  }

return object;
}

I cut out some pieces relating to iterating over Objects instead of Arrays, and some internal-only code, just for brevity. You can see that at its core, each just iterates over the array with a regular for loop and calls the provided callback for each element. It’s using the call function to invoke the callback so it can set this to the value of each element in the array in turn, and passes the index in the array and the value at that index as parameters to the callback as well. I ended up trying four different modifications of jQuery’s each function. I also allowed myself to actually change the signature of each, which would likely break much existing code written on top of jQuery, but it gave me a lot more freedom to tweak things.

The first was to try using native Array.forEach (where available). I had to pass in my own callback to forEach that reversed the order of the index and value arguments to the function, since jQuery.each and Array.forEach put those arguments in opposite order. Of course, I had to fall back on the original for loop implementation for IE. This modification retains the complete behavior from the original implementation.

if (jQuery.isFunction(object.forEach) ) {
  object.forEach(function(value, i) {
    callback.call(value, i, value);
  });
}
else {
  for (var value = object[0]; i < length && callback.call(value, i, value) !== false; value = object[++i]) {}
}

Next, I tried skipping the callback that switches the order of arguments and just passing the user’s callback directly to forEach. I had to modify the fallback to match this. Notice in both cases we no longer set this to the current element in the iteration - Array.forEach doesn’t support that directly. We’re solidly in non-backwards-compatible change territory here.

if (jQuery.isFunction(object.forEach) ) {
  object.forEach(callback);
}
else {
  for (var value = object[0]; i < length && callback.call(null, value, i) !== false; value = object[++i]) {}
}

I noticed in testing this out that Firefox seems to really struggle with using call with a frequently-changing value for this (the first parameter) so I tried another variation that didn’t use native Array.forEach but just didn’t change this in the call (I let it be the whole array each time):

for (var value = object[0]; i < length && callback.call(object, i, value) !== false; value = object[++i]) {}

After that, I wondered why use call at all (I might be missing something important here about how JavaScript function invocation works - please correct me!), so I tried a version that just calls the callback directly:

for (var value = object[0]; i < length && callback(i, value) !== false; value = object[++i]) {}

With these four variations, I went and tested how long it took for them to iterate over a 500,000 element array in different browsers. In the previous tests I used 100,000 elements but the tests completed too fast to get meaningful results (which should tell you how fast this stuff is to begin with!). As in the previous post, the absolute numbers don’t really mean much - it’s the comparison between the different approaches that matters.

Time to iterate over an array of 500,000 integers
jQuery.each Array.forEach (same signature as jQuery) Array.forEach (native signature) Unvarying ‘this’ No call
Firefox 3.5 1,358ms 1,591ms 371ms 576ms 469ms
Firefox 3.6rc2 546ms 672ms 201ms 194ms 109ms
Firefox 3.7a1pre 524ms 641ms 173ms 102ms 301ms
Chrome 3 81ms 94ms 41ms 38ms 35ms
Safari 4 54ms 102ms 102ms 69ms 56ms
IE 8 789ms 759ms 693ms 741ms 476ms
Opera 10.10 451ms 703ms 286ms 305ms 228ms

We find that Firefox 3.6 improves over Firefox 3.5, IE is slow no matter what (though faster than Firefox 3.5 for vanilla jQuery.each), and the Webkit browsers are both very fast. What’s more interesting is to look at each time as a percentage of the stock jQuery implementation:

Percentage of time taken to iterate over 500,000 integers, compared to regular jQuery.each
Array.forEach (same signature as jQuery) Array.forEach (native signature) Unvarying ‘this’ No call
Firefox 3.5 117% 27% 42% 35%
Firefox 3.6rc2 123% 37% 36% 20%
Firefox 3.7a1pre 122% 33% 20% 57%
Chrome 3 116% 51% 47% 43%
Safari 4 189% 188% 128% 104%
IE 8 96% 88% 94% 60%
Opera 10.10 156% 63% 68% 51%

A couple of things jump out at us - Array.forEach doesn’t buy us anything if we have to provide a callback to reverse the inputs. If we can use the native forEach signature, it gets much faster, but not by an order of magnitude. Not varying this helps a lot in Firefox and Chrome - I suspect some runtime optimizations kick in if this stays the same, but not if it’s changing. The overhead of call is significant - it tends to matter more than anything else here. The last thing to note is that, weirdly, Safari 4 is fastest with the stock jQuery.each - I wonder if they’ve optimized specifically for that pattern.

Armed with this knowledge, I customized a copy of jQuery 1.4 to stop referring to this in its uses of each, switched the for loop to call the callback directly instead of using call, and reverse-engineered the performance tests John Resig used for the jQuery 1.4 release notes. Using these tests, I compared my custom version to the released jQuery 1.4.

The result: optimizing array iteration speed made no difference. The real work being done by jQuery (DOM manipulation, etc) totally overshadows any array iteration overhead. Reducing that overhead even by 80% doesn’t matter at all. We learned a few things about how fast Array.forEach is and how setting this in call affects performance, but we haven’t found some magic way to make our code, or jQuery overall, any faster. Furthermore, the only improvement that would have preserved the signature of the original jQuery API was actually slower than the existing implementation! It’s not worth losing this in each for any of these speed gains.

There was one small improvement to jQuery, however - a very small boost to the compressibility of the library. Using explicit arguments to each instead of this to refer to the current element being iterated means that YUI Compressor or Google Closure Compiler can use one character for that item, instead of 4 for this (since this is a keyword). In practice, that saved about 197 bytes out of 69,838 - still not a huge win. But I like to avoid using this in my each callbacks anyway, just so I get to use semantically meaningful variable names, so it’s nice to see that I’m saving a byte or two along the way.

PS: Aside from the “jQueryness” of it, I wondered why each set this to the current element in the array anyway. I have one idea - if this is set to the current element in the array for each invocation of the callback, you can do cleaner OO-style JavaScript. For example, let’s say you have a Dialog object that has a close method. Of course the close method would just use this to refer to the object it’s a member of. But if you had an array of Dialogs and wanted to say “$.each(dialogs, Dialog.prototype.close)” and each didn’t set this to each Dialog in turn, everything would get confused. Of course, in jQuery 1.4 you can get around this using jQuery.proxy, which goes ahead and uses apply (a variant of call) anyway.
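
Here’s a tiny (hypothetical) example of what I mean:

// A made-up Dialog class whose close() method relies on `this`.
function Dialog(name) {
  this.name = name;
}
Dialog.prototype.close = function() {
  console.log("closing " + this.name);
};

var dialogs = [new Dialog("settings"), new Dialog("about")];

// Because jQuery.each sets `this` to the current element on every invocation,
// you can pass the method in directly and each dialog closes itself.
$.each(dialogs, Dialog.prototype.close);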

Investigating JavaScript Array Iteration Performance

Note: This post is from 2009, and browsers/libraries have changed a lot since then. Please don’t use any of this information to make decisions about how to write code.

The other day I was working on some JavaScript code that needed to iterate over huge arrays. I was using jQuery’s $.each function just because it was simple, but I had heard from a bunch of articles on the web that $.each was much slower than a normal for loop. That certainly made sense, and switching to a normal for loop sped up my code quite a bit in the sections that dealt with large arrays.

I’d also recently seen an article on Ajaxian about a new library, Underscore.js, that claimed to include, among other nice Ruby-style functional building blocks, an each function powered by the JavaScript 1.5 Array.forEach when available (degrading gracefully for IE). I wondered how much faster that was than jQuery’s $.each, and that got me thinking about all the different ways to iterate over an array in JavaScript, so I decided to test them out and compare them in different browsers.

This gets pretty long so the rest is after the jump.

I went with six different approaches for my test. Each function would take a list of 100,000 integers and add them together. You can try the test yourself. The first was a normal for loop, with the loop invariant hoisted:

var total = 0;
var length = myArray.length;
for (var i = 0; i < length; i++) {
  total += myArray[i];
}
return total;

And then again without the invariant hoisted, just to see if it makes a difference:

var total = 0;
for (var i = 0; i < myArray.length; i++) {
  total += myArray[i];
}
return total;

Next I tried a for in loop, which is not a good idea for iterating over an array at all - it’s meant for iterating over the properties of an object - but it’s included because it’s interesting and some people try to use it for iteration:

var total = 0;
for (var i in myArray) {
  total += myArray[i];
}
return total;

Next I tried out the Array.forEach included in JavaScript 1.5, on its own. Note that IE doesn’t support this (surprised?):

var total = 0;
myArray.forEach(function(n, i){
  total += n;
});
return total;

After that I tested Underscore.js 0.5.1’s _.each:

var total = 0;
_.each(myArray, function(n, i) {
  total += n;
});
return total;

And then jQuery 1.3.2’s $.each, which differs from Underscore’s in that it doesn’t use the native forEach where available, and this is set to each element of the array as it is iterated:

var total = 0;
$.each(myArray, function(i, n){
  total += n;
});
return total;

I tested a bunch of browsers I had lying around - Firefox 3.5, Chrome 3, Safari 4, IE 8, Opera 10.10, and Firefox 3.7a1pre (the bleeding edge build, because I wanted to know if Firefox is getting faster). I tried testing on IE6 and IE7 in VMs but they freaked out and crashed. The tests iterated over a list of 100,000 integers, and I ran each three times and averaged the results. Note that this is not a particularly rigorous test - other stuff was running on my computer, some of the browsers were run in VMs, etc. I mostly wanted to be able to compare different approaches within a single browser, not compare browsers, though some differences were readily apparent after running the tests.

Time to iterate over an array of 100,000 integers
for loop for loop (unhoisted) for in Array.forEach Underscore.js each jQuery.each
Firefox 3.5 2ms 2ms 78ms 72ms 69ms 225ms
Firefox 3.7a1pre 2ms 3ms 73ms 29ms 34ms 108ms
Chrome 3 2ms 2ms 35ms 6ms 5ms 14ms
Safari 4 1ms 2ms 162ms 16ms 15ms 10ms
IE 8 17ms 41ms 265ms n/a 127ms 133ms
Opera 10.10 15ms 19ms 152ms 53ms 57ms 74ms

I’ve highlighted some particularly interesting good and bad results from the table (in green and red, respectively). Let’s see what we can figure out from these tests. I’ll get the obvious comparisons between browsers out of the way - IE is really slow, and Chrome is really fast. Other than that, they each seem to have some mix of strengths and weaknesses.

First, the for loop is really fast. It’s hard to beat, and it’s clear that if you’ve got to loop over a ton of elements and speed is important to you, you should be using a for loop. It’s sad that it’s so much slower in IE and Opera, but it’s still faster than the alternatives. Opera’s result is somewhat surprising - while it’s not particularly fast anywhere, it’s not nearly as slow as IE on the other looping methods, but it’s still pretty slow on normal for loops. Notice that IE8 is the only browser where manually hoisting the loop invariant in the for loop matters - by almost 3x. I’m guessing every other browser automatically caches the myArray.length result, leading to roughly the same performance either way, but IE doesn’t.

Next, it turns out that for in loops are not just incorrect, they’re slow - even in otherwise blazingly-fast Chrome. In every browser there’s a better choice, and it doesn’t even get you much in terms of convenience (since it iterates over indices of the array, not values). Safari is particularly bad with them - they’re 10x slower than its next slowest benchmark. Just don’t use for in!

The results for the native Array.forEach surprised me. I expected some overhead because of the closure and function invocation on the passed-in iterator function, but I didn’t expect it to be so much slower. Chrome and Safari seem to be pretty well-optimized, though it is still several times slower than the normal for loop, but Firefox and Opera really chug along - especially Firefox. Firefox 3.7a1pre seems to have optimized forEach a bit (probably more than is being shown, since 3.7a1pre was running in a VM while 3.5 was running on my normal OS).

Underscore.js each’s performance is pretty understandable: since it boils down to the native forEach in most browsers, it performs pretty much the same. However, in IE it has to fall back to a normal for loop and invoke the iterator function itself for each element, and it slows way down. What’s most surprising about that is that having Underscore invoke a function for each element within a for loop is still 10x slower in IE than just using a for loop! There must be a lot of overhead in invoking a function with call in IE - or maybe just in invoking functions in general.
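
The fallback path amounts to roughly this (paraphrased from memory, not Underscore’s actual source):

// Without a native forEach, every element still costs a function invocation via call.
for (var i = 0; i < myArray.length; i++) {
  iterator.call(context, myArray[i], i, myArray);
}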

Lastly we have jQuery’s each, which (excluding for in loops) is the slowest method of iterating over an array in most of the browsers tested. The exception is Safari, which consistently does better with jQuery’s each than it does with Underscore’s or the native forEach. I’m really not sure why, though the difference is not huge. IE’s performance here is pretty much expected, since Underscore degrades to almost the same code as jQuery’s when a native forEach isn’t available. Firefox 3.5 is the real shocker - it’s drastically slower with jQuery’s each. It’s even slower than IE8, and by a wide margin! Firefox 3.7a1pre makes things better, but it’s still pretty embarrassing. I have some theories on why it’s so slow in Firefox and what could be done about it, but those will have to wait for another post.

It’d be interesting to try out some of the other major JavaScript libraries’ iteration functions and compare them with jQuery and Underscore’s speeds - I’m admittedly a jQuery snob, but it’d be interesting to see if other libraries are faster or not.

It’s worth noting that, as usual, this sort of performance information only matters if you’re actually seeing performance problems - don’t rewrite your app to use for loops or pull in Underscore.js if you’re only looping over tens, or hundreds, or even thousands of items. The convenience of jQuery’s functional-style each means it’s going to stay my go-to for most applications. But if you’re seeing performance problems iterating over large arrays, hopefully this will help you speed things up.

Swoopo Profits Greasemonkey Script - Entertainment Shopping

In the last few weeks I’ve become increasingly obsessed with the evil genius that is Swoopo.com. Swoopo is a penny-auction site - users buy bids for $0.60, and each bid placed on an item increases the price by $0.12. The cost of bids and the amount they increase the price of the item vary depending on the type of auction and the country you’re in. Swoopo does a lot to make it harder to win, though. For example, if a bid is placed within 10 seconds of the end of an auction, the closing time is extended by about 10 seconds, so people can have last-second sniping wars for as long as they want. They also offer a “BidButler” service that will automatically bid for you, and of course if two users in an auction are using it, the BidButlers just fight until they’ve used up all the money they were given. Swoopo’s operation is like cold fusion for money - they make insane amounts of cash off their users, and they only have to drop-ship the item to one user, so there’s theoretically very little operating cost (they already have the money from selling bids, and they don’t need to maintain inventory). They’re shameless enough to run auctions for cash, gold, and even more bids! Because everyone in the auction is paying to participate, even if the winner gets some savings on the item, Swoopo makes far, far more on the sunk bids - sometimes 10x the price of the item in pure profit.

Jeff Atwood (of codinghorror.com) has written about Swoopo multiple times, and some techies have even tried to game the system, but it hasn’t worked. I was introduced to Swoopo through Jeff’s blog, but I hadn’t thought about it in forever, and for some reason it came up again recently. After looking at it a bit, I was just floored by how they’ve managed to set up such a perfect money-generating system. The company that runs Swoopo is called “Entertainment Shopping”, which I guess is supposed to suggest that it’s like gambling (where it’s “fun” to lose money), though they really, really don’t want to be regulated as gambling. I don’t personally find gambling (or bidding on Swoopo) to be that fun, but I do find it entertaining to watch the astronomical profits tick up as more and more suckers toss money into an auction. So I built a little Greasemonkey script that adds the estimated profit to Swoopo above the price of an auction, updating in real time as people place bids.

Example screenshot of Swoopo Profits

It took quite a bit of work to sniff out the prices from the page (I suspect they make it hard to scrape on purpose), but I’ve checked it out and the script works pretty well on current and recent auctions on all of Swoopo’s different sites (US, Canada, UK, Germany, Austria, and Spain). It won’t work on some of their older auctions, where the rules were slightly different (and bid costs were different, too). The basic formula looks like this:

((currentPrice - bidAmount) / bidAmount) * bidCost + currentPrice - worthUpTo

I’m calculating it with all the fairness to Swoopo I can muster. I estimate the number of bids from the current price and the amount each bid moves the price (bidAmount), then multiply by the cost of a bid (bidCost). The winner still has to pay the current price, so I add that in, but I subtract what Swoopo says the item is “worth up to”, since they probably have to pay around that to drop-ship it to a customer. As the example screenshot shows, this leads to examples like an iMac selling for $364.75 (plus another $392.40 in bids for the winner), but a total pure profit of $9,827.98 for Swoopo. Exciting! I’ll readily admit that my calculation is not always 100% accurate. There are a number of things I don’t take into account - I assume shipping is a wash, so it’s not included. I assume Swoopo’s paying the full retail “worth up to” price when they’re probably not. I count all bids as costing the same even though they might have been won at a “discount” via a bid auction. In cases where I can’t figure out some numbers I default them to hardcoded values, which might be wrong. I also don’t take into account “Swoop it now”, which lets bidders buy the item for its full price minus the money they’ve sunk into bids, effectively getting out of the auction entirely. This would reduce Swoopo’s profits, but it isn’t recorded anywhere so I can’t factor it in. So take the number with a grain of salt - it’s entertainment.
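
In script form, the estimate works out to something like this (the variable names mirror the formula above; the numbers in the example are made up, not from a real auction):

// Estimated Swoopo profit for an auction, per the formula above.
function estimateProfit(currentPrice, bidAmount, bidCost, worthUpTo) {
  var bidsPlaced = (currentPrice - bidAmount) / bidAmount; // bids implied by the current price
  return bidsPlaced * bidCost + currentPrice - worthUpTo;
}

// Hypothetical example: an item at $120.00 with $0.12 increments, $0.60 bids,
// and a "worth up to" price of $300.00:
//   estimateProfit(120.00, 0.12, 0.60, 300.00) ≈ $419.40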

Grab the script and start poking around swoopo.com. Hopefully you’ll have as much fun as I have with it. Update: Swoopo.com closed in 2011.

JSONView - View JSON documents in Firefox

I’m a big fan of JSON as a data exchange format. It’s simple, lightweight, easy to produce and easy to consume. However, JSON hasn’t quite caught up to XML in terms of tool support. For example, if you try to visit a URL that produces JSON (using the official “application/json” MIME type), Firefox will prompt you to download the file. If you try the same thing with an XML document, it’ll display a nice formatted result with collapsible sections. I’ve always wanted a Firefox extension that would give JSON the same treatment that comes built-in for XML, and after searching for it for a while I just gave up and wrote my own. The JSONView extension (install) will parse a JSON document and display something prettier, with syntax highlighting, collapsible arrays and objects, and nice readable formatting. In the case that your JSON isn’t really JSON (JSONView is pretty strict) it’ll display an error but still show you the text of your document. If you want to see the original text at any time, it’s still available in “View Source” too.
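
The basic rendering idea is simple - walk the parsed JSON and build nested, collapsible markup. Something along these lines (a simplified modern-JavaScript sketch of the approach, not the extension’s actual code):

// Turn a parsed JSON value into nested, collapsible HTML (simplified illustration).
function render(value) {
  if (value === null || typeof value !== "object") {
    // Primitives get a span whose class allows per-type syntax highlighting via CSS.
    var span = document.createElement("span");
    span.className = value === null ? "null" : typeof value;
    span.textContent = JSON.stringify(value);
    return span;
  }
  // Objects and arrays become a <details> element, which collapses for free.
  var details = document.createElement("details");
  details.open = true;
  var summary = document.createElement("summary");
  summary.textContent = Array.isArray(value) ? "Array" : "Object";
  details.appendChild(summary);
  for (var key in value) {
    var row = document.createElement("div");
    row.appendChild(document.createTextNode(key + ": "));
    row.appendChild(render(value[key]));
    details.appendChild(row);
  }
  return details;
}

// Usage: document.body.appendChild(render(JSON.parse(jsonText)));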

JSONView logo

I’ve been eager to release this for some time, but I finally pushed it to addons.mozilla.org last night. I actually started development on it about 7 months ago, but work got paused on it for about 6 months due to stuff out of my control, and then I had some other projects I was working on. The actual development only took a few days (including digging through some confusing Unicode bugs). I thought it was funny that right as I was resuming work on JSONView I noticed that a JSON explorer had actually landed for Firebug 1.4, which I’ll also be looking forward to. Initially I had intended to build that functionality as part of my extension. There’s a lot I’d like to add on, like JSONP support and a preference to send the “application/json” MIME type in Firefox’s accept headers.

This is actually my first real open source project - I’ve released some code under open source licenses before, but this is actually set up at Google Code with an issue tracker and public source control and everything. I’ve licensed it under the MIT license. I’m really hoping people get interested in improving the extension with me. I’ve pre-seeded the issue tracker with some known bugs and feature requests.

The extension itself is pretty simple. I wasn’t sure how to approach the problem of supporting a new content type for Firefox, so I followed the example of the wmlbrowser extension and implemented a custom nsIStreamConverter. What this means is that I created a new component that tells Firefox “I know how to translate documents of type application/json into HTML”. And that it does - parsing the JSON using the new native JSON support in Firefox 3 (for speed and security) and then constructing an HTML document that it passes along the chain. This seems to work pretty well, though there are some problems - some parts of Firefox forget the original type of the document and treat it as HTML, so “View Page Info” reports “text/html” instead of “application/json”, “Save as…” saves the generated HTML, Firebug sees the generated HTML, etc. Just recently I came across the nsIURLContentListener interface, which might offer a better way of implementing JSONView, but I’m honestly not sure - the Mozilla documentation is pretty sparse and it was hard enough to get as far as I did. I’m hoping some Mozilla gurus can give me some pointers now that it’s out in the open.

Right now the extension is versioned at “0.1b1” which is a wimpy way of saying “this is a first release and it could use some work”. It’s also trapped in the “sandbox” at addons.mozilla.org, where it will stay until it gets some downloads and reviews. Please check it out, write a little review, and soon people won’t have to log in to install it!

Note: While composing this post I ran across the JSONovich extension which was apparently released in mid-December and seems to do similar stuff to JSONView. No reason we can’t have two competing extensions, though.

Fallout 3 licensed soundtrack with Amazon MP3 links

I just finished Fallout 3 last night. Yeah, that’s one of the reasons I haven’t released anything new in a while. One of my favorite parts of the game was the old music they used. I loved the BioShock soundtrack too. Now that I’m done with the game and won’t be listening to Galaxy News Radio anymore, I figured I’d hunt down the individual songs on Amazon MP3 so I can listen to them while I’m playing other games (my favorite is when Halo or Chrono Trigger music plays over another game). As long as I’m doing that, I thought I’d post the links for everyone else, since I didn’t find a list with links to download the songs anywhere online. I got the list itself from Wikipedia’s Fallout 3 page. Unfortunately not all of the songs are available - hopefully they’ll show up in time.

  1. “I Don’t Want To Set The World On Fire” - The Ink Spots
  2. “Way Back Home” - Bob Crosby & the Bobcats
  3. “Butcher Pete (Part 1)” - Roy Brown
  4. “Happy Times” (From the Danny Kaye film The Inspector General) - Bob Crosby & the Bobcats
  5. “Civilization (Bongo, Bongo, Bongo)” - Danny Kaye with The Andrews Sisters
  6. “Into Each Life Some Rain Must Fall” - Ella Fitzgerald with The Ink Spots
  7. “Anything Goes” - Cole Porter
  8. “Fox Boogie” - Gerhard Trede
  9. “I’m Tickled Pink” - Jack Shaindlin
  10. “Jazzy Interlude” - Billy Munn
  11. “Jolly Days” - Gerhard Trede
  12. “Let’s Go Sunning” - Jack Shaindlin
  13. “A Wonderful Guy” - Tex Beneke
  14. “Rhythm for You” - Eddy Christiani & Frans Poptie
  15. “Swing Doors” - Allan Gray
  16. “Maybe” (Intro song from the original Fallout) - The Ink Spots
  17. “Mighty Mighty Man” - Roy Brown
  18. “Crazy He Calls Me” - Billie Holiday
  19. “Easy Living” - Billie Holiday
  20. “Boogie Man” - Sid Phillips

Update: Amazon added the right version of “Butcher Pete” and I’ve linked it above.

Setting the correct default font in .NET Windows Forms apps

I was working on XBList the other night when I realized something - the font used in its dialogs and the friends list wasn’t Segoe UI. Segoe UI is the very pretty, ClearType-optimized new default dialog font in Windows Vista. In Windows XP and 2000, it’s Tahoma, and in earlier editions it was Microsoft Sans Serif. You can see the subtle differences between them:

Microsoft System Fonts

In .NET and Windows Forms, the default font for controls is actually Microsoft Sans Serif, not the operating system’s default dialog font! Kevin Dente explains this on his blog. This is not the only time Microsoft’s dropped the ball on this - if you go through some dialogs in Vista you’ll see that many of them use Tahoma or even Microsoft Sans Serif instead of Segoe UI. This is pretty funny, especially when Rule #1 of the Top Rules for the Windows Vista User Experience is “Use the Aero Theme and System Font (Segoe UI)”. Mitch Kaplan offers up a pretty good explanation for why getting it all right is very hard, but having a mix of old and new fonts still looks shoddy.

I don’t want my apps to look shoddy. Embarrassingly enough, I’ve been hardcoding Tahoma in all my apps to get the more “modern” XP look. Now that Vista’s on the scene, it’s clear that I want to select the correct default font for whichever OS my app is running on. As Kevin points out, Control.DefaultFont is no help here - it’s what’s driving Windows Forms’ default choice of Microsoft Sans Serif in the first place. After some digging I found this Visual Studio feedback ticket (sign in required), where the Visual Studio guys explain that, while they couldn’t fix the default, they did create a SystemFonts class to help out. They recommend putting this code in your Form’s constructor:

this.Font = SystemFonts.DialogFont;
InitializeComponent();

Unfortunately, this doesn’t quite work, for a couple of reasons. The first is that on Vista, SystemFonts.DialogFont is… Tahoma! Closer, but not quite right yet. If you pop open SystemFonts in Reflector, you’ll see that the DialogFont property does some simple platform detection and then just hardcodes Tahoma. This worked when it was just 2000/XP vs. 9x, but Vista throws it for a total loop. Fortunately the fix is easy - use SystemFonts.MessageBoxFont instead. This one seems to always return the correct default dialog font.

However, I ran into one more problem. If I set the default font on the Form, like the code above does, I get weird, bloated controls:

Setting the default font on the form screws up controls

Fortunately I’ve got a solution for that one too. Instead of setting the font on the Form and letting it inherit, just loop through the Controls property, and individually set the right font on each control:

// Set the default dialog font on each child control
foreach (Control c in Controls)
{
    c.Font = SystemFonts.MessageBoxFont;
}

// Use a larger, bold version of the default dialog font for one control
this.label1.Font = new Font(SystemFonts.MessageBoxFont.Name, 12f, FontStyle.Bold, GraphicsUnit.Point);

Now I get a more familiar-looking dialog:

Setting the default font on each control looks fine

I could always make a subclass of Form to do this for me, but I’m OK with copying it into each new form. With this code, all my controls come up with the pretty new Segoe UI font in Windows Vista, and Tahoma in XP.