Archive for 2010

Water Heater Cost / Payback Calculator

For the last few months my partner and I have been trying to decide on a new water heater. After moving into our new place, we realized that the existing electric tank water heater wasn’t working right: the temperature of our showers steadily got colder. It was suggested that one of the heating elements was busted, but I wasn’t interested in getting it repaired since the heater was already well past the expected lifetime of an electric heater. However, there are a lot of choices for a replacement. Another electric tank water heater would be cheap; a gas tank heater would be cheaper to run but would require running a gas line; and tankless water heaters are much more expensive but are cheaper to operate, since they don’t keep a whole tank of water heated up all the time for the few times you use it.

There are a number of ways to figure out how the cost of purchase and installation balances out against the cost of operation over time. You can always make your own Excel spreadsheet, or you can use calculators like this one from energy.gov. However, all the web payback calculators I’ve seen have clunky 90s interfaces, don’t take all the variables into account, and most importantly, don’t let you compare multiple types of heaters at the same time. So, like any good software developer, I built my own.

My water heater calculator is based on the same calculations used on the Federal Energy Management Program site, with the addition of inputs for your hot and cold water temperature. It’s also more flexible about how you enter your water usage. But the best part is that you can enter as many different water heaters as you want and they’ll all be graphed against each other, taking into account the lifetime of the unit. Get multiple bids, try different models, compare gas and electric. By displaying them as a graph of total cost over time, you can see where each heater breaks even with each other, and how much savings you’re getting by the end.
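The core of it is the usual water-heating math: the energy to raise the gallons you use each day from the cold inlet temperature to your hot setpoint, divided by the heater’s Energy Factor, times what you pay for that fuel. Here’s a rough sketch of the idea (illustrative names and structure only, not the calculator’s real code):

// Rough sketch of a per-year operating cost, FEMP-style (illustrative only).
function annualOperatingCost(heater, usage) {
  // Btu per year: gallons * 8.34 lb/gal * 1 Btu per lb-degF of temperature rise
  var btuPerYear = usage.gallonsPerDay * 365 * 8.34 *
                   (usage.hotTempF - usage.coldTempF);

  if (heater.fuel === "electric") {
    var kWh = btuPerYear / (heater.energyFactor * 3412);      // 3,412 Btu per kWh
    return kWh * usage.costPerKWh;
  } else {
    var therms = btuPerYear / (heater.energyFactor * 100000); // 100,000 Btu per therm
    return therms * usage.costPerTherm;
  }
}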

As a bonus, the calculator will also estimate how much you may be able to claim as part of the Energy Star Federal Tax Credit program. It’s smart enough to know the rules about the credits (gas heaters with an efficiency ≥ 0.82 only, 30% of total cost up to $1500), and you can choose not to use the rebate if you’ve already used it up this year or don’t plan on applying it to your heater.
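In code terms, that rule boils down to something like this (a simplified sketch, not the calculator’s exact implementation):

// Simplified sketch of the credit rule described above.
function federalTaxCredit(heater, applyCredit) {
  var eligible = heater.fuel === "gas" && heater.energyFactor >= 0.82;
  if (!applyCredit || !eligible) {
    return 0;
  }
  return Math.min(0.3 * heater.totalCost, 1500); // 30% of total cost, capped at $1500
}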

You can get started with the calculator by filling in values for your water usage and resource costs, or accept the defaults. Then add as many heaters as you like, entering in the cost for purchase and installation, the Energy Factor (which should be in the documentation for the heater), and the estimated lifetime of the heater. The more accurate you can make the numbers, the better your cost projection will be. Then check out the graph to see what your total expenditure will be after every year. If you’re comparing a new heater with the option of keeping your existing heater, just set the Cost to $0 and reduce the lifetime to how long you expect your existing heater to last.
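The graph itself is just a running total: pay the purchase and installation cost up front (and again whenever the unit’s lifetime runs out), then add the operating cost for each year. Roughly, each heater’s line could be computed like this (again an illustrative sketch, not the real code):

// Illustrative sketch of one heater's line on the graph.
function cumulativeCostByYear(heater, annualCost, credit, years) {
  var totals = [], runningTotal = 0;
  for (var year = 1; year <= years; year++) {
    // Buy a new unit in year 1 and again whenever the old one wears out
    if ((year - 1) % heater.lifetimeYears === 0) {
      runningTotal += heater.purchaseAndInstallCost;
      if (year === 1) {
        runningTotal -= credit; // only count the tax credit on the first purchase
      }
    }
    runningTotal += annualCost;
    totals.push(runningTotal); // one point per year on the graph
  }
  return totals;
}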

Hopefully this little tool will be helpful to anyone else looking to replace their water heater. I filled it out for a combination of several electric, gas tank, gas tankless, and heat-pump based water heaters, and it gave me a much better picture of what was worth it and what wasn’t. In the end, even though the graphs told me that the increased efficiency of a gas tankless heater would never pay back the cost difference versus an electric tank water heater, we ended up going with one. The promise of infinite hot water (long showers after a hike!) and no chance of a burst water heater outweighed the additional cost. But at least we were well-informed!

Glowback - Arduino-powered glowing ceramic creature

While I spend most of my time in front of a keyboard and monitor, my partner Eva Funderburgh spends her time sculpting amazing, imaginary ceramic creatures. Her beasts are assembled out of different clays and wood-fired. About a year ago she enlisted my help in building a new type of beast with egg-shaped domes on its back. The idea was to have the domes glow and pulse with an organic, bioluminescent light. (Note: This was way before we’d seen Avatar!) Eva had already built and fired the beast a few months earlier, using thin shells of translucent Southern Ice porcelain for the domes. She left a few of the domes unattached so we could get lights inside after the firing.

The start of the Glowback

We decided to use the open-source Arduino microcontroller platform to drive LEDs inside the domes - that way we could have a bunch of independently-controlled lights and set their behavior with software. We chose the Boarduino Arduino clone from Adafruit Industries because it’s cheap, easy to assemble, and much smaller than the full-size Arduinos. Soldering it together only took an hour or so.

Completed Boarduino

After that we connected a total of 11 superbright LEDs (ordered from DigiKey) to the Boarduino. Since the Boarduino only has 6 PWM pins (which can be used to “fade” LEDs in and out), we put 5 really bright LEDs on their own PWM pins (for the big domes) and wired the remaining LEDs (slightly less blindingly bright ones) in parallel to the 6th pin. The LEDs are unbelievably bright - even after covering them in an anti-static bag they are tough to look at directly.

Franken Beast, glowing

At this point we had to write some software to actually control the lights. Eva wanted a random, organic pulsing, so I started by having each light animate through 360 degrees and used trigonometric functions to create a smooth curve of lighting and fading. We tried a whole bunch of different speeds, patterns, brightnesses, and randomizations (some different tests: 1 2 3 4) before settling on the final code. The code is a bit messy because of all the things that got changed around. I ended up using 1 - abs(sin(θ)) as the main brightness function, which gave the lights a sort of “breathing” effect.

1 - abs(sin(θ))

The 0-1 values from that function got converted into a brightness from 0-255 for the PWM output. Actually, the brightnesses were always between a set minimum and maximum brightness, so they never quite go all the way out. Each cycle the speed of the fade gets randomly modified, so the lights never line up in any pattern - it’s pretty hypnotic to stare at.
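Boiled down, the per-light logic looks something like this (written here in JavaScript for readability - the real code is Arduino C driving the PWM pins with analogWrite, and the specific numbers below are made up):

// Sketch of the brightness math for one light (illustrative values only).
var MIN_BRIGHTNESS = 40, MAX_BRIGHTNESS = 255;
var theta = 0, speed = 0.02;

// Called over and over from the main loop.
function nextBrightness() {
  theta += speed;
  if (theta >= 2 * Math.PI) {                  // finished a full 360-degree cycle
    theta -= 2 * Math.PI;
    speed = 0.01 + Math.random() * 0.03;       // pick a new random fade speed
  }
  var b = 1 - Math.abs(Math.sin(theta));       // the "breathing" curve, 0 to 1
  return Math.round(MIN_BRIGHTNESS + b * (MAX_BRIGHTNESS - MIN_BRIGHTNESS));
}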

the belly of the beast.

After this Eva had the unenviable task of stuffing the whole works into the beast. She built little foam stoppers for each LED and pushed one up into each dome. Then she carefully crammed all the wires, the Boarduino, a switch, and the 9V battery inside. It ended up being way too cramped, resulting in a lot of broken wires, resoldering, and hot glue burns. Lesson learned - the next glowing beast will be bigger, with more open access to the inside.

Play Video: Glowback

The end result is really captivating. Eva ended up displaying it at Gallery Madeira in Tacoma, WA along with some of her other creatures. Because of all the personal attention the two of us put into the Glowback, and because all the hairy wiring inside makes it sort of “high maintenance”, we decided to keep it for ourselves instead of offering it for sale. However, Eva’s not done with the idea of lit beasts containing microcontrollers.

Eva has written up a post on the Glowback from her perspective on her own blog - I suggest checking it out to get more detail on the concept and lineage of the piece.

Speeding up jQuery's each function

Note: This post is from 2010, and browsers/libraries have changed a lot since then. Please don’t use any of this information to make decisions about how to write code.

In my previous post, Investigating JavaScript Array Iteration Performance, I found that among a selection of different array iteration methods, jQuery’s each function was the slowest. It’s worth mentioning again that these investigations are pretty academic - array iteration and looping speed is unlikely to be the source of performance problems compared to actual program logic, DOM manipulation, string manipulation, etc. I just found it interesting to poke into how things work in different browsers. That said, with the recent release of jQuery 1.4 emphasizing performance so much, I wanted to see what, if anything, could be done to speed up each (which is used all over the place inside jQuery), and whether it would make much of a difference.

Again, the details are after the jump.

For reference, here’s the original implementation of jQuery.each from jQuery 1.3.2 (it hasn’t changed much for 1.4):

function( object, callback, args ) {
  var name, i = 0, length = object.length;

  if ( args ) {
    ... omitted ...
  } else {
    if ( length === undefined ) {
      ... omitted ...
    } else
      for ( var value = object[0]; i < length && callback.call( value, i, value ) !== false; value = object[++i] ){}
  }

  return object;
}

I cut out some pieces relating to iterating over Objects instead of Arrays, and some internal-only code, just for brevity. You can see that at its core, each just iterates over the array with a regular for loop and calls the provided callback for each element. It’s using the call function to invoke the callback so it can set this to the value of each element in the array in turn, and passes the index in the array and the value at that index as parameters to the callback as well. I ended up trying four different modifications of jQuery’s each function. I also allowed myself to actually change the signature of each, which would likely break much existing code written on top of jQuery, but it gave me a lot more freedom to tweak things.
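Before getting into the modifications, here’s what that contract looks like from the caller’s side (a trivial example): the callback receives the index and the value, this is also set to the value, and returning false stops the loop early.

jQuery.each([10, 20, 30], function (i, value) {
  console.log(i, value, this == value); // 0 10 true, 1 20 true, 2 30 true
  // return false;                      // would stop the iteration early
});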

The first was to try using native Array.forEach (where available). I had to pass in my own callback to forEach that reversed the order of the index and value arguments to the function, since jQuery.each and Array.forEach put those arguments in opposite order. Of course, I had to fall back on the original for loop implementation for IE. This modification retains the complete behavior from the original implementation.

if (jQuery.isFunction(object.forEach) ) {
  object.forEach(function(value, i) {
    callback.call(value, i, value);
  });
}
else {
  for (var value = object[0]; i < length && callback.call(value, i, value) !== false; value = object[++i]) {}
}

Next, I tried skipping the callback that switches the order of arguments and just passing the user’s callback directly to forEach. I had to modify the fallback to match this. Notice in both cases we no longer set this to the current element in the iteration - Array.forEach doesn’t support that directly. We’re solidly in non-backwards-compatible change territory here.

if (jQuery.isFunction(object.forEach) ) {
  object.forEach(callback);
}
else {
  for (var value = object[0]; i < length && callback.call(null, value, i) !== false; value = object[++i]) {}
}

I noticed in testing this out that Firefox seems to really struggle when call is used with a frequently-changing value for this (the first parameter), so I tried another variation that skipped native Array.forEach and just stopped changing this between calls (I let it be the whole array each time):

for (var value = object[0]; i < length && callback.call(object, i, value) !== false; value = object[++i]) {}

After that, I wondered why use call at all (I might be missing something important here about how JavaScript function invocation works - please correct me!), so I tried a version that just called the callback directly.

for (var value = object[0]; i < length && callback(i, value) !== false; value = object[++i]) {}

With these four variations, I went and tested how long it took for them to iterate over a 500,000 element array in different browsers. In the previous tests I used 100,000 elements but the tests completed too fast to get meaningful results (which should tell you how fast this stuff is to begin with!). As in the previous post, the absolute numbers don’t really mean much - it’s the comparison between the different approaches that matters.
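A timing harness for this kind of comparison can be as simple as the following sketch (illustrative only, not the exact test code I ran):

// Roughly how each variation was timed (sketch only; modifiedEach stands in
// for whichever variation is being measured).
var bigArray = [];
for (var i = 0; i < 500000; i++) {
  bigArray.push(i);
}

function timeIt(label, eachImplementation) {
  var start = new Date().getTime();
  eachImplementation(bigArray, function (index, value) { /* do nothing */ });
  console.log(label + ": " + (new Date().getTime() - start) + "ms");
}

timeIt("stock jQuery.each", jQuery.each);
// timeIt("modified each", modifiedEach);  // ...and so on for each variation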

Time to iterate over an array of 500,000 integers

| Browser          | jQuery.each | Array.forEach (same signature as jQuery) | Array.forEach (native signature) | Unvarying `this` | No call |
|------------------|-------------|------------------------------------------|----------------------------------|------------------|---------|
| Firefox 3.5      | 1,358ms     | 1,591ms                                  | 371ms                            | 576ms            | 469ms   |
| Firefox 3.6rc2   | 546ms       | 672ms                                    | 201ms                            | 194ms            | 109ms   |
| Firefox 3.7a1pre | 524ms       | 641ms                                    | 173ms                            | 102ms            | 301ms   |
| Chrome 3         | 81ms        | 94ms                                     | 41ms                             | 38ms             | 35ms    |
| Safari 4         | 54ms        | 102ms                                    | 102ms                            | 69ms             | 56ms    |
| IE 8             | 789ms       | 759ms                                    | 693ms                            | 741ms            | 476ms   |
| Opera 10.10      | 451ms       | 703ms                                    | 286ms                            | 305ms            | 228ms   |

We find that Firefox 3.6 improves over Firefox 3.5, IE is slow no matter what (though faster than Firefox 3.5 for vanilla jQuery.each), and the Webkit browsers are both very fast. What’s more interesting is to look at each time as a percentage of the stock jQuery implementation:

Percentage of time taken to iterate over 500,000 integers compared to regular `jQuery.each`

| Browser          | Array.forEach (same signature as jQuery) | Array.forEach (native signature) | Unvarying `this` | No call |
|------------------|------------------------------------------|----------------------------------|------------------|---------|
| Firefox 3.5      | 117%                                     | 27%                              | 42%              | 35%     |
| Firefox 3.6rc2   | 123%                                     | 37%                              | 36%              | 20%     |
| Firefox 3.7a1pre | 122%                                     | 33%                              | 20%              | 57%     |
| Chrome 3         | 116%                                     | 51%                              | 47%              | 43%     |
| Safari 4         | 189%                                     | 188%                             | 128%             | 104%    |
| IE 8             | 96%                                      | 88%                              | 94%              | 60%     |
| Opera 10.10      | 156%                                     | 63%                              | 68%              | 51%     |

A couple of things jump out at us - Array.forEach doesn’t buy us anything if we have to provide a callback to reverse the inputs. If we can use the native forEach signature, it gets much faster, but not by an order of magnitude. Not varying this helps a lot in Firefox and Chrome - I suspect some runtime optimizations kick in if this stays the same, but not if it’s changing. The overhead of call is significant - it tends to matter more than anything else here. The last thing to note is that, weirdly, Safari 4 is fastest with the stock jQuery.each - I wonder if they’ve optimized specifically for that pattern.

Armed with this knowledge, I customized a copy of jQuery 1.4 to stop referring to this in its uses of each, switched the for loop to call the callback directly instead of using call, and reverse-engineered the performance tests John Resig used for the jQuery 1.4 release notes. Using these tests, I compared my custom version to the released jQuery 1.4.

The result: optimizing array iteration speed made no difference. The real work being done by jQuery (DOM manipulation, etc) totally overshadows any array iteration overhead. Reducing that overhead even by 80% doesn’t matter at all. We learned a few things about how fast Array.forEach is and how setting this in call affects performance, but we haven’t found some magic way to make our code, or jQuery overall, any faster. Furthermore, the only improvement that would have preserved the signature of the original jQuery API was actually slower than the existing implementation! It’s not worth losing this in each for any of these speed gains.

There was one small improvement to jQuery, however - a very small boost to the compressibility of the library. Using explicit arguments to each instead of this to refer to the current element being iterated means that YUI Compressor or the Google Closure Compiler can use one character for that item, instead of four for this (since this is a keyword and can’t be renamed). In practice, that saved about 197 bytes out of 69,838 - still not a huge win. But I like to avoid using this in my each callbacks anyway, just so I get to use semantically meaningful variable names, so it’s nice to see that I’m saving a byte or two along the way.
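To make that concrete, here’s the kind of difference it makes (a contrived example - the function names are made up, and the exact minified output varies by tool):

// With `this`, a minifier can't shorten the references to the current element:
$.each(elements, function () {
  doSomething(this);
  doSomethingElse(this);
});

// With an explicit argument, the minifier can rename `element` to a single letter:
$.each(elements, function (i, element) {
  doSomething(element);
  doSomethingElse(element);
});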

PS: Aside from the “jQueryness” of it, I wondered why each set this to the current element in the array anyway. I have one idea - if this is set to the current element in the array for each invocation of the callback, you can do cleaner OO-style JavaScript. For example, let’s say you have a Dialog object that has a close method. Of course the close method would just use this to refer to the object it’s a member of. But if you had an array of Dialogs and wanted to say “$.each(dialogs, Dialog.prototype.close)” and each didn’t set this to each Dialog in turn, everything would get confused. Of course, in jQuery 1.4 you can get around this using jQuery.proxy, which goes ahead and uses apply (a variant of call) anyway.
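Here’s that Dialog example spelled out (illustrative code only, not from a real project):

// Because each() sets `this` to the current item, an unbound method "just works".
function Dialog(element) {
  this.element = element;
}
Dialog.prototype.close = function () {
  this.element.style.display = "none"; // `this` is the dialog being closed
};

var dialogs = [
  new Dialog(document.createElement("div")),
  new Dialog(document.createElement("div"))
];

// jQuery.each invokes close with `this` set to each Dialog in turn.
$.each(dialogs, Dialog.prototype.close);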