Blog Archives

JSONView is back

On November 14th of 2017, Firefox 57 was released, and JSONView stopped working. This was not unexpected - Mozilla had been warning for a while that they were going to shut off their old extension system completely and move to the WebExtension system pioneered by Google Chrome. All of the Firefox extensions using the old system were silently disabled.

I had hoped to have a new version of JSONView available that was compatible with this new system before Firefox 57 released, but unfortunately the deprecation coincided with a crunch period at work and I wasn’t able to make the time. However, last month I had a chance to relax for a bit while out of town for my wife’s show at Archimedes Gallery in Cannon Beach and I took a couple hours to hack out a WebExtension version of JSONView.

I had always been hesitant to write a WebExtension version (or a Chrome version) because the way JSONView works on Firefox is that it actually intercepts the request and rewrites it into HTML. Existing Chrome ports of JSONView ran on every page load and did stuff to try and rewrite the page, imposing a cost and a security exposure on every page. However, Firefox’s implementation of WebExtensions includes StreamFilter, which lets me intercept and rewrite content in the same way as I did before!

Once I had a version working on Firefox, I got something working on Chrome as well. This works differently because StreamFilter is not available on Chrome, so I use a technique that still reformats the page after it loads, but it figures out which pages to act on in a fast, safe way. Thus, for the first time, there is an official JSONView for Google Chrome. If and when Chrome supports StreamFilter, JSONView will start using it transparently, with no changes required.

There are a few bits of functionality that had to be left behind. JSONView had the option to add application/json to your Accept header in order to convince APIs using content negotiation to serve JSON. That turned out not to be a popular way to write APIs, and JSON has since won over XML completely, so this option isn’t as necessary. On the plus side, the method by which I detect content types that contain JSON is much more flexible now, so any content type matching application/*+json will now work with JSONView as well - a long-standing feature request.
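For illustration, the broader matching rule can be sketched as a short check in plain JavaScript. This is an approximation of the behavior described above, not JSONView’s actual source, and the function name is made up:

```javascript
// Hypothetical sketch: decide whether a Content-Type header should be
// treated as JSON, accepting application/json and any application/*+json
// subtype (e.g. application/hal+json), ignoring charset parameters.
function looksLikeJson(contentType) {
  const mediaType = contentType.split(";")[0].trim().toLowerCase();
  return /^application\/(?:[\w.+-]*\+)?json$/.test(mediaType);
}

console.log(looksLikeJson("application/json"));                    // true
console.log(looksLikeJson("application/hal+json; charset=utf-8")); // true
console.log(looksLikeJson("text/html"));                           // false
```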

On Firefox, you may notice that local files open in Firefox’s own JSON viewer instead of JSONView. To fix this, go to about:config and set devtools.jsonview.enabled to false.

JSONView remains a mature project that doesn’t need to change much, but I’m really happy that people can continue using it, and now Chrome users can get the same experience. If you don’t already have it, download JSONView for Firefox or Chrome.

The state of JSONView in 2017

I released JSONView in early 2009 to solve a simple problem I had - I’d been working with a number of JSON APIs, but Firefox had no way of directly viewing JSON documents. At the time, it wouldn’t even display them as text, showing a download dialog instead. Even after Firefox started showing JSON as text, it was still useful to show JSON nicely formatted and highlighted rather than however it was served. In the nearly eight years since its launch, JSONView has grown to about 130,000 daily users and has been downloaded 1.5 million times.

Since the initial few versions, I’ve updated JSONView on average every six months. Following that trend, I released JSONView 1.2.0 in late November. I’ve expected Firefox to subsume JSONView’s functionality ever since I first launched it, but amazingly this hasn’t happened yet. That said, Firefox has recently gained a very nice native JSON viewer hidden behind a developer flag (that’s only on by default in Firefox Developer Edition) and JSONView now offers the option of simply turning on that viewer instead of doing its own thing. Hopefully soon, JSONView won’t be needed at all.

I’ve always applied a minimalist philosophy to the extension. I’ve resisted adding a lot of bells and whistles in favor of a simple extension that does one thing well. This has the side-effect of reducing the rate of change of the extension, since besides the constant work of keeping it up to date with changes to Firefox, it’s pretty much done. There are still capabilities I’d like to add to JSONView, but those that haven’t already happened are usually impossible within the bounds of Firefox’s extension API. Speaking of which, Firefox is generally moving away from the sort of deep integration for extensions that JSONView requires - I hope JSONView is completely obviated by the time it can no longer work at all.

I personally don’t use JSONView anymore at all, because I no longer use Firefox. When I interact with JSON APIs I make use of the Chrome devtools, Emacs restclient-mode, and the jq and gron command-line tools. I’ve continued to maintain the extension, which remains open sourced on GitHub with several contributors helping out over time.

JSONView has been cloned, without permission (but without any resistance - it’s open source after all) as a Chrome extension in at least two instances. One of them became very popular but has been abandoned and is no longer on the Chrome web store because it contains security vulnerabilities. I never made a Chrome version myself because Chrome doesn’t offer the same integration APIs that Firefox does, so I couldn’t make it work the same way. JSONView for Firefox intercepts requests for JSON documents and produces HTML instead. The Chrome ports run on every single page that Chrome loads, try to figure out whether the page looks like a JSON document, and then reformat the document in-place. I never liked the security and performance implications of that, so I’ve avoided an official port. I keep an eye on Chrome’s extension API in case it ever becomes possible to do things the way I want to.

In summary, JSONView the Firefox extension is doing well, but is a mature project that will change slowly, if at all. JSONView for Chrome appears to be dead, and I don’t have immediate plans to produce an official port myself, but I won’t rule it out and I’d accept a well-implemented contribution as long as it matched the design and implementation philosophy of the original.

Save time running FindBugs by excluding SQL checks

FindBugs is a great tool to help you find tricky problems with your Java code, as well as to learn more about Java best practices. Unfortunately, for a large codebase, FindBugs can be extremely slow, often taking several minutes to run. In the last few years, the Java projects I’ve worked on haven’t made use of SQL databases, and I recently discovered that FindBugs’ checks against SQL injection make up a major portion of the overall runtime for the tool. Try creating a findbugs-exclude.xml file, and putting the following in it:

<?xml version="1.0" encoding="UTF-8"?>
<FindBugsFilter>
  <!-- We don't use SQL or JDBC, so remove these very expensive bug detectors. -->
  <Match>
    <Bug code="SQL,ODR" />
  </Match>
</FindBugsFilter>

In my experience, this shaved a whole minute off a multi-minute FindBugs run, with no downside since we don’t use any SQL or JDBC. Of course, if you do use those technologies, these scans are probably still worth the time!
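If you run FindBugs through a build tool rather than standalone, you also have to point it at the filter file. With the findbugs-maven-plugin, for example, the wiring looks roughly like this (the excludeFilterFile parameter comes from that plugin’s configuration; adjust the path for your project layout):

```xml
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>findbugs-maven-plugin</artifactId>
  <configuration>
    <!-- Path is relative to the project root; assumed layout. -->
    <excludeFilterFile>findbugs-exclude.xml</excludeFilterFile>
  </configuration>
</plugin>
```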

Node.js stack traces in Emacs compilation-mode

I’ve been using Emacs regularly for at least 10 years, for all of my programming tasks that don’t involve Java or .NET. Amazingly, for most of that time I’ve used a pretty stock installation of Emacs with only a few cargo-culted customizations stolen from my co-workers’ .emacs files. However, as part of switching to a new job in the last month, I’ve taken the opportunity to invest in my shell and editor, and I’ve fallen deep down the rabbit hole of Emacs customizations. My continually-evolving config is publicly available on GitHub. The difference between stock Emacs and one loaded up with custom packages is astounding - I can’t believe that I had been missing out on all of this. But this is not an article about setting up Emacs to be your perfect editor. Instead, I wanted to provide a tip for Node developers that I hadn’t found the answer to when I went looking.

The other day I was working on a Node.js script, and I was constantly switching back and forth from Emacs to the terminal to run the script, hit some error, remember the line number, go back to Emacs to look up the error, etc. Then I remembered that Emacs has the compile command, which will run a shell command and show the output in a separate window. Now I could split my frame into two side-by-side windows, edit my script on the left, and run compile to test out my script with any test command I liked (and then use recompile or g in the *compilation* buffer to re-run the command). But there’s another neat feature of compilation-mode - it can understand stack traces and provide hotkey access to the line that triggered an error. Unfortunately, the stack traces from Node were not being recognized correctly, and Emacs thought the last number was the line number, when it’s really the column number.
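To make the problem concrete: a V8/Node stack frame puts the column after the line, as in "at doWork (/home/me/script.js:42:13)". The parsing that compilation-mode needs to do can be sketched in plain JavaScript (an illustration of the frame format, not part of my Emacs setup):

```javascript
// Parse a single Node stack-trace frame. The LAST number is the column;
// the line number sits between the two colons before it.
function parseFrame(frame) {
  const m = frame.match(/at (?:.+ \()?(.+):(\d+):(\d+)\)?$/);
  if (!m) return null;
  return { file: m[1], line: Number(m[2]), column: Number(m[3]) };
}

console.log(parseFrame("    at doWork (/home/me/script.js:42:13)"));
// { file: '/home/me/script.js', line: 42, column: 13 }
```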

To fix this, I simply needed to add a new regexp to the list of patterns compilation-mode understands:

;; Add NodeJS error format
(setq compilation-error-regexp-alist-alist
      (cons '(node "^[  ]+at \\(?:[^\(\n]+ \(\\)?\\([a-zA-Z\.0-9_/-]+\\):\\([0-9]+\\):\\([0-9]+\\)\)?$"
                   1 ;; file
                   2 ;; line
                   3) ;; column
            compilation-error-regexp-alist-alist))
(setq compilation-error-regexp-alist
      (cons 'node compilation-error-regexp-alist))

Writing that regexp would have been hard, but of course Emacs has a solution. First I found the current value of compilation-error-regexp-alist-alist using describe-variable in order to get an example regexp to start with. Then, in the compilation buffer where my stack trace was, I ran re-builder, which pops up a little mini window where I could tweak my regexp and see immediately whether or not it was matching correctly. Then I just added this to my init.el, evaluated it with eval-region, and my next compile had the errors highlighted correctly, and I could immediately go to the offending code with next-error or by putting the point on the stack trace line in the compilation buffer and pressing RET.

Hopefully this helps out some fellow Emacs-using Node developers, and serves as a reminder that your tools can always work better for you.

Converted GoPro Videos Don't Work in iMovie on Yosemite


Update June 25, 2015: iMovie 10.0.9 has been released which fixes the problem I describe below.

Recently, I’ve been enjoying using my GoPro camera with iMovie to put together silly little videos. I recently upgraded my MacBook to OS X 10.10 Yosemite, and at the same time accepted the automatic upgrade of iMovie to the new version. After doing so, I realized that all my previously-imported GoPro videos wouldn’t play in iMovie anymore, showing as “empty” tracks instead. I also couldn’t import converted video into iMovie anymore.

iMovie not playing GoPro Videos

To back up a bit, I’ll explain how I usually work with GoPro videos. I shoot video with my GoPro using the “ProTune” mode, and then import those videos onto my Mac. From there, I convert them into an editing-friendly CFHD (CineForm HD) format using GoPro Studio. GoPro Studio understands ProTune, and offers nice color correction and exposure controls - much better than iMovie can offer. After color correcting the clips, I’ll import them into iMovie and then deduplicate them using a script so that I can still use GoPro Studio to make further color tweaks while editing in iMovie. I can then use iMovie, which I quite like, to edit my videos, add music, etc.

Unfortunately, that workflow is now broken because iMovie no longer recognizes the CFHD video format. It can import the original, pre-converted videos from the GoPro, but thanks to the ProTune recording mode, those videos require extensive color manipulation to look good (which GoPro studio does as part of its conversion process). To work around this limitation, I’ll either have to switch to doing all my video editing in GoPro Studio (which is pretty limited) or export my videos to an h.264 encoding from GoPro Studio so iMovie can use them. I’m not really excited about the latter option as that means yet a third copy of my video, and I won’t be able to tweak videos in GoPro Studio after I’ve exported them.

I’m not sure exactly what’s going on, but if I were to hazard a guess it’s that OS X Yosemite and iMovie are continuing with Apple’s migration away from Quicktime (which allowed for third-party video codec plugins, like CineForm’s CFHD plugin) to AV Foundation, the media framework from iOS that Apple has brought back across to OS X. Unlike Quicktime, AV Foundation only supports a limited set of video formats, and does not allow for external plugins that add support for other formats. This is why, for example, Perian no longer works. I had hoped that iMovie, being cut from the same codebase as Final Cut Pro, would be exempt from this dumbing-down. I wonder why this happened, without announcement, and in a point release of the software. I’m also not sure, if this is really the case, what GoPro/Cineform could do about it now that Apple has closed off any avenue for third parties to extend OS X’s media capabilities. It certainly doesn’t make me want to jump up to Final Cut Pro and wonder whether things would be better supported there.

That said, I haven’t done a lot of extra research to validate my theory. I haven’t confirmed that other formats no longer work with iMovie, and I haven’t tried a Mac with Yosemite but without the new iMovie to help narrow down which one may have introduced the change.

I’ve searched around for reports of other people having this problem, and I’ve found a few Apple support forum posts (no answer) but not much else. I opened a support ticket with GoPro, but they were clueless and eventually stopped responding to me. My hope is that by documenting this problem, this post will at least be somewhere for others in my situation to commiserate, or even better, to tell me I’m wrong and I just need to do X to fix everything.

Hard linking videos to save space in your iMovie library


After an initial adjustment period, I’ve come to quite enjoy the new iMovie (2013). While it initially appears to be oversimplified, it actually still has most of the editing and organizational features that were available in older versions of iMovie, and it moves to a Final Cut Pro X based backend that is much faster and more powerful. In fact, it’s pretty clear that the new iMovie is just a reskinned, limited version of Final Cut Pro X. The best part about this is that many operations that used to block and make you wait in older iMovies are now done asynchronously, allowing you to keep working while clips import, movies render, or stabilization is analyzed. This is a huge win even for the small, simple videos that I put together of scuba diving or flying my quadcopter.

One thing about the new iMovie that I strongly dislike is that it now consolidates all your files into an opaque “library” which can only be used by a single application. Any videos that you import into iMovie get copied into this monolithic library, and the individual files cannot be accessed from any other application. This means that you can’t use other editing tools on your video – for example, I prefer to use the free GoPro Studio app to do color correction for my GoPro videos, but I can’t open the copies inside the iMovie library. iPhoto works in a similar way, where all your photos get placed inside a library and are no longer accessible to other tools like Pixelmator. I find this frustratingly restrictive. I prefer to use the filesystem to organize my files, and to be able to use multiple tools in my workflow.

The other problem with the library concept is that iMovie copies videos into its library, meaning you now have two copies on your disk. I have a 512GB SSD on my MacBook, so I don’t have the space to leave duplicate files everywhere. I suppose Apple intends for you to only keep the copy in your iMovie library, but like I said, I prefer to have my files exposed on the filesystem so I can edit them or organize them individually.

To work around these problems, I wrote a very simple Ruby script called dedup-imovie-library. It’s available on GitHub, and you can just drop it into some location on your PATH. The script simply searches your iMovie library (which is really a folder that the Finder treats specially) for movie clips, and finds matching files in a second folder that you specify (the original source of those clips). When it finds a match, it deletes the copy in your iMovie library and replaces it with a hard link to the original.

For example, if my library is called iMovie Library.imovielibrary and my originals are in a folder called originals, I can just open Terminal and run:

cd ~/Movies
dedup-imovie-library iMovie\ Library.imovielibrary originals

Now, those files will exist in two locations but only take up space on disk once, because they’re hard linked to the same file. As an additional benefit, any editing I do to the originals (like color correction) shows up in iMovie, because they’re both using the same file.

There are a few caveats:

  • You still have to import your files into iMovie using the Import command, and wait for them to be fully copied before you run the script.
  • You should close iMovie before using the script, to avoid issues when the clip is replaced.
  • You can only do this when your originals are on the same disk as your iMovie library - hard links can’t go across disks.
  • Backup software like CrashPlan and Time Machine don’t really understand hard links, and will still back up two copies of your files.

Another option besides hardlinking the files together is to use a symbolic link instead. Symbolic links are simply a pointer to the original file, and as far as I can tell iMovie 2013 has no problem with using them. Symbolic links would solve the problems of not being able to link across disks, and would not take up double space in backups. However, iMovie ‘11 had a lot of weird behavior when using symbolically linked files (such as not being able to “favorite” sections of a clip) and I haven’t had a chance to test this with iMovie 2013. I’m also concerned about what would happen if I dragged an event into another library where the videos were stored as symbolic links. I’ll try out the alternative at some point and write another article if it works well.

Cleanly declaring AngularJS services with CoffeeScript

I’ve recently fallen in love with AngularJS for building JavaScript applications. I’ve also come to rely on CoffeeScript to provide a more comfortable, concise syntax than plain JavaScript. It’s common to define many services when writing AngularJS applications. Services are just plain JavaScript objects that are registered with AngularJS’ dependency injection system, which makes them available to other parts of your application. If you have experience with Java, this will be familiar from Spring or Guice. I prefer to have very little logic in my controllers, and instead create a domain model from layers of services that each have their own responsibility.

One thing I’ve gone back and forth on is how best to declare AngularJS services using CoffeeScript. Let’s imagine a somewhat contrived example service named Tweets that depends on the built-in $http service. When it’s constructed it saves a timestamp, makes an HTTP call and saves the result to a property. Here’s a pretty straightforward way to declare this service using CoffeeScript’s class syntax:

app.service 'Tweets', ['$http', class Tweets
  constructor: (@$http) ->
    @timestamp = Date.now() - 900000
    @getNewTweets()

  getNewTweets: ->
    request = @$http.get '/tweets', params: { ts: @timestamp }
    request.then (result) =>
      @tweets = result.data
      @timestamp = Date.now()
]

This works pretty well. The first time this service is needed in the application, AngularJS will call new Tweets() and save that one (and only one) instance to be provided to any other code that requests it. Because AngularJS calls the Tweets constructor, the first call to getNewTweets is performed right away. But one thing bugs me - a reference to $http is saved as a property of the Tweets instance in the constructor. First, this results in some ugly-looking code like @$http thanks to AngularJS’ habit of naming services with $. It gets even worse when you depend on a lot of different things, and you now have to refer to them all via @ (which is this in CoffeeScript, by the way). Second, this exposes the $http service to anything that depends on Tweets: any consumer can simply access Tweets.$http, which is a violation of encapsulation. It doesn’t look too bad in this example, but imagine a deep chain of services, each with its own responsibility, declared with this method. Any caller could reach down into that chain and pull out dependencies of dependencies. Or worse, they could change the reference to $http to point to something else.
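The difference between a dependency stored as a property and one captured by a closure is easy to demonstrate in plain JavaScript (illustrative names, not the actual service):

```javascript
// A dependency saved as a property is visible to every consumer;
// one captured by a closure is usable internally but not reachable.
function makeExposed(dep) {
  return { dep: dep, use() { return this.dep; } };
}
function makePrivate(dep) {
  return { use() { return dep; } }; // dep lives only in the closure
}

const a = makeExposed("$http");
const b = makePrivate("$http");
console.log(a.dep);   // "$http" (leaked; callers could even reassign it)
console.log(b.dep);   // undefined (hidden in the closure)
console.log(b.use()); // "$http" (still usable internally)
```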

Let’s try something else. If we are only ever going to have one instance of this class, why have a class at all? We can use the factory method instead of service (factory just calls the function you provide instead of calling new on it) and have our factory function return a literal object:

app.factory 'Tweets', ['$http', ($http) ->
  obj =
    timestamp: Date.now() - 900000

    getNewTweets: ->
      request = $http.get '/tweets', params: { ts: @timestamp }
      request.then (result) =>
        @tweets = result.data
        @timestamp = Date.now()

  obj.getNewTweets()
  obj
]

We’ve solved the problem of exposing $http, since now it’s simply captured by the function closure of our getNewTweets function, making it visible to that method but not elsewhere. However, we now have to assign our object to a variable so that we can do our initialization outside the object declaration before returning it. This is ugly, and it feels out of order. A more subtle problem is that stack traces and object dumps in the console will print this as Object, whereas in the first example it will say Tweets. Never underestimate the value of clear debug output.

There’s also a potential for confusion around CoffeeScript’s function binding fat arrow =>, which will bind to the this of the factory function if it’s used when declaring methods on the object literal. I actually ran into this problem when I wanted to bind a function to its object so it could be passed as a callback to another function, and ended up with it bound to the wrong this.
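The same pitfall is easy to reproduce in plain JavaScript, where arrow functions behave like CoffeeScript’s fat arrow (an illustrative example, not the code I was writing):

```javascript
// An arrow function used as an object-literal "method" closes over the
// enclosing function's `this` rather than binding to the object.
function build() {
  // Here `this` is build's `this` (undefined or the global object when
  // called plainly), NOT the object literal returned below.
  return {
    name: "Tweets",
    regular() { return this.name; },              // bound to the object
    arrow: () => (this ? this.name : undefined),  // bound to build's `this`
  };
}

const tweets = build();
console.log(tweets.regular()); // "Tweets"
console.log(tweets.arrow());   // undefined in Node (not the object's name)
```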

Note that I can declare timestamp as a property of the object literal in this form, whereas in the first version I set it in the constructor - this is because when CoffeeScript creates a class, all the properties are set on the prototype, not the instance, so you can’t store per-instance state there. Well, you can get away with it because AngularJS will only make one instance of your class, but you’re still modifying the prototype rather than the real instance, which isn’t what you want. It pays to understand exactly what CoffeeScript is generating for you.

Let’s go back to using class and try again:

app.factory 'Tweets', ['$http', ($http) ->
  class Tweets
    constructor: ->
      @timestamp = Date.now() - 900000
      @getNewTweets()

    getNewTweets: ->
      request = $http.get '/tweets', params: { ts: @timestamp }
      request.then (result) =>
        @tweets = result.data
        @timestamp = Date.now()

  new Tweets()
]

The differences between this and our first example are subtle. First, we’re using factory instead of service, like in the second example. This means instead of having our dependencies provided to the Tweets constructor, they’re provided to the factory function, and our declaration of the Tweets class can simply close over those parameters, meaning we no longer need to take them as constructor arguments. That solves the problem of $http being visible to other objects, and means we don’t need to write @$http. As I’ve explored AngularJS, I’ve actually found myself always using factory instead of service because it provides more flexibility in how I provide my service value.

However, we also have to be very careful to remember to actually instantiate our class at the end, since AngularJS won’t be doing it for us. That could be really easy to miss after a long class definition.

We can change this a little bit to make it better:

app.factory 'Tweets', ['$http', ($http) ->
  new class Tweets
    constructor: ->
      @timestamp = Date.now() - 900000
      @getNewTweets()

    getNewTweets: ->
      request = $http.get '/tweets', params: { ts: @timestamp }
      request.then (result) =>
        @tweets = result.data
        @timestamp = Date.now()
]

Now we call new right away to instantiate the class we’re defining. I think this is very clear, and there’s nothing after the class definition to forget. This is the pattern I’ve settled into whenever I declare singleton services – it’s clean, readable and concise, and it solves all the problems we’d identified:

  1. Dependencies of our service are not exposed.
  2. Our service will print with a meaningful name in the console.
  3. There is no out-of-order code, or extra code after the class declaration.
  4. We never have to put the characters @ and $ next to each other. Sorry, I’m still recovering from exposure to Perl and I can’t take it.

I’ve written up a more detailed example in this gist, which shows the pattern I’ve been using to produce ActiveRecord-style APIs with private dependencies. I hope this helps you write clean, understandable AngularJS services in CoffeeScript.

Casting the Apple of Eden from bronze

Early in August, my wife, Eva Funderburgh, texted me from her bronze casting class:

Hey! I’m one of only two people in the class. I get as many investments as I want. Can you come up with a good plan to make an apple of Eden model by the time I’m home?

Until recently, I’ve been a big fan of the Assassin’s Creed series of video games, especially since it intersects with my interests in parkour and climbing, history, and science fiction. One of the key artifacts in the games’ fiction is the Apple of Eden, an ancient piece of technology that gives its owner magical powers and the ability to influence people’s minds. The plot of the games often focuses on finding or controlling these artifacts.

My wife had been branching out from ceramic sculpture to bronze casting, and we’d talked a bit about making a replica prop of the Apple out of bronze, but I never wanted to take up space and materials that she could use for her actual art. But now we had the opportunity, and better yet, a deadline. Within a week, we needed to have the wax form of the Apple built and ready.

The Apple of Eden

While I don’t build replica props myself (with the exception of this project), I enjoy reading Harrison Krix’s exhaustive build logs for the amazing props he builds. The rest of this article will detail how we designed and built a life-sized, illuminated replica of the Apple of Eden cast in bronze.

The first task was to actually design the Apple. We knew it needed to be a sphere, and from game screenshots we figured out a rough size based on how it looked in characters’ hands. More troublesome was the pattern on the surface of the sphere, which was hard to pick out from screenshots, and which was not consistent from shot to shot. We ended up designing our own pattern of channels inspired by the common patterns from screenshots and fan art.

Apple of Eden research

To start with, we needed a hollow wax sphere. In lost wax casting, a wax form gets encased in plaster and then melted out, and bronze is poured into the holes that are left behind. To get a good sphere shape, we made a plaster mold from a cheap toy ball that was already the right size.

Iron Man ball

First, half of the ball was covered in clay to make a “rim” for the plaster to form up against, and then we mixed up a batch of plaster and slopped it onto one side of the ball with our hands. As long as the interior surface is nice, it doesn’t matter what the outside looks like.

Eva with the bottom half of the mold

Plaster hardens in minutes, after which we removed the clay. The clay had some registration keys (little dots) in it so that it would be easy to pair up the two halves of the plaster mold again. A bit of clay was used to preserve the pour spout so we could pour wax into the finished mold.

Bottom half of the mold

Adding plaster for the top half

Once the mold was finished, we heated up casting wax in an old crock pot, and poured the wax into the mold. By pouring in wax multiple times and sloshing it around, we were able to get a mostly even shell of wax with a nice spherical outside. It was important to make sure the shell was still thick enough that I could carve the decorative channels in it without breaking through. We cast a few wax spheres so that we could experiment with different designs and carving techniques.

Heating wax

The channels were first drawn lightly on the wax, then carved in using a clay-working tool that has a looped blade at the end. I tried to make the depth nice and consistent, but this was all freehand. I didn’t mind a bit of variation because I wanted this to look like an ancient, handmade artifact rather than a machined, perfect piece. The pour spout was widened to provide a hole to stuff electronics into. Eva turned a cover for the hole on her potter’s wheel directly out of a bit of wax.

Carving the channels

At this point the wax looked generally good, but it had a lot of little nicks and scratches and even fingerprints from where I was working with it. Eva wiped the surface with turpentine, which smoothed out the surface quite a bit.

Smoothed with turpentine

Once we were happy with the shape, Eva sprued the wax onto a sprue system that she was sharing with a bunch of other small pieces. Where many artists would make a single piece at a time, Eva makes complicated sprues of many small pieces, because her art is generally on the small side. The sprue system provided channels for the bronze to flow to the piece, and for air to escape as the bronze flowed through all the spaces. Eva had to build a complex system of large inlet paths with many small outlets, being careful to make sure that there was always a vent at the highest point in the piece, so that air bubbles wouldn’t get trapped. Finally, bronze pins were inserted at points around the sphere so that the plaster in the middle wouldn’t just drop when all the wax was melted out.

Sprue system

Then, we moved to Pratt Fine Arts Center, and all the sprued pieces were placed in cylinders made of tar paper and chicken wire, and filled with a mixture of sand and plaster. The sprue system was shaken a bit to get rid of bubbles and make sure the plaster got into every corner. The plaster cylinders, called “investments”, were then moved into a large kiln, upside down, and heated to 1200°F to melt out all the wax and burn out any other organic material (like the toothpicks used in the sprue system).

Investing the wax

We returned a week later for the bronze pour. We helped dig out the sand pit, and half-buried the investments in sand to contain any bronze in case they split open. Once the bronze had melted in the crucible, it was lifted up by a crane, and Eva helped pour the liquid bronze from the crucible into the pour holes in the investment. This video shows the process much more clearly:

The bronze inside the plaster investments cooled off in about a half hour, and we were able to take axes and hammers to the plaster to free the bronze within. After the pieces were separated from the plaster, they were pressure-washed to get rid of more plaster. The bronze looked super crufty at this point because of oxidation on the surface, and flashing and bubbles that had stayed in the investment and got picked up by the bronze.

Freed from the investment

Taking the bronze home, we separated the pieces from the sprues using a cutting wheel on a die grinder or a hacksaw. The extra bronze from the sprue gets recycled for the next pour.

Cutting off the sprue

The bubbles were popped off with a small chisel, and then Eva went over the surface with a coarse ScotchBrite pad on her die grinder to smooth off the surface. She then took another pass with a fine ScotchBrite pad, and finished with a wire brush. The transformation is remarkable - at this point the Apple was glowing, bright metallic bronze.


However, we didn’t want an Apple that looked brand new (or looked like shiny metal). This is supposed to be an ancient artifact, and fortunately there’s an entire world of bronze patinas that can be used to transform bronze surfaces with different colors and patterns. First, we tried applying a liver of sulfur and ferric nitrate patina, brushing on the solution while heating the surface of the bronze with a blowtorch. Unfortunately, there was a bit too much ferric nitrate, and a bit too much heat. The effect this produced was striking, with metallic, iridescent splatters forming over the smoother parts of the Apple, and a dark red patina for the rest.

First patina

As cool as this patina looked, it wasn’t exactly what we were looking for, but one of the nice things about bronze is that you can always just remove the patina and start over. Eva scrubbed off this patina with a wire brush and tried again, this time with less ferric nitrate and going lighter on the blowtorch. Despite using the same process, the result was very different – a dark, aged bronze that was perfect for the Apple. Eva sprayed on some Permalac to seal in the patina.

Second patina

One feature I was particularly excited about was lighting. The Apple glows along its surface channels, and I planned on using EL wire to provide this effect. EL wire glows uniformly along its length, uses very little power, and looks very bright in low light. First, Eva polished the inside of the channels with a wire brush so they’d reflect the wire brightly. Then, I worked out how to run the EL wire I had bought through all the channels in a single continuous line with a minimum of overlap. This required drilling holes at strategic intersections and running some of the wire inside the sphere. We tried some different options for attaching the wire, like silicone caulk with glow-in-the-dark powder mixed in, but in the end it looked best to just superglue the wire into the channels. The battery pack and transformer fit snugly inside the sphere and connected to one end of the glowing wire.

Planning EL wire

The last bit to figure out was the cap. Since it was cast in bronze, it was hard to make it mate up perfectly with the hole in the sphere, so we hadn’t bothered. Instead, we bridged the gap by heating up some Shapelock plastic, forming it around the base of the cap, and then pressing it into the hole. We trimmed all the excess plastic so it wasn’t visible. Once the Shapelock cooled, it formed a perfectly fitting seal: the cap pressed onto the sphere and stayed put unless pried off with fingernails.

We really had two deadlines – the first was the bronze pour, but we also wanted to have the Apple finished in time for PAX, a big gaming convention that’s held every year in Seattle. We didn’t have time for full Assassin’s Creed costumes, so instead we carried the Apple around the show and took pictures of other video game characters holding the Apple. Oddly, we didn’t find any Assassin’s Creed cosplayers.

Joffrey with the Apple

I’m really glad that we got the opportunity to work on this fun and nerdy project. The Apple sits on my mantle now, and it’s fun to light up and hold – there’s no substitute for the weight and feel of real metal. I plan to bring the Apple with me to PAX again next year.

You can view all the progress photos and PAX photos on Flickr.

Front of the Apple
Side of the Apple
Back of the Apple
Back, lit

Maruku is obsolete

A few weeks ago I finally released Maruku 0.7.0 after a short beta that revealed no serious issues. This was the first release in four years of the venerable Ruby Markdown library. I inherited Maruku over a year ago and I’m very proud of the work I’ve put into it during that year. I’m glad that I was able to fix many of its bugs and update it to work in a modern Ruby environment. However, I want to recommend that, if you have a choice, you should choose a different Markdown library instead of Maruku.

When Natalie Weizenbaum handed Maruku over to me, my interest in the library stemmed from its use in Middleman, and my desire to default to a pure-Ruby Markdown processor in the name of compatibility and ease of installation. The two options were Maruku and Kramdown. Maruku was the default Markdown engine for the popular Jekyll site generator, but was old and unmaintained. It also used the problematic GPLv2 license, which made its use from non-GPL projects questionable. Kramdown was inarguably a better written library, with active maintenance, but under the even-more-problematic GPLv3 license. The GPLv3 is outright forbidden in many corporate environments because of its tricky patent licensing clauses, plus it has all the issues of GPLv2 on top. I emailed Thomas Leitner, Kramdown’s maintainer, about changing the license to a more permissive license like the MIT license (used widely in the Ruby community) but he declined to change it, so I set to work on Maruku.

As I explained in my initial blog post, my plan was to fix up Maruku’s bugs and relicense it under the MIT license and release that as version 0.7.0. I did that, and then the plan was to release 1.0.0:

I’m thinking about a new API and internals that are much more friendly to extension and customization, deprecating odd features and moving most everything but the core Markdown-to-HTML bits into separate libraries that plug in to Maruku, and general non-backwards-compatible overhauls. […] Overall, my goal for Maruku is to make it the default Markdown engine for Ruby, with a focus on compatibility (across platforms, Rubies, and with other Markdown interpreters), extensibility, and ease of contribution.

However, in March of 2013, Mr. Leitner decided to relicense Kramdown under the MIT license starting with version 1.0.0. I continued to work on finishing Maruku 0.7.0, but I knew then that for people looking for a capable, flexible, well-written pure-Ruby Markdown library, Kramdown was now the correct choice. All of the things I wanted to do in Maruku for 1.0.0 were in fact already done in Kramdown – better code organization, better modularity and extensibility, good documentation, a better parser, and improved performance. Soon after Kramdown 1.0.0 was released, I switched Middleman to depend on it instead of Maruku.

I will continue to maintain Maruku and make bugfixes, because it’s the right thing to do. That said, I’m not sure I can justify doing much work on the 1.0.0 milestone knowing that, given the choice, I would use Kramdown or Redcarpet over Maruku. My recommendation to the Ruby community, and Ruby library authors, is the same: use a different Markdown library, or better yet abstract away the choice via Tilt. Please feel free to continue to send pull requests and issues to the Maruku repository, I’ll still be there.


My wife Eva Funderburgh (Hollis) is a professional artist, and has been making sculptures full-time since we both moved to Seattle in 2005. I don’t have much talent for clay, so my main contribution is to help with her website. A few weeks ago we launched the third iteration of her site, which we’d been working on for several months.

Old site

The previous version (shown above) was launched around 2008 and was basically just a Wordpress theme. Eva had hand-drawn some creatures and wiggly borders to make it all feel less digital, but there’s only so far you could go with that on a web page. The resulting site had a lot of character, but ultimately failed to put her gorgeous art front and center. Worse, it had failed to keep up with the increasing sophistication and complexity of her work. It also scaled poorly to mobile devices, and just plain looked outdated.

I had a lot of goals for the new site. First and foremost, it needed to be dominated by pictures of Eva’s art, especially on the homepage. The way I see it, any part of the screen that doesn’t include a picture of her sculptures is wasted space. I also wanted the design to be clean, contemporary, and focused. We have so many great pictures of her creatures that any ornamentation or empty space is a wasted opportunity. Beyond that, I had a bunch of technical goals befitting a website built in 2013. The first was that the site should work well on mobile, with a responsive design that supported everything from phones to tablets to widescreen desktop monitors. A related goal was that the site should support high-DPI or “retina” screens – both of us are eagerly awaiting new Retina MacBook Pros, and browsing with high-resolution phones and tablets is more and more popular. It seems safe to assume that screen resolutions will only increase over time, and I wanted Eva’s art to appear as sharp and beautiful as it could on nice devices. Also related to the goal to work on mobile, I wanted the site to be fast. This meant minimizing download size, number of external resources, and JavaScript computation. It also meant leveraging CSS transitions and animations to provide smooth, GPU-accelerated motion to give the site a nice feel. Helpfully, one of the decisions I made up front was that this site was going to target the latest browsers and the advanced CSS and JavaScript features they support. Fortunately, most browsers aggressively update themselves these days, so the age of supporting old browsers for years and years is coming to a close.
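One common way to serve retina screens is to export each image at both normal and double resolution, then pick between them at load time based on the display’s pixel density. The sketch below shows the idea; the `@2x` naming scheme and the `assetForDensity` helper are my own illustration, not the site’s actual code.

```javascript
// Sketch: pick the 1x or 2x image asset based on display pixel density.
// The "@2x" suffix convention here is hypothetical, not from the site.
function assetForDensity(baseName, ext, pixelRatio) {
  // Any density above 1 gets the double-resolution version.
  return pixelRatio > 1 ? baseName + "@2x." + ext : baseName + "." + ext;
}

// In the browser this would be driven by window.devicePixelRatio:
// var src = assetForDensity("tile-07", "jpg", window.devicePixelRatio || 1);
console.log(assetForDensity("tile-07", "jpg", 1)); // tile-07.jpg
console.log(assetForDensity("tile-07", "jpg", 2)); // tile-07@2x.jpg
```

The same check can be done once up front and cached, since a window rarely moves between displays of different densities mid-session.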

The site itself was built using Middleman, an open-source static website generator I help maintain. This let me write all my JavaScript in CoffeeScript, which I have come to appreciate after a rocky first experience, and use Haml and Sass/Compass for the HTML and CSS respectively. One of my coworkers challenged me to write all my JavaScript without jQuery, which turned out to be pretty straightforward, improved the site’s performance, and shaved quite a bit off the overall download. I did, however, rely on the indispensable Underscore for templating and utilities.

New site

The redesign started with the new homepage, which presents a grid of cropped pictures that fill up the whole window. First, we chose a basic grid unit or “block” that the whole page is divided into. Eva selected a bunch of pictures she liked, and used Lightroom to crop them all into tiles of specific sizes, with dimensions of 1x1, 2x1, or 3x1 blocks. She also exported double-resolution versions of each block for retina displays. Each picture was associated with a full-frame version on Flickr. JavaScript on the homepage uses that information to randomly select images and lay them down like a bricklayer building a wall, creating a solid grid of pictures. If the browser reports itself as high-DPI, the double-resolution images are used to provide retina sharpness. A quick CSS transition animates each block into the page as it loads. To make the page responsive to different browser sizes, there are media-query breakpoints at each multiple of the block size, and when the browser is resized, the blocks are laid out again. You can see the effect by resizing your browser – it will reduce the width of the grid one block at a time. Once the browser gets below a certain size, the block size is halved to fit more images onto a smaller screen. Using blocks for the breakpoints instead of the classic “iPhone, iPad, Desktop” breakpoints means that the design works nicely on many devices and browser window sizes – it looks good on everything from non-retina smartphones up to HDTVs, and not just the Apple devices I happen to own.
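The brick-laying placement can be sketched as a simple greedy pass: place tiles left to right, and wrap to a new row whenever the next tile won’t fit in the remaining columns. This is my own minimal reconstruction of the idea, not the site’s actual code, and it ignores details like randomization and gap-filling.

```javascript
// Sketch of brick-laying layout: tiles are 1 block tall and `w` blocks
// wide (1, 2, or 3); the grid is `cols` blocks across. Tiles are placed
// left to right, wrapping to a new row when one won't fit.
function layoutTiles(tileWidths, cols) {
  var positions = [];
  var x = 0, y = 0;
  tileWidths.forEach(function (w) {
    if (x + w > cols) { // no room left on this row: start the next one
      x = 0;
      y += 1;
    }
    positions.push({ x: x, y: y, w: w });
    x += w;
  });
  return positions;
}

// A 4-block-wide grid: the 3-wide tile wraps to the second row.
console.log(layoutTiles([2, 1, 3, 1, 1], 4));
```

Re-running a function like this on every resize (and at each block-multiple breakpoint) is cheap, since it only does arithmetic and touches the DOM once per tile.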

On mobile

The other part of the homepage is the “lightbox” that appears when each tile is tapped. This is built from scratch rather than using any lightbox plugin, and uses CSS transitions to display smoothly. It also allows for keyboard navigation, so you can use the arrow keys to browse through pictures. The full-size image for the lightbox is linked directly from Flickr, and I use the Flickr API to select the perfect image size that won’t stretch and blur, but won’t waste time with a too-large download. This can end up showing a dramatically different sized image between a phone and a Retina MacBook!
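Picking the right Flickr size boils down to choosing the smallest available size that still covers the target display width in physical pixels. Here is a minimal sketch of that selection; the `pickSize` helper and the sample size list are my own illustration (the shape loosely mimics what Flickr’s `flickr.photos.getSizes` call returns), not the site’s actual code.

```javascript
// Sketch: choose the smallest available image size that still covers the
// target display width (in physical pixels), so the image is never
// stretched and blurry, but also never larger than needed.
function pickSize(sizes, targetWidth) {
  var sorted = sizes.slice().sort(function (a, b) { return a.width - b.width; });
  for (var i = 0; i < sorted.length; i++) {
    if (sorted[i].width >= targetWidth) return sorted[i];
  }
  return sorted[sorted.length - 1]; // nothing big enough: use the largest
}

// Hypothetical size list in the spirit of flickr.photos.getSizes output.
var sizes = [
  { label: "Medium", width: 500 },
  { label: "Large", width: 1024 },
  { label: "Original", width: 3000 }
];

// A 640-px-wide phone at 2x density needs 1280 physical pixels.
console.log(pickSize(sizes, 640 * 2).label); // "Original"
```

The target width is the lightbox’s CSS width multiplied by the device pixel ratio, which is why a small retina phone can end up requesting a larger image than a big non-retina monitor.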


After the homepage, the rest of the site was relatively easy. For the most part, it’s still a Wordpress theme, though it reuses the responsive breakpoints at each integer multiple of the block size to provide nice reading on every screen. I also reused the exact same code from the homepage to provide a row of random tiles at the top of the page. Beyond that, there are just some SVG icons for the social media links (to ensure that they too display nicely on retina screens) and a few more subtle fit-and-polish tweaks to make it feel great. The “Art” page was designed by hand to provide high-resolution banners linking to Flickr, Etsy, and the various galleries that showcase Eva’s work, and the rest is editable directly from Wordpress so that Eva can maintain and update the content. A series of IFTTT rules make it easier for her to update the site while she’s working by piping specially-tagged Flickr uploads into the blog (and to Tumblr, Facebook, and Twitter).

Art page

I’m rarely satisfied with the website designs I produce, but this time I’m extremely proud of what we’ve built. Please check out the site, resize your browser, try it out on your phone, but most of all, enjoy the pretty pictures.