Blog Archives

JSONView is back

On November 14th of 2017, Firefox 57 was released, and JSONView stopped working. This was not unexpected - Mozilla had been warning for a while that they were going to shut off their old extension system completely and move to the WebExtension system pioneered by Google Chrome. All of the Firefox extensions using the old system were silently disabled.

I had hoped to have a new version of JSONView available that was compatible with this new system before Firefox 57 released, but unfortunately the deprecation coincided with a crunch period at work and I wasn’t able to make the time. However, last month I had a chance to relax for a bit while out of town for my partner’s show at Archimedes Gallery in Cannon Beach and I took a couple hours to hack out a WebExtension version of JSONView.

I had always been hesitant to write a WebExtension version (or a Chrome version) because the way JSONView works on Firefox is that it actually intercepts the request and rewrites it into HTML. Existing Chrome ports of JSONView ran on every page load and tried to rewrite the page after the fact, imposing a cost and a security exposure on every page. However, Firefox’s implementation of WebExtensions includes StreamFilter, which lets me intercept and rewrite content in the same way as I did before!
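In rough terms, the Firefox approach looks something like the following TypeScript sketch. This is not the actual JSONView source - it assumes the standard WebExtension webRequest API, and transformJsonToHtml is a made-up placeholder for the real formatting code.

// A minimal sketch of intercepting and rewriting a response with Firefox's
// StreamFilter API (browser.webRequest.filterResponseData). Not the actual
// JSONView source - the real extension also checks the response Content-Type
// and does proper HTML escaping and formatting.
declare const browser: any; // WebExtension API global provided by Firefox

// Hypothetical placeholder for the real JSON-to-HTML formatter.
function transformJsonToHtml(jsonText: string): string {
  return "<pre>" + JSON.stringify(JSON.parse(jsonText), null, 2) + "</pre>";
}

browser.webRequest.onBeforeRequest.addListener(
  (details: { requestId: string }) => {
    const filter = browser.webRequest.filterResponseData(details.requestId);
    const decoder = new TextDecoder("utf-8");
    const encoder = new TextEncoder();
    let body = "";

    // Collect the raw response as it streams in.
    filter.ondata = (event: { data: ArrayBuffer }) => {
      body += decoder.decode(event.data, { stream: true });
    };

    // Once the response is complete, write rewritten HTML in its place.
    filter.onstop = () => {
      filter.write(encoder.encode(transformJsonToHtml(body)));
      filter.close();
    };
  },
  { urls: ["<all_urls>"], types: ["main_frame"] },
  ["blocking"]
);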

Once I had a version working on Firefox, I got something working on Chrome as well. This works differently because StreamFilter is not available on Chrome, so I use a technique that still reformats the page after it loads, but it figures out which pages to act on in a fast, safe way. Thus, for the first time, there is an official JSONView for Google Chrome. If and when Chrome supports StreamFilter, JSONView will start using it transparently, with no changes required.

There are a few bits of functionality that had to be left behind. JSONView had the option to add application/json to your Accept header in order to convince APIs using content negotiation to serve JSON. That turned out not to be a popular way to write APIs, and JSON has since won over XML completely, so this option isn’t as necessary. On the plus side, the method by which I detect content types that contain JSON is much more flexible now, so any content type matching application/*+json will now work with JSONView as well - a long-standing feature request.
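As an illustration, a matching rule along these lines (a hypothetical sketch, not the exact pattern JSONView ships with) accepts both plain JSON and the +json structured-syntax suffixes:

// Hypothetical sketch of a content-type check covering application/*+json;
// not the exact pattern JSONView uses.
const jsonContentType = /^application\/(?:[\w.+-]*\+)?json\s*(?:;.*)?$/i;

console.log(jsonContentType.test("application/json"));         // true
console.log(jsonContentType.test("application/vnd.api+json")); // true
console.log(jsonContentType.test("text/html"));                // false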

On Firefox, you may notice that local files open in Firefox’s own JSON viewer instead of JSONView. To fix this, go to about:config and set devtools.jsonview.enabled to false.

JSONView remains a mature project that doesn’t need to change much, but I’m really happy that people can continue using it, and now Chrome users can get the same experience. If you don’t already have it, download JSONView for Firefox or Chrome.

Node.js stack traces in Emacs compilation-mode


I’ve been using Emacs regularly for at least 10 years, for all of my programming tasks that don’t involve Java or .NET. Amazingly, for most of that time I’ve used a pretty stock installation of Emacs with only a few cargo-culted customizations stolen from my co-workers’ .emacs files. However, as part of switching to a new job in the last month, I’ve taken the opportunity to invest in my shell and editor, and I’ve fallen deep down the rabbit hole of Emacs customizations. My continually-evolving config is publicly available on GitHub. The difference between stock Emacs and one loaded up with custom packages is astounding - I can’t believe that I had been missing out on all of this. But this is not an article about setting up Emacs to be your perfect editor. Instead, I wanted to share a tip for Node developers that I couldn’t find an answer to when I went looking.

The other day I was working on a Node.js script, and I was constantly switching back and forth from Emacs to the terminal to run the script, hit some error, remember the line number, go back to Emacs to look up the error, etc. Then I remembered that Emacs has the compile command, which will run a shell command and show the output in a separate window. Now I could split my frame into two side-by-side windows, edit my script on the left, and run compile to test out my script with any test command I liked (and then use recompile or g in the *compilation* buffer to re-run the command). But there’s another neat feature of compilation-mode - it can understand stack traces and provide hotkey access to the line that triggered an error. Unfortunately, the stack traces from Node were not being recognized correctly, and Emacs thought the last number was the line number, when it’s really the column number.

To fix this, I simply needed to add a new regexp to the list of patterns compilation-mode understands:

;; Add NodeJS error format
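;; Matches Node stack frame lines like:
;;     at Object.<anonymous> (/home/me/script.js:10:15)
;;     at /home/me/script.js:10:15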
(setq compilation-error-regexp-alist-alist
      (cons '(node "^[  ]+at \\(?:[^\(\n]+ \(\\)?\\([a-zA-Z\.0-9_/-]+\\):\\([0-9]+\\):\\([0-9]+\\)\)?$"
                         1 ;; file
                         2 ;; line
                         3 ;; column
                         )
            compilation-error-regexp-alist-alist))
(setq compilation-error-regexp-alist
      (cons 'node compilation-error-regexp-alist))

Writing that regexp would have been hard, but of course Emacs has a solution. First I found the current value of compilation-error-regexp-alist-alist using describe-variable in order to get an example regexp to start with. Then, in the compilation buffer where my stack trace was, I ran re-builder, which pops up a little mini window where I could tweak my regexp and see immediately whether or not it was matching correctly. Then I just added this to my init.el, evaluated it with eval-region, and my next compile had the errors highlighted correctly, and I could immediately go to the offending code with next-error or by putting the point on the stack trace line in the compilation buffer and pressing RET.

Hopefully this helps out some fellow Emacs-using Node developers, and serves as a reminder that your tools can always work better for you.

Casting the Apple of Eden from bronze

Early in August, my partner, Eva Funderburgh, texted me from her bronze casting class:

Hey! I’m one of only two people in the class. I get as many investments as I want. Can you come up with a good plan to make an apple of Eden model by the time I’m home?

Until recently, I’ve been a big fan of the Assassin’s Creed series of video games, especially since it intersects with my interests in parkour and climbing, history, and science fiction. One of the key artifacts in the games’ fiction is the Apple of Eden, an ancient piece of technology that gives its owner magical powers and the ability to influence people’s minds. The plot of the games often focuses on finding or controlling these artifacts.

My partner had been branching out from ceramic sculpture to bronze casting, and we’d talked a bit about making a replica prop of the Apple out of bronze, but I never wanted to take up space and materials that she could use for her actual art. But now we had the opportunity, and better yet, a deadline. Within a week, we needed to have the wax form of the Apple built and ready.

The Apple of Eden

While I don’t build replica props myself (with the exception of this project), I enjoy reading Harrison Krix’s exhaustive build logs for the amazing props he builds. The rest of this article will detail how we designed and built a life-sized, illuminated replica of the Apple of Eden cast in bronze.

The first task was to actually design the Apple. We knew it needed to be a sphere, and from game screenshots we figured out a rough size based on how it looked in characters’ hands. More troublesome was the pattern on the surface of the sphere, which was hard to pick out from screenshots, and which was not consistent from shot to shot. We ended up designing our own pattern of channels inspired by the common patterns from screenshots and fan art.

Apple of Eden research

To start with, we needed a hollow wax sphere. In lost wax casting, a wax form gets encased in plaster and then melted out, and bronze is poured into the cavity left behind. To get a good sphere shape, we made a plaster mold from a cheap toy ball that was already the right size.

Iron Man ball

First, half of the ball was covered in clay to make a “rim” for the plaster to form up against, and then we mixed up a batch of plaster and slopped it onto one side of the ball with our hands. As long as the interior surface is nice, it doesn’t matter what the outside looks like.

Eva with the bottom half of the mold

Plaster hardens in minutes, after which we removed the clay. The clay had some registration keys (little dots) in it so that it would be easy to pair up the two halves of the plaster mold again. A bit of clay was used to preserve the pour spout so we could pour wax into the finished mold.

Bottom half of the mold

Adding plaster for the top half

Once the mold was finished, we heated up casting wax in an old crock pot, and poured the wax into the mold. By pouring in wax multiple times and sloshing it around, we were able to get a mostly even shell of wax with a nice spherical outside. It was important to make sure the shell was still thick enough that I could carve the decorative channels in it without breaking through. We cast a few wax spheres so that we could experiment with different designs and carving techniques.

Heating wax

The channels were first drawn lightly on the wax, then carved in using a clay-working tool that has a looped blade at the end. I tried to make the depth nice and consistent, but this was all freehand. I didn’t mind a bit of variation because I wanted this to look like an ancient, handmade artifact rather than a machined, perfect piece. The pour spout was widened to provide a hole to stuff electronics into. Eva turned a cover for the hole on her potter’s wheel directly out of a bit of wax.

Carving the channels

At this point the wax looked generally good, but it had a lot of little nicks and scratches and even fingerprints from where I was working with it. Eva wiped the surface with turpentine, which smoothed out the surface quite a bit.

Smoothed with turpentine

Once we were happy with the shape, Eva sprued the wax onto a sprue system that she was sharing with a bunch of other small pieces. Where many artists would make a single piece at a time, Eva makes complicated sprues of many small pieces, because her art is generally on the small side. The sprue system provided channels for the bronze to flow to the piece, and for air to escape as the bronze flowed through all the spaces. Eva had to build a complex system of large inlet paths with many small outlets, being careful to make sure that there was always a vent at the highest point in the piece, so that air bubbles wouldn’t get trapped. Finally, bronze pins were inserted at points around the sphere so that the plaster in the middle wouldn’t just drop when all the wax was melted out.

Sprue system

Then, we moved to Pratt Fine Arts Center, and all the sprued pieces were placed in cylinders made of tar paper and chicken wire, and filled with a mixture of sand and plaster. The sprue system was shaken a bit to get rid of bubbles and make sure the plaster got into every corner. The plaster cylinders, called “investments”, were then moved into a large kiln, upside down, and heated to 1200°F to melt out all the wax and burn out any other organic material (like the toothpicks used in the sprue system).

Investing the wax

We returned a week later for the bronze pour. We helped dig out the sand pit, and half-buried the investments in sand to contain any bronze in case they split open. Once the bronze had melted in the crucible, it was lifted up by a crane, and Eva helped pour the liquid bronze from the crucible into the pour holes in the investment. This video shows the process much more clearly:

Play Video: Casting the Apple of Eden in Bronze

The bronze inside the plaster investments cooled off in about a half hour, and we were able to take axes and hammers to the plaster to free the bronze within. After the pieces were separated from the plaster, they were pressure-washed to get rid of more plaster. The bronze looked super crufty at this point because of oxidation on the surface, and flashing and bubbles that had stayed in the investment and got picked up by the bronze.

Freed from the investment

Taking the bronze home, we separated the pieces from the sprues using a cutting wheel on a die grinder or a hacksaw. The extra bronze from the sprue gets recycled for the next pour.

Cutting off the sprue

The bubbles were popped off with a small chisel, and then Eva went over the surface with a coarse ScotchBrite pad on her die grinder to smooth off the surface. She then took another pass with a fine ScotchBrite pad, and finished with a wire brush. The transformation is remarkable - at this point the Apple was glowing, bright metallic bronze.

Polished

However, we didn’t want an Apple that looked brand new (or looked like shiny metal). This is supposed to be an ancient artifact, and fortunately there’s an entire world of bronze patinas that can be used to transform bronze surfaces with different colors and patterns. First, we tried applying a liver of sulfur and ferric nitrate patina, brushing on the solution while heating the surface of the bronze with a blowtorch. Unfortunately, there was a bit too much ferric nitrate, and a bit too much heat. The effect this produced was striking, with metallic, iridescent splatters forming over the smoother parts of the Apple, and a dark red patina for the rest.

First patina

As cool as this patina looked, it wasn’t exactly what we were looking for, but one of the nice things about bronze is that you can always just remove the patina and start over. Eva scrubbed off this patina with a wire brush, and tried again, this time with less ferric nitrate and going lighter on the blow torch. Despite using the same process, the result was very different - a dark, aged bronze that was perfect for the Apple. Eva sprayed on some Permalac to seal in the patina.

Second patina

One feature I was particularly excited about was lighting. The Apple glows along its surface channels, and I planned on using EL wire to provide this effect. EL wire glows uniformly along its length, uses very little power, and looks very bright in low light. First, Eva polished the inside of the channels with a wire brush to provide a bright reflection of the wire. Then, I worked out how to take the EL wire I had bought and work it into all the channels in a single continuous line with a minimum of overlap. This required drilling holes in strategic intersections and running some of the wire internal to the sphere. We tried some different options for attaching the wire, like silicone caulk with glow-in-the-dark powder mixed into it, but in the end it looked best to just superglue the wire into the channels. The battery pack and transformer fit snugly inside the sphere and connected to one end of the glowing wire.

Planning EL wire

The last bit to figure out was the cap. Since it was cast from bronze, it was hard to make it mate up with the hole in the sphere perfectly, so we hadn’t bothered. We bridged the gap by heating up some Shapelock plastic, forming it around the base of the cap, and then pressing it into the hole. We trimmed all the excess plastic so it wasn’t visible. Once the Shapelock cooled, it formed a perfectly fitting seal that allowed us to press the cap onto the sphere and it would stay put unless pried off with fingernails.

We really had two deadlines – the first was the bronze pour, but we also wanted to have the Apple finished in time for PAX, a big gaming convention that’s held every year in Seattle. We didn’t have time for full Assassin’s Creed costumes, so instead we carried the Apple around the show and took pictures of other video game characters holding the Apple. Oddly, we didn’t find any Assassin’s Creed cosplayers.

Joffrey with the Apple

I’m really glad that we got the opportunity to work on this fun and nerdy project. The Apple sits on my mantle now, and it’s fun to light up and hold – there’s no substitute for the weight and feel of real metal. I plan to bring the Apple with me to PAX again next year.

You can view all the progress photos and PAX photos on Flickr.

Front of the Apple Side of the Apple Back of the Apple Back, lit

Maruku is obsolete


A few weeks ago I finally released Maruku 0.7.0 after a short beta that revealed no serious issues. This was the first release of the venerable Ruby Markdown library in four years. I inherited Maruku over a year ago and I’m very proud of the work I’ve put into it during that year. I’m glad that I was able to fix many of its bugs and update it to work in a modern Ruby environment. However, I want to recommend that, if you have a choice, you should choose a different Markdown library instead of Maruku.

When Natalie Weizenbaum handed Maruku over to me, my interest in the library stemmed from its use in Middleman, and my desire to default to a pure-Ruby Markdown processor in the name of compatibility and ease of installation. The two options were Maruku and Kramdown. Maruku was the default Markdown engine for the popular Jekyll site generator, but was old and unmaintained. It also used the problematic GPLv2 license, which made its use from non-GPL projects questionable. Kramdown was inarguably a better written library, with active maintenance, but under the even-more-problematic GPLv3 license. The GPLv3 is outright forbidden in many corporate environments because of its tricky patent licensing clauses, plus it has all the issues of GPLv2 on top. I emailed Thomas Leitner, Kramdown’s maintainer, about changing the license to a more permissive license like the MIT license (used widely in the Ruby community) but he declined to change it, so I set to work on Maruku.

As I explained in my initial blog post, my plan was to fix up Maruku’s bugs and relicense it under the MIT license and release that as version 0.7.0. I did that, and then the plan was to release 1.0.0:

I’m thinking about a new API and internals that are much more friendly to extension and customization, deprecating odd features and moving most everything but the core Markdown-to-HTML bits into separate libraries that plug in to Maruku, and general non-backwards-compatible overhauls. […] Overall, my goal for Maruku is to make it the default Markdown engine for Ruby, with a focus on compatibility (across platforms, Rubies, and with other Markdown interpreters), extensibility, and ease of contribution.

However, in March of 2013, Mr. Leitner decided to relicense Kramdown under the MIT license starting with version 1.0.0. I continued to work on finishing Maruku 0.7.0, but I knew then that for people looking for a capable, flexible, well-written pure-Ruby Markdown library, Kramdown was now the correct choice. All of the things I wanted to do in Maruku for 1.0.0 were in fact already done in Kramdown – better code organization, better modularity and extensibility, good documentation, a better parser, and improved performance. Soon after Kramdown 1.0.0 was released, I switched Middleman to depend on it instead of Maruku.

I will continue to maintain Maruku and make bugfixes, because it’s the right thing to do. That said, I’m not sure I can justify doing much work on the 1.0.0 milestone knowing that, given the choice, I would use Kramdown or Redcarpet over Maruku. My recommendation to the Ruby community, and Ruby library authors, is the same: use a different Markdown library, or better yet abstract away the choice via Tilt. Please feel free to continue to send pull requests and issues to the Maruku repository, I’ll still be there.

Redesigning evafunderburgh.com

My partner Eva Funderburgh is a professional artist, and has been making sculptures full-time since we both moved to Seattle in 2005. I don’t have much talent for clay, so my main contribution is to help with her website. A few weeks ago we launched the third iteration of her site, which we’d been working on for several months.

Old site

The previous version (shown above) was launched around 2008 and was basically just a Wordpress theme. Eva had hand-drawn some creatures and wiggly borders to make it all feel less digital, but there was only so far you could go with that on a web page. The resulting site had a lot of character, but ultimately failed to put her gorgeous art front and center. Worse, it had failed to keep up with the increasing sophistication and complexity of her work. It also scaled poorly to mobile devices, and just plain looked outdated.

I had a lot of goals for the new site. First and foremost, it needed to be dominated by pictures of Eva’s art, especially on the homepage. The way I see it, any part of the screen that doesn’t include a picture of her sculptures is wasted space. I also wanted the design to be clean, contemporary, and focused. We have so many great pictures of her creatures that any ornamentation or empty space is a wasted opportunity.

Beyond that, I had a bunch of technical goals befitting a website built in 2013. The first was that the site should work well on mobile, with a responsive design that supported everything from phones to tablets to widescreen desktop monitors. A related goal was that the site should support high-DPI or “retina” screens – both of us are eagerly awaiting new Retina MacBook Pros, and browsing with high-resolution phones and tablets is more and more popular. It seems safe to assume that screen resolutions will only increase over time, and I wanted Eva’s art to appear as sharp and beautiful as it could on nice devices. Also related to the goal of working well on mobile, I wanted the site to be fast. This meant minimizing download size, the number of external resources, and JavaScript computation. It also meant leveraging CSS transitions and animations to provide smooth, GPU-accelerated motion to give the site a nice feel. Helpfully, one of the decisions I made up front was that this site was going to target the latest browsers and the advanced CSS and JavaScript features they support. Fortunately, most browsers aggressively update themselves these days, so the age of supporting old browsers for years and years is coming to a close.

The site itself was built using Middleman, an open-source static website generator I help maintain. This allowed me to use CoffeeScript for all my JavaScript, which I have come to appreciate after a rocky first experience, and to use Haml and Sass/Compass for HTML and CSS respectively. One of my coworkers challenged me to write all my JavaScript without using jQuery, which was actually pretty straightforward and improved the performance of the site while dropping quite a bit of size from the overall download. I did rely on the indispensable Underscore for templating and utilities, however.

New site

The redesign started with the new homepage, which presents a grid of cropped pictures that fill up the whole window. First, we chose a basic grid unit or “block” that the whole page is divided into. Eva selected a bunch of pictures she liked, and used Lightroom to crop them all into tiles of specific sizes, with dimensions of 1x1, 2x1, or 3x1 blocks. She also exported double-resolution versions of each block for retina displays. Each picture was associated with a full-frame version on Flickr. JavaScript on the homepage uses that information to randomly select images and lay them down like a bricklayer would to make a wall, creating a solid grid of pictures. If the browser reports itself as high-DPI, the double-resolution images are used to provide retina sharpness. A quick CSS transition animates each block into the page as it loads. To make the page responsive to different browser sizes, there are media-query breakpoints at each multiple of the block size, and when the browser is resized, the blocks are laid out again. You can see the effect by resizing your browser – it will reduce the width of the grid one block at a time. Once the browser gets below a certain size, the block size is halved to fit more images onto a smaller screen. Using blocks for the breakpoints instead of the classic “iPhone, iPad, Desktop” breakpoints means that the design works nicely on many devices and browser window sizes – this way it looks good on devices from non-retina smartphones all the way up to HDTVs, and not just the Apple devices I happen to own.
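For the curious, the layout logic boils down to something like this simplified TypeScript sketch. The real site used CoffeeScript, and the names and structure here are invented for illustration:

// Rough sketch of the "bricklayer" tile layout: fill each row with randomly
// chosen 1x1, 2x1, and 3x1 tiles until the row is full.
interface Tile {
  widthInBlocks: 1 | 2 | 3;
  src: string;   // normal-resolution crop
  src2x: string; // double-resolution crop for high-DPI screens
}

function layoutRow(tiles: Tile[], rowWidthInBlocks: number): Tile[] {
  const row: Tile[] = [];
  let remaining = rowWidthInBlocks;
  while (remaining > 0) {
    // Only consider tiles that still fit in the space left on this row.
    const candidates = tiles.filter(t => t.widthInBlocks <= remaining);
    if (candidates.length === 0) break;
    const tile = candidates[Math.floor(Math.random() * candidates.length)];
    row.push(tile);
    remaining -= tile.widthInBlocks;
  }
  return row;
}

// Serve the double-resolution crop when the display reports itself as high-DPI.
function srcFor(tile: Tile): string {
  return window.devicePixelRatio > 1 ? tile.src2x : tile.src;
}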

On mobile

The other part of the homepage is the “lightbox” that appears when each tile is tapped. This is built from scratch rather than using any lightbox plugin, and uses CSS transitions to display smoothly. It also allows for keyboard navigation, so you can use the arrow keys to browse through pictures. The full-size image for the lightbox is linked directly from Flickr, and I use the Flickr API to select the perfect image size that won’t stretch and blur, but won’t waste time with a too-large download. This can end up showing a dramatically different sized image between a phone and a Retina Macbook!
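The size selection amounts to picking the smallest Flickr size that still covers the lightbox. This is a simplified TypeScript sketch rather than the site's actual code; the fields mirror the width, height, and source values in the size list the Flickr API returns:

// Pick the smallest available Flickr size that still covers the lightbox,
// so nothing stretches and blurs, and nothing over-downloads.
interface FlickrSize {
  width: number;
  height: number;
  source: string; // direct image URL
}

function pickLightboxImage(sizes: FlickrSize[], neededWidth: number, neededHeight: number): FlickrSize {
  const sorted = [...sizes].sort((a, b) => a.width - b.width);
  // First size that is at least as large as the space we need to fill...
  const bigEnough = sorted.find(s => s.width >= neededWidth && s.height >= neededHeight);
  // ...falling back to the largest available size if none qualifies.
  return bigEnough ?? sorted[sorted.length - 1];
}

On a high-DPI screen, the needed dimensions get multiplied by the device pixel ratio, which is why a phone and a Retina MacBook can end up downloading very different images.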

Lightbox

After the homepage, the rest of the site was relatively easy. For the most part, it’s still a Wordpress theme, though it reuses the responsive breakpoints at each integer multiple of the block size to provide nice reading on every screen. I also reused the exact same code from the homepage to provide a row of random tiles at the top of the page. Beyond that, there are just some SVG icons for the social media links (to ensure that they too display nicely on retina screens) and a few more subtle fit-and-polish tweaks to make it feel great. The “Art” page was designed by hand to provide high-resolution banners linking to Flickr, Etsy, and the various galleries that showcase Eva’s work, and the rest is editable directly from Wordpress so that Eva can maintain and update the content. A series of IFTTT rules make it easier for her to update the site while she’s working by piping specially-tagged Flickr uploads into the blog (and to Tumblr, Facebook, and Twitter).

Art page

I’m rarely satisfied with the website designs I produce, but this time I’m extremely proud of what we’ve built. Please check out the site, resize your browser, try it out on your phone, but most of all, enjoy the pretty pictures.

Maruku 0.7.0.beta1 released


I’ve just pushed Maruku 0.7.0.beta1 to RubyGems.org. This is the first real release of Maruku in over 4 years. I released version 0.6.1 exactly one year ago, but that included only a single fix on top of the 0.6.0 code (for Ruby 1.9+ compatibility).

Last year, I said in “I own Maruku now”:

My present plan is to work towards releasing Maruku 0.7.0 (with a beta beforehand) as a bugfix release including the Nokogiri changes. I’ll fix whatever bugs I can without drastically changing the code, document in tests everything I can’t fix easily, and make sure there aren’t any more regressions from the previous release. I’d actually rather minimize or remove the usage of Nokogiri in the library, simply so that Maruku can have fewer dependencies, but that may not be an attainable goal. Maruku 0.7.0 will also drop support for Rubies older than 1.8.7.

It took far longer than I thought, but that goal has finally been completed, including the removal of Nokogiri as a hard dependency. There have been 447 commits by 15 contributors - I made about half of those, followed by Jacques Distler and Natalie Weizenbaum in roughly equal measure. Ignoring whitespace changes, 405 files were changed, with 8,745 insertions, and 21,439 deletions. I always consider it a good sign when more code is deleted than added. Maruku now has pretty good test coverage and tests are green in Travis CI for Ruby 2.0.0, 1.9.3, 1.9.2, 1.8.7, JRuby, and Rubinius. You can see all the changes in the CHANGELOG.

Maruku 0.7.0 should be viewed as a fixed version of Maruku 0.6.0. While its behavior has changed because of bugfixes, the interface is the same, and there have been no major removals or additions with the exception of support for fenced code blocks (familiar from GitHub Flavored Markdown) which can now be enabled via the :fenced_code_blocks option.

At this point, what I need is for Rubyists that use Maruku to try out the new version and file issues for any bugs they find. If everything’s OK for a few weeks, I’ll release 0.7.0 final, at which point I can start working on Maruku 1.0.0, which will hopefully be a more major change.

I’ve got some more things to say about the future of Maruku, but I’ll hold off on that until 0.7.0 is completely out the door. In the meantime, please give it a try!

I own Maruku now


So it turns out that I own Maruku now. Let’s start at the beginning.

Maruku is a Markdown interpreter written in Ruby. It’s one of many Markdown interpreters available to Ruby programmers, but it’s notable in that it is written only in Ruby, without relying on compiled C libraries or anything like that. This makes it nice as a default option for dealing with Markdown files, because it will run on any platform and on any flavor of Ruby.

My interest in Maruku stemmed from working on Middleman and trying to get it working in JRuby and on Windows. Ruby libraries that require compiling C code are problematic on Windows (a platform not exactly favored by Rubyists), and are even harder to get working with JRuby, which (some tricks aside) won’t load Ruby libraries that rely on C extensions. Middleman uses Tilt to allow users to choose which Markdown engine they want, but we chose Maruku as the default to provide easy setup and testing under JRuby and on Windows.

Unfortunately, Maruku had been abandoned at this point, and for several years at that. Maintainership had been transferred from Andrea Censi, the original author, to Natalie Weizenbaum, who did a bunch of cleanup. Natalie ended up getting busy with other important projects, and didn’t release a new version of the gem or work much on Maruku after that. Issues piled up, Maruku wasn’t updated to work with Bundler or Ruby 1.9, and it generally began to rot.

During this period, Jacques Distler continued to enhance the library in his own fork, most notably replacing the use of Ruby’s built-in REXML library with Nokogiri, which not only provided a speed boost, but fixed a lot of bugs having to do with REXML’s quirks. Jacques continued to fix bugs reported in the main Maruku issue tracker on his own fork, but nobody was around to merge them into the main repository or release a new version of Maruku.

In early May 2012 I heard news that Natalie had chosen a new maintainer for Haml to carry on the project now that she didn’t have time for it. I tweeted a message to her (which I didn’t intend to be snarky) asking if she was considering a similar move for Maruku. She responded quickly, asking if I was volunteering, and I agreed to manage issues for the project, and just like that I had commit access to the repository. Unfortunately, I couldn’t just jump in, because I needed to get permission from my employer to contribute to the project. This took almost three months, and I felt like a jerk the whole time for not doing anything for Maruku after being given responsibility for it. The approval came right as I was preparing for a long trip to Africa, and so I wasn’t able to jump right on it, but after I got back I started dusting off the code and getting things into shape. I started bugging Natalie to switch options for me in GitHub (like enabling Travis CI builds), and she simply moved the repository to my account and gave me full control of everything.

Right away I was able to release Maruku 0.6.1, which included only a single fix since the previous release – an annoying warning that appeared when running under Ruby 1.9. Since then, I’ve been chipping away at Maruku as much as I can, starting with merging Jacques’ fixes, and then turning my attention towards correcting the tests, which often asserted results that weren’t actually correct. After that, I’ve been mostly working on getting the corrected tests to pass again, especially in JRuby. While Jacques’ change to Nokogiri is a boon to speed and accuracy, it resulted in a handful of regressions under JRuby, many of which traced down to bugs in Nokogiri itself (which have all been fixed quickly by the Nokogiri maintainers). Only a few days ago, I finally got the tests running green in MRI, JRuby, and Rubinius. I had to skip certain tests that are exposing actual bugs in Maruku, but at least I now have a basis for improving the code without accidentally breaking one of the existing tests.

My present plan is to work towards releasing Maruku 0.7.0 (with a beta beforehand) as a bugfix release including the Nokogiri changes. I’ll fix whatever bugs I can without drastically changing the code, document in tests everything I can’t fix easily, and make sure there aren’t any more regressions from the previous release. I’d actually rather minimize or remove the usage of Nokogiri in the library, simply so that Maruku can have fewer dependencies, but that may not be an attainable goal. Maruku 0.7.0 will also drop support for Rubies older than 1.8.7.

Looking forward, I want to release Maruku 1.0.0, with some more drastic changes. I’m thinking about a new API and internals that are much more friendly to extension and customization, deprecating odd features and moving most everything but the core Markdown-to-HTML bits into separate libraries that plug in to Maruku, and general non-backwards-compatible overhauls. I’m also hoping that, with the original authors’ and major contributors’ support, I can relicense Maruku under the MIT license instead of GPLv2, to make it easier for Maruku to be used in more situations. Overall, my goal for Maruku is to make it the default Markdown engine for Ruby, with a focus on compatibility (across platforms, Rubies, and with other Markdown interpreters), extensibility, and ease of contribution.

For everyone who has reported bugs in Maruku, thank you for your patience. For everyone interested in the future of Maruku, feel free to watch the repository on GitHub to see the commits flowing, or subscribe to the gem on RubyGems.org to be notified of new releases. When the beta for the next release comes out, I’ll be excited to have people take it for a spin and let me know what’s still broken.

Middleman 3.0


For the last 8 months or so, I’ve been contributing to Thomas Reynolds’ open source project Middleman. Middleman is a Ruby framework for building static websites using all the nice tools (Haml, Sass, Compass, CoffeeScript, partials, layouts, etc.) that we’re used to from the Ruby on Rails world, and then some. This website is built with Middleman, and I think it’s the best way to put together a site that doesn’t need dynamic content. Since I started working on Middleman in November 2011, I’ve been contributing to the as-yet-unreleased version 3.0, which overhauls almost every part of the framework. Today, after many betas and release candidates, Middleman 3.0 is finally released and ready for general use, and I’m really proud of what I’ve been able to help build.

Middleman Logo

I’ve been building web sites for myself since the early 90s, before I even had an internet connection at home (I could see some primitive sites on Mosaic on my father’s university network). Early on I was inspired by HyperCard and QuickTime VR and I wanted to make my own browseable worlds, but I didn’t have a Mac (nor did I know what I was doing). With Notepad and early Netscape, I started building sites full of spinning animated GIFs, blinking text, and bad Star Wars fanfiction. As time went on and I learned more about how to actually build websites, the duplication of common elements like menus, headers, and footers became too much to manage. Around the same time I discovered how to make dynamic websites backed by a database, using ASP. For my static websites, I just used ASP includes and functions to encapsulate these common elements and do a little templating. With the release of .NET, I switched to writing sites, static and dynamic, in ASP.NET. Far too long after, I realized that hosting websites on Windows was a terrible idea, and I switched to using PHP to handle light templating duty. For dynamic, database-driven sites I switched to using Ruby on Rails, and I fell in love with its powerful templating and layout system, the Ruby language, Haml, Sass, and all the rest. However, the gulf between my Rails projects and my cheesy PHP sites was huge – all my tools were missing, and I still needed PHP available to host sites even though they were essentially static.

My first attempt to reconcile the two worlds was a Ruby static site generator called Staticmatic. Staticmatic had Haml built in, and worked mostly like Rails. I had just finished moving my site over to it when the developer discontinued the project. In its place, he recommended Middleman. However, as I started transitioning my site to Middleman, I found a few bugs, and wishing to be a good open source citizen, I included new tests with my bug reports. Thomas Reynolds, the project’s creator, encouraged me to keep contributing, and even gave me commit access to the GitHub repository very early on (at which point I broke the build on my first commit). From that point I started fixing more and more bugs, and then eventually answering issue reports, and finally adding new features and helping to overhaul large parts of the project. I’d contributed to open source projects before, but just with a bugfix here and there. Middleman has been my first real contribution, with ongoing effort and a real sense of ownership in the project.

My contributions have focused mainly on three areas. First, I’ve helped to get the new Sitemap feature working. I liked how similar frameworks like Nanoc let you access a view of all the pages in a site from code, to help with automatically generating navigation. Middleman’s sitemap goes beyond that, providing a way to inspect all the files in a project (including virtual pages), and even add to the list via extensions.

The next area I worked on is the blogging extension. I like the way Jekyll handles static blogging, but I wanted my blog to be just one part of my whole site – Jekyll is a little too focused on blog-only sites. I basically rewrote the fledgling blog extension that Thomas had started, and I’m really proud of the result. It not only handles basic blogging, but also generates tag and date pages, supports next/previous article links, and is extremely customizable. I moved my blog over from Wordpress (no more security vulnerabilities, yay!) and it’s been great.

The last place I focused was on website performance. I’ve always been interested in how to build very fast websites, and I wanted Middleman to support best practices around caching and compression. To that end, I built an extension that appends content-based hashes to asset filenames so you can give them long cache expiration times, and another extension that will pre-gzip your files to take load off your web server and deliver smaller payloads.

Beyond that I’ve fixed a lot of bugs, sped things up, written a lot of documentation, and added a lot of little features. I’m really happy that 3.0 is finally out there for people to use, and I hope more people will choose to contribute. And I’m looking forward to all the new stuff we’re going to add for 3.1!

Securely and conveniently managing passwords

Good password security habits are more important than ever, but it can be hard to take the common advice about using complex, unique passwords when they’re so inconvenient to manage. This article explains why password security is such a big deal, and then lays out a strategy for managing passwords that can tame your accounts without causing an undue burden.

The safety of your online passwords has always been important, but in the last year or so it’s become even more clear that extra care needs to be taken with your account credentials. Passwords have been stolen from PlayStation Network, Steam, LinkedIn, Last.fm, eHarmony, and more. This problem is only going to get worse as exploits become more sophisticated and more services reach millions of users without investing in information security. Furthermore, advances in computing power mean that cracking stolen databases of passwords is getting easier and easier.

The way these hacks usually go is that, via some software flaw, somebody manages to steal the database containing user account information for a service. Sometimes, that’s enough to gain access to the stolen accounts, because the passwords were stored as plain text. One warning sign that a site does this is that they’ll offer the option to send you your password if you forget it, rather than letting you reset the password to a new one. If they can send you your password, then they know it, and if they know it, somebody who steals the database can know it too.

More frequently, passwords are “hashed” – a process that makes it easy to tell if a user has entered the correct password, but very difficult to actually recover the password. However, that’s not enough to prevent data thieves from figuring out the passwords. They use huge, pre-computed lists of common passwords and their hashes called “rainbow tables” to figure out which password was used for which account. There are defenses against this sort of attack, but even large, established sites like LinkedIn didn’t use them. This is one of the reasons why common passwords (like “password”) are so easy to crack – they’re right at the top of rainbow tables (which may contain hundreds of millions of other passwords too).
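The defense these sites should have used is a per-user salt (ideally combined with a deliberately slow hashing algorithm like bcrypt). This toy TypeScript sketch, using Node’s crypto module and not taken from any particular site, shows why salting defeats precomputed tables:

// Toy illustration of salted hashing (not production code - real systems
// should use a slow, purpose-built algorithm like bcrypt, scrypt, or argon2).
import { createHash, randomBytes } from "crypto";

function hashPassword(password: string, salt: string): string {
  return createHash("sha256").update(salt + password).digest("hex");
}

// Each user gets their own random salt, stored alongside the hash.
const aliceSalt = randomBytes(16).toString("hex");
const bobSalt = randomBytes(16).toString("hex");

// The same weak password produces two completely different stored hashes,
// so a precomputed rainbow table of unsalted hashes matches neither.
console.log(hashPassword("password", aliceSalt));
console.log(hashPassword("password", bobSalt));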

Given all that, the real problem starts when somebody uses the same password on multiple services. Imagine you use the same password for a gardening forum and your email. The gardening forum software contains a flaw that allows hackers to steal the user database and figure out the passwords. They then take those usernames and passwords to popular email services and try them out. Since the password is the same, they get right in, and have full access to your email. But that’s not all – once somebody has access to your email, they can reset passwords for all the other services you use, including juicy targets like online banking. And they’ll know what to go after by simply reading your emails. This sort of cross-service attack happened a lot after the PlayStation Network breach. The thieves took the PSN passwords they’d gotten and rightly assumed that those passwords would work on Xbox Live, where they were able to make lots of purchases using the accounts’ linked credit cards. More recently, World of Warcraft and Diablo 3 players have had their accounts taken over to sell off their gold and items, likely by people using stolen PSN, Steam, and LinkedIn account information.

Some people try to protect themselves by having a few different passwords that they reuse – one for “secure” systems like online banking, one for common things like email, and one for “everything else”. The problem is that your account security is only as good as the weakest link. Once one password falls into the wrong hands it can be used to break into more and more other services, and each newly compromised account can be a stepping stone to more sensitive targets. This need not even be as straightforward as what they get by gaining access to your email. For example, let’s say somebody gains access to your Facebook account. From there, they may be able to pull enough personal information to answer challenge questions (What’s your mother’s maiden name? Where were you born?) at your bank’s website. Or maybe they’ll just stop there and use your Facebook account to spam your friends with links to malware sites.

In an ideal world, you want to use long, complex passwords that are different for every service you have an account with. Long, complex passwords are much less likely to be found in rainbow tables, so even if a user database is stolen, your password isn’t likely to be one of the ones recovered. Having a unique password per site means that if thieves do figure out your password, they will only have access to your account on one service, not many. Plus, your response to hacks you know about (many go undetected or unreported) is to just change your password on that one site, instead of having to retire a password used all over the Internet.

Fortunately, it turns out that keeping track of hundreds of unique, complex passwords can be done, and it can be reasonably convenient. I manage separate passwords for every account, and I’m going to explain how so you can too.

Disclaimer: This is not the be-all and end-all of password security. There are weaknesses in my strategy, but I believe it provides enough security benefit along with enough convenience that it will protect most people from common attacks.

The first thing you need is a password vault. A password vault is an application that remembers your passwords for you – the vault is encrypted, and it has a password itself that lets you open it. Think of this like taking all your keys and locking them up in a mini-safe when you’re not using them. The password vault allows you to remember a unique password for every site and get at them all with a single password that you can change any time you like. Good password vaults also let you store other information you might forget, like account numbers and challenge question answers (I like to make random answers to challenge questions too, to protect against attackers who can figure out the real answer). And, as a bonus, the vault serves as a directory of all the sites you actually have accounts on - before I moved my passwords into a vault, I had no real idea of all the different services I had created an account on.

The vault I’ve chosen to use is KeePassX. I like it because it’s very secure, it’s free, and it runs on many different operating systems (I regularly use OS X, Windows, Linux, and iOS machines). There are other perfectly good password vaults like 1Password and LastPass. What’s important is that you choose one and use it. (Update from 2022: I’m using BitWarden now, which automatically handles all the syncing described below and also has nice iOS integration. iOS also has a very nice password manager built in.)

KeePassX Logo

Once the vault is installed, it’s time to fill it up with your passwords. First, choose a master password. This should be easy to memorize and you should change it every few months. Next, add in all the accounts you can remember, along with their current password. I listed them all out first so I’d know what passwords I needed to change, but you can also change the password for each account as you enter them. For each account, find the “change password” feature and use your vault’s password generator to choose a new, completely random password. Ideally this password should be long – more than 16 characters. Note that some sites impose odd restrictions on your passwords, so you might need to play with the options to generate a password the site will accept. The worst are the sites that don’t say there is any restriction on password length, but when you paste in your new password, they clip it to a certain length. This causes the password you save in your vault to not match what the site saved, and you won’t be able to log in. I have a Greasemonkey script (which works on Firefox and Google Chrome) that will show these limits even if the site doesn’t. Another thing you can do is to immediately log out of a site after changing your password, and log back in. That way, if there’s a problem, you know about it right away and can fix it then.
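Under the hood, a generator like KeePassX’s is doing something along these lines. This is a rough TypeScript sketch using Node’s crypto module, not any vault’s actual implementation; the point is that every character comes from a cryptographically secure random source:

import { randomInt } from "crypto";

// Rough sketch of what a vault's password generator does: pick each character
// from a cryptographically secure random source. The character set can be
// trimmed to satisfy a site's odd password rules.
const CHARSET =
  "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789!@#$%^&*";

function generatePassword(length = 24, charset = CHARSET): string {
  let password = "";
  for (let i = 0; i < length; i++) {
    password += charset[randomInt(charset.length)];
  }
  return password;
}

console.log(generatePassword());    // 24 random characters
console.log(generatePassword(16));  // shorter, for sites that clip long passwords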

KeePassX Password Generator

Once you’ve gotten all the accounts you can think of, it’s time to find the ones you can’t remember. Search the Internet for your name, usernames you use, and your email address, and you’ll find accounts you’ve totally forgotten about – old forum accounts, services you tried once and dumped, etc. If you’re lucky, you can just delete your account, but few services offer such an option. In that case, just change the password and add the account to your vault.

At this point, you have a complete record of all your online accounts, and each one should have a unique, random password. I’ve done this myself with the exception of a few accounts where I have to enter my password frequently on my phone (mostly my iTunes password) – entering a 30-digit random password every time would be impossible. In that case, I have a memorizable password that I change frequently and only use on those services, and I mix in some unique bit to each of them. For example, if the base password is “fuzzydog” (it’s not), my iTunes password might be “fuzzydogappstore”. It’s certainly not as secure as fully random passwords, but I can remember it, and I’m not using it anywhere else.

Now, when you need to log into a site, just open up the password vault, find the right entry, and copy/paste the password into the site or application you’re using. To make this less of a burden, I’d recommend using the password saving features of your browser. The only thing to keep in mind is that this lets anyone who gets ahold of your computer log into those sites – you should configure your computer to lock and require a password if the screensaver comes on, to foil anyone who’d walk up to your computer while you’re gone and try to mess with it.

The next step is to make sure your vault is available wherever you need it. For this I use the file-synchronization service Dropbox (which everyone should be using already). Dropbox shows up as a folder on your computer, and whatever you put in it shows up on all your other computers. In fairness, there are other good services you could use like OneDrive, Box, or Google Drive, but I like Dropbox the best. Once you’ve got Dropbox installed, move your vault into it, and now you have access to the latest version of your vault on all of your computers. I also store the actual KeePassX software in Dropbox so that when I start using a new computer I can just install Dropbox and have everything ready to go immediately.

Dropbox Logo

For my iOS devices, I’ve installed the PassDrop app. PassDrop can read your password vault straight out of Dropbox, so you also have your passwords on your phone. You can then use the phone’s copy/paste feature to get the passwords from PassDrop to wherever they need to be. I’m sure there are similar apps for other mobile operating systems, but I don’t have experience with them.

That’s pretty much it. At this point, you’ll have instant access to all your passwords wherever you go – no more forgetting which password you used on some obscure site when you signed up years ago, and much less risk of getting your accounts hijacked or broken into. And when the next big site loses their passwords, you’ll be able to change your password there, update your vault, and get on without worrying.

Bonus: One thing you can do to go above and beyond this level of security is to take advantage of “two-factor authentication” where it’s offered. With two-factor authentication, you log in both with something you know (your password) as well as something you have (often your mobile phone). Google offers this through their Google Authenticator phone app and it’s a great idea given how much is tied into your Google Account these days, especially when email is such a juicy target. Many banks offer this too. Sometimes it’s an app, and sometimes they just send you a code via SMS. Turning this feature on means that even if somebody steals your password, they’d also need to steal your phone to log into your account. I enable these wherever I find them.

PNGGauntlet 3: Three compressors make the smallest PNGs

This weekend I released a major update to PNGGauntlet, my PNG compression utility for Windows. A lot has changed, including a lot of bug fixes, but the biggest news is that PNGGauntlet now produces even smaller PNGs! I did a bunch of research, and I found that by combining the powerful PNGOUT utility that PNGGauntlet has always used with OptiPNG and DeflOpt, even more bytes could be shaved off your PNG images. The contributions from OptiPNG and DeflOpt are often small compared to what PNGOUT does, but if every byte counts, you’ll be happy with the new arrangement. The new compressors do slow down the process a bit, though, so you can turn them off if you don’t want them.

That’s not all that’s changed, however. The UI has been streamlined, leaving only the most essential options. Drop files on the app, hit Optimize, and don’t worry about the rest. However, if you want to tweak the compressors, there’s an all-new options panel that exposes every possible setting for each compressor. The PNGGauntlet website has also been overhauled with a much more modern look.

Before you ask, no, the new PNGGauntlet will not compress multiple images at once to make use of multicore processors. I cover this in the FAQ, but since Ken Silverman, PNGOUT’s author, provides a professional PNGOUT for Windows that’s multicore-optimized for only $15, I don’t want to compete by matching PNGOUTWin’s feature set. It’s absolutely not a matter of not knowing how to do it. And anyway, the individual compressors do a good job of using multiple cores on their own.

One question that deserves an answer is why there was no PNGGauntlet release in the last year and a half. The answer is essentially that I forgot about PNGGauntlet. The last release, 2.1.3, was in May of 2010. That December, I did some work on a new version of PNGGauntlet, incorporating the new compressors and slimming down the UI. After I’d done that, I decided that I wanted to overhaul the UI completely - it’s built with the old Windows Forms technology and a pretty rickety open source data table library, and I’ve always been embarrassed by how crude it looks. My plan was to use Windows Presentation Foundation (WPF), which was supposed to be the new way of developing UIs for .NET apps. However, I soon discovered that Microsoft’s WPF libraries don’t really give you a native-looking UI. Applications developed with WPF look kinda like Windows apps, but they’re off in a bunch of subtle ways that really bothered me. So I ended up starting to draw my own controls to match Windows 7 more closely. And after a while down that rathole, I sorta gave up and shelved the whole project in disgust.

Since then, I’ve actually switched to using my MacBook Air, running OS X, almost exclusively, and I almost never use my Windows machines. The prospect of developing Windows apps no longer interests me much, and I don’t really use PNGGauntlet anymore myself (I use the very nice Mac analogue ImageOptim). PNGGauntlet still worked, so it stayed out of my mind until @drewfreyling messaged me on Twitter asking about incorporating the latest version of PNGOUT into PNGGauntlet. I figured it would be pretty simple to do a minor update, but when I booted up my old desktop and took a look at the code, I found my mostly-completed update just waiting to be released. So, no new slick modern UI, but I was able to spend an hour finishing up what I had and release it as PNGGauntlet 3. Hopefully it’ll be a useful and welcome upgrade to both new and existing PNGGauntlet users.