Blog Archives

Dashboard and Sidebar Widgets on GitHub


I love GitHub, but for some reason I’ve been hesitant to put some of my older, smaller projects on there. In hopes of reversing that trend, I’ve moved all my surviving Sidebar and Dashboard widgets onto GitHub:

I also discovered that Bungie has shut down all their old Halo stuff, so I’ve removed my Bungie Card gadget from my site.

Windows Sidebar is officially dead at this point, and Dashboard doesn’t seem long for this world, so I’m unlikely to do anything with these myself, but they’re all open for pull requests, so hopefully if somebody’s still interested in them they can be improved.

I own Maruku now


So it turns out that I own Maruku now. Let’s start at the beginning.

Maruku is a Markdown interpreter written in Ruby. It’s one of many Markdown interpreters available to Ruby programmers, but it’s notable in that it is written only in Ruby, without relying on compiled C libraries or anything like that. This makes it nice as a default option for dealing with Markdown files, because it will run on any platform and on any flavor of Ruby.

My interest in Maruku stemmed from working on Middleman and trying to get it working in JRuby and on Windows. Ruby libraries that require compiling C code are problematic on Windows (a platform not exactly favored by Rubyists), and are even harder to get working with JRuby, which (some tricks aside) won’t load Ruby libraries that rely on C extensions. Middleman uses Tilt to allow users to choose which Markdown engine they want, but we chose Maruku as the default to provide easy setup and testing under JRuby and on Windows.
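For context, swapping Markdown engines in Middleman is a one-line change in config.rb; a minimal sketch (the Redcarpet line is shown only as an illustrative alternative):

```ruby
# config.rb — Middleman selects its Markdown engine through Tilt
set :markdown_engine, :maruku      # pure Ruby: runs on JRuby and on Windows
# set :markdown_engine, :redcarpet # faster, but requires a compiled C extension
```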

Unfortunately, Maruku had been abandoned at this point, and had been for several years. Maintainership had been transferred from Andrea Censi, the original author, to Natalie Weizenbaum, who did a bunch of cleanup. Natalie ended up getting busy with other important projects, and didn’t release a new version of the gem or work much on Maruku after that. Issues piled up, Maruku wasn’t updated to work with Bundler or Ruby 1.9, and the project generally began to rot.

During this period, Jacques Distler continued to enhance the library in his own fork, most notably replacing the use of Ruby’s built-in REXML library with Nokogiri, which not only provided a speed boost but also fixed a lot of bugs stemming from REXML’s quirks. Jacques continued to fix bugs reported in the main Maruku issue tracker on his own fork, but nobody was around to merge them into the main repository or release a new version of Maruku.

In early May 2012 I heard news that Natalie had chosen a new maintainer for Haml to carry on the project now that she didn’t have time for it. I tweeted her a question (not intended to be as snarky as it may have sounded) asking if she was considering a similar move for Maruku. She responded quickly, asking if I was volunteering; I agreed to manage issues for the project, and just like that I had commit access to the repository. Unfortunately, I couldn’t just jump in, because I needed to get permission from my employer to contribute to the project. That took almost three months, and I felt like a jerk the whole time for not doing anything for Maruku after being given responsibility for it. The approval came right as I was preparing for a long trip to Africa, so I wasn’t able to jump right on it, but after I got back I started dusting off the code and getting things into shape. I started bugging Natalie to switch options for me on GitHub (like enabling Travis CI builds), and she simply moved the repository to my account and gave me full control of everything.

Right away I was able to release Maruku 0.6.1, which includes only a single fix since the previous release – an annoying warning that appeared when running under Ruby 1.9. Since then, I’ve been chipping away at Maruku as much as I can, starting with merging Jacques’ fixes, and then turning my attention to correcting the tests, which often asserted results that weren’t actually correct. After that, I’ve mostly been working on getting the corrected tests to pass again, especially in JRuby. While Jacques’ change to Nokogiri is a boon to speed and accuracy, it resulted in a handful of regressions under JRuby, many of which traced back to bugs in Nokogiri itself (all of which have been fixed quickly by the Nokogiri maintainers). Only a few days ago, I finally got the tests running green in MRI, JRuby, and Rubinius. I had to skip certain tests that expose actual bugs in Maruku, but at least I now have a basis for improving the code without accidentally breaking any of the existing tests.

My present plan is to work towards releasing Maruku 0.7.0 (with a beta beforehand) as a bugfix release including the Nokogiri changes. I’ll fix whatever bugs I can without drastically changing the code, document in tests everything I can’t fix easily, and make sure there aren’t any more regressions from the previous release. I’d actually rather minimize or remove the usage of Nokogiri in the library, simply so that Maruku can have fewer dependencies, but that may not be an attainable goal. Maruku 0.7.0 will also drop support for Rubies older than 1.8.7.

Looking forward, I want to release Maruku 1.0.0, with some more drastic changes. I’m thinking about a new API and internals that are much more friendly to extension and customization, deprecating odd features and moving most everything but the core Markdown-to-HTML bits into separate libraries that plug in to Maruku, and general non-backwards-compatible overhauls. I’m also hoping that, with the original authors’ and major contributors’ support, I can relicense Maruku under the MIT license instead of GPLv2, to make it easier for Maruku to be used in more situations. Overall, my goal for Maruku is to make it the default Markdown engine for Ruby, with a focus on compatibility (across platforms, Rubies, and with other Markdown interpreters), extensibility, and ease of contribution.

For everyone who has reported bugs in Maruku, thank you for your patience. For everyone interested in the future of Maruku, feel free to watch the repository on GitHub to see the commits flowing, or subscribe to the gem on RubyGems.org to be notified of new releases. When the beta for the next release comes out, I’ll be excited to have people take it for a spin and let me know what’s still broken.

Middleman 3.0


For the last 8 months or so, I’ve been contributing to Thomas Reynolds’ open source project Middleman. Middleman is a Ruby framework for building static websites using all the nice tools (Haml, Sass, Compass, CoffeeScript, partials, layouts, etc.) that we’re used to from the Ruby on Rails world, and then some. This website is built with Middleman, and I think it’s the best way to put together a site that doesn’t need dynamic content. Since I started working on Middleman in November 2011, I’ve been contributing to the as-yet-unreleased version 3.0, which overhauls almost every part of the framework. Today, after many betas and release candidates, Middleman 3.0 is finally released and ready for general use, and I’m really proud of what I’ve been able to help build.

Middleman Logo

I’ve been building websites for myself since the early 90s, before I even had an internet connection at home (I could see some primitive sites in Mosaic on my father’s university network). Early on I was inspired by HyperCard and QuickTime VR and I wanted to make my own browseable worlds, but I didn’t have a Mac (nor did I know what I was doing). Armed with Notepad and early Netscape, I built sites full of spinning animated GIFs, blinking text, and bad Star Wars fanfiction.

As time went on and I learned more about how to actually build websites, the duplication of common elements like menus, headers, and footers became too much to manage. Around the same time I discovered how to make dynamic websites backed by a database, using ASP. For my static websites, I just used ASP includes and functions to encapsulate those common elements and do a little templating. With the release of .NET, I switched to writing sites, static and dynamic, in ASP.NET. Far too long after, I realized that hosting websites on Windows was a terrible idea, and I switched to using PHP to handle light templating duty. For dynamic, database-driven sites I switched to Ruby on Rails, and I fell in love with its powerful templating and layout system, the Ruby language, Haml, Sass, and all the rest. However, the gulf between my Rails projects and my cheesy PHP sites was huge – all my tools were missing, and I still needed PHP available to host sites even though they were essentially static.

My first attempt to reconcile the two worlds was a Ruby static site generator called Staticmatic. Staticmatic had Haml built in, and worked mostly like Rails. I had just finished moving my site over to it when the developer discontinued the project. In its place, he recommended Middleman. However, as I started transitioning my site to Middleman, I found a few bugs, and wishing to be a good open source citizen, I included new tests with my bug reports. Thomas Reynolds, the project’s creator, encouraged me to keep contributing, and even gave me commit access to the GitHub repository very early on (at which point I broke the build on my first commit). From that point I started fixing more and more bugs, and then eventually answering issue reports, and finally adding new features and helping to overhaul large parts of the project. I’d contributed to open source projects before, but just with a bugfix here and there. Middleman has been my first real contribution, with ongoing effort and a real sense of ownership in the project.

My contributions have focused mainly on three areas. First, I helped to get the new Sitemap feature working. I liked the way similar frameworks like Nanoc provided a way to access a view of all the pages in a site from code, to help with automatically generating navigation. Middleman’s sitemap goes beyond that, providing a way to inspect all the files in a project (including virtual pages), and even to add to the list via extensions.
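As a sketch of the kind of thing the sitemap enables, here's a hypothetical navigation helper (the helper name and the filtering logic are my own, not part of Middleman's API):

```ruby
# config.rb — a hypothetical helper built on Middleman's sitemap
helpers do
  # Collect the top-level HTML pages, e.g. for rendering a nav menu in a layout
  def nav_pages
    sitemap.resources.select do |resource|
      resource.ext == ".html" && !resource.path.include?("/")
    end
  end
end
```

A layout can then iterate over `nav_pages` instead of hard-coding every menu entry, and pages added later show up automatically.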

The next area I worked on was the blogging extension. I like the way Jekyll handles static blogging, but I wanted my blog to be just one part of my whole site – Jekyll is a little too focused on blog-only sites. I basically rewrote the fledgling blog extension that Thomas had started, and I’m really proud of the result. It not only handles basic blogging, but also generates tag and date pages, supports next/previous article links, and is extremely customizable. I moved my blog over from WordPress (no more security vulnerabilities, yay!) and it’s been great.
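A rough sketch of what using the extension looks like in config.rb (the option values here are illustrative, not defaults):

```ruby
# config.rb — blogging as one section of a larger Middleman site
activate :blog do |blog|
  blog.prefix            = "blog"                     # articles live under /blog/
  blog.permalink         = ":year/:month/:title.html"
  blog.tag_template      = "tag.html"                 # generated tag pages
  blog.calendar_template = "calendar.html"            # generated date pages
end
```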

The last place I focused was on website performance. I’ve always been interested in how to build very fast websites, and I wanted Middleman to support best practices around caching and compression. To that end, I built an extension that appends content-based hashes to asset filenames so you can give them long cache expiration times, and another extension that will pre-gzip your files to take load off your web server and deliver smaller payloads.
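Both performance extensions are switched on in config.rb, typically only for builds; a minimal sketch (the hashed filename is just an example):

```ruby
# config.rb — performance extensions, applied only when building the site
configure :build do
  activate :asset_hash # e.g. site.css -> site-0b1467a8.css; safe to cache forever
  activate :gzip       # writes .gz twins so the server can skip on-the-fly compression
end
```

Because the hash changes whenever the file's content changes, browsers can cache the asset indefinitely and will still pick up new versions immediately.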

Beyond that I’ve fixed a lot of bugs, sped things up, written a lot of documentation, and added a lot of little features. I’m really happy that 3.0 is finally out there for people to use, and I hope more people will choose to contribute. And I’m looking forward to all the new stuff we’re going to add for 3.1!

Securely and conveniently managing passwords

Good password security habits are more important than ever, but it can be hard to take the common advice about using complex, unique passwords when they’re so inconvenient to manage. This article explains why password security is such a big deal, and then lays out a strategy for managing passwords that can tame your accounts without causing an undue burden.

The safety of your online passwords has always been important, but in the last year or so it’s become even more clear that extra care needs to be taken with your account credentials. Passwords have been stolen from PlayStation Network, Steam, LinkedIn, Last.fm, eHarmony, and more. This problem is only going to get worse as exploits become more sophisticated and more services reach millions of users without investing in information security. Furthermore, advances in computing power mean that cracking stolen databases of passwords is getting easier and easier.

The way these hacks usually go is that, via some software flaw, somebody manages to steal the database containing user account information for a service. Sometimes, that’s enough to gain access to the stolen accounts, because the passwords were stored as plain text. One warning sign that a site does this is that they’ll offer the option to send you your password if you forget it, rather than letting you reset the password to a new one. If they can send you your password, then they know it, and if they know it, somebody who steals the database can know it too.

More frequently, passwords are “hashed” – a process that makes it easy to tell if a user has entered the correct password, but very difficult to actually recover the password. However, that’s not enough to prevent data thieves from figuring out the passwords. They use huge, pre-computed lists of common passwords and their hashes called “rainbow tables” to figure out which password was used for which account. There are defenses against this sort of attack, but even large, established sites like LinkedIn didn’t use them. This is one of the reasons why common passwords (like “password”) are so easy to crack – they’re right at the top of rainbow tables (which may contain hundreds of millions of other passwords too).
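To make the precomputed-table idea concrete, here's a toy illustration in Ruby (four common passwords standing in for a table of hundreds of millions; real rainbow tables use a more sophisticated time–memory tradeoff, but the effect is the same):

```ruby
require 'digest'

# A toy "rainbow table": precomputed hashes of common passwords.
common_passwords = %w[password 123456 letmein qwerty]
table = common_passwords.each_with_object({}) do |pw, h|
  h[Digest::SHA1.hexdigest(pw)] = pw
end

# A stolen, unsalted hash falls to a single lookup:
stolen = Digest::SHA1.hexdigest('letmein')
table[stolen]  # => "letmein"

# A per-user salt defeats the precomputed table, because every user's
# hash is different even when the passwords are the same:
salted = Digest::SHA1.hexdigest('x9$k' + 'letmein')
table[salted]  # => nil
```

This is also why a long, random password helps even on a site with weak hashing: it simply isn't in anybody's table.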

Given all that, the real problem starts when somebody uses the same password on multiple services. Imagine you use the same password for a gardening forum and your email. The gardening forum software contains a flaw that allows hackers to steal the user database and figure out the passwords. They then take those usernames and passwords to popular email services and try them out. Since the password is the same, they get right in, and have full access to your email. But that’s not all – once somebody has access to your email, they can reset passwords for all the other services you use, including juicy targets like online banking. And they’ll know what to go after by simply reading your emails. This sort of cross-service attack happened a lot after the PlayStation Network breach. The thieves took the PSN passwords they’d gotten and rightly assumed that those passwords would work on Xbox Live, where they were able to make lots of purchases using the accounts’ linked credit cards. More recently, World of Warcraft and Diablo 3 players have had their accounts taken over to sell off their gold and items, likely by people using stolen PSN, Steam, and LinkedIn account information.

Some people try to protect themselves by having a few different passwords that they reuse – one for “secure” systems like online banking, one for common things like email, and one for “everything else”. The problem is that your account security is only as good as the weakest link. Once one password falls into the wrong hands it can be used to break into more and more other services, and each newly compromised account can be a stepping stone to more sensitive targets. This need not even be as straightforward as what they get by gaining access to your email. For example, let’s say somebody gains access to your Facebook account. From there, they may be able to pull enough personal information to answer challenge questions (What’s your mother’s maiden name? Where were you born?) at your bank’s website. Or maybe they’ll just stop there and use your Facebook account to spam your friends with links to malware sites.

In an ideal world, you want to use long, complex passwords that are different for every service you have an account with. Long, complex passwords are much less likely to be found in rainbow tables, so even if a user database is stolen, your password isn’t likely to be one of the ones recovered. Having a unique password per site means that if thieves do figure out your password, they will only have access to your account on one service, not many. Plus, your response to hacks you know about (many go undetected or unreported) is to just change your password on that one site, instead of having to retire a password used all over the Internet.

Fortunately, it turns out that keeping track of hundreds of unique, complex passwords can be done, and it can be reasonably convenient. I manage separate passwords for every account, and I’m going to explain how so you can too.

Disclaimer: This is not the be-all and end-all of password security. There are weaknesses in my strategy, but I believe it provides enough security benefit along with enough convenience that it will protect most people from common attacks.

The first thing you need is a password vault. A password vault is an application that remembers your passwords for you – the vault is encrypted, and it has a password itself that lets you open it. Think of this like taking all your keys and locking them up in a mini-safe when you’re not using them. The password vault allows you to remember a unique password for every site and get at them all with a single password that you can change any time you like. Good password vaults also let you store other information you might forget, like account numbers and challenge question answers (I like to make random answers to challenge questions too, to protect against attackers who can figure out the real answer). And, as a bonus, the vault serves as a directory of all the sites you actually have accounts on - before I moved my passwords into a vault, I had no real idea of all the different services I had created an account on.

The vault I’ve chosen to use is KeePassX. I like it because it’s very secure, it’s free, and it runs on many different operating systems (I regularly use OS X, Windows, Linux, and iOS machines). There are other perfectly good password vaults like 1Password and LastPass. What’s important is that you choose one and use it.

KeePassX Logo

Once the vault is installed, it’s time to fill it up with your passwords. First, choose a master password. This should be easy to memorize and you should change it every few months. Next, add in all the accounts you can remember, along with their current password. I listed them all out first so I’d know what passwords I needed to change, but you can also change the password for each account as you enter them. For each account, find the “change password” feature and use your vault’s password generator to choose a new, completely random password. Ideally this password should be long – more than 16 characters. Note that some sites impose odd restrictions on your passwords, so you might need to play with the options to generate a password the site will accept. The worst are the sites that don’t say there is any restriction on password length, but when you paste in your new password, they clip it to a certain length. This causes the password you save in your vault to not match what the site saved, and you won’t be able to log in. I have a Greasemonkey script (which works on Firefox and Google Chrome) that will show these limits even if the site doesn’t. Another thing you can do is to immediately log out of a site after changing your password, and log back in. That way, if there’s a problem, you know about it right away and can fix it then.

KeePassX Password Generator
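The generator built into vaults like KeePassX boils down to something like this minimal Ruby sketch (the character set and defaults are my own assumptions, not KeePassX's actual settings):

```ruby
require 'securerandom'

# Build a random password from a configurable character set.
def generate_password(length: 20, symbols: true)
  chars  = [*'a'..'z', *'A'..'Z', *'0'..'9']
  chars += %w[! @ # $ % ^ & *] if symbols
  Array.new(length) { chars[SecureRandom.random_number(chars.size)] }.join
end

# Some sites reject symbols or silently cap the length, so keep both tunable:
puts generate_password(length: 24, symbols: false)
```

The important part is using a cryptographically secure source of randomness (here `SecureRandom`) rather than a plain `rand`, so the output can't be predicted.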

Once you’ve gotten all the accounts you can think of, it’s time to find the ones you can’t remember. Search the Internet for your name, usernames you use, and your email address, and you’ll find accounts you’ve totally forgotten about – old forum accounts, services you tried once and dumped, etc. If you’re lucky, you can just delete your account, but few services offer such an option. In that case, just change the password and add the account to your vault.

At this point, you have a complete record of all your online accounts, and each one should have a unique, random password. I’ve done this myself with the exception of a few accounts where I have to enter my password frequently on my phone (mostly my iTunes password) – entering a 30-character random password every time would be impossible. In that case, I have a memorizable password that I change frequently and only use on those services, and I mix in some unique bit for each of them. For example, if the base password is “fuzzydog” (it’s not), my iTunes password might be “fuzzydogappstore”. It’s certainly not as secure as fully random passwords, but I can remember it, and I’m not using it anywhere else.

Now, when you need to log into a site, just open up the password vault, find the right entry, and copy/paste the password into the site or application you’re using. To make this less of a burden, I’d recommend using the password saving features of your browser. The only thing to keep in mind is that this lets anyone who gets ahold of your computer log into those sites – you should configure your computer to lock and require a password if the screensaver comes on, to foil anyone who’d walk up to your computer while you’re gone and try to mess with it.

The next step is to make sure your vault is available wherever you need it. For this I use the file-synchronization service Dropbox (which everyone should be using already). Dropbox shows up as a folder on your computer, and whatever you put in it shows up on all your other computers. In fairness, there are other good services you could use like SkyDrive, Box, or Google Drive, but I like Dropbox the best. Once you’ve got Dropbox installed, move your vault into it, and now you have access to the latest version of your vault on all of your computers. I also store the actual KeePassX software in Dropbox so that when I start using a new computer I can just install Dropbox and have everything ready to go immediately.

Dropbox Logo

For my iOS devices, I’ve installed the PassDrop app. PassDrop can read your password vault straight out of Dropbox, so you also have your passwords on your phone. You can then use the phone’s copy/paste feature to get the passwords from PassDrop to wherever they need to be. I’m sure there are similar apps for other mobile operating systems, but I don’t have experience with them.

That’s pretty much it. At this point, you’ll have instant access to all your passwords wherever you go – no more forgetting which password you used on some obscure site when you signed up years ago, and much less risk of getting your accounts hijacked or broken into. And when the next big site loses their passwords, you’ll be able to change your password there, update your vault, and get on without worrying.

Bonus: One thing you can do to go above and beyond this level of security is to take advantage of “two-factor authentication” where it’s offered. With two-factor authentication, you log in both with something you know (your password) as well as something you have (often your mobile phone). Google offers this through their Google Authenticator phone app and it’s a great idea given how much is tied into your Google Account these days, especially when email is such a juicy target. Facebook also offers a similar feature which is worth turning on, as do Blizzard and PayPal. Many banks offer this too. Sometimes it’s an app, and sometimes they just send you a code via SMS. Turning this feature on means that even if somebody steals your password, they’d also need to steal your phone to log into your account. I enable these wherever I find them.

PNGGauntlet 3.1.2 - Bugfixes and forced PNG conversion

PNGGauntlet 3.1.2 is a minor update that resolves some bugs and adds a much-requested option. First, the bugfixes – PNGGauntlet will now correctly add directories that have a ‘.’ in their name. Previously, it’d reject the whole directory, claiming it had an unrecognized image extension. Now it works as expected. I’ve also tweaked the way PNGGauntlet handles temp files, so if you’ve been getting errors about not being able to write or delete temp files, this should address that.

The only new feature is an option labeled “Always convert files to PNG even if it would make them larger” in the Options dialog. PNGGauntlet has always single-mindedly gone for the smallest files it could, and this meant that when it converted from something like JPEG to PNG and the resulting file was larger, it’d just leave the JPEG and not write the PNG. After all, if a graphic was already smallest as a JPEG, then it should stay that way. However, I’ve had many people asking for PNGGauntlet to convert those files to PNG anyway. So the option is now there if you want it.

Update on XBList 4

In August 2011, a major change to how Xbox.com works broke XBList. This sort of occurrence isn’t uncommon, since XBList simply screen-scrapes Xbox.com, leaving it vulnerable to even minor changes in how friend data is displayed on the site. The August changes were more sweeping, though, preventing me from just making a quick fix. I added fixing XBList to my pile of pending projects, and expected to get to it in the next few weeks. That didn’t happen.

What has happened?

I haven’t really had the will to work on XBList very much in the last few years. For starters, XBList isn’t a very interesting project - it’s a constant game of keeping up with Xbox.com, it’s a pretty boring app, and it sees the least usage of any of my software projects. I’ve also moved to using Mac OS X and Linux almost exclusively, meaning I wasn’t even running XBList myself. Not that it’d matter to me much, since I also haven’t been playing Xbox games. In the last few years I’ve shifted my interests away from sitting on a couch to going outside and enjoying the Pacific Northwest through parkour, scuba diving, biking, etc. My indoors time has been spent more on other projects and other creative endeavors than gaming. I’m no longer playing nightly matches of Halo with my East Coast friends who also have less time than they used to. Thus, my drive to devote time to XBList has dropped sharply.

I’ve also become increasingly embarrassed by XBList. It was my first desktop application, built in college as my first C#/.NET project as well. I barely knew how to program, let alone build anything wonderful (or maintainable). Combined with the ugly inflexibility of Windows Forms, XBList could never be something I was proud of as it was.

All this is to explain why, when XBList broke last August, I decided to throw away everything I had and start over. I chose to develop a new XBList 4 using Titanium Desktop, a cross-platform application framework that uses web technologies like HTML and JavaScript. In theory, this would allow me to build a new XBList that looked better, borrowing heavily from Microsoft’s new Metro styling and their iPhone Xbox Live app. It would allow me to ship versions of XBList for OS X and Linux, which is especially important since I predicted, correctly, that Microsoft would ship their own Xbox Live integration with Windows 8. As an aside, when I built XBList originally I thought it’d be a temporary solution until Microsoft made their own Windows Xbox Live app. It’s amazing that it’s taken them 11 years, and that they even launched on iOS before Windows. Anyway, working in Titanium would let me develop on OS X, play around with CoffeeScript and Knockout, and use my CSS (er, Sass/Compass) skills to do what I never could with Windows Forms.

However, after getting the basics working, the project stalled out. I didn’t have the interest to finish up all the little pieces that would turn XBList into a finished product. Xbox.com continued to change, rendering one weekend’s work obsolete by the time I picked it up again the next week. Bugs in Titanium and frustration with CoffeeScript slowed my progress. And most recently, Appcelerator abandoned Titanium Desktop to focus on their more-popular mobile framework. They didn’t even finish releasing the beta version I was having to use. I had suspected this would happen even when I started using it, but it was still disappointing to have the rug pulled out from under me.

What happens now?

I haven’t decided. Right now, I’m pretty much on the fence between trying to finish what I’ve got and put it out there, warts and all, and just discontinuing XBList entirely. Even if I do finish it, I’m not sure it’ll be up to my standards, I’ll have to support it for three platforms, and with Titanium Desktop’s future looking bad, I don’t know what the experience of installing and running XBList would be. I’m almost certain that Titanium Desktop won’t be kept working with developments like Apple’s Gatekeeper or Windows 8. Perhaps XBList 4 will be released someday, but I can’t say when that’ll be or even if it’ll happen. And just to head off the inevitable question, no, I will not be open-sourcing XBList or giving it over to another developer. If you’d like to make your own friends list viewer, it’s probably easier to just start from scratch.

PNGGauntlet 3.1.1 fixes bugs and improves canceling

I was hoping PNGGauntlet 3.1.0 would be the last release I’d need to do for a while, but it looks like there were still a few bugs that needed to be fixed. 3.1.1 is out and fixes all the bugs that were reported to me. The most important was a bug where you could add a non-PNG image to PNGGauntlet, and after compressing the image would convert to PNG, but still have its old file extension. Now, the file extension will be changed to .png as you’d expect. Besides that, I’ve made it so that PNGGauntlet will run on the smaller .NET 4.0 Client profile, and I’ve fixed the “Cancel Optimize” button so it’ll cancel immediately, killing any in-progress compressors, rather than waiting for the current compressor to finish. That’s particularly helpful since some images can take minutes to compress. Please grab 3.1.1 if you haven’t already, and feel free to email me if you notice anything still broken.

PNGGauntlet 3.1: Bugfixes and parallel file compression

It’s been less than a week since PNGGauntlet 3 was released, and now PNGGauntlet 3.1 is out! It turns out that when I found PNGGauntlet 3, forgotten and incomplete, I hadn’t realized exactly how incomplete it was. Most everything worked, but the options dialog was only half-implemented, not allowing you to change OptiPNG and DeflOpt options. Worse, I’d broken the ability to launch PNGGauntlet with command-line options, which also broke the “Open With…” feature introduced in 2.0.1. I wanted to fix those bugs, but I know that I personally hate it when I update a program only to have another update right away. I knew I needed some neat feature to add to make it more palatable to update, and the only thing I could think of was the feature that everybody asks me for every time I release PNGGauntlet. So I did it – PNGGauntlet will now use all of your processor cores to compress files in parallel. You can turn it off if you don’t want it, but in my tests on an older dual-core machine, it halved the time to compress a batch of images. Hopefully that will make this update go down a bit smoother.

Update: OK, so it looks like there are still a few bugs, introduced by adding the parallel compression and the new compressors. Non-PNG files are being converted to PNG but keeping their original file names, and errors are popping up while parallel-compressing files. Sorry, I’ll have a 3.1.1 version out soon that addresses these. No, I didn’t “remove features” in favor of nonsensical behavior, these are all just bugs.

PNGGauntlet 3: Three compressors make the smallest PNGs

This weekend I released a major update to PNGGauntlet, my PNG compression utility for Windows. A lot has changed, including a lot of bug fixes, but the biggest news is that PNGGauntlet now produces even smaller PNGs! I did a bunch of research, and I found that by combining the powerful PNGOUT utility that PNGGauntlet has always used with OptiPNG and DeflOpt, even more bytes could be shaved off of your PNG images. The contributions from OptiPNG and DeflOpt are often small compared to what PNGOUT does, but if every byte counts, you’ll be happy with the new arrangement. The new compressors do slow down the process a bit, though, so you can turn them off if you don’t want them.

That’s not all that’s changed, however. The UI has been streamlined, leaving only the most essential options. Drop files on the app, hit Optimize, and don’t worry about the rest. However, if you want to tweak the compressors, there’s an all-new options panel that exposes every possible setting for each compressor. The PNGGauntlet website has also been overhauled with a much more modern look.

Before you ask, no, the new PNGGauntlet will not compress multiple images at once to make use of multicore processors. I cover this in the FAQ, but since Ken Silverman, PNGOUT’s author, provides a professional PNGOUT for Windows that’s multicore-optimized for only $15, I don’t want to compete by matching PNGOUTWin’s feature set. It’s absolutely not a matter of not knowing how to do it. And anyway, the individual compressors do a good job of using multiple cores on their own.

One question that deserves an answer is why there was no PNGGauntlet release in the last year and a half. The answer is essentially that I forgot about PNGGauntlet. The last release, 2.1.3, was in May of 2010. That December, I did some work on a new version of PNGGauntlet, incorporating the new compressors and slimming down the UI. After I’d done that, I decided that I wanted to overhaul the UI completely - it’s built with the old Windows Forms technology and a pretty rickety open source data table library, and I’ve always been embarrassed by how crude it looks. My plan was to use Windows Presentation Foundation (WPF), which was supposed to be the new way of developing UIs for .NET apps. However, I soon discovered that Microsoft’s WPF libraries don’t really give you a native-looking UI. Applications developed with WPF look kinda like Windows apps, but they’re off in a bunch of subtle ways that really bothered me. So I ended up starting to draw my own controls to match Windows 7 more closely. And after a while down that rathole, I sorta gave up and shelved the whole project in disgust.

Since then, I’ve actually switched to using my MacBook Air and OS X almost exclusively, and I almost never use my Windows machines. The prospect of developing Windows apps no longer interests me much, and I don’t really use PNGGauntlet anymore myself (I use the very nice Mac analogue ImageOptim). PNGGauntlet still worked, so it stayed out of my mind until @drewfreyling messaged me on Twitter asking about incorporating the latest version of PNGOUT into PNGGauntlet. I figured it would be pretty simple to do a minor update, but when I booted up my old desktop and took a look at the code, I found my mostly-completed update just waiting to be released. So, no new slick modern UI, but I was able to spend an hour finishing up what I had and release it as PNGGauntlet 3. Hopefully it’ll be a useful and welcome upgrade to both new and existing PNGGauntlet users.

Dashboard Widgets - Reach Challenges and Xbox Live Gamercard

,

A few months ago I finally gave in and bought a MacBook Air, and I haven’t looked back. At this point I’m using my Mac for most of my day-to-day computing, and I’m very happy with it. It’s certainly much friendlier to the predominantly Ruby-based web development I like to do. As a first dip of the toes into Mac programming, I decided to try to make a couple Dashboard widgets. Dashboard is the Mac widget platform, sorta like Windows Sidebar, only it came first and it’s a much better platform. Dashboard widgets are relatively full-featured, built on WebKit using HTML and JavaScript, and can be authored with the free Dashcode IDE. I found working with Dashboard to be a little frustrating, and it’s clear that Apple isn’t investing much in that space anymore, but it’s still miles ahead of the excruciating experience of developing Windows Sidebar gadgets.

My first widget was an ultra-simple Xbox Live Gamertag widget, which was basically a straight port from the Windows version. Aside from the preferences, which are now on the “back side” of the widget instead of a fly-out panel, not much is different.

The second widget is a bit more complex - it shows the current challenges available in Halo: Reach. Every day (and week) there are new challenges players can meet in order to get credits to buy in-game armor. This widget helps keep that info a keypress away.

Anyway, if you have a Mac, please head on over to my Dashboard Widgets page and give them a shot!

Update: Since Bungie has discontinued their Reach site, I’ve removed the Reach widget.