Fazal Majid's low-intensity blog

Sporadic pontification


Taming the paper tiger

A colleague was asking for some simple advice about all-in-one printer/copier/fax devices and got instead a rambling lecture on my paper workflow. There is no reason the Internet should be exempted from my long-winded rants, so here goes, an excruciatingly detailed description of my paper workflow. It shares the same general outline as my digital photography workflow, with a few twists.

Formats

The paperless office is what I am striving for. Digital files are easier to protect than paper from fire or theft, and you can carry them with you everywhere on a Flash memory stick. As for file formats, you don’t want to be locked in, so you should either use TIFF or PDF, both of which have open-source readers and are unlikely to disappear anytime soon, unlike Microsoft’s proprietary lock-in format of the day.

TIFF is easier to retouch in an image editing program, but:

  1. Few programs cope correctly with multi-page TIFFs
  2. PDF can combine a bitmap layer, an exact facsimile of the original page, with a searchable OCR text layer for retrieval; TIFF cannot.
  3. TIFF is inefficient for vector documents, e.g. receipts printed from a web page.
  4. The TIFF format lacks many of the amenities of a format like PDF, which was expressly designed as a digital replacement for paper.

Generating PDFs from web pages or office documents is as simple as printing (Mac OS X offers this feature out of the box; on Windows, you can print to PostScript and use Ghostscript to convert the PS to PDF).
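
For instance, assuming Ghostscript is installed and the Windows printer driver wrote out a file named report.ps (a placeholder name), the bundled ps2pdf wrapper script does the conversion in one step:

# convert the printed-to-file PostScript document into a PDF
ps2pdf report.ps report.pdf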

Please note the bloated Acrobat Reader is not a must-have to view PDFs: Mac OS X’s Preview does a much better job, and on Windows Foxit Reader is a perfectly serviceable alternative that easily fits on a USB Flash stick. UNIX users have Ghostscript and the numerous UI wrappers that make paging and zooming easy.

Acquisition

You should process incoming mail as soon as you receive it, and not let it build up. If you have a backlog, set it aside and start your new system with all new snail mail. That way the situation does not degrade further, and you can revisit the old mail later.

Junk mail that could lead to identity theft (e.g. credit card solicitations) should be shredded or, even better, burnt (assuming your local environmental regulations permit this). If you get a powerful enough shredder, it can swallow the entire envelope without even forcing you to open it. Of course, you should only consider a cross-cut shredder. Junk mail that does not contain identifiable information should be recycled. When in doubt, shred. Everything else should be scanned.

Forget about flatbed scanners; what you want is a sheet-fed batch document scanner. It should support duplex mode, i.e. be capable of scanning both sides of a sheet of paper in a single pass. For Mac users, the Fujitsu ScanSnap is pretty much the only game in town, and for Windows users I recommend the Canon DR-2050C (the ScanSnap is available in a Windows version, but the Canon has a more reliable paper feed, less prone to double-feeding). Either will quickly scan a sheaf of paperwork to a PDF file at 15–20 pages per minute.

Filing

Paper is a paradox: it is the most intuitive medium to deal with in the short-term, but also the most unwieldy and unmanageable over time. As soon as you layer two sheets into a pile, you have lost the fluidity that is paper’s essential strength. Shuffling through a pile takes an ever increasing amount of time as the pile grows.

For this reason, you want to organize your filing plan in the digital domain as much as possible. Many experts set up elaborate filing plans with color-coded manila folders and will wax lyrical about the benefits of ball-bearing sliding file cabinets. In the real world, few people have the room to store a full-fledged file cabinet.

The simplest form of filing is a chronological file. You don’t even need file folders — I just toss my mail in a letter tray after I scan it. At the end of each month, I dump the accumulated mail into a 6″x9″ clasp envelope (depending on how much mail you receive, you may need bigger envelopes), and label it with the year and month. In all likelihood, you will never access these documents again, so there is no point in arranging them more finely than that. This filing arrangement takes next to no effort and is very compact – you can keep a year’s worth in the same space as a half dozen suspended file folders, as can be seen with 9 months’ worth of mail in the photo below (the CD jewel case is for scale).

Monthly files

There are some sensitive documents you should still file the old-fashioned way for legal reasons, such as birth certificates, diplomas, property titles, tax returns and so on. You should still scan them to have a backup in case of fire.

Date stamping

As you may have to retrieve the paper original for a scanned document, it is important to date stamp every page (or at least the first page) of any mail you receive. I use a Dymo Datemark, a Rube Goldberg-esque contraption that has a rubber ribbon with embossed characters running around an ink roller and a small moving hammer that strikes when the right numeral passes by. All you really need is month-level resolution so you know which envelope to fetch, so an ordinary month-year rubber stamp would do as well. Ideally you would have software to insert a digital date stamp directly in the document, but I have not found any yet. A tip: stamp your documents diagonally so the date stamp stands out from the horizontal text.

Management

Much as it pains me to admit it, Adobe Acrobat (supplied with the Fujitsu ScanSnap) is the most straightforward way to manage PDF files on Windows, e.g. merge multiple files together, insert new pages, annotate documents and so on. Through its OCR feature, it can create an invisible text layer that makes the PDF searchable with Spotlight. There are alternatives, such as Foxit PDF Page Organizer or PaperPort on Windows, and PDFpen on OS X. Since Leopard, Apple’s Preview app has included most of the PDF editing functionality required, so I take great pains to ensure my Macs are untainted by Acrobat (e.g. deselecting it when installing CS3). See also my article on resetting the creator code for PDF files on OS X so they are opened by Preview for viewing.

Encryption

If you are storing a backup of your personal papers at work or on a public service like Google’s rumored Gdrive, you don’t want third-parties to access your confidential information. Similarly, you don’t want to be exposed to identity theft if you lose a USB Flash stick with the data on it. The solution is simple: encryption.

There are many encryption packages available. Most probably have back doors for the NSA, but your threat model is the ID fraudster rummaging through your trash for backup DVDs or discarded bank statements, not the government. I use OpenSSL’s built-in encryption utility, as it is cross-platform and easily scripted (I compiled a Windows executable for myself, and it is small enough to be stored on a Flash card). Mac and UNIX computers have it preinstalled, of course; run man enc for more details.

To encrypt a file using 256-bit AES, you would use the command:

openssl enc -aes-256-cbc -in somefile.pdf -out somefile.pdf.aes

To decrypt it, you would issue the command:

openssl enc -d -aes-256-cbc -in somefile.pdf.aes -out somefile.pdf

OpenSSL will prompt you for the password, but you can also supply it as a command-line argument, e.g. in a script.
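
For example, the -pass option lets a script read the passphrase from a file (the path below is a placeholder) instead of prompting for it:

# read the passphrase from a file rather than prompting interactively
openssl enc -aes-256-cbc -pass file:$HOME/.scan-passphrase -in somefile.pdf -out somefile.pdf.aes

Avoid the pass:yourpassword form in scripts, as the password would then be visible to other users in the process listing.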

Backup

Backing up scanned documents is no different than backing up photos (apart from the encryption requirements), so I will just link to my previous essay on the subject or my current backup scheme. In addition to my external Firewire hard drive rotation scheme, I have a script that does an incremental encryption of modified files using OpenSSL, and then uploads the encrypted files to my office computer using rsync.
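
The script itself is specific to my setup, but the general idea is simple enough to sketch (the paths, the officebox host name and the passphrase file below are placeholders, and file names containing spaces would need more careful handling):

#!/bin/sh
# encrypt scans modified since the last run, then upload the encrypted copies
SRC="$HOME/Documents/scans"
STAGE="$HOME/.scans-encrypted"
STAMP="$STAGE/.last-run"
mkdir -p "$STAGE"
[ -f "$STAMP" ] || touch -t 197001010000 "$STAMP"
find "$SRC" -name '*.pdf' -newer "$STAMP" | while read -r f; do
  openssl enc -aes-256-cbc -pass file:"$HOME/.scan-passphrase" \
    -in "$f" -out "$STAGE/$(basename "$f").aes"
done
touch "$STAMP"
# push the encrypted copies to the office machine over SSH
rsync -av -e ssh "$STAGE/" officebox:backups/scans/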

Retention period

I tend to agree with Tim Bray in that you shouldn’t bother erasing old files, as the minimal disk space savings are not worth the risk of making a mistake. As for paper documents, you should ask your accountant what retention policy you should adopt, but a default of 2 years should be sufficient (the documents that need more, such as tax returns, are in the “file traditionally” category, in any case).

Fax

The original question was about fax. OS X can be configured to receive faxes on a modem and email them to you as PDF attachments, at which point you can edit them in Acrobat and fax them back if required, without ever having to kill a tree with printouts. Windows has similar functionality. Of course, fax belongs in the dust-heap of history, along with clay tablets, but habits change surprisingly slowly.

Update (2006-08-26):

I recently upgraded my shredder to a Staples SPL-770M micro-cut shredder. The particles generated by the shredder are incredibly minute, much smaller than those of conventional home or office grade shredders, and it is also very quiet to boot.

Unfortunately, it isn’t able to shred an entire unopened junk mail envelope, and the micro-cut shredding action does not work very well if you feed it folded paper (the particles at the fold tend to cling as if knitted together). This unit is also more expensive than conventional shredders (but significantly cheaper than near mil-spec DIN level 5 shredders that are the nearest equivalent). Staples regularly has specials on them, however. Highly recommended.

Update (2007-04-12):

I recently upgraded my document scanner to a Fujitsu fi-5120C. The ScanSnap has a relatively poor paper feed mechanism, which often jams or double-feeds. Many reviews of the new S500M complain it also suffers from double-feeding. The fi-5120C is significantly more expensive, but it has a much more reliable paper feed with hitherto high-end features like ultrasonic double-feed detection. You do need to buy ScanTango software to run it on the Mac, however.

Update (2009-01-21):

I moved recently, and realized I have never yet had to open one of those envelopes. From now on, all papers not required for legal reasons (e.g. tax documents) go straight to the shredder after scanning.

Update (2009-09-08):

The new ScanSnap S1500 has ultrasonic double-feed detection. I bought a copy of ABBYY FineReader Express for the Mac. It used to be available only as bundled software with certain scanners like recent ScanSnaps, or with software packages like DEVONthink, but you can now buy it as a standalone utility. It is not full-featured, missing some of the more esoteric OCR functionality of the Windows version, as well as batch capabilities and scripting, but it works well, unlike the crash-prone ReadIRIS I had but seldom used.

Update (2009-09-22):

Xamance is a really interesting French startup. Their product, the Xambox, integrates a document scanner, document management software and a physical paper filing system. The system can tell you exactly where to find the paper original for a scanned document (“use box 2, third document after tab 7”). In other words, essentially the same filing system I suggest above, but systematically managed in a database for easy retrieval.

It is quite expensive, however, making it more of a solution for businesses. I have moved on and no longer need the safety blanket of keeping the originals, but I can easily see how a complete solution like this would be valuable for businesses that are required to keep originals for compliance reasons, such as notaries, or even government public records offices.

Credit card receipt slips and business cards are problematic for a paperless workflow. They are prone to jam in scanners, have non-standard layouts so hunting for information takes more time than it should, and are usually so trivial you don’t really feel they are worth scanning in the first place. I just subscribed to the Shoeboxed service to manage mine. They take care of the scanning and of converting the resulting data into a form that can be directly imported into personal finance or contact-management software. I don’t yet have sufficient experience with the service, but on paper at least it seems like a valuable service that will easily save me an hour a week.

Update (2011-01-13):

I finally broke down and upgraded to a ScanSnap S1500M (we have one at work, and it is indeed a major improvement over the older models). In theory this is a downgrade as the fi-5120C is a business scanner, whereas the S1500M is a consumer/SoHo model, but with some simple customization, the integrated software bundle makes for a much more streamlined workflow: put the paper in the hopper, press the button, that’s it. With the fi-5120C, I had to select the scan settings in ScanTango, scan, press the close button, select a filename, drag the file into ABBYY FineReader, select OCR options, click save, click to confirm I do want to overwrite the original file, then dismiss the scan detection window. One step vs. nine.

Update (2012-06-19):

For portable storage of the documents, I don’t bother with manually encrypting the files any more. The IronKey S200 is a far superior option: mil-spec security and hardware encryption, with tamper-resistant circuitry, potted for environmental resistance, and using SLC flash memory for speed. Sure, it’s expensive, but you get what you pay for (I tried to cut costs by getting the MLC D200, and ended up returning it because it is so slow as to be unusable).

Trimming the fat from JPEGs

I use Adobe Photoshop CS2 on my Mac as my primary photo editor. Adobe recently announced that the Intel-native port of Photoshop would have to wait for the next release, CS3, tentatively scheduled for Spring 2007. This ridiculously long delay is a serious sticking point for Photoshop users, especially those who jumped on the MacBook Pro to finally get an Apple laptop with decent performance, as Photoshop under Rosetta emulation will run at G4 speeds or lower on the new machines.

This nonchalance is not a very smart move on Adobe’s part, as it will certainly drive many to explore Apple’s Aperture as an alternative, or be more receptive to newcomers like LightZone. I know Aperture and Photoshop are not fully equivalent, but Aperture does take care of a significant proportion of a digital photographer’s needs, and Apple’s recent $200 price reduction for release 1.1 and their liberal license terms make it all the more tempting (you can install it on multiple machines as long as you are the only user of those copies, so you only need to buy a single license even if, like me, you have both a desktop and a laptop).

There is a disaffection for Adobe among artists of late. Their anti-competitive merger with Macromedia is leading to complacency. Adobe’s CEO, Bruce Chizen, is also emphasizing corporate customers for the bloatware that is Acrobat as the focus for Adobe, and the demotion of graphics apps shows. Recent releases of Photoshop have been rather ho-hum, and it is starting to accrete the same kind of cruft as Acrobat (to paraphrase Borges, each release of it makes you regret the previous one). Hopefully Thomas Knoll can staunch this worrisome trend.

Adobe is touting its XMP metadata platform. XMP is derived from the obnoxious RDF format, a solution in search of a problem if there ever was one. RDF files are as far from human-readable as an XML-based format can get, and introduce considerable bloat. If the Atom people had not taken the RDF cruft out of their syndication format, I would refuse to use it.

I always scan slides and negatives at maximal bit depth and resolution, back up the raw scans to a 1TB external disk array, then apply tonal corrections and spot dust. One bizarre side-effect of XMP is that if I take a 16-bit TIFF straight from the slide scanner, then apply curves and reduce it to 8 bits, the bit depth is not updated somewhere in the XMP metadata that Photoshop “helpfully” embedded in the TIFF, and Bridge incorrectly shows the file as being 16-bit. The only way to find out is to open it (Photoshop will show the correct bit depth in the title bar) or look at the file size.

This bug is incredibly annoying, and the only work-around I have found so far is to run ImageMagick's convert utility with the -strip option to remove the offending XMP metadata. I did not pay the princely price for the full version of Photoshop to be required to use open-source software as a stop-gap in my workflow.
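
For reference, the work-around is a one-liner (the file names are placeholders); ImageMagick's mogrify -strip does the same thing in place:

# write a copy with all profiles and metadata, including the stale XMP, removed
convert scan.tif -strip scan-clean.tif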

Photoshop will embed XMP metadata and other cruft in JPEG files if you use the “Save As…” command. In Photoshop 7, all that extra baggage actually triggered a bug in IE that would break its ability to display images. You have to use the “Save for Web…” command (actually a part of ImageReady) to save files in a usable form. Another example of poor fit-and-finish in Adobe’s software: “Save for Web” will not automatically convert images in AdobeRGB or other color profiles to the Web’s implied sRGB, so if you forget to do that as a previous step, the colors in the resulting image will be off.

“Save for Web” will also strip EXIF tags that are unnecessary baggage for web graphics (and can actually be a privacy threat). While researching the Fotonotes image annotation scheme, I opened one of my “Save for Web” JPEGs under a hex editor, and I was surprised to see literal strings like “Ducky” and “Adobe” (apparently the ImageReady developers have an obsession with rubber duckies). Photoshop is clearly still embedding some useless metadata in these files, even though it is not supposed to. The overhead corresponds to about 1-2%, which in most cases doesn’t require more disk space because files use entire disk blocks, whether they are fully filled or not, but this will lead to increased network bandwidth utilization because packets (which do not have the block size constraints of disks) will have to be bigger than necessary.

I wrote jpegstrip.c, a short C program to strip out Photoshop’s unnecessary tags, and other optional JPEG “markers” from JPEG files, like the optional “restart” markers that allow a JPEG decoder to recover if the data was corrupted — it’s not really a file format’s job to mitigate corruption, more TCP’s or the filesystem’s. The Independent JPEG Group’s jpegtran -copy none actually increased the size of the test file I gave it, so it wasn’t going to cut it. jpegstrip is crude and probably breaks in a number of situations (it is the result of a couple of hours’ hacking and reading the bare minimum of the JPEG specification required to get it working). The user interface is also pretty crude: it takes an input file over standard input, spits out the stripped JPEG over standard output and diagnostics on standard error (configurable at compile time).

ormag ~/Projects/jpegstrip>gcc -O3 -Wall -o jpegstrip jpegstrip.c
ormag ~/Projects/jpegstrip>./jpegstrip < test.jpg > test_strip.jpg
in=2822 bytes, skipped=35 bytes, out=2787 bytes, saved 1.24%
ormag ~/Projects/jpegstrip>jpegtran -copy none test.jpg > test_jpegtran.jpg
ormag ~/Projects/jpegstrip>jpegtran -restart 1 test.jpg > test_restart.jpg
ormag ~/Projects/jpegstrip>gcc -O3 -Wall -DDEBUG=2 -o jpegstrip jpegstrip.c
ormag ~/Projects/jpegstrip>./jpegstrip < test_restart.jpg > test_restrip.jpg
skipped marker 0xffdd (4 bytes)
skipped restart marker 0xffd0 (2 bytes)
skipped restart marker 0xffd1 (2 bytes)
skipped restart marker 0xffd2 (2 bytes)
skipped restart marker 0xffd3 (2 bytes)
skipped restart marker 0xffd4 (2 bytes)
skipped restart marker 0xffd5 (2 bytes)
skipped restart marker 0xffd6 (2 bytes)
skipped restart marker 0xffd7 (2 bytes)
skipped restart marker 0xffd0 (2 bytes)
in=3168 bytes, skipped=24 bytes, out=3144 bytes, saved 0.76%
ormag ~/Projects/jpegstrip>ls -l *.jpg
-rw-r--r--   1 majid  majid  2822 Apr 22 23:17 test.jpg
-rw-r--r--   1 majid  majid  3131 Apr 22 23:26 test_jpegtran.jpg
-rw-r--r--   1 majid  majid  3168 Apr 22 23:26 test_restart.jpg
-rw-r--r--   1 majid  majid  3144 Apr 22 23:27 test_restrip.jpg
-rw-r--r--   1 majid  majid  2787 Apr 22 23:26 test_strip.jpg

Update (2006-04-24):

Reader “Kam” reports jhead offers JPEG stripping with the -purejpg option, and much, much more. Jhead also offers an option to strip the mostly useless preview thumbnails, but it does not strip out restart markers.
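
Usage is as simple as the following (photo.jpg is a placeholder; note that jhead rewrites the file in place):

# remove all sections not needed to render the image (Exif, comments, Photoshop data)
jhead -purejpg photo.jpg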

How to show respect for your readers

Blogging is often seen as a narcissistic pursuit. It can be, but the best bloggers (that is not necessarily synonymous with the most popular) put their audience first. To do that, you need to know it first. Most blogs have three very distinct types of readers:

  1. Regular visitors who use web browsers and bookmarks to visit. If the page doesn’t change often enough, they will get discouraged and eventually stop coming. You need to post often to keep this population engaged.
  2. People who come from a search engine looking for very specific information. If they do not find what they are looking for, they will move on to the next site in their list; if they do, they may linger for other articles and eventually graduate to repeat visitor status. Closely related are people who follow links to your site from other sites.
  3. Those who let feed readers do the polling for them, and thus do not necessarily care how often a feed is updated. Feed readers allow for much more scalable browsing – I currently subscribe to 188 feeds (not all of them are listed in my blogroll), and I certainly couldn’t afford to visit 188 sites each day. Feed reader users are still a minority, but, especially for commercial publications, a very attractive one of tech-savvy early adopters. The flip side of this is a more demanding audience. Many people go overboard with the number of feeds and burn out, then mass unsubscribe. If you are a little careful, you can avoid this pendulum effect by pruning feeds that no longer offer a sufficient signal-to-noise ratio.

The following sections, in no particular order, are rough guidelines on how best to cater to the needs of the last two types of users.

Maintain a high signal to noise ratio

Posting consistently good information on a daily or even weekly basis is no trivial amount of work. I certainly cannot manage more than a couple of postings per month, and I’d rather not clutter my website with “filler material” if I can help it. For this reason, I have essentially given up on the first constituency, and can only hope that they can graduate to feed readers as the technology becomes more mainstream.

Needless to say, test posts are amateurish and you should not waste your readers’ time with them. Do the right thing and use a separate staging environment for your blog. If your blogging provider doesn’t provide one, switch to a supplier that has a clue.

Posting to say that one is not going to post for a few days due to travel, a vacation or any other reason is the height of idiocy and the sure sign of a narcissist. A one-way trip to the unsubscribe button as far as I am concerned.

Distinguish between browsers and feed readers

In November of last year, I had an interesting conversation with Om Malik. My feedback to him was that he was posting too often and needed to pay more attention to the quality rather than the quantity of his postings.

The issue is not quite as simple as that. To some extent the needs of these browser users and those who subscribe to feeds are contradictory, but a good compromise is to omit the inevitable filler or site status update articles from the Atom or RSS feeds. Few blog tools offer this feature, however.

Search engines will index your home page, which is normally just a summary of the last N articles you wrote. Indeed, it will often have the highest page rank (or whatever metric is used). An older article may be pushed out of the home page but still listed in the (now out of date) search engine index. The latency is often considerable, and the end result is that people searching for something see a tantalizing preview in the search engine results listing, but cannot find it once they land on the home page, or in the best of cases they will have to wade through dozens of irrelevant articles to get to it. Ideally, you want them to reach the relevant permalink page directly without stopping by the home page.

There is a simple way to eliminate this frustration for search engine users: make the home page (and other summary pages like category-level summaries or archive index pages) non-indexable. This can be done by adding a robots meta tag such as <meta name="robots" content="noindex,follow"> to the head of the summary pages, but not to the permalink pages. The search engine spiders will crawl through the summary pages to the permalinks, but only store the permalink pages in their index. Thus, all searches will lead to relevant and specific content free from extraneous material (which is still available, just one click away).

Here again, not all weblog software supports having different templates for permalink pages than for summary pages.

There is an unfortunate side-effect of this — as your home page is no longer indexed, you may experience a drop in search engine listings. My weblog is no longer the first hit for Google search for “Fazal Majid”. In my opinion, the improved relevance for search engine users far outweighs the bruising to my ego, which needs regular deflating anyways.

Support feed autodiscovery

Supporting autodiscovery of RSS feeds or Atom feeds makes it much easier for novice users to detect the availability of feeds (Firefox and Safari already support it, and IE will soon). Adding them to a page is a no-brainer.

Categorize your articles

In all likelihood, your postings cover a variety of topics. Categorizing them means users can subscribe only to those of interest to them, and thus increases your feed’s signal to noise ratio.

Keep a stable feed URL under your control

If your feed location changes, set up a redirection. If this is not possible, at least post an article in the old feed to let subscribers know where to get the new feed.

Depending on a third-party feed provider like Feedburner is risky — if they ever go out of business, your subscribers are stranded. Even worse, if a link farm operator buys back the domain, they can easily start spamming your subscribers, and make it look as if the spam is coming from you. Your feeds are just as mission-critical as your email and hosting; don’t enter into an outsourcing arrangement casually, especially not one without a clear exit strategy.

Maintain old posts

Most photographers, writers and musicians depend on residuals (recurring revenue from older work) for their income and to support them in retirement. Unless your site is pure fluff (and you would not be reading this if that were the case), your old articles are still valuable. Indeed, there is often a Zipf law at work and you may find some specific archived articles account for the bulk of your traffic (in my case, my article on lossy Nikon NEF compression is a perennial favorite).

It is worth dusting these old articles off every now and then:

  • You should fix or replace the inevitable broken links (there are many programs available to locate broken links on a site; I have my own, but linkchecker is a pretty good free one — see the example after this list).
  • The content in the article may have gone stale and need refreshing. Don’t rewrite history, however, and change it in a way that alters the original meaning; it is better to append an update to the article. If there was a factual error, don’t leave it in the main text of the article, but do leave a mention of the correction at the end.
  • There is no statute of limitations on typos or spelling mistakes. Sloppy writing is a sign of disrespect towards your readers. Rewriting text to clarify the meaning is also worthwhile on heavily visited “backlist” pages. The spirit of the English language lies in straightforwardness, one thing all the good style guides agree on.
  • For those of you who have comments enabled on your site, pay special attention to your archives; comment spammers will often target those pages, as it is easier for them to avoid detection there. You may want to disable comments on older articles.
  • Provide redirection for old URLs so old links do not break. Simple courtesy, really.
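
As an example (the URL is a placeholder), linkchecker can crawl a whole site from the command line and report dead links:

# recursively crawl the site and report broken links (add --check-extern for external links)
linkchecker https://www.example.com/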

Make your feeds friendly for aggregators

Having written my own feed reader, I have all too much experience with broken or dysfunctional feeds. There is only so much feed reader programmers can do to work around brain-dead feeds.

  • Stay shy of the bleeding edge in feed syndication formats. Atom offers a number of fancy features, but you have to assume many feed readers may break if you use too many of them. It is best if your feed files use fully qualified absolute URLs, even if Atom supports relative URLs, for instance. Unicode is also a double-edged sword; prefer HTML entity-encoding non-ASCII characters over relying on a feed reader to deal with content encoding correctly.
  • Understand GUIDs. Too many feeds generated by brain-dead blogging software will issue a new GUID when an article is edited or corrected, or when its title is changed. Weblogs Inc. sites are egregious offenders, as is Reuters. The end result is that an article will appear several times in the user’s aggregator, which is incredibly annoying. Temboz has a feature to automatically suppress duplicate titles, but that won’t cope with rewritten titles.
  • Full contents vs. abstracts is a point of contention. Very long posts are disruptive in web-based feed readers, but on the other hand most people dislike the underhanded teaser tactics of commercial sites that try to draw you to their website to drive ad revenue, and providing only abstracts may turn them off your feed altogether. Remember, the unsubscribe button is a mere click away…

Blogging ethics

The golden rule of blogging is that it’s all about the readers. Everything follows from this simple principle. You should strive to be relevant and considerate of their time. Take the time to spell-check your text. It is very difficult to edit one’s own text, but any article can benefit from a little time spent maturing, and from tighter and more lucid prose.

Don’t be narcissistic, unless friends and family are the primary audience. Most people couldn’t care less about your pets, your garden or for the most part your personal life (announcing major life events like a wedding or the birth of your children is perfectly normal, however).

Respect for your readers requires absolute intellectual honesty. Laziness or expediency are no excuse for poor fact-checking or revisionist edits. Enough said…

Update (2008-05-21):

Unfortunately, setting the meta tag above seems to throw Google off so that it stops indexing pages altogether (Yahoo and MSN search have no problems). So much for the myth of Google’s technical omnipotence… As a result, I have removed it and would advise you to do so as well.

Update (2015-11-20):

If you use JavaScript and cookie-based web analytics like Piwik or Mint, make sure those script tags are disabled if the browser sends the Do-Not-Track header. As for third-party services like Google Analytics, just don’t. Using those services means you are giving away your readers’ privacy to some of the most rapacious infringers in the world.

MacBook Pro first impressions

I am writing this on a brand-spanking new Apple MacBook Pro (yes, I know, clumsy name). One of the reasons for my purchase is that I have been spending quite a bit of time in trains lately. Trains are one of the most civilized ways to travel; Caltrain certainly beats being stuck behind the wheel in the gridlock that is U.S. Highway 101. A laptop is a good way to get things done during the 3-hour round-trip to Santa Clara.

My last few laptops were company-issued Windows models. I only ever purchased two laptops before, both Macs, a PowerBook 180c in college (it sported a 68K chip, proof that Apple could have kept the PowerBook moniker on an Intel-powered machine) and one of the original white iBooks in 2001 when they first came out around the same time as Mac OS X. For the last ten years or so, I always managed to have ultra-thin and light models (less than 2kg / 4lb) assigned to me, and the MacBook Pro is certainly heavier than I would like. That said, it has a gorgeous screen and a decent keyboard.

Subjectively so far, it does not seem appreciably slower than my dual-2GHz PowerMac G5. I ran Xbench for a more objective comparison; you can see the benchmark results for more info. Unsurprisingly, the disk I/O is in the desktop’s favor, but the Core Duo processor holds its own, and even beats the G5 handily on integer performance benchmarks.

I prefer desktops to laptops, for their superior capacity and peripherals. With its relatively puny 80GB of storage capacity, the laptop (it doesn’t really qualify as a notebook given its physical size) is not going to usurp the G5 soon. It doesn’t even have enough capacity to store my complete music library, for instance. I am not looking forward to the usual hassles of synchronizing two computers. Apple’s synchronization solution requires buying a $499 Mac OS X Server license, and third-party solutions are a bit thin.

Now, Apple is a designer PC company, and you want to protect the casework with a decent amount of padding, but the protective case itself must look sharp. I have always had good experiences with Waterfield Designs bags, made right here in San Francisco, so I naturally got one of their sleevecases. It is made of high-grade neoprene rubber rather than the foam used by other manufacturers, but in exploring my options, I couldn’t help but notice the dizzying array of choices for design-conscious Mac users. For some reason, Australian companies are over-represented; I counted no fewer than 4 manufacturers:

  • Crumpler
  • STM

As for the MacBook Pro itself, it is too soon to tell. One thing you immediately notice is how hot it gets, even though the entire aluminum case should act like one big heat sink. I haven’t played with the built-in iSight yet, so I can’t compare its quality with that of the stand-alone iSight I have mounted on my desktop.

The 512MB of RAM installed are woefully inadequate for a supposedly professional machine, but I would rather not pay Apple’s grossly inflated margins on RAM compared to Crucial. I bumped it up to the full 2GB. This upper limit is kind of disappointing when you come from a 64-bit platform (my desktop has 5.5GB of RAM). Laptops benefit even more than desktops from RAM, as free RAM is automatically used as a disk cache, and reduces the need to fetch data from slow and power-hungry 2.5″ hard drives.

Update (2006-04-05):

Don’t try to use Monolingual to strip non-Intel architectures to save some space. You will end up rendering Rosetta unusable… I used to disable Classic, but I am not sure I would go that far in allowing only Intel binaries to run on my machine.

Update (2007-08-02):

More Australian laptop bag manufacturers:

Another one bites the dust

After a brief period of 100% digital shooting in 1999–2001, I went back to primarily shooting with film, both black & white and color slides. I process my B&W film at home, but my apartment is too small for a darkroom to make prints, nor do I have a room dark enough, so I rent time at a shared darkroom. I used to go to the Focus Gallery in Russian Hill, but when I called to book a slot about a month ago, the owner informed me he was shutting down his darkroom rental business and relocating. He did recommend a suitable replacement, which actually has nicer, brand new facilities, albeit in not as nice a neighborhood. Learning new equipment and procedures was still an annoyance.

Color is much harder than B&W, and requires toxic chemicals. I shoot slides, which use the E-6 process, not the C-41 process used for the more common color negative film. For the last five years, I have been going to ChromeWorks, a Mom-and-Pop lab on Bryant Street, San Francisco’s closest equivalent to New York’s photo district. The only thing they did was E-6 film processing, and they did it exceedingly well, with superlative customer service and quite reasonable rates. When I went there today to hand them a roll for processing, I discovered they closed down two months ago, apparently a mere week after I last went there.

I ended up giving my roll to the NewLab, another pro lab a few blocks away, which is apparently the last E-6 lab in San Francisco (I had used their services before for color negative film, which I almost never use apart from the excellent Fuji Natura 1600).

Needless to say, these developments are not encouraging for a film enthusiast.

Update (2007-12-14):

There is at least one other E-6 lab in San Francisco, Fotodepo (1063 Market @ 7th). They cater mostly to Academy of Art students and are not a pro lab by any means (I have never seen a more cluttered and untidy lab). In any case, they are more expensive than the New Lab, if more conveniently located.

Update (2009-08-27):

The New Lab itself closed as well a few months ago. I now use Light Waves instead.