Wednesday, February 10, 2010

The Tyranny of the Podcast

Am I the only one who longs for the benefits of the written word? Or are there others still out there who are dismayed by how much web content is moving to multimedia video and audio options?

I like to read. I can scan written content very quickly, and go back if I realize I missed something important. I can read faster than the conversational flow of most podcasts and video chats. I can focus on the content, instead of trying to decipher accents, get past someone's funny-sounding voice, or overlook bad production values. With very few exceptions - driving being the big one - I much prefer reading content to listening to it.

And yet, it seems like more and more information I'd be interested in reading is going into podcasts and videocasts instead. The NYT covers the e-book pricing battle that broke out between Amazon and Macmillan - but on a podcast. Adam Engst and Andy Ihnatko, two columnists I love reading, discuss the iPad and the Amazon furor - but only on a podcast. I feel like I'm increasingly being shut out of discussions I'd love to follow, because they take place only in audio or video.

Am I doomed to be a dinosaur here?

Wednesday, June 10, 2009

The new rise of Palm?

I admit, I've got some unresolved bitterness about Palm. I was a happy and even enthusiastic Palm user for nearly a decade, since the second-generation PalmPilot. Then things started falling apart:
  • Palm devices stagnated, or even slipped back. The T5 was the last model I really liked, and even it was a step down from prior models in some ways: a cheap plastic case instead of a solid metal one, for example. The LifeDrive was an interesting idea, but the hard drive actually slowed operation down and lost its raison d'être once SD cards matched its capacity. (And it was way too expensive.) The T|X was an even greater disappointment: cheap build quality, more application instability, and wireless that turned out to be fairly worthless in actual use. (It was largely crippled by poor or missing applications and the inability to download and install software over the 'net, and it suffered badly in comparison to the Nokia Internet Tablets I was starting to use at the time.) Then there were the Treos...
  • I never really liked the Treos. The main reason Palm's PDAs stagnated was the Treos - but I had no use for a smartphone at the time, and even if I had, I thought the Treos were markedly inferior to the mainline Palm PDAs. The devices were much bigger and bulkier than the PDAs - yet even with the extra bulk, the keyboard forced the screen to shrink below a size I found comfortable, and the keyboard itself was too small to be useful for me. And then the Treos started stagnating the way the PDAs had.
  • Palm basically abandoned Macintosh users, leaving the Mac version of Palm Desktop/HotSync to stagnate, and then to rot. There were no updates for new Palm devices, no fixes for stability and memory issues under new OS versions, and so on.
  • All the failed "Palm's Future" projects. Palm tried several times to modernize the platform and get a new OS version in place, Cobalt and the Foleo being two of the most notable attempts; every time Palm tried and failed, it eroded confidence further. The situation was similar to the 'next-generation OS' problems Apple went through in the mid-to-late 90s, but Palm's problems were more serious. Apple at least managed to make significant enhancements to its creaky old OS while trying and failing to get something new in place; Palm didn't manage to do much of anything significant for its current customers.
  • The final straw for me was shoddy, even rude, treatment by Palm support the last couple of times I tried to get repair work done through them.
So... objectively, I can admit that the new Palm Pre is a pretty amazing achievement for a company in the situation Palm was in. I can even admit there are some features there that I'm interested in. But at this stage, it's hard for me to be objective about Palm, or to be willing to take a chance on the Pre.

Tuesday, June 9, 2009

Those who forget history...

Sometimes, I get really irritated by people who act completely ignorant of history.

To name the latest example: A new iPhone model, the iPhone 3GS, was introduced yesterday. With prices the same as the prior iPhone 3G models... unless you currently have an iPhone 3G. Then the prices are up to $200 higher, with one 'upgrade' price level in the middle for customers at a certain point in their two-year contract. Cue internet outrage.

Now, the reason for this is no mystery. The iPhone 3G was a subsidized phone; AT&T paid a substantial part of the actual cost of the phone, in return for making that money back (with, presumably, interest) in the monthly subscription fees over the required two-year contract. (This is the same reason cellphone carriers charge an early termination fee for customers who cancel the subscription on a subsidized phone before the end of the contract.) At this point in the contract (approximately one year for the people who bought the iPhone 3G on release), AT&T presumably hasn't earned back the cost of the subsidy. So they charge you extra to upgrade phones, as a way to recover the subsidy money they're not earning through the remainder of the original contract. (And no, renewing the contract on a new phone doesn't count towards the subsidy on the current phone; the new phone is also subsidized, and the two years of the contract renewal go towards paying off the subsidy on the new phone.)
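
To put rough numbers on that reasoning - and these are purely illustrative figures I've made up, not AT&T's actual pricing - the math looks something like this:

    # A back-of-the-envelope sketch of the subsidy math described above.
    # Every number here is an illustrative assumption, not AT&T's actual pricing.
    full_price = 599.0         # hypothetical unsubsidized cost of the handset
    subsidized_price = 199.0   # what the customer paid up front
    subsidy = full_price - subsidized_price   # carrier's up-front loss: 400.0

    contract_months = 24.0
    months_elapsed = 12.0      # roughly where the original iPhone 3G buyers are now

    # The carrier recoups the subsidy out of the monthly fees over the contract term.
    recovered = subsidy * (months_elapsed / contract_months)
    unrecovered = subsidy - recovered   # 200.0 still outstanding

    # A flat 'upgrade price' premium recovers a fixed chunk of that amount;
    # a prorated scheme (as discussed below) would charge exactly the unrecovered part.
    print("Unrecovered subsidy: $%.2f" % unrecovered)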

There are good arguments to be made about the details of the arrangement - whether it's fair to have simply one 'upgrade price' level for customers partway through their contracts, no matter how many months are remaining; whether the upgrade fees should be prorated, as contract cancellation fees often are; even whether AT&T should write off the remainder of the subsidy as a gesture of goodwill. And there's certainly a good argument to be made about whether subsidized phones are a good idea at all; there were some good ones percolating in the time leading up to the iPhone 3G's release, when it started to be rumored that the next iPhone would be a subsidized phone.

What really annoys me is all the professed shock and outrage that this is somehow something new or unprecedented. It is not. This exact same issue came up when the iPhone 3G was released last year: AT&T customers with time remaining on a contract with a subsidized phone had to pay the same kinds of upgrade prices to get the iPhone 3G. The only difference this time around is that now the iPhone 3G is the subsidized phone; last time, first-gen iPhone owners didn't have to pay the upgrade prices, because the first-gen iPhone was sold at full price rather than at a subsidized one.

None of this should be a surprise to anyone who's been paying attention. As far as I'm aware, the same basic practices hold true for any subsidized phone from any carrier. A number of people at the iPhone 3G introduction even pointed out that iPhone 3G buyers would likely be facing this situation when a new model was released. So please, spare me the histrionics and breast-beating, as if this were a novel practice deserving outrage; it isn't. Instead, let's talk about the situation as the existing reality it is, and discuss the best thing to do about it.

Sunday, May 3, 2009

The Safari 4 Tabs

I tried to give the Safari 4 beta's tab implementation - tabs in the window titlebar - a fair evaluation. I really did. But in the end, I used the hidden preference setting to switch back to the old tab style, and I was much happier. The new tabs gave me problems from both a conceptual standpoint and a practical one.

Conceptual Problems

Tabs-in-titlebar supporters make the valid point that tabs are, conceptually, docked independent windows. Therefore, all the content that's normally subordinate to a window should be subordinate to the tab instead, and the tab should be placed above the content - in Safari's case, above the address bar. In this situation, putting the tabs into the titlebar saves screen space. Also, the argument goes, since tabs are supposed to be dockable windows, it makes sense to put them into the titlebar - the one area of the window that unequivocally belongs to the window.

Unfortunately, doing so interferes with the primary function of the titlebar. Titlebars are the conceptual foundation of a windowing system - the one fixed, stable, manipulable part of every window on the screen. Windows in OS X have no border and may or may not have scroll bars or a grow box, but they always have a titlebar. Titlebars help the eye pick a window out from all the rest of the clutter on the screen, they give the user a way to position the window, and they host the close/minimize/zoom widgets.

If you put tabs in the titlebar, that changes. The fixed, stable foundation is no longer fixed. Windows with tabs now behave differently - not just from every other kind of window on the system, but from each other. "Safe" click areas change from window to window as the number of tabs changes. Dragging a window becomes a more finicky operation, as you now have to hit the precise area of the frontmost tab - without accidentally hitting the close or drag widgets for the tab - to move the window without switching tabs. Additional clutter in the titlebar makes the window harder to pick out from the background clutter on your desktop. This is just a bad idea.

Practical Problems

Aside from the conceptual issues, there are some practical usability problems. As implemented in the Safari 4 beta, the start of the tab area practically butts up against the close/minimize/zoom widgets; it's very easy to overshoot and click one when you meant to click the other. Putting the tabs up in the titlebar also moves them significantly farther from the content area; it may just be psychological, but I feel like I have to work harder to reach them. And even if the titlebar is the same height as the tab area in older Safari versions, the tabs feel like smaller targets because there's no margin above them. The additional visual clutter not only makes the window harder to pick out from the rest of the desktop, it makes individual tabs harder to pick out from each other. Finally, there's no longer a way to see a full page title at a glance - in old Safari, the titlebar always showed the full page title of the frontmost tab, even if it was too long to fit in the tab itself, but putting tabs in the titlebar removes this display area.

A Noble Failure

I give Apple credit for trying something new, and for the possibility of a uniform way to handle tabs across applications in OS X. But the execution just doesn't work. They need to take the Safari 4 tabs back to the development labs and try again.

Wednesday, April 29, 2009

The more things change...

Just to give an idea where I'm coming from:

My first computer experience was playing with a TRS-80 Model I back in 5th grade, when one of my teachers brought it in to class for a few days. I followed all the computer magazines I could get my hands on - Byte, 80 Micro, Popular Computing, Creative Computing from time to time, A+, Nibble, and more. I lived my computer dreams vicariously through the magazines, because I couldn't afford a real computer of my own. When I was finally able to buy a TRS-80 Pocket Computer, I still lived vicariously - because nice as it was, it was still horribly underpowered compared to a full desktop computer. (1.9 kilobytes of RAM - that's smaller than the size of this post!) I read the pages and dreamed of machines I wanted to own - the TRS-80 Model 100, the Sinclair ZX Spectrum and QL, the Otrona Attache, the Workslate, the Epson HX-20, and more. (In the last few years, as people began unloading older systems on eBay, I've been able to pick up a number of them for reasonable prices.)

So I've lived through a lot of different waves in the computer world. I missed the initial microcomputer wave - the kit-builders who assembled S-100 systems like the Altair 8800 and the IMSAI 8080 from bags of parts, ran CP/M on them, and hooked up display terminals just to be able to communicate with them - but I did come in on the second wave. The TRS-80 Model I, the Commodore PET, and the Apple ][ were the first mass-produced microcomputers, the first that could arguably claim the "personal computer" label, and marked the first mass expansion of computer adoption. I was able to watch it unfolding, and that's given me a rather jaundiced view of many 'recent' buzz-trends in computing.

In the Beginning Was the Command Line? No. In the Beginning was the ROM-based BASIC prompt... well, actually, punch cards. And then little toggle switches on the front of the system cabinet. (What, you thought those switches and blinkenlightsen on the front of the IMSAI 8080 were just for show?) But for the first mass-adoption wave that started with the 1977 Trinity, and the home computer wave that followed, the BASIC interpreter built into the computer was the first thing users saw when they turned on the machine. And they could type in program listings that were included in most computer magazines, and it was Good. And it let them write their first "Hello, World!" program, and it was Good.

And yea, the Fans of the Computer were much pleased, and wrote introductory programming articles for Popular Computing starring Sherlock Holmes teaching Dr. Watson to program. (I Kid You Not.) And it was proclaimed that the Age of the Computer was upon us, and that everyone would have a computer in the home, and that everyone would learn to program. And it was said that this would lead to a revolution in society, that learning to program would make everyone better at critical thinking, to the greater benefit of all. And Lo!, the sales of the Atari 400/800 and the Commodore 64 did skyrocket, and the great Cosby did speak for the TI-99/4A, and Coleco did show a TV commercial proclaiming the Adam capable of temporal shifts. ("Adam, my time-travel program!") And all were sure that this would come to pass.

And then the bottom dropped out.

It turned out that by and large, the people buying these millions of first-generation home computers weren't interested in learning how to write their own recipe programs. The home computer market cratered, much like the home videogame market had, and most of the first-generation systems ended up getting stuffed in a closet somewhere, or on garage sale tables for $10. The 'computer in every home' had to wait another decade or two, and it wasn't because society in general was interested in learning how to program - it was because the GUI made computers easy enough and powerful enough for average people to use without learning to program.

So when I see someone touting an operating system that requires users to get down into its guts and tinker to make it work the way they want, and claiming that this will lead to Wonderful Things because users will understand how the computer works, have more control over it, and be able to make it do more... you'll pardon me if I'm skeptical.

Thoughts on "Pirate Google"

Let's see... a group set up a site called "Pirate Google" that performs Google searches for torrent files, in an attempt to demonstrate the 'hypocrisy' of prosecuting The Pirate Bay for linking to pirated material when Google also links to illegal torrents.

  • Google indexes torrents more or less incidentally as part of a blind (i.e. covering everything, without preference) spidering of the entire web. The Pirate Bay not only deliberately focuses on indexing torrents, it hosts its own torrent tracker.
  • I'm not sure a takedown notice is even feasible against a blind, constantly updated index of the entire web, but Google does follow 'safe harbor' provisions by honoring takedown requests on YouTube. The Pirate Bay forfeits any claim to safe harbor by blatantly refusing takedown requests.
Nope, looks like no difference to me!

Tuesday, April 28, 2009

New UI Paradigms?

I've seen a number of posts in the last year or two about how the 'desktop paradigm' (a GUI with windows, icons, hierarchical filing system displayed as folders, etc.) is tired, stale, outdated, or just needs to be replaced. Some form of three-dimensional organizational system is a popular choice. (See this ArsTechnica article for an example.) Most recently, a couple of examples by writers I respect are Steven Frank's 'long rambling exploration' and a set of responses (including another stacking-browser reference) by Lukas Mathis.

And yet the desktop GUI persists - and very few pundits acknowledge the elephant in the room and try to address why it persists, in the face of many attempts to replace it. (Steven F does at least touch on the subject, though he attributes it mostly to inertia rather than looking for a deeper cause.) I think it's instructive to look back at some of the failed attempts to replace it, and the reasons why they failed. For discussion's sake, I'll divide them into two main groups here: the 'cartoon map' camp and the 'complex GUI/desktop++' camp.

The "cartoon map"

This camp developed from the premise that the desktop GUI is still too complex and too abstract, and that the metaphor needs to be even simpler and more direct. Hence the 'cartoon map': the entire interface is drawn as detailed pictures of actual rooms, desks, filing cabinets, telephones, clocks, and so on. The effect is rather like a graphical adventure game. Here are some of the examples I can remember:
  • Microsoft Bob: If the cartoon map had a poster boy, this would be it - one of the most notorious and best-remembered 'maps' out there. Introduced in 1995, Bob was supposed to be a replacement interface for Windows 3.1 and Windows 95. It flopped.
  • Magic Cap: An early PDA operating system from the mid-90s, Magic Cap powered devices from Sony (the Magic Link), Motorola (the Envoy), and its creator, General Magic (the DataRover). It also flopped.
  • eWorld: Another mid-90s entrant (are we seeing a pattern here?), eWorld was Apple's attempt to run its own online service, based partly on America Online's software and structure but with graphical navigation maps and a greater emphasis on style. Another flop, though some have argued this was due more to poor promotion and lack of support than to flaws in the interface.
  • Andrew Tobias' Managing Your Money 5.0 (Macintosh): An alternate opening menu screen displayed a graphical office view to represent program functions, complete with a little 'mouse-hole' for exiting. (Sadly, I couldn't find any screenshots.) The screen was gone from later versions of the program.
Verdict: Experienced users found them overly restrictive, childish, and insulting. Novice users found them condescending and insulting. As far as I can tell, almost everybody found them insulting. Is it any surprise that they failed?

The Lesson: Treat users as adults, not children, if you want to succeed.

The "complex UI/desktop++"

This camp comes at things from the opposite perspective: the desktop metaphor is too limiting, too restrictive, too chained to real-world metaphors, and UI design needs to break away from it and build a richer, more complex interface that gives users more power and flexibility. There isn't the same kind of consistency here that we saw with the cartoon maps, but a few concepts show up frequently:
  • Compound Documents: Instead of focusing on applications that create specific kinds of documents (e.g. Excel creating spreadsheets, Adobe Illustrator creating graphics), we should treat the document as the starting point and use various modules to create different kinds of content within that framework.
  • No More Files: On the flip side, why have a filing system at all? The very idea of documents filed away in a hierarchical structure of folders is too limiting; do away with the folder structure and just let files float around. Let the computer itself keep track of them for you - after all, isn't that something it's supposed to be good at? Or go a step further, do away with files altogether, and have all your information float around in a data 'soup'. (There's a toy sketch of this idea just after this list.)
  • 3D Visualization: This group agrees that graphical/visual/spatial methods of display and information organization have merit, but they view the fixed 2D frame of the traditional desktop metaphor as too limiting; they seek to add additional content and/or organizational abilities by incorporating a third dimension into the interface in some fashion.
While some next-gen UI proponents want to start with a clean slate and build a new system from the ground up, most of them view the desktop paradigm as too embedded to get rid of completely, and build their innovations on top of existing GUIs; hence 'desktop++'.
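
For what it's worth, here's a toy sketch of the 'no more files' / data-soup idea from the list above. All the names here are my own illustration, not any real system's API: instead of locating a document by its path in a folder hierarchy, every scrap of data carries attributes, and you retrieve things by querying those attributes and letting the system do the bookkeeping.

    # Hierarchical filing: you must know (and remember) the path.
    filesystem = {"Home": {"Taxes": {"2008": {"return.pdf": "..."}}}}
    doc = filesystem["Home"]["Taxes"]["2008"]["return.pdf"]

    # Data soup: everything floats in one pool of attribute-tagged items,
    # and retrieval is a query over attributes rather than a path lookup.
    soup = [
        {"kind": "document", "topic": "taxes", "year": 2008, "body": "..."},
        {"kind": "note",     "topic": "taxes", "year": 2009, "body": "..."},
        {"kind": "photo",    "topic": "vacation", "year": 2008, "body": "..."},
    ]

    def query(soup, **attrs):
        """Return every item whose attributes match all of the given values."""
        return [item for item in soup
                if all(item.get(k) == v for k, v in attrs.items())]

    tax_2008 = query(soup, topic="taxes", year=2008)   # no path needed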

Here are some of the UI experiments I'm familiar with - or at least have heard of, can remember off the top of my head, and can come up with some kind of reference for - incorporating one or more of the elements above. (There are several more where I frankly couldn't remember names well enough to track down references.)
  • HotSauce: Based on the Meta-Content Format created by Apple in the mid-90s, HotSauce was billed as an alternate way of navigating information on websites in 3D-space. Users saw a website as a set of floating information tags, which they could 'fly through' like a virtual reality scene out of a movie, clicking on a tag to view associated information. HotSauce faded away within a couple of years, MCF metamorphosed into RDF, and hardly anyone remembers it these days. I played with it for a few days and then dumped it; as cool as flying through 3D-space may look in the movies, in real life it was cluttered and very hard to find things, as this screenshot demonstrates.
  • OpenDoc: While not a UI per se, OpenDoc was one of the most 'complete' implementations of the compound document idea. Born at Apple in the early 90s and publicly released in System 7.5, OpenDoc was officially killed when Steve Jobs returned in 1997 - but it was effectively dead before then, with an initial burst of enthusiasm followed by lots of 'how do we make this work?' and stagnation. Others have had their own takes on why it failed; mine is that, like AOCE, OpenDoc was a victim of its own complexity. You needed too many software components to assemble something useful, putting together a compound document was more work than just doing a 'simple' document, and performance was poor on systems of the day.
  • The Humane Interface: Originally from the book of the same name, later renamed to Archy, this was UI expert (and former Macintosh team member) Jef Raskin's last project before his death. Archy is one of the 'clean-slate' projects, completely dumping a windowing interface and any concept of a filesystem, and going with a 'Zooming User Interface' instead (all 'documents' are seen as items on an infinitely large 2D plane; items can be found by zooming out, scrolling, and zooming back in, or by instant always-available text search). It is also a compound-document system, intended to eliminate applications through the concept of 'commands' that can be typed anywhere in any document, and can be installed either individually or in groups of related functionality. (For example, sending email would involve typing the body of the mail, typing the address, selecting both and typing the Send Mail command.) So far, the system doesn't appear to have gained any traction outside of a small group of fans. The main problems I have with it are scalability (how well will a ZUI document system as described work when dealing with 5 or 10 years of document accumulation, trying to browse them on a single plane?) and typing commands (which requires either lots of rote memorization or reference to documentation, and is not discoverable). Also, the system seems to be focused heavily on creating and editing text documents; I haven't seen anything in the references I've found on how the system is expected to handle graphical documents, nor anything about how to handle stand-alone applications that don't create documents, like games.
  • The Newton: Steven F covers this at some length in his post, particularly the clean-slate nature of the design and the 'data soup' system where any application could use bits of data from any other application. The Newton's marketplace failure has been discussed extensively elsewhere, but I'd like to comment on the 'data soup' idea. In practice, I found it pretty problematic, primarily because it didn't handle removable storage well. It was often difficult to tell if a bit of data was on internal storage or removable, with the result that data would often suddenly go missing when you pulled a storage card. Not good.
  • 'Piles'/Stacks: Long rumored after an Apple patent in 1994, 'Piles' were supposed to be a major rethinking of document organization. Documents could be grouped together and moved as a unit, or 'pile'; the pile would be a 3D graphical representation of the documents in it, and could be fanned through, searched, sorted, organized into sub-piles, and manipulated in other ways. When the feature finally debuted as 'Stacks' in OS X 10.5, it was considerably less ambitious: merely an alternate way of viewing folders in the Dock, replacing the existing folder icon with a 'composite' icon built from the icons of all the files in the folder and 'fanning' the folder contents when clicked. Stacks drew a great deal of criticism at 10.5's release: the composite icon was poorly done and not truly representative of the folder contents, the curve of the 'fan' made icons somewhat harder to target, and the new behavior replaced the old one of popping up a list of folder contents when clicked. Later updates to 10.5 let users (mostly) restore the prior folder behavior.
Verdict: Although many people have called for the replacement of the traditional desktop metaphor/GUI over the years, to date every replacement for it has failed to gain significant adoption.

The Lesson: While proponents of new UI paradigms will often acknowledge that their designs are more complex than current GUIs, they contend that the general public's increasing familiarity with computers renders this moot: users are more sophisticated now than they were when GUIs first achieved broad adoption, and will take the added complexity in stride. They also contend that the additional power gained by going beyond the desktop metaphor makes the higher learning curve worth it. I think the record demonstrates otherwise.

A third option - "Just Right?"

The 'cartoon maps' were too simple, too condescending. The attempts to create a richer, more complex environment were too complex. So what's left? Are we stuck with the desktop metaphor forever? I don't think so, but I think any attempt to change it will need to walk a middle ground, the Goldilocks option - not so simple that it's useless, not so complex that it's hard, but balancing usability with power. And in some ways I think the iPhone suggests a path to move forward.

Steven F points to the iPhone as the first truly successful new UI paradigm since the desktop metaphor, and while it's still pretty early to be judging that, I think he's got a point. Pen-based computers, as he points out, have generally been unsuccessful. PalmOS was successful, but while it didn't use windows, in most other ways its interface was the standard GUI writ small - scroll bars, a (hidden) menu bar, and so forth, with the stylus taking the place of a mouse. The various PDA flavors of Windows took this to a ludicrous degree, aping desktop Windows features like the Start menu, the Task Bar, and Windows-style scroll bars on a screen generally far too small to accommodate them comfortably. By contrast, the iPhone OS replaced many of the traditional GUI 'widgets' with direct manipulation of the interface; instead of using a scroll bar, for example, users scrolled by dragging a finger directly across the scrolling area. The key here, I think, is that instead of adding ever more layers of abstraction and complexity, iPhone OS reduced them - there were fewer intermediaries between the user and the interface operation.
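
To illustrate what I mean by intermediaries, here's a tiny sketch - the class and method names are my own invention, not anything from the actual iPhone OS APIs. With a scroll bar, the user manipulates a proxy widget and the widget's position gets mapped back onto the content; with direct manipulation, the finger's movement simply is the content's movement.

    class ScrollableView:
        """Toy model of a scrolling content area; all names are illustrative."""

        def __init__(self, content_height, viewport_height):
            self.content_height = content_height
            self.viewport_height = viewport_height
            self.offset = 0.0   # how far down the content has been scrolled

        def _clamp(self, value):
            return max(0.0, min(value, self.content_height - self.viewport_height))

        # Traditional widget: the user drags a scroll-bar thumb, and the thumb's
        # position along its track (a fraction from 0 to 1) is translated back
        # into a content offset - one extra layer between user and content.
        def drag_scrollbar_thumb(self, thumb_fraction):
            track_range = self.content_height - self.viewport_height
            self.offset = self._clamp(thumb_fraction * track_range)

        # Direct manipulation: the finger's movement *is* the content's movement;
        # there is no intermediary widget to find, target, or interpret.
        def drag_content(self, finger_delta_y):
            self.offset = self._clamp(self.offset - finger_delta_y)

    view = ScrollableView(content_height=2000, viewport_height=480)
    view.drag_content(finger_delta_y=-120)   # finger moves up 120 px; content follows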

While the iPhone OS has managed to establish itself as an alternative UI paradigm, it has only done so on a handheld device; unfortunately, the experience does not translate well to a traditional computer. Touchscreens, as many have pointed out, become tedious and even painful to operate for extended periods on a vertical screen; laying the screen flat removes the strain of holding an arm outstretched, but replaces it with the strain of a neck constantly bent over the display. As a direct model to copy, therefore, the iPhone isn't much help. But I think it does suggest a useful principle: empower the user while maintaining simplicity, by replacing complex abstractions with more directly manipulated ones.

Note here that I do not consider a command-line interface to be a removal of abstractions, as many CLI fans do. While they may argue - correctly - that any GUI involves far more abstraction from the actual operation of the machine, this is only true from the standpoint of the computer. From the standpoint of the user, a CLI is a much greater cognitive abstraction: it requires the user to hold the command set in their head or look it up in an external reference, and to remember when, where, and how to use each command. (This is the killer flaw of Archy, in my opinion, and of the Oberon system described by Mathis.) A good GUI, by contrast, is discoverable: you can try interacting with the system and see what happens, because there are things displayed on-screen that you can manipulate without having to study beforehand.