Externalities in worse-is-better


While worse-is-better makes it possible to develop useful and deployable software more easily, there is a great cost to developing software this way. Our claim is that this cost cannot be justified in today's computing environment.

The Externalities of Worse-is-Better

The inherent risk in the worse-is-better design philosophy is that it trades away usability (something visible primarily to users) in exchange for implementation simplicity (something visible only to the developers).

To elaborate on this argument, let us conduct a thought experiment. Suppose that we wish to measure the cost of software in the number of programmer hours required to create that software. The idea behind worse-is-better is that in some cases it may be desirable to decrease the cost of software development by adding complexity to the resulting software, whether by choosing a less intuitive interface or by removing functionality that might be too difficult to support. Let's suppose that, hypothetically, we have the ability to quantify exactly how much time would be required to develop the more fully-fledged software (we'll call this time D) and how much time would be saved by the worse-is-better modifications to that software (call it ΔD). Then the time required to develop the worse-is-better software can be stated as

D - ΔD

Now, since this software is designed to be widely adopted and used, we can think of the amount of time each user spends with the software per usage. We'll call this a "usage event." For example, if the software is a spreadsheet, one "usage event" would be one person using the spreadsheet one time to solve one problem. This value varies from person to person and from use case to use case, but we can average it out to get the cost of one usage. Now, consider two quantities. First, let's think about how much time must be spent per usage if the software were developed "the right way." We'll call this U. Second, consider the time per usage if the software is designed according to worse-is-better. By definition, the worse-is-better software must be harder to use than the "right thing" software, and so we can think of the increase in use time as a quantity ΔU. The time per usage of the worse-is-better software is thus

U + ΔU

Now, let's think about how much total time would be spent on the software - including both development and usage - under each model, supposing that the software ends up being used n times in total. We have

D + nU for "the right thing", and
(D - ΔD) + n(U + ΔU) for worse-is-better.

Now, consider how much more time is spent on the worse-is-better software than on the "right thing" software. This is ((D - ΔD) + n(U + ΔU)) - (D + nU) = -ΔD + nΔU. In other words, we have a one-time savings of ΔD in the development cost, plus a recurring cost of ΔU per usage of the software.

The problem with this arrangement is that no matter how great the initial savings in implementation time, the marginal cost to users (nΔU) will eventually overtake ΔD: once n exceeds ΔD/ΔU, the worse-is-better software has cost more time in total. In other words, the savings in programmer time will eventually be offset by the increased complexity or reduced feature set of the resulting software system. That is, the extra cost grows as O(n) in the number of uses of the software.
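
To make this concrete, here is a minimal sketch in Python (with entirely made-up figures for D, ΔD, U, and ΔU, since the model is abstract) that computes the total time under each design and the break-even number of uses n = ΔD/ΔU:

    import math

    def total_time_right_thing(D, U, n):
        """Total hours: full development cost D plus n uses at U hours each."""
        return D + n * U

    def total_time_worse_is_better(D, dD, U, dU, n):
        """Total hours: development savings dD, but every use costs dU extra."""
        return (D - dD) + n * (U + dU)

    # Made-up figures: the shortcut saves 500 programmer-hours (dD) but costs
    # each user an extra 0.01 hours (36 seconds) per usage event (dU).
    D, dD = 2000.0, 500.0   # development hours
    U, dU = 0.25, 0.01      # hours per usage event

    break_even = math.ceil(dD / dU)   # smallest n at which the savings are used up
    print(f"break-even at n = {break_even} uses")

    for n in (1_000, break_even, 1_000_000):
        extra = total_time_worse_is_better(D, dD, U, dU, n) - total_time_right_thing(D, U, n)
        print(f"n = {n:>9}: extra cost of worse-is-better = {extra:+10.1f} hours")

With these made-up numbers the one-time savings of 500 hours is exhausted after 50,000 uses; every use beyond that point adds to the virtual cost.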

Of course, one of the initial motivations behind worse-is-better is that the more correct version of the software will not attract as many users as the worse-is-better version because it will arrive on the scene later. In that sense, the worse-is-better product is better than the right-thing product because the total utility it creates is much greater. The extra cost we are considering is a "virtual cost" that is paid only relative to an idealized solution. The importance of this virtual cost is that it allows us to reason about what a better solution might look like. We can talk about solutions that have a lower net cost relative to the perfect solution, as well as solutions whose costs diminish over time. While no solution may ever achieve the ideal, measuring how close we can get to this ideal can tell us much about the nature of various design paradigms.

Specific Examples

There are a great many examples where the tradeoff described above has led to the widespread adoption of technologies that place an undue burden on end users.

JavaScript

JavaScript was designed by Brendan Eich in 1995 to enable simple programs to run inside the Netscape browser. Although it has many nice features, JavaScript is also notable for having many serious design flaws.

It was created very much in the worse-is-better spirit: the language was designed in ten days so that it could make it into Netscape Navigator 2.0. Due to time pressure, it was kept very simple, and as a result it ended up with some significant flaws, such as its permissive implicit type coercion, automatic semicolon insertion, and the fact that built-in objects and functions can be freely redefined at runtime.

A particularly amusing example of the limitations of JavaScript is evil.js, a library that indirectly redefines the built-in objects and functions to render any interesting JavaScript code entirely non-functional.

METAR

METAR is a message format for describing meteorological data that dates from 1968. It was designed to allow for automatic weather reporting and data transmission over the low-bandwidth media available at the time. It encodes meteorological data as a sequence of acronyms and compressed numeric values that describe prevailing conditions. A sample METAR message, along with a description of its contents, can be found here.

When METAR was designed, it needed to balance human readability with message succinctness. In the spirit of worse-is-better, its designers opted to put a premium on space, and so the resulting format is terse and requires readers to memorize a table of standard acronyms (available on the METAR Wikipedia page). Because of its implementation simplicity, it was widely adopted and is now an international standard.
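
To get a sense of how much decoding the format pushes onto the reader, here is a minimal sketch in Python that expands a few of the standard groups. The report itself is invented for illustration, and only the simplest forms of the station, time, wind, visibility, temperature, and altimeter groups are handled:

    import re

    # A made-up example report, simplified; real METARs contain more groups
    # (gusts, variable winds, weather phenomena, remarks, ...).
    report = "KJFK 211251Z 28016KT 10SM FEW250 07/M01 A3012"

    def decode_temp(t):
        """METAR writes sub-zero temperatures with an 'M' prefix instead of a minus sign."""
        return -int(t[1:]) if t.startswith("M") else int(t)

    fields = report.split()
    station = fields[0]                                 # 4-letter ICAO station code
    day, hour, minute = fields[1][:2], fields[1][2:4], fields[1][4:6]
    wind = re.match(r"(\d{3})(\d{2,3})KT", fields[2])   # simple dddffKT form only
    temp, dew = fields[5].split("/")

    print(f"Station     : {station}")
    print(f"Observed    : day {day} of the month at {hour}:{minute} UTC")
    print(f"Wind        : from {wind.group(1)} degrees true at {wind.group(2)} knots")
    print(f"Visibility  : {fields[3][:-2]} statute miles")   # strip the trailing 'SM'
    print(f"Temperature : {decode_temp(temp)} C, dew point {decode_temp(dew)} C")
    print(f"Altimeter   : {int(fields[6][1:]) / 100:.2f} inHg")

Even this toy decoder has to know several arbitrary conventions (the M prefix for negative temperatures, the implicit units and field widths of each group), which is precisely the knowledge the format asks human readers to memorize.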

Hardware now is significantly more powerful than the hardware available when METAR was designed, and so the bandwidth restrictions that once forced METAR's design are no longer relevant. However, METAR has become so widely adopted that displacing it would be all but impossible. Consequently, all American pilots are required to learn to read METAR reports as part of their training (Learning Statement 3).

SQL injection

SQL, the almost universal language used to communicate with databases, is entirely text-based. As the original design document from 1974 makes clear, SQL was intended to be typed in manually by humans. For that purpose it works fine.

However, once a program starts to generate SQL, the fact that it is a text-based language that freely mixes user query inputs with database commands becomes a problem. The attacks shown in this XKCD comic and described in this Microsoft white paper come from trying to make a system designed to parse trusted human input instead parse a mixture of trusted machine input and untrusted human input.

Unfortunately, the interface needed to make the system secure is complex and difficult to get right, as described in the Microsoft paper. If a programmer does the obvious thing and concatenates user input directly into the query text, the site becomes vulnerable.
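
As a minimal sketch of the problem, here is what the contrast looks like in Python against an in-memory SQLite database; the users table and the attacker's input are invented for illustration:

    import sqlite3

    # Hypothetical schema and data, for illustration only.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'alice-secret'), ('bob', 'bob-secret')")

    attacker_input = "nobody' OR '1'='1"

    # The obvious approach: build the query by concatenating strings. The attacker's
    # quote characters become part of the SQL, the WHERE clause matches every row,
    # and every secret is returned.
    unsafe = "SELECT secret FROM users WHERE name = '" + attacker_input + "'"
    print("concatenated :", conn.execute(unsafe).fetchall())

    # A parameterized query: the input travels out of band and is never parsed as
    # SQL, so it can only ever be compared against the name column as a literal.
    safe = "SELECT secret FROM users WHERE name = ?"
    print("parameterized:", conn.execute(safe, (attacker_input,)).fetchall())

The concatenated query returns both secrets; the parameterized one returns nothing, because the attacker's string is treated purely as data.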

To fix this, perhaps something a little closer to "the right thing" would have been better. If database queries on the web were handled via a library with calls for specific queries (harder to implement but prevents huge numbers of errors) as opposed to sending input directly to a SQL server (easy to implement but vulnerable to attack), this problem would never have occurred.
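
One hypothetical shape such a library could take, sketched in Python on top of the same in-memory SQLite setup as in the previous sketch: application code calls purpose-built functions, and the SQL text is fixed at development time rather than assembled from user input.

    import sqlite3

    class UserStore:
        """A hypothetical query library: each supported operation is a method,
        and callers never see or construct SQL text."""

        def __init__(self, conn):
            self._conn = conn

        def add_user(self, name, secret):
            self._conn.execute(
                "INSERT INTO users (name, secret) VALUES (?, ?)", (name, secret)
            )

        def secret_for(self, name):
            row = self._conn.execute(
                "SELECT secret FROM users WHERE name = ?", (name,)
            ).fetchone()
            return row[0] if row else None

    # Usage: there is no raw SQL in the application code, so there is nothing to inject into.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
    store = UserStore(conn)
    store.add_user("alice", "alice-secret")
    print(store.secret_for("alice"))               # alice-secret
    print(store.secret_for("nobody' OR '1'='1"))   # None: the input is just an unknown name

This interface is more work to build than passing strings straight through, but it removes the injection class of errors entirely.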
