incubator-lucy-dev mailing list archives

From Apache Wiki <>
Subject [Lucy Wiki] Update of "BrainLog" by MarvinHumphrey
Date Sun, 20 Jun 2010 19:57:09 GMT
Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Lucy Wiki" for change notification.

The "BrainLog" page has been changed by MarvinHumphrey.


New page:
== Overview ==
A popular technique from the field of user interface testing is to record test subjects "thinking aloud":

 . Think aloud protocols involve participants thinking aloud as they are performing a set
of specified tasks. Users are asked to say whatever they are looking at, thinking, doing,
and feeling, as they go about their task. This enables observers to see first-hand the process
of task completion (rather than only its final product). Observers at such a test are asked
to objectively take notes of everything that users say, without attempting to interpret their
actions and words. Test sessions are often audio and video taped so that developers can go
back and refer to what participants did, and how they reacted.

 . []

A "brainlog" is the written equivalent: instead of speaking, the test subject types in a log
of their thoughts as they perform a task.

A typed brainlog is naturally not as rich an information source as a videotaped think-aloud
session.  It is presumably also a less accurate narration of the subject's thought process,
since typing is more laborious and intrusive than speaking.  However, a brainlog can be recorded
nearly anywhere and at any time, it can be distributed and archived via mailing lists, and
it is easy to consume quickly.

== Using brainlogs to test API design, documentation, and code clarity ==
Apache in general and Lucy in particular place a high value on API simplicity and code clarity.
 We use brainlogs to judge how transparent our codebase is and to help us improve.

Typically, a user or developer will record a brainlog while exploring an API or reviewing
a section of code for the first time.  The authors of the materials being explored then examine
the contents of the brainlog and evaluate how effectively their materials guided the test
subject.

The exercise is directly analogous to reviewing the interface of a web page to see whether
the design is guiding visitors along the path that the designers intended.

== The "curse of knowledge" ==
Authors are uniquely unqualified to gauge how consumable their work is.  Of course it makes
sense to them -- they wrote it!

In contrast, newbies make good test subjects, as they are not yet afflicted by the "curse
of knowledge":

 . And that brings us to the villain of our book: The Curse of Knowledge. Lots of research
in economics and psychology shows that when we know something, it becomes hard for us to imagine
not knowing it. As a result, we become lousy communicators. Think of a lawyer who can't give
you a straight, comprehensible answer to a legal question. His vast knowledge and experience
renders him unable to fathom how little you know. So when he talks to you, he talks in abstractions
that you can't follow. And we're all like the lawyer in our own domain of expertise.

 . Here's the great cruelty of the Curse of Knowledge: The better we get at generating great
ideas -- new insights and novel solutions -- in our field of expertise, the more unnatural
it becomes for us to communicate those ideas clearly. That's why knowledge is a curse. But
notice we said "unnatural," not "impossible."

 . Chip Heath and Dan Heath, "[[|Made
to Stick: Why Some Ideas Survive and Others Die]]"

Innocence is precious: once you have become familiar with a source, any brainlog you might
contribute no longer reflects the experience of those who are coming to the material for the
first time.  Therefore, if you are going to record a brainlog, you should do so right away.

== Editing brainlogs ==
If you make a "mistake" during testing, it may be tempting to edit the brainlog after the
fact to conceal or minimize it.  Don't!

If multiple test subjects make the same "mistake", that indicates that there is a flaw in
the design that needs to be corrected.  In fact, that sort of pattern is ''exactly'' what
UI testing is designed to reveal.

On the other hand, it's probably not a good idea to publish a brainlog that contains egregiously
inflammatory material, even if it's an accurate record of your thoughts.  Before you hit "send"
-- especially for the first brainlog you write -- step away for a few hours or a day, and
consider whether you might want to swap out certain passages for placeholders like "[intemperate
rant about XXXXXX here]".

== Evaluating brainlogs ==
When evaluating a brainlog, there are two things to bear in mind.

First, blaming the user is poor form.  The brainlogger is performing a valuable service precisely
by revealing where they went wrong or right, and they are doing a job that you ''cannot''
do by yourself.  Instead of criticizing the path they took, consider how you might modify
your source material so that the next user doesn't make the same "mistake" -- even if you
think it was a "dumb mistake".

Second, brainlogs are raw materials by nature, rather than carefully prepared constructive
criticism.  A critique is a contribution, even if it is impolitic.  If you feel miffed after
reading a brainlog, consider it a challenge to rise above and extract every last drop of value
from it.
