cocoon-dev mailing list archives

From Stefano Mazzocchi <stef...@apache.org>
Subject [RT] Improved navigation of learning objects
Date Sun, 12 Oct 2003 15:55:59 GMT

On Sunday, Oct 12, 2003, at 16:13 Europe/Rome, Alan Gutierrez wrote:

> The trouble with Wiki and docs is that new users, such as myself,
> are going to look for a documentation outline. A good TOC and index
> makes all the difference in the world when searching documentation.

eheh, right on.

> Has anyone discussed how to impose an outline on a Wiki?

Yes. There are some proposals on the table, ranging from simple to 
futuristic.

                                 - o -

The simple one is a manually created, single-dereferencing linkmap.

Imagine that you have a repository with the following learning objects:

  /1
  /2
  /3
  /4

which are edited and created individually. Then you have a linkmap that 
basically says

  Trail "A"
    /1
   Section "Whatever"
     /3
     /4
   Section "Somethign else"
     /2

  Trail "B"
    /4
    /1

Trails are like "books", and they might share LOs. Trails might be 
published as a single PDF file for easier offline review.

Trails can be used as "tabs" in the Forrest view, while the rest is 
the navbar on the side.

The LO identifier (http://cocoon.apache.org/LO/4) can be translated to 
a real locator (http://cocoon.apache.org/cocoon/2.1/A/introduction) and 
all the links rewritten accordingly.

This link translation is a mechanical lookup, based on the linkmap 
information.
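
To make the lookup concrete, here is a toy sketch in Java. All names 
and structure are mine, invented for illustration; this is not 
existing Cocoon or Forrest code:

  import java.util.LinkedHashMap;
  import java.util.Map;
  import java.util.regex.Matcher;
  import java.util.regex.Pattern;

  /** A hypothetical single-dereferencing linkmap: LO id -> real locator. */
  public class Linkmap {

      private final Map<String, String> locators = new LinkedHashMap<>();

      public void map(String loId, String locator) {
          locators.put(loId, locator);
      }

      /** Mechanical lookup: translate one LO identifier into its locator. */
      public String resolve(String loId) {
          String locator = locators.get(loId);
          if (locator == null) {
              throw new IllegalArgumentException("LO not in linkmap: " + loId);
          }
          return locator;
      }

      /** Rewrite every LO identifier found in a page body. */
      public String rewriteLinks(String body) {
          Pattern loRef = Pattern.compile("http://cocoon\\.apache\\.org/LO/(\\w+)");
          Matcher m = loRef.matcher(body);
          StringBuffer out = new StringBuffer();
          while (m.find()) {
              m.appendReplacement(out, Matcher.quoteReplacement(resolve(m.group(1))));
          }
          m.appendTail(out);
          return out.toString();
      }

      public static void main(String[] args) {
          Linkmap linkmap = new Linkmap();
          linkmap.map("4", "http://cocoon.apache.org/cocoon/2.1/A/introduction");
          // prints the body with /LO/4 replaced by its real locator
          System.out.println(linkmap.rewriteLinks(
              "See <a href=\"http://cocoon.apache.org/LO/4\">the intro</a>."));
      }
  }

The trail/section structure would sit on top of this table; the point 
is only that publishing reduces to a pure, mechanical substitution 
step.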

[note: this is getting closer to what topic maps are about! see 
topicmaps.org, though, IMO, it wouldn't make sense to use topic maps 
for this because they are much more complex]

                                     - o -

The approach above works, but it requires two operations:

  1) creation of the LO
  2) connection of the LO in the linkmap

It is entirely possible to have a page that lists all the LOs, but it 
would be pretty useless. It is also possible to find an LO by 
searching... this is normally how wikis are accessed, but it would be 
better to find a way to have LOs "self-connect".

                                     - o -

In the more futuristic approach, each learning object specifies two 
things:

  1) what it expects the reader to know
  2) what it expects to give to the reader

The above reflects a mechanical model of the theory of cognition, 
which holds that information is transformed into knowledge by 
projection onto the existing cognitive context.

Suppose that you have a learning object library of

                   /1 -(teaches)-> [a]
                   /2 -(teaches)-> [a]
  [a] <-(expects)- /3 -(teaches)-> [b]
  [a] <-(expects)- /4 -(teaches)-> [c]

Now you add a new learning object indicating

  [c] <-(expects)- /5 -(teaches)-> [b]

and you can infer that

  [a] <-(expects)- /4+/5 -(teaches)-> [b]

therefore

  /3 and /4+/5

are cognitively related.
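
Here is a toy sketch of that inference in Java (a hypothetical model 
with invented names; real LOs would expect/teach *sets* of concepts, 
but single concepts keep the example short):

  import java.util.ArrayList;
  import java.util.List;

  /** Toy inference over learning-object prerequisite chains. */
  public class TrailInference {

      /** A learning object: expects one concept, teaches another. */
      record LearningObject(String id, String expects, String teaches) {}

      /** Collect every chain of LOs leading from concept `from` to `to`. */
      static void chains(List<LearningObject> library, String from, String to,
                         List<LearningObject> path, List<List<LearningObject>> out) {
          if (from.equals(to) && !path.isEmpty()) {
              out.add(new ArrayList<>(path));
              return;
          }
          for (LearningObject lo : library) {
              // never reuse an LO, so the search terminates on cyclic libraries
              if (lo.expects().equals(from) && !path.contains(lo)) {
                  path.add(lo);
                  chains(library, lo.teaches(), to, path, out);
                  path.remove(path.size() - 1);
              }
          }
      }

      public static void main(String[] args) {
          List<LearningObject> library = List.of(
              new LearningObject("/3", "[a]", "[b]"),
              new LearningObject("/4", "[a]", "[c]"),
              new LearningObject("/5", "[c]", "[b]"));
          List<List<LearningObject>> found = new ArrayList<>();
          chains(library, "[a]", "[b]", new ArrayList<>(), found);
          // prints the two cognitively related trails: [/3] and [/4, /5]
          for (List<LearningObject> chain : found) {
              System.out.println(chain.stream().map(LearningObject::id).toList());
          }
      }
  }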

                               - o -

The above is all nice and good, but here we hit the big wall: who 
defines the taxonomy, and how.

The taxonomy is the collection of all those identifiers (like [a] 
above) that identify abstract concepts.

If each editor comes up with his own identifiers, we might have two 
issues:

  1) identifiers are so precise that there is no match between 
documents written by different people

  2) identifiers are so broad in spectrum that concepts overlap and 
dependencies blur

This is, IMHO, the biggest problem that the semantic web has to face. 
The RDF/RDFSchema/OWL stack is very nice, a piece of art once you get 
it (and I still don't, but the fog is starting to clear up a little), 
but it's all based on taxonomical or ontological contracts... which 
are, IMO, the most painful and expensive things to create and maintain.

So we must find a way to come up with a system that helps us in the 
discovery, creation and correctness maintenance of a taxonomy, but 
without removing or overriding human judgment.

                               - o -

Here is a usage scenario (let's focus on text, so LO = page):

  a) you edit a page.

  b) you highlight the words in the page that you believe identify the 
knowledge distilled in the page and transmitted to the reader.

  c) you also highlight the words in the page that you believe identify 
the cognitive context the reader needs in order to understand this 
page.

  [this can be done very easily in a WYSIWYG editor, using different 
colors]

  d) when you are done, you submit the page into the system.

  e) the page gets processed against the existing repository.

  f) the system suggests the topics that might be closest to your page, 
and you select the ones you think fit best, or introduce a new one in 
case nothing fits.

Point e) seems critical to the overall smartness of the system, but I 
don't think it is, since the semantic estimation is done by humans at 
point f); point e) just has to be smart enough to remove all the pages 
that have nothing at all to do with the current learning object.

Possible implementations are:

  - Euclidean distance in document vector space (used by most search 
engines, including Google and Lucene)
  - latent semantic distance (this one is patented, though the patent 
will expire in a few years; it is used for spam filtering by Mail.app 
in Mac OS X, and by the Microsoft Office help system and the Office 
assistant).
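
As a sketch of the first option (a naive term-frequency version; this 
is nothing like what Google or Lucene actually implement internally, 
and every name below is invented): build a term vector per page and 
rank the repository by Euclidean distance to the submitted page, 
keeping only the closest ones for step f).

  import java.util.HashMap;
  import java.util.HashSet;
  import java.util.Map;
  import java.util.Set;

  /** Naive document-vector distance, enough to filter candidates in step e). */
  public class VectorDistance {

      /** Term-frequency vector of a text (naive tokenization, no stemming). */
      static Map<String, Integer> termVector(String text) {
          Map<String, Integer> tf = new HashMap<>();
          for (String token : text.toLowerCase().split("\\W+")) {
              if (!token.isEmpty()) {
                  tf.merge(token, 1, Integer::sum);
              }
          }
          return tf;
      }

      /** Euclidean distance between two term vectors. */
      static double distance(Map<String, Integer> a, Map<String, Integer> b) {
          Set<String> terms = new HashSet<>(a.keySet());
          terms.addAll(b.keySet());
          double sum = 0.0;
          for (String term : terms) {
              double diff = a.getOrDefault(term, 0) - b.getOrDefault(term, 0);
              sum += diff * diff;
          }
          return Math.sqrt(sum);
      }

      public static void main(String[] args) {
          Map<String, Integer> submitted = termVector("sitemap pipeline matcher generator");
          Map<String, Integer> related   = termVector("the sitemap declares pipeline components");
          Map<String, Integer> unrelated = termVector("continuations in flowscript");
          // the related page scores a smaller distance than the unrelated one
          System.out.printf("related: %.2f, unrelated: %.2f%n",
              distance(submitted, related), distance(submitted, unrelated));
      }
  }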

                                - o -

The above model paints a double-dereference hypertext, a sort of 
"polymorphic" hypertext where links are made to "abstract concepts" 
that are implicitly dereferenced against resources.

This allows the structure of the system and the navigation between 
LOs to grow independently, using information both from the various 
contextual dependencies and from analysis of how the learning objects 
are actually used by readers.

In fact, navigation between learning objects can be:

  1) hand written
  2) inferred from the contextual dependencies
  3) inferred from the usage patterns
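
To illustrate the double dereference (entirely hypothetical code; the 
point is just that a link targets a concept, and each of the three 
strategies above is a different resolver behind the same interface):

  import java.util.List;
  import java.util.Optional;
  import java.util.Set;

  /** Hypothetical "polymorphic" hypertext: links point at concepts, not pages. */
  public class PolymorphicLink {

      record LearningObject(String id, Set<String> expects, String teaches) {}

      /** A resolution strategy: hand-written maps, dependency inference
          and usage statistics could each implement this. */
      interface Resolver {
          Optional<LearningObject> resolve(String concept, Set<String> readerKnows);
      }

      /** Strategy 2: infer from contextual dependencies, picking an LO
          whose prerequisites the reader already satisfies. */
      static Resolver dependencyResolver(List<LearningObject> library) {
          return (concept, readerKnows) -> library.stream()
              .filter(lo -> lo.teaches().equals(concept))
              .filter(lo -> readerKnows.containsAll(lo.expects()))
              .findFirst();
      }

      public static void main(String[] args) {
          List<LearningObject> library = List.of(
              new LearningObject("/3", Set.of("[a]"), "[b]"),
              new LearningObject("/5", Set.of("[c]"), "[b]"));
          Resolver resolver = dependencyResolver(library);
          // a reader who already knows [c] is sent to /5 rather than /3
          System.out.println(resolver.resolve("[b]", Set.of("[c]")).map(LearningObject::id));
      }
  }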

I believe that a system that is able to implement all of the above 
would prove to be a new kind of hypertext, much closer to the original 
Xanadu vision that Ted Nelson outlined in the '60s when he coined the 
term hypertext.

I don't think we have to jump all the way up there; there are smaller 
steps that we can take to improve what we have, but this is where I 
want to go.

--
Stefano.

