From Kim van der Riet <kim.vdr...@redhat.com>
Subject Linearstore directory structure and handling structure change
Date Tue, 11 Nov 2014 18:52:27 GMT
In order to prepare to implement QPID-5671 (Add ability to use disk
partitions and select per-queue EFPs), it has been necessary to change
the directory structure of the store. The journal files themselves have
not changed, only the structure. The details of the old and new
structures are given below for informational purposes, but are probably
secondary to this discussion.

The question at hand is how to handle a directory structure change
across versions, and in particular, if and how users should upgrade.
There has been some verbal discussion of this issue, but I thought it
would be of benefit to condense it here and allow all interested parties
to comment.

If a change in directory structure is such that the new version of
linearstore cannot read the old version of the store (and vice versa),
then how should the store/broker handle cases where a start/restart
encounters an old store? Here are some possibilities:

1. Add code to the store which upgrades the old layout to the new layout
if the old layout is encountered during recovery. This provides a
seamless experience for those upgrading, and can be accompanied if
necessary by loud log entries detailing the upgrade action. However, if
a user later downgrades, the older code will misread the new structure,
and the store will have to be truncated or downgraded by hand.

2. Stop the broker if the old layout is encountered (a rough sketch of
such a check is shown after this list). It is up to the user to upgrade
the layout (either by hand or with a utility) or delete the old store
before restarting the broker with the new version.

3. Create mutually exclusive directory structures for versions. This
means that a newer version will never encounter the contents of an older
version, and vice versa. If a user wants to upgrade, they will
need to move the old store files into the new structure by hand or with
a utility. This means that the broker will always start after an upgrade,
but the user may "lose" existing persistent queues and messages unless
they perform the migration.

Of course, some combinations of the above are also possible.
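
To make this a little more concrete, here is a minimal sketch (in C++,
and not the actual store code) of what the recovery-time check in
options 1 and 2 might look like. It assumes an old-layout store can be
recognised because qls/jrnl/<queue_name> contains regular .jrnl files
rather than symlinks; the function names and the detection heuristic are
illustrative only:

// Illustrative sketch only (not the actual linearstore code). It assumes an
// old-layout store can be recognised because qls/jrnl/<queue_name> contains
// regular .jrnl files, whereas the new layout contains only symlinks there.
#include <dirent.h>
#include <sys/stat.h>
#include <stdexcept>
#include <string>

// Returns true if any entry in the given queue directory is a regular file
// rather than a symlink, i.e. the directory still uses the old layout.
bool queueDirUsesOldLayout(const std::string& queueDir) {
    DIR* dir = ::opendir(queueDir.c_str());
    if (!dir) return false;
    bool oldLayout = false;
    for (struct dirent* e = ::readdir(dir); e != 0; e = ::readdir(dir)) {
        std::string name(e->d_name);
        if (name == "." || name == "..") continue;
        struct stat s;
        if (::lstat((queueDir + "/" + name).c_str(), &s) == 0 && S_ISREG(s.st_mode)) {
            oldLayout = true;
            break;
        }
    }
    ::closedir(dir);
    return oldLayout;
}

// Option 2: refuse to recover when the old layout is found. Option 1 would
// call an upgrade routine here (with a loud log entry) instead of throwing.
void checkQueueLayoutOnRecovery(const std::string& queueDir) {
    if (queueDirUsesOldLayout(queueDir))
        throw std::runtime_error("Old linearstore layout found in " + queueDir +
                                 ": upgrade or remove the store before restarting");
}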

It should also be borne in mind that, for the current changes to the
store:

1. Linearstore is not yet the "official" Linux store and, being
"experimental", is not built by default. Legacystore is still the current
default store. A change to the directory structure at this stage should
not be serious. These questions do affect possible future changes,
though.

2. Changes to the format of the individual journal files are handled
separately. Journal files contain a version number which is checked
prior to decoding; if it is incorrect, the store will stop.

3. There is currently no direct means to detect a layout change other
than by the errors encountered as the store attempts to read the
directory. This could be addressed by some kind of file or directory
layout/naming convention which makes the version clear, in a way that is
somewhat similar to idea #3 above.
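
As an illustration of that marker/naming idea, here is a sketch only:
the file name ".layout-version" and the version constant are invented
for the example, and the store does not currently write any such marker:

// Illustrative sketch only: a layout-version marker file at the top of the
// store directory. Names and values here are made up for the example.
#include <fstream>
#include <string>

static const int QLS_LAYOUT_VERSION = 2;   // e.g. 1 = old layout, 2 = new layout

// Written when a store is created (or upgraded).
void writeLayoutVersion(const std::string& storeDir) {
    std::ofstream f((storeDir + "/.layout-version").c_str());
    f << QLS_LAYOUT_VERSION << std::endl;
}

// Read during recovery; a missing marker would imply a pre-marker (old) store.
int readLayoutVersion(const std::string& storeDir) {
    std::ifstream f((storeDir + "/.layout-version").c_str());
    int version = 1;                        // assume old layout if no marker exists
    if (f) f >> version;
    return version;
}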

I would welcome ideas/feedback on this question.


File layout for linearstore:
============================
The following is for informational purposes, and illustrates the above
issue.

Background:

In order to allow empty file pools (EFPs) to be established on different
media, and for queues to be able to choose which partition to use
depending on their performance requirements, the directory structure of
linearstore must be changed. For example, queues with high throughput
and low latency requirements may be established on expensive solid state
media, while low throughput non-critical queues can be directed to use
regular rotating magnetic media.

The current layout places each queue's journal files in a
"qls/jrnl/<queue_name>" directory. However, this limits the files to
the partition on which the qls directory exists.

OLD LAYOUT

qls
  +-dat <BDB database files>
  +-jrnl
  |   +-queue_0
  |   |   +-file_1.jrnl
  |   |   +-file_2.jrnl
  |   |   ...
  |   |   +-file_n.jrnl
  |   +-queue_1
  |   ...
  |   +-queue_n
  +-tpl <contains journal files for transaction boundaries>
  +-p001 <EFP partition 1>
      +-efp
          +-2048k <Default pool of empty 2048k journal files>

Partition p001 is the default partition and is located on the same
partition as the store-directory (or its default). Additional partitions
(p002, p003, etc.) may be created and used as mount points for other
physical disk partitions. Each of these partitions would contain empty
file pools (EFPs) containing empty files of various sizes. Currently
only one size (2048k) is in use, but other sizes may be used in the
future.

To remove the single-partition limitation, the journal files are now
moved from the EFP directory into an "in_use" subdirectory on the same
partition, and a symlink to the in-use file is created in the
qls/jrnl/<queue_name> directory (a rough sketch of this allocation step
follows the NEW LAYOUT diagram below).

An addition "tidy-up" change is to remove the "efp" directory under each
partition, as it is superfluous and serves no purpose. Each EFP size
directory is now directly under the partition directory:

NEW LAYOUT

qls
  +-dat
  +-jrnl
  |   +-queue_0
  |   |   +-symlink to file_1.jrnl
  |   |   +-symlink to file_2.jrnl
  |   |   ...
  |   |   +-symlink to file_n.jrnl
  |   +-queue_1
  |   ...
  |   +-queue_n
  +-tpl <contains symlinks to files in in_use dir in the partitions>
  +-p001
  |   +-2048k
  |   |   +-in_use <contains all files in use by all queues using this partition>
  |   |   |   +-file_1.jrnl
  |   |   |   +-file_2.jrnl
  |   |   |   ...
  |   |   |   +-file_n.jrnl
  |   |   +-returned <contains files returned from use, but not yet cleaned up for re-use>
  |   +-32768k
  |   |   +-in_use
  |   |   +-returned
  |   ...
  |   +-size_n <other possible EFP sizes>
  |       +-in_use
  |       +-returned
  +-p002
  |   +-<layout as for p001>
  ...
  +-pnnn
      +-<layout as for p001>
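
Roughly speaking, allocating an empty file for a queue under this layout
amounts to a same-partition rename plus a symlink. The sketch below
assumes the empty pool files sit directly in the size directory; the
paths and the helper signature are examples, not the store's actual API:

// Illustrative sketch only: allocate an empty file from an EFP for a queue by
// moving it into the partition's in_use directory and linking it into
// qls/jrnl/<queue_name>.
#include <cstdio>      // std::rename
#include <unistd.h>    // ::symlink
#include <stdexcept>
#include <string>

void allocateJournalFile(const std::string& efpDir,     // e.g. qls/p001/2048k
                         const std::string& fileName,   // e.g. file_1.jrnl
                         const std::string& queueDir) { // e.g. qls/jrnl/queue_0
    const std::string pooledFile = efpDir + "/" + fileName;
    const std::string inUseFile  = efpDir + "/in_use/" + fileName;
    // Cheap move within the same partition: only a directory entry changes.
    if (std::rename(pooledFile.c_str(), inUseFile.c_str()) != 0)
        throw std::runtime_error("Failed to move " + pooledFile + " into in_use");
    // The queue sees the file through a symlink, so the data itself can live
    // on whichever partition holds the EFP. (A real implementation would use
    // an absolute path as the symlink target.)
    if (::symlink(inUseFile.c_str(), (queueDir + "/" + fileName).c_str()) != 0)
        throw std::runtime_error("Failed to link " + inUseFile + " into " + queueDir);
}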

The "returned" directory is intended to be a resting place for used
files which have not yet been cleaned up or overwritten for re-use. The
idea is that because cleaning up and overwriting are relatively
time-consuming actions, an external process or internal worker thread
can perform this function on a lower priority rather than the threads
which are handling message persistence. The action of moving a file from
one directory to another is relatively cheap. Currently the
cleanup/overwriting is being done by the store itself on the persistence
thread, but this opens the way to changing the way used files are
handled at a later time.
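
A similar sketch of that return step (again with illustrative paths and
names, not the store's actual code): retiring a used file is just a
rename from in_use to returned, leaving the expensive overwrite to
whichever lower-priority mechanism eventually recycles the file into the
pool:

// Illustrative sketch only: retiring a used journal file is a rename from
// in_use/ to returned/ on the same partition, so it is cheap enough for the
// persistence thread; the expensive overwrite/recycle step is left to a
// separate, lower-priority task and is not shown here.
#include <cstdio>      // std::rename
#include <stdexcept>
#include <string>

void returnJournalFile(const std::string& efpDir,      // e.g. qls/p001/2048k
                       const std::string& fileName) {  // e.g. file_1.jrnl
    const std::string inUseFile    = efpDir + "/in_use/" + fileName;
    const std::string returnedFile = efpDir + "/returned/" + fileName;
    if (std::rename(inUseFile.c_str(), returnedFile.c_str()) != 0)
        throw std::runtime_error("Failed to return " + inUseFile);
}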
  

