directory-commits mailing list archives

Subject svn commit: rev 36415 - incubator/directory/snickers/trunk/xdocs/ber-codec
Date Sun, 15 Aug 2004 19:21:40 GMT
Author: akarasulu
Date: Sun Aug 15 12:21:39 2004
New Revision: 36415

More notes to myself at this point in time

Modified: incubator/directory/snickers/trunk/xdocs/ber-codec/BEREncoderDesign.xml
--- incubator/directory/snickers/trunk/xdocs/ber-codec/BEREncoderDesign.xml	(original)
+++ incubator/directory/snickers/trunk/xdocs/ber-codec/BEREncoderDesign.xml	Sun Aug 15 12:21:39
@@ -123,5 +123,61 @@
+    <section name="Determinate Length Encoder Design">
+      <subsection name="Problems and possible solutions">
+        <p>
+          Creating a determinate length encoder without sacrificing efficiency
+          is not easy, and keeping the code manageable and readable is harder
+          still.  Both are already difficult with the state machine approach
+          we have taken.  Furthermore, we have found many clients and servers
+          that reject the indeterminate form even though the BER specification
+          allows for such variance in encodings.
+        </p>
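To make the variance concrete, here is a minimal sketch (not Snickers code) of the same SEQUENCE wrapping a 3-octet OCTET STRING encoded both ways; strict peers accept only the first form:

```java
// Illustration of the two BER length forms mentioned above.  The byte
// values follow X.690; the class name is purely illustrative.
public class LengthForms {
    // SEQUENCE { OCTET STRING "abc" } using the determinate (definite) form:
    // the length octet 0x05 is known before the V field is written.
    public static final byte[] DEFINITE = {
        0x30, 0x05,                      // SEQUENCE, definite length 5
        0x04, 0x03, 0x61, 0x62, 0x63    // OCTET STRING "abc"
    };

    // The same value using the indeterminate (indefinite) form: 0x80 in the
    // length position, terminated by two end-of-contents octets.
    public static final byte[] INDEFINITE = {
        0x30, (byte) 0x80,               // SEQUENCE, indefinite length
        0x04, 0x03, 0x61, 0x62, 0x63,   // OCTET STRING "abc"
        0x00, 0x00                       // end-of-contents octets
    };
}
```

The indefinite form lets an encoder start writing before the total length is known, which is exactly the efficiency the determinate form gives up.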
+        <p>
+          Efficiency is difficult to achieve because we need to know the
+          lengths of nested TLV nodes to build constructed nodes with a
+          determinate length encoding.  Since the topmost constructed TLV is
+          the first out the door, we cannot transmit it until all nested TLV
+          nodes have been generated with their lengths already calculated.  A
+          brute force approach might build a TLV tree first, before
+          serializing the output to a stream.  This way the length fields of
+          all nested TLVs, from the leaves on up, can be computed depth
+          first.  It also means keeping the entire transfer image in memory,
+          along with the structures needed to manage a tree.  Although DoS
+          attacks are not as much of a concern for the encoding phase as they
+          are for decoding in a server, the approach would still result in
+          very inefficient encode operations, especially when large PDUs are
+          being transmitted by either a server or a client.  Furthermore, a
+          PDU stub already exists holding the same copy of the information,
+          making it highly likely that more than twice the transfer image
+          will be required in memory.
+        </p>
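The depth-first length computation described above can be sketched roughly as follows. The class and method names are hypothetical, not actual Snickers APIs; only the arithmetic (one tag octet, short- or long-form length octets, then the V field) follows BER:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the brute force approach: build the whole TLV tree first,
// then compute every constructed node's length depth first before any
// byte is serialized.  Illustrative only.
public class TlvNode {
    public final int tag;
    public final byte[] value;   // non-null for primitive (leaf) nodes
    public final List<TlvNode> children = new ArrayList<>();

    public TlvNode(int tag, byte[] value) {
        this.tag = tag;
        this.value = value;
    }

    // Length of the V field: primitive nodes use their value bytes;
    // constructed nodes sum the full encoded size of every child,
    // which recurses to the leaves first (depth first).
    public int valueLength() {
        if (value != null) {
            return value.length;
        }
        int sum = 0;
        for (TlvNode child : children) {
            sum += child.encodedLength();
        }
        return sum;
    }

    // Full encoded size of this node: tag octet + length octets + V.
    public int encodedLength() {
        int v = valueLength();
        return 1 + lengthOctets(v) + v;
    }

    // Octets needed to encode a definite length: short form below 128,
    // long form (initial octet plus length bytes) otherwise.
    public static int lengthOctets(int length) {
        if (length < 128) {
            return 1;
        }
        int octets = 0;
        for (int l = length; l > 0; l >>>= 8) {
            octets++;
        }
        return 1 + octets;
    }
}
```

This also makes the memory cost visible: every `value` array and every list node stays live until the root's length is known and serialization can begin.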
+        <p>
+          We must ask ourselves if there is any way to avoid keeping the
+          entire transfer image in memory.  Alan and Alex have discussed
+          using referrals to large binary data rather than keeping the data
+          in memory during codec operation.  A referral would correspond to
+          a channel or a stream used to recall or store the data in
+          question.  This way large binary values can be streamed from or to
+          disk.  Eventually stubs will support these references, although we
+          do not have the mechanism completely defined yet.  If the same
+          reference can be held in place of the V field of a TLV, then we
+          can avoid holding more than 2X the transfer image in memory.  This
+          will not help when PDUs are tiny, with small fields well below the
+          threshold used to gauge when disk streaming is to occur, but it is
+          still one means of keeping the in-memory footprint down when PDUs
+          with large fields are encoded.
+        </p>
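Since the referral mechanism is not yet defined, the following is only one possible shape for it: a hypothetical `ValueRef` whose length is known up front (for the L field) while the bytes themselves are recalled from a stream only at write time. Neither name exists in Snickers:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Hypothetical referral abstraction: the V field of a TLV holds one of
// these instead of a byte array, so large values need not sit in memory.
interface ValueRef {
    int length();                          // known up front for the L field
    InputStream open() throws IOException; // recall the bytes at write time
}

// Small values below the streaming threshold can stay in memory; a
// disk-backed variant would open a channel to a spool file instead.
final class InMemoryValue implements ValueRef {
    private final byte[] bytes;

    InMemoryValue(byte[] bytes) {
        this.bytes = bytes;
    }

    public int length() {
        return bytes.length;
    }

    public InputStream open() {
        return new ByteArrayInputStream(bytes);
    }
}
```

The key property is that `length()` alone is enough to compute the determinate length octets of every enclosing TLV, so the transfer image of a large field is pulled through memory only once, during the final write.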
+        <p>
+        </p>
+      </subsection>
+    </section>
