Message-ID: <20553573.1182885626087.JavaMail.jira@brutus>
Date: Tue, 26 Jun 2007 12:20:26 -0700 (PDT)
From: "Hairong Kuang (JIRA)"
To: hadoop-dev@lucene.apache.org
Subject: [jira] Commented: (HADOOP-1470) Rework FSInputChecker and FSOutputSummer to support checksum code sharing between ChecksumFileSystem and block level crc dfs
In-Reply-To: <7723731.1181157326809.JavaMail.jira@brutus>

    [ https://issues.apache.org/jira/browse/HADOOP-1470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12508293
]

Hairong Kuang commented on HADOOP-1470:
---------------------------------------

> The API does not care, since it always passes in the absolute position: there is no file position. An implementation should optimize for sequential reads, and perhaps also parallel reads, but that's a secondary priority. We may not even have to explicitly optimize for sequential reads in ChecksumFileSystem, since most implementations of seek already implement that optimization.

I think the contract of readChunk should make it clear whether the method changes the file descriptor state, because that determines whether an implementation of pread can use it.

> Rework FSInputChecker and FSOutputSummer to support checksum code sharing between ChecksumFileSystem and block level crc dfs
> ----------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-1470
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1470
>             Project: Hadoop
>          Issue Type: Improvement
>          Components: fs
>    Affects Versions: 0.12.3
>            Reporter: Hairong Kuang
>            Assignee: Hairong Kuang
>             Fix For: 0.14.0
>
>         Attachments: GenericChecksum.patch, genericChecksum.patch, InputChecker-01.java
>
>
> Comment from Doug in HADOOP-1134:
> I'd prefer it if the CRC code could be shared with CheckSumFileSystem. In particular, it seems to me that FSInputChecker and FSOutputSummer could be extended to support pluggable sources and sinks for checksums, respectively, and DFSDataInputStream and DFSDataOutputStream could use these. Advantages of this are: (a) a single implementation of checksum logic to debug and maintain; (b) it keeps checksumming as close as possible to data generation and use. This patch computes checksums after data has been buffered, and validates them before it is buffered. We sometimes use large buffers and would like to guard against in-memory errors.
> The current checksum code catches a lot of such errors. So we should compute checksums after minimal buffering (just bytesPerChecksum, ideally) and validate them at the last possible moment (e.g., through the use of a small final buffer with a larger buffer behind it). I do not think this will significantly affect performance, and data integrity is a high priority.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
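[Editor's note] Doug's point about computing checksums over minimal buffering can be made concrete with a small sketch: one CRC32 value per bytesPerChecksum bytes, computed as data is produced and re-verified as late as possible when the chunk is read back. This is an illustration only; the class and method names (ChunkChecksum, checksums, verifyChunk) are hypothetical and are not the Hadoop API, which uses its own FSInputChecker/FSOutputSummer classes.

```java
import java.util.zip.CRC32;

public class ChunkChecksum {
    // Compute one CRC32 per bytesPerChecksum-sized chunk of data.
    // In the scheme discussed above this would happen as data leaves
    // the writer, before any large-buffer staging could corrupt it.
    static long[] checksums(byte[] data, int bytesPerChecksum) {
        int n = (data.length + bytesPerChecksum - 1) / bytesPerChecksum;
        long[] sums = new long[n];
        for (int i = 0; i < n; i++) {
            int off = i * bytesPerChecksum;
            int len = Math.min(bytesPerChecksum, data.length - off);
            CRC32 crc = new CRC32();
            crc.update(data, off, len);
            sums[i] = crc.getValue();
        }
        return sums;
    }

    // Verify one chunk read back from storage against its stored checksum,
    // at the last possible moment before handing bytes to the caller.
    static boolean verifyChunk(byte[] chunk, int len, long expected) {
        CRC32 crc = new CRC32();
        crc.update(chunk, 0, len);
        return crc.getValue() == expected;
    }

    public static void main(String[] args) {
        byte[] data = "hello, checksummed world".getBytes();
        long[] sums = checksums(data, 8);   // 24 bytes -> 3 chunks
        System.out.println(sums.length);     // prints 3
        System.out.println(verifyChunk("hello, c".getBytes(), 8, sums[0])); // prints true
    }
}
```

A smaller bytesPerChecksum narrows the window in which an in-memory error can slip past undetected, at the cost of more checksum bytes per unit of data; that is the trade-off behind computing checksums after only minimal buffering.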