From: mridulm
To: reviews@spark.apache.org
Subject: [GitHub] spark pull request #17295: [SPARK-19556][core] Do not encrypt block manager ...
Date: Sat, 18 Mar 2017 12:19:16 +0000 (UTC)

Github user mridulm commented on a diff in the pull request:

    https://github.com/apache/spark/pull/17295#discussion_r106778005

    --- Diff: core/src/main/scala/org/apache/spark/storage/BlockManager.scala ---
    @@ -56,6 +57,43 @@ private[spark] class BlockResult(
         val bytes: Long)

     /**
    + * Abstracts away how blocks are stored and provides different ways to read the underlying block
    + * data. The data for a BlockData instance can only be read once, since it may be backed by open
    + * file descriptors that change state as data is read.
    + */
    +private[spark] trait BlockData {
    +
    +  def toInputStream(): InputStream
    +
    +  def toManagedBuffer(): ManagedBuffer
    +
    +  def toByteBuffer(allocator: Int => ByteBuffer): ChunkedByteBuffer
    +
    +  def size: Long
    +
    +  def dispose(): Unit
    +
    +}
    +
    +private[spark] class ByteBufferBlockData(
    +    val buffer: ChunkedByteBuffer,
    +    autoDispose: Boolean = true) extends BlockData {
    +
    +  override def toInputStream(): InputStream = buffer.toInputStream(dispose = autoDispose)
    +
    +  override def toManagedBuffer(): ManagedBuffer = new NettyManagedBuffer(buffer.toNetty)
    +
    +  override def toByteBuffer(allocator: Int => ByteBuffer): ChunkedByteBuffer = {
    +    buffer.copy(allocator)
    +  }
    --- End diff --

    Is autoDispose not honored for toManagedBuffer and toByteBuffer? On first pass, it looks like it is not ... Also, is the expectation that the invoker must manually invoke dispose() when not using toInputStream()? It would be good to add a comment to the BlockData trait detailing this expectation.
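The contract the comment asks to document can be sketched with simplified, hypothetical stand-ins (a plain Array[Byte] instead of Spark's ChunkedByteBuffer, and no ManagedBuffer); this is not the PR's actual code, only an illustration of the asymmetry: the stream accessor is wired to autoDispose, while the buffer accessor leaves disposal to the caller.

```scala
import java.io.{ByteArrayInputStream, InputStream}
import java.nio.ByteBuffer

// Hypothetical, simplified stand-in for the BlockData trait in the diff above.
// Only the dispose() contract under discussion is modeled.
trait SimpleBlockData {
  def toInputStream(): InputStream
  def toByteBuffer(): ByteBuffer
  def size: Long
  def dispose(): Unit
  def disposed: Boolean
}

class SimpleByteBufferBlockData(
    bytes: Array[Byte],
    autoDispose: Boolean = true) extends SimpleBlockData {

  private var _disposed = false

  // Mirrors the PR's stream path: disposal is tied to autoDispose. (The real
  // code disposes as the returned stream is consumed; here it is eager for
  // simplicity.)
  override def toInputStream(): InputStream = {
    if (autoDispose) dispose()
    new ByteArrayInputStream(bytes)
  }

  // Mirrors the buffer path: returns a copy and does NOT dispose. Per the
  // review comment, the caller is expected to call dispose() manually.
  override def toByteBuffer(): ByteBuffer = ByteBuffer.wrap(bytes.clone())

  override def size: Long = bytes.length.toLong
  override def dispose(): Unit = { _disposed = true }
  override def disposed: Boolean = _disposed
}
```

A caller using the non-stream accessor would then pair the read with an explicit dispose, e.g. in a try/finally around its use of the buffer.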