hadoop-hdfs-issues mailing list archives

From "Colin Patrick McCabe (JIRA)" <j...@apache.org>
Subject [jira] [Comment Edited] (HDFS-10175) add per-operation stats to FileSystem.Statistics
Date Tue, 12 Apr 2016 01:41:25 GMT

    [ https://issues.apache.org/jira/browse/HDFS-10175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15236410#comment-15236410 ]

Colin Patrick McCabe edited comment on HDFS-10175 at 4/12/16 1:40 AM:
----------------------------------------------------------------------

Thanks, [~mingma].  I agree that if we are going with a map-based approach, we can simplify
the implementation of HDFS-9579.  This may also save a small amount of space... for example,
if FileSystem threads don't have any bytes read at distance 3 or 4, they will no longer need
to reserve space for {{bytesReadDistanceOfThreeOrFour}}.
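
As a rough sketch of what the map-based layout buys (class and field names below are
illustrative, not taken from the patch), distances that never see a read simply have no
entry:

{code}
// Illustrative sketch only: per-distance byte counters allocated lazily,
// so a distance with no reads costs no memory at all.
import java.util.HashMap;
import java.util.Map;

class PerDistanceBytes {
  // One instance per thread (like StatisticsData), so no locking needed.
  private final Map<Integer, Long> bytesReadByDistance = new HashMap<>();

  void addBytesRead(int distance, long bytes) {
    bytesReadByDistance.merge(distance, bytes, Long::sum);
  }

  long getBytesRead(int distance) {
    // A distance with no entry simply reports zero.
    return bytesReadByDistance.getOrDefault(distance, 0L);
  }
}
{code}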

It also seems like a good idea to make the detailed statistics optional.  Then clients could
opt in to paying the overhead, rather than having it imposed whether they want it or not.
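
For illustration, the opt-in could be as simple as a flag checked on the record path (the
config key below is invented for this sketch, not an actual Hadoop property):

{code}
// Hypothetical opt-in switch for detailed statistics; the key name is invented.
import org.apache.hadoop.conf.Configuration;

class DetailedStatsGuard {
  static final String KEY = "fs.client.detailed.stats.enabled";
  private final boolean enabled;

  DetailedStatsGuard(Configuration conf) {
    // Off by default: clients that want the detail pay for it explicitly.
    this.enabled = conf.getBoolean(KEY, false);
  }

  void recordReadAtDistance(int distance, long bytes) {
    if (!enabled) {
      return;  // a single branch and no allocations when disabled
    }
    // ... update the per-distance counters here ...
  }
}
{code}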

[~liuml07] wrote:
bq. 2) Move long getBytesReadByDistance(int distance) from Statistics to StatisticsData. If
the user is getting bytes read for all distances, she can call getData() and then iterate
the map/array, in which case the getData() will be called only once. For cases of 1K client
threads, this may save the effort of aggregation. Colin Patrick McCabe may have different
comments about this?

Hmm.  I'm not sure I see much of a benefit to this.  Users should not be iterating over internal
data structures.  It exposes implementation details that we don't want to expose.  I also
don't see how it's more efficient, since you have to iterate over it in either case.
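
For context, the two access patterns being compared look roughly like this (assuming the
HDFS-9579-era API, with {{getBytesReadByDistance}} on {{Statistics}}):

{code}
// Each call to getBytesReadByDistance() aggregates across every live
// thread's StatisticsData, so querying N distances means N passes:
FileSystem.Statistics stats =
    FileSystem.getStatistics("hdfs", DistributedFileSystem.class);
long local = stats.getBytesReadByDistance(0);  // full pass over threads
long rack = stats.getBytesReadByDistance(2);   // another full pass

// The alternative aggregates once via getData() and reads the snapshot,
// but the per-thread data gets iterated either way; the difference is
// how many passes, and whether the snapshot type is exposed to callers.
{code}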

bq. For newly supported APIs, adding an entry in the map and one line of increment in the
new method will make the magic done. From the point of file system APIs, its public methods
are not evolving rapidly.

I agree that adding new statistics is relatively easy with the current patch.  My comment
was more that currently, many of these statistics are HDFS-specific.  Since MapReduce needs
to work with alternate filesystems, it will need to handle the case where these detailed statistics
are missing or slightly different.
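
In practice that means a framework consuming these counters has to read them defensively;
something like the following sketch, where the per-operation map is a stand-in for whatever
accessor the patch ends up exposing:

{code}
// Hypothetical defensive read on the framework side.  A LocalFileSystem,
// S3, or other non-HDFS client may legitimately report nothing here.
static long opCountOrZero(java.util.Map<String, Long> opCounters, String op) {
  if (opCounters == null) {
    return 0L;  // this filesystem doesn't track detailed statistics
  }
  return opCounters.getOrDefault(op, 0L);  // a missing counter reads as zero
}
{code}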

bq. Another dimension will be needed for cross-DC analysis, while based on the current use
case, I don't think this dimension is heavily needed. One point is that, all the same kind
of file system is sharing the statistic data among threads, regardless of the remote/local
HDFS clusters.

(revised earlier comment about per-class stats) Yes, it does appear that stats are kept
per FileSystem class rather than per instance.
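
A quick check of what that implies: two {{DistributedFileSystem}} instances, even ones
pointed at different clusters, resolve to the same entry in {{statisticsTable}}, since the
table is keyed by class (hostnames below are illustrative):

{code}
Configuration conf = new Configuration();
FileSystem localFs = FileSystem.newInstance(
    URI.create("hdfs://nn-local:8020/"), conf);
FileSystem remoteFs = FileSystem.newInstance(
    URI.create("hdfs://nn-remote:8020/"), conf);

// Reads and writes from both instances land in this one object:
FileSystem.Statistics shared =
    FileSystem.getStatistics("hdfs", DistributedFileSystem.class);
{code}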


was (Author: cmccabe):
Thanks, [~mingma].  I agree that if we are going with a map-based approach, we can simplify
the implementation of HDFS-9579.  This may also save a small amount of space... for example,
if FileSystem threads don't have any bytes read at distance 3 or 4, they will no longer need
to reserve space for {{bytesReadDistanceOfThreeOrFour}}.

It also seems like a good idea to make the detailed statistics optional.  Then clients could
opt in to paying the overhead, rather than having it imposed whether they want it or not.

[~liuml07] wrote:
bq. 2) Move long getBytesReadByDistance(int distance) from Statistics to StatisticsData. If
the user is getting bytes read for all distances, she can call getData() and then iterate
the map/array, in which case the getData() will be called only once. For cases of 1K client
threads, this may save the effort of aggregation. Colin Patrick McCabe may have different
comments about this?

Hmm.  I'm not sure I see much of a benefit to this.  Users should not be iterating over internal
data structures.  It exposes implementation details that we don't want to expose.  I also
don't see how it's more efficient, since you have to iterate over it in either case.

bq. For newly supported APIs, adding an entry in the map and one line of increment in the
new method will make the magic done. From the point of file system APIs, its public methods
are not evolving rapidly.

I agree that adding new statistics is relatively easy with the current patch.  My comment
was more that currently, many of these statistics are HDFS-specific.  Since MapReduce needs
to work with alternate filesystems, it will need to handle the case where these detailed statistics
are missing or slightly different.

bq. Another dimension will be needed for cross-DC analysis, while based on the current use
case, I don't think this dimension is heavily needed. One point is that, all the same kind
of file system is sharing the statistic data among threads, regardless of the remote/local
HDFS clusters.

It is not the case that "all the same kind of file system is sharing the statistic data among
threads."  Each {{Filesystem}} instance has a separate Statistics object associated with it:

{code}
  /** Recording statistics per a FileSystem class */
  private static final Map<Class<? extends FileSystem>, Statistics>
    statisticsTable =
      new IdentityHashMap<Class<? extends FileSystem>, Statistics>();
{code}

So as long as you use separate FS instances for the remote FS and local FS, it should work
for this use-case.  This is also part of the point I was making earlier: memory overheads
add up quickly, since each new FS instance multiplies the overhead.

> add per-operation stats to FileSystem.Statistics
> ------------------------------------------------
>
>                 Key: HDFS-10175
>                 URL: https://issues.apache.org/jira/browse/HDFS-10175
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: hdfs-client
>            Reporter: Ram Venkatesh
>            Assignee: Mingliang Liu
>         Attachments: HDFS-10175.000.patch, HDFS-10175.001.patch, HDFS-10175.002.patch,
> HDFS-10175.003.patch, HDFS-10175.004.patch, TestStatisticsOverhead.java
>
>
> Currently FileSystem.Statistics exposes the following statistics:
> BytesRead
> BytesWritten
> ReadOps
> LargeReadOps
> WriteOps
> These are in turn exposed as job counters by MapReduce and other frameworks.  There is
> logic within DFSClient to map operations to these counters that can be confusing; for
> instance, mkdirs counts as a writeOp.
> Proposed enhancement:
> Add a statistic for each DFSClient operation, including create, append, createSymlink,
> delete, exists, mkdirs, and rename, and expose them as new properties on the Statistics
> object.  The operation-specific counters can be used for analyzing the load imposed by a
> particular job on HDFS.
> For example, we can use them to identify jobs that end up creating a large number of
> files.
> Once this information is available in the Statistics object, app frameworks like
> MapReduce can expose them as additional counters to be aggregated and recorded as part
> of the job summary.
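
A rough sketch of the kind of per-operation counting being proposed (all names below are
illustrative, not taken from the attached patches):

{code}
// Illustrative per-operation counters; names are invented for this sketch.
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

class OpCounters {
  private final ConcurrentHashMap<String, LongAdder> ops =
      new ConcurrentHashMap<>();

  // e.g. called from DFSClient.mkdirs(): opCounters.increment("mkdirs");
  void increment(String op) {
    ops.computeIfAbsent(op, k -> new LongAdder()).increment();
  }

  long get(String op) {
    LongAdder a = ops.get(op);
    return a == null ? 0L : a.sum();
  }
}
{code}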



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
