hadoop-hdfs-issues mailing list archives

From "shenxianqiang (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-10695) When using NN Federation, DirectoryScanner throws IllegalStateException
Date Wed, 27 Jul 2016 03:42:20 GMT

    [ https://issues.apache.org/jira/browse/HDFS-10695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15395005#comment-15395005
] 

shenxianqiang commented on HDFS-10695:
--------------------------------------

Why not just start the timers ahead of the for loop, like this:
{quote}
      perfTimer.start();
      throttleTimer.start();

      for (String bpid : bpList) {
        LinkedList<ScanInfo> report = new LinkedList<>();
        File bpFinalizedDir = volume.getFinalizedDir(bpid);
{quote}
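For illustration only, a minimal self-contained sketch of the suggested ordering, using a simplified stand-in for org.apache.hadoop.util.StopWatch (not the real class): the timers are started once, before the block-pool loop, so no second start() ever occurs.

```java
import java.util.Arrays;
import java.util.List;

public class TimerPlacementDemo {
    // Simplified stand-in for org.apache.hadoop.util.StopWatch:
    // start() throws if the watch is already running, as in the report.
    static class StopWatch {
        private boolean isStarted;
        StopWatch start() {
            if (isStarted) {
                throw new IllegalStateException("StopWatch is already running");
            }
            isStarted = true;
            return this;
        }
    }

    // Starts the timers once, before the loop, then visits every block pool.
    // Returns how many pools were visited (all of them, with no exception).
    static int scanAll(List<String> bpList) {
        StopWatch perfTimer = new StopWatch();
        StopWatch throttleTimer = new StopWatch();
        perfTimer.start();
        throttleTimer.start();
        int scanned = 0;
        for (String bpid : bpList) {
            scanned++; // placeholder for the per-pool directory scan
        }
        return scanned;
    }

    public static void main(String[] args) {
        // Two block pools, as under NN federation.
        System.out.println(scanAll(Arrays.asList("BP-1", "BP-2")));
    }
}
```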

> When using NN Federation, DirectoryScanner throws IllegalStateException
> -----------------------------------------------------------------------
>
>                 Key: HDFS-10695
>                 URL: https://issues.apache.org/jira/browse/HDFS-10695
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 2.6.0
>         Environment: NN federation
> CDH 5.7.0
>            Reporter: shenxianqiang
>
> DataNode DirectoryScanner always throws IllegalStateException:
> {quote}
> 2016-07-27 10:31:58,771 ERROR org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: Error compiling report
> java.util.concurrent.ExecutionException: java.lang.IllegalStateException: StopWatch is already running
>         at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>         at java.util.concurrent.FutureTask.get(FutureTask.java:188)
>         at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.getDiskReport(DirectoryScanner.java:731)
>         at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.scan(DirectoryScanner.java:581)
>         at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.reconcile(DirectoryScanner.java:562)
>         at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.run(DirectoryScanner.java:507)
>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>         at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:745)
> Caused by: java.lang.IllegalStateException: StopWatch is already running
>         at org.apache.hadoop.util.StopWatch.start(StopWatch.java:49)
>         at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner$ReportCompiler.call(DirectoryScanner.java:812)
>         at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner$ReportCompiler.call(DirectoryScanner.java:778)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>         ... 3 more
> 2016-07-27 10:31:58,773 ERROR org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: Exception during DirectoryScanner execution - will continue next cycle
> java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.IllegalStateException: StopWatch is already running
>         at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.getDiskReport(DirectoryScanner.java:741)
>         at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.scan(DirectoryScanner.java:581)
>         at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.reconcile(DirectoryScanner.java:562)
>         at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.run(DirectoryScanner.java:507)
>         at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>         at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:304)
>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:178)
>         at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:745)
> Caused by: java.util.concurrent.ExecutionException: java.lang.IllegalStateException: StopWatch is already running
>         at java.util.concurrent.FutureTask.report(FutureTask.java:122)
>         at java.util.concurrent.FutureTask.get(FutureTask.java:188)
>         at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner.getDiskReport(DirectoryScanner.java:731)
>         ... 10 more
> Caused by: java.lang.IllegalStateException: StopWatch is already running
>         at org.apache.hadoop.util.StopWatch.start(StopWatch.java:49)
>         at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner$ReportCompiler.call(DirectoryScanner.java:812)
>         at org.apache.hadoop.hdfs.server.datanode.DirectoryScanner$ReportCompiler.call(DirectoryScanner.java:778)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:262)
>         ... 3 more
> {quote}
> In DirectoryScanner.java,
> {quote}
>      for (String bpid : bpList) {
>         LinkedList<ScanInfo> report = new LinkedList<>();
>         File bpFinalizedDir = volume.getFinalizedDir(bpid);
>         perfTimer.start();
>         throttleTimer.start();
> {quote}
> Because federation is in use, the size of bpList is greater than 1, so perfTimer.start() will always throw IllegalStateException.
> In StopWatch.java
> {quote}
>   public StopWatch start() {
>     if (isStarted)
>       throw new IllegalStateException("StopWatch is already running");
> {quote}
> DirectoryScanner cannot complete, so if there is a corrupted file, the DataNode cannot report it. The consequences are very serious.
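The reported failure mode can be reproduced outside Hadoop with the same simplified semantics; the class below is a hypothetical stand-in, not the real StopWatch. With a single block pool the per-iteration start() never fails; with two pools, as under federation, the second iteration throws:

```java
public class StopWatchDoubleStartDemo {
    // Hypothetical stand-in mirroring the guard quoted above from StopWatch.start().
    static class StopWatch {
        private boolean isStarted;
        StopWatch start() {
            if (isStarted) {
                throw new IllegalStateException("StopWatch is already running");
            }
            isStarted = true;
            return this;
        }
    }

    // Calls start() once per iteration, as the quoted loop in DirectoryScanner does.
    // Returns the iteration index at which start() first throws, or -1 if it never does.
    static int firstFailure(int poolCount) {
        StopWatch perfTimer = new StopWatch();
        for (int i = 0; i < poolCount; i++) {
            try {
                perfTimer.start();
            } catch (IllegalStateException e) {
                return i;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        System.out.println(firstFailure(1)); // single block pool: no failure (-1)
        System.out.println(firstFailure(2)); // federation with two pools: fails at index 1
    }
}
```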



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org

