accumulo-user mailing list archives

From Nick Wise <Nicholas.W...@sa.catapult.org.uk>
Subject RE: accumulo.metadata table online but scans hang
Date Thu, 31 Aug 2017 12:17:13 GMT
Thank you Dave.  I will try the touchz approach and see what happens.
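For reference, the sort of command I have in mind is below (the path is just one of the WALs named in our master log; we would repeat it for each of the 1101 missing files):

./hdfs dfs -touchz hdfs://master01:9000/user/accumulo/accumulo/wal/node08+9997/fed84709-3d3b-45b0-8b77-020a71762b09

touchz just creates a zero-length file, so whether the recovery code accepts that is exactly the open question from my earlier mail.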



From: Dave Marion [mailto:dlmarion@comcast.net]
Sent: 31 August 2017 13:12
To: user@accumulo.apache.org; Nick Wise <Nicholas.Wise@sa.catapult.org.uk>; dev@accumulo.apache.org
Subject: RE: accumulo.metadata table online but scans hang

I don't have any other options for you at this point. It seems like you have the necessary information
to fix up the missing files and recover the system. You might be able to determine the timestamp
of the first missing WAL file and replay data from that point.
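As a rough sketch of what I mean (the log file name is illustrative, and this assumes every missing WAL appears in a recovery message in the master log):

grep 'Starting recovery of' <master log file> | sort | head -1

Since the WAL files themselves are gone, the earliest log line referencing one of them is probably your best proxy for the point in time to replay from.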

> On August 31, 2017 at 5:51 AM Nick Wise <Nicholas.Wise@sa.catapult.org.uk<mailto:Nicholas.Wise@sa.catapult.org.uk>>
wrote:
>
>
>
> It does run as the accumulo user, but sadly still no trash. I'm told that this is probably because we move a lot of files in and out of HDFS for ingestion, and it's a space-saving thing. In hindsight I'd rather have bought more disks than have lost important files!
>
> I can't find any reference to deleting the WAL files in the gc logs, though I do see lots of lines like this around the time that things went wrong:
>
> 2017-08-28 19:51:02,210 [gc.GarbageCollectWriteAheadLogs] INFO : Checking replication
table for hdfs://master01:9000/user/accumulo/accumulo/wal/node08+9997/0007bea2-bf57-44ab-b2ca-a8a924c6b3c8
>
> And after logs like the above for all of the referenced WAL files, lines like this:
>
> 2017-08-28 19:51:05,775 [gc.GarbageCollectWriteAheadLogs] INFO : 1822 replication entries
scanned in 6.83 seconds
> 2017-08-28 19:51:05,780 [gc.GarbageCollectWriteAheadLogs] INFO : 0 total logs removed
from 1 servers in 6.83 seconds
> 2017-08-28 19:53:05,971 [impl.ThriftTransportPool] WARN : Thread "gc" stuck on IO to
master02:9999 (0) for at least 120022 ms
>
> The only gc logging that happens after this point is startup property printouts; no further operational log messages come out.
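> (In case it helps, the checking was roughly the following, with the log file names being illustrative - grepping the gc logs for individual WAL UUIDs taken from the master log, and for anything mentioning removal or deletion:
> grep '0007bea2-bf57-44ab-b2ca-a8a924c6b3c8' gc_*.log
> grep -i 'remov\|delet' gc_*.log
> Neither shows the missing files being cleaned up.)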
>
> When I try to create an empty table, the command just hangs. I assume this is because the metadata table is not working.
>
> root@instance> createtable emptytable
> 2017-08-31 10:43:57,315 [impl.ThriftTransportPool] WARN : Thread "shell" stuck on IO
to master01:9999 (0) for at least 120029 ms
>
> Are tables instance-specific, or is it possible to get an empty rfile from any instance? I don’t suppose someone has such a file I could have...?!
>
> I believe it was Ivan who recommended using touchz to create an empty file in place of the missing WALs. Given the procedure for creating an empty table, I’m assuming this isn’t the right thing to do, so I will hold off unless someone can confirm it is good enough.
>
> Thank you again for your help!
>
>
>
>
>
> -----Original Message-----
> From: dlmarion@comcast.net<mailto:dlmarion@comcast.net> [mailto:dlmarion@comcast.net]
> Sent: 31 August 2017 00:07
> To: user@accumulo.apache.org<mailto:user@accumulo.apache.org>; dev@accumulo.apache.org<mailto:dev@accumulo.apache.org>
> Subject: RE: accumulo.metadata table online but scans hang
>
> Re #2: Do your Accumulo processes run as the hdfs user on the O/S, or as the accumulo user? Make sure you are checking the correct user's trash folder. Also, check the Accumulo garbage collector log to see if the GC process deleted the WAL files. Take a look at [1] to see if you are hitting this case.
>
> You can create empty rfiles and copy them into place. I believe the procedure to do this
is to create an empty table and run a compaction on the table. Then you should be able to
copy the resulting file into the desired locations (devs - please correct me here if this
is not correct).
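> Something along these lines, as a sketch only (the table name, table id and paths are illustrative, and where the compacted rfile lands will depend on your configuration):
> root@instance> createtable emptytable
> root@instance> compact -t emptytable -w
> ./hdfs dfs -ls /user/accumulo/accumulo/tables/<table id>/default_tablet/
> ./hdfs dfs -cp /user/accumulo/accumulo/tables/<table id>/default_tablet/<rfile> <path of missing file>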
>
> Finally, I would not do anything destructive yet. Let's see if we can get some other
devs to chime in with some ideas.
>
> [1] https://issues.apache.org/jira/browse/ACCUMULO-4157
>
>
> -----Original Message-----
> From: Nick Wise [mailto:Nicholas.Wise@sa.catapult.org.uk]
> Sent: Wednesday, August 30, 2017 5:36 PM
> To: user@accumulo.apache.org<mailto:user@accumulo.apache.org>
> Subject: RE: accumulo.metadata table online but scans hang
>
>
> Thank you very much for the pointers Dave. Looking at those:
>
> 1. That seems reasonable; I’m not sure how to check after the fact, but it makes sense.
> 2. Ah. Looks like we don’t have trash enabled; there’s no /user/hdfs/.Trash folder that I can see. I’m getting a sinking feeling…
> 3. I had to allocate 4G, but that worked and now I have a folder listing of 758k files. I’ve cross-referenced it with the 1101 WAL files referred to in our logs and not a single one exists (rough commands below). Sinking some more.
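> (The cross-referencing was roughly as follows; the master log file name is illustrative, and this assumes every referenced WAL shows up in a recovery message:
> grep -o 'wal/node08+9997/[0-9a-f-]*' <master log file> | sort -u > referenced_wals.txt
> ./hdfs dfs -ls hdfs://master01:9000/user/accumulo/accumulo/wal/node08+9997/ | awk '{print $NF}' > existing_wals.txt
> grep -F -f referenced_wals.txt existing_wals.txt
> The last command returns nothing, i.e. none of the referenced files are present.)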
>
> So it sounds like (speaking from a position of ignorance) we have a system where accumulo.metadata has outstanding WAL files to recover, but those files don't exist; the only way to restore the system is to convince the metadata table that it doesn't need the WAL files, yet to edit the metadata table we first have to resolve the outstanding WAL files, and so on.
>
> What would happen if we created an empty file in place of the missing WAL files? Would
they be considered to be an invalid format and break things more (I'm not sure that's possible),
or might they be accepted as needing no further resolution?
>
> Any other thoughts (anyone) on how we might save ourselves, besides starting from scratch?
(When we first loaded our 16TB of data it took 6 weeks using the map/reduce method!)
>
> Thank you again!
>
> Nick
>
>
>
> From: Dave Marion [mailto:dlmarion@comcast.net]
> Sent: 30 August 2017 20:13
> To: user@accumulo.apache.org<mailto:user@accumulo.apache.org>; Nick Wise <Nicholas.Wise@sa.catapult.org.uk<mailto:Nicholas.Wise@sa.catapult.org.uk>>
> Subject: Re: accumulo.metadata table online but scans hang
>
> Some immediate thoughts:
>
> 1. Regarding node08 having so many files, maybe it was the last DN that had free space?
> 2. Look in the trash folder for the missing referenced WAL files.
> 3. For your OOME using the HDFS CLI, I think you can increase the amount of memory that the client will use with: export HADOOP_CLIENT_OPTS="-Xmx1G" (or something like that). A sketch of both is just below.
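> For example (a sketch only - bump the heap value up if 1G isn't enough for a listing of that size):
> export HADOOP_CLIENT_OPTS="-Xmx1g"
> ./hdfs dfs -ls hdfs://master01:9000/user/accumulo/accumulo/wal/node08+9997/
> And for #2, assuming trash is enabled and Accumulo runs as the accumulo user, deleted WALs would land somewhere under:
> ./hdfs dfs -ls /user/accumulo/.Trash/Current/user/accumulo/accumulo/wal/node08+9997/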
>
> Still digesting the rest....
>
>
> On August 30, 2017 at 2:45 PM Nick Wise <mailto:Nicholas.Wise@sa.catapult.org.uk>
wrote:
>
> Disclaimer: I don’t have much experience with Accumulo or Hadoop; I’m standing in because our resident expert is away on honeymoon! We’ve done a great deal of reading and do not know if our situation is recoverable, so any and all advice would be very welcome.
>
> Background:
> We are running:
> (a) Accumulo version: 1.7.0
> (b) Hadoop version: 2.7.1
> (c) Geomesa version: 1.2.1
> We have 31 nodes, 2 masters and 3 zookeepers (obviously named in the log excerpts below). Nodes are both data nodes and tablet servers; masters are also name nodes. Nodes have 16GB RAM, Intel Core i5 dual-core CPUs, and 500GB or 1TB of SSD each.
> This is a production deployment where we are analysing 16TB (and growing) of geospatial data, with the outcomes being used daily. We have customers relying on our results.
>
> Initial Issue:
> The non-DFS storage used in our HDFS system was falsely reporting that it was using all of the free space we had available, resulting in HDFS rejecting writes from a variety of places across our cluster. After research it appeared that this might be the result of a bug, and that restarting the HDFS services would resolve it. After restarting the HDFS services the non-DFS storage used immediately returned to expected levels, but Accumulo didn’t seem to be responding to queries, so we ran stop-all.sh and start-all.sh. When running stop-all.sh it timed out trying to contact the master and did a forced shutdown.
>
> After restarting, Accumulo listed all the tables as being online (except for accumulo.replication, which is offline), but none of the tables have their tablets associated except for:
> (a) accumulo.metadata
> (b) accumulo.root
> All Geomesa tables are showing as online, though their tablets, table sizes and record counts are not showing in the web UI.
>
> In the logs (which are very large) there is a range of issues showing; the following seem important based on our Googling.
>
> Log excerpts:
> 2017-08-30 14:45:06,195 [master.EventCoordinator] INFO : Marked 1 tablets as unassigned
because they don't have current servers
> 2017-08-30 14:45:06,195 [master.EventCoordinator] INFO : [Metadata Tablets]: 1 tablets
are ASSIGNED_TO_DEAD_SERVER
> 2017-08-30 14:45:13,425 [master.Master] INFO : Assigning 1 tablets
> 2017-08-30 14:45:13,441 [master.EventCoordinator] INFO : [Metadata Tablets]: 1 tablets
are UNASSIGNED
> 2017-08-30 14:45:13,975 [master.EventCoordinator] INFO : tablet !0<;~ was loaded on
node03:9997
>
> An Accumulo metadata node is offline. In the accumulo master log file we see that there are 1101 WALs associated with one node (node08) that are linked to tablet !0<~. Below are two instances of the message we get in the logs, which repeat over and over, with 1101 of them per repeat. We’re not sure why there are 1101 WALs for the one node, but we assume this is the main cause of our problem.
>
> 2017-08-30 15:20:29,094 [conf.AccumuloConfiguration] INFO : Loaded class : org.apache.accumulo.server.master.recovery.HadoopLogCloser
> 2017-08-30 15:20:29,094 [recovery.RecoveryManager] INFO : Starting recovery of hdfs://master01:9000/user/accumulo/accumulo/wal/node08+9997/fed84709-3d3b-45b0-8b77-020a71762b09
(in : 300s), tablet !0;~< holds a reference
> 2017-08-30 15:20:29,142 [conf.AccumuloConfiguration] INFO : Loaded class : org.apache.accumulo.server.master.recovery.HadoopLogCloser
> 2017-08-30 15:20:29,142 [recovery.RecoveryManager] INFO : Starting recovery of hdfs://master01:9000/user/accumulo/accumulo/wal/node08+9997/ffc115dd-f094-443f-a98f-8e670fb2a924
(in : 300s), tablet !0;~< holds a reference
> 2017-08-30 15:20:45,457 [replication.WorkMaker] INFO : Replication table is not yet online
>
> Any query of the metadata table hangs, including those recommended here: https://accumulo.apache.org/1.7/accumulo_user_manual.html#_advanced_system_recovery
> We are assuming that the above inability to recover the WALs is preventing use of the
metadata table, even though it reports as being online.
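> For illustration, even a minimal scan of the metadata table's log entries (we understand 'log' to be the column family holding the WAL references) hangs in the same way:
> root@instance> scan -t accumulo.metadata -c log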
>
> Running:
> (a)
> ./hdfs dfs -du -s -h hdfs://master01:9000/user/accumulo/accumulo/wal/node08+9997/ returns:
> 1.1 G hdfs://master01:9000/user/accumulo/accumulo/wal/node08+9997
>
> (b)
> ./hdfs dfs -count -h hdfs://master01:9000/user/accumulo/accumulo/wal/node08+9997/ returns:
> 1 785.1 K 1.1 G hdfs://master01:9000/user/accumulo/accumulo/wal/node08+9997
>
> (c)
> ./hdfs dfs -ls hdfs://master01:9000/user/accumulo/accumulo/wal/node08+9997/ returns:
> Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
> at java.lang.String.substring(String.java:1969)
> at java.net.URI$Parser.substring(URI.java:2869)
> at java.net.URI$Parser.parse(URI.java:3065)
> at java.net.URI.<init>(URI.java:746)
> at org.apache.hadoop.fs.Path.<init>(Path.java:108)
> at org.apache.hadoop.fs.Path.<init>(Path.java:93)
> at org.apache.hadoop.hdfs.protocol.HdfsFileStatus.getFullPath(HdfsFileStatus.java:230)
> at org.apache.hadoop.hdfs.protocol.HdfsFileStatus.makeQualified(HdfsFileStatus.java:263)
> at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:830)
> at org.apache.hadoop.hdfs.DistributedFileSystem.access$700(DistributedFileSystem.java:106)
> at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:853)
> at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:849)
> at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:849)
> at org.apache.hadoop.fs.shell.PathData.getDirectoryContents(PathData.java:268)
> at org.apache.hadoop.fs.shell.Command.recursePath(Command.java:373)
> at org.apache.hadoop.fs.shell.Ls.processPathArgument(Ls.java:90)
> at org.apache.hadoop.fs.shell.Command.processArgument(Command.java:271)
> at org.apache.hadoop.fs.shell.Command.processArguments(Command.java:255)
> at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:201)
> at org.apache.hadoop.fs.shell.Command.run(Command.java:165)
> at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
> at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
> at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
> (d)
> We have validated that the file permissions on the Accumulo tables are correct.
>
> We don’t understand why each of the 31 nodes that have WALs under hdfs://master01:9000/user/accumulo/accumulo/wal/ has only a single WAL file within, yet for node08 there are 785,100 files. Also, for a random sample of the 1101 WAL files mentioned in the logs referred to above, none of them seem to be in the folder (hdfs dfs -ls reports file not found for all of the files we tried).
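> (The sampling was along these lines, reading WAL paths taken from the master log - the file name is illustrative:
> while read wal; do ./hdfs dfs -test -e "$wal" || echo "missing: $wal"; done < sample_wals.txt
> Every path we tried came back as missing.)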
>
> Judging from the notes under “Advanced System Recovery” in the manual, we’re stuck: it suggests editing the metadata table to drop the WALs and, accepting some data loss, getting the system back online. As the problems appear to be with the metadata table itself, and HDFS is reporting healthy with no corruption, we don’t see how to proceed.
>
> We have many large log files, which I’m happy to email separately if it helps.
>
> Any suggestions as to what we might do to get back online?
>
> Thank you very much,
>
> Nick
>
>
This email (and any attachments) may contain confidential information and is intended solely
for the recipient(s) to whom the email is addressed. If you received this email in error,
please inform us immediately and delete the email and all attachments without further using,
copying or disclosing the information. This email and any attachments are believed to be,
but cannot be guaranteed to be, secure or virus-free. Satellite Applications Catapult Limited
is registered in England & Wales. Company Number: 7964746. Registered office: Electron
Building, Fermi Avenue, Harwell Oxford, Didcot, Oxfordshire OX11 0QR.