hbase-user mailing list archives

From Ted Yu <yuzhih...@gmail.com>
Subject Re: Difference between ResultScanner and initTableMapperJob
Date Tue, 11 Jul 2017 20:31:02 GMT
Can you take a look at the server log on hslave35150.ams9.mydomain.com
around 17/07/07 20:23:31?

See if there is some clue in the log.

On Tue, Jul 11, 2017 at 12:18 PM, S L <slouie.at.work@gmail.com> wrote:

> In case I forgot to say, the keys that the log shows as causing the
> RetriesExhaustedException should be deleted/gone from the table because
> their TTL has been exceeded.
>
> Fri Jul 07 20:23:26 PDT 2017, null, java.net.SocketTimeoutException:
> callTimeout=40000, callDuration=40303: row
> '41_db160190.iad3.mydomain.com_1486067940' on table 'server_based_data' at
> region=server_based_data,41_db160190.iad3.mydomain.com_1486067940,1487094006943.f67c3b9836107bdbe6a533e2829c509a.,
> hostname=hslave35150.ams9.mydomain.com,60020,1483579082784, seqNum=5423139
>
> The timestamp here is from Feb 2, 2017.  My TTL is 30 days.  Since I ran
> the job on July 7, 2017, Feb 2017 is way past the 30-day TTL.
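>
> For reference, here is a minimal sketch of checking whether a rowkey's
> epoch-seconds suffix is older than the 30-day TTL (the class name is only
> illustrative, and the key is just the one from the error above):
>
>     public class TtlCheck {
>         public static void main(String[] args) {
>             // Rowkey format described above: hash_name_timestamp (epoch seconds)
>             String rowKey = "41_db160190.iad3.mydomain.com_1486067940";
>             long rowEpochSeconds =
>                 Long.parseLong(rowKey.substring(rowKey.lastIndexOf('_') + 1));
>             long ttlSeconds = 30L * 24 * 60 * 60;  // 2592000, same as the table TTL
>             long nowSeconds = System.currentTimeMillis() / 1000L;
>             // Prints true for a Feb 2017 key when run in Jul 2017
>             System.out.println(rowKey + " older than TTL? "
>                 + ((nowSeconds - rowEpochSeconds) > ttlSeconds));
>         }
>     }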
>
> describe 'server_based_data'
>
> Table server_based_data is ENABLED
>
> server_based_data
>
> COLUMN FAMILIES DESCRIPTION
>
> {NAME => 'raw_data', DATA_BLOCK_ENCODING => 'NONE', BLOOMFILTER => 'ROW',
> REPLICATION_SCOPE => '0', VERSIONS => '1', COMPRESSION => 'SNAPPY',
> MIN_VERSIONS => '0', TTL => '2592000 SECONDS (30 DAYS)',
> KEEP_DELETED_CELLS => 'FALSE', BLOCKSIZE => '65536', IN_MEMORY => 'false',
> BLOCKCACHE => 'true'}
>
> 1 row(s) in 0.5180 seconds
>
> On Tue, Jul 11, 2017 at 12:11 PM, S L <slouie.at.work@gmail.com> wrote:
>
> > Sorry for not being clear.  I tried with both versions: first 1.0.1, then
> > 1.2.0-cdh5.7.2.  We are currently running on Cloudera 5.7.2, which is why I
> > used that version of the jar.
> >
> > I had set the timeout to be as short as 30 sec and as long as 2 min, but I
> > was still running into the problem.  When setting the timeout to 2 min, the
> > job took almost 50 min to "complete".  Complete is in quotes because it
> > fails (see pastebin below).
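> >
> > For reference, a minimal sketch of one way to raise those timeouts on the
> > client Configuration (these are the standard HBase client keys; the values
> > are only examples, not necessarily the exact ones I used):
> >
> >     Configuration conf = HBaseConfiguration.create();
> >     // Per-RPC timeout (ms)
> >     conf.setInt("hbase.rpc.timeout", 120000);
> >     // How long the client waits on a scanner call before giving up (ms)
> >     conf.setInt("hbase.client.scanner.timeout.period", 120000);
> >     // Overall budget for non-scan operations, including retries (ms)
> >     conf.setInt("hbase.client.operation.timeout", 300000);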
> >
> > Here's a copy of the Hadoop output logs via pastebin.  The log is 11000
> > lines, so I just pasted up to the first couple of exceptions and then pasted
> > the end, where it jumps from 80% map to 100% and from 21% reduce to 100%
> > because YARN or something killed it.
> >
> > https://pastebin.com/KwriyPn6
> > http://imgur.com/a/ouPZ5 - screenshot of the failed MapReduce job from
> > Cloudera Manager/YARN
> >
> >
> >
> > On Mon, Jul 10, 2017 at 8:50 PM, Ted Yu <yuzhihong@gmail.com> wrote:
> >
> >> bq. for hbase-client/hbase-server version 1.0.1 and 1.2.0-cdh5.7.2.
> >>
> >> You mean the error occurred for both versions, or that the client is on 1.0.1
> >> and the server is on 1.2.0?
> >>
> >> There should be more to the RetriesExhaustedException.
> >> Can you pastebin the full stack trace?
> >>
> >> Cheers
> >>
> >> On Mon, Jul 10, 2017 at 2:21 PM, S L <slouie.at.work@gmail.com> wrote:
> >>
> >> > I hope someone can tell me what the difference between these two API
> >> > calls is.  I'm getting weird results between the two of them.  This is
> >> > happening for hbase-client/hbase-server versions 1.0.1 and 1.2.0-cdh5.7.2.
> >> >
> >> > First off, my rowkeys are in the format hash_name_timestamp,
> >> > e.g. 100_servername_1234567890.  The hbase table has a TTL of 30 days,
> >> > so things older than 30 days should disappear after compaction.
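> >> >
> >> > For illustration, a rowkey in that format would be written roughly like
> >> > this (the salting and the variable names here are hypothetical, not the
> >> > actual ingest code):
> >> >
> >> >     int salt = Math.abs(serverName.hashCode()) % 1000;  // hypothetical salt
> >> >     long epochSeconds = System.currentTimeMillis() / 1000L;
> >> >     byte[] rowKey = Bytes.toBytes(salt + "_" + serverName + "_" + epochSeconds);
> >> >     Put put = new Put(rowKey);
> >> >     put.addColumn(Bytes.toBytes("raw_data"), Bytes.toBytes(fileType), value);
> >> >     table.put(put);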
> >> >
> >> > The following is code for using ResultScanner.  It doesn't use MapReduce,
> >> > so it takes a very long time to complete.  I can't run my job this way
> >> > because it takes too long.  However, for debugging purposes, I don't have
> >> > any problems with this method.  It lists all keys for the specified time
> >> > range, which look valid to me since all the timestamps of the returned
> >> > keys are within the past 30 days and within the specified time range:
> >> >
> >> >     Scan scan = new Scan();
> >> >     scan.addColumn(Bytes.toBytes("raw_data"), Bytes.toBytes(fileType));
> >> >     scan.setCaching(500);
> >> >     scan.setCacheBlocks(false);
> >> >     scan.setTimeRange(start, end);
> >> >
> >> >     Connection fConnection = ConnectionFactory.createConnection(conf);
> >> >     Table table = fConnection.getTable(TableName.valueOf(tableName));
> >> >     ResultScanner scanner = table.getScanner(scan);
> >> >     for (Result result = scanner.next(); result != null; result = scanner.next()) {
> >> >         System.out.println("Found row: " + Bytes.toString(result.getRow()));
> >> >     }
> >> >
> >> >
> >> > The following code doesn't work, but it uses MapReduce, which runs way
> >> > faster than the ResultScanner way since it divides things up into 1200
> >> > maps.  The problem is I'm getting rowkeys that should have disappeared
> >> > due to the TTL expiring:
> >> >
> >> >     Scan scan = new Scan();
> >> >     scan.addColumn(Bytes.toBytes("raw_data"), Bytes.toBytes(fileType));
> >> >     scan.setCaching(500);
> >> >     scan.setCacheBlocks(false);
> >> >     scan.setTimeRange(start, end);
> >> >     TableMapReduceUtil.initTableMapperJob(tableName, scan, MTTRMapper.class,
> >> >         Text.class, IntWritable.class, job);
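> >> >
> >> > For reference, MTTRMapper presumably extends TableMapper, since
> >> > initTableMapperJob requires a TableMapper subclass; a minimal skeleton
> >> > (the real mapper body isn't shown in this thread) looks like:
> >> >
> >> >     public class MTTRMapper extends TableMapper<Text, IntWritable> {
> >> >         private static final IntWritable ONE = new IntWritable(1);
> >> >
> >> >         @Override
> >> >         protected void map(ImmutableBytesWritable rowKey, Result result,
> >> >                 Context context) throws IOException, InterruptedException {
> >> >             // Placeholder: emit the rowkey (hash_name_timestamp) with a count of 1
> >> >             context.write(new Text(Bytes.toString(rowKey.get())), ONE);
> >> >         }
> >> >     }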
> >> >
> >> > Here is the error that I get, which eventually kills the whole MR job
> >> > later because over 25% of the mappers failed.
> >> >
> >> > > Error: org.apache.hadoop.hbase.client.RetriesExhaustedException:
> >> > > Failed after attempts=36, exceptions: Wed Jun 28 13:46:57 PDT 2017,
> >> > > null, java.net.SocketTimeoutException: callTimeout=120000,
> >> > > callDuration=120301: row '65_app129041.iad1.mydomain.com_1476641940'
> >> > > on table 'server_based_data' at region=server_based_data
> >> >
> >> > I'll try to study the code for the hbase-client and hbase-server jars,
> >> > but hopefully someone will know offhand what the difference between the
> >> > methods is and what is causing the initTableMapperJob call to fail.
> >> >
> >>
> >
> >
>
