incubator-accumulo-user mailing list archives

From John W Vines <>
Subject Re: No Such SessionID reported on tablet server
Date Thu, 20 Oct 2011 20:23:48 GMT

----- Original Message -----
| From: "Keith Massey" <>
| To:
| Sent: Thursday, October 20, 2011 4:06:08 PM
| Subject: No Such SessionID reported on tablet server
|
| We are loading data into cloudbase 1.3.2 using from a map/reduce job. We use
| one BatchWriter per table. The data appears to go in fine -- no exceptions are
| reported in the map/reduce job. And most of the data does appear to be there.
| But some of it (maybe 1% if I had to guess) is not there in our cloudbase
| tables. The only errors we have seen anywhere are in the tserver logs. They
| look like this:
|
| 20 16:26:21,529 [server.TThreadPoolServer] ERROR: Error occurred during
| processing of message.
| java.lang.RuntimeException: No Such SessionID
|     at cloudbase.server.tabletserver.TabletServer$ThriftClientHandler.applyUpdate(
|     at sun.reflect.GeneratedMethodAccessor6.invoke(Unknown Source)
|     at sun.reflect.DelegatingMethodAccessorImpl.invoke(
|     at java.lang.reflect.Method.invoke(
|     at cloudtrace.instrument.thrift.TraceWrap$1.invoke(
|     at $Proxy1.applyUpdate(Unknown Source)
|     at cloudbase.core.tabletserver.thrift.TabletClientService$Processor$applyUpdate.process(
|     at cloudbase.core.tabletserver.thrift.TabletClientService$Processor.process(
|     at cloudbase.server.util.TServerUtils$TimedProcessor.process(
|     at org.apache.thrift.server.TThreadPoolServer$
|     at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(
|     at java.util.concurrent.ThreadPoolExecutor$
|     at
|
| These don't seem to make it back to the client side though. And I don't
| believe I control any kind of session id. Any ideas what I can do? Thanks.
|
| Keith

So I'm under the impression you are writing to multiple tables in your reducer (or mapper
with no reduce process). If you do not close your BatchWriters, any data still sitting in
their buffers will never be flushed. Make sure you call close() on them before the process
finishes so that all of your data goes through.
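To make the close() point concrete, here is a minimal sketch of the pattern. The class and method names are illustrative stubs, not the real cloudbase/Accumulo API; the stub writer only counts mutations, but it has the same contract that matters here: anything still buffered when the process exits without close() is silently lost.

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for a BatchWriter: buffers mutations and only "sends" them on close().
class SketchBatchWriter {
    private int buffered = 0;
    int sent = 0;

    void addMutation(String row) { buffered++; }

    // Flushes anything still buffered, then shuts the writer down.
    void close() {
        sent += buffered;
        buffered = 0;
    }
}

public class CleanupSketch {
    // One writer per table, as in the original question.
    static Map<String, SketchBatchWriter> writers = new HashMap<>();

    static SketchBatchWriter writerFor(String table) {
        return writers.computeIfAbsent(table, t -> new SketchBatchWriter());
    }

    // In a real Hadoop job this loop belongs in Mapper/Reducer cleanup(Context).
    static void cleanup() {
        for (SketchBatchWriter w : writers.values()) {
            w.close();   // without this, the tail of the data never arrives
        }
    }

    public static void main(String[] args) {
        writerFor("tableA").addMutation("row1");
        writerFor("tableB").addMutation("row2");
        cleanup();
        System.out.println(writers.get("tableA").sent + writers.get("tableB").sent);
    }
}
```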

On a similar note, if you're ingesting into multiple tables, I highly recommend you use a
MultiTableBatchWriter and get the per-table BatchWriters you need from it. It's more efficient
in the way it sends data to the tservers, and that way you only have to worry about a single
close() call. If you're only ingesting into a single table, you may want to consider one of
our OutputFormats, as they handle the closing and whatnot involved so you don't have to.
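The MultiTableBatchWriter pattern above can be sketched roughly as follows. Again, these are hypothetical stub names standing in for the real API; the point is the shape of it: one parent object hands out per-table writers, and a single close() on the parent flushes every table's pending data.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the multi-table writer pattern: per-table writers share one parent,
// and one close() on the parent covers all of them.
public class MultiWriterSketch {
    static class TableWriter {
        int pending = 0;
        void addMutation(String row) { pending++; }
    }

    private final Map<String, TableWriter> perTable = new HashMap<>();
    int totalFlushed = 0;

    // Hands back the writer for a table, creating it on first use.
    TableWriter getBatchWriter(String table) {
        return perTable.computeIfAbsent(table, t -> new TableWriter());
    }

    // A single close() flushes every table's pending data.
    void close() {
        for (TableWriter w : perTable.values()) {
            totalFlushed += w.pending;
            w.pending = 0;
        }
    }

    public static void main(String[] args) {
        MultiWriterSketch mtbw = new MultiWriterSketch();
        mtbw.getBatchWriter("users").addMutation("u1");
        mtbw.getBatchWriter("events").addMutation("e1");
        mtbw.getBatchWriter("events").addMutation("e2");
        mtbw.close();   // one call covers both tables
        System.out.println(mtbw.totalFlushed);
    }
}
```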

As for your error, I wouldn't be concerned about it. If there is a long pause between writes,
whether because your client process was swapping or because it does a lot of computation
between writes, the server side will decide the client session is over and close it. When the
BatchWriter attempts to reconnect, it handles the error and creates a new session, so there
should be no problems stemming from that.
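To illustrate why that error is harmless, here is a toy model of the idle-session behavior described above. This is not the real tserver code; the timeout value, the session map, and the retry helper are all invented for the sketch. The server expires sessions that go quiet, the client hits the "No Such SessionID" error on its next write, and it simply opens a fresh session and resends, so nothing is lost.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model: server-side session expiry plus client-side retry on
// "No Such SessionID". Timestamps are plain longs for simplicity.
public class SessionSketch {
    static final long IDLE_LIMIT_MS = 1000;

    // Server side: session id -> last time the client touched it.
    static Map<Long, Long> sessions = new HashMap<>();
    static long nextId = 1;
    static int applied = 0;

    static long openSession(long now) {
        long id = nextId++;
        sessions.put(id, now);
        return id;
    }

    // Throws if the session was expired, mimicking the exception in the log.
    static void applyUpdate(long id, long now) {
        Long last = sessions.get(id);
        if (last == null || now - last > IDLE_LIMIT_MS) {
            sessions.remove(id);
            throw new RuntimeException("No Such SessionID");
        }
        sessions.put(id, now);
        applied++;
    }

    // Client side: on failure, open a fresh session and retry the same update.
    static long writeWithRetry(long id, long now) {
        try {
            applyUpdate(id, now);
            return id;
        } catch (RuntimeException e) {
            long fresh = openSession(now);
            applyUpdate(fresh, now);   // nothing is lost, just resent
            return fresh;
        }
    }

    public static void main(String[] args) {
        long id = openSession(0);
        applyUpdate(id, 100);          // normal write
        id = writeWithRetry(id, 5000); // long pause: session expired, retried
        System.out.println(applied);   // both updates were applied
    }
}
```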

Let us know if you have any more problems.
