Return-Path: X-Original-To: archive-asf-public-internal@cust-asf2.ponee.io Delivered-To: archive-asf-public-internal@cust-asf2.ponee.io Received: from cust-asf.ponee.io (cust-asf.ponee.io [163.172.22.183]) by cust-asf2.ponee.io (Postfix) with ESMTP id 0C986200BBD for ; Tue, 8 Nov 2016 14:50:06 +0100 (CET) Received: by cust-asf.ponee.io (Postfix) id 0B382160B15; Tue, 8 Nov 2016 13:50:06 +0000 (UTC) Delivered-To: archive-asf-public@cust-asf.ponee.io Received: from mail.apache.org (hermes.apache.org [140.211.11.3]) by cust-asf.ponee.io (Postfix) with SMTP id E4245160B0A for ; Tue, 8 Nov 2016 14:50:03 +0100 (CET) Received: (qmail 25140 invoked by uid 500); 8 Nov 2016 13:49:58 -0000 Mailing-List: contact commits-help@hbase.apache.org; run by ezmlm Precedence: bulk List-Help: List-Unsubscribe: List-Post: List-Id: Reply-To: dev@hbase.apache.org Delivered-To: mailing list commits@hbase.apache.org Received: (qmail 22888 invoked by uid 99); 8 Nov 2016 13:49:57 -0000 Received: from git1-us-west.apache.org (HELO git1-us-west.apache.org) (140.211.11.23) by apache.org (qpsmtpd/0.29) with ESMTP; Tue, 08 Nov 2016 13:49:57 +0000 Received: by git1-us-west.apache.org (ASF Mail Server at git1-us-west.apache.org, from userid 33) id 0A731F173B; Tue, 8 Nov 2016 13:49:57 +0000 (UTC) Content-Type: text/plain; charset="us-ascii" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit From: stack@apache.org To: commits@hbase.apache.org Date: Tue, 08 Nov 2016 13:50:26 -0000 Message-Id: <2eb6d2433559407da5dfda7ace721a30@git.apache.org> In-Reply-To: References: X-Mailer: ASF-Git Admin Mailer Subject: [31/52] [partial] hbase-site git commit: Published site at 28de528c6ea19c261213ee229381a18ed3b5ef94. archived-at: Tue, 08 Nov 2016 13:50:06 -0000 http://git-wip-us.apache.org/repos/asf/hbase-site/blob/f96628d5/apidocs/src-html/org/apache/hadoop/hbase/client/Table.html ---------------------------------------------------------------------- diff --git a/apidocs/src-html/org/apache/hadoop/hbase/client/Table.html b/apidocs/src-html/org/apache/hadoop/hbase/client/Table.html index cc21d62..91a81ab 100644 --- a/apidocs/src-html/org/apache/hadoop/hbase/client/Table.html +++ b/apidocs/src-html/org/apache/hadoop/hbase/client/Table.html @@ -239,413 +239,428 @@ 231 /** 232 * Atomically checks if a row/family/qualifier value matches the expected 233 * value. If it does, it adds the put. If the passed value is null, the check -234 * is for the lack of column (ie: non-existance) +234 * is for the lack of column (ie: non-existence) 235 * -236 * @param row to check -237 * @param family column family to check -238 * @param qualifier column qualifier to check -239 * @param compareOp comparison operator to use -240 * @param value the expected value -241 * @param put data to put if check succeeds -242 * @throws IOException e -243 * @return true if the new put was executed, false otherwise -244 */ -245 boolean checkAndPut(byte[] row, byte[] family, byte[] qualifier, -246 CompareFilter.CompareOp compareOp, byte[] value, Put put) throws IOException; -247 -248 /** -249 * Deletes the specified cells/row. -250 * -251 * @param delete The object that specifies what to delete. -252 * @throws IOException if a remote or network exception occurs. -253 * @since 0.20.0 -254 */ -255 void delete(Delete delete) throws IOException; -256 -257 /** -258 * Deletes the specified cells/rows in bulk. -259 * @param deletes List of things to delete. 
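For illustration, a minimal sketch of the two delete flavours described above, assuming an already-opened Table instance named table plus row keys and a column family "cf" that are purely illustrative:

    Delete wholeRow = new Delete(Bytes.toBytes("row1"));               // whole row
    table.delete(wholeRow);

    Delete oneCell = new Delete(Bytes.toBytes("row2"));
    oneCell.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("col"));      // latest cell only

    // Bulk form: pass a mutable list. Per the javadoc it may be re-ordered, and on
    // IOException it is left holding only the Deletes that were not applied.
    List<Delete> deletes = new ArrayList<>();
    deletes.add(oneCell);
    deletes.add(new Delete(Bytes.toBytes("row3")));
    table.delete(deletes);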
List gets modified by this -260 * method (in particular it gets re-ordered, so the order in which the elements -261 * are inserted in the list gives no guarantee as to the order in which the -262 * {@link Delete}s are executed). -263 * @throws IOException if a remote or network exception occurs. In that case -264 * the {@code deletes} argument will contain the {@link Delete} instances -265 * that have not be successfully applied. -266 * @since 0.20.1 -267 */ -268 void delete(List<Delete> deletes) throws IOException; -269 -270 /** -271 * Atomically checks if a row/family/qualifier value matches the expected -272 * value. If it does, it adds the delete. If the passed value is null, the -273 * check is for the lack of column (ie: non-existance) -274 * -275 * @param row to check -276 * @param family column family to check -277 * @param qualifier column qualifier to check -278 * @param value the expected value -279 * @param delete data to delete if check succeeds -280 * @throws IOException e -281 * @return true if the new delete was executed, false otherwise -282 */ -283 boolean checkAndDelete(byte[] row, byte[] family, byte[] qualifier, -284 byte[] value, Delete delete) throws IOException; -285 -286 /** -287 * Atomically checks if a row/family/qualifier value matches the expected -288 * value. If it does, it adds the delete. If the passed value is null, the -289 * check is for the lack of column (ie: non-existance) -290 * -291 * @param row to check -292 * @param family column family to check -293 * @param qualifier column qualifier to check -294 * @param compareOp comparison operator to use -295 * @param value the expected value -296 * @param delete data to delete if check succeeds -297 * @throws IOException e -298 * @return true if the new delete was executed, false otherwise -299 */ -300 boolean checkAndDelete(byte[] row, byte[] family, byte[] qualifier, -301 CompareFilter.CompareOp compareOp, byte[] value, Delete delete) throws IOException; -302 -303 /** -304 * Performs multiple mutations atomically on a single row. Currently -305 * {@link Put} and {@link Delete} are supported. -306 * -307 * @param rm object that specifies the set of mutations to perform atomically -308 * @throws IOException +236 * The expected value argument of this call is on the left and the current +237 * value of the cell is on the right side of the comparison operator. +238 * +239 * Ie. eg. GREATER operator means expected value > existing <=> add the put. +240 * +241 * @param row to check +242 * @param family column family to check +243 * @param qualifier column qualifier to check +244 * @param compareOp comparison operator to use +245 * @param value the expected value +246 * @param put data to put if check succeeds +247 * @throws IOException e +248 * @return true if the new put was executed, false otherwise +249 */ +250 boolean checkAndPut(byte[] row, byte[] family, byte[] qualifier, +251 CompareFilter.CompareOp compareOp, byte[] value, Put put) throws IOException; +252 +253 /** +254 * Deletes the specified cells/row. +255 * +256 * @param delete The object that specifies what to delete. +257 * @throws IOException if a remote or network exception occurs. +258 * @since 0.20.0 +259 */ +260 void delete(Delete delete) throws IOException; +261 +262 /** +263 * Deletes the specified cells/rows in bulk. +264 * @param deletes List of things to delete. 
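A minimal sketch of the checkAndPut variant documented above, again assuming a Table named table; as the javadoc spells out, the comparison reads "expected value <op> existing value":

    byte[] row = Bytes.toBytes("row1");
    byte[] cf  = Bytes.toBytes("cf");
    byte[] col = Bytes.toBytes("col");

    Put put = new Put(row);
    put.addColumn(cf, col, Bytes.toBytes("new-value"));

    // Apply the Put only if the cell currently equals "old-value".
    boolean applied = table.checkAndPut(row, cf, col,
        CompareFilter.CompareOp.EQUAL, Bytes.toBytes("old-value"), put);

    // A null expected value turns the check into "the column must not exist yet".
    boolean appliedIfAbsent = table.checkAndPut(row, cf, col,
        CompareFilter.CompareOp.EQUAL, null, put);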
List gets modified by this +265 * method (in particular it gets re-ordered, so the order in which the elements +266 * are inserted in the list gives no guarantee as to the order in which the +267 * {@link Delete}s are executed). +268 * @throws IOException if a remote or network exception occurs. In that case +269 * the {@code deletes} argument will contain the {@link Delete} instances +270 * that have not be successfully applied. +271 * @since 0.20.1 +272 */ +273 void delete(List<Delete> deletes) throws IOException; +274 +275 /** +276 * Atomically checks if a row/family/qualifier value matches the expected +277 * value. If it does, it adds the delete. If the passed value is null, the +278 * check is for the lack of column (ie: non-existance) +279 * +280 * @param row to check +281 * @param family column family to check +282 * @param qualifier column qualifier to check +283 * @param value the expected value +284 * @param delete data to delete if check succeeds +285 * @throws IOException e +286 * @return true if the new delete was executed, false otherwise +287 */ +288 boolean checkAndDelete(byte[] row, byte[] family, byte[] qualifier, +289 byte[] value, Delete delete) throws IOException; +290 +291 /** +292 * Atomically checks if a row/family/qualifier value matches the expected +293 * value. If it does, it adds the delete. If the passed value is null, the +294 * check is for the lack of column (ie: non-existence) +295 * +296 * The expected value argument of this call is on the left and the current +297 * value of the cell is on the right side of the comparison operator. +298 * +299 * Ie. eg. GREATER operator means expected value > existing <=> add the delete. +300 * +301 * @param row to check +302 * @param family column family to check +303 * @param qualifier column qualifier to check +304 * @param compareOp comparison operator to use +305 * @param value the expected value +306 * @param delete data to delete if check succeeds +307 * @throws IOException e +308 * @return true if the new delete was executed, false otherwise 309 */ -310 void mutateRow(final RowMutations rm) throws IOException; -311 -312 /** -313 * Appends values to one or more columns within a single row. -314 * <p> -315 * This operation does not appear atomic to readers. Appends are done -316 * under a single row lock, so write operations to a row are synchronized, but -317 * readers do not take row locks so get and scan operations can see this -318 * operation partially completed. -319 * -320 * @param append object that specifies the columns and amounts to be used -321 * for the increment operations -322 * @throws IOException e -323 * @return values of columns after the append operation (maybe null) -324 */ -325 Result append(final Append append) throws IOException; -326 -327 /** -328 * Increments one or more columns within a single row. -329 * <p> -330 * This operation does not appear atomic to readers. Increments are done -331 * under a single row lock, so write operations to a row are synchronized, but -332 * readers do not take row locks so get and scan operations can see this -333 * operation partially completed. 
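Similarly, a sketch of checkAndDelete in its plain and CompareOp forms, with the same assumed table, family and qualifier names:

    byte[] row = Bytes.toBytes("row1");
    byte[] cf  = Bytes.toBytes("cf");
    byte[] col = Bytes.toBytes("status");

    Delete delete = new Delete(row);
    delete.addColumn(cf, col);

    // Delete the cell only if it currently holds exactly "obsolete".
    boolean deleted = table.checkAndDelete(row, cf, col,
        Bytes.toBytes("obsolete"), delete);

    // CompareOp form: the expected value is the left operand and the stored cell the
    // right one, so GREATER means "expected > existing" (byte-wise comparison).
    boolean alsoDeleted = table.checkAndDelete(row, cf, col,
        CompareFilter.CompareOp.GREATER, Bytes.toBytes("zzz"), delete);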
-334 * -335 * @param increment object that specifies the columns and amounts to be used -336 * for the increment operations -337 * @throws IOException e -338 * @return values of columns after the increment -339 */ -340 Result increment(final Increment increment) throws IOException; -341 -342 /** -343 * See {@link #incrementColumnValue(byte[], byte[], byte[], long, Durability)} -344 * <p> -345 * The {@link Durability} is defaulted to {@link Durability#SYNC_WAL}. -346 * @param row The row that contains the cell to increment. -347 * @param family The column family of the cell to increment. -348 * @param qualifier The column qualifier of the cell to increment. -349 * @param amount The amount to increment the cell with (or decrement, if the -350 * amount is negative). -351 * @return The new value, post increment. -352 * @throws IOException if a remote or network exception occurs. -353 */ -354 long incrementColumnValue(byte[] row, byte[] family, byte[] qualifier, -355 long amount) throws IOException; -356 -357 /** -358 * Atomically increments a column value. If the column value already exists -359 * and is not a big-endian long, this could throw an exception. If the column -360 * value does not yet exist it is initialized to <code>amount</code> and -361 * written to the specified column. -362 * -363 * <p>Setting durability to {@link Durability#SKIP_WAL} means that in a fail -364 * scenario you will lose any increments that have not been flushed. -365 * @param row The row that contains the cell to increment. -366 * @param family The column family of the cell to increment. -367 * @param qualifier The column qualifier of the cell to increment. -368 * @param amount The amount to increment the cell with (or decrement, if the -369 * amount is negative). -370 * @param durability The persistence guarantee for this increment. -371 * @return The new value, post increment. -372 * @throws IOException if a remote or network exception occurs. -373 */ -374 long incrementColumnValue(byte[] row, byte[] family, byte[] qualifier, -375 long amount, Durability durability) throws IOException; -376 -377 /** -378 * Releases any resources held or pending changes in internal buffers. -379 * -380 * @throws IOException if a remote or network exception occurs. -381 */ -382 @Override -383 void close() throws IOException; -384 -385 /** -386 * Creates and returns a {@link com.google.protobuf.RpcChannel} instance connected to the -387 * table region containing the specified row. The row given does not actually have -388 * to exist. Whichever region would contain the row based on start and end keys will -389 * be used. Note that the {@code row} parameter is also not passed to the -390 * coprocessor handler registered for this protocol, unless the {@code row} -391 * is separately passed as an argument in the service request. The parameter -392 * here is only used to locate the region used to handle the call. -393 * -394 * <p> -395 * The obtained {@link com.google.protobuf.RpcChannel} instance can be used to access a published -396 * coprocessor {@link com.google.protobuf.Service} using standard protobuf service invocations: -397 * </p> -398 * -399 * <div style="background-color: #cccccc; padding: 2px"> -400 * <blockquote><pre> -401 * CoprocessorRpcChannel channel = myTable.coprocessorService(rowkey); -402 * MyService.BlockingInterface service = MyService.newBlockingStub(channel); -403 * MyCallRequest request = MyCallRequest.newBuilder() -404 * ... 
-405 * .build(); -406 * MyCallResponse response = service.myCall(null, request); -407 * </pre></blockquote></div> +310 boolean checkAndDelete(byte[] row, byte[] family, byte[] qualifier, +311 CompareFilter.CompareOp compareOp, byte[] value, Delete delete) throws IOException; +312 +313 /** +314 * Performs multiple mutations atomically on a single row. Currently +315 * {@link Put} and {@link Delete} are supported. +316 * +317 * @param rm object that specifies the set of mutations to perform atomically +318 * @throws IOException +319 */ +320 void mutateRow(final RowMutations rm) throws IOException; +321 +322 /** +323 * Appends values to one or more columns within a single row. +324 * <p> +325 * This operation does not appear atomic to readers. Appends are done +326 * under a single row lock, so write operations to a row are synchronized, but +327 * readers do not take row locks so get and scan operations can see this +328 * operation partially completed. +329 * +330 * @param append object that specifies the columns and amounts to be used +331 * for the increment operations +332 * @throws IOException e +333 * @return values of columns after the append operation (maybe null) +334 */ +335 Result append(final Append append) throws IOException; +336 +337 /** +338 * Increments one or more columns within a single row. +339 * <p> +340 * This operation does not appear atomic to readers. Increments are done +341 * under a single row lock, so write operations to a row are synchronized, but +342 * readers do not take row locks so get and scan operations can see this +343 * operation partially completed. +344 * +345 * @param increment object that specifies the columns and amounts to be used +346 * for the increment operations +347 * @throws IOException e +348 * @return values of columns after the increment +349 */ +350 Result increment(final Increment increment) throws IOException; +351 +352 /** +353 * See {@link #incrementColumnValue(byte[], byte[], byte[], long, Durability)} +354 * <p> +355 * The {@link Durability} is defaulted to {@link Durability#SYNC_WAL}. +356 * @param row The row that contains the cell to increment. +357 * @param family The column family of the cell to increment. +358 * @param qualifier The column qualifier of the cell to increment. +359 * @param amount The amount to increment the cell with (or decrement, if the +360 * amount is negative). +361 * @return The new value, post increment. +362 * @throws IOException if a remote or network exception occurs. +363 */ +364 long incrementColumnValue(byte[] row, byte[] family, byte[] qualifier, +365 long amount) throws IOException; +366 +367 /** +368 * Atomically increments a column value. If the column value already exists +369 * and is not a big-endian long, this could throw an exception. If the column +370 * value does not yet exist it is initialized to <code>amount</code> and +371 * written to the specified column. +372 * +373 * <p>Setting durability to {@link Durability#SKIP_WAL} means that in a fail +374 * scenario you will lose any increments that have not been flushed. +375 * @param row The row that contains the cell to increment. +376 * @param family The column family of the cell to increment. +377 * @param qualifier The column qualifier of the cell to increment. +378 * @param amount The amount to increment the cell with (or decrement, if the +379 * amount is negative). +380 * @param durability The persistence guarantee for this increment. +381 * @return The new value, post increment. 
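The increment and append calls above are read-modify-write operations on a single row; a sketch, again assuming a Table named table and counter qualifiers that are purely illustrative:

    byte[] row = Bytes.toBytes("row1");
    byte[] cf  = Bytes.toBytes("cf");

    // Convenience form: bump one counter cell; durability defaults to SYNC_WAL.
    long hits = table.incrementColumnValue(row, cf, Bytes.toBytes("hits"), 1);

    // Same, but skipping the WAL: unflushed increments can be lost on failure.
    long fastHits = table.incrementColumnValue(row, cf, Bytes.toBytes("hits"), 1,
        Durability.SKIP_WAL);

    // Several counters in one row, updated under a single row lock.
    Increment inc = new Increment(row);
    inc.addColumn(cf, Bytes.toBytes("hits"), 1L);
    inc.addColumn(cf, Bytes.toBytes("bytes"), 512L);
    Result counters = table.increment(inc);

    // Append bytes to an existing cell (Append.add in the 1.x API; newer releases
    // name the method addColumn).
    Append app = new Append(row);
    app.add(cf, Bytes.toBytes("log"), Bytes.toBytes("|new-entry"));
    Result appended = table.append(app);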
+382 * @throws IOException if a remote or network exception occurs. +383 */ +384 long incrementColumnValue(byte[] row, byte[] family, byte[] qualifier, +385 long amount, Durability durability) throws IOException; +386 +387 /** +388 * Releases any resources held or pending changes in internal buffers. +389 * +390 * @throws IOException if a remote or network exception occurs. +391 */ +392 @Override +393 void close() throws IOException; +394 +395 /** +396 * Creates and returns a {@link com.google.protobuf.RpcChannel} instance connected to the +397 * table region containing the specified row. The row given does not actually have +398 * to exist. Whichever region would contain the row based on start and end keys will +399 * be used. Note that the {@code row} parameter is also not passed to the +400 * coprocessor handler registered for this protocol, unless the {@code row} +401 * is separately passed as an argument in the service request. The parameter +402 * here is only used to locate the region used to handle the call. +403 * +404 * <p> +405 * The obtained {@link com.google.protobuf.RpcChannel} instance can be used to access a published +406 * coprocessor {@link com.google.protobuf.Service} using standard protobuf service invocations: +407 * </p> 408 * -409 * @param row The row key used to identify the remote region location -410 * @return A CoprocessorRpcChannel instance -411 */ -412 CoprocessorRpcChannel coprocessorService(byte[] row); -413 -414 /** -415 * Creates an instance of the given {@link com.google.protobuf.Service} subclass for each table -416 * region spanning the range from the {@code startKey} row to {@code endKey} row (inclusive), and -417 * invokes the passed {@link org.apache.hadoop.hbase.client.coprocessor.Batch.Call#call} method -418 * with each {@link com.google.protobuf.Service} instance. -419 * -420 * @param service the protocol buffer {@code Service} implementation to call -421 * @param startKey start region selection with region containing this row. If {@code null}, the -422 * selection will start with the first table region. -423 * @param endKey select regions up to and including the region containing this row. If {@code -424 * null}, selection will continue through the last table region. -425 * @param callable this instance's {@link org.apache.hadoop.hbase.client.coprocessor.Batch -426 * .Call#call} -427 * method will be invoked once per table region, using the {@link com.google.protobuf.Service} -428 * instance connected to that region. -429 * @param <T> the {@link com.google.protobuf.Service} subclass to connect to -430 * @param <R> Return type for the {@code callable} parameter's {@link -431 * org.apache.hadoop.hbase.client.coprocessor.Batch.Call#call} method -432 * @return a map of result values keyed by region name -433 */ -434 <T extends Service, R> Map<byte[],R> coprocessorService(final Class<T> service, -435 byte[] startKey, byte[] endKey, final Batch.Call<T,R> callable) -436 throws ServiceException, Throwable; -437 -438 /** -439 * Creates an instance of the given {@link com.google.protobuf.Service} subclass for each table -440 * region spanning the range from the {@code startKey} row to {@code endKey} row (inclusive), and -441 * invokes the passed {@link org.apache.hadoop.hbase.client.coprocessor.Batch.Call#call} method -442 * with each {@link Service} instance. 
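A sketch of the batched coprocessorService form above, reusing the hypothetical MyService / MyCallRequest / MyCallResponse names from the javadoc's own example; ServerRpcController and BlockingRpcCallback stand in for the helpers HBase ships to bridge the asynchronous protobuf stub (their package varies across releases), and null start/end keys select every region of the table:

    final MyCallRequest request = MyCallRequest.newBuilder().build();

    Map<byte[], MyCallResponse> results = table.coprocessorService(
        MyService.class, null, null,
        new Batch.Call<MyService, MyCallResponse>() {
          @Override
          public MyCallResponse call(MyService service) throws IOException {
            ServerRpcController controller = new ServerRpcController();
            BlockingRpcCallback<MyCallResponse> done = new BlockingRpcCallback<>();
            service.myCall(controller, request, done);   // async stub + callback
            return done.get();                            // one response per region
          }
        });
    // 'results' is keyed by region name, one entry per region in the key range.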
-443 * -444 * <p> The given {@link org.apache.hadoop.hbase.client.coprocessor.Batch.Callback#update(byte[], -445 * byte[], Object)} method will be called with the return value from each region's {@link -446 * org.apache.hadoop.hbase.client.coprocessor.Batch.Call#call} invocation. </p> -447 * -448 * @param service the protocol buffer {@code Service} implementation to call -449 * @param startKey start region selection with region containing this row. If {@code null}, the -450 * selection will start with the first table region. -451 * @param endKey select regions up to and including the region containing this row. If {@code -452 * null}, selection will continue through the last table region. -453 * @param callable this instance's {@link org.apache.hadoop.hbase.client.coprocessor.Batch -454 * .Call#call} -455 * method will be invoked once per table region, using the {@link Service} instance connected to -456 * that region. -457 * @param callback -458 * @param <T> the {@link Service} subclass to connect to -459 * @param <R> Return type for the {@code callable} parameter's {@link -460 * org.apache.hadoop.hbase.client.coprocessor.Batch.Call#call} method -461 */ -462 <T extends Service, R> void coprocessorService(final Class<T> service, -463 byte[] startKey, byte[] endKey, final Batch.Call<T,R> callable, -464 final Batch.Callback<R> callback) throws ServiceException, Throwable; -465 -466 /** -467 * Returns the maximum size in bytes of the write buffer for this HTable. -468 * <p> -469 * The default value comes from the configuration parameter -470 * {@code hbase.client.write.buffer}. -471 * @return The size of the write buffer in bytes. -472 * @deprecated as of 1.0.1 (should not have been in 1.0.0). Replaced by {@link BufferedMutator#getWriteBufferSize()} -473 */ -474 @Deprecated -475 long getWriteBufferSize(); -476 -477 /** -478 * Sets the size of the buffer in bytes. -479 * <p> -480 * If the new size is less than the current amount of data in the -481 * write buffer, the buffer gets flushed. -482 * @param writeBufferSize The new write buffer size, in bytes. -483 * @throws IOException if a remote or network exception occurs. -484 * @deprecated as of 1.0.1 (should not have been in 1.0.0). Replaced by {@link BufferedMutator} and -485 * {@link BufferedMutatorParams#writeBufferSize(long)} -486 */ -487 @Deprecated -488 void setWriteBufferSize(long writeBufferSize) throws IOException; -489 -490 /** -491 * Creates an instance of the given {@link com.google.protobuf.Service} subclass for each table -492 * region spanning the range from the {@code startKey} row to {@code endKey} row (inclusive), all -493 * the invocations to the same region server will be batched into one call. The coprocessor -494 * service is invoked according to the service instance, method name and parameters. -495 * -496 * @param methodDescriptor -497 * the descriptor for the protobuf service method to call. -498 * @param request -499 * the method call parameters -500 * @param startKey -501 * start region selection with region containing this row. If {@code null}, the -502 * selection will start with the first table region. -503 * @param endKey -504 * select regions up to and including the region containing this row. If {@code null}, -505 * selection will continue through the last table region. -506 * @param responsePrototype -507 * the proto type of the response of the method in Service. 
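For the variant above that batches all invocations to the same region server into one call, a sketch with the same hypothetical MyService names; the method descriptor and the response prototype come from the generated protobuf classes:

    Descriptors.MethodDescriptor method =
        MyService.getDescriptor().findMethodByName("myCall");
    MyCallRequest request = MyCallRequest.newBuilder().build();

    Map<byte[], MyCallResponse> results = table.batchCoprocessorService(
        method, request, null, null,                 // null keys: span the whole table
        MyCallResponse.getDefaultInstance());        // response prototype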
-508 * @param <R> -509 * the response type for the coprocessor Service method -510 * @throws ServiceException -511 * @throws Throwable -512 * @return a map of result values keyed by region name -513 */ -514 <R extends Message> Map<byte[], R> batchCoprocessorService( -515 Descriptors.MethodDescriptor methodDescriptor, Message request, -516 byte[] startKey, byte[] endKey, R responsePrototype) throws ServiceException, Throwable; -517 -518 /** -519 * Creates an instance of the given {@link com.google.protobuf.Service} subclass for each table -520 * region spanning the range from the {@code startKey} row to {@code endKey} row (inclusive), all -521 * the invocations to the same region server will be batched into one call. The coprocessor -522 * service is invoked according to the service instance, method name and parameters. -523 * -524 * <p> -525 * The given -526 * {@link org.apache.hadoop.hbase.client.coprocessor.Batch.Callback#update(byte[],byte[],Object)} -527 * method will be called with the return value from each region's invocation. -528 * </p> -529 * -530 * @param methodDescriptor -531 * the descriptor for the protobuf service method to call. -532 * @param request -533 * the method call parameters -534 * @param startKey -535 * start region selection with region containing this row. If {@code null}, the -536 * selection will start with the first table region. -537 * @param endKey -538 * select regions up to and including the region containing this row. If {@code null}, -539 * selection will continue through the last table region. -540 * @param responsePrototype -541 * the proto type of the response of the method in Service. -542 * @param callback -543 * callback to invoke with the response for each region -544 * @param <R> -545 * the response type for the coprocessor Service method -546 * @throws ServiceException -547 * @throws Throwable -548 */ -549 <R extends Message> void batchCoprocessorService(Descriptors.MethodDescriptor methodDescriptor, -550 Message request, byte[] startKey, byte[] endKey, R responsePrototype, -551 Batch.Callback<R> callback) throws ServiceException, Throwable; -552 -553 /** -554 * Atomically checks if a row/family/qualifier value matches the expected value. -555 * If it does, it performs the row mutations. If the passed value is null, the check -556 * is for the lack of column (ie: non-existence) -557 * -558 * @param row to check -559 * @param family column family to check -560 * @param qualifier column qualifier to check -561 * @param compareOp the comparison operator -562 * @param value the expected value -563 * @param mutation mutations to perform if check succeeds -564 * @throws IOException e -565 * @return true if the new put was executed, false otherwise -566 */ -567 boolean checkAndMutate(byte[] row, byte[] family, byte[] qualifier, -568 CompareFilter.CompareOp compareOp, byte[] value, RowMutations mutation) throws IOException; -569 -570 /** -571 * Set timeout (millisecond) of each operation in this Table instance, will override the value -572 * of hbase.client.operation.timeout in configuration. -573 * Operation timeout is a top-level restriction that makes sure a blocking method will not be -574 * blocked more than this. In each operation, if rpc request fails because of timeout or -575 * other reason, it will retry until success or throw a RetriesExhaustedException. But if the -576 * total time being blocking reach the operation timeout before retries exhausted, it will break -577 * early and throw SocketTimeoutException. 
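A sketch of mutateRow and the guarded checkAndMutate described above; the row, family and qualifier names are again assumptions:

    byte[] row = Bytes.toBytes("row1");
    byte[] cf  = Bytes.toBytes("cf");

    RowMutations mutations = new RowMutations(row);
    Put put = new Put(row);
    put.addColumn(cf, Bytes.toBytes("state"), Bytes.toBytes("done"));
    mutations.add(put);
    Delete delete = new Delete(row);
    delete.addColumn(cf, Bytes.toBytes("pending"));
    mutations.add(delete);

    // Either apply both mutations atomically and unconditionally ...
    table.mutateRow(mutations);

    // ... or only when cf:state currently equals "in-progress".
    boolean applied = table.checkAndMutate(row, cf, Bytes.toBytes("state"),
        CompareFilter.CompareOp.EQUAL, Bytes.toBytes("in-progress"), mutations);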
-578 * @param operationTimeout the total timeout of each operation in millisecond. -579 */ -580 void setOperationTimeout(int operationTimeout); -581 -582 /** -583 * Get timeout (millisecond) of each operation for in Table instance. -584 */ -585 int getOperationTimeout(); -586 -587 /** -588 * Get timeout (millisecond) of each rpc request in this Table instance. -589 * -590 * @returns Currently configured read timeout -591 * @deprecated Use getReadRpcTimeout or getWriteRpcTimeout instead -592 */ -593 @Deprecated -594 int getRpcTimeout(); -595 -596 /** -597 * Set timeout (millisecond) of each rpc request in operations of this Table instance, will -598 * override the value of hbase.rpc.timeout in configuration. -599 * If a rpc request waiting too long, it will stop waiting and send a new request to retry until -600 * retries exhausted or operation timeout reached. -601 * <p> -602 * NOTE: This will set both the read and write timeout settings to the provided value. -603 * -604 * @param rpcTimeout the timeout of each rpc request in millisecond. -605 * -606 * @deprecated Use setReadRpcTimeout or setWriteRpcTimeout instead +409 * <div style="background-color: #cccccc; padding: 2px"> +410 * <blockquote><pre> +411 * CoprocessorRpcChannel channel = myTable.coprocessorService(rowkey); +412 * MyService.BlockingInterface service = MyService.newBlockingStub(channel); +413 * MyCallRequest request = MyCallRequest.newBuilder() +414 * ... +415 * .build(); +416 * MyCallResponse response = service.myCall(null, request); +417 * </pre></blockquote></div> +418 * +419 * @param row The row key used to identify the remote region location +420 * @return A CoprocessorRpcChannel instance +421 */ +422 CoprocessorRpcChannel coprocessorService(byte[] row); +423 +424 /** +425 * Creates an instance of the given {@link com.google.protobuf.Service} subclass for each table +426 * region spanning the range from the {@code startKey} row to {@code endKey} row (inclusive), and +427 * invokes the passed {@link org.apache.hadoop.hbase.client.coprocessor.Batch.Call#call} method +428 * with each {@link com.google.protobuf.Service} instance. +429 * +430 * @param service the protocol buffer {@code Service} implementation to call +431 * @param startKey start region selection with region containing this row. If {@code null}, the +432 * selection will start with the first table region. +433 * @param endKey select regions up to and including the region containing this row. If {@code +434 * null}, selection will continue through the last table region. +435 * @param callable this instance's {@link org.apache.hadoop.hbase.client.coprocessor.Batch +436 * .Call#call} +437 * method will be invoked once per table region, using the {@link com.google.protobuf.Service} +438 * instance connected to that region. 
+439 * @param <T> the {@link com.google.protobuf.Service} subclass to connect to +440 * @param <R> Return type for the {@code callable} parameter's {@link +441 * org.apache.hadoop.hbase.client.coprocessor.Batch.Call#call} method +442 * @return a map of result values keyed by region name +443 */ +444 <T extends Service, R> Map<byte[],R> coprocessorService(final Class<T> service, +445 byte[] startKey, byte[] endKey, final Batch.Call<T,R> callable) +446 throws ServiceException, Throwable; +447 +448 /** +449 * Creates an instance of the given {@link com.google.protobuf.Service} subclass for each table +450 * region spanning the range from the {@code startKey} row to {@code endKey} row (inclusive), and +451 * invokes the passed {@link org.apache.hadoop.hbase.client.coprocessor.Batch.Call#call} method +452 * with each {@link Service} instance. +453 * +454 * <p> The given {@link org.apache.hadoop.hbase.client.coprocessor.Batch.Callback#update(byte[], +455 * byte[], Object)} method will be called with the return value from each region's {@link +456 * org.apache.hadoop.hbase.client.coprocessor.Batch.Call#call} invocation. </p> +457 * +458 * @param service the protocol buffer {@code Service} implementation to call +459 * @param startKey start region selection with region containing this row. If {@code null}, the +460 * selection will start with the first table region. +461 * @param endKey select regions up to and including the region containing this row. If {@code +462 * null}, selection will continue through the last table region. +463 * @param callable this instance's {@link org.apache.hadoop.hbase.client.coprocessor.Batch +464 * .Call#call} +465 * method will be invoked once per table region, using the {@link Service} instance connected to +466 * that region. +467 * @param callback +468 * @param <T> the {@link Service} subclass to connect to +469 * @param <R> Return type for the {@code callable} parameter's {@link +470 * org.apache.hadoop.hbase.client.coprocessor.Batch.Call#call} method +471 */ +472 <T extends Service, R> void coprocessorService(final Class<T> service, +473 byte[] startKey, byte[] endKey, final Batch.Call<T,R> callable, +474 final Batch.Callback<R> callback) throws ServiceException, Throwable; +475 +476 /** +477 * Returns the maximum size in bytes of the write buffer for this HTable. +478 * <p> +479 * The default value comes from the configuration parameter +480 * {@code hbase.client.write.buffer}. +481 * @return The size of the write buffer in bytes. +482 * @deprecated as of 1.0.1 (should not have been in 1.0.0). Replaced by {@link BufferedMutator#getWriteBufferSize()} +483 */ +484 @Deprecated +485 long getWriteBufferSize(); +486 +487 /** +488 * Sets the size of the buffer in bytes. +489 * <p> +490 * If the new size is less than the current amount of data in the +491 * write buffer, the buffer gets flushed. +492 * @param writeBufferSize The new write buffer size, in bytes. +493 * @throws IOException if a remote or network exception occurs. +494 * @deprecated as of 1.0.1 (should not have been in 1.0.0). 
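The Batch.Callback overload differs from the earlier coprocessorService sketch only in the extra callback, which is invoked once per region as results arrive; the MyService names remain hypothetical:

    final MyCallRequest request = MyCallRequest.newBuilder().build();

    table.coprocessorService(MyService.class, null, null,
        new Batch.Call<MyService, MyCallResponse>() {
          @Override
          public MyCallResponse call(MyService service) throws IOException {
            ServerRpcController controller = new ServerRpcController();
            BlockingRpcCallback<MyCallResponse> done = new BlockingRpcCallback<>();
            service.myCall(controller, request, done);
            return done.get();
          }
        },
        new Batch.Callback<MyCallResponse>() {
          @Override
          public void update(byte[] region, byte[] row, MyCallResponse result) {
            // Called with each region's result; aggregate it here.
          }
        });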
Replaced by {@link BufferedMutator} and +495 * {@link BufferedMutatorParams#writeBufferSize(long)} +496 */ +497 @Deprecated +498 void setWriteBufferSize(long writeBufferSize) throws IOException; +499 +500 /** +501 * Creates an instance of the given {@link com.google.protobuf.Service} subclass for each table +502 * region spanning the range from the {@code startKey} row to {@code endKey} row (inclusive), all +503 * the invocations to the same region server will be batched into one call. The coprocessor +504 * service is invoked according to the service instance, method name and parameters. +505 * +506 * @param methodDescriptor +507 * the descriptor for the protobuf service method to call. +508 * @param request +509 * the method call parameters +510 * @param startKey +511 * start region selection with region containing this row. If {@code null}, the +512 * selection will start with the first table region. +513 * @param endKey +514 * select regions up to and including the region containing this row. If {@code null}, +515 * selection will continue through the last table region. +516 * @param responsePrototype +517 * the proto type of the response of the method in Service. +518 * @param <R> +519 * the response type for the coprocessor Service method +520 * @throws ServiceException +521 * @throws Throwable +522 * @return a map of result values keyed by region name +523 */ +524 <R extends Message> Map<byte[], R> batchCoprocessorService( +525 Descriptors.MethodDescriptor methodDescriptor, Message request, +526 byte[] startKey, byte[] endKey, R responsePrototype) throws ServiceException, Throwable; +527 +528 /** +529 * Creates an instance of the given {@link com.google.protobuf.Service} subclass for each table +530 * region spanning the range from the {@code startKey} row to {@code endKey} row (inclusive), all +531 * the invocations to the same region server will be batched into one call. The coprocessor +532 * service is invoked according to the service instance, method name and parameters. +533 * +534 * <p> +535 * The given +536 * {@link org.apache.hadoop.hbase.client.coprocessor.Batch.Callback#update(byte[],byte[],Object)} +537 * method will be called with the return value from each region's invocation. +538 * </p> +539 * +540 * @param methodDescriptor +541 * the descriptor for the protobuf service method to call. +542 * @param request +543 * the method call parameters +544 * @param startKey +545 * start region selection with region containing this row. If {@code null}, the +546 * selection will start with the first table region. +547 * @param endKey +548 * select regions up to and including the region containing this row. If {@code null}, +549 * selection will continue through the last table region. +550 * @param responsePrototype +551 * the proto type of the response of the method in Service. +552 * @param callback +553 * callback to invoke with the response for each region +554 * @param <R> +555 * the response type for the coprocessor Service method +556 * @throws ServiceException +557 * @throws Throwable +558 */ +559 <R extends Message> void batchCoprocessorService(Descriptors.MethodDescriptor methodDescriptor, +560 Message request, byte[] startKey, byte[] endKey, R responsePrototype, +561 Batch.Callback<R> callback) throws ServiceException, Throwable; +562 +563 /** +564 * Atomically checks if a row/family/qualifier value matches the expected value. +565 * If it does, it performs the row mutations. 
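Since getWriteBufferSize/setWriteBufferSize are deprecated in favour of BufferedMutator, a sketch of the replacement; the Connection named connection and the table name "mytable" are assumptions:

    BufferedMutatorParams params =
        new BufferedMutatorParams(TableName.valueOf("mytable"))
            .writeBufferSize(4 * 1024 * 1024);           // 4 MB client-side buffer

    try (BufferedMutator mutator = connection.getBufferedMutator(params)) {
      Put put = new Put(Bytes.toBytes("row1"));
      put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("col"), Bytes.toBytes("value"));
      mutator.mutate(put);     // buffered locally, shipped when the buffer fills
      mutator.flush();         // or push it out explicitly
    }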
If the passed value is null, the check +566 * is for the lack of column (ie: non-existence) +567 * +568 * The expected value argument of this call is on the left and the current +569 * value of the cell is on the right side of the comparison operator. +570 * +571 * Ie. eg. GREATER operator means expected value > existing <=> perform row mutations. +572 * +573 * @param row to check +574 * @param family column family to check +575 * @param qualifier column qualifier to check +576 * @param compareOp the comparison operator +577 * @param value the expected value +578 * @param mutation mutations to perform if check succeeds +579 * @throws IOException e +580 * @return true if the new put was executed, false otherwise +581 */ +582 boolean checkAndMutate(byte[] row, byte[] family, byte[] qualifier, +583 CompareFilter.CompareOp compareOp, byte[] value, RowMutations mutation) throws IOException; +584 +585 /** +586 * Set timeout (millisecond) of each operation in this Table instance, will override the value +587 * of hbase.client.operation.timeout in configuration. +588 * Operation timeout is a top-level restriction that makes sure a blocking method will not be +589 * blocked more than this. In each operation, if rpc request fails because of timeout or +590 * other reason, it will retry until success or throw a RetriesExhaustedException. But if the +591 * total time being blocking reach the operation timeout before retries exhausted, it will break +592 * early and throw SocketTimeoutException. +593 * @param operationTimeout the total timeout of each operation in millisecond. +594 */ +595 void setOperationTimeout(int operationTimeout); +596 +597 /** +598 * Get timeout (millisecond) of each operation for in Table instance. +599 */ +600 int getOperationTimeout(); +601 +602 /** +603 * Get timeout (millisecond) of each rpc request in this Table instance. +604 * +605 * @returns Currently configured read timeout +606 * @deprecated Use getReadRpcTimeout or getWriteRpcTimeout instead 607 */ 608 @Deprecated -609 void setRpcTimeout(int rpcTimeout); +609 int getRpcTimeout(); 610 611 /** -612 * Get timeout (millisecond) of each rpc read request in this Table instance. -613 */ -614 int getReadRpcTimeout(); -615 -616 /** -617 * Set timeout (millisecond) of each rpc read request in operations of this Table instance, will -618 * override the value of hbase.rpc.read.timeout in configuration. -619 * If a rpc read request waiting too long, it will stop waiting and send a new request to retry -620 * until retries exhausted or operation timeout reached. -621 * -622 * @param readRpcTimeout -623 */ -624 void setReadRpcTimeout(int readRpcTimeout); +612 * Set timeout (millisecond) of each rpc request in operations of this Table instance, will +613 * override the value of hbase.rpc.timeout in configuration. +614 * If a rpc request waiting too long, it will stop waiting and send a new request to retry until +615 * retries exhausted or operation timeout reached. +616 * <p> +617 * NOTE: This will set both the read and write timeout settings to the provided value. +618 * +619 * @param rpcTimeout the timeout of each rpc request in millisecond. +620 * +621 * @deprecated Use setReadRpcTimeout or setWriteRpcTimeout instead +622 */ +623 @Deprecated +624 void setRpcTimeout(int rpcTimeout); 625 626 /** -627 * Get timeout (millisecond) of each rpc write request in this Table instance. +627 * Get timeout (millisecond) of each rpc read request in this Table instance. 
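The timeout setters above compose as follows; the millisecond values are arbitrary examples:

    table.setOperationTimeout(60000);   // cap on each blocking call, retries included
    table.setReadRpcTimeout(10000);     // per read RPC attempt
    table.setWriteRpcTimeout(20000);    // per write RPC attempt
    // The deprecated setRpcTimeout(int) sets both RPC values at once.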
628 */ -629 int getWriteRpcTimeout(); +629 int getReadRpcTimeout(); 630 631 /** -632 * Set timeout (millisecond) of each rpc write request in operations of this Table instance, will -633 * override the value of hbase.rpc.write.timeout in configuration. -634 * If a rpc write request waiting too long, it will stop waiting and send a new request to retry +632 * Set timeout (millisecond) of each rpc read request in operations of this Table instance, will +633 * override the value of hbase.rpc.read.timeout in configuration. +634 * If a rpc read request waiting too long, it will stop waiting and send a new request to retry 635 * until retries exhausted or operation timeout reached. 636 * -637 * @param writeRpcTimeout +637 * @param readRpcTimeout 638 */ -639 void setWriteRpcTimeout(int writeRpcTimeout); -640} +639 void setReadRpcTimeout(int readRpcTimeout); +640 +641 /** +642 * Get timeout (millisecond) of each rpc write request in this Table instance. +643 */ +644 int getWriteRpcTimeout(); +645 +646 /** +647 * Set timeout (millisecond) of each rpc write request in operations of this Table instance, will +648 * override the value of hbase.rpc.write.timeout in configuration. +649 * If a rpc write request waiting too long, it will stop waiting and send a new request to retry +650 * until retries exhausted or operation timeout reached. +651 * +652 * @param writeRpcTimeout +653 */ +654 void setWriteRpcTimeout(int writeRpcTimeout); +655}
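Finally, a sketch of how a Table is typically obtained and released; the table name "mytable" and the cell values are assumptions, and put(Put) belongs to the same interface even though it falls outside the excerpt above:

    Configuration conf = HBaseConfiguration.create();
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Table table = connection.getTable(TableName.valueOf("mytable"))) {
      table.setOperationTimeout(60000);
      Put put = new Put(Bytes.toBytes("row1"));
      put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("col"), Bytes.toBytes("value"));
      table.put(put);
    }  // close() releases the Table's resources; the Connection closes last.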