hbase-dev mailing list archives

From "Jim Kellerman (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HBASE-1155) Verify that FSDataOutputStream.sync() works
Date Wed, 11 Feb 2009 01:19:59 GMT

    [ https://issues.apache.org/jira/browse/HBASE-1155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12672478#action_12672478 ]

Jim Kellerman commented on HBASE-1155:
--------------------------------------

Each record is approximately 1,024 bytes.
One block is either 1,048,576 bytes (1 MB) or 67,108,864 bytes (64 MB).

A 1 MB block holds 1,002 records:
1,026,048 bytes written, 22,528 bytes of overhead, or 22.48 bytes/record.

Expected overhead for 64 MB is 1,441,792 bytes (64 × 22,528).
Expected number of records for 64 MB is 64,128 (64 × 1,002).

A 64 MB block holds 64,157 records:
65,696,768 bytes written, 1,412,096 bytes of overhead,
or 22.01 bytes/record.

So overhead is ~ 22-23 bytes/record.
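For reference, the arithmetic above reduces to a trivial calculation. The
constants below are the measured numbers from the 64 MB run; the class name
is just illustrative.

public class OverheadCalc {
  public static void main(String[] args) {
    long blockSize = 67108864L;           // 64 MB block
    long records   = 64157L;              // records that fit in one block
    long recordLen = 1024L;               // ~1 KB per record
    long written   = records * recordLen; // 65,696,768 bytes
    long overhead  = blockSize - written; // 1,412,096 bytes
    System.out.printf("overhead = %d bytes, %.2f bytes/record%n",
        overhead, (double) overhead / records);
  }
}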

========================================

Without the patch, the best we can do is read up to the end of the last
full block. If we write 1,024 records into 1 MB blocks, we can read back
1,002 records (the number of records in one full block).

If we write 70,000 records into 64 MB blocks, we can read 64,157
records back (again, one full block's worth).

If less than one full block is written, we get back nothing; we only
recover up to the end of the last full block.
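The record counts above come from reading the file back after the writer
goes away. A minimal sketch of such a verification loop, assuming the test
file uses Text keys and values (the real HLog uses its own key/edit
classes, so this is an illustrative shape, not the actual test code):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class CountRecoverable {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    SequenceFile.Reader reader =
        new SequenceFile.Reader(fs, new Path(args[0]), conf);
    Text key = new Text();
    Text value = new Text();
    long count = 0;
    try {
      while (reader.next(key, value)) { // stops at the last readable record
        count++;
      }
    } finally {
      reader.close();
    }
    System.out.println("records recovered: " + count);
  }
}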

========================================

With the patch, 1 MB block size and no syncs:
- Writing 1,024 records, none are recovered
- Writing 1,200 records, 1,188 are recovered
- Writing 1,500 records, 1,499 are recovered
- Writing 1,000 records, 994 are recovered

There seems to be a problem with writing about 1,024 records to a file
with a 1 MB block size if there are no syncs. Writing more than 1,024
records works (e.g., writing 1,500 records, 1,499 are recoverable), as
does writing fewer (e.g., writing 1,000 records, 994 are recoverable;
writing 900 records, 870 are recoverable). So there appears to be a
problem with writing close to 1 MB of data into a file with a 1 MB block
size and no syncs; writing somewhat more or somewhat fewer than 1,024
records seems to work.

========================================

With the patch, it appears that the block size is irrelevant: it is
possible to read up to the last sync, at least with 64 MB blocks.

With a 64 MB block size:

- If the sync rate is 1 (sync after every write), it is possible to read every record written.
- With a sync rate of 100, it is possible to read up to the last multiple of 100 records written.

With a 1 MB block size:

- Cancelling the writer's lease seems to take a lot longer.
- Sometimes the lease never seems to be recovered (e.g., write 1,024
  records, sync every 100 writes, 1 MB block size).

More testing to do: try writing close to 64 MB with a 64 MB block size
and see whether it shows the same non-recoverability that writing ~1 MB
with a 1 MB block size does.
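For concreteness, the write side of these tests presumably looks something
like the sketch below: append ~1 KB records and sync the underlying stream
every syncRate writes. The class name, the payload shape, and the
SyncHack.syncUnderlying() helper (sketched after the quoted issue
description below) are all illustrative assumptions, not the actual test
code.

import java.util.Arrays;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class SyncTestWriter {
  // Write 'records' ~1 KB entries, syncing every 'syncRate' writes.
  public static void write(Path path, long blockSize,
                           int records, int syncRate) throws Exception {
    Configuration conf = new Configuration();
    conf.setLong("dfs.block.size", blockSize); // 1 MB or 64 MB in these tests
    FileSystem fs = FileSystem.get(conf);
    SequenceFile.Writer writer =
        SequenceFile.createWriter(fs, conf, path, Text.class, Text.class);
    byte[] payload = new byte[1000];
    Arrays.fill(payload, (byte) 'x');          // ~1 KB per record incl. key
    Text value = new Text(new String(payload));
    Text key = new Text();
    try {
      for (int i = 1; i <= records; i++) {
        key.set("row" + i);
        writer.append(key, value);
        if (syncRate > 0 && i % syncRate == 0) {
          SyncHack.syncUnderlying(writer);     // flush to HDFS
        }
      }
    } finally {
      writer.close();
    }
  }
}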

> Verify that FSDataOutputStream.sync() works
> -------------------------------------------
>
>                 Key: HBASE-1155
>                 URL: https://issues.apache.org/jira/browse/HBASE-1155
>             Project: Hadoop HBase
>          Issue Type: Bug
>          Components: master, regionserver
>    Affects Versions: 0.19.0
>            Reporter: Jim Kellerman
>            Assignee: Jim Kellerman
>             Fix For: 0.19.1, 0.20.0
>
>
> In order to guarantee that an HLog sync() flushes the data to HDFS, we will need
> to invoke FSDataOutputStream.sync() per HADOOP-4379.
> Currently, there is no access to the underlying FSDataOutputStream from
> SequenceFile.Writer, as it is a package private member.
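Until the underlying stream is exposed properly, one conceivable workaround
for the package-private member described above is reflection. A minimal
sketch, assuming the SequenceFile.Writer of this era keeps its stream in a
field named "out" (an assumption about Hadoop internals, not a public API):

import java.lang.reflect.Field;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.io.SequenceFile;

public class SyncHack {
  // Pull the package-private FSDataOutputStream out of a SequenceFile.Writer
  // and sync it, forcing the data out to HDFS per HADOOP-4379.
  public static void syncUnderlying(SequenceFile.Writer writer) throws Exception {
    Field f = SequenceFile.Writer.class.getDeclaredField("out"); // assumed field name
    f.setAccessible(true);
    FSDataOutputStream out = (FSDataOutputStream) f.get(writer);
    out.sync();
  }
}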

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

