hadoop-user mailing list archives

From Dongzhe Ma <mdzfi...@gmail.com>
Subject About HDFS's single-writer, multiple-reader model, any use case?
Date Mon, 02 Feb 2015 05:19:57 GMT
We know that HDFS employs a single-writer, multiple-reader model: only one
process can write to a file at a time, but multiple readers can work in
parallel, and new readers can even observe content that is still being
written. The stated reason for this design is to simplify concurrency
control. But is it really necessary to support reading during writing? Can
anyone share some use cases? Why not just lock the whole file, as other
POSIX file systems do (in terms of locking granularity)?
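For comparison, the alternative the question raises can be sketched with plain POSIX advisory locking (this is a hypothetical illustration, not HDFS code): with whole-file exclusive locks, a second writer is simply rejected while the first holds the lock, which is the coarse-grained behavior HDFS's lease mechanism also enforces for writers.

```python
import fcntl
import tempfile

# Hypothetical sketch (plain POSIX, not HDFS): whole-file advisory
# locking. While one writer holds an exclusive flock, a second
# writer's non-blocking lock attempt fails instead of proceeding.
with tempfile.NamedTemporaryFile(mode="w") as first_writer:
    fcntl.flock(first_writer, fcntl.LOCK_EX)   # first writer takes the lock

    second_writer = open(first_writer.name, "w")
    try:
        fcntl.flock(second_writer, fcntl.LOCK_EX | fcntl.LOCK_NB)
        blocked = False
    except BlockingIOError:
        blocked = True                         # lock held -> writer rejected
    finally:
        second_writer.close()

    print(blocked)
```

Note that POSIX `flock` locks are advisory, so readers that never ask for the lock can still read the file; a scheme that truly blocked readers during writes would need every reader to take a shared lock as well.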
