kudu-commits mailing list archives

From mpe...@apache.org
Subject [1/2] kudu git commit: log: simplify log append code path, reduce contention
Date Sun, 26 Mar 2017 00:36:58 GMT
Repository: kudu
Updated Branches:
  refs/heads/master 286de5392 -> 86da259a6


log: simplify log append code path, reduce contention

Previously, the log append path went through a tricky sequence:

- reserve a spot in the log queue and wake up the appender thread
- serialize the log entry
             - appender thread blocks waiting for the entry to be
               "ready"
- mark ready
             - appender thread continues and writes the data

I can't recall the original rationale for this design, but lock
contention profiles of a write-heavy workload show that the appender
threads waste CPU spinning while waiting for entries to become ready.

The new design is much more straightforward: the writer serializes and
appends an entry which is already "ready" to write, so once the appender
thread wakes up, it's ready to go without any spinning.

In addition to being easier to follow, the new code performs slightly better.
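The new handoff can be illustrated with a minimal sketch. All names here are assumptions for illustration; the real code uses Kudu's `BlockingQueue` and `LogEntryBatch`. The point is simply that the producer serializes before enqueueing, so the consumer can write each popped batch with no ready-wait.

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>

// Entry handed to the appender; already serialized when enqueued.
struct EntryBatch {
  std::string payload;
};

// Minimal blocking queue standing in for entry_batch_queue_.
class BlockingQueue {
 public:
  void Put(EntryBatch b) {
    std::lock_guard<std::mutex> l(mu_);
    q_.push(std::move(b));
    cv_.notify_one();
  }
  // Returns false once the queue is shut down and drained.
  bool Take(EntryBatch* out) {
    std::unique_lock<std::mutex> l(mu_);
    cv_.wait(l, [&] { return !q_.empty() || shutdown_; });
    if (q_.empty()) return false;
    *out = std::move(q_.front());
    q_.pop();
    return true;
  }
  void Shutdown() {
    std::lock_guard<std::mutex> l(mu_);
    shutdown_ = true;
    cv_.notify_all();
  }

 private:
  std::mutex mu_;
  std::condition_variable cv_;
  std::queue<EntryBatch> q_;
  bool shutdown_ = false;
};

std::vector<std::string> NewPathDemo() {
  BlockingQueue queue;
  std::vector<std::string> log;  // stands in for the on-disk segment
  std::thread appender([&] {
    EntryBatch b;
    while (queue.Take(&b)) log.push_back(b.payload);  // ready on arrival: no spinning
  });
  for (int i = 0; i < 3; i++) {
    EntryBatch b;
    b.payload = "entry-" + std::to_string(i);  // "Serialize()" happens before Put()
    queue.Put(std::move(b));
  }
  queue.Shutdown();
  appender.join();
  return log;
}
```

Because every batch on the queue is complete, the appender thread blocks only on the queue itself, which is exactly the contention the patch removes.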

with patch:

 Performance counter stats for './build/latest/bin/mt-log-test -num_batches_per_thread=100000 -log_segment_size_mb=64 -num_reader_threads=0 -noverify_log -num_writer_threads=16' (5 runs):

      59890.395565 task-clock                #    3.753 CPUs utilized            ( +-  0.51% )
         1,619,439 context-switches          #    0.027 M/sec                    ( +-  0.28% )
             4,565 cpu-migrations            #    0.076 K/sec                    ( +- 14.44% )
           136,512 page-faults               #    0.002 M/sec                    ( +-  0.16% )
   169,286,052,105 cycles                    #    2.827 GHz                      ( +-  0.57% )
   <not supported> stalled-cycles-frontend
   <not supported> stalled-cycles-backend
   116,801,292,905 instructions              #    0.69  insns per cycle          ( +-  0.04% )
    22,368,661,704 branches                  #  373.493 M/sec                    ( +-  0.07% )
        90,076,839 branch-misses             #    0.40% of all branches          ( +-  0.26% )

      15.958703468 seconds time elapsed                                          ( +-  1.49% )

without patch:

 Performance counter stats for './build/latest/bin/mt-log-test -num_batches_per_thread=100000 -log_segment_size_mb=64 -num_reader_threads=0 -noverify_log -num_writer_threads=16' (5 runs):

      60787.723645 task-clock                #    3.708 CPUs utilized            ( +-  0.34% )
         1,628,039 context-switches          #    0.027 M/sec                    ( +-  0.32% )
             4,259 cpu-migrations            #    0.070 K/sec                    ( +- 13.18% )
           136,827 page-faults               #    0.002 M/sec                    ( +-  0.09% )
   171,893,298,705 cycles                    #    2.828 GHz                      ( +-  0.33% )
   <not supported> stalled-cycles-frontend
   <not supported> stalled-cycles-backend
   117,302,850,747 instructions              #    0.68  insns per cycle          ( +-  0.05% )
    22,492,934,759 branches                  #  370.024 M/sec                    ( +-  0.08% )
        93,862,359 branch-misses             #    0.42% of all branches          ( +-  0.31% )

      16.392552672 seconds time elapsed                                          ( +-  0.93% )

Change-Id: I2a9154efbab2964a63745a70b47162e3f4200660
Reviewed-on: http://gerrit.cloudera.org:8080/6284
Tested-by: Kudu Jenkins
Reviewed-by: Todd Lipcon <todd@apache.org>


Project: http://git-wip-us.apache.org/repos/asf/kudu/repo
Commit: http://git-wip-us.apache.org/repos/asf/kudu/commit/4d2ea24b
Tree: http://git-wip-us.apache.org/repos/asf/kudu/tree/4d2ea24b
Diff: http://git-wip-us.apache.org/repos/asf/kudu/diff/4d2ea24b

Branch: refs/heads/master
Commit: 4d2ea24b45aa35a1a6f17e17ccec19c106a4478e
Parents: 286de53
Author: Todd Lipcon <todd@apache.org>
Authored: Mon Mar 6 21:14:29 2017 -0800
Committer: Todd Lipcon <todd@apache.org>
Committed: Fri Mar 24 23:48:17 2017 +0000

----------------------------------------------------------------------
 src/kudu/consensus/log.cc      | 131 +++++++++++-------------------------
 src/kudu/consensus/log.h       |  93 ++++++-------------------
 src/kudu/consensus/log_util.cc |   8 +--
 src/kudu/consensus/log_util.h  |   8 +--
 4 files changed, 64 insertions(+), 176 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/kudu/blob/4d2ea24b/src/kudu/consensus/log.cc
----------------------------------------------------------------------
diff --git a/src/kudu/consensus/log.cc b/src/kudu/consensus/log.cc
index d51e326..a145767 100644
--- a/src/kudu/consensus/log.cc
+++ b/src/kudu/consensus/log.cc
@@ -216,18 +216,17 @@ void Log::AppendThread::RunThread() {
 
     bool is_all_commits = true;
     for (LogEntryBatch* entry_batch : entry_batches) {
-      entry_batch->WaitForReady();
       TRACE_EVENT_FLOW_END0("log", "Batch", entry_batch);
       Status s = log_->DoAppend(entry_batch);
       if (PREDICT_FALSE(!s.ok())) {
         LOG_WITH_PREFIX(ERROR) << "Error appending to the log: " << s.ToString();
-        entry_batch->set_failed_to_append();
         // TODO(af): If a single transaction fails to append, should we
         // abort all subsequent transactions in this batch or allow
         // them to be appended? What about transactions in future
         // batches?
         if (!entry_batch->callback().is_null()) {
           entry_batch->callback().Run(s);
+          entry_batch->callback_.Reset();
         }
       }
       if (is_all_commits && entry_batch->type_ != COMMIT) {
@@ -251,8 +250,7 @@ void Log::AppendThread::RunThread() {
      VLOG_WITH_PREFIX(2) << "Synchronized " << entry_batches.size() << " entry batches";
       SCOPED_WATCH_STACK(100);
       for (LogEntryBatch* entry_batch : entry_batches) {
-        if (PREDICT_TRUE(!entry_batch->failed_to_append()
-                         && !entry_batch->callback().is_null())) {
+        if (PREDICT_TRUE(!entry_batch->callback().is_null())) {
           entry_batch->callback().Run(Status::OK());
         }
         // It's important to delete each batch as we see it, because
@@ -433,85 +431,54 @@ Status Log::RollOver() {
   return Status::OK();
 }
 
-Status Log::Reserve(LogEntryTypePB type,
-                    gscoped_ptr<LogEntryBatchPB> entry_batch,
-                    LogEntryBatch** reserved_entry) {
-  TRACE_EVENT0("log", "Log::Reserve");
-  DCHECK(reserved_entry != nullptr);
-  {
-    shared_lock<rw_spinlock> l(state_lock_.get_lock());
-    CHECK_EQ(kLogWriting, log_state_);
-  }
-
-  // In DEBUG builds, verify that all of the entries in the batch match the specified type.
-  // In non-debug builds the foreach loop gets optimized out.
-  #ifndef NDEBUG
-  for (const LogEntryPB& entry : entry_batch->entry()) {
-    DCHECK_EQ(entry.type(), type) << "Bad batch: " << SecureDebugString(*entry_batch);
-  }
-  #endif
-
-  int num_ops = entry_batch->entry_size();
-  gscoped_ptr<LogEntryBatch> new_entry_batch(new LogEntryBatch(
-      type, std::move(entry_batch), num_ops));
-  new_entry_batch->MarkReserved();
+Status Log::CreateBatchFromPB(LogEntryTypePB type,
+                              unique_ptr<LogEntryBatchPB> entry_batch_pb,
+                              unique_ptr<LogEntryBatch>* entry_batch) {
+  int num_ops = entry_batch_pb->entry_size();
+  unique_ptr<LogEntryBatch> new_entry_batch(new LogEntryBatch(
+      type, std::move(entry_batch_pb), num_ops));
+  new_entry_batch->Serialize();
+  TRACE("Serialized $0 byte log entry", new_entry_batch->total_size_bytes());
 
-  if (PREDICT_FALSE(!entry_batch_queue_.BlockingPut(new_entry_batch.get()))) {
-    return kLogShutdownStatus;
-  }
-
-  // Release the memory back to the caller: this will be freed when
-  // the entry is removed from the queue.
-  //
-  // TODO (perf) Use a ring buffer instead of a blocking queue and set
-  // 'reserved_entry' to a pre-allocated slot in the buffer.
-  *reserved_entry = new_entry_batch.release();
+  *entry_batch = std::move(new_entry_batch);
   return Status::OK();
 }
 
-void Log::AsyncAppend(LogEntryBatch* entry_batch, const StatusCallback& callback) {
+Status Log::AsyncAppend(unique_ptr<LogEntryBatch> entry_batch, const StatusCallback& callback) {
   TRACE_EVENT0("log", "Log::AsyncAppend");
-  {
-    shared_lock<rw_spinlock> l(state_lock_.get_lock());
-    CHECK_EQ(kLogWriting, log_state_);
-  }
 
-  entry_batch->Serialize();
   entry_batch->set_callback(callback);
-  TRACE("Serialized $0 byte log entry", entry_batch->total_size_bytes());
-  TRACE_EVENT_FLOW_BEGIN0("log", "Batch", entry_batch);
-  entry_batch->MarkReady();
+  TRACE_EVENT_FLOW_BEGIN0("log", "Batch", entry_batch.get());
+  if (PREDICT_FALSE(!entry_batch_queue_.BlockingPut(entry_batch.get()))) {
+    TRACE_EVENT_FLOW_END0("log", "Batch", entry_batch.get());
+    return kLogShutdownStatus;
+  }
+  entry_batch.release();
+  return Status::OK();
 }
 
 Status Log::AsyncAppendReplicates(const vector<ReplicateRefPtr>& replicates,
                                   const StatusCallback& callback) {
-  gscoped_ptr<LogEntryBatchPB> batch;
-  CreateBatchFromAllocatedOperations(replicates, &batch);
-
-  LogEntryBatch* reserved_entry_batch;
-  RETURN_NOT_OK(Reserve(REPLICATE, std::move(batch), &reserved_entry_batch));
-  // If we're able to reserve set the vector of replicate scoped ptrs in
-  // the LogEntryBatch. This will make sure there's a reference for each
-  // replicate while we're appending.
-  reserved_entry_batch->SetReplicates(replicates);
+  unique_ptr<LogEntryBatchPB> batch_pb = CreateBatchFromAllocatedOperations(replicates);
 
-  AsyncAppend(reserved_entry_batch, callback);
-  return Status::OK();
+  unique_ptr<LogEntryBatch> batch;
+  RETURN_NOT_OK(CreateBatchFromPB(REPLICATE, std::move(batch_pb), &batch));
+  batch->SetReplicates(replicates);
+  return AsyncAppend(std::move(batch), callback);
 }
 
 Status Log::AsyncAppendCommit(gscoped_ptr<consensus::CommitMsg> commit_msg,
                               const StatusCallback& callback) {
   MAYBE_FAULT(FLAGS_fault_crash_before_append_commit);
 
-  gscoped_ptr<LogEntryBatchPB> batch(new LogEntryBatchPB);
-  LogEntryPB* entry = batch->add_entry();
+  unique_ptr<LogEntryBatchPB> batch_pb(new LogEntryBatchPB);
+  LogEntryPB* entry = batch_pb->add_entry();
   entry->set_type(COMMIT);
   entry->set_allocated_commit(commit_msg.release());
 
-  LogEntryBatch* reserved_entry_batch;
-  RETURN_NOT_OK(Reserve(COMMIT, std::move(batch), &reserved_entry_batch));
-
-  AsyncAppend(reserved_entry_batch, callback);
+  unique_ptr<LogEntryBatch> entry_batch;
+  RETURN_NOT_OK(CreateBatchFromPB(COMMIT, std::move(batch_pb), &entry_batch));
+  AsyncAppend(std::move(entry_batch), callback);
   return Status::OK();
 }
 
@@ -696,12 +663,10 @@ Status Log::GetSegmentsToGCUnlocked(RetentionIndexes retention_indexes,
 }
 
 Status Log::Append(LogEntryPB* entry) {
-  gscoped_ptr<LogEntryBatchPB> entry_batch_pb(new LogEntryBatchPB);
+  unique_ptr<LogEntryBatchPB> entry_batch_pb(new LogEntryBatchPB);
   entry_batch_pb->mutable_entry()->AddAllocated(entry);
   LogEntryBatch entry_batch(entry->type(), std::move(entry_batch_pb), 1);
-  entry_batch.state_ = LogEntryBatch::kEntryReserved;
   entry_batch.Serialize();
-  entry_batch.state_ = LogEntryBatch::kEntryReady;
   Status s = DoAppend(&entry_batch);
   if (s.ok()) {
     s = Sync();
@@ -713,12 +678,12 @@ Status Log::Append(LogEntryPB* entry) {
 Status Log::WaitUntilAllFlushed() {
   // In order to make sure we empty the queue we need to use
   // the async api.
-  gscoped_ptr<LogEntryBatchPB> entry_batch(new LogEntryBatchPB);
+  unique_ptr<LogEntryBatchPB> entry_batch(new LogEntryBatchPB);
   entry_batch->add_entry()->set_type(log::FLUSH_MARKER);
-  LogEntryBatch* reserved_entry_batch;
-  RETURN_NOT_OK(Reserve(FLUSH_MARKER, std::move(entry_batch), &reserved_entry_batch));
+  unique_ptr<LogEntryBatch> reserved_entry_batch;
+  RETURN_NOT_OK(CreateBatchFromPB(FLUSH_MARKER, std::move(entry_batch), &reserved_entry_batch));
   Synchronizer s;
-  AsyncAppend(reserved_entry_batch, s.AsStatusCallback());
+  AsyncAppend(std::move(reserved_entry_batch), s.AsStatusCallback());
   return s.Wait();
 }
 
@@ -1022,14 +987,14 @@ Log::~Log() {
 }
 
 LogEntryBatch::LogEntryBatch(LogEntryTypePB type,
-                             gscoped_ptr<LogEntryBatchPB> entry_batch_pb, size_t count)
+                             unique_ptr<LogEntryBatchPB> entry_batch_pb,
+                             size_t count)
     : type_(type),
       entry_batch_pb_(std::move(entry_batch_pb)),
       total_size_bytes_(
           PREDICT_FALSE(count == 1 && entry_batch_pb_->entry(0).type() == FLUSH_MARKER) ?
           0 : entry_batch_pb_->ByteSize()),
-      count_(count),
-      state_(kEntryInitialized) {
+      count_(count) {
 }
 
 LogEntryBatch::~LogEntryBatch() {
@@ -1042,36 +1007,16 @@ LogEntryBatch::~LogEntryBatch() {
   }
 }
 
-void LogEntryBatch::MarkReserved() {
-  DCHECK_EQ(state_, kEntryInitialized);
-  ready_lock_.Lock();
-  state_ = kEntryReserved;
-}
-
 void LogEntryBatch::Serialize() {
-  DCHECK_EQ(state_, kEntryReserved);
-  buffer_.clear();
+  DCHECK_EQ(buffer_.size(), 0);
   // FLUSH_MARKER LogEntries are markers and are not serialized.
   if (PREDICT_FALSE(count() == 1 && entry_batch_pb_->entry(0).type() == FLUSH_MARKER)) {
-    state_ = kEntrySerialized;
     return;
   }
   buffer_.reserve(total_size_bytes_);
   pb_util::AppendToString(*entry_batch_pb_, &buffer_);
-  state_ = kEntrySerialized;
 }
 
-void LogEntryBatch::MarkReady() {
-  DCHECK_EQ(state_, kEntrySerialized);
-  state_ = kEntryReady;
-  ready_lock_.Unlock();
-}
-
-void LogEntryBatch::WaitForReady() {
-  ready_lock_.Lock();
-  DCHECK_EQ(state_, kEntryReady);
-  ready_lock_.Unlock();
-}
 
 }  // namespace log
 }  // namespace kudu

http://git-wip-us.apache.org/repos/asf/kudu/blob/4d2ea24b/src/kudu/consensus/log.h
----------------------------------------------------------------------
diff --git a/src/kudu/consensus/log.h b/src/kudu/consensus/log.h
index 5caa347..9048b38 100644
--- a/src/kudu/consensus/log.h
+++ b/src/kudu/consensus/log.h
@@ -58,24 +58,12 @@ typedef BlockingQueue<LogEntryBatch*, LogEntryBatchLogicalSize> LogEntryBatchQue
 // Kudu as a normal Write Ahead Log and also plays the role of persistent
 // storage for the consensus state machine.
 //
-// Note: This class is not thread safe, the caller is expected to synchronize
-// Log::Reserve() and Log::Append() calls.
-//
 // Log uses group commit to improve write throughput and latency
-// without compromising ordering and durability guarantees.
-//
-// To add operations to the log, the caller must obtain the lock and
-// call Reserve() with the collection of operations to be added. Then,
-// the caller may release the lock and call AsyncAppend(). Reserve()
-// reserves a slot on a queue for the log entry; AsyncAppend()
-// indicates that the entry in the slot is safe to write to disk and
-// adds a callback that will be invoked once the entry is written and
-// synchronized to disk.
-//
-// For sample usage see mt-log-test.cc
+// without compromising ordering and durability guarantees. A single background
+// thread per Log instance is responsible for accumulating pending writes
+// and flushing them to the log.
 //
-// Methods on this class are _not_ thread-safe and must be externally
-// synchronized unless otherwise noted.
+// This class is thread-safe unless otherwise noted.
 //
 // Note: The Log needs to be Close()d before any log-writing class is
 // destroyed, otherwise the Log might hold references to these classes
@@ -99,24 +87,6 @@ class Log : public RefCountedThreadSafe<Log> {
 
   ~Log();
 
-  // Reserves a spot in the log's queue for 'entry_batch'.
-  //
-  // 'reserved_entry' is initialized by this method and any resources
-  // associated with it will be released in AsyncAppend().  In order
-  // to ensure correct ordering of operations across multiple threads,
-  // calls to this method must be externally synchronized.
-  //
-  // WARNING: the caller _must_ call AsyncAppend() or else the log
-  // will "stall" and will never be able to make forward progress.
-  Status Reserve(LogEntryTypePB type,
-                 gscoped_ptr<LogEntryBatchPB> entry_batch,
-                 LogEntryBatch** reserved_entry);
-
-  // Asynchronously appends 'entry_batch' to the log. Once the append
-  // completes and is synced, 'callback' will be invoked.
-  void AsyncAppend(LogEntryBatch* entry_batch,
-                   const StatusCallback& callback);
-
   // Synchronously append a new entry to the log.
   // Log does not take ownership of the passed 'entry'.
   Status Append(LogEntryPB* entry);
@@ -281,6 +251,15 @@ class Log : public RefCountedThreadSafe<Log> {
   // Make segments roll over.
   Status RollOver();
 
+  static Status CreateBatchFromPB(LogEntryTypePB type,
+                                  std::unique_ptr<LogEntryBatchPB> entry_batch_pb,
+                                  std::unique_ptr<LogEntryBatch>* entry_batch);
+
+  // Asynchronously appends 'entry_batch' to the log. Once the append
+  // completes and is synced, 'callback' will be invoked.
+  Status AsyncAppend(std::unique_ptr<LogEntryBatch> entry_batch,
+                     const StatusCallback& callback);
+
   // Writes the footer and closes the current segment.
   Status CloseCurrentSegment();
 
@@ -300,9 +279,7 @@ class Log : public RefCountedThreadSafe<Log> {
   Status PreAllocateNewSegment();
 
   // Writes serialized contents of 'entry' to the log. Called inside
-  // AppenderThread. If 'caller_owns_operation' is true, then the
-  // 'operation' field of the entry will be released after the entry
-  // is appended.
+  // AppenderThread.
   Status DoAppend(LogEntryBatch* entry_batch);
 
   // Update footer_builder_ to reflect the log indexes seen in 'batch'.
@@ -391,8 +368,8 @@ class Log : public RefCountedThreadSafe<Log> {
   // The maximum segment size, in bytes.
   uint64_t max_segment_size_;
 
-  // The queue used to communicate between the thread calling
-  // Reserve() and the Log Appender thread
+  // The queue used to communicate between the threads appending operations
+  // and the thread which actually appends them to the log.
   LogEntryBatchQueue entry_batch_queue_;
 
   // Thread writing to the log
@@ -468,7 +445,8 @@ class LogEntryBatch {
   friend class MultiThreadedLogTest;
 
   LogEntryBatch(LogEntryTypePB type,
-                gscoped_ptr<LogEntryBatchPB> entry_batch_pb, size_t count);
+                std::unique_ptr<LogEntryBatchPB> entry_batch_pb,
+                size_t count);
 
   // Serializes contents of the entry to an internal buffer.
   void Serialize();
@@ -485,27 +463,10 @@ class LogEntryBatch {
     return callback_;
   }
 
-  bool failed_to_append() const {
-    return state_ == kEntryFailedToAppend;
-  }
-
-  void set_failed_to_append() {
-    state_ = kEntryFailedToAppend;
-  }
-
-  // Mark the entry as reserved, but not yet ready to write to the log.
-  void MarkReserved();
-
-  // Mark the entry as ready to write to log.
-  void MarkReady();
-
-  // Wait (currently, by spinning on ready_lock_) until ready.
-  void WaitForReady();
 
   // Returns a Slice representing the serialized contents of the
   // entry.
   Slice data() const {
-    DCHECK_EQ(state_, kEntryReady);
     return Slice(buffer_);
   }
 
@@ -533,7 +494,7 @@ class LogEntryBatch {
   const LogEntryTypePB type_;
 
   // Contents of the log entries that will be written to disk.
-  gscoped_ptr<LogEntryBatchPB> entry_batch_pb_;
+  std::unique_ptr<LogEntryBatchPB> entry_batch_pb_;
 
    // Total size in bytes of all entries
   const uint32_t total_size_bytes_;
@@ -551,26 +512,10 @@ class LogEntryBatch {
   // synced to disk.
   StatusCallback callback_;
 
-  // Used to coordinate the synchronizer thread and the caller
-  // thread: this lock starts out locked, and is unlocked by the
-  // caller thread (i.e., inside AppendThread()) once the entry is
-  // fully initialized (once the callback is set and data is
-  // serialized)
-  base::SpinLock ready_lock_;
-
   // Buffer to which 'phys_entries_' are serialized by call to
   // 'Serialize()'
   faststring buffer_;
 
-  enum LogEntryState {
-    kEntryInitialized,
-    kEntryReserved,
-    kEntrySerialized,
-    kEntryReady,
-    kEntryFailedToAppend
-  };
-  LogEntryState state_;
-
   DISALLOW_COPY_AND_ASSIGN(LogEntryBatch);
 };
 

http://git-wip-us.apache.org/repos/asf/kudu/blob/4d2ea24b/src/kudu/consensus/log_util.cc
----------------------------------------------------------------------
diff --git a/src/kudu/consensus/log_util.cc b/src/kudu/consensus/log_util.cc
index e3f4413..c31c58a 100644
--- a/src/kudu/consensus/log_util.cc
+++ b/src/kudu/consensus/log_util.cc
@@ -842,16 +842,16 @@ Status WritableLogSegment::WriteEntryBatch(const Slice& data,
 }
 
 
-void CreateBatchFromAllocatedOperations(const vector<consensus::ReplicateRefPtr>& msgs,
-                                        gscoped_ptr<LogEntryBatchPB>* batch) {
-  gscoped_ptr<LogEntryBatchPB> entry_batch(new LogEntryBatchPB);
+unique_ptr<LogEntryBatchPB> CreateBatchFromAllocatedOperations(
+    const vector<consensus::ReplicateRefPtr>& msgs) {
+  unique_ptr<LogEntryBatchPB> entry_batch(new LogEntryBatchPB);
   entry_batch->mutable_entry()->Reserve(msgs.size());
   for (const auto& msg : msgs) {
     LogEntryPB* entry_pb = entry_batch->add_entry();
     entry_pb->set_type(log::REPLICATE);
     entry_pb->set_allocated_replicate(msg->get());
   }
-  batch->reset(entry_batch.release());
+  return entry_batch;
 }
 
 bool IsLogFileName(const string& fname) {

http://git-wip-us.apache.org/repos/asf/kudu/blob/4d2ea24b/src/kudu/consensus/log_util.h
----------------------------------------------------------------------
diff --git a/src/kudu/consensus/log_util.h b/src/kudu/consensus/log_util.h
index 54c8698..c52e89b 100644
--- a/src/kudu/consensus/log_util.h
+++ b/src/kudu/consensus/log_util.h
@@ -473,12 +473,10 @@ class WritableLogSegment {
   DISALLOW_COPY_AND_ASSIGN(WritableLogSegment);
 };
 
-// Sets 'batch' to a newly created batch that contains the pre-allocated
+// Return a newly created batch that contains the pre-allocated
 // ReplicateMsgs in 'msgs'.
-// We use C-style passing here to avoid having to allocate a vector
-// in some hot paths.
-void CreateBatchFromAllocatedOperations(const std::vector<consensus::ReplicateRefPtr>& msgs,
-                                        gscoped_ptr<LogEntryBatchPB>* batch);
+std::unique_ptr<LogEntryBatchPB> CreateBatchFromAllocatedOperations(
+    const std::vector<consensus::ReplicateRefPtr>& msgs);
 
 // Checks if 'fname' is a correctly formatted name of log segment file.
 bool IsLogFileName(const std::string& fname);

