cassandra-commits mailing list archives

From marc...@apache.org
Subject [3/3] cassandra git commit: Merge branch 'cassandra-3.0' into trunk
Date Wed, 31 Aug 2016 07:50:33 GMT
Merge branch 'cassandra-3.0' into trunk


Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/8a3f0e11
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/8a3f0e11
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/8a3f0e11

Branch: refs/heads/trunk
Commit: 8a3f0e11fecde964730cd67d6376660ae562b208
Parents: 0628099 ab98b11
Author: Marcus Eriksson <marcuse@apache.org>
Authored: Wed Aug 31 09:45:35 2016 +0200
Committer: Marcus Eriksson <marcuse@apache.org>
Committed: Wed Aug 31 09:45:35 2016 +0200

----------------------------------------------------------------------
 CHANGES.txt                                     |  1 +
 .../cassandra/tools/SSTableMetadataViewer.java  | 40 +++++++++++++++++---
 2 files changed, 36 insertions(+), 5 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/cassandra/blob/8a3f0e11/CHANGES.txt
----------------------------------------------------------------------
diff --cc CHANGES.txt
index 6b1acb5,4b77e4d..a0f6055
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,59 -1,5 +1,60 @@@
 -3.0.9
 +3.10
 + * Tracing payload is passed through newSession(..) (CASSANDRA-11706)
 + * avoid deleting non existing sstable files and improve related log messages (CASSANDRA-12261)
 + * json/yaml output format for nodetool compactionhistory (CASSANDRA-12486)
 + * Retry all internode messages once after a connection is
 +   closed and reopened (CASSANDRA-12192)
 + * Add support to rebuild from targeted replica (CASSANDRA-9875)
 + * Add sequence distribution type to cassandra stress (CASSANDRA-12490)
 + * "SELECT * FROM foo LIMIT ;" does not error out (CASSANDRA-12154)
 + * Define executeLocally() at the ReadQuery Level (CASSANDRA-12474)
 + * Extend read/write failure messages with a map of replica addresses
 +   to error codes in the v5 native protocol (CASSANDRA-12311)
 + * Fix rebuild of SASI indexes with existing index files (CASSANDRA-12374)
 + * Let DatabaseDescriptor not implicitly startup services (CASSANDRA-9054, 12550)
 + * Fix clustering indexes in presence of static columns in SASI (CASSANDRA-12378)
 + * Fix queries on columns with reversed type on SASI indexes (CASSANDRA-12223)
 + * Added slow query log (CASSANDRA-12403)
 + * Count full coordinated request against timeout (CASSANDRA-12256)
 + * Allow TTL with null value on insert and update (CASSANDRA-12216)
 + * Make decommission operation resumable (CASSANDRA-12008)
 + * Add support to one-way targeted repair (CASSANDRA-9876)
 + * Remove clientutil jar (CASSANDRA-11635)
 + * Fix compaction throughput throttle (CASSANDRA-12366)
 + * Delay releasing Memtable memory on flush until PostFlush has finished running (CASSANDRA-12358)
 + * Cassandra stress should dump all setting on startup (CASSANDRA-11914)
 + * Make it possible to compact a given token range (CASSANDRA-10643)
 + * Allow updating DynamicEndpointSnitch properties via JMX (CASSANDRA-12179)
 + * Collect metrics on queries by consistency level (CASSANDRA-7384)
 + * Add support for GROUP BY to SELECT statement (CASSANDRA-10707)
 + * Deprecate memtable_cleanup_threshold and update default for memtable_flush_writers (CASSANDRA-12228)
 + * Upgrade to OHC 0.4.4 (CASSANDRA-12133)
 + * Add version command to cassandra-stress (CASSANDRA-12258)
 + * Create compaction-stress tool (CASSANDRA-11844)
 + * Garbage-collecting compaction operation and schema option (CASSANDRA-7019)
 + * Add beta protocol flag for v5 native protocol (CASSANDRA-12142)
 + * Support filtering on non-PRIMARY KEY columns in the CREATE
 +   MATERIALIZED VIEW statement's WHERE clause (CASSANDRA-10368)
 + * Unify STDOUT and SYSTEMLOG logback format (CASSANDRA-12004)
 + * COPY FROM should raise error for non-existing input files (CASSANDRA-12174)
 + * Faster write path (CASSANDRA-12269)
 + * Option to leave omitted columns in INSERT JSON unset (CASSANDRA-11424)
 + * Support json/yaml output in nodetool tpstats (CASSANDRA-12035)
 + * Expose metrics for successful/failed authentication attempts (CASSANDRA-10635)
 + * Prepend snapshot name with "truncated" or "dropped" when a snapshot
 +   is taken before truncating or dropping a table (CASSANDRA-12178)
 + * Optimize RestrictionSet (CASSANDRA-12153)
 + * cqlsh does not automatically downgrade CQL version (CASSANDRA-12150)
 + * Omit (de)serialization of state variable in UDAs (CASSANDRA-9613)
 + * Create a system table to expose prepared statements (CASSANDRA-8831)
 + * Reuse DataOutputBuffer from ColumnIndex (CASSANDRA-11970)
 + * Remove DatabaseDescriptor dependency from SegmentedFile (CASSANDRA-11580)
 + * Add supplied username to authentication error messages (CASSANDRA-12076)
 + * Remove pre-startup check for open JMX port (CASSANDRA-12074)
 + * Remove compaction Severity from DynamicEndpointSnitch (CASSANDRA-11738)
 + * Restore resumable hints delivery (CASSANDRA-11960)
 +Merged from 3.0:
+  * Add option to state current gc_grace_seconds to tools/bin/sstablemetadata (CASSANDRA-12208)
  * Fix file system race condition that may cause LogAwareFileLister to fail to classify
    files (CASSANDRA-11889)
   * Fix file handle leaks due to simultaneous compaction/repair and
     listing snapshots, calculating snapshot sizes, or making schema
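The CASSANDRA-12208 entry above lets tools/bin/sstablemetadata subtract a user-supplied gc_grace_seconds from "now" before estimating droppable tombstones (see the second hunk below). A minimal, hypothetical sketch of that cutoff arithmetic, not Cassandra's actual API:

```java
// Hedged sketch, not Cassandra's API: illustrates the CASSANDRA-12208 change,
// where sstablemetadata subtracts a supplied gc_grace_seconds from the current
// time before asking for the droppable-tombstone ratio.
public class DroppableCutoff
{
    // A tombstone is considered droppable once its local deletion time is
    // older than this cutoff; gcGraceSeconds = 0 reproduces the old behaviour.
    static int cutoff(int nowInSeconds, int gcGraceSeconds)
    {
        return nowInSeconds - gcGraceSeconds;
    }

    public static void main(String[] args)
    {
        int now = (int) (System.currentTimeMillis() / 1000);
        System.out.println(cutoff(now, 0) == now);            // pre-patch behaviour
        System.out.println(cutoff(now, 864_000) < now);       // 10-day gc_grace_seconds shifts the cutoff back
    }
}
```

Passing a larger gc_grace_seconds moves the cutoff further into the past, so fewer tombstones count as droppable in the tool's estimate.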

http://git-wip-us.apache.org/repos/asf/cassandra/blob/8a3f0e11/src/java/org/apache/cassandra/tools/SSTableMetadataViewer.java
----------------------------------------------------------------------
diff --cc src/java/org/apache/cassandra/tools/SSTableMetadataViewer.java
index acad0c5,6076e32..b405fad
--- a/src/java/org/apache/cassandra/tools/SSTableMetadataViewer.java
+++ b/src/java/org/apache/cassandra/tools/SSTableMetadataViewer.java
@@@ -17,28 -17,24 +17,35 @@@
   */
  package org.apache.cassandra.tools;
  
 -import java.io.File;
 -import java.io.IOException;
 -import java.io.PrintStream;
 +import java.io.*;
 +import java.nio.ByteBuffer;
 +import java.util.Arrays;
  import java.util.EnumSet;
 +import java.util.List;
  import java.util.Map;
 +import java.util.stream.Collectors;
  
 +import org.apache.cassandra.db.DecoratedKey;
 +import org.apache.cassandra.db.SerializationHeader;
 +import org.apache.cassandra.db.marshal.AbstractType;
 +import org.apache.cassandra.db.marshal.UTF8Type;
 +import org.apache.cassandra.db.rows.EncodingStats;
 +import org.apache.cassandra.dht.IPartitioner;
 +import org.apache.cassandra.io.compress.CompressionMetadata;
 +import org.apache.cassandra.io.sstable.Component;
 +import org.apache.cassandra.io.sstable.Descriptor;
 +import org.apache.cassandra.io.sstable.IndexSummary;
 +import org.apache.cassandra.io.sstable.metadata.*;
 +import org.apache.cassandra.utils.FBUtilities;
 +import org.apache.cassandra.utils.Pair;
+ import org.apache.commons.cli.CommandLine;
+ import org.apache.commons.cli.CommandLineParser;
+ import org.apache.commons.cli.HelpFormatter;
+ import org.apache.commons.cli.Option;
 -import org.apache.commons.cli.OptionBuilder;
+ import org.apache.commons.cli.Options;
+ import org.apache.commons.cli.ParseException;
+ import org.apache.commons.cli.PosixParser;
  
 -import org.apache.cassandra.io.sstable.Descriptor;
 -import org.apache.cassandra.io.sstable.metadata.*;
 -
  /**
   * Shows the contents of sstable metadata
   */
@@@ -83,43 -90,16 +106,43 @@@ public class SSTableMetadataViewer
                  {
                      out.printf("Minimum timestamp: %s%n", stats.minTimestamp);
                      out.printf("Maximum timestamp: %s%n", stats.maxTimestamp);
 +                    out.printf("SSTable min local deletion time: %s%n", stats.minLocalDeletionTime);
                      out.printf("SSTable max local deletion time: %s%n", stats.maxLocalDeletionTime);
 -                    out.printf("Compression ratio: %s%n", stats.compressionRatio);
 +                    out.printf("Compressor: %s%n", compression != null ? compression.compressor().getClass().getName() : "-");
 +                    if (compression != null)
 +                        out.printf("Compression ratio: %s%n", stats.compressionRatio);
 +                    out.printf("TTL min: %s%n", stats.minTTL);
 +                    out.printf("TTL max: %s%n", stats.maxTTL);
 +
 +                    if (validation != null && header != null)
 +                        printMinMaxToken(descriptor, FBUtilities.newPartitioner(descriptor), header.getKeyType(), out);
 +
 +                    if (header != null && header.getClusteringTypes().size() == stats.minClusteringValues.size())
 +                    {
 +                        List<AbstractType<?>> clusteringTypes = header.getClusteringTypes();
 +                        List<ByteBuffer> minClusteringValues = stats.minClusteringValues;
 +                        List<ByteBuffer> maxClusteringValues = stats.maxClusteringValues;
 +                        String[] minValues = new String[clusteringTypes.size()];
 +                        String[] maxValues = new String[clusteringTypes.size()];
 +                        for (int i = 0; i < clusteringTypes.size(); i++)
 +                        {
 +                            minValues[i] = clusteringTypes.get(i).getString(minClusteringValues.get(i));
 +                            maxValues[i] = clusteringTypes.get(i).getString(maxClusteringValues.get(i));
 +                        }
 +                        out.printf("minClustringValues: %s%n", Arrays.toString(minValues));
 +                        out.printf("maxClustringValues: %s%n", Arrays.toString(maxValues));
 +                    }
-                     out.printf("Estimated droppable tombstones: %s%n", stats.getEstimatedDroppableTombstoneRatio((int) (System.currentTimeMillis() / 1000)));
+                     out.printf("Estimated droppable tombstones: %s%n", stats.getEstimatedDroppableTombstoneRatio((int) (System.currentTimeMillis() / 1000) - gcgs));
                      out.printf("SSTable Level: %d%n", stats.sstableLevel);
                      out.printf("Repaired at: %d%n", stats.repairedAt);
 -                    out.printf("Replay positions covered: %s\n", stats.commitLogIntervals);
 +                    out.printf("Replay positions covered: %s%n", stats.commitLogIntervals);
 +                    out.printf("totalColumnsSet: %s%n", stats.totalColumnsSet);
 +                    out.printf("totalRows: %s%n", stats.totalRows);
                      out.println("Estimated tombstone drop times:");
 -                    for (Map.Entry<Double, Long> entry : stats.estimatedTombstoneDropTime.getAsMap().entrySet())
 +
 +                    for (Map.Entry<Number, long[]> entry : stats.estimatedTombstoneDropTime.getAsMap().entrySet())
                      {
 -                        out.printf("%-10s:%10s%n",entry.getKey().intValue(), entry.getValue());
 +                        out.printf("%-10s:%10s%n",entry.getKey().intValue(), entry.getValue()[0]);
                      }
                      printHistograms(stats, out);
                  }
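The clustering-values loop in the hunk above renders each min/max ByteBuffer through its column's AbstractType.getString(). A self-contained stand-in for that loop, with plain UTF-8 decoding substituting for Cassandra's type system (an assumption for illustration only):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.List;

// Hedged stand-in for the loop above: Cassandra converts each clustering
// ByteBuffer via AbstractType.getString(); UTF-8 decoding plays that role here.
public class ClusteringValues
{
    static String[] render(List<ByteBuffer> values)
    {
        String[] out = new String[values.size()];
        for (int i = 0; i < values.size(); i++)
            // duplicate() so the shared buffer's position is left untouched
            out[i] = StandardCharsets.UTF_8.decode(values.get(i).duplicate()).toString();
        return out;
    }

    public static void main(String[] args)
    {
        List<ByteBuffer> min = Arrays.asList(
                ByteBuffer.wrap("2016".getBytes(StandardCharsets.UTF_8)),
                ByteBuffer.wrap("aug".getBytes(StandardCharsets.UTF_8)));
        System.out.println(Arrays.toString(render(min))); // -> [2016, aug]
    }
}
```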

