
Study Guide — Cassandra Reads and Writes

Reads and Writes in Cassandra

Cassandra has a peer-to-peer, read/write-anywhere architecture: any client can connect to any node in any data center and read or write the data it needs, with all writes automatically partitioned and replicated throughout the cluster.

To access data in Cassandra, it is key to understand how C* stores data. Two concepts are central to reads and writes: the hinted handoff feature, and Cassandra’s “ACID without the C” guarantees (writes are Atomic, Isolated, and Durable, while consistency is tunable). In C*, consistency refers to how up to date and synchronized a row of data is across all of its replicas.

Writes in Cassandra

Here are some key terms to know to understand the write path.

Partition Key — each table has a partition key. It determines which node in the cluster stores a given row.

Commit Log — the transaction log. It’s used for recovery in case of system failures. It’s an append-only file and provides durability.

Memtable — an in-memory cache that holds the in-memory copy of the data. Each node has a memtable per CQL table. The memtable accumulates writes and serves reads for data that has not yet been flushed to disk.

SSTable — the final destination of data in C*. SSTables are actual files on disk and are immutable.

Compaction — the periodic process of merging multiple SSTables into a single SSTable, done primarily to optimize read operations.

How is data written?

  1. Cassandra appends the write to the commit log on disk. The commit log receives every write made to a Cassandra node, and these durable writes survive even if the node loses power.
  2. Cassandra also stores the data in an in-memory structure called the memtable. The memtable is a write-back cache of data partitions that Cassandra looks up by key.
  3. The memtable stores writes in sorted order until it reaches a configurable limit, at which point it is flushed.
  4. The flush writes the data to a sorted strings table (SSTable) on disk. Writes are atomic at the row level, meaning all columns of a row are written or updated, or none are.
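The four steps above can be sketched as a toy model in Python. This is an illustrative simulation under simplifying assumptions (the memtable limit here counts entries, not bytes), not Cassandra’s actual implementation:

```python
class Node:
    """Toy model of a single Cassandra node's write path (illustrative only)."""

    def __init__(self, memtable_limit=3):
        self.commit_log = []            # append-only log; survives a crash (on disk)
        self.memtable = {}              # in-memory write-back cache, looked up by key
        self.sstables = []              # immutable, sorted "files" on disk
        self.memtable_limit = memtable_limit

    def write(self, key, value):
        self.commit_log.append((key, value))   # 1. append to the commit log
        self.memtable[key] = value             # 2. store in the memtable
        if len(self.memtable) >= self.memtable_limit:
            self.flush()                       # 3. flush when the limit is reached

    def flush(self):
        # 4. the memtable becomes an immutable SSTable, sorted by key
        self.sstables.append(sorted(self.memtable.items()))
        self.memtable = {}

node = Node(memtable_limit=2)
node.write("b", 1)
node.write("a", 2)              # hits the limit, triggering a flush
print(node.sstables)            # [[('a', 2), ('b', 1)]]
print(node.memtable)            # {}
```

Note that the SSTable comes out sorted by key even though the writes arrived out of order; this sorted layout is what makes later merging and lookups efficient.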

Side note — Atomicity is one of the ACID (Atomicity, Consistency, Isolation, Durability) transaction properties. It means an operation cannot be divided or split into smaller parts: it either happens entirely or not at all.

When does Cassandra flush the memtable to disk (SSTable)?

Cassandra flushes the memtable to disk (creating an SSTable) when any of the following events occurs.

  1. When the memtable contents exceed a configurable threshold, controlled by memtable_total_space_in_mb
  2. When the commit log is full, controlled by commitlog_total_space_in_mb
  3. When the nodetool flush command is issued

Each flush results in a new SSTable. If over time we end up with a huge number of SSTables, read performance suffers, because a read may have to walk through multiple SSTables to find a record. So how is this fixed?

Cassandra periodically performs a background operation called compaction to merge the SSTables in order to optimize the read operation.
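The essence of compaction is a merge of sorted runs. A minimal sketch, assuming later SSTables hold newer writes (real compaction compares per-cell timestamps and also purges tombstones):

```python
def compact(sstables):
    """Merge several sorted SSTables into one, keeping the newest value per key.

    `sstables` is ordered oldest-first, so later entries overwrite earlier ones.
    """
    merged = {}
    for sstable in sstables:
        for key, value in sstable:
            merged[key] = value          # newer SSTable wins on duplicate keys
    return sorted(merged.items())        # the result is a single sorted SSTable

old = [("a", 1), ("c", 3)]
new = [("a", 9), ("b", 2)]
print(compact([old, new]))   # [('a', 9), ('b', 2), ('c', 3)]
```

After compaction, a read only has to consult one SSTable instead of several, which is exactly the optimization the text describes.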

Hinted Handoffs

On occasion, a node becomes unresponsive while data is being written due to hardware problems, network issues or overloaded nodes.

Hinted Handoff is an optional part of writes whose primary purpose is to provide extreme write availability when consistency is not required. It can reduce the time required for a temporarily failed node to become consistent again with live ones.

How does it work?

If a write is made and a replica node for the row is known to be down, C* will write a hint to a live replica node letting it know that the write needs to be replayed to the unavailable node. If no replica nodes are alive for this row key, the coordinator node will write the hint locally. Once a node holding a hint discovers via gossip that the failed node has recovered, it sends the data row corresponding to each hint to the target.
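The mechanism can be sketched as follows. This is an illustrative toy (the `Coordinator` class and dict-as-replica are assumptions for the sketch; real hints live in a local system table and carry TTLs):

```python
class Coordinator:
    """Toy coordinator that stores hints for down replicas (illustrative only)."""

    def __init__(self):
        self.hints = {}     # target node name -> list of pending (key, value) writes

    def write(self, replica, up, key, value, store):
        if up:
            store[key] = value                                        # normal write
        else:
            self.hints.setdefault(replica, []).append((key, value))   # store a hint

    def on_recovered(self, replica, store):
        # gossip reports `replica` is back up: replay every hint held for it
        for key, value in self.hints.pop(replica, []):
            store[key] = value

coord = Coordinator()
replica_store = {}
coord.write("node3", up=False, key="k", value="v", store=replica_store)
print(replica_store)            # {} — the node was down, so a hint was stored
coord.on_recovered("node3", replica_store)
print(replica_store)            # {'k': 'v'} — the hint was replayed
```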

Strategy for Writes

  1. ANY — a write must succeed on any available node
  2. ONE — a write must succeed on any node responsible for that row (either primary or replica)
  3. QUORUM — a write must succeed on a quorum of replica nodes (replication_factor / 2 + 1)
  4. LOCAL_QUORUM — a write must succeed on a quorum of replica nodes in the same data center as the coordinator node
  5. EACH_QUORUM — a write must succeed on a quorum of replica nodes in all data centers
  6. ALL — a write must succeed on all replica nodes for a row key
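The quorum formula above uses integer division, which is easy to check directly:

```python
def quorum(replication_factor):
    """Quorum size for QUORUM reads/writes: replication_factor / 2 + 1 (integer math)."""
    return replication_factor // 2 + 1

print(quorum(3))   # 2 — two of three replicas must acknowledge
print(quorum(5))   # 3 — three of five replicas must acknowledge
```

Note that a quorum of an odd replication factor tolerates the same number of down replicas as the next even one (quorum(3) = 2 and quorum(4) = 3 both tolerate one failure), which is why odd replication factors are common.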

Reads in Cassandra

The read path is a little more complicated than the write path. In addition to the write-path terms above, here are additional terms that are key to understanding reads.

  1. Row Cache — a memory cache which stores recently read rows (records). It’s an optional component in C*.
  2. Bloom Filters — indicate whether a partition key may exist in the corresponding SSTable.
  3. (Partition) Key Cache — maps recently read partition keys to specific SSTable offsets.
  4. Partition Indexes — sorted partition keys mapped to their SSTable offsets. Partition indexes are created as part of SSTable creation and reside on disk.
  5. Partition Summaries — an off-heap, in-memory sampling of the partition indexes, used to speed up access to the index on disk.
  6. Compression Offsets — keep the offset mapping information for compressed blocks. By default all tables in C* are compressed; when C* needs to read data, it looks in the in-memory compression offset map and unpacks the data chunks.
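Of these structures, the Bloom filter is the one with the most distinctive behavior: it can say “definitely not here” but never “definitely here.” A toy version, assuming salted SHA-256 hashes and a bit array packed into an integer (real implementations size the filter from the expected key count and a target false-positive rate):

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter (illustrative, not Cassandra's implementation)."""

    def __init__(self, size=64, num_hashes=3):
        self.size = size
        self.num_hashes = num_hashes
        self.bits = 0                    # bit array packed into one integer

    def _positions(self, key):
        # derive num_hashes bit positions from salted SHA-256 digests
        for salt in range(self.num_hashes):
            digest = hashlib.sha256(f"{salt}:{key}".encode()).digest()
            yield int.from_bytes(digest[:4], "big") % self.size

    def add(self, key):
        for pos in self._positions(key):
            self.bits |= 1 << pos

    def might_contain(self, key):
        # False means "definitely absent"; True means "possibly present"
        return all(self.bits & (1 << pos) for pos in self._positions(key))

bf = BloomFilter()
bf.add("user:42")
print(bf.might_contain("user:42"))   # True — an added key always matches
```

A negative answer lets the read path skip that SSTable entirely, which is why a false positive only costs a wasted disk lookup, never a missed row.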

How does Cassandra read data?

  1. Cassandra first checks whether the in-memory memtable still contains the data. The memtable is an in-memory read/write cache for each column family.
  2. If not found there, Cassandra reads the SSTables for that column family.
  3. To optimize reads…
  • Cassandra uses a bloom filter for each SSTable to determine whether that SSTable may contain the key
  • Cassandra uses the index in each SSTable to locate the data quickly
  • Cassandra compaction merges SSTables when their number reaches a certain threshold
  • Cassandra reads are slower than writes, but still very fast

  4. Cassandra depends on the OS to cache SSTable files
  • Do not configure C* to use up most of the physical memory
  • Some deployments configure C* to use 50% of physical memory so the rest can be used for the file cache
  • However, memory configuration is sensitive to data access patterns and volume
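The lookup order above (memtable first, then SSTables newest-to-oldest, skipping SSTables whose Bloom filter rules the key out) can be sketched as follows. Plain Python sets stand in for Bloom filters here, an assumption for the sketch:

```python
def read(key, memtable, sstables, bloom_filters):
    """Toy read path: memtable first, then SSTables newest-to-oldest."""
    if key in memtable:                       # 1. serve from the in-memory memtable
        return memtable[key]
    # 2. otherwise consult SSTables, newest first
    for sstable, bloom in zip(reversed(sstables), reversed(bloom_filters)):
        if key not in bloom:                  # filter says "definitely not here": skip
            continue
        if key in sstable:
            return sstable[key]
    return None

memtable = {"a": 1}
sstables = [{"b": 2}, {"c": 3}]               # oldest first
blooms = [{"b"}, {"c"}]                       # toy "filters": plain sets
print(read("a", memtable, sstables, blooms))  # 1 — served from the memtable
print(read("b", memtable, sstables, blooms))  # 2 — found in an older SSTable
```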

Read Repair

Cassandra ensures that frequently read data remains consistent. After a read completes, the coordinator node compares the data from all remaining replicas that own the row in the background. If they are inconsistent, it issues writes to the out-of-date replicas to update the row to the most recently written values.

Read repair can be configured per column family and is enabled by default.

Read repair is important because every time a read request occurs, it provides an opportunity for consistency improvement. As a background process, read repair generally puts little strain on the cluster.

How does a read repair work?

When a query is made against a given key…

  1. Cassandra performs a Read Repair
  2. Read repair performs a digest query on all replicas for that key. A digest query asks a replica to return a hash digest and the timestamp of the key’s data. Digest queries verify whether replicas possess the same data without sending the data over the network.
  3. Cassandra pushes the most recent data to any out-of-date replicas to make the queried data consistent again. The next query will therefore return consistent data.
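The digest-and-repair loop can be sketched like this. The replica dicts and the MD5 digest are assumptions for illustration; the point is that replicas are compared by hash, and the value with the newest timestamp is pushed to whoever disagrees:

```python
import hashlib

def digest(value):
    """Hash digest of a value, so replicas can be compared without shipping data."""
    return hashlib.md5(repr(value).encode()).hexdigest()

def read_repair(replicas):
    """Toy read repair: each replica is a dict {"value": ..., "ts": timestamp}."""
    newest = max(replicas, key=lambda r: r["ts"])        # most recent write wins
    for replica in replicas:
        if digest(replica["value"]) != digest(newest["value"]):
            # digests disagree: push the newest value to the stale replica
            replica["value"], replica["ts"] = newest["value"], newest["ts"]
    return newest["value"]

replicas = [{"value": "old", "ts": 1}, {"value": "new", "ts": 2}]
print(read_repair(replicas))     # 'new'
print(replicas[0])               # {'value': 'new', 'ts': 2} — repaired
```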

Strategies for Reads

  1. ONE — reads from the closest node holding the data
  2. QUORUM — returns a result from a quorum of servers with the most recent timestamp of data
  3. LOCAL_QUORUM — returns a result from a quorum of servers with the most recent timestamp for the data in the same data center as the coordinator node
  4. EACH_QUORUM — returns a result from a quorum of servers with the most recent timestamp in all data centers
  5. ALL — returns a result from all replica nodes for a row key

CAP Theorem

Consistency, Availability, Partition Tolerance

This theorem states that it is impossible for a distributed system to provide all three of these properties simultaneously.

  1. Consistency — all nodes see the same data at the same time
  2. Availability — a guarantee that every request receives a response about whether it was successful or failed
  3. Partition tolerance — the system continues to operate despite arbitrary message loss or failure of part of the system

Cassandra is an AP system, meaning it prioritizes availability and partition tolerance. In C* you can choose between strong and eventual consistency on a per-operation basis, for BOTH reads and writes. It also handles multi-data center operations.

Using Consistency in CQL

In cqlsh, providing an argument to the CONSISTENCY command overrides the default consistency level of ONE for both reads and writes, and configures the consistency level for subsequent requests.

In older versions of CQL, the consistency level could also be set on a per-query basis by adding the USING CONSISTENCY <level> syntax, as shown below.

SELECT total_purchases from SALES USING CONSISTENCY QUORUM where customer_id = 5

UPDATE SALES USING CONSISTENCY ONE SET total_purchases = 5000 WHERE customer_id = 4

