Backend Configuration
LevelDB
Configurable parameters for Riak’s LevelDB storage backend.
Config | Description | Default |
---|---|---|
leveldb.block_cache_threshold | This setting defines the limit past which block cache memory can no longer be released in favor of the page cache. It has no impact on releases in favor of the file cache. The value is set on a per-vnode basis. | 32MB |
leveldb.compaction.trigger.tombstone_count | Controls when a background compaction initiates solely due to the number of delete tombstones within an individual `.sst` table file. A value of `off` disables the feature. | 1000 |
leveldb.compression | Enabling this setting (`on`, the default) saves disk space. Disabling it may reduce read latency but increase overall disk activity. This option can be changed at any time, but it will not impact data on disk until the next time a file requires compaction. | on |
leveldb.compression.algorithm | Selects the compression algorithm used when `leveldb.compression` is `on`. New riak.conf files set this explicitly to `lz4`; when the setting is not provided, `snappy` is used for backward compatibility. Once you determine that you no longer need backward compatibility, setting this to `lz4` will cause future compactions to use the LZ4 algorithm. | lz4 in new riak.conf files; snappy when not provided |
leveldb.data_root | The directory in which LevelDB will store its data. | ./data/leveldb |
leveldb.fadvise_willneed | Option to override LevelDB's use of `fadvise(DONTNEED)` with `fadvise(WILLNEED)` instead. `WILLNEED` can reduce disk activity on systems where physical memory exceeds the database size. | false |
leveldb.maximum_memory | This parameter defines the server memory (in bytes) to assign to LevelDB. See also `leveldb.maximum_memory.percent` to set LevelDB memory as a percentage of the system total. | 80 |
leveldb.maximum_memory.percent | This parameter defines the percentage of total server memory to assign to LevelDB. LevelDB will dynamically adjust its internal cache sizes to stay within this size. The memory size can alternatively be assigned as a byte count via `leveldb.maximum_memory`. | 70 |
leveldb.threads | The number of worker threads performing LevelDB operations. | 71 |
leveldb.verify_checksums | Enables or disables the verification of data fetched from LevelDB against internal checksums. | on |
leveldb.verify_compaction | Enables or disables the verification of LevelDB data during compaction. | on |
leveldb.block.size_steps | Defines the number of incremental adjustments to attempt between the `block.size` value and the maximum `block.size` for an `.sst` table file. A value of zero disables the underlying dynamic block_size feature. | 16 |
leveldb.block.restart_interval | Defines the key count threshold for a new key entry in the key index for a block. Most deployments should leave this parameter alone. | 16 |
leveldb.block.size | Defines the size threshold for a block/chunk of data within one `.sst` table file. Each new block gets an index entry in the `.sst` table file's master index. | 4KB |
leveldb.bloomfilter | Each database `.sst` table file can include an optional "bloom filter" that is highly effective in shortcutting data queries destined not to find the requested key. The Bloom filter typically increases the size of an `.sst` table file by about 2%. | on |
leveldb.write_buffer_size_min | Each vnode first stores new key/value data in a memory-based write buffer. This write buffer exists in parallel with the recovery log mentioned in the `sync` parameter. Riak creates each vnode with a randomly sized write buffer for performance reasons; the random size falls between `write_buffer_size_min` and `write_buffer_size_max`. | 30MB |
leveldb.write_buffer_size_max | See `leveldb.write_buffer_size_min` directly above. | 60MB |
leveldb.limited_developer_mem | This is a Riak-specific option used when a developer is testing a high number of vnodes and/or several VMs on a machine with limited physical memory. Do not use this option when making performance measurements. It overwrites the values given to `write_buffer_size_min` and `write_buffer_size_max`. | off |
leveldb.sync_on_write | Whether LevelDB will flush after every write. Note: if you are familiar with fsync, this is analogous to calling fsync after every write. | off |
leveldb.tiered | The level number at which LevelDB data switches from the faster to the slower array. The default of `off` disables the feature. | off |
leveldb.tiered.path.fast | The path prefix for `.sst` files below the level set by `leveldb.tiered`. | |
leveldb.tiered.path.slow | The path prefix for `.sst` files at and above the level set by `leveldb.tiered`. | |
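As a sketch of how these settings combine, the riak.conf fragment below enables LevelDB with compression and tiered storage. The numeric values and mount points (/mnt/fast_ssd, /mnt/slow_hdd) are hypothetical examples, not tuning recommendations:
# Sketch: LevelDB backend settings in riak.conf (illustrative values only)
storage_backend = leveldb
leveldb.data_root = ./data/leveldb
leveldb.maximum_memory.percent = 70
leveldb.compression = on
leveldb.compression.algorithm = lz4
# Tiered storage: levels below 4 use the fast path, levels 4 and above the slow path
leveldb.tiered = 4
leveldb.tiered.path.fast = /mnt/fast_ssd
leveldb.tiered.path.slow = /mnt/slow_hdd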
Leveled
Configurable parameters for Riak’s leveled storage backend.
Config | Description | Default |
---|---|---|
leveled.data_root | A path under which leveled data files will be stored. | $(platform_data_dir)/leveled |
leveled.sync_strategy | Strategy for flushing data to disk. Can be set to `riak_sync`, `sync` (if OTP > 16), or `none`. Use `none` and the OS will flush when most efficient. Use `riak_sync` or `sync` to flush after every PUT (not recommended without some hardware support, e.g. flash drives and/or flash-backed write caches). | none |
leveled.compression_method | Can be `lz4` or `native` (which uses Erlang's native zlib compression within term_to_binary). | native |
leveled.compression_point | The point at which compression is applied to the Journal (the Ledger is always compressed). Use `on_receipt` or `on_compact`. `on_compact` is suitable when values are unlikely to yield much benefit from compression (compression is only attempted when compacting). | on_receipt |
leveled.log_level | Can be `debug`, `info`, `warn`, `error`, or `critical`. Sets the minimum log level to be used within leveled. Leveled will log many lines to allow stats to be extracted by those using log indexers such as Splunk. | info |
leveled.journal_size | The approximate size (in bytes) at which a Journal file should be rolled. Normally keep this at around the size of O(100K) objects. | 1000000000 |
leveled.compaction_runs_perday | The number of journal compactions per vnode per day. The higher the value, the more compaction runs, and the sooner space is recovered, but each run has a cost. | 24 |
leveled.compaction_low_hour | The hour of the day at which journal compaction can start. Use a low hour of 0 and a top hour of 23 to have no compaction window (i.e. always compact, regardless of time of day). | 0 |
leveled.compaction_top_hour | The hour of the day after which journal compaction should stop. If the low hour > top hour, compaction will work overnight between the low hour and the top hour (inclusive). Timings rely on the server's view of local time. | 23 |
leveled.max_run_length | The maximum number of consecutive files which may be compacted in a single compaction run. | 4 |
leveled_reload_recalc | Enable the `recalc` compaction strategy within the leveled backend in Riak. | disabled |
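To make the compaction-window settings concrete, here is a sketch of a leveled configuration in riak.conf; the values are illustrative. Because the low hour (22) is greater than the top hour (6), journal compaction runs overnight, as described above:
# Sketch: leveled backend settings in riak.conf (illustrative values only)
storage_backend = leveled
leveled.data_root = $(platform_data_dir)/leveled
leveled.sync_strategy = none
leveled.compression_method = native
# low hour > top hour, so journal compaction runs overnight (22:00 through 06:00)
leveled.compaction_low_hour = 22
leveled.compaction_top_hour = 6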
Bitcask
Configurable parameters for Riak’s Bitcask storage backend.
Config | Description | Default |
---|---|---|
bitcask.data_root | The directory under which Bitcask will store its data. | ./data/bitcask |
bitcask.io_mode | Configure how Bitcask writes data to disk. If set to `erlang`, writes are made via Erlang's built-in file API; if set to `nif`, writes are made via direct calls to the POSIX C API. The `nif` mode provides higher throughput for certain workloads but has the potential to negatively impact the Erlang VM, leading to higher worst-case latencies and possible throughput collapse. | erlang |
bitcask.expiry | By default, Bitcask keeps all of your data around. If your data has limited time value, or if you need to purge data for space reasons, you can set the `expiry` option. For example, if you need to purge data automatically after 1 day, set the value to `1d`. `off` disables automatic expiration. | off |
bitcask.expiry.grace_time | By default, Bitcask will trigger a merge whenever a data file contains an expired key. This may result in excessive merging under some usage patterns. To prevent this, you can set the `bitcask.expiry.grace_time` option. Bitcask will defer triggering a merge solely for key expiry by the configured number of seconds. Setting this to `1h` effectively limits each cask to merging for expiry once per hour. | 0 |
bitcask.hintfile_checksums | Whether to allow the CRC to be present at the end of hintfiles. Setting this to `allow_missing` runs Bitcask in a backwards-compatible mode in which old hint files will still be accepted without CRC signatures. | strict |
bitcask.fold.max_puts | See the description of the `bitcask.fold.max_age` config directly below. | 0 |
bitcask.fold.max_age | Fold keys thresholds will reuse the keydir if another fold was started less than `fold.max_age` ago and there were fewer than `fold.max_puts` updates. Otherwise, it will wait until all current fold keys complete and then start. Set either option to `unlimited` to disable. | unlimited |
bitcask.merge.thresholds.fragmentation | Describes the ratio of dead keys to total keys in a file that will cause it to be included in the merge. The value of this setting is a percentage from 0 to 100. For example, if a data file contains 4 dead keys and 6 live keys, it will be included in the merge at the default ratio (which is 40). Increasing the value will cause fewer files to be merged; decreasing the value will cause more files to be merged. | 40 |
bitcask.merge.thresholds.dead_bytes | Describes the minimum amount of data occupied by dead keys in a file to cause it to be included in the merge. Increasing the value will cause fewer files to be merged, whereas decreasing the value will cause more files to be merged. | 128MB |
bitcask.merge.thresholds.small_file | Describes the minimum size a file must have to be excluded from the merge. Files smaller than the threshold will be included. Increasing the value will cause more files to be merged, whereas decreasing the value will cause fewer files to be merged. | 10MB |
bitcask.merge.triggers.dead_bytes | Describes how much data stored for dead keys in a single file will trigger merging. If a file meets or exceeds the trigger value for dead bytes, a merge will be triggered. Increasing the value will cause merging to occur less often, whereas decreasing the value will cause merging to happen more often. When either of these constraints is met by any file in the directory, Bitcask will attempt to merge files. | 512MB |
bitcask.merge.triggers.fragmentation | Describes the ratio of dead keys to total keys in a file that will trigger merging. The value of this setting is a percentage from 0 to 100. For example, if a data file contains 6 dead keys and 4 live keys, a merge will be triggered at the default setting. Increasing this value will cause merging to occur less often, whereas decreasing the value will cause merging to happen more often. | 60 |
bitcask.merge.window.end | See the description of the `bitcask.merge.policy` config below. | 23 |
bitcask.merge.window.start | See the description of the `bitcask.merge.policy` config below. | 0 |
bitcask.merge.policy | Lets you specify when during the day merge operations are allowed to be triggered. Valid options are: `always`, meaning no restrictions; `never`, meaning that merging will never be attempted; and `window`, specifying the hours during which merging is permitted, where `bitcask.merge.window.start` and `bitcask.merge.window.end` are integers between 0 and 23. If merging has a significant impact on performance of your cluster, or your cluster has quiet periods in which little storage activity occurs, you may want to change this setting from the default. | always |
bitcask.merge_check_interval | Bitcask periodically runs checks to determine whether merges are necessary. This parameter determines how often those checks take place. Expressed as a time unit, e.g. `10s` for 10 seconds, `5m` for 5 minutes, etc. | 3m |
bitcask.merge_check_jitter | In order to prevent merge operations from taking place on different nodes at the same time, Riak can apply random variance to merge times, expressed as a percentage of `bitcask.merge_check_interval`. | 30% |
bitcask.max_merge_size | Maximum amount of data to merge in one go in the Bitcask backend. | 100GB |
bitcask.max_file_size | Describes the maximum permitted size for any single data file in the Bitcask directory. If a write causes the current file to exceed this size threshold, that file is closed and a new file is opened for writes. | 2GB |
bitcask.sync.interval | See the description of `bitcask.sync.strategy` directly below. | |
bitcask.sync.strategy | Changes the durability of writes by specifying when to synchronize data to disk. The default setting protects against data loss in the event of application failure (process death) but leaves open a small window in which data could be lost in the event of complete system failure (e.g. hardware, OS, or power). The default mode, `none`, writes data into operating system buffers, which are written to disk when the operating system flushes them. If the system fails, e.g. due to power loss or crash, any data in buffers that have not yet been flushed to stable storage is lost. This is prevented by the setting `o_sync`, which forces the operating system to flush to stable storage at every write. The effect of flushing each write is better durability, though write throughput will suffer as each write must wait for the write to complete. Available sync strategies: `none`, which lets the operating system manage syncing writes; `o_sync`, which uses the O_SYNC flag to force syncs on every write; and `interval`, which forces Bitcask to sync every `bitcask.sync.interval` seconds. | none |
bitcask.open_timeout | Specifies the maximum time Bitcask will block on startup while attempting to create or open the data directory. You generally need not change this value. If for some reason the timeout is exceeded on open, you'll see a log message of the form Failed to start bitcask backend: .... Only then should you consider a longer timeout. | 4s |
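As an illustration of the expiry and merge-window settings described above, a riak.conf fragment might look like the following; the values are examples, not recommendations:
# Sketch: Bitcask backend settings in riak.conf (illustrative values only)
storage_backend = bitcask
bitcask.data_root = ./data/bitcask
# Purge objects a week after they are written, merging for expiry at most hourly
bitcask.expiry = 7d
bitcask.expiry.grace_time = 1h
# Permit merges only between 01:00 and 05:00
bitcask.merge.policy = window
bitcask.merge.window.start = 1
bitcask.merge.window.end = 5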
Memory Backend
Configurable parameters for Riak’s Memory backend.
Config | Description | Default |
---|---|---|
memory_backend.ttl | Each value written will be written with this "time to live." Once that object's time is up, it will be deleted on the next read of its key. Minimum: `1s`. | |
memory_backend.max_memory_per_vnode | The maximum amount of memory consumed per vnode by the memory storage backend. Minimum: `1MB`. | |
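For example, a memory-backend configuration in riak.conf might cap each vnode and expire entries after a day; the values below are illustrative:
# Sketch: Memory backend settings in riak.conf (illustrative values only)
storage_backend = memory
memory_backend.ttl = 1d
memory_backend.max_memory_per_vnode = 4GB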
Multi Backend
Configurable parameters for Riak’s Multi backend, which enables you to utilize multiple data backends in a single Riak cluster.
If you are using multiple backends, you can configure the backends individually by prepending the setting with multi_backend.$name, where $name is the name of the backend. $name can be any valid configuration word, like customer_data, my_data, foo_bar_backend, etc.
Below is the general form for setting multi-backend parameters:
multi_backend.$name.(existing_setting) = <setting>
# or
multi_backend.$name.$backend_type.(backend_specific_setting) = <setting>
Below is a listing of the available parameters:
Config | Description | Default |
---|---|---|
multi_backend.$name.storage_backend | This parameter specifies the Erlang module defining the storage mechanism that will be used on this node. | bitcask |
multi_backend.default | The default name of a backend when one is not specified. | |
To give an example, if you have a LevelDB backend named customer_backend and wish to set the data_root parameter to $(platform_data_dir)/leveldb_backends/customer_backend/, you would do so as follows:
multi_backend.customer_backend.storage_backend = leveldb
multi_backend.customer_backend.leveldb.data_root = $(platform_data_dir)/leveldb_backends/customer_backend
multi_backend.customer_backend.leveldb.maximum_memory.percent = 50
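Following the same pattern, the configuration would also need to select the multi backend and may name customer_backend as the default for buckets that do not specify one:
storage_backend = multi
multi_backend.default = customer_backend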