2011-03-15 09:47:04 +00:00
.*.swp
2009-03-23 11:28:28 +00:00
*.o
2020-01-10 11:22:16 +00:00
*.xo
*.so
2020-01-01 15:33:02 +00:00
*.d
2009-11-03 13:36:38 +00:00
*.log
2013-04-18 14:21:32 +00:00
dump.rdb
2024-03-28 16:58:28 +00:00
*-benchmark
*-check-aof
*-check-rdb
*-check-dump
*-cli
*-sentinel
*-server
2009-03-23 16:22:24 +00:00
doc-tools
2009-03-23 23:43:38 +00:00
release
2009-11-03 13:36:38 +00:00
misc/*
2010-07-01 12:41:03 +00:00
src/release.h
2022-02-28 07:24:47 +00:00
appendonly.aof*
Implement Multi Part AOF mechanism to avoid AOFRW overheads. (#9788)
Implement Multi-Part AOF mechanism to avoid overheads during AOFRW.
Introducing a folder with multiple AOF files tracked by a manifest file.
The main issues with the original AOFRW mechanism are:
* buffering of commands that are processed during rewrite (consuming a lot of RAM)
* freezes of the main process when the AOFRW completes, while it drains the remaining part of the buffer and fsyncs it.
* double disk I/O for the data that arrives during AOFRW (it had to be written to both the old and new AOF files)
The main modifications of this PR:
1. Remove the AOF rewrite buffer and related code.
2. Divide the AOF into multiple files of two types. The first is the `BASE` type: it represents the full data set
(in either AOF or RDB format) produced by an AOFRW, and there is at most one `BASE` file. The second is the
`INCR` type, of which there may be more than one; these files hold the incremental commands written since
the last AOFRW.
3. Use an AOF manifest file to record and manage the AOF files mentioned above.
4. The existing `appendfilename` configuration becomes the base part of the new file names, for example:
`appendonly.aof.1.base.rdb` and `appendonly.aof.2.incr.aof`.
5. Add manifest-related TCL tests and modify some existing tests that depend on `appendfilename`.
6. Remove the `aof_rewrite_buffer_length` field from `INFO`.
7. Add the `aof-disable-auto-gc` configuration. By default we automatically delete HISTORY-type AOFs;
this option gives users the opportunity to preserve them (currently intended for testing only).
8. Add an AOFRW limiting measure. When the number of consecutive AOFRW failures reaches the threshold (currently 3),
the next AOFRW is delayed by 1 minute. If it fails again, the delay doubles to 2 minutes, then 4, 8, 16, and so on,
up to a maximum of 60 minutes (1 hour); see the sketch after this message. During the limit period, the
`BGREWRITEAOF` command can still be used to trigger an AOFRW immediately.
9. Support upgrading (loading) data from older Redis versions.
10. Add the `appenddirname` configuration for the directory name of the append-only files. All AOF files and
the manifest file are placed in this directory (a configuration sketch follows the `appendonlydir` entry below).
11. Only the last AOF file (BASE or INCR) may be truncated; otherwise Redis will exit even if
`aof-load-truncated` is enabled.
Co-authored-by: Oran Agra <oran@redislabs.com>
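For illustration, here is a minimal C sketch of the retry backoff arithmetic described in item 8 above. It is not the actual server code: the constant and function names are invented, and only the delay schedule (1, 2, 4, ... minutes, capped at 60) follows the commit message.

```c
#include <stdio.h>

/* Sketch of the AOFRW retry backoff from item 8 above; the names below are
 * invented for illustration and do not match the Redis source. */
#define AOFRW_FAILURE_THRESHOLD 3   /* consecutive failures before limiting kicks in */
#define AOFRW_MAX_DELAY_MIN     60  /* delays are capped at one hour */

static int aofrw_retry_delay_minutes(int consecutive_failures) {
    if (consecutive_failures < AOFRW_FAILURE_THRESHOLD) return 0;
    int doublings = consecutive_failures - AOFRW_FAILURE_THRESHOLD;
    if (doublings >= 6) return AOFRW_MAX_DELAY_MIN;   /* 1 << 6 = 64 > 60, so cap */
    return 1 << doublings;                            /* 1, 2, 4, 8, 16, 32 minutes */
}

int main(void) {
    for (int failures = 1; failures <= 10; failures++)
        printf("failures=%d -> delay=%d min\n",
               failures, aofrw_retry_delay_minutes(failures));
    return 0;
}
```

Per item 8, a manual `BGREWRITEAOF` still triggers a rewrite immediately even while this limit is in effect.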
2022-01-03 17:14:13 +00:00
appendonlydir
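The `appendonlydir` pattern above ignores the AOF directory introduced by this change. As a rough illustration, using the directives named in the commit message with their defaults (the manifest file name shown here is an assumption), the configuration and resulting layout might look like:

```
# redis.conf
appendonly yes
appendfilename "appendonly.aof"   # base part of the new AOF file names
appenddirname "appendonlydir"     # directory holding all AOF files and the manifest
aof-disable-auto-gc no            # default: HISTORY-type AOFs are deleted automatically

# Possible contents of appendonlydir/ after an AOFRW (manifest name assumed):
#   appendonly.aof.1.base.rdb     <- at most one BASE file (full data set)
#   appendonly.aof.2.incr.aof     <- one or more INCR files (commands since the last AOFRW)
#   appendonly.aof.manifest       <- manifest tracking the files above
```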
2010-05-25 08:06:37 +00:00
SHORT_TERM_TODO
2010-11-15 14:50:41 +00:00
release.h
src/transfer.sh
2010-11-29 10:14:57 +00:00
src/configs
2010-12-29 22:08:18 +00:00
redis.ds
2024-04-03 17:47:26 +00:00
src/*.conf
2011-05-27 13:27:07 +00:00
deps/lua/src/lua
deps/lua/src/luac
2011-05-27 14:01:29 +00:00
deps/lua/src/liblua.a
Added INFO LATENCYSTATS section: latency by percentile distribution/latency by cumulative distribution of latencies (#9462)
# Short description
The Redis extended latency stats track per-command latencies and enable:
- exporting the per-command percentile distribution via the `INFO LATENCYSTATS` command.
**(the percentile distribution is not mergeable between cluster nodes).**
- exporting the per-command cumulative latency distributions via the `LATENCY HISTOGRAM` command.
Using the cumulative distribution of latencies we can merge stats from different cluster nodes
to calculate aggregate metrics.
By default, the extended latency monitoring is enabled since the overhead of keeping track of the
command latency is very small.
If you don't want to track extended latency metrics, you can easily disable it at runtime using the command:
- `CONFIG SET latency-tracking no`
By default, the exported latency percentiles are the p50, p99, and p999.
You can alter them at runtime using the command:
- `CONFIG SET latency-tracking-info-percentiles "0.0 50.0 100.0"`
## Some details:
- The total size per histogram should sit around 40 KiB. We only allocate those 40 KiB when a command
is called for the first time.
- Regarding the write overhead: as seen below, there is no measurable overhead on the achievable
ops/sec or on the full latency spectrum at the client. The measured redis-benchmark results for unstable
vs. this branch are also included.
- We track from 1 nanosecond to 1 second (everything above 1 second is considered +Inf).
## `INFO LATENCYSTATS` exposition format
- Format: `latency_percentiles_usec_<CMDNAME>:p0=XX,p50....`
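For illustration only, with the default p50/p99/p999 percentiles, a line for the GET command could look like the following (the microsecond values are made up and depend entirely on the workload):

```
latency_percentiles_usec_get:p50=0.25,p99=1.05,p999=4.07
```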
## `LATENCY HISTOGRAM [command ...]` exposition format
Return a cumulative distribution of latencies in the format of a histogram for the specified command names.
The histogram is composed of a map of time buckets:
- Each representing a latency range, between 1 nanosecond and roughly 1 second.
- Each bucket covers twice the previous bucket's range.
- Empty buckets are not printed.
- Everything above 1 sec is considered +Inf.
- At most there will be log2(1,000,000,000) ≈ 30 buckets (see the sketch after this message).
We reply with a map for each command in the format:
`<command name>: { calls: <total command calls>, histogram: { <bucket 1>: latency, <bucket 2>: latency, ... } }`
Co-authored-by: Oran Agra <oran@redislabs.com>
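Below is a minimal sketch of the bucket scheme described above: upper bounds double from 1 nanosecond up to roughly 1 second, giving at most ~30 buckets. It only illustrates the arithmetic and is not the hdr_histogram code Redis links against.

```c
#include <stdio.h>
#include <stdint.h>

/* Illustration of the LATENCY HISTOGRAM bucket scheme described above:
 * bucket upper bounds double from 1 ns until they reach 1 second, giving
 * at most log2(1e9) ~= 30 buckets; anything above 1 s counts as +Inf. */
int main(void) {
    const uint64_t one_second_ns = 1000000000ULL;
    uint64_t upper_bound_ns = 1;   /* first bucket: latencies up to 1 ns */
    int buckets = 0;

    while (upper_bound_ns < one_second_ns) {
        buckets++;
        printf("bucket %2d: <= %llu ns\n", buckets,
               (unsigned long long)upper_bound_ns);
        upper_bound_ns *= 2;       /* each bucket covers twice the previous range */
    }
    printf("buckets below 1 s: %d (everything above is +Inf)\n", buckets);
    return 0;
}
```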
2022-01-05 12:01:05 +00:00
deps/hdr_histogram/libhdrhistogram.a
optimizing d2string() and addReplyDouble() with grisu2: double to string conversion based on Florian Loitsch's Grisu-algorithm (#10587)
All commands / use cases that heavily rely on double-to-string conversion
(i.e. taking a double-precision floating-point number like 1.5 and returning a string like "1.5")
could benefit from a performance boost by replacing snprintf(buf,len,"%.17g",value) with the
equivalent [fpconv_dtoa](https://github.com/night-shift/fpconv) or any other algorithm that ensures
100% conversion coverage.
This is a well-studied topic, and projects like MongoDB, RedPanda, and PyTorch leverage libraries
(e.g. fmtlib) that use optimized double-to-string conversion underneath.
The positive impact can be substantial. This PR uses the Grisu2 approach (Grisu is explained in
section 5 of https://www.cs.tufts.edu/~nr/cs257/archive/florian-loitsch/printf.pdf).
Test suite changes:
Despite being compatible, in some cases the new conversion produces a different result from printf, and some tests
had to be adjusted.
One case is that `%.17g` (which means %e or %f, whichever is shorter) chose to use `5000000000`
instead of `5e+9`, which sounds like a bug?
In other cases, we changed the TCL tests to compare numbers instead of strings, to ignore minor rounding
issues (`expr 0.8 == 0.79999999999999999`).
2022-10-15 09:17:41 +00:00
deps/fpconv/libfpconv.a
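The `deps/fpconv/libfpconv.a` entry above ignores the static library built from the bundled fpconv sources. Below is a minimal sketch of the swap described in the commit message, assuming the upstream night-shift/fpconv API (`int fpconv_dtoa(double, char*)`, writing at most 24 characters and not NUL-terminating) and an assumed header name; it is not the actual Redis `d2string()`/`addReplyDouble()` code.

```c
#include <stdio.h>

#ifdef USE_FPCONV
#include "fpconv_dtoa.h"   /* header name assumed; the upstream project ships fpconv.h */
#endif

/* Sketch: convert a double to its shortest round-trippable string, either
 * with the old snprintf("%.17g") path or the Grisu2-based fpconv_dtoa(). */
static size_t double_to_string(double value, char *buf, size_t buflen) {
#ifdef USE_FPCONV
    /* Assumes buf holds at least 25 bytes (24 chars + NUL terminator). */
    int len = fpconv_dtoa(value, buf);   /* returns length, writes no NUL */
    buf[len] = '\0';
    (void)buflen;
    return (size_t)len;
#else
    /* Previous approach: shortest of %e/%f with 17 significant digits. */
    return (size_t)snprintf(buf, buflen, "%.17g", value);
#endif
}

int main(void) {
    char buf[32];
    double_to_string(1.5, buf, sizeof(buf));
    printf("%s\n", buf);   /* expected output: 1.5 */
    return 0;
}
```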
2020-08-24 09:54:56 +00:00
tests/tls/*
2011-11-15 20:40:49 +00:00
.make-*
2011-11-21 14:35:54 +00:00
.prerequisites
2012-07-23 10:54:52 +00:00
*.dSYM
2016-07-06 10:24:45 +00:00
Makefile.dep
2018-09-18 12:42:09 +00:00
.vscode/*
2020-03-12 12:44:32 +00:00
.idea/*
2020-08-24 09:54:56 +00:00
.ccls
.ccls-cache/*
compile_commands.json
2020-09-23 18:56:16 +00:00
redis.code-workspace
2024-03-29 17:38:13 +00:00
.cache