.*.swp
*.o
*.a
*.xo
*.so
*.d
*.log
dump*.rdb
*-benchmark
*-check-aof
*-check-rdb
*-check-dump
*-cli
*-sentinel
*-server
*-unit-tests
doc-tools
release
misc/*
src/release.h
appendonly.aof*
appendonlydir*
SHORT_TERM_TODO
release.h
src/transfer.sh
src/configs
redis.ds
src/*.conf
deps/lua/src/lua
deps/lua/src/luac
deps/lua/src/liblua.a
Added INFO LATENCYSTATS section: latency by percentile distribution/latency by cumulative distribution of latencies (#9462)
# Short description
The Redis extended latency stats track per-command latencies and enable:
- exporting the per-command percentile distribution via the `INFO LATENCYSTATS` command
**(the percentile distribution is not mergeable between cluster nodes)**.
- exporting the per-command cumulative latency distribution via the `LATENCY HISTOGRAM` command.
Using the cumulative distribution of latencies, we can merge stats from different cluster nodes
to calculate aggregate metrics.
Extended latency monitoring is enabled by default, since the overhead of tracking command
latency is very small.
If you don't want to track extended latency metrics, you can easily disable it at runtime using the command:
- `CONFIG SET latency-tracking no`
By default, the exported latency percentiles are p50, p99, and p999.
You can alter them at runtime using the command:
- `CONFIG SET latency-tracking-info-percentiles "0.0 50.0 100.0"`
## Some details:
- The total size per histogram sits around 40 KiB. We only allocate those 40 KiB when a command
is called for the first time.
- With regards to write overhead: as seen below, there is no measurable overhead on the achievable
ops/sec or on the full latency spectrum at the client. Measured redis-benchmark results for unstable
vs. this branch are also included.
- We track from 1 nanosecond to 1 second (everything above 1 second is considered +Inf).
## `INFO LATENCYSTATS` exposition format
- Format: `latency_percentiles_usec_<CMDNAME>:p0=XX,p50....`
## `LATENCY HISTOGRAM [command ...]` exposition format
Returns a cumulative distribution of latencies in the form of a histogram for the specified command names.
The histogram is composed of a map of time buckets:
- Each bucket represents a latency range, between 1 nanosecond and roughly 1 second.
- Each bucket covers twice the previous bucket's range.
- Empty buckets are not printed.
- Everything above 1 second is considered +Inf.
- There will be at most log2(1000000000) ≈ 30 buckets.
We reply with a map for each command in the format:
`<command name>: { calls: <total command calls>, histogram: { <bucket 1>: latency, <bucket 2>: latency, ... } }`
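A hedged illustration of the interfaces described above (the command names and reply values are made up for the example, not captured output):
```bash
# Toggle extended latency tracking and change the exported percentiles at runtime:
redis-cli CONFIG SET latency-tracking yes
redis-cli CONFIG SET latency-tracking-info-percentiles "50 99 99.9"

# INFO LATENCYSTATS then contains one line per command, e.g. (illustrative values):
#   latency_percentiles_usec_get:p50=0.5,p99=1.9,p99.9=4.2

# Cumulative latency histograms for specific commands:
redis-cli LATENCY HISTOGRAM get set
```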
Co-authored-by: Oran Agra <oran@redislabs.com>
deps/hdr_histogram/libhdrhistogram.a
optimizing d2string() and addReplyDouble() with grisu2: double to string conversion based on Florian Loitsch's Grisu-algorithm (#10587)
All commands / use cases that heavily rely on double-to-string conversion
(i.e. taking a double-precision floating-point number like 1.5 and returning a string like "1.5")
can get a performance boost by replacing snprintf(buf,len,"%.17g",value) with the
equivalent [fpconv_dtoa](https://github.com/night-shift/fpconv) or any other algorithm that ensures
100% coverage of the conversion.
This is a well-studied topic, and projects like MongoDB, Redpanda, and PyTorch leverage libraries
(fmtlib) that use an optimized double-to-string conversion underneath.
The positive impact can be substantial. This PR uses the grisu2 approach (grisu is explained in
https://www.cs.tufts.edu/~nr/cs257/archive/florian-loitsch/printf.pdf, section 5).
test suite changes:
Despite being compatible, in some cases it produces a different result from printf, and some tests
had to be adjusted.
One case is that `%.17g` (which means %e or %f, whichever is shorter) chose to use `5000000000`
instead of 5e+9, which sounds like a bug?
In other cases, we changed the TCL tests to compare numbers instead of strings, to ignore minor rounding
differences (`expr 0.8 == 0.79999999999999999`).
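A quick shell illustration of the formatting difference discussed above (assuming the shell's printf mirrors C's `%.17g`; the exponent form is what the shortest-representation conversion produces per the note above):
```bash
# printf-style %.17g keeps the expanded decimal form for 5e9:
printf '%.17g\n' 5000000000    # -> 5000000000
# grisu2 / fpconv_dtoa instead emits the shortest round-trip form, e.g. 5e+9.
```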
deps/fpconv/libfpconv.a
tests/tls/*
.make-*
.prerequisites
*.dSYM
Makefile.dep
.vscode/*
.idea/*
.ccls
.ccls-cache/*
compile_commands.json
redis.code-workspace
.cache
.cscope*
.swp
nodes*.conf
tests/cluster/tmp/*
Introduce Valkey Over RDMA transport (experimental) (#477)
Adds an option to build RDMA support as a module:
`make BUILD_RDMA=module`
To start valkey-server with RDMA, use a command line like the following (a consolidated sketch follows these notes):
`./src/valkey-server --loadmodule src/valkey-rdma.so port=6379 bind=xx.xx.xx.xx`
* Implements the server side of the connection module only; this means RDMA support can *not* be compiled in as a built-in.
* Adds the necessary information to README.md.
* Supports `CONFIG SET/GET`, for example `CONFIG SET rdma.port 6380`; this can then be verified with `rdma res show cm_id` and with valkey-cli (with RDMA support, which is not implemented in this patch).
* The full listener list then looks like:
listener0:name=tcp,bind=*,bind=-::*,port=6379
listener1:name=unix,bind=/var/run/valkey.sock
listener2:name=rdma,bind=xx.xx.xx.xx,bind=yy.yy.yy.yy,port=6379
listener3:name=tls,bind=*,bind=-::*,port=16379
Because TCL lacks RDMA support, a simple C program (under tests/rdma/) is used to test Valkey Over RDMA. This is a fairly raw version with basic library dependencies: libpthread, libibverbs, librdmacm. Run it using the script:
`./runtest-rdma [ OPTIONS ]`
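Putting the steps above together, a hedged sketch of an end-to-end session (the bind addresses are placeholders; issuing CONFIG SET through valkey-cli over the regular TCP port is an assumption here):
```bash
# Build the experimental RDMA connection module and start the server with it.
make BUILD_RDMA=module
./src/valkey-server --loadmodule src/valkey-rdma.so \
    port=6379 bind=xx.xx.xx.xx

# Change the RDMA listener port at runtime, then inspect it from the host side.
./src/valkey-cli -p 6379 CONFIG SET rdma.port 6380
rdma res show cm_id

# Run the RDMA test suite (needs libpthread, libibverbs, librdmacm).
./runtest-rdma            # plus any options supported by the script
```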
To run RDMA in GitHub actions, RXE, a kernel module for emulated soft RDMA, needs to be installed.
The kernel module source code is fetched from a repo containing only the RXE driver from the Linux
kernel; it is kept in a separate repo to avoid cloning the whole Linux kernel repo.
----
In June 2021 I created a [PR](https://github.com/redis/redis/pull/9161) with a *Redis Over RDMA*
proposal. I then did some work to [fully abstract the connection layer and make TLS dynamically
loadable](https://github.com/redis/redis/pull/9320): since Redis 7.2.0, a new connection type can be
built into Redis statically, or as a separate shared library loaded by Redis on startup.
Based on the new connection framework, I created a new [PR](https://github.com/redis/redis/pull/11182);
several people (@xiezhq-hermann @zhangyiming1201 @JSpewock @uvletter @FujiZ) noticed, played with, and
tested it. However, because of the lack of time and knowledge on the maintainers' side, that PR has
been pending for about 2 years.
Related doc: [Introduce *Valkey Over RDMA*
specification](https://github.com/valkey-io/valkey-doc/pull/123). (Same as for Redis, and this should
stay the same.)
Changes in this PR:
- Implement *Valkey Over RDMA* (adapted to the Valkey style).
Finally, if this feature is considered for merging, I volunteer to maintain it.
---------
Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
tests/rdma/rdma-test
tags
Add CMake build system for valkey (#1196)
With this commit, users are able to build valkey using `CMake`.
## Example usage:
Build `valkey-server` in Release mode with TLS enabled and using
`jemalloc` as the allocator:
```bash
mkdir build-release
cd $_
cmake .. -DCMAKE_BUILD_TYPE=Release \
-DCMAKE_INSTALL_PREFIX=/tmp/valkey-install \
-DBUILD_MALLOC=jemalloc -DBUILD_TLS=1
make -j$(nproc) install
# start valkey
/tmp/valkey-install/bin/valkey-server
```
Build `valkey-unit-tests`:
```bash
mkdir build-release-ut
cd $_
cmake .. -DCMAKE_BUILD_TYPE=Release \
-DBUILD_MALLOC=jemalloc -DBUILD_UNIT_TESTS=1
make -j$(nproc)
# Run the tests
./bin/valkey-unit-tests
```
Current features supported by this PR:
- Building against different allocators (`jemalloc`, `tcmalloc`, `tcmalloc_minimal` and `libc`), e.g. to enable `jemalloc` pass `-DBUILD_MALLOC=jemalloc` to `cmake`
- OpenSSL builds (to enable TLS, pass `-DBUILD_TLS=1` to `cmake`)
- Sanitizers: pass `-DBUILD_SANITIZER=<address|thread|undefined>` to `cmake`
- Install target + redis symbolic links
- Build `valkey-unit-tests` executable
- Standard CMake variables are supported, e.g. to install `valkey` under `/home/you/root` pass `-DCMAKE_INSTALL_PREFIX=/home/you/root`
Why use `CMake`? To list *some* of its advantages:
- Superior IDE integration: cmake generates the file `compile_commands.json`, which `clangd` needs for compiler-accurate code completion (in other words: your VS Code will thank you)
- Out-of-source build tree: with the current build system, object files are created all over the place, polluting the source tree; the best practice is to build the project in a separate folder
- Multiple build types co-existing: with the current build system it is often hard to have multiple build configurations; with cmake you can do it easily (see the examples below)
- It is the de-facto standard for C/C++ projects these days
More build examples:
ASAN build:
```bash
mkdir build-asan
cd $_
cmake .. -DBUILD_SANITIZER=address -DBUILD_MALLOC=libc
make -j$(nproc)
```
ASAN with jemalloc:
```bash
mkdir build-asan-jemalloc
cd $_
cmake .. -DBUILD_SANITIZER=address -DBUILD_MALLOC=jemalloc
make -j$(nproc)
```
As the previous examples show, any combination is allowed, and the builds can co-exist in the same source tree.
## Valkey installation
With this new `CMake` build, it is possible to install the binaries by running `make install`, or to create a package with `make package` (currently supported on Debian-like distros).
### Example 1: build & install using `make install`:
```bash
mkdir build-release
cd $_
cmake .. -DCMAKE_INSTALL_PREFIX=$HOME/valkey-install -DCMAKE_BUILD_TYPE=Release
make -j$(nproc) install
# valkey is now installed under $HOME/valkey-install
```
### Example 2: create a `.deb` installer:
```bash
mkdir build-release
cd $_
cmake .. -DCMAKE_BUILD_TYPE=Release
make -j$(nproc) package
# ... CPack deb generation output
sudo gdebi -n ./valkey_8.1.0_amd64.deb
# valkey is now installed under /opt/valkey
```
### Example 3: create an installer for non-Debian systems (e.g. FreeBSD or macOS):
```bash
mkdir build-release
cd $_
cmake .. -DCMAKE_BUILD_TYPE=Release
make -j$(nproc) package
mkdir -p /opt/valkey && ./valkey-8.1.0-Darwin.sh --prefix=/opt/valkey --exclude-subdir
# valkey-server is now installed under /opt/valkey
```
Signed-off-by: Eran Ifrah <eifrah@amazon.com>
build-debug/
build-release/