Following #10996, RM_StringCompare in module.c was overlooked and not modified.
Modified RM_StringCompare, compareStringObjectsWithFlags,
compareStringObjects and collateStringObjects.
A timing issue like this was reported in freebsd daily CI:
```
*** [err]: Sanity test push cmd after resharding in tests/unit/cluster/cli.tcl
Expected 'CLUSTERDOWN The cluster is down' to match '*MOVED*'
```
We additionally wait for each node to reach a consensus on the cluster
state in wait_for_condition to avoid the cluster-down error.
The fix is just like #10495; quoting madolson's comment:
Cluster check just verifies that the config state is self-consistent;
waiting for cluster_state to be okay is an independent check that all
the nodes actually believe each other are healthy.
At the same time, I noticed that unit/moduleapi/cluster.tcl has the exact
same test and may have the same problem, so it was modified as well.
The temporary array for the deleted-entries reply of XAUTOCLAIM was
insufficient, but in fact the COUNT argument should be used to
control the size of the reply, so instead of terminating the loop by
counting only the claimed entries, we now count deleted entries as well.
Fixes #10968
Addresses CVE-2022-31144
Replace use of:
sprintf --> snprintf
strcpy/strncpy --> redis_strlcpy
strcat/strncat --> redis_strlcat
**Why are we making this change?**
Much of the code uses unsafe variants or deprecated buffer-handling
functions.
While most cases probably do not present any issue on the known paths,
programming errors and unterminated strings might lead to potential
buffer overflows which are not covered by tests.
**As part of this PR we change**
1. Added an implementation of redis_strlcpy and redis_strlcat based on the strl implementation: https://linux.die.net/man/3/strl (a sketch of the contract follows this list).
2. Changed all occurrences of sprintf to snprintf.
3. Changed occurrences of strcpy/strncpy to redis_strlcpy.
4. Changed occurrences of strcat/strncat to redis_strlcat.
5. Changed the behavior of ll2string/ull2string/ld2string so that they always place a null
terminator ('\0') at the first index of the output buffer. This was done to make
these functions safer to use in cases where the caller does not check the output
returned by them (for example in rdbRemoveTempFile).
6. Added a compiler directive that issues a deprecation error when a use of
sprintf/strcpy/strcat is found during compilation, so such uses fail at compile time.
However, keep in mind that since the deprecation attribute is not supported on all compilers,
this is expected to fail during the push workflows.
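As an illustration of item 1, here is a minimal sketch of the strlcpy-style contract the new helpers follow; the function name below is a placeholder and this is not the actual helper added by this PR, just the documented semantics: copy at most `dsize-1` bytes, always NUL-terminate when `dsize > 0`, and return the full length of `src` so callers can detect truncation.
```c
#include <stddef.h>

/* Sketch of the strlcpy-style contract (illustrative, not the PR's code). */
size_t redis_strlcpy_sketch(char *dst, const char *src, size_t dsize) {
    size_t src_len = 0;
    while (src[src_len] != '\0') src_len++;          /* strlen(src) */
    if (dsize > 0) {
        size_t copy = (src_len >= dsize) ? dsize - 1 : src_len;
        for (size_t i = 0; i < copy; i++) dst[i] = src[i];
        dst[copy] = '\0';                            /* always terminated */
    }
    return src_len;  /* a return value >= dsize means the copy was truncated */
}
```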
**NOTE:** This is only an initial milestone. We might also consider
using the *_s implementations provided by the C11 extensions (however, they are not
yet widely supported). I would also suggest starting to
look at static code analyzers to track unsafe use cases.
For example, the LLVM clang checker supports security.insecureAPI.DeprecatedOrUnsafeBufferHandling,
which can help locate unsafe function usage.
https://clang.llvm.org/docs/analyzer/checkers.html#security-insecureapi-deprecatedorunsafebufferhandling-c
The main reason not to onboard it at this stage is that the alternative
accepted by clang is to use the C11 extensions, which are not always
supported by the stdlib.
If we do not use jemalloc (mostly with valgrind) and use an old compiler that does not support C11,
we will get a compilation error.
Co-authored-by: Valentino Geron <valentino@redis.com>
In the newly added cluster hostnames test, the primary is failing over during the reboot
under valgrind, so we are validating the wrong node. This change just configures the replica
so it does not take over, which seems to fix the test.
We could have also set the timeout higher, but that slows down the test.
According to the Redis functions documentation, the FCALL command format is:
FCALL function_name numberOfKeys [key1, key2, key3.....] [arg1, arg2, arg3.....]
So in the json files of fcall and fcall_ro, we should mark the key and arg parts as optional,
just like EVAL...
Co-authored-by: Binbin <binloveplay1314@qq.com>
The corrupt dump fuzzer uncovered a valgrind warning saying:
```
==76370== Argument 'size' of function malloc has a fishy (possibly negative) value: -3744781444216323815
```
This allocation would have failed (returning NULL) and been handled properly by redis (even before this change), but we also want to silence the valgrind warnings (which check that casting the size to ssize_t produces a non-negative value).
The solution I opted for is to explicitly fail these allocations (returning NULL) before even reaching `malloc` (which would have failed and returned NULL too).
The implication is that we will not be able to support a single allocation of more than 2GB on a 32-bit system (which I don't think is a realistic scenario),
i.e. I do think we could be facing cases where redis consumes more than 2GB on a 32-bit system, but not in a single allocation.
The byproduct of this is that I dropped the overflow assertions, since these will now lead to the same OOM panic we have for failed allocations.
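A minimal sketch of the guard described above; the wrapper name is an illustrative stand-in, not the actual zmalloc.c code. The idea is simply to reject sizes that would be negative when cast to ssize_t, before malloc is ever reached.
```c
#include <limits.h>
#include <stdlib.h>

/* Illustrative wrapper: fail allocations whose size would be negative as ssize_t. */
void *ztrymalloc_sketch(size_t size) {
    /* On a 32-bit system this caps a single allocation at ~2GB, which also
     * silences valgrind's "fishy (possibly negative) value" warning. */
    if (size > (size_t)SSIZE_MAX) return NULL;
    return malloc(size);
}
```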
#10942 broke the new test added in #10449
```
Testing unit: 29-slot-migration-response.tcl
Cluster Join and auto-discovery test: FAILED: Cluster failed to join into a full mesh.
```
It looks like we need to wait for the cluster in test 28 to become stable.
In #9389, we added a new `cluster-port` config and made the cluster bus port configurable,
but currently redis-cli --cluster create/add-node doesn't support an instance with a configurable `cluster-port`,
because redis-cli uses the old convention (port + 10000) when sending the `CLUSTER MEET` command.
Now we add this support to redis-cli `--cluster`. Note that we don't need to explicitly pass in the
`cluster-port` parameter; we can get the real `cluster-port` of the node in `clusterManagerNodeLoadInfo`,
so the `--cluster create` and `--cluster add-node` interfaces have not changed.
We will use the `cluster-port` when doing `CLUSTER MEET` (sketched below). Also note that the `CLUSTER MEET` bus-port
parameter was added in 4.0, so if the bus_port (the one in redis-cli) is 0 or equal to (port + 10000),
we just call `CLUSTER MEET` with 2 arguments, using the old form.
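A hedged sketch of that decision; the struct, field names, and helper below are simplified illustrations rather than the exact redis-cli.c code.
```c
#include <stdio.h>

#define CLUSTER_MANAGER_PORT_INCR 10000  /* legacy bus-port convention */

typedef struct { char *ip; int port; int bus_port; } nodeInfoSketch;

/* Build the CLUSTER MEET command, using the old 2-argument form whenever the
 * bus port is unknown (0) or follows the legacy port+10000 convention. */
static void buildMeetCommand(nodeInfoSketch *n, char *buf, size_t buflen) {
    if (n->bus_port == 0 || n->bus_port == n->port + CLUSTER_MANAGER_PORT_INCR) {
        snprintf(buf, buflen, "CLUSTER MEET %s %d", n->ip, n->port);
    } else {
        /* Pass the configurable cluster-port explicitly (bus-port arg exists since 4.0). */
        snprintf(buf, buflen, "CLUSTER MEET %s %d %d", n->ip, n->port, n->bus_port);
    }
}
```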
Co-authored-by: Madelyn Olson <34459052+madolson@users.noreply.github.com>
CONTRIBUTING, to get better formatting.
CONDUCT also, because GitHub doesn't seem to recognize the code of conduct page.
Co-authored-by: Oran Agra <oran@redislabs.com>
Use SSL_shutdown(), in a best-effort manner, when closing a TLS
connection. This change better supports OpenSSL 3.x clients that will
not silently ignore the socket-level EOF.
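A minimal sketch of a best-effort close using plain OpenSSL objects rather than Redis' connection abstraction; the helper name is illustrative. The point is to send close_notify once, ignore the result, and tear down.
```c
#include <openssl/ssl.h>
#include <unistd.h>

/* Best-effort TLS close: try to send a close_notify alert, but never block
 * or error out on it, since the connection is going away regardless. */
static void tlsCloseBestEffort(SSL *ssl, int fd) {
    /* SSL_shutdown() returns 0 when our close_notify was sent but the peer's
     * was not yet received; a second call would wait for it, which we skip. */
    (void) SSL_shutdown(ssl);
    SSL_free(ssl);
    close(fd);
}
```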
## Issue
During the MULTI/EXEC flow, each command gets queued until the `EXEC`
command is received, and during this phase, on every queued command, a
`realloc` is invoked. This could be expensive depending on the realloc
behavior (if it copies to a new memory location).
## Solution
In order to reduce the number of syscalls, a couple of optimizations are used:
1. By default, reserve memory for at least two commands, since `MULTI/EXEC` for a
single command doesn't have any significance; hence, I believe customers wouldn't use it.
2. For further reservations, grow the memory allocation exponentially (powers of 2), as sketched below.
This reduces the number of `realloc` calls from `N` to `log(N)`.
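A minimal sketch of the growth policy from item 2; the struct, field names, and helper are simplified assumptions, not the actual multi.c code (error handling omitted for brevity).
```c
#include <stdlib.h>

#define MULTI_INITIAL_CAPACITY 2  /* reserve room for at least two commands */

typedef struct {
    void **commands;  /* queued commands (simplified representation) */
    int count;        /* commands queued so far */
    int capacity;     /* allocated slots */
} multiStateSketch;

/* Queue a command, doubling the array when full: realloc runs O(log N)
 * times for N queued commands instead of once per command. */
static void queueCommandSketch(multiStateSketch *ms, void *cmd) {
    if (ms->count == ms->capacity) {
        int newcap = ms->capacity ? ms->capacity * 2 : MULTI_INITIAL_CAPACITY;
        ms->commands = realloc(ms->commands, sizeof(void*) * newcap);
        ms->capacity = newcap;
    }
    ms->commands[ms->count++] = cmd;
}
```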
## Other changes:
* Include multi exec queued command array in client memory consumption calculation
(affects client eviction too)
Currently in cluster mode, the Redis process locks the cluster config file when
starting up and holds the lock for the entire lifetime of the process.
When the server shuts down, it doesn't explicitly release the lock on the
cluster config file. We noticed a problem in restart testing: if you shut down
a very large redis-server process (i.e. with several hundred GB of data stored),
it takes the OS a while to free the resources and unlock the cluster config file.
So if we immediately try to restart the redis-server process, it might fail to acquire
the lock on the cluster config file and fail to come up.
This fix explicitly releases the lock on the cluster config file upon shutdown rather
than relying on the OS to release the lock, which is a cleaner and safer approach to
freeing up the acquired resources.
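A minimal sketch of explicitly releasing the lock at shutdown, assuming the startup code took an advisory flock() on the config file descriptor; the function name is illustrative and this is not the exact cluster.c code.
```c
#include <sys/file.h>
#include <unistd.h>

/* Release the advisory lock on the cluster config file explicitly instead of
 * waiting for the OS to do it after the (possibly slow) process teardown. */
static void releaseClusterConfigLockSketch(int lock_fd) {
    if (lock_fd == -1) return;
    flock(lock_fd, LOCK_UN);
    close(lock_fd);
}
```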
Account sharded pubsub channels memory consumption in the client memory usage
computation, to accurately evict clients based on the threshold set by `maxmemory-clients`.
We should also set aof_lastbgrewrite_status to C_ERR on these
errors, because the AOF rewrite did fail and we did not finish the
manifest update. Also maintain stat_aofrw_consecutive_failures.
The `can_log` variable prevents us from outputting too
many error logs, but it should not gate the modification
of server.aof_last_write_errno (see the sketch below).
We are doing this because:
1. In the short-write case, we always set aof_last_write_errno
to ENOSPC, regardless of the `can_log` flag.
2. We always set aof_last_write_status to C_ERR on an AOF write
error (except for FSYNC_ALWAYS, where we exit), so there is a chance
that `aof_last_write_errno` is not right.
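A minimal, self-contained sketch of the intended pattern; the `server_sketch` struct, the `handleAofWriteError` helper, and the use of fprintf in place of serverLog are illustrative stand-ins, not the actual aof.c code. `can_log` only rate-limits the log line, while the error state is recorded unconditionally.
```c
#include <errno.h>
#include <stdio.h>
#include <string.h>

/* Simplified stand-ins for the real server state (illustrative only). */
enum { C_OK = 0, C_ERR = -1 };
static struct { int aof_last_write_errno; int aof_last_write_status; } server_sketch;

/* can_log only rate-limits the log line; the error state is always recorded. */
static void handleAofWriteError(int can_log, int write_errno) {
    if (can_log)
        fprintf(stderr, "Error writing to the AOF file: %s\n", strerror(write_errno));
    server_sketch.aof_last_write_errno = write_errno;   /* unconditional */
    server_sketch.aof_last_write_status = C_ERR;        /* unconditional */
}
```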
An innocent bug, or just a code cleanup.
This was harmless because we marked the parent command
with the SENTINEL flag, so populateCommandTable was OK.
And we also don't show the flags (SENTINEL and ONLY-SENTINEL)
in COMMAND INFO.
In this PR, we also add the same CMD_SENTINEL and CMD_ONLY_SENTINEL
flag checks when populating the sub-commands,
so that in the future it'll be possible to add some sub-commands to sentinel or sentinel-only but not others.
Fix a regression of the CLUSTER RESET command in Redis 7.0.
The cluster reset command format is:
CLUSTER RESET [HARD | SOFT]
According to the cluster reset command doc and code, the third argument is optional, so
the arity in the json file should be -2 instead of 3.
Added a test with RESET and RESET SOFT to guard against future regressions, as these cases were not covered.
Co-authored-by: Ubuntu <lucas.guang.yang1@huawei.com>
Co-authored-by: Oran Agra <oran@redislabs.com>
Co-authored-by: Binbin <binloveplay1314@qq.com>
When calling CLIENT INFO/LIST, and in various debug prints, Redis is printing
the number of pubsub channels / patterns the client is subscribed to.
With the addition of sharded pubsub, it would be useful to print the number of
keychannels the client is subscribed to as well.
The module API docs mention this macro, but it was not defined (so no one could have used it).
Instead of adding it as is, we decided to add a _V1 macro, so that if/when we someday extend this struct,
modules that use this API and don't need the extra fields will still use the old version
and remain compatible with older Redis versions (despite being compiled with a newer redismodule.h).
Since the ranges of `unsigned long long` and `long long` are different, we cannot read an
`unsigned long long` integer from a `RedisModuleString` with `RedisModule_StringToLongLong`.
So I added two new Redis Module APIs to support the conversion between these two types:
* `RedisModule_StringToULongLong`
* `RedisModule_CreateStringFromULongLong`
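A hedged usage sketch inside a module command handler; the command name, boilerplate, and error message are illustrative, and the new APIs are assumed to mirror the signatures of the existing LongLong variants.
```c
#include "redismodule.h"

/* ECHOU64 <value>: parse an unsigned 64-bit value and echo it back. */
int EchoU64_RedisCommand(RedisModuleCtx *ctx, RedisModuleString **argv, int argc) {
    if (argc != 2) return RedisModule_WrongArity(ctx);
    unsigned long long value;
    if (RedisModule_StringToULongLong(argv[1], &value) != REDISMODULE_OK)
        return RedisModule_ReplyWithError(ctx, "ERR value is not an unsigned integer");
    /* Round-trip the value back into a RedisModuleString for the reply. */
    RedisModuleString *s = RedisModule_CreateStringFromULongLong(ctx, value);
    return RedisModule_ReplyWithString(ctx, s);
}
```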
Signed-off-by: RinChanNOWWW <hzy427@gmail.com>
This commit has two topics.
## Passing config name and value in the same arg
In #10660 (Redis 7.0.1), when we added support for config values that start with the `--` prefix (one of the two topics of that PR),
we broke another pattern: `redis-server redis.conf "name value"`, passing both the config name
and its value in the same arg, see #10865.
This wasn't an intended change (i.e. we didn't realize this pattern used to work).
Although this is a wrong usage, we would still like to fix it.
Now we support something like:
```
src/redis-server redis.conf "--maxmemory '700mb'" "--maxmemory-policy volatile-lru" --proc-title-template --my--title--template --loglevel verbose
```
## Changes around --save
Also in this PR, we undo the breaking change we made in #10660 on purpose.
1. `redis-server redis.conf --save --loglevel verbose` (missing `save` argument before another argument).
In 7.0.1, it was throwing a wrong-argument error.
Now it will work and reset the save, similar to how it used to be in 7.0.0 and 6.2.x.
2. `redis-server redis.conf --loglevel verbose --save` (missing `save` argument as the last argument).
In 6.2, it did not reset the save, which was a bug (inconsistent with the previous bullet).
Now we make it work and reset the save as well (a bug fix).
This is harmless: we only restore mstate to make sure we
free the right pointer in freeClientMultiState, but it's
nicer to also sync that argv_len var back.
There are a lot of false-sharing cache misses at line 4013, inside the getIOPendingCount function.
The reason is that elements of the io_threads_pending array share the same cache line while being accessed from different
threads, so it is better to represent it as an array of structures with fields aligned to the cache line size.
This change should improve performance (in particular, it affects the latency metric; we saw up to 3% improvement).
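A minimal sketch of the cache-line padding approach; the 64-byte line size, type names, and counter count are illustrative assumptions rather than the exact networking.c definitions.
```c
#include <stdatomic.h>

#define CACHE_LINE_SIZE 64
#define IO_THREADS_SKETCH_NUM 8

/* Each counter is wrapped in a struct aligned (and thus padded) to a full
 * cache line, so writers on different threads no longer share a line. */
typedef struct __attribute__((aligned(CACHE_LINE_SIZE))) {
    atomic_ulong value;
} alignedCounterSketch;

static alignedCounterSketch io_threads_pending_sketch[IO_THREADS_SKETCH_NUM];

static inline unsigned long getIOPendingCountSketch(int i) {
    return atomic_load(&io_threads_pending_sketch[i].value);
}
```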
In 6.2.0, with the introduction of the REV subcommand in ZRANGE, there was a semantic shift in the arguments of ZRANGE when the REV sub-command is executed. Without the sub-command, `min` and `max` (the old names of the arguments) are appropriate, because if you pass the min value and the max value, everything works fine.
```bash
127.0.0.1:6379> ZADD myset 0 foo
(integer) 1
127.0.0.1:6379> ZADD myset 1 bar
(integer) 1
127.0.0.1:6379> ZRANGE myset 0 inf BYSCORE
1) "foo"
2) "bar"
```
However, if you add the `REV` subcommand, ordering the arguments as `min` `max` breaks the command:
```bash
127.0.0.1:6379> ZRANGE myset 0 inf BYSCORE REV
(empty array)
```
Why? Because `ZRANGE` with the `REV` sub-command expects the `max` first and the `min` second (because it's a reverse range, like `ZREVRANGEBYSCORE`):
```bash
127.0.0.1:6379> ZRANGE myset inf 0 BYSCORE REV
1) "bar"
2) "foo"
```
The new test added in #10891 can fail with a different error;
see the comment in networking.c saying:
```c
/* That's a best effort error message, don't check write errors.
* Note that for TLS connections, no handshake was done yet so nothing
* is written and the connection will just drop. */
```
In `CLUSTER MEET`, the bus-port argument was added in 11436b1,
for the cluster announce ip / port implementation, as part of
4.0-RC1.
And in #9389, we added a new cluster-port config and made the
cluster bus port configurable, as part of 7.0-RC1.
My maxclients config:
```
redis-cli config get maxclients
1) "maxclients"
2) "4064"
```
Before this bug was fixed, creating 4065 clients appeared to be successful, but only 4064 were actually created:
```
./redis-benchmark -c 4065 -I
Creating 4065 idle connections and waiting forever (Ctrl+C when done)
clients: 4065
```
Now:
```
./redis-benchmark -c 4065 -I
Creating 4065 idle connections and waiting forever (Ctrl+C when done)
Error from server: ERR max number of clients reached
./redis-benchmark -c 4064 -I
Creating 4064 idle connections and waiting forever (Ctrl+C when done)
clients: 4064
```