This resolves (1.viii) from
https://github.com/valkey-io/valkey/issues/43
> REDIS_FLAGS will be updated to SERVER_FLAGS. Maybe we should also allow
> REDIS_FLAGS -> SERVER_FLAGS as well, for an extra layer of compatibility.
---------
Signed-off-by: Roshan Khatri <rvkhatri@amazon.com>
Documentation references should use `Valkey`, while server and CLI
references all use lowercase `valkey`.
---------
Signed-off-by: Madelyn Olson <madelyneolson@gmail.com>
Update the contributor file to include a DCO, which we are going to require
for all incoming contributions as an indication that the contributor has
the rights to contribute the code.
I also fixed the formatting of the contribution HOWTO, since it was getting
mangled.
Signed-off-by: Madelyn Olson <madelyneolson@gmail.com>
In the ASAN CI, we found that the server may crash because of a NULL pointer in `kvstoreIncrementallyRehash`.
The reason is that we use a two-phase unlink in `dbGenericDelete`. After `kvstoreDictTwoPhaseUnlinkFind`,
the dict may be rehashing and have only one element left in ht[0] of `db->keys`.
When we delete the last element of `db->keys` while `db->keys` is rehashing, we may free the
dict in `kvstoreDictTwoPhaseUnlinkFree` without deleting its node in `kvs->rehashing`. We may then
use this freed pointer in `kvstoreIncrementallyRehash` from `serverCron` and crash.
This is indeed a use-after-free problem.
The fix is to call rehashingCompleted in dictRelease and dictEmpty, so that every call to
rehashingStarted is always matched with a rehashingCompleted.
A unit test is added to catch this consistently.
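A minimal sketch of the invariant, assuming dict.c's rehashingCompleted type callback and its existing `_dictClear` helper; the actual patch also touches dictEmpty and differs in details:

```c
/* Sketch (not the exact patch): before a dict that is still rehashing is
 * released, report completion so the owner (kvstore) can unlink it from
 * kvs->rehashing, and serverCron never touches a freed pointer. */
static void rehashingCompletedIfNeeded(dict *d) {
    if (!dictIsRehashing(d)) return;
    if (d->type->rehashingCompleted) d->type->rehashingCompleted(d);
}

void dictRelease(dict *d) {
    rehashingCompletedIfNeeded(d);   /* pairs the earlier rehashingStarted */
    _dictClear(d, 0, NULL);
    _dictClear(d, 1, NULL);
    zfree(d);
}
```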
---------
Co-authored-by: Oran Agra <oran@redislabs.com>
Co-authored-by: debing.sun <debing.sun@redis.com>
In `beginResultEmission`, -1 means the result length is not known in
advance. But after #12185, if we pass -1 to `zrangeResultBeginStore`, it
is converted to SIZE_MAX in `zsetTypeCreate`, which then tries to `dictExpand`.
Although `dictExpand` won't succeed because the size overflows, we'd
better avoid this wrong conversion.
This bug can be triggered when the source key of `zrangestore` doesn't exist
or when the `zrangestore` command is used with `byscore` or `bylex`.
The impact is that the destination keys are converted to use a skiplist instead of
a listpack.
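A rough sketch of the guard; the function and field names follow t_zset.c but the exact signatures are simplified, so treat this as illustrative:

```c
/* Illustrative only: clamp the "unknown length" sentinel before it is
 * treated as an unsigned pre-allocation hint. */
static void zrangeResultBeginStore(zrange_result_handler *handler, long length) {
    /* length == -1 means "size not known in advance"; casting it to
     * size_t would yield SIZE_MAX and make zsetTypeCreate() try to
     * pre-size the dict for an absurdly large zset. */
    size_t hint = (length >= 0) ? (size_t)length : 0;
    handler->dstobj = zsetTypeCreate(hint, 0);
}
```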
Users who abuse the Lua error_reply mechanism generate a new error object on each
error call, which can make server.errors grow without bound. This will
cause the server to block when calling INFO (we also return errorstats by
default).
To prevent the damage this can cause, when such misuse is detected we
print a warning log and disable errorstats to avoid adding more new
errors. Errorstats can be re-enabled via CONFIG RESETSTAT.
Because server.errors may be very large (it should be better now that we
have the limit), CONFIG RESETSTAT may block for a while. So in
resetErrorTableStats, we now try to lazyfree server.errors.
See the related discussion at the end of #8217.
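A rough sketch of the guard; the limit constant and the "enabled" flag name are illustrative, not the exact identifiers used in the patch:

```c
#define ERRORSTATS_LIMIT 128  /* hypothetical cap on distinct error names */

void incrementErrorCount(const char *fullerr, size_t namelen) {
    if (!server.errors_enabled) return;               /* illustrative flag */
    if (raxSize(server.errors) >= ERRORSTATS_LIMIT) {
        serverLog(LL_WARNING,
                  "Too many distinct error names; errorstats is disabled. "
                  "Use CONFIG RESETSTAT to re-enable it.");
        server.errors_enabled = 0;
        return;
    }
    /* ... existing code: look up / create the per-error counter for
     * (fullerr, namelen) in the server.errors rax and increment it ... */
}
```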
After #13072, there is a use-after-free error. In expireScanCallback, we
delete the dict, and then in dictScan we continue to use it,
for example by calling `dictResumeRehashing(d)` at the end, which caused the error.
In this PR, in freeDictIfNeeded, if the dict's pauserehash is set, we don't
delete the dict yet; when the scan returns, we try to delete it again.
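A sketch of the check, with the kvstore.c internals simplified (the field handling shown is not the exact code):

```c
/* An empty dict whose rehashing is paused is still being scanned or
 * iterated, so it must not be released yet; the scan / iterator path
 * retries the free once it is done. */
static void freeDictIfNeeded(kvstore *kvs, int didx) {
    dict *d = kvstoreGetDict(kvs, didx);
    if (!d || dictSize(d) != 0) return;
    if (d->pauserehash != 0) return;    /* a dictScan or safe iterator holds it */
    dictRelease(d);
    kvs->dicts[didx] = NULL;            /* internal bookkeeping, simplified */
}
```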
At the same time, we noticed that there are similar problems with iterators.
Elements may also be deleted during iteration, causing the dict
to be freed, so the iterator-related parts of the PR have also been modified.
dictResetIterator was also missing from the previous kvstoreIteratorNextDict;
we currently have no scenario in which elements are deleted during a kvstoreIterator
walk, but we handle it here as well to avoid future problems. Some simple
tests were added to verify the changes.
In addition, the modification in #13072 omitted initTempDb and emptyDbAsync,
and they were also added. This PR also removes the slow flag from the expire
test (it takes 1.3s) so that problems can be caught in CI in the future.
This change to the Makefile makes it possible to opt out of
`-fno-omit-frame-pointer` added in #12973 and `-flto` (#11350). Those
features were implemented by conditionally modifying the `REDIS_CFLAGS`
and `REDIS_LDFLAGS` variables. Historically, those variables provided a
way for users to pass options to the compiler and linker unchanged.
Instead of conditionally appending optimization flags to REDIS_CFLAGS
and REDIS_LDFLAGS, I want to append them to the OPTIMIZATION variable.
Later in the Makefile, we have `OPT=$(OPTIMIZATION)` (meaning
OPTIMIZATION is only a default for OPT, but OPT can be overridden by the
user), and later the flags are combined like this:
    FINAL_CFLAGS=$(STD) $(WARN) $(OPT) $(DEBUG) $(CFLAGS) $(REDIS_CFLAGS)
    FINAL_LDFLAGS=$(LDFLAGS) $(OPT) $(REDIS_LDFLAGS) $(DEBUG)
This makes it possible for the user to override all optimization
flags with e.g. `make OPT=-O1` or just `make OPT=`.
For some reason `-O3` was also already added to REDIS_LDFLAGS by default
in #12339, so I added OPT to FINAL_LDFLAGS to avoid more complex logic
(such as introducing a separate LD_OPT variable).
Currently (following #11695 and #12822), the keys kvstore and the expires
kvstore are both flagged with ON_DEMAND, which means that a cluster node
only allocates a dict when the slot is assigned to it and populated,
but on the other hand, when the slot is unassigned, the dict
remains allocated.
We considered releasing the dict when the slot is unassigned, but it
causes complications on replicas. On the other hand, from the benchmarks
we conducted, it looks like the performance impact of releasing the
dict when it becomes empty and re-allocating it when a key is added
again isn't huge.
This PR adds KVSTORE_FREE_EMPTY_DICTS to the cluster-mode keys / expires
kvstores.
The impact is about a 2% performance drop, for this hopefully
uncommon scenario.
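A sketch of how the flag could be applied when the databases are created; the full name of the ON_DEMAND flag, the dictType names, and the exact initialization code are assumptions:

```c
/* Inside the per-database initialization loop in server.c (simplified). */
int slot_count_bits = 0;
int flags = KVSTORE_ALLOCATE_DICTS_ON_DEMAND;        /* assumed flag name */
if (server.cluster_enabled) {
    slot_count_bits = 14;                            /* log2(16384 slots) */
    flags |= KVSTORE_FREE_EMPTY_DICTS;               /* free a dict once it's empty */
}
server.db[j].keys = kvstoreCreate(&dbDictType, slot_count_bits, flags);
server.db[j].expires = kvstoreCreate(&dbExpiresDictType, slot_count_bits, flags);
```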
---------
Co-authored-by: Oran Agra <oran@redislabs.com>
In some cases, users will abuse Lua EVAL. Each EVAL call generates
a new Lua script, which is added to the Lua interpreter and cached
by redis-server, consuming a large amount of memory over time.
Since EVAL is mostly the one that abuses the Lua cache, and these
scripts won't have pipeline issues (i.e. the script won't disappear
unexpectedly and cause errors like it would with SCRIPT LOAD and EVALSHA),
we implement a plain FIFO LRU eviction only for these scripts (not for
scripts loaded with SCRIPT LOAD).
### Implementation notes:
When not abused we'll probably have fewer than 100 scripts, and when
abused we'll have many thousands, so we use a hard-coded limit of 500
scripts. Considering that we don't have many scripts, then unlike
keys, we don't need to worry about the memory usage of keeping a true
sorted LRU linked list. We compute the SHA of each script anyway
and put the script in a dict; we can store a listNode there and use
it for quick removal and re-insertion into an LRU list each time the
script is used.
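A sketch of that bookkeeping, with illustrative identifiers (the real code lives in the scripting subsystem and differs in structure):

```c
#define LRU_EVAL_SCRIPTS_LIMIT 500   /* the hard-coded cap mentioned above */

/* Simplified entry: the scripts dict maps sha1 -> entry, and the entry
 * remembers its node in the LRU list for O(1) move-to-tail. */
typedef struct evalScriptEntry {
    sds body;
    listNode *lru_node;   /* NULL for scripts loaded with SCRIPT LOAD */
} evalScriptEntry;

/* Called whenever an EVAL script is added or reused. The LRU list stores
 * sha1 strings; the head is the least recently used script. */
static void evalScriptTouch(dict *scripts, list *lru, evalScriptEntry *entry) {
    if (entry->lru_node) {
        listUnlinkNode(lru, entry->lru_node);      /* O(1) removal */
        listLinkNodeTail(lru, entry->lru_node);    /* now most recently used */
    }
    /* Evict from the head once over the cap. */
    while (listLength(lru) > LRU_EVAL_SCRIPTS_LIMIT) {
        listNode *oldest = listFirst(lru);
        sds oldest_sha = listNodeValue(oldest);
        dictDelete(scripts, oldest_sha);  /* assumed to free the body and
                                             unload it from Lua via the
                                             dict's value destructor */
        listDelNode(lru, oldest);
    }
}
```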
### New interfaces:
At the same time, a new `evicted_scripts` field is added to
INFO, which represents the number of evicted EVAL scripts. Users
can check it to see if they are abusing EVAL.
### Benchmark:
`./src/redis-benchmark -P 10 -n 1000000 -r 10000000000 eval "return __rand_int__" 0`
This simple EVAL-abuse benchmark creates 1 million EVAL
scripts. Performance improved by 50%, and the max latency
dropped from 500ms to 13ms (the old latency was probably caused by table
expansion inside Lua when the number of scripts is large). In INFO memory,
it used to consume 120MB (server cache) + 310MB (Lua engine), but now
it only consumes 70KB (server cache) + 210KB (Lua engine) because of
the script eviction.
For the non-abusive case of about 100 EVAL scripts, there's no noticeable
change in performance or memory usage.
### Unlikely potentially breaking change:
In theory, a user can load a script with EVAL and then use EVALSHA to
call it (by calculating the SHA1 value on the client side). If we read
the docs carefully we may realize it's a valid scenario, but we suppose
it's extremely rare. So it may happen that EVALSHA acts on a script
created by EVAL, the script is evicted, and EVALSHA returns a NOSCRIPT
error; that is, if you have more than 500 scripts being used in the same
transaction / pipeline.
This solves the second point in #13102.