Now it prints deduplicated lines, shortening the table dramatically. It is also easier to import the output into Excel.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
This should fix the deadlock bugs that happen in some rare cases when using the epoll proactor.
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
Add SADDEX <key> <seconds> member [member ...].
Provides expiry semantics for set members with seconds resolution.
Important things to note:
1. The expiry is passive only, so if nobody touches the set, its members are kept.
2. SCARD provides an upper-bound estimate of the set size for sets holding expiring members, because it returns a cached size and does not scan all members to check whether they have expired. For regular sets it is exact, of course.
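For illustration, a hypothetical session (key and member names are made up):

  SADDEX colors 10 red green blue   # members expire ~10 seconds after insertion
  SCARD colors                      # cached size; an upper bound once members start expiring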
Fixes #335
This fixes #322, though some memory is still retained by the memory allocator.
Tested by running:
1. debug populate 20000000 key 256
2. info memory
3. flushdb
4. info memory
Verified that RSS memory usage decreased from 5.07GB to 1.71GB.
Fixes #325. Apparently, before the fix, AVX instructions were emitted right at startup, so the server crashed before it had a chance to install the SIGILL handler.
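A minimal sketch of the intended ordering, with a hypothetical handler body; the essential point is that the handler is installed before any code path that may execute AVX instructions:

  #include <csignal>
  #include <unistd.h>

  static void SigIllHandler(int) {
    // Async-signal-safe: report the missing CPU feature and exit cleanly
    // instead of dying with an unhandled illegal-instruction fault.
    constexpr char kMsg[] = "This CPU lacks a required instruction set\n";
    write(STDERR_FILENO, kMsg, sizeof(kMsg) - 1);
    _exit(1);
  }

  int main() {
    signal(SIGILL, SigIllHandler);  // must run before any AVX code executes
    // ... only now is it safe to run code compiled with AVX ...
  }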
Partially implements #6.
Before, each shard lazily updated the clock used for expiry evaluation. Now the clock value is set during the transaction scheduling phase and is assigned to each transaction. From now on, DbSlice methods use this value, passed via the DbContext argument, when checking whether an entry has expired.
Also implemented a transactionally consistent TIME command and verified that the time is the same during the transaction. See
https://ably.com/blog/redis-keys-do-not-expire-atomically for motivation.
Still have not implemented any Lamport-style updates for background processes
(not sure it is the right way to proceed).
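A rough sketch of the idea; the field and function names below are assumptions, not the actual code:

  #include <cstdint>

  // The clock value is captured once, during transaction scheduling, and
  // travels with the transaction to every shard it touches.
  struct DbContext {
    uint64_t time_now_ms;  // hypothetical field: "now" as seen at scheduling time
  };

  bool IsExpired(const DbContext& cntx, uint64_t expire_at_ms) {
    // Every shard compares against the same scheduling-time clock, so a key
    // cannot appear alive on one shard and already expired on another within
    // the same transaction.
    return expire_at_ms != 0 && expire_at_ms <= cntx.time_now_ms;
  }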
fix(bug): dashtable split crashes when moving items from the old segment.
Segment's Insert function uses an opportunistic heuristic that chooses the bucket with fewer items among the two available.
This creates a problem for split, which goes from the smaller buckets to the biggest one and moves items to the new segment.
In rare cases, the heuristic fills up the next bucket with items that could reside in earlier buckets, and then that bucket
does not have space for its own items. Eventually, items that had enough space in the old segment do not find space in the new one.
The fix is to adopt a conservative approach during split and try the home bucket first; the home bucket is usually the smaller one.
This approach should be optimal because Split starts with the smaller buckets first (this can be proven iteratively).
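A toy model of the policy change; all names and types are hypothetical, only the bucket-selection order reflects the fix:

  #include <array>
  #include <cstddef>
  #include <optional>

  constexpr size_t kSlots = 4;

  struct Bucket {
    std::array<std::optional<int>, kSlots> slots;

    bool TryInsert(int v) {
      for (auto& s : slots)
        if (!s) { s = v; return true; }
      return false;  // bucket is full
    }
  };

  struct Segment {
    std::array<Bucket, 60> buckets;

    // Conservative insert used while moving items into the new segment:
    // try the home bucket first so that later buckets keep room for their
    // own items; only then spill into the neighbor.
    bool InsertDuringSplit(int v, size_t home_id) {
      if (buckets[home_id].TryInsert(v))
        return true;
      return buckets[(home_id + 1) % buckets.size()].TryInsert(v);
    }
  };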
Signed-off-by: Roman Gershman <roman@dragonflydb.io>
In more detail, RdbSaver used AlignedBuffer, which writes into io::Sink in chunks of 4KB.
That is great for direct file I/O, but bad for sockets, which would receive 4KB blocks with garbage
at the end. I improved and actually simplified the logic around this: AlignedBuffer is now just
another Sink that is passed into the serializer when writing to files; when sending to sockets,
a socket sink is passed instead.
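Schematically, with toy types standing in for the real io::Sink interface (the actual Dragonfly signatures differ):

  #include <algorithm>
  #include <cstddef>
  #include <cstring>

  struct Sink {
    virtual ~Sink() = default;
    virtual void Write(const char* buf, size_t len) = 0;
  };

  // Aligning wrapper: accumulates writes and flushes only complete 4KB
  // chunks. The point of the refactor is that this is "just another Sink":
  // the serializer writes to a Sink and never knows whether alignment
  // happens underneath (file path) or not (socket path).
  class AlignedSink : public Sink {
   public:
    explicit AlignedSink(Sink* upstream) : upstream_(upstream) {}

    void Write(const char* buf, size_t len) override {
      while (len > 0) {
        size_t take = std::min(len, kChunk - filled_);
        memcpy(buffer_ + filled_, buf, take);
        filled_ += take;
        buf += take;
        len -= take;
        if (filled_ == kChunk) {
          upstream_->Write(buffer_, kChunk);
          filled_ = 0;
        }
      }
    }

   private:
    static constexpr size_t kChunk = 4096;
    char buffer_[kChunk];
    size_t filled_ = 0;
    Sink* upstream_;
  };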
Also, many other unrelated changes are grouped into this pretty big CR:
1. dashtable readability improvements.
2. Move methods from facade::ConnectionContext into facade::Service,
making ConnectionContext a dumb object.
3. Optionally allow the journal to be memory-only (not backed by disk)
by using a ring buffer to store the last k entries in each journal slice (see the sketch
after this list). Also renamed journal_shard to journal_slice, because the journal is present
in each DF thread and not only in its shards.
4. Introduce journal::Entry that will consolidate any store change that happens in the thread.
5. Introduce GetRandomHex utility function.
6. Introduce two hooks: ServerFamily::OnClose, which is called when a connection is closed,
and ServerFamily::BreakOnShutdown, which is called when the process exits and any background fibers
need to break early.
7. Pull some noisy info logs out of rdb_load class.
8. The Snapshot class now has the ability to subscribe to journal changes, so it can include concurrent changes in the snapshot.
Currently only journal::Op::VAL is supported (it is part of the RDB format anyway).
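For item 3, a minimal sketch of the memory-only mode, assuming a fixed-capacity ring per journal slice; the entry layout and names are illustrative, not the real journal::Entry:

  #include <cstddef>
  #include <cstdint>
  #include <string>
  #include <utility>
  #include <vector>

  struct Entry {
    uint64_t lsn;         // monotonically increasing sequence number
    std::string payload;  // serialized store change
  };

  // Memory-only journal slice: keeps only the last k entries, overwriting
  // the oldest one on each append instead of touching the disk.
  class JournalSlice {
   public:
    explicit JournalSlice(size_t k) : ring_(k) {}

    void Append(Entry e) {
      ring_[next_ % ring_.size()] = std::move(e);
      ++next_;
    }

   private:
    std::vector<Entry> ring_;
    uint64_t next_ = 0;
  };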
Signed-off-by: Roman Gershman <roman@dragonflydb.io>