# Dragonfly

A novel memory store that supports Redis and Memcached commands. For a more detailed status of what's implemented, see below.
Features include:
- High throughput reaching millions of QPS on a single node.
- TLS support.
- Pipelining mode.
- A novel cache design, which does not require specifying eviction policies.
- Memory efficiency that can save 20-40% for regular workloads, and even more for cache-like workloads.
## Running
Dragonfly requires a Linux kernel version 5.11 or later. Ubuntu 20.04.4 and 22.04 fit this requirement.
If built locally, just run:

```shell
./dragonfly --logtostderr
```
or with Docker:

```shell
docker pull ghcr.io/dragonflydb/dragonfly:latest && \
    docker tag ghcr.io/dragonflydb/dragonfly:latest dragonfly

docker run --network=host --rm dragonfly
```
Some systems may require adding `--ulimit memlock=-1` to the `docker run` options.
We support Redis command arguments where applicable. For example, you can run: `docker run --network=host --rm dragonfly --requirepass=foo --bind localhost`.
Dragonfly currently supports the following command-line options:

- `port`
- `bind`
- `requirepass`
- `maxmemory`
- `memcache_port` - to enable a Memcached-compatible API on this port. Disabled by default.
- `dir` - by default, the Dragonfly Docker image uses the `/data` folder for snapshotting. You can use the `-v` Docker option to map it to your host folder.
- `dbfilename`
- `dbnum` - maximum number of supported databases for `select`.
- `keys_output_limit` - maximum number of keys returned by the `keys` command. Default is 8192. We truncate the output to avoid a blowup in memory when fetching too many keys.
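To illustrate the `keys_output_limit` behavior described above, here is a hypothetical Python sketch of the truncation semantics (the function name and structure are illustrative only, not Dragonfly's actual implementation):

```python
# Hypothetical sketch of keys_output_limit-style truncation; not
# Dragonfly's actual code, just a model of the documented behavior.
KEYS_OUTPUT_LIMIT = 8192  # documented default

def keys_with_limit(all_keys, limit=KEYS_OUTPUT_LIMIT):
    """Return at most `limit` keys, bounding the memory used by a KEYS reply."""
    out = []
    for key in all_keys:
        if len(out) >= limit:
            break  # truncate instead of materializing a huge reply
        out.append(key)
    return out

keys = [f"key:{i}" for i in range(10_000)]
print(len(keys_with_limit(keys)))  # 8192
```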
For more options, like logs management or TLS support, run `dragonfly --help`.
## Building from source
I've tested the build on Ubuntu 20.04+. It requires CMake, Ninja, Boost, and libunwind8-dev.
```shell
# Install dependencies
sudo apt install ninja-build libunwind-dev libboost-fiber-dev libssl-dev

git clone --recursive https://github.com/dragonflydb/dragonfly && cd dragonfly

# Another way to install dependencies
./helio/install-dependencies.sh

# Configure the build
./helio/blaze.sh -release

# Build
cd build-opt && ninja dragonfly
```
## Roadmap and milestones
We are planning to implement most of the Redis 1.x and 2.8 APIs (except replication) before we release the project as source-available on GitHub. In addition, we will support efficient expiry (TTL) and cache eviction algorithms.
The next milestone afterwards will be implementing `redis -> dragonfly` and `dragonfly <-> dragonfly` replication.
For Dragonfly-native replication, we are planning to design a distributed log format that will support an order of magnitude higher speeds when replicating.
Commands that I wish to implement after releasing the initial code:
- PUNSUBSCRIBE
- PSUBSCRIBE
- HYPERLOGLOG
- SCRIPT DEBUG
- OBJECT
- DUMP/RESTORE
- CLIENT
Their priority will be determined based on requests from the community. Also, I will omit keyspace notifications for now; for that API, I would first like to deep-dive and learn the exact needs.
## Milestone - "Source Available"
### API 1.0

- String Family
  - SET
  - SETNX
  - GET
  - DECR
  - INCR
  - DECRBY
  - GETSET
  - INCRBY
  - MGET
  - MSET
  - MSETNX
  - SUBSTR
- Generic Family
  - DEL
  - ECHO
  - EXISTS
  - EXPIRE
  - EXPIREAT
  - KEYS
  - PING
  - RENAME
  - RENAMENX
  - SELECT
  - TTL
  - TYPE
  - SORT
- Server Family
  - AUTH
  - QUIT
  - DBSIZE
  - BGSAVE
  - SAVE
  - DEBUG
  - EXEC
  - FLUSHALL
  - FLUSHDB
  - INFO
  - MULTI
  - SHUTDOWN
  - LASTSAVE
  - SLAVEOF/REPLICAOF
  - SYNC
- Set Family
  - SADD
  - SCARD
  - SDIFF
  - SDIFFSTORE
  - SINTER
  - SINTERSTORE
  - SISMEMBER
  - SMOVE
  - SPOP
  - SRANDMEMBER
  - SREM
  - SMEMBERS
  - SUNION
  - SUNIONSTORE
- List Family
  - LINDEX
  - LLEN
  - LPOP
  - LPUSH
  - LRANGE
  - LREM
  - LSET
  - LTRIM
  - RPOP
  - RPOPLPUSH
  - RPUSH
- SortedSet Family
  - ZADD
  - ZCARD
  - ZINCRBY
  - ZRANGE
  - ZRANGEBYSCORE
  - ZREM
  - ZREMRANGEBYSCORE
  - ZREVRANGE
  - ZSCORE
- Not sure whether these are required for the initial release:
  - BGREWRITEAOF
  - MONITOR
  - RANDOMKEY
  - MOVE
### API 2.0

- List Family
  - BLPOP
  - BRPOP
  - BRPOPLPUSH
  - LINSERT
  - LPUSHX
  - RPUSHX
- String Family
  - SETEX
  - APPEND
  - PREPEND (dragonfly specific)
  - BITCOUNT
  - BITFIELD
  - BITOP
  - BITPOS
  - GETBIT
  - GETRANGE
  - INCRBYFLOAT
  - PSETEX
  - SETBIT
  - SETRANGE
  - STRLEN
- HashSet Family
  - HSET
  - HMSET
  - HDEL
  - HEXISTS
  - HGET
  - HMGET
  - HLEN
  - HINCRBY
  - HINCRBYFLOAT
  - HGETALL
  - HKEYS
  - HSETNX
  - HVALS
  - HSCAN
- PubSub Family
  - PUBLISH
  - PUBSUB
  - PUBSUB CHANNELS
  - SUBSCRIBE
  - UNSUBSCRIBE
  - PSUBSCRIBE
  - PUNSUBSCRIBE
- Server Family
  - WATCH
  - UNWATCH
  - DISCARD
  - CLIENT LIST/SETNAME
  - CLIENT KILL/UNPAUSE/PAUSE/GETNAME/REPLY/TRACKINGINFO
  - COMMAND
  - COMMAND COUNT
  - COMMAND GETKEYS/INFO
  - CONFIG GET/REWRITE/SET/RESETSTAT
  - MIGRATE
  - ROLE
  - SLOWLOG
  - PSYNC
  - TIME
  - LATENCY...
- Generic Family
  - SCAN
  - PEXPIREAT
  - PEXPIRE
  - DUMP
  - EVAL
  - EVALSHA
  - OBJECT
  - PERSIST
  - PTTL
  - RESTORE
  - SCRIPT LOAD/EXISTS
  - SCRIPT DEBUG/KILL/FLUSH
- Set Family
  - SSCAN
- Sorted Set Family
  - ZCOUNT
  - ZINTERSTORE
  - ZLEXCOUNT
  - ZRANGEBYLEX
  - ZRANK
  - ZREMRANGEBYLEX
  - ZREMRANGEBYRANK
  - ZREVRANGEBYSCORE
  - ZREVRANK
  - ZUNIONSTORE
  - ZSCAN
- HYPERLOGLOG Family
  - PFADD
  - PFCOUNT
  - PFMERGE
### Memcached API
- set
- get
- replace
- add
- stats (partial)
- append
- prepend
- delete
- flush_all
- incr
- decr
- version
- quit
Random commands we implemented as decorators along the way:
- ROLE (2.8) decorator for master without replicas
- UNLINK (4.0) decorator for DEL command
- BGSAVE (decorator for save)
- FUNCTION FLUSH (does nothing)
## Milestone "Stability"
APIs 3, 4, and 5, without cluster support, modules, memory introspection commands, geo commands, keyspace notifications, or streams. Design config support. ~10-20 commands overall. We will probably implement cluster-API decorators to allow cluster-configured clients to connect to a single instance.
- HSTRLEN
## Design decisions along the way
### Expiration deadlines with relative accuracy
Expiration ranges are limited to ~4 years. Moreover, expiration deadlines with millisecond precision (PEXPIRE, PSETEX, etc.) are rounded to the nearest second for deadlines greater than 134217727 ms (approximately 37 hours). Such rounding introduces less than 0.001% error, which I hope is acceptable for large ranges. If it breaks your use cases, talk to me or open an issue and explain your case.
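The numbers above can be sanity-checked with a short Python sketch. The rounding function here is an illustrative model of the documented behavior, not Dragonfly's actual code; the observation that 134217727 equals 2^27 - 1 is an arithmetic fact, not a claim about the internal representation:

```python
# Sanity check of the documented expiry rounding. 134217727 ms happens to
# be 2**27 - 1; round_deadline_ms is an illustrative model, not Dragonfly code.
THRESHOLD_MS = 134_217_727

def round_deadline_ms(deadline_ms: int) -> int:
    # Deadlines beyond the threshold lose sub-second precision.
    if deadline_ms <= THRESHOLD_MS:
        return deadline_ms
    return round(deadline_ms / 1000) * 1000

print(THRESHOLD_MS == 2**27 - 1)                # True
print(f"{THRESHOLD_MS / 3_600_000:.1f} hours")  # 37.3 hours
# Worst-case rounding error is 500 ms, so the relative error is at most:
print(f"{500 / THRESHOLD_MS:.6%}")              # 0.000373%, well under 0.001%
```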
For more detailed differences between this and Redis implementations, see here.