Compare commits

3 Commits

| Author | SHA1 | Date |
|---|---|---|
| Roman Gershman | b61bf1ea1c | |
| Roman Gershman | a7a3c89ffa | |
| Roman Gershman | 4fdde2f62b | |
@ -25,7 +25,7 @@ If applicable, add screenshots to help explain your problem.

**Environment (please complete the following information):**
 - OS: [ubuntu 20.04]
 - Kernel: # Command: `uname -a`
 - Kernel: [MUST BE 5.10 or greater] # Command: `uname -a`
 - Containerized?: [Bare Metal, Docker, Docker Compose, Docker Swarm, Kubernetes, Other]
 - Dragonfly Version: [e.g. 0.3.0]
@ -131,11 +131,4 @@ jobs:
uses: CasperWA/push-protected@v2
with:
token: ${{ secrets.DRAGONFLY_TOKEN }}
branch: main

- name: Discord notification
env:
DISCORD_WEBHOOK: ${{ secrets.DISCORD_WEBHOOK }}
uses: Ilshidur/action-discord@0c4b27844ba47cb1c7bee539c8eead5284ce9fa9
with:
args: 'DragonflyDB version [${{ env.TAG_NAME }}](https://github.com/dragonflydb/dragonfly/releases/tag/v${{ env.TAG_NAME }}) has been released 🎉'
branch: main
@ -24,5 +24,3 @@ include_directories(helio)

add_subdirectory(helio)
add_subdirectory(src)

include(cmake/Packing.cmake)
@ -1,15 +1,8 @@
# Contributors (alphabetical by surname)

* **[Amir Alperin](https://github.com/iko1)**
* **[Philipp Born](https://github.com/tamcore)**
  * Helm Chart
* **[Meng Chen](https://github.com/matchyc)**
* **[Yuxuan Chen](https://github.com/YuxuanChen98)**
* **[Redha Lhimeur](https://github.com/redhal)**
* **[Braydn Moore](https://github.com/braydnm)**
* **[Logan Raarup](https://github.com/logandk)**
* **[Ryan Russell](https://github.com/ryanrussell)**
  * Docs & Code Readability
* **[Ali-Akber Saifee](https://github.com/alisaifee)**
* **[Elle Y](https://github.com/inohime)**
* **[ATM SALEH](https://github.com/ATM-SALEH)**
README.md
@ -81,8 +81,9 @@ For more info about memory efficiency in Dragonfly see [dashtable doc](/docs/das

## Running the server

Dragonfly runs on linux. We advice running it on linux version 5.11 or later
but you can also run Dragonfly on older kernels as well.
Dragonfly runs on linux. It uses relatively new linux specific [io-uring API](https://github.com/axboe/liburing)
for I/O, hence it requires Linux version 5.10 or later.
Debian/Bullseye, Ubuntu 20.04.4 or later fit these requirements.


### With docker:
@ -126,13 +127,13 @@ Dragonfly supports common redis arguments where applicable.
For example, you can run: `dragonfly --requirepass=foo --bind localhost`.

Dragonfly currently supports the following Redis-specific arguments:
* `port` redis connection port, default: 6379
* `bind` localhost to only allow locahost connections, Public IP ADDRESS , to allow connections **to that ip** address (aka from outside too)
* `requirepass` password for AUTH authentication, default: ""
* `maxmemory` Limit on maximum-memory (in bytes) that is used by the database.0 - means the program will automatically determine its maximum memory usage. default: 0
* `dir` - by default, dragonfly docker uses `/data` folder for snapshotting. the CLI uses: ""
* `port`
* `bind`
* `requirepass`
* `maxmemory`
* `dir` - by default, dragonfly docker uses `/data` folder for snapshotting.
You can use `-v` docker option to map it to your host folder.
* `dbfilename` the filename to save/load the DB. default: "dump";
* `dbfilename`

In addition, it has Dragonfly specific arguments options:
* `memcache_port` - to enable memcached compatible API on this port. Disabled by default.
@ -140,10 +141,8 @@ In addition, it has Dragonfly specific arguments options:
`keys` is a dangerous command. We truncate its result to avoid blowup in memory when fetching too many keys.
* `dbnum` - maximum number of supported databases for `select`.
* `cache_mode` - see [Cache](#novel-cache-design) section below.
* `hz` - key expiry evaluation frequency. Default is 1000. Lower frequency uses less cpu when
* `hz` - key expiry evaluation frequency. Default is 1000. Lower frequency uses less cpu when
idle at the expense of precision in key eviction.
* `save_schedule` - glob spec for the UTC time to save a snapshot which matches HH:MM (24h time). default: ""
* `keys_output_limit` - Maximum number of keys output by keys command. default: 8192


for more options like logs management or tls support, run `dragonfly --help`.
@ -203,8 +202,7 @@ Right now it does not have much info but in the future we are planning to add th
debugging and management info. If you go to `:6379/metrics` url you will see some prometheus
compatible metrics.

The Prometheus exported metrics are compatible with the Grafana dashboard [see here](tools/local/monitoring/grafana/provisioning/dashboards/dashboard.json).

The Prometheus exported metrics are compatible with the Grafana dashboard [see here](examples/grafana/dashboard.json).

Important! Http console is meant to be accessed within a safe network.
If you expose Dragonfly's TCP port externally, it is advised to disable the console
@ -1,39 +0,0 @@
# Packages the dragonfly binary into a .deb package
# Use this to generate .deb package from build directory
# cpack -G DEB
#
# Resulting packages can be found under _packages/

set(CPACK_PACKAGE_NAME "dragonflydb"
    CACHE STRING "dragonflydb"
)

set(CPACK_PACKAGE_DESCRIPTION_SUMMARY "A modern replacement for Redis and Memcached"
    CACHE STRING "A modern replacement for Redis and Memcached"
)

set(CPACK_PACKAGE_VENDOR "DragonflyDB")

set(CPACK_OUTPUT_FILE_PREFIX "${CMAKE_SOURCE_DIR}/_packages")

set(CPACK_PACKAGING_INSTALL_PREFIX "/usr/share/dragonfly")

set(CPACK_PACKAGE_VERSION_MAJOR ${PROJECT_VERSION_MAJOR})
set(CPACK_PACKAGE_VERSION_MINOR ${PROJECT_VERSION_MINOR})
set(CPACK_PACKAGE_VERSION_PATCH ${PROJECT_VERSION_PATCH})

set(CPACK_PACKAGE_CONTACT "support@dragonflydb.io")
set(CPACK_DEBIAN_PACKAGE_MAINTAINER "DragonflyDB maintainers")

set(CPACK_RESOURCE_FILE_LICENSE "${CMAKE_CURRENT_SOURCE_DIR}/LICENSE.md")
set(CPACK_RESOURCE_FILE_README "${CMAKE_CURRENT_SOURCE_DIR}/README.md")

set(CPACK_COMPONENTS_GROUPING ALL_COMPONENTS_IN_ONE)

set(CPACK_DEB_COMPONENT_INSTALL YES)

install(TARGETS dragonfly DESTINATION . COMPONENT dragonfly)

set(CPACK_COMPONENTS_ALL dragonfly)

include(CPack)
@ -15,10 +15,10 @@ type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: v0.9.1
version: v0.6.0

# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "v0.9.1"
appVersion: "v0.6.0"
@ -71,8 +71,8 @@ spec:
{{- end }}
{{- if .Values.tls.enabled }}
- "--tls"
- "--tls_cert_file=/etc/dragonfly/tls/tls.crt"
- "--tls_key_file=/etc/dragonfly/tls/tls.key"
- "--tls_client_cert_file=/etc/dragonfly/tls/tls.crt"
- "--tls_client_key_file=/etc/dragonfly/tls/tls.key"
{{- end }}
resources:
{{- toYaml .Values.resources | nindent 12 }}
@ -74,8 +74,8 @@ spec:
{{- end }}
{{- if .Values.tls.enabled }}
- "--tls"
- "--tls_cert_file=/etc/dragonfly/tls/tls.crt"
- "--tls_key_file=/etc/dragonfly/tls/tls.key"
- "--tls_client_cert_file=/etc/dragonfly/tls/tls.crt"
- "--tls_client_key_file=/etc/dragonfly/tls/tls.key"
{{- end }}
resources:
{{- toYaml .Values.resources | nindent 12 }}
@ -22,7 +22,7 @@ with respect to Memcached and Redis APIs.
|
|||
### API 1
|
||||
- [X] String family
|
||||
- [X] SET
|
||||
- [X] SETNX
|
||||
- [ ] SETNX
|
||||
- [X] GET
|
||||
- [X] DECR
|
||||
- [X] INCR
|
||||
|
@ -40,7 +40,6 @@ with respect to Memcached and Redis APIs.
|
|||
- [X] EXPIRE
|
||||
- [X] EXPIREAT
|
||||
- [X] KEYS
|
||||
- [X] MOVE
|
||||
- [X] PING
|
||||
- [X] RENAME
|
||||
- [X] RENAMENX
|
||||
|
@ -106,6 +105,7 @@ with respect to Memcached and Redis APIs.
|
|||
- [ ] BGREWRITEAOF
|
||||
- [ ] MONITOR
|
||||
- [ ] RANDOMKEY
|
||||
- [ ] MOVE
|
||||
|
||||
### API 2
|
||||
- [X] List Family
|
||||
|
@ -119,15 +119,15 @@ with respect to Memcached and Redis APIs.
|
|||
- [X] SETEX
|
||||
- [X] APPEND
|
||||
- [X] PREPEND (dragonfly specific)
|
||||
- [x] BITCOUNT
|
||||
- [ ] BITCOUNT
|
||||
- [ ] BITFIELD
|
||||
- [x] BITOP
|
||||
- [ ] BITOP
|
||||
- [ ] BITPOS
|
||||
- [x] GETBIT
|
||||
- [ ] GETBIT
|
||||
- [X] GETRANGE
|
||||
- [X] INCRBYFLOAT
|
||||
- [X] PSETEX
|
||||
- [x] SETBIT
|
||||
- [ ] SETBIT
|
||||
- [X] SETRANGE
|
||||
- [X] STRLEN
|
||||
- [X] HashSet Family
|
||||
|
@ -154,8 +154,8 @@ with respect to Memcached and Redis APIs.
|
|||
- [X] PSUBSCRIBE
|
||||
- [X] PUNSUBSCRIBE
|
||||
- [X] Server Family
|
||||
- [X] WATCH
|
||||
- [X] UNWATCH
|
||||
- [ ] WATCH
|
||||
- [ ] UNWATCH
|
||||
- [X] DISCARD
|
||||
- [X] CLIENT LIST/SETNAME
|
||||
- [ ] CLIENT KILL/UNPAUSE/PAUSE/GETNAME/REPLY/TRACKINGINFO
|
||||
|
@ -173,11 +173,11 @@ with respect to Memcached and Redis APIs.
|
|||
- [X] SCAN
|
||||
- [X] PEXPIREAT
|
||||
- [ ] PEXPIRE
|
||||
- [x] DUMP
|
||||
- [ ] DUMP
|
||||
- [X] EVAL
|
||||
- [X] EVALSHA
|
||||
- [ ] OBJECT
|
||||
- [x] PERSIST
|
||||
- [ ] PERSIST
|
||||
- [X] PTTL
|
||||
- [ ] RESTORE
|
||||
- [X] SCRIPT LOAD/EXISTS
|
||||
|
@ -202,22 +202,7 @@ with respect to Memcached and Redis APIs.
|
|||
- [ ] PFMERGE
|
||||
|
||||
### API 3
|
||||
- [ ] Generic Family
|
||||
- [ ] TOUCH
|
||||
- [X] HashSet Family
|
||||
- [X] HSTRLEN
|
||||
- [X] Server Family
|
||||
- [ ] CLIENT REPLY
|
||||
- [X] REPLCONF
|
||||
- [ ] WAIT
|
||||
|
||||
### API 4
|
||||
- [X] Generic Family
|
||||
- [X] UNLINK
|
||||
- [ ] Server Family
|
||||
- [ ] MEMORY USAGE/STATS/PURGE/DOCTOR
|
||||
- [ ] SWAPDB
|
||||
|
||||
### API 5
|
||||
- [X] Stream Family
|
||||
- [X] XADD
|
||||
|
@ -244,5 +229,6 @@ with respect to Memcached and Redis APIs.
|
|||
Some commands were implemented as decorators along the way:
|
||||
|
||||
- [X] ROLE (2.8) decorator as master.
|
||||
- [X] UNLINK (4.0) decorator for DEL command
|
||||
- [X] BGSAVE (decorator for save)
|
||||
- [X] FUNCTION FLUSH (does nothing)
|
||||
- [X] FUNCTION FLUSH (does nothing)
|
|
@ -2,8 +2,9 @@

## Running the server

Dragonfly runs on linux. We advice running it on linux version 5.11 or later
but you can also run Dragonfly on older kernels as well.
Dragonfly runs on linux. It uses relatively new linux specific [io-uring API](https://github.com/axboe/liburing)
for I/O, hence it requires `Linux verion 5.10` or later.
Debian/Bullseye, `Ubuntu 20.04.4` or later fit these requirements.

### WARNING: Building from source on older kernels WILL NOT WORK.

@ -57,7 +58,7 @@ OK
1) "hello"
127.0.0.1:6379> get hello
"world"
127.0.0.1:6379>
127.0.0.1:6379>
```

## Step 6
File diff suppressed because it is too large
@ -1,57 +0,0 @@
# DenseSet in Dragonfly

`DenseSet` uses [classic hashtable with separate chaining](https://en.wikipedia.org/wiki/Hash_table#Separate_chaining) similar to the Redis dictionary for lookup of items within the set.

The main optimization present in `DenseSet` is the ability for a pointer to **point to either an object or a link key**, removing the need to allocate a set entry for every entry. This is accomplished by using [pointer tagging](https://en.wikipedia.org/wiki/Tagged_pointer) exploiting the fact that the top 12 bits of any userspace address are not used and can be set to indicate if the current pointer points to nothing, a link key, or an object.

The following is what each bit in a pointer is used for

| Bit Index (from LSB) | Meaning |
| -------------------- |-------- |
| 0 - 52 | Memory address of data in the userspace |
| 53 | Indicates if this `DensePtr` points to data stored in the `DenseSet` or the next link in a chain |
| 54 | Displacement bit. Indicates if the current entry is in the correct list defined by the data's hash |
| 55 | Direction displaced, this only has meaning if the Displacement bit is set. 0 indicates the entry is to the left of its correct list, 1 indicates it is to the right of the correct list. |
| 56 - 63 | Unused |
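
To make the layout above concrete, here is a minimal, self-contained sketch of a tagged pointer that keeps these flags in the upper bits of a 64-bit word. The names and exact masks are illustrative assumptions for this sketch; they do not mirror Dragonfly's actual `DensePtr` API.

```cpp
#include <cstdint>

// Toy tagged pointer: the low 53 bits hold the userspace address, the bits
// above it hold the flags described in the table (sketch only, not DensePtr).
class TaggedPtr {
  static constexpr uint64_t kAddrMask = (1ULL << 53) - 1;
  static constexpr uint64_t kLinkBit = 1ULL << 53;       // points to a link key, not an object
  static constexpr uint64_t kDisplacedBit = 1ULL << 54;  // entry is not in its home chain
  static constexpr uint64_t kRightBit = 1ULL << 55;      // displaced to the right (otherwise left)

  uint64_t bits_ = 0;

 public:
  void SetObject(void* obj) {
    bits_ = (bits_ & ~kAddrMask) | (reinterpret_cast<uint64_t>(obj) & kAddrMask);
  }
  void* Raw() const { return reinterpret_cast<void*>(bits_ & kAddrMask); }

  bool IsEmpty() const { return (bits_ & kAddrMask) == 0; }
  bool IsLink() const { return bits_ & kLinkBit; }
  void MarkLink() { bits_ |= kLinkBit; }

  bool IsDisplaced() const { return bits_ & kDisplacedBit; }
  void SetDisplaced(bool right) {
    bits_ |= kDisplacedBit;
    bits_ = right ? (bits_ | kRightBit) : (bits_ & ~kRightBit);
  }
  void ClearDisplaced() { bits_ &= ~(kDisplacedBit | kRightBit); }
};
```

Because the flags live above the address bits, one 8-byte word can describe an empty slot, a plain object, or a link, which is what removes the need for a separate wrapper allocation per entry.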

Further, to reduce collisions items may be inserted into neighbors of the home chain (the chain determined by the hash) that are empty to reduce the number of unused spaces. These entries are then marked as displaced using pointer tagging.

An example of possible bucket configurations can be seen below.

![Dense Set Visualization](./dense_set.svg) *Created using [excalidraw](https://excalidraw.com)*

### Insertion
To insert an entry a `DenseSet` will take the following steps:

1. Check if the entry already exists in the set, if so return false
2. If the entry does not exist look for an empty chain at the hash index ± 1, prioritizing the home chain. If an empty entry is found the item will be inserted and return true
3. If step 2 fails and the growth prerequisites are met, increase the number of buckets in the table and repeat step 2
4. If step 3 fails, attempt to insert the entry in the home chain.
   - If the home chain is not occupied by a displaced entry insert the new entry in the front of the list
   - If the home chain is occupied by a displaced entry move the displaced entry to its home chain. This may cause a domino effect if the home chain of the displaced entry is occupied by a second displaced entry, resulting in up to `O(N)` "fixes"

### Searching
To find an entry in a `DenseSet`:

1. Check the first entry in the home and neighbour cells for matching entries
2. If step 1 fails iterate the home chain of the searched entry and check for equality
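
As a rough illustration of the probing order in the two procedures above, below is a small, self-contained toy set that checks the home bucket and its immediate neighbours, grows when utilization is high, and otherwise chains in the home bucket. It deliberately leaves out pointer tagging, TTLs and the displaced-entry fix-up, so it sketches the idea rather than Dragonfly's `DenseSet` implementation.

```cpp
#include <cstddef>
#include <functional>
#include <list>
#include <string>
#include <vector>

// Toy model of the home/neighbour probing described above; not DenseSet itself.
class ToySet {
 public:
  ToySet() : buckets_(4) {}

  bool Contains(const std::string& s) const {
    // Searching, simplified: scan the home bucket and both neighbours in full.
    for (size_t b : Candidates(Home(s)))
      for (const std::string& v : buckets_[b])
        if (v == s) return true;
    return false;
  }

  bool Insert(const std::string& s) {
    if (Contains(s)) return false;  // step 1: already present
    for (int attempt = 0; attempt < 2; ++attempt) {
      for (size_t b : Candidates(Home(s)))
        if (buckets_[b].empty()) {  // step 2: an empty home or neighbour bucket
          buckets_[b].push_front(s);
          ++size_;
          return true;
        }
      if (size_ < buckets_.size()) break;  // step 3: grow only under high utilization
      Grow();
    }
    buckets_[Home(s)].push_front(s);  // step 4: chain in the home bucket
    ++size_;
    return true;
  }

 private:
  size_t Home(const std::string& s) const {
    return std::hash<std::string>{}(s) & (buckets_.size() - 1);  // size is a power of two
  }

  std::vector<size_t> Candidates(size_t home) const {
    std::vector<size_t> c{home};
    if (home > 0) c.push_back(home - 1);
    if (home + 1 < buckets_.size()) c.push_back(home + 1);
    return c;
  }

  void Grow() {
    std::vector<std::list<std::string>> old(buckets_.size() * 2);
    old.swap(buckets_);  // buckets_ now holds twice as many empty buckets
    size_ = 0;
    for (const auto& chain : old)
      for (const auto& v : chain) Insert(v);  // rehash everything
  }

  std::vector<std::list<std::string>> buckets_;
  size_t size_ = 0;
};
```

Plugging real displacement bookkeeping into this skeleton is what step 4 of the insertion flow and the fix-up heuristics below are about.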

### Pending Improvements
Some further improvements to `DenseSet` include allowing entries to be inserted into their home chain without having to perform the current `O(N)` steps to fix displaced entries. By inserting an entry into its home chain after the displaced entry instead of fixing up displaced entries, searching incurs minimal added overhead and there is no domino effect when inserting a new entry. To eventually move a displaced entry back to its home chain, multiple heuristics may be implemented, including:

- When an entry is erased, if the chain becomes empty and there is a displaced entry in one of the neighbor chains, move it to the now empty home chain
- If a displaced entry is found as a result of a search and is the root of a chain with multiple entries, the displaced node should be moved to its home bucket


## Benchmarks

At 100% utilization the Redis dictionary implementation uses approximately 32 bytes per record ([read the breakdown for more information](./dashtable.md#redis-dictionary))

In comparison, using the neighbour cell optimization, `DenseSet` has ~21% of spaces unused at full utilization, resulting in $N \cdot 8 + 0.2 \cdot 16N \approx 11.2N$ or ~12 bytes per record, yielding ~20 byte savings. The number of bytes per record saved grows as utilization decreases.

Inserting 20M 10 byte strings into a set in chunks of 500 on an i5-8250U gives the following results

| | Dragonfly (DenseSet) | Dragonfly (Redis Dictionary) | Redis 7 |
|-------------|----------------------|------------------------------|---------|
| Time | 44.1s | 46.9s | 50.3s |
| Memory used | 626.44MiB | 1.27G | 1.27G |
File diff suppressed because one or more lines are too long (image, 42 KiB)
@ -6,10 +6,6 @@ String sizes are limited to 256MB.
Indices (say in GETRANGE and SETRANGE commands) should be signed 32 bit integers in range
[-2147483647, 2147483648].

### String handling.

SORT does not take any locale into account.

## Expiry ranges.
Expirations are limited to 4 years. For commands with millisecond precision like PEXPIRE or PSETEX,
expirations greater than 2^27ms are quietly rounded to the nearest second, losing precision of less than 0.001%.
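
A quick sanity check of that bound (arithmetic added here, not part of the original doc): rounding to the nearest second changes an expiration by at most 500ms, and since 2^27ms ≈ 1.3 × 10^8 ms, the relative error is at most 500 / 2^27 ≈ 0.0004%, which is indeed below 0.001%.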
@ -37,7 +37,7 @@ OK
1) "hello"
127.0.0.1:6379> get hello
"world"
127.0.0.1:6379>
127.0.0.1:6379>
```

## Step 3

@ -46,6 +46,10 @@ Continue being great and build your app with the power of DragonflyDB!

## Known issues

#### `Error initializing io_uring`

This likely means your kernel version is too low to run DragonflyDB. Make sure to install
a kernel version that supports `io_uring`.

## More Build Options
- [Docker Compose Deployment](/contrib/docker/)
File diff suppressed because it is too large

docs/rdbsave.md
@ -1,125 +0,0 @@
# RDB Snapshot design

The following document describes Dragonfly's point-in-time, forkless snapshotting procedure,
including all its configurations.

## Redis-compatible RDB snapshot

This snapshot is serialized into a single file or into a network socket.
This configuration is used to create redis-compatible backup snapshots.

The algorithm utilizes the shared-nothing architecture of Dragonfly and makes sure that each shard-thread serializes only its own data. Below is a high-level description of the flow.

<img src="http://static.dragonflydb.io/repo-assets/rdbsave.svg" width="80%" border="0"/>


1. The `RdbSave` class instantiates a single blocking channel (in red).
   Its purpose is to gather all the blobs from all the shards.
2. In addition it creates thread-local snapshot instances in each DF shard.
   TODO: rename them in the codebase to another name (SnapshotShard?) since the `snapshot` word creates ambiguity here.
3. Each SnapshotShard instantiates its own RdbSerializer that is used to serialize each K/V entry into a binary representation according to the Redis format spec. SnapshotShards combine multiple blobs from the same Dash bucket into a single blob. They always send blob data at bucket granularity, i.e. they never send a blob into the channel that only partially covers a bucket. This is needed in order to guarantee snapshot isolation.
4. The RdbSerializer uses `io::Sink` to emit binary data. The SnapshotShard instance passes into it a `StringFile`, which is just a memory-only sink that wraps a `std::string` object. Once the `StringFile` instance becomes large, it's flushed into the channel (as long as it follows the rules above; a hypothetical sketch of this buffering rule follows the list).
5. RdbSave also creates a fiber (SaveBody) that pulls all the blobs from the channel. Blobs might come in an unspecified order, though it's guaranteed that each blob is self-sufficient by itself.
6. DF uses direct I/O to improve I/O throughput, which, in turn, requires properly aligned memory buffers to work. Unfortunately, blobs that come from the rdb channel come in different sizes and they are not aligned by OS page granularity. Therefore, DF passes all the data from the rdb channel through an AlignedBuffer transformation. The purpose of this class is to copy the incoming data into a properly aligned buffer. Once it accumulates enough data, it flushes it into the output file.
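
To illustrate the bucket-granularity buffering rule from steps 3-4, here is a hypothetical, simplified sketch: entries are appended to an in-memory buffer, and the buffer may be handed over only at bucket boundaries, so a flushed blob never covers a bucket partially. The class, method names and threshold below are assumptions made for this sketch; they do not mirror Dragonfly's actual RdbSerializer/SnapshotShard API.

```cpp
#include <cstddef>
#include <functional>
#include <string>
#include <string_view>
#include <utility>

// Hypothetical sketch only: buffer serialized entries and flush to the channel
// strictly at bucket boundaries, never in the middle of a bucket.
class BucketGranularitySerializer {
 public:
  using FlushFn = std::function<void(std::string blob)>;

  explicit BucketGranularitySerializer(FlushFn flush, size_t flush_threshold = 1 << 16)
      : flush_(std::move(flush)), flush_threshold_(flush_threshold) {}

  // Append one serialized K/V entry belonging to the bucket being traversed.
  void AddEntry(std::string_view serialized_entry) { buf_.append(serialized_entry); }

  // Called after the whole dash bucket has been serialized; only here may the
  // accumulated blob be pushed out, and only once it has grown large enough.
  void EndBucket() {
    if (buf_.size() >= flush_threshold_) {
      flush_(std::move(buf_));
      buf_.clear();
    }
  }

  // Flush whatever remains when the snapshot of this shard is complete.
  void Finalize() {
    if (!buf_.empty()) {
      flush_(std::move(buf_));
      buf_.clear();
    }
  }

 private:
  FlushFn flush_;
  size_t flush_threshold_;
  std::string buf_;
};
```

Roughly, the buffer here plays the role of the `StringFile` described above, and the flush callback stands in for pushing a blob into the blocking channel.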

To summarize, this configuration employs a single sink to create one file or one stream of data that represents the whole database.

## Dragonfly Snapshot (TBD)

Required for replication. Creates multiple files, one file per SnapshotShard. Does not require a central sink. Each SnapshotShard still uses RdbSerializer together with StringFile to guarantee bucket-level granularity. We still need AlignedBuffer if we want to use direct I/O.
For a DF process with N shards, it will create N files. It will probably require an additional metadata file to provide file-level consistency, but for now we can assume that only N files are created,
since our use-case will be network-based replication.

How is it going to be used? The replica (slave) will handshake with the master and find out how many shards it has.
Then it will open `N` sockets and each one of them will pull shard data. First, they will pull snapshot data,
and replay it by distributing entries among `K` replica shards. After all the snapshot data is replayed,
they will continue with replaying the change log (stable state replication), which is outside the context
of this document.

## Relaxed point-in-time (TBD)
When DF saves its snapshot file on disk, it maintains snapshot isolation by applying a virtual cut
through all the process shards. Snapshotting may take time, during which DF may process many write requests.
These mutations won't be part of the snapshot, because the cut captures data up to the point
**it has started**. This is perfect for backups. I call this variation conservative snapshotting.

However, when we perform snapshotting for replication, we would like to produce a snapshot
that includes all the data up to the point in time when the snapshotting **finishes**. I call
this *relaxed snapshotting*. The reason for relaxed snapshotting is to avoid keeping the changelog
of all mutations during the snapshot creation.

As a side comment - we could, in theory, support the same (relaxed)
semantics for file snapshots, but it's not necessary since it might increase the snapshot sizes.

The snapshotting phase (full-sync) can take a lot of time, which adds memory pressure on the system.
Keeping the change-log aside during the full-sync phase will only add more pressure.
We achieve relaxed snapshotting by pushing the changes into the replication sockets without saving them aside.
Of course, we would still need point-in-time consistency,
in order to know when the snapshotting finished and the stable state replication started.

## Conservative and relaxed snapshotting variations

Both algorithms maintain a scanning process (fiber) that iteratively goes over the main dictionary
and serializes its data. Before starting the process, the SnapshotShard captures
the change epoch of its shard (this epoch is increased with each write request).

```cpp
SnapshotShard.epoch = shard.epoch++;
```

For the sake of simplicity, we can assume that each entry in the shard maintains its own version counter.
By capturing the epoch number we establish a cut: all entries with `version <= SnapshotShard.epoch`
have not been serialized yet and were not modified by the concurrent writes.

The DashTable iteration algorithm guarantees convergence and coverage ("at least once"),
but it does not guarantee that each entry is visited *exactly once*.
Therefore, we use entry versions for two things: 1) to avoid serialization of the same entry multiple times,
and 2) to correctly serialize entries that need to change due to concurrent writes.

Serialization Fiber:

```cpp
for (entry : table) {
  if (entry.version <= cut.epoch) {
    entry.version = cut.epoch + 1;
    SendToSerializationSink(entry);
  }
}
```

To allow concurrent writes during the snapshotting phase, we set up a hook that is triggered on each
entry mutation in the table:

OnWriteHook:
```cpp
....
if (entry.version <= cut.version) {
  SendToSerializationSink(entry);
}
...
entry = new_entry;
entry.version = shard.epoch++;  // guaranteed to become > cut.version
```

Please note that this hook maintains point-in-time semantics for the conservative variation by pushing
the previous value of the entry into the sink before changing it.

However, for the relaxed point-in-time, we do not have to store the old value.
Therefore, we can do the following:

OnWriteHook:

```cpp
if (entry.version <= cut.version) {
  SendToSerializationSink(new_entry);  // do not have to send the old value
} else {
  // Keep sending the changes.
  SendToSerializationSink(IncrementalDiff(entry, new_entry));
}

entry = new_entry;
entry.version = shard.epoch++;
```

The change data is sent along with the rest of the contents, and it requires extending
the existing rdb format to support differential operations (hset, append, etc.).
The Serialization Fiber loop is the same for this variation.
helio
@ -1 +1 @@
Subproject commit d4adfecbd6d828cc86bf282b28294115527817c4
Subproject commit 17fdc10f97c8c28eb9a0544ca65fb7e60cfc575a
@ -19,6 +19,7 @@ add_third_party(
|
|||
jsoncons
|
||||
URL https://github.com/danielaparker/jsoncons/archive/refs/tags/v0.168.7.tar.gz
|
||||
CMAKE_PASS_FLAGS "-DJSONCONS_BUILD_TESTS=OFF"
|
||||
|
||||
LIB "none"
|
||||
)
|
||||
|
||||
|
|
|
@ -1,8 +1,9 @@
|
|||
add_library(dfly_core compact_object.cc dragonfly_core.cc extent_tree.cc
|
||||
external_alloc.cc interpreter.cc mi_memory_resource.cc
|
||||
segment_allocator.cc small_string.cc tx_queue.cc dense_set.cc string_set.cc)
|
||||
cxx_link(dfly_core base absl::flat_hash_map absl::str_format redis_lib TRDP::lua lua_modules
|
||||
Boost::fiber crypto)
|
||||
add_library(dfly_core compact_object.cc dragonfly_core.cc extent_tree.cc
|
||||
external_alloc.cc interpreter.cc mi_memory_resource.cc
|
||||
segment_allocator.cc small_string.cc string_set.cc tx_queue.cc)
|
||||
cxx_link(dfly_core base absl::flat_hash_map absl::str_format redis_lib TRDP::lua
|
||||
Boost::fiber crypto)
|
||||
|
||||
|
||||
add_executable(dash_bench dash_bench.cc)
|
||||
cxx_link(dash_bench dfly_core)
|
||||
|
@ -11,7 +12,7 @@ cxx_test(dfly_core_test dfly_core LABELS DFLY)
|
|||
cxx_test(compact_object_test dfly_core LABELS DFLY)
|
||||
cxx_test(extent_tree_test dfly_core LABELS DFLY)
|
||||
cxx_test(external_alloc_test dfly_core LABELS DFLY)
|
||||
cxx_test(dash_test dfly_core file DATA testdata/ids.txt LABELS DFLY)
|
||||
cxx_test(dash_test dfly_core LABELS DFLY)
|
||||
cxx_test(interpreter_test dfly_core LABELS DFLY)
|
||||
cxx_test(json_test dfly_core TRDP::jsoncons LABELS DFLY)
|
||||
cxx_test(string_set_test dfly_core LABELS DFLY)
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// Copyright 2022, Roman Gershman. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
|
||||
|
@ -19,7 +19,6 @@ extern "C" {
|
|||
|
||||
#include <absl/strings/str_cat.h>
|
||||
|
||||
#include "base/flags.h"
|
||||
#include "base/logging.h"
|
||||
#include "base/pod_array.h"
|
||||
#include "core/string_set.h"
|
||||
|
@ -30,11 +29,8 @@ extern "C" {
|
|||
#include <emmintrin.h>
|
||||
#endif
|
||||
|
||||
ABSL_FLAG(bool, use_set2, true, "If true use DenseSet for an optimized set data structure");
|
||||
|
||||
namespace dfly {
|
||||
using namespace std;
|
||||
using absl::GetFlag;
|
||||
|
||||
namespace {
|
||||
|
||||
|
@ -79,7 +75,7 @@ size_t MallocUsedSet(unsigned encoding, void* ptr) {
|
|||
return 0; // TODO
|
||||
case kEncodingStrMap2: {
|
||||
StringSet* ss = (StringSet*)ptr;
|
||||
return ss->ObjMallocUsed() + ss->SetMallocUsed();
|
||||
return ss->obj_malloc_used() + ss->set_malloc_used();
|
||||
}
|
||||
case kEncodingIntSet:
|
||||
return intsetBlobLen((intset*)ptr);
|
||||
|
@ -115,7 +111,7 @@ size_t MallocUsedZSet(unsigned encoding, void* ptr) {
|
|||
|
||||
size_t MallocUsedStream(unsigned encoding, void* streamv) {
|
||||
// stream* str_obj = (stream*)streamv;
|
||||
return 0; // TODO
|
||||
return 0; // TODO
|
||||
}
|
||||
|
||||
inline void FreeObjHash(unsigned encoding, void* ptr) {
|
||||
|
@ -152,7 +148,7 @@ inline void FreeObjStream(void* ptr) {
|
|||
freeStream((stream*)ptr);
|
||||
}
|
||||
|
||||
// Daniel Lemire's function validate_ascii_fast() - under Apache/MIT license.
|
||||
// Deniel's Lemire function validate_ascii_fast() - under Apache/MIT license.
|
||||
// See https://github.com/lemire/fastvalidate-utf-8/
|
||||
// The function returns true (1) if all chars passed in src are
|
||||
// 7-bit values (0x00..0x7F). Otherwise, it returns false (0).
|
||||
|
@ -262,16 +258,6 @@ size_t RobjWrapper::Size() const {
|
|||
case OBJ_STRING:
|
||||
DCHECK_EQ(OBJ_ENCODING_RAW, encoding_);
|
||||
return sz_;
|
||||
case OBJ_LIST:
|
||||
return quicklistCount((quicklist*)inner_obj_);
|
||||
case OBJ_ZSET: {
|
||||
robj self{.type = type_,
|
||||
.encoding = encoding_,
|
||||
.lru = 0,
|
||||
.refcount = OBJ_STATIC_REFCOUNT,
|
||||
.ptr = inner_obj_};
|
||||
return zsetLength(&self);
|
||||
}
|
||||
case OBJ_SET:
|
||||
switch (encoding_) {
|
||||
case kEncodingIntSet: {
|
||||
|
@ -284,11 +270,11 @@ size_t RobjWrapper::Size() const {
|
|||
}
|
||||
case kEncodingStrMap2: {
|
||||
StringSet* ss = (StringSet*)inner_obj_;
|
||||
return ss->Size();
|
||||
return ss->size();
|
||||
}
|
||||
default:
|
||||
LOG(FATAL) << "Unexpected encoding " << encoding_;
|
||||
};
|
||||
}
|
||||
default:;
|
||||
}
|
||||
return 0;
|
||||
|
@ -410,8 +396,8 @@ void RobjWrapper::MakeInnerRoom(size_t current_cap, size_t desired, pmr::memory_
|
|||
}
|
||||
|
||||
#if defined(__GNUC__) && !defined(__clang__)
|
||||
#pragma GCC push_options
|
||||
#pragma GCC optimize("Ofast")
|
||||
#pragma GCC push_options
|
||||
#pragma GCC optimize("Ofast")
|
||||
#endif
|
||||
|
||||
// len must be at least 16
|
||||
|
@ -491,7 +477,7 @@ bool compare_packed(const uint8_t* packed, const char* ascii, size_t ascii_len)
|
|||
}
|
||||
|
||||
#if defined(__GNUC__) && !defined(__clang__)
|
||||
#pragma GCC pop_options
|
||||
#pragma GCC pop_options
|
||||
#endif
|
||||
|
||||
} // namespace detail
|
||||
|
@ -636,7 +622,8 @@ void CompactObj::ImportRObj(robj* o) {
|
|||
if (o->encoding == OBJ_ENCODING_INTSET) {
|
||||
enc = kEncodingIntSet;
|
||||
} else {
|
||||
enc = GetFlag(FLAGS_use_set2) ? kEncodingStrMap2 : kEncodingStrMap;
|
||||
LOG(DFATAL) << "This can not be done via ImportRObj for sets";
|
||||
enc = kEncodingStrMap;
|
||||
}
|
||||
}
|
||||
u_.r_obj.Init(type, enc, o->ptr);
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// Copyright 2022, Roman Gershman. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// Copyright 2022, Roman Gershman. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
#include "core/compact_object.h"
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// Copyright 2022, Roman Gershman. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// Copyright 2021, Roman Gershman. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
#pragma once
|
||||
|
@ -716,12 +716,12 @@ template <typename U, typename V, typename EvictionPolicy>
|
|||
auto DashTable<_Key, _Value, Policy>::InsertInternal(U&& key, V&& value, EvictionPolicy& ev)
|
||||
-> std::pair<iterator, bool> {
|
||||
uint64_t key_hash = DoHash(key);
|
||||
uint32_t target_seg_id = SegmentId(key_hash);
|
||||
uint32_t seg_id = SegmentId(key_hash);
|
||||
|
||||
while (true) {
|
||||
// Keep last global_depth_ msb bits of the hash.
|
||||
assert(target_seg_id < segment_.size());
|
||||
SegmentType* target = segment_[target_seg_id];
|
||||
assert(seg_id < segment_.size());
|
||||
SegmentType* target = segment_[seg_id];
|
||||
|
||||
// Load heap allocated segment data - to avoid TLB miss when accessing the bucket.
|
||||
__builtin_prefetch(target, 0, 1);
|
||||
|
@ -731,12 +731,12 @@ auto DashTable<_Key, _Value, Policy>::InsertInternal(U&& key, V&& value, Evictio
|
|||
|
||||
if (res) { // success
|
||||
++size_;
|
||||
return std::make_pair(iterator{this, target_seg_id, it.index, it.slot}, true);
|
||||
return std::make_pair(iterator{this, seg_id, it.index, it.slot}, true);
|
||||
}
|
||||
|
||||
/*duplicate insert, insertion failure*/
|
||||
if (it.found()) {
|
||||
return std::make_pair(iterator{this, target_seg_id, it.index, it.slot}, false);
|
||||
return std::make_pair(iterator{this, seg_id, it.index, it.slot}, false);
|
||||
}
|
||||
|
||||
// At this point we must split the segment.
|
||||
|
@ -749,12 +749,12 @@ auto DashTable<_Key, _Value, Policy>::InsertInternal(U&& key, V&& value, Evictio
|
|||
hotspot.key_hash = key_hash;
|
||||
|
||||
for (unsigned j = 0; j < HotspotBuckets::kRegularBuckets; ++j) {
|
||||
hotspot.probes.by_type.regular_buckets[j] = bucket_iterator{this, target_seg_id, bid[j]};
|
||||
hotspot.probes.by_type.regular_buckets[j] = bucket_iterator{this, seg_id, bid[j]};
|
||||
}
|
||||
|
||||
for (unsigned i = 0; i < Policy::kStashBucketNum; ++i) {
|
||||
hotspot.probes.by_type.stash_buckets[i] =
|
||||
bucket_iterator{this, target_seg_id, uint8_t(kLogicalBucketNum + i), 0};
|
||||
bucket_iterator{this, seg_id, uint8_t(kLogicalBucketNum + i), 0};
|
||||
}
|
||||
hotspot.num_buckets = HotspotBuckets::kNumBuckets;
|
||||
|
||||
|
@ -770,7 +770,7 @@ auto DashTable<_Key, _Value, Policy>::InsertInternal(U&& key, V&& value, Evictio
|
|||
/*unsigned start = (bid[HotspotBuckets::kNumBuckets - 1] + 1) % kLogicalBucketNum;
|
||||
for (unsigned i = 0; i < HotspotBuckets::kNumBuckets; ++i) {
|
||||
uint8_t id = (start + i) % kLogicalBucketNum;
|
||||
buckets.probes.arr[i] = bucket_iterator{this, target_seg_id, id};
|
||||
buckets.probes.arr[i] = bucket_iterator{this, seg_id, id};
|
||||
}
|
||||
garbage_collected_ += ev.GarbageCollect(buckets, this);
|
||||
*/
|
||||
|
@ -804,12 +804,12 @@ auto DashTable<_Key, _Value, Policy>::InsertInternal(U&& key, V&& value, Evictio
|
|||
if (target->local_depth() == global_depth_) {
|
||||
IncreaseDepth(global_depth_ + 1);
|
||||
|
||||
target_seg_id = SegmentId(key_hash);
|
||||
assert(target_seg_id < segment_.size() && segment_[target_seg_id] == target);
|
||||
seg_id = SegmentId(key_hash);
|
||||
assert(seg_id < segment_.size() && segment_[seg_id] == target);
|
||||
}
|
||||
|
||||
ev.RecordSplit(target);
|
||||
Split(target_seg_id);
|
||||
Split(seg_id);
|
||||
}
|
||||
|
||||
return std::make_pair(iterator{}, false);
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// Copyright 2022, Roman Gershman. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// Copyright 2021, Roman Gershman. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
|
||||
|
@ -492,12 +492,7 @@ template <typename _Key, typename _Value, typename Policy = DefaultSegmentPolicy
|
|||
|
||||
// Returns valid iterator if succeeded or invalid if not (it's full).
|
||||
// Requires: key should be not present in the segment.
|
||||
// if spread is true, tries to spread the load between neighbour and home buckets,
|
||||
// otherwise chooses home bucket first.
|
||||
// TODO: I am actually not sure if spread optimization is helpful. Worth checking
|
||||
// whether we get higher occupancy rates when using it.
|
||||
template <typename U, typename V>
|
||||
Iterator InsertUniq(U&& key, V&& value, Hash_t key_hash, bool spread);
|
||||
template <typename U, typename V> Iterator InsertUniq(U&& key, V&& value, Hash_t key_hash);
|
||||
|
||||
// capture version change in case of insert.
|
||||
// Returns ids of buckets whose version would cross ver_threshold upon insertion of key_hash
|
||||
|
@ -531,10 +526,9 @@ template <typename _Key, typename _Value, typename Policy = DefaultSegmentPolicy
|
|||
}
|
||||
|
||||
// Bumps up this entry making it more "important" for the eviction policy.
|
||||
template <typename BumpPolicy>
|
||||
Iterator BumpUp(uint8_t bid, SlotId slot, Hash_t key_hash, const BumpPolicy& ev);
|
||||
template<typename BumpPolicy> Iterator BumpUp(uint8_t bid, SlotId slot, Hash_t key_hash, const BumpPolicy& ev);
|
||||
|
||||
// Tries to move stash entries back to their normal buckets (exact or neighbour).
|
||||
// Tries to move stash entries back to their normal buckets (exact or neighour).
|
||||
// Returns number of entries that succeeded to unload.
|
||||
// Important! Affects versions of the moved items and the items in the destination
|
||||
// buckets.
|
||||
|
@ -1054,7 +1048,7 @@ auto Segment<Key, Value, Policy>::Insert(U&& key, V&& value, Hash_t key_hash, Pr
|
|||
return std::make_pair(it, false); /* duplicate insert*/
|
||||
}
|
||||
|
||||
it = InsertUniq(std::forward<U>(key), std::forward<V>(value), key_hash, true);
|
||||
it = InsertUniq(std::forward<U>(key), std::forward<V>(value), key_hash);
|
||||
|
||||
return std::make_pair(it, it.found());
|
||||
}
|
||||
|
@ -1187,13 +1181,8 @@ void Segment<Key, Value, Policy>::Split(HFunc&& hfn, Segment* dest_right) {
|
|||
invalid_mask |= (1u << slot);
|
||||
|
||||
auto it = dest_right->InsertUniq(std::forward<Key_t>(key),
|
||||
std::forward<Value_t>(Value(i, slot)), hash, false);
|
||||
|
||||
// we move items residing in a regular bucket to a new segment.
|
||||
// I do not see a reason why it might overflow to a stash bucket.
|
||||
assert(it.index < kNumBuckets);
|
||||
std::forward<Value_t>(Value(i, slot)), hash);
|
||||
(void)it;
|
||||
|
||||
if constexpr (USE_VERSION) {
|
||||
// Maintaining consistent versioning.
|
||||
uint64_t ver = bucket_[i].GetVersion();
|
||||
|
@ -1229,10 +1218,8 @@ void Segment<Key, Value, Policy>::Split(HFunc&& hfn, Segment* dest_right) {
|
|||
|
||||
invalid_mask |= (1u << slot);
|
||||
auto it = dest_right->InsertUniq(std::forward<Key_t>(Key(bid, slot)),
|
||||
std::forward<Value_t>(Value(bid, slot)), hash, false);
|
||||
std::forward<Value_t>(Value(bid, slot)), hash);
|
||||
(void)it;
|
||||
assert(it.index != kNanBid);
|
||||
|
||||
if constexpr (USE_VERSION) {
|
||||
// Update the version in the destination bucket.
|
||||
uint64_t ver = stash.GetVersion();
|
||||
|
@ -1294,8 +1281,7 @@ bool Segment<Key, Value, Policy>::CheckIfMovesToOther(bool own_items, unsigned f
|
|||
|
||||
template <typename Key, typename Value, typename Policy>
|
||||
template <typename U, typename V>
|
||||
auto Segment<Key, Value, Policy>::InsertUniq(U&& key, V&& value, Hash_t key_hash, bool spread)
|
||||
-> Iterator {
|
||||
auto Segment<Key, Value, Policy>::InsertUniq(U&& key, V&& value, Hash_t key_hash) -> Iterator {
|
||||
const uint8_t bid = BucketIndex(key_hash);
|
||||
const uint8_t nid = NextBid(bid);
|
||||
|
||||
|
@ -1307,7 +1293,7 @@ auto Segment<Key, Value, Policy>::InsertUniq(U&& key, V&& value, Hash_t key_hash
|
|||
unsigned ts = target.Size(), ns = neighbor.Size();
|
||||
bool probe = false;
|
||||
|
||||
if (spread && ts > ns) {
|
||||
if (ts > ns) {
|
||||
insert_first = &neighbor;
|
||||
probe = true;
|
||||
}
|
||||
|
@ -1317,12 +1303,6 @@ auto Segment<Key, Value, Policy>::InsertUniq(U&& key, V&& value, Hash_t key_hash
|
|||
insert_first->Insert(slot, std::forward<U>(key), std::forward<V>(value), meta_hash, probe);
|
||||
|
||||
return Iterator{uint8_t(insert_first - bucket_), uint8_t(slot)};
|
||||
} else if (!spread) {
|
||||
int slot = neighbor.FindEmptySlot();
|
||||
if (slot >= 0) {
|
||||
neighbor.Insert(slot, std::forward<U>(key), std::forward<V>(value), meta_hash, true);
|
||||
return Iterator{nid, uint8_t(slot)};
|
||||
}
|
||||
}
|
||||
|
||||
int displace_index = MoveToOther(true, nid, NextBid(nid));
|
||||
|
@ -1569,8 +1549,7 @@ auto Segment<Key, Value, Policy>::FindValidStartingFrom(unsigned bid, unsigned s
|
|||
|
||||
template <typename Key, typename Value, typename Policy>
|
||||
template <typename BumpPolicy>
|
||||
auto Segment<Key, Value, Policy>::BumpUp(uint8_t bid, SlotId slot, Hash_t key_hash,
|
||||
const BumpPolicy& bp) -> Iterator {
|
||||
auto Segment<Key, Value, Policy>::BumpUp(uint8_t bid, SlotId slot, Hash_t key_hash, const BumpPolicy& bp) -> Iterator {
|
||||
auto& from = bucket_[bid];
|
||||
|
||||
uint8_t target_bid = BucketIndex(key_hash);
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// Copyright 2021, Roman Gershman. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
|
||||
|
@ -16,8 +16,6 @@
|
|||
#include "base/hash.h"
|
||||
#include "base/logging.h"
|
||||
#include "base/zipf_gen.h"
|
||||
#include "io/file.h"
|
||||
#include "io/line_reader.h"
|
||||
|
||||
extern "C" {
|
||||
#include "redis/dict.h"
|
||||
|
@ -798,32 +796,6 @@ TEST_F(DashTest, Sds) {
|
|||
// dt.Insert(std::string_view{"bar"}, 1);
|
||||
}
|
||||
|
||||
struct BlankPolicy : public BasicDashPolicy {
|
||||
static uint64_t HashFn(uint64_t v) {
|
||||
return v;
|
||||
}
|
||||
};
|
||||
|
||||
|
||||
// The bug was that for very rare cases when during segment splitting we move all the items
|
||||
// into a new segment, not every item finds a place.
|
||||
TEST_F(DashTest, SplitBug) {
|
||||
DashTable<uint64_t, uint64_t, BlankPolicy> table;
|
||||
|
||||
io::ReadonlyFileOrError fl_err =
|
||||
io::OpenRead(base::ProgramRunfile("testdata/ids.txt"), io::ReadonlyFile::Options{});
|
||||
CHECK(fl_err);
|
||||
io::FileSource fs(std::move(*fl_err));
|
||||
io::LineReader lr(&fs, DO_NOT_TAKE_OWNERSHIP);
|
||||
string_view line;
|
||||
uint64_t val;
|
||||
while (lr.Next(&line)) {
|
||||
CHECK(absl::SimpleHexAtoi(line, &val));
|
||||
table.Insert(val, 0);
|
||||
}
|
||||
EXPECT_EQ(746, table.size());
|
||||
}
|
||||
|
||||
/**
|
||||
______ _ _ _ _______ _
|
||||
| ____| (_) | | (_) |__ __| | |
|
||||
|
|
|
@ -1,571 +0,0 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
|
||||
#include "core/dense_set.h"
|
||||
|
||||
#include <absl/numeric/bits.h>
|
||||
|
||||
#include <cstddef>
|
||||
#include <cstdint>
|
||||
#include <stack>
|
||||
#include <type_traits>
|
||||
#include <vector>
|
||||
|
||||
#include "glog/logging.h"
|
||||
|
||||
extern "C" {
|
||||
#include "redis/zmalloc.h"
|
||||
}
|
||||
|
||||
namespace dfly {
|
||||
using namespace std;
|
||||
|
||||
constexpr size_t kMinSizeShift = 2;
|
||||
constexpr size_t kMinSize = 1 << kMinSizeShift;
|
||||
constexpr bool kAllowDisplacements = true;
|
||||
|
||||
DenseSet::IteratorBase::IteratorBase(const DenseSet* owner, bool is_end)
|
||||
: owner_(const_cast<DenseSet&>(*owner)),
|
||||
curr_entry_(nullptr) {
|
||||
curr_list_ = is_end ? owner_.entries_.end() : owner_.entries_.begin();
|
||||
if (curr_list_ != owner->entries_.end()) {
|
||||
curr_entry_ = &(*curr_list_);
|
||||
owner->ExpireIfNeeded(nullptr, curr_entry_);
|
||||
|
||||
// find the first non null entry
|
||||
if (curr_entry_->IsEmpty()) {
|
||||
Advance();
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
void DenseSet::IteratorBase::Advance() {
|
||||
bool step_link = false;
|
||||
DCHECK(curr_entry_);
|
||||
|
||||
if (curr_entry_->IsLink()) {
|
||||
DenseLinkKey* plink = curr_entry_->AsLink();
|
||||
if (!owner_.ExpireIfNeeded(curr_entry_, &plink->next) || curr_entry_->IsLink()) {
|
||||
curr_entry_ = &plink->next;
|
||||
step_link = true;
|
||||
}
|
||||
}
|
||||
|
||||
if (!step_link) {
|
||||
DCHECK(curr_list_ != owner_.entries_.end());
|
||||
do {
|
||||
++curr_list_;
|
||||
if (curr_list_ == owner_.entries_.end()) {
|
||||
curr_entry_ = nullptr;
|
||||
return;
|
||||
}
|
||||
owner_.ExpireIfNeeded(nullptr, &(*curr_list_));
|
||||
} while (curr_list_->IsEmpty());
|
||||
DCHECK(curr_list_ != owner_.entries_.end());
|
||||
curr_entry_ = &(*curr_list_);
|
||||
}
|
||||
DCHECK(!curr_entry_->IsEmpty());
|
||||
}
|
||||
|
||||
DenseSet::DenseSet(pmr::memory_resource* mr) : entries_(mr) {
|
||||
}
|
||||
|
||||
DenseSet::~DenseSet() {
|
||||
ClearInternal();
|
||||
}
|
||||
|
||||
size_t DenseSet::PushFront(DenseSet::ChainVectorIterator it, void* data, bool has_ttl) {
|
||||
// if this is an empty list assign the value to the empty placeholder pointer
|
||||
if (it->IsEmpty()) {
|
||||
it->SetObject(data);
|
||||
} else {
|
||||
// otherwise make a new link and connect it to the front of the list
|
||||
it->SetLink(NewLink(data, *it));
|
||||
}
|
||||
|
||||
if (has_ttl)
|
||||
it->SetTtl();
|
||||
return ObjectAllocSize(data);
|
||||
}
|
||||
|
||||
void DenseSet::PushFront(DenseSet::ChainVectorIterator it, DenseSet::DensePtr ptr) {
|
||||
DVLOG(2) << "PushFront to " << distance(entries_.begin(), it) << ", "
|
||||
<< ObjectAllocSize(ptr.GetObject());
|
||||
|
||||
if (it->IsEmpty()) {
|
||||
it->SetObject(ptr.GetObject());
|
||||
if (ptr.HasTtl())
|
||||
it->SetTtl();
|
||||
if (ptr.IsLink()) {
|
||||
FreeLink(ptr.AsLink());
|
||||
}
|
||||
} else if (ptr.IsLink()) {
|
||||
// if the pointer is already a link then no allocation needed.
|
||||
*ptr.Next() = *it;
|
||||
*it = ptr;
|
||||
DCHECK(!it->AsLink()->next.IsEmpty());
|
||||
} else {
|
||||
DCHECK(ptr.IsObject());
|
||||
|
||||
// allocate a new link if needed and copy the pointer to the new link
|
||||
it->SetLink(NewLink(ptr.Raw(), *it));
|
||||
if (ptr.HasTtl())
|
||||
it->SetTtl();
|
||||
DCHECK(!it->AsLink()->next.IsEmpty());
|
||||
}
|
||||
}
|
||||
|
||||
auto DenseSet::PopPtrFront(DenseSet::ChainVectorIterator it) -> DensePtr {
|
||||
if (it->IsEmpty()) {
|
||||
return DensePtr{};
|
||||
}
|
||||
|
||||
DensePtr front = *it;
|
||||
|
||||
// if this is an object, then it's also the only record in this chain.
|
||||
// therefore, we should just reset DensePtr.
|
||||
if (it->IsObject()) {
|
||||
it->Reset();
|
||||
} else {
|
||||
DCHECK(it->IsLink());
|
||||
|
||||
// since a DenseLinkKey could be at the end of a chain and have a nullptr for next
|
||||
// avoid dereferencing a nullptr and just reset the pointer to this DenseLinkKey
|
||||
if (it->Next() == nullptr) {
|
||||
it->Reset();
|
||||
} else {
|
||||
*it = *it->Next();
|
||||
}
|
||||
}
|
||||
|
||||
return front;
|
||||
}
|
||||
|
||||
void* DenseSet::PopDataFront(DenseSet::ChainVectorIterator it) {
|
||||
DensePtr front = PopPtrFront(it);
|
||||
void* ret = front.GetObject();
|
||||
|
||||
if (front.IsLink()) {
|
||||
FreeLink(front.AsLink());
|
||||
}
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
void DenseSet::ClearInternal() {
|
||||
for (auto it = entries_.begin(); it != entries_.end(); ++it) {
|
||||
while (!it->IsEmpty()) {
|
||||
bool has_ttl = it->HasTtl();
|
||||
void* obj = PopDataFront(it);
|
||||
ObjDelete(obj, has_ttl);
|
||||
}
|
||||
}
|
||||
|
||||
entries_.clear();
|
||||
}
|
||||
|
||||
bool DenseSet::Equal(DensePtr dptr, const void* ptr, uint32_t cookie) const {
|
||||
if (dptr.IsEmpty()) {
|
||||
return false;
|
||||
}
|
||||
|
||||
return ObjEqual(dptr.GetObject(), ptr, cookie);
|
||||
}
|
||||
|
||||
auto DenseSet::FindEmptyAround(uint32_t bid) -> ChainVectorIterator {
|
||||
ExpireIfNeeded(nullptr, &entries_[bid]);
|
||||
|
||||
if (entries_[bid].IsEmpty()) {
|
||||
return entries_.begin() + bid;
|
||||
}
|
||||
|
||||
if (!kAllowDisplacements) {
|
||||
return entries_.end();
|
||||
}
|
||||
|
||||
if (bid + 1 < entries_.size()) {
|
||||
auto it = next(entries_.begin(), bid + 1);
|
||||
ExpireIfNeeded(nullptr, &(*it));
|
||||
if (it->IsEmpty())
|
||||
return it;
|
||||
}
|
||||
|
||||
if (bid) {
|
||||
auto it = next(entries_.begin(), bid - 1);
|
||||
ExpireIfNeeded(nullptr, &(*it));
|
||||
if (it->IsEmpty())
|
||||
return it;
|
||||
}
|
||||
|
||||
return entries_.end();
|
||||
}
|
||||
|
||||
void DenseSet::Reserve(size_t sz) {
|
||||
sz = std::min<size_t>(sz, kMinSize);
|
||||
|
||||
sz = absl::bit_ceil(sz);
|
||||
capacity_log_ = absl::bit_width(sz);
|
||||
entries_.reserve(sz);
|
||||
}
|
||||
|
||||
void DenseSet::Grow() {
|
||||
size_t prev_size = entries_.size();
|
||||
entries_.resize(prev_size * 2);
|
||||
++capacity_log_;
|
||||
|
||||
// perform rehashing of items in the set
|
||||
for (long i = prev_size - 1; i >= 0; --i) {
|
||||
DensePtr* curr = &entries_[i];
|
||||
DensePtr* prev = nullptr;
|
||||
|
||||
while (true) {
|
||||
if (ExpireIfNeeded(prev, curr)) {
|
||||
// if curr has disappeared due to expiry and prev was converted from Link to a
|
||||
// regular DensePtr
|
||||
if (prev && !prev->IsLink())
|
||||
break;
|
||||
}
|
||||
|
||||
if (curr->IsEmpty())
|
||||
break;
|
||||
void* ptr = curr->GetObject();
|
||||
|
||||
DCHECK(ptr != nullptr && ObjectAllocSize(ptr));
|
||||
|
||||
uint32_t bid = BucketId(ptr, 0);
|
||||
|
||||
// if the item does not move from the current chain, ensure
|
||||
// it is not marked as displaced and move to the next item in the chain
|
||||
if (bid == i) {
|
||||
curr->ClearDisplaced();
|
||||
prev = curr;
|
||||
curr = curr->Next();
|
||||
if (curr == nullptr)
|
||||
break;
|
||||
} else {
|
||||
// if the entry is in the wrong chain remove it and
|
||||
// add it to the correct chain. This will also correct
|
||||
// displaced entries
|
||||
auto dest = entries_.begin() + bid;
|
||||
DensePtr dptr = *curr;
|
||||
|
||||
if (curr->IsObject()) {
|
||||
curr->Reset(); // reset the original placeholder (.next or root)
|
||||
|
||||
if (prev) {
|
||||
DCHECK(prev->IsLink());
|
||||
|
||||
DenseLinkKey* plink = prev->AsLink();
|
||||
DCHECK(&plink->next == curr);
|
||||
|
||||
// we want to make *prev a DensePtr instead of DenseLink and we
|
||||
// want to deallocate the link.
|
||||
DensePtr tmp = DensePtr::From(plink);
|
||||
DCHECK(ObjectAllocSize(tmp.GetObject()));
|
||||
|
||||
FreeLink(plink);
|
||||
*prev = tmp;
|
||||
}
|
||||
|
||||
DVLOG(2) << " Pushing to " << bid << " " << dptr.GetObject();
|
||||
PushFront(dest, dptr);
|
||||
|
||||
dest->ClearDisplaced();
|
||||
|
||||
break;
|
||||
} // if IsObject
|
||||
|
||||
*curr = *dptr.Next();
|
||||
DCHECK(!curr->IsEmpty());
|
||||
|
||||
PushFront(dest, dptr);
|
||||
dest->ClearDisplaced();
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
bool DenseSet::AddInternal(void* ptr, bool has_ttl) {
|
||||
uint64_t hc = Hash(ptr, 0);
|
||||
|
||||
if (entries_.empty()) {
|
||||
capacity_log_ = kMinSizeShift;
|
||||
entries_.resize(kMinSize);
|
||||
uint32_t bucket_id = BucketId(hc);
|
||||
auto e = entries_.begin() + bucket_id;
|
||||
obj_malloc_used_ += PushFront(e, ptr, has_ttl);
|
||||
++size_;
|
||||
++num_used_buckets_;
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
// if the value is already in the set exit early
|
||||
uint32_t bucket_id = BucketId(hc);
|
||||
if (Find(ptr, bucket_id, 0).second != nullptr) {
|
||||
return false;
|
||||
}
|
||||
|
||||
DCHECK_LT(bucket_id, entries_.size());
|
||||
|
||||
// Try insert into flat surface first. Also handle the grow case
|
||||
// if utilization is too high.
|
||||
for (unsigned j = 0; j < 2; ++j) {
|
||||
ChainVectorIterator list = FindEmptyAround(bucket_id);
|
||||
if (list != entries_.end()) {
|
||||
obj_malloc_used_ += PushFront(list, ptr, has_ttl);
|
||||
if (std::distance(entries_.begin(), list) != bucket_id) {
|
||||
list->SetDisplaced(std::distance(entries_.begin() + bucket_id, list));
|
||||
}
|
||||
++num_used_buckets_;
|
||||
++size_;
|
||||
return true;
|
||||
}
|
||||
|
||||
if (size_ < entries_.size()) {
|
||||
break;
|
||||
}
|
||||
|
||||
Grow();
|
||||
bucket_id = BucketId(hc);
|
||||
}
|
||||
|
||||
DCHECK(!entries_[bucket_id].IsEmpty());
|
||||
|
||||
/**
|
||||
* Since the current entry is not empty, it is either a valid chain
|
||||
* or there is a displaced node here. In the latter case it is best to
|
||||
* move the displaced node to its correct bucket. However there could be
|
||||
* a displaced node there and so forth. Keep to avoid having to keep a stack
|
||||
* of displacements we can keep track of the current displaced node, add it
|
||||
* to the correct chain, and if the correct chain contains a displaced node
|
||||
* unlink it and repeat the steps
|
||||
*/
|
||||
|
||||
DensePtr to_insert(ptr);
|
||||
if (has_ttl)
|
||||
to_insert.SetTtl();
|
||||
|
||||
while (!entries_[bucket_id].IsEmpty() && entries_[bucket_id].IsDisplaced()) {
|
||||
DensePtr unlinked = PopPtrFront(entries_.begin() + bucket_id);
|
||||
|
||||
PushFront(entries_.begin() + bucket_id, to_insert);
|
||||
to_insert = unlinked;
|
||||
bucket_id -= unlinked.GetDisplacedDirection();
|
||||
}
|
||||
|
||||
if (!entries_[bucket_id].IsEmpty()) {
|
||||
++num_chain_entries_;
|
||||
}
|
||||
|
||||
ChainVectorIterator list = entries_.begin() + bucket_id;
|
||||
PushFront(list, to_insert);
|
||||
obj_malloc_used_ += ObjectAllocSize(ptr);
|
||||
DCHECK(!entries_[bucket_id].IsDisplaced());
|
||||
|
||||
++size_;
|
||||
return true;
|
||||
}
|
||||
|
||||
auto DenseSet::Find(const void* ptr, uint32_t bid, uint32_t cookie) -> pair<DensePtr*, DensePtr*> {
|
||||
// could do it with zigzag decoding but this is clearer.
|
||||
int offset[] = {0, -1, 1};
|
||||
|
||||
// first look for displaced nodes since this is quicker than iterating a potential long chain
|
||||
for (int j = 0; j < 3; ++j) {
|
||||
if ((bid == 0 && j == 1) || (bid + 1 == entries_.size() && j == 2))
|
||||
continue;
|
||||
|
||||
DensePtr* curr = &entries_[bid + offset[j]];
|
||||
|
||||
ExpireIfNeeded(nullptr, curr);
|
||||
if (Equal(*curr, ptr, cookie)) {
|
||||
return make_pair(nullptr, curr);
|
||||
}
|
||||
}
|
||||
|
||||
// if the node is not displaced, search the correct chain
|
||||
DensePtr* prev = &entries_[bid];
|
||||
DensePtr* curr = prev->Next();
|
||||
while (curr != nullptr) {
|
||||
ExpireIfNeeded(prev, curr);
|
||||
|
||||
if (Equal(*curr, ptr, cookie)) {
|
||||
return make_pair(prev, curr);
|
||||
}
|
||||
prev = curr;
|
||||
curr = curr->Next();
|
||||
}
|
||||
|
||||
// not in the Set
|
||||
return make_pair(nullptr, nullptr);
|
||||
}
|
||||
|
||||
void DenseSet::Delete(DensePtr* prev, DensePtr* ptr) {
|
||||
void* obj = nullptr;
|
||||
|
||||
if (ptr->IsObject()) {
|
||||
obj = ptr->Raw();
|
||||
ptr->Reset();
|
||||
if (prev == nullptr) {
|
||||
--num_used_buckets_;
|
||||
} else {
|
||||
DCHECK(prev->IsLink());
|
||||
|
||||
--num_chain_entries_;
|
||||
DenseLinkKey* plink = prev->AsLink();
|
||||
DensePtr tmp = DensePtr::From(plink);
|
||||
DCHECK(ObjectAllocSize(tmp.GetObject()));
|
||||
|
||||
FreeLink(plink);
|
||||
*prev = tmp;
|
||||
DCHECK(!prev->IsLink());
|
||||
}
|
||||
} else {
|
||||
DCHECK(ptr->IsLink());
|
||||
|
||||
DenseLinkKey* link = ptr->AsLink();
|
||||
obj = link->Raw();
|
||||
*ptr = link->next;
|
||||
--num_chain_entries_;
|
||||
FreeLink(link);
|
||||
}
|
||||
|
||||
obj_malloc_used_ -= ObjectAllocSize(obj);
|
||||
--size_;
|
||||
ObjDelete(obj, false);
|
||||
}
|
||||
|
||||
void* DenseSet::PopInternal() {
|
||||
std::pmr::vector<DenseSet::DensePtr>::iterator bucket_iter = entries_.begin();
|
||||
|
||||
// find the first non-empty chain
|
||||
do {
|
||||
while (bucket_iter != entries_.end() && bucket_iter->IsEmpty()) {
|
||||
++bucket_iter;
|
||||
}
|
||||
|
||||
// empty set
|
||||
if (bucket_iter == entries_.end()) {
|
||||
return nullptr;
|
||||
}
|
||||
|
||||
ExpireIfNeeded(nullptr, &(*bucket_iter));
|
||||
} while (bucket_iter->IsEmpty());
|
||||
|
||||
if (bucket_iter->IsLink()) {
|
||||
--num_chain_entries_;
|
||||
} else {
|
||||
DCHECK(bucket_iter->IsObject());
|
||||
--num_used_buckets_;
|
||||
}
|
||||
|
||||
// unlink the first node in the first non-empty chain
|
||||
obj_malloc_used_ -= ObjectAllocSize(bucket_iter->GetObject());
|
||||
void* ret = PopDataFront(bucket_iter);
|
||||
|
||||
--size_;
|
||||
return ret;
|
||||
}
|
||||
|
||||
/**
|
||||
* stable scanning api. has the same guarantees as redis scan command.
|
||||
* we avoid doing bit-reverse by using a different function to derive a bucket id
|
||||
* from hash values. By using msb part of hash we make it "stable" with respect to
|
||||
* rehashes. For example, with table log size 4 (size 16), entries in bucket id
|
||||
* 1110 come from hashes 1110XXXXX.... When a table grows to log size 5,
|
||||
* these entries can move either to 11100 or 11101. So if we traversed with our cursor
|
||||
* range [0000-1110], it's guaranteed that in grown table we do not need to cover again
|
||||
* [00000-11100]. Similarly with shrinkage, if a table is shrunk to log size 3,
|
||||
* keys from 1110 and 1111 will move to bucket 111. Again, it's guaranteed that we
|
||||
* covered the range [000-111] (all keys in that case).
|
||||
* Returns: next cursor or 0 if reached the end of scan.
|
||||
* cursor = 0 - initiates a new scan.
|
||||
*/
|
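// Usage sketch (added for illustration, not part of the file): a caller drives
// the cursor exactly like the redis SCAN loop, feeding the returned value back
// in until it becomes 0 again. `ss` is assumed to be a concrete DenseSet
// subclass such as StringSet.
//
//   uint32_t cursor = 0;
//   do {
//     cursor = ss.Scan(cursor, [](const void* ptr) {
//       sds s = (sds)ptr;  // StringSet stores plain sds strings
//       LOG(INFO) << std::string(s, sdslen(s));
//     });
//   } while (cursor != 0);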
||||
|
||||
uint32_t DenseSet::Scan(uint32_t cursor, const ItemCb& cb) const {
|
||||
// empty set
|
||||
if (capacity_log_ == 0) {
|
||||
return 0;
|
||||
}
|
||||
|
||||
uint32_t entries_idx = cursor >> (32 - capacity_log_);
|
||||
|
||||
auto& entries = const_cast<DenseSet*>(this)->entries_;
|
||||
|
||||
// skip empty entries
|
||||
do {
|
||||
while (entries_idx < entries_.size() && entries_[entries_idx].IsEmpty()) {
|
||||
++entries_idx;
|
||||
}
|
||||
|
||||
if (entries_idx == entries_.size()) {
|
||||
return 0;
|
||||
}
|
||||
|
||||
ExpireIfNeeded(nullptr, &entries[entries_idx]);
|
||||
} while (entries_[entries_idx].IsEmpty());
|
||||
|
||||
DensePtr* curr = &entries[entries_idx];
|
||||
|
||||
// when scanning add all entries in a given chain
|
||||
while (true) {
|
||||
cb(curr->GetObject());
|
||||
if (!curr->IsLink())
|
||||
break;
|
||||
|
||||
DensePtr* mcurr = const_cast<DensePtr*>(curr);
|
||||
|
||||
if (ExpireIfNeeded(mcurr, &mcurr->AsLink()->next) && !mcurr->IsLink()) {
|
||||
break;
|
||||
}
|
||||
curr = &curr->AsLink()->next;
|
||||
}
|
||||
|
||||
// move to the next index for the next scan and check if we are done
|
||||
++entries_idx;
|
||||
if (entries_idx >= entries_.size()) {
|
||||
return 0;
|
||||
}
|
||||
|
||||
// In case of displacement, we want to fully cover the bucket we traversed, therefore
|
||||
// we check if the bucket on the right belongs to the home bucket.
|
||||
ExpireIfNeeded(nullptr, &entries[entries_idx]);
|
||||
|
||||
if (entries[entries_idx].GetDisplacedDirection() == 1) { // right of the home bucket
|
||||
cb(entries[entries_idx].GetObject());
|
||||
}
|
||||
|
||||
return entries_idx << (32 - capacity_log_);
|
||||
}
|
||||
|
||||
auto DenseSet::NewLink(void* data, DensePtr next) -> DenseLinkKey* {
|
||||
LinkAllocator la(mr());
|
||||
DenseLinkKey* lk = la.allocate(1);
|
||||
la.construct(lk);
|
||||
|
||||
lk->next = next;
|
||||
lk->SetObject(data);
|
||||
return lk;
|
||||
}
|
||||
|
||||
bool DenseSet::ExpireIfNeeded(DensePtr* prev, DensePtr* node) const {
|
||||
DCHECK_NOTNULL(node);
|
||||
|
||||
bool deleted = false;
|
||||
while (node->HasTtl()) {
|
||||
uint32_t obj_time = ObjExpireTime(node->GetObject());
|
||||
if (obj_time > time_now_) {
|
||||
break;
|
||||
}
|
||||
|
||||
// updates the node to next item if relevant.
|
||||
const_cast<DenseSet*>(this)->Delete(prev, node);
|
||||
deleted = true;
|
||||
}
|
||||
|
||||
return deleted;
|
||||
}
|
||||
|
||||
} // namespace dfly
|
|
@ -1,401 +0,0 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
#pragma once
|
||||
|
||||
#include <cstddef>
|
||||
#include <cstdint>
|
||||
#include <functional>
|
||||
#include <iterator>
|
||||
#include <memory_resource>
|
||||
#include <type_traits>
|
||||
|
||||
namespace dfly {
|
||||
|
||||
// DenseSet is a nice but over-optimized data-structure. Probably is not worth it in the first
|
||||
// place but sometimes the OCD kicks in and one can not resist.
|
||||
// The advantage of it over redis-dict is smaller meta-data waste.
|
||||
// dictEntry is 24 bytes, i.e it uses at least 32N bytes where N is the expected length.
|
||||
// dict requires to allocate dictEntry per each addition in addition to the supplied key.
|
||||
// It also wastes space in case of a set because it stores a value pointer inside dictEntry.
|
||||
// To summarize:
|
||||
// 100% utilized dict uses N*24 + N*8 = 32N bytes not including the key space.
|
||||
// for 75% utilization (1/0.75 buckets): N*1.33*8 + N*24 = 35N
|
||||
//
|
||||
// This class uses 8 bytes per bucket (similarly to dictEntry*) but it uses it for both
|
||||
// links and keys. For most cases, we remove the need for another redirection layer
|
||||
// and just store the key, so no "dictEntry" allocations occur.
|
||||
// For those cells that require chaining, the bucket is
|
||||
// changed in run-time to represent a linked chain.
|
||||
// Additional feature - in order to reduce collisions, we insert items into
|
||||
// neighbour cells but only if they are empty (not chains). This way we reduce the number of
|
||||
// empty (unused) spaces at full utilization from 36% to ~21%.
|
||||
// 100% utilized table requires: N*8 + 0.2N*16 = 11.2N bytes or ~20 bytes savings.
|
||||
// 75% utilization: N*1.33*8 + 0.12N*16 = 13N or ~22 bytes savings per record.
|
||||
// TODO: to separate hash/compare functions from table logic and make it generic
// with potential replacements of hset/zset data structures.
|
||||
// static_assert(sizeof(dictEntry) == 24);
|
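// Worked instance of the arithmetic above (added for illustration, not part of
// the original header), for N = 1'000'000 keys at 100% utilization:
//
//   dict:     N*24 + N*8      = 32'000'000 bytes of metadata (32N)
//   DenseSet: N*8  + 0.2*N*16 = 11'200'000 bytes of metadata (11.2N)
//
// i.e. roughly 20 bytes saved per record, matching the estimate quoted above.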
||||
|
||||
class DenseSet {
|
||||
struct DenseLinkKey;
|
||||
// we can assume that high 12 bits of user address space
|
||||
// can be used for tagging. At most 52 bits of address are reserved for
|
||||
// some configurations, and usually it's 48 bits.
|
||||
// https://www.kernel.org/doc/html/latest/arm64/memory.html
|
||||
static constexpr size_t kLinkBit = 1ULL << 52;
|
||||
static constexpr size_t kDisplaceBit = 1ULL << 53;
|
||||
static constexpr size_t kDisplaceDirectionBit = 1ULL << 54;
|
||||
static constexpr size_t kTtlBit = 1ULL << 55;
|
||||
static constexpr size_t kTagMask = 4095ULL << 51; // we reserve 12 high bits.
|
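// Added illustration of the tagging scheme (not in the source): all metadata
// bits live inside kTagMask, so the payload pointer survives tagging and is
// recovered by masking the tag bits off, e.g.
//
//   void* tagged = (void*)(uintptr_t(obj) | kTtlBit | kDisplaceBit);
//   void* raw    = (void*)(uintptr_t(tagged) & ~kTagMask);  // raw == obj again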
||||
|
||||
class DensePtr {
|
||||
public:
|
||||
explicit DensePtr(void* p = nullptr) : ptr_(p) {
|
||||
}
|
||||
|
||||
// Imports the object with its metadata, except the link bit which is reset.
|
||||
static DensePtr From(DenseLinkKey* o) {
|
||||
DensePtr res;
|
||||
res.ptr_ = (void*)(o->uptr() & (~kLinkBit));
|
||||
return res;
|
||||
}
|
||||
|
||||
uint64_t uptr() const {
|
||||
return uint64_t(ptr_);
|
||||
}
|
||||
|
||||
bool IsObject() const {
|
||||
return (uptr() & kLinkBit) == 0;
|
||||
}
|
||||
|
||||
bool IsLink() const {
|
||||
return (uptr() & kLinkBit) != 0;
|
||||
}
|
||||
|
||||
bool HasTtl() const {
|
||||
return (uptr() & kTtlBit) != 0;
|
||||
}
|
||||
|
||||
bool IsEmpty() const {
|
||||
return ptr_ == nullptr;
|
||||
}
|
||||
|
||||
void* Raw() const {
|
||||
return (void*)(uptr() & ~kTagMask);
|
||||
}
|
||||
|
||||
bool IsDisplaced() const {
|
||||
return (uptr() & kDisplaceBit) == kDisplaceBit;
|
||||
}
|
||||
|
||||
void SetLink(DenseLinkKey* lk) {
|
||||
ptr_ = (void*)(uintptr_t(lk) | kLinkBit);
|
||||
}
|
||||
|
||||
void SetDisplaced(int direction) {
|
||||
ptr_ = (void*)(uptr() | kDisplaceBit);
|
||||
if (direction == 1) {
|
||||
ptr_ = (void*)(uptr() | kDisplaceDirectionBit);
|
||||
}
|
||||
}
|
||||
|
||||
void ClearDisplaced() {
|
||||
ptr_ = (void*)(uptr() & ~(kDisplaceBit | kDisplaceDirectionBit));
|
||||
}
|
||||
|
||||
// returns 1 if the displaced node is right of the correct bucket and -1 if it is left
|
||||
int GetDisplacedDirection() const {
|
||||
return (uptr() & kDisplaceDirectionBit) == kDisplaceDirectionBit ? 1 : -1;
|
||||
}
|
||||
|
||||
void SetTtl() {
|
||||
ptr_ = (void*)(uptr() | kTtlBit);
|
||||
}
|
||||
|
||||
void Reset() {
|
||||
ptr_ = nullptr;
|
||||
}
|
||||
|
||||
void* GetObject() const {
|
||||
if (IsObject()) {
|
||||
return Raw();
|
||||
}
|
||||
|
||||
return AsLink()->Raw();
|
||||
}
|
||||
|
||||
// Sets pointer but preserves tagging info
|
||||
void SetObject(void* obj) {
|
||||
ptr_ = (void*)((uptr() & kTagMask) | (uintptr_t(obj) & ~kTagMask));
|
||||
}
|
||||
|
||||
DenseLinkKey* AsLink() {
|
||||
return (DenseLinkKey*)Raw();
|
||||
}
|
||||
|
||||
const DenseLinkKey* AsLink() const {
|
||||
return (const DenseLinkKey*)Raw();
|
||||
}
|
||||
|
||||
DensePtr* Next() {
|
||||
if (!IsLink()) {
|
||||
return nullptr;
|
||||
}
|
||||
|
||||
return &AsLink()->next;
|
||||
}
|
||||
|
||||
const DensePtr* Next() const {
|
||||
if (!IsLink()) {
|
||||
return nullptr;
|
||||
}
|
||||
|
||||
return &AsLink()->next;
|
||||
}
|
||||
|
||||
private:
|
||||
void* ptr_ = nullptr;
|
||||
};
|
||||
|
||||
struct DenseLinkKey : public DensePtr {
|
||||
DensePtr next; // could be LinkKey* or Object *.
|
||||
};
|
||||
|
||||
static_assert(sizeof(DensePtr) == sizeof(uintptr_t));
|
||||
static_assert(sizeof(DenseLinkKey) == 2 * sizeof(uintptr_t));
|
||||
|
||||
using LinkAllocator = std::pmr::polymorphic_allocator<DenseLinkKey>;
|
||||
using ChainVectorIterator = std::pmr::vector<DensePtr>::iterator;
|
||||
using ChainVectorConstIterator = std::pmr::vector<DensePtr>::const_iterator;
|
||||
|
||||
class IteratorBase {
|
||||
protected:
|
||||
IteratorBase(const DenseSet* owner, bool is_end);
|
||||
|
||||
void Advance();
|
||||
|
||||
DenseSet& owner_;
|
||||
ChainVectorIterator curr_list_;
|
||||
DensePtr* curr_entry_;
|
||||
};
|
||||
|
||||
public:
|
||||
explicit DenseSet(std::pmr::memory_resource* mr = std::pmr::get_default_resource());
|
||||
virtual ~DenseSet();
|
||||
|
||||
size_t Size() const {
|
||||
return size_;
|
||||
}
|
||||
|
||||
bool Empty() const {
|
||||
return size_ == 0;
|
||||
}
|
||||
|
||||
size_t BucketCount() const {
|
||||
return entries_.size();
|
||||
}
|
||||
|
||||
// Number of entries that are chained to the entries stored inline in the bucket array.
|
||||
size_t NumChainEntries() const {
|
||||
return num_chain_entries_;
|
||||
}
|
||||
|
||||
size_t NumUsedBuckets() const {
|
||||
return num_used_buckets_;
|
||||
}
|
||||
|
||||
size_t ObjMallocUsed() const {
|
||||
return obj_malloc_used_;
|
||||
}
|
||||
|
||||
size_t SetMallocUsed() const {
|
||||
return (num_chain_entries_ + entries_.capacity()) * sizeof(DensePtr);
|
||||
}
|
||||
|
||||
template <typename T> class iterator : private IteratorBase {
|
||||
static_assert(std::is_pointer_v<T>, "Iterators can only return pointers");
|
||||
|
||||
public:
|
||||
using iterator_category = std::forward_iterator_tag;
|
||||
using value_type = T;
|
||||
using pointer = value_type*;
|
||||
using reference = value_type&;
|
||||
|
||||
iterator(DenseSet* set, bool is_end) : IteratorBase(set, is_end) {
|
||||
}
|
||||
|
||||
iterator& operator++() {
|
||||
Advance();
|
||||
return *this;
|
||||
}
|
||||
|
||||
friend bool operator==(const iterator& a, const iterator& b) {
|
||||
return a.curr_list_ == b.curr_list_;
|
||||
}
|
||||
|
||||
friend bool operator!=(const iterator& a, const iterator& b) {
|
||||
return !(a == b);
|
||||
}
|
||||
|
||||
value_type operator*() {
|
||||
return (value_type)curr_entry_->GetObject();
|
||||
}
|
||||
|
||||
value_type operator->() {
|
||||
return (value_type)curr_entry_->GetObject();
|
||||
}
|
||||
};
|
||||
|
||||
template <typename T> class const_iterator : private IteratorBase {
|
||||
static_assert(std::is_pointer_v<T>, "Iterators can only return pointer types");
|
||||
|
||||
public:
|
||||
using iterator_category = std::input_iterator_tag;
|
||||
using value_type = const T;
|
||||
using pointer = value_type*;
|
||||
using reference = value_type&;
|
||||
|
||||
const_iterator(const DenseSet* set, bool is_end) : IteratorBase(set, is_end) {
|
||||
}
|
||||
|
||||
const_iterator& operator++() {
|
||||
Advance();
|
||||
return *this;
|
||||
}
|
||||
|
||||
friend bool operator==(const const_iterator& a, const const_iterator& b) {
|
||||
return a.curr_list_ == b.curr_list_;
|
||||
}
|
||||
|
||||
friend bool operator!=(const const_iterator& a, const const_iterator& b) {
|
||||
return !(a == b);
|
||||
}
|
||||
|
||||
value_type operator*() const {
|
||||
return (value_type)curr_entry_->GetObject();
|
||||
}
|
||||
|
||||
value_type operator->() const {
|
||||
return (value_type)curr_entry_->GetObject();
|
||||
}
|
||||
};
|
||||
|
||||
template <typename T> iterator<T> begin() {
|
||||
return iterator<T>(this, false);
|
||||
}
|
||||
|
||||
template <typename T> iterator<T> end() {
|
||||
return iterator<T>(this, true);
|
||||
}
|
||||
|
||||
template <typename T> const_iterator<T> cbegin() const {
|
||||
return const_iterator<T>(this, false);
|
||||
}
|
||||
|
||||
template <typename T> const_iterator<T> cend() const {
|
||||
return const_iterator<T>(this, true);
|
||||
}
|
||||
|
||||
using ItemCb = std::function<void(const void*)>;
|
||||
|
||||
uint32_t Scan(uint32_t cursor, const ItemCb& cb) const;
|
||||
void Reserve(size_t sz);
|
||||
|
||||
// set an abstract time that allows expiry.
|
||||
void set_time(uint32_t val) {
|
||||
time_now_ = val;
|
||||
}
|
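// Added note (not part of the header): expiry is lazy. An entry whose
// ObjExpireTime() is <= time_now_ is removed the next time it is touched by
// Find/Scan/Pop via ExpireIfNeeded, e.g.
//
//   set.set_time(100);  // abstract "now"
//   // any entry that expired at or before 100 disappears on its next access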
||||
|
||||
uint32_t time_now() const {
|
||||
return time_now_;
|
||||
}
|
||||
|
||||
protected:
|
||||
// Virtual functions to be implemented for generic data
|
||||
virtual uint64_t Hash(const void* obj, uint32_t cookie) const = 0;
|
||||
virtual bool ObjEqual(const void* left, const void* right, uint32_t right_cookie) const = 0;
|
||||
virtual size_t ObjectAllocSize(const void* obj) const = 0;
|
||||
virtual uint32_t ObjExpireTime(const void* obj) const = 0;
|
||||
virtual void ObjDelete(void* obj, bool has_ttl) const = 0;
|
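// For reference (added note, grounded in the string_set.cc hunk elsewhere in
// this diff): the sds-based StringSet specialization implements these hooks by
// hashing the sds bytes with CompactObj::HashCode, sizing objects with
// zmalloc_usable_size(sdsAllocPtr(s)), and deleting them with sdsfree().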
||||
|
||||
bool EraseInternal(void* obj, uint32_t cookie) {
|
||||
auto [prev, found] = Find(obj, BucketId(obj, cookie), cookie);
|
||||
if (found) {
|
||||
Delete(prev, found);
|
||||
return true;
|
||||
}
|
||||
return false;
|
||||
}
|
||||
|
||||
bool AddInternal(void* obj, bool has_ttl);
|
||||
|
||||
bool ContainsInternal(const void* obj, uint32_t cookie) const {
|
||||
return const_cast<DenseSet*>(this)->Find(obj, BucketId(obj, cookie), cookie).second != nullptr;
|
||||
}
|
||||
|
||||
void* PopInternal();
|
||||
|
||||
// Note this does not free any dynamic allocations done by derived classes, that a DensePtr
|
||||
// in the set may point to. This function only frees the allocated DenseLinkKeys created by
|
||||
// DenseSet. All data allocated by a derived class should be freed before calling this
|
||||
void ClearInternal();
|
||||
|
||||
private:
|
||||
DenseSet(const DenseSet&) = delete;
|
||||
DenseSet& operator=(DenseSet&) = delete;
|
||||
|
||||
bool Equal(DensePtr dptr, const void* ptr, uint32_t cookie) const;
|
||||
|
||||
std::pmr::memory_resource* mr() {
|
||||
return entries_.get_allocator().resource();
|
||||
}
|
||||
|
||||
uint32_t BucketId(uint64_t hash) const {
|
||||
return hash >> (64 - capacity_log_);
|
||||
}
|
||||
|
||||
uint32_t BucketId(const void* ptr, uint32_t cookie) const {
|
||||
return BucketId(Hash(ptr, cookie));
|
||||
}
|
||||
|
||||
// return a ChainVectorIterator (a.k.a iterator) to an empty chain, or end() if no empty chain is found
|
||||
ChainVectorIterator FindEmptyAround(uint32_t bid);
|
||||
void Grow();
|
||||
|
||||
// ============ Pseudo Linked List Functions for interacting with Chains ==================
|
||||
size_t PushFront(ChainVectorIterator, void* obj, bool has_ttl);
|
||||
void PushFront(ChainVectorIterator, DensePtr);
|
||||
|
||||
void* PopDataFront(ChainVectorIterator);
|
||||
DensePtr PopPtrFront(ChainVectorIterator);
|
||||
|
||||
// ============ Pseudo Linked List in DenseSet end ==================
|
||||
|
||||
// returns (prev, item) pair. If item is root, then prev is null.
|
||||
std::pair<DensePtr*, DensePtr*> Find(const void* ptr, uint32_t bid, uint32_t cookie);
|
||||
|
||||
DenseLinkKey* NewLink(void* data, DensePtr next);
|
||||
|
||||
inline void FreeLink(DenseLinkKey* plink) {
|
||||
// deallocate the link if it is no longer a link as it is now in an empty list
|
||||
mr()->deallocate(plink, sizeof(DenseLinkKey), alignof(DenseLinkKey));
|
||||
}
|
||||
|
||||
// Returns true if *ptr was deleted.
|
||||
bool ExpireIfNeeded(DensePtr* prev, DensePtr* ptr) const;
|
||||
|
||||
// Deletes the object pointed by ptr and removes it from the set.
|
||||
// If ptr is a link then it will be deleted internally.
|
||||
void Delete(DensePtr* prev, DensePtr* ptr);
|
||||
|
||||
std::pmr::vector<DensePtr> entries_;
|
||||
|
||||
mutable size_t obj_malloc_used_ = 0;
|
||||
mutable uint32_t size_ = 0;
|
||||
mutable uint32_t num_chain_entries_ = 0;
|
||||
mutable uint32_t num_used_buckets_ = 0;
|
||||
unsigned capacity_log_ = 0;
|
||||
|
||||
uint32_t time_now_ = 0;
|
||||
};
|
||||
|
||||
} // namespace dfly
|
|
@ -1,4 +1,4 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// Copyright 2021, Roman Gershman. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// Copyright 2021, Roman Gershman. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
|
||||
|
@ -25,4 +25,4 @@ void IntentLock::VerifyDebug() {
|
|||
DCHECK_EQ(0u, cnt_[1] & kMsb);
|
||||
}
|
||||
|
||||
} // namespace dfly
|
||||
} // namespace dfly
|
|
@ -1,4 +1,4 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// Copyright 2022, Roman Gershman. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// Copyright 2022, Roman Gershman. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// Copyright 2022, Roman Gershman. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// Copyright 2022, Roman Gershman. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
|
||||
|
@ -55,4 +55,4 @@ TEST_F(ExtentTreeTest, Basic) {
|
|||
EXPECT_THAT(*op, testing::Pair(60, 92));
|
||||
}
|
||||
|
||||
} // namespace dfly
|
||||
} // namespace dfly
|
|
@ -1,4 +1,4 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// Copyright 2022, Roman Gershman. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
|
||||
|
@ -181,7 +181,7 @@ PageClass ClassFromSize(size_t size) {
|
|||
* SegmentDescr denotes a 256MB segment on external storage -
|
||||
* holds upto 256 pages (in case of small pages).
|
||||
* Each segment has pages of the same type, but each page can host blocks of
|
||||
* different sizes upto maximal block size for that page class.
|
||||
* differrent sizes upto maximal block size for that page class.
|
||||
* SegmentDescr points to the range within external storage space.
|
||||
* By using the page.id together with segment->page_shift and segment->offset
|
||||
* one can know where the page is located in the storage.
|
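// Added illustration (not from the source; SegmentDescr's exact fields are not
// shown in this hunk): the byte position of a page inside the backing storage
// follows from the segment metadata described above, roughly
//
//   page_start = segment->offset + (size_t(page.id) << segment->page_shift);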
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// Copyright 2022, Roman Gershman. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
#pragma once
|
||||
|
@ -25,7 +25,7 @@ constexpr inline unsigned long long operator""_KB(unsigned long long x) {
|
|||
* An external allocator inspired by mimalloc. Its goal is to maintain a state machine for
|
||||
* bookkeeping the allocations of different sizes that are backed up by a separate
|
||||
* storage. It could be a disk, SSD or another memory allocator. This class serves
|
||||
* as a state machine that either returns an offset to the backing storage or the indication
|
||||
* as a state machine that either returns an offset to the backign storage or the indication
|
||||
* of the resource that is missing. The advantage of such design is that we can use it in
|
||||
* asynchronous callbacks without blocking on any IO requests.
|
||||
* The allocator uses dynamic memory internally. Should be used in a single thread.
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// Copyright 2022, Roman Gershman. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
|
||||
|
@ -117,4 +117,4 @@ TEST_F(ExternalAllocatorTest, Classes) {
|
|||
EXPECT_EQ(1_MB + 4_KB, ExternalAllocator::GoodSize(1_MB + 1));
|
||||
}
|
||||
|
||||
} // namespace dfly
|
||||
} // namespace dfly
|
|
@ -1,4 +1,4 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// Copyright 2022, Roman Gershman. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// Copyright 2021, Roman Gershman. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
#include <assert.h>
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// Copyright 2022, Roman Gershman. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
|
||||
|
@ -14,11 +14,6 @@ extern "C" {
|
|||
#include <lauxlib.h>
|
||||
#include <lua.h>
|
||||
#include <lualib.h>
|
||||
|
||||
LUALIB_API int (luaopen_cjson) (lua_State *L);
|
||||
LUALIB_API int (luaopen_struct) (lua_State *L);
|
||||
LUALIB_API int (luaopen_cmsgpack) (lua_State *L);
|
||||
LUALIB_API int (luaopen_bit) (lua_State *L);
|
||||
}
|
||||
|
||||
#include <absl/strings/str_format.h>
|
||||
|
@ -201,12 +196,6 @@ int RaiseError(lua_State* lua) {
|
|||
return lua_error(lua);
|
||||
}
|
||||
|
||||
void LoadLibrary(lua_State *lua, const char *libname, lua_CFunction luafunc) {
|
||||
lua_pushcfunction(lua, luafunc);
|
||||
lua_pushstring(lua, libname);
|
||||
lua_call(lua, 1, 0);
|
||||
}
|
||||
|
||||
void InitLua(lua_State* lua) {
|
||||
Require(lua, "", luaopen_base);
|
||||
Require(lua, LUA_TABLIBNAME, luaopen_table);
|
||||
|
@ -214,12 +203,6 @@ void InitLua(lua_State* lua) {
|
|||
Require(lua, LUA_MATHLIBNAME, luaopen_math);
|
||||
Require(lua, LUA_DBLIBNAME, luaopen_debug);
|
||||
|
||||
LoadLibrary(lua, "cjson", luaopen_cjson);
|
||||
LoadLibrary(lua, "struct", luaopen_struct);
|
||||
LoadLibrary(lua, "cmsgpack", luaopen_cmsgpack);
|
||||
LoadLibrary(lua, "bit", luaopen_bit);
|
||||
|
||||
|
||||
/* Add a helper function we use for pcall error reporting.
|
||||
* Note that when the error is in the C function we want to report the
|
||||
* information about the caller, that's what makes sense from the point
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// Copyright 2022, Roman Gershman. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
|
||||
|
@ -52,7 +52,7 @@ class Interpreter {
|
|||
COMPILE_ERR = 2,
|
||||
};
|
||||
|
||||
// returns false if an error happened, sets error string into result.
|
||||
// returns false if an error happenned, sets error string into result.
|
||||
// otherwise, returns true and sets result to function id.
|
||||
// function id is sha1 of the function body.
|
||||
AddResult AddFunction(std::string_view body, std::string* result);
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// Copyright 2022, Roman Gershman. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
|
||||
|
@ -301,24 +301,4 @@ TEST_F(InterpreterTest, ArgKeys) {
|
|||
EXPECT_EQ("[str(foo) str(key1) str(key2)]", ser_.res);
|
||||
}
|
||||
|
||||
TEST_F(InterpreterTest, Modules) {
|
||||
// cjson module
|
||||
EXPECT_TRUE(Execute("return cjson.encode({1, 2, 3})"));
|
||||
EXPECT_EQ("str([1,2,3])", ser_.res);
|
||||
EXPECT_TRUE(Execute("return cjson.decode('{\"a\": 1}')['a']"));
|
||||
EXPECT_EQ("d(1)", ser_.res);
|
||||
|
||||
// cmsgpack module
|
||||
EXPECT_TRUE(Execute("return cmsgpack.pack('ok', true)"));
|
||||
EXPECT_EQ("str(\xA2ok\xC3)", ser_.res);
|
||||
|
||||
// bit module
|
||||
EXPECT_TRUE(Execute("return bit.bor(8, 4, 5)"));
|
||||
EXPECT_EQ("i(13)", ser_.res);
|
||||
|
||||
// struct module
|
||||
EXPECT_TRUE(Execute("return struct.pack('bbc4', 1, 2, 'test')"));
|
||||
EXPECT_EQ("str(\x1\x2test)", ser_.res);
|
||||
}
|
||||
|
||||
} // namespace dfly
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// Copyright 2022, Roman Gershman. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// Copyright 2022, Roman Gershman. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
#include "core/mi_memory_resource.h"
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// Copyright 2022, Roman Gershman. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
|
||||
|
@ -36,4 +36,4 @@ class MiMemoryResource : public std::pmr::memory_resource {
|
|||
size_t used_ = 0;
|
||||
};
|
||||
|
||||
} // namespace dfly
|
||||
} // namespace dfly
|
|
@ -1,4 +1,4 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// Copyright 2022, Roman Gershman. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
#include "core/segment_allocator.h"
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// Copyright 2022, Roman Gershman. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
#pragma once
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// Copyright 2022, Roman Gershman. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// Copyright 2022, Roman Gershman. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
#pragma once
|
||||
|
|
|
@ -1,217 +1,509 @@
|
|||
// Copyright 2022, Roman Gershman. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
|
||||
#include "core/string_set.h"
|
||||
|
||||
#include "core/compact_object.h"
|
||||
#include "redis/sds.h"
|
||||
#include <absl/numeric/bits.h>
|
||||
#include <absl/strings/escaping.h>
|
||||
|
||||
#include "base/logging.h"
|
||||
#include "core/compact_object.h" // for hashcode
|
||||
|
||||
extern "C" {
|
||||
#include "redis/zmalloc.h"
|
||||
}
|
||||
|
||||
#include "base/logging.h"
|
||||
|
||||
namespace dfly {
|
||||
using namespace std;
|
||||
|
||||
namespace dfly {
|
||||
constexpr size_t kMinSizeShift = 2;
|
||||
constexpr size_t kMinSize = 1 << kMinSizeShift;
|
||||
constexpr size_t kAllowDisplacements = true;
|
||||
|
||||
namespace {
|
||||
|
||||
inline char SdsReqType(size_t string_size) {
|
||||
if (string_size < 1 << 5)
|
||||
return SDS_TYPE_5;
|
||||
if (string_size < 1 << 8)
|
||||
return SDS_TYPE_8;
|
||||
if (string_size < 1 << 16)
|
||||
return SDS_TYPE_16;
|
||||
if (string_size < 1ll << 32)
|
||||
return SDS_TYPE_32;
|
||||
return SDS_TYPE_64;
|
||||
inline bool CanSetFlat(int offs) {
|
||||
if (kAllowDisplacements)
|
||||
return offs < 2;
|
||||
return offs == 0;
|
||||
}
|
||||
|
||||
inline int SdsHdrSize(char type) {
|
||||
switch (type & SDS_TYPE_MASK) {
|
||||
case SDS_TYPE_5:
|
||||
return sizeof(struct sdshdr5);
|
||||
case SDS_TYPE_8:
|
||||
return sizeof(struct sdshdr8);
|
||||
case SDS_TYPE_16:
|
||||
return sizeof(struct sdshdr16);
|
||||
case SDS_TYPE_32:
|
||||
return sizeof(struct sdshdr32);
|
||||
case SDS_TYPE_64:
|
||||
return sizeof(struct sdshdr64);
|
||||
}
|
||||
return 0;
|
||||
StringSet::StringSet(pmr::memory_resource* mr) : entries_(mr) {
|
||||
}
|
||||
|
||||
sds AllocImmutableWithTtl(uint32_t len, uint32_t at) {
|
||||
size_t usable;
|
||||
char type = SdsReqType(len);
|
||||
int hdrlen = SdsHdrSize(type);
|
||||
|
||||
char* ptr = (char*)zmalloc_usable(hdrlen + len + 1 + 4, &usable);
|
||||
char* s = ptr + hdrlen;
|
||||
char* fp = s - 1;
|
||||
|
||||
switch (type) {
|
||||
case SDS_TYPE_5: {
|
||||
*fp = type | (len << SDS_TYPE_BITS);
|
||||
break;
|
||||
}
|
||||
|
||||
case SDS_TYPE_8: {
|
||||
SDS_HDR_VAR(8, s);
|
||||
sh->len = len;
|
||||
sh->alloc = len;
|
||||
*fp = type;
|
||||
break;
|
||||
}
|
||||
|
||||
case SDS_TYPE_16: {
|
||||
SDS_HDR_VAR(16, s);
|
||||
sh->len = len;
|
||||
sh->alloc = len;
|
||||
*fp = type;
|
||||
break;
|
||||
}
|
||||
|
||||
case SDS_TYPE_32: {
|
||||
SDS_HDR_VAR(32, s);
|
||||
sh->len = len;
|
||||
sh->alloc = len;
|
||||
*fp = type;
|
||||
break;
|
||||
}
|
||||
case SDS_TYPE_64: {
|
||||
SDS_HDR_VAR(64, s);
|
||||
sh->len = len;
|
||||
sh->alloc = len;
|
||||
*fp = type;
|
||||
break;
|
||||
StringSet::~StringSet() {
|
||||
for (auto& entry : entries_) {
|
||||
if (entry.IsLink()) {
|
||||
LinkKey* lk = (LinkKey*)entry.get();
|
||||
while (lk) {
|
||||
sdsfree((sds)lk->ptr);
|
||||
SuperPtr next = lk->next;
|
||||
Free(lk);
|
||||
if (next.IsSds()) {
|
||||
sdsfree((sds)next.get());
|
||||
lk = nullptr;
|
||||
} else {
|
||||
DCHECK(next.IsLink());
|
||||
lk = (LinkKey*)next.get();
|
||||
}
|
||||
}
|
||||
} else if (!entry.IsEmpty()) {
|
||||
sdsfree((sds)entry.get());
|
||||
}
|
||||
}
|
||||
s[len] = '\0';
|
||||
absl::little_endian::Store32(s + len + 1, at);
|
||||
|
||||
return s;
|
||||
DCHECK_EQ(0u, num_chain_entries_);
|
||||
}
|
||||
|
||||
inline bool MayHaveTtl(sds s) {
|
||||
char* alloc_ptr = (char*)sdsAllocPtr(s);
|
||||
return sdslen(s) + 1 + 4 <= zmalloc_usable_size(alloc_ptr);
|
||||
void StringSet::Reserve(size_t sz) {
|
||||
sz = std::min<size_t>(sz, kMinSize);
|
||||
|
||||
sz = absl::bit_ceil(sz);
|
||||
capacity_log_ = absl::bit_width(sz);
|
||||
entries_.reserve(sz);
|
||||
}
|
||||
|
||||
} // namespace
|
||||
|
||||
bool StringSet::AddSds(sds s1) {
|
||||
return AddInternal(s1, false);
|
||||
size_t StringSet::SuperPtr::SetString(std::string_view str) {
|
||||
sds sdsptr = sdsnewlen(str.data(), str.size());
|
||||
ptr = sdsptr;
|
||||
return zmalloc_usable_size(sdsAllocPtr(sdsptr));
|
||||
}
|
||||
|
||||
bool StringSet::Add(string_view src, uint32_t ttl_sec) {
|
||||
DCHECK_GT(ttl_sec, 0u); // ttl_sec == 0 would mean find and delete immediately
|
||||
|
||||
sds newsds = nullptr;
|
||||
bool has_ttl = false;
|
||||
|
||||
if (ttl_sec == UINT32_MAX) {
|
||||
newsds = sdsnewlen(src.data(), src.size());
|
||||
} else {
|
||||
uint32_t at = time_now() + ttl_sec;
|
||||
DCHECK_LT(time_now(), at);
|
||||
|
||||
newsds = AllocImmutableWithTtl(src.size(), at);
|
||||
if (!src.empty())
|
||||
memcpy(newsds, src.data(), src.size());
|
||||
has_ttl = true;
|
||||
}
|
||||
|
||||
if (!AddInternal(newsds, has_ttl)) {
|
||||
sdsfree(newsds);
|
||||
bool StringSet::SuperPtr::Compare(std::string_view str) const {
|
||||
if (IsEmpty())
|
||||
return false;
|
||||
|
||||
sds sp = GetSds();
|
||||
return str == string_view{sp, sdslen(sp)};
|
||||
}
|
||||
|
||||
bool StringSet::Add(std::string_view str) {
|
||||
DVLOG(1) << "Add " << absl::CHexEscape(str);
|
||||
|
||||
uint64_t hc = CompactObj::HashCode(str);
|
||||
|
||||
if (entries_.empty()) {
|
||||
capacity_log_ = kMinSizeShift;
|
||||
entries_.resize(kMinSize);
|
||||
auto& e = entries_[BucketId(hc)];
|
||||
obj_malloc_used_ += e.SetString(str);
|
||||
++size_;
|
||||
++num_used_buckets_;
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
uint32_t bucket_id = BucketId(hc);
|
||||
if (FindAround(str, bucket_id) < 2)
|
||||
return false;
|
||||
|
||||
DCHECK_LT(bucket_id, entries_.size());
|
||||
++size_;
|
||||
|
||||
// Try insert into flat surface first. Also handle the grow case
|
||||
// if utilization is too high.
|
||||
for (unsigned j = 0; j < 2; ++j) {
|
||||
int offs = FindEmptyAround(bucket_id);
|
||||
if (CanSetFlat(offs)) {
|
||||
auto& entry = entries_[bucket_id + offs];
|
||||
obj_malloc_used_ += entry.SetString(str);
|
||||
if (offs != 0) {
|
||||
entry.SetDisplaced();
|
||||
}
|
||||
++num_used_buckets_;
|
||||
return true;
|
||||
}
|
||||
|
||||
if (size_ < entries_.size())
|
||||
break;
|
||||
|
||||
Grow();
|
||||
bucket_id = BucketId(hc);
|
||||
}
|
||||
|
||||
auto& dest = entries_[bucket_id];
|
||||
DCHECK(!dest.IsEmpty());
|
||||
if (dest.IsDisplaced()) {
|
||||
sds sptr = dest.GetSds();
|
||||
uint32_t nbid = BucketId(sptr);
|
||||
Link(SuperPtr{sptr}, nbid);
|
||||
|
||||
if (dest.IsSds()) {
|
||||
obj_malloc_used_ += dest.SetString(str);
|
||||
} else {
|
||||
LinkKey* lk = (LinkKey*)dest.get();
|
||||
obj_malloc_used_ += lk->SetString(str);
|
||||
dest.ClearDisplaced();
|
||||
}
|
||||
} else {
|
||||
LinkKey* lk = NewLink(str, dest);
|
||||
dest.SetLink(lk);
|
||||
}
|
||||
DCHECK(!dest.IsDisplaced());
|
||||
return true;
|
||||
}
|
||||
|
||||
bool StringSet::Erase(string_view str) {
|
||||
return EraseInternal(&str, 1);
|
||||
}
|
||||
|
||||
bool StringSet::Contains(string_view s1) const {
|
||||
bool ret = ContainsInternal(&s1, 1);
|
||||
return ret;
|
||||
}
|
||||
|
||||
void StringSet::Clear() {
|
||||
ClearInternal();
|
||||
}
|
||||
|
||||
std::optional<std::string> StringSet::Pop() {
|
||||
sds str = (sds)PopInternal();
|
||||
|
||||
if (str == nullptr) {
|
||||
return std::nullopt;
|
||||
unsigned StringSet::BucketDepth(uint32_t bid) const {
|
||||
SuperPtr ptr = entries_[bid];
|
||||
if (ptr.IsEmpty()) {
|
||||
return 0;
|
||||
}
|
||||
|
||||
std::string ret{str, sdslen(str)};
|
||||
sdsfree(str);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
sds StringSet::PopRaw() {
|
||||
return (sds)PopInternal();
|
||||
}
|
||||
|
||||
uint32_t StringSet::Scan(uint32_t cursor, const std::function<void(const sds)>& func) const {
|
||||
return DenseSet::Scan(cursor, [func](const void* ptr) { func((sds)ptr); });
|
||||
}
|
||||
|
||||
uint64_t StringSet::Hash(const void* ptr, uint32_t cookie) const {
|
||||
DCHECK_LT(cookie, 2u);
|
||||
|
||||
if (cookie == 0) {
|
||||
sds s = (sds)ptr;
|
||||
return CompactObj::HashCode(string_view{s, sdslen(s)});
|
||||
unsigned res = 1;
|
||||
while (ptr.IsLink()) {
|
||||
LinkKey* lk = (LinkKey*)ptr.get();
|
||||
++res;
|
||||
ptr = lk->next;
|
||||
DCHECK(!ptr.IsEmpty());
|
||||
}
|
||||
|
||||
const string_view* sv = (const string_view*)ptr;
|
||||
return CompactObj::HashCode(*sv);
|
||||
return res;
|
||||
}
|
||||
|
||||
bool StringSet::ObjEqual(const void* left, const void* right, uint32_t right_cookie) const {
|
||||
DCHECK_LT(right_cookie, 2u);
|
||||
auto StringSet::NewLink(std::string_view str, SuperPtr ptr) -> LinkKey* {
|
||||
LinkAllocator ea(mr());
|
||||
LinkKey* lk = ea.allocate(1);
|
||||
ea.construct(lk);
|
||||
obj_malloc_used_ += lk->SetString(str);
|
||||
lk->next = ptr;
|
||||
++num_chain_entries_;
|
||||
|
||||
sds s1 = (sds)left;
|
||||
return lk;
|
||||
}
|
||||
|
||||
if (right_cookie == 0) {
|
||||
sds s2 = (sds)right;
|
||||
#if 0
|
||||
void StringSet::IterateOverBucket(uint32_t bid, const ItemCb& cb) {
|
||||
const Entry& e = entries_[bid];
|
||||
if (e.IsEmpty()) {
|
||||
DCHECK(!e.next);
|
||||
return;
|
||||
}
|
||||
cb(e.value);
|
||||
|
||||
if (sdslen(s1) != sdslen(s2)) {
|
||||
return false;
|
||||
const Entry* next = e.next;
|
||||
while (next) {
|
||||
cb(next->value);
|
||||
next = next->next;
|
||||
}
|
||||
}
|
||||
#endif
|
||||
|
||||
inline bool cmpsds(sds sp, string_view str) {
|
||||
if (sdslen(sp) != str.size())
|
||||
return false;
|
||||
return str.empty() || memcmp(sp, str.data(), str.size()) == 0;
|
||||
}
|
||||
|
||||
int StringSet::FindAround(string_view str, uint32_t bid) const {
|
||||
SuperPtr ptr = entries_[bid];
|
||||
|
||||
while (ptr.IsLink()) {
|
||||
LinkKey* lk = (LinkKey*)ptr.get();
|
||||
sds sp = (sds)lk->get();
|
||||
if (cmpsds(sp, str))
|
||||
return 0;
|
||||
ptr = lk->next;
|
||||
DCHECK(!ptr.IsEmpty());
|
||||
}
|
||||
|
||||
if (!ptr.IsEmpty()) {
|
||||
DCHECK(ptr.IsSds());
|
||||
sds sp = (sds)ptr.get();
|
||||
if (cmpsds(sp, str))
|
||||
return 0;
|
||||
}
|
||||
|
||||
if (bid && entries_[bid - 1].Compare(str)) {
|
||||
return -1;
|
||||
}
|
||||
|
||||
if (bid + 1 < entries_.size() && entries_[bid + 1].Compare(str)) {
|
||||
return 1;
|
||||
}
|
||||
|
||||
return 2;
|
||||
}
|
||||
|
||||
void StringSet::Grow() {
|
||||
size_t prev_sz = entries_.size();
|
||||
entries_.resize(prev_sz * 2);
|
||||
++capacity_log_;
|
||||
|
||||
for (int i = prev_sz - 1; i >= 0; --i) {
|
||||
SuperPtr* current = &entries_[i];
|
||||
if (current->IsEmpty()) {
|
||||
continue;
|
||||
}
|
||||
|
||||
return sdslen(s1) == 0 || memcmp(s1, s2, sdslen(s1)) == 0;
|
||||
SuperPtr* prev = nullptr;
|
||||
while (true) {
|
||||
SuperPtr next;
|
||||
LinkKey* lk = nullptr;
|
||||
sds sp;
|
||||
|
||||
if (current->IsLink()) {
|
||||
lk = (LinkKey*)current->get();
|
||||
sp = (sds)lk->get();
|
||||
next = lk->next;
|
||||
} else {
|
||||
sp = (sds)current->get();
|
||||
}
|
||||
|
||||
uint32_t bid = BucketId(sp);
|
||||
if (bid != uint32_t(i)) {
|
||||
int offs = FindEmptyAround(bid);
|
||||
if (CanSetFlat(offs)) {
|
||||
auto& dest = entries_[bid + offs];
|
||||
DCHECK(!dest.IsLink());
|
||||
|
||||
dest.ptr = sp;
|
||||
if (offs != 0)
|
||||
dest.SetDisplaced();
|
||||
if (lk) {
|
||||
Free(lk);
|
||||
}
|
||||
++num_used_buckets_;
|
||||
} else {
|
||||
Link(*current, bid);
|
||||
}
|
||||
*current = next;
|
||||
} else {
|
||||
current->ClearDisplaced();
|
||||
if (lk) {
|
||||
prev = current;
|
||||
current = &lk->next;
|
||||
}
|
||||
}
|
||||
if (next.IsEmpty())
|
||||
break;
|
||||
}
|
||||
|
||||
if (prev) {
|
||||
DCHECK(prev->IsLink());
|
||||
LinkKey* lk = (LinkKey*)prev->get();
|
||||
if (lk->next.IsEmpty()) {
|
||||
bool is_displaced = prev->IsDisplaced();
|
||||
prev->ptr = lk->get();
|
||||
if (is_displaced) {
|
||||
prev->SetDisplaced();
|
||||
}
|
||||
Free(lk);
|
||||
}
|
||||
}
|
||||
|
||||
if (entries_[i].IsEmpty()) {
|
||||
--num_used_buckets_;
|
||||
}
|
||||
}
|
||||
|
||||
const string_view* right_sv = (const string_view*)right;
|
||||
string_view left_sv{s1, sdslen(s1)};
|
||||
return left_sv == (*right_sv);
|
||||
#if 0
|
||||
unsigned cnt = 0;
|
||||
for (auto ptr : entries_) {
|
||||
cnt += (!ptr.IsEmpty());
|
||||
}
|
||||
DCHECK_EQ(num_used_buckets_, cnt);
|
||||
#endif
|
||||
}
|
||||
|
||||
size_t StringSet::ObjectAllocSize(const void* s1) const {
|
||||
return zmalloc_usable_size(sdsAllocPtr((sds)s1));
|
||||
void StringSet::Link(SuperPtr ptr, uint32_t bid) {
|
||||
SuperPtr& root = entries_[bid];
|
||||
DCHECK(!root.IsEmpty());
|
||||
|
||||
bool is_root_displaced = root.IsDisplaced();
|
||||
|
||||
if (is_root_displaced) {
|
||||
DCHECK_NE(bid, BucketId(root.GetSds()));
|
||||
}
|
||||
LinkKey* head;
|
||||
void* val;
|
||||
|
||||
if (ptr.IsSds()) {
|
||||
if (is_root_displaced) {
|
||||
// in that case it's better to put ptr into root and move root data into its correct place.
|
||||
sds val;
|
||||
if (root.IsSds()) {
|
||||
val = (sds)root.get();
|
||||
root.ptr = ptr.get();
|
||||
} else {
|
||||
LinkKey* lk = (LinkKey*)root.get();
|
||||
val = (sds)lk->get();
|
||||
lk->ptr = ptr.get();
|
||||
root.ClearDisplaced();
|
||||
}
|
||||
uint32_t nbid = BucketId(val);
|
||||
DCHECK_NE(nbid, bid);
|
||||
|
||||
Link(SuperPtr{val}, nbid); // Potentially unbounded wave of updates.
|
||||
return;
|
||||
}
|
||||
|
||||
LinkAllocator ea(mr());
|
||||
head = ea.allocate(1);
|
||||
ea.construct(head);
|
||||
val = ptr.get();
|
||||
++num_chain_entries_;
|
||||
} else {
|
||||
head = (LinkKey*)ptr.get();
|
||||
val = head->get();
|
||||
}
|
||||
|
||||
if (root.IsSds()) {
|
||||
head->ptr = root.get();
|
||||
head->next = SuperPtr{val};
|
||||
root.SetLink(head);
|
||||
if (is_root_displaced) {
|
||||
DCHECK_NE(bid, BucketId((sds)head->ptr));
|
||||
root.SetDisplaced();
|
||||
}
|
||||
} else {
|
||||
DCHECK(root.IsLink());
|
||||
LinkKey* chain = (LinkKey*)root.get();
|
||||
head->next = chain->next;
|
||||
head->ptr = val;
|
||||
chain->next.SetLink(head);
|
||||
}
|
||||
}
|
||||
|
||||
uint32_t StringSet::ObjExpireTime(const void* str) const {
|
||||
sds s = (sds)str;
|
||||
DCHECK(MayHaveTtl(s));
|
||||
#if 0
|
||||
void StringSet::MoveEntry(Entry* e, uint32_t bid) {
|
||||
auto& dest = entries_[bid];
|
||||
if (IsEmpty(dest)) {
|
||||
dest.value = std::move(e->value);
|
||||
Free(e);
|
||||
return;
|
||||
}
|
||||
e->next = dest.next;
|
||||
dest.next = e;
|
||||
}
|
||||
#endif
|
||||
|
||||
char* ttlptr = s + sdslen(s) + 1;
|
||||
return absl::little_endian::Load32(ttlptr);
|
||||
int StringSet::FindEmptyAround(uint32_t bid) const {
|
||||
if (entries_[bid].IsEmpty())
|
||||
return 0;
|
||||
|
||||
if (bid + 1 < entries_.size() && entries_[bid + 1].IsEmpty())
|
||||
return 1;
|
||||
|
||||
if (bid && entries_[bid - 1].IsEmpty())
|
||||
return -1;
|
||||
|
||||
return 2;
|
||||
}
|
||||
|
||||
void StringSet::ObjDelete(void* obj, bool has_ttl) const {
|
||||
sdsfree((sds)obj);
|
||||
uint32_t StringSet::BucketId(sds ptr) const {
|
||||
string_view sv{ptr, sdslen(ptr)};
|
||||
return BucketId(CompactObj::HashCode(sv));
|
||||
}
|
||||
|
||||
#if 0
|
||||
uint32_t StringSet::Scan(uint32_t cursor, const ItemCb& cb) const {
|
||||
if (capacity_log_ == 0)
|
||||
return 0;
|
||||
|
||||
uint32_t bucket_id = cursor >> (32 - capacity_log_);
|
||||
const_iterator it(this, bucket_id);
|
||||
|
||||
if (it.entry_ == nullptr)
|
||||
return 0;
|
||||
|
||||
bucket_id = it.bucket_id_; // non-empty bucket
|
||||
do {
|
||||
cb(*it);
|
||||
++it;
|
||||
} while (it.bucket_id_ == bucket_id);
|
||||
|
||||
if (it.entry_ == nullptr)
|
||||
return 0;
|
||||
|
||||
if (it.bucket_id_ == bucket_id + 1) { // cover displacement case
|
||||
// TODO: we could avoid computing HC if we explicitly mark displacement.
|
||||
// we have plenty of metadata to do so.
|
||||
uint32_t bid = BucketId((*it).HashCode());
|
||||
if (bid == it.bucket_id_) {
|
||||
cb(*it);
|
||||
++it;
|
||||
}
|
||||
}
|
||||
|
||||
return it.entry_ ? it.bucket_id_ << (32 - capacity_log_) : 0;
|
||||
}
|
||||
|
||||
bool StringSet::Erase(std::string_view val) {
|
||||
uint64_t hc = CompactObj::HashCode(val);
|
||||
uint32_t bid = BucketId(hc);
|
||||
|
||||
Entry* current = &entries_[bid];
|
||||
|
||||
if (!current->IsEmpty()) {
|
||||
if (current->value == val) {
|
||||
current->Reset();
|
||||
ShiftLeftIfNeeded(current);
|
||||
--size_;
|
||||
return true;
|
||||
}
|
||||
|
||||
Entry* prev = current;
|
||||
current = current->next;
|
||||
while (current) {
|
||||
if (current->value == val) {
|
||||
current->Reset();
|
||||
prev->next = current->next;
|
||||
Free(current);
|
||||
--size_;
|
||||
return true;
|
||||
}
|
||||
prev = current;
|
||||
current = current->next;
|
||||
}
|
||||
}
|
||||
|
||||
auto& prev = entries_[bid - 1];
|
||||
// TODO: to mark displacement.
|
||||
if (bid && !prev.IsEmpty()) {
|
||||
if (prev.value == val) {
|
||||
obj_malloc_used_ -= prev.value.MallocUsed();
|
||||
|
||||
prev.Reset();
|
||||
ShiftLeftIfNeeded(&prev);
|
||||
--size_;
|
||||
return true;
|
||||
}
|
||||
}
|
||||
|
||||
auto& next = entries_[bid + 1];
|
||||
if (bid + 1 < entries_.size()) {
|
||||
if (next.value == val) {
|
||||
obj_malloc_used_ -= next.value.MallocUsed();
|
||||
next.Reset();
|
||||
ShiftLeftIfNeeded(&next);
|
||||
--size_;
|
||||
return true;
|
||||
}
|
||||
}
|
||||
|
||||
return false;
|
||||
}
|
||||
|
||||
#endif
|
||||
|
||||
void StringSet::iterator::SeekNonEmpty() {
|
||||
while (bucket_id_ < owner_->entries_.size()) {
|
||||
if (!owner_->entries_[bucket_id_].IsEmpty()) {
|
||||
entry_ = &owner_->entries_[bucket_id_];
|
||||
return;
|
||||
}
|
||||
++bucket_id_;
|
||||
}
|
||||
entry_ = nullptr;
|
||||
}
|
||||
|
||||
void StringSet::const_iterator::SeekNonEmpty() {
|
||||
while (bucket_id_ < owner_->entries_.size()) {
|
||||
if (!owner_->entries_[bucket_id_].IsEmpty()) {
|
||||
entry_ = &owner_->entries_[bucket_id_];
|
||||
return;
|
||||
}
|
||||
++bucket_id_;
|
||||
}
|
||||
entry_ = nullptr;
|
||||
}
|
||||
|
||||
} // namespace dfly
|
||||
|
|
|
@ -1,66 +1,332 @@
|
|||
// Copyright 2022, Roman Gershman. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
#pragma once
|
||||
|
||||
#include <cstdint>
|
||||
#include <functional>
|
||||
#include <optional>
|
||||
|
||||
#include "core/dense_set.h"
|
||||
#include <memory_resource>
|
||||
|
||||
extern "C" {
|
||||
#include "redis/sds.h"
|
||||
#include "redis/object.h"
|
||||
}
|
||||
|
||||
namespace dfly {
|
||||
|
||||
class StringSet : public DenseSet {
|
||||
// StringSet is a nice but over-optimized data-structure. Probably is not worth it in the first
|
||||
// place but sometimes the OCD kicks in and one can not resist.
|
||||
// The advantage of it over redis-dict is smaller meta-data waste.
|
||||
// dictEntry is 24 bytes, i.e it uses at least 32N bytes where N is the expected length.
|
||||
// dict requires to allocate dictEntry per each addition in addition to the supplied key.
|
||||
// It also wastes space in case of a set because it stores a value pointer inside dictEntry.
|
||||
// To summarize:
|
||||
// 100% utilized dict uses N*24 + N*8 = 32N bytes not including the key space.
|
||||
// for 75% utilization (1/0.75 buckets): N*1.33*8 + N*24 = 35N
|
||||
//
|
||||
// This class uses 8 bytes per bucket (similarly to dictEntry*) but it uses it for both
|
||||
// links and keys. For most cases, we remove the need for another redirection layer
|
||||
// and just store the key, so no "dictEntry" allocations occur.
|
||||
// For those cells that require chaining, the bucket is
|
||||
// changed in run-time to represent a linked chain.
|
||||
// Additional feature - in order to reduce collisions, we insert items into
|
||||
// neighbour cells but only if they are empty (not chains). This way we reduce the number of
|
||||
// empty (unused) spaces at full utilization from 36% to ~21%.
|
||||
// 100% utilized table requires: N*8 + 0.2N*16 = 11.2N bytes or ~20 bytes savings.
|
||||
// 75% utilization: N*1.33*8 + 0.12N*16 = 13N or ~22 bytes savings per record.
|
||||
// TODO: to separate hash/compare functions from table logic and make it generic
|
||||
// with potential replacements of hset/zset data structures.
|
||||
// static_assert(sizeof(dictEntry) == 24);
|
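// Minimal usage sketch (added for illustration, not part of the header; it
// mirrors the expectations exercised in string_set_test.cc later in this diff):
//
//   StringSet ss;
//   ss.Add("foo");
//   ss.Add("bar");
//   CHECK(ss.Contains("foo"));
//   CHECK(!ss.Add("foo"));    // duplicate insertions are rejected
//   CHECK_EQ(2u, ss.size());
//   ss.Erase("bar");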
||||
|
||||
class StringSet {
|
||||
struct LinkKey;
|
||||
// we can assume that high 12 bits of user address space
|
||||
// can be used for tagging. At most 52 bits of address are reserved for
|
||||
// some configurations, and usually it's 48 bits.
|
||||
// https://www.kernel.org/doc/html/latest/arm64/memory.html
|
||||
static constexpr size_t kLinkBit = 1ULL << 52;
|
||||
static constexpr size_t kDisplaceBit = 1ULL << 53;
|
||||
static constexpr size_t kTagMask = 4095ULL << 51; // we reserve 12 high bits.
|
||||
|
||||
struct SuperPtr {
|
||||
void* ptr = nullptr; //
|
||||
|
||||
explicit SuperPtr(void* p = nullptr) : ptr(p) {
|
||||
}
|
||||
|
||||
bool IsSds() const {
|
||||
return (uintptr_t(ptr) & kLinkBit) == 0;
|
||||
}
|
||||
|
||||
bool IsLink() const {
|
||||
return (uintptr_t(ptr) & kLinkBit) == kLinkBit;
|
||||
}
|
||||
|
||||
bool IsEmpty() const {
|
||||
return ptr == nullptr;
|
||||
}
|
||||
|
||||
void* get() const {
|
||||
return (void*)(uintptr_t(ptr) & ~kTagMask);
|
||||
}
|
||||
|
||||
bool IsDisplaced() const {
|
||||
return (uintptr_t(ptr) & kDisplaceBit) == kDisplaceBit;
|
||||
}
|
||||
|
||||
// returns usable size.
|
||||
size_t SetString(std::string_view str);
|
||||
|
||||
void SetLink(LinkKey* lk) {
|
||||
ptr = (void*)(uintptr_t(lk) | kLinkBit);
|
||||
}
|
||||
|
||||
bool Compare(std::string_view str) const;
|
||||
|
||||
void SetDisplaced() {
|
||||
ptr = (void*)(uintptr_t(ptr) | kDisplaceBit);
|
||||
}
|
||||
|
||||
void ClearDisplaced() {
|
||||
ptr = (void*)(uintptr_t(ptr) & ~kDisplaceBit);
|
||||
}
|
||||
|
||||
void Reset() {
|
||||
ptr = nullptr;
|
||||
}
|
||||
|
||||
sds GetSds() const {
|
||||
if (IsSds())
|
||||
return (sds)get();
|
||||
LinkKey* lk = (LinkKey*)get();
|
||||
return (sds)lk->get();
|
||||
}
|
||||
};
|
||||
|
||||
struct LinkKey : public SuperPtr {
|
||||
SuperPtr next; // could be LinkKey* or sds.
|
||||
};
|
||||
|
||||
static_assert(sizeof(SuperPtr) == 8);
|
||||
|
||||
public:
|
||||
bool Add(std::string_view s1, uint32_t ttl_sec = UINT32_MAX);
|
||||
class iterator;
|
||||
class const_iterator;
|
||||
// using ItemCb = std::function<void(const CompactObj& co)>;
|
||||
|
||||
// Used currently by rdb_load.
|
||||
bool AddSds(sds s1);
|
||||
StringSet(const StringSet&) = delete;
|
||||
|
||||
bool Erase(std::string_view s1);
|
||||
explicit StringSet(std::pmr::memory_resource* mr = std::pmr::get_default_resource());
|
||||
~StringSet();
|
||||
|
||||
bool Contains(std::string_view s1) const;
|
||||
StringSet& operator=(StringSet&) = delete;
|
||||
|
||||
void Clear();
|
||||
void Reserve(size_t sz);
|
||||
|
||||
std::optional<std::string> Pop();
|
||||
sds PopRaw();
|
||||
bool Add(std::string_view str);
|
||||
|
||||
~StringSet() {
|
||||
Clear();
|
||||
bool Remove(std::string_view str);
|
||||
|
||||
void Erase(iterator it);
|
||||
|
||||
size_t size() const {
|
||||
return size_;
|
||||
}
|
||||
|
||||
StringSet(std::pmr::memory_resource* res = std::pmr::get_default_resource()) : DenseSet(res) {
|
||||
bool empty() const {
|
||||
return size_ == 0;
|
||||
}
|
||||
|
||||
iterator<sds> begin() {
|
||||
return DenseSet::begin<sds>();
|
||||
size_t bucket_count() const {
|
||||
return entries_.size();
|
||||
}
|
||||
|
||||
iterator<sds> end() {
|
||||
return DenseSet::end<sds>();
|
||||
// Number of entries that are chained to the entries stored inline in the bucket array.
|
||||
size_t num_chain_entries() const {
|
||||
return num_chain_entries_;
|
||||
}
|
||||
|
||||
const_iterator<sds> cbegin() const {
|
||||
return DenseSet::cbegin<sds>();
|
||||
size_t num_used_buckets() const {
|
||||
return num_used_buckets_;
|
||||
}
|
||||
|
||||
const_iterator<sds> cend() const {
|
||||
return DenseSet::cend<sds>();
|
||||
bool Contains(std::string_view val) const;
|
||||
|
||||
bool Erase(std::string_view val);
|
||||
|
||||
iterator begin() {
|
||||
return iterator{this, 0};
|
||||
}
|
||||
|
||||
uint32_t Scan(uint32_t, const std::function<void(sds)>&) const;
|
||||
iterator end() {
|
||||
return iterator{};
|
||||
}
|
||||
|
||||
protected:
|
||||
uint64_t Hash(const void* ptr, uint32_t cookie) const override;
|
||||
size_t obj_malloc_used() const {
|
||||
return obj_malloc_used_;
|
||||
}
|
||||
|
||||
bool ObjEqual(const void* left, const void* right, uint32_t right_cookie) const override;
|
||||
size_t set_malloc_used() const {
|
||||
return (num_chain_entries_ + entries_.capacity()) * sizeof(SuperPtr);
|
||||
}
|
||||
|
||||
size_t ObjectAllocSize(const void* s1) const override;
|
||||
uint32_t ObjExpireTime(const void* obj) const override;
|
||||
void ObjDelete(void* obj, bool has_ttl) const override;
|
||||
/// stable scanning api. has the same guarantees as redis scan command.
|
||||
/// we avoid doing bit-reverse by using a different function to derive a bucket id
|
||||
/// from hash values. By using msb part of hash we make it "stable" with respect to
|
||||
/// rehashes. For example, with table log size 4 (size 16), entries in bucket id
|
||||
/// 1110 come from hashes 1110XXXXX.... When a table grows to log size 5,
|
||||
/// these entries can move either to 11100 or 11101. So if we traversed with our cursor
|
||||
/// range [0000-1110], it's guaranteed that in grown table we do not need to cover again
|
||||
/// [00000-11100]. Similarly with shrinkage, if a table is shrunk to log size 3,
|
||||
/// keys from 1110 and 1111 will move to bucket 111. Again, it's guaranteed that we
|
||||
/// covered the range [000-111] (all keys in that case).
|
||||
/// Returns: next cursor or 0 if reached the end of scan.
|
||||
/// cursor = 0 - initiates a new scan.
|
||||
// uint32_t Scan(uint32_t cursor, const ItemCb& cb) const;
|
||||
|
||||
unsigned BucketDepth(uint32_t bid) const;
|
||||
|
||||
// void IterateOverBucket(uint32_t bid, const ItemCb& cb);
|
||||
|
||||
class iterator {
|
||||
friend class StringSet;
|
||||
|
||||
public:
|
||||
iterator() : owner_(nullptr), entry_(nullptr), bucket_id_(0) {
|
||||
}
|
||||
|
||||
iterator& operator++();
|
||||
|
||||
bool operator==(const iterator& o) const {
|
||||
return entry_ == o.entry_;
|
||||
}
|
||||
|
||||
bool operator!=(const iterator& o) const {
|
||||
return !(*this == o);
|
||||
}
|
||||
|
||||
private:
|
||||
iterator(StringSet* owner, uint32_t bid) : owner_(owner), bucket_id_(bid) {
|
||||
SeekNonEmpty();
|
||||
}
|
||||
|
||||
void SeekNonEmpty();
|
||||
|
||||
StringSet* owner_ = nullptr;
|
||||
SuperPtr* entry_ = nullptr;
|
||||
uint32_t bucket_id_ = 0;
|
||||
};
|
||||
|
||||
class const_iterator {
|
||||
friend class StringSet;
|
||||
|
||||
public:
|
||||
const_iterator() : owner_(nullptr), entry_(nullptr), bucket_id_(0) {
|
||||
}
|
||||
|
||||
const_iterator& operator++();
|
||||
|
||||
const_iterator& operator=(iterator& it) {
|
||||
owner_ = it.owner_;
|
||||
entry_ = it.entry_;
|
||||
bucket_id_ = it.bucket_id_;
|
||||
|
||||
return *this;
|
||||
}
|
||||
|
||||
bool operator==(const const_iterator& o) const {
|
||||
return entry_ == o.entry_;
|
||||
}
|
||||
|
||||
bool operator!=(const const_iterator& o) const {
|
||||
return !(*this == o);
|
||||
}
|
||||
|
||||
private:
|
||||
const_iterator(const StringSet* owner, uint32_t bid) : owner_(owner), bucket_id_(bid) {
|
||||
SeekNonEmpty();
|
||||
}
|
||||
|
||||
void SeekNonEmpty();
|
||||
|
||||
const StringSet* owner_ = nullptr;
|
||||
const SuperPtr* entry_ = nullptr;
|
||||
uint32_t bucket_id_ = 0;
|
||||
};
|
||||
|
||||
private:
|
||||
friend class iterator;
|
||||
|
||||
using LinkAllocator = std::pmr::polymorphic_allocator<LinkKey>;
|
||||
|
||||
std::pmr::memory_resource* mr() {
|
||||
return entries_.get_allocator().resource();
|
||||
}
|
||||
|
||||
uint32_t BucketId(uint64_t hash) const {
|
||||
return hash >> (64 - capacity_log_);
|
||||
}
|
||||
|
||||
uint32_t BucketId(sds ptr) const;
|
||||
|
||||
// Returns: 2 if no empty spaces found around the bucket. 0, -1, 1 - offset towards
|
||||
// an empty bucket.
|
||||
int FindEmptyAround(uint32_t bid) const;
|
||||
|
||||
// returns 2 if no object was found in the vicinity.
|
||||
// Returns relative offset to bid: 0, -1, 1 if found.
|
||||
int FindAround(std::string_view str, uint32_t bid) const;
|
||||
|
||||
void Grow();
|
||||
|
||||
void Link(SuperPtr ptr, uint32_t bid);
|
||||
/*void MoveEntry(Entry* e, uint32_t bid);
|
||||
|
||||
void ShiftLeftIfNeeded(Entry* root) {
|
||||
if (root->next) {
|
||||
root->value = std::move(root->next->value);
|
||||
Entry* tmp = root->next;
|
||||
root->next = root->next->next;
|
||||
Free(tmp);
|
||||
}
|
||||
}
|
||||
*/
|
||||
void Free(LinkKey* lk) {
|
||||
mr()->deallocate(lk, sizeof(LinkKey), alignof(LinkKey));
|
||||
--num_chain_entries_;
|
||||
}
|
||||
|
||||
LinkKey* NewLink(std::string_view str, SuperPtr ptr);
|
||||
|
||||
// The rule is - entries can be moved to vicinity as long as they are stored
|
||||
// "flat", i.e. not into the linked list. The linked list
|
||||
std::pmr::vector<SuperPtr> entries_;
|
||||
size_t obj_malloc_used_ = 0;
|
||||
uint32_t size_ = 0;
|
||||
uint32_t num_chain_entries_ = 0;
|
||||
uint32_t num_used_buckets_ = 0;
|
||||
unsigned capacity_log_ = 0;
|
||||
};
|
||||
|
||||
} // end namespace dfly
|
||||
#if 0
|
||||
inline StringSet::iterator& StringSet::iterator::operator++() {
|
||||
if (entry_->next) {
|
||||
entry_ = entry_->next;
|
||||
} else {
|
||||
++bucket_id_;
|
||||
SeekNonEmpty();
|
||||
}
|
||||
|
||||
return *this;
|
||||
}
|
||||
|
||||
inline StringSet::const_iterator& StringSet::const_iterator::operator++() {
|
||||
if (entry_->next) {
|
||||
entry_ = entry_->next;
|
||||
} else {
|
||||
++bucket_id_;
|
||||
SeekNonEmpty();
|
||||
}
|
||||
|
||||
return *this;
|
||||
}
|
||||
#endif
|
||||
|
||||
} // namespace dfly
|
||||
|
|
|
@ -1,28 +1,18 @@
// Copyright 2022, DragonflyDB authors. All rights reserved.
// Copyright 2022, Roman Gershman. All rights reserved.
// See LICENSE for licensing terms.
//

#include "core/string_set.h"

#include <gtest/gtest.h>
#include <absl/strings/str_cat.h>
#include <gmock/gmock.h>
#include <mimalloc.h>

#include <algorithm>
#include <cstddef>
#include <memory_resource>
#include <random>
#include <string>
#include <string_view>
#include <unordered_set>
#include <vector>

#include <absl/strings/match.h>
#include <absl/strings/str_cat.h>

#include "core/compact_object.h"
#include "base/gtest.h"
#include "base/logging.h"
#include "core/mi_memory_resource.h"
#include "glog/logging.h"
#include "redis/sds.h"

extern "C" {
#include "redis/zmalloc.h"
@ -31,379 +21,111 @@ extern "C" {
|
|||
namespace dfly {

using namespace std;
using absl::StrCat;

class DenseSetAllocator : public pmr::memory_resource {
 public:
  bool all_freed() const {
    return alloced_ == 0;
  }

  void* do_allocate(size_t bytes, size_t alignment) override {
    alloced_ += bytes;
    void* p = pmr::new_delete_resource()->allocate(bytes, alignment);
    return p;
  }

  void do_deallocate(void* p, size_t bytes, size_t alignment) override {
    alloced_ -= bytes;
    return pmr::new_delete_resource()->deallocate(p, bytes, alignment);
  }

  bool do_is_equal(const pmr::memory_resource& other) const noexcept override {
    return pmr::new_delete_resource()->is_equal(other);
  }

 private:
  size_t alloced_ = 0;
};

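The tracking resource above lets the fixture assert that everything the set allocated was returned. A minimal usage sketch of the same idea with a standard pmr container, assuming the `DenseSetAllocator` defined above is visible (the `main` function and sizes are illustrative only):

```cpp
#include <cassert>
#include <memory_resource>
#include <vector>

int main() {
  DenseSetAllocator tracking;             // counts bytes in do_allocate/do_deallocate
  {
    std::pmr::vector<int> v(&tracking);   // route the vector's allocations through it
    v.resize(1024);
  }                                       // vector destroyed, memory handed back
  assert(tracking.all_freed());           // no leak: the byte counter is back to zero
  return 0;
}
```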
class StringSetTest : public ::testing::Test {
|
||||
protected:
|
||||
static void SetUpTestSuite() {
|
||||
auto* tlh = mi_heap_get_backing();
|
||||
init_zmalloc_threadlocal(tlh);
|
||||
// SmallString::InitThreadLocal(tlh);
|
||||
|
||||
// static MiMemoryResource mi_resource(tlh);
|
||||
// needed for MallocUsed
|
||||
// CompactObj::InitThreadLocal(&mi_resource);
|
||||
}
|
||||
|
||||
static void TearDownTestSuite() {
|
||||
}
|
||||
|
||||
void SetUp() override {
|
||||
ss_ = new StringSet(&alloc_);
|
||||
}
|
||||
|
||||
void TearDown() override {
|
||||
delete ss_;
|
||||
|
||||
// ensure there are no memory leaks after every test
|
||||
EXPECT_TRUE(alloc_.all_freed());
|
||||
EXPECT_EQ(zmalloc_used_memory_tl, 0);
|
||||
}
|
||||
|
||||
StringSet* ss_;
|
||||
DenseSetAllocator alloc_;
|
||||
StringSet ss_;
|
||||
};
|
||||
|
||||
TEST_F(StringSetTest, Basic) {
|
||||
EXPECT_TRUE(ss_->Add("foo"sv));
|
||||
EXPECT_TRUE(ss_->Add("bar"sv));
|
||||
EXPECT_FALSE(ss_->Add("foo"sv));
|
||||
EXPECT_FALSE(ss_->Add("bar"sv));
|
||||
EXPECT_TRUE(ss_->Contains("foo"sv));
|
||||
EXPECT_TRUE(ss_->Contains("bar"sv));
|
||||
EXPECT_EQ(2, ss_->Size());
|
||||
EXPECT_TRUE(ss_.Add("foo"));
|
||||
EXPECT_TRUE(ss_.Add("bar"));
|
||||
EXPECT_FALSE(ss_.Add("foo"));
|
||||
EXPECT_FALSE(ss_.Add("bar"));
|
||||
EXPECT_EQ(2, ss_.size());
|
||||
}
|
||||
|
||||
TEST_F(StringSetTest, StandardAddErase) {
|
||||
EXPECT_TRUE(ss_->Add("@@@@@@@@@@@@@@@@"));
|
||||
EXPECT_TRUE(ss_->Add("A@@@@@@@@@@@@@@@"));
|
||||
EXPECT_TRUE(ss_->Add("AA@@@@@@@@@@@@@@"));
|
||||
EXPECT_TRUE(ss_->Add("AAA@@@@@@@@@@@@@"));
|
||||
EXPECT_TRUE(ss_->Add("AAAAAAAAA@@@@@@@"));
|
||||
EXPECT_TRUE(ss_->Add("AAAAAAAAAA@@@@@@"));
|
||||
EXPECT_TRUE(ss_->Add("AAAAAAAAAAAAAAA@"));
|
||||
EXPECT_TRUE(ss_->Add("AAAAAAAAAAAAAAAA"));
|
||||
EXPECT_TRUE(ss_->Add("AAAAAAAAAAAAAAAD"));
|
||||
EXPECT_TRUE(ss_->Add("BBBBBAAAAAAAAAAA"));
|
||||
EXPECT_TRUE(ss_->Add("BBBBBBBBAAAAAAAA"));
|
||||
EXPECT_TRUE(ss_->Add("CCCCCBBBBBBBBBBB"));
|
||||
|
||||
// Remove link in the middle of chain
|
||||
EXPECT_TRUE(ss_->Erase("BBBBBBBBAAAAAAAA"));
|
||||
// Remove start of a chain
|
||||
EXPECT_TRUE(ss_->Erase("CCCCCBBBBBBBBBBB"));
|
||||
// Remove end of link
|
||||
EXPECT_TRUE(ss_->Erase("AAA@@@@@@@@@@@@@"));
|
||||
// Remove only item in chain
|
||||
EXPECT_TRUE(ss_->Erase("AA@@@@@@@@@@@@@@"));
|
||||
EXPECT_TRUE(ss_->Erase("AAAAAAAAA@@@@@@@"));
|
||||
EXPECT_TRUE(ss_->Erase("AAAAAAAAAA@@@@@@"));
|
||||
EXPECT_TRUE(ss_->Erase("AAAAAAAAAAAAAAA@"));
|
||||
TEST_F(StringSetTest, Ex1) {
|
||||
EXPECT_TRUE(ss_.Add("AA@@@@@@@@@@@@@@"));
|
||||
EXPECT_TRUE(ss_.Add("AAA@@@@@@@@@@@@@"));
|
||||
EXPECT_TRUE(ss_.Add("AAAAAAAAA@@@@@@@"));
|
||||
EXPECT_TRUE(ss_.Add("AAAAAAAAAA@@@@@@"));
|
||||
EXPECT_TRUE(ss_.Add("AAAAAAAAAAAAAAA@"));
|
||||
EXPECT_TRUE(ss_.Add("BBBBBAAAAAAAAAAA"));
|
||||
EXPECT_TRUE(ss_.Add("BBBBBBBBAAAAAAAA"));
|
||||
EXPECT_TRUE(ss_.Add("CCCCCBBBBBBBBBBB"));
|
||||
}
|
||||
|
||||
static string random_string(mt19937& rand, unsigned len) {
|
||||
const string_view alpanum = "1234567890abcdefghijklmnopqrstuvwxyz";
|
||||
string ret;
|
||||
ret.reserve(len);
|
||||
TEST_F(StringSetTest, Many) {
|
||||
double max_chain_factor = 0;
|
||||
for (unsigned i = 0; i < 8192; ++i) {
|
||||
EXPECT_TRUE(ss_.Add(absl::StrCat("xxxxxxxxxxxxxxxxx", i)));
|
||||
size_t sz = ss_.size();
|
||||
bool should_print = (sz == ss_.bucket_count()) || (sz == ss_.bucket_count() * 0.75);
|
||||
if (should_print) {
|
||||
double chain_usage = double(ss_.num_chain_entries()) / ss_.size();
|
||||
unsigned num_empty = ss_.bucket_count() - ss_.num_used_buckets();
|
||||
double empty_factor = double(num_empty) / ss_.bucket_count();
|
||||
|
||||
for (size_t i = 0; i < len; ++i) {
|
||||
ret += alpanum[rand() % alpanum.size()];
|
||||
}
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
TEST_F(StringSetTest, Resizing) {
|
||||
constexpr size_t num_strs = 4096;
|
||||
// pseudo random deterministic sequence with known seed should produce
|
||||
// the same sequence on all systems
|
||||
mt19937 rand(0);
|
||||
|
||||
vector<string> strs;
|
||||
while (strs.size() != num_strs) {
|
||||
auto str = random_string(rand, 10);
|
||||
if (find(strs.begin(), strs.end(), str) != strs.end()) {
|
||||
continue;
|
||||
}
|
||||
|
||||
strs.push_back(random_string(rand, 10));
|
||||
}
|
||||
|
||||
for (size_t i = 0; i < num_strs; ++i) {
|
||||
EXPECT_TRUE(ss_->Add(strs[i]));
|
||||
EXPECT_EQ(ss_->Size(), i + 1);
|
||||
|
||||
// make sure we haven't lost any items after a grow
|
||||
// which happens every power of 2
|
||||
if (i != 0 && (ss_->Size() & (ss_->Size() - 1)) == 0) {
|
||||
for (size_t j = 0; j < i; ++j) {
|
||||
EXPECT_TRUE(ss_->Contains(strs[j]));
|
||||
LOG(INFO) << "chains: " << 100 * chain_usage << ", empty: " << 100 * empty_factor << "% at "
|
||||
<< ss_.size();
|
||||
#if 0
|
||||
if (ss_.size() == 15) {
|
||||
for (unsigned i = 0; i < ss_.bucket_count(); ++i) {
|
||||
LOG(INFO) << "[" << i << "]: " << ss_.BucketDepth(i);
|
||||
}
|
||||
/*ss_.IterateOverBucket(93, [this](const CompactObj& co) {
|
||||
LOG(INFO) << "93->" << (co.HashCode() % ss_.bucket_count());
|
||||
});*/
|
||||
}
|
||||
#endif
|
||||
}
|
||||
}
|
||||
EXPECT_EQ(8192, ss_.size());
|
||||
|
||||
LOG(INFO) << "max chain factor: " << 100 * max_chain_factor << "%";
|
||||
/*size_t iter_len = 0;
|
||||
for (auto it = ss_.begin(); it != ss_.end(); ++it) {
|
||||
++iter_len;
|
||||
}
|
||||
EXPECT_EQ(iter_len, 512);*/
|
||||
}
|
||||
|
||||
TEST_F(StringSetTest, SimpleScan) {
|
||||
unordered_set<string_view> info = {"foo", "bar"};
|
||||
unordered_set<string_view> seen;
|
||||
#if 0
|
||||
TEST_F(StringSetTest, IterScan) {
|
||||
unordered_set<string> actual, expected;
|
||||
auto insert_actual = [&](const CompactObj& val) {
|
||||
string tmp;
|
||||
val.GetString(&tmp);
|
||||
actual.insert(tmp);
|
||||
};
|
||||
|
||||
for (auto str : info) {
|
||||
EXPECT_TRUE(ss_->Add(str));
|
||||
EXPECT_EQ(0, ss_.Scan(0, insert_actual));
|
||||
EXPECT_TRUE(actual.empty());
|
||||
|
||||
for (unsigned i = 0; i < 512; ++i) {
|
||||
string s = absl::StrCat("x", i);
|
||||
expected.insert(s);
|
||||
EXPECT_TRUE(ss_.Add(s));
|
||||
}
|
||||
|
||||
|
||||
for (CompactObj& val : ss_) {
|
||||
insert_actual(val);
|
||||
}
|
||||
|
||||
EXPECT_EQ(actual, expected);
|
||||
actual.clear();
|
||||
uint32_t cursor = 0;
|
||||
do {
|
||||
cursor = ss_->Scan(cursor, [&](const sds ptr) {
|
||||
sds s = (sds)ptr;
|
||||
string_view str{s, sdslen(s)};
|
||||
EXPECT_TRUE(info.count(str));
|
||||
seen.insert(str);
|
||||
});
|
||||
} while (cursor != 0);
|
||||
|
||||
EXPECT_TRUE(seen.size() == info.size() && equal(seen.begin(), seen.end(), info.begin()));
|
||||
cursor = ss_.Scan(cursor, insert_actual);
|
||||
} while (cursor);
|
||||
EXPECT_EQ(actual, expected);
|
||||
}
|
||||
|
||||
// Ensure REDIS scan guarantees are met
|
||||
TEST_F(StringSetTest, ScanGuarantees) {
|
||||
unordered_set<string_view> to_be_seen = {"foo", "bar"};
|
||||
unordered_set<string_view> not_be_seen = {"AAA", "BBB"};
|
||||
unordered_set<string_view> maybe_seen = {"AA@@@@@@@@@@@@@@", "AAA@@@@@@@@@@@@@",
|
||||
"AAAAAAAAA@@@@@@@", "AAAAAAAAAA@@@@@@"};
|
||||
unordered_set<string_view> seen;
|
||||
#endif
|
||||
|
||||
auto scan_callback = [&](const sds ptr) {
|
||||
sds s = (sds)ptr;
|
||||
string_view str{s, sdslen(s)};
|
||||
EXPECT_TRUE(to_be_seen.count(str) || maybe_seen.count(str));
|
||||
EXPECT_FALSE(not_be_seen.count(str));
|
||||
if (to_be_seen.count(str)) {
|
||||
seen.insert(str);
|
||||
}
|
||||
};
|
||||
|
||||
EXPECT_EQ(ss_->Scan(0, scan_callback), 0);
|
||||
|
||||
for (auto str : not_be_seen) {
|
||||
EXPECT_TRUE(ss_->Add(str));
|
||||
}
|
||||
|
||||
for (auto str : not_be_seen) {
|
||||
EXPECT_TRUE(ss_->Erase(str));
|
||||
}
|
||||
|
||||
for (auto str : to_be_seen) {
|
||||
EXPECT_TRUE(ss_->Add(str));
|
||||
}
|
||||
|
||||
// should reach at least the first item in the set
|
||||
uint32_t cursor = ss_->Scan(0, scan_callback);
|
||||
|
||||
for (auto str : maybe_seen) {
|
||||
EXPECT_TRUE(ss_->Add(str));
|
||||
}
|
||||
|
||||
while (cursor != 0) {
|
||||
cursor = ss_->Scan(cursor, scan_callback);
|
||||
}
|
||||
|
||||
EXPECT_TRUE(seen.size() == to_be_seen.size());
|
||||
}
|
||||
|
||||
TEST_F(StringSetTest, IntOnly) {
|
||||
constexpr size_t num_ints = 8192;
|
||||
unordered_set<unsigned int> numbers;
|
||||
for (size_t i = 0; i < num_ints; ++i) {
|
||||
numbers.insert(i);
|
||||
EXPECT_TRUE(ss_->Add(to_string(i)));
|
||||
}
|
||||
|
||||
for (size_t i = 0; i < num_ints; ++i) {
|
||||
EXPECT_FALSE(ss_->Add(to_string(i)));
|
||||
}
|
||||
|
||||
mt19937 generator(0);
|
||||
size_t num_remove = generator() % 4096;
|
||||
unordered_set<string> removed;
|
||||
|
||||
for (size_t i = 0; i < num_remove; ++i) {
|
||||
auto remove_int = generator() % num_ints;
|
||||
auto remove = to_string(remove_int);
|
||||
if (numbers.count(remove_int)) {
|
||||
ASSERT_TRUE(ss_->Contains(remove)) << remove_int;
|
||||
EXPECT_TRUE(ss_->Erase(remove));
|
||||
numbers.erase(remove_int);
|
||||
} else {
|
||||
EXPECT_FALSE(ss_->Erase(remove));
|
||||
}
|
||||
|
||||
EXPECT_FALSE(ss_->Contains(remove));
|
||||
removed.insert(remove);
|
||||
}
|
||||
|
||||
size_t expected_seen = 0;
|
||||
auto scan_callback = [&](const sds ptr) {
|
||||
string str{ptr, sdslen(ptr)};
|
||||
EXPECT_FALSE(removed.count(str));
|
||||
|
||||
if (numbers.count(atoi(str.data()))) {
|
||||
++expected_seen;
|
||||
}
|
||||
};
|
||||
|
||||
uint32_t cursor = 0;
|
||||
do {
|
||||
cursor = ss_->Scan(cursor, scan_callback);
|
||||
// randomly throw in some new numbers
|
||||
uint32_t val = generator();
|
||||
VLOG(1) << "Val " << val;
|
||||
ss_->Add(to_string(val));
|
||||
} while (cursor != 0);
|
||||
|
||||
EXPECT_GE(expected_seen + removed.size(), num_ints);
|
||||
}
|
||||
|
||||
TEST_F(StringSetTest, XtremeScanGrow) {
|
||||
unordered_set<string> to_see, force_grow, seen;
|
||||
|
||||
mt19937 generator(0);
|
||||
while (to_see.size() != 8) {
|
||||
to_see.insert(random_string(generator, 10));
|
||||
}
|
||||
|
||||
while (force_grow.size() != 8192) {
|
||||
string str = random_string(generator, 10);
|
||||
|
||||
if (to_see.count(str)) {
|
||||
continue;
|
||||
}
|
||||
|
||||
force_grow.insert(random_string(generator, 10));
|
||||
}
|
||||
|
||||
for (auto& str : to_see) {
|
||||
EXPECT_TRUE(ss_->Add(str));
|
||||
}
|
||||
|
||||
auto scan_callback = [&](const sds ptr) {
|
||||
sds s = (sds)ptr;
|
||||
string_view str{s, sdslen(s)};
|
||||
if (to_see.count(string(str))) {
|
||||
seen.insert(string(str));
|
||||
}
|
||||
};
|
||||
|
||||
uint32_t cursor = ss_->Scan(0, scan_callback);
|
||||
|
||||
// force approx 10 grows
|
||||
for (auto& s : force_grow) {
|
||||
EXPECT_TRUE(ss_->Add(s));
|
||||
}
|
||||
|
||||
while (cursor != 0) {
|
||||
cursor = ss_->Scan(cursor, scan_callback);
|
||||
}
|
||||
|
||||
EXPECT_EQ(seen.size(), to_see.size());
|
||||
}
|
||||
|
||||
TEST_F(StringSetTest, Pop) {
|
||||
constexpr size_t num_items = 8;
|
||||
unordered_set<string> to_insert;
|
||||
|
||||
mt19937 generator(0);
|
||||
|
||||
while (to_insert.size() != num_items) {
|
||||
auto str = random_string(generator, 10);
|
||||
if (to_insert.count(str)) {
|
||||
continue;
|
||||
}
|
||||
|
||||
to_insert.insert(str);
|
||||
EXPECT_TRUE(ss_->Add(str));
|
||||
}
|
||||
|
||||
while (!ss_->Empty()) {
|
||||
size_t size = ss_->Size();
|
||||
auto str = ss_->Pop();
|
||||
DCHECK(ss_->Size() == to_insert.size() - 1);
|
||||
DCHECK(str.has_value());
|
||||
DCHECK(to_insert.count(str.value()));
|
||||
DCHECK_EQ(ss_->Size(), size - 1);
|
||||
to_insert.erase(str.value());
|
||||
}
|
||||
|
||||
DCHECK(ss_->Empty());
|
||||
DCHECK(to_insert.empty());
|
||||
}
|
||||
|
||||
TEST_F(StringSetTest, Iteration) {
|
||||
constexpr size_t num_items = 8192;
|
||||
unordered_set<string> to_insert;
|
||||
|
||||
mt19937 generator(0);
|
||||
|
||||
while (to_insert.size() != num_items) {
|
||||
auto str = random_string(generator, 10);
|
||||
if (to_insert.count(str)) {
|
||||
continue;
|
||||
}
|
||||
|
||||
to_insert.insert(str);
|
||||
EXPECT_TRUE(ss_->Add(str));
|
||||
}
|
||||
|
||||
for (const sds ptr : *ss_) {
|
||||
string str{ptr, sdslen(ptr)};
|
||||
EXPECT_TRUE(to_insert.count(str));
|
||||
to_insert.erase(str);
|
||||
}
|
||||
|
||||
EXPECT_EQ(to_insert.size(), 0);
|
||||
}
|
||||
|
||||
TEST_F(StringSetTest, Ttl) {
|
||||
EXPECT_TRUE(ss_->Add("bla"sv, 1));
|
||||
EXPECT_FALSE(ss_->Add("bla"sv, 1));
|
||||
ss_->set_time(1);
|
||||
EXPECT_TRUE(ss_->Add("bla"sv, 1));
|
||||
EXPECT_EQ(1u, ss_->Size());
|
||||
|
||||
for (unsigned i = 0; i < 100; ++i) {
|
||||
EXPECT_TRUE(ss_->Add(StrCat("foo", i), 1));
|
||||
}
|
||||
EXPECT_EQ(101u, ss_->Size());
|
||||
|
||||
ss_->set_time(2);
|
||||
for (unsigned i = 0; i < 100; ++i) {
|
||||
EXPECT_TRUE(ss_->Add(StrCat("bar", i)));
|
||||
}
|
||||
|
||||
for (auto it = ss_->begin(); it != ss_->end(); ++it) {
|
||||
ASSERT_TRUE(absl::StartsWith(*it, "bar")) << *it;
|
||||
string str = *it;
|
||||
VLOG(1) << *it;
|
||||
}
|
||||
}
|
||||
|
||||
} // namespace dfly
|
||||
} // namespace dfly
|
|
@ -1,746 +0,0 @@
|
|||
[deleted test-data file: 746 sixteen-character hex identifiers (a2e24b073cf57fa4 ... a2eb31b03cdd054b); full listing omitted]
@ -1,4 +1,4 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// Copyright 2021, Roman Gershman. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
#include "core/tx_queue.h"
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// Copyright 2021, Roman Gershman. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
#pragma once
|
||||
|
|
|
@ -1,5 +1,5 @@
|
|||
add_library(dfly_facade dragonfly_listener.cc dragonfly_connection.cc facade.cc
|
||||
memcache_parser.cc redis_parser.cc reply_builder.cc op_status.cc)
|
||||
memcache_parser.cc redis_parser.cc reply_builder.cc)
|
||||
|
||||
if (DF_USE_SSL)
|
||||
set(TLS_LIB tls_lib)
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// Copyright 2021, Roman Gershman. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
|
||||
|
@ -17,8 +17,8 @@ class ConnectionContext {
|
|||
 public:
  ConnectionContext(::io::Sink* stream, Connection* owner);

  // We won't have any virtual methods, probably. However, since we allocate a derived class,
  // we need to declare a virtual d-tor, so we could properly delete it from Connection code.
  // We won't have any virtual methods, probably. However, since we allocate derived class,
  // we need to declare a virtual d-tor so we could delete them inside Connection.
  virtual ~ConnectionContext() {}

  Connection* owner() {
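The comment above states the usual C++ rule: deleting a derived object through a base-class pointer is only well defined when the base destructor is virtual. A minimal, generic illustration (the types here are placeholders, not the facade classes):

```cpp
#include <memory>

struct Base {
  virtual ~Base() = default;  // without `virtual`, deleting Derived via Base* is UB
};

struct Derived : Base {
  ~Derived() override { /* releases Derived-only resources */ }
};

int main() {
  std::unique_ptr<Base> p = std::make_unique<Derived>();
  return 0;  // ~Derived() runs here because ~Base() is virtual
}
```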
@ -51,6 +51,10 @@ class ConnectionContext {
|
|||
bool authenticated: 1;
|
||||
bool force_dispatch: 1; // whether we should route all requests to the dispatch fiber.
|
||||
|
||||
virtual void OnClose() {}
|
||||
|
||||
virtual std::string GetContextInfo() const { return std::string{}; }
|
||||
|
||||
private:
|
||||
Connection* owner_;
|
||||
Protocol protocol_ = Protocol::REDIS;
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// Copyright 2022, Roman Gershman. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
|
||||
|
@ -22,6 +22,8 @@
|
|||
#include "util/tls/tls_socket.h"
|
||||
#endif
|
||||
|
||||
#include "util/uring/uring_socket.h"
|
||||
|
||||
ABSL_FLAG(bool, tcp_nodelay, false,
|
||||
"Configures dragonfly connections with socket option TCP_NODELAY");
|
||||
ABSL_FLAG(bool, http_admin_console, true, "If true allows accessing http console on main TCP port");
|
||||
|
@ -146,7 +148,6 @@ Connection::Connection(Protocol protocol, util::HttpListenerBase* http_listener,
|
|||
Connection::~Connection() {
|
||||
}
|
||||
|
||||
// Called from Connection::Shutdown() right after socket_->Shutdown call.
|
||||
void Connection::OnShutdown() {
|
||||
VLOG(1) << "Connection::OnShutdown";
|
||||
|
||||
|
@ -157,26 +158,6 @@ void Connection::OnShutdown() {
|
|||
}
|
||||
}
|
||||
|
||||
void Connection::OnPreMigrateThread() {
|
||||
// If we migrating to another io_uring we should cancel any pending requests we have.
|
||||
if (break_poll_id_ != kuint32max) {
|
||||
auto* ls = static_cast<LinuxSocketBase*>(socket_.get());
|
||||
ls->CancelPoll(break_poll_id_);
|
||||
break_poll_id_ = kuint32max;
|
||||
}
|
||||
}
|
||||
|
||||
void Connection::OnPostMigrateThread() {
|
||||
// Once we migrated, we should rearm OnBreakCb callback.
|
||||
if (breaker_cb_) {
|
||||
DCHECK_EQ(kuint32max, break_poll_id_);
|
||||
|
||||
auto* ls = static_cast<LinuxSocketBase*>(socket_.get());
|
||||
break_poll_id_ =
|
||||
ls->PollEvent(POLLERR | POLLHUP, [this](int32_t mask) { this->OnBreakCb(mask); });
|
||||
}
|
||||
}
|
||||
|
||||
auto Connection::RegisterShutdownHook(ShutdownCb cb) -> ShutdownHandle {
|
||||
if (!shutdown_) {
|
||||
shutdown_ = make_unique<Shutdown>();
|
||||
|
@ -240,18 +221,30 @@ void Connection::HandleRequests() {
|
|||
} else {
|
||||
cc_.reset(service_->CreateContext(peer, this));
|
||||
|
||||
auto* us = static_cast<LinuxSocketBase*>(socket_.get());
|
||||
bool should_disarm_poller = false;
|
||||
// TODO: to move this interface to LinuxSocketBase so we won't need to cast.
|
||||
uring::UringSocket* us = static_cast<uring::UringSocket*>(socket_.get());
|
||||
uint32_t poll_id = 0;
|
||||
if (breaker_cb_) {
|
||||
break_poll_id_ =
|
||||
us->PollEvent(POLLERR | POLLHUP, [this](int32_t mask) { this->OnBreakCb(mask); });
|
||||
should_disarm_poller = true;
|
||||
|
||||
poll_id = us->PollEvent(POLLERR | POLLHUP, [&](int32_t mask) {
|
||||
cc_->conn_closing = true;
|
||||
if (mask > 0) {
|
||||
VLOG(1) << "Got event " << mask;
|
||||
breaker_cb_(mask);
|
||||
}
|
||||
|
||||
evc_.notify(); // Notify dispatch fiber.
|
||||
should_disarm_poller = false;
|
||||
});
|
||||
}
|
||||
|
||||
ConnectionFlow(peer);
|
||||
|
||||
if (break_poll_id_ != kuint32max) {
|
||||
us->CancelPoll(break_poll_id_);
|
||||
if (should_disarm_poller) {
|
||||
us->CancelPoll(poll_id);
|
||||
}
|
||||
|
||||
cc_.reset();
|
||||
}
|
||||
}
|
||||
|
@ -263,7 +256,8 @@ void Connection::RegisterOnBreak(BreakerCb breaker_cb) {
|
|||
breaker_cb_ = breaker_cb;
|
||||
}
|
||||
|
||||
void Connection::SendMsgVecAsync(const PubMessage& pub_msg, fibers_ext::BlockingCounter bc) {
|
||||
void Connection::SendMsgVecAsync(const PubMessage& pub_msg,
|
||||
fibers_ext::BlockingCounter bc) {
|
||||
DCHECK(cc_);
|
||||
|
||||
if (cc_->conn_closing) {
|
||||
|
@ -297,7 +291,7 @@ string Connection::GetClientInfo() const {
|
|||
absl::StrAppend(&res, " age=", now - creation_time_, " idle=", now - last_interaction_);
|
||||
absl::StrAppend(&res, " phase=", phase_, " ");
|
||||
if (cc_) {
|
||||
absl::StrAppend(&res, service_->GetContextInfo(cc_.get()));
|
||||
absl::StrAppend(&res, cc_->GetContextInfo());
|
||||
}
|
||||
|
||||
return res;
|
||||
|
@ -371,7 +365,7 @@ void Connection::ConnectionFlow(FiberSocketBase* peer) {
|
|||
VLOG(1) << "Before dispatch_fb.join()";
|
||||
dispatch_fb.join();
|
||||
VLOG(1) << "After dispatch_fb.join()";
|
||||
service_->OnClose(cc_.get());
|
||||
cc_->OnClose();
|
||||
|
||||
stats->read_buf_capacity -= io_buf_.Capacity();
|
||||
|
||||
|
@ -393,11 +387,10 @@ void Connection::ConnectionFlow(FiberSocketBase* peer) {
|
|||
error_code ec2 = peer->Write(::io::Buffer(sv));
|
||||
if (ec2) {
|
||||
LOG(WARNING) << "Error " << ec2;
|
||||
ec = ec2;
|
||||
ec = ec;
|
||||
}
|
||||
}
|
||||
error_code ec2 = peer->Shutdown(SHUT_RDWR);
|
||||
LOG_IF(WARNING, ec2) << "Could not shutdown socket " << ec2;
|
||||
peer->Shutdown(SHUT_RDWR);
|
||||
}
|
||||
|
||||
if (ec && !FiberSocketBase::IsConnClosed(ec)) {
|
||||
|
@ -515,19 +508,6 @@ auto Connection::ParseMemcache() -> ParserStatus {
|
|||
return OK;
|
||||
}
|
||||
|
||||
void Connection::OnBreakCb(int32_t mask) {
|
||||
if (mask <= 0)
|
||||
return; // we cancelled the poller, which means we do not need to break from anything.
|
||||
|
||||
VLOG(1) << "Got event " << mask;
|
||||
CHECK(cc_);
|
||||
cc_->conn_closing = true;
|
||||
break_poll_id_ = kuint32max; // do not attempt to cancel it.
|
||||
|
||||
breaker_cb_(mask);
|
||||
evc_.notify(); // Notify dispatch fiber.
|
||||
}
|
||||
|
||||
auto Connection::IoLoop(util::FiberSocketBase* peer) -> variant<error_code, ParserStatus> {
|
||||
SinkReplyBuilder* builder = cc_->reply_builder();
|
||||
ConnectionStats* stats = service_->GetThreadLocalConnectionStats();
|
||||
|
@ -696,15 +676,9 @@ auto Connection::FromArgs(RespVec args, mi_heap_t* heap) -> Request* {
|
|||
return req;
|
||||
}
|
||||
|
||||
void Connection::ShutdownSelf() {
|
||||
util::Connection::Shutdown();
|
||||
}
|
||||
|
||||
void RespToArgList(const RespVec& src, CmdArgVec* dest) {
|
||||
dest->resize(src.size());
|
||||
for (size_t i = 0; i < src.size(); ++i) {
|
||||
DCHECK(src[i].type == RespExpr::STRING);
|
||||
|
||||
(*dest)[i] = ToMSS(src[i].GetBuf());
|
||||
}
|
||||
}
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// Copyright 2022, Roman Gershman. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
|
||||
|
@ -72,12 +72,8 @@ class Connection : public util::Connection {
|
|||
std::string GetClientInfo() const;
|
||||
uint32 GetClientId() const;
|
||||
|
||||
void ShutdownSelf();
|
||||
|
||||
protected:
|
||||
void OnShutdown() override;
|
||||
void OnPreMigrateThread() override;
|
||||
void OnPostMigrateThread() override;
|
||||
|
||||
private:
|
||||
enum ParserStatus { OK, NEED_MORE, ERROR };
|
||||
|
@ -101,7 +97,6 @@ class Connection : public util::Connection {
|
|||
|
||||
ParserStatus ParseRedis();
|
||||
ParserStatus ParseMemcache();
|
||||
void OnBreakCb(int32_t mask);
|
||||
|
||||
base::IoBuf io_buf_;
|
||||
std::unique_ptr<RedisParser> redis_parser_;
|
||||
|
@ -128,7 +123,6 @@ class Connection : public util::Connection {
|
|||
|
||||
unsigned parser_error_ = 0;
|
||||
uint32_t id_;
|
||||
uint32_t break_poll_id_ = UINT32_MAX;
|
||||
|
||||
Protocol protocol_;
|
||||
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// Copyright 2022, Roman Gershman. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
|
||||
|
@ -22,8 +22,8 @@ ABSL_FLAG(bool, conn_use_incoming_cpu, false,
|
|||
"If true uses incoming cpu of a socket in order to distribute"
|
||||
" incoming connections");
|
||||
|
||||
ABSL_FLAG(string, tls_cert_file, "", "cert file for tls connections");
|
||||
ABSL_FLAG(string, tls_key_file, "", "key file for tls connections");
|
||||
ABSL_FLAG(string, tls_client_cert_file, "", "cert file for tls connections");
|
||||
ABSL_FLAG(string, tls_client_key_file, "", "key file for tls connections");
|
||||
|
||||
#if 0
|
||||
enum TlsClientAuth {
|
||||
|
@ -54,8 +54,8 @@ namespace {
|
|||
// To connect: openssl s_client -cipher "ADH:@SECLEVEL=0" -state -crlf -connect 127.0.0.1:6380
|
||||
static SSL_CTX* CreateSslCntx() {
|
||||
SSL_CTX* ctx = SSL_CTX_new(TLS_server_method());
|
||||
const auto& tls_key_file = GetFlag(FLAGS_tls_key_file);
|
||||
if (tls_key_file.empty()) {
|
||||
const auto& tls_client_key_file = GetFlag(FLAGS_tls_client_key_file);
|
||||
if (tls_client_key_file.empty()) {
|
||||
// To connect - use openssl s_client -cipher with either:
|
||||
// "AECDH:@SECLEVEL=0" or "ADH:@SECLEVEL=0" setting.
|
||||
CHECK_EQ(1, SSL_CTX_set_cipher_list(ctx, "aNULL"));
|
||||
|
@ -66,17 +66,17 @@ static SSL_CTX* CreateSslCntx() {
|
|||
// you can still connect with redis-cli with :
|
||||
// redis-cli --tls --insecure --tls-ciphers "ADH:@SECLEVEL=0"
|
||||
LOG(WARNING)
|
||||
<< "tls-key-file not set, no keys are loaded and anonymous ciphers are enabled. "
|
||||
<< "tls-client-key-file not set, no keys are loaded and anonymous ciphers are enabled. "
|
||||
<< "Do not use in production!";
|
||||
} else { // tls_key_file is set.
|
||||
CHECK_EQ(1, SSL_CTX_use_PrivateKey_file(ctx, tls_key_file.c_str(), SSL_FILETYPE_PEM));
|
||||
const auto& tls_cert_file = GetFlag(FLAGS_tls_cert_file);
|
||||
} else { // tls_client_key_file is set.
|
||||
CHECK_EQ(1, SSL_CTX_use_PrivateKey_file(ctx, tls_client_key_file.c_str(), SSL_FILETYPE_PEM));
|
||||
const auto& tls_client_cert_file = GetFlag(FLAGS_tls_client_cert_file);
|
||||
|
||||
if (!tls_cert_file.empty()) {
|
||||
// TO connect with redis-cli you need both tls-key-file and tls-cert-file
|
||||
if (!tls_client_cert_file.empty()) {
|
||||
// TO connect with redis-cli you need both tls-client-key-file and tls-client-cert-file
|
||||
// loaded. Use `redis-cli --tls -p 6380 --insecure PING` to test
|
||||
|
||||
CHECK_EQ(1, SSL_CTX_use_certificate_chain_file(ctx, tls_cert_file.c_str()));
|
||||
CHECK_EQ(1, SSL_CTX_use_certificate_chain_file(ctx, tls_client_cert_file.c_str()));
|
||||
}
|
||||
CHECK_EQ(1, SSL_CTX_set_cipher_list(ctx, "DEFAULT"));
|
||||
}
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// Copyright 2021, Roman Gershman. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// Copyright 2022, Roman Gershman. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// Copyright 2022, Roman Gershman. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// Copyright 2022, Roman Gershman. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// Copyright 2022, Roman Gershman. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// Copyright 2022, Roman Gershman. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// Copyright 2022, Roman Gershman. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
#include "facade/memcache_parser.h"
|
||||
|
@ -173,4 +173,4 @@ auto MP::Parse(string_view str, uint32_t* consumed, Command* cmd) -> Result {
|
|||
return ParseValueless(tokens + 1, num_tokens - 1, cmd);
|
||||
};
|
||||
|
||||
} // namespace dfly
|
||||
} // namespace dfly
|
|
@ -1,4 +1,4 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// Copyright 2021, Roman Gershman. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
|
||||
|
@ -75,4 +75,4 @@ class MemcacheParser {
|
|||
private:
|
||||
};
|
||||
|
||||
} // namespace dfly
|
||||
} // namespace dfly
|
|
@ -1,4 +1,4 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// Copyright 2021, Roman Gershman. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
|
||||
|
@ -78,4 +78,4 @@ TEST_F(MCParserTest, Stats) {
|
|||
EXPECT_EQ(MemcacheParser::PARSE_ERROR, st);
|
||||
}
|
||||
|
||||
} // namespace facade
|
||||
} // namespace facade
|
|
@ -1,4 +1,4 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// Copyright 2022, Roman Gershman. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
|
||||
|
|
|
@ -1,46 +0,0 @@
|
|||
#include "facade/op_status.h"
|
||||
|
||||
namespace facade {
|
||||
|
||||
const char* DebugString(OpStatus op) {
|
||||
switch (op) {
|
||||
case OpStatus::OK:
|
||||
return "OK";
|
||||
case OpStatus::KEY_EXISTS:
|
||||
return "KEY EXISTS";
|
||||
case OpStatus::KEY_NOTFOUND:
|
||||
return "KEY NOTFOUND";
|
||||
case OpStatus::SKIPPED:
|
||||
return "SKIPPED";
|
||||
case OpStatus::INVALID_VALUE:
|
||||
return "INVALID VALUE";
|
||||
case OpStatus::OUT_OF_RANGE:
|
||||
return "OUT OF RANGE";
|
||||
case OpStatus::WRONG_TYPE:
|
||||
return "WRONG TYPE";
|
||||
case OpStatus::TIMED_OUT:
|
||||
return "TIMED OUT";
|
||||
case OpStatus::OUT_OF_MEMORY:
|
||||
return "OUT OF MEMORY";
|
||||
case OpStatus::INVALID_FLOAT:
|
||||
return "INVALID FLOAT";
|
||||
case OpStatus::INVALID_INT:
|
||||
return "INVALID INT";
|
||||
case OpStatus::SYNTAX_ERR:
|
||||
return "INVALID SYNTAX";
|
||||
case OpStatus::BUSY_GROUP:
|
||||
return "BUSY GROUP";
|
||||
case OpStatus::STREAM_ID_SMALL:
|
||||
return "STREAM ID TO SMALL";
|
||||
case OpStatus::ENTRIES_ADDED_SMALL:
|
||||
return "ENTRIES ADDED IS TO SMALL";
|
||||
case OpStatus::INVALID_NUMERIC_RESULT:
|
||||
return "INVALID NUMERIC RESULT";
|
||||
}
|
||||
return "Unknown Error Code"; // we should not be here, but this is how enums works in c++
|
||||
}
|
||||
const char* OpResultBase::DebugFormat() const {
|
||||
return DebugString(st_);
|
||||
}
|
||||
|
||||
} // namespace facade
|
|
@ -1,4 +1,4 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// Copyright 2022, Roman Gershman. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
|
||||
|
@ -27,8 +27,6 @@ enum class OpStatus : uint16_t {
|
|||
INVALID_NUMERIC_RESULT,
|
||||
};
|
||||
|
||||
const char* DebugString(OpStatus op);
|
||||
|
||||
class OpResultBase {
|
||||
public:
|
||||
OpResultBase(OpStatus st = OpStatus::OK) : st_(st) {
|
||||
|
@ -50,8 +48,6 @@ class OpResultBase {
|
|||
return st_ == OpStatus::OK;
|
||||
}
|
||||
|
||||
const char* DebugFormat() const;
|
||||
|
||||
private:
|
||||
OpStatus st_;
|
||||
};
|
||||
|
@ -101,7 +97,7 @@ inline bool operator==(OpStatus st, const OpResultBase& ob) {
|
|||
namespace std {
|
||||
|
||||
template <typename T> std::ostream& operator<<(std::ostream& os, const facade::OpResult<T>& res) {
|
||||
os << res.status();
|
||||
os << int(res.status());
|
||||
return os;
|
||||
}
|
||||
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// Copyright 2021, Roman Gershman. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
#include "facade/redis_parser.h"
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// Copyright 2021, Roman Gershman. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
#pragma once
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// Copyright 2021, Roman Gershman. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// Copyright 2021, Roman Gershman. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
#include "facade/reply_builder.h"
|
||||
|
@ -299,10 +299,6 @@ void RedisReplyBuilder::SendNullArray() {
|
|||
  SendRaw("*-1\r\n");
}

void RedisReplyBuilder::SendEmptyArray() {
  StartArray(0);
}

void RedisReplyBuilder::SendStringArr(absl::Span<const std::string_view> arr) {
  if (arr.empty()) {
    SendRaw("*0\r\n");
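For context, the raw strings sent here follow the RESP wire format: `*-1\r\n` encodes a null array, `*0\r\n` an empty one, and a non-empty array is prefixed with its element count. A small hedged sketch of hand-encoding such a reply (this is the format only, not the reply-builder API):

```cpp
#include <string>
#include <string_view>
#include <vector>

// Encode a flat RESP array of bulk strings, e.g. {"foo", "bar"} becomes
// "*2\r\n$3\r\nfoo\r\n$3\r\nbar\r\n". "*-1\r\n" (null) and "*0\r\n" (empty)
// are the special cases the builder above emits directly.
std::string EncodeRespArray(const std::vector<std::string_view>& items) {
  std::string out = "*" + std::to_string(items.size()) + "\r\n";
  for (std::string_view s : items) {
    out += "$" + std::to_string(s.size()) + "\r\n";
    out.append(s);
    out += "\r\n";
  }
  return out;
}
```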
@ -1,4 +1,4 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// Copyright 2022, Roman Gershman. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
#include <absl/container/flat_hash_map.h>
|
||||
|
@ -128,10 +128,7 @@ class RedisReplyBuilder : public SinkReplyBuilder {
|
|||
void SendError(OpStatus status);
|
||||
|
||||
virtual void SendSimpleStrArr(const std::string_view* arr, uint32_t count);
|
||||
// Send *-1
|
||||
virtual void SendNullArray();
|
||||
// Send *0
|
||||
virtual void SendEmptyArray();
|
||||
|
||||
virtual void SendStringArr(absl::Span<const std::string_view> arr);
|
||||
virtual void SendStringArr(absl::Span<const std::string> arr);
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// Copyright 2021, Roman Gershman. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
|
||||
|
@ -56,4 +56,4 @@ namespace std {
|
|||
ostream& operator<<(ostream& os, const facade::RespExpr& e);
|
||||
ostream& operator<<(ostream& os, facade::RespSpan rspan);
|
||||
|
||||
} // namespace std
|
||||
} // namespace std
|
|
@ -1,4 +1,4 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// Copyright 2022, Roman Gershman. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
|
||||
|
@ -32,13 +32,6 @@ class ServiceInterface {
|
|||
|
||||
virtual void ConfigureHttpHandlers(util::HttpListenerBase* base) {
|
||||
}
|
||||
|
||||
virtual void OnClose(ConnectionContext* cntx) {
|
||||
}
|
||||
|
||||
virtual std::string GetContextInfo(ConnectionContext* cntx) {
|
||||
return {};
|
||||
}
|
||||
};
|
||||
|
||||
} // namespace facade
|
||||
|
|
|
@ -10,17 +10,13 @@ endif()
|
|||
|
||||
add_library(redis_lib crc64.c crcspeed.c debug.c dict.c intset.c
|
||||
listpack.c mt19937-64.c object.c lzf_c.c lzf_d.c sds.c
|
||||
quicklist.c rax.c redis_aux.c siphash.c t_hash.c t_stream.c t_zset.c
|
||||
quicklist.c rax.c redis_aux.c siphash.c t_hash.c t_stream.c t_zset.c
|
||||
util.c ziplist.c ${ZMALLOC_SRC})
|
||||
|
||||
cxx_link(redis_lib ${ZMALLOC_DEPS})
|
||||
|
||||
if("${CMAKE_CXX_COMPILER_ID}" STREQUAL "GNU")
|
||||
target_compile_options(redis_lib PRIVATE -Wno-maybe-uninitialized)
|
||||
endif()
|
||||
target_compile_options(redis_lib PRIVATE -Wno-maybe-uninitialized)
|
||||
|
||||
if (REDIS_ZMALLOC_MI)
|
||||
target_compile_definitions(redis_lib PUBLIC USE_ZMALLOC_MI)
|
||||
endif()
|
||||
|
||||
add_subdirectory(lua)
|
||||
|
|
|
@ -1,12 +0,0 @@
|
|||
add_library(lua_modules STATIC
|
||||
cjson/fpconv.c cjson/strbuf.c cjson/lua_cjson.c
|
||||
cmsgpack/lua_cmsgpack.c
|
||||
struct/lua_struct.c
|
||||
bit/bit.c
|
||||
)
|
||||
|
||||
target_compile_options(lua_modules PRIVATE
|
||||
-Wno-sign-compare -Wno-misleading-indentation -Wno-implicit-fallthrough -Wno-undefined-inline
|
||||
-Wno-stringop-overflow)
|
||||
|
||||
target_link_libraries(lua_modules TRDP::lua)
|
|
@ -1,3 +0,0 @@
|
|||
Since version 5.2 `luaL_register` is deprecated and removed. The new `luaL_newlib` function doesn't make the module globally available upon registration and is meant to be used with the `require` function.

To provide the modules globally, `luaL_newlib` is followed by a `lua_setglobal` for bit and struct.
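A minimal sketch of that registration pattern for the bit module (Lua 5.2+ C API; `bit_funcs` and `bit_tobit` refer to the table defined in bit.c below):

```cpp
#include "lua.h"
#include "lauxlib.h"

// luaL_newlib only leaves the module table on the stack; nothing becomes global
// by itself, so the module publishes the table with lua_setglobal as well.
int luaopen_bit(lua_State *L) {
  luaL_newlib(L, bit_funcs);   // create and push the module table
  lua_pushvalue(L, -1);        // duplicate it: one copy is returned to require()
  lua_setglobal(L, "bit");     // ...the other is exposed as the global `bit`
  return 1;
}
```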
|
@ -1,196 +0,0 @@
|
|||
/*
|
||||
** Lua BitOp -- a bit operations library for Lua 5.1/5.2.
|
||||
** http://bitop.luajit.org/
|
||||
**
|
||||
** Copyright (C) 2008-2012 Mike Pall. All rights reserved.
|
||||
**
|
||||
** Permission is hereby granted, free of charge, to any person obtaining
|
||||
** a copy of this software and associated documentation files (the
|
||||
** "Software"), to deal in the Software without restriction, including
|
||||
** without limitation the rights to use, copy, modify, merge, publish,
|
||||
** distribute, sublicense, and/or sell copies of the Software, and to
|
||||
** permit persons to whom the Software is furnished to do so, subject to
|
||||
** the following conditions:
|
||||
**
|
||||
** The above copyright notice and this permission notice shall be
|
||||
** included in all copies or substantial portions of the Software.
|
||||
**
|
||||
** THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
|
||||
** EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
|
||||
** MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
|
||||
** IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
|
||||
** CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
|
||||
** TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
|
||||
** SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
|
||||
**
|
||||
** [ MIT license: http://www.opensource.org/licenses/mit-license.php ]
|
||||
*/
|
||||
|
||||
#define LUA_BITOP_VERSION "1.0.3"
|
||||
|
||||
#define LUA_LIB
|
||||
#include "lua.h"
|
||||
#include "lauxlib.h"
|
||||
|
||||
#ifdef _MSC_VER
|
||||
/* MSVC is stuck in the last century and doesn't have C99's stdint.h. */
|
||||
typedef __int32 int32_t;
|
||||
typedef unsigned __int32 uint32_t;
|
||||
typedef unsigned __int64 uint64_t;
|
||||
#else
|
||||
#include <stdint.h>
|
||||
#endif
|
||||
|
||||
typedef int32_t SBits;
|
||||
typedef uint32_t UBits;
|
||||
|
||||
typedef union {
|
||||
lua_Number n;
|
||||
#if defined(LUA_NUMBER_DOUBLE) || defined(LUA_FLOAT_DOUBLE)
|
||||
uint64_t b;
|
||||
#else
|
||||
UBits b;
|
||||
#endif
|
||||
} BitNum;
|
||||
|
||||
/* Convert argument to bit type. */
|
||||
static UBits barg(lua_State *L, int idx)
|
||||
{
|
||||
BitNum bn;
|
||||
UBits b;
|
||||
#if LUA_VERSION_NUM < 502
|
||||
bn.n = lua_tonumber(L, idx);
|
||||
#else
|
||||
bn.n = luaL_checknumber(L, idx);
|
||||
#endif
|
||||
#if defined(LUA_NUMBER_DOUBLE) || defined(LUA_FLOAT_DOUBLE)
|
||||
bn.n += 6755399441055744.0; /* 2^52+2^51 */
|
||||
#ifdef SWAPPED_DOUBLE
|
||||
b = (UBits)(bn.b >> 32);
|
||||
#else
|
||||
b = (UBits)bn.b;
|
||||
#endif
|
||||
#elif defined(LUA_NUMBER_INT) || defined(LUA_INT_INT) || \
|
||||
defined(LUA_NUMBER_LONG) || defined(LUA_INT_LONG) || \
|
||||
defined(LUA_NUMBER_LONGLONG) || defined(LUA_INT_LONGLONG) || \
|
||||
defined(LUA_NUMBER_LONG_LONG) || defined(LUA_NUMBER_LLONG)
|
||||
if (sizeof(UBits) == sizeof(lua_Number))
|
||||
b = bn.b;
|
||||
else
|
||||
b = (UBits)(SBits)bn.n;
|
||||
#elif defined(LUA_NUMBER_FLOAT) || defined(LUA_FLOAT_FLOAT)
|
||||
#error "A 'float' lua_Number type is incompatible with this library"
|
||||
#else
|
||||
#error "Unknown number type, check LUA_NUMBER_*, LUA_FLOAT_*, LUA_INT_* in luaconf.h"
|
||||
#endif
|
||||
#if LUA_VERSION_NUM < 502
|
||||
if (b == 0 && !lua_isnumber(L, idx)) {
|
||||
luaL_typerror(L, idx, "number");
|
||||
}
|
||||
#endif
|
||||
return b;
|
||||
}
|
||||
|
||||
/* Return bit type. */
|
||||
#if LUA_VERSION_NUM < 503
|
||||
#define BRET(b) lua_pushnumber(L, (lua_Number)(SBits)(b)); return 1;
|
||||
#else
|
||||
#define BRET(b) lua_pushinteger(L, (lua_Integer)(SBits)(b)); return 1;
|
||||
#endif
|
||||
|
||||
static int bit_tobit(lua_State *L) { BRET(barg(L, 1)) }
|
||||
static int bit_bnot(lua_State *L) { BRET(~barg(L, 1)) }
|
||||
|
||||
#define BIT_OP(func, opr) \
|
||||
static int func(lua_State *L) { int i; UBits b = barg(L, 1); \
|
||||
for (i = lua_gettop(L); i > 1; i--) b opr barg(L, i); BRET(b) }
|
||||
BIT_OP(bit_band, &=)
|
||||
BIT_OP(bit_bor, |=)
|
||||
BIT_OP(bit_bxor, ^=)
|
||||
|
||||
#define bshl(b, n) (b << n)
|
||||
#define bshr(b, n) (b >> n)
|
||||
#define bsar(b, n) ((SBits)b >> n)
|
||||
#define brol(b, n) ((b << n) | (b >> (32-n)))
|
||||
#define bror(b, n) ((b << (32-n)) | (b >> n))
|
||||
#define BIT_SH(func, fn) \
|
||||
static int func(lua_State *L) { \
|
||||
UBits b = barg(L, 1); UBits n = barg(L, 2) & 31; BRET(fn(b, n)) }
|
||||
BIT_SH(bit_lshift, bshl)
|
||||
BIT_SH(bit_rshift, bshr)
|
||||
BIT_SH(bit_arshift, bsar)
|
||||
BIT_SH(bit_rol, brol)
|
||||
BIT_SH(bit_ror, bror)
|
||||
|
||||
static int bit_bswap(lua_State *L)
|
||||
{
|
||||
UBits b = barg(L, 1);
|
||||
b = (b >> 24) | ((b >> 8) & 0xff00) | ((b & 0xff00) << 8) | (b << 24);
|
||||
BRET(b)
|
||||
}
|
||||
|
||||
static int bit_tohex(lua_State *L)
|
||||
{
|
||||
UBits b = barg(L, 1);
|
||||
SBits n = lua_isnone(L, 2) ? 8 : (SBits)barg(L, 2);
|
||||
const char *hexdigits = "0123456789abcdef";
|
||||
char buf[8];
|
||||
int i;
|
||||
if (n < 0) { n = -n; hexdigits = "0123456789ABCDEF"; }
|
||||
if (n > 8) n = 8;
|
||||
for (i = (int)n; --i >= 0; ) { buf[i] = hexdigits[b & 15]; b >>= 4; }
|
||||
lua_pushlstring(L, buf, (size_t)n);
|
||||
return 1;
|
||||
}
|
||||
|
||||
static const struct luaL_Reg bit_funcs[] = {
|
||||
{ "tobit", bit_tobit },
|
||||
{ "bnot", bit_bnot },
|
||||
{ "band", bit_band },
|
||||
{ "bor", bit_bor },
|
||||
{ "bxor", bit_bxor },
|
||||
{ "lshift", bit_lshift },
|
||||
{ "rshift", bit_rshift },
|
||||
{ "arshift", bit_arshift },
|
||||
{ "rol", bit_rol },
|
||||
{ "ror", bit_ror },
|
||||
{ "bswap", bit_bswap },
|
||||
{ "tohex", bit_tohex },
|
||||
{ NULL, NULL }
|
||||
};
|
||||
|
||||
/* Signed right-shifts are implementation-defined per C89/C99.
|
||||
** But the de facto standard are arithmetic right-shifts on two's
|
||||
** complement CPUs. This behaviour is required here, so test for it.
|
||||
*/
|
||||
#define BAD_SAR (bsar(-8, 2) != (SBits)-2)
|
||||
|
||||
LUALIB_API int luaopen_bit(lua_State *L)
|
||||
{
|
||||
UBits b;
|
||||
#if LUA_VERSION_NUM < 503
|
||||
lua_pushnumber(L, (lua_Number)1437217655L);
|
||||
#else
|
||||
lua_pushinteger(L, (lua_Integer)1437217655L);
|
||||
#endif
|
||||
b = barg(L, -1);
|
||||
if (b != (UBits)1437217655L || BAD_SAR) { /* Perform a simple self-test. */
|
||||
const char *msg = "compiled with incompatible luaconf.h";
|
||||
#if defined(LUA_NUMBER_DOUBLE) || defined(LUA_FLOAT_DOUBLE)
|
||||
#ifdef _WIN32
|
||||
if (b == (UBits)1610612736L)
|
||||
msg = "use D3DCREATE_FPU_PRESERVE with DirectX";
|
||||
#endif
|
||||
if (b == (UBits)1127743488L)
|
||||
msg = "not compiled with SWAPPED_DOUBLE";
|
||||
#endif
|
||||
if (BAD_SAR)
|
||||
msg = "arithmetic right-shift broken";
|
||||
luaL_error(L, "bit library self-test failed (%s)", msg);
|
||||
}
|
||||
|
||||
luaL_newlib(L, bit_funcs);
|
||||
lua_setglobal(L, "bit");
|
||||
|
||||
return 1;
|
||||
}
|
|
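A side note on `barg()` above: with a double `lua_Number`, it truncates to 32 bits by adding 2^52+2^51 and reading the low word of the IEEE-754 bit pattern instead of casting. A minimal sketch of that trick, assuming IEEE-754 doubles on a little-endian machine (`to_u32` is an illustrative helper, not part of the library):

```c
#include <stdint.h>
#include <stdio.h>

/* Adding 2^52+2^51 forces the integer part of the value into the low
 * bits of the double's mantissa, so the low 32 bits of the bit pattern
 * hold the value truncated modulo 2^32 (little-endian assumed). */
static uint32_t to_u32(double n) {
    union { double d; uint64_t b; } u;
    u.d = n + 6755399441055744.0;   /* 2^52 + 2^51 */
    return (uint32_t)u.b;
}

int main(void) {
    printf("%u\n", (unsigned)to_u32(1437217655.0)); /* 1437217655 */
    printf("%u\n", (unsigned)to_u32(-1.0));         /* 4294967295 */
    return 0;
}
```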
@ -1,205 +0,0 @@
|
|||
/* fpconv - Floating point conversion routines
|
||||
*
|
||||
* Copyright (c) 2011-2012 Mark Pulford <mark@kyne.com.au>
|
||||
*
|
||||
* Permission is hereby granted, free of charge, to any person obtaining
|
||||
* a copy of this software and associated documentation files (the
|
||||
* "Software"), to deal in the Software without restriction, including
|
||||
* without limitation the rights to use, copy, modify, merge, publish,
|
||||
* distribute, sublicense, and/or sell copies of the Software, and to
|
||||
* permit persons to whom the Software is furnished to do so, subject to
|
||||
* the following conditions:
|
||||
*
|
||||
* The above copyright notice and this permission notice shall be
|
||||
* included in all copies or substantial portions of the Software.
|
||||
*
|
||||
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
|
||||
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
|
||||
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
|
||||
* IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
|
||||
* CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
|
||||
* TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
|
||||
* SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
|
||||
*/
|
||||
|
||||
/* JSON uses a '.' decimal separator. strtod() / sprintf() under C libraries
|
||||
* with locale support will break when the decimal separator is a comma.
|
||||
*
|
||||
* fpconv_* will work around these issues with a translation buffer if required.
|
||||
*/
|
||||
|
||||
#include <stdio.h>
|
||||
#include <stdlib.h>
|
||||
#include <assert.h>
|
||||
#include <string.h>
|
||||
|
||||
#include "fpconv.h"
|
||||
|
||||
/* Lua CJSON assumes the locale is the same for all threads within a
|
||||
* process and doesn't change after initialisation.
|
||||
*
|
||||
* This avoids the need for per thread storage or expensive checks
|
||||
* per call. */
|
||||
static char locale_decimal_point = '.';
|
||||
|
||||
/* In theory multibyte decimal_points are possible, but
|
||||
* Lua CJSON only supports UTF-8 and known locales only have
|
||||
* single byte decimal points ([.,]).
|
||||
*
|
||||
* localeconv() may not be thread safe (=>crash), and nl_langinfo() is
|
||||
* not supported on some platforms. Use sprintf() instead - if the
|
||||
* locale does change, at least Lua CJSON won't crash. */
|
||||
static void fpconv_update_locale()
|
||||
{
|
||||
char buf[8];
|
||||
|
||||
snprintf(buf, sizeof(buf), "%g", 0.5);
|
||||
|
||||
/* Failing this test might imply the platform has a buggy dtoa
|
||||
* implementation or wide characters */
|
||||
if (buf[0] != '0' || buf[2] != '5' || buf[3] != 0) {
|
||||
fprintf(stderr, "Error: wide characters found or printf() bug.");
|
||||
abort();
|
||||
}
|
||||
|
||||
locale_decimal_point = buf[1];
|
||||
}
|
||||
|
||||
/* Check for a valid number character: [-+0-9a-yA-Y.]
|
||||
* Eg: -0.6e+5, infinity, 0xF0.F0pF0
|
||||
*
|
||||
* Used to find the probable end of a number. It doesn't matter if
|
||||
* invalid characters are counted - strtod() will find the valid
|
||||
* number if it exists. The risk is that slightly more memory might
|
||||
* be allocated before a parse error occurs. */
|
||||
static inline int valid_number_character(char ch)
|
||||
{
|
||||
char lower_ch;
|
||||
|
||||
if ('0' <= ch && ch <= '9')
|
||||
return 1;
|
||||
if (ch == '-' || ch == '+' || ch == '.')
|
||||
return 1;
|
||||
|
||||
/* Hex digits, exponent (e), base (p), "infinity",.. */
|
||||
lower_ch = ch | 0x20;
|
||||
if ('a' <= lower_ch && lower_ch <= 'y')
|
||||
return 1;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
/* Calculate the size of the buffer required for a strtod locale
|
||||
* conversion. */
|
||||
static int strtod_buffer_size(const char *s)
|
||||
{
|
||||
const char *p = s;
|
||||
|
||||
while (valid_number_character(*p))
|
||||
p++;
|
||||
|
||||
return p - s;
|
||||
}
|
||||
|
||||
/* Similar to strtod(), but must be passed the current locale's decimal point
|
||||
* character. Guaranteed to be called at the start of any valid number in a string */
|
||||
double fpconv_strtod(const char *nptr, char **endptr)
|
||||
{
|
||||
char localbuf[FPCONV_G_FMT_BUFSIZE];
|
||||
char *buf, *endbuf, *dp;
|
||||
int buflen;
|
||||
double value;
|
||||
|
||||
/* System strtod() is fine when decimal point is '.' */
|
||||
if (locale_decimal_point == '.')
|
||||
return strtod(nptr, endptr);
|
||||
|
||||
buflen = strtod_buffer_size(nptr);
|
||||
if (!buflen) {
|
||||
/* No valid characters found, standard strtod() return */
|
||||
*endptr = (char *)nptr;
|
||||
return 0;
|
||||
}
|
||||
|
||||
/* Duplicate number into buffer */
|
||||
if (buflen >= FPCONV_G_FMT_BUFSIZE) {
|
||||
/* Handle unusually large numbers */
|
||||
buf = malloc(buflen + 1);
|
||||
if (!buf) {
|
||||
fprintf(stderr, "Out of memory");
|
||||
abort();
|
||||
}
|
||||
} else {
|
||||
/* This is the common case.. */
|
||||
buf = localbuf;
|
||||
}
|
||||
memcpy(buf, nptr, buflen);
|
||||
buf[buflen] = 0;
|
||||
|
||||
/* Update decimal point character if found */
|
||||
dp = strchr(buf, '.');
|
||||
if (dp)
|
||||
*dp = locale_decimal_point;
|
||||
|
||||
value = strtod(buf, &endbuf);
|
||||
*endptr = (char *)&nptr[endbuf - buf];
|
||||
if (buflen >= FPCONV_G_FMT_BUFSIZE)
|
||||
free(buf);
|
||||
|
||||
return value;
|
||||
}
|
||||
|
||||
/* "fmt" must point to a buffer of at least 6 characters */
|
||||
static void set_number_format(char *fmt, int precision)
|
||||
{
|
||||
int d1, d2, i;
|
||||
|
||||
assert(1 <= precision && precision <= 14);
|
||||
|
||||
/* Create printf format (%.14g) from precision */
|
||||
d1 = precision / 10;
|
||||
d2 = precision % 10;
|
||||
fmt[0] = '%';
|
||||
fmt[1] = '.';
|
||||
i = 2;
|
||||
if (d1) {
|
||||
fmt[i++] = '0' + d1;
|
||||
}
|
||||
fmt[i++] = '0' + d2;
|
||||
fmt[i++] = 'g';
|
||||
fmt[i] = 0;
|
||||
}
|
||||
|
||||
/* Assumes there is always at least 32 characters available in the target buffer */
|
||||
int fpconv_g_fmt(char *str, double num, int precision)
|
||||
{
|
||||
char buf[FPCONV_G_FMT_BUFSIZE];
|
||||
char fmt[6];
|
||||
int len;
|
||||
char *b;
|
||||
|
||||
set_number_format(fmt, precision);
|
||||
|
||||
/* Pass through when decimal point character is dot. */
|
||||
if (locale_decimal_point == '.')
|
||||
return snprintf(str, FPCONV_G_FMT_BUFSIZE, fmt, num);
|
||||
|
||||
/* snprintf() to a buffer then translate for other decimal point characters */
|
||||
len = snprintf(buf, FPCONV_G_FMT_BUFSIZE, fmt, num);
|
||||
|
||||
/* Copy into target location. Translate decimal point if required */
|
||||
b = buf;
|
||||
do {
|
||||
*str++ = (*b == locale_decimal_point ? '.' : *b);
|
||||
} while(*b++);
|
||||
|
||||
return len;
|
||||
}
|
||||
|
||||
void fpconv_init()
|
||||
{
|
||||
fpconv_update_locale();
|
||||
}
|
||||
|
||||
/* vi:ai et sw=4 ts=4:
|
||||
*/
|
|
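The comma-decimal problem that fpconv works around is easy to reproduce; a small sketch, assuming a system where the `de_DE.UTF-8` locale (or any locale with a comma decimal point) is installed — the locale name is an assumption, not something the code above depends on:

```c
#include <locale.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* With a comma-decimal locale active, plain strtod() stops at the '.' */
    setlocale(LC_NUMERIC, "de_DE.UTF-8");
    char *end;
    double broken = strtod("3.14", &end);   /* 3.0, end points at "." */

    /* fpconv_strtod() copies the number into a buffer and swaps '.' for
     * the locale's decimal point before calling strtod(); done by hand: */
    char buf[] = "3.14";
    buf[1] = *localeconv()->decimal_point;  /* ',' under this locale */
    double fixed = strtod(buf, &end);       /* 3.14 */

    printf("broken=%.2f fixed=%.2f\n", broken, fixed); /* %f also prints ',' now */
    return 0;
}
```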
@ -1,22 +0,0 @@
|
|||
/* Lua CJSON floating point conversion routines */
|
||||
|
||||
/* Buffer required to store the largest string representation of a double.
|
||||
*
|
||||
* Longest double printed with %.14g is 21 characters long:
|
||||
* -1.7976931348623e+308 */
|
||||
# define FPCONV_G_FMT_BUFSIZE 32
|
||||
|
||||
#ifdef USE_INTERNAL_FPCONV
|
||||
static inline void fpconv_init()
|
||||
{
|
||||
/* Do nothing - not required */
|
||||
}
|
||||
#else
|
||||
extern void fpconv_init();
|
||||
#endif
|
||||
|
||||
extern int fpconv_g_fmt(char*, double, int);
|
||||
extern double fpconv_strtod(const char*, char**);
|
||||
|
||||
/* vi:ai et sw=4 ts=4:
|
||||
*/
|
File diff suppressed because it is too large
|
@ -1,251 +0,0 @@
|
|||
/* strbuf - String buffer routines
|
||||
*
|
||||
* Copyright (c) 2010-2012 Mark Pulford <mark@kyne.com.au>
|
||||
*
|
||||
* Permission is hereby granted, free of charge, to any person obtaining
|
||||
* a copy of this software and associated documentation files (the
|
||||
* "Software"), to deal in the Software without restriction, including
|
||||
* without limitation the rights to use, copy, modify, merge, publish,
|
||||
* distribute, sublicense, and/or sell copies of the Software, and to
|
||||
* permit persons to whom the Software is furnished to do so, subject to
|
||||
* the following conditions:
|
||||
*
|
||||
* The above copyright notice and this permission notice shall be
|
||||
* included in all copies or substantial portions of the Software.
|
||||
*
|
||||
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
|
||||
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
|
||||
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
|
||||
* IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
|
||||
* CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
|
||||
* TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
|
||||
* SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
|
||||
*/
|
||||
|
||||
#include <stdio.h>
|
||||
#include <stdlib.h>
|
||||
#include <stdarg.h>
|
||||
#include <string.h>
|
||||
|
||||
#include "strbuf.h"
|
||||
|
||||
static void die(const char *fmt, ...)
|
||||
{
|
||||
va_list arg;
|
||||
|
||||
va_start(arg, fmt);
|
||||
vfprintf(stderr, fmt, arg);
|
||||
va_end(arg);
|
||||
fprintf(stderr, "\n");
|
||||
|
||||
exit(-1);
|
||||
}
|
||||
|
||||
void strbuf_init(strbuf_t *s, int len)
|
||||
{
|
||||
int size;
|
||||
|
||||
if (len <= 0)
|
||||
size = STRBUF_DEFAULT_SIZE;
|
||||
else
|
||||
size = len + 1; /* \0 terminator */
|
||||
|
||||
s->buf = NULL;
|
||||
s->size = size;
|
||||
s->length = 0;
|
||||
s->increment = STRBUF_DEFAULT_INCREMENT;
|
||||
s->dynamic = 0;
|
||||
s->reallocs = 0;
|
||||
s->debug = 0;
|
||||
|
||||
s->buf = malloc(size);
|
||||
if (!s->buf)
|
||||
die("Out of memory");
|
||||
|
||||
strbuf_ensure_null(s);
|
||||
}
|
||||
|
||||
strbuf_t *strbuf_new(int len)
|
||||
{
|
||||
strbuf_t *s;
|
||||
|
||||
s = malloc(sizeof(strbuf_t));
|
||||
if (!s)
|
||||
die("Out of memory");
|
||||
|
||||
strbuf_init(s, len);
|
||||
|
||||
/* Dynamic strbuf allocation / deallocation */
|
||||
s->dynamic = 1;
|
||||
|
||||
return s;
|
||||
}
|
||||
|
||||
void strbuf_set_increment(strbuf_t *s, int increment)
|
||||
{
|
||||
/* Increment > 0: Linear buffer growth rate
|
||||
* Increment < -1: Exponential buffer growth rate */
|
||||
if (increment == 0 || increment == -1)
|
||||
die("BUG: Invalid string increment");
|
||||
|
||||
s->increment = increment;
|
||||
}
|
||||
|
||||
static inline void debug_stats(strbuf_t *s)
|
||||
{
|
||||
if (s->debug) {
|
||||
fprintf(stderr, "strbuf(%lx) reallocs: %d, length: %d, size: %d\n",
|
||||
(long)s, s->reallocs, s->length, s->size);
|
||||
}
|
||||
}
|
||||
|
||||
/* If strbuf_t has not been dynamically allocated, strbuf_free() can
|
||||
* be called any number of times after strbuf_init() */
|
||||
void strbuf_free(strbuf_t *s)
|
||||
{
|
||||
debug_stats(s);
|
||||
|
||||
if (s->buf) {
|
||||
free(s->buf);
|
||||
s->buf = NULL;
|
||||
}
|
||||
if (s->dynamic)
|
||||
free(s);
|
||||
}
|
||||
|
||||
char *strbuf_free_to_string(strbuf_t *s, int *len)
|
||||
{
|
||||
char *buf;
|
||||
|
||||
debug_stats(s);
|
||||
|
||||
strbuf_ensure_null(s);
|
||||
|
||||
buf = s->buf;
|
||||
if (len)
|
||||
*len = s->length;
|
||||
|
||||
if (s->dynamic)
|
||||
free(s);
|
||||
|
||||
return buf;
|
||||
}
|
||||
|
||||
static int calculate_new_size(strbuf_t *s, int len)
|
||||
{
|
||||
int reqsize, newsize;
|
||||
|
||||
if (len <= 0)
|
||||
die("BUG: Invalid strbuf length requested");
|
||||
|
||||
/* Ensure there is room for optional NULL termination */
|
||||
reqsize = len + 1;
|
||||
|
||||
/* If the user has requested to shrink the buffer, do it exactly */
|
||||
if (s->size > reqsize)
|
||||
return reqsize;
|
||||
|
||||
newsize = s->size;
|
||||
if (s->increment < 0) {
|
||||
/* Exponential sizing */
|
||||
while (newsize < reqsize)
|
||||
newsize *= -s->increment;
|
||||
} else {
|
||||
/* Linear sizing */
|
||||
newsize = ((newsize + s->increment - 1) / s->increment) * s->increment;
|
||||
}
|
||||
|
||||
return newsize;
|
||||
}
|
||||
|
||||
|
||||
/* Ensure strbuf can handle a string length bytes long (ignoring NULL
|
||||
* optional termination). */
|
||||
void strbuf_resize(strbuf_t *s, int len)
|
||||
{
|
||||
int newsize;
|
||||
|
||||
newsize = calculate_new_size(s, len);
|
||||
|
||||
if (s->debug > 1) {
|
||||
fprintf(stderr, "strbuf(%lx) resize: %d => %d\n",
|
||||
(long)s, s->size, newsize);
|
||||
}
|
||||
|
||||
s->size = newsize;
|
||||
s->buf = realloc(s->buf, s->size);
|
||||
if (!s->buf)
|
||||
die("Out of memory");
|
||||
s->reallocs++;
|
||||
}
|
||||
|
||||
void strbuf_append_string(strbuf_t *s, const char *str)
|
||||
{
|
||||
int space, i;
|
||||
|
||||
space = strbuf_empty_length(s);
|
||||
|
||||
for (i = 0; str[i]; i++) {
|
||||
if (space < 1) {
|
||||
strbuf_resize(s, s->length + 1);
|
||||
space = strbuf_empty_length(s);
|
||||
}
|
||||
|
||||
s->buf[s->length] = str[i];
|
||||
s->length++;
|
||||
space--;
|
||||
}
|
||||
}
|
||||
|
||||
/* strbuf_append_fmt() should only be used when an upper bound
|
||||
* is known for the output string. */
|
||||
void strbuf_append_fmt(strbuf_t *s, int len, const char *fmt, ...)
|
||||
{
|
||||
va_list arg;
|
||||
int fmt_len;
|
||||
|
||||
strbuf_ensure_empty_length(s, len);
|
||||
|
||||
va_start(arg, fmt);
|
||||
fmt_len = vsnprintf(s->buf + s->length, len, fmt, arg);
|
||||
va_end(arg);
|
||||
|
||||
if (fmt_len < 0)
|
||||
die("BUG: Unable to convert number"); /* This should never happen.. */
|
||||
|
||||
s->length += fmt_len;
|
||||
}
|
||||
|
||||
/* strbuf_append_fmt_retry() can be used when there is no known
|
||||
* upper bound for the output string. */
|
||||
void strbuf_append_fmt_retry(strbuf_t *s, const char *fmt, ...)
|
||||
{
|
||||
va_list arg;
|
||||
int fmt_len, try;
|
||||
int empty_len;
|
||||
|
||||
/* If the first attempt to append fails, resize the buffer appropriately
|
||||
* and try again */
|
||||
for (try = 0; ; try++) {
|
||||
va_start(arg, fmt);
|
||||
/* Append the new formatted string */
|
||||
/* fmt_len is the length of the string required, excluding the
|
||||
* trailing NULL */
|
||||
empty_len = strbuf_empty_length(s);
|
||||
/* Add 1 since there is also space to store the terminating NULL. */
|
||||
fmt_len = vsnprintf(s->buf + s->length, empty_len + 1, fmt, arg);
|
||||
va_end(arg);
|
||||
|
||||
if (fmt_len <= empty_len)
|
||||
break; /* SUCCESS */
|
||||
if (try > 0)
|
||||
die("BUG: length of formatted string changed");
|
||||
|
||||
strbuf_resize(s, s->length + fmt_len);
|
||||
}
|
||||
|
||||
s->length += fmt_len;
|
||||
}
|
||||
|
||||
/* vi:ai et sw=4 ts=4:
|
||||
*/
|
|
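A minimal usage sketch of the strbuf API removed above, compiled together with strbuf.c/strbuf.h; per the comment in `strbuf_set_increment()`, a negative increment selects doubling growth, so appends stay amortised O(1):

```c
#include <stdio.h>
#include <stdlib.h>
#include "strbuf.h"

int main(void) {
    strbuf_t *s = strbuf_new(0);      /* 0 => STRBUF_DEFAULT_SIZE */
    strbuf_set_increment(s, -2);      /* negative => exponential (doubling) growth */

    strbuf_append_string(s, "hello, ");
    strbuf_append_fmt(s, 32, "%d bytes so far", strbuf_length(s));

    int len;
    char *out = strbuf_free_to_string(s, &len);  /* frees s, caller owns out */
    printf("%.*s\n", len, out);
    free(out);
    return 0;
}
```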
@ -1,154 +0,0 @@
|
|||
/* strbuf - String buffer routines
|
||||
*
|
||||
* Copyright (c) 2010-2012 Mark Pulford <mark@kyne.com.au>
|
||||
*
|
||||
* Permission is hereby granted, free of charge, to any person obtaining
|
||||
* a copy of this software and associated documentation files (the
|
||||
* "Software"), to deal in the Software without restriction, including
|
||||
* without limitation the rights to use, copy, modify, merge, publish,
|
||||
* distribute, sublicense, and/or sell copies of the Software, and to
|
||||
* permit persons to whom the Software is furnished to do so, subject to
|
||||
* the following conditions:
|
||||
*
|
||||
* The above copyright notice and this permission notice shall be
|
||||
* included in all copies or substantial portions of the Software.
|
||||
*
|
||||
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
|
||||
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
|
||||
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
|
||||
* IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
|
||||
* CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
|
||||
* TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
|
||||
* SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
|
||||
*/
|
||||
|
||||
#include <stdlib.h>
|
||||
#include <stdarg.h>
|
||||
|
||||
/* Size: Total bytes allocated to *buf
|
||||
* Length: String length, excluding optional NULL terminator.
|
||||
* Increment: Allocation increments when resizing the string buffer.
|
||||
* Dynamic: True if created via strbuf_new()
|
||||
*/
|
||||
|
||||
typedef struct {
|
||||
char *buf;
|
||||
int size;
|
||||
int length;
|
||||
int increment;
|
||||
int dynamic;
|
||||
int reallocs;
|
||||
int debug;
|
||||
} strbuf_t;
|
||||
|
||||
#ifndef STRBUF_DEFAULT_SIZE
|
||||
#define STRBUF_DEFAULT_SIZE 1023
|
||||
#endif
|
||||
#ifndef STRBUF_DEFAULT_INCREMENT
|
||||
#define STRBUF_DEFAULT_INCREMENT -2
|
||||
#endif
|
||||
|
||||
/* Initialise */
|
||||
extern strbuf_t *strbuf_new(int len);
|
||||
extern void strbuf_init(strbuf_t *s, int len);
|
||||
extern void strbuf_set_increment(strbuf_t *s, int increment);
|
||||
|
||||
/* Release */
|
||||
extern void strbuf_free(strbuf_t *s);
|
||||
extern char *strbuf_free_to_string(strbuf_t *s, int *len);
|
||||
|
||||
/* Management */
|
||||
extern void strbuf_resize(strbuf_t *s, int len);
|
||||
static int strbuf_empty_length(strbuf_t *s);
|
||||
static int strbuf_length(strbuf_t *s);
|
||||
static char *strbuf_string(strbuf_t *s, int *len);
|
||||
static void strbuf_ensure_empty_length(strbuf_t *s, int len);
|
||||
static char *strbuf_empty_ptr(strbuf_t *s);
|
||||
static void strbuf_extend_length(strbuf_t *s, int len);
|
||||
|
||||
/* Update */
|
||||
extern void strbuf_append_fmt(strbuf_t *s, int len, const char *fmt, ...);
|
||||
extern void strbuf_append_fmt_retry(strbuf_t *s, const char *format, ...);
|
||||
static void strbuf_append_mem(strbuf_t *s, const char *c, int len);
|
||||
extern void strbuf_append_string(strbuf_t *s, const char *str);
|
||||
static void strbuf_append_char(strbuf_t *s, const char c);
|
||||
static void strbuf_ensure_null(strbuf_t *s);
|
||||
|
||||
/* Reset string before use */
|
||||
static inline void strbuf_reset(strbuf_t *s)
|
||||
{
|
||||
s->length = 0;
|
||||
}
|
||||
|
||||
static inline int strbuf_allocated(strbuf_t *s)
|
||||
{
|
||||
return s->buf != NULL;
|
||||
}
|
||||
|
||||
/* Return bytes remaining in the string buffer
|
||||
* Ensure there is space for a NULL terminator. */
|
||||
static inline int strbuf_empty_length(strbuf_t *s)
|
||||
{
|
||||
return s->size - s->length - 1;
|
||||
}
|
||||
|
||||
static inline void strbuf_ensure_empty_length(strbuf_t *s, int len)
|
||||
{
|
||||
if (len > strbuf_empty_length(s))
|
||||
strbuf_resize(s, s->length + len);
|
||||
}
|
||||
|
||||
static inline char *strbuf_empty_ptr(strbuf_t *s)
|
||||
{
|
||||
return s->buf + s->length;
|
||||
}
|
||||
|
||||
static inline void strbuf_extend_length(strbuf_t *s, int len)
|
||||
{
|
||||
s->length += len;
|
||||
}
|
||||
|
||||
static inline int strbuf_length(strbuf_t *s)
|
||||
{
|
||||
return s->length;
|
||||
}
|
||||
|
||||
static inline void strbuf_append_char(strbuf_t *s, const char c)
|
||||
{
|
||||
strbuf_ensure_empty_length(s, 1);
|
||||
s->buf[s->length++] = c;
|
||||
}
|
||||
|
||||
static inline void strbuf_append_char_unsafe(strbuf_t *s, const char c)
|
||||
{
|
||||
s->buf[s->length++] = c;
|
||||
}
|
||||
|
||||
static inline void strbuf_append_mem(strbuf_t *s, const char *c, int len)
|
||||
{
|
||||
strbuf_ensure_empty_length(s, len);
|
||||
memcpy(s->buf + s->length, c, len);
|
||||
s->length += len;
|
||||
}
|
||||
|
||||
static inline void strbuf_append_mem_unsafe(strbuf_t *s, const char *c, int len)
|
||||
{
|
||||
memcpy(s->buf + s->length, c, len);
|
||||
s->length += len;
|
||||
}
|
||||
|
||||
static inline void strbuf_ensure_null(strbuf_t *s)
|
||||
{
|
||||
s->buf[s->length] = 0;
|
||||
}
|
||||
|
||||
static inline char *strbuf_string(strbuf_t *s, int *len)
|
||||
{
|
||||
if (len)
|
||||
*len = s->length;
|
||||
|
||||
return s->buf;
|
||||
}
|
||||
|
||||
/* vi:ai et sw=4 ts=4:
|
||||
*/
|
|
@ -1,974 +0,0 @@
|
|||
#include <math.h>
|
||||
#include <stdlib.h>
|
||||
#include <stdint.h>
|
||||
#include <string.h>
|
||||
#include <assert.h>
|
||||
|
||||
#include "lua.h"
|
||||
#include "lauxlib.h"
|
||||
|
||||
#define LUACMSGPACK_NAME "cmsgpack"
|
||||
#define LUACMSGPACK_SAFE_NAME "cmsgpack_safe"
|
||||
#define LUACMSGPACK_VERSION "lua-cmsgpack 0.4.0"
|
||||
#define LUACMSGPACK_COPYRIGHT "Copyright (C) 2012, Salvatore Sanfilippo"
|
||||
#define LUACMSGPACK_DESCRIPTION "MessagePack C implementation for Lua"
|
||||
|
||||
/* Allows a preprocessor directive to override MAX_NESTING */
|
||||
#ifndef LUACMSGPACK_MAX_NESTING
|
||||
#define LUACMSGPACK_MAX_NESTING 16 /* Max tables nesting. */
|
||||
#endif
|
||||
|
||||
/* Check if float or double can be an integer without loss of precision */
|
||||
#define IS_INT_TYPE_EQUIVALENT(x, T) (!isinf(x) && (T)(x) == (x))
|
||||
|
||||
#define IS_INT64_EQUIVALENT(x) IS_INT_TYPE_EQUIVALENT(x, int64_t)
|
||||
#define IS_INT_EQUIVALENT(x) IS_INT_TYPE_EQUIVALENT(x, int)
|
||||
|
||||
/* If size of pointer is equal to a 4 byte integer, we're on 32 bits. */
|
||||
#if UINTPTR_MAX == UINT_MAX
|
||||
#define BITS_32 1
|
||||
#else
|
||||
#define BITS_32 0
|
||||
#endif
|
||||
|
||||
#if BITS_32
|
||||
#define lua_pushunsigned(L, n) lua_pushnumber(L, n)
|
||||
#else
|
||||
#define lua_pushunsigned(L, n) lua_pushinteger(L, n)
|
||||
#endif
|
||||
|
||||
/* =============================================================================
|
||||
* MessagePack implementation and bindings for Lua 5.1/5.2.
|
||||
* Copyright(C) 2012 Salvatore Sanfilippo <antirez@gmail.com>
|
||||
*
|
||||
* http://github.com/antirez/lua-cmsgpack
|
||||
*
|
||||
* For MessagePack specification check the following web site:
|
||||
* http://wiki.msgpack.org/display/MSGPACK/Format+specification
|
||||
*
|
||||
* See Copyright Notice at the end of this file.
|
||||
*
|
||||
* CHANGELOG:
|
||||
* 19-Feb-2012 (ver 0.1.0): Initial release.
|
||||
* 20-Feb-2012 (ver 0.2.0): Tables encoding improved.
|
||||
* 20-Feb-2012 (ver 0.2.1): Minor bug fixing.
|
||||
* 20-Feb-2012 (ver 0.3.0): Module renamed lua-cmsgpack (was lua-msgpack).
|
||||
* 04-Apr-2014 (ver 0.3.1): Lua 5.2 support and minor bug fix.
|
||||
* 07-Apr-2014 (ver 0.4.0): Multiple pack/unpack, lua allocator, efficiency.
|
||||
* ========================================================================== */
|
||||
|
||||
/* -------------------------- Endian conversion --------------------------------
|
||||
* We use it only for floats and doubles; all the other conversions are performed
|
||||
* in an endian independent fashion. So the only thing we need is a function
|
||||
* that swaps a binary string if arch is little endian (and left it untouched
|
||||
* otherwise). */
|
||||
|
||||
/* Reverse memory bytes if arch is little endian. Given the conceptual
|
||||
* simplicity of the Lua build system we prefer to check for endianness at runtime.
|
||||
* The performance difference should be acceptable. */
|
||||
void memrevifle(void *ptr, size_t len) {
|
||||
unsigned char *p = (unsigned char *)ptr,
|
||||
*e = (unsigned char *)p+len-1,
|
||||
aux;
|
||||
int test = 1;
|
||||
unsigned char *testp = (unsigned char*) &test;
|
||||
|
||||
if (testp[0] == 0) return; /* Big endian, nothing to do. */
|
||||
len /= 2;
|
||||
while(len--) {
|
||||
aux = *p;
|
||||
*p = *e;
|
||||
*e = aux;
|
||||
p++;
|
||||
e--;
|
||||
}
|
||||
}
|
||||
|
||||
/* ---------------------------- String buffer ----------------------------------
|
||||
* This is a simple implementation of string buffers. The only operation
|
||||
* supported is creating empty buffers and appending bytes to it.
|
||||
* The string buffer uses 2x preallocation on every realloc for O(N) append
|
||||
* behavior. */
|
||||
|
||||
typedef struct mp_buf {
|
||||
unsigned char *b;
|
||||
size_t len, free;
|
||||
} mp_buf;
|
||||
|
||||
void *mp_realloc(lua_State *L, void *target, size_t osize,size_t nsize) {
|
||||
void *(*local_realloc) (void *, void *, size_t osize, size_t nsize) = NULL;
|
||||
void *ud;
|
||||
|
||||
local_realloc = lua_getallocf(L, &ud);
|
||||
|
||||
return local_realloc(ud, target, osize, nsize);
|
||||
}
|
||||
|
||||
mp_buf *mp_buf_new(lua_State *L) {
|
||||
mp_buf *buf = NULL;
|
||||
|
||||
/* Old size = 0; new size = sizeof(*buf) */
|
||||
buf = (mp_buf*)mp_realloc(L, NULL, 0, sizeof(*buf));
|
||||
|
||||
buf->b = NULL;
|
||||
buf->len = buf->free = 0;
|
||||
return buf;
|
||||
}
|
||||
|
||||
void mp_buf_append(lua_State *L, mp_buf *buf, const unsigned char *s, size_t len) {
|
||||
if (buf->free < len) {
|
||||
size_t newsize = (buf->len+len)*2;
|
||||
|
||||
buf->b = (unsigned char*)mp_realloc(L, buf->b, buf->len + buf->free, newsize);
|
||||
buf->free = newsize - buf->len;
|
||||
}
|
||||
memcpy(buf->b+buf->len,s,len);
|
||||
buf->len += len;
|
||||
buf->free -= len;
|
||||
}
|
||||
|
||||
void mp_buf_free(lua_State *L, mp_buf *buf) {
|
||||
mp_realloc(L, buf->b, buf->len + buf->free, 0); /* realloc to 0 = free */
|
||||
mp_realloc(L, buf, sizeof(*buf), 0);
|
||||
}
|
||||
|
||||
/* ---------------------------- String cursor ----------------------------------
|
||||
* This simple data structure is used for parsing. Basically you create a cursor
|
||||
* using a string pointer and a length, then it is possible to access the
|
||||
* current string position with cursor->p, check the remaining length
|
||||
* in cursor->left, and finally consume more string using
|
||||
* mp_cur_consume(cursor,len), to advance 'p' and subtract 'left'.
|
||||
* An additional field cursor->error is set to zero on initialization and can
|
||||
* be used to report errors. */
|
||||
|
||||
#define MP_CUR_ERROR_NONE 0
|
||||
#define MP_CUR_ERROR_EOF 1 /* Not enough data to complete operation. */
|
||||
#define MP_CUR_ERROR_BADFMT 2 /* Bad data format */
|
||||
|
||||
typedef struct mp_cur {
|
||||
const unsigned char *p;
|
||||
size_t left;
|
||||
int err;
|
||||
} mp_cur;
|
||||
|
||||
void mp_cur_init(mp_cur *cursor, const unsigned char *s, size_t len) {
|
||||
cursor->p = s;
|
||||
cursor->left = len;
|
||||
cursor->err = MP_CUR_ERROR_NONE;
|
||||
}
|
||||
|
||||
#define mp_cur_consume(_c,_len) do { _c->p += _len; _c->left -= _len; } while(0)
|
||||
|
||||
/* When there is not enough room we set an error in the cursor and return. This
|
||||
* is very common across the code so we have a macro to make the code look
|
||||
* a bit simpler. */
|
||||
#define mp_cur_need(_c,_len) do { \
|
||||
if (_c->left < _len) { \
|
||||
_c->err = MP_CUR_ERROR_EOF; \
|
||||
return; \
|
||||
} \
|
||||
} while(0)
|
||||
|
||||
/* ------------------------- Low level MP encoding -------------------------- */
|
||||
|
||||
void mp_encode_bytes(lua_State *L, mp_buf *buf, const unsigned char *s, size_t len) {
|
||||
unsigned char hdr[5];
|
||||
int hdrlen;
|
||||
|
||||
if (len < 32) {
|
||||
hdr[0] = 0xa0 | (len&0xff); /* fix raw */
|
||||
hdrlen = 1;
|
||||
} else if (len <= 0xff) {
|
||||
hdr[0] = 0xd9;
|
||||
hdr[1] = len;
|
||||
hdrlen = 2;
|
||||
} else if (len <= 0xffff) {
|
||||
hdr[0] = 0xda;
|
||||
hdr[1] = (len&0xff00)>>8;
|
||||
hdr[2] = len&0xff;
|
||||
hdrlen = 3;
|
||||
} else {
|
||||
hdr[0] = 0xdb;
|
||||
hdr[1] = (len&0xff000000)>>24;
|
||||
hdr[2] = (len&0xff0000)>>16;
|
||||
hdr[3] = (len&0xff00)>>8;
|
||||
hdr[4] = len&0xff;
|
||||
hdrlen = 5;
|
||||
}
|
||||
mp_buf_append(L,buf,hdr,hdrlen);
|
||||
mp_buf_append(L,buf,s,len);
|
||||
}
|
||||
|
||||
/* we assume IEEE 754 internal format for single and double precision floats. */
|
||||
void mp_encode_double(lua_State *L, mp_buf *buf, double d) {
|
||||
unsigned char b[9];
|
||||
float f = d;
|
||||
|
||||
assert(sizeof(f) == 4 && sizeof(d) == 8);
|
||||
if (d == (double)f) {
|
||||
b[0] = 0xca; /* float IEEE 754 */
|
||||
memcpy(b+1,&f,4);
|
||||
memrevifle(b+1,4);
|
||||
mp_buf_append(L,buf,b,5);
|
||||
} else if (sizeof(d) == 8) {
|
||||
b[0] = 0xcb; /* double IEEE 754 */
|
||||
memcpy(b+1,&d,8);
|
||||
memrevifle(b+1,8);
|
||||
mp_buf_append(L,buf,b,9);
|
||||
}
|
||||
}
|
||||
|
||||
void mp_encode_int(lua_State *L, mp_buf *buf, int64_t n) {
|
||||
unsigned char b[9];
|
||||
int enclen;
|
||||
|
||||
if (n >= 0) {
|
||||
if (n <= 127) {
|
||||
b[0] = n & 0x7f; /* positive fixnum */
|
||||
enclen = 1;
|
||||
} else if (n <= 0xff) {
|
||||
b[0] = 0xcc; /* uint 8 */
|
||||
b[1] = n & 0xff;
|
||||
enclen = 2;
|
||||
} else if (n <= 0xffff) {
|
||||
b[0] = 0xcd; /* uint 16 */
|
||||
b[1] = (n & 0xff00) >> 8;
|
||||
b[2] = n & 0xff;
|
||||
enclen = 3;
|
||||
} else if (n <= 0xffffffffLL) {
|
||||
b[0] = 0xce; /* uint 32 */
|
||||
b[1] = (n & 0xff000000) >> 24;
|
||||
b[2] = (n & 0xff0000) >> 16;
|
||||
b[3] = (n & 0xff00) >> 8;
|
||||
b[4] = n & 0xff;
|
||||
enclen = 5;
|
||||
} else {
|
||||
b[0] = 0xcf; /* uint 64 */
|
||||
b[1] = (n & 0xff00000000000000LL) >> 56;
|
||||
b[2] = (n & 0xff000000000000LL) >> 48;
|
||||
b[3] = (n & 0xff0000000000LL) >> 40;
|
||||
b[4] = (n & 0xff00000000LL) >> 32;
|
||||
b[5] = (n & 0xff000000) >> 24;
|
||||
b[6] = (n & 0xff0000) >> 16;
|
||||
b[7] = (n & 0xff00) >> 8;
|
||||
b[8] = n & 0xff;
|
||||
enclen = 9;
|
||||
}
|
||||
} else {
|
||||
if (n >= -32) {
|
||||
b[0] = ((signed char)n); /* negative fixnum */
|
||||
enclen = 1;
|
||||
} else if (n >= -128) {
|
||||
b[0] = 0xd0; /* int 8 */
|
||||
b[1] = n & 0xff;
|
||||
enclen = 2;
|
||||
} else if (n >= -32768) {
|
||||
b[0] = 0xd1; /* int 16 */
|
||||
b[1] = (n & 0xff00) >> 8;
|
||||
b[2] = n & 0xff;
|
||||
enclen = 3;
|
||||
} else if (n >= -2147483648LL) {
|
||||
b[0] = 0xd2; /* int 32 */
|
||||
b[1] = (n & 0xff000000) >> 24;
|
||||
b[2] = (n & 0xff0000) >> 16;
|
||||
b[3] = (n & 0xff00) >> 8;
|
||||
b[4] = n & 0xff;
|
||||
enclen = 5;
|
||||
} else {
|
||||
b[0] = 0xd3; /* int 64 */
|
||||
b[1] = (n & 0xff00000000000000LL) >> 56;
|
||||
b[2] = (n & 0xff000000000000LL) >> 48;
|
||||
b[3] = (n & 0xff0000000000LL) >> 40;
|
||||
b[4] = (n & 0xff00000000LL) >> 32;
|
||||
b[5] = (n & 0xff000000) >> 24;
|
||||
b[6] = (n & 0xff0000) >> 16;
|
||||
b[7] = (n & 0xff00) >> 8;
|
||||
b[8] = n & 0xff;
|
||||
enclen = 9;
|
||||
}
|
||||
}
|
||||
mp_buf_append(L,buf,b,enclen);
|
||||
}
|
||||
|
||||
void mp_encode_array(lua_State *L, mp_buf *buf, int64_t n) {
|
||||
unsigned char b[5];
|
||||
int enclen;
|
||||
|
||||
if (n <= 15) {
|
||||
b[0] = 0x90 | (n & 0xf); /* fix array */
|
||||
enclen = 1;
|
||||
} else if (n <= 65535) {
|
||||
b[0] = 0xdc; /* array 16 */
|
||||
b[1] = (n & 0xff00) >> 8;
|
||||
b[2] = n & 0xff;
|
||||
enclen = 3;
|
||||
} else {
|
||||
b[0] = 0xdd; /* array 32 */
|
||||
b[1] = (n & 0xff000000) >> 24;
|
||||
b[2] = (n & 0xff0000) >> 16;
|
||||
b[3] = (n & 0xff00) >> 8;
|
||||
b[4] = n & 0xff;
|
||||
enclen = 5;
|
||||
}
|
||||
mp_buf_append(L,buf,b,enclen);
|
||||
}
|
||||
|
||||
void mp_encode_map(lua_State *L, mp_buf *buf, int64_t n) {
|
||||
unsigned char b[5];
|
||||
int enclen;
|
||||
|
||||
if (n <= 15) {
|
||||
b[0] = 0x80 | (n & 0xf); /* fix map */
|
||||
enclen = 1;
|
||||
} else if (n <= 65535) {
|
||||
b[0] = 0xde; /* map 16 */
|
||||
b[1] = (n & 0xff00) >> 8;
|
||||
b[2] = n & 0xff;
|
||||
enclen = 3;
|
||||
} else {
|
||||
b[0] = 0xdf; /* map 32 */
|
||||
b[1] = (n & 0xff000000) >> 24;
|
||||
b[2] = (n & 0xff0000) >> 16;
|
||||
b[3] = (n & 0xff00) >> 8;
|
||||
b[4] = n & 0xff;
|
||||
enclen = 5;
|
||||
}
|
||||
mp_buf_append(L,buf,b,enclen);
|
||||
}
|
||||
|
||||
/* --------------------------- Lua types encoding --------------------------- */
|
||||
|
||||
void mp_encode_lua_string(lua_State *L, mp_buf *buf) {
|
||||
size_t len;
|
||||
const char *s;
|
||||
|
||||
s = lua_tolstring(L,-1,&len);
|
||||
mp_encode_bytes(L,buf,(const unsigned char*)s,len);
|
||||
}
|
||||
|
||||
void mp_encode_lua_bool(lua_State *L, mp_buf *buf) {
|
||||
unsigned char b = lua_toboolean(L,-1) ? 0xc3 : 0xc2;
|
||||
mp_buf_append(L,buf,&b,1);
|
||||
}
|
||||
|
||||
/* Lua 5.3 has a built in 64-bit integer type */
|
||||
void mp_encode_lua_integer(lua_State *L, mp_buf *buf) {
|
||||
#if (LUA_VERSION_NUM < 503) && BITS_32
|
||||
lua_Number i = lua_tonumber(L,-1);
|
||||
#else
|
||||
lua_Integer i = lua_tointeger(L,-1);
|
||||
#endif
|
||||
mp_encode_int(L, buf, (int64_t)i);
|
||||
}
|
||||
|
||||
/* Lua 5.2 and lower only has 64-bit doubles, so we need to
|
||||
* detect if the double may be representable as an int
|
||||
* for Lua < 5.3 */
|
||||
void mp_encode_lua_number(lua_State *L, mp_buf *buf) {
|
||||
lua_Number n = lua_tonumber(L,-1);
|
||||
|
||||
if (IS_INT64_EQUIVALENT(n)) {
|
||||
mp_encode_lua_integer(L, buf);
|
||||
} else {
|
||||
mp_encode_double(L,buf,(double)n);
|
||||
}
|
||||
}
|
||||
|
||||
void mp_encode_lua_type(lua_State *L, mp_buf *buf, int level);
|
||||
|
||||
/* Convert a lua table into a message pack list. */
|
||||
void mp_encode_lua_table_as_array(lua_State *L, mp_buf *buf, int level) {
|
||||
#if LUA_VERSION_NUM < 502
|
||||
size_t len = lua_objlen(L,-1), j;
|
||||
#else
|
||||
size_t len = lua_rawlen(L,-1), j;
|
||||
#endif
|
||||
|
||||
mp_encode_array(L,buf,len);
|
||||
luaL_checkstack(L, 1, "in function mp_encode_lua_table_as_array");
|
||||
for (j = 1; j <= len; j++) {
|
||||
lua_pushnumber(L,j);
|
||||
lua_gettable(L,-2);
|
||||
mp_encode_lua_type(L,buf,level+1);
|
||||
}
|
||||
}
|
||||
|
||||
/* Convert a lua table into a message pack key-value map. */
|
||||
void mp_encode_lua_table_as_map(lua_State *L, mp_buf *buf, int level) {
|
||||
size_t len = 0;
|
||||
|
||||
/* First step: count keys into table. No other way to do it with the
|
||||
* Lua API, we need to iterate a first time. Note that an alternative
|
||||
* would be to do a single run, and then hack the buffer to insert the
|
||||
* map opcodes for message pack. Too hackish for this lib. */
|
||||
luaL_checkstack(L, 3, "in function mp_encode_lua_table_as_map");
|
||||
lua_pushnil(L);
|
||||
while(lua_next(L,-2)) {
|
||||
lua_pop(L,1); /* remove value, keep key for next iteration. */
|
||||
len++;
|
||||
}
|
||||
|
||||
/* Step two: actually encoding of the map. */
|
||||
mp_encode_map(L,buf,len);
|
||||
lua_pushnil(L);
|
||||
while(lua_next(L,-2)) {
|
||||
/* Stack: ... key value */
|
||||
lua_pushvalue(L,-2); /* Stack: ... key value key */
|
||||
mp_encode_lua_type(L,buf,level+1); /* encode key */
|
||||
mp_encode_lua_type(L,buf,level+1); /* encode val */
|
||||
}
|
||||
}
|
||||
|
||||
/* Returns true if the Lua table on top of the stack is exclusively composed
|
||||
* of numerical keys from 1 up to N, with N being the total number
|
||||
* of elements, without any hole in the middle. */
|
||||
int table_is_an_array(lua_State *L) {
|
||||
int count = 0, max = 0;
|
||||
#if LUA_VERSION_NUM < 503
|
||||
lua_Number n;
|
||||
#else
|
||||
lua_Integer n;
|
||||
#endif
|
||||
|
||||
/* Stack top on function entry */
|
||||
int stacktop;
|
||||
|
||||
stacktop = lua_gettop(L);
|
||||
|
||||
lua_pushnil(L);
|
||||
while(lua_next(L,-2)) {
|
||||
/* Stack: ... key value */
|
||||
lua_pop(L,1); /* Stack: ... key */
|
||||
/* The <= 0 check is valid here because we're comparing indexes. */
|
||||
#if LUA_VERSION_NUM < 503
|
||||
if ((LUA_TNUMBER != lua_type(L,-1)) || (n = lua_tonumber(L, -1)) <= 0 ||
|
||||
!IS_INT_EQUIVALENT(n))
|
||||
#else
|
||||
if (!lua_isinteger(L,-1) || (n = lua_tointeger(L, -1)) <= 0)
|
||||
#endif
|
||||
{
|
||||
lua_settop(L, stacktop);
|
||||
return 0;
|
||||
}
|
||||
max = (n > max ? n : max);
|
||||
count++;
|
||||
}
|
||||
/* We have the total number of elements in "count". Also we have
|
||||
* the max index encountered in "max". We can't reach this code
|
||||
* if there are indexes <= 0. If you also note that there can not be
|
||||
* repeated keys into a table, you have that if max==count you are sure
|
||||
* that there are all the keys from 1 to count (both included). */
|
||||
lua_settop(L, stacktop);
|
||||
return max == count;
|
||||
}
|
||||
|
||||
/* If the length operator returns non-zero, that is, there is at least
|
||||
* an object at key '1', we serialize to message pack list. Otherwise
|
||||
* we use a map. */
|
||||
void mp_encode_lua_table(lua_State *L, mp_buf *buf, int level) {
|
||||
if (table_is_an_array(L))
|
||||
mp_encode_lua_table_as_array(L,buf,level);
|
||||
else
|
||||
mp_encode_lua_table_as_map(L,buf,level);
|
||||
}
|
||||
|
||||
void mp_encode_lua_null(lua_State *L, mp_buf *buf) {
|
||||
unsigned char b[1];
|
||||
|
||||
b[0] = 0xc0;
|
||||
mp_buf_append(L,buf,b,1);
|
||||
}
|
||||
|
||||
void mp_encode_lua_type(lua_State *L, mp_buf *buf, int level) {
|
||||
int t = lua_type(L,-1);
|
||||
|
||||
/* Limit the encoding of nested tables to a specified maximum depth, so that
|
||||
* we survive when called against circular references in tables. */
|
||||
if (t == LUA_TTABLE && level == LUACMSGPACK_MAX_NESTING) t = LUA_TNIL;
|
||||
switch(t) {
|
||||
case LUA_TSTRING: mp_encode_lua_string(L,buf); break;
|
||||
case LUA_TBOOLEAN: mp_encode_lua_bool(L,buf); break;
|
||||
case LUA_TNUMBER:
|
||||
#if LUA_VERSION_NUM < 503
|
||||
mp_encode_lua_number(L,buf); break;
|
||||
#else
|
||||
if (lua_isinteger(L, -1)) {
|
||||
mp_encode_lua_integer(L, buf);
|
||||
} else {
|
||||
mp_encode_lua_number(L, buf);
|
||||
}
|
||||
break;
|
||||
#endif
|
||||
case LUA_TTABLE: mp_encode_lua_table(L,buf,level); break;
|
||||
default: mp_encode_lua_null(L,buf); break;
|
||||
}
|
||||
lua_pop(L,1);
|
||||
}
|
||||
|
||||
/*
|
||||
* Packs all arguments as a stream for multiple unpacking later.
|
||||
* Returns error if no arguments provided.
|
||||
*/
|
||||
int mp_pack(lua_State *L) {
|
||||
int nargs = lua_gettop(L);
|
||||
int i;
|
||||
mp_buf *buf;
|
||||
|
||||
if (nargs == 0)
|
||||
return luaL_argerror(L, 0, "MessagePack pack needs input.");
|
||||
|
||||
if (!lua_checkstack(L, nargs))
|
||||
return luaL_argerror(L, 0, "Too many arguments for MessagePack pack.");
|
||||
|
||||
buf = mp_buf_new(L);
|
||||
for(i = 1; i <= nargs; i++) {
|
||||
/* Copy argument i to top of stack for _encode processing;
|
||||
* the encode function pops it from the stack when complete. */
|
||||
luaL_checkstack(L, 1, "in function mp_check");
|
||||
lua_pushvalue(L, i);
|
||||
|
||||
mp_encode_lua_type(L,buf,0);
|
||||
|
||||
lua_pushlstring(L,(char*)buf->b,buf->len);
|
||||
|
||||
/* Reuse the buffer for the next operation by
|
||||
* setting its free count to the total buffer size
|
||||
* and the current position to zero. */
|
||||
buf->free += buf->len;
|
||||
buf->len = 0;
|
||||
}
|
||||
mp_buf_free(L, buf);
|
||||
|
||||
/* Concatenate all nargs buffers together */
|
||||
lua_concat(L, nargs);
|
||||
return 1;
|
||||
}
|
||||
|
||||
/* ------------------------------- Decoding --------------------------------- */
|
||||
|
||||
void mp_decode_to_lua_type(lua_State *L, mp_cur *c);
|
||||
|
||||
void mp_decode_to_lua_array(lua_State *L, mp_cur *c, size_t len) {
|
||||
assert(len <= UINT_MAX);
|
||||
int index = 1;
|
||||
|
||||
lua_newtable(L);
|
||||
luaL_checkstack(L, 1, "in function mp_decode_to_lua_array");
|
||||
while(len--) {
|
||||
lua_pushnumber(L,index++);
|
||||
mp_decode_to_lua_type(L,c);
|
||||
if (c->err) return;
|
||||
lua_settable(L,-3);
|
||||
}
|
||||
}
|
||||
|
||||
void mp_decode_to_lua_hash(lua_State *L, mp_cur *c, size_t len) {
|
||||
assert(len <= UINT_MAX);
|
||||
lua_newtable(L);
|
||||
while(len--) {
|
||||
mp_decode_to_lua_type(L,c); /* key */
|
||||
if (c->err) return;
|
||||
mp_decode_to_lua_type(L,c); /* value */
|
||||
if (c->err) return;
|
||||
lua_settable(L,-3);
|
||||
}
|
||||
}
|
||||
|
||||
/* Decode a Message Pack raw object pointed by the string cursor 'c' to
|
||||
* a Lua type, that is left as the only result on the stack. */
|
||||
void mp_decode_to_lua_type(lua_State *L, mp_cur *c) {
|
||||
mp_cur_need(c,1);
|
||||
|
||||
/* If we return more than 18 elements, we must resize the stack to
|
||||
* fit all our return values. But, there is no way to
|
||||
* determine how many objects a msgpack will unpack to up front, so
|
||||
* we request a +1 larger stack on each iteration (noop if stack is
|
||||
* big enough, and when stack does require resize it doubles in size) */
|
||||
luaL_checkstack(L, 1,
|
||||
"too many return values at once; "
|
||||
"use unpack_one or unpack_limit instead.");
|
||||
|
||||
switch(c->p[0]) {
|
||||
case 0xcc: /* uint 8 */
|
||||
mp_cur_need(c,2);
|
||||
lua_pushunsigned(L,c->p[1]);
|
||||
mp_cur_consume(c,2);
|
||||
break;
|
||||
case 0xd0: /* int 8 */
|
||||
mp_cur_need(c,2);
|
||||
lua_pushinteger(L,(signed char)c->p[1]);
|
||||
mp_cur_consume(c,2);
|
||||
break;
|
||||
case 0xcd: /* uint 16 */
|
||||
mp_cur_need(c,3);
|
||||
lua_pushunsigned(L,
|
||||
(c->p[1] << 8) |
|
||||
c->p[2]);
|
||||
mp_cur_consume(c,3);
|
||||
break;
|
||||
case 0xd1: /* int 16 */
|
||||
mp_cur_need(c,3);
|
||||
lua_pushinteger(L,(int16_t)
|
||||
(c->p[1] << 8) |
|
||||
c->p[2]);
|
||||
mp_cur_consume(c,3);
|
||||
break;
|
||||
case 0xce: /* uint 32 */
|
||||
mp_cur_need(c,5);
|
||||
lua_pushunsigned(L,
|
||||
((uint32_t)c->p[1] << 24) |
|
||||
((uint32_t)c->p[2] << 16) |
|
||||
((uint32_t)c->p[3] << 8) |
|
||||
(uint32_t)c->p[4]);
|
||||
mp_cur_consume(c,5);
|
||||
break;
|
||||
case 0xd2: /* int 32 */
|
||||
mp_cur_need(c,5);
|
||||
lua_pushinteger(L,
|
||||
((int32_t)c->p[1] << 24) |
|
||||
((int32_t)c->p[2] << 16) |
|
||||
((int32_t)c->p[3] << 8) |
|
||||
(int32_t)c->p[4]);
|
||||
mp_cur_consume(c,5);
|
||||
break;
|
||||
case 0xcf: /* uint 64 */
|
||||
mp_cur_need(c,9);
|
||||
lua_pushunsigned(L,
|
||||
((uint64_t)c->p[1] << 56) |
|
||||
((uint64_t)c->p[2] << 48) |
|
||||
((uint64_t)c->p[3] << 40) |
|
||||
((uint64_t)c->p[4] << 32) |
|
||||
((uint64_t)c->p[5] << 24) |
|
||||
((uint64_t)c->p[6] << 16) |
|
||||
((uint64_t)c->p[7] << 8) |
|
||||
(uint64_t)c->p[8]);
|
||||
mp_cur_consume(c,9);
|
||||
break;
|
||||
case 0xd3: /* int 64 */
|
||||
mp_cur_need(c,9);
|
||||
#if LUA_VERSION_NUM < 503
|
||||
lua_pushnumber(L,
|
||||
#else
|
||||
lua_pushinteger(L,
|
||||
#endif
|
||||
((int64_t)c->p[1] << 56) |
|
||||
((int64_t)c->p[2] << 48) |
|
||||
((int64_t)c->p[3] << 40) |
|
||||
((int64_t)c->p[4] << 32) |
|
||||
((int64_t)c->p[5] << 24) |
|
||||
((int64_t)c->p[6] << 16) |
|
||||
((int64_t)c->p[7] << 8) |
|
||||
(int64_t)c->p[8]);
|
||||
mp_cur_consume(c,9);
|
||||
break;
|
||||
case 0xc0: /* nil */
|
||||
lua_pushnil(L);
|
||||
mp_cur_consume(c,1);
|
||||
break;
|
||||
case 0xc3: /* true */
|
||||
lua_pushboolean(L,1);
|
||||
mp_cur_consume(c,1);
|
||||
break;
|
||||
case 0xc2: /* false */
|
||||
lua_pushboolean(L,0);
|
||||
mp_cur_consume(c,1);
|
||||
break;
|
||||
case 0xca: /* float */
|
||||
mp_cur_need(c,5);
|
||||
assert(sizeof(float) == 4);
|
||||
{
|
||||
float f;
|
||||
memcpy(&f,c->p+1,4);
|
||||
memrevifle(&f,4);
|
||||
lua_pushnumber(L,f);
|
||||
mp_cur_consume(c,5);
|
||||
}
|
||||
break;
|
||||
case 0xcb: /* double */
|
||||
mp_cur_need(c,9);
|
||||
assert(sizeof(double) == 8);
|
||||
{
|
||||
double d;
|
||||
memcpy(&d,c->p+1,8);
|
||||
memrevifle(&d,8);
|
||||
lua_pushnumber(L,d);
|
||||
mp_cur_consume(c,9);
|
||||
}
|
||||
break;
|
||||
case 0xd9: /* raw 8 */
|
||||
mp_cur_need(c,2);
|
||||
{
|
||||
size_t l = c->p[1];
|
||||
mp_cur_need(c,2+l);
|
||||
lua_pushlstring(L,(char*)c->p+2,l);
|
||||
mp_cur_consume(c,2+l);
|
||||
}
|
||||
break;
|
||||
case 0xda: /* raw 16 */
|
||||
mp_cur_need(c,3);
|
||||
{
|
||||
size_t l = (c->p[1] << 8) | c->p[2];
|
||||
mp_cur_need(c,3+l);
|
||||
lua_pushlstring(L,(char*)c->p+3,l);
|
||||
mp_cur_consume(c,3+l);
|
||||
}
|
||||
break;
|
||||
case 0xdb: /* raw 32 */
|
||||
mp_cur_need(c,5);
|
||||
{
|
||||
size_t l = ((size_t)c->p[1] << 24) |
|
||||
((size_t)c->p[2] << 16) |
|
||||
((size_t)c->p[3] << 8) |
|
||||
(size_t)c->p[4];
|
||||
mp_cur_consume(c,5);
|
||||
mp_cur_need(c,l);
|
||||
lua_pushlstring(L,(char*)c->p,l);
|
||||
mp_cur_consume(c,l);
|
||||
}
|
||||
break;
|
||||
case 0xdc: /* array 16 */
|
||||
mp_cur_need(c,3);
|
||||
{
|
||||
size_t l = (c->p[1] << 8) | c->p[2];
|
||||
mp_cur_consume(c,3);
|
||||
mp_decode_to_lua_array(L,c,l);
|
||||
}
|
||||
break;
|
||||
case 0xdd: /* array 32 */
|
||||
mp_cur_need(c,5);
|
||||
{
|
||||
size_t l = ((size_t)c->p[1] << 24) |
|
||||
((size_t)c->p[2] << 16) |
|
||||
((size_t)c->p[3] << 8) |
|
||||
(size_t)c->p[4];
|
||||
mp_cur_consume(c,5);
|
||||
mp_decode_to_lua_array(L,c,l);
|
||||
}
|
||||
break;
|
||||
case 0xde: /* map 16 */
|
||||
mp_cur_need(c,3);
|
||||
{
|
||||
size_t l = (c->p[1] << 8) | c->p[2];
|
||||
mp_cur_consume(c,3);
|
||||
mp_decode_to_lua_hash(L,c,l);
|
||||
}
|
||||
break;
|
||||
case 0xdf: /* map 32 */
|
||||
mp_cur_need(c,5);
|
||||
{
|
||||
size_t l = ((size_t)c->p[1] << 24) |
|
||||
((size_t)c->p[2] << 16) |
|
||||
((size_t)c->p[3] << 8) |
|
||||
(size_t)c->p[4];
|
||||
mp_cur_consume(c,5);
|
||||
mp_decode_to_lua_hash(L,c,l);
|
||||
}
|
||||
break;
|
||||
default: /* types that can't be identified by first byte value. */
|
||||
if ((c->p[0] & 0x80) == 0) { /* positive fixnum */
|
||||
lua_pushunsigned(L,c->p[0]);
|
||||
mp_cur_consume(c,1);
|
||||
} else if ((c->p[0] & 0xe0) == 0xe0) { /* negative fixnum */
|
||||
lua_pushinteger(L,(signed char)c->p[0]);
|
||||
mp_cur_consume(c,1);
|
||||
} else if ((c->p[0] & 0xe0) == 0xa0) { /* fix raw */
|
||||
size_t l = c->p[0] & 0x1f;
|
||||
mp_cur_need(c,1+l);
|
||||
lua_pushlstring(L,(char*)c->p+1,l);
|
||||
mp_cur_consume(c,1+l);
|
||||
} else if ((c->p[0] & 0xf0) == 0x90) { /* fix array */
|
||||
size_t l = c->p[0] & 0xf;
|
||||
mp_cur_consume(c,1);
|
||||
mp_decode_to_lua_array(L,c,l);
|
||||
} else if ((c->p[0] & 0xf0) == 0x80) { /* fix map */
|
||||
size_t l = c->p[0] & 0xf;
|
||||
mp_cur_consume(c,1);
|
||||
mp_decode_to_lua_hash(L,c,l);
|
||||
} else {
|
||||
c->err = MP_CUR_ERROR_BADFMT;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
int mp_unpack_full(lua_State *L, int limit, int offset) {
|
||||
size_t len;
|
||||
const char *s;
|
||||
mp_cur c;
|
||||
int cnt; /* Number of objects unpacked */
|
||||
int decode_all = (!limit && !offset);
|
||||
|
||||
s = luaL_checklstring(L,1,&len); /* if no match, exits */
|
||||
|
||||
if (offset < 0 || limit < 0) /* requesting negative off or lim is invalid */
|
||||
return luaL_error(L,
|
||||
"Invalid request to unpack with offset of %d and limit of %d.",
|
||||
offset, len);
|
||||
else if (offset > len)
|
||||
return luaL_error(L,
|
||||
"Start offset %d greater than input length %d.", offset, len);
|
||||
|
||||
if (decode_all) limit = INT_MAX;
|
||||
|
||||
mp_cur_init(&c,(const unsigned char *)s+offset,len-offset);
|
||||
|
||||
/* We loop over the decode because this could be a stream
|
||||
* of multiple top-level values serialized together */
|
||||
for(cnt = 0; c.left > 0 && cnt < limit; cnt++) {
|
||||
mp_decode_to_lua_type(L,&c);
|
||||
|
||||
if (c.err == MP_CUR_ERROR_EOF) {
|
||||
return luaL_error(L,"Missing bytes in input.");
|
||||
} else if (c.err == MP_CUR_ERROR_BADFMT) {
|
||||
return luaL_error(L,"Bad data format in input.");
|
||||
}
|
||||
}
|
||||
|
||||
if (!decode_all) {
|
||||
/* c->left is the remaining size of the input buffer.
|
||||
* subtract the entire buffer size from the unprocessed size
|
||||
* to get our next start offset */
|
||||
int offset = len - c.left;
|
||||
|
||||
luaL_checkstack(L, 1, "in function mp_unpack_full");
|
||||
|
||||
/* Return offset -1 when we have processed the entire buffer. */
|
||||
lua_pushinteger(L, c.left == 0 ? -1 : offset);
|
||||
/* Results are returned with the arg elements still
|
||||
* in place. Lua takes care of only returning
|
||||
* elements above the args for us.
|
||||
* In this case, we have one arg on the stack
|
||||
* for this function, so we insert our first return
|
||||
* value at position 2. */
|
||||
lua_insert(L, 2);
|
||||
cnt += 1; /* increase return count by one to make room for offset */
|
||||
}
|
||||
|
||||
return cnt;
|
||||
}
|
||||
|
||||
int mp_unpack(lua_State *L) {
|
||||
return mp_unpack_full(L, 0, 0);
|
||||
}
|
||||
|
||||
int mp_unpack_one(lua_State *L) {
|
||||
int offset = luaL_optinteger(L, 2, 0);
|
||||
/* Variable pop because offset may not exist */
|
||||
lua_pop(L, lua_gettop(L)-1);
|
||||
return mp_unpack_full(L, 1, offset);
|
||||
}
|
||||
|
||||
int mp_unpack_limit(lua_State *L) {
|
||||
int limit = luaL_checkinteger(L, 2);
|
||||
int offset = luaL_optinteger(L, 3, 0);
|
||||
/* Variable pop because offset may not exist */
|
||||
lua_pop(L, lua_gettop(L)-1);
|
||||
|
||||
return mp_unpack_full(L, limit, offset);
|
||||
}
|
||||
|
||||
int mp_safe(lua_State *L) {
|
||||
int argc, err, total_results;
|
||||
|
||||
argc = lua_gettop(L);
|
||||
|
||||
/* This adds our function to the bottom of the stack
|
||||
* (the "call this function" position) */
|
||||
lua_pushvalue(L, lua_upvalueindex(1));
|
||||
lua_insert(L, 1);
|
||||
|
||||
err = lua_pcall(L, argc, LUA_MULTRET, 0);
|
||||
total_results = lua_gettop(L);
|
||||
|
||||
if (!err) {
|
||||
return total_results;
|
||||
} else {
|
||||
lua_pushnil(L);
|
||||
lua_insert(L,-2);
|
||||
return 2;
|
||||
}
|
||||
}
|
||||
|
||||
/* -------------------------------------------------------------------------- */
|
||||
const struct luaL_Reg cmds[] = {
|
||||
{"pack", mp_pack},
|
||||
{"unpack", mp_unpack},
|
||||
{"unpack_one", mp_unpack_one},
|
||||
{"unpack_limit", mp_unpack_limit},
|
||||
{0}
|
||||
};
|
||||
|
||||
int luaopen_create(lua_State *L) {
|
||||
int i;
|
||||
/* Manually construct our module table instead of
|
||||
* relying on _register or _newlib */
|
||||
lua_newtable(L);
|
||||
|
||||
for (i = 0; i < (sizeof(cmds)/sizeof(*cmds) - 1); i++) {
|
||||
lua_pushcfunction(L, cmds[i].func);
|
||||
lua_setfield(L, -2, cmds[i].name);
|
||||
}
|
||||
|
||||
/* Add metadata */
|
||||
lua_pushliteral(L, LUACMSGPACK_NAME);
|
||||
lua_setfield(L, -2, "_NAME");
|
||||
lua_pushliteral(L, LUACMSGPACK_VERSION);
|
||||
lua_setfield(L, -2, "_VERSION");
|
||||
lua_pushliteral(L, LUACMSGPACK_COPYRIGHT);
|
||||
lua_setfield(L, -2, "_COPYRIGHT");
|
||||
lua_pushliteral(L, LUACMSGPACK_DESCRIPTION);
|
||||
lua_setfield(L, -2, "_DESCRIPTION");
|
||||
return 1;
|
||||
}
|
||||
|
||||
LUALIB_API int luaopen_cmsgpack(lua_State *L) {
|
||||
luaopen_create(L);
|
||||
|
||||
lua_pushvalue(L, -1);
|
||||
lua_setglobal(L, LUACMSGPACK_NAME);
|
||||
|
||||
return 1;
|
||||
}
|
||||
|
||||
LUALIB_API int luaopen_cmsgpack_safe(lua_State *L) {
|
||||
int i;
|
||||
|
||||
luaopen_cmsgpack(L);
|
||||
|
||||
/* Wrap all functions in the safe handler */
|
||||
for (i = 0; i < (sizeof(cmds)/sizeof(*cmds) - 1); i++) {
|
||||
lua_getfield(L, -1, cmds[i].name);
|
||||
lua_pushcclosure(L, mp_safe, 1);
|
||||
lua_setfield(L, -2, cmds[i].name);
|
||||
}
|
||||
|
||||
#if LUA_VERSION_NUM < 502
|
||||
/* Register name globally for 5.1 */
|
||||
lua_pushvalue(L, -1);
|
||||
lua_setglobal(L, LUACMSGPACK_SAFE_NAME);
|
||||
#endif
|
||||
|
||||
return 1;
|
||||
}
|
||||
|
||||
/******************************************************************************
|
||||
* Copyright (C) 2012 Salvatore Sanfilippo. All rights reserved.
|
||||
*
|
||||
* Permission is hereby granted, free of charge, to any person obtaining
|
||||
* a copy of this software and associated documentation files (the
|
||||
* "Software"), to deal in the Software without restriction, including
|
||||
* without limitation the rights to use, copy, modify, merge, publish,
|
||||
* distribute, sublicense, and/or sell copies of the Software, and to
|
||||
* permit persons to whom the Software is furnished to do so, subject to
|
||||
* the following conditions:
|
||||
*
|
||||
* The above copyright notice and this permission notice shall be
|
||||
* included in all copies or substantial portions of the Software.
|
||||
*
|
||||
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
|
||||
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
|
||||
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
|
||||
* IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
|
||||
* CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
|
||||
* TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
|
||||
* SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
|
||||
******************************************************************************/
|
|
@ -1,422 +0,0 @@
|
|||
/*
|
||||
** {======================================================
|
||||
** Library for packing/unpacking structures.
|
||||
** $Id: struct.c,v 1.7 2018/05/11 22:04:31 roberto Exp $
|
||||
** See Copyright Notice at the end of this file
|
||||
** =======================================================
|
||||
*/
|
||||
/*
|
||||
** Valid formats:
|
||||
** > - big endian
|
||||
** < - little endian
|
||||
** ![num] - alignment
|
||||
** x - padding
|
||||
** b/B - signed/unsigned byte
|
||||
** h/H - signed/unsigned short
|
||||
** l/L - signed/unsigned long
|
||||
** T - size_t
|
||||
** i/In - signed/unsigned integer with size 'n' (default is size of int)
|
||||
** cn - sequence of 'n' chars (from/to a string); when packing, n==0 means
|
||||
the whole string; when unpacking, n==0 means use the previous
|
||||
read number as the string length
|
||||
** s - zero-terminated string
|
||||
** f - float
|
||||
** d - double
|
||||
** ' ' - ignored
|
||||
*/
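To make the endianness options above concrete, here is a small illustrative C++ sketch (not part of the original source; the array names are made up) showing what '>' versus '<' means for a 2-byte integer such as one packed with "H": only the byte order of the output differs.

#include <cstdint>
#include <cstdio>

int main() {
  uint16_t v = 0x0102;
  // ">H" (big endian): most significant byte first -> 0x01 0x02
  unsigned char big[2] = {(unsigned char)(v >> 8), (unsigned char)(v & 0xff)};
  // "<H" (little endian): least significant byte first -> 0x02 0x01
  unsigned char little[2] = {(unsigned char)(v & 0xff), (unsigned char)(v >> 8)};
  printf("big:    %02x %02x\n", big[0], big[1]);
  printf("little: %02x %02x\n", little[0], little[1]);
  return 0;
}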
|
||||
|
||||
|
||||
#include <assert.h>
|
||||
#include <ctype.h>
|
||||
#include <limits.h>
|
||||
#include <stddef.h>
|
||||
#include <string.h>
|
||||
|
||||
|
||||
#include "lua.h"
|
||||
#include "lauxlib.h"
|
||||
|
||||
|
||||
/* basic integer type */
|
||||
#if !defined(STRUCT_INT)
|
||||
#define STRUCT_INT long
|
||||
#endif
|
||||
|
||||
typedef STRUCT_INT Inttype;
|
||||
|
||||
/* corresponding unsigned version */
|
||||
typedef unsigned STRUCT_INT Uinttype;
|
||||
|
||||
|
||||
/* maximum size (in bytes) for integral types */
|
||||
#define MAXINTSIZE 32
|
||||
|
||||
/* is 'x' a power of 2? */
|
||||
#define isp2(x) ((x) > 0 && ((x) & ((x) - 1)) == 0)
|
||||
|
||||
/* dummy structure to get alignment requirements */
|
||||
struct cD {
|
||||
char c;
|
||||
double d;
|
||||
};
|
||||
|
||||
|
||||
#define PADDING (sizeof(struct cD) - sizeof(double))
|
||||
#define MAXALIGN (PADDING > sizeof(int) ? PADDING : sizeof(int))
|
||||
|
||||
|
||||
/* endian options */
|
||||
#define BIG 0
|
||||
#define LITTLE 1
|
||||
|
||||
|
||||
static union {
|
||||
int dummy;
|
||||
char endian;
|
||||
} const native = {1};
|
||||
|
||||
|
||||
typedef struct Header {
|
||||
int endian;
|
||||
int align;
|
||||
} Header;
|
||||
|
||||
|
||||
static int getnum (lua_State *L, const char **fmt, int df) {
|
||||
if (!isdigit(**fmt)) /* no number? */
|
||||
return df; /* return default value */
|
||||
else {
|
||||
int a = 0;
|
||||
do {
|
||||
if (a > (INT_MAX / 10) || a * 10 > (INT_MAX - (**fmt - '0')))
|
||||
luaL_error(L, "integral size overflow");
|
||||
a = a*10 + *((*fmt)++) - '0';
|
||||
} while (isdigit(**fmt));
|
||||
return a;
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
#define defaultoptions(h) ((h)->endian = native.endian, (h)->align = 1)
|
||||
|
||||
|
||||
|
||||
static size_t optsize (lua_State *L, char opt, const char **fmt) {
|
||||
switch (opt) {
|
||||
case 'B': case 'b': return sizeof(char);
|
||||
case 'H': case 'h': return sizeof(short);
|
||||
case 'L': case 'l': return sizeof(long);
|
||||
case 'T': return sizeof(size_t);
|
||||
case 'f': return sizeof(float);
|
||||
case 'd': return sizeof(double);
|
||||
case 'x': return 1;
|
||||
case 'c': return getnum(L, fmt, 1);
|
||||
case 'i': case 'I': {
|
||||
int sz = getnum(L, fmt, sizeof(int));
|
||||
if (sz > MAXINTSIZE)
|
||||
luaL_error(L, "integral size %d is larger than limit of %d",
|
||||
sz, MAXINTSIZE);
|
||||
return sz;
|
||||
}
|
||||
default: return 0; /* other cases do not need alignment */
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
/*
|
||||
** return number of bytes needed to align an element of size 'size'
|
||||
** at current position 'len'
|
||||
*/
|
||||
static int gettoalign (size_t len, Header *h, int opt, size_t size) {
|
||||
if (size == 0 || opt == 'c') return 0;
|
||||
if (size > (size_t)h->align)
|
||||
size = h->align; /* respect max. alignment */
|
||||
return (size - (len & (size - 1))) & (size - 1);
|
||||
}
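A small worked example of the padding formula above (an illustrative sketch, not part of the original file): with an element size of 4 and a current length of 5, the expression yields 3, so three '\0' bytes are emitted before the next 4-byte element.

#include <cstddef>
#include <cstdio>

// Mirrors the alignment computation in gettoalign for power-of-two sizes.
static size_t to_align(size_t len, size_t size) {
  return (size - (len & (size - 1))) & (size - 1);
}

int main() {
  printf("%zu\n", to_align(5, 4));  // 3: pad 5 up to 8
  printf("%zu\n", to_align(8, 4));  // 0: already aligned
  return 0;
}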
|
||||
|
||||
|
||||
/*
|
||||
** options to control endianness and alignment
|
||||
*/
|
||||
static void controloptions (lua_State *L, int opt, const char **fmt,
|
||||
Header *h) {
|
||||
switch (opt) {
|
||||
case ' ': return; /* ignore white spaces */
|
||||
case '>': h->endian = BIG; return;
|
||||
case '<': h->endian = LITTLE; return;
|
||||
case '!': {
|
||||
int a = getnum(L, fmt, MAXALIGN);
|
||||
if (!isp2(a))
|
||||
luaL_error(L, "alignment %d is not a power of 2", a);
|
||||
h->align = a;
|
||||
return;
|
||||
}
|
||||
default: {
|
||||
const char *msg = lua_pushfstring(L, "invalid format option '%c'", opt);
|
||||
luaL_argerror(L, 1, msg);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
static void putinteger (lua_State *L, luaL_Buffer *b, int arg, int endian,
|
||||
int size) {
|
||||
lua_Number n = luaL_checknumber(L, arg);
|
||||
Uinttype value;
|
||||
char buff[MAXINTSIZE];
|
||||
if (n < 0)
|
||||
value = (Uinttype)(Inttype)n;
|
||||
else
|
||||
value = (Uinttype)n;
|
||||
if (endian == LITTLE) {
|
||||
int i;
|
||||
for (i = 0; i < size; i++) {
|
||||
buff[i] = (value & 0xff);
|
||||
value >>= 8;
|
||||
}
|
||||
}
|
||||
else {
|
||||
int i;
|
||||
for (i = size - 1; i >= 0; i--) {
|
||||
buff[i] = (value & 0xff);
|
||||
value >>= 8;
|
||||
}
|
||||
}
|
||||
luaL_addlstring(b, buff, size);
|
||||
}
|
||||
|
||||
|
||||
static void correctbytes (char *b, int size, int endian) {
|
||||
if (endian != native.endian) {
|
||||
int i = 0;
|
||||
while (i < --size) {
|
||||
char temp = b[i];
|
||||
b[i++] = b[size];
|
||||
b[size] = temp;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
static int b_pack (lua_State *L) {
|
||||
luaL_Buffer b;
|
||||
const char *fmt = luaL_checkstring(L, 1);
|
||||
Header h;
|
||||
int arg = 2;
|
||||
size_t totalsize = 0;
|
||||
defaultoptions(&h);
|
||||
lua_pushnil(L); /* mark to separate arguments from string buffer */
|
||||
luaL_buffinit(L, &b);
|
||||
while (*fmt != '\0') {
|
||||
int opt = *fmt++;
|
||||
size_t size = optsize(L, opt, &fmt);
|
||||
int toalign = gettoalign(totalsize, &h, opt, size);
|
||||
totalsize += toalign;
|
||||
while (toalign-- > 0) luaL_addchar(&b, '\0');
|
||||
switch (opt) {
|
||||
case 'b': case 'B': case 'h': case 'H':
|
||||
case 'l': case 'L': case 'T': case 'i': case 'I': { /* integer types */
|
||||
putinteger(L, &b, arg++, h.endian, size);
|
||||
break;
|
||||
}
|
||||
case 'x': {
|
||||
luaL_addchar(&b, '\0');
|
||||
break;
|
||||
}
|
||||
case 'f': {
|
||||
float f = (float)luaL_checknumber(L, arg++);
|
||||
correctbytes((char *)&f, size, h.endian);
|
||||
luaL_addlstring(&b, (char *)&f, size);
|
||||
break;
|
||||
}
|
||||
case 'd': {
|
||||
double d = luaL_checknumber(L, arg++);
|
||||
correctbytes((char *)&d, size, h.endian);
|
||||
luaL_addlstring(&b, (char *)&d, size);
|
||||
break;
|
||||
}
|
||||
case 'c': case 's': {
|
||||
size_t l;
|
||||
const char *s = luaL_checklstring(L, arg++, &l);
|
||||
if (size == 0) size = l;
|
||||
luaL_argcheck(L, l >= (size_t)size, arg, "string too short");
|
||||
luaL_addlstring(&b, s, size);
|
||||
if (opt == 's') {
|
||||
luaL_addchar(&b, '\0'); /* add zero at the end */
|
||||
size++;
|
||||
}
|
||||
break;
|
||||
}
|
||||
default: controloptions(L, opt, &fmt, &h);
|
||||
}
|
||||
totalsize += size;
|
||||
}
|
||||
luaL_pushresult(&b);
|
||||
return 1;
|
||||
}
|
||||
|
||||
|
||||
static lua_Number getinteger (const char *buff, int endian,
|
||||
int issigned, int size) {
|
||||
Uinttype l = 0;
|
||||
int i;
|
||||
if (endian == BIG) {
|
||||
for (i = 0; i < size; i++) {
|
||||
l <<= 8;
|
||||
l |= (Uinttype)(unsigned char)buff[i];
|
||||
}
|
||||
}
|
||||
else {
|
||||
for (i = size - 1; i >= 0; i--) {
|
||||
l <<= 8;
|
||||
l |= (Uinttype)(unsigned char)buff[i];
|
||||
}
|
||||
}
|
||||
if (!issigned)
|
||||
return (lua_Number)l;
|
||||
else { /* signed format */
|
||||
Uinttype mask = (Uinttype)(~((Uinttype)0)) << (size*8 - 1);
|
||||
if (l & mask) /* negative value? */
|
||||
l |= mask; /* sign extension */
|
||||
return (lua_Number)(Inttype)l;
|
||||
}
|
||||
}
|
||||
|
||||
|
||||
static int b_unpack (lua_State *L) {
|
||||
Header h;
|
||||
const char *fmt = luaL_checkstring(L, 1);
|
||||
size_t ld;
|
||||
const char *data = luaL_checklstring(L, 2, &ld);
|
||||
size_t pos = luaL_optinteger(L, 3, 1);
|
||||
luaL_argcheck(L, pos > 0, 3, "offset must be 1 or greater");
|
||||
pos--; /* Lua indexes are 1-based, but here we want 0-based for C
|
||||
* pointer math. */
|
||||
int n = 0; /* number of results */
|
||||
defaultoptions(&h);
|
||||
while (*fmt) {
|
||||
int opt = *fmt++;
|
||||
size_t size = optsize(L, opt, &fmt);
|
||||
pos += gettoalign(pos, &h, opt, size);
|
||||
luaL_argcheck(L, size <= ld && pos <= ld - size,
|
||||
2, "data string too short");
|
||||
/* stack space for item + next position */
|
||||
luaL_checkstack(L, 2, "too many results");
|
||||
switch (opt) {
|
||||
case 'b': case 'B': case 'h': case 'H':
|
||||
case 'l': case 'L': case 'T': case 'i': case 'I': { /* integer types */
|
||||
int issigned = islower(opt);
|
||||
lua_Number res = getinteger(data+pos, h.endian, issigned, size);
|
||||
lua_pushnumber(L, res); n++;
|
||||
break;
|
||||
}
|
||||
case 'x': {
|
||||
break;
|
||||
}
|
||||
case 'f': {
|
||||
float f;
|
||||
memcpy(&f, data+pos, size);
|
||||
correctbytes((char *)&f, sizeof(f), h.endian);
|
||||
lua_pushnumber(L, f); n++;
|
||||
break;
|
||||
}
|
||||
case 'd': {
|
||||
double d;
|
||||
memcpy(&d, data+pos, size);
|
||||
correctbytes((char *)&d, sizeof(d), h.endian);
|
||||
lua_pushnumber(L, d); n++;
|
||||
break;
|
||||
}
|
||||
case 'c': {
|
||||
if (size == 0) {
|
||||
if (n == 0 || !lua_isnumber(L, -1))
|
||||
luaL_error(L, "format 'c0' needs a previous size");
|
||||
size = lua_tonumber(L, -1);
|
||||
lua_pop(L, 1); n--;
|
||||
luaL_argcheck(L, size <= ld && pos <= ld - size,
|
||||
2, "data string too short");
|
||||
}
|
||||
lua_pushlstring(L, data+pos, size); n++;
|
||||
break;
|
||||
}
|
||||
case 's': {
|
||||
const char *e = (const char *)memchr(data+pos, '\0', ld - pos);
|
||||
if (e == NULL)
|
||||
luaL_error(L, "unfinished string in data");
|
||||
size = (e - (data+pos)) + 1;
|
||||
lua_pushlstring(L, data+pos, size - 1); n++;
|
||||
break;
|
||||
}
|
||||
default: controloptions(L, opt, &fmt, &h);
|
||||
}
|
||||
pos += size;
|
||||
}
|
||||
lua_pushinteger(L, pos + 1); /* next position */
|
||||
return n + 1;
|
||||
}
|
||||
|
||||
|
||||
static int b_size (lua_State *L) {
|
||||
Header h;
|
||||
const char *fmt = luaL_checkstring(L, 1);
|
||||
size_t pos = 0;
|
||||
defaultoptions(&h);
|
||||
while (*fmt) {
|
||||
int opt = *fmt++;
|
||||
size_t size = optsize(L, opt, &fmt);
|
||||
pos += gettoalign(pos, &h, opt, size);
|
||||
if (opt == 's')
|
||||
luaL_argerror(L, 1, "option 's' has no fixed size");
|
||||
else if (opt == 'c' && size == 0)
|
||||
luaL_argerror(L, 1, "option 'c0' has no fixed size");
|
||||
if (!isalnum(opt))
|
||||
controloptions(L, opt, &fmt, &h);
|
||||
pos += size;
|
||||
}
|
||||
lua_pushinteger(L, pos);
|
||||
return 1;
|
||||
}
|
||||
|
||||
/* }====================================================== */
|
||||
|
||||
|
||||
|
||||
static const struct luaL_Reg thislib[] = {
|
||||
{"pack", b_pack},
|
||||
{"unpack", b_unpack},
|
||||
{"size", b_size},
|
||||
{NULL, NULL}
|
||||
};
|
||||
|
||||
|
||||
LUALIB_API int luaopen_struct (lua_State *L);
|
||||
|
||||
LUALIB_API int luaopen_struct (lua_State *L) {
|
||||
luaL_newlib(L, thislib);
|
||||
lua_setglobal(L, "struct");
|
||||
return 1;
|
||||
}
|
||||
|
||||
|
||||
/******************************************************************************
|
||||
* Copyright (C) 2010-2018 Lua.org, PUC-Rio. All rights reserved.
|
||||
*
|
||||
* Permission is hereby granted, free of charge, to any person obtaining
|
||||
* a copy of this software and associated documentation files (the
|
||||
* "Software"), to deal in the Software without restriction, including
|
||||
* without limitation the rights to use, copy, modify, merge, publish,
|
||||
* distribute, sublicense, and/or sell copies of the Software, and to
|
||||
* permit persons to whom the Software is furnished to do so, subject to
|
||||
* the following conditions:
|
||||
*
|
||||
* The above copyright notice and this permission notice shall be
|
||||
* included in all copies or substantial portions of the Software.
|
||||
*
|
||||
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
|
||||
* EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
|
||||
* MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
|
||||
* IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
|
||||
* CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
|
||||
* TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
|
||||
* SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
|
||||
******************************************************************************/
|
|
@ -27,7 +27,7 @@
|
|||
#define MAXMEMORY_NO_EVICTION (7<<8)
|
||||
|
||||
|
||||
#define CONFIG_RUN_ID_SIZE 40U
|
||||
#define CONFIG_RUN_ID_SIZE 40
|
||||
|
||||
#define EVPOOL_CACHED_SDS_SIZE 255
|
||||
#define EVPOOL_SIZE 16
|
||||
|
|
|
@ -1,4 +1,4 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// Copyright 2022, Roman Gershman. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
|
||||
|
@ -37,6 +37,8 @@ size_t zmalloc_usable_size(const void* p) {
|
|||
void zfree(void* ptr) {
|
||||
size_t usable = mi_usable_size(ptr);
|
||||
|
||||
// I wish we could keep this assert, but rdb_load creates objects in one thread and
|
||||
// uses them in another.
|
||||
// assert(zmalloc_used_memory_tl >= (ssize_t)usable);
|
||||
zmalloc_used_memory_tl -= usable;
|
||||
|
||||
|
@ -49,34 +51,34 @@ void* zrealloc(void* ptr, size_t size) {
|
|||
}
|
||||
|
||||
void* zcalloc(size_t size) {
|
||||
// mi_good_size(size) is not working. try for example, size=690557.
|
||||
size_t usable = mi_good_size(size);
|
||||
|
||||
void* res = mi_heap_calloc(zmalloc_heap, 1, size);
|
||||
size_t usable = mi_usable_size(res);
|
||||
zmalloc_used_memory_tl += usable;
|
||||
|
||||
return res;
|
||||
return mi_heap_calloc(zmalloc_heap, 1, size);
|
||||
}
|
||||
|
||||
void* zmalloc_usable(size_t size, size_t* usable) {
|
||||
size_t g = mi_good_size(size);
|
||||
*usable = g;
|
||||
|
||||
zmalloc_used_memory_tl += g;
|
||||
assert(zmalloc_heap);
|
||||
void* res = mi_heap_malloc(zmalloc_heap, size);
|
||||
size_t uss = mi_usable_size(res);
|
||||
*usable = uss;
|
||||
void* ptr = mi_heap_malloc(zmalloc_heap, g);
|
||||
assert(mi_usable_size(ptr) == g);
|
||||
|
||||
zmalloc_used_memory_tl += uss;
|
||||
|
||||
return res;
|
||||
return ptr;
|
||||
}
|
||||
|
||||
void* zrealloc_usable(void* ptr, size_t size, size_t* usable) {
|
||||
ssize_t prev = mi_usable_size(ptr);
|
||||
|
||||
void* res = mi_heap_realloc(zmalloc_heap, ptr, size);
|
||||
ssize_t uss = mi_usable_size(res);
|
||||
*usable = uss;
|
||||
zmalloc_used_memory_tl += (uss - prev);
|
||||
size_t g = mi_good_size(size);
|
||||
size_t prev = mi_usable_size(ptr);
|
||||
*usable = g;
|
||||
|
||||
zmalloc_used_memory_tl += (g - prev);
|
||||
void* res = mi_heap_realloc(zmalloc_heap, ptr, g);
|
||||
// does not hold, say when prev = 16 and size = 6. mi_malloc does not shrink in this case.
|
||||
// assert(mi_usable_size(res) == g);
|
||||
return res;
|
||||
}
|
||||
|
||||
|
@ -85,9 +87,8 @@ size_t znallocx(size_t size) {
|
|||
}
|
||||
|
||||
void zfree_size(void* ptr, size_t size) {
|
||||
ssize_t uss = mi_usable_size(ptr);
|
||||
zmalloc_used_memory_tl -= uss;
|
||||
mi_free_size(ptr, uss);
|
||||
zmalloc_used_memory_tl -= size;
|
||||
mi_free_size(ptr, size);
|
||||
}
|
||||
|
||||
void* ztrymalloc(size_t size) {
|
||||
|
|
|
@ -1,30 +1,23 @@
|
|||
add_executable(dragonfly dfly_main.cc)
|
||||
cxx_link(dragonfly base dragonfly_lib epoll_fiber_lib)
|
||||
|
||||
if (CMAKE_SYSTEM_PROCESSOR STREQUAL "x86_64" AND CMAKE_BUILD_TYPE STREQUAL "Release")
|
||||
# Add core2 only to this file, thus avoiding instructions in this object file that
|
||||
# can cause SIGILL.
|
||||
set_source_files_properties(dfly_main.cc PROPERTIES COMPILE_FLAGS -march=core2 COMPILE_DEFINITIONS SOURCE_PATH_FROM_BUILD_ENV=${CMAKE_SOURCE_DIR})
|
||||
endif()
|
||||
cxx_link(dragonfly base dragonfly_lib)
|
||||
|
||||
add_library(dfly_transaction db_slice.cc engine_shard_set.cc blocking_controller.cc common.cc
|
||||
io_mgr.cc journal/journal.cc journal/journal_slice.cc table.cc
|
||||
io_mgr.cc journal/journal.cc journal/journal_shard.cc table.cc
|
||||
tiered_storage.cc transaction.cc)
|
||||
cxx_link(dfly_transaction uring_fiber_lib dfly_core strings_lib)
|
||||
|
||||
add_library(dragonfly_lib channel_slice.cc command_registry.cc
|
||||
config_flags.cc conn_context.cc debugcmd.cc dflycmd.cc
|
||||
generic_family.cc hset_family.cc json_family.cc
|
||||
list_family.cc main_service.cc memory_cmd.cc rdb_load.cc rdb_save.cc replica.cc
|
||||
snapshot.cc script_mgr.cc server_family.cc malloc_stats.cc
|
||||
generic_family.cc hset_family.cc json_family.cc
|
||||
list_family.cc main_service.cc rdb_load.cc rdb_save.cc replica.cc
|
||||
snapshot.cc script_mgr.cc server_family.cc
|
||||
set_family.cc stream_family.cc string_family.cc
|
||||
zset_family.cc version.cc bitops_family.cc container_utils.cc)
|
||||
zset_family.cc version.cc)
|
||||
|
||||
cxx_link(dragonfly_lib dfly_transaction dfly_facade redis_lib strings_lib html_lib
|
||||
absl::random_random TRDP::jsoncons)
|
||||
cxx_link(dragonfly_lib dfly_transaction dfly_facade redis_lib strings_lib html_lib TRDP::jsoncons)
|
||||
|
||||
add_library(dfly_test_lib test_utils.cc)
|
||||
cxx_link(dfly_test_lib dragonfly_lib epoll_fiber_lib facade_test gtest_main_ext)
|
||||
cxx_link(dfly_test_lib dragonfly_lib facade_test gtest_main_ext)
|
||||
|
||||
cxx_test(dragonfly_test dfly_test_lib LABELS DFLY)
|
||||
cxx_test(generic_family_test dfly_test_lib LABELS DFLY)
|
||||
|
@ -33,7 +26,6 @@ cxx_test(list_family_test dfly_test_lib LABELS DFLY)
|
|||
cxx_test(set_family_test dfly_test_lib LABELS DFLY)
|
||||
cxx_test(stream_family_test dfly_test_lib LABELS DFLY)
|
||||
cxx_test(string_family_test dfly_test_lib LABELS DFLY)
|
||||
cxx_test(bitops_family_test dfly_test_lib LABELS DFLY)
|
||||
cxx_test(rdb_test dfly_test_lib DATA testdata/empty.rdb testdata/redis6_small.rdb
|
||||
testdata/redis6_stream.rdb LABELS DFLY)
|
||||
cxx_test(zset_family_test dfly_test_lib LABELS DFLY)
|
||||
|
@ -45,4 +37,4 @@ cxx_test(json_family_test dfly_test_lib LABELS DFLY)
|
|||
add_custom_target(check_dfly WORKING_DIRECTORY .. COMMAND ctest -L DFLY)
|
||||
add_dependencies(check_dfly dragonfly_test json_family_test list_family_test
|
||||
generic_family_test memcache_parser_test rdb_test
|
||||
redis_parser_test snapshot_test stream_family_test string_family_test bitops_family_test set_family_test zset_family_test)
|
||||
redis_parser_test snapshot_test stream_family_test string_family_test)
|
||||
|
|
|
@ -1,698 +0,0 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
|
||||
#include "server/bitops_family.h"
|
||||
|
||||
#include <bitset>
|
||||
|
||||
extern "C" {
|
||||
#include "redis/object.h"
|
||||
}
|
||||
|
||||
#include "base/logging.h"
|
||||
#include "server/command_registry.h"
|
||||
#include "server/common.h"
|
||||
#include "server/conn_context.h"
|
||||
#include "server/engine_shard_set.h"
|
||||
#include "server/error.h"
|
||||
#include "server/tiered_storage.h"
|
||||
#include "server/transaction.h"
|
||||
#include "util/varz.h"
|
||||
|
||||
namespace dfly {
|
||||
using namespace facade;
|
||||
|
||||
namespace {
|
||||
|
||||
using ShardStringResults = std::vector<OpResult<std::string>>;
|
||||
const int32_t OFFSET_FACTOR = 8; // number of bits in byte
|
||||
const char* OR_OP_NAME = "OR";
|
||||
const char* XOR_OP_NAME = "XOR";
|
||||
const char* AND_OP_NAME = "AND";
|
||||
const char* NOT_OP_NAME = "NOT";
|
||||
|
||||
using BitsStrVec = std::vector<std::string>;
|
||||
|
||||
// The following is the list of the functions that handle the
// commands for the bit operations
|
||||
void BitPos(CmdArgList args, ConnectionContext* cntx);
|
||||
void BitCount(CmdArgList args, ConnectionContext* cntx);
|
||||
void BitField(CmdArgList args, ConnectionContext* cntx);
|
||||
void BitFieldRo(CmdArgList args, ConnectionContext* cntx);
|
||||
void BitOp(CmdArgList args, ConnectionContext* cntx);
|
||||
void GetBit(CmdArgList args, ConnectionContext* cntx);
|
||||
void SetBit(CmdArgList args, ConnectionContext* cntx);
|
||||
|
||||
OpResult<std::string> ReadValue(const DbContext& context, std::string_view key, EngineShard* shard);
|
||||
OpResult<bool> ReadValueBitsetAt(const OpArgs& op_args, std::string_view key, uint32_t offset);
|
||||
OpResult<std::size_t> CountBitsForValue(const OpArgs& op_args, std::string_view key, int64_t start,
|
||||
int64_t end, bool bit_value);
|
||||
std::string GetString(const PrimeValue& pv, EngineShard* shard);
|
||||
bool SetBitValue(uint32_t offset, bool bit_value, std::string* entry);
|
||||
std::size_t CountBitSetByByteIndices(std::string_view at, std::size_t start, std::size_t end);
|
||||
std::size_t CountBitSet(std::string_view str, int64_t start, int64_t end, bool bits);
|
||||
std::size_t CountBitSetByBitIndices(std::string_view at, std::size_t start, std::size_t end);
|
||||
OpResult<std::string> RunBitOpOnShard(std::string_view op, const OpArgs& op_args, ArgSlice keys);
|
||||
std::string RunBitOperationOnValues(std::string_view op, const BitsStrVec& values);
|
||||
|
||||
// ------------------------------------------------------------------------- //
|
||||
|
||||
// This function can be used for any case where we allow out-of-bound
// access, where the default in that case would be 0 - such as BITOP.
|
||||
uint8_t GetByteAt(std::string_view s, std::size_t at) {
|
||||
return at >= s.size() ? 0 : s[at];
|
||||
}
|
||||
|
||||
// For XOR, OR, AND operations on a collection of bytes
|
||||
template <typename BitOp, typename SkipOp>
|
||||
std::string BitOpString(BitOp operation_f, SkipOp skip_f, const BitsStrVec& values,
|
||||
std::string&& new_value) {
|
||||
// at this point, values are not empty
|
||||
std::size_t max_size = new_value.size();
|
||||
|
||||
if (values.size() > 1) {
|
||||
for (std::size_t i = 0; i < max_size; i++) {
|
||||
std::uint8_t new_entry = operation_f(GetByteAt(values[0], i), GetByteAt(values[1], i));
|
||||
for (std::size_t j = 2; j < values.size(); ++j) {
|
||||
new_entry = operation_f(new_entry, GetByteAt(values[j], i));
|
||||
if (skip_f(new_entry)) {
|
||||
break;
|
||||
}
|
||||
}
|
||||
new_value[i] = new_entry;
|
||||
}
|
||||
return new_value;
|
||||
} else {
|
||||
return values[0];
|
||||
}
|
||||
}
|
||||
|
||||
// Helper functions to support the operations,
// so we would not need to check which
// operation to run in the loop (unlike
// https://github.com/redis/redis/blob/c2b0c13d5c0fab49131f6f5e844f80bfa43f6219/src/bitops.c#L607)
|
||||
constexpr bool SkipAnd(uint8_t byte) {
|
||||
return byte == 0x0;
|
||||
}
|
||||
|
||||
constexpr bool SkipOr(uint8_t byte) {
|
||||
return byte == 0xff;
|
||||
}
|
||||
|
||||
constexpr bool SkipXor(uint8_t) {
|
||||
return false;
|
||||
}
|
||||
|
||||
constexpr uint8_t AndOp(uint8_t left, uint8_t right) {
|
||||
return left & right;
|
||||
}
|
||||
|
||||
constexpr uint8_t OrOp(uint8_t left, uint8_t right) {
|
||||
return left | right;
|
||||
}
|
||||
|
||||
constexpr uint8_t XorOp(uint8_t left, uint8_t right) {
|
||||
return left ^ right;
|
||||
}
|
||||
|
||||
std::string BitOpNotString(std::string from) {
|
||||
std::transform(from.begin(), from.end(), from.begin(), [](auto c) { return ~c; });
|
||||
return from;
|
||||
}
|
||||
|
||||
// Bits manipulation functions
|
||||
constexpr int32_t GetBitIndex(uint32_t offset) noexcept {
|
||||
return offset % OFFSET_FACTOR;
|
||||
}
|
||||
|
||||
constexpr int32_t GetNormalizedBitIndex(uint32_t offset) noexcept {
|
||||
return (OFFSET_FACTOR - 1) - GetBitIndex(offset);
|
||||
}
|
||||
|
||||
constexpr int32_t GetByteIndex(uint32_t offset) noexcept {
|
||||
return offset / OFFSET_FACTOR;
|
||||
}
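To make the offset mapping above concrete, here is a small sketch (illustrative only, not part of the original file) that mirrors the three index helpers with OFFSET_FACTOR = 8: a GETBIT/SETBIT offset of 10 lands in byte 1, and since the most significant bit of a byte is offset 0, the normalized bit index is 5.

#include <cstdio>

int main() {
  const int kOffsetFactor = 8;  // bits per byte, as in the file above
  int offset = 10;
  int byte_index = offset / kOffsetFactor;            // 1
  int bit_index = offset % kOffsetFactor;             // 2
  int normalized = (kOffsetFactor - 1) - bit_index;   // 5, MSB-first
  printf("byte=%d bit=%d normalized=%d\n", byte_index, bit_index, normalized);
  return 0;
}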
|
||||
|
||||
uint8_t GetByteValue(std::string_view str, uint32_t offset) {
|
||||
return static_cast<uint8_t>(str[GetByteIndex(offset)]);
|
||||
}
|
||||
|
||||
constexpr bool CheckBitStatus(uint8_t byte, uint32_t offset) {
|
||||
return byte & (0x1 << offset);
|
||||
}
|
||||
|
||||
constexpr std::uint8_t CountBitsRange(std::uint8_t byte, std::uint8_t from, uint8_t to) {
|
||||
int count = 0;
|
||||
for (int i = from; i < to; i++) {
|
||||
count += CheckBitStatus(byte, GetNormalizedBitIndex(i));
|
||||
}
|
||||
return count;
|
||||
}
|
||||
|
||||
// Count the number of bits that are on, on byte boundaries: i.e. start and end are the indices
// of byte locations inside str - CountBitSetByByteIndices
|
||||
std::size_t CountBitSetByByteIndices(std::string_view at, std::size_t start, std::size_t end) {
|
||||
if (start >= end) {
|
||||
return 0;
|
||||
}
|
||||
end = std::min(end, at.size()); // don't overflow
|
||||
std::uint32_t count =
|
||||
std::accumulate(std::next(at.begin(), start), std::next(at.begin(), end), 0,
|
||||
[](auto counter, uint8_t ch) { return counter + absl::popcount(ch); });
|
||||
return count;
|
||||
}
|
||||
|
||||
// Count the number of bits that are on, on bit boundaries: i.e. start and end are the indices
// of bit locations inside str
|
||||
std::size_t CountBitSetByBitIndices(std::string_view at, std::size_t start, std::size_t end) {
|
||||
auto first_byte_index = GetByteIndex(start);
|
||||
auto last_byte_index = GetByteIndex(end);
|
||||
if (start % OFFSET_FACTOR == 0 && end % OFFSET_FACTOR == 0) {
|
||||
return CountBitSetByByteIndices(at, first_byte_index, last_byte_index);
|
||||
}
|
||||
const auto last_bit_first_byte =
|
||||
first_byte_index != last_byte_index ? OFFSET_FACTOR : GetBitIndex(end);
|
||||
const auto first_byte = GetByteValue(at, start);
|
||||
std::uint32_t count = CountBitsRange(first_byte, GetBitIndex(start), last_bit_first_byte);
|
||||
if (first_byte_index < last_byte_index) {
|
||||
first_byte_index++;
|
||||
const auto last_byte = GetByteValue(at, end);
|
||||
count += CountBitsRange(last_byte, 0, GetBitIndex(end));
|
||||
count += CountBitSetByByteIndices(at, first_byte_index, last_byte_index);
|
||||
}
|
||||
return count;
|
||||
}
|
||||
|
||||
// General purpose function to count the number of bits that are on.
|
||||
// The parameters for start, end and bits are defaulted to the start of the string,
|
||||
// end of the string and bits are false.
|
||||
// Note that when bits is false, it means that we are looking on byte boundaries.
|
||||
std::size_t CountBitSet(std::string_view str, int64_t start, int64_t end, bool bits) {
|
||||
const int32_t size = bits ? str.size() * OFFSET_FACTOR : str.size();
|
||||
|
||||
auto NormalizedOffset = [size](int32_t orig) {
|
||||
if (orig < 0) {
|
||||
orig = size + orig;
|
||||
}
|
||||
return orig;
|
||||
};
|
||||
|
||||
if (start > 0 && end > 0 && end < start) {
|
||||
return 0; // for illegal range with positive we just return 0
|
||||
}
|
||||
|
||||
if (start < 0 && end < 0 && start > end) {
|
||||
return 0; // for illegal range with negative we just return 0
|
||||
}
|
||||
|
||||
start = NormalizedOffset(start);
|
||||
if (end > 0 && end < start) {
|
||||
return 0;
|
||||
}
|
||||
end = NormalizedOffset(end);
|
||||
if (start > end) {
|
||||
std::swap(start, end); // we're going backward
|
||||
}
|
||||
if (end > size) {
|
||||
end = size; // don't overflow
|
||||
}
|
||||
++end;
|
||||
return bits ? CountBitSetByBitIndices(str, start, end)
|
||||
: CountBitSetByByteIndices(str, start, end);
|
||||
}
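For a concrete check of the byte-boundary counting described above, here is a short C++ sketch (illustrative only, not part of the original file): counting set bits of "foobar" byte by byte gives 26 in total, 4 for the byte range 0..0 and 6 for the range 1..1, matching the values the Redis documentation lists for BITCOUNT.

#include <bit>
#include <cstdio>
#include <string_view>

// Counts set bits of s in the byte range [start, end], inclusive.
static int bitcount(std::string_view s, size_t start, size_t end) {
  int count = 0;
  for (size_t i = start; i <= end && i < s.size(); ++i)
    count += std::popcount(static_cast<unsigned char>(s[i]));
  return count;
}

int main() {
  std::string_view s = "foobar";
  printf("%d %d %d\n", bitcount(s, 0, s.size() - 1), bitcount(s, 0, 0), bitcount(s, 1, 1));
  // prints: 26 4 6
  return 0;
}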
|
||||
|
||||
// return true if bit is on
|
||||
bool GetBitValue(const std::string& entry, uint32_t offset) {
|
||||
const auto byte_val{GetByteValue(entry, offset)};
|
||||
const auto index{GetNormalizedBitIndex(offset)};
|
||||
return CheckBitStatus(byte_val, index);
|
||||
}
|
||||
|
||||
bool GetBitValueSafe(const std::string& entry, uint32_t offset) {
|
||||
return ((entry.size() * OFFSET_FACTOR) > offset) ? GetBitValue(entry, offset) : false;
|
||||
}
|
||||
|
||||
constexpr uint8_t TurnBitOn(uint8_t on, uint32_t offset) {
|
||||
return on |= 1 << offset;
|
||||
}
|
||||
|
||||
constexpr uint8_t TunBitOff(uint8_t on, uint32_t offset) {
|
||||
return on &= ~(1 << offset);
|
||||
}
|
||||
|
||||
bool SetBitValue(uint32_t offset, bool bit_value, std::string* entry) {
|
||||
// we need to return the old value after setting the value for offset
|
||||
const auto old_value{GetBitValue(*entry, offset)}; // save this as the return value
|
||||
auto byte{GetByteValue(*entry, offset)};
|
||||
std::bitset<8> bits{byte};
|
||||
const auto bit_index{GetNormalizedBitIndex(offset)};
|
||||
byte = bit_value ? TurnBitOn(byte, bit_index) : TunBitOff(byte, bit_index);
|
||||
(*entry)[GetByteIndex(offset)] = byte;
|
||||
return old_value;
|
||||
}
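A minimal sketch of the same MSB-first SETBIT semantics (illustrative only, not part of the original file; the helper name is made up): setting offset 7 of a single zero byte flips the least significant bit of that byte, so the byte becomes 0x01 and the previous bit value 0 is returned.

#include <cstdint>
#include <cstdio>
#include <string>

// Sets bit `offset` (MSB-first, as Redis does) and returns the old bit value.
static bool set_bit(std::string* s, uint32_t offset, bool value) {
  uint32_t byte = offset / 8;
  uint32_t bit = 7 - (offset % 8);  // normalized, MSB-first
  unsigned char b = (*s)[byte];
  bool old = (b >> bit) & 1;
  b = value ? (b | (1u << bit)) : (b & ~(1u << bit));
  (*s)[byte] = static_cast<char>(b);
  return old;
}

int main() {
  std::string s(1, '\0');
  bool old = set_bit(&s, 7, true);
  printf("old=%d byte=0x%02x\n", old, static_cast<unsigned>(static_cast<unsigned char>(s[0])));
  // prints: old=0 byte=0x01
  return 0;
}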
|
||||
|
||||
// ------------------------------------------------------------------------- //
|
||||
|
||||
class ElementAccess {
|
||||
bool added_ = false;
|
||||
PrimeIterator element_iter_;
|
||||
std::string_view key_;
|
||||
DbContext context_;
|
||||
EngineShard* shard_ = nullptr;
|
||||
|
||||
public:
|
||||
ElementAccess(std::string_view key, const OpArgs& args) : key_{key}, context_{args.db_cntx} {
|
||||
}
|
||||
|
||||
OpStatus Find(EngineShard* shard);
|
||||
|
||||
bool IsNewEntry() const {
|
||||
CHECK_NOTNULL(shard_);
|
||||
return added_;
|
||||
}
|
||||
|
||||
constexpr DbIndex Index() const {
|
||||
return context_.db_index;
|
||||
}
|
||||
|
||||
std::string Value() const;
|
||||
|
||||
void Commit(std::string_view new_value) const;
|
||||
};
|
||||
|
||||
OpStatus ElementAccess::Find(EngineShard* shard) {
|
||||
try {
|
||||
std::pair<PrimeIterator, bool> add_res = shard->db_slice().AddOrFind(context_, key_);
|
||||
if (!add_res.second) {
|
||||
if (add_res.first->second.ObjType() != OBJ_STRING) {
|
||||
return OpStatus::WRONG_TYPE;
|
||||
}
|
||||
}
|
||||
element_iter_ = add_res.first;
|
||||
added_ = add_res.second;
|
||||
shard_ = shard;
|
||||
return OpStatus::OK;
|
||||
} catch (const std::bad_alloc&) {
|
||||
return OpStatus::OUT_OF_MEMORY;
|
||||
}
|
||||
}
|
||||
|
||||
std::string ElementAccess::Value() const {
|
||||
CHECK_NOTNULL(shard_);
|
||||
if (!added_) {  // Existing entry - return it
|
||||
return GetString(element_iter_->second, shard_);
|
||||
} else { // we only have reference to the new entry but no value
|
||||
return std::string{};
|
||||
}
|
||||
}
|
||||
|
||||
void ElementAccess::Commit(std::string_view new_value) const {
|
||||
if (shard_) {
|
||||
auto& db_slice = shard_->db_slice();
|
||||
db_slice.PreUpdate(Index(), element_iter_);
|
||||
element_iter_->second.SetString(new_value);
|
||||
db_slice.PostUpdate(Index(), element_iter_, key_, !added_);
|
||||
}
|
||||
}
|
||||
|
||||
// =============================================
|
||||
// Set a new value to a given bit
|
||||
|
||||
OpResult<bool> BitNewValue(const OpArgs& args, std::string_view key, uint32_t offset,
|
||||
bool bit_value) {
|
||||
EngineShard* shard = args.shard;
|
||||
ElementAccess element_access{key, args};
|
||||
auto& db_slice = shard->db_slice();
|
||||
DCHECK(db_slice.IsDbValid(element_access.Index()));
|
||||
bool old_value = false;
|
||||
|
||||
auto find_res = element_access.Find(shard);
|
||||
|
||||
if (find_res != OpStatus::OK) {
|
||||
return find_res;
|
||||
}
|
||||
|
||||
if (element_access.IsNewEntry()) {
|
||||
std::string new_entry(GetByteIndex(offset) + 1, 0);
|
||||
old_value = SetBitValue(offset, bit_value, &new_entry);
|
||||
element_access.Commit(new_entry);
|
||||
} else {
|
||||
bool reset = false;
|
||||
std::string existing_entry{element_access.Value()};
|
||||
if ((existing_entry.size() * OFFSET_FACTOR) <= offset) {
|
||||
existing_entry.resize(GetByteIndex(offset) + 1, 0);
|
||||
reset = true;
|
||||
}
|
||||
old_value = SetBitValue(offset, bit_value, &existing_entry);
|
||||
if (reset || old_value != bit_value) { // we made a "real" change to the entry, save it
|
||||
element_access.Commit(existing_entry);
|
||||
}
|
||||
}
|
||||
return old_value;
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------
|
||||
|
||||
std::string RunBitOperationOnValues(std::string_view op, const BitsStrVec& values) {
|
||||
// This function accepts an operation (either OR, XOR, AND or NOT) and runs the bit operation
// on all the values we got from the database. Note that in case one of the values
// is shorter than the others, the missing bytes are treated as 0 and the operation continues
// until we have run over the longest value. The function returns the resulting new value.
|
||||
std::size_t max_len = 0;
|
||||
std::size_t max_len_index = 0;
|
||||
|
||||
const auto BitOperation = [&]() {
|
||||
if (op == OR_OP_NAME) {
|
||||
std::string default_str{values[max_len_index]};
|
||||
return BitOpString(OrOp, SkipOr, std::move(values), std::move(default_str));
|
||||
} else if (op == XOR_OP_NAME) {
|
||||
return BitOpString(XorOp, SkipXor, std::move(values), std::string(max_len, 0));
|
||||
} else if (op == AND_OP_NAME) {
|
||||
return BitOpString(AndOp, SkipAnd, std::move(values), std::string(max_len, 0));
|
||||
} else if (op == NOT_OP_NAME) {
|
||||
return BitOpNotString(values[0]);
|
||||
} else {
|
||||
LOG(FATAL) << "Operation not supported '" << op << "'";
|
||||
return std::string{}; // otherwise we will have warning of not returning value
|
||||
}
|
||||
};
|
||||
|
||||
if (values.empty()) { // this is ok in case we don't have the src keys
|
||||
return std::string{};
|
||||
}
|
||||
// The new result is the max length input
|
||||
max_len = values[0].size();
|
||||
for (std::size_t i = 1; i < values.size(); ++i) {
|
||||
if (values[i].size() > max_len) {
|
||||
max_len = values[i].size();
|
||||
max_len_index = i;
|
||||
}
|
||||
}
|
||||
return BitOperation();
|
||||
}
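As a quick illustration of the zero-padding rule described in the comment above (a sketch, not part of the original file; the helper name is made up): OR-ing a 1-byte value with a 3-byte value produces a 3-byte result, with the missing bytes of the shorter input treated as 0x00.

#include <algorithm>
#include <cstdio>
#include <string>
#include <vector>

// OR together values of different lengths, padding the shorter ones with 0.
static std::string bitop_or(const std::vector<std::string>& values) {
  size_t max_len = 0;
  for (const auto& v : values) max_len = std::max(max_len, v.size());
  std::string out(max_len, '\0');
  for (const auto& v : values)
    for (size_t i = 0; i < v.size(); ++i) out[i] = static_cast<char>(out[i] | v[i]);
  return out;
}

int main() {
  std::string res = bitop_or({std::string("\x0f", 1), std::string("\xf0\x01\x02", 3)});
  printf("len=%zu first=0x%02x\n", res.size(), static_cast<unsigned>(static_cast<unsigned char>(res[0])));
  // prints: len=3 first=0xff
  return 0;
}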
|
||||
|
||||
OpResult<std::string> CombineResultOp(ShardStringResults result, std::string_view op) {
|
||||
// take valid result for each shard
|
||||
BitsStrVec values;
|
||||
for (auto&& res : result) {
|
||||
if (res) {
|
||||
auto v = res.value();
|
||||
values.emplace_back(std::move(v));
|
||||
} else {
|
||||
if (res.status() != OpStatus::KEY_NOTFOUND) {
|
||||
// something went wrong, just bail out
|
||||
return res;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// and combine them to single result
|
||||
return RunBitOperationOnValues(op, values);
|
||||
}
|
||||
|
||||
// For bitop not - we cannot accumulate
|
||||
OpResult<std::string> RunBitOpNot(const OpArgs& op_args, ArgSlice keys) {
|
||||
DCHECK(keys.size() == 1);
|
||||
|
||||
EngineShard* es = op_args.shard;
|
||||
// if we found the value, just return, if not found then skip, otherwise report an error
|
||||
auto key = keys.front();
|
||||
OpResult<PrimeIterator> find_res = es->db_slice().Find(op_args.db_cntx, key, OBJ_STRING);
|
||||
if (find_res) {
|
||||
return GetString(find_res.value()->second, es);
|
||||
} else {
|
||||
return find_res.status();
|
||||
}
|
||||
}
|
||||
|
||||
// Read only operation where we are running the bit operation on all the
|
||||
// values that belong to the same shard.
|
||||
OpResult<std::string> RunBitOpOnShard(std::string_view op, const OpArgs& op_args, ArgSlice keys) {
|
||||
DCHECK(!keys.empty());
|
||||
if (op == NOT_OP_NAME) {
|
||||
return RunBitOpNot(op_args, keys);
|
||||
}
|
||||
EngineShard* es = op_args.shard;
|
||||
BitsStrVec values;
|
||||
values.reserve(keys.size());
|
||||
|
||||
// collect all the values for this shard
|
||||
for (auto& key : keys) {
|
||||
OpResult<PrimeIterator> find_res = es->db_slice().Find(op_args.db_cntx, key, OBJ_STRING);
|
||||
if (find_res) {
|
||||
values.emplace_back(std::move(GetString(find_res.value()->second, es)));
|
||||
} else {
|
||||
if (find_res.status() == OpStatus::KEY_NOTFOUND) {
|
||||
continue; // this is allowed, just return empty string per Redis
|
||||
} else {
|
||||
return find_res.status();
|
||||
}
|
||||
}
|
||||
}
|
||||
// Run the operation on all the values that we found
|
||||
std::string op_result = RunBitOperationOnValues(op, values);
|
||||
return op_result;
|
||||
}
|
||||
|
||||
template <typename T> void HandleOpValueResult(const OpResult<T>& result, ConnectionContext* cntx) {
|
||||
static_assert(std::is_integral<T>::value,
|
||||
"we are only handling types that are integral types in the return types from "
|
||||
"here");
|
||||
if (result) {
|
||||
(*cntx)->SendLong(result.value());
|
||||
} else {
|
||||
switch (result.status()) {
|
||||
case OpStatus::WRONG_TYPE:
|
||||
(*cntx)->SendError(kWrongTypeErr);
|
||||
break;
|
||||
case OpStatus::OUT_OF_MEMORY:
|
||||
(*cntx)->SendError(kOutOfMemory);
|
||||
break;
|
||||
default:
|
||||
(*cntx)->SendLong(0); // in case we don't have the value we should just send 0
|
||||
break;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
OpStatus NoOpCb(Transaction* t, EngineShard* shard) {
|
||||
return OpStatus::OK;
|
||||
}
|
||||
|
||||
// ------------------------------------------------------------------------- //
|
||||
// Impl for the command functions
|
||||
void BitPos(CmdArgList args, ConnectionContext* cntx) {
|
||||
(*cntx)->SendLong(0);
|
||||
}
|
||||
|
||||
void BitCount(CmdArgList args, ConnectionContext* cntx) {
|
||||
// Support for the command BITCOUNT
|
||||
// See details at https://redis.io/commands/bitcount/
|
||||
// Please note that if the key doesn't exist, it would return 0
|
||||
|
||||
if (args.size() == 3 || args.size() > 5) {
|
||||
return (*cntx)->SendError(kSyntaxErr);
|
||||
}
|
||||
// return (*cntx)->SendLong(0);
|
||||
std::string_view key = ArgS(args, 1);
|
||||
bool as_bit = false;
|
||||
int64_t start = 0;
|
||||
int64_t end = std::numeric_limits<int64_t>::max();
|
||||
if (args.size() >= 4) {
|
||||
if (absl::SimpleAtoi(ArgS(args, 2), &start) == 0 ||
|
||||
absl::SimpleAtoi(ArgS(args, 3), &end) == 0) {
|
||||
return (*cntx)->SendError(kInvalidIntErr);
|
||||
}
|
||||
if (args.size() == 5) {
|
||||
ToUpper(&args[4]);
|
||||
as_bit = ArgS(args, 4) == "BIT";
|
||||
}
|
||||
}
|
||||
auto cb = [&](Transaction* t, EngineShard* shard) {
|
||||
return CountBitsForValue(t->GetOpArgs(shard), key, start, end, as_bit);
|
||||
};
|
||||
Transaction* trans = cntx->transaction;
|
||||
OpResult<std::size_t> res = trans->ScheduleSingleHopT(std::move(cb));
|
||||
HandleOpValueResult(res, cntx);
|
||||
}
|
||||
|
||||
void BitField(CmdArgList args, ConnectionContext* cntx) {
|
||||
(*cntx)->SendLong(0);
|
||||
}
|
||||
|
||||
void BitFieldRo(CmdArgList args, ConnectionContext* cntx) {
|
||||
(*cntx)->SendLong(0);
|
||||
}
|
||||
|
||||
void BitOp(CmdArgList args, ConnectionContext* cntx) {
|
||||
static const std::array<std::string_view, 4> BITOP_OP_NAMES{OR_OP_NAME, XOR_OP_NAME, AND_OP_NAME,
|
||||
NOT_OP_NAME};
|
||||
ToUpper(&args[1]);
|
||||
std::string_view op = ArgS(args, 1);
|
||||
std::string_view dest_key = ArgS(args, 2);
|
||||
bool illegal = std::none_of(BITOP_OP_NAMES.begin(), BITOP_OP_NAMES.end(),
|
||||
[&op](auto val) { return op == val; });
|
||||
|
||||
if (illegal || (op == NOT_OP_NAME && args.size() > 4)) {
|
||||
return (*cntx)->SendError(kSyntaxErr); // too many arguments
|
||||
}
|
||||
|
||||
// Multi shard access - read only
|
||||
ShardStringResults result_set(shard_set->size(), OpStatus::KEY_NOTFOUND);
|
||||
ShardId dest_shard = Shard(dest_key, result_set.size());
|
||||
|
||||
auto shard_bitop = [&](Transaction* t, EngineShard* shard) {
|
||||
ArgSlice largs = t->ShardArgsInShard(shard->shard_id());
|
||||
DCHECK(!largs.empty());
|
||||
|
||||
if (shard->shard_id() == dest_shard) {
|
||||
CHECK_EQ(largs.front(), dest_key);
|
||||
largs.remove_prefix(1);
|
||||
if (largs.empty()) { // no more keys to check
|
||||
return OpStatus::OK;
|
||||
}
|
||||
}
|
||||
OpArgs op_args = t->GetOpArgs(shard);
|
||||
result_set[shard->shard_id()] = RunBitOpOnShard(op, op_args, largs);
|
||||
return OpStatus::OK;
|
||||
};
|
||||
|
||||
cntx->transaction->Schedule();
|
||||
cntx->transaction->Execute(std::move(shard_bitop), false); // we still have more work to do
|
||||
// Collect the result from each shard
|
||||
const auto joined_results = CombineResultOp(result_set, op);
|
||||
// Second phase - save to target key if successful
|
||||
if (!joined_results) {
|
||||
cntx->transaction->Execute(NoOpCb, true);
|
||||
(*cntx)->SendError(joined_results.status());
|
||||
return;
|
||||
} else {
|
||||
auto op_result = joined_results.value();
|
||||
auto store_cb = [&](Transaction* t, EngineShard* shard) {
|
||||
if (shard->shard_id() == dest_shard) {
|
||||
ElementAccess operation{dest_key, t->GetOpArgs(shard)};
|
||||
auto find_res = operation.Find(shard);
|
||||
|
||||
if (find_res == OpStatus::OK) {
|
||||
operation.Commit(op_result);
|
||||
}
|
||||
}
|
||||
return OpStatus::OK;
|
||||
};
|
||||
|
||||
cntx->transaction->Execute(std::move(store_cb), true);
|
||||
(*cntx)->SendLong(op_result.size());
|
||||
}
|
||||
}
|
||||
|
||||
void GetBit(CmdArgList args, ConnectionContext* cntx) {
|
||||
// Support for the command "GETBIT key offset"
|
||||
// see https://redis.io/commands/getbit/
|
||||
|
||||
uint32_t offset{0};
|
||||
std::string_view key = ArgS(args, 1);
|
||||
|
||||
if (!absl::SimpleAtoi(ArgS(args, 2), &offset)) {
|
||||
return (*cntx)->SendError(kInvalidIntErr);
|
||||
}
|
||||
auto cb = [&](Transaction* t, EngineShard* shard) {
|
||||
return ReadValueBitsetAt(t->GetOpArgs(shard), key, offset);
|
||||
};
|
||||
Transaction* trans = cntx->transaction;
|
||||
OpResult<bool> res = trans->ScheduleSingleHopT(std::move(cb));
|
||||
HandleOpValueResult(res, cntx);
|
||||
}
|
||||
|
||||
void SetBit(CmdArgList args, ConnectionContext* cntx) {
|
||||
// Support for the command "SETBIT key offset new_value"
|
||||
// see https://redis.io/commands/setbit/
|
||||
|
||||
uint32_t offset{0};
|
||||
int32_t value{0};
|
||||
std::string_view key = ArgS(args, 1);
|
||||
|
||||
if (!absl::SimpleAtoi(ArgS(args, 2), &offset) || !absl::SimpleAtoi(ArgS(args, 3), &value)) {
|
||||
return (*cntx)->SendError(kInvalidIntErr);
|
||||
}
|
||||
|
||||
auto cb = [&](Transaction* t, EngineShard* shard) {
|
||||
return BitNewValue(t->GetOpArgs(shard), key, offset, value != 0);
|
||||
};
|
||||
|
||||
Transaction* trans = cntx->transaction;
|
||||
OpResult<bool> res = trans->ScheduleSingleHopT(std::move(cb));
|
||||
HandleOpValueResult(res, cntx);
|
||||
}
|
||||
|
||||
// ------------------------------------------------------------------------- //
|
||||
// This are the "callbacks" that we're using from above
|
||||
std::string GetString(const PrimeValue& pv, EngineShard* shard) {
|
||||
std::string res;
|
||||
if (pv.IsExternal()) {
|
||||
auto* tiered = shard->tiered_storage();
|
||||
auto [offset, size] = pv.GetExternalPtr();
|
||||
res.resize(size);
|
||||
|
||||
std::error_code ec = tiered->Read(offset, size, res.data());
|
||||
CHECK(!ec) << "TBD: " << ec;
|
||||
} else {
|
||||
pv.GetString(&res);
|
||||
}
|
||||
|
||||
return res;
|
||||
}
|
||||
|
||||
OpResult<bool> ReadValueBitsetAt(const OpArgs& op_args, std::string_view key, uint32_t offset) {
|
||||
OpResult<std::string> result = ReadValue(op_args.db_cntx, key, op_args.shard);
|
||||
if (result) {
|
||||
return GetBitValueSafe(result.value(), offset);
|
||||
} else {
|
||||
return result.status();
|
||||
}
|
||||
}
|
||||
|
||||
OpResult<std::string> ReadValue(const DbContext& context, std::string_view key,
|
||||
EngineShard* shard) {
|
||||
OpResult<PrimeIterator> it_res = shard->db_slice().Find(context, key, OBJ_STRING);
|
||||
if (!it_res.ok()) {
|
||||
return it_res.status();
|
||||
}
|
||||
|
||||
const PrimeValue& pv = it_res.value()->second;
|
||||
|
||||
return GetString(pv, shard);
|
||||
}
|
||||
|
||||
OpResult<std::size_t> CountBitsForValue(const OpArgs& op_args, std::string_view key, int64_t start,
|
||||
int64_t end, bool bit_value) {
|
||||
OpResult<std::string> result = ReadValue(op_args.db_cntx, key, op_args.shard);
|
||||
|
||||
if (result) { // if this is not found, just return 0 - per Redis
|
||||
if (result.value().empty()) {
|
||||
return 0;
|
||||
}
|
||||
if (end == std::numeric_limits<int64_t>::max()) {
|
||||
end = result.value().size();
|
||||
}
|
||||
return CountBitSet(result.value(), start, end, bit_value);
|
||||
} else {
|
||||
return result.status();
|
||||
}
|
||||
}
|
||||
|
||||
} // namespace
|
||||
|
||||
void BitOpsFamily::Register(CommandRegistry* registry) {
|
||||
using CI = CommandId;
|
||||
|
||||
*registry << CI{"BITPOS", CO::CommandOpt::READONLY, -3, 1, 1, 1}.SetHandler(&BitPos)
|
||||
<< CI{"BITCOUNT", CO::READONLY, -2, 1, 1, 1}.SetHandler(&BitCount)
|
||||
<< CI{"BITFIELD", CO::WRITE, -3, 1, 1, 1}.SetHandler(&BitField)
|
||||
<< CI{"BITFIELD_RO", CO::READONLY, -5, 1, 1, 1}.SetHandler(&BitFieldRo)
|
||||
<< CI{"BITOP", CO::WRITE, -4, 2, -1, 1}.SetHandler(&BitOp)
|
||||
<< CI{"GETBIT", CO::READONLY | CO::FAST | CO::FAST, 3, 1, 1, 1}.SetHandler(&GetBit)
|
||||
<< CI{"SETBIT", CO::WRITE, 4, 1, 1, 1}.SetHandler(&SetBit);
|
||||
}
|
||||
|
||||
} // namespace dfly
|
|
@ -1,30 +0,0 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
|
||||
#pragma once
|
||||
|
||||
/// @brief This implements the bit-related string commands: GETBIT, SETBIT, BITCOUNT, BITOP.
|
||||
/// Note: The name of this file derives from the same file name in the Redis source code.
|
||||
/// For more details about these commands see:
|
||||
/// BITPOS: https://redis.io/commands/bitpos/
|
||||
/// BITCOUNT: https://redis.io/commands/bitcount/
|
||||
/// BITFIELD: https://redis.io/commands/bitfield/
|
||||
/// BITFIELD_RO: https://redis.io/commands/bitfield_ro/
|
||||
/// BITOP: https://redis.io/commands/bitop/
|
||||
/// GETBIT: https://redis.io/commands/getbit/
|
||||
/// SETBIT: https://redis.io/commands/setbit/
|
||||
namespace dfly {
|
||||
class CommandRegistry;
|
||||
|
||||
class BitOpsFamily {
|
||||
public:
|
||||
/// @brief Register the function that would be called to operate on user commands.
|
||||
/// @param registry The location to which the handling functions would be registered.
|
||||
///
|
||||
/// We are assuming that this has a valid registry to work on (i.e. this must not point to
|
||||
/// null!).
|
||||
static void Register(CommandRegistry* registry);
|
||||
};
|
||||
|
||||
} // end of namespace dfly
|
|
@ -1,423 +0,0 @@
|
|||
// Copyright 2022, DragonflyDB authors. All rights reserved.
|
||||
// See LICENSE for licensing terms.
|
||||
//
|
||||
|
||||
#include "server/bitops_family.h"
|
||||
|
||||
#include <bitset>
|
||||
#include <iomanip>
|
||||
#include <iostream>
|
||||
#include <string>
|
||||
#include <string_view>
|
||||
|
||||
#include "base/gtest.h"
|
||||
#include "base/logging.h"
|
||||
#include "facade/facade_test.h"
|
||||
#include "server/command_registry.h"
|
||||
#include "server/conn_context.h"
|
||||
#include "server/engine_shard_set.h"
|
||||
#include "server/error.h"
|
||||
#include "server/test_utils.h"
|
||||
#include "server/transaction.h"
|
||||
|
||||
using namespace testing;
|
||||
using namespace std;
|
||||
using namespace util;
|
||||
using absl::StrCat;
|
||||
|
||||
namespace dfly {
|
||||
|
||||
class Bytes {
|
||||
using char_t = std::uint8_t;
|
||||
using string_type = std::basic_string<char_t>;
|
||||
|
||||
public:
|
||||
enum State { GOOD, ERROR, NIL };
|
||||
|
||||
Bytes(std::initializer_list<std::uint8_t> bytes) : data_(bytes.size(), 0) {
|
||||
// note - we want this to behave like it would in Redis, where the most significant bit is to
|
||||
// the "left"
|
||||
std::copy(rbegin(bytes), rend(bytes), data_.begin());
|
||||
}
|
||||
|
||||
explicit Bytes(unsigned long long n) : data_(sizeof(n), 0) {
|
||||
FromNumber(n);
|
||||
}
|
||||
|
||||
static Bytes From(unsigned long long x) {
|
||||
return Bytes(x);
|
||||
}
|
||||
|
||||
explicit Bytes(State state) : state_{state} {
|
||||
}
|
||||
|
||||
Bytes(const char_t* ch, std::size_t len) : data_(ch, len) {
|
||||
}
|
||||
|
||||
Bytes(const char* ch, std::size_t len) : Bytes(reinterpret_cast<const char_t*>(ch), len) {
|
||||
}
|
||||
|
||||
explicit Bytes(std::string_view from) : Bytes(from.data(), from.size()) {
|
||||
}
|
||||
|
||||
static Bytes From(RespExpr&& r);
|
||||
|
||||
std::size_t Size() const {
|
||||
return data_.size();
|
||||
}
|
||||
|
||||
operator std::string_view() const {
|
||||
return std::string_view(reinterpret_cast<const char*>(data_.data()), Size());
|
||||
}
|
||||
|
||||
std::ostream& Print(std::ostream& os) const;
|
||||
|
||||
std::ostream& PrintHex(std::ostream& os) const;
|
||||
|
||||
private:
|
||||
template <typename T> void FromNumber(T num) {
|
||||
// note - we want this to behave like it would in Redis, where the most significant bit is to
|
||||
// the "left"
|
||||
std::size_t i = 0;
|
||||
for (const char_t* s = reinterpret_cast<const char_t*>(&num); i < sizeof(T); s++, i++) {
|
||||
data_[i] = *s;
|
||||
}
|
||||
}
|
||||
|
||||
string_type data_;
|
||||
State state_ = GOOD;
|
||||
};
|
||||
|
||||
Bytes Bytes::From(RespExpr&& r) {
|
||||
if (r.type == RespExpr::STRING) {
|
||||
return Bytes(ToSV(r.GetBuf()));
|
||||
} else {
|
||||
if (r.type == RespExpr::NIL || r.type == RespExpr::NIL_ARRAY) {
|
||||
return Bytes{Bytes::NIL};
|
||||
} else {
|
||||
return Bytes(Bytes::ERROR);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
std::ostream& Bytes::Print(std::ostream& os) const {
|
||||
if (state_ == GOOD) {
|
||||
for (auto c : data_) {
|
||||
std::bitset<8> b{c};
|
||||
os << b << ":";
|
||||
}
|
||||
} else {
|
||||
if (state_ == NIL) {
|
||||
os << "nil";
|
||||
} else {
|
||||
os << "error";
|
||||
}
|
||||
}
|
||||
return os;
|
||||
}
|
||||
|
||||
std::ostream& Bytes::PrintHex(std::ostream& os) const {
|
||||
if (state_ == GOOD) {
|
||||
for (auto c : data_) {
|
||||
os << std::hex << std::setfill('0') << std::setw(2) << (std::uint16_t)c << ":";
|
||||
}
|
||||
} else {
|
||||
if (state_ == NIL) {
|
||||
os << "nil";
|
||||
} else {
|
||||
os << "error";
|
||||
}
|
||||
}
|
||||
return os;
|
||||
}
|
||||
|
||||
inline bool operator==(const Bytes& left, const Bytes& right) {
|
||||
return static_cast<const std::string_view&>(left) == static_cast<const std::string_view&>(right);
|
||||
}
|
||||
|
||||
inline bool operator!=(const Bytes& left, const Bytes& right) {
|
||||
return !(left == right);
|
||||
}
|
||||
|
||||
inline Bytes operator"" _b(unsigned long long x) {
|
||||
return Bytes::From(x);
|
||||
}
|
||||
|
||||
inline Bytes operator"" _b(const char* x, std::size_t s) {
|
||||
return Bytes{x, s};
|
||||
}
|
||||
|
||||
inline Bytes operator"" _b(const char* x) {
|
||||
return Bytes{x, std::strlen(x)};
|
||||
}
|
||||
|
||||
inline std::ostream& operator<<(std::ostream& os, const Bytes& bs) {
|
||||
return bs.PrintHex(os);
|
||||
}
|
||||
|
||||
class BitOpsFamilyTest : public BaseFamilyTest {
|
||||
protected:
|
||||
// only for bitop XOR, OR, AND tests
|
||||
void BitOpSetKeys();
|
||||
};
|
||||
|
||||
// for the bitop tests we need to test with multiple keys as the issue
|
||||
// is that we need to make sure that accessing multiple shards creates
|
||||
// the correct result
|
||||
// Since this is bit operations, we are using the bytes data type
|
||||
// that makes the verification more ergonomic.
|
||||
const std::pair<std::string_view, Bytes> KEY_VALUES_BIT_OP[] = {
|
||||
{"first_key", 0xFFAACC01_b},
|
||||
{"key_second", {0x1, 0xBB}},
|
||||
{"_this_is_the_third_key", {0x01, 0x05, 0x15, 0x20, 0xAA, 0xCC}},
|
||||
{"the_last_key_we_have", 0xAACC_b}};
|
||||
|
||||
// For the bitop XOR OR and AND we are setting these keys/value pairs
|
||||
void BitOpsFamilyTest::BitOpSetKeys() {
|
||||
auto resp = Run({"set", KEY_VALUES_BIT_OP[0].first, KEY_VALUES_BIT_OP[0].second});
|
||||
EXPECT_EQ(resp, "OK");
|
||||
resp = Run({"set", KEY_VALUES_BIT_OP[1].first, KEY_VALUES_BIT_OP[1].second});
|
||||
EXPECT_EQ(resp, "OK");
|
||||
resp = Run({"set", KEY_VALUES_BIT_OP[2].first, KEY_VALUES_BIT_OP[2].second});
|
||||
EXPECT_EQ(resp, "OK");
|
||||
resp = Run({"set", KEY_VALUES_BIT_OP[3].first, KEY_VALUES_BIT_OP[3].second});
|
||||
EXPECT_EQ(resp, "OK");
|
||||
}
|
||||
|
||||
const long EXPECTED_VALUE_SETBIT[] = {0, 1, 1, 0, 0, 0,
|
||||
0, 1, 0, 1, 1, 0}; // taken from running this on redis
|
||||
const int32_t ITERATIONS = sizeof(EXPECTED_VALUE_SETBIT) / sizeof(EXPECTED_VALUE_SETBIT[0]);
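The expected values above can also be derived directly: they are simply the first 12 bits of "abc" read MSB-first ('a' = 0x61 = 01100001, 'b' = 0x62 = 01100010). A small sketch that reproduces the array (illustrative only, not part of the test file):

#include <cstdio>
#include <string_view>

int main() {
  std::string_view s = "abc";
  for (int i = 0; i < 12; ++i) {
    unsigned char byte = s[i / 8];
    int bit = (byte >> (7 - i % 8)) & 1;  // MSB-first, like GETBIT
    printf("%d ", bit);                   // 0 1 1 0 0 0 0 1 0 1 1 0
  }
  printf("\n");
  return 0;
}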
|
||||
|
||||
TEST_F(BitOpsFamilyTest, GetBit) {
|
||||
auto resp = Run({"set", "foo", "abc"});
|
||||
|
||||
EXPECT_EQ(resp, "OK");
|
||||
|
||||
for (int32_t i = 0; i < ITERATIONS; i++) {
|
||||
EXPECT_EQ(EXPECTED_VALUE_SETBIT[i], CheckedInt({"getbit", "foo", std::to_string(i)}));
|
||||
}
|
||||
|
||||
// make sure that when accessing a bit that is not in the range it works and we are
|
||||
// getting 0
|
||||
EXPECT_EQ(0, CheckedInt({"getbit", "foo", std::to_string(strlen("abc") + 5)}));
|
||||
}
|
||||
|
||||
TEST_F(BitOpsFamilyTest, SetBitExistingKey) {
|
||||
// this test would test when we have the value in place and
|
||||
// we are overriding an existing key
|
||||
// so there are no allocations of keys
|
||||
auto resp = Run({"set", "foo", "abc"});
|
||||
|
||||
EXPECT_EQ(resp, "OK");
|
||||
|
||||
// we are setting all to 1s first, we are expecting to get the old values
|
||||
for (int32_t i = 0; i < ITERATIONS; i++) {
|
||||
EXPECT_EQ(EXPECTED_VALUE_SETBIT[i], CheckedInt({"setbit", "foo", std::to_string(i), "1"}));
|
||||
}
|
||||
|
||||
for (int32_t i = 0; i < ITERATIONS; i++) {
|
||||
EXPECT_EQ(1, CheckedInt({"getbit", "foo", std::to_string(i)}));
|
||||
}
|
||||
}
|
||||
|
||||
TEST_F(BitOpsFamilyTest, SetBitMissingKey) {
|
||||
// This test would run without pre-allocated existing key
|
||||
// so we need to allocate the key as part of setting the values
|
||||
for (int32_t i = 0; i < ITERATIONS; i++) { // we are setting all to 1s first, we are expecting
|
||||
// get 0s since we didn't have this key before
|
||||
EXPECT_EQ(0, CheckedInt({"setbit", "foo", std::to_string(i), "1"}));
|
||||
}
|
||||
// now all that we set are at 1s
|
||||
for (int32_t i = 0; i < ITERATIONS; i++) {
|
||||
EXPECT_EQ(1, CheckedInt({"getbit", "foo", std::to_string(i)}));
|
||||
}
|
||||
}
|
||||
|
||||
const int32_t EXPECTED_VALUES_BYTES_BIT_COUNT[] = { // got this from redis 0 as start index
|
||||
4, 7, 11, 14, 17, 21, 21, 21, 21};
|
||||
|
||||
const int32_t BYTES_EXPECTED_VALUE_LEN =
|
||||
sizeof(EXPECTED_VALUES_BYTES_BIT_COUNT) / sizeof(EXPECTED_VALUES_BYTES_BIT_COUNT[0]);
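These expected values are just the cumulative popcounts of the prefixes of "farbar" ('f' has 4 set bits, 'a' 3, 'r' 4, 'b' 3, 'a' 3, 'r' 4), clamped once the end index runs past the string. A small sketch that reproduces them (illustrative only, not part of the test file):

#include <bit>
#include <cstdio>
#include <string_view>

int main() {
  std::string_view s = "farbar";
  for (int end = 0; end < 9; ++end) {  // bitcount foo 0 end, for end = 0..8
    int count = 0;
    for (int i = 0; i <= end && i < static_cast<int>(s.size()); ++i)
      count += std::popcount(static_cast<unsigned char>(s[i]));
    printf("%d ", count);  // 4 7 11 14 17 21 21 21 21
  }
  printf("\n");
  return 0;
}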
|
||||
|
||||
TEST_F(BitOpsFamilyTest, BitCountByte) {
|
||||
// This would run without the bit flag - meaning it counts on byte boundaries
|
||||
auto resp = Run({"set", "foo", "farbar"});
|
||||
EXPECT_EQ(resp, "OK");
|
||||
EXPECT_EQ(0, CheckedInt({"bitcount", "foo2"})); // on none existing key we are expecting 0
|
||||
|
||||
for (int32_t i = 0; i < BYTES_EXPECTED_VALUE_LEN; i++) {
|
||||
EXPECT_EQ(EXPECTED_VALUES_BYTES_BIT_COUNT[i],
|
||||
CheckedInt({"bitcount", "foo", "0", std::to_string(i)}));
|
||||
}
|
||||
EXPECT_EQ(21, CheckedInt({"bitcount", "foo"})); // the total number of bits in this value
|
||||
}
|
||||
|
||||
TEST_F(BitOpsFamilyTest, BitCountByteSubRange) {
|
||||
// This test tests using some sub-ranges of bit count on bytes
|
||||
auto resp = Run({"set", "foo", "farbar"});
|
||||
EXPECT_EQ(resp, "OK");
|
||||
EXPECT_EQ(3, CheckedInt({"bitcount", "foo", "1", "1"}));
|
||||
EXPECT_EQ(7, CheckedInt({"bitcount", "foo", "1", "2"}));
|
||||
EXPECT_EQ(4, CheckedInt({"bitcount", "foo", "2", "2"}));
|
||||
EXPECT_EQ(0, CheckedInt({"bitcount", "foo", "3", "2"})); // illegal range
|
||||
EXPECT_EQ(10, CheckedInt({"bitcount", "foo", "-3", "-1"}));
|
||||
EXPECT_EQ(13, CheckedInt({"bitcount", "foo", "-5", "-2"}));
|
||||
EXPECT_EQ(0, CheckedInt({"bitcount", "foo", "-1", "-2"})); // illegal range
|
||||
}
|
||||
|
||||
TEST_F(BitOpsFamilyTest, BitCountByteBitSubRange) {
|
||||
// This test tests using some sub-ranges of bit count on bits
|
||||
auto resp = Run({"set", "foo", "abcdef"});
|
||||
EXPECT_EQ(resp, "OK");
|
||||
resp = Run({"bitcount", "foo", "bar", "BIT"});
|
||||
ASSERT_THAT(resp, ErrArg("value is not an integer or out of range"));
|
||||
|
||||
EXPECT_EQ(1, CheckedInt({"bitcount", "foo", "1", "1", "BIT"}));
|
||||
EXPECT_EQ(2, CheckedInt({"bitcount", "foo", "1", "2", "BIT"}));
|
||||
EXPECT_EQ(1, CheckedInt({"bitcount", "foo", "2", "2", "BIT"}));
|
||||
EXPECT_EQ(0, CheckedInt({"bitcount", "foo", "3", "2", "bit"})); // illegal range
|
||||
EXPECT_EQ(2, CheckedInt({"bitcount", "foo", "-3", "-1", "bit"}));
|
||||
EXPECT_EQ(2, CheckedInt({"bitcount", "foo", "-5", "-2", "bit"}));
|
||||
EXPECT_EQ(4, CheckedInt({"bitcount", "foo", "1", "9", "bit"}));
|
||||
EXPECT_EQ(7, CheckedInt({"bitcount", "foo", "2", "19", "bit"}));
|
||||
EXPECT_EQ(0, CheckedInt({"bitcount", "foo", "-1", "-2", "bit"})); // illegal range
|
||||
}
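
// Illustrative note (plain ASCII, not from the original file): with the BIT flag the range
// is taken in bit offsets, where bit 0 is the most significant bit of the first byte.
// "abcdef" starts with 'a' = 0x61 = 01100001, so the range 1..2 covers the two set bits at
// offsets 1 and 2, matching the expected count of 2 above.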

// ------------------------- BITOP tests

const auto EXPECTED_LEN_BITOP =
    std::max(KEY_VALUES_BIT_OP[0].second.Size(), KEY_VALUES_BIT_OP[1].second.Size());
const auto EXPECTED_LEN_BITOP2 = std::max(EXPECTED_LEN_BITOP, KEY_VALUES_BIT_OP[2].second.Size());
const auto EXPECTED_LEN_BITOP3 = std::max(EXPECTED_LEN_BITOP2, KEY_VALUES_BIT_OP[3].second.Size());
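
// Illustrative note (documented Redis BITOP behavior, not stated in the original file):
// BITOP returns the length of the string stored in the destination key, which equals the
// length of the longest input operand; shorter operands are treated as if zero-padded.
// That is why the expected lengths above are running maxima over the KEY_VALUES_BIT_OP sizes.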

TEST_F(BitOpsFamilyTest, BitOpsAnd) {
  BitOpSetKeys();
  auto resp = Run({"bitop", "foo", "bar", "abc"});  // should fail - "foo" is not a legal operation
  ASSERT_THAT(resp, ErrArg("syntax error"));
  // Run with non-existing keys, should return 0.
  EXPECT_EQ(0, CheckedInt({"bitop", "and", "dest_key", "1", "2", "3"}));

  // bitop AND with a single key
  EXPECT_EQ(KEY_VALUES_BIT_OP[0].second.Size(),
            CheckedInt({"bitop", "and", "foo_out", KEY_VALUES_BIT_OP[0].first}));

  auto res = Bytes::From(Run({"get", "foo_out"}));
  EXPECT_EQ(res, KEY_VALUES_BIT_OP[0].second);

  // ANDing the first two values zeroes every bit not set in both; the result has
  // length == EXPECTED_LEN_BITOP and value == FOO_KEY_VALUE & BAR_KEY_VALUE.
  EXPECT_EQ(EXPECTED_LEN_BITOP, CheckedInt({"bitop", "and", "foo-out", KEY_VALUES_BIT_OP[0].first,
                                            KEY_VALUES_BIT_OP[1].first}));
  const auto EXPECTED_RESULT = Bytes((0xffaacc01 & 0x1BB));  // first AND second values
  res = Bytes::From(Run({"get", "foo-out"}));
  EXPECT_EQ(res, EXPECTED_RESULT);

  // test bitop AND with 3 keys
  EXPECT_EQ(EXPECTED_LEN_BITOP2,
            CheckedInt({"bitop", "and", "foo-out", KEY_VALUES_BIT_OP[0].first,
                        KEY_VALUES_BIT_OP[1].first, KEY_VALUES_BIT_OP[2].first}));
  const auto EXPECTED_RES2 = Bytes((0xffaacc01 & 0x1BB & 0x01051520AACC));
  res = Bytes::From(Run({"get", "foo-out"}));
  EXPECT_EQ(EXPECTED_RES2, res);

  // test bitop AND with 4 parameters
  const auto EXPECTED_RES3 = Bytes((0xffaacc01 & 0x1BB & 0x01051520AACC & 0xAACC));
  EXPECT_EQ(EXPECTED_LEN_BITOP3, CheckedInt({"bitop", "and", "foo-out", KEY_VALUES_BIT_OP[0].first,
                                             KEY_VALUES_BIT_OP[1].first, KEY_VALUES_BIT_OP[2].first,
                                             KEY_VALUES_BIT_OP[3].first}));
  res = Bytes::From(Run({"get", "foo-out"}));
  EXPECT_EQ(EXPECTED_RES3, res);
}

TEST_F(BitOpsFamilyTest, BitOpsOr) {
  BitOpSetKeys();

  EXPECT_EQ(0, CheckedInt({"bitop", "or", "dest_key", "1", "2", "3"}));

  // bitop OR with a single key
  EXPECT_EQ(KEY_VALUES_BIT_OP[0].second.Size(),
            CheckedInt({"bitop", "or", "foo_out", KEY_VALUES_BIT_OP[0].first}));

  auto res = Bytes::From(Run({"get", "foo_out"}));
  EXPECT_EQ(res, KEY_VALUES_BIT_OP[0].second);

  // bitop OR with 2 keys
  EXPECT_EQ(EXPECTED_LEN_BITOP, CheckedInt({"bitop", "or", "foo-out", KEY_VALUES_BIT_OP[0].first,
                                            KEY_VALUES_BIT_OP[1].first}));
  const auto EXPECTED_RESULT = Bytes((0xffaacc01 | 0x1BB));  // first OR second values
  res = Bytes::From(Run({"get", "foo-out"}));
  EXPECT_EQ(res, EXPECTED_RESULT);

  // bitop OR with 3 keys
  EXPECT_EQ(EXPECTED_LEN_BITOP2,
            CheckedInt({"bitop", "or", "foo-out", KEY_VALUES_BIT_OP[0].first,
                        KEY_VALUES_BIT_OP[1].first, KEY_VALUES_BIT_OP[2].first}));
  const auto EXPECTED_RES2 = Bytes((0xffaacc01 | 0x1BB | 0x01051520AACC));
  res = Bytes::From(Run({"get", "foo-out"}));
  EXPECT_EQ(EXPECTED_RES2, res);

  // bitop OR with 4 keys
  const auto EXPECTED_RES3 = Bytes((0xffaacc01 | 0x1BB | 0x01051520AACC | 0xAACC));
  EXPECT_EQ(EXPECTED_LEN_BITOP3, CheckedInt({"bitop", "or", "foo-out", KEY_VALUES_BIT_OP[0].first,
                                             KEY_VALUES_BIT_OP[1].first, KEY_VALUES_BIT_OP[2].first,
                                             KEY_VALUES_BIT_OP[3].first}));
  res = Bytes::From(Run({"get", "foo-out"}));
  EXPECT_EQ(EXPECTED_RES3, res);
}

TEST_F(BitOpsFamilyTest, BitOpsXor) {
  BitOpSetKeys();

  EXPECT_EQ(0, CheckedInt({"bitop", "or", "dest_key", "1", "2", "3"}));

  // bitop XOR on a single key
  EXPECT_EQ(KEY_VALUES_BIT_OP[0].second.Size(),
            CheckedInt({"bitop", "xor", "foo_out", KEY_VALUES_BIT_OP[0].first}));
  auto res = Bytes::From(Run({"get", "foo_out"}));
  EXPECT_EQ(res, KEY_VALUES_BIT_OP[0].second);

  // bitop XOR with two keys
  EXPECT_EQ(EXPECTED_LEN_BITOP, CheckedInt({"bitop", "xor", "foo-out", KEY_VALUES_BIT_OP[0].first,
                                            KEY_VALUES_BIT_OP[1].first}));
  const auto EXPECTED_RESULT = Bytes((0xffaacc01 ^ 0x1BB));  // first XOR second values
  res = Bytes::From(Run({"get", "foo-out"}));
  EXPECT_EQ(res, EXPECTED_RESULT);

  // bitop XOR with 3 keys
  EXPECT_EQ(EXPECTED_LEN_BITOP2,
            CheckedInt({"bitop", "xor", "foo-out", KEY_VALUES_BIT_OP[0].first,
                        KEY_VALUES_BIT_OP[1].first, KEY_VALUES_BIT_OP[2].first}));
  const auto EXPECTED_RES2 = Bytes((0xffaacc01 ^ 0x1BB ^ 0x01051520AACC));
  res = Bytes::From(Run({"get", "foo-out"}));
  EXPECT_EQ(EXPECTED_RES2, res);

  // bitop XOR with 4 keys
  const auto EXPECTED_RES3 = Bytes((0xffaacc01 ^ 0x1BB ^ 0x01051520AACC ^ 0xAACC));
  EXPECT_EQ(EXPECTED_LEN_BITOP3, CheckedInt({"bitop", "xor", "foo-out", KEY_VALUES_BIT_OP[0].first,
                                             KEY_VALUES_BIT_OP[1].first, KEY_VALUES_BIT_OP[2].first,
                                             KEY_VALUES_BIT_OP[3].first}));
  res = Bytes::From(Run({"get", "foo-out"}));
  EXPECT_EQ(EXPECTED_RES3, res);
}

TEST_F(BitOpsFamilyTest, BitOpsNot) {
  // Should fail - an illegal number of arguments for NOT.
  auto resp = Run({"bitop", "not", "bar", "abc", "efg"});
  ASSERT_THAT(resp, ErrArg("syntax error"));

  // Make sure that this works with a non-existing key as well.
  EXPECT_EQ(0, CheckedInt({"bitop", "NOT", "bit-op-not-none-existing-key-results",
                           "this-key-do-not-exists"}));
  EXPECT_EQ(Run({"get", "bit-op-not-none-existing-key-results"}), "");

  // test bitop NOT
  resp = Run({"set", KEY_VALUES_BIT_OP[0].first, KEY_VALUES_BIT_OP[0].second});
  EXPECT_EQ(KEY_VALUES_BIT_OP[0].second.Size(),
            CheckedInt({"bitop", "not", "foo_out", KEY_VALUES_BIT_OP[0].first}));
  auto res = Bytes::From(Run({"get", "foo_out"}));

  const auto NOT_RESULTS = Bytes(~0xFFAACC01ull);
  EXPECT_EQ(res, NOT_RESULTS);
}
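
// Illustrative note on the expectations above (standard Redis BITOP NOT semantics, not
// stated in the original file): NOT accepts exactly one source key and complements every
// bit of it, so the extra argument is rejected with a syntax error and the expected value
// is the bitwise complement of the source constant 0xFFAACC01.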

}  // end of namespace dfly

@@ -1,4 +1,4 @@
// Copyright 2022, DragonflyDB authors. All rights reserved.
// Copyright 2022, Roman Gershman. All rights reserved.
// See LICENSE for licensing terms.
//
@@ -106,22 +106,18 @@ void BlockingController::RunStep(Transaction* completed_t) {
    }
  }

  DbContext context;
  context.time_now_ms = GetCurrentTimeMs();

  for (DbIndex index : awakened_indices_) {
    auto dbit = watched_dbs_.find(index);
    if (dbit == watched_dbs_.end())
      continue;

    context.db_index = index;
    DbWatchTable& wt = *dbit->second;
    for (auto key : wt.awakened_keys) {
      string_view sv_key = static_cast<string_view>(key);
      DVLOG(1) << "Processing awakened key " << sv_key;

      // Double verify we still got the item.
      auto [it, exp_it] = owner_->db_slice().FindExt(context, sv_key);
      auto [it, exp_it] = owner_->db_slice().FindExt(index, sv_key);
      if (!IsValid(it) || it->second.ObjType() != OBJ_LIST)  // Only LIST is allowed to block.
        continue;
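
      // Illustrative sketch, not part of this hunk: the change above switches
      // DbSlice::FindExt() between taking a bare DbIndex and taking a DbContext that also
      // carries the current time. The real definition lives elsewhere in the Dragonfly
      // sources; the field types below are assumptions inferred from the usage above.
      //
      //   struct DbContext {
      //     DbIndex db_index = 0;
      //     uint64_t time_now_ms = 0;
      //   };
      //
      //   DbContext context;
      //   context.time_now_ms = GetCurrentTimeMs();
      //   context.db_index = index;
      //   auto [it, exp_it] = owner_->db_slice().FindExt(context, sv_key);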

Some files were not shown because too many files have changed in this diff