* Some users are interested in the number of documents parsed per second as a metric.
Naturally, measuring this means reusing the same parser again and again.
* Adding a sentence
* This updates the parsingcompetition benchmark so that it displays the number of documents parsed per second.
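For illustration, here is a minimal sketch of such a documents-per-second measurement using the reusable document::parser described later in these notes; the input and timing scaffolding are illustrative, not the benchmark's actual code:
```c++
#include <chrono>
#include <cstdio>
#include "simdjson.h"

int main() {
  simdjson::padded_string json("{\"key\":123}", 11); // stand-in document
  simdjson::document::parser parser;                 // reused across all iterations
  parser.allocate_capacity(json.size());
  const size_t iterations = 100000;
  auto start = std::chrono::steady_clock::now();
  for (size_t i = 0; i < iterations; i++) {
    parser.parse(json.data(), json.size());          // reuses internal buffers
  }
  std::chrono::duration<double> secs = std::chrono::steady_clock::now() - start;
  std::printf("%.0f documents/second\n", iterations / secs.count());
}
```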
* Fallback should use our scalar code.
* parse should have a nicer error message.
* Making it so that "minify" can use different architectures.
* Let us change the minifier competition so that it tests all implementations.
* Documenting the untaken optimization opportunity.
Co-authored-by: John Keiser <john@johnkeiser.com>
* If we are going to have a Google Benchmark flag, we had better make sure that we test it minimally (it should at least build).
* Fix bench_dom_api
Co-authored-by: John Keiser <john@johnkeiser.com>
* Make architecture implementations virtual functions (see the sketch after this list)
- Easier to add new architectures (add implementation to implementation.cpp)
- Easier to add new algorithms / functions to architecture selection
(add to implementation.h, implement)
- Automatically select best implementation in static initialization
- Allow user to explicitly select an implementation by name (a string parameter)
- Allow user to inspect current implementation name/description
- Allow user to list available implementations
- Eliminate architecture enum and architecture-based templating
- Add noexcept in non-inline functions
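A hedged sketch of what this selection interface might look like in use; the names available_implementations and active_implementation are inferred from the list above and could differ from the shipped API:
```c++
#include <cstdio>
#include "simdjson.h"

int main() {
  // List the implementations compiled into the library.
  for (auto implementation : simdjson::available_implementations) {
    std::printf("%s: %s\n", implementation->name().c_str(),
                implementation->description().c_str());
  }
  // Explicitly select an implementation by its string name.
  auto *fallback = simdjson::available_implementations["fallback"];
  if (fallback) { simdjson::active_implementation = fallback; }
  // Inspect the currently active implementation.
  std::printf("active: %s\n", simdjson::active_implementation->name().c_str());
}
```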
* Move implementation static methods to their own classes
* Detect best supported implementation on first use
* available_implementationsI() -> available_implementations
* This creates a "document" class with only user-facing document state (no parser internals).
- document: user-facing document state
- document::iterator: iterator (equivalent of ParsedJsonIterator)
- document::parser: parser state plus a "docked" document we parse into (equivalent of ParsedJson)
Usage:
```c++
auto doc = simdjson::document::parse(buf, len); // less efficient but simplest
```
```c++
simdjson::document::parser parser; // reusable parser
parser.allocate_capacity(len);
simdjson::document* doc = parser.parse(buf, len); // pointer to doc inside parser
doc = parser.parse(buf2, len); // reuses all buffers and overwrites doc; more efficient
```
* Fix for issue 467
* Updating single-header
* Let us make it so that JsonStream is constructed from a padded_string, which avoids dangerous buffer overruns.
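A minimal sketch of the safer construction, assuming the era's json_parse/ParsedJson streaming interface (treat those details as assumptions):
```c++
#include "simdjson.h"

// Illustrative: a padded_string guarantees the padding the stream parser
// needs, so it cannot read past the end of the input buffer.
void parse_documents(const simdjson::padded_string &batch) {
  simdjson::JsonStream js{batch};        // constructed from a padded_string
  simdjson::ParsedJson pj;
  int ret;
  do {
    ret = js.json_parse(pj);             // parse the next document in the batch
  } while (ret == simdjson::SUCCESS_AND_HAS_MORE);
}
```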
* Fixing parse_stream
* Updating documentation.
* This reverts the code back to how it was prior to the silly "run two stages" routine and instead
adds an option to benchmark the code over hot buffers. It turns out that allocating the pages
can be expensive when the files are large.
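To make "hot buffers" concrete (a sketch, not the benchmark's code): the first touch of each freshly allocated page triggers a page fault, so a hot-buffer run touches the pages once before timing.
```c++
#include <cstddef>

// Illustrative only: fault in every page of a freshly allocated buffer
// so that page-allocation cost is excluded from the timed run.
// page_size is an assumption (typical 4 KiB pages).
void make_hot(volatile char *buf, size_t len) {
  const size_t page_size = 4096;
  for (size_t i = 0; i < len; i += page_size) {
    buf[i];                              // volatile read faults the page in
  }
}
```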
* Instead of emulating the whole parsing as stage 1 + stage 2, let us
benchmark the real thing.
* Adding an explicit constructor.
* Adding a warning for the benchmark user.
* Making re-running optional.