From fa4ce6a8bcc07a0d31ab4be659deb08b26dc62a5 Mon Sep 17 00:00:00 2001
From: Daniel Lemire
Date: Fri, 1 May 2020 09:16:18 -0700
Subject: [PATCH] There is confusion between gigabytes and gibibytes. Let us
 standardize throughout. (#838)

* There is confusion between gigabytes and gibibytes.

* Trying to be consistent.
---
 HACKING.md                              |  6 +++---
 README.md                               | 11 +++++------
 benchmark/bench_parse_call.cpp          | 12 +++++++-----
 benchmark/benchmarker.h                 |  3 ++-
 benchmark/distinctuseridcompetition.cpp | 10 +++++-----
 benchmark/get_corpus_benchmark.cpp      |  7 ++++---
 benchmark/minifiercompetition.cpp       |  9 +++++----
 benchmark/parseandstatcompetition.cpp   | 10 +++++-----
 benchmark/parsingcompetition.cpp        | 10 +++++-----
 doc/performance.md                      | 16 ++++++++++++++--
 tests/allparserscheckfile.cpp           |  8 ++++----
 tests/readme_examples.cpp               |  4 ++--
 12 files changed, 61 insertions(+), 45 deletions(-)

diff --git a/HACKING.md b/HACKING.md
index 7f2bdafc..d1094dc0 100644
--- a/HACKING.md
+++ b/HACKING.md
@@ -41,7 +41,7 @@ Other important files and directories:
 * **amalgamate.sh:** Generates singleheader/simdjson.h and singleheader/simdjson.cpp for release.
 * **benchmark:** This is where we do benchmarking. Benchmarking is core to every change we make; the
   cardinal rule is don't regress performance without knowing exactly why, and what you're trading
-  for it. Many of our benchmarks are microbenchmarks. We trying to assess a specific functions in a specific library. In this scenario, we are effectively doing controlled scientific experiments for the purpose of understanding what affects our performance. So we simplify as much as possible. We try to avoid irrelevant factors such as page faults, interrupts, unnnecessary system calls, how fast and how eagerly the OS maps memory In such scenarios, we typically want to get the best performance that we can achieve... the case where we did not get interrupts, context switches, page faults... What we want is consistency and predictability. The numbers should not depend too much on how busy the machine is, on whether your upgraded your operating system recently, and so forth. This type of benchmarking is distinct from system benchmarking. If you're not sure what else to do to check your performance, this is always a good start:
+  for it. Many of our benchmarks are microbenchmarks. We are effectively doing controlled scientific experiments for the purpose of understanding what affects our performance. So we simplify as much as possible. We try to avoid irrelevant factors such as page faults, interrupts, and unnecessary system calls. We recommend checking the performance as follows:
   ```bash
   mkdir build
   cd build
@@ -53,11 +53,11 @@ Other important files and directories:
   ```bash
   mkdir build
   cd build
-  cmake .. -DSIMDJSON_GOOGLE_BENCHMARKS=ON
+  cmake ..
   cmake --build . --target bench_parse_call --config Release
   ./benchmark/bench_parse_call
   ```

-  The last line becomes `./benchmark/Release/bench_parse_call.exe` under Windows. Under Windows, you can also build with the clang compiler by adding `-T ClangCL` to the call to `cmake .. `.
+  The last line becomes `./benchmark/Release/bench_parse_call.exe` under Windows. Under Windows, you can also build with the clang compiler by adding `-T ClangCL` to the call to `cmake ..`: `cmake .. -T ClangCL`.
 * **fuzz:** The source for fuzz testing. This lets us explore important edge and middle cases
   automatically, and is run in CI.
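A note on the unit convention this patch adopts: a gigabyte (GB) is the decimal SI unit of 10^9 bytes, while a gibibyte (GiB) is the binary unit of 2^30 = 1,073,741,824 bytes, about 7.4% larger. A minimal, self-contained sketch of the two conversions (illustrative only, not part of the patch; the constants mirror those used throughout it):

```c++
#include <cstdio>

int main() {
  const double bytes = 2147483648.; // 2^31 bytes
  const double gigabytes = bytes / 1000000000.;             // decimal (SI): 10^9 bytes
  const double gibibytes = bytes / (1024. * 1024. * 1024.); // binary: 2^30 bytes
  std::printf("%.3f GB vs %.3f GiB\n", gigabytes, gibibytes); // 2.147 GB vs 2.000 GiB
  return 0;
}
```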
diff --git a/README.md b/README.md
index 0ce8fd3b..3a1ba38e 100644
--- a/README.md
+++ b/README.md
@@ -75,15 +75,14 @@ Usage documentation is available:
 Performance results
 -------------------
 
-The simdjson library uses three-quarters less instructions than state-of-the-art parser RapidJSON and
+The simdjson library uses three-quarters fewer instructions than the state-of-the-art parser [RapidJSON](https://rapidjson.org) and
 fifty percent less than sajson. To our knowledge, simdjson is the first fully-validating JSON parser
-to run at gigabytes per second on commodity processors. It can parse millions of JSON documents
-per second on a single core.
+to run at [gigabytes per second](https://en.wikipedia.org/wiki/Gigabyte) (GB/s) on commodity processors. It can parse millions of JSON documents per second on a single core.
 
 The following figure represents parsing speed in GB/s for parsing various files
 on an Intel Skylake processor (3.4 GHz) using the GNU GCC 9 compiler (with the -O3 flag).
 We compare against the best and fastest C++ libraries.
-The simdjson library offers full unicode (UTF-8) validation and exact
+The simdjson library offers full unicode ([UTF-8](https://en.wikipedia.org/wiki/UTF-8)) validation and exact
 number parsing. The RapidJSON library is tested in two modes: fast and
 exact number parsing. The sajson library offers fast (but not exact)
 number parsing and partial unicode validation. In this data set, the file
@@ -183,8 +182,8 @@ Head over to [CONTRIBUTING.md](CONTRIBUTING.md) for information on contributing
 License
 -------
 
-This code is made available under the Apache License 2.0.
+This code is made available under the [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0.html).
 
 Under Windows, we build some tools using the windows/dirent_portable.h file (which is outside our library code): it is under the liberal (business-friendly) MIT license.
 
-For compilers that do not support C++17, we bundle the string-view library which is published under the Boost license (http://www.boost.org/LICENSE_1_0.txt). Like the Apache license, the Boost license is a permissive license allowing commercial redistribution.
+For compilers that do not support [C++17](https://en.wikipedia.org/wiki/C%2B%2B17), we bundle the string-view library which is published under the Boost license (http://www.boost.org/LICENSE_1_0.txt). Like the Apache license, the Boost license is a permissive license allowing commercial redistribution.
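The counter changes below rely on Google Benchmark's rate counters: with `benchmark::Counter::kIsRate` the counter value is divided by elapsed time, and `OneK::kIs1000` makes the reported k/M/G prefixes decimal (GB) rather than binary (GiB). A minimal standalone sketch of the idiom, assuming the Google Benchmark library and a hypothetical `parse_stub` payload:

```c++
#include <benchmark/benchmark.h>
#include <string>

static void parse_stub(benchmark::State& state) {
  const std::string payload(1000, 'x'); // hypothetical 1 KB input
  size_t bytes = 0;
  for (auto _ : state) {
    benchmark::DoNotOptimize(payload.data()); // stand-in for real parsing work
    bytes += payload.size();
  }
  // kIsRate: report bytes per second; kIs1000: display "G" as 10^9 (GB), not 2^30 (GiB).
  state.counters["Gigabytes"] = benchmark::Counter(
      double(bytes), benchmark::Counter::kIsRate,
      benchmark::Counter::OneK::kIs1000);
}
BENCHMARK(parse_stub);
BENCHMARK_MAIN();
```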
diff --git a/benchmark/bench_parse_call.cpp b/benchmark/bench_parse_call.cpp
index 484ffc9f..94a6580c 100644
--- a/benchmark/bench_parse_call.cpp
+++ b/benchmark/bench_parse_call.cpp
@@ -36,9 +36,10 @@ static void parse_twitter(State& state) {
     }
     benchmark::DoNotOptimize(doc);
   }
-  state.counters["Bytes"] = benchmark::Counter(
+  // Gigabyte: https://en.wikipedia.org/wiki/Gigabyte
+  state.counters["Gigabytes"] = benchmark::Counter(
       double(bytes), benchmark::Counter::kIsRate,
-      benchmark::Counter::OneK::kIs1024);
+      benchmark::Counter::OneK::kIs1000); // For GiB : kIs1024
   state.counters["docs"] = Counter(double(state.iterations()), benchmark::Counter::kIsRate);
 }
 BENCHMARK(parse_twitter)->Repetitions(10)->ComputeStatistics("max", [](const std::vector<double>& v) -> double {
@@ -72,9 +73,10 @@ static void parse_gsoc(State& state) {
     }
     benchmark::DoNotOptimize(doc);
   }
-  state.counters["Bytes"] = benchmark::Counter(
+  // Gigabyte: https://en.wikipedia.org/wiki/Gigabyte
+  state.counters["Gigabytes"] = benchmark::Counter(
       double(bytes), benchmark::Counter::kIsRate,
-      benchmark::Counter::OneK::kIs1024);
+      benchmark::Counter::OneK::kIs1000); // For GiB : kIs1024
   state.counters["docs"] = Counter(double(state.iterations()), benchmark::Counter::kIsRate);
 }
 BENCHMARK(parse_gsoc)->Repetitions(10)->ComputeStatistics("max", [](const std::vector<double>& v) -> double {
@@ -160,4 +162,4 @@ BENCHMARK(document_parse_exception);
 
 #endif // SIMDJSON_EXCEPTIONS
 
-BENCHMARK_MAIN();
\ No newline at end of file
+BENCHMARK_MAIN();
diff --git a/benchmark/benchmarker.h b/benchmark/benchmarker.h
index c01dcda3..56ed56a6 100644
--- a/benchmark/benchmarker.h
+++ b/benchmark/benchmarker.h
@@ -379,9 +379,10 @@ struct benchmarker {
     run_loop(iterations);
   }
 
+  // Gigabyte: https://en.wikipedia.org/wiki/Gigabyte
   template<typename T>
   void print_aggregate(const char* prefix, const T& stage) const {
-    printf("%s%-13s: %8.4f ns per block (%6.2f%%) - %8.4f ns per byte - %8.4f ns per structural - %8.3f GB/s\n",
+    printf("%s%-13s: %8.4f ns per block (%6.2f%%) - %8.4f ns per byte - %8.4f ns per structural - %8.4f GB/s\n",
         prefix,
         "Speed",
         stage.elapsed_ns() / static_cast<double>(stats->blocks), // per block
diff --git a/benchmark/distinctuseridcompetition.cpp b/benchmark/distinctuseridcompetition.cpp
index f04231c2..f94ddd06 100644
--- a/benchmark/distinctuseridcompetition.cpp
+++ b/benchmark/distinctuseridcompetition.cpp
@@ -335,13 +335,13 @@ int main(int argc, char *argv[]) {
     std::cerr << "Could not load the file " << filename << std::endl;
     return EXIT_FAILURE;
   }
-
+  // Gigabyte: https://en.wikipedia.org/wiki/Gigabyte
   if (verbose) {
     std::cout << "Input has ";
-    if (p.size() > 1024 * 1024)
-      std::cout << p.size() / (1024 * 1024) << " MB ";
-    else if (p.size() > 1024)
-      std::cout << p.size() / 1024 << " KB ";
+    if (p.size() > 1000 * 1000)
+      std::cout << p.size() / (1000 * 1000) << " MB ";
+    else if (p.size() > 1000)
+      std::cout << p.size() / 1000 << " KB ";
     else
       std::cout << p.size() << " B ";
     std::cout << std::endl;
diff --git a/benchmark/get_corpus_benchmark.cpp b/benchmark/get_corpus_benchmark.cpp
index b24656bc..855eaaac 100644
--- a/benchmark/get_corpus_benchmark.cpp
+++ b/benchmark/get_corpus_benchmark.cpp
@@ -4,6 +4,7 @@
 #include <chrono>
 #include <iostream>
 
+// Gigabyte: https://en.wikipedia.org/wiki/Gigabyte
 never_inline
 double bench(std::string filename, simdjson::padded_string& p) {
   std::chrono::time_point<std::chrono::steady_clock> start_clock =
@@ -12,7 +13,7 @@ double bench(std::string filename, simdjson::padded_string& p) {
   std::chrono::time_point<std::chrono::steady_clock> end_clock =
       std::chrono::steady_clock::now();
   std::chrono::duration<double> elapsed = end_clock - start_clock;
-  return (static_cast<double>(p.size()) / (1024. * 1024. * 1024.)) / elapsed.count();
+  return (static_cast<double>(p.size()) / (1000000000.)) / elapsed.count();
 }
 
 int main(int argc, char *argv[]) {
@@ -32,8 +33,8 @@ int main(int argc, char *argv[]) {
 double meanval = 0;
 double maxval = 0;
 double minval = 10000;
-std::cout << "file size: "<< (static_cast<double>(p.size()) / (1024. * 1024. * 1024.)) << " GB" << std::endl;
-size_t times = p.size() > 1024*1024*1024 ? 5 : 50;
+std::cout << "file size: "<< (static_cast<double>(p.size()) / (1000000000.)) << " GB" << std::endl;
+size_t times = p.size() > 1000000000 ? 5 : 50;
 #if __cpp_exceptions
 try {
 #endif
diff --git a/benchmark/minifiercompetition.cpp b/benchmark/minifiercompetition.cpp
index 7be76fc2..ad6a8842 100644
--- a/benchmark/minifiercompetition.cpp
+++ b/benchmark/minifiercompetition.cpp
@@ -72,12 +72,13 @@ int main(int argc, char *argv[]) {
     std::cerr << "Could not load the file " << filename << std::endl;
     return EXIT_FAILURE;
   }
+  // Gigabyte: https://en.wikipedia.org/wiki/Gigabyte
   if (verbose) {
     std::cout << "Input has ";
-    if (p.size() > 1024 * 1024)
-      std::cout << p.size() / (1024 * 1024) << " MB ";
-    else if (p.size() > 1024)
-      std::cout << p.size() / 1024 << " KB ";
+    if (p.size() > 1000 * 1000)
+      std::cout << p.size() / (1000 * 1000) << " MB ";
+    else if (p.size() > 1000)
+      std::cout << p.size() / 1000 << " KB ";
     else
       std::cout << p.size() << " B ";
     std::cout << std::endl;
diff --git a/benchmark/parseandstatcompetition.cpp b/benchmark/parseandstatcompetition.cpp
index b68178e2..de9d12ab 100644
--- a/benchmark/parseandstatcompetition.cpp
+++ b/benchmark/parseandstatcompetition.cpp
@@ -302,13 +302,13 @@ int main(int argc, char *argv[]) {
     std::cerr << "Could not load the file " << filename << std::endl;
     return EXIT_FAILURE;
   }
-
+  // Gigabyte: https://en.wikipedia.org/wiki/Gigabyte
   if (verbose) {
     std::cout << "Input has ";
-    if (p.size() > 1024 * 1024)
-      std::cout << p.size() / (1024 * 1024) << " MB ";
-    else if (p.size() > 1024)
-      std::cout << p.size() / 1024 << " KB ";
+    if (p.size() > 1000 * 1000)
+      std::cout << p.size() / (1000 * 1000) << " MB ";
+    else if (p.size() > 1000)
+      std::cout << p.size() / 1000 << " KB ";
     else
       std::cout << p.size() << " B ";
     std::cout << std::endl;
diff --git a/benchmark/parsingcompetition.cpp b/benchmark/parsingcompetition.cpp
index 215c4cf0..29d5c71e 100644
--- a/benchmark/parsingcompetition.cpp
+++ b/benchmark/parsingcompetition.cpp
@@ -86,13 +86,13 @@ bool bench(const char *filename, bool verbose, bool just_data, double repeat_mul
   int repeat = static_cast<int>((50000000 * repeat_multiplier) / static_cast<double>(p.size()));
   if (repeat < 10) { repeat = 10; }
-
+  // Gigabyte: https://en.wikipedia.org/wiki/Gigabyte
   if (verbose) {
     std::cout << "Input " << filename << " has ";
-    if (p.size() > 1024 * 1024)
-      std::cout << p.size() / (1024 * 1024) << " MB";
-    else if (p.size() > 1024)
-      std::cout << p.size() / 1024 << " KB";
+    if (p.size() > 1000 * 1000)
+      std::cout << p.size() / (1000 * 1000) << " MB";
+    else if (p.size() > 1000)
+      std::cout << p.size() / 1000 << " KB";
     else
       std::cout << p.size() << " B";
     std::cout << ": will run " << repeat << " iterations." << std::endl;
diff --git a/doc/performance.md b/doc/performance.md
index a3fba7f9..bc58eeab 100644
--- a/doc/performance.md
+++ b/doc/performance.md
@@ -9,6 +9,8 @@ are still some scenarios where tuning can enhance performance.
 * [Server Loops: Long-Running Processes and Memory Capacity](#server-loops-long-running-processes-and-memory-capacity)
 * [Large files and huge page support](#large-files-and-huge-page-support)
 * [Computed GOTOs](#computed-gotos)
+* [Number parsing](#number-parsing)
+* [Visual Studio](#visual-studio)
 
 Reusing the parser for maximum efficiency
 -----------------------------------------
@@ -61,7 +63,7 @@ without bound:
 * You can set a *max capacity* when constructing a parser:
 
   ```c++
-  dom::parser parser(1024*1024); // Never grow past documents > 1MB
+  dom::parser parser(1000*1000); // Never grow past documents > 1MB
   for (web_request request : listen()) {
     auto [doc, error] = parser.parse(request.body);
     // If the document was above our limit, emit 413 = payload too large
@@ -77,7 +79,7 @@ without bound:
 
   ```c++
   dom::parser parser(0); // This parser will refuse to automatically grow capacity
-  simdjson::error_code allocate_error = parser.allocate(1024*1024); // This allocates enough capacity to handle documents <= 1MB
+  simdjson::error_code allocate_error = parser.allocate(1000*1000); // This allocates enough capacity to handle documents <= 1MB
   if (allocate_error) { cerr << allocate_error << endl; exit(1); }
 
   for (web_request request : listen()) {
@@ -140,3 +142,13 @@ few hundred megabytes per second if your JSON documents are densely packed with
 - When possible, you should favor integer values written without a decimal point, as it is simpler and faster to parse decimal integer values.
 - When serializing numbers, you should not use more digits than necessary: 17 digits is all that is needed to exactly represent double-precision floating-point numbers. Using many more digits than necessary will make your files larger and slower to parse.
 - When benchmarking parsing speeds, always report whether your JSON documents are made mostly of floating-point numbers when it is the case, since number parsing can then dominate the parsing time.
+
+
+Visual Studio
+-------------
+
+On Intel and AMD Windows platforms, Microsoft Visual Studio enables programmers to build either 32-bit (x86) or 64-bit (x64) binaries. We urge you to always use 64-bit mode. Visual Studio 2019 should default to 64-bit builds when you have a 64-bit version of Windows, which we recommend.
+
+We do not recommend that you compile simdjson with architecture-specific flags such as `/arch:AVX2`. The simdjson library automatically selects the best execution kernel at runtime.
+
+Recent versions of Microsoft Visual Studio on Windows provide support for the LLVM Clang compiler. You only need to install the "Clang compiler" optional component. You may also get a copy of the 64-bit LLVM Clang compiler for [Windows directly from LLVM](https://releases.llvm.org/download.html). The simdjson library fully supports the LLVM Clang compiler under Windows. In fact, you may get better performance out of simdjson with the LLVM Clang compiler than with the regular Visual Studio compiler.
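On the 17-digit claim in the number-parsing notes above: 17 is `std::numeric_limits<double>::max_digits10`, the number of significant decimal digits guaranteed to round-trip an IEEE-754 double exactly. A minimal sketch demonstrating the round trip (illustrative only, not part of the patch):

```c++
#include <cstdio>
#include <cstdlib>
#include <limits>

int main() {
  const double original = 0.1 + 0.2; // has no exact binary representation
  char buffer[64];
  // Print with max_digits10 (17 for double): enough digits for an exact round trip.
  std::snprintf(buffer, sizeof(buffer), "%.*g",
                std::numeric_limits<double>::max_digits10, original);
  const double round_tripped = std::strtod(buffer, nullptr);
  std::printf("%s round-trips exactly: %s\n", buffer,
              round_tripped == original ? "yes" : "no"); // prints yes
  return 0;
}
```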
diff --git a/tests/allparserscheckfile.cpp b/tests/allparserscheckfile.cpp
index f529c9e4..6784a060 100644
--- a/tests/allparserscheckfile.cpp
+++ b/tests/allparserscheckfile.cpp
@@ -70,10 +70,10 @@ int main(int argc, char *argv[]) {
   }
   if (verbose) {
     std::cout << "Input has ";
-    if (p.size() > 1024 * 1024)
-      std::cout << p.size() / (1024 * 1024) << " MB ";
-    else if (p.size() > 1024)
-      std::cout << p.size() / 1024 << " KB ";
+    if (p.size() > 1000 * 1000)
+      std::cout << p.size() / (1000 * 1000) << " MB ";
+    else if (p.size() > 1000)
+      std::cout << p.size() / 1000 << " KB ";
     else
       std::cout << p.size() << " B ";
     std::cout << std::endl;
diff --git a/tests/readme_examples.cpp b/tests/readme_examples.cpp
index ca29bca0..5418d54d 100644
--- a/tests/readme_examples.cpp
+++ b/tests/readme_examples.cpp
@@ -214,7 +214,7 @@ void performance_1() {
 SIMDJSON_PUSH_DISABLE_ALL_WARNINGS
 // The web_request part of this is aspirational, so we compile as much as we can here
 void performance_2() {
-  dom::parser parser(1024*1024); // Never grow past documents > 1MB
+  dom::parser parser(1000*1000); // Never grow past documents > 1MB
   // for (web_request request : listen()) {
     auto [doc, error] = parser.parse("1"_padded/*request.body*/);
     // // If the document was above our limit, emit 413 = payload too large
@@ -226,7 +226,7 @@ void performance_2() {
 
 // The web_request part of this is aspirational, so we compile as much as we can here
 void performance_3() {
   dom::parser parser(0); // This parser will refuse to automatically grow capacity
-  simdjson::error_code allocate_error = parser.allocate(1024*1024); // This allocates enough capacity to handle documents <= 1MB
+  simdjson::error_code allocate_error = parser.allocate(1000*1000); // This allocates enough capacity to handle documents <= 1MB
   if (allocate_error) { cerr << allocate_error << endl; exit(1); }
   // for (web_request request : listen()) {
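The decimal size-printing pattern repeated across these files could be factored into a small helper; a minimal sketch under the same SI convention (hypothetical `print_size`, not part of the library or this patch):

```c++
#include <cstddef>
#include <cstdio>

// Hypothetical helper mirroring the verbose-output convention above:
// decimal (SI) units, so 1 MB = 1000 * 1000 bytes, not 1024 * 1024.
static void print_size(std::size_t bytes) {
  if (bytes > 1000 * 1000) {
    std::printf("Input has %zu MB\n", bytes / (1000 * 1000));
  } else if (bytes > 1000) {
    std::printf("Input has %zu KB\n", bytes / 1000);
  } else {
    std::printf("Input has %zu B\n", bytes);
  }
}

int main() {
  print_size(2500000); // prints "Input has 2 MB"
  return 0;
}
```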