There is confusion between gigabytes and gibibytes. Let us standardize throughout. (#838)

* There is confusion between gigabytes and gibibytes.
* Trying to be consistent.

parent 9863f62321
commit fa4ce6a8bc
````diff
@@ -41,7 +41,7 @@ Other important files and directories:
 * **amalgamate.sh:** Generates singleheader/simdjson.h and singleheader/simdjson.cpp for release.
 * **benchmark:** This is where we do benchmarking. Benchmarking is core to every change we make; the
   cardinal rule is don't regress performance without knowing exactly why, and what you're trading
-for it. Many of our benchmarks are microbenchmarks. We try to assess specific functions in a specific library. In this scenario, we are effectively doing controlled scientific experiments for the purpose of understanding what affects our performance. So we simplify as much as possible. We try to avoid irrelevant factors such as page faults, interrupts, unnecessary system calls, and how fast and how eagerly the OS maps memory. In such scenarios, we typically want to get the best performance that we can achieve: the case where we did not get interrupts, context switches, or page faults. What we want is consistency and predictability: the numbers should not depend too much on how busy the machine is, or on whether you upgraded your operating system recently, and so forth. This type of benchmarking is distinct from system benchmarking. If you're not sure what else to do to check your performance, this is always a good start:
+for it. Many of our benchmarks are microbenchmarks. We are effectively doing controlled scientific experiments for the purpose of understanding what affects our performance. So we simplify as much as possible. We try to avoid irrelevant factors such as page faults, interrupts, and unnecessary system calls. We recommend checking the performance as follows:
 ```bash
 mkdir build
 cd build
````
````diff
@@ -53,11 +53,11 @@ Other important files and directories:
 ```bash
 mkdir build
 cd build
-cmake .. -DSIMDJSON_GOOGLE_BENCHMARKS=ON
+cmake ..
 cmake --build . --target bench_parse_call --config Release
 ./benchmark/bench_parse_call
 ```
-The last line becomes `./benchmark/Release/bench_parse_call.exe` under Windows. Under Windows, you can also build with the clang compiler by adding `-T ClangCL` to the call to `cmake ..`.
+The last line becomes `./benchmark/Release/bench_parse_call.exe` under Windows. Under Windows, you can also build with the clang compiler by adding `-T ClangCL` to the call to `cmake ..`: `cmake .. -T ClangCL`.
 * **fuzz:** The source for fuzz testing. This lets us explore important edge and middle cases
   automatically, and is run in CI.
````
README.md
```diff
@@ -75,15 +75,14 @@ Usage documentation is available:
 Performance results
 -------------------
 
-The simdjson library uses three-quarters fewer instructions than the state-of-the-art parser RapidJSON and
+The simdjson library uses three-quarters fewer instructions than the state-of-the-art parser [RapidJSON](https://rapidjson.org) and
 fifty percent fewer than sajson. To our knowledge, simdjson is the first fully-validating JSON parser
-to run at gigabytes per second on commodity processors. It can parse millions of JSON documents
-per second on a single core.
+to run at [gigabytes per second](https://en.wikipedia.org/wiki/Gigabyte) (GB/s) on commodity processors. It can parse millions of JSON documents per second on a single core.
 
 The following figure represents parsing speed in GB/s for parsing various files
 on an Intel Skylake processor (3.4 GHz) using the GNU GCC 9 compiler (with the -O3 flag).
 We compare against the best and fastest C++ libraries.
-The simdjson library offers full unicode (UTF-8) validation and exact
+The simdjson library offers full unicode ([UTF-8](https://en.wikipedia.org/wiki/UTF-8)) validation and exact
 number parsing. The RapidJSON library is tested in two modes: fast and
 exact number parsing. The sajson library offers fast (but not exact)
 number parsing and partial unicode validation. In this data set, the file
```
```diff
@@ -183,8 +182,8 @@ Head over to [CONTRIBUTING.md](CONTRIBUTING.md) for information on contributing
 License
 -------
 
-This code is made available under the Apache License 2.0.
+This code is made available under the [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0.html).
 
 Under Windows, we build some tools using the windows/dirent_portable.h file (which is outside our library code): it is under the liberal (business-friendly) MIT license.
 
-For compilers that do not support C++17, we bundle the string-view library, which is published under the Boost license (http://www.boost.org/LICENSE_1_0.txt). Like the Apache license, the Boost license is a permissive license allowing commercial redistribution.
+For compilers that do not support [C++17](https://en.wikipedia.org/wiki/C%2B%2B17), we bundle the string-view library, which is published under the Boost license (http://www.boost.org/LICENSE_1_0.txt). Like the Apache license, the Boost license is a permissive license allowing commercial redistribution.
```
```diff
@@ -36,9 +36,10 @@ static void parse_twitter(State& state) {
     }
     benchmark::DoNotOptimize(doc);
   }
-  state.counters["Bytes"] = benchmark::Counter(
+  // Gigabyte: https://en.wikipedia.org/wiki/Gigabyte
+  state.counters["Gigabytes"] = benchmark::Counter(
       double(bytes), benchmark::Counter::kIsRate,
-      benchmark::Counter::OneK::kIs1024);
+      benchmark::Counter::OneK::kIs1000); // For GiB: kIs1024
   state.counters["docs"] = Counter(double(state.iterations()), benchmark::Counter::kIsRate);
 }
 BENCHMARK(parse_twitter)->Repetitions(10)->ComputeStatistics("max", [](const std::vector<double>& v) -> double {
```
```diff
@@ -72,9 +73,10 @@ static void parse_gsoc(State& state) {
     }
     benchmark::DoNotOptimize(doc);
   }
-  state.counters["Bytes"] = benchmark::Counter(
+  // Gigabyte: https://en.wikipedia.org/wiki/Gigabyte
+  state.counters["Gigabytes"] = benchmark::Counter(
       double(bytes), benchmark::Counter::kIsRate,
-      benchmark::Counter::OneK::kIs1024);
+      benchmark::Counter::OneK::kIs1000); // For GiB: kIs1024
   state.counters["docs"] = Counter(double(state.iterations()), benchmark::Counter::kIsRate);
 }
 BENCHMARK(parse_gsoc)->Repetitions(10)->ComputeStatistics("max", [](const std::vector<double>& v) -> double {
```
```diff
@@ -379,9 +379,10 @@ struct benchmarker {
     run_loop(iterations);
   }
 
+  // Gigabyte: https://en.wikipedia.org/wiki/Gigabyte
   template<typename T>
   void print_aggregate(const char* prefix, const T& stage) const {
-    printf("%s%-13s: %8.4f ns per block (%6.2f%%) - %8.4f ns per byte - %8.4f ns per structural - %8.3f GB/s\n",
+    printf("%s%-13s: %8.4f ns per block (%6.2f%%) - %8.4f ns per byte - %8.4f ns per structural - %8.4f GB/s\n",
            prefix,
            "Speed",
            stage.elapsed_ns() / static_cast<double>(stats->blocks), // per block
```
```diff
@@ -335,13 +335,13 @@ int main(int argc, char *argv[]) {
     std::cerr << "Could not load the file " << filename << std::endl;
     return EXIT_FAILURE;
   }
+  // Gigabyte: https://en.wikipedia.org/wiki/Gigabyte
   if (verbose) {
     std::cout << "Input has ";
-    if (p.size() > 1024 * 1024)
-      std::cout << p.size() / (1024 * 1024) << " MB ";
-    else if (p.size() > 1024)
-      std::cout << p.size() / 1024 << " KB ";
+    if (p.size() > 1000 * 1000)
+      std::cout << p.size() / (1000 * 1000) << " MB ";
+    else if (p.size() > 1000)
+      std::cout << p.size() / 1000 << " KB ";
     else
       std::cout << p.size() << " B ";
     std::cout << std::endl;
```
```diff
@@ -4,6 +4,7 @@
 #include <cstring>
 #include <iostream>
 
+// Gigabyte: https://en.wikipedia.org/wiki/Gigabyte
 never_inline
 double bench(std::string filename, simdjson::padded_string& p) {
   std::chrono::time_point<std::chrono::steady_clock> start_clock =
```
```diff
@@ -12,7 +13,7 @@ double bench(std::string filename, simdjson::padded_string& p) {
   std::chrono::time_point<std::chrono::steady_clock> end_clock =
       std::chrono::steady_clock::now();
   std::chrono::duration<double> elapsed = end_clock - start_clock;
-  return (static_cast<double>(p.size()) / (1024. * 1024. * 1024.)) / elapsed.count();
+  return (static_cast<double>(p.size()) / (1000000000.)) / elapsed.count();
 }
 
 int main(int argc, char *argv[]) {
```
```diff
@@ -32,8 +33,8 @@ int main(int argc, char *argv[]) {
   double meanval = 0;
   double maxval = 0;
   double minval = 10000;
-  std::cout << "file size: " << (static_cast<double>(p.size()) / (1024. * 1024. * 1024.)) << " GB" << std::endl;
-  size_t times = p.size() > 1024*1024*1024 ? 5 : 50;
+  std::cout << "file size: " << (static_cast<double>(p.size()) / (1000000000.)) << " GB" << std::endl;
+  size_t times = p.size() > 1000000000 ? 5 : 50;
 #if __cpp_exceptions
   try {
 #endif
```
```diff
@@ -72,12 +72,13 @@ int main(int argc, char *argv[]) {
     std::cerr << "Could not load the file " << filename << std::endl;
     return EXIT_FAILURE;
   }
+  // Gigabyte: https://en.wikipedia.org/wiki/Gigabyte
   if (verbose) {
     std::cout << "Input has ";
-    if (p.size() > 1024 * 1024)
-      std::cout << p.size() / (1024 * 1024) << " MB ";
-    else if (p.size() > 1024)
-      std::cout << p.size() / 1024 << " KB ";
+    if (p.size() > 1000 * 1000)
+      std::cout << p.size() / (1000 * 1000) << " MB ";
+    else if (p.size() > 1000)
+      std::cout << p.size() / 1000 << " KB ";
     else
       std::cout << p.size() << " B ";
     std::cout << std::endl;
```
```diff
@@ -302,13 +302,13 @@ int main(int argc, char *argv[]) {
     std::cerr << "Could not load the file " << filename << std::endl;
     return EXIT_FAILURE;
   }
+  // Gigabyte: https://en.wikipedia.org/wiki/Gigabyte
   if (verbose) {
     std::cout << "Input has ";
-    if (p.size() > 1024 * 1024)
-      std::cout << p.size() / (1024 * 1024) << " MB ";
-    else if (p.size() > 1024)
-      std::cout << p.size() / 1024 << " KB ";
+    if (p.size() > 1000 * 1000)
+      std::cout << p.size() / (1000 * 1000) << " MB ";
+    else if (p.size() > 1000)
+      std::cout << p.size() / 1000 << " KB ";
     else
       std::cout << p.size() << " B ";
     std::cout << std::endl;
```
```diff
@@ -86,13 +86,13 @@ bool bench(const char *filename, bool verbose, bool just_data, double repeat_mul
 
   int repeat = static_cast<int>((50000000 * repeat_multiplier) / static_cast<double>(p.size()));
   if (repeat < 10) { repeat = 10; }
+  // Gigabyte: https://en.wikipedia.org/wiki/Gigabyte
   if (verbose) {
     std::cout << "Input " << filename << " has ";
-    if (p.size() > 1024 * 1024)
-      std::cout << p.size() / (1024 * 1024) << " MB";
-    else if (p.size() > 1024)
-      std::cout << p.size() / 1024 << " KB";
+    if (p.size() > 1000 * 1000)
+      std::cout << p.size() / (1000 * 1000) << " MB";
+    else if (p.size() > 1000)
+      std::cout << p.size() / 1000 << " KB";
     else
       std::cout << p.size() << " B";
     std::cout << ": will run " << repeat << " iterations." << std::endl;
```
```diff
@@ -9,6 +9,8 @@ are still some scenarios where tuning can enhance performance.
 * [Server Loops: Long-Running Processes and Memory Capacity](#server-loops-long-running-processes-and-memory-capacity)
 * [Large files and huge page support](#large-files-and-huge-page-support)
 * [Computed GOTOs](#computed-gotos)
+* [Number parsing](#number-parsing)
+* [Visual Studio](#visual-studio)
 
 Reusing the parser for maximum efficiency
 -----------------------------------------
```
````diff
@@ -61,7 +63,7 @@ without bound:
 * You can set a *max capacity* when constructing a parser:
 
 ```c++
-dom::parser parser(1024*1024); // Never grow past documents > 1MB
+dom::parser parser(1000*1000); // Never grow past documents > 1MB
 for (web_request request : listen()) {
   auto [doc, error] = parser.parse(request.body);
   // If the document was above our limit, emit 413 = payload too large
````
````diff
@@ -77,7 +79,7 @@ without bound:
 
 ```c++
 dom::parser parser(0); // This parser will refuse to automatically grow capacity
-simdjson::error_code allocate_error = parser.allocate(1024*1024); // This allocates enough capacity to handle documents <= 1MB
+simdjson::error_code allocate_error = parser.allocate(1000*1000); // This allocates enough capacity to handle documents <= 1MB
 if (allocate_error) { cerr << allocate_error << endl; exit(1); }
 
 for (web_request request : listen()) {
````
```diff
@@ -140,3 +142,13 @@ few hundred megabytes per second if your JSON documents are densely packed with
 - When possible, you should favor integer values written without a decimal point, as it is simpler and faster to parse decimal integer values.
 - When serializing numbers, you should not use more digits than necessary: 17 digits is all that is needed to exactly represent double-precision floating-point numbers. Using many more digits than necessary will make your files larger and slower to parse.
 - When benchmarking parsing speeds, report when your JSON documents are made mostly of floating-point numbers, since number parsing can then dominate the parsing time.
+
+
+Visual Studio
+--------------
+
+On Intel and AMD Windows platforms, Microsoft Visual Studio enables programmers to build either 32-bit (x86) or 64-bit (x64) binaries. We urge you to always use 64-bit mode. Visual Studio 2019 should default to 64-bit builds when you have a 64-bit version of Windows, which we recommend.
+
+We do not recommend that you compile simdjson with architecture-specific flags such as `arch:AVX2`. The simdjson library automatically selects the best execution kernel at runtime.
+
+Recent versions of Microsoft Visual Studio on Windows provide support for the LLVM Clang compiler. You only need to install the "Clang compiler" optional component. You may also get a copy of the 64-bit LLVM Clang compiler for [Windows directly from LLVM](https://releases.llvm.org/download.html). The simdjson library fully supports the LLVM Clang compiler under Windows. In fact, you may get better performance out of simdjson with the LLVM Clang compiler than with the regular Visual Studio compiler.
```
```diff
@@ -70,10 +70,10 @@ int main(int argc, char *argv[]) {
   }
   if (verbose) {
     std::cout << "Input has ";
-    if (p.size() > 1024 * 1024)
-      std::cout << p.size() / (1024 * 1024) << " MB ";
-    else if (p.size() > 1024)
-      std::cout << p.size() / 1024 << " KB ";
+    if (p.size() > 1000 * 1000)
+      std::cout << p.size() / (1000 * 1000) << " MB ";
+    else if (p.size() > 1000)
+      std::cout << p.size() / 1000 << " KB ";
     else
       std::cout << p.size() << " B ";
     std::cout << std::endl;
```
```diff
@@ -214,7 +214,7 @@ void performance_1() {
 SIMDJSON_PUSH_DISABLE_ALL_WARNINGS
 // The web_request part of this is aspirational, so we compile as much as we can here
 void performance_2() {
-  dom::parser parser(1024*1024); // Never grow past documents > 1MB
+  dom::parser parser(1000*1000); // Never grow past documents > 1MB
   // for (web_request request : listen()) {
     auto [doc, error] = parser.parse("1"_padded/*request.body*/);
     // // If the document was above our limit, emit 413 = payload too large
```
```diff
@@ -226,7 +226,7 @@ void performance_2() {
 // The web_request part of this is aspirational, so we compile as much as we can here
 void performance_3() {
   dom::parser parser(0); // This parser will refuse to automatically grow capacity
-  simdjson::error_code allocate_error = parser.allocate(1024*1024); // This allocates enough capacity to handle documents <= 1MB
+  simdjson::error_code allocate_error = parser.allocate(1000*1000); // This allocates enough capacity to handle documents <= 1MB
   if (allocate_error) { cerr << allocate_error << endl; exit(1); }
 
   // for (web_request request : listen()) {
```