After I wrote the article about Python yield and C++ Coroutines, I felt that I needed to test the performance of istream_line_reader. The immediate result was both good and bad: good in that there was no actual difference between the straightforward std::getline and my istream_line_reader (as anticipated), and bad in that neither version performed well (a surprise to me). I vaguely remembered that sync_with_stdio(false) might affect the performance, so I also tested calling this function at the beginning. However, it did not seem to matter. (By the way, my favourite compiler recently has been Clang, and I use a Mac.)
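For context, the baseline I am comparing against is essentially a std::getline loop like the following. This is a minimal sketch rather than the exact benchmark code (the full test programs appear at the end of this article); the -s switch here is just an assumed way to toggle the sync_with_stdio call.

#include <iostream>
#include <string>

// Read lines from stdin with std::getline and echo them back.
// Passing "-s" turns off stdio synchronization first; whether that
// helps depends on the standard library implementation.
int main(int argc, char* argv[])
{
    if (argc > 1 && std::string(argv[1]) == "-s") {
        std::cin.sync_with_stdio(false);
    }
    std::string line;
    while (std::getline(std::cin, line)) {
        std::cout << line << '\n';
    }
}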
Seeing that istream_line_reader had a performance problem, I tried other approaches. One thing I tried was using the traditional C I/O functions. I wrote another file_line_reader, which used either fgets or fread to read the data, depending on what the delimiter is. (fgets could only use ‘\n’ as the delimiter, but it performed better than fread, for I could fgets into the final line buffer, but had to fread into a temporary buffer first.) I also added a switch on whether to strip the delimiter, something not possible with the getline function. The result was a more than 10x performance improvement (from about 28 MB/s to 430 MB/s). I was happy, and presented this on the last slide of my presentation on C++ and Functional Programming at the 2016 C++ and System Software Summit (China).
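To give a flavour of the fgets path, here is a minimal sketch of reading into a self-expanding, caller-managed buffer. The names and buffer sizes are my illustrative assumptions; it is not the actual file_line_reader code in Nvwa.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

// Read one '\n'-terminated line into *buf (capacity *cap), growing the
// buffer when a line does not fit.  Returns *buf, or NULL at end of file.
static char* read_line(FILE* fp, char** buf, size_t* cap)
{
    size_t len = 0;
    for (;;) {
        if (!fgets(*buf + len, (int)(*cap - len), fp)) {
            return len ? *buf : NULL;  // EOF (maybe a final line without '\n')
        }
        len += strlen(*buf + len);
        if (len && (*buf)[len - 1] == '\n') {
            return *buf;               // a complete line is in *buf
        }
        char* new_buf = (char*)realloc(*buf, *cap * 2);
        if (!new_buf) {
            return NULL;               // out of memory
        }
        *buf = new_buf;                // line longer than the buffer:
        *cap *= 2;                     // double it and keep reading
    }
}

int main(void)
{
    size_t cap = 256;
    char* buf = (char*)malloc(cap);
    while (buf && read_line(stdin, &buf, &cap)) {
        fputs(buf, stdout);
    }
    free(buf);
}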
Until C++11, modifying the character array accessed through string::data() had undefined behaviour. To be on the safe side, I implemented a self-expanding character buffer on my own, which complicated the implementation a little bit. It also made the interface slightly different from istream_line_reader’s, as the following code snippets demonstrate.
Iteration with istream_line_reader:

for (auto& line : istream_line_reader(cin)) {
    puts(line.c_str());
}
Iteration with file_line_reader:

for (auto& line : file_line_reader(stdin)) {
    puts(line);
}
That is, each iteration with file_line_reader returns a char* instead of a string. This should be OK, as a raw character pointer is often enough. One can always construct a string from the char* easily, anyway.
After the presentation, I turned to implementing a small enhancement: iterating over the lines with mmap. This proved to be interesting work. Not only did it improve the line reading performance, but the code was simplified as well. As I could access the file content directly through a pointer, I was able to copy the lines to a string simply with string::assign. And as I used string again, there was no need to define a custom copy constructor, copy assignment operator, move constructor, or move assignment operator either. The performance was, of course, also good: the throughput rate reached 650 MB/s, a 50% improvement! The only downside was that it could not work on stdin, so testing it required a little more typing. Apart from that, I was quite satisfied. I now had three different line readers that could take an istream&, a FILE*, or a file descriptor as the input source, so all situations were dealt with. Not bad!
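The core idea of the mmap approach can be sketched as follows. This is a simplification with most error handling omitted, not the actual mmap_line_reader in Nvwa: map the whole file, find each delimiter with memchr, and copy the line out with string::assign.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>
#include <string>

int main(int argc, char* argv[])
{
    if (argc != 2) {
        return 1;
    }
    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) {
        return 1;
    }
    struct stat st;
    fstat(fd, &st);
    // Map the whole file read-only
    char* begin = (char*)mmap(NULL, st.st_size, PROT_READ,
                              MAP_PRIVATE, fd, 0);
    if (begin == (char*)MAP_FAILED) {
        close(fd);
        return 1;
    }
    const char* end = begin + st.st_size;
    std::string line;
    for (const char* p = begin; p < end; ) {
        const char* nl = (const char*)memchr(p, '\n', end - p);
        if (!nl) {
            nl = end;                  // last line without a delimiter
        }
        line.assign(p, nl - p);        // one bulk copy per line
        puts(line.c_str());
        p = nl + 1;
    }
    munmap(begin, st.st_size);
    close(fd);
}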
One thing of note about the implementation: I tried copying (a character at a time) while searching, before adopting the current method of searching first and then assigning to the string. The latter proved faster when dealing with long lines (a small sketch of the two strategies follows this list). I can see two reasons:
- Strings are normally (and required to be since C++11) null-terminated, so copying one character at a time has the big overhead of zeroing the next byte. I confirmed this in Clang’s libc++ source code.
- Assignment can use memcpy or memmove internally, which normally has a fast platform-specific implementation. In the case of string::assign(const char*, size_t), I verified that libc++ does indeed use memmove.
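For comparison, here is what the two strategies look like side by side, as a small compilable sketch with a made-up buffer (not code from any of the line readers):

#include <stdio.h>
#include <string.h>
#include <string>

int main()
{
    const char buf[] = "first line\nsecond line\n";
    const char* end = buf + sizeof buf - 1;

    // Strategy 1: copy while searching, one character at a time; each
    // push_back also has to keep the string null-terminated.
    std::string line1;
    for (const char* p = buf; p < end && *p != '\n'; ++p) {
        line1.push_back(*p);
    }

    // Strategy 2: search first with memchr, then assign in one go,
    // which ends up as a single memcpy/memmove inside the library.
    const char* nl = (const char*)memchr(buf, '\n', end - buf);
    if (!nl) {
        nl = end;
    }
    std::string line2;
    line2.assign(buf, nl - buf);

    printf("%s | %s\n", line1.c_str(), line2.c_str());
}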
If you are interested, this is the assembly code I finally traced into on my Mac (comments are my analysis; you may need to scroll horizontally to see them all):
libsystem_c.dylib`memcpy$VARIANT$sse42:
 0x7fff9291fcbd:  pushq  %rbp
 0x7fff9291fcbe:  movq   %rsp, %rbp
 0x7fff9291fcc1:  movq   %rdi, %r11            ; save dest
 0x7fff9291fcc4:  movq   %rdi, %rax
 0x7fff9291fcc7:  subq   %rsi, %rax            ; dest - src
 0x7fff9291fcca:  cmpq   %rdx, %rax
 0x7fff9291fccd:  jb     0x7fff9291fd04        ; dest in (src, src + len)?
 ; Entry condition: dest <= src or dest >= src + len; copy starts from front
 0x7fff9291fccf:  cmpq   $80, %rdx
 0x7fff9291fcd3:  ja     0x7fff9291fd09        ; len > 128?
 ; Entry condition: len <= 128
 0x7fff9291fcd5:  movl   %edx, %ecx
 0x7fff9291fcd7:  shrl   $2, %ecx              ; len / 4
 0x7fff9291fcda:  je     0x7fff9291fcec        ; len < 4?
 0x7fff9291fcdc:  movl   (%rsi), %eax          ; 4-byte read
 0x7fff9291fcde:  addq   $4, %rsi              ; src <- src + 4
 0x7fff9291fce2:  movl   %eax, (%rdi)          ; 4-byte write
 0x7fff9291fce4:  addq   $4, %rdi              ; dest <- dest + 4
 0x7fff9291fce8:  decl   %ecx
 0x7fff9291fcea:  jne    0x7fff9291fcdc        ; more 4-byte blocks?
 ; Entry condition: len < 4
 0x7fff9291fcec:  andl   $3, %edx
 0x7fff9291fcef:  je     0x7fff9291fcff        ; len == 0?
 0x7fff9291fcf1:  movb   (%rsi), %al           ; 1-byte read
 0x7fff9291fcf3:  incq   %rsi                  ; src <- src + 1
 0x7fff9291fcf6:  movb   %al, (%rdi)           ; 1-byte write
 0x7fff9291fcf8:  incq   %rdi                  ; dest <- dest + 1
 0x7fff9291fcfb:  decl   %edx
 0x7fff9291fcfd:  jne    0x7fff9291fcf1        ; more bytes?
 0x7fff9291fcff:  movq   %r11, %rax            ; restore dest
 0x7fff9291fd02:  popq   %rbp
 0x7fff9291fd03:  ret
 0x7fff9291fd04:  jmpq   0x7fff9291fdb9
 ; Entry condition: len > 128
 0x7fff9291fd09:  movl   %edi, %ecx
 0x7fff9291fd0b:  negl   %ecx
 0x7fff9291fd0d:  andl   $15, %ecx             ; 16 - dest % 16
 0x7fff9291fd10:  je     0x7fff9291fd22        ; dest 16-byte aligned?
 0x7fff9291fd12:  subl   %ecx, %edx            ; adjust len
 0x7fff9291fd14:  movb   (%rsi), %al           ; one-byte read
 0x7fff9291fd16:  incq   %rsi                  ; src <- src + 1
 0x7fff9291fd19:  movb   %al, (%rdi)           ; one-byte write
 0x7fff9291fd1b:  incq   %rdi                  ; dest <- dest + 1
 0x7fff9291fd1e:  decl   %ecx
 0x7fff9291fd20:  jne    0x7fff9291fd14        ; until dest is aligned
 ; Entry condition: dest is 16-byte aligned
 0x7fff9291fd22:  movq   %rdx, %rcx            ; len
 0x7fff9291fd25:  andl   $63, %edx             ; len % 64
 0x7fff9291fd28:  andq   $-64, %rcx            ; len <- align64(len)
 0x7fff9291fd2c:  addq   %rcx, %rsi            ; src <- src + len
 0x7fff9291fd2f:  addq   %rcx, %rdi            ; dest <- dest + len
 0x7fff9291fd32:  negq   %rcx                  ; len <- -len
 0x7fff9291fd35:  testl  $15, %esi
 0x7fff9291fd3b:  jne    0x7fff9291fd80        ; src not 16-byte aligned?
 0x7fff9291fd3d:  jmp    0x7fff9291fd40
 0x7fff9291fd3f:  nop
 ; Entry condition: both src and dest are 16-byte aligned
 0x7fff9291fd40:  movdqa (%rsi,%rcx), %xmm0    ; aligned 16-byte read
 0x7fff9291fd45:  movdqa 16(%rsi,%rcx), %xmm1
 0x7fff9291fd4b:  movdqa 32(%rsi,%rcx), %xmm2
 0x7fff9291fd51:  movdqa 48(%rsi,%rcx), %xmm3
 0x7fff9291fd57:  movdqa %xmm0, (%rdi,%rcx)    ; aligned 16-byte write
 0x7fff9291fd5c:  movdqa %xmm1, 16(%rdi,%rcx)
 0x7fff9291fd62:  movdqa %xmm2, 32(%rdi,%rcx)
 0x7fff9291fd68:  movdqa %xmm3, 48(%rdi,%rcx)
 0x7fff9291fd6e:  addq   $64, %rcx
 0x7fff9291fd72:  jne    0x7fff9291fd40        ; more 64-byte blocks?
 0x7fff9291fd74:  jmpq   0x7fff9291fcd5
 0x7fff9291fd79:  nopl   (%rax)                ; 7-byte nop
 ; Entry condition: src is NOT 16-byte aligned but dest is
 0x7fff9291fd80:  movdqu (%rsi,%rcx), %xmm0    ; unaligned 16-byte read
 0x7fff9291fd85:  movdqu 16(%rsi,%rcx), %xmm1
 0x7fff9291fd8b:  movdqu 32(%rsi,%rcx), %xmm2
 0x7fff9291fd91:  movdqu 48(%rsi,%rcx), %xmm3
 0x7fff9291fd97:  movdqa %xmm0, (%rdi,%rcx)    ; aligned 16-byte write
 0x7fff9291fd9c:  movdqa %xmm1, 16(%rdi,%rcx)
 0x7fff9291fda2:  movdqa %xmm2, 32(%rdi,%rcx)
 0x7fff9291fda8:  movdqa %xmm3, 48(%rdi,%rcx)
 0x7fff9291fdae:  addq   $64, %rcx
 0x7fff9291fdb2:  jne    0x7fff9291fd80        ; more 64-byte blocks?
 0x7fff9291fdb4:  jmpq   0x7fff9291fcd5
 ; Entry condition: dest > src and dest < src + len; copy starts from back
 0x7fff9291fdb9:  addq   %rdx, %rsi            ; src <- src + len
 0x7fff9291fdbc:  addq   %rdx, %rdi            ; dest <- dest + len
 0x7fff9291fdbf:  cmpq   $80, %rdx
 0x7fff9291fdc3:  ja     0x7fff9291fdf6        ; len > 128?
 ; Entry condition: len < 128
 0x7fff9291fdc5:  movl   %edx, %ecx
 0x7fff9291fdc7:  shrl   $3, %ecx              ; len / 8
 0x7fff9291fdca:  je     0x7fff9291fdde        ; len < 8?
 ; Entry condition: len >= 8
 0x7fff9291fdcc:  subq   $8, %rsi              ; src <- src - 8
 0x7fff9291fdd0:  movq   (%rsi), %rax          ; 8-byte read
 0x7fff9291fdd3:  subq   $8, %rdi              ; dest <- dest - 8
 0x7fff9291fdd7:  movq   %rax, (%rdi)          ; 8-byte write
 0x7fff9291fdda:  decl   %ecx
 0x7fff9291fddc:  jne    0x7fff9291fdcc        ; until len < 8
 ; Entry condition: len < 8
 0x7fff9291fdde:  andl   $7, %edx
 0x7fff9291fde1:  je     0x7fff9291fdf1        ; len == 0?
 0x7fff9291fde3:  decq   %rsi                  ; src <- src - 1
 0x7fff9291fde6:  movb   (%rsi), %al           ; 1-byte read
 0x7fff9291fde8:  decq   %rdi                  ; dest <- dest - 1
 0x7fff9291fdeb:  movb   %al, (%rdi)           ; 1-byte write
 0x7fff9291fded:  decl   %edx
 0x7fff9291fdef:  jne    0x7fff9291fde3        ; more bytes?
 0x7fff9291fdf1:  movq   %r11, %rax            ; restore dest
 0x7fff9291fdf4:  popq   %rbp
 0x7fff9291fdf5:  ret
 ; Entry condition: len > 128
 0x7fff9291fdf6:  movl   %edi, %ecx
 0x7fff9291fdf8:  andl   $15, %ecx
 0x7fff9291fdfb:  je     0x7fff9291fe0e        ; dest 16-byte aligned?
 0x7fff9291fdfd:  subq   %rcx, %rdx            ; adjust len
 0x7fff9291fe00:  decq   %rsi                  ; src <- src - 1
 0x7fff9291fe03:  movb   (%rsi), %al           ; one-byte read
 0x7fff9291fe05:  decq   %rdi                  ; dest <- dest - 1
 0x7fff9291fe08:  movb   %al, (%rdi)           ; one-byte write
 0x7fff9291fe0a:  decl   %ecx
 0x7fff9291fe0c:  jne    0x7fff9291fe00        ; until dest is aligned
 ; Entry condition: dest is 16-byte aligned
 0x7fff9291fe0e:  movq   %rdx, %rcx            ; len
 0x7fff9291fe11:  andl   $63, %edx             ; len % 64
 0x7fff9291fe14:  andq   $-64, %rcx            ; len <- align64(len)
 0x7fff9291fe18:  subq   %rcx, %rsi            ; src <- src - len
 0x7fff9291fe1b:  subq   %rcx, %rdi            ; dest <- dest - len
 0x7fff9291fe1e:  testl  $15, %esi
 0x7fff9291fe24:  jne    0x7fff9291fe61        ; src 16-byte aligned?
 ; Entry condition: both src and dest are 16-byte aligned
 0x7fff9291fe26:  movdqa -16(%rsi,%rcx), %xmm0 ; aligned 16-byte read
 0x7fff9291fe2c:  movdqa -32(%rsi,%rcx), %xmm1
 0x7fff9291fe32:  movdqa -48(%rsi,%rcx), %xmm2
 0x7fff9291fe38:  movdqa -64(%rsi,%rcx), %xmm3
 0x7fff9291fe3e:  movdqa %xmm0, -16(%rdi,%rcx) ; aligned 16-byte write
 0x7fff9291fe44:  movdqa %xmm1, -32(%rdi,%rcx)
 0x7fff9291fe4a:  movdqa %xmm2, -48(%rdi,%rcx)
 0x7fff9291fe50:  movdqa %xmm3, -64(%rdi,%rcx)
 0x7fff9291fe56:  subq   $64, %rcx
 0x7fff9291fe5a:  jne    0x7fff9291fe26        ; more 64-byte blocks?
 0x7fff9291fe5c:  jmpq   0x7fff9291fdc5
 ; Entry condition: src is NOT 16-byte aligned but dest is
 0x7fff9291fe61:  movdqu -16(%rsi,%rcx), %xmm0 ; unaligned 16-byte read
 0x7fff9291fe67:  movdqu -32(%rsi,%rcx), %xmm1
 0x7fff9291fe6d:  movdqu -48(%rsi,%rcx), %xmm2
 0x7fff9291fe73:  movdqu -64(%rsi,%rcx), %xmm3
 0x7fff9291fe79:  movdqa %xmm0, -16(%rdi,%rcx) ; aligned 16-byte write
 0x7fff9291fe7f:  movdqa %xmm1, -32(%rdi,%rcx)
 0x7fff9291fe85:  movdqa %xmm2, -48(%rdi,%rcx)
 0x7fff9291fe8b:  movdqa %xmm3, -64(%rdi,%rcx)
 0x7fff9291fe91:  subq   $64, %rcx
 0x7fff9291fe95:  jne    0x7fff9291fe61        ; more 64-byte blocks?
 0x7fff9291fe97:  jmpq   0x7fff9291fdc5
I am happy that I can take advantage of such optimizations, but do not need to write such code on my own—there are so many different cases to deal with!
Of course, nothing is simple regarding performance. More tests revealed more facts that are interesting and/or surprising:
- While libc++ (it is the library, not the compiler, that matters here) seems to ignore sync_with_stdio completely, it makes a big difference in libstdc++. The same function call gets a more than 10x performance improvement when the istream_line_reader test program is compiled with GCC (which uses libstdc++), from ~28 MB/s to ~390 MB/s. It shows that I had made a wrong assumption! Interestingly, reading from stdin (piped from the pv tool) is slightly faster than reading from a file on my Mac (when compiled with GCC).
- On a CentOS 6.5 Linux system, sync_with_stdio(false) is an even bigger performance win (~23 MB/s vs. ~800 MB/s). Reading from a file directly is faster still, at 1100 MB/s. That totally beats my file_line_reader (~550 MB/s reading a file directly) and mmap_line_reader (~600 MB/s reading a file directly) on the same machine. I was stunned when I first saw this performance difference of nearly 40 times!
So, apart from the slight difference in versatility, the first and simplest form of my line readers is also the best on Linux, while the mmap-based version may be a better implementation on OS X (though your mileage may vary depending on the combination of OS version, compiler, and hardware). Should I be happy, or sad?
You can find the implementation of istream_line_reader among my example code for the ‘C++ and Functional Programming’ presentation, and the implementations of file_line_reader and mmap_line_reader in the Nvwa repository. The test code is as follows:
test_istream_line_reader.cpp:
#include <fstream>
#include <iostream>
#include <string>
#include <getopt.h>
#include <stdio.h>
#include <stdlib.h>
#include "istream_line_reader.h"

using namespace std;

int main(int argc, char* argv[])
{
    char optch;
    while ((optch = getopt(argc, argv, "s")) != EOF) {
        switch (optch) {
        case 's':
            cin.sync_with_stdio(false);
            break;
        }
    }
    if (!(optind == argc || optind == argc - 1)) {
        fprintf(stderr, "Only one file name can be specified\n");
        exit(1);
    }
    istream* is = nullptr;
    ifstream ifs;
    if (optind == argc) {
        is = &cin;
    } else {
        ifs.open(argv[optind]);
        if (!ifs) {
            fprintf(stderr, "Cannot open file '%s'\n", argv[optind]);
            exit(1);
        }
        is = &ifs;
    }
    for (auto& line : istream_line_reader(*is)) {
        puts(line.c_str());
    }
}
test_file_line_reader.cpp:
#include <stdio.h>
#include <stdlib.h>
#include <nvwa/file_line_reader.h>

using nvwa::file_line_reader;

int main(int argc, char* argv[])
{
    FILE* fp = stdin;
    if (argc == 2) {
        fp = fopen(argv[1], "r");
        if (!fp) {
            fprintf(stderr, "Cannot open file '%s'\n", argv[1]);
            exit(1);
        }
    }
    file_line_reader reader(fp, '\n',
                            file_line_reader::no_strip_delimiter);
    for (auto& line : reader) {
        fputs(line, stdout);
    }
}
test_mmap_line_reader.cpp:
#include <stdio.h>
#include <stdlib.h>
#include <stdexcept>
#include <nvwa/mmap_line_reader.h>

using nvwa::mmap_line_reader;

int main(int argc, char* argv[])
{
    if (argc != 2) {
        fprintf(stderr, "A file name shall be provided\n");
        exit(1);
    }
    try {
        mmap_line_reader reader(argv[1], '\n',
                                mmap_line_reader::no_strip_delimiter);
        for (auto& str : reader) {
            fputs(str.c_str(), stdout);
        }
    }
    catch (std::runtime_error& e) {
        puts(e.what());
    }
}