Whilst working on an upcoming feature for our Arthor search system (the topic of another future blogpost!), I ran the feature through some performance profiling tools, partly out of curiosity and partly to check I hadn’t done anything too horrendous :).
Looking through the flame chart for this new feature showed mostly what I was expecting, as well as an unexpected visitor: the equality operator (`operator==`) for our `AtomType` struct. For the sake of argument, it looks roughly like this:
// in AtomType.h
struct AtomType {
    unsigned char element;
    unsigned char isotope;
    char charge;
    unsigned char implicit_valence;
    unsigned char explicit_valence;
    unsigned char implicit_hydrogens;
    unsigned char explicit_hydrogens;
    bool is_aromatic;

    bool operator==(const AtomType& other) const;
};
// in AtomType.cpp
#include "AtomType.h"

bool AtomType::operator==(const AtomType &other) const {
    return ((this->element == other.element) &&
            (this->isotope == other.isotope) &&
            (this->charge == other.charge) &&
            (this->implicit_valence == other.implicit_valence) &&
            (this->explicit_valence == other.explicit_valence) &&
            (this->implicit_hydrogens == other.implicit_hydrogens) &&
            (this->explicit_hydrogens == other.explicit_hydrogens) &&
            (this->is_aromatic == other.is_aromatic));
}
The method is nothing fancy, just going through each field in the struct and checking that their values are equal. How bad can this be?
Digging down
To try and understand why this was taking long enough to show up in a profile, I turned to Compiler Explorer to view the x86 assembly generated by the compiler for this function. All compilers we looked at produced similar output (this particular listing is from clang 9.0):
_ZNK8AtomTypeeqERKS_:
    mov  al, byte ptr [rdi]
    cmp  al, byte ptr [rsi]
    jne  .LBB0_8
    mov  al, byte ptr [rdi + 1]
    cmp  al, byte ptr [rsi + 1]
    jne  .LBB0_8
    mov  al, byte ptr [rdi + 2]
    cmp  al, byte ptr [rsi + 2]
    jne  .LBB0_8
    mov  al, byte ptr [rdi + 3]
    cmp  al, byte ptr [rsi + 3]
    jne  .LBB0_8
    mov  al, byte ptr [rdi + 4]
    cmp  al, byte ptr [rsi + 4]
    jne  .LBB0_8
    mov  al, byte ptr [rdi + 5]
    cmp  al, byte ptr [rsi + 5]
    jne  .LBB0_8
    mov  al, byte ptr [rdi + 6]
    cmp  al, byte ptr [rsi + 6]
    jne  .LBB0_8
    mov  al, byte ptr [rdi + 7]
    cmp  al, byte ptr [rsi + 7]
    sete al
    ret
.LBB0_8:
    xor  eax, eax
    ret
Here we can see that the processor is being instructed to load a single byte (`mov al, byte ptr [rdi]`) and compare (`cmp`) it to another byte for every single field in the struct. Should any comparison fail, the code jumps to the `.LBB0_8` label and exits as false (`xor eax, eax` / `ret`); otherwise the result of the 8th comparison is given as the return value (`sete al` / `ret`).
Recognising that the 8 members of the struct are contiguous in memory, this seems like a laborious way to approach the problem. Instead, we can check that they are all equal in a single call by using `memcmp`.
// in AtomType.cpp
#include "AtomType.h"
#include <cstring>

bool AtomType::operator==(const AtomType &other) const {
    return memcmp(this, &other, sizeof(AtomType)) == 0;
}
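As a quick sanity check that nothing has changed behaviourally, here's a minimal usage sketch (the field values are made up for illustration):

#include <cassert>
#include "AtomType.h"

int main() {
    AtomType a{6, 0, 0, 0, 4, 0, 0, true};  // made-up field values
    AtomType b = a;                          // byte-for-byte copy
    assert(a == b);                          // every byte matches
    b.charge = -1;                           // perturb a single field
    assert(!(a == b));                       // one differing byte breaks equality
    return 0;
}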
Better yet, this function call gets inlined and the assembly produced by all compilers now shows that this operation is being done in far fewer instructions and without any branching!
_ZNK8AtomTypeeqERKS_:
    xor  eax, eax
    mov  rdx, QWORD PTR [rdi]
    mov  rcx, QWORD PTR [rsi]
    cmp  rdx, rcx
    sete al
    ret
Here all 8 bytes of each struct are loaded (`mov dst, QWORD PTR [src]`) and compared (`cmp rdx, rcx`) in a single pass. (Checking the instruction tables over on Agner Fog's website will let you confirm that these instructions take the same time as their single-byte equivalents.)
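Expressed back in C++, the compiler has effectively rewritten our operator as something like the following sketch (the function name `equal8` is mine; `memcpy` is the portable way to spell the single 8-byte load the compiler performs):

#include <cstdint>
#include <cstring>

static_assert(sizeof(AtomType) == 8, "sketch assumes the 8 byte struct");

bool equal8(const AtomType& a, const AtomType& b) {
    std::uint64_t lhs, rhs;
    std::memcpy(&lhs, &a, sizeof(lhs));  // load all 8 bytes in one go
    std::memcpy(&rhs, &b, sizeof(rhs));
    return lhs == rhs;                   // a single 64-bit compare
}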
With uncommonly sized structs
Things get more interesting when the struct is larger than the simple 8-byte struct originally presented. If we instead have an awkwardly sized 11-byte struct…
// in AtomType.h
struct AtomType {
    unsigned char element;
    unsigned char isotope;
    char charge;
    unsigned char implicit_valence;
    unsigned char explicit_valence;
    unsigned char implicit_hydrogens;
    unsigned char explicit_hydrogens;
    bool is_aromatic;
    unsigned char fieldA, fieldB, fieldC;

    bool operator==(const AtomType& other) const;
};
We get different assembly from each compiler as they each try to optimise how to instruct the processor to perform the `memcmp` call defined in our implementation. Firstly, gcc 9.2:
_ZNK8AtomTypeeqERKS_:
    mov   rax, QWORD PTR [rsi]
    cmp   QWORD PTR [rdi], rax
    je    .L5
.L2:
    mov   eax, 1
.L3:
    test  eax, eax
    sete  al
    ret
.L5:
    movzx eax, WORD PTR [rsi+8]
    cmp   WORD PTR [rdi+8], ax
    jne   .L2
    movzx eax, BYTE PTR [rsi+10]
    cmp   BYTE PTR [rdi+10], al
    jne   .L2
    xor   eax, eax
    jmp   .L3
Here gcc first checks the first 8 bytes (`mov QWORD PTR`, `cmp`), jumps down to label `.L5`, then checks the next 2 bytes (`WORD PTR`), and finally the last byte. Any failed comparison jumps to label `.L2`, which returns 0 (false). This splits the 11 bytes into much more computer-friendly (i.e. addressable) chunks of 8, 2 and 1, requiring 3 loads and comparisons in total.
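Written back out in C++, gcc's strategy amounts to something like this sketch (the 8 + 2 + 1 decomposition; the function name is mine):

#include <cstdint>
#include <cstring>

static_assert(sizeof(AtomType) == 11, "sketch assumes the 11 byte struct");

bool equal11_chunked(const AtomType& a, const AtomType& b) {
    const char* pa = reinterpret_cast<const char*>(&a);
    const char* pb = reinterpret_cast<const char*>(&b);
    std::uint64_t a8, b8;
    std::uint16_t a2, b2;
    std::memcpy(&a8, pa, 8);      // bytes 0-7
    std::memcpy(&b8, pb, 8);
    if (a8 != b8) return false;
    std::memcpy(&a2, pa + 8, 2);  // bytes 8-9
    std::memcpy(&b2, pb + 8, 2);
    if (a2 != b2) return false;
    return pa[10] == pb[10];      // byte 10
}

Can this be improved upon? Enter clang 9.0: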
_ZNK8AtomTypeeqERKS_:
    mov  rax, qword ptr [rdi]
    mov  rcx, qword ptr [rdi + 3]
    xor  rax, qword ptr [rsi]
    xor  rcx, qword ptr [rsi + 3]
    or   rcx, rax
    sete al
    ret
Here we can see that clang is instead doing 2 `qword ptr` movs, which initially looks like 16 bytes of data(?) until we notice that the second load starts only 3 bytes after the first, so in effect the middle 5 bytes of the struct are loaded twice. Next, this memory is compared using `xor` (instead of `cmp`), which gives the same equality information but leaves its result in a general-purpose register (as opposed to `cmp`, which stores the result in a special flags register, requiring it to be checked before it is lost!). The results of the two comparisons are then combined using an `or`, and that combined result is returned.
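The same overlapping-load trick looks like this when written out in C++ (again a sketch; the function name is mine):

#include <cstdint>
#include <cstring>

static_assert(sizeof(AtomType) == 11, "sketch assumes the 11 byte struct");

bool equal11_shingled(const AtomType& a, const AtomType& b) {
    const char* pa = reinterpret_cast<const char*>(&a);
    const char* pb = reinterpret_cast<const char*>(&b);
    std::uint64_t a_lo, b_lo, a_hi, b_hi;
    std::memcpy(&a_lo, pa, 8);      // bytes 0-7
    std::memcpy(&b_lo, pb, 8);
    std::memcpy(&a_hi, pa + 3, 8);  // bytes 3-10, overlapping bytes 3-7 above
    std::memcpy(&b_hi, pb + 3, 8);
    // xor is zero only when its operands are equal; or the two results together.
    return ((a_lo ^ b_lo) | (a_hi ^ b_hi)) == 0;
}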
The overlapping read approach is more elegant as only 2 loads are required to read the entire struct, and is now affectionately called “memory shingling” in the office :). Finally we can check out what icc 19.0 will give us:
_ZNK8AtomTypeeqERKS_:
    push rsi
    mov  edx, 11
    call _intel_fast_memcmp
    xor  edx, edx
    test eax, eax
    sete dl
    mov  eax, edx
    pop  rcx
    ret
Ok, so Intel just calls out to `_intel_fast_memcmp`, which isn't very fun for the purposes of this blog. A bit anticlimactic, really.
Conclusions
It's not entirely clear to me why the compiler can't do these optimisations itself. Even if the struct is declared as just having `unsigned char fields[8]` (so struct padding isn't an issue), the compiler will still check each byte individually and won't jump to comparing all the bytes at once.
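To be concrete, this is the kind of definition I mean (a sketch: the struct name and loop are mine, standing in for the field-by-field comparison above):

struct Bytes {
    unsigned char fields[8];

    bool operator==(const Bytes& other) const {
        // Compilers still emit a byte-by-byte compare for this loop
        // rather than a single 8-byte load and compare.
        for (int i = 0; i < 8; ++i)
            if (fields[i] != other.fields[i])
                return false;
        return true;
    }
};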
This little find put me on a bit of an optimisation binge, which has turned up some other fun hacks elsewhere in our codebase too! Hopefully these will be filtering into our releases this year (alongside the new Arthor features I'll blog about soon!).
Thanks for reading my first blog! – Richard
Comments

I realize this post is a year old, and perhaps not even relevant by now, but just in case…
The compiler can't safely optimize the function the way it's implemented, as `this` and `other` have different types (a pointer versus a reference). If you make the function a non-member instead, declaring it like `bool operator==(const AtomType& lv, const AtomType& rv)`, the compiler has a lot more freedom to optimize.
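For completeness, a minimal sketch of that non-member form:

#include <cstring>

// Both operands now have the same reference type, so the compiler
// can reason about them symmetrically.
bool operator==(const AtomType& lhs, const AtomType& rhs) {
    return std::memcmp(&lhs, &rhs, sizeof(AtomType)) == 0;
}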