I'm curious how you guys/gals deal with practical optimization of your code. I'm not talking about the larger question of when to start optimizing or what needs to be optimized, and I'm not talking about compiler optimization flags either. Rather: once you've decided something needs to be optimized at a level you control in the code itself, you can know the time and space complexity of the code you write if it uses no dependencies... But for example, say I have code like this:
bool search_by_length(char *const arr, size_t start_pos, size_t end_pos,
                      char *const term, char *work_buf)
{
    size_t const length = end_pos - start_pos;

    if (!work_buf)
    {
        ERR("search_by_length: Null work_buf provided");
        exit(EXIT_FAILURE);
    }
    if (length >= 500) /* work_buf is 500 bytes; refuse windows that would overflow it */
    {
        ERR("search_by_length: window larger than work_buf");
        exit(EXIT_FAILURE);
    }
    memset(work_buf, 0, 500);
    memcpy(work_buf, arr + start_pos, length);
    D printf("Searching: %s\n", work_buf);
    return strstr(work_buf, term) != NULL;
}
Now, say this function is called in a for loop n times... The time complexity of this code actually depends on the time complexity of strstr() and memset(), which is not guaranteed to be anything in particular. So, if you needed to optimize this code, what steps would you take? Would you go and look at the specific implementations of strstr() and memset() behind your particular <string.h>? If we implemented our own strstr() and memset(), we could reliably compute the complexity from our code alone, but as we all know, a lot of code uses API calls.
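One option before reaching for the libc internals is to restructure the function so the unknown-complexity calls disappear. A minimal sketch (the name search_window and the memchr/memcmp strategy are my own, not from the original post): searching the [start_pos, end_pos) window in place removes the memset and memcpy entirely, and the explicit loop bound makes the worst case O(window_len * term_len), which you can reason about without knowing your libc's strstr:

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Search for term inside arr[start_pos .. end_pos) without copying.
 * memchr narrows candidates to bytes matching term[0]; memcmp then
 * checks the full term at each candidate. Worst case is
 * O((end_pos - start_pos) * strlen(term)) regardless of libc. */
static bool search_window(const char *arr, size_t start_pos, size_t end_pos,
                          const char *term)
{
    size_t const term_len = strlen(term);
    if (term_len == 0)
        return true;                       /* empty term matches anywhere */
    if (end_pos - start_pos < term_len)
        return false;                      /* window too small to contain term */

    const char *p = arr + start_pos;
    const char *const last = arr + end_pos - term_len; /* last valid match start */
    while (p <= last)
    {
        /* jump to the next occurrence of the first byte of term */
        const char *c = memchr(p, term[0], (size_t)(last - p) + 1);
        if (!c)
            return false;
        if (memcmp(c, term, term_len) == 0)
            return true;
        p = c + 1;
    }
    return false;
}
```

This is the same trade-off you mention below: a few more lines to maintain, in exchange for a complexity bound that lives entirely in your own code.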
I would actually read the original headers and look at the assembly instructions generated for the target machine, because if we need to optimize those calls, guessing is not enough.
To me, adding your own implementations only makes sense if the added complexity (maintenance, cognitive load, obfuscation) matters less than the performance gain.
Otherwise I would look into other compilers and look for possible optimizations from outside tooling, like inlining the instructions and other NFA-to-DFA transformations.