[–] HentaiOjisan 0 points 1 point (+1|-0) ago 

Hmm, you piqued my interest. And thanks for the explanation; I still have a lot to learn about compilers.

So in what situation would an array that the compiler thinks is never going to be read actually be read and need to be zeroed? The only case I can think of is the array living at a hard-coded address and being read through that address instead of through the variable itself. And could you point out an example of code that is insecure because the compiler omits part of it when optimizations are enabled?

I'm not being sarcastic or anything like that; I really want to understand what confuses the compiler, and in which situations. Could you link some documentation about it? Damn, I'm kind of sad I didn't choose computer engineering when I started university.

[–] NotSurvivingLife [S] 0 points 1 point (+1|-0) ago 

For this particular bug?

a) when interfacing with external code that's not C-based, or

b) when combined with other bugs.

For example, suppose you have a bug similar to Heartbleed, where you forget to check a length somewhere and accidentally read past the end of an array. If you make sure to always zero out important data after use (or better yet, replace it with a canary value), that's not the end of the world unless you happen to be actively working with sensitive data when the overread occurs. But if the compiler optimizes out the memset that zeroes the important data, things can be leaked long after you've finished working with them.
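
A contrived sketch of what I mean (not the actual Heartbleed code; the whole program is made up for illustration):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        char password[64];

        /* Read a secret from stdin (stand-in for real key/password handling). */
        if (!fgets(password, sizeof password, stdin))
            return 1;

        /* ... imagine the password being checked/used here ... */

        /* The compiler can see that password is never read again before it
         * goes out of scope, so under optimization this memset is a dead
         * store and may legally be removed -- leaving the secret sitting in
         * memory for a later overread to find. */
        memset(password, 0, sizeof password);

        return 0;
    }

As far as I know, this is exactly why explicit_bzero() (on the BSDs and newer glibc) and C11's optional memset_s() exist: they are specified so that the compiler is not allowed to elide the write.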

Another example:

The Linux kernel compiles with certain optimizations disabled. In particular, it uses "-fno-delete-null-pointer-checks", which does exactly what it says on the tin. Why? Because of a couple of incidents where code along the lines of the following:

    struct foo *s = ...;        /* may be NULL */
    int x = s->f;               /* dereference happens before the check */
    if (!s) return ERROR;       /* GCC deletes this: s was already dereferenced */

had the null check removed (because s was already dereferenced, the compiler concluded the null check was redundant. Right? Except that in kernel-land a dereference of address 0 can actually succeed, so this is "safe" in practice.). This was a bug in the Linux kernel, but it's the kind of bug that is virtually impossible to check for. And hence the Linux kernel just tells GCC not to optimize such null checks out. Full stop. It loses the potential benefits everywhere, because the potential problems in some cases are too severe.
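
The source-level fix is of course just to do the check before the dereference. A contrived, self-contained sketch (struct foo, ERROR, and get_foo() are all made up here, not the actual kernel code):

    #include <stdio.h>

    #define ERROR (-1)

    struct foo { int f; };

    /* Made-up stand-in for whatever produced the pointer in the real code. */
    static struct foo *get_foo(void)
    {
        return NULL;
    }

    static int read_field(void)
    {
        struct foo *s = get_foo();
        if (!s)             /* check first...                                 */
            return ERROR;
        return s->f;        /* ...then dereference: nothing for GCC to delete */
    }

    int main(void)
    {
        printf("%d\n", read_field());
        return 0;
    }

But you can't audit every dereference in a codebase that size by hand, which is why the kernel falls back on the compiler flag instead.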

For another example of compilers and formally undefined behaviour, see here.

Again, these are all things that are formally bugs in the code being compiled. The compiler is adhering to the standard. But the problem is that trying to figure out what is and isn't undefined behavior is so complex that these sorts of mistakes are made all the time by the best of us. And these mistakes often have nasty consequences. And hence, it's the standard that's the problem.
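
To give one more flavour of how subtle this gets, here's my own toy example (nothing to do with the kernel): signed integer overflow is undefined, so the compiler is allowed to assume it never happens.

    #include <limits.h>
    #include <stdio.h>

    /* Intended as an overflow check -- but signed overflow is undefined
     * behaviour, so the compiler may assume x + 1 never wraps and fold
     * this comparison to "always false", deleting the check entirely. */
    static int will_overflow(int x)
    {
        return x + 1 < x;
    }

    int main(void)
    {
        /* Typically prints 1 at -O0 and 0 at -O2 with GCC or Clang. */
        printf("%d\n", will_overflow(INT_MAX));
        return 0;
    }

The code looks like a perfectly reasonable safety check, and it even appears to work in unoptimized builds, which is exactly what makes this class of mistake so easy to ship.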

[–] HentaiOjisan 0 points 1 point (+1|-0) ago 

Ohh, I see! That makes a lot of sense! So the point is to overwrite data after using it, to avoid leaking it through another bug (for example a passphrase or something similar). But because you might not use that data again, the compiler will skip that part of the code.

I also didn't know that a null dereference can succeed in the kernel. I expected it to panic or something if the pointer was NULL. I'm reading about it now, and it seems it's because physical memory does have an address 0, so you can actually read or write to it if you're not in user space. It actually makes sense: on a microcontroller I'm programming, that address is reserved for a pointer to the top of the stack.

Thanks again!!