[–] rdnetto 0 points 1 point (+1|-0) ago 

The problem I see with this is that it's pretty much the opposite of "make the common case fast/easy". It's basically equivalent to using assembly, or maybe C with -O0, and would be extremely verbose for the case where you wanted to enable all but one optimization.

I think doing it the opposite way around would make more sense - there should be a special qualifier for variables that disables optimizations for them, and maybe a block structure that disables optimizations inside it as well. You could even parameterize that qualifier/block in the optimizations it would disallow.

[–] NotSurvivingLife [S] 0 points 1 point (+1|-0) ago  (edited ago)

I agree with making the common case easy.

Unfortunately, I don't see any way of doing so that doesn't miss the point entirely, your thought included.

The problem is when people introduce additional optimizations down the line. (And new optimizations will be added.) There's no way of knowing in advance which new optimizations are safe and which are unsafe. Alas, this is exactly the problem we are trying to solve - namely, writing something that is safe not only now, but will remain safe for as long as the language standard holds.

You could have optimizations tied to language versions - but then the set of legal optimizations hinges on the language standard, which isn't exactly optimal.

[–] rdnetto 0 points 1 point (+1|-0) ago 

> The problem is when people introduce additional optimizations down the line. (And new optimizations will be added.) There's no way of knowing in advance which new optimizations are safe and which are unsafe. Alas, this is exactly the problem we are trying to solve - namely, writing something that is safe not only now, but will remain safe for as long as the language standard holds.
>
> You could have optimizations tied to language versions - but then the set of legal optimizations hinges on the language standard, which isn't exactly optimal.

That's a good point. I think restricting it based on language version is probably the only sane solution. Large projects normally make adoption of new versions of the language explicit anyway, so when you bump the version you could at least test for those kinds of regressions.
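This lines up with how C projects already pin a language standard via compiler flags, so the version bump is an explicit, reviewable change. A sketch with an ordinary build invocation (the `-std` flags are real `cc` options; the file name is illustrative):

```shell
# -std pins the language version the compiler may assume. Optimizations
# whose legality depends on newer standard semantics would only unlock
# when the project deliberately bumps this flag.
cc -std=c99 -O2 -c module.c
# Later, an explicit (and testable) version bump:
cc -std=c11 -O2 -c module.c
```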

> Unfortunately, I don't see any way of doing so that doesn't miss the point entirely, your thought included.

Agreed. The problem we have is that developers cannot predict what the compiler will do, because they do not understand how it works. You proposed making everything explicit, which would effectively reduce the compiler to a simple translation layer rather than an optimizer (in other words, an assembler). I proposed blacklisting instead of whitelisting, but that merely empowers the user who already understands the compiler's gotchas, without making those gotchas any more visible.

The underlying issue is one of semantic gap - the difference between how the human and the compiler interpret the code. For example, the compiler assumes that variables are only used to store data for consumption within the program, while the human intends them to be read externally or wiped securely.

The only way to close that gap is to make the language more sophisticated and to force the user to annotate the code appropriately. For example, Python uses TemporaryFile and NamedTemporaryFile to distinguish the cases where external access to a file is needed, and Rust uses owned/borrowed pointers to enforce memory safety. The problems with this approach are twofold: many developers find such languages constricting (consider how many people struggle with type errors in Haskell), and the complexity of the language is limited by the power of the compiler and (to a lesser extent) the ability of the developers. Despite this, I think it's probably the best approach we have at the moment.