[–] NotSurvivingLife [S] 1 point (+1|-0) ago (edited ago)

I agree with making the common case easy.

Unfortunately, I don't see any way of doing so that doesn't miss the point entirely, your suggestion included.

The problem is when people introduce additional optimizations down the line. (And new optimizations will be added.) There's no way of knowing in advance which new optimizations are safe and which are unsafe. And that is exactly the problem we are trying to solve - namely, writing something that is safe not only now, but will remain safe for as long as the language standard does.

You could have optimizations tied to language versions - but then you have optimizations hinging on the language standard, which isn't exactly optimal.

[–] rdnetto 1 point (+1|-0) ago

> The problem is when people introduce additional optimizations down the line. (And new optimizations will be added.) There's no way of knowing in advance which new optimizations are safe and which are unsafe. And that is exactly the problem we are trying to solve - namely, writing something that is safe not only now, but will remain safe for as long as the language standard does.
>
> You could have optimizations tied to language versions - but then you have optimizations hinging on the language standard, which isn't exactly optimal.

That's a good point. I think restricting it based on language version is probably the only sane solution. Large projects normally make adoption of new versions of the language explicit anyway, so when you bump the version you could at least test for those kinds of regressions.
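
As a toy illustration of what that explicit adoption can look like (C is used only as an example here; the flag and the exact version check are illustrative, not anything specific to the proposal above):

```c
/* Sketch: make the language version a project was written and tested against
 * explicit in the source, so that adopting a newer standard is a deliberate,
 * reviewable change rather than a silent side effect of a compiler upgrade.
 * Build with e.g.:  cc -std=c11 -O2 pin_version.c
 */
#include <stdio.h>

#if !defined(__STDC_VERSION__) || __STDC_VERSION__ < 201112L
#error "This code was written and tested against C11 or later; refusing to build."
#endif

int main(void) {
    /* __STDC_VERSION__ is 201112L for C11, 201710L for C17; a stricter
     * project could require an exact match, forcing every bump to be explicit. */
    printf("building against C standard %ldL\n", (long)__STDC_VERSION__);
    return 0;
}
```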

> Unfortunately, I don't see any way of doing so that doesn't miss the point entirely, your suggestion included.

Agreed. The problem we have is that developers cannot predict what the compiler will do, because they do not understand how it works. You proposed making everything explicit, which would effectively reduce the compiler to a simple translation layer rather than an optimizer (in other words, an assembler). I proposed blacklisting instead of whitelisting, but that merely empowers the user who already understands the compiler's gotchas without making those gotchas any more visible.

The underlying issue is one of semantic gap - the difference between how the human and the compiler interpret the code. e.g. the compiler thinks that variables are only used to store data for consumption within the program, while the human intends them to be used externally or wiped securely.

The only way to close that gap is to make the language more complex/sophisticated and to force the user to annotate the code appropriately. e.g. Python uses TemporaryFile and NamedTemporaryFile to distinguish between the cases where external access to a file is needed, and Rust uses owned/borrowed pointers to enforce memory safety.

The problems with this approach are twofold: many developers find such languages constrictive (consider how many people struggle with type errors in Haskell), and the complexity of the language is limited by the power of the compiler and (to a lesser extent) the ability of the developers. Despite this, I think it's probably the best approach we have atm.
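
To make the "wiped securely" example concrete: a buffer that is never read again is dead as far as the optimizer is concerned, so a plain memset before it goes out of scope is a removable dead store. Writing through a volatile pointer is one way of annotating the intent the compiler cannot infer on its own. A minimal C sketch (illustrative only, not a vetted secure-wipe routine; memset_s and explicit_bzero exist for this on some platforms):

```c
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* To the optimizer this is a dead store: nothing reads buf afterwards,
 * so the memset may legally be removed under the as-if rule. */
static void naive_wipe(unsigned char *buf, size_t len) {
    memset(buf, 0, len);
}

/* Volatile accesses count as observable behavior, so these stores must
 * actually be performed. */
static void volatile_wipe(unsigned char *buf, size_t len) {
    volatile unsigned char *p = buf;
    for (size_t i = 0; i < len; i++)
        p[i] = 0;
}

int main(void) {
    unsigned char key[16];
    memset(key, 0xAA, sizeof key);   /* stand-in for real key material */
    /* ... use key ... */
    naive_wipe(key, sizeof key);     /* human intent: wipe; compiler view: dead store */
    volatile_wipe(key, sizeof key);  /* intent made explicit */
    printf("wiped %zu bytes\n", sizeof key);
    return 0;
}
```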

[–] NotSurvivingLife [S] ago 

The problem with that is that it largely precludes having multiple compilers. Not entirely, but it certainly makes things more difficult. A compiler cannot add a new optimization without waiting for a new version of the language standard to come out.

And the entire point of this is to be able to have a language where, regardless of compiler, things are still sane.


This approach would not simplify the compiler. I am not sure how you got the idea that it would strip out all of the machinery for deciding which optimizations to apply, and when. But a simple counterexample to that claim: wrap the entire program in @suggest(*) (or whatever) and you get the classical behavior back.
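
@suggest(*) is of course hypothetical, but existing compilers already offer a rough analogue: per-function attributes that turn optimization off for one region while the optimizer stays intact for everything else. A sketch using GCC/Clang extensions (non-standard C, shown only to make the point that opting regions in or out does not remove the optimizer):

```c
/* Rough existing analogue of per-region optimization control: GCC and Clang
 * can disable optimization for a single function while the rest of the
 * translation unit is optimized normally. */
#include <stdio.h>

#if defined(__clang__)
#  define NO_OPT __attribute__((optnone))
#elif defined(__GNUC__)
#  define NO_OPT __attribute__((optimize("O0")))
#else
#  define NO_OPT /* unknown compiler: the annotation degrades to a no-op */
#endif

/* Compiled without optimization even when the file is built with -O2. */
NO_OPT static int sum_naive(const int *xs, int n) {
    int s = 0;
    for (int i = 0; i < n; i++)
        s += xs[i];
    return s;
}

/* Compiled with whatever the command line asks for - the "classical" behavior. */
static int sum_fast(const int *xs, int n) {
    int s = 0;
    for (int i = 0; i < n; i++)
        s += xs[i];
    return s;
}

int main(void) {
    int xs[] = {1, 2, 3, 4};
    printf("%d %d\n", sum_naive(xs, 4), sum_fast(xs, 4));
    return 0;
}
```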


I fully agree about the semantic gap between compilers and humans. My general point is this: we already have strongly typed languages and the like, but we do not have a language that treats optimization this way. It may be worth exploring simply because it is a potential alternative.