[–] rdnetto ago

> The problem is when people introduce additional optimizations down the line. (And new optimizations will be added.) There's no way of knowing in advance which new optimizations are safe and which are unsafe. Alas, this is exactly the problem we are trying to solve - namely, writing something that is safe not only now but will always be safe as long as the language standard remains unchanged.

> You could have optimizations tied to language versions - but then you have optimizations hinging on the language standard, which isn't exactly optimal.

That's a good point. I think restricting it based on language version is probably the only sane solution. Large projects normally make adoption of new versions of the language explicit anyway, so when you bump the version you could at least test for those kinds of regressions.

> Unfortunately, I don't see any way of doing so that doesn't miss the point entirely, your thought included.

Agreed. The problem we have is that developers cannot predict what the compiler will do, because they do not understand how it works. You proposed making everything explicit, which would effectively simplify the compiler to the point that it became a simple translation layer rather than an optimizer (in other words, an assembler). I proposed blacklisting instead of whitelisting, but that merely empowers the users who already understand the compiler's gotchas without making those gotchas any more visible.

The underlying issue is one of semantic gap - the difference between how the human and the compiler interpret the code. For example, the compiler assumes that variables are only used to store data for consumption within the program, while the human intends them to be read externally or wiped securely. The only way to close that gap is to make the language more complex/sophisticated and force the user to annotate the code appropriately. Python, for instance, uses TemporaryFile and NamedTemporaryFile to distinguish the cases where external access to a file is needed, and Rust uses owned/borrowed pointers to enforce memory safety.

The problems with this approach are twofold: many developers find such languages constrictive (consider how many people struggle with type errors in Haskell), and the complexity of the language is limited by the power of the compiler and (to a lesser extent) the ability of the developers. Despite this, I think it's probably the best approach we have at the moment.
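The secure-wipe case is the easiest way to make that gap concrete. Below is a minimal Rust sketch of my own (not from this thread; real code should use a vetted crate such as zeroize, and whether the naive version actually gets elided depends on the compiler and optimization level): the plain loop expresses nothing the optimizer is obliged to respect, while the volatile version annotates the intent explicitly.

```rust
use std::ptr;

/// Naive wipe: the compiler only sees stores that nothing reads afterwards,
/// so under optimization it may delete the loop entirely (dead-store
/// elimination). The human's intent - "this secret must not linger in
/// memory" - is invisible to it.
fn wipe_naive(secret: &mut [u8]) {
    for byte in secret.iter_mut() {
        *byte = 0;
    }
}

/// Annotated wipe: volatile writes tell the compiler these stores have an
/// observable effect beyond the program's own reads, so it must keep them.
/// This is the "annotate the code to close the gap" approach.
fn wipe_volatile(secret: &mut [u8]) {
    for byte in secret.iter_mut() {
        // SAFETY: `byte` is a valid, aligned, exclusive reference.
        unsafe { ptr::write_volatile(byte, 0) };
    }
}

fn main() {
    let mut key = *b"hunter2-hunter2!";
    wipe_naive(&mut key);    // may be optimized away
    wipe_volatile(&mut key); // the stores are guaranteed to happen
    assert!(key.iter().all(|&b| b == 0));
}
```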


[–] NotSurvivingLife [S] ago 

The problem with that is that it largely precludes having multiple compilers. Not entirely, but it certainly makes things more difficult. A compiler cannot add a new optimization without waiting for a new version of the language standard to come out.

And the entire point of this is to be able to have a language where, regardless of compiler, things are still sane.


This approach would not simplify the compiler. I am not sure how you got the idea that this would strip out all of the machinery for determining what optimizations to do when. But a simple counterexample to that claim: you wrap the entire program in @suggest(*) (or whatever) and you get classical behavior.


I agree fully with the semantic gap between compilers and humans. My general point is this. We already have strongly typed languages, etc. We do not have a language that deals with things in this way. It may be worth exploring simply because it is a potential alternative.


[–] rdnetto ago

> The problem with that is that it largely precludes having multiple compilers. Not entirely, but it certainly makes things more difficult. A compiler cannot add a new optimization without waiting for a new version of the language standard to come out.

You'll run into that in any language where optimizations are specified, since the optimizations available depend on the way the compiler is implemented. The only way to make optimizations explicit without specifying them as part of the language is to write assembly (or something like it).

> This approach would not simplify the compiler. I am not sure how you got the idea that this would strip out all of the machinery for determining what optimizations to do when. But a simple counterexample to that claim: you wrap the entire program in @suggest(*) (or whatever) and you get classical behavior.

I had a mental model where the optimizations and machine code generation were done by different tools, but when I tried to explain I realised they were just different layers of the compiler. My bad.

> I agree fully with the semantic gap between compilers and humans. My general point is this. We already have strongly typed languages, etc. We do not have a language that deals with things in this way. It may be worth exploring simply because it is a potential alternative.

Agreed. Research languages are always fun to experiment with, even if they never get used for anything. I suspect the area you'd end up exploring here is which structures are best annotated with optimizations - that is, whether to annotate imperative blocks of code (if statements, loops, etc.) or to annotate the values/variables themselves (which would enable a more functional approach). I guess you'd need both if you wanted to be able to disable all optimizations, but I'm having a hard time thinking of situations where something like TCO causes problems (apart from debuggers and stack traces).
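For what it's worth, mainstream compilers already expose crude versions of both granularities, which might be a useful starting point. Here is a small Rust sketch of my own (not anything from the thread, and only a rough analog of the hypothetical language, where these would presumably be first-class): a function-level attribute that constrains the optimizer versus a value-level barrier.

```rust
use std::hint::black_box;

// Block/function-level annotation: an attribute constraining how the
// optimizer may treat this whole unit of code (here, forbidding inlining).
#[inline(never)]
fn checksum(data: &[u8]) -> u64 {
    data.iter().map(|&b| b as u64).sum()
}

fn main() {
    let data = vec![1u8; 1024];

    // Value-level annotation: black_box asks the optimizer to treat this
    // particular value as opaque, so the computation feeding it cannot be
    // constant-folded away - the usual trick in benchmarks.
    let sum = checksum(black_box(&data));
    println!("{}", black_box(sum));
}
```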