[–] CanIHazPhD [S] 1 point (+1|-0) ago  (edited ago)

> The wider an operand becomes, the more time it takes to do the computation

I was wondering whether this would lead to a loss of performance. I don't think the switch from 32-bit to 64-bit led to a significant performance loss, but I'm not so sure about that.

The point that it wouldn't help many people is quite true, though.
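For what it's worth, one place where operand width really does show up is integer division: on many x86-64 chips a 64-bit divide takes noticeably longer than a 32-bit one. A minimal micro-benchmark sketch for Linux/POSIX (timings are illustrative only; a serious measurement would need warm-up, pinning, and a tool like perf):

```c
#include <stdio.h>
#include <stdint.h>
#include <time.h>

/* Wall-clock time in nanoseconds via the POSIX monotonic clock. */
static double time_ns(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1e9 + ts.tv_nsec;
}

int main(void) {
    const int N = 10000000;
    /* volatile keeps the compiler from optimizing the loops away */
    volatile uint32_t a32 = 0xdeadbeefu, d32 = 0;
    volatile uint64_t a64 = 0xdeadbeefcafef00dull, d64 = 0;

    double t0 = time_ns();
    for (int i = 0; i < N; i++)
        d32 = a32 / (uint32_t)(i | 3);   /* 32-bit divide */
    double t1 = time_ns();
    for (int i = 0; i < N; i++)
        d64 = a64 / (uint64_t)(i | 3);   /* 64-bit divide */
    double t2 = time_ns();

    printf("32-bit div: %.1f ns/op\n", (t1 - t0) / N);
    printf("64-bit div: %.1f ns/op\n", (t2 - t1) / N);
    (void)d32; (void)d64;
    return 0;
}
```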


[–] svipbo ago 

This is why the x32 ABI was created for Linux. https://en.wikipedia.org/wiki/X32_ABI
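For anyone curious what that looks like in practice, here's a sketch: the x32 ABI keeps the CPU in 64-bit mode with the full register set, but uses 4-byte pointers (and a 4-byte long) to cut memory traffic. The same C file reports different sizes depending on the target:

```c
#include <stdio.h>

int main(void) {
    printf("sizeof(void *)    = %zu\n", sizeof(void *));
    printf("sizeof(long)      = %zu\n", sizeof(long));
    printf("sizeof(long long) = %zu\n", sizeof(long long));
    return 0;
}
```

Built with `gcc -m64 sizes.c`, this prints 8/8/8; built with `gcc -mx32 sizes.c`, it prints 4/4/8. Note that actually running an x32 binary requires a kernel and libc built with x32 support, which many distributions no longer ship.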


[–] CanIHazPhD [S] ago 

This is interesting. I'd never heard of it before! Do you think something like this could be implemented to go from 64-bit to 128-bit (or some other width)?
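Not a full ABI, but compilers already do the narrower version of this: on 64-bit targets, GCC and Clang expose an `__int128` type and synthesize 128-bit arithmetic from pairs of 64-bit instructions, so you only pay for the wider operands where you actually use them. A small sketch:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint64_t a = 0xffffffffffffffffull;              /* 2^64 - 1 */
    unsigned __int128 p = (unsigned __int128)a * a;  /* full 128-bit product */

    /* printf has no conversion for __int128, so print two 64-bit halves.
     * Expected: high = fffffffffffffffe, low = 0000000000000001. */
    printf("high = %016llx\n", (unsigned long long)(p >> 64));
    printf("low  = %016llx\n", (unsigned long long)p);
    return 0;
}
```

On x86-64 that product compiles to a single widening 64x64-to-128 multiply; additions and subtractions take two instructions via the carry flag.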


[–] littul_kitton ago 

I suspect, but cannot verify, that the move from 32-bit to 64-bit did slow individual operations down, but the pipelines were deepened for other reasons and code turned out to actually run in fewer cycles. (At least compared with the first generation of 32-bit processors. Multiply used to be slow enough that you would replace it with bit shifts by hand when possible, and division was scary slow.)
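As a concrete example of that old trick, a multiply by a constant was rewritten as shifts and adds, which is also what compilers do automatically today under the name strength reduction. A sketch:

```c
#include <stdio.h>
#include <stdint.h>

/* Multiply by 10 without a multiply instruction: x*10 = x*8 + x*2. */
static uint32_t times10(uint32_t x) {
    return (x << 3) + (x << 1);
}

int main(void) {
    for (uint32_t x = 0; x < 5; x++)
        printf("%u * 10 = %u\n", x, times10(x));
    return 0;
}
```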