[–] littul_kitton 1 point 0 points (+1|-1) ago 

No. Only a few people need more than 64 bits, but many of those people need more than 128. Going to 128 bits would help very few people.

There is also the question of whether it would even help. The wider an operand becomes, the more time it takes to do the computation, especially for division. So you either have to slow down the clock rate or add a few stages to the pipeline. At that point the speed, power consumption, and complexity are not much of an improvement over just using multiple 64-bit instructions.
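
Rough sketch of what that "multiple 64-bit instructions" approach looks like in C. The names here (u128, add_u128) are made up purely for illustration; real bignum code does the same thing with longer carry chains.

```c
#include <stdint.h>
#include <stdio.h>

/* A 128-bit value held as two 64-bit halves, added with a manual carry --
 * i.e. several 64-bit instructions instead of one wide operation. */
typedef struct { uint64_t lo, hi; } u128;

static u128 add_u128(u128 a, u128 b) {
    u128 r;
    r.lo = a.lo + b.lo;
    r.hi = a.hi + b.hi + (r.lo < a.lo);   /* carry out of the low half */
    return r;
}

int main(void) {
    u128 a = { UINT64_MAX, 0 };           /* 2^64 - 1 */
    u128 b = { 1, 0 };                    /* adding 1 must carry into hi */
    u128 c = add_u128(a, b);
    printf("hi=%llu lo=%llu\n", (unsigned long long)c.hi, (unsigned long long)c.lo);
    return 0;
}
```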

So I predict that we have maxed out at 64/80-bit operands. But SIMD vector unit width seems to be increasing steadily. Current vector units top out at 512 bits, which packs in 8 64-bit operands. To work with wider data, you will use sequential 64-bit operations to handle the full precision, and run several of those in parallel in the 512-bit vector unit.
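
For the vector-unit part, something like this AVX-512 sketch (needs an AVX-512F capable CPU, -mavx512f, and both input arrays holding 8 elements). It is just an illustration of eight 64-bit lanes going through one 512-bit operation, not tuned code.

```c
#include <stdint.h>
#include <immintrin.h>   /* AVX-512 intrinsics */

/* One instruction adds eight independent 64-bit operands at once. */
void add8_u64(const uint64_t *a, const uint64_t *b, uint64_t *out) {
    __m512i va = _mm512_loadu_si512(a);
    __m512i vb = _mm512_loadu_si512(b);
    _mm512_storeu_si512(out, _mm512_add_epi64(va, vb));
}
```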

[–] CanIHazPhD [S] 0 points 1 point (+1|-0) ago  (edited ago)

> The wider an operand becomes, the more time it takes to do the computation

I was wondering whether this would lead to a loss of performance. I don't think the switch from 32 to 64 bits led to a significant performance loss, but I'm not so sure about that.

The part about it not helping many people is quite true, though.

[–] svipbo ago 

This is why the x32 ABI (64-bit registers, but 32-bit pointers to keep memory and cache footprint down) was created for Linux. https://en.wikipedia.org/wiki/X32_ABI
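
If you want to see it for yourself, here is a tiny check (build with "gcc -mx32" on a toolchain and kernel that ship x32 support; not all distributions do):

```c
#include <stdio.h>

/* Under the x32 ABI this prints 4 for the pointer size, even though the
 * code runs with 64-bit registers; under plain x86-64 it prints 8. */
int main(void) {
    printf("sizeof(void *) = %zu, sizeof(long) = %zu\n",
           sizeof(void *), sizeof(long));
    return 0;
}
```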

[–] littul_kitton ago 

I suspect, but cannot verify, that the move from 32 to 64 bits did slow things down, but the pipeline was deepened for other reasons and it turned out to actually run in fewer cycles. (At least compared with the first generation of 32-bit processors. Multiply used to be slow enough that you would manually bit-shift when possible, and division was scary slow.)
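
The bit-shift trick, for anyone who never had to do it (hypothetical helper; modern compilers do this strength reduction for you):

```c
#include <stdint.h>

/* Constant multiply rewritten as shifts and adds: 10*x == 8*x + 2*x. */
static inline uint32_t times10(uint32_t x) {
    return (x << 3) + (x << 1);
}
```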