Default integer size 32 or 64 bits?
Posted: 2021-03-22, 9:58:14
Some ForwardCom instructions are available in a short form using format template C. Template C has one register field, 16 bits of immediate data, and no operand size field. This fits an instruction like, for example:

Code:
int r1 += 1000

I am in doubt whether the integer size should be 32 bits or 64 bits for instruction formats that have no operand size field. The current version uses a 64-bit integer size in this case, based on the logic that the large integer version will work in most cases where a smaller integer size is specified. However, this places a serious burden on the compiler or the assembly programmer, who must decide whether it is safe to use a larger integer size than specified in the original code. The code above will not work with a 64-bit integer size if the programmer intended to get an unsigned result modulo 2^32 in a 64-bit register.
Format C is particularly useful for combined ALU/branch instructions, for example in a loop like this:

Code:
for (int i=0; i<100; i++) {...}
This loop can be implemented very efficiently with an increment-and-compare instruction that adds 1 to a register and jumps back if the value is below the limit. This fits format C with the loop counter in the register field, an 8-bit constant limit, and an 8-bit address offset for jumping back.
This will work perfectly well, regardless of the integer size, in almost all cases. But what if the programmer has made a branch inside the loop that sets the loop counter to -1 in order to restart the loop in certain cases? This will not work if the instruction that sets the loop counter to -1 uses a 32-bit operand size (unused bits are zero) while the increment-and-compare instruction uses 64 bits.
The best modern compilers can do amazing things in terms of optimization, but is it realistic to require that the compiler can decide whether it is safe to replace a 32-bit instruction with a 64-bit instruction? Or is it better to set the default integer size in format C to 32 bits because this is the most common integer size?
There are obvious cases where it is safe to use 64 bits, for example when setting a register to a small positive value. The assembler actually does this optimization automatically. And there are other cases where it is obviously not safe to use a different integer size, for example in branch instructions that check for overflow. And then there are the difficult cases where it requires a lot of logic in the compiler to decide whether it can use a different operand size.
There is actually a third possibility: we could make the rule that all integer instructions with an operand size less than 64 bits must sign-extend the result to 64 bits. This would increase the number of cases where we can use a larger integer size than specified, but there would still be contrived cases that are difficult to decide. Another disadvantage of sign-extending everything is that it increases power consumption, because the otherwise unused upper bits will be toggling.
What is your opinion? Should we use 32 bits or 64 bits in short-form instructions that have no operand size field? 64 bits will increase the number of cases where we can use short form instructions, but at the cost of considerable complexity in the compiler. 32 bits will result in slightly larger code because we need instructions of double size in certain cases with 64-bit operands. (Double-size instructions can execute at the same throughput as single-size instructions, but they take up more space in the code cache).