This arose from a question earlier today on the subject of bignum libraries and gcc-specific hacks to the C language. Specifically, these two declarations were used:
typedef unsigned int dword_t __attribute__((mode(DI)));

on 32-bit systems, and

typedef unsigned int dword_t __attribute__((mode(TI)));

on 64-bit systems.
I assume that, given this is an extension to the C language, there exists no way to achieve whatever it achieves in the current (C99) standard. So my questions are simple: is that assumption correct? And what do these declarations do to the underlying memory? I think the result is that I have 2*sizeof(uint32_t) for a dword on 32-bit systems and 2*sizeof(uint64_t) on 64-bit systems — am I correct?
dword = word << 1 safely and easily; I'd rather not replace that with a function etc. if I can help it. – Sadaluk Dec 30 '10 at 0:31

__int128, I believe: gcc.gnu.org/onlinedocs/gcc/_005f_005fint128.html – Matthew Iselin Dec 30 '10 at 0:34

__int128_t and __uint128_t (at least on 64-bit platforms, not sure about 32-bit targets) – Stephen Canon Dec 30 '10 at 14:19