Should I use uint32_t?
The stdint.h header is the standard place to get these types. I also encountered the same problem on Mac OS X. That is fine if the header is available, but what if you are using an older compiler? Add fallback definitions like the following in a base header.
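The original snippet did not survive; here is a minimal sketch of the kind of fallback meant, assuming a target where int is 32 bits and short is 16 (HAVE_STDINT_H is a hypothetical feature-test macro, not something defined automatically):

```c
/* Sketch of a fallback for compilers without <stdint.h>.
   HAVE_STDINT_H is a hypothetical feature-test macro; the
   underlying types are assumptions that must be checked
   against the actual widths on your compiler and target. */
#ifdef HAVE_STDINT_H
#include <stdint.h>
#else
typedef signed char    int8_t;
typedef unsigned char  uint8_t;
typedef short          int16_t;
typedef unsigned short uint16_t;
typedef int            int32_t;
typedef unsigned int   uint32_t;
#endif
```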
When should one use the datatypes from stdint.h? Is it right to always use them as a convention? And what was the purpose of designing nonspecific-size types like int and short in the first place?
Things are leaning that way. The fixed-width types are a relatively recent addition to C. Original C had char, short, int, and long, and that was progressive: it tried, without being too specific, to accommodate the various integer sizes available across a wide variety of processors and environments. As C is 40-ish years old, it speaks to the success of that strategy. Much C code has been written with, and successfully copes with, that soft specification of integer sizes.
New languages tend to require very specific fixed-width integer types and 2's complement. As they succeed, that Darwinian pressure will push on C. My crystal ball says we will see a slow migration toward increasing use of fixed-width types in C.
It was a good first step to accommodate the wide variety of integer widths (8, 9, 12, 18, 36, etc.) in use at the time. So much coding today uses power-of-2 sizes with 2's complement that one may not realize how many other arrangements existed beforehand.
See this answer also. I find the fixed-width types useful when I have to implement a protocol and use them inside a structure, which can be a message that needs to be sent out or a holder of certain information.
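A sketch of what that looks like; the field names are invented for illustration:

```c
#include <stdint.h>

/* Hypothetical wire-format header: every field has a fixed,
   platform-independent width, so the struct itself documents
   the protocol. Byte order and struct padding still need
   explicit handling in production code. */
typedef struct {
    uint8_t  version;      /* protocol version                  */
    uint8_t  msg_type;     /* message discriminator             */
    uint16_t payload_len;  /* number of payload bytes to follow */
    uint32_t sequence;     /* message sequence number           */
} MessageHeader;
```

With char/short/int the in-memory layout would silently change across platforms; here it cannot.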
If you need fixed-bit-depth integers in code that does not talk to OpenGL, should you use <cstdint>?
GLvoid is no different from void; GLvoid will always be defined as void. GLuint makes sense because you communicate that value to the GL: you either put it into a GL function or it is returned from a GL function.
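A sketch of that split (glGenBuffers is a standard GL call; header and loader details are assumptions that vary by platform):

```c
#include <stdint.h>
#include <GL/gl.h>  /* header/loader details vary by platform */

void upload_example(void) {
    /* GLuint for values that cross the GL boundary ... */
    GLuint buffer;
    glGenBuffers(1, &buffer);

    /* ... <stdint.h> types for everything that stays internal. */
    uint32_t internal_id = 42;
    (void)internal_id;
}
```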
You don't have to update code. The one exception is when dealing with quantities that actually depend on the bit width of the CPU, such as array sizes.
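That exception is what size_t is for; a small sketch of the division of labor:

```c
#include <stddef.h>
#include <stdint.h>

/* size_t scales with the platform's address width, so it is the
   natural type for sizes and indexes; the fixed-width type pins
   down the element format regardless of CPU. */
uint32_t sum(const uint32_t *values, size_t count) {
    uint32_t total = 0;
    for (size_t i = 0; i < count; i++)
        total += values[i];
    return total;
}
```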
Quite the contrary: int will sink you, because int is almost always 32 bits even on 64-bit systems. I had my first nasty production bug years back when I assumed an Integer was 32-bit in VBScript. After all, in C you can just declare something "unsigned" and it's the width of int.
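You can see this directly; the exact output depends on the ABI, but on mainstream 64-bit desktops it is as the comment describes:

```c
#include <stdio.h>

int main(void) {
    /* On the common 64-bit ABIs (LP64 on Linux/macOS, LLP64 on
       Windows) this prints 4 for int: the pointer grew to 8
       bytes, but int stayed at 32 bits. */
    printf("int=%zu long=%zu void*=%zu\n",
           sizeof(int), sizeof(long), sizeof(void *));
    return 0;
}
```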
However, I think this is a problem. The expected value ranges of your variables don't change just because your memory bus got wider: maybe a process can use more than 4 GB of memory now, but it's a mistake to plan for single array indexes being more than 32-bit.
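In other words, the range of the data, not the machine, picks the type; two illustrative declarations:

```c
#include <stdint.h>

/* These bounds are fixed by the problem domain and do not grow
   when the hardware does. */
uint8_t  alpha_channel = 255;  /* pixel channel: 0..255 by definition */
uint16_t tcp_port      = 443;  /* TCP port: 0..65535 by definition    */
```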
If you do try to be more flexible, I'm sure this would introduce more bugs than the forward compatibility it would add, especially if int is smaller than it was on the platform you tested on. That's why languages like Swift, Java, and C# provide integer types with guaranteed fixed widths on every platform. What if someone made a typo and put the wrong type in the cast? How do you tell what's right? What if you change the type of the lvalue or the casted value? Now you have to think about each related cast you added. What's the alternative?
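A minimal sketch of the kind of typo being described: the wrong cast compiles without a diagnostic and silently loses data.

```c
#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void) {
    uint64_t file_size = 5000000000ULL;   /* ~5 GB: needs 33 bits */

    /* A mistyped cast compiles cleanly and silently truncates:
       5000000000 mod 2^32 == 705032704. */
    uint32_t truncated = (uint32_t)file_size;

    printf("%" PRIu32 "\n", truncated);   /* 705032704, not 5000000000 */
    return 0;
}
```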
Well, the compiler should just know what you mean… int is not cross-platform and forward-compatible. It's implementation-defined, so it's up to the compiler. Practically speaking, every modern compiler defines int as 4 bytes, and can be expected never to change that because of the vast swaths of code out there written with the assumption that an int is 4 bytes.
So it's not forward-compatible. And while on most platforms you can expect the compiler to have picked 4 bytes, it's certainly possible for compilers to pick other sizes for int (I would assume compilers for embedded architectures might do that), which means it's not cross-platform either.
How is 'int' cross-platform and forward-compatible? The size of int is implementation-dependent, but its minimal range isn't; I believe that's what GP is referring to. This last one eliminates maybe half the possible execution paths the compiler can see, and loop-structure optimizations practically don't work without it.
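The minimal-range guarantee can be stated in code; a sketch using C11's static_assert:

```c
#include <limits.h>
#include <assert.h>  /* static_assert macro, C11 and later */

/* The standard fixes minimum ranges, not sizes: int must cover at
   least -32767..32767 and long at least -2147483647..2147483647.
   These asserts hold on every conforming implementation, whatever
   the actual byte widths turn out to be. */
static_assert(INT_MAX  >= 32767,       "guaranteed minimum for int");
static_assert(LONG_MAX >= 2147483647L, "guaranteed minimum for long");
```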