Unable to set integer size in Unity tests

I can’t get the integer width options to “take” when I do native unit tests.

I set,

build_flags =
-D UNITY_INT_WIDTH=16

in the platformio.ini file, as directed in the tutorials. If I dig down into unity.h and included files therein, it appears that the defines are properly set (the various #ifdef’s are enabled/disabled, as expected).

But when I run tests, the sizes are incorrect. For example, if I have a function,

int ReturnANegativeNumber(void)
{
   return (int)0x80 << 8;
}

and then ASSERT the result, it will fail, saying, “Expected -32,768; was 32,768”.

It appears to be creating a 32-bit int (0x00 00 80 00), which is a positive number, instead of 16-bit (0x80 00), which is negative.

To confirm, if I shift left by 16 instead, the function returns 8388608 (0x00800000).

You can find a minimalist example here: GitHub - gcl8a/Integer-Test · GitHub

Am I missing something “obvious”? Is this working as expected?

The unit tests are compiled for your own computer with gcc; setting that macro has no effect on the size of a native int at all. It does affect how Unity defines its fixed-width integer types like UNITY_INT16 etc., which you don’t use in your code.

Are you trying to make an x86/x86_64 gcc compile code with int = 16 bits, to match an Arduino AVR? GCC can’t reconfigure that on the fly; the width of int is intrinsic to the compiler and target. That’s why, when writing code for an embedded target (e.g. an AVR) and testing it on a desktop, it’s best to use fixed-width integer types throughout if the exact same values are expected.
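To make the width difference concrete, here is a minimal standalone sketch (not using Unity) you can compile on a desktop. It assumes a typical x86/x86_64 gcc where int is 32 bits; the conversion of 0x8000 to int16_t is implementation-defined in C, but on gcc it wraps to -32768 as shown.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* On a 32-bit int, the shifted value fits and stays positive. */
    int as_int = (int)(0x80 << 8);            /* 0x00008000 = +32768 */

    /* Truncating to 16 bits makes the sign bit take effect. */
    int16_t as_int16 = (int16_t)(0x80 << 8);  /* 0x8000 = -32768 on gcc */

    printf("int:     %d\n", as_int);
    printf("int16_t: %d\n", (int)as_int16);
    return 0;
}

No amount of UNITY_INT_WIDTH configuration changes what as_int holds here; only the declared type of the variable does.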

So, something like

UNITY_INT16 ReturnANegativeInt(void)
{
    return (UNITY_INT16)(0x80 << 8);
}

or equally

#include <stdint.h>
int16_t ReturnANegativeInt(void)
{
    return (int16_t)(0x80 << 8);
}

will pass on both desktop and AVR. Or at least place the parentheses correctly so that the cast applies to the whole result:

int ReturnANegativeInt(void)
{
    return (int)(0x80 << 8);
}

This will make the unit test pass, but it fundamentally does not change the size of int. The test is merely saved by TEST_ASSERT_EQUAL_INT16 casting both values to INT16; the int return value will still hold +32768 as a 32-bit value, no matter what UNITY_INT_WIDTH is set to.
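To illustrate that last point, here is a desktop-only sketch (plain assert, no Unity) that mimics what the INT16 assertion effectively does: the raw int still holds +32768, and equality only holds once both sides are truncated to 16 bits. This is an illustration of the effective behavior, not Unity’s actual implementation.

#include <stdint.h>
#include <assert.h>

int ReturnANegativeInt(void)
{
    return (int)(0x80 << 8);   /* +32768 in a 32-bit int */
}

int main(void)
{
    int raw = ReturnANegativeInt();

    /* As a plain int, the value is still positive. */
    assert(raw == 32768);

    /* After truncation to 16 bits (what an INT16 comparison
       effectively does), the expected -32768 matches. */
    assert((int16_t)raw == (int16_t)-32768);
    return 0;
}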

Ah. That’s a fundamental misunderstanding on my part.

However, while I can use UNITY_INT16 in Unity, I can’t use it with the Arduino libraries unless I add more defines.

Ultimately, I was trying to create a test for an AVR device, but one that didn’t have to be done on the device. I see where to go, now: either make it truly hardware independent, or use defines so that the same code can run in unity or on an AVR.

#include <stdint.h>
int16_t ReturnANegativeInt(void)
{
    return (int16_t)(0x80 << 8);
}

Ultimately, everything is solved by including stdint.h and using explicitly sized types everywhere.

Thanks.