Nucleo-F446RE optimization issue

Hi @ all …

I have an issue with my Nucleo-F446RE and I can’t find the problem in my code … nevertheless, it does not work when built with the default PlatformIO optimization level, -Os.

Let me show you my code:


typedef struct
{
    uint8_t  Data[TTY_RX_BUF_SIZE]; /* buffer member implied by the Fifo call below; size macro assumed */
    uint32_t NumLost;
} ttyRxData_t;

ttyRxData_t ttyRxData;

Fifo *pRxFifo;

void bspTTYInit(uint32_t baud)
{
    LL_USART_InitTypeDef init;

    init.BaudRate            = baud;
    init.DataWidth           = LL_USART_DATAWIDTH_8B;
    init.HardwareFlowControl = LL_USART_HWCONTROL_NONE;
    init.StopBits            = LL_USART_STOPBITS_1;
    init.TransferDirection   = LL_USART_DIRECTION_TX_RX;
    LL_USART_Init(TTY_USARTx, &init);

#if BSP_TTY_RX_IRQ == BSP_ENABLED
    pRxFifo = new Fifo(ttyRxData.Data, sizeof(ttyRxData.Data));
    ttyRxData.NumLost = 0;
#endif /* BSP_TTY_RX_IRQ == BSP_ENABLED */

    printf("USART done!!! %s\n", __TIME__);
}


The code snippet above is from my own BSP, which I have used so far on a Nucleo-F103 and now want to port to the F446RE. If I disable the interrupt support, the USART works fine; that was the first test I made. But when I enable the interrupt, several strange things happen …

  1. The baud rate is messed up and I can’t read anything on the terminal.
  2. I get USART interrupts that should not occur, as they are disabled.

What I have figured out so far …

  • When I try to debug the code it works perfectly fine under all conditions.
  • When I remove the line ttyRxData.NumLost = 0; it works fine!!
  • When I keep the line but disable optimization entirely by adding build_unflags = -Os to my configuration, it also works fine.

The last finding is pretty interesting. I know that errors triggered by aggressive optimization are usually an indicator of somewhat dirty code, but I don’t see anything suspicious in mine. Independent of my coding skills, I want to get this fixed while keeping optimization turned on. How can I achieve that?

Last but not least, here is my platformio.ini (excluding the build_unflags modification) to show you my setup:

board = nucleo_f446re
platform = ststm32
framework = stm32cube
build_flags =
; build_unflags = -Os

Plus interrupts: have you made sure to follow best practices and mark all global objects that you access from ISRs as volatile? E.g. the ttyRxData_t ttyRxData object and the FIFO? Otherwise compiler optimization might optimize away critical checks …

When you hit the debug button in VSCode, it internally recompiles with the debug settings, which use -Og. Thus certain bugs become irreproducible.

Yes … that’s the FIFO’s job, and as I mentioned in my initial post, the same code has worked well on all my projects and tests on F103RB targets, so I’m convinced that this is not the issue. But maybe the issue is the optimization of the access to NumLost in the init function, so I made it volatile. This had no impact on my observations. Besides that, I only enable the RXNE interrupt, and my issues are not related to receiving data or to accessing the variables used in that scope. With -Os I have observed that …

  1. The baud rate is set incorrectly when setting NumLost to zero, as if a wild pointer were overwriting the BRR register.
  2. The USART ISR gets called, but the RXNE interrupt (which should be the only enabled one) is not set. I figured out that the TC interrupt is the root cause of the observed interrupt: its flag is set in the status register and its enable bit is set in CR1. But it should not be set in CR1, as the default after a reset is zero and I never set it … see my code.

I have found the following post which reports something similar in the scope of the ADC: ST Forum - ADC init bug with optimization >= O1 (STM32L4)

After reading that I made the following test:

  1. Debug the code, and note the BRR and CR1 values that are known to work well while debugging and are set by LL_USART_Init().
  2. Hardcode those values and remove the LL_USART_Init() call:

USART2->BRR = 0x1b2;
USART2->CR1 = 0xC;

  3. Restore the default optimization.

Result: I was not able to reproduce any issues … everything works as it should including data reception via ISR.

Then I repeated the test using something more user-friendly than raw register values:

LL_USART_SetBaudRate(TTY_USARTx, rcc_clocks.PCLK1_Frequency, 0, 115200);

Result: Same as before, no issues at all.

Conclusion: I assume that something odd is going on in LL_USART_Init() but so far I have not figured out what this is.

That’s why I tested the impact of changing the optimization settings.

I have performed tests inside the LL_USART_Init() function and figured out that the following MODIFY_REG() call on CR1 is causing the issues:

MODIFY_REG(USARTx->CR1,
           (USART_CR1_M | USART_CR1_PCE | USART_CR1_PS |
            USART_CR1_TE | USART_CR1_RE | USART_CR1_OVER8),
           (USART_InitStruct->DataWidth | USART_InitStruct->Parity |
            USART_InitStruct->TransferDirection | USART_InitStruct->OverSampling));

Replacing that code with

USART2->CR1 = 0xC;

seems to work well.

Anyway … I’m not happy at all, as I still don’t understand the root cause. :frowning:

OMG … I found the issue!!

For F4 targets there is an additional member, OverSampling, in LL_USART_InitTypeDef. As my code was previously used on an F1 target, this member was never set, because it does not exist there. So it is now uninitialized, since a memset() or LL_USART_StructInit() call was missing.

This results in writing garbage to the CR1 register in LL_USART_Init(), since the value is used there without further checks. Through this register you can enable interrupts AND the entire USART, and once the USART is enabled you can no longer set the baud rate. So I assume all of this happens because OverSampling is not initialized. The result matches my observations 100%: unexpected interrupts and a wrong baud rate!

Lessons learned: ALWAYS USE A GENERIC INIT FUNCTION FOR STRUCTURES, AS THEY MIGHT CHANGE ON ANOTHER MCU!! But honestly, this is nothing new … is it? :innocent:
