While trying to find the root cause of my non-working USART DMA setup, I noticed that the STM32Cube F1 version provided with framework-stm32cube 1.1.0 is quite old (actually V1.4.0 from 29-April-2016). The current version is V1.6.0.
The problem I have is that ST does not provide older STM32Cube versions, so I cannot get the documentation and example code for V1.4.0 (shame on you, ST, btw … that's so annoying I can't find words for it without being rude!).
Besides that, the LL driver support has been massively extended in V1.6.0 … or maybe not all the LL drivers are provided in pio; I can't check this, as I can't find the original V1.4.0 package from ST.
Is there a good reason to stay with V1.4.0 in pio?
If not … is it possible to estimate when an update will be provided within pio?
Last but not least … I think the root cause of my DMA problem is somehow related to the Cube version, as the V1.6.0 examples show the same symptom as my own code: no TX output.
In the meantime I found the root cause of my DMA problems; it was not related to the STM32Cube version after all. So far so good.
I have also tried patching framework-stm32cube for F1 chips manually by replacing the files installed at /home/julian/.platformio/packages/framework-stm32cube/f1 with those from the STM32Cube_FW_F1_V1.6.0 package, and observed no problems compiling my code. All the LL drivers get compiled as well. My next step will be to use the LL drivers by trying some of the existing examples.
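For anyone who wants to try the same workaround, the manual patch can be sketched as a small shell helper. The paths and the `Drivers/` subfolder layout are assumptions from my setup; verify that both the installed pio package and the unpacked ST package actually use that layout before overwriting anything:

```shell
# Sketch of the manual patch: mirror the Drivers/ tree from an unpacked
# ST Cube package over the installed pio framework package, keeping a
# backup of the original for easy rollback. Layout is an assumption;
# check both directories first.
patch_f1() {
  pio_f1="$1"   # e.g. "$HOME/.platformio/packages/framework-stm32cube/f1"
  cube_f1="$2"  # e.g. the unpacked STM32Cube_FW_F1_V1.6.0 directory
  cp -r "$pio_f1" "${pio_f1}.bak"               # backup for rollback
  cp -r "$cube_f1/Drivers/." "$pio_f1/Drivers/" # overwrite with new drivers
}
```

Rolling back is then just restoring the `.bak` copy; note that any reinstall of the package by pio will of course wipe the patch again.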
In the meantime I have created projects for a nucleo_f103rb board based on the STM32Cube LL drivers, which are not included in pio so far.
So it seems pretty easy to update the Cube version by hand, but IMHO that's not satisfying as a long-term solution.
The open question remains: will there be an update of framework-stm32cube/f1 to the latest version, and when?
In the meantime there has been an update of ST STM32 and framework-stm32cube which brings in the latest F1 version … thanks for that!