I have an embedded project running on an ESP32 board.
Its primary purpose is to control two motors with built-in encoders; the encoders trigger interrupts that update the position. I have multiple tasks running with different priorities at different frequencies.
The main control task sets the speed and direction of the motors; it runs at 1000 Hz with the second-highest priority.
The motor watchdog task runs at 100 Hz with the highest priority. It periodically calculates the speed and stops a motor when it has slowed down so much that it is about to stall.
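For reference, the task setup looks roughly like this (simplified, with illustrative names; the actual control and watchdog logic is omitted):

```cpp
#include <stdint.h>
#include "freertos/FreeRTOS.h"
#include "freertos/task.h"
#include "esp_attr.h"

// Positions updated from the encoder ISRs (direction handling omitted).
static volatile int32_t g_position[2] = {0, 0};

// Encoder ISR, kept in IRAM; registration via gpio_isr_handler_add() not shown.
static void IRAM_ATTR encoderIsr(void* arg) {
    const int motor = (int)(intptr_t)arg;
    g_position[motor]++;
}

// 1000 Hz control task, second-highest priority.
static void controlTask(void*) {
    TickType_t lastWake = xTaskGetTickCount();
    for (;;) {
        // ... read positions, set motor speed and direction ...
        vTaskDelayUntil(&lastWake, pdMS_TO_TICKS(1));  // assumes a 1 kHz tick rate
    }
}

// 100 Hz watchdog task, highest priority.
static void watchdogTask(void*) {
    TickType_t lastWake = xTaskGetTickCount();
    for (;;) {
        // ... compute speed from position deltas, stop a motor if about to stall ...
        vTaskDelayUntil(&lastWake, pdMS_TO_TICKS(10));
    }
}

void startTasks() {
    xTaskCreate(controlTask, "control", 4096, nullptr, configMAX_PRIORITIES - 2, nullptr);
    xTaskCreate(watchdogTask, "watchdog", 4096, nullptr, configMAX_PRIORITIES - 1, nullptr);
}
```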
I have tested the classes in isolation in the native environment. Now I am running into trouble on the embedded platform: the speed is never calculated at all.
I could, of course, whip out the debugger and figure out what the issue is so I can fix it. But what happens if I or someone else later changes something and removes a task because it “didn’t” seem important, or because they had a Piña Colada or two?
I would like to be able to verify the correct setup and behavior of the code that will run on the embedded device. I am not quite sure how to integrate it into a testing framework, or even how to “inject” my mocked test objects into the production code.
The only thing I can think of is to have a setup method that takes all my interfaces as input. The production setup method would call this parameterized setup method with the production objects; during testing, I would call it with my mocked interfaces.
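Sketched in code, the idea would look something like this (the interface names, stall threshold, and watchdog logic are all illustrative, not my actual implementation):

```cpp
#include <cassert>
#include <cstdint>

// Illustrative hardware-abstraction interfaces.
struct IMotor {
    virtual void setSpeed(float duty) = 0;
    virtual void stop() = 0;
    virtual ~IMotor() = default;
};
struct IEncoder {
    virtual int32_t position() const = 0;
    virtual ~IEncoder() = default;
};

class MotorWatchdog {
public:
    // Parameterized setup: production passes the real drivers, tests pass mocks.
    void setup(IMotor& motor, IEncoder& encoder) {
        motor_ = &motor;
        encoder_ = &encoder;
        lastPosition_ = encoder_->position();
    }
    // One 100 Hz step: estimate speed from the position delta, stop on near-stall.
    // (Simplified: a real watchdog would only trip while the motor is commanded to run.)
    void tick(float dtSeconds) {
        const int32_t pos = encoder_->position();
        const float speed = (pos - lastPosition_) / dtSeconds;  // counts/s
        lastPosition_ = pos;
        if (speed > -stallThreshold_ && speed < stallThreshold_) motor_->stop();
    }
private:
    IMotor* motor_ = nullptr;
    IEncoder* encoder_ = nullptr;
    int32_t lastPosition_ = 0;
    float stallThreshold_ = 50.0f;  // arbitrary illustrative value, counts/s
};

// Hand-rolled mocks for the native test build.
struct MockMotor : IMotor {
    bool stopped = false;
    void setSpeed(float) override {}
    void stop() override { stopped = true; }
};
struct MockEncoder : IEncoder {
    int32_t pos = 0;
    int32_t position() const override { return pos; }
};

int main() {
    MockMotor motor;
    MockEncoder encoder;
    MotorWatchdog wd;
    wd.setup(motor, encoder);
    // No encoder movement since setup -> speed is below the threshold.
    wd.tick(0.01f);
    assert(motor.stopped);  // the watchdog should have stopped the stalled motor
    return 0;
}
```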
Maybe I could also mock out all the FreeRTOS functions and run a dead-simple scheduler on native. However, I would really like to be able to verify that the actual hardware works as well.
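The “dead-simple scheduler” I have in mind might look roughly like this, assuming the task bodies are factored into single-iteration step functions that can be called without the surrounding `for (;;)` / `vTaskDelayUntil` loop:

```cpp
#include <functional>
#include <vector>

// A deterministic native stand-in for the FreeRTOS scheduler: each task
// contributes one step() callback (one loop iteration of its task body)
// plus a period expressed in scheduler ticks.
struct FakeTask {
    std::function<void()> step;
    int periodTicks;  // e.g. 1 for the 1000 Hz task, 10 for the 100 Hz task at a 1 kHz tick
};

// Run all tasks for a fixed number of ticks; ordering the vector by priority
// crudely approximates which task would run first in a given tick.
void runFakeScheduler(std::vector<FakeTask>& tasks, int totalTicks) {
    for (int tick = 0; tick < totalTicks; ++tick) {
        for (auto& t : tasks) {
            if (tick % t.periodTicks == 0) t.step();
        }
    }
}
```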