Tokenization of long lines

I include a file of WAV sound data that has very long lines, like this one:

const unsigned char wavData[5617] = { 0x52, 0x49, 0x46, 0x46, 0xE9, 0x15, ...

Unfortunately, the PlatformIO editor (framework = arduino) wants to re-check this file whenever I change “wavData” to something else in main.cpp, and each change takes about 30 seconds. Sometimes PlatformIO also complains about “Tokenization of long lines”.

Is there a directive I can put in platformio.ini to exclude the included file from checking and thus speed up editing?

How is the include file generated? Isn’t it possible to break the long lines into several lines of reasonable length? That would also help other tools down the line (git, etc.).
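As a sketch of what “reasonable length” could look like: the helper below (the name bytes_to_c_array is made up for this example) emits the same C array with a fixed number of bytes per source line, so no line ever exceeds the editor’s tokenization limit.

```python
def bytes_to_c_array(data: bytes, name: str = "wavData", per_line: int = 12) -> str:
    """Emit a C array definition with at most `per_line` bytes per line."""
    lines = []
    for i in range(0, len(data), per_line):
        chunk = data[i:i + per_line]
        # One short line per chunk; the trailing comma is legal C.
        lines.append("  " + ", ".join(f"0x{b:02X}" for b in chunk) + ",")
    body = "\n".join(lines)
    return f"const unsigned char {name}[{len(data)}] = {{\n{body}\n}};\n"

# Demo on a small buffer instead of a real .wav file:
header = bytes_to_c_array(bytes(range(30)), name="demo", per_line=8)
print(header)
```

Alternatively, `xxd -i sound.wav > wavData.h` (the xxd tool shipped with Vim) produces a similarly wrapped array, though you may want to rename the generated identifier.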

PlatformIO doesn’t provide an editor of its own. Instead, it integrates with IDEs such as Visual Studio Code, Atom, CLion, etc. If you really need the editor behavior improved, you should contact the people responsible for your IDE.


You are right, it is an editor function. I put
"editor.maxTokenizationLineLength": 100
in my workspace settings file (or search for “Max Token” in your Text Editor settings).
This reduces the wait time to about 12 seconds. Looking at the Windows Task Manager, the tokenization check causes a high burst in CPU usage.
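For reference, the setting can go into the workspace’s .vscode/settings.json (the limit of 100 here just mirrors the value above; any value shorter than the data lines works):

```json
{
  // Lines longer than this many characters are not tokenized (syntax-colored).
  "editor.maxTokenizationLineLength": 100
}
```

Note that VS Code reads settings.json as JSONC, so the comment is allowed there.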
The included file contains audio .wav data pasted in from a hex editor (HxD) in C format. There are CRLF characters at the end of each line; the statements themselves are just very long.