We use build_cache_dir.
du -hs .pio/build_cache
32G .pio/build_cache
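For context, the setting itself is one line in platformio.ini (the path here is illustrative, not necessarily our exact layout):

[platformio]
; Shared build cache across all environments; path is illustrative.
build_cache_dir = .pio/build_cache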
But since the cache is managed by Python, it’s about as slow as just recompiling many of our smaller sources. Of course, since each of our 39 copies of, say, AsyncTCP lives at a different pathname, each copy of the sources preprocesses down to something different because __FILE__ expands to a different path, so builds 2-39 never get a cache hit against the first.
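You can reproduce that miss with nothing but gcc. This is a hypothetical standalone demo, not our actual tree, but it’s the same mechanism:

# Identical source at two pathnames compiles to two different objects.
mkdir -p a/AsyncTCP b/AsyncTCP
printf 'const char *where(void) { return __FILE__; }\n' > a/AsyncTCP/t.c
cp a/AsyncTCP/t.c b/AsyncTCP/t.c
gcc -c a/AsyncTCP/t.c -o a.o
gcc -c b/AsyncTCP/t.c -o b.o
cmp -s a.o b.o || echo "objects differ, so the cache can never hit"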
Even when compiling absolutely no objects, PlatformIO is slow by default, as demonstrated by our 20+ second “do nothing” builds.
$ time pio run -e mesmerizer && time pio run -e mesmerizer
It spends a good 5-7 seconds “retrieving from cache”. Why does a bunch of stat(2) calls take that long?
[ … ]
yulc-demo SUCCESS 00:00:18.757
So with ZERO compilation and everything cached, it still takes 18 seconds. Not fast.
build_cache_dir seems to just let you choose WHERE it downloads 39 copies of the same dozen libraries. It still downloads 39 copies.
I know there’s a mode that makes it give up on dependency checking entirely and claims to be faster, but faster incorrect builds isn’t a tradeoff I’m anxious to make. And watching the LDF iterate over my UNCHANGED tree for several seconds on every single build doesn’t give me the impression that anyone who cares about build times is using this.
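Assuming the mode in question is the LDF switch (and that’s an assumption on my part), it’s a one-liner per environment:

[env:mesmerizer]
; Presumably the "give up on dependency checking" mode: turn the
; Library Dependency Finder off entirely. Every dependency then has
; to be declared by hand, and a missed one is a broken build.
lib_ldf_mode = off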
So, yes, I’ve already experimented with both of those and didn’t exactly find joy in either one.
If git submodules weren’t such a terrible user experience for first-time builders, I’d probably just pull those dozen packages into a third_party directory, version them ourselves, and then have “only” 39 slow builds instead of 39 slow builds AND 39 × a dozen fetches and installs (sketched below). It’s not the time spent in g++ and ld that’s killing us. It’s PlatformIO taking a long time to decide what to build and a long time to retrieve from cache, but most importantly it’s those fetches that put the bus in park on a fresh work tree.
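The submodule-free version of that plan would look roughly like this; third_party/ is a hypothetical directory we’d vendor and pin ourselves:

[env]
; Point every environment at one shared, vendored checkout instead of
; letting each of the 39 environments fetch its own private copy.
lib_extra_dirs = third_party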
If anyone has tips on avoiding 38 of those fetches - none of which seem to parallelize in any meaningful way - you could knock almost an hour off a cold build for us.
It’s “fast” in CI on GitHub only because Microsoft can afford to distribute the build across 39 different machines that run the environments fully in parallel - exactly what a normal developer on a single machine can’t do.