Shared Compilation Cache
Let's talk about shared compilation caches and how CI systems use them to make builds fast across different machines. It's a pretty simple idea.
When you compile a project, the compiler produces intermediate files such as *.o object files. For example, if you have foo.rs and bar.rs, compiling them produces foo.o and bar.o.
Now imagine building the same project on another machine. Normally that machine will repeat the exact same compilation steps and produce the same foo.o and bar.o.
Now let's do something interesting.
What if, instead of compiling everything again, we stored those compiled files somewhere shared, like S3, Redis, or another remote store?
So after the first machine compiles:
foo.rs -> foo.o
bar.rs -> bar.o
those compiled files get uploaded to the shared cache.
Now when another machine builds the same project, it checks the cache before compiling anything. If the exact compilation already exists there, it simply downloads foo.o and bar.o from the cache instead of compiling them again.
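The check-cache-then-compile flow can be sketched in a few lines of Python. This is only an illustration: the dict stands in for a remote store like S3 or Redis, and `compile_source` is a placeholder for a real compiler invocation.

```python
import hashlib

# In-memory dict standing in for remote storage such as S3 or Redis.
remote_cache: dict[str, bytes] = {}

def compile_source(source: bytes) -> bytes:
    # Placeholder for the real compiler invocation (e.g. rustc).
    return b"object code for: " + source

def build(source: bytes) -> bytes:
    """Return the compiled object, checking the shared cache first."""
    key = hashlib.sha256(source).hexdigest()
    cached = remote_cache.get(key)
    if cached is not None:
        return cached              # cache hit: download instead of compiling
    obj = compile_source(source)   # cache miss: compile locally...
    remote_cache[key] = obj        # ...and upload for other machines
    return obj
```

The first `build` call compiles and uploads; any later call with the same source, even from another machine sharing the store, just fetches the stored bytes.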
This works because compilation is (ideally) deterministic: if the source code, compiler version, and build flags are all identical, the compiled output will be identical too.
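That is why the cache key has to cover everything that can influence the output, not just the source file. A minimal sketch (the `cache_key` helper is hypothetical; real tools also hash things like included headers, environment variables, and the target platform):

```python
import hashlib

def cache_key(source: bytes, compiler_version: str, flags: list[str]) -> str:
    """Derive a cache key from the inputs that determine the compiled output."""
    h = hashlib.sha256()
    h.update(source)
    h.update(compiler_version.encode())
    for flag in flags:
        h.update(flag.encode())
    return h.hexdigest()

# Same inputs always produce the same key...
k1 = cache_key(b"fn main() {}", "rustc 1.75.0", ["-O"])
k2 = cache_key(b"fn main() {}", "rustc 1.75.0", ["-O"])
assert k1 == k2

# ...and changing any input (here, the compiler version) changes the key,
# so a stale object can never be served for the wrong compiler or flags.
k3 = cache_key(b"fn main() {}", "rustc 1.76.0", ["-O"])
assert k1 != k3
```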
This technique helps a lot when multiple machines build the same large project, because most files usually don’t change between builds. Instead of compiling thousands of files again, the build system can reuse the cached compiled objects and only compile the files that actually changed.
That’s the basic idea behind shared compilation caches used by many CI systems.