GCC 15, released in 2025 — five years after C++20 — does not enable modules with -std=c++20. You need a separate -fmodules flag. The compiler tells you so, politely: “C++20 module only available with -fmodules, which is not yet enabled with -std=c++20.” That was the moment I knew this investigation was going to go well.
I set up a Fedora 43 box with GCC 15.2.1 and Clang 21.1.8 and tried to do ordinary things: compile a module, import it from another file, use import std;, wire it through CMake. The kind of stuff you’d do on a Monday morning if someone said “let’s try modules.” Every step worked, eventually. None of them worked the way you’d expect.
Two compilers, two workflows, zero interop
The smallest useful module:
// math.cppm
export module math;

export int add(int a, int b) {
    return a + b;
}

// consumer-math.cpp
import math;

int main() {
    return add(2, 3) - 5;   // exit code 0 if the import worked
}
GCC, once you remember -fmodules, is two commands. It stashes a Binary Module Interface at gcm.cache/math.gcm and the consumer finds it automatically:
g++ -std=c++20 -fmodules -c math.cppm -o math.o
g++ -std=c++20 -fmodules consumer-math.cpp math.o -o consumer
Reasonable. Now Clang.
Clang 21 has no auto-discovery. You get a three-step workflow where every consumer must be told where every BMI lives:
# Step 1: precompile interface → .pcm
clang++ -std=c++20 --precompile math.cppm -o math.pcm
# Step 2: compile PCM → object file
clang++ -std=c++20 -c math.pcm -o math.o
# Step 3: compile consumer, explicitly naming the module file
clang++ -std=c++20 -fmodule-file=math=math.pcm consumer-math.cpp math.o -o consumer
That -fmodule-file=math=math.pcm flag? It goes on every translation unit that imports math. A project with thirty modules and two hundred consumers needs a lot of flags. And the BMI formats between GCC and Clang aren’t interchangeable — .gcm and .pcm are compiler-specific binary blobs. Pick your compiler and commit.
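One way to keep the flag explosion manageable — a sketch, not official tooling — is to lean on the fact that clang++ reads extra arguments from @-response files: generate one file listing every module's BMI and hand that single argument to every consumer. The module names below are hypothetical.

```shell
# Collect one -fmodule-file flag per module into a response file
# (hypothetical module names; assumes the .pcm files were built as above).
for m in math string io; do
  printf -- '-fmodule-file=%s=%s.pcm\n' "$m" "$m"
done > module_flags.rsp
cat module_flags.rsp
# Every consumer then compiles with one extra argument instead of thirty:
#   clang++ -std=c++20 @module_flags.rsp consumer-math.cpp ...
```

This doesn't remove the bookkeeping — something still has to regenerate the file when modules are added — it just centralizes it.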
Order matters now
Headers don’t care when you compile them. The preprocessor pastes text, the compiler chews on whatever it gets. Modules break that: the BMI has to exist before any consumer compiles.
Compile the consumer first on GCC:
math: error: failed to read compiled module: No such file or directory
math: note: compiled module file is 'gcm.cache/math.gcm'
math: note: imports must be built before being imported
math: fatal error: returning to the gate for a mechanical issue
Clang is terser:
fatal error: module 'math' not found
import math;
~~~~~~~^~~~
Both fail with exit code 1, and the error messages are clear enough about why. The implication is the important part: your build system must know the dependency graph before compilation starts. With headers, the compiler discovers dependencies during preprocessing. With modules, something has to scan every source file first, work out who provides what and who imports what, and feed that graph to the build system. That something is P1689R5 — a JSON format that lets compilers report module provides/requires without doing a full compile. It’s the plumbing that makes CMake’s module support possible.
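For a concrete sense of it, here is roughly what a P1689R5 dependency file for consumer-math.cpp looks like — hand-written here and trimmed, so treat the exact field set as illustrative; real compiler output carries more detail per rule:

```json
{
  "version": 1,
  "revision": 0,
  "rules": [
    {
      "primary-output": "consumer-math.o",
      "provides": [],
      "requires": [ { "logical-name": "math" } ]
    }
  ]
}
```

The interface unit math.cppm would get the mirror image: an entry under "provides" with "logical-name": "math" and an empty "requires".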
The one thing that works exactly right
Macro isolation. I keep coming back to this because it’s the cleanest win modules offer and it requires zero build-system cleverness.
// macro-mod.cppm
export module macrotest;

#define INTERNAL_MAGIC 42

export int get_value() {
    return INTERNAL_MAGIC;
}

// macro-consumer.cpp
import macrotest;
#include <cstdio>

#ifndef INTERNAL_MAGIC
#define INTERNAL_MAGIC 0
#endif

int main() {
    printf("get_value() = %d\n", get_value());
    printf("INTERNAL_MAGIC = %d\n", INTERNAL_MAGIC);
}
Output:
get_value() = 42
INTERNAL_MAGIC = 0
The function returns 42 — the value got baked in when the module compiled. But INTERNAL_MAGIC as a preprocessor symbol is invisible to the consumer; I confirmed with -E that it doesn’t expand there at all. The consumer’s own #define INTERNAL_MAGIC 0 takes effect because nothing crossed the module boundary. Declarations cross the boundary, preprocessor state does not.
If you’ve ever lost an afternoon to min/max leaking out of <windows.h>, or configuration macros from one header poisoning another — this is the fix. And you can adopt it incrementally: new code as modules, old code stays as headers. They coexist.
import std; needs bootstrapping
This is the feature I actually wanted to test. One import instead of twenty includes. C++23 says it should work.
On GCC 15, the source for the standard library module exists at /usr/include/c++/15/bits/std.cc — 98KB of module interface. But it’s not pre-compiled. A bare import std; fails with the same “imports must be built before being imported” error as everything else.
You pre-compile it yourself:
g++ -std=c++23 -fmodules -x c++ \
/usr/include/c++/15/bits/std.cc -c -o std.o
3.7 seconds on an i7-4790 — after that, import std; works.
Clang 21 with libc++ has the same problem in a different shape. Fedora 43 ships the source at /usr/share/libc++/v1/std.cppm but no pre-compiled PCM. You do it yourself:
clang++ -std=c++23 -stdlib=libc++ --precompile \
/usr/share/libc++/v1/std.cppm -o std.pcm
2.6 seconds, 33MB PCM file. Then you add -fmodule-file=std=std.pcm to every TU that uses import std;.
Both compilers ship the source but no pre-built artifact. The distributions haven’t caught up. Until dnf install gcc-c++ gives you a ready-to-use std module, import std; adds a mandatory bootstrapping step to every fresh build environment. Every CI image. Every developer’s first checkout.
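In CI, the practical answer is to pay the bootstrap cost once and cache the artifact. A hypothetical sketch — cache_restore and cache_save are stand-ins for whatever cache API your CI provides, and the paths are Fedora's from above:

```shell
# Key the cached std BMI on the exact compiler version: BMIs are not
# stable across compiler releases, so an upgrade must invalidate the cache.
ver=$(g++ -dumpversion 2>/dev/null || echo unknown)
key="std-bmi-gcc-$ver"
echo "cache key: $key"
# if ! cache_restore "$key"; then
#   g++ -std=c++23 -fmodules -x c++ \
#     /usr/include/c++/15/bits/std.cc -c -o std.o
#   cache_save "$key" gcm.cache/ std.o
# fi
```

The same pattern works for Clang with std.pcm; just remember the 33MB artifact also has to travel to every job that compiles an import std; TU.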
The compile-time payoff is real
I wrote a ~150-function library spread across five namespaces (biglib::math, biglib::string, biglib::io, biglib::mem, biglib::algo) — templates, constexpr functions, bit manipulation — in both header and module form. Identical implementations. Compiled a consumer five times consecutively on GCC 15 with -O2 on an i7-4790 @ 3.60GHz.
One-time BMI pre-compilation: 246ms.
| Approach | Per-compile time | Total for 5 builds |
|---|---|---|
| Module consumer (BMI exists) | 79ms | 246 + 395 = 641ms |
| Header consumer | 217ms | 1085ms |
2.75x faster per incremental build — each rebuild saves 138ms, so the 246ms BMI cost amortizes after roughly two compilations. By the fourth rebuild, you’ve saved just over 300ms.
An i7-4790 is not your CI machine, and a 150-function library is not the standard library. A real header like <algorithm>, with its template instantiation depth and SFINAE machinery, would likely show a wider gap. What matters is the ratio: reading a pre-digested binary representation of declarations is structurally cheaper than re-tokenizing, re-preprocessing, and re-parsing the same text in every translation unit. For header-heavy codebases, 2.75x is probably a floor, not a ceiling.
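The amortization arithmetic, spelled out with the numbers from the table above:

```shell
# One-time BMI cost: 246 ms. Each rebuild saves 217 - 79 = 138 ms.
# Break-even: 246 / 138 ≈ 1.8 rebuilds; net saving after n rebuilds
# is 138n - 246 ms.
awk 'BEGIN {
  printf "break-even after %.1f rebuilds\n", 246 / (217 - 79)
  printf "net saving after 4 rebuilds: %d ms\n", 138 * 4 - 246
}'
```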
What CMake does with all this
CMake 3.31 supports modules through FILE_SET CXX_MODULES:
cmake_minimum_required(VERSION 3.28)  # FILE_SET CXX_MODULES is official from 3.28
project(modules_demo CXX)

add_library(mathlib)
target_sources(mathlib
    PUBLIC FILE_SET CXX_MODULES FILES src/math.cppm)
target_compile_features(mathlib PUBLIC cxx_std_20)

add_executable(consumer src/consumer-math.cpp)
target_link_libraries(consumer PRIVATE mathlib)
That looks fine. What it generates is not fine. A verbose Ninja build shows 8 steps:
1. Dependency scan: consumer-math.cpp → P1689R5 .ddi file
2. Dependency scan: math.cppm → P1689R5 .ddi file
3. cmake_ninja_dyndep: resolve the module dependency graph for mathlib
4. cmake_ninja_dyndep: resolve the module dependency graph for consumer
5. Compile math.cppm with -fmodule-mapper= pointing to a generated modmap
6. Compile consumer-math.cpp with -fmodule-mapper= for its dependencies
7. ar: create libmathlib.a
8. Link consumer
The header-based equivalent? Two steps: compile and link.
Steps 1–2 are the dependency scanning pass — CMake runs the compiler with -fdeps-format=p1689r5 to extract each file’s module provides/requires — and steps 3–4 feed those results into Ninja’s dynamic dependency mechanism. Steps 5–6 use synthesized -fmodule-mapper flags so GCC knows where each BMI lives. You don’t need to understand this to use modules with CMake. You absolutely need to understand it the moment step 4 produces the wrong graph and your build fails over a missing BMI that you can see sitting right there in the build directory.
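The modmap files themselves are small and readable — roughly this shape (hand-written here with hypothetical paths; the $root line, when present, sets a prefix for relative BMI paths):

```
$root .
math CMakeFiles/mathlib.dir/math.gcm
```

When a module mysteriously can’t be found, these files are the first place to look: each line maps a logical module name to the BMI GCC will actually open.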
And this is the good case. CMake’s module support is the most mature among mainstream build systems — it required changes to Ninja itself to support dyndep rules that re-evaluate the dependency graph mid-build. Meson has experimental support behind a feature flag. Bazel is working on it. If your project uses custom Makefiles — and a surprising number of game studios and embedded shops still do — you’re on your own for dependency scanning.
The gap
The language feature works. GCC compiles modules. Clang compiles modules. Macro isolation, separate compilation, name visibility — all correct, all as specified.
Here’s what doesn’t work: GCC needs -fmodules on top of -std=c++20, six years after the standard. Clang demands explicit -fmodule-file= flags for every consumer, and neither compiler auto-discovers BMIs past trivial examples. import std; still needs manual bootstrapping on every major Linux distribution — none ships a pre-compiled BMI yet. CMake hides eight build steps behind a clean API, and when step 4 goes wrong you need to understand P1689R5, dyndep, and -fmodule-mapper to debug it.
None of these are hard blockers for a team that has build system expertise and time to invest. All of them are walls for the median C++ project that has a working #include-based build and no appetite for migration risk.
The compile-time improvement is genuine — 2.75x on incremental builds, and that gap widens as headers get more complex. Macro isolation alone eliminates an entire class of bugs that every large codebase eventually hits. Explicit dependency graphs are architecturally correct.
But every successful module adoption I’ve seen had the same thing: one person who understood P1689R5, who knew the difference between --precompile and -fmodules, who could trace a missing BMI back to a scan failure in CMake’s dyndep output. That person is the module adoption. When they go on vacation, the build breaks and nobody knows why.
The language is ready. The question for your project is whether you have that person, and whether you can afford the month it takes to become that person. For most teams, in April 2026, the honest answer is: not yet.