Cargo build for Windows on Linux

It probably won't come as much of a surprise if I, here on an IT site like Habr, admit that I sometimes treat myself to a bit of programming.

My main OS is Linux, but occasionally I also have to build executables for Windows. And, naturally, rebooting into Windows just to build an exe is not something I want to do. With C and C++ there is no problem: the MinGW cross-compiler has been around for ages and handles this perfectly well. Python and Java are not even worth mentioning, since they are cross-platform by design. But last year I decided to try the still rather newfangled language Rust. To build an executable with cargo, the package manager shipped with the Rust distribution, it should supposedly be enough to pass the --target flag, which specifies the target processor, architecture, and ABI, and by building on Linux end up with an exe that is a standard Windows executable. But when I tried to do exactly that:

cargo build --target x86_64-pc-windows-gnu

all I got were linker error messages:

error: linking with `gcc` failed: exit code: 1

[...]

  = note: /usr/bin/ld: unrecognized option '--nxcompat'
          /usr/bin/ld: use the --help option for usage information
          collect2: error: ld returned 1 exit status

error: aborting due to previous error

error: could not compile `foobar`.

If you are curious how I beat this and can now calmly cross-compile Rust programs for Windows without leaving Linux, read on.

Disclaimer

Below I only cover the 32-bit and 64-bit pc-windows-gnu targets; the pc-windows-msvc targets are of no interest to me, so I did not dig into them. Everything also refers to the Linux distribution installed on my machine, namely Fedora Linux 31, but I don't expect other Linux distributions to differ all that much. I use Rust installed via The Rust toolchain installer rather than the Rust packaged in the Fedora repositories, because I occasionally need nightly builds of Rust, which the standard repository naturally does not provide.
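
For reference, the toolchain installer mentioned above is normally run with the standard rustup one-liner:

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh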

First, let's make sure the required targets are installed by running the following command:

rustup target list

We get a list of all available targets, with the installed ones marked:

aarch64-apple-ios
aarch64-fuchsia
[...]
i686-pc-windows-gnu (installed)
[...]
i686-unknown-linux-gnu (installed)
[...]
x86_64-pc-windows-gnu (installed)
x86_64-unknown-linux-gnu (installed)
[...]

To build Windows executables from Linux we need the i686-pc-windows-gnu target for 32-bit exe files and x86_64-pc-windows-gnu for 64-bit ones. If these targets are not marked as (installed), add them with the command

rustup target add <target_name>
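
For the two targets used in this article that means:

rustup target add x86_64-pc-windows-gnu
rustup target add i686-pc-windows-gnu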

Next, let's make sure the MinGW cross-compiler is installed by running

rpm -qa | grep mingw

(or the equivalent query for whatever package manager our Linux distribution uses):

mingw32-gcc-9.2.1-1.fc31.x86_64
mingw32-binutils-2.32-6.fc31.x86_64
mingw64-gcc-9.2.1-1.fc31.x86_64
mingw-binutils-generic-2.32-6.fc31.x86_64
mingw-filesystem-base-110-1.fc31.noarch
mingw64-winpthreads-6.0.0-2.fc31.noarch
mingw32-winpthreads-6.0.0-2.fc31.noarch
mingw32-crt-6.0.0-2.fc31.noarch
mingw64-binutils-2.32-6.fc31.x86_64
mingw64-crt-6.0.0-2.fc31.noarch
mingw64-filesystem-110-1.fc31.noarch
mingw32-filesystem-110-1.fc31.noarch
mingw32-cpp-9.2.1-1.fc31.x86_64
mingw64-headers-6.0.0-2.fc31.noarch
mingw32-headers-6.0.0-2.fc31.noarch
mingw64-cpp-9.2.1-1.fc31.x86_64

If MinGW is missing, install the required packages by running

sudo dnf install mingw32-gcc mingw64-gcc
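
On Debian or Ubuntu-based distributions the equivalent packages come from apt; the gcc-mingw-w64 metapackage pulls in both the 32-bit and 64-bit toolchains, while the 64-bit-only gcc-mingw-w64-x86-64 is the one used in the Ubuntu walkthrough further down in this article. The rest of this walkthrough stays Fedora-specific:

sudo apt-get install gcc-mingw-w64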

Well, now everything seems to be in place; from here on we will solve problems as they come up (yeah, you could say this turns into a kind of Test-Driven Development :-).

Let's create the simplest possible Rust project:

[pfemidi@pfemidi rust]$ cargo new foobar
     Created binary (application) `foobar` package
[pfemidi@pfemidi rust]$ cat foobar/src/main.rs 
fn main() {
    println!("Hello, world!");
}
[pfemidi@pfemidi rust]$

First we compile and run it as a native Linux application:

[pfemidi@pfemidi foobar]$ cargo run
   Compiling foobar v0.1.0 (/home/pfemidi/mywork/rust/foobar)
    Finished dev [unoptimized + debuginfo] target(s) in 1.65s
     Running `target/debug/foobar`
Hello, world!
[pfemidi@pfemidi foobar]$ 

Everything works. Now let's try building it for the x86_64-pc-windows-gnu target:

cargo build --target x86_64-pc-windows-gnu

and we get the very same build error:

error: linking with `gcc` failed: exit code: 1

[...]

  = note: /usr/bin/ld: unrecognized option '--nxcompat'
          /usr/bin/ld: use the --help option for usage information
          collect2: error: ld returned 1 exit status

error: aborting due to previous error

error: could not compile `foobar`.

Clearly, the build is invoking the gcc already installed on the system rather than the linker from MinGW. To fix this, we create a .cargo directory inside the project and a config file in it with the following contents:

[pfemidi@pfemidi foobar]$ mkdir .cargo
[pfemidi@pfemidi foobar]$ cat > .cargo/config
[target.i686-pc-windows-gnu]
linker = "i686-w64-mingw32-gcc"
ar = "i686-w64-mingw32-ar"

[target.x86_64-pc-windows-gnu]
linker = "x86_64-w64-mingw32-gcc"
ar = "x86_64-w64-mingw32-ar"
[pfemidi@pfemidi foobar]$

This ensures that when building the Windows targets, the linker from MinGW is used instead of the gcc installed on the system.
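
The same override can also be supplied through environment variables using cargo's CARGO_TARGET_<TRIPLE>_LINKER convention, which can be handy in CI; this is just an alternative to the config file above:

export CARGO_TARGET_X86_64_PC_WINDOWS_GNU_LINKER=x86_64-w64-mingw32-gcc
export CARGO_TARGET_I686_PC_WINDOWS_GNU_LINKER=i686-w64-mingw32-gcc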

Let's try building the project again:

cargo build --target x86_64-pc-windows-gnu

and we get a different linker error, this time from x86_64-w64-mingw32-gcc:

error: linking with `x86_64-w64-mingw32-gcc` failed: exit code: 1

[...]

  = note: /usr/lib/gcc/x86_64-w64-mingw32/9.2.1/../../../../x86_64-w64-mingw32/bin/ld: cannot find -lpthread
          collect2: error: ld returned 1 exit status

error: aborting due to previous error

error: could not compile `foobar`.

The thing is that Rust links everything statically by default, so in addition to the mingw32-winpthreads and mingw64-winpthreads packages, which dnf pulled in automatically as dependencies of mingw32-gcc and mingw64-gcc, the static library packages mingw32-winpthreads-static and mingw64-winpthreads-static must also be installed; without them the linker will keep complaining about the missing -lpthread and the build will fail. Let's install the missing packages:

sudo dnf install mingw??-winpthreads-static

and run the build again:

cargo build --target x86_64-pc-windows-gnu

A link error again! But a different one this time:

error: linking with `x86_64-w64-mingw32-gcc` failed: exit code: 1

[...]

  = note: /usr/lib/gcc/x86_64-w64-mingw32/9.2.1/../../../../x86_64-w64-mingw32/bin/ld: /home/pfemidi/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-pc-windows-gnu/lib/crt2.o:crtexe.c:(.rdata$.refptr.__onexitbegin[.refptr.__onexitbegin]+0x0): undefined reference to `__onexitbegin'
          /usr/lib/gcc/x86_64-w64-mingw32/9.2.1/../../../../x86_64-w64-mingw32/bin/ld: /home/pfemidi/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-pc-windows-gnu/lib/crt2.o:crtexe.c:(.rdata$.refptr.__onexitend[.refptr.__onexitend]+0x0): undefined reference to `__onexitend'
          collect2: error: ld returned 1 exit status

error: aborting due to previous error

error: could not compile `foobar`.

The linker complains about the missing symbols __onexitbegin and __onexitend in the file ~/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-pc-windows-gnu/lib/crt2.o, which we installed as part of the x86_64-pc-windows-gnu target. After some pondering, googling, reading the docs on the Rust website, and studying the Rust sources and how Rust itself is built, I figured it out: Rust for Windows, and hence its components for the pc-windows-gnu targets, is built with MinGW 6.3.0, while my Fedora Linux 31 ships MinGW 9.2.1, hence the CRT mismatch. Ok, let's try copying crt2.o from Fedora's MinGW into the Rust directory for the x86_64-pc-windows-gnu target. And along with crt2.o we'll also copy dllcrt2.o, which is the entry point for dynamic libraries:

[pfemidi@pfemidi foobar]$ cd ~/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-pc-windows-gnu/lib/
[pfemidi@pfemidi lib]$ cp /usr/x86_64-w64-mingw32/sys-root/mingw/lib/crt2.o .
[pfemidi@pfemidi lib]$ cp /usr/x86_64-w64-mingw32/sys-root/mingw/lib/dllcrt2.o .
[pfemidi@pfemidi lib]$ cd -
/home/pfemidi/mywork/rust/foobar
[pfemidi@pfemidi foobar]$ 

and run the build of our Rust project once more:

[pfemidi@pfemidi foobar]$ cargo build --target x86_64-pc-windows-gnu
   Compiling foobar v0.1.0 (/home/pfemidi/mywork/rust/foobar)
    Finished dev [unoptimized + debuginfo] target(s) in 4.46s
[pfemidi@pfemidi foobar]$

Excellent! Everything built! And since I have wine installed, I can immediately check how it runs:

[pfemidi@pfemidi foobar]$ cargo run --target x86_64-pc-windows-gnu
    Finished dev [unoptimized + debuginfo] target(s) in 0.38s
     Running `target/x86_64-pc-windows-gnu/debug/foobar.exe`
Hello, world!
[pfemidi@pfemidi foobar]$

And it even works! Now let's try the same for the 32-bit version of the Windows executable, going straight to cargo run --target i686-pc-windows-gnu without a preceding build:

error: linking with `i686-w64-mingw32-gcc` failed: exit code: 1

[...]

  = note: /usr/lib/gcc/i686-w64-mingw32/9.2.1/../../../../i686-w64-mingw32/bin/ld: /home/pfemidi/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/i686-pc-windows-gnu/lib/crt2.o:crtexe.c:(.text+0x75): undefined reference to `__onexitend'
          /usr/lib/gcc/i686-w64-mingw32/9.2.1/../../../../i686-w64-mingw32/bin/ld: /home/pfemidi/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/i686-pc-windows-gnu/lib/crt2.o:crtexe.c:(.text+0x7a): undefined reference to `__onexitbegin'
          collect2: error: ld returned 1 exit status

error: aborting due to previous error

error: could not compile `foobar`.

We have already seen this error about the missing __onexitbegin and __onexitend symbols, now in the file ~/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/i686-pc-windows-gnu/lib/crt2.o. It is cured exactly the same way as for the 64-bit target, by replacing crt2.o and dllcrt2.o with the identically named files from Fedora's MinGW distribution:

[pfemidi@pfemidi foobar]$ cd ~/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/i686-pc-windows-gnu/lib/
[pfemidi@pfemidi lib]$ cp /usr/i686-w64-mingw32/sys-root/mingw/lib/crt2.o .
[pfemidi@pfemidi lib]$ cp /usr/i686-w64-mingw32/sys-root/mingw/lib/dllcrt2.o .
[pfemidi@pfemidi lib]$ cd -
/home/pfemidi/mywork/rust/foobar
[pfemidi@pfemidi foobar]$ 

Let's check:

[pfemidi@pfemidi foobar]$ 
[pfemidi@pfemidi foobar]$ cargo run --target i686-pc-windows-gnu
   Compiling foobar v0.1.0 (/home/pfemidi/mywork/rust/foobar)
    Finished dev [unoptimized + debuginfo] target(s) in 5.12s
     Running `target/i686-pc-windows-gnu/debug/foobar.exe`
Hello, world!
[pfemidi@pfemidi foobar]$

Here, too, everything now builds and runs.

And everything was fine until I used functions that panic (the panic! macro, the expect function, and so on) in the 32-bit Windows target. With the 64-bit target all is well, but not with the 32-bit one.

Let's add a panic to our project:

[pfemidi@pfemidi foobar]$ cat src/main.rs 
fn main() {
    println!("Hello, world!");
    panic!("I'm panicked!");    // HERE IS OUR PANIC!
}
[pfemidi@pfemidi foobar]$

and try to build it as a 64-bit Windows executable:

[pfemidi@pfemidi foobar]$ cargo run --target x86_64-pc-windows-gnu
   Compiling foobar v0.1.0 (/home/pfemidi/mywork/rust/foobar)
    Finished dev [unoptimized + debuginfo] target(s) in 2.95s
     Running `target/x86_64-pc-windows-gnu/debug/foobar.exe`
Hello, world!
thread 'main' panicked at 'I'm panicked!', src/main.rs:3:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace.
[pfemidi@pfemidi foobar]$

It compiles, it links, and it runs. Let's now try the same thing, but with 32-bit Windows as the target.

Oops:

[pfemidi@pfemidi foobar]$ cargo run --target i686-pc-windows-gnu
   Compiling foobar v0.1.0 (/home/pfemidi/mywork/rust/foobar)
error: linking with `i686-w64-mingw32-gcc` failed: exit code: 1

[...]

  = note: /usr/lib/gcc/i686-w64-mingw32/9.2.1/../../../../i686-w64-mingw32/bin/ld: /home/pfemidi/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/i686-pc-windows-gnu/lib/libpanic_unwind-1a1fb2d4d34efaf8.rlib(panic_unwind-1a1fb2d4d34efaf8.panic_unwind.2hbcqjo8-cgu.0.rcgu.o): in function `ZN12panic_unwind3imp5panic17hdaabfe6326236dacE':
          /rustc/5e1a799842ba6ed4a57e91f7ab9435947482f7d8\/src\libpanic_unwind/gcc.rs:73: undefined reference to `_Unwind_RaiseException'
          /usr/lib/gcc/i686-w64-mingw32/9.2.1/../../../../i686-w64-mingw32/bin/ld: /home/pfemidi/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/i686-pc-windows-gnu/lib/libpanic_unwind-1a1fb2d4d34efaf8.rlib(panic_unwind-1a1fb2d4d34efaf8.panic_unwind.2hbcqjo8-cgu.0.rcgu.o): in function `rust_eh_unwind_resume':
          /rustc/5e1a799842ba6ed4a57e91f7ab9435947482f7d8\/src\libpanic_unwind/gcc.rs:327: undefined reference to `_Unwind_Resume'
          collect2: error: ld returned 1 exit status

error: aborting due to previous error

error: could not compile `foobar`.

Once again the linker complains about missing symbols, but this time they are _Unwind_RaiseException and _Unwind_Resume in the libpanic_unwind module of the Rust standard library.

More pondering, more googling, more reading of docs and digging through the sources of both Rust itself and its standard library. And I understood why this error occurs.

For stack unwinding on exceptions, Rust uses DWARF for 32-bit Windows targets and SEH for 64-bit Windows targets, while the MinGW from the standard Fedora Linux repository uses SJLJ for 32-bit Windows targets and SEH for 64-bit ones (the differences between these methods are described elsewhere). That is why the 64-bit targets build without question, while for 32-bit the required symbols and object files simply are not there. To get those files, MinGW has to be rebuilt with DWARF support instead of the default SJLJ for 32-bit Windows targets.

I won't go into detail about exactly how to rebuild MinGW; that part is neither very hard nor very interesting (configure has to be run with the --disable-sjlj-exceptions option, the rest is trivial; a rough sketch of rebuilding the package this way is shown after the listing below). I'll just say one thing: once MinGW is rebuilt with DWARF stack unwinding instead of SJLJ, you need to take a single file from it named libgcc_eh.a and put it into the library directory for the i686-pc-windows-gnu target. After that, projects that use panicking functions will build not only for the 64-bit Windows targets but for the 32-bit ones as well:

[pfemidi@pfemidi foobar]$ cd ~/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/i686-pc-windows-gnu/lib/
[pfemidi@pfemidi lib]$ cp ~/rpmbuild/BUILD/gcc-9.2.1-20190827/build_win32/i686-w64-mingw32/libgcc/libgcc_eh.a .
[pfemidi@pfemidi lib]$ cd -
/home/pfemidi/mywork/rust/foobar
[pfemidi@pfemidi foobar]$ cargo run --target i686-pc-windows-gnu
   Compiling foobar v0.1.0 (/home/pfemidi/mywork/rust/foobar)
    Finished dev [unoptimized + debuginfo] target(s) in 4.57s
     Running `target/i686-pc-windows-gnu/debug/foobar.exe`
Hello, world!
thread 'main' panicked at 'I'm panicked!', src/main.rs:3:5
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace.
[pfemidi@pfemidi foobar]$ 
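
For completeness, here is a rough sketch of one way to rebuild Fedora's 32-bit MinGW gcc with DWARF unwinding, by rebuilding the distribution's own source package. The package and spec file names are my assumptions about the Fedora packaging; the only essential part, per the explanation above, is adding --disable-sjlj-exceptions to the configure invocation:

# fetch and unpack the source package for the MinGW gcc into ~/rpmbuild
dnf download --source mingw32-gcc
rpm -ivh mingw-gcc-*.src.rpm
# edit ~/rpmbuild/SPECS/mingw-gcc.spec and add --disable-sjlj-exceptions to the
# configure flags of the i686 (win32) build, then rebuild the package
rpmbuild -ba ~/rpmbuild/SPECS/mingw-gcc.spec
# the freshly built libgcc_eh.a then appears in the build tree referenced in the cp command above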

Well, that's about it.

UPDATE 2019-06-11

This fails for me with:

     Running `rustc --crate-name animation examples/animation.rs --color always --crate-type bin --emit=dep-info,link -C debuginfo=2 --cfg 'feature="default"' -C metadata=006e668c6384c29b -C extra-filename=-006e668c6384c29b --out-dir /home/roman/projects/rust-sdl2/target/x86_64-pc-windows-gnu/debug/examples --target x86_64-pc-windows-gnu -C ar=x86_64-w64-mingw32-gcc-ar -C linker=x86_64-w64-mingw32-gcc -C incremental=/home/roman/projects/rust-sdl2/target/x86_64-pc-windows-gnu/debug/incremental -L dependency=/home/roman/projects/rust-sdl2/target/x86_64-pc-windows-gnu/debug/deps -L dependency=/home/roman/projects/rust-sdl2/target/debug/deps --extern bitflags=/home/roman/projects/rust-sdl2/target/x86_64-pc-windows-gnu/debug/deps/libbitflags-2c7b3e3d10e1e0dd.rlib --extern lazy_static=/home/roman/projects/rust-sdl2/target/x86_64-pc-windows-gnu/debug/deps/liblazy_static-a80335916d5ac241.rlib --extern libc=/home/roman/projects/rust-sdl2/target/x86_64-pc-windows-gnu/debug/deps/liblibc-387157ce7a56c1ec.rlib --extern num=/home/roman/projects/rust-sdl2/target/x86_64-pc-windows-gnu/debug/deps/libnum-18ac2d75a7462b42.rlib --extern rand=/home/roman/projects/rust-sdl2/target/x86_64-pc-windows-gnu/debug/deps/librand-7cf254de4aeeab70.rlib --extern sdl2=/home/roman/projects/rust-sdl2/target/x86_64-pc-windows-gnu/debug/deps/libsdl2-3f37ebe30a087396.rlib --extern sdl2_sys=/home/roman/projects/rust-sdl2/target/x86_64-pc-windows-gnu/debug/deps/libsdl2_sys-3edefe52781ad7ef.rlib -L native=/home/roman/.cargo/registry/src/github.com-1ecc6299db9ec823/winapi-x86_64-pc-windows-gnu-0.4.0/lib`
error: linking with `x86_64-w64-mingw32-gcc` failed: exit code: 1

Maybe this will help https://github.com/rust-lang/rust/issues/44787

Static compile sdl2

There is an option to statically compile SDL2, but it didn’t work for me.

Also, the mixer is not included when using the bundled feature.
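
For reference, the static-compile option referred to here is, as far as I know, the sdl2 crate's feature flags; a sketch of what enabling them in Cargo.toml might look like (feature names are taken from the rust-sdl2 README and may differ between versions):

[dependencies.sdl2]
version = "0.34"   # use whatever version your project already depends on
features = ["bundled", "static-link"]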

Let’s cross-compile the examples from the rust-sdl2 project from Ubuntu to Windows x86_64.

In ~/.cargo/config

[target.x86_64-pc-windows-gnu]
linker = "x86_64-w64-mingw32-gcc"
ar = "x86_64-w64-mingw32-gcc-ar"

Then run this:

sudo apt-get install gcc-mingw-w64-x86-64 -y
# use rustup to add target https://github.com/rust-lang/rustup.rs#cross-compilation
rustup target add x86_64-pc-windows-gnu

# Based on instructions from https://github.com/AngryLawyer/rust-sdl2/

# First we need sdl2 libs
# links to packages https://www.libsdl.org/download-2.0.php

sudo apt-get install libsdl2-dev -y
curl -s https://www.libsdl.org/release/SDL2-devel-2.0.9-mingw.tar.gz | tar xvz -C /tmp

# Prepare files for building

mkdir -p ~/projects
cd ~/projects
git clone https://github.com/Rust-SDL2/rust-sdl2
cd rust-sdl2
cp -r /tmp/SDL2-2.0.9/x86_64-w64-mingw32/lib/* ~/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/lib/rustlib/x86_64-pc-windows-gnu/lib/
cp /tmp/SDL2-2.0.9/x86_64-w64-mingw32/bin/SDL2.dll .

Build all examples at once:

cargo build --target=x86_64-pc-windows-gnu --verbose --examples

Or stop after the first failure:

echo; for i in examples/*; do [ $? -eq 0 ] && cargo build --target=x86_64-pc-windows-gnu --verbose --example $(basename $i .rs); done

Run

cargo build will put binaries in target/x86_64-pc-windows-gnu/debug/examples/

Copy needed files:

cp /tmp/SDL2-2.0.9/x86_64-w64-mingw32/bin/SDL2.dll target/x86_64-pc-windows-gnu/debug/examples/
cp assets/sine.wav target/x86_64-pc-windows-gnu/debug/examples/

Then copy the directory target/x86_64-pc-windows-gnu/debug/examples/ to your Windows machine and run the exe files.

Run in cmd.exe

If you want to see the console output when running exe files, you may run them from cmd.exe.

To open cmd.exe in the current directory from File Explorer, Shift+right-click on an empty spot in the window and choose "Open command window here".

Backtraces with MinGW should work now; if not, use MSVC: https://github.com/rust-lang/rust/pull/39234

  • Introduction

  • Why?

    • Speed
    • Cost
    • Containers + k8s
  • Rejected Strategies

    • Using x86_64-pc-windows-gnu
    • Using wine to run the MSVC toolchain
  • How?

    • Prerequisites
    • 1. Setup toolchain(s)
    • 2. Acquire Rust std lib
    • 3. Acquire CRT and Windows 10 SDK
    • 4. Override cc defaults
    • 5. Profit
  • Bonus: Headless testing

    • 1. Install
    • 2. Specify runner
    • 3. Test
  • Final image definition

  • Common issues

    • CMake
    • MASM
    • Compiler Target Confusion
  • Conclusion

Introduction

Last November I added a new job to our CI to cross compile our project for x86_64-pc-windows-msvc from an x86_64-unknown-linux-gnu host. I had wanted to blog about that at the time but never got around to it, but after making some changes and improvements last month to this, in addition to writing a new utility, I figured now was as good of a time as any to share some knowledge in this area for those who might be interested.

Why?

Before we get started with the How, I want to talk about why one might want to do this in the first place, as natively targeting Windows is a "known quantity" with the least amount of surprise. While there are reasons beyond the following, my primary use case for why I want to do cross compilation to Windows is our Continuous Delivery pipeline for my main project at Embark.

Speed

It’s fairly common knowledge that, generally speaking, Linux is faster than Windows on equivalent hardware. From faster file I/O to better utilization of high core count machines, and faster process and thread creation, many operations done in a typical CI job such as compilation and linking tend to be faster on Linux. And since I am lazy, I’ll let another blog post about cross compiling Firefox from Linux to Windows actually present some numbers in defense of this assertion.

Cost

Though we’re now running a Windows VM in our on-premise data center for our normal Windows CD jobs, we actually used to run it in GCP. It was 1 VM with a modest 32 CPU count, but the licensing costs (Windows Server is licensed by core) alone accounted for >20% of our total costs for this particular GCP project.

While this single VM is not a huge deal relative to the total costs of our project, it’s still a budget item that provides no substantive value, and on principle I’d rather have more/better CPUs, RAM, disk, or GPUs, that provide immediate concrete value in our CI, or just for local development.

Containers + k8s

This one is probably the most subjective, so strap in!

While fast CI is a high priority, it really doesn’t matter how fast it is if it gives unreliable results. Since I am the (mostly) sole maintainer (which, yes, we’re trying to fix) of our CD pipeline in a team of almost 40 people, my goal early on was to get it into a reliably working state that I could easily maintain with a minimal amount of my time, since I have other, more fun, things to do.

The primary way I did this was to build buildkite-jobify (we use Buildkite as our CI provider). This is just a small service that spawns Kubernetes (k8s) jobs for each of the CI jobs we run on Linux, based on configuration from the repo itself.

This has a few advantages and disadvantages over a more typical VM approach, which we use for x86_64-pc-windows-msvc (for now?), x86_64-apple-darwin, and aarch64-apple-darwin.

Pros

  • Consistency — Every job run from the same container image has the exact same starting environment.
  • Versioned — The image definitions are part of our monorepo, as well as the k8s job descriptions, so we get atomic updates of the environment CI jobs execute in with the code itself. This also makes rollbacks trivial if needed.
  • Scalability — Scaling a k8s cluster up or down is fairly easy (especially in eg GKE, because $) as long as you have the compute resources. k8s also makes it easy to specify resource requests so that individual jobs can dynamically spin up on the most appropriate node at the time based on the other workloads currently running on the cluster.
  • Movability — Since k8s is just running containers, it’s trivial to move build jobs between different clusters, for example in our case, from GKE to our on-premise cluster.

Cons

  • Clean builds — Clean builds are quite slow compared to incremental builds, however we mitigate this by using cargo-fetcher for faster crate fetching and sccache for compiler output caching.
  • Startup times — Changing the image used for a build job means that every k8s node that runs an image it doesn’t have needs to pull it before running. For example, the pull can take almost 2 minutes for our aarch64-linux-android image, which is by far our largest at almost 3GiB (the Android NDK/SDK are incredibly bloated). However, this is generally a one-time cost per image per node, and we don’t update images so often that it is actually a problem in practice.

Rejected Strategies

Before we get into the how, I just wanted to show two other strategies that could be used for cross compilation and that you might want to consider if your needs are different from ours.

Using x86_64-pc-windows-gnu

To be honest, I rejected this one pretty much immediately, simply because the gnu environment is not the "native" msvc environment for Windows. Targeting x86_64-pc-windows-gnu would not be representative of the actual builds used by users, and it would be different from the local builds built by developers on Windows, which made it an unappealing option. That being said, generally speaking, Rust crates tend to support x86_64-pc-windows-gnu fairly well, which as we’ll see later is a good thing due to my chosen strategy.

Using wine to run the MSVC toolchain

I briefly considered using wine to run the various components of the MSVC compiler toolchain, as that would be the most accurate way to match the native compilation for x86_64-pc-windows-msvc. However, we already use LLD when linking on Windows since it is vastly faster than the MSVC linker, so why not just replace the rest of the toolchain while we’re at it? 😉 This kind of contradicts the reasons stated in x86_64-pc-windows-gnu since we’d be changing to a completely different compiler with different codegen, but this tradeoff is actually ok with me for a couple of reasons.

The first reason is that the driving force behind clang-cl, lld-link, and the other parts of LLVM replacing the MSVC toolchain, is so that Chrome can be built with LLVM for all of their target platforms. The size of the Chrome project dwarfs the amount of C/C++ code in our project by a huge margin, and (I assume) includes far more…advanced…C++ code than we depend on, so the risk of mis-compilation or other issues compared to cl.exe seems reasonably low.

And secondly, we’re actively trying to get rid of C/C++ dependencies as the Rust ecosystem matures and provides its own versions of C/C++ libraries we use. For example, at the time of this writing, we use roughly 800k lines of C/C++ code, a large portion of which comes from Physx, which we will, hopefully, be able to replace in the future with something like rapier.

How?

Ok, now that I’ve laid out some reasons why you might want to consider cross compilation to Windows from Linux, let’s see how we can actually do it! I’ll be constructing a container image (in Dockerfile format) as we go that can be used to compile a Rust program. If you’re only targeting C/C++ the broad strokes of this strategy will still be relevant, you’ll just have a tougher time of it because…well, C/C++.

The strategy I chose is to use clang, which, like most compilers based off of LLVM (including rustc), is a native cross compiler, to compile any C/C++ code and assembly. Specifically this means using clang-cl and lld-link so that we, generally, don’t need to modify any C/C++ code to take cross compilation into account.

Prerequisites

If you want to follow along at home, you’ll need to be on Linux (though WSL might work?) with something that can build container images, like docker or podman.

1. Setup toolchain(s)

First thing we need are the actual toolchains needed to compile and link a full Rust project.

# We'll just use the official Rust image rather than build our own from scratch
FROM docker.io/library/rust:1.54.0-slim-bullseye

ENV KEYRINGS /usr/local/share/keyrings

RUN set -eux; \
    mkdir -p $KEYRINGS; \
    apt-get update && apt-get install -y gpg curl; \
    # clang/lld/llvm
    curl --fail https://apt.llvm.org/llvm-snapshot.gpg.key | gpg --dearmor > $KEYRINGS/llvm.gpg; \
    echo "deb [signed-by=$KEYRINGS/llvm.gpg] http://apt.llvm.org/bullseye/ llvm-toolchain-bullseye-13 main" > /etc/apt/sources.list.d/llvm.list;

RUN set -eux; \
    # Skipping all of the "recommended" cruft reduces total image size by ~300MiB
    apt-get update && apt-get install --no-install-recommends -y \
        clang-13 \
        # llvm-ar
        llvm-13 \
        lld-13 \
        # We're using this in step 3
        tar; \
    # ensure that clang/clang++ are callable directly
    ln -s clang-13 /usr/bin/clang && ln -s clang /usr/bin/clang++ && ln -s lld-13 /usr/bin/ld.lld; \
    # We also need to setup symlinks ourselves for the MSVC shims because they aren't in the debian packages
    ln -s clang-13 /usr/bin/clang-cl && ln -s llvm-ar-13 /usr/bin/llvm-lib && ln -s lld-link-13 /usr/bin/lld-link; \
    # Verify the symlinks are correct
    clang++ -v; \
    ld.lld -v; \
    # Doesn't have an actual -v/--version flag, but it still exits with 0
    llvm-lib -v; \
    clang-cl -v; \
    lld-link --version; \
    # Use clang instead of gcc when compiling binaries targeting the host (eg proc macros, build files)
    update-alternatives --install /usr/bin/cc cc /usr/bin/clang 100; \
    update-alternatives --install /usr/bin/c++ c++ /usr/bin/clang++ 100; \
    apt-get remove -y --auto-remove; \
    rm -rf /var/lib/apt/lists/*;

2. Acquire Rust std lib

By default, rustup only installs the native host target of x86_64-unknown-linux-gnu, which we still need to compile build scripts and procedural macros, but since we’re cross compiling we need to add the x86_64-pc-windows-msvc target as well to get the Rust std library. We could also build the standard library ourselves, but that would mean requiring nightly and taking time to compile something that we can just download instead.

# Retrieve the std lib for the target
RUN rustup target add x86_64-pc-windows-msvc

3. Acquire CRT and Windows 10 SDK

In all likelihood, you’ll need the MSVCRT and Windows 10 SDK to compile and link most projects that target Windows. This is problematic because the official way to install them is, frankly, atrocious, in addition to not being redistributable (so no one but Microsoft can provide, say, a tarball with the needed files).

But really, our needs are relatively simple compared to a normal developer on Windows, as we just need the headers and libraries from the typical VS installation. We could if we wanted use the Visual Studio Build Tools from a Windows machine, or if we were feeling adventurous try to get it running under wine (warning: I briefly tried this but it requires .NET shenanigans that at the time were broken under wine) and then create our own tarball with the needed files, but that feels too slow and tedious.

So instead, I just took inspiration from other projects and created my own xwin program to download, decompress, and repackage the MSVCRT and Windows SDK into a form appropriate for cross compilation. This has several advantages over using the official installation methods.

  • No cruft — Since this program is tailored specifically to getting only the files needed for compiling and linking we skip a ton of cruft, some of which you can opt out of, but some of which you cannot with the official installers. For example, even if you never target aarch64-pc-windows-msvc, you will still get all of the libraries needed for it.
  • Faster — In addition to not even downloading stuff we don’t need, all downloads, decompression, and disk writes are done in parallel. On my home machine with ~11.7MiB/s download speeds and a Ryzen 3900X I can download, decompress, and "install" the MSVCRT and Windows SDK in about 27 seconds.
  • Fixups — While the CRT is generally fine, the Windows SDK headers and libraries are an absolute mess of casing (seriously, what maniac thought it would be a good idea to capitalize the l in .lib!?), making them fairly useless on a case-sensitive file system. Rather than rely on using a case-insensitive file system on Linux, xwin just adds symlinks as needed, so eg. windows.h -> Windows.h, kernel32.lib -> kernel32.Lib etc.

We have two basic options for how we could get the CRT and SDK, either run xwin directly during image building, or run it separately and tarball the files and upload them to something like GCS and just retrieve them as needed in the future. We’ll just use it directly while building the image since that’s easier.

RUN set -eux; \
    xwin_version="0.1.1"; \
    xwin_prefix="xwin-$xwin_version-x86_64-unknown-linux-musl"; \
    # Install xwin to cargo/bin via github release. Note you could also just use `cargo install xwin`.
    curl --fail -L https://github.com/Jake-Shadle/xwin/releases/download/$xwin_version/$xwin_prefix.tar.gz | tar -xzv -C /usr/local/cargo/bin --strip-components=1 $xwin_prefix/xwin; \
    # Splat the CRT and SDK files to /xwin/crt and /xwin/sdk respectively
    xwin --accept-license 1 splat --output /xwin; \
    # Remove unneeded files to reduce image size
    rm -rf .xwin-cache /usr/local/cargo/bin/xwin;

4. Override cc defaults

cc is the Rust ecosystem’s primary (we’ll get to the most common exception later) way to compile C/C++ code for use in Rust crates. By default it will try and use cl.exe and friends when targeting the msvc environment, but since we don’t have that, we need to inform it what we actually want it to use instead. We also need to provide additional compiler options to clang-cl to avoid common problems when compiling code that assumes that targeting x86_64-pc-windows-msvc can only be done with the MSVC toolchain.

We also need to tell lld where to search for libraries. We could place the libs in one of the default lib directories lld will search in, but that would mean changing the layout of the CRT and SDK library directories, so it’s generally easier to just specify them explicitly instead. We use RUSTFLAGS for this, which does mean that if you are specifying things like -Ctarget-feature=+crt-static in .cargo/config.toml you will need to reapply them in the container image either during image build or by overriding the environment at runtime to get everything working.

# Note that we're using the full target triple for each variable instead of the
# simple CC/CXX/AR shorthands to avoid issues when compiling any C/C++ code for
# build dependencies that need to compile and execute in the host environment
ENV CC_x86_64_pc_windows_msvc="clang-cl" \
    CXX_x86_64_pc_windows_msvc="clang-cl" \
    AR_x86_64_pc_windows_msvc="llvm-lib" \
    # Note that we only disable unused-command-line-argument here since clang-cl
    # doesn't implement all of the options supported by cl, but the ones it doesn't
    # are _generally_ not interesting.
    CL_FLAGS="-Wno-unused-command-line-argument -fuse-ld=lld-link /imsvc/xwin/crt/include /imsvc/xwin/sdk/include/ucrt /imsvc/xwin/sdk/include/um /imsvc/xwin/sdk/include/shared" \
    RUSTFLAGS="-Lnative=/xwin/crt/lib/x86_64 -Lnative=/xwin/sdk/lib/um/x86_64 -Lnative=/xwin/sdk/lib/ucrt/x86_64"

# These are separate since docker/podman won't transform environment variables defined in the same ENV block
ENV CFLAGS_x86_64_pc_windows_msvc="$CL_FLAGS" \
    CXXFLAGS_x86_64_pc_windows_msvc="$CL_FLAGS"

As already noted above in the reasons why we went this route, we use lld-link even when compiling on Windows hosts due to its superior speed over link.exe. So for our project we just set it in our .cargo/config.toml so it’s used regardless of host platform.

[target.x86_64-pc-windows-msvc]
linker = "lld-link" # Note the lack of extension, which means it will work on both Windows and unix style platforms

If you don’t already use lld-link when targeting Windows, you’ll need to add an additional environment variable so that cargo knows what linker to use, otherwise it will default to link.exe.

ENV CARGO_TARGET_X86_64_PC_WINDOWS_MSVC_LINKER=lld-link

5. Profit

Building a container image from this Dockerfile spec should allow you to run containers capable of compiling and linking a Rust project targeting Windows, including any C/C++ code that might be used as a dependency….mostly.

cargo build --target x86_64-pc-windows-msvc
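
Assuming the image has been built and tagged xwin (as in the podman build example near the end of this post), invoking the build from the host might look roughly like this; the mount path and working directory are just an example:

podman run --rm -v "$(pwd)":/src -w /src xwin cargo build --target x86_64-pc-windows-msvc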

Bonus: Headless testing

Of course, though compiling and linking a Rust project on Linux is one thing, our CD pipeline also needs to run tests! I’ve mentioned wine several times so far as a way you could run Windows programs such as the MSVC toolchain under Linux, so naturally, that’s what we’re going to do with our test executables.

1. Install

Debian tends to update packages at a glacial pace, as in the case of wine where the 5.0.3 version packaged in bullseye is about 9 months out of date. In this case, it actually matters, as some crates, for example mio, rely on relatively recent wine releases to implement features or fix bugs. Since mio is a foundational crate in the Rust ecosystem, we’ll be installing wine’s staging version, which is 6.15 at the time of this writing.

RUN set -eux; \
    curl --fail https://dl.winehq.org/wine-builds/winehq.key | gpg --dearmor > $KEYRINGS/winehq.gpg; \
    echo "deb [signed-by=$KEYRINGS/winehq.gpg] https://dl.winehq.org/wine-builds/debian/ bullseye main" > /etc/apt/sources.list.d/winehq.list; \
    # The way the debian package works requires that we add x86 support, even
    # though we are only going to be running x86_64 executables. We could also
    # build from source, but that is out of scope.
    dpkg --add-architecture i386; \
    apt-get update && apt-get install --no-install-recommends -y winehq-staging; \
    apt-get remove -y --auto-remove; \
    rm -rf /var/lib/apt/lists/*;

2. Specify runner

By default, cargo will attempt to run test binaries natively, but luckily this behavior is trivial to override by supplying a single environment variable to tell cargo how it should run each test binary. This method is also how you can run tests for wasm32-unknown-unknown locally via a wasm runtime like wasmtime. 🙂

ENV \
    # wine can be quite spammy with log messages and they're generally uninteresting
    WINEDEBUG="-all" \
    # Use wine to run test executables
    CARGO_TARGET_X86_64_PC_WINDOWS_MSVC_RUNNER="wine"

3. Test

Now we can compile, link, and test Windows executables with just a standard cargo invocation.

cargo test --target x86_64-pc-windows-msvc

Final image definition

Putting it all together, here is an image definition that should allow you to cross compile to Windows and run headless tests, without needing a Windows install at any step.

# We'll just use the official Rust image rather than build our own from scratch
FROM docker.io/library/rust:1.54.0-slim-bullseye

ENV KEYRINGS /usr/local/share/keyrings

RUN set -eux; \
    mkdir -p $KEYRINGS; \
    apt-get update && apt-get install -y gpg curl; \
    # clang/lld/llvm
    curl --fail https://apt.llvm.org/llvm-snapshot.gpg.key | gpg --dearmor > $KEYRINGS/llvm.gpg; \
    # wine
    curl --fail https://dl.winehq.org/wine-builds/winehq.key | gpg --dearmor > $KEYRINGS/winehq.gpg; \
    echo "deb [signed-by=$KEYRINGS/llvm.gpg] http://apt.llvm.org/bullseye/ llvm-toolchain-bullseye-13 main" > /etc/apt/sources.list.d/llvm.list; \
    echo "deb [signed-by=$KEYRINGS/winehq.gpg] https://dl.winehq.org/wine-builds/debian/ bullseye main" > /etc/apt/sources.list.d/winehq.list;

RUN set -eux; \
    dpkg --add-architecture i386; \
    # Skipping all of the "recommended" cruft reduces total image size by ~300MiB
    apt-get update && apt-get install --no-install-recommends -y \
        clang-13 \
        # llvm-ar
        llvm-13 \
        lld-13 \
        # get a recent wine so we can run tests
        winehq-staging \
        # Unpack xwin
        tar; \
    # ensure that clang/clang++ are callable directly
    ln -s clang-13 /usr/bin/clang && ln -s clang /usr/bin/clang++ && ln -s lld-13 /usr/bin/ld.lld; \
    # We also need to setup symlinks ourselves for the MSVC shims because they aren't in the debian packages
    ln -s clang-13 /usr/bin/clang-cl && ln -s llvm-ar-13 /usr/bin/llvm-lib && ln -s lld-link-13 /usr/bin/lld-link; \
    # Verify the symlinks are correct
    clang++ -v; \
    ld.lld -v; \
    # Doesn't have an actual -v/--version flag, but it still exits with 0
    llvm-lib -v; \
    clang-cl -v; \
    lld-link --version; \
    # Use clang instead of gcc when compiling binaries targeting the host (eg proc macros, build files)
    update-alternatives --install /usr/bin/cc cc /usr/bin/clang 100; \
    update-alternatives --install /usr/bin/c++ c++ /usr/bin/clang++ 100; \
    apt-get remove -y --auto-remove; \
    rm -rf /var/lib/apt/lists/*;

# Retrieve the std lib for the target
RUN rustup target add x86_64-pc-windows-msvc

RUN set -eux; \
    xwin_version="0.1.1"; \
    xwin_prefix="xwin-$xwin_version-x86_64-unknown-linux-musl"; \
    # Install xwin to cargo/bin via github release. Note you could also just use `cargo install xwin`.
    curl --fail -L https://github.com/Jake-Shadle/xwin/releases/download/$xwin_version/$xwin_prefix.tar.gz | tar -xzv -C /usr/local/cargo/bin --strip-components=1 $xwin_prefix/xwin; \
    # Splat the CRT and SDK files to /xwin/crt and /xwin/sdk respectively
    xwin --accept-license 1 splat --output /xwin; \
    # Remove unneeded files to reduce image size
    rm -rf .xwin-cache /usr/local/cargo/bin/xwin;

# Note that we're using the full target triple for each variable instead of the
# simple CC/CXX/AR shorthands to avoid issues when compiling any C/C++ code for
# build dependencies that need to compile and execute in the host environment
ENV CC_x86_64_pc_windows_msvc="clang-cl" \
    CXX_x86_64_pc_windows_msvc="clang-cl" \
    AR_x86_64_pc_windows_msvc="llvm-lib" \
    # wine can be quite spammy with log messages and they're generally uninteresting
    WINEDEBUG="-all" \
    # Use wine to run test executables
    CARGO_TARGET_X86_64_PC_WINDOWS_MSVC_RUNNER="wine" \
    # Note that we only disable unused-command-line-argument here since clang-cl
    # doesn't implement all of the options supported by cl, but the ones it doesn't
    # are _generally_ not interesting.
    CL_FLAGS="-Wno-unused-command-line-argument -fuse-ld=lld-link /imsvc/xwin/crt/include /imsvc/xwin/sdk/include/ucrt /imsvc/xwin/sdk/include/um /imsvc/xwin/sdk/include/shared" \
    # Let cargo know what linker to invoke if you haven't already specified it
    # in a .cargo/config.toml file
    CARGO_TARGET_X86_64_PC_WINDOWS_MSVC_LINKER="lld-link" \
    RUSTFLAGS="-Lnative=/xwin/crt/lib/x86_64 -Lnative=/xwin/sdk/lib/um/x86_64 -Lnative=/xwin/sdk/lib/ucrt/x86_64"

# These are separate since docker/podman won't transform environment variables defined in the same ENV block
ENV CFLAGS_x86_64_pc_windows_msvc="$CL_FLAGS" \
    CXXFLAGS_x86_64_pc_windows_msvc="$CL_FLAGS"

# Run wineboot just to setup the default WINEPREFIX so we don't do it every
# container run
RUN wine wineboot --init

Here is a gist with the same dockerfile, and an example of how you can build it. I’m using podman here, but docker should also work.

curl --fail -L -o xwin.dockerfile https://gist.githubusercontent.com/Jake-Shadle/542dfa000a37c4d3c216c976e0fbb973/raw/bf6cff2bd4ad776d3def8520adb5a5c657140a9f/xwin.dockerfile
podman build -t xwin -f xwin.dockerfile .

Common issues

Unfortunately, not everything is sunshine and unicorns where cross compiling is concerned, but the issues that do come up are all solvable, at least in principle.

CMake

It’s not exactly a secret that I am not a fan of CMake. Inexplicably (to me at least), CMake has become the default way to configure and build open source C/C++ code. As I basically only use Rust now, this would normally not bother me; however, many Rust crates still wrap C/C++ libraries, and due to the ubiquitous nature of CMake, a significant minority of those crates just directly use cmake (or worse, direct invocation) to let CMake drive the building of the underlying C/C++ code. This is excusable when it works; however, in my experience, CMake scripts tend to be a house of cards that falls down at the slightest deviation from the "one true path" intended by the author(s) of the CMake scripts, and cross compiling to Windows is a big deviation that not only knocks down the cards but also sets them on fire.

The simplest and most effective solution to CMake issues is to replace it with cc. In some cases like Physx or spirv-tools that can be a fair amount of work, but in many cases it’s not much. The benefits of course extend beyond just making cross compilation easier: it also gets rid of the CMake installation dependency, and it makes it easier for outside contributors to understand how the C/C++ code is built, since they don’t need to crawl through some project’s CMake scripts trying to figure out what the hell is going on; they can just look at the build.rs file instead.
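
To make this concrete, here is a minimal sketch of what driving a C file with the cc crate from build.rs can look like instead of a CMake invocation; the file and library names are invented for the example, and cc = "1" is assumed under [build-dependencies]:

// build.rs — a minimal cc-based build replacing a CMake invocation
fn main() {
    cc::Build::new()
        .file("vendor/foo/foo.c")       // hypothetical C source previously built via CMake
        .include("vendor/foo/include")  // its headers
        .compile("foo");                // produces libfoo.a and instructs cargo to link it
}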

MASM

Unfortunately, in some Rust projects we don’t need to worry about just Rust, C, and C++; there are also a small number of crates here and there which use assembly. While in the future this will be handled natively by rustc, we have to deal with the present, and unfortunately the present contains multiple assemblers with incompatible syntax. In the Microsoft toolchain, ml64.exe assembles MASM, and while there are ongoing efforts to get LLVM to assemble MASM via llvm-ml, the fact that the last update I can find is from October 2020, and that there is no project page for llvm-ml like there are for other LLVM tools, tells me I might be wasting my time trying to get it to work for all the assembly that we need to compile.

Luckily, there is a fairly easy workaround for this gap until llvm-ml becomes more mature. Even though we aren’t targeting x86_64-pc-windows-gnu for the reasons stated above, the few projects that we use that use assembly generally do have both a MASM version as well as a GAS version so that people who want to can target x86_64-pc-windows-gnu. However, since cross compilation to Windows from a non-Windows platform is fairly rare, you’ll often need to provide PRs to projects to fix up assumptions made about the target and host being the same. And unfortunately, this niche case also comes with a bit of maintenance burden that maintainers of a project might be uncomfortable with taking since they can’t easily provide coverage, which is a totally fair reason to not merge such a PR.

Compiler Target Confusion

This one is the rarest of all, at least anecdotally, as I only encountered this kind of issue in Physx. Basically, the issue boils down to a project assuming that Windows == MSVC toolchain and Clang != Windows, which can result in (typically) preprocessor logic errors.

For example, here we have a clang specific warning being disabled for a single function, except it’s fenced by both using clang as the compiler as well as targeting Linux, which means targeting Windows won’t disable the warning, and if warnings are treated as errors, we’ll get a compile failure.

@see PxCreateFoundation()
*/
#if PX_CLANG
-#if PX_LINUX
-#pragma clang diagnostic push
-#pragma clang diagnostic ignored "-Wreturn-type-c-linkage"
-#endif // PX_LINUX
+    #pragma clang diagnostic push
+    #pragma clang diagnostic ignored "-Wreturn-type-c-linkage"
#endif // PX_CLANG
PX_C_EXPORT PX_FOUNDATION_API physx::PxFoundation& PX_CALL_CONV PxGetFoundation();
#if PX_CLANG
-#if PX_LINUX
-#pragma clang diagnostic pop
-#endif // PX_LINUX
+    #pragma clang diagnostic pop
#endif // PX_CLANG

namespace physx

For the most part these kinds of problems won’t occur, since clang-cl very effectively masquerades as cl, including setting predefined macros like _MSC_VER and others, so that a vast majority of C/C++ code that targets Windows "just works".

Conclusion

And there you have it: a practical summary of how to cross compile Rust projects for x86_64-pc-windows-msvc from a Linux host or container. I hope you’ve found at least some of this information useful!

As for next steps, my team is rapidly improving our renderer built on top of Vulkan and rust-gpu, but our non-software rasterization testing is mostly limited to a few basic tests on our Mac VMs since they are the only ones with GPUs. While I am curious about getting rendering tests working for Windows under wine, I am also quite hesitant. While wine and Proton have been making big steps and support a large amount of Windows games, we are using fairly bleeding edge parts of Vulkan like ray tracing, and running rendering tests on Linux means you’re now running on top of the Linux GPU drivers rather than the Windows ones, making test results fairly suspect on whether they are actually detecting issues that might be present in a native Windows environment. It could still be fun though!

While it might seem like I hate Windows due to the content of this post, that’s very much not the case. I am comfortable in Windows having used it for 20+ years or so both personally and professionally, I just prefer Linux these days, especially for automated infrastructure like CD which this post is geared towards…

..however, the same cannot be said for Apple/Macs, as I do hate them with the fiery passion of a thousand suns. Maintaining "automated" Mac machines is one of the most deeply unpleasant experiences of my career, one I wouldn’t wish on my worst enemy, but since Macs are one of our primary targets (thankfully iOS is off the table due to "reasons"), we do need to build and test it along with our other targets. So maybe cross compiling to Macs will be in a future post. 😅

This post contains excerpts from my book Black Hat Rust

Now that we have a mostly secure RAT, it’s time to expand our reach.

Until now, we limited our builds to Linux. While the Linux market is huge server-side, this is another story client-side, with a market share of roughly 2.5% on the desktop.

To increase the number of potential targets, we are going to use cross-compilation: compiling a program on a host Operating System for a different target Operating System. Compiling Windows executables on Linux, for example.

But when we are talking about cross-compilation, we are not only talking about compiling a program from one OS to another. We are also talking about compiling an executable from one architecture to another. From x86_64 to aarch64 (also known as arm64), for example.

In this chapter, we are going to see why and how to cross-compile Rust programs and how to avoid the painful edge-cases of cross-compilation, so stay with me.

Why multi-platform

From computers to smartphones, smart TVs, and IoT devices such as cameras or "smart" fridges… today’s computing landscape is pretty much the perfect illustration of the word "fragmentation".

Thus, if we want our operations to reach more targets, our RAT needs to support many of those platforms.

Platform specific APIs

Unfortunately, OS APIs are not portable: for example, persistence techniques (the act of making the execution of a program persist across restarts) are very different on Windows and on Linux.

The specificities of each OS force us to write platform-dependent code.

Thus we will need to write some parts of our RAT for Windows, rewrite the same parts for Linux, and rewrite them again for macOS…

The goal is to write as much code as possible that is shared by all platforms.

Cross-platform Rust

Thankfully, Rust makes it easy to write code that will be conditionally compiled depending on the platform it’s compiled for.

The cfg attribute

The cfg attribute enables the conditional compilation of code. It supports many options so you can choose on which platform to run which part of your code.

For example: #[cfg(target_os = "linux")], #[cfg(target_arch = "aarch64")], #[cfg(target_pointer_width = "64")];

Here is an example of code that exports the same install function but picks the right one depending on the target platform.

ch_12/rat/agent/src/install/mod.rs

// ...

#[cfg(target_os = "linux")]
mod linux;

#[cfg(target_os = "linux")]
pub use linux::install;

#[cfg(target_os = "macos")]
mod macos;
#[cfg(target_os = "macos")]
pub use macos::install;

#[cfg(target_os = "windows")]
mod windows;
#[cfg(target_os = "windows")]
pub use windows::install;

Then, in the part of the code that is shared across platforms, we can import and use it like any module.

mod install;

// ...

install::install();

The cfg attribute can also be used with any, all, and not:

// The function is only included in the build when compiling for macOS OR Linux
#[cfg(any(target_os = "linux", target_os = "macos"))]
// ...

// This function is only included when compiling for Linux AND the pointer size is 64 bits
#[cfg(all(target_os = "linux", target_pointer_width = "64"))]
// ...


// This function is only included when the target Os IS NOT Windows
#[cfg(not(target_os = "windows"))]
// ...

Platform dependent dependencies

We can also conditionally import dependencies depending on the target.

For example, we are going to import the winreg crate to interact with Windows’ registry, but it does not make sense to import, or even build, this crate for platforms other than Windows.

ch_12/rat/agent/Cargo.toml

[target.'cfg(windows)'.dependencies]
winreg = "0.10"
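
As an illustration of how such a conditionally imported dependency is then used, here is a small sketch gated behind cfg(windows); the subkey and value names are invented for the example, and the calls follow the winreg 0.10 API:

#[cfg(windows)]
fn write_marker() -> std::io::Result<()> {
    use winreg::enums::HKEY_CURRENT_USER;
    use winreg::RegKey;

    // open (or create) a key under the current user's hive and write a value
    let hkcu = RegKey::predef(HKEY_CURRENT_USER);
    let (key, _disposition) = hkcu.create_subkey("Software\\MyAgent")?;
    key.set_value("installed", &1u32)?;
    Ok(())
}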

Supported platforms

The Rust project categorizes the supported platforms into 3 tiers.

  • Tier 1 targets can be thought of as "guaranteed to work".
  • Tier 2 targets can be thought of as "guaranteed to build".
  • Tier 3 targets are those which the Rust codebase has support for but which the Rust project does not build or test automatically, so they may or may not work.

Tier 1 platforms are the following:

  • aarch64-unknown-linux-gnu
  • i686-pc-windows-gnu
  • i686-pc-windows-msvc
  • i686-unknown-linux-gnu
  • x86_64-apple-darwin
  • x86_64-pc-windows-gnu
  • x86_64-pc-windows-msvc
  • x86_64-unknown-linux-gnu

You can find the platforms for the other tiers in the official documentation: https://doc.rust-lang.org/nightly/rustc/platform-support.html.

In practical terms, it means that our RAT is guaranteed to work on Tier 1 platforms without problems (or any problem will be handled by the Rust teams). For Tier 2 platforms, you will need to write more tests to be sure that everything works as intended.

Cross-compilation

Error: Toolchain / Library XX not found. Aborting compilation.

How many times did you get this kind of message when trying to follow the build instructions of a project or cross-compile it?

What if, instead of writing wonky documentation, we could capture the build instructions in an immutable recipe that would guarantee us a successful build 100% of the time?

This is where Docker comes into play:

Immutability: The Dockerfiles are our immutable recipes, and Docker would be our robot, flawlessly executing the recipes every day of the year.

Cross-platform: Docker is itself available on the 3 major OSes (Linux, Windows, and macOS). Thus, we not only enable a team of several developers using different machines to work together, but we also greatly simplify our toolchains.

By using Docker, we are finally reducing our problem to compiling from Linux to other platforms, instead of:

  • From Linux to other platforms
  • From Windows to other platforms
  • From macOS to other platforms

cross

The Tools team develops and maintains a project named cross, which allows you to easily cross-compile Rust projects using Docker, without messing with custom Dockerfiles.

It can be installed like this:

$ cargo install -f cross

cross works by using pre-made Dockerfiles, but they are maintained by the Tools team, not you, and they take care of everything.

The list of supported targets is impressive. As I’m writing this, here is the list of supported platforms: https://github.com/rust-embedded/cross/tree/master/docker

Dockerfile.aarch64-linux-android
Dockerfile.aarch64-unknown-linux-gnu
Dockerfile.aarch64-unknown-linux-musl
Dockerfile.arm-linux-androideabi
Dockerfile.arm-unknown-linux-gnueabi
Dockerfile.arm-unknown-linux-gnueabihf
Dockerfile.arm-unknown-linux-musleabi
Dockerfile.arm-unknown-linux-musleabihf
Dockerfile.armv5te-unknown-linux-gnueabi
Dockerfile.armv5te-unknown-linux-musleabi
Dockerfile.armv7-linux-androideabi
Dockerfile.armv7-unknown-linux-gnueabihf
Dockerfile.armv7-unknown-linux-musleabihf
Dockerfile.asmjs-unknown-emscripten
Dockerfile.i586-unknown-linux-gnu
Dockerfile.i586-unknown-linux-musl
Dockerfile.i686-linux-android
Dockerfile.i686-pc-windows-gnu
Dockerfile.i686-unknown-freebsd
Dockerfile.i686-unknown-linux-gnu
Dockerfile.i686-unknown-linux-musl
Dockerfile.mips-unknown-linux-gnu
Dockerfile.mips-unknown-linux-musl
Dockerfile.mips64-unknown-linux-gnuabi64
Dockerfile.mips64el-unknown-linux-gnuabi64
Dockerfile.mipsel-unknown-linux-gnu
Dockerfile.mipsel-unknown-linux-musl
Dockerfile.powerpc-unknown-linux-gnu
Dockerfile.powerpc64-unknown-linux-gnu
Dockerfile.powerpc64le-unknown-linux-gnu
Dockerfile.riscv64gc-unknown-linux-gnu
Dockerfile.s390x-unknown-linux-gnu
Dockerfile.sparc64-unknown-linux-gnu
Dockerfile.sparcv9-sun-solaris
Dockerfile.thumbv6m-none-eabi
Dockerfile.thumbv7em-none-eabi
Dockerfile.thumbv7em-none-eabihf
Dockerfile.thumbv7m-none-eabi
Dockerfile.wasm32-unknown-emscripten
Dockerfile.x86_64-linux-android
Dockerfile.x86_64-pc-windows-gnu
Dockerfile.x86_64-sun-solaris
Dockerfile.x86_64-unknown-freebsd
Dockerfile.x86_64-unknown-linux-gnu
Dockerfile.x86_64-unknown-linux-musl
Dockerfile.x86_64-unknown-netbsd

Cross-compiling from Linux to Windows

# In the folder of your Rust project
$ cross build --target x86_64-pc-windows-gnu

Cross-compiling to aarch64 (arm64)

# In the folder of your Rust project
$ cross build --target aarch64-unknown-linux-gnu

Cross-compiling to armv7

# In the folder of your Rust project
$ cross build --target armv7-unknown-linux-gnueabihf

Custom Dockerfiles

Sometimes, you may need specific tools in your Docker image, such as a packer (what is a packer? we will see that below) or tools to strip and rewrite the metadata of your final executable.

In this situation, it’s legitimate to create a custom Dockerfile and to configure cross to use it for a specific target.

Create a Cross.toml file in the root of your project (where your Cargo.toml file is), with the following content:

[target.x86_64-pc-windows-gnu]
image = "my_image:tag"
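
A minimal custom image can simply extend the image cross already uses for that target and add whatever extra tools you need. The base image name below is an assumption (check the cross repository linked above for the image matching your cross version), and upx is only an example of an extra tool:

# Hypothetical custom image: the stock cross image for the Windows GNU target, plus upx
FROM rustembedded/cross:x86_64-pc-windows-gnu

RUN apt-get update && apt-get install -y upx-ucl

Build it with docker build . -t my_image:tag so the tag matches the image name referenced in Cross.toml.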

We can also completely forget cross and build our own Dockerfiles. Here is how.

Cross-compiling from Linux to Windows

ch_12/rat/docker/Dockerfile.windows

FROM rust:latest

RUN apt update && apt upgrade -y
RUN apt install -y g++-mingw-w64-x86-64

RUN rustup target add x86_64-pc-windows-gnu
RUN rustup toolchain install stable-x86_64-pc-windows-gnu

WORKDIR /app

CMD ["cargo", "build", "--target", "x86_64-pc-windows-gnu"]

$ docker build . -t black_hat_rust/ch12_windows -f Dockerfile.windows
# in your Rust project
$ docker run --rm -ti -v `pwd`:/app black_hat_rust/ch12_windows
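
Because the project directory is mounted into the container as /app, the build artifacts land in the usual target directory on the host. Assuming the crate is named rat (the folder name used in this chapter), the Windows executable should show up as target/x86_64-pc-windows-gnu/debug/rat.exe:

$ ls target/x86_64-pc-windows-gnu/debug/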

Cross-compiling to aarch64 (arm64)

ch_12/rat/docker/Dockerfile.aarch64

FROM rust:latest

RUN apt update && apt upgrade -y
RUN apt install -y g++-aarch64-linux-gnu libc6-dev-arm64-cross

RUN rustup target add aarch64-unknown-linux-gnu
RUN rustup toolchain install stable-aarch64-unknown-linux-gnu

WORKDIR /app

ENV CARGO_TARGET_AARCH64_UNKNOWN_LINUX_GNU_LINKER=aarch64-linux-gnu-gcc \
    CC_aarch64_unknown_linux_gnu=aarch64-linux-gnu-gcc \
    CXX_aarch64_unknown_linux_gnu=aarch64-linux-gnu-g++

CMD ["cargo", "build", "--target", "aarch64-unknown-linux-gnu"]

$ docker build . -t black_hat_rust/ch12_linux_aarch64 -f Dockerfile.aarch64
# in your Rust project
$ docker run --rm -ti -v `pwd`:/app black_hat_rust/ch12_linux_aarch64


Cross-compiling is a very handy capability to have in multiple scenarios. Let’s take a look at why you might want to do it and how to get set up in Rust for cross-compilation.

What we’ll cover:

  • Understanding cross-compiling and its Rust benefits
  • Setting up an example Rust cross-compilation project
  • How Rust represents platforms
  • Cross-compiling our demo Rust project from Linux to Windows
  • How to write platform-specific code

To follow along, see the GitHub repo for this project.

Understanding cross-compiling and its Rust benefits

Cross-compiling means compiling a program on a platform for a different platform. For example, if you are on a Windows machine, you can compile a program that can run on Linux.

There are a few reasons cross-compiling can be helpful. One is that if you have a product that you want to ship on multiple platforms, it can be convenient to be able to build all versions from a single machine instead of having one Windows machine, one Mac machine, etc.

Cross-compilation can be helpful in cloud-based build scenarios as well. Rust even supports running tests across multiple target platforms on the same host platform.

Another reason you may want to cross-compile is that it might be necessary, as the Rust compiler and host tools are not supported on every platform they can build for. For example, the Rust compiler supports building an app for iOS, but the Rust compiler itself doesn’t run on iOS.

Setting up an example Rust cross-compilation project

There is some built-in support in rustc for cross-compiling, but getting the build to actually work can be tricky because you also need an appropriate linker. Instead, we're going to use the Cross crate, which was originally maintained by the Rust Embedded Working Group's Tools team.

First, let’s set up a simple project that will show which platform it’s running on. To do this, we’re going to use the current_platform crate, which is an easy way to see what platform your code is running on, as well as what platform it was compiled on.

Let’s make a new crate with cargo new and add the crate with cargo add current_platform. Then we can add the following to the src/main.rs file:

use current_platform::{COMPILED_ON, CURRENT_PLATFORM};

fn main() {
    println!("Hello, world from {}! I was compiled on {}.", CURRENT_PLATFORM, COMPILED_ON);
}
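
If you are starting from scratch, the project setup looks roughly like this (the crate name rustcrosscompile is an assumption that matches the build output shown later; cargo add is built into recent versions of Cargo):

$ cargo new rustcrosscompile
$ cd rustcrosscompile
$ cargo add current_platform

With the main.rs above in place, we can try it out.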

On my Linux machine, running this with cargo run leads to this output:

Hello, world from x86_64-unknown-linux-gnu! I was compiled on x86_64-unknown-linux-gnu.

This agrees with what rustc thinks the platform is; running rustc -vV gives this output:

rustc 1.68.2 (9eb3afe9e 2023-03-27)
binary: rustc
commit-hash: 9eb3afe9ebe9c7d2b84b71002d44f4a0edac95e0
commit-date: 2023-03-27
host: x86_64-unknown-linux-gnu
release: 1.68.2
LLVM version: 15.0.6

How Rust represents platforms

To cross-compile, you need to know the “target triple” for the platform you’re building for. Rust uses the same format that LLVM does. The format is <arch><sub>-<vendor>-<sys>-<env>, although figuring out these values for a given platform is not obvious.

As we saw above, x86_64-unknown-linux-gnu represents a 64-bit Linux machine. Running rustc --print target-list will print all targets that Rust supports, but the list is long, and it’s hard to find the one you want.
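
One way to make the long list more manageable is to filter it. For example, to see only the Windows-related targets (the exact entries depend on your Rust version):

$ rustc --print target-list | grep windows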

The two best ways to find the target triple for a platform you care about are:

  1. Run rustc -vV on the platform and look for the line that starts with host: — the rest of that line will be the target triple
  2. Look it up in the list provided on the Rust Platform Support page

For quick reference, here are a few common values:

  • 64-bit Linux (GNU libc): x86_64-unknown-linux-gnu
  • 64-bit Windows (MSVC toolchain): x86_64-pc-windows-msvc
  • 64-bit Windows (GNU/MinGW toolchain): x86_64-pc-windows-gnu
  • macOS on Intel: x86_64-apple-darwin
  • macOS on Apple silicon: aarch64-apple-darwin
  • 64-bit ARM Linux: aarch64-unknown-linux-gnu

Cross-compiling our Rust project from Linux to Windows

Now that we know that the target triple for 64-bit Windows is x86_64-pc-windows-gnu (the GNU/MinGW toolchain, which is what cross uses when building from Linux), let's get to cross-compiling!

To install the cross crate, the first step is to run cargo install cross. This will install Cross to $HOME/.cargo/bin. You can add this to your $PATH if you’d like, or just run it from there when we’re ready to do so.
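
In shell terms, that is roughly the following (the export line is only needed if $HOME/.cargo/bin is not already on your PATH; the Rust installer normally sets this up for you):

$ cargo install cross
$ export PATH="$HOME/.cargo/bin:$PATH"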

Cross works by using a container engine with images that have the appropriate toolchain for cross-compiling. All of this is transparent to the user, as we’ll see below, but you do need a container engine installed.

If your machine is running Windows, the official Getting Started guide from Cross recommends using Docker as your container engine. However, for Linux, it recommends using Podman, a popular Docker alternative. On my Ubuntu system, installing this was as easy as sudo apt-get install podman.

That’s all the setup we need! Now we can cross-compile to Windows and run the executable with the following command:

cross run --target x86_64-pc-windows-gnu

Remember, the Cross executable is in $HOME/.cargo/bin.

Running this the first time will take a while as the appropriate container is downloaded and started. Once it’s done, we should see the following output:

   Compiling current_platform v0.2.0
   Compiling rustcrosscompile v0.1.0 (/project)
    Finished dev [unoptimized + debuginfo] target(s) in 7.95s
     Running `wine /target/x86_64-pc-windows-gnu/debug/rustcrosscompile.exe`
0054:err:winediag:nodrv_CreateWindow Application tried to create a window, but no driver could be loaded.
0054:err:winediag:nodrv_CreateWindow Make sure that your X server is running and that $DISPLAY is set correctly.
0054:err:systray:initialize_systray Could not create tray window
Hello, world from x86_64-pc-windows-gnu! I was compiled on x86_64-unknown-linux-gnu.

As expected, we see that rustcrosscompile.exe is running on Windows! Actually, through Wine — a compatibility layer — but close enough!

As you can see from the output above, the compiled .exe is located in target/x86_64-pc-windows-gnu/debug. You can copy it to a Windows machine and run it, which will show the expected output:

Hello, world from x86_64-pc-windows-gnu! I was compiled on x86_64-unknown-linux-gnu.

Cross even supports running tests on other platforms! Let’s add a test to our main.rs file:

#[cfg(test)] // only compile this module for `cargo test` / `cross test`
mod tests {
    use current_platform::{COMPILED_ON, CURRENT_PLATFORM};

    #[test]
    fn test_compiled_on_equals_current_platform() {
        assert_eq!(COMPILED_ON, CURRENT_PLATFORM);
    }
}

Note that this is a test that we would expect to pass when running on Linux, but fail when we cross-compile to Windows and run it there.

Indeed, if we run cargo test on Linux, we get this output:

     Running unittests src/main.rs (target/debug/deps/rustcrosscompile-1e5afb54a2ed5306)

running 1 test
test tests::test_compiled_on_equals_current_platform ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

To run the test on Windows, the syntax is very similar to running the executable:

cross test --target x86_64-pc-windows-gnu

After a minute or so, we get the output:

     Running unittests src/main.rs (/target/x86_64-pc-windows-gnu/debug/deps/rustcrosscompile-99628163463e0d18.exe)
0050:err:winediag:nodrv_CreateWindow Application tried to create a window, but no driver could be loaded.
0050:err:winediag:nodrv_CreateWindow Make sure that your X server is running and that $DISPLAY is set correctly.
0050:err:systray:initialize_systray Could not create tray window

running 1 test
test tests::test_compiled_on_equals_current_platform ... FAILED

failures:

---- tests::test_compiled_on_equals_current_platform stdout ----
thread 'tests::test_compiled_on_equals_current_platform' panicked at 'assertion failed: `(left == right)`
  left: `"x86_64-unknown-linux-gnu"`,
 right: `"x86_64-pc-windows-gnu"`', src/main.rs:22:9
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace


failures:
    tests::test_compiled_on_equals_current_platform

test result: FAILED. 0 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 0.00s

error: test failed, to rerun pass `--bin rustcrosscompile`

As expected, the test fails!

Note that running tests isn’t supported on all platforms. Additionally, because of threading issues, tests run sequentially, which can be much slower than running tests natively. See the Cross documentation on supported targets for details.

How to write platform-specific code

Often, you may want to write code that only runs on one platform. Rust makes this easy with the cfg attribute. Now that we can cross-compile and run, we can easily try it out.

Let’s modify our program to add a message that only gets printed on Windows. In fact, for hypothetical efficiency reasons (😉), we won’t even compile this code on non-Windows platforms:

use current_platform::{COMPILED_ON, CURRENT_PLATFORM};

#[cfg(target_os="windows")]
fn windows_only() {
    println!("This will only get printed on Windows.");
}

fn main() {
    println!("Hello, world from {}! I was compiled on {}.", CURRENT_PLATFORM, COMPILED_ON);
    #[cfg(target_os="windows")]
    {
        windows_only();
    }
}

Here, we applied the cfg attribute to the windows_only() function so it won’t get compiled on non-Windows platforms. But that means we can only call it on Windows, so we apply the same cfg attribute to the block of code that calls the function.

You can actually apply the attribute in other places as well, like enum variants, struct fields, and match expression arms; a small sketch of that follows the output below.

Running this on Linux with cargo run gives this output:

Hello, world from x86_64-unknown-linux-gnu! I was compiled on x86_64-unknown-linux-gnu.

As you can see, the output above does not have the Windows-specific message. But running with cross run --target x86_64-pc-windows-gnu gives this output:

Hello, world from x86_64-pc-windows-gnu! I was compiled on x86_64-unknown-linux-gnu.
This will only get printed on Windows.
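
As mentioned above, cfg can also be applied to enum variants, struct fields, and match expression arms. Here is a minimal sketch with hypothetical types (not part of the demo project) that gates an enum variant and the match arm that handles it:

// Hypothetical example: the UnixSocket variant only exists on Unix-like targets.
#[allow(dead_code)] // the gated variant is never constructed in this tiny demo
enum IpcChannel {
    Tcp,
    #[cfg(unix)]
    UnixSocket,
}

fn describe(channel: &IpcChannel) -> &'static str {
    match channel {
        IpcChannel::Tcp => "plain TCP",
        // This arm is only compiled on Unix, matching the gated variant above.
        #[cfg(unix)]
        IpcChannel::UnixSocket => "a Unix domain socket",
    }
}

fn main() {
    println!("Using {}", describe(&IpcChannel::Tcp));
}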

Rust also provides an easy way to conditionally apply attributes based on the platform. You can look up the Rust reference guide to the cfg_attr attribute for more information on that.
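
As a quick sketch of what that looks like (a contrived example, not from the demo project), here we derive Debug only when compiling for Windows:

// cfg_attr applies the inner attribute only when the predicate matches.
#[cfg_attr(target_os = "windows", derive(Debug))]
struct Payload {
    data: Vec<u8>,
}

fn main() {
    let payload = Payload { data: vec![1, 2, 3] };
    println!("payload has {} bytes", payload.data.len());

    // Debug (and therefore {:?}) is only derived, and only usable, in Windows builds.
    #[cfg(target_os = "windows")]
    println!("{:?}", payload);
}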


Conclusion

Cross makes it quite easy to cross-compile, run, and test your Rust library or application. This crate is helpful — and sometimes necessary — if you have a product that you want to ship on multiple platforms.

There are some limitations, notably performance, since building and running happen inside a container rather than natively. So if this is something you're planning on doing with a larger project, definitely try it out first in your build environment to make sure the performance will work for you!
