Cross compiling static builds

Posted on 01.08.2016

In my previous blog post I laid out a technique for building nmap statically using musl as libc. In this post I am going to take this further and cross compile nmap for ARM. In a later post, I will cover other tools such as SSH.

Firstly I am going to clear up some points based on the feedback to my first post, just for clarity. I think this is important to do, but you can skip it if you already get this stuff.

On the use of docker

  1. The real reason for using docker was the potential for automation via Dockerfiles in the future. I am not yet expert enough to do that, but perhaps soon :)
  2. Another side benefit of docker is protection when you do something dangerous, such as an accidental make install. If you break your docker container, no worries. If you hose your host linux box, though... you take backups, right?
  3. Docker containers are also much more lightweight, containing far fewer existing libs than my linux distribution does after 2 years of development work. It is easier to see what is missing this way and to create reproducible instructions.

On static linking

Under Linux, the syscall interface is public, which means that all applications ultimately talk directly to the kernel. In practice, however, programmers use libraries such as libc to make applications more portable and to avoid spending so long writing them. Modern systems ship these libraries as shared objects (DLLs in Windows speak; conceptually the same idea, although they are technically slightly different). This reduces disk space and makes memory management more efficient, since only one copy of a library needs to be loaded in memory and can then be mapped into each process' address space.

Applications must find these shared objects when they load. Briefly, the dynamic loader will look in $LD_LIBRARY_PATH, the RPATH of the executable (download a third-party binary and try chrpath with no options on it; you will likely see something like $ORIGIN), any directories listed in /etc/, and finally /lib and /usr/lib (or /lib64 and /usr/lib64 on 64-bit RedHat systems).

Normally when you install applications from your package manager, the package manager fulfils any dependencies for you, so all the libraries you need will exist. You can see which dependencies a given executable needs with ldd or objdump -x, for example:

ldd /usr/bin/file
    linux-vdso.so.1 (0x00007ffdf15ad000)
    libmagic.so.1 => /lib64/libmagic.so.1 (0x00007f13cb748000)
    libz.so.1 => /lib64/libz.so.1 (0x00007f13cb531000)
    libc.so.6 => /lib64/libc.so.6 (0x00007f13cb16f000)
    /lib64/ld-linux-x86-64.so.2 (0x0000563b3dfd2000)

(for those of you using multilib glibc: the dynamic linker's so number is 1 for the old libc5 and 2 for glibc/libc6).

From a security-testing perspective, when we get access to a box we cannot be sure which libraries might or might not exist, and we would prefer not to install any if at all possible, as the package manager will definitely leave logs everywhere.

With a static binary, all of those libraries are linked directly into the file itself and the only talking the binary needs to do is to the kernel via the syscall interface (unless it calls dlopen itself, but we can ignore that for now). We thus have more chance of the binary running successfully, since we don't need dependencies that might not be installed.

On cross compiling itself

The normal process for compiling a C program is straightforward. We take the C code we want to compile and go through the compilation process to produce machine code for the platform we want. Normally, this is the same platform as the one we are running on.

You can, however, ask a compiler to build code for a different platform. To do this you often have to build at least the back-end parts of the compiler to target that platform. In this case the machine code of the compiler you run matches your platform, while the machine code of its output matches the target platform.

Why would you do this? One obvious case is to bootstrap a C compiler on a platform that does not yet have one available. This is the most common use case, but one can also take advantage of a more powerful machine to cross compile for less capable targets.

Confusion arises when we start to talk about x86 itself, since there are various targetable processors: those supporting i686 instructions, for example, or the AMD64/Intel 64 extensions for 64-bit. GCC treats these variants as separate processor types, which in a sense they are. Certainly, 64-bit and 32-bit code are vastly different, so they must be treated separately. It is thus possible to cross compile from a 64-bit to a 32-bit system, and likewise to target a non-Intel processor such as those of the ARM family.

While i686 code may be runnable on x64, there is no guarantee of this as it requires special effort on the part of the distribution, so we treat these as separate targets.

Finally, libc. By default, the compiler makes a libc implementation available to your code unless you pass -nostdlib and -ffreestanding. A given compiler build simply has a particular libc available, and it is this that forms part of the target triple for gcc.

It is possible to cross compile using a single gcc front-end and multiple backends supporting multiple combinations of architectures and libraries. Indeed this is what GCC is designed to do. However, to keep things simple and clear I have avoided this approach.

Let's get on with it then!

We are going to follow my previous article on building binaries, with some changes.

Firstly, we will build our cross compiler for ARM: cd /work/musl-cross and then edit the build configuration to be:


I have also used


and you can follow a similar process to produce 32-bit x86 (i686) binaries if you like. I shall continue using ARM as the example here.
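For reference, if you are using the musl-cross build scripts, the configuration being edited is a small shell fragment; the ARM settings might look something like this (the variable names here are an assumption and depend on the musl-cross version you checked out):

```shell
# Hypothetical musl-cross configuration fragment: target ARMv7 with the EABI,
# installing the toolchain under /opt/cross as used throughout this post.
ARCH=armv7l
TRIPLE=armv7l-linux-musleabi
CC_BASE_PREFIX=/opt/cross
```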

Now, as before, run ./ in this directory, sit back and wait.

Several cups of tea later, you should have a shiny new cross compiler in /opt/cross/armv7l-linux-musleabi/. We will use this as our install prefix in place of our x64 case.

We now head to the OpenSSL build tree. Edit Configure again to add the line:

"linux-armv4-musl",     "gcc: -O3 -Wall::-D_REENTRANT::-static:

below the line for linux-armv4. This will produce static output, which we will use in the configure step below.

If you haven't already done so, fix the TERMIOS issue:

sed -i 's/-DTERMIO/-DTERMIOS/g' Configure

Now we can configure OpenSSL:

./Configure no-shared enable-ssl3 enable-ssl3-method \
    enable-weak-ssl-ciphers enable-egd enable-heartbeats \
    enable-md2 enable-rc5 --prefix=/opt/cross/armv7l-linux-musleabi/ \
    linux-armv4-musl -march=armv7-a

Note we have had to set -march=armv7-a in this case. The Configure script explains this as allowing us a choice of which ARM micro-architecture to support. We pick ARM Cortex-A, as this is the common architecture in use in phones, Raspberry Pis, etc.

Next, we set the compilers and build:

export CC=/opt/cross/armv7l-linux-musleabi/bin/armv7l-musl-linuxeabi-gcc
export CXX=/opt/cross/armv7l-linux-musleabi/bin/armv7l-musl-linuxeabi-g++
export LD=$CC
make depend
make
make install

We now head to the libpcap directory and build similarly to x64:

./configure --disable-shared --prefix=/opt/cross/armv7l-linux-musleabi/ \
    --host=armv7-linux-gnueabi --with-pcap=linux
make
make install

Liblinear is built in the same way as for the x64 case, see my previous notes.

Finally, enter the source tree for nmap. We configure and build as before:

./configure --prefix=/opt/cross/armv7l-linux-musleabi/ --without-zenmap \
    --without-ndiff --without-nping --with-liblua=included --with-pcap=linux
make static

You can then use the resultant nmap binary produced in the source directory. Note that make install produces a dynamically linked binary for the nmap tree, so use the output of make static instead.
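Before copying the binary anywhere, it is worth sanity-checking it with file. A sketch; the exact output varies by toolchain, and ./nmap here stands in for your freshly built binary:

```shell
# A host binary is typically reported as dynamically linked...
file /bin/ls
# ...whereas the cross-built nmap should show something like
# "ELF 32-bit LSB executable, ARM, EABI5 ... statically linked".
file ./nmap 2>/dev/null || true
```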

Stripping binaries

Note that to strip these binaries you need to use an architecture-aware tool, i.e. the strip from the cross-compiler tree. I copied my binary to /output/armv7l/nmap, so to strip my nmap I used:

/opt/cross/armv7l-linux-musleabi/bin/armv7l-musl-linuxeabi-strip /output/armv7l/nmap

The final result

What you finally end up with is:

NMap running in console session on android phone

That's right, static nmap running on my android phone. Enjoy!