wiki:linux/kernel

Version 13 (modified by Tim Harvey, 6 years ago)

added external module documentation and fixed some inaccuracies with modules_install and headers_install (env vs params)

Linux Kernel Development

This page is for new or inexperienced developers wanting to work directly on the Linux kernel.

Kernel sources

When choosing a kernel source you need to determine what support you are interested in.

Mainline upstream Linux kernel

Gateworks usually recommends using the mainline or upstream Linux kernel because it has the most up-to-date features and bug-fixes that have been accepted and peer reviewed from the widest array of kernel developers around the world.

The 'mainline' Linux kernel is the one officially released by Linus Torvalds and has an approximate release cycle of every 6 months.

Stable Kernel

The Stable Kernel adds backported bugfixes on top of previous mainline Linux kernel releases. A variant of this is the Long Term Support (LTS) kernel, whose versions are maintained by the community for several years.
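For example, the stable and LTS branches are published in the linux-stable tree on kernel.org (the branch name below is just an illustration; pick the series you need):

```shell
# clone the stable tree and check out an LTS branch (large download)
git clone git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
cd linux-stable
git checkout linux-4.14.y   # LTS branch that receives backported fixes
```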

Gateworks Newport kernel (OcteonTX)

Gateworks maintains a kernel tailored to the OcteonTX (Newport product family), based on a mainline Linux kernel with additional patches that may not yet be available in mainline.

Gateworks Ventana downstream vendor kernel (IMX6)

Gateworks maintains a downstream IMX6 Linux kernel that contains patches from Freescale and Gateworks to add support for Ventana including some items that have not made it into mainline Linux yet.

Our pre-built Gateworks downstream vendor kernel is a build artifact of our Yocto BSP and contains the following:

  • drivers we feel are important for our user base
  • wireless drivers from the Linux backports project
  • firmware for various devices that require run-time firmware loading
  • kernel headers

Our kernel source has a default config file (arch/arm/configs/gwventana_defconfig) which we use, but keep in mind that in general wireless drivers and subsystems are not enabled there because those modules come from linux-backports instead.

Toolchains

A toolchain is a set of compiler tools (typically gcc) and libraries necessary for compiling and linking source (and possibly debugging, disassembling, etc).

Typically this includes the following (for the GNU C Compiler):

  • ar - archiver (creates and extracts static library archives)
  • as - assembler
  • c++/g++ - C++ compiler
  • gcc - C compiler
  • ld - linker
  • nm - list symbols from object files
  • strip - strips debug info from object files
  • objcopy - copy and translate object files
  • objdump - display info from object files
  • ranlib - generate index to archive
  • lib/lib* - stdc libraries
  • include/ - stdc library headers

The kernel (written in C, with some architecture-specific assembly) is compiled with gcc and does not link against the stdc library.

You likely have one of the following toolchains to work with:

  • toolchain from an OpenWrt build directory
  • toolchain from a prebuilt OpenWrt BSP SDK
  • toolchain from a Yocto build directory
  • toolchain from a prebuilt Yocto BSP SDK
  • pre-built toolchain from somewhere else, like Android

Userspace and root filesystem

Userspace refers to anything in a Linux based OS which is 'not' the kernel. After initialization, the Linux kernel mounts the rootfs and executes PID1 (/sbin/init), which is the start of userspace. The init process and any processes it launches are part of userspace. When applications act on device files such as /dev/sda, they use system calls such as open/read/write/ioctl to interact with kernel functions from userspace. A useful way to see what each directory in the filesystem is used for is to run 'man hier' on your Ubuntu system.

A root filesystem (sometimes referred to as 'a userspace') contains everything you need for a Linux based OS other than the kernel itself. The root filesystem could contain the kernel, but only if the bootloader mounts it and launches it from the rootfs (ie the Gateworks Ventana product family). Note that kernel modules are contained on the root filesystem but they are not considered userspace (yet the insmod/rmmod/modprobe apps that load/remove them are).

You likely have one of the following userspaces you want to work with:

  • OpenWrt root filesystem
  • Yocto root filesystem
  • Android root filesystem
  • Ubuntu root filesystem

In general, you should be able to mix and match kernels and userspace within reason, but note that certain Linux based OS's may require specific kernel functionality (ie Android requires several Android specific kernel features to exist).

When working on mainline Linux patches, Gateworks developers often use either an OpenWrt userspace root filesystem or a Yocto userspace root filesystem depending on their needs.

Building the Linux kernel (cross-compiling)

Building the Linux kernel for a target board that differs from your development host (cross-compiling), outside of the various Board Support Package development environments, is easily accomplished.

Gateworks developers typically do kernel development out-of-tree and boot the kernel over the network to speed up development.

If working with kernel modules, you can remove the module (rmmod), download a new build, install it again (insmod), and test, avoiding constantly rebooting the board. Often it's just as quick to build a driver statically into the kernel, configure your board for network booting, and just rebuild/reboot continually.
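As a sketch, the module edit/reload cycle might look like this (the IP address, driver path, and module name are hypothetical examples, and the target must be reachable over the network):

```shell
# rebuild modules incrementally after editing a driver
make modules
# push the single rebuilt module to the running target
scp drivers/net/usb/asix.ko root@192.168.1.1:/tmp/
# swap it in without rebooting the board
ssh root@192.168.1.1 'rmmod asix; insmod /tmp/asix.ko'
```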

Prerequisites:

  • Linux development host: Gateworks uses and supports Ubuntu which is used here as an example, but other Linux variants can work as well
  • Toolchain for the CPU architecture of the boards you are working with:
    • Newport uses the Cavium CN80xx OcteonTX SoC which has ARMv8 CPU cores. You can download the toolchain provided by Cavium for their SDK here or use one that you have pre-built from OpenWrt
    • Ventana uses the NXP/Freescale IMX6 SoC which has ARM Cortex-A9 cores with NEON SIMD support. You can download our pre-built OpenWrt toolchain here
    • Laguna uses the Cavium cns3xxx SoC which has ARM11 MP-Core cores

Steps to build the Gateworks kernel using the Gateworks OpenWrt toolchain:

  1. Install pre-requisites:
    apt-get install build-essential ncurses-dev bc u-boot-tools liblzo2-dev lzop git bison
    
    • typically 'build-essential' provides enough for compiling, however we need a few other things for kernel development:
      • ncurses-dev is needed for menuconfig
      • u-boot-tools, bc, and lzop are needed for uImage
      • git is needed for checking out the source
  2. Obtain and install compiler toolchain:
    • For Newport, you can use the pre-built toolchain provided by Cavium for their SDK:
      wget http://dev.gateworks.com/newport/tools-gcc-6.2.tar.bz2
      tar -xvf tools-gcc-6.2.tar.bz2
      
    • For Ventana, you can use the pre-built Gateworks OpenWrt 14.08 BSP tailored to the ARM Cortex-A9 CPU used in the IMX6 SoC:
      wget http://dev.gateworks.com/openwrt/14.08/imx6/OpenWrt-Toolchain-imx6-for-arm_cortex-a9+neon-gcc-4.8-linaro_uClibc-0.9.33.2_eabi.tar.bz2
      tar -xvf OpenWrt-Toolchain-imx6-for-arm_cortex-a9+neon-gcc-4.8-linaro_uClibc-0.9.33.2_eabi.tar.bz2
      
  3. Obtain Linux kernel source (see above to help you decide which kernel version you should use):
    • for mainline Linux v4.14 for example:
      git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
      cd linux
      git checkout v4.14
      
    • for the Gateworks Newport kernel based on 4.14:
      git clone https://github.com/Gateworks/linux-newport.git
      cd linux-newport
      git checkout v4.14.4-newport
      
    • for the Gateworks Linux 3.14.x based downstream vendor kernel (with full Ventana support including video capture which hasn't made it fully into mainline yet):
      git clone https://github.com/Gateworks/linux-imx6.git
      cd linux-imx6
      git checkout gateworks_fslc_3.14_1.0.x_ga
      
  4. Set up the shell environment for building, which depends on the toolchain you are using. This includes the following env variables used by the kernel build system:
    • ARCH - the architecture to build (arm for 32bit ARM such as Ventana IMX6, arm64 for 64bit ARMv8 such as Newport CN80xx)
    • CROSS_COMPILE - the prefix of your toolchain's executables (everything before the 'gcc' in the compiler's filename)
    • PATH - make sure the directory containing the toolchain's compiler (${CROSS_COMPILE}gcc) is in your PATH. Typically you will prepend this to the default PATH
    • INSTALL_MOD_PATH - where to install kernel modules to if using the 'modules_install' build target (in our examples we install to ./install)
    • INSTALL_HDR_PATH - where to install header files if using the 'headers_install' build target (in our examples we install to ./install; note that this needs to be passed to make as a parameter rather than set as an env var)
    • LOADADDR - where to relocate/uncompress the kernel to when using the uImage target
    • Examples:
      • using the Newport toolchain to build the Linux stable 4.14.59 kernel:
        export STAGING_DIR=/home/user/builds/newportToolchain
        TOOLCHAIN=thunderx-tools-97
        PATH=$PATH:$STAGING_DIR/$TOOLCHAIN/bin
        export ARCH=arm64
        export CROSS_COMPILE=aarch64-thunderx-linux-gnu-
        
      • using the pre-built Gateworks OpenWrt 14.08 BSP toolchain:
        export STAGING_DIR=../OpenWrt-Toolchain-imx6-for-arm_cortex-a9+neon-gcc-4.8-linaro_uClibc-0.9.33.2_eabi
        TOOLCHAIN=toolchain-arm_cortex-a9+neon_gcc-4.8-linaro_uClibc-0.9.33.2_eabi
        PATH=$PATH:$STAGING_DIR/$TOOLCHAIN/bin
        export ARCH=arm
        export CROSS_COMPILE=arm-openwrt-linux-
        export LOADADDR=0x10008000
        
        • Note that STAGING_DIR is something required by the OpenWrt toolchain... your toolchain may differ
    • Some troubleshooting steps:
      echo $ARCH # make sure this is not empty and is set to your target CPU architecture
      
      which ${CROSS_COMPILE}gcc # is this your compiler?
      
      ${CROSS_COMPILE}gcc -v # is the 'Target:' line your desired architecture?
      
      echo $PATH # have you added the toolchain directory to your PATH? When in doubt, start a new shell
      
  5. Configure the kernel. Often you will want to start with an existing defconfig file and then run 'make menuconfig' to change any options you wish to modify.
    • Examples:
      • for the mainline Ventana/IMX6:
        make imx_v6_v7_defconfig # Ventana/IMX6
        make menuconfig
        
      • for the mainline Newport/CN80XX:
        make defconfig # ARM64 defconfig 
        make menuconfig
        
      • for the Gateworks Newport kernel:
        make newport_defconfig # Gateworks default config
        make menuconfig
        
      • for the Gateworks Linux 3.14.x based downstream vendor kernel:
        make gwventana_defconfig # Gateworks default config
        make menuconfig
        
      • The menuconfig make target launches the ncurses based (serial console) Linux kernel menu configuration tool so that you can add or adjust the support to your needs. A common change to the default kernel configurations above would be to add support for a USB based device, or wireless (802.11) support for example.
  6. Build kernel targets. Which targets you build depends on the product family and the bootloader or boot method you are using. Some common examples are:
    • Image - uncompressed kernel (use this for Newport/CN80XX arm64 targets)
    • uImage - compressed kernel wrapped in a U-Boot image header (use this for Ventana/IMX6) which requires the $LOADADDR env variable to tell U-Boot where to uncompress the kernel to
    • modules - for building loadable kernel modules (*.ko)
    • modules_install - for installing the loadable kernel modules (set the INSTALL_MOD_PATH parameter to the location you wish to install to)
    • headers_install - for installing kernel header files for development of userspace applications that use the kernel APIs (set the INSTALL_HDR_PATH parameter to the location you wish to install to)
    • dtbs - for building device-tree blobs (*.dtb) from the kernel source tree (Ventana/IMX6; for Newport we use device-trees maintained outside the kernel tree)
    • Examples:
      • Newport/CN80XX:
        mkdir install
        make INSTALL_MOD_PATH=install INSTALL_HDR_PATH=install/usr Image modules modules_install headers_install
        
      • Ventana/IMX6:
        mkdir install
        make INSTALL_MOD_PATH=install INSTALL_HDR_PATH=install/usr uImage dtbs modules modules_install headers_install
        
  7. Copy artifacts (depends on target):
    • kernel images (Image/uImage) will be in arch/$ARCH/boot
    • dtbs will be in arch/$ARCH/boot/dts
    • modules will be in the directory specified by the INSTALL_MOD_PATH parameter. You can create a tarball via:
      tar -C <INSTALL_MOD_PATH dir> --owner=0 --group=0 -cvJf modules.tar.xz .
      

Installing Kernel Headers

The 'make headers_install' command exports the kernel's header files in a form suitable for use by userspace programs. These headers describe the API for user space programs attempting to use kernel services and are used by the system's C library (ie glibc or uClibc) to define available system calls as well as constants and structures to be used with these system calls. The C library's header files are usually installed in /usr/include and the kernel's headers are usually in /usr/include/linux and /usr/include/asm. Kernel headers are backwards compatible but not forwards compatible: a program built against a C library using older kernel headers should run on a newer kernel (although possibly missing access to new features), but a program built against newer kernel headers may not work on an older kernel.

The 'make headers_install' command uses two optional arguments to define the architecture (ARCH) and the location to install the files (INSTALL_HDR_PATH).
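For example, using the ARCH and install path conventions from the build steps above (run from the top of a configured kernel tree):

```shell
# export sanitized kernel headers for userspace development
make ARCH=arm INSTALL_HDR_PATH=./install/usr headers_install
# the exported headers land under ./install/usr/include (linux/, asm/, ...)
```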

Building external (out of tree) kernel modules

The Linux 'kbuild' system is the build system used by the Linux kernel for configuration and compilation. Linux kernel modules must use kbuild to stay compatible with changes in the build infrastructure. A kernel module can be built either 'in tree' (within the directory structure of the Linux kernel) or 'out of tree' (aka 'external'), meaning it exists in a directory of its own outside of the Linux source tree.

An external module must always include a wrapper makefile that supports building the module using 'make' with no arguments. See modules.txt for details.

To build external modules you must have a prebuilt kernel available that contains the configuration and header files used in the build. Also the kernel must have been built with modules enabled.

To build an external kernel module you typically would use:

cd <external-module-directory>
make -C <path-to-prebuilt-kernel> M=$PWD
  • this will build the modules located in the directory pointed to by the M param (the current directory here)
  • you can then manually copy or load your kernel modules or use the modules_install make target to install them to a specific path
  • if cross-compiling, make sure to define the ARCH and CROSS_COMPILE env variables and have ${CROSS_COMPILE}gcc in your PATH

As an example consider the following files in a directory called 'hello-driver':

  • hello.c:
    #include <linux/init.h>
    #include <linux/module.h>
    MODULE_LICENSE("Dual BSD/GPL");
    
    static int hello_init(void)
    {
        printk(KERN_ALERT "Hello, world\n");
        return 0;
    }
    
    static void hello_exit(void)
    {
        printk(KERN_ALERT "Goodbye, cruel world\n");
    }
    
    module_init(hello_init);
    module_exit(hello_exit);
    
  • Makefile:
    obj-m := hello.o
    
  • building:
    make -C <path-to-prebuilt-kernel> M=$PWD
    ls hello.ko
    
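The one-line Makefile above is enough when you invoke make with -C and M= yourself. To also support building with a bare 'make' (as modules.txt describes), a common pattern is a wrapper Makefile that detects whether kbuild invoked it; the KDIR default below is an assumption for native builds, and a cross build would override it with the path to your prebuilt kernel:

```make
# Wrapper Makefile: works both from the kernel build system and from a bare 'make'
ifneq ($(KERNELRELEASE),)
# invoked by kbuild: just declare the module objects
obj-m := hello.o
else
# invoked directly from the command line: re-enter the kernel build system
KDIR ?= /lib/modules/$(shell uname -r)/build
default:
	$(MAKE) -C $(KDIR) M=$(CURDIR) modules
clean:
	$(MAKE) -C $(KDIR) M=$(CURDIR) clean
endif
```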

Building the Latest Kernel Modules for an older kernel - Linux Backports

Sometimes you may be using a kernel that is not as new as the mainline kernel at kernel.org, and you desire a driver (or a newer driver) from the latest kernel. The Linux Backports project aims to provide this capability.

In order to build this, you will need a kernel tree already built. In this example, I will assume the gateworks_fslc_3.14_1.0.x_ga kernel is being used.

  1. Change into the kernel tree directory, making sure it is built using the example shown in the building section above.
    cd gateworks_fslc_3.14_1.0.x_ga
    make gwventana_defconfig
    etc...
    
  2. Grab the backports version you want to use and cd into it. This example will use the one currently used for our Yocto BSPs.
    wget https://www.kernel.org/pub/linux/kernel/projects/backports/2016/01/22/backports-20160122.tar.gz
    tar xzf backports-20160122.tar.gz
    cd backports-20160122
    
  3. [OPTIONAL] You may have to patch the backports project, depending on the version being used
    wget https://raw.githubusercontent.com/Gateworks/meta-gateworks/fido/recipes-kernel/compat-wireless/compat-wireless-all/0001-disable_kconf.patch
    wget https://raw.githubusercontent.com/Gateworks/meta-gateworks/fido/recipes-kernel/compat-wireless/compat-wireless-all/0002-add-KLIB_CONFIG.patch
    wget https://raw.githubusercontent.com/Gateworks/meta-gateworks/fido/recipes-kernel/compat-wireless/compat-wireless-all/add_db_txt.patch
    
    patch -p1 < 0001-disable_kconf.patch   # Used if no kconfig available (generally never needed)
    patch -p1 < 0002-add-KLIB_CONFIG.patch # Used if config is not located in kernel directory
    patch -p1 < add_db_txt.patch           # Used if needing static wireless regulatory database
    
  4. Configure shell for cross compiler (please see the building section for more details)
    export STAGING_DIR=../OpenWrt-Toolchain-imx6-for-arm_cortex-a9+neon-gcc-4.8-linaro_uClibc-0.9.33.2_eabi
    TOOLCHAIN=toolchain-arm_cortex-a9+neon_gcc-4.8-linaro_uClibc-0.9.33.2_eabi
    PATH=$PATH:$STAGING_DIR/$TOOLCHAIN/bin
    export INSTALL_MOD_PATH=install
    export ARCH=arm
    export CROSS_COMPILE=arm-openwrt-linux-
    export KLIB=..
    export KLIB_BUILD=..
    export KLIB_CONFIG=..
    

Notice that the KLIB variables are new; they refer to the location of the kernel source (KLIB), the location where it was built (KLIB_BUILD), and the location of the config file (KLIB_CONFIG).

  5. Run 'make menuconfig' to select the drivers you want the latest versions of, then build.
    make menuconfig
    make modules
    
  6. [OPTIONAL] Locate the .ko's and copy them into the built kernel tree
    cp --parents $(find . -name "*.ko") ../
    

And that's it! If you tar the kernel up, make sure to remove the backports directory first, as each .ko will otherwise be duplicated if you already copied the .ko's into the kernel tree. Please see the updating section for details on how to update a target's kernel.

Installing or Updating the kernel artifacts

Installation of a newly built kernel and other artifacts (headers, modules, dtbs) depends on the target platform and how you are booting it.

Kernel modules are located in a directory structure /lib/modules/(kernel_version) which matches the structure used when you install the modules with the 'modules_install' build target. If you created a tarball of the kernel modules you can install those on your target as such:

# download kernel specific tarball suitable for booted kernel
KERNEL=$(uname -r)
URL=http://192.168.1.165/tftpboot/ventana/
rm -f /tmp/linux-$KERNEL.tar.gz
wget -P /tmp $URL/linux-$KERNEL.tar.gz

# untar appropriately
if [ -d /etc/config ]; then
  # OpenWrt stores modules in a flat tree
  echo "OpenWrt Filesystem"
  ( cd /; tar -xvzf /tmp/linux-$KERNEL.tar.gz )
  find /lib/modules/$KERNEL -name "*.ko" -exec mv {} /lib/modules/$KERNEL \;
else
  ( cd /; tar -xvf /tmp/linux-$KERNEL.tar.gz )
fi
depmod -a
sync
  • Note that OpenWrt uses a flat directory structure for kernel modules

Booting the kernel from the network

The bootloader's job is to load the Linux kernel and execute it. Often during kernel development it's advantageous to boot the kernel from the network to create very quick edit/build/boot cycles.

This can be accomplished in a variety of ways depending on what you want to come from the network: the kernel, the device-tree blobs, the root filesystem, etc. During kernel development, usually just the kernel and device-tree blobs need to come over the network, and using a flash based root filesystem is fine.

To do this using Ventana as an example:

# boot kernel+dtbs over network via tftp, using NAND filesystem
tftp ${loadaddr} ${prefix}uImage && \
  tftp ${fdt_addr} ${prefix}${fdt_file2} && \
  fdt addr ${fdt_addr} && \
  fdt boardsetup && \
  setenv bootargs console=${console},${baudrate} root=ubi0:rootfs ubi.mtd=2 rootfstype=ubifs ${video} ${extra} && \
  bootm ${loadaddr} - ${fdt_addr}
# boot kernel+dtbs over network via tftp, using MMC filesystem
tftp ${loadaddr} ${prefix}uImage && \
  tftp ${fdt_addr} ${prefix}${fdt_file2} && \
  fdt addr ${fdt_addr} && \
  fdt boardsetup && \
  setenv bootargs console=${console},${baudrate} root=/dev/mmcblk0p1 rootfstype=ext4 rootwait rw ${video} ${extra} && \
  bootm ${loadaddr} - ${fdt_addr}
# boot kernel+dtbs over network via tftp, using USB filesystem
tftp ${loadaddr} ${prefix}uImage && \
  tftp ${fdt_addr} ${prefix}${fdt_file2} && \
  fdt addr ${fdt_addr} && \
  fdt boardsetup && \
  setenv bootargs console=${console},${baudrate} root=/dev/sda1 rootfstype=ext4 rootwait rw ${video} ${extra} && \
  bootm ${loadaddr} - ${fdt_addr}

Loading Kernel Modules

Kernel modules are dynamically loadable portions of the kernel that can be loaded or unloaded at runtime. It is generally not feasible to build every driver statically into a modern Linux kernel, so instead only the options required to load further modules are built statically.

The userspace applications insmod and rmmod are used to load and remove modules at runtime.

Most Linux based Operating Systems these days also contain support for automatically determining and loading module dependencies (modules required by the module you are trying to load) and for these you will also have:

  • modprobe - loads a module and all of its dependencies
  • depmod - builds the module dependency database

See Linux documentation for more information.
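For example (module names below are illustrative, and loading or removing modules requires root):

```shell
insmod ./hello.ko   # load one .ko file directly (no dependency resolution)
rmmod hello         # remove it
modprobe mac80211   # load a module plus its dependencies by name
lsmod               # list currently loaded modules
modinfo ./hello.ko  # show a module's parameters, license, and dependencies
depmod -a           # rebuild the dependency database after installing new modules
```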
