[[PageOutline]]

= Linux Kernel Development =

This page is for new or inexperienced developers wanting to work directly on the Linux kernel.

[=#linuxsource]
== Kernel sources ==

When choosing a kernel source you need to determine what support you are interested in.

=== Mainline upstream Linux kernel ===

Gateworks usually recommends using the mainline or upstream Linux kernel because it has the most up-to-date features and bug fixes that have been accepted and peer reviewed by the widest array of kernel developers around the world. The 'mainline' Linux kernel is the one officially released by Linus Torvalds and has an approximate release cycle of every 2 to 3 months.

Resources:
 * [https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git Linux git]
 * https://www.kernel.org

See also:
 * [wiki:ventana/#mainline-linux Gateworks mainline linux details and patches]
 * http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git - source
 * https://www.kernel.org/ - source archives
 * http://kernelnewbies.org/LinuxVersions - Linux version info

=== Stable kernel ===

The stable kernel adds backported bugfixes on top of previous mainline Linux kernel releases. A variant of this is the Long Term Support (LTS) kernel, whose versions are typically supported by the community for several years.

Resources:
 * [https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git/ Linux Stable Git]

=== Gateworks Newport kernel (OcteonTX) ===

Gateworks maintains a kernel tailored for the OcteonTX (Newport product family), based on a mainline Linux kernel with additional patches that may not be available in the mainline kernel yet.
Resources:
 * [https://github.com/Gateworks/linux-newport linux-newport Git]
 * [http://trac.gateworks.com/wiki/newport/bsp#kernel Newport Kernel wiki section]

=== Gateworks Ventana downstream vendor kernel (IMX6) ===

Gateworks maintains a downstream IMX6 Linux kernel that contains patches from Freescale and Gateworks to add support for Ventana, including some items that have not made it into mainline Linux yet.
 * [https://github.com/Gateworks/linux-imx6 source]
 * [http://dev.gateworks.com/ventana/images/gateworks-linux-imx6-3.14.48.tar.gz gateworks-linux-imx6-3.14.48.tar.gz] - prebuilt tarball
 * see [#backports below] if you need the latest wireless drivers from linux-wireless on top of the older Gateworks downstream vendor kernel

Our pre-built Gateworks downstream vendor kernel is a build artifact of our Yocto BSP and contains the following:
 * drivers we feel are important for our user base
 * wireless drivers from the Linux backports project
 * firmware for various devices that require run-time firmware loading
 * kernel headers

Our kernel source has a default config file (arch/arm/configs/gwventana_defconfig) which we use, but keep in mind that in general wireless drivers and subsystems are not defined there because those modules come from linux-backports instead.

[=#toolchain]
== Toolchains ==

A toolchain is a set of compiler tools (typically gcc) and libraries necessary for compiling and linking source (and possibly debugging, disassembling, etc).
Typically this includes the following (for the GNU C Compiler):
 * ar - archiver
 * as - assembler
 * c++/g++ - C++ compiler
 * gcc - C compiler
 * ld - linker
 * nm - list symbols from object files
 * strip - strips debug info from object files
 * objcopy - copy and translate object files
 * objdump - display info from object files
 * ranlib - generate index to archive
 * lib/lib* - stdc libraries
 * include/ - stdc library headers

The kernel (which is written entirely in ANSI C) is compiled with gcc and does not link against the stdc library.

You likely have one of the following toolchains to work with:
 * toolchain from an OpenWrt build directory
 * toolchain from a prebuilt OpenWrt BSP SDK
 * toolchain from a Yocto build directory
 * toolchain from a prebuilt Yocto BSP SDK
 * pre-built toolchain from somewhere else, like Android

[=#rootfs]
== Userspace and root filesystem ==

Userspace refers to anything in a Linux based OS which is 'not' the kernel. After initialization, the Linux kernel mounts the rootfs and executes PID 1 (/sbin/init), which is the start of userspace. The init process and any processes it launches are part of userspace. When applications act on device files such as /dev/sda they use the ioctl API to interact with kernel functions from userspace. A useful way to see what each directory is used for is to run {{{man hier}}} on your Ubuntu system.

A root filesystem (sometimes referred to as 'a userspace') contains everything you need for a Linux based OS other than the kernel itself. The root filesystem could contain the kernel, but only if the bootloader mounts it and launches the kernel from the rootfs (as with the Gateworks Ventana product family). Note that kernel modules are contained on the root filesystem but they are not considered userspace (yet the insmod/rmmod/modprobe apps that load/remove them are).
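As a quick illustration of the split described above, the following sketch locates the kernel-side pieces from userspace on any running Linux system (paths are typical but vary by distribution):

{{{#!bash
# Sketch: locating the kernel/userspace pieces discussed above
uname -r                        # version of the running kernel
echo "/lib/modules/$(uname -r)" # where that kernel's modules live on the rootfs
}}}

Everything you run here (the shell, uname, echo) is userspace; only the values reported come from the kernel.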
You likely have one of the following userspaces you want to work with:
 * OpenWrt root filesystem
 * Yocto root filesystem
 * Android root filesystem
 * Ubuntu root filesystem

In general you should be able to mix and match kernels and userspaces within reason, but note that certain Linux based OS's may require specific kernel functionality (ie Android requires several Android-specific kernel features to exist).

When working on mainline Linux patches, Gateworks developers often use either an OpenWrt or a Yocto userspace root filesystem depending on their needs.

[=#building]
== Building the Linux kernel (cross-compiling) ==

Building the Linux kernel for a target board that differs from your development host, outside the various Board Support Package development environments, is easily accomplished. Gateworks developers typically do kernel development out-of-tree and boot the kernel over the network to speed up development. If working with kernel modules you can remove the module (rmmod), download a new build, install it (insmod), and test again without constantly rebooting the board. Often it is just as quick to build a driver statically into the kernel, configure your board for network booting, and simply rebuild/reboot continually.

'''Prerequisites:'''
 - Linux development host: Gateworks uses and supports Ubuntu, which is used here as an example, but other Linux variants can work as well
 - Toolchain for the CPU architecture of the boards you are working with:
   * Newport uses the Cavium CN80xx OcteonTX SoC which has ARMv8 CPU cores. You can download the toolchain provided by Cavium for their SDK [http://dev.gateworks.com/newport/tools-gcc-6.2.tar.bz2 here] or use one that you have pre-built from OpenWrt
   * Ventana uses the NXP/Freescale IMX6 SoC which has ARM Cortex-A9 cores with NEON SIMD support. You can download our pre-built OpenWrt toolchain [http://dev.gateworks.com/openwrt/14.08/imx6/OpenWrt-Toolchain-imx6-for-arm_cortex-a9+neon-gcc-4.8-linaro_uClibc-0.9.33.2_eabi.tar.bz2 here]
   * Laguna uses the Cavium cns3xxx SoC which has ARM11 MPCore cores

Steps to build the Gateworks kernel using the Gateworks OpenWrt toolchain:
 1. Install pre-requisites:
 {{{#!bash
 apt-get install build-essential ncurses-dev bc u-boot-tools liblzo2-dev lzop git bison
 }}}
  * typically 'build-essential' provides enough for compiling, however we need a few other things for kernel development:
   - ncurses-dev is needed for menuconfig
   - u-boot-tools, bc, and lzop are needed for uImage
   - git is needed for checking out the source
 2. Obtain and install a compiler toolchain:
  * For Newport, you can use the pre-built toolchain provided by Cavium for their SDK:
 {{{#!bash
 wget http://dev.gateworks.com/newport/tools-gcc-6.2.tar.bz2
 tar -xvf tools-gcc-6.2.tar.bz2
 }}}
  * For Ventana, you can use the pre-built Gateworks OpenWrt 14.08 BSP toolchain tailored to the ARM Cortex-A9 CPU used in the IMX6 SoC:
 {{{#!bash
 wget http://dev.gateworks.com/openwrt/14.08/imx6/OpenWrt-Toolchain-imx6-for-arm_cortex-a9+neon-gcc-4.8-linaro_uClibc-0.9.33.2_eabi.tar.bz2
 tar -xvf OpenWrt-Toolchain-imx6-for-arm_cortex-a9+neon-gcc-4.8-linaro_uClibc-0.9.33.2_eabi.tar.bz2
 }}}
 3. Obtain the Linux kernel source (see [#linuxsource above] to help you decide which kernel version you should use):
  * for mainline '''Linux v4.14''' for example:
 {{{#!bash
 git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
 cd linux
 git checkout v4.14
 }}}
  * for the Gateworks Newport kernel based on 4.14:
 {{{#!bash
 git clone https://github.com/Gateworks/linux-newport.git
 cd linux-newport
 git checkout v4.14.4-newport
 }}}
  * for the '''Gateworks Linux 3.14.x''' based downstream vendor kernel (with full Ventana support including video capture, which hasn't made it fully into mainline yet):
 {{{#!bash
 git clone https://github.com/Gateworks/linux-imx6.git
 cd linux-imx6
 git checkout gateworks_fslc_3.14_1.0.x_ga
 }}}
 4. Set up the shell environment for building, which depends on the toolchain you are using. This includes the following env variables used by the kernel build system:
  * ARCH - the architecture to build (arm for 32bit ARM such as Ventana IMX6, arm64 for 64bit ARMv8 such as Newport CN80xx)
  * CROSS_COMPILE - the prefix the toolchain you are using has prior to the gcc executable (everything before the 'gcc')
  * PATH - make sure the directory containing the toolchain's compiler (${CROSS_COMPILE}gcc) is in your PATH; typically you will add it to the default PATH
  * INSTALL_MOD_PATH - where to install kernel modules to if using the 'modules_install' build target (in our examples we install to ./install)
  * INSTALL_HDR_PATH - where to install header files if using the 'headers_install' build target (in our examples we install to ./install; note that this needs to be passed into make as opposed to an env var)
  * LOADADDR - where to relocate/uncompress the kernel to when using the uImage target
  * Examples:
   * using the [http://dev.gateworks.com/newport/tools-gcc-6.2.tar.bz2 Newport toolchain] to build a Linux stable 4.14.x kernel:
 {{{#!bash
 export STAGING_DIR=/home/user/builds/newportToolchain
 TOOLCHAIN=thunderx-tools-97
 PATH=$PATH:$STAGING_DIR/$TOOLCHAIN/bin
 export ARCH=arm64
 export CROSS_COMPILE=aarch64-thunderx-linux-gnu-
 }}}
   * using the pre-built Gateworks OpenWrt 14.08 BSP toolchain:
 {{{#!bash
 export STAGING_DIR=../OpenWrt-Toolchain-imx6-for-arm_cortex-a9+neon-gcc-4.8-linaro_uClibc-0.9.33.2_eabi
 TOOLCHAIN=toolchain-arm_cortex-a9+neon_gcc-4.8-linaro_uClibc-0.9.33.2_eabi
 PATH=$PATH:$STAGING_DIR/$TOOLCHAIN/bin
 export ARCH=arm
 export CROSS_COMPILE=arm-openwrt-linux-
 export LOADADDR=0x10008000
 }}}
  * Note that STAGING_DIR is something required by the OpenWrt toolchain; your toolchain may differ
  * Some troubleshooting steps:
 {{{#!bash
 echo $ARCH                # make sure this is not empty and is set to your target CPU architecture
 which ${CROSS_COMPILE}gcc # is this your compiler?
 ${CROSS_COMPILE}gcc -v    # is 'Target:' your desired architecture?
 echo $PATH                # have you added the toolchain directory to your path? When in doubt start a new shell
 }}}
 5. Configure the kernel. Often you will want to start with an existing defconfig file followed by a 'make menuconfig' to change any options you desire to modify.
 * Examples:
  * for mainline Ventana/IMX6:
 {{{#!bash
 make imx_v6_v7_defconfig # Ventana/IMX6
 make menuconfig
 }}}
  * for mainline Newport/CN80XX:
 {{{#!bash
 make defconfig # ARM64 defconfig
 make menuconfig
 }}}
  * for the Gateworks Newport kernel:
 {{{#!bash
 make newport_defconfig # Gateworks default config
 make menuconfig
 }}}
  * for the Gateworks Linux 3.14.x based downstream vendor kernel:
 {{{#!bash
 make gwventana_defconfig # Gateworks default config
 make menuconfig
 }}}
 * The menuconfig make target launches the ncurses based (serial console) Linux kernel menu configuration tool so that you can add or adjust support to your needs. A common change to the default kernel configurations above would be to add support for a USB based device, or wireless (802.11) support, for example.
 6. Build kernel targets. Which targets you build depends on the product family and the bootloader or boot method you are using. Some common examples are:
  * Image - uncompressed kernel (use this for Newport/CN80XX arm64 targets)
  * uImage - compressed kernel wrapped in a U-Boot image header (use this for Ventana/IMX6); requires the $LOADADDR env variable to tell U-Boot where to uncompress the kernel to
  * modules - for building loadable kernel modules (*.ko)
  * modules_install - for installing the loadable kernel modules (set the INSTALL_MOD_PATH parameter to the location you wish to install to)
  * headers_install - for installing kernel header files for development of userspace applications that use the kernel APIs (set the INSTALL_HDR_PATH parameter to the location you wish to install to)
  * dtbs - for building device-tree blobs (*.dtb) from the kernel source tree (Ventana/IMX6; for Newport we use an out-of-kernel-tree device-tree)
  * Examples:
   - Newport/CN80XX:
 {{{#!bash
 mkdir install
 make INSTALL_MOD_PATH=install INSTALL_HDR_PATH=install/usr Image modules modules_install headers_install
 }}}
   - Ventana/IMX6:
 {{{#!bash
 mkdir install
 make INSTALL_MOD_PATH=install INSTALL_HDR_PATH=install/usr uImage dtbs modules modules_install headers_install
 }}}
 7. Copy artifacts (depends on target):
  * kernel images (Image/uImage) will be in arch/$ARCH/boot
  * dtbs will be in arch/$ARCH/boot/dts
  * modules will be in the directory specified by the INSTALL_MOD_PATH parameter. You can create a tarball via:
 {{{#!bash
 tar -C install --owner=0 --group=0 -cvJf modules.tar.xz .
 }}}

[=#headers]
== Installing Kernel Headers ==

This section covers installing kernel headers from the Linux kernel build detailed above. These instructions are not for installing headers on an already running target board (ie a Gateworks SBC).

The 'make headers_install' command exports the kernel's header files in a form suitable for use by userspace programs. These headers describe the API for userspace programs attempting to use kernel services and are used by the system's C library (ie glibc or uClibc) to define available system calls as well as the constants and structures to be used with those system calls. The C library's header files are usually installed in /usr/include and the kernel's headers are usually in /usr/include/linux and /usr/include/asm.

Kernel headers are backwards compatible but not forwards compatible, meaning a program built against a C library using older kernel headers should run on a newer kernel (although possibly missing access to new features), but a program built against newer kernel headers may not work on an older kernel.

The 'make headers_install' command takes two optional arguments to define the architecture (ARCH) and the location to install the files (INSTALL_HDR_PATH).

Reference:
 - https://www.kernel.org/doc/Documentation/kbuild/headers_install.txt

[=#modules]
== Building external (out of tree) kernel modules ==

The Linux 'kbuild' system is used by the Linux kernel for kernel configuration. Linux kernel modules must use kbuild to stay compatible with changes in the build infrastructure.
A kernel module can be built stand-alone either 'in tree' (within the directory structure of the Linux kernel) or 'out of tree' (aka 'external'), meaning it exists in a directory of its own outside of the Linux source tree. '''Kernel headers alone are not enough to build kernel modules; use the kernel source for this.'''

An external module must always include a wrapper makefile that supports building the module using 'make' with no arguments. See [https://www.kernel.org/doc/Documentation/kbuild/modules.txt modules.txt] for details.

To build external modules you must have a prebuilt kernel available that contains the configuration and header files used in the build, and the kernel must have been built with modules enabled.
 * Newport prebuilt kernel: [http://dev.gateworks.com/newport/kernel/]
 * Ventana prebuilt kernel: [http://dev.gateworks.com/ventana/images/]
 * Note, for example, on a running Gateworks Newport SBC using Ubuntu, do not use the below command, because this will install the Ubuntu kernel headers which do not match the Gateworks kernel:
 {{{
 apt-get install linux-headers-generic
 }}}

To build an external kernel module you typically would use:
{{{#!bash
cd <module-directory>
make -C <path-to-kernel-tree> M=$PWD
}}}
 * This will build the modules located in the current directory, pointed to by the M param
 * You can then manually copy or load your kernel modules, or use the modules_install make target to install them to a specific path
 * If cross-compiling, make sure to define the ARCH and CROSS_COMPILE env variables and have ${CROSS_COMPILE}gcc in your PATH

As an example consider the following files in a directory called 'hello-driver':
 * hello.c:
{{{#!c
#include <linux/init.h>
#include <linux/module.h>

MODULE_LICENSE("Dual BSD/GPL");

static int hello_init(void)
{
	printk(KERN_ALERT "Hello, world\n");
	return 0;
}

static void hello_exit(void)
{
	printk(KERN_ALERT "Goodbye, cruel world\n");
}

module_init(hello_init);
module_exit(hello_exit);
}}}
 * Makefile:
{{{#!make
obj-m := hello.o
}}}
 * building:
{{{#!bash
make -C <path-to-kernel-tree> M=$PWD
ls hello.ko
}}}
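The one-line Makefile above relies on passing -C on the command line every time. A common convenience, following the combined-Makefile pattern documented in the kbuild modules.txt referenced above, is a wrapper so that a plain 'make' works. The KDIR default below is an assumption (the usual native-build location); override it to point at your own kernel tree when cross-compiling:

{{{#!make
# Sketch of a wrapper Makefile for the hello module.
# KDIR names your configured/built kernel tree; the default shown is the
# conventional location for the running kernel and is only an example.
KDIR ?= /lib/modules/$(shell uname -r)/build

obj-m := hello.o

all:
	$(MAKE) -C $(KDIR) M=$(CURDIR) modules

clean:
	$(MAKE) -C $(KDIR) M=$(CURDIR) clean
}}}

When kbuild re-reads this file during the -C invocation it only consumes the obj-m line, so the all/clean rules and the recursive make coexist in one file. For a cross build you might run, for example, {{{make KDIR=../linux-imx6 ARCH=arm CROSS_COMPILE=arm-openwrt-linux-}}} (paths here are illustrative).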
References:
 - https://www.kernel.org/doc/Documentation/kbuild/modules.txt
 - https://www.oreilly.com/library/view/linux-device-drivers/0596005903/ch02.html

[=#backports]
== Building the Latest Kernel Modules for an older kernel - Linux Backports ==

Sometimes you may be using a kernel that is not as new as the mainline kernel at kernel.org and you want a driver (or a newer driver) from the latest kernel. The [https://backports.wiki.kernel.org/index.php/Main_Page Linux Backports] project aims at providing this capability. In order to build this you will need a kernel tree that has already been built. This example assumes the {{{gateworks_fslc_3.14_1.0.x_ga}}} kernel is being used.

 1. Change into the kernel tree directory, making sure it is built using the example shown in the [wiki:linux/kernel#building building section above]:
 {{{#!bash
 cd gateworks_fslc_3.14_1.0.x_ga
 make gwventana_defconfig
 etc...
 }}}
 2. Grab the backports version you want to use and {{{cd}}} into it. This example will use the one currently used for our Yocto BSPs:
 {{{#!bash
 wget https://www.kernel.org/pub/linux/kernel/projects/backports/2016/01/22/backports-20160122.tar.gz
 tar xzf backports-20160122.tar.gz
 cd backports-20160122
 }}}
 3. [OPTIONAL] You may have to patch the backports project, depending on the version being used:
 {{{#!bash
 wget https://raw.githubusercontent.com/Gateworks/meta-gateworks/fido/recipes-kernel/compat-wireless/compat-wireless-all/0001-disable_kconf.patch
 wget https://raw.githubusercontent.com/Gateworks/meta-gateworks/fido/recipes-kernel/compat-wireless/compat-wireless-all/0002-add-KLIB_CONFIG.patch
 wget https://raw.githubusercontent.com/Gateworks/meta-gateworks/fido/recipes-kernel/compat-wireless/compat-wireless-all/add_db_txt.patch
 patch -p1 < 0001-disable_kconf.patch   # Used if no kconfig available (generally never needed)
 patch -p1 < 0002-add-KLIB_CONFIG.patch # Used if config is not located in kernel directory
 patch -p1 < add_db_txt.patch           # Used if needing a static wireless regulatory database
 }}}
 4. Configure the shell for the cross compiler (please see [wiki:linux/kernel#building the building section] for more details):
 {{{#!bash
 export STAGING_DIR=../OpenWrt-Toolchain-imx6-for-arm_cortex-a9+neon-gcc-4.8-linaro_uClibc-0.9.33.2_eabi
 TOOLCHAIN=toolchain-arm_cortex-a9+neon_gcc-4.8-linaro_uClibc-0.9.33.2_eabi
 PATH=$PATH:$STAGING_DIR/$TOOLCHAIN/bin
 export INSTALL_MOD_PATH=install
 export ARCH=arm
 export CROSS_COMPILE=arm-openwrt-linux-
 export KLIB=..
 export KLIB_BUILD=..
 export KLIB_CONFIG=..
 }}}
 Notice that the {{{KLIB}}} variables are new; they refer to the location of the raw kernel, the location where it is built, and the location of the config file.
 5. Run menuconfig to select the drivers you want the latest versions of, then build:
 {{{#!bash
 make menuconfig
 make modules
 }}}
 6. [OPTIONAL] Locate the .ko's and copy them to the built kernel tree:
 {{{#!bash
 cp --parents $(find -name "*.ko") ../
 }}}

And that's it! If you tar the kernel up, make sure to remove the backports directory first, as each {{{.ko}}} will be counted twice if you already copied the {{{.ko}}} files into the kernel tree. Please see the [wiki:linux/kernel#updating updating section] for details on how to update a target's kernel.
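One way to sanity-check the result before tarring anything up is to confirm each backported module was built against the intended kernel. This sketch assumes the {{{modinfo}}} tool (from kmod/module-init-tools) is available on your build host:

{{{#!bash
# Sketch: print the vermagic of each built backports module; it should
# match the kernel version you built against (eg 3.14.x here)
for ko in $(find . -name '*.ko'); do
    echo "$ko: $(modinfo -F vermagic $ko)"
done
}}}

A mismatched vermagic is the usual reason a module refuses to load on the target.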
[=#install]
[=#updating]
== Installing or Updating the kernel artifacts ==

Installation of a newly built kernel and its other artifacts (headers, modules, dtbs) depends on the target platform and how you are booting it.

Kernel modules are located in a directory structure /lib/modules/(kernel_version) which matches the structure used when you install the modules with the 'modules_install' build target. If you created a tarball of the kernel modules you can install it on your target as such:
{{{#!bash
# download kernel specific tarball suitable for booted kernel
KERNEL=$(uname -r)
URL=http://192.168.1.165/tftpboot/ventana/
rm -f /tmp/linux-$KERNEL.tar.gz
wget -P /tmp $URL/linux-$KERNEL.tar.gz
# untar appropriately
if [ -d /etc/config ]; then
	# OpenWrt stores modules in a flat tree
	echo "OpenWrt Filesystem"
	( cd /; tar -xvzf /tmp/linux-$KERNEL.tar.gz )
	find /lib/modules/$KERNEL -name '*.ko' -exec mv {} /lib/modules/$KERNEL \;
else
	( cd /; tar -xvf /tmp/linux-$KERNEL.tar.gz )
fi
depmod -a
sync
}}}
 * Note that OpenWrt uses a flat directory structure for kernel modules

[=#netboot]
== Booting the kernel from the network ==

The bootloader's job is to load the Linux kernel and execute it. Often during kernel development it is advantageous to boot the kernel from the network to create very quick edit/build/boot cycles. This can be accomplished in a variety of ways depending on what you want to come from the network: the kernel, the device-tree blobs, the root filesystem, etc. During kernel development usually just the kernel and device-tree blobs need to come over the network, and using a flash based userspace filesystem is fine.
To do this using Ventana as an example:
{{{#!bash
# boot kernel+dtbs over network via tftp, using NAND filesystem
tftp ${loadaddr} ${prefix}uImage && \
tftp ${fdt_addr} ${prefix}${fdt_file2} && \
fdt addr ${fdt_addr} && \
fdt boardsetup && \
setenv bootargs console=${console},${baudrate} root=ubi0:rootfs ubi.mtd=2 rootfstype=ubifs ${video} ${extra} && \
bootm ${loadaddr} - ${fdt_addr}

# boot kernel+dtbs over network via tftp, using MMC filesystem
tftp ${loadaddr} ${prefix}uImage && \
tftp ${fdt_addr} ${prefix}${fdt_file2} && \
fdt addr ${fdt_addr} && \
fdt boardsetup && \
setenv bootargs console=${console},${baudrate} root=/dev/mmcblk0p1 rootfstype=ext4 rootwait rw ${video} ${extra} && \
bootm ${loadaddr} - ${fdt_addr}

# boot kernel+dtbs over network via tftp, using USB filesystem
tftp ${loadaddr} ${prefix}uImage && \
tftp ${fdt_addr} ${prefix}${fdt_file2} && \
fdt addr ${fdt_addr} && \
fdt boardsetup && \
setenv bootargs console=${console},${baudrate} root=/dev/sda1 rootfstype=ext4 rootwait rw ${video} ${extra} && \
bootm ${loadaddr} - ${fdt_addr}
}}}

[=#insmod]
= Loading Kernel Modules =

Kernel modules are dynamically loadable portions of the kernel that can be loaded or unloaded at runtime. It is generally not feasible to build every kernel driver statically into a modern Linux kernel, so instead only the options required to load further modules are built statically. The userspace applications {{{insmod}}} and {{{rmmod}}} are used to load and remove modules at runtime. Most Linux based operating systems these days also contain support for automatically determining and loading module dependencies (modules required by the module you are trying to load), and for these you will also have:
 * {{{modprobe}}} - load a module and all of its dependencies
 * {{{depmod}}} - build the module dependency database

See the Linux documentation for more information.
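Putting those tools together, a typical load/verify/unload cycle looks like the sketch below. The module name {{{ath9k}}} is only an example, and these commands require root on the target:

{{{#!bash
# Sketch of a module load/unload cycle (ath9k is an example module name)
depmod -a           # rebuild the dependency database after installing new .ko files
modprobe ath9k      # load the module plus any modules it depends on
lsmod | grep ath9k  # confirm it is loaded
modprobe -r ath9k   # remove it along with now-unused dependencies
}}}

Remember to run {{{depmod -a}}} any time you copy new modules into /lib/modules/$(uname -r), otherwise modprobe will not find them.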