- Linux Kernel Development
- Kernel sources
- Cross Toolchains
- Userspace and root filesystem
- Building the Linux kernel (cross-compiling)
- Installing Kernel Headers
- Building external (out of tree) kernel modules
- Building the Latest Kernel Modules for an older kernel - Linux Backports
- Installing or Updating the kernel artifacts
- Booting the kernel from the network
- Loading Kernel Modules
- Creating your own defconfig.
- Make HELP
Linux Kernel Development
This page is for new or inexperienced developers wanting to work directly on the Linux kernel.
Video tutorial for working with Kernel: https://youtu.be/XCkegC05xXY
Kernel sources
When choosing a kernel source you need to determine what support you are interested in.
Mainline upstream Linux kernel
Gateworks usually recommends using the mainline or upstream Linux LTS/stable kernel because it has the most up-to-date features and bug-fixes that have been accepted and peer reviewed by the widest array of kernel developers around the world.
The 'mainline' Linux kernel is the one officially released by Linus Torvalds; it has an approximate release cycle of 2 to 3 months and has major and minor version number components like v5.7.
The 'stable' Linux kernel refers to periodic patches that address security issues and bugs (but not new features) and has a major, a minor, and a stable release version number like v5.7.6.
Certain kernel versions are designated as 'long term support' kernels (LTS) which means they will get support in the 'stable' tree for several years.
Gateworks has GitHub kernel repos for each of our product families that can be described as:
- based on (usually the latest) LTS kernel
- usually a 'stable' branch release of that LTS kernel (though we do not always rebase on top of each new stable release)
- patches that may not yet be upstream or have been backported from a newer kernel
- a defconfig that enables everything necessary to support the devices on the board as well as module support for a large variety of usb and pcie devices
- a bootscript that can be used to boot that particular kernel
- kernel userspace headers
Resources:
- https://www.kernel.org - The Linux Kernel
- Linux mainline git
- Linux stable git
- Gateworks kernel for IMX8M based Venice
- Gateworks kernel for CN80xx based Newport
- Gateworks kernel for IMX6 based Ventana
See also:
- Gateworks mainline linux details and patches
- http://kernelnewbies.org/LinuxVersions - Linux version info
Stable Kernel
The Stable Kernel adds backported bugfixes on top of previous mainline Linux kernel releases. A variant of this is the Long Term Support (LTS) kernel versions that are supported by the community, typically for several years.
Cross Toolchains
A toolchain is a set of compiler tools (typically gcc) and libraries necessary for compiling and linking source (and possibly debugging, disassembling, etc). A cross toolchain (aka cross compiler) runs under a host architecture but produces binaries for a target architecture (ie an x86-64 compiler producing code for the aarch64 architecture).
Typically this includes the following (for the GNU C Compiler):
- ar - archiver (creates and manages static library archives)
- as - assembler
- c++/g++ - C++ compiler
- gcc - C compiler
- ld - linker
- nm - list symbols from object files
- strip - strips debug info from object files
- objcopy - copy and translate object files
- objdump - display info from object files
- ranlib - generate index to archive
- lib/lib* - stdc libraries
- include/ - stdc library headers
The kernel (which is written almost entirely in C) is compiled with gcc and does not link against the stdc library.
You likely have one of the following toolchains to work with:
- cross toolchain from your dev host distro (ie Ubuntu)
- cross toolchain from an OpenWrt build directory
- cross toolchain from a prebuilt OpenWrt BSP SDK
- cross toolchain from a Yocto build directory
- cross toolchain from a prebuilt Yocto BSP SDK
- pre-built cross toolchain from somewhere else, like Android
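Whichever toolchain you use, a quick way to confirm it targets the right architecture is to ask the compiler itself. A minimal sketch, using Ubuntu's aarch64 cross compiler as an example (adjust the prefix for your toolchain):
aarch64-linux-gnu-gcc -dumpmachine     # prints the target triplet, ie aarch64-linux-gnu
echo 'int main(void){return 0;}' > test.c
aarch64-linux-gnu-gcc -o test test.c
file test                              # should report a 64-bit ARM aarch64 ELF executable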
Userspace and root filesystem
Userspace refers to anything in a Linux based OS which is 'not' the kernel. After initialization, the Linux kernel mounts the rootfs and executes PID1 (/sbin/init) which is the start of userspace. The init process and any processes it launches are part of userspace. When applications act on device files such as /dev/sda they use system calls (open, read, write, ioctl) to interact with kernel functions from userspace. A useful way to determine the purpose of each directory is to execute 'man hier' on your Ubuntu system.
A root filesystem (sometimes referred to as 'a userspace') contains everything you need for a Linux based OS other than the kernel itself. The root filesystem could contain the kernel, but only if the bootloader mounts it and launches it from the rootfs (ie the Gateworks Ventana product family). Note that kernel modules are contained on the root filesystem but they are not considered userspace (yet the insmod/rmmod/modprobe apps that load/remove them are).
You likely have one of the following userspaces you want to work with:
- OpenWrt root filesystem
- Yocto root filesystem
- Android root filesystem
- Ubuntu root filesystem
In general, you should be able to mix and match kernels and userspace within reason, but note that certain Linux based OS's may require specific kernel functionality (ie Android requires several Android specific kernel features to exist).
When working on mainline Linux patches, Gateworks developers often use either an OpenWrt userspace root filesystem or a Yocto userspace root filesystem depending on their needs.
Building the Linux kernel (cross-compiling)
Building the Linux kernel for a target board whose architecture differs from your development host's, outside of the various Board Support Package development environments, is easily accomplished.
Gateworks developers typically do kernel development out-of-tree and boot the kernel over the network to speed up development.
If working with kernel modules, you can remove the module (rmmod), download a new build, install it (insmod) again, and test, avoiding constant reboots of the board. Often it's just as quick to build a driver static in the kernel, configure your board for network booting, and just rebuild/reboot continually.
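A rough sketch of that iterate-without-rebooting workflow, run on the target board (the module name and build-host path here are hypothetical):
rmmod mydriver                                                # unload the old module
scp user@build-host:~/linux/drivers/misc/mydriver.ko /tmp/    # fetch the new build
insmod /tmp/mydriver.ko                                       # load the new module
dmesg | tail                                                  # check the driver's probe messages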
Prerequisites:
- Linux development host: Gateworks uses and supports Ubuntu which is used here as an example, but other Linux variants can work as well
- Cross Toolchain for the CPU architecture of the boards you are working with
- Various other tools required by the Linux kernel build system
Steps to build the full Gateworks kernel (modules, FIT image, dtbs, bootscript and all):
- Install pre-requisites:
apt install build-essential ncurses-dev bc u-boot-tools liblzo2-dev lzop git bison flex openssl libssl-dev
- typically 'build-essential' provides enough for compiling, however we need a few other things for kernel development:
- ncurses-dev is needed for menuconfig
- u-boot-tools, bc, and lzop are needed for uImage (used for ARM 32-bit architectures only)
- git is needed for checking out the source
- Obtain Linux kernel source (see above to help you decide which kernel version you should use):
- for the latest Gateworks kernel for Venice:
git clone https://github.com/Gateworks/linux-venice.git
cd linux-venice
- for the latest mainline working tree:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
cd linux
- Install cross toolchain and setup shell environment for building. This includes the following env variables used by the kernel build system:
- ARCH - the architecture to build (arm64 for 64-bit ARM architectures such as Malibu/Venice/Newport or arm for 32-bit ARM architectures such as Ventana)
- CROSS_COMPILE - the prefix the toolchain you are using has prior to the gcc executables (everything before the 'gcc').
- PATH - make sure the directory containing the toolchain's compiler (${CROSS_COMPILE}gcc) is in your PATH. Typically you will prepend this directory to the default PATH
- INSTALL_MOD_PATH - where to install kernel modules to by default if using the 'modules_install' build target (in our examples we install to ./install)
- INSTALL_MOD_STRIP - set to 1 to strip kernel modules of debugging data (saves a tremendous amount of space)
- LOADADDR - where to relocate/uncompress the kernel to when using the uImage target
- Examples:
- Venice (IMX8M arm64) using Ubuntu's aarch64-linux-gnu-gcc cross toolchain for building code for ARM 64-bit architecture
sudo apt install gcc-aarch64-linux-gnu
export ARCH=arm64
export CROSS_COMPILE=aarch64-linux-gnu-
export INSTALL_PATH=install
export INSTALL_MOD_STRIP=1
export LOADADDR=0x40200000
- Newport (CN80XX arm64) using Ubuntu's aarch64-linux-gnu-gcc cross toolchain for building code for ARM 64-bit architecture
sudo apt install gcc-aarch64-linux-gnu
export ARCH=arm64
export CROSS_COMPILE=aarch64-linux-gnu-
export INSTALL_PATH=install
export INSTALL_MOD_STRIP=1
export LOADADDR=0x20000000
- Ventana (IMX6 arm) using Ubuntu's arm-linux-gnueabi cross toolchain for building code for ARM 32-bit architecture
sudo apt install gcc-arm-linux-gnueabi
export ARCH=arm
export CROSS_COMPILE=arm-linux-gnueabi-
export INSTALL_PATH=install
export INSTALL_MOD_STRIP=1
export LOADADDR=0x10008000
- Configure the kernel. You will want to start with an existing defconfig file, followed by a 'make menuconfig' to change any options you wish to modify. When using 'menuconfig' you can search for various items using the '/' key which will show you those items as well as their dependencies (you cannot enable a kernel config unless all of its dependencies are met)
- Examples:
- Venice (IMX8M arm64)
make distclean
make imx8m_venice_defconfig # Gateworks default config
make menuconfig
- Note that imx8m_venice_defconfig exists in arch/arm64/configs/ for the Gateworks venice kernels and does contain support for imx8mm as well as imx8mn and imx8mp SoC's. This defconfig contains all kernel modules deemed useful and is rather large. For a more minimal venice kernel defconfig copy venice_minimal_kernel_defconfig to arch/arm64/configs and use 'make venice_minimal_kernel_defconfig'
- Newport (CN80XX arm64)
make distclean
make newport_defconfig # Gateworks default config
make menuconfig
- Note that newport_defconfig exists in arch/arm64/configs for the Gateworks newport kernels and contains all kernel modules deemed useful and is rather large. For a more minimal newport kernel defconfig copy newport_minimal_kernel_defconfig to arch/arm64/configs and use 'make newport_minimal_kernel_defconfig'
- Ventana (IMX6 arm)
make distclean
make gwventana_defconfig # Gateworks default config
make menuconfig
- Note that gwventana_defconfig exists in arch/arm/configs for the Gateworks Ventana kernels and contains all kernel modules deemed useful and is rather large. For a more minimal ventana kernel defconfig copy gwventana_minimal_kernel_defconfig to arch/arm/configs and use 'make gwventana_minimal_kernel_defconfig'
- Build kernel targets. What targets you build depends on the product family and the bootloader or boot method you are using. Some common examples are:
- Image - uncompressed kernel (use this for 64-bit targets)
- uImage - compressed kernel wrapped in a U-Boot image header (use this for 32-bit targets that use 'bootm' from the bootloader). This requires the $LOADADDR env variable to tell U-Boot where to uncompress the kernel to which is SoC specific
- modules - for building loadable kernel modules (*.ko)
- dtbs - for building device-tree blobs (*.dtb) from kernel source tree (Ventana/IMX6; for Newport we use out-of-kernel tree device-tree)
- Examples:
- Venice (IMX8M arm64)
make -j8 Image modules dtbs
- Newport (CN80XX arm64):
make -j8 Image modules
- Note Newport uses the device-tree from the boot firmware not from the kernel repository so we don't bother building dtbs
- Ventana (IMX6 arm):
make -j8 uImage dtbs modules
- Note that ARM 32-bit bootloaders typically use uImage for kernel image which requires LOADADDR to be defined with a specific address for the SoC
- Install additional files to proper directories. Various kernel make targets exist to help you install kernel modules and header files to a directory. In addition depending on the target we may need to copy dtbs and create a bootscript and a kernel fit image (if that is what the bootscript expects).
- modules_install - for installing the loadable kernel modules (set the INSTALL_MOD_PATH 'parameter' or 'env' to the location you wish to install to)
- headers_install - for installing kernel header files for development of userspace applications that use the kernel APIs (set the INSTALL_HDR_PATH 'parameter' not env to the location you wish to install to)
- Examples:
- Venice (IMX8M)
mkdir -p $INSTALL_PATH/boot
make modules_install INSTALL_MOD_PATH=$INSTALL_PATH # install modules
make headers_install INSTALL_HDR_PATH=$INSTALL_PATH/usr # install headers
cp arch/arm64/boot/Image arch/arm64/boot/dts/freescale/imx8*venice*.dtb* $INSTALL_PATH/boot # install kernel and dtbs
gzip -fk $INSTALL_PATH/boot/Image
mkimage -f auto -A $ARCH -O linux -T kernel -C gzip -n "Kernel" \
  -a $LOADADDR -e $LOADADDR -d $INSTALL_PATH/boot/Image.gz $INSTALL_PATH/boot/kernel.itb # FIT image
wget https://raw.githubusercontent.com/Gateworks/bsp-venice/master/boot.scr -O boot.scr
mkimage -A $ARCH -T script -C none -d boot.scr $INSTALL_PATH/boot/boot.scr # boot script
- Newport (CN80XX)
mkdir -p $INSTALL_PATH/boot
make modules_install INSTALL_MOD_PATH=$INSTALL_PATH # install modules
make headers_install INSTALL_HDR_PATH=$INSTALL_PATH/usr # install headers
cp arch/arm64/boot/Image $INSTALL_PATH/boot # install kernel
gzip -fk $INSTALL_PATH/boot/Image
mkimage -f auto -A $ARCH -O linux -T kernel -C gzip -n "Kernel" \
  -a $LOADADDR -e $LOADADDR -d $INSTALL_PATH/boot/Image.gz $INSTALL_PATH/boot/kernel.itb # FIT image
wget https://raw.githubusercontent.com/Gateworks/bsp-newport/sdk-10.1.1.0-newport/ubuntu.scr -O boot.scr
mkimage -A $ARCH -T script -C none -d boot.scr $INSTALL_PATH/boot/newport.scr # boot script
- Ventana (IMX6)
mkdir -p $INSTALL_PATH/boot
make modules_install INSTALL_MOD_PATH=$INSTALL_PATH # install modules
make headers_install INSTALL_HDR_PATH=$INSTALL_PATH/usr # install headers
mkimage -A $ARCH -T script -C none -d gwventana_bootscript $INSTALL_PATH/boot/6x_bootscript-ventana # boot script
- Create tarball of build artifacts:
tar -C $INSTALL_PATH --owner=0 --group=0 -cvJf linux.tar.xz .
- This kernel can be installed on your rootfs via
tar -C / -xvf linux.tar.xz --keep-directory-symlink
Troubleshooting:
- ensure you have the ARCH env variable set properly for your target CPU architecture:
echo $ARCH # should be arm64 for 64-bit ARM targets and arm for 32-bit ARM targets
- ensure that ${CROSS_COMPILE}gcc exists in your $PATH and is configured for your desired architecture:
user@build:/usr/src/linux$ ${CROSS_COMPILE}gcc -v
Using built-in specs.
COLLECT_GCC=aarch64-linux-gnu-gcc
COLLECT_LTO_WRAPPER=/usr/lib/gcc-cross/aarch64-linux-gnu/9/lto-wrapper
Target: aarch64-linux-gnu
Configured with: ../src/configure -v --with-pkgversion='Ubuntu 9.4.0-1ubuntu1~20.04.1' --with-bugurl=file:///usr/share/doc/gcc-9/README.Bugs --enable-languages=c,ada,c++,go,d,fortran,objc,obj-c++,gm2 --prefix=/usr --with-gcc-major-version-only --program-suffix=-9 --enable-shared --enable-linker-build-id --libexecdir=/usr/lib --without-included-gettext --enable-threads=posix --libdir=/usr/lib --enable-nls --with-sysroot=/ --enable-clocale=gnu --enable-libstdcxx-debug --enable-libstdcxx-time=yes --with-default-libstdcxx-abi=new --enable-gnu-unique-object --disable-libquadmath --disable-libquadmath-support --enable-plugin --enable-default-pie --with-system-zlib --without-target-system-zlib --enable-libpth-m2 --enable-multiarch --enable-fix-cortex-a53-843419 --disable-werror --enable-checking=release --build=x86_64-linux-gnu --host=x86_64-linux-gnu --target=aarch64-linux-gnu --program-prefix=aarch64-linux-gnu- --includedir=/usr/aarch64-linux-gnu/include
Thread model: posix
gcc version 9.4.0 (Ubuntu 9.4.0-1ubuntu1~20.04.1)
- the above (from an Ubuntu 20.04 focal host) shows the toolchain is based on GCC v9.4.0 built to execute on x86_64 architecture (--host) for a target of aarch64 (ARM 64bit) architecture (--target)
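A quick sanity check of the build environment can catch both of these issues. A minimal sketch (the expected triplet and LOADADDR depend on your target):
echo $ARCH $CROSS_COMPILE $LOADADDR              # verify the environment variables are set
command -v ${CROSS_COMPILE}gcc || echo "cross compiler not found in PATH"
${CROSS_COMPILE}gcc -dumpmachine                 # prints the target triplet, ie aarch64-linux-gnu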
Installing Kernel Headers
This section covers installing kernel headers as part of the Linux kernel build detailed above. These instructions are not for installing headers on an already running target board (ie a Gateworks SBC).
The 'make headers_install' command exports the kernel's header files in a form suitable for use by userspace programs. These headers describe the API for user space programs attempting to use kernel services and are used by the system's C library (ie glibc or uClibc) to define available system calls as well as constants and structures to be used with these system calls. The C library's header files are usually installed in /usr/include and the kernel's headers are usually in /usr/include/linux and /usr/include/asm. Kernel headers are backwards compatible but not forwards compatible: a program built against a C library using older kernel headers should run on a newer kernel (although possibly missing access to new features), but a program built against newer kernel headers may not work on an older kernel.
The 'make headers_install' command uses two optional arguments to define the architecture (ARCH, which defaults to $ARCH) and the location to install the files (INSTALL_HDR_PATH).
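For example, a minimal invocation from the kernel source tree (the ./headers destination directory here is arbitrary):
make headers_install ARCH=arm64 INSTALL_HDR_PATH=./headers
ls ./headers/include/linux     # exported userspace API headers end up under include/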
Building external (out of tree) kernel modules
Video Overview: https://youtu.be/PcrGUiuNBcg
The Linux 'kbuild' system is used by the Linux kernel for configuration and building. Linux kernel modules must use kbuild to stay compatible with changes in the build infrastructure. A kernel module can be built either 'in tree' (within the directory structure of the Linux kernel) or 'out of tree' (aka 'external'), meaning it exists in a directory of its own outside of the Linux source tree. The userspace kernel headers are not sufficient to build kernel modules; use a configured and built kernel source tree for this.
An external module must always include a wrapper makefile that supports building the module using 'make' with no arguments. See modules.txt for details.
To build external modules you must have the kernel source. Gateworks does not recommend doing this on the target.
- Note, for example, on a running Gateworks Newport SBC using Ubuntu, do not use the command "apt-get install linux-headers-generic", this will install the Ubuntu kernel headers which do not match the Gateworks kernel.
To build an external kernel module you typically would use:
cd <external-module-directory>
make -C <path-to-prebuilt-kernel> M=$PWD
- This will build the modules located in the current directory, which is pointed to by the M parameter
- You can then manually copy or load your kernel modules or use the modules_install make target to install them to a specific path
- If cross-compiling make sure to define ARCH and CROSS_COMPILE env variables and have ${CROSS_COMPILE}gcc in your path
As an example consider the following files in a directory called 'hello-driver':
- hello.c:
#include <linux/init.h>
#include <linux/module.h>

MODULE_LICENSE("Dual BSD/GPL");

static int hello_init(void)
{
	printk(KERN_ALERT "Hello, world\n");
	return 0;
}

static void hello_exit(void)
{
	printk(KERN_ALERT "Goodbye, cruel world\n");
}

module_init(hello_init);
module_exit(hello_exit);
- Makefile:
obj-m := hello.o
- building:
make -C <path-to-prebuilt-kernel> M=$PWD
ls hello.ko
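A sketch of cross-compiling the same example and staging the module for the target (the kernel path is a placeholder and the ./install staging directory is arbitrary):
export ARCH=arm64
export CROSS_COMPILE=aarch64-linux-gnu-
make -C <path-to-prebuilt-kernel> M=$PWD modules
make -C <path-to-prebuilt-kernel> M=$PWD INSTALL_MOD_PATH=$PWD/install modules_install
find $PWD/install -name 'hello.ko'     # installed under lib/modules/<version>/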
References:
- https://www.kernel.org/doc/Documentation/kbuild/modules.txt
- https://www.oreilly.com/library/view/linux-device-drivers/0596005903/ch02.html
Building the Latest Kernel Modules for an older kernel - Linux Backports
Sometimes you may be using a kernel that is not as new as the mainline kernel at kernel.org and you desire a driver, or a newer version of a driver, from the latest kernel. The Linux Backports project aims at providing this capability.
In order to build this, you will need a kernel tree already built. In this example, I will assume the gateworks_fslc_3.14_1.0.x_ga kernel is being used.
- Change into the kernel tree directory, making sure it is built using the example shown in the building section above.
cd gateworks_fslc_3.14_1.0.x_ga
make gwventana_defconfig
etc...
- Grab the backports version you want to use and cd into it. This example will use the one currently used for our Yocto BSPs.
wget https://www.kernel.org/pub/linux/kernel/projects/backports/2016/01/22/backports-20160122.tar.gz
tar xzf backports-20160122.tar.gz
cd backports-20160122
- [OPTIONAL] You may have to patch the backports project, depending on the version being used
wget https://raw.githubusercontent.com/Gateworks/meta-gateworks/fido/recipes-kernel/compat-wireless/compat-wireless-all/0001-disable_kconf.patch
wget https://raw.githubusercontent.com/Gateworks/meta-gateworks/fido/recipes-kernel/compat-wireless/compat-wireless-all/0002-add-KLIB_CONFIG.patch
wget https://raw.githubusercontent.com/Gateworks/meta-gateworks/fido/recipes-kernel/compat-wireless/compat-wireless-all/add_db_txt.patch
patch -p1 < 0001-disable_kconf.patch # Used if no kconfig available (generally never needed)
patch -p1 < 0002-add-KLIB_CONFIG.patch # Used if config is not located in kernel directory
patch -p1 < add_db_txt.patch # Used if needing static wireless regulatory database
- Configure shell for cross compiler (please see the building section for more details)
export STAGING_DIR=../OpenWrt-Toolchain-imx6-for-arm_cortex-a9+neon-gcc-4.8-linaro_uClibc-0.9.33.2_eabi
TOOLCHAIN=toolchain-arm_cortex-a9+neon_gcc-4.8-linaro_uClibc-0.9.33.2_eabi
PATH=$PATH:$STAGING_DIR/$TOOLCHAIN/bin
export INSTALL_MOD_PATH=install
export ARCH=arm
export CROSS_COMPILE=arm-openwrt-linux-
export KLIB=..
export KLIB_BUILD=..
export KLIB_CONFIG=..
Notice that the KLIB variables are new; they refer to the location of the kernel source, the location where it is built, and the location of the config file.
- Make menuconfig to select drivers you want the latest of, then build.
make menuconfig
make modules
- [OPTIONAL] Locate the .ko's and copy them to the built kernel
cp --parent $(find -name "*.ko") ../
And that's it! If you tar the kernel up, make sure to remove the backports directory as each .ko will be double counted if you already copied the .ko's to the kernel tree. Please see the updating section for details on how to update a target's kernel.
Installing or Updating the kernel artifacts
Installation of a newly built kernel and other artifacts (headers, modules, dtbs) depends on the target platform and how you are booting it.
Kernel modules are located in a directory structure /lib/modules/(kernel_version) which matches the structure used when you install the modules with the 'modules_install' build target. If you created a tarball of the kernel modules you can install those on your target as such:
# download kernel specific tarball suitable for booted kernel
KERNEL=$(uname -r)
URL=http://192.168.1.165/tftpboot/ventana/
rm -f /tmp/linux-$KERNEL.tar.gz
wget -P /tmp $URL/linux-$KERNEL.tar.gz
# untar appropriately
if [ -d /etc/config ]; then
  # OpenWrt stores modules in a flat tree
  echo "OpenWrt Filesystem"
  ( cd /; tar -xvzf /tmp/linux-$KERNEL.tar.gz )
  find /lib/modules/$KERNEL -name "*.ko" -exec mv {} /lib/modules/$KERNEL \;
else
  ( cd /; tar -xvf /tmp/linux-$KERNEL.tar.gz --keep-directory-symlink )
fi
depmod -a
sync
- Note that OpenWrt uses a flat directory structure for kernel modules
For a simpler process for bringing in a new kernel tarball on top of Ubuntu (change the wget link accordingly):
wget http://dev.gateworks.com/ventana/images/linux-ventana.tar.xz
tar -C / -xvf linux-ventana.tar.xz --keep-directory-symlink
sync
reboot
# after reboot:
depmod -a
Booting the kernel from the network
The bootloader's job is to load the Linux kernel and execute it. Often during kernel development it's advantageous to boot the kernel from the network to create very quick edit/build/boot cycles.
This can be accomplished in a variety of ways depending on what you want to come from the network: the kernel, the device-tree blobs, the root filesystem, etc. During kernel development usually it's just the kernel and device-tree blobs that are needed to boot over the network, and using a flash based userspace filesystem is fine.
To do this using Ventana as an example:
# boot kernel+dtbs over network via tftp, using NAND filesystem
tftp ${loadaddr} ${prefix}uImage && \
tftp ${fdt_addr} ${prefix}${fdt_file2} && \
fdt addr ${fdt_addr} && \
fdt boardsetup && \
setenv bootargs console=${console},${baudrate} root=ubi0:rootfs ubi.mtd=2 rootfstype=ubifs ${video} ${extra} && \
bootm ${loadaddr} - ${fdt_addr}

# boot kernel+dtbs over network via tftp, using MMC filesystem
tftp ${loadaddr} ${prefix}uImage && \
tftp ${fdt_addr} ${prefix}${fdt_file2} && \
fdt addr ${fdt_addr} && \
fdt boardsetup && \
setenv bootargs console=${console},${baudrate} root=/dev/mmcblk0p1 rootfstype=ext4 rootwait rw ${video} ${extra} && \
bootm ${loadaddr} - ${fdt_addr}

# boot kernel+dtbs over network via tftp, using USB filesystem
tftp ${loadaddr} ${prefix}uImage && \
tftp ${fdt_addr} ${prefix}${fdt_file2} && \
fdt addr ${fdt_addr} && \
fdt boardsetup && \
setenv bootargs console=${console},${baudrate} root=/dev/sda1 rootfstype=ext4 rootwait rw ${video} ${extra} && \
bootm ${loadaddr} - ${fdt_addr}
Loading Kernel Modules
Kernel modules are dynamically loadable portions of the kernel that can be loaded or unloaded at runtime. It is generally not feasible to build every kernel driver statically into a modern Linux kernel, so instead only the options required to load further modules are built static.
The userspace applications insmod and rmmod are used to load and remove modules at runtime.
Most Linux based Operating Systems these days also contain support for automatically determining and loading module dependencies (modules required by the module you are trying to load) and for these you will also have:
- modprobe - loads a module and all of its dependencies
- depmod - builds the module dependency database
See Linux documentation for more information.
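A minimal sketch of these tools in use (the module names here are illustrative and depend on your kernel configuration):
insmod ./hello.ko        # load a module directly from a file
lsmod | grep hello       # list currently loaded modules
rmmod hello              # remove a loaded module
modprobe ath9k           # load a module plus its dependencies from /lib/modules/$(uname -r)
depmod -a                # rebuild the module dependency database after installing new modules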
Creating your own defconfig.
The config files contained in the Gateworks kernel repos generally enable nearly everything or almost nothing. This can be inconvenient if you're looking for a config that represents what's minimally needed while enabling what's necessary for the board to function as it's intended.
For Buildroot we have: gwventana_minimal_kernel_defconfig (for Ventana)
This config can be adapted to be used with Ubuntu by disabling CPIO and enabling module support. With this done other features can be added that are specific to your application such as radio or modem support. Configuring the kernel can be an iterative process making it beneficial to create a defconfig once the essential features required have been enabled.
- Make the changes required by your application using "make menuconfig", save and exit.
- Execute "make savedefconfig", this will create the file "./defconfig".
- Copy or move this file to "arch/arm/configs" or whatever is appropriate for your platform.
- Rename the file so the filename ends in "_defconfig", for example "filename_defconfig".
- When this config is required, run "make distclean" to wipe out the old config and any patches, then run "make <filename_defconfig>"
- If required use "make menuconfig" to make additional changes then save. If no changes are required proceed with your build as normal.
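Putting those steps together, a sketch of the sequence (the "myboard" name is a placeholder and the arm64 path should be swapped for arch/arm/configs on 32-bit targets):
make menuconfig                                    # adjust options for your application
make savedefconfig                                 # writes a minimal ./defconfig
cp defconfig arch/arm64/configs/myboard_defconfig  # install it under your platform's configs
make distclean
make myboard_defconfig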
Make HELP
Executing "make help" in your kernel build directory will provide a complete list of available targets with explanations. This list will also include available defconfigs.