Embedded Linux Security for IoT
Have questions about open-source security or securing your Linux system? We’ve written this guide on embedded Linux security for IoT devices for you.
- Ensuring software security
- Linux security: secure hardware
- Disabling JTAG
- Chain of Trust
- Linux kernel and Device Tree
- Data Partition
- Init Systems
- System software – network facing
- Get Embedded Linux development support
We all want a system with no security flaws (vulnerabilities). However, for an IoT system, unlike say a server, an attacker is more likely to get physical access to a device.
Therefore, anything confidential (user data, cryptographic keys, intellectual property) stored on the device should be secured/made unreadable.
Additionally, if an attacker were to gain control of the device or reverse engineer it, breaches might range from the trivial (lightbulbs, kettles) to the extreme (car-chargers, industrial control systems).
Ensuring software security
There is a wide range of possible vulnerabilities. For instance,
- Are we sending unencrypted data over UDP to perform commands on the device, and if so, does that matter in our use-case?
- Are there plain-text passwords hard-coded into the software? Ideally, a software vendor should be able to demonstrate its own security auditing.
However, things can be missed, and this is where vulnerability scanners come into play: they search for backdoors and known exploits against open ports.
We also recommend scanning all software that goes onto the device; this can’t be seen from a network scan and would normally be done when building the BSP.
This binary scanning identifies known vulnerabilities and exposures present in the software.
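If your BSP is built with Yocto (an assumption; other build systems have similar tooling), the build itself can run this scan. A minimal sketch for `conf/local.conf`:

```
# conf/local.conf: enable Yocto's CVE checker for every recipe in the build
INHERIT += "cve-check"
```

With this enabled, bitbake reports known CVEs against the package versions it compiles, so issues surface before the image ever reaches a device.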
Linux security: secure hardware
Secure hardware is very much context, platform and use-case specific. However, key considerations include:
Disabling the JTAG (Joint Test Action Group) port is advisable. JTAG exists for testing and programming a device during manufacture and is often unused in the field (JTAG software can be expensive).
Disabling it can be done as a one-time operation at the end of your production programming, but only after all tests have passed.
Almost all embedded devices use a debug UART during design and prototyping, but it shouldn’t be left enabled once the product ships to customers.
Hobbyists hunting for UART ports with oscilloscopes on embedded devices report finding them left enabled in routers and smart cameras. Sometimes this gives an attacker root access with no password.
At a minimum the UART’s input should be disabled, but many modern security requirements go further: any unused or non-vital port should be disabled entirely, so an attacker can’t glean information about the system by reading the boot logs.
Any other test- or debug-only ports should be disabled too. Anything that impedes physical access to the device also helps, though a few anti-tamper screws are unlikely to deter an attacker for long.
Installing a TPM (Trusted Platform Module), a small chip that performs cryptographic functions, is becoming common practice in secure hardware design.
It can protect secret keys, perform software and hardware verification, prevent cloning, and augment an existing chip’s encryption capabilities. Windows 11, for example, requires TPM 2.0 as standard.
The bootloader is the first thing that runs on your device. If it’s not secure, nothing is. It doesn’t necessarily need to be encrypted, but it must be signed.
What does signing mean? It means the bootloader came from your company and your company alone. It works through public-key cryptography, where you use a private key to sign a piece of software.
The public key is then used to validate that the signature was created with the corresponding private key. The public key is encoded onto the device in a one-time operation during manufacture.
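The sign-then-verify flow can be sketched with plain OpenSSL. File names here are illustrative, and real secure-boot schemes use SoC-specific signing tools, but the underlying cryptography is the same:

```shell
# Generate a keypair (the private key stays with the manufacturer).
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out private.pem
openssl pkey -in private.pem -pubout -out public.pem

# Sign the bootloader image with the private key.
openssl dgst -sha256 -sign private.pem -out u-boot.sig u-boot.bin

# On the device, only the public key is present; it can verify but not sign.
openssl dgst -sha256 -verify public.pem -signature u-boot.sig u-boot.bin
# prints "Verified OK" when the image is untampered
```

If even one byte of `u-boot.bin` changes, the final command fails, which is exactly the check a secure-boot ROM performs before handing control to the bootloader.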
If anyone tries to load their own software that hasn’t been signed, the device will refuse to run it.
A common misconception is that this means the software can never be changed. It can be changed, but the new software must also be signed.
This also applies to all boot modes: if you normally boot off an SD card, you can still boot via USB with secure boot enabled, as long as the software is correctly signed. That is the key point.
Take care of your private keys: if a private key is compromised, so is every device signed with it. This, however, is a general IT security concern.
Examples of chips that implement secure boot (by no means an exhaustive list) include the NXP i.MX6/7/8 ranges, the STMicroelectronics STM32MP1 range, and the TI AM335x range.
After you have your nice and secure bootloader, you then have the not-so-trivial task of securing every single piece of software that runs after it, in what is called a “chain-of-trust”.
U-Boot is the most common bootloader for Embedded Linux. It’s powerful and useful during design and prototype. That same flexibility and power needs to be constrained when you move to production.
For example, there is a command line interface that enables reads and writes to raw memory, which could theoretically open access to other devices.
There are also commands that allow network access and fuse programming, so in production the CLI must be disabled, along with the UART it runs on.
Another issue is the “U-Boot environment”, which, like environment variables on the Linux command line, is a set of key-value pairs.
These often define the boot flow (i.e. how and where to load the kernel and device tree, and what boot arguments to pass to the kernel), which is great for development but bad for security.
By default these are often stored in flash, unencrypted and protected only by a simple checksum.
An attacker with access to that memory could modify the environment and therefore change the boot flow.
In production, the U-Boot configuration option CONFIG_ENV_IS_NOWHERE should be set.
This ensures the environment is always loaded from defaults compiled into the C code, rather than from an unsecured memory chip.
Alternatively, you can force certain crucial variables to be restored to their defaults on boot, providing security but also allowing you to leverage the environment for more benign things such as boot counters, boot info, etc.
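In U-Boot’s defconfig, this hardening can be sketched as follows. Option names vary between U-Boot versions, so treat these as illustrative:

```
CONFIG_ENV_IS_NOWHERE=y       # environment comes from compiled-in defaults only
CONFIG_BOOTDELAY=-2           # autoboot immediately, with no abort-to-shell window
# CONFIG_CMDLINE is not set   # drop the interactive command line entirely
```

Removing the abort window and the command line together closes off the two easiest interactive entry points into the boot process.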
Chain of trust
All software must be cryptographically signed to ensure no malicious software is run, including the kernel, device tree, and RAM filesystem.
U-Boot supports loading and verifying these together as a single signed, unified image called a FIT image. Alternatively, the various components can be loaded and verified individually.
The approach taken depends on project targets and objectives, but everything that can affect the running software must be signed.
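A FIT image is described by an image tree source (`.its`) file. A minimal signed-configuration sketch might look like this; load addresses, file names, and the key name are placeholders for your own values:

```
/dts-v1/;
/ {
    description = "Signed kernel + device tree";
    images {
        kernel-1 {
            data = /incbin/("zImage");
            type = "kernel";
            arch = "arm";
            os = "linux";
            compression = "none";
            load = <0x80008000>;
            entry = <0x80008000>;
            hash-1 { algo = "sha256"; };
        };
        fdt-1 {
            data = /incbin/("board.dtb");
            type = "flat_dt";
            arch = "arm";
            compression = "none";
            hash-1 { algo = "sha256"; };
        };
    };
    configurations {
        default = "conf-1";
        conf-1 {
            kernel = "kernel-1";
            fdt = "fdt-1";
            signature-1 {
                algo = "sha256,rsa2048";
                key-name-hint = "prod";  /* matches prod.key / prod.crt */
                sign-images = "kernel", "fdt";
            };
        };
    };
};
```

Running `mkimage -f image.its -k keys/ -K u-boot.dtb -r image.fit` then builds and signs the FIT image, embeds the public key into U-Boot’s device tree, and (`-r`) marks signatures as required.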
Linux Kernel and device-tree
The kernel and device tree should be signed and verified by the bootloader. The kernel configuration should enable security-related features, including control groups (cgroups).
These allow access to resources to be restricted on a process-by-process basis.
Enable network filtering so a good firewall can be set up using iptables. Any configuration options that enable a MAC (mandatory access control) framework should be enabled, and Arm’s OP-TEE should be enabled for projects that use it.
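A kernel configuration fragment covering the features above might look like this; the exact option set depends on your kernel version and choice of MAC:

```
CONFIG_CGROUPS=y            # per-process resource isolation
CONFIG_NAMESPACES=y
CONFIG_NETFILTER=y          # firewall support for iptables/nftables
CONFIG_SECURITY=y           # LSM framework needed by any MAC
CONFIG_SECURITY_SELINUX=y   # or CONFIG_SECURITY_APPARMOR=y
CONFIG_TEE=y                # OP-TEE driver, if the project uses it
CONFIG_OPTEE=y
```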
In terms of kernel security patches, run a kernel with a long-term support (LTS) declaration, so that security flaws patched in the latest kernel are backported to the LTS version by the kernel community until it reaches end-of-life.
The 5.10 kernel, with an EOL of December 2026, is a good choice for long-term security patches.
The device tree should disable the debug UART (at least its input!).
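Assuming the debug console sits on `uart1` (the label is board-specific), the device tree change is a one-line status override:

```
/* Board .dts: mark the debug UART disabled so the kernel never probes it */
&uart1 {
    status = "disabled";
};
```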
The initramfs is a RAM-loaded filesystem. It is generally a small, temporary filesystem that mounts the real root filesystem, but it can be made as large as you like.
If you have enough memory, or your application is small and has few dependencies, you could put everything in the initramfs, and run from there indefinitely.
It must be signed, but as it is a single, read-only file, all software running from it is then trusted. If the entire system lives within the initramfs, the chain of trust is complete: all software running on the device is signed.
In many cases, however, when making use of open-source libraries with a limited amount of RAM, the system will run from some sort of disk (NAND, eMMC, etc.).
The initramfs will therefore have to verify the root filesystem. All RAM used by the initramfs is returned to the system when the “switch_root” program is called.
The root filesystem can also be encrypted. Various kernel modules allow transparent mounting of encrypted filesystems, depending on the limitations of the platform and filesystem.
The location of the keys is important. On some platforms a key can live inside the processor itself and never be revealed; on others, a TPM or some other secure-storage mechanism may hold the key.
If keys are stored incorrectly, the encryption can be compromised.
Ideally the root filesystem will be read-only, like SquashFS: a single file, and therefore easy to sign and verify.
At that point your chain of trust is complete, and any update to the filesystem is signed in its entirety.
If instead you need to do package-based updates (perhaps you have a slow link and a large root filesystem with many libraries, and can’t afford atomic full-rootfs updates in a single file), then your chain of trust becomes a lot more complicated.
You will need to verify every executable, script, and library, plus any config files that won’t or mustn’t change. Any infraction must stop the boot in its tracks.
(With this method, take extra care that files aren’t unexpectedly changed by the system during normal operation, which would trigger a false alarm.
It’s a good idea to use control groups to restrict which parts of the filesystem each process can write to. This is also good from a general security standpoint.)
Data Partition
Most applications, even with a read-only root filesystem, will need to store data.
This is normally done in a separate data partition. Make sure this partition is encrypted, especially if it contains sensitive data.
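With dm-crypt/LUKS, the encrypted data partition can be described declaratively. A sketch follows; the device node and key path are illustrative, and in practice the key should come from a TPM or other secure storage rather than a plain file on flash:

```
# /etc/crypttab: unlock the LUKS data partition at boot as /dev/mapper/data
data  /dev/mmcblk0p3  /etc/keys/data.key  luks

# /etc/fstab: mount the unlocked mapping
/dev/mapper/data  /data  ext4  defaults,noatime  0  2
```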
Init Systems
If the entire root filesystem is signed, there is no need to worry about individual scripts and services. If you need to verify everything that runs, however, then ensure that no one can change or add any script.
Even after checksumming or signing, the chain of trust may remain incomplete unless every directory these scripts could be linked from is covered.
Some processes can call scripts or plugins found in certain directories. With open-source software, also audit which additional scripts and services can be called and run with elevated permissions.
System software – network facing
SSH is a great tool, but it should be locked down: disable password-based login and use key-based authentication only. Better yet, disable it entirely (or at least run it on a non-standard port).
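In `/etc/ssh/sshd_config`, that lockdown is a few lines (the port number is arbitrary):

```
# Key-based logins only; no passwords, no root login, non-standard port
PasswordAuthentication no
KbdInteractiveAuthentication no
PubkeyAuthentication yes
PermitRootLogin no
Port 2222
```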
Remember that any attacker can read every line of front-end code straight from the web browser with no effort. So any front-end to back-end communication should be code-reviewed with security in mind, by several engineers.
Apply the same principles to any other network-facing service, and disable anything else that isn’t needed.
As with the web server, any custom code needs to be checked for security bugs. Get the more obvious stuff right first. Any comms or cloud communication needs to be checked and audited, and best practices should be followed.
There is a lot of documentation online for any given platform, API, or service on checking for security holes. It’s worth locking down the application’s access to most of the system using permissions and systemd sandboxing (namespaces and cgroups).
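With systemd, much of this sandboxing is declarative. A sketch of a hardened service unit follows; the service name and writable path are illustrative:

```
# /etc/systemd/system/myapp.service (illustrative name)
[Service]
ExecStart=/usr/bin/myapp
NoNewPrivileges=yes          # process can never gain privileges via setuid etc.
ProtectSystem=strict         # whole filesystem read-only for this service...
ReadWritePaths=/data         # ...except the data partition
PrivateTmp=yes
ProtectKernelModules=yes
ProtectKernelTunables=yes
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
```

Even if the application is compromised, these directives confine what the attacker can touch to the paths and sockets the service genuinely needs.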
If you really want to be hardcore, you can use something like SELinux to lock down the app and other parts of the system even further.
Get Embedded Linux Development Support
We hope you’ve found this guide on Linux security for IoT devices useful!
If your engineering team is looking for support with your Linux kernel, Linux distributions or any developments involving Linux operating systems, get in touch with our technical experts at ByteSnap, who have years of experience in this field. We’re ready to help.