Miscellaneous Linux Notes¶
Table of Contents¶
- Tools
- Preserving Environment Variables when using `sudo`
- Testing Filesystems with Write Tests and Read Tests
- Preserving Positional Line Numbers in Text
- MTU and Using Ping without Fragmentation
- Finding the default network interface
- Checking Active Connections
- Adding Users Through /etc/passwd
- Playing Games on Linux
- Update to Newer Versions of Packages and Commands
- Specify a Certain Version of a Package
- Builtin Colon (`:`) Command
- Find > ls
- SCP
- Getting the Current System's Kernel Version
- sudoers and sudoing
- Encrypt a file with Vi
- Run a script when any user logs in.
- Run a script as another user
- Show the PID of the shell you're running in
- List all aliases, make an alias, and remove an alias. Make it persistent.
- `for` Loops
- Set a variable of one data point
- Run a command repeatedly, displaying output and errors
- Make your system count to 100
- Loop over an array/list from the command line
- Loop over an array/list in a file
- Test a variable against an expected (known) value
- Get the headers from an HTTP request
- List the number of CPUs
- CPU Commands
- Check if this system is physical or virtual
- RAM Commands
- Connect to another server
- List Users on a System
- Hard Links vs Symbolic Links
- Different Colors for `less` Output
- MEF (Metro Ethernet Forum)
- Finding Environment Variables
- Important Linux Commands for Sysadmins
- Network Firewalls vs. WAFs
- Bash Parameter Transformation
- Checking the Operating System with the `OSTYPE` Variable
- Patching Linux Systems
Tools¶
Cybersecurity Tools to Check Out¶
- `pfsense` - A tool for authentication.
- `fail2ban` - Ban IPs that fail to log in too many times.
- `OpenSCAP` - Benchmarking tool.
- `nmap` - Maps the server if you don't have permissions. Good for external mapping without permissions.
- `ss` or `netstat` - Internal tools, used locally with elevated permissions for viewing connections. Used this way, `ss -ntulp` is better than `nmap`.
- `sleuthkit` - File and filesystem analysis/forensics toolkit.
Other Tools to Check Out¶
- `getent`
- `parallel` (GNU Parallel) - Shell tool for executing jobs in parallel using one or more machines.
    - A job is typically a single command or small script that has to be run for each line in the input.
    - Typical input is a list of either files, hosts, users, or tables.
- KeePassXC - Password manager or safe. Locked with one master key or key-disk.
- traefik - HTTP reverse proxy and load balancer that makes deploying microservices easy.
- Autodesk `vault` - Product data management (PDM) tool. Integrates with CAD systems. Autodesk product.
- Ncat
- sleuthkit - File and filesystem analysis/forensics toolkit.
- ranger - A console file manager (vi hotkeys).
- btop - Customizable TUI resource monitor. See the GitHub page.
    - See the example `btop.conf`.
    - Goes in `$XDG_CONFIG_HOME/btop/btop.conf` or `$HOME/.config/btop/btop.conf`.
Preserving Environment Variables when using `sudo`¶
Make `sudo` see variables from the current environment.
When using `sudo`, you can preserve environment variables from the current environment with the `-E` flag.
┎ kolkhis@homelab:~
┖ $ export testvar=smth
┎ kolkhis@homelab:~
┖ $ sudo bash -c 'echo $testvar'
┎ kolkhis@homelab:~
┖ $ sudo -E bash -c 'echo $testvar'
smth
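As an alternative to passing `-E` every time, specific variables can be whitelisted in the sudoers policy. A sketch (edit with `visudo`; the variable names are only examples):

```
# /etc/sudoers
Defaults env_keep += "http_proxy https_proxy"
```

With this in place, the listed variables survive `sudo` without needing `-E`.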
A command for programmatically checking servers (prod-k8s in this example):

```bash
for server in $(grep -i prod-k8s /etc/hosts | awk '{print $NF}'); do echo "I am checking $server"; ssh "$server" 'uptime; uname -r'; done
```
Why Not to Use `#!/usr/bin/env bash`¶
Using `/usr/bin/env bash` in the shebang line is common, especially for people who use MacOS. MacOS uses `zsh` as its primary shell, and it only has Bash v3 on the system. Bash v5 can be installed via `homebrew`, but it won't live at `/bin/bash` by default. So this can be useful if your scripts run on multiple systems and you don't know where `bash` is installed.

The argument that `/usr/bin/env bash` is portable only really holds on Mac systems, and possibly virtual environments (e.g., Python scripts use `/usr/bin/env python3` to use virtual environments - `venv`).

If you're in a controlled environment (servers, embedded systems, initramfs), it's better to hardcode `/bin/bash` than to use `/usr/bin/env bash`.

Here's why not to use `/usr/bin/env bash`:
- `PATH` Manipulation: `env` makes your script susceptible to `PATH` injection attacks.
    - If someone runs your script in an environment with a modified `$PATH`, `env` will grab the first `bash` it finds. This can be:
        - A malicious or broken `bash` binary.
        - A different (incorrect) version of bash.

- SUID/SGID scripts can be way more dangerous.
    - Since `env` relies on a user-controlled `$PATH`, this opens up privilege escalation attacks if a malicious user puts a fake `bash` earlier in the `PATH`.
    - Though, most modern systems disable SUID shell scripts because they're inherently unsafe. But this is still important to know.

- Boot/recovery environments: In `initramfs`/recovery shells/minimal containers/single-user mode:
    - `/usr/bin/env` may not exist.
    - Even if it exists, `bash` might not be in `$PATH`.

- Performance overhead:
    - Calling `/usr/bin/env` introduces an extra fork & `exec` (a tiny bit more memory/CPU usage).
        - First it runs `/usr/bin/env`
        - Then `env` searches `$PATH`
        - Then it runs `bash`
    - This is negligible for short scripts but can matter for many scripts or tight loops.

- Non-portable on some systems. Some systems might:
    - Not have `/usr/bin/env`
    - Have `env` in a different location (`/bin/env` on some weird Unix systems)

- Problems passing arguments to bash. Some edge cases:
    - If you need to pass arguments to the interpreter, the `env` method doesn't allow it cleanly. This will fail:

      ```bash
      #!/usr/bin/env bash -e
      ```

      `env` treats `bash -e` as a single argument and will not pass `-e` properly.
    - You can not safely use `env` if your shebang line needs options/flags.

- Breakage in chrooted/minimal environments.
    - If you run a script inside a chroot, container, or jailed environment:
        - `/usr/bin/env` may not exist.
        - You might only have `/bin/bash` and no `env` binary.

- Harder to debug. When something breaks, `env` adds an extra layer of abstraction.
    - Now you have to debug not only your script but also which `bash` was actually called.
    - If you're debugging, the hardcoded path is clearer and more predictable.

- Not POSIX compliant. `/usr/bin/env` is not POSIX-mandated.
    - If you're writing scripts for POSIX systems (AIX, Solaris, old Unix, embedded systems, BusyBox environments), they might not have `/usr/bin/env` (maybe `/bin/env`).

The consensus:

- If the script is personal, portable, and user-level, `/usr/bin/env bash` is fine.
- If it's production, automation, infrastructure, root-owned, or in a secure environment, use `/bin/bash`.
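The `PATH` point above can be demonstrated harmlessly: drop a fake `bash` early in `PATH` and check which binary a lookup resolves to (the temp directory is purely for illustration):

```shell
# Create a harmless fake `bash` in a temp directory.
tmpdir=$(mktemp -d)
printf '#!/bin/sh\necho "fake bash"\n' > "$tmpdir/bash"
chmod +x "$tmpdir/bash"

# With the temp dir prepended to PATH, this is the `bash` that
# `/usr/bin/env bash` would find and execute.
found=$(PATH="$tmpdir:$PATH" command -v bash)
echo "$found"
```

The fake interpreter wins the lookup, which is exactly the injection risk described above. Remove `$tmpdir` when done.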
The Three "Main" Families of Linux¶
There are three major families of Linux distributions:
- Red Hat
- SUSE
- Debian
Red Hat Family Systems (incl CentOS, Fedora, Rocky Linux)¶
Red Hat Enterprise Linux (RHEL) heads up the Red Hat family.
- The basic version of CentOS is virtually identical to RHEL.
- CentOS is a close clone of RHEL, and has been a part of Red Hat since 2014.
- Fedora is an upstream testing platform for RHEL.
- Supports multiple hardware platforms.
- Uses `dnf`, an RPM-based package manager, to manage packages.
- RHEL is a popular distro for enterprises that host their own systems.
SUSE Family Systems (incl openSUSE)¶
SUSE (SUSE Linux Enterprise Server, or SLES) and openSUSE are very close to each
other, just like RHEL/CentOS/Fedora.
- SLES (SUSE Linux Enterprise Server) is upstream for openSUSE.
- Uses `zypper`, an RPM-based package manager, to manage packages.
- Includes `YaST` (Yet Another Setup Tool) for system administration.
- SLES is widely used in retail and other sectors.
Debian Family Systems (incl Ubuntu and Linux Mint)¶
Debian provides the largest and most complete software repository of any Linux distribution.
- Ubuntu tries to provide a compromise of long term stability and ease of use.
- The Debian family is upstream for several other distros (including Ubuntu).
- Ubuntu is upstream for Linux Mint and other distros.
- Uses `apt`, a DPKG-based package manager, to manage packages.
- Ubuntu is widely used for cloud deployments.
Testing Filesystems with Write Tests and Read Tests¶
With bash, you can use a `for` loop with the brace notation `{1..10}` to loop from 1 through 10.
For POSIX compliance, use `seq` for the loop.
- Write test:

  ```bash
  for i in {1..10}; do time dd if=/dev/zero of=/space/testfile_$i bs=1024k count=1000 2>&1 | tee -a /tmp/speedtest1.basiclvm; done
  ```

    - `dd` reports its stats on stderr, so redirect with `2>&1` for `tee` to capture them.
- Read tests:

  ```bash
  for i in $(seq 1 10); do time dd if=/space/testfile_$i of=/dev/null; done
  ```

    - You don't need to specify the `bs` (blocksize) or `count` for read tests.
- Cleanup:

  ```bash
  for i in $(seq 1 10); do rm -rf /space/testfile_$i; done
  ```
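A smaller, self-contained version of the write test (temp paths here, not the `/space` mount used above):

```shell
# Write three 1 MiB test files, logging dd's timing/throughput output.
testdir=$(mktemp -d)
for i in $(seq 1 3); do
    dd if=/dev/zero of="$testdir/testfile_$i" bs=1M count=1 2>> "$testdir/speed.log"
done
ls -l "$testdir"
```

Since `dd` prints its stats to stderr, the `2>>` redirect is what actually fills the log.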
Preserving Positional Line Numbers in Text¶
Use `nl` to preserve line numbers:

```bash
cat somefile | nl | grep "search for this"
```

You can also use the `grep -n` option:

```bash
cat somefile | grep -n "search for this"
```
MTU and Using Ping without Fragmentation¶
```bash
ping -M do <target>
```

- `-M do` makes sure the packet does not get fragmented.
- `ping`: The basic `ping` command is used to test the connectivity between two devices on a network by sending ICMP echo request packets to the target and measuring the time it takes to receive an echo reply (response).
- `-M`: This option sets the path MTU discovery mode, which controls how the `ping` command deals with IP fragmentation.
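When probing MTU with `ping -s`, remember the payload size excludes the 20-byte IP header and the 8-byte ICMP header. For a standard 1500-byte Ethernet MTU:

```shell
mtu=1500
payload=$((mtu - 20 - 8))   # subtract IP header (20) + ICMP header (8)
echo "$payload"             # → 1472

# Then (with network access) test the path without fragmentation:
# ping -M do -s "$payload" -c 4 <target>
```

If the ping fails with "message too long," the path MTU is smaller; lower the payload until it succeeds.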
MTU (Maximum Transmission Unit)¶
MTU stands for Maximum Transmission Unit, which is the largest size of a packet that can be sent over a network link without fragmentation.
- The MTU for a network interface can be found using `ifconfig` or `ip`:

  ```bash
  ifconfig
  ip addr  # or ip a
  ```

- These commands list all the network interfaces, their statuses, and other info.
- The network interface will usually be named something like `eth0`, `en0`, `enp0`, etc.
- `ifconfig` may not be installed by default on modern Linux distributions.
Finding the default network interface¶
To specifically find the interface being used for outbound traffic (like connecting
to the internet), you can check the default route:
```bash
ip route
# or
ip r
```

The default interface will appear on the `default` line, named something like `enp0s31f6`, or something like `eth1` on older systems.

- `en`: Indicates that it's an ethernet connection.
- `p0s31f6`: This is the info on the ethernet port and bus being used.
Checking Active Connections¶
You can use netstat
or ss
to check active network connections, as well as which
network interface is being used.
```bash
netstat -i
ss -i
```
Adding Users Through /etc/passwd¶
Add a new line for the user:¶
Each line in the /etc/passwd
file represents a user account.
The format of each line in /etc/passwd
is as follows:
```
username:password:UID:GID:GECOS:home_directory:shell
```

- `username`: The username for the new user.
- `password`: The encrypted password for the user (you can leave this field empty to disable password login).
- `UID`: The user ID for the new user.
- `GID`: The primary group ID for the new user.
- `GECOS`: Additional information about the user (such as full name or description).
- `home_directory`: The home directory for the new user.
- `shell`: The login shell for the new user.
- Add the new user by editing `/etc/passwd`:

  ```bash
  sudo vi /etc/passwd
  # then add:
  # user:password:UID:GID:GECOS:/home/newuser:/bin/bash
  ```

    - `sudo` is required for writing to this file.
    - Save and close the file after adding the user information.
    - The shell needs to be the path to the actual binary for the shell.
- Create the user's home directory: If you specified a home directory for the new user, you need to manually create it using the `mkdir` command.

  ```bash
  sudo mkdir /home/newuser
  ```
- Set permissions for the home directory: After creating the home directory, you may need to set the appropriate ownership and permissions to allow the new user to access it.

  ```bash
  sudo chown newuser:newuser /home/newuser
  ```
- Set the user's password (if applicable): If you left the password field empty in the `/etc/passwd` file, you may need to set a password for the new user using the `passwd` command.

  ```bash
  sudo passwd newuser
  ```
-
Test the new user account: After completing those steps, you can test the new user account by logging in with the username and password (if applicable) and verifying that the user has access to the home directory.
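The `/etc/passwd` line from the first step can also be generated with `printf` so the field order is never mistyped (`newuser` and UID/GID `1001` are example values):

```shell
username=newuser
uid=1001
gid=1001
gecos="New User"

# x in the password field means the real hash lives in /etc/shadow.
printf '%s:x:%s:%s:%s:/home/%s:/bin/bash\n' \
    "$username" "$uid" "$gid" "$gecos" "$username"
```

In practice, `useradd -m newuser` does all of these steps (passwd entry, group, home directory) in one command.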
Playing Games on Linux¶
Use Proton - a version of WINE made specifically for gaming.
Update to Newer Versions of Packages and Commands¶
sudo update-alternatives --config java
sudo update-alternatives --config javac
Specify a Certain Version of a Package¶
```bash
sudo apt install <package>=<version>
```

- Specify the version after the `=` sign.

Example:

```bash
sudo apt install mysql-server=8.0.22
```
Builtin Colon (`:`) Command¶

```bash
: [arguments]
```
- No effect; the command does nothing beyond expanding arguments and performing any specified redirections.
- The return status is zero (which evaluates to
true
in bash/sh).
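Two common uses of `:` follow from those properties: expanding a parameter purely for its side effect, and serving as an always-true condition:

```shell
unset name
: "${name:=World}"   # := assigns the default if name is unset/empty;
                     # `:` discards the expanded result
echo "$name"         # → World

# `:` as an infinite-loop condition (always returns 0/true):
while :; do
    break
done
```

The assignment trick works because `:` still performs all expansions on its arguments even though it does nothing with them.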
Find > ls¶
Use find
instead of ls
to better handle non-alphanumeric filenames.
SCP¶
Usage: scp <options> source_path destination_path
SCP Commands¶
# copying a file to the remote system using scp command
scp file user@host:/path/to/file
# copying a file from the remote system using scp command
scp user@host:/path/to/file /local/path/to/file
# copying multiple files using scp command
scp file1 file2 user@host:/path/to/directory
# Copying an entire directory with scp command
scp -r /path/to/directory user@host:/path/to/directory
SCP Options¶
| Option | Description |
|---|---|
| `-r` | Transfer a directory recursively |
| `-v` | See the transfer details |
| `-C` | Copy files with compression |
| `-l 800` | Limit bandwidth to 800 Kbit/s |
| `-p` | Preserve the original attributes of the copied files |
| `-P` | Connection port |
| `-q` | Quiet mode (suppress output) |
Getting the Current System's Kernel Version¶
Get Kernel version
ls /boot
dmesg | head
uname -a
sudoers and sudoing¶
- The list of sudoers (sudo users) is found in `/etc/sudoers`.
- `sudoers` uses per-user timestamp files for credential caching.
- Once a user has been authenticated (by running `sudo [cmd]` and entering their password), a record is written.
    - The record contains:
        - The user-ID that was used to authenticate
        - The terminal session ID
        - The start time of the session leader (or parent process)
        - A time stamp (using a monotonic clock if one is available)
- The authenticated user can sudo without a password for 15 minutes, unless overridden by the `timestamp_timeout` option.
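The 15-minute default can be changed with `timestamp_timeout` in the sudoers policy (edit with `visudo`; the value here is only an example):

```
# /etc/sudoers
Defaults timestamp_timeout=5
```

A value of `0` prompts for a password every time, a negative value never expires, and `sudo -k` invalidates the cached credentials immediately.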
Encrypt a file with Vi¶
Vi/vim only.
:X
It's said to be extremely insecure, and was removed from Neovim due to implementation flaws and vulnerabilities.
Use GPG instead.
Run a script when any user logs in.¶
- To run a script automatically when ANY user logs in, add it to the `/etc/profile` file, or add it as a shell (`.sh`) script in the `/etc/profile.d` directory.
- `/etc/profile` is a POSIX shell script.

```sh
# /etc/profile: system-wide .profile file for the Bourne shell (sh(1))
# Add anything here to be run at login.
printf "%s: %s\n" "$DATE" "$USER" >> /home/kolkhis/userlogs.log
# This will timestamp the current user in userlogs.log
```

- `/etc/profile.d/` is a directory containing `.sh` scripts. Add a script here and it will be run when any user logs in.

```sh
#!/bin/sh
printf "%s: %s\n" "$DATE" "$USER" >> /home/kolkhis/userlogs.log
```
Run a script as another user¶
su -c '/home/otheruser/script' otheruser
# runs `script` as 'otheruser'
su -c '/home/otheruser/script'
# runs 'script' as root if no username is provided
Show the PID of the shell you're running in¶
echo $$
List all aliases, make an alias, and remove an alias. Make it persistent.¶
alias -p # or just `alias`. Print list of aliases
alias c="clear" # Creates the alias
unalias c # Removes the alias
echo 'alias c="clear"' >> ~/.bash_aliases # Makes it persistent.
`for` Loops¶
Create files named `file<number>`, skipping every even number (001-199)¶
Using `seq`¶
# Using `seq`
for i in $(seq 1 2 200); do
touch file${i}
done
Using Brace Expansion (Parameter Expansion)¶
# Bash 4 Exclusive: Brace expansion
for i in {1..200..2}; do
touch file${i}
done
Using C-style For-Loops¶
# C-style loop
for (( i=1; i<200; i+=2 )); do
    touch file${i}
done
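To get the zero-padded names the heading implies (`file001` ... `file199`), format the counter with `printf -v` (the temp directory just keeps this example self-contained):

```shell
workdir=$(mktemp -d)
for i in $(seq 1 2 199); do
    printf -v fname 'file%03d' "$i"   # %03d pads to three digits: 001, 003, ...
    touch "$workdir/$fname"
done
ls "$workdir" | head -n 3
```

`printf -v var` stores the formatted string in `var` instead of printing it, which keeps the loop body clean.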
Set a variable of one data point¶
VAR=somevalue
VAR="somevalue"
VAR="$(echo somevalue)"
Run a command repeatedly, displaying output and errors¶
- By using `watch`, you can execute a program periodically, showing output fullscreen:

  ```bash
  watch -n 2 uptime
  ```

    - Using the `-n, --interval <seconds>` option, you can specify how often the program is run.

- Using `entr` - this only runs a command when a given file changes:

  ```bash
  entr bash -c 'clear; date; echo "File has changed."' < <(echo file.sh)
  ```

    - Every time the `file.sh` file changes, `entr` will run bash with the commands `clear`, `date`, and `echo "File has changed."`.
    - The filename was fed via stdin with process substitution (`< <(echo file.sh)`).
    - Using `entr -c` will eliminate the need for the `clear` command.
Make your system count to 100¶
This can either be done with a C-style loop, the seq
command, or "brace expansion."
Respectively:
```bash
for (( i=1; i<=100; i++ )); do
    echo $i
done

seq 100

echo {1..100}
```
Loop over an array/list from the command line¶
```bash
# Predefine an array. Quote the glob so the shell passes it to find unexpanded.
mapfile -t list < <(find . -name '*.py')

# Looping over the array itself
for n in "${list[@]}"; do printf "Current item: %s\n" "$n"; done

# Using the length of the array (${#list[@]}) with C-style indexing
for (( n=0; n<${#list[@]}; n++ )); do echo "${list[$n]}"; done
```
Loop over an array/list in a file¶
```bash
while read -r linevar; do
    echo "Current item:"; echo "$linevar";
done <file

# or

while read -r line; do
    echo "Current item: $line";
done < <(ls -alh)
```
In this case, it's looping over the lines in a file.
The first example uses an actual file, the second example uses process substitution to take the output of a command and treat it like a file.
or:

```bash
while read -r file; do
    echo "Current item:";
    echo "$file";
done < <(find . -name '*.py')
```

or:

```bash
declare -a FILES
read -r -d '' -a FILES < <(find . -name '*.py')
for file in "${FILES[@]}"; do
    printf "Current item: %s\n" "$file";
done
```
Test a variable against an expected (known) value¶
if [[ "$var" == "known value" ]]; then
echo "$var is known value"
fi
Get the headers from an HTTP request¶

```bash
curl -i hostname.com:port   # include the response headers in the output
curl -I hostname.com:port   # fetch only the headers (HEAD request)
```
List the number of CPUs¶
CPU info is stored in `/proc/cpuinfo`:

```bash
cat /proc/cpuinfo | grep cores
```

You can also use `lscpu`. This displays all the information you'd want about your system's CPU.

```bash
lscpu | grep "CPU(s)"
```
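If you only want the number itself, these print a bare count with no grepping needed:

```shell
nproc                        # number of online processing units
getconf _NPROCESSORS_ONLN    # same idea, via getconf
```

`nproc` honors CPU affinity, so in a container restricted to a subset of cores it reports what the process can actually use.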
CPU Commands¶
Find the manufacturer of the CPU¶
lscpu | grep "Vendor ID"
Find the architecture of the CPU chip¶
lscpu | grep "Architecture"
Check the CPU speed in MHz¶
lscpu | grep "MHz"
Check if this system is physical or virtual¶
```bash
dmidecode -s system-manufacturer
```

- If it's a physical system, you'll see the manufacturer (e.g., `Dell Inc.`).
- If it's a virtual system, you'll see some sort of virtualization:
    - `QEMU`
    - `innotek GmbH` (VirtualBox)
- `dmidecode` is for dumping a computer's `DMI` (`SMBIOS`) table contents in a human-readable format.
    - `SMBIOS` stands for "System Management BIOS", while `DMI` stands for "Desktop Management Interface."
- This table contains a description of the system's hardware components, and other useful pieces of information such as serial numbers and BIOS revision.
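A root-free alternative sketch: most hypervisors set the `hypervisor` CPU flag, which any user can read from `/proc/cpuinfo` (unlike `dmidecode`, which needs root):

```shell
# "virtual" if the hypervisor CPU flag is present, "physical" otherwise.
if grep -q '^flags.*\bhypervisor\b' /proc/cpuinfo; then
    echo "virtual"
else
    echo "physical"
fi
```

On systemd machines, `systemd-detect-virt` goes further and prints the hypervisor name (e.g., `kvm`, `oracle`) or `none`.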
RAM Commands¶
Check how much RAM we have¶
Use the `free` command to see RAM statistics:

```bash
free -h
free -h | awk '{print $2}' | sed -E 's/^used/total memory/'
```

- `awk '{print $2}'` will show the `total` column, but because the header row is offset by one field, the header printed is `used`. Passing it through `sed` can fix that.
- The `free` field shows unused memory (`MemFree` and `SwapFree` in `/proc/meminfo`):

  ```bash
  cat /proc/meminfo | grep -E "MemFree|SwapFree"
  ```

- `free -h` shows usage in a base-2/binary format:
    - `B`: bytes - 8 bits each
    - `Ki`: kibibytes - 1024 bytes
    - `Mi`: mebibytes - 1024 kibibytes
    - `Gi`: gibibytes - 1024 mebibytes
    - `Ti`: tebibytes - 1024 gibibytes
Check how much RAM we are using¶
Using free -h
, just like checking total RAM.
free -h | awk '{print $3}' | sed -E 's/^free/\nRAM in Use/'
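To turn that into a single percentage instead of fishing out columns: on the `Mem:` row of `free`, `$2` is total and `$3` is used, so awk can do the division directly:

```shell
free | awk '/^Mem:/ { printf "%.1f%% of RAM in use\n", $3 / $2 * 100 }'
```

Anchoring on `/^Mem:/` skips the header and `Swap:` rows, so no `sed` cleanup is needed.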
Swap Memory¶
Swap memory is writing down to a block device. It's like a paging file. In a modern Linux system, you want RAM to be used and not SWAP.
Connect to another server¶
```bash
nc -vz <host> <port>   # Check that a port on the server is reachable (-z: no data sent)
nmap -sS <host>        # TCP SYN scan of the server (requires root)
```
List Users on a System¶
- To list all the user accounts on the system, you can run:

  ```bash
  cat /etc/passwd | awk -F: '{ print($1) }'
  # or to see the users with shell access
  cat /etc/passwd | grep 'bash'
  grep -v "nologin" /etc/passwd | awk -F: '{ print($1) }'
  ```
- You can also list all the users on a system with `cut`:

  ```bash
  cut -d: -f1 /etc/passwd
  ```

    - `-d:` sets the delimiter to `:`
    - `-f1` tells `cut` to print the first field (the username)
You can also achieve this with
compgen -u
compgen -u | column
-
This command outputs possible completions depending on the options.
Hard Links vs Symbolic Links¶
Symbolic links (sometimes called soft links or symlinks) and hard links are both pointers to where a file's data lives on disk. They both allow you to access a file.
The main difference is that hard links keep working when the original file is deleted. Symbolic links break when this happens.
Hard links cannot exist across filesystem boundaries. A symbolic link can.
Hard links will only work within the same block device.
Hard links point to the file's data (its inode) itself, whereas a symlink points to the file path.
-
Hard Links:
- Points to the same inode (file content).
- Cannot link to directories.
- Still works even if the original file is deleted.
- Can't span across different filesystems.
- This means you can't hard link a file on another block device.
-
Symlinks (Symbolic Links):
- Points to the file path.
- Can link to directories.
- Breaks if the original file is deleted or moved.
- Can span across different filesystems.
Hard Links and File Deletion¶
When you delete a file in Linux, its data is not actually erased.
The directory entry pointing to the file's inode is removed. The data is still there on the disk.
You'd need to manually overwrite/stripe over the data to remove it.
Without manually overwriting the data, forensic tools would be able to recover the data.
This is why hard links can exist when the original file is deleted. They're still
pointing to a valid block of memory on the disk.
If both the original file and hard link are deleted, the data will still be there on
the disk, but it will never be recovered through normal means. There are forensic
tools that exist that can read the disk and recover the data.
- An inode (index node) is a metadata structure that stores information about files and directories, like ownership, permissions, and pointers to the file's data blocks.
- Inodes do not store filenames. Those are managed by directory structures.
- Every file has an inode. Hard links share inodes, while symbolic links do not.
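The inode-sharing behavior is easy to verify (temp paths for illustration):

```shell
tmp=$(mktemp -d)
echo "hello" > "$tmp/original"

ln "$tmp/original" "$tmp/hardlink"     # hard link: shares the inode
ln -s "$tmp/original" "$tmp/symlink"   # symlink: stores only the path

stat -c '%i %n' "$tmp/original" "$tmp/hardlink"   # same inode number twice

rm "$tmp/original"
cat "$tmp/hardlink"                       # still prints "hello"
cat "$tmp/symlink" 2>/dev/null || echo "symlink is broken"
```

After removing `original`, the hard link still resolves (the inode's link count is still nonzero), while the symlink dangles.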
Different Colors for `less` Output¶

- `LESS_TERMCAP_**`: Allows you to specify colors for different parts of terminal output.
- You need to use the `less -R` option to enable this.

TODO: Experiment with using the formats `"\e[10m"` and `$'\e[10m'`
export LESS='-R'
export LESS_TERMCAP_md=$'\e[33m' # Start Bold
export LESS_TERMCAP_mb=$'\e[4m' # Start Blinking
export LESS_TERMCAP_us=$'\e[10m' # Start Underline
export LESS_TERMCAP_so=$'\e[11m' # Start Standout
export LESS_TERMCAP_me="" # End all modes
export LESS_TERMCAP_se="" # End Standout
export LESS_TERMCAP_ue="" # End underline
Termcap String Capabilities¶
man://termcap 259
¶
Termcap stands for "Terminal Capability".
It's a database used by Terminal Control Libraries (e.g., ncurses
) to manage colors
and other terminal features.
You can use LESS_TERMCAP_**
to add colors to less
output in the terminal (where
**
are two letters that indicate a mode).
Some of the modes that you can use to colorize output:
- `so`: Start standout mode
- `se`: End standout mode
- `us`: Start underlining
- `ue`: End underlining
- `md`: Start bold mode
- `mb`: Start blinking
- `mr`: Start reverse mode
- `mh`: Start half-bright mode
- `me`: End all "modes" (like `so`, `ue`, `us`, `mb`, `md`, and `mr`)
E.g.:

```bash
export LESS="-FXR"                # Add default options for less
export LESS_TERMCAP_mb=$'\e[35m'  # magenta
export LESS_TERMCAP_md=$'\e[33m'  # yellow
export LESS_TERMCAP_me=""         # "\e[0m"
export LESS_TERMCAP_se=""         # "\e[0m"
```
Other Options for `less`¶

```bash
export LESS="-FXR"
```

- `-F` causes `less` to automatically exit if the entire file can be displayed on the first screen.
- `-X` stops `less` from clearing the screen when it exits.
    - Disables sending the termcap initialization and deinitialization strings to the terminal.
    - The initialization string is sent when `less` starts. This causes the terminal to be cleared. The deinitialization string does the same thing.
MEF (Metro Ethernet Forum)¶
MEF (the Metro Ethernet Forum) defines the carrier Ethernet standards that let providers connect multiple sites and devices over a single Ethernet service.
Finding Environment Variables¶
Print out environment variables line-by-line:
env
printenv
Important Linux Commands for Sysadmins¶
Getting general information about the system¶
Use lshw
to list the hardware on the system
lshw # List the hardware on the system
lscpu # List the CPU information
uname -a # Get information about the system (kernel, version, hostname, etc)
who # Shows who is logged into the system
w # More detailed version of 'who'
last # Show the last users to log into the system (prints in reverse)
cat /etc/*release # Get information about the operating system
cat /proc/cmdline # Get the kernel command line arguments (boot parameters, boot image)
ethtool # Show info on the network interfaces
ip a # Show info on the network interfaces
ip r # Show the routing table (shows network gateway)
lsblk # List the block devices on the system (disk info)
blkid # Show the UUIDs of the block devices (or a specific block device)
ps # Show running processes on the system
pstree # Show the processes running in a tree
df -h # Show disk usage (-h is human-readable)
free -h # Show memory usage
du -sh /dir # Show the disk usage of a specific directory
Package Management commands¶
For Debian-based systems (like Ubuntu):¶
apt update # Update package lists
apt upgrade # Upgrade all packages
apt install package # Install a package
apt remove package # Remove a package
dpkg -i package.deb # Install a .deb package manually
dpkg -r package # Remove a package
dpkg -l # List all installed packages
For Red Hat-based systems (like CentOS, Fedora):¶
# Package updates and installations
dnf update # Update all packages to the latest available versions
dnf upgrade # Upgrade installed packages, replacing old versions
dnf install package # Install a package
dnf remove package # Remove a package
dnf reinstall package # Reinstall a package
dnf downgrade package # Downgrade a package to an earlier version
# Searching and querying
dnf search package # Search for a package in repositories
dnf info package # Get detailed information about a package
dnf list installed # List all installed packages
dnf list available # List available packages in the enabled repos
dnf list package # Show details about a specific package
# Managing repositories
dnf repolist # List enabled repositories
dnf repolist all # Show all available repositories
dnf config-manager --enable repo_id # Enable a repository
dnf config-manager --disable repo_id # Disable a repository
# Cleaning up package cache
dnf clean all # Clean all cached data
dnf autoremove # Remove unneeded dependencies
# Working with .rpm files
rpm -ivh package.rpm # Install an .rpm package manually
rpm -Uvh package.rpm # Upgrade an installed .rpm package
rpm -e package # Remove a package
rpm -qa # List all installed packages
rpm -q package # Check if a package is installed
rpm -ql package # List files installed by a package
rpm -qc package # List configuration files of a package
# Dependency and package verification
dnf deplist package # Show package dependencies
rpm -V package # Verify installed package integrity
# Transaction history and rollback
dnf history # Show transaction history
dnf history info transaction_id # Show details of a specific transaction
dnf history undo transaction_id # Rollback a transaction
# Group operations
dnf group list # List available package groups
dnf group install "group-name" # Install a package group
dnf group remove "group-name" # Remove a package group
# Older systems with yum
yum update # Update packages
yum install package # Install a package
yum remove package # Remove a package
rpm -ivh package.rpm # Install an .rpm package manually
rpm -qa # List all installed packages
Process Management Commands¶
ps aux # View all processes
top # Interactive process viewer
htop # Enhanced interactive process viewer (often pre-installed)
kill PID # Kill a process by PID
killall processname # Kill all instances of a process by name
pkill -u username # Kill all processes from a specific user
nice -n 10 command # Start a command with a priority (lower values = higher priority)
renice -n 10 -p PID # Change the priority of an existing process
System Monitoring and Logging Commands¶
dmesg | less # View boot and kernel-related messages
journalctl # Query the systemd journal logs
tail -f /var/log/syslog # Follow system logs in real-time
uptime # Show how long the system has been running
vmstat 5 # Display memory, CPU, and I/O statistics every 5 seconds
iostat 5 # Display disk I/O statistics every 5 seconds
Network Management Commands¶
ping hostname_or_IP # Test connectivity to another host
nslookup hostname # Query DNS for a host
traceroute hostname # Trace the route packets take to reach a host
netstat -tuln # Show open ports and connections
ss -tuln # Similar to netstat; show listening sockets and ports
iptables -L # View firewall rules
firewall-cmd --list-all # View firewalld rules (CentOS/RedHat)
curl url # Transfer data from or to a server
wget url # Download files from the internet
scp file user@remote:/path # Securely copy files to a remote system
Network Firewalls vs. WAFs¶
A WAF and a standard firewall are both firewalls, but they function in different ways.
A standard firewall acts like a gatekeeper.
Standard firewalls are designed to permit or deny access to networks.
On the other hand, a WAF generally focuses on threats aimed at HTTP/HTTPS and other areas of the application layer.
Additionally, WAFs run on different algorithms such as anomaly detection, signature-based, and heuristic algorithms.
Therefore, it is best to place a standard firewall as the first layer of security, and then place a WAF in front of the application servers in the DMZ.
Bash Parameter Expansion (Slicing/Substitution)¶
Not to be confused with parameter transformation.
This does technically transform variables, but it serves a different purpose.
Replace strings and variables in place with parameter expansion.
Slicing with Parameter Expansion¶
name="John"
echo "${name}"
echo "${name/J/j}" #=> "john" (substitution)
echo "${name:0:2}" #=> "Jo" (slicing)
echo "${name::2}" #=> "Jo" (slicing)
echo "${name::-1}" #=> "Joh" (slicing)
echo "${name:(-1)}" #=> "n" (slicing from right)
echo "${name:(-2):1}" #=> "h" (slicing from right)
echo "${food:-Cake}" #=> $food or "Cake"
length=2
echo "${name:0:length}" #=> "Jo"
# Cutting out the suffix
str="/path/to/foo.cpp"
echo "${str%.cpp}" # /path/to/foo
echo "${str%.cpp}.o" # /path/to/foo.o
echo "${str%/*}" # /path/to
# Cutting out the prefix
echo "${str##*.}" # cpp (extension)
echo "${str##*/}" # foo.cpp (basepath)
# Cutting out the path, leaving the filename
echo "${str#*/}" # path/to/foo.cpp
echo "${str##*/}" # foo.cpp
echo "${str/foo/bar}" # /path/to/bar.cpp
str="Hello world"
echo "${str:6:5}" # "world"
echo "${str: -5:5}" # "world"
Substitution with Parameter Expansion¶
${foo%suffix} Remove shortest matching suffix
${foo#prefix} Remove shortest matching prefix
${foo%%suffix} Remove longest matching suffix
${foo##prefix} Remove longest matching prefix
${foo/from/to} Replace first match
${foo//from/to} Replace all matches
${foo/%from/to} Replace suffix (match must be at the end)
${foo/#from/to} Replace prefix (match must be at the start)
src="/path/to/foo.cpp"
base=${src##*/} #=> "foo.cpp" (basepath)
dir=${src%$base} #=> "/path/to/" (dirpath)
Substrings with Parameter Expansion¶
${foo:0:3} # Substring (position, length)
${foo:(-3):3} # Substring from the right
Getting the Length of a String/Variable with Parameter Expansion¶
${#foo} # Length of $foo
Bash Parameter Transformation¶
man://bash 1500
Parameter transformation is a way to perform a transformation on a parameter before it is used.
Syntax:
${parameter@operator}
Parameter Transformation Operators¶
- `U`: Converts all lowercase letters in the value to uppercase.
- `u`: Capitalizes only the first letter of the value.
- `L`: Converts all uppercase letters in the value to lowercase.
- `Q`: Quotes the value, making it safe to reuse as input.
- `E`: Expands escape sequences in the value (like `$'...'` syntax).
- `P`: Expands the value as a prompt string.
- `A`: Generates an assignment statement to recreate the variable with its attributes.
- `K`: Produces a quoted version of the value, displaying arrays as key-value pairs.
- `a`: Returns the variable's attribute flags (like `readonly`, `exported`).
Multiple Parameter Transformation Operators¶
You can use parameter transformation on multiple positional parameters or arguments
at once by using @
or *
.
When you use ${@}
or ${*}
, Bash treats each positional
parameter (e.g., command-line arguments) one by one, applying the
transformation to each.
The output is a list with each item transformed according to the specified operator.
If you use ${array[@]}
or ${array[*]}
, Bash applies the transformation to each element of the array, one by one. The result is also a list with each array item transformed individually.
The final transformed output might go through word splitting (separating by spaces)
and pathname expansion (turning wildcard characters like *
into matching filenames)
if enabled, so the result could expand further into multiple words or paths.
Typically *
will combine the parameters into one string, whereas @
will split the
parameters into an array.
Syntax | Description | Example Output (for hello world bash ) |
---|---|---|
${@^} |
Capitalizes each parameter | Hello World Bash |
${*^} |
Capitalizes only the first letter of the combined string | Hello world bash |
${@^^} |
Uppercases each parameter completely | HELLO WORLD BASH |
${*^^} |
Uppercases the entire combined string | HELLO WORLD BASH |
${@,} |
Lowercases the first character of each parameter | hello world bash |
${*,} |
Lowercases only the first character of the combined string | hello world bash |
${@@Q} |
Quotes each parameter individually | 'hello' 'world' 'bash' |
${*@Q} |
Quotes the entire combined string | 'hello world bash' |
Parameter Transformation Examples¶
Parameter transformation on variables, arrays, and associative arrays:
# Example variable for demonstration
var="hello world"
array_var=("one" "two" "three")
declare -A assoc_array_var=([key1]="value1" [key2]="value2")
# U: Convert all lowercase letters to uppercase
echo "${var@U}" # Output: HELLO WORLD
# u: Capitalize only the first letter
echo "${var@u}" # Output: Hello world
# L: Convert all uppercase letters to lowercase
var="HELLO WORLD"
echo "${var@L}" # Output: hello world
# Q: Quote the value, safe for reuse as input
echo "${var@Q}" # Output: 'HELLO WORLD' (or "HELLO WORLD" depending on context)
# E: Expand escape sequences (e.g., newline, tab)
esc_var=$'hello\nworld'
echo "${esc_var@E}" # Output: hello
# world
# P: Expand as a prompt string (useful for prompt formatting)
PS1="[\u@\h \W]\$ " # Set the prompt
echo "${PS1@P}" # Output: [user@host directory]$
# A: Generate an assignment statement that recreates the variable
echo "${var@A}" # Output: declare -- var="HELLO WORLD"
# K: Quoted version of the value, with arrays as key-value pairs
echo "${array_var[@]@K}" # Output: 0 "one" 1 "two" 2 "three"
echo "${assoc_array_var[@]@K}" # Output: key1 "value1" key2 "value2" (order may vary)
# a: Display attributes of the variable (flags)
declare -r readonly_var="test"
echo "${readonly_var@a}" # Output: r (indicates readonly)
Examples of using parameter transformation with positional parameters and arrays
using the ^
and @
operators:
# Positional parameters example
# You can set positional parameters using `set -- "var1" "var2"` etc.
set -- "hello" "world"
echo "${@^}" # Each positional parameter capitalized: "Hello World"
# Array example
array=("one" "two" "three")
echo "${array[@]^}" # Capitalize each array element: "One Two Three"
# Word splitting and pathname expansion example
files=("file1.txt" "*.sh")
echo ${files[@]^} # Unquoted expansion: "*.sh" may expand to matching shell scripts
Handling Empty and Undefined Variables in Bash¶
The walrus operator `:=` is available in bash using the syntax:
"${foo:=default_value}"
This assigns `default_value` to `foo` if `foo` is either:
- Unset (variable doesn't exist yet)
- Empty (variable exists but has no value)
If `foo` is already set to a value, then the `:=` operator will do nothing, and the value of `foo` is returned.
Bash Walrus Examples¶
Assigning a Default Value¶
unset VAR
echo "${VAR:=default}" # Output: default
echo "$VAR" # Output: default
A more practical example:
FILENAME="${1:-default_file.txt}"
echo "Processing ${FILENAME}"
This uses `$1` (the first argument) as the `FILENAME`, or if `$1` is empty or not set, it falls back to `default_file.txt` instead.
Note that `:-` is used here rather than `:=`, since `:=` cannot assign to positional parameters.
Leaving a non-null value unchanged¶
VAR="Hello"
echo "${VAR:=default}" # Output: Hello
echo "$VAR" # Output: Hello (does not change)
Handling an Empty Variable¶
VAR=""
echo "${VAR:=default}" # Output: default
echo "$VAR" # Output: default (value is updated)
Checking the Operating System with the OSTYPE
Variable¶
Use `OSTYPE` instead of `uname` for checking the operating system.
- This is a shell variable set by bash. On Ubuntu Server (or any Linux distro), it will have the value `linux-gnu`.
- This saves you from spawning an external process (`uname`).
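As a sketch, you could branch on `$OSTYPE` directly in a script (the exact values, like `linux-gnu` or `darwin*`, depend on the platform and bash build):

```shell
#!/bin/bash
# Branch on the OSTYPE shell variable instead of shelling out to uname
case "$OSTYPE" in
  linux-gnu*) echo "Running on Linux" ;;
  darwin*)    echo "Running on macOS" ;;
  *bsd*)      echo "Running on BSD" ;;
  *)          echo "Unknown OS: $OSTYPE" ;;
esac
```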
Patching Linux Systems¶
How you go about patching/updating systems in Linux depends on if the node is
stateful or stateless.
- A stateful node retains its state, data, and specific configurations across sessions and reboots.
- A stateless node is ephemeral. This means that it does not retain state, data, or configurations across sessions.
  - Stateless nodes are easier to scale and redeploy with updated versions.
Updating Linux Boxes in Enterprise Environments¶
- Stateful nodes need to be patched.
- This is because stateful nodes retain data and specific configurations that persist across reboots.
- Updating and patching these nodes ensures security vulnerabilities are fixed and that software is up to date while maintaining the state and data.
  - Patching is crucial here since these nodes can’t simply be replaced without data loss or complex migration.
- Stateless nodes typically aren’t patched directly.
- Stateless nodes don’t retain data or configuration once they’re restarted or terminated; they’re often part of horizontally scaled environments (like microservices or containerized applications).
- Instead of patching, stateless nodes are redeployed with a new image that includes all updates and patches.
- This approach allows for easy replacement and minimizes downtime.
System Update Strategy¶
- Rolling Updates: For services that require high availability, rolling updates allow nodes to be updated in a staggered manner.
- This minimizes downtime by ensuring that some nodes remain available while others are updated.
- Blue-Green Deployment: For stateless applications, a blue-green deployment can be used.
- Deploy the updated image to a “blue” environment while the “green” environment continues serving traffic.
- Once validated, switch all traffic to the blue environment.
- Canary Releases: Deploy updates to a small subset of nodes initially to monitor for issues before rolling out to the full environment.
Scheduling System Updates¶
- Non-Peak Hours: Schedule updates during off-peak hours to reduce the impact on end-users.
- Maintenance Windows: Use designated maintenance windows approved by stakeholders to ensure updates do not interfere with critical operations.
Automating System Updates¶
- Use configuration management tools like Ansible, Chef, or Puppet to automate patching and updating of stateful nodes.
- For stateless nodes, use CI/CD pipelines to automate the creation and deployment of new images with the latest updates.
System Patching Compliance and Security:¶
- Compliance: Regular updates are often required to maintain compliance with security standards (e.g., PCI-DSS, HIPAA).
- Ensure systems are patched according to these standards.
- Vulnerability Management: Use tools like OpenSCAP or Lynis for vulnerability scanning to ensure updates address all known vulnerabilities.
- Audit Logs: Keep detailed logs of updates and patches applied to ensure traceability and accountability for changes in the environment.
Testing System Updates:¶
- Always test updates in a staging environment that mirrors production.
- This is to ensure compatibility and identify potential issues before applying them in production.
- Have a rollback plan.
- For each update, have a rollback plan in case of failures, especially for stateful systems where data integrity is critical.
Updating Linux Boxes, tl;dr¶
- Stateful nodes require patching due to their persistent state and data.
- Stateless nodes are usually replaced with updated images, avoiding direct patching.
- Automate and schedule updates to minimize impact and maintain consistency.
- Ensure testing, compliance, and logging to meet enterprise standards and maintain system reliability.
Clear Cache, Memory, and Swap Space on Linux¶
tl;dr:
echo 1 > /proc/sys/vm/drop_caches # Clears only the PageCache. Frees memory used for caching file data.
echo 2 > /proc/sys/vm/drop_caches # Clears dentries and inodes. Releases memory used for filesystem metadata.
echo 3 > /proc/sys/vm/drop_caches # Clears all three types of caches.
sync && echo 1 > /proc/sys/vm/drop_caches # Clears Buffer Cache and PageCache
swapoff -a && swapon -a # Clear swap space (turn it off and on again)
Clearing PageCache and inode/dentry Caches¶
Clearing cache, memory, and swap space on Linux is done with a special file:
/proc/sys/vm/drop_caches
The drop_caches
file is used to clean cache without killing any applications.
This file is used by echo
ing a number between 1
and 3
into the file.
- Clear PageCache only:
sudo sh -c 'echo 1 > /proc/sys/vm/drop_caches'
- The PageCache is the in-memory cache for storing file contents that are read from the disk.
- This speeds up file access by caching disk data in RAM.
- Clear dentries and inodes:
sudo sh -c 'echo 2 > /proc/sys/vm/drop_caches'
  - The dentry (directory entry) cache is the directory-related metadata stored in memory.
    - This is used for navigating the filesystem quickly without repeatedly scanning the disk.
  - The inode (index node) cache is the metadata for files and directories.
    - This is used to quickly locate files and access metadata without scanning the disk.
- Clear all 3 (PageCache, dentries, and inodes):
sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'
- This frees up memory but can temporarily impact performance until the caches are rebuilt.
These commands do not delete data from the disk. These only release cached data that's kept in RAM to free up memory.
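To see how much RAM the caches are currently using (before and after dropping them), you can read `/proc/meminfo`:

```shell
# Show current cache usage (values are in kB)
grep -E '^(MemFree|Buffers|Cached)' /proc/meminfo
```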
Clear Buffer Cache¶
To clear the buffer cache, use sync
before clearing the PageCache.
sudo sh -c 'sync; echo 1 > /proc/sys/vm/drop_caches'
sync
: Flushes the filesystem buffers to disk.- It synchronizes cached writes to the disk.
Clearing Swap Space¶
Swap space is a part of the HDD (or SSD) that is used as supplementary RAM.
Swap is FAR slower than RAM, so it's not a great practice to use a ton of it.
To clear swap space, use swapoff
and swapon
:
sudo swapoff -a
sudo swapon -a
- `swapoff`: Disables swapping on a given device.
  - `-a`: Disables swapping on all swap devices and files.
- `swapon`: Used to specify devices to be used for paging and swapping.
  - `-a`: All devices marked with `swap` in the `/etc/fstab` file will be enabled.
Clearing swap space will not free up RAM.
It only frees up space used for swap on the disk.
Buffer Cache vs PageCache¶
-
PageCache: Used to cache the contents of files on the disk into memory.
- Makes file access and R/W operations faster by storing the actual contents of a file in memory.
- When a process reads from a file, the data is loaded into the PageCache first,
then all subsequent reads can pull directly from memory rather than needing to
access the disk. - When a process requests data from a file, the system checks if it's already in
the PageCache (if it is, it's called a "cache hit"). - If it's a cache hit, it serves the data from memory.
-
Buffer Cache: Used to cache raw block data that is read from, or written to, the disk.
- Primarily used for I/O operations on the block level.
- Serves as temporary storage for data that is read from, or written to, disk sectors.
- Stores disk block and sector data, including low-level metadata about
filesystem structures, which the kernel needs to manage disk operations (superblocks, inodes, dentries). - Helps with filesystem metadata and block-level operations. Optimizes the
performance of low-level disk I/O operations by minimizing direct reads and
writes to disk.
OPNsense¶
OPNsense is an open-source firewall and routing software.
It's based on FreeBSD and designed to provide advanced networking features while
still being easy to use and highly customizable.
Widely used for both home and enterprise networks.
Easy, free, secure, and flexible.
OPNsense is commonly used for:
- Home network firewalls
- Enterprise gateway firewalls
- VPN servers
- Content filtering
- Load balancing
Some of the features of OPNsense:
- Firewall and security:
- Uses `pf` (packet filter), the same filtering engine as pfSense, for robust and efficient packet filtering.
- Supports stateful packet inspection; tracks the state of connections and applies rules accordingly.
- Supports VPNs; Includes options for setting up OpenVPN, IPsec, and WireGuard VPNs.
- Uses
- Has a web-based interface:
- The entire configuration is managed through a web interface.
- Provides graphs, dashboards, and real-time monitoring of system and network activity/performance.
- Advanced routing options:
- Supports static and dynamic routing protocols.
- BGP, OSPF, RIP, RIPng, IS-IS, EIGRP, PIM, and HSRP.
- Can function as a gateway for complex network setups.
- Supports static and dynamic routing protocols.
- Plugins and extensibility:
- OPNsense includes a plugin system that allows for third-party software
extensions to be installed and managed.
- Intrusion Detection/Prevention (IDS/IPS) with tools like Suricata
- Proxy server for content filtering and caching.
- Dynamic DNS support.
- You can extend functionality to meet enterprise-grade needs or specific home network needs.
- Hardware compatibility
- Runs on x86 hardware, making it compatible with many devices, including embedded systems.
- Available as an image for VMs.
Creating a Partition Table on a Disk using parted
¶
parted
is a command-line utility for partitioning disks.
Example disk: /dev/sdb
parted /dev/sdb
# Now we're in the interactive prompt for parted
# Create the partition table
mklabel gpt
# Or, for older MBR systems:
mklabel msdos
# Create a partition using 100% of the disk
mkpart primary 0% 100%
# Verify the partition
print
# Quit parted
quit
mkpart primary 0% 100%
- `mkpart`: Make partition. Creates a new partition on the disk.
- `primary`: Specify the type of partition.
- `0%`: Start point. Starts at the beginning of the disk.
- `100%`: End point. Ends at the end of the disk.
Scriptable Way to Partition Disks with fdisk
/gdisk
¶
Send input to fdisk
(or gdisk
) without entering the interactive prompt by using a
pipe to send input to the program.
echo -e "o\nn\np\n1\n\n\nw" | fdisk /dev/sda
- `echo -e "o\nn\np\n1\n\n\nw"`: This line sends a series of commands to `fdisk` as follows:
  - `o`: Delete all partitions and create a new empty DOS partition table.
  - `n`: Add a new partition.
  - `p`: Makes the new partition primary.
  - `1`: Specifies it as partition number 1.
  - The two blank lines (`\n\n`) accept the default start and end values, i.e., use the entire disk.
  - `w`: Writes the changes and exits `fdisk`.
  - Each `\n` is a line break, the same as pressing Enter in the interactive prompt.
- `|`: Pipe. It takes the output from the previous command (`echo`) and sends it as input to the next command (`fdisk`).
- `fdisk /dev/sda`:
  - `fdisk` is a command line utility used to create and manipulate disk partition tables.
  - `/dev/sda` specifies the first hard disk.
- This command will delete all partitions on the first hard disk and create a new primary partition that uses the whole disk.
Wipe Existing Data on a Disk or Parition¶
You can use dd
to wipe a disk by overwriting the disk.
sudo dd if=/dev/zero of=/dev/sdb1 bs=1M count=100
Adjust the count to roughly match the size of the disk or partition you want to wipe.
Testing a Disk for Errors using smartctl
¶
smartctl
is a command line utility that can be used to test a disk for errors.
smartctl -a /dev/sda # Show all SMART info about the disk
smartctl -t short /dev/sda # Start a quick test (~2 minutes, runs in background)
smartctl -l selftest /dev/sda # View test results
smartctl -t long /dev/sda # Run a full test (takes a long time)
smartctl -l selftest /dev/sda # View test results
Special Files to Get Bits: Zero, Random or Random 0/1¶
- `/dev/zero`: returns an endless stream of zero bytes.
- `/dev/random`: returns random bytes (may block until enough entropy is available).
- `/dev/urandom`: returns random bytes without blocking.
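A quick way to see the difference between these devices, using the standard `head` and `od` tools:

```shell
# /dev/zero: always zero bytes
head -c 8 /dev/zero | od -An -tx1     # eight 00 bytes

# /dev/urandom: random bytes, never blocks
head -c 8 /dev/urandom | od -An -tx1  # eight random bytes
```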
Setting up ClamAV Antivirus¶
source¶
ClamAV is an open-source antivirus engine for detecting trojans, viruses, malware, and other malicious threats. It is mostly used on Linux and Unix systems.
ClamAV is in the apt
package repository, for Debian-based systems.
sudo apt update
sudo apt install clamav clamav-daemon
- `clamav`: The main ClamAV package.
- `clamav-daemon`: The ClamAV daemon.
To manually update the database, stop clamav-freshclam
and run freshclam
.
sudo systemctl status clamav-freshclam
sudo systemctl stop clamav-freshclam
freshclam
sudo systemctl start clamav-freshclam
Run a scan against a directory, and time it:
time clamscan -i -r --log=/var/log/clamav/clamav.log /home/
- `-i`/`--infected`: Only show infected files.
- `--remove`: Automatically remove infected files.
- `-r`/`--recursive`: Recursively scan directories.
Automate ClamAV Scans¶
Set up a script to run scans daily.
#!/bin/bash
# Set your logfiles
DAILYLOGFILE="/var/log/clamav/clamav-$(date +'%Y-%m-%d').log";
LOGFILE="/var/log/clamav/clamav.log";
# Scan the entire system from root
clamscan -ri / &> "$LOGFILE"
# Copying to daily log file for history tracking
cp $LOGFILE $DAILYLOGFILE
# Gather Metrics to use later
scanDirectories=`tail -20 $DAILYLOGFILE | grep -i directories | awk '{print $NF}'`
scanFiles=`tail -20 $DAILYLOGFILE | grep -i "scanned files" | awk '{print $NF}'`
infectedFiles=`tail -20 $DAILYLOGFILE | grep -i infected | awk '{print $NF}'`
runTimeSeconds=`tail -20 $DAILYLOGFILE | grep -i time | awk '{print $2}' | awk -F. '{print $1}'`
# Report out what metrics you have
echo "Directories: $scanDirectories Files: $scanFiles Infected: $infectedFiles Time: $runTimeSeconds"
exit 0
- Copy the script into `/etc/cron.daily/`
- Or set it to run in the `crontab` (cron table):
  crontab -e # Then add the line: 0 1 * * * /path/to/script
  - (A user field, like `root`, only belongs in `/etc/crontab`, not in a user's crontab.)
Format Drives and Create Partitions¶
Format drives and create disk partitions using gdisk
.
sudo gdisk /dev/sdX
o # Create a new empty GPT partition table
n # Create a new partition
p # Make it the primary partition
# Use defaults
w # Write changes (otherwise they won't be applied)
When prompted for `Last sector`, you can specify a size, like `+100G` for a 100GB partition.
Checking the Systemd Configuration for a Service¶
Use the systemctl cat
command to view the systemd
configuration for a service.
systemctl cat k3s
Set a Timeout Timer for a Command¶
Use the timeout
command to kill a command if it doesn't finish in a certain amount of time.
timeout 5 ssh user@hostname
timeout --preserve-status 5 ssh user@hostname # Exit with the same status as ssh
timeout -s SIGINT 5 ssh user@hostname
- `--preserve-status`: Exit `timeout` with the same status as the command.
- `-s`/`--signal`: Specify the signal to send to the command if a timeout occurs.
Get the Full Path of the Current Script¶
You can use this inside a script to get the directory that the script is inside:
script_dir=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
This also works for `source`d scripts. Use this if you're not running the script directly.
To get the whole path, including the filename, just use realpath
:
script_path=$(realpath "$0")
script_dir=${script_path%/*}
The `$0` argument changes to reflect the script calling it.
Setting up Bash on a Mac¶
Instead of using the old v3 bash on mac systems, use brew
to install bash v5.
brew install bash
echo "$(brew --prefix)/bin/bash" | sudo tee -a /etc/shells
chsh -s "$(brew --prefix)/bin/bash"
Changing the Hostname of a System¶
Change hostname:
-
Change the hostname of a system with
hostnamectl
.
sudo hostnamectl set-hostname new-hostname
-
Then, update the
/etc/hosts
file.
Change any entries of the old hostname with the new one. -
Either reboot the system or restart the
network
andsystemd-hostnamed
services.
sudo reboot
# or
sudo systemctl restart network
sudo systemctl restart systemd-hostnamed
Syscall Tracing with strace¶
Use `strace` to track the system calls that a program makes.
The output resembles C-style function calls, one per syscall.
To see each of the syscalls a command makes, you can run the command directly after
the strace
/strace [opts]
:
sudo strace -ffs320 mknod -m 666 /tmp/mynull c 1 3
- `-ff`: Also trace child processes as they're created by the traced process (spawned via `fork()`/`vfork()`).
  - Combines `-f` and `--output-separately`.
  - Essentially it writes each traced process's output to its own file (when used with `-o`).
- `-s320`: Print up to 320 characters per string argument (the default is 32).
Using umask
to Set Permissions before Creating Files¶
You can set the permissions a file should have before actually creating the file.
This can be done to prevent an attacker from potentially grabbing a readable file
descriptor to a sensitive file (e.g., a private key file).
umask 0 && mknod /tmp/mynull c 1 3
The umask
(user file-create mode mask) is a permission bitmask that restricts
default permissions given to new files and directories.
When you create a file, the requested mode (like 0666
) is bitwise AND
-ed with
the inverse of the bitmask.
actual_mode = requested_mode & ~umask
A common umask
is 022
.
So if requested_mode = 0666
and the umask = 0022
(removes write for group and
others):
0666 & ~0022 = 0666 & 0755 = 0644
# which gives:
rw-r--r--
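A quick demonstration of that arithmetic (files request `0666`, directories `0777`; `stat -c` here is the GNU coreutils form):

```shell
# Show how umask 022 turns the requested modes into 644/755
tmp=$(mktemp -d)
(
  umask 022
  touch "$tmp/file"     # 0666 & ~0022 = 0644
  mkdir "$tmp/dir"      # 0777 & ~0022 = 0755
  stat -c '%a %n' "$tmp/file" "$tmp/dir"
)
rm -rf "$tmp"
```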
Suspending and Unsuspending your Terminal¶
It's easy to accidentally hit `Control-S` in the terminal.
That sends XOFF (software flow control), which "suspends" terminal output.
Everything you type while the terminal is suspended is buffered.
That means when you use `Control-Q` (XON) to resume the terminal, everything you
typed will be sent to the shell.
IFS (Internal Field Separator)¶
IFS
: The Internal Field Separator.
This is a variable that's used for handling word splitting after expansion.
It's also used to split lines into words with the read
builtin command.
The default value of the IFS
variable is <space><tab><newline>
.
E.g., when $*
is called, the IFS
variable is used to check how to join all the
arguments together. It uses the first value of the IFS
, which is a space by default.
Using IFS to Manipulate how Data is Joined¶
Take the script:
#!/bin/bash
printf "%s\n" "$*"
Call the script with ./script hello world
and you get:
hello world
Change the script to set `IFS` to an empty string:
:
#!/bin/bash
IFS=
printf "%s\n" "$*"
Call it with `./script hello world` again and you get:
helloworld
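`IFS` also controls how `read` splits input into fields, which is handy for parsing delimited data:

```shell
#!/bin/bash
# Split a comma-separated line into fields by setting IFS just for `read`
line="alpha,beta,gamma"
IFS=',' read -r first second third <<< "$line"
echo "$first / $second / $third"   # alpha / beta / gamma
```

Setting `IFS` only on the `read` command line keeps the change from leaking into the rest of the script.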
Fork Bombs¶
Fork bombs are denial-of-service attacks that use up system resources.
They generally work by calling themselves over and over (recursively) and spawning
more and more useless processes that grow at an exponential rate.
A common fork bomb:
:(){ :|:& };:
- `:()`: Start a function definition for a function literally named `:`.
- `{ ... }`: The contents of the function.
- `:|:`: Calls `:` (itself), then pipes it to `:` to call itself again.
- `&`: Backgrounds the process.
- `;:`: Ends the function definition and then calls the `:` function.
What happens here:
- It calls itself twice (
:
and:
again via the pipe). - Backgrounds the second call (
&
). - Runs the function.
- Each function call spawns two new copies of itself.
- Those spawn two more...
- Exponential growth happens very quickly.
- This exhausts system resources (mostly process table limits, maybe CPU/memory too).
To "fork" means to create a new process (using the `fork()` syscall).
A fork bomb abuses fork()
repeatedly and infinitely.
Protecting Against Fork Bombs¶
User Process Limit¶
You can protect against fork bombs with `ulimit -u` (max user processes).
ulimit -u 4096
: Prevent a fork bomb from destroying the whole system.- Only the user account running the fork bomb would freeze.
- Using `ulimit -u` is temporary; it will only work for the current shell session.
- For a permanent limit, use `/etc/security/limits.conf` (see this section).
Without a limit, the kernel gets overwhelmed and completely crashes or forces a reboot.
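You can check the limits currently in effect for your shell with `ulimit`:

```shell
# Show the current per-user process limits for this shell session
ulimit -u     # soft limit on processes
ulimit -Hu    # hard limit on processes
```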
System-Wide Process Limit¶
You can check the system-wide process limit (`pid_max`) by looking in `/proc/sys/kernel/pid_max`.
cat /proc/sys/kernel/pid_max
Setting a Permanent Process Limit (User and System-Wide)¶
The correct place to set a process limit is in /etc/security/limits.conf
.
Or, any file in /etc/security/limits.d/
.
sudo vi /etc/security/limits.conf
Add the lines:
username soft nproc 2048
username hard nproc 4096
- `username`: The user account you want to limit.
- `soft`: Warning limit. The user can adjust it temporarily, up to the hard limit.
  - Set to `2048` in this example.
- `hard`: Strict maximum, enforced by the system.
  - Set to `4096` in this example.
- `nproc`: Specifies that the rule limits the number of processes.
pid_max
Kernel Parameter¶
If you really wanted to, you could change pid_max
kernel parameter temporarily.
sudo sysctl -w kernel.pid_max=65536
sysctl -w
: Write (set) a value.
This is temporary only unless you set it in /etc/sysctl*
.
How a Fork Bomb Works¶
A fork bomb works by exhausting system resources, primarily process table limits.
- Every time you run a process, it gets a "process descriptor."
- A process table stores the process descriptors.
- Each process table entry stores a
task_struct
(from Linux source code), which contains: - Process ID (PID)
- Parent process ID (PPID)
- User and Group IDs (UID, GID)
- Open file handles
- Memory used
- State (running, sleeping, zombie, etc.)
- CPU time used
- And a lot more metadata.
- Each process table entry stores a
- The process table is a limited-size structure. The system can only have so many processes at one time.
- Limits come from two places:
- Kernel config: Hardcoded number of max processes the kernel can track.
- User limits: Soft/hard limits for each user account (controlled by
ulimit -u
).
So without user limits, a fork bomb can take down a system.
Creating Special Files with mknod
¶
The mknod
command is used to create block special files, character special files,
or FIFOs.
The term "special file" on Linux/Unix means a file that can generate or receive data.
Syntax:
mknod [OPTION]... NAME TYPE [MAJOR MINOR]
NAME
: The path to the special file.TYPE
: You can specify theTYPE
of file:p
: Pipe (FIFO) special file.b
: Block device special file. Requires major/minor numbers.c
: Character special file. Requires major/minor numbers.
MAJOR
: Major device number (identifies the driver).MINOR
: Minor device number (identifies the specific device handled by that driver).
When specifying either a block or character special file, you need to specify major and minor device numbers.
- The
MAJOR
number tells the kernel which driver to use (e.g., the block driver).- This is like the "class" of device.
- The
MINOR
number tells the kernel which instance of the device the file refers to.- This is the "specific device in that class."
For example:
ls -l /dev/sda
# brw-rw---- 1 root disk 8, 0 Mar 4 13:38 /dev/sda
`/dev/sda` is a block device (`b`) with a major number of `8` (driver: `sd`), and a minor number of `0`. `/dev/sda1` would be minor number `1`.
You can check major and minor numbers with stat
or ls -l
.
stat /dev/sda
File: /dev/sda
Size: 0 Blocks: 0 IO Block: 4096 block special file
Device: 5h/5d Inode: 323 Links: 1 Device type: 8,0
Access: (0660/brw-rw----) Uid: ( 0/ root) Gid: ( 6/ disk)
Access: 2025-04-28 15:44:20.295315382 -0400
Modify: 2025-03-04 13:38:48.603526535 -0500
Change: 2025-03-04 13:38:48.603526535 -0500
Birth: -
Note the `Device type: 8,0`. This is the major and minor number.
Creating a Null Device with mknod
¶
ls -l /dev/null
# crw-rw-rw- 1 root root 1, 3 Mar 4 13:38 /dev/null
The major and minor numbers for `/dev/null` are `1` and `3` respectively.
We can use those to duplicate the file with `mknod`.
sudo mknod /tmp/mynull c 1 3
sudo chmod 666 /tmp/mynull
- This creates a character device at
/tmp/mynull
with major1
and minor3
. - That matches the null device (
/dev/null
).
You can also save yourself the chmod
and use the -m
option:
sudo mknod -m 666 /tmp/mynull c 1 3
Kill Unresponsive SSH Sessions¶
To kill an unresponsive SSH session, you can hit `~.` to drop the SSH connection.
This is known as the "SSH Escape Character."
~.
Hit enter afterwards.
This can be set manually to something different if you want.
Set the SSH escape character with:
ssh -e <char>
when connecting with SSH.EscapeChar
in the SSH config file (~/.ssh/config
)
Manual Sections (man
)¶
The man
command has 9 sections, each representing documentation of a different type.
Section | Description |
---|---|
1 |
Executable programs or shell commands |
2 |
System calls (C functions provided by the kernel) |
3 |
Library calls (functions within program libraries) |
4 |
Special files (usually found in /dev) |
5 |
File formats and conventions, e.g., /etc/passwd |
6 |
Games |
7 |
Miscellaneous (including macro packages and conventions), e.g. man(7) , groff(7) |
8 |
System administration commands (usually only for root) |
9 |
Kernel routines (Non-standard) |
TTY vs Terminal Emulator¶
An actual TTY (stands for Teletypewriter, usually called a terminal) is a direct connection to the system.
- A TTY is a physical device used to interact with computers in early Unix systems.
- On modern systems, a TTY refers to either a real hardware terminal or a
virtual console (e.g., the ones you access via
Ctrl-Alt-F1
throughF6
on Linux).
A terminal emulator is an emulated version of a TTY.
Terminal emulators connect to the machine through pseudo-terminals (PTYs).
- The terminal emulator is a program that mimics the behavior of a TTY in a graphical or networked environment.
- These are your
xterm
,alacritty
,gnome-terminal
,konsole
,tmux
, etc.
A pseudo-terminal is a pair of virtual devices that simulate a physical terminal.
There is a master and a slave in a pseudo-terminal pair.
- The master (`/dev/ptmx`) is controlled by the terminal emulator (e.g., `gnome-terminal`, `ssh`, `tmux`).
- The slave (`/dev/pts/n`) acts like the terminal device for the shell or program.
  - `pts` stands for pseudo-terminal slave.
  - This is the slave side of a pseudo-terminal pair. It's what your shell is using as its TTY.
tty
(the command) in Linux/Unix systems prints the file name of the terminal connected to stdin.
tty
# /dev/pts/2
You'll see:
/dev/tty0
,/dev/tty1
, etc: Physical or virtual TTYs (real terminals)./dev/pts/0
,/dev/pts/1
, etc: Pseudo-terminals (terminal emulators).
In Linux/Unix, "Everything is a file," even your terminal.
Creating a Bootable USB Drive with dd
¶
Quick note: This is a destructive operation. Creating a bootable USB drive will wipe all the data that's on it.
First locate your USB drive.
lsblk -f
We'll use this as an example:
sdb
└─sdb1 exfat 1.0 backups
3035-9085
Once you've identified your drive, `umount` (unmount) it if it's mounted anywhere.

```bash
sudo umount /dev/sdb*
```

- The glob `/dev/sdb*` ensures all partitions are unmounted.
Then, pick the ISO you want to create bootable media from.
I'll use `/ISOs/linuxmint-22.1-xfce-64bit.iso` in this example.

```bash
sudo dd if=/ISOs/linuxmint-22.1-xfce-64bit.iso of=/dev/sdb bs=4M status=progress conv=fsync
```
- `if=`: Input file. The ISO you want to write.
- `of=`: Output file. The block device itself, not a partition.
- `bs=4M`: A block size of `4M` gives a good speed/safety tradeoff.
- `status=progress`: Prints transfer statistics periodically while writing.
- `conv=fsync`: Calls `fsync()` on the output file once all the data has been written.
    - Ensures the data sitting in the kernel's write buffers is flushed to the disk before `dd` exits, instead of waiting for the OS to decide when to write it.
Note that we're not writing to the partition `sdb1`; we're writing directly to the block device itself.
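One way to guard against that mistake in a script is to check whether the target path looks like a partition before writing. `is_partition` below is a hypothetical, name-based check (it only covers common `sdX` and NVMe naming), a sketch rather than a full validation:

```shell
#!/usr/bin/env bash
# is_partition: hypothetical check that succeeds if the path looks like a
# partition (sdb1, nvme0n1p2) rather than a whole disk (sdb, nvme0n1).
is_partition() {
  case "$1" in
    /dev/sd[a-z]*[0-9]) return 0 ;;  # e.g. /dev/sdb1
    /dev/nvme*p[0-9]*)  return 0 ;;  # e.g. /dev/nvme0n1p2
    *)                  return 1 ;;
  esac
}

target=/dev/sdb
if is_partition "$target"; then
  echo "Refusing to write: $target looks like a partition, not a disk" >&2
  exit 1
fi
# Safe to proceed: sudo dd if=... of="$target" bs=4M status=progress conv=fsync
```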
Once `dd` finishes, you should see output like this:

```text
123456789+0 records in
123456789+0 records out
xxxx bytes (x.x GB) copied, x seconds, x.x MB/s
```
Once that's done, the drive should be ready.
Go ahead and run a `sync` and `eject` it.

```bash
sync && sudo eject /dev/sdb
```

- `sync`: Forces the system to flush all filesystem write buffers to disk.
    - This is a safeguard in case something is still buffered.
- `eject`: Tells the OS to safely detach the device.
    - Flushes any remaining buffers.
    - Unmounts the device if it's mounted.
    - Signals the drive to power down (or pop out, if it's a CD).
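If you want extra assurance before ejecting, you can compare a checksum of the ISO against the same number of bytes read back from the device. `verify_write` is a hypothetical helper sketching that idea; since the device is usually larger than the ISO, only the first `bytes` of it are hashed:

```shell
#!/usr/bin/env bash
# verify_write: hypothetical helper comparing the SHA-256 of a source image
# with the SHA-256 of the same number of bytes read back from the target.
verify_write() {
  local src=$1 dst=$2 bytes
  bytes=$(stat -c %s "$src")   # size of the source image in bytes (GNU stat)
  [ "$(head -c "$bytes" "$dst" | sha256sum | cut -d' ' -f1)" = \
    "$(sha256sum "$src" | cut -d' ' -f1)" ]
}

# Usage (reading the raw device requires root):
#   verify_write /ISOs/linuxmint-22.1-xfce-64bit.iso /dev/sdb && echo "write verified"
```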
Inspecting the Written Drive¶
This part is optional.
You can mount the USB after writing and inspect it if you want to check that it was written correctly.

```bash
udisksctl mount -b /dev/sdb1
ls /run/media/$USER/*
```

- `udisksctl`: A CLI frontend to the `udisks2` daemon.
    - Used by most desktop Linux systems.
- `mount -b`: Tells it to mount a block device.
Using `udisksctl` to mount means you don't need to specify a filesystem type; it also picks the mountpoint for you.
If it's available on the system by default (as on most desktop Linux distros), it's a user-friendly alternative to `mount` that doesn't require `sudo` access.
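`udisksctl mount` reports the mountpoint it chose on stdout (a line like `Mounted /dev/sdb1 at /run/media/user/backups`; older udisks2 versions append a trailing period). To capture that path in a script, a small helper can parse it; `mountpoint_of` below is a hypothetical name for that sketch:

```shell
#!/usr/bin/env bash
# mountpoint_of: hypothetical helper that extracts the mountpoint from
# udisksctl's "Mounted <dev> at <path>" output, tolerating the trailing
# period printed by older udisks2 versions.
mountpoint_of() {
  sed -n 's/^Mounted .* at \(.*\)$/\1/p' | sed 's/\.$//'
}

# Usage:
#   mp=$(udisksctl mount -b /dev/sdb1 | mountpoint_of)
#   ls "$mp"
#   udisksctl unmount -b /dev/sdb1   # detach when done
```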
If you want to use `mount` instead, you'll need to determine the filesystem type.

```bash
lsblk -f
```

For a drive written directly from an ISO, the filesystem type will show as `iso9660`.
Create the mountpoint.

```bash
mkdir -p /mnt/usb
```

Mount the drive.

```bash
sudo mount -t iso9660 /dev/sdb1 /mnt/usb
```
By default, `mount` will try to auto-detect the filesystem type, so you don't necessarily need to specify it each time.

```bash
sudo mount /dev/sdb1 /mnt/usb
```
Terms¶
- RTO: Recovery Time Objective
- RPO: Recovery Point Objective
- Flow Engineering (DevOps)