Linuxgraphy by Strabo

Custom assets

Licensing information in the last slide, at the bottom of the page

Presented at

Unofficial FMI Computer Science Discord server   04.03.2023 || 14:50 - 17:20
In front of a tiny audience of other uni students.
Unofficial FMI Computer Science Discord server   22.01.2023 || 18:00 - 21:00
In front of a tiny audience of other uni students.


Linuxgraphy by Strabo

Made by Syndamia

2000 years ago Strabo published Geography, in which he laid out the world as it was known to the Romans and Greeks.
Today, I’ll lay out Linux as it is known to me.

Contents

  1. Historical context
  2. Structure of UNIX-likes
    1. Shell
    2. UNIX-style file system
    3. Kernel
  3. Linux from the inside
    1. Shells
    2. Files and directories
    3. Common configuration files
    4. System management
  4. Software licensing
  5. What makes up a useful OS
    1. Bootloader
    2. Init system
    3. Package manager
    4. Desktop manager
  6. War on distros
  7. Demos - virtual machine and setup from zero

1. Historical context

1945 - 1955

You directly loaded instructions into memory and let the machine execute your “code” (processor instructions). Nothing else ran on the machine.

This process was labor intensive: a qualified operator loaded your program, dumped the memory contents, removed any external media, reset the machine and loaded the next job.

1956 - 1959

Computers became faster, but a lot of time was spent on managing jobs.

resident monitors: very small programs which always resided in memory and monitored the state of the current job.
Jobs were loaded in series (batches). When the current job finished, memory would be dumped and the next one would automatically be loaded and started.

1960s

Peripherals (tapes, punch cards, …) were extremely slow.
Multiprogramming: when the current job is waiting for a peripheral, another job is started.

The travel reservation system by American Airlines let travel agents search, price and book services. A computer system now had to support:

More and more businesses started using computers, thanks to minicomputers, so demand for OS software increased.

1964 - 1969

IBM’s System/360 line of computers, each with expansion capabilities and backwards compatibility, all under one instruction set and operating system.

Multics, an influential operating system that was designed for a General Electric mainframe. Some of its novel ideas include, but are not limited to:

1969: Death of Multics and birth of UNIX

Ultimately Multics grew too large, becoming unusable and unmaintainable.
In the beginning, much of the programming was done by Bell Labs, with Ken Thompson being one of the developers.


Ken Thompson image by National Inventors Hall of Fame
Ken Thompson and Dennis Ritchie in 1973

Thompson still had some desire to work on operating systems after Bell Labs pulled out. The tools he made while rewriting his own video game on a PDP-7 led to a whole operating system: UNIX.

Based on, expanded and improved upon many of Multics’ ideas, UNIX became the father of modern OSs.

2. Structure of UNIX-likes

Ignoring standards and specifications, in my opinion, from the point of view of a user, a UNIX-like OS has the following components:

  1. shell: A programmable replaceable command-line interpreter, with utilities for managing the whole system and support for pipelines and I/O redirection
  2. UNIX-style file system: Filesystem as a single rooted tree; objects (nodes) in the file system are inodes, and an inode can be (at least) a regular file, a directory or a device. Permissions are per user and per group.
  3. kernel: Program that manages all communication and operations between the hardware and software
For now we’ll generally just look over some of the common stuff and go into detail later, while in Linux.

2.1. shell

The operating system is divided into layers, like an onion.
Only the kernel has access to the hardware.
shells and executables (binaries) have access to the kernel.

shells provide user interfaces, both command-line and graphical. While on the topic of UNIX, we’ll only discuss the former.

Command-line shells

Command-line shells operate solely with text, often only with ASCII characters. Connecting to a computer (to the shell) is usually said to be done with either a “terminal” (“console”) or a “teleprinter” (“teletype”).

Video displays became widely available in the late 1970s; before that, access was done with a “teleprinter”, a typewriter-printer combo.


Teletype Model 33 by Arnold Reinhold

With the advent of computer displays, teleprinters were replaced with terminals, a video display-keyboard combo. Thanks to the widely popular VT100, almost all terminals support ANSI escape codes.


DEC VT100 by Jason Scott

Today we don’t usually use such specialised hardware, but the names persist with slightly different meanings.

Virtual Terminal (Console, tty): on some modern UNIX-likes (like Linux), the kernel/OS provides special devices, where a console is directly implemented/simulated with the connected computer and display.

Terminal Emulator: a special program that emulates a virtual terminal (within another display system)

Pseudoterminal (pty): a program that sits between a terminal and a shell (or other program), and makes them both think they’re directly connected

Command location and parameters

Most commands are regular programs, which exist somewhere on the system. When you enter a command name, the shell searches for an executable with that name. Searching the whole drive would be very inefficient, so the search is limited to the few directories listed in a variable called “PATH”.

You can give a full or relative path, and that will be executed directly.

Each command can be given parameters, separated by spaces, each (by convention) being either text values or options (switches).

Options usually start with a hyphen - (like -c or -current-time), and single-letter options can often be combined (-a -b -c into -abc). Newer conventions favour long options starting with two hyphens -- (--current-time); a bare -- on its own usually marks the end of options, so everything after it is treated as an operand rather than a flag. More rarely you might see --OPTION=VALUE or just OPTION=VALUE.
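As a quick sketch of these conventions (the file name is made up):

```shell
cd /tmp                   # somewhere writable
touch -- -odd-name.txt    # the bare -- marks the end of options,
                          # so -odd-name.txt is treated as a file name, not as flags
ls -l -a .                # two separate short options
ls -la .                  # the same two options, combined
rm -- -odd-name.txt
```

Without the `--`, touch would complain about an unknown option -o.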

Thompson shell

The first UNIX shell is the Thompson shell. The two major features that every other shell supports are:

  • I/O redirection: redirection of input and output, allowing insertion of input from a file or storage of output into a file:

    command [args...] < filepath
    command [args...] > filepath
    
  • pipelines: being able to redirect the output from one command to another without limit

    command1 [args...] | command2 [args...] | ...
    
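The two features combine freely; a small example (file name is made up):

```shell
cd /tmp                                     # somewhere writable
printf 'pear\napple\npear\n' > fruits.txt   # > stores output into a file
sort < fruits.txt                           # < feeds the file back in as input
sort fruits.txt | uniq -c | sort -rn        # | chains commands into a pipeline
rm fruits.txt
```

The pipeline prints each distinct line with its count, most frequent first.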

2.2. UNIX-style file system

Attributes of an inode

  • file type: Each inode represents some sort of file (data), but a “file” can also be a directory or device, so we need to know how to handle it.
    There are 7 main types (but there can exist more, depending on OS):

    • regular: just a plain old file
    • directory: as explained, a file containing hard links to other files. Each directory may appear in only a single parent (directories cannot be hard-linked into multiple places).
    • symbolic link: points to any file (or directory). It contains the (relative) path to that object (as a simple string), so a symlink might not even point to a valid target. Think of a C++ pointer (pseudocode).
    symlink /bin/mprog = "/usr/local/bin/mprog";
    exec(/bin/mprog); -> Executes /usr/local/bin/mprog
    
  • sockets: files for inter-process communication. Compared to FIFO specials, they can be used by more than two processes, work in both directions, and support file descriptors and packets

    [Process A]  <--read&write-->  [Socket 1]
    [Process B]  <--read&write-->  [Socket 1]
    ...
    
    [Process A]  --write-->  [Socket 1]  --read-->  [Process B]
                                         \-read-->  [Process C]
    
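The three most common types can be seen with ls -l, whose first column starts with the type character. A sketch in a scratch directory:

```shell
cd "$(mktemp -d)"          # fresh scratch directory
touch plain.txt            # regular file
mkdir subdir               # directory
ln -s plain.txt link.txt   # symbolic link; stores the target path as a string
ls -l                      # first character of each line is the type: -, d or l
readlink link.txt          # prints the stored target path: plain.txt
```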

Side-step into users and groups

A user in UNIX is essentially a small collection of data, the most important parts being a unique ID (number), a name (string), a group ID (number; files created by the user are in that group) and a password.

Groups are also small collections of data, but much simpler, comprising only a unique ID (number), a name (string) and a list of users that are “in” the group. Their main purpose is to simplify access control.

The user with ID 0 is called root; it is the “system administrator”, and all actions made by the system itself are done as that user. Every user has their own “home” folder (under “/home/USERNAME/”, except for root, whose home is “/root/”), in which they store personal files, as well as user-specific configuration files.
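On most systems this data lives in plain files: user records are lines in /etc/passwd, with the numeric UID in the third colon-separated field. A sketch:

```shell
# field 3 of root's line in /etc/passwd is its UID
awk -F: '$1 == "root" { print $3 }' /etc/passwd   # 0
id -un                                            # name of the current user
id -g                                             # primary group ID of the current user
```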

2.3. Kernel

The kernel has the core functionality of the operating system and bridges the gap between programs and hardware. Some important subsystems include scheduling, file, device, process and memory management.

To preserve the “everything is a file” methodology, devices and processes can be handled as files (inside /dev and /proc). Most of the time, in modern kernels, they aren’t actual files on a hard drive but “virtual” files, whose operations are handled by the kernel in a different manner than normal.

Linux is not an operating system, it is a kernel!!!

Two of the important things a kernel does are managing processes and managing memory.

2.3.1 Processes

A program is some collection of code and other data, stored in a file (or non-compiled code).
A process is the program in motion: the program itself alongside certain system states.

The system states include (but are not limited to):

  • the processor state, including the program counter and register values
  • memory map, indicating what regions of memory are allocated (to the process)
  • file descriptors, unique identifiers for the files opened by the process

On a multitasking (multiprocessing) system, which is any modern desktop system, multiple processes are always running “at once”.

In reality, one process is constantly stopped and another is run, switching between all of them (context switching). All “waiting” processes are put in a prioritized queue (the run queue).

Stopping is done either by the process itself (more on that later) or by the system, when the process has been actively running for too long.

Process Control Block

All process information is stored in a data structure, called a process control block (or process descriptor). Not to be confused with a printed circuit board!
A PCB’s data can be split into three categories:

  1. Process identification - the ID of the process itself, of its parent process, of its owner (user), etc.
  2. Process state - as described, the processor state, etc.
    On a context switch, values in registers are put into the stopped PCB and then are loaded from the running PCB.
  3. Process control information - process state, memory map, privileges, etc.

A process can be forked, meaning a new process is created with the same underlying code (but not process states and …).

Process states

  • Created: This is the initial state given to a new process. In this state, the process awaits a “ready” state.
  • Ready (Waiting): A process is put in the run queue and awaits being executed (“running” state).
  • Running: When the process’s program is currently being executed by the CPU. The process can run in either kernel mode, having access to kernel and user addresses, or user mode, having access only to its own code and data.
  • Blocked: When a process cannot continue without external change, usually when waiting for an I/O device.
  • Terminated: The process has either completed execution or has been killed. The process itself is called a “zombie process” (until it is removed).

Exit status

Every process, before being deleted (and after being in the Terminated state), passes a small number, called the exit status (exit code), to its parent process. For all intents and purposes, this number is an 8-bit unsigned integer, meaning its value is between 0 and 255.

Pretty much universally, an exit code of 0 signifies successful termination, and anything else specifies some sort of error code. Different programs specify the meaning of an exit code in different ways: some assign each number to a specific error, others use the bits of the number as flags.
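In a shell, the special parameter $? holds the exit status of the last command, so the convention is easy to observe:

```shell
true;  echo $?              # 0: success
false; echo $?              # 1: the conventional generic failure
ls /nonexistent 2>/dev/null
echo $?                     # nonzero: ls reports its error through the exit code
```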

2.3.2 Memory management

In ye olde days, you would save a bit of memory for the operating system and give all of the rest to the current process (monoprogramming), which would run until completion.

This doesn’t allow for context switching (disks are slow), and a blocked state forces the CPU to wait.

Some bad ideas include: depending on the code of all processes not to use the same addresses, using only relative addresses (the -fPIC option in gcc) or using a table that the kernel fills out with addresses.

Virtual memory


Image by Ehamberg

A program (process) works with some (sequential) addresses, and the OS (with the help of hardware) translates/maps the process’ addresses (logical/virtual addresses) to real ones (physical addresses).

The result: simplified memory management for the program, a protected address space for each process and improved flexibility.

Segmentation (without paging)

Initially, virtual memory was implemented with segmentation, where a process’ memory is divided into segments for code, stack, heap, etc. Each segment is given a chunk of contiguous memory, which can be resized, and addresses are defined as the base segment (physical) address with an offset.

In the modern day, segmentation isn’t used by itself, since resizing segments could require reordering of memory and external fragmentation, where there simply isn’t enough contiguous memory, could occur.

Paging

The virtual and physical memory is divided into fixed-sized chunks of consecutive addresses. A virtual memory chunk is called a page, while a physical memory chunk is called a (page) frame. Usually both have the same size of (at least) 4 KiB.

Every process has a page table, which maps the process’ pages to system frames. Thus every memory address is translated from its page location to a frame location.
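The split into page number and offset is simple arithmetic; a sketch assuming a 4 KiB page size:

```shell
page_size=4096                  # 4 KiB, a common page size
vaddr=$(( 0x1234 ))             # some virtual address (4660)
echo $(( vaddr / page_size ))   # page number: 1, looked up in the page table
echo $(( vaddr % page_size ))   # offset: 564, reused unchanged inside the frame
```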

Advantages include:

  • elimination of external fragmentation
  • increased flexibility, allowing sharing of memory between processes (useful for shared libraries)
  • finding free memory is fast and easy, just use the first free pages you see
  • frames can be scattered
  • more efficient swapping, moving of memory from RAM to disk and back

Disadvantages include:

  • longer memory access, because of the page table
  • internal fragmentation, where one or multiple pages might be partially empty, leaving free space that cannot be used

3. Linux from the inside

Enough theory, time to have some fun and learn Linux!

We’ll be using Minimal Linux Live (just a bootable ISO) because it’s:

We’re going to explore and look around the following stuff:

If you’re following at home

If you want to try it out yourself, after booting up Minimal Linux Live, type out and run:

wget -q http://unsecure.syndamia.com/mll-set.sh && chmod +x mll-set.sh && ./mll-set.sh

You might’ve noticed that the command above doesn’t use the normal https://syndamia.com/talks/linuxgraphy-by-strabo/mll-set.sh link. That is because, out of the box, MLL doesn’t really support secure connections (https), so I’ve made only that file available without it.

3.1. Shells

Variables and data types

Before talking about the different features newer shells support, we’ll roughly cover variables and data types (mostly bash specifics).

A variable, also called a parameter, is created with the syntax (NO spaces around the =):

name=value

Where name is either a combination of letters (upper and lower case), numbers and the underscore character, which cannot begin with a number, or one of a select few positional and special parameters.

Positional and special parameters

Some of the special parameters are:

  • @ is all positional parameters, separated by spaces
  • # is the number of positional parameters
  • ? is the exit status of the last executed (foreground) process
  • $ is the process ID of the current shell.

value, without any special surrounding characters (more on that later), is treated as a string (but it mustn’t contain spaces!); it may also be omitted, in which case it is the empty string "".

Data types

In most shells, there are 3 data types: strings, integers and (one-dimensional) indexed arrays. bash has more, like references and associative arrays, but those aren’t universal.

A value is a string when surrounded by single ' or double " quotes,
an integer when it isn’t surrounded by anything and is composed only of digits (if there are letters, it is a string; when a value is interpreted as a string versus an integer is quite tricky) and
an array when surrounded by parentheses (), where the elements inside are strings or integers, separated by spaces. An indexed array can also be created by specifying the value at any index:

name[index]=value
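Putting the above together (bash syntax; the names and values are made up):

```shell
greeting='hello'        # string (quoted)
count=42                # unquoted digits: usable as an integer
arr=(one two three)     # indexed array
arr[5]=six              # indices may be sparse
echo "$greeting $count ${arr[0]} ${arr[5]}"   # hello 42 one six
```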

Mixing types and operations

name=value
Type    | string operation                        | integer operation                       | array operation
--------|-----------------------------------------|-----------------------------------------|----------------
string  | value                                   | 0                                       | value when index is 0, empty string otherwise
integer | empty string, but sometimes value       | value                                   | value when index is 0, empty string otherwise
array   | operation is done on element at index 0 | operation is done on element at index 0 | value

Parameter expansion

With parameter expansion, actions are done on the variable (entity) as a whole, or on string and array values. Operations on integers are done with arithmetic expansion.

Parameter expansion is started with the character $ and either a name or curly braces. Retrieving a value is done with the forms $name, ${name} and ${name[index]} for arrays.

Some parameter expansions in bash:
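A few common ones in action (the variable and its value are made up; the substring replacement is bash-specific):

```shell
name='filename.txt'
echo ${#name}             # length of the value: 12
echo ${name%.txt}         # strip a suffix: filename
echo ${name#file}         # strip a prefix: name.txt
echo ${name/file/your}    # replace a substring (bash): yourname.txt
echo ${missing:-default}  # fall back when a variable is unset: default
```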

Arithmetic expansion

Arithmetic expansions allow evaluation of arithmetic expressions. They’re in the form $(( expression )).

Some useful expressions are:

Expressions work with both variable names and integers.
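For instance (the variable name is made up):

```shell
x=7
echo $(( x * 3 + 1 ))      # 22
echo $(( x % 2 ))          # 1: remainder
echo $(( x > 5 ? 1 : 0 ))  # 1: comparisons and the C-style ternary also work
```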

Bourne shell

Most modern shells support a lot of the more defining features of the Bourne shell, which include:

  • Job control: Management of currently running groups of processes

    • &: A process can be run in the background with an ampersand (&) at the end of the command:
    command [args...] &
    
    • Ctrl+z: suspend the currently running (foreground) job
    • bg: start a suspended job in the background
    • fg: resume the last job to be put in the background and make it the current job with which we’re interacting
    • jobs: list all active (background) jobs
    • Every job is identified with a job ID (number), which you can use by prepending a percentage sign. For example, to resume the job with JID 4:
    fg %4
    
    • For managing everything, there is usually a job table. Upon shell termination, the shell tells all jobs in that table to terminate and waits for them.
      disown: remove a job from the job table
    • kill: send a signal to a process or job, which that process will then have to handle accordingly.
      Common ones are -KILL to immediately stop the process, -QUIT to quit it (with a core dump), -ABRT to abort the current action, -TERM to request an orderly shutdown and -STOP to suspend the process (resume it with -CONT)
  • heredoc: File literal, meaning it is a user “string” which is interpreted as a file

    • starts with << NAME, where NAME can be anything you want; it marks the beginning and end of the heredoc. << is also a redirection symbol
    • on every new line write out your text, all characters will be preserved
    • to end it, write out NAME at the beginning of a new line, on its own. Example:
    cat << MYFILE
    This is
    some
        text
    MYFILE
    
  • control operators: Control what command is executed, depending on exit status

    • expr1 && expr2: Run expr2, only if expr1 exited successfully
    • expr1 || expr2: Run expr2 if expr1 exited unsuccessfully
    • expr1 ; expr2: Run expr2 after expr1 (sequential execution)
  • redirection: Outside of having < and > for I/O redirection, often there is also

    • >>: Acts like >, putting text into a file, but rather than overwriting everything, it appends it
    • <<: As explained, for heredocs
    • <<<: To the right is a string (herestring), and it is interpreted as a file
    cat <<< "Hello World!"
    
    • Standard input, standard output and standard error are all files with which the shell works. Typing text in the shell puts it into stdin, command output is put into stdout (and shown to the user) and errors are put into stderr (also shown to the user).
      Each one of them is numbered from 0 to 2, and you can specify redirection by appending or prepending the number to the redirection symbol.
    • N>outputfile: redirects the output from standard stream N into outputfile
    • N>&M: redirects the output from standard stream N into standard stream M
    • N<inputfile: redirects input file contents to standard stream N
    • N<&M: redirects standard stream M to standard stream N
  • built-in test command: With test you can evaluate conditional expressions. You can also often use brackets [ args... ] instead of test args...
    • Some of the available file checks
    • -e FILENAME: FILENAME exists
    • -d FILENAME: if FILENAME is a directory
    • -h FILENAME: if FILENAME is a symbolic link
    • -w FILENAME: you can write in FILENAME
    • Some of the available string checks
    • -n STR: string STR has nonzero length
    • -z STR: string STR has length zero
    • STR1 = STR2 and STR1 != STR2: self explanatory
    • Some of the available number checks
    • INT1 -eq INT2: equal integers
    • INT1 -gt INT2: INT1 > INT2
    • INT1 -ge INT2: INT1 >= INT2
    • Operators: ! - negation, -a - logical AND, -o - logical OR, parentheses for grouping (escaped with \)
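A short sketch tying control operators, redirections and test together (the paths are examples):

```shell
# run the right-hand side only if the left succeeds
[ -e /etc/passwd ] && echo "passwd exists"
# add a fallback branch for failure
grep -q root /etc/passwd && echo "found" || echo "missing"
# silence errors by sending stderr to /dev/null, then react to the exit status
ls /nonexistent 2>/dev/null || echo "listing failed"
```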

3.2. Files and directories

  • /dev: devices
  • /proc: process information
  • /bin: main command programs (binaries), used by users
  • /sbin: command programs, used for the system operation
  • /usr: other system resources
    • /usr/bin: all other user command binaries

Everyday file system commands

Other useful file system commands

Everyday text (file) commands

Other useful text (file) commands

Device commands

  • mount: bind another file system to a location in the file system
  • umount: remove binding to a file system
  • lsusb: lists USB devices
  • lspci: lists PCI devices

System information commands

  • date: display current date and time
  • hostname: show (or set) the system’s hostname (network name)
  • free: display free memory on the system

User and group management

Common configuration files

  • /etc: host-specific system configuration files
    • /etc/fstab: read on boot by mount to set up the listed file systems
    # device-spec   mount-point  fs-type  options              dump  pass
    /dev/sda1       /            ext4     defaults             1     1
    UUID=123A-456B  /boot        vfat     noauto,noatime       1     2
    LABEL=Vault     /mnt/Vault   auto     nosuid,nodev,nofail  0     0
    
    • /etc/hosts: list of host names (domain names) and their corresponding IP address, when a DNS server doesn’t do the job
    # ipaddress    domains
    127.0.0.1      localhost mywebsite.com something.else
    62.44.101.138  my.uni
    
    • /etc/bashrc: global defaults and aliases used by the bash shell
    • /etc/motd: message of the day (plain text, not executed), shown after login but before the shell is run
  • /usr: other system resources
    • /usr/lib: library files
    • /usr/local: locally installed system software
    • /usr/share: architecture independent data, like manuals
  • /var: variable data, like logs and cache
  • /home: home directories of all users
  • /lib: libraries and kernel modules
  • /mnt: mounted temporary filesystems
  • /root: home directory for the root user

4. Software licensing

It is important to note that most software in Linux, including the kernel itself, is under a variety of open licenses. An open license is a license which allows others to reuse the original work, under some restrictions.

For software, such a license is applied to the source code from which the original application was made. Some commonly used ones, with a (NON-LEGALLY BINDING) summary include:

This is important, since pretty much everything can be freely modified by anyone (for the better or worse, but generally better).

5. What makes up a useful OS

A Linux-based operating system is nothing more than the Linux kernel, combined with an assortment of programs.
Overall, the main elements that make up a useful desktop operating system (alongside a command-line shell) are:

  1. Bootloader: load the kernel and operating system
  2. Init system: well managed way to start everything inside the OS
  3. Package manager: way to manage our binaries
  4. Desktop manager: graphical shell

5.1. Bootloader

The computer boot process is a relay race.

In a “standard” BIOS-MBR boot configuration:

BIOS -> Master Boot Record -> Active Partition -> Bootloader ->
-> Boot menu (optional) -> Kernel -> Everything else

The BIOS is baked into the motherboard, the MBR is pretty universal (and it’s less than 512 bytes), and the kernel we know and love.

That leaves us with the need for a bootloader. On Linux, the most used one is called GRUB.

Its configuration is found in /etc/default/grub (from which the full config is then generated with grub-mkconfig), and GRUB can be “installed in the partition” with grub-install.

GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_TIMEOUT_STYLE=menu
GRUB_CMDLINE_LINUX=""
GRUB_CMDLINE_LINUX_DEFAULT=""
GRUB_GFXMODE=640x480

5.2. Init system

Now we need to load the user stuff, like a network manager, the graphical user interface and so on.

Init systems use configuration files, called services, to start a program in some way.

  • systemd, which also aims to provide easy system configuration
  • openrc, a simple and modern init system

[Unit]
Description=Service

[Service]
Type=simple
User=root
# ExecStart is required for the service to actually run something;
# the path below is only an example
ExecStart=/usr/local/bin/myprogram

[Install]
WantedBy=multi-user.target

5.3. Package manager

Programs are just executable files, usually placed in a specific folder, like /usr/bin.

Updating all of them means replacing all of the binaries, which means tracking where each one came from; installing a new one might require installing multiple others, and so on.

On modern systems, managing binaries is automated by an application called a package manager.

Additional benefits of package management automation include:

  • checking for package validity (and tampering)
  • utilising multiple server mirrors
  • automatic dependency management and installation
  • version caveats
  • (indirectly) improved stability because of maintainer compatibility checks
apt install PKG
apt remove PKG
apt upgrade

5.4. Window systems

A window system is your graphical shell: the graphical way to interact with your computer (the kernel). Its main components are:

6. War on distros

A distribution is a complete set of all the applications that you might use.

Overall, there are 4-5 distributions on which 90% of all other distros are based:

  1. Slackware: as the oldest still maintained distro, it served as inspiration for a lot of other distros. It aims to be as simple and as close to UNIX as possible.
  2. Debian (and Ubuntu): it is the most popular distribution (if we also include Ubuntu). Ubuntu aims to be modern and fancy, while Debian aims to be as stable as possible, which makes it the most popular in server usage.
  3. Gentoo: the distribution which defined source-based (non-binary) software distribution
  4. Arch Linux: a very popular and modern distribution with the goal of providing the latest and greatest software.

We haven’t touched on this much, but Linux distributions aren’t limited to desktop or server computers.

  1. AOSP: the base of the whole Android operating system
  2. OpenWRT: a router operating system

7. Demos - virtual machine and setup from zero

Now for the best part: we’ll be installing a Linux distribution on a virtual machine and then directly on a laptop, running Windows 10!

I’ve chosen to install Linux Mint, since it is generally targeted towards newcomers and will feel familiar enough to Windows users.

Currently there won’t be an official recording of the process, you should’ve been here live!

Thank you for your time

Sources

All of these contain, at most, small paraphrased sentences in the slides. They served as personal educational and reference tools.

Licensing

Images on slides “2.1. shell” and “2.2. UNIX-style file system” are made by me and licensed as content.