Hi all - I am learning about Linux and want to see if my understanding is correct on this - the list of major parts of any distro:

  1. the Linux Kernel
  2. GRUB or another bootloader
  3. one or more file systems (gotta work with files somehow, right?)
  4. one or more Shells (the terminal - bash, zsh, etc…)
  5. a Desktop Environment (the GUI, if included, like KDE or Gnome - does this include X11 or Wayland or are those separate from the DE?)
  6. a bunch of Default applications and daemons (is this where systemd fits in? I know about the GNU tools, SAMBA, CUPS, etc…)
  7. a Package Manager (apt, pacman, etc…)

Am I forgetting anything at this 50,000 foot level? I know there are lots of other things we can add, but what are the most important things that ALL Linux distributions include?

Thanks!

  • @TootSweet · 73 · 1 year ago

    Hey! Great questions.

    It seems like what you’re asking about are more what I’d think of as components of a Linux “system” or “install.”

    First off, it’s definitely worth saying that there aren’t a lot of rules that would apply to “all” Linux systems. Linux is huge in embedded systems, for instance, and it’s not terribly uncommon to find embedded Linux systems with no shells, no DE/WM, and no package manager. (I’m not 100% sure a filesystem is technically necessary. If it is, you can probably get away with something that’s… kinda sorta a filesystem. But I’ll get to that.)

    Also, it’s very common to find “headless” systems without any graphical system whatsoever. Just text-mode. These are usually either servers that are intended to be interacted with over a network or embedded systems without screens. But there are a lot of them in the wild.

    There’s also Linux From Scratch. You can decide for yourself whether it qualifies as a “distribution”, but it’s a way of running Linux on (typically) a PC (including things like DE’s) without a package manager.

    All that I’d say is truly necessary for every Linux system is 1) a bootloader, 2) a Linux kernel, and 3) a PID 1 process, which may or may not be an init system. (The “PID 1 process” is just the first process that is run by the Linux kernel after the kernel starts.)

    The “bunch of default applications and daemons” feels like three or four different items to me:

    • Systemd is an example of an “init system.” There are several available: OpenRC, Runit, etc. Its main job is to manage/supervise the daemons and ensure they’re running when they’re supposed to be. (I’ll mention quickly here that Systemd has a lot more functionality built in than just managing daemons and gets a bad rap for it: network configuration, cron, dbus for communication between processes, etc. But it still probably qualifies as “an init system.” Just not just an init system.) A couple of commands for poking at this are sketched right after this list.
    • Daemons are programs that kind of run in the background and handle various things.
    • Coreutils are probably something I’d list separately from user applications. Coreutils are mostly for interacting with low-ish level things. Formatting filesystems. Basic shell commands. Things like that.
    • User applications are the programs that you run on demand and interact with. Terminal emulators, browsers, compilers, things like that. (I’ll admit the line between coreutils and user applications might be a little fuzzy.)
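
    For example, on a systemd-based desktop distro (cups.service here is just an example unit that may or may not exist on your machine):

        ps -p 1 -o comm=                      # prints the name of the PID 1 process, e.g. "systemd"
        systemctl list-units --type=service   # the daemons (services) the init system is supervising
        systemctl status cups.service         # the state of one daemon, in this case the CUPS printing daemon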

    As for your question about graphical systems, X11 and Wayland work a little differently. X11 is a graphical system that technically can be run without a desktop environment or window manager, but it’s pretty limited without one. The DE/WM runs as one or more separate processes communicating with X11 to add functionality like a taskbar, window decorations, the ability to have two or more separate windows and move them around and switch between them, etc. A Wayland “compositor” is generally the same process handling everything X11 would handle plus everything the DE/WM would handle. (Except for the Weston compositor that uses different “shells” for DE/WM kind of functionality.)
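
    If you’re curious which of the two your current session is using, a couple of environment variables usually give it away (not every setup exports all of them, so treat this as a rough check):

        echo $XDG_SESSION_TYPE    # typically "x11" or "wayland" (or "tty" on a text console)
        echo $DISPLAY             # set when an X server is involved, e.g. ":0"
        echo $WAYLAND_DISPLAY     # set under a Wayland compositor, e.g. "wayland-0"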

    As far as things that might be missing from your list, I’ll mention the initrd/initramfs. Typically, the way things are done, when the Linux kernel is first loaded by the bootloader, an “initial ramdisk” is also loaded. Basically, the kernel creates a filesystem that lives only in RAM and populates it from an archive file called an “initramfs”. (“initrd” is the older way to do the same thing.) Sometimes the initramfs is bundled into the same file as the kernel itself. But that initial ramdisk provides an initial set of tools necessary to mount the “main” root filesystem. The initramfs can also do some cool things like handling full disk encryption.
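
    If you want to peek inside the initramfs on a running system, most distros ship a tool for it (the names and paths below are examples and vary by distro):

        lsinitramfs /boot/initrd.img-$(uname -r)   # Debian/Ubuntu: list the files packed into the initramfs
        lsinitcpio /boot/initramfs-linux.img       # Arch: same idea, different tool and path
        lsinitrd                                   # Fedora/RHEL (dracut): inspect the current initramfs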

    So, the whole list of typical components for a PC-installed Linux system that’s meant to be interacted with directly, as I’d personally construct it, would be something like:

    • Bootloader
    • Linux Kernel
    • Initramfs
    • Filesystem(s)
    • Shell(s)
    • Init system
    • Daemons
    • Coreutils
    • Graphical system (X11 or Wayland potentially with a DE/WM.)
    • User applications
    • Package manager

    But technically, you could have a functional, working “Linux system” with just the following (a rough sketch of booting such a minimal setup follows the list):

    • Bootloader
    • Linux Kernel
    • Either a nonvolatile filesystem or initrd/initramfs (and I’m not 100% sure this one is even strictly necessary)
    • A PID 1 process
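
    To make that concrete, here’s a rough sketch of booting such a minimal system in QEMU, assuming you have a statically linked busybox available (the kernel image and file names are placeholders for whatever you have):

        # Build a tiny initramfs whose only contents are busybox and a /bin/sh symlink
        mkdir -p tinyroot/bin
        cp "$(command -v busybox)" tinyroot/bin/
        ln -sf busybox tinyroot/bin/sh
        ( cd tinyroot && find . | cpio -o -H newc | gzip ) > tiny-initramfs.cpio.gz

        # Boot a kernel with that ramdisk; rdinit=/bin/sh makes the shell the PID 1 process
        qemu-system-x86_64 \
            -kernel /boot/vmlinuz-$(uname -r) \
            -initrd tiny-initramfs.cpio.gz \
            -append "console=ttyS0 rdinit=/bin/sh" \
            -nographic

    No package manager, no init system, no “real” bootloader (QEMU stands in for it here): just a kernel, an initramfs, and a PID 1 shell.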

    Hopefully this all helps and answers your questions! Never stop learning. :D

    • @aodhsishaj · 2 · 1 year ago

      You would need some non-volatile storage to hold your bootloader, be that on the network or local. Also, any shell more complicated than a bare tty will need to store config files somewhere to run.

  • @[email protected] · 28 · 1 year ago

    I would say that, ranked from most important to least important, the components are:

    1. kernel
    2. init system (systemd, openrc, runit…)
    3. C library (glibc, musl)
    4. filesystem
    5. coreutils
    6. shell
    7. bootloader
    8. package manager
    9. x11/Wayland (if any)
    10. sound system (if any)
    11. WM (if any)
    12. DE (if any)

    • @[email protected] · 6 · 1 year ago

      what do u mean by important? like ‘essential to the system’, or ‘important to consider when choosing a distro’, or what?

    • @[email protected] · 4 · 1 year ago

      I’m surprised you put shell so high when it tends to be less impactful in my experience. Like, I care a lot more whether my distro is using GNOME instead of KDE than whether it’s using bash instead of zsh. Plus it’s easy to install and use a different shell.

      • @[email protected] · 3 · 1 year ago

        It is indeed easy to install another shell, but it is quite difficult to configure it, while installing a DE is usually done with just one command. And you can use Linux without a DE, but not without a shell. Many distributions don’t even install a DE by default.

        • @[email protected] · 2 · 1 year ago

          Okay but unless you are spending a lot of time in the command line, one (POSIX compliant) shell is as good as another. Like yes every distro needs a shell, but I don’t much care which shell it is.

          • R0cket_M00se · 1 · 1 year ago

            That’s irrelevant; he’s ranking them in order of criticality or necessity. Just because a component’s individual choices are all the same basic flavor doesn’t change that.

    • @[email protected] · 2 · 1 year ago

      One thing I don’t know: if C is a compiled language already, what exactly does the C library do?

      • @[email protected] · 6 · 1 year ago

        It is a dynamically linked library, meaning it’s not inside the compiled binary but is assumed to already be on the system, as opposed to a statically linked binary. This lowers the file size of the binaries, because most of them will use the standard library.

        edit: this may not be 100% correct, but it’s the general idea
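
        A quick way to see the difference, assuming gcc is installed and you have some trivial hello.c lying around (the -static build also needs the static libc package on some distros):

            gcc hello.c -o hello-dynamic          # default: links against the shared C library
            gcc -static hello.c -o hello-static   # copies the needed C library code into the binary itself
            ldd hello-dynamic                     # lists the shared libraries it needs, e.g. libc.so.6
            ldd hello-static                      # reports "not a dynamic executable"
            ls -lh hello-dynamic hello-static     # the static one is noticeably bigger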

      • @[email protected] · 4 · 1 year ago

        Most C binaries do not contain everything needed for their execution; that would make them too platform-specific. What most C programs do is use the platform’s standard C library for low-level things and communication with the system, like memory allocation or stdin/stdout, for example.

    • Alex · 2 · 1 year ago

      Wouldn’t the C library be more important than the init system?

  • @faethon · 20 · 1 year ago

    I think you would also need an initial run process, such as systemd or SysV init with its runlevels.

    • lemmyvore · 5 · 1 year ago

      Fun fact, the init process can be anything, even /bin/bash or a shell script. But if it ends or dies, so does the system, and of course you want extra features like multi-user capability, a better interface, etc. So it’s typically a more complex system like you said, one that starts a bunch of other things. But you can still see the init process with PID 1 there in the process list. 😊
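
      You can try that from the bootloader without changing anything permanently; a rough sketch (the kernel path and root device below are placeholders for whatever your existing GRUB entry uses):

          # At the GRUB menu, press 'e' on the boot entry and append init=/bin/bash to the 'linux' line:
          linux /vmlinuz-6.x root=/dev/sda2 ro init=/bin/bash

          # After booting you land in bash running as PID 1:
          echo $$                  # prints 1
          mount -o remount,rw /    # the root filesystem is often mounted read-only at this point
          exec /sbin/init          # hand control to the real init, or just reboot when you're done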

  • @[email protected] · 12 · 1 year ago

    Also:

    • init system, without which you’d be left with only one program running at a time
    • some programs are written in an interpreted language (e.g. Python, shell, Perl), so the interpreter would also be required
    • C library, without which none of the above would function (yes, even if all the programs are statically compiled, each executable still has that library’s code included in it)
    • this one is not necessary at runtime, but is needed for creating a working system: the toolchain – preprocessor, compiler, assembler, linker – all the stuff for transforming source code into executables (sketched below)
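
    For the toolchain point, the classic stages can be run one at a time with gcc, assuming some trivial hello.c to play with:

        gcc -E hello.c -o hello.i   # preprocessor: expand #include and macros
        gcc -S hello.i -o hello.s   # compiler proper: C to assembly
        gcc -c hello.s -o hello.o   # assembler: assembly to an object file
        gcc hello.o -o hello        # linker: object file(s) plus the C library into an executable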

    Another comment mentioned Linux From Scratch; I’d totally recommend that, but it would take so much of your time manually building stuff (which is why it is so educational). If you don’t have the time, you may want to opt for Gentoo instead.

    • lemmyvore · 3 · 1 year ago

      I would also mention:

      • The multi-user system, which is a bunch of config files, libraries, utils and UIs that deal with logging in or doing stuff as a specific user.
      • The logging system. Individual applications can simply log to their own files, but for system services the logging is usually centralized and offers additional features (like remote logging, etc.).
      • Setting up networking is pretty much mandatory these days. (You can poke at all three of these from a shell; see below.)
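
      For example, on a typical systemd-based install (names vary between distros):

          getent passwd              # the user database (/etc/passwd plus whatever NSS adds)
          journalctl -b -p warning   # centralized logs for the current boot, warnings and worse
          ip addr                    # the interfaces and addresses the network stack has set up
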
  • funkajunk · 11 · 1 year ago

    You pretty much got it, except for the fifth point.

    A desktop environment (“DE”) is separate from the display server/compositor (X11 or Wayland), but can’t exist without it.

    At the end of the day, a DE is really just a “window manager” with a bunch of bundled applications, like taskbars/panels, a file manager, an app menu, etc. It’s as minimal or as feature rich as you want it to be.

    The window manager dictates what to draw on the screen and where, but the compositor is what actually does the work. One is kind of useless without the other.
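
    If you want to see what’s filling those roles on your own machine, a couple of rough checks (not every environment sets these, and wmctrl is an extra package):

        echo $XDG_CURRENT_DESKTOP   # the desktop environment, e.g. GNOME or KDE
        wmctrl -m                   # the window manager's name (an X11 tool, so it may not work under Wayland)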

    Hopefully that makes sense, I’m not a rocket surgeon.

  • @just_another_person · 10 · 1 year ago

    The major parts of any distro are just bootloader, kernel, init, shell, and package system. The filesystem isn’t “part” of the distro, it’s just an abstraction layer to work with data on the drive, and should be considered independent of the packaged distribution itself.

    With the above, you can run the basics of Linux on a device. The DE is not needed, and included packages and libraries are at the discretion of the maintainers. The included choices of all the above is the only thing that differentiates each distro.

    If it helps your understanding at all, back in the day, in order to install something like Slackware, you had to build each layer of these things manually, like so: format and partition the disk from a DOS disk, copy the bootloader to the newly partitioned HDD, boot to single-user mode, compile the kernel, add entries to the bootloader, reboot from disk into the Linux kernel, open a TTY, set up a user and shell, reboot again, compile the DE, set the init level and basic services, reboot into the DE, and then you had a desktop.

  • @[email protected] · 7 · 1 year ago

    Systemd has gone way beyond what was supposed to be a replacement for init.rc.

    Most important thing… not ALL Linux distros include systemd as the default init system. That’s the beauty of Linux (and POSIX in general), you can choose.

  • chameleon · 5 · 1 year ago

    A biggie you miss is the toolchain: the compiler/binutils/linux-headers/libc/libstdc++ combination. The libc and usually libstdc++ are key components of any install. The other parts usually don’t make it to non-dev-desktops, but the distro couldn’t be made without them, so they’re virtually always available as packages.

    Only exception is if the entire distro is cross-compiled or it’s made exclusively for containers, but those kinds of special distros break every rule imaginable anyway. Some might not even ship a bootloader or a Linux kernel by themselves.

  • @[email protected] · 4 · 1 year ago

    The list is generally correct, but these days systemd has made quite an impact too. If a distribution uses systemd, it has one suite of software to handle everything from booting (its systemd-boot can even take GRUB’s place) to starting and tracking the status of all system services, etc. It’s probably the largest change to the Linux ecosystem in a long time.
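
    A taste of how much ground it covers, assuming a systemd-based distro (some of these pieces are optional and not every distro enables them):

        systemd-analyze           # boot-time breakdown (systemd also manages early boot)
        systemctl status          # overall tree of the services it supervises
        journalctl -b             # logging for the current boot
        timedatectl               # clock and NTP handling
        networkctl list           # networking, if systemd-networkd is in use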

    X11 and Wayland are display protocols, so things like desktop environments and window managers depend on one of them being installed. Without them you don’t get any graphics beyond the console; it’s all built on top of one of those.

  • thelastknowngod · 4 · 1 year ago

    It’s easier to think about Linux in the context of what an individual application needs to run. Pretty much everything you do will have these components (traced for one concrete daemon in the sketch after this list).

    • configuration
    • an executable
    • a communication mechanism (dbus, networking, web server, etc)
    • something that decides if the application runs or not (systemd, monit, docker/docker compose, kubernetes scheduler, or you as the user)
    • a way of accepting input (keyboard and mouse, web requests, database queries, etc)
    • a way of delivering an output (logging to unique log files, through syslog, or to stdout/stderr, showing something on a screen, playing a sound, returning a message to the client, etc)
    • storage (optional)
    • some cpu and memory capacity
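
    For instance, tracing those pieces for one concrete daemon, sshd (assuming it’s installed and systemd-managed; the unit is called ssh on Debian-family distros and sshd elsewhere):

        systemctl cat sshd          # who decides whether it runs, plus the path to its executable
        ls /etc/ssh/sshd_config     # its configuration
        ss -tlnp | grep sshd        # its communication mechanism: a listening TCP socket (-p needs root)
        journalctl -u sshd -n 20    # its output, collected by the centralized log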

    That’s really it. If something isn’t working, it’s pretty much exclusively going to fall into one of those categories. What that means is going to vary significantly from app to app, but understanding that this is how literally everything works makes the troubleshooting process a lot easier.

  • @TCB13 · 3 · 1 year ago

    Systemd.

  • ZephyrXero · 3 · 1 year ago

    You need the “Userland” programs. Basic things like ls, cp, cat, etc. Usually it’s GNU core tools, but there’s also BusyBox or BSD equivalents.
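
    An easy way to see which flavor you have, and what BusyBox looks like if it’s installed (it often isn’t on desktop distros):

        ls --version | head -n 1   # GNU coreutils identifies itself here; BusyBox/BSD versions won't
        busybox --list | head      # the applets a single busybox binary can provide
        busybox ls /               # the same basic 'ls' functionality from that one binary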

  • FauxPseudo · 3 · 1 year ago

    Package management is optional according to Slackware and Linux From Scratch.

    A key part you left out is the init scripts. Without those you don’t have the fundamental, under-the-hood flavor of a distribution.
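
    For anyone who hasn’t seen one, here’s a stripped-down sketch of a classic SysV-style init script (‘mydaemon’ is made up; real scripts add LSB headers, PID files, a status action, etc.):

        #!/bin/sh
        # /etc/init.d/mydaemon (hypothetical): run by the init system on runlevel changes
        case "$1" in
            start) /usr/sbin/mydaemon & ;;
            stop)  pkill -x mydaemon ;;
            *)     echo "Usage: $0 {start|stop}"; exit 1 ;;
        esac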

  • kpw · 1 · 1 year ago

    Honestly, I wouldn’t worry too much about how the OS works. If you’re very ambitious, you could try to install Arch in a virtual machine environment: https://wiki.archlinux.org/title/Installation_guide

    Installing Arch for the first time taught me a lot about how my system works, since you have to choose all the parts that make up your system yourself.