What do you advise for shell usage?

  • Do you use bash? If not, which shell do you use: zsh, fish? Why?
  • Do you write #!/bin/bash or #!/bin/sh? Do you write fish-exclusive scripts?
  • Do you have two folders, one for proven commands and one for experimental?
  • Do you publish/share those commands?
  • Do you sync the folder between your server and your workstation?
  • What should people have told you to do/use?
  • Good practice?
  • General advice?
  • Is it bad practice to create a handful of commands like podup and poddown (replacing podman compose up -d and podman compose down), podlog (podman logs -f --tail 20 $1), or podenter (podman exec -it "$1" /bin/sh)?

Background

I started bookmarking every somewhat useful website. Whenever I search for something a second time, it pops up as the first search result. I often search for the same Linux commands as well. When I moved to atomic Fedora, I had to search for rpm-ostree (POV: as a new user, it was a horrible command to remember) or sudo ostree admin pin 0. Usually, I bookmark the website and can get back to it. One day, I started putting everything into my .bashrc file. Sooner rather than later I discovered that I could simply add ~/bin to my $PATH variable and put many useful scripts or commands into it.

For the most part I simply used bash. I knew that you could somehow extend it but I never did. Recently, I switched to fish because it has tab completion. It is awesome and I should’ve had completion years ago. This is a game changer for me.

I hated that bash would show the whole path in the prompt. I added PS1="$ " to my ~/.bashrc file. When I need to know the path, I simply type pwd. Recently, I found Starship, which has themes and adds an extra line just for the path. It colorizes the output and highlights whenever I’m in a toolbox/distrobox. It is awesome.

  • jlsalvador@lemmy.ml · 28 points · 8 months ago
    #!/usr/bin/env bash
    

    A dotfiles folder as a git repository, and a dotfiles/install script that soft-links all configurations into their places.

    Two files: ~/.zshrc (without secrets, could be shared) and another for secrets (sourced by .zshrc if it exists).
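
    A minimal sketch of that conditional sourcing (assuming the secrets live in ~/.zshrc.secrets; the file name is illustrative):

    # at the end of ~/.zshrc: load secrets only if the file exists
    [ -f ~/.zshrc.secrets ] && source ~/.zshrc.secrets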

        • unlawfulbooger@lemmy.blahaj.zone · 20 points · 8 months ago

          That shebang is used because bash isn’t always in /usr/bin/bash.

          On macOS the system bash at /bin/bash is very old (bash 3.2), so many users install a newer version with Homebrew, which ends up in PATH, which /usr/bin/env looks at.

          Protip: I start every bash script with the following two lines:

          #!/usr/bin/env bash
          set -euo pipefail
          

          set -e makes the script exit if any command (outside of constructs like if conditions) exits with a non-zero exit code.

          set -u makes the script exit when it tries to use an undefined variable.

          set -o pipefail makes the exit code of a pipeline the rightmost non-zero exit status in the pipeline, instead of always the exit status of the rightmost command.
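
          A quick way to see set -o pipefail in action (illustrative):

          set -o pipefail
          false | true
          echo $?  # prints 1; without pipefail it would print 0 (the exit status of true)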

        • calm.like.a.bomb@lemmy.dbzer0.com · 8 points · 8 months ago

          #!/usr/bin/env will look in PATH for bash, and bash is not always in /bin, particularly on non-Linux systems. For example, on OpenBSD it’s in /usr/local/bin, as it’s an optional package.

          If you are sure bash is in /bin and this won’t change, there’s no harm in putting it directly in your shebang.

    • mryessir@lemmy.sdf.org · 1 point · 8 months ago

      Instead of an install script, check out GNU Stow. It does exactly that, and you can interactively choose which things to install/symlink.

  • bionicjoey@lemmy.ca · 12 points · 8 months ago

    Do you use bash?

    Personally I use Bash for scripting. It strikes the balance of being available on almost any system, while also being a bit more featureful than POSIX sh. For interactive use I bounce between bash and zsh depending on which machine I’m on.

    Do you write #!/bin/bash or #!/bin/sh?

    I start my shell scripts with #! /usr/bin/env bash. This is the best way of ensuring that the bash interpreter called is the one the user expects (even if more than one is present, or it lives in an unusual location).

    Do you have two folders, one for proven commands and one for experimental?

    By commands, do you mean bash scripts? If so, I put the ones I have made relatively bulletproof in ~/bin/, since that particular folder is usually added to the PATH automatically. If I’m working on a script that isn’t ready for that yet, or one that belongs with a specific project/workflow, I keep it elsewhere (e.g. with that project) instead.

    Do you sync the folder between your server and your workstation?

    No. I work on lots of servers, so for me it’s far more important to know the vanilla commands and tools rather than expect my home-made stuff to follow me everywhere.

    good practice? general advice?

    Pick a bash style guide and follow it. If a line is longer than 80 characters, find a better way of writing that logic. If your script file is longer than 200 lines, switch to a proper programming language like Python. Unless a variable is meant to interact with something outside of your script, don’t give it an all-caps name.
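
    For example, under that naming convention (names are illustrative):

    # script-local variables: lowercase
    input_file="data.txt"
    max_retries=3

    # exported/environment variables only: ALL CAPS
    export MYAPP_CONFIG_DIR="/etc/myapp"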

    Is it bad practice to create a handful of commands like podup and poddown (replacing podman compose up -d and podman compose down), podlog (podman logs -f --tail 20 $1), or podenter (podman exec -it "$1" /bin/sh)?

    This is a job for bash aliases.
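
    A sketch of what that could look like in ~/.bashrc (aliases for the fixed commands; functions for the ones that take an argument, since aliases can’t place an argument mid-command):

    alias podup='podman compose up -d'
    alias poddown='podman compose down'

    # functions, because these take an argument
    podlog() { podman logs -f --tail 20 "$1"; }
    podenter() { podman exec -it "$1" /bin/sh; }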

    • A Phlaming Phoenix@lemm.ee · 2 points · 8 months ago

      Good advice. I’ll add that any time you have to parse command line arguments with any real complexity you should probably be using Python or something. I’ve seen bash scripts where 200+ lines are dedicated to just reading parameters. It’s too much effort and too error prone.

      • bionicjoey@lemmy.ca · 4 points · 8 months ago

        It depends. Parsing command-line arguments can be done in a very lightweight way if you follow the bash philosophy of positional/readline programming rather than object-oriented programming. Basically, think of each line of input (including the command line) as a list data structure of space-separated values, since that’s the underlying philosophy of all POSIX shells.

        Bash is basically a text-oriented language rather than an object-oriented language. All data structures are actually strings. This is aligned with the UNIX philosophy of using textual byte streams as the standard interface between programs. You can do a surprising amount in pure bash once you appreciate and internalize this.

        My preferred approach for CLI flag parsing is to use a case-esac switch block inside a while loop where each flag is a case, and then within the block for each case, you use the shift builtin to consume the args like a queue. Again, it works well enough if you want a little bit of CLI in your script, but if it grows too large you should probably migrate to a general purpose language.

        • bionicjoey@lemmy.ca · 4 points · 8 months ago

          Here’s a simple example of what I mean:

          #! /usr/bin/env bash

          # loop as long as there is a next argument
          while [[ -n $1 ]]; do
            case $1 in
              -a) echo "flag A is set" ;;
              -b|--bee) echo "flag B is set" ;;          # short or long form
              -c) shift; echo "flag C is $1" ;;          # value is the next argument
              --dee=*) echo "flag D is ${1#--dee=}" ;;   # value is inline after "="
            esac
            shift   # consume the argument we just handled
          done
          

          This shows long flags (with B) and flags that take parameters (with C and D). The parameters work correctly with quoted strings containing spaces; for example, you could call this script with --dee="foo bar" and it will work as expected.

        • MigratingtoLemmy@lemmy.world · 1 point · 8 months ago

          Hoho, now do that in POSIX shell.

          I had a rude awakening the day I tried it, but my scripts are bulletproof now (I think), so I don’t mind at this point.
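
          For reference, the loop above is almost POSIX already; a sketch of a POSIX sh version (only the [[ ]] test needs replacing):

          #!/bin/sh
          while [ $# -gt 0 ]; do
            case $1 in
              -a) echo "flag A is set" ;;
              -b|--bee) echo "flag B is set" ;;
              -c) shift; echo "flag C is $1" ;;
              --dee=*) echo "flag D is ${1#--dee=}" ;;
            esac
            shift
          done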

          • bionicjoey@lemmy.ca · 1 point · 8 months ago

            Imma be real, I never remember which parts of bash aren’t POSIX. Luckily it doesn’t matter in my line of work, but it’s good to be aware of if you have a job that often has you in machines running other types of UNIX.

  • taladar@sh.itjust.works · 10 points · 8 months ago

    I use bash for scripts almost exclusively, even though I use zsh interactively (startup scripts for zsh are an obvious exception).

    The vast majority of my scripts start with

      set -e -u
    

    which makes the script exit if a command (that is not in one of a few special places, like an if condition) exits with an error status code, and also complains about unbound variables when you use them.

    Use

    bash -n
    

    and

    shellcheck
    

    to check your script for errors and problems before you run it.

    Always use curly braces for variables to avoid issues with strings after the variable name being interpreted as part of the variable name.
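
    For example:

    file="report"
    echo "$file_name"    # expands the (unset) variable file_name, prints an empty line
    echo "${file}_name"  # prints "report_name"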

    Always use 10# before numbers in $(()) expressions to avoid leading zeroes turning your decimal number variables into octal ones.
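
    For example:

    month="09"
    echo $(( month + 1 ))      # error: "09" is treated as octal and is invalid
    echo $(( 10#$month + 1 ))  # prints 10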

    Always use

    while read -r foo
    do
    ...
    done < <(command ...)
    

    instead of

    command ... | while read -r foo
    do
    ...
    done
    

    to avoid the loop body running in a subshell, where changes you make (like variable assignments) will not affect your script outside the loop.

    In

    while read -r foo
    do
    ...
    done < ...
    

    loops, always make sure the commands inside the loop read their stdin from /dev/null or otherwise have it closed with suitable parameters; otherwise those commands will eat some of the lines you meant for the read. Alternatively, fill a bash array in the loop and then use a for loop to call your commands and do more complex logic.
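
    The classic example is ssh, which reads stdin by default; a sketch:

    while read -r host; do
      # without -n (or </dev/null) ssh would consume the rest of hosts.txt
      ssh -n "$host" uptime
    done < hosts.txt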

    When using temporary directories or similar resources use

    cleanup()
    {
     ...
    }
    trap cleanup EXIT
    

    handlers to clean up after the script in case it dies or is killed (by SIGTERM or SIGINT,…; obviously not SIGKILL).
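
    Fleshed out a bit with a temporary directory (a sketch):

    tmpdir="$(mktemp -d)"
    cleanup()
    {
      rm -rf "$tmpdir"
    }
    trap cleanup EXIT

    # ... work with "$tmpdir" here ...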

    When writing scripts for cronjobs, take into account that the environment (PATH in particular) might be more limited. Also take into account that stderr output and a non-zero exit status can lead to an email about the cronjob.

    Use pushd and popd instead of cd (especially cd …), and redirect their output to /dev/null. This will prevent your scripts from accidentally running later parts in the wrong directory.
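
    For example (paths illustrative):

    pushd /tmp/build > /dev/null
    make
    popd > /dev/null
    # subsequent commands run in the original directory again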

    There are probably many other things to consider but that is just standard stuff off the top of my head.

    If you do need any sort of data structure, and in particular arrays of data structures, use a proper programming language. I would recommend Rust, since a compiled language is much easier to run on a variety of systems than the Python so many others here recommend, especially if you need to support the oldest supported version of an OS and the newest one at the same time.

      • taladar@sh.itjust.works · 2 points · 8 months ago

        Good point, forgot one of the basics.

        Also, to make your scripts more readable and less error-prone, use something like

        # optional flag, must come first if present
        if [[ $# -gt 0 ]] && [[ "$1" == "--dry-run" ]]; then
          dry_run=1
          shift
        else
          dry_run=0
        fi

        # exactly three positional parameters expected
        if [[ $# -ne 3 ]]; then
          echo "Usage: $0 [ --dry-run ] <description of foo> <description of bar> <description of baz>" >&2
          exit 1
        fi

        # name the positional parameters
        foo="$1"
        shift
        bar="$1"
        shift
        baz="$1"
        shift
        

        at the start of your script to name your parameters and provide usage information if the parameters did not match what you expected. The shift and use of $1 at the bottom allows for easy addition and removal of parameters anywhere without renumbering the variables.

        Obviously this is only for the 90% of scripts that do not have overly complex parameter needs. For those you probably want to use something like getopt or another language with libraries like the excellent clap crate in Rust.

  • DasFaultier@sh.itjust.works · 7 points · 8 months ago
    • I use bash, because I never had the time to learn anything else.
    • Like @[email protected] said, I use the #!/usr/bin/env bash shebang.
    • Nope
    • Also nope
    • Nope. Shell scripts reside in Git repos on Gitlab/Gitea/Forgejo and are checked out using Ansible playbooks onto the servers as necessary.
    • For scripts? Python. Read this blog post by the great @[email protected]. For interactive use? bash is just fine for me, though I’ve customized it using Starship and created some aliases to have colored/pretty output where possible.
    • Use shellcheck before running your scripts in production, err on the side of caution, set -o pipefail. There are best practices guides for Bash, use those and you’ll probably be fine.
    • Be prepared to shave yaks. Take breaks, touch grass, pet a dog. Use set -x inside your Bash script or bash -x scriptname on the CLI for debugging. Remember that you can always fallback to interactive CLI to test/prepare commands before you put them into your script. Think before you type. Test. Optimize only what needs optimization. Use long options for readability. And remember: Always code as if the guy who ends up maintaining your code will be a violent psychopath who knows your address.
    • Nope, it’s absolutely not bad practice to create aliases to save you some typing in an interactive shell. You shouldn’t use them inside your scripts though, because they might not be available in other environments.

    I switched to fish because it has tab completion

    Yeah, so does Bash, just install it.
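
    On Debian-based systems, for instance, enabling it looks like this (a sketch; the package is bash-completion):

    sudo apt install bash-completion
    # if it is not sourced already, add this to ~/.bashrc:
    [ -f /etc/bash_completion ] && . /etc/bash_completion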

    Oh, I also “curate” a list of Linux tools that I like, that are more modern alternatives to “traditional” Linux tools or that provide information I would otherwise not easily get. I’ll post it:

    Tools

    Debian packages available

    • mtr
    • iputils-tracepath
    • iproute2
    • zsh
    • httpie
    • aria2
    • icdiff
    • progress
    • diffoscope
    • atop
    • powertop
    • ntopng
    • ethtool
    • nethogs
    • vnstat
    • ss
    • glances
    • discus
    • dstat
    • logwatch
    • swatch
    • multitail
    • lynis
    • ncdu (du-clone), alias du="ncdu --color dark -rr -x --exclude .git --exclude node_modules"
    • nnn (fully-featured terminal file manager. It’s tiny, nearly 0-config and incredibly fast. https://github.com/jarun/nnn)
    • slurm
    • calcurse
    • newsbeuter
    • tig (“ncurses TUI for git. It’s great for reviewing and staging changes, viewing history and diffs.”)
    • qalc
    • ttyrec
    • taskwarrior
    • ttytter
    • ranger
    • ipcalc
    • pandoc
    • moreutils
    • googler
    • weechat
    • pdftk
    • abcde
    • dtrx
    • tload
    • ttyload
    • cockpit
    • sar
    • ht (hte Hex Editor)
    • dhex
    • ack (grep-clone)
    • silversearcher-ag (grep-clone)
    • ripgrep (“recursively searches file trees for content in files matching a regular expression. It’s extremely fast, and respects ignore files and binary files by default.”, https://github.com/BurntSushi/ripgrep)
    • exa (instead of ls, https://the.exa.website/; “replacement for ls with sensible defaults and added features like a tree view, git integration, and optional icons.”)
    • fzf (CLI fuzzy finder), alias preview="fzf --preview 'bat --color \"always\" {}'"
    • fd (simple, fast and user-friendly alternative to 'find', https://github.com/sharkdp/fd)
    • entr (watch-clone)
    • csvkit (awk-clone)
    • ccze (log coloring)
    • surfraw
    • hexyl (“hex viewer that uses Unicode characters and colour”, https://github.com/sharkdp/hexyl)
    • jq (“awk for JSON. It lets you transform and extract information from JSON documents”, https://stedolan.github.io/jq/)
    • pass (“password manager that uses GPG to store the passwords”)
    • mdcat (cat for Markdown, https://github.com/lunaryorn/mdcat)
    • restic (“backup tool that performs client side encryption, de-duplication and supports a variety of local and remote storage backends.”, https://restic.net/)
    • mdp (Markdown presentation on the CLI)
    • grepcidr
    • qrencode
    • caca-utils (show images on the CLI)
    • fbi ( & fbgs) (show images in Framebuffer device)
    • fbcat (take a screenshot of the framebuffer device)
    • nmap
    • micro (CLI text editor, available since Debian 11, https://micro-editor.github.io)
    • masscan (https://github.com/robertdavidgraham/masscan)
    • socat (successor to netcat, https://www.heise.de/select/ix/2017/11/1509815804306324)
    • dc3dd (patched version of GNU dd with added features for computer forensics)
    • smem (memory reporting tool)
    • free (Show Linux server memory usage)
    • mpstat (Monitor multiprocessor usage on Linux, part of sysstat package)
    • pmap (Montor process memory usage on Linux, part of the procps)
    • monit (Process supervision)
    • oping & noping
    • saidar (curses-based program for displaying live system statistics)
    • reptyr (Tool for moving running programs between ptys)
    • gron (https://github.com/tomnomnom/gron, makes JSON greppable, can also fetch from HTTP endpoints)
    • jc (https://github.com/kellyjonbrazil/jc, CLI tool and python library that converts the output of popular command-line tools and file-types to JSON or Dictionaries. This allows piping of output to tools like jq and simplifying automation scripts.)
    • bat (cat-clone), alias cat='bat' (“alternative to the common (mis)use of cat to print a file to the terminal. It supports syntax highlighting and git integration.”, https://github.com/sharkdp/bat)
    • ioping (https://github.com/koct9i/ioping, simple disk I/O latency measuring tool; also for disk seek rate/IOPS/averages)
    • vd (Visidata, multipurpose terminal utility for exploring, cleaning, restructuring and analysing tabular data. Current supported sources are TSV, CSV, fixed-width text, JSON, SQLite, HTTP, HTML, .xls, and .xlsx)
    • pdfgrep
    • duf https://github.com/muesli/duf (combined df and du, ncurses-based)
    • nala (apt alternative, https://gitlab.com/volian/nala, https://christitus.com/stop-using-apt/)
    • iprange
    • tldr
    • rmlint
    • nvtop (https://github.com/Syllo/nvtop, GPUs process monitoring for AMD, Intel and NVIDIA)
    • lf (as in “list files”; a terminal file manager written in Go, heavily inspired by ranger)


  • wuphysics87@lemmy.ml · 7 points · 8 months ago

    Several things

    • write bash and nothing else (except POSIX sh)
    • find a good way to take notes. It shouldn’t be in your bashrc
    • only write fish for fish config
    • use #!/usr/bin/env bash
    • GravitySpoiled@lemmy.ml (OP) · 4 points · 8 months ago

      Good idea. I added an “iwish” command a while ago. Whenever I am pissed about GNOME not being able to do something, or anything else that didn’t work as it should, I write “iwish gnome had only one extension app” and it adds a new line to my wishlist.md. Maybe it would be good for notes too: inote bla
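
      Something like this would do it (a hypothetical sketch of iwish; inote would be the same with a different file):

      #!/usr/bin/env bash
      # iwish: append the wish as a new line to the wishlist
      echo "$*" >> ~/wishlist.md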

  • atzanteol@sh.itjust.works · 7 points · 8 months ago

    Shell scripts are one of the things that makes Linux what it is. They’re relatively easy to create, powerful, etc. It was the thing that drove me to it from Windows in the first place.

    One thing I would recommend against is creating dozens of utility scripts and/or aliases for things you run frequently. I have found it’s much better in the long-run to simply learn the “proper” commands and switches. If you use them often enough you start to type them very quickly. When you create helpers you start to learn your own ecosystem and will be lost on any system that doesn’t have your suite of helper apps installed.

    There are exceptions to this to be sure (e.g. I always alias l='ls -FhlA') but I would specifically avoid the podup and poddown ones myself. I’ve gotten very quick at typing "docker run -it --rm foo" just by rote repetition.

    You’re free to do as you like though. Maybe you’ll only run Linux on your own desktop so that’s all that matters. But something to keep in mind. I would at least learn the commands very well first and then later alias or script them for convenience.

    • draughtcyclist@lemmy.world · 2 points · 8 months ago

      I agree. However… I do have a public repo with my helper scripts in case I need to set them up on a new machine. Best of both worlds!

      • MigratingtoLemmy@lemmy.world · 2 points · 8 months ago

        Welcome to the world of fun when you come across an air-gapped server that doesn’t have the tools you use.

        E.g. RHEL doesn’t have vim installed; I can deal with nano, but I’m way slower with it. Luckily IaC has made my life somewhat easier.

  • starman@programming.dev · 7 points · 8 months ago

    That’s the way I do it:

    #!/usr/bin/env nix
    #! nix shell nixpkgs#nushell <optionally more dependencies>  --command nu
    
    <script content>
    

    But those scripts are only used by me.

  • exu@feditown.com · 6 points · 8 months ago

    I use Bash for scripts, though my interactive shell is Fish.

    Usually I use #!/usr/bin/env bash as the shebang. This has the advantage of searching your PATH for Bash instead of hardcoding it.

    My folders are differentiated only by whether they are in my PATH or not.

    Most of my scripts can be found here. They are purely desktop use, no syncing to any servers. Most would be useless there.

    For good practice, I’d recommend using set -euo pipefail to make Bash slightly less insane and use shellcheck to check for issues.
    This is personal preference, but you could avoid Bashisms like [[ and stick to POSIX sh. (Use #!/usr/bin/env sh then.)
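
    For example, a [[ test and its POSIX equivalent (illustrative):

    # Bash: [[ $answer == yes ]]
    # POSIX:
    [ "$answer" = "yes" ]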

    With shortened commands the risk is that you might forget how the full command works. How reliant you want to be on those commands being present is up to you. I wouldn’t implement them as scripts though, just simple aliases instead.
    Scripts only make sense if you want to do something slightly more complex over multiple lines for readability.

    • GravitySpoiled@lemmy.ml (OP) · 3 points · 8 months ago

      Typo? #/usr/bin/env bash should be #!/usr/bin/env bash

      thx for the tips!

      I prefer single files over aliases since I can more easily manage each command.

  • redxef@feddit.de · 5 points · 8 months ago
    • I usually use bash/python/perl if I can be sure that it will be available on all systems I intend to run the scripts on. A notable exception to this would be Alpine-based containers; there it’s nearly exclusively #!/bin/sh.
    • Depending on the complexity, I will either have one git repository for all the random scripts I need (untested), or a single repo per script with integration tests.
    • Depends, if they are specific to my setup, no, otherwise the git repository is public on my git server.
    • Usually no, because the servers are not always under my direct control, so the scripts that are on servers are specific to that server/the server fleet.
    • Regarding your last question in the list: you do you; I personally don’t, partly because of my previous point. A lot of servers are “cattle”, provisioned and destroyed on a whim. I would have to sync those modifications to all machines to effectively use them, which is not always possible. So I also don’t do this on any personal devices, because I don’t want to build muscle memory that doesn’t apply everywhere.
  • bss03@infosec.pub · 5 points · 8 months ago

    I primarily operate in strict standards-compliance mode, where I write against the shell specification in the latest Single UNIX Specification and do not use a shebang line, since including one results in unspecified, implementation-defined behavior. Generally people seem to find this weird and annoying.

    Sometimes I embrace using bash as a scripting language and use one of the env-based shebangs. In that case, I go whole-hog on bashisms. While I use zsh as my interactive shell, even I’m not mad enough to try to use it for scripts that need to run in more than one context (like other personal accounts/machines, even).

    In ALL cases, use shellcheck and at least understand the diagnostics reported, even if you opt not to fix them. (I generally modify the script until I get a clean shellcheck run, but that can be quite involved… lists of files are pretty hard to deal with safely, actually.)

  • ace_garp@lemmy.world · 4 points · 8 months ago

    Yes, using bash on all boxen.

    Scripts start with #!/bin/sh, because that gives quicker execution times (on many distros /bin/sh is a lighter shell like dash, which starts faster than bash).

    Any simple aliases, I put in .bash_aliases

    Tried tcsh and zsh around 30yrs ago, all bash since then.