Currently fixing the code; wait for the 0.2 release!

Thunderbird is great, but it is very complex, possibly insecure, and not private.

Threat model is the important keyword here. Imagine you write mail over Tor/Tails only and need a secure mail client.

(By the way, I can recommend the Carburetor Flatpak for that.)

Because of this, the Thunderbird hardening user.js exists, similar to the Arkenfox project.

But it is a bit too strict for most threat models. Settings might also change or break over time, and it has no automatic update mechanism.

(I should upstream the updater)

The user.js is also just a template, so a ton of mostly unneeded configs would stay in there.

This project makes the setup of the hardening user.js easy.

Once set up, the script is placed in ~/.local/bin and a systemd user service runs it periodically.

You can comment out lines if you want to keep certain settings.
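For illustration, the service-plus-timer pair behind that periodic run could look roughly like this; the unit names, script path, and weekly interval are my assumptions, not necessarily the project's actual files:

```ini
# ~/.config/systemd/user/thunderbird-hardening.service (name assumed)
[Unit]
Description=Re-apply the Thunderbird hardening overrides

[Service]
Type=oneshot
# %h expands to the user's home directory
ExecStart=%h/.local/bin/thunderbird-hardening-overwrite

# ~/.config/systemd/user/thunderbird-hardening.timer (name assumed)
[Unit]
Description=Periodically run the Thunderbird hardening script

[Timer]
OnCalendar=weekly
Persistent=true

[Install]
WantedBy=timers.target
```

The timer would then be enabled with `systemctl --user enable --now thunderbird-hardening.timer`.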

  • thingsiplay@beehaw.org · 7 months ago

    Curling into a temporary directory and then piping into Bash is effectively the same as the current way. Why not provide clear installation instructions, and maybe even a separate installation script? Why does the setup script download the hardening script from the web if it is included in the repository anyway?

    Here is how I imagine the install instructions could look. The git clone command downloads all files from the current repo, including the hardening-overwrite script. With bash scriptname the user does not need to run chmod first. I would remove the curl from the setup script. Also, Linux has a dedicated install command for exactly this.

    Inside setup.sh you could use:

    program='thunderbird-hardening-overwrite'
    # -D creates ~/.local/bin first if it does not exist yet (GNU install)
    install -Dv "${program}" ~/.local/bin/"${program}"
    

    And the installation instructions in the Readme could look like this:

    git clone https://github.com/boredsquirrel/thunderbird-hardening-automation
    cd thunderbird-hardening-automation
    bash setup.sh
    

    If people are capable of copying the curl command, then they are capable of copying a few more lines like the ones above.


    Ah, I didn’t think about the commenting-out stuff. That breaks this approach. If that is something you want to allow, then this technique wouldn’t work as-is. There is a way to run sed only once, by building the command as a Bash array. I use this technique in my scripts nowadays, but it might look strange to people who don’t know about it. Commenting out lines is still possible with arrays. Not sure if you would want to do that. In case you want to see how it looks:

    # Base command.
    sed_cmd=(
        sed
        -i
    )
    
    # Arguments that can be added by condition or excluded with commenting out.
    sed_cmd+=(-e 's/abc/ABC/g')
    sed_cmd+=(-e 's/def/DEF/g')
    
    # Then the last argument that is intended to be added always.
    sed_cmd+=(user.js)
    
    # Execute the Bash array as a commandline:
    "${sed_cmd[@]}"
    

    This might look intimidating, and I can understand if you pass on this one. But I just wanted to bring it to your attention. You might want to experiment before committing to it.
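    To build on the array idea, entries can also be added conditionally at runtime instead of being commented out. A small self-contained sketch; the scratch file, patterns, and the HARDEN_DEF toggle are made up for illustration:

```shell
#!/usr/bin/env bash
set -u

# Scratch file to demonstrate on; patterns are made up.
tmpfile="$(mktemp)"
printf 'abc\ndef\n' > "$tmpfile"

# Base command.
sed_cmd=(sed -i)

# Always-applied expression.
sed_cmd+=(-e 's/abc/ABC/g')

# Expression added only under a condition (an env toggle here).
if [[ "${HARDEN_DEF:-yes}" == "yes" ]]; then
    sed_cmd+=(-e 's/def/DEF/g')
fi

# The file argument always goes last.
sed_cmd+=("$tmpfile")

# Execute the array as one command line.
"${sed_cmd[@]}"
```

    Running it with HARDEN_DEF unset (the default) applies both replacements; setting HARDEN_DEF to anything else skips the second one.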

    • boredsquirrel@slrpnk.net (OP) · 7 months ago

      Interesting, learning new things!

      I did the “git clone and use only one file” stuff a lot and it sucks having all these files in the homedir.

      I now use a subdir called “Git”, and I would recommend that too. Or I would remove the other files that are not needed.

      The setup script can execute a lot of things, so you should read it anyway. So yeah, it may be a benefit to be sure that it is one git clone and then everything is local.

      I was just annoyed by all the unneeded git repos in my home dir, so I stopped using the actual git stuff and always use wget or curl instead.

      by building a command variable as a Bash array

      Damn this is really good. I will use that and make quite a few scripts like 99% faster XD

      Thanks!

      • thingsiplay@beehaw.org · 7 months ago

        Git clone is useful if you actually want to keep the source code you originally downloaded. Also, I assume people who use this command to get a program would remove that directory manually after the job is done (if they don’t want to keep it). And I am always very careful with rm commands, so I do not include them most of the time. It’s not like people don’t know how to deal with temporary files they download; it’s just like downloading an archive, unpacking it, and removing the archive file.

        At least this is my way of doing it. I think this transparency is good for the end user, better than “hiding” it behind a curl into bash, in my opinion (opinions vary, as I have noticed in the forums). You could put cd Downloads right above the git clone command to remind them it’s meant to be temporary. But I guess this does not align with the values and philosophy you follow, because you want it as simple and distraction-free as possible for your users. That’s why the curl into bash in the first place. It’s just a question of what you value more.

    • boredsquirrel@slrpnk.net (OP) · 7 months ago

      Found a new issue: fail-safety.

      This is a set of changes that may not be needed anymore if things change.

      I tried it with a file, and if one of these commands fails, the whole command seems to fail.

      So if a single setting is removed, this means the whole script would fail.
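      For what it’s worth, the two failure modes behave differently and can be checked on a throwaway file (assuming GNU sed): a pattern that merely does not match is not an error at all, while a syntactically broken expression aborts the whole invocation before any edit is written:

```shell
#!/usr/bin/env bash
set -u

# Throwaway file standing in for user.js.
tmpfile="$(mktemp)"
printf 'keep=abc\n' > "$tmpfile"

# A pattern that simply does not match is NOT an error: sed exits 0
# and leaves the file alone (the "setting was removed" case).
sed -i -e 's/missing=xyz/missing=XYZ/' "$tmpfile"
nomatch_rc=$?

# A syntactically invalid expression aborts the whole invocation;
# GNU sed parses the full script first, so no edit is written at all.
sed -i -e 's/keep=abc/keep=ABC/' -e 's/broken' "$tmpfile" 2>/dev/null
broken_rc=$?
```

      So with a single sed call, a removed setting (a non-matching pattern) would not make the script fail; only a broken expression would, and in that case the file stays untouched.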

      • thingsiplay@beehaw.org · 7 months ago

        I see. Indeed, if this is the way you want to proceed, having individual commands is more appropriate. But then again: if something fails, isn’t it better to fail the entire script instead of proceeding silently without the failure being noticed? It depends; in some cases this can be the desired behavior.

        • boredsquirrel@slrpnk.net (OP) · 7 months ago

          Hm, kinda bad.

          I could just add a GUI error message and get tons of bug reports, needing a fix.

          • thingsiplay@beehaw.org · 7 months ago

            Hey, I’m not trying to convince you, just wanted to mention something more to think about. Sometimes fail-safety is truly the better way. But is it in this case too? I mean, if the script fails at once as a single sed command, then the file is not manipulated at all. If you have a bunch of sed commands and one or two fail, then maybe 90% of the commands succeed and a few do not. That means the script has edited the file into a state that was never intended. However, if it is a single command and fails all at once, at least the file is preserved as it is.

            I don’t know enough about this project to know what’s important and appropriate in your case. I mean, if it’s okay that commands “fail”, then keep it this way.
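            One way to get the best of both, whichever form the sed calls take: apply all the edits to a copy and only move the copy over the real file if everything succeeded, so a partial failure can never leave the file half-edited. A rough sketch; the target file, patterns, and names here are placeholders, not the project’s actual ones:

```shell
#!/usr/bin/env bash
set -u

# Placeholder target; in the real project this would be the profile's user.js.
target="$(mktemp)"
printf 'abc\ndef\n' > "$target"

# Work on a copy in the same directory so mv stays on one filesystem.
work="$(mktemp "${target}.XXXXXX")"
cp -- "$target" "$work"

if sed -i -e 's/abc/ABC/g' -e 's/def/DEF/g' "$work"; then
    # All expressions applied: replace the original atomically.
    mv -- "$work" "$target"
else
    # Something failed: the original file is untouched.
    rm -f -- "$work"
fi
```

            Since mv within one filesystem is atomic, the target is always either the old version or the fully edited one, never something in between.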

            • boredsquirrel@slrpnk.net (OP) · 7 months ago

              In this specific case, that is not how it works.

              It modifies lines by searching for unique strings. If a string is not found, it was maybe removed.

              (The user.js normally handles removals by commenting things out, so I might actually use a single command.)

              If something was not found, it doesn’t need to be changed; everything is fine.

              The result is a user.js built from a good template, with all the settings applied that I know about. Maybe something new was added, and that stays unchanged.

              The alternative would be not updating the config at all, which means no response to Mozilla adding weird stuff to it.

              Firefox is more of a moving target here.

              I will implement a persistent GUI error message if something failed.
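              As a hedged sketch of that plan: try each unique search string, apply the matching edit when it is found, collect anything missing, and raise a persistent desktop notification at the end (notify-send -u critical, from libnotify, stays on screen until dismissed on most desktops). The file, preference names, and the apply helper are made up for illustration:

```shell
#!/usr/bin/env bash
set -u

# Placeholder user.js stand-in.
userjs="$(mktemp)"
printf 'user_pref("a.b", true);\n' > "$userjs"

missing=()

# Each call: a unique search string, then the sed edit to apply.
apply() {
    local needle="$1" expr="$2"
    if grep -qF -- "$needle" "$userjs"; then
        sed -i -e "$expr" "$userjs"
    else
        # Not found: the setting may have been removed upstream.
        missing+=("$needle")
    fi
}

apply 'user_pref("a.b"'       's/"a.b", true/"a.b", false/'
apply 'user_pref("gone.pref"' 's/"gone.pref", true/"gone.pref", false/'

if ((${#missing[@]})) && command -v notify-send >/dev/null; then
    # Persistent GUI error: critical urgency stays until dismissed.
    notify-send -u critical "Thunderbird hardening" \
        "Settings not found: ${missing[*]}"
fi
```

              Settings that have disappeared are only reported, never treated as hard failures, which matches the “not found means nothing to change” reasoning above.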