Show HN: Shittp – Volatile Dotfiles over SSH (github.com/fobshippingpoint)
132 points by sdovan1 1 day ago | 83 comments




I often need to log in to colleagues' machines at work, but their settings are not what I'm familiar with. So I wrote an SSH wrapper in POSIX shell which tars dotfiles into a base64 string, passes it to SSH, and decodes and sets them up in a temp directory on the remote. They are removed automatically when the session ends.

Supported: .profile, .vimrc, .bashrc, .tmux.conf, etc.

This idea comes from kyrat[1]; passing files via a base64 string is a really cool approach.

[1]: https://github.com/fsquillace/kyrat/
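
The core trick, roughly (a sketch of the idea, not the actual shittp source; the file list is a placeholder):

  # pack chosen dotfiles into a single base64 line (GNU base64; older
  # BSDs may want -D instead of -d on the decode side)
  payload=$(tar -cf - -C "$HOME" .vimrc .bashrc | base64 | tr -d '\n')
  # unpack into a temp dir on the remote, run a shell with HOME pointed
  # there, then remove the dir when the session ends
  ssh -t user@host "tmp=\$(mktemp -d) \
    && printf '%s' '$payload' | base64 -d | tar -xf - -C \"\$tmp\" \
    && HOME=\"\$tmp\" \"\$SHELL\" -l; rm -rf \"\$tmp\""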


   scp my-precious-dotfiles remote:~
   trap 'ssh remote rm my-precious-dotfiles' EXIT
   ssh remote
Or you can even bake the trap into the remote bash's invocation, although that'd be a bit harder.
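
Untested sketch of that (the quoting gets hairy fast):

  # set the cleanup trap in the remote command itself, then hand over
  # to an interactive shell; the trap fires when that shell exits
  ssh -t remote 'trap "rm -f ~/my-precious-dotfiles" EXIT; bash -l'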

That overwrites the remote dotfiles. Any workarounds?

:h netrw

You can also just place config files anywhere, as long as you know what to load. That's what I do in my dotfiles, though not exactly like the parent. I also purposely keep the repo tiny so it's easy to clone. I'd recommend setting an env var so you can always find them.
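
For instance (names made up):

  # point an env var at the checkout, then source from wherever it lives
  export DOTS="$HOME/src/dotfiles"
  [ -f "$DOTS/aliases.sh" ] && . "$DOTS/aliases.sh"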

Also, don't forget you can have local vim files. I have a function at the end of my vimrc that looks for '.exrc', '.vim.local', and '.nvim.local' in the current directory. Helpful for per-project settings.


I've found lnk [0] to be a nice tool for this. Similar to GNU Stow as another comment mentioned, but plays a bit nicer with git (and, in my opinion, is nicer to use).

Edit: just remembered there was a good comparison of lnk and stow on the HN discussion of lnk from a few months back [1].

[0] https://github.com/yarlson/lnk

[1] https://news.ycombinator.com/item?id=44080514


You can set HOME to some temporary path of your choosing. You’ll still need to be a little careful.
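
Something like this, for example (the dotfiles path is a placeholder):

  # start a shell whose rc files live in a throwaway home
  alt=$(mktemp -d) && cp /path/to/alt-dotfiles/.bashrc "$alt"/ && HOME="$alt" bash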

GNU Stow? https://systemcrafters.net/managing-your-dotfiles/using-gnu-...

Keep the alternate sets in different subdirectories.


It's kinda amusing how much interesting software beyond coreutils and GCC came from GNU, and how little adoption it has actually seen.

I came across something similar a few months ago. I pieced together a working hybrid by patching parts from an older release into the latest version. I never worked out whether the latest version failed because of something in my environment or not, but I'm on a Mac, fwiw.

https://github.com/cdown/sshrc


Ok, but what if your colleague does not have Vim installed?

Wouldn't it make more sense to have a tool that brings files over to the local computer, starts Vim on them, and then copies them back?


That starts to sound like using VS Code in remote mode.

Emacs in TRAMP mode.

I can’t recall encountering a system in the last 15 years that didn’t have vim (or at least vi for esoteric things) on it.

Would not be uncommon in a container or purpose-built VM.

Have you run into that? I can't recall ever facing that issue. Seems very weird to strip down that much and then use a different editor. Do you remember if ed was missing in those machines?

> Do you remember if ed was missing in those machines

I had to laugh out loud. I couldn't imagine such a system; it wouldn't be POSIX-compliant. So I looked it up, and indeed, it's entirely possible: Debian doesn't necessarily include it.

https://unix.stackexchange.com/a/609067


Yes, I've run into containers where every utility that wasn't needed to run the service was stripped out, even tools such as "less."

So what was the editor?

While not mandatory, vi is part of the POSIX commands. I mean, you could use ed or even hack your way through with awk, sed, and/or grep, but no one wants to deal with that bullshit. And if you're installing vi, you might as well install vim, right?
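
For reference, driving ed from a script looks something like this (the alias line is just an example):

  # append a line to .profile without an interactive editor
  printf '%s\n' '$a' 'alias ll="ls -l"' . w q | ed -s ~/.profile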

I've been on a lot of systems and can't remember a single instance of not having vi (though I use vim). So pretty rare, like you said.

https://en.wikipedia.org/wiki/List_of_POSIX_commands


We usually work on VMs built from a daily ISO. For example, I would compile and upload a Java program to a frontend team member's VM and type "srt" for "systemctl restart tomcat."

How much time does it add when running e.g. "shittp user@lan-host uname" ?

> I often need to login to colleagues' machines at work, but I find that their settings are not what I am familiar with

I'd hate to jump to conclusions, but what username are you logging into what machines with for that to be an issue?


I have a Python script [0] which builds and statically links my toolbox (fish, neovim, tmux, rg/fd/sd, etc.) into a self-contained --prefix which can be rsynced to any machine.

It has an activate script which sets PATH, XDG_CONFIG_HOME, XDG_DATA_HOME, and friends. This way everything runs out of that single dir and doesn’t pollute the remote.
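
A minimal sketch of what such an activate script could look like (the directory layout here is an assumption, not the actual project's):

  # activate -- run everything out of the synced prefix
  export PREFIX="$HOME/.toolbox"
  export PATH="$PREFIX/bin:$PATH"
  export XDG_CONFIG_HOME="$PREFIX/config"
  export XDG_DATA_HOME="$PREFIX/share"
  export XDG_STATE_HOME="$PREFIX/state"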

My ssh RemoteCommand then just checks for and calls the activate script if it exists. I get dropped into a nice shell with all my config and tools wherever I go, without disturbing others’ configs or system packages.

[0] https://github.com/foltik/dots


Is this available somewhere? I'm curious to see how this works.

Published a minimal version and added a link! This implements everything I mentioned except for static linking, so YMMV depending on your C/CXX toolchain and installed packages.

Thank you!

This reminds me - in a previous company I worked at, we had a bunch of old firewalls and switches that ran SSH servers without support for modern key exchange algorithms, etc.

One of the engineers wrote a shell alias called “shitssh”, which would call ssh with the right options to allow the old crufty crypto algorithms to be used. This alias got passed down to new members of the team like a family heirloom.
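
Presumably something along these lines (the exact algorithm list is a guess):

  # re-enable legacy crypto that modern OpenSSH disables by default
  alias shitssh='ssh -oKexAlgorithms=+diffie-hellman-group1-sha1 -oHostKeyAlgorithms=+ssh-rsa -oCiphers=+aes128-cbc'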


Nice, although it wouldn't work today. Modern distros (IME, Fedora 42) need you to update the crypto policy and reboot. You can't connect with just --key-exchange YOLO1 any more.

I hate network vendors. Wish I could put BSD on my old Catalysts.


  tmp="$(mktemp -d)" && rsync -a --exclude='.ssh' user@host:~/.[!.]* "$tmp"/ && HOME="$tmp" exec "$SHELL"

I think this will copy your 9 GB Mozilla cache directory as well? Still, one-liners like this are all you need lol

My mozilla cache would be under ~/.mozilla/firefox. Is the nightly version moving to ~/.config?

The reason I say "would be" is that I disable disk cache, among other things performed by Arkenfox [1].

[1] - https://github.com/arkenfox/user.js


Yes, Firefox 147 will respect XDG dirs.

What does config have to do with the one-liner?

It prevents some data from ending up in ~/.mozilla. We don't sync what doesn't exist.

My guy, the one-liner as written copies all dotfiles, Mozilla included.

Exactly why I apply Sun Tzu methodology.


Any sufficiently-advanced automated rsync would have a filter for caches.

Except only .ssh is filtered. Just commenting on what I see, not what should be.

What I mean is an .rsync-filter with 'H Cache/' or a few lines of patterns to exclude. You'll need to run with -F every time. On the sending side, a recent tar will accept --exclude-caches if you can be diligent about creating CACHEDIR.TAG files.
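
e.g. an ~/.rsync-filter along these lines (patterns are examples; run rsync with -F so it's read):

  # hide caches and other giant hidden things from the transfer
  H .cache/
  H .mozilla/firefox/*/cache2/
  H .npm/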

For sure, you need to exclude whatever "dotfiles" you don't want copied (or explicitly copy the ones you want), particularly caches and other giant hidden things.

Overriding the HOME variable is neat! Makes things much easier.

I do the same, but I skip rsync for git.

    git clone "$uri" dotfiles; export HOME="$(pwd)/dotfiles"
These days, my laptop acts as a dumb SSH gateway for Linux VMs. No configuration or setup, aside from VS Code connecting to VMs. Any server that I would want to load my dotfiles onto will almost always have git installed.

Rant (not directed at any comment here): If it's a production server without git, then please do not run scripts like this. Do not create junk directories on (or ideally any modifications to) secure machines. It inevitably causes new and uninteresting puzzles for your colleagues. Create documented workflows for incident responses or inspection.


I use something similar.

It's surprising to me how many projects can be replaced with just a line or two of shell script. This project is a slightly more sophisticated shell script that exposes a friendlier UI, but I don't see why it's needed when the alternative is much simpler, considering the target audience.


How about mounting your dotfiles directory (~/.config) or even your entire home directory on the remote system using SSHFS or NFS? I'm sure somebody would have tried it or some project may already exist. Any idea why that isn't as prevalent as copying your dotfiles over?

I’m trying to imagine why sshfs mounting the less-capable remote onto the workstation would be blocked.

That requires the remote machine to be configured to SSH into your local machine. In the scenario where OP's project is useful (SSH to foreign machines) I might not want that.

On the other hand, if the remote machine is mine, it will have my config anyway.


There should be some way to mount a local directory onto a remote system without requiring the remote system to log in to the local system. SSH provides a secure bidirectional communication channel between the two systems. While we normally use sshfs to mount a remote directory to the local system, why should the reverse be impossible? Besides, you could also use NFS over SSH or TLS.
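
One workaround that exists today is bouncing back through a reverse tunnel, though it still means the remote authenticates into the local sshd (sketch; the port and paths are made up):

  # on the local machine: expose the local sshd to the remote on 10022
  ssh -R 10022:localhost:22 user@remote
  # then, on the remote: mount the local home back over the tunnel
  mkdir -p ~/local-home && sshfs -p 10022 localuser@localhost:/home/localuser ~/local-home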

This would enable a lot of attacks.

Could you elaborate?

Now anybody with root/sudo/physical access to the remote machine has full R/W access to your entire home directory.

Well, what if it's a separate directory meant exclusively for remote systems? And what if the remote mount is read-only, perhaps with a writable layer on top using overlayfs that can be discarded on logout?

This now looks very complex.

It's actually far less complex than what container runtimes do. I've even built parts of those, which is why I'm able to suggest it. I'm thinking about implementing it and was checking whether anybody else wanted to do it, or whether they foresee any problems that I can't.

I didn't look closely at the project, but why take the extra step of base64? I do this all the time with tar by itself and it's wire-proof enough to work fine.

In some cases, shar would be a useful wrapper for that.

something like this, I reckon:

  $ tar cf - -C ~ .shrc | ssh target '(cd ~ && tar xf -)'

It's nice to read the different takes on this.

On that note, I didn't see any mention of https://github.com/romkatv/zsh4humans/blob/master/tips.md#ex... , so there.


chezmoi has similar functionality, but it does install a binary on the target machine:

https://www.chezmoi.io/reference/commands/ssh/


Is this similar to sshrc?

https://github.com/cdown/sshrc


Maybe also kind of related: xxh

https://github.com/xxh/xxh


I love the concept but I'd be worried about security in enterprise environments. Some of the dotfiles (especially .bashrc) could override security policies or compliance settings that IT has configured.

That said, for personal servers this is brilliant. I've been using a git repo for dotfiles but having them automatically cleanup on disconnect is clever.

One improvement: consider using SSH's ProxyCommand or LocalCommand instead of wrapping SSH entirely. That way it works transparently with tools that call SSH directly (git, rsync, etc).
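
e.g. something like this, where push-dotfiles is a hypothetical helper script:

  # run a local helper on connect instead of wrapping the ssh binary
  ssh -o PermitLocalCommand=yes -o 'LocalCommand=~/bin/push-dotfiles %r@%h' user@host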

Also curious - does this handle tmux sessions properly? I often SSH in, start tmux, disconnect, then reconnect later. Would the dotfiles still be there?


People who choose a name so noxious for their project that it actually dissuades people who might otherwise be users think that says something about those prudish users, but it really says something about them.

Oh! The horror.

I have been doing something similar for years, especially for logging into VMs: it sets up an environment of my dotfiles from a checkout and runs a resumable screen session with tmux. This looks elegant (ephemeral), but I seldom log in to a machine I can't leave my files installed on.

${HOME} is where your dotfiles are.


I have a dotfiles git repo that symlinks my dotfiles. Then I can either pull the repo down on the remote machine or rsync it. I'm not sure why I would pick this over a git repo with a dotfiles.sh script.

https://erock-git-dotfiles.pgs.sh/tree/main/item/dotfiles.sh...


This is for when you have to ssh into some machine that's not yours, in order to do debugging or troubleshooting -- and you need your precious dotfiles while you're in there, but it would be not nice to scatter your config and leave it as a surprise for the next person.

This installs into temp dirs and cleans it all up when you disconnect.

Personally, my old-man solution to this problem is different: always roll with defaults even if you don't like them, and don't use aliases. Not for everyone, but I can ssh into any random box and not be flailing about.

Even with OP's neat solution, it's not really going to work when you have to go through a jump box, or connect over a serial connection or some enterprise audit-logged SSH wrapper, etc.


There's definitely something to be said for speaking the common tongue, and being able to use the defaults when it's necessary. I have some nice customisations, but make a point of not becoming dependent on them because I'm so often not in my own environment.

On the other hand, your comment has me wondering if ssh-agent could be abused to drag your config along between jump hosts and enterprise nonsense, like it does with forwarding of keys.


Why would you want to ssh into a machine that's not yours? That's a violation of the Computer Fraud and Abuse Act, up to 10 years in prison!

I think you're joking, but to clarify -- not personally yours. A misbehaving worker box, an app server in the staging environment, etc. A resource owned by the organization for which you work, where it would not be appropriate for you to customize it to your own liking

When you have permission to do so, it isn’t.

I wonder why dotfiles have to be on remote machines?

e.g. I type an alias, the SSH client expands it on my local machine and sends complex commands to the remote. Could this be possible?

I suppose a special shell could make it work.


> I wonder why dotfiles have to be on remote machines?

Because the processes that use them run on the remote machines.

> I type an alias, the SSH client expands it on my local machine and sends complex commands to the remote.

This is not how SSH works. It merely takes your keystrokes and sends them to the remote machine, where bash/whatever reads and processes them.

Of course, you can have it work the way you imagine, it's just that it'd require a very special shell on your local machine, and a whole RAT client on the remote machine, which your special shell would have to be intimately aware of. E.g. TAB-completion of files would involve asking the remote machine to send the dir contents to your shell, and if your alias includes a process substitution... where should that process run?


> the processes that use them run on the remote

Yes, but does the process have to read the dotfile from the filesystem, instead of some data fetched over an SSH connection?

> your alias includes a process substitution

Very valid point. How about a special shell where only syscalls and process substitution happen on the remote, the rest runs on the local client, and the two communicate via SSH?

I understand this will make the client "fat", but it's way more portable.


> Yes, but does the process have to read the dotfile from the filesystem, instead of some data fetched over an SSH connection?

Well, no. But if you didn't write that program (e.g. bash or vim), you're stuck with what its actual logic is, which is "read a file from the filesystem." You can, of course, do something like mounting your local home directory onto the remote's filesystem (hopefully read-only)... But at the end of the day, there are still two separate machines, and you have to mend the divide somehow, and it'll never be completely pretty, I'm afraid.

> How about a special shell where only syscalls and process substitution happen on the remote.

Again, as I said, lots of RATs exist, and not all of them are malicious. But to make "the rest runs on the local client" work, you need to write what will essentially end up being a purely remote-only shell: all the parts of bash that manage parsing, user interaction, and internal state tracking, but without actual process management. Perhaps it's a good idea, actually; but untangling the mess of the bash source is not going to be easy.

The current solution of "have a completely normal, standard shell run on the remote and stretch the terminal connection to it over the network" is Good Enough for most people. Which is not surprising, given that that's the environment in which UNIX and its shell were originally implemented.


> I suppose a special shell could make it work.

Working on it! :)

Remote machines usually don’t need to know your keystrokes or handle your line editing, either. There’s a lot of latency to cut out, local customization to preserve, and protocol simplification to be had.


I don't know, I just use the defaults on my machine and on remotes. Why bother customizing everything when you then can't work without the customizations?

Time to call the IT team at work (on the phone) to ask them to add a new item to the software allowlist.

Be careful: this will force your defaults over system defaults, possibly overriding compliance or security settings. There are a few places I noticed where well-placed malware could hop in, etc.

It’s not bad software, it’s also not mature. I’m currently on a phone and on vacation so this is the extent of my review. Maybe I’ll circle back around with some PRs next week


I was merely joking about the name apparently being intended to be pronounced in a rather juvenile manner.

It's not obvious, but the "shitt-p" is borrowed from an anime character, so it should be pronounced like "sheet-p": https://ipa-reader.com/?text=%C9%95it%CB%90opi%CB%90


More like "shit toilet paper." A name like findtherapist.com.

Why call this Shittp? Is it to imply it’s actually shitty and just a proof of concept or fun project?


