I often need to log in to colleagues' machines at work, but I find that their settings are not what I am familiar with.
So I wrote an SSH wrapper in POSIX shell that tars my dotfiles into a base64 string, passes it along to SSH, and decodes and sets them up in a temporary directory on the remote. Everything is removed automatically when the session ends.
Supported: .profile, .vimrc, .bashrc, .tmux.conf, etc.
This idea comes from kyrat[1]; passing files via a base64 string is a really cool approach.
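For the curious, a minimal sketch of the mechanism (hypothetical, not the actual script; it hijacks HOME for simplicity, which the real tool presumably handles more carefully):

    #!/bin/sh
    # Sketch: pack selected dotfiles, ship them inline as base64, unpack remotely.
    payload=$(tar cz -C "$HOME" .profile .vimrc .bashrc .tmux.conf 2>/dev/null | base64)
    # The payload rides inside the remote command string, keeping stdin free for
    # the interactive session; the trap removes the temp dir when the shell exits.
    ssh -t "$@" "d=\$(mktemp -d); printf '%s' '$payload' | base64 -d | tar xz -C \"\$d\"; trap 'rm -rf \"\$d\"' EXIT; HOME=\$d \$SHELL -l"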
You can also just place config files anywhere, as long as you know what to load from where. That's what I do in my dotfiles, though not exactly like the parent. I also purposefully keep the repo tiny so it's easy to clone. I'd recommend setting an env var so you can always just reference that location.
Also, don't forget you can have local vim files. I have a function at the end of my vimrc that looks for '.exrc', '.vim.local', and '.nvim.local' in the current directory. Helpful for per-project settings.
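For anyone who wants the same, a rough version of that loader (a sketch; vim's built-in variant is 'set exrc' plus 'set secure'):

    " at the end of ~/.vimrc: source per-project config from the current directory
    for s:f in ['.exrc', '.vim.local', '.nvim.local']
      if filereadable(s:f)
        execute 'source' fnameescape(s:f)
      endif
    endfor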
I've found lnk [0] to be a nice tool for this. Similar to GNU Stow as another comment mentioned, but plays a bit nicer with git (and, in my opinion, is nicer to use).
Edit: just remembered there was a good comparison of lnk and stow on the HN discussion of lnk from a few months back [1].
It's kinda amusing how much interesting software beyond coreutils and GCC came from GNU, and how little adoption it has actually seen.
I came across something similar a few months ago. I pieced together a working hybrid by patching parts from an older release into the latest version. I never worked out whether the latest version failed because of something in my environment, but I'm on a Mac fwiw.
Have you run into that? I can't recall ever facing that issue. Seems very weird to strip down that much and then use a different editor. Do you remember if ed was missing in those machines?
> Do you remember if ed was missing in those machines
I had to laugh out loud. I couldn't imagine such a system; it wouldn't be POSIX compliant. So I looked it up, and indeed, it's entirely possible: Debian doesn't necessarily include it.
While not mandatory, vi is part of the POSIX commands. I mean you could use ed or even hack your way with awk, sed, and/or grep but no one wants to deal with that bullshit. And if you're installing vi you might as well install vim, right?
I've been on a lot of systems and can't remember a single instance of not having vi (though I do vim). So pretty rare, like you said
We usually work on VMs built from a daily ISO. For example, I would compile and upload a Java program to the frontend team member's VM, and type "srt" for "systemctl restart tomcat."
I have a python script [0] which builds and statically links my toolbox (fish, neovim, tmux, rg/fd/sd, etc.) into a self-contained --prefix which can be rsynced to any machine.
It has an activate script which sets PATH, XDG_CONFIG_HOME, XDG_DATA_HOME, and friends. This way everything runs out of that single dir and doesn’t pollute the remote.
My ssh RemoteCommand then just checks for and calls the activate script if it exists. I get dropped into a nice shell with all my config and tools wherever I go, without disturbing others’ configs or system packages.
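A hedged sketch of that wiring (the ~/.toolbox path and file names are assumptions, not the poster's actual layout; RemoteCommand requires OpenSSH 7.6+ and a TTY for interactive use):

    # ~/.ssh/config
    Host dev-*
        RequestTTY yes
        RemoteCommand test -x ~/.toolbox/activate && exec ~/.toolbox/activate || exec $SHELL -l

    #!/bin/sh
    # ~/.toolbox/activate -- everything resolves relative to the synced prefix
    prefix=$(CDPATH= cd -- "$(dirname -- "$0")" && pwd)
    PATH="$prefix/bin:$PATH"; export PATH
    XDG_CONFIG_HOME="$prefix/config"; export XDG_CONFIG_HOME
    XDG_DATA_HOME="$prefix/share"; export XDG_DATA_HOME
    exec "$prefix/bin/fish" -l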
Published a minimal version and added a link! This implements everything I mentioned except for static linking, so YMMV depending on your C/CXX toolchain and installed packages.
This reminds me - in a previous company I worked at, we had a bunch of old firewalls and switches that ran SSH servers without support for modern key-exchange algorithms, etc.
One of the engineers wrote a shell alias called “shitssh”, which would call ssh with the right options to allow the old crufty crypto algorithms to be used. This alias got passed down to new members of the team like a family heirloom.
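Presumably something along these lines; KexAlgorithms, HostKeyAlgorithms, and Ciphers are real OpenSSH options, but the exact algorithm list here is illustrative:

    alias shitssh='ssh -oKexAlgorithms=+diffie-hellman-group1-sha1 -oHostKeyAlgorithms=+ssh-rsa -oCiphers=+aes128-cbc'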
Nice, although it wouldn't work today. Modern distros (ime, Fedora 42) need you to update the system-wide crypto policy and reboot. You can't connect with just --key-exchange YOLO1 any more.
I hate network vendors. Wish I could put BSD on my old Catalysts.
What I mean is an .rsync-filter with 'H Cache/' or a few lines of exclude patterns. You'll need to run rsync with -F every time. On the sending side, a recent tar will accept --exclude-caches if you can be diligent about creating CACHEDIR.TAG.
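Roughly like this (paths and patterns are illustrative; conveniently, '#' starts a comment in both rsync filter files and shell):

    # ~/.rsync-filter, picked up when rsync runs with -F: hide caches from the sender
    H .cache/
    H Cache/

    # tar side: mark the directory once, then --exclude-caches skips it
    printf 'Signature: 8a477f597d28d172789f06886806bc55' > ~/.cache/CACHEDIR.TAG
    tar --exclude-caches -czf dotfiles.tgz -C "$HOME" .config .vimrc .bashrc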
For sure, you need to exclude whatever "dotfiles" you don't want copied (or explicitly copy the ones you want), particularly caches and other giant hidden things.
These days, my laptop acts as a dumb SSH gateway for Linux VMs. No configuration or setup, aside from VS Code connecting to the VMs. Any server that I would want to load my dotfiles onto will almost always have git installed.
Rant (not directed at any comment here): If it's a production server without git, then please do not run scripts like this. Do not create junk directories on (or ideally any modifications to) secure machines. It inevitably causes new and uninteresting puzzles for your colleagues. Create documented workflows for incident responses or inspection.
It's surprising to me how many projects can be replaced with just a line or two of shell script. This project is a slightly more sophisticated shell script that exposes a friendlier UI, but I don't see why it's needed when the alternative is much simpler, considering the target audience.
How about mounting your dotfiles directory (~/.config) or even your entire home directory on the remote system using SSHFS or NFS? I'm sure somebody would have tried it or some project may already exist. Any idea why that isn't as prevalent as copying your dotfiles over?
That requires the remote machine to be configured to SSH into your local machine. In the scenario where OP's project is useful (SSH to foreign machines) I might not want that.
On the other hand, if the remote machine is mine, it will have my config anyway.
There should be some way to mount a local directory onto a remote system without requiring the remote system to log in to the local system. SSH provides a secure bidirectional communication channel between the two systems. While we normally use sshfs to mount a remote directory to the local system, why should the reverse be impossible? Besides, you could also use NFS over SSH or TLS.
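There is in fact a known trick for this with sshfs alone: run sshfs on the remote in slave/passive mode and splice it to a local sftp-server over the existing ssh channel. A hedged sketch (the option is spelled -o slave in older sshfs and -o passive in newer releases; dpipe ships with vde2, and the sftp-server path varies by distro):

    # run from the LOCAL machine: exposes local /home/me/dotfiles at /tmp/dotfiles on the remote
    dpipe /usr/lib/openssh/sftp-server = ssh user@remote 'mkdir -p /tmp/dotfiles && sshfs -o passive :/home/me/dotfiles /tmp/dotfiles'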
Well, what if it's a separate directory meant exclusively for remote systems alone? And what if the remote mount is read-only, perhaps with a writable layer on top using overlayfs that can be discarded on logout?
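The read-only-plus-throwaway-layer part, at least, is a few lines on Linux (a sketch; needs root or a user namespace, and the paths are illustrative):

    # assume the read-only remote mount is at /mnt/dotfiles-ro
    mkdir -p /tmp/dot/upper /tmp/dot/work /tmp/dot/merged
    mount -t overlay overlay -o lowerdir=/mnt/dotfiles-ro,upperdir=/tmp/dot/upper,workdir=/tmp/dot/work /tmp/dot/merged
    # discard /tmp/dot/upper at logout and all writes vanish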
It's actually far less complex than what container runtimes do. I've even built parts of those, which is why I'm able to suggest it. I'm thinking about implementing it, and was checking whether anybody else wanted to do it or foresees any problems that I can't.
I didn't look closely at the project, but why take the extra step of base64? I do this all the time with tar by itself and it's wire-proof enough to work fine.
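The usual shape of that, for reference. (One plausible reason for the base64 step: the payload rides inside the remote command string, which keeps stdin free for the interactive session that follows.)

    # stream the tarball over ssh's stdin; no intermediate encoding
    tar cz -C "$HOME" .vimrc .bashrc | ssh user@host 'mkdir -p ~/conf && tar xz -C ~/conf'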
I love the concept but I'd be worried about security in enterprise environments. Some of the dotfiles (especially .bashrc) could override security policies or compliance settings that IT has configured.
That said, for personal servers this is brilliant. I've been using a git repo for dotfiles but having them automatically cleanup on disconnect is clever.
One improvement: consider using SSH's ProxyCommand or LocalCommand instead of wrapping SSH entirely. That way it works transparently with tools that call SSH directly (git, rsync, etc).
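For instance (push-dotfiles.sh is hypothetical; PermitLocalCommand and LocalCommand are real ssh_config options, %n expands to the hostname as given on the command line, and LocalCommand runs locally after the connection is established):

    # ~/.ssh/config
    Host *
        PermitLocalCommand yes
        LocalCommand ~/bin/push-dotfiles.sh %n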
Also curious - does this handle tmux sessions properly? I often SSH in, start tmux, disconnect, then reconnect later. Would the dotfiles still be there?
People who choose a name so noxious for their project that it actually dissuades would-be users think that says something about those prudish users, but it really says something about them.
I have been doing something similar for years, especially for logging in to VMs: it sets up an environment from a checkout of my dotfiles and runs a resumable 'screen'-style session with tmux. This looks elegant (ephemeral), but I seldom log in to a machine where I can't leave my files installed.
I have a dotfiles git repo that symlinks my dotfiles. Then I can either pull the repo down on the remote machine or rsync it. I'm not sure why I would pick this over a git repo with a dotfiles.sh script.
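The dotfiles.sh in question can be tiny, something like (a sketch; the file list is made up):

    #!/bin/sh
    # symlink tracked dotfiles from the repo checkout into $HOME
    repo=$(CDPATH= cd -- "$(dirname -- "$0")" && pwd)
    for f in .bashrc .vimrc .tmux.conf; do
        ln -sf "$repo/$f" "$HOME/$f"
    done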
This is for when you have to ssh into some machine that's not yours, in order to do debugging or troubleshooting -- and you need your precious dotfiles while you're in there, but it would be not nice to scatter your config and leave it as a surprise for the next person.
This installs into temp dirs and cleans it all up when you disconnect.
Personally, my old-man solution to this problem is different: always roll with defaults even if you don't like them, and don't use aliases. Not for everyone, but I can ssh into any random box and not be flailing about.
Even with OP's neat solution, it's not really going to work when you have to go through a jump box, connect over a serial connection, or use some enterprise audit-logged ssh wrapper, etc.
There's definitely something to be said for speaking the common tongue and being able to use the defaults when necessary. I have some nice customisations, but I make a point of not becoming dependent on them because I'm so often not in my own environment.
On the other hand, your comment has me wondering whether ssh-agent could be abused to drag your config along between jump hosts and enterprise nonsense, the way it forwards keys.
I think you're joking, but to clarify -- not personally yours. A misbehaving worker box, an app server in the staging environment, etc. A resource owned by the organization for which you work, where it would not be appropriate for you to customize it to your own liking
> I wonder why dotfiles have to be on remote machines?
Because the processes that use them run on the remote machines.
> I type an alias, the ssh client expands it on my local machine and sends complex commands to the remote.
This is not how SSH works. It merely takes your keystrokes and sends them to the remote machine, where bash/whatever reads and processes them.
Of course, you can have it work the way you imagine; it's just that it'd require a very special shell on your local machine, and a whole RAT client on the remote machine, which your special shell would have to be intimately aware of. E.g. TAB-completion of files would involve asking the remote machine to send the dir contents to your shell, and if your alias includes a process substitution... where should that process run?
Yes, but does the process have to read from a filesystem dotfile, instead of data fetched over an ssh connection?
> your alias includes a process substitution
Very valid point. How about a special shell that only performs syscalls and process substitution on the remote, while the rest runs on the local client, communicating via ssh?
I understand this would make the client "fat", but it's way more portable.
> Yes, but does the process have to read from a filesystem dotfile, instead of data fetched over an ssh connection?
Well, no. But if you didn't write that program (e.g. bash or vim), you're stuck with what its actual logic is. Which is "read a file from the filesystem". You can, of course, do something like mounting your local home directory onto the remote's filesystem (hopefully, read-only)... But at the end of the day, there are still two separate machines, and you have to mend the divide somehow, and it'll never be completely pretty, I'm afraid.
> How about a special shell that only performs syscalls and process substitution on the remote.
Again, as I said, lots of RATs exist, not all of them malicious. But to make "the rest runs on the local client" work, you'd need to write what will essentially end up being a remote-only shell: all the parts of bash that manage parsing, user interaction, and internal state tracking, but without the actual process management. Perhaps it's a good idea, actually; but untangling the mess of bash's source is not going to be easy.
The current solution of "have a completely normal, standard shell run on the remote and stretch the terminal connection to it over the network" is Good Enough for most people. Which is not surprising, given that that's the environment in which UNIX and its shell were originally implemented.
Remote machines usually don’t need to know your keystrokes or handle your line editing, either. There’s a lot of latency to cut out, local customization to preserve, and protocol simplification to be had.
I don't know, I just use the standard setup, on my machine and on remotes. Why bother customizing everything when you then can't work without the customizations?
Be careful: this will force your defaults over system defaults, possibly overriding compliance or security settings. There are a few places I noticed where well-placed malware could hop in, etc.
It's not bad software, but it's also not mature. I'm currently on a phone and on vacation, so this is the extent of my review. Maybe I'll circle back around with some PRs next week.
[1]: https://github.com/fsquillace/kyrat/