Yep, I use rsync to sync files / directories between my desktop, laptop and even phone (Android). Also an external drive.
I ended up creating https://github.com/nickjj/bmsu which calls rsync under the hood but helps you build up a valid rsync command with no surprises. It also codifies each of your backup / restore strategies so you're not having to run massively long rsync commands each time. It's 1 shell script with no dependencies except rsync.
Nothing leaves my local network since it's all local file transfers.
The functionality I miss from a GUI spreadsheet is this "simple" case:
I have a spreadsheet with 186 rows and want to select a specific cell from rows 6 and 127 and immediately see their sum, even if it's only 2 digits. Most of the time it's from the same column but not always.
With a GUI, this is really easy. CTRL + left click each cell and get the answer in a second. Throw SHIFT into the mix if you want a range of cells.
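For comparison, the closest terminal equivalent I know is an awk one-liner; the file, column, and data below are made up for illustration:

```shell
# Fake stand-in for the spreadsheet: 186 rows, 3 comma-separated columns
seq 1 186 | awk '{ print $1 "," $1 * 2 "," $1 * 3 }' > cells.csv

# Sum column 3 of rows 6 and 127; nowhere near as quick as ctrl+clicking
awk -F',' 'NR == 6 || NR == 127 { sum += $3 } END { print sum }' cells.csv
```

It works, but you have to know the row numbers up front instead of just clicking what you see.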
This is what I use too. The nice thing about this is you can use your existing tools natively and you have lots of flexibility in how you want them arranged: split panes in 1 window, or different windows in their own session.
If I'm already in tmux and run that script, it splits a few tools in a new tmux window; if I'm not in tmux, it creates a new "monitor" session with the same splits. I also have it assigned to my status bar: if I click Waybar's CPU icon, it "launches or focuses" that monitor script in a new terminal, so it doesn't spawn duplicates when I click the icon multiple times. Basically, ultimate freedom in how I want to launch it.
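A dry-run sketch of that branching logic (the session and window names are made up, and the echo stands in for the real tmux calls):

```shell
#!/bin/sh
# tmux sets $TMUX inside a session, so it tells us where we're running.
if [ -n "${TMUX:-}" ]; then
  # Already inside tmux: open the tools in a new window of this session
  cmd="tmux new-window -n monitor"
else
  # Outside tmux: -A attaches to "monitor" if it exists, avoiding duplicates
  cmd="tmux new-session -A -s monitor"
fi
echo "$cmd"   # dry run; the real script would execute the command instead
```

The `-A` flag on `new-session` is what gives the "launch or focus" behavior for free.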
> watching people discover all these tools now that claude code is sending them into the terminal.
Hi, I'm the author of the post.
I don't like replying to comments like this but I think it's important because of how "invasive" LLMs have become and how they might jade your opinion (not you specifically, but everyone's) on any type of output such as blog posts, videos, code, etc.
I wrote about this because I've done contract work for lots of companies and spoken with lots of developers, and every time they see the output of Delta they ask "how did you make your git diffs look so cool?". So I thought it was worth sharing because there are lots of folks out there who might not know about it.
By the way, this concept of having a terminal based workflow is something I've openly been using, sharing and writing about for around a decade. There's 500+ posts and videos on my site covering a ton of different topics.
You're more than welcome to explore any of the 70+ open source projects I maintain https://github.com/nickjj?tab=repositories, with git histories going back well before LLMs existed. Thousands upon thousands of human-written lines of shell scripts, Python scripts, Docker setups, etc. Every readme file was written by hand and 99.999% of current-day code is by hand too. I've been playing with AI to learn new languages like Lua to solve specific problems but I end up rewriting most of that code afterwards. You can view comments I've made on HN in the past about how I feel about LLM code haha.
I've been subbed since 2008 (before streaming with DVDs).
My bill has increased with each price hike they introduced. The last bill for the standard 1080p no-ads plan was $19.56 (before this new price update). It was about half that when I first signed up.
To be honest I'm going to cancel, not because I can't afford it but because they keep raising their prices so frequently. It has hit the point where I'm not interested in paying more, so they have lost a lifetime customer.
I find it funny how they also only show you the last year of billing history. It's a nice dark pattern to keep you from easily seeing how much prices have gone up over the years. You have to go through the account cancellation menu just to see when you first joined.
6 months ago for $575, I picked up a 15" 1080p IPS display laptop with an AMD Ryzen 7 6800H (8 cores / 16 threads), 32 GB of DDR5 RAM, a Radeon 680M iGPU that can use up to 8 GB of VRAM and a 1 TB NVMe SSD, with a backlit keyboard, a bunch of USB ports and an HDMI port. It weighs the same as a MBP and comes with a 2 year manufacturer warranty. It's upgradable to 64 GB of RAM and a 2 TB SSD. It has Windows 11 but all of the parts are compatible with Linux if you want to go down that route.
It's from a brand I'd never heard of, the Nimo N155, but I took a gamble and so far I couldn't be happier. The only problem now is there are major shortages and prices are jacked up because of the RAM situation. The same model is $700 today and much harder to find; even their official site is out of stock on this model.
High quality for their time. The toilet bowl was very heavy for its screen size, and had minimal volume for battery. The G3 iBook lacked rigidity, and had a tendency to damage the mainboard if picked up from a corner. The G4 iBook had grounding issues, and would occasionally get spicy with two-prong outlets. All three of these issues were directly related to the plastic chassis. All three were great laptops for their day; none would be acceptable in this decade.
There’s nothing wrong with plastic as a material, but there’s a lot wrong with many of the designs of mid-tier laptops that happen to use plastic. The plastic isn’t as much a cause of their problems as it is a signature feature of all hastily assembled corner-cut devices.
It's made of metal and is sturdy. I've taken it on 2 trips (including international), it's all good and still feels like new but to be fair I don't abuse it. For traveling I put it into a regular backpack that has a laptop sleeve, I don't use extra packing.
The track pad is of course not as good as Apple's but it's good enough where it's not in the way and feels ok to use.
The brightness and battery life both fall into the same category of they haven't negatively impacted me in my day to day. For example a few hours of dev work in the park with the sun out hasn't been a problem for both battery life or visibility.
You are right that I don't value battery life as a top tier feature. ~5 hours of "real work" is enough, because if you need extended battery life for intensive tasks away from civilization you can always keep a power bank on hand. If you're not out in the middle of nowhere, access to a power outlet is readily available.
> A much larger laptop with less than half the number of display pixels is not really the same market. And how's that battery life?
Yes, the display isn't as good but the Neo with 512 GB of storage is already $700 and has half the storage of the other laptop. The Neo also has 8 GB of RAM vs 32 GB. Big differences IMO.
Battery life is "good enough" but not great. It really depends on what you're using it for. If you're doing CPU bound tasks a lot, it's not going to last as long. I guess a takeaway is I was never in a situation where I had to change my behavior because of the battery life. Unless you're planning to be out in the middle of nowhere without a power bank for an extended period of time doing intensive workloads, it's fine.
Likewise, the display being only 1080p isn't as bad as you would think. I'd be surprised if anyone is running their 13" Neo at 2408 x 1506 at native scaling. That would be 219 PPI. For reference I run a 4k 32" monitor at native 1:1 scaling and that's 138 PPI. It would be bonkers to consider using 219 PPI from a normal viewing distance. Most scaled resolutions with the Neo would be effectively 1080p resolution but with sharper text.
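The PPI arithmetic, for anyone who wants to check other panels (a small helper; values rounded to one decimal):

```shell
# PPI = diagonal pixels / diagonal inches = sqrt(w^2 + h^2) / d
ppi() {
  awk -v w="$1" -v h="$2" -v d="$3" \
    'BEGIN { printf "%.1f\n", sqrt(w * w + h * h) / d }'
}

ppi 2408 1506 13   # prints 218.5 (the ~219 figure above)
ppi 3840 2160 32   # prints 137.7 (the ~138 figure above)
```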
You're not in the market for a netbook-type machine if this is the case.
> but with sharper text.
Text huh? Sounds important.
> Battery life is "good enough" but not great.
So, do you want a lightweight client / light productivity machine with tons of battery life, great text, and a kickass trackpad? Or an affordable workstation replacement? Different markets.
What you’re missing is that the target market for this device (the casual laptop user) DGAF about memory or storage if it comes at the expense of the directly observable user experience.
Few people want or need 32gb of RAM, nor give a shit about what it even means. Most people just want to run MS Word and Google Chrome and maybe TurboTax.
Sure, but if people only want a device for casual browsing and are ok with 256 GB of storage and 8 GB of memory, they can get a Chromebook for half the price of the Neo. Not all of them are bad; there are tons in the $300 range with good enough specs for casual usage.
If you want to spend ~$600-700, the laptop I mentioned fits the bill for casual use, a development workstation, media editing and casual gaming at a directly comparable price to the Neo. I replied initially because you wrote that nothing good exists in the $600-700 range.
Again, this device isn’t for someone who’s buying based on specs. Nor is it for somebody who’s buying based on price.
It’s for somebody who goes to the store, puts their hands on the keyboard, uses the touchpad, looks at the screen, feels the chassis, and then makes their decision. This is how regular people purchase these commodity items. Most people have no clue what the difference between storage and memory is. They just want to know: will it run [software]? That’s all the specs they need to know. Maybe the battery life as well.
If you haven’t already, go put your hands on one of these at the store. There’s no $600 laptop that feels like it.
> It’s for somebody who goes to the store, puts their hands on the keyboard, uses the touchpad, looks at the screen, and feels the chassis, and then makes their decision.
We might live in different areas of the world. Every person I know who isn't into tech has never walked into a store by themselves and bought a laptop based on feel or a hunch.
They always get a recommendation from someone who is into tech, either for a specific model to buy online or someone to go with in real life at a store to help them make a purchase.
I don't blame them either, I wouldn't make a big purchase with no information and trust the sales floor to give high quality personalized advice.
Yep I had a GeForce 750 Ti (2 GB) and I was able to run a ton of things on Windows without any issues at all.
As soon as I switched to Linux I had all sorts of problems on Wayland: once that 2 GB was reached and no GPU memory was available to allocate, apps would segfault or act in their own unique ways (like opening empty windows).
Turns out this is a problem with NVIDIA on Wayland. On X, NVIDIA's drivers act more like they do on Windows. AMD's Linux drivers act like Windows out of the box on both Wayland and X: system memory gets used when VRAM is full. I know this because I got tired of being unable to use my system after opening 3 browser tabs and a few terminals on Wayland, so I bought an AMD RX 480 with 8 GB on eBay. You could say my cost of running Linux on the desktop was $80 + shipping.
tmux by itself lets you create any number of sessions, windows and panes. You can arrange them for anything you want to do.
Having a pane dedicated to an LLM prompt split side by side with your code editor doesn't require additional tools; it's just a tmux hotkey to split a pane.
There are also plugins like tmux-resurrect that let you save and restore everything, including across reboots. I've been using this setup for like 6-7 years; here's a video from ~5 years ago but it still applies today https://www.youtube.com/watch?v=sMbuGf2g7gc&t=315s. I like this approach because you can use tmux normally; there's no layout config file you need to define.
It lets me switch between projects in 2 seconds and everything I need is immediately available.
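For reference, wiring up tmux-resurrect is only a couple of lines in ~/.tmux.conf (this assumes the tpm plugin manager; the capture option is optional):

```conf
# ~/.tmux.conf, assuming tpm (https://github.com/tmux-plugins/tpm)
set -g @plugin 'tmux-plugins/tmux-resurrect'

# Optional: also restore what each pane had printed
set -g @resurrect-capture-pane-contents 'on'
```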
Why do you often need to re-prompt things like "can you simplify this and make it more human readable without sacrificing performance?"? No amount of specification addresses this on the first shot unless you already know the exact implementation details, in which case you might as well write it yourself directly.
I often have to put in a prompt like this 5-10 times before the code resembles something I'd even consider using as a 1st draft base to refactor into something I would consider worthy of a git commit.
I sometimes use AI for tiny standalone functions or scripts so we're not talking about a lot of deeply nested complexity here.
> I often have to put in a prompt like this 5-10 times before the code resembles something I'd even consider using as a 1st draft base to refactor into something I would consider worthy of a git commit.
Are you stuck entering your prompts manually or do you have it set up like a feedback loop: "beautify -> check beauty -> if not beautiful enough, beautify again"? I can't imagine why everyone thinks AIs can just one-shot everything like correctness, optimization, and readability; humans can't one-shot these either.
I do everything manually. Prompt, look at the code, see if it works (copy / paste) and if it works but it's written poorly I'll re-prompt to make the code more readable, often ending with me making it more readable without extra prompts. Btw, this isn't about code formatting or linting. It's about how the logic is written.
> I can't imagine why everyone thinks AIs can just one-shot everything like correctness, optimization, and readability; humans can't one-shot these either.
If it knows how to make the code more readable and/or better for performance when I simply ask "can you make this more readable and performant?", then it should be able to provide this result from the beginning. If not, we're admitting it's providing an initially worse result for unknown reasons. Maybe it's to make you as the operator feel more important (yay, I'm providing feedback), or maybe it's to extract the most money it can, since each prompt evaluates back to a dollar amount. With the amount of data they have, I'm sure they can assess just how many times folks will pay for the "make it better" loop.
Why do you orchestrate the AI manually? You could write a BUILD file that just does it in a loop a few times, or I guess if you lack build system interaction, write a python script?
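The loop the parent describes is easy to sketch in shell; `llm` below is a placeholder for whatever CLI you actually use, and the prompt is made up:

```shell
# refine FILE PASSES CMD...: pipe FILE through CMD repeatedly, in place
refine() {
  file=$1
  passes=$2
  shift 2
  i=0
  while [ "$i" -lt "$passes" ]; do
    tmp=$(mktemp)
    # CMD reads the current draft on stdin and writes the revision to stdout
    "$@" < "$file" > "$tmp" && mv "$tmp" "$file"
    i=$((i + 1))
  done
}

# Hypothetical usage, with `llm` standing in for your tool of choice:
# refine draft.py 5 llm "make this more readable without changing behavior"
```

Because the command is injectable, you can also point it at any filter for testing before wiring in a real LLM call.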
> If it knows how to make the code more readable and / or better for performance by me simply asking "can you make this more readable and performant?" then it should be able to provide this result from the beginning.
This is the wrong way to think about AI (at least with our current tech). If you give AI a general task, it won't focus its attention on any of these aspects in particular. But, after you create the code, if you use separate readability and optimization feedback loops where you specifically ask it to work on those aspects of the code, it will do a much better job.
People who feel like AI should just do the right thing already without further prompting or attention focus are just going to be frustrated.
> Btw, this isn't about code formatting or linting. It's about how the logic is written.
Yes, but you still aren't focusing the AI's attention on the problem. You can also write a guide that it puts into context for things you notice that it consistently does wrong. But I would make it a separate pass, get the code to be correct first, and then go through readability refactors (while keeping the code still passing its tests).
I have zero trust in any of these tools and usually I use them for 1 off tasks that fit well with the model of copy / pasting small chunks of code.
> But, after you create the code, if you use separate readability and optimization feedback loops where you specifically ask it to work on those aspects of the code, it will do a much better job.
I think that's where I was going with the need to re-prompt. Why not provide the result after 5 internal rounds of readability / optimization loops as the default? I can't think of times where I wouldn't want the "better" version first.
Make (or whatever successor you are using; I'm sure no one actually uses nmake anymore) is pretty reliable at filling in templates that feed into prompts. And AI is pretty efficient at writing makefiles, lowering the effort/payoff threshold.
> I think that's where I was going with the need to re-prompt. Why not provide the result after 5 internal rounds of readability / optimization loops as the default? I can't think of times where I wouldn't want the "better" version first.
I don't think this would work very well right now. I find that the AI is good at writing code, or optimizing code, or making the code more readable (that isn't one I do often, but optimization all the time), but if I ask it to do it all at once it does a worse job. But I guess you could imagine a wrapper around LLM calls (Claude Code) that does multiple rounds of prompting, starting with code, then improving the code somewhat after the code "works". I kind of like that it doesn't do this though, since I'm often repairing code and don't want the diff to be too great. Maybe a readability pass when the code is first written, and then another pass afterwards when it isn't in flux (to keep repository diffs down?).
There are two secret sauces to making Claude Code your b* (please forgive me, future AI overlords). One is to create a spec. The other is to not prompt merely WHAT you want, but also HOW you want it done (you can get insanely detailed or stay just vague enough); in some cases the WHY is useful for it to know and understand, and sometimes WHO it's for as well. Give it the context you know. Don't know anything about the code? Ask it to read it, all of it; you've got 1 million tokens, go for it.
I have one-shot prompted projects from empty folder to full featured web app with accounts, login, profiles, you name it. Insanely stable, maybe an oops here or there, but for a non-spec single-prompt shot, that's impressive.
When I don't use a tool to handle the task management, I have Claude build up a markdown spec file for me and specify everything I can think of. Output is always better when you specify the technology you want to use and the design patterns.