I can totally wrap my head around that, and it's an interesting thought experiment, though:
- building functionalities as components that are swappable on a whim requires a level of careful thought, abstraction and architecture that is essentially the exact opposite of AI slop
- in this day and age we still don't make software for its own sake, and whoever is financing it doesn't generally require that level of functional flexibility (the physical world driving the coding isn't nearly volatile enough to justify it)
- this comes loaded with the implication that "stuff needs to work": if you are developing software that manages inventory, orders, resources, ... you just can't take the chance of corrupting your customers' data or disrupting their business processes. Shipping faster than you can test, with no accountability and no oversight, is a solution to a problem I've personally never encountered in the wild
>- building functionalities as components that are swappable on a whim requires a level of careful thought, abstraction and architecture that is essentially the exact opposite of AI slop
That is really only an issue for humans. Why do we need this careful thought, abstraction, and architecture? Because otherwise the required code becomes an unmanageable pile of spaghetti handling a myriad of edge cases, abstraction leaks, and unexpected side effects. The human brain can't manage it. AI can, or at least soon will be able to. It will just be a large pile of AI slop.
It may also happen that AI starts to generate good component-based architecture if forced to minimize, or in some other measurable way improve, its slop.
> Why do we need this careful thought, abstraction, and architecture?
your answer focused on maintainability, but you are overlooking what I think is the bigger problem: those components will eventually interact with one another (technically, by nature of living in the same code tree, sharing the same storage backend, framework, common libraries, …, or logically, by referencing the same entities for slightly different and complementary features). With that comes the need to centrally control what they should/can/cannot do. There is no shortcut to clarifying (with your customer) and formally documenting what those layered interactions are, or, before you know it, you have multiple incompatible user access controls, row-level access policies, competing master/reference data, or different parts of the application interpreting the same data differently.
It's a pretty bad value proposition, if you ask me, to have to do so much hand-holding for a result that comes with no guarantees whatsoever (you will never know the extent to which your clean spec "made it in").
Indeed. I would love for it to be true, but aside from opencascade^1 all the professional kernels are proprietary and not in the training set, so LLMs can't just regurgitate them.
^1: Which I really appreciate, but let's be real, it is far behind e.g. parasolid.
FreeCAD is a perfectly good user interface for opencascade. The problem is that as your geometry gets more complicated you start running into the kernel's limitations.
Vibe coding only seems to work, insofar as it does, when the training data includes multiple exemplars of solutions to a given problem.
As noted elsethread, there's only one geometric kernel which is decently far along and open source, and it's over 1 million LOC --- I doubt it's being included in any training data, and I doubt that an LLM could regurgitate such a large project which would then compile w/o errors and then work as expected --- the number of tokens required to get such a project to an initial state is a marked hurdle as well.
I haven't used a laptop in the last decade that wouldn't last a whole day on battery, or that wouldn't hold up to any of those qualifiers for that matter, from Apple or other manufacturers. Not that Apple's are bad devices, but they are flawed like the rest of them (often less, and sometimes more in areas that may matter less to you, the software being a major one for me, and increasingly so).
Also good to remember that Apple is a company of good devices and tremendous marketing, not a company of tremendous devices per se. That entails a lot of subjectivity and awkward tribalism.
Any mid- to high-end pro-line laptop from the usual manufacturers (dell latitude series, lenovo T series, hp pro/elitebook series) gives you that, really (rigidly built body using magnesium/aluminum alloys, good input devices and IO, high-end config, …) and some practical perks (hot swappable batteries, repairable/expandable, on-site warranty, …)
I don't understand how we keep hearing so often here about Apple OSes being so amazingly simple, approachable and cleverly designed with a lot of attention paid to detail, while every practical productivity advice involves some undiscoverable trick, or combination of tricks, that seems so arbitrary and obtuse. I don't like Mac, in large part because of that. No amount of marketing and peer pressure will convince me of the superior elegance and sophistication of something that hates you for wanting windows maximised. Those hidden tricks only add insult to injury as pervasive reminders of your presumed inadequacy, that you need to suffer to have things your way, and that Apple is magnanimous to even let you have them.
Every system has its issues. It's really a question of which issues you can live with and which system ultimately fits your workflow best.
After I got used to working in windows instead of full screen all the time, I can't really go back. Even on Windows I find myself working the way I do on macOS. Full screening every app made more sense on a 1024x768 screen (or smaller). Once I moved to a widescreen display (which happened to coincide with getting my first Mac), running full screen felt like the wrong move most of the time.
> After I got used to working in windows instead of full screen all the time, I can't really go back.
Sorry if this comes across as disrespectful, but it smells like Stockholm Syndrome. You are choosing not to use the full extent of your screen real estate, and that's fine, it's your choice, but it is no excuse for making it hard. If you compound the whitespace, the thick borders and the generally oversized UI controls, not much "productive space" remains available to get the work done. I am not interested in macOS as a content-consumption-first vehicle, though that's clearly where Apple is steering it.
It is situational, but I think on a modern wide screen (or screens), if it is a single text-like document (like a web page or a terminal) you want 2 or perhaps 3 side by side. If the app implements its own window management (like Blender), a single full screen is best. Overlapping windows are important to have, but almost never desirable; they usually happen because you ran out of room.
The problem I have with this is that I was using a 1600x1200 21" display in 2000, and got used to workflows for it back then.
I am currently running a 16" display at a similar fractionally scaled resolution (because Apple stopped understanding DPI after shipping the first LaserWriter, apparently).
Over that time, my eyes have not gotten better to match display DPI, so I'd rather have web sites just adjust the font size so that there are a reasonable number of words per line instead of rendering whitespace.
Non-full-screen windows would make more sense if Apple supported tiling properly, like most Linux WMs and also modern Windows.
MacOS sort of supports tiling in a "program manager shipped it + got promoted" sort of way, but you have to hover over the window manager buttons, which is slower than just manually arranging stuff. If there are any keyboard shortcuts to invoke tiling, or a way to change the WM buttons to not suck, I have not found them.
1600x1200 is still a 4:3 aspect ratio, I think I agree that scaling that makes sense. Full screen really got problematic with 16:9 and 16:10 aspect ratios. That's when the empty gutters in most apps, and especially websites, became really pronounced.
As for tiling in macOS...
You can use the mouse to drag windows into tiled positions. Grab a window and when your cursor hits the side, corner, or top edge of the screen, it will indicate the tiling position, much like AeroSnap on Windows from some years back. You can also hold the Option key while holding the window to get the tiling regions to show up without moving all the way to the edge.
Keyboard shortcuts exist as well. Go to Settings -> Keyboard -> Keyboard Shortcuts... In the dialog that opens, go to Windows. There you can see all the options and customize them if you'd like. Or set shortcuts for things that might not have one yet.
If for some reason dragging the windows around doesn't work, go to Settings -> Desktop & Dock -> the Windows heading. There are toggles to enable or disable dragging to tile, and the Option key trick. You can also turn off the margins on tiled windows, which you'd probably want to do.
I've never been a big fan of window tiling myself. There was a time when I needed a lot of different windows visible at all times, but that hasn't been the case in a long time. I find tiling makes things too big or small, it's never what I actually want. I drag the window up to the top of the screen to invoke Fill from time to time, but that's about it.
> Apple OSes being so amazingly simple, approachable and cleverly designed with a lot of attention paid to detail
That was the Mac in the 1990s. It was designed for, and highly usable with, a one-button mouse. It didn't have hidden context menus or obscure keyboard shortcuts. Everything was visible in the menu bar and discoverable. The Finder was spatially aware with a high degree of persistence that allowed you to develop muscle memory for where icons would appear onscreen every time you opened a folder.
There was almost nothing hidden or lurking in the background, unlike today (my modern Mac system has 500 running processes right now, despite having only 15 applications open). We've had decades of feature creep since the classic Mac OS, which has made modern Macs extremely hard to use (relatively speaking).
Eh. If you get into enterprise business, this is the accepted management style. AI will now mix this up a little, but before, you basically needed to ask whether you wanted to blow 300k on developer salaries to maybe fix something that is already working and generating money, or to add more features to the roadmap you can pin on your chest. Scaling infrastructure is the best choice for 90% of managers, especially since they are not the ones paying for it and this kind of technical debt doesn't matter on typical bonus-check timeframes.
I used to work for AWS on a service team. I noticed we were spending way too much on provisioned capacity for Dynamo and would benefit from on-demand provisioning. After proving it worked, making the change, and deploying it, I was rather pleased with myself. "Saved $2M in costs by switching to on-demand provisioning" barely made it onto my performance review lol.
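The mechanical part of the switch is tiny, by the way. Roughly this, in boto3 terms (table name made up, and in practice you'd do it through whatever infra-as-code you already have rather than a one-off script):

    import boto3

    dynamodb = boto3.client("dynamodb")

    # Flip a table from provisioned capacity to on-demand (pay-per-request)
    # billing. "orders" is a stand-in table name.
    dynamodb.update_table(
        TableName="orders",
        BillingMode="PAY_PER_REQUEST",
    )

The work is in proving it's safe and actually cheaper for your traffic pattern, not in the call itself.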
They might just not have believed it. At the management level everyone is busy claiming to be delivering huge numbers all the time, and people stop trusting that sort of claim.
- the possibility of only partial success creating an even messier situation than the existing one
Having a way to do the whole thing on a much smaller timescale and budget lets decision makers focus more on those externalities, and can also simplify them. This kind of bit rot exists somewhere (often everywhere) in many fast-moving businesses, as a natural consequence of the value tradeoffs we have had up to now. Now there are machines that can speedrun the grunt work of clearing it.
I am always having these arguments: we are paying this other company X a year for something we should build ourselves if we really need it.
The rebuttal I always get is “I want you working on something that I can’t pay another company for”. I think it sounds good, but in the long run we always end up in budget conversations and headcount limits because we spend so much money on external services and software we should just build.
Every company ever has this problem.
But now, with AI, the cost of showing the company “yes we can” is so low. I worry for companies who have promotable replacements.
I never found a satisfying search algorithm with atuin (iterating over time between fuzzy, skim, fulltext, and maybe others). Is there one that is both:
I'd start with setting the combined frecency score to 0 and testing out how the raw fuzzy scoring does. Then if you find you're having trouble finding recent or frequent commands, you can adjust from there. If you can't find a configuration that feels good, please feel free to let me know in an issue.
Wouldn't a solution be to have the opening on the side and pull it toward you, like a "box on wheels"? As long as the sides of the "box" are thermally insulated, it seems like a sound solution for the stated problem (but certainly not one that's mechanically the cheapest/simplest).
A friend suggested a bottom-hinged door like that on a garbage chute, though well sealed, and as wide as the fridge, so the sides of the door don't get in the way of storing long objects in the fridge.