I don't know; where I work, 100% of resumes come from a third-party recruiter, and it was the same at my previous job. I've gotten my last three jobs either through third-party recruiters or direct applications, not referrals.
In my experience, if your executives are communicating in corporate PR speak internally, it means you’re surrounded by grifters and people who have failed upward.
Speaking directly and clearly is a core skill of successful executives. Corporate PR speak only interferes with conveying direct and clear internal messages. You aren't worried about accidentally offending the people you work with; you're more worried about not getting your point across as directly as possible.
> Corporate PR speak only interferes with conveying direct and clear internal messages.
I'd argue corporate PR speak directly and purposefully interferes with conveying any kind of message clearly. It often skirts the line between misleading and lying. And obviously you don't want to poison your own well with lies.
I'd expect there's some corporate-speak maximum in middle management, with people above and below using it less, and the curve being flatter at more bureaucratic companies.
Corporate-speak is usually centered around covering your ass with ambiguity.
CEOs don't feel compelled to do that in "private" emails, but they sure as hell do it in public statements. They dial it up to 11 during layoffs or when being interviewed about unethical behavior.
Nothing's gone wrong. Apple's phones are good but so are their phones from a couple of years ago. It's not sustainable to have a huge improvement or redesign every year.
But you care about friends? If so, then don't forget that there was a time when they were strangers too.
I'm not implying you have to care about strangers, but there are tangible benefits to maintaining a balanced approach (and for introverts like us, that can mean doing more than we would if we just followed our gut).
After all, even if you don't care about strangers, being on good terms with them, or even making the occasional friend, will have a positive impact on everything else you actually care about, be it your own well-being, your projects, etc.
> And who likes doing this to themselves anyway? Isn't it a very frustrating experience? How is this the most loved language?
The thing is, these dependencies do exist no matter what language you use, if they stem from an underlying concept. In that case Rust just makes you write them out explicitly, which is a good thing: in C++ all these dependencies would be more or less implicit, and every time somebody edits the code they need to think all these cases through and build a mental model (if they see the dependencies at all!). In Rust you at least have lifetime annotations, which (a) make it obvious there is some special dependency going on and (b) show the explicit lifetimes.
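For illustration, a minimal (hypothetical) sketch of writing such a dependency down in Rust:

```rust
// `longest` returns a reference that borrows from one of its inputs.
// The `'a` in the signature writes that dependency down: the result
// cannot outlive either argument. The equivalent C++ function compiles
// with no such declaration, and every caller has to reconstruct the
// rule from the implementation (or the docs).
fn longest<'a>(x: &'a str, y: &'a str) -> &'a str {
    if x.len() >= y.len() { x } else { y }
}

fn main() {
    let a = String::from("borrow");
    let b = String::from("checker");
    // Fine: `a` and `b` both outlive the returned reference.
    assert_eq!(longest(&a, &b), "checker");
    // Returning a reference into a value that dies first would be
    // rejected at compile time, not discovered at run time.
}
```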
So what I'm saying is, you need to put in this work no matter which language you choose; writing it down is then not a big problem anymore. If you don't think through these rules, your program will probably work most of the time - but only most of the time, and that can be very bad in certain scenarios.
> So what I'm saying is, you need to put in this work no matter which language you choose
This is very false. Managed-memory languages don't require you to even think about lifetimes, let alone write them down.
Yes, I understand that this is for efficiency - but claiming that you have to think about lifetimes everywhere is just wrong, and irrelevant when discussing topics (prototyping/design work/scripting) where you don't care about efficiency.
Lifetimes are still important in managed languages. You just have to track them in your head, which is fallible. The difference is that if you get it wrong in a managed language, you get leaks or stale objects or other logic bugs. In Rust you get compile-time errors.
While this is correct, it's still much easier to think about lifetimes in managed languages. The huge majority of allocated objects gets garbage-collected after a very short time, when they leave the context (similar to RAII).
Mostly you need to think about large and/or important objects: avoid cycles, and avoid unneeded references that would keep such objects alive for too long. Such cases are few.
The silver lining is that if you make a small mistake and a large object has to live slightly longer, you won't have to wrangle with the lifetime checker over that small sliver of lifetime. But if you make a big mistake, nothing will warn you about the memory leak before prod monitoring does.
> The huge majority of allocated objects gets garbage-collected after a very short time, when they leave the context (similar to RAII).
Those objects are also virtually no problem in languages like Rust or C++: they're local objects whose lifetimes are trivial, and they're managed automatically with no additional effort from the developer.
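A small sketch of that common case, with made-up names - cleanup is deterministic and needs no annotations, because nothing escapes the scope:

```rust
use std::cell::RefCell;
use std::rc::Rc;

// A type that records when it is cleaned up, so the drop order is
// observable. (Rc<RefCell<...>> is just plumbing for the log.)
struct Temp {
    name: &'static str,
    log: Rc<RefCell<Vec<&'static str>>>,
}

impl Drop for Temp {
    fn drop(&mut self) {
        self.log.borrow_mut().push(self.name);
    }
}

fn drop_order() -> Vec<&'static str> {
    let log = Rc::new(RefCell::new(Vec::new()));
    {
        let _a = Temp { name: "a", log: Rc::clone(&log) };
        let _b = Temp { name: "b", log: Rc::clone(&log) };
        // `_b` then `_a` are dropped here, in reverse declaration
        // order, with no collector involved - the RAII-like case
        // the comment above describes.
    }
    Rc::try_unwrap(log).unwrap().into_inner()
}

fn main() {
    assert_eq!(drop_order(), vec!["b", "a"]);
}
```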
Once upon a time (at least through IE7) Internet Explorer had separate memory managers for javascript and the DOM. If there was a cycle between a JS object and a DOM object (a DOM node is assigned as a property of an object, and another property was assigned as an event handler to the DOM node) then IE couldn't reclaim the memory.
Developers of anything resembling complex scripts (for the time) had to manually break these cycles by setting to null the attributes of the DOM node that had references to any JS objects.
Douglas Crockford has a little writeup here[0] with a heavy-handed solution, but it was better than doing it by hand if you were worried another developer would come along and add something and forget to remove it.
Other memory-managed languages also have the occasional sharp corner to deal with. Most of the time this can be avoided by knowing how to clean up resources properly, but some corners are easier to fall into than others.
Oracle has a write-up on hunting Java memory leaks [1]
Microsoft has a similar, but less detailed, article here[2]
Of course, sometimes a "leak" is really a feature. One notorious example is the implicit-global behavior in the bad old days of JS prior to strict mode. I forget the name of the company, but someone's launch was ruined because a variable referencing a shopping cart wasn't declared with `var` and so became a global. Concurrent visitors could end up with other users' shopping cart data, since Node runs a single main thread and concurrency was handled only by the event loop.
My question was about the nature of a memory-managed language causing "leaks or stale objects or other logic bugs". This issue is not that - this is due to a buggy implementation causing memory leaks.
To be more precise: this is a bug, that was fixable, in the runtime, not in user applications that would run on top of it.
Assume a well-designed memory-safe language and implementation. What kinds of memory hazards are there?
I will note that in GC literature at least, that is still considered a leak.
In an ideal world, we could have a GC that reclaimed all unused memory, but that turns out to be impossible because of the halting problem. So, we settle for GCs that reclaim only unreachable memory, which is a strict subset of unused memory. Unused reachable memory is a leak.
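A hedged illustration (hypothetical cache, made-up sizes) of that distinction in code - no collector could ever free the stale entries, because the map keeps them reachable:

```rust
use std::collections::HashMap;

// "Unreachable" is what a GC can reclaim; "unused" is what the program
// will never touch again. This cache keeps every entry reachable for
// the life of the map even though only one entry is ever read back, so
// the other 9_999 entries are a leak by the definition above.
fn build_cache() -> HashMap<u64, Vec<u8>> {
    let mut cache = HashMap::new();
    for request_id in 0..10_000u64 {
        cache.insert(request_id, vec![0u8; 1024]); // ~10 MB retained
    }
    cache
}

fn main() {
    let cache = build_cache();
    // Only the most recent entry is ever used again...
    assert_eq!(cache.get(&9_999).map(|v| v.len()), Some(1024));
    // ...but every entry stays reachable, so no collector could free it.
    assert_eq!(cache.len(), 10_000);
}
```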
> Managed-memory languages don't require you to even think about lifetimes, let alone write them down.
Memory is only one of many kinds of resources an application uses. Memory-managed languages do nothing to help you with the others (files, sockets, locks), and effectively managing them is way harder in those languages than in Rust or C++.
What? Rust doesn't do anything to "help you with those resources" either - you can still create cycles of `Rc`/`Arc` objects or allocate huge amounts of memory and then forget about it.
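A minimal sketch of the `Rc` cycle case (hypothetical `Node` type) - this is safe Rust, it compiles cleanly, and it leaks:

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Two reference-counted nodes pointing at each other never drop to a
// strong count of zero, so neither is ever freed. (`Weak` references
// are the standard fix for back-edges.)
struct Node {
    other: RefCell<Option<Rc<Node>>>,
}

fn cycle_counts() -> (usize, usize) {
    let a = Rc::new(Node { other: RefCell::new(None) });
    let b = Rc::new(Node { other: RefCell::new(None) });
    *a.other.borrow_mut() = Some(Rc::clone(&b));
    *b.other.borrow_mut() = Some(Rc::clone(&a));
    // Each node is kept alive by its own binding *and* by the other node.
    (Rc::strong_count(&a), Rc::strong_count(&b))
    // When `a` and `b` go out of scope, the counts only fall to 1, so
    // no destructor runs and the memory is never reclaimed - the
    // compiler does not flag this.
}

fn main() {
    assert_eq!(cycle_counts(), (2, 2));
}
```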
In both languages you have to rely on careful design, and then profile memory use and manage it.
However, Rust additionally requires you to reason about lifetimes explicitly. Again: great for performance, terrible for design, prototyping, and tools in non-resource-constrained environments.
> The thing is, these dependencies do exist no matter what language you use
Sure, but in a lot of cases, these invariants can be trivially explained, or intuitive enough that it wouldn't even need explanation. While in Rust, you can easily spend a full day just explaining it to the compiler.
I remember spending literal _days_ tweaking intricate lifetimes and scopes just to promise Rust that some variables won't be used _after_ a thread finishes.
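For what it's worth, `std::thread::scope` (stable since Rust 1.63) now expresses exactly that promise; a minimal sketch with made-up names:

```rust
use std::thread;

// The scope guarantees every spawned thread is joined before the scope
// returns, so plain borrows of caller data are allowed - no `'static`
// bounds, no `Arc`, no manual lifetime annotations.
fn scoped_sum(data: &[i32]) -> i32 {
    let mut sum = 0;
    thread::scope(|s| {
        s.spawn(|| {
            sum = data.iter().sum(); // plain borrow of caller data
        });
    }); // all scoped threads are joined here
    sum
}

fn main() {
    let data = vec![1, 2, 3, 4];
    assert_eq!(scoped_sum(&data), 10);
    assert_eq!(data.len(), 4); // still usable after the threads finish
}
```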
Some things I never managed to express in Rust at all, even though they were trivial in C, so I just keep a C core library for the hot path and use it from Rust.
Overall, performance sensitive lifetime and memory management in Rust (especially in multithreaded contexts) often comes down to:
1) Do it in _sane_ Rust, and copy everything all over the place, use fancy smart pointers, etc.
2) Do it in a performant manner, without useless copies or over-the-top memory management, but prepare for a week of frustrating development and a PhD in Rust idiosyncrasies.
The thing is, you think your code is safe and it most likely is, but mathematically speaking, what you are doing is difficult or even impossible to prove correct. It is akin to running an NP-complete algorithm on a problem that is easier than NP. Most practical problem instances are easy to solve, but the worst case, which can't be ruled out, is utterly, utterly terrible, which forces you to use a more general solution than is actually necessary.
Since smart pointers became ubiquitous in C++, I've (personally) had only a handful of memory and lifetime issues. They were all deducible by looking at where we "escape hatched" and stored a raw pointer to something that was actually a unique pointer, or something similar. I'll take one of those every 18 months over throwing away my entire language, toolchain, ecosystem, and iteration times.
> Some things I never managed to express in Rust at all, even though they were trivial in C, so I just keep a C core library for the hot path and use it from Rust.
I can't think of anything you can do in C that you can't do in unsafe Rust, and that has the advantage that you can narrow the unsafety down to exactly where you need it and only there, and you can test it under Miri to find bugs.
To be fair, unsafe Rust has an entirely new set of idiosyncrasies that you have to learn for your code not to cause UB. Most of them revolve around the many ways in which using references can invalidate raw pointers, and using raw pointers can invalidate references, something that simply doesn't exist in C apart from the rarely-used restrict qualifier.
(In particular, it's very easy to inadvertently trigger the footgun of converting a pointer to a reference, then back to a pointer, so that using the original pointer again can invalidate the new pointer.)
Extremely pointer-heavy code is entirely possible in unsafe Rust, but often it's far more difficult to correctly express what you want compared to C. With that in mind, a tightly-scoped core library in C can make a lot of sense; more lines of unsafe code in either language leave more room for bugs to slip in.
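A minimal sketch of that footgun, with hypothetical names; the commented-out line is the one Miri would flag:

```rust
// Under the Stacked Borrows model that Miri checks, each re-derivation
// of a pointer or reference creates a new "tag", and using an *older*
// pointer invalidates every tag derived after it - a rule with no
// counterpart in C.
fn demo() -> u32 {
    let mut x = 0u32;
    let p: *mut u32 = &mut x; // raw pointer to x
    let r: &mut u32 = unsafe { &mut *p }; // reference derived from p
    let q: *mut u32 = r; // pointer derived from that reference
    unsafe { *q += 1 }; // fine: q is the newest derivation
    // unsafe { *p += 1 }; // legal C-style thinking, but this use of
    //                     // `p` would invalidate `q`, and any later
    //                     // use of `q` would be UB under Miri.
    x
}

fn main() {
    assert_eq!(demo(), 1);
}
```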
Personal preference and pain tolerance. Just like learning Emacs[1] - there's lots of things that programmers can prioritize, ignore, enjoy, or barely tolerate. Some people are alright with the fact that they're prototyping their code 10x more slowly than in another language because they enjoy performance optimization and seeing their code run fast, and there's nothing wrong with that. I, myself, have wasted a lot of time trying to get the types in some of my programs just right - but I enjoy it, so it's worth it, even though my productivity has decreased.
Plus, Rust seems to have pushed out the performance-productivity-safety frontier of language design in the performance-focused niche. If you're a performance-oriented programmer used to buggy programs that take a long time to build, then a language that gives you the performance you're used to with far fewer bugs and faster development time is really cool, even if it's still very un-productive next to productivity-oriented languages (e.g. Python). If something similar happened with productivity languages, I'd get excited too - actually, I think that's what's happening with Mojo currently (same productivity, greater performance), and I'm very interested.
> even if it's still very un-productive next to productivity-oriented languages (e.g. Python).
The thing is, for many people, including me, Rust is actually a more productive language than Python or other dynamic languages. Writing Python was an endless source of pain for me - it was the only language where my code initially failed to work as expected more often than not. Whereas in Rust, once it compiles, it works on the first go in 99% of cases, which is a huge productivity boost. And quite surprisingly, even writing the code was faster for me in Rust, thanks to my IDE's more reliable autocomplete and inline docs.
I think part of the problem is "developer productivity" is a poorly-defined term that means different things to different people.
To some, it means getting something minimal working and running as quickly as possible, accepting that there will be bugs, and that a comprehensive test suite will have to be written later to suss them all out.
To others (myself included), it means I don't mind so much if the first running version takes a bit longer, if that means the code is a bit more solid and probably has fewer bugs. And on top of that, I won't have to write anywhere near as many tests, because the type system and compiler will ensure that some kinds of bugs just can't happen (not all, but some!).
And I'm sure it means yet other things to other people!
I should have stated that I'm comparing Rust to typed Python (or TypeScript or Typed Racket or whatever). Typed Python gives you a type system that's about as good as Rust's, and the same kind of autocompletion and inline documentation you'd get with Rust, while also freeing you from (1) being forced to type every variable in your program up front, (2) being forced to manage memory, and (3) going without an interactive shell/REPL/Jupyter notebooks - Rust simply can't compete against that.
Your experience would likely have been very different if you were using typed Python.
> Typed Python gives you a type system that's about as good as Rust's
No, it absolutely does not.
Also consider that Python has a type system regardless of whether or not you use typing, and that type system does not change because you've put type annotations on your functions. It does allow you to validate quite a few more things before runtime, of course.
> Some people are alright with the fact that they're prototyping their code 10x more slowly than in another language because they enjoy performance optimization and seeing their code run fast, and there's nothing wrong with that.
I look at it a little differently: I'm fine with the fact that I'm prototyping my code 10x more slowly (usually the slowdown factor is nowhere near that bad, though; I'd say sub-2x is more common) than in another language because I enjoy the fact that when my code compiles successfully, I know there are a bunch of classes of bugs my code just cannot have, and this wouldn't be the case if I used the so-called "faster development" language.
I also hate writing tests; in a language like Rust, I can get away with writing far fewer tests than in a language like Python, but have similar confidence about the correctness of the code.
> Some people are alright with the fact that they're prototyping their code 10x more slowly than in another language because they enjoy performance optimization and seeing their code run fast, and there's nothing wrong with that.
Disclaimer: I've sort of bounced off of Rust three or so times. While I've created both long-running services and smaller tools in it, I've mostly had a hard time (not enjoying it at all, feeling like I'm paying a lot in development friction for very little gain, etc.), so if you're the type to write off most posts with "you just don't get it", this would probably just be one more on the pile. I would argue that I do understand the value of Rust, but I take issue with the idea that the cost is worth it in the majority of cases, and I think there are 80% solutions that work better in practice for most cases.
From personal experience: you could prototype faster and get performance in simpler ways than fighting the borrow checker, by being able to express allocation patterns and memory usage in better, clearer ways instead, avoiding both of the stated problems.
Odin and Zig (and other simpler languages) with access to these kinds of facilities are just an install away, and they're considerably easier to learn anyway. In fact, you could probably learn both of them alongside what you're doing in Rust, since the time investment is negligible by comparison in the long run.
With regards to the upsides in terms of writing code in a performance-aware manner:
- It's easier to look at a piece of code and confidently say it's not doing any odd or potentially bad things with regards to performance in both Odin and Zig
- Both languages emphasize custom allocators, which are a great boon to application simplicity, flexibility, and performance (set up a limited memory space temporarily and make sure we can never use more; set up entire arenas that can be reclaimed or reused wholesale; segment your resources into different allocators that can't possibly interfere with each other and have their own memory space guaranteed; etc.)
- No one can use one-at-a-time constructs like RAII/`Drop` behind your back so you don't have to worry about stupid magic happening when things go out of scope that might completely ruin your cache, etc.
To borrow an argument from Rust proponents, you should be thinking about these things (allocation patterns) anyway and you're doing yourself a disservice by leaving them up to magic or just doing them wrong. If your language can't do what Odin and Zig does (pass them around, and in Odin you can inherit them from the calling scope which coupled with passing them around gives you incredible freedom) then you probably should try one where you can and where the ecosystem is based on that assumption.
My personal experience with first Zig and later Odin is that they've provided the absolute most productive experience I've ever had when it comes to the code that I had to write. I had to write more code because both ecosystems are tiny and I don't really like extra dependencies regardless. Being able to actually write your dependencies yourself but have it be such a productive experience is liberating in so many ways.
Odin is my personal winner in the race between Odin and Zig. It's a very close race but there are some key features in Odin that make it win out in the end:
- There is an implicit `context` parameter, primarily used for passing around an allocator, a temp allocator, and a logger, which is used implicitly by calls if you don't specify one. This makes your code less chatty and lets you talk only about the important things in some cases. I still prefer to be explicit about allocators in most plumbing, but I'll set `context.allocator` to some appropriate choice in `main` for smaller programs and leave it at that.
- We can have proper tagged unions as errors and the language is built around it. This gives you code that looks and behaves a lot like you'll be used to with `Result` and `Option` in Rust, with the same benefits.
- Errors are just values, but the last value in a multiple-value-return function is understood as the error position when needed, so we avoid the `if error != nil { ... }` that would otherwise exist if the language weren't made for this. We can instead use proper error values (which can be tagged unions) and `or_return`, i.e.:
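For the happy path, a hedged sketch (the procedure and type names here are made up) of what `or_return` looks like:

```odin
// Hypothetical names: `or_return` checks the last (error) return value
// of the call and, if it is non-nil, immediately returns it from the
// enclosing procedure.
load_config :: proc(filename: string) -> (config: Config, err: ParsingError) {
    config = parse_config_file(filename) or_return
    return
}
```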
If we wanted to inspect the error this would instead be:
// The zero value for a union is `nil` by default and the language understands this
ParsingError :: union {
    UnparsableHeader,
    UnparsableBody,
}

UnparsableHeader :: struct {
    ...
}

UnparsableBody :: struct {
    ...
}

doing_things :: proc() {
    parsed_data, parsing_error := parse_config_file(filename)
    // `p in parsing_error` here unpacks the tag of the union.
    // Notably there are no actual "constructors" like in Haskell,
    // so a type can be part of many different unions with no syntax
    // changes for checking for it.
    switch p in parsing_error {
    case UnparsableHeader:
        // In this scope we have an `UnparsableHeader`
        function_that_deals_with_unparsable_header(p)
    case UnparsableBody:
        function_that_deals_with_unparsable_body(p)
    }
    ...
}
- ZVI, or "zero-value initialization", means that all values are zero-initialized by default and must have a zero value. The entire language and ecosystem is built around this idea, and it works terrifically, letting you once again talk only about the things that are important.
P.S. If you want to make games or the like, Odin has the absolute best ecosystem of any C or C++ alternative out there, no contest. Largely this is because it ships with tons of game-related bindings, has language features dedicated entirely to dealing with vectors, matrices, etc., and is a joy to use for those things. I'd still put it forward as a winner in most other areas, but it really is an unfair race when it comes to games.
It stays with you until you need to change something and find yourself unable to make incremental changes.
And in many use cases people are throwing Rust (and especially async Rust) on problems solved just fine with GC languages so the safety argument doesn’t apply there.
The safety argument is actually the reason why you can use Rust in those cases to begin with. If it was C or C++ you simply couldn't use it for things like webservers due to the safety problems inherent to these languages. So Rust creeps into the part of the market that used to be exclusive to GC languages.
Sort of. Do you want someone who doesn't understand the constraints and is likely creating a bug that will cause crashes? Or do you want to block them until they understand the constraints?
So you use a safe, garbage-collected language like Python, and iterate 5x as fast as Rust. Problem solved. It's 2023 - there are at least a dozen production-quality safe languages.
I've been involved in Java, Python, PHP, Scala, C++, Rust, JS projects in my career. I think I'd notice a 5x speed difference in favor of Python if it existed. But I haven't.
You're probably just using Python wrong, then. You can use a Jupyter notebook to incrementally develop and run pieces of code in seconds, and this scales to larger programs. With Rust, you have to re-compile and re-run your entire application every time you want to test your changes. That's a 5x productivity benefit by itself.
This is false. This "thread" is not "about" anything. The top-level comment was about writing a game engine, and various replies to that thread deviated from that topic to a greater or lesser extent. Nobody has the authority to decide what a thread is "about".
Additionally, the actual article under consideration is about Rust's design in general. That makes my comments more on topic than one about game engines in particular, and so it should be pretty clear that if you're going to assume anything about my comments, then it would not be that they're about game engines.
It doesn't really matter; there doesn't exist a problem space where both Rust and Python are reasonable choices.
Case in point: I once wrote a program to take a 360-degree image and rotate it so that the horizon ran along the horizontal centerline and the view faced north. I wrote it in Python first, and running it on a 2K image took on the order of 5 minutes. I rewrote it in Rust and it took on the order of 200 ms.
Could I iterate in Python faster? Yes, but the end result was useless.
> there doesn't exist a problem space where both Rust and Python are reasonable choices
This thread, and many other threads about Rust, are filled with people arguing the exact opposite - that Rust is a good, productive language for high-level application development. I agree with you, there's relatively little overlap - that's what I'm arguing for!
Both qualify for writing tiny web servers, CLI/byte-manipulation scripts, server automation jobs, in-house GUI applications, and other small stuff. You could technically argue that this counts as "relatively little overlap", depending on what you do, though.
I took the "beats debugging" part as meaning "it is better than spending that day debugging".
I have fought the ownership rules and lost (replaced references with integer indices into a common vector - ugly stuff, but I was time-constrained). But I have seen people spend several weeks debugging a single problem, and that was truly soul-crushing.
I think you may be misunderstanding what GP means. It's about spending a day working on issues. You're either doing it before you launch your iteration, or you're doing it after. GP thinks it's better to spend the time before you push the change. From a quality perspective it's hard to see how anyone could disagree with that, but I can certainly see why there would be different preferences from programmers.
I don't personally mind debugging too much, but if your goal is to avoid bugs in your running software, then Rust has some serious advantages. We mainly use TypeScript to do things, which isn't really comparable to Rust. But we do use C when we need performance, and we looked into Rust, even did a few PoCs on real-world issues, and we sort of ended up in a situation similar to GP's. Rust is great, though a bit "verbose" to write, but its ecosystem is too young to be "boring" enough for us, so we're sticking with C for the time being. Being able to avoid crashes by doing the work before you push your code is immensely valuable in fault-intolerant systems. We do financial work with C; it cannot fail. So we're actually still doing a lot of the work up front, and then we handle it by rigorously testing everything. Because C is mainly used for small performance enhancements, our C programs are small enough that this isn't an issue, but it would be a nightmare with 40,000 lines of C code.
I agree that fast iteration time is valuable, but I don't think this has to hold 100% of the time.
I would much rather bang my head against a compiler for N hours, and then finally have something that compiles -- and thus am fairly confident works properly -- than have something that compiles and runs immediately, but then later I find I have to spend N hours (or, more likely, >N hours) debugging.
Your preferences may differ on this, and that's fine. But in the medium to long term, I find myself much more productive in a language like Rust than, say, Python.
Fair point. Perhaps the title could be "How to keep things interesting when you're not happy serving customer requests all day". On the flip side, I'm not suggesting you don't serve customer requests either - it's more a way to keep your head up while doing the day-to-day.