Personally I use filter and map (and others like .some, .every, .flat, and .flatMap) all the time, but I avoid reduce.
filter and map immediately tell me what the purpose of the loop is - filter things and transform things. A for loop does not do this.
To someone familiar with functional programming these are perfectly normal and easier to read and grep for than a bare loop. In other words, filter and map give additional context about the intent of the loop; a bare for loop does not.
Not to mention, this is common in languages outside of JS, even non-functional ones.
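To make the intent point concrete, here's a minimal sketch in Python (the same argument applies in JS with .filter/.map): the filter/map version names its two steps, while the bare loop makes the reader reconstruct them.

```python
nums = [3, -1, 4, -1, 5]

# filter/map: the two intents -- keep positives, then square -- are named.
squares = list(map(lambda n: n * n, filter(lambda n: n > 0, nums)))

# Bare loop: same result, but the filter/transform intent must be inferred.
squares_loop = []
for n in nums:
    if n > 0:
        squares_loop.append(n * n)

assert squares == squares_loop == [9, 16, 25]
```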
That said, I've seen so many convoluted uses of reduce that I avoid it on principle.
Yeah, the "problem" with reduce is that it can do anything, and so doesn't offer much over a traditional for loop with mutable local variables to accumulate into. Of course, if you can replace a reduce() with a filter and/or map or whatever (which more clearly states intent), by all means do so!
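As a hedged illustration of that replacement (using Python's functools.reduce as a stand-in for JS's Array.prototype.reduce): the reduce below folds a condition and a sum into one opaque lambda, while the second spelling states the intent directly.

```python
from functools import reduce

orders = [
    {"total": 120, "paid": True},
    {"total": 80, "paid": False},
    {"total": 45, "paid": True},
]

# reduce doing everything at once: it works, but the intent is buried.
revenue = reduce(
    lambda acc, o: acc + o["total"] if o["paid"] else acc, orders, 0
)

# The same computation as a filter plus a sum: intent is explicit.
revenue2 = sum(o["total"] for o in orders if o["paid"])

assert revenue == revenue2 == 165
```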
If you really need arbitrary computation, I'm not sure there's any real readability benefit to reduce() over local mutation (emphasis on local!). Sure, there's immutability and possibly some benefit to that if you want to prove your code is correct, but that's usually pretty marginal.
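A sketch of what "local mutation" can mean in practice (Python, hypothetical helper name): the accumulator is mutated, but only inside one small function, so callers still see a pure interface.

```python
def running_max(xs):
    """Return the running maximum of xs. Mutation stays inside this scope."""
    best = float("-inf")
    out = []
    for x in xs:
        best = max(best, x)   # local mutation, invisible to the caller
        out.append(best)
    return out

assert running_max([3, 1, 4, 1, 5]) == [3, 3, 4, 4, 5]
```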
Reduce can be very useful to signal that the state used is inherently limited. My rule of thumb is to use reduce when the state is a primitive or composed of at most two primitives, and a for loop otherwise. What counts as "primitive" depends on the language of choice and abstraction level of the program, of course.
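That rule of thumb might look like this in Python (illustrative examples, not from the thread): a single primitive accumulator reads fine as a reduce, while compound state is clearer as a loop.

```python
from functools import reduce

words = ["for", "loops", "are", "fine"]

# State is one primitive (an int): reduce signals the limited state well.
longest = reduce(lambda best, w: max(best, len(w)), words, 0)
assert longest == 5

# State is compound (a dict of lists): a plain loop is clearer.
by_length = {}
for w in words:
    by_length.setdefault(len(w), []).append(w)
assert by_length == {3: ["for", "are"], 5: ["loops"], 4: ["fine"]}
```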
Fair observation, but just opening a local lexical scope (in an expression-oriented language) can help with that. Also ... something something ST monad :)
I like map() and filter() in Python, but unfortunately they’re second-class citizens compared to list comprehensions, which keep getting optimizations that further increase their speed.
I like comprehensions as well - and their syntax is quite readable - but I’d like the two to be closer to parity.
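For reference, the two spellings being compared (timings omitted; the point is that only the comprehension gets CPython's dedicated fast path):

```python
nums = range(10)

# Comprehension form: eager, compiled to specialized bytecode in CPython.
evens_squared = [n * n for n in nums if n % 2 == 0]

# map/filter form: lazy iterators built from generic function calls.
evens_squared2 = list(map(lambda n: n * n, filter(lambda n: n % 2 == 0, nums)))

assert evens_squared == evens_squared2 == [0, 4, 16, 36, 64]
```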
One of the main reasons that map() and filter() don’t get optimizations in Python is because they’re lazy.
It’s a lot easier to optimize comprehensions because you can make a lot more guarantees about what doesn’t happen between iterations: the outer stack doesn’t move, the outer scope can be treated as static for purposes of GC-ing its locals, various interpreter-internal checks for “am I in a new place/should I become aware of a new scope’s state?” can be skipped, the inner generator can use a simplified implementation since it never needs to work with manual send(), and so on.
Map/filter can’t take advantage of any of those assumptions; they have to support being passed around to different arbitrary places each time next() is called on them, and have to support infinite sequences (so do comprehensions technically, but the interpreter can assume infinite comprehensions will terminate in fairly short order via OOM lol).
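The infinite-sequence point can be seen directly: map and filter compose lazily over an unbounded iterator, and values are only produced when next() is called, from wherever the iterator happens to have been passed.

```python
import itertools

naturals = itertools.count(1)          # infinite: 1, 2, 3, ...
odd_squares = map(lambda n: n * n, filter(lambda n: n % 2 == 1, naturals))

# Nothing has been computed yet; next() pulls values one at a time.
assert [next(odd_squares) for _ in range(4)] == [1, 9, 25, 49]
```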
That said, there are likely optimizations that could be applied for the common cases of “x = list(map(…))” and “for x in filter(…):” (in nongenerator functions) which allow optimizers to make more assumptions about the outer context staying static.