
You're misusing big O notation


How so? There, n means "the number of [this type of] drivers", thus the GP is saying that "self driving cars improve in a way proportional to the number of self driving cars [that are collecting and sharing data], while humans improve their driving skills in a way independent of the number of humans driving."

(I think that's slightly wrong -- humans are probably O(log n) because large populations of humans invest in heuristics and teaching methods that enhance driver improvement rate, and for SDCs, the "lessons learned" between units are probably somewhat redundant, so it might scale as something like sqrt(n). But I don't see a problem with the O(n) usage.)


It’s pretty clear that the author isn’t using the technical definition of big O but rather is using it in a descriptive way. It’s not ambiguous in any way so what’s the harm?


I would say no harm, but it is confusing to people who do not know big O, and useless to those who do.


Can you explain how it's wrong? Obviously, normally you want things to scale more slowly rather than faster, but other than applying to a different thing than usual, it seems accurate to me -- human drivers improve at a constant rate regardless of how many there are, and self-driving cars improve linearly with their number.


A function being in O(f(x)) means the function grows no more than a (multiplicative) constant factor faster than f(x). So O(1) is a subset of O(n). O is analogous to <=.

The semantics the GGP comment meant to convey are better represented by saying that autonomous cars' driving experience grows with Ω(n) (analogous to >=) or Θ(n) (analogous to =).
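
As a sketch of the definition being referenced (the helper `witnesses_big_o` and the finite check range are illustrative assumptions, not part of the thread): f is in O(g) iff there exist constants c > 0 and n0 such that f(n) <= c·g(n) for all n >= n0. Spot-checking this over a finite range shows why O(1) is a subset of O(n) but not vice versa:

```python
# Hypothetical helper: spot-check the big-O witness condition
# f(n) <= c * g(n) for all n0 <= n <= n_max (a finite stand-in
# for "for all sufficiently large n").
def witnesses_big_o(f, g, c, n0, n_max=10_000):
    return all(f(n) <= c * g(n) for n in range(n0, n_max + 1))

constant = lambda n: 7   # a constant function, Theta(1)
linear = lambda n: n     # Theta(n)

# O(1) is a subset of O(n): c = 7, n0 = 1 witnesses constant in O(n).
assert witnesses_big_o(constant, linear, c=7, n0=1)

# But n is not in O(1): no fixed c bounds n for all large n
# (here, n exceeds 1000 * 7 well before n_max).
assert not witnesses_big_o(linear, constant, c=1000, n0=1)
```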



