Hacker News

100ms is /massive/ for a timing delta. I have exploited timing deltas that were not much more than a handful of machine code instructions in terms of execution time, but you really do need a lot of samples to confirm deltas that small. It starts getting impractical for many APIs (someone will notice, hopefully).


> someone will notice, hopefully

Or more likely "someone will notice, eventually"


This comment is why I love hackernews


See: https://rdist.root.org/2010/07/19/exploiting-remote-timing-a... and Crosby 2007. I got into infosec around 06 and tptacek, Nate Lawson and some others were my heroes. Now I run my own consulting firm with a bunch of cool people :)

Also in infosec: what is old is new. We still find shitty comparison routines (timing attacks) and SQL injection... some day :)


I just found a use-case for the sleep( rand(1000) ) function :-)


Nope!

If the rand function produces uniform random numbers, then with enough samples the signal comes out on top of the noise.

If it is non-uniform, then with enough samples you can characterize the non-uniformity, and you are back at square one.

Use proper security instead of obscurity.
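A quick simulation of the point above (the delta, noise range, and sample count are illustrative numbers, not from the thread): even when the random sleep dwarfs the secret-dependent delta, averaging enough samples recovers the delta, because the uniform noise contributes the same mean to both paths.

```python
import random

DELTA = 5.0     # hypothetical extra time on the "hit" path
NOISE = 1000.0  # sleep(rand(1000)) adds uniform noise in [0, 1000)

def observe(is_hit, n):
    """Average n timing samples for one code path."""
    total = 0.0
    for _ in range(n):
        t = random.uniform(0, NOISE)  # the random sleep
        if is_hit:
            t += DELTA                # the secret-dependent delta
        total += t
    return total / n

n = 2_000_000
gap = observe(True, n) - observe(False, n)
print(gap)  # close to DELTA, even though the noise range is 200x larger
```

The noise averages toward NOISE/2 on both paths, so the difference of the two averages converges on the delta itself; more noise only slows the convergence.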


Easily accounted for. I can measure the execution run-time, keep a running average in memory over some time period, and have the sleep function top up the difference between the two paths. Not sure what the "proper security" method is for preventing execution deltas.


Why not just run the thing (which takes some small fraction of time), then pad to five seconds, and respond? Since your work will be done in milliseconds, padding to the nearest five seconds will remove any timing signal.

And it's not the kind of request anyone has a legitimate reason to submit more often than that.


Adding five seconds to everything just adds five seconds, it doesn't matter if the difference between the two requests is .01s or 5.01s.


The parent said "pad to 5 seconds" not "add 5 seconds". Thus everything would be 5 seconds (never 5.01). The difference between a hit and a miss would be exactly 0s. Note that I'm not advocating for or against this solution; rather, clarifying the conversation.


Pad to, not pad by.

I.e. the padding to add is (5 - duration_of_operation) with duration of operation being far lower than 5 s.
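As a sketch of what "pad to, not pad by" looks like (the five-second figure is from the thread; `do_work` and its timings are stand-ins):

```python
import time

PAD_SECONDS = 5.0  # fixed wall-clock budget for every request

def do_work():
    time.sleep(0.01)  # stand-in for the real check (milliseconds)
    return True

def padded_request():
    start = time.monotonic()
    result = do_work()
    elapsed = time.monotonic() - start
    # Sleep out the remainder: total duration is always PAD_SECONDS,
    # as long as do_work never exceeds the budget.
    time.sleep(max(0.0, PAD_SECONDS - elapsed))
    return result
```

Every call takes five seconds, so a hit and a miss are indistinguishable by latency (assuming the slowest path stays under the budget).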


Depends. With comparison functions you can implement a constant-time comparison that always takes the same amount of time. In this case it isn't really a crypto problem, so anything where we are confident about things taking the same amount of time is fine. Basically, in some parent method/func, make sure we always spend 2000ms or whatever time is always greater than the max runtime of the slowest path.

Secondary / defense-in-depth mitigations would be rate limiting this page and making it purposefully slow to respond, just to make it that much harder to collect samples / abuse it without being noticed. The captcha is a nice touch, but it didn't seem particularly strong (a good captcha solver could break it). Still, a captcha will chase off a lot of script kiddies. You don't have to be faster than the bear, just faster than the slowest person ;)
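For the constant-time comparison part, Python's stdlib already ships one; a sketch contrasting it with the leaky pattern that causes these bugs:

```python
import hmac

def leaky_equal(a: bytes, b: bytes) -> bool:
    """The shape to avoid: returns as soon as a byte differs,
    so the runtime leaks the position of the first mismatch."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:  # early exit is the timing leak
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    # stdlib constant-time comparison (secrets.compare_digest
    # is the same function under another name)
    return hmac.compare_digest(a, b)
```

`hmac.compare_digest` examines every byte regardless of where a mismatch occurs, so runtime doesn't depend on the secret's contents.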


rand() produces a flat distribution, which is uniform. Do I understand properly that rand() + rand() would return a normal distribution, so case #2, for which you can determine the non-uniformity?

What would be a proper first step to harden API for timing attacks?


Adding any random noise, even perfect randomness, doesn't prevent the attack. It just means the attacker needs more samples.


rand() + rand() does not produce a normal distribution, but adding together a few thousand rands does start to approach one. Central limit theorem.
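A quick simulation of the distinction (the middle-half count is just one easy way to see the shape): the sum of two uniforms is triangular, peaking in the middle, which is already clearly non-uniform but not yet Gaussian.

```python
import random

n = 200_000
samples = [random.random() + random.random() for _ in range(n)]

# A uniform variable on [0, 2) would land in the middle half
# [0.5, 1.5) 50% of the time; the triangular sum of two uniforms
# lands there 75% of the time.
middle = sum(1 for s in samples if 0.5 <= s < 1.5) / n
print(middle)  # ~0.75
```

So the attacker can already model the rand() + rand() delay precisely, per the grandparent's point; many more summands would be needed before the sum even resembles normal.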


rand() + rand() does not give a normal distribution (it gives a triangular one). If there is any statistical difference between the timings, it's in theory possible to break.

An easy mitigation would be to just drop the card number into a queue, process it asynchronously, and return to the user without waiting.
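A minimal sketch of that queue shape (`process_card`, `handle_request`, and the validity check are hypothetical names, not from the thread): the request handler only enqueues, so its latency doesn't depend on the card number at all.

```python
import queue
import threading

jobs = queue.Queue()
results = []

def process_card(number):
    # stand-in for the slow, secret-dependent validation
    results.append(number.startswith("4"))

def worker():
    while True:
        number = jobs.get()
        if number is None:  # sentinel to stop the worker
            break
        process_card(number)

def handle_request(number):
    jobs.put(number)    # constant-time from the caller's view
    return "accepted"   # respond before any processing happens
```

The trade-off is that the caller no longer learns the result synchronously, which is exactly the point, but also why a reply channel is needed.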


Flood the queue with invalid numbers and timings can still be worked out.


I would have gone for `sleep(1000)` and had it run in parallel with the actual function, so that every request takes 1000 milliseconds.
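A sketch of that parallel-sleep idea (`check_card` and the card number are illustrative stand-ins): start a fixed one-second sleep, do the real work while it runs, then wait for the sleep to finish.

```python
import threading
import time

FIXED_SECONDS = 1.0  # the sleep(1000) from the comment, in seconds

def check_card(number):
    time.sleep(0.05)  # stand-in for the real, secret-dependent work
    return number == "4111111111111111"

def constant_latency_check(number):
    sleeper = threading.Thread(target=time.sleep, args=(FIXED_SECONDS,))
    sleeper.start()              # the fixed delay runs in parallel...
    result = check_card(number)  # ...with the actual function
    sleeper.join()               # total time is max(work, FIXED_SECONDS)
    return result
```

Same caveat as padding: if the real work can ever exceed the fixed window, the delta reappears, so the window has to exceed the slowest path's worst case.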



