I also found this. Computing a transitive closure, for example, on a dataset that isn't a pure DAG was absurdly painful in every SQL database I tried. Where the data fit, it ended up being cheaper and much, much faster to just buy an old workstation with more than half a terabyte of RAM and do all the computations in memory using networkx and graph-tool.
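
For what it's worth, a minimal sketch of the in-memory approach with networkx (assuming the edge list fits in RAM; the file name and format are made up for illustration):

    import networkx as nx

    # Build a directed graph from a plain "src,dst" edge list.
    # Cycles are fine: nx.transitive_closure works on any digraph,
    # unlike nx.transitive_closure_dag, which requires a DAG.
    G = nx.DiGraph()
    with open("edges.csv") as f:  # hypothetical input file
        for line in f:
            src, dst = line.strip().split(",")
            G.add_edge(src, dst)

    # Transitive closure: adds an edge (u, v) whenever v is reachable from u.
    # Memory grows roughly with the number of reachable pairs, which is
    # why the big-RAM workstation helps.
    TC = nx.transitive_closure(G, reflexive=False)
    print(TC.number_of_edges())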

I presume there are better algorithms and storage methods for larger real-world graph datasets. I couldn't figure out neo4j fast enough, and it wasn't clear that I could map things like block modeling onto it anyway. Still, it would be very useful for someone to build a better production-ready storage backend for networkx, at least one where some of the data could be cached in SQLite3 (a rough sketch of the idea is below).
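
Even a toy version of that cache is only a few lines. This is just a sketch of the idea, not anything that ships with networkx, and the table and column names are invented:

    import sqlite3
    import networkx as nx

    # Hypothetical SQLite3-backed edge cache for a networkx graph.
    con = sqlite3.connect("graph_cache.db")
    con.execute("CREATE TABLE IF NOT EXISTS edges (src TEXT, dst TEXT)")

    def cache_edges(edge_iter):
        # Persist an iterable of (src, dst) pairs.
        con.executemany("INSERT INTO edges VALUES (?, ?)", edge_iter)
        con.commit()

    def load_graph():
        # Rebuild the in-memory graph from the cached edges.
        G = nx.DiGraph()
        G.add_edges_from(con.execute("SELECT src, dst FROM edges"))
        return G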


