Hacker News
_cs2017_'s favorites
1. When AI writes the software, who verifies it? (leodemoura.github.io) | 305 points by todsacerdoti 16 days ago | 299 comments
2. Don't become an engineering manager (manager.dev) | 396 points by flail 16 days ago | 269 comments
3. Why No AI Games? (franklantz.substack.com) | 72 points by pavel_lishin 16 days ago | 85 comments
4. Semantic ablation: Why AI writing is generic and boring (theregister.com) | 285 points by benji8000 30 days ago | 204 comments
5. OpenAI should build Slack (latent.space) | 252 points by swyx 33 days ago | 327 comments
6. uBlock filter list to hide all YouTube Shorts (github.com/i5heu) | 1172 points by i5heu 33 days ago | 344 comments
7. Cowork: Claude Code for the rest of your work (claude.com) | 1298 points by adocomplete 66 days ago | 565 comments
8. TimeCapsuleLLM: LLM trained only on data from 1800-1875 (github.com/haykgrigo3) | 737 points by admp 66 days ago | 314 comments
9. Insights into Claude Opus 4.5 from Pokémon (lesswrong.com) | 123 points by surprisetalk 72 days ago | 23 comments
10. Play Aardwolf MUD (aardwolf.com) | 182 points by caminanteblanco 71 days ago | 102 comments
11. Agent design is still hard (pocoo.org) | 426 points by the_mitsuhiko 3 months ago | 258 comments
12. The Continual Learning Problem (jessylin.com) | 102 points by Bogdanp 4 months ago | 8 comments
13. Kafka is Fast – I'll use Postgres (topicpartition.io) | 561 points by enether 4 months ago | 401 comments
14. BERT is just a single text diffusion step (nathan.rs) | 455 points by nathan-barry 5 months ago | 110 comments
15. SWE-Grep and SWE-Grep-Mini: RL for Fast Multi-Turn Context Retrieval (cognition.ai) | 97 points by meetpateltech 5 months ago | 31 comments
16. Writing an LLM from scratch, part 22 – training our LLM (gilesthomas.com) | 254 points by gpjt 5 months ago | 10 comments
17. Show HN: I invented a new generative model and got accepted to ICLR (discrete-distribution-networks.github.io) | 656 points by diyer22 5 months ago | 91 comments
18. A small number of samples can poison LLMs of any size (anthropic.com) | 1202 points by meetpateltech 5 months ago | 439 comments
19. Reasoning LLMs are wandering solution explorers (arxiv.org) | 90 points by Surreal4434 5 months ago | 98 comments
20. Building the heap: racking 30 petabytes of hard drives for pretraining (si.inc) | 412 points by nee1r 5 months ago | 274 comments
21. We reverse-engineered Flash Attention 4 (modal.com) | 134 points by birdculture 5 months ago | 48 comments
22. Claude’s memory architecture is the opposite of ChatGPT’s (shloked.com) | 448 points by shloked 6 months ago | 236 comments
23. Le Chat: Custom MCP Connectors, Memories (mistral.ai) | 398 points by Anon84 6 months ago | 165 comments
24. A PM's Guide to AI Agent Architecture (productcurious.com) | 208 points by umangsehgal93 6 months ago | 62 comments
25. Physics of badminton's new killer spin serve (arstechnica.com) | 119 points by amichail 7 months ago | 16 comments
26. Dispelling misconceptions about RLHF (aerial-toothpaste-34a.notion.site) | 120 points by fpgaminer 7 months ago | 32 comments
27. Diffusion language models are super data learners (jinjieni.notion.site) | 218 points by babelfish 7 months ago | 16 comments
28. My Lethal Trifecta talk at the Bay Area AI Security Meetup (simonwillison.net) | 430 points by vismit2000 7 months ago | 115 comments
29. How attention sinks keep language models stable (hanlab.mit.edu) | 219 points by pr337h4m 7 months ago | 36 comments
30. Gemini 2.5 Deep Think (blog.google) | 461 points by meetpateltech 7 months ago | 249 comments