Hacker News: vinhnx's comments

What excites me about the OpenAI + Astral acquisition: Codex CLI, uv, and ruff are all written in Rust. Fast by design, and fully open source.

I think my submission about this post was selected for the "second-chance" pool by HN moderators, hence it's being shown again. Thanks for the heads up!

I think Cognition's DeepWiki or Google's CodeWiki code map does generate an architecture map (Mermaid style). E.g.: https://deepwiki.com/openai/codex#project-purpose-and-archit...

Thanks for recommending these tools; very helpful!

https://codewiki.google/github.com/openai/codex


This month, I'm working on VT Code, a terminal-native coding agent I've been building in Rust (https://github.com/vinhnx/vtcode).

This month I'm focusing on long-pending TODO items: self-benchmarking with Terminal-Bench (https://www.tbench.ai/), fuzzing the security parsers (it executes shell commands, so the threat model is real), normalizing extended thinking traces across providers, and improving the agent's UI/UX, TUI components, and harness.


I listen to a lot of podcasts.

Here are my current pinned favorites:

Practical AI

Grit

Wenbin Fang's Podcast Playlist (Founder of https://www.listennotes.com/)

The Gradient: Perspectives on AI

Latent Space: The AI Engineer Podcast

The Empty Bow

Hacker News Recap

Dwarkesh Podcast

Machine Learning Street Talk (MLST)

Interconnects

Talk Python To Me

Training Data

---

My podcast collection (OPML file format), exported from Overcast.

Feel free to import to your podcast app of choice. https://github.com/vinhnx/podcasts
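For anyone curious before importing: an OPML export is just an XML list of feed URLs, so most podcast apps (and a few lines of Python) can read it. Here's a minimal, hypothetical sketch of pulling the feed URLs out of one; the actual Overcast export may carry extra attributes like titles, types, or nested folders.

```python
import xml.etree.ElementTree as ET

# A hypothetical, minimal OPML subscription list for illustration.
opml = """<?xml version="1.0" encoding="utf-8"?>
<opml version="1.0">
  <head><title>Podcasts</title></head>
  <body>
    <outline type="rss" text="Example Podcast"
             xmlUrl="https://example.com/feed.xml"/>
  </body>
</opml>"""

root = ET.fromstring(opml)
# Each podcast subscription is an <outline> element whose xmlUrl
# attribute points at the RSS feed.
feeds = [o.attrib["xmlUrl"] for o in root.iter("outline") if "xmlUrl" in o.attrib]
print(feeds)
```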


This research paper, "Mercury: Ultra-Fast Language Models Based on Diffusion", is from last year (2025):

https://arxiv.org/pdf/2506.17298


You're welcome! Love the article, I hope you write more.



This month I'm continuing to work on my open-source coding agent VT Code (https://github.com/vinhnx/VTCode). For the last few months I've been improving the harness and UI/UX, focusing on developer experience and TUI performance.


Just used Opus 4.6 via GitHub Copilot. It feels very different. Inference seems slow for now. I guess Opus 4.6 has adaptive thinking activated by default.


Confirmed by the PM lead on the VS Code team:

> "We have high thinking as default + adaptive thinking, first time we’ve run with these settings..."

> https://x.com/pierceboggan/status/2019645801769689486


It does seem noticeably slower. I may stick with 4.5, which was good enough for me for most tasks.


VS Code confirms that they are experimenting with the new adaptive thinking and high reasoning effort params. https://x.com/pierceboggan/status/2019645801769689486

