But then you go on to describe exactly what @Brendinooo described, just under the guise of your system of "value hierarchy." The problem is that you can always default to "our values are hierarchically misaligned" and then never have to do any coalition building ever.
So how do you solve that? Because it seems that you can't.
Hierarchical values are just that: ordered, not accepted or rejected wholesale. Actual nonsense is something else, e.g. "I believe pigs can fly, therefore the sky is red." Someone asserting that is making an ontological error.
For a Christian, a top maxim in their value hierarchy would be rooted in Jesus' famous commandment: "Love the Lord your God with all your heart, soul, and mind." Now, if you're an atheist, this might be nonsense to you. You might not believe that Jesus was resurrected or that God even exists. To you, these are fundamentally irrational statements ("pigs can fly," etc.). Under your system, if you were an atheist and your opposition was a Christian, you could never possibly build a coalition because there's a disagreement at the top of the value hierarchy.
But this seems wrong because people of different creeds and value systems do stuff together all the time. Or am I misunderstanding your point? What I understand @Brendinooo to be saying is: "we may not share the same moral framework (or value hierarchy, using your term), but we do agree on X, so let's do X."
I don't know, I've noticed this in the right as well. I think there's always some degree of purity-testing to any community, though I agree there is more on the current (radical?) progressive end than average.
Every single day, three things are becoming more and more clear:
(1) OpenAI & Anthropic are absolutely cooked; it's obvious they have no moat
(2) Local/private inference is the future of AI
(3) There's *still* no killer product yet (so get to work!)
1) OpenAI and Anthropic are killing it and continue to do so; their coding tools are unmatched for professionals.
2) Local models don't hold a candle to SOTA models and there's nothing on the horizon that indicates that consumers will be able to run anything close to what you can get in a data center.
3) Coding is a killer product; OpenAI and Anthropic are raking in the cash. The top 3 apps in the app store are AI apps. Everyone who knows anything is using AI, every day, across the economy.
The grandparent is definitely wrong on (3). Yes, coding is a killer product; I agree with you.
On (2), I agree with you for local models. BUT, there are also the open source Chinese models accessible via open-router. Your argument ("don't hold a candle to SOTA models") does not hold if the comparison is between those.
On (1), I agree more with the grandparent than with your assessment. Yes, OpenAI and Anthropic are killing it for now, but the time horizon is very short. I use codex and claude daily, but it's also clear to me that open source is catching up quickly, both w.r.t. the models and the agentic harnesses.
>BUT, there are also the open source Chinese models accessible via open-router.
I thought so myself, but after burning a lot of money on OpenRouter in a few days I just subscribed to Z.ai's Coding Pro plan, and using the subscription is much, much friendlier to my wallet.
Open models are good, but if you need a $10k GPU to run them then 99% of people are better off subscribing to OAI or CC.
Nowadays I also feel model performance matters less than the design of the tool harness, inference speed, and the other systems that surround a typical coding model.
I used GLM5 quite a bit, and I'd say it was maybe on par with Sonnet for most simple to medium tasks. Definitely not Opus, though. I didn't test super long context tasks, and that's where I would expect it to break down. A recent study on software maintainability still showed Sonnet and Opus were peerless on that metric, although the GLM series of models has been making impressive gains.
I don't want to respond to 100 comments about the same thing, and this one happens to be on top, so, in my humble opinion:
(1): You don't have to be an Ed Zitron disciple to infer that OpenAI and Anthropic are likely overvalued and that Nvidia is selling everyone shovels in a gold rush. AI is a game-changing technology, but a shitty chat interface does not a company make. OpenAI and Anthropic need to recoup the astronomical costs of training these models. Models that are now being distilled[1] and are quickly becoming commoditized. (And frankly, models that were trained by torrenting copyrighted data[2], anyway.) Many have been calling this out for years: the model cannot be your product. And to be clear, OpenAI/Anthropic most definitely know this: that's why they've been acqui-hiring like crazy, trying to find that one team that will make the thing.
(2): Token prices are significantly subsidized and anyone that does any serious work with AI can tell you this. Go use an almost-SOTA model (a big Deepseek or Qwen model) offered by many bare-metal providers and you'll see what "true" token prices should look like. The end-state here is likely some models running locally and some running in the cloud. But the current state of OpenClaw token-vomit on top of Claude is fiscally untenable (in fact, this is why Anthropic shut it down).
(3): This is typical Dropbox HN snark[3], of which I am also often guilty. I really don't think AI coding is a killer product, and this seems very myopic: engineers are an extreme minority. Imo, the closest we've seen to something revolutionary is OpenClaw, but it's janky, hard to set up, full of vulnerabilities, and you need to buy a separate computer. But there's certainly a spark there. (And that's personally the vertical I'm focusing on.)
> And to be clear, OpenAI/Anthropic most definitely know this: that's why they've been acqui-hiring like crazy, trying to find that one team that will make the thing.
Anthropic is up to $30B annual recurring revenue. I wish I had failing business models like that.
> Token prices are significantly subsidized and anyone that does any serious work with AI can tell you this. Go use an almost-SOTA model (a big Deepseek or Qwen model) offered by many bare-metal providers and you'll see what "true" token prices should look like.
I'm not sure what you think you are saying here, but if you look at the providers of an "almost-SOTA model (a big Deepseek or Qwen model)" or at the price for Claude on AWS Bedrock, Azure, or GCP, you will quickly see that inference is very profitable.
Anthropic has raised $64B in total since they were founded.
Even if we measure profit in the very special Hacker News way, money taken in from customer revenue against money invested, and we say they can't count things like building data centers or buying GPUs as capital expenses but instead have to count them against profit, then in two years' time they will have made more money than they have taken in investment.
The proverbial "50B" is investment in next year's model. The current model cost under "30B", and therefore "is profitable". It is a bet on scaling, yes, but that's been common throughout the industry (see, e.g., Amazon not being profitable for many years while building infrastructure).
> If every year we predict exactly what the demand is going to be, we’ll be profitable every year. Because spending 50% of your compute on research, roughly, plus a gross margin that’s higher than 50% and correct demand prediction leads to profit. That’s the profitable business model that I think is kind of there, but obscured by these building ahead and prediction errors.
You're missing the forest for the trees. Per-token pricing is irrelevant when you're just trying to get shit done. I pay 20 bucks a month for OpenAI, but I likely use $200+ a month of tokens on coding alone (and that's just the raw tokens, ignoring all the harnessing on their end). Even OpenAI has said that they're losing money on the 200-dollar subscriptions[1]. This is not a viable business model. Why do you think they are introducing ads this year[2]?
> Go use an almost-SOTA model (a big Deepseek or Qwen model) offered by many bare-metal providers and you'll see what "true" token prices should look like.
Qwen3.5-122B-A10B is $0.26 input, $2.08 output. Where's the subsidy? It's ten times cheaper than Opus. Or did you mean that we're subsidizing their training? But then "OpenClaw token-vomit on top of Claude is fiscally untenable" makes no sense.
Yeah, I don't know where you got your costs from. Bare metal providers are significantly cheaper than Anthropic.
Maybe he's comparing the renting price of a bare metal server on its own, and doesn't realise how drastically cheaper it is for an API provider to batch many requests together on the same hardware.
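For intuition on the batching point, here's a toy comparison; every number below is an assumed round figure for illustration, not a measurement:

    # Back-of-envelope sketch of why batched API inference is cheap.
    # All numbers are illustrative assumptions, not measured figures.
    gpu_cost_per_hour = 2.50          # assumed hourly rental price for one GPU
    tokens_per_sec_single = 40        # assumed decode speed serving one user
    batch_size = 64                   # assumed concurrent requests per GPU
    batch_efficiency = 0.5            # assumed per-request slowdown from batching

    single_user_tokens_per_dollar = tokens_per_sec_single * 3600 / gpu_cost_per_hour
    batched_tokens_per_dollar = (tokens_per_sec_single * batch_size * batch_efficiency
                                 * 3600 / gpu_cost_per_hour)

    print(f"solo renter:  {single_user_tokens_per_dollar:,.0f} tokens per dollar")
    print(f"API provider: {batched_tokens_per_dollar:,.0f} tokens per dollar")

The exact figures don't matter; the point is that the provider amortizes the same GPU over many concurrent requests, while the solo renter pays for it whether it's busy or idle.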
No killer product? Coding assistants and LLMs in general are the single most awe-inspiring achievement of humanity in my lifetime, technological or otherwise. They've already massively improved my and others' lives, and they're only going to get better. If pre- and post-industrial revolution used to be the major binary delineation of our history, I'm fairly confident it will soon be seen as pre- and post-AI instead.
I know, right? 8-year-old me dreamed of being able to articulate software to a computer without having to write code. It (along with the original Stable Diffusion) is definitely one of the coolest inventions to ever come along in my lifetime.
Coding assistants are currently quite hard to run locally with anything like SOTA abilities. Support in the most popular local inference frameworks is still extremely half-baked (e.g. no seamless offload for larger-than-RAM models; no support for tensor-parallel inference across multiple GPUs, or multiple interconnected machines) and until that improves reliably it's hard to propose spending money on uber-expensive hardware one might be unable to use effectively.
GPU and RAM prices have definitely not made consumer PCs cheaper than they were before bitcoin blew up or before AI blew up.
Maybe you could make an argument that they are more cost-efficient for the price point... but that's not the same as cheaper when every application or program is poorly optimized. For example, why would a browser take up more than a GB or two of RAM?
And I'd postulate that R&D to develop localized AI is another example; the big players seem hellbent that there needs to be a moat and that it's data centers... the absolute opposite of optimization.
We've had RAM shocks before. We nerds can't control Wall Street or the Virginians who like to break the world every so often for the lulz. However, a wobble on the curve doesn't change the curve's destination.
I've also been using the LLM in Posthog and it has been impressive. I need to check if I can also plug an MCP/skill into my actual Claude Code so that I can cross-reference the data from my other data sources (Stripe, local database, access logs, etc.) for in-depth analysis.
This might be up your alley - have Posthog and a ton of other SaaS tools connected so you can run analysis across quant/qualitative data sources: https://dialog.tools
> Coding assistants and LLMs in general are the single most awe-inspiring achievement of humanity in my lifetime
Landing a man on the moon is way more impressive. Finding several vaccines for a once-in-a-century pandemic within a year of its outbreak is an achievement that in its impact and importance dwarfs what the entire LLM industry put together has achieved. The near-complete eradication of polio is, once again, way more important and impactful.
Those are all good things, but with the current AI boom we've invented something with the potential to invent those kinds of things on its own, if not now then in the near future. It's far more important and impactful to invent a digital mind that can invent an arbitrary number of vaccines than to just invent one vaccine, no matter how hard it was to invent the vaccine by hand.
I was trying to use Claude.ai today to learn how to do hexagonal geometry.
Every time I asked a question it generated an interactive geometry graph on the fly in Javascript. Sometimes it spent minutes compiling and testing code on the server so it could make sure it was correct. I was really impressed.
Anyway I couldn't really learn anything since when the code didn't work I wasn't sure if I had ported it wrong or the AI did it wrong, so I ended up learning how to calculate SDF and pixel to hex grid from tutorials I found on google instead.
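For anyone else going down that rabbit hole: the pixel-to-hex part turns out to be just a few lines. A rough sketch of the standard approach (pointy-top hexes, axial coordinates; the size parameter and the test point are just example values):

    import math

    def pixel_to_hex(x: float, y: float, size: float) -> tuple[int, int]:
        """Convert a pixel position to axial hex coordinates (pointy-top layout)."""
        q = (math.sqrt(3) / 3 * x - 1 / 3 * y) / size
        r = (2 / 3 * y) / size
        return hex_round(q, r)

    def hex_round(q: float, r: float) -> tuple[int, int]:
        """Round fractional axial coords to the nearest hex via cube rounding."""
        x, z = q, r
        y = -x - z
        rx, ry, rz = round(x), round(y), round(z)
        dx, dy, dz = abs(rx - x), abs(ry - y), abs(rz - z)
        # Re-derive whichever component drifted the most so x + y + z == 0 holds.
        if dx > dy and dx > dz:
            rx = -ry - rz
        elif dy > dz:
            ry = -rx - rz
        else:
            rz = -rx - ry
        return rx, rz

    print(pixel_to_hex(40.0, 25.0, size=20.0))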
I'd like to think the superior product wins. But Windows still thrives despite widespread Linux availability. I think sometimes we can underestimate the resilience of the tech oligopolies, particularly when they're VC-funded.
VC can spend all the money in the world and it won't matter if the cost of switching providers is effectively zero.
If I want to switch from Windows to Linux, I have to reconsider a whole variety of applications, learn a different UX, migrate data, all sorts of annoyances.
When I switch between Codex and Claude Code, there is literally no difference in how I interact with them. They and a number of other competitors are drop in replacements for each other.
I don't see how it's possible to think this. AI coding assistants are some of the most useful technologies ever created, and model quality is by far the most important thing, so it doesn't make sense why local inference would be the path forward unless something fundamentally changes about hardware.
It will run exactly the same tomorrow, and the next day, and the day after that, and 10 years from now. It will be just as smart as the day you downloaded the weights. It won't stop working, exhaust your token quota, or get any worse.
That's a valuable guarantee. So valuable, in fact, that you won't get it from Anthropic, OpenAI, or Google at any price.
That's why we all still use our eMachines "Never Obsolete" PCs. They work just the same as they did 20 years ago... though probably not, because I've never heard of hardware that's guaranteed not to fail.
Intel has just released a high VRAM card which allows you to have 128GB of VRAM for $4k. The prices are dropping rapidly. The local models aren't adapted to work on this setup yet, so performance is disappointing. But highly capable local models are becoming increasingly realistic. https://www.youtube.com/watch?v=RcIWhm16ouQ
That's 4 32GB GPUs with 600GB/s bandwidth each. This model is not running on GPUs of that scale. I think something like 96GB RTX PRO 6000 Blackwells would be the minimum to run a model of this size with performance in the range of the subscription models.
> I think something like 96GB RTX PRO 6000 Blackwells would be the minimum to run a model of this size with performance in the range of subscription models.
GLM 5.1 has 754B parameters, though. And you still need RAM for context too. You'll want much more than 96GB of RAM.
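Rough napkin math for the weights alone (parameter count from above; bytes-per-weight are the usual quantization assumptions, and KV cache for long contexts comes on top):

    # Memory needed just to hold the weights of a 754B-parameter model.
    params = 754e9
    for label, bytes_per_param in [("fp16", 2.0), ("8-bit", 1.0), ("4-bit", 0.5)]:
        gb = params * bytes_per_param / 1e9
        print(f"{label:>5}: ~{gb:,.0f} GB for weights alone")

Even at 4-bit that's roughly 377 GB before you count any context, so a single 96GB card doesn't get you close.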
Well, there kinda was - most computing then was done on mainframes. Personal / Micro computers were seen as a hobby or toy that didn't need any "serious" amounts of memory. And then they ate the world and mainframes became sidelined into a specific niche only used by large institutions because legacy.
I can totally see the same happening here; on-device LLMs are a toy, and then they eat the world and everyone has their own personal LLM running on their own device and the cloud LLMs are a niche used by large institutions.
My point is LLMs aren't more usable if the hardware is in your room versus a few states away. Personal computers still to this day aren't great when the hardware is fully remote.
Agreed. But you couldn't do much on a PC when they launched, at least compared to a mainframe. The hardware was slow, the memory was limited, there was no networking at all, etc. If you wanted to do any actual serious computing, you couldn't do that on a PC. And yet they ate the world.
I can easily see the advantage, even now, of running the LLM locally. As others have said in this topic. I think it'll happen.
Is it so hard to project out a couple product cycles? Computers get better. We’ve gone from $50k workstation to commodity hardware before several times
Subscription services get all the same benefits from computer hardware getting better. But actually due to scale, batching, resource utilization, they'll always be able to take more advantage of that.
As a local LLM novice, do you have any recommended reading to bootstrap me on selecting hardware? It has been quite confusing being a latecomer to this game. Googling yields me a lot of outdated info.
First answer: If you haven't, give it a shot on whatever you already have. MoE models like Qwen3 and GPT-OSS are good on low-end hardware. My RTX 4060 can run qwen3:30b at a comfortable reading pace even though 2/3 of it spills over into system RAM. Even on an 8-year-old tiny PC with 32gb it's still usable.
Second answer: ask an AI, but prices have risen dramatically since their training cutoff, so be sure to get them to check current prices.
Third answer: I'm not an expert by a long shot, but I like building my own PCs. If I were to upgrade, I would buy one of these:
Framework desktop with 128gb for $3k or mainboard-only for $2700 (could just swap it into my gaming PC.) Or any other Strix Halo (ryzen AI 385 and above) mini PC with 64/96/128gb; more is better of course. Most integrated GPUs are constrained by memory bandwidth. Strix Halo has a wider memory bus and so it's a good way to get lots of high-bandwidth shared system/video RAM for relatively cheap. 380=40%; 385=80%; 395=100% GPU power.
I was also considering doing a much hackier build with 2x Tesla P100s (16gb HBM2 each for about $90 each) in a precision 5820 (cheap with lots of space and power for GPUs.) Total about $500 for 32gb HBM2+32gb system RAM but it's all 10-year-old used parts, need to DIY fan setup for the GPUs, and software support is very spotty. Definitely a tinker project; here there be dragons.
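One more tip for the "try it on what you have" answer: a quick throughput check against a local server is enough to tell whether your box is usable. This sketch assumes an OpenAI-compatible endpoint on localhost (llama.cpp's server or Ollama both expose one) and a model tag you've already pulled; the URL, port, and model name below are assumptions you'd adjust:

    import time, requests  # assumes a local OpenAI-compatible server is running

    URL = "http://localhost:11434/v1/chat/completions"  # Ollama's default port; adjust for llama.cpp
    MODEL = "qwen3:30b"                                  # whatever model tag you have locally

    start = time.time()
    resp = requests.post(URL, json={
        "model": MODEL,
        "messages": [{"role": "user", "content": "Explain MoE models in one paragraph."}],
        "max_tokens": 256,
    }, timeout=300).json()

    elapsed = time.time() - start
    # Field names follow the OpenAI chat-completions schema.
    completion_tokens = resp["usage"]["completion_tokens"]
    print(f"{completion_tokens} tokens in {elapsed:.1f}s "
          f"(~{completion_tokens / elapsed:.1f} tok/s)")

Anything above a comfortable reading pace (roughly 5-10 tok/s) is enough to start tinkering.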
Agree on the Framework. Last week you could get a Strix Halo for $2700 shipped; now it's over $3500. Find a deal on an NVMe, and the Framework with the Noctua fan is probably going to be the quietest; some of them are pretty loud and hot.
I run qwen 122b with Claude Code and nanoclaw; it's pretty decent, but this stuff is nowhere near prime-time ready, though it's super fun to tinker with. I have to keep updating drivers, and I see speed increases and stability being worked on. I can even run much larger models with llama.cpp (--fit on), like qwen 397b and I suppose any larger model like GLM; it's slow but smart.
qwen3:0.6b is 523MB; what model are you talking about? You seem to have a specific one in mind, but the parent comment doesn't mention any.
For a hobby/enthusiast product, and even for some useful local tasks, MoE models run fine on gaming PCs or even older midrange PCs. For dedicated AI hardware I was thinking of Strix Halo - with 128gb is currently $2-3k. None of this will replace a Claude subscription.
Google doesn't release a Gemma 4 that's as good as Gemini.
We're probably talking about a year of progress difference.
It's also still quite expensive for the average person to consume any of it, whether due to hardware investment, energy cost, or API cost.
Also, professionally, I don't think anyone will really spend a little bit less money to have the third-best model running if they can run the best model.
I'm happy that we've reached the level where this becomes an alternative if you value openness and control, though.
(3) is simply a lie spread by engineers who have no other context. I manage some real estate (mid-term rentals) and everyone I know has switched over to AI robo-handlers to handle contact at this point. It's almost a passive investment. Some can even handle interfacing with contractors and service requests for you. It has revolutionized the field, in my opinion.
(1) is absolutely not true if you actually use these models on a regular basis and include Google in here too. The difference in reliability beyond basic tasks is night and day. Their reward function is just so much better, and there are many nuanced reasons for this.
(2) is probably true but with caveats. Top-tier models will never run on desktop machines, but companies should (and do) host their own models. The future is open-weight though, that much is for sure.
(3) This is so ignorant that others have already responded to it. Look outside of your own bubble, please.
If we get to the point where a local model can reliably do the coding for a good majority of cases, then the economic landscape changes significantly. And we are not that far from having big open weight models that can do that, which is a first step
Larger, yes, absolutely. Better? Right now it seems that bigger is better, but if we are thinking about long term future, it's not obvious that there isn't a point of diminishing returns with regards to size. I can also imagine a breakthrough, where models become much smaller, with the same or better capabilities as the current, very large ones.
You are always going to get the same scaling laws in model size regardless of what else you do, so the same degree of improvement seen now relative to the smaller models will be achievable in the future. Yes, small models may be on par with previous generation large models, but the same is true for processors and you don't see supercomputers going away. It's the same principle.
No moat: yes. Cooked: no. It's a race. Why assume they're going to lose? It relies on (2) which is only true if AI usefulness plateaus at some level of compute. That's a huge claim to be making at this stage.
(3) AI has lots of killer products already. The big one is filling in moats. Unrealized potential though for sure.
Solid work and great showcase, I've done a bunch of stuff with Kokoro and the latency is incredible. So crazy how badly Apple dropped the ball... feels like your demo should be a Siri demo (I mean that in the most complimentary way possible).
Thank you. This reminds me of a paragraph from the LatentSpace newsletter [0]
> The excellent on device capabilities makes one wonder if these are the basis for the models that will be deployed in New Siri under the deal with Apple….
> the user is immediately able to understand the constraints
Nagel's point was quite literally the opposite[1] of this, though. We can't understand what it must "be like to be a bat" because their mental model is so fundamentally different than ours. So using all the human language tokens in the world can't get us to truly understand what it's like to be a bat, or a guppy, or whatever. In fact, Nagel's point is arguably even stronger: there's no possible mental mapping between the experience of a bat and the experience of a human.
IMO we're a step before that: We don't even have a real fish involved, we have a character that is fictionally a fish.
In LLM-discussions, obviously-fictional characters can be useful for this, like if someone builds a "Chat with Count Dracula" app. To truly believe that a typical "AI" is some entity that "wants to be helpful" is just as mistaken as believing the same architecture creates an entity that "feels the dark thirst for the blood of the living."
Or, in this case, that it really enjoys food-pellets.
I'd highly disagree with that. We're all living in the same shared universe, and underlying every intelligence must be precisely an understanding of events happening in this space-time.
No, I am saying the basis of intelligence must be shared, not that we have the same exact mental model.
I might, for example, say a human entered a building; a bat, on the other hand, might think "some big block with two sticks moved through a hole." But both are experiencing a shared physical observation, and there is some mapping between the two.
It's like when people say that if there are aliens, they would find the same mathematical constants that we do.
I’m not going to argue other than to say that you need to view the point from a third party perspective evaluating “fish” vs “more verbose thing,” such that the composition is the determinant of the complexity of interaction (which has a unique qualia per nagel)
Hence why it's an "unintentional nod," not an instantiation.
> moderately successful in academia and as far as I can tell
Why not pay your debts then? I totally understand debt forgiveness for extenuating circumstances (and imo, it's a crime that student loan debt can't be forgiven, and the interest rates are often predatory—especially in the case of med school and law school), but this just sounds like stealing with extra steps.
Sorry - didn't mean to be vague but I don't want to out my acquaintance too much. She has a good job in STEM. I think she does fine for herself and I would have thought her capable of paying the loan.
One of the people in the article was supposed to pay $60/month for 20 years. That seems manageable for pretty much anyone but the article cites "psychological issues" or whatever
Yeah. I don't know the extent of her debt or current income, but she went to an in-state school for a STEM degree, she's not someone who got a useless degree from an overly expensive school. She definitely doesn't seem to regret her decision, whatever the financial or moral considerations.
She went to an out-of-state school for her master's. Article said that she was a ward of Colorado but went to the University of Oregon for her graduate degree.
No mention of undergrad so hopefully she did go in-state for much lower or free tuition.
I have 2 kids in college and a recent graduate. I am routinely horrified by the choices that some students make in going out of their home state or to a private school instead of a public one.
Running locally or privately (in the cloud) is the future. Anthropic/OAI will need to recoup (astronomical) training costs and I'm not going to be their bailout plan, especially considering training was done on torrented & copyrighted data anyway.
Public model inference quality is almost at SOTA levels, why would anyone pay these VC-subsidized companies even a cent? For a shitty chat interface? Give me a break.
Exactly! It's insane we are so willing to be so dependent on these companies. Imagine AWS having the same downtime and service issues. You would immediately switch providers.
To me, what's super interesting about this is the fact that my brain instantly recognized it's AI coded (not sure why, it might be the spacing, the font, the text glow, etc.).
Developers and their customers mostly gave up design many years ago and used frameworks like Bootstrap because they are good enough, they are cheap to create, they increase speed to deliver with no external designer in the loop, etc. That made many sites look alike. AI designed web sites are the next natural step.
The first thought that came to mind was that it's AI coded. Maybe it's because they follow a similar design pattern. Or maybe we have some supernatural powers.
I think it's because the UI sucks, like really bad. Why is there a CRT-type line in the background constantly scanning down? The mission timeline has weird colors that make no sense. Some graphs don't even fit their parent element. And so on.
I don't care if it's vibe-coded, but if you looked at this and thought "yeah, that looks good," it only shows how bad you are at UI.
These types of interfaces are cool if you're like 12
What's even more interesting is that the data is completely off compared to official sources, and the author doesn't even have the decency and self-reflection to check whether their slop is at all accurate before posting it to the HN front page.
Vibe coders, like the eggman himself, are philosophical zombies.
This blog post gets way too caught up in Gödel numbers, which are merely a technical detail (specifically how the encoding is done is irrelevant). A clever detail, but a detail nonetheless. Author gets lost in the sauce and kind of misses the forest for the trees. In class, we used Löb's Theorem[1] to prove Gödel, which is much more grokkable (and arguably even more clever). If you truly get Löb, it'll kind of blow your mind.
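For the curious, here is the standard statement of Löb's theorem and the one-line derivation of Gödel's second incompleteness theorem from it; sketched from memory, so check it against a textbook:

    % Löb's theorem, for a theory T extending arithmetic with a provability
    % predicate Prov satisfying the usual derivability conditions:
    %   if T proves "Prov(#P) -> P", then T proves P.
    T \vdash \bigl(\mathrm{Prov}(\ulcorner P \urcorner) \rightarrow P\bigr)
        \;\Longrightarrow\; T \vdash P
    % Instantiate P := \bot. The antecedent Prov(#\bot) -> \bot is exactly Con(T), so:
    T \vdash \mathrm{Con}(T) \;\Longrightarrow\; T \vdash \bot
    % i.e. a consistent T can never prove its own consistency (Gödel's second theorem).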
As a non-mathematician I always wondered about one thing. The way I interpret the Incompleteness Theorem is that "you cannot have a universal system of infinite expressiveness, because you will need a more expressive one to prove it."
In other words, you can't have a top-down universal system. But you can very well have well-described ones that perfectly describe observable behaviour without defects.
We have these things called systems: a "system" is anything that follows rules: a board game, traffic, the English language, math, C++, etc. Some systems are smart and they can talk about themselves, but others can't. For example, Tic-Tac-Toe can't talk about Tic-Tac-Toe, but English can talk about English.
Gödel is interested in smart systems because dumb systems are boring.
Some systems are useful: they are "useful" if they always say true things. So math is more useful than English. I can lie in English, but I can't lie in math. (Formally, this is what we call consistency).
So here's a problem for you: suppose we have a smart-useful-system, call it SUS. SUS should be able to say "SUS is useful." It can talk about itself and it can't lie, so we should have no problem, right?
Gödel showed that if our system can actually say that about itself, it wasn't useful to begin with. For a few centuries, philosophers and mathematicians were trying to come up with the "one perfect system": useful, smart, and also complete (it can say all true things), and a few more properties. Turns out such a system is impossible.
NB: I use the words "say" or "talk about" in a very hand-wavy fashion, sometimes I mean Prove(), sometimes I mean Entail(). The details are very nuanced, and this isn't meant to be a deep dive.
Other names for Gödel encoding: Digital. Binary. Zorros and Unos.
Today Gödel encoding is so pervasive, it's easy to miss that everything is trivially Gödel encoded. Because like most everything invisible, it's right in front of us.
We Gödel our memes and gift cards, and (pick your poison) pr0ns. Colors and AI’s, lax ASMR’s and our (sneaky don’t read me) terms of service. Even this very small humble .
Gödel isn’t eating the world. Gödel already pööped it.
That period was encoded in a symbol string, i.e. it is a bit string.
Today we encode everything in bi-symbol strings.
This was not common when Gödel crafted his incompleteness theorem. And at the time it was a novel approach for setting up a context for testing the limits of computing.
Some people can still be struck by it as novel when reading the proof, because in context it was, and still feels that way. But today "symbol string" representation is ordinary and pervasive.
Morse didn't conceptually extend encoding to self-referential symbolic systems. Morse's insight was pure communication of symbols devoid of meaning.
Important but nowhere near the same.
Today, general symbolic encoding is viewed as trivial. Every symbol we have is pervasively encoded as bits, so of course entire expressions are. So Morse's code might seem comparable.
But what Gödel invented went well beyond Morse. We are just jaded with regard to his insight now.
Of course you can encode self-references in Morse code; how could Morse prevent that? Just use the same Lisp syntax as in the article and then encode using Morse code instead of Gödel numbering.
The purpose of Gödel numbering is to represent an arbitrary-length string of symbols as a single integer which allows you to manipulate it using Peano arithmetic.
But it is not like Gödel invented binary, as you seem to suggest. Baudot code (a 5-bit character encoding) was in use in the 1870s.
In any case, Gödel numbering is the least interesting part of the theorem. The groundbreaking idea is creating statements about theorems.
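To make the "least interesting part" concrete, a toy numbering fits in a few lines; the symbol table and prime-power scheme here are one conventional choice for illustration, not Gödel's exact constants:

    # Toy Gödel numbering: give each symbol a code, then pack the sequence of
    # codes into the exponents of successive primes, yielding a single integer.
    SYMBOLS = {"0": 1, "S": 2, "+": 3, "*": 4, "=": 5, "(": 6, ")": 7, "x": 8}

    def primes():
        """Yield 2, 3, 5, 7, ... (naive trial division; fine for a toy)."""
        found, candidate = [], 2
        while True:
            if all(candidate % p for p in found):
                found.append(candidate)
                yield candidate
            candidate += 1

    def godel_number(formula: str) -> int:
        n = 1
        for sym, p in zip(formula, primes()):
            n *= p ** SYMBOLS[sym]
        return n

    # "S0=S0" becomes 2^2 * 3^1 * 5^5 * 7^2 * 11^1
    print(godel_number("S0=S0"))

The payoff is that any statement about formulas becomes a statement about ordinary integers, which Peano arithmetic can already talk about.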
Yeah, I think it would be better to first explain the liar's paradox to give the broad brush strokes, and then go into the details of Gödel numbering.
It seems like most expositions of Gödel's incompleteness theorem go into a surprising amount of detail about Gödel numbering. In a way it's nice though, because you see that the proof is actually pretty elementary and doesn't require fancy math as a prerequisite.
> the thing JAX was truly meant for: a graphics renderer
I mean, just like ray-tracing, SDF (ray-marching) is neat, but basically everything useful is expensive or hard to do (collisions, meshes, texturing, etc.). The mathy stuff is easier (rotations, unions/intersections, function composition, etc.), but 3D is usually used in either modeling software or video games, which care more about the former than the latter.
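To be fair about the "mathy stuff is easier" side, this is roughly what composition plus free surface normals looks like in JAX; the scene below is made up purely for illustration:

    import jax
    import jax.numpy as jnp

    def sdf_sphere(p, center, radius):
        """Signed distance from point p to a sphere."""
        return jnp.linalg.norm(p - center) - radius

    def sdf_scene(p):
        """Union of two spheres: union of SDFs is just the pointwise minimum."""
        a = sdf_sphere(p, jnp.array([0.0, 0.0, 0.0]), 1.0)
        b = sdf_sphere(p, jnp.array([1.5, 0.0, 0.0]), 0.7)
        return jnp.minimum(a, b)

    # Autodiff gives the surface normal (gradient of the distance field) for free.
    normal_fn = jax.grad(sdf_scene)

    p = jnp.array([0.0, 1.2, 0.0])
    print(sdf_scene(p))                                   # signed distance to the scene
    print(normal_fn(p) / jnp.linalg.norm(normal_fn(p)))   # unit normal at p

The hard parts the parent mentions (meshes, texturing, collisions) don't get this kind of free lunch, which is exactly the complaint.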
I believe we could get there eventually. For example for collision there is work to make it differentiable (or use a local surrogate at the collision point):
https://arxiv.org/abs/2207.00669
The robotics will need to connect vision with motors with haptics with 3D modelling, and to propagate gradients seamlessly. For calibrating torque against the elastic deformation of the material, for example. After all, matter is not discrete at small scales (staying above the atomic scale).
All this will require all modules to be compatible with differentiability. It'll be expensive at first, but I'm sure some optimizations can get us close to the discrete case.
Also even for meshes there is a lot to gain with trying to go the continuous way:
JAX is designed from the start to fit well with systolic arrays (TPUs, Nvidia's tensor cores, etc), which are extremely energy-efficient. WebGL won't be the tool that connects it on the web, but the generation after WebGPU will.
I'm a CTO, expert engineer, and data professional interested in team-building, consulting and architecting data pipelines. At Edmunds.com, I worked on a fairly successful ad-tech product and my team bootstrapped a data pipeline using Spark, Databricks, and microservices built with Java, Python, and Scala.
At ATTN:, I re-built an ETL Kubernetes stack, including data loaders and extractors that handle >10,000 API payload extractions daily. I created SOPs for managing data interoperability with Facebook Marketing, Facebook Graph, Instagram Graph, Google DFP, Salesforce, etc.
More recently, I was the CTO and co-founder of a gaming startup. We raised over $6M and I was in charge of building out a team of over a dozen remote engineers and designers, with a breadth of experience ranging from Citibank, to Goldman Sachs, to Microsoft. I moved on, but retain significant equity and a board seat.
I am also a minority owner of a coffee shop in northern Spain. That I'm a top-tier developer goes without saying. I'm interested in flexing my consulting muscle and can help with best practices, architecture, and hiring.
Would love to connect even if it's just for networking!