Hacker News | bradfa's comments

I suspect there’s quite a difference between what most people do and what most HN commenters do.

I frequently see comments which would have made sense in the past (e.g. the early 2000s) but don't fully reflect reality anymore.

It's as if humans have a tendency to make up their mind/world view in their younger years and then stick with it, only changing it slowly as long as no big life-changing events happen.


3rd is the only one still supported.



I give you $100 cash and you give me $100 worth of stock in return. Now you give me $100 cash to buy something from me that cost me $80 to produce. I end up with $100 worth of stock in your company which cost me only $80. No?

NVIDIA gross margins lately are like 75%, so it's more like you give me $100 to buy something from me that cost me $25 to produce, hence I end up with $100 worth of stock in your company and it only cost me $25.
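As a quick sanity check on that arithmetic (the 75% figure is the commenter's estimate, not an official number):

```python
def production_cost(revenue: float, gross_margin: float) -> float:
    """Cost of goods sold implied by a revenue figure and a gross margin."""
    return revenue * (1 - gross_margin)

# At a 75% gross margin, $100 of sales cost about $25 to produce.
print(production_cost(100, 0.75))
```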


> hence I end up with $100 worth of stock in your company and it only cost me $25.

You also lost out on $75 worth of cash revenue (opportunity cost from selling the same thing to a different customer), so really you just took stock in lieu of cash.

It'd be different if Nvidia (TSMC) had excess production capacity, but afaik they're capped out.

So it's really just whether they'd be selling them to OpenAI and getting equity in return or selling to customers and getting cash in return.

If OpenAI thinks their own stock is valued above fundamentals, it's a no brainer to try and buy Nvidia hardware with stock.


Sure, but OpenAI doesn't have cash. It does have stock.

Even if Nvidia has capped production for now, increased demand still allows them to sell chips at a greater margin. Or, to put it another way, presumably Nvidia is charging OpenAI a premium for the privilege of paying with stock.


In that case, you spent $80 to produce an item and exchanged it for $100 worth of their stock.

Now if you check, these companies paying with stock like this tend to have large amounts of debt. If their stock becomes worthless, you just wasted $80 producing an item that their creditors have first dibs on. And liquidating your shares immediately to lock in your gain would weigh on their stock's value, potentially to the point where the stock is only worth $80 and you wouldn't be gaining anything anymore. Your earnings would then tank alongside theirs.


> I give you $100 cash and you give me $100 worth of stock in return. Now you give me $100 cash to buy something from me that cost me $80 to produce. I end up with $100 worth of stock in your company which cost me only $80. No?

Sure, but how's that a cheat code? If you normally sell something for $100 that costs $80 to make, and then use that $100 revenue to buy $100 of stock, this is an identical outcome for you.
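A toy version of that equivalence, using the thread's numbers (a sketch of the accounting only, ignoring taxes and timing):

```python
# Scenario A: customer invests $100 cash for stock, then buys your
# $80-cost product for $100 cash.
cash_a = -100 + 100   # stock purchase out, product revenue in
cost_a = 80           # cost to produce the item
stock_a = 100         # stock held at face value

# Scenario B: no upfront investment; you sell the product for $100
# cash and spend that revenue buying the same $100 of stock.
cash_b = 100 - 100
cost_b = 80
stock_b = 100

# Same net position either way: $100 of stock, $80 of cost, $0 cash.
assert (cash_a, cost_a, stock_a) == (cash_b, cost_b, stock_b)
```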


They wouldn’t have bought $100 worth of product if the deal weren’t offered, because they didn’t have $100 to spend.


If they couldn't borrow $100, or get $100 from any other investor, that just puts you in the position of being an investor, and even then the difference between bradfa's version and mine is simply when you became an investor, not that you became one.

Again, this is not a cheat code: if you sell $80 of cost for $100 of stock, the stock you now own can go up or down, and if you overvalued it then down is the more likely direction.


The primary cheat code here would actually seem to be (a) getting preferential access to Nvidia's production through these deals and (b) creating a paper story of increasing OpenAI private valuation.


Aaaannd you get to claim the $100 as revenue to show investors that the company is performing better than if you hadn't made the deal, which keeps demand for the product looking inflated, which in turn means you can keep margins higher by not needing to discount your product.


Urgently need an IPO so losers can chip in. If the sandcastle collapses before then, funds and other AI companies lose a lot, so better to bet again and again, even if it's nonsensical.


I have a pair of Freestyle2 keyboards, both are over a decade old. I strongly recommend the V3 tenting kit. You can get a refurb USB Freestyle2 with the V3 kit for $70 direct from Kinesis.


Would be nice if you could filter based on the number of pieces or layers in the golf ball's construction. Might require some legwork to actually find out, but many manufacturers list it in their product info.


I'll look into it. Thanks for the suggestion.


Sure but if we find another few “easy” 5% improvements in find/replace/edit (which is one of the most important actions for coding) then they really start to add up.

Most harnesses already have rather thorough solutions for this problem but new insights are still worth understanding.


GLM-5 at FP8 should be similar in hardware demands to Kimi-K2.5 (natively INT4) I think. API pricing on launch day may or may not really indicate longer term cost trends. Even Kimi-K2.5 is very new. Give it a whirl and a couple weeks to settle out to have a more fair comparison.


Yes and no. There are many not-trivial things you have to solve when using an LLM to help (or fully handle writing) code.

For example, applying diffs to files. Since the LLM uses tokenization for all its text input/output, the diffs it creates to modify a file sometimes aren't quite right: it may slightly mess up the text before/after the change, or introduce a small typo in the text being removed, so the edit may not apply cleanly. There's a variety of ways to deal with this, and most agentic coding tools have it mostly solved now (you could just copy their implementation).
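A minimal sketch of one common mitigation: exact match first, with a whitespace-tolerant fallback (the function name is made up for illustration; real tools layer on fancier fuzzy matching and retries):

```python
import re

def apply_edit(content: str, old: str, new: str) -> str:
    """Replace `old` with `new` in `content`, falling back to
    whitespace-tolerant matching if the exact text isn't found."""
    if old in content:
        return content.replace(old, new, 1)
    # Fallback: let any run of whitespace in the model's search text
    # match any run of whitespace in the file.
    pattern = r"\s+".join(re.escape(part) for part in old.split())
    match = re.search(pattern, content)
    if match is None:
        raise ValueError("edit did not apply: search text not found")
    return content[:match.start()] + new + content[match.end():]
```

Real harnesses add more on top (line-anchored matching, similarity thresholds, feeding the failure back to the model), but the shape is the same.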

Also, sometimes the models will send back invalid JSON or XML in tool calls, so your tool will need to handle that.
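For the JSON case, a cheap first line of defense is strict parsing with a salvage fallback (a hedged sketch; production tools do more, like streaming repair or asking the model to retry):

```python
import json

def parse_tool_args(text: str) -> dict:
    """Parse model output as JSON, salvaging the outermost {...}
    if the model wrapped it in prose or trailing junk."""
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        start, end = text.find("{"), text.rfind("}")
        if start == -1 or end <= start:
            raise
        return json.loads(text[start:end + 1])
```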

These fun implementation details don't happen that often in a coding session, but they happen often enough that you'd probably get driven mad trying to use a tool which didn't handle them seamlessly if you're doing real work.


I'm part of the subset of developers that was not trained in Machine Learning, so I can't actually code up an LLM from scratch (yet). Some of us are already behind on AI. I think not getting involved in the foundational work of building coding agents will only leave more developers in the dust. We have to know how these things work inside and out. I'm only willing to deal with one black box at the moment, and that is the model itself.


You don't need to understand how the model works internally to make an agentic coding tool. You just need to understand the APIs used to interface with the model, and then learn how the model behaves given different prompts so you can use it effectively to get things done. No previous Machine Learning experience necessary.

Start small, hit issues, fix them, add features, iterate, just like any other software.
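The core loop really is small. A sketch with the model call stubbed out (`call_model` and the message shapes here are placeholders loosely modeled on chat-style APIs, not any specific vendor's interface):

```python
def run_agent(call_model, tools, user_prompt, max_steps=10):
    """Minimal agent loop: ask the model, run any tool it requests,
    feed the result back, repeat until it answers in plain text."""
    messages = [{"role": "user", "content": user_prompt}]
    for _ in range(max_steps):
        reply = call_model(messages)   # returns a dict (placeholder shape)
        messages.append(reply)
        if "tool" not in reply:        # plain answer: we're done
            return reply["content"]
        result = tools[reply["tool"]](**reply["args"])
        messages.append({"role": "tool", "content": str(result)})
    raise RuntimeError("agent did not finish")
```

Everything else in a real tool (diff application, retries, context management) hangs off this loop.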

There's also a handful of smaller open source agentic tools out there which you can start from, or just join their community, rather than writing your own.


It's hardly a subset. Most devs that use it have no idea how it works under the hood. If a large portion of them did, maybe they'd cut out the "It REALLY IS THINKING!!!" posting.


What you are doing is largely free text => structured API call and back, more than anything else.

ML-related stuff isn't going to matter much, since in most cases LLM inference is just you making an API call.

Web scraping is probably the most similar thing.
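To make that round trip concrete, here's the shape of the request and response as two small functions (following the widely used OpenAI-compatible chat-completions format; the model name and the name/date schema are placeholders for illustration):

```python
import json

def build_request(text: str) -> dict:
    """Request body in the chat-completions shape, asking the model
    to answer with structured JSON."""
    return {
        "model": "placeholder-model",
        "messages": [
            {"role": "system",
             "content": "Reply with only a JSON object with keys 'name' and 'date'."},
            {"role": "user", "content": text},
        ],
    }

def parse_response(body: dict) -> dict:
    """Pull the structured answer back out of a chat-completions response."""
    return json.loads(body["choices"][0]["message"]["content"])
```

POSTing `build_request(...)` to the provider's endpoint and feeding the reply to `parse_response` is the whole "inference" step from the client's point of view.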


Most well pumps are electric powered. The holding tank will give you the very small amount of water that's in it, if it's up high, but after that it won't refill without electricity.

In the USA most residential toilets are tank type and don’t directly use electricity.


Any sold at a physical ski shop. You will make a face at the price. You will reconsider the sketchy ones for their price. But the ski shop ones will last significantly longer.

