
I think the strategy you're suggesting is: "We lose money on every sale, but make it up in volume!"

If the resellers down the chain were purchasing your shoes for less than your cost, would you still be happy?

Say the resellers were abusing an 80% discount coupon. Anthropic is basically closing a 95% discount coupon that was being abused.

If OpenClaw users were paying the API rate, your strategy could make more sense.

The reason Anthropic is subsidizing inference is that they are trying to capture users (market share). However, the acquisition cost for a single OpenClaw user is much higher, and OpenClaw users are less likely to convert into profitable users later.

---

In addition, there is a supply bottleneck. Currently Anthropic is having trouble servicing all the demand due to a shortage of GPUs, and in the current market it is impossible to get more GPUs (or at least prohibitively expensive).

Anthropic (and all other AI companies) also needs GPUs to stay competitive: GPUs are needed to train better models. So you could view it this way: Anthropic has decided that instead of subsidizing unprofitable OpenClaw users, it is better to repurpose those GPUs for internal R&D.


The numbers are still there (unless "calm" mode is enabled). You can scrub any of the plots to get the precise values for the metrics at that time.

The colors + space simply help you understand the numbers better.

(Weather forecast precision is artificial "because weather forecasts fundamentally have very high uncertainty and error bands"[1])

[1]: https://news.ycombinator.com/item?id=46570599


Can you clarify? Is the rendering broken on mobile?

I just checked, and the responsive layout seems to render correctly on Android Firefox/Chrome and iOS Safari.

You can even save WeatherSense to your home screen as a simple progressive web app.


I don't like the color scheme with the gradients; nothing functionally wrong, just my reaction.

The gradients actually serve a purpose:

- You can see the weekly high/low temperature trends by scanning down vertically along the left.

- Redder color means warmer; bluer means cooler.

- The gradient is constant for all data plots, so you can visually compare the temperature across days and hours.

- The gradient block for each day goes from the high to the low temp for that day.

- Even the hourly temperature plot line is calibrated to the same gradient.
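The key idea above is that every plot shares one calibrated temperature-to-color mapping. A minimal sketch of such a shared gradient (the endpoint colors and the -10 to 40 °C range are assumptions for illustration, not WeatherSense's actual values):

```python
def temp_to_rgb(temp_c, t_min=-10.0, t_max=40.0):
    """Map a temperature to a blue->red gradient shared by all plots.

    Because t_min/t_max are fixed across days and hours, the same
    temperature always renders as the same color, making days
    visually comparable.
    """
    # Normalize into [0, 1], clamping out-of-range temperatures.
    t = max(0.0, min(1.0, (temp_c - t_min) / (t_max - t_min)))
    blue = (0, 102, 255)   # cool endpoint (assumed color)
    red = (255, 64, 0)     # warm endpoint (assumed color)
    # Linear interpolation per channel.
    return tuple(round(b + (r - b) * t) for b, r in zip(blue, red))
```

A per-day gradient block would then just sample this function from that day's low to its high.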

---

The sky background gradient is slightly superfluous, but it's very subtle and meant to emulate (a more vibrant) version of the actual sky.

For anyone who wants more gradients: there's a setting here: https://weather-sense.leftium.com/wmo-codes

I disabled those by default because they were distracting and didn't serve a purpose.


There is a unit toggle button right below the day tiles. Your selection should be persisted across page loads.

- You can also tap any unit to toggle.

- But the main point of WeatherSense is to transcend units ^^


Very good overview of options here: https://ai.davis7.sh

Explained in more detail: https://youtu.be/1WFgIjAvMDw?t=882

TL;DR:

- Cursor Ultra

- OpenAI Codex

- OpenCode Black (currently not accepting new subs)


> - OpenCode Black (currently not accepting new subs)

Temporarily paused on new subs. It'll be back.


https://open-meteo.com API is pretty good for weather forecasts, too.
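A minimal sketch of building an Open-Meteo forecast request (no API key required; the `/v1/forecast` endpoint and `hourly` parameter are from their public docs, but the exact variable list here is just an example):

```python
from urllib.parse import urlencode

def forecast_url(lat, lon, hourly=("temperature_2m",)):
    """Build an Open-Meteo forecast request URL."""
    base = "https://api.open-meteo.com/v1/forecast"
    params = {
        "latitude": lat,
        "longitude": lon,
        "hourly": ",".join(hourly),  # e.g. temperature_2m,precipitation
    }
    return f"{base}?{urlencode(params)}"
```

Fetching that URL returns JSON with parallel `hourly.time` and `hourly.temperature_2m` arrays.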

I made a similar project: https://veneer.leftium.com

You can publish any publicly readable sheet/form (and append rows via a publicly available form):

- sample sheet: https://veneer.leftium.com/s.1RoVLit_cAJPZBeFYzSwHc7vADV_fYL...

- sample form: https://veneer.leftium.com/g.chwbD7sLmAoLe65Z8


How come you don't show the realtime transcription... in realtime?

I think it would make it feel even faster.

> the UX difference between streaming and offline STT is night and day. Words appearing while you're still talking completely changes the feedback loop. You catch errors in real time, you can adjust what you're saying mid-sentence, and the whole thing feels more natural. Going back to "record then wait" feels broken after that.

(https://hw.leftium.com/#/item/47149479)


I think realtime transcription actually makes the UX worse when the text is polished afterward. In FreeFlow, the output of the transcription is fed to an LLM to polish it in the context of where the text is being injected. This way we can go beyond naive transcription.

FreeFlow already feels extremely fast and text being typed as I dictate is distracting especially if the polishing phase edits it.


I would delay polishing until right before delivery.

Eventually, I will add a polishing step to my own https://rift-transcription.vercel.app.

Right now, you can experience what true realtime streaming transcription feels like.

I plan to add two "levels" of polishing:

- Simple deterministic text replacements will be applied to both interim and final text.

- LLM polishing will only be applied right before delivery.

- It will be possible to undo one or both polishing steps. (Actually even more fine-grained undo: at the replacement rule level).
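The two-level plan above could be sketched like this (the rule list and function names are hypothetical, not from RIFT's actual code):

```python
REPLACEMENTS = [
    ("gonna", "going to"),  # example deterministic rule
    ("  ", " "),            # collapse double spaces
]

def apply_rules(text, rules=REPLACEMENTS):
    """Cheap deterministic pass: safe to run on interim (streaming) text."""
    for old, new in rules:
        text = text.replace(old, new)
    return text

def polish_final(text, llm=None):
    """Expensive LLM pass: run only once, right before delivery.

    Keeping the raw text around (not shown) is what makes each
    polishing step independently undoable.
    """
    text = apply_rules(text)
    if llm is not None:
        text = llm(text)
    return text
```

The split keeps the streaming path latency-free while still allowing a heavier rewrite at the end.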


That said, FreeFlow is open source for exactly this reason: everyone will have their own preference. If you would like to turn this behavior into a configurable preference, we'd happily accept a pull request.


I'm not familiar with screenshot drag. Does that copy the image into the target?

If you take a lot of screenshots, I highly recommend https://shottr.cc (nagware/freemium)

- Shows preview with buttons to copy to clipboard and/or save to file

- Can be configured to automatically copy/save (open app to preview last capture)

- Preview has tons of useful features like crop, annotations, color picker, ruler, OCR


This was shared on HN over a decade ago, but still stands the test of time: http://ciar.org/ttk/public/apigee.web_api.pdf


Thank you!

