Radio signals weaken and dissipate with distance. A broadcast signal can fade below the cosmic microwave background within a few light-years, depending on its strength. The sci-fi trope of aliens picking up Earth TV and radio just isn't plausible.
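As a rough illustration of why, here's the free-space inverse-square law with made-up round numbers (a 1 MW isotropic transmitter is an assumption for scale, not a real station):

```python
import math

def flux_w_per_m2(tx_power_w: float, distance_m: float) -> float:
    """Power flux of an isotropic transmitter, spread over a sphere (inverse-square law)."""
    return tx_power_w / (4 * math.pi * distance_m ** 2)

LIGHT_YEAR_M = 9.461e15
TX_POWER_W = 1e6  # assume a strong 1 MW broadcast transmitter

f1 = flux_w_per_m2(TX_POWER_W, 1 * LIGHT_YEAR_M)
f100 = flux_w_per_m2(TX_POWER_W, 100 * LIGHT_YEAR_M)
print(f1, f100)  # flux at 100 light-years is 10,000x weaker than at 1 light-year
```

Even at one light-year the flux is on the order of 1e-27 W/m^2, and going 100x further out costs another factor of 10,000.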
No, we don't. If you're talking about SETI, that's looking at radio signals. If you're talking about killer-asteroid early-warning detection, we generally can't reliably detect Voyager-sized objects even in our own solar system, let alone in interstellar space.
Imagine how far technology has come in 100 years. Then imagine an alien civilization with just a 1-million-year head start. A million years is less than one ten-thousandth of the age of the universe.
We have literally no idea what technology the alien could have.
Maybe there are aliens out there so advanced that they could be reading our screens right now in real time from across the galaxy using some weird post-quantum silly sauce we can't even comprehend. But given what we do know and observe (mostly the Fermi paradox and thermodynamics), it doesn't seem likely, at least not to me, that there is someone 100 light-years away teasing I Love Lucy out of the CMB. It seems even less likely that they could pinpoint our location from that, let alone try to annihilate us.
The aliens have the same physics we do. Science isn't magic. Short of replacing everything we have known or discovered in the past 250 years, from entropy to electromagnetic theory to gravity to motion, with brand-new theories that somehow explain all known phenomena equally well while also allowing lots of outright magic, no, the aliens are not able to collect radio waves from below the noise floor.
> The aliens have the same physics we do. Science isn't magic.
Show a spacecraft to someone from the middle ages and they would think it's magic.
There is physics that has not been discovered. Lots of things are still unexplained.
> no, the aliens are not able to collect radio waves from below the noise floor
Before we had quadrature modulation and quadrature phase-shift keying, we thought we had hit the noise floor for wireless bandwidth. After we thought we had really hit the ceiling, we got beamforming. There's stuff that hasn't been thought of yet. We don't know the unknown unknowns.
I don't think VR will ever reach mass adoption, I don't think there's any reason for it to. It isn't sufficiently superior to screens and keyboard/mouse or phones to warrant it. I liken the hype around VR to the hype around LLMs - it's good at some things, but not everything.
I recently started enjoying virtual bike tours on my exercise bike, but vertigo when the camera turns is an issue. I absolutely wouldn't do it on a treadmill.
Are video game consoles "fundamentally asocial" because there are likely fewer controllers than people in a household? Are computers, because they only have one mouse and keyboard? The existence of VRChat suggests VR handles social gaming just fine.
I can think of a lot of impediments to VR (the weight of the headset and vertigo being the biggest) but needing everyone in a room to share a single headset at the same time seems like an extremely fringe case. The real problem there is just the cost of buying enough headsets.
Apologies if this wasn't clear; I thought it was obvious: you have something sitting on your face, isolating you physically, visually, and emotionally.
When I’m playing on my couch with my wife, when something happens on screen we still look at each other and laugh—regardless of whether it’s a single player game or not. There’s eye contact.
If I’m engrossed in a game of RL in my office, I can still look down at my dog when she comes and boops me. There’s eye contact.
Virtual reality, for all its qualities and its ability to let you be digitally present with other people online or in VR, physically isolates its users from the people who are actually nearby.
>Less cynically now, the president has admired Xi many many times openly, and it’s clear he prefers an administrative style similar to China.
China style authoritarianism can't work in the US because the CCP has to actually deliver quality of life to the people in exchange for their political and cultural oppression. America's tech oligarchy is only willing to deliver for billionaires.
If Trump were really setting up a Xi style dictatorship he would be pacifying the people with investment in infrastructure, education and healthcare. Americans would gladly tolerate armed masked thugs kidnapping immigrants and the wholesale censorship of the internet and the press if he actually kept prices down and employment up.
Or he could at least be better at giving the impression of doing so. The biggest failure of Trumpism by far has to be its propaganda. None of these people know how to lie effectively, not nearly as well as the neocons. Look at the whole song and dance they did to justify war with Iraq versus... whatever the hell America's plan with Iran is supposed to be (besides Christian holy war, apparently).
> It understands and acknowledges every request, idea, vision, flaw, structure, requirement, needs and just ignores and fails to implement it and cannot consistently think through it. I just can’t believe that.
Believe it. You're anthropomorphizing. It doesn't understand anything. There is no "thinking" going on. Yes, the point of LLMs as a service is to make money. Yes, the service is designed to maximize profit. Yes, there are dark patterns baked into the system. Yes, keeping you addicted and using the service is part of the business model. This isn't human instrumentality, it's just capitalism.
Until you realize the machine isn't qualitatively superior to your own mind and your own efforts, you're just going to keep torturing yourself because your nature forces you to maximize your productivity at any cost, which given your false assumptions about LLMs means ceding as much of yourself to the machine as possible and suffering its inadequacies. I use "you" collectively here because it seems like a lot of people have worked themselves into this corner where they don't like what LLMs do for them but feel compelled to use them anyway.
It's just a tool. If you don't like the tool, don't use the tool.
Fair. “It understands” is probably the emotional description, not the technical one.
The practical problem is that it can imitate understanding well enough to get a large project moving, then break down right where durable system memory and architectural consistency matter most.
So yes, it’s a tool. The problem is that it’s useful enough that “just don’t use it” is not a real answer, and broken enough that you eventually have to build around the gap.
>Well, this is how it is with real humans as well. The moment the human gets tired, or the information they need to process is too much, they produce errors.
LLMs don't hallucinate because they get overwhelmed and tired JFC.
Because LLMs are stochastic text-generation machines. They are designed to generate plausible natural human language via next-token prediction, and the result may or may not be true depending on the correctness and quality of their training data. But that correctness (or lack thereof) comes from the human effort that produced the training data, not from some innate ability of the LLM to comprehend real-world context and deduce truth from falsehood, because LLMs don't have anything of the sort.
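To make "plausible next-token prediction without truth" concrete, here's a deliberately tiny bigram sketch (nothing like a real transformer, just the same basic idea of sampling likely continuations from counts):

```python
import random
from collections import defaultdict

# Toy bigram "language model": count which word follows which in the corpus,
# then sample continuations by frequency. It has no notion of truth, only of
# what tends to come next in its training text.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_token(prev: str) -> str:
    options = counts[prev]
    words = list(options)
    weights = [options[w] for w in words]
    return random.choices(words, weights=weights)[0]

random.seed(1)
out = ["the"]
for _ in range(5):
    out.append(next_token(out[-1]))
print(" ".join(out))  # grammatical-looking output with zero grounding in facts
```

Whether it emits "the cat sat on the mat" or "the cat ate the mat" depends only on the counts and the dice roll, which is the hallucination problem in miniature.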
Yes, exactly.
That’s why it feels so strange in practice. It can mimic understanding well enough to get you moving, but when the project gets deep enough, you find out it was generating plausibility, not actually holding the system in its context.
They don't. They work as intended and "hallucination" is actually a marketing term to make it seem they are more than what they really are: text prediction software.
The only thing karma reliably indicates is participation over time; the signal is too noisy for anything else. If anything, high karma should be a red flag. The very best contributors here rarely comment because they have better things to do. It shows an 8/8 score for me, and I doubt anyone would consider me a top-tier, high-quality contributor.
A plugin like HN Comments Owl would be more useful IMHO.
"stochastic parrot" describes what an LLM does, that it (like a parrot) generates coherent human language without understanding its meaning.
Being offended on behalf of software is weird.