Hacker News | imglorp's comments

There are glasses that do only captions, no recording or camera.

The article says "any eyewear with video and audio recording capability" which makes sense. Although even that is unreasonably specific and should just say "recording or transmission device" to ban the activity and not the item.


Pretty sure any device which subtitles audio could be used to record that audio.

Then it's up to the company to make a compliant device if they want it to be used in a courtroom.

I wonder if it's possible for a regular machine with two high speed ports to do a cable test by itself. Maybe it can't test all the attributes but could it at least verify speed claims in software?

Apparently the USB driver stack doesn't report the cable's eMarker chip data back to the OS. However, benchmarking actual transfer throughput is the ultimate test for data connections (vs. charging use cases). Unfortunately, TFA doesn't really go into this aspect of cable testing, as the tester seems to only report eMarker data, which pins are connected, and copper resistance.

Since a >$1,000 automated lab cable throughput tester is overkill, my thumbnail test for high-speed USB-C data cables is to run a disk speed benchmark against a very fast, well-characterized external NVMe enclosure with a known-fast NVMe drive. I know what the throughput should be based on prior tests with an $80 active 1 m Thunderbolt cable made for high-end USB-C docks, confirmed by online benchmark reviews from credible sources.
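That thumbnail test is easy to script. A minimal sketch, assuming the drive is mounted somewhere you can write to (the mount point and file size here are made-up examples, and this measures filesystem-level throughput, not the raw link rate):

```python
import os
import time

def sequential_write_mbps(target_dir, size_mb=256):
    """Time a sequential write of size_mb MiB and return MiB/s.
    os.fsync forces the data to the device so the page cache can't flatter us."""
    chunk = os.urandom(1024 * 1024)  # 1 MiB of incompressible data
    path = os.path.join(target_dir, "throughput_test.bin")
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())
    elapsed = time.perf_counter() - start
    os.remove(path)
    return size_mb / elapsed

# Example: point this at the external enclosure's mount point, e.g.
# print(sequential_write_mbps("/mnt/nvme"))
```

As a rough yardstick, a USB 3.2 Gen 2 (10 Gbps) link should sustain on the order of 1 GB/s to a fast NVMe drive; a number closer to 40 MB/s usually means the cable negotiated USB 2.0.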


There would be too many factors involved for a proper test. Many laptop USB controllers would probably not even have the capacity to run two ports at full speed simultaneously.

Is there a good prompt addition to skip all the gratuitous affirmation and tell me when I'm wrong?

It doesn't know when you're wrong! Pretend I'm shaking you by your shoulders as I'm saying this, because it's really important to understand!

And it can NEVER know when you’re wrong!

yes:

> skip all the gratuitous affirmation and tell me when I'm wrong


That seems to imply concern for the gambler, who at least has chosen to play.

A much bigger problem might be when these markets bet on a meatspace event and then a bunch of bettors go out and try to influence innocents in meatspace, to the great detriment of society. Like this journalist: https://readwrite.com/threats-israeli-reporter-polymarket


What's also interesting is that the Russians adopted a similar color for aircraft cockpits, e.g. this MiG-31: https://cdn.jetphotos.com/full/2/75332_1265484412.jpg

Meanwhile the Yanks stayed with mil-spec gray on a similar ship, the F-15: https://en.wikipedia.org/wiki/File:F-15_Eagle_Cockpit.jpg


Yes it's different: it will match anywhere in the previous command lines.

The marketers did this for 5G also, calling their product 5G before it was actually deployed, only because theirs came after 4G and they wanted to ride the upcoming 5G buzz.

It seems marketing /depends/ on conflating terms and misleading consumers. Shakespeare might have gotten it wrong with his quip about lawyers.

https://www.pbs.org/newshour/economy/att-to-drop-misleading-...


There was soooo much intentional disinformation around 5G. Everyone who wanted to sell anything intentionally confused the >1Gbps millimeter wave line-of-sight kind of 5G with the "4G but with some changes to handle more devices connected to one tower" kind of 5G. I wonder how many bought a "5G phone" expecting millimeter wave but only got the slightly improved 4G.

This is mostly the standard’s fault, right? Putting more conventional wavelengths and the mm stuff together in one standard was… a choice.

From a standards design perspective, there is nothing wrong with it. It's the same protocol running on two very different frequency bands. They co-exist and support each other.

The problem is how marketing interacted with it.
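For reference, 5G NR really is one protocol split across two frequency ranges in the 3GPP spec. A small sketch of that split (band edges are the approximate 3GPP figures):

```python
# 5G NR frequency ranges per 3GPP (approximate edges, in GHz).
NR_FREQUENCY_RANGES = {
    "FR1": (0.41, 7.125),   # "sub-6": the 4G-like coverage/capacity bands
    "FR2": (24.25, 52.6),   # millimeter wave: multi-Gbps, line-of-sight
}

def frequency_range(ghz):
    """Return which NR frequency range a carrier frequency falls in, if any."""
    for name, (lo, hi) in NR_FREQUENCY_RANGES.items():
        if lo <= ghz <= hi:
            return name
    return None
```

A common mid-band deployment at 3.5 GHz lands in FR1; a 28 GHz carrier is FR2. Both are legitimately "5G," which is exactly the ambiguity the marketing leaned on.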


They should share a specification (I know this is correctly called a 'standard'), but there should have been a separate logo for each non-interoperable group of useful features (a different concept also often called a 'standard'), as USB has shown.

Wait til you search the term “6g”.

Bill Hicks had some thoughts, too:

https://youtube.com/watch?v=GaD8y-CGhMw


It’s been a long long time since I’ve heard that name come up in conversation.

Thanks for the trip down memory lane.


Yes, my wireless router has "5G WiFi" but only does 4G. I didn't have a choice about using it since it comes from the provider, but still stupid.

5G and 4G are not terms applied to WiFi. We have 802.11a/b/g/n/ac/ax and WiFi 6/7.

WiFi operates in the 2.4, 5, 6GHz bands, but those frequency bands are not used to differentiate WiFi standards because you can mix and match WiFi 6/7 on all three bands.

There are also more WiFi bands below 2.4 and above 6GHz, but they're not common worldwide.
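The generation-vs-band distinction can be laid out as a rough mapping (band support varies by device and region, so treat this as illustrative):

```python
# WiFi generation branding vs. 802.11 amendment and usable bands.
# Note "5G" appears nowhere: the "-5G" SSID suffix means the 5 GHz band.
WIFI_GENERATIONS = {
    "WiFi 4":  {"amendment": "802.11n",  "bands_ghz": [2.4, 5]},
    "WiFi 5":  {"amendment": "802.11ac", "bands_ghz": [5]},
    "WiFi 6":  {"amendment": "802.11ax", "bands_ghz": [2.4, 5]},
    "WiFi 6E": {"amendment": "802.11ax", "bands_ghz": [2.4, 5, 6]},
    "WiFi 7":  {"amendment": "802.11be", "bands_ghz": [2.4, 5, 6]},
}
```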


> 5G and 4G are not terms applied to WiFi.

Tell that to Netgear, AT&T and several other Wifi hub manufacturers.

The default SSIDs on many hubs are names like “NETGEAR23” and “NETGEAR23-5G”.


What is 5G WiFi? Do you mean 5 GHz WiFi?

Yeah, this is probably the point. His MO, over and over, is to take something away from you and then sell it back.

The good news is this domain doesn't call for a cutting-edge 2 nm process.

There are more than a half dozen fabs in the US that can produce networking chips like Ethernet controllers, line drivers, SoCs, etc.: Intel, TI, Samsung, et al. If the US wanted to onshore routers, we could make it happen.


    > If US wanted to onshore routers, we could make it happen
It will take months if not years to get a product to market.

If I understand the determination on which the FCC decision is based:

https://www.fcc.gov/sites/default/files/NSD-Routers0326.pdf

a router produced in the US from foreign-made parts (controllers, drivers, etc.) is not on the "covered list", although I admit the wording is vague: "produced in the US".


Public service announcement

You can pin action versions to their commit hash. Some might say this is a best practice for now. It looks like this, where the comment says where the hash is supposed to point.

      Old -->   uses: actions/checkout@v4
      New -->   uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
There is a tool to sweep through your repo and automate this: https://github.com/mheap/pin-github-action
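The check itself is simple enough to sketch: a pinned `uses:` reference points at a full 40-hex-character commit SHA, and anything else (a tag or branch name) is mutable. A minimal detector, with the workflow text below as a made-up example:

```python
import re

# A pinned ref is a full 40-hex commit SHA; tags and branches are mutable.
USES_RE = re.compile(r"^\s*(?:-\s*)?uses:\s*([\w.-]+/[\w.-]+)@(\S+)", re.M)
SHA_RE = re.compile(r"^[0-9a-f]{40}$")

def unpinned_actions(workflow_yaml):
    """Return 'owner/repo@ref' strings whose ref is not a full commit SHA."""
    return [
        f"{action}@{ref}"
        for action, ref in USES_RE.findall(workflow_yaml)
        if not SHA_RE.match(ref)
    ]

workflow = """
jobs:
  build:
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@11bd71901bbe5b1630ceea73d27597364c9af683
"""
print(unpinned_actions(workflow))  # ['actions/checkout@v4']
```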

The problem is actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 probably doesn’t do this same pinning, and the actions ecosystem is such an intertwined mess that any single compromised action can propagate to the rest

Yes, true, but at least the fire won't spread through this one point. Hopefully all of your upstreams can be persuaded to pin also.

Doesn't a single compromised action in the chain cause the whole to be fucked? Pinning the top level doesn't prevent any spread.

Might want to vendor everything?

That’s the way to go indeed. We’ve done it, not difficult, just a bit of gruntwork to keep them updated when needed

I don't know what this means in this context.

Make copies of the entire GitHub action dependency tree.

Well, it is a git commit hash of the action repo that contains the transpiled/bundled JavaScript.

Like: https://github.com/actions/checkout/tree/11bd71901bbe5b1630c...

So I'm pretty sure that for the same commit hash, I'll be executing the same content.
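That's because git object IDs are content-addressed: the hash is computed over the object's content, so the same hash implies the same content (barring a SHA-1 collision, which GitHub also screens for). The blob case is easy to demonstrate; this sketch reimplements git's blob hashing:

```python
import hashlib

def git_blob_sha1(content: bytes) -> str:
    """Compute the git object ID for a blob: sha1 over 'blob <len>\\0<content>'."""
    header = f"blob {len(content)}\0".encode()
    return hashlib.sha1(header + content).hexdigest()

# The well-known object ID of an empty file in any git repo:
print(git_blob_sha1(b""))  # e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
```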


This is true specifically for actions/checkout, but composite actions can have other actions as dependencies, and unless the composite action pins the versions of its dependencies, it is vulnerable to this attack.

This article[0] gives a good overview of the challenges, and also has a link to a concrete attack where this was exploited.

[0]: https://nesbitt.io/2025/12/06/github-actions-package-manager...


My preferred tool to solve these issues is called 'gitlab'

Does it solve anything? I don't see this as a GitHub problem, it's a "we built a dependency management system with untrusted publishers" problem.

GitLab's `include` feature has the same concern. They do offer an integrity check, but it's not any more capable than hash pinning to a commit.

Fundamentally, if you offer a way to extend your product with externally-provided components, and you can't control the external publishers, then you've left the door open to 'these issues'.


CircleCI

TravisCI

Jenkins

scripts dir

Etc


yeah, github's business model is not really a git repository but a bunch of other (admittedly useful) stuff that traps people in their ecosystem.

See also pinact[1], gha-update[2], and zizmor's unpinned-uses[3].

The main desiderata for these kinds of action-pinning tools are that they (1) leave a tag comment, (2) leave that comment in a format that Dependabot and/or Renovate understands for bumping purposes, and (3) actually put the full tag in the comment, rather than the cutesy short tag that GitHub encourages people to make mutable (v4.x.y instead of v4).

[1]: https://github.com/suzuki-shunsuke/pinact

[2]: https://github.com/davidism/gha-update

[3]: https://docs.zizmor.sh/audits/#unpinned-uses
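Those three desiderata can be expressed as a single line-format check, sketched below (the SHA and version in the examples are illustrative, not verified tag-to-commit mappings):

```python
import re

# A well-pinned line: full 40-hex SHA, plus a comment naming the full vX.Y.Z tag.
PINNED_RE = re.compile(
    r"uses:\s*[\w.-]+/[\w.-]+@[0-9a-f]{40}\s+#\s*v\d+\.\d+\.\d+\s*$"
)

def well_pinned(line):
    """True if the line pins to a full commit SHA and comments the full tag."""
    return bool(PINNED_RE.search(line))

print(well_pinned(
    "uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2"
))  # True
print(well_pinned(
    "uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4"
))  # False: short mutable tag in the comment
```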


This won't pin the action's dependencies, so it's a shallow approach only.

I've always been worried about their backend changing somehow, such that an attacker could make a commit hash resolve to content you didn't expect for that hash.

> There is a tool to sweep through your repo and automate this: [third-party]

Dependabot, too.


Checkout v4 of course, released in August 2025, which already now pollutes my CI status with garbage warnings about some Node version being deprecated that I couldn't care less about. I swear half the problems of GitHub are because half that organization has some braindead obsession with upgrading everything everywhere all the time, delivering such great early slop experiments as "dependabot".
