Often an acquisition of a company is for the set of customers. If I sell my lawn care business and then turn around and email all my former clients offering them lawn care via my new company, I’ve just undercut what I just sold.
A noncompete shouldn’t be so broad that I couldn’t move to another city and start a lawn care business there, but I shouldn’t be able to compete directly with the business I just sold using my insider information of that business.
There's also a big difference between starting a competing business, as in your example, and being barred from, say, working on "cloud infrastructure" because your previous employer also worked on "cloud infrastructure". It can be blurry for executives, but in general noncompetes seem to be used to push pay down more than for any legitimate business purpose.
> Often an acquisition of a company is for the set of customers.
That's a merger. You can buy yourself into one without currently having any business of your own, in which case the acquisition is purely for the profits.
> I’ve just undercut what I just sold.
No, you've just competed with them. If your prices are lower, then you've undercut them. If their prices are artificially high, then the market, a.k.a. those customers, is the one to benefit.
> but I shouldn’t be able to compete directly with the business I just sold
Competition is _competition_. You didn't buy a market; you bought an opportunity. You still have to compete against everyone else.
> I just sold using my insider information of that business.
Insider information? On a lawn care business that has no issued securities?
Purchases that wouldn't go through if they didn't reduce competition shouldn't happen anyway. Banning those kinds of restrictions would help with that.
I disagree. “Authored by” - and authorship in general - says who did the work. Not who signed off on the work. Reviewed-by me, authored by Claude feels most correct.
> Before AI, did you credit your code completion engine for the portions of code it completed?
Code completion before LLMs helped me type faster by completing variable names, variable types, function arguments, and that’s about it. It was faster than typing it all out character by character, but the autocompletion wasn’t doing anything outside of what I was already intending to write.
With an LLM, I give brief explanations in English to it and it returns tens to hundreds of lines of code at a time. For some people perhaps even more than that. Or you could be having a “conversation” with the LLM about the feature to be added first and then when you’ve explored what it will be like conceptually, you tell it to implement that.
In either case, I would then commit all of that resulting code with the name of the LLM I used as author, and my name as the committer. The tool wrote the code. I committed it.
As the committer of the code, I am responsible for what I commit to the code base, and everyone is able to see who the committer was. I don’t need to claim authorship over the code that the tool wrote in order for people to be able to see who committed it. And it is in my opinion incorrect to claim authorship over any commit that consists for the most part of AI-generated code.
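For the mechanics, here is a minimal sketch of how git records those two roles separately, so "who wrote it" and "who committed it" both stay visible (the names and emails are placeholders, not anything from this thread):

    import os
    import subprocess

    # Git stores author and committer as separate fields on every commit.
    # Here the LLM is recorded as the author and the human as the committer.
    env = dict(
        os.environ,
        GIT_COMMITTER_NAME="Jane Developer",      # hypothetical human committer
        GIT_COMMITTER_EMAIL="jane@example.com",
    )

    subprocess.run(
        [
            "git", "commit",
            "--author", "Claude <claude@example.invalid>",  # tool recorded as author
            "-m", "Add feature (LLM-generated, human-reviewed)",
        ],
        env=env,
        check=True,
    )

Running git log --format='%an (author) / %cn (committer)' afterwards shows both roles for every commit.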
True. Might also vary depending on how one uses the LLM.
For example, in a given interaction the user of the LLM might be acting more like someone requesting a feature, and the LLM is left to implement it. Or the user might be acting akin to a bug reporter providing details on something that’s not working the way it should and again leaving the LLM to implement it.
While on the other hand, someone might instruct the LLM to do something very specific with detailed constraints, and in that way the LLM would perhaps be more along the lines of a fancy auto-complete, writing the lines of code for something that the user of the LLM would otherwise have written more or less exactly the same by hand.
> Oddly they don’t seem to have figured out the generation counting trick, which is something I did come up with over twenty years ago. Combining the two ideas is what allows for there to be no reference to commit ids in the history and have the entire algorithm be structural.
Can you say more about this? What exactly is this trick you’re talking about? What are the benefits?
(That seems to be an archive of the old revctrl.org pages from a while back; most likely Bram Cohen has a blog somewhere explaining it in his own words - probably about 2003, at a guess)
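For what it's worth, here is a rough sketch of what "generation counting" usually means in a commit DAG; this is the generic generation-number idea, and whether it matches Cohen's exact trick is a guess. Each node gets 1 plus the maximum generation of its parents, and since a proper ancestor always has a strictly smaller generation number, ancestry and merge-base walks can prune early without ever comparing commit ids:

    # Illustrative only; the history below is made up.
    from functools import cache

    # commit -> list of parent commits
    parents = {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"], "e": ["d"]}

    @cache
    def generation(commit: str) -> int:
        # Roots get 1; everything else is one more than its highest parent.
        return 1 + max((generation(p) for p in parents[commit]), default=0)

    def cannot_be_ancestor(a: str, b: str) -> bool:
        # A proper ancestor always has a strictly smaller generation number,
        # so this check lets a graph walk stop early.
        return generation(a) >= generation(b)

    print({c: generation(c) for c in parents})  # {'a': 1, 'b': 2, 'c': 2, 'd': 3, 'e': 4}
    print(cannot_be_ancestor("e", "d"))         # True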
They trade blows performance-wise with the M1 MacBook Pro sitting on my desk. And there’s nothing stopping Asahi Linux from running on them except for driver support. They look like fantastic machines.
They’re not ideal for all use cases, of course. I’m happy to still have my big Linux workstation under my desk. But they seem to me like personal computers in all the ways that matter.
Asahi Linux is NOT an option and may never be one: the A18 Pro (and M4) introduced SPTM (Secure Page Table Monitor), which runs at a higher privilege level (GXF EL2) than the OS kernel. Unlike M1/M2/M3, where m1n1 can directly chainload Linux, on the A18 Pro/M4 the page table infrastructure is owned by SPTM and must be initialized by XNU before anything else can run. You cannot bypass it. (source: https://github.com/rusch95/asahi_neo)
> Different color temperatures are produced by different mixes of phosphors.
We can make LED light appear to be any given colour by mixing multiple LEDs. But mixed colour isn't the same as pure colour from a single wavelength of light. Nor is it the same as true broad-spectrum light, like we get from black-body radiation such as the sun or a tungsten bulb.
It's hard to tell the difference just by looking at a light. But different kinds of lights - even lights which look the same colour - will change what objects actually look like. And they probably have different effects on our sleep cycle and our low-light vision. I was in a room once lit only by sodium vapour lights. The lights were yellow, but everything in the room (including me) appeared to be in greyscale. It was uncanny.
This is part of the reason why LED lights are still looked down on by a lot of old school photographers and film makers. Skin doesn't look as good under cheap LED lights.
For light with a narrow spectrum, it is possible to make LEDs that emit with high efficiency at any color inside two ranges: one from near infrared to yellow (corresponding to semiconductor phosphides and arsenides) and one from blue to near ultraviolet (corresponding to semiconductor nitrides).
Only green LEDs have worse efficiency, because they must be made with semiconductors for which optimum efficiency is attained at either lower or higher light frequencies.
Lamps using high-efficiency amber LEDs, with about the same color as sodium lamps, could be made with an energy efficiency at least double that of white LED lamps.
The factor of two comes from the eye's visual sensitivity being about twice as high for light at the sodium color as for ideal white light.
In reality the energy efficiency of such LED lamps should be more than double, because they do not have the losses that white LEDs incur by converting light through fluorescence in a phosphor.
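A rough sanity check of that factor of two, using textbook photopic figures rather than anything from the comment above (683 lm/W peak at 555 nm, V(589 nm) ≈ 0.77, and roughly 200–250 lm/W for an ideal broad white spectrum):

    % Luminous efficacy of monochromatic sodium-colored light (589 nm)
    K_{589} \approx 683\,\tfrac{\mathrm{lm}}{\mathrm{W}} \times V(589\,\mathrm{nm})
            \approx 683 \times 0.77 \approx 525\,\tfrac{\mathrm{lm}}{\mathrm{W}}
    % Ideal white light spanning the visible spectrum
    K_{\mathrm{white}} \approx 200\text{--}250\,\tfrac{\mathrm{lm}}{\mathrm{W}}
    % Ratio
    K_{589} / K_{\mathrm{white}} \approx 2.1\text{--}2.6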
The seafoam green choice was also influenced by eye-rest studies: since our eyes are most sensitive to middle wavelengths, you can keep the room dimmer without losing detail. It reduces fatigue for operators on long shifts.
I guess your linking to it was a self-fulfilling prophecy.
If you read your own reference (not the picture, but where you took it from on Wikipedia) really really carefully, you might be able to tell why it so perfectly applies to you
The person with little knowledge overestimates their capability, and the person who actually knows how complicated [the thing] is usually isn't as confident that they've mastered it.
You’re talking about a confidence and ability gap. I have heard of the Dunning-Kruger effect. I accept all of that.
But the claim above was that having low confidence was correlated with higher skill, i.e. that skill and confidence are anti-correlated. The chart does not show that. The lowest data point for confidence is the point on the left of the chart. This is also the data point corresponding to people who have the least competence. Having low confidence is not evidence that you’re secretly an expert. Confidence and competence are still positively correlated according to that chart.
The Dunning-Kruger effect is not so strong that there are scores of novices convinced they are experts in a field. But in your case, I admit the data may not tell the full story.
That isn't what that shows, and the article you linked to even warns:
> In popular culture, the Dunning–Kruger effect is sometimes misunderstood as claiming that people with low intelligence are generally overconfident, instead of denoting specific overconfidence of people unskilled at particular areas.
Dunning-Kruger has also been discredited, with the suggestion that they may have been overconfident themselves:
Debunking the Dunning‑Kruger effect – the least skilled people know how much they don’t know, but everyone thinks they are better than average (2023): https://theconversation.com/debunking-the-dunning-kruger-eff...
Self-reported studies are arguably weaker evidence, but are common in some areas for ethics reasons. In general, if errors are truly random, then they will cancel out over larger or more frequent population samples.
The study's conclusion was that the skills needed to be effective at some task are the same skills needed to correctly evaluate whether you are actually proficient at that task.
Or put another way, the <5% of the population who are narcissists by their nature become evasive when they perceive their egos as threatened. Thus, they often pose a challenge in a team setting, as compulsive lying or LLM turd-polishing is orthogonal to most real-world tasks.
People are not as unique as they like to believe, and spotting problems is trivial after you meet around 3000 people. Best to avoid the nonsense, and get outside to enjoy life. Have a great day =3
No idea why we all get negative karma on this thread, as I do respect a cited-source opinion even if we disagree. Do look around for papers rather than editorialized content in the future, and note that posting LLM agent output from an account is a violation of YC usage policy. Have a great day =3
> Outside of being forced to use a game launcher to launch their games, what was the real crime?
To me, this was the crime. My friends and I played Mass Effect 3 multiplayer around launch, which was an EA Origin exclusive. It was a total pain! All of us needed to download and install the launcher, then buy and download the game through it, then add each other as "EA Origin friends". The whole process was riddled with bugs at the time, including payment problems and download problems. Origin would crash sometimes. Sometimes we couldn't see each other in multiplayer and needed to restart Origin to fix it. Sometimes another of our friends would join us, and it was always "oh god, what do I have to do to make this work??".
I really love Mass Effect 3. But the experience was traumatic enough that I never bought or played anything through EA Origin again. The quality of Steam is table stakes now. And there are so many good games coming out that exclusivity usually isn't enough to get you over that initial hump.
The biggest gripe I have with the Origin launcher (and to a lesser extent, the Epic launcher), other than "why does it exist at all?", is how laggy all UI actions are. Game developers can render a 3D world at 120+ fps. Why on earth does it take multiple seconds for the UI to respond to a button press sometimes? It's completely inexcusable. The Blizzard launcher is (IMO) the best launcher by this metric. You can tell competent people made it, because everything responds instantly. (The EA launcher might be good now, I wouldn't know. I mostly only play games that release on Steam.)
Epic Games does way more than just purely making games.
They also have their own Steam competitor (Epic Games Store) and, more importantly, they develop and support Unreal Engine used by tons of other game dev companies.
If you want an apples-to-apples comparison (i.e., other big live-service game companies) in terms of employee count, you've got:
Mihoyo (Genshin Impact, Honkai Star Rail) - ~5,000-6,000
What about Valve itself? They have ~350 employees. They make Steam, SteamOS, Steam Deck, Steam Machine, Steam Frame, the Source engine, and run four actively successful live service games: CS2, Dota2, TF2, Deadlock.
Last I heard, Valve makes use of a lot of contractors, however. So the number of people working on their projects is a bit higher than their employee count suggests. Anyone's guess how many, though.
I know they're sponsoring a bunch of ARM and Linux projects as well.
The small size of Valve is simultaneously mind-boggling but also not, given its very intentional independence. I would have to imagine that they must contract out or have partners at least for their hardware relationships, if not for their massively multiplayer online games. At just 350 people, that's enough annual revenue to make everyone there a millionaire several times over. Simultaneously plausible but mind-boggling.
They contract out all the time; they've admitted to it in lots of interviews. So I think through the amount of contracting they're able to keep their core hires down.
Yeah but Valve is not publicly traded, so that comparison is of course totally unfair! /s
Having skilled and happy employees that aren't constantly changing and do not spend all of their time on ways to fuck over customers and chase trends is simply impossible. Releasing a piece of hardware and leaving it open for customers to do with what they want? Linux? Not hiring people the second line goes up and then immediately firing them when line stagnates? Preposterous.
The game store doesn't need a lot of employees. A few years ago it was reported that Valve only needed about 70 employees to run Steam while it generated billions of dollars in Steam fees (a 30% cut per game). It's basically free money for Valve. I bet the situation is similar for the App Store and Google Play.
Though Unreal Engine does indeed need quite a few developers. Additionally, using UE is much cheaper (5% on games exceeding 1 million USD gross revenue) than using Steam (30% on every game). So they not only need more developers than Valve, they also earn less money.
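As a back-of-the-envelope comparison (the $3M gross figure is made up; the 30% and 5%-above-$1M rates are just the ones quoted above, not official terms):

    # Hypothetical game grossing $3M, using the rates quoted in the comments above.
    gross = 3_000_000

    steam_cut = 0.30 * gross                        # 30% of all Steam revenue
    ue_royalty = 0.05 * max(gross - 1_000_000, 0)   # 5% only on gross above $1M

    print(f"Steam cut:  ${steam_cut:,.0f}")    # Steam cut:  $900,000
    print(f"UE royalty: ${ue_royalty:,.0f}")   # UE royalty: $100,000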
Steam doesn’t really attempt to gatekeep submitted content the same way that Apple or Google do, so I would expect those companies to have much larger support teams, mostly in non-development roles. Steam support has also historically been kind of a joke (not sure if it’s improved in the last 5 years), though I don’t know if Google/Apple provide a better experience.
They don't make games, but Unity does operate worldwide and has a LOT of support for ads (their main money maker, unless something recent happened).
That globalization is a big reason many tech companies swell. When you need a team to work in and around every region's laws and regulations, you get big quickly.
But also, Unity has slimmed down and scaled back on a lot of initiatives.