Hacker News | artemonster's comments

Imagine, in steampunk fashion, an alternative future timeline where computer tech froze in the 80s due to some physical limitation that prohibited shrinking transistors. All typical laptops would have the same config as this awesome project. What would society become?

Speed certainly wouldn't be there, but capabilities would. Plenty could get done on those old machines — most of it had to do with programmers having the imagination & skill to be able to shoehorn their ideas into spaces they weren't meant to be crammed into.

One memory this project brought to mind for me was a hack I came across which allowed simultaneously running DOS 3.3 & ProDOS on a 128k Apple II, giving each 64k (well, a little less due to overhead) & a way to switch between the two with a simple command. Two programs couldn't run at once, but one could step between the two OSes to run programs made for each pretty seamlessly. If this sort of thing was possible on basic consumer hardware, ten or twenty years of development would have led to many far more interesting & useful things.


Nah, something like LLMs wouldn't be possible due to sheer power consumption - abstract (FL)OPs/µW is billions of times worse than modern tech. I used Claude to make me back-of-a-napkin calcs: a single LLM prompt on 6502-era tech would cost over 3k EUR vs a fraction of a cent today, DISREGARDING WALL TIME (which is ridiculously impractical)

I believe the actual silicon of a 6502 is much smaller than the DIP package, so even if we couldn't shrink the silicon itself much more, you could just take up more space inside the package and use a package with more pins, like current CPU designs. You would probably hit a bottleneck at some point, since the speed of light eventually becomes a problem for processing speed, but then I'd expect we'd just go into massively parallel systems, with multiple cores acting somewhat individually.

Okay what if something else had prevented something better than a 6502 being mass market available?

the 6502 package would probably shrink to use something like a BGA package, and you could probably make some kind of "multicore" system using 6502 processors. I'm not knowledgeable enough to say how feasible that would be, but you could probably use something with shared memory regions to pass data between them and run code in parallel.

If you were absolutely limited to 6502 DIP chips, there would probably be more prevalence of large mainframe systems and single 6502-based "terminals"/"thin clients". The mainframes could use designs similar to the Transputer or the Connection Machine, combining large numbers of (comparatively) low-power processors into a single, more powerful computer. Both used custom processors and appeared in the mid-80s. You could probably fairly easily create a "graphics card"-style system, comprised of many 6502 cores in a SIMD configuration.

I don't know how easy it would be to implement wifi or ethernet with only 6502 chips, so communications with the mainframe might be quite slow


Isn't this basically the idea behind Collapse OS? Chin up! That could still be our future.

I was thinking lately about how much memory you could handle on a 6502. The BBC Micro paged up to 16 ROMs/RAM banks into a 16KB window, but if you could have 256 banks you could do 4MB. One problem is that that would require a very large PCB. Another is that the OS searches for commands on all the ROMs, and this would become slow with so many banks; one solution would be to limit the ROMs to the first few banks and let the rest be RAM.

It could be useful for some sort of minicomputer for business applications.


The Commodore REU (RAM Expansion Unit) architecture for the C64/C128 allows for up to 16 MiB - 256 banks of 256 addresses in 256 pages.

Due to the lack of support hardware in the C64 (no hardware RAM bank switching/MMU), this memory is not bank-switched into the CPU's address space and directly addressable; it's copied on request by DMA into actual system RAM. But in some sense, a C64 with a 16 MiB REU is a 6502 with 16 MiB RAM.

But yeah, you want CPU addressable RAM with real bank switching. You couldn't really do 16 MiB, you wouldn't want to bank switch the entire 64 KiB memory space. The Commander X16 (a modern hobbyist 6502 computer) supports up to 2 MiB by having hardware capable of switching 256 banks into an 8 KiB window (2 MiB/256 banks = 8 KiB).

Let's say you design something with 32 KiB pages instead -- that seems kind of plausible, depending on what the system does -- you could then do 256 × 32 KiB = 8 MiB and still have 32 KiB of non-paged memory space available. I think that's just about the maximum you would want without the code or hardware getting too hairy.
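
The window-size trade-off above is easy to check with a few lines of Python (a sketch only; `banked_total` is a made-up helper, not any real tool's API):

```python
# Bank-window arithmetic from the comments above: an 8-bit bank register
# gives 256 banks, and the window size sets both the total banked memory
# and how much of the 64 KiB address map stays fixed (un-banked).

KIB = 1024

def banked_total(window_kib, n_banks=256):
    """Total banked memory in bytes for a given window size."""
    return n_banks * window_kib * KIB

print(banked_total(8) // (KIB * KIB))    # X16 style: 256 x 8 KiB  -> 2 (MiB)
print(banked_total(32) // (KIB * KIB))   #            256 x 32 KiB -> 8 (MiB)
print(64 - 32)                           # KiB of the 64 KiB map left un-banked
```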


Depends entirely on what banking scheme you use. Nothing stops you from adding e.g. an 8-bit banking register (even two of them, one for instruction fetches, another one for normal memory reads/writes) to serve as bits 23–16 for the 24-bit memory bus. That's what WDC 65C816 from 1985 does, but it also goes full 16-bit mode as well.

And if you have a 16-bit CPU, you can do all kinds of silly stuff. For instance, you can have 4 16-bit MSRs, let's call them BANK0–BANK3, selected by the two upper bits of a 16-bit address; they provide the top 16 bits for the bus, while the lower 14 bits come from the original address. That already gives you 30 bits, for 1 GiB of addressable physical memory (and having 4 banks available at the same time instead of just 2 is way more comfortable). Nothing stops you from adding yet another 4 16-bit registers, BANK0_TOP–BANK3_TOP, to serve as even higher bits of the total address; that'd give you 16+16+14 = 46 bits of physical address (64 TiB), which is only slightly less than what x64 used to give you for many years (48 bits, 256 TiB).
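
The 4-MSR scheme can be sketched as a tiny address-translation model (names like `translate` and `regs` are invented for illustration, and only the 1 GiB variant without the `_TOP` registers is shown):

```python
# Sketch of the 4-bank scheme described above: the top 2 bits of a 16-bit
# address select one of four 16-bit bank registers, which supply physical
# address bits 29..14; the low 14 bits pass through unchanged.

WINDOW_BITS = 14                        # each bank maps a 16 KiB window

def translate(vaddr, bank_regs):
    """16-bit virtual address -> 30-bit physical address."""
    assert 0 <= vaddr < 1 << 16 and len(bank_regs) == 4
    bank = vaddr >> WINDOW_BITS                  # upper 2 bits: BANK0..BANK3
    offset = vaddr & ((1 << WINDOW_BITS) - 1)    # lower 14 bits
    return (bank_regs[bank] << WINDOW_BITS) | offset

regs = [0x0000, 0x0001, 0x0002, 0xFFFF]
print(hex(translate(0x0000, regs)))   # 0x0 (bank 0 points at physical 0)
print(hex(translate(0x4000, regs)))   # 0x4000 (bank 1 -> second 16 KiB page)
print(hex(translate(0xFFFF, regs)))   # 0x3fffffff, top of the 1 GiB space
```

The BANKn_TOP extension is the same trick again: concatenate another 16-bit register above these, for 16+16+14 = 46 bits.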


I was trying to get a grasp on what would be practical.

Even 4MB would take you hours to load from floppies with a 6502.

Terabytes with a 68000 would also be impractical.


> Even 4MB would take you hours to load from floppies with a 6502.

Depends on your clock. Also, you could use some dedicated hardware, like a DMA controller, e.g. the 8257 or 8237. From the 8257's datasheet:

    Speed

    The 8257 uses four clock cycles to transfer a byte of
    data. No cycles are lost in the master-to-master transfer,
    maximizing bus efficiency. A 2MHz clock input will
    allow the 8257 to transfer at a rate of 500K bytes/second.
and I recall 8237 could do even better, if wired and programmed properly.
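
For a sense of scale, a hedged back-of-envelope for the 4 MB figure upthread; the floppy number is an assumed effective throughput (Apple II-class drives through DOS managed on the order of 1 KB/s, far below the raw bit rate), and the DMA number is the datasheet's 4 clocks/byte at 2 MHz:

```python
# Rough load-time estimates for 4 MB on 6502-era hardware (assumptions noted).

KIB, MIB = 1024, 1024 * 1024
payload = 4 * MIB                        # the 4 MB of banked RAM upthread

floppy_rate = 1 * KIB                    # bytes/s, assumed effective OS-level rate
disk_size = 140 * KIB                    # one Apple II floppy side
load_minutes = payload / floppy_rate / 60     # pure transfer time: ~68 min
disk_swaps = -(-payload // disk_size)         # ceiling division: ~30 disks

dma_rate = 2_000_000 // 4                # 8257: 4 clocks/byte -> 500,000 B/s
dma_seconds = payload / dma_rate         # if the source can actually feed it

print(f"{load_minutes:.0f} min over {disk_swaps} floppies vs {dma_seconds:.1f} s by DMA")
```

So the "hours" come mostly from shuffling disks and real-world overheads rather than the raw arithmetic, and a DMA controller only helps if the source can keep up with it.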

Hard drives were available for the 6502. They were expensive ($10k for a 10MB drive, as I recall); prices came down a lot, but they were never affordable in the 1980s.

Processing terabytes with a single CPU was impractical, but you could in theory connect it.


I know someone who, in the 1990s, had 5MB connected to his Atari. He had two different expansions and used all the memory for a RAM disk; as a result his BBS was the most responsive remote system I've ever used, including ssh to the server under my desk (open question: was it really, or is this nostalgia?).

> Imagine in steampunk fashion

See The 8-bit Guy regarding what the world would be like if we were still limited to vacuum tubes: https://www.youtube.com/watch?v=mEpnRM97ACQ (video)


Apple XXVgs and Amiga 15,000, I’m digging this alternative.

Laptops would be a lot less common. If computers were stuck in this era for that long, fewer people would be interested. Prices would be high.

Please stop bickering about Verilog vs VHDL - if you use NBAs (non-blocking assignments), the scheduler works exactly the same in modern-day simulators. There is no crown jewel in VHDL anymore. Also the type system is annoying. It's just in your way, not helping at all.

You're not wrong, but blocking assignments (and their equivalent in VHDL, variables), are useful as local variables to a process/always block. For instance to factor common sub-expressions and not repeat them. So using only non-blocking assignments everywhere would lead to more ugly code.

Of course blocking assignment is used too, and even in that always_comb case the scheduler splits eval/assign into 2 phases!

Draw yourself an SR latch and try simulating it. Or the circuit known as a "pulse generator".

Both SystemVerilog and VHDL have AMS extensions for simulating analog circuits. They work pretty well but you also pay a pretty penny for the simulator licenses for them.

Those are analog circuits, if you put them in your digital design you are doing something wrong.

Don't know if trolling. An SR latch you can do with 2 NANDs or NORs; there are plenty of *digital* circuits with that functionality, and yes, there are very rare cases when you construct this out of logic and don't use a library cell for it. A pulse circuit is AND(not(not(not(a))), a), also rarely used, but used nonetheless. To properly model/simulate these you need delta cycles.
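
The delta-cycle point can be shown with a toy simulator (a Python sketch, not any real HDL tool): a feedback circuit like a cross-coupled NAND SR latch only settles because the simulator keeps re-evaluating the gates in zero-time delta steps until nothing changes.

```python
# Toy delta-cycle evaluation of a cross-coupled NAND SR latch (active-low
# inputs). Each loop iteration is one "delta": re-evaluate every gate with
# the current signal values, repeat until the net settles.

def nand(a, b):
    return 0 if (a and b) else 1

def sr_latch(s_n, r_n, q=0, qn=1, max_deltas=16):
    for _ in range(max_deltas):
        new_q, new_qn = nand(s_n, qn), nand(r_n, q)
        if (new_q, new_qn) == (q, qn):
            return q, qn                 # settled: no more delta events
        q, qn = new_q, new_qn
    raise RuntimeError("did not settle (oscillation)")

q, qn = sr_latch(0, 1)           # set:   /S=0, /R=1 -> Q=1
q, qn = sr_latch(1, 1, q, qn)    # hold:  state is remembered, Q stays 1
q, qn = sr_latch(1, 0, q, qn)    # reset: /S=1, /R=0 -> Q=0
print(q, qn)                     # 0 1
```

Releasing both inputs from /S=/R=0 simultaneously is the classic case where the deltas never converge, which is exactly the metastability the library-cell versions paper over.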

I'm not sure if you are trolling. 99.999% of digital design is "if rising edge clk new_state <= fn(old_state, input)", with an (a)sync reset. The language should make that the default and simple to do, and anything else out of the ordinary hard. Now it's more the other way around.

All circuits are analog when physically realized, the digital view is an abstraction.

New proc step: Cheese Vapor Deposition


I want that on my waffler


I always applaud homebrew CPU designs, but after doing so many myself I would really advise staying away from DIP chips/breadboards/wire-wrap and any attempt to put it into the real physical world. Taking a build out of Logisim/Verilog and into real-world chips sucks away all the fun of CPU design - suddenly you have to deal with invisible issues like timing, glitchy half-dead chips, bad wire connections, etc. These are not challenges, just mundane dull work. The only exception to the "stay in the sim" rule is if you want to make an "art statement", i.e. like BMOW (or my relay CPU https://github.com/artemonster/relay-cpu/blob/main/images/fr... /shamelessplug)


I'm totally with you personally, but sometimes doing the actually hard part is fun. Type 2 fun.

Long ago I took a CPU architecture class and we implemented designs in Verilog as a final project. Apparently people who took the class in the late 90s (before my time) could actually tape-out their designs and pay a few hundred dollars to get fabbed chips as part of a multiproject wafer. I was always curious if those chips actually worked, or just looked pretty.


Type 2 fun, totally stealing that!


My advice would be to consider the possibility, not necessarily to stay out of the physical world. For some, those physical details may be the fun part. Some hate verilog. Some want to put it on an FPGA, some don't. I, personally, moved away from FPGAs due to bad documentation (looking at you, Lattice).

An alternative to Verilog is RTL simulation in a higher-level language, or even higher-level simulation.

Just remember that you can't define what is "fun".


I'd take it further and say don't even design your own ISA, because it's super rewarding watching your custom-designed CPU run real software from an actual compiler (all you need is rv32i minus the CSRs)


Couldn't disagree more. To the extent building a homebrew CPU is interesting at all, for me it's _only_ making it actually work despite all of the real world hiccups that make it interesting. Designing it in the simulator is "easy".


I think the next step will be an isolated version of an invite-only internet where you have to be physically present with your invitee to give them access. There will be a beautiful navigation widget where you can access a unified "addon" to any page: a community-moderated comment section, version history of that page, backlinks, a carefully curated "related" section (so that you can continue browsing beautiful human-written content on 1910-era steam locomotives, similar to 90s-era webrings), a donate button so that you can support the author, and much more! Oh, the dream


Optional decentralized hosting, a unified cryptocurrency as payment tokens, a single open LLM as a summary and search-indexing tool, specialized toolkits for journals and social networks (LiveJournal, early Twitter, early FB). Most importantly: you can post anonymously where it's allowed (there could be areas where it's disallowed entirely, like a public square), but your account will take the punishment, so no edgy shitposting behind throwaways.


I find it interesting that we haven't invented a democratic version of policing a rule system. HN is dang, and he is basically dictator and guardian of these rules. If you replace him with some typical Reddit mod, HN dies. If you spread this role out to democratically elected mods via the karma system, it falls apart just as quickly as StackOverflow did, so HN also dies.


Very hot and edgy take: theoretical CS is vastly overrated and useless. As someone who actively studied the field, worked on contemporary CPU archs, and still does some casual PL research: aside from VERY FEW instances from theoretical CS about graphs/algos, there has been little to zero impact on practical developments in the overall field since the 80s. All modern-day Dijkstras produce slop research about weaving dynamic context into Java programs, converting funds into garbage papers. Deeper CS research is totally lost in type gibberish or nonsense formalisms. IMO research and science overall are in a deep crisis, and I can clearly see it from the CS perspective


Well, I think there is something to it. Computers were at some point newly invented so research in algorithms suddenly became much more applicable. This opened up a gold mine of research opportunities. But like real life mines at some point they get depleted and then the research becomes much less interesting unless you happen to be interested in niche topics. But, of course, the paper mill needs to keep running and so does the production of PhDs.


> theoretical CS is vastly overrated and useless

> as someone who actively studied the field,

Does not compute.

Your comment is mere empty verbiage with no information.


Your critique is valid, but I am not in a mood to prove myself to anons on the internet :)


I am not the "anon" here; you are.

You made a preposterous statement, got called out, and are now making excuses.

Anybody who claims to have studied "Theoretical Computer Science" can/will never make the statements that you did (and that too in a thread to do with Niklaus Wirth's achievements who was one of the most "practical" of "theoretical computer scientists"!).

Here is wikipedia for edification - https://en.wikipedia.org/wiki/Theoretical_computer_science and https://en.wikipedia.org/wiki/Computer_science


"I come to talk low-effort shit, not to think or inform". Par for the course, I suppose.


I assume that you are talking about modern "theoretical CS", because among the "theoretical CS" papers from the fifties, sixties, seventies, and even some that are more recent, I have found a lot that remain very valuable. I have also seen a lot of modern programmers who either make avoidable mistakes or implement very suboptimal solutions, just because they are no longer aware of ancient research results that were well known in the past.

I especially hate those who attempt to design new programming languages today, but then demonstrate a complete lack of awareness about the history of programming languages, by introducing a lot of design errors in their languages, which had been discussed decades ago and for which good solutions had been found at that time, but those solutions were implemented in languages that never reached the popularity of C and its descendants, so only few know about them today.



Indeed, we don't really need affine type systems, what use could we get for them in the industry. /s


If you had really followed the research in type systems and seen how it *factually* intersects with practical reality, you wouldn't joke about it. What they do in "research" is bizarre nonsense; the sane implementations (only slightly grounded in formalisms) are what actually get used


I do, and hope that one day stuff like dependent types and formal proofs are every day tools, alongside our AI masters, which also don't use any learnings from scientific research.


Every clueless person who suggests that we move to GPUs entirely has zero idea how things work and is basically suggesting using Lambos to plow fields and tractors to race in NASCAR


Bad comparison. Lambos are regularly plowing fields and they're quite good at it. https://www.lamborghini-tractors.com/en-eu/


I remembered that Lambos used to make tractors after I posted the comment. Nice catch!


In past times, when the Czar's elite could be executed like cattle and French kings knew their heads could fly off guillotines, the elites *behaved*. There was an unspoken social contract: you do shit for us, and we let you be yourselves, whatever you do. Nowadays we have wonderful law, and nobody is responsible for anything, nobody is prosecuted, just fucking nothing. Time for pitchforks?


Don't make the mistake of idealizing the past. It took a decade of terrible winters and famine for the head of a single French king to be parted from his body. And it took one century more for a lasting Republic to be born.


We made the Norman nobles CEOs and gave them protection/removed responsibility for all of their actions. But we let them continue to see themselves as pure 'value extractors', extracting from workers/markets/economies and doing nothing else.

At least Norman lords had to nominally provide housing on their holdings and had to care, at least somewhat, that their serfs survived. CEOs don't even do that (they literally build models on lowest-wage zero-hour jobs that their labor can't actually live on, or move labor from one desperate overseas country to the next).

