I disagree with the author. C is an amazing language but its "lack" of an ABI is, in some ways, a byproduct of the language itself. C essentially sits half a layer above assembly and is meant to give the programmer fine-grained control over the processor and memory without requiring intimate knowledge of x86/ARM.
This is why pure C projects can turn into unmanageable behemoths. This is also why the best way to use C is to implement core algorithms and data structures that can then be called by higher-level languages. Numpy/Scipy did this perfectly, and their use is now ubiquitous within the Python community.
Most software engineers I know who have a background in EE love C, simply because it maps very well to what a processor actually does during execution.
> the best way to use C is to use it to implement core algorithms and data structures
is this a joke. you literally cannot write containers in C unless you commit to heap-allocating everything and storing it as void*
> Most software engineers I know who have a background in EE love C, simply because it maps very well to what a processor actually does during execution.
lol. no it absolutely does not. i have a B.S. CpE and have actually built simple processors. the C execution model has nothing to do with how silicon operates, and modern silicon in particular goes to absurd lengths to put up a façade that c programs can use to pretend they're still on a pdp-11 while the processor goes and does other things.
easy example: here's a memory address. what happens when you try to read from it
>> Most software engineers I know who have a background in EE love C, simply because it maps very well to what a processor actually does during execution.
>lol. no it absolutely does not. i have a B.S. CpE and have actually built simple processors. the C execution model has nothing to do with how silicon operates, and modern silicon in particular goes to absurd lengths to put up a façade that c programs can use to pretend they're still on a pdp-11 while the processor goes and does other things.
I'm an EE who's been writing bare-metal firmware for over ten years, and who's helped develop memory subsystems for microcontrollers. What you're saying is certainly true for PC CPUs, but the C execution model works just fine for a Cortex-M or other low-end CPU. No "absurd lengths" are needed; there's a clear relationship between what the C says and what the hardware does:
>easy example: here's a memory address. what happens when you try to read from it
The CPU puts the address on the address lines and sets some control signals appropriately, and the SRAM returns the value at that address on the data lines. (Simplifying, obviously; I'm not going to go dig up an AHB spec.)
Cache? What cache? SRAM reads are single-cycle. The flash memory probably has some caching, but that's in the flash subsystem, not the CPU. And a lot of your most important reads and writes will be to memory-mapped registers, which had better not be cached!
No, C does not express the details of the instruction pipeline or complex memory subsystems directly in the language. Neither does assembly. C also does not cover every CPU instruction -- that wouldn't be portable at all. That's what inline assembly and compiler intrinsics are for.
C strikes a balance between portability and closeness to the hardware while remaining a small language. It does this very well, which is why it has historically been so popular, and still is for some purposes. Not all software is CPU-limited data processing on a 64-bit server.
Also the "PDP-11 facade" is needed so that the thing can be programmed in assembly language without the board support people doing bring-up tearing their hair out. A sane assembly language that you can read instruction by instruction to understand the abstract effect is necessary for more than serving as a C compiler target.
Runs just about every computer from the tiniest microcontroller to the largest supercomputer. Has been doing so for 50 years, despite a constant parade of miracle languages that were going to replace it Real Soon Now.
When Miracle Language of the Day actually does what C does, across the same variety of hardware, I will be the first to congratulate it and its designers.
But I don't think that's going to happen any time soon.
It runs on all hardware because hardware manufacturers support it, which they essentially must do because that's what's expected. It's a self-fulfilling prophecy (and arguably a vicious cycle).
Or, it’s a virtuous cycle of people recognizing a valuable tool and giving back to its ecosystem by developing for it (even if that’s not intended–it’s a second-order effect).
It can also be seen on another axis: organic growth versus central planning. Each new miracle language tries to be a centrally planned, all-encompassing solution, which isn't possible unless the entire rest of the industry stops and waits for it all to be made to work.
It's "organic" because, as this article is pointing out, creating alternatives is really difficult due to this exact lock-in, both at the OS level and at the hardware-vendor level.
You shouldn't need a "miracle language" or even an "all-encompassing solution" to have a chance to break free of this.
Sure. You can re-implement everything starting with the kernel, as long as you don't have to interface with any of the C firmware on the hardware itself. And, yeah, people are doing this, for instance with Redox OS.
But if you actually want to program something usable in conjunction with existing software, such as Linux, you need to use the C ABI. There is no alternative.
> Most software engineers I know who have a background in EE love C, simply because it maps very well to what a processor actually does during execution.
I agree with this. One thing to note though: C maps onto small CPUs pretty well. It doesn't map directly onto the x86_64 execution model at all.
Today's fast CPUs are very different from the abstract machine that C represents. They run all kinds of things out of order, do speculative execution, run instructions in parallel, engage in branch prediction, etc. If anything, the C execution model is almost like a little VM that gets mapped onto the core that's running it.