> ...we could have made something extremely minimal. Instead UEFI goes hard in the opposite direction...
My initial suspicion was that this was about preparing the ground for closed computing regardless of the surrounding hardware.
That this hasn't happened suggests it was just my imagination running wild, that it's a missed opportunity for (say) Microsoft, or that the folks behind it had good intentions. Occam's Razor, I guess?
TL;DR UEFI builds an open platform even if the actual code is closed, while "simple" alternatives make closed platforms with open source code.
Raw coreboot/U-Boot-like approaches give you open source but a closed platform: the simplicity means you need considerable resources to do something other than what the original maker intended.
UEFI (and before it, Microsoft's attempts at semi-standardizing PC low-level interfaces, the effort on ACPI, etc.) is an effort to provide an open platform regardless of the openness of the code or the availability of deep-dive docs for individual computer models, while handling the fact that computers are, in fact, complex.
If you want a general-purpose computer that explicitly targets the idea that its owner can just plug in CD/USB/netboot Windows/Linux/BSD installer media, without waiting for a new release just to have a bootable kernel on a new machine, there's a lot of inherent complexity. Especially if you want to be able to boot an OS version from before the release of the board you're using without significant loss of functionality (something that devicetree cannot do without special explicit work from the physical device vendor, but which ACPI handles through bytecode and _OSI checks for supported capability levels from the OS).
Especially if you also want to make it extensible and reduce the cost of integrating parts from different vendors (aka why UEFI with hardcoded CSM boot started taking over by 2005).
It's much easier to integrate a third-party driver, for example for a network chip, when the driver uses well-defined interfaces instead of hooking into the "boot BASIC from ROM" interrupt, especially when the driver can then expose its configuration in a standard format that works whether you have a monitor and mouse connected or just a serial port. Petitboot is not the answer: it's way worse when you have to custom-rebuild the system firmware to add drivers (possibly removing other drivers to make space) because you want to netboot from a network card from a different vendor, or just because the hardware is still good but the NIC is newer than the firmware. Much easier to just grab the driver from an option ROM or, worst case, drop it in a standardized firmware-accessible partition.
Did I mention how much easier handling booting with UEFI is compared to the unholy mess of most other systems? Yes, even GRUB on x86, which by default doesn't write standard-compliant boot code, so if you dual boot and use certain software packages you end up with a nonbootable system. Or how many Linux installers and guides make partitions that only boot because of bug-compatibility in many BIOSes. Not to mention messing with boot sectors vs. "if you drop a compatible filesystem with a file named this way, it will be used for booting".
If I want to play around with booting a late-1960s design where you need to patch binaries if you change something in hardware, I can boot a PDP-10 emulator instead. I push for using UEFI properly because I have work to do and goals to achieve other than tinkering with booting, no matter how much I like tinkering in general.
> Did I mention how much easier handling booting with UEFI is compared to unholy mess of most other systems?
Yeah. Like Linux entries getting ignored, with no easy way to debug what went wrong (if an EFI executable fails, you're on your own). A shell which is undocumented. With BIOS I didn't spend hours trying to boot a Linux kernel.
Unless you encountered a BIOS that actually followed the spec instead of blindly loading a sector and hoping for the best, and learned that your typical Linux install programs made a malformed MBR that doesn't mark the drive as bootable, so a spec-following BIOS actually should skip it when booting.
Or when GRUB suddenly stops working on a dual-boot system, and after you fix it another part of the non-Linux system fails, and if you fix that, GRUB fails (and fails in all cases hard enough that you need to boot from something else). After some time you end up dropping either GRUB or $some_important_software; poor you if the latter is something you rely on to make a living. All because, surprise, you're not supposed to write bootloader code into cylinder 0 outside of the first sector, and various signature systems used in licensing use the no-man's-land of cylinder 0 to scribble signatures.
BIOS offered even fewer debug tools than (U)EFI, so dunno what you're talking about. The UEFI shell is documented, if not as well as it could be, and even has built-in help. That you can, worst case, boot it (or any other tools) from a simple FAT32-formatted pendrive that can be created with a file manager instead of rawdogging the hard disk is just a bonus.
The cases of Linux entries being ignored were plain bugs by specific manufacturers, bugs that also hit Windows (because the vendors in question worried that their not checking properly would backfire, and instead of checking the firmware properly hardcoded a few tested bootvar names).
And the ignored entries could be resolved by using the standard fallback bootloader paths, or by renaming the Linux entries to the same names as ones that work.
> That you can, worst case, boot it (or any other tools) from a simple FAT32-formatted pendrive that can be created with a file manager instead of rawdogging the hard disk is just a bonus.
There was a wonderful, brief period of time where most systems were UEFI out of the box, and the largest file on a Windows ISO was under 4 GiB.
You could literally drag+drop the contents of the ISO to a FAT32 thumb drive and install Windows with it. You didn't have to erase it first. You didn't need a special app or the command line to pull it off. It just worked.
The WIM files are too big now. At least Rufus is a pretty good utility.