Wikipedia:Reference desk/Archives/Computing/2017 February 24

Computing desk
Welcome to the Wikipedia Computing Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


February 24


The chip or the OS, the chicken or the egg


How can it be that an OS is compatible with so many architectures (for example, see the list of architectures supported by Linux)? Do the OS developers design it to be compatible, or do the chip manufacturers make chips that are compatible with OSes? Or is there a standard in between at which both chip manufacturers and OS developers aim?--Hofhof (talk) 15:06, 24 February 2017 (UTC)[reply]

The operating system is made compatible with the chip. Using Linux as an example, I have Linux running my computer here with a standard Intel CPU. I also have it running at home on a Sparc chip. I also had it running on the PS3. Each of those was Linux. I could run XFCE, open Chrome, and surf the web just fine. But I cannot run the Sparc version of Linux on my Intel box, and I cannot run the Intel version of Linux on my Sparc box. The OS is an abstraction of the hardware. It lets me write a program for Linux, and then Linux does the dirty work to make what I wrote work on the hardware. Of course, hardware has standards that must be followed. So they do engineer the hardware to work with the OS that works with those standards - but that is really looking at it a bit backwards. Since the beginning, the purpose of the OS has been to abstract the software developer from hardware specifics. 209.149.113.5 (talk) 15:39, 24 February 2017 (UTC)[reply]
We have a Wikipedia article on the subject, called Porting. It doesn't go very deep, but it's worth reading. Jahoe (talk) 15:51, 24 February 2017 (UTC)[reply]
There are examples of chip design being customized to higher-level software, though not on the GUI level, but on the high-level-language level. See High-level language computer architecture for more about that. I happen to be the proud owner of a (barely) functional Lisp machine, so this is something I find quite interesting. The IP's answer is a good one. ᛗᛁᛟᛚᚾᛁᚱPants Tell me all about it. 15:56, 24 February 2017 (UTC)[reply]
(ec) One of the big innovations of UNIX was that it was written (well, early-on rewritten) in C, a relatively portable language, with only very small critical parts in assembler. Thus, porting a UNIX or UNIXoid operating system (like Linux and the BSDs) to a new architecture is relatively easy. --Stephan Schulz (talk) 15:57, 24 February 2017 (UTC)[reply]
POSIX is an important article on that topic. If a program is POSIX-compliant, it will run on any POSIX-compliant OS, regardless of the underlying hardware ... in theory. 209.149.113.5 (talk) 17:12, 24 February 2017 (UTC)[reply]
Yes, but POSIX specifies the user-facing part of the OS. For porting, we need to port the machine-facing part. --Stephan Schulz (talk) 19:55, 24 February 2017 (UTC)[reply]
Is there a name for the standard that a machine has to meet to be able to run an OS or some other code written in C? Hofhof (talk) 01:24, 25 February 2017 (UTC)[reply]
No, it works the other way around. Specifications for hardware are released, and OSes are written to interface with the hardware according to those specifications. For instance, here are the Intel Software Developer Manuals for the x86 architecture. Some OSes are written to expect certain features of the underlying hardware, most commonly, these days, support for virtual memory and a memory management unit. For example, see the Linux kernel release notes. The whole point of high-level programming languages and operating systems is to abstract away the underlying hardware, so you can write a text editor or Web browser without having to write directly in machine language or assembler and manage all the hardware yourself. If you're interested in low-level hardware hacking, you'll probably need to read some books and/or take classes to get a firm grasp of all the concepts. We probably aren't going to be able to do a good job of teaching them all on the Ref Desk. --47.138.163.230 (talk) 03:34, 25 February 2017 (UTC)[reply]
The two can be done concurrently. The largest single contributor to the Linux kernel is now the Intel Corporation, contributing about 11% of new additions to the kernel.[1] (Runners-up: Red Hat, Linaro, Samsung.) I can't find an easy breakdown of those contributions, but it's safe to say that a large part of them are to better support Intel chipsets. ApLundell (talk) 16:05, 24 February 2017 (UTC)[reply]
(Incidentally, that document also debunks the myth that the Linux kernel is maintained by hobbyists. Only 16.4% of contributions were sponsored by "none" or "unknown". ApLundell (talk) 16:08, 24 February 2017 (UTC))[reply]
Statistics, harrummphh. The correct statistic would be, what percentage of content has been attributed to "none" or "unknown". Akld guy (talk) 19:59, 24 February 2017 (UTC)[reply]

Another aspect of this is the fact that when Intel plans a new CPU, one of the early steps before they make any actual chips is to use a supercomputer to (slowly) emulate a PC using the new CPU and have it boot to the BIOS screen. Then they see if it boots MS-DOS, followed by Linux, then Windows. IIRC, we are talking about boot times measured in months. And of course before a new version of Linux or Windows comes out it obviously gets tested on multiple CPUs. --Guy Macon (talk) 03:29, 25 February 2017 (UTC)[reply]

There's not an all-encompassing standard, nor is one needed, but there are tons of little standards that apply to the relevant bits. If the temperature reporting is done by the LM75 chip, say, then the kernel will have code to talk to that chip and to the bus controller chip it hangs on. The hardware interfaces of those chips are then the "standard." Asmrulz (talk) 17:30, 25 February 2017 (UTC)[reply]

As in all human endeavors, yes, they do try to meet one another halfway. CPU manufacturers maintain some degree of backwards compatibility (you can run XP on a Pentium II, III and 4, after all), and OSes on their part are written with portability in mind (some more than others). Ideally, porting an OS to some new architecture would involve hardly more than making a C compiler for it (or rather retargeting an existing compiler to support it) and rewriting the few machine-dependent bits in assembly. Asmrulz (talk) 17:30, 25 February 2017 (UTC)[reply]

Portable also means configurable; for example, you likely won't find an 8253 timer chip (or its emulation) in a non-Intel architecture. It's an ecosystem. The build process needs to be flexible enough to cover the ways in which one architecture differs from another apart from the machine language (which is taken care of by the compiler). Asmrulz (talk) 17:48, 25 February 2017 (UTC)[reply]