Talk:Binary-code compatibility


Binary Compatible Operating Systems


Thought I would do a little section talking about Windows~ReactOS and OS X~Darwin; any help with Unix vs. Linux or anything would be appreciated. =) Linux is binary-compatible across all distros as far as I know; it's just different package managers. Dava4444 (talk) 19:40, 14 November 2009 (UTC)Reply

Liked the rework of my bit in the article :) BUT Darwin is IMHO a little closer to Linux than the other Unix OSes; the reason OS X can't run Linux apps natively is that Apple designed it that way by adding Cocoa and Carbon. GNOME would have been fine. Dava4444 (talk) 23:37, 18 February 2010 (UTC)Reply

Requested move

  1. Hyphenated "Binary-compatibility" is incorrect English (grammar)
  2. Binary compatibility should contain the contents of this article instead of redirecting to Application binary interface

--Jerome Potts (talk) 06:23, 11 October 2008 (UTC)Reply

The second problem was easily solved by redirecting Binary compatibility to Binary-compatibility… why was that part of the move proposal? — Twas Now ( talk · contribs · e-mail ) 08:33, 15 October 2008 (UTC)Reply
Because there might have been a strong reason why Binary compatibility currently redirects to Application binary interface, as described in Wikipedia:Requested moves. I couldn't perform the move myself because the target page already exists AND I don't have admin privileges; that is why I followed the instructions. But now that you mention it, by changing the redirect of the target page to that of the source, perhaps the move (a swap, in this case) would work without complaining, even for me. I think I'll try that. --Jerome Potts (talk) 03:32, 16 October 2008 (UTC)Reply
Ah, I see the first part has already been done. OK, I rebaptised the whole thing "binary code compatibility". There. Thanks, though. --Jerome Potts (talk) 03:37, 16 October 2008 (UTC)Reply

Very weak definition


"a computer that can run the same binary code intended to be run on another computer is said to be binary-compatible."

Is that supposed to mean something? E.g., if I have one Windows 95 PC over here, and copy a program "minesweeper.exe" over to another Windows 95 PC sitting over there, then this "computer" is "binary compatible" (or this program?)? Perhaps that's just the degenerate case (and thus not terribly illuminating). Hopefully someone can give a better shot at this? Of course, I say that as an information consumer and not a producer on this particular topic.
In my mind, if you consider the stack (from the ground up) to be: hardware => firmware => { kernel + OS } => applications, then "Binary compatibility" is typically something people strive for at the application level, and is therefore implemented at or below the OS. And therefore the OS would make the claim of "binary compatibility" (not the application... though, I guess that's also possible...e.g., Universal Binary is another approach from the opposite direction). Anyway, hopefully someone can improve upon what's currently here... I'd be interested in reading this article once it's written. :-) Michael (talk|contrib) 04:15, 1 July 2009 (UTC)Reply
It's true that this article isn't of a very high quality, but I think the definition is well written; it's written in the same style as definitions which may be found in a theory of computation textbook (although binary compatibility doesn't fall under the theory of computation umbrella). The term "computer" is general enough to cover the appropriate scenarios. You seem to be digesting the word in the commercial electronics consumer sense, not the theoretical sense. In fact, I think the definition is the strongest part of the article. It's best to leave it as is, because there is no way to universally and objectively classify components of the stack, because you run into questions like what, exactly, defines an operating system?
In your example, yes, it can be said that the two computers are binary-compatible with one another. You go on to question whether it should be said that the application is binary-compatible, versus the operating system. I don't understand this part, because you introduced the distinction yourself in the example. See what I wrote above.-- C. A. Russell (talk) 22:04, 10 September 2009 (UTC)Reply

Mac OS X used emulation or even dynamic translation? Not virtualization(?)


Took out: "For example, Mac OS X on the PowerPC had the ability to run MacOS 9 and earlier application software through Classic—but this did not make OS X a binary compatible operating system with MacOS 9. Instead, the Classic environment was actually running MacOS 9 in a virtual machine, running as a normal process inside of the OS X operating system." There is already an example there that I think is correct. Make sure it's correct if you want to "revert" it in some way [1]. comp.arch (talk) 16:18, 29 October 2013 (UTC)Reply

The sentences you quote describe a process of executing PowerPC binary code on a PowerPC processor, so there's no translation involved. What's involved is having calls to OS 9 routines call code that runs on top of OS X rather than calling an OS (OS 9) running on the bare hardware. Classic isn't supported on Intel processors. Guy Harris (talk) 17:34, 29 October 2013 (UTC)Reply
And emulation, in the sense of Windows Virtual PC for Mac, which emulated an x86 PC on a PowerPC Mac, is another example of the same sort of thing - allowing a full operating system to run under a host operating system, rather than allowing applications for that other operating system to run on the host operating system. I expanded that section to include emulation, put the Classic environment example back, and gave some more examples. Guy Harris (talk) 23:46, 30 October 2013 (UTC)Reply
OK, there are now two references for Classic being a VM (making it the most referenced part of the entire article!). At this point, I see no reason to believe that claim will ever be proven wrong. Guy Harris (talk) 22:44, 5 November 2013 (UTC)Reply

Move?

The following discussion is an archived discussion of the proposal. Please do not modify it. Subsequent comments should be made in a new section on the talk page. No further edits should be made to this section.

The result of the proposal was no consensus. --BDD (talk) 22:55, 2 December 2013 (UTC)Reply

I don't really have anything against "binary compatibility" except that then piping needs to be used and people would still use [[binary compatibility|binary compatible]]. While "binary code compatibility" (and source code compatibility) is true and more descriptive, I should just cancel this request (don't know how). I just thought the page should start with the name of the page (and what people really use; might be just us geeks). It didn't even start with either term; it was "..said to be binary-compatible", and Source code compatible starts with "..said to be source-compatible". It seems pages might have been renamed to change "compatible" to "compatibility" and "code" added. Not sure if a hyphen should be used, or which of the above (with or without hyphen) is most used (don't know how to search for what is used after the pipe); it seems to be "binary compatible" [Special:WhatLinksHere/Binary_code_compatibility]. Like I said, might not matter what us geeks do. But it seems awkward to add "code" in "Binary compatible operating systems" (and see here: Source port, Source upgrade). comp.arch (talk) 09:36, 14 November 2013 (UTC)Reply
And if it were titled "binary compatible", people would use [[binary compatible|binary compatibility]]. Guy Harris (talk) 20:01, 14 November 2013 (UTC)Reply
Yes, except I thought that is more rare. I'm not saying we shouldn't use "binary compatibility" if that is the policy. Do you like it better with "code" as it is or that way? comp.arch (talk) 23:33, 14 November 2013 (UTC)Reply

Consider this request cancelled (if these numbers are correct). I was used to "binary compatible", but with "code" it actually seems to be more common. I Googled: 463,000 "binary code compatible"
333,000 "binary code compatibility"
271,000 "binary compatible"
293,000 "binary compatibility"
104,000 "object code compatibility"

5,840,000 "source code compatible"
319,000 "source code compatibility"
123,000 "source compatible"
73,800 "source compatibility"
"compatible" is slightly more common though (and lots more so for source code; I would want to use "compatibility" for both, or at least keep them in sync). comp.arch (talk) 11:12, 15 November 2013 (UTC)Reply

Actually this gives the opposite: "code" is much less used. Interesting trivia: binary compatibility is a rapidly less-discussed topic: [binary ngram] and [source ngram]. comp.arch (talk) 11:44, 15 November 2013 (UTC)Reply

The above discussion is preserved as an archive of the proposal. Please do not modify it. Subsequent comments should be made in a new section on this talk page. No further edits should be made to this section.

Related/possible ref


See: [Kinds of Compatibility: Source, Binary, and Behavioral] for "Behavioral compatibility". comp.arch (talk) 11:27, 15 November 2013 (UTC)Reply

"More-or-less compatible" taken out


"CPU emulator or a faster dynamic translation mechanism is sometimes provided that makes them more-or-less compatible." I wrote "more-or-less" explicitly because emulators are usually "compatible" but slower. In some ways they are then not compatible. I was going to write a lot more to explain these things but haven't gotten around to it yet. Exact timing of individual instructions is usually not preserved either, even if the host computer is faster and could emulate without slowdown. But then again, a faster computer, even without emulation, could pose a problem for old programs that demand a certain speed and no faster. Are they compatible? :) People do not program like that any more. See emulators for old consoles or PCs, and old games. comp.arch (talk) 09:58, 20 March 2014 (UTC)Reply

I assume you're referring to this change, which changed "As the job of an operating system is to run actual programs, the instruction set architectures running the operating systems have to be the same, compatible or else an CPU emulator or a faster dynamic translation mechanism is sometimes provided that makes them more-or-less compatible." to "Otherwise, programs can be employed within an CPU emulator or a faster dynamic translation mechanism to make them compatible.", i.e. "more-or-less-compatible" was changed to "compatible".
When you say
But then again a faster computer, even without emulation could pose a problem for old programs that demand a certain speed and not faster. Are they compatible? :)
the answer to the question is "yes". Binary compatibility generally does not mean "identical down to the instruction timings"; for example, IBM's System/360 documentation, back in the 1960's, explicitly said that the binary compatibility between models of S/360 did not mean that code would run at the same speed on different models, just that it would give the same behavior on those models (modulo some optional features, e.g. a program using floating-point instructions would behave differently on machines with and without the floating-point option, and a program making unaligned data references would behave differently on machines with and without the unaligned access option).
So a number of people didn't program like that even back in the 1960's, unless they deliberately or accidentally prevented their employer from upgrading from a 360/30 to a 360/40. :-)
Games that depend on a particular CPU speed are a very special case - and, as you say, people don't, in general, write games that depend on a particular CPU speed any more, as even a given generation of CPUs comes in a variety of clock rates. I don't consider an emulator being slower - or faster! - than another CPU to break binary compatibility any more than a 2.6 GHz Core i5 processor being faster than a 2.4 GHz Core i5 processor breaks binary compatibility. Guy Harris (talk) 16:58, 20 March 2014 (UTC)Reply
We are in violent agreement here, except that IBM may have defined compatibility in this way at that point (and it could be sourced; I didn't know this), but I'm pretty sure they said no such thing for the original IBM PC. As you know, back then there were "compatibles". Mostly faster. I remember the "turbo" button (to slow them down). You could say it's just my opinion (or IBM's) that only behaviour, not timing, is included in the definition of compatibility. I'm not sure there is any universal agreement on that. We could drop "more-or-less compatible" and just say compatible, but add an explanation sometime (no hurry) for this "exception" (as I always meant to do). Running faster could be a problem, though it usually isn't now, but slower can be a real problem (for real-time, e.g. games). comp.arch (talk) 20:46, 20 March 2014 (UTC)Reply
IBM most definitely defined compatibility in that way; see, for example, the "Compatibility" section on page 5 of IBM System/360 Principles of Operation.
And, no, they said nothing about "compatibility" for the original IBM PC, because there was no "compatibility" to discuss - IBM had the IBM Personal Computer, and other people had their personal computers, which weren't necessarily compatible. "Compatibility" was something other vendors had to advertise.
The only issue I see with emulators for binary compatibility is if the emulator is so much slower as to be impractical; small speed differences are no more of a compatibility issue than are small clock rate differences between machines using the same processor chip (same chip, not just same instruction set). Guy Harris (talk) 21:24, 20 March 2014 (UTC)Reply
You're right, even the same exact chip could be "incompatible" with itself if run at a slower clock frequency. Seems nonsensical; I seem to be painting myself into a corner. However, I still maintain this for systems. I *feel* compatibility has to be judged by expected behaviour, and when it runs slower, drops frames let's say, it's no longer fully compatible in some sense. Maybe I can get away with writing something to that effect in the page in some clarification? Clock rate (only) has not been getting lower on purpose that I know of, so this seems academic; it mostly applies to emulators. comp.arch (talk) 09:45, 24 March 2014 (UTC)Reply
"program making unaligned data references would behave differently on machines [..] without". How convenient that you can just define compatibility away and put the burden on the programmer to code properly. :) Reminds me of a similar issue on ARM. We could define "compatibility" and then strict compatibility. In more ways than one. In addition, see the source (didn't read it; concurrency is another issue) from above: "binary-preserving" and "A tricky program could even make decisions based on information like a timing side channel." And the article is not sourced well. I intended to write just what I know and explain. Borderline WP:OR if I make up definitions, but I'm sure all these issues are sourceable. I believe all these issues belong here in the details, as this is an important and interesting issue (and a can of worms to get "right"). Since I didn't get around to it, I'm OK with leaving the confusing "more-or-less" language out (or not). comp.arch (talk) 20:55, 20 March 2014 (UTC)Reply
Yes, that's how IBM defined it; as the aforementioned "Compatibility" section said, "The systems facilities used by a program should be the same in each case. Thus, the optional CPU features and the storage capacity, as well as the quantity, type, and priority of I/O equipment, should be equivalent." "Storage capacity" was probably relevant because they didn't have virtual memory, so a program requiring 12K of data might run on a machine with 16K of memory but not run at all on a machine with 8K of memory. I guess they mentioned the I/O equipment because, for example, a program that reads data from a tape, sorts it, and writes out a new tape won't work very well on a machine that just has a card reader, card punch, and printer, and perhaps because they didn't want to commit to all OSes supporting some level of device independence. The "optional CPU features" covers alignment issues, floating point, decimal arithmetic, and the like; the first two of those were also issues for, say, the Motorola 68000 series, as the Motorola 68000 and the Motorola 68010 required 2-byte alignment for 2-byte and 4-byte integral operands, but the Motorola 68020 and later didn't, and, until the Motorola 68040, the floating-point unit was a separate chip and not all 68K-based machines in a given line of systems would necessarily have it. (And S/3x0 and 68k shared another such issue. S/360 was a 32-bit architecture but only the lower 24 bits of addresses were used, so people could stuff data in the upper byte of a pointer; S/370-XA added a 31-bit-addressing mode, so that old programs could run without blowing up. The original 68000 and the 68010 also only used the lower 24 bits of addresses; the 68020 used all of them and didn't provide a 24-bit mode, and some Mac programs did blow up.) Guy Harris (talk) 21:24, 20 March 2014 (UTC)Reply
And, yes, there's often been a burden on the programmer (or compiler writer) to code properly. Not using instruction bit patterns not explicitly specified to have a given behavior, for example, might have unintended effects on processors not checking for them, but might crash - or implement new instructions - on other processors. (The IBM 7094 needed a compatibility mode to support programs written for the IBM 7090 that assumed that, if you had more than one bit set in the index register field, the index registers specified by the bits were ORed together. With the mode off, the index register field selected the index register whose number was in the field.) Guy Harris (talk) 21:54, 20 March 2014 (UTC)Reply

Requested move 19 August 2019

The following is a closed discussion of a requested move. Please do not modify it. Subsequent comments should be made in a new section on the talk page. Editors desiring to contest the closing decision should consider a move review after discussing it on the closer's talk page. No further edits should be made to this discussion.

The result of the move request was: no consensus. Despite being open for a month, this discussion has attracted little participation. At this time, there is no consensus that the current title is problematic. Another RM advancing a specific target title or bringing new evidence to bear may be more successful at eliciting discussion. (non-admin closure) Colin M (talk) 19:43, 18 September 2019 (UTC)Reply



Binary-code compatibility → ? – The same issues apply to decimal computers, and there have been emulators for, e.g., the IBM 1401, 7070 and 7080. -- Shmuel (Seymour J.) Metz Username:Chatul (talk) 14:24, 19 August 2019 (UTC) --Relisting. — Newslinger talk 09:40, 27 August 2019 (UTC)Reply

"Executable code compatibility"? That emphasizes the code being executed by some combination of hardware/firmware/software, rather than how it's encoded, although people do refer to "binary" code, in contrast to "source" code.
On the other hand, those decimal computers used binary-coded decimal, so it's still binary code in a sense? Guy Harris (talk) 16:58, 19 August 2019 (UTC)Reply

The above discussion is preserved as an archive of a requested move. Please do not modify it. Subsequent comments should be made in a new section on this talk page or in a move review. No further edits should be made to this section.

Lead too discursive


For my money, the current lead is a slog to wade through, because it's almost as top heavy as the Vasa, larded with parentheticals that add to the general buzzword soup, but provide little value otherwise. — MaxEnt 00:57, 26 November 2021 (UTC)Reply