Talk:Microprocessor/Archive 1


Regarding the merging of the two articles

re: Suggestion to merge CPU and Microprocessor

I've done some preliminary work on this user subpage of mine. Work is still ongoing, though, and it's far from complete.

Anyone care to load it up, tell me what you think, maybe make a few changes to it? Leave any comments on my user talk. Thanks.

Also, if we do go ahead with the merger, we'll need to decide which page to keep and which to change to a #redirect.

splintax 14:47, 8 September 2005 (UTC)

I personally oppose merging CPU with microprocessor. CPUs existed before ICs. Ancheta Wis 16:36, 11 September 2005 (UTC)
I agree with Ancheta Wis. Not only did CPUs exist before ICs, but to this day there are still some CPUs that are not microprocessors, though they are usually only found in rather specialized applications. Furthermore, it is now common for a microprocessor to contain more than just a CPU, e.g., memory controllers, simple peripherals. --Brouhaha 01:08, 12 September 2005 (UTC)
Okay then, I've abandoned the project. I didn't think it was all that great an idea myself, but there seemed to be a fair bit of support for it around here. Perhaps the warning should be removed? — Preceding unsigned comment added by Splintax (talkcontribs) 14:54, September 12, 2005 (UTC)


In the spirit of "less talk, more work" I have taken upon myself the ambitious task of making the CPU article not suck. I hope that when I'm done you will see no need to merge the two articles. I believe that you will also be able to condense this article a bit. I'm not finished yet, but I've made some definite progress. I still need to write the largest section regarding CPU implementation (both historically and modern considerations, which will be brief since we have a pretty decent CPU design article). I encourage you all to look over my edits and offer any advice/help that you can. I'm also currently hunting down some good images to use in the article (I've located some, but I need the authors' consent to upload them). -- uberpenguin 23:31, 25 September 2005 (UTC)

Some parts of this article are also similar to integrated circuit. — Preceding unsigned comment added by Twilight Realm (talkcontribs) 01:19, September 27, 2005 (UTC)

NPOV

"As with many advances in technology, the microprocessor was an idea whose time had come." This isn't the right style of writing, and it isn't neutral. I haven't read much of the article, so there may be more. Also, a grammar error, right after the explanation of Moore's Law. I would normally do it, but... not now. Too tired, I'd mess it up.Twilight Realm 01:22, 27 September 2005 (UTC)

Difference from IC

This article, along with the articles for integrated circuit, CPU, etc., doesn't make it clear what's different about them. How exactly is a microprocessor different from an IC, other than "microprocessors are very advanced integrated circuits"? Twilight Realm 01:46, September 27, 2005 (UTC)

Um.. Well you just said the difference... One is a specific and notable type of the other. Microprocessors ARE, in general, advanced and complex integrated circuits whose purpose is that of a CPU. -- uberpenguin 00:50, 29 September 2005 (UTC)

Are there any criteria defining microchips? Or is it just a subjective term? The IC article says "For the first time it became possible to fabricate a CPU or even an entire microprocessor on a single integrated circuit." To me at least, that sounds like there's a specific level an IC must pass to be considered a microprocessor, or that a microprocessor contains some components that an IC doesn't necessarily have to have. Clarification would be appreciated, even though it's not in this article. Twilight Realm 01:16, 30 September 2005 (UTC)

A microchip is really just an informal term for an integrated circuit; they are the same thing. "aka microchip", as it says in the article. I agree about the sentence quoted not being correct; I have changed it slightly. Alf Boggis (talk) 09:29, 30 September 2005 (UTC)
Be careful not to confuse microchips and microprocessors. Microprocessor almost ALWAYS means a CPU (Von Neumann-like) that is fabricated on one or more ICs. "Microchip" is usually just another term for IC. Something like a DSP/DAC could be fabricated on a microchip (or IC), but it would not likely be considered a microprocessor. -- uberpenguin 22:20, 1 October 2005 (UTC)


Apologies for no summary

Sorry that I didn't use an edit summary on my last edit, but I clicked on save page instead of minor edit. It was just reverting vandalism, though. --Apyule 12:36, 2 November 2005 (UTC)

Removed "History of 64-bit support" section

This section is totally misplaced here. Not only does it have NOTHING to do with microprocessors, but it was TOTALLY wrong. Linux support for 64 bit microprocessors dates back to the Alpha and MIPS ports (LONG before x86-64). Windows support also dates back to NT 3's Alpha and MIPS R4xxx ports. Likewise, Mac OSX's blood relatives Darwin, Mach, and L4 all ran on 64-bit microprocessors before OSX was compiled for PowerPC64.

The section was REALLY 'history of OS support for x86-64,' which is already included in the AMD64 article (in much more complete form). -- uberpenguin 02:38, 18 December 2005 (UTC)

"In 64-bit computing, the DEC(-Intel) ALPHA, the AMD 64, and the HP-Intel Itanium are the most popular designs as of late 2004." Was that really true? And, if so, is it still true? Or is "popular" defined as something other than "most common"? I suspect there might be more 64-bit SPARC machines and 64-bit POWER/PowerPC machines (especially if you include AS/400 and iSeries PowerAS) than Alpha machines, much less Itanium machines. Guy Harris 19:06, 24 December 2005 (UTC)

Yeah, it's a spurious claim for sure. It would be far better to say that they are all popular designs (ALPHA isn't all-caps either...) -- uberpenguin 19:41, 24 December 2005 (UTC)

Intel Pentium D

Hey, didn't you forget the Intel Pentium D (dual core) processors? I feel the performance of Intel is way better than DEC, AMD, and the others. — Preceding unsigned comment added by 167.206.128.33 (talk) 00:52, January 26, 2006 (UTC)

I think there should be more links to processor architecture from this page: Von Neumann, Harvard, DIB, etc. — Preceding unsigned comment added by 167.206.128.33 (talk) 00:53, January 26, 2006 (UTC)


OK, so make it
In 64-bit computing, DEC Alpha, AMD64/EM64T, MIPS, SPARC, PowerPC/IBM POWER, and HP-Intel Itanium are all popular designs.
to include both AMD and Intel variations of that instruction set architecture. Guy Harris 01:50, 26 January 2006 (UTC)

First commercial RISC architecture

I'm not sure if MIPS was really the first here. ARM was made in (working!) silicon (ARM1) on April 26, 1985; the first products were sold in 1986 (exact date missing; the "ARM Development System", a second-processor card for the BBC Micro), and the first workstations were released in June 1987 (Acorn Archimedes). But I don't know when the first working MIPS silicon was made (I find 1985-1987 on the web; mips.com says nothing), what the first MIPS-based products were, or when they were released. Some of the early products I know of are the DECstation 2100 (1989), SGI Indigo (1990), and MIPS Magnum 3000 (1990). Another candidate would be the IBM ROMP; the first workstation was released in 1986 (exact date missing), and other products before that are unlikely. - Alureiter 16:02, 7 February 2006 (UTC)

Missing and bloated sections.

The first paragraph tells me what a microprocessor is made of but doesn't tell me what it does. I would like to see a succinct sentence about what a microprocessor actually does (execute instructions, for example), and then perhaps explain it a bit more in a section farther down in the article.

Then at the end of the article there are three screens full of lists of various stuff. On Wikipedia it's easy to allow lists to get out of control and lose sight of what makes a thorough, balanced article. And complete doesn't mean we have to make a list of every possible internal and external link that might be somehow related!

So, tell me what the thing does and judiciously select a very few closely related links that might also be helpful. JonHarder 22:10, 16 July 2006 (UTC)

Well this again brings up the issue I raised months back (see the top of this talk page). Is a microprocessor necessarily a CPU? Can microprocessors be non-programmable? If that's the case, does a microprocessor have to execute "instructions" in the CPU sense? It's a big mess, just like the state of this article. Big issues about simple terminology have to be resolved before this article can see the sweeping changes it needs. -- uberpenguin @ 2006-07-16 23:01Z
I don't think it's too bad. The intro describes the function of the part (a microprocessor is a part), and links to the CPU article. If the readers read the CPU article as well, they'll have a better idea of what a microprocessor does. However, it does feel like there's a paragraph missing there - something that explains how microprocessors "made possible the advent of the microcomputer," a role previously filled by several parts. We should describe how they were able to do that. MFNickster 05:50, 17 July 2006 (UTC)
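For what it's worth, the "what it does" asked for at the top of this thread boils down to the fetch-decode-execute cycle. Below is a toy sketch in Python; the three-instruction machine is entirely invented for illustration and is not anything from the article:

  # Minimal fetch-decode-execute loop: the core of what any processor does.
  program = [
      ("LOAD", 7),   # acc = 7
      ("ADD", 5),    # acc = acc + 5
      ("HALT", 0),   # stop and return the result
  ]

  def run(program):
      acc = 0   # accumulator register
      pc = 0    # program counter
      while True:
          op, arg = program[pc]   # fetch the next instruction...
          pc += 1                 # ...and advance the program counter
          if op == "LOAD":        # decode and execute
              acc = arg
          elif op == "ADD":
              acc += arg
          elif op == "HALT":
              return acc

  print(run(program))   # prints 12

A real microprocessor does the same loop in hardware, fetching the instructions from memory over its bus.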

Moore's Law

WRT the reverting to #transistors doubling every 18 months: I initially thought that this was wrong, but on checking the article, even though 18 months is oft quoted, 24 seems to fit the data much better. Also, from Moore's law:

"In 1975, Moore projected a doubling only every two years. He is adamant that he himself never said "every 18 months", but that is how it has been quoted. The SEMATECH roadmap follows a 24 month cycle."

I think the best thing may be to change the 18 at the top of the Moore's Law article to 24, and re-revert the change here. Comments? --Mike Van Emmerik 22:42, 27 February 2006 (UTC)

I don't think that's a problem, as long as it is consistent with the other article. The "law" itself is not very rigid, as the article on it makes clear - the time period and the meaning of "complexity" can vary depending on which trends you look at. It might be enough to simply make note in this article that the complexity of integrated circuits and number of transistors on microprocessors have increased over time (while cost has stayed relatively flat) and simply link to the "Moore's Law" article. MFNickster 02:05, 28 February 2006 (UTC)
Whatever the actual statistics are, the formulation of Moore's Law is that the number of transistors doubles every 18 months. So readers of this article are presented with a definition of the law that's directly contradicted by the article on Moore's Law. We should get this straight. --Mr random 20:20, 8 August 2006 (UTC)
You should have corrected the Moore's Law article instead - Moore said 2 years, not 18 months. The first line of that article is factually incorrect. MFNickster 22:34, 8 August 2006 (UTC)
I have corrected both the Moore's Law entry and this one. The fact that the intro paragraph of the Moore's Law article contradicted the direct quotation of Moore that immediately followed, and referenced the interview it came from, and went against information that was established on the Talk page, yet still managed to endure for over a month is discouraging. — Aluvus t/c 02:20, 9 August 2006 (UTC)
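To put some numbers on the 18-versus-24-month disagreement above, here is a quick back-of-the-envelope check in Python; the starting count of 10,000 transistors and the 10-year window are arbitrary assumptions of mine, not figures from the cited sources:

  # Compare how far the two readings of Moore's law diverge over a decade.
  n0 = 10_000   # arbitrary starting transistor count
  years = 10

  for months in (18, 24):
      doublings = years * 12 / months
      print(f"{months}-month doubling: {n0 * 2 ** doublings:,.0f} transistors")

  # 18-month doubling: ~1,016,000 transistors (about 101.6x)
  # 24-month doubling: 320,000 transistors (exactly 32x)

So over ten years the two formulations differ by more than a factor of three, which is why getting the period consistent across both articles matters.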

First or second?

From text:
The world's first single-chip 32-bit microprocessor was the AT&T Bell Labs BELLMAC-32A, with first samples in 1980 and general production in 1982 (...)
but a few lines later:
The most famous of the 32-bit designs is the MC68000, introduced in 1979.
So, which one is right? Was it the BELLMAC-32A or the MC68k?


Alejandro Matos 14:47, 20 November 2006 (UTC)

Depends whether you're measuring word size or address bus lines. The 68000 had a 32-bit word size but a 16-bit address bus, and it wasn't until 1984 that the 68020 was introduced with a 32-bit address bus. MFNickster 17:09, 20 November 2006 (UTC)
That turns out not to be the case. Look at the Motorola databooks for the 68000: it had 24 address lines and 16 data lines. --Wtshymanski 18:50, 20 November 2006 (UTC)
Quite right. 24-bit address space, but the address registers were 32 bits wide. I was thinking 16-bit data bus, but my fingers typed "address bus." I think the way the article elaborates on this after the "MC68000 introduced in 1979" is enough to explain why the 68K was a 32-bit processor, though in some ways not a "full" 32-bit processor. MFNickster 02:24, 21 November 2006 (UTC)
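To illustrate the distinction just settled above (32-bit address registers, but only 24 bits of address brought out of the package), here is a small Python sketch; the masking shown is the well-known consequence of the 68000 not decoding the top byte of an address:

  # The 68000's address registers are 32 bits wide, but the chip only
  # brings 24 bits of address out to its pins, so the high byte of a
  # pointer is ignored by the bus.
  ADDRESS_LINES = 24
  MASK = (1 << ADDRESS_LINES) - 1   # 0x00FFFFFF

  def external_address(register_value):
      # The physical address the bus sees for a 32-bit register value.
      return register_value & MASK

  print(hex(external_address(0x00012345)))   # 0x12345
  print(hex(external_address(0xFF012345)))   # 0x12345 -- same location

(Early Macintosh software famously stashed flags in that unused high byte, which is what broke when the 68020 brought a full 32-bit address bus.)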

I suggest a link to my site called 'How Computers Work: Processor and Main Memory' at http://www.fastchip.net/howcomputerswork/p1.html . It tells how a processor and memory work simply and in COMPLETE DETAIL. A microprocessor is a processor on a single chip. It is not to replace the 'How Stuff Works' link but to complement it. If you understand this book/site, you will understand PRECISELY what a microprocessor and its main parts are and how they work together. Thinkorrr 01:09, 4 December 2006 (UTC)

μP/μC Patent Disagreement

I just googled "Hyatt microprocessor" and found this. Apparently TI overturned the earlier patent on the grounds that it was never implemented at the time. --ArtifexCrastinus 06:57, 12 December 2006 (UTC)

Concern about "GPU"

I'm a little wary that the article classifies GPUs as microprocessors. I have always seen the term "microprocessor" applied to an IC-based CPU. As I'm sure most readers realize, GPUs are much more akin to DSPs or stream processors than CPUs, despite the unfortunate acronym similarity. The programmability and general design model of GPUs certainly do not qualify them to be called CPUs. So my question is, is it appropriate to call a GPU a microprocessor, given that I've always known the term microprocessor to be related to CPUs? I'm not entirely sure; thoughts? -- uberpenguin 12:59, 20 October 2005 (UTC)

Okay, since nobody has ventured to add input to this concern, I'll just remove the offending text. -- uberpenguin 01:45, 18 December 2005 (UTC)
Hi, sorry - I didn't see this topic before I made my edit. Your concern seems to be that "microprocessor" should always mean "CPU", but I don't see any reason for that to be the case - they are not synonyms. Do you have any citations for a definition of "microprocessor" that would exclude GPUs? The fact that some GPUs are now used for non-graphics computation (see GPGPU) makes a pretty good case for their inclusion. MFNickster 02:16, 18 December 2005 (UTC)
There is no formal definition for the term 'microprocessor;' I'd be highly impressed if you could even find the first instance of its usage. I've personally never seen it used to include anything OTHER than CPUs. GPUs are called what they are, DSPs are called DSPs, etc. Just because GPUs perform arithmetic and are being used to a small extent as general purpose DSPs does not in itself qualify them as microprocessors in my mind (you wouldn't call a DSP a microprocessor, would you?). — Preceding unsigned comment added by Matt Britt (talkcontribs) 02:24, December 18, 2005 (UTC)
I don't have to call a DSP a microprocessor; all I have to do is show examples of people in research and industry calling it that, which I have done. MFNickster 06:29, 20 December 2005 (UTC)
I guess I could turn your question back on you -- do you have any reference that suggests that a microprocessor is anything other than a CPU? It would be enough for me if you could find one or two major hardware vendors that classifies something as a microprocessor that isn't (or doesn't contain) a CPU. -- uberpenguin 02:24, 18 December 2005 (UTC)
Incidentally, FOLDOC only mentions CPUs in its definition of 'microprocessor.' 1 Its definition expands a bit from Wikipedia's, presumably to easily include microcontrollers and SoCs. -- uberpenguin 02:29, 18 December 2005 (UTC)
Fair enough! I would start with a dictionary definition from Merriam-Webster: "a computer processor contained on an integrated-circuit chip", which would include GPUs unless you define "computer" as a CPU only (circular logic) - but then, graphics computations are still computations.
Then I'd cite a few articles and pages:
[1] "ANTIC (locations 54272-54783) This chip is actually a specialized microprocessor in its own right. It controls the screen display through instructions to C/GTIA."
[2] "...if this still doesn't get them the required performance, go to a specialized microprocessor like a digital signal processor or even a custom microprocessor implemented in an application-specific integrated circuit — an ASIC."
[3] "A graphics processing unit (GPU) is a microprocessor that has been designed specifically for the processing of 3D graphics."
[4] "A DSP is a specialized microprocessor adept at high-speed arithmetic and real-time data transfer to and from the real world."
[5] "A DSP is a microprocessor designed to work with analog signals such as video or audio that have been digitally encoded."
[6] "...the microprocessor became most visible as the central processor of the personal computer. Microprocessors also play supporting roles within larger computers as smart controllers for graphics displays, storage devices, and high-speed printers."
On the opposing side:
[7] "The microprocessor is the central processing unit (CPU) fabricated on one or more chips"
[8] "microprocessor: a computer whose entire CPU is contained on one (or a small number of) integrated circuits"
[9] "A microprocessor generally means a CPU on a single silicon chip, but exceptions have been made (and are documented) when the CPU includes particularly interesting design idea..."
What do you think? I think there's evidence that they're not synonymous. Imagine a Venn diagram with a "CPU" circle and a "microprocessor" circle, with your definition being the shaded overlapping area. A "microprocessor" seems to be (at the least) a single-chip computer, but it doesn't have to be the CPU even if it has the capability. For instance, look at the Sega Saturn; it used a Motorola 68000 for the sound controller. MFNickster 03:00, 18 December 2005 (UTC)
First, M-W is hardly a definitive source for computer related information. In any case, we are talking specifically about the term 'microprocessor,' not 'processor.' Since, as I stated earlier, there is no formal definition of the term, we must go by what is common usage in the industry. DSPs and stream processors are both 'processors' that are never called 'microprocessors' (unless in some SoC form).
'Processor' is simply an abbreviation of 'microprocessor', at least in systems that use ICs. In older systems, it's short for 'processing unit'. MFNickster 07:03, 18 December 2005 (UTC)
Now, point by point: 1. Quick searching indicates that Atari never referred to ANTIC as a GPU; indeed most sources refer to it as a microprocessor by the merit that it could execute stored programs (something that no GPU can do by itself).
ANTIC is an example of a microprocessor which is not used as the CPU of a system. I think it supports my statement below that "GPUs contain CPUs", because it's a custom controller that meets the criteria for a microprocessor - it was never intended for use as a CPU. As far as I know, though, you are right that they never called it a GPU (that term came much later). But it is a graphics coprocessor, essentially the same thing. MFNickster 19:09, 21 December 2005 (UTC)
2. Why are you confusing ASICs with GPUs? What does that have to do with the question at hand? Do you know what this term actually refers to or are you just giving me googled links? 3. I've never heard of WAVE Report, and from what I can tell they aren't a manufacturer nor considered an authority in the field of digital microelectronics. 4. GPS World... That's a source for EE information? You'd trust one line of research done by someone not knowledgeable in the field? 5. *sigh* See my complaints with #3 and #4. Additionally, I fail to see any point you're making as regards the Sega Saturn. The M68k is undoubtedly a microprocessor and a CPU; it was simply used to control sound functions in that capacity. Perhaps some of this confusion lies in failing to separate design from functional capacity.
I'm starting to see where the confusion lies - and perhaps this is the solution to the quandary. A microprocessor is a physical unit, a chip, a component. A CPU is an abstraction, defined by function instead of form. The 68k (actually a CMOS 68EC000 in the Saturn, I see) is a specific implementation of a CPU, though it can serve other functions. A PowerPC or Pentium, while being microprocessors, are much more - they contain functions traditionally separate from the CPU, such as floating-point units, MMUs, vector units. We could rewrite the intro to include the distinction between the physical chip (microprocessor) and the function it serves (CPU, FPU, DSP, GPU etc.). MFNickster 03:47, 18 December 2005 (UTC)
Yes, you're right, but there's no confusion here. My issue is that you never will see someone refer to an FPU as a microprocessor,
"A coprocessor is a second microprocessor that has been specially designed to perform a limited number of functions very quickly" [10] Need more? MFNickster 06:47, 20 December 2005 (UTC)
you'd rarely see a DSP called a microprocessor outside SoC applications, and to the best of my knowledge you'd never see a GPU referred to as a microprocessor by those in the industry. -- uberpenguin 04:02, 18 December 2005 (UTC)
Nvidia, The GeForce 6 Series GPU Architecture: "Figure 30-1. The GeForce 6800 Microprocessor" [11] Need more? MFNickster 06:47, 20 December 2005 (UTC)
Then perhaps a distinction can be made between general-purpose and special-purpose microprocessors? (Incidentally, if there is no formal definition, why are you asking me to cite one? just curious :) MFNickster 04:26, 18 December 2005 (UTC)
Please don't give googled links to support a point based on predisposition. I'm looking for a respectable reference work; some research paper published by digital VLSI designers, or perhaps a whitepaper by a manufacturer. You can't just use Google to support a point of view (for example, Google would probably provide sufficient evidence for supporting the notion that CPUs are all microelectronic and only exist in the x86 form). Don't get me wrong now, I'm not trying to be caustic or jump all over you, but providing a list of links from unusable reference sources doesn't assist the discussion. -- uberpenguin 03:20, 18 December 2005 (UTC)
Yes, these are Googled links which are not intended as reference, but simply evidence that other people often use the term "microprocessor" to mean something other than a CPU, specifically because you said "I have always seen the term 'microprocessor' applied to an IC-based CPU." You are using your own experience as (dare I say) original research, and since you said that "there is no formal definition" these viewpoints are just as valid as yours. If you want to search for reputable references to back your own definition, you're certainly welcome to do so. That said, would you agree that in a general article it's better to be more inclusive unless there's reason to do otherwise? MFNickster 03:47, 18 December 2005 (UTC)
I phrased it like that because one can never be 100% sure that their position is correct, and I'd never want to come across that way. I find it unlikely that I could find any formal paper that explicitly spells out what a microprocessor is, because they assume the reader already has an idea of their intentions in usage. The point is that when manufacturers and researchers refer to microprocessors, I have never seen a case where they did not mean CPU. If you'd like me to dig up long winded papers that support this in a general fashion I can, but that hardly proves my point. It's easier for me to simply ask you to find a citable source that uses the term to mean something other than a CPU. -- uberpenguin 04:02, 18 December 2005 (UTC)
I'll look for one, but since "there is no formal definition" that gives us a lot of leeway in the article's scope. MFNickster 04:26, 18 December 2005 (UTC)
Here's something else to ponder (using google :). If you search for the terms GPU and microprocessor on major GPU designers' sites (e.g. ATI, NVidia, Matrox), initial inspection indicates that they themselves never refer to their products as microprocessors. In the case of difficult-to-define product terms, I think the tendencies of vendors are the best standard to go by. -- uberpenguin 03:26, 18 December 2005 (UTC)
Actually, I did find a "FORM 40-F" for ATI which explicitly states "A GPU is a microprocessor specifically designed for processing 3D graphics data." I didn't include it because it's a PDF, but here is a TinyURL to the cached HTML version: http://tinyurl.com/aax27 . MFNickster 03:47, 18 December 2005 (UTC)
Okay, now there's something that we can actually discuss. I think that might be a starting point, but I'm hesitant to consider it justification here because it's a legal document. While I'm not so hard-headed as to reject it on those grounds alone, I'd feel a lot better if we could find something written by researchers or designers (e.g. technical whitepapers) that uses this definition. I'll be looking for such a source myself... Right now I'm leaning towards adding text that points out that some people consider microelectronic GPUs, DSPs, etc to be 'microprocessors,' but in general the term is used to refer to CPUs and SoCs. -- uberpenguin 04:02, 18 December 2005 (UTC)
Well, they are microprocessors, if you take the word in a broad sense to mean "a chip that processes digital data", but as I pointed out, a CPU is really only part of a modern microprocessor, and such a chip doesn't have to be used as the CPU in a given system, so when you say "the term is used to refer to CPUs and SoCs", that is true but not the whole picture technically. Perhaps we can include a section for DSPs, GPUs, etc. describing how they were developed as specialized refinements of general-purpose microprocessors. The technology in silicon is basically the same, it's the function which differs. MFNickster 04:26, 18 December 2005 (UTC)
Well, that's still the crux of our disagreement. I do not take microprocessor to be such a broad definition, because it's very uncommon in my experience to see any researcher, engineer, or designer refer to anything that doesn't contain a CPU as a microprocessor... Indeed, so far it seems that the people who generally refer to GPUs as microprocessors are either laymen or otherwise far removed from computer engineering. Ehh... I'll do a quick survey of some of the newsgroups and technical forums I frequent. I'll also try to see if I can find the first usage of the term anywhere. -- uberpenguin 15:36, 18 December 2005 (UTC)
I understand, and I think the point is not to determine "who's right", but instead to enlighten the reader with accurate information. The usage of 'CPU' has changed somewhat - originally the microprocessor was a way to implement a CPU on one chip; now the CPU has become one part of a microprocessor chip. In a very real sense, DSPs and GPUs do contain CPUs - they are just dedicated to a specialized purpose. But when someone refers to the microprocessor of a system, they are always referring to the general-purpose CPU and not the microcontrollers, FPUs, GPUs etc. in the system, so you are right about that usage. The article should also contain some sense of the broader meaning (microprocessors as a class of ICs). MFNickster 15:52, 18 December 2005 (UTC)
"In a very real sense, DSPs and GPUs do contain CPUs - they are just dedicated to a specialized purpose." By what definition of CPU? Certainly not a common one... Most people these days define a CPU as a turing complete stored program machine. Most DSPs and GPUs fail one or both these requirements (do you have any notable counterexamples?).
I think I spoke too soon on that one, in light of my earlier comment about 'CPU' being defined by its function, not its implementation. What I really mean is that DSPs and GPUs have logic cores similar to general-purpose microprocessors, and process data in a specialized way - i.e. the single-chip CPU had to be developed before programmable DSPs and GPUs could be made. MFNickster 06:29, 20 December 2005 (UTC)
Again, I simply have to disagree that the term microprocessor is commonly used to refer to ICs that don't function as CPUs. All publications I've ever read by the IEEE and computer architecture researchers seem to agree. The more general term "microchip" could certainly mean a DSP or GPU, but I still see no reason to think of a microprocessor as anything BUT a CPU, and the engineers I've talked to agree. I looked through the library's archive of old IEEE publications, and discovered that the very first issue of IEEE Micro (February 1981), the IEEE's bimonthly for microprocessor and microcomputer development, contains an article by M. Hoff and R. Noyce entitled "A History of Microprocessor Development at Intel." In it, Noyce states that the term "microprocessor" emerged at Intel in 1972 (not too long after the 4004 was released) and of course was used to refer to CPUs implemented as small multi-IC packages. So certainly the term originally was intended to mean CPU, and as of yet I have seen no sources from the IEEE or component designers that suggest that the meaning has changed since then.
1981? 1972? You're going to have to do better than that. Please find an IEEE article or paper that definitively says that microprocessors are CPUs and only CPUs. Better yet, try to find something on a DSP or GPU chip and see how it is described. MFNickster 00:05, 19 December 2005 (UTC)
Pardon? You don't consider the first usage of the term to be relevant? That's the hardest evidence supporting either position in this entire conversation thus far.
Actually it isn't, since the meaning has changed over time to include specialized processor chips. The articles you cite were written at a time when putting a whole CPU (without extras like FPU, MMU, cache) on a single chip was considered quite a feat in itself. There were no single-chip DSPs at that time, and even the term FPU was less common than "math coprocessor," a name for a microchip which is a kind of processor. The term "microprocessor" is just a combination of those two terms anyway. MFNickster 04:41, 19 December 2005 (UTC)
Again, you will not find any reference that says in explicit terms that "a microprocessor is and only is a CPU" because papers that use the term just assume that the reader knows what the author is talking about. I have certainly read plenty of technical papers involving both DSPs and CPUs and, as I've said several times, have never seen them referred to as microprocessors.
Yes, I know that - but absence of evidence is not evidence of absence. So far all you have offered is "I've never seen it and nobody I know uses it that way." Not good enough, because I have seen it used that way. The links I provided are just some quick examples of the common usage. Unless you can find a formal definition, then the article should cover all bases. You can easily find lots more examples, but it's up to you to dig deeper. You'll have to convince yourself, I can't do it for you. MFNickster 04:41, 19 December 2005 (UTC)
Just citing one paper wouldn't be sufficient evidence to invalidate your position, but if that's all you want, I can certainly provide a couple. -- uberpenguin 03:20, 19 December 2005 (UTC)
Please do - that's all I ask, is that you support your position. Also, please make a note whether these papers are referring to a microprocessor, or the microprocessor (CPU) of a single-processor system. I would find that distinction interesting and relevant. MFNickster 04:56, 19 December 2005 (UTC)
Additionally, I don't think we have the right to take liberties with a fairly well established term just because it seems like its usage could be expanded to other devices. If no industry publication or manufacturer seems to commonly use 'microprocessor' to refer to non-CPU devices, then I don't see why this article should. -- uberpenguin 22:18, 18 December 2005 (UTC)
How "well-established" the term is is what we're debating, so you're begging the question by calling your definition the "well-established" one. I'm not saying it seems like its usage could be expanded, I'm arguing that it has been expanded. If you want some examples from manufacturers, here are a few (yes, they're Googled - our libraries don't open until tomorrow, sorry)
  • Texas Instruments [12] "A digital signal processor (DSP) is a type of microprocessor - one that is incredibly fast and powerful."
  • Intel [13] "Digital Signal Processor (DSP) - A specialized digital microprocessor that performs calculations on digitized signals that were originally analog, and then forwards the results."
  • Intel [14] "DSP: 1. Digital signal processor. A specialized microprocessor designed to perform speedy and complex operations with digital signals."
  • IBM [15] A Microprocessor for Signal Processing, the RSP: "The Real-Time Signal Processor (RSP) is a microprocessor architecture that was created to exploit these characteristics in order to provide an expeditious and economical way to implement signal processing applications."
MFNickster 04:41, 19 December 2005 (UTC)

Halfway through making a list of papers from IEEE journals to demonstrate the term's usage, I decided that all this rhetoric is really silly over a minor terminology disagreement. I went ahead and wrote a section describing the usage of "microprocessor" to mean something other than a CPU; feel free to add to it or revise it as you see fit. I still hold that DSPs and GPUs are not in themselves microprocessors, but I doubt many people would have such issues with using the term thus. I do feel strongly, however, that when no further clarification is given, the term "microprocessor" can safely be assumed to refer to a CPU. The section I wrote reflects that point. -- uberpenguin 22:40, 19 December 2005 (UTC)

Oh, and just so you don't think I've been blowing smoke about this whole point:
  1. van de Waerdt, J.; et al. (2005), The TM3270 Media-Processor, 38th Annual IEEE/ACM International Symposium on Microarchitecture - Paper describing the architecture of the TM3270 media processor. It's somewhat similar to a DSP/GPU, but is actually much closer architecturally to a CPU than GPUs are. The article never refers to the TM3270 as a CPU or a microprocessor, but as a "media processor" (actually, I think a very apt term for GPUs and CPUs).
That is interesting. The TM3270 looks like a general-purpose CPU with custom media extensions. This press release [16] refers to it as a CPU, but not as a microprocessor. MFNickster 01:05, 21 December 2005 (UTC)
  2. Goodnight, N., Wang, R. & Humphreys, G. (2005), Computation on programmable graphics hardware, IEEE Computer Graphics and Applications - Paper specifically addressing general purpose programming on the latest generation of programmable GPUs (this was only published in October of this year). It refers to GPUs as "stream processors," never microprocessors. It even makes a very clear distinction between GPUs and CPUs (as, IMO, it should).
  3. Geer, D. (2005), Taking the Graphics Processor beyond Graphics, IEEE Computer - Another paper talking in some detail about general purpose computation on GPUs. Always uses the terms "graphics coprocessor" or simply "graphics processor," never microprocessor.
I just found these resources in a few minutes of digging through recent IEEE journals, and there are several more I haven't cited that talk extensively about GPU microarchitecture and always use terms like "graphics processor." True, omission is never sufficient to prove the point, but it does show (in a small way) that the trend by professionals in the field of digital microarchitecture is to refer to GPUs as what they are, and not as microprocessors. I do believe that this latter term is used much more commonly by laymen or those inexperienced in the field who don't already make the mental association of microprocessors with CPUs and simply decide that the term COULD apply to other things. -- uberpenguin 23:02, 19 December 2005 (UTC)
Perhaps. I didn't think you were "blowing smoke," just that you were only seeing part of the picture. I hope I've made a fair case that such usage is more common in the industry than you have seen before. If you dig a little deeper, you'll find plenty of examples. MFNickster 06:29, 20 December 2005 (UTC)
Umm... You know you're really just beating this thing to death now; I agreed several edits back that it would be okay to mention GPUs and CPUs in the article, and I added a section addressing these myself. -- uberpenguin 19:22, 21 December 2005 (UTC)
Okay, I'll give it a rest! :) I've just done a bit of research and would like to give future editors the benefit of that. MFNickster 19:39, 21 December 2005 (UTC)


I would like to add that prior to 1972 there were no formal definitions of microprocessor or microcomputer. The general term used was LSI (even for a processing unit). In 1972, Hank Smith, then Intel Marketing Manager, gave this definition in a speech for the IEEE 1973 WESCON Professional Conference. He said "A CPU uses P-channel MOS and is contained in 1, 2, 3 or 4 LSI standard dual-in-line packages from 16 - 42 pins per package". This was as close as he could come to a definition, and it was based on the technology then used by Intel. Of course, later the technology and the definitions changed. I think it is very important to pay some attention to the phrase "Single Chip Microprocessor", as the Intel 4004 and 8008 were called. It should be known that it took about 52 outside chips to make the 4004 work and about half that many to make the 8008 work. The F-14 MP944 chip set had no outside devices for the processor. In support of the technology in 1968 I offer this paper: "LSI Technology State of the Art in 1968". Ray Holt 01:25, 04 April 2007 (UTC)
Concerning Moore's Law, the F-14 MP944 chip set would blow away this concept. It would appear that Moore's Law only applies to conservative, consumer-based designs, as the F-14 MP944 clearly went well beyond the concept Moore presented, and in the same time frame in which he presented it. Maybe he and the Intel engineers were not aware that, with some creative and clever design techniques, more could be placed on a chip. Apparently, the designers at American MicroSystems were not aware of the limitations presented by the Moore's Law concept. Ray Holt 01:25, 05 April 2007 (UTC)

References

Here are some reference points for inclusion of a "specialized microprocessor" subsection. MFNickster 05:58, 18 December 2005 (UTC)

Corroboration within Wikipedia, for consistency:

NPU "Network Processing Unit or NPU is a CPU whose instructions are specialized to handle networking-related functions."
Microcontroller "A microcontroller is a computer-on-a-chip used to control electronic devices. It is a type of microprocessor emphasizing self-sufficiency and cost-effectiveness, in contrast to a general-purpose microprocessor, the kind used in a PC."
Graphics processing unit (old version) "A Graphics Processing Unit or GPU (also occasionally called Visual Processing Unit or VPU) is the microprocessor of a graphics card (or graphics accelerator) for a personal computer or game console"
Digital signal processor "A digital signal processor (DSP) is a specialized microprocessor designed specifically for digital signal processing, generally in real-time."
— Preceding unsigned comment added by MFNickster (talkcontribs) 05:36, December 19, 2005 (UTC)

Books

  • From The Winn L. Rosch Hardware Bible, Third Edition:
"At heart, a [numeric] coprocessor is a microprocessor but unlike a general purpose microprocessor it is dedicated to its specific function as a special purpose device." (p. 151)
"Graphic coprocessors are full-fledged microprocessors that are designed primarily for carrying out graphic operations." (p. 622)
"A DSP need be nothing more than a microprocessor optimized for processing audio signals." (p. 789)
  • From "IA-32 Processor Architecture," Section 2.4.2 Video Output (p. 60) [17]
"The video controller is itself a special-purpose microprocessor, relieving the primary CPU of the job of controlling video hardware.
  • From Signal Processing Handbook, C.H. Chen, Ed., 1988:
"Advances in IC technology have made possible microprocessors of ever-increasing complexity whose architectures are tailored to DSP algorithms." (p. 193)
"This section will discuss the design of general-purpose digital signal processors. We will restrict our attention to microprocessors and use the term microprocessor and microcomputer interchangeably." (p. 197)
  • From Digital Signal Processing Implementations Using DSP Microprocessors, with Examples from TMS320C54xx, Avtar Singh & S. Srinivasan, 2004:
"A programmable digital signal processor is a microprocessor whose architecture is optimized to process sampled data at high rates." (p. 3)
  • From The Microprocessor: A Biography, Michael S. Malone, 1995:
"...we have basically restricted ourselves to the characteristics of microprocessors used in the central processing units of computers ... there are other kinds of microprocessors as well, most notably microcontrollers ... beyond the features they share with their central processing counterparts, also add another important function: digital signal processing." (p. 120)
— Preceding unsigned comment added by MFNickster (talkcontribs) 05:36, December 19, 2005 (UTC)

Academic

  • "Real-Time Computing For Human Computer Interfacing", Princeton University [18]
"A Digital Signal Processing chip (DSP) is a microprocessor designed specifically to implement Digital Signal Processing (DSP) algorithms."
— Preceding unsigned comment added by MFNickster (talkcontribs) 05:36, December 19, 2005 (UTC)

Industry

  • Electronic Engineering Times [19]
Mapping computational concepts to GPUs, Mark Harris, Nvidia Corp.
"The computational speed on microprocessors is increasing faster than communication speed, especially on parallel processors such as GPUs."
  • NXP Semiconductor [20]
"The TriMedia processor, developed by Philips, is a special-purpose microprocessor for the real-time processing of audio, video, graphics and communications data streams."
  • Microsoft [21] "graphics coprocessor, n. - A specialized microprocessor, included in some video adapters, that can generate graphical images such as lines and filled areas in response to instructions from the CPU, freeing the CPU for other work."
  • Bluetooth Designer resource for engineers [22] "Digital Signal Processor: a microprocessor dedicated to real-time signal processing."
  • Apple IIgs Tech Note #11 [23] "The Ensoniq DOC in the Apple IIGS is actually a microprocessor dedicated to producing sound."
— Preceding unsigned comment added by MFNickster (talkcontribs) 05:36, December 19, 2005 (UTC)

No Intel?

Even though they dominate desktop computers, there is almost no mention of the x86 family of processors at all in the history section after the i386?

MIPS is not only used in embedded systems "like Cisco routers". The PlayStation game consoles are perhaps better known? — Preceding unsigned comment added by 80.202.211.146 (talk) 16:49, February 6, 2005 (UTC)

Remember, Intel is mentioned quite a lot in the beginning. After that, AMD is mostly mentioned because it gained a lead over Intel. To this day AMD is still dominating Intel, and if you're doing a paragraph on modern microprocessors you should do it on the best, AMD. — Preceding unsigned comment added by 207.144.176.202 (talk) 02:35, April 7, 2007 (UTC)
Yeah, sure. -- mattb 03:00, April 7, 2007 (UTC)

I find it odd that the notable 32-bit section says the following: "The most famous of the 32-bit designs is the MC68000, introduced in 1979." The question here is, if the word famous is being used in the normal fashion, shouldn't the MOST famous 32-bit design be a member of the x86 family? Regardless of how many applications there were of the 68k series, fame is a measure of popular knowledge. I'm not saying that the x86 family needs a boost in the article so much as that a word other than famous should be used to describe why the 68k series is more SIGNIFICANT than the x86, which I would argue it is. Jo7hs2 22:15, 2 November 2007 (UTC)

Dead Link: History of general purpose CPU

There is no such article. If someone removed it, please provide a substitute. If not, please remove the link. Landroo 13:31, 1 September 2007 (UTC)

Fixed. MFNickster 00:55, 3 November 2007 (UTC)

Wayne D. Pickette

Look at these articles everyone!

http://www.indybay.org/newsitems/2004/12/08/17088681.php

http://www.thocp.net/biographies/pickette_wayne.html


It's about the real brains and the actual "father" of the microprocessor. How come he isn't included in this article? And there isn't a single mention of him in Wikipedia either! His name doesn't appear anywhere as far as I've seen! Seriously, this is one great guy screwed by Intel, Fairchild, etc. big time!

And this is to the moderator(s): kindly don't hide what I've just written (with a * or whatever). Certain stuff needs to be spoken out loud!

Hope he gets the credit due to him soon! Krishvanth (talk) 06:50, 5 January 2008 (UTC)

Leonardo's computer

Regarding the claim: There have even been designs for simple computing machines based on mechanical parts such as gears, shafts, levers, Tinkertoys, etc. Leonardo DaVinci made one such design, although none were possible to construct using the manufacturing techniques of the time. ... Does anyone know if the Leonardo DaVinci mechanical 'computer' or 'processor' claim is true? It's not mentioned in the Leonardo article, unless Leonardo's robot is considered a computing device. Reading up on the 'robot' does not sell me on the 'computing' possibility, though it is obviously an impressive contraption for the time. -- Ds13 03:31, 2004 Apr 15 (UTC)

Yes, mechanical computers have been designed and built. I suspect the original writer is thinking about the difference engine and analytical engine designed by Charles Babbage. --68.0.124.33 (talk) 02:16, 8 March 2008 (UTC)

Funny

It's funny how this article explains jack shit about how microprocessors work. The most simple thing this article should have is somehow nonexistent. —Preceding unsigned comment added by 137.28.55.114 (talk) 21:55, 31 January 2008 (UTC)

See Central processing unit#CPU operation. --Wtshymanski (talk) 15:38, 18 September 2008 (UTC)

added section

added a small section on history of general purpose microprocessors. Matsuiny2004 (talk) 22:11, 18 April 2009 (UTC)

added citations

added citations to the first types section. Matsuiny2004 (talk) 21:58, 18 April 2009 (UTC)

Can somebody do some more research on the TMS 1000, since the source I have used considers it a microcontroller? If this is so, then should it not be moved to the microcontrollers article? Matsuiny2004 (talk) 22:37, 18 April 2009 (UTC)

Basic block diagram

It's been decades since I've been that deep into the matter, but we used to have these simple block diagrams of the essential components of a microprocessor. If someone knows what I'm talking about and still has one or can find one, it would be nice if we could add something like that to this page. 71.236.24.129 (talk) 09:59, 13 May 2009 (UTC)

Early History

Datapoint never used the 8008 or 8080, although they did play a role in their creation. They were too slow. The only unit I recall that used a single-chip "microprocessor" was their 15xx series, which used the Z80. More info here: http://www.old-computers.com/museum/doc.asp?c=596

and more: http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9111341

Ken (talk) 15:40, 26 May 2009 (UTC)

I have changed the appropriate text in the main article to reflect this.Ken (talk) 02:31, 5 June 2009 (UTC)

Mfgs and markets

So, I'm in a minor edit war with what I assume to be the same anonymous contributor (IP address varies, but writing style and method is the same -- you may want to register an account to make things clearer, or at least provide a handle in the edit summary). I keep removing a giant list of manufacturers, and the other contributor keeps putting it back in, with an edit summary that implies they are concerned that the article gives the impression that the microprocessors used in general-purpose PCs are the only applications.

I find that's a reasonable concern. If you want to make sure it is understood that microprocessors are used both in general-purpose and embedded designs, by all means, do so. But please do so in prose, by discussing applications in both GP PCs and embedded systems. Ideally, cite market-share figures in reliable sources, for both applications. It would be nice to know what the percentages are. (Be aware that we currently draw a distinction between microprocessor and microcontroller. Perhaps both articles should be clarified.)

However, I must insist that dumping a huge list of manufacturers into the article is the wrong thing to do. This is purely an editorial/style objection. Lists belong in the list pages we already have. They should not be duplicated here.

Thanks. —DragonHawk (talk|hist) 17:59, 26 December 2009 (UTC)

Voyager didn't have an 1802 µprocessor on board.

Well, at least according to the RCA 1802 article it didn't. —Preceding unsigned comment added by Stib (talkcontribs) 23:44, 25 May 2010 (UTC)

Transputer - not worth a mention at all?

http://en.wiki.x.io/wiki/Transputer

Skyshack (talk) 17:45, 1 April 2011 (UTC)

So? Why should this article mention the Transputer family? Tell us more. --Wtshymanski (talk) 18:35, 1 April 2011 (UTC)
I agree that it deserves at least a passing mention. Can't say why exactly, other than my vague opinion "it is a significant part of microprocessor history", especially concerning parallelism -- Nczempin (talk) 22:20, 1 April 2011 (UTC)
Having failed to delete all of WP's articles on transistors, Wtshymanski is presumably shifting his attention to microprocessors.
We should probably mention Viper too. Even though it was a failure, it was a notable attempt in one particular direction. Andy Dingley (talk) 22:34, 1 April 2011 (UTC)
Not sure if the potshot was necessary. Wtshymanski asked in perfectly neutral tone why the OP thought Transputer should be included. Given that at least one other person (yours truly) cannot easily deliver a more convincing argument than his opinion, the question seems entirely appropriate. Viper would IMHO have to demonstrate a lot more importance in its own article before it could be considered here. Perhaps you want to start another thread. It does make Transputer look good in comparison, not sure if that was the intent :-). -- Nczempin (talk) 22:53, 1 April 2011 (UTC)
Wtshymanski has spent the last week parroting the view that anything electronic with a part number is inherently non-notable. Stuff that. Andy Dingley (talk) 22:57, 1 April 2011 (UTC)

Getting back to the topic of this article, was the Transputer as big a deal as it seemed at the time? It got a lot of press but seems to have faded away as "regular" microprocessors caught up; I wonder why the Transputer didn't keep its lead over more complicated processors. --Wtshymanski (talk) 23:37, 1 April 2011 (UTC)

Well, first of all, it was (and the article is) somewhat disadvantaged as a British design in an American-dominated world (this by itself is of course no justification for its inclusion, just a note why it seems to be overlooked quite a bit). Secondly, it was probably a great deal ahead of its time; only with current developments after the GHz limit has been pretty much reached are we starting to "re-discover" parallelism. I wouldn't want a whole page on the transputer in the mP article, but a sentence or two wouldn't be wrong. Perhaps it would make sense to start by giving it a little more space in the more specialized articles on parallel computing. -- Nczempin (talk) — Preceding undated comment added 00:09, April 2, 2011 (UTC)
Why, are you planning to delete this too? Andy Dingley (talk) 00:19, 2 April 2011 (UTC)
No, I'm editing encyclopedia articles. You can, too. My rather elderly IEEE computer encyclopedia has a disappointing article "transputer" which tells me nothing about why they didn't catch on. The multiple communication ports in the processor seem not to have endured in other designs (though someone much more hip to the current Intel designs might shed some light here). Parallelism doesn't seem to get applied except for graphics and number-crunching. --Wtshymanski (talk) 14:58, 2 April 2011 (UTC)

Info

I think that this page doesn't offer enough information, such as how they function, how the transistors work, and what types of transistors there are, such as MOSFETs. Then again, not a lot of people need to learn all that. — Preceding unsigned comment added by Patrick-liu11 (talkcontribs) 19:14, 3 April 2011 (UTC)

We have more than one page in Wikipedia: How they work: CPU, CPU design. What microprocessors are made out of: Logic gate. Transistors, including MOSFETs. If you feel that those articles are deficient in some way, go ahead and help improve them! Note however that Wikipedia is not a textbook. Perhaps Wikibooks or Wikiversity would be more appropriate. The pages on those sites can also be improved by you. -- Nczempin (talk) 21:16, 3 April 2011 (UTC)

TMS 1000

This section implies that in the opinion of the Smithsonian staff the TMS 1000 was the first microprocessor. In fact, the link is to a page from a book that the Smithsonian has scanned in called STATE OF THE ART. The bottom of the page says "The National Museum of American History and the Smithsonian Institution make no claims as to the accuracy or completeness of this work." The information in this section was discredited in connection with litigation in the 1990s, when Texas Instruments claimed to have patented the microprocessor. In response, Lee Boysel assembled a system in which a single 8-bit AL1 was used as part of a courtroom demonstration computer system, together with ROM, RAM and an input-output device. See the Wikipedia article on Four Phase Systems: http://en.wiki.x.io/wiki/Four_Phase_Systems_AL1 — Preceding unsigned comment added by GilCarrick (talkcontribs) 16:43, 8 June 2011 (UTC)

Exhaustive Discussion (with references) of history of invention: Schaller PhD thesis chapter 7

...see http://home.comcast.net/~gordonepeterson2/schaller_dissertation_2004.pdf

The main article is missing, among other things, the Four Phase AL1 (one of several claims prior to the Intel 4004). Schaller's discussion is even-handed and makes it clear that the history is complicated enough for it to be impossible to simply pick a "winner" as being "the first".

Schaller begins "CHAPTER 7: The Invention of the Microprocessor, Revisited" with an excellent selection of quotes from other cited sources:

"The 4004, invented by Intel, was the world's first commercially available microprocessor." (Intel website)1

"TI invents the single-chip microcomputer and receives the first patent for the single-chipmicroprocessor, ushering in the personal computer era." (Texas Instruments website)2

"The first microprocessor in a commercial product was Lee Boysel's AL1, which was designed and built at Four-Phase for use in a terminal application in 1969." (Nick Tredennick)3

"Alongside to the IC, the invention of the 'micro-processor' (MPU - Micro Processing Unit) is the greatest invention of the 20th century in the field of electronics." (Busicom Corp.)4

"[T]he idea of putting the computer on a chip was a fairly obvious thing to do. People had been talking about it in the literature for some time, it's just... I don't think at that point anybody realized that the technology had advanced to the point where if you made a simple enough processor, it was now feasible.~] (Ted Hoff)5

"Having been involved with integrated electronics when I was at Intel, we never conceived of patenting a computer on a chip or CPU on a chip, because the idea was patently obvious. That is you worked on a processor with 25 chips, then 8 chips, and by- God eventually you get one chip so where's 'the invention'." (Stan Mazor)6

"Such inventions don't come from new scientific principles but from the synthesis of existing principles... Because these inventions have a certain inevitability about them, the real contribution lies in making them work." (Federico Faggin)7

"[A]t the time in the early 1970s, late 1960s, the industry was ripe for the invention of the microprocessor. With the industry being ready for it, I think the microprocessor would have been born in 1971 or 1972, just because the technology and the processing capability were there." (Hal Feeney)8

"I don't think anyone 'invented' the microprocessor. Having lived through it, this [claim] sounds so silly." (Victor Poor)9

"It is problematic to call the microprocessor an 'invention' when every invention rides on the shoulders of past inventions." (Ted Hoff)10

"Most of us who have studied the question of the origin of the microprocessor have concluded that it was simply an idea whose time had come. Throughout the 1960's there was an increasing count of the number of transistors that could be fabricated on one substrate, and were several programs in existence, both commercial and government funded, to fabricate increasingly complex systems in a monolithic fashion. (Robert McClure)11

"The question of 'who invented the microprocessor?' is, in fact, a meaningless one in any non-legal sense. The microprocessor is not really an invention at all; it is an evolutionary development, combining functions previously implemented on separate devices into one chip. Furthermore, no one individual was responsible for coming up with this idea or making it practical. There were multiple, concurrent efforts at several companies, and each was a team effort that relied on the contributions of several people." (Microprocessor Report)12

"The emergence of microprocessors is not due to foresight, astute design or advanced planning. It has been accidental." (Rodnay Zaks)13

"The only thing that was significant about the microprocessor was that it was cheap! People now miss this point entirely." (Stan Mazor)14

1 "Intel Consumer Desktop PC Microprocessor History Timeline," http://www.intel.com/pressroom/archive/backgrnd/30thann_timeline.pdf

2 "History of Innovation: 1970s," http://www.ti.com/corp/docs/company/history/1970s.shtml

3 Nick Tredennick, "Technology and Business: Forces Driving Microprocessor Evolution," Proceedings of the IEEE, Vol. 83, No. 12, December 1995, 1647.

4 "Innovation: The World's first MPU 4004," http://www.dotpoint.com/xnumber/agreement0.htm

5 Ted Hoff as quoted in Rob Walker, "Silicon Genesis: Oral Histories of Semiconductor Industry Pioneers, Interview with Marcian (Ted) Hoff, Los Altos Hills, California" Stanford University, March 3, 1995.

6 Stan Mazor, Stanford University Online Lecture, May 15, 2002, 020515-ee380-100, http://www.stanford.edu/class/ee380/

7 Federico Faggin, "The Birth Of The Microprocessor: An invention of major social and technological impact reaches its twentieth birthday," Byte, Volume 2, 1992, 145, http://www.uib.es/c- calculo/scimgs/fc/tc1/html/MicroProcBirth.html

8 "Microprocessor pioneers reminisce: looking back on the world of 16-pin, 2000-transistor microprocessors," Microprocessor Report, Vol. 5, No. 24, December 26, 1991, 13(6). Hal Feeney helped design the 8008 at Intel.

9 Vic Poor, former vice president of R&D for Datapoint, telephone interview with the author, June 5, 2003.

10 Dean Takahashi, "Yet Another 'Father' of the Microprocessor Wants Recognition From the Chip Industry," Wall Street Journal, September 22, 1998, http://www.microcomputerhistory.com/f14wsj1.htm

11 See e-mail/newsgroup posting to Dave Farber's IP list, dated May 12, 2002 (to Dave Farber, dave@farber.net). McClure was formerly with TI and helped found CTC; he also was an expert witness in the Boone patent case.

12 Microprocessor Report, op. cit.

13 Rodnay Zaks, Microprocessors: from chips to systems, 3/e, SYBEX Inc., 1980, First Edition Published 1977, 29.

14 Stan Mazor, telephone interview with the author, June 10, 2003.

It's a rich source of information for enhancing the main article (and quite interesting reading for its own sake).

Dougmerritt 04:32, 23 January 2007 (UTC)

The paper can be found on the Computer History Museum web site: http://corphist.computerhistory.org/corphist/documents/doc-487ecec0af0da.pdf — Preceding unsigned comment added by GilCarrick (talkcontribs) 17:11, 8 June 2011 (UTC)

Firsts

This page has obviously gone through a lot of editing, and the result is that it contradicts itself in several places. The section on the Four-Phase Systems AL1 was apparently added somewhat late in that evolution. It refers to the litigation in which TI tried to overturn Intel's microprocessor patents. The case was dismissed when Lee Boysel demonstrated that the Four Phase AL1 processor predated both the TI and Intel designs.

The section titled "Firsts" says that "Three projects delivered a microprocessor at about the same time," and mentions TI, Intel and the CADC. It should at least also mention the AL1 since it was clearly first.

The section titled "Intel 4004" says "The Intel 4004 is generally regarded as the first microprocessor." This is contradicted by the section on the AL1.

The section titled "8-bit designs" says "The Intel 4004 was followed in 1972 by the Intel 8008, the world's first 8-bit microprocessor." The AL1 was an 8 bit processor and predated the 4004, much less the 8008. See the Wikipedia article on the Four Phase AL1: http://en.wiki.x.io/wiki/Four_Phase_Systems_AL1 — Preceding unsigned comment added by GilCarrick (talkcontribs) 17:25, 8 June 2011 (UTC)

8086 Memory Segmentation Model

I just recently made an edit to the article which included a small change on the subject named by the title of this section, and I left a note in the edit summary referring here. The article had described the x86 memory segmentation model prior to the introduction of the 80286 (i.e. the x86 real mode memory model) as "crude"; I slightly adjusted this to remove bias. While it is undeniable that many programmers disliked (and even hated, sometimes intensely) the 8086 segment register design, calling 8086 memory segmentation "crude" is an opinion; the word has a pejorative connotation and implies a negative judgement along with the objective characterization that this aspect of the 8086 is not sophisticated or advanced. Contrasting 8086 segmentation with other processors' designs, it was clearly innovative (noting that not all innovations are improvements over past designs); this is evident from the simple fact that no microprocessor before the 8086 used any memory segmentation method quite like it, and none received qualitatively similar criticism. 8086 segmentation is also undeniably limited, particularly in that every segment is 64 KB in size, making it undisputedly difficult (or at least a non-trivial problem) to deal with large data objects (such as arrays or instances of implementations of any kind of ADT), i.e. ones larger than 64K bytes (= 64 KiB = 65,536 bytes). This is a tradeoff in an engineering design that, it should be remembered, was a solution to the problem of making a moderate-cost 16-bit processor able to address more than 2^16 words (i.e. able to drive more than 16 address bus lines). But nonetheless, it is a limit, and one that many 8086 programmers found themselves having to deal with frequently.

On the other hand, in my opinion, the difficulty with >64 K memory blocks is the only really major disadvantage of the strategy the 8086 design engineers chose; I otherwise find the 8086 memory segmentation model extremely flexible. You can use it like a bank-switching system, like a double-register addressing system (in the manner of the HL register pair of the 8080 and Z80), or for up to two-level plus immediate indexed addressing ([base address in segment] + BX + SI + [immediate displacement]). Perhaps it takes an imaginative attitude and a fresh perspective.

Considering all of this, I have changed "crude" to "innovative but limited", which is objective and, I believe, fair. I call it fair because it balances what is generally considered a positive quality (innovation) with a negative one (limitation). It also avoids injecting inappropriate details into this article, as a more detailed characterization of the processor's memory addressing model would.

(For readers unfamiliar with x86 real mode, it basically works like this: All addresses are 20-bit values that are each built from two 16-bit values, called the "segment" and the "offset". The innovative part is that rather than each bit of the finished address coming from either the segment or the offset, the segment is shifted left by four bits [i.e. multiplied by sixteen] and then added to the offset to generate the 20-bit address. For dealing with the segment parts of addresses, the CPU has four segment registers: one for code (which is always combined with the IP to generate the execution address), two for data, and one for the stack. Most instructions have default segment registers from which they get the segment part of any addresses in their operands, but those can be overridden with opcode prefixes. A key aspect that is unusual is that there are multiple segment and offset combinations [4096 of them, in fact] that correspond to each physical address. This system has been much maligned for being allegedly too complicated and illogical, but it actually makes sense, and it in fact does work, as the existence of thousands of MS-DOS software titles attests. Of course, just because it works doesn't logically imply it's any good.)
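
(To make the arithmetic concrete, a minimal sketch in C; the function name is invented for illustration, and real hardware does this in address-generation logic rather than software:)

    #include <stdint.h>
    #include <stdio.h>

    /* Real-mode address generation: the 16-bit segment is shifted left
       four bits (multiplied by sixteen) and added to the 16-bit offset,
       giving a 20-bit physical address that wraps at 1 MB. */
    static uint32_t real_mode_address(uint16_t segment, uint16_t offset)
    {
        return (((uint32_t)segment << 4) + offset) & 0xFFFFF;
    }

    int main(void)
    {
        /* Different segment:offset pairs alias the same physical address: */
        printf("%05X\n", (unsigned)real_mode_address(0xF000, 0x1234)); /* F1234 */
        printf("%05X\n", (unsigned)real_mode_address(0xF123, 0x0004)); /* also F1234 */
        return 0;
    }

(The two calls print the same 20-bit address, which is the many-to-one aliasing mentioned above.)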

It appears to me that this topic could easily be the subject of a flame war (among people who care about old CPUs, of which I of course am one), and I certainly am not out to start one of those pointless wastes of time, especially here on WP of all places. I personally have a decent respect for the 8086 and the accomplishment of the Intel engineers that designed it, while admitting that the 80286 is better, the 80386 is even better, and the Motorola 68000 is better than either of the first two or maybe all three of those, discounting cost and from a programmer's perspective. Still, I think I removed a significant but subtle bias from the sentence about the 8086 (a.k.a. x86 real mode) segmented memory model, and I hope the WP community will agree. I just wanted to explain my reasoning. --Stephen 74.109.5.17 (talk) 12:29, 24 July 2011 (UTC)

Catalog

Once again, we get a list of part numbers but very little explanation as to *why* there were so many part numbers. Why did we waste all that money on 6502s when the Itanium is clearly a better processor? A little history might be more encyclopedic than a recitation of part numbers, as popular as those are. Isn't there a List of microprocessors somewhere that we can point at here instead of reciting numbers with no reasons behind them? --Wtshymanski (talk) 16:30, 23 August 2011 (UTC)

Hey, we have List of 7400 series integrated circuits, and with enough 7400 series integrated circuits you can build an Itanium... Seriously, though, there seems to be a description of each part, so it is hardly "reciting numbers with no reasons behind them." It could be improved, of course; WP:SOFIXIT. --Guy Macon (talk) 18:50, 23 August 2011 (UTC)
So helpful. You are as scholarly as you are sociable. --Wtshymanski (talk) 19:24, 23 August 2011 (UTC)
* * GROUP HUG * *  :) Guy Macon (talk) 20:17, 23 August 2011 (UTC)

Look out for possible copyright violations in this article

This article has been found to be edited by students of the Wikipedia:India Education Program project as part of their (still ongoing) course-work. Unfortunately, many of the edits in this program so far have been identified as plain copy-jobs from books and online resources and therefore had to be reverted. See the India Education Program talk page for details. In order to maintain the WP standards and policies, let's all keep a careful eye on this and other related articles to ensure that no copyrighted material remains in here. --Matthiaspaul (talk) 14:15, 30 October 2011 (UTC)

Error?

Please check/correct

"l, with TI as intervenor and owner of the microprocessor patent." [sic]

Shouldn't this be "inventor", or am I missing something legalistic? — Preceding unsigned comment added by 69.86.252.239 (talk) 22:55, 3 December 2011 (UTC)

number of bits

Can we interface systems with different numbers of bits, i.e. a 32-bit microcontroller with a 64-bit output device? Will data be lost or not when data are transferred from the MPU to the output device? — Preceding unsigned comment added by Pradyuman Katiyar (talkcontribs) 12:34, 27 August 2012 (UTC)

Military use in F-14 Tomcat

In 1968, Garrett AiResearch, with designers Ray Holt and Steve Geller, was invited to produce a digital computer to compete with electromechanical systems then under development for the main flight control computer in the US Navy's new F-14 Tomcat fighter.

Was the processor used for the flight control computer, or for the Fire Control System (FCS)? Because as far as I know, the Tomcat didn't have a fly-by-wire control system. Maybe the author meant the FCS but got confused? —Preceding unsigned comment added by 79.107.73.166 (talk) 05:21, 28 October 2008 (UTC)

Added April 8, 2013. I am Ray Holt, and I meant flight control computer. The F-14 CADC was a fly-by-wire control of the moving surfaces, including the wings, and provided real-time data to the communications computer and the weapons computer. It also provided altitude, Mach number, and angle of attack to the pilot ... as well as many other functions. — Preceding unsigned comment added by Zajacik (talkcontribs) 16:29, 8 April 2013 (UTC)

Introduction

The intro to this page makes no sense to someone without a background. —Preceding unsigned comment added by 71.163.67.111 (talk) 04:54, 6 June 2009 (UTC)

Only to a person who didn't think of following the links to the articles on central processing units and integrated circuits, surely? --Brian Josephson (talk) 21:18, 22 April 2013 (UTC)

Difference from CPU (Central Processing Unit)

there's a similar list on Central processing unit. Do these need merging, or is one the parent article of the other? -- Tarquin 16:57 Jan 5, 2003 (UTC)

I also would like to understand the difference between a Microprocessor and a CPU. Is Microprocessor the generic term? ScotXW (talk) 08:17, 5 September 2013 (UTC)
No, but I would like microprocessor clarified. CPUs used to require many chips (or lamps or whatever). A microprocessor is the same thing when the CPU is on one IC (as all modern ones have been for many years now). I would say CPU is the term and microprocessor is redundant now (or a synonym). I would also like "or at most a few integrated circuits." in the lead clarified. From the Central Air Data Computer article: "The MP944 contained six chips used to build the CADC's microprocessor". For me that meant it's NOT a microprocessor! Note, because some of the chips seem to be (calculating) "functional units". ROM and RAM (and later memory controllers, eventually also built in) have traditionally been separate and do not count. "Very small chips" probably also do not, but this needs to be clarified. GPUs are a different story then, and they make the "Central" in CPU a little cloudy. comp.arch (talk) 15:47, 12 November 2013 (UTC)

programmable?

I'm a bit concerned about the assertion that a microprocessor is programmable. Is it not the computer that is programmable? Programmable suggests that the program is within what is being programmed. A microprocessor can execute program code, but the code that it executes is external to it (OK, perhaps this dubious use of language is now standard, in which case I suppose this can stay. Let the experts decide here). Brian Josephson (talk) 21:09, 22 April 2013 (UTC)

"Programming" a computer (in a pretty broad sense, broader than both Von Neumann or Harvard) is surely the process of arranging a suitable sequence of opcodes to be supplied via the CPU's data bus, in response to addressed requests according to the program counter. This is a logical retrieval: the supply of program data based on addresses, whether those opcodes were stored either on-chip or off-chip. We might discuss whether some EPROM that is loaded with these opcodes is either on-package or off-package to the microprocessor, but this just doesn't matter – we care about the logical location, but not the mundane physical implementation. It's possible to claim that, "EPROMs are what is programmed, CPUs only execute", but I don't see that as a productive distinction to make. Andy Dingley (talk) 21:43, 22 April 2013 (UTC)
Setting aside the dual use of a single term ("programming" an EPROM, commonly referred to as "burning", is a lot more like writing to a floppy disk than it is like "programming" a computer), external/internal is not a useful concept here. For example, a diskless node can boot from OS code residing on a network server, then log on to Wikipedia and execute Javascript code residing on Wikipedia's servers, then cause those servers to execute code that is local to the Wikipedia servers. The useful concept here is that the same physical hardware can do a number of different things based upon some bits somewhere, including things that the designers of the hardware had not conceived of. --Guy Macon (talk) 03:02, 23 April 2013 (UTC)
If you're going to say that it doesn't matter whether the code is contained within the microprocessor or not, you're in danger of saying a basic calculator is programmable -- it too 'can do a number of different things based upon some bits somewhere, including things that the designers of the hardware had not conceived of'. As I see it, something is programmable only if the thing itself stores the program, and the early microprocessors at least didn't do that. --Brian Josephson (talk) 08:45, 23 April 2013 (UTC)
Some calculators are programmable. The problem with your definition is that it defines a Sun 2/50 diskless workstation as not being programmable and a Casio F91W as being programmable, when the reality is that the opposite is true. --Guy Macon (talk) 08:47, 23 April 2013 (UTC)
I was well aware that some calculators are programmable, and on that account had, in the interests of accuracy, edited in the word 'basic' while you were composing your reply. I still feel it is better to think of a microprocessor as something designed to execute programs. Language seems to have slipped since the 1960s when I first did programming (on a mainframe so primitive you had to look up opcodes in the manual to write a program). And since I didn't volunteer a definition of 'programmable', it is meaningless to talk of 'my definition'. I did say that I considered a computer to be programmable, but I don't regard a watch as being a computer, even if it may run a program (but one prescribed by the manufacturer, not the user, who merely selects among the options available). -- Brian Josephson (talk) 09:14, 23 April 2013 (UTC)
But anyway y'guys are all going to stick to your current linguistic practice whatever I say, and I'm happy to leave you to it. I just happened to arrive on this page because the microprocessor concept is relevant to what I am doing. -- Brian Josephson (talk) 09:19, 23 April 2013 (UTC)
The difference with "a simple calculator" is that a micro's CPU is "programmable", in that it varies its own actions in response to stored instructions (wherever we store those instructions), whilst the simple calculator only has one fixed reaction to the keystrokes and any variation is due to its operator changing their behaviour. Andy Dingley (talk) 10:14, 23 April 2013 (UTC)
This conversation is getting nowhere, so I've unchecked its 'watch this page' box. -- Brian Josephson (talk) 10:43, 23 April 2013 (UTC)
That's probably best. Considering your rather **cough** interesting views on Wikipedia collaboration and consensus,[24][25][26][27] this discussion is unlikely to result in any improvements to the article. --Guy Macon (talk) 17:09, 23 April 2013 (UTC)
I think Brian sort of misused the word 'computer' in there. I agree: a microprocessor is not a programmable device. It has no such capabilities - data comes in from elsewhere, is processed, and is sent out. Perhaps whoever wrote it confused a microprocessor with a microcontroller, which is programmable. A 'computer' as a whole (memory (primary/secondary) + i/o peripherals) is programmable. And note that I am using that term as a 'cohesive unit', because technically we don't need the processor in there at all for the programmability feature. The processor itself is not programmable. Your argument about a programmable calculator only reinforces that argument. It's a device with a microprocessor and a storage unit (or a microcontroller) which allows the 'unit', as a whole, to be programmable. --CyberXReftalk 04:24, 24 December 2013 (UTC)

PACE is not the first 16-bit single-chip

The first 16-bit single-chip microprocessor is the Hewlett-Packard BPC, released in late 1973. The BPC is basically a single-chip implementation of a HP 2100-series minicomputer. The BPC was available as a stand-alone device, but was more commonly implemented in a single package with other HP support chips known as the IOC, EMC and AEC. The BPC was originally designed as the CPU of the HP 9825 computer, but was later used in the 9845 and various pieces of computerized HP test equipment, such as the 4955A Protocol Analyzer and 64000 development system. --Accutron (talk) 15:10, 16 June 2013 (UTC)

I know that's what it says on the HP 2100 page. However, I've disputed that section on the talk page. No reliable source. Waiting on some input before I go ahead and rewrite things. --CyberXReftalk 05:40, 24 December 2013 (UTC)

The combination of the ALU and the CU (control unit) is called the CPU. — Preceding unsigned comment added by 182.68.97.208 (talk) 05:17, 26 February 2014 (UTC)

Materials used in production

What sort of materials are used in production? What chemical elements are present in the final product (especially those other than silicon)? Apparently some are conflict minerals, which is notable. -- Beland (talk) 16:22, 8 April 2014 (UTC)

Muddled sentence starts 3rd graf

What is this supposed to be saying? "Microprocessors integrated into one or a few large-scale ICs the architectures that had previously been implemented using many medium- and small-scale integrated circuits." That smells like bad cut and paste. It appears to try to explain that microprocessors were previously a few separate ICs before they became integrated, or something. 108.33.72.18 (talk) 14:59, 8 May 2014 (UTC)

  Done. I agree that sentence tried to cram too much stuff into one sentence. It is now split into two sentences. Feel free to continue improving this article. --DavidCary (talk) 17:48, 2 July 2014 (UTC)

Motorola 68000 is not a 32-bit processor.

Nobody who knows what they are talking about would describe this processor as a 32-bit processor. The only people who regularly did so were marketing people, for rather obvious reasons. Virtually every reference work describes the processor as 16-bit.[1][2] In the case of both references, although they make reference to the 68000's 32-bit data and address registers, nowhere does either work claim that the 68000 is a 32-bit processor. If anyone really wants to insist that the 68000 is listed as a 32-bit processor, then I shall insist that the 8080 and the Z80 are listed as 16-bit processors (precisely because both have 16-bit data and address registers, and the Z80 even has 16-bit arithmetic instructions). DieSwartzPunkt (talk) 18:10, 22 January 2015 (UTC)

References

  1. ^ 16-bit Microprocessor Handbook, Adam Osborne and Gerry Kane (Page 7-1)
  2. ^ Motorola MCS68000 data sheet.
There is no quick answer to explain this, but it's wrong to describe the 68000 (either the family or the literal 68000 part) as a 16 bit processor. It is, and was always so described by Motorola, from the outset, as a 16/32 processor. Nor is this the same thing (but enlarged) as the TMS9995, which was a 16 bit processor from a 16 bit family, multiplexing onto an 8 bit bus to save pins and board space. Andy Dingley (talk) 19:06, 22 January 2015 (UTC)
By the late '70s, it was clear to those who looked that the cost of CPUs, and the balance between their parts, was changing. It was no longer so expensive to provide more registers, or wider registers. Yet this was the era of CISC, and so instruction decode and microcode was growing and growing. Also memory was becoming cheaper, thus encouraging larger amounts of it, with all the address management that implied. So simply making fatter registers wasn't a huge cost to bear, and the lessons of having 16 bit registers (as accumulators, supporting ALU operations) in 8 bit processors had been valuable for managing address pointers.
It was also clear, with the ongoing software crisis, that the notion of an OS as a long-term stable development platform was important. Software was no longer developed for the specific chip, with a single chip generation in mind. Vendor tie-in to particular platforms, be they UCSD / CP/M / VMS or whatever would be the route to strong long-term sales for any CPU family.
Motorola had success with their 8 bit series, but these were lagging. Other makers were eyeing up the 16 bit world. So Motorola took a deliberate decision to leapfrog the competition. They would produce a 16/32 processor. This would be a full 16 bit processor, but its internal registers and ALU would be fattened up to 32 bits. This made no sense in the short term. As a 32 bit processor, the 68000 was hobbled by access on and off the chip package. Complaints were raised at the time that this was a very poor way to build a 32 bit processor (which Motorola never claimed it was), and indeed it was. The fact that there weren't any comparable 32 bit microprocessors at the time, or that if there had been they'd be in the then unaffordably expensive pin grid packages, passed these armchair commentators by. It was also a somewhat wasteful way to build a 16 bit processor – but not by much. Fattening the internal data was not where the big silicon costs were at this time.
Motorola played the long game. Through the 1980s, the 68000 family developed, particularly as packaging constraints for package and PCB layers relaxed and allowed fatter external buses too. 68000 code was characterised by gentle change at this time, without the major architectural leaps that characterised the x86 world. As ever, this wasn't the success it deserved. Too many external business aspects got in the way. Its flagship adopter, Apple, going up and down all over the place. The Unix workstation market turning into the dinosaurs, with their own architectures to sustain, eating the smaller 68000 workstation companies. The 68000 seemingly having the touch of death on its much superior domestic / games machines, the Amiga and Atari. Hovering over all, of course, the PC.
Intel had even tried the same leapfrogging trick itself, a couple of years behind the 68000, with the iAPX432. This never took off. I don't know why; presumably it was trampled by the golden cash cow at the heart of every PC.
So, it's a 16/32 architecture. Always was. It's a disservice to call it less. Sorry it doesn't fit into a simplistic "equal bus width throughout" VN model. Andy Dingley (talk) 19:25, 22 January 2015 (UTC)
So by your argument, my above assertion applies in equal measure. The 8080 and Z80 are 8/16-bit processors precisely because both have 16-bit internal registers, and both need to be moved to the 16-bit section. DieSwartzPunkt (talk) 11:38, 28 January 2015 (UTC)
<sigh> You're yet again trying to cut history to fit your over-simplistic "processors are characterised by a single number called 'size'" model.
The Z80 had 8-bit registers B, C, D, E, H and L that could be coupled in pairs as the 16-bit registers BC, DE and HL. However its A accumulator register (i.e. the ALU capacity) was 8-bit, with only a handful of simple operations working across 16 bits. The 68000 had 32-bit accumulators. Andy Dingley (talk) 14:07, 28 January 2015 (UTC)
How many bits was an 8086? An 8088? And which is better, the Amiga or the ST? --Wtshymanski (talk) 14:41, 28 January 2015 (UTC)
For the record, re: "the Z80 even has 16-bit arithmetic instructions", the 8080 also has 16-bit arithmetic instructions: INX, DCX, and DAD. The Z80 expands on these. They are implemented in precisely the same way as 32-bit operations on the MC68000: microcoded double-precision arithmetic using the normal ALU. (Actually, the 8080 & 8085 block diagrams show a 16-bit address incrementer/decrementer—it is unclear whether this relates only to SP or can operate on all register pairs—so while DAD is a double precision main-ALU addition in microcode, INX and DCX might not be.) 74.103.131.73 (talk) 06:43, 22 March 2016 (UTC)
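
(For illustration, a C sketch of the double-precision technique just described: a 16-bit addition built from two passes through an 8-bit-wide ALU, with the carry propagated between them. The function name is invented; real microcode sequences the ALU directly, of course.)

    #include <stdint.h>
    #include <stdio.h>

    /* 16-bit addition performed as two 8-bit ALU operations, low byte
       first, propagating the carry into the high-byte addition. */
    static uint16_t add16_via_8bit_alu(uint16_t a, uint16_t b)
    {
        uint16_t lo = (uint16_t)((a & 0xFF) + (b & 0xFF));     /* first ALU pass */
        uint8_t carry = (uint8_t)(lo >> 8);                    /* carry out of low byte */
        uint16_t hi = (uint16_t)((a >> 8) + (b >> 8) + carry); /* second pass */
        return (uint16_t)(((hi & 0xFF) << 8) | (lo & 0xFF));
    }

    int main(void)
    {
        printf("%04X\n", (unsigned)add16_via_8bit_alu(0x12FF, 0x0001)); /* prints 1300 */
        return 0;
    }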

First microprocessor, not Intel 4004?

"The founders of Pico had the idea that it was possible to develop a single chip calculator (most calculators at the time used at least 5 ICs). Pico did this and this calculator IC was actually the world's first microprocessor (despite what Intel, or Texas Instruments would like you to believe)."[28] (used as a source at X10 (industry standard)). Integrated circuit says 4004 is "the world's first microprocessor" while this article qualifies. [I know the difference of a calculator and a PC, i.e. the calculator was not programmable, but was the chip?]

I see now, Pico is already in the article, and they "lay claim to be one of the first microprocessors or microcontrollers having ROM, RAM and a RISC instruction set on-chip." Having RAM on-chip is a stronger claim than later systems (the 4004, and even most current microprocessors) or earlier systems in the article can make. RISC may be dubious or not; I doubt it means what it does now. Is "The key team members had originally been tasked by Elliott Automation to create an 8-bit computer in MOS" about the same chip? Then 8-bit vs. 4-bit for the 4004, and I'm curious about size/transistor count. The lowest I've heard for any CPU is about 4000; curious how it compares and if anyone has beaten that.

There is "Category:American inventions" but possibly should say "Category:Scottish inventions"? comp.arch (talk) 14:46, 20 September 2016 (UTC)

Hello fellow Wikipedians,

I have just modified 3 external links on Microprocessor. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 18:18, 10 June 2017 (UTC)

Abbreviation: µP

Is a microprocessor actually abbreviated μP? It certainly isn't an abbreviation in common usage, so if this is some specific jargon it should be labeled as such. --Delirium 04:51, Dec 12, 2003 (UTC)

I think it's an old habit, from the early days when most of the people using μPs (see, it just slipped out :-) ) were EEs, and were used to saying μF for capacitors and the like. I see it in my old copies of Byte, for instance (one of them also mentioned a North Star μdisc system, heh-heh). uP was a later concession to the limitations of ASCII. This is all worth noting, but as an older and informal usage it doesn't really need to be at the top. Stan 05:12, 12 Dec 2003 (UTC)
Also, µP and µC (microcontroller) are often used when quickly drawing embedded system concept sketches on a black/whiteboard or for that matter, on the proverbial napkin, so I felt that the abbreviation(s) should be very visibly included in the relevant articles (and made into associated #redirects). The general case, as Stan touches upon, is that µ and other Greek letters are much used in science/engineering environments to save space/time in written material. --Wernher 23:27, 12 Dec 2003 (UTC)

Contradiction: single versus multiple chips

The initial definition says a microprocessor is implemented on a single chip, which I have always understood to be an essential feature. However, further down the page there is mention of multi-chip 16-bit "microprocessors", which by this definition cannot exist. — Preceding unsigned comment added by 212.44.25.184 (talk) 14:17, February 8, 2005 (UTC)

You might have a valid point there, and I've always thought so myself. On the other hand, cf. the definition at FOLDOC:
microprocessor <architecture> (Or "micro") A computer whose entire CPU is contained on one (or a small number of) integrated circuits.
Thus, many two or three-chip CPUs qualify as a µP, such as the RCA CDP1801 and Intel iAPX 432 (which, contrary to my general assumption, I have always thought to be proper µPs). I think the essential part of the definition is the clause "or a small number of", which precludes CPUs made out of piles of TTL chips, but includes CPUs consisting of, say, 1--4 LSI chips. --Wernher 03:28, 9 Feb 2005 (UTC)
I think a lot of discussion about computing terminology can be resolved by looking into the origins of the words. (Luckily this is easy with computer jargon, which hasn't been around as long as other language.) I don't have evidence to back this up, but I suspect that the term microprocessor was originally marketing speak for the processors in the computers that came after minicomputers, and therefore there isn't actually a rigorous technical definition. — Preceding unsigned comment added by Alf Boggis (talkcontribs) 15:22, September 1, 2005 (UTC)

Move History of Operating System support for 64 bit microchips

Is there any support for moving the section History of Operating System support for 64 bit microchips somewhere else, like maybe Operating system? It doesn't seem to serve much purpose here (other than a thinly veiled Linux good M$ bad dig). — Preceding unsigned comment added by Alf Boggis (talkcontribs) 15:32, September 1, 2005 (UTC)

Interesting

How about this The article. 134.250.72.176 — Preceding undated comment added 04:06, October 28, 2005 (UTC)

First 16bit single chip processor contradiction

"National introduced the first 16-bit single-chip microprocessor, the National Semiconductor PACE..." and then a paragraph or so later, "The first single-chip 16-bit microprocessor was TI's TMS 9900..." — Preceding unsigned comment added by 66.41.35.114 (talk) 19:00, August 10, 2007 (UTC)