Talk:AI accelerator

Redirect discussion for AI accelerator vs category page

Creating this page speculatively in response to the AI accelerator redirect discussion.

It might be overkill? It just repeats the information you get from the category and the more specific examples (TPU, VPU, TrueNorth, etc.).

On the other hand, you might want to expand this into a dedicated AI accelerator article, instead of that just being a redirect, and skew the way those are presented.

Perhaps this article could detail more of the history and applications (e.g. which units are being used in self-driving cars, and the continued efforts by Nvidia to keep GPUs dominant in the space), the early attempts, the use of DSPs, and cover the Cell processor, with its clear overlap with VPUs (a many-core chip aimed at video and AI/physics/graphics workloads in games, whilst being more general than a GPU).

I don't know why the Wikipedia community objects to category redirects. My intent with the original redirect was that the collective noun 'AI accelerator' links to the list of examples. I suppose you could put more general information about the common context here and trim some information from the specific pages (like the parallel I keep mentioning with Cell and Adapteva).

Fmadd (talk) 16:10, 16 June 2016 (UTC)

To clarify, this page was at one point a redirect to https://en.wiki.x.io/wiki/Category:AI_accelerators. After discussion, a list of AI accelerators was moved to this title. Link to redirect discussion: https://en.wiki.x.io/wiki/Wikipedia:Redirects_for_discussion/Log/2016_June_16#AI_accelerator. I also updated the title of this section for clarity. ★NealMcB★ (talk) 15:32, 28 July 2021 (UTC)

Fleshing out history etc

I remember a YouTube video by the RISC-V person (Krste Asanović) describing some 80s/90s attempts to build AI accelerators for Sun workstations, and explaining how they looked similar to the current PC + multiple-SLI-card setup. (This was a video about accelerators past and future; he's pushing RISC-V as a basis for accelerators.)

There's also a video showing Yann LeCun using a DSP32 card on a 486 PC to accelerate machine vision.

I'm sure there were earlier attempts too... there must be people out there who have been into AI for a long time who know (I'm a graphics person, really).

Should mention the use of FPGAs as well – Microsoft has been using these, and how this motivated Intel's later acquisition (of Altera). — Preceding unsigned comment added by Fmadd (talk • contribs) 16:55, 16 June 2016 (UTC)

Possible reorganisation vs VPU

This article says a lot of what I was trying to convey in VPU (which people agreed was 'notable'). The content there could be migrated to the Movidius article, and VPU becomes a redirect to Movidius#VPU ("a VPU is an AI accelerator with integrated camera interfaces and video-related fixed-function units"). Then we have a parallel between the vendors of different potential 'AI accelerators':

AI accelerators – overview of all, history, etc.

then cover vendor specifics in their own pages:

  • Movidius Myriad 2 VPU – what is a VPU, etc.; cover the Fathom USB stick
  • Google Tensor Processing Unit ...
  • Qualcomm Zeroth NPU
  • Adapteva Epiphany / Parallella

(and if more details emerge about the NPU, SpiNNaker, Google TPU... those pages can get similar treatment) Fmadd (talk) 23:59, 17 June 2016 (UTC)

This article needs a better title

This article is about hardware-accelerated neural networks, but its current title is strangely contrived. "AI accelerator" typically refers to incubator programs for startup companies, which seem unrelated to this article's topic. Jarble (talk) 20:55, 16 June 2017 (UTC)

  • The article is a possible WP:AfD candidate from my point of view. It's a general summary of current events in the chip space, but it never suggests that an "AI accelerator" is a well-defined term or a generally-used term; none of the references use this term as far as I can tell. Power~enwiki (talk) 21:02, 16 June 2017 (UTC)
@Power~enwiki: For that reason, I suggest merging this article into the main hardware acceleration article. It provides a useful overview of hardware acceleration, but AI accelerator isn't a good title for it. Jarble (talk) 20:00, 23 June 2017 (UTC)
What about debating a better name? 'Neuromorphic chip' is heard more commonly. How about AI accelerator chip? 'Hardware acceleration' is very broad, whilst this article focuses on types of accelerator distinct from GPUs; and GPUs are one class of hardware accelerator that have demanded their own page. Similarly, there are other classes of hardware accelerator (e.g. video encoding/playback, bitcoin mining) which would not fit in this article. Wouldn't it be better to keep separately focused pages? Easier to organise. Consider that this space is fast-changing, and in a few years terms may have stabilised. Regarding the alternate perception of 'AI accelerator' being business/investment related, should we make a disambiguation for that term? Here are references for the term 'AI accelerator' as hardware: [1], [2] MfortyoneA (talk) 11:22, 11 August 2017 (UTC)
I agree that this article has merit under a proper title; 'neuromorphic chip' is used more, but can still be ambiguous. How about a more technical title? bquast (talk) 11:22, 2 September 2017 (UTC)
See AI accelerator – is this arrangement an improvement? I note the seed accelerator page also uses the term 'hardware accelerator' ambiguously (again, a business startup accelerator, rather than a computing device). Would further DAB be needed? MfortyoneA (talk) 11:39, 11 August 2017 (UTC)
I did not find any instances in current WP pages where AI accelerator was used in the sense Jarble finds "more common". This may have been a premature move. — jmcgnh(talk) (contribs) 22:57, 11 August 2017 (UTC)
A Google search for "AI accelerator" does indeed primarily yield startup incubators, as the above concerns suggested; there are only a few actual instances of the term 'AI accelerator' elsewhere (but I've added references for those). My assumption was that adding parenthesised disambiguation (but retaining the original title) wouldn't hurt, hence the 'bold move': it's just making the original title more specific. But let's see what everyone thinks. Personally, I think parenthesised clarification is the correct default for any title, as the world is more likely than not to have ambiguity. MfortyoneA (talk) 23:14, 11 August 2017 (UTC)
My reading of WP:TITLE says that your personal idea is not consistent with established policy, but I've made the change at Template:CPU technologies to quiet DPLbot and removed the "too many links to dab page" tag. — jmcgnh(talk) (contribs) 00:11, 12 August 2017 (UTC)
I would disagree with merging, as this is a reasonably distinct field from hardware acceleration (more general than fixed-function, but less general than a CPU core). Perhaps a more specific name (and focus to the article) would be helpful. Neural network accelerators/accelerating hardware, or something similar? Is there a cite/reference for what this field is generally called? Dbsseven (talk) 22:11, 12 September 2017 (UTC)
I did find citations for the term 'AI accelerator'. Many suggest 'neuromorphic computing'. However, I think there are devices that are intended to accelerate AI but that are *not* specifically hardware-based ANNs... rather things dealing with matrix multiplies, convolutions, etc. Remember that ANNs are just one approach to AI. Although perhaps you'd suggest focusing the article *on* ANNs? MfortyoneA (talk) 09:47, 13 September 2017 (UTC)
While there are likely some AI implementations which do not use ANNs, my impression is that most AI implementations do. More generally, this article seems very unfocused and confusing/contradictory. The lead says these are "distinct from GPUs", yet GPUs are discussed. The Cell processor is discussed, but no application to AI is cited. And there is minimal discussion of more topical products like the Tensor Processing Unit, Nvidia's Volta, the Movidius,[1] and the new A11 Bionic. I am hesitant to go editing until I know what this article is and isn't about. Dbsseven (talk) 14:27, 13 September 2017 (UTC)
The way I read it, GPUs/CELL etc. are mentioned for historical reasons (it's not *about* them): they're part of the evolutionary path that is yielding *dedicated* AI accelerators. CELL has similarities to Movidius (it's a many-core device, with managed on-chip memory, and had instructions intended for video). It wasn't deployed that way, but Sony did think about using it in the AIBO robots, etc. We can reshape the article toward this. Re. ANNs, I actually think we should emphasise the difference between 'neural networks' (biomimicry) and 'linear-algebra-focused accelerators' (which is what a lot of AI really is). The later GPUs introduce 'tensor units', which are just low-precision 4x4 matrix multipliers. I think those would have uses in graphics too... is it really NN hardware, or just a specific blend of precision/functionality that happens to work well for recognition tasks? My own view is that the term NEURAL network is slightly buzzwordy hype. The Movidius chip still has traditional fixed-function elements for handling video preprocessing. MfortyoneA (talk) 03:44, 14 September 2017 (UTC)
E.g. if you put Movidius and CELL side by side, the use of the term 'AI' in the former is just marketing focus. Re. their on-chip processing, both can do similar things. In fact Movidius started out with a broader mission statement (making 'a high-throughput accelerator' which could be used for game physics, etc.) and just happened to find this niche. Movidius is more focused on taking direct video inputs, but that's not a 'neural' aspect, just interfacing. If CELL were released today, the exact same device with no technical changes would get the AI (and 'Neural') marketing hype equally applied to it. MfortyoneA (talk) 03:57, 14 September 2017 (UTC)
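To make the 'tensor unit' point above concrete, here is a minimal Python/NumPy sketch of the D = A×B + C step such units perform on small tiles. The 4x4 shape and the low-precision-multiply/wider-accumulate convention follow the discussion above; this is a numerical model for illustration, not any vendor's actual API:

    import numpy as np

    def tensor_unit_mma(a, b, c):
        # One matrix-multiply-accumulate step on a 4x4 tile: D = A*B + C.
        # Operands in low precision (fp16), accumulation in fp32 --
        # the 'specific blend of precision/functionality' mentioned above.
        assert a.shape == b.shape == c.shape == (4, 4)
        a32 = a.astype(np.float32)  # widen operands before the multiply
        b32 = b.astype(np.float32)
        return a32 @ b32 + c        # c is the fp32 accumulator

    a = np.random.rand(4, 4).astype(np.float16)
    b = np.random.rand(4, 4).astype(np.float16)
    c = np.zeros((4, 4), dtype=np.float32)
    d = tensor_unit_mma(a, b, c)

Nothing in this is specifically 'neural': it is plain linear algebra, which is the point being argued above.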
I generally agree with you, though I think much of this needs cites to be included in the article. Doing a quick (not extensive) search, I didn't find any references to Cell in the context of AI. Its similarity/relevance needs cites to avoid being original research. (The AIBO page doesn't include any reference to Cell.) Even if something could or should have been used for AI, we as editors need sources. Though (cited) similarities of the hardware could be pointed out.
It seems to me that the biomimicry is the origin/model for ANNs (the ANN definition from the article: 'computing systems inspired by the biological neural networks'). Generally/currently, the mathematical basis of those models tends to be linear algebra and matrix multiplication.
I propose, to fit the subject of the article within the current title, to focus the article on hardware used/offered/described specifically (but not necessarily exclusively) for use in accelerating AI (ANNs or otherwise). This would then keep/include GPUs, TPUs, Movidius, etc. However, this definition would exclude general-purpose CPUs and anything without a cite regarding AI (except perhaps as background/context). (An alternative would be a separate ANN accelerator article; however, I do not know of any non-ANN AI implementation, much less hardware specifically for accelerating it, so it would overlap heavily.) Dbsseven (talk) 18:13, 14 September 2017 (UTC)
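As a hedged illustration of the 'linear algebra and matrix multiplication' point above: one fully connected ANN layer reduces to a matrix multiply plus an elementwise nonlinearity, which is exactly the workload such accelerators target (a minimal NumPy sketch; the layer sizes are arbitrary, and this is not any particular chip's interface):

    import numpy as np

    def dense_layer(x, w, b):
        # Forward pass of one fully connected layer: the 'biomimicry'
        # model, computed as an ordinary matrix-vector multiply plus ReLU.
        return np.maximum(w @ x + b, 0.0)

    x = np.random.rand(128)        # input activations
    w = np.random.rand(64, 128)    # learned weights
    b = np.random.rand(64)         # learned biases
    y = dense_layer(x, w, b)       # 64 output activations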
(Artificial) Neural Network Hardware could be a title for this article. I.e. the text could be about what hardware is used to run neural nets. The text could – as it already does – include CPUs, DSPs and GPUs that are adapted to running neural nets. It can include FPGA cores, and – finally – the real dedicated neural-net accelerator chips and cores. gnirre (talk) 20:33, 3 October 2017 (UTC)
I would support this. I think this would better encapsulate the scope/idea of the article. Dbsseven (talk) 15:28, 18 October 2017 (UTC)

Merge with HW Acceleration

I will remove the tag that proposes a merge with HW acceleration. HW accelerator is a much wider topic (obviously?). Still, the present article about AI acc. is much longer. We cannot force editors to write more about the wider subject, so it would be a very unbalanced article. For the reader, it doesn't matter much, as long as we have a clear pointer to here from the article with the wider scope. --Ettrig (talk) 09:55, 18 September 2017 (UTC)

cleanup

I'd like to clean up the examples section, perhaps as a table. I think this is the most interesting/useful portion of the article currently, but right now it is a hodge-podge of data depending upon what each editor decided to include. Perhaps at least two tables for spiking and more conventional NN accelerators. Anyone have any thoughts before I go re-organizing? Dbsseven (talk) 16:11, 20 October 2017 (UTC)

general cleanup

I've begun cleaning up the text to make it more focused and general to AI accelerators, based on the previous discussions here. Please let me know what you think. Dbsseven (talk) 22:35, 14 November 2017 (UTC)

Looks great with the subdivision of the examples!

I would prefer "IP cores" to "co-processors" in that headline. I think "co-processor" is kind of OK, but "IP core" narrows it down exactly, whereas my first association with "co-processor" or "processor" would be a physical component. gnirre (talk) 08:49, 28 November 2017 (UTC)

examples

The Examples section of this article has been repeatedly tagged with {{example farm}}. To some degree this seems fair. But this is also one of the more actively edited portions of the article. Can we find a consensus on what to include/exclude to simplify this? Or is the example farm tag misplaced, and should the consensus be to remove it? Dbsseven (talk) 18:23, 15 January 2018 (UTC)

Applications

"AI accelerators can be found in many devices such as smartphones, tablets, and computers all around the world." - I just removed this statement as no source was given, and I have not found a publication supporting it. We are talking about _specialised hardware_ here, not mere NN algorithms. Always willing to learn - please re-insert the statement as soon as there is a reputable source. --Bernd.Brincken (talk) 17:42, 27 January 2020 (UTC)Reply

Atomically thin semiconductors?

Does this section really belong on an AI accelerator page? This is device physics, and while it might eventually have some application to accelerators, it's pretty far removed. Otherwise, why not include sections on III-V semiconductors or EUV technology, since they could be used to build an AI accelerator? Sheridp (talk) 18:46, 2 September 2021 (UTC)

"Tensor"

The article refers to tensor processing and tensor units. Is this in reference to GPUs' vector processing – as the next step up from scalar to vector to tensor (number, vector, matrix)? It is also referred to as AI processing, something from Tensor Processing Unit. -- 65.92.246.43 (talk) 13:46, 20 November 2021 (UTC)
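For what it's worth, here is a minimal sketch of that scalar → vector → tensor progression (plain NumPy, purely to illustrate the rank-0/1/2 distinction the terms refer to; real scalar, SIMD and tensor units differ in far more than this):

    import numpy as np

    s = 2.0                           # scalar: a single number (rank 0)
    v = np.array([1.0, 2.0, 3.0])     # vector: a 1-D array (rank 1)
    m = np.eye(3)                     # matrix ('tensor'): a 2-D array (rank 2)

    s2 = s * s     # scalar unit: one multiply per operation
    v2 = v * v     # vector (SIMD) unit: elementwise multiply across lanes
    m2 = m @ m     # tensor unit: a whole matrix multiply per operation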

Merge Proposal: Deep learning processor

The page Deep learning processor has useful information, but it appears to be a synonym for AI accelerator. The first example of a Deep learning processor is the Tensor Processing Unit, but according to Tensor Processing Unit, the Tensor Processing Unit is an AI accelerator.

Given that AI accelerator has more traffic and more inbound links, and Deep learning processor is out of date (its tables all end in 2019), I propose that Deep learning processor be merged into AI accelerator. Lwneal (talk) 05:25, 10 August 2023 (UTC)

That's a good idea. The two articles are basically the same thing. The photonic system is in both pages. I think the research groups are just posting their ideas. 50.104.65.211 (talk) 05:06, 17 August 2023 (UTC)
After reading some more about the two terms, it seems that most sources now treat them as true synonyms. So I agree that they should be merged. However, I do think there should be a mention of the distinction between the AI accelerators that are just for inference (like edge compute devices) and those that accelerate general training as well. That's what I was initially led to believe the difference between the two was, anyway. Errata-c (talk) 14:20, 26 August 2023 (UTC)
Merge. Little to no distinction between the terms. 195.50.217.133 (talk) 21:11, 28 August 2023 (UTC)
Merge. Actually, both pages need a lot of work. We can merge them and focus on improving one page only. Digital27 (talk) 15:34, 13 November 2023 (UTC)
Consensus is to Merge. Lwneal (talk) 03:25, 28 January 2024 (UTC)

Intel Innovation 2023: Technologies to Bring AI Everywhere

It seems that we are missing the Intel Gaudi2 AI hardware accelerators, as discussed in the Intel Innovation 2023: Technologies to Bring AI Everywhere article. Rjluna2 (talk) 19:43, 20 September 2023 (UTC)

Groq? Distinguish inference from training?

I'm trying to understand where Groq would fit into the current outline. Perhaps in a new section talking about dataflow optimization and deterministic architectures? And as noted by @Errata-c, it would make sense to make a high-level distinction between inference (which clearly benefits from those architectures, as Groq demonstrates) and training uses, which presumably wouldn't. ★NealMcB★ (talk) 00:04, 11 May 2024 (UTC)
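To make that inference/training distinction concrete, here is a minimal hedged sketch (a toy one-layer model in NumPy; not Groq's or anyone else's actual API). Inference is a fixed feed-forward computation, which is why a deterministic dataflow architecture can schedule it ahead of time; training adds a backward pass and weight updates, so the computation changes every step:

    import numpy as np

    def forward(w, x):
        # Inference: a fixed, feed-forward pass with static dataflow.
        return np.tanh(w @ x)

    def train_step(w, x, target, lr=0.01):
        # Training: forward pass *plus* gradient computation and a
        # weight update, so weights and memory traffic change each step.
        y = forward(w, x)
        err = y - target                          # d(MSE)/dy
        grad = np.outer(err * (1.0 - y ** 2), x)  # chain rule through tanh
        return w - lr * grad

    w = np.random.rand(4, 8) * 0.1
    x = np.random.rand(8)
    y = forward(w, x)                  # inference only
    w = train_step(w, x, np.zeros(4))  # one training update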

TPU

As prominent as the TPU is, I believe a comparison section with the TPU is necessary. Maybe even with the CPU and GPU. 37.205.105.179 (talk) 23:31, 25 June 2024 (UTC)