Talk:High Efficiency Video Coding

Tidying up

Needs to be tidied up a bit. Should really be updated once the JCT-VC meeting report and subjective test reports are completed. Iainerichardson (talk) 17:39, 3 May 2010 (UTC)

The Features section sure needs to be updated; however, there is too much technical information and there are too many proposals to make a short summary. Somebody with a good understanding of video coding technology is needed. Might as well wait for a summary of the proposals at the July 2010 meeting to update the section. --Dmitry (talk · contribs) 06:47, 14 June 2010 (UTC)
It seems like people were talking about cleaning up this article almost 4 years ago. I've added some tags to help encourage a cleanup of the article. TekBoi [Ali Kilinc] (talk) 23:13, 21 February 2014 (UTC)
The article has changed greatly in the last 4 years. The number of section headings doesn't violate any Wikipedia policy and is lower than in many of the featured articles I have read, the prose size of the article is below the point at which splitting is recommended, and I don't see how the article has a problem with quotations when it has only a few short quotes in it. --GrandDrake (talk) 17:33, 22 February 2014 (UTC)

Tools update

I added more descriptive text on each of the coding tools and removed the update note. Pieter3d (talk) 07:54, 8 July 2012 (UTC)

"High Efficiency Video Coding" is accurate

The article was recently moved without discussion and had to be moved back with a requested move. I would mention that "High Efficiency Video Coding" is accurate: it is used on the website of the Joint Collaborative Team on Video Coding, the organization that is developing HEVC, and in the HEVC draft specification. --GrandDrake (talk) 00:55, 13 November 2012 (UTC)

Yes, it is the best of all. Mhiz angel (talk) 13:00, 7 September 2019 (UTC)

The size of the levels chart for HEVC

The concern I have with having a lot of examples in the levels chart is that it harms the readability of the chart. If you think that those examples are needed on Wikipedia, what do you think about the idea of having them in an HEVC levels article? --GrandDrake (talk) 06:47, 27 November 2012 (UTC)

  • If you remove resolutions from the levels table, sync this with the relevant table from H.264/AVC. I do not understand their choice of resolutions either (and it used to be much leaner as well), so I can't give an opinion about including very similar resolutions, but I strongly feel these tables should be cross-synced between the two articles, so that direct comparisons can be easily made. --Dmitry (talk · contribs) 19:44, 27 November 2012 (UTC)
Created the High Efficiency Video Coding tiers and levels article, which has a chart with most of the maximum level resolutions for H.264/MPEG-4 AVC, including resolutions such as 1280×1024, 3672×1536, and 4096×2304. --GrandDrake (talk) 03:01, 28 November 2012 (UTC)

Proposal to move article to H.265/HEVC

Since the ITU announced in a press release that it will use H.265 for HEVC, I propose that the article be moved to H.265/HEVC. --GrandDrake (talk) 00:35, 28 January 2013 (UTC)

Actually, that press release primarily refers to the standard as HEVC rather than H.265. It just says that ITU-T H.265 is one of the two ways the standard is formally identified, but continues to use the name HEVC in the rest of the press release. —Mulligatawny (talk) 18:49, 29 January 2013 (UTC)
I agree. Even if the final ITU nomenclature is H.265, the current name has already achieved considerable notability. "High Efficiency Video Coding" is the most common name, reflects the content of the article quite well, and is a much friendlier one to the average user (in fact, I would rather rename H.264/MPEG-4 AVC to Advanced Video Coding for the same reason). And if we rename to "H.265/HEVC", some nitpicker will certainly come along and rename it to "H.265/MPEG-H Part 2 HEVC", etc. --Dmitry (talk · contribs) 16:26, 2 February 2013 (UTC)
I agree that some of the video standard articles on Wikipedia don't follow Wikipedia:Article titles, with one example being H.262/MPEG-2 Part 2. I would strongly oppose any proposed article title for HEVC that includes "Part 2", since that would be confusing to the average reader and overly precise. I guess it would make sense to wait and see whether HEVC will remain the most common name before considering any changes to the article title. --GrandDrake (talk) 06:46, 8 February 2013 (UTC)

IBDI

IBDI is listed as a coding tool. However, there is no IBDI coding tool in the FDIS. IBDI (internal bit-depth increase) can be realized with any video format by padding zero LSBs before encoding and truncating/rounding/dithering after decoding. As such it is an encoder feature, not a coding tool of the video format. IBDI is only possible if the maximum allowed bit depth of the video format is greater than the input bit depth. If the input video is 8-bit, then any format which allows bit depths greater than 8 bits can perform IBDI. But in the broad sense, any video format with more than 2-bit bit depth can perform IBDI for 1-bit input, so it can be classified as performing IBDI. Practically, though, I think the "more-than-8-bit" (strict sense) definition is more useful. For example, H.264/AVC can also use more than 10-bit bit depth, but there it's rarely referred to as IBDI (see e.g. "10bit-depth output information" on [1]). From my point of view, IBDI should either be removed from the HEVC article or added to the H.264/AVC article (and to the article of any lossy video codec which supports bit depths over 8 bits), for the sake of consistency. Conquerist (talk) 21:57, 27 April 2013 (UTC)
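For illustration, a minimal sketch of the pad-then-round idea described above (the names and the NumPy framing are mine, not anything from the FDIS):

<syntaxhighlight lang="python">
import numpy as np

def ibdi_pad(samples_8bit, extra_bits=2):
    # Encoder side: append zero LSBs, e.g. 8-bit -> 10-bit internal depth.
    return samples_8bit.astype(np.uint16) << extra_bits

def ibdi_restore(samples_10bit, extra_bits=2):
    # Decoder side: round back down to the original bit depth.
    offset = 1 << (extra_bits - 1)
    rounded = (samples_10bit + offset) >> extra_bits
    return np.clip(rounded, 0, 255).astype(np.uint8)

frame = np.random.randint(0, 256, (1080, 1920), dtype=np.uint8)
internal = ibdi_pad(frame)         # feed this to any >8-bit-capable encoder
restored = ibdi_restore(internal)  # lossless round trip absent quantization
assert np.array_equal(frame, restored)
</syntaxhighlight>

As the sketch shows, nothing in the bitstream format itself is involved; the padding and rounding happen entirely outside the codec, which is the point being made above.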

That explains why there isn't any mention of IBDI in the HEVC draft specification or in "Overview of the High Efficiency Video Coding (HEVC) Standard", which covered HEVC coding tools. Since IBDI is not an HEVC coding tool, I removed that section from the article and moved the references to the statement in the Main 10 section about improved coding efficiency when video is encoded at a higher bit depth. --GrandDrake (talk) 02:43, 28 April 2013 (UTC)

Opening remarks

HEVC is said to improve video quality and to double the data compression ratio compared to H.264/MPEG-4 AVC.

I'm not sure I like the wording of this. "double the data compression ratio" implies (as is mentioned further down) a measurement at the same subjective quality (otherwise simply "doubling the data compression ratio" is meaningless since you can select whatever ratio you like in both H.264 and H.265). Either you can improve the video quality or you can keep the same quality and double the compression ratio. David (talk) 11:41, 8 August 2013 (UTC)

Changed the statement so it only mentions the data compression ratio. --GrandDrake (talk) 03:37, 9 August 2013 (UTC)

What we are trying to say is that HEVC is said to have twice the compression efficiency of AVC. This means that HEVC can encode video at the same subjective visual quality, using only half the bit rate. Alternatively, a doubling of compression efficiency can provide substantially higher visual quality at the same bit rate as AVC. Tvaughan1 (talk) 06:05, 23 July 2015 (UTC)
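To make the arithmetic explicit (my own illustration; the figures are invented round numbers, not from the sources): with the compression ratio defined as raw bits over coded bits, halving the bit rate at equal quality is exactly a doubling of the ratio:

<math>\mathrm{CR} = \frac{R_\mathrm{raw}}{R_\mathrm{coded}}, \qquad R^\mathrm{HEVC}_\mathrm{coded} \approx \tfrac{1}{2}\, R^\mathrm{AVC}_\mathrm{coded} \;\Longrightarrow\; \mathrm{CR}^\mathrm{HEVC} \approx 2\, \mathrm{CR}^\mathrm{AVC}</math>

For example, raw 1080p25 4:2:0 8-bit video is about 622 Mbit/s; AVC at 8 Mbit/s gives roughly 78:1, and HEVC at the same subjective quality at about 4 Mbit/s gives roughly 156:1. The two phrasings above are just two ways of spending the same gain.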

Should "bits per color" be changed to "bits per sample"?

I used "bits per color" in the article since the term was more common on Google search results but the HEVC standard does say "bit depth of the samples" and documents from the people who are working on HEVC, such as "Overview of the High Efficiency Video Coding (HEVC) Standard", use "bits per sample". Should "bits per color" be changed to "bits per sample"? --GrandDrake (talk) 22:55, 10 August 2013 (UTC)Reply

No. "Bits per color" or "bpc" is more common and more appropriate for overview sections.
However, in the technical sections, we should refer to YUV color model as in the text of the standard, using "luminance/chrominance sample" and correspondingly "bits per luma/chroma sample", "bits per sample", etc. instead of RGB terms such as "color" or "bpc". --Dmitry (talkcontibs) 06:40, 11 August 2013 (UTC)Reply
I changed the first use of "bits per color" to "bits per color/sample" so people will know that they are the same thing, changed "bits per color" to "bits per sample" in the technical sections, and added a note about luma/chroma bit depth to the profiles chart. --GrandDrake (talk) 21:21, 11 August 2013 (UTC)
A "color" is typically represented as a set of three ("tristimulus") values (or perhaps more – e.g., see tetrachromacy and pentachromacy). Each of the three values represents the intensity of a particular primary such as red, green or blue, or the coordinate along one axis in a transformed color representation system such as YCbCr. Please see color and color space – but it takes three values to make a color. I've heard of "bpc" as an abbreviation for "bits per channel" or "bits per component" before, which makes sense, but I've never heard of it being called "bits per color", which seems like a difficult concept (especially with chroma subsampling). The bit depth discussed in the article is not bits per color – it is bits per sample or bits per channel or bits per component or bits per color component, but it is not bits per color. —Mulligatawny (talk) 03:41, 12 August 2013 (UTC)Reply

I was concerned about this, too, as bit depths are usually bits per color (meaning per primary color component). But in the referenced standards, the bit depths are for luma and chroma components, not colors. And those are differently sampled, in general. So bits per sample, though an unusual term in color systems, seems to be reasonably suitable here. Better than bits per color, at least. Dicklyon (talk) 03:51, 12 August 2013 (UTC)
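A tiny sketch of the accounting that makes "per sample" the workable unit (my illustration; the constants are just the uncompressed chroma sample counts per pixel, nothing from the standard's text):

<syntaxhighlight lang="python">
# Chroma samples (Cb + Cr together) per luma sample for each chroma format.
CHROMA_PER_PIXEL = {"4:0:0": 0.0, "4:2:0": 0.5, "4:2:2": 1.0, "4:4:4": 2.0}

def bits_per_pixel(luma_bits, chroma_bits, chroma_format):
    # Average uncompressed bits per pixel; luma and chroma depths may differ,
    # which is why "bits per color" has no clear meaning here.
    return luma_bits + CHROMA_PER_PIXEL[chroma_format] * chroma_bits

print(bits_per_pixel(8, 8, "4:2:0"))    # 12.0 (the familiar 8-bit 4:2:0)
print(bits_per_pixel(10, 10, "4:2:0"))  # 15.0 (Main 10 style)
print(bits_per_pixel(8, 8, "4:4:4"))    # 24.0
</syntaxhighlight>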

The News on our HEVC decoder on ARM has been removed

Hi,

I had submitted a press release on an HEVC decoder on ARM to the HEVC page under implementations and products, but it seems to have been deleted. Could you let me know why? Should I send it for review? — Preceding unsigned comment added by 203.201.61.238 (talk) 07:25, 8 July 2014 (UTC)

It was moved to the High Efficiency Video Coding implementations and products article. Only the more notable announcements are kept in this article and there are already several software HEVC decoders for ARM. --GrandDrake (talk) 02:52, 12 July 2014 (UTC)

Licensing

I'd appreciate a well-sourced background on licensing, trademark/patent locks, open-source legality and the other stuff that makes open-source implementors' and users' lives miserable. I see open-source code has been released, but I don't know about its legal problems or usage restrictions. Maybe someone feels like digging. --grin 15:11, 13 March 2015 (UTC)

From what I read in this article and others related to video codecs, you need a license for commercial use only; e.g. some video streaming services like Amazon need to pay a small charge from their revenue (0.5%). Embedded devices like Blu-ray players, or any video encoding/decoding hardware, need a license to use this technology.
End users do not need to pay anything, as this codec will be embedded in the software/hardware they use. --Salem F (talk) 11:39, 4 November 2015 (UTC)
Obviously I don't mean what's already in the article, and by all means no anecdotal evidence. :-) You probably don't see the meaning of the question, since "will be embedded" doesn't help, for example, open-source programmers know who'll sue them next week and why. A detailed background would mention sources on who the licensor is, on what basis, what's allowed and what's not, etc. --grin 13:09, 17 February 2016 (UTC)
My take:
The problem with royalty-bearing formats is, firstly, that putting a price on a software copy is detrimental to no-cost-software ecosystems – it makes such software either unviable or illegal. Note that it doesn't matter how open/free the implementation is; if it implements a patented format, the patent holders can demand whatever. The demand may be a price per copy, as in this case, which is reasonable and bearable for a product sold for money… But not so much otherwise: It was a deal-breaker for Firefox's support for H.264 on platforms without this support, until Cisco graciously offered to pay their licensing costs (see OpenH264). Most Linux distributions ship without out-of-the-box support for H.264, AAC, even MP3, and instead require the user to explicitly enable a "restricted" repository for that, described as "may be illegal in some countries".
Secondly, this is a format war – a collective path dependence tug-of-war that tends toward collective lock-in on a global scale, meaning an eventual end to individual choice. One could therefore hope to standardize on something everyone would be able to adopt eventually (i.e. no glaring interoperability problem), yet we are considering something that seems incompatible with the open/free software ecosystem as we know it. —84.214.220.135 (talk) 22:50, 13 September 2016 (UTC)
Quoting the x265 developers[2]: "Software decoding/encoding on consumer devices must be royalty free". —84.214.220.135 (talk) 23:33, 13 September 2016 (UTC)

Chroma subsampling

Hi all, correct me if I'm wrong, but should the following statement: (supporting higher bit depths and 4:0:0, 4:2:2, and 4:4:4 chroma subsampling formats)

actually read: (supporting higher bit depths and 4:2:0, 4:2:2, and 4:4:4 chroma subsampling formats)?

Pm 1982 (talk) 01:30, 20 April 2015 (UTC)

No, that is not correct. What was added was support for monochrome (4:0:0), not 4:2:0, which was already supported in the first version. Also, it seems a bit strange to refer to 4:4:4 as a chroma subsampling format, since the 4:4:4 format does not subsample the chroma. The standard uses the term "chroma format" or "chroma sampling format", not "chroma subsampling format". I will edit the statement to improve its clarity. Mulligatawny (talk) 18:12, 20 April 2015 (UTC)
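To make the 4:0:0 vs. 4:2:0 distinction concrete, a small sketch (mine, purely illustrative) of the chroma plane dimensions each format implies for a given luma plane:

<syntaxhighlight lang="python">
def chroma_plane_size(w, h, chroma_format):
    # (width, height) of each chroma plane, given luma dimensions w x h.
    return {
        "4:0:0": (0, 0),            # monochrome: no chroma planes at all
        "4:2:0": (w // 2, h // 2),  # subsampled in both directions
        "4:2:2": (w // 2, h),       # subsampled horizontally only
        "4:4:4": (w, h),            # not subsampled, hence the naming quibble
    }[chroma_format]

print(chroma_plane_size(1920, 1080, "4:0:0"))  # (0, 0) -- added in version 2
print(chroma_plane_size(1920, 1080, "4:2:0"))  # (960, 540) -- in version 1
</syntaxhighlight>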

Wording

  • There are various algorithms for data compression, especially lossy ones for video/audio data
  • There are various implementations of such, either in software or as some ASIC; in the case of software, it can be compiled and the compiled binary executed on various CPUs, GPUs, DSPs, ...
  • There are various file formats
  • There are various standards (e.g. ISO 216)
  • And then there is a huge collection of buzzword bingo, e.g. "video compression standard", "video coding format", "video container", etc. Why? User:ScotXWt@lk 13:15, 15 May 2015 (UTC)
Whether HEVC can be considered an algorithm gets into the issue of software patents, which is covered by several other Wikipedia articles. As for "video container", I couldn't find it in the article, but the reason there is a section on containers is that HEVC can be stored in different containers, a few of them being m2ts, mp4, and mkv. --GrandDrake (talk) 02:15, 27 May 2015 (UTC)

Please help clean this article

If there's a more wonderful example of what can go wrong in a modern wiki article, I can't find it (well, maybe H.264/MPEG-4 AVC, which is equally terrible). It's filled with jargon that can be easily explained, it's terribly disorganized, and it's horribly difficult to edit due to the overuse of inline cites. So, moving forward, I am going to try to slowly beat the text into shape. To start with:

  • the lede is supposed to be a clear and concise overview of the topic. A reader should be able to stop at the bottom of the lede and understand the basics of the body. Ledes should not introduce terms or topics that are not covered in the body (it's an overview of the body). As such there is a special rule that the lede does not normally need to be cited. This lede consisted of an enormous amount of jargon and history that does not help the reader understand the topic, did not actually summarize the system, and failed to mention the problems with licensing at all. If you don't like the lede as I have changed it, IMPROVE IT, don't just revert it back into the horrible state it used to be. We do not need to have a complete description of every group involved, every detail of the releases, and every acronym in the article; leave that in the body!
  • citations are needed wherever someone might challenge a statement, and/or where the citation provides a singular overview of the topic that the user might want to refer to. This is not the case in this article, where citations are piled on top of citations that already cover the statements being cited. Many of these appear to be stuffing, added simply to make the article look more robust. Moreover, the article is literally filled with the same cite being used over and over in the same paragraph, which is entirely unneeded. I have started the process of moving multi-use cites into a bibliography and removing single-use cites on statements that are already covered. I have also removed some of the many, many examples where the same cite is used multiple times in the same paragraph for no obvious reason, although the amount of work here is enormous.
  • in-line referencing, that is, cites placed in the body using <ref></ref> tags, should be avoided at all costs. They make the edit text very difficult to read, and especially to edit. Our goal should be to produce the same article while making it as easy as possible for future editors to add material. So if you are going to add a citation, please check that it isn't already covered by one of the ones we already have. And if you're going to use that cite more than once, please put the cite tag at the bottom and use SFN to refer to it. This can make the edit text orders of magnitude easier to maintain.

Thank you for reading my rant; I now return you to your regularly scheduled edit wars. Maury Markowitz (talk) 16:01, 2 January 2016 (UTC)

The lede should contain the redirects to this article, which include MPEG-H Part 2 (internal name used by the ISO/IEC), H.265 (internal name used by the ITU), and JCT-VC (the joint team that develops HEVC). That is why those terms were in the first paragraph. --GrandDrake (talk) 05:29, 4 January 2016 (UTC)
That's certainly nothing I've seen in the past - to the contrary, it is well established that inlinks can point to specific subsections. But if you feel so strongly about this, that's fine; let me at least re-arrange it so it's not entirely gobbledygook - these terms do not help the reader understand the topic, and THAT is the primary purpose of the lede. Maury Markowitz (talk) 17:07, 4 January 2016 (UTC)
Maury Markowitz, I agree that helping the reader understand the topic is the primary purpose of the lede. However, mentioning incoming redirects is mandated by the WP:R#ASTONISH guideline. Re-pointing inlinks to specific subsections helps for things associated with HEVC but not a synonym for HEVC. So I did that for JCT-VC. But re-pointing to subsections doesn't seem possible for popular synonyms of HEVC as a whole. --DavidCary (talk) 19:43, 27 December 2021 (UTC)
@DavidCary: Whoa, necrothread reply record! Maury Markowitz (talk) 20:04, 27 December 2021 (UTC)

Coding efficiency: 50% improvement only over the unoptimized x264 (from 2006) and early reference implementations. In practice it's closer to 25%

I just encoded a movie with HandBrake using x265 and was disappointed not to see a factor-of-2 efficiency improvement over x264 with standard settings. After reading http://www.compression.ru/video/codec_comparison/hevc_2015/MSU_HEVC_comparison_2015_free.pdf I understand that you see those 50% numbers only in comparisons with rather old, unoptimized H.264 encoders (and reference implementations). The up-to-date (2015) versions of x265 vs x264 (both among the best available encoders) give a 20-25% improvement!

So maybe one should (additionally) mention the more realistic values - or wait until x265 is further optimized :). — Preceding unsigned comment added by 134.3.100.150 (talk) 10:56, 14 May 2016 (UTC)

Numerous studies have shown a compression efficiency benefit of roughly 50%, including a study done recently by Netflix. Academic studies generally compare the HEVC HM reference encoder to the AVC JM reference encoder. Practical studies often compare x264 to x265. The compression efficiency gain is typically measured at 1080p or higher resolutions (the gain is smaller for smaller picture sizes), at typical consumer bit rates. The percentage efficiency gain is lower at high bit rates (if you give any codec enough bits, the quality approaches lossless compression). By the way, I founded and run the x265 project. Tvaughan1 (talk) 18:58, 26 May 2017 (UTC)
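For context, the percentages in such studies are usually Bjøntegaard delta-rate (BD-rate) figures: fit log-bit-rate against PSNR for each codec, then average the horizontal gap between the curves. A minimal sketch of the computation (the rate/PSNR points below are invented, purely to show the mechanics):

<syntaxhighlight lang="python">
import numpy as np

def bd_rate(rates_a, psnrs_a, rates_b, psnrs_b):
    # Fit log(rate) as a cubic in PSNR for each codec, integrate both fits
    # over the overlapping PSNR range, and convert the mean gap to a percent
    # bit-rate change of codec B versus codec A (negative = B saves bits).
    fit_a = np.polyfit(psnrs_a, np.log(rates_a), 3)
    fit_b = np.polyfit(psnrs_b, np.log(rates_b), 3)
    lo = max(min(psnrs_a), min(psnrs_b))
    hi = min(max(psnrs_a), max(psnrs_b))
    int_a = np.diff(np.polyval(np.polyint(fit_a), [lo, hi]))[0]
    int_b = np.diff(np.polyval(np.polyint(fit_b), [lo, hi]))[0]
    return (np.exp((int_b - int_a) / (hi - lo)) - 1) * 100

# Invented points (kbit/s, dB) for an "AVC-like" and an "HEVC-like" run:
print(bd_rate([2000, 4000, 8000, 16000], [34.0, 36.5, 39.0, 41.5],
              [1400, 2800, 5600, 11200], [34.1, 36.7, 39.2, 41.7]))
</syntaxhighlight>

Whether the answer comes out near "50%" or "25%" then depends entirely on which encoders, settings, resolutions and quality ranges produced the points, which is exactly the discrepancy discussed above.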

External links modified

Hello fellow Wikipedians,

I have just modified 2 external links on High Efficiency Video Coding. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FAQ for additional information. I made the following changes:

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 15:24, 3 November 2017 (UTC)

Does the section of "Implementations and products" really add anything?

Does the section "Implementations and products" really add anything? If so, it is far from complete, since basically all mobile phones and media players support HEVC. It has nothing to do with the actual codec, only the industry adoption of it. Could this not be stated more briefly? — Preceding unsigned comment added by Jsvensso (talk · contribs) 14:23, 31 December 2017 (UTC)

Implementations and products probably does not need to state every Nvidia card

It does not seem necessary to list each individual Nvidia graphics card that supports HEVC, especially since they all feature the exact same HEVC decoder and are within the same generation or a successive one. 192.150.137.20 (talk) 06:14, 24 May 2019 (UTC)

Cost of decoding

This article makes countless mentions of the same numbers we find everywhere about H.265 compression efficiency (50% smaller at 720p, 60% at 1080p compared to H.264), but says surprisingly little about the other side of the trade-off: the computing cost of decoding. On that front, how does it compare with other codecs? Except for some absolute numbers on some special-purpose hardware, the only mentions I found in the rather long article were:

Effective use of these improvements requires much more signal processing capability for compressing the video, but has less impact on the amount of computation needed for decompression.
— in #Concept, no reference given.

The tests showed that large CTU sizes increase coding efficiency while also reducing decoding time.[110]
— in #Coding efficiency, the reference given is [1].

HEVC was designed to substantially improve coding efficiency compared with H.264/MPEG-4 AVC HP […] at the expense of increased computational complexity.[16] […] HEVC encoders can trade off computational complexity, compression rate, robustness to errors, and encoding delay time.[16] Two of the key features where HEVC was improved compared with H.264/MPEG-4 AVC was support for higher resolution video and improved parallel processing methods.[16]
— in #Features, the reference given is [2].

Only broad trends, no numbers, no orders of magnitude, no comparison. Moreover, while these sentences may lead the inattentive reader to believe H.265 is more efficient at decoding than its predecessors, these are in fact internal comparisons (the overhead of decoding H.265 with respect to its predecessors is less than the overhead of encoding H.265 with respect to its predecessors; H.265 with large CTUs has reduced decoding time with respect to H.265 with small CTUs).

At a time when energy consumption is a pressing concern (more so than storage and bandwidth), this lack of interest astonishes me.

As a non-expert, a quick search on Google gave me:

Because of the significant processing power required to decode either of these two new codecs [H.265 and AV1], it is impractical to expect devices to play them back unless they have been specifically designed to support hardware decoding.
— HEVC (H.265): What is it and Why Should You Care? 2018-09-24

HEVC/H.265 comes with the trade-off requiring almost 10x more computing power [Note: this doesn’t say whether this applies to encoding, decoding or both; but later on in the same paragraph, the author evokes decoding]. […] Even though some softwares such as VideoLAN are capable to decode such codec, software decoding, although more flexible, is not an option since hardware decoding is usually faster and saves battery life tremendously.
— H.264 vs H.265 — A technical comparison. When will H.265 dominate the market? 2016-06-09

Higher efficiency usually comes with a cost: complexity. H.265 is far more difficult to encode as a result of its complexity, and can require up to 10 times the compute power to encode at the same speed as H.264. […] Decoding is less of an issue, but load roughly doubles compared to H.264. […]

Any computer can decode H.265 using software (in theory, at least). […] Software decoding isn’t the best option, however, because it’s not terribly efficient.[…]
— Everything you need to know about h.265/HEVC on your PC 2015-02-04

So it does appear that H.265 decoding is more costly, and requires a hardware upgrade to become an option without wasting too much energy and computing power. Could some domain expert add input on this subject?

Regards, Maëlan 14:18, 22 February 2020 (UTC)

More costly decoding in comparison to previous technologies is something that happens with all codecs. Efficiency improvements come at the cost of more complexity, and some of this complexity is on the decoder side of things. So in that regard, HEVC isn't all that special. However, as hardware manufacturers incorporate dedicated decoding hardware into their products, increased decoding complexity becomes less and less relevant, since special-purpose hardware is significantly more efficient than software-based approaches. With hardware decoding, the power usage drops to near-insignificant levels, at least when compared to software.
It's now been almost seven years since HEVC was introduced. At this point hardware decoders are widespread enough that at the pace people typically upgrade their devices, "having to" buy a new device really isn't a concern anymore, unless your hardware is five years old or more.
The lack of solid, real-world performance numbers for software decoding is regrettable but not that surprising. After all, for a codec to become widely used, software decoding needs to become niche. So there really isn't all that much commercial longevity in writing a software decoder and showing off how well it performs. I haven't really paid much attention to this space, but the only benchmark I've come across is this one by Ronald S. Bultje, who worked on creating VP9 at Google and founded a video technology company called Two Orioles. The post is focused on VP9 decoding, but does include comparisons to two different, open-source HEVC software decoders. While this is a blog post and therefore a self-published source, it might be usable due to the writer's track record, though I'm not that experienced in how self-published sources should be assessed (I know they're not automatically unusable, though). There's also a possibility of bias that should be taken into account, since the writer is one of the creators of a competing video coding standard and various products related to it.
Another factor in the lack of software decoding benchmarks is probably that software can improve with updates. And not only do software decoders improve over time, but encoders also receive enhancements and are able to deliver more quality for less bits. So when normalised for video quality, the exact same version of a software decoder will perform better if it's given a same-quality video that's generated by a more efficient encoder. The method/metric of quality normalisation will also affect the test; do you use PSNR, SSIM, VMAF, or some other method of making sure the videos you feed the encoders you're testing are the same quality?
So as to the lack of solid numbers for software decoding complexity, it can be rather difficult to do as there are so many variables, the results can be different a few months after you conduct your tests, and all of your results will be almost useless when hardware decoders eventually take over. In the light of all of that, doing an encoder comparison instead is more enticing and probably easier to find the funding for.
When it comes to estimates, you'll probably find more sources if you use the term "decoding complexity" in your searches. A quick search provided the following:
TL;DR: Testing is hard and since the hope is that software decoding will eventually become unnecessary, there isn't very much incentive. --Veikk0.ma 18:01, 22 February 2020 (UTC)
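For anyone who wants at least relative numbers on their own machine, a crude harness is sketched below (file names are hypothetical; this times one software decoder implementation via ffmpeg, not "HEVC" in the abstract):

<syntaxhighlight lang="python">
import subprocess, time

def decode_seconds(path, threads=1):
    # Wall-clock time for ffmpeg to software-decode a file, discarding output.
    t0 = time.perf_counter()
    subprocess.run(
        ["ffmpeg", "-v", "error", "-threads", str(threads),
         "-i", path, "-f", "null", "-"],
        check=True)
    return time.perf_counter() - t0

# Same source clip encoded to comparable quality with x264 and x265:
for name in ["clip_h264.mp4", "clip_hevc.mp4"]:
    print(name, round(decode_seconds(name), 2), "s")
</syntaxhighlight>

All the caveats above still apply: the ratio will move with decoder version, threading and content.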
Good points! The energy cost of bandwidth and storage space saved (a negative value) will tend to cancel out that needed for additional processing. But I just noticed it said, in <ref name=HEVCTechnicolorJuly2012Overview>, "The overall requirements <sic> of HEVC is to improve the compression efficiency by a factor of at least two compared to the H.264/AVC compression standard" and "The encoding time is increased only by 10% and the decoding time by 60% in its best encoding profile and coding structure, compared to H.264/AVC reference SW model [4]" (same place). I note how in the author's mind it's implied/assumed that "efficiency" is mainly (even essentially) about size, not processing cost. I mention it since you were looking for it. I agree with the unstruck part: "With hardware decoding, the power usage drops to near-insignificant levels, at least when compared to software." Surely the relative energy cost of hardware-based decoding is more important than that of software-based decoding. If Maëlan's concern is that the power usage of a billion hardware decoders may reach significant levels, I say: interesting idea, but probably not, especially compared to the significant power usage of ASICs performing proof-of-work to support cryptocoin distributed ledgers. --50.201.195.170 (talk) 10:55, 21 January 2021 (UTC)

True? Detail: Who cares?

"Additionally, in the Main 10 profile 8-bit video can be coded with a higher bit depth of 10-bits, which allows improved coding efficiency compared to the Main profile." was supported by 4 sources. I removed one and found a second (<ref name=HEVCTechnicolorJuly2012Overview>) doesn't actually support the claim either. Haven't checked the other two, but have a feeling they don't support it either and wonder if it's true and and I'm signing off soon. The Ericsson source is vague.

Either way, the section, especially the last paragraph of the section (for which <ref name=HEVCApril2013M0166> is the ref), seems to have way more detail than is appropriate for a crude (PSNR) test, esp. given 114. --50.201.195.170 (talk) 10:55, 21 January 2021 (UTC)

True; confirmed the other two don't support it. Also, "n-bit" is correct but "n-bits" should be "n bits"; fixing (it will match the sources better too), and adjusting the detail. --50.201.195.170 (talk) 10:44, 6 February 2021 (UTC)

Map of Adoption

I would like to see in the article a map of adoption in terrestrial broadcasting, showing where this codec is the standard. — Preceding unsigned comment added by 46.187.202.138 (talk) 04:12, 18 June 2020 (UTC)

x265 in package x265

Follow the developer's latest release so that every update reaches you.

The ruler of the realm, the king Qasim; no might compares to his. 134.35.169.125 (talk) 08:07, 1 December 2022 (UTC)

Profile mumble

Our descriptions of the profiles of versions 2 and 3+ quickly turn into an incomprehensible wall of text, consisting of:

  • 50% profile names or a list of such names
  • 45% repetition of whatever a similar profile does, e.g. Main 12 has 4:0:0 and 4:2:0 just like Main 10.
  • 5% new stuff, like what makes Main 12 not Main 10: more bits.

This is not useful. We gotta find a way to make it less repetitive, but still well-organized. Like two tables, one for features and one for mandatory downward compatibility, maybe. Artoria2e5 🌉 09:49, 21 November 2023 (UTC)

  1. ^ Ohm 2012.
  2. ^ Sullivan 2012.