Talk:JPEG/Archive 1
This is an archive of past discussions about JPEG. Do not edit the contents of this page. If you wish to start a new discussion or revive an old one, please do so on the current talk page.
JFIF vs. JPEG Interchange format - which one extends what?
The JPEG page states "The file format is known as 'JPEG Interchange Format', as specified in Annex B of the standard. This is often confused with the JPEG File Interchange Format (JFIF), a minimal version of the JPEG Interchange Format that was deliberately simplified so that it could be widely implemented and thus become the de-facto standard.", while the JFIF page states that JFIF is an extension of JPEG Interchange Format. So, what is it? Stolsvik (talk) 14:27, 11 January 2008 (UTC)
- It's a bit of both! It's a contraction of the JPEG Interchange Format in that it uses a subset of the elements in the JPEG standard. It's also an extension of the JPEG Interchange Format in that they created a standardized JFIF header, which was necessary so that decoders could reliably identify a file meeting the JFIF standard. That JFIF header is written in a block with an APP0 marker, which is part of the JPEG standard but whose contents are not defined in the JPEG standard (they are application specific). This formalization of the contents of the JFIF header within the previously defined APP0 block is the only 'extension'. Ian Fieggen (talk) 22:10, 11 January 2008 (UTC)
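For anyone wanting to see what that APP0/JFIF header looks like in practice, here is a minimal Python sketch (an illustration, not a full parser) that checks whether a file follows the byte layout described above: SOI marker, then an APP0 segment whose payload begins with the identifier "JFIF" and a zero byte. Real-world files may place other APPn segments (e.g. Exif's APP1) first, so this only detects the plain JFIF case.

# Minimal sketch, not a robust parser: check whether a JPEG file starts with
# SOI (FF D8) followed immediately by a JFIF APP0 segment.
def has_jfif_header(path):
    with open(path, "rb") as f:
        data = f.read(11)
    if data[0:2] != b"\xff\xd8":      # SOI marker
        return False
    if data[2:4] != b"\xff\xe0":      # APP0 marker
        return False
    # Bytes 4-5 are the big-endian segment length; the identifier follows.
    return data[6:11] == b"JFIF\x00"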
Multimedia Wiki
Could some of the authors contribute to the wiki.multimedia.cx jpeg article? Dcsmith77 02:58, 21 October 2006 (UTC)
Page Appearance
Notice how the flower graphic at the top of the page blocks some of the text? Not sure how to correct this myself, if indeed it's not just an issue with my own browser? I thought it worth mentioning just in case.—Preceding unsigned comment added by 82.110.0.163 (talk • contribs) 17:20, 6 Apr 2006 (UTC)
Virus Alert to add?
There was a computer virus alert earlier this year regarding the JPEG format. Evidently, there's some way to hide a virus in a jpeg file, and have the virus activate when the jpeg is loaded in a typical viewer. This would be good info to include, but I would want to research and verify the details first. Perhaps someone will beat me to it? Wesley
- There is some weird virus that attaches itself to JPEG files. (Un)fortunately, JPEG files cannot be executed on any computer I know of, so the virus will never be activated. The writer might as well have written a computer virus that attaches itself to lamp shades, that would have the same effect. See [1].--branko
- Unless there's a buffer overrun or similar exploit for some commonly-used JPEG decoder library... -- The Anome 09:22, 1 Sep 2004 (UTC)
- iirc there was (iirc it was in a microsoft implementation) Plugwash 20:48, 2 Apr 2005 (UTC)
graphics
pl:JPEG If you would like to use some graphic for this topic.
There's an online Polish-English translator that does one sentence at a time, though results can be sketchy. (unsigned comment by Ke4roh)
- anything that uses the % sign when describing jpeg quality has no credibility whatsoever sorry Plugwash 20:48, 2 Apr 2005 (UTC)
external link
Are you sure the "Compress JPG and GIF images" link is appropriate for this article? The site looks quite commercial, but I didn't delete it since I didn't know if someone put it there on purpose or if it was simply vandalism. TheListener
- mmm link didn't work at all for me i'll try it again later. Plugwash 20:48, 2 Apr 2005 (UTC)
typo to be fixed
In the example 8 by 8 matrix to be put through the jpeg compression algorithm, there is something wrong with the entry 68 in row 6, column 6, because subtracting 128 does not give the asserted result -65 in the next displayed matrix. It is not clear which of the two matrices is wrong, except that the discrete cosine transform result matches the second matrix rather than the first. These matrices, with the same inconsistency, also appear in the second edition of the Gonzalez/Woods book Digital Image Processing, page 499. Hopefully the author of this page can clear up the inconsistency. ---- 24.118.95.84 (sig added by Cburnett)
- How's that now? Cburnett 03:58, Apr 19, 2005 (UTC)
- Not sure. Which of the two numbers was wrong? ---- 24.118.95.84 (sig added by Cburnett)
- You'll have to go into the history and check. I re-ran all the numbers and just pasted them in. Cburnett 02:11, Apr 20, 2005 (UTC)
- I contacted the original author of the example to determine which of the numbers to correct. The correct fix is to 63. ---- 24.118.95.84 (sig added by Cburnett)
- Your point is irrelevant as everything is based on the matrix here. Changing them requires changing of *ALL* the numbers. There are enough discrepancies in Gonzalez & Woods numbers that I didn't copy the numbers but generated them on my own.
- So stop reverting as you're invalidating all the numbers. Cburnett 22:47, Apr 22, 2005 (UTC)
- OK. Fine. That's what I wanted to know - that you had recalculated them all. I was in the process of writing code to do it myself, for the same reason you gave, but I am happy to have it done by you.
The matrix reformatted?
The numerical matrices are in what looks to me like a *terribly* old-fashioned font (probably through association with similar fonts I've seen in antiquated schoolbooks). They're also so wide that when viewed at 800 pixels width (which I often do when using a Favorites or History sidebar in IE6) the right hand side of some tables disappears behind the images on the right. Lee M 00:39, 3 May 2005 (UTC)
- A lot of web viewers still use 800x600; it's about 30% of the browsing public. Time to fix it. Samboy 09:02, 3 May 2005 (UTC)
- yeah its mediawiki math markup being rendered as png
- maybe there is some other way to do the matrixes that will work better (is there any particular reason for them to look like maths matrixes rather than more generic grids of numbers?) Plugwash 10:14, 3 May 2005 (UTC)
- As the author of the data, you're going to have to explain to me the difference between a matrix and a "grid of numbers".
- Regarding the "*terribley* old-fashioned font", do you never read any math articles? Like covariance matrix or List of integrals of rational functions. It's TeX. Cburnett 14:21, May 3, 2005 (UTC)
- I think we should use a different font than the one TeX uses; it doesn't look that great against the sans-serif font what we use for articles. Samboy 04:41, 16 May 2005 (UTC)
- This is the incorrect place for such a discussion... Cburnett 23:11, May 22, 2005 (UTC)
- Imo the issue at hand here is whether it's appropriate to use matrix math markup for a table of numbers that afaict has basically nothing to do with matrix algebra etc. Plugwash 12:22, 23 May 2005 (UTC)
- A matrix IS a table of numbers. MATLAB stores images as matrices and there's really no difference between a 2-D array and a matrix. Simply because the subimages in question are not used as linear algebra matrices will not compel me to agree to switch to another form of representation of the same numbers. Cburnett 16:16, May 25, 2005 (UTC)
- Mediawiki math markup goes to outputting as an image pretty quickly, even when it's feasible to represent what is desired without using an image. Just because we have a built-in tool for generating ugly images doesn't mean it's the best option in every case. I've redone the first matrix below using wiki-table markup as an example and it looks far far nicer and is more compact (helping those with narrower screens) without changing the basic layout.
 52  55  61  66  70  61  64  73
 63  59  55  90 109  85  69  72
 62  59  68 113 144 104  66  73
 63  58  71 122 154 106  70  69
 67  61  68 104 126  88  68  70
 79  65  60  70  77  68  58  75
 85  71  64  59  55  61  65  83
 87  79  69  68  65  76  78  94
version currently in article to compare
- Firstly, the HTML is not narrower than the image table on my browser. Secondly, your HTML table is, no offense, an ugly hack; even if you don't reproduce the [] effect it's still hackery.
- I've had this discussion many times about TeX vs. HTML and I don't really care to repeat it...again. Suffice it to say that your objection to using TeX is the generation of images. Again, I don't want to repeat the discussion, but the goal should be to get TeX output of the above to be renderable as HTML instead of removing perfectly correct math markup for those that don't desire to view TeX images. I prefer them.
- The TeX output is extremely foul. First of all, never minding how good or bad the font is in isolation, it doesn't match. Basic typography rule: only change fonts for a very good reason. Second of all, it does look a bit archaic. (The badly named old "Computer Modern" perhaps? Whatever, it sucks.) Third of all, line thickness varies dramatically. The "1" and "4" are thin/light. The "3" and "9" and "6" are thick/dark. The "0" is even lopsided. Frankly it's an abomination. Adding insult to injury, the result can't be selected with a mouse and it makes the page load slower. It probably doesn't work all that well with a screen reader. AlbertCahalan 03:43, 14 April 2006 (UTC)
- It's pleasing to my eyes. As for not changing fonts, presenting math formulas seems like a very good reason to me. As for screen readers, that is what the alt text of the image is for. Qutezuce 07:00, 14 April 2006 (UTC)
- I prefer CMR to sans-serif fonts, although the name modern was meant by Knuth to mean the visual compatibility of different fonts, not the style, which was in fact based on a 19th Century schoolbook. (damn, nearly used the LaTeX command for superscript, oh well) In fact, I have books from the fifties at home in a font which is very similar to CMR.|333173|3|_||3 23:36, 12 November 2006 (UTC)
- I have just done some research (picked up the AMS author handbook:D), and found that this might help the people who hate the use of Roman math in sans-serif text.
- Solve the problem, lazy, I managed to find this command in the first Google hit for LaTeX math font change, with a clear explanation of what it does, so next time stop whingeing and change it.23:36, 12 November 2006 (UTC)
- Solve the problem — TeX rendering not in HTML — not the symptoms. Cburnett 01:29, August 11, 2005 (UTC)
- Well my position is to use the tools available to produce the best results possible, and that to me means avoiding math markup wherever there is a feasible alternative. But I cba to start an edit war over this here, so I'll leave that to someone else if they care; I've shown the way to make it look nice. Plugwash 21:54, 24 September 2005 (UTC)
I will try to find the TeX code and change the math font from CMR to CMS or similar, which will change it from the good-looking but non-matching default (La)TeX font to a sans-serif font. This will not match perfectly, and is not normal practice, but it is a small improvement. This was part of the point of Computer Modern: the font sizes will match whether using Roman, Sans-Serif, or Typewriter text. If you don't like the PNG graphics, there are user settings for other views. If you want to have a HTML output mode, there are HTML LaTeX compilers available, so you could be WP:BOLD and use that. Of course, if all you want is for it to match, create a new skin using a truetype of CMR instead of the default font.
Once I find the right place to change it, it will take seconds to fix.|333173|3|_||3 22:58, 12 November 2006 (UTC)
- Math rendering seems to have improved greatly from the situation when I originally reported this (the horrible grey edges and uneven lines have been replaced by nice sharp transitions and even lines). Having said that, on a normally configured windows box the text in the math pngs is still significantly larger than the main body text. Plugwash 11:20, 13 November 2006 (UTC)
Quantisation matrix
Where is that quantisation matrix from, and what justification is there for calling it a common one? Plugwash 21:54, 24 September 2005 (UTC)
- I originally got it from Digital Image Processing (ISBN 0201180758) and it describes it as "the JPEG baseline standard", which leads me to believe it's in the actual JPEG standard as the default quantization matrix. It's been a while since I took the class, but I recall the prof echoing that it's the common matrix.
- I believe both the Q matrix and the 8x8 subimage size are the result of heuristic testing of various images and those turn out to yield the best average result.
- Of course, references to it would be great. Cburnett 02:43, 7 April 2006 (UTC)
- Isn't it the IJG standard matrix? 83.184.217.78 16:13, 8 May 2006 (UTC)
- My thinking is that, as illustrative as the matrices may be, they take up way too much space in the article, to the point of being overly pedantic. Would it make sense to break them off into a stub? algocu 23:47, 19 September 2006 (UTC)
- The current quantization matrix shown on the JPEG page is provided as an example of a luminance quantization table in Appendix K of CCITT Recommendation T.81, the latest version of which is available, free of charge, from here: http://www.itu.int/rec/T-REC-T.81-199209-I/en. I believe this document is technically equivalent to the one referenced in the external links section, it is however easier to navigate. Recommendation T.81 states, in clause 4, that it does not specify a default quantization matrix, regarding the example tables in Annex K it states: "These are based on psychovisual thresholding and are derived empirically using luminance and chrominance and 2:1 horizontal subsampling". In response to the last comment about removing the matrices, I for one found them useful and think they should stay where they are. Jon A Fox 11:04, 5 March 2007 (UTC)
- Highly agree with the technical details staying. This is an encyclopedia and how JPEG actually works is definitely encyclopedic, which is why I added it. Cburnett 13:39, 5 March 2007 (UTC)
- Please keep the matrices. Though not required per se, they were highly useful for me to grasp each step in a quick and efficient manner. It really adds to the understanding process, although I agree some cosmetic fix might help to make them not stand out so much. Lloeki (talk) 12:30, 8 October 2008 (UTC)
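For concreteness, here is a small Python/NumPy sketch of the quantization step using the Annex K example luminance table discussed above (values reproduced from Table K.1 of T.81 as best I can tell; verify against the Recommendation before relying on them):

import numpy as np

# Example luminance quantization table from Annex K (Table K.1) of ITU-T T.81.
Q_LUMA = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
])

def quantize(dct_block):
    # Element-wise divide the 8x8 DCT coefficients by the table and round;
    # this is where the lossy part of JPEG actually happens.
    return np.round(dct_block / Q_LUMA).astype(int)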
Progressive encoding
Could do with a mention of the progressive encoding capability. This uses multiple passes and usually involves sending low-frequency DCT components first to give an early low quality render. But you can also do funky stuff like send the colour info first or last, or send the high frequency components first. --KJBracey 17:40, 29 October 2005 (UTC)
I was looking for this topic in the article too, didn't see it. 24.23.133.238 18:29, 27 May 2006 (UTC)
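To make the idea concrete, here is a rough Python sketch of spectral selection, one of the progressive techniques mentioned above: the 64 zigzag-ordered coefficients of every block are split into separate scans, DC first. The band boundaries below are arbitrary illustrative choices, not values mandated by the standard, and successive approximation (the other progressive tool) is not shown.

# Illustrative only: split each block's zigzag-ordered coefficients into scans.
SCAN_BANDS = [
    range(0, 1),    # scan 1: DC only -> coarse brightness/colour appears first
    range(1, 6),    # scan 2: lowest-frequency AC coefficients
    range(6, 22),   # scan 3: mid frequencies
    range(22, 64),  # scan 4: highest frequencies -> fine detail arrives last
]

def spectral_selection(zigzag_coeffs):
    # Return one list of coefficients per scan for a single 8x8 block.
    return [[zigzag_coeffs[i] for i in band] for band in SCAN_BANDS]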
Transparency
Has Jpeg got no transparency in it?? /Slarre 19:54, 26 November 2005 (UTC)
- The JPEG standard doesn't define any means of including transparency, and the usual JPEG file formats (JFIF and Exif) don't allow transparency. JNG allows transparency (but nobody uses JNG). --Zundark 21:45, 26 November 2005 (UTC)
- JPEG cannot easily support transparency because it is lossy and doesn't store precise color (transparency is treated as a color) values for each pixel but approximates it.
slack area
if my understanding of this is correct there is an area beyond the actual image that must be filled with something to fit the block size. I'd also imagine that what this is filled with could affect the quality of the bit that is visible. Does anyone know what strategies image processing software uses to fill this area and what effect the strategies have on image quality in that part of the image?—Preceding unsigned comment added by Plugwash (talk • contribs) 19:18, 19 Feb 2006 (UTC)
- Yes I do :) You can do tests by opening the file with a HEX editor, searching for the dimensions, and changing them to the next multiple of 8 (i.e. 9x7 --> 16x8), or the next multiple of 16 with color subsampling. I found that "most" encoders (which I would presume use the "official" JPEG code) just repeat the border pixels. This seems suboptimal. Example:
- http://img268.imageshack.us/img268/6462/jpeg8x81ra.png (Right original; left exposed)
- The JPEG encoder of Photoshop, however, does a special kind of "mirroring":
- (original is 4x4, expanded is 8x8 or 16x16, without grid it's confusing, sorry)
- (top left, original without chroma subsampling) (right: expanded)
- (bottom left, original with chroma subsampling) (right: expanded)
- For images where only 1 pixel is left, this is the same as the "repeat" method, the other 7 pixels are just repeats. If 2 pixels are left, Photoshop (and ImageReady) "mirrors" them twice (one to 4px, the second time to 8px). If 3 pixels are left, the first is copied to the 4th, and then the usual mirror to 8. So it basically picks up the first 1, 2, or 4 available pixels and fills with a mirror of them.
- I hope that made some sense.—Preceding unsigned comment added by 80.34.88.17 (talk • contribs) 22:49, 30 Apr 2006 (UTC)
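For reference, the border-repeat strategy described above is easy to reproduce with NumPy. This is only a sketch of the padding idea, not a claim about what any particular encoder does internally; np.pad's "symmetric" mode gives a mirrored fill closer to the Photoshop behaviour observed above.

import numpy as np

def pad_to_block(channel, block=8):
    # Pad a 2-D channel up to a multiple of the block size by repeating the
    # last row/column ("repeat the border pixels"). Use mode="symmetric" to
    # approximate the mirrored fill described for Photoshop.
    h, w = channel.shape
    return np.pad(channel, ((0, (-h) % block), (0, (-w) % block)), mode="edge")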
Photo of a flower doesn't look like JPEG
The "photo of a flower compressed with successively lossier compression ratios from left to right" doesn't look right to me. It looks like the compression applied is in fact color depth reduction, as it doesn't show JPEG-like artifacting (blocking, ringing), and indeed retains high frequency/spatial resolution thorough, just loosing color accuracy.
The fact that it's displayed resized doesn't help, as downsizing it gets rid of the obvious JPEG artifacting (though the original full-size one still seems rather strange to me).
- Good point, it doesn't look like JPEG to me either. Maybe we could apply the same idea to Image:Sunflowers.jpg (as suggested here) and create a new image that displays JPEG artifacts better. Qutezuce 07:43, 8 April 2006 (UTC)
- I made a quick attempt at this in GIMP, by cutting the image into 16px stripes, saving at different qualities and stitching them back together. I don't like the result much, though — I think this isn't a very good test image for this purpose, being too dark and too busy. Something lighter with smooth gradients and some well-defined sharp edges would be better. —Ilmari Karonen (talk) 16:59, 22 April 2006 (UTC)
I found a better flower photo on Commons and played with it until I got a reasonable test image. I like it, what do you folks think? —Ilmari Karonen (talk) 22:57, 30 April 2006 (UTC)
- Chipping in a bit late here ... it would be even better if we could:
- find a lossless version of this or another image as a starting point
- if we're going to have the final image as a JPEG, be sure that the process of recombining the JPEG-compressed slices doesn't add another level of generation loss
- -- Smjg (talk) 10:55, 29 March 2009 (UTC)
Donkey image?
I was wondering, maybe we could use the lenna image for the comparison of different compression ratios instead of the donkey as it is the de facto standard?--Frenchman113 21:10, 8 April 2006 (UTC)
I saw you used a Q100 image to show full quality jpeg, but the standard full quality image is Q95, as Q100 is a theoretical maximum and should never be used but for experimental purposes. :)
http://www.faqs.org/faqs/jpeg-faq/
83.184.217.78 15:48, 8 May 2006 (UTC)
- I added a link to the Lenna article at the bottom. Kriegaffe 17:57, 28 May 2007 (UTC)
History?
When and where and why and by whom and for what application was JPEG developed? We need to mention this. ProhibitOnions 13:40, 28 April 2006 (UTC)
Patent claims
Now that the patent has been invalidated, does that mean JPEG is public domain? Copysan 20:35, 26 May 2006 (UTC)
- It has NOT been invalidated (yet). BsL 03:32, 31 May 2006 (UTC)
- The "Potential patent issues" section doesn't agree with it's references (hooray!):
- Text from the article:
- The USPTO also found that Forgent knew about the prior art, and did not tell the Patent Office, making any appeal to reinstate the patent highly unlikely to succeed.
- Text from http://www.groklaw.net/article.php?story=20060526105754880 (ref 4):
- Forgent can respond, but it seems they'll have some explaining to do, because PubPat's Executive Director, Dan Ravicher, says that the submitters knew about the prior art but failed to tell the USPTO about it.
- Also, comments attached to [4] indicate that the patent is NOT invalidated, only that some parts of it are. Note that [4] doesn't say anywhere that the patent has been overturned, although it is a little ambiguous.
- The text about the *341 patent ends saying it was invalidated, if I understand correctly. BTW, it says Claim 17 is the one disputed, but if you read the patent there are only 16 claims. --150.241.250.3 (talk) 10:30, 14 April 2009 (UTC)
See also: GDI vulnerability
"see GDI vulnerability section of GDI article" in see also, this page doesnt contain any info like that.--152.78.71.52 19:49, 11 June 2006 (UTC)
Lossless operations
Certain operations on JPEG images can be performed without having to re-compress them. Rotation, flip, crop, partial modification (say, text insertion), color correction. Can we add a section about this? --AlexMld 13:41, 12 October 2006 (UTC)
- It's already there, though not necessarily in the right place, under "Usage". How can color correction be done without recompressing? Just curious. Notinasnaid 13:56, 12 October 2006 (UTC)
- Yes, I just noticed it is there, but it is not very correct:
- "can be performed losslessly as long as the image size is a multiple of eight pixels in both directions." Should say "to rotate 90 clockwise image height should be multiple of the MCU height, to rotate counterclockwise, it's width should be multiple of the MCU width". MCU width and height are not always 8 pixels, any of them can be 16px as well.
- We can also add information about lossless crop (which I think is a part of IJG library now) and other lossless operations.
- As for the lossless color correction, I think it is done by manipulating DC values or/and quantization tables. I have seen this kind of operation in JPEG Wizard --AlexMld 17:07, 12 October 2006 (UTC)
The section states a crop cannot be done from the top or left (unless it is a block boundary). Bottom and right do not have this restriction. As a JPEG can be losslessly rotated/flipped, it is possible to first flip/rotate, then crop and finally flip/rotate back. So in practice an image can be arbitrarily cropped? --Jontew (talk) 14:35, 30 April 2008 (UTC)
- You're forgetting that lossless rotations/flips also only work on block boundaries. For example, if you take an image which is an exact number of blocks, flip it horizontally, then crop it half way through the rightmost block, you can't then flip the image back because that rightmost half-block cannot become the leftmost half-block. Well, it can, but not losslessly. —Preceding unsigned comment added by Ian Fieggen (talk • contribs) 00:33, 1 May 2008 (UTC)
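A small Python sketch of the constraint being discussed, assuming the MCU size is derived from the per-component sampling factors (8 times the largest Hi horizontally, 8 times the largest Vi vertically); the helper names are mine, purely for illustration:

def mcu_size(sampling_factors):
    # sampling_factors: list of (Hi, Vi) per component, e.g. 4:2:0 YCbCr is
    # [(2, 2), (1, 1), (1, 1)], giving 16x16-pixel MCUs.
    max_h = max(h for h, _ in sampling_factors)
    max_v = max(v for _, v in sampling_factors)
    return 8 * max_h, 8 * max_v

def crop_is_lossless(left, top, sampling_factors):
    # A crop whose top-left corner falls on an MCU boundary can keep the
    # compressed blocks untouched; otherwise blocks must be re-encoded.
    mcu_w, mcu_h = mcu_size(sampling_factors)
    return left % mcu_w == 0 and top % mcu_h == 0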
Dots per inch (DPI)
No mention is made in this article about JPEG and dots per inch (DPI) -- i.e., most JPEG images are 72 dpi, and if small, are often unsuitable for print work. Shouldn't a note about JPEG and DPI be added? - Mecandes 18:29, 22 November 2006 (UTC)
- That really isn't a characteristic of JPEG. Any given image can be any resolution. JPEG images may be a problem for print work if the resolution is too low (like any image format), or if the compression artefacts are too great (a specific JPEG issue). Notinasnaid 18:33, 22 November 2006 (UTC)
- While JPEG has a specified DPI in the file format, it's pretty meaningless; what really matters with any image is the DPI at the desired output size. Images intended for the web will indeed be too low for most professional print work, but that's hardly a jpeg issue. Plugwash 18:57, 22 November 2006 (UTC)
What about CMYK JPGs?
When this page details the whole process of color space transformation from RGB to YCbCr, how does this explain CMYK JPGs? Sure, it should be just as easy to convert CMYK to YCbCr as it is to convert RGB to YCbCr, albeit with a different set of calculations. The result of either will be a file encoded with YCbCr values. How then does the resulting JPG file know that it should be decoded to a CMYK image and not an RGB image?
From what I understood, it is only JFIF files that imply YCbCr color space, and that CMYK JPGs do not undergo the same color space transformation. This would therefore require a different method of encoding and decoding for CMYK JPGs, or in fact anything other than RGB or YCbCr color space. Can anyone shed any further light on this? Ian Fieggen 23:57, 25 November 2006 (UTC)
- Note that RGB-->YCrCb is reversible (other than rounding errors) but CMYK to YCrCb isn't; dunno what CMYK jpeg does though. Plugwash 00:21, 26 November 2006 (UTC)
- I don't know for sure, but couldn't they just encode each of the four channels? You could do the same with RGB, without the YCrCb transformation. The reason RGB is transformed to YCrCb is so that the luminance (Y) channel can be encoded with higher quality than the chrominance channels, since we humans see changes in brightness much more than changes in colour. Thus you can compress the image more using YCrCb. With CMYK you'd probably just leave them in the CMYK colour space and compress all four channels equally, or perhaps compress black a little more. Like I said, I don't know for sure. Just some ideas. --Imroy 10:12, 26 November 2006 (UTC)
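For reference, this is the colour transform that makes the separate treatment of luma and chroma possible: a Python sketch using the coefficients given in the JFIF specification, with the chroma channels offset so they centre on 128.

import numpy as np

def rgb_to_ycbcr(rgb):
    # JFIF-style RGB -> YCbCr on an (..., 3) array. Y carries brightness;
    # Cb/Cr carry colour and are the channels that typically get subsampled
    # and quantized more heavily.
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299  * r + 0.587  * g + 0.114  * b
    cb = -0.1687 * r - 0.3313 * g + 0.5    * b + 128
    cr =  0.5    * r - 0.4187 * g - 0.0813 * b + 128
    return np.stack([y, cb, cr], axis=-1)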
Apparent Confusion in article
There seems to be confusion within this article over what this article is about, also the redirects to this article don't all make sense. Here's how I see it:
- JPEG - An article about ISO/IEC IS 10918-1 | ITU-T Recommendation T.81
This should include both the JPEG codec and the JPEG Interchange Format (which is NOT JFIF), since both are specified in the ISO document. (the JPEG Interchange Format is specified in annex B)
- JFIF - This should be a separate article about the JPEG File Interchange Format (JFIF) application segment zero extensions, which are mostly no longer relevant or useful.
- Joint Photographic Experts Group - Should be a separate article about the body that developed ISO 10918-1 as well as JPEG2000, JBIG....
--Ozhiker 13:52, 28 November 2006 (UTC)
Also, JFIF is listed in the article as being from the 'Independent JPEG Group'. This is incorrect as far as I can tell - it was developed by Eric Hamilton of C-Cube Microsystems. - see this.
JFIF has apparently been superseded by the SPIFF extensions to the JPEG standard. SPIFF is defined in Annex F of ITU-T Recommendation T.84 | ISO/IEC IS 10918-3. It appears however that it is not in widespread use. Neither of these standards is of any relevance now, with the near universal acceptance of EXIF.
If there are no objections, I'll make changes to reflect the above, and also add some history of the JPEG standard ( this will help with the history) --Ozhiker 01:10, 30 November 2006 (UTC)
- Just how much is there to jfif that's not part of other jpeg file formats? If it's only a small amount then it probably doesn't deserve a separate article. Agree on splitting off the Experts group, but what we could really do with is info on exactly WHAT parts jpeg defines and what the various file format authors had to add to it. Plugwash 02:46, 30 November 2006 (UTC)
- The JFIF standard contributed the following things:
- A standard colour space (which as far as I understand is universally ignored, as sRGB is used instead)
- A standard sub-sampling method for chrominance information (centered) (I am not sure what is used currently by most software)
- An application segment extension to JPEG. It uses Application Segment #0, with a segment header of 'JFIF\x00' and defines the horizontal and vertical pixel density, and may provide a thumbnail.
- All of these things have been replaced and extended in EXIF (and other metadata standards).
- I think that most people often confuse JPEG Interchange Format and JPEG File Interchange Format. JIF is defined in the JPEG standard, and lacks only pixel aspect ratio, component sample registration, and colour space designation to make it a universal format.
- --Ozhiker 10:21, 30 November 2006 (UTC)
I agree that a separate JFIF article would be desirable. I also agree that JFIF wasn't the creation of Independent JPEG Group. I have an old copy of the Image Alchemy documentation that says that JFIF was agreed upon on 1 March 1991 at C-Cube by representatives of C-Cube, Radius, NeXT, Storm Tech., the PD JPEG group, Sun, Handmade Software, and maybe others. --Zundark 13:20, 30 November 2006 (UTC)
- from http://www.jpeg.org/jpeg/index.html : "the file format was created originally by Eric Hamilton, the then convenor of JPEG as part of his work at C-Cube Microsystems, and was placed by them into the public domain under the name JFIF (available here in the latest version, 1.02)"
- This seems to conflict with your statement that JFIF is an extension of the standard file format. Plugwash 14:43, 14 December 2006 (UTC)
- Yes I have seen the http://www.jpeg.org/jpeg/index.html site - you are right that it is in conflict - I can't explain why the Joint Photographic Experts Group would have that on their website. The file format is very definitely specified in the main JPEG standard, not in JFIF - look at Annex B of http://www.w3.org/Graphics/JPEG/itu-t81.pdf, which specifies the JPEG Interchange format. The JFIF spec refers to JIF. I recently wrote a parser for JPEG data, and only discovered JFIF even existed when I was almost finished. I've made an article at JPEG File Interchange Format which explains what is specified by JFIF. --Ozhiker 15:26, 14 December 2006 (UTC)
- What we really need to solve this conflict is the original versions of both the JPEG and JFIF specifications. Just because a subset of JFIF is in the standard now doesn't mean it always was. Plugwash 15:44, 14 December 2006 (UTC)
- You're right. I've had a search for the history of JPEG & JFIF and not turned up much; however, I did find this document from May '92 which refers to JPEG Interchange Format being in JPEG-9-R7: Working Draft for Development of JPEG CD, 14 February 1991, which I think is before the first meeting to agree a JFIF standard (March 1, 1991). I couldn't find any of the working drafts or previous versions of either standard online. --Ozhiker 16:38, 14 December 2006 (UTC)
SVG
how about mentioning SVG in the first para, for vector/line/text graphics, rather than gif/png?
- Maybe, but it's not really supported on the web at all (flash is better supported but has issues of its own ;) ) and anyway monitors are raster devices, so a hand optimised bitmap is generally the best choice for getting good display on them. Plugwash 14:41, 14 December 2006 (UTC)
- SVG is indeed alive. Use Firefox or Chrome or Safari (Opera?). Only IE does not support it among the big ones. --150.241.250.3 (talk) 10:28, 14 April 2009 (UTC)
quality parameter
How does the quality parameter influence the JPEG compression? That is: at what point in the algorithm does it make a difference whether I chose a quality of 10 instead of 90? --Abdull 19:47, 11 December 2006 (UTC)
- My experience of examining compressed files suggests the following. The JPEG compression settings don't have a place to plug a number. Rather, the quality is affected by a bunch of settings including the quantisation table, and whether downsampling is done. JPEG encoders often offer quality settings. Sometimes, they might have "low, medium, high". Some might have 10 to 90. Some might have 0 to 12. It's arbitrary. Anyway, what I suspect happens is that the programmer has a range of compression parameters (quantisation tables and subsampling), which the quality settings deliver. Some programmers might use a lookup table; some might derive settings algorithmically from a quality "number". Notinasnaid 20:23, 11 December 2006 (UTC)
- I'm not positive but I believe the quality number controls the aggressiveness of the quantisation tables, and in some programs thresholds in it are also selected to change between the three subsampling amounts. Plugwash 15:39, 13 December 2006 (UTC)
- According to http://www.ams.org/featurecolumn/archive/image-compression.html , the quality setting (aka q) applies a scale factor to the quantization matrix. FivePointPalmExplodingHeart (talk) 07:31, 26 April 2008 (UTC)
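For what it's worth, the scale-factor approach is what the IJG libjpeg code does (jpeg_quality_scaling in jcparam.c), if I remember the sources correctly: quality 50 uses the base table as-is, lower values multiply the entries, higher values divide them. A Python sketch of that mapping:

def scale_quant_table(base_table, quality):
    # Mirror of the IJG quality-to-scale mapping (as I recall it): the base
    # quantization table entries are scaled and clamped to the 1..255 range
    # allowed for 8-bit baseline tables.
    quality = max(1, min(100, quality))
    scale = 5000 // quality if quality < 50 else 200 - quality * 2
    return [min(255, max(1, (q * scale + 50) // 100)) for q in base_table]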
- For anyone interested, many programs DO save the quality parameter somewhere in the file. For example, Adobe Photoshop's "Save for Web" feature adds a "Ducky" segment, in which the quality setting is stored in the 8th byte after the "Ducky" string. Other programs place such info in a more readable format within a text comment, such as "CREATOR: gd-jpeg v1.0 (using IJG JPEG v62), quality = 80".
- Either way, storing the quality in some manner is useful for programs that allow such things as "Use Original Quality" when re-saving files. It's also useful for those people interested enough to assess and/or compare the effects of different programs' different quality settings. Ian Fieggen (talk) 00:33, 30 March 2009 (UTC)
- I disagree that it's useful to programs. Reader programs have access to the actual quantization tables used, so they don't have to try to interpret the various kinds of "quality" parameters that different writers use. Dicklyon (talk) 01:13, 30 March 2009 (UTC)
does a JPEG
have to use the same quantization table for all blocks? —The preceding unsigned comment was added by Plugwash (talk • contribs) 14:32, 14 December 2006 (UTC).
Every component has its own quantization table, referenced in the start of frame (SOF).
For baseline DCT (SOF0) a maximum of two quantization tables is allowed. In progressive mode (SOF2) up to four tables are allowed, one for each component.
There is a rarely used extension to the standard, ITU-T T.84, which allows a more flexible change of all the quantization tables by scalar factors immediately before encoding/decoding a block, see Annex C of T.84 (p.28) —Preceding unsigned comment added by 84.188.193.147 (talk) 23:00, 8 September 2008 (UTC)
???
Why are the example images not in a gallery??? It looks messy. I liked the donkeys better. 216.37.139.6 22:36, 25 December 2006 (UTC)
For Evaluation: Graphic illustrating the relationship between MCU and DCTs
I have uploaded an image for evaluation for use on the JPEG article. Not sure if it will be useful. Clearly, it will need to be broken up into smaller images. Please let me know if anybody wants to work this into the article. Thanks. * Image:Mcu_dct_evaluation.gif
Please leave comments in this section. TCMike 04:11, 6 January 2007 (UTC)
- Broken link to the image. Rain 01:17, 9 February 2007 (UTC)
Encoded size in the example
Thanks for the informative article. It is a clear explanation and a good level of detail for a lay reader like myself. Could I request that in the detailed example where we take one block through all the steps of encoding, someone knowledgeable add one more detail? After the example is all the way through the process to the Huffman-encoded bit stream, I think it would be cool to say something like "Under standard encoding the data can now be stored in x bits, compared to the y bits required to store the original data." 132.170.162.196 23:46, 19 May 2007 (UTC)
Encoding as in F.1.2.1 and F.1.2.2 (p.88-91) of ITU-T T.81 (sequential baseline):
Assuming the previous DC coeff. is 0, (as only a correction to this prediction is encoded),
the sequence of codewords is
5(DC use 5bits) -26,
0/2(AC skip 0 coeff./use 2bits) -3,
1/2 -3, 0/2 -2, 0/3 -6,
0/2 2, 0/3 -4, 0/1 1, 0/3 -4,
0/1 1, 0/1 1, 0/3 5, 0/1 1, 0/2 2
0/1 -1, 0/1 1, 0/1 -1, 0/2 2
5/1 -1, 0/1 -1, 0/0 (EOB).
Using Huffman table K.3 (p.149) for the DC correction,
and Huffman table K.5 (p.150-153) for the AC coefficients, we get the following bit sequence
110 00101,
01 00,
11011 00, 01 01, 100 001,
01 10, 100 011, 00 1, 100 011,
00 1, 00 1, 100 101, 00 1, 01 10,
00 0, 00 1, 00 0, 01 10
1111010 0, 00 0, 1010.
Regrouped into 4-bit nibbles:
1100 0101, 0100 1101, 1000 1011, 0000 1011, 0100 0110, 0110 0011,
0010 0110, 0101 0010, 1100 0000, 1000 0110, 1111 0100, 0001 010.
This gives the hexadecimal sequence C5 4D 8B 0B 46 63 26 52 C0 86 F4 15 (added one 1-bit),
a total of 95 bits = 11.875 bytes. An unencoded block consists of 8*64 = 512 bits = 64 bytes.
Encoding in progressive mode would produce multiple sequences of likely shorter total length. —Preceding unsigned comment added by 84.188.193.147 (talk) 21:19, 8 September 2008 (UTC)
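For anyone who wants to reproduce the run/size symbols listed above, here is a rough Python sketch of how the AC coefficients of one block turn into (RUNLENGTH, SIZE, amplitude) symbols per F.1.2.2; the Huffman code assignment itself (tables K.3/K.5) is left out, and a real encoder would also fold trailing ZRL symbols into the end-of-block marker.

def ac_symbols(zigzag_ac):
    # zigzag_ac: the 63 AC coefficients of one block in zigzag order.
    symbols, run = [], 0
    for coeff in zigzag_ac:
        if coeff == 0:
            run += 1
            if run == 16:
                symbols.append((15, 0, 0))   # ZRL: a run of 16 zeros
                run = 0
        else:
            size = abs(coeff).bit_length()   # bits needed for the amplitude
            symbols.append((run, size, coeff))
            run = 0
    if run:
        symbols.append((0, 0, 0))            # EOB: rest of the block is zero
    return symbols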
A discussion of JPEG's limitations?
I would like to see a discussion of JPEG's limitations (on text, on line drawings, on sharp edges) in this article. Is that appropriate? It seems that a lot of people use JPEGs for text and come to regret it, and it would be nice to be able to explain why in a useful fashion. Any takers? jhawkinson 21:45, 23 June 2007 (UTC)
- I highly suspect that JPEG could be more attuned to text if the quantization matrix were reworked for text. Text has a lot of high frequencies, and the general quantization matrices used don't have retaining high frequencies in mind. Cburnett 13:12, 11 July 2007 (UTC)
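A quick way to see the problem with sharp edges: take the DCT of a hard black-to-white step, quantize it coarsely, and invert; the over/undershoot that appears is the ringing that makes JPEG text look fuzzy. A sketch using SciPy's DCT routines (the quantization step of 40 is just an arbitrary illustrative value):

import numpy as np
from scipy.fftpack import dct, idct

edge = np.array([0, 0, 0, 0, 255, 255, 255, 255], dtype=float) - 128  # level shift
coeffs = dct(edge, norm='ortho')              # forward 1-D DCT-II
step = 40                                     # coarse uniform quantization step
rebuilt = idct(np.round(coeffs / step) * step, norm='ortho') + 128
print(np.round(rebuilt))                      # note the ripples around the edge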
DCT maths
Recently, a lot of DCT maths and working was added. I don't think that most of this is necessary, as the general form of the DCT-II can be found at the DCT article, and the worked example for the DC coefficient breaks the flow unnecessarily, without really adding anything. Therefore, I've taken the liberty of removing the majority of this. Oli Filth 08:18, 11 July 2007 (UTC)
- I wholly disagree.
- 1) Showing the general form here avoids having to jump to another article.
- 2) The general form here does not match that used on Discrete cosine transform because it doesn't address normalization in the transform (only mentioned in the inverse discussion). Again, avoid having to jump to another article insufficiently written for the context of the JPEG encoder discussion.
- 3) The generalized form & normalization applies directly to the example I added here many moons ago. Having to both digest another form of the DCT and apply that as used here is unnecessarily diverting the reader's attention from understanding the JPEG codec.
- 4) Explaining how to use the DCT equation to determine the DC coefficient is simple, direct, and very telling to someone not familiar with DCT or how to use it. Hardly "unnecessarily" in my book.
- 5) If the example for the DC coefficient breaks the flow then it can be fixed without simply removing it.
- 6) Simply saying that the DC coefficient is -415 doesn't tell you anything about how to use the DCT or how the DCT works. Just like how saying "you DCT shifted subimages, quantize, RLE, then Huffman" doesn't tell anyone how JPEG encoding works. An example illustrates how it is done. Same thing with the details about using the DCT.
- I have no problems with reworking and rewriting my work. That's not my complaint. My complaint is that deleting is generally a cop-out to doing work. Remember, this article is not supposed to be written for just people like you and me who know DSP. Cburnett 13:09, 11 July 2007 (UTC)
- Hi. Here are my thoughts (matching your numbering above):
- Wikilinks are the whole point of a web-based encyclopedia. If the user is reading about X, and wants to know more about Y, they can safely drill down by clicking on the link to the article about Y, so that details of Y aren't "inflicted" on other, more casual, readers. In this context, the mathematical details of the DCT are not essential for the flow of the article, and don't (in my opinion) provide any further insight into how JPEG works.
- Normalisation is discussed in the DCT-II section; although admittedly, it's not as clear as it could be. Anyway, in my edit, I left the specific form in.
- I sort of agree. That's why I've left the specific form in.
- I don't believe it's necessary to demonstrate how to use an equation. Also, demonstrating with the DC coefficient is a bad choice, because all the cos terms evaluate to 1 and disappear, which doesn't give a good impression of what's going on.
- Possibly true.
- I think the example as it stood was more than adequate as an illustration of the DCT in action.
- In some senses, yes, deleting is a cop-out; I generally only do that when I see incorrect stuff. In this case, the stuff wasn't incorrect, I guess I was just being lazy!
- However, I believe that because "this article is not supposed to be written for just people like you and me who know DSP" is precisely why we shouldn't fill it with lines and lines of equations. Oli Filth 20:16, 11 July 2007 (UTC)
I wholly agree with Oli that the DCT details shouldn't be on the JPEG page, and I applaud him for removing them. What's needed in that section is an explanation of what the transform is actually doing to the data and why the transformed data is far more compressible than the input matrix of pixels. This subject is big enough that it deserves a separate Wikipedia page about image compaction, imho. In any case the new stuff added by Cburnett is totally worthless as an explanation of JPEG and moreover it just duplicates what's in the DCT article. Cburnett claims to know what he's talking about. (More exactly he claims to "know about DSP", which is not at all the same subject as JPEG.) It's my view that the formulas he stuck in are not understandable by an ordinary programmer who wants to understand how JPEG succeeds, and it's my suspicion that Cburnett doesn't understand it decently himself either. User: sean Sean111111 20:40, 11 July 2007 (UTC)
- So you're claiming that JPEG encoding is...not DSP? And you claim I don't know what I'm talking about? Re: "totally worthless": meta:don't be a dick. Cburnett 00:51, 12 July 2007 (UTC)
downsampling
The wikipedia article mentions downsampling of the chroma channels as second step of the jpeg encoding process. I've been trying to find the description of this in the jpeg specification (http://www.w3.org/Graphics/JPEG/itu-t81.pdf), but I can't find it, it seems as if the specification is missing an exact specification about this. The JFIF spec also doesn't mention it.
Where in the spec can I find which bytes exactly of the JPEG file mention exactly what type of subsampling of the chroma channels is used? What does the subsampling do if the number of pixels in a certain dimension is uneven? What kind of filters are used?
193.74.100.50 07:32, 25 July 2007 (UTC)
The ITU-T T.81 covers only the encoding/decoding of (independent) components.
"[The] Specification does not specify a complete coded image representation. Such representations may include
certain parameters, such as aspect ratio, component sample registration and colour space designation, which are application-dependent." [2], p.1,"1 Scope"
There is only a technical dependence of the components regarding dimensions and structure of MCUs in interleaved mode, see p.19,"4.8 Multiple-component control" and p.24,"A.1.1 Dimensions and sampling factors".
The coding of the sampling parameters Hi,Vi is defined on p.35,"B.2.2 Frame header syntax".
Standards like JFIF or EXIF specify what the components represent (YCbCr) and how they are converted, up- and downsampled, see [3], p.2,"Standard color space", p.4,"Spatial Relationship of Components" and [4], p.5,"4.4.3 Pixel Composition and Sampling" —Preceding unsigned comment added by 84.188.193.147 (talk) 18:54, 8 September 2008 (UTC)
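As a concrete illustration of the downsampling that is left to the application, here is a Python sketch of a plain 2x2 averaging filter for a chroma channel (the common 2:1 horizontal and vertical case, often called 4:2:0); real encoders are free to use different filters and sample positions, as the quoted scope text says.

import numpy as np

def downsample_chroma_420(chroma):
    # Average each 2x2 block of a chroma channel; odd trailing rows/columns
    # are dropped here just to keep the sketch short.
    h, w = chroma.shape
    c = chroma[:h - h % 2, :w - w % 2].astype(np.float64)
    return (c[0::2, 0::2] + c[0::2, 1::2] + c[1::2, 0::2] + c[1::2, 1::2]) / 4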
Example is so cool
The way they work out the compression algorithm by hand in this article is so cool. 134.79.236.179 22:27, 2 August 2007 (UTC)
"Color space transformation": Probable mixup of YCbCr and YUV
Hi, the article states in the Color space transformation section that "[YCbCr] is the same as the color space used by PAL [...]". This contradicts the YUV article, which says that PAL uses YUV, not YCbCr. The YCbCr article says that YCbCr and YUV are often mixed up, this seems to be one instance of the problem. --Stachelfisch 22:44, 5 October 2007 (UTC)
CMYK jpg
PDF files seem to often have embedded image streams in "DCTDecode" format, which is their term for jpg. The streams can be extracted by hand to a file, and become complete working jpg files. This seems to work fine if they are "DeviceRGB" 3-byte-per-pixel streams. But "DeviceCMYK" 4-byte-per-pixel streams also seem common. These seem to be very hard to view/generate directly. What exactly is a CMYK jpg? -69.87.200.6 23:43, 19 October 2007 (UTC)
- The JPEG FAQ says, in the answer to question 16:
"Adobe Photoshop and some other prepress-oriented applications will produce four-channel CMYK JPEG files when asked to save a JPEG from CMYK image mode. Hardly anything that's not prepress-savvy will cope with CMYK JPEGs (or any other CMYK format for that matter). When making JPEGs for Web use, be sure to save from RGB or grayscale mode."
- Of course, if you're looking for technical details, that's not very helpful. I'm sure they're out there somewhere, but my very quick Googling didn't turn up anything more detailed. —Ilmari Karonen (talk) 14:21, 21 October 2007 (UTC)
- When Photoshop saves an RGB image as a "JPEG File", it actually creates a JFIF file, which is a standardised subset of the JPEG file format. One of the assumptions of this simplified format is that the image data uses the YCbCr color space. When most people talk of a "JPEG File", they're actually talking about a "JFIF File".
- When Photoshop saves a CMYK image as a "JPEG File", it actually saves it in a JPEG format with an EXIF header. This allows it to maintain the original CMYK color space. It's not possible to create a JFIF file with CMYK color space because the color data will be interpreted incorrectly by any program that assumes it to be YCbCr data.
- JPEG files in CMYK format are used mainly in the publishing industry. Not many other programs handle them correctly, as the decoding can be much more complex because they are not in the simpler JFIF format. Programs that can only handle JFIF formatted files should check for a JFIF header and abort if it isn't present. Most browsers abort when presented with CMYK formatted JPEGs, often displaying a "broken image" icon.
- In terms of viewing / editing programs, I've used CMYK format JPEGs successfully in Photoshop, ThumbsPlus, IrfanView, ACDSee and JPGExtra, to name but a few. Be prepared for some odd results though, particularly with the way some programs convert the CMYK colors for display on RGB screens, sometimes producing dark results and sometimes lurid. Ian Fieggen 05:32, 23 October 2007 (UTC)
I find that there are some JPEG files with an EXIF header that are RGB files... —Preceding unsigned comment added by 121.33.199.173 (talk) 06:47, 15 April 2008 (UTC)
Variable-quality jpeg images
What are the best tools for generating variable-quality jpeg images? In particular, it would be desirable to be able to specify a low quality for the general background, and a higher quality for a small portion of the image of particular importance. This would allow further reducing image storage sizes. It can be done tediously by hand with IrfanView, to demonstrate the theoretical feasibility. Are there any tools that make this easy? -69.87.200.6 23:49, 19 October 2007 (UTC)
- There are tools that allow this, such as the regional compression feature of JPEG Wizard. However, it's always struck me that these features are rather ridiculous. The whole point of the JPEG compression algorithm is that low-detail areas such as sky and other flat backgrounds are compressed much more than areas with higher detail. The idea of taking this further by manually outlining low detail and high detail areas for higher or lower compression is merely exaggerating the existing compression algorithm. One could presumably achieve the same effect with no manual intervention by changing the DCT or quantization matrices.
- Of course, one can use regional compression to manually apply higher compression to selected higher detail regions, but this will be at the expense of introducing more noticeable artifacts. Ian Fieggen 22:16, 20 October 2007 (UTC)
- You're under the assumption that the image is fairly uniform and low-frequency based. Cburnett 01:34, 21 October 2007 (UTC)
- The purpose of lossy compression is to favour information that is relevant to the viewer over that which is not; getting the detail right in, say, a face is far more important than getting it right in a leaf litter background. Plugwash 14:50, 23 October 2007 (UTC)
- Absolutely incorrect.
- The purpose of lossy compression is to reduce the overall size by the loss of information. One common way this can be done is selectively discarding data. The standard Q matrix weighs heavily on the low-frequency corner of the DCT coefficients. If you take a picture of Plaid (pattern) (assume the pattern is of very high frequency in both directions) and use the standard Q matrix then not only will you get a high error rate but you will not achieve good compression compared to a matrix that favors the high frequencies. So using a different Q matrix for your picture will net better compression and less error.
- Now, if you take a picture of a plaid t-shirt on a solid white background then if you use aforementioned Q matrix for plaid then you get good compression on the shirt, but you've masked out the entire DC coefficient because this plaid shirt is only high frequencies. If you use another Q matrix that is only the DC value then you will get good compression on the background with little to no error.
- The standard-ish Q matrix is a trial-and-error matrix that works well "in general". Cburnett 16:01, 23 October 2007 (UTC)
- Something like a plaid pattern has high and low frequencies; leaving out the low frequencies will produce bad artifacts for any image that is not a contrived case of sine-like oscillations that have a half-integer number of periods within each 8x8 square of the DCT. The best approach would be to use the same quantization for all components, but that would anyway hardly give a space saving compared to using a bit less quantization for the low-freq. components, since there simply are many more HF than LF components (i.e., a factor 4 if you place the cutoff halfway). But re the original question, I think there can only be one Q-matrix for the whole image, so for selective compression you would need to filter out HF components or pre-quantize them with a higher factor in uninteresting areas before starting the final compression. Is that what irfanview and jpg wizard do? Han-Kwang (t) 16:39, 23 October 2007 (UTC)
- I said "assume the pattern is of very high frequency in both directions". If it's @ 45 degrees then you have no DC and no low frequencies horizontally nor vertically for sufficiently high frequency. I explicitly said it was all high frequency to demonstrate my point. Cburnett 19:09, 23 October 2007 (UTC)
(unindent) Try doing a DCT on data that looks like f(x) = 1+cos(pi*x*7.5/8). [changed 7 into 7.5 @ 24 Oct] For starters, you always have a DC component since image data is not negative. And because the period of the high frequency part does not match the size of the 8-pixel DCT blocks, this supposedly high frequency component will generate a DC contribution (and plenty of other lower frequencies) in the DCT. Han-Kwang (t) 21:52, 23 October 2007 (UTC)
- You missed this step in JPEG codec:
- To center around zero it is necessary to subtract by half the number of possible values, or 128.
- So an image does not have negative but JPEG forces it so that the issue you just raised is moot. Cburnett 16:31, 27 October 2007 (UTC)
- So what? If the average of an 8x8 block is 234, then after subtraction of 128 you still have a DC component of 234-128=106 to deal with in the DCT. That's why there is a DCT basis function G_00. Han-Kwang (t) 17:41, 27 October 2007 (UTC)
- And if I have a 2x2 block of one diagonal of white (255) and the off-diagonal is black (0) then the average is 127.5, or 128. Subtract 128 and you have zero. Zero DC and maximum frequency in both directions. No offense, but I'm done being your teacher on this. Cburnett 00:36, 29 October 2007 (UTC)
Plugwash's analogy of a face against a leaf litter background is one of very few examples where regional compression could see significant benefit. Most examples that I've seen involve things such as a face against the sky, which already compresses very well and benefits little from being compressed further using lower quality.
Let's use an example image of a face against sky, where the face occupies 50% of the image and sky occupies the remaining 50%. That sky would already compress well, and may only occupy 20% of the file data. Compressing this with a substantially higher compression may save 50% of that storage, which is only 10% of the whole file, hardly worth going to that much manual effort. The only way to achieve more savings is to go for a greater differential in quality between "face" and "sky" regions.
If the background occupied a more substantial percentage, say 75%, the savings are greater. However, because that same higher percentage of the image is now at a visibly lower quality, the overall perception is of a lower quality image. In other words, with more background at low quality, and less face at high quality, the image looks worse overall.
To gain any worthwhile benefit, there needs to be a substantial difference in the quality setting for a substantial portion of the picture, which is always noticeable. I've experimented by saving an image with regional compression, then saving the same image with uniform compression at a lower quality setting to achieve a similar filesize. The latter required much less manual intervention, yet produced images with similar perceived overall quality. Ian Fieggen 00:24, 24 October 2007 (UTC)
- "The purpose of lossy compression is to reduce the overall size by the loss of information.". If that were the case we would just downsample our images until they reached the size we wanted. The point of lossy compression is *selective* loss of information getting rid of information that is less important (or possiblly even invisible) to the viewer while keeping the most important bits. Plugwash 01:06, 24 October 2007 (UTC)
- I think we've exchanged our viewpoints by now. Per WP:FORUM, we should aim to use the talk page to work on articles. Can somebody who has software with selective jpeg compression mention how this is implemented in the article? I would implement it by pre-quantizing areas with a different Q' matrix before the final quantization with matrix Q, e.g. with Q'=2Q such that the final quantization won't give an additional quality loss. But I don't know what programs such as jpeg wizard and irfanview actually do. Han-Kwang (t) 15:41, 27 October 2007 (UTC)
- Plugwash, you're tearing my argument apart...because you're on the offensive. I *never* described how you determine which information to lose. Take a step back. Lossless compression requires no loss of information, ergo it can only remove redundant information. This REQUIRES lossy compression to lose information, which is all I stated. Favoring information "that is relevant to the viewer" is one way to lose information, not THE way as you stated. Cburnett 16:31, 27 October 2007 (UTC)
Failed Good Article nomination
GA review – see WP:WIAGA for criteria
- Is it reasonably well written? (A. Prose quality; B. MoS compliance)
- Is it factually accurate and verifiable? (A. References to sources; B. Citation of reliable sources where necessary; C. No original research)
- Is it broad in its coverage? (A. Major aspects; B. Focused)
- Is it neutral? (Fair representation without bias)
- Is it stable? (No edit wars, etc.)
- Does it contain images to illustrate the topic? (A. Images are copyright tagged, and non-free images have fair use rationales; B. Images are provided where possible and appropriate, with suitable captions)
- Overall pass or fail: Fail
My main concern is the distinct lack of sources. Having no sources makes it hard to tell whether there is any original research involved. There were also a few recent incidents of vandalism which may have tarnished the quality of the article. I would recommend going through the article, making sure everything is up to date, citing a few sources (1 per paragraph is fine), and then re-nominating it. NF24(radio me!Editor review) 23:14, 28 October 2007 (UTC)
JFIF and JFIF
The article has a section "JPEG Interchange Format file format" with comments that it has some problems that are addressed by various improved formats including "'JPEG File Interchange Format' (JFIF)". Huh? I am aware that I don't understand this, but this wording is not helping me. Is there another way to word this, to help me (and others) who come here to learn? Was this a typo, or are these two separate things? Are there two different formats with the same name, that we can somehow separate? Further reading tells me that they are different, but it's not clear how, or how to tell them apart. -SandyJax 16:56, 31 October 2007 (UTC)
- Okay, I've fixed this by combining the two separate sections, one near the top of the article and another that was near the bottom, each with overlapping information about the differences between the "JPEG Interchange Format" and the "JPEG File Interchange Format". Hopefully this clears up the confusion. Ian Fieggen 00:58, 1 November 2007 (UTC)
- Well, I'm still confused, but I'm not going to blame you. That part, at least, is much clearer. Thank you! -SandyJax 17:05, 5 November 2007 (UTC)
'Histogram equalized' image is not histogram equalized.
One image in the article has a caption which reads, "The 8×8 sub-image shown after having its histogram equalized (i.e., 154 becomes white, 55 becomes black). Note that this is done just for visual purposes and no such equalization is done in the example data."
The process of making 154 white and 55 black is not histogram equalization.
There is really no reason to actually histogram equalize the image, and I don't believe there is any reason to include the image at all, as it doesn't show any additional detail in the image.
Butlertd (talk) 19:00, 3 January 2008 (UTC)
- However, equalization yields an image such that the maximum value becomes white and the minimum value becomes black. In this case: 154 becomes white and 55 becomes black.
- That said: no reason?!? You do know what the point of histogram equalization is, right? The point of this example is to understand the JPEG codec. The dynamic range in the sample subimage is less than half of what is possible. Equalization makes full use of the 8-bit range to allow the reader to better see the differences in the subimage. I am happy, though, that you have superb eyesight and equalization is purely a waste for you. Cburnett (talk) 00:01, 4 January 2008 (UTC)
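For readers trying to follow this exchange, the two operations being debated can be written down. A plain linear contrast stretch maps the block's range 55–154 onto the full 0–255 range, while histogram equalization spaces the values according to the cumulative distribution of the 64 pixels; both send 55 to black and 154 to white, which is what the caption's parenthetical describes. Roughly:

```latex
\text{stretch: } v' = \operatorname{round}\!\left(255\,\frac{v-55}{154-55}\right),
\qquad
\text{equalization: } v' = \operatorname{round}\!\left(255\,\frac{\operatorname{cdf}(v)-\operatorname{cdf}_{\min}}{64-\operatorname{cdf}_{\min}}\right),
```

where cdf(v) counts the pixels with value at most v in the 8×8 block. Either way, the transformed block is for display only and plays no part in the worked encoding example.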
- However, it's true that it unnecessarily complicates the article to the lay reader. As it's nothing to do with the codec, why not just use the equalised image as the example, and do away with the unequalised version entirely? Oli Filth(talk) 00:46, 4 January 2008 (UTC)
- The equalized image is strictly for better visualization of the subimage and to exacerbate the differences caused by quantizing for the simple reason that we have a harder time seeing minor luminosity changes.
- If you want to completely change the entire page to reflect your new example then...I guess I can't stop you. Keep in mind that you'll have to recalculate everything and all the data has been on this page for a while and has been fairly stable. Reworking the example will inevitably introduce errors and the cycle will have to repeat. All because you want to cater to the lower denominator.
- However, I think it will backfire. If you use a higher dynamic range image then your quantization changes will still be fairly small, and with a higher dynamic range I really, really doubt you're going to see the pixel value changes — mostly because you can't do a histogram equalization to exacerbate them. If you can easily tell the difference between 3-5 shades of gray then good for you. But since you're catering to the lower denominator, you've just made it harder for people whose eyesight is not as awesome as yours. Personally, I find it easier to change my understanding of a topic than to improve my eyesight.
- I also think you're blowing this issue out of proportion. Do you have any kind of evidence to support your position that mentioning histogram equalizing loses people? I don't see it as "obvious" as you do, I guess. Visually comparing the images I imagine that most people would see the lighter shade of gray is now gleaming white and the darker shade of gray is a deep black, and from that they'll "get" histogram equalization without having to know or care about what a histogram, cdf, or normalization is: it makes the dark darker and the light lighter. Cburnett (talk) 03:14, 4 January 2008 (UTC)
On a tangent: I uploaded an SVG of the original image and of the equalized image (after calculating the equalization by hand). I also put them in histogram equalization as a more thorough example since it's quite light on such details. Cburnett (talk) 01:01, 4 January 2008 (UTC)
Required precision
As far as I can tell (by doing a quick search on IEEEXplore), there is no such standard as IEEE 1880-1990. Unless I'm doing something extremely silly in my search...
Also:
""" On the contrary, the JPEG standard (as well as the derived MPEG standards) have very strict precision requirements for the decoding, including all parts of the decoding process (variable length decoding, inverse DCT, dequantization, renormalization of outputs); the output from the reference algorithm must not exceed:
* a maximum 1 bit of difference for each pixel component
* low mean square error over each 8×8-pixel block
* very low mean error over each 8×8-pixel block
* very low mean square error over the whole image
* extremely low mean error over the whole image
"""
Wow that's a long sentence. The descriptions of the required error margins are pretty meaningless and all look very similar to me. I would have said something like "there are specified error tolerances for each pixel, block, and for the whole image" or something. Just an idea.
FivePointPalmExplodingHeart (talk) 07:37, 26 April 2008 (UTC)
- "1880-1990" is probably a typo referring to "1180-1190". Rcooley (talk) 13:50, 26 April 2008 (UTC)
A reference is ITU Recommendation T.83 [5] —Preceding unsigned comment added by 84.188.221.68 (talk) 12:23, 8 September 2008 (UTC)
Are there different versions or some compatibility gaps in JPEG format?
I have an apparent incompatibility problem with certain .jpg files created on an XP computer, which render wrong (or cause a wrong-format error on attempting to open them after a download) in every Linux and Win 98SE viewer application I tried, including Internet Explorer. I would put it down to a bug in a specific installation, but it is consistent and repeatable. Furthermore, it goes both ways: a (Word 97 compatible) .doc text document authored in OO.org 2.4 on Win 98 SE, with embedded JPEG pictures created in Dadaware Embelish freeware (OK, I AM cheap, ... but standards are standards!), is rendered wrong in MS Word on an XP machine. It makes me wonder if there was some "upgrade" or "enhancement" or "deprecation" of the old JPEG format in recent years that causes this apparent incompatibility? —Preceding unsigned comment added by 147.91.1.41 (talk) 08:53, 6 May 2008 (UTC)
- If you're able to upload an example of one of these JPEG files somewhere, I can download the file, inspect it on a byte level, and figure out what's strange about it. Ian Fieggen (talk) 04:14, 7 May 2008 (UTC)
- Follow-up: The files were CMYK JPEGs, which normally will not display in browsers and certain viewers not versed in CMYK format. Ian Fieggen (talk) 00:31, 8 May 2008 (UTC)
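For anyone running into the same symptom, a quick way to check whether a suspect file is one of these CMYK JPEGs is to look at its color mode. This is a minimal sketch assuming the Pillow library; the file names are placeholders:

```python
from PIL import Image

with Image.open("suspect.jpg") as im:
    print(im.format, im.mode)        # prints e.g. "JPEG CMYK" for the problem files
    if im.mode == "CMYK":
        # naive conversion (ignores any embedded ICC profile, so colors are approximate);
        # re-saving is lossy, but gives a file that browsers can display
        im.convert("RGB").save("suspect_rgb.jpg", quality=95)
```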
External links
I'm cleaning up the external links section per WP:EL and WP:NOTDIRECTORY:
- Wotsit.org's entry on the JPEG format - not much relevant here; it just describes that the file extension .jpg means JPEG.
- JPEG Compression (Gernot Hoffman) - does not add anything to the article, but might be used as a reference
- David Austin: Image Compression: Seeing What's Not There - ditto
- Article about hidden data in JPEG files - might be considered for the EXIF article, but not here
- More about hidden extras, plus a program to remove them - ditto
- Jpeg Delphi implementation using official JPEG Group C library or Intel Jpeg Library (ijl.dll included) - linkspam
- Intel Integrated Performance Primitives - what is THIS doing here??
- Oskar Breuning: JPEG Compression: Data Loss & Image Impact - linkspam
- Jim M. Goldstein: RAW vs JPEG: Is Shooting RAW Format For Me? - second url from same site, linkspam
- DCTlab: Matlab GUI - not interesting for most people (requires expensive software to run)
- JPEG Tutor, an interactive applet allowing you to investigate the effects of changing the quantisation matrix. - WP:EL - pages requiring extra plug-ins
- Gregory K. Wallace, The JPEG Still Picture Compression Standard, IEEE transactions on consumer electronics (1991) (Gzipped PostScript file) - might serve as a reference, but not suitable as external link
- JPEG deringing and deblocking: Matlab software and Photoshop plug-in - I find it interesting, but once we start putting JPEG software in here, the list will grow to hundreds of software packages
- JPEGdump, a command line program that dumps information about the markers in a JPEG file - see above
Han-Kwang (t) 07:57, 10 August 2008 (UTC)
- The Wotsit link looks relevant to me - the downloads include some pretty thorough information. I've restored it. Dcoetzee 20:09, 25 November 2008 (UTC)
Inverse DCT
71.131.29.55 (talk) placed a question in the article at the below JPEG#Decoding, which I'm moving to the talk page: "Can somebody add the inverse DCT equations here?" Han-Kwang (t) 12:48, 14 October 2008 (UTC)
Naming conventions for image file formats
Please see the discussion at Talk:Image file formats#Naming_conventions_for_image_file_formats on naming conventions for articles on image file formats. Dcoetzee 00:46, 25 October 2008 (UTC)
TIFF?
How come the article recommends (twice) TIFF as a lossless alternative to JPEG? Is anyone still using TIFF? I'd expect PNG to be much more popular, and in the case of images under active editing, whatever application-specific formats Photoshop and GIMP use. JöG (talk) 19:31, 25 November 2008 (UTC)
- I'd agree on this one - if not PNG, then DNG. TIFF may still be used, but it's utterly impractical considering the alternatives. Dcoetzee 20:08, 25 November 2008 (UTC)
DNG is a fairly new standard, and many of the major camera makers do not include support for this format (e.g. Nikon, Canon, Kodak, ...). Furthermore, DNGs are not used to store processed images, only "negatives." For lossless storage, TIFF is still the leader, in the digital camera world at least.
- In any case I've revised it to avoid recommending TIFF so strongly - any lossless format is really just fine for the purpose described here (avoiding generation loss and representing images with sharp edges), and common ones like PNG have far superior compression to TIFF. Dcoetzee 02:08, 3 December 2008 (UTC)
- PNG is unsuitable for photography (or any photorealistic images.) The format was designed to replace GIF and works best for non-vector illustrations, line art, etc. Also, PNG can't handle color-spaces, or embed EXIF data. Hence TIFF is still the preferred format for professional digital photography, along with JPEG when lossy processing is tolerable (i.e., final work saved at maximum quality.) 216.223.143.38 (talk) 16:16, 24 March 2009 (UTC)
Entropy coding confusion?
The article seems to imply that the steps of run-length encoding and Huffman coding together constitute entropy coding. Is that actually the case? I understand RLE is part of the algorithm, but I would have considered only the Huffman coding step to be entropy coding.—Preceding unsigned comment added by 79.23.243.219 (talk • contribs) 19:00, 13 Dec 2008 (UTC)
Edge behavior
I'm a bit confused over the recommended edge behavior. The article states: [...] a better strategy is to fill pixels using colors that preserve the DCT coefficients of the visible pixels, at least for the low frequency ones (for example filling with the average color of the visible part will preserve the first DC coefficient, but best fitting the next two AC coefficients will produce much better results with less visible 8×8 cell edges along the border). How can you talk about "DCT coefficients of the visible pixels"? Individual pixels don't have DCT coefficients; the entire 8x8 block does. Is the article talking about preserving them vs. what a (say) 5x5 DCT would give? And how would you fit the free pixels to those coefficients once you know them (and under what metric -- L^2? L^1?)? -Sesse (talk) 18:21, 9 December 2008 (UTC)
- Hard to say, since whoever added it didn't reveal his source. I took it out. Dicklyon (talk) 18:33, 9 December 2008 (UTC)
- I'm still interested in the problem, though. :-) I have a solution based on linear programming, similar to what's done in compressed sensing, but something tells me this should be a problem solved ages ago. -Sesse (talk) 19:26, 9 December 2008 (UTC)
Wrong. JPEG can be used in lossy or LOSSLESS MODE
This article needs several corrections. —Preceding unsigned comment added by 151.61.155.207 (talk) 13:04, 15 July 2010 (UTC)
JPEG Quality vs bits image needs BPP
Hello all, just a quick point: I think the image that shows quality vs. bits for a number of JPEG quality levels could benefit from specifying the size of the original image, and thus also from calculating bits per pixel rather than the absolute number of bits (which is not particularly useful). Then a bits-per-pixel vs. PSNR graph would allow an informative comparison against other compression techniques. 79.23.243.219 (talk) 17:59, 13 December 2008 (UTC)
Latest - JPEG XR for Digital Cameras Nears Completion
The article can be read here : http://www.jpeg.org/newsrel24.html. --NiluKush (talk) 05:10, 3 February 2009 (UTC)
Small Color JPEG Image Without Artifacts
Years ago I was web surfing for how JPEG works (this was before Wikipedia was popular, maybe before it even existed) and found a great tutorial that had a small square image, probably red, green and blue, and despite being a JPEG it had absolutely no compression artifacts; I'm guessing it was at a low if not the lowest compression setting. The image was just a jumble of rectangles and squares, and it looked the way it did (like a GIF) because it was an exact size and used the JPEG block organization (or whatever it's called) to avoid compression artifacts. The author then showed the same image scaled slightly larger, and there were tons of artifacts because the squares and rectangles no longer fit so nice and neatly.
I recall the wording the author used was something along the lines of (paraphrasing heavily) "As you can see, JPEG images always have artifacts..." and then maybe a comical sort of "huh, wha..." (though I could be completely off on this part.) when he introduced the image without artifacts and went on to explain how he created it and how it worked.
After reading that article, I tried to revisit it either months or weeks later and found nothing. Current web surfing doesn't turn up much either, and I'm wondering if anyone knows where I could find the article (an internet archive of it?), the image, others like it, and maybe it could even be a good addition to the article. (Though for all I know it once was part and later removed.)
Thanks! —Preceding unsigned comment added by 4.254.80.74 (talk) 20:44, 22 October 2009 (UTC)
file size vs "quality" settings plot
According to the JPEG FAQ (http://www.faqs.org/faqs/jpeg-faq/part1/), section 5: "In fact, quality scales aren't even standardized across JPEG programs."
As such, a plot of quality vs. filesize seems rather useless to me, as the result can depend rather arbitrarily on the software used. The same FAQ writes, in the same section: "The quality settings discussed in this article apply to the free IJG JPEG software (see part 2, item 15), and to many programs based on it."
If that claim is true and most programs use the same quality settings, as they are based on a reference implementation, such a plot would be meaningful, if we can be sure that the plot was created using one of those programs.
Apropos that graph: I find it even more useless for this article to compare a "normal jpeg" against a "save for web jpeg", when nobody knows what the latter is supposed to be - we can only guess that those might be progressive JPEGs, which would make sense for a webpage.
The graph also seems not to be mentioned/explained in the text... which is bad style, but makes removing it so much easier...
So... thoughts on that?
If it doesn't meet fierce (and justified) opposition, I'll take the graph out in a couple of days.
Iridos (talk) 00:02, 10 November 2009 (UTC)
- Last time I compared the IJG's 0-100 scale file sizes with Photoshop's 0-12 scale, over 5 years ago, the relationship didn't look at all like that. I didn't realize that Photoshop's "save for web" feature now had a 0-100 scale, but I bet it's not the IJG library that they're using. I'll ask Guido if he knows. I'm not sure what to think of the plot; it's potentially interesting/useful info, but definitely violates WP:NOR. Dicklyon (talk) 05:25, 10 November 2009 (UTC)
- Guido (of IJG) doesn't know whether Adobe uses their library or not. Dicklyon (talk) 07:07, 11 November 2009 (UTC)
- I agree, Iridos, the plot does have some shortcomings. Firstly, it would seem to make more sense that if the article mentions the IJG scale, any such examples should also use the same IJG scale. Secondly, there is no mention of the additional options used in Photoshop when saving, which dictate the additional metadata embedded. For example, there are separate options for a preview thumbnail, ICC profile, copyright text, etc.
- As for the reasoning behind comparing the "Normal JPG" and "Save for Web", I understood that it was there merely to provide a comparison of the two different scales in the one program, one of which is 0-100, the other of which is 0-12. I think that this is actually helpful.
- Overall, I think the plot is worthwhile, so I'm not sure that it's worth deleting unless there is a clear reason against it and/or a better replacement. Ian Fieggen (talk) 08:43, 11 November 2009 (UTC)
Patent issues confusion
I added some tags to this section where the text confuses me:
Beginning in August 2007, another company, Global Patent Holdings, LLC claimed that its patent (U.S. patent 5,253,341), is infringed by the downloading of JPEG images on either a website or through e-mail. If not invalidated, this patent could apply to any website that displays JPEG images. The patent emerged{{clarify|date=February 2010|reason=how can it emerge in 2007 when a cite below has "In autumn 2000 ..."}}
The patent owner{{which|date=February 2010|reason=cite starts "In autumn 2000 TechSearch Inc"; was that the owner at the time? Has the owner changed since 2000? Please make this more precise.}}
has also used the patent{{which|date=February 2010|reason=US 5253341?}}
to sue or threaten outspoken critics of broad software patents, ...
I do not understand patent law terminology to fix the text myself, but my guess is that "emerged" is probably wrong or means something that should be more clearly expressed; "Techsearch Inc" was the owner in 2000 and "Global Patent Holdings, LLC" became the owner by 2007; and "the patent" is always the same "US patent 5,253,341". My suggestion would be to spell out the patent and its owner in full each paragraph. -84user (talk) 18:46, 18 February 2010 (UTC)
Embedded Text & Comments ??
My question is about common usage rather than technical stuff. The Irfanview picture viewer (for one) has several categories of embedded text in most pics, including stuff like 1) shutter speed and other camera info, 2) copyright info and requests to the printer, and 3) text prose such as historical info about the photo, like where it was taken and what the subject is - whatever you want to type in, up to several meaty paragraphs long.
Irfanview calls the last category "Comments"; it's in the image info section with the others, and is several (5+?) years old. I searched the article for "text," "comments," and several other keywords and could find no info on ANY of the categories. It seems they should be covered in lay language -- I request that somebody do so.
My specific question is; What other common viewers can read these embedded prose "comments," if any? Or is that feature common/typical? It seems all my buddies are ignorant of this useful feature. I also wonder if it would be useful to have an overview of the features of the commonly used viewers in this article.
(Also: I think GIF is NOT lossless as claimed; it typically loses colors.)
Thank You, --71.128.254.239 (talk) 21:15, 23 March 2010 (UTC)Doug Bashford
- The "Common JPEG markers" table lists most of the typical segments to be found in a JPEG file, one of which is the "Comment" segment. Many image viewers, even those from many years ago, can view embedded text comments. As for whether it's worth expanding this point within the article, I'm not sure, as the comment segment seems to be hardly ever used nowadays other than as a place for software to put a copyright stamp!
- You're right that GIF is generally not lossless when used to encode photographic images, which typically contain thousands of different colored pixels, and which therefore need to be reduced to a palette of 256 or fewer colors. However, photographic images are not GIF's strong suit. The part of the article that referred to GIF being lossless was in relation to "line drawings and other textual or iconic graphics", which typically have 256 or fewer colors, and which can therefore be compressed losslessly. Ian Fieggen (talk) 22:37, 23 March 2010 (UTC)
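To make the palette point concrete, the color loss happens in the quantization step that any GIF writer has to perform on a photo. A small sketch, assuming the Pillow library (file names are placeholders):

```python
from PIL import Image

photo = Image.open("photo.jpg").convert("RGB")
paletted = photo.quantize(colors=256)   # median-cut reduction to at most 256 colors (lossy for photos)
paletted.save("photo.gif")              # from here on, GIF's LZW coding is itself lossless
print(len(paletted.getcolors()))        # how many distinct palette entries ended up being used
```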
Thanks Ian! Now I feel silly to have missed it. I find the Comments so useful that I feel frustrated that I can't tell others with other software how to operate this secret function! But I admit, I often use pics more like notes than as personal snapshots. --71.128.254.239 (talk) 01:31, 24 March 2010 (UTC)Doug
Archiving
Does anyone object to me setting up automatic archiving for this page using MiszaBot? Unless otherwise agreed, I would set it to archive threads that have been inactive for 30 days and keep ten threads.--Oneiros (talk) 18:31, 1 April 2010 (UTC)
- Done--Oneiros (talk) 12:30, 5 April 2010 (UTC)
Blatant copy of this JPEG article available on Amazon.com
Today I was made aware of the activities of Alphascript Publishing, who are selling hard copies of Wikipedia articles on the book site Amazon.com. Here's a link to their book on JPEG: JPEG book on Amazon.com
While I'm not real comfortable with Alphascript Publishing selling what I and many other Wikipedians have contributed freely, I guess I have to accept that under the terms of the Creative Commons license, they have every right to do so. However, it should be done with proper attribution. Instead, this book is credited to Frederic P. Miller (Editor), Agnes F. Vandome (Editor), John McBrewster (Editor). Where is the mention of the many editors of this article, myself included?
Searching Amazon.com for "Alphascript Publishing" currently turns up 39,827 results. Searching for "Alphascript Publishing JPEG" was sufficient to home in on this particular book. I'd suggest that any Wikipedians who are as disgruntled as I should search Amazon for books on their pet Wikipedia subjects and leave appropriate reviews and/or feedback for Amazon.com staff. Ian Fieggen (talk) 22:51, 5 April 2010 (UTC)
What's the point?
What's the point of using JPEG if you can use PNG instead? Is the only advantage that JPEG has over PNG is that its file sizes are slightly smaller? —Preceding unsigned comment added by Arf arf arf Imma seal (talk • contribs) 22:23, 7 August 2010 (UTC)
- Not "slightly" smaller, but significantly smaller. A typical photographic image saved as a PNG file may achieve as high as 2:1 compression, whereas the same photograph saved as a JPEG file can easily achieve 10:1 compression, making the JPEG file 1/5 the size of the PNG file. In addition, the user can specify the quality setting. A JPEG file at a lower, but still acceptable, quality setting can be 1/10 the size of a PNG file, while even at the absolute maximum quality setting, the JPEG file can still be 1/2 the size. Ian Fieggen (talk) 23:08, 7 August 2010 (UTC)
- JPEG's much smaller files come with greatly reduced quality. At the high-quality end of the spectrum, the JPEG is typically only slightly smaller, and still not lossless. People do often prefer JPEG for the smaller file sizes, when quality is not so critical. Dicklyon (talk) 00:26, 8 August 2010 (UTC)
law.com link #39
We can't access it (the law.com website says "premium account required").
This is the #39 link: A Bounty of $5,000 to Name Troll Tracker: Ray Niro Wants To Know Who Is saying All Those Nasty Things About Him —Preceding unsigned comment added by 190.96.64.106 (talk) 20:42, 20 August 2010 (UTC)
Decoding clarification please
Why does the DCT coefficient matrix in the decoding section differ at row 4 column 1 from the DCT coefficient matrix under encoding?
81.233.40.84 (talk) 16:37, 24 August 2010 (UTC)anonymous student
- That's an error introduced by User:FalseAlarm in this diff. Dicklyon (talk) 18:18, 24 August 2010 (UTC)
- Do you mean a computational error? Or a common typing error? 81.233.40.84 (talk) 22:46, 25 August 2010 (UTC)anonymous student
- You could ask him what he was thinking. The edit summary "Correcting quantized result for one entry, which is different due to using full-precision DCT" suggests that he thought he was fixing an error, but in the process he left it inconsistent. Dicklyon (talk) 04:59, 27 August 2010 (UTC)
Can there be a better non-technical explanation?
I am interested in how the compression of JPEGs is achieved, but would it be possible to have slightly more user-friendly examples for non-techies?
Explanations I can find elsewhere on the web fall into two camps: 1. the totally non-technical (i.e. with no real explanation of what's actually going on); 2. the we've-all-got-a-degree-in-computer-science-or-mathematics explanations, which I really can't follow.
I'm afraid the current Wikipedia page falls into the latter camp. Would it be possible to create a section which breaks new ground and gives a technical explanation, but without assuming a great deal of additional knowledge? For example, I have a PhD in biology, so I'm not entirely stupid. I followed the explanation from 'Color space transformation' through 'Downsampling' and 'Block splitting', but then the section on the core of the compression process starts with '... is converted to a frequency-domain representation, using a normalized, two-dimensional type-II discrete cosine transform (DCT).'
Great opening gambit! Ahhhhhh, what????
It helpfully points me to a page on 'discrete cosine transform (DCT)', but this is a general page on all such transformations, with nothing specific to the application of type-II DCTs in jpeg compression. I think that it would be almost impossible for anyone without degree-level maths to extract any useful explanation from the DCT page, if they were trying to get a grip on jpegs.
I am certain I can be made to understand what's going on! I'm not too afraid of maths! But I need a bit more help!
Perhaps a worked example could be created for the jpeg page at this point, with a few more pictures or diagrams? I'd love to help, but as yet I really don't understand how it works! I'd be interested to know how many uninitiated readers actually understand the compression process by reading these sections (i.e. those who do not already understand it!). —Preceding unsigned comment added by 83.80.19.211 (talk) 08:52, 16 December 2010 (UTC)
Syntax editors?
I wonder what editors can be used to edit JPEG syntax? An obvious edit would be to remove EXIF etc. information.
My specific problem is that I have photos taken by a Pentax pocket camera, which imposes a persistent "date taken" on the image (in a very ugly & obstructive manner). I would like to remove these dates; I'm pretty certain they are superimposed on the standard JPEG image (because they seem to shift slightly as the image is displayed), but I'm hitting a blank wall in finding out how to get rid of them. Memethuzla (talk) 15:33, 24 December 2010 (UTC)
- I've written a tiny program called JPGExtra, whose specific role is to remove all "extra" information (ie. metadata) from JPG files, including the EXIF data. If you need more control over which segments to remove, this can be done with another popular program called Irfanview. Look under "JPG Lossless Rotation", choose "None" for the transformation, then under "JPG APP marker options", choose "Custom" and select the markers to be kept (the remainder will be discarded).
- As to whether this sort of information makes a worthwhile contribution to the article, I'm not sure. I don't want to seem biased and pushing my own JPGExtra program, but it may well be worth having a list of programs and the types of metadata edits that each can perform. What do others think?
- Having said all of the above, I'm intrigued by your description of the problem, specifically that the superimposed dates seem to "shift slightly as the image is displayed". If this is the case, then it could be a feature of the viewing software that you are using, which you may be able to disable. However, if these same photos still contain the date stamp when you e-mail them, view them on a web site, or view them on other systems, then the date has almost certainly been embedded as pixels in the image itself and therefore cannot be removed (except by careful photo retouching). Ian Fieggen (talk) 07:20, 25 December 2010 (UTC)
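Since the question comes up often, here is a rough, self-contained sketch of the kind of lossless metadata stripping that the programs mentioned above perform: it copies a JPEG marker by marker and simply drops the APP1 (Exif) and COM (comment) segments, leaving the compressed image data untouched. It is only an illustration of the idea, not a replacement for JPGExtra or Irfanview, and it assumes a well-formed baseline or progressive file.

```python
import struct

def strip_jpeg_metadata(src, dst, drop_markers=(0xE1, 0xFE)):
    """Copy a JPEG, dropping APP1/Exif (0xFFE1) and COM (0xFFFE) segments.
    Add 0xED to drop_markers to also remove APP13 (Photoshop/IPTC) data."""
    with open(src, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":
        raise ValueError("not a JPEG file (missing SOI marker)")
    out = bytearray(b"\xff\xd8")
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:
            raise ValueError("expected a marker at offset %d" % i)
        marker = data[i + 1]
        if marker == 0xFF:                 # padding/fill byte, skip it
            i += 1
            continue
        if marker == 0xDA:                 # SOS: entropy-coded data follows, copy the rest verbatim
            out += data[i:]
            break
        seg_len = struct.unpack(">H", data[i + 2:i + 4])[0]   # length includes these 2 bytes
        if marker not in drop_markers:
            out += data[i:i + 2 + seg_len]
        i += 2 + seg_len
    with open(dst, "wb") as f:
        f.write(out)

# e.g. strip_jpeg_metadata("holiday.jpg", "holiday_clean.jpg")
```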
- Thanks,Ian, I'll investigate that.Memethuzla (talk) 16:20, 29 December 2010 (UTC)
NO WAY
Nice view of a jpeg city. I can almost see compression avenue —Preceding unsigned comment added by 94.197.137.70 (talk) 19:52, 18 January 2011 (UTC)
Block splitting: why do it
I think the block splitting section should explain why it is necessary. The DCT of a 256x256 image is equally expensive to compute in 8x8 blocks (64*64*32 = 256*256) — Preceding unsigned comment added by Nothing1212 (talk • contribs) 16:29, 25 February 2011 (UTC)
- With 8x8 blocks you need to do 8x8 operations per pixel; with 256x256 blocks you'd need 256x256 operations per pixel. It would take 1024 times longer to encode/decode with such large blocks. Furthermore, it would probably introduce problems at the quantization stage.
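To put rough numbers on that: a 2-D DCT computed directly (each of the N² output coefficients is a sum over all N² input samples) costs about N² multiply-adds per pixel, so

```latex
\frac{\text{cost per pixel at } N = 256}{\text{cost per pixel at } N = 8}
\;=\; \frac{256^2}{8^2} \;=\; \frac{65536}{64} \;=\; 1024 .
```

Exploiting separability or a fast, FFT-style factorization reduces the per-pixel cost considerably, but a 256×256 block is still markedly more expensive, and it leaves the buffering and quantization-design issues mentioned above.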
YCCK JPEGs with APP14 Marker
I have a few JPEG images with an APP14 marker (ColorTransform = 2, so the color is not CMYK but YCCK). About half of them can be viewed OK; the others must be inverted before converting to RGB (otherwise they are nearly black). I don't see any difference in the metadata which would give me a hint as to which images should be inverted and which not.
Any ideas??? —Preceding unsigned comment added by 79.197.218.189 (talk) 23:25, 6 January 2011 (UTC)
Normalizing function in inverse DCT wrong?
Using the current iDCT you never get back the original matrix; however, if you replace the normalizing function with the one used in the German article it works out just fine. Anyone care to verify and correct? —Preceding unsigned comment added by 79.245.205.66 (talk) 21:19, 7 January 2011 (UTC)
- I just modified it – please review the result. (I don't know what German article you are referring to, but I believe the current formulation is correct.) —LightStarch (talk) 10:57, 28 January 2011 (UTC)
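For anyone checking the fix, the form of the 8×8 inverse DCT that I believe matches the forward transform used in the article's worked example (normalization constants included) is:

```latex
f(x,y) \;=\; \frac{1}{4} \sum_{u=0}^{7} \sum_{v=0}^{7}
\alpha(u)\,\alpha(v)\, F(u,v)\,
\cos\!\left[\frac{(2x+1)\,u\,\pi}{16}\right]
\cos\!\left[\frac{(2y+1)\,v\,\pi}{16}\right],
\qquad
\alpha(k) \;=\;
\begin{cases}
\dfrac{1}{\sqrt{2}} & k = 0,\\[4pt]
1 & k > 0.
\end{cases}
```

With these constants the inverse undoes the forward transform exactly, so round-tripping an unquantized block returns the original values up to rounding.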
Invention claim by Filipino Scholar
I can find no other reference to this; surely this standard was the result of a committee, not the invention of a particular person?
- Agreed. Vandalism edits have been reverted. Ian Fieggen (talk) 01:51, 13 December 2011 (UTC)
Top Picture
I'm not trying to sound picky, but I find it weird that the picture of the cat at the top of the page is a Fireworks PNG graphic rather than a JPEG. --Demon of the Sand ∞ 17:05, 14 October 2011 (UTC)
Great Work, Guys!
Last glanced at this entry some 4 or so years ago, and too soon conceded that the subject matter might be beyond my depth of intellect and breadth of experience. Recently referenced it again, and noticed that it has happily come a long way since previous revisions, now representing a highly authoritative and significantly more understandable information source.
Great work, guys. --203.206.69.227 (talk) 06:16, 10 November 2011 (UTC)
CMYK JPGs
Do CMYK JPEGs exist and do they conform to the standard? TIA,--78.48.41.233 (talk) 22:44, 20 November 2011 (UTC)
- Yes, CMYK JPGs do exist, and although they conform to the standard, not all software will decode them. Image handling software (eg. PhotoShop, IrfanView, etc.) should read them correctly, but many browsers will incorrectly assume that they are in JFIF format and will be unable to read them. Ian Fieggen (talk) 04:13, 21 November 2011 (UTC)
Medium quality over Higher quality compression
According to the article, after giving some file compression examples, "The medium quality photo uses only 4.3% of the storage space". However the file sizes for the examples given are 83,261 bytes for Higher quality, and 9,553 for Medium quality.
Am I missing something, or is that 11.4% rather than the quoted 4.3%? MrZoolook (talk) 03:50, 25 November 2011 (UTC)
- Your calculation of 11.4% is based on the medium quality filesize divided by the higher quality filesize (9,553 ÷ 83,261). The 4.3% quoted in the article refers to the medium quality filesize divided by the uncompressed storage required by the image (9,553 ÷ 219,726). Ian Fieggen (talk) 02:04, 13 December 2011 (UTC)
- Well spotted. Though saying that, the intro to the comparison clearly describes a setting of Q=100 as 'full quality', while the comparison itself calls Q=100 'higher quality'. Perhaps some clarification should be made here. MrZoolook (talk) 11:40, 19 December 2011 (UTC)
- Good point, MrZoolook. I've therefore re-worded "full / higher quality" to "highest quality" for consistency. Ian Fieggen (talk) 21:49, 21 December 2011 (UTC)
- Also, just to help with the original confusion, I added a note that the comparison is relative to the uncompressed image. MrZoolook (talk) 01:23, 22 December 2011 (UTC)
Progressive JPEGs in older browsers
and even some software which does support them (such as versions of Internet Explorer before Windows 7)[12] only displays the image after it has been completely downloaded.
This is not true, and the citation doesn't seem to back it up either. I know that Netscape could decode progressive JPEGs progressively back in the 90's and I'm fairly confident that Internet Explorer was doing this as well, at least as far back as version 5.5.
Problem with "losless editing" section
Current text:
- A number of alterations to a JPEG image can be performed losslessly (that is, without recompression and the associated quality loss) as long as the image sizeUnit) (usually 16 pixels in both directions, for 4:2:0 chroma subsampling).
That's not a complete sentence and there is something wrong with those parentheses. Furthermore, it references something called sizeUnit, which is not described and has no other mention on the page. Hhh3h (talk) 04:50, 7 April 2013 (UTC)
- This issue appears to be the result of a series of bad edits by 122.60.61.117 (http://en.wiki.x.io/w/index.php?title=JPEG&diff=547939340&oldid=545212426). That series of bad edits was followed by another bad edit by 116.203.111.193, a weak attempt at fixing some of the damage by 78.134.103.104, and the addition of an image by Mormegil. I have reverted to the version from before 122.60.61.117's edits and then re-added the image added by Mormegil. Plugwash (talk) 12:44, 7 April 2013 (UTC)
What's the CORRECT DCT equation used for JPEG
According to http://www.lokminglui.com/dct.pdf, in the "The DCT Equation" section, the DCT equation (as well as the normalization constants that depend upon where in the block the coefficient is) is DIFFERENT from the one here in the Wikipedia article. Is this just a different (but still equivalent) form of what's shown here? Or is one of them actually wrong (either the Wiki article, or the other article)? Animedude5555 (talk) 23:08, 8 May 2013 (UTC)
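The two are most likely equivalent forms. Different texts distribute the normalization constants differently (some fold the overall 1/4 for the 8×8 case into a 2/N factor, some write C(u)C(v) inside the sums), and any of the variants is correct as long as the inverse transform uses the matching constants. The form usually quoted for the 8×8 JPEG forward DCT is, as far as I know:

```latex
F(u,v) \;=\; \frac{1}{4}\,\alpha(u)\,\alpha(v)
\sum_{x=0}^{7} \sum_{y=0}^{7}
f(x,y)\,
\cos\!\left[\frac{(2x+1)\,u\,\pi}{16}\right]
\cos\!\left[\frac{(2y+1)\,v\,\pi}{16}\right],
\qquad
\alpha(k) \;=\;
\begin{cases}
\dfrac{1}{\sqrt{2}} & k = 0,\\[4pt]
1 & k > 0.
\end{cases}
```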
Who knows the history of JPEG?
There is not even the release year given. 64.134.236.249 (talk) 06:54, 30 April 2012 (UTC)
Pre-JPEG software was in use several years before the standard was set. IG JAS developed transmission systems for radar images using JPEG-type data compression as early as circa 1986, for the Gripen fighter. 130.241.141.153 (talk) 10:59, 28 January 2014 (UTC)
Need a better video to show the continuously varying JPEG compression
A better video could be utilized to showcase the compression scale from Q=100 to Q=1. Below are a few reasons for my point: 1. The video is black and white, i.e. greyscale. JPEG's strength lies in capturing and rendering the colors of natural images and photos. — Preceding unsigned comment added by 62.215.232.83 (talk) 08:36, 6 July 2014 (UTC)
Vulnerabilities
Should this vulnerability be added to this article? As with many compressible file formats, the relatively small size of some JPEG files can hide the resources required to view or process them. For example, the webpage has a link to a 618KB JPG file of a 10,000 x 10,000 pixel 24-bit image, which, if a program converted it to an uncompressed format, would require at least 300MB of memory or file storage. • Sbmeirow • Talk • 08:08, 22 November 2014 (UTC)
If this topic isn't added to this article, should we create an article similar in concept to the Zip bomb, E-mail bomb, and Billion laughs articles? Possibly name the article "JPEG bomb"? • Sbmeirow • Talk • 08:16, 22 November 2014 (UTC)
Has anyone come across an article/blog on the internet that describes this vulnerability? • Sbmeirow • Talk • 08:19, 22 November 2014 (UTC)
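For reference, the memory figure in the example above is just the uncompressed size of the decoded image, assuming 3 bytes per pixel for 24-bit color and no additional working buffers:

```latex
10{,}000 \times 10{,}000 \ \text{pixels} \times 3 \ \tfrac{\text{bytes}}{\text{pixel}}
\;=\; 300{,}000{,}000 \ \text{bytes} \;\approx\; 286\ \text{MiB},
```

which is an expansion of roughly 480:1 relative to the 618 KB compressed file; that arithmetic is what makes the Zip-bomb-style comparison apt.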
Does not explain compression levels 0-9 ?
I came here expecting to find some section explaining the levels that are commonly used in picture software for JPEG, but there's none. — Preceding unsigned comment added by 218.252.34.131 (talk) 03:35, 28 May 2015 (UTC)
External links modified
Hello fellow Wikipedians,
I have just added archive links to one external link on JPEG. Please take a moment to review my edit. If necessary, add {{cbignore}}
after the link to keep me from modifying it. Alternatively, you can add {{nobots|deny=InternetArchiveBot}}
to keep me off the page altogether. I made the following changes:
- Added archive https://web.archive.org/20080602141045/http://www.uspto.gov:80/web/patents/patog/week30/OG/html/1320-4/US05253341-20070724.html to http://www.uspto.gov/web/patents/patog/week30/OG/html/1320-4/US05253341-20070724.html
When you have finished reviewing my changes, please set the checked parameter below to true to let others know.
This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}}
(last update: 5 June 2024).
- If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
- If you found an error with any archives or the URLs themselves, you can fix them with this tool.
Cheers. —cyberbot IITalk to my owner:Online 11:30, 26 August 2015 (UTC)
High-dynamic-range imaging
The "JPEG compression" section contains the following sentence: "Widespread use of the format has stimulated the adoption of simulated high-dynamic-range imaging (HDR) modes in inexpensive cameras and smartphones, to correct the loss of shadow and highlight detail."
It's not clear exactly what kind of "loss of shadow and highlight detail" is meant, especially as opposed to compression artifacts that would affect midtones just as well. Nor is 12-bit JPEG relevant here, as it is not widely supported on smartphones nor consumer-grade cameras. Therefore, I don't see why HDR should be mentioned here, and I would like to remove the sentence; any objections? --SoledadKabocha (talk) 19:51, 12 September 2015 (UTC)
IE info in lead outdated, or ambiguous?
In the lead: "The MIME media type for JPEG is image/jpeg, except in older Internet Explorer versions". This is ambiguous. In the source I find: "In Microsoft Internet Explorer 4.0 and later, MIME type determination occurs in URL monikers through the FindMimeFromData method." so it could mean "in all current Internet Explorer browsers down to 4.0".
Maybe I'm not reading this right; does the "except" apply to IE4 up to some IE that is still old? Then maybe drop this from the lead as too much trivia? If this does not apply to Microsoft Edge (does it?) then at least this will get dated at some point. comp.arch (talk) 16:36, 12 December 2015 (UTC)
- According to this, it may have changed as of IE 9: "image/jpeg image/bmp image/gif image/png"[6]
- "But some applications (most notably MS Internet Explores but also Yahoo! mail) send jpeg files as image/pjpeg
- I thought I knew that pjpeg stood for 'progressive' jpeg. It turns out that progressive/standard encoding has nothing to do with it. MS Internet explorer send out all jpeg files as pjpeg regardless of the contents of the file.
- The same goes for citrix: all jpeg files send from a citrix client are reported as the image/x-citrix-pjpeg MIME type."[7]
Lossless editing
@Waerloeg: your qualifier isn't limited to rotations, or is it? On commons I sometimes state "lossless crop with XnView" assuming that it's understood to be not really lossless for width x height != m*8 x n*8. –Be..anyone 💩 17:03, 22 April 2016 (UTC)
External links modified
Hello fellow Wikipedians,
I have just modified 4 external links on JPEG. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
- Added archive https://web.archive.org/web/20091008041637/http://www.jpeg.org/newsrel25.html to http://www.jpeg.org/newsrel25.html
- Added archive https://web.archive.org/web/20090123232605/http://www.elektronik.htw-aalen.de/packjpg/ to http://www.elektronik.htw-aalen.de/packjpg/
- Corrected formatting/usage for http://www.algovision-luratech.com/company/news/patentquarrel.jsp?OnlineShopId=164241031081525276
- Added archive https://web.archive.org/web/20110716123228/http://eupat.ffii.org/pikta/xrani/rozmanith/index.en.html to http://eupat.ffii.org/pikta/xrani/rozmanith/index.en.html
- Added
{{dead link}}
tag to http://patentlaw.jmbm.com/2013/04/hps-motion-to-dismiss-for-lack.html/
When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.
This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}}
(last update: 5 June 2024).
- If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
- If you found an error with any archives or the URLs themselves, you can fix them with this tool.
Cheers.—InternetArchiveBot (Report bug) 15:22, 16 April 2017 (UTC)
External links modified
Hello fellow Wikipedians,
I have just modified 5 external links on JPEG. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
- Corrected formatting/usage for http://www.jpeg.org/public/jfif.pdf
- Added archive https://web.archive.org/web/20131231055215/http://kikaku.itscj.ipsj.or.jp/sc29/29w12901.htm to http://kikaku.itscj.ipsj.or.jp/sc29/29w12901.htm
- Added archive https://web.archive.org/web/20131231012300/http://kikaku.itscj.ipsj.or.jp/sc29/29w42901.htm to http://kikaku.itscj.ipsj.or.jp/sc29/29w42901.htm
- Added archive https://web.archive.org/web/20100216033245/http://eu.sabotage.org/www/ITU/P/P0930e.pdf to http://eu.sabotage.org/www/ITU/P/P0930e.pdf
- Added archive https://web.archive.org/web/20070714232941/http://www.jpeg.org/newsrel1.html to http://www.jpeg.org/newsrel1.html
When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.
This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}}
(last update: 5 June 2024).
- If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
- If you found an error with any archives or the URLs themselves, you can fix them with this tool.
Cheers.—InternetArchiveBot (Report bug) 22:20, 18 November 2017 (UTC)
Nothing on embedded Comments.
Quoting the web: "One great feature of JPEG images that make them a great asset over, lets say, PNG images is the fact that one can add comments to them." Yup. And .GIF etc... I also notice some discussion re: embedded JPEG comments vs. embedded Windows comments in JPEGs being incompatible. At first scan I also see nothing on IPTC metadata, nor EXIF metadata, etc. Fun: Metadata#Photographs, also Comparison_of_metadata_editors. But nothing here? Or is it hiding in all that O-so-impressive jargon?
With Irfanview I can only make Comments several kB long; I was wondering whether that is an Irfanview limit or a JPG rules limit. Cheers!
--2602:306:CFCE:1EE0:35E7:576D:803C:9EB8 (talk) 20:18, 29 June 2018 (UTC)Doug Bashford
- JPEG can store comments in COM segments. The maximum length of a COM segment is 65535 bytes, but I think you can have as many COM segments as you like. (The quote is wrong about PNG, which has always been able to store comments.) --Zundark (talk) 21:19, 30 June 2018 (UTC)
Castle picture
The Lichtenstein (castle) picture is a PNG in this article about JPEG. If intentional, this isn't immediately explicit. — Preceding unsigned comment added by 90.252.188.41 (talk) 11:20, 20 January 2019 (UTC)
- It's intentional, since it's showing the original image (that is, before JPEG compression). --Zundark (talk) 11:47, 20 January 2019 (UTC)
JPEG filename extensions
In addition to the five common file extensions for JPEG files (.jpg, .jpeg, .jpe, .jfif, .jif), the article currently mentions other types of files that "may contain embedded JPEG data", singling out .tif and .mp3 files. It's true of any file that it can be embedded in certain other files. Consider what happens when one places a JPEG image in a word processor document, spreadsheet, database, pdf, etc. Should we mention any or all of these? Each of these examples is probably more common to the average person than a .tif file. Personally, I think it's better not to even mention the embedding of JPEG data – at least not in the section on file extensions. Ian Fieggen (talk) 21:46, 24 April 2019 (UTC)
".pjp" listed at Redirects for discussion
An editor has asked for a discussion to address the redirect .pjp. Please participate in the redirect discussion if you wish to do so. signed, Rosguill talk 19:01, 8 September 2019 (UTC)
No mention of M-JPEG
Although not a part of the JPEG standard, I think the Motion JPEG convention, as used in the USB video class and supported by major browsers, at least rates a mention, if not a subsection, in this article. — Preceding unsigned comment added by 60.241.72.197 (talk) 03:43, 23 September 2019 (UTC)
Update JPEG-XL
I suggest a strong update to the JPEG XL part, based on this very recent article: https://cloudinary.com/blog/how_jpeg_xl_compares_to_other_image_codecs
I would do that, but the current section contains so many buzzwords that I don't dare touch it. Basic info:
- maximum dimension (in a single code stream): 1,073,741,823 x 1,073,741,823 = 1,152,921,502,459 megapixels = 1,152,921 terapixels
- maximum bit depth: 24-bit integer / 32-bit float
- maximum [color] channels: 4100 (JPEG: 3)
- planned release as ISO standard: end of 2021
- reference implementation: https://gitlab.com/wg1/jpeg-xl
Prefer present level of detail
One criticism suggested the article is perhaps too detailed. I'm a retired electrical engineer, who at one time worked on the earliest digital television hardware, and I found this an excellent single place to get all the information I wanted for a present investigation. While I suppose I'm an example of a "particular audience," I would hate to see it reduced in detail or scope. Mweir2 (talk) 13:54, 6 March 2020 (UTC)
- I am a computer scientist, therefore also from a "particular audience." I agree with you that there is no good reason to remove well-revised information from the Wiki. If the lead and second paragraph need to be simplified, they should be, but removing good quality text from a Wiki page which gets ~1579 views/day seems counterproductive. BernardoSulzbach (talk) 20:50, 6 March 2020 (UTC)
- I'm a computer science researcher at a university and I agree with both of you. I came to this article while reading advanced research papers on the DCT transform, just to refresh my knowledge of the technical details of JPEG compression. I am dismayed to read the criticism suggesting that the article is too detailed. A simplified lead should be enough for the general public. However, what makes Wikipedia great is having it as a source of deep knowledge, rather than leaving that role to paying publishers alone and thus restricting access to knowledge to those who can afford expensive access to private publishing. It seems counterproductive to remove a good job of dissemination of technological knowledge. --88.21.80.206 (talk) 00:37, 20 April 2020 (UTC)
- I'm neither of those things; I think I'm what you'd consider part of the general audience, and quite frankly, I didn't care for the very minute and technical details describing, of all things, a JPEG. If I wanted those I would have looked up some documentation. — Preceding unsigned comment added by 168.169.10.39 (talk) 06:36, 12 November 2020 (UTC)