Wikipedia:Reference desk/Archives/Science/2016 August 16

Science desk
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


August 16

Tebua Tarawa - did it ever sink, and is it sunk now?

Tebua Tarawa is one of the small islands of Kiribati said to have sunk beneath the waves as a result of global sea-level rise attributed to global warming.

Now there's a lot of general argument about these things, but I have kind of a specific issue: I thought a coral atoll was a self-regulating entity, capable of responding to sea level rise. There is certainly a general question, of great public interest, as to whether the rate of rise could increase enough due to warming to overwhelm this natural mechanism, or whether ocean acidification could decrease coral growth to the point where it cannot keep up with the sea level rise.

But specifically, I looked using Google's search and, lo and behold, there appears to be a small, lovely, tree-covered island labelled Tebua Tarawa - the island our article, citing a BBC report, says hasn't existed since 1999. But they could be wrong; or at least, this island might be much smaller than it used to be. There certainly is a lot of room south of it on the sandbar where an erstwhile Tebua Tarawa, or a portion thereof, might once have been; I can't tell. So was the report of its death greatly exaggerated, entirely bogus, or actually accurate and Google is wrong?

Google isn't finding Abanuea for me, another island reported to have disappeared; it's somewhere in Tarawa Island perhaps, but I don't know its status. Wnt (talk) 13:54, 16 August 2016 (UTC)[reply]

I don't know, but I think it's a bit naive to think of coral atolls as having any large degree of stability or resilience in the face of environmental conditions that have not been seen for around 3 million years. Sure, they have some positive feedbacks that usually keep them going on the scale of thousands of years, but the current, world-wide coral die-off is [1] massive [2] :-/ SemanticMantis (talk) 14:36, 16 August 2016 (UTC)[reply]
This is an entirely valid point - indeed, the interest in whether these islands sank or not is based in part on the idea that if one coral island can't stay above water, then they are all in imminent trouble. But coral is not entirely without evolutionary potential here [3] - it makes sense that if some corals have evolved to permanently activate heat resistance genes to better withstand bleaching, then these repeated events will make them all start to do so, and thereby keep the islands viable for longer than we would have expected assuming no evolution at all. But yeah, that's a lot to demand of natural selection in a brief time. This wouldn't be an interesting question if the issue weren't still, AFAIK, open. Wnt (talk) 15:15, 16 August 2016 (UTC)[reply]
I suspect that they mean the land is completely submerged at high tide (tree tops might still stick up). The time between when this condition occurs and when the island remains completely submerged, even at low tide, could be decades or centuries. StuRat (talk) 14:49, 16 August 2016 (UTC)[reply]
Look at the Google Earth photo. Those aren't dead tree trunks sticking up out of water, but healthy trees surrounded by a sandy beach. Wnt (talk) 15:03, 16 August 2016 (UTC)[reply]
That could be low tide. And any surviving trees on such an island would be salt-tolerant. There are trees in the Amazon that are underwater at the base during the rainy season. StuRat (talk) 15:44, 16 August 2016 (UTC)[reply]
  • I'm assuming you know Google Earth is not live? Here in urban areas I've seen imagery up to 5 years old. I would be very surprised if the photos in more remote areas like that are more recent, and very much not surprised if they were significantly older satellite imagery. A lot of things can happen in that time. Fgf10 (talk) 17:13, 16 August 2016 (UTC)[reply]
Well yes, but Tebua Tarawa allegedly sank in 1999. Wnt (talk) 18:44, 16 August 2016 (UTC)[reply]
The phrase "sank in 1999" can't mean what you seem to think it means, that it was always completely above the surface of the water until then, then sank entirely below the surface that year, never to be seen again. Maybe it could if there was some type of sudden collapse of the structure under the island, but not due to sea level changes, because sea level changes are dramatic over the short term, due to tides, storm surge, etc., but only very gradually over the long term. This means there would be many years where it was only above the water part of the time. Exactly how "sank" is defined would be critical to know where in that process that determination is made. StuRat (talk) 02:59, 17 August 2016 (UTC)[reply]
  • I'd just like to point out that whether or not the island has disappeared, the cause would not be global sea level rise due to global warming. There has been a significant local sea level rise in that region caused by a long-term change in wind distribution, which might or might not have been caused by global warming. The estimated rise in global sea levels since 1950 has only been about 5 inches, not enough to cause islands to disappear. Looie496 (talk) 15:27, 16 August 2016 (UTC)[reply]
Well, this is the case for another island, Bikeman Island, which was only about one tree wide [4] and which was extinguished by construction of a causeway between two islands to the southeast of it. I haven't found anything online that says where Abanuea is or was, including the micronation pretending to claim it, but it is in the same Tarawa atoll, so my suspicion of a local cause is better than average. Even so, I honestly have no idea how long the whole chain of events from CO2 rise to ocean acidification to coral bleaching to offshore erosion to beach erosion and swamping of the island actually takes, and I can't rule out that it could happen that fast. (Note this is a different atoll from the one that Tebua Tarawa is on, despite the name.) Indeed, I wonder if claiming the one island sank due to global warming helped politically to excuse the loss of the others... Wnt (talk) 15:34, 16 August 2016 (UTC)[reply]
The Tebua Tarawa found in Google Earth is not part of Tarawa atoll. It is part of Butaritari atoll. So there may be some confusion here. Ruslik_Zero 17:37, 16 August 2016 (UTC)[reply]
That's what I was noting above. Abanuea and Bikeman Island are both part of Tarawa ... but I still don't know where Abanuea is/was (Bikeman is marked in the middle of open water on Google; again, I don't know the accuracy of that but by all accounts it's gone). Wnt (talk) 18:41, 16 August 2016 (UTC)[reply]
Did you find the right Tebua (north of the village of Naa)? According to this, Bikeman and Abanuea may be the same islet. Also:

On Naa, we look up Toauru Utire, 61, who as a boy, waded to Tebua to harvest coconuts and breadfruit, and to fish. When he was about seven, he recalls, "there were coconut and pandana trees" on Tebua. But during his teenage years in the 1950s, trees on Tebua began to die. Sand and other sediment washed away. The island began to shrink. Finally all the trees and vegetation disappeared, leaving only a few square yards of dead coral polished by waves and wind.

Curtis A. Moore, "AWASH IN A RISING SEA - How Global Warming Is Overwhelming the Islands of the Tropical Pacific," International Wildlife, January-February 2002.—eric 14:48, 17 August 2016 (UTC)[reply]

I think there is a pretty good case to be made that all the evidence for Abanuea derives from a reporter wanting to use the name (which ironically means "the beach which is long-lasting"), and that a redirect to Bikeman is appropriate.

I will also be redirecting Global Warming to Global Warming Hoax as these so-called scientists cannot even keep track of the number of islands that are sinking.—eric 01:58, 18 August 2016 (UTC)[reply]

If you do, it will be undone immediately. Please do not vandalize Wikipedia. Your anti-intellectualism may seem quaint and amusing to your peers, but it is wholly inappropriate for the reference desk. SemanticMantis (talk) 15:36, 18 August 2016 (UTC)[reply]

Using quantum teleportation between orbiting satellites to read a license plate on Callisto?

According to Paul Kwiat, a physicist at UIUC: [5] "Eventually, quantum teleportation in space could even allow researchers to combine photons from satellites to make a distributed telescope with an effective aperture the size of Earth — and enormous resolution. “You could not just see planets,” says Kwiat, “but in principle read licence plates on Jupiter’s moons.”" Is there a name for this kind of god-awesome telescope, and do we have an article about it? We ought to... I am so tired of speculating about exoplanets in low resolution. :)

This article describes something I'm almost, but not quite, sure is completely unrelated: it makes a measurement of when a photon is present (for some reason I cannot guess), then uses stimulated emission to create an optical amplifier that improves the signal with multiple cloned photons sharing the same characteristics. Wnt (talk) 18:50, 16 August 2016 (UTC)[reply]
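As a rough back-of-the-envelope check on the scale of that claim (my own arithmetic, not anything from Kwiat or the cited article): the diffraction limit θ ≈ 1.22 λ/D for a visible-light aperture the diameter of Earth, projected out to Jupiter's distance, works out to a few centimetres, which is at least in licence-plate-lettering territory. A minimal sketch, with all figures as round approximations:

wavelength = 550e-9   # visible light, metres
aperture = 1.27e7     # Earth's diameter, metres
distance = 6.3e11     # rough Earth-Jupiter distance, metres (~4.2 AU)

theta = 1.22 * wavelength / aperture   # diffraction-limited angular resolution, radians
spot = theta * distance                # smallest resolvable feature at that range, metres

print(f"angular resolution ~ {theta:.1e} rad")               # ~5.3e-14 rad
print(f"resolvable detail at Jupiter ~ {spot * 100:.0f} cm")  # ~3 cm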

The device you are looking for is called a macroscope. μηδείς (talk) 21:28, 16 August 2016 (UTC)[reply]
I suppose he's talking about some variant of very-long-baseline interferometry. It sounds very hypey to me (as does this Chinese satellite, for that matter). The quantum teleportation wouldn't be an essential part of the setup, since the effect of quantum-teleporting a photon from A to B is the same as just sending it directly from A to B. Quantum teleportation only helps if you don't have any quantum channel between A and B, but do have a quantum channel from X to A and X to B, where X is some Bell-pair-producing device (the satellite, in this case). -- BenRG (talk) 22:30, 16 August 2016 (UTC)[reply]
@BenRG: Surely this Chinese satellite isn't hype. All it takes is one of their spies to have the right transmitter/receiver and he can talk to someone in Beijing and *know* he's not being tapped. As I understand it, the big bad NSA is toothless against a country spending literally 500 times more on quantum cryptography research. At best, they are secretly spending some money to develop a comparable device for U.S. spies abroad, yet I don't think they'll get as much as the Chinese will. Wnt (talk) 02:31, 17 August 2016 (UTC)[reply]
I've created an article about the satellite, QUESS. It's hype in that the satellite itself doesn't have any military value - it can only send keys between a couple of specially configured observatories (two in China, one in Austria) at specific times of day when there's no interference from sunlight - and the principle it operates on (quantum key distribution) is already used by the US gov as well as various European/Japanese projects, although they use fibre optic cables rather than satellites. Also, the satellite itself doesn't carry communications - it's basically a space-based key generator, which sends people codes that they can then use to encrypt messages to be sent through the internet/other normal channels. It's very cool, but it's not exactly the "unhackable" satellite that the media claims, nor does it mean China are the only people with access to quantum encryption technology. Smurrayinchester 14:23, 17 August 2016 (UTC)[reply]
As that article says, the satellite can be used to distribute keys that can be used as one-time pads; the users can XOR them with data to be sent, making for theoretically unbreakable encryption. Assuming it is totally random data being broadcast and XORed, it is also impossible to tell that the data being sent is a message and not just random. And the only people who can compromise the key are the two parties who received it. I mean, an old-fashioned James Bond gets a code book and can send secret messages home just as easily, but he has to trust some unknown number of people who wrote the code book or keep copies of it, and the general experience of intelligence agencies seems to be that those people are foreign agents a surprising proportion of the time. Wnt (talk) 14:36, 17 August 2016 (UTC)[reply]
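For anyone who wants to see that XOR step concretely, here is a minimal sketch; the pad is simply generated locally, standing in for a key the satellite would have distributed:

import secrets

def xor_bytes(data: bytes, pad: bytes) -> bytes:
    # One-time pad: XOR each message byte with the matching pad byte.
    assert len(pad) >= len(data)
    return bytes(m ^ k for m, k in zip(data, pad))

message = b"meet at the usual place"
pad = secrets.token_bytes(len(message))   # stands in for a QKD-distributed key

ciphertext = xor_bytes(message, pad)      # indistinguishable from random bytes
recovered = xor_bytes(ciphertext, pad)    # XORing with the same pad again undoes it
assert recovered == message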
Theoretically unbreakable systems can be, and in practice often are, broken by violating the assumptions of the unbreakability proof. The comparison to classical one-time pads is apt since OTPs have a poor track record in practice: they go from unbreakable to very easily broken if you make even a small mistake.
To use a satellite for this purpose, you need equipment that can reliably detect a single photon from an orbiting satellite, and distinguish it from all of the other photons in the environment. I don't think a device like that can be hidden in your shoe or the lining of your coat.
Assuming you somehow manage to do everything right, what quantum-teleportation-based cryptography gives you over a classical OTP is uncopyability of the pad: you can be sure that just two copies of it exist. But this doesn't prevent a man-in-the-middle attack: you generate identical pads A and A' and mail them to the endpoints P and Q; the enemy generates B and B' and swaps A' with B' before it reaches Q, so P has A and Q has B'; then the enemy uses A' and B to translate messages between P and Q while recording or altering the plaintext. -- BenRG (talk) 23:30, 17 August 2016 (UTC)[reply]
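To make that pad swap concrete, a toy sketch (the names P and Q and the pads are purely illustrative, not part of any real protocol):

import secrets

def xor_bytes(data: bytes, pad: bytes) -> bytes:
    return bytes(m ^ k for m, k in zip(data, pad))

msg = b"attack at dawn"
A = secrets.token_bytes(len(msg))   # the pad P actually holds; the enemy kept the copy A'
B = secrets.token_bytes(len(msg))   # the enemy's substitute pad, delivered to Q as "B'"

# P encrypts with A; the enemy decrypts with its copy of A, reads (or alters) the
# plaintext, then re-encrypts with B so that Q, holding B', decrypts it without
# noticing anything wrong.
c_from_P = xor_bytes(msg, A)
plaintext = xor_bytes(c_from_P, A)   # the enemy recovers the message
c_to_Q = xor_bytes(plaintext, B)     # and re-encrypts it for Q
at_Q = xor_bytes(c_to_Q, B)          # Q's pad B' is identical to B, so this "just works"

assert at_Q == msg   # the message arrives intact, but every bit passed through the enemy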
I very much doubt there are license plates on Callisto, and it would be disturbing to find out that someone has left one there. Similar coherent synthesis can already be done with radio waves to increase detail. Note that you do not get much from just two points, but because the satellites orbit you could get ellipses, and if the satellites are in different orbits you could get a spirograph-like filling-in to build up a good aperture. But it will take days to gather enough light and detail to detect things that are small and remote. Graeme Bartlett (talk) 09:21, 17 August 2016 (UTC)[reply]
Well, I'd be lying if I said I understood just how the magic of very-long-baseline interferometry really works, or why many existing arrays involve telescopes rather close together. But as I understand it, if only from our own article, timestamped data is accumulated and sent out over the internet - it's not entangled or teleported or quantumly magicked in any way, but gets correlated by (non-quantum) computer. (At the moment I'm really fuzzy on how this turns into a zoom magnification that the original telescopes couldn't resolve...) Whereas this statement made it sound like the quantum teleportation itself gave some sort of unique insight. Wnt (talk) 14:26, 17 August 2016 (UTC)[reply]
I don't see either how quantum teleportation could help. But for understanding the interferometry, you can think of it as rather like X-ray crystallography - except only seeing a few spots or lines of the image, corresponding to where the telescopes are. With such limited data one has to have a model of what one is looking for and find what best fits the data. Wider separation means smaller things can be distinguished - but the number of points determines how complex a model one can use, and so how close to a proper picture one can get. Perhaps the inhabitants of Callisto are huge or have very bad sight and have correspondingly large licence plates on their flying saucers. Dmcq (talk) 14:55, 17 August 2016 (UTC)[reply]
The high-level principle of VLBI is that instead of taking a picture with a camera, you measure the properties of the light at the aperture and then simulate (using Maxwell's equations) how it would have propagated from the aperture to the film inside the camera. This saves you from having to actually build the camera, which is helpful when the aperture size is measured in kilometers or megameters.
The reason a larger aperture helps is that nearby points on the film map (running Maxwell's equations backwards) to light frequencies (measured in the plane of the aperture) that are very close to each other. It's much easier to distinguish two signals of nearly the same frequency if you have widely separated samples.
In principle, you could do the calculation with a quantum computer instead of a classical computer. The advantage would be that you could avoid the measurement at the aperture, and therefore extract more relevant information from each photon. This is allowed by the rules of quantum mechanics, but I think that we're very, very far from having the technology to do it. If we did have the technology, it might be more convenient, in some cases, to move qubits around using quantum teleportation instead of ordinary transport. But that applies to any sort of quantum computation, and it's just an implementation detail. Quantum teleportation doesn't fundamentally enable any new computations that can't be done without it.
I don't know whether that's what Paul Kwiat was talking about, but I can't think of anything else that he could have been talking about. -- BenRG (talk) 23:30, 17 August 2016 (UTC)[reply]
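A toy numerical illustration of that "widely separated samples" point (purely schematic, nothing to do with the quantum side or with Kwiat's actual proposal): two sinusoids of nearly the same frequency are almost indistinguishable across a narrow sampling window, but drift a full cycle apart across a wide one.

import numpy as np

f1, f2 = 100.0, 100.5   # two nearby frequencies, arbitrary units

def max_difference(baseline):
    # Sample both signals across an aperture of the given width and report how far
    # apart they ever get; a tiny gap means they cannot be told apart in practice.
    x = np.linspace(0.0, baseline, 2000)
    return np.max(np.abs(np.sin(2 * np.pi * f1 * x) - np.sin(2 * np.pi * f2 * x)))

print(max_difference(0.01))   # narrow aperture: ~0.03, effectively identical
print(max_difference(1.0))    # wide aperture: ~2.0, easily distinguished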
It seems to be what was called "quantum faxing" a while back. And that's not such a bad term. Quoting from our article "Quantum teleportation":
  • "The quantum states of single atoms have been teleported.[1][2][3] An atom consists of several parts: the qubits in the electronic state or electron shells surrounding the atomic nucleus, the qubits in the nucleus itself, and, finally, the electrons, protons and neutrons making up the atom. Physicists have teleported the qubits encoded in the electronic state of atoms; they have not teleported the nuclear state, nor the nucleus itself. It is therefore false to say "an atom has been teleported". It has not. The quantum state of an atom has. Thus, performing this kind of teleportation requires a stock of atoms at the receiving site, available for having qubits imprinted on them. The importance of teleporting nuclear state is unclear: nuclear state does affect the atom, e.g. in hyperfine splitting, but whether such state would need to be teleported in some futuristic "practical" application is debatable."
I can see how, by being able to timestamp the quantum state of, say, a photon detected at a satellite in geosynchronous orbit over Malaysia and compare it to a photon propagating from the same point in space, detected at the same time in geosynchronous orbit over its antipodal point in the Amazon Basin, quantum teleportation might give you Kwiat's very long baseline interferometer (with a baseline twice the Clarke orbit radius of 42,164 km (26,199 mi), or 84,328 km (52,398 mi) wide). You wouldn't get that information instantaneously; you'd have to wait until the classical bit associated with the qubit carrying that info arrives. But you'd still get good phase information on the photons from that point in space, and that's what Kwiat's probably excited about. loupgarous (talk) 13:04, 19 August 2016 (UTC)[reply]

Fifth physical force claimed

Any way to say "crank or not" about this report and associated paper? [6][7] It's looking notable enough for an article already, but I haven't found good sources other than the news reports and the authors or press releases yet (but it just came out...) Wnt (talk) 18:57, 16 August 2016 (UTC)[reply]

It may be worthwhile to wait before pronouncing it for realz. But I see nothing in the particulars yet to say it's overly cranky. UC-Irvine, UC-Riverside, and the University of Kentucky, where the authors hail from, are all legitimate research universities. Of note, the journal in question, Physical Review Letters, is a legitimate peer-reviewed journal of the American Physical Society, a well-respected organization. Notably, however, this is published in Physical Review Letters and not Physical Review, which is for more developed research. Generally, any journal titled "Letters" is intended for short, preliminary results of new research; when (or if, I should also say) the work becomes more developed, it would likely be published in one of the main Physical Review journals (which are subdivided by discipline). That this is preliminary and not well developed is why a) you aren't seeing more about it and b) it is in the "Letters" journal. --Jayron32 19:17, 16 August 2016 (UTC)[reply]
Yep, definitely not crank work; crank work doesn't get published in Phys. Rev. Let. However, let's not get carried away. Just because it's not a crank piece doesn't mean that it's reporting true facts about the universe. There's a lot of middle ground. Science takes years, science reporting takes days. Me, I'm not going to read any more about this for at least a few months ;) SemanticMantis (talk) 19:56, 16 August 2016 (UTC)[reply]
Science doesn't concern itself with "truth" or "facts". See Karl Popper, et al. --Jayron32 20:17, 16 August 2016 (UTC)[reply]
From his article, section Karl_Popper#Truth: 'Popper refers to it as a theory in which "is true" is replaced with "corresponds to the facts".' It also explains Popper's stance that "hypotheses of scientific theories can be objectively measured with respect to the amount of truth and falsity that they imply."
I was pretty sure that you and I and Wnt were all familiar with Popper and falsifiability, and that's not really what I thought OP was about. Rather, this is news because previously the statement "there are exactly four fundamental forces" was basically thought to be a true statement, and this recent work suggests that it might not be true after all. For clarity, I've updated my first link above to go to "truthlikeness". SemanticMantis (talk) 21:27, 16 August 2016 (UTC)[reply]
I'd prefer the phrasing "'There are exactly four fundamental forces' is thought to be the current level of understanding", but we're in broad agreement. The problem with words like "true" and "facts" is that they imply immutability; science doesn't discover immutable facts, it develops progressively more accurate concepts, models, etc. The four-forces model is (or perhaps was; that's yet to be seen) the most accurate model to describe physical phenomena. If a more accurate model comes along, it doesn't become false per se (because it wasn't true, per se), it just becomes improved. --Jayron32 10:25, 17 August 2016 (UTC)[reply]
PRL is one of the most respected journals in physics. Many important papers have been published in it, such as the detections of gravitational waves and neutrino oscillation (more are listed here). That doesn't mean this alleged boson really exists, though, just that the paper is well written (or the reviewers thought it was). For context, note that hundreds of papers were published (some in PRL) to explain the 750 GeV boson that turned out to be a statistical fluke. That's just how science works; it wasn't wrong of anyone to write or publish those papers. -- BenRG (talk) 22:07, 16 August 2016 (UTC)[reply]
It is serious research, but personally I doubt their explanation is correct. This reminds me of people trying to explain the Pioneer anomaly by modifying gravity. Yes, there is an unexpected experimental observation, and yes, one way to explain that observation is to modify the physical laws of the universe, but have you really considered all other reasonable explanations? I strongly suspect that a more mundane explanation will be found that doesn't require a new gauge boson. Also, I wouldn't be surprised if accelerator experiments already rule out a new boson at such a low energy. It would be weird if something like what they are proposing had been missed. Dragons flight (talk) 08:51, 18 August 2016 (UTC)[reply]

How reliable is the number of citations in Google Scholar

By how much could the number of citations of a paper in Google Scholar deviate from the true count?

As I understand it, the counts are produced by an algorithm, not by a human librarian.

That could imply a deviation due to citations from non-peer-reviewed publications, multiple citations from one single work, citations from the same author, a matching conference paper/journal paper being counted as two, or sequential editions of a book increasing the count by 1 each. Hofhof (talk) 22:54, 16 August 2016 (UTC)[reply]

It does often list multiple entries for the same thing, due to spelling variations. Also, it drops off to just about nothing in the 19th century, so early writers are undercounted. And various systems struggle with some non-ASCII characters in names, such as "š", so authors may appear multiple times with their work split between entries. Graeme Bartlett (talk) 23:19, 16 August 2016 (UTC)[reply]
At http://www.int-res.com/articles/esep2008/8/e008p061.pdf it is claimed that GS gives more comprehensive results than other citation-counting systems. At https://www.researchgate.net/profile/Henk_Moed/publication/301747343_A_new_methodology_for_comparing_Google_Scholar_and_Scopus/links/5724d21c08ae262228adb97b.pdf it is indicated that double counting in GS is only about 2%. Graeme Bartlett (talk) 09:12, 17 August 2016 (UTC)[reply]
  • Original research, I guess, but one of my publications has somewhere north of 70 citations on there now, and I actually checked them the other day, as I was wondering the same thing. I found that they were all correct, and the same held for some of my other papers I had a quick look at. I would agree with the above link. Fgf10 (talk) 15:47, 17 August 2016 (UTC)[reply]
I and other editors have noticed (in Wikipedia:Articles_for_deletion/Ruggero_Santilli_(2nd_nomination)) that both the h-index and Google Scholar are being gamed by groups of authors who publish in "open access journals" whose business model depends on hefty author processing fees, some with deceptive names like American Journal of Modern Physics, which hides the fact that only 2 of 33 editorial board members live or work in the USA, and that the journal is published in the Sudan. These authors essentially pay for publication of their articles, cite their own work extensively (40 out of 70 citations in one article we discussed), and seem to cite each other's work in the same or related journals. This led some editors to assume that the subject of the article under discussion had a greater impact in his academic field than other editors who work in that field said he has.
I'm not saying Google Scholar or h-index citation analyses are unusable, but they are tools which aren't always reliable. A scholar is responsible for reading someone's work directly before assessing how important it is. In the case of the subject of the article we were discussing, the game was given away by the author's selection of "forgiving" journals such as the American Journal of Modern Physics and other journals printed by the "Institute for Basic Research," which he owned, and by his list of citations including the text of an address he made to the St. Petersburg Astronomy Club.
So I'd raise the equally important issue that even "real" citations counted by Google Scholar might not be ones we'd regard as reliable sources here on Wikipedia, or as solid work by other researchers in the field. Caveat lector. loupgarous (talk) 16:49, 17 August 2016 (UTC)[reply]
I'll assume you're not talking about my citations being fraudulent, but your indentation makes it seem so; can you change that please? Also, there are certainly many bogus journals, but there are also many, many genuine open access journals now. In fact, many funding bodies now require open access publication of work funded by them. Fgf10 (talk) 07:38, 18 August 2016 (UTC)[reply]
I felt WP:BOLD and fixed it. Apologies if I'm stepping on the toes of User:Vfrickey, but it's just a simple WP:INDENT issue. As for the larger scope of the OP: typing the question header into said Google Scholar gives some very promising references on this very issue [8]. You are also right that journals published via open access publishing are not disreputable by virtue of that feature. E.g. I'd love to have more cites of my work coming from PLoS Biology, which is held in very high esteem at present. Finally, there's a sort of "smell test": science is an increasingly small world, and professional academics are not likely to be fooled by a Google Scholar profile that is padded with pay-to-publish junk publications. SemanticMantis (talk) 15:31, 18 August 2016 (UTC)[reply]
My toes are entirely unstepped-on, and thanks for doing that, SemanticMantis. And no, Fgf10, you weren't the guy I was talking about (unless you are, indeed, the owner, publisher, and lord high panjandrum of the "Institute for Basic Research". That's a whole other guy).
Going to your other point, you're right that open access journals aren't necessarily bad. By charging article preparation and reprint fees, you'd think each one could afford qualified and attentive peer-reviewers. However, there have been nasty scandals connected to open access journals in the press (unconnected to my original example, because nothing a fringe scientist does short of murder seems to provoke a scandal). One is discussed in Derek Lowe's blog "In the Pipeline", in the article "Crap, Courtesy of a Major Scientific Publisher". Nature Publishing Group's open access journal Scientific Reports published "Novel piperazine core compound induces death in human liver cancer cells: possible pharmacological properties" by Nima Samie, Sekaran Muniandy, M. S. Kanthimathi, Batoul Sadat Haerian, and Raja Elina Raja Azudin, which Lowe was able to demolish as "deliriously incompetent fraud" in its alleged synthesis and laboratory analysis of a compound supposedly effective in killing liver cancer cells.
Lowe went down every substantive statement and figure in the paper (until there was no point in doing so, as the main points of the paper had been shown wrong) and showed them to be fallacious organic chemistry, either misstated or made-up mass spectrometry readings, wildly inconsistent NMR spectra (compared to what the compound was purported to be) and six supposed before-and-after fluorescence photomicrographs of three different liver cell lines - all six frames were identical save for the red arrows supposedly showing activity of the test compound. After Lowe's column appeared the researchers' institution, the University of Malaya, forced retraction of the paper and announced disciplinary sanctions for the authors.
I wouldn't hold this up as an illustrative case, except that a Nature Publishing Group journal put that proud name on what can only be called crap. And these are early days in open access publication for serious biomedical topics. The consequences of fraudulent publications like this are, first, that trying to reproduce findings like these will be costly and futile, since they're pretty obviously faked. One hopes that researchers at other institutions doing research like this are as sharp-eyed as Dr. Lowe and spot the howling errors of fact BEFORE attempting to reproduce the work (of course, just trying to repeat the synthesis as described would have given the game away, one would think). Second, scandals like this hurt the institutions associated with them and waste scarce research funds. Third, they're diversions from the serious work of evaluating new treatments for cancer and other vital scientific research. Of course, garbage gets published by pay journals too, but nowhere near as often. Journals that charge their readers for content, if nothing else, would be unsubscribed by their main customers, research libraries, for printing nonsense like this - they have an incentive to use peer review and editorial oversight to validate research findings like this prior to publication. loupgarous (talk) 10:30, 19 August 2016 (UTC)[reply]
@Vfrickey: This is all too true, but I'm skeptical you can fix an inherently fouled-up model. The problem with copyright is that it rations reading; but the vanity publisher model has problems of its own. There are some very straightforward, obvious reforms to be made: to begin with, the archivist should be a bargain-basement web host, and the journal should be a group of articles selected from those freely available online, so that the acts of making something available and having it "accepted" are entirely unrelated. This also implies that peer review should be retroactive and ongoing. And the other thing is that the "journal", i.e. the people who assemble lists of noteworthy articles for the recommendation of readers, should be grant-funded: the NIH should pay neither the libraries to subscribe, nor the researchers to publish, but instead fund the people who rate quality directly to do their jobs. Wnt (talk) 13:34, 19 August 2016 (UTC)[reply]
You would probably not be surprised how many of the folks who responded to that article in Derek Lowe's blog (most of whom are researchers or Big Pharma scientists themselves) agree with you, as do I, regarding reform of peer review of journal articles. Peer review ought to be like pharmacovigilance: ongoing, and an activity funded and resourced properly. I also agree that NIH and other countries' equivalent organizations ought to pay peer reviewers. It would eliminate something I hear too much about: peer reviewers who write real critiques of work all too often suddenly not being consulted by certain journals. Someone ought to be minding the store, and it ought to be something you get paid to do and are rated on. loupgarous (talk) 14:50, 19 August 2016 (UTC)[reply]