Wikipedia:Reference desk/Archives/Science/2010 March 5

Science desk
< March 4 << Feb | March | Apr >> March 6 >
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


March 5

Water for Hydrogen Fuel in Deserts (cont.)

I am continuing from my last question.

Who has proposed using an energy store in deserts? I have recently read a book called Solar Hydrogen Energy: The POWER to save the Earth, which proposed that hydrogen be used to transport solar energy from deserts to other places. It also proposed that most solar energy production take place in deserts so that it won't take up farmland.

The water in oceans is saltwater. Could we extract the hydrogen used to transport energy from saltwater?

Most combustion of hydrogen as fuel would be in places outside deserts, places all over the world. They would be very far away from deserts. A lot of the hydrogen would be burned in vehicles rather than in electricity production, so the combustion products wouldn't be transported back to the deserts.

An Unknown Person (talk) 04:44, 5 March 2010 (UTC)[reply]

There have been lots of proposals to use hydrogen as a storehouse for excess energy production from solar, wind, hydroelectric, etc. It will likely eventually become a common fuel source; however, there is an utter lack of hydrogen-fuel infrastructure (basically, no fuel stations exist to sell the stuff, no distribution network exists to get it to the stations, etc.) the way there is with gasoline. So you get a catch-22 with hydrogen as a fuel: no one owns hydrogen-fueled cars because you can't get hydrogen anywhere to fill the tank, and no one builds hydrogen stations because there are no customers with hydrogen cars to sell the fuel to. This barrier is what has kept fuel cell vehicles off the road, despite their having been a feasible means of fueling a car for, oh, 50 years or so. --Jayron32 04:56, 5 March 2010 (UTC)[reply]
That doesn't mean it's impossible. I imagine you could start with solar hydrogen production facilities near desert cities, like Las Vegas and Phoenix, and provide refueling stations in those cities and maybe a fleet of rental cars that use hydrogen. You could eventually expand the network with hydrogen pipelines to cities near the desert, like Los Angeles. It might never be feasible to ship hydrogen all the way across the country to places like New York, however, depending on the prices of other fuels. StuRat (talk) 12:32, 5 March 2010 (UTC)[reply]
The idea isn't to "ship" hydrogen - it's to use it as a storage medium. The cheapest way to "ship" hydrogen is to burn it to make electricity, send the electricity over wires to the destination and use that electricity to make hydrogen at the other end. Sure, it's inefficient - but it's a LOT easier and safer than building hydrogen pipelines or having trucks or trains carry the stuff. SteveBaker (talk) 20:08, 5 March 2010 (UTC)[reply]
You may be right about the advantages of each approach, but the Q was clearly: could "hydrogen be used to transport solar energy from deserts to other places" ? So, that's what I answered. StuRat (talk) 21:24, 5 March 2010 (UTC)[reply]

What does the integral of the position function represent?

That is, suppose we have velocity v'. If we integrate (with respect to time), we get position v + c. What do we get if we integrate again?--70.122.117.52 (talk) 05:06, 5 March 2010 (UTC)[reply]

The integral of the velocity is displacement, not position. That is, if we integrate a velocity function between any two arbitrary points in time, we get the displacement over that time. I'm not sure that integrating the displacement function gets you any meaningful physical quantity. Mathematically (especially if you have a complicated velocity function) you could repeatedly integrate any function until you get only a constant left, but that doesn't mean that the operation produces a meaningful physical result. --Jayron32 05:18, 5 March 2010 (UTC)[reply]
(ec)The definite integral of velocity is displacement. The OP is talking about the indefinite integral, which would be position if you take the constant as being the starting position. Also, if you repeatedly integrate you don't end up with a constant - that's repeated differentiation. If you repeatedly integrate you end up with some function plus a polynomial (of ever increasing degree). --Tango (talk) 05:32, 5 March 2010 (UTC)[reply]
I don't think the integral of position wrt time has any physical significance. It would have units of length times time (e.g. metre-seconds), which doesn't correspond to anything useful that I can think of (an angular momentum divided by a force, I suppose, but I can't think why you would ever do that). --Tango (talk) 05:32, 5 March 2010 (UTC)[reply]
It could have a meaning if there is some other property dependent on the position (∫f(x)⋅dt rather than just ∫x⋅dt). Two ideas:
  • A compressed spring. Force is based on displacement, so k⋅distance⋅time is the impulse or momentum-change of the object against which the spring is pushing.
  • Some other sort of energy transfer where the transfer/flux falls off linearly over distance. Total transfer is like k⋅(1-distance)⋅time. Hrm, actually all the specific ones I can think of are squared relationships not linear:( Something involving a dashpot/damper or other particle motion through viscous material I guess.
DMacks (talk) 21:39, 5 March 2010 (UTC)[reply]

Why would you only end up with a constant through iterated integration? Wouldn't that be what happens through iterated differentiation? —Preceding unsigned comment added by 70.122.117.52 (talk) 05:31, 5 March 2010 (UTC)[reply]

Yeah yeah yeah. Stop piling on. --Jayron32 05:34, 5 March 2010 (UTC)[reply]
Two people edit conflicting does not a pile make! --Tango (talk) 05:38, 5 March 2010 (UTC)[reply]
That depends on the size of the people. I have met people who constitute a pile all by themselves. --Jayron32 06:19, 5 March 2010 (UTC)[reply]
Fair point. I don't know about the OP, but I'm pretty skinny - I think you would need at least 3 or 4 of me to constitute a pile. --Tango (talk) 06:24, 5 March 2010 (UTC)[reply]
Neither iterated integration, nor iterated differentiation guarantee that you'll end up with a constant. Even with iterated differentiation, that only happens for polynomials. —Preceding unsigned comment added by 157.193.173.205 (talk) 08:22, 5 March 2010 (UTC)[reply]
The integral of displacement is sometimes called absement - our article on it has been deleted on account of its low notability, but here is a short-but-good explanation of the concept. --Link (tcm) 22:55, 6 March 2010 (UTC)[reply]
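For the numerically inclined, here is a minimal sketch of the absement idea in Python, assuming a made-up position function x(t) = sin(t) metres and a simple trapezoidal sum; the result comes out in the metre-seconds discussed above.

  import math

  def absement(x, t0, t1, steps=10000):
      """Approximate the time integral of a position function x(t) by the trapezoidal rule."""
      dt = (t1 - t0) / steps
      total = 0.0
      for i in range(steps):
          a, b = t0 + i * dt, t0 + (i + 1) * dt
          total += 0.5 * (x(a) + x(b)) * dt  # units: metres * seconds
      return total

  # Hypothetical example: x(t) = sin(t) metres, over 0..pi seconds.
  print(absement(math.sin, 0.0, math.pi))  # ~2.0 metre-seconds (the exact integral is 2)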

Could a star like the Earth's sun just collapse in on itself, like due to chaos theory or something? —Preceding unsigned comment added by Jetterindi (talkcontribs) 09:27, 5 March 2010 (UTC)[reply]

No. The sun is not massive enough to become a neutron star or a black hole. In about 5 or 6 billion years it will become a red giant, and then finally a white dwarf surrounded by a planetary nebula. See stellar evolution for more details. Gandalf61 (talk) 09:39, 5 March 2010 (UTC)[reply]
I heard this same thing in grade school about thirty years ago. Don't you mean "4,999,999,970 - 5,999,999,970 years"? Kingsfold (talk) 14:19, 5 March 2010 (UTC)[reply]
Also, "chaos theory" doesn't mean any crazy thing you can think of might happen. It just means that tiny variations in the beginning of an event can have large and unpredictable effects on the outcome. --Sean 14:19, 5 March 2010 (UTC)[reply]
Perhaps the OP is asking whether all the particles in the Sun could move inwards due to random chance (although this could apply to any bunch of matter at >0K). The probability is unfathomably low. --Mark PEA (talk) 11:53, 6 March 2010 (UTC)[reply]

Thinking brain?

The idea that our thinking activity is located inside the head seems very natural, but is it something we have learned, or do we know it because we feel the brain thinking? (like e.g. we feel our belly digesting the food). If we have learned it, when did men first have this intuition, and how did they come to it? --pma 10:07, 5 March 2010 (UTC)[reply]

comment removed. -- kainaw 21:36, 5 March 2010 (UTC)[reply]
You obviously saw it in a fictional movie (if your memory is correct), since a head transplant is totally beyond the abilities of scientists. 82.113.121.110 (talk) 11:25, 5 March 2010 (UTC)[reply]
According to the Head transplant article, the procedure has been performed with limited success on dogs, monkeys and rats.... Mitch Ames (talk) 11:34, 5 March 2010 (UTC)[reply]
Even worse, if that was the end of the "experiment", it only proved: a thinking activity was either in the first monkey's head, or in the second monkey's body, and nowhere in the "scientist". But my question is more on the psychological side: how is it that we feel that our thinking activity happens inside our head: is it cultural, that is, we learn it when we are kids, or is it physiological, that is, we know what part of our body is thinking much the same way that e.g. we know which part of our body is eating or defecating. --pma 11:45, 5 March 2010 (UTC)[reply]
Scientists probably first knew this from people with head injuries, causing brain damage, which led to abnormal thought processes (speech problems, etc.). But people might have thought this intuitively because our perception is associated with our senses, and 4 out of 5 are located on the head alone. This is a rare case where the intuition was right on. One exception seems to be emotion, which instinct told people was in the heart, since it beats faster or slower based on our emotional state. However, emotion, just like logic, actually comes from the brain. StuRat (talk) 12:18, 5 March 2010 (UTC)[reply]
History of neuroscience has some useful information. The ancient Egyptians believed intelligence was situated in the heart; Alcmaeon of Croton (c. 500 BC) is claimed to be the first person to suggest the brain was used for thinking. Galen (AD 129 – 199/217) also did work on brain structure. Although little is known about him, Alcmaeon probably used vivisection and dissection to make his discoveries, e.g. cutting the optic nerve of live animals. --Normansmithy (talk) 12:24, 5 March 2010 (UTC)[reply]

If they thought intelligence was situated in the heart, what did they think the material in the skull was for? Also, presumably they knew what some of the other organs did because people injured in those organs all suffered in the same way as a result. Didn't a single person in ancient times suffer traumatic head injury that caused brain damage, so it became obvious to everyone that intelligence and cognitive functions are done there? I seriously can't imagine how they could have thought brain-stuff was for anything else than thinking... what did they think it was? 82.113.121.110 (talk) 13:06, 5 March 2010 (UTC)[reply]

The first article linked by Normansmithy above answers your questions directly. Remember that for much of the times under discussion, there were almost no means of communication other than face-to-face, few libraries or other centres of learning existed, and very few people were literate anyway: consequently, most people's opinions could only be based on their own direct experiences. While some did deduce that the brain was the seat of thinking, some thought instead that the brain was merely packing material, and some (following Aristotle's opinion) held that its function was to cool the blood, which, since the brain does indeed have a very generous blood supply and does indeed create and radiate a significant proportion of the body's heat, was, though wrong, not at all unreasonable. 87.81.230.195 (talk) 13:52, 5 March 2010 (UTC)[reply]

The best discussion of this issue that I've seen is in The Mind's I, in two essays called "Where am I?" (by Daniel Dennett) and "Where was I?" (by David Hawley Sanford). I think the basic answer is that we feel ourselves to be located in our heads because that's where our eyes are -- we are vision-dominated creatures. By using remote-vision equipment it is possible to create a very strong sense of being located outside your body in various ways. Looie496 (talk) 17:11, 5 March 2010 (UTC)[reply]

Thank you all! Very interesting information. --pma 08:35, 6 March 2010 (UTC)[reply]

Goedel and Wikipedia

Does the Goedel incompleteness theorem imply that insofar as Wikipedia is comprehensive, it cannot be accurate, and insofar as it is accurate, it cannot be comprehensive? 82.113.121.110 (talk) 11:30, 5 March 2010 (UTC)[reply]

Gödel's incompleteness theorems deal with questions about the fundamental definitions of maths. It is quite a stretch to apply them to Wikipedia without taking more poetic licence than is normally allocated to pure maths. I think it is nonsense to suppose that Wikipedia will ever be complete (even if we knew what that meant in Wikipedia terms) or completely accurate. We can deduce all of that without the need to appeal to Gödel. --Tagishsimon (talk) 11:43, 5 March 2010 (UTC)[reply]
Wikipedia will not be complete until it contains an article that is about itself. Looie496 (talk) 17:02, 5 March 2010 (UTC)[reply]
Please see our article Wikipedia, which is both (IMHO) accurate and comprehensive. So there you go - Goedel disproved, once and for all. Phew! --NorwegianBlue talk 19:15, 5 March 2010 (UTC) [reply]
Gah, lexical scope ambiguity! I didn't mean an article about Wikipedia, I meant an article with a title something like This article. Looie496 (talk) 19:40, 5 March 2010 (UTC)[reply]
Gödel's theorem relates to provability in a formal axiomatic system. It has nothing whatever to tell us about completeness (or truth) in the phenomenal world. --ColinFine (talk) 17:32, 5 March 2010 (UTC)[reply]
The problem is the definition of "comprehensive".
  1. To truly contain all knowledge (and thereby to be complete) we would have to carefully document (for example) the precise position of every object in the universe and keep that information up to date on a moment-by-moment basis. For truly comprehensive information, we'd need to provide the location of every fundamental particle. The required data storage for such a system would be larger by far than the universe itself - and is therefore quite impossible. This is a kind of "diagonal" argument that Godel and Cantor would have been happy to provide...but it's unrelated to Godel's famous theorem. Hence, true "completeness" is indeed impossible...not just impractical.
  2. Fortunately, Wikipedia's own definition of "comprehensive" requires us only to describe things that are "notable" and "referenceable". It probably is theoretically possible to be fully comprehensive within that definition. If every human on earth were to sit down and write articles about every single notable thing that happened to them personally, or which they had written about in a document that Wikipedia would accept as a reference - then they could easily do it within their lifetimes. Most people would be done with it very quickly...and those people would have plenty of time to fill in the notable/referenceable information from past generations...especially since the world population is much larger than it was in previous generations. Doing that would result in a complete, "comprehensive" encyclopedia (within our own definition of that word) in rather a short period of time.
SteveBaker (talk) 19:42, 5 March 2010 (UTC)[reply]
First, Wikipedia isn't written in a formal language, so Godel's Incompleteness Theorem doesn't apply. If it were, it would be more like a big list of facts ("1+54 = 55" "7 is prime" "65=65") than a small set of axioms (like the nine Peano axioms, that start "for all x, x=x"). Incompleteness is about the limits of derivable facts. Any list of facts (about an interesting system) is trivially incomplete by virtue of not being infinitely long (but an infinitely long list of facts totally could be complete.) Paul Stansifer 23:08, 5 March 2010 (UTC)[reply]
Exactly. For example true arithmetic is complete. True arithmetic satisfies all the stipulations for incompleteness, except the one saying that it can be axiomatized by a computably enumerable set of axioms.
Basically the whole endeavor of trying to apply incompleteness outside of mathematics is fraught with pitfalls. That's not to say it can never be done, but it is to say that some very smart people have wound up making unsound arguments of this sort. Torkel Franzén has a whole book on the subject. --Trovatore (talk) 23:15, 5 March 2010 (UTC)[reply]

do we benefit in any way from c?

Is there any benefit for anyone from having a current universal speed limit of c? I.e., if the Universe were a democracy, and its lawmaker obeyed the will of the people in it, is there any reason we would do well to have the lawmaker keep c? At least two good reasons for abolishing c from the laws of the universe would be: 1) easier intergalactic space travel, easier communication with probes on Mars, etc, 2) smaller ping times to China, and faster processing within a single piece of electronics. In fact, in today's 3 GHz processors, I heard that electrons only have enough time between clock ticks to travel a few centimeters (you can verify this for yourself with a simple Google calculation). Now this means we can't possibly make processors much bigger than that, with logic that has electrons travelling far more than that distance, depending on what path they take. So, this might be a simplification, but abolishing c could also bring better computation power. But as I asked above, my question is now: is there ANY benefit at all from actually having c? 82.113.121.110 (talk) 12:41, 5 March 2010 (UTC)[reply]
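As a rough check of the "simple Google calculation" mentioned in the question (a sketch only; it assumes signals travel at most at c and ignores the much slower drift velocity of individual electrons):

  c = 2.998e8       # speed of light in vacuum, m/s
  clock = 3.0e9     # a 3 GHz clock rate, cycles per second
  print(c / clock)  # ~0.1 m: light covers about 10 cm per clock cycle,
                    # so on-chip signals (slower than c) manage only a few cm per tick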

Your question is nonsense. c is a fundamental constant. Okay, technically there are some limited theoretical circumstances where it bends (shortly after the Big Bang), but we have no control over it. Nature's laws are not up for a vote. —ShadowRanger (talk|stalk) 13:13, 5 March 2010 (UTC)[reply]
In English, when you say "if the Universe were", the were means that the speaker knows he or she is asking about an imaginary case that is contrary to actual reality. (Otherwise he or she would say "if the Universe turns out to be" or even "if the Universe really is"). So, your point is totally invalid; instead, please answer my question: is there any benefit to c, and even though we can't, if we could petition to have this law abolished, would there be any practical benefit to us if we could and we did and it were? —Preceding unsigned comment added by 82.113.106.97 (talk) 13:22, 5 March 2010 (UTC)[reply]
There are obvious benefits, as you listed. The question is whether the universe could still exist (in any way still somewhat pleasant to us) if we raised c. After all, it's not a speed limit in isolation of the rest of physics. Fine-tuned universe addresses variations on the question. --Sean 14:32, 5 March 2010 (UTC)[reply]
Thanks. Unfortunately your benefit to c in that without it perhaps the Universe "couldn't exist in a way still pleasant to us" is so very broad. Could you, or anyone else, possibly give a more direct, narrow benefit to having c as a speed limit? Does this actually help in a practical, specific (rather than an overarching universal) way? Thanks. 82.113.121.103 (talk) 14:44, 5 March 2010 (UTC)[reply]
The slow speed of light has so far prevented the evil Zorn empire from invading from their galaxy and eating our brains. :-) StuRat (talk) 14:42, 5 March 2010 (UTC) [reply]
But, seriously, there may well be (or have been) other intelligent species out there which would, at some point, have colonized the Earth if they could only get here, meaning we may never have existed. We don't need to attribute evil motives to them, as they might not have found any intelligent life if they arrived long ago, but maybe they wondered what that slime was in those volcanically heated ponds. StuRat (talk) 14:48, 5 March 2010 (UTC)[reply]
This is exactly the type of answer I'm looking for. Do you know of any other answers of this specific nature? 82.113.121.103 (talk) 14:44, 5 March 2010 (UTC)[reply]
Perhaps next time, you could specify which type of answer you are looking in front, and save us and yourself some time? :-) DVdm (talk) 14:53, 5 March 2010 (UTC)[reply]
um, I didn't have any idea of this answer. I like it because it is specific, and OBVIOUSLY a benefit. I would easily pay any amount of my money not to have my brain eaten by Zorns. Are there any other, specific benefits like this you can list? 82.113.121.103 (talk) 14:58, 5 March 2010 (UTC)[reply]
George Gamow wrote a little book, "Mr. Tompkins in Wonderland", wherein Mr. Tompkins dreams about a world where the speed of light is 30 miles per hour. Relativistic effects are seen when someone rides a bicycle. Surely someone has written a similar work where c was orders of magnitude higher. Radio antennas would get bigger, at least for the same frequency. Optics might have to change their size, at least to focus the wavelengths used presently in vision and photography. I wonder if electron orbitals would have to change, along with the size of atoms and molecules? Edison (talk) 15:03, 5 March 2010 (UTC)[reply]
Changing c would change the fine structure constant, and Fine-structure_constant#Anthropic_explanation suggests that that would result in a very different, and probably inhospitable, universe. Also, if you are talking about an infinite c (instead of just a larger c), I'm not sure you would have electromagnetic waves at all anymore. -- Coneslayer (talk) 15:11, 5 March 2010 (UTC)[reply]

Okay, the above is a real benefit: smaller radio antennas than if C were larger. Then Cones Layer says the same thing others have been saying, that it's a general requirement for our whole universe. Guys, I got this part. Are there any other specific benefits to the current c, as the bit about smaller antennas is? Thanks. 82.113.121.103 (talk) 15:38, 5 March 2010 (UTC)[reply]


I can think of a couple of Japanese cities that would not have suffered as much if the value of c in E=mc2 were not so large. TimBuck2 (talk) 17:07, 5 March 2010 (UTC)[reply]

Awesome answer. We have all benefited, since if c were larger we may have destroyed all of Japan, or even the entire earth with the first atomic test. Also, if c were a lot smaller, I imagine that we wouldn't be able to generate much electricity through nuclear reactors.24.150.18.30 (talk) 17:44, 6 March 2010 (UTC)[reply]

new Best answer (so far) as chosen by OP: Reduced from a spurious subheading that intruded in the contents list. Cuddlyable3 (talk) 18:09, 6 March 2010 (UTC)[reply]

As you increase c, I think it would get harder to implement the Global Positioning System with the same accuracy. Obviously it wouldn't work at all with infinite c. If you must split my name in two, it should be Cone Slayer, not Cones Layer. -- Coneslayer (talk) 15:43, 5 March 2010 (UTC)[reply]
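A rough illustration of that sensitivity, with made-up numbers: a GPS receiver's range error is roughly its timing error multiplied by the propagation speed, so the same clock error would hurt more if c were larger.

  c = 2.998e8                   # m/s
  timing_error = 10e-9          # assume a 10 nanosecond receiver timing error
  print(c * timing_error)       # ~3 m of range error at the real value of c
  print(10 * c * timing_error)  # ~30 m if c were ten times larger, with the same clocks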
Along the same lines, it would be more difficult for us to precisely measure the distance to the moon. —Bkell (talk) 15:54, 5 March 2010 (UTC)[reply]
the GPS is the best practical answer anyone here has given so far, and it is spot on, since it relies on specific timing of distances that are travelled at or nearly at c. Therefore, with an infinite or much larger c, this would become difficult. Can anyone come up with other practical aspects of our life that could not work but for c on the scale it currently is? Thank you. 82.113.121.103 (talk) 16:52, 5 March 2010 (UTC)[reply]
You're pretty insistent, aren't you? —Bkell (talk) 18:00, 5 March 2010 (UTC)[reply]
This is a meaningless question. If 'c' were even slightly different (either more or less) than it actually is, then humans would not exist...but radically different life-forms might. If it were significantly different then probably galaxies and stars wouldn't exist. One of the things you learn after enough years answering questions on the reference desks is that once one utter impossibility has been injected into a question, all else falls to the ground. We can't meaningfully list trivia like GPS being more or less accurate when the elephant in the room is that the existence of all things pretty much depends on 'c' being precisely what it is. GPS could not possibly function in any way whatever if 'c' were more than a percent or two different than it actually is because nobody would have been here to invent it...and in all likelihood, there wouldn't even be a "here". Sorry - but you don't get to pick between answers you like and answers you don't. You get answers...hopefully true ones. SteveBaker (talk) 19:30, 5 March 2010 (UTC)[reply]
I thought he showed outstanding judgment and discernment with his choice of answer. -- Coneslayer (talk) 19:35, 5 March 2010 (UTC)[reply]


If c were infinite, and the universe were also spatially infinite, then the sky would be blindingly bright 24/7, would it not? Vranak (talk) 21:44, 5 March 2010 (UTC)[reply]
Thank you for the response. Can you explain your thinking in more detail, specifically what causes the sky to be brighter than it is now? Thank you. 82.113.121.94 (talk) 21:57, 5 March 2010 (UTC)[reply]
Olbers' paradox --ColinFine (talk) 00:24, 6 March 2010 (UTC)[reply]

If you look up in the sky there is a giant thermonuclear furnace that relies on E=mc2. If you start playing with that c you could either turn off or explode the sun. Even a ~5% change in solar luminosity would change the temperature on Earth about 10 C, so that isn't a balance to be trifled with. Dragons flight (talk) 02:09, 6 March 2010 (UTC)[reply]
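To put rough numbers on that (a sketch using the standard value of the solar luminosity): the Sun's power output fixes how much mass it must convert to radiation each second via E=mc2.

  L_sun = 3.8e26       # solar luminosity, watts
  c = 3.0e8            # speed of light, m/s
  print(L_sun / c**2)  # ~4.2e9 kg: roughly four million tonnes of mass converted every second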

OK, I'll bite. Your question presumes that a "benefit" must be a "benefit" to "us," i.e., people. We have existed only a fraction of a million years, whereas the universe is 13.8 billion years old. So obviously nothing in the universe exists for "our" benefit. Perhaps you mean "life in general" -- does c benefit life? If light had no speed limit, then light would be infinitely fast. If so, every form of life in the universe would be blind, because all the light in the universe would endlessly travel around the universe; nothing could evolve "eyes," because no organic organ (developing from a primitive predecessor) could adjust to infinite stimulation. So, yes, the speed of light helps "us" because "we" like to be able to see things, which "we" couldn't if "we" were incapable of evolving optical organs. 63.17.82.123 (talk) 04:06, 6 March 2010 (UTC)[reply]
I disagree completely. Why would there be any more light, on average, at any given point ? We would see the light emitted from Proxima Centauri now, instead of the light it sent out some 4.2 years ago, but how would that change the total amount of light we see from that source ? Now apply that same logic to every other light source. If you're thinking there are an infinite number of stars out there, and the light from most of them hasn't reached us yet, due to a finite value of c, I don't think that's right. I believe there's a very large, but still finite, number of stars. Also, light doesn't travel an infinite distance, as eventually it gets absorbed by something, like interstellar dust. StuRat (talk) 16:12, 7 March 2010 (UTC)[reply]
Just for kicks, Wikipedia has a page on the variable speed of light (http://en.wiki.x.io/wiki/Variable_speed_of_light) which has some information on cosmologists investigating the possibility of c not always being what it is known as today.24.150.18.30 (talk) 17:52, 6 March 2010 (UTC)[reply]

Entropy

Is the Moon a higher or lower entropy environment than the Earth? What caused it? TheFutureAwaits (talk) 12:59, 5 March 2010 (UTC)[reply]

Our article on entropy suggests the definition "entropy is as such a function of a system's tendency towards spontaneous change." As such, the Earth is a lower entropy system than the Moon, as it is more prone to spontaneous change, and it is such primarily because it is larger -- large enough to retain an atmosphere and an active volcanic system. — Lomn 14:06, 5 March 2010 (UTC)[reply]

I think it's the opposite way: low entropy means highly ordered. For example, instead of a moon, a perfect sphere made entirely of a single element and uniformly a single temperature would be highly ordered: you could basically describe it entirely in half a sentence, giving the diameter, the element it is made of, and the temperature (maybe I'm leaving out one or two things). Because you can describe it in very few words, it therefore has a very low entropy. Now the moon has much higher entropy than a perfect sphere made of a single element. You would need far more space to describe it fully. But the earth has a higher entropy still: it is much more complex. So, I would say that the Earth is a higher-entropy environment. To put it another way, as a percentage, you increase entropy far more when you put an American flag into the low-entropy conditions on the moon than when you place one in the high-entropy conditions on an Earthly mountain. Can someone better versed in math and science confirm my interpretation? Thanks. 82.113.121.103 (talk) 14:30, 5 March 2010 (UTC)[reply]

That's completely wrong. Entropy is not a measure of how many words it takes to give a macroscopic description of an object. It is a measure of how many microscopic states are consistent with the macroscopic description. To labor on your example, if you were to melt the moon, mix it thoroughly and find a way to cool it fast enough to keep the mix uniform, the final sphere would have a higher entropy. Read entropy of mixing. Dauto (talk) 15:42, 5 March 2010 (UTC)[reply]
I find this extremely hard to believe. You are telling me if we took the universe, melted it all together, and made a black hole out of it, with precisely 0 information in the black hole other than maybe its total mass (a single real number, in grams) and MAYBE one or two more variables such as its spin and charge (maybe), then there would be MORE entropy in the Universe (even though you can just describe it as "1 black hole, in the "center" (ha ha) of nothing else, having mass x, charge y, and angular momentum z"). Even if you give all of these to an obscenely unrealistic level of exactness, you still will use maybe a paragraph of digits. A paragraph, even using the best theoretically possible compression, is not enough to accurately describe (i.e. represent a compressed version of) even a single book (say, a collection of Shakespeare plays). So it seems to me that a SINGLE book would have more entropy than all of the universe, if you reduced the universe to a black hole. Likewise, it seems to me that a SINGLE city on Earth would have more entropy than the Moon, if the moon were a uniform substance you can perfectly describe in a few words. If I really am wrong, maybe it's because I'm conflating physical entropy with information entropy? For me, the fewer words you can use to give a second God in a different Universe enough information to fully reproduce an exact copy of something, the lower entropy it has. Our God would need to give a LOT of information to a second God in a different Universe to reproduce the Earth, but considerably less if the Earth were a uniform ball that is an exact geometrical sphere, of fixed temperature, density, etc. Don't you think? Can someone confirm whether I'm right, or whether Dauto above is right? Thank you. 82.113.121.103 (talk) 17:13, 5 March 2010 (UTC)[reply]
Yes, that's what I'm telling you. In fact for any given mass a black hole will be the state of maximum entropy. See black hole thermodynamics.Dauto (talk) 01:31, 6 March 2010 (UTC)[reply]
A small correction: I meant to say that for any given volume a black hole will be the state of maximum entropy. Dauto (talk) 04:20, 6 March 2010 (UTC)[reply]
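For anyone who wants a feel for why the black hole wins, here is a rough sketch of the Bekenstein-Hawking entropy of a one-solar-mass black hole; the constants are approximate and the point is only the order of magnitude.

  import math
  G, c, hbar, k_B = 6.674e-11, 2.998e8, 1.055e-34, 1.381e-23
  M = 1.99e30                          # one solar mass, kg
  r_s = 2 * G * M / c**2               # Schwarzschild radius, ~3 km
  A = 4 * math.pi * r_s**2             # horizon area, m^2
  S = k_B * c**3 * A / (4 * G * hbar)  # Bekenstein-Hawking entropy
  print(S)                             # ~1.5e54 J/K, i.e. about 1e77 k_B, far more than ordinary matter of the same mass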
These answers are just leading to more confusion. Dauto, is the Moon higher or lower entropy than the Earth? Why? TheFutureAwaits (talk) 15:56, 5 March 2010 (UTC)[reply]
If you take the planet as a whole I would venture that earth's mean specific entropy (entropy per unit volume if you will) will be higher simply because earth's core temperature is higher. Dauto (talk) 16:05, 5 March 2010 (UTC)[reply]

My wording may be off, what I'm getting at is which way is the energy exchange moving? So for example in the Sun-Earth system the Sun is increasing in entropy while the Earth is decreasing. How does this work in the Earth-Moon system? TheFutureAwaits (talk) 16:20, 5 March 2010 (UTC)[reply]

I doubt there's any meaningful give-and-take between the two. Entropy increases. — Lomn 16:36, 5 March 2010 (UTC)[reply]
I don't think the sun's entropy is increasing, since the most important factor here is likely the fact that it is losing a massive amount of heat through radiation, so its entropy is actually decreasing. The radiation exchange between the earth and the moon is not a very important factor since they are at similar temperatures, which means radiation is moving in both directions. Dauto (talk) 16:40, 5 March 2010 (UTC)[reply]
Correct me if I'm wrong, but it seems relevant to highlight the increase in entropy of the system, e.g. as the sun radiates energy and mass. The mass that we call the sun one moment becomes a geometrically larger object the next, and the entropy of that system increases (while the entropy of the circumscribed orb we call "the Sun" may decrease). -- Scray (talk) 17:29, 5 March 2010 (UTC)[reply]

Probably the most important point is that theoretical thermodynamics shows that entropy per se is not very important -- what matters for understanding interactions is the dependence of entropy on energy, which is measured by a quantity we call the temperature. Looie496 (talk) 16:58, 5 March 2010 (UTC)[reply]

Since a black hole decays into Hawking radiation, it can't be the state of maximum entropy; rather, a bunch of randomly spread photons and leptons is a state of higher entropy. See Heat death of the universe and Black hole#Entropy and Hawking radiation 82.132.136.207 (talk) 00:32, 7 March 2010 (UTC)[reply]

The black hole is the state of maximum entropy for a given volume. The Hawking radiation that replaces the hole will occupy a larger volume and can have a larger entropy than the hole, as you've shown it must. Dauto (talk) 02:18, 7 March 2010 (UTC)[reply]

panda

Where can you find a giant panda in the United States? —Preceding unsigned comment added by Yelopiclle (talkcontribs) 14:02, 5 March 2010 (UTC)[reply]

From the Giant Panda article...

As of 2007, five major North American zoos have Giant Pandas:

  • Chapultepec Zoo, Mexico City – home of Xi Hua, born on June 25, 1985, Shuan Shuan, born on June 15, 1987, and Xin Xin, born on July 1, 1990 from Tohui (Tohui born at Chapultepec Zoo on July 21, 1981 and died on November 16, 1993), all females
  • San Diego Zoo, San Diego, California – home of Bai Yun (F), Gao Gao (M), Su Lin (F), Zhen Zhen (F), and Yun Zi (M).
  • US National Zoo, Washington, D.C. – home of Mei Xiang (F) and Tian Tian (M).
  • Zoo Atlanta, Atlanta, Georgia – home of Lun Lun (F), Yang Yang (M) and Xi Lan (M)
  • Memphis Zoo, Memphis, Tennessee – home of Ya Ya (F) and Le Le (M)

Googlemeister (talk) 14:18, 5 March 2010 (UTC)[reply]

Who's the best when nature calls?

Hi

1. I've often wondered: when you have to go (pee), but are in a situation where you can't (whatever the reason might be), who's better at holding it, males or females? However, I don't think females can hold it that long.

2. What are the complications of holding a number 2 for too long?

3. I'm sure most guys have had this happen to them somewhere along their lives. However, if you've still somehow managed to elude this experience, you're in for a treat. I've been hit a couple of times in the groin, but on one or two occasions it hurt so bad that I felt I was going to need a new pair of undergarments.

3.1 Is this normal and what are the complications when the injury is serious?


Thanks, NirocFX

41.193.16.234 (talk) 14:05, 5 March 2010 (UTC)[reply]

Women can hold for slightly longer in tests, but it's negligible. There are no adverse effects on health from holding a poo in too long; you'll simply lose control of the bowels and crap your pants. And it's very normal for both men and women to experience massive pain when impacted in the genital area. The risks include hemorrhaging and sterility. R12IIIeloip (talk) —Preceding undated comment added 14:47, 5 March 2010 (UTC).[reply]
I disagree on "holding in poo". The large intestine removes water, and poo held in too long (several days) will thus be dried out and cause constipation. StuRat (talk) 15:37, 5 March 2010 (UTC)[reply]
Cecil Adams covered the potential hazards of holding in "Number 1" for too long here. Doesn't mention the other, though. APL (talk) 15:43, 5 March 2010 (UTC)[reply]

It's possible to get so constipated it comes out the wrong end. So you might want to hold your tongue too . . . or at least your breath. —Preceding unsigned comment added by 71.108.171.138 (talk) 23:32, 5 March 2010 (UTC)[reply]

Was that comment necessary? It is not pertinent to the discussion.--79.68.242.68 (talk) 01:24, 6 March 2010 (UTC)[reply]

Microbial locomotion time

If I am trimming the fat from some chicken thighs with a 5" chef's knife, how long could I expect it to take for the potentially harmful bacteria from the blade to make their way to the handle such that my right hand (holding the knife) should be reasonably assumed to be contaminated? I ask because I usually tend to perceive my right hand to be totally clean and use it without much discretion in terms of touching other things in the kitchen while preparing raw meat, such as getting a pot from the cabinet, taking things from the fridge, etc. DRosenbach (Talk | Contribs) 16:40, 5 March 2010 (UTC)[reply]

Our Swarming motility article describes this as "rapid (2–10 μm/s) and coordinated translocation of a bacterial population across solid or semi-solid surfaces". If we take the upper end of that range (10 μm/s), then even if the bacteria were to move directly toward your hand, it would take more than 40 minutes for them to move an inch (or a couple of hours for a few inches). Add to that the less-directed nature of bacterial movement, the less-than-ideal culture conditions of a knife surface, and the fact that I assumed the upper end of the range of rates. For most users (perhaps not a trained health professional like yourself), it seems extremely unlikely that this mechanism would account for more contamination of your kitchen than a slip in technique (such as setting the knife down in a "clean" versus "dirty" area). -- Scray (talk) 17:13, 5 March 2010 (UTC)[reply]
I dislike that answer. By far the most likely way bacteria would get from something you're cutting onto your hand would be as a fine aerosol of particles sprayed from the object being cut. The mere act of cutting something is going to first stretch - then break and release fibers in the material. As they elastically return to their former shape, they could easily flick microscopic droplets from within the meat or whatever onto your hand. Also air currents and other things could easily be involved. Just measuring the speed a bacterium can move under its own steam is not going to give you anything but a low-end estimate. I'd guess that the high end is probably 100 mph or something. SteveBaker (talk) 19:18, 5 March 2010 (UTC)[reply]
Not sure why you "dislike" that answer enough to make a point of saying so (twice). It's a direct answer to the OP, who asked whether the hand with which he holds the cutting knife is likely to get contaminated by direct spread of the bacteria. The question was not, "how do bacteria spread in the kitchen". I do like your conjectures - they seem plausible (except for the bit about bacteria coming from within intact meat). -- Scray (talk) 20:05, 5 March 2010 (UTC)[reply]
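A quick back-of-the-envelope check of the figures in the first reply, assuming the upper-end swarming speed of 10 μm/s and straight-line travel:

  speed = 10e-6                   # 10 micrometres per second
  inch = 0.0254                   # metres
  print(inch / speed / 60)        # ~42 minutes to cover one inch
  print(5 * inch / speed / 3600)  # ~3.5 hours to travel the length of a 5-inch blade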

120 volt equipment

Hello. Q: what happens to 120 volt 60 Hz equipment when it is plugged into a 120 volt 400 Hz power source? My theory is that it will run for a while, but eventually it would overheat, for example an electric motor rated 120 volt 60 Hz? —Preceding unsigned comment added by 205.200.77.222 (talk) 16:48, 5 March 2010 (UTC)[reply]

It's a difficult question because it depends entirely on the appliance. Some things will die instantly, others will run perfectly happily, others will be somewhere in between. I live in the USA (60Hz, 120v) but have some things I brought with me from the UK (50Hz, 240v) - the cheaper kinds of converters convert to the correct voltage but put out the wrong frequency (i.e. 60Hz, 240v) and I have several gadgets that don't like that - one overheats, two others just don't work - and that's with just a 10Hz difference! A more expensive converter that I found corrects the frequency too - and that allows those gadgets to work OK. SteveBaker (talk) 19:12, 5 March 2010 (UTC)[reply]
In general, using electronics designed for low-frequency power on a high-frequency power input is safer than the other way around -- you don't need to worry about transformer cores saturating and overheating. Electric motors are one of the exceptions: if the input frequency is faster than the motor can spin, the motor won't move at all, and will act as a short circuit. --Carnildo (talk) 02:07, 6 March 2010 (UTC)[reply]
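A sketch of why frequency matters for anything built around a winding: inductive reactance scales with frequency as X_L = 2*pi*f*L, so a 400 Hz supply sees far more reactance than a 60 Hz one, while running a transformer or motor below its design frequency pushes the core toward saturation and overheating. The inductance value below is invented purely for illustration.

  import math
  L = 0.5                                  # hypothetical winding inductance, henries
  for f in (60, 400):
      X = 2 * math.pi * f * L              # inductive reactance, ohms
      print(f, "Hz ->", round(X), "ohms")  # 60 Hz -> ~188 ohms, 400 Hz -> ~1257 ohms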
How about if it is a universal motor? Would a 120 volt universal motor run just as well on DC or 60 Hz or 400 Hz? I would expect a purely heating appliance or an incandescent light bulb to run similarly on anything from DC to 60 Hz to 400 Hz. Edison (talk) 21:28, 6 March 2010 (UTC)[reply]

Matter

I know matter can be converted to energy, but can it be converted to anything else? Dark matter perhaps?

It's not that matter can be converted into energy, like a magic trick. It's that matter and energy are different expressions of the same fundamental concept. Matter is merely energy which has been confined by the limits of a set of quantum numbers, but fundamentally matter is energy and energy is matter. See Mass–energy equivalence for more information. --Jayron32 17:43, 5 March 2010 (UTC)[reply]
yeah, you might have heard of this equivalency as e=m times a constant which I forget. 82.113.106.100 (talk) 21:26, 6 March 2010 (UTC)[reply]

Which species of ant has a stinger AND bites to inject formic acid into the wounds? —Preceding unsigned comment added by 518c&e (talkcontribs) 17:04, 5 March 2010 (UTC)[reply]

Species upper limit?

Have scientists discovered almost all species that exist? —Preceding unsigned comment added by SpiderLighting (talkcontribs) 17:09, 5 March 2010 (UTC)[reply]

As surprising as it might seem, only a minority of species have been identified. -RobertMel (talk) 17:15, 5 March 2010 (UTC)[reply]
Obviously we have not identified them all, but without knowing how many there are, how do we know we have more than 50% to go? Googlemeister (talk) 17:29, 5 March 2010 (UTC)[reply]
And in order to know if we've discovered them all, we'd first have to know how many there are. Which we don't. The species article has some things to say about this. Dismas|(talk) 17:18, 5 March 2010 (UTC)[reply]
Certainly not. Researchers have detected over 700 species of periodontal pathogens in the diseased gum tissue but have characterized and positively identified and marked less than 300 of them -- and that's just bacteria of the gums, let alone of the entire mouth and let alone the entire body and let alone the entire world. But in case you were referring to animal species, the answer would still be no. DRosenbach (Talk | Contribs) 17:21, 5 March 2010 (UTC)[reply]
Likewise, I have heard it speculated that if we counted up all of the discrete animal species we have identified, it would still be less than the number of beetle species yet unidentified. We are nowhere near ending the cataloguing of species. Furthermore, depending on how you define species, there are some life (bacteria) or pseudolife (virus) forms which speciate at a rate which means that we can get new ones within a human lifespan, meaning that we will never be done. Even if we confine ourselves to macrolife (plants & animals) we aren't even close to being done cataloguing them. --Jayron32 17:41, 5 March 2010 (UTC)[reply]
Add to that, many species will just vanish before we identify them. -RobertMel (talk) 18:04, 5 March 2010 (UTC)[reply]
The question of how to estimate the total number of species is interesting. One approach might be sampling. For example, if you take one small island and study it to death to hopefully identify every species, and find that there are 10 times as many as you knew initially, that might be some indication that we know less than 50% of the species worldwide. StuRat (talk) 17:51, 5 March 2010 (UTC)[reply]
I suppose you might also plot a graph of the number of newly discovered species per year (adjusted for the number of people hunting them and the amount of effort they put into it) and see if the graph was showing any signs of asymptoting out - which would be a clue that we were close to finding them all. However, (as others have pointed out) there is no way that we're 50% of the way through the process if you include microscopic stuff like bacteria and algae. That 50% number could only possibly be for things the size of insects and above. SteveBaker (talk) 19:06, 5 March 2010 (UTC)[reply]
Plus, the definition of the word "species" is far from settled, so you have to argue about that for another few hundred years before you even start counting beetles. --Sean 21:28, 5 March 2010 (UTC)[reply]
We've mapped maybe 3% of the floors of the oceans. It is unlikely that this is the coolest 3% out there. There are lots and lots of species in the oceans alone that we have not ever seen. Comet Tuttle (talk) 22:33, 5 March 2010 (UTC)[reply]
There are also likely millions of species of microorganisms in soil that we have not yet identified. Do those count? ~AH1(TCU) 04:01, 6 March 2010 (UTC)[reply]
Do you mean species on Earth? If there is life elsewhere in the universe, we are probably nowhere even close to discovering all of the species that exist.24.150.18.30 (talk) 18:01, 6 March 2010 (UTC)[reply]

For microorganisms, estimates of the number of species are made by counting the number of distinct DNA sequences in a sample of earth or water, and then doing calculations. This method yields numbers far higher than attempts to count species directly. Looie496 (talk) 18:07, 6 March 2010 (UTC)[reply]

relationship between entropy in physics and entropy in information science?

if you look at my above edit, you will see that I might be very confused indeed. Can someone help explain to me in simple terms what the relationship, if any, is between entropy as understood in physics and entropy as understood in information science? If there is no relationship, why is it the same word? Thank you. —Preceding unsigned comment added by 82.113.121.103 (talk) 17:19, 5 March 2010 (UTC)[reply]

We have an article about this. See Entropy in thermodynamics and information theory. In simple terms, entropy is used in both cases to refer to a loss of usability. In thermodynamics, it refers to the loss of Free energy in the universe, or in any closed system. In information theory, it refers to the loss of predictability or understandability in some bit of information. It's sort of like how both biology and chemistry use the word "nucleus" to mean "the bit in the center" (either a cell nucleus or an atomic nucleus.) However, the two entropies are far closer related than that. The mathematics of both is still controlled by Boltzmann's entropy formula, which in information theory takes the slightly different form known as the Hartley function. There are also some gedanken experiments in physics which tie the two fields together nicely. Schroedinger's cat is essentially about information entropy (uncertainty in knowing the state of decay of a single particle is a form of information entropy, as is the fate of our poor cat). The paradox of Maxwell's demon, which conceives of an energy-less way of reducing entropy (and thus SHOULD be a violation of the second law of thermodynamics), can be resolved if we consider information to be negative entropy. I have a nice little "physics for laypeople" primer I should dig up which discusses the connections between information entropy and thermodynamic entropy. --Jayron32 17:37, 5 March 2010 (UTC)[reply]
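A tiny sketch of the information-theoretic quantity discussed above: the Shannon entropy of a distribution, which reduces to the Hartley value log2(N) when all N states are equally likely, mirroring Boltzmann's S = k ln W for W equally probable microstates.

  import math

  def shannon_entropy(probs):
      """Entropy, in bits, of a discrete probability distribution."""
      return -sum(p * math.log2(p) for p in probs if p > 0)

  print(shannon_entropy([0.25] * 4))  # 2.0 bits, the Hartley value log2(4)
  print(shannon_entropy([0.9, 0.1]))  # ~0.47 bits: a more predictable system carries less entropy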
Thank you. Again looking at my above edit, as I referenced at the start of this thread, could you try to understand the way in which I was trying to see black holes, planet-sized highly ordered/uniform/geometrically perfect and easily described objects, and the actual Earth, in terms of "entropy" (which I was using in more the informational sense) and tell me why it doesn't apply (if it doesn't) in a physical sense. I.e., someone in the linked thread said that I was wrong, and I wonder why (if it's true). Contrary to the implications you make above, it seems to me that physical and information entropy would have to point in opposite directions for me to be wrong above... 82.113.121.103 (talk) 17:49, 5 March 2010 (UTC)[reply]
Well, the deal is that entropy is often a slightly misused term in the physical sciences (thermodynamics). Entropy is really just a mathematical concept, after all Ludwig Boltzmann was basically a mathematician. Entropy is just the relationship between the number of possible states of a closed system and the uncertainty of knowing which of those states you are in at any given moment. In thermodynamics, the term is used to basically mean the disorder in a system; it is expressed as negative free energy, or in other terms, entropy is expressed as "the energy needed to return a system to a state of perfect order, where all variables are eliminated and the location of all particles is known with perfect precision". Entropy as a concept is still the mathematical thing, but in thermodynamics we discuss it in terms of the free energy lost in reordering a system. In information theory, the entropy is the same idea; it is based on the number of possible states of a system and the uncertainty of knowing which state the system is in. Information theory actually uses a purer form of entropy, in that it doesn't use a concept like energy as a surrogate for entropy; it uses a unitless value to express "pure" entropy. But the sign and directionality of the two are the same; the more disordered the system, the higher the entropy. --Jayron32 01:26, 6 March 2010 (UTC)[reply]
Jayron, your final conclusion is essentially correct but your argument is stuffed with a potpourri of misconceptions. I won't go into all of them. Let me just point out that entropy and Free energy are not even measured with the same units, so your explanation makes no sense. BTW your rant about physical sciences misusing entropy is also a gem of nonsense. Dauto (talk) 03:20, 6 March 2010 (UTC)[reply]
Well, it's not that they misuse the term entropy really; they just use it a bit differently than information theory does. And entropy is essentially negative free energy (for any given temperature). At a conceptual level, the two are basically opposite concepts. Entropy is merely disorder; the thermodynamic definition expresses this disorder as a unit of energy per temperature. Free energy is energy available to do work; we can consider the theoretical "free energy of the universe" to have been at a maximum at the Big Bang, and it has been consistently decreasing over time. Thus, for the universe, spontaneity is represented by a decreasing free energy, or ΔG < 0. The early physical chemists and mathematicians working in this area recognized that this was essentially the functional opposite of entropy; the entropy of the entire universe tends towards a maximum as the free energy tends towards a minimum. When we look at a chemical process, by Helmholtz's and Gibbs's conventions, we tend to take the perspective of "free energy". But remove the math for a second, and look at the conceptual nature of it. Every process has two contributions to make towards affecting the entropy of the universe. Entropy is temperature dependent (warmer things have faster moving particles, so their position is less certain than cooler things), so any process that tends to heat its surroundings tends to increase the entropy at the universe level. However, most chemical processes also involve a change in organization of the particles, which is a direct effect on the entropy itself. So, if we want to look at a process, there are two things going on: release or absorption of heat, which affects the entropy of the surroundings, and reorganization of the substances involved, which affects the entropy of the system. Conceptually, entropy and free energy are basically the same thing in the opposite direction, and in thermodynamics we tend to express everything in terms of energy, so by convention the entire thing is expressed in terms of energy values, in the classic G = H - TS (or A = U - TS for the Helmholtzally inclined), which contains the expression of entropy as an energy-per-temperature value (J/K). But this is a convention to give us a number we can find meaning with (usually looking at the "spontaneity" or "extensiveness" of a chemical process) to compare different processes. --Jayron32 04:42, 6 March 2010 (UTC)[reply]
You spend a whole paragraph explaining how entropy and free energy are supposed to be the same concept and then write a (correct) equation that shows that they are related but are definitely different concepts and then conclude by saying that the equation is just a convention? Are you a Chemist? Dauto (talk) 15:45, 6 March 2010 (UTC)[reply]
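To make the G = H - TS bookkeeping discussed above concrete, here is a sketch with invented round numbers for a process that releases heat while making the system more ordered:

  T = 298.0           # kelvin
  dH = -100e3         # hypothetical enthalpy change, J/mol (heat released to the surroundings)
  dS = -100.0         # hypothetical entropy change of the system, J/(mol*K)
  dG = dH - T * dS    # Gibbs free energy change
  print(dG)           # -70200 J/mol: negative, so spontaneous despite the system becoming more ordered
  print(-dH / T + dS) # total entropy change (surroundings + system): ~+236 J/(mol*K), consistent with the second law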

Any known planets with satellites with satellites?

Are there any known cases in astronomy of a planet which has a moon orbiting it and that moon has another smaller moon orbiting it?20.137.18.50 (talk) 17:39, 5 March 2010 (UTC)[reply]

This question was answered a few years ago - the short answer is "No". The longer answer is that the nature of gravitational and tidal interactions between planet and moon would make a moon-of-a-moon orbitally unstable - so if one ever did somehow come into existence, it would either smack into the planet or the parent moon - or spin off into space within a relatively short period of time. The only exceptions to that are the various artificial satellites that humans have put in orbit around various moons within the solar system - which were there only for short periods of time. SteveBaker (talk) 18:55, 5 March 2010 (UTC)[reply]
Thanks, Steve. When Lomn said in that original question "The key consideration (discussed at n-body problem) is that the smallest body must have a mass insignificant with regards to the largest body." with respect to the word "insignificant" are we talking 1/10, 1/100, 1/1000? I followed the link to the n-body problem and didn't see any elaboration of significance of mass difference there, though I have a little more idea of how crazily complex things get with moon-on-moon action. 20.137.18.50 (talk) 19:08, 5 March 2010 (UTC)[reply]
A close possibility is the tiny moons Nix and Hydra. They are orbiting a double-body system, so in a sense each satellite is orbiting a pair of satellites, where Pluto is orbiting Charon while Charon is simultaneously orbiting Pluto. I suppose someone will come by soon and point out that none of these are technically planets. Googlemeister (talk) 19:16, 5 March 2010 (UTC)[reply]
Drat, I was just going to say that. I also wonder if Mercury and maybe Venus lack moons for the same reason, basically that the nearby Sun would knock the moons out of orbit. StuRat (talk) 21:17, 5 March 2010 (UTC)[reply]
The Lunar Reconnaissance Orbiter now orbiting the moon is a small artificial 'moon' of the Moon. You may be familiar with the planet. Cuddlyable3 (talk) 03:07, 6 March 2010 (UTC)[reply]
The LRO has the mass of a large automobile. Astronomical definitions are notoriously sketchy, but it's doubtful anything the size of a Lincoln Continental would ever be defined as a "moon," artificial or otherwise. 63.17.82.123 (talk) 04:25, 6 March 2010 (UTC)[reply]
@63.17.82.123 Enter "artificial moon" in the Wikipedia search box and you are taken to the article about artificial satellites. That is correct. The first satellite in fiction was called The Brick Moon. There is nothing that an artificial moon can be other than a manufactured satellite. Your objection to my use of the term "artificial moon" is groundless. Cuddlyable3 (talk) 18:04, 6 March 2010 (UTC)[reply]
Well clearly, the sun-earth-moon system has been stable for quite a while, so mass_moon/mass_sun = 3.69e-8 is "insignificant". —Preceding unsigned comment added by 83.134.176.244 (talk) 08:37, 6 March 2010 (UTC)[reply]
Doesn't distance play a role ? That might explain why Mercury (and maybe Venus) doesn't have any moons, while Pluto has at least 3. StuRat (talk) 13:22, 6 March 2010 (UTC)[reply]

Telescopes

edit

Is there a functional limit on the level of magnification a telescope can achieve? So for example, how big would the lens have to be to read a car's license plate on Earth from Alpha Centauri?

Are there any technologies we could implement that would reduce that size, or is it a hard limit? TheFutureAwaits (talk) 18:06, 5 March 2010 (UTC)[reply]

Generally speaking, the main purpose of a telescope is not to magnify to a great extent, but to collect as much light as possible, to study faint objects. That said, the angular resolution of a telescope depends primarily on its diameter. The larger the diameter of the telescope, the finer the detail it can resolve. On the surface of the earth, however, atmospheric seeing spoils the resolution of optical telescopes larger than ~10 cm, so you either need advanced technologies like adaptive optics or interferometry, or you need to put the telescope in orbit. -- Coneslayer (talk) 18:13, 5 March 2010 (UTC)[reply]
To address your question quantitatively, to read a license plate on Alpha Centauri, you need an angular resolution of roughly θ = (1 cm) / (1 pc) = 3e-19 radians. Using θ = 1.22 λ/D (as explained in angular resolution), and assuming λ = 500 nm (visible light), you need a space telescope with diameter D = 2e12 meters, or 2 billion kilometers, or 13 astronomical units. That's bigger than the orbit of Jupiter. -- Coneslayer (talk) 18:21, 5 March 2010 (UTC)[reply]
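Coneslayer's numbers can be reproduced in a couple of lines; the 1 cm character size, 500 nm wavelength and the rounding of Alpha Centauri's distance to about one parsec are the same simplifying assumptions used above.

    # Diffraction-limited aperture needed to resolve ~1 cm detail at ~1 parsec.
    wavelength = 500e-9      # m, visible light
    detail = 0.01            # m, size of a license-plate character
    distance = 3.086e16      # m, roughly one parsec

    theta = detail / distance            # required angular resolution in radians
    D = 1.22 * wavelength / theta        # Rayleigh criterion: theta = 1.22 * lambda / D
    print(f"theta ~ {theta:.1e} rad")                       # ~3e-19 rad
    print(f"D ~ {D:.1e} m ~ {D / 1.496e11:.0f} AU")         # ~2e12 m, ~13 AU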
see http://en.wiki.x.io/wiki/Diffraction_limit —Preceding unsigned comment added by 83.134.176.244 (talk) 18:14, 5 March 2010 (UTC)[reply]
(ec) Even ignoring such things as atmospheric distortion and nearby bright things such as the Sun, one important question to consider (I don't know the answer) is how many photons you would need reflected from a license plate in order to be able to read it, and the rate at which these photons would reach Alpha Centauri. Probably you don't even have the remotest chance of being able to read a license plate, because at Alpha Centauri you'd receive one photon reflected from the license plate every 300 years (totally out-of-my-ass guess, but you get the idea). —Bkell (talk) 18:15, 5 March 2010 (UTC)[reply]
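Bkell's guess can be turned into a back-of-the-envelope estimate. Every number below is an assumption chosen only to show the scale of the problem: full sunlight on the plate, 10% reflectivity, and the reflected light spread over a hemisphere.

    import math

    # Rough photon budget for a sunlit license plate seen from ~4.3 light-years.
    solar_flux = 1000.0        # W/m^2 of sunlight hitting the plate (assumed)
    plate_area = 0.3 * 0.15    # m^2
    albedo = 0.1               # fraction of light reflected (assumed)
    E_photon = 4e-19           # J per visible-light photon
    distance = 4.1e16          # m, ~4.3 light-years

    photons_per_s = solar_flux * plate_area * albedo / E_photon
    flux = photons_per_s / (2 * math.pi * distance ** 2)     # spread over a hemisphere
    print(f"~{flux:.1e} photons per square metre per second at that distance")
    print(f"i.e. roughly one photon per square metre every {1 / flux / 3.15e7:.1e} years")

So the "one photon every few hundred years" intuition is, if anything, optimistic for any normal-sized collector; only an absurdly large aperture gets around it.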
You could (in theory) solve the issue of the rare appearance of a photon reflected from that license plate by integrating the incoming light over a very long period - but now we have a mirror the size of the orbit of Jupiter that's got to stay pointing in the right direction for (perhaps) tens of thousands of years - getting that kind of stability would be really tough - and in any case, that gigantic mirror would have to be moved to track the motion of the license plate relative to the telescope. If it has to point accurately to within 3×10^-19 radians yet move by enough to track the rotation of the license plate's parent planet - and that planet's orbit around the parent star - and the relative motion of that star with respect to the telescope (not negligible over the amount of time you'd need to be gathering light) - then the mechanism that moves the mirror would be a horror to construct! This is so far from being a practical possibility that we're going to have to say that the answer is "No" - you can't do that. SteveBaker (talk) 18:49, 5 March 2010 (UTC)[reply]
I wouldn't hold the mirror still, I would just measure its movements and have a computer compensate for them in the final image. --Tango (talk) 19:46, 5 March 2010 (UTC)[reply]

Interesting, so then how big of a diameter lens would be required to achieve sufficient angular resolution to read a license plate of a car on Earth assuming there was no atmosphere? TheFutureAwaits (talk) 18:17, 5 March 2010 (UTC)[reply]

Hmmm. I read license plates on earth without any lens all the time ;-). --Stephan Schulz (talk) 18:48, 5 March 2010 (UTC)[reply]
As Coneslayer already calculated...about the size of the orbit of Jupiter. SteveBaker (talk) 18:49, 5 March 2010 (UTC)[reply]
I was still drafting my calculation when TheFutureAwaits asked above. Given the "on Earth" part of this question, I'm not sure if it's meant to be a restatement of the Alpha Centauri question, or a different question. -- Coneslayer (talk) 19:01, 5 March 2010 (UTC)[reply]
The mirror does not need to be a disc 13 AU in diameter. You can have a series of smaller telescopes, all coordinated together. This already happens with radio telescopes and, I think, more rarely with optical telescopes. I cannot recall what this is called - I'm sure there must be an article on it. Edit: see Cambridge Optical Aperture Synthesis Telescope, http://www.mrao.cam.ac.uk/telescopes/coast/handout.html and Very Long Baseline Interferometry. Perhaps you could just have one telescope at Jupiter and put together what it sees with what it sees six "Jupiter months" later. 89.243.198.135 (talk) 19:13, 5 March 2010 (UTC)[reply]
That would be interferometry (or aperture synthesis), which I linked to in my original response. -- Coneslayer (talk) 19:14, 5 March 2010 (UTC)[reply]
In regards to Perhaps you could just have one telescope at Jupiter and put together what it sees with what it sees six "Jupiter months" later, no. Interferometric observations must be combined coherently. Without a direct way to measure optical phase, I do not believe there is any way to interferometrically combine optical observations taken at different times. -- Coneslayer (talk) 20:42, 5 March 2010 (UTC)[reply]
The Aperture synthesis article says that one telescope with the rotation of the earth is used. What stops a one-year rotation being used instead of a twenty-four hour rotation, at least for radio waves? 89.243.198.135 (talk) 20:53, 5 March 2010 (UTC)[reply]
Can you point me to where you're seeing "one telescope"? With a single baseline (two telescopes), you use the rotation of the earth to change the orientation of the baseline relative to the target, which lets you achieve high resolution along different axes at different times. But in any case, that's radio interferometry, not optical interferometry. It's possible to directly record phase of radio signals, which lets you do the interferometry in post-processing. You can't do that in the optical. -- Coneslayer (talk) 20:57, 5 March 2010 (UTC)[reply]

The alphacentaurians(?) could at best see only the license plate you had over 4 years ago. Cuddlyable3 (talk) 02:59, 6 March 2010 (UTC)[reply]

Even from earth orbit it is very hard to read license plates. I'm wondering why the alphacentaurians stick their license plates on the top of their cars instead of at the ends. Dmcq (talk) 08:44, 6 March 2010 (UTC)[reply]
Of course you don't look for license plates in the center of the Earth's disk, but at the very corner, when they are just driving over the horizon. You can even use head and tail lamps for targeting. Duh! --Stephan Schulz (talk) 09:10, 6 March 2010 (UTC)[reply]

OTC Cryogenics to treat warts or moles

edit

Are OTC cryogenic treatments meant for warts or for moles? Are they effective? And what if a person applied one to the wrong thing - for example, a product meant for moles applied to a wart, or the other way round? Quest09 (talk) 18:38, 5 March 2010 (UTC)[reply]

They are to treat warts (and are effective) and their instructions specifically state that they are not to be used on moles. I'll think on the rest of your question, but I think it's bordering on medical advice. -- Flyguy649 talk 18:56, 5 March 2010 (UTC)[reply]
(edit conflict)I have heard of warts being treated with liquid nitrogen which, amongst other things, stimulates the immune system into responding to the HPV virus that causes the wart. The effectiveness probably depends on the person and type of wart. I'm not sure about moles. — Ƶ§œš¹ [aɪm ˈfɹ̠ˤʷɛ̃ɾ̃ˡi] 18:59, 5 March 2010 (UTC)[reply]


I think in general it's a bad idea to think of warts and moles as having anything to do with each other. Warts are viral infections; they need to be gotten rid of, so the virus doesn't spread. Most moles, on the other hand, require no treatment at all; you can just leave them be unless they're bothering you in some way. There are of course exceptions — see malignant melanoma for the worst-case scenario. --Trovatore (talk) 19:01, 5 March 2010 (UTC)[reply]

120v v. 240v

edit

How much power could you safely continuously draw through an ordinary domestic 120 volt American electricity socket versus an ordinary domestic British 240 volt (or possibly 220 volts) socket? British electric sockets are installed on a ring main which I think also allows more power to be drawn. Thanks 89.243.198.135 (talk) 20:30, 5 March 2010 (UTC)[reply]

I can't speak for UK circuits, but a North American 15 amp circuit is good for about 1800 watts, assuming no other loads. At that rate the circuit breaker may eventually trip if load is continuous. Acroterion (talk) 20:55, 5 March 2010 (UTC)[reply]
UK must be more than that because I had a 2000 watt electric fire plugged into the mains. —Preceding unsigned comment added by Dataport676 (talkcontribs) 21:25, 5 March 2010 (UTC)[reply]
The maximum current you can get from a UK socket is 13A - the nominal UK mains voltage is 230V ± 10% (see Mains power around the world). This means that 3kW (13A @ 230.8V) _might_ blow the fuse if the voltage is a bit high (or with a constant-power device and the voltage a bit low). 2kW and 2.5kW devices are common, though. Tevildo (talk) 21:26, 5 March 2010 (UTC)[reply]
You can also still get the old BS 546 unfused 15A round-pin plugs (giving you 3.6kW @ 240V), but that's not really an _ordinary_ domestic socket these days. Tevildo (talk) 21:35, 5 March 2010 (UTC)[reply]
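To make Tevildo's tolerance point concrete, here is a trivial Python check of the power a 13 A plug fuse passes across the permitted UK voltage band (nominal figures only):

    # Power available through a 13 A plug fuse over the UK mains tolerance band.
    fuse_current = 13.0                      # A
    for v in (207.0, 230.0, 253.0):          # 230 V nominal, -10% and +10%
        print(f"{v:.0f} V: {fuse_current * v / 1000:.2f} kW")

    # A constant-power 3 kW appliance draws about 3000 / 207 = 14.5 A at the low
    # end of the band, which is why it may blow the fuse there.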
In Britain the 13A limit mentioned by Tevildo is imposed by a fuse in the standard plug, not the socket. Modern UK houses are wired with a ring main that is capable of delivering 13A to multiple sockets. Therefore by bypassing or up-rating the 13A plug fuse one can draw more current from a British socket, limited by the main fuse. This is not a safe procedure for an amateur but it is possible. Cuddlyable3 (talk) 21:53, 5 March 2010 (UTC)[reply]
I would advise very, very strongly _indeed_ against this (unless one wants to burn one's house down). If you need more than 3kW from a socket, replace it with a proper BS 546 15A one. You _can_ get more than 13A out of an ordinary socket by shorting out the fuse. You _can_ get a higher steam pressure in a boiler by welding down the safety valve. The OP, however, included (quite rightly) the word "safely" in the question. This solution does not satisfy that criterion. Tevildo (talk) 22:15, 5 March 2010 (UTC)[reply]
As long as you only do it on one device, you should be fine, but if you do it too much you'll trip the circuit. The mains breaker would normally take more effort to trip. --Tango (talk) 22:05, 5 March 2010 (UTC)[reply]
Concern for safety is admirable if it is based on facts. BS 546 is an outdated British standard for round-pin plugs and sockets that are incompatible with modern 13A BS 1363 sockets and lack the safety features of BS 1363: shuttered sockets, finger shrouds, a fuse in the plug and flat-wiping contacts. If recommending that someone install new sockets for a high-current supply, the Europe-wide IEC 60309 standard family is preferable because of its modern design. Connectors are colour-coded yellow for 100-130 V and blue for 200-250 V. Versions for 16 A, 32 A, 63 A and higher are available. Cuddlyable3 (talk) 02:52, 6 March 2010 (UTC)[reply]
Similarly in North America, while the standard 120 V socket is intended for circuits fused at 15 A or 20 A, there are higher-current sockets available (yes, for 120 V as well as for 240 V); but in any country such higher-current sockets are meant only for use on a special circuit with heavier wiring than usual. --Anonymous, 05:23 UTC, March 6, 2010.
In the UK, ring mains are normally fused at 30 amps.--79.68.242.68 (talk) 00:17, 6 March 2010 (UTC)[reply]
This means that, in the UK, 3 kW appliances are common. (The 13 A fuse allows up to 3.12 kW at the standard UK voltage of 240 V - yes, I know that 220 V is quoted, but 240 V is standard.) For anything larger than this, it would (in theory) be possible to use multiple plugs and cables, but this would be very dangerous for obvious reasons (live plugs!), so any appliance that draws more than 3 kW must be permanently wired with a high-current switch. Instant showers up to 9.8 kW are available, fused at 40 amps. Drawing more than 3 kW by shorting out the fuse will result in a dangerously hot plug and socket because the connectors are not designed to carry more than 13 A. Dbfirs 11:30, 6 March 2010 (UTC)
Are you sure that drawing more than 13 A via a 13 A plug/socket arrangement will result in a dangerously hot plug and socket? Since P = I^2*R, what resistance would the plug/socket have to have to make it 'dangerously hot'? And how does this square with the commonly accepted standard safe current density in conductors of 1000 A/cm^2? —Preceding unsigned comment added by 79.76.171.183 (talk) 21:49, 6 March 2010 (UTC)[reply]
A compliant plug and socket has a safety factor that should allow for 3 kVA plus a 'bit more'. This is because the 'contact points' between the cable and the screw terminals are only a fraction of their cross-sectional areas (cm^2). Yes, plugs and sockets can get warm. Properly wired and carrying no more than the rated kVA, they should not generate undue heat, though. So, as stated above: do not overload your circuit. I have seen charred plugs (more than once) where people have wired two appliances to one plug and drawn too much current. --Aspro (talk) 22:23, 6 March 2010 (UTC)[reply]
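The P = I^2*R question above can be put in numbers. The contact resistances below are assumed illustrative values, not measurements of any particular plug; the point is how little resistance it takes to make a few watts of heat inside a small enclosed connector.

    # Heat dissipated in a plug's contact resistance: P = I^2 * R.
    for R in (0.005, 0.02, 0.05):      # ohms, assumed contact resistances
        for I in (13.0, 20.0):         # amps: rated current vs. an overload
            print(f"R = {R * 1000:.0f} mOhm, I = {I:.0f} A: P = {I * I * R:.1f} W in the contact")

A few watts concentrated in a brass pin inside an unventilated plug body is enough to make it noticeably hot, and tens of watts will char it, which is consistent with Aspro's observation above.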


I've not seen "ordinary 120 volt domestic outlets" rated above 20 amps, but U.S. electric codes usually want you to stay below 80% of that rating, which would be 16 amps. That said, if it is rated at 20 amps it could probably supply 20 amps. RV hookups have 30 amp 120 volt outlets. 50 amp 125 volt connectors are available. For more than 20 amps it is common to go to a 240 volt supply. There is really no physical limit on how many amps a 120 volt connector could be built to supply for industrial or utility applications. Some low voltage network grids in large cities are built to carry thousands of amps at 120 volts, in a 120/208 three phase circuit, and these have connectors. Edison (talk) 21:24, 6 March 2010 (UTC)[reply]

Genus name

edit

From where did the genus Zyzzyx derive its name? Googlemeister (talk) 20:34, 5 March 2010 (UTC)[reply]

First paragraph of that article you linked to, emphasis added:
Zyzzyx is a monospecific genus of sand wasp, containing a brightly-colored, medium-sized species, Z. chilensis, named after the sound they make while flying. They were first studied in detail by H. Janvier (a.k.a. Claude-Joseph) in 1928, more than 100 years after they were first described. The unusual name is onomatopoeia for the buzzing sound they make.
Hope that answers your question. —Bkell (talk) 20:36, 5 March 2010 (UTC)[reply]
I read that but suspected it was maybe vandalism. Googlemeister (talk) 20:42, 5 March 2010 (UTC)[reply]
Ah, okay, that's possible. —Bkell (talk) 20:48, 5 March 2010 (UTC)[reply]
This source says Alan Solem named Zyzzyxdonta and Aaadonta "with the idea of being the first and the last entries in any list of endodontoid snails." Not sure if the sand wasp is related. Gobonobo T C 21:28, 5 March 2010 (UTC)[reply]
I've fact-tagged the claims that it's supposed to represent the sound made while flying. There's no citation. Of interest of course is that there is a "Zyzzyx Road" in California, found on Interstate 15 between Los Angeles and Las Vegas. Two films are named after the road, according to our articles; the film Zyzzyx Road is said to have earned a US box office total of US$30, and an international take of over 10,000 times that amount. Comet Tuttle (talk) 23:58, 5 March 2010 (UTC)[reply]
The real road is actually Zzyzx Road. —Bkell (talk) 03:57, 6 March 2010 (UTC)[reply]

A little confusion concerning E=mc^2

edit

I've been told that the E in E=mc^2 stands for energy in general. But, the equation itself is derived from solving the work integral (with m(v) = γ*m_o), and so I would assume that it is only applicable for kinetic energy. When an object is heated, I can see why its mass would increase: the increased temperature means the atoms are vibrating faster, implying their individual masses are higher (by m(v) = γ*m_o), and thus the object has a higher mass. But for something like binding energy, I'm at a loss as to how the equation E=mc^2 can predict that CO2 would have a lower mass than a carbon atom and two oxygen atoms at the same temperature, or that an object at a higher altitude would have more mass. —Preceding unsigned comment added by 173.179.59.66 (talk) 21:02, 5 March 2010 (UTC)[reply]

You might like to ask the maths desk to help with this one
I don't see why. It is clearly a science question. --Tango (talk) 21:27, 5 March 2010 (UTC)[reply]
Yes, but it's also maths. I just felt the OP could benefit from a wider insight into his question, for example making a thread at the maths desk directing them here. —Preceding unsigned comment added by Dataport676 (talkcontribs) 21:29, 5 March 2010 (UTC)[reply]
It's only maths to the extent that all quantitative science uses maths. We don't notify the maths desk every time a science question is asked on the science desk. --Tango (talk) 21:54, 5 March 2010 (UTC)[reply]
E=mc^2 doesn't predict anything about energy; it just tells you the relationship between energy and mass. CO2 will have a lower energy than the atoms separately, and E=mc^2 tells us that that means it will have a lower mass. To realise that it has a lower mass you need some results from physical chemistry, not relativity. --Tango (talk) 21:27, 5 March 2010 (UTC)[reply]
Right, but the derivation of E=mc^2 seems to be only applicable to kinetic energy. My question is how it can encompass potential energy too. 173.179.59.66 (talk) 21:45, 5 March 2010 (UTC)[reply]
There are more complicated derivations which are more general, I think. --Tango (talk) 21:54, 5 March 2010 (UTC)[reply]
E=mc^2 is applicable to all energy and mass, not just kinetic energy. Energy stored in a system contributes to its mass, so when forming bonds releases energy, the resulting compound is slightly lighter than the separated atoms would be. A hydrogen atom has a slightly lower mass than the sum of the masses of a proton and an electron because of the energy released through the attraction between the positive and negative charges, etc. etc. Concepts like "kinetic energy" and "potential energy" aren't relevant to mass-energy equivalence; it's all just energy. So objects moving faster are heavier because of the higher kinetic energy, AND stored-up energy also generates more mass. --Jayron32 01:14, 6 March 2010 (UTC)[reply]
As you said, the derivation usually given in introductory books seems to be applicable only to kinetic energy, but in fact it is applicable to all forms of energy. Please do the following gedanken experiment: put some carbon and oxygen inside a box and burn the carbon. During the burning, potential energy gets converted into kinetic energy, leading to a higher final temperature. Now assume that an observer was passing by with a speed v. From the point of view of that observer, the box was moving at constant speed. Since momentum is also conserved, the mass of the box must have remained constant throughout the process. That means that the mass-(kinetic energy) equivalence at the end must have come from a mass-(potential energy) equivalence at the start. Dauto (talk) 01:16, 6 March 2010 (UTC)[reply]
Interesting, I never thought of that! Out of curiosity, is this the official derivation, or is there a more "mathematical" method of coming to this conclusion (like through 4-vectors or something of the sort)? —Preceding unsigned comment added by 173.179.59.66 (talk) 07:06, 6 March 2010 (UTC)[reply]
If you want a derivation of E=mc^2, this comes from special relativity. It comes from the fact that, in natural units, for a particle observed from a frame with coordinate time t, E/m = dt/d(tau), where tau is the proper time. In the frame where the particle is at rest, t and tau coincide, so dt/d(tau) = 1, leading to E = m, or E = mc^2 in conventional units. See Mass–energy_equivalence#Background 82.132.136.207 (talk) 01:00, 7 March 2010 (UTC)[reply]
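For anyone who wants the four-vector route hinted at above, here is a compact sketch in standard special-relativity notation (a textbook outline, not something taken from this thread):

    E \equiv \gamma m c^{2}, \qquad \vec{p} \equiv \gamma m \vec{v}, \qquad
    E^{2} - |\vec{p}|^{2} c^{2} = m^{2} c^{4}

    \text{in the rest frame } (\vec{p} = 0): \quad E = m c^{2}

For a composite body, the m in these formulas is the invariant mass built from the total energy and momentum of everything inside, which is why internal potential energy contributes to the mass on exactly the same footing as internal kinetic energy.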

Protein architecture, motif, domain, fold, topology.

edit

I'm taking a 4th-level biochemistry class titled Protein structure and function and, though I already learned these terms in intro biochem courses, the way the material is covered is much more detailed. I don't really understand the following: architecture, motif, domain, fold, topology (used in a different sense from the more common definition of how a transmembrane protein is embedded). Many of these terms are used interchangeably and I don't have a clear idea of the true definition of each. For example, I saw the Alpha/Beta/Alpha domain also being referred to as the Alpha/Beta/Alpha architecture. Also, it seems that the Greek key motif is also called a Greek key domain, which can also be said to be an architecture. Also, fold and motif seem to be complete synonyms. I'm really confused. Please clarify with clear examples. (PLEASE DON'T JUST REDIRECT ME TO LINKS HAVING GENERAL DEFINITIONS OF EACH, BECAUSE I ALREADY CHECKED MANY OF THEM OUT) —Preceding unsigned comment added by 142.58.129.94 (talk) 21:43, 5 March 2010 (UTC)[reply]

Some of these terms are broader or narrower than others. Architecture is the broadest, covering the construction from small to big; the architecture can be made up of those other components. A motif is a pattern that you can see over and over again in different places. A domain is a piece of the protein that has a stable shape in itself, so if you chop it off, that part will retain its shape. Domain is a narrower term than architecture, and a domain may include folds and motifs as components. A fold is pretty clearly a bend or change in direction; I would count it as a narrower term than motif, but some think otherwise. The term topology seems to be abused in biochemistry, referring to where parts of the protein chain are, whether embedded in the membrane or poking out. Topology in maths refers more to how things are connected - loops, or multiple elements. Is that 4th year at a uni? I have never studied biochemistry in a class. Graeme Bartlett (talk) 23:31, 5 March 2010 (UTC)[reply]
First off, I'm a cell biologist, not a structural biochemist. However, I've seen and used these terms thus:
  1. Architecture - the general design of a protein, consisting of perhaps one or more domains or motifs. The architecture of the Src protein consists of an SH2 domain and an SH3 domain
  2. motif - a short, often linear, amino acid sequence that can be a target recognition sequence, e.g. the LLDLD clathrin interaction motif or PxxP, the SH3-binding motif, where x = unimportant. These can have different possible amino acids at certain positions, e.g. [D/E][G/A](0-1)F[G/A][D/E]F binds the gamma-ear domains of AP-1.
  3. domain - usually a larger part of a protein with a specific function and usually a certain fold/folds (i.e. its own 3D structure). e.g. SH3 domains have a consensus sequence and bind certain protein sequences containing a PxxP motif. ENTH domains are large (145 aa) domains at the N-terminus that bind membranes containing specific PIPs. BUT some people use "domain" for motif
  4. fold - usually a change in direction in the secondary structure of a protein, but it can mean a fold in the 3D shape of a protein, context dependent
  5. topology - the specific 3D shape of a protein either overall or locally, e.g. folds and pockets in a globular region.
Perhaps a structural biologist/protein biochemist can come around and weigh in. -- Flyguy649 talk 00:53, 6 March 2010 (UTC)[reply]
I dabble in structural biology, though it's not the central focus of my work. I'd like to emphasize that 'fold' in the context of structural biology usually doesn't mean the same thing as it would in a nontechnical context — it's not just a bend or a change in direction. A 'fold' in structural biology generally refers to a particular tertiary structure (at least, an arrangement of secondary structures) which may or may not have a common underlying primary amino acid sequence. For example, the ubiquitin-like proteins (including ubiquitin, NEDD8, and SUMO, among others) all share a 'beta-grasp' fold, in which a beta sheet snuggles up adjacent to (and partially around) an alpha helix. Despite having a very similar tertiary structure, the ubiquitin-like proteins have a relatively limited similarity in sequence. A 'fold' may describe the structure of an entire protein, or a common structure shared by parts of several proteins. (The beta-grasp fold, for instance, shows up in a lot of places.) TenOfAllTrades(talk) 03:34, 6 March 2010 (UTC)[reply]

Positive and negative features of CATH and SCOP

edit

They both offer similar services, but how are they different? Can you say which is better? I checked out the tutorial manuals for CATH and SCOP but they aren't very user-friendly. —Preceding unsigned comment added by 142.58.129.94 (talk) 22:22, 5 March 2010 (UTC)[reply]

You want the article Systematic Comparison of SCOP and CATH: A new Gold Standard for Protein Structure Analysis, full text here. Cuddlyable3 (talk) 02:15, 6 March 2010 (UTC)[reply]

Atoms

edit

Without using a microscope, would it actually be possible to see an atom using just normal magnifying glasses, perhaps using lots of them lined up to increase the zoom? —Preceding unsigned comment added by Firemansam490 (talkcontribs) 22:52, 5 March 2010 (UTC)[reply]

No. The most powerful microscopes are scanning electron microscopes. By comparison, optical microscopes have much poorer resolution. Dolphin51 (talk) 23:17, 5 March 2010 (UTC)[reply]

Uh, no. We can't even really "see" atoms with a microscope - we need a special kind of microscope (scanning electron microscope) that interprets the image for us, rather than just using lenses to make things look bigger than they are. By the time you lined up enough lenses to hope to see atoms through them, what you saw would just be a big blur anyway - you'd be magnifying the imperfections of the lenses over and over again and the image would be distorted beyond recognition far before you got to the atomic level. - AJ —Preceding unsigned comment added by 71.108.171.138 (talk) 23:22, 5 March 2010 (UTC)[reply]

If you line up good magnifying glasses in the right way you will make a microscope, but don't expect much magnification or a clear view. Your best chance will be to see gazillions of atoms together making up a visible object. The wavelength of light is about 5000 times bigger than an atom, so it is like trying to feel a grain of sand with a truck. Graeme Bartlett (talk) 23:37, 5 March 2010 (UTC)[reply]
Although it's not optical microscopy, atomic force microscopy lets people image individual molecules and even atoms. Brammers (talk) 00:34, 6 March 2010 (UTC)[reply]
Another part of this problem is that it's not at all clear what we mean by a "picture" of an atom. These scanning electron and atomic force microscopes are measuring the field surrounding the atom and turning that into a picture - but that doesn't mean that this is what an atom "looks like" any more than a congenitally blind person can imagine what someone's face "looks like" by feeling it with their hands. With quantum uncertainty and wave-particle duality effects, it's really meaningless to ask what an atom looks like anyway - even in principle. SteveBaker (talk) 03:22, 6 March 2010 (UTC)[reply]

The problem with visible light is that its wavelength is on the order of a few hundred nanometers - more than a thousand times as large as an atom. With ordinary techniques you cannot see details smaller than approximately half the wavelength. Icek (talk) 18:16, 6 March 2010 (UTC)[reply]
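To put the two length scales side by side, a short Python comparison (typical round numbers, assumed for illustration):

    # Compare the optical diffraction limit with the size of an atom.
    wavelength = 500e-9     # m, green light
    atom = 1e-10            # m, about one angstrom, a typical atomic diameter
    print(f"best optical resolution ~ {wavelength / 2 * 1e9:.0f} nm")
    print(f"that is ~{wavelength / 2 / atom:.0f} atomic diameters")   # ~2500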

Anthropophobia?

edit

I'm wondering if the following would fall under the diagnosis of anthropophobia or some other term. . . . It's a fear of people, but it's not "phobic" in the sense of being intense or causing panic symptoms, except occasionally in crowds (at the mall or in line at the grocery store, for example). It's more of an acute discomfort in the presence of human beings in general. It's not sociopathy, because conscience and sympathy aren't lacking. It's not an inability to love or care for others as individuals - quite the contrary. It's just that the presence of even someone very well loved causes a kind of elemental discomfort, and as much as the dear one may be missed in his or her absence, there is a relief and a comfort in solitude that has NEVER been experienced in the presence of any human being under any circumstances. This is so to the point that anticipating the loved one's arrival, even after a long absence or in circumstances that are otherwise incontestably a good thing, such as the loved one coming home from the hospital, prompts mixed feelings. A person in this condition would thrive when left in solitude, but struggle to explain why simply living in close proximity to others induced a sort of emotional paralysis that made it hard to be helpful and productive, since it's hard to be taken seriously when the message they're conveying is "It's not you, really, it's just that you're human," especially when everyone's trying to convince them that what they really need is to get out and socialize more.

Is there a name for this?

It's called Social anxiety disorder.

Wouldn't social anxiety disorder be more about inability to deal with what are conventionally thought of as "social" situations (school, the workplace, gatherings of friends), rather than desiring to hide out from your own mother or spouse?

From our Schizoid personality disorder article: Schizoid personality disorder (SPD) is a personality disorder characterized by a lack of interest in social relationships, a tendency towards a solitary lifestyle, secretiveness, and emotional coldness. Is that close? Googling "schizoid" even yields message boards populated by people who claim they have this disorder, and discuss their behavior and feelings. (By the way, please type ~~~~ after each post, to sign it, so we can keep track of who is saying what.) Comet Tuttle (talk) 23:53, 5 March 2010 (UTC)[reply]

In the movie Men in Black the morgue doctor Laurel Weaver (Linda Fiorentino) has this line "I hate the living." Cuddlyable3 (talk) 01:48, 6 March 2010 (UTC)[reply]

Almost all of these 'phobia' words are made up and used indiscriminately (check out reference 2 in List of phobias for example!). Basically, almost all phobias are anxiety disorders - but with different triggers for that anxiety, triggers that are probably as varied as the people who suffer from the disorder. SteveBaker (talk) 03:15, 6 March 2010 (UTC)[reply]

Changed the earth's axis??

edit

This week's TIME magazine reports that earth's day has been shortened by 1.26 microseconds as one result of Chile's earthquake, and the reason is because the earth's axis got shifted about 3 inches.

Could someone expand on this, provide a little more detail on how one caused the other? The only explanation I can imagine -- that the diameter of the earth is now slightly smaller due to subduction -- doesn't make ANY sense at all! DaHorsesMouth (talk) 23:16, 5 March 2010 (UTC)[reply]

You are basically correct: the Earth's moment of inertia was reduced by about 15 parts per trillion as dense rock subducted deeper into the earth and displaced less dense rock (and/or changed the shape of the sea floor). Along 700 km of the fault, the crust jumped by up to 10 m, which is enough to have a tiny but perceptible effect on the Earth as a whole. Dragons flight (talk) 01:38, 6 March 2010 (UTC)[reply]
And, when you move the mass inward on a spinning body, it spins faster, resulting in a shorter day, due to conservation of (angular) momentum. StuRat (talk) 02:51, 6 March 2010 (UTC)[reply]
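The two statements above fit together numerically: with angular momentum L = I*omega conserved, a fractional drop in the moment of inertia shows up as the same fractional drop in the length of the day. A quick Python check using only the figures quoted in this thread:

    # Conservation of angular momentum: I * omega = constant, so dT/T = dI/I.
    day = 86400.0          # s
    dT = -1.26e-6          # s, the reported shortening of the day
    frac = dT / day
    print(f"fractional change in day length = {frac:.2e}")            # ~ -1.5e-11
    print(f"about {abs(frac) * 1e12:.0f} parts per trillion, matching the ~15 ppt change in moment of inertia")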
But how would that shift the earth's axis? 95.112.175.41 (talk) 09:02, 6 March 2010 (UTC)[reply]
The stuff about the Earth's "axis" is a bit of a red herring, and not surprisingly most of the newspapers have got the story confused. The BBC's version is particularly bad, showing a completely irrelevant diagram of the precession of the axis. If Richard Gross of NASA JPL is to be believed, this is what happened: the Earth's mass distribution shifted, as Dragons flight correctly said. This shift had two effects. 1: the Earth's "figure axis" (the one that passes through its centre of mass) shifted by a few centimetres; 2: the rotation sped up due to the reduction in the moment of inertia. Effect 1 is not related to the length of the day, although I guess it might cause a few places on Earth to shift time zones by a few nanoseconds. Effect 2 is what (is predicted to have) shortened the day by a microsecond. --Heron (talk) 10:28, 6 March 2010 (UTC)[reply]
Sorry, I can't find anything about a shifted axis in the Richard Gross article. Wouldn't that even violate momentum conservation? 95.112.175.41 (talk) 11:51, 6 March 2010 (UTC)[reply]
I can't remember exactly where I got that claim from, but there are plenty of other examples on the web. For example, this similar article by National Geographic attributes the figure axis shift to Gross. --Heron (talk) 15:36, 6 March 2010 (UTC)[reply]
Now, well, that's better. It's not the rotation axis that has shifted, but the figure axis. 95.112.175.41 (talk) 19:33, 6 March 2010 (UTC)[reply]

voltage

edit

what is the maximum voltage that can be artificially generated within the limitations of current technology? —Preceding unsigned comment added by 129.67.118.243 (talk) 23:59, 5 March 2010 (UTC)[reply]

I asked my science teacher this once; they said that there's no limit to voltage, it's current that matters. You could have a bazillion volts with no current and it wouldn't kill you, or you could have 1 volt with a bazillion amps and it'd fry you like a deep-fried Mars bar. —Preceding unsigned comment added by Zonic4 (talkcontribs) 00:03, 6 March 2010 (UTC)[reply]
Some modern Van de Graaff generators can create a potential difference of up to 5 megavolts (that's 5 000 000 volts). Our high voltage article says that lightning can have a potential difference of up to 1 gigavolt (1 000 000 000 volts), though I don't see a reference, and that isn't artificial. Buddy431 (talk) 01:03, 6 March 2010 (UTC)[reply]
According to our Van de Graaff generator article, the record is 25.5 million volts at Oak Ridge, although that particular statistic is uncited. The limiting factor is the breakdown voltage of the insulation between the collector dome and the rest of the structure - the Oak Ridge machine (and others like it) use sulphur hexafluoride. Tevildo (talk) 01:14, 6 March 2010 (UTC)[reply]
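The breakdown-voltage limit Tevildo mentions can be sketched roughly: for an isolated spherical terminal the surface field is V/r, so the maximum voltage scales with the dome radius and the dielectric strength of the surrounding gas. The field strengths below are textbook-order values assumed for illustration.

    # Rough maximum voltage on a spherical terminal: V_max ~ E_breakdown * r.
    E_air = 3e6       # V/m, approximate dielectric strength of dry air at 1 atm
    E_sf6 = 8e6       # V/m, rough figure for SF6 at 1 atm (assumed)
    for r in (0.5, 1.0, 2.0):     # m, dome radius
        print(f"r = {r} m: air ~ {E_air * r / 1e6:.1f} MV, SF6 ~ {E_sf6 * r / 1e6:.1f} MV")

Pressurising the SF6 raises its breakdown field further, which is how machines like the Oak Ridge tandem reach tens of megavolts inside a tank of manageable size.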
(edit conflict)Voltage merely means that you have two points in space with electrons at different levels of potential energy. When the potential energy difference becomes great enough, the electrons are "forced" to move from the higher energy state to the lower one (i.e. generate a current). Once the current starts to flow, the voltage will drop slightly until the current reaches a steady state and the system reaches equilibrium according to Ohm's Law. However, back to the main question: there is no theoretical upper limit to voltage. This article at Google News: [1] shows that 5,000,000 volts was attained in 1923. We can only assume the number to be orders of magnitude higher today. I can't find anything else using Google, but if you play around with search terms, you may be able to find something. --Jayron32 01:09, 6 March 2010 (UTC)[reply]
I would say tens of megavolts for a short period: but that's just my gut feeling. No evidence, I'm afraid.--79.68.242.68 (talk) 01:12, 6 March 2010 (UTC)[reply]
Plasma acceleration has shown a transient electron acceleration equivalent to 40 billion volts, but the process is not really the same as the way we usually think about voltage, and the field only exists for about a nanosecond. Dragons flight (talk) 01:25, 6 March 2010 (UTC)[reply]
An Electron microscope built[2] at Osaka University uses a 3 000 000V supply from a Cockcroft–Walton generator. Cuddlyable3 (talk) 02:01, 6 March 2010 (UTC)[reply]
I was hoping to find answers in Orders of magnitude (voltage) - but the article didn't exist, so I had to write it - which means that there is nothing there that hasn't already been said. SteveBaker (talk) 03:08, 6 March 2010 (UTC)[reply]

There is some confusion above between electric fields and voltage, which is the antiderivative of the electric field. There are limits on the strength of an electric field because any material will undergo dielectric breakdown at some point, but it is difficult to set a limit on voltage. If there is one, it would arise from a limited ability to concentrate charge within a finite region -- but the fact that black holes can be charged means that they can at least accumulate charge up to the point where gravitational attraction is outweighed by electrostatic repulsion. Looie496 (talk) 17:57, 6 March 2010 (UTC)[reply]