Wikipedia:Reference desk/Archives/Mathematics/2007 October 15



October 15

An algebraic proof...?

Hi all - I was messing around with some numbers earlier today, and I suddenly realised that:

for any positive integer n, (9^n) - 1 is a multiple of eight.

This also works for n=0, of course.

Probably a fairly simple and straightforward thing that's been spotted a million times in the past, but is there a logical algebraic proof that this works for all n? Grutness...wha? 00:16, 15 October 2007 (UTC)[reply]

Ahhh... don't bother - I've just realised it's of the (n+1)(n-1) form, so any (x^n)-1 will be a multiple of x-1. Thanks anyway... Grutness...wha? 00:20, 15 October 2007 (UTC)[reply]
For future reference: you can prove these sorts of statements through mathematical induction, which requires a few initial steps and then some algebra. Strad 03:01, 15 October 2007 (UTC)[reply]
See also Mersenne prime#Theorems about Mersenne numbers. PrimeHunter 03:38, 15 October 2007 (UTC)[reply]
Yeah, you can also use modular arithmetic. <math>9 \equiv 1 \pmod 8</math>, so <math>9^n \equiv 1^n \equiv 1 \pmod 8</math>. --Spoon! 05:59, 15 October 2007 (UTC)[reply]
Thanks for those ideas. I'd never have thought to use modulo 8! Grutness...wha? 21:40, 15 October 2007 (UTC)[reply]
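If anyone wants to check the two arguments above numerically, here is a minimal Python sketch (the function names are illustrative, not from the thread): it tests divisibility of 9^n - 1 by 8 and the general factorisation x^n - 1 = (x - 1)(x^(n-1) + ... + x + 1) for small values.
<source lang="python">
# Check that 9**n - 1 is a multiple of 8, i.e. 9 ≡ 1 (mod 8) implies 9**n ≡ 1 (mod 8).
def divisible_by_eight(n_max=50):
    return all((9**n - 1) % 8 == 0 for n in range(n_max + 1))

# Check the general factorisation x**n - 1 = (x - 1) * (x**(n-1) + ... + x + 1).
def factorisation_holds(x, n):
    cofactor = sum(x**k for k in range(n))  # x**(n-1) + ... + x + 1
    return x**n - 1 == (x - 1) * cofactor

print(divisible_by_eight())  # True
print(all(factorisation_holds(x, n) for x in range(2, 20) for n in range(1, 10)))  # True
</source>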

variable limit in radicals

Is there any limit to the number of variables that you can have in an equation involving radicals? —Preceding unsigned comment added by 75.30.68.22 (talk) 02:38, 15 October 2007 (UTC)[reply]

No, but you can only solve equations with one variable. You can have many variables under a radical. A math-wiki 03:36, 15 October 2007 (UTC)[reply]

Perhaps the OP should clarify the question and its context - it is not at all clear. -- Meni Rosenfeld (talk) 09:11, 15 October 2007 (UTC)[reply]

Block Matrices

Hi. I'm looking for a general proof that multiplying two matrices by dividing them into blocks and then multiplying the blocks gives the same result as ordinary matrix multiplication. Can anyone help me? Thanks.--Shahab 07:41, 15 October 2007 (UTC)[reply]

I couldn't find any online, but it should appear in any comprehensive elementary linear algebra book. But it's actually not that hard to prove it directly using only the definition of matrix multiplication, and it can be a good exercise. -- Meni Rosenfeld (talk) 09:18, 15 October 2007 (UTC)[reply]
Thanks for the reply. I tried to prove it myself but am getting nowhere. The problem is that two matrices A and B can be divided into blocks arbitrarily and I can't get a general start to the proof. For example if c_ij is any entry in AB then which blocks of A and B give us c_ij? Can you give me a hint?--Shahab 16:38, 16 October 2007 (UTC)[reply]
Note that the partition can't be completely arbitrary, as some of the dimensions have to match. Regardless, I think you should be thinking the other way around - instead of starting with some entry <math>c_{ij}</math> and figuring out to which block it belongs, try starting with a row of blocks in A and a corresponding column of blocks in B, and figure out what you get when you take the "dot product" of those. -- Meni Rosenfeld (talk) 16:48, 16 October 2007 (UTC)[reply]
To clarify, let's write <math>A_{ij}</math> for the block of A at position <math>(i,j)</math>. You want to prove that taking a dot product of blocks or taking a block out of the product of the original matrices gives the same result. That is, you want to prove:
<math>(AB)_{ij} = \sum_k A_{ik}B_{kj}</math>
I'll let you figure it out from there. Morana 00:13, 17 October 2007 (UTC)[reply]
Thanks for your help. However, I am still not getting it. It seems to me that <math>(AB)_{ij} = \sum_k A_{ik}B_{kj}</math> is what is given to us already, as this is the way we defined block matrix multiplication. The problem is to show it equivalent to regular multiplication. Can you give me a little more hint? Thanks again, Meni and Morana.--Shahab 04:49, 18 October 2007 (UTC)[reply]
Actually, the right side of the equation uses block multiplication, the left side uses regular multiplication, and equality is what you must prove.
Now for the long version. Let's write <math>A \star B</math> for the product of <math>A</math> and <math>B</math> by block multiplication. The definition of block multiplication gives you:
<math>(A \star B)_{ij} = \sum_k A_{ik}B_{kj} \qquad (1)</math>
To prove that two matrices are equal you can simply prove that all their corresponding blocks are equal. Thus to prove that <math>A \star B = AB</math>, where <math>AB</math> uses regular multiplication, you prove that <math>(A \star B)_{ij} = (AB)_{ij}</math>. Plugging in (1) above you get <math>\sum_k A_{ik}B_{kj} = (AB)_{ij}</math>, which is definitely not given. Morana 05:11, 18 October 2007 (UTC)[reply]


Thanks for explaining that to me. How do we go from here? Should we take an arbitrary element of <math>(AB)_{ij}</math> and prove it equal to an arbitrary element of <math>\sum_k A_{ik}B_{kj}</math>? --Shahab 06:10, 18 October 2007 (UTC)[reply]
Yes. Of course you want to take corresponding elements, not arbitrary elements. Morana 07:01, 18 October 2007 (UTC)[reply]
Can you tell me whether the following proof is correct: Let <math>[X]_{ij}</math> represent the (i,j)th entry of the matrix X. We need to show that <math>[(AB)_{ij}]_{pq} = \left[\sum_k A_{ik}B_{kj}\right]_{pq}</math>, where <math>[(AB)_{ij}]_{pq}</math> is the (p,q)th entry in the block <math>(AB)_{ij}</math>. Suppose ith block occurs after r rows in A and jth block after s columns in B. Then it will occur in the same position in AB too, as the number of rows (columns) of A (B) matches that of AB. Now <math>[(AB)_{ij}]_{pq} = \sum_t a_{r+p,\,t}\, b_{t,\,s+q}</math>. The (p,q)th entry in the matrix <math>\sum_k A_{ik}B_{kj}</math> will similarly be a sum of products of entries of A and B. Hence the result is established. (What does k run over? It should only go from 1 to the number of column blocks in the block row in which the entry occurs, where A is m by n. Is that correct?) --Shahab 05:17, 19 October 2007 (UTC)[reply]
There are some problems with the indices. Instead of saying:
"Suppose ith block occurs after r rows in A and jth block after s columns in B"
introduce variables to denote the number of blocks in the matrix (rows x columns) and the number of elements in each block. Do it for each of A, B, and AB. Some of these will have to be identical of course. Then redo your proof (which is essentially correct except for some errors in the indices) with the proper bounds for everything. The very last step is insufficient. You don't prove why the sum gives the same result. But you can only do that with proper indices throughout. Morana 13:31, 19 October 2007 (UTC)[reply]
I am still messing up the proof. Here's my version now: Let the number of row blocks in A be m and the number of column blocks in A be n. Then the number of row blocks in B is also n. Let the number of column blocks in B be p. (I don't know how to phrase it better, but a row/column block means a whole row/column of blocks.) Then AB has m row blocks and p column blocks. Also let the number of rows in the ith row block of A be <math>m_i</math>, the number of columns in the jth column block of A be <math>n_j</math>, the number of rows in the ith row block of B be <math>n_i</math>, and the number of columns in the jth column block of B be <math>p_j</math>. Then the number of rows in the ith row block of AB is <math>m_i</math> and the number of columns in the jth column block of AB is <math>p_j</math>. Now for the proof.

Consider the (r,s)th entry of the block <math>\sum_k A_{ik}B_{kj}</math>. It is given by <math>\sum_{k=1}^{n}[A_{ik}B_{kj}]_{rs}</math>. Now let us pick up the (r,s)th entry of <math>A_{ik}B_{kj}</math>. For each k (<math>1 \le k \le n</math>) the (r,s)th entry is given by <math>\sum_{t=1}^{n_k}[A_{ik}]_{rt}[B_{kj}]_{ts}</math> or by <math>\sum_{t=1}^{n_k} a_{m_1+\cdots+m_{i-1}+r,\; n_1+\cdots+n_{k-1}+t}\; b_{n_1+\cdots+n_{k-1}+t,\; p_1+\cdots+p_{j-1}+s}</math>. Thus our problem reduces to showing that <math>\sum_{k=1}^{n}\sum_{t=1}^{n_k} a_{m_1+\cdots+m_{i-1}+r,\; n_1+\cdots+n_{k-1}+t}\; b_{n_1+\cdots+n_{k-1}+t,\; p_1+\cdots+p_{j-1}+s}</math> is the same as <math>[(AB)_{ij}]_{rs}</math>. I am stuck here. Can you tell me how the proof is finished? Cheers.--Shahab 15:58, 19 October 2007 (UTC) Note: I have checked my steps using a 4 by 4 matrix and it seems okay. Please tell me how to get the last step. Any help will be immensely appreciated. Thanks --Shahab 04:08, 20 October 2007 (UTC)[reply]


It's better now, but why don't you carry out the multiplication (AB) on the right side? It is a pretty obvious step. Then you just have to check that the indices match. Morana 11:25, 20 October 2007 (UTC)[reply]
I am afraid that I am still dense. The right side, as far as I can think for now, is only <math>[(AB)_{ij}]_{rs} = \sum_{u} a_{m_1+\cdots+m_{i-1}+r,\; u}\; b_{u,\; p_1+\cdots+p_{j-1}+s}</math> where <math>1 \le u \le n_1+\cdots+n_n</math>. How should I bring it equal to <math>\sum_{k=1}^{n}\sum_{t=1}^{n_k} a_{m_1+\cdots+m_{i-1}+r,\; n_1+\cdots+n_{k-1}+t}\; b_{n_1+\cdots+n_{k-1}+t,\; p_1+\cdots+p_{j-1}+s}</math>? Can you explicitly write the step now?--Shahab 11:53, 20 October 2007 (UTC)[reply]

Do you notice the similarity in the indices? All you have to do is prove that the two sums run over the same indices. That is, prove that:

<math>\{\, n_1+\cdots+n_{k-1}+t \;:\; 1\le k\le n,\ 1\le t\le n_k \,\} \;=\; \{\, u \;:\; 1\le u\le n_1+\cdots+n_n \,\}</math>

Which is trivial. Morana 13:21, 20 October 2007 (UTC)[reply]

Can you kindly show that the two sets are equal? I have tried to do so but to no avail. I am pretty much getting confused now. Cheers--Shahab 14:47, 20 October 2007 (UTC)[reply]
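As a numerical sanity check on the index argument discussed above, here is a short NumPy sketch (not from the original thread; the function name and partition sizes are made up for illustration) that multiplies two matrices block by block and compares the result with ordinary multiplication.
<source lang="python">
import numpy as np

def block_multiply(A, B, row_parts, mid_parts, col_parts):
    """Multiply A and B block by block.

    row_parts: sizes of the row blocks of A
    mid_parts: sizes of the column blocks of A (= row blocks of B); these must match
    col_parts: sizes of the column blocks of B
    """
    def cut(M, rows, cols):
        # Cut M into a grid of blocks with the given row and column sizes.
        r = np.cumsum([0] + list(rows))
        c = np.cumsum([0] + list(cols))
        return [[M[r[i]:r[i+1], c[j]:c[j+1]] for j in range(len(cols))]
                for i in range(len(rows))]

    Ab = cut(A, row_parts, mid_parts)
    Bb = cut(B, mid_parts, col_parts)

    # (A*B)_{ij} = sum_k A_{ik} B_{kj}, then reassemble the grid of blocks.
    C_blocks = [[sum(Ab[i][k] @ Bb[k][j] for k in range(len(mid_parts)))
                 for j in range(len(col_parts))]
                for i in range(len(row_parts))]
    return np.block(C_blocks)

rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(7, 6))
B = rng.integers(-5, 5, size=(6, 5))
# Any partitions work as long as the middle sizes agree: 7 = 3+4, 6 = 2+1+3, 5 = 4+1.
C = block_multiply(A, B, [3, 4], [2, 1, 3], [4, 1])
print(np.array_equal(C, A @ B))  # True
</source>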

is it the sin (or not)

  Resolved

(almost certainly)

After the above question about getting a sine, I was looking at sine estimates such as those involving cubics that work roughly between +pi and -pi. I decided to try it myself to see what I could create - this equation came out:

f(x) = x(x-π)(x+π)(x-2π)(x+2π)(x-3π)(x+3π).. etc

or

g(x) = x(x/π - 1)(x/π + 1)(x/(2π) - 1)(x/(2π) + 1)(x/(3π) - 1)(x/(3π) + 1) etc

Looking at g(x) it's obvious to me that it will have the same zeros as sin(x), i.e. at 0, pi, 2pi, etc.; expanding the terms gives the x^1 coefficient as 1 as well.

For the x^3 term I get a_3 = (1/π^2)(1 + 1/4 + 1/9 + 1/16 + ... etc)

If g(x)=sin(x) then a_3 = 1/6 (it is looking like it will do this but I can't prove it). I found a solution of sorts at Basel problem.

The issue I have is that I can't convince myself that g(x)=sin(x) - for all I know it might be a triangular wave... My attempts at solving this fail - I get various formulae that might give values for pi - but I was looking for something more solid. Can anyone suggest a (simple?) way of proving (if only to me - and preferably with some solid algebra) that g(x) does indeed equal sin(x)? 83.100.255.190 10:23, 15 October 2007 (UTC)[reply]

To see that you are on the right lines, take a look at List_of_trigonometric_identities#Infinite_product_formula, noting that
<math>\left(1 - \frac{x}{n\pi}\right)\left(1 + \frac{x}{n\pi}\right) = 1 - \frac{x^2}{n^2\pi^2}</math>
Also, take a look at the proof of the Wallis product expression for π. I imagine a formal proof would use the Weierstrass factorization theorem. Gandalf61 11:00, 15 October 2007 (UTC)[reply]
ok up to the point of "Weierstrass" I was ok - that could be a little beyond me for a while - however am I along the right lines in my guess that a triangular function (composed of sines) will have zeros at complex values of x as well as at the real multiples of pi?83.100.255.190 11:12, 15 October 2007 (UTC)[reply]
Right, got it. An equation such as 3sin(x) + sin(2x) has additional zeros in the complex plane (as complex conjugates - so no imaginary terms are introduced), so the product series includes the additional terms and so is different. And sin(x) has no zeros outside the reals. Thanks - may your magic wand do whatever wizards' magic wands do. (smile) thanks. 87.102.47.243 13:45, 15 October 2007 (UTC)[reply]
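For anyone who wants to see the convergence numerically rather than through the Weierstrass machinery, here is a small Python sketch (purely illustrative, added here). It uses the sign convention of the article's infinite product, writing each pair of the poster's factors as 1 - x^2/(kπ)^2, and also checks the x^3-coefficient claim against the Basel sum.
<source lang="python">
import math

def g(x, terms=2000):
    # Truncated product: x * prod_{k=1..terms} (1 - x**2 / (k*pi)**2)
    prod = x
    for k in range(1, terms + 1):
        prod *= 1.0 - (x / (k * math.pi)) ** 2
    return prod

for x in (0.5, 1.0, 2.0, 3.0):
    print(x, g(x), math.sin(x))  # the two values agree to several decimal places

# Basel-problem check for the x^3 coefficient: (1/pi^2) * sum 1/k^2 -> 1/6
s = sum(1.0 / k**2 for k in range(1, 200001))
print(s / math.pi**2, 1 / 6)
</source>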

LaTeX help

I noticed that Wikipedia's implementation of LaTeX has \begin{cases}, such as <math>|x| = \begin{cases} x & x \ge 0 \\ -x & otherwise \end{cases}</math>. What would be the normal LaTeX for this? x42bn6 Talk Mess 15:40, 15 October 2007 (UTC)[reply]

This is not unique to Wikipedia (it has possibly appeared in some new version of LaTeX). However, an alternative code for this (not identical though) is <math>|x| = \left\{\begin{matrix} x, & x \ge 0 \\ -x, & \text{otherwise} \end{matrix}\right.</math> (note also that "otherwise" should be text). -- Meni Rosenfeld (talk) 15:50, 15 October 2007 (UTC)[reply]
You can do \usepackage{amsmath} in order to use \begin{cases} in normal LaTeX, although not everyone would consider that normal LaTeX. 84.239.133.38 17:44, 15 October 2007 (UTC)[reply]
Oh, right, I have forgotten that I have many packages loaded by default. -- Meni Rosenfeld (talk) 17:59, 15 October 2007 (UTC)[reply]
Alright, thanks. x42bn6 Talk Mess 12:01, 16 October 2007 (UTC)[reply]
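For reference, here is a minimal standalone LaTeX document (an illustrative sketch, not taken from the thread) showing the amsmath cases environment that the answers refer to:
<source lang="latex">
\documentclass{article}
\usepackage{amsmath} % provides the cases environment
\begin{document}
\[
  |x| =
  \begin{cases}
    x,  & x \ge 0, \\
    -x, & \text{otherwise}.
  \end{cases}
\]
\end{document}
</source>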

Linear functions

How do I find f(x) = ax + b, a linear function, in which f(2) = 7 and f(-1) = -5? Thanks very much everybody --Hadseys 17:06, 15 October 2007 (UTC)[reply]

Well, you know that <math>f(2) = 2a + b</math> on one hand, and <math>f(2) = 7</math> on the other hand, so <math>2a + b = 7</math>. Similarly, you have <math>-a + b = -5</math>. Do you know how to continue from there? -- Meni Rosenfeld (talk) 17:22, 15 October 2007 (UTC)[reply]
Yes, what you have is a system of 2 linear equations in 2 variables, and substitution is one of the ways to solve these. here is an example of how it is done. -- Meni Rosenfeld (talk) 17:58, 15 October 2007 (UTC)[reply]
Using y=ax+b you've got
7=2a+b and
-5=-1a+b
The best way to go from here is to get rid of the b. 83.100.252.179 18:16, 15 October 2007 (UTC)[reply]
I wonder how I was not clear about the equations being <math>2a + b = 7</math> and <math>-a + b = -5</math>. -- Meni Rosenfeld (talk) 18:21, 15 October 2007 (UTC)[reply]
(yes - but not there yet) this time subtract - you'll end up with an equation that has no b in it.83.100.252.179 18:47, 15 October 2007 (UTC)[reply]
No, 3a + 2b = 2 is not right. -- Meni Rosenfeld (talk) 18:48, 15 October 2007 (UTC)[reply]
No, 7-(-5) is not 2, 7-(-5) = 12. Aenar 19:21, 15 October 2007 (UTC)[reply]
Now you have the value of a. If you insert that value into one of your equations (for example 7=2a+b) you can isolate b and get b's value. Then you are done. —Preceding unsigned comment added by Aenar (talkcontribs) 20:47, 15 October 2007 (UTC)[reply]
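For completeness, the elimination step the respondents are hinting at can be checked with a few lines of Python (a sketch added here for illustration, not part of the original exchange):
<source lang="python">
# The two conditions give 2a + b = 7 and -a + b = -5.
# Subtracting the second from the first eliminates b: 3a = 12.
a = (7 - (-5)) / 3        # a = 4
b = 7 - 2 * a             # substitute back: b = -1

def f(x):
    return a * x + b

print(a, b)               # 4.0 -1.0
print(f(2), f(-1))        # 7.0 -5.0, matching f(2) = 7 and f(-1) = -5
</source>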

multi-layer network

I understand that with a single-layer Perceptron, there is an initial weight and a final weight and that the final weight becomes the initial weight for the next iteration. If I want to add a layer to the net, then do I use the single-layer net's input neurons as the hidden layer and add a new layer of input neurons with weights to connect each of the input neurons to each of the hidden layer neurons? And if so, are these weights set up so that one is negative and the other is positive (or is this just for the XOR problem), and are the values of the weights otherwise the same value? I'm not ready to understand backpropagation yet, until I can understand this step. Clem 20:01, 15 October 2007 (UTC)[reply]

If you take a simple multi-layer network, you start with the inputs. You can call these input neurons if you want, but they are basically just a bunch of numbers. If your input is a picture, these would be the pixel values. If you're trying to learn a logical connection (like XOR), they are the two true/false inputs. Then comes the hidden layer (let's stick with one hidden layer for now), which contains the actual perceptrons. Each perceptron has each of the input nodes as input, and a single output. After that comes the output layer, which is just the outputs from all the perceptrons in the hidden layer.
If you want to add a second hidden layer, each of the perceptrons in the second layer has all the outputs of the first-layer perceptrons as inputs. The output layer of the network then becomes the outputs of the perceptrons in the second layer.
There is no specific way you should set the initial weights. You can just set them randomly if you want. The idea is that the network will gradually learn the correct weights (using backpropagation).
A little hint for when you do get into backpropagation. The principle (which tends to get skipped over in explanations) is that you compare the network's output with what the output should have been. You then punish each node according to how wrong it was. Big error, big change in weights; small error, small change in weights. And that error flows backward through the network. A node takes its punishment, looks at the nodes that feed into it, and distributes its punishment to them according to weight. If node C in layer two was punished some amount, and nodes A and B are connected to it, A with strong influence, B with small influence, then node C punishes A a lot and B a little. Note that if A and B are also connected to node D in layer two, then D may add its own punishment to A and B, based on its weights. This way, the nodes that are most responsible for the error are changed the most. If you think about it for a little bit, you should be able to figure out the backpropagation formula yourself. Or at least, it should make more sense when you read it.
risk 23:00, 15 October 2007 (UTC)[reply]
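To make the layer structure described above concrete, here is a minimal NumPy sketch (illustrative only, not the poster's setup): every input feeds every hidden perceptron, every hidden output feeds every output unit, and the initial weights are simply random rather than following any fixed sign pattern.
<source lang="python">
import numpy as np

rng = np.random.default_rng(0)

def step(x):
    # Classic perceptron threshold activation.
    return (x > 0).astype(float)

# Example sizes: 2 inputs, one hidden layer of 2 perceptrons, 1 output unit.
n_in, n_hidden, n_out = 2, 2, 1

# Random initial weights; the +1/-1 hidden weights often shown for XOR are just
# one hand-crafted solution, not a general rule.
W1 = rng.normal(scale=0.5, size=(n_hidden, n_in))   # input layer -> hidden layer
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_out, n_hidden))  # hidden layer -> output layer
b2 = np.zeros(n_out)

def forward(x):
    h = step(W1 @ x + b1)  # each hidden perceptron sees every input
    y = step(W2 @ h + b2)  # each output unit sees every hidden output
    return h, y

hidden, output = forward(np.array([1.0, 0.0]))
print(hidden, output)
</source>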
I was going to use a multi-layer perceptron but someone showed me how much less complicated and more accurate an adaptron is, so I've decided to use that. Clem 00:18, 16 October 2007 (UTC)[reply]
I've never heard of those. And google doesn't seem to have either. Care to elaborate? risk 00:40, 16 October 2007 (UTC)[reply]
The Adaptron can learn to distinguish one “black box” from another by accepting values (states) of independent variables (sensors, conditions, etc.) and dependent variables (actuators, actions, etc.) as input like the Perceptron, but unlike the Perceptron can formulate rules within one epoch of the data and distinguish or classify any dependent variable value for subsequent identification. Not only can the Adaptron learn to distinguish one "black box" from another but by formulating rules according to the relationships between the independent and dependent variable values of the "black box", can replace the "black box" in its function.
Just as Izhikevich is critical of artificial neurons "for not being biologically realistic", some mathematicians decry the Adaptron as not being an artificial neuron (as they decry the Check and Rapid sorts for not being "true" sort routines) but rather a mere recording device such as a tape player or CD. What distinguishes the Adaptron from being merely a recording device is that it formulates and records rules and can use the rules to distinguish one "black box" from another; a function that might take the Perceptron many epochs to do.
However, if you reject the Check and Rapid sorts as bonafide sort routines then you will most likely reject the Adaptron as a bonafide artificial neuron as well.
Clem 20:07, 16 October 2007 (UTC)[reply]

Four fours

I have to have order of operations equations that equal: 10 11 13 14 18 19 (One equation for each solution)

It has to use four 4s and the four basic operations (+, -, *, /). No concatenations (44, 444 ...), no decimals (.4, .44 ...), no factorials (!), no roots or exponents. Thanks

We don't do your homework for you. Please look at the top of this page for more. --Trovatore 20:25, 15 October 2007 (UTC)[reply]
I would write a program to produce a table of all possible expressions and simply look up the values. I don't see how you could do it in a more elegant way than trial and error or brute force. Morana 00:16, 16 October 2007 (UTC)[reply]
There are simple, elegant solutions to each of these, but if you want to gain marks for sheer stubborn obstinacy, there's always (4/4) + (4/4) + (4/4) + ... Ahh... skip that, I missed the "only four fours" bit. BTW, "one equation for each solution" is redundant. Using those basic operations, I don't think you could find one equation which would account for any two of those solutions. If square root was also an acceptable operation, you could deal with a couple of them in one shot, but that's another story. Grutness...wha? 00:26, 16 October 2007 (UTC)[reply]
First create the expression
oper3[oper1[4,4],oper2[4,4]]
Then just cycle through all possible combinations of oper1, oper2 and oper3, where each of them is one of the four basic operations. Print out the values of oper1, oper2 and oper3 if the result is the desired result(s). There should be no more than 4*4*4=64 possible combinations. 202.168.50.40 00:52, 16 October 2007 (UTC)[reply]
Wrong. For example, there is no choice of oper1, oper2, oper3 such that oper3[oper1[4,4],oper2[4,4]] = 48. Although 4*(4+(4+4)) = 48. You have to consider each possible parenthesization (is that even a word) of this expression. Morana 01:44, 16 October 2007 (UTC)[reply]
Wouldn't you miss expressions of the form oper1[oper2[oper3[4,4],4],4]? -GTBacchus(talk) 01:38, 16 October 2007 (UTC)[reply]
The original question didn't say whether parentheses are allowed. If they're not, then you only get to choose the operators, not their order. If you get to add parentheses to alter the order, there are 5 possible results from each of the 64 operator combinations:
oper3(4,oper2(4,oper1(4,4)))
oper3(4,oper2(oper1(4,4),4))
oper3(oper1(4,4),oper2(4,4))
oper3(oper2(4,oper1(4,4)),4)
oper3(oper2(oper1(4,4),4),4)
The number of ways of parenthesizing an expression involving N+1 terms and N operators is the N'th Catalan number. Here N is 3 so it's the 3rd Catalan number, which equals 5. --tcsetattr (talk / contribs) 01:43, 16 October 2007 (UTC)[reply]
Sure, but when the operations are both commutative, several of those expressions are equivalent. I think they fall into two equivalence classes, one containing the middle of the five expressions above, and another for the other four. -GTBacchus(talk) 01:51, 16 October 2007 (UTC)[reply]
Silly me, not all of the operations in question are commutative. Oops. -GTBacchus(talk) 02:05, 16 October 2007 (UTC)[reply]
I think associativity is responsible for more duplicates than commutativity. I've generated the complete list of 320 combinations. It's here. There are 59 different results, including infinity (for any nonzero divided by zero) and Not-A-Number (for zero divided by zero). Several results appear in the list only once, and each of the 5 possible parenthesizations does yield at least one unique answer:
(((4+4)/4)-4) is the only way to get -2
((4-(4*4))/4) is the only way to get -3
((4*4)-(4/4)) is the only way to get 15
(4/((4/4)-4)) is the only way to get -4/3
(4-(4/(4+4))) is the only way to get 7/2
so if you want the complete list of possible outputs, you definitely need to consider all 5 parenthesizations. Do we really think the original question was homework? I never had homework this fun. --tcsetattr (talk / contribs) 02:20, 16 October 2007 (UTC)[reply]
Is it just me, or is there no solution for any of the numbers given? Morana 01:29, 16 October 2007 (UTC)[reply]
I can solve for 10, but not any of the others. Maybe it was supposed to be five 4's, and 4 operators? --tcsetattr (talk / contribs) 01:34, 16 October 2007 (UTC)[reply]
You are right. With five 4's they can all be solved. BTW I very much would like to know how you solve for 10 with four 4's since it doesn't seem possible. Morana 01:51, 16 October 2007 (UTC)[reply]
I made a mistake. In the first run of my solving program I didn't filter out answers like (44-4)/4 = 10 which are forbidden by the rules. --tcsetattr (talk / contribs) 01:59, 16 October 2007 (UTC)[reply]
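For the curious, a brute-force search of the kind described above fits in a few lines of Python (a sketch along the same lines as tcsetattr's program, not the original code); it covers every parenthesization by recursively splitting the sequence of 4s, and confirms that none of the requested targets are reachable under the stated rules.
<source lang="python">
from fractions import Fraction

def reachable(values):
    """All values obtainable from the given numbers (in order) with +, -, *, /.
    Splitting at every position and recursing covers every parenthesization;
    since all the numbers here are equal, their order does not matter."""
    if len(values) == 1:
        return {values[0]}
    out = set()
    for i in range(1, len(values)):
        for a in reachable(values[:i]):
            for b in reachable(values[i:]):
                out.update({a + b, a - b, a * b})
                if b != 0:
                    out.add(a / b)   # skip division by zero instead of tracking inf/NaN
    return out

values = reachable([Fraction(4)] * 4)
for target in (10, 11, 13, 14, 18, 19):
    print(target, Fraction(target) in values)   # all False, matching the discussion above
</source>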

Those interested in this question might enjoy the game Krypto. -GTBacchus(talk) 02:07, 16 October 2007 (UTC)[reply]

You know, we have a four fours article. (Spoiler warning!) - Rainwarrior 02:51, 16 October 2007 (UTC)[reply]