For a function $f(x,y,z)$ in three-dimensional Cartesian coordinate variables, the gradient is the vector field:
$$\operatorname{grad}(f)=\nabla f=\left(\frac{\partial}{\partial x},\ \frac{\partial}{\partial y},\ \frac{\partial}{\partial z}\right)f=\frac{\partial f}{\partial x}\mathbf{i}+\frac{\partial f}{\partial y}\mathbf{j}+\frac{\partial f}{\partial z}\mathbf{k}$$
where $\mathbf{i}$, $\mathbf{j}$, $\mathbf{k}$ are the standard unit vectors for the $x$-, $y$-, and $z$-axes. More generally, for a function of $n$ variables $\psi(x_1,\ldots,x_n)$, also called a scalar field, the gradient is the vector field:
$$\nabla\psi=\left(\frac{\partial}{\partial x_1},\ldots,\frac{\partial}{\partial x_n}\right)\psi=\frac{\partial\psi}{\partial x_1}\mathbf{e}_1+\dots+\frac{\partial\psi}{\partial x_n}\mathbf{e}_n$$
where $\mathbf{e}_i\ (i=1,2,\ldots,n)$ are mutually orthogonal unit vectors.
As the name implies, the gradient is proportional to, and points in the direction of, the function's most rapid (positive) change.
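As an illustration (added here, not part of the original article), the following SymPy sketch computes a gradient component-wise; the sample field $f = x^2 y + \sin z$ is an arbitrary choice.

```python
# Minimal sketch of grad(f) as the vector of partial derivatives.
# The sample field f = x**2*y + sin(z) is an arbitrary illustrative choice.
from sympy import symbols, sin, diff, Matrix

x, y, z = symbols('x y z')
f = x**2 * y + sin(z)

grad_f = Matrix([diff(f, v) for v in (x, y, z)])
print(grad_f.T)   # Matrix([[2*x*y, x**2, cos(z)]])
```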
For a vector field $\mathbf{A}=\left(A_1,\ldots,A_n\right)$, also called a tensor field of order 1, the gradient or total derivative is the $n\times n$ Jacobian matrix:
$$\mathbf{J}_{\mathbf{A}}=d\mathbf{A}=(\nabla\mathbf{A})^{\mathsf{T}}=\left(\frac{\partial A_i}{\partial x_j}\right)_{ij}.$$
For a tensor field $\mathbf{T}$ of any order $k$, the gradient $\operatorname{grad}(\mathbf{T})=d\mathbf{T}=(\nabla\mathbf{T})^{\mathsf{T}}$ is a tensor field of order $k+1$.
For a tensor field $\mathbf{T}$ of order $k>0$, the tensor field $\nabla\mathbf{T}$ of order $k+1$ is defined by the recursive relation
$$(\nabla\mathbf{T})\cdot\mathbf{C}=\nabla(\mathbf{T}\cdot\mathbf{C})$$
where $\mathbf{C}$ is an arbitrary constant vector.
In Cartesian coordinates, the divergence of a continuously differentiable vector field $\mathbf{F}=F_x\mathbf{i}+F_y\mathbf{j}+F_z\mathbf{k}$ is the scalar-valued function:
$$\operatorname{div}\mathbf{F}=\nabla\cdot\mathbf{F}=\left(\frac{\partial}{\partial x},\ \frac{\partial}{\partial y},\ \frac{\partial}{\partial z}\right)\cdot\left(F_x,\ F_y,\ F_z\right)=\frac{\partial F_x}{\partial x}+\frac{\partial F_y}{\partial y}+\frac{\partial F_z}{\partial z}.$$
As the name implies, the divergence is a (local) measure of the degree to which vectors in the field diverge.
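As a small illustration (added here, not from the original article), the divergence can be computed directly as the sum of the three partial derivatives; the sample field below is an arbitrary choice.

```python
# Minimal sketch: div F = dF_x/dx + dF_y/dy + dF_z/dz.
# The sample field F = (x*y, y*z, z*x) is an arbitrary illustrative choice.
from sympy import symbols, diff

x, y, z = symbols('x y z')
F = (x*y, y*z, z*x)

div_F = sum(diff(F[i], v) for i, v in enumerate((x, y, z)))
print(div_F)   # x + y + z
```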
The divergence of a tensor field $\mathbf{T}$ of non-zero order $k$ is written as $\operatorname{div}(\mathbf{T})=\nabla\cdot\mathbf{T}$, a contraction to a tensor field of order $k-1$. Specifically, the divergence of a vector is a scalar. The divergence of a higher-order tensor field may be found by decomposing the tensor field into a sum of outer products and using the identity
$$\nabla\cdot\left(\mathbf{A}\otimes\mathbf{T}\right)=\mathbf{T}(\nabla\cdot\mathbf{A})+(\mathbf{A}\cdot\nabla)\mathbf{T}$$
where $\mathbf{A}\cdot\nabla$ is the directional derivative in the direction of $\mathbf{A}$ multiplied by its magnitude. Specifically, for the outer product of two vectors,
$$\nabla\cdot\left(\mathbf{A}\mathbf{B}^{\mathsf{T}}\right)=\mathbf{B}(\nabla\cdot\mathbf{A})+(\mathbf{A}\cdot\nabla)\mathbf{B}.$$
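A quick symbolic check of this outer-product identity is sketched below (an illustration added here, not part of the original text). It assumes the divergence of the outer product is taken over the first index, and the sample fields are arbitrary.

```python
# Sketch checking div(A B^T) = B (div A) + (A . grad) B component-wise.
# Assumption for this sketch: the divergence of the outer product is taken
# over the first index, (div(A B^T))_j = sum_i d_i (A_i B_j); the sample
# fields A, B are arbitrary illustrative choices.
from sympy import symbols, diff, simplify, Matrix, sin, cos

x, y, z = symbols('x y z')
coords = (x, y, z)
A = Matrix([x*y, y*z, z*x])
B = Matrix([sin(y), cos(z), x**2])

div_A = sum(diff(A[i], coords[i]) for i in range(3))

# Left side: (div(A B^T))_j = sum_i d_i (A_i B_j)
lhs = Matrix([sum(diff(A[i]*B[j], coords[i]) for i in range(3)) for j in range(3)])

# Right side: B (div A) + (A . grad) B, with ((A . grad) B)_j = sum_i A_i d_i B_j
rhs = Matrix([B[j]*div_A + sum(A[i]*diff(B[j], coords[i]) for i in range(3))
              for j in range(3)])

print((lhs - rhs).applyfunc(simplify))   # zero vector
```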
For a tensor field $\mathbf{T}$ of order $k>1$, the tensor field $\nabla\cdot\mathbf{T}$ of order $k-1$ is defined by the recursive relation
$$(\nabla\cdot\mathbf{T})\cdot\mathbf{C}=\nabla\cdot(\mathbf{T}\cdot\mathbf{C})$$
where $\mathbf{C}$ is an arbitrary constant vector.
In Cartesian coordinates, for $\mathbf{F}=F_x\mathbf{i}+F_y\mathbf{j}+F_z\mathbf{k}$ the curl is the vector field:
$$\begin{aligned}\operatorname{curl}\mathbf{F}&=\nabla\times\mathbf{F}=\left(\frac{\partial}{\partial x},\ \frac{\partial}{\partial y},\ \frac{\partial}{\partial z}\right)\times\left(F_x,\ F_y,\ F_z\right)=\begin{vmatrix}\mathbf{i}&\mathbf{j}&\mathbf{k}\\\frac{\partial}{\partial x}&\frac{\partial}{\partial y}&\frac{\partial}{\partial z}\\F_x&F_y&F_z\end{vmatrix}\\&=\left(\frac{\partial F_z}{\partial y}-\frac{\partial F_y}{\partial z}\right)\mathbf{i}+\left(\frac{\partial F_x}{\partial z}-\frac{\partial F_z}{\partial x}\right)\mathbf{j}+\left(\frac{\partial F_y}{\partial x}-\frac{\partial F_x}{\partial y}\right)\mathbf{k}\end{aligned}$$
where $\mathbf{i}$, $\mathbf{j}$, and $\mathbf{k}$ are the unit vectors for the $x$-, $y$-, and $z$-axes, respectively.
As the name implies, the curl is a (local) measure of how much nearby vectors tend to circulate around a point.
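For illustration (added here, not part of the original article), the sketch below evaluates the curl of a rigid rotation about the $z$-axis; the field $\mathbf{F}=(-y,x,0)$ is an arbitrary choice.

```python
# Minimal sketch: curl of a rigid rotation about the z-axis.
# The sample field F = (-y, x, 0) is an arbitrary illustrative choice;
# its curl is the constant vector (0, 0, 2).
from sympy import symbols, diff, Matrix

x, y, z = symbols('x y z')
F = Matrix([-y, x, 0])

curl_F = Matrix([
    diff(F[2], y) - diff(F[1], z),
    diff(F[0], z) - diff(F[2], x),
    diff(F[1], x) - diff(F[0], y),
])
print(curl_F.T)   # Matrix([[0, 0, 2]])
```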
In Einstein notation, the vector field $\mathbf{F}=\left(F_1,\ F_2,\ F_3\right)$ has curl given by:
$$\nabla\times\mathbf{F}=\varepsilon^{ijk}\mathbf{e}_i\frac{\partial F_k}{\partial x_j}$$
where $\varepsilon=\pm 1$ or $0$ is the Levi-Civita parity symbol.
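The following sketch (added here as an illustration) evaluates the index form of the curl with SymPy's `LeviCivita` and compares it with the determinant form given above; the indices run over 0, 1, 2 rather than 1, 2, 3, and the sample field is an arbitrary choice.

```python
# Sketch of the index form (curl F)_i = eps_ijk d_j F_k using SymPy's LeviCivita.
# Indices run over 0, 1, 2 here; the sample field is an arbitrary choice.
from sympy import symbols, diff, simplify, Matrix, LeviCivita, sin

x, y, z = symbols('x y z')
coords = (x, y, z)
F = Matrix([y*z, sin(x), x*y*z])

curl_index = Matrix([
    sum(LeviCivita(i, j, k) * diff(F[k], coords[j])
        for j in range(3) for k in range(3))
    for i in range(3)
])

# Compare with the determinant form given earlier.
curl_det = Matrix([
    diff(F[2], y) - diff(F[1], z),
    diff(F[0], z) - diff(F[2], x),
    diff(F[1], x) - diff(F[0], y),
])
print((curl_index - curl_det).applyfunc(simplify))   # zero vector
```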
For a tensor field $\mathbf{T}$ of order $k>1$, the tensor field $\nabla\times\mathbf{T}$ of order $k$ is defined by the recursive relation
$$(\nabla\times\mathbf{T})\cdot\mathbf{C}=\nabla\times(\mathbf{T}\cdot\mathbf{C})$$
where $\mathbf{C}$ is an arbitrary constant vector.
A tensor field of order greater than one may be decomposed into a sum of outer products, and then the following identity may be used:
$$\nabla\times\left(\mathbf{A}\otimes\mathbf{T}\right)=(\nabla\times\mathbf{A})\otimes\mathbf{T}-\mathbf{A}\times(\nabla\mathbf{T}).$$
Specifically, for the outer product of two vectors,
$$\nabla\times\left(\mathbf{A}\mathbf{B}^{\mathsf{T}}\right)=(\nabla\times\mathbf{A})\mathbf{B}^{\mathsf{T}}-\mathbf{A}\times(\nabla\mathbf{B}).$$
In Cartesian coordinates, the Laplacian of a function $f(x,y,z)$ is
$$\Delta f=\nabla^2 f=(\nabla\cdot\nabla)f=\frac{\partial^2 f}{\partial x^2}+\frac{\partial^2 f}{\partial y^2}+\frac{\partial^2 f}{\partial z^2}.$$
The Laplacian is a measure of how much a function is changing over a small sphere centered at the point.
When the Laplacian is equal to 0, the function is called a harmonic function. That is,
$$\Delta f=0.$$
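As an illustration (added here, not from the original article), the sketch below computes the Laplacian as a sum of unmixed second partials and checks that the arbitrary sample field $f=x^2-y^2$ is harmonic.

```python
# Minimal sketch: the Laplacian as the sum of unmixed second partials,
# and a check that the arbitrary sample field f = x**2 - y**2 is harmonic.
from sympy import symbols, diff

x, y, z = symbols('x y z')

def laplacian(f):
    return sum(diff(f, v, 2) for v in (x, y, z))

print(laplacian(x**2 - y**2))         # 0  (harmonic)
print(laplacian(x**2 + y**2 + z**2))  # 6
```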
For a tensor field $\mathbf{T}$, the Laplacian is generally written as:
$$\Delta\mathbf{T}=\nabla^2\mathbf{T}=(\nabla\cdot\nabla)\mathbf{T}$$
and is a tensor field of the same order.
For a tensor field $\mathbf{T}$ of order $k>0$, the tensor field $\nabla^2\mathbf{T}$ of order $k$ is defined by the recursive relation
$$\left(\nabla^2\mathbf{T}\right)\cdot\mathbf{C}=\nabla^2(\mathbf{T}\cdot\mathbf{C})$$
where $\mathbf{C}$ is an arbitrary constant vector.
In Feynman subscript notation,
$$\nabla_{\mathbf{B}}\left(\mathbf{A}\cdot\mathbf{B}\right)=\mathbf{A}\times\left(\nabla\times\mathbf{B}\right)+\left(\mathbf{A}\cdot\nabla\right)\mathbf{B}$$
where the notation $\nabla_{\mathbf{B}}$ means the subscripted gradient operates on only the factor $\mathbf{B}$.[1][2]
Less general but similar is the Hestenes overdot notation in geometric algebra.[3] The above identity is then expressed as:
$$\dot{\nabla}\left(\mathbf{A}\cdot\dot{\mathbf{B}}\right)=\mathbf{A}\times\left(\nabla\times\mathbf{B}\right)+\left(\mathbf{A}\cdot\nabla\right)\mathbf{B}$$
where overdots define the scope of the vector derivative. The dotted vector, in this case $\mathbf{B}$, is differentiated, while the (undotted) $\mathbf{A}$ is held constant.
The utility of the Feynman subscript notation lies in its use in the derivation of vector and tensor derivative identities, as in the following example which uses the algebraic identity $\mathbf{C}\cdot(\mathbf{A}\times\mathbf{B})=(\mathbf{C}\times\mathbf{A})\cdot\mathbf{B}$:
$$\begin{aligned}\nabla\cdot(\mathbf{A}\times\mathbf{B})&=\nabla_{\mathbf{A}}\cdot(\mathbf{A}\times\mathbf{B})+\nabla_{\mathbf{B}}\cdot(\mathbf{A}\times\mathbf{B})\\&=(\nabla_{\mathbf{A}}\times\mathbf{A})\cdot\mathbf{B}+(\nabla_{\mathbf{B}}\times\mathbf{A})\cdot\mathbf{B}\\&=(\nabla_{\mathbf{A}}\times\mathbf{A})\cdot\mathbf{B}-(\mathbf{A}\times\nabla_{\mathbf{B}})\cdot\mathbf{B}\\&=(\nabla_{\mathbf{A}}\times\mathbf{A})\cdot\mathbf{B}-\mathbf{A}\cdot(\nabla_{\mathbf{B}}\times\mathbf{B})\\&=(\nabla\times\mathbf{A})\cdot\mathbf{B}-\mathbf{A}\cdot(\nabla\times\mathbf{B})\end{aligned}$$
An alternative method is to use the Cartesian components of the del operator as follows:
$$\begin{aligned}\nabla\cdot(\mathbf{A}\times\mathbf{B})&=\mathbf{e}_i\partial_i\cdot(\mathbf{A}\times\mathbf{B})\\&=\mathbf{e}_i\cdot(\partial_i\mathbf{A}\times\mathbf{B}+\mathbf{A}\times\partial_i\mathbf{B})\\&=\mathbf{e}_i\cdot(\partial_i\mathbf{A}\times\mathbf{B})+\mathbf{e}_i\cdot(\mathbf{A}\times\partial_i\mathbf{B})\\&=(\mathbf{e}_i\times\partial_i\mathbf{A})\cdot\mathbf{B}+(\mathbf{e}_i\times\mathbf{A})\cdot\partial_i\mathbf{B}\\&=(\mathbf{e}_i\times\partial_i\mathbf{A})\cdot\mathbf{B}-(\mathbf{A}\times\mathbf{e}_i)\cdot\partial_i\mathbf{B}\\&=(\mathbf{e}_i\times\partial_i\mathbf{A})\cdot\mathbf{B}-\mathbf{A}\cdot(\mathbf{e}_i\times\partial_i\mathbf{B})\\&=(\mathbf{e}_i\partial_i\times\mathbf{A})\cdot\mathbf{B}-\mathbf{A}\cdot(\mathbf{e}_i\partial_i\times\mathbf{B})\\&=(\nabla\times\mathbf{A})\cdot\mathbf{B}-\mathbf{A}\cdot(\nabla\times\mathbf{B})\end{aligned}$$
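As a sanity check on the result of both derivations (an illustration added here, not part of the original text), the sketch below verifies $\nabla\cdot(\mathbf{A}\times\mathbf{B})=(\nabla\times\mathbf{A})\cdot\mathbf{B}-\mathbf{A}\cdot(\nabla\times\mathbf{B})$ for arbitrary sample fields.

```python
# Sketch verifying div(A x B) = (curl A) . B - A . (curl B)
# for arbitrary sample vector fields A and B.
from sympy import symbols, diff, simplify, Matrix, sin, cos, exp

x, y, z = symbols('x y z')
coords = (x, y, z)

def div(F):
    return sum(diff(F[i], coords[i]) for i in range(3))

def curl(F):
    return Matrix([
        diff(F[2], y) - diff(F[1], z),
        diff(F[0], z) - diff(F[2], x),
        diff(F[1], x) - diff(F[0], y),
    ])

A = Matrix([x*y*z, sin(y), exp(z)*x])
B = Matrix([cos(x), y**2*z, x + z])

lhs = div(A.cross(B))
rhs = curl(A).dot(B) - A.dot(curl(B))
print(simplify(lhs - rhs))   # 0
```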
Another method of deriving vector and tensor derivative identities is to replace all occurrences of a vector in an algebraic identity by the del operator, provided that no variable occurs both inside and outside the scope of an operator or both inside the scope of one operator in a term and outside the scope of another operator in the same term (i.e., the operators must be nested). The validity of this rule follows from the validity of the Feynman method, for one may always substitute a subscripted del and then immediately drop the subscript under the condition of the rule.
For example, from the identity $\mathbf{A}\cdot(\mathbf{B}\times\mathbf{C})=(\mathbf{A}\times\mathbf{B})\cdot\mathbf{C}$ we may derive $\mathbf{A}\cdot(\nabla\times\mathbf{C})=(\mathbf{A}\times\nabla)\cdot\mathbf{C}$ but not $\nabla\cdot(\mathbf{B}\times\mathbf{C})=(\nabla\times\mathbf{B})\cdot\mathbf{C}$, nor from $\mathbf{A}\cdot(\mathbf{B}\times\mathbf{A})=0$ may we derive $\mathbf{A}\cdot(\nabla\times\mathbf{A})=0$.
On the other hand, a subscripted del operates on all occurrences of the subscript in the term, so that $\mathbf{A}\cdot(\nabla_{\mathbf{A}}\times\mathbf{A})=\nabla_{\mathbf{A}}\cdot(\mathbf{A}\times\mathbf{A})=\nabla\cdot(\mathbf{A}\times\mathbf{A})=0$.
Also, from $\mathbf{A}\times(\mathbf{A}\times\mathbf{C})=\mathbf{A}(\mathbf{A}\cdot\mathbf{C})-(\mathbf{A}\cdot\mathbf{A})\mathbf{C}$ we may derive $\nabla\times(\nabla\times\mathbf{C})=\nabla(\nabla\cdot\mathbf{C})-\nabla^2\mathbf{C}$,
but from $(\mathbf{A}\psi)\cdot(\mathbf{A}\phi)=(\mathbf{A}\cdot\mathbf{A})(\psi\phi)$ we may not derive $(\nabla\psi)\cdot(\nabla\phi)=\nabla^2(\psi\phi)$.
For the remainder of this article, Feynman subscript notation will be used where appropriate.
For scalar fields $\psi$, $\phi$ and vector fields $\mathbf{A}$, $\mathbf{B}$, we have the following derivative identities.
Distributive properties
$$\begin{aligned}\nabla(\psi+\phi)&=\nabla\psi+\nabla\phi\\\nabla(\mathbf{A}+\mathbf{B})&=\nabla\mathbf{A}+\nabla\mathbf{B}\\\nabla\cdot(\mathbf{A}+\mathbf{B})&=\nabla\cdot\mathbf{A}+\nabla\cdot\mathbf{B}\\\nabla\times(\mathbf{A}+\mathbf{B})&=\nabla\times\mathbf{A}+\nabla\times\mathbf{B}\end{aligned}$$
First derivative associative properties
$$\begin{aligned}(\mathbf{A}\cdot\nabla)\psi&=\mathbf{A}\cdot(\nabla\psi)\\(\mathbf{A}\cdot\nabla)\mathbf{B}&=\mathbf{A}\cdot(\nabla\mathbf{B})\\(\mathbf{A}\times\nabla)\psi&=\mathbf{A}\times(\nabla\psi)\\(\mathbf{A}\times\nabla)\mathbf{B}&=\mathbf{A}\times(\nabla\mathbf{B})\end{aligned}$$
Product rule for multiplication by a scalar
We have the following generalizations of the product rule in single-variable calculus.
$$\begin{aligned}\nabla(\psi\phi)&=\phi\,\nabla\psi+\psi\,\nabla\phi\\\nabla(\psi\mathbf{A})&=(\nabla\psi)\mathbf{A}^{\mathsf{T}}+\psi\,\nabla\mathbf{A}=\nabla\psi\otimes\mathbf{A}+\psi\,\nabla\mathbf{A}\\\nabla\cdot(\psi\mathbf{A})&=\psi\,\nabla\cdot\mathbf{A}+(\nabla\psi)\cdot\mathbf{A}\\\nabla\times(\psi\mathbf{A})&=\psi\,\nabla\times\mathbf{A}+(\nabla\psi)\times\mathbf{A}\\\nabla^2(\psi\phi)&=\psi\,\nabla^2\phi+2\,\nabla\psi\cdot\nabla\phi+\phi\,\nabla^2\psi\end{aligned}$$
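The following sketch (added here as an illustration) checks two of these product rules symbolically; the sample scalar $\psi$ and field $\mathbf{A}$ are arbitrary choices.

```python
# Sketch checking two of the product rules above,
# div(psi A) = psi div A + (grad psi) . A   and
# curl(psi A) = psi curl A + (grad psi) x A,
# for an arbitrary sample scalar psi and vector field A.
from sympy import symbols, diff, simplify, Matrix, sin, exp

x, y, z = symbols('x y z')
coords = (x, y, z)

def grad(f):
    return Matrix([diff(f, v) for v in coords])

def div(F):
    return sum(diff(F[i], coords[i]) for i in range(3))

def curl(F):
    return Matrix([
        diff(F[2], y) - diff(F[1], z),
        diff(F[0], z) - diff(F[2], x),
        diff(F[1], x) - diff(F[0], y),
    ])

psi = x*sin(y) + exp(z)
A = Matrix([y*z, x**2, x*y*z])

print(simplify(div(psi*A) - (psi*div(A) + grad(psi).dot(A))))                  # 0
print((curl(psi*A) - (psi*curl(A) + grad(psi).cross(A))).applyfunc(simplify))  # zero vector
```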
Quotient rule for division by a scalar
$$\begin{aligned}\nabla\left(\frac{\psi}{\phi}\right)&=\frac{\phi\,\nabla\psi-\psi\,\nabla\phi}{\phi^2}\\\nabla\left(\frac{\mathbf{A}}{\phi}\right)&=\frac{\phi\,\nabla\mathbf{A}-\nabla\phi\otimes\mathbf{A}}{\phi^2}\\\nabla\cdot\left(\frac{\mathbf{A}}{\phi}\right)&=\frac{\phi\,\nabla\cdot\mathbf{A}-\nabla\phi\cdot\mathbf{A}}{\phi^2}\\\nabla\times\left(\frac{\mathbf{A}}{\phi}\right)&=\frac{\phi\,\nabla\times\mathbf{A}-\nabla\phi\times\mathbf{A}}{\phi^2}\\\nabla^2\left(\frac{\psi}{\phi}\right)&=\frac{\phi\,\nabla^2\psi-2\,\phi\,\nabla\left(\frac{\psi}{\phi}\right)\cdot\nabla\phi-\psi\,\nabla^2\phi}{\phi^2}\end{aligned}$$
Let $f(x)$ be a one-variable function from scalars to scalars, $\mathbf{r}(t)=(x_1(t),\ldots,x_n(t))$ a parametrized curve, $\phi:\mathbb{R}^n\to\mathbb{R}$ a function from vectors to scalars, and $\mathbf{A}:\mathbb{R}^n\to\mathbb{R}^n$ a vector field. We have the following special cases of the multi-variable chain rule.
$$\begin{aligned}\nabla(f\circ\phi)&=\left(f'\circ\phi\right)\nabla\phi\\(\mathbf{r}\circ f)'&=(\mathbf{r}'\circ f)f'\\(\phi\circ\mathbf{r})'&=(\nabla\phi\circ\mathbf{r})\cdot\mathbf{r}'\\(\mathbf{A}\circ\mathbf{r})'&=\mathbf{r}'\cdot(\nabla\mathbf{A}\circ\mathbf{r})\\\nabla(\phi\circ\mathbf{A})&=(\nabla\mathbf{A})\cdot(\nabla\phi\circ\mathbf{A})\\\nabla\cdot(\mathbf{r}\circ\phi)&=\nabla\phi\cdot(\mathbf{r}'\circ\phi)\\\nabla\times(\mathbf{r}\circ\phi)&=\nabla\phi\times(\mathbf{r}'\circ\phi)\end{aligned}$$
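As an illustration of the first case (added here, not part of the original article), the sketch below checks $\nabla(f\circ\phi)=(f'\circ\phi)\nabla\phi$; the choices $f(u)=\sin u$ and $\phi=xy+z^2$ are arbitrary.

```python
# Sketch checking the first chain-rule case, grad(f o phi) = (f' o phi) grad(phi),
# with arbitrary sample choices f(u) = sin(u) and phi = x*y + z**2.
from sympy import symbols, diff, simplify, Matrix, sin

x, y, z, u = symbols('x y z u')

def grad(F):
    return Matrix([diff(F, v) for v in (x, y, z)])

f = sin(u)                 # outer one-variable function
fprime = diff(f, u)        # f'(u) = cos(u)
phi = x*y + z**2           # inner scalar field

lhs = grad(f.subs(u, phi))               # grad(f o phi)
rhs = fprime.subs(u, phi) * grad(phi)    # (f' o phi) grad(phi)
print((lhs - rhs).applyfunc(simplify))   # zero vector
```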
For a vector transformation $\mathbf{x}:\mathbb{R}^n\to\mathbb{R}^n$ we have:
$$\nabla\cdot(\mathbf{A}\circ\mathbf{x})=\mathrm{tr}\left((\nabla\mathbf{x})\cdot(\nabla\mathbf{A}\circ\mathbf{x})\right)$$
Here we take the trace of the dot product of two second-order tensors, which corresponds to the product of their matrices.
$$\begin{aligned}\nabla(\mathbf{A}\cdot\mathbf{B})&=(\mathbf{A}\cdot\nabla)\mathbf{B}+(\mathbf{B}\cdot\nabla)\mathbf{A}+\mathbf{A}\times(\nabla\times\mathbf{B})+\mathbf{B}\times(\nabla\times\mathbf{A})\\&=\mathbf{A}\cdot\mathbf{J}_{\mathbf{B}}+\mathbf{B}\cdot\mathbf{J}_{\mathbf{A}}=(\nabla\mathbf{B})\cdot\mathbf{A}+(\nabla\mathbf{A})\cdot\mathbf{B}\end{aligned}$$
where $\mathbf{J}_{\mathbf{A}}=(\nabla\mathbf{A})^{\mathsf{T}}=(\partial A_i/\partial x_j)_{ij}$ denotes the Jacobian matrix of the vector field $\mathbf{A}=(A_1,\ldots,A_n)$.
Alternatively, using Feynman subscript notation,
$$\nabla(\mathbf{A}\cdot\mathbf{B})=\nabla_{\mathbf{A}}(\mathbf{A}\cdot\mathbf{B})+\nabla_{\mathbf{B}}(\mathbf{A}\cdot\mathbf{B}).$$
See these notes.[4]
As a special case, when $\mathbf{A}=\mathbf{B}$,
$$\tfrac{1}{2}\nabla\left(\mathbf{A}\cdot\mathbf{A}\right)=\mathbf{A}\cdot\mathbf{J}_{\mathbf{A}}=(\nabla\mathbf{A})\cdot\mathbf{A}=(\mathbf{A}\cdot\nabla)\mathbf{A}+\mathbf{A}\times(\nabla\times\mathbf{A})=A\nabla A.$$
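The sketch below (an illustration added here) verifies this special case, $\tfrac{1}{2}\nabla(\mathbf{A}\cdot\mathbf{A})=(\mathbf{A}\cdot\nabla)\mathbf{A}+\mathbf{A}\times(\nabla\times\mathbf{A})$, for an arbitrary sample field.

```python
# Sketch checking (1/2) grad(A . A) = (A . grad) A + A x (curl A)
# for an arbitrary sample vector field A.
from sympy import symbols, diff, simplify, Matrix, Rational, sin, cos

x, y, z = symbols('x y z')
coords = (x, y, z)

def grad(f):
    return Matrix([diff(f, v) for v in coords])

def curl(F):
    return Matrix([
        diff(F[2], y) - diff(F[1], z),
        diff(F[0], z) - diff(F[2], x),
        diff(F[1], x) - diff(F[0], y),
    ])

def directional(A, B):
    # (A . grad) B, applied component-wise
    return Matrix([sum(A[i]*diff(B[j], coords[i]) for i in range(3))
                   for j in range(3)])

A = Matrix([x*y, sin(z), y*cos(x)])

lhs = Rational(1, 2) * grad(A.dot(A))
rhs = directional(A, A) + A.cross(curl(A))
print((lhs - rhs).applyfunc(simplify))   # zero vector
```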
The generalization of the dot product formula to Riemannian manifolds is a defining property of a Riemannian connection, which differentiates a vector field to give a vector-valued 1-form.
$$\begin{aligned}\nabla(\mathbf{A}\times\mathbf{B})&=(\nabla\mathbf{A})\times\mathbf{B}-(\nabla\mathbf{B})\times\mathbf{A}\\\nabla\cdot(\mathbf{A}\times\mathbf{B})&=(\nabla\times\mathbf{A})\cdot\mathbf{B}-\mathbf{A}\cdot(\nabla\times\mathbf{B})\\\nabla\times(\mathbf{A}\times\mathbf{B})&=\mathbf{A}(\nabla\cdot\mathbf{B})-\mathbf{B}(\nabla\cdot\mathbf{A})+(\mathbf{B}\cdot\nabla)\mathbf{A}-(\mathbf{A}\cdot\nabla)\mathbf{B}\\&=\mathbf{A}(\nabla\cdot\mathbf{B})+(\mathbf{B}\cdot\nabla)\mathbf{A}-(\mathbf{B}(\nabla\cdot\mathbf{A})+(\mathbf{A}\cdot\nabla)\mathbf{B})\\&=\nabla\cdot\left(\mathbf{B}\mathbf{A}^{\mathsf{T}}\right)-\nabla\cdot\left(\mathbf{A}\mathbf{B}^{\mathsf{T}}\right)\\&=\nabla\cdot\left(\mathbf{B}\mathbf{A}^{\mathsf{T}}-\mathbf{A}\mathbf{B}^{\mathsf{T}}\right)\\\mathbf{A}\times(\nabla\times\mathbf{B})&=\nabla_{\mathbf{B}}(\mathbf{A}\cdot\mathbf{B})-(\mathbf{A}\cdot\nabla)\mathbf{B}\\&=\mathbf{A}\cdot\mathbf{J}_{\mathbf{B}}-(\mathbf{A}\cdot\nabla)\mathbf{B}\\&=(\nabla\mathbf{B})\cdot\mathbf{A}-\mathbf{A}\cdot(\nabla\mathbf{B})\\&=\mathbf{A}\cdot(\mathbf{J}_{\mathbf{B}}-\mathbf{J}_{\mathbf{B}}^{\mathsf{T}})\\(\mathbf{A}\times\nabla)\times\mathbf{B}&=(\nabla\mathbf{B})\cdot\mathbf{A}-\mathbf{A}(\nabla\cdot\mathbf{B})\\&=\mathbf{A}\times(\nabla\times\mathbf{B})+(\mathbf{A}\cdot\nabla)\mathbf{B}-\mathbf{A}(\nabla\cdot\mathbf{B})\\(\mathbf{A}\times\nabla)\cdot\mathbf{B}&=\mathbf{A}\cdot(\nabla\times\mathbf{B})\end{aligned}$$
Note that the matrix $\mathbf{J}_{\mathbf{B}}-\mathbf{J}_{\mathbf{B}}^{\mathsf{T}}$ is antisymmetric.
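For a concrete check of the curl-of-cross-product rule (an illustration added here, not part of the original text), the sketch below verifies $\nabla\times(\mathbf{A}\times\mathbf{B})=\mathbf{A}(\nabla\cdot\mathbf{B})-\mathbf{B}(\nabla\cdot\mathbf{A})+(\mathbf{B}\cdot\nabla)\mathbf{A}-(\mathbf{A}\cdot\nabla)\mathbf{B}$ for arbitrary sample fields.

```python
# Sketch checking curl(A x B) = A (div B) - B (div A) + (B . grad) A - (A . grad) B
# for arbitrary sample vector fields A and B.
from sympy import symbols, diff, simplify, Matrix, sin, exp

x, y, z = symbols('x y z')
coords = (x, y, z)

def div(F):
    return sum(diff(F[i], coords[i]) for i in range(3))

def curl(F):
    return Matrix([
        diff(F[2], y) - diff(F[1], z),
        diff(F[0], z) - diff(F[2], x),
        diff(F[1], x) - diff(F[0], y),
    ])

def directional(A, B):
    # (A . grad) B, applied component-wise
    return Matrix([sum(A[i]*diff(B[j], coords[i]) for i in range(3))
                   for j in range(3)])

A = Matrix([x*y, y*z, z*x])
B = Matrix([sin(y), exp(x), x*y*z])

lhs = curl(A.cross(B))
rhs = A*div(B) - B*div(A) + directional(B, A) - directional(A, B)
print((lhs - rhs).applyfunc(simplify))   # zero vector
```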
Divergence of curl is zero
The divergence of the curl of any continuously twice-differentiable vector field A is always zero:
$$\nabla\cdot(\nabla\times\mathbf{A})=0$$
This is a special case of the vanishing of the square of the exterior derivative in the De Rham chain complex.
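A quick symbolic confirmation (added here as an illustration, with an arbitrary sample field) is sketched below.

```python
# Sketch confirming div(curl A) = 0 for an arbitrary smooth sample field A.
from sympy import symbols, diff, simplify, Matrix, sin, exp

x, y, z = symbols('x y z')
coords = (x, y, z)

def div(F):
    return sum(diff(F[i], coords[i]) for i in range(3))

def curl(F):
    return Matrix([
        diff(F[2], y) - diff(F[1], z),
        diff(F[0], z) - diff(F[2], x),
        diff(F[1], x) - diff(F[0], y),
    ])

A = Matrix([x**2*sin(y), exp(y*z), x*y*z])
print(simplify(div(curl(A))))   # 0
```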
Divergence of gradient is Laplacian
The Laplacian of a scalar field is the divergence of its gradient:
$$\Delta\psi=\nabla^2\psi=\nabla\cdot(\nabla\psi)$$
The result is a scalar quantity.
Divergence of divergence is not defined
The divergence of a vector field A is a scalar, and the divergence of a scalar quantity is undefined. Therefore,
$$\nabla\cdot(\nabla\cdot\mathbf{A})\ \text{is undefined.}$$
Curl of gradient is zero
The curl of the gradient of any continuously twice-differentiable scalar field $\varphi$ (i.e., differentiability class $C^2$) is always the zero vector:
$$\nabla\times(\nabla\varphi)=\mathbf{0}.$$
It can be easily proved by expressing $\nabla\times(\nabla\varphi)$ in a Cartesian coordinate system with Schwarz's theorem (also called Clairaut's theorem on equality of mixed partials). This result is a special case of the vanishing of the square of the exterior derivative in the De Rham chain complex.
$$\nabla\times\left(\nabla\times\mathbf{A}\right)=\nabla(\nabla\cdot\mathbf{A})-\nabla^2\mathbf{A}$$
Here $\nabla^2$ is the vector Laplacian operating on the vector field $\mathbf{A}$.
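The sketch below (an illustration added here, not part of the original article) checks the curl-of-curl identity with the vector Laplacian taken component-wise; the sample field is an arbitrary choice.

```python
# Sketch checking curl(curl A) = grad(div A) - laplacian(A),
# where the vector Laplacian is applied component-wise; A is an arbitrary sample.
from sympy import symbols, diff, simplify, Matrix, sin, exp

x, y, z = symbols('x y z')
coords = (x, y, z)

def grad(f):
    return Matrix([diff(f, v) for v in coords])

def div(F):
    return sum(diff(F[i], coords[i]) for i in range(3))

def curl(F):
    return Matrix([
        diff(F[2], y) - diff(F[1], z),
        diff(F[0], z) - diff(F[2], x),
        diff(F[1], x) - diff(F[0], y),
    ])

def vector_laplacian(F):
    return Matrix([sum(diff(F[j], v, 2) for v in coords) for j in range(3)])

A = Matrix([x*y*z, sin(x)*exp(z), y**2*z])

lhs = curl(curl(A))
rhs = grad(div(A)) - vector_laplacian(A)
print((lhs - rhs).applyfunc(simplify))   # zero vector
```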
Curl of divergence is not defined
The divergence of a vector field A is a scalar, and the curl of a scalar quantity is undefined. Therefore,
$$\nabla\times(\nabla\cdot\mathbf{A})\ \text{is undefined.}$$
Second derivative associative properties
$$\begin{aligned}(\nabla\cdot\nabla)\psi&=\nabla\cdot(\nabla\psi)=\nabla^2\psi\\(\nabla\cdot\nabla)\mathbf{A}&=\nabla\cdot(\nabla\mathbf{A})=\nabla^2\mathbf{A}\\(\nabla\times\nabla)\psi&=\nabla\times(\nabla\psi)=\mathbf{0}\\(\nabla\times\nabla)\mathbf{A}&=\nabla\times(\nabla\mathbf{A})=\mathbf{0}\end{aligned}$$
DCG chart: Some rules for second derivatives.
The figure to the right is a mnemonic for some of these identities. The abbreviations used are:
D: divergence,
C: curl,
G: gradient,
L: Laplacian,
CC: curl of curl.
Each arrow is labeled with the result of an identity, specifically, the result of applying the operator at the arrow's tail to the operator at its head. The blue circle in the middle means curl of curl exists, whereas the other two red circles (dashed) mean that DD and GG do not exist.