## Introduction #

Many people are familiar with the so-called 'Feynman trick' of differentiating under the integral sign, which greatly simplifies some integration and series problems. Buried in chapter 27-3 of the Feynman Lectures on Electromagnetism [FLS63], however, lies another trick, one which can simplify problems in vector calculus by letting you treat the derivative operator $\nabla$ like any other vector, without having to worry about commutativity. I don't know whether Feynman invented this himself, but I have never stumbled across it anywhere else.

Note: u/bolbteppa on Reddit has pointed out that this idea can be found in the very first book on vector calculus, written based on lectures given by Josiah Willard Gibbs.

What this trick allows you to do is treat the $\nabla$ operator as if it were any other vector. This means that if you know a vector identity, you can immediately derive the corresponding vector calculus identity. Furthermore, even if you do not have (or don't want to look up) the identity, you can apply the usual rules of vectors assuming that everything commutes, which is a nice simplification.

The trick appears during the derivation of the Poynting vector. We wish to simplify

$\nabla\cdot(B\times E),$

where $B$ and $E$ are the magnetic and electric field respectively, though for our purposes they can just be any vector fields.

## The trick #

The problem we want to solve is that we cannot apply the usual rules of vectors to the derivative operator. For example, we have

$A\times B=-B\times A,\;\;A\cdot B=B\cdot A$

but it is certainly not true that

[eqNablaCommutative]: $\nabla\times A=-A\times\nabla,\;\;\nabla\cdot A=A\cdot\nabla.$

This means that when you want to break up an expression like $\nabla\cdot(B\times E)$, you can't immediately reach for a vector identity $A\cdot(B\times C)=B\cdot(C\times A)$ and expect the result to hold. Even if you aren't using a table of identities, it would certainly make your life easier if you could find a way to treat $\nabla$ like any other vector and bash out algebra like [eqNablaCommutative].

The trick is to introduce some new notation. Let's first restrict ourselves to two scalar functions $f$ and $g$. We introduce the notation

$\frac{\partial}{\partial x_f}$

to mean a derivative operator which only acts on $f$, not $g$. Moreover, it doesn't matter where in the expression the derivative sits; it is always interpreted as acting on $f$. In our notation the following are all equivalent:

[eqDerCommutative]: $\frac{\partial f}{\partial x}g=\frac{\partial}{\partial x_f}fg=f\frac{\partial}{\partial x_f}g=fg\frac{\partial}{\partial x_f}.$

Why did we do this? Well, now the derivative $\frac{\partial}{\partial x_f}$ behaves just like any other number! We can write our terms in any order we want and still know what we mean.

Now let's suppose we want to differentiate a product of terms:

$\frac{\partial}{\partial x}(fg)=\frac{\partial f}{\partial x}g+f\frac{\partial g}{\partial x}.$

We can see that whenever we have such a product, we can write:

$\frac{\partial}{\partial x}(fg) = \left(\frac{\partial}{\partial x_f}+\frac{\partial}{\partial x_g}\right)fg = \frac{\partial}{\partial x_f}fg+\frac{\partial}{\partial x_g}fg.$
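This regrouping is easy to check with sympy; $f$ and $g$ below are arbitrary symbolic functions of $x$:

```python
import sympy as sp

x = sp.Symbol('x')
f = sp.Function('f')(x)  # arbitrary scalar functions of x
g = sp.Function('g')(x)

# d/dx (f g) is the sum of a piece differentiating only f
# and a piece differentiating only g
lhs = sp.diff(f * g, x)
rhs = sp.diff(f, x) * g + f * sp.diff(g, x)
assert sp.simplify(lhs - rhs) == 0
```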

We want to generalise this to things like $\nabla\cdot(A\times B)$. Remembering that the derivative operator is interpreted as $\nabla=\left(\frac{\partial}{\partial x},\frac{\partial}{\partial y},\frac{\partial}{\partial z}\right)$, we define

$\nabla_A=\left(\frac{\partial}{\partial x_A},\frac{\partial}{\partial y_A},\frac{\partial}{\partial z_A}\right).$

Here $\frac{\partial}{\partial x_A}$ is interpreted as acting on any of the components $A_x$, $A_y$, $A_z$ of $A$.

With this notation, keeping in mind the commutativity [eqDerCommutative] of the derivative operator, we can see that

$\nabla_A\cdot A=A\cdot\nabla_A,$

$\nabla_A\times A=-A\times\nabla_A.$

Work out the components and see for yourself!

In the next section we will apply this trick to derive some common vector calculus identities. The idea is to take an expression such as $\nabla\cdot(E\times B)$, write it as $(\nabla_E+\nabla_B)\cdot(E\times B)$, and then expand this using our normal vector rules until we end up with $\nabla_E$ acting only on $E$ and $\nabla_B$ on $B$, in which case we can replace them with the original $\nabla$.

## Some examples #

Here we will see how various vector identities can be generalised to include $\nabla$ using the ideas from the previous section. All the identities I am using come from the Wikipedia page [Wik19].

You may want to try and do each of these yourself before reading the solution. Have a look at the title of the section, check the Wikipedia page [Wik19] for the corresponding vector identity, and have a play. If you get stuck read just enough of the solution until you find out what concept you were missing, and then go back to it. As they say, *mathematics is not a spectator sport!*

### Example 1: $\nabla\cdot(A\times B)$ #

The corresponding vector identity is

$A\cdot (B\times C)=B\cdot(C\times A)=C\cdot(A\times B).$

We can look at this as saying that the product $A\cdot(B\times C)$ is invariant under cyclic permutations, i.e. if you shift $A\rightarrow B\rightarrow C\rightarrow A$. If we look at $A\cdot(B\times C)$ as something with three slots: $\star\cdot(\star\times\star)$, this is saying that you can move everything one slot to the right (and the rightmost one 'cycles' to the left), or you can move everything one slot to the left (and the leftmost one 'cycles' to the right). This pattern comes up all the time in mathematics and physics, so it's good to keep it in mind.
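You can confirm the cyclic invariance without any hand computation; here is a quick sympy check on generic symbolic 3-vectors:

```python
import sympy as sp

# three generic 3-vectors with symbolic components
a = sp.Matrix(sp.symbols('a1 a2 a3'))
b = sp.Matrix(sp.symbols('b1 b2 b3'))
c = sp.Matrix(sp.symbols('c1 c2 c3'))

def triple(u, v, w):
    """The scalar triple product u . (v x w)."""
    return u.dot(v.cross(w))

# invariant under the cyclic shift a -> b -> c -> a
assert sp.expand(triple(a, b, c) - triple(b, c, a)) == 0
assert sp.expand(triple(a, b, c) - triple(c, a, b)) == 0
```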

Let's experiment and see where we go. Since every term will be a product of terms from $A$ and terms from $B$, we may expand

$\nabla\cdot(A\times B) = \nabla_A\cdot(A\times B)+\nabla_B\cdot(A\times B).$

We want to change this so that $\nabla_A$ is acting on $A$ and $\nabla_B$ on $B$, then we can replace them with the original $\nabla$. So let's cyclically permute the first term to the right, and the second to the left:

$=B\cdot(\nabla_A\times A)+A\cdot(B\times\nabla_B).$

Finally, we use $A\times B=-B\times A$ to re-write the last term:

$= B\cdot(\nabla_A\times A)-A\cdot(\nabla_B\times B),$

$= B\cdot(\nabla\times A)-A\cdot(\nabla\times B).$

We have thus derived

$\nabla\cdot(A\times B)=B\cdot(\nabla\times A)-A\cdot(\nabla\times B).$

Better yet, now we have an idea of where that strange minus sign came from. The first two terms have the same cyclic order in their slots $\nabla\rightarrow A\rightarrow B\rightarrow\nabla$, and breaking this in the third term comes at the expense of a minus sign.
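As a sanity check, we can verify the derived identity with sympy's vector module. The component functions below are arbitrary test fields, not anything tied to electromagnetism:

```python
import sympy as sp
from sympy.vector import CoordSys3D, curl, divergence

C = CoordSys3D('C')
x, y, z = C.x, C.y, C.z

# arbitrary smooth vector fields used purely as a test case
A = x*y*C.i + y*z**2*C.j + sp.sin(x)*C.k
B = z*C.i + x**2*C.j + y*z*C.k

lhs = divergence(A.cross(B))
rhs = B.dot(curl(A)) - A.dot(curl(B))
assert sp.simplify(lhs - rhs) == 0
```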

### Example 2: $\nabla\times(A\times B)$ #

The corresponding vector identity is

[TripleProductIdentity]: $A\times(B\times C)=(A\cdot C)B-(A\cdot B)C.$

We thus have

$(\nabla_A+\nabla_B)\times(A\times B)=\nabla_A\times (A\times B)+\nabla_B\times(A\times B).$

Let's look at the first term, the second will be analogous.

$\nabla_A\times(A\times B) = (\nabla_A\cdot B)A-(\nabla_A\cdot A)B.$

Note that the product $\nabla_A\cdot B$ is *not* zero, as $\nabla_A$ is a derivative operator which still acts on $A$ anywhere in the equation (see [eqDerCommutative]). We rearrange the above using the commutativity of the dot product to write

$\nabla_A\times(A\times B) = (B\cdot\nabla_A)A-(\nabla_A\cdot A)B,$

$= (B\cdot\nabla)A-(\nabla\cdot A)B.$

Swapping $A\leftrightarrow B$ we obtain

$\nabla_B\times(B\times A) = (A\cdot\nabla)B-(\nabla\cdot B)A,$

so

$\nabla_B\times(A\times B) = -(A\cdot\nabla)B+(\nabla\cdot B)A.$

Putting the two together finally gives

$\nabla\times(A\times B)=(B\cdot\nabla)A-(A\cdot\nabla)B+(\nabla\cdot B)A-(\nabla\cdot A)B.$
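This one can also be checked symbolically. sympy's vector module has no built-in $(V\cdot\nabla)W$ operator, so the sketch below writes it out componentwise; the test fields are arbitrary polynomial choices:

```python
import sympy as sp
from sympy.vector import CoordSys3D, Vector, curl, divergence, gradient

C = CoordSys3D('C')

# arbitrary polynomial test fields
A = C.x*C.y*C.i + C.y*C.z*C.j + C.z*C.x*C.k
B = C.z**2*C.i + C.x*C.j + C.y**2*C.k

def conv(V, W):
    """(V . nabla) W, componentwise: the e-component is V . grad(W_e)."""
    out = Vector.zero
    for e in (C.i, C.j, C.k):
        out += V.dot(gradient(W.dot(e))) * e
    return out

lhs = curl(A.cross(B))
rhs = conv(B, A) - conv(A, B) + divergence(B)*A - divergence(A)*B
# the difference should vanish component by component
assert all(sp.simplify((lhs - rhs).dot(e)) == 0 for e in (C.i, C.j, C.k))
```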

### Example 3: $\nabla\cdot(\psi A)$ #

Here $\psi$ is just an ordinary scalar function, and $A$ a vector. This difference makes this one a little bit tricky, but on the plus side we won't have to look up any identities. Let's begin by expanding as usual (since everything will be a product of $\psi$ and terms from $A$):

$\nabla\cdot(\psi A) = \nabla_{\psi}\cdot(\psi A)+\nabla_A\cdot(\psi A).$

For the second term we can pull the scalar $\psi$ through $\nabla_A$ to get $\psi(\nabla_A\cdot A)$. Let's have a think about what we mean by the first term. The derivative operator is a vector

[eqNablaPsi]: $\nabla_{\psi}=\left(\frac{\partial}{\partial x_{\psi}},\frac{\partial}{\partial y_{\psi}},\frac{\partial}{\partial z_{\psi}}\right),$

and the quantity inside the brackets is a vector

[eqPsiA]: $(\psi A)=\left(\psi A_x,\psi A_y,\psi A_z\right),$

where $A_x$ is the $x$-component of $A$, and so on. Taking the dot product of [eqNablaPsi] and [eqPsiA], we can see that this will give us

$\nabla_{\psi}\cdot(\psi A) = \frac{\partial}{\partial x_{\psi}}(\psi A_x)+\frac{\partial}{\partial y_{\psi}}(\psi A_y)+\frac{\partial}{\partial z_{\psi}}(\psi A_z),$

$= A_x\frac{\partial \psi}{\partial x_{\psi}}+A_y\frac{\partial \psi}{\partial y_{\psi}}+A_z\frac{\partial \psi}{\partial z_{\psi}},$

$=A\cdot\nabla_{\psi}\psi.$

Putting all this together we arrive at

$\nabla\cdot(\psi A)=A\cdot\nabla\psi+\psi\nabla\cdot A.$
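Once more we can check the result with sympy; $\psi$ and $A$ below are arbitrary test choices:

```python
import sympy as sp
from sympy.vector import CoordSys3D, divergence, gradient

C = CoordSys3D('C')

psi = C.x * C.y * C.z                     # an arbitrary scalar field
A = C.y*C.i + C.x*C.z*C.j + C.x**2*C.k    # an arbitrary vector field

lhs = divergence(psi * A)
rhs = A.dot(gradient(psi)) + psi * divergence(A)
assert sp.simplify(lhs - rhs) == 0
```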

## Conclusion #

In the above we learned a neat trick for treating the derivative operator just like any other vector. This is a cool and useful idea, which I hadn't seen anywhere before I came across it in chapter 27-3 of [FLS63]. Leave a comment or a tweet if you find other cool applications, or have ideas for further investigation. I notably did not touch on any of the second derivatives, such as $\nabla\cdot(\nabla\times A)$ or $\nabla\times(\nabla\times A)$, and I'm sure that this trick would simplify a lot of these too. I also had a look at $\nabla(A\cdot B)$, and while you could use the trick there, it turned out to be a bit complicated and involved some thinking to 'guess' terms which would fit what you wanted. Let me know if you find a nice simple way of doing this.

As a final application, u/Muphrid15 has pointed out that this idea can be used to generalise the derivative operator to geometric algebra (also known as Clifford algebras). This is a sort of algebra for vector spaces, allowing you to do things like add one vector space to another or adjoin and subtract dimensions, and many calculations in vector algebra can be simplified immensely when put in this language.

Follow @ruvi_l on Twitter for more posts like this, or join the discussion on Reddit.

## References #

[FLS63] Feynman, R., Leighton, R., & Sands, M. (1963). The Feynman Lectures on Physics, Volume II. Addison-Wesley.

[Wik19] Wikipedia contributors. (2019, February 20). Vector calculus identities. In Wikipedia, The Free Encyclopedia. Retrieved 23:01, February 22, 2019, from https://en.wikipedia.org/wiki/Vector_calculus_identities
