Introduction #
Many people are familiar with the so-called 'Feynman trick' of differentiating under the integral sign, which greatly simplifies some integration and series problems. Buried in chapter 27-3 of the Feynman Lectures on Electromagnetism [FLS63], however, lies another trick, one which can simplify problems in vector calculus by letting you treat the derivative operator like any other vector, without having to worry about commutativity. I don't know whether Feynman invented this himself, but I have never stumbled across it anywhere else.
Note: u/bolbteppa on Reddit has pointed out that this idea can be found in the very first book on vector calculus, written based on lectures given by Josiah Willard Gibbs.
What this trick allows you to do is treat the operator $\nabla$ as if it were any other vector. This means that if you know a vector identity, you can immediately derive the corresponding vector calculus identity. Furthermore, even if you do not have (or don't want to look up) the identity, you can apply the usual rules of vectors assuming that everything commutes, which is a nice simplification.
The trick appears during the derivation of the Poynting vector. We wish to simplify

$$\nabla \cdot (\mathbf{E} \times \mathbf{B}),$$

where $\mathbf{B}$ and $\mathbf{E}$ are the magnetic and electric field respectively, though for our purposes they can just be any vector fields.
The trick #
The problem we want to solve is that we cannot apply the usual rules of vectors to the derivative operator $\nabla$. For example, for ordinary vectors we have

$$\mathbf{A} \cdot \mathbf{B} = \mathbf{B} \cdot \mathbf{A},$$

but it is certainly not true that

$$\nabla \cdot \mathbf{A} = \mathbf{A} \cdot \nabla. \qquad \text{[eqNablaCommutative]}$$
This means that when you want to break up an expression like $\nabla \cdot (\mathbf{E} \times \mathbf{B})$, you can't immediately reach for a vector identity and expect the result to hold. Even if you aren't using a table of identities, it would certainly make your life easier if you could find a way to treat $\nabla$ like any other vector and bash out algebra like [eqNablaCommutative].
The trick is to introduce some new notation. Let's first restrict ourselves to two scalar functions $f$ and $g$. We introduce the notation

$$\nabla_f$$

to mean a derivative operator which acts only on $f$, not $g$. Moreover, it doesn't matter where in the expression the derivative sits; it is always interpreted as acting on $f$. In our notation the following are all equivalent:

$$\nabla_f\, fg = f\, \nabla_f\, g = fg\, \nabla_f = (\nabla f)\, g. \qquad \text{[eqDerCommutative]}$$
Why did we do this? Well, now the derivative behaves just like any other number! We can write our terms in any order we want, and still know what we mean.
Now let's suppose we want to differentiate a product of terms:

$$\nabla(fg) = (\nabla f)g + f(\nabla g).$$

We can see that whenever we have such a product, we can write

$$\nabla(fg) = \nabla_f(fg) + \nabla_g(fg).$$
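If you like checking such manipulations computationally, here is a minimal sympy sketch of the one-dimensional product rule that this decomposition encodes (the names `f` and `g` are just placeholders):

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')(x)
g = sp.Function('g')(x)

# d/dx (f g) splits into a term differentiating only f and one differentiating only g.
lhs = sp.diff(f * g, x)
rhs = sp.diff(f, x) * g + f * sp.diff(g, x)
print(sp.expand(lhs - rhs) == 0)  # True
```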
We want to generalise this to things like $\nabla \cdot (\mathbf{E} \times \mathbf{B})$. Remembering that the derivative operator is interpreted as

$$\nabla = \left(\frac{\partial}{\partial x}, \frac{\partial}{\partial y}, \frac{\partial}{\partial z}\right),$$

we define $\nabla_{\mathbf{B}}$ analogously. Here $\nabla_{\mathbf{B}}$ is interpreted as acting on any of the components $B_x$, $B_y$, $B_z$ of $\mathbf{B}$.
With this notation, keeping in mind the commutativity [eqDerCommutative] of the derivative operator, we can see that

$$\nabla \cdot (\mathbf{E} \times \mathbf{B}) = \nabla_{\mathbf{E}} \cdot (\mathbf{E} \times \mathbf{B}) + \nabla_{\mathbf{B}} \cdot (\mathbf{E} \times \mathbf{B}).$$

Work out the components and see for yourself!
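If you would rather let the computer work out the components, here is a sketch using sympy.vector. It models $\nabla_{\mathbf{E}}$ by freezing the components of $\mathbf{B}$ to constant symbols while differentiating (and vice versa), then substituting the functions back in; all the names are placeholders of my choosing:

```python
import sympy as sp
from sympy.vector import CoordSys3D, divergence

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

# Generic fields: each component is an arbitrary function of position.
Ex, Ey, Ez = [sp.Function(f'E{c}')(x, y, z) for c in 'xyz']
Bx, By, Bz = [sp.Function(f'B{c}')(x, y, z) for c in 'xyz']
E = Ex*N.i + Ey*N.j + Ez*N.k
B = Bx*N.i + By*N.j + Bz*N.k

# 'Frozen' copies built from constant symbols, which nabla ignores.
ex, ey, ez, bx, by, bz = sp.symbols('e_x e_y e_z b_x b_y b_z')
E0 = ex*N.i + ey*N.j + ez*N.k
B0 = bx*N.i + by*N.j + bz*N.k

# nabla_E . (E x B): differentiate with B frozen, then un-freeze B.
termE = divergence(E ^ B0).subs({bx: Bx, by: By, bz: Bz})
# nabla_B . (E x B): differentiate with E frozen, then un-freeze E.
termB = divergence(E0 ^ B).subs({ex: Ex, ey: Ey, ez: Ez})

print(sp.simplify(divergence(E ^ B) - (termE + termB)) == 0)  # True
```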
In the next section we will apply this trick to derive some common vector calculus identities. The idea is to take an expression such as $\nabla \cdot (\mathbf{A} \times \mathbf{B})$, write it as $\nabla_{\mathbf{A}} \cdot (\mathbf{A} \times \mathbf{B}) + \nabla_{\mathbf{B}} \cdot (\mathbf{A} \times \mathbf{B})$, and then expand this using our normal vector rules until we end up with $\nabla_{\mathbf{A}}$ acting only on $\mathbf{A}$ and $\nabla_{\mathbf{B}}$ only on $\mathbf{B}$, at which point we can replace them with the original $\nabla$.
Some examples #
Here we will see how various vector identities can be generalised to include $\nabla$, using the ideas from the previous section. All the identities I am using come from the Wikipedia page [Wik19].
You may want to try and do each of these yourself before reading the solution. Have a look at the title of the section, check the Wikipedia page [Wik19] for the corresponding vector identity, and have a play. If you get stuck read just enough of the solution until you find out what concept you were missing, and then go back to it. As they say, mathematics is not a spectator sport!
Example 1: $\nabla \cdot (\mathbf{A} \times \mathbf{B})$ #
The corresponding vector identity is

$$\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c}) = \mathbf{c} \cdot (\mathbf{a} \times \mathbf{b}) = \mathbf{b} \cdot (\mathbf{c} \times \mathbf{a}).$$

We can look at this as saying that the product is invariant under cyclic permutations, i.e. if you shift $\mathbf{a} \to \mathbf{b} \to \mathbf{c} \to \mathbf{a}$. If we look at $\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c})$ as something with three slots, $(\mathbf{a}, \mathbf{b}, \mathbf{c})$, this is saying that you can move everything one slot to the right (and the rightmost one 'cycles' back to the left), or you can move everything one slot to the left (and the leftmost one 'cycles' to the right). This pattern comes up all the time in mathematics and physics, so it's good to keep it in mind.
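As a quick computational sanity check of the cyclic property, here is a minimal sympy sketch (the symbol names are placeholders):

```python
import sympy as sp

a = sp.Matrix(sp.symbols('a1 a2 a3'))
b = sp.Matrix(sp.symbols('b1 b2 b3'))
c = sp.Matrix(sp.symbols('c1 c2 c3'))

# Scalar triple product a . (b x c).
triple = lambda u, v, w: u.dot(v.cross(w))

# Invariance under cyclic shifts of the three slots.
print(sp.expand(triple(a, b, c) - triple(c, a, b)) == 0)  # True
print(sp.expand(triple(a, b, c) - triple(b, c, a)) == 0)  # True
```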
Let's experiment and see where we go. Since every term will be a product of terms from $\mathbf{A}$ and terms from $\mathbf{B}$, we may expand

$$\nabla \cdot (\mathbf{A} \times \mathbf{B}) = \nabla_{\mathbf{A}} \cdot (\mathbf{A} \times \mathbf{B}) + \nabla_{\mathbf{B}} \cdot (\mathbf{A} \times \mathbf{B}).$$
We want to change this so that $\nabla_{\mathbf{A}}$ is acting on $\mathbf{A}$ and $\nabla_{\mathbf{B}}$ on $\mathbf{B}$; then we can replace them with the original $\nabla$. So let's cyclically permute the first term to the right, and the second to the left:

$$\nabla_{\mathbf{A}} \cdot (\mathbf{A} \times \mathbf{B}) = \mathbf{B} \cdot (\nabla_{\mathbf{A}} \times \mathbf{A}), \qquad \nabla_{\mathbf{B}} \cdot (\mathbf{A} \times \mathbf{B}) = \mathbf{A} \cdot (\mathbf{B} \times \nabla_{\mathbf{B}}).$$
Finally, we use $\mathbf{a} \times \mathbf{b} = -\mathbf{b} \times \mathbf{a}$ to re-write the last term:

$$\mathbf{A} \cdot (\mathbf{B} \times \nabla_{\mathbf{B}}) = -\mathbf{A} \cdot (\nabla_{\mathbf{B}} \times \mathbf{B}).$$
Now each subscripted $\nabla$ acts directly on its own field, so we can replace it with the original $\nabla$. We have thus derived

$$\nabla \cdot (\mathbf{A} \times \mathbf{B}) = \mathbf{B} \cdot (\nabla \times \mathbf{A}) - \mathbf{A} \cdot (\nabla \times \mathbf{B}).$$

Better yet, now we have an idea of where that strange minus sign came from. The first two terms have the same cyclic order in their slots $(\nabla, \mathbf{A}, \mathbf{B})$, and breaking this order in the third term comes at the expense of a minus sign.
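The derived identity can also be checked mechanically with sympy.vector, taking the components of $\mathbf{A}$ and $\mathbf{B}$ to be arbitrary functions of position (names are placeholders):

```python
import sympy as sp
from sympy.vector import CoordSys3D, divergence, curl

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

Ax, Ay, Az = [sp.Function(f'A{c}')(x, y, z) for c in 'xyz']
Bx, By, Bz = [sp.Function(f'B{c}')(x, y, z) for c in 'xyz']
A = Ax*N.i + Ay*N.j + Az*N.k
B = Bx*N.i + By*N.j + Bz*N.k

# div(A x B) = B . curl(A) - A . curl(B)
lhs = divergence(A ^ B)
rhs = B.dot(curl(A)) - A.dot(curl(B))
print(sp.simplify(lhs - rhs) == 0)  # True
```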
Example 2: $\nabla \times (\mathbf{A} \times \mathbf{B})$ #
The corresponding vector identity is

$$\mathbf{a} \times (\mathbf{b} \times \mathbf{c}) = \mathbf{b}(\mathbf{a} \cdot \mathbf{c}) - \mathbf{c}(\mathbf{a} \cdot \mathbf{b}).$$
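(A quick sympy sanity check of this 'BAC-CAB' identity, again with placeholder symbols:)

```python
import sympy as sp

a = sp.Matrix(sp.symbols('a1 a2 a3'))
b = sp.Matrix(sp.symbols('b1 b2 b3'))
c = sp.Matrix(sp.symbols('c1 c2 c3'))

# a x (b x c) = b (a . c) - c (a . b)
lhs = a.cross(b.cross(c))
rhs = b * a.dot(c) - c * a.dot(b)
print((lhs - rhs).expand() == sp.zeros(3, 1))  # True
```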
We thus have

$$\nabla \times (\mathbf{A} \times \mathbf{B}) = \nabla_{\mathbf{A}} \times (\mathbf{A} \times \mathbf{B}) + \nabla_{\mathbf{B}} \times (\mathbf{A} \times \mathbf{B}).$$
Let's look at the first term; the second will be analogous. Applying the identity with $(\mathbf{a}, \mathbf{b}, \mathbf{c}) = (\nabla_{\mathbf{A}}, \mathbf{A}, \mathbf{B})$ gives

$$\nabla_{\mathbf{A}} \times (\mathbf{A} \times \mathbf{B}) = \mathbf{A}(\nabla_{\mathbf{A}} \cdot \mathbf{B}) - \mathbf{B}(\nabla_{\mathbf{A}} \cdot \mathbf{A}).$$
Note that the product $\nabla_{\mathbf{A}} \cdot \mathbf{B}$ is *not* zero, as $\nabla_{\mathbf{A}}$ is a derivative operator which still acts on $\mathbf{A}$ wherever it sits in the expression (see [eqDerCommutative]). We rearrange the above using the commutativity of the dot product to write

$$\nabla_{\mathbf{A}} \times (\mathbf{A} \times \mathbf{B}) = (\mathbf{B} \cdot \nabla_{\mathbf{A}})\mathbf{A} - \mathbf{B}(\nabla_{\mathbf{A}} \cdot \mathbf{A}) = (\mathbf{B} \cdot \nabla)\mathbf{A} - \mathbf{B}(\nabla \cdot \mathbf{A}).$$
Swapping $\mathbf{A} \leftrightarrow \mathbf{B}$ we obtain

$$\nabla_{\mathbf{B}} \times (\mathbf{B} \times \mathbf{A}) = (\mathbf{A} \cdot \nabla)\mathbf{B} - \mathbf{A}(\nabla \cdot \mathbf{B}),$$

so

$$\nabla_{\mathbf{B}} \times (\mathbf{A} \times \mathbf{B}) = -\nabla_{\mathbf{B}} \times (\mathbf{B} \times \mathbf{A}) = \mathbf{A}(\nabla \cdot \mathbf{B}) - (\mathbf{A} \cdot \nabla)\mathbf{B}.$$
Putting the two together finally gives

$$\nabla \times (\mathbf{A} \times \mathbf{B}) = (\mathbf{B} \cdot \nabla)\mathbf{A} - \mathbf{B}(\nabla \cdot \mathbf{A}) + \mathbf{A}(\nabla \cdot \mathbf{B}) - (\mathbf{A} \cdot \nabla)\mathbf{B}.$$
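This result can be checked mechanically too. The sketch below assumes a small helper `conv` for the directional derivative $(\mathbf{U} \cdot \nabla)\mathbf{V}$, written out component by component; everything else is as in the earlier checks:

```python
import sympy as sp
from sympy.vector import CoordSys3D, Vector, divergence, curl

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

Ax, Ay, Az = [sp.Function(f'A{c}')(x, y, z) for c in 'xyz']
Bx, By, Bz = [sp.Function(f'B{c}')(x, y, z) for c in 'xyz']
A = Ax*N.i + Ay*N.j + Az*N.k
B = Bx*N.i + By*N.j + Bz*N.k

def conv(U, V):
    """The directional derivative (U . nabla)V, one component at a time."""
    out = Vector.zero
    for e in (N.i, N.j, N.k):
        comp = V.dot(e)
        out += (U.dot(N.i)*comp.diff(x) + U.dot(N.j)*comp.diff(y)
                + U.dot(N.k)*comp.diff(z))*e
    return out

# curl(A x B) = (B.nabla)A - B div(A) + A div(B) - (A.nabla)B
lhs = curl(A ^ B)
rhs = conv(B, A) - B*divergence(A) + A*divergence(B) - conv(A, B)
residual = lhs - rhs
print(all(sp.simplify(residual.dot(e)) == 0 for e in (N.i, N.j, N.k)))  # True
```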
Example 3: $\nabla \cdot (\psi \mathbf{A})$ #
Here $\psi$ is just an ordinary scalar function, and $\mathbf{A}$ a vector. This difference makes this one a little bit tricky, but on the plus side we won't have to look up any identities. Let's begin by expanding as usual (since every term will be a product of $\psi$ and terms from $\mathbf{A}$):

$$\nabla \cdot (\psi \mathbf{A}) = \nabla_{\psi} \cdot (\psi \mathbf{A}) + \nabla_{\mathbf{A}} \cdot (\psi \mathbf{A}).$$
For the second term we can pull the scalar through to get $\nabla_{\mathbf{A}} \cdot (\psi \mathbf{A}) = \psi(\nabla \cdot \mathbf{A})$. Let's have a think about what we mean by the first term. The derivative operator is a vector

$$\nabla_{\psi} = \left(\frac{\partial}{\partial x}, \frac{\partial}{\partial y}, \frac{\partial}{\partial z}\right), \qquad \text{[eqNablaPsi]}$$

and the quantity inside the brackets is a vector

$$\psi \mathbf{A} = (\psi A_x, \psi A_y, \psi A_z), \qquad \text{[eqPsiA]}$$

where $A_x$ is the $x$-component of $\mathbf{A}$, and so on. Taking the dot product of [eqNablaPsi] and [eqPsiA], and remembering that the derivatives act only on $\psi$, we can see that this will give us

$$\nabla_{\psi} \cdot (\psi \mathbf{A}) = \frac{\partial \psi}{\partial x}A_x + \frac{\partial \psi}{\partial y}A_y + \frac{\partial \psi}{\partial z}A_z = \mathbf{A} \cdot (\nabla \psi).$$
Putting all this together we arrive at

$$\nabla \cdot (\psi \mathbf{A}) = \mathbf{A} \cdot (\nabla \psi) + \psi(\nabla \cdot \mathbf{A}).$$
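One last sympy sketch confirms this, with a generic scalar function and vector field (placeholder names again):

```python
import sympy as sp
from sympy.vector import CoordSys3D, divergence, gradient

N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z

psi = sp.Function('psi')(x, y, z)
Ax, Ay, Az = [sp.Function(f'A{c}')(x, y, z) for c in 'xyz']
A = Ax*N.i + Ay*N.j + Az*N.k

# div(psi A) = A . grad(psi) + psi div(A)
lhs = divergence(psi * A)
rhs = A.dot(gradient(psi)) + psi * divergence(A)
print(sp.simplify(lhs - rhs) == 0)  # True
```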
Conclusion #
In the above we learned a neat trick that lets us treat the derivative operator like just any other vector. This is a cool and useful idea, which I hadn't seen anywhere before I came across it in chapter 27-3 of [FLS63]. Leave a comment or a tweet if you find other cool applications, or have ideas for further investigation. I notably did not touch on any of the second derivatives, such as $\nabla \cdot (\nabla \times \mathbf{A})$ or $\nabla \times (\nabla \times \mathbf{A})$, and I'm sure this trick would also simplify a lot of those. I also had a look at $\nabla(\mathbf{A} \cdot \mathbf{B})$, and while you could use the trick there, it turned out to be a bit complicated and involved some thinking to 'guess' terms which would fit what you wanted. Let me know if you find a nice simple way of doing this.
As a final application, u/Muphrid15 pointed out that this idea can be used to generalise the derivative operator to geometric algebra (also known as Clifford algebra). This is a sort of algebra for vector spaces, allowing you to do things like add one vector space to another or adjoin and subtract dimensions, and many calculations in vector algebra can be simplified immensely when put in this language.
Follow @ruvi_l on Twitter for more posts like this, or join the discussion on Reddit.
References #
[FLS63] Feynman, R. P., Leighton, R. B., & Sands, M. (1963). The Feynman Lectures on Physics, Volume II. Addison-Wesley.
[Wik19] Wikipedia contributors. (2019, February 20). Vector calculus identities. In Wikipedia, The Free Encyclopedia. Retrieved 23:01, February 22, 2019, from https://en.wikipedia.org/wiki/Vector_calculus_identities