WEBVTT - autoGenerated
00:00:00.000 --> 00:00:05.000
So numerical methods are obviously something to do with us wanting
00:00:05.000 --> 00:00:11.000
to solve the equations that we have to solve. And I'm not talking about only the
00:00:11.000 --> 00:00:15.000
full model equations, I'm also talking about the little equations that you
00:00:15.000 --> 00:00:20.000
derive as part of your parameterization, right? Ultimately at the end of the day
00:00:20.000 --> 00:00:25.000
you have some sort of equation that you have to solve. And many of the equations
00:00:25.000 --> 00:00:31.000
that we have, we originally consider as continuous functions of something.
00:00:31.000 --> 00:00:37.000
Is this working? It seems a bit high. So we have these continuous functions and for sake of
00:00:37.000 --> 00:00:42.000
argument we start with functions that have only one dimension, so we just think
00:00:42.000 --> 00:00:48.000
of it as space. And a lot of the time we see in our equations derivatives of
00:00:48.000 --> 00:00:54.000
those functions with respect to that independent variable like space, okay? So
00:00:54.000 --> 00:00:59.000
the trouble in a model is that some of these things we
00:00:59.000 --> 00:01:04.000
cannot solve analytically. In my simple examples that I'll give you in these
00:01:04.000 --> 00:01:08.000
lectures, or in this lecture tomorrow mostly, we'll pick something that we can
00:01:08.000 --> 00:01:13.000
solve analytically just to show what happens or what can happen if we are not
00:01:13.000 --> 00:01:17.000
careful in solving it numerically. But a lot of the things we deal with, a lot of
00:01:17.000 --> 00:01:20.000
the equations we deal with, we cannot solve analytically. So we can't just write
00:01:20.000 --> 00:01:24.000
the equations down, but we have to implement a numerical algorithm to solve
00:01:24.000 --> 00:01:29.000
this. And that's what the numerical methods bit I'm going to talk about is
00:01:29.000 --> 00:01:33.000
going to be about. And the real purpose of these two lectures is to give you in
00:01:33.000 --> 00:01:39.000
the end one example of how you can screw it up really badly if you're not careful.
00:01:39.000 --> 00:01:43.000
And that's all I want you to go home with: I have to be careful with
00:01:43.000 --> 00:01:49.000
numerics. Everything I say in the two lectures, all the details, all the stuff
00:01:49.000 --> 00:01:54.000
that you're going to see is only there to remind you that if you're not careful,
00:01:54.000 --> 00:02:00.000
you can screw it up really, really badly. Okay? And since I had to assume that you
00:02:00.000 --> 00:02:06.000
basically had no formal training in numerical methods, I never did, by the way,
00:02:06.000 --> 00:02:10.000
I kept this very simple. And it may be too simple today. It's not going to be
00:02:10.000 --> 00:02:14.000
simple tomorrow, but it's going to be fun. I promise it's going to be fun tomorrow.
00:02:14.000 --> 00:02:18.000
And tomorrow also, before I forget, I'll switch with Robert for some personal
00:02:18.000 --> 00:02:22.000
reasons. I want to have the second lecture since I know the third one is
00:02:22.000 --> 00:02:26.000
always one that you sleep through anyway. So Robert will have to deal with that
00:02:26.000 --> 00:02:32.000
problem tomorrow. And I'll then have to deal with it again on Wednesday. Okay, so
00:02:32.000 --> 00:02:37.000
in our models, we really only know these continuous functions at discrete points.
00:02:37.000 --> 00:02:42.000
That's the whole thing about numerics here, right? So imagine when this is x,
00:02:43.000 --> 00:02:50.000
our function is defined along x. And what we do in models is we put a grid along x.
00:02:50.000 --> 00:02:56.000
And what that implies is that we really only know the function at these points.
00:02:56.000 --> 00:03:00.000
What happens between those points, we have no knowledge of, basically.
00:03:00.000 --> 00:03:06.000
So we are discretizing our function. So we're on the same page.
00:03:06.000 --> 00:03:16.000
We're going to call points in x, xj. And where that point is in x is just j,
00:03:16.000 --> 00:03:22.000
which is an integer, times the spacing of the points, for which we often use the word
00:03:22.000 --> 00:03:28.000
resolution when we talk about models. We should be precise and call it the grid
00:03:28.000 --> 00:03:33.000
spacing, but we talked about that a little bit before. But that's semantics, so keep
00:03:33.000 --> 00:03:37.000
calling it resolution if you want. And then the function at that point is simply
00:03:37.000 --> 00:03:42.000
the function at point xj, which is just the function at point j delta x.
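This notation can be sketched in a few lines (a minimal illustration; sin is just a stand-in for some function u that we happen to know at the grid points):

```python
import numpy as np

# Grid spacing ("resolution") dx and integer index j: x_j = j * dx.
dx = 0.1
j = np.arange(11)     # j = 0, 1, ..., 10
x = j * dx            # the grid points x_j

# We only know the continuous function at these discrete points:
u = np.sin(x)         # u_j = u(x_j) = u(j * dx)
```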
00:03:42.000 --> 00:03:47.000
That's just for notation purposes. So where do finite differences come from?
00:03:47.000 --> 00:03:51.000
Where do these things that we do, what we really want to do is we want to express
00:03:51.000 --> 00:03:56.000
something like this derivative on that grid. Does anybody remember from their
00:03:56.000 --> 00:04:01.000
formal training where it all starts? How do you derive it? What do you do?
00:04:03.000 --> 00:04:12.000
Yeah, exactly. How do you do that? Hang on. I'm going to try something fancy here.
00:04:12.000 --> 00:04:20.000
Hey, it worked. Look at that. So basically, we have this function.
00:04:20.000 --> 00:04:27.000
And you start with a Taylor expansion, which is exactly right. And we're going to do it
00:04:27.000 --> 00:04:30.000
in two directions, but we're going to write it down in one equation.
00:04:30.000 --> 00:04:38.000
So you basically say the function at a distance delta x from x can be expressed in a Taylor expansion.
00:04:38.000 --> 00:04:46.000
That's basically the function at point xj. Well, let's put the j here so we can make
00:04:46.000 --> 00:04:52.000
it simpler in a moment. And then it's plus minus the first derivative of that function,
00:04:52.000 --> 00:04:57.000
du by dx. Actually, I can write it with a straight d because it depends only on one thing,
00:04:57.000 --> 00:05:10.000
du by dx, delta x plus, the second one is only a plus, the second derivative to be consistent,
00:05:10.000 --> 00:05:17.000
delta x squared over factorial 2, which happens to be 2, plus or minus the third.
00:05:17.000 --> 00:05:26.000
And then we'll stop because it gets too boring, times delta x to the third over factorial 3, plus more stuff.
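As a quick numerical sanity check of that expansion (a sketch, using exp as a stand-in for u so that every derivative is easy):

```python
import math

# Taylor expansion of u(x +/- dx) around x, truncated after the third
# derivative, using u(x) = exp(x) so every derivative equals exp(x).
x, dx = 0.5, 0.01
u = math.exp(x)       # u, u', u'', u''' are all exp(x) here

plus  = u + u * dx + u * dx**2 / math.factorial(2) + u * dx**3 / math.factorial(3)
minus = u - u * dx + u * dx**2 / math.factorial(2) - u * dx**3 / math.factorial(3)

# The leftover ("plus more stuff") terms are O(dx^4), so both errors are tiny.
err_plus  = abs(plus  - math.exp(x + dx))
err_minus = abs(minus - math.exp(x - dx))
```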
00:05:27.000 --> 00:05:31.000
So that's where it's all coming from. So you do a Taylor expansion.
00:05:31.000 --> 00:05:37.000
And what you can immediately see, what we're really looking for is an expression for this,
00:05:37.000 --> 00:05:43.000
which I will also, in a minute, call u prime just to write less.
00:05:43.000 --> 00:05:50.000
And this will be double prime. This will be triple prime, just so we have to write less, essentially.
00:05:50.000 --> 00:06:01.000
So what you perhaps can immediately see is one of the dilemmas here.
00:06:01.000 --> 00:06:05.000
We want to solve for this, but we already have two equations to do it in principle.
00:06:05.000 --> 00:06:11.000
We have a plus series and we have a minus series for plus and minus delta x.
00:06:11.000 --> 00:06:16.000
So in principle, we can imagine more than one way of doing this, right?
00:06:16.000 --> 00:06:23.000
So numerics, this is kind of what you learn here in this very simple setup, is about making choices as well.
00:06:23.000 --> 00:06:28.000
You have more than one choice how to represent this derivative.
00:06:28.000 --> 00:06:32.000
So for instance, if you took the plus series and solved for u prime,
00:06:32.000 --> 00:06:37.000
so if you solve for u prime delta x for the plus series,
00:06:37.000 --> 00:06:51.000
then that will be u of xj plus delta x minus u of xj.
00:06:51.000 --> 00:06:59.000
And then there's these terms, minus u double prime delta x squared over 2.
00:06:59.000 --> 00:07:02.000
And then I'll just make dots from here.
00:07:02.000 --> 00:07:07.000
We might come back to the third order terms in a minute, but right now we'll leave them.
00:07:07.000 --> 00:07:12.000
We can divide this by delta x.
00:07:12.000 --> 00:07:27.000
And then u prime at the point that we are interested in is simply this difference of the two values divided by delta x.
00:07:27.000 --> 00:07:31.000
And then there are some terms, there's some residual left.
00:07:31.000 --> 00:07:38.000
And in numerics, you often look at the order of the delta x in the leading term of your residual.
00:07:38.000 --> 00:07:44.000
We've divided by delta x, so the order of the residual is delta x.
00:07:44.000 --> 00:07:48.000
And this becomes important.
00:07:48.000 --> 00:07:49.000
We've made an error.
00:07:49.000 --> 00:07:54.000
This residual basically means if we express this derivative like that, then we're making an error.
00:07:54.000 --> 00:08:00.000
And the size of the error depends on the size of delta x.
00:08:01.000 --> 00:08:02.000
The bigger it gets, the bigger the error gets.
00:08:02.000 --> 00:08:09.000
But also, as we reduce delta x and make it smaller and smaller, one of the questions we have is,
00:08:09.000 --> 00:08:15.000
how fast will this approximation become the real derivative that we are interested in?
00:08:15.000 --> 00:08:16.000
This is the real derivative here.
00:08:16.000 --> 00:08:19.000
This is the approximation to it.
00:08:19.000 --> 00:08:21.000
And that depends on the order of this term.
00:08:21.000 --> 00:08:23.000
And so this is often called the truncation error.
00:08:23.000 --> 00:08:27.000
We make a truncation error in deriving this derivative.
00:08:27.000 --> 00:08:38.000
And the order of the truncation error is important for how fast we converge towards the real derivative as we make delta x smaller and smaller and smaller as it goes to 0, essentially.
00:08:38.000 --> 00:08:46.000
And so this particular approximation is called a first order approximation because the delta x appears with the power 1.
00:08:46.000 --> 00:08:52.000
If it appears with the power 2, then it's called a second order approximation, and so on and so forth.
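A short sketch of what first order accuracy means in practice (sin is a stand-in function whose exact derivative, cos, we can compare against):

```python
import math

# Forward difference: u'(x) ~ (u(x + dx) - u(x)) / dx.
def forward_diff(u, x, dx):
    return (u(x + dx) - u(x)) / dx

x = 1.0
exact = math.cos(x)   # true derivative of sin
err_coarse = abs(forward_diff(math.sin, x, 0.10) - exact)
err_fine   = abs(forward_diff(math.sin, x, 0.05) - exact)

# First order: halving dx roughly halves the error (ratio near 2, not 4).
ratio = err_coarse / err_fine
```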
00:08:52.000 --> 00:08:55.000
Now, we could do the same thing.
00:08:55.000 --> 00:09:01.000
And I'm going to simplify the notation a little bit at the same time by using the minus series up there.
00:09:01.000 --> 00:09:04.000
So then this would become u.
00:09:04.000 --> 00:09:09.000
And instead of writing all this xj mumbo jumbo, we will start writing it like this.
00:09:09.000 --> 00:09:10.000
So just say j minus 1.
00:09:10.000 --> 00:09:13.000
So that's the point xj minus delta x.
00:09:13.000 --> 00:09:14.000
Yeah?
00:09:14.000 --> 00:09:17.000
So you said it's order of delta x, not delta x squared?
00:09:17.000 --> 00:09:21.000
No, it's order of delta x because we divided by delta x.
00:09:21.000 --> 00:09:25.000
So this delta x squared got divided by the delta x.
00:09:25.000 --> 00:09:26.000
So it's delta x.
00:09:26.000 --> 00:09:28.000
It's order of delta x.
00:09:28.000 --> 00:09:30.000
Yeah?
00:09:30.000 --> 00:09:32.000
So we could use the minus series.
00:09:32.000 --> 00:09:34.000
So then all of this would be minuses.
00:09:34.000 --> 00:09:50.000
Then uj minus 1 equals uj minus u prime j delta x plus u double prime at j delta x squared
00:09:50.000 --> 00:09:54.000
over 2 plus stuff.
00:09:54.000 --> 00:10:01.000
And now, if you solve that for the uj prime, which we could also do,
00:10:01.000 --> 00:10:05.000
bring that to the other side, so it becomes a plus, and divide by delta x.
00:10:05.000 --> 00:10:13.000
So we get uj minus uj minus 1 divided by delta x plus a residual.
00:10:13.000 --> 00:10:18.000
And the residual, again, has the order of delta x.
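The backward difference behaves the same way; a minimal sketch with the same stand-in function:

```python
import math

# Backward difference: u'(x) ~ (u(x) - u(x - dx)) / dx.
def backward_diff(u, x, dx):
    return (u(x) - u(x - dx)) / dx

x = 1.0
exact = math.cos(x)   # true derivative of sin
err = abs(backward_diff(math.sin, x, 0.1) - exact)
# The residual is again O(dx): with dx = 0.1 the error is roughly
# 8 percent of the true derivative.
```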
00:10:18.000 --> 00:10:22.000
So this kind of very simple exercise tells you two things.
00:10:22.000 --> 00:10:24.000
Where do the finite differences come from?
00:10:24.000 --> 00:10:27.000
Well, they come from Taylor expansions.
00:10:27.000 --> 00:10:28.000
Easy.
00:10:28.000 --> 00:10:32.000
But the other thing it tells you, there's no unique answer
00:10:32.000 --> 00:10:34.000
how to represent the derivative.
00:10:34.000 --> 00:10:38.000
There's more than one way of doing it, actually.
00:10:38.000 --> 00:10:40.000
And this is an extremely trivial case in numerics.
00:10:40.000 --> 00:10:45.000
You can consider way more complicated things to do this.
00:10:45.000 --> 00:10:52.000
But already in this very trivial one, we have at least two choices as to how to do this.
00:10:52.000 --> 00:10:56.000
All right, do I want to bring this back?
00:10:56.000 --> 00:10:57.000
Maybe I do.
00:10:57.000 --> 00:11:00.000
Maybe I forgot to say something.
00:11:00.000 --> 00:11:03.000
This is spooky.
00:11:03.000 --> 00:11:05.000
So this was our first one.
00:11:05.000 --> 00:11:07.000
Oh, yeah, I did forget to say something.
00:11:07.000 --> 00:11:13.000
So in this particular example, the solution depends on two points on our grid.
00:11:13.000 --> 00:11:17.000
One is the point where we actually want to calculate the derivative.
00:11:17.000 --> 00:11:23.000
And one is the point plus delta x further on, on the x-axis.
00:11:23.000 --> 00:11:24.000
So it's forward.
00:11:24.000 --> 00:11:28.000
And so therefore, this particular approximation of the derivative
00:11:28.000 --> 00:11:30.000
is called a forward difference, which you may have heard about.
00:11:30.000 --> 00:11:35.000
So these are terms numerics people throw around all the time.
00:11:35.000 --> 00:11:38.000
So this is a forward difference.
00:11:38.000 --> 00:11:41.000
We need information at two points to calculate the derivative.
00:11:41.000 --> 00:11:43.000
It's pretty straightforward.
00:11:43.000 --> 00:11:45.000
We've done the minus series.
00:11:45.000 --> 00:11:50.000
And we ended up with the point itself and the point one step backward in space.
00:11:50.000 --> 00:11:51.000
Surprise.
00:11:51.000 --> 00:11:55.000
This is called the backward difference.
00:11:55.000 --> 00:11:57.000
That won't surprise you very much.
00:11:57.000 --> 00:11:58.000
We then looked.
00:11:58.000 --> 00:12:00.000
This is just summarizing what we've done.
00:12:00.000 --> 00:12:04.000
We then took this one and looked at the residual terms
00:12:04.000 --> 00:12:05.000
that are left.
00:12:05.000 --> 00:12:07.000
And that's called the truncation error.
00:12:07.000 --> 00:12:11.000
We'll come back to that for whole equations, basically.
00:12:11.000 --> 00:12:13.000
And we saw that it was of order delta x.
00:12:13.000 --> 00:12:15.000
The other terms are then higher orders,
00:12:15.000 --> 00:12:20.000
delta x squared, delta x cubed, and so on and so forth.
00:12:20.000 --> 00:12:22.000
And so as a result, this is called
00:12:22.000 --> 00:12:28.000
a first order accurate numerical scheme.
00:12:28.000 --> 00:12:34.000
Now we can do something else funky with our equation here,
00:12:34.000 --> 00:12:36.000
just to confuse you even more.
00:12:36.000 --> 00:12:39.000
There's another one we can derive.
00:12:39.000 --> 00:12:40.000
Switch this one off.
00:12:40.000 --> 00:12:42.000
Just from this simple Taylor series,
00:12:42.000 --> 00:12:45.000
we'll get three expressions for the first derivative.
00:12:45.000 --> 00:12:48.000
How cool is that?
00:12:48.000 --> 00:12:56.000
And all we need to do is apply a little trick here.
00:12:56.000 --> 00:12:58.000
So the first one we derived by only using
00:12:58.000 --> 00:13:00.000
the plus series of the Taylor expansion.
00:13:00.000 --> 00:13:04.000
And the second one we derived by only using the minus series.
00:13:04.000 --> 00:13:06.000
Surprise, one of the things you can do
00:13:06.000 --> 00:13:17.000
is you could subtract the minus series from the plus series.
00:13:17.000 --> 00:13:19.000
We can do that.
00:13:19.000 --> 00:13:20.000
It's only math.
00:13:20.000 --> 00:13:22.000
Do whatever we want.
00:13:22.000 --> 00:13:25.000
So we want to subtract the minus series from the plus series.
00:13:25.000 --> 00:13:26.000
So what do we get?
00:13:26.000 --> 00:13:30.000
We get uj plus 1.
00:13:30.000 --> 00:13:33.000
That's the plus series on the left-hand side.
00:13:33.000 --> 00:13:35.000
Minus uj minus 1.
00:13:35.000 --> 00:13:37.000
That's our left-hand side.
00:13:37.000 --> 00:13:38.000
That's easy.
00:13:38.000 --> 00:13:40.000
And then what do we get on the right-hand side?
00:13:40.000 --> 00:13:42.000
Well, let's do the plus series.
00:13:42.000 --> 00:13:50.000
The plus series is uj plus u prime delta x plus u double
00:13:50.000 --> 00:13:56.000
prime delta x squared over 2 plus lots of stuff.
00:13:56.000 --> 00:13:58.000
And then we want to subtract the minus series.
00:13:58.000 --> 00:14:01.000
So we get a minus uj.
00:14:01.000 --> 00:14:03.000
And then this gives minus minus.
00:14:03.000 --> 00:14:08.000
This gives plus u prime delta x.
00:14:08.000 --> 00:14:14.000
And this gives a minus u double prime delta x squared over 2.
00:14:14.000 --> 00:14:18.000
Actually, for illustrative purposes,
00:14:18.000 --> 00:14:21.000
we need to keep one more term.
00:14:21.000 --> 00:14:23.000
So we actually, in this case, we want
00:14:23.000 --> 00:14:26.000
to keep the third derivative term, which
00:14:26.000 --> 00:14:33.000
is delta x to the third divided by factorial 3.
00:14:33.000 --> 00:14:38.000
And here, this one becomes also a plus u triple prime
00:14:38.000 --> 00:14:41.000
delta x cubed over factorial 3.
00:14:41.000 --> 00:14:43.000
And then there's stuff, plus stuff.
00:14:46.000 --> 00:14:47.000
So we do that.
00:14:47.000 --> 00:14:48.000
Easy.
00:14:48.000 --> 00:14:52.000
Left-hand side stays the same.
00:14:52.000 --> 00:14:53.000
uj goes.
00:14:53.000 --> 00:14:57.000
We end up with 2 u prime delta x.
00:14:57.000 --> 00:15:00.000
The second derivatives go.
00:15:00.000 --> 00:15:07.000
And then we end up with plus 2 u triple prime delta x cubed
00:15:07.000 --> 00:15:11.000
over factorial 3 plus more things.
00:15:11.000 --> 00:15:15.000
OK, so again, we want to solve this for u prime.
00:15:15.000 --> 00:15:17.000
To do that, we need to switch sides
00:15:17.000 --> 00:15:20.000
and divide by 2 delta x.
00:15:20.000 --> 00:15:25.000
So u prime j, which I should actually put here everywhere
00:15:25.000 --> 00:15:32.000
to be absolutely correct, can be written as uj plus 1 minus uj
00:15:32.000 --> 00:15:42.000
minus 1 divided by 2 delta x minus u triple prime
00:15:42.000 --> 00:15:49.000
at j delta x squared over factorial 3 plus more.
00:15:49.000 --> 00:15:52.000
So this becomes our approximation.
00:15:52.000 --> 00:15:57.000
Now, to calculate the derivative of point j,
00:15:57.000 --> 00:16:01.000
we are actually using the point ahead and the point behind.
00:16:01.000 --> 00:16:04.000
So the point we are dealing with, it sits at the center.
00:16:04.000 --> 00:16:08.000
Surprise, this is called a centered difference.
00:16:08.000 --> 00:16:12.000
It has 2 delta x here.
00:16:12.000 --> 00:16:16.000
So you're doing the calculation actually over two grid intervals.
00:16:16.000 --> 00:16:19.000
And you can see that the order of what's left
00:16:19.000 --> 00:16:21.000
is now quadratic in delta x.
00:16:21.000 --> 00:16:23.000
So that's a second order accurate scheme.
00:16:23.000 --> 00:16:27.000
All that means is that if you make the delta x smaller
00:16:27.000 --> 00:16:31.000
and smaller, you're approaching the truth faster.
00:16:31.000 --> 00:16:34.000
So in principle, and I will convince you otherwise
00:16:34.000 --> 00:16:37.000
tomorrow, this looks good.
00:16:37.000 --> 00:16:42.000
Well, I'll show you tomorrow that this statement is flawed.
00:16:42.000 --> 00:16:44.000
But in principle, right now, we would like this one, wouldn't we,
00:16:44.000 --> 00:16:46.000
for solving our problems, because it's
00:16:46.000 --> 00:16:48.000
a second order accurate.
00:16:48.000 --> 00:16:49.000
So it's more accurate.
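And you can check that claim numerically; a sketch comparing convergence (again with sin as the stand-in function):

```python
import math

# Centered difference: u'(x) ~ (u(x + dx) - u(x - dx)) / (2 * dx).
def centered_diff(u, x, dx):
    return (u(x + dx) - u(x - dx)) / (2 * dx)

x = 1.0
exact = math.cos(x)   # true derivative of sin
err_coarse = abs(centered_diff(math.sin, x, 0.10) - exact)
err_fine   = abs(centered_diff(math.sin, x, 0.05) - exact)

# Second order: halving dx cuts the error by about 2 squared = 4.
ratio = err_coarse / err_fine
```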
00:16:49.000 --> 00:16:51.000
And you can show this.
00:16:51.000 --> 00:16:53.000
I have to switch the projector back on.
00:16:53.000 --> 00:16:56.000
I made a nice graphic for you.
00:16:56.000 --> 00:16:59.000
Actually, I didn't make it for you, but I use it for you,
00:16:59.000 --> 00:17:02.000
just to be brutally honest.
00:17:02.000 --> 00:17:04.000
So this we just talked about.
00:17:04.000 --> 00:17:06.000
We have a second order accurate scheme.
00:17:06.000 --> 00:17:08.000
But you can interpret all this graphically,
00:17:08.000 --> 00:17:11.000
because remember, the derivative is a tangent.
00:17:11.000 --> 00:17:14.000
So imagine our function was an exponential.
00:17:14.000 --> 00:17:15.000
That's the black line.
00:17:15.000 --> 00:17:19.000
I'm not sure you can see it all from the back.
00:17:19.000 --> 00:17:20.000
So oops.
00:17:20.000 --> 00:17:24.000
So then the true derivative is the red line here.
00:17:24.000 --> 00:17:27.000
So at this point, we calculate the derivative.
00:17:27.000 --> 00:17:30.000
If you all remember, simply the tangent
00:17:30.000 --> 00:17:33.000
on the function at that point.
00:17:33.000 --> 00:17:35.000
And then we can calculate that derivative
00:17:35.000 --> 00:17:36.000
as a forward difference.
00:17:36.000 --> 00:17:40.000
I just took two points of the exponential,
00:17:40.000 --> 00:17:41.000
and calculated it as a forward difference.
00:17:41.000 --> 00:17:46.000
But then I moved it so that it goes through the same point.
00:17:46.000 --> 00:17:49.000
And as a forward difference, the slope of this would be this.
00:17:49.000 --> 00:17:51.000
With a backward difference, the slope would be like that.
00:17:51.000 --> 00:17:54.000
And the centered difference would be like that.
00:17:54.000 --> 00:17:55.000
So they're all wrong, for starters.
00:17:55.000 --> 00:18:00.000
None of them is the true derivative, the true tangent.
00:18:00.000 --> 00:18:03.000
And we see the centered one is actually the best approximation
00:18:03.000 --> 00:18:05.000
per se.
00:18:05.000 --> 00:18:09.000
And that's consistent with it being second order accurate.
00:18:09.000 --> 00:18:14.000
That's a more accurate approximation of this tangent.
00:18:14.000 --> 00:18:15.000
Does that make sense?
00:18:15.000 --> 00:18:19.000
So it's just simple math.
00:18:19.000 --> 00:18:24.000
Finally, just for illustration, what
00:18:24.000 --> 00:18:25.000
about higher order derivatives?
00:18:25.000 --> 00:18:27.000
What if we have a second derivative?
00:18:27.000 --> 00:18:28.000
What do we do then?
00:18:28.000 --> 00:18:31.000
So it turns out, with our simple Taylor series,
00:18:31.000 --> 00:18:35.000
there's one thing we haven't done yet.
00:18:35.000 --> 00:18:37.000
What haven't we done yet with this?
00:18:37.000 --> 00:18:39.000
Then we've exhausted what we can
00:18:39.000 --> 00:18:41.000
do with the simple Taylor series.
00:18:44.000 --> 00:18:46.000
And that gives us an impression on how
00:18:46.000 --> 00:18:50.000
we would look at higher order derivatives.
00:18:50.000 --> 00:18:53.000
Well, we haven't added them yet, have we?
00:18:53.000 --> 00:18:54.000
We have used them individually.
00:18:54.000 --> 00:18:56.000
We've subtracted them from each other.
00:18:56.000 --> 00:18:59.000
So now we're going to add them to each other, see what we get.
00:18:59.000 --> 00:19:05.000
So if you add the two series, so on the left hand side,
00:19:05.000 --> 00:19:10.000
we're going to get uj plus 1 plus uj minus 1,
00:19:10.000 --> 00:19:12.000
because we are adding them.
00:19:12.000 --> 00:19:18.000
On the right hand side, we're going to get uj plus u prime
00:19:18.000 --> 00:19:25.000
delta x plus u double prime at j,
00:19:25.000 --> 00:19:28.000
delta x squared over 2 plus more stuff.
00:19:28.000 --> 00:19:30.000
I'm going to stop there.
00:19:30.000 --> 00:19:36.000
And then we want to add it, so we get another uj plus uj.
00:19:36.000 --> 00:19:40.000
We get a minus uj prime delta x.
00:19:40.000 --> 00:19:44.000
And we get another plus u double prime j delta x squared.
00:19:44.000 --> 00:19:45.000
You know the drill by now.
00:19:45.000 --> 00:19:50.000
It's pretty unexciting at this point.
00:19:50.000 --> 00:19:52.000
All right, so we can work that out.
00:19:52.000 --> 00:19:54.000
The first derivative disappears, which
00:19:54.000 --> 00:19:56.000
is handy because we wanted to get an expression
00:19:56.000 --> 00:19:57.000
for the second one.
00:19:57.000 --> 00:19:59.000
So let's try and do it in our head
00:19:59.000 --> 00:20:02.000
and see whether we can get it right.
00:20:02.000 --> 00:20:06.000
So we actually get u double prime j, just to speed things
00:20:06.000 --> 00:20:11.000
up, is this uj plus 1 plus, actually,
00:20:11.000 --> 00:20:16.000
we do the first minus of those two guys, minus 2 uj
00:20:16.000 --> 00:20:22.000
plus uj minus 1 divided by delta x squared.
00:20:22.000 --> 00:20:30.000
There's a 2 somewhere that I messed up, isn't there?
00:20:33.000 --> 00:20:34.000
No, actually.
00:20:37.000 --> 00:20:38.000
Oh, there's a 2 here, yeah.
00:20:38.000 --> 00:20:40.000
This becomes a 2, the 2 cancels.
00:20:40.000 --> 00:20:40.000
That's right.
00:20:40.000 --> 00:20:41.000
No, it's all fine.
00:20:41.000 --> 00:20:46.000
Plus something order of delta x squared, actually.
00:20:46.000 --> 00:20:47.000
Because the third term disappears,
00:20:47.000 --> 00:20:48.000
the fourth term is there.
00:20:48.000 --> 00:20:51.000
You divide it by delta x squared.
00:20:51.000 --> 00:20:55.000
You end up with a second order term.
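That stencil for the second derivative can be sketched and checked the same way (sin as the stand-in, whose second derivative is minus sin):

```python
import math

# Second derivative from adding the two Taylor series:
# u''(x) ~ (u(x + dx) - 2*u(x) + u(x - dx)) / dx^2, with an O(dx^2) residual.
def second_diff(u, x, dx):
    return (u(x + dx) - 2 * u(x) + u(x - dx)) / dx**2

x, dx = 1.0, 0.01
approx = second_diff(math.sin, x, dx)
exact = -math.sin(x)      # true second derivative of sin
err = abs(approx - exact)
```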
00:20:55.000 --> 00:20:57.000
OK, so that's just an illustration
00:20:57.000 --> 00:21:00.000
where all these funky finite differences come from.
00:21:00.000 --> 00:21:03.000
In some sense, my rule is you can almost make them up.
00:21:03.000 --> 00:21:05.000
But then you have to test whether they
00:21:05.000 --> 00:21:07.000
are consistent with what you're actually trying to do.
00:21:07.000 --> 00:21:10.000
And that brings us to the next topic.
00:21:10.000 --> 00:21:14.000
Really, what we encounter isn't single derivatives,
00:21:14.000 --> 00:21:16.000
but we encounter equations.
00:21:16.000 --> 00:21:18.000
And this will be the equation we're
00:21:18.000 --> 00:21:22.000
going to look at for the rest of the course.
00:21:22.000 --> 00:21:25.000
And I may have time tomorrow to talk a little bit
00:21:25.000 --> 00:21:26.000
about the real world.
00:21:28.000 --> 00:21:30.000
So we are usually dealing with functions
00:21:30.000 --> 00:21:34.000
that are functions of more than one variable.
00:21:34.000 --> 00:21:37.000
So the simplest we will do is we'll add another one.
00:21:37.000 --> 00:21:38.000
We'll add time.
00:21:38.000 --> 00:21:42.000
So we'll have time in one space dimension in our problem.
00:21:42.000 --> 00:21:44.000
And one of the equations that is sort of relative
00:21:44.000 --> 00:21:46.000
of the equations that we have to solve
00:21:46.000 --> 00:21:51.000
is this one, where we have the derivative of the function u
00:21:51.000 --> 00:21:57.000
in time, plus a constant c times du by dx, equals 0.
00:21:57.000 --> 00:21:59.000
It's a partial differential equation
00:21:59.000 --> 00:22:02.000
because these are partial derivatives.
00:22:02.000 --> 00:22:05.000
To make our life simple, we assume c is a constant.
00:22:05.000 --> 00:22:08.000
Now one of the close relatives, why it's a close relative
00:22:08.000 --> 00:22:09.000
is this.
00:22:09.000 --> 00:22:13.000
If you look at our equations of motion, say the u equation,
00:22:13.000 --> 00:22:17.000
if you replace that c with u, those two terms will appear.
00:22:17.000 --> 00:22:19.000
But by replacing the c with a u, you're
00:22:19.000 --> 00:22:24.000
making the equation non-linear, which makes it very nasty.
00:22:24.000 --> 00:22:28.000
And then it becomes much more difficult to solve, actually.
00:22:28.000 --> 00:22:29.000
And new problems arise, which I may
00:22:29.000 --> 00:22:31.000
have time to talk about tomorrow.
00:22:31.000 --> 00:22:32.000
I may not.
00:22:32.000 --> 00:22:35.000
We'll see how far we get today.
00:22:35.000 --> 00:22:37.000
So we keep it simple for ourselves.
00:22:37.000 --> 00:22:41.000
The other nice thing, and we'll probably do it today
00:22:41.000 --> 00:22:42.000
just because we can.
00:22:42.000 --> 00:22:44.000
This one, we can solve analytically.
00:22:44.000 --> 00:22:49.000
If you replace c by u, that gets much harder, if not impossible,
00:22:49.000 --> 00:22:50.000
I think.
00:22:50.000 --> 00:22:53.000
So this is known as the linear advection equation.
00:22:53.000 --> 00:22:56.000
And c simply represents the velocity with which you're
00:22:56.000 --> 00:22:58.000
advecting stuff.
00:22:58.000 --> 00:23:00.000
It's also the phase speed of a wave, if you wish.
00:23:00.000 --> 00:23:04.000
If your problem has waves, which we will see this problem has,
00:23:04.000 --> 00:23:06.000
then the c is the phase speed of the wave.
00:23:06.000 --> 00:23:10.000
But you can think of it as any signal you like just moving
00:23:10.000 --> 00:23:13.000
along the x-axis in this problem.
00:23:13.000 --> 00:23:16.000
So we can draw a function, some shape,
00:23:16.000 --> 00:23:19.000
and that shape just moves along in the analytical solution,
00:23:19.000 --> 00:23:22.000
just moves along the x-axis.
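That analytic solution, u(x, t) = f(x - c t) for any shape f, can be sketched directly (the Gaussian shape and the value of c are arbitrary choices here):

```python
import numpy as np

# Analytic solution of du/dt + c*du/dx = 0: any initial shape f(x)
# just translates along x with speed c, so u(x, t) = f(x - c*t).
c = 2.0
f = lambda s: np.exp(-s ** 2)        # some arbitrary shape

x = np.linspace(-5.0, 5.0, 201)      # grid spacing 0.05
u_t0 = f(x - c * 0.0)                # the shape at t = 0
u_t1 = f(x - c * 1.0)                # same shape at t = 1, moved by c*1 = 2
```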
00:23:22.000 --> 00:23:24.000
We now need a two-dimensional grid,
00:23:24.000 --> 00:23:27.000
but that doesn't really faze us.
00:23:27.000 --> 00:23:28.000
It's very simple.
00:23:28.000 --> 00:23:31.000
So the delta x we already had, we just
00:23:31.000 --> 00:23:33.000
add a delta t grid to it.
00:23:33.000 --> 00:23:35.000
So now it's two-dimensional.
00:23:35.000 --> 00:23:37.000
We keep it at two-dimensional because we can then still
00:23:37.000 --> 00:23:39.000
draw it easily.
00:23:40.000 --> 00:23:43.000
And in time, I will use the index n.
00:23:43.000 --> 00:23:46.000
And in space, we'll stick to j because we started with it.
00:23:46.000 --> 00:23:48.000
So now we have two indices to carry around
00:23:48.000 --> 00:23:51.000
in a point in this two-dimensional grid.
00:23:51.000 --> 00:23:53.000
It's a point in space and time.
00:23:53.000 --> 00:23:56.000
And we just call it function u at that point
00:23:56.000 --> 00:23:59.000
as marked by this value.
00:23:59.000 --> 00:24:00.000
It's discrete.
00:24:00.000 --> 00:24:03.000
And you can see it's discrete now in two dimensions.
00:24:03.000 --> 00:24:04.000
Now, why do we need this?
00:24:04.000 --> 00:24:08.000
Well, because we want to write an equation.
00:24:08.000 --> 00:24:10.000
We want to approximate our equation.
00:24:10.000 --> 00:24:12.000
And now you can see I've already started playing games.
00:24:12.000 --> 00:24:16.000
And we'll play more games with this as we go along.
00:24:16.000 --> 00:24:21.000
So our equation was du dt.
00:24:21.000 --> 00:24:24.000
Well, I had it on the previous slide, I think.
00:24:24.000 --> 00:24:25.000
Two slides before.
00:24:25.000 --> 00:24:28.000
So we need a derivative in time and we need derivative in space.
00:24:28.000 --> 00:24:32.000
And we can choose because we have three available for first
00:24:32.000 --> 00:24:37.000
derivatives: forward, backward, and centered.
00:24:37.000 --> 00:24:39.000
So in this particular example, I've
00:24:39.000 --> 00:24:42.000
chosen the forward-in-time one.
00:24:42.000 --> 00:24:55.000
So the time discretization is the forward.
00:24:55.000 --> 00:24:56.000
I chose the forward scheme.
00:24:56.000 --> 00:24:58.000
And for the space discretization,
00:24:58.000 --> 00:24:59.000
I chose the backward scheme.
00:24:59.000 --> 00:25:01.000
It's just because I could.
00:25:01.000 --> 00:25:02.000
There's no secret here.
00:25:02.000 --> 00:25:07.000
It's nothing magical about it.
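A minimal sketch of that forward-in-time, backward-in-space choice (the periodic domain, the Gaussian initial shape, and c > 0 are my assumptions; the lecture doesn't fix them):

```python
import numpy as np

# Forward-in-time, backward-in-space scheme for du/dt + c*du/dx = 0
# on a periodic domain, assuming c > 0.
nx = 100
c = 1.0
dx = 1.0 / nx
dt = 0.5 * dx / c                      # Courant number c*dt/dx = 0.5
x = np.arange(nx) * dx

u0 = np.exp(-100.0 * (x - 0.5) ** 2)   # initial shape: a Gaussian bump
u = u0.copy()

for n in range(50):                    # 50 steps of c*dt each: a quarter domain
    # backward (upwind) difference in space, forward step in time;
    # np.roll(u, 1) supplies u_{j-1} with periodic wraparound
    u = u - c * dt / dx * (u - np.roll(u, 1))

# The analytic solution just shifts the bump to x = 0.75; this scheme also
# damps it (numerical diffusion), so the peak comes out lower than 1.
```

With c > 0 the backward difference pulls information from the upstream side, which is why this combination is commonly called an upwind scheme; running it shows the bump arriving at roughly the right place but visibly smeared out, a first taste of how the choice of scheme shapes the solution.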
00:25:07.000 --> 00:25:10.000
You could have chosen any of the other schemes.
00:25:10.000 --> 00:25:12.000
But the choice will have significant consequences
00:25:12.000 --> 00:25:14.000
as to what the solution looks like.
00:25:14.000 --> 00:25:17.000
And that's the somewhat surprising bit.
00:25:17.000 --> 00:25:18.000
Because they are all approximations,
00:25:18.000 --> 00:25:21.000
the solutions of these equations depend
00:25:21.000 --> 00:25:23.000
on what choices we make.
00:25:23.000 --> 00:25:25.000
And they may critically depend on the choices we make.
00:25:25.000 --> 00:25:26.000
And this is what it's all about.
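The forward-in-time, backward-in-space choice just described can be sketched in a few lines. This is a minimal illustration on a periodic grid, not the lecturer's own code:

```python
import numpy as np

# One forward-in-time, backward-in-space (FTBS) update for du/dt + c du/dx = 0
# on a periodic grid: u_j^(n+1) = u_j^n - (c*dt/dx) * (u_j^n - u_(j-1)^n).
def ftbs_step(u, c, dt, dx):
    # np.roll(u, 1) supplies u_(j-1) with cyclic wrap-around at the boundary
    return u - (c * dt / dx) * (u - np.roll(u, 1))
```

One quick sanity check: with c*dt/dx = 1 this update shifts the field exactly one grid cell per step, and the sum of u is conserved for any Courant number.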
00:25:26.000 --> 00:25:30.000
So when you have an equation in your parameterization
00:25:30.000 --> 00:25:32.000
that you have developed for whatever problem,
00:25:32.000 --> 00:25:35.000
and you have to solve it, making these choices is
00:25:35.000 --> 00:25:38.000
as important as deriving the equations in the first place.
00:25:38.000 --> 00:25:40.000
Because if you make a bad choice,
00:25:40.000 --> 00:25:44.000
the results you're going to see come out of your computer
00:25:44.000 --> 00:25:47.000
are going to be different from what the physics of the problem
00:25:47.000 --> 00:25:48.000
actually describes.
00:25:48.000 --> 00:25:50.000
And it has nothing to do with you
00:25:50.000 --> 00:25:52.000
having done a bad job at parameterization.
00:25:52.000 --> 00:25:55.000
It's all to do with you having made poor choices
00:25:55.000 --> 00:25:56.000
in the numerical scheme.
00:25:56.000 --> 00:25:59.000
So it's just as important to think about that problem
00:25:59.000 --> 00:26:03.000
once you get to implement your new ideas as it
00:26:03.000 --> 00:26:05.000
is to get the ideas right.
00:26:05.000 --> 00:26:10.000
And it gets harder, because we use complex equations
00:26:10.000 --> 00:26:12.000
in parameterization. I'm sure this
00:26:12.000 --> 00:26:14.000
has been mentioned by the other lecturers:
00:26:14.000 --> 00:26:17.000
part of the task of parameterization is to
00:26:17.000 --> 00:26:20.000
derive some of the equations we are then going to use.
00:26:20.000 --> 00:26:23.000
Sometimes we just have to implement well-known equations,
00:26:23.000 --> 00:26:25.000
but sometimes we even have to derive,
00:26:25.000 --> 00:26:27.000
like the mass flux equation, right?
00:26:27.000 --> 00:26:29.000
Change of mass flux with height, where does that come from?
00:26:29.000 --> 00:26:30.000
What's it relate to?
00:26:30.000 --> 00:26:31.000
That's something we derive.
00:26:31.000 --> 00:26:34.000
But now we need to solve it.
00:26:34.000 --> 00:26:36.000
And there are several issues, which
00:26:36.000 --> 00:26:40.000
I'll summarize at the end of tomorrow very, very briefly,
00:26:40.000 --> 00:26:43.000
what the overall issues are in numerics.
00:26:43.000 --> 00:26:46.000
So one of the first things we need to convince ourselves of
00:26:46.000 --> 00:26:50.000
is something called consistency.
00:26:50.000 --> 00:26:51.000
That's the last sort of simple thing
00:26:51.000 --> 00:26:53.000
we are going to do with numerics.
00:26:53.000 --> 00:26:56.000
And that is, we now want to know,
00:26:56.000 --> 00:26:58.000
is this equation that we've written
00:26:58.000 --> 00:27:00.000
in finite differences (which is
00:27:00.000 --> 00:27:02.000
what these numerical expressions are called,
00:27:02.000 --> 00:27:06.000
a finite difference equation) versus the partial
00:27:06.000 --> 00:27:11.000
differential equation: are the two consistent with each other?
00:27:11.000 --> 00:27:15.000
And basically, what we mean with that is consistency simply
00:27:15.000 --> 00:27:21.000
means that if delta x goes to 0 and delta t goes to 0,
00:27:21.000 --> 00:27:27.000
this equation becomes the actual partial differential equation.
00:27:27.000 --> 00:27:29.000
So that's something you need to test.
00:27:29.000 --> 00:27:31.000
So we're going to test that now just to give you,
00:27:31.000 --> 00:27:36.000
again, one very simple example of how you do that.
00:27:36.000 --> 00:27:38.000
It gets complicated in a hurry if you
00:27:38.000 --> 00:27:41.000
have a complicated numerical scheme and a complicated
00:27:41.000 --> 00:27:41.000
equation.
00:27:41.000 --> 00:27:43.000
So we're doing nothing complicated here
00:27:43.000 --> 00:27:46.000
because it's the 11.30 lecture.
00:27:46.000 --> 00:27:48.000
Everybody is asleep anyway.
00:27:48.000 --> 00:27:51.000
But also because we don't have time
00:27:51.000 --> 00:27:54.000
to do the math on something complicated on the board.
00:27:54.000 --> 00:27:56.000
But it's still good to go through the math of something
00:27:56.000 --> 00:28:00.000
simple just for illustration.
00:28:00.000 --> 00:28:02.000
So this process that I'm doing right now,
00:28:02.000 --> 00:28:05.000
and it's going to be looking pretty trivial,
00:28:05.000 --> 00:28:07.000
is a process you would go through
00:28:07.000 --> 00:28:10.000
with all your numerical approximations.
00:28:10.000 --> 00:28:13.000
So the first thing is the partial differential equation
00:28:13.000 --> 00:28:22.000
that we had was du/dt plus c du/dx equals 0.
00:28:22.000 --> 00:28:27.000
And then the finite difference equation that we have was,
00:28:27.000 --> 00:28:30.000
now let me get it right: forward in time.
00:28:30.000 --> 00:28:36.000
So that looks like u_j^(n+1).
00:28:36.000 --> 00:28:41.000
Now you also get a bit of the fun of numerical mathematics.
00:28:41.000 --> 00:28:42.000
There's only two indices, and you already
00:28:42.000 --> 00:28:45.000
get lost very quickly.
00:28:45.000 --> 00:28:47.000
Minus u_j^n.
00:28:47.000 --> 00:28:52.000
So this is forward in time at the point j divided
00:28:52.000 --> 00:28:54.000
by delta t plus c.
00:28:54.000 --> 00:28:57.000
And then I think I said backwards in space,
00:28:57.000 --> 00:29:01.000
which means we take u^n, which is at time n.
00:29:01.000 --> 00:29:08.000
We take our function at point j minus the function at time n
00:29:08.000 --> 00:29:09.000
at point j minus 1.
00:29:09.000 --> 00:29:12.000
We divide that by delta x, and that's equal to 0.
00:29:12.000 --> 00:29:15.000
So that's our finite difference equation.
00:29:15.000 --> 00:29:18.000
What we want to know is, is this equation consistent
00:29:18.000 --> 00:29:20.000
with that equation?
00:29:20.000 --> 00:29:22.000
And to do that, what we need to do
00:29:22.000 --> 00:29:27.000
is trivial Taylor expansions.
00:29:27.000 --> 00:29:31.000
So we need to do Taylor expansions of the terms that
00:29:31.000 --> 00:29:33.000
have a minus or plus 1.
00:29:33.000 --> 00:29:36.000
So we need to do 1 in time.
00:29:36.000 --> 00:29:41.000
So we need an expression for u_j^(n+1) as a function of u_j^n
00:29:41.000 --> 00:29:44.000
plus du/dt.
00:29:44.000 --> 00:29:48.000
Now I write it out, because otherwise we'll
00:29:48.000 --> 00:29:50.000
get confused what the primes are,
00:29:50.000 --> 00:29:52.000
because there's two variables.
00:29:52.000 --> 00:30:05.000
du/dt times delta t, plus the second derivative d2u/dt2 at point (n, j)
00:30:05.000 --> 00:30:07.000
times delta t squared over 2.
00:30:07.000 --> 00:30:09.000
And I'd love to write more terms,
00:30:09.000 --> 00:30:13.000
but I think we would really fall asleep if we did.
00:30:13.000 --> 00:30:15.000
And then we need another one for the one in space,
00:30:15.000 --> 00:30:20.000
which happens to be u_(j-1)^n, which
00:30:20.000 --> 00:30:31.000
is u_j^n minus d/dt of u at point (n, j) delta t plus second
00:30:31.000 --> 00:30:39.000
order d2/dt2 at (n, j) delta t squared over 2.
00:30:39.000 --> 00:30:40.000
Oh, hang on.
00:30:40.000 --> 00:30:41.000
We have x's.
00:30:41.000 --> 00:30:45.000
Wow, why didn't anybody say anything?
00:30:45.000 --> 00:30:46.000
Delta x, of course.
00:30:46.000 --> 00:30:49.000
We're doing spatial derivatives, people.
00:30:49.000 --> 00:30:51.000
I am sorry.
00:30:51.000 --> 00:30:53.000
Luckily, for me, it's easier than for you.
00:30:53.000 --> 00:30:57.000
I just need to wipe it away.
00:30:57.000 --> 00:30:59.000
All right, good, spatial derivatives.
00:30:59.000 --> 00:31:03.000
And all we need to do now, and this sounds much worse
00:31:03.000 --> 00:31:06.000
than it is, is plug that back into the equation.
00:31:06.000 --> 00:31:18.000
So we'll do that: u_j^n plus du/dt at point (n, j) delta t
00:31:18.000 --> 00:31:30.000
plus d2u/dt2 at point (n, j) delta t squared over 2.
00:31:30.000 --> 00:31:33.000
And that's divided by delta t.
00:31:33.000 --> 00:31:35.000
And we have to subtract.
00:31:35.000 --> 00:31:36.000
We haven't done this one yet.
00:31:36.000 --> 00:31:40.000
Minus u_j^n from the whole thing.
00:31:40.000 --> 00:31:51.000
Plus c times u_j^n minus the expansion of u_(j-1)^n; minus times minus gives a plus.
00:31:51.000 --> 00:31:59.000
Plus du/dx at point (n, j) times delta x.
00:32:00.000 --> 00:32:06.000
And the plus gives a minus: minus d2u/dx2 at (n, j) times delta x squared over 2.
00:32:06.000 --> 00:32:10.000
So this is why numerics is a pain, by the way.
00:32:10.000 --> 00:32:14.000
I think numerics is a right pain.
00:32:14.000 --> 00:32:17.000
Because just writing it all down takes you like four hours
00:32:17.000 --> 00:32:18.000
or something like that.
00:32:18.000 --> 00:32:22.000
Divide by delta x, and you set it to 0.
00:32:22.000 --> 00:32:25.000
But it's a pain we have to pay attention to.
00:32:25.000 --> 00:32:26.000
All right, so what do we see?
00:32:26.000 --> 00:32:31.000
The u_j^n's cancel in both terms.
00:32:31.000 --> 00:32:35.000
The delta t goes here, and there's the square.
00:32:35.000 --> 00:32:40.000
The square goes, and the delta x goes, and the square goes.
00:32:40.000 --> 00:32:42.000
And what we really want to do, and you can already
00:32:42.000 --> 00:32:46.000
see this in this equation, the consistency condition
00:32:46.000 --> 00:32:51.000
remember was, what happens if delta t goes to 0
00:32:51.000 --> 00:32:53.000
and delta x goes to 0?
00:32:53.000 --> 00:32:57.000
As those two happen, what's actually left?
00:32:57.000 --> 00:33:03.000
Well, what's left here is du/dt at point (n, j).
00:33:03.000 --> 00:33:06.000
This one goes away because delta t goes to 0.
00:33:06.000 --> 00:33:08.000
And then there's plus c.
00:33:08.000 --> 00:33:15.000
This one's left: du/dx at point (n, j).
00:33:15.000 --> 00:33:17.000
And then this one goes to 0 because delta x goes to 0.
00:33:17.000 --> 00:33:21.000
All other terms go to 0 because they are higher order.
00:33:21.000 --> 00:33:25.000
So we've proven to ourselves in this trivial case
00:33:25.000 --> 00:33:31.000
that actually this equation looks exactly like our PDE
00:33:31.000 --> 00:33:36.000
at the point that we've chosen to do our calculation for.
00:33:36.000 --> 00:33:38.000
And so we're in good shape.
00:33:38.000 --> 00:33:40.000
It's consistent.
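The same consistency argument can be checked numerically: apply the finite difference operator to a smooth function that solves the PDE exactly and watch the residual vanish as delta t and delta x go to zero. A sketch with illustrative values; c = 0.5 is used because with c = 1 and dt = dx the FTBS scheme happens to be exact for this problem, which would hide the first-order error:

```python
import numpy as np

# FTBS residual for the smooth exact solution u = sin(x - c*t) of
# du/dt + c du/dx = 0. Consistency means this residual -> 0 as h -> 0.
def ftbs_residual(h, c=0.5, x=0.3, t=0.2):
    u = lambda x, t: np.sin(x - c * t)
    dt = dx = h
    time_term = (u(x, t + dt) - u(x, t)) / dt   # forward in time
    space_term = (u(x, t) - u(x - dx, t)) / dx  # backward in space
    return abs(time_term + c * space_term)      # the exact PDE gives 0
```

Halving h roughly halves the residual, consistent with a first-order scheme.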
00:33:40.000 --> 00:33:41.000
The other thing we've learned along the way
00:33:41.000 --> 00:33:43.000
without talking too much about it,
00:33:43.000 --> 00:33:45.000
just like we talk about truncation errors
00:33:45.000 --> 00:33:48.000
for individual derivatives, we can
00:33:48.000 --> 00:33:51.000
talk about truncation errors for the whole equation, right?
00:33:51.000 --> 00:33:53.000
And so we have two to consider.
00:33:53.000 --> 00:33:57.000
One is in space and one is in time.
00:33:57.000 --> 00:34:00.000
So you would call this equation with this particular choice
00:34:00.000 --> 00:34:04.000
of numerical expression, finite difference expression,
00:34:04.000 --> 00:34:04.000
first order.
00:34:04.000 --> 00:34:07.000
It's a first order accurate equation in time.
00:34:07.000 --> 00:34:12.000
And it's also first order accurate in space.
00:34:12.000 --> 00:34:15.000
If I had chosen and we'll do that tomorrow,
00:34:15.000 --> 00:34:18.000
maybe we start today even.
00:34:18.000 --> 00:34:19.000
Probably not.
00:34:19.000 --> 00:34:24.000
Had I chosen the center differences, say, in space,
00:34:24.000 --> 00:34:26.000
then the equation would be first order accurate in time,
00:34:26.000 --> 00:34:28.000
but second order accurate in space.
00:34:28.000 --> 00:34:31.000
And you can mix and match and play around, so on and so forth.
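The "first order versus second order" claim is easy to see numerically: halving the grid spacing roughly halves the error of the one-sided differences but quarters the error of the centered one. A sketch using f = sin as an arbitrary smooth test function (x0 = 1.0 is an illustrative point):

```python
import numpy as np

# Error of the three first-derivative stencils against the exact f'(x0).
def stencil_errors(h, x0=1.0):
    f, exact = np.sin, np.cos(x0)
    forward = (f(x0 + h) - f(x0)) / h
    backward = (f(x0) - f(x0 - h)) / h
    centered = (f(x0 + h) - f(x0 - h)) / (2 * h)
    return (abs(forward - exact), abs(backward - exact), abs(centered - exact))
```

Comparing the errors at h and h/2 recovers the expected convergence factors of about 2 (one-sided) and about 4 (centered).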
00:34:31.000 --> 00:34:34.000
Yeah?
00:34:34.000 --> 00:34:37.000
So we started out and we showed that the derivatives,
00:34:37.000 --> 00:34:40.000
finite different versions of the derivatives,
00:34:40.000 --> 00:34:44.000
if you let x be small or delta x be small,
00:34:44.000 --> 00:34:46.000
it looked like the actual derivatives.
00:34:46.000 --> 00:34:49.000
And then we basically used the same reasoning
00:34:49.000 --> 00:34:51.000
to show that the equations are consistent.
00:34:51.000 --> 00:34:52.000
Yeah, so this is.
00:34:52.000 --> 00:34:54.000
When will they actually not be the same?
00:34:54.000 --> 00:34:59.000
Well, so if you look at some of the numerical schemes
00:34:59.000 --> 00:35:01.000
that people use in space and time,
00:35:01.000 --> 00:35:03.000
they are not as easily, especially often
00:35:03.000 --> 00:35:05.000
the ones used in time.
00:35:05.000 --> 00:35:06.000
They're not as easily.
00:35:06.000 --> 00:35:09.000
I've chosen extremely trivial examples.
00:35:09.000 --> 00:35:11.000
You cannot often easily trace back
00:35:11.000 --> 00:35:15.000
your numerical implementation to just the simple Taylor
00:35:15.000 --> 00:35:19.000
expansion because we've exhausted it, kind of.
00:35:19.000 --> 00:35:21.000
Well, we could try and take a half of this and a half
00:35:21.000 --> 00:35:23.000
of that and a third of this and try
00:35:23.000 --> 00:35:25.000
and see whether that adds to something
00:35:25.000 --> 00:35:27.000
that we could be interested in.
00:35:27.000 --> 00:35:29.000
But there are way more complicated expressions
00:35:29.000 --> 00:35:31.000
for these derivatives that you could think of.
00:35:31.000 --> 00:35:32.000
These are the simple ones.
00:35:32.000 --> 00:35:33.000
And because they are the simple ones,
00:35:33.000 --> 00:35:34.000
you get the same answer.
00:35:34.000 --> 00:35:36.000
You're absolutely right.
00:35:36.000 --> 00:35:37.000
You can come the other way.
00:35:37.000 --> 00:35:40.000
You could say, well, I'm actually
00:35:40.000 --> 00:35:44.000
going to forget about Taylor expansions and stuff.
00:35:44.000 --> 00:35:45.000
I'm just going to make something up here.
00:35:45.000 --> 00:35:49.000
I'm going to take n plus 2, and I'm going to take n minus 2,
00:35:49.000 --> 00:35:53.000
and I'm going to take n in the middle or something like that.
00:35:53.000 --> 00:35:55.000
But then you need to test whether this actually
00:35:55.000 --> 00:35:58.000
fulfills these conditions.
00:35:58.000 --> 00:35:59.000
I mean, in these simple equations,
00:35:59.000 --> 00:36:02.000
it's a trivial calculation,
00:36:02.000 --> 00:36:04.000
and it's a trivial result. But if you
00:36:04.000 --> 00:36:05.000
have more sophisticated equations
00:36:05.000 --> 00:36:10.000
and or more sophisticated expressions for the derivatives,
00:36:10.000 --> 00:36:12.000
it's not trivial anymore.
00:36:12.000 --> 00:36:16.000
So I'm simply showing the methodology, not so much
00:36:16.000 --> 00:36:19.000
the actual result. This is not a very exciting result.
00:36:19.000 --> 00:36:22.000
It's a trivial result. I completely agree.
00:36:26.000 --> 00:36:28.000
So we leave it at that.
00:36:28.000 --> 00:36:34.000
So the next thing we could do in principle
00:36:34.000 --> 00:36:37.000
is solve this equation that I've written down.
00:36:37.000 --> 00:36:41.000
We need to solve it because we need the analytical solution.
00:36:41.000 --> 00:36:43.000
Is anybody excited about solving it on the board,
00:36:43.000 --> 00:36:50.000
or do you all know how to solve this equation analytically?
00:36:50.000 --> 00:36:52.000
So I leave that equation because that's
00:36:52.000 --> 00:36:53.000
what we're going to deal with next.
00:36:53.000 --> 00:36:56.000
Now it's going to get more exciting because I'm
00:36:56.000 --> 00:36:57.000
going to show you.
00:36:57.000 --> 00:36:59.000
So the truncation error is kind of trivial,
00:36:59.000 --> 00:37:01.000
especially with the examples we use.
00:37:01.000 --> 00:37:07.000
But there is an error that isn't about truncation that's
00:37:07.000 --> 00:37:10.000
actually far from trivial.
00:37:10.000 --> 00:37:13.000
But to demonstrate it nicely, we need to solve this equation.
00:37:13.000 --> 00:37:15.000
So I'll start.
00:37:15.000 --> 00:37:17.000
I have some slides to start with on this.
00:37:17.000 --> 00:37:19.000
And this error is called the discretization error.
00:37:19.000 --> 00:37:24.000
And it's kind of peculiar thing until I had to teach numerics.
00:37:24.000 --> 00:37:27.000
I kind of had heard of it, but I hadn't appreciated
00:37:27.000 --> 00:37:29.000
how peculiar it really is.
00:37:29.000 --> 00:37:32.000
But I can show it to you in a really simple example
00:37:32.000 --> 00:37:33.000
by using this equation.
00:37:33.000 --> 00:37:35.000
And this always excites students.
00:37:35.000 --> 00:37:38.000
So we'll do most of it tomorrow, I think,
00:37:38.000 --> 00:37:41.000
because we won't have enough time to do it all today.
00:37:41.000 --> 00:37:44.000
So this is about discretization error rather than truncation error.
00:37:44.000 --> 00:37:47.000
So it turns out the truncation error basically
00:37:47.000 --> 00:37:51.000
tells us how well does our equation describe
00:37:51.000 --> 00:37:55.000
the actual equation, or how well do our derivatives describe
00:37:55.000 --> 00:37:57.000
the actual derivatives.
00:37:57.000 --> 00:38:00.000
It turns out that just because you approximate the equations
00:38:00.000 --> 00:38:03.000
very well, and if you don't believe me,
00:38:03.000 --> 00:38:05.000
just wait a few hours.
00:38:05.000 --> 00:38:07.000
And then you will have to believe me
00:38:07.000 --> 00:38:11.000
because all I'm going to use is the purity of mathematics
00:38:11.000 --> 00:38:12.000
to demonstrate it to you.
00:38:12.000 --> 00:38:15.000
Just because the equations are well approximated
00:38:15.000 --> 00:38:17.000
doesn't mean the solutions are actually well approximated.
00:38:17.000 --> 00:38:19.000
This is the peculiar bit.
00:38:19.000 --> 00:38:22.000
So just because you have a good approximation of the equation
00:38:22.000 --> 00:38:24.000
does not mean you have a good approximation
00:38:24.000 --> 00:38:27.000
of the solution of the equation.
00:38:27.000 --> 00:38:29.000
And so you actually can calculate
00:38:29.000 --> 00:38:33.000
the difference of the numerical solution
00:38:33.000 --> 00:38:35.000
at your point
00:38:35.000 --> 00:38:37.000
minus the analytical solution at your point,
00:38:37.000 --> 00:38:39.000
which is just given by that.
00:38:39.000 --> 00:38:40.000
And the difference between those two
00:38:40.000 --> 00:38:43.000
is called the discretization error, not the truncation error.
00:38:43.000 --> 00:38:45.000
So this is about the solution now.
00:38:45.000 --> 00:38:48.000
Truncation error is about the equations.
00:38:48.000 --> 00:38:51.000
How well do the equations approximate the real equations?
00:38:51.000 --> 00:38:53.000
So now we have to look at the solution.
00:38:53.000 --> 00:38:55.000
But because we have to look at the solution,
00:38:55.000 --> 00:38:59.000
we have to actually solve this problem, this one here.
00:38:59.000 --> 00:39:04.000
And so my question to you is, do you actually want to solve it,
00:39:04.000 --> 00:39:07.000
or do you want me to solve it, or shall I just
00:39:07.000 --> 00:39:10.000
write down the solution?
00:39:10.000 --> 00:39:11.000
It's really up to you.
00:39:11.000 --> 00:39:15.000
If I write down the solution, we save 15 minutes or 10 minutes
00:39:15.000 --> 00:39:18.000
or something, which is what it's going to take us to solve this.
00:39:18.000 --> 00:39:21.000
But you won't know how I solved it.
00:39:21.000 --> 00:39:22.000
We can have a hybrid.
00:39:22.000 --> 00:39:24.000
I can tell you how you start, and then you can do it.
00:39:24.000 --> 00:39:26.000
The math is not very hard, really.
00:39:26.000 --> 00:39:28.000
Maybe that's the approach we should take.
00:39:28.000 --> 00:39:29.000
We do a hybrid.
00:39:29.000 --> 00:39:32.000
We start solving it, and then we just write down the equation,
00:39:32.000 --> 00:39:34.000
the solution.
00:39:35.000 --> 00:39:38.000
And then, because then the real fun starts,
00:39:38.000 --> 00:39:41.000
I'm not sure how far we're going to get with the real fun today,
00:39:41.000 --> 00:39:43.000
which is to actually solve it with a numerical scheme,
00:39:43.000 --> 00:39:47.000
also analytically on the board, and compare the two solutions.
00:39:47.000 --> 00:39:50.000
Because that's what we want to do from the previous slide.
00:39:50.000 --> 00:39:53.000
We want to compare a numerical solution of this
00:39:53.000 --> 00:39:56.000
to an analytical solution, and see
00:39:56.000 --> 00:39:58.000
whether there's anything that might interest us
00:39:58.000 --> 00:40:01.000
in the difference.
00:40:01.000 --> 00:40:02.000
So this is the problem.
00:40:02.000 --> 00:40:04.000
I'll write it down again, actually,
00:40:04.000 --> 00:40:07.000
because you can't see it on there.
00:40:07.000 --> 00:40:08.000
So this is our equation.
00:40:08.000 --> 00:40:12.000
It's a partial differential equation, easy enough to solve.
00:40:12.000 --> 00:40:16.000
But as usual, you need a few things.
00:40:16.000 --> 00:40:18.000
To solve this, we need boundary conditions
00:40:18.000 --> 00:40:21.000
and initial conditions.
00:40:21.000 --> 00:40:23.000
So the boundary condition we need to set.
00:40:23.000 --> 00:40:29.000
And we're just going to say that the function u at point, which
00:40:29.000 --> 00:40:30.000
way round am I writing this?
00:40:30.000 --> 00:40:31.000
Let me just get it right.
00:40:32.000 --> 00:40:36.000
u is u of x and t.
00:40:36.000 --> 00:40:39.000
I'll write it here so I remember.
00:40:39.000 --> 00:40:48.000
So u at point 0 at any time equals u at point l at any time,
00:40:48.000 --> 00:40:51.000
where we define that our domain is just
00:40:51.000 --> 00:40:56.000
going to be from 0 to l.
00:40:56.000 --> 00:41:01.000
So x is defined from 0 to l, some length, fixed length.
00:41:01.000 --> 00:41:05.000
And the boundary condition is that at point 0,
00:41:05.000 --> 00:41:07.000
the function is the same at point l.
00:41:07.000 --> 00:41:10.000
That's cyclic boundary conditions.
00:41:10.000 --> 00:41:15.000
Many of you use cloud resolving models, LES models.
00:41:15.000 --> 00:41:17.000
They are, of course, more complicated
00:41:17.000 --> 00:41:19.000
because there's two dimensions in space.
00:41:19.000 --> 00:41:22.000
But that's the assumption that's made, that at boundaries,
00:41:22.000 --> 00:41:26.000
the function values are the same.
00:41:26.000 --> 00:41:28.000
And in the initial conditions, we also
00:41:28.000 --> 00:41:29.000
need an initial condition.
00:41:32.000 --> 00:41:39.000
We basically say that u at (x, 0) is some sort of function of x.
00:41:39.000 --> 00:41:46.000
And because we kind of want to anticipate our solutions,
00:41:46.000 --> 00:41:52.000
we say it's some constant times e to the ikx.
00:41:52.000 --> 00:41:56.000
So we assume the function has that shape, which is,
00:41:56.000 --> 00:41:59.000
yes, you all remember the usual shape.
00:41:59.000 --> 00:42:02.000
e to the i is something you can split up
00:42:02.000 --> 00:42:04.000
into sines and cosines.
00:42:04.000 --> 00:42:09.000
You may remember this, so this is nice for wavy things.
00:42:09.000 --> 00:42:11.000
Since you can express almost any function
00:42:11.000 --> 00:42:14.000
through Fourier transforms as function of sines and cosines,
00:42:14.000 --> 00:42:18.000
this is a kind of convenient way of looking at our function.
00:42:18.000 --> 00:42:21.000
And again, we need to make an assumption
00:42:21.000 --> 00:42:23.000
about the boundaries.
00:42:23.000 --> 00:42:28.000
So we need to say f of 0 equals f of l.
00:42:28.000 --> 00:42:31.000
So this initial condition also needs
00:42:31.000 --> 00:42:33.000
to fulfill the boundary condition.
00:42:33.000 --> 00:42:37.000
Otherwise, we get into trouble.
00:42:37.000 --> 00:42:41.000
OK, so the solution, the way to solve this particular one
00:42:41.000 --> 00:42:43.000
is to say we're going to separate.
00:42:43.000 --> 00:42:47.000
This is called the separation of variables approach.
00:42:47.000 --> 00:42:49.000
You know the thing with differential equations, right?
00:42:49.000 --> 00:42:51.000
You never know how to solve them.
00:42:51.000 --> 00:42:53.000
And you kind of try a few things,
00:42:53.000 --> 00:42:55.000
and then after people have tried a few things,
00:42:55.000 --> 00:42:58.000
it becomes trivial because someone's already tried it.
00:42:58.000 --> 00:43:00.000
If you have an equation that no one's tried before,
00:43:00.000 --> 00:43:03.000
you're kind of buggered, and you have to try yourself.
00:43:03.000 --> 00:43:06.000
And one of the things that works for this very simple equation
00:43:06.000 --> 00:43:08.000
is something called the separation of variables,
00:43:08.000 --> 00:43:11.000
where you assume that this function is actually
00:43:11.000 --> 00:43:13.000
the product of two functions.
00:43:13.000 --> 00:43:16.000
One is just a function in space, and the other one's
00:43:16.000 --> 00:43:17.000
just a function in time.
00:43:17.000 --> 00:43:21.000
And you multiply the two together.
00:43:21.000 --> 00:43:24.000
And you can see why this works, because after putting this
00:43:24.000 --> 00:43:28.000
in, after a while you can write something like (1/T) dT/dt
00:43:28.000 --> 00:43:40.000
equals minus c times (1/X) dX/dx.
00:43:40.000 --> 00:43:43.000
OK, and this can only, because the left hand side's only
00:43:43.000 --> 00:43:45.000
a function of time and the right hand side's only
00:43:45.000 --> 00:43:50.000
a function of space, this only works if the two are constant.
00:43:50.000 --> 00:43:53.000
This is actually a constant, because if it
00:43:53.000 --> 00:43:56.000
was a function of something, this condition
00:43:56.000 --> 00:43:58.000
would never hold.
00:43:58.000 --> 00:44:00.000
And then you give that constant a name.
00:44:00.000 --> 00:44:06.000
So basically, you say, well, (1/X) dX/dx
00:44:06.000 --> 00:44:11.000
equals lambda, say, which is a constant.
00:44:11.000 --> 00:44:13.000
And then you put it all in.
00:44:13.000 --> 00:44:14.000
And this is where I'm skipping.
00:44:14.000 --> 00:44:17.000
Now I'm going to skip.
00:44:17.000 --> 00:44:25.000
And you end up with something like X of x is X_0.
00:44:25.000 --> 00:44:29.000
You can see this (1/X) dX and (1/T) dT
00:44:29.000 --> 00:44:32.000
smells like the derivative of a logarithm.
00:44:32.000 --> 00:44:36.000
And so if you have logarithm and you integrate it,
00:44:36.000 --> 00:44:38.000
and the right hand side looks linear, right?
00:44:38.000 --> 00:44:40.000
So if I put that over here, then it looks like that.
00:44:40.000 --> 00:44:44.000
So you basically have d(ln T)/dt equal to a constant.
00:44:44.000 --> 00:44:46.000
So the right hand side, once you integrate this,
00:44:46.000 --> 00:44:48.000
this will become a linear function of x.
00:44:48.000 --> 00:44:51.000
And this will become a logarithm.
00:44:51.000 --> 00:44:54.000
So you take the exponential on both sides.
00:44:54.000 --> 00:44:57.000
And this is why the solution looks the way it looks.
00:44:57.000 --> 00:45:00.000
So this will be just e to the lambda x.
00:45:00.000 --> 00:45:05.000
And T of t will be a T_0.
00:45:05.000 --> 00:45:09.000
And then just e to the minus c lambda t.
00:45:09.000 --> 00:45:13.000
That's just taking all this stuff, putting it in and solving.
00:45:13.000 --> 00:45:15.000
And we remember that the actual function itself
00:45:16.000 --> 00:45:17.000
was the product of the two.
00:45:17.000 --> 00:45:23.000
So u of x and t is then X_0 times T_0 times
00:45:23.000 --> 00:45:28.000
an exponential of lambda times (x minus ct).
00:45:28.000 --> 00:45:29.000
And this should be something that
00:45:29.000 --> 00:45:31.000
starts looking familiar to you.
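Collected in one place, and written out in the notation used on the board (with X(x) and T(t) the separated factors), the steps just sketched are:

```latex
u(x,t) = X(x)\,T(t)
\quad\Longrightarrow\quad
\frac{1}{T}\frac{dT}{dt} = -c\,\frac{1}{X}\frac{dX}{dx} = -c\lambda,
\qquad\text{so}\qquad
X(x) = X_0 e^{\lambda x},\quad
T(t) = T_0 e^{-c\lambda t},\quad
u(x,t) = X_0 T_0\, e^{\lambda(x-ct)}.
```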
00:45:35.000 --> 00:45:38.000
And now you use the initial conditions
00:45:38.000 --> 00:45:44.000
to actually work out what X_0, T_0, and lambda are.
00:45:44.000 --> 00:45:49.000
And you end up with the familiar form.
00:45:55.000 --> 00:46:04.000
And you end up with the usual u of x and t is u0 times e
00:46:04.000 --> 00:46:07.000
to the ik.
00:46:07.000 --> 00:46:12.000
This comes purely from the initial conditions, x minus ct.
00:46:12.000 --> 00:46:16.000
So you find that lambda equals ik from just
00:46:16.000 --> 00:46:17.000
the initial conditions.
00:46:17.000 --> 00:46:21.000
And that X_0 times T_0 is u0, which is not a big surprise.
00:46:21.000 --> 00:46:23.000
So that's the solution.
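Written as code, the solution and the constraint the cyclic boundary puts on k look like this (u0, c, L, and the mode number m are illustrative values, not anything from the slides):

```python
import numpy as np

# Analytic solution u(x, t) = u0 * exp(i*k*(x - c*t)). The cyclic boundary
# condition u(0, t) = u(L, t) forces k = 2*pi*m/L for an integer m.
u0, c, L = 1.0, 2.0, 10.0

def u_exact(x, t, m=3):
    k = 2.0 * np.pi * m / L
    return u0 * np.exp(1j * k * (x - c * t))
```

The profile just translates at speed c: evaluating the initial condition at x minus c*t reproduces the solution at time t.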
00:46:23.000 --> 00:46:27.000
And maybe it's going to be tricky.
00:46:27.000 --> 00:46:28.000
So we have our solution.
00:46:28.000 --> 00:46:33.000
And because I can, I've basically
00:46:33.000 --> 00:46:37.000
drawn the solution on the computer.
00:46:37.000 --> 00:46:40.000
So here's our analytical solution to this problem.
00:46:40.000 --> 00:46:43.000
And the simple thing is, if you have this shape at time
00:46:43.000 --> 00:46:50.000
equals 0, then at time equals t, the shape will have propagated.
00:46:50.000 --> 00:46:52.000
And it will have propagated exactly c,
00:46:52.000 --> 00:46:54.000
which is the speed of the propagation,
00:46:54.000 --> 00:46:58.000
essentially, times delta t.
00:46:58.000 --> 00:47:01.000
So that's what our solution would look like
00:47:01.000 --> 00:47:04.000
if our initial condition was this.
00:47:04.000 --> 00:47:06.000
It would just propagate across.
00:47:06.000 --> 00:47:09.000
That's why it's called the linear advection equation,
00:47:09.000 --> 00:47:11.000
essentially, because it's linear,
00:47:11.000 --> 00:47:13.000
because c is a constant.
00:47:13.000 --> 00:47:16.000
And it's just affecting whatever's there
00:47:16.000 --> 00:47:19.000
in the direction of c.
00:47:19.000 --> 00:47:24.000
So if we put this in a computer, so the solution,
00:47:24.000 --> 00:47:25.000
and this is how I'm going to whet your appetite,
00:47:25.000 --> 00:47:27.000
and that's where we're going to stop.
00:47:27.000 --> 00:47:31.000
So if this was our signal at time equals 0,
00:47:31.000 --> 00:47:36.000
and I've drawn it as a funky, sort of more triangular shape
00:47:36.000 --> 00:47:41.000
rather than a nice round shape, for effect. Now,
00:47:41.000 --> 00:47:44.000
the analytical solution, after 100 time steps,
00:47:44.000 --> 00:47:47.000
just looks like this.
00:47:47.000 --> 00:47:50.000
Now I've used a particular numerical scheme.
00:47:50.000 --> 00:47:53.000
So here's our initial condition.
00:47:53.000 --> 00:47:57.000
And I let that run.
00:47:57.000 --> 00:48:03.000
This is by using centered differences in space.
00:48:03.000 --> 00:48:06.000
And I've used, I think I used forward differences in time.
00:48:06.000 --> 00:48:09.000
And this is what I get.
00:48:09.000 --> 00:48:13.000
And it's not because I'm stupid, you could say maybe,
00:48:13.000 --> 00:48:16.000
but it's because I made a stupid choice.
00:48:16.000 --> 00:48:21.000
Actually, I made a choice of the center differences.
00:48:21.000 --> 00:48:27.000
Most of this, the growth in time here of the signal that goes forward,
00:48:27.000 --> 00:48:32.000
it getting bigger than it should be, is actually due to the time discretization.
00:48:32.000 --> 00:48:34.000
But we're not going to talk about this.
00:48:34.000 --> 00:48:38.000
The funky little wiggles, which by the way, if you look very carefully,
00:48:38.000 --> 00:48:43.000
also wander in the opposite direction of the signal,
00:48:43.000 --> 00:48:46.000
comes from the choice of the spatial differencing,
00:48:46.000 --> 00:48:48.000
of the center differencing scheme.
00:48:48.000 --> 00:48:51.000
And we'll look at why tomorrow.
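The pathology on the slide can be reproduced in a few lines. This is a sketch with an illustrative grid, pulse, and Courant number, not the lecturer's actual setup: forward-in-time with centered differences in space (FTCS) amplifies the advected signal, while the forward/backward (FTBS) combination with c*dt/dx <= 1 stays bounded, at the price of smearing:

```python
import numpy as np

# Advect a triangular pulse on a cyclic grid for `steps` steps with
# Courant number nu = c*dt/dx, using either FTCS or FTBS differencing.
def advect(scheme, steps=100, n=100, nu=0.5):
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    u = np.maximum(0.0, 1.0 - 10.0 * np.abs(x - 0.3))  # triangular pulse
    for _ in range(steps):
        if scheme == "ftcs":    # forward in time, centered in space: unstable
            u = u - 0.5 * nu * (np.roll(u, -1) - np.roll(u, 1))
        elif scheme == "ftbs":  # forward in time, backward in space: stable
            u = u - nu * (u - np.roll(u, 1))
    return u
```

Plotting advect("ftcs") shows the growing wiggles from the slide; advect("ftbs") stays between 0 and 1 because for 0 <= nu <= 1 each update is a convex combination of neighbouring values.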
00:48:51.000 --> 00:48:54.000
And that's where we're going to leave it.
00:48:54.000 --> 00:48:58.000
The nice thing about choosing this problem is that I can do the same solution
00:48:58.000 --> 00:49:01.000
procedure with the numerical scheme in it,
00:49:01.000 --> 00:49:04.000
and we will learn exactly why this is happening.
00:49:04.000 --> 00:49:07.000
And we will see exactly what's going on in the scheme.
00:49:07.000 --> 00:49:13.000
But it's not exactly what you want your nice function to look like, is it?
00:49:13.000 --> 00:49:18.000
So that's my big warning and the revelation of how this all comes about
00:49:18.000 --> 00:49:21.000
and how it could possibly travel in this direction
00:49:21.000 --> 00:49:26.000
when the speed in the problem is actually defined in this direction,
00:49:26.000 --> 00:49:29.000
will all be revealed tomorrow.
00:49:29.000 --> 00:49:31.000
All right, enjoy lunch.
00:49:31.000 --> 00:49:33.000
And I'll see you tomorrow.