18.330 Introduction to Numerical Analysis
Spring 2016
Instructor: Homer Reid
View this document in PDF form: Overview.pdf
Lectures: 2:30-4 pm, Tuesdays/Thursdays, 2-105
PSets, Exams, etc: Approximately weekly PSets. Two in-class midterms. No final exam. Final project worth 25% of grade.
Grading: 25% homework, 25% midterm 1, 25% midterm 2, 25% final project.
Office hours:
 4-6 PM Tuesdays, 2-105
 5 PM Wednesdays, 2-449
 By appointment (email me or stop by my office, 2-232c)
Note: At some point during the first two weeks of class I will hold special office hours to ensure that everybody is up and running with the programming language of their choice. Bring your laptop and we'll work through any issues you might be having. Stay tuned for the time, date, and place.
This course is an exploration of the art and science of extracting numbers from mathematical expressions. The material we will cover may be broadly divided into two units.
Unit 1 is all about basic numerical calculus. We will discuss elementary methods for obtaining accurate numerical estimates of integrals, derivatives, and infinite sums. This unit will include discussions of extrapolation, interpolation, root-finding, optimization, and evaluation of special functions. By the end of Unit 1, you will be in a position to implement and understand the properties of the most basic numerical methods for all of these tasks. However, at several points during the elementary treatment of Unit 1, we will encounter phenomena that seem to be hinting at a deeper set of ideas.
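As a small taste of Unit 1, here is a short python sketch (python is just one convenient choice of language for illustration) of the composite trapezoidal rule together with one step of Richardson extrapolation, applied to an integral with a known answer:

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n subintervals."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

# Integrate e^x on [0, 1]; the exact answer is e - 1.
exact = math.e - 1.0
T_h  = trapezoid(math.exp, 0.0, 1.0, 8)    # step size h
T_h2 = trapezoid(math.exp, 0.0, 1.0, 16)   # step size h/2
# One Richardson step cancels the O(h^2) error term, leaving O(h^4):
R = (4.0 * T_h2 - T_h) / 3.0

print(abs(T_h2 - exact))   # trapezoid error, O(h^2)
print(abs(R - exact))      # extrapolated error: orders of magnitude smaller
```

Combining two cheap low-order estimates to get a high-order one is exactly the kind of trick this unit develops systematically.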
This will set the stage for Unit 2 of our course, Fourier analysis and spectral methods. The overarching theme here is that we can often revolutionize the speed and accuracy of a calculation by approximating a function as an expansion in some convenient set of expansion functions, often a set of orthogonal functions. Our discussion of orthogonal-function expansions will begin, as must any, with the granddaddy of them all: the Fourier series and its immediate descendants (the Fourier transform, Parseval's and related theorems, the FFT, etc.). Then we will broaden the setting to consider more general classes of functions and more general spectral methods: Gaussian quadrature, Chebyshev polynomials and Trefethen's chebfun, Nystrom solution of integral equations, and more.
Throughout Units 1 and 2 we will pepper the discussion with examples drawn from engineering and the sciences, including binding energies of solids, coding and modulation schemes for efficient use of the wireless communications spectrum, spherical Bessel functions for electromagnetic scattering and thermal engineering, and Ewald summation. The course will also include bird's-eye views of topics treated more thoroughly in other courses, such as numerical linear algebra, PDE solvers, and stochastic and Monte Carlo methods.
This is a rough schedule of the topics we will discuss and the order in which we will discuss them. The midterm dates will not change, but the order of some topics and the time we allocate to them may shift as the semester progresses.
Date  Day  Topic  

2/2/2016  Tuesday  Course invitation. 
2/4/2016  Thursday  Evaluation of infinite sums. Numerical integration of functions: Newton-Cotes quadrature. Application to improper integrals. 
2/9/2016  Tuesday  Heuristic convergence analysis of Newton-Cotes quadrature. 
2/11/2016  Thursday  Numerical integration of ODEs. Reduction of high-order ODEs to first-order form. Euler's method. 
2/18/2016  Thursday  Numerical integration of ODEs: Beyond Euler's method. Convergence analysis of the improved Euler method. Runge-Kutta methods. Adaptive step sizing. 
2/23/2016  Tuesday  Pathologies in ODE solvers: Stability, stiffness, and implicit methods. Pathologies in ODEs themselves: nonuniqueness and nonexistence. Boundary-value problems; shooting methods. The beam equation. 
2/25/2016  Thursday  Richardson extrapolation. Numerical differentiation: finite-difference stencils. Finite-difference approach to boundary-value problems. 
3/1/2016  Tuesday  Computer representation of numbers: fixed- and floating-point arithmetic. Exactly representable numbers and rounding errors. Catastrophic loss of floating-point precision. 
3/3/2016  Thursday  Computer representation of numbers: Accumulation of rounding errors. Numerical stability of forward and backward recurrence relations for special functions. Numerical root-finding: Sample problems and 1D methods. 
3/8/2016  Tuesday  Convergence of Newton's method. Newton's method in higher dimensions. 
3/10/2016  Thursday  Unit 1 summary and midterm review. 
3/15/2016  Tuesday  Midterm 1 
3/17/2016  Thursday  Monte Carlo integration 
3/29/2016  Tuesday  Fourier analysis: The fourfold way. Parseval, Plancherel, and Poisson formulas. 
3/31/2016  Thursday  Paley-Wiener theorems. Convolution. Fourier series. 
4/5/2016  Tuesday  Applications of Fourier analysis, Part 1: Ewald summation. 
4/7/2016  Thursday  Applications of Fourier analysis, Part 2: Rigorous convergence analysis of Newton-Cotes quadrature. Fourier analysis in higher dimensions. 
4/12/2016  Tuesday  Clenshaw-Curtis quadrature. Discrete Fourier transforms. 
4/14/2016  Thursday  Applications of Fourier analysis, Part 3: the DFT and its uses. Signal processing, arbitrary-precision arithmetic, discrete convolution, PDE solvers. 
4/21/2016  Thursday  Final project proposal due. Orthogonal polynomials. Gauss-Legendre quadrature. 
4/26/2016  Tuesday  Guest lecture on fast Fourier transforms by Professor Steven Johnson, co-author of the industry-standard FFTW software package for speed-optimized FFTs. 
4/28/2016  Thursday  Midterm 2 
5/3/2016  Tuesday  Applications of Fourier analysis, Part 4: Modulation. Wireless communications and lock-in amplifiers. Spectral efficiency of telecommunications coding schemes. 
5/5/2016  Thursday  Integral equations. Nystrom's method. 
5/10/2016  Tuesday  Numerical linear algebra: dense direct and sparse iterative algorithms. Overview of PDE solvers. 
5/12/2016  Thursday  Numerical nonlinear algebra: Resultants and tensor eigenvalues. 
The PSets and the final project will require you to write simple computer programs. You may use any programming language you like, but we particularly recommend one of three choices: c (or c++), matlab, or julia. If you are not sure which language to use, here are some thoughts to help guide your decision.
The c programming language, and its close cousin c++, are the most widely used programming languages in the world. Programming in these languages offers you maximal flexibility in using your computer's resources: c/c++ programs run significantly faster than programs written in matlab, python, or other languages. Moreover, almost every useful piece of software out there that you might want to tie into your own is available in c/c++ form, so this option offers the widest range of interoperability with existing codes. If you are planning to do much programming or to work with existing codes in your future, I strongly recommend that you at some point acquire at least a working familiarity with c/c++. However, if you have not done much or any c/c++ programming, you may face a rather steep learning curve, and you may not wish to make that time investment for 18.330.
matlab is a common choice for academic numerical programming. It is much easier to get started programming in matlab than in c/c++. Your programs will run much more slowly and will not be able to take full advantage of your computer's resources, but these factors will not be too significant for the programs you need to write in 18.330. The main difficulty with matlab is that it is closed-source software that is almost prohibitively expensive for anybody outside an academic or corporate environment. (An individual copy costs around $3000, and even we MIT faculty and instructors have to pay $100 every year if we want to use it.) This means not only that (a) you may find yourself without access to the tool once you are out of college, but also that (b) codes you write and want to share with others may be of limited utility, because most people in the world don't have matlab.
This brings us to julia, which combines many of the best features of the previous two options. julia is an exciting new language developed over the past several years at MIT. It is as easy to use as matlab (which means it is much easier to get started with numerical programming in julia than in c/c++), but it has the critical advantage of being free, open-source software, so there is no fear of ever losing access to it, nor any difficulty sharing your codes with someone who doesn't have it herself. It is significantly faster than matlab (again, not a crucial concern for 18.330) and has an enthusiastic and rapidly growing base of users and contributors. This means, in particular, that (a) if you request a new feature you have a decent chance of getting somebody to implement it for you, and (b) there are lots of opportunities for you to contribute to this cool new project. (For example, implementing some pieces of numerical software for inclusion in the julia distribution would be a great final project for 18.330.) The main difficulty with julia is that it is quite new and hence a little rough around the edges. Certain things that you would expect to just work in more mature packages may turn out to be incompletely implemented or (alas) buggy in julia. The good news is that the aforementioned enthusiastic developer base is vigilant about addressing bug reports, so any issues you encounter will probably get fixed quickly; still, using julia has a bit of a scrappy-startup, wild-west feel to it, which may not appeal to all students.
See below for programming examples and tutorial walkthroughs to get you started with numerical programming in each of these three languages. We will also hold special office hours during the first week of class to make sure you are up and running with the programming language of your choice.
If you are not sure which language to use, you should use julia, and in that case you should refer to the awesome resource Julia for Numerical Computation in MIT Courses maintained by MIT Professor Steven Johnson.
You may (and are encouraged to) work together on PSets, but you must write up and turn in your own solutions. For problems involving computer programs, you must submit a listing of your program and its output together with your PSet. If you work with other students to write your program, each student in your study group must separately type in and execute the program to generate a listing submitted with the PSet. This is to ensure that you become familiar with the mechanics and semantics (semicolons, parentheses, etc.) of programming.
See the 18.330 PSet submission guidelines.
Date Due  Problem Set  Solutions 

2/11/2016  PSet 1  PSet 1 Solutions 
2/18/2016  PSet 2  PSet 2 Solutions 
2/25/2016  PSet 3  PSet 3 Solutions 
3/03/2016  PSet 4  PSet 4 Solutions 
3/10/2016  PSet 5  PSet 5 Solutions 
3/17/2016  PSet 6  PSet 6 Solutions 
4/07/2016  PSet 7  PSet 7 Solutions 
4/14/2016  PSet 8  PSet 8 Solutions 
4/21/2016  PSet 9  PSet 9 Solutions 
5/13/2016  (Optional) PSet 10 
Practice Midterm 1 
Practice Problems for Midterm 2 
Note: I don't distribute written solutions of exam problems, but the answers to all parts of all questions may be found distributed throughout the lecture notes and the written PSet solutions.
There is no assigned textbook for this course, but you may find the following references helpful. The links will direct you to online versions (in some cases you will need MIT certificates to access them).

The final project for 18.330 is an exploration and implementation of a numerical algorithm of your choice. Typically this will be an algorithm that we did not cover in class or an extended/improved version of an algorithm that we did cover, although these are not the only choices.
You have considerable leeway in choosing a topic of interest to you and structuring your report in a way that is appropriate for the topic you choose, but your project must incorporate all of the following elements:
A rough guideline for the magnitude of the writeup is that it should consist of 15-20 LaTeX pages (including plots but not including code listings).
Before getting started on your final project, you must submit a brief proposal to let me know what you are planning to do and how you envision structuring your final report. This proposal is due by April 21, 2016 (you may email it to me directly or submit it in class). Below is a rough sample of what a reasonable proposal might look like.
Practical Evaluation of Lattice Sums: Is Ewald Summation Always Best?
Many problems in computational science require efficient and accurate evaluation of lattice sums, that is, sums that accumulate contributions to some physical quantity from sites in a one-, two-, or three-dimensional lattice. Brute-force evaluation of such sums is generally prohibitively expensive, and this fact has spurred the development of more sophisticated and efficient methods. In class we covered one such method, Ewald summation, but we discussed only the electrostatic (zero-frequency) case, and then only the cases of one- and two-dimensional lattices. My final project will extend this discussion to a broader comparison of lattice-summation methods, including consideration of three-dimensional lattices and nonzero-frequency problems.
My paper will begin with a brief introduction (2-4 pages) to the problem. I will then discuss (3-5 pages) the history of lattice-summation methods, touching in particular on the original contributions of Paul Ewald himself and the physical problems that motivated his development of the Ewald-summation technique, but considering also the evolution of techniques spurred by the computer-aided-design (CAD) revolution of the late 20th century. Then I will explain (5-8 pages) the mathematical underpinnings of several sophisticated lattice-summation methods, including (1) Ewald summation, (2) Kummer decomposition, and (3) integral transforms. Finally, for the specific problem of a 2D lattice sum at nonzero frequency, I will implement Ewald summation and Kummer decomposition and compare (3-5 pages) the accuracy and efficiency of these two methods.
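As a warm-up illustration of why acceleration methods matter for lattice sums, consider the simplest one-dimensional analogue, a chain of alternating unit charges, whose energy sum 2 sum_{n>=1} (-1)^(n+1)/n equals 2 ln 2 exactly. A python sketch shows that brute-force partial sums converge only like 1/N, while even the crudest acceleration (averaging consecutive partial sums, one step of the Euler transformation) does far better:

```python
import math

# Simplest 1D "ionic chain" lattice sum: 2 * sum_{n>=1} (-1)^(n+1)/n = 2 ln 2.
exact = 2.0 * math.log(2.0)

N = 1000
partial, s = [], 0.0
for n in range(1, N + 1):
    s += 2.0 * (-1) ** (n + 1) / n
    partial.append(s)

brute = partial[-1]                        # error decays only like 1/N
accel = 0.5 * (partial[-1] + partial[-2])  # one Euler-transform step: O(1/N^2)

print(abs(brute - exact))
print(abs(accel - exact))
```

Ewald summation and Kummer decomposition are vastly more powerful versions of the same idea: rearrange the sum so that the slowly decaying part is handled analytically.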
Your most important task in the final project is to get your implementation of the algorithm(s) working and produce convincing numerical results (plots, data tables, etc.) that demonstrate that your implementation is successful. Thorough discussions of background, theory, and implementation are also important, but even the most thorough treatment of these topics will not compensate for an incomplete implementation or a failure to report results produced by your original implementation. The project is closer in spirit to a research paper than to a book report.
A rough grading rubric for the final project looks something like this.
If your project report contains  Then your grade can be as high as 


100% 

80% 

50% 

25% 
Here are a number of possible finalproject topics, with content spanning a variety of interesting calculations arising in various subfields of science and engineering.
Needless to say, this is not intended to be an exhaustive list, and in no way are you required to choose one of these topics for your project.
However, because these are all calculations that I have implemented myself over the years, usually just for my own fun and education, with no pretensions of competing with the pros (which means I carried them out to roughly the level of completeness and sophistication I expect from your final project), they all have the advantage that I can work with you one-on-one at the beginning to help you get started and map out precisely what you will need to do. This may allow you to proceed slightly further down the road toward implementing a substantial numerical calculation than you might otherwise be able to travel on your own.
Write a small-scale DC simulator for nonlinear electronic circuits. The input to your program will be a simple text file describing a circuit consisting of resistors, capacitors, inductors, transistors, diodes, and voltage sources. Your program will compute the DC operating point of the circuit and report (1) the DC voltage at each circuit node, and (2) the DC current through each circuit branch. (You will use Newton's method of root-finding, which we briefly studied in the first unit of the course.)
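As a hint of what the Newton iteration at the heart of such a simulator looks like, here is a python sketch for a hypothetical one-node circuit: a voltage source and series resistor driving a single diode. The component values and the Shockley diode law are illustrative assumptions, not part of the project specification.

```python
import math

# Hypothetical one-node circuit: source V in series with resistor R driving a
# diode obeying I = Is * (exp(Vd / Vt) - 1).  KCL at the diode node gives
#   f(Vd) = (V - Vd) / R - Is * (exp(Vd / Vt) - 1) = 0,
# which we solve with Newton's method.  (Component values are illustrative.)
V, R, Is, Vt = 5.0, 1e3, 1e-12, 0.025

def f(Vd):
    return (V - Vd) / R - Is * (math.exp(Vd / Vt) - 1.0)

def fprime(Vd):
    return -1.0 / R - (Is / Vt) * math.exp(Vd / Vt)

Vd = 0.6                       # initial guess near a typical diode drop
for _ in range(50):
    step = f(Vd) / fprime(Vd)
    Vd -= step
    if abs(step) < 1e-12:      # converged
        break

I = (V - Vd) / R               # DC current through the branch
print(Vd, I)                   # operating point: Vd ~ 0.56 V, I ~ 4.4 mA
```

A real simulator does the same thing in many dimensions: the scalar residual f becomes a vector of KCL equations and the derivative becomes a Jacobian matrix.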
Write a small-scale implementation of the boundary-element method of computational electrostatics to compute the capacitances of arbitrarily shaped objects. Given input files representing the surfaces of two conductors discretized into little triangles (like this), your program will compute and report the self and mutual capacitances of the conductors. You will test your code by using it to reproduce the (analytically derivable) result for the capacitance between two spheres, then apply it to cases that are not analytically tractable, such as bus lines in microprocessors, whose parasitic capacitance limits maximum clock frequencies.
Write a program that numerically solves the Hartree-Fock equations to compute the binding energy of atoms and small molecules. Your program will input a set of numbers describing, for a given atom or small molecule, the matrix elements of the one- and two-electron terms in the Hamiltonian. Your program will output the Hartree-Fock binding energy of the atom or molecule. You will use your program to compute curves of binding energy vs. bond length for some simple diatomic or triatomic molecules such as nitrogen and ozone; the minima of these curves will be the predictions of your code for the bond lengths of these molecules.
Here is a brief memo that I have put together on how the numerical Hartree-Fock procedure works. Note that I will supply you a code for computing the inputs to your Hartree-Fock calculation for an arbitrary molecule (that is, the one- and two-electron Hamiltonian matrix elements for a given molecule, with given bond lengths and angles, described by a given Gaussian basis set). Here is the binary data file referred to in the Example section of the memo: H2.sto3g.hdf5.
Write a program that uses ODE stepper techniques to trace the trajectories of test particles (geodesics) moving in the presence of black holes. Given the initial position and velocity of a particle in a black-hole spacetime (either Schwarzschild or AdS), compute the (proper) time required for the particle to fall into the black hole. Using experimental data on the black hole at the center of our Milky Way galaxy, compute the time (in years) for the Earth to fall into this black hole.
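The workhorse inside such a program is a generic ODE stepper. As a sketch, here is the classical fourth-order Runge-Kutta step in python, sanity-checked on the harmonic oscillator; for the project you would replace the right-hand side with the geodesic equations of the chosen spacetime.

```python
import math

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y),
    with y a list so that high-order ODEs fit after reduction to first-order form."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h,     [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

# Sanity check: the harmonic oscillator x'' = -x reduced to y = [x, v];
# the exact solution with y(0) = [1, 0] is x(t) = cos(t).
f = lambda t, y: [y[1], -y[0]]
nsteps = 1000
h = 2 * math.pi / nsteps
y, t = [1.0, 0.0], 0.0
for _ in range(nsteps):
    y = rk4_step(f, t, y, h)
    t += h
print(abs(y[0] - 1.0), abs(y[1]))   # global error after one full period: O(h^4)
```

The same dozen lines, pointed at a six-component state (position and velocity in three dimensions) with the geodesic right-hand side, carry the whole project.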
Write a program that uses Monte Carlo methods (the Metropolis / simulated-annealing algorithm or more sophisticated algorithms) to compute the critical temperature of the square-lattice Ising ferromagnet in 2 and/or 3 dimensions. Compare the results of your code to existing analytical results in 2D and to existing numerical results in 3D. Your report will include pictures showing typical configurations of the spins at temperatures well above, well below, and in the vicinity of the critical point.
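As a sketch of the core ingredient, here is a minimal python Metropolis sweep for the 2D square-lattice Ising model; the coupling J = 1, units k_B = 1, lattice size, and temperature are illustrative choices.

```python
import math, random

def metropolis_sweep(spins, L, T, rng):
    """One Metropolis sweep (L*L attempted flips) of the 2D square-lattice
    Ising model with periodic boundaries, coupling J = 1, k_B = 1."""
    for _ in range(L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        nn = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j] +
              spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2.0 * spins[i][j] * nn          # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            spins[i][j] *= -1

rng = random.Random(0)                        # fixed seed for reproducibility
L, T = 10, 1.0                                # T well below T_c ~ 2.269
spins = [[1] * L for _ in range(L)]           # start fully magnetized
for _ in range(500):
    metropolis_sweep(spins, L, T, rng)
m = sum(map(sum, spins)) / (L * L)            # magnetization per spin
print(abs(m))                                 # stays near 1 in the ordered phase
```

Locating the critical temperature then amounts to sweeping T across the transition and watching how the magnetization (and its fluctuations) behave as L grows.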
Research the Lenstra-Lenstra-Lovasz (LLL) lattice-basis reduction algorithm and its use in proving the fundamental insecurity of knapsack cryptography. You may either (1) write your own simple implementation of the LLL algorithm and evaluate its performance on some relevant test cases, or (2) write code that calls a pre-existing implementation of the LLL algorithm to implement the steps an unscrupulous eavesdropper would take to decipher a communication that has supposedly been "encrypted" using knapsack cryptography.
Here are some references on the LLL algorithm (available electronically via MIT libraries if you have MIT certificates):
Lattice Basis Reduction: An Introduction to the LLL Algorithm and its Applications, by Murray R. Bremner
Write a program that computes nontrivial zeros of the Riemann zeta function ζ(z). Your code should input a height t along the critical line and determine, to arbitrary user-specified accuracy, the location of the nearest zero lying on the critical line at a height greater than or equal to t from the real axis. Use your code to determine the locations of the first 100 nontrivial zeros and compare their spacings to the predictions of random-matrix theory.
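As a proof of concept, here is a pure-python sketch that locates the first nontrivial zero by bisecting the Riemann-Siegel Z function. It evaluates ζ on the critical line via Borwein's alternating-series acceleration for the eta function and uses the standard asymptotic series for the phase θ(t); it is an illustration only, not the arbitrary-accuracy code the project calls for.

```python
import cmath, math

def zeta(s, n=50):
    """zeta(s) = eta(s) / (1 - 2^(1-s)), with eta evaluated by Borwein's
    alternating-series acceleration (ample accuracy for moderate |Im s|)."""
    d, acc = [0.0] * (n + 1), 0.0
    for i in range(n + 1):
        acc += math.factorial(n + i - 1) * 4.0 ** i / (
               math.factorial(n - i) * math.factorial(2 * i))
        d[i] = n * acc
    eta = -sum((-1) ** k * (d[k] - d[n]) / (k + 1) ** s for k in range(n)) / d[n]
    return eta / (1.0 - 2.0 ** (1 - s))

def Z(t):
    """Riemann-Siegel Z(t): real-valued for real t, with the same zeros as
    zeta on the critical line.  theta(t) from its standard asymptotic series."""
    theta = (t / 2) * math.log(t / (2 * math.pi)) - t / 2 - math.pi / 8 \
            + 1 / (48 * t) + 7 / (5760 * t ** 3)
    return (cmath.exp(1j * theta) * zeta(0.5 + 1j * t)).real

# Bisect Z on [14, 14.2], an interval that brackets the first nontrivial zero.
lo, hi = 14.0, 14.2
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if Z(lo) * Z(mid) <= 0:
        hi = mid
    else:
        lo = mid
print(0.5 * (lo + hi))   # the first zero lies at t = 14.134725...
```

Working with the real-valued Z(t) rather than |ζ| is the standard trick: a zero on the critical line shows up as an honest sign change that bisection can latch onto.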