
Tuesday, August 15, 2006

Math Fest, 2006

This is a summary of my observations from the Mathematical Association of America's Math Fest 2006.

Everything here has been included in my personal blog, blueollie. Here I've attempted to strip out irrelevant material. I am starting with a little bit about my adventure getting to the conference; those who want to read only about mathematics should scroll down to the next section.

Day One: getting there, getting situated.

Today I woke up a bit later than normal because I was very tired. Hence, I limited my workout to a 3 mile run on the treadmill followed by 15 minutes of yoga.

This hotel (the Crowne Plaza) has an excellent workout facility and a reasonably well-stocked weight room; I'll use it a bit more extensively tomorrow morning, I hope.

The trip itself: interesting. First of all, I noticed that in Chicago (the day prior to this trip) and at times on this trip, more people than normal have given me nods of approval.

I wondered why; it isn't as if I've trimmed down that much. Then I realized: I now have a very close-cropped crew cut. My guess is that many think that I am military or perhaps police.

Go figure; if only they knew that I was a Kos-reading lefty.

As for the road trip, here is the good: my Prius got 45.8 miles to the gallon, and that included the stretch through big hills. The bad: I left my credit card in a Peoria drugstore (the CVS at Campustown). Fortunately, they still had my card, but I didn't know that at the time. I had to call after hours to get the card cancelled, and that was an adventure in itself. I highly suggest carrying the numbers of your credit card companies with you for after-hours emergencies such as this one.

Numbers: VISA (800) 847-2911
Master Card: 1-800-MC-ASSIST (1-800-622-7747)
Discover: 1-800-DISCOVER (1-800-347-2683)
American Express: 1-888-412-6945

Anyway, having these numbers would have helped me some.

But between playing with Vickie, hitting an early rainstorm, discovering that my check card was lost, and having to stop almost every hour to relieve my bladder, I didn't get in until after 9 pm central time, or after 10 pm local time.

I was very tired.

Day One: The Mathematics

The good news is that the talks so far have been outstanding.

Dorothy Buck led off with an excellent talk about knot theory and its role in understanding the chemical reactions in DNA. Basically, Dr. Buck and Erica Flapan have shown that, subject to three biological assumptions, only 6 types of links need to be considered (when one wants to understand the crossing change operations that DNA undergoes), and that, via tangle theory, one can translate the problem into the theory of knots and lens spaces by taking the double branched cover of a tangle (which is a solid torus) and studying Dehn surgery.

Next, Fields Medalist W. T. Gowers gave a nice talk about some "easy to state, hard to solve" combinatorial problems. Example: let R be a commutative ring and A a subset. The sum set is the set of all elements of R of the form x + y where x and y are both in A (note: x + x is permitted). The multiplicative set is defined in a similar way.

So, suppose that you know that the additive and multiplicative sets are of a certain size: what can you say about the set A you started with?
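To make the definitions concrete, here is a quick toy computation in Python (my own illustration, not from the talk): the sum set and multiplicative set of a small set of integers, showing how additive structure keeps the sum set small while blowing up the product set, and vice versa.

# Sum sets and product sets for a finite set of integers
# (a special case of the subset A of a ring R from Gowers' example).

def sum_set(A):
    """All elements x + y with x, y in A (x = y allowed)."""
    return {x + y for x in A for y in A}

def product_set(A):
    """All elements x * y with x, y in A (x = y allowed)."""
    return {x * y for x in A for y in A}

A = {1, 2, 3, 4, 5}                # arithmetic progression
print(len(sum_set(A)), len(product_set(A)))   # 9 and 14: sum set small

B = {2, 4, 8, 16, 32}              # geometric progression
print(len(sum_set(B)), len(product_set(B)))   # 15 and 9: product set small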

Next, Jesus De Loera gave a nice lecture on convex polytopes and presented some simple-to-state, yet still unsolved, conjectures. Here is one:

In dimension three, it is known that if you are given a triple of integers x, y, z such that

x -y + z = 2
2y >= 3z
2y >= 3x

then there is a convex polytope with x vertices, y edges, and z faces (this is essentially Steinitz's characterization of such triples; the polytope need not be unique). For example, for a tetrahedron, x = 4, y = 6, z = 4; for a cube, x = 8, y = 12, z = 6; for a square pyramid, x = 5, y = 8, z = 5.

Such a characterization, if one exists, is completely unknown for 4-dimensional polytopes.
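As a quick sanity check, here is a short Python snippet (my own, assuming the three conditions above are exactly the characterization) that tests whether a triple can be the vertex/edge/face count of a convex 3-polytope:

def is_3_polytope_f_vector(x, y, z):
    """Test the conditions above for (vertices, edges, faces)."""
    return (x - y + z == 2) and (2 * y >= 3 * z) and (2 * y >= 3 * x)

print(is_3_polytope_f_vector(4, 6, 4))    # tetrahedron: True
print(is_3_polytope_f_vector(8, 12, 6))   # cube: True
print(is_3_polytope_f_vector(5, 8, 5))    # square pyramid: True
print(is_3_polytope_f_vector(5, 9, 5))    # fails Euler's formula: False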

Of course, these were some of the major invited addresses; I expected these to be good. But the contributed talks were also good.

Among those: I heard a nice talk about Hill's cipher algorithm (given by Bill Wardlaw, who was my first abstract mathematics professor at Annapolis); a talk on check digits of codes which detect errors in digits as well as in their order (using the action of permutation groups on dihedral groups); one on using magic squares (and their relatives) as examples of vector spaces and ideals in rings; and one on the enumeration of Sudoku puzzles. I also heard a nice talk by Ed Burger on how to (not) teach a class that introduces the idea of proofs to undergraduates. Ed was a friend in graduate school; he now has a list of honors ten miles long, including the Chauvenet Prize for expository writing. Sigh... I knew him when...

This is Ed Burger, answering questions after his well-received talk. If someone thinks that stellar teaching and good research can't go hand in hand, think again! Ed is proof that they can.


Though this shot leaves something to be desired, this is Bill Wardlaw of the U. S. Naval Academy, who taught me my first course on Modern Algebra back in the spring of 1979.

Day Two: I give my talk (ok, it is a 10 minute talk)

Personal Journal: I started the day with a short walk on the treadmill (3 miles, 37 minutes; 24 minutes for the last two miles) followed by weights and yoga.

Today I gave my talk; it seemed to go ok. Here is what I talked about: I presented a paper which says, in effect, that if one evaluates the limit of a two-variable function along every curve with continuous first derivative that runs through a given point, and one obtains the same limit along all such curves, then the limit of the function exists at that point.

Note that this is false if one replaces "all curves with continuous first derivatives" by "all lines," or even by "all real analytic curves" (curves given by functions which have a power series expansion, valid on some open set, at every point of their domain).
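A standard example behind that caveat (my own illustration, not from the paper): f(x, y) = x^2 y / (x^4 + y^2) has limit 0 along every straight line through the origin, yet equals 1/2 all along the parabola y = x^2, which is certainly a curve with continuous first derivative. A quick numerical check in Python:

# Limit is 0 along every line through the origin, but not along y = x^2.
def f(x, y):
    return x**2 * y / (x**4 + y**2) if (x, y) != (0, 0) else 0.0

for t in [0.1, 0.01, 0.001]:
    print(f(t, 2 * t),    # along the line y = 2x: tends to 0
          f(t, t**2))     # along the parabola y = x^2: stays at 0.5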

In addition to giving my talk, I also dropped some dollars on math books. This is to be expected.

Notable talks, in addition to the Gowers talk, included the series on Physical Knot Theory.

First, Ken Millett talked about using the computer to generate random physical knots and to see which knots were produced. Note that he wasn't merely interested in knot type, but also in the various physical qualities of a given knot representation. He was especially interested in "equilateral polygonal knots" (knots formed from line segments, each of the same length).

Eric Rawdon informed me that, so far, equilateral polygonal knots have been realized at their (known) minimum stick number, though the knot 8_19 might be an exception.

Next, Lou Kauffman talked about knots and rational tangles. He described work with Sofia Lambropoulou. In particular, he used the theory of rational tangles to find a way to recognize the unknot. Note that recognizing the unknot isn't always easy on computational grounds: a theorem of Hass and Lagarias shows that one can always unknot a diagram of the unknot using at most 2^(cn) Reidemeister moves, where c is some fixed constant and n is the number of crossings of the diagram. Unfortunately, the lowest known value of c is 10^11.

Tom Banchoff then talked about piecewise circular space curves; that is, curves that are formed out of pieces of honest-to-goodness circles. He talked about various properties (e.g., rigidity) and about various kinds of "dual polygons" associated with such curves.

Eric Rawdon gave an interesting talk about tight knots. Here, "tight knot" means the following: suppose you thicken a smooth knot to a regular neighborhood (a knotted solid torus) of some fixed diameter; call this diameter 1. Then, using this scale, what is the shortest length of rope needed to tie a given knot?

Eric's papers can be found here: http://www.mathcs.duq.edu/~rawdon/Preprints/
There is enough here to keep you busy for a very long time.

Day Three: Another good session.

This post will start with the personal and then get to the mathematics. Again, those who wish can scroll down a bit.

Athletically: 4 miles of "running" on the treadmill; the first mile was a 10:10 warm up (5:30 first half mile). I admit that I was a bit distracted when writing this post as I am in a hotel lobby and a tall lady in a pretty, loose but sort of clingy dress just got on the elevator.

Hey, if I didn't like women, I wouldn't like my wife.

Now back to the post: my run was both good (got some exercise) and bad (I couldn't hold an 8:30 pace for 2.5 miles; that used to be my marathon pace). The road back is very long; the good news is that my hip/back/IT band is slowly getting better.

While in Knoxville, I found a nice place to eat called The Tomato Head. It is in Market Square. They serve good pizza as well as other types of food (salads, sandwiches, etc.). The servers and workers are from the "tie-dye" set and are downright pleasant. One even asked me about my ultramarathon t-shirt (no, she wasn't my server, so she wasn't sucking up for a tip). This is the kind of place you go to after yoga class; it is the type of place you go to in order to forget that George W. Bush is our President.


------------------
The Mathematics from Day Three
The morning invited talks were outstanding. In the second talk of the day, Gowers finished his three-part lecture on analytic combinatorics by showing the utility of the Discrete Fourier Transform, among other things.

In the first talk of the day, Trachette Jackson, who is one of the hottest young researchers, gave an excellent talk on the mathematics of cancer cell modeling.

The following is a very incomplete sketch of what she talked about. First, she addressed a basic question: what makes a cell cancerous? In a nutshell, a cancer cell is one which gets around dying naturally (evading apoptosis), doesn't take growth/death cues from its environment, can initiate and sustain angiogenesis (grows blood vessels to take blood from the host body), is invasive, and has nearly limitless reproductive potential.

She said that the cells first form a somewhat spherical cluster which gets nutrition via its boundary. Eventually, the inner core starts to die off (it gets starved; this is stage I); at that point, the tumor starts growing capillaries to get a blood supply (stage II). At stage III, it is growing and has established its own blood supply.

There are various ways of attacking cancer; some deal with trying to shrink the tumor itself, some with trying to kill off the blood vessels supplying it (this was Judah Folkman's 1998 breakthrough in mice, which failed to be replicated in humans).

Her research deals with using partial differential equations to model the situation at various stages, and to see what happens to growth when various parameters are changed (reducing the ability to obtain oxygen, shrinking the newly made blood vessels, limiting the production of certain secreted chemicals, for example).

The differential equations that are obtained are not for the faint of heart; one of them is

dN/dt = phi(V) N (1 - N/N_0)

where N is the number of cells in the tumor, V is a measure of the blood vessel density, N_0 is a carrying capacity, and phi is the function

phi(V) = (r_1 c^2)/(c_1^2 + c^2) - r_2 (1 - (sigma c^2)/(c_2^2 + c^2)),

where c is the oxygen concentration, which is itself a function of V:

c(V) = c_m V/(k + V),

and c_m, sigma, k, c_1, c_2, r_1 and r_2 are parameters which must be determined from experiment.

Of course, these equations must be solved numerically.
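To give a flavor of what solving such an equation numerically looks like, here is a minimal Python sketch that integrates the N equation above with scipy, holding V fixed. All parameter values here are invented for illustration; they are not Jackson's.

# Toy numerical solve of dN/dt = phi(V) N (1 - N/N_0) with V held fixed.
# Every parameter value below is made up purely for illustration.
from scipy.integrate import solve_ivp

r1, r2, sigma = 1.0, 0.3, 0.5
c1, c2, cm, k = 0.2, 0.2, 1.0, 1.0
N0 = 1e6                                  # carrying capacity

def c_of_V(V):
    return cm * V / (k + V)               # oxygen concentration

def phi(V):
    c = c_of_V(V)
    return (r1 * c**2) / (c1**2 + c**2) - r2 * (1 - sigma * c**2 / (c2**2 + c**2))

V = 2.0                                   # fixed vessel density for this toy run
sol = solve_ivp(lambda t, N: phi(V) * N * (1 - N / N0),
                t_span=(0, 50), y0=[1e3])
print(sol.y[0][-1])                       # tumor cell count at t = 50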

--------
The next invited talk, by David Bressoud, was about math history and the current teaching of calculus. His talk centered on some of the basic theorems of real analysis and calculus and their historical context. What made this talk a bit different is that he used the history to explain why some of these theorems are difficult for the average undergraduate to understand or even appreciate.

The talk he gave can be accessed here: http://www.macalester.edu/~bressoud/talks/Knoxville.ppt

Some examples: take how we teach the Fundamental Theorem of Calculus. It wasn't even published in its current form until 1907; prior to that, various forms of an "integral as some sort of anti-derivative" definition were used rather than the traditional Riemann sums. The power of the fundamental theorem is that it shows that the Riemann sum and the anti-derivative are equivalent concepts, under certain special conditions. This association is not at all natural.

Yes, I know the proof.
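And for the skeptical, a purely numerical sanity check (my own toy example, not from Bressoud's talk): the left Riemann sum of f(x) = x^2 on [0, 2] and the antiderivative evaluation F(2) - F(0) agree.

# Riemann sum vs. antiderivative for f(x) = x^2 on [0, 2].
import numpy as np

n = 100_000
x = np.linspace(0, 2, n, endpoint=False)   # left endpoints
riemann = np.sum(x**2) * (2 / n)           # left Riemann sum
antideriv = 2**3 / 3 - 0**3 / 3            # F(2) - F(0), with F(x) = x^3/3
print(riemann, antideriv)                  # both close to 8/3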

He talked about Cauchy's attempt to provide some rigor to calculus, and how even good contemporary mathematicians found some of his work difficult to understand. And he showed one of Cauchy's big mistakes: Cauchy thought he had proved that a convergent infinite sum of continuous functions converges to a continuous function; Abel showed that this is false (think: Fourier series) unless one has the notion of uniform convergence.

He then talked about the so-called Heine-Borel theorem (ironically, Schoenflies was the first to refer to the theorem by this name) and how this theorem is really only needed at the start of the discussion of measure theory. My take: that might well be true, but H-B sure makes life easier.

For those who don't know, the H-B theorem says that closed, bounded subsets of the real line are compact in the standard metric topology. That is, if one covers a closed interval by a collection of open intervals, then some finite subcollection of those open intervals also covers the closed interval.

By the way, Heine had very little to do with this theorem, though he did work on uniform convergence.
---------------------

I attended two afternoon sessions of contributed talks. One was a session on the teaching of numerical methods. There were a few talks of the "here is what I did and this is what happened" variety; one talk was about an ambitious plan to cover discrete methods of solution for partial differential equations; one was about automatic differentiation; and one was about the use of some cool applets for exploring various curve approximation schemes (splines, Lagrange interpolation, etc.).

The automatic differentiation talk by Richard Neidinger of Davidson was especially interesting. The idea is to view a differentiable function as a 2-dimensional vector: the first coordinate is the value of the function at a point, and the second coordinate is the value of the derivative there. Then one does a weird sort of algebra with these vectors: one scalar-multiplies and adds in the usual way (component-wise), but multiplication is more interesting. One multiplies the first components in the usual way, while the second component of the product mixes the two vectors so as to respect the product rule: (a, b) * (c, d) = (ac, ad + bc) under this "algebra".

The value is that differentiation can be done purely numerically once one knows the formula for the function; for example, one doesn't need to enter the formula for the derivative into, say, a Newton's method root-finding program.
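Here is a minimal sketch of the idea in Python (mine, not Neidinger's code): a (value, derivative) pair class whose multiplication follows the rule above, driving Newton's method without a hand-written derivative.

# Forward-mode automatic differentiation via (value, derivative) pairs.
class Dual:
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a, b) * (c, d) = (ac, ad + bc): the product rule in disguise
        return Dual(self.value * other.value,
                    self.value * other.deriv + self.deriv * other.value)
    __rmul__ = __mul__

def newton(f, x, steps=10):
    for _ in range(steps):
        y = f(Dual(x, 1.0))           # seed the derivative of x as 1
        x = x - y.value / y.deriv     # Newton step; derivative came for free
    return x

# Find a root of f(x) = x^3 - 2, i.e. the cube root of 2.
print(newton(lambda x: x * x * x + (-2.0), 1.0))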

-----------
I then ended the conference by catching the tail end of the last general contributed paper session. There were a few good talks here as well.

Elena Constantin gave a nice talk about how one can generalize the Lagrange multiplier method/second derivative test to cases where those methods don't work (e.g., when the Hessian matrix is singular). She showed how her method worked, though she didn't have time to give an outline of the proof. My guess is that she used the multi-variable power series method, but that is just a guess.

David Austin gave a nice talk about how to visualize the Cauchy-Riemann equations in complex analysis.

Basically, the theorem is this: if f(x + iy) = u(x,y) + iv(x,y) is a complex function, then f is differentiable on an open set if and only if the so-called Cauchy-Riemann equations are satisfied (for the "if" direction, one also needs the partial derivatives to be continuous):

u_x = v_y and u_y = -v_x

The usual proof is to use the limit definition of the derivative, approach the point in question along the real-axis direction and along the imaginary-axis direction, note that one must get the same limit, and equate real and imaginary parts.

Austin gave a nice visual way of seeing this that uses the concept of how the complex derivative acts on sets in the domain (stretches by the magnitude of the derivative and rotates by the argument of the derivative evaluated at the given point).
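A quick way to see the equations in action (my own numerical check, not Austin's): approximate the partials of u and v for f(z) = z^2, where u = x^2 - y^2 and v = 2xy, by central differences.

# Numerically verify u_x = v_y and u_y = -v_x for f(z) = z^2.
def u(x, y): return x**2 - y**2   # real part of z^2
def v(x, y): return 2 * x * y     # imaginary part of z^2

h, x0, y0 = 1e-6, 0.7, -1.3
ux = (u(x0 + h, y0) - u(x0 - h, y0)) / (2 * h)
uy = (u(x0, y0 + h) - u(x0, y0 - h)) / (2 * h)
vx = (v(x0 + h, y0) - v(x0 - h, y0)) / (2 * h)
vy = (v(x0, y0 + h) - v(x0, y0 - h)) / (2 * h)
print(ux, vy)    # both approximately 1.4  (u_x = v_y)
print(uy, -vx)   # both approximately 2.6  (u_y = -v_x)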

His talk can be found here: http://merganser.math.gvsu.edu/david/mathfest06/

Jay Schiffman gave a talk about Collatz k-tuples. The Collatz conjecture says that any positive integer can be reduced to 1 by repeatedly applying the following rules: if the integer is odd, multiply by three and add 1; if it is even, divide by two.

For example: start with 7. 7 * 3 + 1 = 22
22/2 = 11
11* 3 + 1 = 34
34/2 = 17
17*3 + 1 = 52
52 / 2 = 26
26 / 2 = 13
13*3 + 1 = 40
40/2 = 20
20/2 = 10
10/2 = 5
5*3 + 1 = 16

16 is a power of 2: 16/2 = 8, 8/2 = 4, 4/2 = 2, 2/2 = 1.
So, counting the steps, we say that 7 has "Collatz length" 16. (My note: to me, it was over when we reached 16, since 16 is a power of 2, so to me a better number to focus on would be 12, but I digress.)

Now if, say, there are numbers n, n+1 that have the same Collatz length, these are called a Collatz pair. If there are numbers n, n+1, n+2 that have the same Collatz length, these are called a Collatz triple, and so on.
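Here is a short Python sketch (my own) that computes Collatz length as defined above and searches for Collatz pairs:

# Collatz length: number of steps to reach 1 under the 3n+1 rules.
def collatz_length(n):
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

assert collatz_length(7) == 16   # matches the worked example above

# Find the first few Collatz pairs: n and n+1 with equal Collatz length.
pairs = [n for n in range(1, 1000)
         if collatz_length(n) == collatz_length(n + 1)]
print(pairs[:10])   # begins 12, 13, ...: (12, 13) is the first pair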

Anyway, Schiffman's talk was about this. Note: it is still unknown whether every positive integer can be reduced to 1 by this process, though this has been verified for all numbers <= 3 x 2^(53).

Then Mahmoud Almanassra closed the session with a nice talk about finding estimators for the parameters of "quality adjusted lifetime" hazard functions.

My reliability engineering course is a long way in my past, so here is my best recollection of the issues: a hazard function models the instantaneous failure rate of something (a human, an electronic component, etc.). Typically, the hazard rate follows a bathtub curve: units have initial defects, then go through an ok period, then start dying off near the end of their lives. If one wants to measure only for a certain length of time and then stop counting, one is "censoring" the data.
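To make the "bathtub" shape concrete, here is a toy hazard function in Python (my invention for illustration; it is not Almanassra's model, and the coefficients are arbitrary):

# A toy bathtub-shaped hazard: early failures + constant risk + wear-out.
import numpy as np

def bathtub_hazard(t, a=2.0, b=0.02, c=0.001):
    # infant-mortality term + random-failure term + wear-out term
    return a * np.exp(-5 * t) + b + c * t**2

t = np.linspace(0, 10, 6)
print(np.round(bathtub_hazard(t), 4))   # high early, flat middle, rising late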

There are estimators for the parameters of such functions, but unfortunately two of the most common ones are not monotonic. This is bad because a non-monotonic estimator might well allow for "births after deaths." Almanassra showed how to correct for this.

Then there is the quality-adjusted hazard function, which takes "quality of life" into account (an obvious meaning for humans; perhaps usefulness for components?). He then developed consistent, monotone estimators for this setting.





