Friday, December 30, 2011

the road to knots as primes

from baez
On to number theory....
There's a widespread impression that number theory is about numbers, but I'd like to correct this, or at least supplement it. A large part of number theory - and by far the coolest part, in my opinion - is about a strange sort of geometry. I don't understand it very well, but that won't prevent me from taking a crack at trying to explain it....
The basic idea is to push the analogy between integers and polynomials as far as it will go. They're similar because you can add, subtract and multiply them, and these operations satisfy the usual rules we all learned in high school:
x + y = y + x (x + y) + z = x + (y + z) x + 0 = x x + (-x) = 0
xy = yx (xy)z = x(yz) x1 = x
x(y + z) = xy + xz
Anything satisfying these rules is called a "commutative ring". There are also a lot of deeper similarities between integers and polynomials, which I'll talk about later. But, there's a big difference! Polynomials are functions on the line, whereas the integers aren't functions on some space - at least, not in any instantly obvious way.

The fact that polynomials are functions on a space is what lets us graph them. This lets us think about them using geometry - and also think about geometry using them. This was the idea behind Descartes' "analytic geometry", and it was immensely fruitful.
So, it would be cool if we could also think about the integers using geometry. And it turns out we can, but only if we stretch our concept of geometry far enough!
If we do this, we'll see some cool things. First of all, we'll see that algebra is just like geometry, only backwards.
What do I mean by this? Well, whenever you have a map T: X → Y going from the space X to the space Y, you can use it to take functions on Y and turn them into functions on X. Since this goes backwards, it's called "pulling back along T". Here's how it goes: if f is a function on Y, we get a function T*(f) on X given by:
T*(f)(x) = f(T(x))
Moreover, functions on a space form a commutative ring, since you can add and multiply them pointwise, and pulling back is a "homomorphism", meaning that it preserves all the structure of a commutative ring:
T*(f + g) = T*(f) + T*(g) T*(0) = 0
T*(fg) = T*(f) T*(g) T*(1) = 1
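To make the pullback formula concrete, here's a toy illustration in Python - my own sketch, not anything from the original text - where "spaces" are finite sets and "functions on a space" are dicts from points to values:

```python
# Model: spaces are finite sets; a function on a space is a dict point -> value.
X = {1, 2, 3}
Y = {'a', 'b'}
T = {1: 'a', 2: 'a', 3: 'b'}        # a map T: X -> Y

def pullback(f):
    """T*(f)(x) = f(T(x)): turn a function f on Y into a function on X."""
    return {x: f[T[x]] for x in X}

# Pointwise ring operations on functions:
def add(u, v): return {p: u[p] + v[p] for p in u}
def mul(u, v): return {p: u[p] * v[p] for p in u}

f = {'a': 2.0, 'b': 5.0}
g = {'a': 10.0, 'b': 1.0}

# Pulling back is a homomorphism of commutative rings:
assert pullback(add(f, g)) == add(pullback(f), pullback(g))
assert pullback(mul(f, g)) == mul(pullback(f), pullback(g))
```

Note that everything goes "backwards": T maps X to Y, but pullback sends functions on Y to functions on X.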
Conversely, any sufficiently nice homomorphism from functions on Y to functions on X will come from some map T: X → Y this way! Here I'm summarizing a whole bunch of different theorems, each of which goes along with its own precise definition of "space", "map", and "nice".

Some of these theorems are technical, but the basic idea is simple: we can translate back and forth between the study of commutative rings (algebra) and the study of spaces (geometry) by thinking of commutative rings as consisting of functions on spaces. We get a little dictionary for translating between geometry and algebra, like this:
GEOMETRY                        ALGEBRA
spaces                          commutative rings
maps                            homomorphisms
But be careful: this translation turns maps into homomorphisms going backwards: it's "contravariant". This is really important in two ways. First, suppose we have a point x in a space X. This gives a map
i: {x} → X
This, in turn, gives a homomorphism i* sending functions on X to functions on {x}. Functions on a one-point space are like numbers, so i* acts like "evaluation at x". Moreover, i* will tend to be onto: that's the backwards analogue of the fact that i is one-to-one!

Second, suppose we have a map from a space E onto the space X:
p: E → X.
If you know some topology, think of E as a "covering space" of X. Then we get a homomorphism p* from functions on X to functions on E. Moreover p* will tend to be one-to-one: that's the backwards version of the fact that p was onto!

We can use these examples to figure out the analogue of a "point" or a "covering space" in the world of commutative rings! And the resulting ideas turn out to be crucial to modern number theory.
In "week199" I explained the analogue of a "point" for commutative rings: it's a "prime ideal". So, now I want to explain the analogue of a "covering space". This will expand our dictionary so that it relates Galois groups to fundamental groups of topological spaces... and so on.
But, we won't get too far if we don't remember why a "prime ideal" is like a "point"! So, I guess I'd better review some of "week199" before charging ahead into the beautiful wilderness.
What's special about the ring of functions on a space consisting of just one point? Take real- or complex-valued functions, for example. How do these differ from the functions on a space with lots of points?
The answer is pretty simple: on a space with just one point, a function that vanishes anywhere vanishes everywhere! So, the only function that fails to have a multiplicative inverse is 0. For bigger spaces, this isn't true.
A commutative ring where only 0 fails to have a multiplicative inverse is called a "field". So, the algebraic analogue of a one-point space is a field.
This means that the algebraic analogue of a map from a one-point space into some other space:
i: {x} → X
should be a homomorphism from a commutative ring R to a field k:

f: R → k
Our translation dictionary now looks like this:
GEOMETRY                        ALGEBRA
spaces                          commutative rings
maps                            homomorphisms
one-point spaces                fields
maps from one-point spaces      homomorphisms to fields
It's worth noting some subtleties here. In the geometry we learned in high school, once we see one point, we've seen 'em all: all one-point spaces are isomorphic. But not all fields are isomorphic! So, if we're trying to think of algebra as geometry, it's a funny sort of geometry where points come in different flavors!

Moreover, there are homomorphisms between different fields. These act like "flavor changing" maps - maps from a point of one flavor to a point of some other flavor.
If we have a homomorphism f: R → k and a homomorphism from k to some other field k', we can compose them to get a homomorphism f ': R → k'. So, we're doing some funny sort of geometry where if we have a point mapped into our space, we can convert it into a point of some other flavor, using a "flavor changing" map.
Now let's take this strange sort of geometry really seriously, and figure out how to actually turn a commutative ring into a space! First I'll describe what people usually do. Eventually I'll describe what perhaps they really should do - but maybe you can guess before I even tell you.
People usually cook up a space called the "spectrum" of the commutative ring R, or Spec(R) for short. What are the points of Spec(R)? They're not just all possible homomorphisms from R to all possible fields. Instead, we count two such homomorphisms as the same point of Spec(R) if they're related by a "flavor changing process". In other words, f ': R → k' gives the same point as f: R → k if you can get f ' by composing f with a homomorphism from k to k'.
This is a bit ungainly, but luckily there's a quick and easy way to tell when f: R → k and f ': R → k' are related by such a flavor changing process, or a sequence of such processes. You just see if they have the same kernel! The "kernel" of f: R → k is the subset of R consisting of elements r with
f(r) = 0
The kernel of a homomorphism to a field is a "prime ideal", and two homomorphisms are related by a sequence of flavor changing processes iff they have the same kernel. Furthermore, every prime ideal is the kernel of a homomorphism to some field. So, we can save time by defining Spec(R) to be the set of prime ideals in R.

For completeness I should remind you what a prime ideal is! An "ideal" in a ring R is a set closed under addition and closed under multiplication by anything in R. It's "prime" if it's not all of R, and whenever the product of two elements of R lies in the ideal, at least one of them lies in the ideal.
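Here's a tiny Python illustration - my own, not from the original text. Reduction mod 5 is a homomorphism from Z onto the field Z/5Z; its kernel is the ideal <5>, and we can brute-force the prime-ideal condition on a small window of integers:

```python
p = 5
# Reduction mod p: a homomorphism from Z onto the field Z/pZ.
def f(n): return n % p

# Its kernel is the ideal <5>: the multiples of 5.
def in_kernel(n): return f(n) == 0

# Prime-ideal condition: if a product lies in the ideal,
# at least one factor does.  Check it on a small window of Z.
for a in range(-20, 21):
    for b in range(-20, 21):
        if in_kernel(a * b):
            assert in_kernel(a) or in_kernel(b)

# With a composite modulus the condition fails: 2*3 = 6 lies in <6>,
# but neither 2 nor 3 does.  So <6> is not prime - and Z/6Z is not a field.
assert (2 * 3) % 6 == 0 and 2 % 6 != 0 and 3 % 6 != 0
```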
So, we have something like this:
GEOMETRY                        ALGEBRA
spaces                          commutative rings
maps                            homomorphisms
one-point spaces                fields
maps from one-point spaces      homomorphisms to fields
points of a space               prime ideals of a commutative ring
Now let's use these ideas to study "branched covering spaces" and their analogues in algebra. This week I'll talk about two examples. The first is very geometrical, and it should be familiar to anyone who has studied a little complex analysis. The second is more algebraic, and it's important in number theory. But, the cool part is that they fit into the same formalism!

If you don't know what a branched covering space is, don't worry: we'll start with the very simplest example. We'll look at this map from the complex plane to itself:
p: C → C
p(z) = z^2

Except for zero, every complex number has two square roots, so this map is two-to-one and onto away from the origin. In fact, away from the origin you can visualize this thing locally as two sheets of paper sitting above one. But these two sheets have a global complication: if you start on the top sheet and hike once around the origin, you wind up on the bottom sheet - and vice versa! In topology we call this sort of thing a "double cover". When we include the point z = 0 things get even more complicated, since the two sheets meet there. So we have something trickier: a "branched cover". In general, a branched cover is like a covering space except that the different "sheets" can merge together at certain points, called "branch points".

Now let's think about this algebraically. To keep from getting confused, let's write
z^2 = w
so that p is a map from the "z plane" down to the "w plane", sending each point z to the point z^2 = w. The ring of polynomial functions on the z plane is called C[z]; the ring of polynomial functions on the w plane is called C[w]. We can pull functions from the w plane back up to the z plane:
p*: C[w] → C[z]
and p* works in the obvious way, taking any function f(w) to the function f(z^2).
Just as p is onto, p* is one-to-one! So, we can think of C[w] as sitting inside C[z], consisting of those polynomials in z that only depend on z^2: the even functions. We say C[w] is a "subring" of C[z], or equivalently, that C[z] is an "extension" of C[w].
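By the way, the geometric claim above - hiking once around the origin switches the two sheets - is easy to check numerically. Here's a little Python sketch (my own illustration): walk w around the unit circle and track a continuously chosen square root z by always picking whichever root is closest to the previous one:

```python
import cmath

# Walk w once around the origin along the unit circle, tracking a
# continuously chosen square root z with z^2 = w.  At each step, pick
# whichever of the two roots +/- sqrt(w) is closer to the previous z.
steps = 1000
z = 1.0 + 0.0j                      # start on the "top sheet": sqrt(1) = 1
for k in range(1, steps + 1):
    w = cmath.exp(2j * cmath.pi * k / steps)
    r = cmath.sqrt(w)
    z = r if abs(r - z) < abs(-r - z) else -r

# After one full loop around the branch point, we're on the other sheet:
assert abs(z - (-1)) < 1e-6         # z has become -1, the other square root of 1
```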
In this example we can get the bigger ring from the smaller one by throwing in solutions of some polynomial equations, so we call it an "algebraic extension". We've already seen some algebraic extensions, namely algebraic number fields, where we take the field of rational numbers and throw in some solutions of polynomial equations. Algebraic extensions can be complicated, but this one is really simple: we just start with C[w] and throw in the solution of one polynomial equation, namely
z^2 = w
It turns out that quite generally, algebraic extensions of commutative rings act a lot like branched covering spaces. I probably don't have the technical details perfectly straight, but let's add this to our translation dictionary, because it's an important idea:
GEOMETRY                        ALGEBRA
spaces                          commutative rings
maps                            homomorphisms
one-point spaces                fields
maps from one-point spaces      homomorphisms to fields
points of a space               prime ideals of a commutative ring
branched covering spaces        algebraic extensions of commutative rings
Now let's have some fun: let's see how our algebraic concept of "point", namely "prime ideal", interacts with our branched double cover of the complex plane. There's something straightforward going on, but also something more subtle and interesting.

The straightforward thing is that any point up on the z plane maps to one down on the w plane. We don't need fancy algebra to see this! But, it's worth doing algebraically. According to the fancy algebraic definition, a "point" in the spectrum of the commutative ring C[z] is a prime ideal. But as you might hope, these are the same as good old-fashioned points in the complex plane!
It works like this: given any point x in C, we get a homomorphism from C[z] to C called "evaluation at x", which sends any polynomial f to the number f(x). The kernel of this is the prime ideal consisting of all polynomials that vanish at x. These are just the polynomials containing a factor of z - x, so we call this ideal
<z - x>
So, we get some prime ideals in C[z] from points of C this way. But in fact there's a theorem that every prime ideal in C[z] is of this form! So, we get a one-to-one correspondence
Spec(C[z]) = C
Similarly,
Spec(C[w]) = C
Now let's think about our branched cover
p: C → C
in different ways. It starts out life as a map from the z plane down to the w plane. We can use this to pull back functions on the w plane up to the z plane:

p*: C[w] → C[z]
But then, by general abstract baloney, the inverse image under p* of any prime ideal in C[z] is a prime ideal back in C[w]. This gives a map from Spec(C[z]) to Spec(C[w]). But this is just a map from the z plane to the w plane! And it's the same map p we started with. If you don't see why, it's a good exercise to check this.
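If you'd rather do that exercise with a computer, here's one way - a sketch using sympy, which I'm assuming is available. Membership in a principal ideal <g> is just divisibility by g, tested via polynomial remainder; the check below confirms that the preimage under p* of the point <z - 3> upstairs is the point <w - 9> downstairs, i.e. p sends 3 to 3^2 = 9:

```python
from sympy import symbols, rem

z, w = symbols('z w')
a = 3                                # the point z = 3 up on the z plane

# p* sends f(w) to f(z^2).  The preimage under p* of the prime ideal
# <z - a> should be the prime ideal <w - a^2>: f(z^2) vanishes at z = a
# exactly when f vanishes at w = a^2.
for f in (w - 9, (w - 9)*(w + 1), w**2 - 81, w - 5):
    in_preimage = rem(f.subs(w, z**2), z - a, z) == 0    # p*(f) in <z - a>?
    in_ideal    = rem(f, w - a**2, w) == 0               # f in <w - a^2>?
    assert in_preimage == in_ideal
```

So translating the map Spec(C[z]) → Spec(C[w]) back into points of the plane really does give a ↦ a^2, the map p we started with.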
So: we translated from geometry to algebra and back to geometry, and we got right back where we started. Note that each time we translated, our description of the map p got turned around backwards.
But there's a subtler and more interesting thing we can do with our branched cover. We can take a point down on the w plane and look at the points up on the z plane that map down to it!
Usually there will be two, but for the origin there's just one. This much is clear from thinking geometrically. But if we think algebraically, we'll see something funny going on at the origin. We can already see it geometrically: the origin is where the two sheets of our branched cover meet, so we call it a "branch point". But the algebraic viewpoint sheds an interesting new light on this.
What we'll do is take a prime ideal in C[w] and push it forwards via
p*: C[w] → C[z]
The resulting subset won't be an ideal, but it will "generate" an ideal, meaning we can take the smallest ideal containing it. This ideal won't be prime, but we can "factor" it into prime ideals: there's a fairly obvious way to multiply ideals, and we happen to be working with rings where there's a unique way to factor any ideal into prime ideals.

Let's try it. First pick a number x that's not zero. It gives a prime ideal in C[w], namely
<w - x>
Next push this ideal forwards via p* and let it generate an ideal in C[z], namely
<z^2 - x>
This is not prime, but we can factor it, which in this case simply amounts to factoring the polynomial that generates it:
<z^2 - x> = <(z - sqrt(x)) (z + sqrt(x))>
= <z - sqrt(x)> <z + sqrt(x)>
We get a product of two prime ideals, corresponding to two points in the z plane, namely +sqrt(x) and -sqrt(x). These are the two points that map down to x.

In this sort of situation, we say the prime ideal <w - x> "splits" into the prime ideals <z - sqrt(x)> and <z + sqrt(x)> when we go from C[w] to the extension C[z]. This is just an overeducated way of saying the number x has two different square roots.
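If you'd like to see the splitting concretely, here's a quick check with sympy (assumed available) at the point x = 4, where sqrt(x) = 2:

```python
from sympy import symbols, factor, expand

z = symbols('z')

# Away from the branch point, <z^2 - x> splits.  Take x = 4:
split = factor(z**2 - 4)            # factors as (z - 2)*(z + 2): two distinct primes
assert expand(split) == z**2 - 4

# Each factor generates a prime ideal, corresponding to one of the two
# square roots of 4 sitting up on the z plane.
```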
But suppose x = 0. This doesn't have two square roots! Everything works the same except we get
<z^2> = <z> <z>
We say the prime ideal <w> "ramifies" when we go from C[w] to the extension C[z]. We still get a product of prime ideals; they just happen to be the same. This is a way of making sense of the funny notion that the number 0 has two square roots... which just happen to be the same! Lots of mathematicians and physicists talk about "repeated roots" when an equation has "two solutions that just happen to be equal". This is just a way of making that precise.

But all this algebraic machinery must seem like overkill if this is the first time you've seen it. It pays off when we get to more algebraic examples. So, let me sketch the simplest one.
Let Z be the ring of integers, and let Z[i] be the ring of Gaussian integers, namely numbers of the form a+bi where a and b are integers. Z[i] is an algebraic extension of Z, since we can get it by throwing in a solution z of the polynomial equation
z^2 = -1
This equation is quadratic, just like it was in the example we just did! Now we're throwing in a square root of -1 instead of a square root of some function on the complex plane. But if we take the analogy between geometry and algebra seriously, this extension should still give some sort of "branched double cover"
p: Spec(Z[i]) → Spec(Z)
What's this like? It's actually really interesting, but I'll just sketch how it works.
The points of Spec(Z) are prime ideals in Z. In "week199" we saw that except for the prime ideal <0>, these are generated by prime numbers.
Similarly, except for <0>, the prime ideals in Z[i] are generated by "Gaussian primes": Gaussian integers that have no factors except themselves and the "units" 1, -1, i and -i. (A "unit" in a ring is an element with a multiplicative inverse; we don't count units as primes.)
The map p sends each Gaussian prime to a prime, and it's fun to work out how this goes... but it's even more fun to work backwards! Let's take primes in the integers and see what happens when we let them generate ideals in the Gaussian integers! This is like taking points in the base space of a branched cover and seeing what's sitting up above them.
For example, the prime 5 "splits". It has two prime factors in the Gaussian integers:
5 = (2 + i)(2 - i)
so in Z[i] the ideal it generates is a product of two prime ideals:
<5> = <2 + i> <2 - i>
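As a quick sanity check in Python (exact here, since the real and imaginary parts are small integers):

```python
# Check that 5 really factors as (2 + i)(2 - i) in the Gaussian integers.
u, v = 2 + 1j, 2 - 1j
assert u * v == 5 + 0j

# And <2 + i>, <2 - i> are genuinely different ideals: u/v = (3 + 4i)/5
# is not a Gaussian integer, so u and v don't differ by a unit.
assert u / v not in (1, -1, 1j, -1j)
```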
This means that two different points in Spec(Z[i]) map down to the point <5> in Spec(Z), namely <2 + i> and <2 - i>. So we indeed have something like a double cover!

On the other hand, the prime 2 "ramifies". It has two prime factors in the Gaussian integers:
2 = (1 + i)(1 - i)
but these two Gaussian primes generate the same prime ideal:
<1 + i> = <1 - i>
since if we multiply 1+i by the unit -i we get 1-i. So, in the Gaussian integers we have
<2> = <1 + i> <1 + i>
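Again easy to check in Python:

```python
# Check that 2 = (1 + i)(1 - i), and that the two factors differ only by
# the unit -i, so they generate the same ideal: <2> = <1 + i><1 + i>.
assert (1 + 1j) * (1 - 1j) == 2 + 0j
assert -1j * (1 + 1j) == 1 - 1j
```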
A repeated factor! This is just what happened to the branch point in our previous example: it had "two points sitting over it, which happen to be the same".

So far, everything seems to be working nicely. But, besides splitting and ramification, there's a third thing that happens here, which didn't happen in our example involving the complex plane. In fact, this third option never happens when we're doing algebraic geometry over the complex numbers!
Here's how it works. Consider the prime 3. This is still prime in the Gaussian integers! It doesn't split, and it doesn't ramify. If we factorize the ideal generated by 3 in Z[i] we just get
<3> = <3>
It doesn't do anything - it just sits there! So, we say this prime is "inert".

This may seem boring, but it's actually mysterious - and downright MADDENING if we take the analogy between geometry and algebra seriously. It's weird enough to have a "branched" cover where sheets merge at certain points, but at least in that case we can see they've merged: a prime ideal in our subring generates an ideal in the extension that's not prime, but is a product of several prime factors, some of which happen to be the same. But when a prime ideal in our subring generates a prime ideal in the extension, it's as if our "cover" has just one sheet over this point in the base space! And if this happens for a quadratic extension - as it just did - something seems to have gone horribly wrong with the nice idea that "quadratic extensions are like branched double covers".
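For concreteness, here's a little brute-force sketch in Python - my own illustration, not from the original text - that sorts small primes into the three behaviors. It uses the classical fact that an odd prime splits in Z[i] exactly when it's a sum of two squares, which by Fermat's theorem happens exactly when p = 1 (mod 4):

```python
def behavior(p):
    """How the prime p behaves in the Gaussian integers (assumes p is prime)."""
    if p == 2:
        return 'ramified'            # 2 = -i (1 + i)^2
    # p splits iff p = a^2 + b^2 with a, b > 0, i.e. p = (a + bi)(a - bi).
    for a in range(1, int(p**0.5) + 1):
        b2 = p - a*a
        b = int(b2**0.5)
        if b > 0 and b*b == b2:
            return 'split'
    return 'inert'

for p in (2, 3, 5, 7, 11, 13, 17, 19):
    print(p, behavior(p), p % 4)
# The split primes are exactly those with p = 1 (mod 4);
# 3, 7, 11 and 19 stay inert.
```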
Luckily, this puzzle has a nice resolution. We shouldn't have decategorified! When we started discussing "points" for a commutative ring, we saw they form a category in a nice way: there are points of different "flavors", with "flavor-changing operations" going between them. Then we freaked out and turned this category into a set by decreeing that two points are the same whenever there's a morphism between them. If we hadn't done this, we'd have seen more clearly how "inert" primes fit into a nice pattern along with "split" and "ramified" ones.
I'll probably talk about this more sometime, and also look more carefully at what happens to all the different primes when we go to the Gaussian integers - to show you that we are, indeed, doing number theory!
But for now, I just want to make a few comments about this idea of points of different "flavors".
In fact Grothendieck proposed an even more general idea of this sort in his second approach to "schemes", which is simpler but much less widely discussed than his first approach. Basically, he said that given a commutative ring R, we should not only consider points that are homomorphisms from R to any field, but also to any commutative ring. For each commutative ring k we get a set consisting of all "k-points" of R, namely homomorphisms
f: R → k
And, for each homomorphism g: k → k' we get a "flavor changing operation" that sends k-points to k'-points. So, we get a functor from CommRing to Set! He called such a functor a "scheme". We can get schemes from commutative rings as just described - these are called "affine schemes" - but there are also others, for example those coming from projective varieties.

Anyway, here are some places to read more about number theory... mostly with an emphasis on the geometric viewpoint and the issue of "splitting, ramification and inertia".
For a really quick and friendly no-nonsense introduction, try this:
2) Harold M. Stark, Galois Theory, Algebraic Number Theory, and Zeta Functions, in From Number Theory to Physics, eds. M. Waldschmidt et al, Springer, Berlin, 1992, pp. 313-393.
To dig a lot deeper, try this book by Neukirch:
3) Juergen Neukirch, Algebraic Number Theory, trans. Norbert Schappacher, Springer, Berlin, 1986.
I already mentioned it, but it's worth mentioning again, because it's pretty elementary, and very clear on the analogy between "function fields" (fields of functions on Riemann surfaces) and "number fields" (algebraic number fields).
This book by Borevich and Shafarevich doesn't make the analogy to geometry explicit:
4) Z. I. Borevich and I. R. Shafarevich, Number Theory, trans. Newcomb Greenleaf, Academic Press, New York, 1966.
However, it has a nice concept of a "theory of divisors" for a commutative ring - and if you know a bit about divisors from algebraic geometry, you'll see that this is secretly very geometrical! They show how to classify algebraic extensions of commutative rings using a theory of divisors, and show how to get a theory of divisors using "valuations". This manages to accomplish a lot of what other texts do using "adeles", without actually mentioning adeles. I find this instructive.
This book goes much further in the geometric direction, but still without introducing schemes:
5) Dino Lorenzini, An Invitation to Arithmetic Geometry, American Mathematical Society, Providence, Rhode Island, 1996.
It's really great - very pedagogical! It develops number fields and function fields in parallel. You'll need to be pretty comfy with commutative algebra to work all the way through it, though.
If you want to learn about schemes - not the kind I just talked about, just the usual sort, which still includes cool "spaces" like Spec(Z) - try these:
6) V. I. Danilov, V. V. Shokurov, and I. Shafarevich, Algebraic Curves, Algebraic Manifolds and Schemes, Springer, Berlin, 1998.
7) David Eisenbud and Joe Harris, The Geometry of Schemes, Springer, Berlin, 2000.
Schemes have a reputation for being scary, but both these books try hard to make them less so, including lots of actual pictures of things like Spec(Z[i]) sitting over Spec(Z).
To wrap things up, I just want to mention two papers on subjects I'm fond of....
In "week172" I discussed Tarski's "high school algebra problem". This asks whether every identity involving 1, +, x, and exponentials that holds in the positive natural numbers follows from the eleven we learned in high school:
x + y = y + x
(x + y) + z = x + (y + z)
xy = yx
(xy)z = x(yz)
1x = x
x^1 = x
1^x = 1
x(y + z) = xy + xz
x^(y + z) = x^y x^z
(xy)^z = x^z y^z
x^(yz) = (x^y)^z

The rules of this game allow only purely equational reasoning - not stuff like mathematical induction. The reason is that this is secretly a problem about "universal algebra" or "algebraic theories", as explained in "week200".

It turns out the answer is no! In fact there are infinitely many more independent identities! Here is the first one, due to Wilkie:
[(x + 1)^x + (x^2 + x + 1)^x]^y [(x^3 + 1)^y + (x^4 + x^2 + 1)^y]^x =
[(x + 1)^y + (x^2 + x + 1)^y]^x [(x^3 + 1)^x + (x^4 + x^2 + 1)^x]^y

I just found a paper, apparently written after "week172", which gives a very detailed account of this problem:

8) Stanley Burris and Karen Yeats, The saga of the high school identities, available at http://web.archive.org/web/20070212200835/http://www.math.uwaterloo.ca/~snburris/htdocs/MYWORKS/PREPRINTS/saga.ps

It includes some new results, like the smallest known algebraic gadget satisfying all the high school identities but not Wilkie's identity - but also more interesting things that are a bit harder to describe.
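Since Python has exact arbitrary-precision integers, we can at least spot-check Wilkie's identity numerically - a sanity check, not a proof. (It holds for all positive x and y because x^3 + 1 = (x + 1)(x^2 - x + 1) and x^4 + x^2 + 1 = (x^2 + x + 1)(x^2 - x + 1), so the extra factors of (x^2 - x + 1)^(xy) cancel on both sides.)

```python
# Spot-check Wilkie's identity for small positive integers x, y.
def lhs(x, y):
    return (((x + 1)**x + (x**2 + x + 1)**x)**y
            * ((x**3 + 1)**y + (x**4 + x**2 + 1)**y)**x)

def rhs(x, y):
    return (((x + 1)**y + (x**2 + x + 1)**y)**x
            * ((x**3 + 1)**x + (x**4 + x**2 + 1)**x)**y)

for x in range(1, 8):
    for y in range(1, 8):
        assert lhs(x, y) == rhs(x, y)   # exact equality of big integers
```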
Also, here's a cool paper relating some of Ramanujan's work to string theory:
9) Antun Milas, Ramanujan's "Lost Notebook" and the Virasoro Algebra, available as math.QA/0309201.
A lot of Ramanujan's weird identities turn out to be related to concepts from string theory, suggesting that he was born about a century too soon to be fully appreciated... but this paper tackles an identity of his that nobody had managed to explain using string theory before.

Addendum: Here's something a friend of mine wrote, and an expanded version of my reply.
By the way, I very much liked your explanation of points and prime
ideals. Up until now I haven't seen a satisfactory explanation of why
points correspond to prime rather than maximal ideals, and
although I haven't completely digested what you wrote, it looks
like it might do the job...
Both here and in my discussion of spectra in "week199", I've been avoiding saying the things people usually say. People usually note that a maximal ideal is the same as the kernel of a homomorphism ONTO a field, while a prime ideal is the same as the kernel of a homomorphism ONTO an integral domain. (Recall that an integral domain is a commutative ring where xy = 0 implies that x or y is zero.) If we define the "points" of a commutative ring R to be its maximal or prime ideals, we can therefore think of these as the kernels of homomorphisms from R onto fields or integral domains.
However, defining points in terms of homomorphisms ONTO a given sort of commutative ring is rather irksome, because it doesn't tell us how points transform under homomorphisms of commutative rings, nor how they transform under the "flavor-changing operations" I was describing. The problem is that the composite of a homomorphism with an onto homomorphism needn't be onto!
So, what really matters is that a prime ideal is the same as the kernel of a homomorphism TO a field. To see how this follows from the usual story, note that any integral domain is contained in a field called its "field of fractions" - just as Z is contained in Q. Any homomorphism ONTO the integral domain thus becomes a homomorphism TO this field, with the same kernel. Conversely, any homomorphism TO a field becomes a homomorphism ONTO its image, with the same kernel - and this image is always an integral domain.
Best,
jb

Sunday, November 20, 2011

Poor decisions leave TEPCO workers vulnerable to radiation


http://ajw.asahi.com/article/0311disaster/fukushima/AJ201106141146


Six more employees of Tokyo Electric Power Co. working at the Fukushima No. 1 nuclear power plant were exposed to more radiation than allowed even under the relaxed limits put in place to deal with the critical accident.

In addition, 102 workers have been exposed to more radiation than allowed for nuclear power plant workers. Such workers are subsequently prohibited from working at nuclear power plants for up to five years under normal circumstances.

If more workers are discovered to have exceeded radiation exposure levels, TEPCO may face a serious shortage of workers even while the situation at the Fukushima plant is far from under control.

The government raised the upper limit for workers dealing with the Fukushima accident to 250 millisieverts. However, TEPCO announced June 13 that six additional employees had been exposed to more than that level of radiation. The company had previously announced that two employees had been exposed to more than 250 millisieverts.

What makes the situation serious for those six is that all were exposed to more than 250 millisieverts through internal contamination, meaning they inhaled radioactive materials.

The normal upper limit for workers at nuclear power plants is 100 millisieverts. TEPCO announced that a total of 102 employees had been exposed to more than that level.

TEPCO submitted a report to the Ministry of Health, Labor and Welfare on June 13 of a study into the 3,726 workers at the Fukushima No. 1 plant who worked there between March 11, when the Great East Japan Earthquake struck, and March 31.

Of those workers, radiation exposure levels for 2,367 workers who were tested were reported to the labor ministry. The results of the study for the remaining workers will be submitted by June 20.

The eight workers found to have been exposed to more than 250 millisieverts were all male TEPCO employees.

The six employees who were added to the list in the latest report worked to restore equipment at the Fukushima plant as well as measure radiation levels.

The worker found to have the highest radiation exposure level was found to have been exposed to 497.6 millisieverts.

The labor ministry instructed TEPCO to remove a total of 12 workers exposed to more than 200 millisieverts from all emergency work at the Fukushima plant.

Of workers who were not exposed to more than 250 millisieverts, 23 were exposed to more than 100 millisieverts through internal contamination alone. A total of 94 workers were exposed to more than 100 millisieverts when internal and external contamination levels were combined.

Including the workers covered in the latest study, a total of about 7,800 individuals have been working at the Fukushima No. 1 plant through late May to restore operations.

The labor ministry has asked TEPCO to submit a report on total radiation exposure levels, including internal contamination, for all those workers by the end of June.

However, a problem for TEPCO is that the March 11 quake and tsunami devastated the systems to measure external and internal contamination levels.

Dosimeters at the Fukushima plant were damaged by the disasters so TEPCO had to borrow dosimeters from other nuclear plants. While that was completed in April, the company still has not installed enough equipment to test for internal contamination.

The local labor bureau has instructed TEPCO to improve its practices, and the Nuclear and Industrial Safety Agency has also issued a warning.

TEPCO officials and workers admit that internal contamination may have spread because not all workers were given clear instructions to wear face masks when working at the plant.

Another problem is that the emergency work station on the grounds of the Fukushima plant was damaged by hydrogen explosions at two reactors. That created cracks that allowed radioactive materials to leak into the work station, even though it is designed to prevent such leakage.

Because workers believed that radiation would not leak into the work station, they removed face masks when in the station, leading to the internal contamination.

Moreover, the whole body counters at the Fukushima plant used to measure internal contamination were themselves exposed to radiation during the nuclear accident, so there was no way of telling whether measurements reflected contamination of the workers or contamination of the equipment.

Workers had to be measured for internal contamination using two whole body counters at a facility in Iwaki, Fukushima Prefecture, away from the nuclear plant.

Labor ministry officials are caught in a bind because even with the relaxed upper limit for radiation exposure at the Fukushima No. 1 plant there could emerge a situation in which TEPCO does not have enough workers.

The labor ministry may be asked to relax the radiation exposure limits even further if the work at the Fukushima plant becomes prolonged.

The labor ministry has also asked the Ministry of Economy, Trade and Industry, which oversees TEPCO, to put together a new system for training people capable of working at nuclear plants.

Saturday, November 12, 2011

Japan Revives Kamaishi Breakwater That Crumpled in Tsunami


KAMAISHI, Japan — After three decades and nearly $1.6 billion, work on Kamaishi’s great tsunami breakwater was completed three years ago. A mile long, 207 feet deep and jutting nearly 20 feet above the water, the quake-resistant structure made it into the Guinness World Records last year and rekindled fading hopes of revival in this rusting former steel town.

But when a giant tsunami hit Japan’s northeast on March 11, the breakwater largely crumpled under the first 30-foot-high wave, leaving Kamaishi defenseless. Waves deflected from the breakwater are also strongly suspected of having contributed to the 60-foot waves that engulfed communities north of it.

Its performance that day, coupled with its past failure to spur the growth of new businesses, suggested that the breakwater would be written off as yet another of the white elephant construction projects littering rural Japan. But Tokyo quickly and quietly decided to rebuild it as part of the reconstruction of the tsunami-ravaged zone, at a cost of at least $650 million.

After the tsunami and the nuclear meltdowns at Fukushima, some Japanese leaders vowed that the disasters would give birth to a new Japan, the way the end of World War II had done. A creative reconstruction of the northeast, where Japan would showcase its leadership in dealing with a rapidly aging and shrinking society, was supposed to lead the way.

But as details of the government’s reconstruction spending emerge, signs are growing that Japan has yet to move beyond a postwar model that enriched the country but ultimately left it stagnant for the past two decades. As the story of Kamaishi’s breakwater suggests, the kind of cozy ties between government and industry that contributed to the Fukushima nuclear disaster are driving much of the reconstruction and the fight for a share of the $120 billion budget expected to be approved in a few weeks.

The insistence on rebuilding breakwaters and sea walls reflects a recovery plan out of step with the times, critics say, a waste of money that aims to protect an area of rapidly declining population with technology that is a proven failure.

Defenders say that if Kamaishi’s breakwater is not fixed, people and businesses will move away even faster for fear of another tsunami.

“There may be an argument against building a breakwater in a place with little potential to grow, but we’re not building a new one — we’re basically repairing it,” said Akihiro Murakami, 57, the top official in Kamaishi for the Ministry of Land, Infrastructure, Transport and Tourism, which oversees the nation’s breakwaters. “At this point, it’s the most efficient and cost-effective choice.”

After World War II, Japan built a line of coastal defenses that was longer than China’s Great Wall and ultimately stretched to a third of the Japanese coastline. The defenses allowed more Japanese, whose numbers rose to 125 million from 72 million in the five decades after 1945, to live and work hard by the sea.

Yet, even before the tsunami, the affected zone’s population was expected to age and shrink even faster than the rest of Japan’s, contracting by nearly half over the next three decades. Critics say that in cities like Kamaishi, where the population dropped from 100,000 people four decades ago to fewer than 40,000 before the tsunami, people should simply be moved away from the ravaged coast.

Japan’s dwindling resources would be better spent merging destroyed communities into inland “compact towns” offering centralized services, critics say. Unnecessary public works — Kamaishi’s reconstruction plans include building a rugby stadium — would merely hasten the tsunami zone’s decline by saddling it with high maintenance costs.

“In 30 years,” said Naoki Hayashi, a researcher at the Central Research Institute of Electric Power Industry, one of Japan’s biggest policy groups, “there might be nothing left there but fancy breakwaters and empty houses.”

A Web of Collusion

Even though the breakwater yielded economic benefits only to the vested interests that have a grip on the construction of Japan’s breakwaters, sea walls and ports, advocates of its reconstruction say it is vital to Kamaishi’s future. In addition to protecting the city against tsunamis, the breakwater was intended to create a modern international port that would accommodate container vessels and draw new companies here.

The birthplace of Japan’s modern steel industry, Kamaishi lived through economic booms for nearly a century, but by the early 1970s its major employer, Nippon Steel, was moving steel production to central Japan, where the flourishing auto industry was concentrated.

Construction, which began in 1978, was completed three years ago. By then, Nippon Steel had long since closed its two blast furnaces. Not a single container vessel had come here. Dependent on huge subsidies, Kamaishi’s port was one of the countless unused ports in Japan, derided as “fishing ponds” because the lack of ship traffic made them peaceful fishing spots.

“It was good for the ministry,” said Yoshiaki Kawata, a member of the government’s reconstruction design council, referring to the Land Ministry. “But the city declined. Businesses and people left.”

It was good not only for the ministry, but also for its allies in politics and business, who joined forces in the kind of collusive web that is replicated in many other industries.

For decades, Zenko Suzuki, a former prime minister who died in 2004, secured the money for this region’s breakwaters, sea walls and ports. He was supported by local businessmen like Kazunori Yamamoto, 65, the owner of Kamaishi’s biggest construction company, which helped build the breakwater.

Mr. Yamamoto once led a youth group that backed the politician, with whom he fondly remembered attending golf tournaments. “He took great care of me,” he said.

A career bureaucrat named Teruji Matsumoto headed the ministry division overseeing the breakwater’s construction in the early 1980s. In 1986, he joined Toa Construction, one of the three big marine construction companies that managed the breakwater’s construction, rising to chief executive in 1989.

Isao Kaneko, a high-ranking manager at Toa, said of Mr. Matsumoto, “Maybe someone looking from the outside would view it as collusion, but he was an absolutely indispensable person for our company.”

Reached by telephone, Mr. Matsumoto, now 84, declined to be interviewed, saying he was suffering from “depression” and “senility.”

Collapse After First Wave

Despite the breakwater’s failure to halt Kamaishi’s decline, its defenders contended that it was steadfastly protecting the city from tsunamis by sealing off the bay from the Pacific, except for a small opening for boats. The Land Ministry extolled its breakwater in a song, “Protecting Us for a Hundred Years.”

“It protects the steel town of Kamaishi, it protects our livelihoods, it protects the people’s future,” the song goes.

On March 11, the tsunami’s first wave reached Kamaishi 35 minutes after the earthquake struck off the northeast coast at 2:46 p.m. In a video shot from the third floor of a Land Ministry building facing the port, 48 people who had taken shelter can be heard in the background as they watch the breakwater collapse under the first wave.

“The breakwater is failing completely,” one man says softly as the waves spill over the breakwater, turning its inner wall into a white, foamy waterfall. Minutes later, the tsunami roars into Kamaishi, sweeping away nearly everything in its way.

The breakwater becomes visible seven minutes later as the first wave starts ebbing out of the city. “Wow, look at the shape of the breakwater!” an astonished man says. “It’s collapsed.” The camera zooms in on the breakwater, its top lying twisted in fragments. As the people brace themselves for the tsunami’s second wave, an exasperated man says, “This breakwater isn’t working at all.”

Those in the building survived, but 935 Kamaishi residents died in the tsunami.

“I was disappointed,” said Yoshinari Gokita, an executive at Toa Construction who spent 10 years here working on the breakwater. “We all did our best. We used to say proudly that as long as it was there, everyone would be absolutely safe.”

Kamaishi is a hilly city with little flat land. Rising directly behind its port and central district, steep hills have long provided a natural tsunami shelter that was equipped with an elaborate network of evacuation stairways, pathways and resting areas after World War II. Most people inside the tsunami-prone central district were within only a couple of hundred yards of the nearest evacuation stairway, reinforcing the belief that, despite the 35 minutes between the earthquake and the arrival of the first wave, many victims chose not to flee, believing they were safe.

Takenori Noda, Kamaishi’s mayor, said loudspeakers all over the city had warned people to flee. “But I do believe that, unconsciously, the breakwater’s presence did give people a false sense of security,” he said.

Conflicting Research

Within days, however, the Land Ministry commissioned an assessment of the breakwater’s performance. Drawing on the only tsunami data available, captured by a GPS tracking system set up 12 miles offshore, researchers used computer modeling to conclude that the breakwater had done its job: it had reduced the height of the first wave by 40 percent, delayed its landing by six minutes and saved countless lives.

The report, released less than three weeks after the tsunami, would prove decisive. It quickly became accepted wisdom in Kamaishi. It also supplied supporters of the breakwater’s reconstruction with their main argument.

The report was put together by a semigovernmental agency, the Port and Airport Research Institute, which until 2001 had been part of the Land Ministry and now lies under its jurisdiction. Its ranks are made up of people who served in the Land Ministry during the breakwater’s construction and joined the institute in a widely criticized practice called “amakudari,” or “descent from heaven.” Officials at the ministry and the institute acknowledged the close ties, but said the report’s findings were neutral.

Seisuke Fujisawa, a part owner of a cement company that benefited from the breakwater’s construction, disagreed. “There is no way that an organization with such close ties to the ministry will say that the breakwater was a failure and a monumental waste of money,” he said. “We need a neutral investigation.”

“I thought Kamaishi was safe because of the breakwater,” said Mr. Fujisawa, 66, whose family has operated various businesses in Kamaishi for seven generations. “But now I don’t believe the breakwater was effective at all.”

Recently, researchers came to a similar conclusion. According to computer modeling by researchers at the Japan Agency for Marine-Earth Science and Technology, a semigovernmental organization with no ties to the Land Ministry, the breakwater had no significant effect in decreasing the size of the first wave or delaying its arrival.

Mizuho Ishida, the lead researcher and a former president of the Seismological Society of Japan, said differences in interpretation were inevitable because estimates had to be extrapolated from the wave data collected 12 miles offshore.

“Even if you perform a very fine analysis, there is no way to know exactly what happened,” Ms. Ishida said.

With Finance Ministry officials also asking hard questions about the cost of rebuilding, the pro-reconstruction forces pushed back in the spring, led by Fukuichi Hiramatsu, a city councilman of 40 years whose family business — gravel — was a subcontractor during the breakwater’s construction.

In an interview in May, Mr. Hiramatsu, who died in July at the age of 80, said the city council passed a resolution calling for the breakwater’s reconstruction the day after he had urged the council chairman to do so in a telephone conversation — an episode confirmed by other council members.

What is more, after the mayor publicly expressed doubts about the breakwater’s performance, Mr. Hiramatsu said he told him, “ ‘Instead of saying that it was barely effective, you should mention how effective it was.’ ”

Mayor Noda denied that Mr. Hiramatsu, who happened to be a relative by marriage, had influenced him. But the mayor soon sided with Mr. Hiramatsu, even signing a separate resolution urging the breakwater’s rapid reconstruction.

Land Ministry officials in Tokyo now proclaimed that the people of Kamaishi were the ones demanding the breakwater’s reconstruction.

“Whether the breakwater was a little effective or delayed the first wave by a few minutes — it’s irrelevant,” said Kosuke Motani, a senior vice president at the Development Bank of Japan and a member of the government’s Reconstruction Design Council. “That’s complete nonsense. People should just flee.

“What’s inexcusable is taking advantage of the current confusion to rebuild this breakwater because they don’t want to admit that it was meaningless in the first place,” Mr. Motani said.

Risk of Amplifying Waves

In their push to rebuild, bureaucrats brushed aside the possibility that the breakwater had amplified the destruction of at least two communities.

During the breakwater’s design phase, bureaucrats commissioned coastal engineers at Tohoku University to weigh the risk that the breakwater would deflect tsunami waves from central Kamaishi to the north. After experiments over four years, researchers concluded in reports submitted in 1974 and 1975 that the breakwater would increase the waves directed toward Ryoishi, a district behind a narrow bay just north of Kamaishi Bay, and Kariyado, a fishing village on a peninsula sticking out east of it. A 1976 report states that the waves reaching Ryoishi would increase by 20 percent.

“Building a breakwater at Ryoishi became a prerequisite for building the breakwater at Kamaishi,” said Akira Mano, who assisted in the experiments at the time as a graduate student and now teaches at the university.

Ryoishi, which had no coastal defenses until then, was shielded with a breakwater in its bay and a 30-foot-high sea wall along its coast.

On March 11, 60-foot-high waves — twice the height of those seen in central Kamaishi — annihilated Ryoishi and Kariyado. Standing at an evacuation spot high above Ryoishi, Hajime Seto, 66, a retired banker who is the Ryoishi district leader, filmed the destruction while using a bullhorn to warn people to seek higher ground. The tsunami killed 45 people out of the district’s population of 600 and swept away all but 15 of its 230 houses.

“They claim that Kamaishi’s breakwater had no effect on us, but we want at least a proper investigation,” Mr. Seto said. “They want to rebuild the breakwater at all cost, but, under present conditions, we’re opposed to it.”

Meanwhile, waves overwhelmed the breakwater in front of Kariyado and reached the middle of a hill where the house of Kozo Sasaki, 80, and his wife, Mitsuko, 68, stood.

The Sasakis, who were recently cleaning out their home before its scheduled demolition, believed that the Kamaishi breakwater increased the waves that destroyed their home.

“It was a plus for them over there, but over here — well, everyone here believes that because the waves were suppressed over there, they came here,” Ms. Sasaki said.

Shigeo Takahashi, the president of the Port and Airport Research Institute, which assessed the breakwater’s performance for the Land Ministry, said he did not believe that the breakwater had significantly increased the waves at Ryoishi or Kariyado. But pressed, Mr. Takahashi acknowledged that his institute had performed only a “rough” analysis of the breakwater’s effect on those communities. He added that his institute had no plans to open a full-fledged investigation.

Mr. Kawata, the member of the government’s Reconstruction Design Council, said an investigation’s findings could lead to lawsuits or at the very least impede the breakwater’s reconstruction. “For them,” he said of ministry officials, “there’s just no benefit in conducting an investigation, even though some residents may be asking for one.”

Mr. Murakami, the Land Ministry official, said he was unaware of the experiments conducted by Tohoku University in the mid-1970s.

“To be honest, whenever we undertake a big project like this, we get all sorts of irrelevant complaints, baseless accusations,” he said. He had already reassured residents that the breakwater did not heighten the waves that destroyed their communities.

“I told them that our breakwater wasn’t that big a deal.”

Kantaro Suzuki contributed reporting.

§

Who's Winning the Republican Race? Everybody!


Polls indicating that Republican voters prefer this or that candidate for president are often as simplistic as they are hard to avoid. Many observers complain that media coverage of the campaign is too focused on the “horse race.” In some ways, though, the coverage isn’t even as nuanced as that of a horse race, where horses are picked to win, but also to place or to show.

One way to get a clearer picture of an electorate’s preferences is to ask prospective voters to rank the candidates and not merely say which one is their first choice. Who is their second choice, third, fourth, fifth? Doing this allows us to get a better overall view of their appeal or lack thereof. It also makes clear that “Who’s ahead?” is not by any means a question with a single, simple answer.

Let’s imagine that likely Republican voters were asked to rank Herman Cain, Newt Gingrich, Ron Paul, Rick Perry, and Mitt Romney (Michele Bachmann, Jon Huntsman and Rick Santorum, please accept my apologies). This is for illustration only, although it’s not that far off the mark, so let’s further imagine:

that 36.3% of them favored Romney to Gingrich to Paul to Cain to Perry;

and 27.3% of them favored Cain to Paul to Gingrich to Perry to Romney;

and 18.2% of them favored Perry to Paul to Gingrich to Cain to Romney;

and 9.1% of them favored Gingrich to Perry to Cain to Paul to Romney;

and 9.1% of them favored Paul to Gingrich to Perry to Romney to Cain.

Romney is clearly preferred by the highest percentage of voters, so using the conventional method of plurality, Romney, the most conventional candidate, is the clear leader.

But impressed that the second highest percentage of voters prefer him (“Wow! 27.3% is almost exactly the sum of my 9-9-9 plan”), Cain might well argue that a runoff between him and Romney is appropriate. In such a runoff, the numbers above suggest that Cain would win, since 54.6% of the voters polled ranked him higher than Romney.

Poring over the preference rankings, Perry supporters might warm instead to the idea of what’s often called an instant runoff (a version of which was just used in the San Francisco mayoral election). This differs from a standard runoff in that the candidates with the fewest first place votes (Gingrich and Paul in this case) are summarily eliminated. Next the rankings of the remaining candidates are adjusted (Perry would gain another 18.2% of the first place votes once Gingrich and Paul are gone). Then the method dictates that the candidate among the remaining three having the fewest first place votes (Cain in this case) is eliminated and the rankings of the two remaining candidates adjusted. After this only Romney and Perry are left, and Perry beats Romney handily.
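The instant-runoff procedure is mechanical enough to sketch in a few lines of Python. The candidate names and percentages below are the article's hypothetical scenario, used here as ballot weights; this is an illustrative sketch, not production election code.

```python
# Hypothetical preference shares from the article, used as ballot weights.
ballots = [
    (36.3, ["Romney", "Gingrich", "Paul", "Cain", "Perry"]),
    (27.3, ["Cain", "Paul", "Gingrich", "Perry", "Romney"]),
    (18.2, ["Perry", "Paul", "Gingrich", "Cain", "Romney"]),
    (9.1,  ["Gingrich", "Perry", "Cain", "Paul", "Romney"]),
    (9.1,  ["Paul", "Gingrich", "Perry", "Romney", "Cain"]),
]

def instant_runoff(ballots):
    remaining = {name for _, ranking in ballots for name in ranking}
    while len(remaining) > 1:
        # Tally first-place support among the candidates still in the race.
        tally = {name: 0.0 for name in remaining}
        for weight, ranking in ballots:
            top = next(name for name in ranking if name in remaining)
            tally[top] += weight
        # Drop everyone tied for the fewest first-place votes, as in the
        # example above, where Gingrich and Paul go out together.
        low = min(tally.values())
        remaining = {name for name, votes in tally.items() if votes - low > 1e-9}
    return remaining.pop()

print(instant_runoff(ballots))  # Perry
```

Round by round: Gingrich and Paul are eliminated first, Cain next, and Perry then beats Romney 63.7% to 36.3%.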

Scowling, Gingrich insists that we should pay more attention to the overall rankings, not just to the voters’ most preferred candidates. He says it’s only fair that first place votes should each be accorded 5 points, second place votes 4 points, third place 3 points, fourth 2 points, and last place votes 1 point. Using this method each candidate amasses points from the entire preference ranking, which Gingrich argues will more fairly measure that candidate’s support. Needless to add, Gingrich wins if this method is adopted.

Ever the individualist, Paul contends that only mano-a-mano contests should count and notes that given the preference rankings he beats each of the other candidates in a head-to-head race. If they were the only two candidates, more voters would prefer Paul to Romney. Likewise, more would prefer him to Gingrich, to Perry, and to Cain. His claim is true and underlies his argument that he deserves to be overall winner.

So who’s ahead? Given the preference ranking above, each of the five candidates has a reasonable case for being called the front-runner. The numbers were, of course, cherry-picked to yield these different outcomes, but real races often have many of these same oddities, which together provide a more nuanced feel for the relative strengths of the various candidates.
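Gingrich's point scheme (a Borda count) and Paul's head-to-head argument (the Condorcet criterion) can be checked mechanically as well. The sketch below reuses the article's hypothetical shares as ballot weights; it is illustrative only.

```python
# Hypothetical preference shares from the article, used as ballot weights.
ballots = [
    (36.3, ["Romney", "Gingrich", "Paul", "Cain", "Perry"]),
    (27.3, ["Cain", "Paul", "Gingrich", "Perry", "Romney"]),
    (18.2, ["Perry", "Paul", "Gingrich", "Cain", "Romney"]),
    (9.1,  ["Gingrich", "Perry", "Cain", "Paul", "Romney"]),
    (9.1,  ["Paul", "Gingrich", "Perry", "Romney", "Cain"]),
]
candidates = ["Romney", "Cain", "Perry", "Gingrich", "Paul"]

def borda_scores(ballots):
    # 5 points for a first-place vote down to 1 for last, weighted by share.
    scores = {name: 0.0 for name in candidates}
    for weight, ranking in ballots:
        for place, name in enumerate(ranking):
            scores[name] += weight * (5 - place)
    return scores

def support(a, b, ballots):
    # Total weight of voters ranking candidate a above candidate b.
    return sum(w for w, ranking in ballots if ranking.index(a) < ranking.index(b))

scores = borda_scores(ballots)
borda_winner = max(scores, key=scores.get)
paul_beats_all = all(support("Paul", rival, ballots) > support(rival, "Paul", ballots)
                     for rival in candidates if rival != "Paul")
print(borda_winner, paul_beats_all)  # Gingrich True
```

Gingrich tops the Borda tally at 363.6 points (to Paul's 354.6), while Paul wins every pairwise contest, so each has a defensible claim under his preferred rules.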

Some pollsters have already made an effort to get beyond the traditional “Who would you vote for if the election were held today?” Gallup, for example, has been experimenting this year with a “positive intensity score,” which is intended to measure the ability of candidates to arouse enthusiasm among those voters who know them. And, in cities such as St. Paul, Minn., Portland, Me., and San Francisco, where instant run-offs are employed, pollsters have expended effort and will inevitably expend more to determine how the voters rank the candidates. Private polling by the candidates themselves is often more sophisticated than the polls the public gets to read about.

Going further and compiling a full preference ranking of the candidates would, no doubt, be difficult and costly. Many voters don’t know all the candidates, others don’t rank them rationally, preferences don’t always hold, and so on.

Nevertheless, to the usual questions about voters’ top choices, pollsters should add questions about their second or third choices and about those candidates they’d refuse to vote for under any circumstances. Elephant and donkey races deserve at least as much analysis of possible outcomes as horse races get.

John Allen Paulos, a professor of mathematics at Temple University, is the author of eight books, including “Innumeracy” and “A Mathematician Reads the Newspaper.”

Report Details Chaos at Fukushima Daiichi After Quake



Fukushima Daiichi Unit 1 was stuck in darkness, and everyone on site feared that the reactor core was damaged. It was the day after a huge earthquake and a towering tsunami devastated the plant, and the workers for Tokyo Electric Power Company knew they were the only hope for halting an unfolding nuclear disaster.

Another power company tried to help. It rushed a mobile electrical generator to the site to power the crucial water pumps that cool the reactor. But connecting it required pulling a thick electrical cable across about 650 feet of ground strewn with debris from the tsunami and made more treacherous by open holes left when manhole covers were washed away.

The cable, four inches in diameter, weighed approximately one ton, and 40 workers were needed to maneuver it into position. Their urgent efforts were interrupted by aftershocks and alarms about possible new tsunamis.

By 3:30 in the afternoon, the workers had managed what many consider a heroic feat: they had hooked up the cable. Six minutes later, a hydrogen explosion ripped through the reactor building, showering the area with radioactive debris and damaging the cable, rendering it useless.

Those details about the first hours after the earthquake at the stricken plant are part of a new 98-page chronology of the Fukushima accident. The account, compiled by American nuclear experts, is meant to form a basis for American nuclear operators and the Nuclear Regulatory Commission to learn lessons from the disaster. But it also provides a rare, detailed look at workers’ frantic efforts to save the plant, portraying (in measured technical language) scenes worthy of the most gripping disaster movies.

The experts who compiled the report work for the Institute of Nuclear Power Operations, an Atlanta organization that is an integral part of the American nuclear industry and one that has won praise over the years for its audits, sometimes critical, of plants around the country.

The authors could provide a deep level of detail because they were able to interview operators and executives from Tokyo Electric Power Company and had access to many of the company’s documents and data.

The chronology does not draw any conclusions about the accident, or analyze the actions taken after the earthquake; it is intended only to provide an agreed-upon set of facts for further study. In that way the document might be more useful for the nuclear industry than for Japanese citizens still hungry for assurances that they are no longer in danger and angry over missteps, documented in the news media, that led to more people being exposed to more radiation than was necessary.

One aspect of the disaster that American companies are likely to focus on is Fukushima’s troubles with its venting system, meant to reduce pressure and avert explosions when crucial cooling systems fail. Another focus is likely to be the extreme difficulty workers had in getting emergency equipment to the reactors where they were needed.

The report is likely to reinforce the conviction of American companies that operate reactors of the design used at Fukushima that venting from the containment vessels around reactors early in an accident is better than waiting, even though radioactive material will be released. The delays in Japan appear to have contributed to explosions that damaged the vessels and ultimately led to larger releases of contaminants.

It has been clear for months that Fukushima operators delayed venting for hours, even after the government ordered that the action be taken. The chronology, however, suggests for the first time that some of the delay occurred because plant executives believed that they were required to wait for the evacuation of surrounding areas.

Because the chronology is based mainly on accounts by Tepco and its workers and company data, it is by nature limited. It does not, for example, relate that there was tension between Tepco and the government over when to vent, as the news media have reported.

The report is also likely to incite more debate about how emergency equipment and material are stored and what types of contingency plans need to be made to ensure equipment can reach reactors in a disaster. Nuclear critics in the United States have long complained that American emergency rules do not take into account that a natural phenomenon could cause an accident at a plant and make it hard to get help from outside.

For example, although the plant had three fire engines that could have pumped in vital cooling water, one was damaged in the tsunami and another was blocked by earthquake damage to roads. Inspections at some American reactors after the Japanese quake and tsunami found that they were storing emergency gear in a way that made it vulnerable to the emergency it was intended for.

The report was perhaps most vivid when it was describing workers’ often unsuccessful efforts to salvage the situation. In one case, plant workers are said to have broken through a security fence to take a fire truck to unit 1 so it could pump water to cool the reactor. (The plant’s cooling system by that time was unusable, and without it, reactors and fuel pools can overheat and cause meltdowns.)

But as often happened during the disaster, the workers’ struggles only partly paid off. Increasing heat caused the pressure inside the containment vessel to build. By the time the fire truck started pumping, workers were able to force in less than 10 gallons per minute, not much more than a kitchen faucet puts out. That was far too little to cool the nuclear fuel and reduce pressure.

The report also takes note of the human toll the disaster took on workers.

It points out that many plant workers had lost their homes and even their families in the tsunami, and that for days after the quake, they were sleeping on the floor at the plant, soaking up radiation doses even in the control room. Because of food shortages, they were provided with only a biscuit for breakfast and a bowl of noodles for dinner.

Working in darkness and without electricity, workers found even simple tasks challenging. At one point, control room operators formed themselves into teams of two, to dash into high-dose areas to try to open a crucial vent. One would hold the flashlight and monitor the radiation dose, while the other would try to get a valve to move. But there was no communication once a team was in the field, so the next team could leave for the reactor only after the first had returned.

Eventually, the radiation levels got too high, and they gave up. The first explosion rocked the plant soon after, belching clouds of radioactive materials and giving the world its clearest sense of the scope of the catastrophe unfolding in Japan.

Hiroko Tabuchi contributed reporting from Tokyo.

§

Thursday, November 03, 2011

24 Hours at Fukushima


A blow-by-blow account of the worst nuclear accident since Chernobyl


By Eliza Strickland / November 2011

RADIATION AND RUIN. Photos: [Left] Christoph Bangert/LaIF/Redux; [Right] TEPCO

Editor's Note: This is part of the IEEE Spectrum special report: Fukushima and the Future of Nuclear Power.


Sometimes it takes a disaster before we humans really figure out how to design something. In fact, sometimes it takes more than one. Millions of people had to die on highways, for example, before governments forced auto companies to get serious about safety in the 1980s. But with nuclear power, learning by disaster has never really been an option. Or so it seemed, until officials found themselves grappling with the world's third major accident at a nuclear plant. On 11 March, a tidal wave set in motion a sequence of events that led to meltdowns in three reactors at the Fukushima Dai-ichi power station, 250 kilometers northeast of Tokyo.
Unlike the Three Mile Island accident in 1979 and Chernobyl in 1986, the chain of failures that led to disaster at Fukushima was caused by an extreme event. It was precisely the kind of occurrence that nuclear-plant designers strive to anticipate in their blueprints and emergency-response officials try to envision in their plans. The struggle to control the stricken plant, with its remarkable heroism, improvisational genius, and heartbreaking failure, will keep the experts busy for years to come. And in the end the calamity will undoubtedly improve nuclear plant design.
True, the antinuclear forces will find plenty in the Fukushima saga to bolster their arguments. The interlocked and cascading chain of mishaps seems to be a textbook validation of the "normal accidents" hypothesis developed by Charles Perrow after Three Mile Island. Perrow, a Yale University sociologist, identified the nuclear power plant as the canonical tightly coupled system, in which the occasional catastrophic failure is inevitable.
On the other hand, close study of the disaster's first 24 hours, before the cascade of failures carried reactor 1 beyond any hope of salvation, reveals clear inflection points where minor differences would have prevented events from spiraling out of control. Some of these are astonishingly simple: If the emergency generators had been installed on upper floors rather than in basements, for example, the disaster would have stopped before it began. And if workers had been able to vent gases in reactor 1 sooner, the rest of the plant's destruction might well have been averted.
The world's three major nuclear accidents had very different causes, but they have one important thing in common: In each case, the company or government agency in charge withheld critical information from the public. And in the absence of information, the panicked public began to associate all nuclear power with horror and radiation nightmares. The owner of the Fukushima plant, the Tokyo Electric Power Co. (TEPCO), has only made the situation worse by presenting the Japanese and global public with obfuscations instead of a clear-eyed accounting.
Citing a government investigation, TEPCO has steadfastly refused to make workers available for interviews and is barely answering questions about the accident. By piecing together as best we can the story of what happened during the first 24 hours, when reactor 1 was spiraling toward catastrophe, we hope to facilitate the process of learning-by-disaster.




When the 9.0-magnitude earthquake struck off the east coast of Japan, at 2:46 p.m. on 11 March, the ground beneath the power plant shook and alarms blared. In quivering control rooms, ceiling panels fell open and dust floated down onto instrument panels like snow. Within 5 seconds, control rods thrust upward into the three operational reactors and stopped the fission reactions. It was a flawless automatic shutdown, but the radioactive by-products in the reactors' fuel rods continued to generate tremendous amounts of heat.
Without adequate cooling, those rods would become hot enough to melt through the steel pressure vessel, and then through the steel containment vessel. That would result in the dreaded core-meltdown scenario, which could lead to the release of clouds of radioactivity that would be carried by winds to sicken or kill masses of people.
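To get a rough sense of how much heat is at stake, the Wigner-Way approximation estimates the decay heat a core still produces after shutdown. The short Python sketch below assumes a thermal power of about 1380 megawatts for unit 1 and roughly a year of prior operation; both figures are assumptions chosen for illustration, not TEPCO data.

```python
def decay_heat_fraction(t_s, T_s=3.15e7):
    """Wigner-Way approximation: fraction of full thermal power
    still produced t_s seconds after shutdown, following T_s
    seconds of prior operation (default: about one year)."""
    return 0.0622 * (t_s ** -0.2 - (t_s + T_s) ** -0.2)

P0_MW = 1380.0  # assumed thermal power of unit 1, in megawatts

# decay heat one minute, one hour, and one day after the scram
for t in (60, 3600, 86400):
    print(f"{t:>6} s after shutdown: {decay_heat_fraction(t) * P0_MW:5.1f} MW")
```

Even a full day after the scram, a core this size would still be shedding several megawatts of heat, which is why losing the cooling pumps was so dangerous.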
But the heat wouldn't be a problem so long as Fukushima Dai-ichi had power to run the pumps that circulate water from the reactor cores through heat-removal systems. The mighty earthquake had toppled power transmission towers and jumbled equipment at nearby substations, but the interruption in power to the plant was negligible: Within 10 seconds, the plant's emergency power system kicked in. Twelve diesel generators, most of them installed in basement areas below the turbines, were now responsible for the integrity of the plant's reactors—and the well-being of its workers.



THIS REPORT is based on interviews with officials from the Tokyo Electric Power Co. (TEPCO), Japan's Nuclear and Industrial Safety Agency, the U.S. Nuclear Regulatory Commission, the International Atomic Energy Agency, local governments, and with other experts in nuclear engineering, as well as a review of hundreds of pages of official reports.


At the time of the earthquake, three of the power station's six reactors were operating; the other three were down for scheduled maintenance. In the control rooms governing the active reactors—units 1, 2, and 3—the staff checked the cooling systems that remove residual heat from the reactor cores by cycling water through heat exchangers filled with seawater. Everything seemed under control. Water also filled the spent-fuel pools on the top floors of all six reactor buildings to prevent the pools from overheating.
At 2:52 p.m., the shift supervisor overseeing the plant's oldest reactor, the 40-year-old unit 1, confirmed that a backup cooling system called an isolation condenser (IC) had started up automatically. This system didn't need electric power to cycle steam through a cold-water tank on a higher floor, or to let the resulting water drop back down to the pressure vessel. But operators soon noticed that the IC was cooling the core too quickly, which could stress the steel walls of the pressure vessel. So they shut the system down. It was a by-the-book decision, but the book wasn't written for the extraordinary events of 11 March.

Tsunami alerts

flashed on TV screens, predicting a 3-meter-high tsunami for Fukushima prefecture. Although the coastal Fukushima Dai-ichi plant was 10 meters above sea level, nonessential personnel followed procedure and began evacuating the site.
At 3:27 p.m. the first tsunami wave surged into the man-made harbor protecting Fukushima Dai-ichi, rushing past a tidal gauge that measured a water height of 4 meters above normal. At 3:35 another set of much higher waves rolled in and obliterated the gauge. The water rushed over the seawalls and swept toward the plant. It smashed into the seawater pumps used in the heat-removal systems, then burst open the large doors on the turbine buildings and submerged power panels that controlled the operation of pumps, valves, and other equipment. Weeks later, TEPCO employees would measure the water stains on the buildings and estimate the monstrous tsunami's height at 14 meters.
In the basements of turbine and reactor buildings, 6 of the 12 diesel generators shuddered to a halt as the floodwaters inundated them. Five other generators cut out when their power distribution panels were drenched. Only one generator, on the first floor of a building near unit 6, kept going; unlike the others, all of its equipment was above the water line. Reactor 6 and its sister unit, reactor 5, would weather the crisis without serious damage, thanks in part to that generator.
The rest of Fukushima Dai-ichi now faced a cataclysmic scenario that nuclear power plant operators have long feared but never experienced: a complete station blackout.


In the control room where operators managed reactor 1, the alarms went silent. The overhead lights blinked off, and the indicator lights on the instrument panels faded away. The floodwaters had even knocked out the control room's batteries, the power source of last resort. The operators would have to respond to the emergency without working instruments.
With the power out, the pumps were no longer channeling water from unit 1's pressure vessel through the cooling system's heat exchangers, and the ferociously hot fuel rods were boiling the water into steam. The water level in the nuclear core was dropping, but, lacking power for their instruments, the plant operators could only guess at how fast the water was boiling away.

LESSON 1
Emergency generators should be installed at high elevations or in watertight chambers.
The isolation condenser, which relied on convection and gravity to perform its cooling function, should have helped keep the water level high in unit 1's core through the crisis. But operators had turned off the system just before the tsunami by closing its valves—and there was no electric power to reopen them and let steam and water flow. Workers struggled to manually open the valves on the IC system, but experts believe the IC provided no help after the tsunami struck.
As the operators surveyed the damage, they quickly realized that the diesel generators couldn't be salvaged and that external power wouldn't be restored anytime soon. In the plant's parking lots, workers raised car hoods, grabbed the batteries, and lugged them back to the control rooms. They found cables in storage rooms and studied diagrams. If they could connect the batteries to the instrument panels, they could at least determine the water levels in the pressure vessels.

LESSON 2
If a cooling system is intended to operate without power, make sure all of its parts can be manipulated without power.
TEPCO did have a backup for the emergency generators: power supply trucks outfitted with high-voltage dynamos. That afternoon, emergency managers at TEPCO's Tokyo headquarters sent 11 power supply trucks racing toward Fukushima Dai-ichi, 250 km away. They promptly got stuck in traffic. The roads that hadn't been damaged by the earthquake or tsunami were clogged with residents fleeing the disaster sites.
At 4:36 p.m., TEPCO officially informed the Japanese government about the increasingly dire situation at reactor 1. The company declared that it "could not confirm" that any water was being injected into the reactor's core. The situation was better at the slightly more modern reactors 2 and 3, where emergency cooling systems were operating, driven by the steam from the reactors themselves. And the idled reactors 4, 5, and 6 didn't pose an immediate threat.
At 5:41, the sun set over the pools of seawater and the mounds of debris scattered around the power station. Work crews picked their way through the gloom by flashlight.
At around 9 p.m., operators finally plugged the car batteries they'd collected into the instrument panels and got a vital piece of information—the water level in reactor 1. The information seemed reassuring. The gauge registered a water level of 550 millimeters above the top of the fuel assembly, which, while far below normal safety standards, was enough to assure the operators that no fuel had melted yet.
But TEPCO's later analysis found that the gauges were wrong. Months later, calculations would show that the superheated water inside the reactor 1 pressure vessel had dropped all the way below the bottom of the uranium fuel rods shortly before operators checked the gauge, leaving the reactor core completely uncovered. Heat pulsed through the exposed rods. When temperatures passed 1300 °C, the fuel rods' protective zirconium cladding began to react with the steam inside the vessel, producing highly volatile hydrogen gas. And the uranium inside the fuel rods began to melt, slump, and sag.
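The zirconium-steam reaction (Zr + 2 H2O → ZrO2 + 2 H2) can be worked out with basic stoichiometry, which shows why so much explosive gas can accumulate so fast. The sketch below uses an assumed 30-tonne cladding inventory chosen purely for illustration; the actual inventory of unit 1 is not given in this account.

```python
M_ZR = 91.22  # molar mass of zirconium, g/mol
M_H2 = 2.016  # molar mass of hydrogen gas, g/mol

def h2_from_zr(zr_kg):
    """Mass of H2 (kg) released by fully oxidizing zr_kg of
    zirconium: Zr + 2 H2O -> ZrO2 + 2 H2, i.e. two moles of
    hydrogen gas per mole of zirconium."""
    mol_zr = zr_kg * 1000.0 / M_ZR
    return 2.0 * mol_zr * M_H2 / 1000.0

# each kilogram of cladding yields roughly 44 grams of hydrogen;
# an assumed 30-tonne inventory would yield over a tonne of H2
print(h2_from_zr(1.0))
print(h2_from_zr(30_000.0))
```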

[Photo: Gamma/Getty Images] The Damage: In the days following the tsunami, explosions tore the roofs off reactors 1, 3, and 4, and an interior detonation is thought to have damaged reactor 2.

Throughout the night of 11 March, radiation levels rose around the plant. At 9:51 p.m. managers prohibited entry into the unit 1 reactor building.
It was a wise decision, because in the bowels of the reactor, the meltdown had already begun. In the reactors used at Fukushima, the control rods thrust up into the pressure vessel from below, and the housings around each control rod's entry point were essentially weak spots. When the melted fuel began to pool at the bottom of the pressure vessel, it likely melted through those vulnerable seams. TEPCO's later analysis found that the pressure vessel was damaged by 11 p.m., allowing highly radioactive water and gases to leak into the primary containment vessel.

LESSON 3
Keep power trucks on or very close to the power plant site.
The containment vessel, which surrounds the pressure vessel, is a crucial line of defense: It's a thick steel hull meant to hold in any tainted materials that have escaped from the inner vessel. At 11:50 p.m. operators in the control room finally connected car batteries to the pressure gauge for the primary containment vessel. But the gauge revealed that the containment vessel had already exceeded its maximum operating pressure, increasing the likelihood that it would leak, crack, or even explode.
As 11 March turned into 12 March, TEPCO headquarters told the sleepless operators that they must bring down the pressure by venting the containment vessel. A venting operation would jet the vessel's radioactive gases into the air; Fukushima Dai-ichi's nightmare would soon spread across the countryside.
That night, the desperate struggle to contain the peril at reactor 1 diverged into three responses. Besides the team making preparations to vent the containment vessel, there was also a group getting ready to receive the power supply trucks, which were still making their way to the plant. On arrival, they would supply electricity to restart the pumps and reestablish steady water circulation through the pressure vessel. The third team focused on another, short-term plan for cooling the core: fire trucks, which could inject water from emergency tanks into one of the reactor's cooling systems.
It was after midnight when the first power supply trucks began to arrive at the site, creeping along cracked roads. The trucks parked outside the unit 2 turbine building, adjacent to the troubled unit 1, where workers had found one undamaged power control panel. In the darkness, they began snaking a 200-meter-long power cable through the mud-caked building in order to connect it to the power control panel. Usually trucks are used to lay such a cable, which weighed more than a ton, but that night 40 workers did the job by hand. It took them 5 hours.
Work continued at the power control panel all morning and into the afternoon of 12 March. Finally, at 3:30 p.m., everything was ready. Current flowed from a power supply truck through the cable to the panel, which was ready to switch on the pumps for a backup cooling system inside the reactor 1 building. Workers prepared to start the flow of freshwater into the pressure vessel, knowing that they were about to take a crucial step toward stabilizing the plant.

Meanwhile, the fire engine team had been grappling with difficult logistics all through the early morning hours. Of the three fire engines on site, one had been wrecked by the tsunami; another was stuck near reactors 5 and 6, trapped by damaged roads. That left one fire engine to cool the overheating reactor 1. This truck was the best hope for getting water into the pressure vessel quickly, but it took hours to maneuver it through the plant's wreckage. Finally the workers smashed a lock on an electronic gate and drove the fire engine through.

LESSON 4
Install independent and secure battery systems to power crucial instruments during emergencies.
In their initial, improvised response, the fire crew pumped water into the truck's storage tanks, then drove close to the side of the reactor building and injected the water into the fire protection system's intake lines. It was 5:46 a.m. on 12 March when the first drops of water sprayed across the molten fuel. Then the workers drove back to the water tanks and began the slow, arduous operation all over again. Eventually workers managed to use the fire engine's hoses to connect the water tanks directly to the intake lines and established a steady flow of water. By midafternoon, they had injected 80 000 liters of water into the pressure vessel using this makeshift system. But it was too little, too late.
At 2:54 p.m., with freshwater supplies running short, TEPCO headquarters ordered the fire truck crews to inject seawater into the pressure vessel through the fire protection line. Under normal conditions, saltwater is never allowed in a reactor pressure vessel because it would corrode the vessel's protective steel walls and leave a mineral residue on the fuel rods. The decision was an admission that saving the reactor was no longer an option and that operators could only hope to prevent a wide-scale disaster. Fukushima Dai-ichi was now beyond the point of no return.
Workers stretched long fire hoses from a seaside pit that had been filled with seawater by the tsunami; three newly arrived fire engines lined up to pump the water through. They connected the hose to the fire protection system's intake line, and around 3:30 on 12 March they prepared to blast the reactor with seawater.
It had been 24 hours since the tsunami roared into the harbor, and the desperate efforts of both the power crew and the fire truck crew were about to pay off. It must have seemed that their exhaustion and terror were nearly at an end.

The order to vent the containment vessel had come at midnight. But without power to remotely operate the vent system's valves, it wouldn't be a simple task.
And whether the workers knew it or not, time was of the essence. While the venting team prepared for action during the early morning hours of 12 March, gases were building up inside the primary containment vessel and pushing on its weakest points, its gaskets and seals, and they were starting to give. Hydrogen gas hissed through the breaches and drifted up to the top of the building. Hour by hour, the gas collected there until it formed a layer of pure combustible menace.

LESSON 5
Ensure that catalytic hydrogen recombiners (power-free devices that turn dangerous hydrogen gas back into steam) are positioned at the tops of reactor buildings where gas would most likely collect.
The workers in charge of the venting operation took iodine tablets. It was a feeble attempt at protection against the radiation they'd soon encounter, but it was better than nothing. They gathered protective head-to-toe suits and face masks connected to air tanks. At 3:45 a.m., the vent crew tried to measure the radiation dose inside the reactor building, which had been off limits for 6 hours. Armed with handheld dosimeters, they opened the air lock, only to find a malevolent white cloud of some "gaseous substance" billowing toward them. Fearing a radiation steam bath, they slammed the door shut. They didn't get their reading, but they had a good indication that things had already gone seriously wrong inside the reactor.
If they could have looked inside the reactor pressure vessel at around 6:30 a.m. on the morning of 12 March, they would have seen a nuclear core transformed into molten sludge. The melted mixture of uranium, zirconium, and other metals had oozed to the bottom of the reactor pressure vessel, where it was gradually eating through the steel floor.
But as the morning ticked on, the vent crew were forced to sit and wait; they were standing by for word that residents had been evacuated and that it was safe to release the radioactive gases into the air. The government had issued an evacuation order for residents living within 3 km of the plant the night before; in the early morning hours officials announced that everyone within a 10 km radius of the plant should pack up and go. Residents who had lived their whole lives in the shadow of the Fukushima Dai-ichi plant boarded buses, expecting to be gone for a couple of days at most.
At 9:03 a.m. the message came: The last buses had departed. At 9:04 workers set out for the reactor building to open the valves that would allow gas to flow out of the primary containment vessel. They entered the reactor building and began a long, dark trek around the periphery of the primary containment vessel, guided only by flashlight beams. As they walked, their handheld dosimeters flashed troubling numbers. In normal conditions, a nuclear plant employee's radiation limit is 50 millisieverts per year; in an emergency situation it is 100 mSv. The workers had covered about half the distance to the valve when they realized they had to turn back—if they continued, they would exceed the 100 mSv dose. They returned to the control room at 9:30. They had failed.
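The dose limit translates directly into a time budget: at a given dose rate, the minutes a worker can remain on task are just the limit divided by the rate. A minimal sketch, with hypothetical dose rates chosen only for illustration (the article does not report the actual readings):

```python
def minutes_until_limit(dose_rate_msv_per_hr, limit_msv=100.0):
    """Minutes a worker can stay in a radiation field of the given
    dose rate (mSv/h) before reaching the emergency limit (mSv)."""
    return 60.0 * limit_msv / dose_rate_msv_per_hr

# hypothetical dose rates, in mSv/h
for rate in (10.0, 100.0, 300.0):
    print(f"{rate:6.1f} mSv/h -> {minutes_until_limit(rate):6.1f} min")
```

At a few hundred millisieverts per hour, the budget shrinks to minutes, which is why the vent crew had to abandon a walk of only a few hundred meters.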
Over the next hours the operators scrambled to find another way to open the valves; finally they decided to blast the valve open with air. They used a crane truck to haul a portable air compressor, the kind typically used at construction sites, to the crucial valve's location. At 2:00 p.m. the vent crew switched the compressor on, while workers in the control room nervously watched the gauge.
By 3:30 p.m. on 12 March, it seemed that the venting had worked and that the worst was over. The pressure had dropped significantly in unit 1's primary containment vessel, suggesting that the valve had opened and that gases had rushed through the pipes to the ventilation stack near the reactor building. The workers must have felt that the danger was ebbing. They had no idea that leaks from the vent lines had added even more hydrogen to the gas collected below the ceiling of unit 1's outer building—and it was now ready to blow.

At 3:36 p.m., a spark flashed in the darkness of the reactor building, and hydrogen gas ignited. With a roar, the top of the reactor building exploded.
The roof shattered and the walls splintered; fragments of the building flew through the air. Chunks of rubble cut into the cable leading from the power truck, and the flow of current stopped; now the pumps could not be turned on, and freshwater could not cascade into the core. Other pieces of debris sliced into the fire engine hoses leading from the seawater pit. Smoke billowed upward, radiation levels soared, and the workers fled Fukushima's first radioactive ruin. It wouldn't be the last: The battle to contain the catastrophe during the first 24 hours was lost, and the explosions would keep coming.

LESSON 6
Install power-free filters on vent lines to remove radioactive materials and allow for venting that won't harm nearby residents.
The failure of reactor 1 made efforts to stabilize the other reactors exponentially more difficult: Now workers would be laboring in a radioactive hot zone littered with debris. In addition, when work crews returned to the power truck sometime after the explosion, they couldn't get the power flowing. So the disaster continued. At reactors 2 and 3, emergency cooling systems functioned for several days. When reactor 3's overtaxed system failed on 13 March, workers struggled to connect alternate water supplies and to vent the primary containment vessel. But work was slow, and soon reactor 3 followed reactor 1's example. Leaking gas collected at the top of the building, and it exploded on the morning of 14 March.
That blast further impeded recovery efforts at reactor 2, and on the morning of 15 March some still-obscure explosive noise resonated inside the unit 2 reactor building. On that same day, an explosion tore the roof off reactor building 4 and a fire broke out inside. TEPCO reports say the problems in reactor 4 were probably due to hydrogen gas that leaked in from reactor 3; despite early reports to the contrary, the spent fuel rods stored in pools in reactors 4, 5, and 6 were covered with water throughout the accident and never posed a threat.
Each detonation made the effort to stabilize the plant more hopeless. It is clear that if workers had been able to gain control of reactor 1, the whole terrible sequence of events would have been different. But could the workers have done anything differently to speed up their response? Could the full scope of the catastrophe have been averted? So far, TEPCO management hasn't answered those questions.
  We've learned a great deal about the Fukushima accident in the past seven months. But the nuclear industry's trial-and-error learning process is a dreadful thing: The rare catastrophes advance the science of nuclear power but also destroy lives and render entire towns uninhabitable. Three Mile Island left the public terrified of nuclear power; Chernobyl scattered fallout across vast swaths of Eastern Europe and is estimated to have caused thousands of cancer deaths. So far, the cost of Fukushima is a dozen dead towns ringing the broken power station, more than 80 000 refugees, and a traumatized Japan. We will learn even more as TEPCO releases more details of what went wrong in the first days of the accident. But as we go forward, we will also live with the knowledge that some future catastrophe will have yet more lessons to teach us.

Friday, October 28, 2011

Geoffrey West on complexity

As I was sitting down after dinner this evening news came through the interwebs that Steve Jobs had passed away. Given that he had lived seven years with a form of pancreatic cancer, it’s pretty amazing he made it this far. As cantankerous as he was – and whether you love or hate Apple products – you can’t deny that he changed the world of personal electronics. He was a true visionary.

Anyway, on to other matters. At the recent FQXi conference on time I had the pleasure of sharing a Zodiac with Geoffrey West while bouncing around Åbyfjorden, Sweden (the patch under my ear in the picture below is what kept me from vomiting all over Geoffrey). Anyhow, he gave what I think was my favorite talk at the conference. I’ve linked to it below the picture (and note that if you catch sight of me around 34 minutes or so, I am not sleeping!). It is worth a watch. In fact I think it ought to be required watching for just about anyone. It’s solidified my intention to start doing more research in complexity theory. I think physics provides the perfect means by which complexity can be studied, since its reductionist methods tend to be ideal for tackling complex problems: take it apart and put it back together again, piece by piece. Anyway, watch the video.

Here’s the video: