Tuesday, January 03, 2012

Linearization is the process of converting a homogeneous polynomial into a multilinear map over a commutative ring. There are, in general, two ways of doing this:
  • Method 1. Given any homogeneous polynomial f of degree n in m indeterminates over a commutative scalar ring R (scalar simply means that the elements of R commute with the indeterminates), proceed as follows.
    Step 1
    If all indeterminates are linear in f , then we are done.
    Step 2
    Otherwise, pick an indeterminate x such that x is not linear in f. Without loss of generality, write f = f(x, X), where X is the set of indeterminates in f excluding x. Define g(x1, x2, X) := f(x1 + x2, X) − f(x1, X) − f(x2, X). Then g is a homogeneous polynomial of degree n in m + 1 indeterminates. However, the highest degree of x1 and x2 in g is n − 1, one less than that of x. (A code sketch of this splitting step appears after the two methods below.)
    Step 3
    Repeat the process, starting with Step 1, for the homogeneous polynomial g. Continue until the set X of indeterminates is enlarged to a set X′ such that each x ∈ X′ is linear.

  • Method 2. This method applies only to homogeneous polynomials that are also homogeneous in each indeterminate when the other indeterminates are held constant, i.e., f(tx, X) = t^n f(x, X) for some n and any t ∈ R. Note that if all of the indeterminates in f commute with each other, then f is essentially a monomial, so this method is particularly useful when the indeterminates are non-commuting. If this is the case, then we use the following algorithm:
    Step 1
    If x is not linear in f and f(tx, X) = t^n f(x, X), replace x with a formal linear combination of n indeterminates over R:
    $$ r_1 x_1 + r_2 x_2 + \cdots + r_n x_n, \qquad r_i \in R $$
    Step 2
    Define a polynomial g ∈ R⟨x1, …, xn⟩, the free (non-commuting) algebra over R generated by the non-commuting indeterminates xi, by:
    $$ g(x_1, \dots, x_n) := f(r_1 x_1 + \cdots + r_n x_n) $$
    Step 3
    Expand g and take the sum of the monomials in g whose coefficient is r1r2⋯rn. The result is a linearization of f for the indeterminate x.
    Step 4
    Take the next non-linear indeterminate and start over (with Step 1). Repeat the process until f is completely linearized.
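Here is a minimal sketch of the splitting step of Method 1 in code, assuming SymPy is available; the helper name split_once and the concrete example f(x) = x^2 are mine, not part of the method above.

    import sympy as sp

    def split_once(f, x, x1, x2):
        # Step 2: g(x1, x2, X) := f(x1 + x2, X) - f(x1, X) - f(x2, X)
        return sp.expand(f.subs(x, x1 + x2) - f.subs(x, x1) - f.subs(x, x2))

    # Non-commuting indeterminates, the setting where linearization is most useful.
    x, x1, x2 = sp.symbols('x x1 x2', commutative=False)

    f = x**2
    g = split_once(f, x, x1, x2)
    print(g)   # x1*x2 + x2*x1 -- every indeterminate is now linear, so we stop

For a polynomial of higher degree one would simply repeat the same step on g, as Step 3 prescribes.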

Remarks.
  1. The method of linearization is often used in the study of Lie algebras, Jordan algebras, PI-algebras, and quadratic forms.
  2. If the characteristic of the scalar ring R is 0 and f is a monomial in one indeterminate, we can recover f from its linearization by setting all of its indeterminates to a single indeterminate x and dividing the resulting polynomial by n!, where n is the degree of f:
    $$ f(x) = \frac{1}{n!}\,\operatorname{linearization}(f)(x, \dots, x) $$
    Please see the first example below; a short worked check also follows these remarks.
  3. If f is a homogeneous polynomial of degree n , then the linearized f is a multilinear map in n indeterminates.
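As a quick illustration of Remark 2, take f(x) = x^2 (so n = 2) and its linearization x1x2 + x2x1 from the first example below; setting both indeterminates to x and dividing by 2! recovers f:
$$ \frac{1}{2!}\,(x \cdot x + x \cdot x) = \frac{2x^2}{2} = x^2. $$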
Examples.
  • f(x) = x^2. Then f(x1 + x2) − f(x1) − f(x2) = x1x2 + x2x1 is a linearization of x^2. In general, if f(x) = x^n, then the linearization of f is
    $$ \sum_{\sigma \in S_n} x_{\sigma(1)} \cdots x_{\sigma(n)} = \sum_{\sigma \in S_n} \prod_{i=1}^{n} x_{\sigma(i)} $$
    where Sn is the symmetric group on {1, …, n}. If in addition all the indeterminates commute with each other and n! ≠ 0 in the ground ring, then the linearization becomes
    $$ n!\, x_1 x_2 \cdots x_n = n! \prod_{i=1}^{n} x_i $$
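Below is a small check of this formula for n = 3, assuming SymPy; the variable names are mine. The permutation sum is exactly what Method 2 produces for f(x) = x^3, and the last lines verify the n! recovery of Remark 2 (the scalars here have characteristic 0).

    from itertools import permutations
    import sympy as sp

    x, x1, x2, x3 = sp.symbols('x x1 x2 x3', commutative=False)
    xs = (x1, x2, x3)

    # Sum over the symmetric group S_3: sum_sigma x_{sigma(1)} x_{sigma(2)} x_{sigma(3)}
    lin = sum(sp.Mul(*[xs[i] for i in sigma]) for sigma in permutations(range(3)))

    # Remark 2: identify every indeterminate with x and divide by 3! to recover x**3
    recovered = sp.expand(lin.subs({x1: x, x2: x, x3: x})) / sp.factorial(3)
    print(sp.simplify(recovered - x**3) == 0)   # True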

polarization

polarization of a polynomial

The fundamental ideas are as follows. Let f(u) be a polynomial in n variables u = (u1, u2, ..., un). Suppose that f is homogeneous of degree d, which means that
f(t u) = t^d f(u) for all t.
Let u^(1), u^(2), ..., u^(d) be a collection of indeterminates with u^(i) = (u1^(i), u2^(i), ..., un^(i)), so that there are dn variables altogether. The polar form of f is a polynomial
F(u^(1), u^(2), ..., u^(d))
which is linear separately in each u^(i) (i.e., F is multilinear), symmetric in the u^(i), and such that
F(u, u, ..., u) = f(u).
The polar form of f is given by the following construction
$$ F(\mathbf{u}^{(1)},\dots,\mathbf{u}^{(d)})=\frac{1}{d!}\,\frac{\partial}{\partial\lambda_1}\dots\frac{\partial}{\partial\lambda_d}\,f(\lambda_1\mathbf{u}^{(1)}+\dots+\lambda_d\mathbf{u}^{(d)})\Big|_{\lambda=0}. $$
In other words, F is a constant multiple of the coefficient of λ1 λ2 ⋯ λd in the expansion of f(λ1 u^(1) + ... + λd u^(d)).
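As an illustration, here is a short computation of the polar form from this derivative formula, assuming SymPy; the concrete choice f(u) = u1^2 u2 (so d = 3, n = 2) and all variable names are mine.

    import sympy as sp

    d, n = 3, 2
    lam = sp.symbols('lambda1:4')                                       # lambda1, lambda2, lambda3
    u = [sp.symbols('u1_%d u2_%d' % (i, i)) for i in range(1, d + 1)]   # u^(1), u^(2), u^(3)

    def f(v1, v2):
        return v1**2 * v2                                               # homogeneous of degree d = 3

    # f(lambda1 u^(1) + lambda2 u^(2) + lambda3 u^(3)), built componentwise
    arg = [sum(lam[i] * u[i][j] for i in range(d)) for j in range(n)]
    expr = f(*arg)

    # F = (1/d!) d/d(lambda1) ... d/d(lambda_d) of expr, evaluated at lambda = 0
    F = sp.diff(expr, *lam) / sp.factorial(d)
    F = sp.expand(F.subs({l: 0 for l in lam}))

    # Defining property: F(u, u, ..., u) = f(u)
    diagonal = {u[i][j]: u[0][j] for i in range(d) for j in range(n)}
    print(sp.simplify(F.subs(diagonal) - f(*u[0])))                     # 0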
The Aronhold Method, Polarization
2.1 Polarizations
Before proceeding, let us recall, in a language suitable for our purposes, the usual Taylor-Maclaurin expansion. Consider a function \( F(x)\) of a vector variable \( x \in V \). Under various types of assumptions we have a development for the function \( F(x+y) \) of two vector variables. For our purposes, we may restrict our considerations to polynomials and develop
$$F(x + y) := \sum_{i = 0}^{\infty} F_i(x,y), $$ where by definition \( F_i (x, y) \) is homogeneous of degree \( i \) in \( y \) (of course for polynomials the sum is really finite). Therefore, for any value of a parameter \( \lambda \), we have \( F(x + \lambda y) = \sum_{i = 0}^{\infty} \lambda^i F_i(x,y) \). If F is also homogeneous of degree \( k \) we have
$$ \begin{align}
\sum_{i = 0}^{\infty} \lambda^k F_i(x,y) &= \lambda^k F(x+y) = F(\lambda (x + y)) = F(\lambda x + \lambda y) \\
&= \sum_{i=0}^{\infty} F_i(\lambda x, \lambda y) = \sum_{i=0}^{\infty} \lambda^i F_i (\lambda x, y)
\end{align} $$
and we deduce that \( F_i(x, y) \) is also homogeneous of degree \( k - i \) in \( x \).
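A small check of this conclusion, assuming SymPy; the concrete polynomial F(x) = x1^3 + x1 x2^2 (homogeneous of degree k = 3) and the variable names are mine.

    import sympy as sp

    x1, x2, y1, y2, t, lam = sp.symbols('x1 x2 y1 y2 t lambda')

    def F(v1, v2):
        return v1**3 + v1 * v2**2          # homogeneous of degree k = 3

    k = 3
    # F(x + t*y) = sum_i t**i F_i(x, y): read F_i off as the coefficient of t**i
    expansion = sp.expand(F(x1 + t*y1, x2 + t*y2))
    F_i = [expansion.coeff(t, i) for i in range(k + 1)]

    # Each F_i should be homogeneous of degree i in y and of degree k - i in x
    for i, Fi in enumerate(F_i):
        scaled = sp.expand(Fi.subs({x1: lam*x1, x2: lam*x2}))
        print(i, sp.simplify(scaled - lam**(k - i) * Fi) == 0)   # all True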

Sunday, January 01, 2012

how to write

To this end, I have christened
all statements (theorems, examples, definitions, etc.) and basic equations with
a proper name (using capital letters as with ordinary proper names). Instead
of saying “by Lemma 21.2.1(1), which of course you will remember,” I say
“by Nuclear Slipping 21.2.1(1),” hoping to trigger long-repressed memories of
a formula for how nuclear elements of alternative algebras slip in and out of
associators.

cokernel infinite case!


From Wikipedia (Cokernel): A subtler invariant of a linear transformation is the cokernel, which is defined as
$$ \mathrm{coker}\,f := W/f(V) = W/\mathrm{im}(f). $$
This is the dual notion to the kernel: just as the kernel is a subspace of the domain, the cokernel is a quotient space of the target. Formally, one has the exact sequence
$$ 0 \to \ker f \to V \to W \to \mathrm{coker}\,f \to 0. $$
These can be interpreted thus: given a linear equation f(v) = w to solve,
  • the kernel is the space of solutions to the homogeneous equation f(v) = 0, and its dimension is the number of degrees of freedom in a solution, if it exists;
  • the co-kernel is the space of constraints that must be satisfied if the equation is to have a solution, and its dimension is the number of constraints that must be satisfied for the equation to have a solution.
The dimension of the co-kernel and the dimension of the image (the rank) add up to the dimension of the target space. For finite dimensions, this means that the dimension of the quotient space W/f(V) is the dimension of the target space minus the dimension of the image.
As a simple example, consider the map f\colon \mathbf{R}^2 \to \mathbf{R}^2, given by f(x,y) = (0,y). Then for an equation f(x,y) = (a,b) to have a solution, we must have a = 0 (one constraint), and in that case the solution space is (x,b), or equivalently stated, (0,b) + (x,0), (one degree of freedom). The kernel may be expressed as the subspace (x,0) < V: the value of x is the freedom in a solution – while the cokernel may be expressed via the map W \to \mathbf{R}^1, (a,b) \mapsto (a): given a vector (a,b), the value of a is the obstruction to there being a solution.
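A quick numerical check of this example, assuming NumPy; the matrix below is the map f(x, y) = (0, y) written in the standard basis.

    import numpy as np

    A = np.array([[0.0, 0.0],
                  [0.0, 1.0]])             # f(x, y) = (0, y)

    rank = np.linalg.matrix_rank(A)
    dim_domain, dim_target = A.shape[1], A.shape[0]

    dim_kernel = dim_domain - rank         # degrees of freedom in a solution
    dim_cokernel = dim_target - rank       # constraints on the right-hand side

    print(rank, dim_kernel, dim_cokernel)  # 1 1 1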
An example illustrating the infinite-dimensional case is afforded by the map g\colon \mathbf{R}^\infty \to \mathbf{R}^\infty, \{a_n\} \mapsto \{b_n\} with b1 = 0 and bn + 1 = an for n > 0. Its image consists of all sequences with first element 0, and thus its cokernel consists of the classes of sequences with identical first element. Thus, whereas its kernel has dimension 0 (it maps only the zero sequence to the zero sequence), its co-kernel has dimension 1. Since the domain and the target space are the same, the rank and the dimension of the kernel add up to the same sum as the rank and the dimension of the co-kernel ( \aleph_0 + 0 = \aleph_0 + 1 ), but in the infinite-dimensional case it cannot be inferred that the kernel and the co-kernel of an endomorphism have the same dimension (0 \neq 1). The reverse situation obtains for the map h\colon \mathbf{R}^\infty \to \mathbf{R}^\infty, \{a_n\} \mapsto \{c_n\} with cn = an + 1. Its image is the entire target space, and hence its co-kernel has dimension 0, but since it maps all sequences in which only the first element is non-zero to the zero sequence, its kernel has dimension 1.
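For what it's worth, here is a tiny sketch of these two maps in plain Python, modelling a sequence {a_n} (n >= 1) as a function on positive integers; the modelling choice is mine.

    def g(a):
        # b_1 = 0 and b_{n+1} = a_n: injective, image = sequences whose first entry is 0
        return lambda n: 0 if n == 1 else a(n - 1)

    def h(a):
        # c_n = a_{n+1}: surjective, kernel = sequences supported on the first entry
        return lambda n: a(n + 1)

    a = lambda n: n                         # the sequence 1, 2, 3, ...
    print([g(a)(n) for n in range(1, 6)])   # [0, 1, 2, 3, 4]
    print([h(a)(n) for n in range(1, 6)])   # [2, 3, 4, 5, 6]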

the seekers

In a manner of speaking, the legacy of renunciation of philosophy and methodology led much of the orthodox economics profession to behave in ways rather similar to the Seekers from 2008 onwards. The parallels between the Seekers and the contemporary economics profession are, of course, not exact. The Seekers were disappointed when their world didn’t come to an end; economists were convinced their Great Moderation and neoliberal triumph would last forever, and were disappointed when it did appear to come to an end. The stipulated turning point never arrived for the Seekers, while the unsuspected turning point got the drop on the economists. The Seekers garnered no external support for their doctrines, indeed, quitting their jobs and contracts prior to their Fated Day; the economists, on the other hand, persist in being richly rewarded by many constituencies for remaining stalwart in their beliefs. The public press was never friendly towards the Seekers; it only turned on the economists with the financial collapse. (There are already signs it may be reverting to its older slavish adoration, however.) But nonetheless, the shape of the reactions to cognitive dissonance was amazingly similar. The crisis, which at first blush might seem to have refuted most everything that the economic orthodoxy believed in, was in the fullness of time more often than not trumpeted from both the Left and the Right as reinforcing their adherence to neoclassical economic theory. Thus was made manifest the ‘spontaneous methodology of the economics profession’.