## Projections onto a Convex Body

August 23, 2010

Given a point ${x} \in {{\bf R}^d}$ and an affine subspace ${H \neq {\bf R}^d}$, the projection of ${x}$ onto ${H}$ is the point ${P_H(x)}$ such that,

$\displaystyle \begin{array}{rcl} \vert \vert x - P_H(x) \vert \vert = \min \{ \vert \vert x - y \vert \vert ~ \mid ~ y \in H \} \end{array} .$

One can obviously define other notions of projection, but the above is probably the most commonly used in geometry. If ${x \notin H}$ then the projected point ${P_H(x)}$ is the unique point in ${H}$ such that the vector ${P_H(x) - x}$ is orthogonal to ${H}$. It is also well known that projecting a segment onto an affine subspace can only shrink its length. These facts are easy to prove directly, but in fact they are corollaries of the following two results.

 Given any nonempty closed convex set ${C \subseteq {\bf R}^d}$ and a point ${x \in {\bf R}^d}$, there is a unique point ${P_C(x) \in C}$ such that, $\displaystyle \begin{array}{rcl} \vert \vert x - P_C(x) \vert \vert = \min \{ \vert \vert x - y \vert \vert ~ \mid ~ y \in C\} \end{array} .$

and,

 Let ${C}$ be a nonempty closed convex set. The mapping ${P_C : {\bf R}^d \rightarrow C}$ is a contraction, i.e., for any ${x,y \in {\bf R}^d}$ we have, $\displaystyle \begin{array}{rcl} \vert \vert P_C(x) - P_C(y) \vert \vert \leq \vert \vert x - y \vert \vert \end{array} .$

Applying these results to affine subspaces (which are nonempty closed convex sets) yields the results mentioned earlier. The map that sends $x$ to $P_C(x)$ is known as the metric projection onto $C$. The proofs of these facts follow.
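Although the proofs are deferred, the contraction property is easy to check numerically for a convex body whose metric projection has a closed form. The sketch below (the box $[0,1]^3$ is my illustrative choice of $C$; coordinatewise clipping is its metric projection) samples random pairs and verifies non-expansiveness:

```python
import math
import random

def project_box(x, lo=0.0, hi=1.0):
    """Metric projection onto the box [lo, hi]^d: clip each coordinate."""
    return [min(max(xi, lo), hi) for xi in x]

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

random.seed(0)
for _ in range(1000):
    x = [random.gauss(0, 2) for _ in range(3)]
    y = [random.gauss(0, 2) for _ in range(3)]
    # contraction: ||P_C(x) - P_C(y)|| <= ||x - y||
    assert dist(project_box(x), project_box(y)) <= dist(x, y) + 1e-12
print("contraction holds on 1000 random pairs")
```

Of course this is a sanity check on one convex body, not a proof.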

## Bounding the Volume of Hamming Balls

August 13, 2010

In this post I will derive an oft-used inequality in theoretical computer science, using the Chernoff technique. The following function turns up in several places in combinatorial geometry and elsewhere.

$\displaystyle \begin{array}{rcl} \Phi_d(m) = \sum_{i=0}^d {m \choose i}. \end{array}$

Here we assume ${m \geq d}$ since otherwise the function is trivially seen to be equal to ${2^m}$. Here we will show that,

$\displaystyle \begin{array}{rcl} \Phi_d(m) \leq \left ( \frac{me}{d} \right )^d \end{array}$

Two easy interpretations of ${\Phi_d(m)}$ are as follows. First, it counts the number of points in the Hamming cube ${{\bf H}^m = \{0,1\}^m}$ whose Hamming distance from the origin is at most ${d}$. This interpretation also allows us to derive the nice identity,

$\displaystyle \begin{array}{rcl} \Phi_d(m) = \sum_{h=0}^d \sum_{l = 0}^h {s \choose l}{m - s \choose h - l}. \end{array}$

for any ${0 \leq s \leq m}$. In the above expression we obey the convention ${{p \choose q} = 0}$ when ${p < q}$. To see the identity, fix a point ${p \in {\bf H}^m}$ whose distance from the origin is ${s}$, and interpret the sum as counting the points within distance ${d}$ of ${p}$: here ${h}$ is the distance from ${p}$, and the inner summation counts the number of points at distance exactly ${h}$. Since the Hamming cube looks the same from every point, the number of points within distance ${d}$ is the same from the origin as from ${p}$. Hence the identity.
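Since the identity is purely combinatorial, a quick script can confirm it for small parameters. The following sketch (the function name `phi` is mine, for illustration) checks all $0 \leq s \leq m$ over a small range; note that Python's `math.comb` already follows the convention ${p \choose q} = 0$ when ${p < q}$:

```python
from math import comb

def phi(d, m):
    """Phi_d(m) = sum of binomial coefficients C(m, i) for i = 0..d."""
    return sum(comb(m, i) for i in range(d + 1))

# verify the double-sum identity for every 0 <= s <= m
for m in range(1, 12):
    for d in range(m + 1):
        for s in range(m + 1):
            rhs = sum(comb(s, l) * comb(m - s, h - l)
                      for h in range(d + 1) for l in range(h + 1))
            assert rhs == phi(d, m)
print("identity verified for all m < 12")
```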

This gives ${\Phi_d(m)}$ a nice geometric interpretation: it is also the number of points of ${{\bf H}^m}$ inside any ball of radius ${d}$. The second interpretation is probabilistic. Consider the discrete probability space ${\Omega = \{ 0, 1 \}}$ with the measure ${p(\{0\}) = p(\{1\}) = 1/2}$. Let ${X_1, X_2, \dots, X_m}$ be ${m}$ independent random variables ${X_i : \Omega \rightarrow \{0,1\}}$ defined by ${X_i(y) = y}$ for ${y \in \Omega}$. Let ${X = \sum X_i}$. Then we have,

$\displaystyle \Pr\{ X_1 = a_1, X_2 = a_2, \dots, X_m = a_m \} = \frac{1}{2^m},$

for any ${a_i \in \{0,1\}}$. As such we have,

$\displaystyle \frac{\Phi_d(m)}{2^m} = \Pr\{ X \leq d\}$.

We will use this interpretation to derive our inequality.

Lemma: ${\Phi_d(m) \leq \left ( \frac{me}{d} \right )^d}$.

Proof: Our first observation is that we may assume ${d \leq m/2}$. The function ${\left ( \frac{me}{d} \right )^d}$ is an increasing function of ${d}$: denoting it by ${f(d)}$ and extending it to a function on ${[0,m]}$, we have by logarithmic differentiation,

$\displaystyle \begin{array}{rcl} \frac{f'(d)}{f(d)} = \ln \left ( \frac{m}{d} \right ) \geq 0 \end{array}$

Thus for ${d > m/2}$ we have ${f(d) \geq f(m/2) = (2e)^{m/2} > 2^m}$. On the other hand ${\Phi_d(m) \leq 2^m}$ trivially.

So assume ${0 \leq d \leq m/2}$. Now we have the following identity,

$\displaystyle \begin{array}{rcl} \sum_{i=0}^d {m \choose i} = \sum_{j=m-d}^m {m \choose j} = 2^m \Pr\{ X \geq m-d \}. \end{array}$

so it suffices to upper bound ${\Pr \{ X \geq m - d \}}$. We now use the Chernoff technique. For any ${\lambda > 0}$,

$\displaystyle \begin{array}{rcl} \Phi_d(m) &=& 2^m \Pr\{ X \geq m - d \} \\\\ &=& 2^m \Pr \{ e^{\lambda X} \geq e^{\lambda(m - d)} \} \\\\ &\leq& 2^m \frac{{\bf E}(e^{\lambda X})}{e^{\lambda(m - d)}} ~~ \text{[By Markov's Inequality]} \\\\ &=& 2^m \frac{\prod_{i=1}^m {\bf E}(e^{\lambda X_i})}{e^{\lambda(m-d)}} ~~ \text{[By Independence of the} ~ X_i] \end{array}$

Now for any ${i}$ we have,

$\displaystyle {{\bf E}(e^{\lambda X_i}) = \frac{e^{\lambda} + 1}{2}}$

and so we have,

$\displaystyle \begin{array}{rcl} \Phi_d(m) \leq \frac{(e^{\lambda} + 1)^m}{e^{\lambda(m-d)}} \end{array}$

We optimize by finding the ${\lambda}$ which minimizes the above expression. Noting that ${m/2 \geq d}$, we find by differentiation (and observing the sign of the derivative) that the minimizing ${\lambda}$ is given by,

$\displaystyle \begin{array}{rcl} e^{\lambda} = \frac{m}{d} - 1 \end{array}$

and substituting this into the estimate we have,

$\displaystyle \begin{array}{rcl} \Phi_d(m) &\leq& \frac{\left ( \frac{m}{d} \right )^m}{\left ( \frac{m-d}{d}\right )^{m-d}} \\ &=& \left ( 1 + \frac{d}{m-d} \right )^{m-d} \left ( \frac{m}{d} \right )^d \leq \left ( \frac{me}{d} \right )^d. \end{array}$

The last step follows from the simple fact that,

$\displaystyle \begin{array}{rcl} \left ( 1 + \frac{d}{m-d} \right )^{m - d} \leq \left ( e^{\frac{d}{m-d}} \right )^{m-d} = e^d. \end{array}$

$\Box$
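As a sanity check on the lemma (not a substitute for the proof), the following snippet compares ${\Phi_d(m)}$ with ${(me/d)^d}$ over a range of parameters:

```python
from math import comb, e

def phi(d, m):
    """Phi_d(m) = sum of binomial coefficients C(m, i) for i = 0..d."""
    return sum(comb(m, i) for i in range(d + 1))

# the bound Phi_d(m) <= (me/d)^d, checked for 1 <= d <= m
for m in range(1, 40):
    for d in range(1, m + 1):
        assert phi(d, m) <= (m * e / d) ** d
print("bound verified for all m < 40")
```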

## Jung’s Theorem

August 7, 2010

Given a set of points of diameter ${D}$ in ${n}$-dimensional Euclidean space ${{\bf R}^n}$, it is trivial to see that it can be covered by a ball of radius ${D}$ (centered at any of the points). The following theorem of Jung improves this radius by a factor of about ${\frac{1}{\sqrt{2}}}$, and is best possible.
Theorem 1 (Jung’s Theorem). Let ${S}$ be a set of points in ${{\bf R}^n}$ of diameter ${D}$. Then there is a ball of radius ${\sqrt{\frac{n}{2n + 2}} D}$ covering ${S}$.
Proof: We first prove the theorem for sets ${S}$ with ${\vert S \vert \leq n + 1}$ and then extend it to an arbitrary point set. If ${\vert S \vert \leq n + 1}$ then the smallest ball enclosing ${S}$ exists. We assume that its center is the origin, and denote its radius by ${r}$. Denote by ${S' \subseteq S}$ the subset of points ${p \in S}$ with ${||p|| = r}$. It is easy to see that ${S'}$ is in fact nonempty.

Observation: the origin must lie in the convex hull of ${S'}$. Assuming the contrary, there is a separating hyperplane ${H}$ such that ${S'}$ lies on one side and the origin lies (strictly) on the other. By assumption, every point in ${S \setminus S'}$ is at distance strictly less than ${r}$ from the origin. Move the center of the ball slightly from the origin, in the direction perpendicular to ${H}$ and towards ${H}$, keeping the move small enough that the distance from the new center to every point in ${S \setminus S'}$ remains less than ${r}$. But the distance to every point of ${S'}$ has decreased, so we obtain a ball of radius strictly less than ${r}$ enclosing ${S}$, contradicting the minimality of ${r}$.

Let ${S' = \{ p_1, p_2,\dots, p_m\}}$ where ${m \leq n + 1}$. Because the origin is in the convex hull of ${S'}$, there are nonnegative ${\lambda_i}$ such that,

$\displaystyle \sum \lambda_i p_i = 0, \sum \lambda_i = 1$

Fix a ${k}$, ${1 \leq k \leq m}$. Since ${||p_i - p_k|| \leq D}$ for all ${i}$ and the ${i = k}$ term vanishes, we have,
$\displaystyle \begin{array}{rcl} 1 - \lambda_k &=& \sum_{i \neq k} \lambda_i \\ &\geq& \frac{1}{D^2}\sum_{i = 1}^m \lambda_i || p_i -p_k ||^2 \\ &=& \frac{1}{D^2}\left ( \sum_{i=1}^m \lambda_i (2r^2 - 2 \langle p_i, p_k \rangle) \right ) \\ &=& \frac{1}{D^2} \left ( 2r^2 - 2 \left \langle \sum_{i=1}^m \lambda_i p_i, p_k \right \rangle \right ) \\ &=& \frac{2r^2}{D^2} \end{array}$

Adding up the above inequalities for all values of ${k}$, we get
$\displaystyle \begin{array}{rcl} m - 1 \geq 2m r^2 / D^2 \end{array}$

Thus we get ${\frac{r^2}{D^2} \leq \frac{m - 1}{2m} \leq \frac{n}{2n + 2}}$, since ${m \leq n + 1}$ and the function ${(x-1)/(2x)}$ is increasing. So we have immediately ${r \leq \sqrt{\frac{n}{2n+2}} D}$.
The remainder of the proof uses the beautiful theorem of Helly. Let ${S}$ be any set of points of diameter ${D}$. With each point as center, draw a ball of radius ${r = \sqrt{\frac{n}{2n+2}}D}$. Any ${n + 1}$ of these balls intersect: indeed, by the above, the center of the smallest ball enclosing the corresponding ${n+1}$ points is at most ${r}$ away from each of those points. So we have a collection of compact convex sets, any ${n+1}$ of which have a nonempty intersection. By Helly’s theorem all of them have a common point, and any such point can be chosen as the center of a ball of radius ${r}$ enclosing all of ${S}$. $\Box$
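To see that Jung's bound is best possible, consider a regular $n$-simplex of side ${D}$: embedded in ${{\bf R}^{n+1}}$ as the scaled standard basis vectors ${(D/\sqrt{2}) e_i}$ (a standard embedding, used here just for illustration), its circumcenter is the centroid by symmetry, and its circumradius works out to exactly ${\sqrt{\frac{n}{2n+2}}\,D}$. A short check:

```python
from math import sqrt, isclose

def circumradius(n, D=1.0):
    """Circumradius of a regular n-simplex with side D, embedded in R^{n+1}
    with vertices (D/sqrt(2)) * e_i; the circumcenter is the centroid."""
    s = D / sqrt(2)
    centroid = [s / (n + 1)] * (n + 1)
    v1 = [s] + [0.0] * n
    return sqrt(sum((a - b) ** 2 for a, b in zip(v1, centroid)))

for n in range(1, 20):
    assert isclose(circumradius(n), sqrt(n / (2 * n + 2)))
print("the regular simplex attains Jung's bound for n < 20")
```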

## Tensor Calculus II

July 10, 2008

In the last post I described a contravariant tensor of rank 1 and a covariant tensor of rank 1. In this post we will consider generalizations of these. We will introduce tensors of arbitrary rank $(k,l)$ where $k$ is the number of contravariant indices and $l$ is the number of covariant indices. How many numbers does such a tensor represent? It is easy to see that if the tensor is defined in $n$ dimensional space, it defines $n^{(k+l)}$ real numbers for each point of space, and each coordinate system. The notation for such a tensor is

$A^{i_1 i_2 \dots i_k}_{j_1 j_2 \dots j_l}$

Now before we go ahead, I would like to clarify that the above represents just one component of the tensor, out of its $n^{(k+l)}$ components. The reason for the clarification is that with tensors and the Einstein summation convention introduced in the last post, indices are sometimes used as summation indices. Specifically, we said that if an index repeats in a tensor expression once in a contravariant position and once in a covariant position, it stands for a summation: one sums over that index for all possible values (1 to $n$). Multiple indices can repeat in a tensor expression; in that case one sums over all such indices, each varying from 1 to $n$. Therefore a tensor expression in which 1 index is to be summed over expands into a sum of $n$ quantities, and one in which 2 indices are to be summed over expands into a sum of $n^2$ quantities. Armed with this understanding we are already ready for the general transformation law of tensors. Despair not, my friends, on seeing the expression below; if you do not understand it, don't worry. We will probably never see such complicated mixed tensors of general rank $(k,l)$. However, it is necessary to understand how to compute with tensors and their transformation law. So after reading the general law of transformation and understanding what it means, I would like you to set it aside temporarily instead of getting bogged down by the weird equation! I will follow the definition with an example, restricted to 3 dimensions and to our familiar spherical polar and cartesian coordinate systems. If you can understand the stuff in there, I would like you to remember the general computational method. The general law of tensor transformation for a tensor of rank $(k,l)$ is

$\bar{A}^{i_1 i_2 \dots i_k}_{j_1 j_2 \dots j_l} = A^{u_1 u_2 \dots u_k}_{v_1 v_2 \dots v_l}\frac{\partial{\bar{x}^{i_1}}}{\partial{x^{u_1}}} \frac{\partial{\bar{x}^{i_2}}}{\partial{x^{u_2}}} \dots \frac{\partial{\bar{x}^{i_k}}}{\partial{x^{u_k}}} \cdot \frac{\partial{x^{v_1}}}{\partial{\bar{x}^{j_1}}} \frac{\partial{x^{v_2}}}{\partial{\bar{x}^{j_2}}} \dots \frac{\partial{x^{v_l}}}{\partial{\bar{x}^{j_l}}}$

In this equation, the left hand side represents the value of one component, out of the $n^{(k+l)}$ components, in the “bar” coordinate system [which is why the tensor $A$ is written with a bar]. The right hand side, on the other hand, is a huge sum of $n^{(k+l)}$ quantities because of the repeating indices $u_1, u_2, \dots, u_k, v_1, v_2, \dots, v_l$: each term of the summation is a component of the tensor multiplied by the appropriate partial derivatives, one term for each particular instantiation of $u_1, u_2, \dots, u_k, v_1, v_2, \dots, v_l$. The transformation law for the general rank tensor is a direct generalization of the law for rank 1 tensors. The component is indexed by $i_1,i_2,\dots,i_k,j_1,j_2,\dots,j_l$, and the partial derivatives involving the barred coordinates appear in the numerator for the $i$'s – the contravariant indices – and in the denominator for the $j$'s – the covariant indices. The newly introduced $u_s, v_t$ are summed over since they are repeating indices.

Wow! We have stated the general transformation law, and next we will proceed to an example. Before we do that, I would like to state our general roadmap for the future posts. Remember that our goal is to understand the equations of General Relativity; we are studying tensors just because we want to understand the weird symbols in the profound equations laid down by Einstein. So in the next post we will study general operations on tensors – addition, subtraction, inner products, etc. Along with this we will state some rules to recognize tensors: a general rule by which we can ascertain that a set of numbers forms the components of a tensor, which we will use in later posts. We will not, however, talk about the derivative of a tensor yet, because that involves some more machinery – but hey, without derivatives there is no calculus, so get back to it we will! The next few posts will talk about the fundamental tensor, which is related to measuring the distance between two points in space; that will define distance for us. We will then discuss derivatives of tensors, and in doing so introduce some tensors built from the fundamental tensor which are helpful for defining derivatives. Armed with all this machinery about distances and derivatives, we will state the equations for geodesics, which are shortest paths between 2 points in space. In ordinary space a geodesic is a straight line, but in spaces where the fundamental tensor is more complicated the geodesics are not “straight” lines [well, by then we will probably wonder what straight means anyway!]. Finally, with the notions of distance, derivatives and shortest paths in our bag, we will talk about what we really need to understand the Einstein equations – curvature. Specifically, we will introduce quantities which represent how curved a space is at a point.
After this we will state the Einstein equations, and if I can manage it I will show some simple consequences of these equations. The Einstein equations relate the curvature of our playground [which is the “space” of 4-dimensional spacetime] to the distribution of mass and energy.

So much for the future posts – all that is probably going to take several posts! But for the time being let us get back to an example in familiar 3-space, where we will work with 2 coordinate systems: cartesian $x^1=x, x^2=y, x^3=z$, and spherical polar $\bar{x}^1=r, \bar{x}^2=\theta, \bar{x}^3 = \phi$. Recall that for a point $P$, $r$ is the distance from the origin $O$, $0 \leq \phi \leq \pi$ is the angle made by $OP$ with the $z$ axis, and $0 \leq \theta < 2\pi$ is the angle made by the projection of $OP$ onto the $xy$-plane with the $x$ axis. Then it is easy to see that

$x^1 = \bar{x}^1 \sin{\bar{x}^3} \cos{\bar{x}^2}$

$x^2=\bar{x}^1 \sin{\bar{x}^3} \sin{\bar{x}^2}$

$x^3=\bar{x}^1 \cos{\bar{x}^3}$.

The inverse transformations equations are

$\bar{x}^1 = \sqrt{ {x^1}^2 + {x^2}^2 + {x^3}^2}$

$\bar{x}^2=\tan^{-1}\left( \frac{x^2}{x^1}\right)$

$\bar{x}^3=\cos^{-1}\left( \frac{x^3}{\sqrt{ {x^1}^2 + {x^2}^2 + {x^3}^2}}\right)$

Assume we are working in a small region of space where $x^1,x^2,x^3$ are all nonzero, so that $\bar{x}^2,\bar{x}^3$ are well defined. We also need to verify that the Jacobian matrix of the partial derivatives is nonsingular there, but that is left as an exercise!
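For the curious, here is one way to do that exercise numerically: approximate the Jacobian matrix $\frac{\partial x^i}{\partial \bar{x}^j}$ by finite differences at a sample point and check that its determinant – whose absolute value should be $(\bar{x}^1)^2 \sin \bar{x}^3$ in these coordinates – is nonzero. A sketch (function names are mine, for illustration):

```python
from math import sin, cos, isclose

def to_cartesian(r, theta, phi):
    """(x^1, x^2, x^3) as functions of (xbar^1, xbar^2, xbar^3)."""
    return (r * sin(phi) * cos(theta), r * sin(phi) * sin(theta), r * cos(phi))

def jacobian(r, theta, phi, h=1e-6):
    """Forward-difference approximation of the partials of x^i w.r.t. xbar^j."""
    base = to_cartesian(r, theta, phi)
    cols = []
    for bumped in [(r + h, theta, phi), (r, theta + h, phi), (r, theta, phi + h)]:
        pt = to_cartesian(*bumped)
        cols.append([(pt[j] - base[j]) / h for j in range(3)])
    return cols

def det3(c):
    # cofactor expansion of a 3x3 determinant
    return (c[0][0] * (c[1][1] * c[2][2] - c[1][2] * c[2][1])
          - c[1][0] * (c[0][1] * c[2][2] - c[0][2] * c[2][1])
          + c[2][0] * (c[0][1] * c[1][2] - c[0][2] * c[1][1]))

r, theta, phi = 2.0, 0.7, 1.1
d = det3(jacobian(r, theta, phi))
assert isclose(abs(d), r**2 * sin(phi), rel_tol=1e-4)  # nonzero away from the z axis
print("Jacobian determinant:", d)
```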

We will take a simple case here.

Example

Suppose a contravariant tensor has components $\bar{A}^1, \bar{A}^2, \bar{A}^3$ in the spherical polar system. What are its components in the cartesian coordinate system? For illustration we compute only $A^2$. By the tensor transformation law

$A^2 = \bar{A}^i \frac{\partial{x^2}}{\partial{\bar{x}^i}}$

and so, evaluating the partial derivatives and summing up, $A^2$ evaluates to

$A^2=\bar{A}^1\sin{\bar{x}^3}\sin{\bar{x}^2} + \bar{A}^2\bar{x}^1\sin{\bar{x}^3}\cos{\bar{x}^2}+\bar{A}^3\bar{x}^1\cos{\bar{x}^3}\sin{\bar{x}^2}$

Of course the above is the value at a specific point in space. For a contravariant tensor of rank 2, each component would be a sum of 9 such terms. We encourage the reader to evaluate the components of such a tensor.
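Here is a small numerical companion to the example: pick a point and hypothetical components $\bar{A}^i$ (the numbers below are arbitrary, chosen only for illustration), compute the partials of $x^2$ both analytically and by finite differences as an independent check, and sum up $\bar{A}^i \frac{\partial x^2}{\partial \bar{x}^i}$:

```python
from math import sin, cos, isclose

# a sample spherical point (r, theta, phi) and arbitrary barred components
r, th, ph = 1.5, 0.6, 0.9
Abar = (0.3, -1.2, 2.0)

def x2(r, th, ph):
    """x^2 = xbar^1 sin(xbar^3) sin(xbar^2)."""
    return r * sin(ph) * sin(th)

# analytic partials of x^2 with respect to (xbar^1, xbar^2, xbar^3)
partials = (sin(ph) * sin(th), r * sin(ph) * cos(th), r * cos(ph) * sin(th))

# the same partials by forward differences, as an independent check
h = 1e-7
num = ((x2(r + h, th, ph) - x2(r, th, ph)) / h,
       (x2(r, th + h, ph) - x2(r, th, ph)) / h,
       (x2(r, th, ph + h) - x2(r, th, ph)) / h)
for a, b in zip(partials, num):
    assert isclose(a, b, rel_tol=1e-5)

A2 = sum(a * p for a, p in zip(Abar, partials))  # A^2 = Abar^i dx^2/dxbar^i
print("A^2 =", A2)
```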

Before the next post do think of some of the things you can do with tensors. Specifically think about

1. When can two tensors be added/subtracted to produce new tensors?
2. Can two tensors be multiplied to produce a new tensor?
3. Consider a function $f$ defined for each point of space. Suppose for each coordinate system $(x^1,x^2,\dots, x^n)$ we define $n$ components at a point by $A_i= \frac{\partial{f}}{\partial{x^i}}$. Prove that these numbers define a covariant tensor of rank 1.

## Tensor Calculus – Part 1

May 12, 2008

Firstly, I am sorry for the outage in posts; I could not find time to write up anything worth a really good post. I have received some verbal requests for a new post, and I appreciate all such requests! So without further ado, let me get down to business.

Today’s post will be about tensor calculus. In fact I intend to write much more about it in subsequent posts, but we will see how that goes. Before giving an introduction, let me state my motivation for studying tensors. While some familiarity with physics is needed to understand the next paragraph, the tensor calculus itself is a purely mathematical topic.

The motivation comes from my long standing wish to understand the General Theory of Relativity. The GTR is a geometrical theory of gravity. It explains gravity as a result of the deformation of SpaceTime: mass and energy cause SpaceTime to be distorted and curved, and this distortion in turn acts back on mass to determine its motion. The perceived effects of gravitation on a particle are a direct result of its motion in the distorted SpaceTime, without any mysterious force acting on the particle. That is about as much as I know of the GTR for now. But the basic mathematical tool for studying the GTR is Riemannian Geometry. Riemannian Geometry is the appropriate generalization, to a general $n$-dimensional space, of the study of things like curves, surfaces and curvature, so familiar from everyday experience in two and three dimensional spaces. Many basic entities in Riemannian Geometry are best described by tensors. Tensors are also used in many other places, and in order to understand current physical theories and their mathematical formulations, one needs to know about tensors and basic differential geometry.

So, now let us get down to an introduction. Consider an $n$-dimensional space $V_n$. A coordinate system is an assignment of an $n$-tuple of numbers $x^i, i = 1 \cdots n$ to each point of $V_n$. In a different coordinate system the same point will have a different set of coordinates $\bar{x}^i, i = 1 \cdots n$. We also assume that the $\bar{x}^i$ are given by $n$ functions $\varphi_i(x^j)$ of the $x^j$, and that these functions are such that the Jacobian matrix $\frac{\partial{\bar{x}^i}}{\partial{x^j}}$ is invertible.

Example. In 3-dimensional space, a standard cartesian coordinate system can be the first coordinate system, while spherical polar coordinates $(r,\theta,\phi)$ can be the barred coordinate system. Or the barred coordinate system can be one whose 3 axes are not necessarily at right angles to each other.

Definition. A contravariant tensor of rank 1 is a set of $n$ functions $A^i$ of the coordinates such that they transform according to the following law

$\bar{A}^i = A^j \frac{\partial{\bar{x}^i}}{\partial{x^j}}$

Now in the above expression you might notice that the index $j$ is repeated on the right hand side of the equation. That implies a summation. Therefore the expression actually means the following sum

$\sum^n_{j=1} A^j \frac{\partial{\bar{x}^i}}{\partial{x^j}}$.

In general, in writing tensorial expressions, the above so-called summation convention is widely used. Whenever an index occurs once as a superscript and once as a subscript, or once in the numerator and once in the denominator of the partial derivatives, it is to be summed over all the coordinates.

The above definition was for a contravariant tensor of rank 1. In fact a contravariant tensor of rank 1 is just a vector, and the tensorial transformation law generalizes the transformation of vector components. To see this, consider 3-dimensional space with basis vectors $e_1, e_2, e_3$, and suppose a vector $v = a_1 e_1 + a_2 e_2 + a_3 e_3$. Consider a different coordinate system with basis vectors $e'_i = \sum_j a_{ij} e_j$, and in this coordinate system let $v = a'_1 e'_1 + a'_2 e'_2 + a'_3 e'_3$. Now what are the $a'_i$ as functions of the $a_i$? A quick calculation shows that $a' = a B$, where $a'$ is the row vector of coordinates $a'_i$, $a$ is the row vector of coordinates $a_i$, and $B$ is the inverse of the matrix $A = (a_{ij})$. As one can verify, the entries of $B$ are exactly the partial derivatives $\frac{\partial{\bar{x}^i}}{\partial{x^j}}$, in agreement with the transformation law above.

Example. Consider the differentials $dx^i$. Clearly the differentials in the barred coordinate system are given by

$d\bar{x}^i= dx^j \frac{\partial{\bar{x}^i}}{\partial{x^j}}$
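One can watch this law in action numerically: displace the coordinates by small amounts and compare the actual change in the other coordinate system with the sum prescribed by the transformation law. The sketch below uses the spherical-to-cartesian direction (barred to unbarred, which has the same form as the law above, with the roles of the two systems swapped), since those partial derivatives are easy to write down:

```python
from math import sin, cos, isclose

def cart(r, th, ph):
    """Cartesian coordinates from spherical (r, theta, phi)."""
    return (r * sin(ph) * cos(th), r * sin(ph) * sin(th), r * cos(ph))

# a base point and a small displacement dbar in the barred coordinates
r, th, ph = 1.3, 0.5, 1.0
dbar = (1e-6, 2e-6, -1e-6)

# actual change in the cartesian coordinates
p0 = cart(r, th, ph)
p1 = cart(r + dbar[0], th + dbar[1], ph + dbar[2])
dx_actual = [b - a for a, b in zip(p0, p1)]

# change predicted by dx^i = dxbar^j * (dx^i / dxbar^j), analytic partials
J = [[sin(ph) * cos(th), -r * sin(ph) * sin(th), r * cos(ph) * cos(th)],
     [sin(ph) * sin(th),  r * sin(ph) * cos(th), r * cos(ph) * sin(th)],
     [cos(ph),            0.0,                  -r * sin(ph)]]
dx_law = [sum(J[i][j] * dbar[j] for j in range(3)) for i in range(3)]

for a, b in zip(dx_actual, dx_law):
    assert isclose(a, b, abs_tol=1e-10)
print("dx matches the transformation law to first order")
```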

Next, we define a covariant tensor of rank 1 .

Definition. A covariant tensor of rank 1 is a set of $n$ functions $A_i$ that transform according to

$\bar{A}_i = A_j \frac{\partial{x^j}}{\partial{\bar{x}^i}}$.

In the next post we will get to the definition of more general tensors, called mixed tensors. Such tensors are contravariant or covariant in more than one index, or have a mix of such indices. We will see the transformation law of such tensors, and we will also start with some simple operations on tensors.

## Superconcentrators From Expanders

March 7, 2008

Today I will be blogging about the construction of superconcentrators from expanders. This is directly from the expanders survey; I am just reviewing it here for my own learning. First, some definitions.
Definition 1. A $d$-regular bipartite graph $(L,R,E)$, with $|L|=n$ and $|R|=m$, is called a $\emph{magical graph}$ if it satisfies the properties below. For a given set $S$ of vertices, we denote by $\Gamma(S)$ the set of vertices to which some vertex in $S$ is connected.

1. For every $S \subseteq L$ with $|S| \leq \frac{n}{10}$, $|\Gamma(S)| \geq \frac{5d}{8} |S|$.
2. For every $S \subseteq L$ with $\frac{n}{10} < |S| \leq \frac{n}{2}$, $|\Gamma(S)| \geq |S|$.

Definition 2. A superconcentrator is a graph $G=(V,E)$ with two given subsets $I, O \subseteq V$ with $|I|=|O|=n$, such that for every $S \subseteq I$ and $T \subseteq O$ with $|S|=|T|=k$, there are at least $k$ vertex-disjoint paths from $S$ to $T$.

Superconcentrators with $O(n)$ edges are interesting for various reasons which we do not go into here.

But we do give a construction of a superconcentrator with $O(n)$ edges from the magical graphs above. A simple probabilistic argument can show the following result.

Theorem. There exists a constant $n_0$ such that for every $d \geq 32$ and all $n, m$ with $n \geq n_0$ and $n \geq m \geq \frac{3n}{4}$, there is a magical graph with $|L|=n$ and $|R|=m$.

Here is the construction of a superconcentrator from magical graphs. The construction is recursive: assume that we can construct a superconcentrator with $O(m)$ edges whenever the input and output sets have size $m \leq n-1$. First take two copies of the magical graph, $(L_1,R_1,E_1)$ and $(L_2,R_2,E_2)$, with $|L_1|=|L_2|=n$ and $|R_1|=|R_2|=\frac{3n}{4}$. Connect every vertex of $L_1$ to the corresponding vertex of $L_2$, and add edges between $R_1$ and $R_2$ so that they become the input set $I$ and output set $O$ of a (recursively constructed) superconcentrator with input set of size $\frac{3n}{4}$. We claim that the resulting graph is a superconcentrator with input vertex set of size $n$.

Identify the input vertex set with $L_1$ and the output vertex set with $L_2$. For every $S \subseteq L_1$ with $|S| = k \leq \frac{n}{2}$ it is true that $|\Gamma(S) \cap R_1| \geq |S|$. Therefore by Hall’s marriage theorem there exists a perfect matching between the vertices of $S$ and a subset of $\Gamma(S) \cap R_1$. Similarly there exists a perfect matching between the vertices of $T \subseteq L_2$ and a subset of $\Gamma(T) \cap R_2$. Together with the paths of the recursive superconcentrator between input set $R_1$ and output set $R_2$, these matchings give at least $k$ disjoint paths between $S$ and $T$.

It remains to handle the case $k > \frac{n}{2}$. For $|S|=|T|=k > \frac{n}{2}$, there is a subset $U \subseteq S$ with $|U| \geq k - \frac{n}{2}$ such that the vertices corresponding to $U$ in $L_2$ lie in $T$. The direct edges between such corresponding vertices contribute $k - \frac{n}{2}$ disjoint paths between $S$ and $T$, and the remaining $\frac{n}{2}$ disjoint paths exist as proved earlier. Hence we have a superconcentrator with input and output vertex sets of size $n$.

How many edges are there in this graph? For the base case of the recursion, we let a superconcentrator with input and output sets of size $n \leq n_0$ be the complete bipartite graph, with at most $n_0^2$ edges. The following recursion counts the edges. Let $e(k)$ denote the number of edges in the superconcentrator as per this construction with input and output sets of size $k$. Then

$e(k) = k^2$ for $k \leq n_0$, and $e(k) = k + 2dk + e(\frac{3k}{4})$ for $k > n_0$. It can easily be seen that $e(n) \leq Cn$ for any $C \geq \max(n_0, 4(2d+1))$.
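The recursion is easy to tabulate. The sketch below uses $d = 32$ and an illustrative base-case threshold $n_0 = 100$ (not the true constant from the theorem) and checks $e(n) \leq Cn$ for $C = \max(n_0, 4(2d+1))$:

```python
def edges(k, d=32, n0=100):
    """Edge count of the recursive superconcentrator construction."""
    if k <= n0:
        return k * k  # base case: complete bipartite graph
    # k matching edges + two magical graphs (d*k edges each) + recursion
    return k + 2 * d * k + edges(3 * k // 4, d, n0)

d, n0 = 32, 100
C = max(n0, 4 * (2 * d + 1))
for n in [10, 500, 10_000, 1_000_000]:
    assert edges(n) <= C * n
print("e(n) <= C*n verified, C =", C)
```

The inductive step matches the hand calculation: $k(2d+1) + C \cdot \frac{3k}{4} \leq Ck$ exactly when $C \geq 4(2d+1)$.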