Class notes from May 23, 2016
EXPANDER GRAPHS
Idea: looking for sparse graphs that act like cliques.
Clique-like meaning:
- hard to break into two pieces
- a random walk from any vertex quickly gets you to an essentially uniform vertex
Sparse usually means O(n) edges
We look at a more stringent version, which is constant max degree
And in this lecture, we'll actually just look at d-regular graphs. Imagine d to be like, 10
For instance, a cycle is a 2-regular graph.
It's not at all obvious that there should be an O(1)-regular graph that acts like a clique!
A major takeaway from today is the fact that they do exist.
Definition 1: A graph G is an alpha-expander if for all subsets S subset V of at most 1/2 the nodes,
|N(S) \ S| >= alpha * |S|
Recall N(S) is the neighborhood of S, or the set of vertices that share an edge with some vertex in S.
You can think of this as the condition in Hall's theorem, with a bunch of slack.
Note that (as usual), we're implicitly thinking of an infinite family of graphs,
one for each n, and by "constant d" we mean a d that is independent of n.
Similarly, we want alpha to be a constant not dependent on n.
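Definition 1 can be checked directly on tiny graphs by brute force (a sketch, not from the lecture; it enumerates every subset, so it is only feasible for very small n):

```python
# Brute-force vertex expansion per Definition 1: the worst ratio
# |N(S) \ S| / |S| over all subsets S with |S| <= n/2.
from itertools import combinations

def vertex_expansion(adj):
    """adj: dict mapping each vertex to its set of neighbors."""
    n = len(adj)
    verts = list(adj)
    best = float('inf')
    for k in range(1, n // 2 + 1):
        for S in combinations(verts, k):
            S = set(S)
            # N(S) \ S: neighbors of S that are outside S
            boundary = set().union(*(adj[v] for v in S)) - S
            best = min(best, len(boundary) / len(S))
    return best

# 4-cycle: the worst S is two adjacent vertices, with boundary of size 2
cycle4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(vertex_expansion(cycle4))  # 1.0
```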
Question: Given a d-regular graph, what's the largest alpha can be?
Answer: trivially at most d, but really at most 1: when S is exactly half the graph, N(S) \ S fits
inside the other half, so |N(S) \ S| <= |S|.
Question: Given some alpha, what is the complexity of determining whether a
graph is an alpha-expander? Answer: coNP-complete.
Observation: Definition 1 is where the term "expander" comes from.
History: Expander graphs originated in the design of telecommunication networks, in the 1960s?
Probably never ended up applied there, though. The expansion property means expander graphs are
very robust to failures (of edges or nodes).
Definition 2: We will define a rho-expander now, but it will take a while.
Let A be the adjacency matrix of G. It is an n x n matrix, where the rows and columns are
indexed by the vertices. A[i,j] is 1 if (i,j) is an edge in G, and 0 otherwise. We only look at
undirected graphs today, so A is symmetric. Also note that symmetric binary matrices are in
1-to-1 correspondence with undirected graphs, i.e. we haven't really done anything other
than rewrite the graph.
Recall:
If Ax = lx for a real number l, then we say x is an eigenvector with eigenvalue l.
(Usually l is written as lambda).
x is a vector indexed by vertices. Imagine the graph, with weights assigned to the vertices,
and the weights are collected in the vector x.
Then what is Ax?
For each vertex, set its weight to the sum of its neighbors' weights.
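This "sum of neighbors' weights" picture is easy to sanity-check on a small example (a sketch; the path graph here is my own choice, not from the lecture):

```python
# Path graph 0 - 1 - 2: multiplying the adjacency matrix by a weight
# vector replaces each vertex's weight with the sum of its neighbors'.
import numpy as np

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
x = np.array([5.0, 7.0, 9.0])

# vertex 0 gets x[1] = 7, vertex 1 gets x[0] + x[2] = 14, vertex 2 gets x[1] = 7
print(A @ x)  # [ 7. 14.  7.]
```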
Question: What's the biggest an eigenvalue could be in a d-regular graph? Answer: d
Question: What does it mean for an eigenvector to have eigenvalue d?
Answer: Every vertex has the same weight as its neighbors. Inductively, every vertex has the
same weight as every other vertex in its connected component.
Every d-regular graph has an eigenvector of eigenvalue d, namely the all 1s vector: (1, 1, .., 1)
In fact, the multiplicity of the eigenvalue d is the number of connected components of G. The
corresponding orthogonal eigenvectors are those that are 1 on some connected component, and 0
everywhere else.
Note that computing the multiplicity of the eigenvalue d is one line of matlab (e.g. via an
eigendecomposition, which is well-behaved here since A is symmetric). Pretty weird! This linear
algebraic computation (eigenvalues) is telling us something about graphs (connected components).
Example: A bipartite graph also has an eigenvalue of -d (weight 1 on the left vertices, -1 on
the right vertices). No eigenvalue can be less than -d, and a connected d-regular graph is
bipartite iff its smallest eigenvalue is -d.
Definition: The spectral gap of a d-regular graph is
d - |second biggest eigenvalue, by magnitude|
So the spectral gap is always non-negative, and it is 0 iff G is disconnected or bipartite.
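Both facts (multiplicity of d counts components; disconnected means gap 0) can be checked numerically, e.g. with numpy (a sketch; the two-triangles example is mine):

```python
# For a d-regular graph: multiplicity of eigenvalue d = number of
# connected components, and spectral gap = d - |second-largest eigenvalue|.
import numpy as np

def spectrum_facts(A, d):
    eigs = np.linalg.eigvalsh(A)                # A symmetric => real eigenvalues
    components = int(np.sum(np.isclose(eigs, d)))
    mags = np.sort(np.abs(eigs))[::-1]          # magnitudes, descending
    gap = d - mags[1]
    return components, gap

# Two disjoint triangles: 2-regular, 2 components
tri = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
A = np.block([[tri, np.zeros((3, 3))], [np.zeros((3, 3)), tri]])
comps, gap = spectrum_facts(A, 2)
print(comps, gap)  # 2 components, gap 0 (disconnected => gap 0)
```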
We're finally ready for Definition 2.
A graph G is a rho-expander if its spectral gap is at least rho, where rho is a constant greater than 0.
As with alpha and d, we're implicitly talking about a family of graphs of increasing size, and
by constant rho we mean a rho independent of n.
THEOREM: G is an alpha-expander iff it is a rho-expander.
The relationship between alpha and rho is interesting to some, but we won't worry about it
here. All we mean is that for any alpha there is some rho such that every alpha-expander is
a rho-expander, and vice-versa.
The proof is also interesting to some, and we won't worry about it here. If you're curious,
look up Cheeger's Inequality.
Note: Given a rho, it is poly-time computable whether G is a rho-expander! Remember the one
line of matlab from before. The alpha-expansion definition is often useful in proofs, and the
rho-expansion definition is what you'd use to check whether a graph is an expander or not.
Examples:
1. clique: alpha = 1, rho = n-2 (so an expander! Though not a sparse one).
2. cycle: alpha = 4/n, rho = O(1/n^2) (so not an expander)
You can check the alpha computations. The rho ones you'll have to take our word for it.
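You don't actually have to take our word for the rho values; they are easy to check numerically (a sketch; I use n = 9, an odd n, since an even cycle is bipartite and would have gap 0 for a different reason):

```python
# Spectral gaps of the clique and the cycle, per the definition above.
import numpy as np

def spectral_gap(A, d):
    mags = np.sort(np.abs(np.linalg.eigvalsh(A)))[::-1]
    return d - mags[1]

n = 9
clique = np.ones((n, n)) - np.eye(n)   # (n-1)-regular
cycle = np.zeros((n, n))
for i in range(n):
    cycle[i, (i + 1) % n] = cycle[i, (i - 1) % n] = 1   # 2-regular

print(spectral_gap(clique, n - 1))  # 7.0, i.e. n - 2
print(spectral_gap(cycle, 2))       # ~0.12, shrinking like 1/n^2
```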
Claim: Random walks mix fast in expanders.
We're going to think of our weight vector x as a probability distribution over the vertices.
We start at an initial vector x0, which is 1 somewhere and 0 everywhere else.
1 step of random walk is the mapping x -> (1/d) Ax
If we start from some initial point, how does it evolve?
We write x0 = sum_i a_i v_i (v_i is the ith eigenvector of A).
We can do this since A is symmetric and hence has a full set of orthogonal eigenvectors
(I think this is called the spectral theorem in linear algebra).
Then one step of the walk yields
(1/d) A x0 = sum_i a_i (l_i/d) v_i (where l_i is the eigenvalue corresponding to v_i),
and t steps of the walk yields
[(1/d) A]^t x0 = sum_i a_i (l_i/d)^t v_i.
Let's order the v_i so that l_1 >= l_2 >= l_3 >= .. >= l_n.
Then l_1 = d, so the first term in the summation is
a_1 (l_1/d)^t v_1 = a_1 v_1 = a_1 * (1, 1, .., 1)
If G is not bipartite or disconnected, the rest of the (l_i/d) are strictly less than 1 in
absolute value, so those terms go to 0 as t gets large!
Hence in a rho-expander, we're at an essentially uniformly random vertex after a number of steps
that is logarithmic in 1/(the error we're willing to tolerate), times a factor depending on the
spectral gap rho (a constant factor, if rho is constant).
Usually we want the error to be 1/poly(n) in each vertex, which means a log n dependence on n
if rho is constant.
Compare to an n^2-ish dependence on n for the cycle.
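The contrast is visible even on tiny graphs (a sketch; the walk x -> (1/d) A x is run on my own 9-vertex clique and cycle examples):

```python
# Run t steps of the walk from a single starting vertex and measure the
# largest deviation from the uniform distribution 1/n.
import numpy as np

def walk_dist_to_uniform(A, d, t):
    n = A.shape[0]
    x = np.zeros(n)
    x[0] = 1.0                      # start at vertex 0 with certainty
    for _ in range(t):
        x = A @ x / d               # one step of the random walk
    return np.abs(x - 1.0 / n).max()

n = 9
clique = np.ones((n, n)) - np.eye(n)
cycle = np.zeros((n, n))
for i in range(n):
    cycle[i, (i + 1) % n] = cycle[i, (i - 1) % n] = 1

print(walk_dist_to_uniform(clique, n - 1, 5))  # tiny: mixed in a few steps
print(walk_dist_to_uniform(cycle, 2, 5))       # still far from uniform
```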
Note: We used d-regular graphs only for the convenience of using the adjacency matrix, instead
of the similar but more complicated Laplacian matrix. But everything above carries through for
more general graphs.
APPLICATIONS:
1. Error correcting codes
2. Error reduction (complexity theory)
We'll talk about the second; error correcting codes are covered in CS 168 and CS 264.
Consider a randomized algorithm, which always runs in polynomial time, but sometimes is wrong.
We'll look at the case with 1 sided error:
If the answer is no, the algorithm always outputs no.
If the answer is yes, the algorithm says yes with at least 50% probability over its own randomness.
If the algorithm uses r random bits, we can get error 2^-t with rt random bits, by running the
algorithm t times.
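A toy simulation of this naive amplification (a sketch; the "algorithm" below is a hypothetical stand-in that says yes with probability exactly 1/2 on a yes-instance, not anything from the lecture):

```python
# One-sided error reduction by independent repetition: answer yes if ANY
# of t runs says yes. On a no-instance every run says no, so no-instances
# are never misclassified; on a yes-instance the error is 2^-t.
import random

def toy_yes_instance_run(rng):
    # hypothetical one-sided-error algorithm, on a yes-instance
    return rng.random() < 0.5

def amplified(rng, t):
    return any(toy_yes_instance_run(rng) for _ in range(t))

rng = random.Random(0)
trials, t = 10000, 10
errors = sum(not amplified(rng, t) for _ in range(trials))
print(errors / trials)   # around 2**-10, i.e. about 0.001
```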
Goal: reduce the randomness required, or equivalently, get a better error bound with the same
number of random bits. Important since if you can e.g. get the randomness really low (like
logarithmic), you can make the algorithm deterministic by brute-force checking all settings of
the random bits.
Let G be an expander on r-bit strings. Namely, we're imagining a graph with 2^r vertices.
It has exponential size, but we will never explicitly construct G.
Let r0 = a random r-bit vector (equivalently, a random vertex from G)
We do a random walk for t steps, which gives t more r-bit vectors, and we run the algorithm on each of them.
The number of random bits used is only r + t log d, instead of rt above.
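Just to make the arithmetic concrete (a sketch with illustrative values r = 100 and d = 10, which are my own choices):

```python
# Randomness budget: t independent seeds vs. one seed plus an
# expander walk, where each step costs ceil(log2(d)) bits.
from math import log2, ceil

r, d, t = 100, 10, 50
independent_runs = r * t                  # t fresh r-bit seeds
expander_walk = r + t * ceil(log2(d))     # one seed + t walk steps

print(independent_runs, expander_walk)    # 5000 vs 300
```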
Theorem (covered in graduate courses): Still get error 2^-Omega(t)!
Observation: We need this expander to be deterministically constructible (otherwise we'd use up
randomness just building it), and we need each step of the random walk to be computable in
polynomial time.
Called: Fully explicit expander constructions.
There's a 3-regular one (!)
Vertex set is 0, 1, .., P-1, where P is a prime.
Edges go from x to x-1, x+1, and 1/x mod P (0 has a self loop).
Not easy at all to prove that this is an expander.
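While the proof is hard, building the graph for a small prime and eyeballing its spectrum is easy (a sketch; it uses numpy and Python 3.8's three-argument pow for the modular inverse, and treats the self-loops as single adjacency-matrix entries):

```python
# The construction above: vertices 0..P-1, edges x ~ x-1, x+1, 1/x mod P,
# with a self loop at 0.
import numpy as np

def inverse_graph(P):
    A = np.zeros((P, P))
    for x in range(P):
        inv = pow(x, -1, P) if x != 0 else 0   # 0 gets a self loop
        for y in ((x - 1) % P, (x + 1) % P, inv):
            A[x, y] = A[y, x] = 1
    return A

P = 101
A = inverse_graph(P)
eigs = np.sort(np.abs(np.linalg.eigvalsh(A)))[::-1]
print(eigs[0], 3 - eigs[1])   # top eigenvalue close to 3, and a visible gap
```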
For whatever reason, historically the development of expanders started with easy to state
constructions that were hard to analyse. The 21st century has seen expanders that are harder to
state, but which more obviously satisfy the expander definition.