### Theses

### Bandwidth approximation of a restricted family of trees (Master's thesis)

We consider the NP-complete optimization problem
*Bandwidth*. Anupam Gupta gave an
*O*(log^{2.5}*n*) approximation algorithm for
trees, and showed that his algorithm has an approximation ratio of
*O*(log *n*) on *caterpillars*, trees composed
of a central path and paths emanating from it. We show that the same
approximation ratio is obtained on trees composed of a central path
and caterpillars emanating from it.

Our result relies on the following lemma. A sequence
*a*_{1}, …, *a*_{n} has
*thickness* Θ if the sum of any *d* consecutive
elements is at most *d*Θ, for
1≤*d*≤*n*. If a sequence has thickness Θ, then
the sequence obtained by ordering the elements in non-decreasing
order also has thickness Θ.
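This lemma is easy to check experimentally; the sketch below (helper names are ours, not from the thesis) verifies it on random sequences:

```python
import random

def has_thickness(seq, theta):
    """True iff every d consecutive elements sum to at most d * theta."""
    n = len(seq)
    return all(sum(seq[i:i + d]) <= d * theta
               for d in range(1, n + 1) for i in range(n - d + 1))

random.seed(0)
for _ in range(200):
    a = [random.randint(0, 20) for _ in range(8)]
    # the smallest valid theta is the maximum average over all windows
    theta = max(sum(a[i:i + d]) / d
                for d in range(1, 9) for i in range(9 - d))
    # the lemma: sorting in non-decreasing order preserves thickness
    assert has_thickness(sorted(a), theta)
```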

Download Slides BibTeX

@mastersthesis{filmus:thesis:2002,
author = {Yuval Filmus},
title = {Bandwidth approximation of a restricted family of trees},
school = {Weizmann Institute of Science},
year = {2002}
}

### Spectral methods in extremal combinatorics (PhD thesis)

Extremal combinatorics studies how large a collection of
objects can be if it satisfies a given set of restrictions. Inspired
by a classical theorem due to Erdős, Ko and Rado, Simonovits and
Sós posed the following problem: determine how large a
collection of graphs on the vertex set {1, …,*n*}
can be, if the intersection of any two of them contains a
triangle. They conjectured that the largest possible collection,
containing 1/8 of all graphs, consists of all graphs containing a
fixed triangle (a *triangle-star*). The first major
contribution of the thesis is a confirmation of this
conjecture. This result first appeared in our paper
*Triangle-intersecting families of graphs* with David Ellis and
Ehud Friedgut.

We prove the Simonovits–Sós conjecture in the
following strong form: the only triangle-intersecting families of
the maximal measure 1/8 are triangle-stars (*uniqueness*), and
every triangle-intersecting family of measure 1/8−*ε*
is *O*(*ε*)-close to a triangle-star
(*stability*). Our proof uses spectral methods (Hoffman's bound).

In order to prove the stability part of our theorem, we
utilize a structure theorem for Boolean functions on
{0,1}^{m} whose Fourier expansion is concentrated on
the first *t*+1 levels, due to Kindler and Safra. The second
major contribution of this thesis consists of two analogs of this
theorem for Boolean functions on *S*_{m} whose Fourier
expansion is concentrated on the first two levels. These results
appear in our papers *A quasi-stability result for dictatorships
in S_{n}* and *A stability result for balanced
dictatorships in S_{n}*, both with David Ellis and
Ehud Friedgut.

In the same way that the Kindler–Safra theorem is
useful for studying triangle-intersecting families, our structure
theorems are useful for studying intersecting families of
permutations, which are families in which any two permutations agree
on the image of at least one point. Using one of our theorems, we
give a simple proof of the following result of Ellis, Friedgut and
Pilpel: an intersecting family of permutations on
*S*_{m} of size
(1−*ε*)(*m*−1)! is
*O*(*ε*)-close to a double coset, a family which
consists of all permutations sending some point *i* to some
point *j*.

The thesis includes a detailed exposition of Friedgut's
paper On
the measure of intersecting families, uniqueness and stability,
a proof of the Ahlswede–Khachatrian theorem in the
*μ*_{p} setting, and a gentle introduction to the
representation theory of *S*_{n} from the point of view
of class functions.

Download BibTeX

@phdthesis{filmus:thesis:2013,
author = {Yuval Filmus},
title = {Spectral methods in extremal combinatorics},
school = {University of Toronto},
year = {2013}
}

### Combinatorics

### Triangle-intersecting families of graphs (with David Ellis
and Ehud Friedgut)

Journal of the European
Mathematical Society, Volume 14, Issue 3, 2012, pp. 841–885

A family of graphs is said to be triangle-intersecting if
the intersection of any two graphs in the family contains a
triangle. A conjecture of Simonovits and Sós from 1976 states
that the largest triangle-intersecting families of graphs on a fixed
set of *n* vertices are those obtained by fixing a specific
triangle and taking all graphs containing it, resulting in a family
containing 1/8 of all graphs.. We prove this conjecture and some generalizations (for example, we prove that the same is true of odd-cycle-intersecting families, and we obtain best possible bounds on the size of the family under different, not necessarily uniform, measures). We also obtain stability results, showing that almost-largest triangle-intersecting families have approximately the same structure.

The ArXiv version corrects a mistake in the proof of the
stability part of Corollary 2.3. The original proof assumed that the
approximating family *G* is odd-cycle-agreeing, and deduced
that *G* must be a triangle-junta from uniqueness. However,
showing that *G* is odd-cycle-agreeing requires a separate
argument, found in the new version.

An alternative presentation of the results in this paper
can be found in my PhD thesis.

ArXiv
Presentation 1
Presentation 2
BibTeX

@article{eff0,
author = {David Ellis and Yuval Filmus and Ehud Friedgut},
title = {Triangle-intersecting families of graphs},
journal = {Journal of the European Mathematical Society},
volume = {14},
number = {3},
year = {2012},
pages = {841--885}
}

### The weighted complete intersection theorem

JCTA, Volume 151, 2017, pp. 84–101

We provide a complete proof of the Ahlswede–Khachatrian theorem in the *μ*_{p} setting: for all values of *n*, *t* and *p*, we determine the maximum *μ*_{p}-measure of a *t*-intersecting family on *n* points, and describe all optimal families (except for a few exceptional parameter settings). Our proof is based on several different articles of Ahlswede and Khachatrian.
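A minimal sketch (helper names are ours) verifying on a toy case that a *t*-star, the family of all sets containing a fixed *t*-element kernel, is *t*-intersecting and has *μ*_{p}-measure exactly *p*^{t}:

```python
from itertools import combinations

def mu_p(family, n, p):
    """mu_p-measure: each point is included independently with probability p."""
    return sum(p ** len(S) * (1 - p) ** (n - len(S)) for S in family)

n, t, p = 5, 2, 0.3
kernel = frozenset(range(t))
# t-star: all subsets of [n] containing the fixed t-element kernel
star = [frozenset(S) for k in range(n + 1)
        for S in combinations(range(n), k) if kernel <= frozenset(S)]
# the star is t-intersecting: any two members share at least t points
assert all(len(A & B) >= t for A in star for B in star)
# its measure is exactly p**t
assert abs(mu_p(star, n, p) - p ** t) < 1e-12
```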

The full version below includes more details, as well as some follow-up work (see next item).

Full version arXiv

Journal Manuscript BibTeX

@article{filmus6,
author = {Yuval Filmus},
title = {The weighted complete intersection theorem},
journal = {Journal of Combinatorial Theory, Series A},
volume = {151},
pages = {84--101},
year = {2017}
}

### More complete intersection theorems

Submitted

We extend the weighted Ahlswede–Khachatrian theorem (see previous item) to several new settings.

First, we extend the theorem to families on infinitely many points, showing that for most settings of the parameters, no new behavior is encountered.

Second, we extend the theorem to the Hamming scheme ℤ_{m}^{n}, in which a family is *t*-intersecting if any two vectors have *t* coordinates on which the values differ by less than *s* (this corresponds to *p*=*s*/*m*).

Third, we extend the theorem to the continuous analog of the Hamming scheme, a power of the unit circle, in which a family is *t*-intersecting if any two vectors have *t* coordinates on which the values differ by less than *p*.

This work is included in the full version of the preceding item.

Manuscript BibTeX

@misc{filmus7,
author = {Yuval Filmus},
title = {More complete intersection theorems},
howpublished = {Manuscript},
year = {2017}
}

### Twenty (simple) questions (with Yuval Dagan, Ariel Gabizon, and Shay Moran)

STOC 2017

Given a distribution *μ*, the goal of the distributional 20 questions game is to construct a strategy that identifies an unknown element drawn from *μ* using as few yes/no queries on average as possible. Huffman's algorithm constructs an optimal strategy, but the questions one has to ask can be arbitrary.
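For concreteness, here is a standard sketch of Huffman's construction (our own code, not from the paper), computing just the expected number of questions rather than the questions themselves:

```python
import heapq
from math import log2

def huffman_cost(mu):
    """Average number of yes/no questions used by an optimal
    Huffman strategy for the distribution mu."""
    heap = [(p, i) for i, p in enumerate(mu)]
    heapq.heapify(heap)
    cost = 0.0
    while len(heap) > 1:
        p, _ = heapq.heappop(heap)
        q, _ = heapq.heappop(heap)
        cost += p + q  # each merge adds one question for all leaves below it
        heapq.heappush(heap, (p + q, len(heap)))
    return cost

mu = [0.5, 0.25, 0.125, 0.125]
entropy = -sum(p * log2(p) for p in mu)
# classical bounds: H(mu) <= Huffman cost < H(mu) + 1
assert entropy <= huffman_cost(mu) < entropy + 1
```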

Given a parameter *n*, we ask how large a set of questions *Q* needs to be so that for each distribution supported on [*n*] there is a good strategy which uses only questions from *Q*.

Our first major result is that a linear number of questions (corresponding to binary comparison search trees) suffices to recover the *H*(*μ*)+1 performance of Huffman's algorithm. As a corollary, we deduce that the number of questions needed to guarantee a cost of at most *H*(*μ*)+*r* (for integer *r*) is asymptotic to *rn*^{1/r}.

Our second major result is that (roughly) 1.25^{n} questions are sufficient to match the performance of Huffman's algorithm exactly, and this is tight for infinitely many *n*.

We also determine the number of questions sufficient to match the performance of Huffman's algorithm up to an additive *r* to be *n*^{Θ(1/r)}.

Finally, we show that the set of questions used to obtain the bound *H*(*μ*)+1 performs better when the maximal probability of *μ* is small, bounding the performance between 0.5011 and 0.58607.

The full version contains all parts of the paper. We subsequently split the paper into two halves: the first containing most of the results, and the second (the next item) only the results concerning distributions whose maximal probability is small.

Preprint

arXiv
Full version

Extended Abstract BibTeX

@inproceedings{dfgm,
author = {Yuval Dagan and Yuval Filmus and Ariel Gabizon and Shay Moran},
title = {Twenty (simple) questions},
booktitle = {49th ACM Symposium on Theory of Computing (STOC 2017)},
year = {2017}
}

### Asymptotic redundancy and prolixity (with Yuval Dagan and Shay Moran)

Manuscript

Given a distribution *μ* and a set of queries *Q* (for example, all comparison queries), the cost of *Q* on *μ* is the average depth of a decision tree that locates an item in the support of *μ* using only queries from *Q*.

The cost of any set of queries on *μ* is always at least the entropy *H*(*μ*), and the *redundancy* of *Q* is the maximal gap between the cost of *Q* on a distribution and its entropy.

The *prolixity* of a set of queries is defined in the same way, with the entropy replaced by the cost of the optimal *unrestricted* decision tree.

The redundancy and prolixity of a set of queries are often achieved on degenerate distributions which have very low entropy, and this suggests studying their *asymptotic* variants, in which we consider distributions whose min-entropy tends to infinity.

Gallager showed that the asymptotic redundancy of *unrestricted* decision trees is approximately 0.086.

We obtain bounds on, and in some cases determine, the non-asymptotic and asymptotic redundancy and prolixity of several sets of queries, including comparison queries, comparison and equality queries, and interval queries.

The results in this paper are also contained in the full version of the preceding item.

Preprint BibTeX

@misc{dfm,
author = {Yuval Dagan and Yuval Filmus and Shay Moran},
title = {Asymptotic redundancy and prolixity},
howpublished = {Manuscript},
year = {2017}
}

### Analysis of Boolean functions

### A quasi-stability result for dictatorships in
*S*_{n} (with David Ellis and Ehud Friedgut)

Combinatorica, Volume 35, Issue 5, 2015, pages 573–618

We prove that Boolean functions on *S*_{n}
whose Fourier transform is highly concentrated on the first two
irreducible representations of *S*_{n} are close to
being unions of cosets of point-stabilizers. We use this to give a
natural proof of a stability result on intersecting families of
permutations, originally conjectured by Cameron and Ku, and first
proved by David Ellis. We also use it to prove a
‘quasi-stability’ result for an edge-isoperimetric
inequality in the transposition graph on *S*_{n},
namely that subsets of *S*_{n} with small edge-boundary
in the transposition graph are close to being unions of cosets of
point-stabilizers.

An alternative presentation of the main theorem (without
the second application) can be
found in my PhD thesis.

ArXiv
BibTeX

@article{eff1,
author = {David Ellis and Yuval Filmus and Ehud Friedgut},
title = {A quasi-stability result for dictatorships in {$S_n$}},
journal = {Combinatorica},
volume = {35},
number = {5},
pages = {573--618},
year = {2015}
}

### A stability result for balanced dictatorships in
*S*_{n} (with David Ellis and Ehud Friedgut)

Random Structures and
Algorithms, Volume 46, Issue 3, 2015, pp. 494–530

We prove that a balanced Boolean function on
*S*_{n} whose Fourier transform is highly concentrated
on the first two irreducible representations of *S*_{n}
is close in structure to a dictatorship, a function which is
determined by the image or pre-image of a single element. As a
corollary, we obtain a stability result concerning extremal
isoperimetric sets in the Cayley graph on *S*_{n}
generated by the transpositions.

Our proof works in the case where the expectation of the
function is bounded away from 0 and 1. In contrast, the preceding paper deals with Boolean functions of expectation *O*(1/*n*) whose Fourier transform is highly concentrated on the first two irreducible representations of *S*_{n}. These need not be close to dictatorships; rather, they must be close to a union of a constant number of cosets of point-stabilizers.

An alternative presentation of the main theorem can be
found in my PhD thesis.

ArXiv
BibTeX
Video

@article{eff2,
author = {David Ellis and Yuval Filmus and Ehud Friedgut},
title = {A stability result for balanced dictatorships in {$S_n$}},
journal = {Random Structures and Algorithms},
volume = {46},
number = {3},
year = {2015},
pages = {494--530}
}

### Low-degree Boolean functions
on *S*_{n}, with an application to isoperimetry (with David Ellis and Ehud Friedgut)

Submitted

We prove that Boolean functions on *S*_{n}
whose Fourier transform is highly concentrated on irreducible
representations indexed by partitions of *n* whose largest part
has size at least *n*−*t* are close to being unions
of cosets of stabilizers of *t*-tuples. We also obtain an
edge-isoperimetric inequality for the transposition graph on
*S*_{n} which is asymptotically sharp for sets of
measure 1/poly(*n*). We then combine both results to obtain a
best-possible edge-isoperimetric inequality for sets of size
(*n*−*t*)! where *n* is large compared to
*t*, confirming a conjecture of Ben-Efraim in these cases.

arXiv
BibTeX

@article{eff3,
author = {David Ellis and Yuval Filmus and Ehud Friedgut},
title = {Low-degree {B}oolean functions on {$S_n$}, with an application to isoperimetry},
journal = {Submitted}
}

### On the sum of L1 influences of bounded functions (with Hamed Hatami, Nathan Keller and Noam Lifshitz)

Israel Journal of Mathematics, Volume 214, Issue 1, 2016, pp. 167–192

It is well-known that if *f* is a Boolean function of degree *d* then its total influence is bounded by *d*. There are several ways of extending the definition of influence for non-Boolean functions. The usual way is to define the influence of the *i*th variable as the L2 norm of the discrete derivative in direction *i*. Under this definition, the total influence of a *bounded* function (bounded by 1 in magnitude) is still upper-bounded by the degree. Aaronson and Ambainis asked whether total *L1* influence can be bounded polynomially by the degree, and this was answered affirmatively by Bačkurs and Bavarian, who showed an upper bound of *O*(*d*^{3}) for general functions, and *O*(*d*^{2}) for homogeneous functions. We improve their results by giving an upper bound of *d*^{2} in the general case and *O*(*d* log *d*) in the homogeneous case. Our proofs are also much simpler. We also give an almost optimal bound for monotone functions, *d*/2*π* + *o*(*d*).
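To fix notation, a small sketch (our own code) computing the total influence of a Boolean function by counting pivotal inputs; for Boolean functions the L1 and L2 notions of influence coincide:

```python
from itertools import product

def total_influence(f, n):
    """Sum over coordinates i of Pr[f(x) != f(x with bit i flipped)]."""
    total = 0.0
    for i in range(n):
        pivotal = sum(f(x) != f(x[:i] + (1 - x[i],) + x[i + 1:])
                      for x in product((0, 1), repeat=n))
        total += pivotal / 2 ** n
    return total

maj3 = lambda x: int(x[0] + x[1] + x[2] >= 2)  # majority of 3 bits, degree 3
assert total_influence(maj3, 3) == 1.5         # 1.5 <= deg(maj3) = 3
```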

Journal
Preprint
ArXiv
BibTeX

@article{fhkl,
author = {Yuval Filmus and Hamed Hatami and Nathan Keller and Noam Lifshitz},
title = {Bounds on the sum of {L1} influences},
journal = {Israel Journal of Mathematics},
volume = {214},
number = {1},
year = {2016},
pages = {167--192}
}

### An orthogonal basis for functions over a slice of the Boolean
cube

Electronic Journal of Combinatorics

The Johnson and Kneser graphs are defined on the
*k*-sets of [*n*], a vertex set known as a *slice of the
Boolean cube*. Two sets are connected in the Johnson
graph if they have Hamming distance two, and in the Kneser graph if
they are disjoint. Both graphs belong to the Bose–Mesner algebra of
the Johnson association scheme; this just means that whether an edge
connects two sets *S*,*T* depends only on
|*S*∩*T*|.
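These definitions can be spelled out in a few lines (a toy illustration of ours, using the smallest interesting slice):

```python
from itertools import combinations
from math import comb

n, k = 5, 2
slice_nk = [frozenset(S) for S in combinations(range(n), k)]
S0 = slice_nk[0]

# Johnson graph: S ~ T iff |S ∩ T| = k - 1 (Hamming distance two);
# Kneser graph:  S ~ T iff S ∩ T = ∅.  Adjacency depends only on |S ∩ T|.
johnson_deg = sum(1 for T in slice_nk if len(S0 & T) == k - 1)
kneser_deg = sum(1 for T in slice_nk if len(S0 & T) == 0)

assert johnson_deg == k * (n - k)    # both graphs are regular
assert kneser_deg == comb(n - k, k)  # K(5, 2) is the Petersen graph
```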

All graphs belonging to the Bose–Mesner algebra have the same
eigenspaces, and these are well-known, arising from certain
representations of the symmetric group. The multiplicity of the
*k*th eigenspace is rather large,
*C*(*n*,*k*)−*C*(*n*,*k*−1). As
far as we can tell, prior to our work no explicit orthogonal basis
for these eigenspaces had been exhibited.

We present a simple orthogonal basis for the eigenspaces
of the Bose–Mesner algebra of the Johnson association scheme,
arising from Young's orthogonal basis for the symmetric group. Our
presentation is completely elementary and makes no mention of the
symmetric group.

As an application, we restate Wimmer's proof of Friedgut's
theorem for the slice. The original proof makes heavy use of
computations over the symmetric group. We are able to do these
computations directly in the slice using our basis.

**Update:** Qing Xiang and his student Rafael Plaza pointed out that the same basis has been constructed by Murali K. Srinivasan in his paper Symmetric chains, Gelfand–Tsetlin chains, and the Terwilliger algebra of the binary Hamming scheme.

Journal
Preprint
ArXiv
BibTeX

@article{filmus4,
author = {Yuval Filmus},
title = {Orthogonal basis for functions over a slice of the
{B}oolean hypercube},
journal = {Electronic Journal of Combinatorics},
volume = {23},
number = {1},
year = {2016},
pages = {P1.23}
}

### Friedgut–Kalai–Naor theorem for slices of the Boolean cube

Chicago Journal of Theoretical Computer Science

The Friedgut–Kalai–Naor theorem is a fundamental result in the analysis of Boolean functions. It states that if a Boolean function *f* is close to an affine function, then *f* is close to an affine *Boolean* function, which must depend on at most one coordinate. We prove an analog of this theorem for slices of the Boolean cube (a slice consists of all vectors having a given Hamming weight). In the small error regime, our theorem shows that *f* is close to a function depending on at most one coordinate, and in general we show that *f* or its negation is close to a *maximum* of a small number of coordinates (this corresponds to a union of *stars*, families consisting of all elements containing some fixed element).

Preprint
ArXiv
Journal
BibTeX

@article{filmus5,
author = {Yuval Filmus},
title = {Friedgut--{K}alai--{N}aor theorem for slices of the {B}oolean cube},
journal = {Chicago Journal of Theoretical Computer Science},
year = {2016},
pages = {14:1--14:17}
}

### Invariance principle on the slice (with Guy Kindler, Elchanan Mossel and Karl Wimmer)

CCC 2016

The classical invariance principle of Mossel, O'Donnell and Oleszkiewicz states that the distribution of low-degree, low-influence multilinear polynomials under Bernoulli random variables is similar to their distribution under Gaussian random variables with the same expectation and variance.

We prove an invariance principle for functions on the slice (all vectors in the Boolean cube having a fixed Hamming weight). The main difficulty is that the variables are no longer independent.

As corollaries, we prove a version of *majority is stablest*, a *Bourgain tail bound*, and a weak version of the *Kindler–Safra theorem*. The Kindler–Safra theorem implies a stability result for *t*-intersecting families along the lines of Friedgut.

In follow-up work (see below), we improve the invariance principle by removing the condition of low influences (when appropriate).

Preprint
ArXiv
BibTeX

@inproceedings{fkmw,
author = {Yuval Filmus and Guy Kindler and Elchanan Mossel and Karl Wimmer},
title = {Invariance principle on the slice},
booktitle = {31st Computational Complexity Conference},
year = {2016}
}

### Harmonicity and invariance on slices of the Boolean cube (with Elchanan Mossel)

CCC 2016

The classical invariance principle of Mossel, O'Donnell and Oleszkiewicz states that the distribution of low-degree, low-influence multilinear polynomials under a product distribution essentially depends only on the first two moments of this distribution.

In recent work with Kindler and Wimmer (see above), we extended this to harmonic multilinear polynomials on the slice. In that work we proved invariance with respect to the following three distributions: the uniform distribution on a slice, the matching skewed distribution on the Boolean cube, and the corresponding Gaussian slice (or Gaussian space; the distributions are the same). That invariance principle requires the function to have low degree and low influences.

While the condition of low influences is necessary when comparing discrete distributions to Gaussian space (consider the polynomial *x*_{1}), such a condition is no longer necessary when comparing the uniform distribution on a slice to the matching skewed distribution on the Boolean cube. In this paper we prove an invariance principle for these two distributions without any condition on the influences. Using the classical invariance principle, we can easily derive the more general invariance principle that we had proved with Kindler and Wimmer. Our new proof is completely different, and uses a martingale approach.

Complementing the invariance principle, we reprove several properties of harmonic multilinear polynomials. While most of these properties have been proven earlier in my paper using an explicit orthogonal basis for the slice, the proofs appearing in this paper are much simpler and do not require the basis. We hope that the new proofs are easier to generalize to other settings.

Preprint
ArXiv
BibTeX

@inproceedings{fm,
author = {Yuval Filmus and Elchanan Mossel},
title = {Harmonicity and invariance on slices of the {B}oolean cube},
booktitle = {31st Computational Complexity Conference},
year = {2016}
}

### Computational complexity

### The complexity of the comparator circuit value problem (with
Stephen A. Cook and Dai Lê)

TOCT

In 1990 Subramanian defined the complexity class CC as the
set of problems log-space reducible to the comparator circuit value
problem (CCV). He and Mayr showed that NL⊆CC⊆P, and
proved that in addition to CCV several other problems are complete
for CC, including the stable marriage problem, and finding the
lexicographically first maximal matching in a bipartite
graph. Although the class has not received much attention since
then, we are interested in CC because we conjecture that it is
incomparable with the parallel class NC, which also satisfies
NL⊆NC⊆P; this would imply that CC-complete problems do not have
efficient polylog-time parallel algorithms. We provide evidence for
our conjecture by giving oracle settings in which relativized CC and
relativized NC are incomparable.

We give several alternative definitions of CC, including
(among others) the class of problems computed by uniform
polynomial-size families of comparator circuits supplied with copies
of the input and its negation, the class of problems
AC^{0}-reducible to CCV, and the class of problems computed
by uniform AC^{0} circuits with CCV gates. We also give a
machine model for CC which corresponds to its characterizations as
log-space uniform polynomial-size families of comparator
circuits. The various characterizations show that CC is a robust
class. Our techniques also show that the corresponding function
class FCC is closed under composition. The main technical tool we
employ is universal comparator circuits.
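For readers unfamiliar with the model, a comparator circuit is easy to simulate (a sketch of ours: each comparator gate replaces the values on its two wires by their AND and OR):

```python
def eval_comparator_circuit(inputs, gates):
    """Evaluate a comparator circuit on Boolean wires: each gate (i, j)
    replaces the values on wires i and j by their minimum (AND) and
    maximum (OR), respectively."""
    wires = list(inputs)
    for i, j in gates:
        wires[i], wires[j] = wires[i] & wires[j], wires[i] | wires[j]
    return wires

# three wires, two comparator gates
assert eval_comparator_circuit([1, 0, 1], [(0, 1), (1, 2)]) == [0, 1, 1]
# enough comparators sort the 0/1 values: comparator circuits
# generalize sorting networks
assert eval_comparator_circuit([1, 1, 0], [(0, 1), (1, 2), (0, 1)]) == [0, 1, 1]
```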

Other results include a simpler proof of NL⊆CC, a
more careful analysis showing that the lexicographically first
maximal matching problem and its variants are CC-complete under
AC^{0} many-one reductions, and an explanation of the
relation between the Gale–Shapley algorithm and Subramanian's
algorithm for stable marriage.

This paper continues previous work of Cook, Lê
and Ye which focused on Cook–Nguyen style uniform proof
complexity, answering several open questions raised in that paper.

The preprint contains more results than the ArXiv version,
and the presentation is different.

Preprint ArXiv
BibTeX

@article{cfl,
author = {Stephen A. Cook and Yuval Filmus and Dai L\^e},
title = {The complexity of the comparator circuit value problem},
journal = {ACM Transactions on Computation Theory},
volume = {6},
number = {4},
articleno = {15},
pages = {15:1--15:44},
year = {2014}
}

### Average case lower bounds for monotone switching networks
(with Toniann Pitassi, Robert Robere and Stephen A. Cook)

FOCS 2013

An approximate computation of a Boolean function *f* by a
circuit or switching network *M* is a computation in which
*M* computes *f* correctly on the majority of the inputs
(rather than on all of them). Besides being interesting in their own
right, lower bounds for approximate computation have proved useful
in many subareas of complexity theory such as cryptography and
derandomization. Lower bounds for approximate computation are also
known as *correlation bounds* or *average case hardness*.

We obtain the first average case monotone
depth lower bounds for a function in monotone P. We tolerate errors
up to
1/2−1/*n*^{1/3−δ}. Specifically, we
prove average case exponential lower bounds on the size of monotone
switching networks for the GEN function. As a corollary, we
establish that for every *i* there are functions that can be
computed with no error in monotone NC^{i+1} but that
cannot be computed without large error by monotone circuits in
NC^{i}. We provide a similar separation between
monotone NC and monotone P.

Our proof extends and simplifies the Fourier-analytic
technique due to Potechin and
further developed by Chan and
Potechin.

As a corollary of our main lower bound, we prove that the
communication complexity approach for monotone depth lower bounds
does not naturally generalize to the average case setting.

Preprint
ECCC
FOCS
BibTeX

@inproceedings{fprc,
author = {Yuval Filmus and Toniann Pitassi and Robert Robere and
Stephen A. Cook},
title = {Average case lower bounds for monotone switching networks},
booktitle = {The 54th Annual Symposium on Foundations of Computer
Science ({FOCS} 2013)},
pages = {598--607},
year = {2013}
}

### Fast matrix multiplication: limitations of the laser method (with Andris
Ambainis and François Le Gall)

STOC 2015

Coppersmith and Winograd gave an
*O*(*n*^{2.376}) algorithm for matrix
multiplication in 1990. Their algorithm relies on an identity known
as the Coppersmith–Winograd identity. Analyzing the identity
as-is using Strassen's laser method and an ingenious construction,
Coppersmith and Winograd obtained an
*O*(*n*^{2.388}) algorithm. The tensor square of
the basic identity leads to the improved algorithm.

Recently there has been a surge of activity in the
area. Stothers, Vassilevska-Williams and Le Gall studied higher
and higher tensor powers of the basic identity, culminating in
Le Gall's *O*(*n*^{2.3728639}) algorithm. How
far can this approach go?

We describe a framework, *laser method with merging*,
which encompasses all the algorithms just described, and is at once
more general and amenable to analysis. We show that taking the
*N*th tensor power for an arbitrary *N* cannot obtain an
algorithm with running time *O*(*n*^{2.3725}) for
the exact identity used in state-of-the-art algorithms.

Preprint
ECCC
ArXiv
BibTeX

@inproceedings{aflg,
author = {Ambainis, Andris and Filmus, Yuval and Le Gall, Fran\c{c}ois},
title = {Fast matrix multiplication: limitations of the laser method},
booktitle = {47th Annual Symposium on the Theory of Computing ({STOC} 2015)},
year = {2015},
pages = {585--593}
}

### Communication complexity

### Trading information complexity for error (with Yuval Dagan, Hamed Hatami, and Yaqiao Li)

CCC 2017

We discuss the following general question: How does the information complexity of a function change if we allow non-negligible error?

Our answers have implications to the communication complexity of set disjointness with non-negligible error.

ArXiv
ECCC
Preprint
BibTeX

@inproceedings{dfhl,
author = {Yuval Dagan and Yuval Filmus and Hamed Hatami and Yaqiao Li},
title = {Information complexity with non-negligible error},
booktitle = {32nd Conference on Computational Complexity (CCC 2017)},
year = {2017}
}

### Information complexity of the AND function in the two-party and multiparty settings (with Hamed Hatami, Yaqiao Li, and Suzin You)

COCOON'17

We find the optimal protocol (from the point of view of information complexity) for the AND function in a multiparty setting, extending the buzzer protocol described by Braverman et al.

We also fill in some gaps in the arguments of Braverman et al.

ArXiv
BibTeX

@inproceedings{fhly,
author = {Yuval Filmus and Hamed Hatami and Yaqiao Li and Suzin You},
title = {Information complexity of the {AND} function in the two-party, and multiparty settings},
booktitle = {23rd annual international computing and combinatorics conference (COCOON'17)},
year = {2017}
}

### Proof complexity

### Exponential lower bounds for AC^{0}-Frege imply
superpolynomial Frege lower bounds (with Toniann Pitassi and Rahul
Santhanam)

ICALP 2011, ToCT 2015

We give a general transformation which turns
polynomial-size Frege proofs to subexponential-size
AC^{0}-Frege proofs. This indicates that proving exponential
lower bounds for AC^{0}-Frege is hard, since it is a
longstanding open problem to prove super-polynomial lower bounds for
Frege. Our construction is optimal for tree-like proofs.

As a consequence of our main result, we are able to shed
some light on the question of weak automatizability for
bounded-depth Frege systems. First, we present a simpler proof of
the results of Bonet et al. showing that under cryptographic
assumptions, bounded-depth Frege proofs are not weakly
automatizable. Second, we show that because our proof is more
general, under the right cryptographic assumptions it could resolve
the weak automatizability question for lower-depth Frege systems.

The proceedings version contains several small mistakes, corrected in the preprint version. These mistakes slightly affect the constants in the main theorem.

Preprint
ICALP
Slides
BibTeX

@inproceedings{fps-icalp,
author = {Yuval Filmus and Toniann Pitassi and Rahul Santhanam},
title = {Exponential lower bounds for {AC$^0$}-{F}rege imply
superpolynomial {F}rege lower bounds},
booktitle = {The 38th International Colloquium on Automata,
Languages and Programming ({ICALP} 2011)},
year = {2011},
pages = {618--629}
}
@article{fps-toct,
author = {Yuval Filmus and Toniann Pitassi and Rahul Santhanam},
title = {Exponential lower bounds for {AC$^0$}-{F}rege imply
superpolynomial {F}rege lower bounds},
journal = {ACM Transactions on Computation Theory},
volume = {7},
number = {2},
year = {2015},
pages = {article 5}
}

### Space complexity in polynomial calculus (with Massimo
Lauria, Jakob Nordström, Neil Thapen and Noga Zewi)

CCC 2012, SICOMP 2015

During the last decade, an active line of research in
proof complexity has been to study space complexity and time-space
trade-offs for proofs. Besides being a natural complexity measure of
intrinsic interest, space is also an important issue in SAT solving,
and so research has mostly focused on weak systems that are used by
SAT solvers.

There has been a relatively long sequence of papers on
space in resolution, which is now reasonably well understood from
this point of view. For other natural candidates to study, however,
such as polynomial calculus or cutting planes, very little has been
known. We are not aware of any nontrivial space lower bounds for
cutting planes, and for polynomial calculus the only lower bound has
been for CNF formulas of unbounded width in Alekhnovich
et al., where the space lower bound is smaller than the initial
width of the clauses in the formulas. Thus, in particular, it has
been consistent with current knowledge that polynomial calculus
could be able to refute any *k*-CNF formula in constant space.

We prove several new results on space in polynomial
calculus (PC), and in the extended proof system polynomial calculus
resolution (PCR) studied by Alekhnovich et al.:

- We prove an Ω(*n*) space lower bound in PC for the canonical 3-CNF version of the pigeonhole principle formulas with *m* pigeons and *n* holes, and show that this is tight.
- For PCR, we prove an Ω(*n*) space lower bound for a bitwise encoding of the functional pigeonhole principle. These formulas have width *O*(log *n*), and hence this is an exponential improvement over Alekhnovich et al. measured in the width of the formulas.
- We then present another encoding of the pigeonhole principle that has constant width, and prove an Ω(*n*) space lower bound in PCR for these formulas as well.
- Finally, we prove that any *k*-CNF formula can be refuted in PC in simultaneous exponential size and linear space (which holds for resolution and thus for PCR, but was not obviously the case for PC). We also characterize a natural class of CNF formulas for which the space complexity in resolution and PCR does not change when the formula is transformed into 3-CNF in the canonical way, something that we believe can be useful when proving PCR space lower bounds for other well-studied formula families in proof complexity.

ECCC Journal
BibTeX

@inproceedings{flntz,
author = {Yuval Filmus and Massimo Lauria and Jakob Nordstr\"om and
Neil Thapen and Noga Zewi},
title = {Space complexity in polynomial calculus},
booktitle = {The 27th Annual Conference on Computational Complexity ({CCC} 2012)},
year = {2012}
}
@article{flntz-journal,
author = {Yuval Filmus and Massimo Lauria and Jakob Nordstr\"om and
Neil Thapen and Noga Zewi},
title = {Space complexity in polynomial calculus},
journal = {SIAM J. Comput.},
volume = {44},
number = {4},
pages = {1119--1153},
year = {2015}
}

### Towards an understanding of Polynomial Calculus: new
separations and lower bounds (with Massimo Lauria, Mladen
Mikša, Jakob Nordström and Marc Vinyals)

ICALP 2013

During the last decade, an active line of research in
proof complexity has been into the space complexity of proofs, and
how space is related to other measures. By now, these aspects of
resolution are fairly well understood, but many open problems remain
for the related but stronger polynomial calculus (PC/PCR) proof
system. For instance, the space complexity of many standard
“benchmark formulas” is still open, as well as the
relation of space to size and degree in PC/PCR.

We prove that if a formula requires large resolution width
then making XOR substitution yields a formula requiring large PCR
space, providing some circumstantial evidence that degree might be a
lower bound for space. More importantly, this immediately yields
formulas that are very hard for space but very easy for size,
exhibiting a size-space separation similar to what is known for
resolution. Using related ideas, we show that if a graph has good
expansion and in addition its edge set can be partitioned into short
cycles, then the Tseitin formula over this graph requires large PCR
space. In particular, Tseitin formulas over random 4-regular graphs
almost surely require space at least
Ω(*n*^{1/2}).

Our proofs use techniques recently introduced by
Bonacina and
Galesi. Our final contribution is to show that
these techniques provably cannot yield non-constant space lower
bounds for the functional pigeonhole principle, delineating the
limitations of this framework, and suggesting that we are still far
from characterizing PC/PCR space.

Download
BibTeX

@inproceedings{flmnv1,
author = {Yuval Filmus and Massimo Lauria and Mladen Mik\v{s}a and
Jakob Nordstr\"om and Marc Vinyals},
title = {Towards an understanding of {P}olynomial {C}alculus: new
separations and lower bounds},
booktitle = {The 40th International Colloquium on Automata,
Languages and Programming ({ICALP} 2013)},
year = {2013},
pages = {437--448}
}

### From small space to small width in resolution (with Massimo
Lauria, Mladen Mikša, Jakob Nordström and Marc
Vinyals)

STACS 2014

In 2003, Atserias and Dalmau resolved a major open
question about the resolution proof system by establishing that the
space complexity of formulas is always an upper bound on the width
needed to refute them. Their proof is beautiful but somewhat
mysterious in that it relies heavily on tools from finite model
theory. We give an alternative, completely elementary, proof that
works by simple syntactic manipulations of resolution
refutations. As a by-product, we develop a "black-box" technique for
proving space lower bounds via a "static" complexity measure that
works against any resolution refutation — previous techniques have been inherently adaptive. We conclude by showing that the related question for polynomial calculus (i.e., whether space is an upper bound on degree) seems unlikely to be resolvable by similar methods.

Download
BibTeX

@inproceedings{flmnv2,
author = {Yuval Filmus and Massimo Lauria and Mladen Mik\v{s}a and
Jakob Nordstr\"om and Marc Vinyals},
title = {From small space to small width in resolution},
booktitle = {The 31st Symposium on Theoretical Aspects of Computer
Science ({STACS} 2014)},
year = {2014},
pages = {300--311}
}

### On the Alekhnovich–Razborov degree lower bound for
the polynomial calculus

Preprint

Alekhnovich
and Razborov (*Lower bounds for polynomial calculus: non-binomial
case*) gave a sophisticated degree lower bound for proofs in the
polynomial calculus proof system. However, their proof is somewhat
opaque. We present a different, more intuitive proof. We also adapt
our proof to obtain related results of Galesi and
Lauria.

Work done while visiting KTH in December 2012.

Download
BibTeX

@unpublished{filmus0,
author = {Yuval Filmus},
title = {On the {A}lekhnovich--{R}azborov degree lower bound for the polynomial calculus},
note = {Preprint}
}

### Semantic versus syntactic cutting planes (with Pavel Hrubeš and Massimo Lauria)

STACS 2016

Cutting planes is a proof system in which lines are linear inequalities. It has two main variants: syntactic cutting planes, in which specific derivation rules are given, and semantic cutting planes, in which any bounded fan-in derivation which is semantically correct (for all zero-one assignments to variables) is allowed. Only the syntactic version is a Cook–Reckhow proof system, since verifying a semantic cutting planes proof is coNP-complete.
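To make "specific derivation rules" concrete, here is a minimal sketch of one such syntactic rule, the division (Chvátal–Gomory rounding) rule; the function name is ours, and this is an illustration of the rule, not code from the paper:

```python
from math import gcd

def divide(coeffs, b):
    """Division rule of syntactic cutting planes: from
    sum a_i x_i >= b, with d the gcd of the a_i, derive
    sum (a_i/d) x_i >= ceil(b/d).
    Sound over 0-1 assignments, since the left-hand side is
    always a multiple of d after division."""
    d = 0
    for a in coeffs:
        d = gcd(d, a)
    assert d > 0, "need at least one nonzero coefficient"
    return [a // d for a in coeffs], -(-b // d)  # ceiling division

# divide([2, 4], 3) → ([1, 2], 2), i.e. 2x + 4y >= 3 yields x + 2y >= 2.
```

Checking one application of a syntactic rule like this is trivial; it is verifying an arbitrary *semantically* sound inference that is coNP-complete.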

Extending earlier work of Pudlák, we give an exponential lower bound for semantic cutting planes.

We also show that semantic cutting planes is exponentially stronger than syntactic cutting planes, and exhibit two contradictory lines which take exponentially long to refute in syntactic cutting planes.

This work is a combination of two earlier preprints: a preprint of Pavel Hrubeš proving the exponential lower bound for semantic cutting planes, and a preprint of Massimo Lauria and myself proving the exponential separation between semantic and syntactic cutting planes.

Download
BibTeX

@inproceedings{fhl,
author = {Yuval Filmus and Pavel Hrube\v{s} and Massimo Lauria},
title = {Semantic versus syntactic cutting planes},
booktitle = {33rd Symposium on Theoretical Aspects of Computer Science (STACS 2016)},
year = {2016}
}

### Approximation algorithms

### Maximum coverage over a matroid (with Justin Ward)

STACS 2012

We present an optimal, combinatorial 1−1/*e*
approximation algorithm for Maximum Coverage over a matroid
constraint, using non-oblivious local search. Calinescu,
Chekuri, Pál and Vondrák have given an optimal
1−1/*e* approximation algorithm for the more general
problem of monotone submodular maximization over a matroid
constraint. The advantage of our algorithm is that it is entirely
combinatorial, and in many circumstances also faster, as well as
conceptually simpler.

Following previous work on satisfiability problems by Alimonti
and by Khanna,
Motwani, Sudan and Vazirani, our local search algorithm is
*non-oblivious*. That is, our algorithm uses an auxiliary
linear objective function to evaluate solutions. This function gives
more weight to elements covered multiple times. We show that the
locality ratio of the resulting local search procedure is at least
1−1/*e*. Our local search procedure only considers
improvements of size 1. In contrast, we show that oblivious local
search, guided only by the problem's objective function, achieves an
approximation ratio of only
(*n*−1)/(2*n*−1−*k*) when
improvements of size *k* are considered.
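As a toy illustration of the oblivious/non-oblivious distinction, here is a single-swap local search over a cardinality constraint, run either on the coverage objective itself or on an auxiliary potential that rewards multiply-covered elements. The weights in `potential` are made up for illustration; they are not the coefficients from the paper:

```python
def coverage(sets, solution):
    """Oblivious objective: number of distinct elements covered."""
    covered = set()
    for i in solution:
        covered |= sets[i]
    return len(covered)

def potential(sets, solution, weights=(0.0, 1.0, 1.5)):
    """Non-oblivious objective: an element covered t times contributes
    weights[min(t, len(weights)-1)]. The extra credit for multiple
    coverage is the idea being illustrated; the weights are ours."""
    count = {}
    for i in solution:
        for e in sets[i]:
            count[e] = count.get(e, 0) + 1
    return sum(weights[min(t, len(weights) - 1)] for t in count.values())

def local_search(sets, k, objective):
    """Size-1 swaps under a cardinality-k (uniform matroid) constraint,
    guided by the supplied objective."""
    solution = list(range(k))  # arbitrary initial solution
    improved = True
    while improved:
        improved = False
        for pos in range(k):
            for j in range(len(sets)):
                if j in solution:
                    continue
                candidate = solution[:pos] + [j] + solution[pos + 1:]
                if objective(sets, candidate) > objective(sets, solution):
                    solution, improved = candidate, True
    return solution
```

For example, with `sets = [{1,2,3}, {1,2}, {4}, {5}]` and `k = 2`, coverage-guided search reaches a solution covering 4 elements.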

In general, our local search algorithm could take an
exponential amount of time to converge to an *exact* local
optimum. We address this situation by using a combination of
*approximate* local search and the same partial enumeration
techniques used by Calinescu et al., resulting in a clean
(1−1/*e*)-approximation algorithm running in polynomial
time.

We obtained our auxiliary linear objective function using
linear programming. This is detailed in Ward's thesis.

Download Slides 1 Slides 2
BibTeX

@inproceedings{fw1,
author = {Yuval Filmus and Justin Ward},
title = {Maximum coverage over a matroid},
booktitle = {29th Symposium on Theoretical Aspects of Computer
Science ({STACS} 2012)},
year = {2012},
pages = {601--612}
}

### A tight combinatorial algorithm for submodular maximization
subject to a matroid constraint (with Justin Ward)

FOCS 2012, SICOMP

We present an optimal, combinatorial 1−1/*e*
approximation algorithm for monotone submodular optimization over a
matroid constraint. Compared to the continuous greedy algorithm due
to Calinescu,
Chekuri, Pál and Vondrák, our algorithm is
extremely simple and requires no rounding. It consists of the greedy
algorithm followed by local search. Both phases are run not on the
actual objective function, but on a related auxiliary potential
function, which is also monotone submodular.

In our previous work on maximum coverage (the preceding
paper), the potential function gives more weight to elements covered
multiple times. We generalize this approach from coverage functions
to arbitrary monotone submodular functions. When the objective
function is a coverage function, both definitions of the potential
function coincide.

Our approach generalizes to the case where the monotone
submodular function has restricted curvature. For any curvature
*c*, we adapt our algorithm to produce a
(1−*e*^{−c})/*c*
approximation. This matches results of Vondrák,
who has shown that the continuous greedy algorithm produces a
(1−*e*^{−c})/*c* approximation
when the objective function has curvature *c* with respect to the optimum, and proved that
achieving any better approximation ratio is impossible in the value
oracle model.
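For orientation (this is standard calculus, not a result of the paper), the guarantee (1−*e*^{−c})/*c* interpolates between the modular and the general monotone submodular regimes:

```latex
\lim_{c \to 0^{+}} \frac{1 - e^{-c}}{c} = 1,
\qquad
\left.\frac{1 - e^{-c}}{c}\right|_{c=1} = 1 - \frac{1}{e} \approx 0.632.
```

The ratio is decreasing in *c* on (0,1], so smaller curvature (objectives closer to modular) yields better guarantees, and *c*=1 recovers the 1−1/*e* bound for general monotone submodular functions.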

The paper exists in several different versions:

- The FOCS version only contains the case
*c*=1.
- The ArXiv version contains the result for general
*c*. A similar account can be found in Ward's
thesis.
- The journal version contains a significantly simplified proof
of the result for general
*c*.
- The extended version includes slightly better approximation ratios for bounded matroid rank, and an improved version of the continuous greedy algorithm.
- The exposition gives a simplified exposition of the main part of the analysis, following ideas of Moran Feldman.

The journal version supersedes the previous versions.

SICOMP
Journal
FOCS
ArXiv
Extended
Exposition

Video (IAS)
Video (HIM)
Slides (HIM)
BibTeX

@inproceedings{fw2focs,
author = {Yuval Filmus and Justin Ward},
title = {A tight combinatorial algorithm for submodular maximization
subject to a matroid constraint},
booktitle = {53rd Annual {IEEE} Symposium on Foundations of Computer Science ({FOCS} 2012)},
year = {2012},
pages = {659--668}
}
@article{fw2sicomp,
author = {Yuval Filmus and Justin Ward},
title = {A tight combinatorial algorithm for submodular maximization subject to a matroid constraint},
journal = {SIAM Journal on Computing},
volume = {43},
issue = {2},
year = {2014},
pages = {514--542}
}

### Social choice theory

### Threshold models for competitive influence in social
networks (with Allan Borodin and Joel Oren)

WINE 2010

The problem of influence maximization deals with choosing
the optimal set of nodes in a social network so as to maximize the
resulting spread of a technology (opinion, product ownership and so
on), given a model of diffusion of influence in a network. A natural
extension is a competitive setting, in which the goal is to maximize
the spread of our technology in the presence of one or more
competitors.

We suggest several natural extensions to the well-studied
linear threshold model, showing that the original greedy approach
cannot be used.

Furthermore, we show that for a broad family of
competitive influence models, it is NP-hard to achieve an
approximation that is better than a square root of the optimal
solution; the same proof can also be applied to give a negative
result for a conjecture in Carnes et al. about a general cascade model for
competitive diffusion.

Finally, we suggest a natural model that is amenable to
the greedy approach.

Download
BibTeX

@inproceedings{bfo1,
author = {Allan Borodin and Yuval Filmus and Joel Oren},
title = {Threshold models for competitive influence in social networks},
booktitle = {The 6th Workshop on Internet and Network Economics
({WINE} 2010)},
year = {2010},
pages = {539--550}
}

### Efficient vote elicitation under candidate uncertainty (with
Craig Boutilier and Joel Oren)

IJCAI 2013

Top-*k* voting is an especially natural form of
partial vote elicitation in which only length-*k* prefixes of
rankings are elicited. We analyze the ability of top-*k* vote
elicitation to correctly determine true winners with high
probability, given probabilistic models of voter preferences and
candidate availability. We provide bounds on the minimal value of
*k* required to determine the correct winner under the
plurality and Borda voting rules, considering both worst-case
preference profiles and profiles drawn from the impartial culture
and Mallows probabilistic models. We also derive conditions under
which the special case of *zero elicitation* (i.e., *k*=0)
produces the correct winner. We provide empirical results that
confirm the value of top-*k* voting.

The proof of Theorem 10 is incomplete, but the issue is
fixed in later work.

Download
Full version
BibTeX

@inproceedings{bfo2,
author = {Craig Boutilier and Yuval Filmus and Joel Oren},
title = {Efficient vote elicitation under candidate uncertainty},
booktitle = {23rd International Joint Conference on Artificial Intelligence
({IJCAI} 2013)},
year = {2013},
pages = {309--316}
}

### Efficient voting via the top-*k* elicitation scheme: a
probabilistic approach (with Joel Oren)

EC 2014

Many voting rules require the voters to give a complete
preference order over the candidates. This is cumbersome, leading to
the notion of top-*k* voting, in which the voters only
give the length-*k* prefixes of their rankings. The question
that we ask in this paper is: given a voting rule, for what value of
*k* is it possible to predict the overall winner given only the
length-*k* prefixes, with high probability, given enough voters?

We first consider the case of an *impartial culture*,
in which the voters choose their preference profiles uniformly at
random over all permutations. For positional scoring rules (like
Borda) we give a nearly-tight threshold theorem for *k*. We
also prove a strong, though non-optimal, lower bound for Copeland.

When the preference profiles are drawn from a biased
distribution, such as the Mallows distribution, we show that the
candidate toward which the distribution is biased wins the
elections, for both positional scoring rules and Copeland, with high
probability.

Finally, we consider adversarially-chosen preference
distributions. We show that for positional scoring rules with
geometrically decaying scores, *k*=*O*(log *n*)
suffices to predict the winner with high probability.
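The prediction task can be made concrete with a toy sketch: score full rankings by the Borda rule, and predict from top-*k* prefixes by scoring only the observed prefix positions. The function names and the prefix-scoring heuristic below are ours, chosen as one natural completion; they are not the paper's exact scheme:

```python
def borda_winner(rankings, m):
    """Full-information Borda: position j (0-indexed) in a ranking
    over m candidates earns m - 1 - j points."""
    scores = [0] * m
    for r in rankings:
        for j, cand in enumerate(r):
            scores[cand] += m - 1 - j
    return max(range(m), key=lambda c: scores[c])

def topk_borda_winner(rankings, m, k):
    """Top-k prediction: score only the length-k prefixes; candidates
    outside a voter's prefix receive zero from that voter."""
    scores = [0] * m
    for r in rankings:
        for j, cand in enumerate(r[:k]):
            scores[cand] += m - 1 - j
    return max(range(m), key=lambda c: scores[c])
```

The question studied in the paper is, roughly, how large *k* must be before `topk_borda_winner` agrees with `borda_winner` with high probability over the draw of the profiles.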

Download
BibTeX

@inproceedings{fo,
author = {Yuval Filmus and Joel Oren},
title = {Efficient voting via the top-$k$ elicitation scheme: a
probabilistic approach},
booktitle = {Electronic Commerce (EC)},
year = {2014},
pages = {295--312}
}

### Power distribution in randomized weighted voting: the effects of the quota (with Joel
Oren, Yair Zick and Yoram Bachrach)

IJCAI 2016 (first half), SAGT 2016 (second half)

We study the Shapley value in weighted voting games. The Shapley value has been used as an index for measuring the
power of individual agents in decision-making bodies and political organizations, where decisions are made by a majority
vote process. We characterize the impact of changing the quota (i.e., the minimum number of seats in the parliament that are
required to form a coalition) on the Shapley values of the agents. Contrary to previous studies, which assumed that the agent
weights (corresponding to the size of a caucus or a political party) are fixed, we analyze new domains in which the weights
are stochastically generated, modeling, for example, election processes.

We examine a natural weight generation process: the Balls and Bins model, with uniform as well as exponentially decaying
probabilities. We also analyze weights that form a super-increasing sequence, answering several open questions
pertaining to the Shapley values in such games.

Our results for the balls and bins model with
exponentially decaying probabilities rely on a formula for the
Shapley values of super-increasing sequences. Curiously, this
formula gives rise to a continuous function reminiscent of
Minkowski's question mark function.
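Minkowski's question mark function itself is classical: for a rational *x* with continued fraction expansion *x* = [0; *a*₁, *a*₂, …], one has ?(*x*) = 2·∑_{k≥1} (−1)^{k+1} 2^{−(a₁+⋯+a_k)}. A minimal sketch computing it exactly (an illustration of the classical function, not of our Shapley formula):

```python
from fractions import Fraction

def question_mark(x):
    """Minkowski's ?(x) for rational x in [0, 1], via the continued
    fraction formula ?([0; a1, a2, ...]) = 2 * sum_k (-1)^(k+1)
    * 2^(-(a1 + ... + ak))."""
    x = Fraction(x)
    assert 0 <= x <= 1
    if x == 0 or x == 1:
        return x
    # The partial quotients a1, a2, ... are the quotients produced by
    # the Euclidean algorithm on (denominator, numerator).
    result, sign, exponent = Fraction(0), 1, 0
    num, den = x.denominator, x.numerator
    while den:
        a, r = divmod(num, den)
        exponent += a
        result += sign * Fraction(1, 2 ** exponent)
        sign = -sign
        num, den = den, r
    return 2 * result

# question_mark(Fraction(1, 3)) → 1/4, question_mark(Fraction(2, 5)) → 3/8.
```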

Download
BibTeX

@unpublished{ofzb,
author = {Joel Oren and Yuval Filmus and Yair Zick and Yoram Bachrach},
title = {Power distribution in randomized weighted voting: the effects of the quota},
note = {Submitted},
year = {2016}
}

### Shapley values in random weighted voting games (with Joel Oren and Kannan Soundararajan)

We study the distribution of Shapley values in weighted voting games. The Shapley values measure the voting power in collective decision-making systems. While easy to estimate empirically given the parameters of a weighted voting game, the Shapley values are hard to reason about analytically.

We propose a probabilistic approach, in which the agent weights are drawn i.i.d. from some known exponentially decaying distribution. We provide a general closed-form characterization of the highest and lowest expected Shapley values in such a game, as a function of the parameters of the underlying distribution. To do so, we give a novel reinterpretation of the stochastic process that generates the Shapley variables as a renewal process. We demonstrate the use of our results on the uniform and exponential distributions.
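The "easy to estimate empirically" part is the standard permutation-sampling estimator; a minimal sketch (ours, for illustration):

```python
import random

def shapley_estimates(weights, quota, samples=20000, seed=0):
    """Monte Carlo estimate of Shapley values in the weighted voting
    game [quota; weights]: a player's Shapley value is the probability
    of being pivotal (turning a losing coalition into a winning one)
    in a uniformly random ordering of the players."""
    rng = random.Random(seed)
    n = len(weights)
    pivotal = [0] * n
    players = list(range(n))
    for _ in range(samples):
        rng.shuffle(players)
        total = 0
        for i in players:
            if total < quota <= total + weights[i]:
                pivotal[i] += 1
            total += weights[i]
    return [c / samples for c in pivotal]
```

For instance, in the game [3; 3, 1, 1] the first player is pivotal in every ordering, so the estimates are exactly [1, 0, 0]; in [2; 1, 1, 1] each estimate concentrates around 1/3.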

Download
BibTeX

@unpublished{fos,
author = {Yuval Filmus and Joel Oren and Kannan Soundararajan},
title = {Shapley values in weighted voting games with random weights},
note = {Manuscript},
year = {2017}
}

### Miscellaneous

### Automatic web-scale information extraction (with Philip
Bohannon, Nilesh Dalvi, Nori Jacoby, Sathiya Keerthi and Alok Kirpal)

SIGMOD 2012

In this demonstration, we showcase the technologies that
we are building at Yahoo! for web-scale information
extraction. Given any new website, containing semi-structured
information about a pre-specified set of schemas, we show how to
populate objects in the corresponding schema by automatically
extracting information from the website.

Work done while I was a summer intern at Yahoo! Tel Aviv.

Link
BibTeX

@inproceedings{bdfjkk,
author = {Philip Bohannon and Nilesh Dalvi and Yuval Filmus and Nori
Jacoby and Sathiya Keerthi and Alok Kirpal},
title = {Automatic web-scale information extraction},
booktitle = {Proceedings of the 2012 {ACM} {SIGMOD} International Conference on Management of Data},
year = {2012},
pages = {609--612}
}

### Lower bounds for context-free grammars

Information Processing Letters,
Volume 111, Issue 18, 2011, pp. 895–898

Ellul,
Krawetz, Shallit and Wang prove an exponential lower bound on the
size of any context-free grammar generating the language of all permutations
over some alphabet. We generalize their method and obtain exponential lower
bounds for many other languages, among them the set of all squares of given
length, and the set of all words containing each symbol at most twice.

The version below corrects two typos in the proof of
Proposition 6: *w*_{1}∼*w*_{2} should
be *x*(*w*_{1})∼*x*(*w*_{2}), and in the following sentence
*N*^{−1}(*A*) should be *x*(*N*^{−1}(*A*)); and
a typo in the statement of Theorem 9: the exponent should be *t* rather than *n*.

Download
BibTeX

@article{filmus1,
author = {Yuval Filmus},
title = {Lower bounds for context-free grammars},
journal = {Information Processing Letters},
volume = {111},
issue = {18},
year = {2011},
pages = {895--898}
}

### Inequalities on submodular functions via term rewriting

Information Processing Letters,
Volume 113, Issue 13, 2013, pp. 457–464

We devise a method for proving inequalities on submodular
functions, with a term rewriting flavour. Our method consists of
the following steps:

- Start with a linear combination *X* of the values of the function.
- Define a set of simplification rules.
- Conclude that *X*≥*Y*, where *Y* is a linear combination of a small number of terms which cannot be simplified further.
- Calculate the coefficients of *Y* by evaluating *X* and *Y* on functions on which the inequality is tight.

The crucial third step is non-constructive, since it uses
compactness of the dual cone of submodular functions. Its proof uses
the classical uncrossing technique with a quadratic potential
function.

We prove several inequalities using our method, and use
them to tightly analyze the performance of two natural (but
non-optimal) algorithms for submodular maximization, the random set
algorithm and local search.

Download
BibTeX

@article{filmus2,
author = {Yuval Filmus},
title = {Inequalities on submodular functions via term rewriting},
journal = {Information Processing Letters},
volume = {113},
issue = {13},
year = {2013},
pages = {457--464}
}

### Universal codes of the natural numbers

Logical Methods in Computer Science, Volume 9, Issue 3, 2013, Paper 7.

A code of the natural numbers is a uniquely-decodable binary code of the natural numbers with
non-decreasing codeword lengths, which satisfies Kraft's inequality tightly. We define a natural partial
order on the set of codes, and show how to construct effectively a code better than a given sequence of
codes, in a certain precise sense. As an application, we prove that the existence of a scale of codes (a
well-ordered set of codes which contains a code better than any given code) is independent of ZFC.
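One concrete code in this class is the classical Elias gamma code, which has non-decreasing codeword lengths and satisfies Kraft's inequality with equality in the limit (an illustration of the definition, not the paper's construction):

```python
def elias_gamma(n):
    """Elias gamma codeword of n >= 1: the binary expansion of n,
    preceded by as many zeros as there are bits after the leading one;
    codeword length 2*floor(log2 n) + 1."""
    assert n >= 1
    b = bin(n)[2:]
    return "0" * (len(b) - 1) + b

# Lengths are non-decreasing: 1, 3, 3, 5, 5, 5, 5, 7, ...
# Over n = 1, ..., 2^k - 1 the Kraft sum of 2^(-length) is 1 - 2^(-k),
# so it tends to 1: the inequality is satisfied tightly.
kraft = sum(2 ** -len(elias_gamma(n)) for n in range(1, 2 ** 10))
```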

Journal
ArXiv
Preprint
BibTeX

@article{filmus3,
author = {Yuval Filmus},
title = {Universal codes of the natural numbers},
journal = {Logical Methods in Computer Science},
volume = {9},
issue = {3},
year = {2013},
pages = {paper 7}
}

### On the spectra of direct sums and Kronecker product constructions (with Edinah K. Gnang)

Linear Algebra and its Applications, Volume 519, 2017, pp. 238–277

We study the spectral theory of hypermatrices, initiated by Gnang, Elgammal and Retakh.

Here are some of our results:

- We show that Hadamard hypermatrices of side length 2 exist unless the order is an even number larger than 2.
- We determine the characteristic polynomial and hyperdeterminant of matrices of side length 2.
- We generalize the Rayleigh quotient to hypermatrices.

ArXiv
BibTeX

@article{gf,
author = {Edinah K. Gnang and Yuval Filmus},
title = {On the spectra of direct sums and Kronecker product constructions},
journal = {Linear Algebra and its Applications},
volume = {519},
pages = {238--277},
year = {2017}
}
