Hell's Kitchen
Linearizability vs. sequential consistency can be explained through a Hell's Kitchen example, where two teams of cooks cook for a shared set of dining customers. The main idea is to get across the ordering of dishes (first course, then second, then third, …) for individual customers (sequential) and for entire tables (linearizable).
539 Distributed Shared Memory
Think about the communication model in a distributed system: we have either message passing (fundamental) or shared memory (convenient, but more abstract). The goal is to mimic the single-process memory interface so that higher-level algorithms can use shared memory as a black box: a read returns the value of the most recent write. However, in a distributed system, operations may overlap. In this note, we build shared memory on top of the message-passing communication model. ...
580 Fair Division of Indivisibles
Items are NOT divisible now! Goal: fair and efficient allocation. Model $A$: set of $n$ agents. $M$: set of $m$ indivisible goods. Each agent $i$ has a valuation function $V_i: 2^M \to \mathbb{R}_+$ over subsets of items. This function is monotone, i.e. more items can only make an agent happier. An allocation $X = (X_1, \ldots, X_n)$ assigns goods to agents, where the $X_i$ form a partition of $M$. Fairness: in the indivisible setting, the envy-free and proportional properties that are achievable in the divisible setting become hard to achieve. We need new definitions. ...
580 Fair Division of Divisibles via Competitive Equilibrium
Divisible means the item can be divided into pieces of any size; a cake is an example. Goal: fair and efficient allocation. Model $A$: set of $n$ agents. $M$: set of $m$ divisible goods. Each agent $i$ has a valuation function $V_i: \mathbb{R}_+^m \to \mathbb{R}_+$ over bundles of items (it doesn't have to be linear). Concave: captures decreasing marginal returns (looks like a $\sqrt{x}$ curve). Allocation $X = (X_1, \ldots, X_n)$ is the allocation of goods to all agents, where $X_{ij}$ is the fraction of good $j$ allocated to agent $i$. Fair (agreeable): Envy-free: no agent envies another's allocation over her own, i.e. for each agent $i$, $V_i(X_i) \geq V_i(X_j), \forall j \in [n]$. Proportional: each agent $i$ gets value at least $V_i(M) / n$, i.e. for each agent $i$, $V_i(X_i) \geq V_i(M)/n$. Efficient (non-wasteful): Pareto-optimal: no other allocation makes some agent better off without making another worse off. ...
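As a toy illustration of the two fairness notions, the checks can be sketched in a few lines. This sketch assumes additive (linear) valuations for simplicity, even though the note's model allows more general concave $V_i$; the function names are my own:

```python
def is_envy_free(V, X):
    """V[i][j]: agent i's value per unit of good j (additive valuations assumed).
    X[i][j]: fraction of good j allocated to agent i."""
    value = lambda i, bundle: sum(V[i][j] * bundle[j] for j in range(len(bundle)))
    # No agent i prefers any other agent k's bundle to her own.
    return all(value(i, X[i]) >= value(i, X[k])
               for i in range(len(V)) for k in range(len(V)))

def is_proportional(V, X):
    """Each agent gets at least a 1/n share of her value for everything."""
    n = len(V)
    return all(sum(V[i][j] * X[i][j] for j in range(len(V[i]))) >= sum(V[i]) / n
               for i in range(n))
```

For example, with two agents who each only care about one of two goods, giving each her favorite good is both envy-free and proportional.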
Understanding SimplePIR & DoublePIR
Notations Database size: $N$. Plaintext modulus: $p \in \mathbb{N}$; in SimplePIR, $\log(p) \leq 10$. Ciphertext modulus: $q \in \mathbb{N}$; in SimplePIR, $\log(q) = 32$. LWE dimension: $n = 1024$. LWE secret vector: $\vec{s} \in \mathbb{Z}_q^{n}$. LWE scaling factor: $\Delta = q / p$. LWE random matrix $A \gets \mathbb{Z}_q^{m \times n}$, where $m$ is the number of samples you want to encrypt. Plain message vector: $\vec{\mu} \in \mathbb{Z}_p^m$. In SimplePIR, we also write $\mu_i$ to denote the vector with all-zero entries except a single $1$ at index $i$. ...
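To make the $\Delta = q/p$ scaling concrete, here is a minimal single-sample LWE encrypt/decrypt sketch. The parameters below are toy values of my own choosing (nothing like SimplePIR's $n = 1024$, $\log q = 32$), and the noise bound is an assumption:

```python
import random

# Toy LWE parameters (far too small for security).
n, q, p = 16, 2**16, 2**8
delta = q // p  # the scaling factor Δ = q / p

def keygen():
    # secret vector s ∈ Z_q^n
    return [random.randrange(q) for _ in range(n)]

def encrypt(s, mu):
    # one LWE sample: b = <a, s> + e + Δ·μ (mod q)
    a = [random.randrange(q) for _ in range(n)]
    e = random.randint(-4, 4)  # small noise
    b = (sum(ai * si for ai, si in zip(a, s)) + e + delta * mu) % q
    return a, b

def decrypt(s, ct):
    # subtract <a, s>, then round away the noise
    a, b = ct
    noisy = (b - sum(ai * si for ai, si in zip(a, s))) % q
    return round(noisy / delta) % p
```

Decryption works because the noise $e$ is much smaller than $\Delta$, so rounding $e + \Delta\mu$ to the nearest multiple of $\Delta$ recovers $\mu$.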
Commitment and Signatures
Definition Commitment Parameters $\mathsf{pp} \gets \mathsf{Gen}(1^n)$. $\mathsf{com} \gets \mathsf{Commit}(m, r)$. $\set{0, 1} \gets \mathsf{Open}(\mathsf{com}, m, r)$. Binding: an adversary can't open the commitment to a different message $m'$. Hiding: $\mathsf{com}$ does not reveal any information about the message $m$. One naive approach is to commit with a hash. However, a plain hash is not hiding: a low-entropy message can be recovered by brute force. Pedersen commitment $\mathsf{Gen}(1^n)$: group $\mathbb{G}$ with order $q$ and random generators $g, h$. $\mathsf{Commit}(m, r)$: pick a random $r \gets \mathbb{Z}_q$, compute $\mathsf{com} = g^mh^r$. $\mathsf{Open}(\mathsf{com}, m, r)$: check if $\mathsf{com} = g^mh^r$. This scheme is computationally binding: if we write $h = g^x$ for some unknown $x$, then finding $m' \neq m$ and $r'$ with $g^mh^r = g^{m+rx} = g^{m' + r'x} = g^{m'}h^{r'}$ would give $x = (m - m')/(r' - r) \bmod q$, which is hard by the discrete log assumption. ...
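A toy instantiation of the Pedersen scheme over a tiny prime-order subgroup. These parameters are chosen for readability and are wildly insecure (in particular, $h = g^7$ here, so the discrete log of $h$ is known; a real setup samples $h$ so that no one knows it):

```python
# Toy group: the order-11 subgroup of Z_23^* (insecure demo parameters).
P, Q = 23, 11   # group modulus and subgroup order q
g, h = 2, 13    # h = g^7 mod 23; in practice log_g(h) must be unknown

def commit(m, r):
    # com = g^m * h^r, with m, r in Z_Q
    return (pow(g, m, P) * pow(h, r, P)) % P

def open_commit(com, m, r):
    # verify that (m, r) is a valid opening of com
    return com == commit(m, r)
```

Opening with the committed pair verifies; opening with any other message fails (up to the binding caveat above).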
RSA in pure number theory
Binomial Theorem Given $a, b \in \mathbb{R}, n \in \mathbb{N}$, then $$ (a + b)^n = \sum_{k = 0}^{n} \binom{n}{k}a^k b^{n-k} $$ Proposition Let $p$ be prime. $\forall a, b \in \mathbb{Z}$, we have: $$ (a+b)^p \equiv a^p + b^p \mod p $$ Proof It suffices to show that $\binom{p}{k} \equiv 0 \mod p$ if $0 < k < p$. Observe: $p \mid p!, p \nmid k!, p \nmid (p-k)!$, hence $p \mid \binom{p}{k}$. $\blacksquare$ ...
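A quick sanity check of the proposition (the "freshman's dream"): by the binomial theorem, it holds for all $a, b$ exactly when every middle coefficient $\binom{p}{k}$, $0 < k < p$, vanishes mod $p$. The helper name is my own:

```python
import math

def freshmans_dream_holds(p):
    # (a + b)^p ≡ a^p + b^p (mod p) for all a, b iff
    # C(p, k) ≡ 0 (mod p) for every 0 < k < p.
    return all(math.comb(p, k) % p == 0 for k in range(1, p))
```

This holds for primes and fails for composites such as $4$, where $\binom{4}{2} = 6 \not\equiv 0 \pmod 4$.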
Taste of Fully Homomorphic Encryption
The following is a note for my talk during Ling's group meeting. What is FHE? Homomorphic encryption allows some computation (addition, scalar multiplication, ct-ct multiplication) directly on ciphertexts without first having to decrypt them. Partially homomorphic encryption supports only one of those possible operations. RSA is an example: $$ \text{Enc}(m_1) \cdot \text{Enc}(m_2) = m_1^e \cdot m_2^e = (m_1 \cdot m_2)^e = \text{Enc}(m_1 \cdot m_2) $$ FHE supports addition AND scalar multiplication: $$ \begin{cases} \text{Enc}(m_1) + \text{Enc}(m_2) = \text{Enc}(m_1 + m_2)\\ \text{Enc}(m) \cdot c = \text{Enc}(m \cdot c) \end{cases} $$ Fancy! And it exists! ...
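The RSA identity above can be checked directly with textbook toy parameters (the classic $n = 61 \cdot 53$ example; note that real RSA uses randomized padding, which destroys this homomorphism):

```python
# Toy RSA: n = 61 * 53 = 3233, e = 17 (public), d = 2753 (private).
n, e, d = 3233, 17, 2753

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

# Multiplying ciphertexts multiplies the underlying plaintexts (mod n).
m1, m2 = 7, 9
assert (enc(m1) * enc(m2)) % n == enc(m1 * m2)
assert dec((enc(m1) * enc(m2)) % n) == (m1 * m2) % n
```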
Convex Hull in 2D
Definition Problem: Given $n$ points $P = \set{p_1, \ldots, p_n} \subset \mathbb{R}^2$, compute $\text{CH}(P) = $ the smallest convex set containing $P$. Input: a list of all the points in $P$. Output: the list of points in $P$ that lie on the convex hull, in CCW order, starting from the leftmost point. Applications Given a huge point cloud, the convex hull helps find its "shape". Bounding volume: makes ray-shooting tasks faster. ...
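One standard algorithm that produces exactly this output format (hull vertices in CCW order, starting from the leftmost point) is Andrew's monotone chain; the excerpt doesn't name an algorithm, so this choice is my own:

```python
def convex_hull(points):
    """Andrew's monotone chain: O(n log n) 2D convex hull,
    returned CCW starting from the leftmost point."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower = []
    for p in pts:                 # build lower hull left-to-right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)

    upper = []
    for p in reversed(pts):       # build upper hull right-to-left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)

    # drop the last point of each chain (it repeats the other chain's start)
    return lower[:-1] + upper[:-1]
```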
Fast Fourier Transform
I have known this name for a long time, but never learned it. Now it's time. Recap Previously, we saw Karatsuba's algorithm for integer multiplication, which computes the result in $O(n^{\log_2 3})$ where $n$ is the number of digits. Another recap: polynomial representation: $$ p(x) = \sum_{i = 0}^{n}p_ix^i $$ We can represent this polynomial using an array of size $n + 1$. Evaluation and addition of polynomials are straightforward and can be done in $O(n)$ time. Naive multiplication is slow, however: it takes $O(n^2)$ time to compute all the coefficient products. FFT helps us speed up this multiplication. ...
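To preview where this is going, here is a compact recursive FFT over complex roots of unity and the $O(n \log n)$ polynomial product it enables; a sketch only, since rounding the inverse transform back to integers works only for modest coefficient sizes:

```python
import cmath

def fft(a, invert=False):
    """Recursive Cooley-Tukey FFT; len(a) must be a power of two."""
    n = len(a)
    if n == 1:
        return a[:]
    sign = -1 if invert else 1
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    out = [0] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n)  # n-th root of unity
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

def poly_mul(p, q):
    """Multiply coefficient lists p and q via FFT, pointwise product, inverse FFT."""
    n = 1
    while n < len(p) + len(q) - 1:
        n *= 2
    fa = fft(list(p) + [0] * (n - len(p)))
    fb = fft(list(q) + [0] * (n - len(q)))
    prod = fft([x * y for x, y in zip(fa, fb)], invert=True)
    return [round((c / n).real) for c in prod[:len(p) + len(q) - 1]]
```

For example, $(1 + 2x + 3x^2)(4 + 5x) = 4 + 13x + 22x^2 + 15x^3$.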