Showing posts with label Recursion. Show all posts

Sunday, July 3, 2016

Codeforces Round #359 (Div. 2) - D. Kay and Snowflake

Problem Statement:
http://codeforces.com/contest/686/problem/D

Summary:
Given a tree of size n <= 300000, answer q <= 300000 queries of the form: find the centroid of the subtree rooted at v. The centroid of a subtree T is defined as a node in T whose removal leaves disjoint trees each of size at most |T|/2 (half the size of the initial subtree).

Sunday, June 12, 2016

Codeforces Round #356 (Div. 2) D. Bear and Tower of Cubes

Problem Statement:
Codeforces Round #356 (Div. 2) D. Bear and Tower of Cubes

Summary:
Given m < 10^15, find the largest (k, X) such that a[1]+a[2]+...+a[k] = X <= m, where each a[i] is a perfect cube and, for the given X, a[i] is greedily chosen as the largest cube such that a[1] + ... + a[i] <= X.

Monday, June 15, 2015

Codeforces 538E - Demiurges Play Again

Problem Statement:
538E - Demiurges Play Again

Solution:

Interesting problem. Although it seems to have a large search space, a few key observations let us solve it in linear time (i.e. visiting each node exactly once).

First focus on trying to get the maximum result. Note that both players play optimally, hence the result of a game is deterministic. Consider a subtree T rooted at v. Let v.size be the number of leaves in T. Furthermore, let v.best be the maximum result obtained by an optimal labelling of the leaves of T using labels 1 .. v.size. We have two cases:
1. The first player makes a move at node v. He will then choose the child of v that maximises the result. For each child w of v, we check w.size and w.best, and pick the child that maximises v.size - w.size + w.best (i.e. we place the biggest labels on the leaves of the subtree rooted at w).
2. The second player makes a move at node v. He will choose a child w that minimises the result, so our job is to make that minimum as large as possible. It can be proved that the best achievable value is \(1 + \sum_{w \text{ child of } v} (w.best - 1) \).
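The two cases above can be sketched as a single DFS. This is a minimal sketch, not the full solution (which also needs the minimisation target): the `first_child`/`next_sib` array layout, the array names `size`/`best`, and the assumption that the maximiser moves at even depths are all mine, for illustration.

```c
#include <assert.h>

#define MAXN 16

/* Children stored as first-child / next-sibling lists (assumed layout);
   node 0 is an unused sentinel, so first_child[v] == 0 means v is a leaf. */
int first_child[MAXN], next_sib[MAXN];
int size[MAXN], best[MAXN]; /* size = #leaves in subtree, best = max achievable label */

/* Depth parity decides who moves: maximiser at even depth (assumption). */
void dfs(int v, int depth) {
    if (first_child[v] == 0) { size[v] = 1; best[v] = 1; return; }
    size[v] = 0;
    for (int w = first_child[v]; w; w = next_sib[w]) {
        dfs(w, depth + 1);
        size[v] += size[w];
    }
    if (depth % 2 == 0) { /* maximiser: pick child w, give its leaves the top labels */
        best[v] = 0;
        for (int w = first_child[v]; w; w = next_sib[w])
            if (size[v] - size[w] + best[w] > best[v])
                best[v] = size[v] - size[w] + best[w];
    } else {              /* minimiser: the best we can force is 1 + sum(best[w] - 1) */
        best[v] = 1;
        for (int w = first_child[v]; w; w = next_sib[w])
            best[v] += best[w] - 1;
    }
}
```

For instance, with root 1 having children 2 and 3, where 2 has leaf children 4 and 5 and 3 is itself a leaf, the maximiser at the root gets best[1] = max(3-2+1, 3-1+1) = 3 by sending the game into leaf 3.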

Friday, February 13, 2015

Codeforces Rockethon 2015 Problem D1/D2 - Constrained Tree

Problem Statement:
513D2 - Constrained Tree

Solution:
Can a problem be interesting and yet frustrating? The answer turns out to be a loud and painful yes. I spent 8 hours just to get this right :/

Btw the official editorial for this problem is sufficiently comprehensible, so I would suggest you read it first if you haven't done so. Otherwise, you might benefit from this :)

You are supposed to construct a tree from a given set of relationships between the nodes, such that the pre-order and in-order labellings of the nodes agree with each other. That is a pretty confusing specification in my opinion, and it is defined recursively.

Tuesday, June 17, 2014

a bit of number theory: Greatest Common Divisor

We'll discuss one of the most ancient algorithms in the history of Number Theory.

Problem Statement:
Given two positive integers \( M\) and \(N\), find the greatest integer \(D\) such that \(D\) divides both \(M\) and \(N\).

In case you have forgotten, here are a few examples:

  • for \(M=10\), \(N=55\), the greatest common divisor \(D\) is \( 5\)
  • for \(M=72\), \(N=64\), the greatest common divisor \(D\) is \( 8\)
For small \( (M,N) \), \(D\) can usually be found easily by trial and error (and sometimes by terror). For large values of \( (M,N) \), trying all integers up to \(\min(M,N)\) still works, but it can be really slow, and we can certainly do better than this.

An efficient algorithm to solve this problem hinges on the following observation:
Theorem 1:
For positive integers \(a,b,d\), if \(d\mid a\) and \(d \mid b\), then \(d\mid ax+by\) for any \(x,y \in \mathbb{Z}\).

Proof:
Directly substituting \(a = kd\) and \(b = ld\) shows that the claim indeed holds. \(\blacksquare\)

With this, we can devise an efficient algorithm due to Euclid:

Euclid(M,N)
Input: M,N are integers, \(M\geq N\)
Output: integer D, the greatest common divisor of M and N
Pseudo-code:
if \(N\)==0:
   return \(M\)
return Euclid(\(N\), \(M \bmod N\))

Convince yourself that this algorithm works.
Euclid(M,N) eventually terminates with \( D = xM + yN\) for some \(x,y\in \mathbb{Z}\): at the base case \(D\) equals the first argument, and unwinding each recursive call rewrites \(D\) as an integer combination of that call's arguments.
We will prove that it indeed returns the greatest common divisor of M and N.
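As a quick sanity check, here is the call trace on the earlier example \(M=55\), \(N=10\):

\[
\begin{aligned}
\text{Euclid}(55,10) &\to \text{Euclid}(10,\, 55 \bmod 10) = \text{Euclid}(10,5)\\
\text{Euclid}(10,5) &\to \text{Euclid}(5,\, 10 \bmod 5) = \text{Euclid}(5,0)\\
\text{Euclid}(5,0) &= 5,
\end{aligned}
\]

and indeed \(5 = 1\cdot 55 - 5\cdot 10\), matching the form \(D = xM + yN\).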

Theorem 2:
Given integers \(d,x,y,a,b\), if \(d = ax + by\) and \(d\) divides both \(a\) and \(b\), then \(d\) is the greatest common divisor of \(a\) and \(b\).

Proof:
Suppose that \(d\) is not the greatest common divisor of \(a\) and \(b\). Hence there is \(D\) where \(D > d\) such that \(D\) divides both \(a\) and \(b\). By Theorem 1, \(D \mid ax + by = d\), which implies that \(D \leq d\), a contradiction. Hence \(d\) must be the greatest common divisor of \(a\) and \(b\). \(\blacksquare\)

Finally, what is the running time of Euclid(M,N)? It is guaranteed that Euclid(M,N) terminates after \(O(\log{N})\) recursive calls, since by the claim below the first argument at least halves every two calls, and each call performs a single modular reduction, which takes constant time on machine-word integers.

Claim:
If \(M \geq N \geq 1\), then \(M \bmod N < \frac{M}{2}\)

Proof:
There are two cases, depending on how \(N\) compares to \(\frac{M}{2}\):
Case 1: \(N \leq \frac{M}{2}\)
Then we know for a fact that \(M \bmod N < N \leq \frac{M}{2}\).
Case 2: \(N > \frac{M}{2}\)
Here \(M - N < M - \frac{M}{2} = \frac{M}{2} < N\), so the remainder is taken after a single subtraction: \(M \bmod N = M - N < \frac{M}{2}\). \(\blacksquare\)

Here is an implementation in C:
/* Recursive greatest common divisor (Euclid's algorithm) */
int gcd(int M, int N){
   if (N > M) return gcd(N, M); /* ensure M >= N */
   if (N == 0) return M;
   return gcd(N, M % N);
}
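The coefficients \(x,y\) with \(D = xM + yN\) mentioned above can also be computed explicitly by carrying them through the recursion. A sketch of this "extended" variant (the function name `egcd` and the use of `long long` are my choices, not part of the post):

```c
#include <assert.h>

/* Extended Euclid: returns g = gcd(a,b) and fills *x, *y with a*x + b*y = g */
long long egcd(long long a, long long b, long long *x, long long *y){
   if (b == 0){ *x = 1; *y = 0; return a; }
   long long x1, y1;
   long long g = egcd(b, a % b, &x1, &y1);
   /* from b*x1 + (a%b)*y1 = g and a%b = a - (a/b)*b:
      a*y1 + b*(x1 - (a/b)*y1) = g */
   *x = y1;
   *y = x1 - (a / b) * y1;
   return g;
}
```

For the example above, egcd(55, 10, &x, &y) returns 5 with x = 1 and y = -5.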

Wednesday, May 14, 2014

a bit on backtracking

If there is one thing that computers can do very well, it is backtracking. Backtracking is one of the most fundamental problem-solving paradigms: given a problem, try all possible inputs to come up with the solutions. This is what we do when girlfriends suddenly get angry at us: we need to generate all possibilities very quickly until a solution is found. Implementations of backtracking usually make use of recursion, and are sometimes referred to as 'recursive algorithms'. The advantage of backtracking is that a solution set can be produced with minimal knowledge and understanding of the problem itself; sometimes, it is the only way to solve a given problem. Yet this method grows very quickly in complexity, hence it can only be applied to problems that work on a small set of inputs.

There are two basic types of backtracking algorithms presented here, namely finding subsets and generating permutations. A set H is called a subset of S if and only if H contains only elements of S. A permutation is defined as a rearrangement of the elements of an ordered set.

For example, given a set S = {1,2,5}, we have exactly 8 subsets of S: {1,2,5}, {1,2}, {1,5}, {2,5}, {1}, {2}, {5}, and {}.
Given an ordered set P = {1,3,4}, there are exactly 6 permutations using all elements in P: {1,3,4}, {1,4,3}, {3,1,4}, {3,4,1}, {4,1,3}, {4,3,1}.

Backtracking algorithms build valid inputs by adding a new element one by one to an existing set until a solution is found. A simple implementation of backtracking algorithm to find subsets is described as follows:

#include <stdio.h>
#include <stdbool.h>

#define N 3 //size of the initial set

//a[N] is the initial set; S[N] describes the subset of a[N]:
//S[i] is true if a[i] is in the subset, and false otherwise
void backtrack_subset(int a[N],bool S[N],int k){
  if(k==N){
    printf("{ ");
    for(int i=0;i<N;++i){
      if(S[i])printf("%d ",a[i]);
    }
    printf("}\n");
    return;
  }
  bool c[2]={true,false}; //candidate choices: include a[k] or not
  for(int i=0;i<2;++i){
    S[k]=c[i];
    backtrack_subset(a,S,k+1);
  }
}

The function starts with the call backtrack_subset(a,S,0) and terminates when k equals N, at which point a valid subset of a[N] has been constructed. The complexity of this algorithm grows exponentially, as each call to backtrack_subset leads to 2 further calls of backtrack_subset. My 5-year-old Mac withstood (with much struggling) up to N=20.
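Since each subset corresponds to an N-bit mask (bit i set means a[i] is chosen), the same enumeration can also be done without recursion. A sketch of that observation; returning the count of subsets printed is my addition, for checking:

```c
#include <stdio.h>

#define N 3 //size of the initial set (assumed, as in the recursive version)

/* Each mask in [0, 2^N) encodes one subset: bit i set means a[i] is chosen.
   Returns the number of subsets printed, which is always 2^N. */
int bitmask_subsets(int a[N]){
  int count = 0;
  for (int mask = 0; mask < (1 << N); ++mask, ++count){
    printf("{ ");
    for (int i = 0; i < N; ++i)
      if (mask & (1 << i)) printf("%d ", a[i]);
    printf("}\n");
  }
  return count;
}
```

With a[] = {1,2,5} this prints the same 8 subsets as the recursive version, just in a different order.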

Next we have an implementation of backtracking on permutations:

void backtrack_perm(int a[N],int P[N],int k){
  if(k==N){
    printf("{ ");
    for(int i=0;i<N;++i){printf("%d ",a[P[i]]);}
    printf("}\n");
    return;
  }
  bool ins[N]; //ins[i] is true if index i already appears in P[0..k-1]
  for(int i=0;i<N;++i)ins[i]=false;
  for(int i=0;i<k;++i)ins[P[i]]=true;
  int np=0;int c[N];   //c stores possible next element
  for(int i=0;i<N;++i){
    if(ins[i]==false){
      c[np]=i;
      ++np;
    }
  }
  for(int i=0;i<np;++i){
    P[k]=c[i];
    backtrack_perm(a,P,k+1);
  }
}

This algorithm follows exactly the same idea as the previous one; the significant difference is the way it stores the information about whether an element a[i] is already inside the permutation. It has O(N!) complexity, hence it is only useful for N up to about 10. Each call of backtrack_perm leads to another np calls of backtrack_perm, where np is the number of available choices of new elements that can be added. My Mac surrendered at N=11, at which point I had to hard-reboot the poor machine. This idea of recursing on the next available elements is also used in many other advanced algorithms, such as graph traversal. :)
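The O(N) scan for unused elements can be avoided altogether by swapping: keep positions 0..k-1 fixed and try each remaining element in position k in turn. This is a variant sketch, not the code above (the function name `perm_swap` and the returned count are mine, for illustration):

```c
#include <stdio.h>

#define N 3 //size of the ordered set (assumed, as in the version above)

/* Prints every permutation of a[] and returns how many were printed.
   Positions 0..k-1 are fixed; each remaining element is swapped into
   position k in turn, then swapped back when backtracking. */
int perm_swap(int a[N], int k){
  if (k == N){
    printf("{ ");
    for (int i = 0; i < N; ++i) printf("%d ", a[i]);
    printf("}\n");
    return 1;
  }
  int total = 0;
  for (int i = k; i < N; ++i){
    int t = a[k]; a[k] = a[i]; a[i] = t; /* choose a[i] for position k */
    total += perm_swap(a, k + 1);
    t = a[k]; a[k] = a[i]; a[i] = t;     /* undo the swap (backtrack) */
  }
  return total;
}
```

With a[] = {1,3,4} this prints all 6 permutations; the array is restored to its original order when the top-level call returns.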