Wednesday, December 30, 2015

Codeforces 586F - Lizard Era: Beginning

Problem Statement:
586F - Lizard Era: Beginning

Solution:
This problem has a very cool hashing and look-up idea (the same method is described in the official editorial).

First, break the arrays l[i], m[i], and w[i] into two halves: the upper half and the lower half. This reduces the \(O(3^n)\) search to two searches of \(O(3^{n/2})\) each, which makes the problem tractable for \(n \leq 25\).

Suppose the chosen configuration of the upper half gives us (a, b, c) as the sums over l[i], m[i], and w[i], and similarly that (a', b', c') are the sums for the lower half. Our job is then to find (a', b', c') such that \(a+a' = b+b' = c+c'\).
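To make the matching concrete, here is a minimal Java sketch with hypothetical toy data. It assumes each position contributes two of the three values to the running sums, and it only maximises the common final sum (a full solution would also record which option was picked at each step). The key observation: \(a+a' = b+b'\) rearranges to \(a-b = b'-a'\), so the upper half stores the key \((a-b,\ a-c)\) and the lower half queries \((b'-a',\ c'-a')\).

```java
import java.util.*;

public class MeetInTheMiddle {
    static long best = Long.MIN_VALUE; // best common sum found so far

    interface Sink { void accept(long a, long b, long c); }

    public static void main(String[] args) {
        // toy data (hypothetical values, not from the actual test set)
        int[] l = {1, 2, 3, 4};
        int[] m = {2, 1, 4, 3};
        int[] w = {3, 3, 1, 2};
        int n = l.length, half = n / 2;

        Map<Long, Long> seen = new HashMap<>();
        // enumerate the 3^(n/2) configurations of the upper half;
        // store key (a - b, a - c) -> best a seen for that key
        enumerate(l, m, w, 0, half, 0, 0, 0, (a, b, c) ->
                seen.merge(pack(a - b, a - c), a, Math::max));
        // enumerate the lower half; a configuration (a', b', c') matches iff
        // its key (b' - a', c' - a') was stored, since
        // a + a' = b + b'  <=>  a - b = b' - a' (and similarly for c)
        enumerate(l, m, w, half, n, 0, 0, 0, (a2, b2, c2) -> {
            Long a = seen.get(pack(b2 - a2, c2 - a2));
            if (a != null) best = Math.max(best, a + a2);
        });
        System.out.println("best equal sum: " + best);
    }

    // at each index, pick which two of (l, m, w) to add to the sums
    static void enumerate(int[] l, int[] m, int[] w, int i, int end,
                          long a, long b, long c, Sink sink) {
        if (i == end) { sink.accept(a, b, c); return; }
        enumerate(l, m, w, i + 1, end, a + l[i], b + m[i], c, sink); // l and m
        enumerate(l, m, w, i + 1, end, a + l[i], b, c + w[i], sink); // l and w
        enumerate(l, m, w, i + 1, end, a, b + m[i], c + w[i], sink); // m and w
    }

    // injective while both deltas fit in 32 bits, so no hash collisions
    static long pack(long x, long y) {
        return (x << 32) | (y & 0xffffffffL);
    }
}
```

The packed key is collision-free as long as each delta fits in a 32-bit range, which holds here since the sums are bounded by n times the maximum value.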

Monday, December 21, 2015

Codeforces Education Round 1 C. Nearest Vectors

Problem Statement:
http://codeforces.com/contest/598/problem/C

Solution:
[Before you read any further, I think there is an official editorial for this round which may explain the approach to this problem better.]

The problem looks innocent enough, but it turns out to be quite tedious. Nevertheless, it ties together several interesting techniques, and in the end I found it quite interesting.
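For reference, a common approach to minimum-angle-pair problems (which I believe is at the core of the intended solution, though the official editorial is authoritative) is to sort the vectors by polar angle and compare only adjacent pairs, including the wrap-around pair. Below is a hedged Java sketch on toy data. It uses plain doubles; the actual problem's tight precision requirements (long double territory in C++) are part of what makes it tedious, so treat this as an illustration of the sorting idea rather than a full solution.

```java
import java.util.*;

public class NearestVectors {
    // Returns the stored indices of the two vectors with the minimal
    // non-oriented angle between them. After sorting by polar angle,
    // only adjacent vectors (plus the wrap-around pair) can be closest.
    static int[] closestPair(long[][] v) {
        long[][] s = v.clone();
        Arrays.sort(s, Comparator.comparingDouble(
                (long[] p) -> angle(p[0], p[1])));
        int n = s.length;
        double best = Double.MAX_VALUE;
        int[] ans = new int[2];
        for (int i = 0; i < n; i++) {
            long[] p = s[i], q = s[(i + 1) % n];
            double d = angle(q[0], q[1]) - angle(p[0], p[1]);
            if (i == n - 1) d += 2 * Math.PI;     // close the circle
            d = Math.min(d, 2 * Math.PI - d);     // non-oriented angle
            if (d < best) {
                best = d;
                ans[0] = (int) p[2];
                ans[1] = (int) q[2];
            }
        }
        return ans;
    }

    static double angle(double x, double y) {
        double a = Math.atan2(y, x);
        return a < 0 ? a + 2 * Math.PI : a;       // normalize to [0, 2*pi)
    }

    public static void main(String[] args) {
        // toy vectors (hypothetical): each entry is {x, y, index}
        long[][] v = {{1, 0, 1}, {0, 1, 2}, {1, 1, 3}, {-1, 0, 4}};
        int[] p = closestPair(v);
        System.out.println(p[0] + " " + p[1]);
    }
}
```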

Tuesday, December 8, 2015

Parallel Matrix Multiplication in Java

In this post I would like to discuss my experiment today on parallelising matrix multiplication. First off, I would like to credit the following Stack Overflow thread and the authors of its questions, answers, and code.
http://stackoverflow.com/questions/5484204/parallel-matrix-multiplication-in-java-6

My fascination with parallelised matrix multiplication started when I was implementing an RBM in Java. In that implementation the training phase was very slow: some training runs could last for half a day. In comparison, Medal (https://github.com/dustinstansbury/medal), a library written on top of MATLAB, took less than 5 minutes for the same computation! As it turns out, that godly speed is possible because matrix operations in MATLAB are highly optimised and parallelised. That made me wonder how to parallelise matrix operations, and in particular matrix multiplication.

So today I decided to investigate further and this post will serve as the highlight of the day.
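As a preview of the idea, here is a minimal sketch (my own, not the code from the thread above) of one common way to parallelise the triple loop in Java: split the output rows across threads, since each row of the result depends only on the corresponding row of the first matrix. A parallel stream over the row indices lets the common fork-join pool divide the range across the available cores.

```java
import java.util.stream.IntStream;

public class ParallelMatMul {
    static double[][] multiply(double[][] a, double[][] b) {
        int n = a.length, m = b[0].length, k = b.length;
        double[][] c = new double[n][m];
        // each output row c[i] is independent, so rows can run in parallel
        IntStream.range(0, n).parallel().forEach(i -> {
            for (int p = 0; p < k; p++) {   // i-p-j loop order walks b and c
                double aip = a[i][p];       // row by row (cache-friendlier
                for (int j = 0; j < m; j++) // than the naive i-j-p order)
                    c[i][j] += aip * b[p][j];
            }
        });
        return c;
    }

    public static void main(String[] args) {
        double[][] a = {{1, 2}, {3, 4}};
        double[][] b = {{5, 6}, {7, 8}};
        System.out.println(java.util.Arrays.deepToString(multiply(a, b)));
        // [[19.0, 22.0], [43.0, 50.0]]
    }
}
```

No locking is needed because each task writes only to its own row of `c`; this row-level partitioning is also roughly what the answers in the linked thread do with explicit thread pools.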