## Thursday, September 27, 2012

### Last Supper

Firstly, it's not even obvious how to simulate Leonardo's strategy efficiently. It can be done, though. First, you need to be able to tell, for each request, when the next request of the same colour will be (we'll see why later). This can be computed with a backwards pass over the data, keeping track of the last-seen request for each colour. Now we can implement Leonardo's strategy in a forward pass. As we go, we keep track of the next time each colour on the scaffold will be needed; after each request, this must be updated for the just-requested colour to point to its next occurrence. Each time Leonardo requests a paint colour that isn't on the scaffold, we could examine all the colours on the scaffold to determine which will be needed furthest in the future, but that would require O(NK) time. Instead, we can keep the next-needed times of the scaffold colours in a max priority queue. Since keys need to be changed as we go, a `std::multiset` is a good choice in C++, giving O(N log K) time.
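The simulation described above could be sketched roughly as follows (function and variable names are my own, not the grader's interface). It counts how many times the assistant must fetch a colour, evicting the colour needed farthest in the future:

```cpp
#include <cassert>
#include <limits>
#include <map>
#include <set>
#include <utility>
#include <vector>

// requests[i] is the colour asked for at step i; K is the scaffold size.
// Returns the number of fetches the assistant must perform (i.e. requests
// for colours not already on the scaffold) under the farthest-in-future rule.
long long simulate_leonardo(const std::vector<int>& requests, int K) {
    const int N = requests.size();
    const int INF = std::numeric_limits<int>::max();
    // Backwards pass: next_use[i] = next index requesting the same colour.
    std::vector<int> next_use(N);
    std::map<int, int> last_seen; // colour -> earliest later index
    for (int i = N - 1; i >= 0; --i) {
        auto it = last_seen.find(requests[i]);
        next_use[i] = (it == last_seen.end()) ? INF : it->second;
        last_seen[requests[i]] = i;
    }
    // Forward pass: the multiset orders scaffold colours by next-needed time.
    std::map<int, int> when; // colour on scaffold -> its next-needed time
    std::multiset<std::pair<int, int>> scaffold; // (next-needed time, colour)
    long long fetches = 0;
    for (int i = 0; i < N; ++i) {
        int c = requests[i];
        auto it = when.find(c);
        if (it != when.end()) {
            // Already on the scaffold: remove its stale key before updating.
            scaffold.erase(scaffold.find({it->second, c}));
        } else {
            ++fetches;
            if ((int)when.size() == K) {
                // Evict the colour needed farthest in the future.
                auto victim = std::prev(scaffold.end());
                when.erase(victim->second);
                scaffold.erase(victim);
            }
        }
        when[c] = next_use[i];
        scaffold.insert({next_use[i], c});
    }
    return fetches;
}
```

The `std::multiset` keyed on next-needed time plays the role of the priority queue with changeable keys: an update is an erase of the old key followed by an insert of the new one.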

With that in place, it's straightforward to implement the cases where we have at least ceil(log K) bits per request, because we can just encode the index of the colour on the scaffold to replace. However, to solve subtasks 4 and 5 you need something smarter. The key point is that you don't have to exactly implement Leonardo's algorithm. Let's do a thought experiment: Leonardo has two shelves on his scaffold, top and bottom. The top shelf contains the colours he will use at least once more before they're taken away. The bottom shelf contains the colours he knows will be taken away before they're used again. Since he knows the sequence of colours and he assumes his assistant follows his strategy, he can arrange the initial K paints on the right shelves, and every time he uses a paint he also puts it back on the right shelf. His assistant will only ever take away a colour from the bottom shelf, and Leonardo will only pick up paint either from his assistant or from the top shelf.

And now for the key observation: Leonardo never looks at the bottom shelf. Thus, it doesn't actually matter which colour the assistant takes off the bottom shelf. If he instead takes away an arbitrary colour from the bottom shelf, the set of top shelf paints will still be exactly as before, step after step. The only possible difference is that a requested colour might turn out now to already be on the bottom shelf, but this can't happen because it would contradict the optimality of Leonardo's strategy.

Thus, it is sufficient for the advice to provide enough information to know which colours are on which shelf, and the assistant can pick bottom-shelf colours arbitrarily to take away. We need K bits to indicate the shelf for each of the initial K colours, and we need N bits to indicate which shelf each paint goes back on after it has been used. N + K bits is enough to solve all subtasks, and in fact subtask 5 can be solved with only 125000 bits.

### Jousting tournament

There are a number of observations one can make. Firstly, let's say a knight is a candidate for a tournament if he either participates in the tournament or in another tournament whose winner was a candidate (recursively). Then it is easy to see that the candidates for a tournament form a contiguous interval of the original line of knights, and that the location of the interval is independent of the ranks of the knights. Computing this interval efficiently is not completely trivial, but let's come back to that and assume we know the candidate interval for every tournament. We can also see that the winner of any tournament is the candidate with highest rank.

Suppose the popular knight is a candidate for a tournament. We can immediately determine who all the other candidates for that tournament are, regardless of exactly where he is in the line, and we can also compute whether he will win. Now, for each tournament going backwards in time, we can compute how many times he will win if that is his first tournament: if he loses it will be 0, otherwise it will be one more than the number for the tournament he advances to. We can also find the leftmost slot (if any) he could occupy that doesn't require advancement from a previous tournament. We then pick the tournament with the highest such count and take the leftmost slot (breaking ties leftwards as required).

Done naively this will require O(N²) time, which should suffice for the first two subtasks. We need a few extra data structures to make this fast enough to tackle the final subtask. Firstly, we need to be able to determine the candidate ranges. This can be done with a binary indexed tree (BIT). As the tournaments are played, store an array of N values in a BIT, where each value is 1 for knights that haven't competed or that had the lowest index (not rank) of all candidates in their last tournament, and 0 for all others. For a new tournament (S, E), the first candidate will be the first knight whose (inclusive) prefix sum is S+1, and the last candidate will be one to the left of the first knight whose (inclusive) prefix sum is E+2. Binary searching for a given prefix sum can be implemented in O(log N) time, but we also need to find the bits to zero out in the BIT. These can be found by keeping the locations of all 1 bits in a linked list. Each bit is zeroed at most once, so this process takes O(N log N) time overall.
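The BIT-with-prefix-sum-search step might look like the following sketch (helper names are mine). Instead of a separate binary search over prefix sums, `lower_bound` walks down the implicit tree of the BIT, which gives a single O(log N) pass:

```cpp
#include <cassert>
#include <vector>

// Fenwick tree (binary indexed tree) over 1-based indices, supporting
// point updates and a search for the first position reaching a prefix sum.
struct BIT {
    int n;
    std::vector<int> tree;
    explicit BIT(int n) : n(n), tree(n + 1, 0) {}
    void add(int i, int delta) { // i is 1-based
        for (; i <= n; i += i & -i) tree[i] += delta;
    }
    // Smallest index i such that sum of positions 1..i >= target,
    // or n + 1 if the total is too small. O(log n).
    int lower_bound(int target) const {
        int pos = 0;
        int step = 1;
        while (step * 2 <= n) step *= 2;
        for (; step > 0; step >>= 1) {
            if (pos + step <= n && tree[pos + step] < target) {
                pos += step;           // descend into the right half...
                target -= tree[pos];   // ...discounting the skipped sum
            }
        }
        return pos + 1;
    }
};
```

In the solution sketched above, each knight holds a 1 while he is "leftmost-of-his-last-tournament" or has never competed, so `lower_bound(S+1)` and `lower_bound(E+2)` bracket the candidate interval.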

For each tournament, we also need to know to which tournament the winner advances. This can be computed at the same time as the previous step, by storing the tournament index in the linked list and building a tournament tree as we go.

Finally, we need to be able to determine the winner of each tournament, assuming it contains the popular knight. For this we need to determine the maximum value within a range of values. This is the standard range max query problem, for which there are a number of O(N log N) and even O(N) algorithms to answer O(N) queries on N items. A simple one is to precompute for each element the maximum of the next 1, 2, 4, 8, ... elements in O(N log N) time, which then allows queries to be answered in O(1) time.
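A minimal sketch of such a sparse table (names are mine). The construction is the O(N log N) precomputation described above, and each query covers the interval with two possibly-overlapping power-of-two blocks:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Static range-maximum queries: O(N log N) build, O(1) per query.
struct SparseTable {
    std::vector<std::vector<int>> mx; // mx[j][i] = max of a[i .. i + 2^j - 1]
    explicit SparseTable(const std::vector<int>& a) {
        int n = a.size();
        mx.push_back(a);
        for (int j = 1; (1 << j) <= n; ++j) {
            mx.emplace_back(n - (1 << j) + 1);
            for (int i = 0; i + (1 << j) <= n; ++i)
                mx[j][i] = std::max(mx[j - 1][i], mx[j - 1][i + (1 << (j - 1))]);
        }
    }
    // Maximum of a[l..r], inclusive.
    int query(int l, int r) const {
        int j = 31 - __builtin_clz(r - l + 1); // floor(log2(length)), GCC builtin
        return std::max(mx[j][l], mx[j][r - (1 << j) + 1]);
    }
};
```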

UPDATE: A commenter made a good observation: we don't care about the ranks of the other knights, only whether they are higher or lower than that of the popular knight. The range max queries can all be done on a boolean array, which is simpler. For example, by precomputing prefix sums one can determine the number of higher-ranked knights in an interval, and the popular knight will win a tournament if and only if the corresponding sum is zero.

### Ideal City

This one had me totally stumped yesterday, but I woke up this morning with what I think is a solution. It depends on a conjecture. Let's call the x-weight of a path the number of horizontal steps, and similarly for y-weight. Let's call a path from A to B x-shortest if it has minimal x-weight, and let the x-distance from A to B be the x-weight of this path (again, similarly for y-shortest and y-weight). I conjecture that the distance from A to B is the sum of the x-distance and the y-distance. Put another way, I conjecture that the shortest path from A to B is both x-shortest and y-shortest. I can't formally prove it, but the intuition is that any x-shortest path can be "straightened out" until it matches a shortest path, because there are no internal obstacles to get in the way.

Given that, we can separate the problem into finding the sum of x-distances and the sum of y-distances. Let's look at just the y-distances. The city can be divided into contiguous horizontal strips. Consider each such strip as a node in a graph, where two nodes are connected if the strips are directly connected. In fact, the graph must be a tree, since a cycle would imply that the city is not ideal. Also, travel within a strip is free when computing y-distance, so the y-distance between two blocks is equal to the number of tree edges between their corresponding nodes.

Computing the sum of all distances in a tree is a more conventional problem, solvable with standard recursive techniques. For each subtree, compute the sum of all distances within the subtree, the number of blocks, and the sum of distances from every block to the subtree's root. When merging a subtree into its parent, these values can be computed in O(1) time for the combined tree from the values of the two parts, so the whole process takes O(N) time.
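The recursive merge could look like this sketch (names are my own; for the real task each node would also carry the strip's block count rather than 1):

```cpp
#include <cassert>
#include <vector>

// For each subtree we keep: the number of blocks, the sum of distances
// from every block to the subtree root, and the sum of distances over
// all pairs within the subtree. Children are merged into the parent
// one at a time, in O(1) per merge.
struct Acc {
    long long count = 1;    // blocks in the subtree (this node alone at first)
    long long to_root = 0;  // sum of distances from each block to the root
    long long pairwise = 0; // sum of distances over all pairs in the subtree
};

Acc solve(const std::vector<std::vector<int>>& adj, int u, int parent) {
    Acc a; // just the node u on its own
    for (int v : adj[u]) {
        if (v == parent) continue;
        Acc c = solve(adj, v, u);
        c.to_root += c.count; // every block in the child is one edge farther
        // Pairs with one endpoint in the part merged so far, one in the child:
        a.pairwise += c.pairwise + a.count * c.to_root + c.count * a.to_root;
        a.to_root += c.to_root;
        a.count += c.count;
    }
    return a;
}
```

Calling `solve(adj, root, -1).pairwise` gives the sum of all pairwise distances in the tree.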

UPDATE:  I realised that this construction can also prove the conjecture (as did a commenter). Consider the shortest path, and assume it is not y-shortest. Project it onto the tree of horizontal strips. It is then not a shortest path through this tree, since all y steps are projected to steps in this tree. But a non-shortest path through a tree must visit some vertex twice, which means that the original path exits some strip and then re-enters it. We could then shorten the path by short-cutting directly between the exit and entry points along the path.

## Wednesday, September 26, 2012

### IOI 2012 day 1 analysis

I'm not at IOI this year, but since I didn't spot anything on the web I thought I'd post my thoughts on the Day 1 tasks. It looks like quite a tough problem set, although mostly because there are a lot of subtasks to think about and cases to handle.

UPDATE: The official webpage now has a page with hints. They seem to have essentially the same solutions as me for Crayfish Scrivener and Parachute Rings.

### Crayfish Scrivener

This is a task that I proposed.

Half the challenge here is thinking about the problem in the right way. Rather than trying to undo undos, just think of undo as a time machine: it takes you back to exactly the same state you were in N operations ago. So a simple implementation is just to maintain an array of strings through time, and whenever an undo is encountered, copy the string from N steps ago.

That's going to use up too much memory for the later subtasks. However, there is a lot of duplication, because for every L command, the next string looks almost the same as the previous one. We can compress the whole thing by using a trie: a tree where each edge is a letter, and each string is a path from the root through the tree. Any string can be encoded as just a pointer to a node in this tree. Adding a letter requires finding the child node with a particular letter (creating it if it doesn't already exist). An undo requires just a copy of a previous pointer. Thus, each step can be simulated in O(1) time and space.

There is still the problem of answering queries. For subtasks where the queries all follow the commands, this is quite easy: just extract the final string once into an array, and answer queries from the array. For the final subtask something smarter is required: some extra information must be stored in each node. A start is to store the node's depth and its parent. Each query can be translated into a query for the kth ancestor of the current node, and the parent pointers can be used to ascend the tree. To make it fast enough, one can also store pointers to the 2^i-th ancestor for all relevant i, which makes it possible to answer queries in O(log N) time. It also requires O(N log N) space: since the task description doesn't list the memory requirements I'm not sure if that will fit. It's also possible to store only the parent and the 2^i-th ancestor where 2^i is the largest power of 2 dividing the depth, for O(N) space.
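The tree-plus-binary-lifting idea could be sketched as follows (the class and method names are mine, not the official interface). For simplicity this version creates a fresh node per typed letter rather than sharing children, which changes only the memory constant, not correctness:

```cpp
#include <cassert>
#include <vector>

struct Scrivener {
    std::vector<char> letter;            // letter on the edge into each node
    std::vector<int> depth;
    std::vector<std::vector<int>> up;    // up[v][i] = 2^i-th ancestor of v
    std::vector<int> version;            // version[t] = node after operation t
    Scrivener() {
        letter.push_back(0);
        depth.push_back(0);
        up.push_back(std::vector<int>(1, 0)); // node 0 = empty string (root)
        version.push_back(0);
    }
    void type_letter(char c) {
        int parent = version.back();
        int v = letter.size();
        letter.push_back(c);
        depth.push_back(depth[parent] + 1);
        // Build the binary-lifting table for the new node.
        std::vector<int> anc{parent};
        for (int i = 0; (1 << (i + 1)) <= depth[v]; ++i)
            anc.push_back(up[anc[i]][i]);
        up.push_back(anc);
        version.push_back(v);
    }
    // Undo is just a copy of the version pointer from n operations ago.
    void undo(int n) { version.push_back(version[version.size() - 1 - n]); }
    // Letter at 0-based position pos of the current string, O(log N).
    char get_letter(int pos) const {
        int v = version.back();
        int k = depth[v] - 1 - pos; // how many steps to climb
        for (int i = 0; k > 0; ++i, k >>= 1)
            if (k & 1) v = up[v][i];
        return letter[v];
    }
};
```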

### Parachute Rings

For this problem, the trick was keeping track of all the cases. Firstly, it is quite easy to keep a histogram of the degree of each ring. We can also keep track of the components, and the number of vertices and edges in each component (this can be kept in the root of a standard union-find structure). Now, let's consider cases:
1. If there is a ring with at least 4 connections, it is the only possible critical ring, because removing any other ring will leave it with at least three connections. If there is more than one such ring, there can be no critical rings at all.
2. If there is a ring with exactly 3 connections, only it and its neighbours can be critical rings. There are thus at most four candidates to consider.
3. If all rings have at most 2 connections, there could be many critical rings. Every component will be either a chain or a cycle, and these can be distinguished by comparing the number of edges and vertices in the component. If there are no cycles, all rings are critical. If there is one cycle, all rings in that cycle are critical. Otherwise there are no critical rings.
The last case can be checked as we go, but the others are a little more tricky because they identify only candidates, not definitely critical rings. However, at most four rings can become candidates during the lifetime of the program: the first to have at least three connections, plus those three neighbours. As soon as a ring becomes a candidate, we can replay all the previous connections on another union-find data structure with that ring removed, and thereafter maintain that structure so that it can be examined after future connections.

I've omitted one minor detail: we also need to maintain in each union-find structure whether there are 0, 1, or more cycles, and if 1, the size of the component with 1 cycle. This can quite easily be maintained as the connections are made. Thus, total memory is O(N) and total time is O(N α(N)).
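The per-component bookkeeping might look like this sketch (names are mine). It omits the candidate-replay machinery and just shows a union-find structure tracking vertex and edge counts, from which cycles can be recognised (a component is a cycle exactly when edges equal vertices, given maximum degree 2):

```cpp
#include <cassert>
#include <vector>

struct DSU {
    std::vector<int> parent, verts, edges;
    explicit DSU(int n) : parent(n), verts(n, 1), edges(n, 0) {
        for (int i = 0; i < n; ++i) parent[i] = i;
    }
    int find(int x) { return parent[x] == x ? x : parent[x] = find(parent[x]); }
    void link(int a, int b) {
        int ra = find(a), rb = find(b);
        if (ra == rb) { ++edges[ra]; return; } // this connection closes a cycle
        parent[rb] = ra;                       // merge components, pooling counts
        verts[ra] += verts[rb];
        edges[ra] += edges[rb] + 1;
    }
    bool is_cycle(int x) { int r = find(x); return edges[r] == verts[r]; }
};
```

In the full solution one would maintain up to five such structures (the unrestricted one plus one per candidate ring), replaying past connections when a new candidate appears.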

### Pebbling odometer

This seems like the hardest problem, if only because it takes the most time to work through all the different subtasks. I haven't tried to implement these, so my analysis may be a little off. In terms of implementation, the major annoyance of the odometer language is the lack of subroutine calls. It is likely that you will want to write metaprograms to generate the programs, where a subroutine call is implemented by duplicating code and adjusting internal labels. There is also no mechanism for storing internal state, so state needs to be stored either in the instruction pointer (by duplicating code) or in the grid itself through counts of pebbles.

For the first subtask, since we don't need to preserve the pebbles, we can just pair off pebbles until one of the piles is empty. Here's an outline of the process, ignoring all the details of moving around the grid.
1. If there is no pebble at (0, 0), then terminate at (0, 0).
2. Pick up a pebble from (0, 0).
3. If there is no pebble at (0, 1), then terminate at (0, 1).
4. Pick up a pebble from (0, 1).
5. Repeat.

For the second subtask, we can no longer destroy the state by picking up pebbles. However, we can just move them somewhere for safe keeping. Start by moving all the pebbles in (0, 0) to (1, 0) and from (0, 1) to (1, 1) (using a loop in each case). Then apply the algorithm for subtask 1, and once the result has been decided, move any remaining pebbles back to the top row. Note that there will need to be two code paths for restoring the pebbles, depending on where you want to terminate.

This should be doable by moving the pebbles towards each other until they meet. First move right until finding the first pebble. Then
1. Pick up a pebble.
2. Move forward one square.
3. If there is already a pebble on this square, terminate.
4. Add a pebble to this square.
5. Continue forward until reaching the other pebble.
6. Turn around.
7. Repeat.