Saturday, December 02, 2023

IOI 2023: Overtaking

I found this to be the easiest of the IOI 2023 problems — it's the only one where I had a rough idea of how to solve it within a few minutes of thinking. It's still not easy though.

There are a few key observations that simplify the problem:

  1. A faster bus has no impact on the arrival times of a slower bus (even indirectly).
  2. This means that we can simply delete any buses faster than (or as fast as) the reserve bus, as doing so won't affect our answers.
  3. After doing that, the arrival times of all the non-reserve buses are fixed, independent of Y.
  4. The arrival time of the reserve bus is a non-decreasing function of Y.

We'll simulate the process in a spatial sweep along the road. At each sorting station, we'll answer two questions:

  1. At what times do the non-reserve buses arrive here?
  2. For a given arrival time for the reserve bus, what is the range of Y values which would cause the reserve bus to arrive here at that time?

To answer the first question, we first need to know the order in which buses leave the previous sorting station, which is just a sort by departure time and then by speed. A prefix maximum then gives the arrival times at the current station. To answer the second, we'll start by answering it specifically for times at which non-reserve buses arrive at this station. Consider a time t, and let i be the slowest bus arriving at time t. If the reserve bus leaves the previous station between times t-W[i]d (exclusive) and t-Xd (inclusive), where d is the distance from the previous station, then it will arrive at time t. If it leaves any earlier it will be ahead of the traffic and arrive earlier, and if it leaves any later it will arrive later even if it doesn't encounter any traffic. To turn that range of departure times into a range of Y values, we rely on dynamic programming: we look up the interval of Y values corresponding to each endpoint and take their union.
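
As a concrete illustration of the first question, here is a minimal C++ sketch of the sort-and-prefix-max step. The names are my own (prev holds arrival times at the previous station, W[i] is bus i's time per unit of road, d is the distance from the previous station) rather than the exact contest interface.

    #include <algorithm>
    #include <limits>
    #include <numeric>
    #include <tuple>
    #include <vector>

    // Arrival times of the non-reserve buses at the next sorting station.
    std::vector<long long> advance(const std::vector<long long>& prev,
                                   const std::vector<long long>& W, long long d) {
        int n = (int)prev.size();
        std::vector<int> order(n);
        std::iota(order.begin(), order.end(), 0);
        // Sort by departure time, breaking ties by speed (faster buses first).
        std::sort(order.begin(), order.end(), [&](int a, int b) {
            return std::tie(prev[a], W[a]) < std::tie(prev[b], W[b]);
        });
        std::vector<long long> out(n);
        long long running = std::numeric_limits<long long>::min();
        for (int i : order) {
            // Prefix maximum: each bus is delayed to the largest expected
            // arrival time among buses that left no later than it did.
            running = std::max(running, prev[i] + W[i] * d);
            out[i] = running;
        }
        return out;
    }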

What about answering the second question for times other than those when the non-reserve buses arrive? In those cases, the reserve bus did not encounter any traffic, so the answer for time t is simply the answer for the previous station at time t-Xd.

We'll need a data structure to represent all this. Conceptually it will be a non-decreasing function mapping Y to an arrival time. It will need to support the following operations:

  • Initialise with the identity function.
  • Increment all values by Xd. This can be handled by maintaining an offset outside of the main data structure.
  • Set all values in the interval (u, v] to t.
  • Find the intercept for a given output value.

A suitable data structure is a balanced binary tree, where each node is an interval of Y values with a fixed output t. The intervals must be non-overlapping. For any Y value not contained in one of the intervals, the function is the identity. Setting an interval (u, v] to t requires

  • Deleting any existing intervals contained entirely in that range.
  • Trimming any intervals that overlap the range.

The deletion means each insertion could take linear time, but it is amortised since each inserted interval can only be deleted once.
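
Here is a minimal sketch of such a structure built on std::map rather than a hand-rolled balanced tree. The conventions are my own: an interval (u, v] with constant output t is stored under key v, with stored values kept relative to the offset so that incrementing everything by Xd is O(1).

    #include <map>
    #include <utility>

    struct PiecewiseMap {
        long long offset = 0;  // added to every output, identity included
        // key = v (right endpoint) -> {u, t - offset} for the interval (u, v]
        std::map<long long, std::pair<long long, long long>> seg;

        void increment(long long delta) { offset += delta; }

        // f(y): the constant of the covering interval if any, else identity.
        long long eval(long long y) const {
            auto it = seg.lower_bound(y);                 // first v >= y
            if (it != seg.end() && it->second.first < y)  // u < y <= v
                return it->second.second + offset;
            return y + offset;
        }

        // Set f(y) = t on (u, v]: erase intervals inside the range, trim the
        // ones straddling its ends, then insert the new interval. Erasures
        // are amortised, since each interval is erased at most once.
        void set(long long u, long long v, long long t) {
            if (u >= v) return;
            auto it = seg.upper_bound(u);                 // first interval ending past u
            while (it != seg.end() && it->second.first < v) {
                long long lo = it->second.first, hi = it->first;
                long long val = it->second.second;
                it = seg.erase(it);
                if (lo < u) seg.emplace(u, std::make_pair(lo, val));  // left remnant
                if (hi > v) seg.emplace(hi, std::make_pair(v, val));  // right remnant
            }
            seg.emplace(v, std::make_pair(u, t - offset));
        }
        // The intercept query is a similar lower_bound on T - offset; as noted
        // below, after initialisation the map can be flattened into a sorted
        // vector and binary-searched.
    };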

This requires O(NM log(NM)) for initialisation and O(Q log(NM)) for the queries. I have one gripe with this problem, although I don't know if it is the fault of the IOI or just the IOI mirror on Codeforces: my solution was about 25% slower than the model solution, and exceeded the time limit. To make it pass, after initialisation I converted the std::map to a std::vector and used binary search on it to answer the queries, which is not a big-O change but was slightly more efficient and allowed it to get a full score.

Sunday, November 26, 2023

IOI 2023: Beech Tree

Here's my solution to Beech Tree, from IOI 2023 day 2. For a while I hated the problem, but after seeing the solution I think it's quite beautiful (excuse the pun). It's definitely a problem that requires a lot of thinking time before writing relatively little code.

The key is to identify the conditions under which a subtree is beautiful, in a way that can easily be checked. I'll talk just about whether a "tree" is beautiful, because I'll want to talk about subtrees of that tree. I'll also ignore the original labelling and just work with the permuted labels, which eliminates a level of indirection from all the indices. So for example, the parent of A must be f(A). I'll also talk freely about the colour of a node, by which I mean the colour of the edge between that node and its parent (C[i]).

We can make quite a few observations about a beautiful permutation; I'll focus on the relevant ones here. Some of them seem to come from nowhere: I found them by trying to work out a procedure for choosing a beautiful permutation, and then discovering conditions under which it would fail.

  1. If A and B have the same colour, then A < B if and only if f(A) < f(B). This follows directly from the definition of f.
  2. If A is the parent of B, then A < B. Proof: A must be f(B), and f(B) is the number of times C[B] appears in a sequence of length B - 1, so A = f(B) ≤ B - 1.
  3. No node can have two children of the same colour. Proof: suppose A has two children X < Y with the same colour. Then A = f(X) < f(Y) = A, a contradiction.
  4. Let S(X) be the set of colours of children of X. If A < B, then S(A) ⊇ S(B). Proof: Suppose colour D appears in S(B) but not S(A), i.e., B has a child X of colour D but A does not. X must be the B'th node with colour D (counting from 0). The A'th node with colour D should have A as its parent, a contradiction since A has no child of colour D.
  5. If A < B, then |T(A)| ≥ |T(B)|. Proof: suppose this is not the case, i.e., there exist A < B with |T(A)| < |T(B)|. We can choose the largest such B. Since S(A) ⊇ S(B), there must be some colour D such that A and B have children X and Y of that colour with |T(X)| < |T(Y)|. But since f(X) = A < B = f(Y), we get X < Y, and Y > B because B is Y's parent, so the pair (X, Y) contradicts the maximality of B.
  6. Let z(A, D) be the size of the subtree rooted at A's child of colour D, or 0 if A does not have a child of colour D. If A < B, then z(A, D) ≥ z(B, D) for all colours D. Proof: Suppose this is not true for some A, B, D. Observation 4 rules out z(A, D) = 0, z(B, D) > 0. Let the children of A and B with colour D be X and Y. Then f(X) = A < B = f(Y), so X < Y, but |T(X)| < |T(Y)|, contradicting observation 5. 
  7. If |T(A)| ≥ |T(B)|, then z(A, D) ≥ z(B, D) for all colours D. Proof: if A < B then this follows from (6), so assume A > B, in which case (6) gives z(A, D) ≤ z(B, D) for all D. But |T(A)| = 1 + Σ z(A, D), so if any of those inequalities were strict we would have |T(A)| < |T(B)|, a contradiction; hence they are all equalities.

We now claim that if a tree satisfies conditions 3 and 7, it is a beautiful tree. Note that these conditions are independent of the labelling, and hence can easily be tested. The proof is constructive, in the form of an algorithm to assign the labels in increasing order. Maintain a queue of unlabelled nodes per colour; the head of each queue must be the next node of that colour to be assigned a label. To start, consider the root to have colour 0, and place it in the queue for colour 0. All other queues are empty.

To advance, select the head-of-queue with the largest |T(X)|, and assign it the next available label. Then place its children at the tails of their respective queues. This process maintains the following invariants:

  • Within each queue, |T(X)| is non-increasing.
  • The elements in the queues have (non-strictly) smaller |T(X)| than any labelled node.
  • The labels are assigned in non-increasing order of |T(X)|.

The proof is left as an exercise for the reader. A key observation is that if some node A is labelled, and it does not have a child of colour D to append to the D queue, then no other nodes will later be appended to the D queue, because 0 = z(A, D) ≥ z(B, D) for any B > A. This ensures that every node will have the correct parent.
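
For concreteness, here is a hedged sketch of that construction. It is meant as an illustration of the proof, not as part of the efficient solution (scanning all queues per step makes it O(NM)); children, colour and sz are assumed precomputed, with sz[x] = |T(x)| and colours numbered 1..M so that index 0 is free for the root.

    #include <deque>
    #include <vector>

    // Returns the nodes in label order, assuming conditions 3 and 7 hold.
    std::vector<int> label_order(int root, int M,
                                 const std::vector<std::vector<int>>& children,
                                 const std::vector<int>& colour,
                                 const std::vector<int>& sz) {
        std::vector<std::deque<int>> q(M + 1);
        q[0].push_back(root);         // the root gets the artificial colour 0
        std::vector<int> order;
        while (true) {
            int best = -1;            // colour whose head has the largest |T(X)|
            for (int d = 0; d <= M; ++d)
                if (!q[d].empty() &&
                    (best == -1 || sz[q[d].front()] > sz[q[best].front()]))
                    best = d;
            if (best == -1) break;    // all queues empty: every node is labelled
            int x = q[best].front();
            q[best].pop_front();
            order.push_back(x);       // x receives the next label
            for (int y : children[x]) // its children join their colours' queues
                q[colour[y]].push_back(y);
        }
        return order;
    }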

Now we are left with the problem of testing conditions 3 and 7 for every subtree. Testing condition 3 is quite easy. For condition 7 it helps to note that the condition is transitive, so that if |T(A)| ≥ |T(B)| ≥ |T(C)| and (A, B), (B, C) each satisfy the condition, it is not necessary to check (A, C) as well. We can thus easily check a tree of size K in O(KM) time, but this will be too slow to get a full score. There are two observations that help speed this up:

  1. To test the condition for some A, B, we can just iterate over S(B), and for each colour D look up z(A, D) (see the sketch after this list). If we sort the children of each node by colour, this will take O(|S(B)| log M) time. A tree of size K has only O(K) edges, so checking a whole tree can be done in O(K log M) time.
  2. We can re-use results from subtrees. If any child of A is not the root of a beautiful subtree, then nor is A. If they all are, we can use small-into-large merging to combine information about the subtrees. For each subtree, keep an ordered map from subtree size to a representative vertex of that size. Then we can merge subtrees by taking each element from the smaller map and inserting it into the larger map, checking that it has the appropriate relationship with its immediate neighbours. Each insertion requires O(log N) time to find the location in the map structure and amortised O(log M) to perform the verification, and each node will be inserted O(log N) times. Thus, the running time is O(N log N (log N + log M)).
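
The pairwise check from step 1 might look like the following sketch, with my own names: children[x] lists x's children pre-sorted by colour, colour[x] is C[x], and sz[x] is |T(x)|.

    #include <algorithm>
    #include <vector>

    // Does A dominate B, i.e. z(A, D) >= z(B, D) for every colour D in S(B)?
    bool dominates(int A, int B,
                   const std::vector<std::vector<int>>& children,
                   const std::vector<int>& colour,
                   const std::vector<int>& sz) {
        for (int y : children[B]) {   // each child of B contributes a colour of S(B)
            // Binary search for A's child of the same colour, if any.
            auto it = std::lower_bound(
                children[A].begin(), children[A].end(), colour[y],
                [&](int x, int c) { return colour[x] < c; });
            if (it == children[A].end() || colour[*it] != colour[y])
                return false;         // z(A, D) = 0 while z(B, D) > 0
            if (sz[*it] < sz[y])
                return false;         // z(A, D) < z(B, D)
        }
        return true;
    }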

This sounds complicated but requires less than 100 lines of code.

Sunday, November 19, 2023

IOI 2023: Day 1

I've finally gotten around to looking at the day 1 problems from IOI 2023, and they're really tough. Congratulations to those who solved any of them. It's been pointed out that writeups of solutions seem to be absent from the internet, so I'll try to remedy that, at least for day 1 (I haven't had time to read the problems for day 2 yet, and probably won't until early next year).

For now I'll leave out Closing Time, since there is a solution at the link above, and I don't have a good proof of efficiency for my own solution. I might come back to it.

Longest trip

Finding the longest path in general graphs is NP-hard, so we should expect that we'll need to exploit the density condition in some way. I'll only discuss D=1, since the other cases are strictly easier. Let's first consider how many components there can be. There cannot be three (or more), because then we could pick one vertex from each component and this triple would violate the density condition. If there are two components, then by picking any two vertices from one component and one vertex from the other, we see that the two vertices in the same component must be connected, i.e., both components are complete graphs. In that case the longest path is easy to find: just walk the vertices of the larger component in any order.

What about when there is only a single component? It's certainly not required to be complete (the first example shows that), but perhaps we're still guaranteed to be able to find a Hamiltonian path?

Building a single path a step at a time will be difficult, but we can get close by building two paths at once. We can initialise the paths with the first two vertices, and then take each remaining vertex in turn and try to append it to one of the paths. If it doesn't have an edge to the end of either path, then the ends of the two paths must be connected to each other. In this case they can be joined into a single path, and we can start a new second path with the vertex we just tried to add.
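
Here is a sketch of this phase, assuming the problem's are_connected(std::vector<int>, std::vector<int>) primitive; it makes no attempt yet at the query optimisations discussed below.

    #include <utility>
    #include <vector>

    bool are_connected(std::vector<int> A, std::vector<int> B);  // grader-provided

    // Distribute vertices 0..N-1 over two paths a and b.
    std::pair<std::vector<int>, std::vector<int>> build_two_paths(int N) {
        std::vector<int> a = {0}, b = {1};
        for (int r = 2; r < N; ++r) {
            if (are_connected({a.back()}, {r})) {
                a.push_back(r);                  // extend the first path
            } else if (are_connected({b.back()}, {r})) {
                b.push_back(r);                  // extend the second path
            } else {
                // Density on {a.back(), b.back(), r}: both edges to r are
                // absent, so a.back() and b.back() must be adjacent. Join b
                // (reversed) onto a and start a fresh second path with r.
                a.insert(a.end(), b.rbegin(), b.rend());
                b.assign(1, r);
            }
        }
        return {a, b};
    }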

Once we've processed all the vertices, we will have them organised into two paths, but we don't know whether they lie in separate components. A single query can determine that. If they are separate, we just return the longer of the two paths. If not, we want to find a way to link the paths together. The following procedure will do the trick:
  1. Test if either end of one path is connected to either end of the other. If so, then we can just chain them together trivially.
  2. If not, then the density condition requires that the end of each path is connected to its beginning, so the paths are in fact cycles. Now find any edge that connects one cycle to the other. By cutting each cycle next to this edge, we obtain a Hamiltonian path.

This algorithm requires O(N) queries, but without careful implementation, the constant factor will be too high to keep q below 400. Let's see how we can improve things.

Let's start with the first part of the algorithm, where we build two paths. Let's call the current last elements of each path P and Q, and the element we're adding R. In some cases we will have information that P and Q are definitely not connected. If that's the case, and we learn that P and R are also not connected, then we know that Q and R are connected, without needing a query. So when we know P and Q are not connected, only one query is needed to make progress. When P and Q might be connected, we might need two queries, but if so we have a good chance of transitioning to a state where we know P' and Q' are not connected. The only time we won't is if

  • P is connected to Q; and
  • neither P nor Q is connected to R; and
  • both paths have more than one element.

The above conditions cannot apply twice in a row (because afterwards one of the paths has a single element), but this still allows for an average of 5/3 queries per element. However, we're also given the information that the grader is not adaptive, and simply randomising the order of elements seems to be sufficient to keep the actual number of queries down.

We can also optimise the final steps in attempting to merge the two paths together. To check whether it is possible to connect the two paths end-to-end, a single query can be used to determine whether either end of one path is connected to either end of the other; if this returns true, further queries can be used to isolate the specific connection. While that requires more queries in the true case, it reduces the queries for the false case (which is more expensive, because we then have to find some other edge between the cycles). Finding an edge between the cycles can be done in a logarithmic number of queries, by first binary searching on one path, then the other.
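
The binary search might look like this sketch (my own helper, again on top of the problem's are_connected). Each call maintains the invariant that s contains an endpoint of an edge into other.

    #include <vector>

    bool are_connected(std::vector<int> A, std::vector<int> B);  // grader-provided

    // Given that some edge joins s to other, halve s until one endpoint remains.
    int isolate(std::vector<int> s, const std::vector<int>& other) {
        while (s.size() > 1) {
            std::vector<int> half(s.begin(), s.begin() + s.size() / 2);
            if (are_connected(half, other))
                s = half;             // a crossing edge starts in the first half
            else
                s.erase(s.begin(), s.begin() + s.size() / 2);
        }
        return s[0];
    }
    // e.g. int u = isolate(a, b); then int v = isolate(b, {u}); the edge is u-v.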

The official model solution does some slightly more complicated things in the case that P and Q are connected; I suspect this might guarantee that the solution completes within 400 queries without depending on randomisation. A solution that can guarantee no more than 1.5 queries to add each R will have the following budget:

  • 381 queries for the first phase (only 254 elements need to be added, since the first two are used to initialise the paths)
  • 1 query to check whether there is any connection between the paths
  • 1 query to check whether there is a connection between the ends of the paths
  • 15 queries for the binary search (8 for the larger side, 7 for the smaller), 

for a total of 398.

Soccer

Let's start by figuring out what a "regular" field looks like. Within each row, let's call all the cells that form part of the field a "span". A span has to be contiguous, since there is no way to kick a ball around a gap in a row in only two straight kicks. Now consider two spans (in different rows). Each span can be seen as an interval of column numbers, and if neither of them is a (non-strict) sub-interval of the other, then the field is not regular: there will be a cell in each span whose column does not belong to the other span, and there is no way to kick a ball between these two cells with only two straight kicks.

Let's consider the longest span (ties can be broken arbitrarily), which by the observations above is also a super-interval for every other span. Moving away from this longest span either upwards or downwards, one must encounter progressively shorter spans (again, non-strictly) as otherwise there will be a column with non-contiguous cells in the field. It shouldn't be too hard to convince yourself that these conditions are also sufficient for a field to be regular.

We can think about constructing a maximal field by adding spans one at a time, starting with the longest span and then adding spans either above or below. This naturally leads to a recursive solution: given a partially constructed field (consisting of spans in contiguous rows), consider adding a new span either above or below the existing spans, whose extent must be limited to the intersection of all the existing spans. We need to start the recursion from the widest span and we don't know where that will be, so we can just use an outer loop that considers all possibilities.

This recursion will consider all possible regular fields, but since we only want to know about the biggest one, we can immediately do some pruning. Consider a span to be "maximal" if it cannot be expanded (under whatever conditions it is being selected) by adding more cells to the left or right. When recursing, we only need to consider maximal spans, since this adds more cells to the field and does not reduce our choices later in the recursion. Similarly, only maximal spans need be considered as the initial (widest) span.

This exponential-time solution can be reduced to polynomial time with memoisation in a pretty standard way. The only state needed in each recursive call is the range of rows for which we already have spans and the range of columns that is the intersection of all chosen spans.

This gives O(N⁴) states and an even larger number of edges between states, so we need to optimise this to solve the final subtask. Let's start with a few optimisations that won't obviously reduce the big-O. Immediately after adding a new span, see if we can add more spans (on both sides) without narrowing the column range at all. It will never be suboptimal to add these now, so we can do it immediately. We want this process to be efficient, which we can achieve by building lookup tables for the next tree upwards and downwards of any given position, and precomputing a range-max query structure along each row. We can also use lookup tables of next tree and next non-tree along each row to quickly find all the maximal spans in the new row we're adding.
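
As a sketch, the row-wise tables can be built as follows, assuming the problem's grid F where a non-zero entry marks a tree.

    #include <vector>

    // nxt[r][c]: first column >= c in row r containing a tree (N if none).
    // prv[r][c]: last column < c in row r containing a tree (-1 if none).
    void build_tables(int N, const std::vector<std::vector<int>>& F,
                      std::vector<std::vector<int>>& nxt,
                      std::vector<std::vector<int>>& prv) {
        nxt.assign(N, std::vector<int>(N + 1, N));
        prv.assign(N, std::vector<int>(N + 1, -1));
        for (int r = 0; r < N; ++r) {
            for (int c = N - 1; c >= 0; --c)
                nxt[r][c] = F[r][c] ? c : nxt[r][c + 1];
            for (int c = 0; c < N; ++c)
                prv[r][c + 1] = F[r][c] ? c : prv[r][c];
        }
        // The maximal span through an empty cell (r, c) covers columns
        // prv[r][c] + 1 through nxt[r][c] - 1.
    }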

While this will clearly help in some special cases, it's not clear why it should improve the big-O. Let a "maximal rectangle" be one that doesn't include a tree and cannot be made into a bigger rectangle by adding more cells to it (without including a tree). Then it is not difficult to show that each time we need to decide on the next span, the current row and column range define a maximal rectangle: our recent optimisation is to extend the rectangle vertically, and our choice to only include maximal spans means it cannot be extended horizontally. So the number of memoisation states is bounded by the number of maximal rectangles.

What's the maximum number of maximal rectangles? It is O(N²). Consider fixing the bottom row, and fixing the tree that bounds the top edge (treat the boundary as surrounded by trees). Since this tree must be the lowest in its column above the bottom edge, there are only O(N) such trees for each choice of bottom row. Once those two parameters are fixed, it is easy to see that the rest of the maximal rectangle is fixed too (simply grow leftwards and rightwards of the chosen tree as far as possible). Note that not all such rectangles are maximal, because we haven't considered that the bottom could be extended. So in fact, the number of rectangles bounded by trees on 3 sides is O(N²).

This means that we have O(N²) states, but what about the number of state transitions? We'll consider only transitions that start by adding a new row on top (adding to the bottom is symmetrical). At first glance, it appears that this could be O(N³), because from each state there could be O(N) successor states. However, we can associate each transition with a unique rectangle bounded on top, left and right; and we showed earlier that there are only O(N²) of these. Specifically, for each transition, consider the intersection of the old and new state rectangles, and call it the "transition rectangle". This will differ from the new state only in the bottom edge, so it will be 3-side bounded as claimed. We can reconstruct the new state from the transition rectangle simply by extending the bottom edge as far as it can go; we can also reconstruct the old state as follows:
  1. Remove one row at a time from the top until it is possible to extend the left or right edges.
  2. Extend the left and right edges as much as possible.

Since we're able to reconstruct the old and new states from the transition rectangle, the transition rectangle must be unique to the transition, and hence there can only be O(N²) transitions.

The overall complexity depends on the implementation of the range max query and of the data structure used for memoisation. With practical implementations the algorithm will have O(N² log N) time and either O(N²) or O(N² log N) space, depending on the range max data structure used.

Incidentally, while the official model solution explicitly computes all the maximal rectangles, it's not actually necessary to do so. In my solution they are needed only to prove the efficiency.