Cave
This is a nice problem, following a recent IOI trend of problems where the limiting factor is not execution time but some abstract measure of efficiency. It is slightly easier than some of the previous problems in this vein.
To achieve 100%, we can use up to 14 queries per door. Since 2^14 is slightly more than the number of doors, this strongly suggests some form of binary search on each door. It will be easiest if we attack the doors in order. Once we know which switch and position opens a door, we can lock that switch in place so that we will always be able to see the next door, and thereafter pretend the switch does not even exist.
To solve for a door, we can start with all undetermined switches down: whether or not the door opens immediately tells us which switch position (down or up) opens it. At this point every switch is a candidate. We can then test with half the candidates down and half up, which eliminates half of them. Repeating until only one candidate remains costs about log2(N) ≈ 13 further queries, for 14 in total, as sketched below.
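A minimal sketch of this procedure, using the task's tryCombination()/answer() grader interface (tryCombination returns the index of the first closed door, or -1 if every door is open); the variable names are my own:

```cpp
#include "cave.h"   // contest grader: tryCombination(), answer()
#include <vector>

void exploreCave(int N) {
    std::vector<int> S(N), pos(N), D(N);  // probe vector / locked position / door per switch
    std::vector<bool> known(N, false);    // switch already matched to a door

    for (int door = 0; door < N; door++) {
        // Probe with every undetermined switch down to learn the opening position.
        for (int i = 0; i < N; i++) S[i] = known[i] ? pos[i] : 0;
        int open = (tryCombination(S.data()) == door) ? 1 : 0;

        // Binary search: the controlling switch lies in [lo, hi].
        int lo = 0, hi = N - 1;
        while (lo < hi) {
            int mid = (lo + hi) / 2;
            for (int i = 0; i < N; i++)
                if (!known[i]) S[i] = (i <= mid) ? open : 1 - open;
            if (tryCombination(S.data()) != door) hi = mid;   // door opened: culprit in [lo, mid]
            else lo = mid + 1;                                // still closed: culprit above mid
        }
        known[lo] = true;
        pos[lo] = open;                   // lock the switch so the door stays open
        D[lo] = door;
    }
    for (int i = 0; i < N; i++) S[i] = pos[i];
    answer(S.data(), D.data());
}
```

Each door costs one probe to learn the opening position plus at most 13 probes for the binary search, matching the budget of 14.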
Robots
As is commonly the case for problems that ask how long some agents need to achieve a goal, the answer can be found by binary searching on it. So we need to decide whether the robots can clear the floor in S seconds.
We can simplify the problem slightly by noting that for each toy, we only need to know how many weak and how many small robots are able to move it, which can be found by binary searching the two lists (after sorting them). Of course, if a toy cannot be moved by any robot, return -1.
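With both limit lists sorted ascending, this counting is a single std::upper_bound per toy, and a toy is movable at all iff one of the counts is positive. A small sketch (the names are my own; a robot can move a toy only if its limit strictly exceeds the toy's weight or size):

```cpp
#include <algorithm>
#include <vector>

struct Toy { int weight, size; };

// Robots whose limit strictly exceeds v, given limits sorted ascending.
int countAbove(const std::vector<int>& limits, int v) {
    return (int)(limits.end() - std::upper_bound(limits.begin(), limits.end(), v));
}

// If this is false for any toy, the answer is -1.
bool movable(const Toy& t, const std::vector<int>& X, const std::vector<int>& Y) {
    return countAbove(X, t.weight) > 0 || countAbove(Y, t.size) > 0;
}
```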
Let's first decide the actions of the weak robots, starting from the weakest. There will be some set of toys it can handle. Since those toys effectively differ only in size, the weakest robot should work from the largest downwards, so as to make the job of the small robots easier. Also, there is never any reason for it to move fewer than S toys, unless it runs out. Now consider the 2nd-weakest robot. There will be extra toys it can handle, plus any light toys that the weakest robot didn't have time for. Since the weakest robot is finished, the differences in weight are irrelevant, and the 2nd-weakest robot should again work in decreasing order of size amongst the toys it can handle. The same process can be continued for the remaining weak robots.
Now consider the small robots, from largest to smallest. These can again take up to S toys, starting from the largest remaining one. If a robot is unable to handle the largest remaining toy, then S was too small.
Implementation can be done using a priority queue, implemented as a binary heap, representing toys that are light enough to be handled by the current robot and ordered with the largest at the head. The toys are initially sorted by weight. Each time a new weak robot is considered, new elements are added from the list of toys to the priority queue, and the robot then removes items starting from the head of the queue.
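A sketch of the whole feasibility test (Toy as in the earlier snippet; canFinish and the other names are mine):

```cpp
#include <cstddef>
#include <queue>
#include <vector>

struct Toy { int weight, size; };   // as in the earlier snippet

// Can the robots put away all toys within S seconds?  X = weak-robot weight
// limits, Y = small-robot size limits, both sorted ascending; toys sorted by
// weight ascending.  Assumes the -1 case has already been ruled out.
bool canFinish(long long S, const std::vector<int>& X, const std::vector<int>& Y,
               const std::vector<Toy>& toys) {
    std::priority_queue<int> bySize;   // sizes of admissible toys, largest on top
    std::size_t next = 0;

    // Weak robots, weakest first: admit every toy strictly lighter than the
    // current limit, then remove up to S toys, largest size first.
    for (int limit : X) {
        while (next < toys.size() && toys[next].weight < limit)
            bySize.push(toys[next++].size);
        for (long long t = 0; t < S && !bySize.empty(); t++) bySize.pop();
    }

    // Everything left (plus toys too heavy for all weak robots) goes to the
    // small robots, largest robot first, each taking up to S largest toys.
    while (next < toys.size()) bySize.push(toys[next++].size);
    std::vector<int> rest;
    while (!bySize.empty()) { rest.push_back(bySize.top()); bySize.pop(); }  // decreasing size
    std::size_t idx = 0;
    for (auto it = Y.rbegin(); it != Y.rend(); ++it)
        for (long long t = 0; t < S && idx < rest.size() && rest[idx] < *it; t++)
            ++idx;
    return idx == rest.size();
}
```

The answer is then the smallest S in [1, T] for which canFinish returns true, found by the usual binary search over S.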
Assuming that T is larger than A, B, the running time will be O(T (log T)^2): one log T for the binary search, the other for the priority queue operations. I think that it may be possible to reduce this to something like O(T·log T·log(max(A, B))) using the right sort of data structure for the priority queue (to allow S items to be removed in log time): something like an interval tree for the number of toys of each size.
Game
I disliked this problem because it has a nice solution that takes just a bit too much memory. I only managed to get 80% for it in the time I spent on it, and I didn't feel inspired to modify my solution to pass fully.
In 1D, this can be solved by a fairly straightforward use of a segment tree: each node stores the GCD of its two children. Since R can be quite big, this needs to be a sparse segment tree (nodes allocated only when an update touches them); an alternative would be a balanced binary tree.
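A minimal sketch of the sparse 1D tree (my own naming; gcd(0, x) = x, so 0 is a safe identity for absent nodes):

```cpp
#include <numeric>   // std::gcd (C++17)

using ll = long long;

struct Node {
    ll g = 0;                        // GCD over this node's whole interval
    Node *l = nullptr, *r = nullptr; // children, created lazily
};

void update(Node*& t, ll lo, ll hi, ll pos, ll val) {
    if (!t) t = new Node();
    if (lo == hi) { t->g = val; return; }
    ll mid = lo + (hi - lo) / 2;
    if (pos <= mid) update(t->l, lo, mid, pos, val);
    else            update(t->r, mid + 1, hi, pos, val);
    t->g = std::gcd(t->l ? t->l->g : 0, t->r ? t->r->g : 0);
}

ll query(Node* t, ll lo, ll hi, ll qlo, ll qhi) {
    if (!t || qhi < lo || hi < qlo) return 0;
    if (qlo <= lo && hi <= qhi) return t->g;
    ll mid = lo + (hi - lo) / 2;
    return std::gcd(query(t->l, lo, mid, qlo, qhi),
                    query(t->r, mid + 1, hi, qlo, qhi));
}
```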
In 2D, it is tempting to use a quadtree, but in fact that doesn't guarantee poly-logarithmic time: a 1×C query will force refinement down to the individual non-zero cell entries. Instead, we can use a range tree, which is a tree of trees: an outer tree is built over the columns, and for each column span corresponding to a node in this tree, we have an inner tree over the rows. Each node in this inner tree corresponds to a rectangle of the grid, and stores the GCD of elements from this rectangle. A query now uses the columns to select O(log C) nodes from the outer tree, i.e., O(log C) inner trees, and applies a query to each of them. Queries thus take O(log R·log C) time when the implementation uses segment trees for the inner and outer trees. With balanced binary trees it would only be O((log N_u)^2), where N_u is the number of updates.
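The 1D sketch extends to the tree-of-trees like so (again my own naming, and note that this is precisely the memory-hungry segment-tree variant discussed next): the outer tree is sparse over columns, each outer node owns an inner row tree, and an update rebuilds the inner leaf on each outer node along the path as the GCD of its two children's values at that row.

```cpp
// Reuses Node, update() and query() from the 1D sketch above.
struct Outer {
    Node* rows = nullptr;            // inner tree over rows for this column span
    Outer *l = nullptr, *r = nullptr;
};

void update2d(Outer*& t, ll lo, ll hi, ll row, ll col, ll val, ll R) {
    if (!t) t = new Outer();
    if (lo == hi) {                  // single column: store the cell itself
        update(t->rows, 0, R - 1, row, val);
        return;
    }
    ll mid = lo + (hi - lo) / 2;
    if (col <= mid) update2d(t->l, lo, mid, row, col, val, R);
    else            update2d(t->r, mid + 1, hi, row, col, val, R);
    // Rebuild this node's entry for `row` from its two children.
    ll g = std::gcd(t->l ? query(t->l->rows, 0, R - 1, row, row) : 0,
                    t->r ? query(t->r->rows, 0, R - 1, row, row) : 0);
    update(t->rows, 0, R - 1, row, g);
}

ll query2d(Outer* t, ll lo, ll hi, ll c1, ll c2, ll r1, ll r2, ll R) {
    if (!t || c2 < lo || hi < c1) return 0;
    if (c1 <= lo && hi <= c2) return query(t->rows, 0, R - 1, r1, r2);
    ll mid = lo + (hi - lo) / 2;
    return std::gcd(query2d(t->l, lo, mid, c1, c2, r1, r2, R),
                    query2d(t->r, mid + 1, hi, c1, c2, r1, r2, R));
}
```

Each update touches O(log C) outer nodes and does O(log R) inner-tree work in each, which is exactly where the O(log R·log C) memory per non-zero element below comes from.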
Unfortunately, using segment trees also requires O(log R·log C) memory per non-zero element, which just exceeds the available memory. Using balanced binary trees instead should fit within memory, but is a lot more painful to implement. I think it might also be possible to make it work by sticking with segment trees, but compressing the representation by compacting chains of nodes with one child.