I made a program that lets me record what I'm doing while I solve problems. I'll probably refine what data I collect as I get a better idea of what's going on. I started with the 2009 Arab and North Africa regional, which turned out to be a really good problem set: interesting problems that were not horrible to code.
A-E were very straightforward. I should have had B way faster, but I couldn't get GCD coded and had my formula way wrong. F should have been easy, but I missed a trivial case that took a while to find. G was actually a fairly hard problem that I may have only solved due to weak data. H was an interesting, somewhat greedy problem: you can reduce it from each player having tons of moves to each having only 3 possible optimal moves, and then brute force the entire game. Problem I was just coding up matrix inversion and multiplication. It should have gone way faster, but I had some silly errors. J was a good max flow problem; I ended up having a bug in how I created the flow graph.
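Since GCD is what tripped me up on B, here's the routine I should have been able to write from memory (a generic sketch of Euclid's algorithm, not the actual contest code):

```python
def gcd(a, b):
    # Euclid's algorithm: gcd(a, b) == gcd(b, a % b),
    # and gcd(a, 0) == a, so loop until b hits zero.
    while b:
        a, b = b, a % b
    return a

# e.g. gcd(12, 18) -> 6, gcd(7, 3) -> 1
```

It's three lines, which is exactly why fumbling it under time pressure is so costly.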
I think a key metric for ACM competitions is the time from when you start coding to your first submission. Overall I was decent but not great: H, I, and J should all have been 5-15 minutes faster. What's bad is the extra debug time on F and J, where simple mistakes doubled the amount of time spent on each problem. On F I missed a case (where all of the points were in the second disk), and on J I screwed up where each artifact's critical points were.
These kinds of mistakes are hard to track down because I'm much more focused on the hard parts of the code than on the easy parts. My first pass of debugging should focus on the hard part, but after that I need to look at everything else.
This is still a small dataset so I need to do many more problems before I get a good idea of what I'm doing wrong.