I spend a lot of time debugging optimization code, and I find that it's a lot harder than debugging "normal" code. I think it's hard because optimality requires universal quantification: for the code to be correct, it needs to find the BEST instance out of ALL of them.
I can usually find bugs pretty easily by poking around in a debugger, but I don't find that very helpful for code that's supposed to be optimizing, because the local state doesn't tell me anything about all of the other candidate instances the code should be beating.
I find it kinda hard to write unit-style tests for optimization code, I think for the same reason. Other than manually solving the optimization problem myself (extremely tedious, brittle), there's no good way to quickly say "this is the best instance".
Recently, I've gotten a lot of mileage out of writing tests that assert that the value of the optimal instance is less than or equal to the value of a (seeded) random instance. It's a surprisingly good sanity test!
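Here's a minimal sketch of what I mean, using a toy problem I made up for illustration (cheapest right/down path through a cost grid, solved with a simple dynamic program); the function names and the test are mine, not from any particular codebase:

```python
import random

def min_path_cost(grid):
    """Optimizer under test: DP for the cheapest top-left to bottom-right path, moving right/down."""
    rows, cols = len(grid), len(grid[0])
    dp = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if r == 0 and c == 0:
                dp[r][c] = grid[r][c]
            elif r == 0:
                dp[r][c] = dp[r][c - 1] + grid[r][c]
            elif c == 0:
                dp[r][c] = dp[r - 1][c] + grid[r][c]
            else:
                dp[r][c] = min(dp[r - 1][c], dp[r][c - 1]) + grid[r][c]
    return dp[-1][-1]

def random_path_cost(grid, seed):
    """Cost of a valid but arbitrary right/down path, chosen with a seeded RNG."""
    rng = random.Random(seed)
    rows, cols = len(grid), len(grid[0])
    r = c = 0
    cost = grid[0][0]
    while (r, c) != (rows - 1, cols - 1):
        if r == rows - 1:
            c += 1
        elif c == cols - 1:
            r += 1
        elif rng.random() < 0.5:
            c += 1
        else:
            r += 1
        cost += grid[r][c]
    return cost

def test_optimum_never_beaten_by_random_path():
    rng = random.Random(0)
    grid = [[rng.randint(1, 9) for _ in range(6)] for _ in range(5)]
    best = min_path_cost(grid)
    # The optimum must be <= the cost of every valid path, including seeded random ones.
    for seed in range(200):
        assert best <= random_path_cost(grid, seed)
```

The seeds make the test deterministic, so when it fails you can replay the exact random instance that beat your "optimal" answer.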
This definitely works better in some domains than others. In domains where it's easy to create a valid instance (e.g., weights in deep learning, simple dynamic programs), it's great!
This technique kinda falls apart when generating valid instances is hard. The best I could do when writing a relational query optimizer was asserting that the best plan was better than a "trivial" plan I cooked up manually.
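Roughly, the fallback looks like this. Everything here (the toy join-order optimizer, the cost model) is made up for illustration, not a real optimizer API:

```python
import itertools

def plan_cost(order, sizes):
    """Made-up cost model: sum of intermediate-result sizes for a left-deep join order."""
    cost, rows = 0, sizes[order[0]]
    for table in order[1:]:
        rows = rows * sizes[table] // 10  # pretend each join keeps 10% of the cross product
        cost += rows
    return cost

def best_join_order(tables, sizes):
    """Optimizer under test: exhaustively pick the cheapest left-deep join order."""
    return min(itertools.permutations(tables), key=lambda order: plan_cost(order, sizes))

def test_best_plan_beats_trivial_plan():
    sizes = {"orders": 1_000_000, "customers": 10_000, "nations": 25}
    tables = list(sizes)
    trivial = tuple(tables)  # the hand-rolled "trivial" plan: join tables in the order given
    best = best_join_order(tables, sizes)
    assert plan_cost(best, sizes) <= plan_cost(trivial, sizes)
```

It's a much weaker guarantee than the random-instance version (one fixed baseline instead of many), but it still catches the embarrassing bugs where the "optimizer" makes things worse.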
