I just completed a migration from 1.1.10 to 1.2.10 and it was surprisingly painless.
The course of action that I took:
1) nodetool describecluster - make sure all nodes report the same schema version
2) shut off all maintenance tasks; i.e. make sure no scheduled repair is going to kick off in the middle of what you're doing
3) snapshot - maybe not strictly necessary, but it's so quick it makes no sense to skip this step
4) drain the nodes - I shut down the entire cluster rather than risk any incompatible-gossip issues that might come from a rolling upgrade. I have the luxury of controlling both the producers and consumers of our data, so this wasn't too disruptive for us.
5) Upgrade the nodes, turn them on one-by-one, monitor the logs for funny business.
6) nodetool upgradesstables
7) Turn various maintenance tasks back on, etc.
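Roughly, the sequence above boils down to a handful of commands per node. This is just a sketch: the `<node>` placeholders, the snapshot tag, and the `service cassandra` start/stop commands are assumptions that will vary by environment and packaging.

```shell
# 1) Check that every node reports the same schema version
nodetool -h <node> describecluster

# 3) Snapshot each node (cheap - snapshots are hard links, not copies)
nodetool -h <node> snapshot -t pre-1.2-upgrade

# 4) Flush memtables and stop accepting writes, then stop the service
nodetool -h <node> drain
sudo service cassandra stop        # repeat on every node for a full-cluster stop

# 5) Upgrade the binaries, merge cassandra.yaml changes, then start
sudo service cassandra start       # one node at a time; watch system.log

# 6) Rewrite SSTables into the new on-disk format
nodetool -h <node> upgradesstables
```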
The worst part was managing the yaml/config changes between versions. It wasn't horrible, but the diff was "noisier" than it typically is for a more incremental upgrade. A few things I recall being special:
1) Since you have an existing cluster, you'll probably need to set the partitioner back to RandomPartitioner in cassandra.yaml, since 1.2 defaults to Murmur3Partitioner. I believe that is outlined in NEWS.txt.
2) I set the initial tokens to be the same as what the nodes held previously.
3) The single RPC timeout is now divided into more granular per-operation settings, and you get to decide how (or whether) to adjust each one from its default.
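For reference, the relevant cassandra.yaml settings looked roughly like this. The token and timeout values below are placeholders, not recommendations - use the token your node actually held (from `nodetool ring`) and check the defaults shipped with your version:

```yaml
# 1.2 defaults to Murmur3Partitioner; an existing 1.1 cluster must stay on
# RandomPartitioner or its data won't be readable.
partitioner: org.apache.cassandra.dht.RandomPartitioner

# Same token this node held under 1.1 (illustrative value)
initial_token: 28356863910078205288614550619314017621

# The old single rpc_timeout is now split per operation type; values here
# are placeholders - tune (or leave) each as appropriate.
read_request_timeout_in_ms: 10000
range_request_timeout_in_ms: 10000
write_request_timeout_in_ms: 10000
truncate_request_timeout_in_ms: 60000
request_timeout_in_ms: 10000
```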
tl;dr: I did a standard upgrade and paid careful attention to the upgrade notices in NEWS.txt. I did a full cluster restart, NOT a rolling upgrade. It went off without a hitch.