AI agents have demonstrated the ability to outperform humans in single-player and multi-player adversarial games such as Atari, chess, and Go. More recently, attention has turned to cooperative games, where a computer agent must coordinate with other players to achieve a common goal, particularly in the ad-hoc setting where the agent has little to no prior knowledge of its teammates. In this paper, we analyze existing work on ad-hoc coordination in the card game Hanabi and present four open problems concerning the performance limitations of state-of-the-art agents, human perceptions of human-AI coordination, and technical obstacles to effective research on Hanabi. We show that there is still significant room to improve Hanabi agents on these fronts and suggest potential directions toward solving these problems. To facilitate our analysis and future research, we propose a hierarchy of cross-play scenarios that precisely categorizes current and future experiments, discuss existing evaluation metrics for cross-play, and recommend experimental conventions to encourage consistency among researchers.