We present the novel task of understanding multi-sentence entity-seeking questions (MSEQs), that is, questions that may span multiple sentences and that expect one or more entities as an answer. We formulate the problem of understanding MSEQs as a semantic labeling task over an open representation that makes minimal assumptions about schema- or ontology-specific semantic vocabulary. At the core of our model is a bidirectional LSTM (BiLSTM) conditional random field (CRF); to overcome the challenge of limited training data, we supplement it with BERT embeddings, hand-designed features, and hard and soft constraints spanning multiple sentences. We find that this yields a 12–15 point gain over a vanilla BiLSTM-CRF. We demonstrate the strengths of our work on the novel task of answering real-world entity-seeking questions from the tourism domain. Using our semantic labels, we answer 36% more questions with 35% higher (relative) accuracy than baseline systems. We also demonstrate how our framework enables rapid parsing of MSEQs in an entirely new domain with a small amount of training data and little change to the semantic representation.
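To make the core labeling architecture concrete, the sketch below shows a minimal BiLSTM-CRF sequence labeler of the kind referred to above, operating over pre-computed contextual token embeddings (e.g., BERT vectors). This is an illustrative sketch, not the authors' implementation: the class name, dimensions, and tag count are assumptions, the multi-sentence constraints and hand-designed features are omitted, and batches are assumed to be unpadded for brevity.

```python
# Minimal BiLSTM-CRF sketch (illustrative; not the paper's code).
import torch
import torch.nn as nn

class BiLSTMCRF(nn.Module):
    def __init__(self, emb_dim: int, hidden_dim: int, num_tags: int):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden_dim // 2, batch_first=True,
                            bidirectional=True)
        self.emit = nn.Linear(hidden_dim, num_tags)
        # transitions[i, j] = score of moving from tag j to tag i
        self.transitions = nn.Parameter(torch.randn(num_tags, num_tags) * 0.01)

    def _emissions(self, embeddings):
        h, _ = self.lstm(embeddings)          # (batch, seq, hidden_dim)
        return self.emit(h)                   # (batch, seq, num_tags)

    def _score(self, emissions, tags):
        # Score of a given tag sequence: emission + transition terms.
        batch, seq_len, _ = emissions.shape
        score = emissions[torch.arange(batch), 0, tags[:, 0]]
        for t in range(1, seq_len):
            score = score + emissions[torch.arange(batch), t, tags[:, t]]
            score = score + self.transitions[tags[:, t], tags[:, t - 1]]
        return score

    def _partition(self, emissions):
        # Forward algorithm: log-sum of scores over all tag sequences.
        batch, seq_len, _ = emissions.shape
        alpha = emissions[:, 0]               # (batch, num_tags)
        for t in range(1, seq_len):
            scores = (alpha.unsqueeze(1) + self.transitions.unsqueeze(0)
                      + emissions[:, t].unsqueeze(2))
            alpha = torch.logsumexp(scores, dim=2)
        return torch.logsumexp(alpha, dim=1)

    def neg_log_likelihood(self, embeddings, tags):
        emissions = self._emissions(embeddings)
        return (self._partition(emissions) - self._score(emissions, tags)).mean()

    def decode(self, embeddings):
        # Viterbi decoding of the most likely tag sequence per example.
        emissions = self._emissions(embeddings)
        batch, seq_len, _ = emissions.shape
        score, backpointers = emissions[:, 0], []
        for t in range(1, seq_len):
            total = (score.unsqueeze(1) + self.transitions.unsqueeze(0)
                     + emissions[:, t].unsqueeze(2))
            score, best_prev = total.max(dim=2)
            backpointers.append(best_prev)
        best_last = score.argmax(dim=1)
        path = [best_last]
        for bp in reversed(backpointers):
            best_last = bp[torch.arange(batch), best_last]
            path.append(best_last)
        return torch.stack(list(reversed(path)), dim=1)  # (batch, seq)

# Toy usage: a batch of 2 questions, 10 tokens each, 768-dim embeddings,
# and 9 hypothetical semantic labels.
model = BiLSTMCRF(emb_dim=768, hidden_dim=256, num_tags=9)
emb = torch.randn(2, 10, 768)
gold = torch.randint(0, 9, (2, 10))
loss = model.neg_log_likelihood(emb, gold)
pred = model.decode(emb)                      # (2, 10) predicted tag indices
```

In this setup, the BERT embeddings would replace the random `emb` tensor, while the hand-designed features and cross-sentence hard/soft constraints described in the paper would be added on top of the emission and transition scores.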