Cone-beam computed tomography (CBCT) is increasingly used in radiotherapy for patient alignment and adaptive therapy, where organ segmentation and target delineation are often required. However, due to poor image quality, low soft-tissue contrast, and the difficulty of acquiring segmentation labels on CBCT images, developing effective CBCT segmentation methods remains a challenge. In this paper, we propose a deep model for segmenting organs in CBCT images without requiring labelled CBCT training images. By taking advantage of readily available segmented computed tomography (CT) images, our adversarial-learning domain adaptation method synthesizes CBCT images from CT images, so that the CT segmentation labels can be used to train a deep segmentation network for CBCT with both labelled CTs and unlabelled CBCTs. The adversarial domain adaptation is integrated with the CBCT segmentation network training through the designed loss functions, so that the CBCT images synthesized by pixel-level domain adaptation capture the image features critical for accurate CBCT segmentation. Experiments on bladder images from Radiation Oncology clinics show that our CBCT segmentation with adversarial-learning domain adaptation significantly improves segmentation accuracy over existing methods that do not perform domain adaptation from CT to CBCT.
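To make the training objective concrete, the following is a minimal sketch of the combined loss structure described above: a generator synthesizes CBCT-like images from CT, an adversarial loss pushes the synthetic images toward the CBCT domain, and the CT labels supervise a segmentation loss on the synthetic images. All networks here are hypothetical one-layer linear stand-ins on toy data, and the trade-off weight `lam` is an assumed illustrative value, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (hypothetical): flattened 8x8 "images" and one-layer linear nets.
ct = rng.random((4, 64))                                # labelled CT batch
cbct = rng.random((4, 64))                              # unlabelled CBCT batch
ct_labels = (rng.random((4, 64)) > 0.5).astype(float)   # binary organ masks for CT

W_g = rng.normal(0, 0.1, (64, 64))   # "generator": CT -> synthetic CBCT
w_d = rng.normal(0, 0.1, 64)         # "discriminator": real CBCT vs synthetic
W_s = rng.normal(0, 0.1, (64, 64))   # "segmenter": image -> organ mask

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Pixel-level domain adaptation: map CT images into the CBCT domain.
synth_cbct = ct @ W_g

# Adversarial losses: the generator tries to make the discriminator
# score synthetic CBCTs as real; the discriminator learns to separate them.
d_real = sigmoid(cbct @ w_d)
d_fake = sigmoid(synth_cbct @ w_d)
adv_loss = -np.mean(np.log(d_fake + 1e-8))
disc_loss = -np.mean(np.log(d_real + 1e-8) + np.log(1.0 - d_fake + 1e-8))

# Segmentation loss: the CT labels supervise segmentation of the
# synthetic CBCT images (binary cross-entropy per pixel).
pred = sigmoid(synth_cbct @ W_s)
seg_loss = -np.mean(ct_labels * np.log(pred + 1e-8)
                    + (1.0 - ct_labels) * np.log(1.0 - pred + 1e-8))

lam = 0.1  # hypothetical trade-off weight between the two objectives
total_loss = seg_loss + lam * adv_loss
```

In an actual training loop, `total_loss` would update the generator and segmenter while `disc_loss` updates the discriminator, alternating the two steps as is standard in adversarial training; the linear maps above would be replaced by convolutional networks.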