Ultrasound is widely used for diagnosing cardiovascular diseases. However, estimates such as left ventricle volume currently require manual segmentation, which can be time consuming. In addition, cardiac ultrasound is often complicated by imaging artifacts, such as shadowing and mirror images, that make segmentation difficult for simple intensity-based automated methods. In this work, we use convolutional neural networks (CNNs) to segment ultrasound images of rat hearts embedded in agar phantoms into four classes: background, myocardium, left ventricular cavity, and right ventricular cavity. We also explore how the inclusion of a single diseased heart changes the results in a small dataset. We found an average overall segmentation accuracy of 70.0% ± 7.3% when combining the healthy and diseased data, compared to 72.4% ± 6.6% for the healthy hearts alone. This work suggests that including diseased hearts alongside healthy hearts in the training data could improve segmentation results, while testing a diseased heart with a model trained only on healthy hearts can produce accurate segmentations for some classes but not others. More data are needed to improve the accuracy of the CNN-based segmentation.
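The overall accuracies reported above are averages over test images. As a minimal sketch, a per-pixel accuracy metric for this four-class problem could be computed as follows (the exact metric definition and the integer class labeling are assumptions for illustration, not taken from the paper):

```python
import numpy as np

# Hypothetical label convention (an assumption, not from the paper):
# 0 = background, 1 = myocardium,
# 2 = left ventricular cavity, 3 = right ventricular cavity
N_CLASSES = 4

def overall_accuracy(pred, truth):
    """Fraction of pixels whose predicted class matches the manual segmentation."""
    pred = np.asarray(pred)
    truth = np.asarray(truth)
    return float(np.mean(pred == truth))

def per_class_accuracy(pred, truth, n_classes=N_CLASSES):
    """Per-class recall: correctly labeled pixels / pixels of that class in the truth.

    Returns NaN for a class absent from the ground-truth image.
    """
    pred = np.asarray(pred).ravel()
    truth = np.asarray(truth).ravel()
    accs = []
    for c in range(n_classes):
        mask = truth == c
        accs.append(float(np.mean(pred[mask] == c)) if mask.any() else float("nan"))
    return accs
```

Reporting both overall and per-class accuracy matters here: because the background typically dominates the pixel count, a model can score well overall while still failing on a small structure such as the right ventricular cavity.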