Magnetoencephalography (MEG) is a functional neuroimaging tool that records the magnetic fields induced by electrical neuronal activity; however, signals from non-neuronal sources can corrupt the data. Eye blinks (EB) and cardiac activity (CA) are two of the most common types of non-neuronal artifacts. They can be measured by affixing electrodes near the eyes, as in electrooculography (EOG), and on the chest, as in electrocardiography (EKG); however, this complicates the imaging setup, decreases patient comfort, and often introduces further artifacts from facial twitching and postural muscle movement. We propose an EOG- and EKG-free approach to identifying eye-blink, cardiac, and neuronal signals for automated artifact suppression. Our contributions are two-fold. First, we combine a data-driven, multivariate decomposition approach based on Independent Component Analysis (ICA) with a highly accurate classifier constructed as a deep 1-D Convolutional Neural Network. Second, we visualize the learned features to reveal what the model relies on and to bolster user confidence in the model's training and potential for generalization. We train and test three variants of our method on resting-state MEG data from 49 subjects. Our cardiac model achieves 96% sensitivity and 99% specificity on the set-aside test set. Our eye-blink model achieves a sensitivity of 85% and a specificity of 97%. This work facilitates automated MEG processing for both clinical and research use, and can obviate the need for EOG or EKG electrodes.
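The ICA decomposition step described above can be sketched as follows. This is a minimal illustration on synthetic signals, not the paper's implementation: the three source waveforms, the mixing matrix, and all dimensions are invented stand-ins, and the 1-D CNN classifier that would label each recovered component is omitted.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Synthetic stand-ins for latent sources (NOT real MEG data):
rng = np.random.RandomState(0)
t = np.linspace(0, 8, 2000)
s1 = np.sin(2 * t)                # stand-in for a neuronal rhythm
s2 = np.sign(np.sin(3 * t))       # stand-in for eye-blink-like bursts
s3 = 0.5 * ((t * 10) % 1.0)      # stand-in for cardiac-like periodic spikes
S = np.c_[s1, s2, s3] + 0.05 * rng.standard_normal((2000, 3))

# Unknown mixing: what the "sensors" would observe.
A = rng.standard_normal((3, 3))
X = S @ A.T

# Blind source separation via ICA; each column of `components`
# is an estimated independent component time course.
ica = FastICA(n_components=3, random_state=0)
components = ica.fit_transform(X)

# In the approach described above, each component's time course would
# next be fed to the 1-D CNN and classified as neuronal, EB, or CA;
# artifact components would then be removed before reconstruction.
print(components.shape)  # (2000, 3)
```

In practice the number of components and the ICA variant would be chosen to match the MEG sensor array, and classification operates on the component time series rather than on raw sensor channels.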