A reliable, flexible, and simple source of information would benefit robust prediction of locomotion modes for assistive device control (e.g., prostheses). To date, however, the sources of mechanical signals have been largely limited to information acquired through sensors embedded in the device. It remains unclear whether biomechanical signals from unaffected or less affected locations (e.g., the contralateral side or upper body) would be reliable sources of information. Furthermore, the possible influence of the anticipatory state of the task on recognition accuracy emphasizes the need to identify reliable data sources for both anticipated and unanticipated tasks. Here, accelerometer and gyroscope signals from the leading leg, trailing leg, trunk-pelvis, and their fusion were compared with respect to their ability to predict changes of direction (cuts), cut-to-stair transitions, and level-ground walking performed under varied task anticipation. We hypothesized that fusion of lower- and upper-body signals would provide better accuracy than unilateral information (i.e., from the trailing or leading leg) and that recognition accuracy would diminish when tasks were unanticipated. Surprisingly, signal fusion offered no advantage over unilateral signals. Leading- and trailing-leg data demonstrated statistically indistinguishable performance, and trunk-pelvis signals performed significantly (α = 0.05) worse than unilateral data. While anticipated tasks were accurately predicted (≥90%) even as early as 500 ms before entering each locomotor transition, in unanticipated tasks similar accuracy was achieved only after the mid-swing of the transitioning leg. These findings could inform flexible yet dependable sensor sets for intent recognition frameworks across varying user cognitive states.