Whether exploring a defunct nuclear reactor, defusing a bomb, delivering medicine to quarantined patients, repairing the International Space Station from the outside, or providing dexterous manipulation for those with motor impairments, robots can go where humans cannot, augment human capabilities, and improve quality of life and work. Because even the most advanced robots have difficulty completing tasks that require grasping and manipulation, human teleoperation is often a practical alternative for such tasks. By importing the dexterity, expertise, and wealth of background knowledge of a human operator, robots can leverage the skills of their human teammates without requiring those humans to be physically present. However, existing robot teleoperation interfaces often rely on 2D methods to view and interact with the 3D world, which is cumbersome for human operators. Virtual reality interfaces may resolve key shortcomings of traditional teleoperation interfaces (e.g., perspective adjustment, action specification). A goal of this work is to examine the efficacy of virtual and augmented reality interfaces for remotely controlling robots to complete a variety of tasks.