In this paper, we present a method that uses panoramic images to perform long-range navigation as a succession of short-range homing steps along a route specified by the appearance of the robot's environment along that route. Our method differs from others in that it extracts no features from the images and performs only simple image processing operations. It makes only weak assumptions about the robot's surroundings, and these assumptions are discussed. Furthermore, the method uses a technique borrowed from computer graphics to simulate, in the images, the effect of short translations of the robot, and thereby computes local motion parameters. Finally, the proposed method shows that navigation is possible without explicitly knowing either where the destination is or where the robot currently is. Results obtained in our lab demonstrate the performance of the proposed system.
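The abstract does not give code, but the idea of simulating short translations in the image to recover local motion parameters can be illustrated with a minimal sketch. The sketch below works on a 1-D panoramic intensity scan and makes several simplifying assumptions that are ours, not the authors': all landmarks lie at the same distance, the robot's heading does not change between views, the bearing shift of a landmark is approximated to first order in the translation distance, and the motion direction is found by brute-force grid search. All function and parameter names (`warp`, `home_direction`, `alpha`, `rho`) are hypothetical.

```python
import numpy as np

def warp(snapshot, alpha, rho):
    """Predict how a 1-D panoramic view changes after a short translation.

    snapshot : intensities sampled at n equally spaced azimuths (radians).
    alpha    : direction of the hypothesized translation, in the robot frame.
    rho      : translation distance relative to the assumed landmark distance.

    Assumes all landmarks are at the same range (equal-distance assumption)
    and uses a first-order approximation valid for small rho: a landmark seen
    at azimuth phi shifts by roughly rho * sin(phi - alpha).
    """
    n = len(snapshot)
    phis = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    # Inverse mapping: each output azimuth samples the snapshot at the
    # azimuth the landmark occupied before the translation.
    src = phis - rho * np.sin(phis - alpha)
    return np.interp(src, phis, snapshot, period=2.0 * np.pi)

def home_direction(snapshot, current, n_candidates=36, rho=0.05):
    """Return the translation direction alpha whose simulated view best
    matches the current view (sum of squared pixel differences).
    Moving opposite this direction drives the robot back toward the
    location where the snapshot was taken."""
    best_alpha, best_err = 0.0, np.inf
    for alpha in np.linspace(0.0, 2.0 * np.pi, n_candidates, endpoint=False):
        err = np.sum((warp(snapshot, alpha, rho) - current) ** 2)
        if err < best_err:
            best_alpha, best_err = alpha, err
    return best_alpha
```

Note that this operates directly on pixel intensities, with no feature extraction: the only operations are resampling and a sum of squared differences, in keeping with the "simple image processing" claim of the abstract.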