In this paper, we apply two families of vision-based go-to-goal control approaches to indoor nonholonomic mobile robot systems. In the proposed methods, image data from an eye-out-device configured camera (overhead camera) are used as the input for determining the robot wheel speeds. The main purpose of this system is to reduce the complexity of conventional robot control kinematics and to provide an efficient control approach for managing the wheel speeds and the heading angle of the mobile robot. Beyond reducing kinematic complexity, the system is also intended to reduce systematic and nonsystematic errors. The proposed method consists of three stages. In the first stage, the overhead camera is calibrated and the robot motion environment is configured; the labels placed on the robot and at the target position are identified, and the position information of the robot is obtained. In the second stage, control inputs such as position and orientation are obtained from robot motion tracking and visual feature information. In the third stage, the graph-based and angle-based control approaches, each with decision-tree and Gaussian variants, are executed. These four control approaches are denoted as follows: Graph-based Decision Tree Control (GDTC), Graph-based Gaussian Control (GGC), Angle-based Decision Tree Control (ADTC), and Angle-based Gaussian Control (AGC). Using these control approaches, numerous real-time experiments with the eye-out-device camera configuration were performed, and the experimental results demonstrate the efficacy and usability of the methods.
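The abstract does not give the controllers' equations, but the overall pipeline (overhead-camera pose estimate in, left/right wheel speeds out) can be sketched as follows. This is a minimal, hypothetical go-to-goal step for a differential-drive robot, assuming a Gaussian weighting of forward speed by heading error in the spirit of the Gaussian control variants; all function names, gains, and tolerances here are illustrative assumptions, not the paper's actual method.

```python
import math

def go_to_goal_wheel_speeds(robot_xy, robot_theta, goal_xy,
                            v_max=0.3, k_turn=1.5, sigma=0.5,
                            wheel_base=0.1, goal_tol=0.02):
    """Hypothetical go-to-goal step: map an overhead-camera pose estimate
    (position in meters, heading in radians) to left/right wheel speeds
    for a differential-drive robot."""
    dx = goal_xy[0] - robot_xy[0]
    dy = goal_xy[1] - robot_xy[1]
    distance = math.hypot(dx, dy)
    if distance < goal_tol:            # goal reached within tolerance
        return 0.0, 0.0
    # Heading error to the goal, wrapped to (-pi, pi]
    err = math.atan2(dy, dx) - robot_theta
    err = math.atan2(math.sin(err), math.cos(err))
    # Gaussian weighting: drive fast only when well aligned with the goal
    v = v_max * math.exp(-(err ** 2) / (2 * sigma ** 2))
    omega = k_turn * err               # turn proportionally to the error
    # Differential-drive mapping from (v, omega) to wheel speeds
    left = v - omega * wheel_base / 2.0
    right = v + omega * wheel_base / 2.0
    return left, right
```

When the robot already faces the goal, the heading error is zero, so both wheels receive the full forward speed; as the error grows, the Gaussian term slows the robot while the proportional term rotates it toward the goal.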