Localization is a crucial module in any robotic application. In this work, we address the problem of estimating the full pose of a robot given a prior map. The method is built on a Bayesian framework in which robot motion data generates a prior distribution over pose hypotheses, and sensor data is then used to update that distribution. Many current state-of-the-art methods rely on grid-based or 2.5D map representations, because computation becomes prohibitive when raw point clouds are used directly. Other methods rely purely on LIDAR intensity data, which limits their applicability to flat-world localization and/or environments with consistent intensity characteristics (e.g., offices, traffic roads). With careful optimization, we achieve real-time, low-computation localization that runs on a single modern compact PC and handles non-flat world representations using raw or voxelized point clouds, which we refer to as general maps. The result is a robust method that outputs a continuous robot pose across the variety of environments in which the robot operates. Owing to the general representation of the map, one can extract various meaningful features to serve as the basis of different general maps. The output can be used directly for robot navigation. In addition, it is useful for aiding object detection and scene understanding toward smarter robots in the fields of service robotics, surveillance, and manufacturing.
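The Bayesian cycle described above (motion data forming a prior over pose hypotheses, sensor data updating it) is commonly realized as a particle filter. The following minimal sketch is not the authors' implementation; the function names, noise parameters, and the toy single-landmark range sensor are all illustrative assumptions. It only shows the generic predict-update-resample loop on a 2D pose (x, y, theta):

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(particles, odom, noise=(0.05, 0.05, 0.02)):
    """Motion model: propagate each pose hypothesis (x, y, theta) by the
    odometry increment expressed in the particle's own frame, plus Gaussian
    noise. This realizes the prior distribution over poses."""
    dx, dy, dth = odom
    out = particles.copy()
    c, s = np.cos(out[:, 2]), np.sin(out[:, 2])
    n = len(out)
    out[:, 0] += c * dx - s * dy + rng.normal(0.0, noise[0], n)
    out[:, 1] += s * dx + c * dy + rng.normal(0.0, noise[1], n)
    out[:, 2] += dth + rng.normal(0.0, noise[2], n)
    return out

def update(particles, weights, z, landmark, sigma=0.2):
    """Measurement model (illustrative): reweight hypotheses by the Gaussian
    likelihood of a range reading z to a known landmark on the prior map."""
    expected = np.linalg.norm(particles[:, :2] - landmark, axis=1)
    w = weights * np.exp(-0.5 * ((expected - z) / sigma) ** 2) + 1e-300
    return w / w.sum()

def resample(particles, weights):
    """Systematic (low-variance) resampling of the posterior."""
    n = len(particles)
    positions = (np.arange(n) + rng.random()) / n
    idx = np.minimum(np.searchsorted(np.cumsum(weights), positions), n - 1)
    return particles[idx], np.full(n, 1.0 / n)

# Toy run: the true robot starts at the origin facing +x and moves 0.1 m
# per step; a single landmark sits at (3, 0) on the prior map.
landmark = np.array([3.0, 0.0])
n_particles = 2000
particles = rng.normal([0.0, 0.0, 0.0], [0.5, 0.5, 0.1], size=(n_particles, 3))
weights = np.full(n_particles, 1.0 / n_particles)

for step in range(1, 11):
    particles = predict(particles, (0.1, 0.0, 0.0))
    true_range = 3.0 - 0.1 * step          # idealized sensor reading
    weights = update(particles, weights, true_range, landmark)
    particles, weights = resample(particles, weights)

est = particles.mean(axis=0)               # posterior mean pose estimate
```

A full-pose LIDAR localizer against a general (raw or voxelized point cloud) map would replace the toy range likelihood with a scan-to-map matching score, but the prediction/update structure is the same.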