Spatial data plays a growing role in technologies such as web and mobile mapping and Geographic Information Systems (GIS). Important political, social, and other decisions in modern life are made using location data, and decision makers in many countries exploit spatial databases to collect information, analyze it, and plan for the future. However, not every spatial database is suitable for this type of application. Inaccuracy, imprecision, and other deficiencies affect location data just as they do any other type of data, and they can undermine the credibility of any action based on unrefined information. We therefore need a method for evaluating the quality of spatial data and for separating usable data from misleading data that leads to poor decisions. At the same time, spatial databases are usually very large, so working with this type of data is inefficient. To improve efficiency on spatial big data, we need a method for shrinking the volume of data. Sampling is one such method, but its negative effect on data quality is inevitable. In this paper we demonstrate and assess this change in the quality of spatial data caused by sampling. We apply this approach to evaluate the quality of sampled spatial data on mobile user trajectories in China, which are available in a well-known spatial database. The results show that sample-based control of data quality increases query performance significantly without losing much accuracy. Based on these results, we point out future improvements that will help process location-based queries faster than before and support more accurate location-based decisions within limited time.
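The trade-off described above can be illustrated with a minimal sketch: run a spatial query on a full point set and on a small random sample, then measure the accuracy lost to sampling. Everything here is an assumption for illustration only; the synthetic points, the bounding-box query, and the 5% sampling rate are hypothetical and do not reflect the paper's actual dataset or evaluation method.

```python
import random

random.seed(42)

# Hypothetical stand-in for a large spatial dataset: (latitude, longitude)
# points drawn uniformly over a rough bounding box of China. The paper's real
# data are mobile user trajectories; this is only a toy substitute.
points = [(random.uniform(18.0, 53.0), random.uniform(73.0, 135.0))
          for _ in range(200_000)]

def count_in_region(data, lat_range, lon_range):
    """A simple spatial range query: count points inside a bounding box."""
    (lat_lo, lat_hi), (lon_lo, lon_hi) = lat_range, lon_range
    return sum(1 for lat, lon in data
               if lat_lo <= lat <= lat_hi and lon_lo <= lon <= lon_hi)

region = ((30.0, 35.0), (110.0, 120.0))  # an arbitrary query box

# Answer from the full dataset (slow on big data, but exact).
exact = count_in_region(points, *region)

# Answer from a 5% simple random sample, scaled back up by the sampling
# rate. The query touches 20x fewer points, so it runs much faster.
rate = 0.05
sample = random.sample(points, int(len(points) * rate))
estimate = count_in_region(sample, *region) / rate

# The relative error quantifies the quality lost to sampling.
rel_error = abs(estimate - exact) / exact
print(f"exact={exact}, estimate={estimate:.0f}, relative error={rel_error:.2%}")
```

With a small sampling rate the query scans a fraction of the data, while the relative error of the estimate typically stays within a few percent, which is the kind of performance-versus-accuracy balance the abstract reports.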