3D Point Cloud Data (PCD) is an efficient machine representation of surrounding environments and has been used in many applications. However, measured PCD is often incomplete and sparse due to sensor occlusion and poor lighting conditions. To automatically reconstruct complete PCD from incomplete measurements, we propose DeepPCD, a deep-learning-based system that reconstructs both geometric and color information for large indoor environments. For geometric reconstruction, DeepPCD uses a novel patch-based technique that splits the PCD into multiple parts, approximates each part with 3D planes, extends and reconstructs the parts independently, and then merges and refines them. For color reconstruction, DeepPCD uses a conditional Generative Adversarial Network (cGAN) to infer the missing color of the geometrically reconstructed PCD, conditioned on color features extracted from the incomplete color PCD. We experimentally evaluate DeepPCD on real PCD collected from large, diverse indoor environments and explore the feasibility of PCD autocompletion in various ubiquitous sensing applications.
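To make the plane-based geometric step concrete, the following is a minimal sketch (not the paper's implementation) of completing one planar patch: fit a plane to the measured points via least squares, then resample the patch on a regular grid in the plane. All function names, parameters, and the synthetic test data are our own illustrative assumptions.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit via SVD: returns (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]  # direction of least variance = plane normal

def complete_patch(points, grid_res=0.1, extent=1.0):
    """Approximate an incomplete patch by its fitted plane and
    resample it densely on a grid spanning [-extent, extent]^2."""
    c, n = fit_plane(points)
    # Build an in-plane orthonormal basis (u, v) perpendicular to n.
    u = np.cross(n, [0.0, 0.0, 1.0])
    if np.linalg.norm(u) < 1e-8:       # n is parallel to the z-axis
        u = np.array([1.0, 0.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(n, u)
    ticks = np.linspace(-extent, extent, int(2 * extent / grid_res) + 1)
    uu, vv = np.meshgrid(ticks, ticks)
    return c + uu.reshape(-1, 1) * u + vv.reshape(-1, 1) * v

# Toy example: a noisy z ~ 0 plane with a circular hole cut out.
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(500, 3))
pts[:, 2] = 0.01 * rng.standard_normal(500)          # near-planar
pts = pts[np.linalg.norm(pts[:, :2], axis=1) > 0.4]  # simulate occlusion
filled = complete_patch(pts)  # dense points covering the hole as well
```

In the full pipeline the reconstructed patches would then be merged and refined across plane boundaries; this sketch covers only the single-patch approximation step.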