Deep neural network (DNN)-based object detection has been investigated and applied to various real-time applications. However, DNNs are hard to deploy in embedded systems because of their high computational complexity and deep-layered structure. Although several field-programmable gate array (FPGA) implementations for real-time object detection have been presented recently, they suffer from either low throughput or low detection accuracy. In this article, we propose an efficient computing system for real-time SSDLite object detection on FPGA devices, comprising a novel hardware architecture and system optimization techniques. In the proposed architecture, a neural processing unit (NPU) composed of heterogeneous units, namely band processing, scaling and accumulating, and data fetching and formatting units, is designed to accelerate DNNs efficiently. In addition, system optimization techniques are presented to improve the throughput further: a task control unit balances the workload and increases the utilization of the heterogeneous units in the NPU, and the object detection algorithm is refined accordingly. The proposed architecture is realized on an Intel Arria 10 FPGA and improves throughput by up to 13.6× compared to the state-of-the-art FPGA implementation.