We develop a task allocation method for persistent UAV security presence (PUSP), in which UAVs accompany customers and thereby provide them with security services. Key features incorporated are randomness in customer arrivals and travel durations. We formalize the system as a general network consisting of nodes, arcs, UAVs, and routes, from which we automatically generate a Markov decision process (MDP) model and a simulator. The MDP formulation can be solved exactly only for small problems; in such cases, we employ classic value iteration to obtain optimal policies. To address larger systems with more resources, we develop a greedy task assignment heuristic (GTAH) and a simplified MDP heuristic (SMH). Numerical studies demonstrate that the GTAH is approximately 10% suboptimal and the SMH about 4% suboptimal on small-scale problems. For larger problems (~10^90 states), the SMH performs approximately 3% better than the GTAH.
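The classic value iteration mentioned above can be illustrated with a minimal sketch. The transition tensor, rewards, and discount factor below are illustrative placeholders, not the paper's PUSP model.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8, max_iter=10_000):
    """Solve a finite MDP by classic value iteration.

    P[a, s, t] = probability of moving from state s to state t under action a.
    R[s, a]    = expected immediate reward for taking action a in state s.
    Returns the optimal value function and a greedy (optimal) policy.
    """
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    for _ in range(max_iter):
        # Q[s, a] = R[s, a] + gamma * sum_t P[a, s, t] * V[t]
        Q = R + gamma * np.einsum('ast,t->sa', P, V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new
    policy = Q.argmax(axis=1)
    return V, policy

# Toy 2-state, 2-action example (hypothetical numbers for illustration)
P = np.array([[[0.9, 0.1], [0.2, 0.8]],   # action 0
              [[0.5, 0.5], [0.1, 0.9]]])  # action 1
R = np.array([[1.0, 0.0],   # state 0: rewards for actions 0 and 1
              [0.0, 2.0]])  # state 1: rewards for actions 0 and 1
V, policy = value_iteration(P, R)
```

Because the state space grows combinatorially with the number of UAVs and customers, the tabular sweep above is exactly why heuristics such as the GTAH and SMH are needed for larger instances.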