Online communities provide a forum for rich social interaction and identity development for billions of Internet users worldwide. To manage these communities, platform owners have increasingly turned to commercial content moderation, which relies on moderation algorithms and the employment of professional moderators rather than user-driven moderation, to detect and respond to anti-normative behaviors such as harassment and the spread of offensive content. We present findings from semi-structured interviews with 56 volunteer moderators of online communities across three platforms (Twitch, Reddit, and Facebook), from which we derive a generalized model categorizing the ways moderators engage with their communities and explaining how these communities develop as a result. This model comprises three processes: being and becoming a moderator; moderation tasks, actions, and responses; and rules and community development. In this work, we describe how moderators contribute to the development of meaningful communities, both with and without algorithmic support.