We consider the problem of simultaneously learning multiple sparse representations in the high-dimensional setting, exemplified by Multitask Learning (MTL). We propose a new estimator, which we call Dirty Fusion (DF). DF bridges the gap between dirty models, which decompose parameters into shared and model-specific components, and grouping approaches, which assume that models belonging to the same group share the same sparsity pattern or have similar parameter values. DF jointly estimates the model parameters together with their potentially “unclean” group structures, and allows for partial support overlap within each group. We formulate the DF estimator as an optimization problem, and incorporate automatic debiasing variables into the learning formulation. We demonstrate the effectiveness of the approach on synthetic and real data.
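To make the dirty-model decomposition referenced above concrete, the following is a minimal illustrative sketch (not the paper's DF estimator): each task's coefficient vector is written as the sum of a shared component C and a task-specific sparse component S, and the objective combines a squared loss with a block penalty on C and an elementwise l1 penalty on S. All names (X, Y, lam_c, lam_s) and the specific penalty choices are assumptions made for illustration.

```python
import numpy as np

def dirty_model_objective(X, Y, C, S, lam_c=0.1, lam_s=0.1):
    """Evaluate a dirty-model objective: squared loss over all tasks,
    an l1/linf penalty on the shared part C (encouraging rows of C to be
    active jointly across tasks), and an elementwise l1 penalty on the
    task-specific part S. This is an illustrative sketch, not DF itself."""
    B = C + S                       # combined coefficients, one column per task
    residual = Y - X @ B            # stacked residuals for all tasks
    loss = 0.5 * np.sum(residual ** 2)
    block_pen = lam_c * np.sum(np.max(np.abs(C), axis=1))  # l1/linf over rows
    sparse_pen = lam_s * np.sum(np.abs(S))                 # elementwise l1
    return loss + block_pen + sparse_pen

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 5))    # shared design: 20 samples, 5 features
B_true = np.zeros((5, 3))           # 3 tasks
B_true[:2, :] = 1.0                 # features 0-1 are shared by all tasks
B_true[3, 0] = 2.0                  # feature 3 is specific to task 0
Y = X @ B_true + 0.01 * rng.standard_normal((20, 3))

# The true shared/specific decomposition scores far better than a zero model.
C_true = np.where(B_true == 1.0, 1.0, 0.0)
S_true = np.where(B_true == 2.0, 2.0, 0.0)
obj_true = dirty_model_objective(X, Y, C_true, S_true)
obj_zero = dirty_model_objective(X, Y, np.zeros((5, 3)), np.zeros((5, 3)))
print(obj_true < obj_zero)
```

A full estimator would minimize this objective over (C, S), e.g. by proximal gradient descent; the sketch only evaluates it to show how shared and model-specific supports are penalized differently.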