Recent text-to-image diffusion models have achieved remarkable success in generating high-quality images. However, their exclusive reliance on textual prompts falls short of providing precise control over image composition. In this paper, we propose LoCo, a training-free approach for layout-to-image synthesis that excels in producing high-quality images aligned with both textual prompts and layout instructions. Specifically, LoCo features a novel Localized Attention Constraint, which utilizes the semantic affinity between pixels in self-attention maps to create precise representations of desired objects, thereby ensuring their accurate placement within designated regions. We further introduce a Padding Token Constraint to leverage the semantic information embedded in previously overlooked padding tokens, improving the consistency between object appearance and layout instructions. Our method seamlessly integrates with existing text-to-image and layout-to-image models, improving their spatial control capabilities and addressing semantic failures seen in prior approaches. Extensive experiments demonstrate the superiority of LoCo, outperforming state-of-the-art training-free layout-to-image methods both qualitatively and quantitatively across multiple benchmarks.
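To make the two constraints concrete, the following is a minimal PyTorch sketch of how such training-free attention losses are commonly applied during denoising. The function names, tensor shapes, and the exact way self-attention affinity is combined with cross-attention are illustrative assumptions based on the abstract, not the paper's actual formulation.

```python
import torch

def localized_attention_loss(cross_attn, self_attn, box_mask):
    """Hedged sketch of a Localized Attention Constraint.

    cross_attn: (HW,)     cross-attention map of one object token
    self_attn:  (HW, HW)  pixel-to-pixel self-attention affinity
    box_mask:   (HW,)     binary mask of the object's layout region
    """
    # Propagate the object's cross-attention through pixel affinities
    # to obtain a semantically coherent, localized object response.
    # (Assumption: this is one plausible reading of "semantic affinity
    # between pixels in self-attention maps".)
    localized = self_attn @ cross_attn
    localized = localized / (localized.sum() + 1e-8)

    # Penalize attention mass that leaks outside the designated region.
    inside = (localized * box_mask).sum()
    return 1.0 - inside

def padding_token_loss(pad_attn, region_mask):
    """Hedged sketch of a Padding Token Constraint: padding tokens
    carry residual prompt semantics, so their attention is likewise
    encouraged to concentrate in the object regions."""
    pad_attn = pad_attn / (pad_attn.sum() + 1e-8)
    return 1.0 - (pad_attn * region_mask).sum()
```

In a typical training-free pipeline, these losses would be summed over all objects at each denoising step and their gradient used to update the noisy latent before the next diffusion step; the specific update schedule and loss weighting here are assumptions.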