Aligning large language models (LLMs) to human preferences is challenging in domains where preference data is unavailable. We address the problem of learning reward models for such target domains by leveraging feedback collected from simpler source domains, where human preferences are easier to obtain. Our key insight is that, while domains may differ significantly, human preferences convey domain-agnostic concepts that can be effectively captured by a reward model. We propose DIAL, a framework that trains domain-invariant reward models by optimizing a dual loss: a domain loss that minimizes the divergence between the source and target distributions, and a source loss that optimizes preferences on the source domain. We show that DIAL is a general approach by evaluating and analyzing it across four distinct settings: (1) Cross-lingual transfer (accuracy: 0.621 → 0.661), (2) Clean-to-noisy transfer (accuracy: 0.671 → 0.703), (3) Few-shot-to-full transfer (accuracy: 0.845 → 0.920), and (4) Simple-to-complex task transfer (correlation: 0.508 → 0.556). Our code, models, and data are available at https://github.com/portal-cornell/dial.
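As a minimal sketch of the dual objective (notation is ours and not taken from the paper; the specific divergence measure and the trade-off coefficient $\lambda$ are assumptions), the reward model $r_\theta$ can be trained as

\[
\min_{\theta} \;\; \underbrace{\mathcal{L}_{\text{pref}}\!\big(r_\theta;\, \mathcal{D}_{\text{source}}\big)}_{\text{source preference loss}} \;+\; \lambda \, \underbrace{D\!\big(p_\theta^{\text{source}} \,\|\, p_\theta^{\text{target}}\big)}_{\text{domain divergence loss}},
\]

where $\mathcal{L}_{\text{pref}}$ is a standard pairwise (e.g., Bradley–Terry) preference loss on source-domain comparisons, and $D$ measures the divergence between the reward model's representations (or score distributions) on source and target inputs, encouraging features that are invariant across domains.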