This paper proposes a search-based paradigm for robust multi-exposure image fusion, built on self-alignment and detail repletion modules, and introduces neural architecture search to discover compact, efficient networks and to investigate effective feature representations for fusion.