Open information extraction (Open IE) was presented as an unrestricted variant of traditional information extraction. It has been gaining substantial attention, manifested by a large number of automatic Open IE extractors and downstream applications. In spite of this broad attention, the Open IE task definition has been lacking – there are no formal guidelines and no large scale gold standard annotation. Consequently, the various implementations of Open IE resorted to small scale post-hoc evaluations, inhibiting an objective and reproducible cross-system comparison. In this work, we develop a methodology that leverages the recent QA-SRL annotation to create a first independent and large scale Open IE annotation, and use it to automatically compare the most prominent Open IE systems.