Publications
My publications in reverse chronological order.
2024
- Learning to Ground Existentially Quantified Goals. Martin Funkquist, Simon Ståhlberg, and Hector Geffner. In Proceedings of the 21st International Conference on Principles of Knowledge Representation and Reasoning, Aug 2024.
Goal instructions for autonomous AI agents cannot assume that objects have unique names. Instead, objects in goals must be referred to by providing suitable descriptions. However, this raises problems in both classical planning and generalized planning. The standard approach to handling existentially quantified goals in classical planning involves compiling them into a DNF formula that encodes all possible variable bindings and adding dummy actions to map each DNF term into the new, dummy goal. This preprocessing is exponential in the number of variables. In generalized planning, the problem is different: even if general policies can deal with any initial situation and goal, executing a general policy requires the goal to be grounded to define a value for the policy features. The problem of grounding goals, namely finding the objects to bind the goal variables, is subtle: it is a generalization of classical planning, which is a special case when there are no goal variables to bind, and of constraint reasoning, which is a special case when there are no actions. In this work, we address the goal grounding problem with a novel supervised learning approach. A GNN architecture, trained to predict the cost of partially quantified goals over small domain instances, is tested on larger instances involving more objects and different quantified goals. The proposed architecture is evaluated experimentally over several planning domains where generalization is tested along several dimensions, including the number of goal variables and the objects that can bind such variables. The scope of the approach is also discussed in light of the known relationship between GNNs and C₂ logics. (A toy sketch of the DNF blow-up follows the BibTeX entry below.)
@inproceedings{funkquist:24,
  title = {{Learning to Ground Existentially Quantified Goals}},
  author = {Funkquist, Martin and Ståhlberg, Simon and Geffner, Hector},
  booktitle = {{Proceedings of the 21st International Conference on Principles of Knowledge Representation and Reasoning}},
  pages = {856--866},
  year = {2024},
  month = aug,
  doi = {10.24963/kr.2024/80},
  url = {https://doi.org/10.24963/kr.2024/80},
}
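The exponential blow-up the abstract mentions is easy to see concretely. The following is a minimal Python sketch, not the paper's implementation, and all names are illustrative: it enumerates the variable bindings that the DNF compilation would turn into terms, one term per binding.

```python
from itertools import product

def dnf_terms(goal_vars, objects):
    """Enumerate all bindings of goal variables to objects.

    Each binding corresponds to one term of the compiled DNF formula:
    with |objects| = n and |goal_vars| = k there are n**k terms, i.e.
    the compilation is exponential in the number of goal variables.
    (Some encodings further restrict bindings to distinct objects.)
    """
    return [dict(zip(goal_vars, combo))
            for combo in product(objects, repeat=len(goal_vars))]

# Existential goal "exists x, y. on(x, y)" over four blocks:
terms = dnf_terms(["x", "y"], ["a", "b", "c", "d"])
print(len(terms))  # 16 = 4**2; doubling the variables squares the count
```

The learning-based grounding the paper proposes aims to avoid materializing this enumeration by predicting suitable bindings directly.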
2023
- CiteBench: A Benchmark for Scientific Citation Text Generation. Martin Funkquist, Ilia Kuznetsov, Yufang Hou, and Iryna Gurevych. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, Dec 2023.
Science progresses by building upon the prior body of knowledge documented in scientific publications. The acceleration of research makes it hard to stay up-to-date with recent developments and to summarize the ever-growing body of prior work. To address this, the task of citation text generation aims to produce accurate textual summaries given a set of papers-to-cite and the citing paper context. Since explicit anchoring of cited documents in the citing paper is otherwise rare, citation text generation provides an excellent opportunity to study how humans aggregate and synthesize textual knowledge from sources. Yet, existing studies are based upon widely diverging task definitions, which makes it hard to study this task systematically. To address this challenge, we propose CiteBench: a benchmark for citation text generation that unifies multiple diverse datasets and enables standardized evaluation of citation text generation models across task designs and domains. Using the new benchmark, we investigate the performance of multiple strong baselines, test their transferability between the datasets, and deliver new insights into the task definition and evaluation to guide future research in citation text generation. We make the code for CiteBench publicly available at https://github.com/UKPLab/citebench. (An illustrative sketch of the task's input/output shape follows the BibTeX entry below.)
@inproceedings{funkquist:23,
  title = {{C}ite{B}ench: A Benchmark for Scientific Citation Text Generation},
  author = {Funkquist, Martin and Kuznetsov, Ilia and Hou, Yufang and Gurevych, Iryna},
  editor = {Bouamor, Houda and Pino, Juan and Bali, Kalika},
  booktitle = {Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing},
  month = dec,
  year = {2023},
  address = {Singapore},
  publisher = {Association for Computational Linguistics},
  url = {https://aclanthology.org/2023.emnlp-main.455/},
  doi = {10.18653/v1/2023.emnlp-main.455},
  pages = {7337--7353},
}
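To make the task definition concrete, here is a small Python sketch of what a citation text generation instance looks like. The field names and the toy baseline are assumptions for illustration, not CiteBench's actual schema or reported baselines (see the linked repository for those).

```python
from dataclasses import dataclass
from typing import List

@dataclass
class CitationTextInstance:
    """Hypothetical shape of one instance: a set of papers-to-cite plus
    the citing-paper context, paired with the reference citation text."""
    cited_abstracts: List[str]   # abstracts of the papers to cite
    citing_context: str          # text surrounding the citation slot
    target_citation_text: str    # human-written citation text to match

def first_sentence_baseline(instance: CitationTextInstance) -> str:
    """Toy extractive baseline: stitch together the first sentence of
    each cited abstract. Illustrative only, not a baseline from the paper."""
    firsts = [a.split(". ")[0].rstrip(".") + "." for a in instance.cited_abstracts]
    return " ".join(firsts)
```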
2021
- Combining sentence and table evidence to predict veracity of factual claims using TaPaS and RoBERTa. Martin Funkquist. In Proceedings of the Fourth Workshop on Fact Extraction and VERification (FEVER), Nov 2021.
This paper describes a method for retrieving evidence and predicting the veracity of factual claims on the FEVEROUS dataset, where the evidence consists of both sentences and table cells. The method was developed for the FEVER shared task. It uses similarity scores between TF-IDF vectors to retrieve textual evidence, and similarity scores between dense vectors produced by fine-tuned TaPaS models for tabular evidence retrieval. The retrieved evidence is passed through a dense neural network to produce a veracity label. The FEVEROUS score of the proposed system is 0.126. (A sketch of the TF-IDF retrieval step follows the BibTeX entry below.)
@inproceedings{funkquist:21,
  title = {Combining sentence and table evidence to predict veracity of factual claims using {T}a{P}a{S} and {R}o{BERT}a},
  author = {Funkquist, Martin},
  editor = {Aly, Rami and Christodoulopoulos, Christos and Cocarascu, Oana and Guo, Zhijiang and Mittal, Arpit and Schlichtkrull, Michael and Thorne, James and Vlachos, Andreas},
  booktitle = {Proceedings of the Fourth Workshop on Fact Extraction and VERification (FEVER)},
  month = nov,
  year = {2021},
  address = {Dominican Republic},
  publisher = {Association for Computational Linguistics},
  url = {https://aclanthology.org/2021.fever-1.10/},
  doi = {10.18653/v1/2021.fever-1.10},
  pages = {92--100},
}
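The textual retrieval step described in the abstract can be sketched in a few lines with scikit-learn. This is a minimal approximation, assuming plain TF-IDF over the claim and candidate sentences; the actual system's preprocessing and scoring details, and the TaPaS-based table retrieval, are not reproduced here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def retrieve_evidence_sentences(claim, sentences, k=5):
    """Rank candidate sentences by cosine similarity between their
    TF-IDF vectors and the claim's vector, returning the top k."""
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform([claim] + sentences)
    scores = cosine_similarity(matrix[0], matrix[1:])[0]
    ranked = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)
    return [sentences[i] for i in ranked[:k]]
```

The same ranking pattern, with TF-IDF vectors swapped for fine-tuned TaPaS embeddings, would cover the tabular evidence side.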