dc.description.abstract | This work builds on an analogy between human and artificial reasoning. Large Transformer-based language models have achieved state-of-the-art performance on many tasks, and recently have done so without task-specific fine-tuning, through in-context or zero-shot learning. However, reasoning remains difficult for these models, especially in a zero-shot setting. Humans, in contrast, reason well without the explicit training that models receive, suggesting that properties of human reasoning might help boost model performance. We explore this intuition in two ways: (1) using human-like linguistic input for fine-tuning and (2) prompting models to "imagine", a technique that has been shown to help humans reason better. Our results show that our approach is fruitful for reasoning about fantastical scenarios, in line with previous research on humans, confirming that the analogy between human and artificial reasoning can be helpful. This work opens many avenues for future research on zero-shot reasoning, including with smaller models, a desirable step toward human-like general intelligence. | |