DiVeRSe - On the Advance of Making Language Models Better Reasoners (arxiv.org)
2 points by lajamerr on June 8, 2022 | 1 comment



From the abstract:

*Large language models such as GPT-3 and PaLM have shown remarkable performance in few-shot learning. However, they still struggle with reasoning tasks such as the arithmetic benchmark GSM8K. Recent advances deliberately guide the language model to generate a chain of reasoning steps before producing the final answer, successfully boosting the GSM8K benchmark from 17.9% to 58.1% in terms of problem-solving rate. In this paper, we propose a new approach, DiVeRSe (Diverse Verifier on Reasoning Step), to further advance their reasoning capability. DiVeRSe first explores different prompts to enhance the diversity in reasoning paths. Second, DiVeRSe introduces a verifier to distinguish good answers from bad answers for a better weighted voting. Finally, DiVeRSe verifies the correctness of each single step rather than all the steps as a whole. We conduct extensive experiments using the latest language model code-davinci-002 and demonstrate that DiVeRSe can achieve new state-of-the-art performance on six out of eight reasoning benchmarks (e.g., GSM8K 74.4% to 83.2%), outperforming the PaLM model with 540B parameters.*
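
For intuition, here is a minimal sketch of the verifier-weighted voting the abstract describes: sample many reasoning paths under diverse prompts, score each path with a verifier, and sum the scores per distinct final answer. The helpers sample_reasoning_path and verifier_score are hypothetical stand-ins for the language-model call and the trained verifier, and the paper's step-level verification is not shown.

    import random
    from collections import defaultdict

    def sample_reasoning_path(prompt, question):
        # Hypothetical stand-in for one sampled chain-of-thought completion
        # (in the paper: code-davinci-002 with temperature > 0, so repeated
        # calls return different reasoning paths).
        path = f"{prompt}\nQ: {question}\nA: <sampled reasoning steps>"
        answer = random.choice(["58", "60", "58"])  # toy placeholder answers
        return path, answer

    def verifier_score(question, path):
        # Hypothetical verifier returning a correctness score in [0, 1].
        return random.random()

    def diverse_vote(question, prompts, samples_per_prompt=20):
        # Weighted voting: each candidate answer accumulates the verifier
        # scores of all reasoning paths that produced it; highest total wins.
        weights = defaultdict(float)
        for prompt in prompts:  # diverse prompts -> more diverse paths
            for _ in range(samples_per_prompt):
                path, answer = sample_reasoning_path(prompt, question)
                weights[answer] += verifier_score(question, path)
        return max(weights, key=weights.get)

    print(diverse_vote("a toy GSM8K-style question",
                       ["prompt variant 1", "prompt variant 2"]))

A plain majority vote (self-consistency) is the special case where every path gets weight 1; the verifier lets a few well-reasoned paths outvote many sloppy ones.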

Personal opinion: A lot of recent development in machine learning/language models builds on existing big models that only enterprises had the resources to train, given the prohibitively large dataset and compute requirements. Others then build upon those models and tailor them to the types of tasks they want solved. In a way this is a democratization of Big AI, because the subsequent training isn't as compute-intensive as what it took to produce the original big models.



