We explore the capabilities of large language models (LLMs)—a type of generative AI—in evaluating early-stage ventures, and how different information cues influence these evaluations. Drawing on pre-registered experiments, we benchmark 1,368 venture evaluations by ChatGPT and by human investors against data from the ventures' actual fundraising campaigns and their post-campaign survival rates. Our findings show that ChatGPT outperforms human investors in both investment evaluations and survival predictions, and that information cues make ChatGPT's evaluations more accurate. However, exploratory analyses of ChatGPT's responses reveal human-like biases, including anchoring effects, herding behavior, and the disregard of relevant information.
TYPE OF RESEARCH – Empirical
STAGE OF RESEARCH – First draft
For further information, please refer to: seminars.dipsa@unibg.it. This initiative is implemented within the framework and under the coordination of the TRANSET project of the Department of Management, a department of excellence for the period 2023-2027, as per L. 232/2016.