Hallucinating incorrect information is worse than useless. It is actively harmful.
I wonder how much this affects our fundraising, for example. No VC understands the science here, so they turn to advisors (which is great!) or to LLMs… which has us starting off on the wrong foot.
I work in a field that is not even close to a scientific niche - software reverse engineering - and LLMs will happily lie to me all the time, for every question I have. I find them useful for generating some initial boilerplate but... that's it. AI autocompletion has saved me an order of magnitude more time, and nobody is hyped about it.