
imo GPT-X is going to need to be much, much "smarter" and consistently accurate (which may be fundamentally impossible) before it's going to be "widely deployed for military and industrial decision-making" anywhere.



I don't think a piece of thinking material (AI? no, software? meh?) needs to be above-average-human in its capabilities, or in any way consistently correct in its thinking, or in any way all that broadly learned, or in any way accountable, to cause significant mayhem if given half the chance.

There is this collective image of superhuman, much faster, much broader thinking. Yet a bog-standard human, drunk and in a foreign country, can do plenty of damage. By that measure, ChatGPTs are already plenty intelligent; they're just not yet connected to enough tools, a wallet, and some vague orders or intentions.


Sure, an LLM might not be general enough or trustworthy enough yet, but transformer-based analysis of drone sensor data is surely already being tested and developed. I'm no engineer, but I think it's fairly obvious they'd train their own models rather than trust an off-the-shelf general LLM.



