I don't want to get too far onto the AI hype train, but it seems like this would be perfect for machine learning, trained on the company's wiki snippets and emails (the initiator would have to set a flag saying this is going to be training data).
This is already kind of skewed by hype. It's not possible to teach a transformer-architecture LLM the fact "Dell T420s Tower Servers have very poor cooling for the raid controllers". All that can be done is to have a corpus of text where the statistically most likely text associated with prompts about "raid controller cooling Dell T420" includes the text "Dell T420s Tower Servers have very poor cooling for the raid controllers", or some variant of it. But the model doesn't know that fact, or any other fact, so it's not being taught this knowledge.
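A toy sketch of what "statistically most likely" means here; this is just a bigram frequency table, nothing remotely like a real transformer, but it shows that the output is driven by co-occurrence counts over the corpus rather than by knowing anything:

```python
import re
from collections import Counter, defaultdict

# Toy corpus standing in for a couple of wiki sentences (hypothetical text).
corpus = re.findall(
    r"[a-z0-9]+",
    "Dell T420 tower servers have very poor cooling for the raid controllers. "
    "The raid controllers overheat under sustained load.".lower(),
)

# The "model" is nothing but counts of which word followed which.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

# "Prompting" with 'raid' just returns whatever most often followed it in the
# corpus, which is not the same thing as knowing a fact about cooling.
print(following["raid"].most_common(1))  # -> [('controllers', 2)]
```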
Since this is the internet, I'm asking, not arguing... just this time. ;)
In this context, can we assume AI would be a good internal wiki assistant/Mod?
It could make many data correlations rapidly, and suggest or act on migrating/collating information into more useful groupings, with related information automatically indexed and grouped based on the most commonly searched-for terms that match related or tagged data. This could, and would, evolve over time, assuming the same AI stays in place while the team/staff, terminology, and currently in-use hardware change.
A simplified version of what I'm trying to say might be, AI can't know a fact, but it could parse inputs and build relationships between bits of data, based on how the users access that data, and under what headings it's commonly searched for.
An AI, especially the parse/response loop of LLM AIs, seems almost custom-built to take that input, match it with similar or related inputs, and store the information in a more effective way, under the headings users most commonly search for on the wiki.
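A rough sketch of the grouping idea, with an invented search log (queries paired with the page the user ended up on) standing in for whatever the wiki's analytics actually record; the most frequent terms per page become candidate headings/tags:

```python
from collections import Counter, defaultdict

# Hypothetical search log: (query the user typed, wiki page they landed on).
search_log = [
    ("raid controller overheating", "hardware/dell-t420-cooling"),
    ("t420 raid cooling", "hardware/dell-t420-cooling"),
    ("vpn setup", "network/vpn-howto"),
    ("remote access vpn", "network/vpn-howto"),
    ("raid controller temperature", "hardware/dell-t420-cooling"),
]

# Count which terms most often lead users to each page...
terms_per_page = defaultdict(Counter)
for query, page in search_log:
    terms_per_page[page].update(query.split())

# ...and propose the most common terms as headings/tags for that page.
for page, terms in terms_per_page.items():
    suggested_tags = [term for term, _ in terms.most_common(3)]
    print(page, "->", suggested_tags)
```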
> AI can't know a fact, but it could parse inputs and build relationships between bits of data, based on how the users access that data, and under what headings it's commonly searched for.
Yes, but so can text analytics tools that are not transformer-model LLMs, and they can do it without thousands of swimming pools of water[1] or a gigawatt-hour of electricity[2]. Those LLMs are just algorithmic summary generators, and text summarizers have existed for some time.
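One example of that non-LLM route: plain TF-IDF vectors plus cosine similarity (scikit-learn here, but any classic IR toolkit would do) to surface related wiki snippets for cross-linking, with the snippet text invented for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

snippets = [
    "Dell T420 tower servers have very poor cooling for the raid controllers.",
    "Raid controller firmware upgrade procedure for the rack servers.",
    "How to request a VPN account for remote access.",
]

# Vectorize the snippets and compare every pair.
vectors = TfidfVectorizer(stop_words="english").fit_transform(snippets)
similarity = cosine_similarity(vectors)

# High off-diagonal values flag candidates for grouping or cross-linking;
# the two RAID snippets should score well above the VPN one, no transformer needed.
print(similarity.round(2))
```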
> find that input, and match it with similar/related inputs and store the information in a more effective way
But that's not how these automation tools work – they don't store information, they generate statistically likely text based on the training corpus.
My specific suggestion was about leveraging the parse/response loop of LLM AIs to analyze free text: parse out what's useful, with the reply collating that data.
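A minimal sketch of that loop, assuming some LLM endpoint is available; `call_llm` is a hypothetical placeholder, not a real library call, and the prompt/fields are only illustrative:

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical wrapper around whatever LLM endpoint is in use
    (local model, hosted API, etc.)."""
    raise NotImplementedError

def collate_snippet(free_text: str) -> dict:
    # The "parse" half of the loop: ask the model to pull out structured fields.
    prompt = (
        "Extract a JSON object with keys 'topic', 'hardware', and "
        "'suggested_tags' from the following wiki note:\n\n" + free_text
    )
    # The "response" half: the reply is the collated record, which a human or
    # an indexer can review and merge back into the wiki.
    return json.loads(call_llm(prompt))
```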
How can someone do this, concretely? I would love to slowly train an AI to fill the gap between search and one-on-one assistance, but I don’t know how to do it as a single person running an informative website.