Generally, an inverted index is used rather than a trie, and collection-wide term weights are calculated using something like IDF. You could then use something like cosine similarity to compute length-normalised rankings for documents.
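A minimal sketch of that pipeline in Python, with a toy corpus and names of my own choosing (not from the post):

```python
import math
from collections import Counter, defaultdict

def build_index(docs):
    """Inverted index: map each term to the set of doc ids containing it."""
    index = defaultdict(set)
    for doc_id, text in enumerate(docs):
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def idf(term, index, n_docs):
    """Inverse document frequency: rarer terms get higher weight."""
    df = len(index.get(term, ()))
    return math.log(n_docs / df) if df else 0.0

def cosine_rank(query, docs, index):
    n = len(docs)
    q_weights = {t: idf(t, index, n) for t in set(query.lower().split())}
    scores = {}
    for doc_id, text in enumerate(docs):
        tf = Counter(text.lower().split())
        d_weights = {t: tf[t] * idf(t, index, n) for t in tf}
        # Cosine similarity: dot product divided by the product of vector
        # norms, which length-normalises so long documents aren't favoured.
        dot = sum(q_weights.get(t, 0.0) * w for t, w in d_weights.items())
        norm = math.sqrt(sum(w * w for w in q_weights.values())) * \
               math.sqrt(sum(w * w for w in d_weights.values()))
        if norm:
            scores[doc_id] = dot / norm
    return sorted(scores.items(), key=lambda kv: -kv[1])

docs = ["the quick brown fox", "the lazy dog", "quick quick dog"]
print(cosine_rank("quick dog", docs, build_index(docs)))
```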
There are of course many different approaches to text search, but this is the first time I've personally come across this one.
If anyone is interested in an in-depth look at how text search works, Introduction to Information Retrieval is available for free and is a fantastic book.
I began to experiment with a couple of my own ideas and used a trie rather than a DAFSA because it could be stored and loaded rather than generated at runtime. I also implemented the various linear ranking methods described in the post to rank the documents.
Pardon my surprise, but I find it kind of strange that we see so little of this topic on HN, given that it's one of the pillars of computer science and at the core of one of the most successful IT companies.
For online pattern searching, I'd want Boyer-Moore with the Galil rule for guaranteed linear search. [Due to an unfortunate accident in C++ history, Boost and C++17 skip the Galil rule, leading to O(nm) worst-case runtime rather than O(n+m).]
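To make that O(nm) failure mode concrete, here's a Python sketch of just the bad-character heuristic (essentially Boyer-Moore-Horspool, not the full algorithm); without the good-suffix and Galil rules, a repetitive input forces a near-full rescan of the pattern at each of roughly n alignments:

```python
def horspool_search(text, pattern):
    """Boyer-Moore's bad-character shifts only (Horspool's variant).
    Worst case O(nm): try pattern 'a'*9 + 'b' against text 'a'*10000,
    where every alignment re-scans the pattern and then shifts by 1."""
    m, n = len(pattern), len(text)
    if m == 0:
        return 0
    # For each char except the last, distance from its rightmost
    # occurrence to the end of the pattern; unseen chars shift by m.
    shift = {pattern[i]: m - 1 - i for i in range(m - 1)}
    i = 0
    while i <= n - m:
        if text[i:i + m] == pattern:
            return i
        i += shift.get(text[i + m - 1], m)
    return -1

print(horspool_search("here is a simple example", "example"))  # 17
```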
For offline search, Burrows-Wheeler indexing gives query time linear in the length of the pattern and independent of the length of the searched document.
This allows for result documents that contain some of the terms, all of the terms, or a prefix of a term. It's also pretty fast, as it doesn't need to run a linear scan through the documents. I'm not sure what you mean by "classical" search algorithms, but a trie-based search is a pretty standard way of doing it.
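A minimal sketch of the idea, assuming each trie node carries a posting list of document ids (the structure is my guess at the approach, not the post's actual code):

```python
class TrieNode:
    def __init__(self):
        self.children = {}    # char -> TrieNode
        self.doc_ids = set()  # ids of documents containing this exact term

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, term, doc_id):
        node = self.root
        for ch in term:
            node = node.children.setdefault(ch, TrieNode())
        node.doc_ids.add(doc_id)

    def lookup(self, prefix):
        """All documents containing any term starting with `prefix`."""
        node = self.root
        for ch in prefix:
            if ch not in node.children:
                return set()
            node = node.children[ch]
        # Collect postings from the whole subtree below the prefix.
        result, stack = set(), [node]
        while stack:
            n = stack.pop()
            result |= n.doc_ids
            stack.extend(n.children.values())
        return result

docs = ["tries are neat", "trie based search", "inverted index"]
trie = Trie()
for doc_id, text in enumerate(docs):
    for term in text.split():
        trie.insert(term, doc_id)

# "some of the terms" = union of lookups, "all of the terms" = intersection:
print(trie.lookup("trie"))                        # prefix match: {0, 1}
print(trie.lookup("index") & trie.lookup("inv"))  # all terms: {2}
```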
Tries led to suffix trees led to suffix arrays led to Burrows-Wheeler transform led to FM index, which is probably the best way to do an offline search.
If you haven't looked at BWT indexing, it's worth at least checking out. It preserves the advantages you speak of, with memory cost linear in the size of the search space. Unfortunately, it's a lot hairier to implement.
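To give a feel for it (and for why query time depends only on the pattern), here's a toy backward search over a naively built BWT; a real FM-index would build the transform from a suffix array and replace the O(n) rank scans below with checkpointed rank structures:

```python
def bwt(text):
    """Naive Burrows-Wheeler transform; real builds use a suffix array."""
    text += "\0"  # sentinel, lexicographically smallest character
    rotations = sorted(text[i:] + text[:i] for i in range(len(text)))
    return "".join(rot[-1] for rot in rotations)

def count_occurrences(text, pattern):
    """FM-index style backward search: one rank step per pattern char."""
    last = bwt(text)
    # C[c] = number of characters in the text strictly smaller than c.
    C = {}
    for i, c in enumerate(sorted(last)):
        C.setdefault(c, i)

    def rank(c, i):
        # Occurrences of c in last[:i]; a real index makes this O(1).
        return last[:i].count(c)

    lo, hi = 0, len(last)
    for c in reversed(pattern):
        if c not in C:
            return 0
        lo = C[c] + rank(c, lo)
        hi = C[c] + rank(c, hi)
        if lo >= hi:
            return 0
    return hi - lo  # size of the suffix-array interval = match count

print(count_occurrences("abracadabra", "abra"))  # 2
print(count_occurrences("abracadabra", "cad"))   # 1
```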