
Recipes for automatic text summarization using Google BERT and Microsoft UniLM - sharatsc
https://github.com/microsoft/nlp-recipes/releases/tag/2.2.0
======
Der_Einzige
As someone who's extremely interested in this domain, I am excited to see
these models productized.

Are the pretrained extractive models able to create extractive summaries at
the word level? Or are they (like most other extractive models) only selecting
and ranking sentences?

~~~
sharatsc
It is at the token level. The length of the results can be changed.

~~~
Der_Einzige
Looking at the notebook recipes, it's actually working at the sentence level
(despite your claim): the way you train these is to mark each sentence as "1"
or "0" based on whether it's selected.

Oh well, the world waits a while longer for a word-level extractive summarizer
(one willing to skip around within sentences) that stays grammatically
correct. You got my hopes up, though.

[https://github.com/microsoft/nlp-recipes/blob/master/examples/text_summarization/extractive_summarization_cnndm_transformer.ipynb](https://github.com/microsoft/nlp-recipes/blob/master/examples/text_summarization/extractive_summarization_cnndm_transformer.ipynb)

My (failed) attempt to build what I was actually looking for:

[https://github.com/Hellisotherpeople/CX_DB8](https://github.com/Hellisotherpeople/CX_DB8)

