
Show HN: Can't afford GPT-3 access? Self-host your own GPT-2 API - calebkaiser
https://github.com/cortexlabs/cortex/tree/master/examples/pytorch/text-generator
======
calebkaiser
With GPT-3 getting so many people interested in NLP, and with OpenAI's
recently announced pricing plan putting it out of many people's reach, I
thought it might be useful for some to see how easy it is to deploy your own
GPT-2 API.

This project uses a couple of tools:

\- Cortex: An open source model serving platform I help maintain.
[https://github.com/cortexlabs/cortex](https://github.com/cortexlabs/cortex)

\- Hugging Face's Transformers: An open source library for using popular
language models, like GPT-2.
[https://github.com/huggingface/transformers](https://github.com/huggingface/transformers)

This project uses a vanilla pre-trained GPT-2 model with PyTorch. If you want
to use TensorFlow/ONNX instead, that's supported as well:
[https://github.com/cortexlabs/cortex/tree/master/examples/tensorflow/text-generator](https://github.com/cortexlabs/cortex/tree/master/examples/tensorflow/text-generator)
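
To show what a deployment looks like in code: Cortex serves a Python class with an `__init__` (model loading) and a `predict` (per-request inference) method. A minimal sketch for GPT-2, with the heavy imports deferred so the file can be read without torch/transformers installed (the payload shape here is illustrative):

```python
class PythonPredictor:
    """Sketch of a Cortex Python predictor serving GPT-2."""

    def __init__(self, config):
        # deferred imports: transformers/torch are only needed when the API starts
        from transformers import GPT2LMHeadModel, GPT2Tokenizer

        self.tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
        self.model = GPT2LMHeadModel.from_pretrained("gpt2")
        self.model.eval()

    def predict(self, payload):
        # payload is the parsed JSON request body, e.g. {"text": "a prompt"}
        input_ids = self.tokenizer.encode(payload["text"], return_tensors="pt")
        output = self.model.generate(input_ids, max_length=50, do_sample=True)
        return self.tokenizer.decode(output[0], skip_special_tokens=True)
```

The model loads once at startup rather than per request, which is the main thing the `__init__`/`predict` split buys you.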

If you want to finetune GPT-2 on your own text (a la AI Dungeon), I'd suggest
using gpt-2-simple and deploying with Cortex:
[https://github.com/minimaxir/gpt-2-simple](https://github.com/minimaxir/gpt-2-simple)
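
The finetuning loop with gpt-2-simple is short. A sketch using function names from the gpt-2-simple README (the corpus path, checkpoint size, and step count are illustrative; the import is deferred because the library pulls in TensorFlow):

```python
def finetune_gpt2(corpus_path, model_name="124M", steps=200):
    """Finetune GPT-2 on a plain-text corpus, then sample from it."""
    # deferred import: gpt-2-simple requires TensorFlow, which may not be installed
    import gpt_2_simple as gpt2

    gpt2.download_gpt2(model_name=model_name)  # fetch the base checkpoint
    sess = gpt2.start_tf_sess()
    gpt2.finetune(sess, corpus_path, model_name=model_name, steps=steps)
    gpt2.generate(sess)  # sample from the finetuned model
```

After finetuning, you'd point the Cortex predictor at the saved checkpoint instead of the vanilla weights.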

Lastly, by following this example, you can deploy your API either locally
(inference will probably be slow, depending on your hardware, but it costs $0)
or to a cluster on AWS, which Cortex can spin up and manage for you.
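
Either way, what you end up with is a plain HTTP endpoint. A minimal client sketch using only the standard library, assuming the API accepts a JSON body like `{"text": "..."}` (the endpoint URL and payload shape are hypothetical):

```python
import json
from urllib import request

def query_text_generator(endpoint, prompt):
    """POST a prompt to a deployed text-generator API, return the raw response body."""
    payload = json.dumps({"text": prompt}).encode("utf-8")
    req = request.Request(
        endpoint,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return resp.read().decode("utf-8")

# usage (hypothetical local URL):
# print(query_text_generator("http://localhost:8888/text-generator", "Machine learning is"))
```

Because it's just HTTP + JSON, the same client works whether the API runs on your laptop or on the AWS cluster.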

