Hacker News | textcortex's comments

I would be interested in running inference on Instinct GPUs (MI250 with 128 GB), but I can't find any cloud provider that lets me spin up a machine. It seems they are not yet available, or cloud providers are not interested in supporting AMD hardware.


Awesome! Looking into it. Thanks


I heard a lot of bad things about ROCm; I hope things have improved since then.


TGI has ROCm support.


You can do this without paying OpenAI: textcortex.com


Yep, agreed. Also, this AI 2.0 thing he mentions somehow reminds me of the Web 3.0 BS. Baseless and delusional.


Most probably they are doing something called prompt tuning. It trains a small set of "virtual token" embeddings that get prepended to the prompt before it is passed to the original, frozen model: https://developer.nvidia.com/blog/an-introduction-to-large-l...
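To make the idea concrete, here is a toy NumPy sketch of prompt tuning. All names and sizes are hypothetical illustrations, not any library's actual API: the base model's embedding table stays frozen, and only a small matrix of virtual-token embeddings would be trained.

```python
import numpy as np

# Toy illustration of prompt tuning: the base model's weights stay
# frozen; only the small `virtual_tokens` matrix is trainable. At
# inference time it is prepended to the embedded prompt.
VOCAB, DIM, N_VIRTUAL = 100, 8, 4

rng = np.random.default_rng(0)
frozen_embeddings = rng.normal(size=(VOCAB, DIM))   # frozen base model part
virtual_tokens = rng.normal(size=(N_VIRTUAL, DIM))  # the only trainable part

def embed_with_soft_prompt(token_ids):
    """Prepend the virtual-token embeddings to the embedded prompt."""
    prompt = frozen_embeddings[token_ids]            # shape (len, DIM)
    return np.concatenate([virtual_tokens, prompt], axis=0)

x = embed_with_soft_prompt([5, 17, 42])
print(x.shape)  # 4 virtual + 3 real tokens -> (7, 8)
```

During training, gradients flow only into `virtual_tokens`, which is why the tuned artifact is tiny compared to the base model.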


We released ICortex, an AI-powered Python interpreter, about half a year ago: https://github.com/textcortex/icortex


EU alternative: TextCortex is also available from Italy.


We released an LLM-powered Python interpreter, ICortex, as an open-source project back in 2022: https://github.com/textcortex/icortex
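The core loop of such an interpreter can be sketched in a few lines. This is a toy stand-in, not ICortex's actual implementation: `generate_code` is a stub where a real version would call an LLM, and the generated code runs in the session namespace.

```python
# Toy sketch of an LLM-powered interpreter cell: turn a natural-language
# prompt into Python code, then execute it in the session's namespace.
# `generate_code` is a hypothetical stub standing in for an LLM call.

def generate_code(prompt: str) -> str:
    # Stub: a real implementation would call an LLM API here.
    canned = {"sum 1 to 10": "result = sum(range(1, 11))"}
    return canned.get(prompt, "result = None")

def run_cell(prompt: str, namespace: dict) -> None:
    code = generate_code(prompt)
    exec(code, namespace)  # mutate the session state, like a notebook cell

ns = {}
run_cell("sum 1 to 10", ns)
print(ns["result"])  # 55
```

Keeping one shared namespace across calls is what makes it behave like an interpreter session rather than a one-shot code generator.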


Laughed out loud at the Black Mirror part :) True, I also agree this is kind of a privileged shit-job tour.

