lynnesbian's comments | Hacker News

I can provide a real-world example: Low-latency code completion.

The JetBrains suite includes a few LLM models on the order of a hundred megabytes. These models are able to provide "obvious" line completion, like filling in variable names, as well as some basic predictions, like realising that the `if let` statement I'm typing out is going to look something like `if let Some(response) = client_i_just_created.foobar().await`.
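To make the example concrete, here's a hypothetical sketch (the `lookup` function is invented for illustration) of the kind of context such a model sees: once an `Option`-returning call appears just above the cursor, completing the `if let Some(...)` line is mostly pattern-matching on the surrounding code.

```rust
// Hypothetical: an Option-returning call like the one a local model
// would pick up on when predicting the rest of an `if let` line.
fn lookup(id: u32) -> Option<&'static str> {
    if id == 1 { Some("response") } else { None }
}

fn main() {
    // After the user types `if let`, a small local model can plausibly
    // infer the binding and the call from the function defined above:
    if let Some(response) = lookup(1) {
        println!("{response}");
    }
}
```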

If that was running in The Cloud, it would have latency issues, rate limits, and it wouldn't work offline. Sure, there's a pretty big gap between these local IDE LLMs and what OpenAI is offering here, but if my single-line autocomplete could be a little smarter, I sure wouldn't complain.


I don't have latency issues with GitHub Copilot. Maybe I'm less sensitive to it.


> a product that mostly doesn't do anything except for occasionally break the internet

I wouldn't say that. The postmortem you referred to links to another Cloudflare blog post, one about a pretty serious RCE vulnerability in Microsoft SharePoint that was blocked by their WAF: https://blog.cloudflare.com/stopping-cve-2019-0604/


I mean, it's hardly surprising that Cloudflare will tell you this is a useful product. But it is to securing a web application what regex is to parsing HTML.


Sadly, I work with web developers who all assume they don't need to bother too much with security "because we have a WAF".

