josephwegner's comments

That’s not how markets work, right? And I don’t think it’s true in either example. If you can charge more, you will. Price should match value.

OpenAI is charging a _really_ high monthly fee for ChatGPT, and it’s quite popular. Usage is quite limited - there’s no way the cost they incur per user comes near that fee. Obviously their costs include the R&D that has already happened, but I still think it’s priced way over that.

Drug manufacturers are famously making money hand over fist. Patients aren’t their customers. Insurance companies are. They are surely gouging insurance companies to their fullest capacity.


> If you can charge more, you will. Price should match value.

And big LLM providers will certainly try, but they have competition now - the top three are roughly tied. And then there is competition from open models that can handle 50% of what the big models do.

So they can only set a high price for very advanced/critical tasks, where usage will be much lower.


$20/month is high? For a tutor and research assistant who I can consult at any time of day or night? Seems like an incredible value to me. Would have probably cost thousands of dollars in 2020.


> traditional concept of marriage

While it is correct that marriage has traditionally been a union of people of two different sexes, I think the root of the issue is that this is not how many people perceive the tradition in reality. I, along with many people, view marriage as a union between _two people_ - regardless of gender. The fact that marriage has mostly always been across sexes is a side effect of centuries of homophobia/heteronormativity.

So saying that marriage is meant for a man and a woman does not compute for me. Trying to enforce that is _taking away_ from my concept of marriage.


This is the news that came out last week, but as a person entirely outside of this industry - where do you all go for updates on this sort of thing? I don't want to parse every single paper that mentions superconductors, but I'd be glad to follow news around this specific material. I'm glad to see less mass hysteria than with LK-99; I'm just personally interested and would love to stay close to it for updates!


How in the world are they going to make money on this? Perhaps the hardware is cheap to manufacture, but with no subscription I feel like they are going to get taken to the cleaners on LLM API fees. Even if they're running the LLM themselves, it's not cheap. Could they really be getting enough margin on the device to pay their staff _and_ all of that infra?

Or maybe this is another VC-backed sale price :)


Loss leader and data collection play?

EDIT: it's 100% the razor model; they want the device out there so they own your interaction with "service providers", i.e. they take a cut of everything you do. Middlemen.


Presumably the optional data sim plans.

It's not much different than Sony selling PlayStations at a low price but making money off the games.


I figured it’s local LLMs on the device.


It's not powerful enough for that, 2018 chipset -> https://news.ycombinator.com/item?id=38938330

"No subscription" definitely sounds good to get people in the door, but it seems short-sighted in the long run - they can't be making much on $200, and the more the device is used, the more that margin is eaten. Presumably they will roll out a sub at some point for extra features, but then you get the backlash of "You said no subscription".


Really great article, and the site looks great! One small piece of feedback:

> the server-rendered HTML will always default to light mode. This creates a flicker for night owls

You might consider switching this - render the dark-mode version by default, and have the flicker be from dark-to-light. For users operating mostly in dark mode, a bright flash of white can be painful. The same is not true for users operating in light mode - they will barely notice a moment of dark.


To avoid a flicker, add

    <meta name="color-scheme" content="light dark" />
to the <head> tag, and also put the JavaScript code in <head> so it runs before any content is shown.
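A minimal sketch of that inline-`<head>` script, assuming a hypothetical "theme" key in localStorage (adapt the key and attribute to your setup). The decision logic lives in a pure function so it can be exercised outside a browser:

```javascript
// Resolve the theme to apply. An explicit saved preference
// ("dark"/"light") wins; otherwise fall back to the OS setting.
function resolveTheme(stored, prefersDark) {
  if (stored === "dark" || stored === "light") return stored;
  return prefersDark ? "dark" : "light";
}

// In the real page this would run inline in <head>, before any
// content renders, e.g.:
// document.documentElement.dataset.theme = resolveTheme(
//   localStorage.getItem("theme"),
//   window.matchMedia("(prefers-color-scheme: dark)").matches
// );
```

Because the script runs before first paint, the correct theme is set before anything is drawn, so there is no flash in either direction.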


Ah that's also a good idea. I added a todo for myself to take a stab at this at some point.


Good call! Updated, thanks for the tip.


Yep, I moderate a sub with ~85k subscribers, and pay $5/month to host a better auto mod than Reddit has built. It does an insane amount of moderator work for us. We will not survive without it, but it would cost a crazy amount with the new API fees.

I will turn it off on the 12th, and we will just suffer without it until we all quit and the sub dies. Oh well.


Anyone have a sense for system requirements on running this locally? StableDiffusion was too heavy for either of my machines (which are admittedly not very powerful) - I'm not seeing much in the docs to indicate whether or not this is more or less intensive to run locally.


If you can run any models on llama.cpp, that might be a good indicator of which StableLM models you'll be able to run.

I easily ran 7B int4 ggml models on an MBP with 16GB RAM. The same works on an MBA with 8GB RAM, but you'll have to avoid running any other memory-hogging apps.


In 4-bit, the 7B model requires 6GB of RAM and runs at ChatGPT speeds on CPU (with llama.cpp).

The 15B model coming out soon will require 12GB of RAM and still run at good speeds on CPU.
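Those figures line up with simple back-of-envelope math. A rough sketch (real ggml files are somewhat larger due to per-block scale factors, and inference needs extra RAM for the KV cache and activations, which is roughly where the 6GB/12GB totals come from):

```javascript
// Approximate size of the weights alone for an n-billion-parameter
// model quantized to the given bit width, in GB.
function approxWeightsGB(nParamsBillion, bits = 4) {
  return (nParamsBillion * 1e9 * bits) / 8 / 1e9;
}

console.log(approxWeightsGB(7));  // 3.5 GB of weights (~6GB total in practice)
console.log(approxWeightsGB(15)); // 7.5 GB of weights (~12GB total quoted)
```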


On top of what others said, unlike SD, it's not unusable on CPU... just very slow.

Stable diffusion will run on a 4GB GPU though.


The tuned 7B model is around 33 GB, so you'll need a PC with that much VRAM or RAM. I haven't tried to load it in a text-generation UI, though.


Curious what the draw is to use a Chrome extension for this? Why not a desktop app?


I liked the idea that a Chrome extension can check the content of your current page so you can instantly ask questions. For example, when you read an article you can click the icon and ask for a summary without having to copy/paste the text into a chat box.

But yea a desktop app will also be really cool. I haven't tried it yet but something like SwiftGPT (https://www.swiftgpt.app/) looks promising.


I think cute little robots or even just animated characters would land better than these realistic avatars. I don't want the AI to feel human, but it can still have a "personality". WALL-E vibes come to mind.


Yeah, I like this and other similar suggestions. Either generic task-related icons, or if you're going to personify them in some way, use bots or something similar.

Some friends run a platform called Transloadit, and they used to use little bot icons[0] for each of their processing jobs, and that seems to have worked well for them.

[0] https://transloadit.com/docs/transcoding/handling-uploads/up...


Seems like we need a new meme. `current_year` will be the year of web 3.0!

