Hacker News | mil22's comments

If that's the way you see it, you are also free to do so. Labor is a market and the laws of supply and demand are at play just like any other market. Go start a company and hire some people. This is Y Combinator's Hacker News after all. The world needs more founders.

The comment you’re responding to claims that wage labor is exploitation on the part of the employer. Telling them they can become exploiters themselves is usually not a convincing argument.

It’s also like telling someone with just eighty bucks to their name and debt up to their ears that they can try to win the lottery.


> The comment you’re responding to claims that wage labor is exploitation on the part of the employer.

That’s irrational.


And OP said “If that's the way you see it,” i.e. given the premise, “you are also free to do so.”

Imagine for a second if supply and demand were actually the only forces at play for these businesses run by billionaires - setting aside oligopolies, blatant antitrust violations, lobbying, the revolving door between government and the C-suite, legal tax avoidance, etc.

We don’t live in a fantasy world simulation on a frictionless plane where anyone can be a billionaire if they just pull up their bootstraps.


It's not about whether rich people should or should not feel it. It's not about whether they should or should not be taxed heavily. This isn't a normative question - it's about incentives and mobility.

It's about what their alternatives are, where they choose to be domiciled, what job-creating businesses they take with them, and what effect that has on the state's economy over the long term.

As it is, California and New York have the highest income tax rates in the nation, and are both experiencing large net domestic out-flows. Florida and Texas have no state income tax and have been the largest net recipients of domestic migrants for several years.


> Florida and Texas have no state income tax and have been the largest net recipients of domestic migrants for several years.

Rich people don't like taxes or paying back into the systems they abused to get rich. Water is also wet.


Yep, he hasn't changed. F*ck that guy.


I mean, sky is blue, water is wet, etc.


It's available to be selected, but the quota does not seem to have been enabled just yet.

"Failed to generate content, quota exceeded: you have reached the limit of requests today for this model. Please try again tomorrow."

"You've reached your rate limit. Please try again later."

Update: as of 3:33 PM UTC, Tuesday, November 18, 2025, it seems to be enabled.


Looks to be available in Vertex.

I reckon it's an API key thing... you can more explicitly select a "paid API key" in AI Studio now.


For me it’s up and running. I was doing some work with AI Studio when it was released and have rerun a few prompts already. It's also interesting that you can now set the thinking level to low or high. I hope it does something; in 2.5, increasing the maximum thinking tokens never made it think more.


I hope some users will switch from Cerebras to free up those resources.


Works for me.


Seeing the same issue.


You can bring your own Google API key to try it out, and Google used to give $300 of free credit when signing up for billing and creating a key.

When I signed up for billing via the Cloud Console and entered my credit card, I got $300 in "free credits".

I haven't thrown a difficult problem at Gemini 3 Pro yet, but I'm sure I got to see it in some of the A/B tests in AI Studio for a while. I could not tell which model was clearly better; one was always more succinct and I liked its "style", but they usually offered about the same solution.


Oof. So much opportunistic grandstanding and virtue signaling in the comments there. I read for 5 minutes and didn't find even a single comment that expressed any uncertainty about the truth or accuracy of the allegations.


Some of the top comments do, these three for example: https://news.ycombinator.com/item?id=26961815, https://news.ycombinator.com/item?id=26963597, https://news.ycombinator.com/item?id=26962421.

In general I think this was quite a reasonable comment section. I see a lot of "damn this sounds awful" (and it does), discussion about the general phenomenon of sexual harassment (which is obviously real) rather than that specific case, and some uncertainty about what actually happened. I don't see much "this guy should be jailed immediately" in the top comments. I certainly wouldn't call it a mob, and I don't see anything that deserves to be labelled as insincere virtue signaling.


A lot of works of fiction sound credible. Are you going to believe those?

You don't have all the information. You weren't there. You don't even know the people personally. You are not in a position to make any judgement either way.

Something sounding credible doesn't make it true. It doesn't automatically make it false, either. You don't have to believe the accuser or the accused. The only thing any of us should do is mind our own business.


Thanks for the lecture. How does it relate to the comment I made? Sorry, it's not clear to me.

I didn't personally participate in cancelling this person. In fact, I agreed with the point he made in the article. I'm just not sure he didn't do it.

Are you saying I shouldn't have an opinion on that part?


You can have whatever opinion you want, but don't confuse "sounds credible" with evidence. From the sidelines, you don't know enough to judge either way. Saying "I don't know" is the only accurate position. Everything beyond that is just speculation - and speculation is exactly what keeps cancel culture alive.


The same with OP's post.

So far, I see that this post caused quite a few forks, the opposite of what the author asked for.

I don't have a solution.

I think as a man who runs conferences you shouldn't sleep with people who attend them. Or do anything else like that, for that matter.

Should you be cancelled for that? No. But humans are humans.

There is no solution to this. Courts are also wrong all the time (look at OJ) and victims of SA almost never see justice.

The answer is: we don't know the truth.


Ultimately, this reflects just terribly on the Scala community and every individual who signed the open letter, including Brian Clapper himself and over 300 others. You can read the full list of names here: https://scala-open-letter.github.io/

Having been in a similar situation myself as a teenager, it is truly abhorrent how quickly people are willing to jump to conclusions against someone based on the most limited information, and without giving the accused any chance to tell their side of the story or defend themselves. Not even a single one of my so-called friends asked me what happened, and almost all of them disappeared from my life permanently.

What I learned from the experience was that none of the people who jumped on the cancel bandwagon had ever been worth even a second of my time. It was their loss, and I became much more careful about who I choose as friends after that.

I can certainly say that if I encounter any of the 300+ individuals listed in the letter in my personal or professional life, I will be giving them a very wide berth indeed.


That's just how people are. Most people are weak sheep and lack integrity. The Milgram experiments proved it.


> Ultimately, this reflects just terribly on the Scala community

Maybe the lesson of this is that people should be cautious about getting involved in communities like this to the extent that being cancelled by the community does you this much damage.


Absolutely, and to be clear, "this much damage" could have happened to anyone.


The logical consequence of this would be that all it takes to destroy someone's reputation is collusion between just two people who decide to make false allegations against someone. That is, frankly, ridiculous. Inadequacy of the justice system and the difficulty of prosecuting cases where there is a lack of (or in this case, no) evidence, doesn't justify abrogating the principle of "innocent until proven guilty."


There is some information on this buried in configuration.md under "Usage Statistics". They claim:

*What we DON'T collect:*

- *Personally Identifiable Information (PII):* We do not collect any personal information, such as your name, email address, or API keys.

- *Prompt and Response Content:* We do not log the content of your prompts or the responses from the Gemini model.

- *File Content:* We do not log the content of any files that are read or written by the CLI.

https://github.com/google-gemini/gemini-cli/blob/0915bf7d677...
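For reference, the configuration docs describe "Usage Statistics" as a setting you can turn off in the CLI's settings file. A minimal sketch of what that opt-out looks like, assuming the documented key name `usageStatisticsEnabled` in `~/.gemini/settings.json` (check the linked configuration.md for the exact key in your version):

```json
{
  "usageStatisticsEnabled": false
}
```

Note this only controls the optional telemetry described above; it doesn't change which terms of service govern your account type.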


This is useful, and directly contradicts the terms and conditions for Gemini CLI (edit: if you use a personal account, then it's governed by the Code Assist T&C). I wonder which one is true?


Thanks for pointing that out, we're working on clarifying!


When should we expect to see an update? I assume there'll be meetings with lawyers for the clarification.


Yes, there were meetings and lawyers. We just merged the update. Hopefully it's much clearer now: https://github.com/google-gemini/gemini-cli/blob/main/docs/t...


Where did you find the terms and conditions for Gemini CLI? In https://github.com/google-gemini/gemini-cli/blob/main/README..., I find only links to the T&Cs for the Gemini API, Gemini Code Assist (a different product?), and Vertex AI.


If you're using Gemini CLI through your personal Google account, then you are using a Gemini Code Assist license and need to follow the T&C for that. Very confusing.


Can a lawyer offer their civilian opinion as to which supersedes/governs?


I wonder what the legal difference between "collect" and "log" is.


Collection means the data gets sent to a server; logging implies (permanent or temporary) retention of that data. I tried to find a specific line or context in their privacy policy to link to, but maybe someone else can provide a good reference. Logging is a form of collection, but not everything collected is logged unless stated as such.


They really need to provide some clarity on the terms around data retention and training, for users who access Gemini CLI free via sign-in to a personal Google account. It's not clear whether the Gemini Code Assist terms are relevant, or indeed which of the three sets of terms they link at the bottom of the README.md apply here.


Agree! We're working on it!



Thank you, this is helpful, though I am left somewhat confused as a "1. Login with Google" user.

* The first section states "Privacy Notice: The collection and use of your data are described in the Gemini Code Assist Privacy Notice for Individuals." That in turn states "If you don't want this data used to improve Google's machine learning models, you can opt out by following the steps in Set up Gemini Code Assist for individuals.". That page says to use the VS Code Extension to change some toggle, but I don't have that extension. It states the extension will open "a page where you can choose to opt out of allowing Google to use your data to develop and improve Google's machine learning models." I can't find this page.

* Then later we have this FAQ: "1. Is my code, including prompts and answers, used to train Google's models? This depends entirely on the type of auth method you use. Auth method 1: Yes. When you use your personal Google account, the Gemini Code Assist Privacy Notice for Individuals applies. Under this notice, your prompts, answers, and related code are collected and may be used to improve Google's products, which includes model training." This implies Login with Google users have no way to opt out of having their code used to train Google's models.

* But then in the final section we have: "The "Usage Statistics" setting is the single control for all optional data collection in the Gemini CLI. The data it collects depends on your account type: Auth method 1: When enabled, this setting allows Google to collect both anonymous telemetry (like commands run and performance metrics) and your prompts and answers for model improvement." This implies prompts and answers for model improvement are considered part of "Usage Statistics", and that "You can disable Usage Statistics for any account type by following the instructions in the Usage Statistics Configuration documentation."

So these three sections appear contradictory, and I'm left puzzled and confused. It's a poor experience compared to competitors like GitHub Copilot, which makes opting out of model training easy via a simple checkbox on the GitHub Settings page - or Claude Code, where Anthropic has a policy that code will never be used for training unless the user specifically opts in, e.g. via the reporting mechanism.

I'm sure it's a great product - but this is, for me, a major barrier to adoption for anything serious.

