If that's the way you see it, you're free to act on it. Labor is a market, and the laws of supply and demand are at play just like in any other market. Go start a company and hire some people. This is Y Combinator's Hacker News, after all. The world needs more founders.
The comment you’re responding to claims that wage labor is exploitation on the part of the employer. That they can become exploiters themselves is usually not a convincing argument to them.
It’s also like telling someone with just eighty bucks to their name and debt up to their ears that they can try to win the lottery.
Imagine for a second if supply and demand were actually the only forces at play in these businesses run by billionaires, setting aside oligopolies, blatant antitrust violations, lobbying, the revolving door between government and the C-suite, legal tax avoidance, etc.
We don’t live in a fantasy-world simulation on a frictionless plane where anyone can become a billionaire if they just pull themselves up by their bootstraps.
It's not about whether rich people should or should not feel it. It's not about whether they should or should not be taxed heavily. This isn't a normative question - it's about incentives and mobility.
It's about what their alternatives are, where they choose to be domiciled, what job-creating businesses they take with them, and what effect that has on the state's economy over the long term.
As it is, California and New York have the highest income tax rates in the nation, and both are experiencing large net domestic outflows. Florida and Texas have no state income tax and have been the largest net recipients of domestic migrants for several years.
For me it’s up and running.
I was doing some work in AI Studio when it was released and have already rerun a few prompts. Interesting also that you can now set the thinking level to low or high. I hope it actually does something; in 2.5, increasing the maximum thinking tokens never made it think more.
You can bring your own Google API key to try it out, and Google used to give $300 in free credits when you signed up for billing and created a key.
When I signed up for billing via the Cloud Console and entered my credit card, I got $300 in "free credits".
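For anyone trying this route, a minimal sketch: once you have a key from AI Studio, the Gemini tooling generally picks it up from the `GEMINI_API_KEY` environment variable (the placeholder value below is obviously an assumption; check the current docs for your specific client).

```shell
# Create an API key in Google AI Studio, then export it so the Gemini CLI
# and client SDKs can read it from the environment.
export GEMINI_API_KEY="your-key-here"   # placeholder value, not a real key

# Confirm it is visible to child processes before launching the CLI.
echo "key set: ${GEMINI_API_KEY:+yes}"
```

Putting the `export` in your shell profile saves re-entering it each session, at the cost of the key sitting in a dotfile.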
I haven't thrown a difficult problem at Gemini 3 Pro yet, but I'm fairly sure I saw it in some of the A/B tests in AI Studio for a while. I couldn't tell which model was clearly better; one was always more succinct and I liked its "style", but they usually offered about the same solution.
Oof. So much opportunistic grandstanding and virtue signaling in the comments there. I read for 5 minutes and didn't find even a single comment that expressed any uncertainty about the truth or accuracy of the allegations.
In general I think this was quite a reasonable comment section. I see a lot of "damn, this sounds awful" (and it does), discussion about the general phenomenon of sexual harassment (which is obviously real) rather than that specific case, and some uncertainty about what actually happened. I don't see much "this guy should be jailed immediately" in the top comments. I certainly wouldn't call it a mob, and I don't see anything that deserves to be labelled insincere virtue signaling.
A lot of works of fiction sound credible. Are you going to believe those?
You don't have all the information. You weren't there. You don't even know the people personally. You are not in a position to make any judgement either way.
Something sounding credible doesn't make it true. It doesn't automatically make it false, either. You don't have to believe the accuser or the accused. The only thing any of us should do is mind our own business.
You can have whatever opinion you want, but don't confuse "sounds credible" with evidence. From the sidelines, you don't know enough to judge either way. Saying "I don't know" is the only accurate position. Everything beyond that is just speculation - and speculation is exactly what keeps cancel culture alive.
Ultimately, this reflects just terribly on the Scala community and every individual who signed the open letter, including Brian Clapper himself and over 300 others. You can read the full list of names here: https://scala-open-letter.github.io/
Having been in a similar situation myself as a teenager, it is truly abhorrent how quickly people are willing to jump to conclusions against someone based on the most limited information, and without giving the accused any chance to tell their side of the story or defend themselves. Not even a single one of my so-called friends asked me what happened, and almost all of them disappeared from my life permanently.
What I learned from the experience was that none of the people who jumped on the cancel bandwagon had ever been worth even a second of my time. It was their loss, and I became much more careful about who I choose as friends after that.
I can certainly say that if I encounter any of the 300+ individuals listed in the letter in my personal or professional life, I will be giving them a very wide berth indeed.
> Ultimately, this reflects just terribly on the Scala community
Maybe the lesson of this is that people should be cautious about getting involved in communities like this to the extent that being cancelled by the community does you this much damage.
The logical consequence of this would be that all it takes to destroy someone's reputation is collusion between just two people who decide to make false allegations against someone. That is, frankly, ridiculous. Inadequacy of the justice system and the difficulty of prosecuting cases where there is a lack of (or in this case, no) evidence, doesn't justify abrogating the principle of "innocent until proven guilty."
This is useful, and directly contradicts the terms and conditions for Gemini CLI (edit: if you use a personal account, then it's governed under the Code Assist T&C). I wonder which one is true?
If you're using Gemini CLI through your personal Google account, then you are using a Gemini Code Assist license and need to follow the T&C for that. Very confusing.
Collection means it gets sent to a server; logging implies (permanent or temporary) retention of that data. I tried to find a specific line or context in their privacy policy to link to, but maybe someone else can provide a good reference. Logging is a form of collection, but not everything collected is logged unless stated as such.
They really need to provide some clarity on the terms around data retention and training, for users who access Gemini CLI free via sign-in to a personal Google account. It's not clear whether the Gemini Code Assist terms are relevant, or indeed which of the three sets of terms they link at the bottom of the README.md apply here.
Thank you, this is helpful, though I am left somewhat confused as a "1. Login with Google" user.
* The first section states "Privacy Notice: The collection and use of your data are described in the Gemini Code Assist Privacy Notice for Individuals." That in turn states "If you don't want this data used to improve Google's machine learning models, you can opt out by following the steps in Set up Gemini Code Assist for individuals.". That page says to use the VS Code Extension to change some toggle, but I don't have that extension. It states the extension will open "a page where you can choose to opt out of allowing Google to use your data to develop and improve Google's machine learning models." I can't find this page.
* Then later we have this FAQ: "1. Is my code, including prompts and answers, used to train Google's models? This depends entirely on the type of auth method you use. Auth method 1: Yes. When you use your personal Google account, the Gemini Code Assist Privacy Notice for Individuals applies. Under this notice, your prompts, answers, and related code are collected and may be used to improve Google's products, which includes model training." This implies Login with Google users have no way to opt out of having their code used to train Google's models.
* But then in the final section we have: "The "Usage Statistics" setting is the single control for all optional data collection in the Gemini CLI. The data it collects depends on your account type: Auth method 1: When enabled, this setting allows Google to collect both anonymous telemetry (like commands run and performance metrics) and your prompts and answers for model improvement." This implies prompts and answers for model improvement are considered part of "Usage Statistics", and that "You can disable Usage Statistics for any account type by following the instructions in the Usage Statistics Configuration documentation."
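On that last point, the "Usage Statistics" toggle the docs refer to appears to be a settings-file flag rather than a web page. As a sketch, assuming the `usageStatisticsEnabled` key and the `~/.gemini/settings.json` location described in the Gemini CLI configuration docs (treat both the key name and the path as assumptions and verify against the current documentation):

```json
{
  "usageStatisticsEnabled": false
}
```

If the FAQ's reading is right, this would disable the optional collection for all auth methods, but given the contradictions above it's worth confirming what it actually covers for personal-account sign-in.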
So these three sections appear contradictory, and I'm left puzzled and confused. It's a poor experience compared to competitors like GitHub Copilot, which makes opting out of model training easy via a checkbox in the GitHub Settings page, or Claude Code, where Anthropic's policy is that code will never be used for training unless the user specifically opts in, e.g. via the reporting mechanism.
I'm sure it's a great product - but this is, for me, a major barrier to adoption for anything serious.