> The only efficient protection against fingerprinting is what Orion is doing — preventing any fingerprinter from running in the first place. Orion is the only browser on the market that comes with full first-party and third-party ad and tracking script blocking, built-in by default, making sure invasive fingerprinters never run on the page.
sounds like they block "known" fingerprinting scripts and call it a day.
This is also covered in the article. I appreciated the analogy they used: You can put on a ski mask when you go to the mall, and it will conceal your identity, but you will also be instantly suspicious to everyone around you, and will likely be asked to leave most of the stores you try to visit.
This is only because a mere 0.001% of people use anonymizers. If you are a minority with specific requirements, you are shown the door in almost any case, not only on the Internet.
> Orion is the only browser on the market that comes with full first-party and third-party ad and tracking script blocking
I love Kagi, but that is a laughable statement. Brave has been offering ad and fingerprint blocking for years now. The reason they don't have full first-party blocking ("aggressive" mode blocking) on by default is that it tends to break things.
No, it's usually a JavaScript script that does weird things like drawing strings on an invisible canvas and sending the result back to the server. I'm wondering if a browser extension that intercepts those payloads and randomizes them with other people's payloads is what's called for here.
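Since that description is loose, here is a minimal sketch of what such a canvas-probing script typically does. The exact text, font, and hashing step are my assumptions for illustration, not any particular tracker's code.

```typescript
// Minimal sketch of canvas fingerprinting (illustrative only; real scripts
// vary in what they draw and how they hash and transmit the result).
async function canvasFingerprint(): Promise<string> {
  // Draw to an off-screen canvas that is never attached to the page.
  const canvas = document.createElement("canvas");
  canvas.width = 240;
  canvas.height = 60;
  const ctx = canvas.getContext("2d");
  if (!ctx) return "no-canvas";

  // Font rendering differs subtly across GPUs, drivers, OSes, and font
  // stacks, so identical drawing commands yield slightly different pixels.
  ctx.textBaseline = "top";
  ctx.font = "16px Arial";
  ctx.fillStyle = "#f60";
  ctx.fillRect(0, 0, 120, 30);
  ctx.fillStyle = "#069";
  ctx.fillText("Hello, fingerprint!", 2, 15);

  // Serialize the pixels and hash them into a compact identifier.
  const pixels = canvas.toDataURL();
  const digest = await crypto.subtle.digest(
    "SHA-256",
    new TextEncoder().encode(pixels)
  );
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, "0"))
    .join("");
}

// A tracker would then send the value home, e.g.:
// canvasFingerprint().then((id) => navigator.sendBeacon("/collect", id));
```

Because the pixels depend on the machine's GPU, drivers, and fonts, the hash differs per user, which is also why randomizing it (as suggested above) has to be done carefully to avoid breaking sites that use canvas legitimately.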
The sentiment is more important. But "I'm sorry for how you feel" suggests to many people that the sole problem was their feelings. "I'm sorry for how these changes impacted you" suggests the changes could have been wrong.
I don't think it's the specific phrasing. They could have said "I'll contact you by email to try and understand your concerns" and it's still dodging the explicit, concrete list of grievances.
However, "let's hop on a call" is just additionally dismissive.
Two things stand out, besides what has already been mentioned.
* The infantile corporate-cutesy wording "hop on a call" is not appropriate when talking to somebody who feels that you deeply wronged them. It has the same vibes as cheery "Remember: At Juicero, we are all one big family!" signatures on termination notices, and Corporate Memphis.
* In the first sentence, Kiki says "about the MT workflow that we just recently introduced". Why is this level of detail shoehorned in? Everyone in that conversation already knows what it is about. It's as if Kiki can't resist the temptation to inject an ad/brag about their recently introduced workflow for any drive-by readers. "I'm sorry you were dissatisfied with your Apple(R) iPlunger X(TM), which is now available at major retailers for only $599!"
They don't know what exactly has gone wrong. All they can apologize for is how the person is feeling. Then they want to get on a call to learn more. Which is the start of helping.
The response is as sincere and helpful as it could be for an initial response from someone who wants to figure out what the problem is.
This isn’t out of the blue. The responder was the lead on rolling out this new translation workflow to the community. Check out the thread (https://support.mozilla.org/en-US/forums/contributors/717387...) and you’ll see people immediately called out the exact potential problems the OP here is complaining about, nearly three months ago. There is no need for the responder to follow up to better understand the complaints. They know exactly what the issues are.
even if that were the case (others have explained why that’s not so), that would be an inappropriate time to apologize. you don’t apologize for how someone else feels. you apologize when you recognize that you did something harmful and when the harmed party is amenable to receiving it. otherwise, you’re really just being a jerk who’s only acknowledging that you don’t like how someone else feels.
The problems are nowhere near actionable. A lot more information is needed.
E.g. literally the first bullet: "It doesn't follow our translation guidelines". OK -- where are those guidelines? Is there a way to get it to follow them, like another commenter says works? Does the person need help following the process for that? Or is there a bug? Etc.
These are the things a call can clarify. It's the necessary first step, so why are people complaining?
It's entirely possible that such information is well-known to everyone involved in the translation community.
I would consider it outright insulting if someone who ostensibly "wants to help" doesn't know basic information like that - if the people making decisions about SumoBot are NOT aware of basic information like "where to find the local translation guidelines" then they are presumably not qualified to release a tool like SumoBot in the first place.
Yep agree with this. Nothing is more infuriating than someone Kramering into a space trying “to help” without spending any time or effort trying to understand that space.
They should have understood the guidelines before turning on their machine translation in a given locality.
It's also entirely possible that the Japanese translation team didn't put the guidelines in the right place for the bot to follow them. Or there's a bug. Since another commenter says it's following guidelines just fine.
> I would consider it outright insulting if someone who ostensibly "wants to help" doesn't know basic information like that
Well, the person who wants to help is a customer service manager in Indonesia. They presumably are not the leader of the machine translation product. They are trying to get more information so they can, you know, escalate to the right people.
Turning off the machine translation and reverting all the changes it made seems pretty actionable to me. They can turn it back on when issues are addressed.
What if the AI makes an interesting or important article sound like one you don't want to read? You'd never cross check the fact, and you'd never discover how wrong the AI was.
There is more written material produced every hour than I could read in a lifetime; I am going to miss 99.9999% of everything no matter what I do. It's not like the headline+blurb you usually get is any better in this regard.
Integrity of words and author intent is important.
I understand the intent of your hypothetical but I haven’t run into this issue in practice with Kagi News.
Never share information about an article you have not read. Likewise, never draw definitive conclusions from an article that is not of interest.
If you do not find a headline interesting, the takeaway is that you did not find the headline interesting. Nothing more, nothing less. You should read the key insights before dismissing an article entirely.
I can imagine AI summaries being problematic for a class of people who do not cross-check whether an article is of value to them.
That's fair, but I don't cross-check news sources on average either. I should, but therein lies the real problem imo. Information is war these days, and we've not yet developed tools for wading through immense piles of subtly inaccurate or biased data.
We're in a weird time. It's always been like this; it's just much... more, now. I'm not sure how we'll adapt.
I don't know if I can agree with that. I think we make an error when we aggregate news in the way we do. We claim that "the right wing media" says something when a single outlet associated with the right says a thing, and vice versa. That's not how I enjoy reading the news. I have a couple of newspapers I like reading, and I follow the arguments they make. I don't agree with what they say half the time, but I enjoy their perspective. I get a sense of the "editorial personality" of the paper. When we aggregate the news, we don't get that sense, because there's no editorial. I think that makes the news poorer, and I think it makes people's views of what newspapers can be poorer.
The news shouldn't be a stream of happenings. The newspaper is best when it's a coherent day-to-day conversation. Like a pen-pal you don't respond to.
How do you verify a fact? Do you travel to the location and interview the locals? Or read scientific papers in various fields, including their own references, to validate summaries published by news sources? At some point you need to just trust that someone is telling the truth.
I've had a similar experience with my own project that summarizes RSS articles -- the results have largely been pretty good, but I found using a "reasoning" model gave much better results.
Kagi News is basically a summary of news articles fed into the context. It's different from what the OP is about, which is just asking an LLM with web access to query the news.
I hate saying people are holding it wrong, but just given how LLMs work, how did anyone expect that this would go right? Managing the LLM's context is the game. I feel like ChatGPT has done such a disservice when it comes to teaching users how to actually use these tools and what their failure modes are.
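To make the distinction concrete, here is a rough sketch of the "feed the articles into the context" approach described above. Everything in it (the function names, the prompt wording, the `callModel` stand-in) is an assumption for illustration, not Kagi's actual pipeline.

```typescript
// Hypothetical sketch of context-stuffed summarization.
// callModel stands in for whatever chat-completion API you use.
async function callModel(prompt: string): Promise<string> {
  // POST the prompt to your provider of choice and return the completion.
  throw new Error("wire up your LLM provider here");
}

async function summarizeStory(articleUrls: string[]): Promise<string> {
  // Fetch the article text ourselves, so the model summarizes exactly what
  // we hand it instead of whatever a web search happens to surface.
  const articles = await Promise.all(
    articleUrls.map(async (url) => {
      const res = await fetch(url);
      // In a real pipeline you would extract readable text from the HTML here.
      return `SOURCE: ${url}\n${await res.text()}`;
    })
  );

  const prompt = [
    "Summarize the following articles covering a single news story.",
    "Use only information contained in the articles below.",
    ...articles,
  ].join("\n\n");

  return callModel(prompt);
}

// The failure mode discussed upthread is closer to:
//   callModel("Search the web and tell me today's news");
// where the model, not the caller, decides what ends up in its context.
```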
Agreed on Kagi News, and Particle News has been good, but they accepted funding from The Atlantic, which evidently earns "Featured Article" positioning for articles from funding sources, muddying the clarity of biases. Particle News has a nice graphic indicator for bias, though I've not seen it under promoted Featured Articles. This surely applies to other funding sources too, but The Atlantic one was pretty recent.
FWIW, Particle News is paying publishers to run their full-text content in the Particle app, and this is just a staff pick. Unfortunate that it gave the opposite impression of being an ad.
I was tired of paying for a spy device that steals my attention and tries to psychologically condition me.
> How has this impacted your daily life?
I've found that many people are so addicted to their phones that they genuinely can't comprehend how to function without one. I'm lucky to have been around longer than smartphones, and remember how to do these things. For instance, going to someone's house and knocking on their door instead of sending a text. Sure, one is a little harder and takes more time, but it shows you care enough to do it. Sending a text is so easy AI bots do it.
I've had to seek out like-minded people. Most people just can't be bothered with someone who won't conform, so I seek out people who don't require me to have a phone to be my friend. I discovered quite quickly who was a good enough friend to come check on me, and also who I felt was important enough that I'd go to their house to see them. It may limit the size of my social circle, but I feel it's a stronger relationship because of it.
There are absolutely no drawbacks that I'm aware of.
Could you expand on this further? It would really provide some hope for me if there were a silver lining. I’m concerned that this is one step in an inevitable corporatization of the internet.
I subscribed to Kagi Ultimate but I feel I’m underutilizing its functionality. What are some use cases you would encourage me to try with the models? What are some key takeaways with Ultimate you discovered in your testing?
Claude 3 Opus has the most knowledge encoded in it from my experience, so I use that when I am using the non web search models. I also find that I don't often need the most recent information for the work that I do, so I don't often use the search enabled one. If I use the expert assistant, I usually just provide it an exact URL of a long document that I want to ask questions about.
I think they are particularly useful when you know what you want with particular nuances, and it would not be easy or possible to find that specific information in a web search. For example, I recently used LLMs to help me write a configuration for my RAID. I knew I wanted mirror+stripe, and I wanted to mount the RAID at /data, and mount my home directory at /data/home. I explained this to the language model, and it essentially built me a script to do that.
I could have looked at the manuals for how to use mdadm and edit /etc/fstab, but writing down exactly what I want in plain English and then doing what the language model spits out was easier and faster for me.
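For illustration, here is roughly the kind of script that request could produce, reconstructed as a Node/TypeScript wrapper around the same mdadm and /etc/fstab steps. The device names, filesystem, and home-directory handling are assumptions, not the script the commenter actually received.

```typescript
// Hypothetical reconstruction of a mirror+stripe (RAID 10) setup script.
// Device names and mount details are placeholders; do not run blindly.
import { execSync } from "node:child_process";
import { appendFileSync } from "node:fs";

const run = (cmd: string) => {
  console.log(`+ ${cmd}`);
  execSync(cmd, { stdio: "inherit" });
};

const disks = ["/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd"]; // placeholder devices

// mdadm level 10 is mirror+stripe: striping across mirrored pairs.
run(`mdadm --create /dev/md0 --level=10 --raid-devices=${disks.length} ${disks.join(" ")}`);
run("mkfs.ext4 /dev/md0");
run("mkdir -p /data");

// Persist the mount across reboots via /etc/fstab, then mount it now.
appendFileSync("/etc/fstab", "/dev/md0  /data  ext4  defaults  0  2\n");
run("mount /data");

// Putting the home directory under the array via a bind mount is one way
// to do the /data/home part (an assumption about the commenter's setup).
run("mkdir -p /data/home");
appendFileSync("/etc/fstab", "/data/home  /home  none  bind  0  0\n");
run("mount /home");
```

Even with a script like this handed to you, it is still worth sanity-checking the mdadm and fstab lines against the man pages before running anything destructive.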
You made my entire week. Years ago, I stumbled on a song of theirs and could never find it again. After recognizing the voice, I went through the catalogue and there it was: Peach, Plum, Pear. Thank you!
https://help.kagi.com/orion/privacy-and-security/preventing-...