
This was reduced to rubble on reddit for being an advertisement, with the suggestion that the testing was rigged in favor of their product.

I like LocalLLaMA because it's not just for algorithm/math experts; it's for people who want to use LLMs locally (as the name says), so there is often practical discussion of how to use "off the shelf" models and kits. This also implies a patient crowd who will sometimes go into great depth on specifics, since there are many non-specialists.

Ok, but how could we really trust these companies? I'm not saying to "give up," but aside from "pinkie swear we'll never make a mistake or exploit a loophole again," how can companies be trusted with the most intimate details of people's lives (and the lives of people who never signed up but get dragged in incidentally), when that data makes it trivial to deceive, displace, or even impersonate them? And how do we get away from this being normal, where every company with an IT department or cloud subscription has its own spy agency? Especially since even deletion policies may let them hang onto data for years, during which time divisions can change hands or laws can change. On top of that, they've become embedded in state security policy as a sort of branch of the secret police.

I've been looking into NextCloud to recommend to a government agency. The world desperately needs competition, or at least something compatible with the M365 stack, because it's eating the world, taking a lot of choice away, and killing a lot of innovation outside the Microsoft funnel. Microsoft simply isn't interested in a lot of tech (for example, network schemas, useful for "tell us once" type applications); they'd rather you just use their stack for everything, and the messier it is behind the scenes, the better for them.

Anyway, I have mixed feelings. I admire the community, the support it has from many governments, and its staunch Open Source basis, which makes it useful for an individual or a large organization. But it is built on a lot of crufty PHP, its collection of apps is very uneven (it's hard to know what works well without a lot of research), and its upgrade efforts are heading in a few directions at once. AppApi in particular is on one hand very innovative, on the other going in some odd directions. I know it is successfully used by very large organizations, but without spending a lot of time with it, it's hard to get a sense of the commitment and considerations required.


How can I donate to this project? I do not see any links to donate.

I think more than anything they need advocacy and good quality product contributions (support, documentation, code). From what I know, a lot of the development happens via a few consulting firms that support their larger clients.

"Power users" shouldn't just mean more little boxes on the screen. A 2024 client should offer in-depth background analysis, embracing the open nature of the network. Provide helpful context on people and topics within threads – be transparent about the information that's already out there. Visualizations, metrics and network detection features would make the client truly powerful, offering users transparency and letting them better understand what's happening for more control over their data. Flip the script on the centralization of insight and let people organize at a higher level.

I've seen a few sites train LLMs / use RAG on their online docs. When I used it, it was very useful for search and synthesis. IMO it'd be nice if projects released a current model trained on their documentation and code.
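
For anyone curious what the retrieval side looks like, here's a minimal doc-RAG sketch, assuming the sentence-transformers package; the doc chunks, tool names, and model choice are purely illustrative:

    # Embed doc chunks, retrieve the closest match for a query, and
    # paste it into a prompt for a local LLM to synthesize an answer.
    from sentence_transformers import SentenceTransformer, util

    docs = [  # hypothetical documentation chunks
        "To install, run `pip install exampletool`.",
        "Configuration lives in ~/.config/exampletool/conf.toml.",
        "Use `exampletool sync --dry-run` to preview changes.",
    ]

    model = SentenceTransformer("all-MiniLM-L6-v2")
    doc_emb = model.encode(docs, convert_to_tensor=True)
    query = "how do I install this?"
    query_emb = model.encode(query, convert_to_tensor=True)

    # Rank chunks by cosine similarity; keep the best one as context.
    hit = util.semantic_search(query_emb, doc_emb, top_k=1)[0][0]
    prompt = (f"Answer using only this documentation:\n"
              f"{docs[hit['corpus_id']]}\n\nQ: {query}")
    print(prompt)  # feed this to a local model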

These "wallets" are badly named, they are about credentials, not money. Think of a wallet of identity cards (driver's license, etc).

Google &c already own most people's data. This makes it possible for that data to be portable and network-based.

Not so.

Google *stores* most of your data. If you have your tax records on Google Drive, Google so far cannot target ads based on that content (I can't say for sure what happens if you attach your tax form in Gmail nowadays).

With SOLID, any site can simply say "we need your tax records to prove you're a human, or to combat fraud, whatever".

SOLID and all the other not-solutions like it make your consent as useless as Android app permissions: accept a huge list they don't even bother to show anymore, or go away.

Remember how in Android 2.3 you could still deny the internet access permission? With SOLID it will be the same: full ID data will eventually move to an expandable list of permissions you grant by default.

I bet Berners-Lee thought very hard about all the math, the network, etc. But he's being a buffoon when it comes to understanding applicability.


> With SOLID, any site can simply say "we need your tax records to prove you're a human, or to combat fraud, whatever".

This is not true. But there are human factors issues to solve. Personal AI will help.

Most of what Google &c possess is inference and real-time metadata, which is the real dark magic. Giving people precise control over their identity and information via standards will be a huge gain, but it's not an easy task.


You just highlighted how the narrative co-opts smart people into not seeing the externalities.

Companies will drag this forward anyway: Apple into their proprietary tent, everyone else in every direction they can. It's a very legitimate role of government to try to put some controls on this mess, and the ideas behind those controls add up to a better next-generation system.

THAT'S why I hammer the point he is being a useful idiot.

He is doing their work, trying to force regulations that are good for that terrible end state. He's working so that government helps it along instead of putting controls on it (which we already have today, btw).


You can't put controls around grey goo. You can put controls around clearly defined access methods and data shapes.

Wow, some pretty strong opinions here. Did somebody say Nazis?

Tech innovations are always a mixed bag, but there are some great ideas in this, like selective disclosure, where you don't have to reveal all your personal info (e.g., the address on your driver's license) when proving your age. Is uploading a picture of your physical credentials to fairly random web sites supposed to be a better solution?
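
To make the mechanism concrete, here's a minimal sketch of salted-hash selective disclosure (the idea behind SD-JWT-style credentials); the claim names are illustrative, and a real system would sign the digests and use a standardized format:

    # Selective disclosure via per-claim salted hash commitments.
    import hashlib, json, secrets

    claims = {"name": "A. Person", "birth_date": "1990-01-01", "over_21": True}

    def digest(key, value, salt):
        # Commitment to a single claim: H(salt | key | value)
        data = f"{salt}|{key}|{json.dumps(value)}".encode()
        return hashlib.sha256(data).hexdigest()

    # Issuer: salt and commit to each claim (signature over the
    # digests omitted in this sketch).
    salts = {k: secrets.token_hex(16) for k in claims}
    digests = {k: digest(k, v, salts[k]) for k, v in claims.items()}

    # Holder: reveal only `over_21`, withholding name and birth date.
    key, value, salt = "over_21", claims["over_21"], salts["over_21"]

    # Verifier: recompute the disclosed claim's digest and compare.
    assert digest(key, value, salt) == digests[key]  # proves over_21 only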

I also think people deserve to have well-described data and interactions, and their "wallet" should become a coherent place for organizing long-term information. These standards create that possibility. Letting companies dominate and invent things to suit their needs as they go along is not a better solution.

It takes some strength from governments to do this properly, which some will point to as backward (witness the reaction to iPhone AI features). But companies have not found the end point of innovation; it needs to be kept open and directed by broad human needs, not the current shiny thing.


> great ideas in this, like selective disclosure

How will a centralized wallet used for everything solve the human factors problems?

We know there's a deep human factors issue here. Web sites and apps ask for permission seemingly all the time for as much as they can, and most people just agree, because they've learned that usually works.

Until it doesn't.

Here's a Fortune article about how BankID, a widely used electronic ID in Sweden, is used in fraud, at https://fortune.com/europe/2024/06/21/why-going-cashless-has... :

"For Bagley, the fact that BankID is so commonplace is part of the problem. “It ends up not really being a security measure, but just another step in using a website,” she said. “You don’t really think twice about what the BankID app might say you are logging into.”"

You can blame Bagley for not double-checking the transfer before verifying, but she's not the only one. "Online fraud and digital crime in Sweden have surged, with criminals taking 1.2 billion kronor in 2023 through scams like the one Bagley fell for, doubling from 2021."


User education, personal AI, and responsible vendors will all be part of the solution. Seamless and safe transactions are very much desirable, and there is a whole world to build on once they're in place, so there is a lot of motivation to solve these problems; but the solution shouldn't come from a few vendors that have no real regard for anyone outside their customers.

So you think that selective disclosure isn't all that useful without all of the above?

Who will do the education? The banks? The schools? If the latter, what subjects will be removed from the curriculum?

Who will verify the AI is trained correctly? Who is liable if it is not? How will it be updated as new frauds are discovered?

Will the solution require everyone to buy new phones? How much more will that destroy the world?


The same mixture that did the education for credit cards and smartphones: banks, friends, work. It's a gradual thing; there's no need to remove anything from the curriculum.

I don't know what you're imagining as an alternative.


My question remains: to what extent is selective disclosure useful given the demonstrable human factors failures in existing selective disclosure systems?

User agreements are another example of failure. They give full disclosure, on a take-it-or-leave-it basis. Few people say no to GitHub when it means being blocked from participating in most software development projects.

Plus, very few actually read that full disclosure. I can guarantee you that most people do not come out of high school with sufficient training to read those agreements, much less immigrants like me who never received training in the Swedish legalese I am required to agree to in order to use digital healthcare.

All these experiences tell me that a central personal information store as described will have exactly the same failures, and that selective disclosure will in practice be equally meaningless.

"Learn a credit card" is misdirection. We know from the number of people who declared bankruptcy due to credit card debt that they didn't all learn how to use it correctly, or had no alternative than taking on ruinous debt.


I think this discussion is going in a few unhelpful directions. These "wallets" are not used for financial transactions; they're for credentials. They are an alternative to paper/plastic driver's licenses, proof of majority, etc. Selective disclosure is a specific thing, and comparing it to "full disclosure" isn't relevant. If you must compare, it is much easier to understand what it means to use a very fine-grained proof ("the person in this picture is over 21") than to hand over the many personal details on a typical physical ID (full name, exact birth date, medical conditions, address, country of birth, etc.).

People going bankrupt through credit abuse is a separate issue from learning how to technically use such instruments. Many know how to use credit "properly" but have a weakness for which the only solution may be imposed limits. Many others are taken in by misleading tactics. Fine-grained digital approaches can help in those situations.

It is partially how you look at it. I want information systems to become coherent à la the semantic web, but in a specifically user-centric way (which is one of the ideas of Solid). I think well-defined digital credentials are an opportunity to give people a better view of the information they hold, and to make issuers more accountable through a fine-grained approach, e.g. evaluable axioms per credential fashioned after "law as code" approaches. This could be connected to a neuro-symbolic AI so the user can discuss scenarios beyond individual transactions. Especially as inter-related credentials multiply, this will make them easier to manage and less of a separate world that a few institutions and companies control, which I think is incredibly valuable. Some of these ideas aren't possible yet, but we aren't going to get there by continuing to produce grey-goo systems.
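
As a purely hypothetical illustration of an "evaluable axiom per credential" (the credential shape and rule here are made up, not any standard):

    # A machine-evaluable rule attached to a credential, so the holder
    # (or their AI agent) can test scenarios locally before disclosing
    # anything to a verifier.
    from datetime import date

    credential = {"type": "DriverLicense", "birth_date": date(1990, 1, 1)}

    def at_least_years_old(cred, n, today=None):
        """Axiom: "holder is at least n years old today"."""
        today = today or date.today()
        b = cred["birth_date"]
        age = today.year - b.year - ((today.month, today.day) < (b.month, b.day))
        return age >= n

    # Evaluated entirely on the holder's device; the issuer is never
    # contacted and the birth date itself is never revealed.
    print(at_least_years_old(credential, 21))  # True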

While credit has harmed some people (which is regrettable and should be resolved) it has enabled the vast majority of users to build better lives past other forms of capital. With digital systems and well defined data, the user can walk through clear, directly relevant, and private scenarios of what their next action will yield, without any dependency on a particular provider. But only if there is a forceful move to coherent data.


> These "wallets" are not used for financial transactions, they're for credentials.

Which is why I gave other examples of the human factors issues, not just financial transactions: configurable cookie warnings, configurable app permissions, and user agreements.

> what it means to use a very fine-grained proof

Except you know that bars, etc. are going to ask for more details, including full name, address etc. And they'll say it's needed to prevent known hooligans and troublemakers.

Will people stop going to bars until the bars request only the minimum required information? No, or at least no more than people currently decline all the "I agree to this surveillance" buttons on a web page.

So people just accept, and enter. Then the bar asks for more, and more, and more, and people have been trained to just agree to everything, because they have very little power to say no.

I've already heard accounts of people who bring their passport instead of state id for the simple reason that it does not contain a scannable address. If it's expected that everyone always has the ability to provide any required information, simply saying no is hard.

> Fine-grained digital approaches can help in those situations.

"Can" is pulling a lot of weight. I can click off all the cookie trackers. The vast majority do not. Is that from an informed decision, or is that simply the easiest decision?

I can disable geolocation tracking on my browser, but then - oops! - the county web site showing me the upcoming vaccination times doesn't work because the site assumes everyone has geolocation turned on, and they never tested for someone who voluntarily disabled that option.

Build a system that expects coherent data, and you build a system where people get trained to provide anything which asked for, with poor support for those who opt-out.


> Except you know that bars, etc. are going to ask for more details, including full name, address etc. And they'll say it's needed to prevent known hooligans and troublemakers.

I wouldn't expect this at all. Being visually recorded, maybe. I'd hope governments step in and prevent creating private databases around that.

I absolutely disagree that coherent data means people provide anything asked for.

You are more than making up for my "can" with your suppositions.

Cookies &c are examples of tech that got away without regulation when it was needed. This is regulation. It's needed.


I've been an Android user since day 1, but am very impressed by some of these features. I'd love to have the half-click to focus feature and other dedicated camera button features, macro, and a few other things that put this generation in a new category. I'd like to see them catch up with and surpass all of Google's onboard AI features, and it looks like they're working on it. Being able to find a section of a video by a vague description, all on-device, is incredible. And they're finally improving their photo app. If Apple offered call screening, ambient song identification/logging, and allowed browsers to support extensions, I'd be tempted to switch, especially since they have a clearer privacy story. I'm glad strong competition is continuing, especially around privacy.


> I'd love to have the half-click to focus feature and other dedicated camera button features

That would be Sony's phones for you. They're great for photography, especially if you process the raw files yourself.

Downsides being... it's a Sony. And they also don't have any interest in AI and advanced software features.


Lack of interest in AI is a pro for me, not a con.


Likewise. The only thing I strongly dislike about the 15 Pro is the brutal post-processing smell on the photos. It's almost like how you can tell when an image was AI generated. You can tell a photo came from my phone (or any other iPhone 15). The processing is HEAVY and there is a whack load of ML going on.


I'm in the same boat regarding generative AI.

The sad thing is that image processing is also basically machine learning when it comes to noise management, and computational photography is still what pushes the envelope for sensors of this size. Sony handles the whole stack, from the sensor to the color management, and still can't deliver great photography on their phones, almost solely because of that.

I'm curious how Xiaomi does with their phones with actual larger sensors, but given the price I won't be touching one anytime soon.


iOS does support a form of call screening, called Live Voicemail, which transcribes the message being recorded by the caller and lets you pick up the call if you want. iOS also supports ambient song identification, with history, which I use frequently. Safari supports extensions, and I believe other browsers can as well, but I can't speak to that as I really only use Safari on my phone, even though it's not my primary browser on desktop.

Figured I'd drop a comment to let you know about the others though!


> iOS also supports ambient song identification, with history, which I use frequently.

Does it? I can't find any info about this online and all I can find seems to indicate that you can run Shazam and it scans for some amount of time afterwards but iOS kills it to save battery. It doesn't seem like you can get Google Pixel-like "Now Playing" which I sorely miss on my iPhone 15 Pro.


Right, and the person saying they "use it" as opposed to "refer to it" is an indicator. It's a great feature, using on-device "AI" (privacy preserving), and available since the Pixel 2 (2017).

That's great news, I didn't know they had rolled out those features. I don't really want to rewrite my extensions for another browser, but I'll see how applicable the others might be.

> I'm glad strong competition is continuing, especially around privacy.

Until Apple releases an iOS platform equivalent to AOSP, there's really no competition at all. Apple claims to care about privacy; Google proves they do.


I disagree.

Google’s entire business model is dependent on personal data. AOSP may have privacy features that are verifiable, but Google Play Services is not open source and undoubtedly collects lots of data for Google. Most AOSP-based phones include GMS. Sure, you can limit what access GMS has, but then you’re sacrificing features. The majority of people probably opt in.

In contrast, Apple doesn’t need your data for most of the products / services they sell. Privacy is a selling point, so they’re incentivized to build robust privacy features. I’d love to see more commitment to open-sourcing underlying technologies but imo Apple is way more privacy conscious than Google.

I will however give Google credit for their privacy initiatives in recent years. They seem to be taking it more seriously.


> Google Play Services is not open source and undoubtedly collects lots of data for Google

Google Play services is not everything though, and Android being what it is, you can actually replace and spoof most of these features to your heart's content. Having used Android without Play Services for a few years now, I honest to god do not notice the difference. microG coming preinstalled on most Android derivatives helps a lot there.

> Privacy is a selling point, so they’re incentivized to build robust privacy features.

Problem is, that's circular. Apple says that, and certainly stands to gain quite a bit from claiming it. But nobody is holding them accountable besides themselves; if Apple were asked by a third party to compromise their users' privacy, their users might never know. Nobody can earnestly say that iOS is a comparatively private operating system, because we literally cannot see how it behaves!

Apple's approach to "privacy" is publishing whitepapers and then absolving themselves of real accountability. That's how they approached iPhone security, that's how they approached Mac security, and lord only knows how they approach iCloud security. When you say that Apple is "privacy conscious", you mean to say they market privacy better. You don't know how conscious Apple is of privacy, you only know what they claim to be true.

As I said; it's not a competition. Marketing-based security is not a threat model; transparency is.


> In contrast, Apple doesn’t need your data for most of the products / services they sell. Privacy is a selling point, so they’re incentivized to build robust privacy features.

This is not even funny:

1. "macOS sends hashes of every opened executable to some server of theirs" https://news.ycombinator.com/item?id=25074959

2. https://sneak.berlin/20231005/apple-operating-system-surveil...

3. Apple fined $8.5M for illegally collecting iPhone owners' data for ads (gizmodo.com) https://news.ycombinator.com/item?id=34299433

4. Google collects 20 times more telemetry from Android devices than Apple from iOS (therecord.media) https://news.ycombinator.com/item?id=26639261

5. Keeping your data from Apple is harder than expected (aalto.fi) https://news.ycombinator.com/item?id=39927657


Why is this downvoted? Any counter-arguments?

I actually don't know how you can say this with a straight face.


I don't know how you can include "Apple" and "privacy" in the same sentence without some kind of source code to back it up.


How does source code back Google's privacy claims?

AOSP is so different from even Google's own Pixel phones that its code is basically irrelevant.


It's an option. AOSP isn't identical to OEM-distributed ROMs, but it's certainly a great basis for private OSes like CalyxOS and GrapheneOS. For individuals that are serious about privacy, there aren't any options to compile your own iOS with Apple services disabled.

I'm not saying that AOSP absolves all of Google's server-side behavior (or even that it proves they're benevolent; neither company is). My point is that Google presents a realistic threat model to their users, one that takes them seriously and even provides escape hatches for any potentially concerning features. iOS presents a comparatively cartoonish outlook that relies more on the strength of their marketing than on the self-evidence of their security. Apple's position is indefensible but claims to be altruistic; Google's position is honest, so much so that it treats Google itself as a threat.


Are you running AOSP without GMS?


This is a false dichotomy. They both are terrible at privacy (see my other post here with links). Try GNU/Linux phones if you actually care about it.

GNU/Linux cellular devices are not more private than an appropriately secured Android handset. Given the modem vulnerabilities and the poor support for Linux ARM SoCs, I would much rather trust an OS designed from the ground up to incorporate cellular security. There's a reason Linux was forked to create Android rather than built as an upstream effort. Linux is perfectly secure for a physically secured server rack. It is a nightmare scenario for GSM privacy.

> Linux is perfectly secure for a physically secured server rack. It is a nightmare scenario for GSM privacy.

What is the difference?


Endpoint security, IP-based GSM networking vs. RIL telephony, isolation measures, ISP trust and fingerprinting mitigation, modem transparency, and privileged baseband access and SIM vulnerability, to name a few big ones.

Again - Linux for desktops and servers can be great for privacy. For pretty much every single smartphone-based threat vector, it is a free lunch for attackers. We're talking off-the-shelf CVE exploits versus blowing a multi-million dollar zero-day here.


This is all very theoretical and unclear. For example, on the PinePhone, the modem runs FLOSS software (except for a small blob managing the tower connections). Also, it's connected via USB, so it has no privileged access. I have no idea what ISP trust has to do with that. You can install Tor on the phone. And so on.
