
This also affects communication apps, like email clients.

It's a real bummer for the user experience, honestly. Yes, people can say "share all contacts", but the user experience is confusing, and many people won't.

This means that all 3rd party mail and messaging apps will be lacking contact information -- whereas of course Apple's own will have it by default.

Again, it's shameful API design by Apple, because they don't have to use their own APIs/permission systems.

This could be mitigated, by the way, by having a rate-limited "lookup" API where an app can say "Can I have the contact for bob@example.com, if it exists?". Most legit apps don't need a copy of your entire address book, but they may need to query it occasionally.


Another example of Apple further entrenching its monopoly -- like other permission prompts, I bet Apple excludes its own apps from asking for this.

I bet iMessage doesn't ask you if it's allowed to access your contacts, in the same way that Photos doesn't ask you which photos you want Apple to know about. That would be an unacceptable user experience for Apple, but acceptable for 3rd party apps.

This seems to be a constantly overlooked part of the permissions discussion. I'm all in favor of Apple changing the rules on their platform to whatever they like, as long as their own apps have to play by the same rules.

Instead, they use permissions to advantage their apps over the competition.


No users think the Apple device with the Apple Contacts app is or should be hiding Apple Contacts app contacts from Apple Mail or Apple Messages app. If you don't want your contacts in the Apple suite, don't put them in the Apple suite.

Similarly, if you use Microsoft Contacts, you assume you see those in Microsoft Outlook and Microsoft Teams, and their devices using their OS.

Similarly for Google's suite, and their devices using their OS.

There are other Contacts apps, such as Clay (from clay.earth) that have other sets of contacts and can sync with still other contacts stores such as, say, LinkedIn. Those aren't visible to Messages without an affirmative action, so Apple is not advantaging itself.

If you're arguing that application suites aren't allowed, any number of users are going to be very annoyed with you.

If you're arguing that nobody can make both the hardware and the productivity suite combined, you're either saying the PDA doesn't have a right to exist, or that forcing the PDA to be open to other apps means it's no longer allowed to be an integrated suite now that it's open. And, I guess, that Microsoft can't make Windows or Surface unless they spin off Office, or damage what they make until none of it talks to each other seamlessly?

This entire line of thinking, that nobody's allowed to offer a seamless experience, seems like overregulation of what consumers are allowed to choose and buy.


The line of thinking here is that Apple should play fair. The power of defaults is very strong.

Most iOS users aren't going to be thinking of "Contacts" as "Apple Contacts". It's just the contacts on their phone. It's their contacts, not Apple's.

I think Apple should absolutely have to use the same permission prompts as 3rd party developers -- because this aligns the incentives to design a great user experience.

Instead, they have no incentive to design these prompts and APIs well -- in fact, a disincentive.


Rephrased: Users are not allowed to choose an integrated PDA.

And, still not even if it lets them make a different choice later.

Another implication: All first party apps must be interchangeable. I'm curious -- must third party apps also be?

And then, who decides what lowest common denominator functionality is, and what's OK to offer that others don't?

You've taken that choice away from the market.


The rules of the platform should be the same for all users of the platform. You can't play the game and be the referee.

I don't see how this prevents an integrated user experience. It's orthogonal.

If the user experience for permission management is well designed, and the APIs are thoughtful, this shouldn't be a problem.

It's a problem in iOS today because the user experience and APIs are an afterthought, and there's a disincentive for making them good.


> No users think the Apple device with the Apple Contacts app is or should be hiding Apple Contacts app contacts from Apple Mail or Apple Messages app.

I am a user and you are wrong.

I absolutely want every app, regardless of vendor, to be sandboxed from each other. Without explicit permission, I don't want Mail or Messages to know that I have a contact card for the peer.


Having worked with Swing recently, I worry that it will not work well in the not-too-distant future, because it feels frozen in time at about 2005.

There are a lot of assumptions baked in that aren't holding up today.

For example, high-density displays and multi-monitor setups aren't well supported. There's a bunch of stuff hard-coded in the JDK that no longer makes sense, and you have to hack around it.


I recommend using the JetBrains JRE; they have fixed lots of these sorts of issues, as well as greatly improving hot class reloading for development.


I haven't worked with FX or Swing lately but I could have sworn they delivered hidpi support. Maybe in this JEP? https://openjdk.org/jeps/263


I have been looking for something like this in Go for a while. I think there's a real opportunity for Go to provide a great developer experience for cross platform UI due to how simple the build process is. Speaking from experience, half the pain of cross platform development is managing build complexity, which Go basically eliminates.

I'm curious how you'll end up solving for cross-platform layout when native controls have different intrinsic sizes per platform?

This is something I haven't seen solved super well in cross platform toolkits.

Wishing you luck though.


To be fair, this is horribly written error handling. These errors should be wrapped to add the specific context that the caller might want in order to handle the error.


> To be fair, this is horribly written error handling.

I can only take it as it's presented, in good faith. The author chose what to publish and compare. If I've been trolled, so be it.

> These errors should be wrapped to add the specific context that the caller might want in order to handle the error.

That would be an Exception and its stack trace.


Sure, you can do that just as much as you can do "try! some_func()" in other languages.

But it's obvious that you're doing something naughty when you do.


You’re right. But that’s my point. You can “ignore” the error and Go doesn’t force you to do anything about it. I’ve seen this in production code. I’ve also seen people just checking the null-ness of the return value, ignoring the error. And that brings me to my point, which is: it’s all about good programming practices, whether it’s Python or Go.

edit: combating the overzealous auto-correct


Agreed.

Go forces you to think about failure just as much as success, and I find that fantastic for building robust software.

Errors happen, so you're forced to do something when they do. Errors aren't exceptional.

But more importantly to me, Go forces you to think about what a caller to your function might want to know about what went wrong. Errors in modern Go are for communication, not an explosion of "something bad happened, abort!"

A long time ago, errors in Go were very basic -- essentially just strings 99% of the time, and I think that's where some of the hate comes from. In the early days, I think it was deserved.

But nowadays, with the Go errors package (error chaining), errors are extremely powerful.

Good libraries will often return an error structure that provides more context than a stack trace would, e.g. os.LinkError: https://pkg.go.dev/os#LinkError

tl;dr if you're writing "if err != nil { return err }", you're holding it wrong.


> Go forces you to think about failure just as much as success

Except, it doesn’t.

Forcing you to at least write boilerplate code for failures might be a nudge to think about them for some people, but it absolutely is not “forcing” you to think about it, and you can absolutely defer it with boilerplate while concentrating on the success path and never actually return to it.


This is such a fantastic benefit of Golang: spin up a VPS, apply some sensible defaults, cross compile then run your binary.

Compare this to deploying python, node or php... Needless complexity.

If only running (and keeping running) a database server could be this straightforward!


Nowadays you can bundle a node app as a single binary file. It’s an underused feature, maybe it will catch on.


Could you share how that can be done? I spent some time this year trying to pack a node tool into a single fat binary for a specific use case where we wanted a history of versioned executables - i.e. a build job that needs to run specific versions of the packed tool in a specific order determined by external factors.

I tried Vercel pkg, Vercel ncc, nexe, and a few other tools I can't remember right now. They all had issues with node v20, some dependencies, or seemed to not be maintained anymore. I ended up relying on esbuild as a compromise to get a fat script containing all sources and dependencies, tarballed with some static files we rely upon, so we can at least get versioned, reproducible runs (modulo the node env). Still not perfect; a single binary would be preferable.


I’ve used pkg with success for some small apps - curious to know why it didn’t work for you.

Now you can use this native feature (not totally stable yet though) which I’ve been meaning to try https://nodejs.org/api/single-executable-applications.html


I don't remember the details, and cannot find my notes on vercel/pkg. But looking at https://github.com/vercel/pkg right now I see the project has been deprecated in favour of single-executable-applications


Bummer that they deprecated/archived it before the native feature was stable.


I saw that Deno did this, but it's cool to see Node picked it up too. I wish there was an option to run TurboFan at build time to generate the instructions rather than shipping the entire engine, but I guess that would require static deps and no eval, which can't really be statically checked with certainty.


The engine is actually pretty small. Something like 50-100MB if memory serves (when I was using pkg)


You can build native and self-contained binaries in C# too.


How often is the deployment model “copy a single binary to a VPS via SSH and run it” even used nowadays?

And with that still, you’d be much better served by using a more expressive and less painful to use language like C#. Especially if the type of use is personal.


You can do the same with Python, though, from the Nuitka compiler to LinkedIn's shiv or Twitter's pex (which follows PEP 441).


Just pack up your whatever-else as an AppImage. Job done.


How does it deal with the undocumented system dependencies Python libraries often have?


For Python, you could make a proper deployment binary using Nuitka [1] (in standalone mode – avoid onefile mode for this). I'm not pretending it's as easy as building a Go executable: you may have to do some manual hacking for more unusual packages, and I don't think you can cross compile. I think a key element you're getting at is that Go executables have very few dependencies on OS packages, but with Python you only need the packages used for manylinux [2], which is not too onerous (although good luck finding that list if someone doesn't link it for you in a HN comment...).

[1] https://nuitka.net/

[2] https://peps.python.org/pep-0599/#the-manylinux2014-policy


With Go, you never spend hours/days debugging broken builds due to poorly documented gradle plugins. As an example :)

You really, truly, just run.


That is the problem right there, using Gradle without having learned why no one uses Ant any longer.

As for Go, good luck doing "just run" when a code repo breaks all those URLs hardcoded in source code.


That’s not a real issue:

* the go module proxy ensures repos are never deleted, so everything continues to work

* changing to a new dep is as easy as either a replace directive or just find and replace the old url
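For the second point, the replace directive really is a one-liner in go.mod; a sketch with hypothetical module paths:

```
module example.com/myapp

go 1.21

require github.com/olduser/oldlib v1.4.2

// Point the moved dependency at its new home without touching
// any import statements in the source tree.
replace github.com/olduser/oldlib => github.com/newuser/newlib v1.4.2
```

Imports of the old path keep compiling; only the fetch location changes.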


It requires work to keep things working, exactly the same thing.


The module proxy is used by default and requires no work. I don’t think what you’re saying makes much sense.


So there is some magic pixie dust that will fix url relocations for the metadata used by the proxy, without having anyone touch its configuration?


I think I’m missing something, because I’m pretty sure you understand the go module proxy (having seen you around here before) but I really don’t understand what problem you’re talking about.

If a module author deletes or relocates their module, the old module is not deleted or renamed from the module proxy. It is kept around forever. Code that depends on it to not break does not break.

If they relocated and you want to update to the new location, perhaps for bug fixes, then you do have to do a bit of extra work (a find and replace, or a module-level replace directive), but it's a rare event and generally a minor effort, in my opinion, so I don't think this is a significant flaw.

For most users most of the time they don’t need to think about this at all.


> good luck doing just run when a code repo breaks all those URL hardcoded in source code

You're on a tear in this thread being wrong about how Go works, but I'm really curious what extremely specific series of events you're imagining would have to happen to lead to this outcome. If I use a dependency, it gets saved in the module proxy, I can also vendor it. You would have to, as a maintainer, deliberately try to screw over anybody using your library to accomplish what you describe.


Not when one git clones an existing project, only to discover the hardcoded imports are no longer valid.

Being a maintainer has nothing to do with some kind of gentlemen's code of conduct.


Go is brilliant for what I don't have to do.

Go doesn't have a build system, so I don't have to learn that. (I spend every second I'm using Gradle cursing its very existence -- and wishing I had `go build`.)

Go cross compiles natively, so I don't have to think about the toolchain.

Go has go:embed, so I don't have to think about bundling/packaging as a separate step.

Go has fantastic backwards compatibility, so I don't have to spend time getting an old project to even build.

Go's stdlib is extremely high quality, so much so that I've never run into a serious bug in it.

When jumping into other ecosystems, I'm shocked at how much time is spent fiddling with build scripts, packaging, deprecations, unfixed bugs in the tooling etc...

Still wish it didn't explode on null pointers though. And any large dependency authored by Google will be unidiomatic and over-complex, of course (see: grpc)


Very well said. As someone who rarely touches Go (only used it for a couple of simple web servers, something like a WebSub subscriber), you've named much of what I like about it. I'd love to see more languages achieve all these features, or even make that a goal.

I'd also mention how great the documentation is. Truly best in class. For instance, see https://pkg.go.dev/net/http

* Has enough examples to fully understand how to use the package.

* Links to the individual source code files so you can read those if needed.

* Has a highly visible link for reporting vulnerabilities.

* Has a great search tool accessible with a keyboard shortcut.

* Clearly marks deprecated functions.

* The page even works without Javascript enabled.


Absolutely. The documentation is first class, and I too love how easy it is to jump into the source. Often, reading the source helps me understand how a package is meant to be used.

Other ecosystems seem to have a "don't worry yourself about that" approach to viewing a package's source, and it's maddening. In contrast to Go, trying to get from docs to source code in the JVM ecosystem is by no means straightforward.

