The API is down as well, so I can't make changes to DNS zones. It's insane to have such a long outage.

This is great.

Sadly, apart from “looking good”, there seems to be no documentation or visible roadmap, so I can’t evaluate whether it’s worth looking into.

Keep up the good work!


Thank you! Adding a roadmap to v1.


There is no real alternative, as only iCloud can back up your settings, saved networks, and app data.

Other apps, like Nextcloud, can only back up documents (those not inside apps) and pictures, because there's an API for this.

iTunes backup is an option, but it's neither automatic nor convenient.


Is that true? Only iCloud can back up an iPhone? They don't provide any way to even extract an encrypted archive so you can keep it safe for yourself?

I get more and more amazed at Apple's lock-in tactics. This is why I own nothing Apple and have complete control over everything in my digital world.


iTunes backup is a perfectly reasonable alternative to iCloud that retains e2ee; I don't know why they were dissing it. It can back up everything that iCloud can, and it's automatic: you just plug your phone in. No lock-in tactics.


No, you can use iTunes to make a local backup too. It was a thing long before iCloud.


Fair enough, but iTunes is also Apple software, no?

So your choice is use Apple software to make your backups, or....?


Interacting with any device running iOS requires Apple software (or reverse engineered hacks) for many features.

However, in this case, the point is that you can use Apple software to make a local backup (and you can enforce the "local" part by doing so offline), and then use whatever you want to encrypt and stash away the resulting files.


Well, yeah, iPhones could be a bit more open, and I wish they were. But there's no real way for the UK to force Apple into adding backdoors to that.


It encrypts your entire phone backup as well.


Do you really need analytics that much?

Enjoy my cookie- and analytics-free website: https://www.ZoneHero.io

I had to resist a lot of temptations, but hey, no banners!


Some sites need them, or they would not exist. Take a page with ads, where the advertisers want to know, at least, how many times their ad was displayed. They have an approximation of this number for the printed press, for TV, and for the radio.

I have a lot of sites that are also banner-free, but they are paid for by other means (usually grants). When the money ends, the sites go down.

The sad thing with analytics is that the dreadful banner makes my site, which only sets a cookie to count unique visitors, look the same as another site that collects hundreds of data variables and then sells the data to hundreds of third parties, making the data sharing a business in itself.


If you're collecting the data for fraud prevention, that would be covered by "legitimate interest" which does not require consent.

The problem with ads is that they don't want to only do fraud prevention, they also want to do deep behavioral analytics and targeting. You can't do that with data you collected for the purpose of fraud prevention because it's a different purpose altogether.

> The sad thing with analytics is that the dreadful banner makes my site, which only sets a cookie to count unique visitors, look the same as another site that collects hundreds of data variables and then sells the data to hundreds of third parties, making the data sharing a business in itself.

If your cookie only exists to identify visitors across requests, for purposes that meet the definition of legitimate interest or any other legal basis than consent, you do not need a "banner" (really: a consent form). You don't even need a notice, just a privacy policy explaining it. An example of this is session cookies for logins; cookies for things like dark mode are slightly different, but you don't have to frontload the consent request for those either.
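
To make the session-cookie case concrete, here is a minimal sketch, assuming a hypothetical Flask app (the framework, routes, and names are illustrative, not anything from the comments above): the cookie only identifies the logged-in visitor across requests, so a privacy policy entry is enough and no consent banner is involved.

```python
# Minimal sketch (hypothetical app): a login session cookie that is strictly
# needed to keep the visitor logged in, documented in the privacy policy
# rather than gated behind a consent banner.
from flask import Flask, request, session

app = Flask(__name__)
app.secret_key = "change-me"  # signs the session cookie


@app.post("/login")
def login():
    # Illustrative only: a real app would verify credentials properly.
    username = request.form.get("username", "")
    if not username:
        return "missing username", 400
    session["user"] = username  # this sets the session cookie on the response
    return "logged in", 200


@app.get("/me")
def me():
    # The cookie is used solely to recognise the same visitor across requests.
    return {"user": session.get("user")}
```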

We don't shove a consent form in people's faces when we first talk to them just because of all the things we might need their consent for later in the day. I'm not sure why so many of us think we need to do this for websites that don't immediately require that consent to do something with it. Especially when consent can be withdrawn (or given) at any time.


Totally unrelated to cookies, but: looks like a nice product! Have you figured out a way to integrate with ECS / Fargate? That's what our high-volume ALBs point at.


If you want to get in touch, click Join the beta and drop us an email, old school, no strings attached. We’re currently testing Marketplace integration, the rest has already been battle tested.


Yes, at the moment we can handle everything that fits in a Target Group. Adding other constructs is not a big deal either.


Is it popular?


I don’t know; I can’t know.

But seriously, no, it’s not, and it’s never going to be, as the topic is very niche and it’s only ever visited by people who have an interest in the product and who received the link from me or the marketplace.

I think the benefits of the “insights” I would get from tracking viewers are outweighed by the inconvenience I would cause to my customers.


Looking at the commit history inspires some real confidence!

https://github.com/RealNeGate/Cuik/commits/master/


chicken (+558998, -997)


Cursed. I once had a coworker who would commit diffs like that, but always with the message "Cleanup". The git history was littered with "Cleanup" commits that actually hid all kinds of stuff. If you pulled them up on it (or anything else) they went into defensive meltdown mode, so everyone on the team just accepted it and moved on.


Back in 1990 or so I worked at a networking company (Vitalink) that was using whatever source control was popular back then. I forget which one, but the important thing was that rather than allowing multiple check outs followed by resolve, that system would lock a file when it was opened for edit and nobody else could make edits until the file was checked in.

One young developer checked out a couple files to "clean them up" with some refactoring. But because he changed some function interfaces, he needed to check out the files which called those functions. And since he was editing those files he decided to refactor them too. Pretty quickly he had more than half the files locked and everyone was beating on him for doing that. But because he had so many partial edits in progress and things weren't yet compiling and running, he prevented a dozen other people from doing much work for a few days.


Eh, when you're hacking away as a solo developer on something big and new I don't think this matters at all. In my current project I did about 200 commits marked "wip" before having enough structure and stability to bother with proper commit messages. Whatever lets you be productive until more structure is helpful.


Perhaps, but I still think it is lazy. A very nice counterexample of someone with high commit standards can be seen in this repository: https://github.com/rmyorston/pdpmake/commits/master/


The code base may go through several almost total rewrites before it stabilizes, especially for non-trivial systems that are performance sensitive. Changes to the code may be intrinsically non-modular depending on the type of software. This prior history can be enormous yet have no value, essentially pure noise.

The efficient alternative, which I’ve seen used a few times in these cases, is to retcon a high-quality fake history into the source tree after the design has stabilized. This has proven to be far more useful to other engineers than the true history in cases like this.

Incremental commits are nice, but not all types of software development lend themselves to that, especially early in the development process. I’ve seen multiple cases where trying to force tidy incremental commit histories early in the process produced significantly worse outcomes than necessary.


That doesn't have a commit history going back as far as what the parent said about the first 200 commits, though. It starts off with basically three commits, all called some variant of "initial public release", after which good commit messages start, so it probably skipped many intermediate WIP states.

I agree that one can write good commit messages early on, though. The initial commit can be "project skeleton", then "argument parsing and start of docs", then maybe "implemented basic functionality: it now lists files when run", next "implemented -S flag to sort by size", etc. It's not as specific as "Forbid -H option in POSIX mode", and the commits are often going to be large and touch different components, but I'd say that's expected for young projects with few (concurrent) contributors.


Another example is ghostty


I came here to write exactly that. Ambitions are great and I don't want to be dissuasive, but monumental tasks require monumental effort, and monumental effort requires monumental care. That implies good discipline and certain "beauty" standards that also apply to commit messages. Bad sign :)


Not really. In the initial phase of a project there is usually so much churn that enforcing proper commit messages is not worth it, until the dust settles.


I massively disagree. It would have taken the author approximately 1 minute to write the following high quality hack-n-slash commit message:

```
Big rewrites

* Rewrote X
* Deleted Y
* Refactored Z
```

Done


Many times it is “threw everything out and started over” because the fundamental design and architecture was flawed. Some things have no incremental fix.


Different people work differently.

Spending a minute writing commit messages while prototyping something will break my flow and derail whatever I’m doing.


I am deeply suspicious of anyone who doesn't bother or who is unable to explain this churn. For the right kind of people, this is an excellent opportunity to reflect: why is there churn? Why did the dust not settle down? Why was the initial approach wrong and reworked into a new approach?

I can understand this if you are coding for a corporate. But if it's your own project, you should care about it enough to write good commit messages.


Is your objection to the inevitable fact that requirements churn early on (regardless of whether you're doing agile or waterfall)?

Or is your objection that solo devs code up prototypes and toy with ideas in live code instead of just in their mental VM in grooming sessions?

Or is your objection that you don't think early prototypes and demos should be available in the source tree?


None of the above. My objection is the lack of explanation.

Churn is okay. Prototypes are okay. Toying with ideas is okay. They should all be in the source tree. But I would want an explanation for the benefit of future readers, including the future author. Earlier in my life I more than once ran blame on a piece of code only to find that I had written the line myself, with a commit message that didn't explain it adequately. These days that's much rarer, because I make myself write good commit messages. Furthermore, the act of writing a commit message is soothing and a nice break from writing for computers.

Explain how requirements have changed. Explain how the prototype didn't work and led to a rewrite. Explain why some idea that was being toyed with turned out to be bad.

Notice that the above are explanations. They do not come with any implied actions. "Why is there churn" is a good question to answer but "how do we avoid churn in the future" is absolutely not. We all know churn is inevitable.


For single-author, quickly-changing projects I'd guess it's quite likely that only around 1% of the commits are ever looked at closely enough for the commit message to be meaningfully useful. And if each good commit message takes 1 minute to write (incl. overhead from the mental context switching), each of those uses had better save 100 minutes compared to just looking at the diff.
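
As a quick back-of-the-envelope check of that trade-off (the 1% and 1-minute figures are the comment's own assumptions, not measurements; the total commit count below is an arbitrary illustration and cancels out):

```python
# Back-of-the-envelope: amortised cost of good commit messages per commit
# that actually gets read closely. Numbers are the assumptions from the
# comment above; total_commits is arbitrary and cancels out.
total_commits = 1000
minutes_per_message = 1     # ~1 minute per good message, incl. context switching
fraction_consulted = 0.01   # ~1% of commits ever read closely

total_writing_cost = total_commits * minutes_per_message   # 1000 minutes
commits_consulted = total_commits * fraction_consulted     # 10 commits
break_even_savings = total_writing_cost / commits_consulted

print(break_even_savings)  # -> 100.0 minutes saved per consultation to break even
```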


I suspect you have never worked on single-author projects where you fastidiously write good commit messages. If you never got into the habit of writing good commit messages, you won't find them valuable at all when you are debugging something or just wondering why something is written in a certain way. Once you consistently write good commit messages you begin to rely on them all the time.


"why something is written in a certain way" most likely doesn't even have an answer while a project is still in the rewrite-large-parts-frequently stage. Sure, you could spend some time conjuring up an explanation, but that's quite possibly gonna end up useless when the code is ruthlessly rewritten anyway.

That said, specific fixes or similar can definitely do with good messaging. Though I'd say such notes belong in comments rather than commit messages, so they won't get shadowed over time by unrelated changes.


And for your average backup system it's only like 1% of backups you need to be able to restore, probably much fewer. Trouble is, you won't know which ones ahead of time - same for commits.


Difference being that if you automate backups they're, well, fully automatic, whereas writing good commit messages always continues to take time.


The "mitigation" itself is still not very safe if you're paranoid about governments or very motivated organisations. The extra step of checking PCR12 is performed by the initrd that you trust because it's signed by a private key that has probably leaked to every serious hacking corp / government. They can just boot their own signed initrd and kindly ask the TPM that will oblige.

I personally replace the firmware certificates (PK, KEK, db, dbx, …) with my own and sign every kernel/initrd update. I also unlock my disks with a passphrase anyway, but I'm on the fence as to whether it's more secure than the TPM.

Yes, in theory TPM key extraction is feasible (and even easy if the TPM is a chip other than your CPU: https://pulsesecurity.co.nz/articles/TPM-sniffing ), but is it harder than filming/watching you type the passphrase or installing a discreet key-logger?


> sign every kernel/initrd update

If you believe that those Secure Boot private keys were leaked, why not also believe that the Linux kernel signing keys were leaked and that you are downloading a backdoored kernel?


It's quite easy to generate your own signing keys which you use to sign a kernel you've built yourself.
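
For anyone curious what that looks like in practice, here is a minimal sketch. It assumes the standard openssl and sbsign (sbsigntools) command-line tools are available; the file names are illustrative, and enrolling the certificate (plus your own PK/KEK) into the firmware is a separate step not shown.

```python
# Minimal sketch: generate a personal Secure Boot signing key and sign a
# self-built kernel with it. Assumes `openssl` and `sbsign` (sbsigntools)
# are installed; file names are illustrative.
import subprocess


def run(*cmd: str) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


# 1. Generate a self-signed certificate + private key to act as your db key.
run(
    "openssl", "req", "-new", "-x509", "-newkey", "rsa:4096",
    "-subj", "/CN=my personal db key/",
    "-keyout", "db.key", "-out", "db.crt",
    "-days", "3650", "-nodes", "-sha256",
)

# 2. Sign the kernel image you built yourself (repeat for each update).
run(
    "sbsign", "--key", "db.key", "--cert", "db.crt",
    "--output", "vmlinuz.signed", "vmlinuz",
)

# 3. Enrolling db.crt (and your own PK/KEK) into the firmware is done
#    separately, e.g. from the firmware setup UI or with efitools.
```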


So you understand your own device’s security? You have no more reason to trust the security of the Apple device in your pocket than that of an Apple device in a datacenter, IMHO.


Am I oversimplifying in thinking that they’ve demonstrated that their quantum computer is better at simulating a quantum system than a classical computer is?

In which case, should I be impressed? I mean sure, it sounds like you’ve implemented a quantum VM.


Simulating a quantum system is a hard challenge and it's actually how Feynman proposed the quantum computing paradigm in the first place. It's basically the original motive.


Exactly.

I’ve seen lots of people dismissing this as if it isn’t impressive or important. I’ve watched one video where the author said in a deprecating manner “quantum computers are good for just two things: generating random numbers and simulating quantum systems”.

It’s like saying “the thing is good for just two things: making funny noises and producing infinite energy”.

(Also, generating random numbers is pretty useful, but I digress)


Carelessly is the answer


Collabora is perfectly fine, what’s your gripe with it?


It is not perfectly fine. It is dreadfully slow and nearly unusable compared to Google Documents or native LibreOffice.

