
> I have yet to see a convincing argument (based in the teachings of the Bible) that promotes same-sex marriages.

Here you are.

https://whosoever.org/letter-to-louise/


This document plays at least two shell games, declaring that “homosexuality” as its own concept is recent (within 200 years) but then smoothly omitting this when discussing scripture, instead of analyzing scripture and then inserting the modern concept. No wonder it doesn’t find any condemnation of a concept it excluded from consideration!

It then does a similar trick: the authors of the New Testament are acknowledged to have poor Greek in many cases, but then their specific word choices are used to claim they meant an extremely forced reading, relying a bit on the previous trick as well.

There’s even a discussion of how nitpicking word choice is bad practice earlier in the same document!


It's a widely known phenomenon.

Here's MKBHD bringing in developers to discuss it.

> Why Are There So Many iOS-Only Apps?

https://m.youtube.com/watch?v=x2hQa5heXCU&pp

The biggest takeaway is that Android development and testing is more work, and Android users are less willing to spend money.


Another factor:

> Google also just increased the target API level requirement for apps on the Google Play Store

https://tech.yahoo.com/phones/articles/google-plays-rules-ki...

We also saw established apps like iA Writer decide to get off the treadmill.

> In order to allow our users to access their Google Drive on their phones we had to rewrite privacy statements, update documents, and pass a series of security checks, all while facing a barrage of new, ever-shifting requirements.

https://ia.net/topics/our-android-app-is-frozen-in-carbonite...


Yup, this caused me months of work. Many people chose not to bother.


I think it boils down to the fact that acquiring land near San Francisco or LA through imminent domain would be both hideously expensive and extremely unpopular.

Building "High Speed Rail to nowhere" in the Central Valley allowed them kick that can of political infighting down the road.


> I think it boils down to the fact that acquiring land near San Francisco or LA through imminent domain would be both hideously expensive and extremely unpopular.

“Eminent domain”, and the SF to SJ run for CA HSR uses almost entirely existing rail right-of-way shared with Caltrain.

> Building "High Speed Rail to nowhere" in the Central Valley allowed them kick that can of political infighting down the road.

You are right that doing the Central Valley first was political, but you have the wrong political motivation. The political motivation was (1) to mitigate partisan political resistance to the project by providing the most immediate benefit to the Republican districts in the Central Valley, and (2) to secure federal funding under ARRA, which prioritized shovel-ready projects, because the issues that needed to be cleared to reach an approximation of that state for the termini were more time-consuming.


>and extremely unpopular

Who's shedding a tear for some farmer getting paid above market rates (presumably) for their land? California is probably the last place I'd expect people to think using eminent domain in this case is a slippery slope to communism or whatever.


Eventually you will have to build out the rail line in densely populated areas (especially near San Francisco and LA). High-speed rail requires that you avoid unnecessary curves.

At that point, you're going to have to start using imminent domain.

Putting it off until after you have billions of dollars in sunk costs in the Central Valley doesn't change that.


*eminent domain, from a Latin-ish phrase "dominium eminens."


That's why they went with the blended system with Caltrain. Caltrain already owns a suitable right-of-way; they just needed to electrify it. Which they've already done. The SF-to-SJ part of CAHSR is effectively done; they just need to acquire the SJ-to-Gilroy right-of-way and tunnel under the Pacheco Pass to connect with the rest of CAHSR.


> Eventually you will have to build out the rail line in densely populated areas (especially near San Francisco and LA).

The SJ to SF run is on existing rail right of way.

> At that point, you're going to have to start using imminent domain.

They’ve been using eminent domain the whole time, they aren't going to have to start at some point in the future.


>Eventually you will have to build out the rail line in densely populated areas (especially near San Francisco and LA). High-speed rail requires that you avoid unnecessary curves.

Why can't you have it run slowly in built-up areas? As another commenter mentioned, that's how it works in France.


Because competing with the airlines requires some semblance of "high speed"?

California is fairly densely packed once you get away from the Central Valley and nearer to the coast where the people are.


The current plan actually accounts for this; the 2:40 includes the amount of time it takes to run on the current Caltrain tracks from San Jose to SF, which will not be running at the highest speeds.


Said another way, that SF/San Jose stretch accounts for 25-30% of the total time. That's similarly true for the last stretch into LA, meaning a truly engineering-driven design could have done it within ~1h50. And note that the 2h40 goal is acknowledged as a pipe dream by everyone involved, particularly because of the last-mile issues and the circuitous route.


IIRC the last stretch in LA is actually planned to be new build, with Palmdale to Los Angeles taking about twenty minutes.

Engineering is about optimizing and updating where you can. There aren't really high speed rail lines anywhere that go into the center of their major cities at full speed. In Europe and Japan the city-center sections are slower; China solved this problem mostly by having high speed trains skirt around built up areas.


> China solved this problem mostly by having high speed trains skirt around built up areas.

Which is what we should have done. Follow the 5 and build out high speed spokes to the other cities. And really unfuck the rail system in the Bay Area instead of travelling at Caltrain speeds for San Jose -> SF.


Upgraded signaling along the Caltrain ROW is in the works, but again you've got to balance the egos of the three big stakeholders. Caltrain is a far smaller problem to CAHSR than, e.g., Metro North is to Acela. The big issue is going to be the approach to downtown SF — which is still a ridiculous political football.


Something something something “solve the hard problems first”.


Disagree. Waiting for San Francisco to get its act together would potentially doom CAHSR. Building something now even if it means no downtown SF service could still reap benefits. Even Millbrae to LA would be very useful.


I agree.

Starting the build out from either SF or LA would have at least resulted in an initial segment that people could use.


Giving operating HSR to the Valley first, even though this isn't historically the reason for it, is probably a very good way to motivate a solution to any political problems in the urban areas around the termini.


China would have likely just stopped the rail line at San Jose, the same way the Shanghai HSR stops at Hongqiao 50 minutes away from the actual city center.


> a truly engineering driven design

If by “engineering-driven” you mean “focused on speed alone and not the actual project goals”, then sure, but...


So ignore the "High Speed" part in addition to ignoring the part where you build the tracks in a location where there are people who can use them?


Yeah, I don't think people in LA or SF are worried about competing with airlines over a few dozen city miles not being 200+ mph. Avoiding LAX is a reward in and of itself.


Most of the run for CA HSR is in the Central Valley because most of the length of California between the Bay and LA is.


Americans tell me that public transport can't work in America because it's not dense enough.


The state can manage that expense over time, for example by refusing to enforce laws, spiking crime rates, and turning into dystopia and chaos, thus lowering property values.

After the land is acquired, restoring law enforcement will bring property values back up.


/s?


Back in 2011, Apple paid $500 million for Anobit, a company producing enterprise grade SSD controllers.

Since Apple was already integrating custom SSD controllers onto their A series SOCs, presumably the purchase was about Anobit patents.

> Anobit appears to be applying a lot of signal processing techniques in addition to ECC to address the issue of NAND reliability and data retention... promising significant improvements in NAND longevity and reliability. At the high end Anobit promises 50,000 p/e cycles out of consumer grade MLC NAND

https://www.anandtech.com/show/5258/apple-acquires-anobit-br...

Apple has said in the past that they are addressing improved data stability at the hardware level, presumably using those acquired patents.

> Explicitly not checksumming user data is a little more interesting. The APFS engineers I talked to cited strong ECC protection within Apple storage devices... The devices have a bit error rate that's low enough to expect no errors over the device's lifetime.

https://arstechnica.com/gadgets/2016/06/a-zfs-developers-ana...


That's excellent if you're using Apple SSD storage, but seems unhelpful if you're running, say, a large array of HDDs connected to your Mac.


Sure, if you are managing a large amount of external data, you may want to go with a RAID array or the existing OpenZFS port, but even Windows 11 still defaults to NTFS, which offers less resilience.


What is the device's lifetime in Apple's eyes? 5 years till support is dropped? Gulp.


No, generally much longer.

Apple products go vintage once Apple has stopped selling them for 5 years, and obsolete after 7 years.

https://support.apple.com/en-us/102772


Internet connections of the day didn't yet offer enough speed for cloud storage.

Apple was already working to integrate ZFS when Oracle bought Sun.

From TFA:

> ZFS was featured in the keynotes, it was on the developer disc handed out to attendees, and it was even mentioned on the Mac OS X Server website. Apple had been working on its port since 2006 and now it was functional enough to be put on full display.

However, once Oracle bought Sun, the deal was off.

Again from TFA:

> The Apple-ZFS deal was brought for Larry Ellison's approval, the first-born child of the conquered land brought to be blessed by the new king. "I'll tell you about doing business with my best friend Steve Jobs," he apparently said, "I don't do business with my best friend Steve Jobs."

And that was the end.


Was it not open source at that point?


It was! And Apple seemed fine with including DTrace under the CDDL. I’m not sure why Apple wanted some additional arrangement but they did.


The NetApp lawsuit. Apple wanted indemnification, and Sun/Oracle did not want to indemnify Apple.

At the time that NetApp filed its lawsuit I blogged about how ZFS was a straightforward evolution of BSD 4.4's log structured filesystem. I didn't know that to be the case historically, that is, I didn't know if Bonwick was inspired by LFS, but I showed how almost in every way ZFS was a simple evolution of LFS. I showed my blog to Jeff to see how he felt about it, and he didn't say much but he did acknowledge it. The point of that blog was to show that there was prior art and that NetApp's lawsuit was worthless. I pointed it out to Sun's general counsel, too.


While I was disappointed that NetApp sued, the ZFS team literally referenced NetApp and WAFL multiple times in their presentations IIRC. They were kind of begging to be sued.

Also, according to NetApp, "Sun started it".

https://www.networkcomputing.com/data-center-networking/neta...


No, the ZFS team did not "literally reference NetApp and WAFL" in their presentations and no, Sun did not "start it" -- NetApp initiated the litigation (though Sun absolutely countersued), and NetApp were well on their way to losing not only their case but also their WAFL patents when Oracle acquired Sun. Despite having inherited a winning case, Oracle chose to allow the suit to be dismissed[0]; terms of the settlement were undisclosed.

[0] https://www.theregister.com/2010/09/09/oracle_netapp_zfs_dis...


We can agree to disagree.

Your own link states that Sun approached NetApp about patents 18 months prior to the lawsuit being filed (to be clear, that was StorageTek before Sun acquired them):

>The suit was filed in September 2007, in Texas, three years ago, but the spat between the two started 18 months before that, according to NetApp, when Sun's lawyers contacted NetApp saying its products violated Sun patents, and requesting licensing agreements and royalties for the technologies concerned.

And there was a copy of the original email from the lawyer, which I sadly did not save, as referenced here:

https://ntptest.typepad.com/dave/2007/09/sun-patent-team.htm...

As for the presentation, I can't find it at the moment but will keep looking, because I do remember it. That being said, a blog post from Val at the time specifically mentions NetApp and WAFL, and how the team thought it was cool and decided to build their own:

https://web.archive.org/web/20051231160415/http://blogs.sun....

And the original paper on ZFS that appears to have been scrubbed from the internet mentions WAFL repeatedly (and you were a co-author so I'm not sure why you're saying you didn't reference NetApp or WAFL):

https://ntptest.typepad.com/dave/2007/09/netapp-sues-sun.htm...

https://www.academia.edu/20291242/Zfs_overview

>The file system that has come closest to our design principles, other than ZFS itself, is WAFL [8], the file system used internally by Network Appliance’s NFS server appliances.


> And the original paper on ZFS that appears to have been scrubbed from the internet mentions WAFL repeatedly (and you were a co-author so I'm not sure why you're saying you didn't reference NetApp or WAFL):

Cantrill was not involved in ZFS, and was not a co-author. Cantrill was involved with DTrace:

* https://www.usenix.org/conference/2004-usenix-annual-technic...

* https://www.cs.princeton.edu/courses/archive/fall05/cos518/p...

And the ZFS paper has hardly been scrubbed given it is widely cited:

* https://www.cs.hmc.edu/~rhodes/cs134/readings/The%20Zettabyt...

And the fact that the ZFS paper cites WAFL is hardly an indication of anything, given that NetApp's patent cites a whole bunch of other patents:

* https://patents.google.com/patent/US5819292#patentCitations

Heck, some of the cited patents were Sun's.

* https://en.wikipedia.org/wiki/NetApp#Legal_dispute_with_Sun_...


> The file system that has come closest to our design principles, other than ZFS itself, is WAFL [8], the file system used internally by Network Appliance’s NFS server appliances.

That was unnecessary, but that does not betray even the slightest risk of violating NetApp's patents. It just brings attention.

Also, it's not true! The BSD 4.4 log-structured filesystem is such a close analog to ZFS that I think it's clear that it "has come closest to our design principles". I guess Bonwick et al. were not really aware of LFS. Sad.

LFS had:

  - "write anywhere"
  - "inode file"
  - copy on write

LFS did not have:

  - checksumming
  - snapshots and cloning
  - volume management

And the free space management story on LFS was incomplete.

So ZFS can be seen as adding to LFS these things:

  - checksumming
  - birth transaction IDs
  - snapshots, cloning, and later dedup
  - proper free space management
  - volume management, vdevs, raidz

I'm not familiar enough with WAFL to say how much overlap there is, but I know that LFS long predates WAFL and ZFS. LFS was prior art! Plus there was lots of literature on copy-on-write b-trees and such in the 80s, so there was lots of prior art in that space.

Even content-addressed storage (CAS) (which ZFS isn't quite) had prior art.


> I guess Bonwick et al. were not really aware of LFS. Sad.

They were:

> [16] Mendel Rosenblum and John K. Ousterhout. The design and implementation of a log-structured file system. ACM Transactions on Computer Systems, 10(1):26–52, 1992.

> [17] Margo Seltzer, Keith Bostic, Marshall K. McKusick, and Carl Staelin. An implementation of a log-structured file system for UNIX. In Proceedings of the 1993 USENIX Winter Technical Conference, 1993.

* https://www.cs.hmc.edu/~rhodes/cs134/readings/The%20Zettabyt...


Seems specious. Patents don't preclude one from overtly trying to compete; they protect specific mechanisms. In this case either ZFS didn't use the same mechanisms or the mechanisms themselves were found to have prior art.


Whether the claims were valid or not I guess we'll never know given Oracle and NetApp decided to settle.

What I DO know is that if the non-infringement were as open-and-shut as you and Bryan are suggesting, Apple probably wouldn't have scrapped years of effort and likely millions in R&D for no reason. It's not like they couldn't afford some lawyers to defend against a frivolous lawsuit...


Maybe! Bryan and I were pretty close to the case and to the implementation of ZFS. But maybe Apple did detect some smoking gun of which somehow we were unaware. I (still) think Jonathan’s preannouncement was the catalyst for Apple changing direction.


I will just say I can’t thank both of you enough. I cut my teeth on zfs and it was a pillar of the rest of my career.

It’s a constant reminder to me of the value of giving that college kid free access to your code so they can become the next expert doing something creative you never thought of.


There was lots of prior art from the 80s for "write anywhere", which is generally a consequence of copy-on-write on-disk formats. The write-anywhere thing is not really what's interesting, but, rather, not having to commit to some number of inodes at newfs time. Sun talking about NetApp makes sense given that they were the competition.

We don't know exactly what happened with Apple and Sun, but there were lots of indicia that Apple wanted indemnification and Sun was unwilling to go there. Why Apple really insisted on that, I don't know -- I think they should have been able to do the prior art search and know that NetApp probably wouldn't win their lawsuits, but hey, lawsuits are a somewhat random function and I guess Apple didn't want NetApp holding them by the short ones. DTrace they could remove, but removing ZFS once they were reliant on it would be much much harder.


Ok? So what?


The argument advanced in the piece isn't without merit -- that ripping out DTrace, if subsequent legal developments demanded it, would be a heck of a lot easier than removing a filesystem that would by then contain massive amounts of customer data.

And given what a litigious jackass Larry Ellison / Oracle is, I can't fault Apple for being nervous.


No, it was that Apple wanted to be indemnified in the event that Sun/Oracle lost the NetApp lawsuit.


Ironic since the post above tells the story as LE saying no to Jobs.


There are always two sides to a story. And Jobs was not the kind of person to take no for an answer. I've known other narcissists, and they've always seen friends as more of a resource than someone to care about.

I think the truth is somewhere in the middle.

Another rumour was that Schwartz spilling the beans pissed Jobs off, which I wouldn't really put past him. Though I don't think it would have been enough to kill this.

I think all these little things added up and the end result was just "better not then".


Sure, but what sane person would not expect Oracle to continue to be litigious?

I imagine the situation would have been different if Apple's ZFS integration had completed and shipped before Sun's demise.

They didn't rip out DTrace, after all.


> The big win with Windows 7 was that they finally figured out how to make it stop crashing.

Changing the default system setting so the system automatically rebooted itself (instead of displaying the BSOD until manually rebooted) was the reason users no longer saw the BSOD.


There is a more general discussion on the latest Asahi Linux progress report.

> Unfortunately, PDM mics are very omnidirectional and very sensitive. We cannot get by without some kind of beamforming.

https://asahilinux.org/2025/03/progress-report-6-14/

Also, it turned out that some previous work done for the speaker output was reused here for mic input.

> Thanks to the groundwork laid in PipeWire and WirePlumber for speaker support, wiring up a DSP chain including Triforce for the microphones was really simple. We just had to update the config files, and let WirePlumber figure out the rest!


Doesn't the difference between measurement and observation stem from an extension of the double-slit experiment discussed in this article?

If you place a detector on one of the two slits in the prior experiment (so that you measure which slit each individual photon goes through), the interference pattern disappears.

If you leave the detector in place, but don't record the data that was measured, the interference pattern is back.


> If you leave the detector in place, but don't record the data that was measured, the interference pattern is back.

This is not remotely true. It looks like you read an explanation of the quantum eraser experiment that was either flawed or very badly written, and you're now giving a mangled account of it.


I have heard similar things but this is THE most deeply weird result and I’ve never heard a good explanation for the setup.

A lot of people pose it as a question of pure information: do you record the data or not?

But what does that mean? The “detector” isn’t physically linked to anything else? Or we fully physically record the data and we look at it in one case vs deliberately not looking in the other? Or what if we construct a scenario where it is “recorded” but encrypted with keys we don’t have?

People are very quick to ascribe highly unintuitive, nearly mystical capabilities with respect to “information” to the experiment but exactly where in the setup they define “information” to begin to exist is unclear, although it should be plain to anyone who actually understands the math and experimental setup.


It's a little simpler than you're thinking: only fully matching configurations (of all particles etc) can interfere. If you have a setup where a particle can pass through one of two slits and then end up in the same location (with the same energy etc) afterward, so that all particles everywhere are in the same arrangement including the particle that passed through one of the slits, then these two configurations resulting from the possible paths can interfere. If anything is different between these two resulting configurations, such as a detector's particles differently jostled out of position, then the configurations won't be able to interfere with each other.
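
Roughly, in standard textbook notation (a sketch, normalization omitted; \psi_1(x) and \psi_2(x) are the amplitudes for the two paths at a point x on the screen):

  % paths indistinguishable: everything else, detector included, ends in the same state |d>
  |\Psi\rangle = \big(|\mathrm{path}_1\rangle + |\mathrm{path}_2\rangle\big)\,|d\rangle
  P(x) = |\psi_1(x) + \psi_2(x)|^2
       = |\psi_1|^2 + |\psi_2|^2 + 2\,\mathrm{Re}[\psi_1^{*}\psi_2]    % cross term = fringes

  % which-path recorded: detector ends in orthogonal states, \langle d_1|d_2\rangle = 0
  |\Psi\rangle = |\mathrm{path}_1\rangle|d_1\rangle + |\mathrm{path}_2\rangle|d_2\rangle
  P(x) = |\psi_1(x)|^2 + |\psi_2(x)|^2    % cross term vanishes, no fringes

The fringes live entirely in that cross term, which only survives when everything else (detector included) ends up in exactly the same state for both paths.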

An interesting experiment to consider is the delayed-choice quantum eraser experiment, in which a special detector detects which path a particle went through, and then the full results of the detector are carefully fully stomped over so that the particles of the detector (and everything else) are in the same exact state no matter which path had been detected. The configurations are able to interfere once this erasure step happens and not if the erasure step isn't done.

Another fun consequence of this all is that we can basically check what configurations count as the same to reality by seeing if you still get interference patterns in the results. You can have a setup where two particles 1 and 2 of the same kind have a chance to end up in locations A and B respectively or in locations B and A, and then run it a bunch of times and see if you get the interference patterns in the results you'd expect if the configurations were able to interfere. Successful experiments like this have been done with many kinds of particles including photons, subatomic particles, and atoms of a given element and isotope, implying that the individual particles of these kinds have no unique internal structure or tracked identity and are basically fungible.


The interference pattern also disappears when the detector registers an absence of detection, which shouldn't change the properties of the particle.


If anything is different between the two resulting configurations of possibly affected particles, such as the state of the particles of the detector, then interference can't happen. It's not just about whether the individual particle going through one of the slits is in an identical location.

An important thing to realize is that interference is a thing that happens between whole configurations of affected particles, not just between alternate versions of a single particle going through the slit.


Do you have a reference for that last paragraph?


I'm not a physicist, but that doesn't really sound right. Might I ask you for a reference or an explanation?


It is correct. There's SO MUCH weirdness surrounding the double slit.

https://en.wikipedia.org/wiki/Double-slit_experiment#Variati...


Hm, it says the observer-at-the-slit experiment hasn't been performed because it would absorb the photons. But it also says the experiment can be done with larger particles, so that shouldn't be a problem ...


I believe I first read about it in the book, Gödel, Escher, Bach.


Not shipping something that isn't ready for prime time is hardly "dire".

The world can do without learning the merits of putting glue on pizza, visualizing black Nazis, and chat bots that advise users to commit suicide.

