
It's just a bug.

The system save panel (NSSavePanel) is only supposed to infer the file extension when it has been given a list of supported file types/extensions. If the user does not specify a file extension, NSSavePanel will use the first extension in the supported list. If the user specifies an unsupported extension, they see an error message.

Text editors like VSCode and Sublime Text would configure NSSavePanel to allow any file type/extension, which means it's supposed to just accept whatever the user types.
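
For reference, the two configurations look something like this (a rough Obj-C sketch using the classic allowedFileTypes API, not VSCode's or Sublime's actual code):

    NSSavePanel *panel = [NSSavePanel savePanel];

    // Editor-style: no type list, so the panel accepts whatever
    // extension the user types and infers nothing.
    panel.allowedFileTypes = nil;

    // Typed-document style: the panel appends the first extension
    // ("txt") if the user omits one, and rejects unlisted extensions
    // unless allowsOtherFileTypes is YES.
    panel.allowedFileTypes = @[ @"txt", @"md" ];
    panel.allowsOtherFileTypes = NO;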


> It's just a bug.

It's an unforgivable bug.

File save dialogs have been around for decades. There's no reason whatsoever to not have very extensive unit and integration tests around it.


Exactly, and when you interview for companies like this you basically have to prove that you are in the top 1% of developers in the world, just to find shit like this being released in the wild for millions of devices. It is absurd.


You'd better never navigate through the AOSP source code, then.


Sure there is, 3 reasons:

1) No one writes unit tests for old code until it breaks the first time, because they're too busy working on new stuff.

2) No one gets assigned to write unit tests for old code until it breaks the first time, because there's always more work to do than time to do it in, and if it's not broken, it's not hurting anyone enough to get resources and time diverted to it.

3) Even if you write a mess of unit tests for stuff, it's always possible to miss a specific scenario. 100% line and branch coverage is not the same as covering 100% of bad outcomes or 100% of possible inputs (see the sketch below).

I've never worked anywhere that had 100% of their code tested with automated tests around 100% of the possible inputs and outputs. And the older the code and the longer it had been running stably, the less likely it was to have that testing if the testing wasn't written with the code. Like everything else in development, tests add up in the technical debt pile too.
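
To make point 3 concrete, here's a contrived C-flavored example (mine, not from any real codebase) where two tests give you 100% line and branch coverage and the function is still broken for one input:

    #include <limits.h>
    #include <stdlib.h>

    int abs_ratio(int a, int b) {
        if (b == 0)
            return 0;          /* branch covered by test (1, 0)  */
        return abs(a / b);     /* branch covered by test (6, 3), */
    }                          /* yet abs_ratio(INT_MIN, -1) is  */
                               /* still undefined behavior       */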


It's all just an evolution of their existing disk drive technology that started well before the last-minute decision to go with the Sony 3.5" drive.

The variable drive speed comes out of the development of the "Twiggy" drive, an 850 kB 5.25" disk format originally intended for the Apple III in 1980 but which never worked reliably.

BTW, the Atari ST uses the same floppy disk format as the IBM PC, 360 kB per side.

The Amiga uses a variable drive speed like the Mac, but they eke out extra capacity by eliminating the gaps between sectors. This allows an extra 512 bytes per track, but the trade-off is that the disk controller can only read or write an entire track at a time, rather than individual sectors.

An infamous Apple II copy protection scheme used the same trick to expand 5.25" disk capacity from 16 sectors to 18 sectors per track (an extra 512 bytes per track, given 256-byte sectors).


>Amiga uses a variable drive speed

The Amiga uses standard PC drives with a slightly tweaked pinout: https://linuxjedi.co.uk/2020/12/05/converting-a-pc-floppy-dr...


The Amiga is fixed RPM, or CAV, not CLV like the og Mac. With one exception: later models could halve the RPM to read/write HD floppies (1.44MB PC or 1.76MB Amiga).


360 KB/side was indeed the default for the Atari ST, but there were numerous tools (I think Fastcopy III was the one I usually used) to format with more sectors per track, and 10 sectors/track (so 400 KB/side) was the standard recommendation if you just wanted more data per disk and no hassle. More than 80 tracks was also an option, and 81 or 82 tracks was apparently also reliable. That never sat right with me though, so I didn't do it.
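
For anyone checking the arithmetic:

    80 tracks x 10 sectors x 512 bytes = 409,600 bytes = 400 KB per side
    80 tracks x  9 sectors x 512 bytes = 368,640 bytes = 360 KB per side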

(18 sectors per track with 256 byte sectors is also possible with the 1770 series. This was one of the disk format options on the BBC Micro. Definitely not written a track at a time! There just wasn't the memory for that.)


You only read the data sheets. The Twiggy drives had 4 heads to cut latency, and had two access windows: one in the back, like all 5.25" disks, and one in the front for the extra heads. The Lisa used this too.


> I believe the //e and the //c could generate interrupts on vertical blanking

VBlank interrupts were only available through the Mouse Firmware, which was built into the IIc but a rarely-installed option on the IIe. As a result, there were no interrupt-driven games for the 8-bit Apple II machines.

The complete lack of consistent frame rates and timing is a hallmark of Apple II gaming.


Conversely, screen interrupts were at the core of Commodore 64 games, and of the demoscene. And also of arcade machines, NES and the whole next generation of consoles and non-IBM computers.

So I wonder why it wasn't added. It wouldn't have been hard, exactly (enthusiasts for the TIKI-100, a Norwegian educational 8-bit, have gotten into the habit of repurposing the printer interrupt by means of a dongle in the printer port).

Was the idea that educational machines shouldn't be too game-friendly?


> So I wonder why it wasn't added. It wouldn't have been hard, exactly

The answer to the question "Why didn't Apple add X/Y/Z to the Apple II?" is that they did add those features, starting with the Apple III in 1980, and continuing with the IIc and IIgs.

The problem is that there was a 2-year window, between the release of the III and the explosion of the home/education market, during which Apple ignored the II and assumed sales of that quirky, obsolete system would dry up.

The IIe was designed within that window, and the skeleton crew of engineers who worked on the IIe did not have the green light to add significant new features. The only goal was to reduce manufacturing costs and maintain compatibility.

It wasn't until after the IIe was locked in that the Apple leadership began to realize the importance of the II within the suddenly booming home/education market, and only then did they put any significant resources back into the platform.

The IIc (1984) and IIgs (1986) were the result of those renewed efforts, but by that time the cat was already out of the bag. The IIe remained the most popular machine of the platform, and the "modern" features added to the IIc and IIgs were left unused by most developers and users.


The idea was that the original Apple II was made in 1977 as a game machine (Woz wanted to play Breakout in software), but there really wasn't much of a concept of what a "game machine" was back then; it was mostly a huge hack trying to get a minimal chip count.

Years later, when the C64 and IBM PC came out, the IIe was released, which did have vblank support, but Apple II devs were reluctant to break backwards compatibility.

You can still do a lot of cool games w/o vblank support. I'd say it barely makes the top 5 list of most annoying things about programming games on the Apple II.

And all these other "better" platforms, tell me do they have a port of Riven? http://deater.net/weave/vmwprod/riven/


They could at least be aware of real time. A default vblank interrupt handler that increments a 16-bit counter would be incredibly useful.
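
The wished-for handler is about as small as interrupt handlers get. In C terms (hypothetical names; a real Apple II version would be a few bytes of 6502):

    volatile unsigned int vbl_ticks;  /* 16-bit frame counter */

    void vbl_isr(void) {
        ++vbl_ticks;                  /* at 60 Hz, seconds = vbl_ticks / 60 */
    }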


AFAIK you can do Vapor Lock on the Apple II - a poor man's VBlank detection: http://deater.net/weave/vmwprod/megademo/vapor_lock.html


That’s processor intensive. What’s the minimum number of cycles required to detect a vblank?


Yes it is, but it doesn't matter if you've finished rendering and are now just waiting for the next frame.


IIe and IIc have a vsync status bit that you can poll, but IIRC the polarity was flipped between the two models and there are other quirks that can interfere with using it, so it was neither recommended nor popular.
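
The poll itself is trivial; it's the cross-model quirks that hurt. A C-level sketch (RDVBL is the IIe's $C019 soft switch; per the polarity caveat above, treat the sense of the bit test as illustrative):

    #define RDVBL (*(volatile unsigned char *)0xC019)

    void wait_for_vblank(void) {
        while (RDVBL & 0x80) { }    /* wait out the current state...     */
        while (!(RDVBL & 0x80)) { } /* ...then catch the next transition */
    }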


It's more complex than that. I think the IIe and IIgs have the same register but with reversed polarity; the IIc has a weirder interface that generally involves setting up an interrupt through the mouse firmware.


I started learning Obj-C and AppKit in the mid 2000s. During that process I had a moment of zen when I realized that everything I had learned and done with C++ and COM in the late 90s was completely wrong.


Historical fun fact:

In the original version of Objective-C and NextStep (1988-1994), the common base class (Object) provided an implementation of `copyFromZone:` that did an exact memcpy of the object, a la NSCopyObject. In other words, NSCopyObject was the default behavior for all Obj-C objects.

It was still up to each subclass to ensure that copyFromZone: worked correctly with its own data (not all classes supported it).

AppKit's `Cell` class provided this implementation:

    - copyFromZone:(NXZone *)zone
    {
        Cell *retval;
        retval = [super copyFromZone:zone];
        if (cFlags1.freeText && contents) 
            retval->contents = NXCopyStringBufferFromZone(contents, zone);
        return retval;
    }
Here it needs to make a copy of its `contents` string, using NXCopyStringBufferFromZone, because the copy of the Cell will eventually free that memory (cFlags1.freeText).

OpenStep introduced reference counting and the NSCopying protocol, and removed the `copyWithZone:` implementation in NSObject.

So the equivalent implementation in OpenStep's NSCell class could be:

    - (id)copyWithZone:(NSZone *)zone
    {
        NSCell *retval;
        // NSCopyObject does the shallow memcpy that used to be the default...
        retval = NSCopyObject(self, 0, zone);
        // ...so the shared contents pointer gets an extra retain to balance
        // the release in each copy's -dealloc.
        [retval->contents retain];
        return retval;
    }


> Those files should only be created if the user actually makes adjustments to the view settings or set a manual location for icons in a folder. That’s unfortunately not what happens and visiting a folder pretty much guarantees that a .DS_Store file will get created

This is my number one frustration with the Finder.

You can customize the look and size of individual folder windows in many interesting ways, à la the Classic Mac OS Finder, which is a really great feature. But if you blow through that same folder in a browser window, most of those customizations are lost, overwritten with the settings of that browser window, even if you never change anything.

What's the point of allowing all of these great customizations when they're so easily clobbered?

I have a global hot key to bring up the Applications folder. I'd love to customize the look of that window, but it's pointless. Whenever I hit that hot key I have no idea what I'm going to get. It's always getting reset.

By the way, the reason it does this is because the Finder has no way to set a default browser window configuration. So instead, it just leaves behind the current browser settings in each folder it visits. Super frustrating.


It used to be, before Darwin, that every open folder corresponded to one window and there was only one user, so that approach worked. I really miss that; it was nice having the same window pop to the front with everything just like you last had it.


> I have a global hot key to bring up the Applications folder

Not global, but as long as you're in the Finder cmd-shift-A opens the Applications folder. cmd-shift-U opens the Utilities folder.


> "Randomly reading from various memory addresses might give the modern programmer some concern about security holes, maybe somehow reading leftover data on the bus an application shouldn't be able to see. On the Apple II, there is no protected memory at all though, so don't worry about it!"

Funnily enough, protected memory (sort of) arrived with the Apple III a couple of years later in 1980 and it was met with complete disdain from the developer community ("Stop trying to control my life, Apple!").

Apple III ROM, hardware, and kernel memory wasn't meant to be directly accessible from the application's address space. The purpose was to increase system stability and to provide a backward-compatible path for future hardware upgrades, but most users and developers didn't see the point and found ways around the restrictions.

Later, more successful systems used a kinder, gentler approach (please use the provided firmware/BIOS interfaces).


The Apple /// is a master class in what NOT to do when designing a computer. Apple still owes us an 8-bit Apple IV computer as an apology for the ///.

The best feature is the dual-speed arrow keys: press and they'll auto-repeat. Press harder and they'll repeat faster.


Some other hardware features were very good for the time. It gets a lot of heat for the initial reliability issues, but they were eventually solved. They also limited the Apple ][ emulation to ][+ features, so no 80 columns, and that was probably a mistake. On the other hand, the good features were:

- Profile hard disk (but would have been better if you could boot from it).

- Movable zero page, so the OS and the application each had their own zero page.

- As mentioned, 80 column text and high resolution graphics.

- Up to 512k addressable RAM, either through indirection or bank switching.

It was probably the most ambitious 6502 based computer, until the 65816 based IIgs came along. And SOS was better than ProDOS.


I remember going to Computerland circa 1981 and they had an Apple /// that they refused to demo for anyone because they were afraid it would burn up. Whatever else might have been wrong about the ///, the /// just plain didn't work reliably.


AFAIK, the ///+ solved most of the problems with the ///, but it failed so badly in the market I’m still looking for one to buy for a reasonable price (I want to try to make it do 384x560 graphics, arguably possible with its interlaced mode).


There's no way a 6502 machine could have beaten Z-80 based CP/M machines for business. Not only did the 6502 lack many addressing modes, it had hardly any registers, so you'd struggle even to emulate addressing modes. There was a "zero page" of just 256 bytes that you could hypothetically use to store variables, but fitting that into the memory model of languages like C, where locals are stack-allocated or should look like they are, is tough.

It was almost impossible to write compilers for languages like Pascal and FORTRAN for the 6502 without resorting to virtual machine techniques like

https://en.wikipedia.org/wiki/SWEET16

or

https://en.wikipedia.org/wiki/UCSD_Pascal

The latter was atrociously slow, and contributed to the spectacle of professors who thought BASIC was brain-damaged advocating terrible alternatives. Commodore added a 6809 to the PET (the SuperPET) to make a machine you could program in HLLs.


The Apple II was a wildly popular business machine by any measure. Visicalc was an Apple app.

Everyone knows the 6502 is a lousy compiler target, particularly if all you understand about compilers is "what C expects" - or at least they did once that became relevant. Those of us there at the time weren't harping on HLL support, since people weren't writing their apps in an HLL but in asm, even on the Z-80.


The big issue with the 6502 is being unable to pass lots of parameters on the hardware stack, but that's all there is to it. One approach was to create an independent parameter stack: you'd just push the size of the called frame to the hardware stack, using 3 bytes per call for up to 256 bytes' worth of parameters and local variables.
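
One hypothetical shape of that scheme in C terms (names are mine): arguments and locals live in a separate 256-byte software stack, so each call costs the hardware stack only a 2-byte return address plus a 1-byte frame size - the 3 bytes mentioned above.

    static unsigned char pstack[256]; /* software stack for args + locals */
    static unsigned char psp = 0;     /* one-byte software stack pointer  */

    static void enter_frame(unsigned char size) { psp -= size; }
    static void leave_frame(unsigned char size) { psp += size; }

    /* The callee reaches its k-th byte of args/locals as
       pstack[(unsigned char)(psp + k)]; on a real 6502 this
       would be zero-page indexed addressing. */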


I remember seeing C on CP/M circa 1984. The Z80 had compiled BASICs and multiple Pascal implementations, including Turbo Pascal, although assembly was common. It was still common on the 8086 platform by the late 1980s.


A lot of Apple IIs, mine included, got Z-80 coprocessors for running CP/M. The Z-80 card was, IIRC, the first Microsoft hardware product. Along with the Videx 80-column card, it was the most popular expansion for the Apple II Plus in Brazil as I grew up.

I ran my II+ in dual-head mode, with a long-persistence green-phosphor monitor on the Videx and a color TV on the main board output.


The /// did have a nice OS -- the perhaps unfortunately named SOS, which was an improvement over the original Apple DOS and was the basis for ProDOS, which replaced Apple DOS on the 64K-and-greater Apple II models.


> unfortunately named SOS

I’m sure whoever named it had a painful awareness of what would be the ultimate end of the ///.


I've always wanted a force-sensitive keyboard; the harder you hit a key, the more urgently it handles it. Auto-bolded text? Priority of a CLI command proportional to how hard you hit return?


Microsoft did experiment with that some time ago, but the /// was simpler - it was two switches, one actuated at one pressure and the other requiring more force to actuate.

I think this kind of switch is still made.


It's nifty for music production too, when using "musical typing"


Things from 30 years ago:

    mdfind 'kMDItemContentCreationDate < $time.iso(1994-06-23)' > out.txt
Highlights include:

* Castle Wolfenstein for MS-DOS (1983-6-29)

* Lisa OS Source Code (1983-6-29)

* Classic Mac Disks from the Boston Computer Society (1984-12-24)

* Atari 7800 Ms. Pac Man Source Code (1988-12-24)

* Pyroto Mountain BBS files (1990-10-5)

* Jumpman Lives Source Code (1991-04-13)

* Delightful AU sound files and TIFF images from Sun and NeXT systems (1992-2-29)

* Tim Berners-Lee's WWW Browser Source Code for NeXT (1993-6-21)

* C64 Disk Images (1994-6-17)


I have a bunch of NeXT stuff on my backups somewhere, but the oldest files with a legitimate timestamp are from a backup of the G.R.E.A.T. desktop environment from 1995.

from the README:

                    G.R.E.A.T Version 0.92
    
      GREAT is the Graphical Environment and Desktop for UNIX.
    It is developed by the Free Software Assiociation of Germany
                  with Ruediger and Michaela Merz.
                GREAT is a free binary distribution.
                Copyright (C) 1993, 1994, 1995 FSAG
This is interesting, because it appears to be the first FOSS desktop environment. I used it on my grandmother's computer. Unfortunately the sources appear to be lost; I was only able to find this binary release, and because of its historical value I am holding on to it for dear life.


That's awesome. Out of curiosity, what kind of system are those files living on today?


`mdfind` is the terminal command to search Spotlight-indexed files, so a Mac of some kind.


The inability to move them is a feature, not a bug. If you can't move them you can't accidentally give them to the wrong person.

A passkey only authenticates a device (or group of devices). All passkey providers must provide secondary methods for validating the identity of their users so that additional passkeys can be issued when a device is lost.

But if that secondary validation is garbage then the passkey is also garbage - though that problem is not unique to passkeys. (Strong passwords have the same problem: they're only as strong as the reset mechanism.)


> The inability to move them is a feature, not a bug.

Wasn't the whole point of passkeys over FIDO2 keys the fact that you can have the same secrets stored on more than one device? (thus mitigating the largest pitfall of FIDO2 keys -- losing the physical key)


Passkeys are an implementation of FIDO2 - technically an expansion of the protocol to include so-called platform authenticators that are device bound, but also syncable credentials, which is what the major players are implementing with storage in iCloud Keychain, Google Accounts, Microsoft Accounts, password managers, etc.

In this way the promise of passkeys, and the main marketing message around passkeys, is that they are phishing-resistant. This isn't strictly true though, because within some of these syncable ecosystems you can share a passkey. For example I can AirDrop a Cloudflare passkey to someone else's iPhone. If they accept, they can now authenticate as me.

The core intentions of FIDO2 generally and passkeys specifically are sound, but the age-old problems of device loss, resets, impersonation, sharing, etc., are human issues that the tech companies and consortiums still can't solve. In this way I would argue that passkeys are an improvement but are oversold. They are still better than passwords for many use cases, though. And IMHO they should remain optional.


>In this way the promise of passkeys, and the main marketing message around passkeys, is that they are phishing-resistant. This isn't strictly true though

So, it is not true.

However, what's true is that if you're arrested, the police won't have to ask Google/Apple/anyone to give them access to your accounts.

They'll just hold the phone to your face, and get a convenient list of all your accounts and a means to log into them.

Granted, you'd need to have biometrics involved. But you can simply be asked to unlock the phone - and if that's the FSB doing the asking, you won't say "no".


> However, what's true is that if you're arrested, the police won't have to ask Google/Apple/anyone to give them access to your accounts.

> They'll just hold the phone to your face, and get a convenient list of all your accounts and a means to log into them.

As with any password manager installed on your phone. Passkeys don’t claim to solve and are not intended to solve that particular kind of threat.


They are designed to be exportable - the clients just have not exposed an implementation of that. https://news.ycombinator.com/item?id=35855133


Here's a great github discussion about passkey plaintext exports.

Apparently, the FIDO Alliance is considering adding an attestation feature that would allow websites to block various passkey implementations:

https://github.com/keepassxreboot/keepassxc/issues/10407#iss...

e.g., they could block ones that allow exports, or they could block ones that are FOSS. To their credit, it looks like Apple's throwing their weight around to prevent such blocking from being technically possible.

The more I hear about this standard, the more concerned I become.


I expect Apple's focus on privacy (whether you wish to believe that is for marketing, or real) is at play here. While passkeys don't really work as a tracking mechanism, you could do some profiling based on attestation. I am sure Google would love for you to use passkeys and be able to control what devices those are used on, and know about what devices you have. "Oh you want to sign into YouTube? Are you really on an iPhone, or are you pretending it's an iPhone?"

I use AAGUID attestation for Yubikeys at work, but that addresses an actual security need to enforce known authenticator types and prevent enrollment of non-hardware tokens.


Losing access to a service because of device loss is part of the threat model for most people (including me). Security isn't binary. Failure to provide adequate recovery should be treated as insecurity.

Always do threat modeling when talking about security, otherwise you end up just bike shedding.

No joke, I once recovered access to a Google account by loading a TOTP backup into an app in an Android emulator. Otherwise I might have been in a bit of trouble.


When I bought a new iPhone and restored it from my old phone's backup, my TOTP data from Google Authenticator apparently didn't make the trip.

If I didn’t have my GitHub recovery codes, I would have been in trouble.

Arguably, that’s what those are for. But the key point is that I did a mundane, routine transaction. My house didn’t catch fire, my phone wasn’t stolen, I didn’t act negligently. But I was potentially this ][ close to disaster.


Computer security is usually defined as achieving three things: Confidentiality, Integrity and Availability.

If device loss (or a google/apple account ban) leads to permanent loss of access to your (other) accounts, then passkeys aren't providing availability, so they're not secure.

Put another way: If you ignore availability, then passwords are even more secure than passkeys when used "correctly":

When creating a new account, choose a random 80 digit string for your password and don't record it anywhere. Also, don't set up an account recovery email address / phone number / etc.


Of course, you're always at the mercy of customer service. Not having a backup email or phone number can make your account easier to attack since the customer service agent has fewer options before they resort to just giving your account away to the attacker.


Hard disagree on that being a feature. It’s why I don’t want to use them.


From the perspective of the security experts who designed the system, it's a feature and a requirement.


And those experts have designed something entirely inappropriate for non-corporate users (who can't just have IT reset their credentials), largely solving problems no one has while introducing real ones (e.g. accidental self-DOS, and backdooring device attestation into the web again).

Browser generated strong passwords with auto fill exists today, pretty much solves all security concerns, and doesn't have the same pitfalls.


Security that is too hard and inconvenient to use is not security any more, because users are going to get around it.


>From the perspective of the security experts who designed the system, it's a feature and a requirement.

Great, all day I dream of making someone else's job easier by adding hassles to my life.

What's next from the "security experts", booby-trapping front door entrances to deter thieves?

Oh, I have another idea. Let's restrict the number of accounts people can have to, like, two, so that they don't have to struggle with remembering passwords! From the perspective of IT helpdesk, it's a feature and a requirement.


Their perspective is not relevant for me.


>The inability to move them is a feature, not a bug. If you can't move them you can't accidentally give them to the wrong person.

Have you considered the case of "the wrong person" taking the device from you non-accidentally?

I'm glad that you live in a world where you've never had anything stolen (..or confiscated by officials).

What a wonderful feature: give anyone who can snatch/break my phone an easy way to lock me out of all my accounts. Especially useful when traveling.

Not to mention the absolutely-never-happening scenarios like, um, dropping the phone. Should've backed up your keys!

(Apple will gladly restore them for you from the cloud once you purchase a new iPhone)

Oh wait, never mind: "The inability to move them is a feature, not a bug."

>All passkey providers must provide secondary methods for validating the identity of their users

Like what, getting an OTP on a known device / phone number / email that you no longer have access to?

Who's enforcing that must?

And finally, and please think about it for a moment:

If another means to verify identity MUST be provided, passkeys are not REPLACING anything - so why do we need them?


This news hits like a ton of bricks.

I can't think of another single person more influential and important to my own musical journey than Steve Albini. Guys like him are supposed to live to a ripe old age telling stories. It's just a horrible loss.


That is a ripe old age for rockers... many souls did not make it past 35.

