“This presentation can’t be opened because it’s too old” (plus.google.com)
370 points by stefanu on Mar 17, 2014 | 210 comments



The author might consider using Microsoft products because they are obsessive about providing backwards compatibility. That's one of the positive things about Microsoft products. The downside is that they have to deal with a lot of old baggage as a result.

Apple, on the other hand, is almost the opposite. They're obsessive about upgrading everyone and leaving the past behind. That's great because they're free to make bold and innovative changes. The downside, of course, is that you and your files sometimes get left behind.

Both are valid strategies in my opinion and appeal to different customers. If you feel like backwards compatibility is a really important feature then whether you want to admit it or not, you probably would be happy as a Microsoft customer.


The most important point here is that if you are aware of the problem and provide an incredibly lame dialog to deal with it, then you should have just provided the solution. Clearly Apple has all the necessary tools in their possession to fix this problem: they own the source to both Keynote '09 and the latest Keynote. Why not either 1) make a small conversion utility and provide a direct link to it in the dialog (still lazy but at least actionable), or 2) include said utility as part of the latest Keynote so you don't have to show a dialog? We're not talking "backwards compatibility philosophy" here, we're talking user experience 101 (something Apple used to hold in the highest regard). This is particularly important given the context of the software: certain file formats are expected to stick around way longer than others. In particular, with presentation software there are loads of class slides sitting on the internet that probably will never be updated, which means you are often putting the onus on someone who didn't make the file to go and convert it. Compare this to, say, Final Cut where the only real client of the serialized file is probably the original creator, and it is thus more reasonable to expect a higher degree of personal responsibility in keeping it up to date.


#2 isn't viable because it breaks the primary design goal and the entire purpose of Keynote '13--that every presentation opens the same way across desktop, web, and mobile. The Keynote '09 source code does not and never has compiled for ARM, and so you would be faced with exactly the same dialog you have now, except instead of on your Mac it's on your iPad, and instead of asking you to find an old version of Keynote it asks you to find a Mac.

The great thing about #1 is that if Apple really did leave money on the table here, any third-party developer can collect it. Just put a Mac with two versions of Keynote behind a REST endpoint and charge a buck or two per conversion. If you can turn a profit at that, not only do you get the smug satisfaction of clearly winning an internet argument, but there's also a cash prize.
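
(For illustration only: a minimal sketch of such a service in Python/Flask. The convert_keynote.applescript helper that would drive the two locally installed copies of Keynote is hypothetical, and billing, authentication, and the licensing question raised further down are all ignored.)

    import io
    import subprocess
    import tempfile
    from pathlib import Path

    from flask import Flask, abort, request, send_file

    app = Flask(__name__)

    @app.route("/convert", methods=["POST"])
    def convert():
        upload = request.files.get("presentation")
        if upload is None:
            abort(400, description="POST a Keynote '09 file as 'presentation'")
        with tempfile.TemporaryDirectory() as tmp:
            src = Path(tmp) / "in.key"
            dst = Path(tmp) / "out.key"
            upload.save(str(src))
            # convert_keynote.applescript (hypothetical) would tell the old and
            # new Keynote apps to open `src` and save the result as `dst`.
            done = subprocess.run(
                ["osascript", "convert_keynote.applescript", str(src), str(dst)],
                capture_output=True)
            if done.returncode != 0 or not dst.exists():
                abort(500, description="conversion failed")
            data = dst.read_bytes()
        return send_file(io.BytesIO(data), mimetype="application/octet-stream",
                         as_attachment=True, download_name="converted.key")

    if __name__ == "__main__":
        app.run(port=8080)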

If, however, that sounds like a waste of a perfectly good weekend, an economic bet that is unlikely to pay off, it would be an equal waste of a weekend for an Apple engineer. Apple is not a charity; they are a business that takes calculated risks, and they didn't like this one. Did they miscalculate? If you think so, there is no reason not to fill the gap yourself.


You must be joking. The amount of time it takes a consumer to build a product is not the same as the amount of time a paid developer needs to build a software product and take it through a QA process.

It's ridiculous that you cannot open the previous version of a Pages file in the current version of Pages! What an absolute joke.


When I upgraded my iWorks to '13, it put the old versions in a folder. I can still access Pages '09, Keynote '09, etc.

Did OP delete those old versions? I don't see what the problem is here.


> I do have an installation CD, but I don’t have a CD drive any more.

He got a new machine.


The lack of a CD drive has several not-very-difficult solutions: get a USB CD drive, use your old computer or a friend's computer to make a dmg out of the CD, etc.


It's about the principle. I'm not saying that the problem has no solution; I can convert the files using someone else's computer, as you say. But why should I get a USB CD drive that I will use only to convert documents that I expect to be readable anyway? Why should I spend time looking for someone with an iWork '09 installation and use their time to convert my documents? It is just unacceptable from a company that prides itself on the user experience of its products. I should be able to do this from my chair, legally, with a download of a conversion tool or an upload to a conversion service from Apple, not a third party.

I would not mind buying a floppy drive and all the necessary adapters, installing an Atari ST emulator and writing a Kamenicky encoding converter in Python if I had to convert my parents' 1st World Plus documents from '89. I would not be complaining about the obsolescence of that format at all.

My problem is not technical, it is about user experience and productivity.


You have a CD and (I assume) an associated key for a product. The only thing that seems nuts to me is that you can't just download it from Apple using said key.


Somehow Apple has been able to port their entire operating system from PPC to Intel to ARM. I think they could manage porting a bit of iWork code.


exactly... they friggin switched CPUs


Two times!

1994 -- switch from Motorola to PowerPC.

2006 -- switch from PowerPC to Intel


It's not about 'can they'. It's about 'should they'.

Can I build a flappy bird clone? Sure. Should I? Hell no. I have more profitable projects to work on given a fixed amount of time and resources.

If you think this is money left on the table, then build the 'small utility' to convert the format yourself. If you make money, huzzah, you were right. If you don't, then you were wrong, and this is a niche use case that isn't worth the investment.


> Just put a Mac with two versions of keynote behind a REST endpoint and charge a buck or two per conversion

That would violate the Keynote license agreement. So, we're back to needing Apple to help out here.

edit: from Section 2A, Permitted License Uses and Restrictions... i) to download, install, use and run for personal, non-commercial use, one (1) copy of the Apple Software directly on each Apple-branded computer running OS X (“Mac Computer”) that you own or control; and (ii) if you are a commercial enterprise or educational institution, to download, install, use and run one (1) copy of the Apple Software for use either: (a) by a single individual on each of the Mac Computer(s) that you own or control, or (b) by multiple individuals on a single shared Mac Computer that you own or control. For example, a single employee may use the Apple Software on both the employee’s desktop Mac Computer and laptop Mac Computer, or multiple students may serially use the Apple Software on a single Mac Computer located at a resource center or library.


> if you are a commercial enterprise or educational institution, to download, install, use and run one (1) copy of the Apple Software for use … by multiple individuals on a single shared Mac Computer that you own or control. For example, … multiple students may serially use the Apple Software on a single Mac Computer located at a resource center or library.

Doesn't that cover the suggested use?


> #2 isn't viable because it breaks the primary design goal and the entire purpose of Keynote '13--that every presentation opens the same way across desktop, web, and mobile.

Except, right now, it doesn't. If that was their primary design goal, then it has not been achieved, because their web version still lacks features that are front and center on the desktop. So you can't use that as an excuse.


It's not charity to support your old versions. It's customer retention.


This is just the way Apple has always been. Keep up or fall behind. They're not focused on providing backwards compatibility.

They leave the door open to losing customers to companies like Microsoft with these kinds of decisions. But that's the choice they made.


How is that a viable model for the users?

I can keep up with my system and applications just fine, but are you saying that I have to re-visit my archive of past documents with each new keynote version, and re-open & re-save them all?


What's nice is that you often don't realize you need a document from several years ago until you've already upgraded through two or three computers, none of which have the (non-App-Store) older versions of the software. You thought backing up your data was enough when upgrading computers, installing apps only as needed to avoid bloat and clutter.

It doesn't even degrade gracefully. Pages tosses out a lot of formatting and embedded media from earlier documents, which has pushed us to eliminate iWork from our 'productivity' apps.


If I were Apple, right now I'd be more concerned with losing customers to Android devices.

Case in point: we have an iPad here that is still on the iOS 5 generation it came with. It has been widely reported that iOS 6 was slower (as well as breaking things like Google Maps), iOS 7 was slower still (as well as making various visual styling changes we don't like) and both iOS 6 and iOS 7 have suffered at least one severe security flaw that iOS 5 did not have. In short, there appears to be no rational reason we would "upgrade" this device as we are repeatedly prompted to do.

Recently, Apple appears to have changed the rules for app developers so new things going into the app store have to aim primarily at iOS 7. At some point we upgraded an app we'd bought a while ago to get some much-plugged new feature that someone thought was interesting. Aside from the fact that the new feature itself turned out to be paywalled anyway, so the upgrade invitation for the app was basically just spam abusing the app store mechanism, the "upgraded" app also had a horribly glitchy UI where previously it had been smooth as silk drawing exactly the same screens.

Can we "downgrade" that app back to something that works, which we already had before, via the app store or otherwise? No, it doesn't seem so.

If we "upgraded" iOS itself, as suggested to fix the app (though we've seen no reports that doing so does in fact fix this app), could we then "downgrade" again if we didn't like it? Also no, as far as we can tell.

As it happens, we need to have Apple devices around for testing our web stuff, so we're stuck with spending some of our company money on new gear now and then for as long as our customers are doing the same, and we're stuck with using more recent versions of iOS on some devices for the same reason. But at this point, nothing would ever convince me to spend my own money on an iSomething. I was always suspicious of the lock-in and kindergarten UI, and seeing it in action, the results are at least as bad as I expected.

What is possibly more interesting is that it seems average users are getting fed up as well. Not so long ago, mobile visits to a site I run that isn't particularly geek-related were utterly dominated by iOS devices. Today, while iOS still represents a clear majority, it is a much closer balance between iOS and Android, and the trend is strongly in the latter's direction. I can't help wondering whether this is people who've now been not-upgraded to more recent versions one time too many, or who've been forced to dump expensive and otherwise working hardware for artificial software reasons, starting to vote with their wallets.


It is a good point and I agree with you it can be annoying to be forced to upgrade perfectly good software. But it does cut both ways. Legacy support is generally good for customers, but a burden for developers.

As far as development and having to spend money on hardware, though, I'd say Android is far worse than Apple in that regard. Android still has nearly half of its devices running 2.x, and the screen sizes are all over the map. We write and support native apps for both Apple and Android, and we have far more Android devices lying around for testing than we do Apple devices.



I'm no fan of the move-fast-and-break-stuff philosophy of software development either, for what it's worth. I think system software, and other "platform" products like browsers, benefit from long term stability a lot more than they benefit from mixing up bug fixes, security patches, minor adjustments, API breaking changes, and any other stuff they feel like putting in this time, all without any grown-up version control or offering any sort of reliable foundation on which other things can be built.

I think it is deeply regrettable that certain parts of the industry have moved in that direction, and Android is a fine example (as is almost anything else Google makes). That said, it's still preferable to Apple, who move fast, break stuff, and then won't even let you fix it by moving back again.


> Can we "downgrade" that app back to something that works, which we already had before, via the app store or otherwise?

Yes, if you plan ahead. Keep the .ipa files, those can be loaded onto the phone via iTunes.

If you've upgraded iOS, you can also downgrade that. It's a pain, you need to get the .ipsw file and store some sort of activation code.

Is this more or less of a pain than on the desktop? Well, if you want to downgrade an app on the desktop, you'll need to keep the old installer around. Same thing with an OS, you'll need to wipe and reinstall everything from the old OS installer.

It's more unexpected on iOS, because generally you never need to deal with installers, but I don't think it's more painful. It's just more painful by contrast, since upgrading is so nice on iOS.


It should be noted that downgrading iOS is not supported by Apple, and you need third-party tools to save a signature code to emulate Apple's signing servers.


Files are data. Data should be eternal. You should be able to take a presentation you make today and convert it to a newer format 30 years from now. Making software not backwards compatible is fine and normal, but if files expire in a few years, Apple is saying that the things you do on their computers are disposable.


The world isn't so black and white. Document serialization is a rocky landscape that is rife with compromise. You have to balance document open time, document save time, file size, backwards compatibility, forwards compatibility, recovery modes, interoperability, size in memory, parsing time, time to save to disk, proprietary embedded file formats, metadata support, and more. And those are just the development considerations. You also have to think about upgrade cycle, time-to-market, third party integrations, and what will help you win marketshare and sell copies.

Software is hard. I think it's pragmatic for software vendors to have a strong, transparent philosophy about the trade-offs so that consumers can make the right choice. As the grandparent points out, Microsoft values backwards compatibility. If you value that too, buy Microsoft.


> Software is hard. I think it's pragmatic for software vendors to have a strong, transparent philosophy about the trade-offs so that consumers can make the right choice.

I'm not talking about software, I'm talking about information. Information shouldn't have an expiration date. Here's a webpage from 1994: http://www.lysator.liu.se/pinball/expo/ Surely you wouldn't prefer a world where the blog you wrote 4 years ago can't be viewed on a new computer?


Yeah but "information" generally requires software to view it. The only reason that web page still works in modern browsers is because they've gone to all the effort to account for quirks in ancient HTML. Apple clearly didn't think it was worth the effort in this case.

Also worth pointing out it's much easier to display old formats than it is to make them editable.


I don't think ancient HTML was complicated enough to have quirks.


Really? Looking at the source of that page:

IMG tags with unquoted SRC attributes, and unquoted ALIGN attribute

BASE tag with default attribute - tag is: <base="http://www.lysator.liu.se/pinball/expo/"> but should be <base href="http://www.lysator.liu.se/pinball/expo/">

UL tags with no LI tags


That was one of the main reasons IE6 survived for so long.

So yes, under some circumstances, old formats are deprecated - even on the web. The world is better for it, but it still sucks for those who need that old stuff and are unable to move it forward.


> Here's a webpage from 1994

It's a good thing that webpage didn't use the <blink> element.


>Surely you wouldn't prefer a world where the blog you wrote 4 years ago can't be viewed on a new computer?

A counter-argument to that is that I'm not sure I want to live in a world where parsing an HTML document takes over 1,000,000 lines of unparallelizable C. You are glossing over a huge requirement that makes the rendering of that page possible in 2014. Ultimately it takes software to render that information, and the software that does render it may have an expiration date.


Surely you wouldn't prefer...

How about if we're talking a blog I wrote 20 years ago, vs. some really nice improvement in modern software?

Backwards compatibility is great, and ideally no data would ever be lost to bitrot, but backwards compatibility always has a cost. I'm not unilaterally willing to pay that cost.


Assume you work on Keynote, so you have access to all the code and documentation for the '09 format. On average, computers are now faster and have more RAM. You don't need to write the format, only read it. Your users will tolerate 15-minute format conversion times and the loss of some things like formatting and videos.

Is it really so difficult to just not break a feature that was present in the last version of the software?


Yeah, assuming your software is reasonably modular and you haven't completely rewritten everything from scratch in the new version, retaining support for old file formats shouldn't be a very big deal.

At worst (like in the case of iWork 13 which probably is completely new code), you get one programmer to spend a few days writing a converter which reuses most of the old code.


More importantly, provided you don't do it too often, users will tolerate some loss of formatting fidelity from version to version.


Yeah, Microsoft is not all that super cool, and it's buggy, but it's right in preserving the path back. Not being able to open and work with old documents is frustrating. In a way it's like forgetting your grandparents and parents. Not a good thing to do. Apple PR should hear our voices and make a move. They'll lose too much if they don't.


"Data should be eternal."

This is not a useful statement. If you convert the "should" to a "shall", and try to design a system around that requirement alone, it would prove very hard. Even designing a clock around that requirement is very hard (http://longnow.org/clock/).

If you back off from the "shall", then you are in the standard world of engineering tradeoffs, which is where you started.

What features do you want to give up for "eternal"?


Designing a clock around that is very hard because the clock is a physical machine that needs to physically last. Software is easier, because it's, well, soft.

I'm the developer who built their related Long Bets project (http://longbets.org/), and I think "data should be eternal" is a perfectly reasonable standard. That was certainly my goal in designing Long Bets.

If you start with that as a principle, I don't think it imposes particularly large engineering burdens, especially if you accept some potential degradation as a consequence.

For example here, instead of a "fuck you" dialog box, they could have imported the core of a presentation: text and images positioned on a sequence of frames.

An analogy is HTML. It's basically zero engineering effort to just suck the text out of a page. It is only modestly more effort to pull out some of the semantic markup, like headings, lists, and emphasis. And that's the part that really hurts to lose, not which precise shade of blue you used in your footer text.
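
(A quick sketch of that claim, using nothing but Python's standard-library HTML parser and keeping only the text plus a handful of structural tags; the tag list and the sample markup are just for illustration.)

    from html.parser import HTMLParser

    KEEP = {"h1", "h2", "h3", "p", "li", "em", "strong"}  # the part that hurts to lose

    class Skeleton(HTMLParser):
        def __init__(self):
            super().__init__()
            self.stack = []   # structural tags we are currently inside
            self.out = []     # (tag, text) pairs in document order

        def handle_starttag(self, tag, attrs):
            if tag in KEEP:
                self.stack.append(tag)

        def handle_endtag(self, tag):
            if self.stack and self.stack[-1] == tag:
                self.stack.pop()

        def handle_data(self, data):
            text = data.strip()
            if text:
                self.out.append((self.stack[-1] if self.stack else "text", text))

    parser = Skeleton()
    parser.feed("<h1>Pinball Expo</h1><p>Welcome to the <em>1994</em> expo page.</p>")
    print(parser.out)
    # [('h1', 'Pinball Expo'), ('p', 'Welcome to the'), ('em', '1994'), ('p', 'expo page.')]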


Perhaps the ideas should be eternal, but the byzantine markup system? This is a major reason why the older I get, the more I prefer plain text wherever I can get it and open standards when I can't. Fancy rendering is just a variation on the drug dealer's bargain "the first hit is always free".


It is eternal, as long as you have the system and the software that it was created with.

This can be said for all sorts of things, from the Commodore 64 to Keynote. If you want to be able to access the data on an NES cartridge you are going to need special hardware and software. Yes the ROMS are available now online, but that's only because someone with the hardware and software made it available to others.

This is the nature of proprietary formats.


> This is the nature of proprietary formats.

Indeed. When you choose to save your work in a not-public format, you are deciding not to care about its lifespan: complaining later is just useless, unfortunately.


My approach: Export every presentation as PDF. I'm guessing LibreOffice will be around for a bit (files from 2003 and later working ok) and the PDFs provide a second line of access, but, I accept, they are not convenient for editing.


Use ODF.


I do. LibreOffice saves in that format by default.


Assuming arguendo that data = information, a printout will suffice. Your conversion will be manual, but you should not expect a complicated, performant, object serialization format to last through several generations of computer architecture.



Not sure I agree entirely here - Microsoft is definitely a far better bet than Apple for this kind of stuff, but it's still not a perfect bet. As Microsoft has shown with their Win8/phone platform, they are trying to transition away from their old platforms (although the market currently won't let them). It's almost a guarantee that eventually old Office versions will fail to run, and after that, running into data incompatibilities with old files becomes par for the course. With the call-home DRM in newer Microsoft platforms, there is also no guarantee you'll even be able to run an old Microsoft version either.

A better option is to stick to open standards. If your presentation is done in something like HTML, there is a 100% chance you will be able to view it even in 50 years, as it is standardized and there are many implementations available for viewing it.

Of course, if your presentation depends on some javascript calculations or a remotely hosted jquery, those probably won't be working in 50 years time either..


Saying Microsoft isn't perfect at backwards compatibility is nit picking to say the least! They are far far ahead of any competition in terms of backwards compatibility. It's practically measured in decades whereas Apple can't manage 5 years.


'far ahead - in terms of backwards compatibility'

something poetic about that...


Silverlight didn't last decades.


I don't think that's the same thing. If Microsoft came out with a newer version of silverlight, you'd expect it to keep reading the old files.

The fact they canceled silverlight and won't make a new version, doesn't have much to do with backwards compatibility.


Did anyone expect it to? Perhaps if it had seen the same interest from developers that other products have, it would have been supported longer...


I don't see why innovation of your software means you have to lose compatibility to old data.

It seems to me you can do pretty much anything whilst having a small set of conversion scripts to get the old data into your preferred new format.

It's not "big data", not streaming data, not rocket science.
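
(A toy sketch of that idea: each conversion script lifts a document one version forward, and opening an old file just chains them. The field names and versions here are made up.)

    CURRENT_VERSION = 3

    def v1_to_v2(doc):
        doc["slides"] = doc.pop("pages", [])   # hypothetical rename
        return doc

    def v2_to_v3(doc):
        doc.setdefault("theme", "default")     # hypothetical new field
        return doc

    UPGRADERS = {1: v1_to_v2, 2: v2_to_v3}

    def load(doc):
        version = doc.get("version", 1)
        while version < CURRENT_VERSION:
            doc = UPGRADERS[version](doc)
            version += 1
            doc["version"] = version
        return doc

    print(load({"version": 1, "pages": ["title", "agenda"]}))
    # {'version': 3, 'slides': ['title', 'agenda'], 'theme': 'default'}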


Except compatibility mode is often not compatible. I've run into it a bunch of times where .doc, etc. files just don't render right on new copies of Office. I have had to install old versions of Office to work on certain documents. These documents often have VBA in them. They can't get everything to work properly 100% of the time.


> Both are valid strategies in my opinion and appeal to different customers

Absolutely. One is for customers who don't want to lose any file or data. The other is for those who don't mind losing old stuff, no matter how important it may be. Now that's bold and innovative.


A lot of people have absolutely no problem with proprietary software until it breaks. The problem is you never know when it will break, and what it'll take down with it.

I'm not saying 'go full Stallman'. I'm just saying: whenever you hand your data over to a private company, think about whether they consider it as important as you do.


It's like the news that the majority of the world's ATMs run on Windows XP or earlier. Or lab equipment that's air-gapped because it only works on some obsolete OS that's horribly insecure.

Proprietary software is fine, but if long-lasting hardware is dependent on it bad things happen when that software company decides it's no longer worth supporting.


The problem with air-gapped test equipment (I work on this kind of stuff professionally) is that redeveloping real-time software for lab and calibration equipment is often very expensive.

It's easy to think, "Why don't they use [New Hotness Software]?", which on the surface seems like a good idea. Until you absolutely need sub-millisecond-precision I/O; then you kinda start to cry when you realize how hard precise timing in computers is.

If you use lab equipment on, say, Linux, BSD, OS X or Windows, you're using a time-shared OS, not a real-time one. So your I/O events aren't delivered when the event happens, but when the scheduler is damn well ready to let you know the event happened.

The easiest example is some timing equipment I was using to count digital pulses from a quartz crystal. On a 'modern' secure OS I couldn't really get below a 0.1% margin of error, which wasn't low enough for our uses. I fell back to an older, insecure real-time platform and got the error down to 0.005%.

Security is great. I attend security conferences in my spare time and try to stay up to date on the topic. The main thing is that when you get into most of this computing, not everything is running glibc and Win32. Hacking it isn't very easy unless you know the system to start with.


Wouldn't it have been better to just stick a specialized piece of hardware/fpga to read directly from the crystal, buffer it, and pass it on? In fact that's what the timing equipment usually does for you. There's no reason to give up security when a few minor changes in architecture will give you performance, security and ease of maintenance and interconnectability.


You're acting like the read/write time to the hardware/FPGA is negligible, which it isn't.

When you access hardware on, say, a PCI bus (which you would in this scenario), your call to the PCI bus does not take place WHEN you call for it to take place. You call the kernel, which calls the scheduler, which calls the hardware manager, which calls the driver, which finally processes your request.

Once your request is processed, all of this is dumped and something else runs while the processor waits to hear back from the PCI bus with the response, because that takes ages in processor time.

Finally an interrupt arrives, is made sense of, the appropriate driver is called, then it gives your information back to your process, and you're back on your merry little way.

:.:.:

The problem is that when you called the OS to start this long chain of events, the real world didn't stop. Your real-time module is still counting the 32,600,000 pulses per second.

This is where you'll get errors. It's easy to think things in a computer happen instantly, or so blindingly fast that you don't care about what happens, in what order, or how quickly.

The situation you described is what originally gave me 0.1% error. Eventually I switched to a more aggressive tack: polling asynchronously in a separate thread and, when a process called for the time, responding with the latest received time.

This got me down to 0.07% error. Still not acceptable.
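
(For the curious, a toy sketch of that pattern in Python: a background thread hammers the hardware and caches the latest reading, and callers answer from the cache instead of waiting on a round trip. read_counter() is a stand-in for the real hardware call.)

    import threading
    import time

    latest = {"value": None, "at": None}
    lock = threading.Lock()

    def read_counter():
        # stand-in for the slow, jittery trip through the kernel and the bus
        return time.perf_counter_ns()

    def poller():
        while True:
            value = read_counter()
            with lock:
                latest["value"] = value
                latest["at"] = time.perf_counter_ns()

    def current_reading():
        # callers get the most recent cached value immediately, plus its age
        with lock:
            return latest["value"], time.perf_counter_ns() - latest["at"]

    threading.Thread(target=poller, daemon=True).start()
    time.sleep(0.01)                 # let the poller populate the cache once
    value, age_ns = current_reading()
    print(value, f"(cached {age_ns} ns ago)")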

:.:.:

It's nice to be starry-eyed and think there is no reason to give up security. But sometimes secure software can't do what you need it to do. The more crap you put between you and the metal, the more time it'll take to execute.

This is logically provable.

If you take processes A and B, both of which are the optimal way to do something, there is no faster way to do the task; therefore one can assume A's and B's execution times are equal.

Yet B operates in a secure, sandboxed environment with a time-sharing OS. Therefore B's true execution time is B+C+D.

We know A = B, but for A = (B+C+D), C and D must be zero, which they can never be in the real world.


I get what you're saying about how it's definitely going to be slower, but I don't understand why the read/write time to the hardware matters if the hardware is buffering the last 100 timings or so. What I'm trying to point out is that you don't have to make your entire system down to the keyboard real time - you only need to make the tiny piece that is doing the physical process real time (with some very simple logic gates that can run far faster than any general purpose CPU), and then sending the results over the PCI/whatever bus in batches later to be processed.


The only real-time component of the software stack is the kernel. If you want another real-time module you're screwed, because you need to run it in kernel space, but if you have a kernel you can't.

Or you run a real-time OS, which may have problems because they aren't developed with security in mind, but I/O timing.

It's a fundamental flaw of time shared OS's.

:.:.:

Second, security works in a simple way.

Cost to secure vs money lost.

Lab equipment is expensive. The loss of an entire calibration bench could run into the $250,000 to $1million and beyond range.

But redeveloping an entire OS to do this? You're talking about spending 20 to 100x MORE on security than your losses. That's idiotic at best.


    > The only real time component of the software stack is
    > the kernel. If you want another real time module your
    > screwed because you need to run it in kernel space, but
    > if you have a kernel you can't.
I think you're confused about what RyanZAG is saying. If I'm reading correctly, he's saying "don't run the real-time stuff on the CPU." Have that stuff run on a much simpler piece of hardware that is real-time, and runs the real-time code, then have the non-real-time userland on the CPU talk to it in not-real-time.

To take your example:

    > When you access hardware on say a PCI bus (which you would in
    > this scenario). Your call to the PCI bus does not take place
    > WHEN you call for it to take place. You call the Kernel, which
    > calls the scheduler, which calls the hardware manager, which
    > calls the driver, which finally processes your request.
Design the PCI card to have its own, smaller CPU (or FPGA, or whatever) that does the real-time interaction with the "32,600,000 pulses per second." Don't have the real-time bits depend in any way on the code running on the CPU. Have it buffer the data. Then, when the PCI card is accessed by the userland program on the CPU, it dumps the buffer onto the PCI bus. The userland would obviously have to be fast enough that the buffer doesn't fill up, but that speed is much less than "real time". You can then work with the data in the userland, running in non-real time.


      >don't run the real-time stuff on the CPU.
You have to is what I'm saying.

      >that does the real-time interaction with the data...
I already gave an example where I literally said I do this. What you're not understanding is that the time to poll and respond is part of this real-time system.

As I said before, the amount of time between "Kernel, I need this time stamp" and "PID 1337, here is your time stamp" is not instant, and it is not constant. There are several stages of blocking I/O, which are not always given priority over other threads. For 100% accuracy it needs to be both instant and constant. This part, the collection and storage, ALSO needs to take place in real time, BUT BEING IN USERLAND, it can't.

So to outline

      Topic: UserLand   Kernel                      RealTimeCard
     Stage
      1)     Request                                Counting Pulses (You Want This)
      2)                Unknown time                Counting Pulses (Additional Error)
      3)                Spent Doing I/O             Counting Pulses (Additional Error)
      4)                Changing Tasks              Counting Pulses (Additional Error)
      5)                etc.                        Counting Pulses (Additional Error)
      7)                                            Near Instant Response 
      8)                Unknown time (+Error)
      9)                Changing Tasks (+Error)
     10)                Managing Memory (+Error)
     11)                Higher Priority Threads (+Error)
     12)     Data received
What this example boils down to is that you get a time stamp 8 'cycles' after you thought you'd get it. But that actual time stamp is really 5 'cycles' off of what you should have gotten.

Those two unknowns between your real-time card and userland are where your error comes from. You have both pre-call and post-call error added. Neither is avoidable.

No matter what's on the other end of your bus, unless your bus moves faster than light.


It would be so nice if programs could request a dedicated core for these kinds of things.


Under Linux you can. You can boot the kernel with the maxcpus argument set to 1 (or to however many cores you want the Linux kernel to use), and then on a quad-core machine you have 3 cores available all the time, set up so that Linux won't ever need to handle interrupts on them. Then you just start your application and set its affinity to one of those unused cores.

You can extend this further by doing things like mmapping in 4GB of memory through the kernel's hugepage support and locking the physical-to-virtual address map so the kernel can't touch the physical block of RAM you just allocated. Then you can do things like talk directly to a PCI device, like a network card, and set up DMA directly from a NIC into a buffer in your application's memory.

All of this is done completely in userspace but you get all the performance benefits of implementing everything like it was running in Ring 0 and the kernel is not involved in anything apart from the initial setup and teardown. You can build an extremely high performance application basically running on bare metal but with the Linux kernel still running on a different core to handle anything that doesn't directly involve your application and there wouldn't need to be any syscalls between the two to service some request.
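
(A rough sketch of the userland end of that setup, assuming the kernel was booted so that core 3 is left alone as described above. os.sched_setaffinity and SCHED_FIFO are Linux-only, and the core number is just an example.)

    import os

    ISOLATED_CORE = 3                         # assumption: isolated at boot time

    # Pin this process to the isolated core so the scheduler keeps it there.
    os.sched_setaffinity(0, {ISOLATED_CORE})

    # Optionally ask for a real-time FIFO policy so nothing on that core
    # preempts us (requires root / CAP_SYS_NICE).
    try:
        os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(50))
    except PermissionError:
        print("not privileged; staying on the default scheduler")

    print("running on CPUs:", os.sched_getaffinity(0))
    # the tight, latency-sensitive loop would go here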


What I'm confused about (I'll likely just have to play around with this feature at some point; I knew about APIC, but not about completely sandboxing cores) is this:

If you have bare-metal operations, do you still have access to kernel functionality à la stdio and the libc libraries? Normally when you hit bare metal you're on your own. I'm just wondering, because the idea of writing my own threading and memory management libraries excites me to no end </sarcasm>.

Also, if you can call these functions as if you're in userland, do they block until execution has completed on the other 'kernel' cores? And if you're creating P-Threads elsewhere but not managing their execution on the 'non-kernel' core, what happens?

>userspace but you get all the performance benefits of implementing everything like it was running in Ring 0

Can you give any literature on this? These terms are contradictory.


Sorry for the delay, I'm probably not the best person to answer this and I know just enough on the subject to be dangerous so with that in mind I'll give it a shot.

>If you have bare-metal operations, do you still have access to kernel functionality à la stdio and the libc libraries? Normally when you hit bare metal you're on your own.

That's just it: your process is just another Linux process. The difference is that the scheduler will put it on, let's say, core 2, but everything else has an affinity for core 1, and interrupts will also be handled by core 1, meaning your application is never interrupted on core 2. You still get every feature that you normally get in Linux.

>Also, if you can call these functions as if you're in userland, do they block until execution has completed on the other 'kernel' cores?

You are in userland, normal userland. Implementation details of syscalls are black magic as far as I'm concerned, so take this with a grain of salt, but apart from kthreads the kernel isn't running in some other thread waiting for a syscall to service. A syscall is just your program calling int 0x80, which jumps into the interrupt handler in kernel mode on the same core that was just running your code, does its work figuring out which syscall you're making, and finishes handling the interrupt. So basically yes, your thread "blocks" while the syscall is in progress, on your special isolated core, not on core 1 like everything else running on the system.

>And if you're creating P-Threads elsewhere but not managing their execution on the 'non-kernel' core, what happens?

I'm not entirely sure what you mean by this, specifically "managing their execution on the 'non-kernel' core". It's just a thread like a normal Linux thread, but at first a new thread is going to have an affinity for only core 1 which you can change to core 2.

>Can you give any literature on this? These terms are contradictory.

What I meant was that generally, if you want to do certain low-level things like talk directly to hardware, you need to be running in kernel mode. But really you don't need to be in kernel mode all of the time, just initially, to allow normal user-mode code to talk to the hardware instead of having to use the kernel like one big expensive proxy. As for why a user-mode driver for a network card would be such a huge performance gain, there are a number of reasons: every syscall is a context switch; whatever data you're sending to or receiving from the network card gets needlessly copied to and from a buffer instead of being read and written directly; you have to go through the entire Linux TCP/IP stack, with tons of functionality in there that you might not need but have to have, so it's just wasted cycles; and the list goes on.

I did manage to find an old Hacker News comment on the subject for further reading from someone much more well versed on the topic than I am. https://news.ycombinator.com/item?id=5703632

Also of interest might be Intel's DPDK which is basically what we're talking about, moving the data plane out of the kernel completely for extreme scalability.


Even that wouldn't work. There are a lot of parts of the CPU, and all of them work together. RAM access, cache access, interrupts, and memory management are handled globally, not on a per-core basis. You'd run into the problem of needing multiple north bridges (do we even have those anymore, or are those on-chip now?), which you couldn't have.

You have to build an entire OS with real-time I/O at heart. They do exist, and some are secure (Blackberry's platform), but they aren't deployed in the test industry. The most likely reason is licensing fees: nobody writes data acquisition software for them.


>lab equipment that's air-gapped because it only works on some obsolete OS that's horribly insecure.

Just to emphasize how prevalent this is, I work in a 5 story life sciences research center with hundreds of personnel. All of our lab computers are airgapped, many of them stuck on OS versions >10 years old.

I used OS X 10.1 the other day. That was certainly an experience.


Mac OS X 10.1 is roughly as old as Windows XP, isn't it?


To play devil's advocate, one could say the same for a lot of open source software: it's great until it breaks. Your claim implies that broken proprietary software is bad because you can't access the source code, while mine implies that broken open source software is bad because there might be no company or organization dedicated to providing support.


But dealing with real-life probabilities, the likelihood of that event occurring (unless you're approaching the fringe of newer/locked-down hardware) is slim.


Irrelevant; you can fix (or pay/trade/beg someone else to fix) FLOSS. You can't say the same for closed source.


For many things, that's only true as a technicality. Having access to the source code of a complicated piece of software doesn't instantly make it easy, or even remotely feasible, to fix it yourself.

For example, back when I ran a Linux desktop and had problems with my video card or video settings, there wasn't a chance that I could fix it myself. The odds of me being able to do so were roughly equivalent to the odds of being able to fix a broken closed-source video driver by opening the binary up in a hex editor. Technically possible, yes, but not remotely feasible.


You didn't have access to the source of the video driver. That's not an example of having the source and it still being difficult to fix.


How so? You could similarly beg or pay for Apple to fix the issue.

Consider OP's use case. Let's say he's a non-programmer and Keynote is open source, but similar in complexity to WebKit. Now he comes across the error - what can he do? Learning to program, or finding someone with the time and know-how of Keynote's (or WebKit's) inner workings, may take months.

On the time scale of months, he could also bitch enough at Apple that they may release a tool.

However in both cases, the most time efficient solution is to download Keynote 09.


You can also patch the binaries of proprietary software, technically possible, but exactly as unfeasible as your solution when you want to get stuff done.


> [...] technically possible [...]

But often not legally possible?


Practically, it's more of a "not legally possible to distribute".


In a similar vein, I'm now accelerating my plans to move to something like this: http://phpmygpx.tuxfamily.org/phpmygpx.php because Google has changed Maps so that I can no longer just paste a URL to a KMZ on another server and have it pop up in Maps, shareable with friends and family. It's not lock-in, but it's a similar symptom of relying on software that you have no control over (closed source). I distinctly remember similar grumblings when Google shut down Reader...


s/proprietary//

While I like non-proprietary software, there is nothing inherent in it that makes much of a difference for the lay person. As the complexity of the software goes up, so does the level of what is considered a 'lay person'. For any sufficiently complex software and non-pervasive problem, you're SOL in both cases. Standards help more than proprietary vs. non-proprietary does, and so does lower complexity.


For example, if you had your own compiler which generated a.out output, then the Linux 1.2 switch to ELF would have broken the code in a similar backward incompatible way, despite there being no proprietary code involved.

(Standards wouldn't have helped either.)


I'm calling your bluff:

https://plus.google.com/115250422803614415116/posts/hMT5kW8L...

Specifically, Alan Cox's comment that he can still run an a.out rogue binary from 1992 on a 3.6 version of the Linux kernel. Linux 1.2 was released March 1995.


Bluff called, I fold .. and sweet!

Looking around, it requires a bit of work (see for example http://www.ru.j-npcs.org/usoft/WWW/www_debian.org/FAQ/debian... ).

Here's a recent (2014) report of success: http://www.linuxquestions.org/questions/slackware-14/running...

rogue is text only. According to a report from 2009 at https://www.complang.tuwien.ac.at/anton/linux-binary-compati... :

> A Mosaic binary from 1994 just segfaults, as well as all the other ZMAGIC binaries (1994-1995) I have lying around.

but OTOH that may be because of the setting of noexec and other flags.


Uh.

  # modprobe binfmt_aout
  # uname -rv
  3.2.0-60-generic-pae #91-Ubuntu SMP Wed Feb 19 04:14:56 UTC 2014
Standards help a lot, that's why a.out still works on modern Linux.


I'd imagine if it doesn't work though, in contrast to the OP's situation, you could spin up a virtual machine with a pre-1.2 version of Linux and run the code without having to pay anything to do so. Also this is a quite different situation to reading a document.


I'm having trouble locating a (floppy?) image that old but I'd love to try - AFAICT VirtualBox was written with kernel 2.6+ in mind.

Edit:

Debian: http://archive.debian.org/debian/dists/Debian-0.93R6/disks/

RedHat: http://archive.download.redhat.com/pub/redhat/linux/1.0/en/o...


They made a similar break in the latest GarageBand where the format changed but also 32-bit plugins no longer work, and there's no way to download the old version of GarageBand on a new Mavericks machine. While it will open and update old files, it loses certain settings, sound clips, etc. so I have several "updated" files that are not what I recorded.

In my opinion this is totally unacceptable, and fundamentally opposite their philosophy of solving hard usability problems to make the user's life easier. If they were so committed to that, then they would automatically read and update old files, and do so accurately.

More and more, Mac OS X is becoming a shell for other people's (preferably open source) software that I actually trust :\


I'm more concerned as to how it's increasingly impossible to buy an old copy and install it; the app store will only show you the latest version (and even for only the latest OS version), and the installer might even refuse to install, having detected a more recent version.

You basically have to pirate it in order to do these kinds of shenanigans.


Apple is pretty obnoxious about this sort of thing. They recently stopped printing photo books for the version of iPhoto my mom used, with no warning at all. I had to upgrade her iPhoto, which required I upgrade to Mavericks, which required I add more RAM to her computer.


and they removed iPhoto Library sharing from the recent update, which we used all the time...


Sounds like they played it pretty well.


This is NOT true of all proprietary software. It's just that with apple either you keep upgrading to the latest and greatest all the time or you will have problems.

Apple has its pros and cons, and one of those is that you have to keep upgrading regularly.

Most proprietary software will offer upgrade paths from older versions.


Additionally, there's nothing inherent about open source that avoids this sort of problem. An open source project could decide on exactly the same sort of upgrade path, where they support current-minus-one versions and that's it.

In theory, open source is better because the old code is still out there and you can get it up and running to upgrade your data. In practice, it can be pretty tough to get open source code that hasn't been maintained in years to build and run properly on a current OS install.


I think LibreOffice 4.0 abandoned support for the old StarOffice binary format for example.


Yes, but you can download and install, at minimal cost, any old version of LibreOffice/OpenOffice and read your StarOffice documents. You are not quite left stranded. And I also assume the code to read that format still resides in a repository and can be pulled and reused at will. The same does not happen with proprietary software.


And even Apache OpenOffice abandoned them. Basically the old binfilter was that horrible.


It's relatively easy to install an old version of XP or Debian on VirtualBox to run the old software though, and that would be enough to export the documents to a newer format. In comparison, OSX is generally hard to get running on VirtualBox.


You are right, it is not true of all proprietary software. There is no problem with software upgrades, not even with frequent software upgrades. The problem is when the software requires regular document upgrades. That requirement cannot be met in real life if you only produce a few documents a month.


What I'm saying is most proprietary software does offer document upgrades. You can open very old versions of Word, Excel, etc. files. My company supports opening backed-up files from up to about 8-9 years ago. The only reason we don't go further is that older versions don't work on modern OSes and hence no one uses them. But even then it's still possible to upgrade if you really wanted to.


The problem is that with proprietary software you have to trust the vendor that everything will go well. That's an important choice you have to make, and making it lightly can bite you hard in the face later.

With Libre software (rather than OSS), you the user stay in charge. It may require rolling up your sleeves (though things have changed, and you can now easily find companies that will provide support for Libre software), but in the end you keep control.


I think even MS is better than this at supporting old Office binary formats. Only PowerPoint has completely removed the support for pre-97 formats. With Word/Excel you can generally unblock support for older formats even if by default they will not open.


'even MS'

MS is one of the best at supporting old formats. That's part of how and why they managed to keep their monopoly on the desktop. There are not many OSs out there where you can start a 20-year-old piece of software and it almost always runs perfectly. (Try that on Linux or OSX.)


> Try that on Linux or OSX.

On OS X that would certainly fail (even much software that's between 5-10 years old would fail) this test.

But Linux provides a very stable ABI.

20-year backwards compatibility might be stretching it for Linux, since it's only around 20 years old to begin with. But if you have (e.g.) a statically linked Debian binary from the late 1990s, I'd bet almost anything it would still work on your desktop today.


How about 22 years? Specifically, see Alan Cox's comment about running a version of rogue compiled in 1992:

https://plus.google.com/115250422803614415116/posts/hMT5kW8L...

As for MS, my experience has been different. All too often, when trying to pull up old (mid-90's) Word documents, MS Office chokes, where Libre/OpenOffice chugs along just fine.


At Microsoft, if there is a design choice between "abandon the old user" or "let's forget them to make the current product better," they nearly always choose #1. Apple nearly always chooses #2. (I still remember DOS 3.2 to DOS 3.3.)


You probably meant "support the old users", otherwise your two options are pretty much the same :)


Sheesh, it made sense 5 hours ago!


I spin up xspringies[1] every once in a while to waste some time. It's well over 20 years old and runs like a charm.

http://www.cs.rutgers.edu/~decarlo/software.html


In my mind that honor belongs to Sun with Solaris.


Apple is also better at supporting old MS formats. I found some old recipes I wrote in Word 6.0.1 in 1998, which opened fine in OS X Mavericks's TextEdit.


For archival purposes, I strongly recommend saving extra versions of documents in PDF format. Those should be readable forever.


Pages '13 can't save PDFs with links. The official workaround? Use Pages '09.

Pages '13 can't open templates created with Pages '09.

Pages '13 can't copy and paste lists with numbers into plain text.

There's a reason why it has 2 stars on the app store. Somebody seriously dropped the ball at Apple. Office never pulled shit like this. As soon as '14 is out for mac, I'm removing iWork. Caring about design is one thing, caring about design more than the existing work of your users is another.


Text formats should be readable forever. PDF, while generally excellent in this regard, has already broken backwards compatibility on several occasions (or rather: Acrobat has, which is not formally the same as the format doing so, but in practice there isn’t a whole lot of difference; you might as well just read the raw XML in an “unopenable” Keynote document).

Adobe’s marketing will tell you otherwise, of course. I used to share an office at Berkeley with Paulo Ney de Souza, who had a wonderful collection of “legacy” pdf files that could no longer be opened in Acrobat that he would trot out for the Adobe sales people when they came by (he was helping to get MSP off the ground at that point).

PDF is probably the best choice for preserving “design”, but I wouldn’t trust it for preserving content any more than any other format. Always keep a plain text copy.


> PDF is probably the best choice for preserving “design”, but I wouldn’t trust it for preserving content any more than any other format. Always keep a plain text copy.

I agree, but have a look at PDF/A (A is for Archiv{e|al}): http://en.wikipedia.org/wiki/PDF/A

> PDF/A is an ISO-standardized version of the Portable Document Format (PDF) specialized for the digital preservation of electronic documents.

> PDF/A differs from PDF by omitting features ill-suited to long-term archiving, such as font linking (as opposed to font embedding).


PDF has been an open spec for a few years now.

http://www.adobe.com/devnet/pdf/pdf_reference.html


Open spec doesn't help unless all the creation tools adhere strictly to the specification. Historically, they haven't, and support for their various "quirks" has been uneven at best.


Officially, OOXML (aka .docx and friends) is an open spec, too.


.docx (and co.) isn't really an open spec. Microsoft forced it through the standardization process, but it doesn't really deserve the title.

The "spec" is full of statements like "render this the way that it was done in Office 95".

The best solution would be to use ODF, which is supported by all office software.... except Apple's.


Yea, I mentioned it in my wishlist for Satya: http://hal2020.com/2014/03/03/satya-shuffles-his-leadership/...


And that's a good thing. OOXML is a terrible format, but it's still an open spec. Or do you want .doc back?


I agree, the same argument could probably have been made of postscript (ps) at some point in time and while it's still around, most (non-technical) people don't use it.


I was recently asked for my resume, a document which I created in Pages and then exported to PDF. So sure, what I had was readable, but it was out of date. Thus began the painful cycle of getting Pages (09) back on my machine - for some reason I had deleted it in the interim. Boy was that a mistake. I'm just lucky I still had a disk image of the old iWork - without that I would have been hosed.


I'd rather bet on something you can implement all by yourself without needing to wade through a thousand page spec. How about plain uncompressed text and netp[bgp]m for images?
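
(To illustrate how small those formats are, here's a plain-ASCII PPM writer and reader in a few lines of Python, no libraries; the tiny all-red image is just an example.)

    width, height = 4, 2
    pixels = [(255, 0, 0)] * (width * height)        # a tiny all-red image

    with open("archive.ppm", "w") as f:
        f.write(f"P3\n{width} {height}\n255\n")      # magic number, size, max value
        for r, g, b in pixels:
            f.write(f"{r} {g} {b}\n")

    # The matching reader is just as short:
    with open("archive.ppm") as f:
        tokens = f.read().split()
    assert tokens[0] == "P3"
    w, h, maxval = map(int, tokens[1:4])
    values = list(map(int, tokens[4:]))
    print(w, h, maxval, values[:3])                  # 4 2 255 [255, 0, 0]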


There are FOSS implementations of PDF. Mozilla has one, I think.

Edit: yep.

http://mozilla.github.io/pdf.js/


Yeah, that's the one that always bitched and moaned about not being able to display the document correctly. I got tired of it and disabled the thing.

I've used other PDF viewers (Evince, xpdf, gs, mupdf), and tell you what.. I've come across PDFs they cannot display properly.

If I have to rely on others implementing things for me, and there is concrete evidence that others have trouble doing it, why would I rely on such a format?


As they say: "patches welcome".

Do you also implement your own OS from scratch, or do you rely on others?

Every FOSS OS I know of has patches coming out on near-daily basis.


"Patches welcome" is an aggressive, user-hostile, anti-social response to being told that the thing you suggested does not work. It's telling the user to fuck off because your own suggestion was flawed. clarry didn't run to HN and scream "pdf.js sucks!".


No, it isn't.

It's the whole point of FOSS. If it doesn't work for you, fix it.

This criticism is especially off-base given that the OP said he wanted something he could control. Use the source, Luke!


No, he said:

"I'd rather bet on something you can implement all by yourself without needing to wade through a thousand page spec."

pdf.js was obviously outside the bounds of what he said he wanted from the beginning. You stubbornly brought it up anyway, then gave a nasty, clichéd response when he pointed out exactly why it was the wrong answer.

"If it doesn't work for you, fix it" is ridiculous. It's saying "here's this thing that doesn't do what you want, go make it do what you want instead of using these other things that already do what you want".


I think you're reading too much into that, or have a chip on your shoulder, or both. "Patches welcome" is an invitation, a smiling, friendly, we-think-you're-good-enough-and-want-your-help, open-handed gesture that is meant to encourage cooperation and evoke the deeply human drive to help others.


nknighthb is completely right. It is user-hostile. It even makes a huge assumption: that clarry is a programmer in the first place. With "Patches welcome" you might as well be saying, "If you don't like it, spend a year learning [Language] to fix this issue." How is that anything other than user-hostile?

If my mom can't open a pdf in pdf.js, I'm not going to tell her "Well mom, patches welcome."


Given that clarry is a) posting on HN and b) apparently at least contemplating writing his/her own OS at some point, the assumption that he/she is a programmer is anything but "huge".


"My mum writes optimizing compilers."


> Do you also implement your own OS from scratch

No, not today. Maybe in the future, who knows.

But I try not to depend on too many things and people, if there's a way around it. I like to have control. That's freedom to me, and freedom gives me peace of mind.

So, no, I don't see why I should waste my precious free time improving software support for a format that I find way overcomplicated and just plain silly. Why should I?


No one is telling you what to do.

If plain text does the job for you, great. If not, and you come up with something better than PDF, also great. I'll be happy to use it.

Complaining is always a lot easier than doing.


Hmm, I've almost never had issues with pdf.js, and the handful of times it has been a problem, Evince has worked fine. It's the default PDF viewer in Firefox now, so I assume it works well enough for them to ship it that way. I think the PDFs that can't be opened correctly may be blamed on the creation tool screwing up rather than on the reader, though I guess I don't know whether that handful of PDFs was up to spec or not.


This bugs me a lot. If the spec is known, how is it possible for some random reader to be unable to display a document? Is PDF now like HTML in the late '90s, where rendering was whatever the browser did rather than what the specification said?


There's no formatting info in those. Why not just include a statically linked copy of xpdf? The x86 family will be around for a while, and if it goes away I'm sure there will be emulators.


What do you do when xpdf doesn't display the pdf correctly?

What system does that statically linked thing run on? Is that system going to remain compatible for decades? Or emulators capable of running an old version of it?

Why overcomplicate matters when you can pick something you know just works?


Because I care about formatting? Anyway, decades and decades from now I probably won't care about documents I haven't touched in all that time.

I'm just not especially concerned. It's not like I get a better lifetime out of my paper documents, since I don't bother with the whole acid-free-paper-in-a-hermetic-container deal.


I save my archival stuff in .HTML

That's worked out fairly well so far. Stuff I wrote back in the '90s is still readable. It's probably not that great if you're a designer who needs something to look just so when printed, but for near enough everything I do, it's more than sufficient.

Easy to convert too, since it's just tagged text.


This is the most sensible thing I've heard for years.


I have many Pages/Keynote/Numbers documents from 2005-2008, and I remember wondering if they'd ever become unreadable with future technology.

The date came sooner than expected.


Sadly, it seems like a business opportunity: "we'll convert formats for you when your vendor drops the ball."


Could this be packaged as an insurance product?


Unfortunately, it could be a very expensive business to get into if there are patent-encumbered formats involved. In the cases where it's most likely to be useful, it also seems least likely to happen.


>if there are patent-encumbered formats involved.

Patents would not discourage manual conversion, i.e., a human looks at the old presentation and recreates it in the new software. I'm just not sure there are any presentations worth that cost. God knows most presentations I have been subjected to are not.


Even then I'm not convinced, without talking to a lawyer, that it would be safe to recreate in a patent-encumbered format.

Regardless of how well patents in general might or might not serve their original purpose, I tend to think that patents that are essentially just locking up data formats do not encourage progress in the way that they are supposed to. I think the US was onto something when it came to copyright and typeface designs, and a similar principle ought to apply to data formats.


I think this is a non-issue for the bulk of any business that does this. Is Microsoft or Apple really going to sue a business that upgrades documents from their old formats to their newer ones? Maybe, but I'd be surprised. If anything, there'd probably be licensing.


Reverse engineering file formats is covered by fair use for this very reason.


What does fair use have to do with patents?


When I reinstalled my OS and neglected to save iWork '09, I called Apple and they overnighted me an install disk for free without even checking that I actually owned an iWork license.

I don't see why it's reasonable to expect to be given software from the CD/DVD era as a digital download. It would be nice, yes, but Adobe will not give you a digital download of CS5 (I tried.) I'd be surprised if Microsoft would give you a digital download of Office 2003.

The inability to read old documents is shitty, yes, but Apple made a solution available. If you need 5-year-old software, then it's not unreasonable that you need some now-obsolete hardware.

AFAIK it's not necessary to buy Apple-branded drives - there are cheaper alternatives. You can also "share" the CD drive of any other modern Mac on the same WiFi network - I've used the family iMac to load Creative Suite onto my MBA. I bet they'd also let you use an optical drive at the Genius Bar, even if you're out of warranty.


The core complaint is not that you need a CD drive to run iWork '09; it's that you need iWork '09 in the first place. The commentary about the CD drive is just to emphasize how unreasonable it is to require iWork '09 to open files that are only a few years old.


5 years is not a long time for information to be retrievable.

By comparison, my current version of Office easily allows saving (not just opening, but saving) in versions compatible with Office '97. That's ~17 years of backward compatibility, even for saving.


I think you can even save in Excel 5.0/95 format in current versions of Excel.


You can open WordPerfect 5.1 (1989) files in Office 2013 with no problems.

Numbers can't open ODS files at all, destroys iWork '09 documents, and pretty much sticks a fork in most .xlsx documents.


On the right license plan, MS really doesn't care and will happily give you any version you want. Even off such a plan, I've heard multiple reports of MS giving out downloads where needed. Adobe might not care much, but MS seems willing to make things right for retail software (OEM software is another issue).

Though a big part of why there are fewer complaints is that you generally don't need a copy of Office 2003 in the first place.


Microsoft does support Office file formats back to '97 though.


I learned my lesson a long time ago, back when WordStar died. I never moved to WordPerfect, and I never moved to Microsoft Word. I do anything I really care about in LaTeX or HTML these days. I can still get to my WordStar documents though, thanks to DOSbox.


Hey look, another thing Apple "simplified". I'm really not happy with this trend of dropping support and features and calling it "we simplified it".


The other thing to note is that hardware keeps getting faster and bigger (although that's slowed a little recently), while people can gradually do less with new software that consumes more resources than its predecessors. "Do less with more"?

I think we've reached the point where computers are more than powerful enough for most of the common tasks people use them for. The rest is just marketing with an aggressive "newer is better" campaign.

IMHO "forced deprecation" is almost never a good thing. Change in software (and hardware) should be an evolution, not a revolution: fix bugs and add features, don't take away what was there before. I think far more people value stability over the "latest fashion" than companies and the like would want you to think, so they can keep you consuming.


LibreOffice, as part of its goal to open anything, has rudimentary Keynote support in LO 4.2. I expect they'd heartily welcome bug reports and example documents. http://www.freedesktop.org/wiki/Software/libetonyek/ is the library that does the work.
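If you want to try that route in bulk, here is a minimal sketch of driving a headless LibreOffice from Python, assuming LO 4.2+ with libetonyek is installed and "soffice" is on your PATH; the folder names are made up, and results will vary, especially for the old package-style .key bundles:

    import subprocess
    from pathlib import Path

    SRC = Path("old_presentations")   # hypothetical folder of .key files
    OUT = Path("converted")
    OUT.mkdir(exist_ok=True)

    for key in sorted(SRC.glob("*.key")):
        # --convert-to goes through LibreOffice's import filters
        # (libetonyek for Keynote) and writes the result into --outdir.
        subprocess.run(
            ["soffice", "--headless", "--convert-to", "odp",
             "--outdir", str(OUT), str(key)],
            check=True,
        )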


It's just amazing. I found the same thing with "old" Pages documents (from 2008 as well). What am I supposed to do now? It does not make me feel like continuing to put all of my work in iWork format.

If I were to go Microsoft on the Mac, what are my choices? Office 2011? (It's 2014 now, after all.)

I know there's OpenOffice too.


For some reason having those hashtags at the bottom of the article makes me feel dumb, as though the author thought I'd need Cliff Notes for the article. I know that's not the point of hashtags. Maybe it's #fail that flipped my switch.

#Apple #iWork #fail #proprietary #OpenOffice


What if someone maintained an online "virtual museum" of old emulated machines and operating systems? This could be done as SaaS, with some value add tools for converting to modern formats thrown in.


Amusing, interesting for history's sake, and possibly useful, but licensing issues would kill it for closed source, and it's not needed for things with permissive licenses because you can already get the source for old versions and run it yourself.


Some people would pay for something already set up for them. Microsoft might like the idea and support it, as it furthers their backwards compatibility story.


The "files" are really a folder/bundle with richtext and image files inside it. Why doesn't OP just extract the actual data from inside the bundle by right click > package contents


I mean this in the nicest way, but you've rather missed the point. While your response might be technically accurate and true, the author speaks to a greater issue than a one-off hack will support.


No, I haven't, and I do mean this in the most condescending way possible: he's complaining about the proprietary format not working and lamenting losing his stuff, yet he could easily solve his problem. It's not a "hack" to open a folder; or do you reward yourself with a cookie every time you open a folder on your desktop?

FFS, this is "HackerNews"... write a bash script to do it for you. Better yet, use some "super 1337 h4x0r google-fu" and search "convert keynote 08 to keynote 09" and you'll find a bash script to do it in 5 seconds[1].

[1]: https://www.google.com/search?q=convert+keynote+08+to+keynot...


> FFS, this is "HackerNews"... write a bash script to do it for you.

I have written quite a few bash scripts, Python scripts, and, long ago, Ruby and Smalltalk scripts to take care of things that bothered me. My point is that this is software I am paying for (and I've already mentioned that 2008 should not be considered old), and I would rather spend my time writing the "super 1337 h4x0r" bash and Python scripts for things that I can't do otherwise.


You really don't get to complain about a proprietary format being retired due to age, or even just disappearing off the face of the Earth. There's an astounding amount of software written for the sciences that uses proprietary binaries that are deprecated two years later. We all realized long ago that if you want to preserve something, you export it in an "open" or standardized format. That's why every image-producing and image-editing program supports TIFF encoding.


Your comparison is unhelpful. The expectations for broad-market consumer software and obscure narrow-market scientific software are entirely different.

Apple leads users to expect an easy experience, and not to need to worry about technical details. The cost of that strategy is that Apple should expect criticism when those expectations aren't met.


How about all the non-hackers who don't know what a "file format" is, but may have important documents in decade-old formats? "Normal" people not only aren't trained to keep upgrading lest they perish; they will ask you with bewilderment, "Why am I supposed to throw away that perfectly working machine after only 3 years?"


If they're not upgrading, they're not facing the horrible, insurmountable problem OP is.


They are if they need to send a copy to someone with the newer version.


I agree. If it takes less time to google-fu and implement an answer than it does to write and publish a blog post about the problem, there is a high probability that the blog post was the real intention all along...


Upvoted for useful information.

But this makes it even less clear why Apple can't support their own old formats. Is it too tricky to be worth dedicating developer time to? Or is it a deliberate policy? This is a fundamental requirement of the software.


A couple of things here:

1) "Fresh Mavericks install." He should run Software Update and tell us he has done so, if he's going to write a blog post about whether stuff works.

2) He should file a radar (Apple's umbrella term for bug report / feature request) and share the radar number with us so we know he's at least going through the channels Apple has provided for concerns like this. http://radar.apple.com/

Or... he could just write a blog post, but I'm saying it would likely be more effective if he also did these two things, in addition to his blog post.

Wait, he shouldn't have to file a radar? True, in an ideal world, he shouldn't have to. But you know what they say about ideal worlds...


[What the hell are you using to give presentations](http://hypertexthero.com/logbook/2011/11/dont-use-powerpoint...)?


LaTeX with the Beamer package would be more futureproof.


Suggestion: Don't use Keynote. It is a royal POS.


Advice: switch to LaTeX with Beamer.

LaTeX has a fairly long lifetime. A friend found the 20-year-old source for his thesis and thought it would be fun to see whether it would still compile. Sure enough, it did, after changing just a few header lines (sometimes package names change).
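For anyone who hasn't seen Beamer, a minimal skeleton looks something like this (theme, title and content are all placeholders; compile with pdflatex):

    \documentclass{beamer}
    \usetheme{default}

    \title{An Archivable Talk}
    \author{A. Speaker}
    \date{March 2014}

    \begin{document}

    \frame{\titlepage}

    \begin{frame}{Why plain-text slides age well}
      \begin{itemize}
        \item The source is plain text, so it diffs and lives happily in version control
        \item The output is an ordinary PDF that any viewer can open
      \end{itemize}
    \end{frame}

    \end{document}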


And even if it didn't compile (which is a big "if"), you'd still have a PostScript or PDF file of the presentation. I have yet to find a non-DRM'ed PDF old enough to cause me trouble when trying to view it.


This has long been true of Apple, and ever since Apple acquired ClarisWorks I've made sure I can open my old files, or else kept an emulation layer around to do so (e.g., SheepShaver to run Mac OS 9 and AppleWorks 5, which could open all the old ClarisWorks documents).


This is curious, because not long ago, I found myself needing to look at an old AppleWorks[1] file. It opened in Pages without a hitch.

[1] https://en.wikipedia.org/wiki/Appleworks


Hi there - We are looking into this, and will get back to you soon. Thanks for your patience!


Who the hell is "We"? You're green.


Assuming you are slideshare.net. This was my first question: isn't there an online service to do the conversion? Surely, for old file formats, someone would happily pay a few dollars if it's an important doc. I would have tried Google Docs first, too, but a quick search turns up nothing.


Surprised nobody's mentioned SliTeX either here or on G+.

Source is plain ASCII. Output is PDF. Interpreter is TeX. All three are very, very robust for backward compatibility.

That Microsoft is being promoted for backwards (and forwards) compatibility strikes me as ... comical.


I ran into this issue with a Visual FoxPro database setup used by FERC.

In 2013, documents are STILL being produced in a proprietary format that nothing open source can read (directly) a decade later.

This is why I love Python and Perl for data munging.
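To be fair, standalone .dbf tables can often be pried open with third-party libraries, even if the full database container is another story. A minimal sketch, assuming the dbfread package and a made-up file name, that dumps one table to CSV:

    import csv
    from dbfread import DBF

    table = DBF("dockets.dbf", encoding="cp1252")   # hypothetical table

    with open("dockets.csv", "w", newline="") as out:
        writer = csv.DictWriter(out, fieldnames=table.field_names)
        writer.writeheader()
        for record in table:        # each record behaves like a dict
            writer.writerow(record)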


Should have used {Open,Libre}Office.


Apache OpenOffice doesn't open Keynote docs - but LibreOffice tries to (libetonyek).


the guy is clearly living in the past. 2008 ain't sexy anymore...


His biggest issue is that he didn't have a CD reader? A basic USB DVD reader is cheap and sometimes quite handy. To say you lost your documents implies a lack of effort: if they were THAT important, you would get at them.

As far as proprietary formats in general go, MS Office has done a pretty good job, other than the Office 2003 XML stuff, which had to be broken when they lost the patent ruling to i4i. Anyone looking at document longevity now can easily use the XML formats.


I would be very surprised if Keynote '09 would even run on Mavericks. When I upgraded to Lion a couple of years back, I couldn't use any of the iWork suite that came with the computer. Fortunately I had only made a couple of throwaway presentations and documents at that point, but it taught me a lesson about Apple's approach to planned obsolescence.


Keynote '09 works just fine on Mavericks.


After installing an update, that is. Before you install the update, it crashes on Mavericks.


And you forgot Poland.


iWork '09 works on Mavericks.


Let's admit that is a terrible user experience: buy a piece of obsolete hardware you will use only once, just because Apple didn't make an installer available for an old version - or, better yet, a conversion tool, either standalone or built into the "Open..." action of the new version of the application.


You are right, I can get all the necessary hardware for that. That is not the problem; the problem is that I can't have it when I need it. Imagine you go to a meetup and would like to show some older presentations, and you are not aware of the issue. Well, bad luck - nothing to show, and no quick fix by downloading a conversion tool or an older version of the software.

I'm not saying it is a permanent problem; there is surely a solution for it. It is just unnecessarily cumbersome and, above all, unexpected: I can imagine dropping support for files from 1995, but not for files from 2008.


I think that's the key point most people are forgetting here. We're talking about files from maybe 5 years ago, or even more recent. When you go back 10+ years, it's not unreasonable to expect some drop in support. But two versions back is way, way too quick!


When I made this comment, the headline was worded differently.



