Hacker News
ViperCard – An open source re-creation and re-imagination of HyperCard (vipercard.net)
444 points by moltensyntax on March 25, 2018 | 137 comments

HyperCard and QBasic both fell into that forgotten realm of democratising programming: making it so that even your aunt could write a simple program herself. A lot of people ended up learning programming because of these simple languages/tools, and I used to love playing with their projects I'd download from Geocities and the like.

It's a bit of a shame the industry gave up on the idea and abandoned these syntactically sugared programming languages. HyperTalk reads just like English; it very much seemed like the next-generation occupant of the niche BASIC aimed to fill. Because computers of that era typically booted right into a development environment (e.g. a simple BASIC interpreter), there was even brief discussion of HyperCard potentially replacing the Finder as the default Mac OS environment.

People have written a lot about programming languages that try to use natural-language syntax, but one idea that I remember is that both HyperTalk and AppleScript are a lot easier to read than to write, because you can relatively easily use your knowledge of English to understand what the code is doing, but you can't easily use your knowledge of English to figure out how to phrase instructions in a way that the interpreter will understand. And your own praise of HyperTalk mentions its being easy to read, not easy to write. :-)

One way to experiment with this distinction might be to look at the Wikipedia page for HyperTalk, read through the examples, and see if you follow them. I'm sure you will.

Now, close the page and try to write a valid loop, and a valid user interaction, and a valid test for the existence of a file. I don't think you're likely to succeed unless you've actually programmed in HyperTalk recently.
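From memory, those three constructions looked something like the sketch below in HyperTalk. Whether the interpreter would accept these exact phrasings is precisely the hard part being described, so treat them as unverified:

```
-- a loop
repeat with i = 1 to 10
  put i into card field "counter"
end repeat

-- a user interaction
ask "What is your name?"
put it into card field "greeting"

-- a test for the existence of a file (HyperTalk 2.x era, from memory)
if there is a file "My Disk:Notes" then
  answer "Found it."
end if
```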

But I could readily imagine that many people can learn HyperTalk more quickly or comfortably than a language without the natural language elements. Maybe part of that is the low psychological barrier to using a system that looks like it makes sense semantically, compared to learning special meanings for lots of symbols.

Inform 7 ( http://inform7.com/ ) is a modern programming language with "English-like" syntax. As someone who's tried to work from its manual (which offers only "shallow end" and "learn by example" treatments) when actually wanting to get something done, I can report that one particular English-like construction will compile while a very similar English-like construction will be rejected, and it's awfully hard, except by repetition, to learn which one will work.

I find well-thought-out formalized grammars easier to follow in either direction. In particular, I find that Python strikes a good balance between reading almost like English for the most part and being compact and unambiguous.
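For comparison, here is a sketch of the same three tasks from the earlier exercise (a loop, a user interaction, a file-existence test) in Python. The file name is made up, and the interactive prompt is replaced with a canned value so the snippet runs on its own:

```python
# The same three tasks in Python: a loop, a user interaction,
# and a file-existence test. Close to English, but the phrasing
# is fixed by a formal grammar rather than by intuition.
from pathlib import Path

# A loop.
for i in range(1, 11):
    print(i)

# A user interaction would be `name = input("What is your name? ")`;
# a canned value is used here so the sketch runs non-interactively.
name = "Ada"
print(f"Hello, {name}")

# A test for the existence of a (hypothetical) file.
if Path("notes.txt").exists():
    print("found notes.txt")
```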

Like many commenters here, I loved HyperCard and used it quite a bit from middle school through high school (and even my college years). For me, the draw was never the syntactic-sugar-laden language but the entire environment: it was really easy to create an interactive, custom GUI to do more or less what one wanted. And since every Mac shipped with HyperCard, a stack could run anywhere (unless it depended on specific extensions).

Considering code is read more frequently than it is written [0] [1], it seems like a trade-off that could be helpful... I wonder if adding autocomplete to a language like this would help a lot. There was nothing like that for HyperCard.

[0] https://blogs.msdn.microsoft.com/oldnewthing/20070406-00/?p=... [1] https://blog.codinghorror.com/when-understanding-means-rewri...

The way "normal" people tend to enter into programming is by starting with some example code and modifying it to their purpose. In this sort of scenario readability is very much a benefit.

As opposed to software developers, who take some existing application code and refactor it to their purpose.

I would expect a software developer to be able to write code from scratch (when necessary), but then again I'm old fashioned like that.

not when you're in a language you've never used before...

> forgotten realm of democratising programming

This phrase reminded me of a thought I once had: that the removal of the compiler as a first-class application in user-centric OS distributions was an imperial gesture to enforce class hierarchies among end-users. You were either a user, incapable of making your computer do new things, or a developer, who must be convinced to make computers do new things using rules and policies (and tools) forced upon you by the Powers That Be™.

I think one thing we should be demanding, as computer power users/developers, is the return of software development tools to the forefront of the computing experience. It is unacceptable that computers are being shipped today without the means of making them productive, other than participation in a walled garden.

I know it's a tall order, but I'd love to see an OS vendor make a serious point of making their users better developers, not worse.

No computers actually come without the means of making them productive; they all have a JS engine. It's how I learned programming, almost twenty years ago - IE5 on a public computer.

That said, I do agree that programming should be made more accessible. Bret Victor and others seem to be exploring new ways of doing that, besides making computing more physical.



There are many who think that JS is not an acceptable means of developing applications for computers. It may be effective, but not acceptable.

(Disclaimer: I'm one of them, so this argument doesn't really appeal to me personally. I'd rather there were tools that don't require me to have a lobotomy to use them.)

Those people are universally wrong

Rubbish. Javascript is far from acceptable as a first language to learn and use.

Nah. Any language is acceptable as a first language. I started writing BASIC. Gatekeeping is for losers.

If there were a standard IDE for building desktop apps and games with JS, HTML, and CSS that shipped with the major OSs, I bet it would get used a ton. Click an icon, start a new template project. Any more steps required to get started and it will be ignored.

I suspect that there were a number of things going on:

- The bundled development tools were usually still there, but they took a form that those of us who grew up with computers in the 1980s were less likely to acknowledge. Consider how web development took off soon after more traditional languages were removed.

- People are more interested in programming when new technologies appear. There are more itches to scratch, opportunities available, and the barrier to entry is lower.

- Programming simply became more complex. It used to take one line of code to do something. Between the OS and languages requiring more (initialization, boilerplate code, etc.), programming became less appealing. While tools like HyperCard addressed some of this, the distinction between "real" programs and these environments was quite clear.

All of this is driven more by the end user than by industry. And if end users are less keen on programming, why should businesses make the investment in creating tools for them?

> - Programming simply became more complex. It used to take one line of code to do something. Between the OS and languages requiring more (initialization, boilerplate code, etc.)

It's important to acknowledge that we have done this to ourselves. The amount of job-justifying, unnecessary complexity found in today's programming environments makes me wonder how the field hasn't yet toppled over on itself.

> And if end users are less keen on programming, why should businesses make the investment in creating tools for them?

Alas, I feel that if we don't make it easy for users to become developers, they stay users.

It's not the other way around - clearly there is a market for developers/users. It's just that I think the dividing line is completely arbitrary, and enforced by marketing decisions - not technical or ethical ones.

The culture behind this went away with the advent of mobile. Now that tech is mainstream the majority of users see their computing devices as appliances, not tools.

I personally think this tools vs. appliances distinction explains a lot about tech culture nowadays, and I would love it if we nerds could collectively exile the computing "mainstream" to mobile so we power users can have our tools (computers) back.

Microsoft, Google, and Apple et al. are muddying the waters by trying to shove mobile's square pegs into desktop's round holes.

I'd like to see development democratised as well, but I think the reason for the change is that not expecting users to be developers made software easier.

Not just the failure of desktop Linux, but also Windows, which still asks regular users if they want to 'debug' a crashing app. To most people, 'debug' means 'fix' and the button just doesn't work.

It is going to require a shift in perspective to make headway on this issue, I feel. I don't really see it happening except perhaps with some of the more cocktail Linux distros... but honestly, I think a lot of it has to do with the quality of the tooling. Users don't have time for error messages or debug buttons. Developers barely do, either.

I totally agree - QBasic was my first language (after DOS batch files!). The fact that it was an interpreter meant you could read all the code for any program you found. Which, IIRC, was via floppy disks from my neighbor and a friend from school.

The web has some of that, but the technology is just so much more complex. The difference between reading the .bas file for SNAKE vs. using the web inspector to understand how gmail works is astronomical.

To be fair, the difference between Snake and GMail is astronomical in itself.

> The fact that it was an interpreter meant you could read all the code for any program you found.

This wasn't a given back then.

GW-BASIC source needed to be explicitly saved in ASCII mode.

A lot of BASICs tokenized each line as you entered it. The Spectrum made this explicit with its one-keypress-per-keyword approach.

HyperCard is why I'm a developer today, and it's precisely because it was such an egalitarian tool. I was a (human) language geek as a child, but I loved using computers. I downloaded a bunch of neat stacks other people had made, and I was able to open them up and see how they ticked — eventually I realised that I could make my own stacks, and make the computer do things for me, too!

I do worry that HyperTalk ruined me in the same way that Dijkstra asserted BASIC ruined programmers of his era, but I have a good job and seem to write good software, so I don't worry about it too hard.

I really worry for kids these days: JavaScript + HTML is nowhere near as friendly an environment as HyperCard was. What's a clever kid going to use as a programming environment now?

Isn't https://scratch.mit.edu/ a modern incarnation of this style?

Scratch is nothing like HyperCard.

It's pretty much in the same ballpark.

Having used both, Scratch is nothing like Hypercard at all. Hypercard was very simple to start with, but you could produce properly powerful programmes with it if you wanted.

A closer analogy would be to the fully featured Logos you could get in the late 80s which looked like simple drawing languages on the surface, but were actually pretty full-featured LISP implementations.

Agreed, totally different things. Hypercard was for sharing hyperlinked information. Scratch is for teaching kindergartners how to program in an object-oriented fashion.

The comment I was replying to was grouping Hypercard and QBASIC together under the umbrella of "..democratising programming... reads like English..." and it was specifically this that I was responding to.

QBASIC isn't for sharing hyperlinked information either!

Actually, Scratch is being used for CS 101 classes in colleges as well.

> even your aunt

I think you may be making a couple of unsavoury assumptions there.

> A lot of people ended up learning programming because of these simple languages/tools, and I used to love playing with their projects I'd download from Geocities and the like.

But I do agree wholeheartedly with your point. I'd love to see more people embrace hackability over shininess, and become more than just consumers again.

I'd be interested to know what circumstances made the concept flourish. Massively popular yet difficult-to-program devices sitting in every home?

If I could go back in time, I would have stuck with QBasic for at least another five years before moving to C. I wasn't close to ready for the briefly exciting dive into "real" coding, which led to abstract CS concepts, the thick books with exciting illustrations on the covers, and the CS classes which were so boring. Meanwhile, I believe I could have actually been shipping software had I stuck with QB. Hard to admit, but true.

I was lucky to have started on 8-bit machines, having used BASIC and Pascal compilers before ever touching C.

There wasn't a single new concept regarding low-level hardware programming that C taught me; on the contrary, I got to learn how not to do it.

And I wasn't infected by the "micro-optimize each line of code as it gets written" culture, rather I was shipping software that was fast enough to keep its users happy.

In '88 (I was 19) I was a 'professional Mac developer' and wrote a stack in HyperCard to convert huge lists of points to polygons, and split them with other polygons.

The project was to split the parcels of land from Lille to Paris with the track of the TGV (high speed train) to calculate the expropriation the state was doing to the poor guys who'd end up getting their huge field cut in two, and would have to drive 20 miles to go from one side to the other :-)

Seriously, HyperCard was pretty cool for throwing something together quickly, with a UI, and more importantly it gave the client the impression they could go and tinker with it afterward.

It was also excellent for mocking up UIs and 'processes' and other bits of Mac apps before committing to building them, so even internally at Apple, HyperCard (and SuperCard) were used a lot for mockups.

There was a huge ecosystem around hypercard, and it lasted for a very long time.

Oh, and Atkinson is still my hero!

I used HyperCard a lot for prototyping UIs back then. Of course the UIs were not in color, but it was a great tool. SuperCard, which came from Silicon Beach, added more functionality to basic HyperCard.

...and Myst.

Wait, was Myst in HyperCard? I remember watching videos where they talked about how everything was only in 256 colors with adaptive palettes, plus they had Windows, 3DO and PlayStation ports.

It definitely was written in HyperCard, using a couple of custom 'XCMD' extensions for the color images and QuickTime animations, which HyperCard itself did not support at the time.

One of my early contract jobs involved reverse-engineering the entire game and reconstructing it as a screen-saver, with a little AI that would generate plausible game-play activity while you watched. The startup that hired me got sued out of existence approximately ten seconds after shipping the product, despite our scrupulous attention to copyright (everything we used was loaded off the CD at runtime), and that was a valuable lesson about the true nature of the legal system. Even still, the experience of immersing myself in that game world deeply enough to recreate its structure remains a fond memory.

The Mac version was in Hypercard, all of the other versions were different engines written from scratch for Myst.

They actually had to use the same color palette for each age to ensure there wasn't any strangeness as they flipped between cards to move around in the game.

I thought it used Macromedia (Shockwave or precursor?)

edit: I guess not, neat!

I know the original You Don't Know Jack was also prototyped in Hypercard

I used to make Hypercard stacks back in 7th grade when I was learning from Quantum Link (Q-Link) hackers like Mobeius from the Pan-American Information Network (#hellonearth).

And AOL hackers and social engineers like HappyHardcore, creator of AOL4Free and the Master Blaster; the famous Da Chronic, creator of AOHell; and KT (Shameer), who first taught me social engineering in 1995.


Koceilah Rekouche, aka Chron (photo from the 1990s).


I love hacker news right now. Been a long time since I felt "home".

this really takes me back.

If I remember correctly, the only reason Ward Cunningham invented the wiki is that HyperCard was discontinued. He was apparently writing CRC cards and storing them in HyperCard (which seems like a completely reasonable thing to do...). He wanted something that was as convenient for doing that kind of thing.

In his own words: "Ward Cunningham, Inventor of the Wiki" (Wikimedia Foundation), https://youtube.com/watch?v=XqxwwuUdsp4 at about 3:00 and on for a minute. (There was a release of HyperCard in '98 - four years after he was working on the wiki.)

Thanks for that! That's a great video!

That's interesting. I find the wiki concept truly transformative. One of the first things I've done at almost every job I've had is set up an internal MediaWiki wiki. It's always been worth the effort, even if I was the only one who bothered to contribute to it for the most part.

Playing with this, I was trying to figure out a practical application for it. Is it fair to say it has largely been obsoleted by things like wikis?

Back in the olden days, I was actually paid to work on a couple of HyperCard applications. One that sticks out in my mind was training material for using the university's astronomical observatory. My job was to write a plugin to show an actual star field animation so that the students could practice finding objects in the classroom before they got to try on the real equipment. The HyperCard stack had a whole interactive simulation where you could walk into different rooms, see the control panels, click on buttons and activate knobs, etc.

I'd have a hard time thinking of anything current that's general purpose and would be able to do something of that scope. Probably Flash would be the thing you would use, but without Flash I'm not sure. The nice thing about HyperCard was that the vast majority of the work could be done by people who weren't programmers. Even for my bit, I was still a student and it wasn't really that difficult to write the plugin (stupid Mac Pascal compiler bugs aside... :-P).

I taught myself AppleScript when I was 13, to parse HTML and make 1000-character iPod Notes. I'd used LogoWriter in school since I was 7, but never got a copy of HyperCard. I heard many good things about HyperCard, but never thought it would be worth starting an emulator just to use it.

I just opened ViperCard, and the GUI is totally intuitive. The scripts used by buttons share a lot of syntax with AppleScript. This is the GUI for AppleScript I always wanted. Now I know why people love it. If I'd been able to use HyperCard earlier, maybe I'd have become a front-end developer instead of a back-end engineer.

AppleScript, for those who don't know, was heavily inspired by HyperCard. And apparently was one of the (many) core language inspirations behind Javascript too.

> If I'd been able to use HyperCard earlier, maybe I'd have become a front-end developer instead of a back-end engineer.

Lucky you.

I wrote a program called Virtual Journal 1.0 that was simply an infinitely paged, password-protected text journal app.

I put it into AOL's ftp area as shareware and got checks from around the world for $2. I was about 8 years old!

EDITED: I'm offering a $100 reward if anyone can find a copy of this software in an archive or old shareware disc somewhere! I'd love to find it again. Also, bonus points for finding my OneClick Palette called "AOL ROVER".

HyperCard was about empowering users, the opposite of what many tech companies want today, including Apple. It was a "predecessor" of the web (hypertext, the HyperTalk scripting language), of Flash (animations, graphics), and of user-programmable databases (ok, FileMaker is older). It let users dive smoothly into programming, as it offered not only a simple but extremely powerful concept of document-applications called "stacks" and drag-and-drop setup of your own graphical user interfaces, but behind all this a very powerful, extensible, object-oriented programming language with kind of "real" objects, as every item of your stack could have its own methods. Thankfully we have web technology today, else computing would be missing its most democratic tool. But still, HyperCard was the only way to simply have your own data on your own computer in your own application, without relying on a server, a cloud, or even a paid service.

This is cool, but I wonder if anyone has made a modern, browser-based variation of HyperCard? Something with a model akin to HyperCard's but exporting stacks in a form that can be easily put onto a page.

There's Amber Smalltalk, but that's more powerful and the simplicity of HyperCard is part of its charm.

At the risk of courting controversy, arguably that's what Flash was: ActionScript, for example, being based on HyperTalk. Certainly it included many of the concepts originally found in HyperCard. Of course, that too has been superseded, and ironically - after many years of holding Flash in some degree of contempt - I have to admit to being rather saddened by the fact.

I knew about ActionScript's relationship with ECMAScript, but I hadn't been aware of the HyperTalk connection. Very interesting!

The Flash authoring tool was always, imho, pretty nice. The runtime had lots of issues, though.

It's funny: the security aspects never really bothered me back in the day (although they should have done).

No, the reason I looked down on Flash was that all I ever saw were CPU-sucking ads, fairly boring non-interactive presentations/videos, those dreadful websites that were all Flash and took an age to load, and then the crappiest of crapware games with no love or attention to detail put into them. As a result, I just came away with the idea that you couldn't do anything good in Flash.

I was wrong. Incredibly and spectacularly wrong. Ironically so, because I'd been a big fan of AMOS back in the day on the Amiga (though I'd have been better off with Blitz, probably) - another environment designed to make it super-easy to take advantage of the multimedia, animation and gaming capabilities of the host platform.

Don't get me wrong: you can do great stuff with HTML5, JavaScript, CSS, WebGL, and so on, but with Flash you could do all that stuff 15 years ago, and it made it all just so much easier. So with the passing of Flash I can't help but think we've lost something, and it makes me a little sad to think about it.

Yeah, I have to agree with you there that the web platform has not _quite_ caught up with the total ease of Flash. I think it's starting to get there within certain domains, though, but it has been a long time in coming.

Not modern, but the original in a browser via emulation: https://blog.archive.org/2017/08/11/hypercard-on-the-archive...

I'm still kind of awestruck by this. Hosting an OS inside a browser. It's great as a time capsule to perfectly see the past. But because it's an emulator, it would be difficult to add a more modern feature, like ViperCard's share-url-to-stack and save-stack-as-json.

How does it work? The site http://hypercardonline.tk only accepts new uploads, and a card archive is an ISO file (https://archive.org/details/hypercard_twin-peaks) that the emulator can't seem to run.

Not browser-based, but a "modern implementation": https://livecode.com/ - the website is a little "marketingy", but you can download an open-source version of their "HyperCard successor".

Neat! Layout is definitely one of those interesting questions around such a thing. HyperCard got away with simple absolute positioning, which isn't so awesome in our multi-device world but is unbelievably easy for users to understand.

Good luck!

This reminds me of my favorite story about old-school Macs. For a while after its release, all Macs shipped with HyperCard. Then, eventually, Apple spun off Claris, which got HyperCard as part of the deal, and Apple decided to ship a nerfed version of HyperCard that had most of the stack-authoring tools disabled.

However, you could open up a command console in this neutered version of HyperCard, type in 'magic', and you'd be able to unlock the full set of authoring tools.

Magic indeed.

For those that like HyperCard, I really recommend looking at LiveCode (www.livecode.com), it is a modern version of it that is able to generate standalone applications for MacOS X, Windows, Linux, Android, iOS and the Web.

They have a FOSS version behind that confusing website; the livecode.org site has access to the GPL version, which you can download without acquiring a membership. It is just a bit hidden.

I wonder how much Material Design was influenced by HyperCard. I know the web was partially inspired by it, and it looks like Material Design sort of extends that design language. Or maybe it was all integrated into the design language of the web.

What I find amazing is that early B&W GUIs were more usable than modern Material Design. Most takes on Material that I've seen raise poor use of screen real estate, poor conveyance, and distracting, hard-to-read color choices to an art form.

In case some UX / HCI experts read this thread, are there any credible studies of information density in user interfaces plotted over time?

You could simply plot recommended sizes for clickable items over time.

Windows 9x used 125% of the 10pt text height as the clickable item height; that's around 0.44cm.

Android 4.0 recommends a clickable item height of 48dp, "roughly one centimeter".

Android 5.0 uses 56dp (1.17cm), and on Phablets and Tablets even 64dp (1.33cm).

For items in lists, a second effect was seen: as it became recommended to show more info earlier, the number of list items visible at the same time was reduced. A multi-line list item has a minimum height of 72dp (1.5cm); the average is more like 96dp (2cm).

A similar effect can be seen on Windows, in UWP, the average item height also went to almost exactly 1cm in lists or menus.

This all fits well, as the smallest reliably clickable element in a UI is ~1cm in its smallest dimension.
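The centimetre figures quoted above scale linearly from the "roughly one centimeter" for 48dp; Android's nominal definition (1dp = 1/160 inch at the baseline density) gives somewhat smaller numbers. A quick sketch comparing the two conversions:

```python
# Two ways to turn the Android dp values quoted above into physical
# size: the nominal definition (1dp = 1/160 inch) and the rule of
# thumb used above (48dp is roughly one centimeter).
MM_PER_INCH = 25.4

def dp_to_mm_nominal(dp: float) -> float:
    """1dp is defined as 1/160 inch at the baseline (mdpi) density."""
    return dp * MM_PER_INCH / 160

def dp_to_mm_rule_of_thumb(dp: float) -> float:
    """Scaled linearly from "48dp is roughly one centimeter"."""
    return dp * 10.0 / 48

for dp in (48, 56, 64, 72):
    print(f"{dp}dp: nominal {dp_to_mm_nominal(dp):.2f} mm, "
          f"rule of thumb {dp_to_mm_rule_of_thumb(dp):.2f} mm")
```

The 1.17cm and 1.5cm figures above match the rule-of-thumb scaling; on a real device the physical size depends on which density bucket the screen falls into.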

This is also a common issue with HN, where voting buttons are 0.3 by 0.3cm, even on mobile, and as a result I misclick ~2/3rds of the time, but no one seems willing to fix it.

Thanks for taking the time to write such a detailed comment!

It’s honestly not much effort – I was compiling a new version of my app when I wrote it, and the last change I had made was the size of a button, because people complained about it. So I happened to have all the documents open by luck :)

I'm not aware of any systematic analysis. However, my expectations are that we would see relatively constant information density until about a decade ago. The simultaneous increase in available pixels and increase in prevalence of touchscreens has made it possible and in some cases desirable to drastically reduce information density without rendering interfaces immediately unusable. The problem is that touchscreen-oriented design is infecting non-touchscreen software.

I've been very interested lately in the idea of monochromatic interfaces.

Aside from being easier on the eyes (especially an amber or red interface), they force UI designers to make careful choices, and potentially result in a cleaner, more usable interface.

It's not a project for me yet, just a series of notes and concepts, but I'd like to turn it into a proof of concept at the very least.

Having made extensive use of amber and green and b&w monitors, I much prefer full color.

Though I dislike the recent low-contrast efforts. Keep high contrast, reduce brightness as desired for your environment.

I totally agree with this. I've spent hours and hours selecting colour palettes that give high contrast (I work on 16 colour terminals because more than 16 colours makes it really difficult to guarantee good contrast in all situations).

Having said that, it's hard to beat black on white for contrast ;-) However, I think one of the reasons people like the monochromatic themes (especially white on black) is the same -- they want high contrast with low brightness. By having the background black, they guarantee a low brightness. And when white is too bright, they go for amber.

I find that by doing the reverse (black on white) and setting my brightness very low (I often go as low as 7%, but more normally 11% backlight), I have a very comfortable display with a lot more options. It's really funny, though, because when I have to run software that doesn't follow my colour themes, the computer almost looks like it's turned off. If I'm playing a game, I have to crank the brightness up to 70% or 80% in the same lighting conditions.

I just thought you had a fondness for EGA.

The problem with the low-contrast complaint is that content otherwise becomes unusable on calibrated screens.

On a modern screen, a value of 100% is 1000 nits; 0% is far below 1 nit.

On an average shitty monitor, the range is ten times more limited.

Your suggestion of making my monitor emulate your shitty monitor in hardware settings would also make it impossible to use it for use cases that do need this high contrast.

Reading text, on the other hand, gets very painful with that much contrast.

Real text, on a real newspaper, in normal room light, is #454545 text on a #f0f0f0 background, in the sRGB color space. Not #000 on #fff in sRGB, and definitely not 0 on max at ten times more contrast.

The real issue is that we have no color and contrast management at all for websites. I can't say that a color is meant to mean a certain brightness in nits, so it renders as #777 on my screen, and #000 on yours.

Gosh, I didn't realize my new MBP had such a bad screen.

Low-contrast UIs can work well in a typical office environment on bright screens.

Take your ideally calibrated monitor, and use it as a second screen while watching a movie in your darkened home theater. Now take it and use it outside on a sunny day.

I prefer to adjust the brightness. YMMV.

Your description of sRGB is incorrect. sRGB was specified for CRT screens in 1996, used in an ideal viewing environment that is very dimly lit ("The current proposal assumes an encoding ambient luminance level of 64 lux which is more representative of a dim room in viewing computer generated imagery... While we believe that the typical office or home viewing environment actually has an ambient luminance level around 200 lux, we found it impractical to attempt to account for the resulting large levels of flare that resulted" https://www.w3.org/Graphics/Color/sRGB.html)

That doesn't match my viewing environments, which include the range above.

MBP? That's surprising to hear then, they've got great (not $1600 Dell HDR monitor perfect, but great) monitors.

One of the core issues is constantly fiddling with the brightness, especially with multiple monitors.

That's where ideally you'd want to have the brightness of the panel fixed, and change it with a lookup map, e.g. what f.lux does for color mapping at night.

That's where you'd get ideal results, would be able to enjoy media in high quality without having trouble with too high contrast or too low contrast websites, and you could choose separate profiles for text and media.

Not everything supports this yet, but with the move to HDR10 and DolbyVision, support is getting better, because now people do have content in the same window that's mastered with completely different contrast ratios (the min for HDR10 is "moonless night", the max is "as bright as sunlight on a cloudless day", while for text the ideal min/max is newspaper text)

You'd have to adjust both brightness and white balance based on the environment (just like a camera. How well do professionals trust the camera's automatic choices?) and probably the environment behind the screen (the rest of the user's field of view).

And then you'd probably need to throw in an adjustment for the individual user's light sensitivity needs and preferences, and possibly the user's current eye dilation (did I just go from bright light into a dark room? Or did I just wake up in the dark room?)

You can design for an ideal environment, but realize that users will not always (ever?) be in that ideal.

Sure, but the goal is that the user sets their brightness for the environment, and not ever for individual software.

The per-application brightness should be done in software, and ideally take into account HDR and colorspace capability of the software.

Otherwise, like the user above had suggested, you have to switch brightness every time you switch between different programs.

> the goal is that the user sets their brightness for the environment, and not ever for individual software.

...and we've come full-circle:

"Keep high contrast, reduce brightness as desired for your environment."

Correct, but software then needs to be explicitly mapped with a brightness range.

Otherwise you can't have on the same screen a game simulating a dark night with low contrast, and a guide for that game which uses the full contrast spectrum.

Your suggestions all break if I want to be able to have at the same time extremely low contrast content and text on the same screen, next to another, and want both to look fine.

> extremely low contrast content and text on the same screen, next to another, and want both to look fine.

With that scenario, you're dealing with physiological limitations, because if you have a bright region next to a dark night region, your eyes cannot perceive detail in the dark region. You'll also be vulnerable to the optical illusion effects of perception (e.g., see http://www.cns.nyu.edu/~david/courses/perception/lecturenote... and other examples in http://www.cns.nyu.edu/~david/courses/perception/lecturenote...), so "look fine" is going to be rather hard to define, much less guarantee.

But this discussion was really about interfaces, potential interest in monochromatic interfaces, and the issues of low-contrast interfaces.

This article from the Nielsen/Norman group clearly describes the usability problems with the currently trendy low-contrast interfaces. https://www.nngroup.com/articles/low-contrast/

> With that scenario, you're dealing with physiological limitations, because if you have a bright region next to a dark night region, your eyes cannot perceive detail in the dark region.

Correct, that's why you need a software solution that detects this issue and dynamically adapts.

This isn't complicated either, every modern video game has the issue of UI, text, and HDR content in one frame, and has well-working tonemap curves and dynamic exposure adaption algorithms.

Microsoft is also integrating solutions for this into Windows.

Any OS that plans to ever mix HDR and SDR content on one screen needs this anyway, and if you do that, you can also easily add minor changes to allow text content to be annotated so its contrast can also be dynamically adjusted.
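The game-style pipeline mentioned above can be sketched roughly like this (the function names are made up for illustration; real engines use fancier curves than plain Reinhard, but the shape of the idea is the same):

```javascript
// Hypothetical per-frame sketch: estimate scene brightness, ease exposure
// toward it, then tone-map HDR values into the displayable [0, 1] range.

function adaptExposure(currentExposure, avgSceneLuminance, speed = 0.05) {
  // Ease toward the exposure that would bring the scene average to
  // mid-gray (0.18); `speed` controls how fast the "eye" adapts.
  const target = 0.18 / Math.max(avgSceneLuminance, 1e-4);
  return currentExposure + (target - currentExposure) * speed;
}

function reinhardTonemap(hdrValue, exposure) {
  const v = hdrValue * exposure;
  return v / (1 + v); // compresses [0, infinity) into [0, 1)
}
```

UI and text would simply bypass the tonemap and be composited at a fixed, annotated brightness, which is exactly the kind of annotation mentioned above.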

Something I found interesting related to monochromatic interfaces was an article posted several weeks back to HN about people using grayscale to help focus. Most systems support a grayscale view for accessibility reasons (on recent versions of Windows 10 it's bound by default to the Win+Ctrl+C keyboard shortcut, which is also why it's going around as one of those annoy-a-coworker pranks). I've started toying with grayscale periods on my work machine as a focus tool.

I have done Material Design somewhat badly myself. By Material Design I would mean a design directly from Google, not someone's reinterpretation of it. You can use it to justify your sloppiness, but you could do the same thing in HyperCard.

I'd say Google is widely recognized for "taking poor use of screen real-estate to an art form" with their designs; though Material design is probably just a symptom here, the cause being the ongoing desire to dumb down applications and remove functionality.

Actually a typical rant on Android development forums is how Google doesn't follow their own Material designs.

Maybe. Still, I go to material.io and I don't see anything there that would make those things better.

I wish there was a UI philosophy that would take Tufte's ideas as a set of core principles: maximizing data-ink ratio, minimizing junk, increasing data density.

> I wish there was a UI philosophy that would take Tufte's ideas as a set of core principles

Wouldn't that just be Tufte's ideas? What's missing for it to be a 'UI philosophy'?

A nice .io domain and endorsement by a hot tech company, I guess.

Once we get usable and affordable e-ink monitors designers will have to revisit this territory seriously.

Color abuse isn't really the most serious problem; the loss of information density is. E-ink isn't going to help here. Some alternative business models that would not incentivize companies into making their application glorified interactive movie posters would be welcome.

I had an equivalent multimedia authoring system on the Amiga, called Hyperbook. http://obligement.free.fr/articles/hyperbook_beta.php

It was very much influenced by HyperCard, obviously. They didn't try to hide it; quite the contrary, so much so that they hacked the UI controls to make them more "Macintosh-ish", I guess.

Except it came in colors. The multimedia aspect of the Amiga, remember?

HyperCard on the Apple IIgs fully supported colors. :-)

I'll show my age here, but back when I took AI, I wrote a (toy, homework-size) expert system as a Hypercard stack. There is actually a lot to be said for Hypercard as a framework for quick UIs.

why is this flagged?

Wow! This takes me back! HyperTalk was probably my second language after Applesoft BASIC and boy did it seem powerful at the time.

Hypercard overall was just great at getting out of the way and letting you get things done. Looking back, if Apple had realised how powerful networking was back then, Hypercard may have morphed into the browser one day, but they missed that boat.

I remember reading somewhere a while back that Hypercard plus some XCMDs were used to control the lighting system in Petronas Towers in Kuala Lumpur... actually here we go: https://www.wired.com/2002/08/hypercard-forgotten-but-not-go...

Ironically, it does not work in Safari. :(

It looks like Safari implements regular expressions slightly differently, which leads the parser to behave differently. I hope to have it addressed by the end of the week.

Irregular expressions

This should now be resolved, Safari should work fine.

I've always thought that OneNote would be better as some kind of hypercardesque application. Everything I wish my notebook could do it kinda does, but with very poor linking to data or calendars.

As for the problem with languages like HyperTalk and AppleScript being read-only, the solution isn't to change the syntax, it's -- wait for it -- AI! If Google can figure out what you want from a spoken free-form inquiry, then a smart interpreter should be able to figure out what you're trying to say in a line of code, even if you don't get the syntax quite right. And if your code is ambiguous, it can ask you to confirm your intentions. Why haven't we seen more of this?

This is my first experience with HyperCard and I love it!

www.livecode.com is a cross platform updated Hypercard descendant. Their free Community edition is GPL

I liked Runtime Revolution. I wish the broader coding community would have taken environments like this more seriously; is the text editor sacred? I left the platform before they added more mobile+online support, so I'll have to see what they have added recently.

is the text editor sacred?

No, it's the worst form of development, except for all those other forms that have been tried from time to time.

Livecode is great for creating tools on most platforms very fast. Literally a few hours to create quite functional frontends to commandline applications, for instance. The batteries-included way of working is a breath of fresh air compared to what other environments have, but the reach of the platform seems limited, mostly because making applications actually release-ready for ‘the world’ takes insane amounts of effort. However, that does not diminish the power of the system for me; being able to whip up prototypes and ‘good enough’ helper/tool applications is something I only know from Tcl/Tk (especially with Visual Tcl). Delphi/VB are fast, but not as fast as these. I create mostly write-only software with it, but for what I need it for, it is good, and I do not know anything that matches that kind of productivity.

this pleases me greatly. I hope the $1000 level is reached ASAP so it can import existing stacks!

I would think "Hypercard Reimagined" would finally cast off the iron chains of its black-and-white, ancient Mac UI heritage...

edit: to clarify, even back in the day those were the two things that I really disliked about HyperCard. No color and no Windows

Hypercard was the inspiration behind much of the early Web. In particular the Viola browser suite.


Considering the enthusiasm on display here, I feel like this project deserves a bit better demo video for people like me who don't really understand HyperCard.

I had an early Mac in 1984 (still have the damn thing - sentimentality). Showing people Hypercard was often the thing that convinced them to purchase a Mac.

“Mobile and tablet are not yet supported”

Well, heck. Now I can’t even see what is causing so much commentary on the memories of HyperCard.

Ironically, doesn't support Safari :)

Click the "Click here to continue anyway" button. Like most sites that advertise that they don't work on Safari, it actually works just fine.

(Reminder: If you're doing browser feature detection via User-Agent string, you seriously need to re-examine your life.)
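The alternative to UA sniffing is probing for the capability directly. A hypothetical sketch (the helper name is made up; taking the global object as a parameter just makes the check unit-testable):

```javascript
// Hypothetical sketch: detect a capability by checking for it directly
// on the global object, rather than guessing from the User-Agent string.
function hasFeature(globalObj, path) {
  // e.g. hasFeature(window, "navigator.serviceWorker")
  return path.split(".").reduce(
    (obj, key) => (obj != null && key in Object(obj) ? obj[key] : undefined),
    globalObj
  ) !== undefined;
}
```

Using `in` rather than a truthiness check means a feature whose value happens to be `false` or `0` is still reported as present.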

Kind of ironic they "don't support" Safari, the browser for macOS.

Seems to work anyways.

Aw man. So much nostalgia. <3

this is great, but how do i hit option on an android to draw multiple?

The UI was so mouse-centric that I haven't tried it much on mobile. If the menus work ok on mobile, it might make sense for "draw multiple" to be switched on in the menu like the way "draw wider lines" is now.

minivmac does a good enough job of it on the android port, and the in-browser mac emu at the archive too. looking forward to seeing where this project leads.

there need to be ways to jump into code without being bogged down with installing compilers and tools and configuring minifiers and choosing frameworks. just code straight away.

something that works for kids but also scales all the way into enterprise.

open about:blank, right click, inspect element, start hammering code into console. You can still get pretty far with plain JS these days without touching the quagmire that is modern webdev world.

Sure there are some other issues in picking JS, but it is both readily available and (somewhat) scalable to "serious" projects.

nothing can compare to saving a .bas file on a floppy to learn how to code.


Why has this been flagged?

Many happy memories. Can we make it full-screen please ?

The closest I could get so far is to set the browser zoom to 200%, click in the url bar, and hit F11. Looks pretty good in Chrome.
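The site could also offer this itself via the standard Fullscreen API. A hypothetical helper (passing `document` in as a parameter just makes it testable; browsers only allow the call inside a user gesture such as a click):

```javascript
// Hypothetical sketch: toggle fullscreen on an element using the
// standard Fullscreen API (document.fullscreenElement, exitFullscreen,
// Element.requestFullscreen).
function toggleFullscreen(doc, el) {
  if (doc.fullscreenElement) {
    return doc.exitFullscreen();
  }
  return el.requestFullscreen();
}

// e.g. button.onclick = () => toggleFullscreen(document, document.documentElement);
```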
