
I see the page has been vandalized; here is a link to the original:

https://www.emacswiki.org/emacs?action=browse;id=EmacsStorie...



Given that all Wikipedia editors have explicitly consented to their content being released under the Creative Commons Attribution-ShareAlike 4.0 License, they don't get a choice about their content being used for any purpose.

Redistribution of content is an entirely different matter, and the legal status of copyrighted material in relation to LLM training is an open issue that is currently the subject of litigation.


Wikimedia Foundation’s perspective on this [1]:

> "it is important to note that Creative Commons licenses allow for free reproduction and reuse, so AI programs like ChatGPT might copy text from a Wikipedia article or an image from Wikimedia Commons. However, it is not clear yet whether massively copying content from these sources may result in a violation of the Creative Commons license if attribution is not granted. Overall, it is more likely than not if current precedent holds that training systems on copyrighted data will be covered by fair use in the United States, but there is significant uncertainty at time of writing."

The new Wikimedia Enterprise APIs facilitate attribution. For example, the "api.enterprise.wikimedia.com/v2/structured-contents/{name}" response [2] includes an "editor" object inside a "version" object, so attributing the Wikipedia editor who most recently edited an article seems quite feasible. ML applications could incorporate such attribution in their offerings and help satisfy the "BY" clause of the CC BY-SA 4.0 license underlying Wikipedia content.
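
As a rough illustration (not the Enterprise API's documented client), a fetch-and-attribute step might look like the sketch below. The "version"/"editor" objects are taken from the docs linked in [2]; the authentication scheme, the exact response shape, and the "name" field inside "editor" are assumptions:

    import requests  # third-party HTTP library

    API = "https://api.enterprise.wikimedia.com/v2/structured-contents/{name}"

    def attribution_line(title: str, token: str) -> str:
        """Fetch an article and build a CC BY-SA attribution string from the
        'version' -> 'editor' objects mentioned in the Enterprise docs.
        Auth header and field names beyond 'version'/'editor' are guesses."""
        resp = requests.get(
            API.format(name=title),
            headers={"Authorization": f"Bearer {token}"},  # assumed bearer auth
            timeout=30,
        )
        resp.raise_for_status()
        article = resp.json()[0]  # assuming a list with one entry per wiki
        editor = article.get("version", {}).get("editor", {})
        return (f"Text adapted from the Wikipedia article '{title}', "
                f"most recently edited by {editor.get('name', 'an unnamed editor')}, "
                f"licensed under CC BY-SA 4.0.")

A single most-recent-editor credit is of course only a partial author list, but it shows that the data needed for basic attribution is already exposed by the API.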

---

1. https://meta.wikimedia.org/wiki/Wikilegal/Copyright_Analysis...

2. https://enterprise.wikimedia.com/docs/on-demand/#article-str...


I think the best analogy here might be WHATWG and HTML5. Instead of creating an entirely new and expanded 'second system' (as the W3C was trying to do with XHTML), the existing major players in that field created something that was a much more strictly defined standard, carefully designed to be forwards and backwards compatible with the existing mess, with well-defined behaviour for non-conformant content, and then started building on that new standard.

The big players in email are now in the same situation as the big browser vendors. If they defined a strict subset of the existing body of de-facto email standards, critically with well-defined behaviour for non-conformant content, and then blessed that as email 2.0, they would then have something well-defined and workable to build on.

This might include mandating a restricted subset of HTML5 for HTML content, a canonical transformation of that content to plain text for interoperability, mandating plain-text email as acceptable (perhaps with a canonical transformation to HTML), the use of SPF, DKIM, etc. with specified defaults, SMTP with specified features enabled, and so on.
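
To make "specified defaults" concrete: today SPF/DKIM/DMARC records are optional and free-form, whereas a strict profile could simply mandate their presence with hard-fail semantics. The DNS TXT records below use ordinary, existing syntax with illustrative (hypothetical) strict values; they are not a proposal from any standards body:

    ; hypothetical strict defaults an "email 2.0" profile might mandate
    example.com.                IN TXT "v=spf1 mx -all"                          ; SPF: hard fail for unauthorised senders
    s1._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=<base64 public key>"   ; DKIM key; signing mandatory
    _dmarc.example.com.         IN TXT "v=DMARC1; p=reject; rua=mailto:dmarc@example.com"  ; DMARC: reject on failure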

Then what is effectively a well-defined profile of traditional email becomes the new (forwards and largely backwards compatible) well-defined email system, and we can all move forward from there.

But to do that, there would need to be the will to create an email equivalent of WHATWG.


Consider the set of functions that map the reals onto other reals. Only countably many of them admit a finite description of any kind, so almost all of these functions are truly random, with no way of expressing them that does not require the storage of an infinite number of infinite strings.

Not only is there no practical way of creating such a thing, most formulations of physics preclude any possibility of making one by placing finite limits on the amount of space or time accessible to us.

(Not to mention that almost all reals are [Turing] uncomputable in their own right, but that's a more complex thing to demonstrate.)


You don't need to express or encode the reals if you've built a physical analogue that operates on those reals, that's the point.


I think the interesting question here is whether Penrose is claiming that the things of which the brain is capable (most notably the production of consciousness) are inherently non-computable by _any_ kind of artificial device, which is effectively a form of vitalism, or whether he is claiming that they _might_ be computable, but only with a quantum computer.


Quantum computers can be simulated by classical computers. It's just a problem of exponential complexity.


Penrose argues just in terms of classical computers.


I'd be interested in learning more about the flight computers, and in particular the very lowest-level systems used to reset or debug the main systems.

Given the tiny amount of storage, there's presumably no scope for rolling back changes after an error, so are they at risk of bricking the spacecraft every time the flight engineers (and just for once, I think the term "software engineer" is truly justified) make a change?


From a previous software update:

https://www.jpl.nasa.gov/news/nasas-voyager-team-focuses-on-...

> Because of the spacecraft’s age and the communication lag time, there’s some risk the patch could overwrite essential code or have other unintended effects on the spacecraft. To reduce those risks, the team has spent months writing, reviewing, and checking the code. As an added safety precaution, Voyager 2 will receive the patch first and serve as a testbed for its twin. Voyager 1 is farther from Earth than any other spacecraft, making its data more valuable.

> The team will upload the patch and do a readout of the AACS memory to make sure it’s in the right place on Friday, Oct. 20. If no immediate issues arise, the team will issue a command on Saturday, Oct. 28, to see if the patch is operating as it should.


Do they have a simulator besides risking Voyager 2?


Not of the entire system, I don't think.

https://www.businessinsider.com/engineers-turn-to-voyager-de...

> During the first 12 years of the Voyager mission, thousands of engineers worked on the project, Dodd said. "As they retired in the '70s and '80s, there wasn't a big push to have a project document library. People would take their boxes home to their garage," Dodd added. In modern missions, NASA keeps more robust records of documentation.

Nowadays we've learned to make a duplicate.

https://www.jpl.nasa.gov/news/nasa-readies-perseverance-mars...

Back when the Spirit rover got trapped, they used its twin to pre-game possible approaches in similar soil.

https://www.jpl.nasa.gov/news/mars-and-earth-activities-aim-...


AIUI, the systems are designed to reset to a known good "standby" state if they don't receive any commands after some time. Of course the recovery routine itself can't be guaranteed, especially when the hardware itself is wonky, but suffice to say the updates aren't completely rawdogging it every time. There's an overview of how it works at https://voyager.jpl.nasa.gov/mission/science/thirty-year-pla...


There are lots more variables to consider, particularly in lead climbing, even when you have a bolted route. And trad climbing is even more complex than that.


Trad lead is climbing. Basically everything else is some sort of simulation.


It's 2024 and some people are gonna gatekeep that if you're not shoving cams in some bigwall granite, it's not climbing?


This one-upmanship is very much part of climbing culture. From the top down, the hierarchy is: free soloists (climbing at height without any protection), trad climbers (carry the rope up with them, set their own protection), sport lead climbers (carry the rope up with them, bolts are set into the wall), top-rope climbers (rope already there, dangling from the top). Right down at the bottom you have aid climbing, where you use equipment to haul yourself up on a rope.

And then there's indoors vs. outdoors, with some dedicated outdoor climbers regarding anything done in a climbing gym as "not real climbing".

Most people don't take this literally, and it's generally considered to be a standing joke in climbing. Sadly, though, some people take it very seriously.


Aid climbing isn't respected on smaller or well-established cliffs, but most understand that it has a place. All the great routes started as aid routes, only being climbed "free" years later. Aid is also a very useful skill in the rain or rescue situations when friction disappears.


Such differences are actually at the heart of climbing. Is taking a helicopter up to the top climbing? Of course not. So from day one it is not about getting to the top but about following invented rules governing how you get to the top. How about a bolted-on ladder? Or pre-placed protection (bolts)? Clean trad, leaving the rock as you found it, is generally seen as the highest form.

(Climbing rock without ropes may be more "pure" but is so dangerous that it should never be idolized.)


Cams? Real climbers use stones wrapped with hawser!


I'm sceptical about the usefulness of this for a number of reasons, including the points raised by others about hold recognition etc.

It's also difficult to see how the system would work out the 3D arrangement of holds. It can clearly try to infer 3D relationships from video of an existing climb, but it would be hard for it to work out the 3D position or orientation of any holds not used by the climber in the video.

These two put together make solving the climbing problem even more difficult, because appropriate body positions and moves are often very sensitive to even tiny differences in the shape, relative position, and orientation of holds; and occluding volumes, arêtes, cracks and so on can make the problem even harder.

But as a simplified demo, this is cool, and I salute it.


This sounds risky: is the goal truly 100% uptime? Real 24/7 is difficult, even in safety-critical fields. When would you ever push out fixes or do live testing?


24/5 is more realistic.


This is magnificent reverse engineering. I wonder why retrocomputing is so fascinating? Is it an attempt (by older people) to rediscover their youth, or is it a search for a golden age of computing where people still had some autonomy over their hardware? Or something else?


Personally, I’m interested in getting to learn about and play with toys that had insurmountable (to me) barriers of cost, access, or obscurity when they were contemporary.

The purest “retro” experience I’ve had wasn’t retro at all: Stepping into an MIT Athena lab a quarter century ago to see an SGI Indy ready for my login.

In a way, it’s like finding out what winning the lottery would have meant for a curious tech nerd.


Yes. I bought an SGI O2 and a NeXTstation off of eBay for those reasons. I could've never afforded this stuff when it was new.


Ooh, the O2 was a beautiful system. In fact, all their systems looked amazing, whereas all the others made super-boring businessy stuff. Well, except Sun: they had style, but nothing too flashy; they were kind of in the middle.

But HP and IBM made boring old boxes :P I did love HP-UX though.


I always preferred Sun hardware, myself. In the early to mid 90's, SunOS was basically the gold standard for Unix systems and building open source software. Early Solaris releases were pretty rough, but by Solaris 2.5 it was pretty good...


Yeah I'd love to play around with a real IBM Mainframe.

However, even these days they are expensive, plus I can't meet the space or power requirements for one of those :) There's a reason they were called "big iron".


There's also an enjoyment in revisiting the old stuff while standing on its shoulders.

For example, there are modern games made by hobbyists for old systems that completely outclass literally anything that was available contemporaneously. Today anyone with a computer has access to development hardware, software, documentation, and community resources that are all just lightyears beyond what was available then at any price. Changing a few lines of code and instantly hopping into a cycle-accurate emulator to test takes milliseconds versus writing out floppies or EEPROMs. Arcane tricks that only a privileged few knew (if that) are now common knowledge. That stuff leads to whole other strata of capabilities being accessible.


I love machines where the complexity is such that a single person can have an almost complete understanding of the entirety of the machine. You can get that from old machines.


I suspect it's the same appeal as woodworking or hot rodding or living history reenactment.


Humans also love their tools. While tools get more advanced, they sometimes lose some charm or benefits of older tools.

For me, an interest in retro tech is seeing how people did things in the past and realizing we lost some "nice" experiences along the way.

Also, in a world where everything becomes obsolete at record speeds, something so obsolete feels stuck in time.


> Humans also love their tools. While tools get more advanced, they sometimes lose some charm or benefits of older tools.

This is true. In summer, when I sit in the dark with the windows open (hot country, no AC, and I don't want to attract bugs), my 4K computer monitor is totally incapable of dimming to a level that doesn't blind me, even with white-on-black/dark mode and brightness set to 0. In fact, even during the day I work with brightness 0 on that thing; it seems all monitors are optimised for max brightness these days.

Meanwhile, my old VT520 terminal has an analog brightness dial which I can turn so low that I can barely make out the letters in a pitch-black room <3

