Hacker News: albertzeyer's comments

It's really a pity that they do this now. Some of their older papers contained quite valuable information: comments, discussions, thoughts, even commented-out sections, figures, and tables. It gave a much better view of how the paper was written over time, or how the work itself progressed. Sometimes you also see alternative titles being discussed, which can be quite funny.

E.g. from https://arxiv.org/abs/1804.09849:

%\title{Sequence-to-Sequence Tricks and Hybrids\\for Improved Neural Machine Translation}
% \title{Mixing and Matching Sequence-to-Sequence Modeling Techniques\\for Improved Neural Machine Translation}
% \title{Analyzing and Optimizing Sequence-to-Sequence Modeling Techniques\\for Improved Neural Machine Translation}
% \title{Frankenmodels for Improved Neural Machine Translation}
% \title{Optimized Architectures and Training Strategies\\for Improved Neural Machine Translation}
% \title{Hybrid Vigor: Combining Traits from Different Architectures Improves Neural Machine Translation}

\title{The Best of Both Worlds: \\Combining Recent Advances in Neural Machine Translation\\ ~}

Also a lot of things in the Attention is all you need paper: https://arxiv.org/abs/1706.03762v1


> Some of their older papers had actually quite some valuable information, comments, discussions, thoughts, even commented out sections, figures, tables in it.

I think you answered your own question.


What question?

I think I read the comment as being sceptical as to why they do this. I withdraw my comment in that form.

Maybe papers need to be put under version control.

FigShare and Zenodo grant (DataCite) DOIs for git commit tags.

Maybe papers need to contain executable test assertions.


You might be right about the market.

However, that target audience, those hobby enthusiasts, hobby developers, and also university labs with low budgets, are the people who will develop the future open source frameworks, and ultimately/implicitly the people who can have quite a big impact on brand recognition and on the open source ecosystem around the hardware. Those people can shape future trends.

So looking only at the market, at how many units you would sell here, totally ignores the impact this might have indirectly in the future.


> However, that target audience, those hobby enthusiasts, hobby developers, also university labs with low budget, those are the people who will develop the future open source frameworks,

No they're not. Y'all are deluded. There's a reason why there are only two real DNN frameworks, and both of them are developed at the two biggest tech companies in the world.


Can you give some examples? I also care a lot about energy efficiency. How much energy would the alternatives consume?



Here is a data point: my mini PC server idles at around 5 watts, including external HDD. (At sustained 100% load it goes to 30W)


Probably less - RPis these days run very hot (60C+) even when idling due to rather poor power management.


Definitely not less; used mini PCs typically idle at 10-20W on the low end (and some, especially older AMD, at 30-40W which seems insane!).

Even the better low-end N100s usually idle at 5-6W, which is double the Pi 5.

SoC idle temperature depends on environment and cooling solution; the N100 would quickly thermal throttle if you don't have a comparatively large heatsink attached. The Pi 5 will actually give you full performance for a minute or two before throttling (assuming no heatsink).

Other Arm chips are much better, efficiency-wise, but the Pi 5 is still more efficient than any low-end x86 build, especially used.


Hi Jeff, I feel that for just a few watts extra at idle you get a much more capable, much more complete, easy-to-handle computer. At this point the Raspberry Pi starts to feel like: why do we still even care? Let the industry customers have them.

I'm sure you have many more topics and a ton of other gear to explore, and many Pi-related videos are fun entertainment (which is fine for its own sake), but is it always practical?

That 4-SSD NAS looks fragile as hell, and I think it's definitely worth exploring alternatives.


My NUC11 with 8 GB of DDR4 and Ethernet idles at 3-4W when I tell it to not power any status LEDs. Can be power-limited in BIOS so that it runs off of a USB-C power adapter with a fixed 19V negotiation cable.

But it is not low-end for sure. I'm kind of wasting the computer on this use case. It's just what I had after the Pi 1B turned out not to be enough.


3-4W mini PCs with lower peak power consumption are decently common these days, and with no need for the compromises the RPi requires.


The development writeup for the Doom clone is interesting, with many details. https://nicholas.carlini.com/writing/2019/javascript-doom-cl...

There was one month to complete the competition. But it seems you were allowed to reuse any existing code.

Looks like this was quite fun to work on.

(I feel a bit sad that I would never be able to get one month of free time to work on this now, due to family and job...)


Wow that’s a fascinating read, thanks for linking it!!


Why do you think these activities are obsolete? Meeting other people face-to-face will always be nice, won't it? And playing together in the same room is much nicer than being separated.

I'm quite sure you will still be able to find such events, if you want to try. I know a couple of friends who still do that.

Speaking of something with a similar feeling: there are the hacker events, such as the CCC congress, which is happening right now: https://events.ccc.de/congress/2024/infos/index.html

(Unfortunately, I haven't been able to attend any of these for a while due to having small kids... But I will definitely join them again sooner or later. Maybe then with my kids.)

Also related: https://lanparty.house/ (https://news.ycombinator.com/item?id=42156977)


Ehhh, we have people there with their 2yo, just don't be alone with a small kid (bring the other parent).


Somewhat similar: my PyCPython project (https://github.com/albertz/PyCPython), where I wrote a C interpreter in Python and then tried to interpret CPython with it. I never really completed the project, though.


In Python, some of the stated use cases, like extracting coverage, extracting the call graph, etc., you can already get via tracing (sys.settrace, https://docs.python.org/3/library/sys.html#sys.settrace).
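As a minimal sketch of that tracing approach (the `helper`/`main` functions here are made up for illustration), sys.settrace can record a simple call log that feeds a call graph or coverage report:

```python
import sys

calls = []  # function names, in call order

def tracer(frame, event, arg):
    # The interpreter invokes this for events in every new frame;
    # we only record "call" events to get a crude call log.
    if event == "call":
        calls.append(frame.f_code.co_name)
    return tracer  # keep tracing inside the new frame as well

def helper():
    return 42

def main():
    return helper()

sys.settrace(tracer)
main()
sys.settrace(None)  # disable again; tracing slows everything down

print(calls)  # ['main', 'helper']
```

Note that the tracer only sees Python-level frames; calls into C builtins don't show up, which is one limitation of this approach.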

And I would argue, the other stated use cases are maybe interesting to play around with, but nothing you really would want in production (I suppose).

Some of the other use cases, you can also achieve by module import hooks and rewriting the AST. (E.g. see how PyTest rewrites the assertion AST to produce nice error messages: https://github.com/pytest-dev/pytest/blob/main/src/_pytest/a...)
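To illustrate the AST side of this (just a sketch of locating assert nodes, not pytest's actual rewrite machinery; the example source string is made up):

```python
import ast

source = "assert 1 + 1 == 3, 'math is broken'"
tree = ast.parse(source)

# Find each assert statement and look at its test expression. A rewriter
# like pytest's would replace the node with code that introspects the
# subexpression values before re-raising a nicer AssertionError.
for node in ast.walk(tree):
    if isinstance(node, ast.Assert):
        print(ast.unparse(node.test))  # 1 + 1 == 3
```

A real import hook would do this rewrite at module load time (via importlib machinery) and then compile the modified tree; ast.unparse requires Python 3.9+.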


pytype (https://github.com/google/pytype) is based on symbolic interpretation of python bytecode, but with the data stack storing types rather than values. it works very well and has been running in production within google forever.

the nice thing about writing a bytecode rather than an AST interpreter is that you can leverage a lot of the work the python compiler does, and work with the resulting simplified code.


the pytest example screams for @assert as a macro, if only the language supported it


I find it a bit weird that Gimp does not use the latest GTK (i.e. GTK4, considered stable since 2020), even though GTK originated in the Gimp project itself. It actually seems to be quite a bit behind: this is the first release of Gimp to use GTK3, i.e. before this, it still used GTK2 (which reached end-of-life in 2020)?


Having had to migrate a very simple project from GTK2 to GTK3, I don't think it's all that weird. The migration was extremely difficult; in the areas that affected me, seemingly no effort was made to provide proper migration paths. Only with some later-published documentation (plus help from ChatGPT) was it possible to restore some functionality after the initial migration. In the end that even meant calling Xlib directly.

And note that the software used wxWidgets, so most of the changes were encapsulated there. Only a very small part of GDK/GTK was used directly, with wnck already used as a helper layer (but the functionality in question broke there as well).

So even though GTK came from GIMP, if the later development of GTK was not made specifically for and by the GIMP project, the migration must have been a nightmare. Especially in a project that had so many other things to worry about, non-destructive editing alone.

And repeating such a migration now, again for GTK4, will not be very enticing.


From what I've heard surrounding this, GTK3 to GTK4 isn't as big of a jump as GTK2 to GTK3 was. The GTK3 port was finished first because there was already work in place for that, but we can expect a GTK4 port to be faster. That said, I haven't seen many apps that aren't specifically GNOME apps start using GTK4 in the first place, and as such I'm currently not using any GTK4 applications. I expect it to take a while before more things move to GTK4.


GTK stands for "Gnome first every other user literally does not matter break them hard, break them often, make them give up ToolKit" these days, and has for quite a while.


The funniest thing is that GTK used to stand for "Gimp toolkit". How times fly.


GTK4 removed a bunch of APIs for stupid reasons, and GIMP's move to 3 started before 4 even existed. Had 4 at least tried to maintain some degree of compatibility, then switching from 3 to 4 near the end would have been feasible. But that's not the case.

If you aren't Gnome, GTK is not for you.


Might as well port the next version of Gimp to Qt - they seem to be much more reasonable as API providers.


Qt also introduces some messy changes between major versions but they tend to be a lot more reasonable than what GTK does. Since Qt4 at least there's usually some attempt at providing alternatives for each deprecated function or class (the alternatives are shown in the error messages) and sometimes full-on compatibility layers for missing features are provided like with Qt5Compat.

Not sure Gimp devs would be willing to switch to C++ for Qt though.


I feel this. I tried using GTK4 a little while ago and almost immediately switched library when I realised it's simply incapable of doing certain things I needed, usually because either Wayland can't do it or GNOME doesn't need it.


Would you care to actually list those things?


Guess what the "G" in GTK stands for historically


That was before GNOME took over; it was supposed to be the answer to KDE and the original Qt license.

And here we are: having written articles about Gtkmm for The C/C++ Users Journal, I no longer care and run systems from Microsoft/Apple/Google instead.


I think Harmony was the Qt replacement but I don’t think it lasted very long.


That was after Nokia's acquisition of Trolltech.

https://blogs.kde.org/2009/01/14/cute-harmony-qt-goes-lgpl/


Development ceased at the end of 2000, when Qt was released under the GPL, removing the perceived need for the Harmony Project to exist. In January 2009 Qt itself was made available under the GNU LGPL, along with the previous license options.


True but the leap from GTK2 to GTK3 is a lot bigger than from GTK3 to GTK4. I'm not sure when the "port" to GTK3 started, but if it was from before GTK4 was a thing, it makes sense that they wanted to finish the GTK3 stuff first.


> I'm not sure when the "port" to GTK3 started

Probably more than 10 years ago.


GTK 3.0 was released in 2011, and it was announced a few years before that, probably in 2009. So whoever downvoted: the 90s weren't 10 years ago...


From the second paragraph of the article:

> GTK 4 has been available for a few years now, and is on the project's radar, but the plan was always to finish the GTK 3 work first.


Maybe so, but did you actually open that "on the radar" link? It was closed WONTFIX because moving to the latest release of their bespoke library was considered "tech debt" :-/ https://gitlab.gnome.org/GNOME/gimp/-/issues/6440#note_12726... I actually think it was terribly disingenuous of LWN to use "on the radar" language pointing at a closed issue. Maybe there is another issue (hiding among the 12,000 issues) that is actively tracking the GTK4 migration, but that one ain't it.


We actually had a Google Summer of Code project this summer that explored porting one of our main GTK3 widgets to be compatible with GTK4. It's definitely on our radar, but it's not a major focus at this point.


Do you have the correct issue that one could follow since 6440 isn't it?

Also, since you're here: is it just a matter of glucose, i.e. if someone were to port it to GTK4, would that patch be accepted? Or is it quite literally "no user cares about library versions"?


If someone submitted a patch that ported everything in GIMP to GTK4, I'm quite sure it'd be accepted after review. The trouble is that GTK4 deprecates or breaks a number of things as well. For instance, while the icon scaling system is much more flexible in GTK4, it's different than in GTK3 so all that work would have to be redone. GtkTreeViews are also becoming obsolete, and since GIMP relies on that for the layer/path/channel views, it'll be another big change.

At the moment, new development is encouraged to follow the GTK3 -> GTK4 migration guidelines (e.g. use gtk_widget_set_visible () rather than gtk_widget_show (), don't use gtk_widget_destroy () since it's been removed, etc). I don't know a specific issue tracking GTK4 at the moment, but I can check.


GTK has become the Gnome toolkit, and the Gnome developers don't care about developers outside their umbrella.


> I find it a bit weird that Gimp does not use the latest GTK

Which one is that? GTK 6? GTK is a moving target (like a lot of libraries these days). /s


A similar tool for this user-space bind-mount is https://github.com/fritzw/ld-preload-open, which relies on LD_PRELOAD to override common libc functions. This is less reliable than the presented tool, which uses ptrace, but it still works reasonably well (I run e.g. PyCharm with it).


Thanks for sharing this! I had to do exactly the same thing some 10 years ago to get an Oracle instance up and running again. Oracle insisted on using the /tmp location, despite being installed on a different drive, and the disk was full. As I had access to the Oracle system user, but not to the DBA user to change any configuration, I built a similar shared lib and preloaded it to the script. Worked like a charm! Happy to know that there is something _slightly more streamlined_ to do that now.


There is also Scratchbox, which was used (and probably still is) to cross-build embedded Linux distributions. https://github.com/sailfishos/scratchbox2

It also offers CPU transparency and was able to run almost arbitrary desktop software, but specializes in build toolchains.


Probably just "regular" LMs, not large LMs. I assume some LM with 10-100M params or so, which is cheap to use (and very standard for ASR).


Could be. I ran through some offline LMs for voice-assisted home automation a couple of years ago and they were subpar compared to even the pathetic offering that YouTube provides, but Google of course has much more focused resources to fine-tune a small-dataset model.

