Hacker News | nocobot's comments

reminds me of this image board that has been around for a while. similar idea, it’s only open every 12 hours

https://chakai.org/tea/


helsing is already hiring for roles like that


is it really? this is the most common example of a context-free language and something most first-year CS students will be familiar with.

totally agree that you can be a great engineer and not be familiar with it, but seems weird for an expert in the field to confidently make wrong statements about this.


Thanks, that's what I meant. a^nb^n is a standard test of learnability.

That stuff is still absolutely relevant, btw. Some DL people like to dismiss it as irrelevant but that's just because they lack the background to appreciate why it matters. Also: the arrogance of youth (hey I've already been a postdoc for a year, I'm ancient). Here's a recent paper on Neural Networks and the Chomsky Hierarchy that tests RNNs and Transformers on formal languages (I think it doesn't test on a^nb^n directly but tests similar a-b based CF languages):

https://arxiv.org/abs/2207.02098

And btw that's a good paper. Probably one of the most satisfying DL papers I've read in recent years. You know when you read a paper and you get this feeling of satiation, like "aaah, that hit the spot"? That's the kind of paper.


a^nb^n can definitely be expressed and recognized with a transformer.

A transformer (with relative, shift-invariant positional embeddings) has full context, so it can see the whole sequence. It just has to count and compare.

To convince yourself, construct the weights manually.

First layer: zero out each character that is equal to the previous character.

Second layer: build four features that detect and extract the positional embeddings of the first a, the last a, the first b, and the last b.

Third layer: on top of that, check whether (second feature - first feature) == (fourth feature - third feature).
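The same count-and-compare check is easy to sanity-test outside a transformer. A minimal sketch in plain Python (the function name is mine; it just mirrors the find-the-block-boundaries-and-compare-lengths idea of the construction above):

```python
def is_anbn(s: str) -> bool:
    """Recognize a^n b^n (n >= 1): an a-block followed by a b-block
    of equal length, nothing else."""
    if not s or set(s) - {"a", "b"}:
        return False
    first_b = s.find("b")              # boundary between the a-block and b-block
    if first_b <= 0 or "a" in s[first_b:]:
        return False                   # missing a's/b's, or interleaved symbols
    return first_b == len(s) - first_b # |a-block| == |b-block|
```

The last comparison is exactly the third layer's check, expressed on positions rather than positional embeddings.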

The paper doesn't distinguish between the expressive capability of the model and finding the optimum of the model, i.e. the training procedure.

If you train by only showing examples with varying n, there probably isn't an inductive bias to make it converge naturally towards the optimal solution you can construct by hand. But you could probably train on multiple formal languages simultaneously, to make the counting feature emerge from the data.

You can't deduce much from negative results in research, besides that they require more work.


>> The paper doesn't distinguish between what is the expressive capability of the model, and the finding the optimum of the model, aka the training procedure.

They do. That's the whole point of the paper: you can set a bunch of weights manually like you suggest, but can you learn them instead; and how? See the Introduction. They make it very clear that they are investigating whether certain concepts can be learned by gradient descent, specifically. They point out that earlier work doesn't do that and that gradient descent is an obvious bit of bias that should affect the ability of different architectures to learn different concepts. Like I say, good work.

>> But you can probably train multiple formal languages simultaneously, to make the counting feature emerge from the data.

You could always try it out yourself, you know. Like I say that's the beauty of grammars: you can generate tons of synthetic data and go to town.
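For example, a throwaway generator along these lines (the function name and the negative-sampling scheme are my own illustration, not from the paper) produces as much labeled data as you want:

```python
import random

def sample_anbn(max_n: int = 10):
    """Return (string, label): label 1 for a^n b^n, 0 for a corrupted variant."""
    n = random.randint(1, max_n)
    if random.random() < 0.5:
        return "a" * n + "b" * n, 1
    # Negative example: same a-block/b-block shape, but lengths off by one.
    m = n + random.choice([-1, 1]) if n > 1 else n + 1
    return "a" * n + "b" * m, 0
```

Swap in a different grammar and you get training sets for the other a-b based languages, which is what makes multi-task experiments on formal languages so cheap to run.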

>> You can't deduce much from negative results in research beside it requiring more work.

I disagree. I'm a falsificationist. The only time we learn anything useful is when stuff fails.


Gradient descent usually gets stuck in local minima; it depends on the shape of the energy landscape. That's expected behavior.

The current wisdom is that optimizing for multiple tasks simultaneously makes the energy landscape smoother. One task allows the model to discover features that can be used to solve other tasks.

Useful features that are used by many tasks can more easily emerge from the sea of useless features. If you don't have sufficiently many distinct tasks, the signal doesn't get above the noise and is much harder to observe.

That's the whole point of "generalist" intelligence in the scaling hypothesis.

For problems where you can write a solution manually, you can also help the training procedure by regularising the problem with the auxiliary task of predicting some custom feature. Alternatively you can "generatively pretrain" to obtain useful features, replacing a custom loss function with custom data.

The paper is a useful characterisation of the energy landscape of various formal tasks in isolation, but it doesn't investigate the more general, simpler problems that occur in practice.


In my country (France), I think most last-year CS students will not have heard of it (pls anyone correct me if I'm wrong).


another EF project is https://semaphore.pse.dev/, which basically allows users to prove membership of a group without revealing additional details about themselves. it's useful but also simple enough to understand the whole code-base in a reasonable time.


Super interesting, thanks for sharing!


minimalist painter, youngest artist ever to receive a retrospective at MoMA


can someone explain why dhh has such strong feelings about katherine maher?


Maybe it's this tweet: https://twitter.com/realchrisrufo/status/1780929268949614848

From the tweet:

EXCLUSIVE: Katherine Maher says that she abandoned a "free and open" internet as the mission of Wikipedia, because those principles recapitulated a "white male Westernized construct" and "did not end up living into the intentionality of what openness can be."

There is a video of her speaking, which I find hard to interpret.


Thank you for the link. Now I have enough to form an opinion about her.


Totally. She seems really thoughtful and aware of how absolute freedom (anarchy) just leads to a situation where implicit power structures are created by those who get there first. Hope she and others keep doing good work to ensure that the site lives up to its goal, to catalog all knowledge, not just the knowledge that's easy to catalog.


It seems a bit worrying to me as a frequent user of Wikipedia. I like a "free and open" internet.


We formed completely different opinions. And that's fine.


not sure. I searched comments: https://hn.algolia.com/?dateRange=pastYear&page=0&prefix=fal...

The most recent ones are reasonably mundane culture-wars stuff, but (more interesting to HN) the earlier Signal-related ones appear to suggest a degree of alignment with the US government. The words "spook" and "compromised" are used in different comments.

Edit: from the Signal page:

"She is an appointed member of the U.S. Department of State's Foreign Affairs Policy Board, where she advises the Secretary of State on technology policy"


Can anyone explain what the deal is with that US Department of State Foreign Affairs Policy Board?


There's an article here (admittedly a bit biased against her) "NPR Chief Bragged About Taking Censorship Orders From Feds As Head Of Wikipedia" https://thefederalist.com/2024/04/17/npr-chief-bragged-about...

which may explain some of it. It reminds me of an odd experience I had with Wikipedia around covid time. I edit a bit, and thought that as an open wiki you could consider both possibilities: that covid may have arisen zoonotically like all previous such pandemics, or that it may have come from the nearby lab, which was running job ads for bat coronavirus researchers at the time of the outbreak. But no: the lab stuff was largely verboten and unmentionable. I guess the above article explains a bit of how that happened, maybe?


Rufo, the guy who invented the Critical Race Theory conspiracy theory https://en.wikipedia.org/wiki/Christopher_Rufo , posted a video clip of her and wildly misinterpreted it for clicks. There have been a bunch of submissions and I think all of them have been flagged dead. E.g. https://news.ycombinator.com/item?id=40080036


A large part of how DHH stays relevant is by getting outraged about things on the Internet.


DHH only has strong feelings. Katherine Maher is a great CEO and leader and has been taken out of context in order to isolate her and make her unsupportable by people who claim to not like cancel culture. The whole thing is ridiculous.


You're the second person in this small thread to make a quip about "people claiming to not like cancel culture". Smells pretty fishy to me. Also the whole "she's a great CEO and leader" bit, what are the odds that you have personal experience to where you can make this claim? Even if you did, you could at least provide reasoning for why you think that. Anyways, all put together, I suspect a non-genuine person (or bot/LLM) posted this.


Max Howell (creator of homebrew) is also behind tea.xyz

imo that gives them some more credibility, or at least makes me think that they are probably well intentioned


> more credibility

I would say the opposite. Homebrew isn't a broadly well-respected project from a purely engineering perspective (i.e. by anyone who's engaged with it in earnest). It gets contributions because it has user capture / network effects, but there are a lot of contributors who would prefer to be publishing packages on a more nicely stewarded platform.


Are you kidding? I love Homebrew.

I install it on my Linux devices too. It allows me to easily install up-to-date versions of just about any software I care about.


You're a user - I'm purely referring to contributors.

It's clearly gained popularity for good reasons - it's an API that is very user-oriented, with reasonably good UX for most people. The downsides are mainly related to issues users don't see (i.e. security).


What are some of the security issues? (Just curious. I don't use Homebrew.)


Homebrew adds a location to your $PATH that is writable by unprivileged users. This means any non-root process has privileges to mask any binary on your system. They do this in the name of "convenience" - so that the Homebrew process can install apps without the user entering their password every time.
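The masking risk itself is easy to demonstrate without Homebrew. A self-contained sketch (the fake `whoami` and the temp directory are made up for illustration; this just shows what a user-writable entry early in $PATH lets any non-root process do):

```python
import os
import subprocess
import tempfile

# Sketch: a user-writable directory that sits early in $PATH can "mask"
# any binary. Here we shadow `whoami` from a throwaway temp directory.
with tempfile.TemporaryDirectory() as d:
    fake = os.path.join(d, "whoami")
    with open(fake, "w") as f:
        f.write("#!/bin/sh\necho not-you\n")
    os.chmod(fake, 0o755)
    env = dict(os.environ, PATH=d + os.pathsep + os.environ.get("PATH", ""))
    result = subprocess.run(["whoami"], env=env,
                            capture_output=True, text=True).stdout.strip()
print(result)  # the fake binary runs instead of the system one
```

Replace the temp directory with any persistent user-writable directory that precedes the system directories in $PATH and the same trick works for every command lookup.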


You're right, I misread your comment.


I wonder how good Nix support is on MacOS these days...


I recently switched from MacPorts to Homebrew, and from previous trials of Nix, MacPorts' macOS support is well ahead of Nix's.

I used MacPorts for many years without many issues; only recently I started to get a little too frustrated with some new utils that were Homebrew-only & finally capitulated. So you can get very far with MacPorts (& it's a far better system than brew).

It'd be great to see something gain traction over Homebrew, but I have a feeling many devs out there will only ever bother publishing on a single distribution platform for macOS (whatever happens to be most popular).


Isn’t it best if application developers just release their application on GitHub or similar, and then package maintainers can package the software for their specific package manager? That’s how it works for many Linux distros, e.g. Debian etc.


> That’s how it works for many Linux distros, e.g. Debian etc etc.

Yes and no. It's certainly true of most packages but the smaller the package, the more likely it is that the distro package maintainer will be [a/the] maintainer of the original project, even with Debian.

The same is true of Homebrew, etc. - most of the package maintainers aren't the original project maintainers, which is ultimately why MacPorts support is so comprehensive despite not having anywhere near the same user capture as Homebrew. But the places you see frustrating gaps will always be at the edges, where it may be the original project creator creating a Homebrew package & no-one packaging it for anything else.


Pretty good, but you might still want to defer to Homebrew to install some software. Nix-Darwin _can_ drive Homebrew and basically manage its packages declaratively.


If anything this makes me seriously rethink my use of homebrew. I already had security-related concerns around it, but the fact that he launched this weird crypto thing that's going to cause a lot of spam on GitHub is pushing me even further away.


It’s completely unrelated to Homebrew. We have zero connection to anything related to Tea and Max hasn’t been involved with Homebrew for the best part of a decade.

Mike McQuaid, Homebrew Project Leader and Homebrew maintainer for the last 15 years.


OK Great Leader. People are noting that the “Homebrew guy” is behind two back-to-back stupid ideas (that package thing and this). You can’t deny that he was involved with Homebrew and that’s all that’s being said.

The OSS CV thing can be a double-edged sword for all involved parties.


> You can’t deny that he was involved with Homebrew and that’s all that’s being said.

Unfortunately, this isn’t true. The comment you’re replying to was in response to someone saying they’d reconsider using Homebrew now over this, and that sentiment has been common in social media.


You need to take a break from the internet


Et tu


Thanks for Homebrew, it helps keep my Mac in line.


That definitely makes me reconsider retrying Nix as my package manager on macOS :-(

Sorry, but if someone in 2024 still thinks web3 is an answer...


[flagged]


im confused, why's he a creep? did something happen?


Did you read the article? Abusing open source projects and wasting maintainer time as part of your crypto-scam startup business model is super unethical.


1) selling

2) my work as a contractor

3) usdc

4) ~11k usdc

5) 2021

6) via email i guess

7) went smoothly, I sent a regular invoice and withdrew via a centralized exchange. Fees for me were lower than receiving an international bank transfer

8) yes

9) the UX is quite bad still, I don’t want to evaporate my salary by mistyping an address. paying ppl in crypto also leads to some pay transparency, I could see how much everyone was making


this sounds like an interesting product and the team clearly has impressive credentials.

I am very sceptical of crowd funding however, I think these are largely terrible investments for consumers while explicitly targeting people who are not accredited investors.

what made you go that route instead of pursuing VC funding?


It is a combination of things. We hesitated a lot before doing a crowdfunding, but:

- a corrosion issue nearly killed us and we had to "reboot"

- our team was in full lockdown in France for a while and we could not prototype as easily

- in 2021 we were still focused on seasonal energy storage: a very capital-intensive endeavour, a risky project in a market that was not ready

- the rules for crowdfunding changed in 2021, and the use of SPVs (special purpose vehicles) made it possible to raise big + have one line on the cap table.

We had to derisk the project further to be able to attract VC funding (patent, prototype, LOIs, financial model, etc.) and we ultimately followed the advice from another YC founder and friend who went the crowdfunding route with success. A lot of crowdfunding projects look like outright scams and probably are... but I feel that the SEC did a good job protecting the public. You cannot invest more than a certain amount if you are not accredited for example. Things are certainly not perfect, and getting better year after year.


When I was in college I used Interview Cake a bit and enjoyed it. It's free for students, iirc.

In general I think most of the interview prep industry is kind of a grift.

