
I'd love to read more. Do you know any sources?


Try "Reinforcement Learning for Combinatorial Optimization: A Survey"

or a more recent and spirited discussion on Reddit:

https://old.reddit.com/r/reinforcementlearning/comments/196i...


Link for those who need it:

https://arxiv.org/abs/2003.03600


> Up until about GPT 2, EURISKO was arguably the most interesting achievement in AI.

I'm really baffled by such a statement and genuinely curious.

How come studying GOFAI as an undergraduate and a graduate student at several European universities, doing a PhD, and working in the field for several years _never_ exposed me to EURISKO until last week (thanks to HN)?

I had heard about Cyc and about many formalisms and algorithms related to EURISKO, but never its name.

Is EURISKO famous only in the US?


> Is EURISKO famous only in the US?

It was featured in a BBC radio series on AI made by Colin Blakemore [1] around 1980, and the papers on AM and EURISKO were in the library of the UK university that I attended.

[1] https://en.wikipedia.org/wiki/Colin_Blakemore#Public_engagem...


For that reason, a comparison between GPT 2 and EURISKO seems funny to me.

I discussed ChatGPT with my yoga teacher recently, but I bet not even my IT colleagues would have a clue about EURISKO. :-)


So? There's a real possibility that DART has saved its customers more money over its lifetime than GPT has, and the odds are basically 100% that your yoga teacher and IT colleagues haven't heard a thing about it either. The general public has all sorts of wrong impressions and unknown unknowns, so I don't see why it should ever be used as a technology-industry benchmark by anyone not working in the UI department of a smartphone vendor.


OMG, what an archeological discovery!


I very much agree about the A* idea, but this idea

> Tangent: that's very similar to philosophy.

doesn't click with me. Could you maybe elaborate a bit, or provide an example, please?


From time to time, I read articles on the boundary between neural nets and knowledge graphs, like a recent one [1]. Sadly, no mention of Cyc.

My bet, judging mostly from my failed attempts at playing with OpenCyc around 2009, is that Cyc has always been too closed and too complex to tinker with. That doesn't play nicely with academic work. When people finish their PhDs and start working for OpenAI, they simply don't have Cyc in their toolbox.

[1] https://www.sciencedirect.com/science/article/pii/S089360802...


Oh, I just commented elsewhere in the thread about our work integrating frames and slots into LSTMs a few years ago! Second this.


From a different point of view... The cochlea is a "real" "implementation" of the Fourier transform (https://www.britannica.com/science/sound-physics/The-ear-as-...)


The cochlea actually supports the point the article makes: while it does transform to the frequency domain, it doesn't do (or even approximate) a Fourier transform. The time->frequency domain transform it "implements" is more like a wavelet transform.

Edit: To expand on this, interpreting the cochlea as a Fourier transform is to make the same mistake as thinking eyes have cone cells that respond only to red, green or blue light. The reality is that each cell has a varying response to a range of frequencies. Cone cells have a range that peaks in the low, medium or high frequency area and tails off at the sides. Cochlear hair cells have a more wavelet-like response curve, with secondary peaks at harmonics of their peak response frequency.
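To make the "varying response to a range of frequencies" point concrete, here is a rough numpy sketch (the filter shape and all parameters are invented for illustration, not fitted to real hair-cell data) of how a single band-pass filter centred at 1 kHz responds to pure tones:

    import numpy as np

    # One "hair-cell-like" band-pass filter: a Morlet-style wavelet
    # centred at 1 kHz. Every parameter here is made up.
    fs = 16000                              # sample rate (Hz)
    t = np.arange(-0.05, 0.05, 1 / fs)      # 100 ms window
    f0, sigma = 1000.0, 0.002               # centre frequency (Hz), envelope width (s)
    wavelet = np.exp(-t**2 / (2 * sigma**2)) * np.cos(2 * np.pi * f0 * t)

    # Correlate the filter with pure tones: the response peaks at 1 kHz
    # but stays non-zero over a whole band around it.
    for f in (500, 800, 1000, 1200, 2000):
        tone = np.cos(2 * np.pi * f * t)
        print(f"{f:5d} Hz -> relative response {abs(np.dot(wavelet, tone)):8.1f}")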

Caveat: I'm not an expert in this, only an enthusiastic amateur, so I eagerly await someone well-akshuallying my well-akshually.


Any kind of discrete Fourier transform, and also any device that generates the Fourier series of a periodic signal, even when done in an ideal way, must have outputs that are generated by a set of filters that have "a varying response to a range of frequencies".

Only a full Fourier transform, which has an infinity of outputs, could have (an infinite number of) filters with an infinitely narrow bandwidth, but those would also need an infinite time to produce their output.
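A minimal numpy sketch of that point (the transform size and the probed bin are arbitrary): a single bin of a finite DFT responds over a whole band of input frequencies, vanishing only at the other exact integer-bin frequencies:

    import numpy as np

    # Feed pure tones of varying (fractional) frequency into a 64-point
    # DFT and watch the magnitude of one bin: it has a sinc-shaped
    # response over a range of frequencies, not a response at one point.
    N = 64
    n = np.arange(N)
    for f in (6.0, 7.0, 7.5, 8.0, 8.5, 9.0, 10.0):  # frequency in units of bins
        x = np.cos(2 * np.pi * f * n / N)
        print(f"tone at {f:4.1f} bins -> |bin 8| = {abs(np.fft.fft(x)[8]):6.2f}")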

So what you have said does not show that the eye cone cells do not perform a Fourier transform (more correctly, a partial expansion in a Fourier series of the light, which is periodic in time at time scales comparable to its period).

The right explanation is that the sensitivity curves of the eye cone cells are a rather poor approximation of the optimal sensitivity curves of a set of filters for analyzing the spectral distribution of the incoming light. (Most animals other than mammals have better sensitivity curves; mammals have lost some of them, the ancestors of humans re-developed two filters for red and green from a single inherited filter, and there has not been enough time to do a job as good as in our distant ancestors.)


Sure, but the article asks the question about the frequency domain generally, then constrains itself to Fourier transforms. Fourier carries a lot of baggage from making large assumptions. Transforms like the wavelet and Laplace transforms are closer to the "real world" because they make fewer non-physical assumptions and have actual physical implementations. It doesn't get much more real than seeing it with your own eyes.


> Transforms like the wavelet and Laplace transforms are closer to the "real world" because they make fewer non-physical assumptions and have actual physical implementations.

Could you expand on this a bit please? Especially as it relates to the Laplace transform.


I'm not certain the secondary peaks would matter very much, though. It seems to me that maybe the most useful model would be not a wavelet transform but some form of DCT?

At any rate, the point is that the frequency domain matters a lot, since our brain essentially receives sound data converted to the frequency domain in the first place...


For the cone cells we have excellent empirical data about the response curves. Do you know if there is public data for the cochlear hair cells?


It's easy to forget how grounded in physics biology is. When I was in college, we had an issue where our stem cell lines were differentiating into bone. Turns out, the hardness of the environment is a signal stem cells can transduce, and the hard dish was telling them they were supposed to be bone cells.


Hmm, this mechanism can only add more bone to existing bone, right?

So how do the first bone cells know to start becoming bone?


Probably a transient signal (in the RNA-protein soup) that occurs during embryonic development.


Which came first, the chicken or the egg?


I was only a user of the Nokia 6150 and subsequent phones. Around me, they were considered technically perfect devices.


Also interested! We saw basically the exact opposite. :-)


As I'm also running a homelab, I was curious: what's your overall experience with IAM in this context?

What was your original goal? Which services are linked to your lldap? How many users? Does it simplify things or make it more complex?


It's been pretty good. I had never really used LDAP before, so there was a bit of a learning curve, but it's not too complicated.

1. My original goal was not having 5 different passwords for my own server, because even though I have a password manager, it's still a bit annoying. Also, just for learning.

2. You can see the services here[1], since my entire setup is provisioned from GitHub with Terraform and Ansible.

3. I have about 5 users.

4. I would say simplify so far, but it depends on what kind of complexity you care about, and which services you want to integrate.

[1] https://github.com/RedlineTriad/private_server/tree/master/s...


> My original goal was not having 5 different passwords for my own server, because even though I have a password manager, it's still a bit annoying.

I "solved" that problem by having configuration management deploy same password (hash) on all of my servers. Requires keeping the repo with password hashes relatively safe and of course changing them is a bit of a process but extremely easy and low tech if there is already CM in place.


Authelia actually supports a YAML file with password hashes as the user database. I thought about using that, but decided to try lldap instead.

But I wouldn't want to figure out how to write the password hash into the database of each application, like Grafana or Grocy.


> [...] blows what Mercedes has built out of the water.

Mercedes FSD prototype, 10 years ago: https://youtu.be/G5kJ_8JAp-w


Yeah, that’s nowhere close. It’s easy to make a prototype that looks good in a marketing video while driving a very tightly mapped route. It’s a whole other thing to let anyone use self-driving tech anywhere, especially on routes it has never seen before.


That was ten years ago, remember. All I can say is that these guys are extremely knowledgeable, kind and an absolute joy to work with. Big shout out to Eberhard, Carsten, Christoph, Clemens and Thao, and to the ones not appearing in the video, like Uwe (enjoy your retirement), David and Henning and a lot of others from the chair of Christoph Stiller and from Mercedes research.


>Mercedes FSD prototype, 10 years ago:

Mercedes FSD prototype, 1986 to 1994, via the 400-million-euro, EU-funded Prometheus project: https://www.youtube.com/watch?v=I39sxwYKlEE

It's funny that German and Italian researchers and car makers had the early lead on self-driving tech and then lost it by shelving the technology. Oof.

Which reinforces the point I made in another thread here today: innovation only happens in the EU as long as it's government-funded, and as soon as the funding stops, work stops and everything gets shelved, instead of private industry picking up the slack and funding it further to commercialize it, like in the US. Sad.

“It’s possible that [Germany] threw away its clear vanguard role because research wasn’t consistently continued at the time,” Schmidhuber said. He added that carmakers might have shied away from self-driving technology because it seemed to be in opposition to their marketing, which promoted the idea of a driver in charge of steering a car.


>It's funny that German and Italian researchers and car makers had the early lead on self-driving tech and then lost it by shelving the technology. Oof.

Actually a very common occurrence. I don't think FSD at today's level was possible in '94, and the project's failure was inevitable unless it had been continuously funded for at least 15 more years.

>innovation only happens in the EU as long as it's government-funded, and as soon as the funding stops

Seems like a bad example. Funding stopped because the technology didn't work.


Ernst Dickmanns. Legend. I sat close to him at a CVPR and he could not resist ranting: "How's that new? We did that in the '90s!" =:-D


> What do they have now?

> > Mercedes sprinter

I don't know why but that is hysterical.


In that video they mention doing localization against a prebuilt map of the route, matching images to the model 10 times per second.

That is, by definition, not FSD. Like the system announced today, it is limited-route autonomy.

For comparison, FSD v3 (they are shipping v4 in every vehicle now) performs localization 2,000 times per second, based on a hybrid of every road in OpenStreetMap and a generalized model of roads. That is why it is FULL. Even if you are on an unmapped, brand-new road built yesterday, it will know how to drive appropriately.


They aren’t shipping v4 in the Model 3, so no on the “every vehicle”.


That’s not shipping…

If you buy a Model 3 TODAY, you get v3.


Not on the 3. But they are shipping it with the Model Y.

https://driveteslacanada.ca/news/tesla-now-shipping-model-y-...


So not “every vehicle” then?

