
This would be really popular for SMS, especially with a Mac interface or on iPhone.


It is pretty clear that his movies are meant to explore the existence of psychological phenomena and are not a statement in and of themselves. In that way, I don't think they are meant to be interpreted.


I don't believe that's true. Lynch uses characters as abstractions — each represents a concept, and once you understand them they add up to a 'statement':

* Twin Peaks is a meta-commentary about the lack of balance on the small screen (our desire for sex and violence). Interestingly, this ended up being a meta-meta-commentary when the network forced them to reveal the killer of Laura Palmer prematurely. Laura Palmer is balance, Dale Cooper is the audience, Bob is our desire for sex & violence, …

* Mulholland Drive is about the casting couch and the destruction of the Hollywood dream for women. Rita is the casting couch, the cowboy is Hollywood, …

* Lost Highway appears to be a commentary on plagiarism (namely, other directors plagiarising David Lynch). But it also shares a theme with Twin Peaks as a meta-commentary about film — the shots of the road are meant to look like film, the vehicles are meant to represent movies, …

etc.

He always talks about being “true to the idea” — so all abstractions and all surreal elements must be true to the underlying concept. It’s up to us to work out what the underlying theme is (that links all of the abstractions together).


Very interesting take, thank you!

Of course, my favorite David Lynch movie is "The Straight Story" - what a heartwarming tale!


Lynch has said that his interest in making movies came out of painting, which was the first art form he had an interest in. He had an epiphany where he saw making movies as making moving paintings. Some of the most surreal scenes in his movies/shows are just that, combinations of elements that produce feelings in the viewer in the same way a painting might. Watch that scene in Twin Peaks: The Return in the Fireman's Theatre where someone floats up into the air and there are all sorts of other things going on. It's like a framed painting where there are all these moving elements that combine to create something phenomenal.


What would be the benefit of this?


Thermal stability, as far as I know. So hyperthermophiles, presumably.

edit: hmmm, from this interesting review - https://www.sciencedirect.com/science/article/pii/S0959440X2... (open article) - it looks like some knots might increase mechanical stability and resistance to proteolysis


Oh, cool: they turned a not-truly-a-knot protein (open ends) into a truly knotted one (connected ends), then unfolded it using urea, and it stayed knotted. That's what you'd expect, but it's also pretty cool.


It's interesting. That's the benefit.


Protein design is a multi-billion dollar field.

A single observation, thermostable proteases, was incorporated into modern laundry detergents, making Genentech and Corning billions of dollars in IP.


A category theory textbook opens the "motivation and use cases" section with the following poem:

There's a tiresome young man in Bayshore. / When his fiancée cried 'I adore / the beautiful sea' / he replied 'I agree, / it's pretty, but what is it for?'


Do you have a link to the setup you used?


Do you have code for the course?


The course repo with code and assignments is at: https://github.com/sourcery-ai-bot/MIT_OpenCourseWare-Perfor...


Not my place to share it! Though it can be found online by those who seek.


I don't think I have seen an answer here that actually engages with this question. From my experience, I have yet to see a neural network actually learn representations outside the range in which it was trained. Some papers have tried things like sinusoidal activation functions, which can force a network to fit a repeating function, but when a network extrapolates on its own I would call it pure coincidence.
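A minimal sketch of what I mean (PyTorch; the network, data, and numbers are my own toy illustration, not from any of those papers): fit an MLP to sin(x) on [-pi, pi] and it interpolates fine, but it does not continue the periodic pattern outside the training range.

    # Toy illustration (my own setup): an MLP trained on sin(x) over
    # [-pi, pi] fails to extrapolate the repeating pattern beyond it.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Training data drawn only from [-pi, pi].
    x_train = torch.linspace(-torch.pi, torch.pi, 200).unsqueeze(1)
    y_train = torch.sin(x_train)

    model = nn.Sequential(
        nn.Linear(1, 64), nn.ReLU(),
        nn.Linear(64, 64), nn.ReLU(),
        nn.Linear(64, 1),
    )
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    for _ in range(5000):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x_train), y_train)
        loss.backward()
        opt.step()

    with torch.no_grad():
        x_in = torch.linspace(-torch.pi, torch.pi, 500).unsqueeze(1)
        x_out = torch.linspace(torch.pi, 3 * torch.pi, 500).unsqueeze(1)
        mse_in = nn.functional.mse_loss(model(x_in), torch.sin(x_in)).item()
        mse_out = nn.functional.mse_loss(model(x_out), torch.sin(x_out)).item()

    print(f"MSE inside training range:  {mse_in:.4f}")   # near zero
    print(f"MSE outside training range: {mse_out:.4f}")  # typically large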

On generalization: it's still memorization. I think there has been some evidence that ChatGPT does 'try' to perform some higher-level thinking but still has problems due to the dictionary-style lookup it relies on. The higher-level thinking or AGI that people are excited about is a form of generalization so impressive that we don't really think of it as memorization. But I question whether our desire to see original thought generated is actually as separate from what we're currently seeing as we assume.


> I have yet to see a neural network actually learn representations outside the range in which it was trained

Generalization doesn't require learning representations outside of the training set. It requires learning reusable representations that compose in ways that enable solving unseen problems.

> On generalization: it's still memorization

Not sure what you mean by this. This statement sounds self-contradictory to me. Generalization requires abstraction / compression. Not sure if that's what you mean by memorization.

Overparameterized models are able to generalize (and tend to, when trained appropriately) because there are far more parameterizations that minimize loss by compressing knowledge than there are parameterizations that minimize loss without compression.

This is fairly easy to see. Imagine a dataset and model such that the model has barely enough capacity to learn the dataset without compression. The only degrees of freedom would be through changes in basis. In contrast, if the model uses compression, that would increase the degrees of freedom. The more compression, the more degrees of freedom, and the more parameterizations that would minimize the loss.

If stochastic gradient descent is roughly as likely to find any given compressed minimum as any given uncompressed one, then the fact that there are exponentially more compressed minima than uncompressed minima means it will tend to find a compressed minimum.

Of course this is only a probabilistic argument, and doesn't guarantee compression / generalization. And in fact we know that there are ways to train a model such that it will not generalize, such as training for many epochs on a small dataset without augmentation.
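As a toy illustration of that failure mode (my own made-up setup, nothing canonical): overfit an overparameterized MLP on a handful of noisy points for many epochs, and training loss goes to ~0 while error on fresh samples from the same distribution stays well above the noise floor, i.e. memorization rather than compression.

    # Toy illustration (my own setup): many epochs on a tiny dataset,
    # no augmentation, yields memorization instead of generalization.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    def sample(n):
        x = torch.rand(n, 1) * 4 - 2        # x in [-2, 2]
        y = x + 0.3 * torch.randn(n, 1)     # noisy linear target
        return x, y

    x_train, y_train = sample(10)           # tiny training set
    x_test, y_test = sample(1000)           # fresh draws, same distribution

    model = nn.Sequential(
        nn.Linear(1, 256), nn.ReLU(),
        nn.Linear(256, 256), nn.ReLU(),
        nn.Linear(256, 1),
    )
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    for _ in range(20000):                  # "many epochs" over 10 points
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x_train), y_train)
        loss.backward()
        opt.step()

    with torch.no_grad():
        train_mse = nn.functional.mse_loss(model(x_train), y_train).item()
        test_mse = nn.functional.mse_loss(model(x_test), y_test).item()

    # Train MSE ends up near 0 (the noise has been memorized); test MSE
    # typically stays well above the 0.09 noise floor of the data.
    print(f"train MSE: {train_mse:.4f}")
    print(f"test MSE:  {test_mse:.4f}")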


The issue is that we are prone to inflate the complexity of our own processing logic. Ultimately we are pattern-recognition machines combined with abstract representation. This allows us to connect the dots between events in the world and apply principles from one domain to another.

But, like all complexity, it is reducible to component parts.

(In fact, we know this because we evolved to have this ability. )


Calling us "pattern recognition machines capable of abstract representation" I think is correct, but it's a (rather) broad description of what we can do and not really a comment on how our minds work. Sure, from personal observation, it seems like we sometimes overcomplicate self-analysis ("I'm feeling bad – why? oh, there are these other things that happened and related problems I have and maybe they're all manifestations of one or two deeper problems, &c" when in reality I'm just tired or hungry), but that seems like evidence we're both simpler than we think and also more complex than you'd expect (so much mental machinery for such straightforward problems!).

I read Language in Our Brain [1] recently and I was amazed by what we've learned about the neurological basis of language, but I was even more astounded at how profoundly little we know.

> But, like all complexity, it is reducible to component parts.

This is just false, no? Sometimes horrendously complicated systems are made of simple parts that interact in ways that are intractable to predict or that defy reduction.

[1] https://mitpress.mit.edu/9780262036924/language-in-our-brain


Is there any news on what datasets Llama 2, or ChatGPT for that matter, were trained on?


In a similar vein, I was using DraftKings to play draft fantasy football, I think two years ago. I spent $20 but split the bets across ten games and chose the players I would play against. I lost all ten games, but the other players all used the same team. Makes you think.

