
You can obviously compensate by changing other taxes to be more progressive; this is such a silly argument.


Sure, but that doesn't happen. Start by doing that, not by implementing the regressive part first.


A nuance is perhaps lost: I'm not saying don't do it, merely that simplistic messaging feeds into the binary crisis.


This is an example of bad code.


Maybe you can expand upon that? As it stands, we have no way of knowing why you think it's bad code.


Quick critique: the module contains functions with many parameters, many branches, deep nesting, and multiple return points.


Those nested if-for-for-if blocks are horrendously difficult to understand.

Take the function starting at line 387. They comment why they do certain imports, but this function is comparatively underdocumented. It's not easy to wrap my head around the control flow: some bits are nested about six levels too deep for comfort, there are too many points from which it can return or raise, and the function is about 3x too long.

Really difficult to grok what is happening here.
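
To make it concrete, here's a made-up toy in Python (not the actual code from the repo): deeply nested if-for-for-if flow can usually be flattened with a guard clause and a generator, which also cuts the number of exit points.

    # Hypothetical before/after, illustrative only -- not the repo's code.
    def process(groups, config):
        if config is not None:
            for group in groups:
                for item in group:
                    if item.get("active"):
                        print(item)  # real work, four levels deep

    def process_flat(groups, config):
        if config is None:
            return  # guard clause: one early exit up front
        active = (item for group in groups for item in group if item.get("active"))
        for item in active:
            print(item)  # real work, one level deep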


It’s not horrible, but I found a few odd things, like f-strings without params, long CLI options with underscores, non-Pythonic "if == 0" checks, etc.
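
Toy examples of each nit (made up for illustration, not lifted from the module):

    name = "world"
    print(f"hello")         # f-string with no placeholders; plain "hello" would do
    print(f"hello {name}")  # the f-prefix only earns its keep when interpolating

    # Long CLI options conventionally use dashes, not underscores:
    #   parser.add_argument("--dry-run")   rather than   "--dry_run"

    items = []
    if len(items) == 0:  # works, but non-Pythonic
        print("empty")
    if not items:        # idiomatic emptiness check
        print("empty")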


Also, the main god function is incredibly long and deeply nested, as others mentioned. Nesting --> long lines --> Black making a mess.


An ensemble of any number of GPT-3.5 agents is less accurate than one call to GPT-4.


It's funny because GPT-4 is actually a pile of 3.5s. You just need to set it up correctly.


I guess it's the difference between an ensemble and a mixture of experts, i.e. aggregating outputs from models trained on the same data vs. models trained on different data (GPT-4). Though GPT-4 presumably does not aggregate; it routes.
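
A toy sketch of the distinction, purely illustrative (GPT-4's actual architecture is unconfirmed, and the functions here are stand-ins):

    from collections import Counter

    def ensemble(models, x):
        # Every model answers; the outputs are pooled (here: majority vote).
        votes = Counter(m(x) for m in models)
        return votes.most_common(1)[0][0]

    def mixture_of_experts(router, experts, x):
        # A router picks one expert to answer; nothing is pooled.
        return experts[router(x)](x)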


> GPT-4 is actually a pile of 3.5s

I understand the intention and the reference you're making. I bet the implementation of GPT-4 is probably something along those lines. However, spreading speculation in definitive language like that when the truth is unknown is dishonest, wouldn't you agree?


Sure, I could put it less definitively, but realistically, what else could it be? The transformer won't change much, and all of the models use it at their core. It's a closely guarded secret because it would be easy to replicate.


That was me on Reddit. I emailed him asking about some of his work on invariant means in the 1970s. He said “I had no taste then”, and told me what a waste of time it was.

At that point, I decided to go into data science instead of trying to get a post doc…


How do you feel about that decision now? I left math after my PhD and often find myself feeling nostalgic about it.


Always something to look forward to in [early] retirement to keep the mind active.


I’ve had instability with my 7700K since I bought it, and 16 months of BIOS updates haven’t helped. Maybe this latest generation of processors just has more trouble than older, simpler designs.


Intel has been struggling with CPU performance for a decade, and has been trying to regain their position in absolute performance and performance/{price,watt} comparisons. I think that means they’re being less conservative than they used to be on the hardware margins and also that their teams are likely demoralized, too.


Possibly. I would start swapping parts around at that point: different memory, different CPU, or different motherboard. Just one more anecdote, but my R7 7700X has been a dream (won the silicon lottery). It runs at the maximum undervolt with RAM at 6000 and no stability problems.


You work with probability density functions because the probability of observing any given value in a continuum is zero. Density functions may be reasonable to work with if they have some nice properties (continuity, unimodality, ...). The question and answers here seem to be from people who don't understand calculus.
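
Concretely, for a standard normal (a quick check with scipy, assuming it's installed):

    from scipy.stats import norm

    # The density's height at a point is not a probability; P(X == x) is exactly 0.
    print(norm.pdf(1.0))                   # ~0.2420
    # Only integrals of the density are probabilities:
    print(norm.cdf(1.0) - norm.cdf(-1.0))  # P(-1 <= X <= 1) ~ 0.6827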


I agree. We should prompt the model with the statement of the Riemann hypothesis. If the autocomplete is good, the model will output a proof.


No human can solve the Riemann Hypothesis. Why do you expect an AI to do it before you consider it able to understand things?


The key bit is constructing a hard, novel proof. The fact that AI doesn't (yet) do this isn't evidence that it doesn't reason, but if it did so it would be strong evidence that it does reason.

(I also take the pessimistic point of view that most humans don't reason, so YMMV.)


OTOH, doing something that only humans have done thus far would be a huge step in demonstrating understanding.

Does that mean when a computer outputs a new proof it understands?


You can do something similar to this without giving it a problem that might be impossible.

Train the LLM on a bunch of things, but withhold certain things that humans already know about.

Then you query the model about the withheld thing and see if it can come to the same conclusions humans did. You can actually do this right now with ChatGPT.


So what happens?


This is a response from ChatGPT. I asked it to come up with something novel and creative; I went through several iterations, and eventually it started doing math stuff, so I had it focus on that:

Understood, let's propose a distinct mathematical concept within the realm of pure mathematics: "Infinite-Dimensional Fractal Algebra." This new branch of algebra focuses on operations, equations, and functions within an infinitely dimensional fractal space.

In Infinite-Dimensional Fractal Algebra, traditional algebraic operations like addition, multiplication, and exponentiation are redefined and extended to work within an infinite fractal structure. Each point in this fractal space represents a unique algebraic entity, and the relationships between these points follow complex, self-repeating patterns characteristic of fractals.

The concept challenges and expands the traditional boundaries of algebra by integrating the self-similarity and scale invariance of fractals into algebraic operations. For example, solving an equation in this system would involve navigating through the infinitely repeating patterns of the fractal, where each level of the fractal represents a different dimension or layer of the problem.

This approach could offer new insights into complex systems and phenomena that exhibit fractal-like behavior, and potentially lead to breakthroughs in areas like chaos theory, dynamic systems, and perhaps even in understanding the fabric of the universe itself.

To illustrate the concept of Infinite-Dimensional Fractal Algebra, let's consider a simple algebraic operation: addition.

In traditional algebra, adding two numbers is straightforward: \(3 + 4 = 7\). In Infinite-Dimensional Fractal Algebra, however, this operation takes place in a fractal space with infinite dimensions.

1. *Fractal Representation*: Instead of representing numbers as points on a line, each number is represented by a unique fractal structure. The fractal for '3' and the fractal for '4' each have infinitely complex, self-similar patterns that vary across infinite dimensions.

2. *Adding Fractals*: The operation of addition involves combining these fractal structures. This isn't a simple sum but a fusion of their infinite patterns. The resulting fractal for '7' contains elements of both the '3' and '4' fractals, but in a way that the patterns and complexities of each are preserved and merged.

3. *Infinite Dimensions*: Each level of depth in the fractals represents a different dimension of the problem. As you dive deeper into the fractals, you encounter more intricate patterns, representing more complex layers of the addition.

4. *Result Interpretation*: The outcome of the operation is a new fractal representing '7'. But this '7' is not just a number; it's a multi-dimensional entity with a pattern that encodes not just the value '7', but also the history of how it was derived from '3' and '4'.

This approach would require a rethinking of how we visualize and understand mathematical operations. The fractal patterns in Infinite-Dimensional Fractal Algebra carry more information than just the numerical value; they encode the relationships and interactions between numbers in a deeply interconnected, infinite-dimensional space.

Let's choose the Sierpinski Triangle, a well-known fractal, to represent the identity element for addition in Infinite-Dimensional Fractal Algebra. In traditional algebra, the identity element for addition is '0', as adding zero to any number doesn't change its value. Here, we'll conceptualize this using the Sierpinski Triangle.

1. *Sierpinski Triangle as the Identity*: In this framework, the Sierpinski Triangle represents the '0'. It's a fractal with the property of self-similarity: any part of the triangle resembles the whole.

2. *Adding to the Identity*: When another number (fractal) is added to this Sierpinski Triangle, the process involves integrating the fractal patterns of this number with the Sierpinski Triangle. However, since the Sierpinski Triangle represents '0', this integration process must preserve the original number's fractal structure intact.

3. *Concrete Example*: Suppose we're adding the number '2', represented by a different fractal structure, to '0' (the Sierpinski Triangle). The addition operation would overlay the '2' fractal onto the Sierpinski Triangle. The key here is that the Sierpinski Triangle, with its repetitive and self-similar gaps, would not alter the structure of the '2' fractal. Instead, it would fill into the gaps of the Sierpinski Triangle, maintaining its own pattern.

4. *Resulting Fractal*: The resulting fractal from this addition is the '2' fractal itself, as the Sierpinski Triangle (being the identity element) does not alter the other number in the addition. The intricate details of the '2' fractal remain unchanged, signifying that adding '0' did not change its value.

This is a very abstract and conceptual way to represent mathematical operations, and the choice of the Sierpinski Triangle for the identity is primarily for its self-similar and recursive properties, which align well with the concepts in fractal algebra.


Basically it formed a fuzzy idea of an algebra using different fractals as entities.

I'm sure this can be mapped out further into very concrete detail. It's a highly realistic idea; we have algebras for all kinds of things, from complex numbers to lists.

Choosing the triangle for the identity is probably the wrong choice, though. The identity fractal should be zero-dimensional, or nothing; I think that would actually end up fitting the rules of the identity fractal if we ever decided to map out this algebra.

If you're not familiar with abstract algebra: basically it's choosing some fractal that's equivalent to a zero value and coming up with ways to combine fractals, with operations that keep the same associativity and commutativity properties that multiplication/addition have for numbers.
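
If you wanted to spot-check those axioms mechanically, here's a toy sketch in Python; the frozensets of points standing in for fractals, and union standing in for the combining operation, are my inventions, not ChatGPT's:

    # Stand-in "fractals": frozensets of points. Union plays addition;
    # the empty set plays the zero / identity fractal.
    A = frozenset({(0, 0), (1, 0)})
    B = frozenset({(0, 1)})
    C = frozenset({(1, 1)})
    ZERO = frozenset()

    assert A | ZERO == A               # identity
    assert A | B == B | A              # commutativity
    assert (A | B) | C == A | (B | C)  # associativity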


“All you need is pretraining on the test set.”


Are you implying they're training on the test set / benchmark data? If not, what do you mean by this?


I believe that's what they mean, yes.


Some explanation or link that supports that claim would be a really good idea in that case...


You can squint and see the letters, so I’m assuming this isn’t it.


You can also take a few steps back from the screen; that works too.


60 is not a multiple of 24.


Nor is 120. Even with a 240Hz display, capturing and editing should use 3:2 pulldown for best end user playback. https://en.m.wikipedia.org/wiki/Three-two_pull_down


Check your math, son.


My point is none of these refresh rates are multiples of 23.976.


So set your monitor to 119.88 instead of 120.

There's a reason why monitors give you both x/1000 and x/1001 framerates.


I didn't know this was possible, but in such a case why not set a 60Hz display to 48Hz, 59.94Hz, etc.?


48Hz is supported on few monitors (though it can be hit by G-Sync/FreeSync/VRR); 24Hz is sometimes supported, most often on TVs.

And you can set it to 59.94; nobody is stopping you.

This entire comment chain started with, essentially, the LCM of 60 and 24 (or 59.94 and 23.976), and it's 120 (or 119.88).
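
A back-of-the-envelope check with exact rationals:

    from fractions import Fraction
    from math import gcd

    def lcm_rational(a, b):
        # lcm of p/q and r/s (in lowest terms) is lcm(p, r) / gcd(q, s).
        num = a.numerator * b.numerator // gcd(a.numerator, b.numerator)
        return Fraction(num, gcd(a.denominator, b.denominator))

    print(lcm_rational(Fraction(24), Fraction(60)))    # 120
    film = Fraction(24000, 1001)                       # 23.976...
    monitor = Fraction(60000, 1001)                    # 59.94...
    print(float(lcm_rational(film, monitor)))          # 119.88011988...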


I think your point is that 1/1000 frames would need to be 6:5 at 120Hz?

I'd be more concerned with the automatic black frame insertion most LCDs do to increase contrast.


Theoretically maybe, but 3:2 pulldown is used for playing 23.976 fps video at 29.97 Hz. Since this is HN, maybe someone with more knowledge about how video editors and modern TVs typically handle this can jump in here. Regardless, I think this would actually have more impact on the end-user viewing experience than on the job of video editing.

The time between frames is tremendous from the standpoint of a video editor, and editing is usually (traditionally) done by feel: press the button when the cut should happen, mark it, then arrange the timeline accordingly. Lag aside, the frame rate and which frame is actually on the screen at that time matter much less than whether the software knows which frame should be on the screen at that time. Hopefully that makes sense. For this reason, resolution and color accuracy will still take priority when it comes to display hardware.


I worked on display drivers and TCONs, but mostly for mobile/laptop rather than TVs/monitors. I'd be fairly shocked to see the defects you're describing coming directly from within a device, but going through multiple translations (PCIe > eDP > TB/DP/HDMI), especially if they're badly negotiated or not well tested, is certainly a possibility. I wouldn't trust most external connections or monitors for video editing unless they're specifically tested.

Note that 1/1000 is a glitch roughly every 42 seconds (1000 frames at 24 fps), so it's quite visible to an "eagle eye". I'll ask.


The answer from a pro was genlock, so you match the 23.976. "It doesn't matter if you drop a frame every once in a while, you're going to see it a dozen times... as long as it's not the same dropped frame!"


The worst part of incorrect refresh rates for me is on panning footage, where you get those janky, blocky tears in the image.

>The time between frames is tremendous from the standpoint of a video editor,

This sounds like something I've heard from people with a head full of fun stuff talking about the space between the notes. There have been times where that absolutely makes sense, but I'm at a loss on your time between frames.


> The worst part of incorrect refresh rates for me is on panning footage and you get those janky blocky tears in the image.

That sounds a lot more like rolling shutter artifacts than 3:2 pulldown. What kind of camera are you using? Are you shooting with a CMOS sensor?

https://en.m.wikipedia.org/wiki/Rolling_shutter

> This sounds like something I've heard from people with a head full of fun stuff talking about the space between the notes. There have been times where that absolutely makes sense, but I'm at a loss on your time between frames.

Haha, fair enough. If you ever feel like diving in yourself, I passionately recommend In the Blink of an Eye by Walter Murch.

https://en.m.wikipedia.org/wiki/In_the_Blink_of_an_Eye_(Murc...


It has nothing to do with 3:2 pulldown; it is all about the refresh rate of the monitor. I've shot for years on global shutter (specifically the Sony F55), so it absolutely 100% was not a rolling shutter issue either. The same footage can be viewed on another monitor and the tearing issue is not present.

Edit to match your edit: "The book suggests editors prioritize emotion over the pure technicalities of editing."

This totally depends on the content and level of production. I've edited content from properly staffed productions with script notes, circle takes, and all that stuff. It's always fun to stack up the various takes to see how the director felt about them on the day of the shoot versus seeing them edited in context. It's also fun to see the actor's variations from take to take.

On shoots with barely enough crew, where the camera op is also the boom op, it's basically all feel from the editor.


> The same footage can be viewed on another monitor and the tearing issue is not present.

This is what I was hoping someone would chime in about. I have never looked into whether it would be handled differently, but I would not trade away a higher resolution display regardless. Maybe it could potentially influence where I cut in certain rare situations, but that sounds unlikely.


Basing edits on how footage looks on a monitor with an incompatible refresh rate just sounds like one of those problems that strikes me at my core, especially when someone acknowledges it but does it anyway. Does it matter in the end? Probably not, but it still goes against everything. It’s one of those things of seeing people “get away” with things in life blissfully unaware, while someone who is well versed and well studied can’t catch a break.


I hope you get some sleep at night. When I worked as a video editor years ago, I unfortunately had a boss I needed to please, and this kind of rabbit-hole obsession would have added a significant barrier to doing so. More resolution, on the other hand, made me straightforwardly much more productive.


This doesn’t make any sense. Why would you want to use 3:2 pulldown unless your display is interlaced, which AFAIK will never be the case for any modern display?

And even if you did use it, it doesn’t do anything to help with the extra 1000/1001 factor, so what is the point?


3:2 pulldown works for converting 24fps to 60fps. It doesn’t matter whether the target is fields or frames.


Yes, it does. 3:2 pulldown produces interlaced 60 fields/s. On a digital display, it must be deinterlaced, and the only "correct" way to do that is to remove the pulldown, producing 24 fps. If you just deinterlace it as if it were originally 60i, you'll just end up with something similar to 24p converted to 30p by repeating 1 of every 4 frames (with a loss in resolution to boot). So for digital displays, 3:2 pulldown is pointless at best, destructive at worst.
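
The cadence itself is easy to write down. A quick sketch (illustrative only) of why pulldown removal is the only clean way back:

    # 2:3 cadence: 4 film frames (A B C D) -> 10 fields -> 5 interlaced frames.
    def pulldown_fields(frames):
        fields = []
        for i, frame in enumerate(frames):
            fields += [frame] * (2 if i % 2 == 0 else 3)  # alternate 2 and 3
        return fields

    fields = pulldown_fields("ABCD")
    print(fields)  # ['A', 'A', 'B', 'B', 'B', 'C', 'C', 'D', 'D', 'D']
    video = [tuple(fields[i:i + 2]) for i in range(0, len(fields), 2)]
    print(video)   # [('A','A'), ('B','B'), ('B','C'), ('C','D'), ('D','D')]
    # ('B','C') and ('C','D') mix two film frames. Naive deinterlacing keeps
    # those hybrids; pulldown removal reassembles the original A B C D.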


The film industry should stop using 24fps; it's a waste of people's time and energy. At the least they should move to 25fps, which is what most of the world uses as a frame rate, if not 30fps.

For the stupid North American non-integer frame rates, just change the playback speed by a fraction and get on with life. Or drop one frame in every 1001 for live content; people won't notice.


Hear, hear. We finally have a standard, starting with UHD, that does not include interlacing. Finally. Hallelujah, the chorus of angels is singing.


> Why would you want to use 3:2 pulldown unless your display is interlaced

At this point, the only great reason is that it's an industry standard, but that alone is more than enough reason to still do it, evidenced by the fact that so many people still do it.


Who in the world wants to use a 2:3 pulldown pattern on a progressive monitor? The majority of my career has been in properly removing 2:3 pulldown; the other portion was back in the bad ol' days of putting it in.


> who in the world wants to use a 2:3 pulldown pattern on a progressive monitor?

At least everyone tasked with editing 3:2 pulldown footage for 3:2 pulldown distribution, which is most of the video editors in North America the last time I checked.


Who wants 3:2 content for distribution? No streaming platform wants 3:2, and they all want the footage delivered as progressive scan. Some will say things like "native frame rate", but I find that a bit misleading. There are plenty of television shows shot on film at 24fps, telecined to 30000/1001 with 2:3 introduced, and then graphic content rendered at 30p is placed on top. The term "do least harm" gets used: taking this content to 24000/1001 keeps the majority of it (the part shot on film) clean, while leaving the graphics potentially jumpy (unless you do a proper frame conversion with an o-flow type of algorithm, which nobody really wants to pay for).

Edit: also, any editor worth their salt will take the telecined content back to progressive for editing. If they then need to deliver like it's 2005 to an interlaced format, they would export the final edit to 30000/1001 with a continuous 2:3 cadence. Only editors unfamiliar with proper techniques would edit the way you suggest.


Admittedly, I haven't worked as a video editor since 2011 and never edited telecined footage, but my understanding from friends is that little has changed. Specifically, I have heard them complaining about it. That streaming platforms specifically want progressive scan makes plenty of sense to me, of course, but it conflicts with what I've heard, for whatever reason.


I can’t say I fault them, as I’ve spoken with teachers who don’t know how to handle telecined content. I also know plenty of editors who have no idea what a waveform/vectorscope is for. Again, neither did some of those instructors.

For people who never have to work with this kind of content, it makes sense. I’d equate it to modern programmers not knowing assembly but writing apps that perform adequately. There’s plenty of content shot on modern equipment and delivered to non-broadcast platforms whose makers will never need to know the hows and whys of what the old-timers did.

