If you look for a lot of the great classics, the audiobook search results are inundated with basic TTS "audiobooks" that are impossible to filter out.
These are impossible to listen to because they lack the proper intonation marking the end of sentences, which makes them very tiring to parse.
They might be better than recordings that sound like they were made inside a tuna can, especially if you want to listen to them in traffic (a common requirement), but that's about it.
The alternative, if you want real quality recordings, is to stop reading classics and instead pick up the latest Japanese isekai murder mystery; those have very good options on the market.
Anyway, I don't think it needs more justification: it covers a good niche use case.
I'm checking what the actual quality is (not a cherry-picked example), but:
Started at: 13:20:04
Total characters: 264,081
Total words: 41548
Reading chapter 1 (197,687 characters)...
That was 1h30 ago; there is no progress notification of any kind, so I'm hoping it will finish at some point. It's using 100% of all available CPUs, so it's quite a bother.
(This is "A Tale of a Tub" by Swift; it's about half the length of a typical novel.)
The book isn't actually one single chapter: its chapters are called "Section", and so they are ignored! It should be simple to keep a dictionary of the different unit names that books use (I assume "Part" would fail too, as would the hilarious "Catpter" of some cat-themed kids' book, but that one is more complicated, I guess?). A sketch of what I mean is below.
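Something along these lines is what I have in mind; a minimal sketch assuming plain-text input, with a made-up list of unit names (the regex and function are mine, not the tool's):

    import re

    # Hypothetical unit names; extend as new books turn up ("Catpter" would
    # need its own entry, or fuzzy matching).
    UNIT_NAMES = ["chapter", "section", "part", "book", "canto"]

    # A line counts as a heading if it starts with a unit name followed by an
    # arabic or roman numeral, e.g. "Section IV" or "Chapter 12".
    HEADING_RE = re.compile(
        r"^\s*(?:" + "|".join(UNIT_NAMES) + r")\s+(?:\d+|[IVXLCDM]+)\b",
        re.IGNORECASE,
    )

    def split_into_chapters(text: str) -> list[str]:
        """Split plain text on any line that looks like a unit heading."""
        chapters, current = [], []
        for line in text.splitlines():
            if HEADING_RE.match(line) and current:
                chapters.append("\n".join(current))
                current = []
            current.append(line)
        if current:
            chapters.append("\n".join(current))
        return chapters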
It did finish, and the result is basically as good as the provided example, so I'd say quite good! Next time I'll plan to start processing a book before going to bed.
Chapter 1 read in 6033.30 seconds (33 characters per second)
I almost never use Chrome-based browsers, but I was recently forced to because breakpoints were simply not working in Firefox. You can strawman all you want: there are, unfortunately, technical points where Google abuses its position to force its standards through, but the primary architect of Firefox's downfall is Mozilla.
For the record, I used a Firefox phone for many years (and yes, it did cause me a lot of problems), and I remember vividly when they announced a luxury Firefox phone about one week before killing the project.
Top comment:
>He is actually that one kid at school who constantly lies to seem cool, except he is a grown man with tons of money and power
That's really what it is; this whole affair has me reliving grade 4, with the kids who would invent crazy cheat codes to see Lara Croft naked and stuff like that. I haven't interacted with individuals who behave like this since then, so it's really a trip down memory lane.
“Work like hell. You just have to put in 80 to 100 hour weeks every week. This improves the odds of success. If others are putting in 40 hour work weeks and you’re putting in 100 hour work weeks, then even if you’re doing the same thing, you will achieve in 4 months what it takes them a year.”
Of course, those 60 to 80 extra hours you put in are subcontracted to underpaid workers who are willing to accept almost anything because of how tough the economic times are, so you should definitely take advantage of that. And since we're talking about success, let's use our economic advantage to further our gains by moving the levers of power. Let's bring in more cheap labor to keep the wagies in check!
I've made a short LLM-powered VN, and the LLM's actions were restricted to local interactions only, because of how weak it is at making up the story. It's great at replacing parser-based interactions, but I think that's it.
There's a second technical problem: such stories are represented by a form of state machine, and you would need to recompile it on the fly, which makes many checks very difficult (you would need to be able to check reachability on the fly, chunk transitions, etc.; see the sketch below). I think it would take years to get an LLM to the level of some of the great IF games, and not just a cool PoC.
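To make the state-machine point concrete, here is a minimal sketch (the names and structure are mine, not from any real IF engine): the story graph is just scene -> transitions, and every time the LLM rewrites a transition you have to re-run checks like "is the ending still reachable?":

    from collections import deque

    # Hypothetical story graph: scene name -> scenes reachable from it.
    StoryGraph = dict[str, set[str]]

    def reachable(graph: StoryGraph, start: str) -> set[str]:
        """Breadth-first search over scene transitions."""
        seen, queue = {start}, deque([start])
        while queue:
            node = queue.popleft()
            for nxt in graph.get(node, set()):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return seen

    # After the LLM adds or removes a transition at runtime, re-verify that
    # the ending is still reachable before accepting the edit.
    story = {"intro": {"garden", "library"}, "garden": {"ending"}, "library": set()}
    assert "ending" in reachable(story, "intro")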
I think you misunderstood GP's very concise point. Allow me to expand.
A developer is a Turing Machine that produces low-quality code -- and there is a hierarchy of developers (with the mythical 10x on top) that produces a hierarchy of time classes depending on how fast they are. Depending on the complexity class you're in, the TM may not be able to produce, say, a regex at all, or might only manage it in exponential time, while the 10x can do it in linear time (linear in the number of characters and bugs). LLMs act as an oracle that can produce a regex just by asking (an O(1) operation), and that changes the whole time-class hierarchy of developers.
So it's the same as in complexity theory: introducing LLM oracles creates analogs of the existing complexity hierarchies.
This is of particular interest for investors looking to reduce developer cost.
Watching GP and parent discuss oracles.. and missing an actual oracle is so HN.. we use imperfect LLM oracles as sounding boards while we theorize about perfect ones.
Oh, the age we live in - of this new math.
LLMs are math that isn't exact, except when it is more exact, and not always when you need it to be. LLM math can't be too accurate or warm or the math doesn't work as well. Like an O(1) operation that randomly decides to run in O(n!).
Humans, technically, are just higher-maintenance Turing Machines who incrementally write less buggy code.
LLMs get trained on our obi wan buggy odysseys to help them spit out mindbending new ways of regex-ing something and leap to solutions, and chase shiny new things, however it originated.
This shows us we must not only communicate with LLM oracles and perfect Oracles using natural language programming, but also use the Force; just maybe don't trust it to parse HTML. Maybe it knows why it's so hard to "ace" any SWE interview, land a high-paying job, and keep it for more than 12 months.
An oracle is basically any device/API that you can plug into a Turing Machine to give you the answer to some problem.
You can have an oracle to answer the question "what is the optimal path that visits all these points" (an NP-complete problem) or even "does this program have an input that makes it loop infinitely" (an undecidable problem).
The point is simply to explore the consequences for computability and complexity if Turing machines can access such oracles. By definition, a machine with the first type of oracle gets the answer to that problem in a single query, and (via polynomial-time reductions) can solve any NP-complete problem in polynomial time. But how does it fare on harder problems, such as NEXPTIME (non-deterministic exponential time) or PSPACE (polynomial space)? Does it even help or not?
The idea is to test different classes of problems against different classes of oracles to create analogs of the complexity hierarchy.
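For what it's worth, the textbook instance of this construction (standard material, not something from the thread) is the polynomial hierarchy, where each level is an NP machine equipped with an oracle for the level below:

    \Sigma_0^p = \mathrm{P}, \qquad \Sigma_{k+1}^p = \mathrm{NP}^{\Sigma_k^p}, \qquad \mathrm{PH} = \bigcup_k \Sigma_k^p

Whether those levels actually differ is exactly the kind of question the oracle formalism lets you state precisely.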
The hope is that the relationships between these analogs provide a deeper understanding of the standard hierarchy.
This is an implementation of Girard's "transcendental syntax" program, which aims to give logic foundations that rely neither on axiomatics nor on a form of Tarskian semantics (Tarskian semantics is the idea that "A & B" is true means that "A" is true and "B" is true; you've simply moved the "and" to a "meta" level rather than the logical one). The program is more than 10 years old, with the first written versions appearing around 2016 and the ideas appearing in his talks before that.
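Spelled out as the usual Tarski-style truth clause (my notation, just making that parenthetical explicit):

    \llbracket A \wedge B \rrbracket = \mathrm{true} \iff \llbracket A \rrbracket = \mathrm{true} \text{ and } \llbracket B \rrbracket = \mathrm{true}

The right-hand "and" is the meta-level one that Girard objects to.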
Girard has been a vocal critic of foundational approaches, labeling them as "hell levels", with the typical approach of set theory and Tarskian semantics as the lowest level and category theory as a "less bad" one (at least one level above).
One issue with his program is that he mixes abstract, philosophical ideas with technical ones.
So even if some of these things have interesting technical applications, they may look quite different when viewed from a more philosophical point of view.
For instance, set theory as the foundation of mathematics is a pretty solid model, but it is seen as fundamentally unsatisfying for many reasons -- most famously the continuous hypothesis. Gödel and other very high-profile mathematicians found it a deeply unsatisfying issue, even though from a mathematical model-theory point of view it's not even a paradox.
So the new foundational approaches tend to come with perhaps deeper philosophical problems of their own; for example, see Jacob Lurie's critique of the Univalent Foundations program (after the "No comment" meme, he laid out a long list of issues with it).
The other issue with this particular work is that it uses new vocabulary for everything to avoid the baggage of usual mathematical logic, but this gives the work a weird vibe and makes it hard to get into without dedicating a lot of effort.
The result is therefore something that is supposed to solve many longstanding problems in both the philosophical and the technical approaches to the foundations of mathematics, but that has not had a big impact on the community. This is not too surprising either: the lambda calculus and other logical works were also initially seen as trivial mathematical games.
We'll see whether it's a case of being too novel to be fully appreciated; this work seems to explore it in a technical way to help answer that question.
I tried looking into the Transcendental Syntax I paper. It starts with
> We study logic in the light of the Kantian distinction between analytic (untyped, meaningless, locative) answers and synthetic (typed, meaningful, spiritual) questions. Which is specially relevant to proof-theory: in a proof-net, the upper part is locative, whereas the lower part is spiritual: a posteriori (explicit) as far as correctness is concerned, a priori (implicit) for questions dealing with consequence, typically cut-elimination. The divides locative/spiritual and explicit/implicit give rise to four blocks which are enough to explain the whole logical activity.
Honestly, this and the rest of the paper read like a caricature of (a particularly grumpy) continental philosopher. I suspect many mathematicians don't engage with this because they are drawn to precision. From what I read (and from your comment), I have no idea what this program really is about. Maybe there's something of value in there but it seems really hard to tell.
Girard is one of the top logicians of his time. He can write standard mathematical prose when he wants to. In turn, when he chooses to use the continental philosophy style, it's a deliberate choice: people who can't read it are not in the target audience anyway.
Somebody who worked through the two volumes of "Proof Theory and Logical Complexity", and worked through the later chapters of "The Blind Spot" will already be used to the style, so won't find it an obstacle when reading the Transcendental Syntax papers. And those who didn't read these works? Well, they don't know the prerequisites anyway (the author assumes), so making the style more palatable to them is not a priority.
The aforementioned works are better starting points: they cover classical proof theory, so there are many other textbooks that can be consulted while the reader develops familiarity with Girard's style. And one needs to know the material to understand Transcendental Syntax anyway.
Yeah, after doing some research I realised that he isn't some crank and actually has done a lot of relevant work in logic. I still don't think that justifies writing in a deliberately obtuse style, but I guess he can do whatever he wants.
His book "The Blind Spot: A Course in Logic" is, despite its title, a bad offender in mixing fairly outrageous claims and philosophical rants with technical statements; and it is one of his most accessible texts. His writing is deliberately off-putting and hard to read; you practically need to already understand his ideas to understand his writing.
His main contribution, through linear logic, is a way to separate the 'elementary' operations of logic into simpler ones. Linear logic decomposes the classical "and" and "or" each into a multiplicative and an additive version, plus some modalities. This gives a more precise encoding of logical operations that can directly link logic to complexity theory, and it has interesting applications to compiler design.
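Concretely, the standard decomposition (textbook linear-logic notation, not taken from the comment) is:

    A \wedge B \;\rightsquigarrow\; A \otimes B \ \text{(multiplicative "times")} \quad\text{or}\quad A \mathbin{\&} B \ \text{(additive "with")}
    A \vee B \;\rightsquigarrow\; A \parr B \ \text{(multiplicative "par")} \quad\text{or}\quad A \oplus B \ \text{(additive "plus")}
    \text{exponentials: } {!}A, \ {?}A \ \text{(restore unrestricted copying and discarding)}

Restricting the exponentials is what yields the complexity-theoretic characterizations (e.g. light and elementary linear logic for polynomial-time and elementary functions).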
Note that academia has been very receptive to his technical achievements, such as the proof of consistency of System F through reducibility candidates, but much less welcoming of some of the other papers that he deemed far more interesting (as we saw recently with Terence Tao's "I got a paper rejected today"). I think his writing this way is a reaction to that; rather than try to court people who might reject his ideas, he pre-filters for the people who are willing to dig for the good stuff in them.
> For instance, set theory as the foundations of mathematics is a pretty solid model but it is seen as fundamentally unsatisfying for many reasons -- most famously the continuous hypothesis.
To confirm, s/continuous hypothesis/continuum hypothesis/ ?
Also, for the curious, here's a paper pushing back on the idea that the continuum hypothesis et al. even "need" to be resolved in the first place: https://arxiv.org/abs/1108.4223 (The set-theoretic multiverse, JD Hamkins, 2011). (I don't know anything about set theory myself, so I can't personally comment on what it says—but AFAICT Hamkins is respected in the area.)
So he's not crazy? There was a period of my life where I had just read tons of type theory and logic, and I found his "Locus Solum" by accident but couldn't understand a damned thing. It reads like an insane asylum. I could never figure out if he was a crank or a genius.
He's indeed very special, and he looks like he's writing for himself (especially in his later works). However, after trying to understand his later works full-time for 3-4 years, I can't say he's "crazy". I would rather say that he gave up trying to make his texts understandable, and that he probably has fun writing like that. I once met him and he said, jokingly, that he was a poet.
Isn't it a bit weird to call 4chan "the dark web" on a technical website?
I don't keep track of how many killers have posted their manifestos on reddit, etc., but it's a platform, and as such it can be abused like any other platform.
I think the point can be made that Reddit has worse content than 4chan, and often with worse life consequences as there is no anonymity.
It's in any case quite a stretch to lump it in with the dark web, as if you could buy weapons on the site.
Anyway, I think you have a point in that you can disable Google's porn filter, but you cannot disable Google's wrongthink filters. Google Search is manually tweaked to redirect many types of queries to their opposite, "safe" alternative (e.g. searching for Alex Jones will only give you 'why is alex jones such a bad person' results rather than acting as a neutral search engine).
Things have changed gradually and there hasn't been so much pushback on this, unfortunately.
I'm not equating 4chan and the darkweb, I'm contrasting the two.
I firmly believe most people who push for "unfiltered" models are looking for 4chan-equivalent filtering. Filtered for safety, but subtly.
I suspect that true uncensored models, like the darkweb, are not useful for most people. I further suspect the applications they are useful for are not something most people want to publicly associate with (to put it lightly).
Oh, I'm sorry, I see that I did misread your argument, and I fully agree with it.
Free speech generally means "free speech within the boundaries of our current legal framework", more or less, for most of its advocates.
Note that ITA I'm not advocating for "no" safety filters, but for a better implementation.