
I also didn't expect that, but then I realized that's the work of a teenage boy with a Catholic education!

Teenage boys love badass, edgy stuff. And what's badass and edgy in Catholicism? Demons! As for the art style, it is the style that was popular at the time.

In a sense, it is not so different from today's kids drawing scenes inspired by their favorite comic. Of course, the painting here shows incredible talent, he is Michelangelo after all, but that doesn't make him any less of a kid.


> The days when actual UI/UX innovation was a thing?

There is more than enough of it. Now it is, of course, AI agents. Before that, Material Design was quite innovative. Interestingly, with the rise of search engines and later LLMs, we are getting back to the command line! It is not the scary black window where you type magic incantations, it is a less scary text field where you type in natural language, but fundamentally, it works like a command line.

Is it a good thing? For me, it is a mixed bag: I miss traditional desktop UIs (pre-Windows 8), but I like search-based UIs on the desktop, and I am not a fan of AI agents: too slow and unpredictable, and that's before privacy considerations. When it is not killing performance, I find Material Design to be pretty good on mobile, but terrible on the desktop. That there is innovation doesn't mean it is all good.


I didn't notice this; in fact, I still find Wikipedia to be remarkably neutral on controversial topics. It is very explicit about not being a news website, and yet, that's where I find the best coverage of hot topics like the wars in Ukraine and Gaza, Black Lives Matter, the protests in Hong Kong, etc... For instance, most western media completely disregard the Russian side of the Ukraine war; not Wikipedia, where you have both points of view shown side by side, as well as international reactions, and most importantly, sources.

It is not perfect of course; small topics and non-English Wikipedias usually show more bias, and not just on controversial topics. Even on scientific articles, you may find some guy who considers himself the king of the Estonian Striped Beetle and will not tolerate any ideas other than his own, driving away other contributors because they have better things to do than go to war to defend beetle truths.


You are getting bad information. The Wikipedia pages on those specific topics (Ukraine, Gaza, BLM) are known to have been manipulated by groups of editors acting in coordination to advance political narratives.

Is there a single source that is not manipulated on these topics? On Ukraine, for example, it is very obvious that both western mainstream media and Russian mainstream media are little more than propaganda for their respective camps.

The good thing with Wikipedia (the English version in particular) is that both sides try to manipulate it, in addition to those who really want to tell the truth, so in the end, it is relatively neutral. And if you want to go further, there are citations, which are maybe the most important aspect of Wikipedia compared to traditional media, including encyclopedias.

Wikipedia is not perfect, but it does its best to resist manipulation: citations, all activity is recorded and publicly available, etc...

Non-English Wikipedias have more bias, because they are smaller, and also because unlike the English version, which is used worldwide even by non-English speakers, they are often tied to specific countries. For example, I think I remember the Arabic Wikipedia being explicitly pro-Palestine; I guess the opposite is true for the Hebrew version.


AI works well for one kind of documentation.

The kind of documentation no one reads, that is just there to please some manager or meet some compliance requirement. This is, unfortunately, the most common kind I see, by volume. Usually, these documents are named something like QQF-FFT-44388-IssueD.doc, and they are completely outdated with regard to the thing they document despite having gone through several revisions, as evidenced by the inconsistent style.

Common features are:

- A glossary that describes terms that don't need describing, such as CPU or RAM, but not the ambiguous and domain-specific terms, of which there are many

- References to documents you don't have access to

- UML diagrams, not matching the code of course

- Signatures by people who left the project long ago and are nowhere to be seen

- A bunch of screenshots, all with different UIs taken at different stages of development, that would be of great value to archeologists

- Wildly inconsistent formatting: some people realize that Word has styles and can generate a table of contents, others don't, and few care

Of course, no one reads them, besides maybe a depressed QA manager.


I let it generate README.md files for my projects, and they look awesome, read nicely, and are theoretically helpful for anyone new.

And LLMs are really good at reading your docs to help someone, so I make sure to add more concrete examples to them.


Also, one pet peeve of mine is when there are emojis in semi-serious writing. ChatGPT really made the practice of putting emojis everywhere explode.

Tell it to "Output documentation in the style of MDN" and it looks way more professional.

not true! it's read by other LLMs! /s

Not sure why the /s here, it feels like documentation being read by LLMs is an important part of AI assisted dev, and it's entirely valid for that documentation to be in part generated by the LLM too.

And that's exactly the same for coding!

Coding is like writing documentation for the computer to read. It is common to say that you should write documentation any idiot can understand, and compared to people, computers really are idiots that do exactly as you say with a complete lack of common sense. Computers understand nothing, so all the understanding has to come from the programmer, which is his actual job.

Just because LLMs can produce grammatically correct sentences doesn't mean they can write proper documentation. In the same way, just because they are able to produce code that compiles doesn't mean they can write the program the user needs.


I like to think of coding as gathering knowledge about some problem domain. All that a team learns about the problem becomes encoded in the changes to the program source; the program is only a manifestation of the human minds behind it. Now, if programmers are largely replaced with LLMs, the team is no longer gathering that knowledge; there is no intelligent entity whose understanding of the problem increases with time and who can help drive future changes and make good business decisions.

The thing with Cybertrucks losing panels certainly didn't help.

A big part of the Cybertruck marketing was the robustness of its unusual design: exoskeleton! space-grade materials! They smashed the door with a hammer and it didn't dent (just avoid pétanque balls...), and Elon Musk commented that it would destroy the other vehicle in an accident. Morally dubious arguments sometimes, but they appeal to many potential customers.

And then, the vehicle that is supposed to be a tank falls apart if you look at it funny. And the glued-on steel plates, is that the exoskeleton? Not only is the design controversial, it failed at what it is supposed to represent.


> For any electronic device you purchase a small tax is collected and used for the recycling and collection of the future waste it will generate.

I call bullshit on these initiatives. It is a tax, period. The government collects money and it does... stuff. It is not a deposit, so it doesn't incentivize people to return the thing, and it is too general to disincentivize particularly bad products like disposable vapes.

The tax can be used on recycling efforts, and it probably is; however, you don't need a specific tax for that. These investments can come from other sources of government income: VAT, income tax, tariffs, etc... I don't think people are paying a "presidential private jet tax" and yet the president has his jet, and hopefully, all government effort for the environment is not just financed by a small, specific tax. Saying a tax is for this or that is little more than a PR move; they could do the same by increasing VAT, and I believe it would work better, but that's unpopular.

> The collection mandatorily happens in the shops that sell electronic devices

That is more concrete.


I suspect that the offset of your input data in pi is about as long as the input data itself, plus or minus a few bytes at most, regardless of the size of the input data.

That is: no compression, but it won't make things worse either.

Unless the input data is the digits of pi, obviously, or the result of some computation involving pi.
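
If you want to convince yourself empirically, here is a minimal sketch, assuming the mpmath library to generate digits of pi (the search strings are arbitrary examples): the first offset of a short digit string is usually about as many digits long as the string itself, and longer strings tend not to appear at all in any prefix you can realistically generate.

    # Minimal empirical check (assumes mpmath; search strings are arbitrary).
    from mpmath import mp

    mp.dps = 100_000                      # generate 100k decimal digits of pi
    digits = str(mp.pi).replace(".", "")  # "31415926535..."

    for data in ["26", "0935", "99999", "8566748"]:
        offset = digits.find(data)
        if offset < 0:
            print(f"{data!r}: not found in the first {mp.dps} digits")
        else:
            print(f"{data!r} ({len(data)} digits) first appears at offset "
                  f"{offset} ({len(str(offset))} digits)")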



What if instead of the index of your full data, you store the indexes of smaller blocks? Would I need, e.g., an 8-kilobyte or larger integer to store the offset for each of the possible 8k blocks?

It is meant to be a joke anyway.


That would 'work' to a point. But my gut guess is it would end up with bigger data.

That has been true of most algorithms I have ever made. There are several places where your gains disappear. For me, the dictionary lookup is where things come apart; sometimes it is the encoding of the bytes/blocks themselves.

In your example, you could find all of the possible 8k blocks out there in pi. But that set of offsets would be very large, so it will be tough to get your head around how it is working. And since it is not the whole of pi space, you would also probably need a dictionary or function to hold it, or at least pointers into it.

One way to tell if a compression algorithm is doing OK is to make the most minimal version of it, then scale it out. For example, start with a 4-bit/8-bit/16-bit value instead of 8k and see how much space it would take up. Sometimes scaling it up will let you get better gains (not always); that is where you will have a pretty good idea of whether it works or not. Just move from 1 byte to 2, then 4, and so on, to see if the algorithm works (a tiny version of that exercise is sketched below). It also lets you see if there are different ways to encode the data that may help as well.
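
For instance, a minimal sketch of that scale-down test, again assuming mpmath for the digits of pi: index every possible 2-digit block by its first offset and look at how many digits the offsets themselves need.

    # Tiny-scale version of the block-offset idea (assumes mpmath).
    from mpmath import mp

    mp.dps = 5_000
    digits = str(mp.pi).replace(".", "")

    worst = 0
    for block in range(100):              # every 2-digit block "00".."99"
        worst = max(worst, digits.find(f"{block:02d}"))
    print(f"worst first offset for a 2-digit block: {worst}")
    # The worst offset already takes ~3 digits to write down, so the
    # pointer into pi costs more than the 2 digits it replaces.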

I got nerd sniped about 3 decades ago on problems just like this. Still trying :)


Some patterns are bound to repeat, so I would assume the offset to be larger, no?

You could express the offset with scientific notation, tetration, and other big math number things. You probably don't need the whole offset number all at once!

Actually, you do.

You can use all the math stuff like scientific notation, tetration, etc... but it won't help you make things smaller.

Math notation is a form of compression. 10^9 is 1000000000, compressed. But the offset into pi is effectively a random number, and you can't compress random numbers no matter what technique you use, including math notation.

This can be formalized and mathematically proven (see the counting argument sketched below). The only thing wrong here is that pi is not actually a random number, but unless you are dealing with circles, its digits look a lot like one, so while unproven, I think it is a reasonable shortcut.
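
The proof is the pigeonhole principle: there are more n-bit strings than there are strings strictly shorter than n bits, so no lossless scheme can shrink all of them. A quick check of the counts:

    # Pigeonhole: 2**n inputs of n bits, but only 2**n - 2 nonempty
    # outputs shorter than n bits, so some input cannot get smaller.
    n = 16
    inputs = 2 ** n
    shorter_outputs = sum(2 ** k for k in range(1, n))  # lengths 1..n-1
    print(inputs, shorter_outputs, inputs > shorter_outputs)  # 65536 65534 True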


Given the way he approaches the problem, which essentially uses voxels, it shouldn't be too hard: for each voxel, compute the distance to the closest triangle and you have your SDF.

The thing is, you have an SDF, and now what? What about textures and materials, animation, optimization, integration within the engine... None of it seems impossible, but I won't call it easy.
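
For the distance computation itself, here is a brute-force sketch of what I mean (my own illustration, not his code; the closest-point-on-triangle routine follows the standard derivation in Ericson's Real-Time Collision Detection, and it gives an unsigned distance, the inside/outside sign would still need a separate test):

    # Brute-force distance field from a triangle mesh (illustrative sketch).
    import numpy as np

    def closest_point_on_triangle(p, a, b, c):
        # Standard barycentric-region walk (Ericson, RTCD 5.1.5).
        ab, ac, ap = b - a, c - a, p - a
        d1, d2 = ab @ ap, ac @ ap
        if d1 <= 0 and d2 <= 0:
            return a                                  # vertex region a
        bp = p - b
        d3, d4 = ab @ bp, ac @ bp
        if d3 >= 0 and d4 <= d3:
            return b                                  # vertex region b
        vc = d1 * d4 - d3 * d2
        if vc <= 0 and d1 >= 0 and d3 <= 0:
            return a + ab * (d1 / (d1 - d3))          # edge region ab
        cp = p - c
        d5, d6 = ab @ cp, ac @ cp
        if d6 >= 0 and d5 <= d6:
            return c                                  # vertex region c
        vb = d5 * d2 - d1 * d6
        if vb <= 0 and d2 >= 0 and d6 <= 0:
            return a + ac * (d2 / (d2 - d6))          # edge region ac
        va = d3 * d6 - d5 * d4
        if va <= 0 and d4 - d3 >= 0 and d5 - d6 >= 0:
            w = (d4 - d3) / ((d4 - d3) + (d5 - d6))
            return b + (c - b) * w                    # edge region bc
        denom = va + vb + vc                          # face interior
        return a + ab * (vb / denom) + ac * (vc / denom)

    def distance_field(vertices, triangles, points):
        # For each sample point, unsigned distance to the closest triangle.
        dists = np.full(len(points), np.inf)
        for i, j, k in triangles:
            a, b, c = vertices[i], vertices[j], vertices[k]
            for n, p in enumerate(points):
                q = closest_point_on_triangle(p, a, b, c)
                dists[n] = min(dists[n], np.linalg.norm(p - q))
        return dists

    verts = np.array([[0., 0, 0], [1, 0, 0], [0, 1, 0]])
    pts = np.array([[0.2, 0.2, 0.5], [2., 2, 0]])
    print(distance_field(verts, [(0, 1, 2)], pts))    # ~[0.5, 2.12]

Evaluating it once per voxel and per triangle like this is O(voxels x triangles), which is exactly why the optimization part is the hard bit.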


Funny how almost everybody hated the Windows 8 desktop environment. And to this day, Windows 8 is still seen as one of the worst versions of Windows for that reason, even if it was pretty decent under the hood.

Projects like this show that it has its fans. It feels like those authors who become successful only after their death. I still think of the Windows 8 UI as terrible overall, but now that the hate has passed, people are not afraid to grant it some redeeming qualities.

It was pretty good on mobile though, which is the root of the problem I think. They tried to unify what shouldn't be unified.

