Dwedit's comments

The one thing that really benefits from using the NT Native API over Win32 is listing files in a directory. You get to use a 64KB buffer to receive directory listing results, while Win32's FindFirstFile/FindNextFile return one entry at a time. A 64KB buffer means far fewer system calls.

(Reading the MFT directly is still faster. Yes, you need admin for that.)
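
For the curious, here is a minimal sketch of the pattern (my own illustration, untested and with error handling elided; the handle must be opened for synchronous I/O, e.g. via CreateFileW with FILE_FLAG_BACKUP_SEMANTICS, and NtQueryDirectoryFile is resolved from ntdll since it's only declared in the driver kit headers):

    #include <windows.h>
    #include <winternl.h>
    #include <stdio.h>

    // Declare a matching pointer type and resolve the call from ntdll at runtime.
    typedef NTSTATUS (NTAPI *NtQueryDirectoryFile_t)(
        HANDLE FileHandle, HANDLE Event, PVOID ApcRoutine, PVOID ApcContext,
        PIO_STATUS_BLOCK IoStatusBlock, PVOID FileInformation, ULONG Length,
        ULONG FileInformationClass, BOOLEAN ReturnSingleEntry,
        PUNICODE_STRING FileName, BOOLEAN RestartScan);

    // Layout of FILE_DIRECTORY_INFORMATION (information class 1), from the WDK.
    typedef struct {
        ULONG NextEntryOffset;          // byte offset to the next record; 0 = last
        ULONG FileIndex;
        LARGE_INTEGER CreationTime, LastAccessTime, LastWriteTime, ChangeTime;
        LARGE_INTEGER EndOfFile, AllocationSize;
        ULONG FileAttributes;
        ULONG FileNameLength;           // in bytes; the name is NOT NUL-terminated
        WCHAR FileName[1];
    } DIR_INFO;

    static void list_dir(HANDLE hDir) {
        NtQueryDirectoryFile_t query = (NtQueryDirectoryFile_t)
            GetProcAddress(GetModuleHandleW(L"ntdll"), "NtQueryDirectoryFile");
        static char buf[64 * 1024];     // one syscall fills this with many entries
        IO_STATUS_BLOCK iosb;
        // 1 == FileDirectoryInformation; loop ends on STATUS_NO_MORE_FILES
        while (query(hDir, NULL, NULL, NULL, &iosb, buf, sizeof buf,
                     1, FALSE, NULL, FALSE) >= 0) {
            DIR_INFO *e = (DIR_INFO *)buf;
            for (;;) {
                wprintf(L"%.*s\n", (int)(e->FileNameLength / sizeof(WCHAR)),
                        e->FileName);
                if (e->NextEntryOffset == 0) break;
                e = (DIR_INFO *)((char *)e + e->NextEntryOffset);
            }
        }
    }

The Win32 equivalent is one FindNextFileW call (and one WIN32_FIND_DATA) per directory entry.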


People also use the word "telecine" to refer to a way of converting the film framerate (24 fps) to NTSC television's 60 fields per second by using an alternating 2-field/3-field sequence (2:3 pulldown).
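
The arithmetic behind that 2:3 cadence, for reference:

    24 film frames/s x (2 + 3)/2 fields per frame = 60 fields/s = 30 interlaced frames/s

(In practice 59.94 fields/s and 29.97 frames/s, after the 1000/1001 NTSC slowdown.)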

7zip.com has never been the official website of the project. It has always been 7-zip.org.

How can the average 7zip user know which one it is?

Search results can be gamed by SEO, and there have also been cases of malware developers buying ads so that links to malware downloads show up above the legitimate ones. Wikipedia works only for projects prominent enough to have a Wikipedia page.

What are the other mechanisms for finding out the official website of a piece of software?


There is normally a Wikipedia page for every popular program, which normally contains the official site URL. That's how I remember where to actually get PuTTY. Wikipedia can potentially be abused for lesser-known software, but, in general, it's a good indicator of legitimacy.

So Wikipedia is now (informally) part of the supply chain, which means there is yet another set of people who will try to hijack Wikipedia, as if we didn't have enough already. Just great.

You can corroborate multiple trusted sources, especially those with histories. You can check the edit history of the Wikipedia article. Also, if you search "7zip" on HN, the second result, with loads of votes and comments, is 7-zip.org. Another option is searching the Arch Linux package repos; you can check the git history of the package build files to see where they've gotten the source from.

And we're really going to go through all that brouhaha for a single download of an alternative compressor? And then multiply that work as a best practice for every single interaction on the internet? No, we're not.

The downloads for some programs are often on some subdomain page with like 2 lines of text and 10 download links for binaries, even for official programs. It's so hard to know whether they are legit or not.

My point was more along the lines of "there's no need to complain about Wikipedia being hijackable, there are other options", and now you're complaining about having too many options...

You don't need to do everything or anything. They're options. Use your own judgment.


I was always impressed by how fast Wikipedia editors revert that kind of stuff, so I think it's great advice, actually!

What's your solution? If you search Google for 7-zip, the official website is the first hit.

Not exactly news; Wikipedia's been used for misinformation quite extensively, from what I recall. You can't always be 100% sure with any online source of information, but at least you know there is an extensive community that'll notice if something's fishy sooner rather than later.

> How can the average 7zip user know which one it is?

I dunno, if you type "download 7zip" into Google, the top result is the official website.

Also, 7zip.com is nowhere on the first page, and the most common browsers explicitly warn you that it's a phishing website.

This is actually a case where the regular user is pretty safe from downloading malware.


I feel I need to clarify my earlier comment. I was asking how a user can tell, in general, what the legitimate website of a piece of software is, not just how to know that 7zip.com is malicious.

Are the search removals and phishing warnings reactive or proactive? Because if it's the former, then we don't really know how many users were already affected before security researchers got notified and took action.

Also, 7zip is not the only software to be affected by similar domain-squatting "attacks." If you search for PuTTY, the unofficial putty.org website will be very high on the list (top place when I googled "download putty"). While it is not serving malware yet, the fact that the more legitimate-sounding domain is not controlled by the original author does leave the door open for future attacks.


One way is to consult the same source(s) where the user learned about the software in the first place.

> I dunno, if you type "download 7zip" into Google, the top result is the official website.

Until someone puts an ad above it.


Sure, but the answer to "How can the average 7zip user know which one it is?" would then be "do a Google search and use uBlock Origin".

How does the user know they are using the official uBlock Origin?

The Mozilla extension store doesn't have ads, so it's the top item. It has clear download counts and a "recommended" icon.

So the advice is to install it from the extension store.


> Also, 7zip.com is nowhere on the first page

In an incognito window, for me, it's the 3rd result.


It's possible, although I can't replicate this result anymore.

In a Google search I don't see it on the first page, and the only sketchy link on page 2 is https://7zip.dev/en/download/.

Bing is worse, since it shows 7zip.com on the 2nd page, but the site refuses to load.

But I am using Thorium with Manifest V2 uBlock, and Edge with the medium setting for tracker/ad blocking.


Fails to load for me with: "The page was blocked because of a matching filter in uBlock filters – Badware risks."

That filter list is enabled by default in uBlock. And installing uBlock is pretty much a standard suggestion for any web user.


How would you ensure that the "average user" actually gets to the page he expects to get to?

There are risks in everything you do. If the average user doesn't know where the application he wants to download _actually_ comes from then maybe the average user shouldn't use the internet at all?


> How would you ensure that the "average user" actually gets to the page he expects to get to?

I think you practically can't, and that's the problem.

TLS doesn't help with figuring out which page is the real one, EV certs never really caught on, and most financial incentives make such mechanisms unviable. The same goes for additional sources of information like Wikipedia, since that just shifts the burden of combating misinformation onto the editors there, and not every project matters enough to have a page. You could use an OS with a package manager, but not all software is packaged like that, and that doesn't immediately make it immune to takeovers or bad actors.

An unreasonable take would be:

> A set of government-run repositories and mirrors under a new TLD that is not allowed to be used for anything other than hosting software packages, similar to how .gov ones already work - be it through package manager repositories or websites. Only source code can be submitted by developers, who also need their ID verified and need to sign every release; it then gets reviewed by employees and is only published after automated checks as well. Anyone who tries any funny business goes to jail. The unfortunate side effect is that you now live in a dystopia and go to jail anyway.

A more reasonable take would be that it's not something you can solve easily.

> If the average user doesn't know where the application he wants to download _actually_ comes from then maybe the average user shouldn't use the internet at all?

People die in car crashes. We can't eliminate those altogether, but at least we can take steps towards making things better, instead of telling them that maybe they should just not drive. Tough problems regardless.


> People die in car crashes. We can't eliminate those altogether, but at least we can take steps towards making things better, instead of telling them that maybe they should just not drive. Tough problems regardless.

I agree with the sentiment, but there are limits to what we can and should do. To stay with your analogy: we don't let people drive around without taking a test. In that test they have to prove that they know the basics of how to drive a car. At least where I come from, that means learning quite a lot of rules and regulations.

In other words: don't let people off the hook. They need to do some form of learning by themselves. It's no different with what you do on the internet. If you're not willing to do some kind of work to familiarize yourself with how the bloody thing works, then it's not the job of everyone else to make sure you'll be okay. It's _your_ job to understand the basics.

I'm getting tired of yet another thing we must take off people's minds so that they can "just" use whatever they want to use. Don't try to blame (or, god forbid, sue) someone else because you didn't do your homework.


> It's _your_ job to understand the basics

I feel like this line of thinking is dangerous: people hit the wall hard when they don't have sex ed, or financial education classes, or even basic classes on how to cook or do crafts (we had those in school; girls mostly cooked and the guys got to learn woodworking, but they also swapped sometimes, and later in university there were classes about work safety in general), or computer literacy classes.

I think a lot of people don't even have basic mental models of how OSes or the Internet work, what a web browser is ("the Google") and so on.

Saying that they should know that stuff won’t change the fact that they don’t unless you teach them as a part of their overall education.


The sheer amount of what you _might_ need later in life has proven to be simply too much for the time we usually spend on "overall education". I'm completely with you in that we should offer help along the way. But help can only bring you so far, and you have to accept that.

In the end, that's fine. I have no idea how my car works, and if the guy from the repair shop says that I need to pay for a new clutch, then that's what I'm gonna do. I am aware that I don't have the knowledge to know whether or not I'm being scammed. But I _accept_ that, because the alternative (getting to know a lot more details about cars) simply doesn't appeal to me.

If someone wants to use the same approach for everything he does on the internet then that's perfectly fine. But then he needs to accept the consequences as well.


Open source software will have a code repo with active development happening on it. That repo will usually link to the official web page and download locations.

Not universally true. Open source just means that the code is available, not that development happens in the open. (But 7zip does have a GitHub repo.)

A fork with malware embedded could fairly easily apply most commits from the main repo to its own public repo.

They could even have support pages that look real, by copying them from the legitimate site.

And the process of creating a repo that stays in sync with another fork can be automated, so, if needed, malware writers likely will do that.


1. Go to the Wikipedia article on 7-Zip

2. Go to the listed homepage


Avoid downloading stuff off the internet and avoid search engines.

In a post-AI world, asking how not to be scammed is hard, because now everything can be faked.

Trust what you definitely know but still verify.

Especially in the next 5-10 years, that's going to become the reality, so I guess sit tight and prepare for the waves and tsunamis of scams.


Open About in the app?

Does anyone know the history of auto-tiling in games? I know that Dragon Quest II (1987) had this feature for water tiles on the overworld, before it got backported to the North American version of Dragon Warrior 1.
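
For anyone unfamiliar with the technique being discussed, here is a generic sketch of the classic 4-neighbor bitmask autotiler (my illustration of the general idea, not how Dragon Quest II actually implemented it):

    #include <stdbool.h>

    #define W 64
    #define H 64

    extern unsigned char map[H][W];   /* terrain type per cell */

    /* Treat out-of-bounds as matching so map edges don't sprout shorelines. */
    static bool same(int x, int y, unsigned char t) {
        if (x < 0 || y < 0 || x >= W || y >= H) return true;
        return map[y][x] == t;
    }

    /* Returns 0..15: bit 0 = north, 1 = east, 2 = south, 3 = west neighbor
       has the same terrain type. The 16 masks index 16 pre-drawn edge/corner
       variants of the tile, which is what makes water edges join up. */
    int autotile_variant(int x, int y) {
        unsigned char t = map[y][x];
        return (same(x, y - 1, t) << 0) |
               (same(x + 1, y, t) << 1) |
               (same(x, y + 1, t) << 2) |
               (same(x - 1, y, t) << 3);
    }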

It would be fun to know what the oldest game with autotiles is. Dig Dug from 1982 had them, I think. Digger from 1980 might have used them.

There has to be some ancient ascii game that uses them. I'm sure they go back further than 1980.

Edit: now that I look at Digger & Dig Dug, I'm not sure either of those used autotiles. But I do think you'll find some games that used them in the very early '80s.


OK, I'm pretty sure 3D Monster Maze (1981) used an autotile system, albeit a very, very different sort: one that made 3D-looking tiles appear in the correct place. Is that a proper autotiler? I don't know, but the principle is pretty much the same.

Anyway, I'll bet there are still old games out there with autotiles.


I remember making custom Warcraft II levels, and you could change the construction time for buildings. If you picked a construction time of zero, the building would be built very quickly, but be damaged. There's something hilarious about asking a peasant to build a farm, then seeing a burning farm and hearing the "Job's Done!"

Zig got so far into avoiding "hidden behavior" that destructors and operator overloading were banned. Operator overloading is indeed a mess, but destructors are too useful. The only compromise for destructors was adding the "defer" feature. (Was there ever a corresponding "error if you don't defer" feature?)

No, defer is always optional, which makes it highly error-prone.

There's errdefer, which only defers if there was an error, but presumably you meant what you wrote, and not that.

BTW, D was the first language to have defer, invented by Andrei Alexandrescu, who urged Walter Bright to add it to D 2.0... in D it's spelled scope(exit) (= defer), scope(failure) (= errdefer), and scope(success), which only runs if there was no error.
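
For readers who haven't used these: here's a rough C approximation of the scope-exit idea using the GCC/Clang cleanup attribute (my sketch; it's not how Zig's defer or D's scope(exit) are implemented, but the effect is similar):

    #include <stdio.h>
    #include <stdlib.h>

    // Cleanup handler: receives a pointer to the annotated variable and
    // runs automatically whenever that variable goes out of scope.
    static void free_buf(char **p) { free(*p); }

    static int demo(void) {
        __attribute__((cleanup(free_buf))) char *buf = malloc(64);
        if (!buf) return -1;          // cleanup still runs; free(NULL) is fine
        snprintf(buf, 64, "hello");   // ... use buf ...
        return 0;                     // free_buf(&buf) fires on every exit path
    }

    int main(void) { return demo(); }

Note that, like Zig's defer, this is still opt-in: nothing complains if you forget the annotation, which is exactly the failure mode being discussed above.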


There were the TI calculators produced through the 2000s, still powered by the Z80, just not a "computer".

That's the first thing mentioned in the video.

If you combine "GBA Mus Ripper" and "SoundFont MIDI Player", you can get some seriously excellent sound for listening to GBA music.

"GBA Mus Ripper" detects the so-called "Sappy" music driver and extracts and converts the songs to MIDI files, and generates a SF2 soundbank file. Available at https://www.romhacking.net/utilities/881/

"SoundFont MIDI Player" plays back MIDI files. You can configure it to automatically load a SF2 soundbank file in the directory. When you load a converted GBA MIDI file, you get the high music quality of a modern feature-packed MIDI playback engine. Available at https://falcosoft.hu/softwares.html#midiplayer

It's not perfect though, as GBA games do not use true standard MIDI. Some MIDI controller commands (like the modulation wheel) don't translate correctly.


Thanks for this; I was not aware that a good portion of GBA songs can be exported as MIDI. And I'm guessing that with good soundfonts you can get pretty reasonable quality for many of them!

They don't use General MIDI standard instruments. You need the extracted soundfont because the instrument numbers are unique to each game. To "improve" the sound, you need to edit that soundfont to use higher-quality instruments; you can't just swap out the whole soundfont for a different one.

What's the quality of the generated code like? Does it use explicit stack frames, with all local variables living there? Does it move loop-invariant operations out of loops? Does it store variables in registers?

I haven't actually tested this, but aren't the input and output handles exposed in /proc? What's stopping another process from seeing everything?


Not a Linux expert, but I believe that at the very least it's time-sensitive: after the consumer process reads it, it's gone from the pipe, unlike env vars and CLI arguments, which stay there.


Yes, pipes are exposed at /proc/$pid/fd/$thePipeFd, with user permissions [0].

Additionally, command-line parameters are always readable at /proc/$YOUR_PROCESS_PID/cmdline [1].

There are workarounds, but they're fragile. You may accept the risks, and in that case it can work for you, but I wouldn't recommend it for "general security". It seems it wouldn't be considered secure if everyone did it this way; therefore, is it security through obscurity?

[0] https://unix.stackexchange.com/questions/156859/is-the-data-...

[1] https://stackoverflow.com/questions/3830823/hiding-secret-fr...
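
A quick way to see the cmdline exposure from [1] for yourself (a hypothetical little probe of my own; any local user can run it against any PID):

    #include <stdio.h>

    // Dumps /proc/<pid>/cmdline, which is world-readable on Linux:
    // any local user can see another process's full argv.
    int main(int argc, char **argv) {
        char path[64], buf[4096];
        snprintf(path, sizeof path, "/proc/%s/cmdline",
                 argc > 1 ? argv[1] : "self");
        FILE *f = fopen(path, "rb");
        if (!f) { perror("fopen"); return 1; }
        size_t n = fread(buf, 1, sizeof buf, f);
        for (size_t i = 0; i < n; i++)     // argv entries are NUL-separated
            putchar(buf[i] ? buf[i] : ' ');
        putchar('\n');
        fclose(f);
        return 0;
    }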


I guess the kernel is stopping that. I don't think, permission-wise, you'd have the privileges to read someone else's stdin/stdout.
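
That matches what a quick probe shows: /proc/<pid>/fd is only traversable by the process owner (and root), so trying to open another user's stdin typically fails with EACCES (again, my sketch, not authoritative):

    #include <stdio.h>
    #include <fcntl.h>

    // Tries to open another process's stdin via procfs. For processes you
    // don't own, the open() normally fails with EACCES; as root it succeeds.
    int main(int argc, char **argv) {
        char path[64];
        snprintf(path, sizeof path, "/proc/%s/fd/0", argc > 1 ? argv[1] : "1");
        int fd = open(path, O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }
        printf("opened %s as fd %d\n", path, fd);
        return 0;
    }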

