
>Are there any good reasons to use a TLD like .internal for private-use applications, rather than just a regular gTLD like .com?

These local TLDs should IMO be used on all home routers; they fix a lot of problems.

If you've ever plugged in e.g. a Raspberry Pi and been unable to "ping pi", it's because there is no DNS mapping to it. There are kludges that Windows, Linux, and Macs use to get around this, but they only work within their own ecosystem, so you often can't see Macs from e.g. Windows. It's a total mess that leads to confusing resolution behaviour; you end up having to look in the router page or hardcode the IP to reach a device, which is just awful.

Home routers can simply map pi to e.g. pi.home when doing DHCP. Then you can "ping pi" on all systems. It fixes everything; for that reason alone these reserved TLDs are, IMO, useful. Unfortunately I've never seen a router do this, but here's hoping.

Also, p. sure I grew up playing wc3 w you?


> Home routers can simply map pi to e.g. pi.home when doing DHCP. Then you can "ping pi" on all systems. It fixes everything; for that reason alone these reserved TLDs are, IMO, useful. Unfortunately I've never seen a router do this, but here's hoping.

dnsmasq has this feature. I think it’s commonly available in alternative router firmware.

On my home network, I set up https://pi-hole.net/ for ad blocking, and it uses dnsmasq too. So as my network's DHCP + DNS server, it automatically adds DNS entries for the DHCP leases that it hands out.
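
For a plain dnsmasq setup, the relevant bits look roughly like this (a sketch only; check the dnsmasq man page for the exact behaviour):

    # /etc/dnsmasq.conf
    domain=home          # domain handed out to DHCP clients
    local=/home/         # answer *.home queries locally, never forward them upstream
    expand-hosts         # also append the domain to plain names from /etc/hosts
    dhcp-range=192.168.1.100,192.168.1.200,12h
    # a lease requested with hostname "pi" then resolves as pi and pi.home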

There are undoubtedly other options, but these are the two I’ve worked with.


Wasn't aware of dnsmasq/Pi-hole; I have BIND9 configured to do it on my network, and yeah, it's much nicer. I saw people get bitten by this all the time in college, and even now I still join projects with weird hosts-file usage. Instead of having three different systems for Apple/MS/Linux name resolution that don't interop, the problem is better fixed higher up.


Mine takes 50ms, assuming WSL is hot (I recorded the screen and compared the mouse-click frame to the window pop-up frame). I think OP should try a different WSL distro or a blank machine and compare the differences. I have on-access scanning off, performance mode on, the Ubuntu WSL distro, and Windows 10.


I believe OP recorded himself pressing a key on the keyboard and counted from the moment the key is clearly pressed to the moment xterm is up.

Compared to screen recording, this adds latency introduced by the keyboard and monitor, which can sometimes be 100ms+. See https://danluu.com/input-lag/


Frontier Airlines literally wouldn't let me check in for my return flight without paying $25; I would just get a loading bar if I didn't select a seat.

https://www.youtube.com/watch?v=hIw5WlBZ-ds

I tried every trick in the book, but in the end only the dev console could save the day. It was a null reference exception on not buying a bundle...


I've got a small console app that I made that accepts snippets, so I can use the appropriate snippet when needed. My most common one is:

ss: |system| Answer as many different ways as you can. Each answer should be short and sweet. No more than a line. Assume each previous answer failed to solve the problem. |user|

So "ss how to center a div" would give you code for flexbox, css grid, text align, absolute positioning etc.

In general I am using AI for syntax questions like "how can I do X in language Y" or getting it to write scripts. Honestly, often the default is pretty good.
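
A minimal sketch of the sort of wrapper I mean, in Python (not my actual app, which has more snippets and pipes the prompt straight into the model):

    #!/usr/bin/env python3
    # snip.py -- expand a named snippet and append the question
    import sys

    SNIPPETS = {
        "ss": ("|system| Answer as many different ways as you can. Each answer "
               "should be short and sweet. No more than a line. Assume each "
               "previous answer failed to solve the problem. |user| "),
    }

    def main():
        name, question = sys.argv[1], " ".join(sys.argv[2:])
        print(SNIPPETS[name] + question)  # pipe into whatever chat CLI/API you use

    if __name__ == "__main__":
        main()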


>I'd tweak the script; then to see the results I would press F5 to run the already built binary and wait over a second EVERY SINGLE TIME (about 1480ms).

I put in a bug report for this years ago but it got ignored :( https://github.com/microsoft/vscode/issues/137066

VS Code has gotten slower over time. It's true you can't get nanosecond performance out of JS, but anything under 17ms (one 60Hz frame) should be trivial. I believe the VS Code developers are skilled; it's just that they don't care (IMO) enough about performance for whatever reason, and that's a shame.


Everybody who worked writing code in the 70s-90s is smirking at “wait over a second”.

Back in the day, I used to go get my coffee, shoot the shit in the break room for a few minutes, and come back to find my debug runs just starting.


Turbo Pascal was so fast in the 80s that if I saw a syntax error further down the page it was faster to hit "compile" and let the compiler move the cursor to the error than it was for me to move the cursor myself.

It was a very special compiler and they don't make them like that anymore.


The trick is to have a language "optimized for compilation" and not do fancy optimizations.

Java is similar (but the overall infrastructure around the compiler makes it slow).

Golang is also quite fast.


I think LLVM missed the boat on this by being an early mover. A lot of the optimizations are resource-only analyses; the few that are not are "just" various levels of interpretation. That kind of implies we need a framework to define resource utilization and evaluation at the instruction/machine-code level with a standard API. Having an optimizer for an abstract IR is less useful.

The point being that compilers would then target emitting reasonable machine code at speed, and The Real LLVM would do analysis/transform on the machine code.


The tricks in the 80s were different than today's tricks.

(1) Single-pass compiler. No separate pass to convert to object or executable.

(2) Written in assembler (!). Helps that Pascal has fewer dark corners.

(3) No use of disk. A single file read or write would take 10-20s on a floppy. Instead, it's memory to memory.


It’s crazy that this is still the gold standard


We got overtaken by C and C++ industry adoption, followed by doing everything in interpreted languages.

Finally, 30 years later, the pendulum is swinging back.


And v3.01 was under 40 KB (not a typo), and included a basic WordStar editor clone for program editing!


They still do. Lua is likewise extremely fast, and mostly comes without bullshit.


And in the '00s things were pretty instantaneous, at least from a UI perspective. I actually developed some "bad" habits where I'd just hold the step hotkey down to advance my program when debugging. And everything just worked and was synchronous: all the registers updated with each step, as well as my watched variables and everything else I could think of. I'm pretty sure this was Visual Studio 6.

That was the peak Microsoft debugging experience for me; everything after that was worse. Admittedly I did drop it and move to Linux, so maybe it is good now, although I very strongly doubt it.


This is the reason I like Vim. While I'm mostly using it locally, this deterministic way of handling input means that I can edit over a slow SSH connection faster than the results can be displayed. The Vim language is like playing music: you're only conscious of mistakes.

There was a data entry program (I forget its name) that I liked too. I could enter a complete survey form without watching the screen, just by tabbing. I memorized the sequences and the shortcuts for the dropdowns. Made a dull job less frustrating.


Amazing how technology has improved and matured in 50 years…


Wasn't that mostly compiling though? VSCode's CMake tools take multiple seconds just starting an already-built executable.


In my experience, that one-second wait to run a binary that you just built is due to realtime scanning by Windows Security. It's not very bright. It sees a new .exe file and assumes you downloaded it from the Pirate Bay, even though it was written by link.exe.

You can disable it as long as Group Policy doesn't dictate otherwise.
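
If it's plain Defender, adding an exclusion for a build directory is one cmdlet from an elevated PowerShell prompt (the path is just an example; weigh the security trade-off yourself):

    Add-MpPreference -ExclusionPath "C:\dev"
    Get-MpPreference | Select-Object -ExpandProperty ExclusionPath   # verify it took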


Not having an exclusion for a development directory is like using a 10yo machine or using a laptop without the power brick connected: it’s basically leaving half the perf on the table.

Still, a second seems a bit much for a real-time scan.


Under Windows 11, a "dev drive" can also make a big difference.

https://learn.microsoft.com/en-us/windows/dev-drive/
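
IIRC you can set one up from Settings > System > Storage, or from an elevated PowerShell prompt with something along these lines (check the linked docs for the exact syntax):

    Format-Volume -DriveLetter E -DevDrive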


Thanks for the tip, TIL


Clearly you do not work for corporate America. Any amount of performance loss is acceptable to check a security compliance checkbox somewhere.


This is the number 1 reason to use MacBooks instead of Windows laptops at any job. Security compliance software is like a cancer on Windows; macOS has some of this kind of crap as well, but it is nowhere near as bad.


I work for a large, slow-moving US company in a traditional industry. Of course there is an exclusion list, and it contains a few commonly used dirs like “C:\dev” and so on. If that were to change (or if the request years back to have company-wide exclusions hadn't been listened to), it's the kind of thing I'd insta-quit a job over, even after 20 years.

So anecdotally (N=1) it’s not automatically horrible in US orgs.


Don't forget the enterprise market has a whole different threat model. Even though blanket exclusions are often used, a determined attacker will quickly figure out that they can dump their remote exploration tool in c:\dev.


If the attacker gets far enough to be able to put something in c:\dev and run it, your protections have already failed.


I'm on Linux though (and I'm the author of the article)


Wow, marked as-designed. I guess that's one way to fix the issue. In my experience latency needs to be < 250ms to be considered good, 500ms is roughly the max people can put up with, 2s is enough to drive people insane.


[flagged]


My experience on their issue tracker is, if I give thoughtful input I get thoughtful responses. I've had multiple issues and features acted on. YMMV I suppose.


But how much of that input should have come from internal testing instead of from users after the rollout?


Microsoft doesn't have internal testing. They get the devs, who already know the code, to see if it makes sense. And woe betide you if you disagree with the PM!


That's my point, they outsourced their testing to the customer.

That's a no-go.


What is that even supposed to mean?


That MS only reacts if it hurts their profits


It's impressive that transformers, diffusion, and human-generated data can go so far in robotics. I would have expected that simulation would be needed to achieve such results.

My fear is that we see a similar problem with other generative AI in that it gets stuck in loops on complex problems and is unable to correct itself because the training data covers the problem but not the failure modes.


That's because most models have been trained on data created by humans for humans; an AI needs data created by AI for itself. It's better to learn from your own mistakes than from the mistakes of others; they are more efficient and informative.

When an AI is set up to learn from its own mistakes it might turn out like AlphaZero, which rediscovered the strategy of Go from scratch. LLMs are often incapable of solving complex tasks, but they are greatly helped by evolutionary algorithms. If you combine LLMs with EA you get black-box optimization plus intuition. It's all based on learning from the environment, interactivity, and play. LLMs can provide the mutation operation, or function as a judge to select surviving agents, or act as the agents themselves.
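
A toy sketch of that combination (the llm_* functions are hypothetical placeholders for real model calls; the point is only the loop structure, LLM as mutation operator and judge):

    import random

    def llm_mutate(candidate: str) -> str:
        # placeholder: a real version would ask the model to vary the candidate
        return candidate + random.choice(" abcdefgh")

    def llm_score(candidate: str) -> float:
        # placeholder: a real version would ask the model (or run tests) to rate it
        return -abs(len(candidate) - 42)

    def evolve(seed: str, pop_size: int = 8, generations: int = 20) -> str:
        population = [seed]
        for _ in range(generations):
            # mutate: the LLM proposes variations of existing candidates
            children = [llm_mutate(random.choice(population)) for _ in range(pop_size)]
            # select: keep the best-scoring candidates (LLM or tests as judge)
            population = sorted(population + children, key=llm_score, reverse=True)[:pop_size]
        return population[0]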


>AI will shrink workforces within five years, say company execs... The wide-ranging poll of 2,000 executives, conducted by Swiss staffing firm Adecco Group in collaboration with research firm Oxford Economics, showed that 41% of them expect to employ fewer people because of the technology.

Isn't that the opposite of the title? ~60% of execs then don't expect this? Also, the execs don't really clarify what "fewer people" means; it could be 0.001% fewer people.


Hrm, there probably is, but the PIO commands don't have addition, so the integration part of the delta-sigma modulator could be trouble. You could preprocess, but it would create enormous files.

I did do a delta-sigma using the PIOs but fed via the CPU; essentially it used just a lookup table of amplitudes that fed a bitstream into the PIOs to get a pseudo 133 MSPS DAC.
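
The shape of it, sketched in Python rather than the actual C (sizes and scaling are illustrative):

    # For each amplitude level, precompute the bits a first-order delta-sigma
    # modulator emits; the CPU then just indexes the table per sample and pushes
    # the word into the PIO FIFO. Resetting the accumulator per entry is part of
    # why it's only a "pseudo" DAC.
    def delta_sigma_bits(level, n_bits, acc=0.0):
        out = []
        for _ in range(n_bits):
            acc += level            # integrate the input
            if acc >= 1.0:          # 1-bit quantizer with error feedback
                out.append(1)
                acc -= 1.0
            else:
                out.append(0)
        return out

    TABLE = [delta_sigma_bits(i / 255, 32) for i in range(256)]  # 256 amplitude levels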


Perhaps you can pre-process a few seconds at a time only?


I think it's possible to get a bit-perfect output sequence by having a bunch of lookup tables for waveforms, and then another table allowing you to chain together a sequence of waveforms (and phase, via offsets into the waveforms). That should let you generate any sequence that a perfect delta-sigma modulator would have output, with far lower RAM requirements than the whole bitstream.
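
Roughly this, in hand-wavy Python (placeholder bit patterns; the real tables would come from running the modulator offline):

    # Each waveform's delta-sigma bitstream is precomputed once...
    waveforms = {
        "sine":   [1, 0, 1, 1, 0, 1, 0, 1],   # placeholder patterns
        "square": [1, 1, 1, 1, 0, 0, 0, 0],
    }

    # ...and the output is a table of (waveform, phase offset, bit count) entries
    # instead of one giant bitstream.
    sequence = [("sine", 0, 32), ("sine", 3, 32), ("square", 0, 16)]

    def emit(sequence, waveforms):
        for name, offset, length in sequence:
            table = waveforms[name]
            for i in range(length):
                yield table[(offset + i) % len(table)]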


I accidentally wiped the drives on my server last year, but it wasn't so bad due to my setup.

My strat is having a deploy.ps1 script in every project folder that sets the project up. 80% of the time this is making the VM, rcloning the files, and installing/starting the service if needed. Roughly 3-ish lines, using custom commands. It takes about 100ms to deploy, assuming the VM is up. Sometimes the script gets more complicated, but the general idea is that whatever system I use under the covers, running deploy.ps1 will set it up, without internet, without dependencies; this script will work until the heat death of the universe.
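
A typical one is roughly this shape (the command names below are made-up stand-ins for my own wrappers):

    # deploy.ps1 (sketch)
    New-ProjectVm -Name blog                           # ensure the VM exists
    rclone sync ./site blog:/srv/www/blog              # copy the files over
    ssh blog "systemctl enable --now blog.service"     # install/start the service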

After losing everything I reran the deploys (put them into a list) and got everything back.

I'm with you on some of the routing setups, mine aren't 100% documented either. I think my router/switch config is too complex honestly and I should just tone it down.


Thank you! Yes, only one solution.

