> Are there any good reasons to use a TLD like .internal for private-use applications, rather than just a regular gTLD like .com?
These local TLDs should, IMO, be used on all home routers; they fix a lot of problems.
If you've ever plugged in e.g. a Raspberry Pi and been unable to "ping pi", it's because there is no DNS mapping for it. There are kludges that Windows, Linux, and macOS use to get around this, but they only work within their own ecosystem, so you often can't see Macs from e.g. Windows. It's a total mess that leads to confusing resolution behaviour; you end up having to look in the router's admin page or hardcode the IP to reach a device, which is just awful.
Home routers could simply map pi to e.g. pi.home when handing out DHCP leases. Then you could "ping pi" on all systems. It fixes everything; for that reason alone these reserved TLDs are, IMO, useful. Unfortunately I've never seen a router do this, but here's hoping.
> Home routers could simply map pi to e.g. pi.home when handing out DHCP leases. Then you could "ping pi" on all systems. It fixes everything; for that reason alone these reserved TLDs are, IMO, useful. Unfortunately I've never seen a router do this, but here's hoping.
dnsmasq has this feature. I think it’s commonly available in alternative router firmware.
On my home network, I set up https://pi-hole.net/ for ad blocking, and it uses dnsmasq too. So, as my network's DHCP + DNS server, it automatically adds DNS entries for the DHCP leases it hands out.
There are undoubtedly other options, but these are the two I've worked with.
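For anyone who wants to try it, a minimal dnsmasq config along these lines (assuming a .home suffix and a 192.168.1.x LAN, adjust to taste) would be something like:

    # hand out DHCP leases on the LAN
    dhcp-range=192.168.1.50,192.168.1.150,12h
    # append .home to bare hostnames
    domain=home
    expand-hosts
    # answer *.home queries locally, never forward them upstream
    local=/home/

With that, a device that takes a lease as "pi" resolves as both pi and pi.home on the LAN.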
Wasn't aware of dnsmasq/Pi-hole; I have BIND9 configured to do this on my network, and yeah, it's much nicer. I've seen people get bitten by this all the time in college, and even now I still join projects with weird hosts-file usage. Instead of having three different systems for Apple/MS/Linux name resolution that don't interop, the problem is better fixed higher up.
Mine takes 50ms, assuming WSL is hot (I recorded the screen and compared the mouse-click frame to the window pop-up frame). I think OP should try a different WSL distro, or a blank machine, and compare the differences. I have on-access scanning off, performance mode on, an Ubuntu WSL distro, and Windows 10.
I've got a small console app I made that accepts snippets, so I can use the appropriate snippet when needed. My most common one is:
ss: |system| Answer as many different ways as you can. Each answer should be short and sweet. No more than a line. Assume each previous answer failed to solve the problem. |user|
So "ss how to center a div" would give you code for flexbox, css grid, text align, absolute positioning etc.
In general I am using AI for syntax questions like "how can I do X in language Y" or getting it to write scripts. Honestly, often the default is pretty good.
>I'd tweak the script; then to see the results I would press F5 to run the already built binary and wait over a second EVERY SINGLE TIME (about 1480ms).
VS Code has gotten slower over time. It's true you can't get nanosecond performance out of JS, but anything under 17ms (one 60Hz frame) should be trivial. I believe the VS Code developers are skilled; they just don't care (IMO) enough about performance for whatever reason, and that's a shame.
Turbo Pascal was so fast in the 80s that if I saw a syntax error further down the page it was faster to hit "compile" and let the compiler move the cursor to the error than it was for me to move the cursor myself.
It was a very special compiler and they don't make them like that anymore.
I think LLVM missed the boat on this by being an early mover. A lot of the optimizations are resource-only analyses; the few that are not are "just" various levels of interpretation. That kind of implies we need a framework to define resource utilization and evaluation at the instruction/machine-code level, with a standard API. Having an optimizer for an abstract IR is less useful.
The point being that compilers would then target emitting reasonable machine code at speed, and The Real LLVM would do analysis/transform on the machine code.
And in the '00s things were pretty instantaneous, at least from a UI perspective. I actually developed some "bad" habits where I'd just hold down the step hotkey to advance my program when debugging. And everything just worked and was synchronous: all the registers updated with each step, as well as my watched variables and everything else I can think of. I'm pretty sure this was Visual Studio 6.
That was the peak Microsoft debugging experience for me; everything after that was worse. Admittedly I did drop it and move to Linux, so maybe it's good now, although I very strongly doubt it.
This is the reason I like Vim. While I mostly use it locally, this deterministic way of handling input means that I can edit over a slow SSH connection faster than the results can be displayed. The Vim language is like playing music: you're only conscious of mistakes.
There was a data entry program (I forget its name) that I liked too. I could enter a complete survey form without watching the screen, just by tabbing through it; I memorized the sequences and the shortcuts for the dropdowns. It made a dull job less frustrating.
In my experience, that one-second wait to run a binary that you just built is due to realtime scanning by Windows Security. It's not very bright. It sees a new .exe file and assumes you downloaded it from the Pirate Bay, even though it was written by link.exe.
You can disable it as long as Group Policy doesn't dictate otherwise.
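If Group Policy does leave it up to you, excluding just your dev directory is usually enough, rather than turning realtime scanning off entirely (elevated PowerShell; C:\dev is just an example path):

    # exclude a dev/build tree from Defender's realtime scanning (needs admin)
    Add-MpPreference -ExclusionPath 'C:\dev'
    # list the current exclusions to confirm
    Get-MpPreference | Select-Object -ExpandProperty ExclusionPath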
Not having an exclusion for a development directory is like using a 10-year-old machine, or using a laptop without the power brick connected: it's basically leaving half the perf on the table.
Still, a second seems a bit much for a real-time scan.
This is the number-one reason to use MacBooks instead of Windows laptops at any job. Security compliance software is like a cancer on Windows; macOS has some of this kind of crap as well, but it's nowhere near as bad.
I work for a large, slow-moving US company in a traditional industry. Of course there is an exclusion list, and it contains a few commonly used dirs like "C:\dev" and so on. If that changed (or if the request years back to have company-wide exclusions hadn't been listened to), it's the kind of thing I'd insta-quit a job over, even after 20 years.
So anecdotally (N=1) it’s not automatically horrible in US orgs.
Don't forget the enterprise market has a whole different threat model. Even though blanket exclusions are often used, a determined attacker will quickly figure out to dump their remote exploration tool in C:\dev.
Wow, marked as-designed. I guess that's one way to fix the issue. In my experience, latency needs to be under 250ms to be considered good, 500ms is roughly the max people can put up with, and 2s is enough to drive people insane.
My experience on their issue tracker is that if I give thoughtful input, I get thoughtful responses. I've had multiple issues and feature requests acted on. YMMV, I suppose.
Microsoft don't have internal testing. They get the devs, who already know the code, to see if it makes sense. And woe betide you if you disagree with the PM!
It's impressive that transformers, diffusion, and human-generated data can go so far in robotics. I would have expected simulation to be needed to achieve such results.
My fear is that we'll see a problem similar to other generative AI: it gets stuck in loops on complex problems and is unable to correct itself, because the training data covers the problem but not the failure modes.
That's because most models have been trained on data created by humans for humans; they need data created by AI for themselves. Better to learn from your own mistakes than from the mistakes of others; your own are more efficient and informative.
When an AI is set up to learn from its own mistakes, it might turn out like AlphaZero, which rediscovered the strategy of Go from scratch. LLMs are often incapable of solving complex tasks on their own, but they are greatly helped by evolutionary algorithms. If you combine LLMs with EAs you get black-box optimization plus intuition. It's all based on learning from the environment, interactivity, and play. LLMs can provide the mutation operator, act as the judge that selects surviving agents, or be the agents themselves.
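A toy version of that loop in PowerShell, with the LLM parts stubbed out (Mutate stands in for the LLM mutation operator and Score for the LLM judge; here they just do a random search toward a fixed string):

    # (mu+lambda) evolutionary loop; Mutate and Score are toy stand-ins for
    # an LLM mutation operator and an LLM judge
    function Mutate([string]$s) {
        $i = Get-Random -Maximum $s.Length
        $c = [char](Get-Random -Minimum 97 -Maximum 123)   # random a-z
        $s.Remove($i, 1).Insert($i, [string]$c)
    }
    function Score([string]$s) {
        $target = 'helloworld'
        @(0..($s.Length - 1) | Where-Object { $s[$_] -eq $target[$_] }).Count
    }

    $pop = @('xxxxxxxxxx') * 8                 # same length as the target
    for ($gen = 0; $gen -lt 500; $gen++) {
        # parents stay in the pool, so the best candidate never regresses
        $all = $pop + ($pop | ForEach-Object { Mutate $_ })
        $pop = @($all | Sort-Object { Score $_ } -Descending |
                Select-Object -First 8)
    }
    $pop[0]                                    # drifts toward 'helloworld'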
>AI will shrink workforces within five years, say company execs... The wide-ranging poll of 2,000 executives, conducted by Swiss staffing firm Adecco Group in collaboration with research firm Oxford Economics, showed that 41% of them expect to employ fewer people because of the technology.
Isn't that the opposite of the title? So ~60% of execs don't expect this? Also, the execs don't really clarify what "fewer people" means; it could be 0.001% fewer people.
Hrm, there probably is, but the PIO instructions don't have addition, so the integration part of the delta-sigma modulator could be trouble. You could preprocess, but it would create enormous files.
I did do a delta-sigma using the PIOs, but fed via the CPU: essentially just a lookup table of amplitudes that fed a bitstream into the PIOs to get a pseudo 133 MSPS DAC.
I think it's possible to get a bit-perfect output sequence by having a bunch of lookup tables for waveforms, and then another table that chains together a sequence of waveforms (and phases, via offsets into the waveforms). That should let you generate any sequence that a perfect delta-sigma modulator would have output, with far lower RAM requirements than storing the whole bitstream.
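For what it's worth, the integration is easy to do in the preprocessing step; a toy first-order modulator (a sketch in PowerShell, not PIO code) that precomputes such a bitstream table looks like:

    # first-order delta-sigma: integrate the input, emit a 1 when the
    # accumulator crosses the threshold, feed the bit back (this is the
    # add/subtract the PIO instruction set lacks)
    function Get-DeltaSigmaBits([double[]]$Samples) {
        $acc = 0.0
        foreach ($s in $Samples) {        # samples are floats in [0, 1)
            $acc += $s                    # integrate
            $bit = [int]($acc -ge 1.0)
            $acc -= $bit                  # feedback
            $bit                          # emit to the pipeline
        }
    }

    # precompute one period of a sine as a bitstream for the PIO to replay
    $n = 256
    $sine = 0..($n - 1) |
        ForEach-Object { 0.5 + 0.5 * [math]::Sin(2 * [math]::PI * $_ / $n) }
    $bits = Get-DeltaSigmaBits $sine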
I accidentally wiped the drives on my server last year, but it wasn't so bad due to my setup.
My strat is having a deploy.ps1 script in every project folder that sets the project up. 80% of the time this is making the VM, rcloning the files over, and installing/starting the service if needed. It's roughly three-ish lines, using custom commands, and takes about 100ms to deploy assuming the VM is up. Sometimes the script gets more complicated, but the general idea is that whatever system I use under the covers, running deploy.ps1 will set it up: no internet, no dependencies, this script will work until the heat death of the universe.
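As a sketch of the shape (New-AppVm and Install-AppService are hypothetical stand-ins for those custom commands; rclone is doing the file sync):

    # deploy.ps1 -- sketch; New-AppVm / Install-AppService stand in for the
    # custom commands mentioned above, and 'blog:' is a made-up rclone remote
    New-AppVm -Name 'blog'                           # make the VM if missing
    rclone sync ./site 'blog:/srv/site'              # push the project files
    Install-AppService -Vm 'blog' -Path '/srv/site'  # install + start service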
After losing everything, I reran the deploys (I put them into a list) and got everything back.
I'm with you on some of the routing setups; mine aren't 100% documented either. Honestly, I think my router/switch config is too complex and I should just tone it down.
Also, pretty sure I grew up playing WC3 with you?