This is probably a really basic question but why does the wizard need to stop being drawn with characters when the sprites move out past the left/right borders? Is this a hardware restriction where those lines aren't able to have text on them when there are sprites on the edges?
I think it's probably about the timing. I'm not an expert, but I believe that opening the side borders requires cycle-exact timing at both edges of the screen, and when you have characters enabled on the screen, the VIC-II graphics chip ends up 'stealing' memory cycles from the CPU every 8 scanlines to fetch character and font data from RAM. Those scanlines are known as 'badlines'.
Across an entire scanline I think you normally get something like 63 CPU cycles on the 6510 to do 'work' in, but only 23 if you hit a badline -- keeping in mind that some instructions take multiple cycles to execute. This probably makes the timing difficult or impossible to manage with the characters turned on.
It's not just that the 'badlines' steal 40 cycles. They steal a solid block of 40 cycles that covers most of the scanline, from the end of hblank until just before the right border starts. This blog post [1] has a nice interactive demo showing badline timing.
During a badline it's simply impossible to write to the VIC-II's registers during the left border. That said, the demo seems to indicate it's still possible to open the right border during a badline, but it's a 1-cycle window (maybe 2).
One thing the article doesn't mention is how they figured out that waiting for external memory access is the bottleneck. Are there any profiling tools available that would tell the developer that the CPU is waiting for external memory x% of the time?
The other comments have mentioned the tools. On Linux, there's good old perf.
There's `perf stat`, which uses CPU performance counters to give you a high-level view of whether your workload is stalling while waiting on memory: https://stackoverflow.com/questions/22165299/what-are-stalle.... However, it won't tell you exactly where the problem is, just that there is a problem.
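If it helps, here's a rough sketch of the kind of invocation I mean -- `./your-workload` is a placeholder, and the exact event names available depend on your CPU and kernel:

```
# High-level stall overview; the stalled-cycles-* events aren't exposed on every CPU.
perf stat -e cycles,instructions,stalled-cycles-frontend,stalled-cycles-backend ./your-workload

# Cache-miss counters are another quick signal of memory pressure.
perf stat -e cache-references,cache-misses,LLC-loads,LLC-load-misses ./your-workload
```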
You can do `perf record` on your process, then run `perf report` on the generated data. You'll see which functions and which lines/instructions are taking the most time. Most of the time it will be pretty obvious that it's a memory bottleneck, because the hotspot will be some kind of assignment or lookup.
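A minimal sketch of that workflow (the PID and function name are placeholders):

```
perf record -g -p <pid>        # sample the running process; Ctrl-C when you've captured enough
perf report                    # per-function breakdown of where time goes
perf annotate <function>       # drill down to the hot source lines / instructions
```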
If you're using an Intel processor, VTune is extremely detailed. Here's a nice article from Intel on using it: https://www.intel.com/content/www/us/en/docs/vtune-profiler/... . You'll see that one of the tables in the article lists functions as "memory bound" -- most of their time is spent waiting on memory, as opposed to executing computations.
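If you prefer the command line, I believe the rough equivalent is something like this (analysis-type names can differ between VTune versions, and the result directory / workload paths are placeholders):

```
# Collect a memory-access analysis, then print a summary report.
vtune -collect memory-access -result-dir r001ma -- ./your-workload
vtune -report summary -result-dir r001ma
```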
I am using perf. On Graviton, the event you want to look for is LLC-load-misses, which tracks last-level cache (LLC) misses, i.e. external memory accesses. The command `perf record -e LLC-load-misses -t $(pgrep valkey-server)` will record LLC misses per instruction during the execution of the valkey-server main thread. Note that LLC-load-misses events are not collectable when running on instances that only use a subset of the processor's cores. `perf list` shows the events that can be collected on your machine.
Not directly related to the question, but a few years ago I read that B-trees are also recommended for in-memory use these days, because the latency gap between CPU and memory is now so large.
Given that it's shared memory based, it seems like there has to be some degree of trust that the participants are well behaved. What do you mean by a malformed message, though? If you're talking about the payload of the message, that seems like a matter of the message scheme you're using. If you're talking about correctness of the IPC protocol itself, integrity checking is unfortunately at odds with latency.
Among other things, the capnp layer (which, as I noted in an earlier reply, is optional to use -- you can and sometimes certainly should "just" go native, and/or combine the two approaches) uses an internally generated token to ensure some kind of terrible bug/thing/problem/act-of-god isn't mis-routing messages. It's a safety mechanism.
But in terms of, say, incompatible schemas or mis-matching native `struct`s -- what if you used different compilers with slightly different STLs on the 2 sides?! -- it is indeed on the developer to not make that error/not converse with untrusted code. Flow-IPC will guard against misuses to the extent it's reasonable though, IMO.
P.S. [Internal impl note] Oh! And, although at the moment it is protocol version v1 (for all internally-involved protocols at all layers), I did build in a protocol-version-checking system from the start, so as to avoid shooting ourselves in the foot in case Flow-IPC needs to expand some of its internal protocol(s) in later versions. At the very worst, Flow-IPC would refuse to establish a channel/session upon encountering a partner whose Flow-IPC version speaks an incompatible protocol. (Again -- academic at the moment, since there is only v1 -- but it might matter in the future. At that point a new protocol might be developed to be backward-compatible with earlier Flow-IPCs and just keep working; or, worst case, throw the aforementioned error if not.)
Oxide and Friends interviewed Andres Freund for their show on Wednesday where he discussed the discovery, including the slowness from all the symbol translations
Slightly off topic: I’m setting up a home server on a Mini PC that has Windows 11 Pro pre-installed. I want to attach it to my TV and play retro games as well as run home automation tasks (periodic scripts, UniFi controller, Pihole etc)
Is anyone using Proxmox on their homelabs? Would you recommend blowing away Windows and installing Proxmox and then install Windows with PCIE passthrough?
I actually use Proxmox on my main PC: Ryzen 5950X, 64GB RAM, RTX 4070, AMD 6500XT. The two GPUs are passed to a Windows and a Debian VM respectively, and each also gets a USB card passed through for convenience. I run half a dozen other VMs off of it, hosting various portions of the standard homelab media/automation stacks.
Anecdotally, it's a very effective setup when combined with a solid KVM. I like keeping my main Debian desktop and the hypervisor separate because it keeps me from borking my whole lab with an accidental rm -rf.
It is possible to pass all of a system's GPUs to VMs, using exclusively the web interface/shell for administration, but it can cause some headaches when there are issues unrelated to the system itself. For example, if I lose access to the hypervisor over the network, getting the system back online can be a bit of a PITA, since you can no longer just plug it into a screen to update any static network configuration. My current solution is enabling DHCP on Proxmox and managing the IP with static mappings at the router level.
There are a few other caveats to passing through all of the GPUs that I could detail further, but for a low-impact setup (like running emulators on a TV) it should work fairly well. I have also found that Proxmox plays well with mini PCs. Besides the desktop, I run it on an Intel NUC as well as a Topton mini PC with a bunch of high-speed NICs as a router. I cluster them without enabling the high-availability features in order to unify the control plane for the three systems into one interface. It all comes together into a pretty slick system.
I did this for a while, running multiple VMs, some of which had PCIe passthrough for GPUs on both Windows and Linux. Luckily my motherboard separated out the IOMMU groupings enough to make this work for me. While you _could_ do this, you may run into issues if your IOMMU groups aren't separated enough. The biggest issue I had was drivers causing problems on Windows. I eventually blew the entire instance away and just run Windows.
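If you want to check the grouping up front before committing to passthrough, something like this (standard sysfs layout, should work on most distros) dumps every IOMMU group and the devices in it -- the GPU you want to pass through should ideally sit in its own group:

```
for g in /sys/kernel/iommu_groups/*; do
  echo "IOMMU group ${g##*/}:"
  for d in "$g"/devices/*; do
    lspci -nns "${d##*/}"
  done
done
```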
I'd recommend a separate device if you need any access to a GPU. But I do recommend Proxmox as a homelab. I still have it running on a separate 2012 Mac Mini.
My use case is slightly different, but I use Proxmox for my home server and would recommend it, especially if you're familiar with Linux systems or want to learn about them, which is what I've done over the years I've been running this setup.
My server was originally a single Debian installation set up to host local services for things like git. That grew into hosting a site, a VPN, then some multiplayer game servers. When I reached the point where too many things were installed on a single machine, I looked at VM options. I've used VMware/vSphere professionally, but settled on Proxmox for these main reasons: easy to set up and update, easy to build/copy VMs, a simple way to split physical resources, monitoring of each VM, and simple backup and restore. All without any weird licensing.
That server houses 4 VMs right now. That might be a bit much for your mini PC, but you could do a couple. The multiplayer servers are the main hog, so I isolate resources for those. The Windows machine is only for development, which isn't your exact use case, but I can say I've never had an issue when I need it. The only thing I can't speak to is the need for graphics passthrough.
I have run Proxmox for several years, rely on it for many bits of house and network infrastructure, and recommend it overall. My desktop also runs Proxmox with PCIe passthrough for my "actual" desktop (but this is a different Proxmox server from the primary VM and container host for infrastructure).
That said, I wouldn't mix the two use cases, either initially or over the long term. House/network infrastructure should be on a more stable host than the retro-game console connected to your TV (IMO).
In your case, I'd recommend buying another PC (even an ancient Haswell would be fine to start) and getting experience with vanilla Proxmox usage there before jumping straight into trying to run infra and MAME/retro gaming with PCIe passthrough on the same singleton box.
I'm in the middle of this. Got a Beelink mini PC; it came with Windows, licensed oddly. I'm configuring it as a home server. The current plan is to migrate my Unraid install over from the vintage server it's currently on. Most services run in Docker. We'll see how performance is.
Proxmox is on my list to try out. So far I'm very happy with Unraid. It makes it easy to set up network shares, find and deploy containerized services, and it handles VMs if you need them. I try to avoid VMs and focus on containers because it's more flexible resource-wise.
If the PC is beefy enough (Win 11 Pro runs smoothly), just go with the included Hyper-V. IMHO you don't get any benefits from installing Proxmox on bare metal in this scenario. YMMV of course.
If you want to use Hyper-V, you can use GPU-P (GPU Partitioning), where Hyper-V passes the GPU through to the VM and shares it. It's not some emulated adapter -- it's genuinely the real GPU running natively, and you can share it across multiple VMs and the host. Linux has NOTHING that can compete with the feature.
Not a Windows user, so I could be completely off base, but isn't GPU-P just VFIO under the hood? I don't know about Proxmox, but that is completely supported by OpenStack and KVM.
No, it uses the hypervisor to implement a scheduler for the GPU between the different VMs and gives them access to it. VFIO on Linux requires passing the entire GPU to a single VM, and the host loses access to it.
Kind of. I'm not too familiar with the enterprise GPU sharing, but I believe that requires hardware support on the GPU to split up contexts for the OS, whereas the GPU-P solution works on any GPU and is vendor agnostic, since it's done at the OS/processor level. It's really cool, and I have no idea how they can ship this, since it does trample on the enterprise tech pretty substantially. But I believe it's a byproduct of Microsoft wanting to compete at the datacenter level and it gives them a killer feature for Azure, and as a byproduct it entered consumer Hyper-V as a semi-undocumented feature.
MIG does need hardware support, and from what I understand, really expensive licensing. I don't know if AMD has a parallel technology.
That is pretty cool that it is vendor agnostic. I've found a few docs from a few years ago talking about stuff like it on Linux, but development of it seems to have stopped or just not progressed at all.
But as he said, they would use it in Azure, and I imagine there they're using enterprise GPUs. At that scale of datacenter it definitely sounds like an advantage!
It's probably overkill for what he wants though. A headless Linux server frees up the GPU for a gaming VM, and he can run containers natively for everything else.
If it's just the few simple things you list, I might stick with Hyper-V. If you care about more sophisticated VLAN'd networking setups, I would probably go with Proxmox. But hardware passthrough is a can of worms, so understand there will be a tradeoff.
You can, but it disconnects it from the host, so you'll be headless. Which may be fine for a lot of people if you are able to ssh in and manage it that way.
This is exactly how I have mine set up aye. Proxmox on hardware, Main PC as a VM - once I got to the point where Proxmox had its web interface (and ssh) up and running I had no real reason to have a monitor plugged into the hardware OS anyway. Passed the GPU and all USB ports to the PC guest from there.
At the time I had no idea how popular it was to run this setup, I thought I was being all weird and experimental. Was surprised how smoothly everything ran (and still runs, a year and change later!)
Bonus, I was able to just move the PC to another disk when the SSD it was on was getting a bit full. Moved PC's storage onto a spinny HDD to make room to shuffle some other stuff around, then moved it to another SSD. Didn't even need to reboot the PC VM, haha.
Proxmox backup server running on my NAS handles deduplicated backups for it and other VMs too which is great.
Can someone explain why it is useful to do virtualization at all when you just want to run a small number of things like this?
I have an Ubuntu Server install running on an old laptop to do very basic background jobs, backups, automation, run some containers, etc. Am I missing something by not using a hypervisor? What are the benefits?
Some software really prefers to control the whole host, usually highly integrated stuff. Some examples:
- Unifi Controller installs like a half dozen dependencies to run (Mongo, Redis, etc last time I used it), much easier to isolate all that in a VM
- Home Assistant's preferred and most blessed install method is Home Assistant OS, which is an entire distribution. I've run HA in Docker myself before, but the experience is like 10x better if you just let it control the OS itself
- I have Plex, Sonarr, Radarr, etc. running for media - there is software called Saltbox which integrates all of these things for you so that you don't need to configure 10 different things. Makes it a breeze, but requires a specific version of Ubuntu or you're in unsupported territory (kinda defeating the purpose)
Lots of stuff you can be totally fine just using Docker for, or installing directly onto the host. But having the bare-metal system running Proxmox from the start gives you a ton of flexibility to handle other scenarios.
Worst case, you just set up a single VM & run your stuff on it if you have no need for other types of installs. Nothing lost, but you gain flexibility in the future as well as easy backups via snapshotting, etc.
Easy backups via snapshotting is quick to say, but it has an outsized benefit IME. My go-to approach for keeping many of my machines up to date is now a scheduled `apt-get update && apt-get upgrade`, relying on scheduled backups in the unlikely event that goes awry. I don't have to worry about package interdependencies across machines.
For major upgrades, I may go a step further and do a manual snapshot before upgrading and then decide whether or not to commit (usually) or rollback (easy, when needed).
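Concretely, a minimal sketch of that flow on Proxmox (the VM ID and snapshot name are placeholders; `pct` works the same way for containers):

```
# On the Proxmox host: cheap snapshot first...
qm snapshot 101 pre_upgrade_$(date +%Y%m%d)

# ...then upgrade inside the guest:
apt-get update && apt-get -y upgrade

# Roll back if it goes sideways, otherwise clean up later:
# qm rollback 101 pre_upgrade_20240101
# qm delsnapshot 101 pre_upgrade_20240101
```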
The (emotional) security provided by this is nice, as is the time-savings (after initial time expense to learn and setup the base proxmox infrastructure).
Home Assistant also does some voodoo with Bluetooth, WiFi, IPv6 and mDNS for IoT devices. For this reason it seems best suited to a host machine instead of a Docker container.
For something like Home Assistant, it would be good if it had an agent to handle all the low-level networking stuff that could run directly on the host, with everything else that doesn't directly require that running inside an unprivileged Docker container.
Maybe running a firewall on dedicated hardware so the internet doesn't drop if you reboot the hypervisor... but even then, I just live with that and run pfSense in Proxmox.
Excellent point! I do run my main PiHole in a VM, but I have a second PiHole running on an RPi just in case I'm rebooting the hypervisor. DNS is the only real dependency that can cause problems for me when the hypervisor reboots for updates. I'm just running Fedora on a somewhat recent Dell rack server.
A firewall is a great example of what not to run in a VM, at least for me! I consider gateways to be appliances though, and I haven't run my own router on Linux since I first got cable internet in 2002. I remember how awesome it was compared to the weak routers available at the time!
No. This is not an ISP problem and the ISP cannot solve it -- it's not even visible to the ISP on encrypted connections. This is a problem with HTTP/2 itself that web servers / load balancers / proxies need to account for.
Any suggestions for alternatives to webhooks? I feel a 'pull'-based model with cursors and long polling would be simpler and more reliable than webhooks.
Webhooks tend to make a lot more sense in event-driven applications. They introduce additional complexity when it comes to all sorts of edge cases. I agree that a pull system is probably much easier to do error handling for -- my only concern would be performance.
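For illustration, here's what the pull side can look like -- the endpoint, parameters, and response shape are all made up, but the idea is just to long-poll with the last cursor and persist the new one after each batch:

```
# Hypothetical pull-based consumer using a cursor + long polling.
CURSOR_FILE=./cursor
API="https://api.example.com/v1/events"     # made-up endpoint

while true; do
  cursor=$(cat "$CURSOR_FILE" 2>/dev/null || echo 0)
  # wait=30 asks the (hypothetical) server to hold the request open if nothing is new.
  resp=$(curl -sf --max-time 35 "$API?after=$cursor&wait=30") || { sleep 5; continue; }

  echo "$resp" | jq -c '.events[]'             # process the batch (here: just print it)
  next=$(echo "$resp" | jq -r '.next_cursor')
  [ -n "$next" ] && [ "$next" != "null" ] && echo "$next" > "$CURSOR_FILE"
done
```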
The video side of things is super interesting - 4k60 ProRes is an absolutely insane amount of data (something like 12GB/s IIRC?), and the addition of ACES really makes it usable for professional work where a larger mirrorless from Sony/Panasonic/Nikon/Canon wouldn't be practical.
I also found it very interesting that their pics of this feature show DaVinci Resolve on the screen and not Final Cut Pro - I guess when it comes to colour, there's no better tool.
How is this a problem for most users? When was the last time someone actually plugged in an iPhone to a computer to sync data? Most syncing happens over wifi for most users (90%+ I would say). Even development happens over wifi now.
I agree with the principle here, if not necessarily the tone ... I don't think I've used an iPhone cable to do anything other than a) charge or b) use CarPlay for probably 10 years at least.
In fact, I'd go so far as to say that providing a "real" USB 3 cable for charging would not give most users what they want.
I have two USB-C cables plugged into the MacBook Pro on the desk I'm using right now. One does USB PD and also provides monitor connectivity - that's obviously a "full fat" Thunderbolt cable. The other is a Lightning cable that I need occasionally to charge my keyboard/trackpad.
The Thunderbolt cable would be a terrible phone cable. It is thick, bulky and does not want to bend - mostly because it's designed to have good signal integrity at 10Gb/s+.
In my experience, syncing my iphone over wifi doesn’t really work. It will start fine, then lose the connection before it’s finished. Perhaps my old (2010) airport extreme is not up to the task? But in any case, I just use the cable instead, no problem.
A charging cable is meant for charging a device. The cable rumored to be available will be capable of charging the device faster than previous models. The new cable will follow broader standards.
The maximum net transfer rate is around 43 MB/s in practice. That’s more like 150 GB/h. Not very convenient for backing up an iPhone. With current storage sizes, it could take multiple hours.