Hacker News | hiddendoom45's comments

I stumbled on the lock page myself when I was experimenting with writing a SQLite VFS. It's been years since I abandoned the project, so I don't recall much, including why I was using the sqlitePager, but I do recall the lock page being one of the first things I found: I needed to skip sending page 262145 (with 4096-byte pages) to the pager when attempting to write the metadata for a 1 TB database.
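
For anyone wondering where that number comes from: SQLite reserves the page containing byte offset 0x40000000 (PENDING_BYTE) as the lock page, and anything that walks the file page by page has to skip it once the database grows past 1 GiB. A rough sketch of the arithmetic in Go (the names here are mine, not part of any SQLite API):

    package main

    import "fmt"

    // pendingByte is SQLite's PENDING_BYTE offset (0x40000000), the start of the
    // byte range used for file locking. The page containing it is the lock page
    // and must be skipped when reading or writing the file page by page.
    const pendingByte = 0x40000000

    // lockPage returns the 1-based page number of the lock page for a page size.
    func lockPage(pageSize int64) int64 {
            return pendingByte/pageSize + 1
    }

    func main() {
            fmt.Println(lockPage(4096)) // 262145, the page mentioned above
    }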

I'm surprised they didn't have any extreme tests with a lot of data that would've found this earlier, though achieving the reliability and test coverage of SQLite is a tough task. It does make the beta label very appropriate.


I was burnt by the mutability of keys in Go maps a few months ago. I'm not sure exactly how Go handles it internally, but it ended up with the map growing and duplicate keys showing up in the key list when I looked at it with a debugger.

The footgun was that url.QueryUnescape returned the original string unchanged (no copy) when nothing needed to be unescaped, so if I put the returned string directly into the map and the original string was later modified, the key in the map was modified along with it.
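
A contrived reproduction of that mechanism, assuming Go 1.20+ for the unsafe string helpers; the byte buffer here stands in for whatever buffer the framework reuses internally:

    package main

    import (
            "fmt"
            "net/url"
            "unsafe"
    )

    func main() {
            // A string that secretly aliases a mutable buffer, the way an unsafe
            // []byte->string conversion inside a framework would produce one.
            buf := []byte("name1")
            s := unsafe.String(unsafe.SliceData(buf), len(buf))

            // Nothing needs unescaping, so QueryUnescape hands back the same
            // string (same backing bytes), not a copy.
            key, _ := url.QueryUnescape(s)
            fmt.Println(unsafe.StringData(key) == unsafe.StringData(s)) // true

            m := map[string]int{key: 1}

            // The framework reuses its buffer for the next request...
            copy(buf, "name2")

            // ...and the key stored in the map has changed underneath it, so
            // lookups for either value can now misbehave and iterating the map
            // shows the mutated key.
            _, ok1 := m["name1"]
            _, ok2 := m["name2"]
            fmt.Println(ok1, ok2) // typically false false
    }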


This sounds like a bug, whether it be in your code, the map implementation, or even the debugger. Map keys are not mutable, and neither are strings.


This shouldn't be a race condition: reads were done by taking RLock() on a mutex in the struct holding the map, with a deferred RUnlock(), and writes were similar, taking Lock() on the same mutex with a deferred Unlock(). All these functions did was get/set values in the map, and they operated on a struct containing just the mutex and the map. Unless I have a fundamental misunderstanding of how to use mutexes to avoid race conditions, this shouldn't have been the case. This also feels a lot like an LLM response, with the Hypotheses section.
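
For reference, the locking pattern described is roughly the following sketch (names made up for illustration); used correctly it does rule out data races on the map itself, though not the key-aliasing issue discussed further down:

    package kvstore

    import "sync"

    // store matches the shape described: a struct holding just a mutex and the
    // map it guards.
    type store struct {
            mu sync.RWMutex
            m  map[string]string
    }

    func (s *store) Get(k string) (string, bool) {
            s.mu.RLock()
            defer s.mu.RUnlock()
            v, ok := s.m[k]
            return v, ok
    }

    func (s *store) Set(k, v string) {
            s.mu.Lock()
            defer s.mu.Unlock()
            s.m[k] = v
    }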

edit: this part below was originally a part of the comment I'm replying to

Hypotheses: you were modifying the map in another goroutine (do not share maps between goroutines unless they all treat it as read-only), the map implementation had some short-circuit logic for strings which was broken (file a bug report/it's probably already fixed), the debugger paused execution at an unsafe location (e.g. in the middle of non-user code), or the debugger incorrectly interpreted the contents of the map.


That just means fiber is a bad library that abuses unsafe, resulting in real bugs.


Just how are you modifying strings? Cause that's your bug to fix.


That was probably done by fiber[1]. The code specifically took the param inside the handler passed to the Get(path string, handlers ...Handler) Router function, where c is the *fiber.Ctx that fiber passes to the handler. My code took the string from c.Params("name"), passed it to url.QueryUnescape, and then to another function which held a mutex around setting the key/value in the map. I got the hint that it was slices and something modifying the keys when I found truncated keys in the key list.

My guess is fiber used the same underlying string for the param to avoid allocations. The fix is simply to make a copy with strings.Clone() so the key cannot be mutated after it goes into the map. I understand it was an issue with my code; it just wasn't something I expected, so it took several hours and a debugger session to find the root cause. It probably didn't help that a lot of the code was generated by Grok-4-Code/Sonic as a vibe-coding test, when I decided to go back a few months later and try to fix some of the issues myself.

[1] https://github.com/gofiber/fiber
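
A minimal sketch of that fix, kept independent of fiber's actual API (the function and names here are illustrative):

    package handler

    import (
            "net/url"
            "strings"
    )

    // cleanKey is illustrative: raw may share its backing bytes with a buffer
    // the framework reuses, so unescape it and force a copy before it is ever
    // used as a map key.
    func cleanKey(raw string) (string, error) {
            name, err := url.QueryUnescape(raw)
            if err != nil {
                    return "", err
            }
            // strings.Clone allocates fresh backing memory, so later writes to
            // the framework's buffer can no longer reach the stored key.
            return strings.Clone(name), nil
    }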


Go strings are supposed to be immutable.

I see that fiber goes behind your back and produces potentially mutable strings: https://github.com/gofiber/utils/blob/c338034/convert.go#L18
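
Presumably it's a zero-copy []byte-to-string conversion along these lines (a sketch, not fiber's exact code):

    package convert

    import "unsafe"

    // b2s aliases b's backing array as a string without copying, so any later
    // write to b is visible through the returned string. That is exactly the
    // mutability being discussed.
    func b2s(b []byte) string {
            return unsafe.String(unsafe.SliceData(b), len(b))
    }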

And… I actually don't have an issue with it to be honest. I've done the same myself.

But this mutability should never escape. I'd never persist in using a library that would let it escape. But apparently… it's intentional: https://github.com/gofiber/fiber/issues/185

Oh well. You get what you ask for. Please don't complain about maps if you're using a broken library.


This is what I got from ChatGPT while logged out.

Prompt: summarize https://maurycyz.com/misc/the_cost_of_trash/

>I’m sorry, but I couldn’t locate a meaningful, readable article at the URL you provided (the content looked like placeholder or garbled text). If you like, I can try to find an archived version or other copies of *“The Cost of Trash”* by that author and summarise from that. Would you like me to do that?

When I tried it ~12 hours ago, it actually tried to summarize the linked Markov-generated page and attempted to make some sense of it, while noting it seemed to be mostly nonsensical.


I just had Deep Blue play against Stockfish since I wasn't that good at chess.


From my own personal experience with an Outline server running on the same IP over 3 years, the GFW consistently ends up blocking it around 3 days after I first connect. Outline does use Shadowsocks for obfuscation, but I suspect traffic detection is what triggers the block after 3 days of observation. Running multiple servers and repeatedly cycling through them is an experiment I want to try the next time I'm there.

I've also observed similar behavior with the VPN I'm using as a backup, where the server I'm using tends to get blocked in around the same timeframe. It uses OpenVPN/WireGuard as the underlying protocol, which doesn't try to obfuscate itself, so I suspect traffic pattern analysis plays a larger role in what gets blocked than the protocol itself. The exception was my recent week-long trip, where I was mostly cycling between two servers without noticing either being blocked.


>GFW consistently ends up blocking it around 3 days after I first connect

I saw a lot of speculation years ago that the GFW used to flag connections for human review. 3 days sounds like support ticket turnaround.


Makes sense; the "3 days" you mention reminds me of something sad from ~10 years ago. At an expo, there was this company "Dowran" with a banner boasting about "adjustable internet disruption patterns on demand" and other corporate catchphrases. I assume there's an operator who installs the GFW and enters "3" as part of the installer wizard.


There's also the risk of accidentally hitting the upgrade button and at best losing a few hours to the upgrade or at worst several days as you deal with recovery.

I had a recent bad experience when I accidentally pressed upgrade on an old MacBook Air running Catalina. I ultimately had to restore from a Time Machine backup to get it working again. Internet recovery failed most of the time and needed multiple attempts to boot through. Attempts to let the upgrade go through would always end with the estimated time remaining disappearing after a while and nothing happening. Attempting to re-install Catalina from the recovery menu also failed, with an infinite loading spinner after a while and nothing clickable. Additionally, there is no indicator that the Wi-Fi disconnects and that you need to re-connect when attempting to re-install; I only found that out when I accidentally triggered an accessibility setting (I think VoiceOver) that showed the top bar again with the Wi-Fi icon. Even then, another workaround was needed to actually connect, since I believe it wasn't possible to type in the password because you couldn't select the password dialog box.

I wasn't completely without fault: I'd stopped the upgrade early since it was taking too long, and I also lost the recovery partition the first time I tried and failed to reinstall, which meant internet recovery was the only way to boot the Mac without it immediately trying to upgrade or re-install. The CLI is accessible from recovery mode, which did allow me to take an additional, more up-to-date backup of the laptop's contents.

In general it seems that the upgrade and recovery process is not supported at all after a couple of years, which makes these upgrade notifications a constant risk if you ever click to install one of them. It also reminded me of the importance of keeping backups, especially of the FileVault key.


For working with remote machines that I need to SSH into, I've found MobaXterm[1] to be a very useful terminal emulator. It has an optional remote monitoring feature that shows the usual stats in a small bar under the active terminal window.

It's a Windows-only application though.

[1] https://mobaxterm.mobatek.net/


A fork of nitter by PrivacyDevel[0] should still work, as it adds the option of using user account tokens to bypass content-restricted tweets.

[0] https://github.com/PrivacyDevel/nitter


What this needs is some way to randomly select tokens from a (self-hosted) token dispenser. The dispenser could be fed fresh tokens through donations from all those who claimed to have left and/or are planning to leave Twitter; that should provide enough tokens for the forthcoming decade.


Oddly enough, tweets that are age/content-restricted do not get the login wall, so while you cannot see the age-restricted tweet itself, you can still see the replies.


It does appear that their MinIO instance is the Apache-licensed version. From the MinIO allegations, the UI in the screenshots matches the pre-AGPL build that I've kept around, which was really just a simple bucket/file manager. I think all post-AGPL versions should be using the new UI announced here[1] in April 2021; the AGPL change was announced on 12 May 2021[2]. The newer date in the out-of-date message could be due to them re-compiling the Apache version themselves.

However, looking at the warp version in the screenshot, version 3.40 is licensed under AGPL.

[1] https://blog.min.io/new-minio-console/

[2] https://blog.min.io/from-open-source-to-free-and-open-source...


Yeah, I noticed that about warp too.

