
It's totally fine if it just supports one platform. Just don't say it's cross platform when it's not.

The apps you build are cross-platform apps. The tool itself isn’t. That’s how I read it, anyway; nothing misleading that I saw.

the apps you build will be cross-platform "soon" (no specific timeframe)

https://electrobun.dev/docs/guides/Compatability


It’s the HN link description that’s confusing y’all, then? Because the home page, where you clicked before navigating to the compatibility page, says they “aim… to be cross-platform” and says nothing about that being the current state.

Truly wild. I guess you can just write anything in the title for the upvotes.

But what if you have long lived stateful connections? And you don't want a deploy to take forever?

Ofc you can say "don't do that" but sometimes it's just the way it is...

But I agree, 99% of the time a rolling update is easier and works fine.


I used to work on a pretty big Elixir project that had many clients with long-lived connections running jobs that weren't easily resumable. Our company had a language-agnostic deployment strategy based on Docker, etc., which meant we couldn't do hot code updates even though they would have saved our customers some headache.

Honestly I wish we had had the ability to do both. Sometimes a change is so tricky that the argument that "hot code updates are complicated and will cause more issues than they solve" is very true, and maybe a deploy that forces everyone to reconnect is best for that sort of change. But oftentimes we'd deploy some mundane thing where you don't have to worry about upgrading state in a running GenServer or whatever, and it'd be nice to have minimal impact.

Obviously that's even more complexity piled onto the system, but every time I pushed some minor change and caused a retry that (in a perfect world at least...) didn't need to retry, I winced a bit.


I work in gaming and have experienced the opposite side of this: many of our services have more than one "kind" of update, each with its own caveats and gotchas, so it takes an expert in the whole system (meaning really almost ALL of our systems) to determine which kind would be the least impactful, assuming nothing goes wrong. Not only is there a lot of complexity and lost productivity in managing this process ("Are we sure this change is zero-downtime-able?" "Does it need a schema reload?" etc.) but we often get it wrong. The result is that, in practice, anything even remotely questionable gets done during a full downtime where we kick players out.

It would sometimes be nice to have the option to just restart one little corner of the full system to minimize impact: it's good for customer experience (if we don't screw it up) and very much the opposite for developer experience (it's crippling to velocity to need to discuss each change with multiple experts and determine the appropriate type of release).


No doubt that traditional deployments are much better for dev experience at (sometimes) the cost of customer experience.

I disagree. Hot loading means I can have a very short cycle on an issue and move on to something else. Having to think about the implications of hot loading is worth it for the rapid cycle time and for not having to hold as many changes in my mind at once.

One thing that would help both is deployment automation that could examine the desired changes and work out the best way to deploy them without human input. For distributed systems, this would require rock-solid contracts between individual services for all relevant scenarios, and would also require each update to be specified completely in code (or at least something machine readable), ideally in one commit. This is a level of maturity that seems elusive in gaming.

It's messing with insects who use the stars and moon for navigation. Pretty wild. https://www.sciencedirect.com/science/article/pii/S096098222...

Given Peter Thiel is a big investor in polymarket, and is JD Vance's daddy, I'm sure they will have no problem getting this case dismissed.

Before you place bets on Polymarket you need to check a box which says something like this: "I promise I'm not from the US." Pinky swear that I'm not using a VPN to place illegal bets. That's why the FBI is investigating them, so the case still hangs in the balance.

And on the web side, fingerprinting is rampant and there are JS challenges in cloudflare, imperva, etc which make it trickier. Frustrating to run a whole browser with a virtual screen, load the whole page which is ofc like 15mb of JS and other trash, just to do a very simple thing.

Granted, smaller fish like the ones OP is referring to generally don't have aggressive anti-automation measures in place, so it can be easy... but generally these techniques don't work if the operator has put the proper measures in place.
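
For those smaller fish, you often don't even need a driver library. A minimal sketch of the lazy approach, assuming a local Chromium binary and a page that renders without hitting any challenge interstitial (the URL here is just a placeholder), is to ask a headless browser to dump the rendered DOM:

  # let a real (headless) browser fetch and render the JS-heavy page,
  # then write the resulting DOM to a file; the binary name varies by
  # install (chromium, chromium-browser, google-chrome)
  chromium --headless --disable-gpu --dump-dom 'https://example.com/some-listing' > page.html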


take a look at https://xhr.dev/, a product I built to avoid bot detection from things like cloudflare, imperva, aws waf, and others

What does the $500 a month get me? Infinite resources to scrape all of LinkedIn?

>self host (Docker): $60k/yr

lmao ok


Frustrating? Yeah! but it works SO great! I especially like Playwright in this context, it can do pretty much anything and is a joy to use.

Ain't this the truth. So many mediocre people hide in a giant company, learn how to hit their kpis and think they're crushing it, but get smacked in the face by reality when they leave.

My experience at a small company is that I always feel mediocre :)


I assure you that Imposter Syndrome is a daily reality even for Senior Engineers at Google.


I thought once I reached Staff at Google it would go away. It in fact made it even worse now that I have a whole slate of incredible peers to compare myself to.


The internal culture at Google outright fosters it. Everything from the interview process on.

My mental health is 5x better since leaving.


As with many things, the solution is ffmpeg. After I got that upsell thing when I tried to download a video about a week ago, I found the correct ffmpeg incantation, mostly out of spite for Twitter. If you find the m3u8 request in devtools on a tweet, you can use something like the following:

  ffmpeg -i 'https://video.twimg.com/ext_tw_video/1846357395959615488/pu/pl/ecNx-sTzYA9doHYO.m3u8' -analyzeduration 5G -codec:a libmp3lame -b:a 96k output.mp4
(if anyone runs that command...you're welcome for the meme, unfortunately I don't know where it came from)


You probably get the same result in the end, but yt-dlp can also do this if you point it at the m3u8 file.

(Actually I just checked and it also supports downloading Twitter videos directly.)
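
Something like the following should work, reusing the m3u8 URL from the ffmpeg command above (the output filename is just an example):

  # point yt-dlp straight at the HLS playlist; it fetches and muxes the segments itself
  yt-dlp 'https://video.twimg.com/ext_tw_video/1846357395959615488/pu/pl/ecNx-sTzYA9doHYO.m3u8' -o output.mp4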


> "stop calling people names!"

> "lib woketards"

But I agree with you that alienating more than half of the country was a move so stupid that only the establishment Democrats could pull it off.


I never said "you personally should stop calling people names". I said that if you (politicians specifically) call people names, put labels on them, and segregate them based on race and gender, then they'll hate you and retaliate against you.

Which is why people voted Trump and why I called them "lib woketards", because I don't care about being popular or winning anyone's approval, but presidential candidates do, which is why it's so stupid how dems alienated a large part of their voter base like that and still thought they could win.

