gridlocdev's comments | Hacker News

I’ve thought about doing this. What kind of device do you use for input?

I have a Logitech K400 Plus portable keyboard and it works great for general use, but I end up using my Apple TV on the couch instead since I prefer using a TV remote / gamepad to navigate.


I just use an Apple TV box these days, but ~15 years ago I was running a Windows 7 Media Center PC for DVR and a few saved movies. The best remote was a TiVo Slide Pro (no longer made, and my wife broke the one we had). Normal TiVo remote buttons on top; it slides open to expose a small keyboard. They occasionally show up on eBay, or if you have a local source, use that. You'll need one that has the RF dongle to be able to use the keyboard with anything other than an RF-capable TiVo.


I have a Logitech K830, it's great. Alas they're not manufactured any more.

Modern equivalents are a lot cheaper, though I think they might not have the backlight feature, which is extremely useful. I've dropped mine many times, even spilled coffee over it, and it still works flawlessly.


I just use the cheapest wireless keyboard and mouse I can. I’m accessing content through the browser, so there’s enough typing that I wouldn’t want anything other than a keyboard, personally.

IMO you only need remote-like hardware if you’re planning to scroll through Netflix-style walls of content.


A game controller also works really well with something like Steam Big Picture and media players.


It’s unfortunate that the HDMI Forum is dragging its feet on this, but it’s kind of the nature of their business model, so I’m not holding my breath waiting for things to resolve.

To use Linux at 4K and a high refresh rate with a TV or other screen that only has HDMI ports, you can get high-performance DisplayPort to HDMI adapter cables for around $30 on Amazon. My Bazzite machine works just fine on a 4K TV at 120Hz, and it even enables HDR now, which it didn’t before with the regular HDMI cord.

If the HDMI Forum thing doesn’t work out, I could totally see Valve offering a DP-to-HDMI cable just so folks don’t need to do extra searching to get the best performance out of their new hardware.


I can’t wait for QUERY to become an official RFC.

It's felt quite awkward to tiptoe around the existing spec when building features that retrieve data; we've had to either use POST to keep sensitive filter criteria out of HTTP logs or create an often massive URL-encoded query string.
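To illustrate the trade-off with python-requests (the endpoint and filter names here are made up):

    import requests

    filters = {"ssn": "123-45-6789", "status": "active"}  # hypothetical sensitive criteria

    # Option 1 today: GET with a query string -- criteria leak into URLs and HTTP logs
    requests.get("https://api.example.com/people", params=filters)

    # Option 2 today: tunnel the query through POST -- the body stays out of logs,
    # but you give up GET's safe/idempotent/cacheable semantics
    requests.post("https://api.example.com/people/search", json=filters)

    # What the QUERY draft would give us: a safe, idempotent method with a body
    requests.request("QUERY", "https://api.example.com/people", json=filters)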


There's nothing holding you back from implementing the QUERY method right now - HTTP methods are standardized, but not limited to the standard. Obviously it depends on how proxies/servers/etc. might handle uncommon methods.
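For example, a minimal sketch with Python's stdlib (the JSON-echo handler is made up for illustration): BaseHTTPRequestHandler dispatches whatever method appears on the request line to a matching do_<METHOD> handler, so QUERY needs no special support.

    from http.server import BaseHTTPRequestHandler, HTTPServer
    import json

    class Handler(BaseHTTPRequestHandler):
        def do_QUERY(self):  # called for any request whose method is QUERY
            length = int(self.headers.get("Content-Length", 0))
            filters = json.loads(self.rfile.read(length) or b"{}")
            body = json.dumps({"received": filters}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("localhost", 8000), Handler).serve_forever()

Whether every proxy between you and the client lets it through is, of course, the real question.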


But that's the point, isn't it? Browsers, proxies and servers always assume POST isn't idempotent. When the user presses F5, the browser asks if they want to do the thing again. You can't prevent that without making it more complicated (e.g. sending the request from JavaScript).


> There's nothing holding you back from implementing the QUERY method right now - HTTP methods are standardized, but not limited to the standard.

I think this comment is terribly naive. Technically you can write your own service to support verbs like LOL or FUBAR, but in practice your service is not only expected to break when requests pass through participants you do not control, but it also requires far more development work to integrate with existing standards. Take CORS, for example: if an HTTP method is not deemed safe, client requests need to go through the unsafe flow with preflight requests and the like. Forget support for caching too, and good luck having your requests pass through proxies.

So what exactly did you achieve by using non-standard verbs?

If you choose to go the ignorant, backwards-incompatible way, you are better off not using HTTP at all and just going with some random messaging/RPC protocol.


I wouldn't be so strict. Even though a homebrew implementation won't be widely interoperable, the experience of actively developing and using it in a limited environment (e.g. within a company) would be valuable, both to inform the RFC and to serve as an example implementation.


Very timely, as I just recently ended up with a URL query string so big that CloudFront rejected the request before it even hit my server. Ended up switching that endpoint to POST. Would've liked QUERY for that!


I have come across systems that use GET but with a payload like POST.

This allows the GET to bypass the 4k URL limit.

It's not a common pattern, and QUERY is a nice way to differentiate it (and, I suspect, will be more compatible with middleware).

I have a suspicion that quite a few servers support this pattern (as does my own) but not many programmers are aware of it, so it's very infrequently used.
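A sketch of the client side with python-requests (hypothetical endpoint and query), which, like most non-browser HTTP clients, will attach a body to a GET without complaint:

    import requests

    query = {"title": "coffee", "limit": 50}  # hypothetical query

    # The JSON travels in the GET body, so it isn't subject to URL length limits
    resp = requests.get("https://api.example.com/search", json=query)
    print(resp.status_code)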


Sending a GET request with a body is just asking for all sorts of weird caching and processing issues.


I get that the GP's suggestion is unconventional, but I don’t see why it would cause caching issues.

If you’re sending over TLS (and there’s little reason why you shouldn’t these days) then you can limit these caching issues to the user agent and the infra you host.

Caching is also generally managed via HTTP headers, and you have control over those too.

Processing might be a bigger issue, but again, it’s just your own hosting infrastructure you need to be concerned about, and you have ownership over that.

I’d imagine using this hack would make debugging harder. Likewise for using any off-the-shelf frameworks that expect things to conform to a Swagger/OpenAPI definition.

Supplementing query strings with HTTP headers might be a more reliable interim hack. But there’s definitely not a perfect solution here.


To be clear, it's less of a "suggestion" and more of a report of something I've come across in the wild.

And as much as it may disregard the RFC, that's not a convincing argument for the customer who is looking to interact with a specific server that requires it.


Caches in web middleware like Apache or nginx ignore the GET request body by default, which may lead to bugs and security vulnerabilities.


But as I said, you control that infra.

I don’t think it’s unreasonable to expect your sysadmins, devops, platform engineers, or whatever title you choose to give them, to set up these services correctly, given it’s their job to do so and there’s a plethora of security risks involved.

If you can’t trust them to do that little, then you’re fucked regardless of whether you decide to send payloads as GET bodies.

And there isn’t any good reason not to contract pen testers to check over everything afterwards.


> I don’t think it’s unreasonable to expect your sysadmins, devops, platform engineers, or whatever title you choose to give them, to set up these services correctly, given it’s their job to do so and there’s a plethora of security risks involved.

Exactly, and the correct way to setup GET requests is to ignore their bodies for caching purposes because they aren't expected to exist: "content received in a GET request has no generally defined semantics, cannot alter the meaning or target of the request, and might lead some implementations to reject the request and close the connection because of its potential as a request smuggling attack" (RFC 9110)

> And there isn’t any good reason not to contract pen testers to check over everything afterwards.

I am pretty sure our SecOps and Infra Ops and code standards committee will check it and declare that GET bodies are a hard no.


> Exactly, and the correct way to setup GET requests is to ignore their bodies for caching purposes because they aren't expected to exist

No. The correct way to set up this infra is the way that works for a particular problem while still being secure.

If you’re so inflexible as an engineer that you cannot set up caching correctly for a specific edge case because it breaks your preferred assumptions, then you’re not a very good engineer.

> and might lead some implementations to reject the request and close the connection because of its potential as a request smuggling attack"

Once again, you have control over the implementations you use in your infra.

Also, it’s not a request smuggling attack if the request is supposed to contain a payload in the body.

> I am pretty sure our SecOps and Infra Ops and code standards committee will check it and declare that GET bodies are a hard no.

I wouldn’t be so sure. I’ve worked with a plethora of different infosec folk, from those who mandate that PostgreSQL use non-standard ports due to strict NIST compliance, even for low-risk reports, to others who have been fine with some pretty massive deviations from traditionally recommended best practices.

The good infosec guys, and good platform engineers too, don’t look at things in black and white like you are. They build up a risk assessment and judge each deviation on its own merit. Thus GET body payloads might make sense in some specific scenarios.

This doesn’t mean that everyone should do it, nor that it’s a good idea outside of those niche circumstances. But it does mean that you shouldn’t hold on to these rigid rules like gospel truths. Sometimes the most pragmatic solution is the unconventional one.

That all said, I can’t think of any specific circumstance where you’d want to do this kind of hack. But that doesn’t mean that reasonable circumstances would never exist.


I work as an SRE and would fight tooth and nail against this. Not because I can’t do it, but because it’s a terrible idea.

For one, you’re wrong about TLS meaning only your infra and the client matter. Some big corps install a root CA on everyone’s laptop and MITM all HTTP/S traffic. The one I saw was Bluecoat, no idea if it follows your expected out-of-spec behavior or not.

For two, this is likely to become incredibly limiting at some point and require a bunch of work to re-validate or re-implement. If you move to AWS, do ELBs support this? If security wants you to use Envoy for a service mesh, is it going to support this? I don’t pick all the tools we use, so there’s a good chance corporate mandates something incompatible with this.

You would need very good answers to why this is the only solution and is a mandatory feature. Why can’t we cache server side, or implement our own caching in the front end for POST requests? I can’t think of any situations where I would rather maintain what is practically a very similar fork of HTTP than implement my own caching.


> Not because I can’t do it, but because it’s a terrible idea.

To be clear, I'm not advocating it as a solution either. I'm just saying all the arguments being made for why this wouldn't work are solvable. Just like you've said there that it's doable.

> Some big corps install a root CA on everyone’s laptop and MITM all HTTP/S traffic.

I did actually consider this problem too but I've not seen this practice in a number of years now. Though that might be more luck on my part than a change in industry trends.

> this is likely to become incredibly limiting at some point and require a bunch of work to re-validate or re-implement.

I would imagine if you were forced into a position where you'd need to do this, you'd be able to address those underlying limitations when you come to the stage that you're re-implementing parts of the wider application.

> If you move to AWS, do ELBs support this?

Yes they do. I've actually had to solve similar problems quite a few times in AWS over the years when working on broadcast systems, and later, medical systems: UDP protocols, non-standard HTTP traffic, client certificates, etc. Usually, the answer is an NLB rather than ALB.

> You would need very good answers to why this is the only solution and is a mandatory feature.

Indeed


There is no secure notion of "correctly" that goes directly against both specs and de facto standards. I am struggling to even imagine how one could make an L7 balancer that accounts for the possibility of someone going against the HTTP spec and still getting their request served securely and in a timely manner. I personally don't even know which L7 balancer my company uses or how it would cache GET requests with bodies, because I don't have to waste time on such things.

Running PostgreSQL on any non-privileged port is a feature, not a deviation from a standard. If you want to run PostgreSQL on a port under 1024, now that would be a security vulnerability, as it requires root access.

There is no reason to "build up a risk assessment and judge each deviation on its own merit" unless it is an unavoidable technical limitation. Just don't make your life harder for no reason.


> There is no secure notion of "correctly" that goes directly against both specs and de facto standards.

That's clearly not true. You're now just exaggerating to make a point.

> I am struggling to even imagine how one could make an L7 balancer that accounts for the possibility of someone going against the HTTP spec and still getting their request served securely and in a timely manner

You don't need application-layer support to load balance HTTP traffic securely and timely.

> Running PostgreSQL on any non-privileged port is a feature, not a deviation from a standard. If you want to run PostgreSQL on a port under 1024, now that would be a security vulnerability, as it requires root access.

I didn't say PostgreSQL listening port was a standard. I was using that as an example to show the range of different risk appetites I've worked to.

Though I would argue that PostgreSQL has a de facto standard port number. And thus by your own reasoning, running that on a different port would be "insecure" - which clearly is BS (as in this rebuttal). Hence why I called your "de facto" comment an exaggeration.

> There is no reason to "build up a risk assessment and judge each deviation on its own merit" unless it is an unavoidable technical limitation.

...but we are talking about this as a theoretical, unavoidable technical limitation. At no point was anyone suggesting this should be the normal way to send GET requests.

Hence why I said you're looking at this far too black and white.

My point was just that it is technically possible when you said it wasn't. But "technically possible" != "sensible idea under normal conditions"


> I have come across systems that use GET but with a payload like POST.

I think that violates the HTTP spec. RFC 9110 is very clear that content sent in a GET request cannot be used.

Even if both clients and servers are somehow implemented to ignore HTTP specs and still send and receive content in GET requests, the RFC specs are very clear that participants in HTTP connections, such as proxies, are not aware of this abuse and can and often do strip request bodies. These are not hypotheticals.


Elasticsearch comes to mind.[0]

The docs state that if the query is in the URL parameters, that will be used. I remember that a few years back it wasn't as easy - you HAD to send the query in the GET request's body. (Or it could have been that I had monster queries that didn't fit within the URL character limits.)

0: https://www.elastic.co/docs/api/doc/elasticsearch/operation/...
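Roughly like this (hedged sketch, assuming a local cluster and a made-up index name):

    import requests

    # Elasticsearch accepts the query DSL in the body of a GET to _search
    query = {"query": {"match": {"title": "coffee"}}}
    resp = requests.get("http://localhost:9200/my-index/_search", json=query)
    print(resp.json()["hits"])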


> you HAD to send the query in the GET request's body.

I remember this pain, circa 2021 perhaps?


Probably closer to 2019. Maybe the optionality is a relatively new feature then.


I think GraphQL is a byproduct of some serious shenanigans.

"Your GraphQL HTTP server must handle the HTTP POST method for query and mutation operations, and may also accept the GET method for query operations."

Supporting a body in the GET request was an odd requirement for something I had to code up with another engineer.
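For reference, the two transports that spec wording describes look something like this (hypothetical endpoint and query):

    import requests

    query = "{ user(id: 1) { name } }"

    # POST: the query travels in a JSON body (required; also used for mutations)
    requests.post("https://api.example.com/graphql", json={"query": query})

    # GET: the query travels in the URL, which is what makes it cacheable
    requests.get("https://api.example.com/graphql", params={"query": query})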


And the whole GET/POST difference matters for GraphQL at scale: we saved a truckload of money by switching our main GraphQL gateway requests to GET wherever possible.


Servers can support it but not browsers.


And oftentimes some endpoints simply hit the max URL length limit and need a proper body. I thought we ought to already be using this method; it seems quite fitting for fulfilling GETs with bodies.


Maybe send the body in a closed envelope, as some software might also write the request body contents to disk or log it.


Or, if you're on the iOS side of the fence, I’ve found the Octal app is fantastic.


This is a very confusing move if true, for a few reasons:

- A folded iPad wouldn’t necessarily be easily pocketable for most people.

- I regularly see iPads being used for gaming and media consumption, neither of which really benefits from a folding screen that gets smaller than its regular size.

- The ergonomics of touch controls with apps in a side-by-side multi-tasking view are already difficult enough on existing folding phones, and would likely be very uncomfortable on an even larger device.


Obligatory link to https://killedbygoogle.com/


I switched to Linux around the time the TPM stuff started since my hardware was “no longer supported” on Windows 11, and honestly more people should try Linux. I feel like aside from multiplayer online gaming (where anticheat is basically a self-inflicted rootkit) there’s not much missing here. Most Steam games are playable thanks to Valve, I legitimately almost never need to touch the terminal since apps are containerized now and distros ship a software store application to manage them, and the major desktop environments (GNOME, KDE) all have a stable and modern-looking UI.


This looks awesome, can’t wait for it to include speaker notes and/or videos so it’s easier to use for self-study!


Thanks! We do include speaker notes on some pages (but not yet all[1]). We would love to expand this and PRs are very welcome for this :-)

I think videos will end up being made by someone other than me since I feel it takes too much effort when you don't have the right setup already. We have an issue and I'll update it as soon as I hear more about videos.[2]

[1]: https://github.com/google/comprehensive-rust/issues/1083

[2]: https://github.com/google/comprehensive-rust/issues/52


Maybe 5-10 years ago, but today’s Linux desktop has evolved to be much more user friendly and stable. Applications have a standard containerized format (Flatpak), the most popular distributions ship a software store to update your apps and system with one click, and the stability of things has improved to a point where (in my experience) things almost never break unless you’re running the absolute bleeding-edge latest-gen hardware. I would highly recommend giving it a shot if you haven’t at least tried it as a daily driver before. (To get started, just look up a tutorial for how to dual-boot with Windows or play around with it in a Virtual Machine)


That’s one reason why I love the devlog videos from Randy. Not only does he not hide it when things get hard, he truly takes you on the journey to see his most painful-est of moments in a way that really allows you to either learn from it or just simply laugh at the absolutely absurd circumstances that he threw himself into.

