I bought my 65" TV for $299 recently, including shipping to my doorstep. Clearly all the bullshit that's baked into it is subsidizing the price by a lot.
I just never gave it network access and I use it like a dumb TV with a streaming box, so I get to benefit from a price subsidized by the other 98% of the population who are getting exploited. The whole thing is kinda gross.
It's like there's this enshittification tipping point that you can't come back from. Realistically, who is going to buy a dumb TV at a much higher cost? People who are already savvy enough to get around a smart TV? People who aren't? I don't see it working.
>Realistically, who is going to buy a dumb TV at a much higher cost? People who are already savvy enough to get around a smart TV?
Exactly, that's why all this "I want a dumb TV!" stuff is, well, dumb.
It doesn't cost any less for a mfgr to make a dumb TV; in fact it would cost more. Modern TVs need a lot of computing power to make them work properly, so making the thing connect to the internet and show you ads really costs them nothing in hardware. Then they get to subsidize the TV with all the ad revenue, the kickbacks from the various streaming apps pre-installed, etc. A dumb TV would end up having the exact same hardware and a higher price tag. What kind of idiot would buy that? Not enough of them to make it worthwhile for the mfgr. If you don't want ads, just don't connect the TV to the internet.
It's a lot like modern Windows laptops that are cheaper than the same laptop with Linux pre-installed. MS and/or the laptop mfgr get a bunch of kickbacks from the crapware vendors to pre-install their crapware, so you end up paying less than you would for having the mfgr pre-install Linux (a free OS).
That seems like a very solvable problem with an ambient light sensor. Or even just manually; my Xreal Airs have a little rocker that you can change the brightness with.
Without any explicit admission of guilt on my part: what are some good options here? Protobuf is cool, but I really don't want to mess with a special compiler and all that.
The responses you got, I think, literally answer your question, but they're probably not what you'd reach for in any kind of HTTP-based thing. Your go-to should probably be multipart/form-data. It's well supported in every language and HTTP library, and you can send both the JSON and the file in the same payload.
There seems to be a common trend of people writing "JSON APIs" thinking that every other part of HTTP is off-limits.
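For instance, a minimal sketch in Python with the `requests` library (the endpoint and field names here are made up):

```python
import requests

# Hypothetical endpoint; the point is just that one multipart body
# can carry a JSON part and a file part side by side.
metadata = '{"title": "Vacation photo", "tags": ["beach"]}'
with open("photo.jpg", "rb") as f:
    resp = requests.post(
        "https://api.example.com/upload",
        files={
            # (filename, value, content type); filename=None makes
            # this an ordinary form field rather than a file part.
            "metadata": (None, metadata, "application/json"),
            "file": ("photo.jpg", f, "image/jpeg"),
        },
    )
resp.raise_for_status()
```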
It’s not even an HTTP invention, it’s RFC2046 MIME from 1996. RFC2388 standardised its use in HTTP in 1998.
The elegant thing about MIME is it allows multiple encodings and cross-references, so you can have your HTML and the images displayed in the HTML both optimally encoded in the same document, which was handy back in the time when HTML emails were taking off and marketing insisted that the fancy image signature they designed had to show every time, even when the person was reading the email offline…
Of course back then we had to encode the image in base64 anyway because of non-8-bit-clean email servers. But I digress and will go back to my rocking chair.
Avro has some really cool features like inbuilt schemas, schema versioning and migration (e.g. deprecating or renaming fields) but you pay for them with more overhead than MessagePack.
Protocol Buffers have schemas too (though versioning and ensuring compatibility is quite messy and requires understanding internals). And it has less overhead than MessagePack.
I'm not sure what Avro is doing, but as a rule schema enables you to have less overhead, rather than more. The main advantage of MessagePack over schema-based formats is that it's dead-simple and mostly compatible with JSON. Schema-based formats usually need either a code generator or maintaining an annotated version of your data classes and making sure they match the schema.
(Of course, with JSON or MessagePack you might still end up using a serialization library and something like JSON Schema).
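To illustrate the dead-simple part, a quick sketch with the Python `msgpack` package (data invented for the example):

```python
import json
import msgpack  # pip install msgpack

record = {"id": 42, "name": "sensor-7", "readings": [1.5, 2.25, 3.0]}

packed = msgpack.packb(record)            # bytes out, no schema needed
assert msgpack.unpackb(packed) == record  # and straight back again

# Same JSON-compatible data model, just a more compact encoding:
print(len(packed), "vs", len(json.dumps(record).encode()))
```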
Oh, as I understand it, Avro's schemas aren't just "built in" in the sense of being a first-class part of the protocol; rather, each message carries the schema needed to interpret it. This adds overhead to every message (it's still a binary protocol, though), but crucially it avoids a whole category of hassles around schema distribution and updating.
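For illustration, a sketch of that with the Python `fastavro` package and Avro's container-file API, where the writer's schema travels with the data (schema and record invented for the example):

```python
from io import BytesIO
from fastavro import writer, reader  # pip install fastavro

schema = {
    "type": "record",
    "name": "User",
    "fields": [
        {"name": "name", "type": "string"},
        {"name": "age", "type": "int"},
    ],
}

buf = BytesIO()
# The writer embeds the schema in the header, next to the binary rows.
writer(buf, schema, [{"name": "alice", "age": 30}])

buf.seek(0)
# The reader needs no out-of-band schema: it reads it from the data.
for record in reader(buf):
    print(record)
```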
I just can't get over Cabo's choice to put 65-bit (yes, 65) fixed-size ints in the wire format, and to top it off by making them ones' complement (the sign bit is part of the type, and the longest fixed-size ints have a 64-bit range)...
Okay, that live preview of ray-traced "sun" lighting is impressive. Trying to get a well-lit scene is not a trivial thing, and if it's that easy going forward, that's going to be awesome.
Thanks! Writing has actually helped me so many times in improving the design and implementation. It forces me to question my assumptions and ask myself: would the reader be able to understand why I made such a decision? I have to justify everything I do, which helped me remove unnecessary complexity and focus on the more important aspects.
Tell me, Mr. Ad-man, what good is an ad network... without an internet connection?
I recently got my hands on a "smart TV" for free because the power supply was broken and the replacement the owner had bought did not work. That turned out to be due to the fact that he bought the wrong board so I used parts from both boards to create a working power supply and there I was with a working "TCL 50DP660", this turned out to be a 50" 4K Android TV. Whatever I do with it, it won't get an unfettered internet connection just like all other 'smart' things around here. They live on their own private network where they only get to see what I allow them to see, i.e. my own media services and whatever proxy service I provide to the outside world. No auto-update, no ads, no nothing.
In Cocaine [1] Dillinger spells "New York" as 'a knife, a fork, a bottle and a cork'. Here we spell the 'New Net' as 'a wall, a block, a proxy and a lock'
Totally off topic, but I have wondered for 30 years where Information Society got “a knife and a fork, a bottle and a cork, that’s the way to spell New York”, because it was clearly intended as a sample or re-creation. Now I’m one step closer.
I've been a die-hard vim (now neovim) user for over a decade, and here's my advice: learn it if it's fun. It's not going to turn you into a different kind of programmer and it's not going to make your productivity skyrocket.
I love vim, not because of some huge speedup, but because it just feels good. It's like being in a workshop where all the tools are in just the right spot. I figure if I'm going to spend 8 hours a day coding I want it to feel really nice. And it does.
This. I have loved and used Vim since I was 18 years old, but I did so because I fell in love with the tool right away and felt right using it, not because it was "cool" or whatever.
I tried other editors; they didn't feel right, and I didn't force myself to continue using them. My advice if you're trying vim is the same: if it doesn't feel right, if it doesn't feel like the tool works for you, look somewhere else.
As a developer, I've long tended to lean into the tech, to figuring out how to script & control my world. (It's been a strong distinction versus a lot of folks I see who are much more interested in getting through the mission at hand at as express a pace or as direct a path as possible.)
I'm not a great vim user by any means, but it feels comfortable & nice for me at this point. Vim is forever part of the virtuous cycle, forever letting you mix this idea for doing something with that technique. It fits my mentality fantastically well. Computers rarely have such fantastically well-equipped sandboxes, and even more rarely is software so modular & flexible, so readily recombinable.
And I just absolutely adore being in a world where anything can be scripted, where every single ounce of power I get compounds with and works with every other bit of power I've accrued. Where everything fits together, and is usable on the fly or when trying to do more elaborate crafting.
One little recommendation I would have given my old self: expand the power domain. I was a little slow to really get into scripting. VimScript was kind of OK but, for me personally, also kind of ass; I should have leaned in more. Nowadays NeoVim has excellent Lua support built in, which is great, and I really, really, really love Denops, which lets me use a lot more tools, libraries, and ecosystem that I know & that are more mainstream (JavaScript). https://github.com/vim-denops/denops.vim
This is totally off topic, but the one thing that's not in any way modular or composable is that vim assumes there's only one screen, only one output. It's been almost a decade since Tarruda blogged the idea of a Smart UI where multiple apps could effectively share a headless neovim, where there could be multiple different terminals at once, but there's been (afaik) no progress decoupling the core of vim from rendering into one and only one screen, which is my transcendent vim dream to expand beyond.
https://tarruda.github.io/articles/neovim-smart-ui-protocol/
Only problem is that vim doesn't work with LSP. It is a broken tool.
Easy check proving absolutely that it is broken:
1. Create a new wails project with `wails init -n myproject -t svelte-ts`
2. Create a svelte file and use a function defined in app.go
3. Run `wails dev` to compile and run the app
4. Delete the function from app.go
Result: The svelte file which imports the function generated from app.go doesn't show any errors even though the function doesn't exist anymore.
So use VS Code or Sublime which actually work with LSP without issues.
Bugs are not the only thing that matters. VSCode is way slower than (Neo)Vim, uses more memory, contains proprietary parts and lacks many of (Neo)Vim’s features.
Strangely, I've noticed mixed opinions from developers about whether learning vim is a productivity boost. Some people believe it is while others disagree.
I'll admit my FOMO was what originally got me to start learning vim. I still barely know the basic motions but I'm starting to think it could lead to a productivity boost once I get over the learning curve.
The productivity boost is highly subjective; it depends a lot on your work. Do you edit files a lot or spend more time thinking? Do you use some graphical tools on the side that force you to grab your mouse anyway? With AI tools now, where you can basically select a file and prompt "rewrite all in snake_case" in VSCode, it's becoming even less evident. I think the biggest gain from vim is simply whether you're happier using it or not.
There is a quote from Apple UX designers/engineers about testing the keyboard vs mouse for doing stuff in the OS.
Apparently the test subjects always reported that the keyboard-driven controls were faster, but the timing measurements showed that the mouse was faster.
Chances are the keyboard feels faster, rather than being actually faster.
I'll add another point of view I developed while observing many LaTeX-vs-Word or Excel-vs-SomeObscureCoolThing(tm) threads: people will happily waste thousands of hours over many years to learn vim/emacs/LaTeX/SomeObscureCoolThing, but will plainly refuse to spend 20-200 hours (again, over many years) to properly learn how to use JetBrains' stuff (IntelliJ etc.) or Word/Excel/PowerPoint (or the LibreOffice equivalents) or some other mainstream tool.
I've seen countless times web apps being developed in months that could have been an Excel sheet developed in a week. People wasting weeks on their documents because after a software update LyX would not open them anymore. People (particularly in university) being super stressed, wasting precious time, and occasionally missing deadlines because they wasted too much time fighting LaTeX to align tables or images, because they refused to properly learn how to use Word (or the LibreOffice equivalent, Writer).
And don't even get me started on the plumbing of various tools together. Most vim/emacs users (and I say this as an emacs user) can only integrate other tools as long as there is some copy-paste-ready code, but they can't go much further.
So... Yeah, the productivity boost is incredibly subjective. And chances are it's also fake.
It's not that big a deal (meh), but I'm annoyed by the fact that all this isn't even acknowledged.
Maybe it comes back to the feeling of fun mentioned by GP? I also enjoy working with Vim, while LibreOffice Writer is clunky and Word is not much better (though I have no love for LaTeX). I could make a spreadsheet in under an hour, but if it'll make me feel like I'm hacking together something buggy in an inadequate tool, I'd rather spend more time making a web app or a Python script.
Likewise, mandating a file per class in Java is no big deal on the surface, but having to create and juggle so many files for small classes feels terrible to me, so a seemingly small detail turns me off the language.
I think we should examine these feelings, because they ultimately drive (some part of) our behaviour, and I'd guess they're not just random preferences but are rationalisable.
> I could make a spreadsheet in under an hour, but if it'll make me feel like I'm hacking together something buggy in an inadequate tool, I'd rather spend more time making a web app or a Python script.
Congrats, you missed the point entirely and provided me with a perfect example case:
Have you ever spent a considerable amount of time learning Excel, the very same way you did for Python?
It's very likely that Excel is perfectly adequate and not buggy at all; you're just ignorant (in Excel) and can't go further than "hacking a spreadsheet together".
So, have you spent time properly learning other tools, or are you one of those everything-except-what-I-like-sucks people?
The whole "data and (hidden) code mixed in a seemingly-infinite matrix" concept offends my engineering sensibilities. It's not about bugs in Excel, it's about the bugs my spreadsheets will have since I forgot to fill in that one cell, overran the area used by some formula elsewhere, didn't set the format correctly, so visually all looks good but it breaks a sum...
To get back to my point, working with spreadsheets feels bad (from my experience) because I have to juggle all the things mentioned above. In my favourite programming environments, mistakes of this sort are generally directly evident in the form of errors.
If I had more spreadsheet experience, I would get better at avoiding these mistakes. But I choose to use Python, where they are handled by design.
It was a comparison between keyboard shortcuts and quick access bar, on a graphical tool.
And anyone who has ever played MMO or RTS games will know that early on, quickbars are faster and more precise than shortcuts, but later on, clearly, no one mouse-clicks if they want to stay competitive.
The real productivity boost happens when you become fluent and it feels like a second language that describes actions on text objects. At this point I can do most things as fast as I can think what needs to be done. It's like being bilingual.
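For example, `ci(` reads as "change inside parentheses", `daw` as "delete around word", `dap` as "delete around paragraph": verbs and text objects that compose freely once they're internalized.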
I would say that it’s important to learn and use the vim basics, because you can generally login to any linux server or mac and edit a text file quickly and efficiently, but I would not spend too much time beyond that unless you really love it.
This is awesome! I hate the way tmux hijacks so much of my terminal's behavior (scrollback, searching with escape-/, etc.) and I've been looking for something like this that will manage persistent sessions without any extra nonsense.
BTW, I think your readme shouldn't characterize it as just a resumable ssh tool. I often need to start a long-running process that I want to reconnect to later, or I want to put some always-on system service inside a tmux-like session so I can easily jump in and see logs or mess with things in one way or another. There's a lot of utility besides just handling network dropouts.
The thing is that without hijacking it and passing it through, you can't have nice things like handling resizes, supporting attaching with multiple terminal emulators, or reconnecting to applications that make heavy use of terminal escape codes, because all of those set up persistent state in the terminal emulator.
As a result, any tmux-like layer needs to emulate a console in order to get a view into the state, and then re-render that emulated console on attach to restore state to the terminal emulator that you're connected from.
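As a rough illustration of that emulate-then-re-render loop (not any particular tool's actual code), here's a sketch in Python using the `pyte` terminal-emulator library:

```python
import pyte  # pip install pyte — a VT100/ECMA-48 emulator in Python

# Keep an in-memory screen that mirrors what the wrapped program
# thinks the terminal looks like.
screen = pyte.Screen(80, 24)
stream = pyte.Stream(screen)

def on_child_output(data: bytes) -> None:
    # Feed every byte the child writes into the emulated screen,
    # so cursor position, colors, etc. are tracked as state.
    stream.feed(data.decode("utf-8", errors="replace"))

def redraw_on_attach() -> str:
    # On (re)attach, clear the real terminal and repaint it from the
    # emulated state instead of replaying the whole byte history.
    rows = "\r\n".join(screen.display)
    return "\x1b[2J\x1b[H" + rows
```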
From the readme, this tool does that, kinda. I'm actually confused about why they'd go to the effort of implementing a VT100 emulator and writing the code to redraw the screen from it, yet not bother with the work that would let multiple terminal emulators attach.
This feels like it sits in a weird place between simple, crude tools like dtach, and tools like tmux; shpool has done most of the work to implement tmux-style behavior, and then decides to cut weird corners.
You definitely don't need the in-memory terminal emulator to handle resizes or allow attaching with multiple local terminal emulators, since dtach does both and does not have an in-memory terminal emulator.
> I'm actually confused about why they'd go to the effort of implementing a VT100 emulator, write the code to redraw the screen from it
Well, we kinda cheated here. shpool_vt100 is just the already existing vt100 crate with a single critical bug fixed, so it actually wasn't much work :). Turns out having a nice package manager for a systems language comes with some benefits.
I'm actually open to adding a feature to allow multiple simultaneous connections to a single session. I never really had a use case for it personally, so I haven't prioritized it, but it is something that similar tools support and people keep bringing up. Since this isn't the first time I've heard people talking about it, I just made https://github.com/shell-pool/shpool/issues/40 to track work adding the ability to attach multiple clients to the same session.
> This feels like it sits in a weird place between simple, crude tools like dtach, and tools like tmux; shpool has done most of the work to implement tmux-style behavior, and then decides to cut weird corners.
I'm not aware of any tool that does internal rendering and still handles scrollback and copy-paste in a way that I personally find usable, so these decisions were very much intentional.
I think tmux is a great tool for a lot of people, and I tried to get into it for years, but I could just never get over the weird scrollback and copy-paste issues or the fact that it meant that I couldn't use my normal `i3`/`sway` bindings to switch between terminals inside a tmux session. If tmux works for someone, I think that's great and they should keep using it. shpool is meant for people like me who aren't very good with computers :).
I'm not sure what the popular use case is for multiple connections to one multiplexer. But two (niche-seeming) ones could be: if you have a desktop, you want to be able to SSH to it and use it locally at the same time. Or, if you have two people ssh-ing to one system, letting them share a terminal might be nice (although in that case it would really be nice to give them independent cursors, which starts to become an involved project).
I usually have a drop down terminal that I use the most, then a full screen window on some other space. I can switch between them quickly and connect to any of my sessions from either.
Or one person ssh'ing to the same remote from two or more devices. If I don't feel like sitting in the office (desktop) and grab the laptop and go to the sitting room or back deck I can continue my session without issue, and then transition back later. I don't want to have to disconnect/detach a session when I do this, I want it to be seamless so both connections (actually three typically, an iPad as well) are running continuously.
We do use multiple connections when doing interviews: the candidate solves a test case on a VM while the interviewers observe. The idea was not to confuse candidates by requiring screen sharing during the interview, just to see the particular ssh session.
Hi.
Nice kit.
> I couldn't use my normal `i3`/`sway` bindings to switch between terminals inside a tmux session
Just curious, what are your normal 'i3'/'sway' bindings that you cannot get to work with tmux? And what actual terminal program do you use?
Perhaps tmux wants more config than most people care to bother with, but scrollback and copy-paste can be configured just as you like.
If you're using a tiling window manager, your window-switch keybindings necessarily conflict between the manager and tmux: if you configure the same binding in both and then press it while focused on a tmux window, the tiling window manager will claim the event and override tmux.
Scrollback and copy paste cannot always be configured as you want. I’ve shared some specifics elsewhere in this thread.
Hi.
OK, I get that. Maybe something like A-left to move to the next i3 window and A-S-left to move to the next tmux window within an i3 window? Perhaps a tiling window manager with multiple windows, combined with tmux dividing some of those windows further into sub-windows, isn't an ideal flow.
It's not i3/tmux, but a similar problem exists for vim/tmux, where the vim window management keys will conflict with tmux's.
And there's https://github.com/numToStr/Navigator.nvim that unifies the keys by letting the outside layer (tmux) always ask the inside layer (vim) before any movements.
Although that is indeed a lot more setup. But works pretty well.
I just started using this and it is indeed awesome. The only thing I need to figure out is how to make a tab truly full screen. It's much nicer, more intuitive, has more built-in support for things, and is super lightweight. It's great.
Seems like most of the features you need are what mosh offers. I've been using it for a decade, probably, and it is pretty awesome for high-latency mobile connections (read as: throttled 2G @ 16 kbit/s with interruptions).
Either you waste lots of bandwidth because you have to have a session identifier or nonce in every packet, or you have to map sessions to ports in order to guarantee persistence when the client drops its connection.
Other ways of doing session handling will lead to an attack surface that can probably be used for DoS attacks.
Maybe I am missing something: How would you solve this, given the limitations of UDP and TCP?
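To make the first option concrete, a minimal sketch of a UDP server that keys sessions on a per-packet identifier instead of the source address (packet format and port invented; a real design, like mosh's, would also authenticate every packet so the identifier can't be hijacked):

```python
import socket

sessions = {}  # session_id -> last address the session was seen from

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 9999))

while True:
    packet, addr = sock.recvfrom(2048)
    session_id, payload = packet[:8], packet[8:]
    # The client may have roamed to a new IP/port; just re-bind the
    # session to wherever this identifier now comes from.
    sessions[session_id] = addr
    sock.sendto(session_id + b"ack:" + payload, addr)
```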
I once had a VPN utility that HAD to be closed with a keyboard interrupt in order for it to shut off properly, so my systemd setup for it didn't work. I ended up making bash aliases for tmux commands to run it and send the keyboard interrupt signal into it to stop it. I'm sure there was a way to do this with systemd, but tmux was easy, if a bit jank.
The best signal was had by hanging my phone from the ceiling. Since the phone itself was connected to the cafe Wifi, I couldn't use wifi to connect to the phone. So I shared the phone's (cafe wifi) internet over Bluetooth. (I didn't know that was a thing until I tried it!)
(I think I was able to connect to that with another phone and set up a wifi hotspot for all my devices... but it's been a while!)
Maybe it was something like a Raspberry Pi that he placed at that cafe? Curious that his Bluetooth had more reach than the Wifi itself, though. Maybe with a directional antenna. Agree, it would be interesting to know his setup.