Hacker News | mscdex's comments

This seems a bit strange to me considering the default behavior is to only show a suggested command if possible and do nothing else. That means they explicitly opted into the autocorrect feature, didn't bother to read the manual first, and just guessed at how it's supposed to be used.

Even the original documentation for the feature back when it was introduced in 2008 (v1.6.1-rc1) is pretty clear about what the supported values are and how they are interpreted.


It didn't work for me either, but apparently one of the issues is that it assumes window.speechSynthesis is available which may be disabled via about:config > media.webspeech.synth.enabled.


In my opinion it's primarily for less noisy logs.


For what it's worth, if you have control over both client and server and don't want to limit access using a strict IP whitelist, an alternative solution that will keep your logs quieter and add additional protection is good old-fashioned port knocking. knockd on Linux helps with automating this on the server side. Client side, you can use anything (although knockd does include a dedicated client) to send your sequence of packets before actually connecting.
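To illustrate the "client side you can use anything" point: a minimal knock client is just a loop of TCP connection attempts, since knockd only needs to see the SYN packets, not a completed connection. A sketch in Python (the host and port sequence here are hypothetical and would have to match the sequence configured in your knockd.conf):

```python
import socket

def knock(host, ports, timeout=0.3):
    """Send a port-knock sequence by attempting a TCP connection to
    each port in order. The connections are expected to be refused or
    filtered; the SYN packet is all the server-side daemon watches for."""
    for port in ports:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            s.connect((host, port))
        except OSError:
            pass  # refused/timed out is fine; the SYN already went out
        finally:
            s.close()

# Hypothetical sequence; replace with your server and knockd sequence.
knock("127.0.0.1", [7000, 8000, 9000])
```

After the sequence, you'd open your real ssh connection as usual within knockd's configured time window.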


I really think this solution is underrated. Port knocking is robust, doesn't use any special technology, and servers using it can't reasonably be scanned for. The only real disadvantage is that any passive observer can see your knock sequence in "plaintext" (so that includes anyone logging netflow).

Even so, I don't know why OpenSSH hasn't implemented it instead of the silly fail2ban theatre we're discussing in these comments.


One thing that would help with the passive observer would be to have the knock sequence be time-varying, like a TOTP. It's still a very thin additional layer, but sometimes the more defense in depth, the better.


lol, hadn't read all the comments before posting mine.. Have an upvote! Actually, why not do both? Vary the knock code and the resulting ssh port using successive codes.

I just checked the knockd man page, and it turns out it can use a one_time_sequences file that contains a sequence of port knock combinations. I wonder if this file is dynamically checked, or loaded and parsed during startup? Or could one simply echo the TOTP code straight into that file and HUP the knockd service each time (say the TOTP interval was set to something like 5 minutes)?
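As a sketch of the time-varying idea itself: knockd doesn't support this natively, but both the client and whatever job refreshes the one_time_sequences file could derive the current sequence from a shared secret, TOTP-style. The function below borrows the dynamic-truncation trick from HOTP (RFC 4226) to map each time step to a set of ports; the name and parameters are made up for illustration:

```python
import base64
import hmac
import struct
import time

def totp_knock_sequence(secret_b32, n_ports=3, interval=300,
                        lo=1024, hi=65535, t=None):
    """Derive a time-varying port-knock sequence from a shared
    base32 secret. Anyone holding the secret computes the same
    ports for the current time step; they rotate every `interval`
    seconds (5 minutes here, matching the parent comment's example)."""
    key = base64.b32decode(secret_b32)
    step = int((time.time() if t is None else t) // interval)
    ports = []
    for i in range(n_ports):
        # One HMAC per port, keyed on (time step, port index).
        digest = hmac.new(key, struct.pack(">QI", step, i), "sha1").digest()
        # HOTP-style dynamic truncation to a 31-bit integer.
        offset = digest[-1] & 0x0F
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        ports.append(lo + code % (hi - lo + 1))
    return ports
```

A cron job could write the next window's sequence into one_time_sequences and HUP knockd, while the client computes the same ports locally before knocking.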



Well, that's the answer. Thank you.


I wonder, could you combine command-line TOTP tools with port knocking for fully time-based, unique knock codes? Or even use the TOTP code as the ssh port?

I'm totally gonna do this.


Because it's a stupid low-entropy key put in front of a service that you should have been securing with much stronger keys instead of passwords since circa the '90s.

You're wanting to add a screen door to a sub, and it's just a feel-good option for those who don't understand the math involved.

The proper solution is to stop using passwords and use keys or proper cert auth.


I think it goes without saying that you would still want to be using keys instead of passwords for the actual authentication. Port knocking should always be an additional layer, not a replacement layer.


I find adding dynamic DNS entries to my firewalls much more efficient, with more meaningful protection value.

A timed job that checks the IP of your clients and updates the firewall every 30 seconds seems a much more secure method than having a magic sequence of ports that can be captured in the wild.

It’s hard to spoof a full tcp connection (with a key) needed to update your ddns.

The best part is you can leave your ddns on a separate box or service, which complicates the compromise of a single host.
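A sketch of what that timed job might look like in Python — the ddns hostname and the nftables table/set names are hypothetical, and the named set would need to be created beforehand:

```python
import socket
import subprocess

# Hypothetical dynamic-DNS names for the clients you allow in.
ALLOWED_HOSTS = ["laptop.example.dyndns.org"]

def build_allowlist(hosts):
    """Resolve each dynamic-DNS name and return the current set of IPv4
    addresses. Unresolvable names are skipped rather than erroring out."""
    ips = set()
    for host in hosts:
        try:
            for info in socket.getaddrinfo(host, None, socket.AF_INET):
                ips.add(info[4][0])
        except socket.gaierror:
            pass  # name temporarily unresolvable this cycle
    return ips

def refresh_firewall(ips):
    """Replace the contents of a pre-created nftables named set with the
    freshly resolved addresses (table/set names here are made up)."""
    subprocess.run(["nft", "flush", "set", "inet", "filter", "ssh_allow"],
                   check=True)
    if ips:
        subprocess.run(["nft", "add", "element", "inet", "filter",
                        "ssh_allow", "{ " + ", ".join(sorted(ips)) + " }"],
                       check=True)
```

Run from a systemd timer or cron every 30 seconds or so; the firewall rule itself just matches source addresses against the `ssh_allow` set on port 22.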


Generally speaking (I've not tested this kind of setup with the cosmopolitan libc) what I've done in the past with C is use something like libmicrohttpd along with some web assets linked into the executable (`xxd -i` can help you with the assets). That gives you a single (small) binary where you can use HTML/CSS/JS for the main GUI and logic.

You can then integrate additional libraries as you please, such as sqlite3 to give yourself fast, local database access over an endpoint on the embedded HTTP/S/2 (or websocket) server.


Even with node.js, it's possible to be faster and/or use less memory than better-sqlite3. For example, here is an opinionated sqlite addon I wrote that shows just that (while executing queries asynchronously): https://github.com/mscdex/esqlite


There are still some alternative shells that exist, even for Windows 10 and 11, including xoblite (inspired by blackbox I believe) [1] and Cairo Shell [2].

[1] https://xoblite.net/docs/

[2] https://cairoshell.com/


Oof, I must still have my xoblite config from 2006-2007 somewhere on a hard drive backup, thanks for the nostalgia


Every time I see articles talking about this subject, it's always completely focused on Twitch streamers that stream for a living. I would be more interested in hearing the contrast with streamers who stream for fun/not for a living. Which problems are the same between the two groups and which are unique?


> Every time I see articles talking about this subject, it's always completely focused on Twitch streamers that stream for a living.

Because, for the most part, those are the only people affected by these problems. If you don't stream with the intent to maximize your viewer counts and profits at all costs - stream when you feel like it, play what you want, don't chase trends, don't encourage parasocial relationships - you're basically immune to most of these issues.


But these people aren't excluded from the numbers in these articles. 95% of people stream to 0 viewers, but do they even care?


I'm kind of one of those streamers (went into it more in my other comment).

While I would love to have a decent number of people following me (assuming I get back into the habit of doing it regularly), if no one does, it's really no big deal, I have a well-paying full-time job anyway, I don't need to earn a single cent from it.


I recognize that, but surely there could be overlap in some areas or problems unique to streamers who don't stream for a living.

For example, I could see streamers who stream for a potential social benefit may feel the need to chase trends, play the more popular games, and other similar things in order to increase/maintain viewership for more socializing opportunities. However they may also have a unique set of problems that streamers who do it for money do not have.


> streamers who stream for a potential social benefit may feel the need to chase trends, play the more popular games, and other similar things in order to increase/maintain viewership for more socializing opportunities

I've never heard of any streamers like this, but if any do exist, they are effectively no different than people who stream for profit because the intermediate goal of maximizing viewership is the same.


Maybe. Or maybe there are interesting differences. It merits being looked into rather than dismissed immediately.


Small streams are great when you want to interact with chat and the streamer. Once a stream grows bigger than a couple hundred viewers, interaction suffers and the stream becomes something else.

Bigger streamers are more like watching regular TV where you just passively watch.


this is the difference in streamer dynamics that doesn't get enough coverage


From their pricing FAQ[1]:

  19. Why do Picovoice engines require an internet connection?

  While data is processed offline, locally on-device, Picovoice engines call home servers to stay active and report the consumption for billing purposes only.
[1] https://picovoice.ai/pricing/


So... not offline.


That board doesn't have connectivity! If we had a way to connect to the internet without a connectivity chip, I would have had a more exciting post!


If it is fully offline, then why does the microcontroller need a license key? What does it use the key for? How can any monitoring/analytics take place?

I read the article and was interested but after clicking around the site I was thoroughly confused about billing/pricing/metering.


Picovoice runs on almost anything: web browsers, mobile, desktop, single-board computers, and microcontrollers. For the platforms that have connectivity (i.e. almost anything aside from microcontrollers), we do call home for license management. This helps us keep the `Free Tier` free for personal users, hackers, and skunkworks projects, while making sure we get paid by enterprise customers with deployments at scale [1]. On a microcontroller like the one in this tutorial, there is NO connectivity option; hence, in this specific case, it is 100% offline with no license management. In other cases, voice recognition is 100% offline, but the call home for license management needs connectivity.

[1] https://picovoice.ai/pricing/


Then please explain the

> Picovoice engines call home servers to stay active and report the consumption for billing purposes only.


It sounds like he's saying that the key is hashed offline to check for validity, and doesn't actually verify it via server.


Yes, the board does not have a connectivity chip of any sort [1]. Even if one really wants to, there is no way to connect to anything from this board.

[1] https://www.st.com/en/evaluation-tools/stm32f4discovery.html


Potential buffering issues aside, as others have pointed out, the node.js example is performing asynchronous writes, unlike the other languages' examples (as far as I know).

To do a proper synchronous write, you'd do something like:

  node -e 'const { writeSync } = require("fs"); while (1) writeSync(1, "1");' | pv > /dev/null
That gets me ~1.1MB/s with node v18.1.0 and kernel 5.4.0.

