Cheetah26's comments

Location: Rochester, NY

Remote: Ok

Willing to relocate: Yes (within northeastern US)

Technologies: Javascript, Typescript, Svelte, Tailwind, Golang, Java, Docker, Kubernetes, Nix, PostgreSQL, GCP

Résumé/CV: https://shanemongan.com/files/Shane_Mongan_Resume.pdf

Email: scmongo@gmail.com

I'm currently in my last semester of college, getting a BS in Computing and Information Technologies from RIT. I started this degree planning to go into sysadmin work, but have been gravitating toward DevOps / SRE skills. I'm a very enthusiastic learner and have been exploring with my Docker-focused homelab for the past 6 years, including running some custom services that I've come to rely on daily.


As someone who has woken up while camping to find a bear looking me in the face, I can say with 100% confidence that on a physiological level these are not the same thing.


Much better legislation would be requiring that the firmware/software source be released at EOL, so that users can maintain the hardware they purchased for as long as they like.


Probably we need both. Hardkill all devices, and let determined users resurrect their own devices with the open source firmware if needed. The point is that millions of vulnerable devices won't stay online by default.


What percentage of customers has ever even logged in to their home router? It will be way below 10% (I would wager in the low single digits).

So

* manufacturers open-source it

* "someone" is going to maintain it, for free

* all these people are going to find a non-malware-infested fork

* and upload a custom ROM to their devices.

I just don't see it.

Automatic updates/killswitch are the only way forward.


Want to sell a device? Deposit the software in escrow, to be released one year after the firm stops supporting the device!


Why wait a year?


For anyone who likes this sort of thing I'd recommend checking out the Shavian alphabet[1][2].

Similar goals with some very cool choices for matching letters to their sound. It also kinda handles accent variations with a few extra letters.

[1] https://www.shavian.info/ [2] https://en.m.wikipedia.org/wiki/Shavian_alphabet


If we're dropping the Latin alphabet, Tengwar is another option. It's a vocalized abjad (a better term would perhaps be abugida or alphasyllabary), and vowels are written with diacritics.


Do you have any sense as to whether Tengwar is any more or less compact/concise than Shavian?


Location: Rochester, NY

Remote: Yes

Willing to relocate: No

Technologies: Docker, Linux, Bash, Python, PowerShell, Go, Java, JavaScript, TypeScript, Svelte/SvelteKit, SQL

Résumé/CV: https://shanemongan.com/files/Shane_Mongan_Resume.pdf

Email: scmongo@gmail.com

LinkedIn: https://www.linkedin.com/in/shane-mongan/

I'm a 4th-year Computing and Information Technology student at RIT, looking for a summer job / internship before I head into my final semester. Ideally I'm looking to explore DevOps, as I've had a ton of fun running containers at home for the past 5 years, and I'm currently taking RIT's first-ever DevOps course. I am also open to building on my prior experience in systems administration, or trying web development, which I've been highly successful with at school.


I got a 2019 Mazda3 sedan a few months ago and I'm very happy so far. It has a few more features than I wanted, but I was reassured by a family member who is a mechanic for a Mazda dealer that everything is very reliable.

I really like that all the auto stuff can be turned off if you want, and that you get all the capability of the screen while still having physical buttons. Plus I got 36 mpg on my first road trip with cruise control set at 85.


I think the ideal solution here would be if companies were required to ship an open source driver, and then optionally offer a proprietary driver for an extra fee which includes whatever 'special sauce' (as another comment put it) that they don't want to release.

The example I'm thinking of is Nvidia's newer GPUs and DLSS. The hardware would come with open drivers, but if you want the upscaling that's an additional fee. While maintaining additional drivers is more work for companies, I think they'd actually benefit from this because it could be a recurring revenue stream for older hardware.


Since you're going into your freshman year I'll offer my college advice as a current senior. The moment that you feel a class will be covering something you already know, reach out to your advisor and the professor. I personally wasted far too much time in intro classes that could've been easily bypassed. While easy A's are great for the GPA, it's better to spend your time and money actually learning.


Awesome!

For years I've had "node based CAD" on my ideas list. I imagined something like Blender's shader nodes, but this is even better.


This looks like something I've been wanting to see for a while.

I currently have a Google Home and I'm getting increasingly fed up with it. Besides the privacy concerns, it seems like it's getting worse at being an assistant. I'll want my light turned on by saying "light 100" (for light to 100 percent), and it works about 80% of the time, but the other times it starts playing a song with a similar name.

It'd be great if this allows limiting / customizing what words and actions you want.


Personally, I plugged a Jabra conference speaker into a Raspberry Pi, and if it hears something interesting, it sends the audio to my local GPU computer for decoding (with Whisper) + answer-getting + a response sent back to the Raspberry Pi as audio (with a model from coqui-ai/TTS, but using more plain PyTorch). Works really nicely for having very local weather, calendar, ...
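A minimal sketch of what the Raspberry Pi side of a setup like this might look like, assuming a hypothetical HTTP endpoint on the GPU box that accepts a WAV upload and returns the synthesized answer as WAV (the endpoint name, paths, and timings are all made up; this is not the commenter's actual code):

    # Pi-side loop: record a clip, ship it to the GPU box, play back the answer.
    import subprocess
    import requests

    GPU_SERVER = "http://gpu-box.local:8000/assistant"  # hypothetical endpoint

    def ask_assistant(seconds: int = 5) -> None:
        # Record a short mono 16 kHz clip from the default ALSA capture device.
        subprocess.run(
            ["arecord", "-f", "S16_LE", "-r", "16000", "-c", "1",
             "-d", str(seconds), "/tmp/query.wav"],
            check=True,
        )
        # Send it to the GPU machine, which runs Whisper + answer logic + TTS.
        with open("/tmp/query.wav", "rb") as f:
            resp = requests.post(GPU_SERVER, files={"audio": f}, timeout=30)
        resp.raise_for_status()
        # The server replies with a WAV of the spoken answer; play it back.
        with open("/tmp/answer.wav", "wb") as f:
            f.write(resp.content)
        subprocess.run(["aplay", "/tmp/answer.wav"], check=True)

    if __name__ == "__main__":
        ask_assistant()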


Neat!

If you don't mind my asking, what do you mean "if it hears something interesting"? Is that based on wake word, or always listen/process?


Both:

A long while ago, I wrote a little tutorial[0] on quantizing a speech commands network for the Raspberry Pi. I used that to control lights directly and also for wake-word detection.

More recently, I found that I can just use more classic VAD, because my use cases typically don't suffer if I turn the microphone on/off. My main goal is to not get out the mobile phone for information. That reduces the processing when I turn on the radio...

Not as high-end as your solution, but nice enough for my purposes.

[0]. https://devblog.pytorchlightning.ai/applying-quantization-to...
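To illustrate the "classic VAD" gating idea, here is a rough sketch using the webrtcvad package as one possible implementation; it only shows the general approach of forwarding audio downstream while speech is detected, and is not the setup described above:

    # Gate the microphone with simple voice activity detection:
    # only pass along runs of frames that the VAD flags as speech.
    import webrtcvad  # pip install webrtcvad

    SAMPLE_RATE = 16000          # webrtcvad supports 8/16/32/48 kHz, 16-bit mono PCM
    FRAME_MS = 30                # frames must be 10, 20, or 30 ms long
    FRAME_BYTES = SAMPLE_RATE * FRAME_MS // 1000 * 2

    vad = webrtcvad.Vad(2)       # aggressiveness 0 (lenient) .. 3 (strict)

    def speech_segments(frames):
        """Yield runs of consecutive frames that the VAD flags as speech."""
        buffer = []
        for frame in frames:
            if len(frame) != FRAME_BYTES:
                continue  # skip short/trailing frames
            if vad.is_speech(frame, SAMPLE_RATE):
                buffer.append(frame)
            elif buffer:
                yield b"".join(buffer)
                buffer = []
        if buffer:
            yield b"".join(buffer)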


Totally get it!

There are at least two ways to deal with this frustrating issue with Willow:

- With local command recognition via ESP SR, recognition runs completely on the device and the accepted command syntax is defined. It essentially does "fuzzy" matching to handle your light command ("light 100"), but there's no way it's going to send some random match to play music (see the sketch below for the general idea).

- When using the inference server -or- local recognition, we send the speech-to-text output to the Home Assistant conversation/intents[0] API, and you can define valid actions/matches there.

[0] - https://developers.home-assistant.io/docs/intent_index/
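A rough illustration of the closed-command-set idea in plain Python, using difflib for the fuzzy matching. This is not the ESP SR implementation or the Home Assistant API, just a sketch of why a fixed grammar can't wander off into playing music:

    # Fuzzy-match a transcript against a fixed, known command grammar.
    # Anything that doesn't match closely enough is rejected outright,
    # so a misheard phrase can never trigger an unrelated action.
    import difflib
    import re

    COMMANDS = {
        "light {level}": lambda level: print(f"setting light to {level}%"),
        "light on": lambda: print("turning light on"),
        "light off": lambda: print("turning light off"),
    }

    def handle(transcript: str) -> None:
        text = transcript.lower().strip()
        # Pull out a trailing number so "light 100" maps to the {level} template.
        match = re.fullmatch(r"(.*?)\s*(\d+)", text)
        if match and difflib.get_close_matches(match.group(1) + " {level}",
                                               list(COMMANDS), n=1, cutoff=0.6):
            COMMANDS["light {level}"](int(match.group(2)))
            return
        best = difflib.get_close_matches(text, list(COMMANDS), n=1, cutoff=0.6)
        if best and best[0] != "light {level}":
            COMMANDS[best[0]]()
        else:
            print("no matching command; ignoring")  # never a random fallback

    handle("light 100")       # -> setting light to 100%
    handle("lights off")      # -> turning light off (fuzzy match)
    handle("play despacito")  # -> no matching command; ignoring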


This drives me nuts and happens all the time as well. To be honest, I unplugged my Google Home device a while back and haven't missed it. It mostly ended up being a clock for me: I'd try to change my lights to a color it apparently wasn't capable of, and then have to sit there for minutes listening to it list stores in the area that might sell lights in that color, or something. It wouldn't stop. That's just one of many frustrating experiences I've had with that thing.


THIS. It's hilarious and infuriating that our digital assistants struggle to understand variants of "set lights at X% intensity".

However, if I spend the time to configure a "scene" with the right presets, Google has no issue figuring it out.

If only it could notice regular patterns about light settings and offer suggestions that I could approve/deny.

