As someone who has woken up to a bear looking me in the face while camping, I can say with 100% confidence that, on a physiological level, these are not the same thing.
Much better legislation would be requiring that the firmware/software source be released at EOL, so that users can maintain the hardware they purchased for as long as they like.
We probably need both: hard-kill all devices, and let determined users resurrect their own devices with the open-source firmware if needed. The point is that millions of vulnerable devices won't stay online by default.
If we're dropping the Latin alphabet, Tengwar is another option. It's a vocalized abjad (a better term is perhaps abugida or alphasyllabary), and vowels are written with diacritics.
I'm a 4th-year Computing and Information Technology student at RIT, looking for a summer job / internship before I head into my final semester. Ideally I'm looking to explore DevOps, as I've had a ton of fun running containers at home for the past 5 years, and I'm currently taking RIT's first-ever DevOps course. I'm also open to building on my prior experience in systems administration, or trying web development, which I've been highly successful with at school.
I got a 2019 Mazda3 sedan a few months ago and I'm very happy so far. It has a few more features than I wanted, but I was reassured by a family member who is a mechanic for a Mazda dealer that everything is very reliable.
I really like that all the automatic stuff can be turned off if you want, and that you get all the capability of the screen while still having physical buttons. Plus I got 36 mpg on my first road trip with cruise control set at 85.
I think the ideal solution here would be if companies were required to ship an open source driver, and then optionally offer a proprietary driver for an extra fee which includes whatever 'special sauce' (as another comment put it) that they don't want to release.
The example I'm thinking of is Nvidia's newer GPUs and DLSS. The hardware would come with open drivers, but if you want the upscaling that's an additional fee. While maintaining additional drivers is more work for companies, I think they'd actually benefit from this because it could be a recurring revenue stream for older hardware.
Since you're going into your freshman year, I'll offer my college advice as a current senior. The moment you feel a class will be covering something you already know, reach out to your advisor and the professor. I personally wasted far too much time in intro classes that could've been easily bypassed. While easy A's are great for the GPA, it's better to spend your time and money actually learning.
This looks like something I've been wanting to see for a while.
I currently have a Google Home and I'm getting increasingly fed up with it. Besides the privacy concerns, it seems like it's getting worse at being an assistant. I'll want my light turned on by saying "light 100" (for light to 100 percent), and it works about 80% of the time, but the other times it starts playing a song with a similar name.
It'd be great if this allows limiting / customizing what words and actions you want.
Personally, I plugged a Jabra conference speaker into a Raspberry Pi, and if it hears something interesting, it sends the audio to my local GPU computer for decoding (with whisper) + answer-getting + a response sent back to the Raspberry as audio (with a model from coqui-ai/TTS, but using more plain PyTorch). Works really nicely for having very local weather, calendar, ...
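In case a sketch helps anyone picture the flow, the GPU-side service could be as small as something like this (the endpoint name, model choices, and the answer() helper are placeholders I made up, and the setup above uses a plain-PyTorch TTS model rather than the Coqui API):

    # Sketch of the GPU-side service: the Pi POSTs a WAV clip, the server
    # transcribes it with whisper, builds an answer, and returns speech.
    import tempfile

    import whisper                      # openai-whisper
    from TTS.api import TTS             # coqui-ai/TTS
    from fastapi import FastAPI, UploadFile, File
    from fastapi.responses import FileResponse

    app = FastAPI()
    stt = whisper.load_model("base")    # pick a size your GPU can handle
    tts = TTS("tts_models/en/ljspeech/tacotron2-DDC")  # any Coqui voice

    def answer(text: str) -> str:
        # placeholder "answer-getting" step: weather, calendar lookups, etc.
        return f"You said: {text}"

    @app.post("/ask")
    async def ask(audio: UploadFile = File(...)):
        # write the uploaded clip to disk so whisper can read it
        with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as f:
            f.write(await audio.read())
            wav_in = f.name
        text = stt.transcribe(wav_in)["text"]
        wav_out = tempfile.mktemp(suffix=".wav")
        tts.tts_to_file(text=answer(text), file_path=wav_out)
        return FileResponse(wav_out, media_type="audio/wav")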
A long while ago, I wrote a little tutorial[0] on quantizing a speech-commands network for the Raspberry. I used that to control lights directly and also for wake word detection.
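Not the tutorial's network, but to give a rough idea of the quantization step, something along these lines (the layer sizes and class count are made up):

    import torch
    import torch.nn as nn

    # Toy stand-in for a small speech-commands classifier
    model = nn.Sequential(
        nn.Flatten(),
        nn.Linear(40 * 101, 128),   # e.g. 40 MFCC bins x 101 frames
        nn.ReLU(),
        nn.Linear(128, 12),         # 12 command classes
    )
    model.eval()

    # Dynamic quantization stores the Linear weights as int8, which is the
    # kind of shrink that keeps inference comfortable on a Pi's CPU
    quantized = torch.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8
    )

    with torch.no_grad():
        logits = quantized(torch.randn(1, 40, 101))
        print(logits.argmax(dim=1))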
More recently, I found that I can just use more classic VAD, because my uses typically don't suffer if I turn the microphone on and off. My main goal is to not have to get out the mobile phone for information. That also reduces the processing when I turn on the radio...
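For anyone curious what the VAD gate looks like in practice, a minimal sketch using the webrtcvad package (the setup above doesn't say which VAD it uses; frame size and aggressiveness here are just illustrative):

    import webrtcvad

    vad = webrtcvad.Vad(2)          # aggressiveness 0-3
    SAMPLE_RATE = 16000             # webrtcvad accepts 8/16/32/48 kHz
    FRAME_MS = 30                   # frames must be 10, 20, or 30 ms
    FRAME_BYTES = SAMPLE_RATE * FRAME_MS // 1000 * 2   # 16-bit mono PCM

    def speech_frames(pcm: bytes):
        # Yield only frames that contain speech; silence never leaves the Pi.
        for i in range(0, len(pcm) - FRAME_BYTES + 1, FRAME_BYTES):
            frame = pcm[i:i + FRAME_BYTES]
            if vad.is_speech(frame, SAMPLE_RATE):
                yield frame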
Not as high-end as your solution, but nice enough for my purposes.
There are at least two ways to deal with this frustrating issue with Willow:
- With local command recognition via ESP SR, recognition runs completely on the device and the accepted command syntax is defined up front. It essentially does "fuzzy" matching to handle your light command ("light 100"), but there's no way it's going to send some random match off to play music.
- When using the inference server -or- local recognition, we send the speech-to-text output to the Home Assistant conversation/intents[0] API, and you can define valid actions/matches there.
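To make the second option concrete, the hand-off is just a POST to Home Assistant's conversation API, along the lines of this rough sketch (URL and token are placeholders; whatever intents you've defined decide what actually matches):

    import requests

    HA_URL = "http://homeassistant.local:8123"   # your Home Assistant instance
    TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"

    def send_to_assist(text: str) -> dict:
        # Hand the speech-to-text output to the conversation/process endpoint;
        # only intents you've defined (or allowed built-ins) will match.
        resp = requests.post(
            f"{HA_URL}/api/conversation/process",
            headers={"Authorization": f"Bearer {TOKEN}"},
            json={"text": text, "language": "en"},
            timeout=10,
        )
        resp.raise_for_status()
        return resp.json()

    print(send_to_assist("light 100"))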
This drives me nuts and happens all the time as well. To be honest, I unplugged my Google Home device a while back and haven't missed it. It mostly ended up being a clock for me, because when I'd try to change my lights to a color it apparently wasn't capable of, I'd have to sit there for minutes listening to it list stores in the area that might sell those colored lights or something. It wouldn't stop. This is just one of many frustrating experiences I'd had with that thing.
Remote: Ok
Willing to relocate: Yes (within northeastern US)
Technologies: JavaScript, TypeScript, Svelte, Tailwind, Golang, Java, Docker, Kubernetes, Nix, PostgreSQL, GCP
Résumé/CV: https://shanemongan.com/files/Shane_Mongan_Resume.pdf
Email: scmongo@gmail
I'm currently in my last semester of college, getting a BS in Computing and Information Technologies from RIT. I started this degree planning to go into sysadmin, but have been gravitating towards devops / SRE skills. I'm a very enthusiastic learner, exploring with my Docker-focused homelab for the past 6 years, including running some custom services which I've come to rely on daily.