We recently started cooking meals for the week ahead on Sundays and then freezing them. The aim was to give us more time with the kids and to cut down on housework.
To save time on the Sunday cooking session I have cobbled together a very clunky, and I mean VERY clunky, semi-automated cooking system. It comprises a Raspberry Pi which controls a couple of WiFi mains switches attached to the induction hob and the slow cooker. A wooden spoon attached to a 360-degree servo motor hangs above the pot on the hob and can be activated by the Pi for stirring. Initially I tried to use one of those cheap three-legged novelty vibrating pot stirrers, but that didn't work out. Thermocouples feed back to the Pi to help control cooking.
The whole thing is controlled by a messy Python script, and 'recipes' are JSON-based text files. They just define how long each device should stay on, a max temperature at which to turn it off, and how often the pot should be stirred. I get an email when cooking is done.
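For a flavour of it, here's a minimal sketch of what one recipe step might look like in the control loop. The kitchen_rig helpers (switches, servo, thermocouple, email) are made-up stand-ins for the hardware glue, not the real script:

    import json
    import time

    # Hypothetical helpers wrapping the real hardware: WiFi switch API,
    # stirring servo, thermocouple reads, notification email.
    from kitchen_rig import switch_on, switch_off, read_temp_c, stir_once, send_email

    def run_recipe(path):
        with open(path) as f:
            recipe = json.load(f)

        for step in recipe["steps"]:
            device = step["device"]          # e.g. "hob" or "slow_cooker"
            switch_on(device)
            started = last_stir = time.time()

            while time.time() - started < step["duration_s"]:
                # Cut power once the thermocouple passes the max temp.
                if read_temp_c(device) >= step["max_temp_c"]:
                    break
                # Stir on the configured interval.
                if time.time() - last_stir >= step["stir_every_s"]:
                    stir_once()
                    last_stir = time.time()
                time.sleep(5)

            switch_off(device)

        send_email("Cooking is done!")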
I plan to add some functionality over the summer to tip in ingredients as needed. The biggest issue is that it doesn't handle chunky food; it works for soups, chili sauce, pasta sauce, etc. I'd love to figure out a way to fry and separate mince as you would with a spatula.
My system looks like it was put together by a drunk person. That is somewhat true.
This made me want to see it even more.
Much closer to genius is the Thermomix:
A malfunctioning lid caused hot liquid to escape from the bowl, giving several consumers severe burns. The Court found that Thermomix knew of this risk but still continued to promote and supply the faulty product.
Beware of genius inventions, you may get burned.
I've found that a lot of meals done in the slow cooker tend to taste the same because of the consistency and the type of ingredients typically involved in them.
1.) My hi/low setting probably doesn't match your hi/low setting. From the get-go, we're cooking the same meal at two different temperatures.
2.) A lot of recipes seem to call for low setting for 8 hours. This works well for a lot of people with a standard American work day, since a stew/soup will stay above safe warming temperature in a nearly sealed pot. However, the cook times are too long. Thawed chicken for 8 hours on low almost never turns out with the right texture. In this case, it's convenience > taste.
3.) The flavor is cooked out of the ingredients, and/or there isn't enough seasoning or too many ingredients. When you get the seasoning right, too long of a cook time can dull the flavor. When there isn't enough seasoning, then your meal is bland from the start. When there are too many herbs and spices, you get a mish-mash of flavors that are all competing for attention. There was an article on here a long while ago that categorized foods into low and high amplitude flavors. Something sharply distinctive was high amplitude (think nacho cheese Doritos), while a low amplitude food had weak, hard to discern flavors (plain grits). Too many different ingredients can lead to low amplitude foods, and when I see an ingredient list with 15 different herbs and spices, I almost always steer clear.
Brown a flank steak in a pan. We're not sealing in juices - we're making a crust. Throw it in the slow cooker for a few hours. Check to make sure it's tender. Prep some veggies by cutting and portioning them in containers. Find a stew sauce that's simple. When you're ready to eat, saute your veggies in a little oil, and add a generous portion of sauce when your stir fry is near complete. Throw in the meat towards the end to heat it. Serve over potatoes or rice or something simple.
While it does double as a slow cooker, I have never actually used that functionality.
Sous-vide is much more specialized, but I find it super useful, not for saving time per se, but for being more flexible about it. It's great when you are doing other things at the same time as cooking dinner, i.e. leave the steaks another 30 min while I do this chore? No problem...
"I am rarely happier than when spending an entire day programming my computer to perform automatically a task that would otherwise take me a good ten seconds to do by hand."
Sometimes it simply isn't about net time saved.
I don't really save much time. The real benefit is knowing that the food won't burn. Growing up, my mother would make soups and stews; she'd leave them to simmer for 30 mins without checking. The food on the bottom of the pot would burn and make the whole thing taste nasty. I wanted to make a system that would prevent that, whereas my son just loves building robots and playing with motors.
It was as much a project for me and my son to mess around with as an actual kitchen time saver. But we have plans to develop it and just see where it goes.
It has a very "Wallace and Gromit" vibe to me!
My sister and I live in different cities. She was quite old but did not want any help from strangers. She refused to use a computer keyboard, since she hated computerization, and she was short-sighted.
So I automated it for her: a system that scanned the guests' documents, extracted the data required by the local police for registration via OCR, filled in the form to submit that data, and updated the availability table in the website's database. When powered up, the computer showed only instructions in big, high-contrast text, which were repeated by TTS (essentially "please feed the documents in the scanner", "please remove the documents from the scanner").
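Roughly, the scan-and-read step could look like this; I'm assuming pytesseract for the OCR, pyttsx3 for the spoken prompts, and SANE's scanimage CLI for the scanner, which may differ from what was actually used:

    import subprocess
    import pytesseract          # OCR on the scanned image
    import pyttsx3              # offline text-to-speech for the prompts
    from PIL import Image

    tts = pyttsx3.init()

    def say(text):
        # Spoken prompts mirror the big high-contrast on-screen text.
        tts.say(text)
        tts.runAndWait()

    def scan_document(out_path="/tmp/guest.png"):
        say("Please feed the documents in the scanner.")
        # scanimage is SANE's command-line frontend.
        with open(out_path, "wb") as f:
            subprocess.run(["scanimage", "--format=png"], stdout=f, check=True)
        say("Please remove the documents from the scanner.")
        return out_path

    def extract_text(path):
        # The police-required fields (name, document number, ...) would
        # be parsed out of this raw OCR text.
        return pytesseract.image_to_string(Image.open(path), lang="ita")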
In the end she got used to it, and she was quite proud of being able to be so independent, right up to the last days of her lovely life.
One of the things I'd like to add is the ability to detect natural gas: she doesn't have a lot of gas appliances, but she has no sense of smell, so she'd never notice the usual warning sign of a leak.
What data do the local police require?
Specifically in the case of Italy it seems like it is to be compliant with long standing legislation called TULPS.
That's a great way of using accessibility tools & automation.
To solve this pain point, one of our friends created PlusGuests.
For glucose monitoring I use Dexcom G5 sensors and xDrip open source monitoring application for Android.
Insulin delivery is handled by an Accu-Chek Spirit Combo pump, one of the rare pumps with a Bluetooth connection. The entity deciding the basal rates and corrections is an open source Android app called AndroidAPS.
For insulin I use Fiasp from Novo Nordisk, the fastest available analog, which starts working 10-15 minutes after injection.
All of these combined have dropped my A1c from 7.5% to 5.5%; I'm between 4.0 mmol/l and 8.5 mmol/l 90% of the time, with no severe hypoglycemias. Basically I got myself some more years to live without any complications, and in general I feel much better when I can sleep through my nights without worrying and can eat whatever I want whenever I want.
Oh, and a warning to everybody who tries this: Accu-Chek will not cover any damage, and nobody takes any responsibility for the results of the treatment you get out of the software. For me this works much better than any other treatment, but for others it might even be dangerous.
P.S. Today I built a new widget for my i3 setup, displaying the current glucose and the trend. https://i.imgur.com/VnZ23vO.png
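In case someone wants to build one: a minimal sketch of such a widget, assuming the readings are pulled from a Nightscout instance (xDrip can upload there); the URL is a placeholder, and the script just prints one line for the status bar:

    #!/usr/bin/env python3
    import requests

    NIGHTSCOUT = "https://my-nightscout.example.com"   # placeholder
    ARROWS = {"Flat": "→", "FortyFiveUp": "↗", "FortyFiveDown": "↘",
              "SingleUp": "↑", "SingleDown": "↓",
              "DoubleUp": "⇈", "DoubleDown": "⇊"}

    # Latest CGM entry; Nightscout stores sgv in mg/dl.
    entry = requests.get(f"{NIGHTSCOUT}/api/v1/entries.json?count=1",
                         timeout=5).json()[0]
    mmol = entry["sgv"] / 18.0
    arrow = ARROWS.get(entry.get("direction", ""), "?")
    print(f"{mmol:.1f} mmol/l {arrow}")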
AndroidAPS and OpenAPS do not work with all the same hardware, so you choose your rig based on your CGM and pump model.
If I were him, I'd start by trying to get my hands on a Dexcom G5 system, or if he doesn't have good insurance, the Freestyle Libre has some unofficial Bluetooth readers available that work with xDrip. First you get continuous glucose monitoring working, and then you start thinking about automating the insulin delivery.
These projects started because we're not waiting. The organization behind them is called Nightscout, and their website has information on how to build the needed hardware:
Now the pump manufacturers are seriously planning to bring closed loop systems like I have here to the market. The only model right now that has some of the features is Medtronic 670g, but in comparison, if you know what you're doing, building an open source rig will give you much more control and features than the commercial offerings. This might change in a couple of years though.
Forgetting to bolus and accounting for the dawn phenomenon are pretty challenging in a backpacker's routine. :-)
Of course using it in a pump is a different story, especially if you have a CGM with the pump. Right now I'm using the SMB algorithm in AAPS, which can help with unannounced meals by giving small boluses if it thinks you ate something. The same mechanism pretty much evens out my dawn phenomenon.
One thing you should know before trying Fiasp: the molecule size is much larger and it might cause a stinging feeling when the pump gives you a dose. Try to get a pump that doses slowly, otherwise the first couple of months might be a bit unpleasant. You'll get used to it though, and I don't really notice it anymore.
Always calibrate when you are not sure.
G6 promises a no-calibration mode, but has a hard stop for sensors after 10 days. With G5 and calibrations you can double or triple the sensor lifetime (with xDrip).
My little sister is type I and using the BCG vaccine in this way blows my mind.
I suppose that many stations don't have such markers though.
The first is to analyze other signal features of the commercials (e.g. increased volume), although that may be tricky.
The other option is a crowd-sourced solution, much like browser ad blockers, where users can mark samples recognized as ads. Since the publishers often buy campaigns for many stations in the same country or state, it could be a shared database.
On the other hand, the described project only scratches my own itch. I wouldn't try to productise an app that takes away the main source of income for the radio stations.
No fancy ML needed: after the filter hears one of these repeating fragments a couple of times, it should be able to block it. Fairness bonus: you get to hear each new ad a couple of times.
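A toy sketch of the idea: coarse per-window spectral hashes, counted across days of listening, where a stretch of audio whose hashes have all been heard before is flagged as a probable ad. All thresholds here are made up:

    from collections import Counter
    import numpy as np

    def fingerprint(samples, rate, win_s=1.0):
        # One coarse hash per second of audio: the indices of the
        # strongest FFT bins, which survive small gain changes.
        win = int(rate * win_s)
        hashes = []
        for i in range(0, len(samples) - win, win):
            spectrum = np.abs(np.fft.rfft(samples[i:i + win]))
            hashes.append(hash(tuple(np.argsort(spectrum)[-8:])))
        return hashes

    seen_counts = Counter()   # how often each hash has appeared so far

    def is_probable_ad(hashes, min_repeats=3):
        # A clip whose every window has repeated before is likely an ad.
        seen_counts.update(hashes)
        return all(seen_counts[h] >= min_repeats for h in hashes)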
Maybe radio spots are a little different because they're cheaper and usually lower-quality than TV ads, but this doesn't really work for TV ads - they often have small variations, e.g. 10 sec identical, 5 sec different, 10 sec identical (an easy example). Also, depending on your method of analyzing the audio, it's sometimes broadcast with an inaudible fingerprint that distorts the waveform (say, like MP3 versus WAV, but worse).
So yes, you can find some patterns - but the commercial breaks are highly mixed up, and you wouldn't believe how many distinct commercials there are per channel, even if you think you hear the same ones all the time :)
There were previously some FM to MP3 "ripping" tools that would use the RDS information to tag the resulting recordings -- I'm not sure of the status of them. But it could provide a good way to detect commercials, since most radio stations change to a generic station identification message when they break for commercials / banter. (Whether you'd also want to turn down for banter is another question.)
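A rough sketch of that approach, assuming an external RDS decoder such as redsea that emits one JSON object per decoded group, and a stubbed mute() hook; the generic station-ID string is a placeholder:

    import json
    import subprocess
    import sys

    GENERIC_ID = "GREAT HITS FM"   # placeholder: the station's idle text

    def mute(enabled):
        # Hook up your mixer here; stubbed for the sketch.
        print("MUTE" if enabled else "UNMUTE", file=sys.stderr)

    # Feed demodulated FM samples to this script's stdin; redsea decodes
    # the RDS groups and prints them as JSON lines.
    decoder = subprocess.Popen(["redsea"], stdin=sys.stdin,
                               stdout=subprocess.PIPE, text=True)
    for line in decoder.stdout:
        group = json.loads(line)
        if "radiotext" in group:
            mute(group["radiotext"].strip() == GENERIC_ID)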
Yes, or perhaps a combination of techniques. E.g. shared database to train an ML system to detect ads. Of course, the downside is that the ad industry will then tweak the ads until they pass the ML test.
> I wouldn't try to productise an app that takes away the main source of income for the radio stations.
I wouldn't think of it as taking away a source of income, but rather as forcing them to find a source that doesn't bother their customers so much. Ad blockers seem to be getting more accepted.
And perhaps using the clips for other purposes than "viewing" may in fact be fair use. Especially since you are trying to find a method for not viewing them.
Nothing like a little competition to motivate the improvement of ML systems :)
My home theater receiver does this (Marantz). It works pretty well. It doesn't cancel out the TV commercials though, it just normalizes the volume so it matches the show. But, I assume you could make it work for muting too.
Do you have similar objections to things like self-driving vehicle technology that will take away the main source of income for truck drivers?
I've often toyed with the same idea as the parent poster.
Not promising anything, because each station requires time to tune and money for computational resources.
For some reason, the data and audio are out of sync but once calibrated it works quite well.
It seems the tuning of the algorithm is complex though. There's a dedicated forum for it: http://www.kaashoek.com/comskip/viewforum.php?f=2&sid=effa4b...
I automated handling DNS updates via simple "git pushes" - lets you revert bad changes, and gives you a good history of changes over time - https://dns-api.com/
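(Not how dns-api.com itself works, just the general pattern.) A post-receive hook can validate the pushed zone with BIND's checker and only reload on success; the zone name and paths below are placeholders:

    #!/usr/bin/env python3
    import subprocess
    import sys

    ZONE = "example.com"
    ZONE_FILE = "/var/named/example.com.zone"

    def run(*cmd):
        return subprocess.run(cmd, capture_output=True, text=True)

    # Check the pushed revision out into the live zone directory.
    run("git", "--work-tree=/var/named", "checkout", "-f")

    check = run("named-checkzone", ZONE, ZONE_FILE)
    if check.returncode != 0:
        # Reject the change; git history shows exactly what failed.
        sys.exit(f"zone validation failed:\n{check.stdout}")

    run("rndc", "reload", ZONE)
    print(f"{ZONE} reloaded")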
Basically, I laid guide wire in the lawn and in the cement as well. It is all powered by electricity and has a little docking station. When it is scheduled to cut, it simply rolls out, goes to the lawn and starts cutting. After a pre-determined point, it will go to the compost bin to dump the grass cuttings before going back to cut the lawn again. After it is done with all the cuts it simply returns to the charging station.
I am trying to add better features to it like weather detection: if rain is forecast then it will cut the lawn early and then delay cutting it again until the lawn is dry (sketched below). I am also working on adding an edger component and a weed whacker component so it can handle those tasks as well. Pretty much, my goal is to have a fully automated robot lawn mower when I am done with this project. So far it only cuts the grass and dumps the waste. I think this wire-guided method is far superior to the autonomous robot mowers because most people's yards are in static arrangements that rarely change. So it is better to just add the wire permanently so you get a perfect cut every time.
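The weather logic I have in mind is roughly this; the forecast endpoint and response shape are placeholders:

    import requests

    FORECAST_URL = "https://api.example.com/forecast?lat=..&lon=.."

    def decide_mow(lawn_is_wet):
        # Hourly rain forecast for the next day (placeholder schema).
        hours = requests.get(FORECAST_URL, timeout=10).json()["hourly"]
        rain_coming = any(h["rain_mm"] > 0 for h in hours[:24])
        if lawn_is_wet:
            return "postpone"        # wait until the lawn has dried out
        if rain_coming:
            return "mow_early"       # get the cut in before the rain
        return "mow_as_scheduled"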
That was only one wire, whose purpose I forget; perhaps it was an electronic barrier. I can't imagine the mayhem of a grid of wires being struck with the same misfortune.
Some CS students even made a game based on the hilarious video, where you play as the moose and must eat apples while defending against incoming lawnmowers: https://joelspeanuts.itch.io/elgspillet
As a kid, I gave our family cat a close scalping. It was late fall and we were picking up leaves with the mower. After a lunch break, I fired up the mower and the cat shot out from underneath making an awful noise and headed for the woods.
We assumed she was fatally injured until two/three days later when she came walking back up the sidewalk, sporting a close shave on one part of her head, but otherwise seeming uninjured. She got plenty of her favorite food that day!
When you spend most of your day stuck in front of a monitor, it feels good to get up and turn lights on/off.
This becomes especially egregious over month- to year-long projects where I run the same experiment day after day.
There was really no reason not to auto-generate every possible plot and every possible analysis every time (and I cannot use IPython notebooks or things like that because it's many distributed things chained together with lots of scheduling).
The productivity gains have been enormous and are hard to overstate. I don't dread any experiment any more, because even in a large, complicated distributed setup, everything from initialising Kerberos tickets to tons of config files, restarting services, running multiple experiments dependent on each other, and generating plots and summaries and committing them to a repo is one command. Anything that's analysed once is evaluated always.
I now almost look forward to setting up new experiments because of the pleasure I get from just chaining together calls from my control utilities.
All I have to do is pull on my laptop and I download a folder with all results pre-generated, paper-ready. I think a lot of people do this in experiments where everything is on a single machine, but I haven't seen it done as extensively by other PhD students doing complicated distributed stuff. There is always a lot of manual command-line argument passing, manually changing some config, etc., instead of just creating dedicated scripts.
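The driver itself doesn't need to be clever. Something this dumb captures the idea (the commands are placeholders): run the whole chain, stop on the first failure:

    import subprocess

    PIPELINE = [
        ["kinit", "-kt", "exp.keytab", "me@REALM"],        # Kerberos ticket
        ["python", "render_configs.py", "--exp", "exp42"],
        ["python", "restart_services.py"],
        ["python", "run_experiment.py", "--exp", "exp42"],
        ["python", "make_all_plots.py", "--exp", "exp42"],
        ["git", "commit", "-am", "results: exp42"],
    ]

    for cmd in PIPELINE:
        print(">>>", " ".join(cmd))
        subprocess.run(cmd, check=True)   # any failure stops the run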
Quite often the worst-case scenario is a fire, so the little benefit is IMHO not worth the risk.
I'm talking about self-made hacks and cheap Chinese hardware here.
I got my parents a Nest smoke alarm (I can see the alarms too) but a year later it was being triggered by steam.
The folks at Nest actually sent a free new model that's better at not triggering for steam. But I have to wonder if my elderly parents trust it anymore.
I don't like watering grass because grass is boring and requires a ton of water, so rather than install irrigation, I ripped out all the grass and planted more interesting plants. (Drought-tolerant xeriscaping is common in my area, so this isn't unusual.)
Hard to overstate how good this is for one's mental health.
I love the 30min when I come home after work where I scoop leaves and various kinds of debris out of the water. Very relaxing.
I also don't really like plants, or the outdoors. I'd much rather be inside in the A/C. Working on my own coding projects doesn't bother me, even after coding all day long.
Perhaps it's because I've been spending less time coding at work and more time managing...
Nice. A lot of local plants too?
But I'm an engineer, so I've decided to automate my OKCupid experience.
Using Node and Puppeteer I built a histogram; it showed that in my country, 75% of the profiles are almost completely empty (fewer than 10 words).
I used to manually dislike these profiles (as they'll keep coming back in the search results until you dislike them), but now my script does it for me.
The next thing I did was sort the remaining profiles - I give higher priority to profiles that have a longer word count and that feature keywords I prefer ("fascinating", "studying", "reading" are words that catch my attention).
It used to be a very basic script, but every negative and toxic encounter has motivated me to keep it going. Right now I'm working on building a frontend to show the script's results. I'm planning on showing "suggested openers" based on things the potential match has said or mentioned, and adding NLP features (such as sentiment analysis).
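The ranking core is tiny. Sketched here in Python (the actual script is Node + Puppeteer), with made-up keyword weights:

    PREFERRED = {"fascinating": 3, "studying": 2, "reading": 2}

    def score(essay):
        # Near-empty profiles get auto-disliked; the rest are ranked by
        # word count plus a bonus for preferred keywords.
        words = essay.lower().split()
        if len(words) < 10:
            return None                    # signal: dislike
        bonus = sum(weight for kw, weight in PREFERRED.items()
                    if kw in essay.lower())
        return len(words) + 10 * bonus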
Anyway, is it open source? :D
FYI for all dating folks out there: women do not want you to approach them in the street. Like, let's say, 99.5% of the time.
"If you’re a guy, imagine you could only date a half-bear-half-lion. ‘Oh, I hope this one’s nice! I hope he doesn’t do what he’s going to do.’"
Every woman I know has had one of those common experiences -- many times -- but honestly, when I was a younger man I really had no idea what women go through or how frequent it is. My wife has had more than a few things happen to her over the years.
The solution was, obviously, to make my scraper run slower. :)
I hope you at least disclose your methods to people you contact.
I'm also not taking it too seriously. This is my pet project (I'm not obsessed with it), and a place where I channel the negative residue that sticks to me from logging into that site.
I don't keep it secret. Au contraire, I write it plainly on my profile: "I use a JS script to filter out empty profiles".
Most of the people that send me a message on OKCupid don't even bother reading my profile (even though I've kept it brief). And the ones who do read it find it amusing and interesting.
I would personally be terrified of anyone who thought this was an ethical or acceptable thing to do.
There are some bizarre and horrible things people do on dating sites. If you have had bad experiences, that's not ok. But don't prejudge this person without really understanding the effect his code is having.
I would take a resume, custom-tailor a cover letter, change out a few paragraphs in my resume to fit the specific job title --- and then... no reply. So I scripted it. I would scan Craigslist, Monster, Indeed, etc. for emails or company names. The script eventually evolved to guess company homepages and scan their 'careers' sections for emails.
Based on the job title it would automatically change out cover letters. It became smart enough to understand when a Word doc or txt format resume was required. It could catch "PUT THIS IN THE SUBJECT" and create a queue for hand verification -- otherwise, it would send out the emails. Once they were sent out, it would scan incoming emails to determine if there were any leads, and matched each thread together via a unique email footer.
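Two of those pieces sketched out, with placeholder templates and a deliberately crude pattern match:

    import re

    # Cover-letter template chosen by keywords in the job title.
    TEMPLATES = {
        ("frontend", "ui", "javascript"): "cover_frontend.txt",
        ("backend", "api", "php"): "cover_backend.txt",
    }

    def pick_cover_letter(title):
        t = title.lower()
        for keywords, template in TEMPLATES.items():
            if any(k in t for k in keywords):
                return template
        return "cover_generic.txt"

    def needs_hand_check(posting_text):
        # Postings demanding a specific subject line go to a manual
        # queue instead of being auto-sent.
        return re.search(r"in the subject|subject line",
                         posting_text, re.IGNORECASE) is not None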
Hilariously, it flipped job searching. I would get long, ranting emails about why I wasn't qualified or why the position required someone more 'senior' to build CRUD webpages. Oh well, HR blew their time, not mine ---> delete. When an interested company did call, I had a nice MySQL database of all the posts that company had made and was ready to return the call prepared.
I got a job quickly after this PHP script started running.
"I'll get you a 20% more high paying job for $1,000"
We now tend to include a request for a simple bit of information in the job posting as a sort of captcha to filter out automated spamming of applications.
Definitely going to look into this. I'm a blind screen reader user and, when I had a phone with physical buttons, I could have it in my pocket connected to an external braille device or pair of headphones. I could easily carry on conversations, browse the web, ask for help, all sorts of things without taking it out of my pocket, and nobody had to know I was doing anything.
With a touchscreen phone I can't really do any of that. Even if I can listen to notifications in an earphone with the phone in my pocket, I can't take care of them unless I take the phone out. Plus, wearing earphones and walking around in public when you can't see isn't wise.
I was looking at whether this would be a nice addition to my smartwatch.
It's possible for sure, but like OCR there's a lot of variability in it. Just like parsing handwriting is full of edge cases, parsing Morse code is as well.
First I wrote code that automatically moved cards between the different sections, so I only ever had to look at the "Due Today" list.
Then I used Twilio to build a bot that gave me a wake up call every morning. I didn't like the TTS that Twilio used so I generated more realistic TTS via Amazon Polly and played it back. Polly has many different voices so I had seven different personas give me my task list for the day. After it read out what I had to do, it then began playing the latest BBC News update right over the phone.
The final phase of this project was a bot that called my girlfriend at the time, told her the weather, and then called me and conferenced us together so we could start our day saying hello to one another.
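For the curious, the Polly-plus-Twilio part looks roughly like this; credentials, phone numbers, and the TwiML URL are placeholders, and the MP3 has to be hosted somewhere Twilio can fetch it:

    import boto3
    from twilio.rest import Client

    # Synthesize the day's task list with one of Polly's voices.
    polly = boto3.client("polly")
    audio = polly.synthesize_speech(Text="Good morning! Today you need to...",
                                    OutputFormat="mp3", VoiceId="Joanna")
    with open("wakeup.mp3", "wb") as f:
        f.write(audio["AudioStream"].read())

    # ...upload wakeup.mp3 somewhere public, then place the call. The URL
    # should return TwiML along the lines of <Play>...wakeup.mp3</Play>.
    twilio = Client("ACxxxxxxxx", "auth_token")
    twilio.calls.create(to="+15551234567", from_="+15557654321",
                        url="https://example.com/wakeup.twiml")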
At some point, I got tired of the process to sync files: download the document from the browser, open the reMarkable app and drag the file into it.
I automated this workflow, and now I can just "print" the article or document I'm reading directly to the device.
I do not write that often on it though. I usually solve math problems on it and also comment on PDF docs.
All in all, I'd buy it again :)
I get around the same issue by emailing files to the reMarkable, but that's a solution that relies on having your own email server and is therefore less user-friendly than yours, though it works great for my use case.
I believe using CUPS could be a simple way to implement it in Linux.
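Something like this could be the guts of such a backend: take the spooled PDF and push it to the reMarkable cloud with the third-party rmapi CLI. A real CUPS backend also needs discovery output and proper exit codes, which this sketch skips:

    #!/usr/bin/env python3
    import subprocess
    import sys
    import tempfile

    # CUPS passes the job file as the 6th argument, or on stdin if absent.
    if len(sys.argv) > 6:
        pdf_path = sys.argv[6]
    else:
        tmp = tempfile.NamedTemporaryFile(suffix=".pdf", delete=False)
        tmp.write(sys.stdin.buffer.read())
        tmp.close()
        pdf_path = tmp.name

    # "rmapi put <file>" uploads the document to the reMarkable cloud.
    subprocess.run(["rmapi", "put", pdf_path], check=True)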
Is the reMarkable better in that regard?
IIRC it supports PDF and presumably other ebook formats, but I wouldn't expect it to have any special tool for viewing markdown.