I've been making excessive amounts of bread.
Frustrated by filter bubbles and the general state of online debate, especially on Twitter, I made Debubble.
It’s a publishing tool that will let you challenge another Twitter user to a debate. If they accept, the two of you will be able to engage in a public but distraction-free conversation. Debubble will make sure you wait for your turn before you can deliver your arguments. It will also limit each response to 1500 characters (roughly one page) and the entire debate to 12 turns. Instead of cheering for their side like sports fans, registered readers will be able to signal the value they got from your conversation by starring the whole debate.
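The turn-taking rules can be enforced with very little state. Here's a hypothetical sketch (class and method names are mine, not Debubble's):

```python
# Toy sketch of the debate rules described above: alternating turns,
# a 1500-character limit per response, 12 turns per debate.
MAX_CHARS = 1500
MAX_TURNS = 12

class Debate:
    def __init__(self, challenger, opponent):
        self.participants = (challenger, opponent)
        self.turns = []  # list of (author, text)

    def submit(self, author, text):
        if len(self.turns) >= MAX_TURNS:
            raise ValueError("debate is over")
        # the challenger goes on even turns, the opponent on odd turns
        expected = self.participants[len(self.turns) % 2]
        if author != expected:
            raise ValueError(f"it is {expected}'s turn")
        if len(text) > MAX_CHARS:
            raise ValueError("response exceeds 1500 characters")
        self.turns.append((author, text))
```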
I haven’t properly tried to launch it yet, as my day job and kids are keeping me very busy at the moment.
The trend is to just reuse standard up/down comment voting without thinking through the implications. Yes, if you do this and sort comments by votes, you will on average get higher-quality, user-curated content. On the other hand, that small piece of UI uses a reward system to condition users to seek attention, and it sets the tone for the whole discussion.
There are no easy solutions here. Everyone wants their opinion to be heard (even if somebody has already expressed the same thing). That sometimes means aligning your opinion with the masses so that your content gets proper visibility, which leads to echo chambers and bubbles. Your take forces users to bring attention to the whole debate and not just to one side's arguments. Clever.
I have been working on something very similar but with less of a focus on debating (https://taaalk.co). (Indefinite chats, any number of participants.)
Some friends and I started it a few years ago but stopped working on it, so I decided to rebuild it over the last few months. Some of the old Taaalks are still on there:
Cutting out the plebs makes it less rewarding to read the debate. Celebrity debaters will be required to overcome the lack of organic pull into the conversation. Why not just watch an interview between the two people?
The app lets users send tweets or DMs and I didn’t find an obvious way to narrow the required permissions down to just that. But a few people have now pointed this issue out and I think I will just remove that functionality and require only read permissions.
I will probably just get rid of the features that require write permissions. They aren't essential.
I've registered this Twitter account: https://twitter.com/DebubbleMe. There is nothing on it for now, but if Debubble takes off in any shape or form, that's where I'll be posting the updates.
Will you be summarizing some of the best debubbles? This reminds me of that subreddit ... changemyview?
After reading "How to Have Impossible Conversations" (which was recommended by someone on HN a few weeks back), I've come to understand that the toxicity of social media has more to do with the lack of social cues, rapport building, and consequences than the details of the platform.
Also, "staring" conversations rather than "liking" comments still results in the same "sort by controversial" phenomenon: https://slatestarcodex.com/2018/10/30/sort-by-controversial/
Since ancient times, two philosophers would often have a debate by exchanging letters. The goal was not publicity, though many of such correspondences eventually became public. The goal, as I see it, was simply searching for truth. I wonder if it's possible to build a platform for something similar today. Even if it never becomes as popular as social media, I hope it could at least create a clear distinction between entertainment and actual conversation.
Do I need to sign in to see any of the debates?
The general plan is to use about 400 pounds of lithium iron phosphate cells, spread between the spaces under the right and left rear passenger seats where the gas tank used to be and the engine compartment (mostly approximately where the radiator was). I'm using a Netgain Hyper9 AC motor (144 volt version). I haven't decided what I'll do for charging and battery management. I plan to order an adapter from CanEV to interface to the transmission so I'll be able to keep the stickshift.
Mine was much simpler. I bought a sewing machine because I thought now would be a good time to learn how to sew and maybe I could make masks for myself and family. I never really understood how these machines work and I have to say, they are pretty amazing (even a low tech one like I got - a Singer 4423).
I also gained a ton of respect for people who are good at sewing. It's much more difficult than I thought it would be.
From what I've seen it's fine, you can choose which gear to shift to and leave it there. Cold start from 5th gear. Can even be fun to play with the gear ratios, apparently. But it is another point of failure.
One of the things holding me off from attempting an EV conversion on my old Saab 900 sitting in my shop is that the gear box in it is notoriously brittle and would break even with the torque from the (turbo) gas engine that it shipped with.
I also like having a stickshift.
It's also fun to modify machines to be used in ways they were never intended by the original designers. Fortunately a lot of the DIY electric car components are pretty flexible in terms of how you use them. For instance, you can get your motor controller, your battery management system, and your charger from different companies and reasonably expect them to work together because they each have a well-defined job and that's all they do.
I re-read Artemis now that I know something about welding, and was kind of disappointed it was all oxygen-acetylene, which I know next to nothing about rather than TIG, which is usually the recommended way to weld aluminum. (I'm not sure if you'd even need a shield gas like argon in a vacuum environment?) Maybe there is a good technical reason for that, but it wasn't explained in the book (nor does it really matter to the story except to the 3% of readers who care about welding trivia).
This is something I would like to do, in the future, with my '90s Nissan Micra.
I did save one of the rotors from the engine. Maybe I can think of some whimsical use for it, like cut a thin slice off it and weld it to something as a sort of decoration.
I could get better range with more batteries, but I also didn't want to increase the total weight of the vehicle by more than a few hundred pounds, just to stay within design tolerances.
(The RX-8 is about 3,000 pounds normally, which is pretty light for that sort of car. A Miata would probably be an even better choice, as it's around 2,000 pounds. It's hard to find newer Miatas for a reasonable price, though. RX-8's can be had pretty cheap because the rotary engine is easy to destroy if you don't maintain it properly and even in the best case usually needs to be rebuilt every 100,000 miles or so to replace worn apex seals. So, there are a lot of used RX-8's on the market that need engine work.)
I've been using an AHP AlphaTIG 201xd, which seems to be a good machine for the price. It seems that the hard part initially is mostly just figuring out what settings to use to get a good weld. Beyond that, it's about getting used to how aluminum behaves, and figuring out how to position yourself and your work piece so you can keep your hands steady.
My welds aren't anything I would mistake for art, but they get the job done.
I want to design an electric formula car and am having some trouble deciding what parts to go with.
On the other hand, series wound DC motors are cheap and popular for drag racing applications. Check out the White Zombie if you want an extreme example. The guy that built it lives nearby and he's been giving me advice on my (very different) conversion.
When you are accustomed to how things work and then forced to change, it's not fun. The current COVID-19 circumstances have brought enough unwanted changes. This project started out as a fuck-you to MS, but it really turned into a fun project that keeps my productivity on track and also keeps my mind busy.
The only shame is that I can't really "release" this because it looks just like the original, and the copyright vultures will waste no time coming for me. My best bet would be to change the UI design, BUT that would defeat the original purpose of the project.
I convinced a bunch of friends to use WL a few years ago and now they're mad at ME for the acquisition! Not really mad, but despondent and looking for alternatives. I migrated to Apple's todo list app (which has +ves and -ves), but it was funny getting a bunch of texts within a week blaming me for getting them hooked on WunderList!
But this project is way more exciting!
The Korg SDK comes with a lot of tools right out of the box (biquad filters, dual delay lines, wave types, access to parameter knobs, etc.), and their dev environment is really easy to install (you upload patches via MIDI sysex!).
The actual audio programming is wonderful - Korg's SDK gives you a pointer array of realtime values which you can manipulate however you see fit before they hit the audio out. It's simple (I made an auto-pan in 10 lines of code) but powerful when you apply buffers, etc.
And the first question - also yes. But for synths other than the NTS-1 you'd have to send each note individually, so you'll have more to do - e.g. keeping track of note positions, determining the notes in each chord, etc. I might try to do this too. As far as I know, the NTS-1 is the only one that has such a smart arpeggiator (probably because it's monophonic and you can't enter chords easily...).
I've written bash scripts (using sendmidi) to arpeggiate chords when I was feeling particularly lazy. It's pretty easy in MIDI: figure out the root note, figure out the pattern, and just turn on/off root+pattern[i] :-).
sendmidi is a great little command-line tool for sending MIDI commands to devices, or recording MIDI commands from devices. Its input format is plain text, and you can include timing information, so it's pretty easy to script music this way: https://github.com/gbevin/SendMIDI
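The root+pattern[i] idea boils down to a few lines. A sketch that builds sendmidi command lines (the device name is a placeholder, and the exact argument order should be checked against the SendMIDI README before running anything):

```python
# Build sendmidi invocations for a simple arpeggio: root note plus a
# fixed interval pattern, each note turned on and then off.
PATTERN = [0, 4, 7, 12]  # major-triad arpeggio intervals, in semitones

def arpeggio_commands(root, device="NTS-1", velocity=100):
    """Return sendmidi argument lists for note-on/note-off pairs."""
    cmds = []
    for interval in PATTERN:
        note = root + interval
        cmds.append(["sendmidi", "dev", device, "on", str(note), str(velocity)])
        cmds.append(["sendmidi", "dev", device, "off", str(note), "0"])
    return cmds
```

Each list could then be passed to subprocess.run(), with a short sleep between the on/off pairs for timing.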
We're working with other local maker-y spaces on these efforts; we've picked up a few Ender 3's to help with the 3D printing and we have a small team of volunteers helping with sewing. So far we've distributed over 1,500 face masks to folks and healthcare workers in Fresno, San Diego, Idaho, and soon to a school in Uganda.
This is all on top of trying to keep our community engaged and hosting meetups and happy hours on Zoom. Also on top of my day job. I've never been so busy in my life, and I'm looking forward to a time when we can safely re-open and get back to building the community face-to-face.
Also agree that the dplyr syntax is cleaner.
>>> vehicles = hl.import_table('vehicles.csv', impute=True, delimiter=',', quote='"')
>>> t = vehicles.filter(vehicles.make == "Saab")
>>> t = t.order_by(t.year)
| id | make | model | year | class | trans | drive | cyl | displ | fuel | hwy | cty |
| int32 | str | str | int32 | str | str | str | int32 | float64 | str | int32 | int32 |
| 380 | "Saab" | "900" | 1985 | "Compact Cars" | "Automatic 3-spd" | "Front-Wheel Drive" | 4 | 2.00e+00 | "Regular" | 19 | 16 |
| 381 | "Saab" | "900" | 1985 | "Compact Cars" | "Automatic 3-spd" | "Front-Wheel Drive" | 4 | 2.00e+00 | "Regular" | 21 | 16 |
| 382 | "Saab" | "900" | 1985 | "Compact Cars" | "Manual 5-spd" | "Front-Wheel Drive" | 4 | 2.00e+00 | "Regular" | 23 | 17 |
showing top 3 rows
A little more background on the project: Hail's raison d'être is a 3-dimensional generalization of the data frames we use for genetic data, called a MatrixTable. Conceptually, it is a matrix-of-dicts rather than a list-of-dicts.
Genetic data is massive, so all of this is lazy and works on out-of-core data. The Python front end constructs an IR representing the query; it's fed through a query optimizer (written in Scala) and executed by a backend. We're working on multiple backends, but our primary backend right now is Spark.
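The lazy front-end/optimizer/backend split can be illustrated with a toy expression IR. This is only an analogy (Hail's real IR and Scala optimizer are far more involved): the query is built as a tree, rewritten, and only then executed.

```python
# Toy lazy-query pipeline: build an expression tree (the "IR"),
# run a rewrite pass (the "optimizer"), then execute (the "backend").
class Expr:
    def __init__(self, op, *args):
        self.op, self.args = op, args

def optimize(e):
    # Example rewrite: filter(filter(t, p), q) -> filter(t, p and q),
    # so the backend scans the table only once.
    if e.op == "filter" and isinstance(e.args[0], Expr) and e.args[0].op == "filter":
        inner = e.args[0]
        combined = lambda row, p=inner.args[1], q=e.args[1]: p(row) and q(row)
        return optimize(Expr("filter", inner.args[0], combined))
    return e

def execute(e, tables):
    # Minimal backend: only "scan" and "filter" are supported.
    if e.op == "scan":
        return tables[e.args[0]]
    if e.op == "filter":
        return [row for row in execute(e.args[0], tables) if e.args[1](row)]
    raise ValueError(f"unknown op: {e.op}")
```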
I'm a keen mountain biker, so I've put my energy and frustration into developing new mountain bike trails in the hills around my house. Been meaning to do this for a long time, but there are such good trails a few miles further away, so the incentive has not been very strong until now.
I'm building for about 1 hour per day on average, and I manage to get between 10 and 100m of trail built in that hour, so by the time the lock-down ends I'm aiming to have a contiguous piece of singletrack that's a mile long.
Also, I've been helping on a local project to develop an open-source ventilator (https://www.backabuddy.co.za/champion/project/rescuevent)
And I'm working on a peer-to-peer donation platform (which is not really ready to show to anyone yet)
I can highly recommend trail building (both walking and cycling trails) as a combination of physical, aesthetic and intellectual challenges (figuring out how to use the terrain to be both fun and interesting/possible to ride and then moving tons of earth and vegetation to make it happen).
I've been a musician for going on 20 years, mainly piano, but I like to collect the ability to noodle on instruments. When I was around 13 I broke my left forearm, and it healed in a way that limits the rotation of my wrist quite a bit. This makes playing guitar rather difficult, and by the time I started to consider branching out from piano, there were a bunch of factors that made me give up on being able to play guitar: I was gigging as a piano player 10-12 hours a week while also going to school for piano and CS, I started to develop tendonitis, and trying to play guitar made it a lot worse, so I quit. I'm now in a place where I can take care of my arm (and I have actual healthcare), so I started back up again.
I guess HN is cool with self-promotion, so here's a jam I made with a looper pedal after about 2 weeks. I call it "More Theory Than Experience"
So last week I ordered a drum kit (Yamaha dd75) and hope to have better luck with drumming. It’s a blast so far.
I'm currently teaching myself, as I just love exploring. I've watched some YouTube videos about scales and I follow a few guitarists on YouTube (Samurai Guitarist comes to mind, and Paul Davids has probably been my biggest influence on my ability to play). Other than that, it's all been throwing all of my experience at it and seeing what sticks and what doesn't. Definitely record yourself once in a while to see what's working, and listen to a lot of music, both passively and actively, and try to spot what you like and really analyze it.
Speaking of which, I've found music theory to be completely indispensable to my ability to self-study. Being able to take what I hear and internalize it, and to take what's in my head into my hands, is absolutely essential.
You sorta motivated me to pick up guitar again... the sound of it makes me excited, so I'm gonna continue with a course I bought on Udemy and see how far that takes me.
I'll also check out Samurai Guitarist and Paul Davids.
http://pointillism.digitalbunker.dev/: I've always been into generative art, so I built this site that takes a source image and recreates it in a Pointillism style
http://gitrandom.digitalbunker.dev/: Generally when I'm struggling to come up with project ideas, I'll just browse GitHub. This site lets you explore random GitHub projects by language and topic.
I built the sites using Vapor, so I could continue to use Swift and just learn one new thing at a time.
I'm probably going to pick up some iOS app work too, to leverage the new hobbies people are discovering while at home (e.g. bread making).
I was looking for a Zettelkasten note taking app which would 1. work on laptop and phone 2. wouldn't have any vendor lock-in and 3. wouldn't go away if a single company folded - couldn't find one, so I started writing one. I'm writing it as a PWA to make it available ~everywhere and planning to use dropbox/google drive/whichever as the backend so users will have full control over their notes.
I'm amazed how much you can accomplish with the modern web tech stack. I can literally bypass any need for a server by having the user connect to their own cloud storage! I can just create a PWA and publish it as an app! On the downside, I've learned that some features are hard to implement with the above requirements using PWAs. For example, only Chrome supports some level of filesystem access, so storing notes locally would mean discriminating by browser, which I don't feel great about.
Something with phone support would be nice, hell even just read-only mode would be great. Best of luck, and please report back if you can set up a landing page or a github repo or something else we can poll :-)
If you want to give me feedback on the current pre-alpha version feel free to ping me, I'm tsiki @ freenode/IRCnet
How do I follow along?
I created a placeholder repo for anyone interested to watch: https://github.com/tsiki/connectednotes
Real-time avatars with our deep computer vision pipeline; developed with GStreamer, Rust and LibTorch. This CV pipeline is usually used for training robots inside simulations and generating synthetic datasets. But given the circumstances, thought it would be fun to explore other use cases.
Using my own face live through webcam.
It's a RESTful server-side API for adding user authentication and authorization flows to your apps.
We've been taking a lot of inspiration from Stripe and mostly just wanted to use an auth service with docs like Stripe :)
(Please note this is still pre-pre-pre beta. The docs are incomplete and we have yet to even integrate it with our own apps, so please don't try to build an app with it yet!)
The core API docs are good, like the Management API Tester page. But the walkthroughs and general documents are full of broken links, inconsistent use of language, and varying levels of precision in how things are explained. You end up Googling for answers, finding community responses, and having to piece things together.
The way things are called APIs versus Applications is confusing no matter how you put it. Then they are sort of ambivalent in places. For example, look at the SPA guides. Sure, it'll walk you through the Implicit Flow for SPAs, but elsewhere they second guess themselves and say you shouldn't use Implicit Flow for SPAs. Instead, they say create a "Normal Web App". But good luck finding that specific article again just because you came across it once!
If anyone in Product or Biz Dev at Auth0 is reading this, I would urge you to make a case for "even easier mode" that abstracts a bit more and comes with better documentation. I found myself doing so much token management and head scratching about ID versus access tokens that I felt like I need to be a technical expert on the standards just to follow the directions and feel like my app is secure.
Auth0 has potential to actually solve identity in an easy way, but they are not meeting that promise right now, and that is your opportunity.
We had the same exact experience. Couldn't have explained the state of the docs any better!
We're running with extremely light infra on AWS and just hit our max-db-connections to MySQL.
Good lesson for the future, because it looks like we're not cleaning up the connections properly!
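The "not cleaning up connections" failure mode is usually that every request opens a connection and nothing guarantees close(), until max_connections is hit. A minimal sketch of the fix, using stdlib sqlite3 as a stand-in for MySQL (the same context-manager shape works with any DB-API driver):

```python
# Wrap connection lifetime in a context manager so close() runs
# unconditionally, even when a query raises mid-request.
import sqlite3
from contextlib import contextmanager

@contextmanager
def db_connection(dsn=":memory:"):
    conn = sqlite3.connect(dsn)
    try:
        yield conn
    finally:
        conn.close()  # always released back, even on errors
```

Usage is then `with db_connection() as conn: ...`, and leaked connections become structurally impossible rather than a matter of discipline.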
I looked at your website on mobile and wanted to let you know that it doesn’t properly resize; the UI overflows.
And we'd never heard of Keycloak before, so thanks for pointing them out to us!
Have you ever used or built with either platform? If so, what was your experience like working with them?
1. The Auth0 universal login solution is not "white-label".
It requires pushing users to an auth0.com pop-up page which has rather limited customization options. Granted they do allow their customers to upgrade to "custom domains", but they up-sell on this point (minimum $23/month) which doesn't make it ideal for us bootstrappers just wanting to get a demo running.
We additionally had a handful of users mention our login flow felt insecure. We determined this to be more imagined than factual, but figured it was a result of the change in design language between our app and the auth0.com pop-up. It was particularly acute when transitioning from native iOS to a web pop-up when entering sensitive information.
The underlying feedback we kept hearing around the login flow was along the lines of “why am I giving my password to this sketchy-looking website rather than to your app?”
2. The Auth0 docs and interfaces are a maze!
We had a terribly difficult time piecing together tips and footnotes from the community support forums and tutorials on Google to complete the information provided in the docs themselves.
There were a number of steps we needed to implement which were completely omitted from the official docs. We found others were running into the same problems as well on the community support forums.
For us, this essentially resulted in a feeling that Auth0 was letting too much complexity bleed through the interfaces for the developer to figure out themselves.
So these are the two driving reasons we started hacking around on Feather:
- To have a truly white-label auth API
- To have more intuitive interfaces and documentation
It's a nice mix of both online and offline work. Also, the community around here is mostly made up of various combinations of farmers, hippies, retirees, and permaculture folks. Everyone wants a decent internet connection, but no one really has the skills to do much about it. I've lived here a year now, so thought I'd give it a go.
It's a windy road. Actually, it all started out because I wanted to get fast internet for myself on my farm. Then I thought, "Hey, why not start a business?" Feature creep at its best.
Note: I'm not sure how correct this information is outside of the US.
Something else I've found useful is RF Elements' YouTube channel, in particular this playlist. I suspect they overstate some issues slightly in order to promote their own products, but I'm super impressed with how well the videos explain the topics.
If you aren't already, keep an eye on Starlink—when they start operating in Portugal I bet it'd be a real benefit to your business. Yours is pretty much the ideal use case.
Also, wow, living and coding in rural Portugal sounds like a pretty idyllic life from here. Enjoy it for me, won't you?
I'm saying this having not actually deployed the hardware yet, just based on research.
Might consider this in the future for my local community
So far I have a camera working that sleeps when there's no motion and, if the battery runs low, wakes back up once it has charged enough.
Because of the distance we are from each other, our friendship has relied heavily on phone calls and video calls. Some time ago, we started calling them "remote coffees": "Hey man, when are we having our next remote coffee?"
We met at university, spent about two years working for the same company, and have kept in touch over the years thanks to our "remote coffees" and to the many interests in technology and productivity we have in common. "This conversation should have been recorded!" - we are sure the same thought has come to you after some formal or informal conversation. The challenge was simply to put a product live with as much free time as this quarantine allows, and here it is. We are not launching a super business, nor did we intend to; we are both fully dedicated to something else. We just wanted to launch this MVP and share it with friends and contacts.
We have a lot more functionality and ideas to put into it, but if you want to try it, those ideas will be shaped much better by taking your honest feedback into account.
Decided to open source some of my personal projects: https://github.com/vivekhub/password-generator and https://github.com/vivekhub/simplenote-backup. Nothing fancy, but something I've been meaning to do, and I've started doing it. I've started learning K8s as well, so that's a positive. I also decided to set up a personal website, https://www.vivekv.info, and had to learn Hugo to do that. So on the whole I'm feeling good. Sorry about all the links and plugs, but hey, I'm genuinely proud of what I've done :-)
- Web geo APIs to guide you to the next "treasure".
- Webcam API to capture matching photo.
- "AI" for matching photos and answers to questions in the backend.
- The "AI" doesn't work well; I'm planning to add a Python Lambda with a better SSIM algorithm.
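For reference, the core SSIM formula is simple enough to sketch in pure Python. Real implementations (e.g. scikit-image) compute it over local sliding windows; this toy version computes a single global score over two equal-length grayscale pixel lists:

```python
# Global SSIM between two grayscale pixel sequences (values 0..L).
# SSIM = ((2*mx*my + C1)(2*cov + C2)) / ((mx^2 + my^2 + C1)(vx + vy + C2))
def ssim(x, y, L=255):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n                      # means
    vx = sum((a - mx) ** 2 for a in x) / n               # variances
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2            # stabilizers
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))
```

Identical images score 1.0 and dissimilar images score near 0, which is what makes it a better match metric than raw pixel differences for this kind of photo-matching task.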
The hardest part so far has been permissions in iOS. If the user has blocked geo permissions for Safari, it's kind of a pain for a normal user to enable them again. I haven't had a chance to test on Android yet, but I presume it will present other permission challenges.
You can see a live demo here: https://www.myshotcount.com/
This is super neat though; looking forward to following along. Would love to sign up for a newsletter if you had one.
Based on my conversations with users, shot tracking is the most-used feature by far. There are a bunch of other services that help with dribbling and drills.
https://www.94feetofgame.com/app <- also a Steve Nash project
End goal: I'm based in the US now but come from a small ethnic group in Ghana (Konkomba), and I recently came to the sad realization that our language will die out over time. I want to build enough tools for translating to and from English, and in the process perhaps learn things about the language that fit with the models of the most popular languages today.
Unrelated, going to finally setup a personal website to host pictures and 99% chance it'll be WordPress-based.
You could single-handedly save your language from going extinct.
Well, one of its bedrooms was wallpapered and ancient-looking. Very ugly. I decided to take care of it.
The wallpaper, and the three papers that came before it, are now stripped. The wall is in rough shape post-strip, and I'm repairing it. This room is on its way to perfection.
I've never done this before, and had no idea how much fun it is. There is no mistake that can't be fixed, and the instruction on YouTube is amazing. I'm having to reel myself in a bit, because I keep on noticing other things I'd like to fix myself. :)
It's sort of like the experience I had when I first started writing software. The power! My creativity is kicking in hard.
The DIY StackExchange and Reddit are fun places to hang out, to learn the wizardry others are using.
So far there are 8 games, with more added weekly. The games follow the same code patterns, so it takes about a week to add one.
Everything runs on Firebase - I needed something that would launch quickly with real-time capabilities. Vue on the front end.
Would love some feedback.
What about adding poker or even making a dedicated poker app? I'm in a weekly, virtual poker game that is a mashup of different tools.
Currently on the verge of founding a (possibly viable) startup with it, but the browser itself is totally alpha for now.
Been working on parsers and protocols for a while now, and I had to switch to TDD to keep my sanity together. I needed to write my own test runner that can simulate network behaviours (2G slow fragmentation is real) and peer-to-peer scenarios. Most servers out there don't comply with specifications, so making my own client- or peer-side implementations work was a hard task.
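A test runner like that boils down to delivering input to a parser in arbitrarily small pieces and asserting that incremental parsing still reassembles it. A minimal sketch, with a toy length-prefixed framing format invented for illustration:

```python
# Feed a streaming parser one byte at a time, simulating a slow,
# fragmenting link, and check that complete frames still come out.
class FrameParser:
    """Parses frames of the toy form b'<length>:<payload>'."""
    def __init__(self):
        self.buffer = b""
        self.frames = []

    def feed(self, chunk):
        self.buffer += chunk
        while b":" in self.buffer:
            head, _, rest = self.buffer.partition(b":")
            length = int(head)
            if len(rest) < length:
                return  # partial frame: wait for more data
            self.frames.append(rest[:length])
            self.buffer = rest[length:]

def deliver_fragmented(parser, data, size=1):
    """Simulate fragmentation by feeding `data` in `size`-byte chunks."""
    for i in range(0, len(data), size):
        parser.feed(data[i:i + size])
```

The same harness shape extends to reordering, delays, and peer-to-peer scenarios by changing how `deliver_fragmented` slices and schedules the chunks.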
Currently writing my own SGML parser and optimizer, so that the browser receives only "linted and upgraded" HTML that is free of malicious parts, while embracing the idea of disallowing everything that could potentially be misused, including CDNs that do cache busting all the time.
The idea behind the browser concept is that trust is not established by default, and users should decide what website to trust, and match that with what kind of content they'd expect the website to deliver.
Anyway, IMHO, you should really focus on code clarity and hitting the high points with a good modular system. Ignore all the edge cases and if/when you open source it, that will allow people to focus on narrow pieces and make them more compliant.
The world doesn't need another rat's nest like Firefox and Chromium have become. AKA, you need to reinvent the Konqueror of 1999 that spawned WebKit/Chromium.
I have no chance of competing with Google, so I'm probably going to reuse as much of the Servo project as possible when it comes to runtime and layout/rendering. That part is currently a bit unplanned; on Android and iOS I have an experimental prototype up and running that's just bundling nodejs-mobile and using a webview pointed at localhost.
The browser UI (PWA UI) is served on port 65432 in order to allow userspace usage (ephemeral ports can also be used by anyone on Windows).
Could you share more about this vision?
> writing my own SGML parser
How did you land on SGML?
What do you think of a browser/mode that parses markdown, so we can have a "markdown web" with less complex clients?
Phew, tough question. As I went into web development when XHTML 1.1 strict was the "cool shit", I kind of valued the aspect of using the web for acquiring and distributing knowledge. Not only for me, but also for publishing or other forms of media (e.g. by offering print stylesheets), screen readers, and semantic extraction of that kind of knowledge.
(I was also working on projects that used DAISY to automatically convert websites into audio formats consumable by blind people.)
Somehow from then (around 2000ish) to now, everything went to shit and nobody cares about that aspect anymore. News websites are too busy displaying ads and pushing subscription dialogs in my face (before I read a single line of their article) - rather than being readable or consumable.
And I kind of disagree with that. I want to make the web an automatable tool to acquire knowledge in an easy manner. And I hope I can do that programming-free. Currently, programmers can easily build scrapers - but imagine the possibilities once any person or kid can do that with a few mouse clicks.
I know there are a lot of proprietary scrapy-based solutions out there already, but honestly I think they're crappy. They see the web as DOM and not as a statistical model that a neural network "could" learn once you have a different way of rendering/parsing/modelling things.
> How did you land on SGML?
The reason I am currently building my HTML(5)-compatible parser with SGML ideas is that nobody closes tags. The spec is very complicated (especially while keeping an eye on what can be abused in the XSS sense, or related security issues with CORS), so currently I'm looking at a lot of parsers out there and trying to find my own way of turning this into a statistical model, so that in the future my neural-net adapters can optimize old HTML code into new, clean HTML5 code.
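The "nobody closes tags" problem (SGML's implied end tags, a.k.a. OMITTAG) can be illustrated with Python's stdlib tag-soup parser. This toy normalizer re-emits HTML with implied end tags made explicit; real HTML5 parsing has far more rules, and this only handles the common <li>/<p> cases and drops attributes:

```python
# Re-emit tag soup as well-formed HTML: a new <li> or <p> implicitly
# closes the previous one, and anything left open is closed at EOF.
from html.parser import HTMLParser

IMPLIED_END = {"li", "p"}

class Normalizer(HTMLParser):
    def __init__(self):
        super().__init__()
        self.out = []
        self.stack = []

    def handle_starttag(self, tag, attrs):
        # SGML-style implied end tag: <li>a<li>b means </li> before the 2nd <li>
        if tag in IMPLIED_END and self.stack and self.stack[-1] == tag:
            self.out.append(f"</{self.stack.pop()}>")
        self.out.append(f"<{tag}>")  # attributes dropped for brevity
        self.stack.append(tag)

    def handle_endtag(self, tag):
        # close intervening unclosed tags until the matching one is found
        while self.stack:
            top = self.stack.pop()
            self.out.append(f"</{top}>")
            if top == tag:
                break

    def handle_data(self, data):
        self.out.append(data)

def normalize(html):
    n = Normalizer()
    n.feed(html)
    n.close()
    while n.stack:  # close anything still open at end of input
        n.out.append(f"</{n.stack.pop()}>")
    return "".join(n.out)
```

So `normalize("<ul><li>a<li>b</ul>")` yields `<ul><li>a</li><li>b</li></ul>`, which is the kind of "upgraded" output a downstream renderer can consume without its own soup-handling heuristics.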
> What do you think of a browser/mode that parses markdown, so we can have a "markdown web" with less complex clients?
Actually, this was my first idea for building this. I wanted to convert all HTML to markdown and back, so that it's easier and cleaner. The issue I realized is that most markup and meta information that comes with a website is lost in markdown (or CommonMark), and layout sometimes implies structure too, due to how websites in WordPress (or any user-friendly CMS) are built.
Code-wise, you usually cannot infer meaning by only looking at HTML, sadly; that's why I switched to a "filtering proxy" approach, where the browser UI simply receives the upgraded, clean HTML and CSS (plus webfonts and other assets).
I feel that one key aspect of something like this would be the ability to annotate anything on any page you stumbled across, and to navigate between all your annotations in a cohesive manner.
I'm excited to see what you make!
> (I was also working on project(s) that were using DAISY to automatically convert websites into hearable formats to be consumable by blind people.) Somehow from then (around 2000ish) to now, everything went to shit and nobody cares about that aspect anymore.
Yes, it's tragic that you could seamlessly compose streaming audio, video & text from multiple servers using an SMIL _text file_ in early 2000s, but it's all gone now.
Yet we now have large markets of broadband-connected humans with countless hours spent in front of streaming media (including video conferences) that they cannot annotate, inspect or compose. Then people wonder why they are "exhausted" after hours of Zoom meetings via powerless blackbox client apps.
There's still a tiny bit of standards activity on syncing A/V content with web text, part of the upcoming fusion of EPUB and the web, aligned with Google's "Web Packaging", which will enable a fully offline internet with signed content (a can of AMP worms).
> so that in future my neural net adapters can optimize old HTML code into new, clean, HTML5 code.
This is exciting work. Apple has a powerful ML/AI chip on recent iPhones, likely to be used for image processing and augmented reality annotation of live video. It would be nice to apply this silicon power to the semantic ambiguity in real-world human use of markup languages.
We need an alternate timeline fork of the security aesthetic of CSS "user" vs "publisher" stylesheets, which at least tried to formalize the inherent social/power/finance conflicts between stakeholders in the web content rendering pipeline. Of course, we've since added identity, device fingerprinting, keystroke timing and countless other minutiae to the arms race. But the fundamental need for separation of powers will never go away.
Many users have powerful silicon on their devices, but today it is rarely employed in defense of "user" stylesheet/reality parsers. The proxy architecture you are developing could be combined with fully-private "user" datastores, of the kind harvested today without consent, but instead customized by the user for their own objectives, with data always in their physical control. With local personalization and ML-powered disambiguation, the unfair playing fields could be tilted a little towards local autonomy.
... and I think that this was actually the job of the web browser engineers, and they failed to do it. I kind of like where Brave is going, to be honest, though I don't think an optional approach will make a change. We've been there many times, and nothing will change unless we force the industry to.
Honestly, currently the only browser doing the right thing when it comes to third-party-cookie privacy policies is WebKit/Safari, as Apple has the leverage to enforce it via its iOS market share.
Firefox/Mozilla is currently too concerned about breaking things, and Chromium is a bad privacy joke outside of Ungoogled Chromium.
> The proxy architecture you are developing could be combined with fully-private "user" datastores, of the kind harvested today without consent, but instead customized by the user for their own objectives.
Exactly ;) Can't talk about this more (for now, as my startup idea has to stay under the radar until Q3 this year), but I think you've figured out what I want to do with this concept.
-  https://webkit.org/tracking-prevention-policy/
-  https://webkit.org/blog/8311/intelligent-tracking-prevention...
-  https://webkit.org/blog/10218/full-third-party-cookie-blocki...
You might want to check out the Gemini protocol.
Currently, the browser UI is actually just a PWA pointing at the Node.js instance, reusing whatever rendering engine is available. I want a clean codebase, so everything is Babel-free ES2018 and will only run in Chromium-based Edge and Safari 12+ (and Chrome 70, WebKitGTK, WebKitQt, Firefox, etc.).
For mobile, my plan is to bundle nodejs-mobile and just use a WebView there, which is based on ChakraCore (so it is JIT-free and technically allowed on iOS). For desktop, I will probably declutter Servo modules and try to maintain a minimal fork without all the web APIs I don't want or need... but I'm not sure, as I'm not yet familiar enough with the Servo codebase.
One thing is sure: I can't create a competing rendering and layout engine, so I have to reuse an existing one.
Put it on GitHub and I'll review the app and suggest code improvements (probably nothing amazing, since I don't do iOS that much, but I know some Swift and have a lot more experience with Go, Java, JS, etc.).
Up to you, but you should never be ashamed of code that works. If it got accepted to the App Store, it's good enough ;).
I have a Raspberry Pi and Pi Camera and wanted to detect the pigeons on my balcony and then play a sound or something to shoo them away.
But it's going nowhere, I'm too dumb to even start properly :(
- Nvidia and CUDA stuff is so hard; I can't set it up properly no matter what.
- Tried YOLO, but without CUDA and OpenCV I can't run it on video. Don't know how to fix it.
- Tried to copy other projects, but I can't find anything my amateur brain can parse. I get lost, and no matter how many YouTube videos I watch or Stack Overflow pages I check, it's errors after errors after errors.
- Tried on Windows, but that's not viable. Installing Ubuntu nearly broke my PC, and somehow VirtualBox messes up the whole thing. Currently looking at this.
So yeah, big mess; I'm in way over my head and it's not fun anymore. But I still want to shoo away the pigeons and love the idea of learning more about DL/CV, but I guess I need to learn the basics first and practice on other things before doing this.
Do the first lesson of Fastai’s Practical Deep Learning for Coders — https://course.fast.ai/
It explains that trying to use your own GPU takes a lot of energy that you should focus elsewhere when you’re getting started.
Paperspace Gradient (referenced in the link) offers free Jupyter Notebooks with GPUs you can use for 6 hours at a time (and restart when they expire). You can get a classifier that distinguishes dog breeds up and running in less than a day, and probably in a few hours if you just watch the video and follow along with the notebook.
It's not you. I have decades of Linux development experience, I've developed machine vision systems before, and I have a doctorate. And twice I've given up in frustration while trying to just get the CUDA drivers installed.
I honestly don't know why nvidia hasn't made it simpler.
What's your background?
My background is anthropology, but I've always worked as a data analyst (read: Excel guy). I got "promoted" to project manager for a team that has some data scientists doing automation projects, and got interested in CV and NLP. I know Python, but for data analytics (pandas, seaborn, scikit-learn, and so on); I've never done anything for myself in DL and just wanted to learn more while building something (I miss doing stuff besides slideware tasks).
Regarding VirtualBox, I will probably give up and look into dual-booting Linux again. I'll ask a friend who is an experienced coder to hold my hand while I do that.
Thanks for your reply!
I recommend starting with a model pre-trained on a large dataset (COCO, for instance) and fine-tuning it on your images. However, even with fine-tuning you probably need more than 10k images to get good results.
After the success of this POC, you can gather about 1,000 images of pigeons and apply transfer learning, again on the same COCO-pre-trained model.
The problem nearly every volunteer-based fire department has is that it's hard to learn the locations of all the items on the different vehicles.
So I built a small quiz app to help fire departments with this. Now every firefighter can learn the locations of the items on the go.
German website: http://fahrzeugkunde.hvoss.dev/
Spring-Boot + Vaadin
I moved into a house late last fall, so I actually have some space to do so. This scratches multiple itches for me.
Itch the first: I've missed having a vegetable garden since I moved out of my parents' place and into apartment life years ago. While a small garden plot can't wholly replace the need to go to the grocery store for fruits and vegetables, due to the inherent seasonality of growing food at small scale, it's damn hard to beat truly fresh fruits and vegetables that were picked not an hour before they landed on your plate. And any surplus left when the growing season is over can be preserved and stored for the winter.
Itch the second: It's _my_ creation, not my father's with which I am merely helping. When I lived with my parents, my father had the way he liked to lay the garden out. Granting that a man who grew up in a rural agricultural community probably knows a thing or two about vegetable gardening, watching how he did things always left me wondering if there wasn't room for improvement. Since this is my garden, I can make my own experiments and decisions on how the garden is to be arranged, and what vegetables I want to grow (e.g., Dad loves beets; I do not). I've been reading about companion planting, and am eager to try things like growing corn and beans together, or growing chives near my peppers and tomatoes to keep aphids away (seriously, fuck aphids).
Itch the third: It lets me develop useful skills outside of my career in tech. While I have no delusions about quitting being a sys/net admin and going and becoming a farmer, I do think it's important to nurture useful skills outside one's main career.
Itch the fourth: I have something to automate with tech. Gardens do need to be watered. Under-watering will limit your yield, but over-watering is also harmful to both the garden and the wider ecosystem of one's immediate area. There's a goldilocks-zone when it comes to watering, and the just-right amount of water depends on a number of things: what you're growing, your climate, the soil, etc. There is a real danger that before the close of summer, the garden bed will have an automatic, multi-zone drip irrigation system, complete with soil-moisture sensors, controlled by a Raspberry Pi or similar SBC.
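The goldilocks-zone watering logic above can be sketched in a few lines of Python. Everything here is hypothetical (the zone names, thresholds, and readings are invented for illustration); on a real Pi, readings would come from the soil-moisture sensors via an ADC, and opening a valve would toggle a GPIO-driven relay.

```python
from dataclasses import dataclass

@dataclass
class Zone:
    name: str
    low: float   # below this moisture level (%), the zone needs water
    high: float  # at or above this, an open valve should shut off

def zones_to_water(zones, readings):
    """Names of zones whose latest reading fell below their low mark."""
    return [z.name for z in zones if readings[z.name] < z.low]

def should_stop(zone, moisture):
    """While watering a zone, stop once its high mark is reached."""
    return moisture >= zone.high

# Hypothetical zones and sensor readings, for illustration only.
zones = [Zone("tomatoes", low=25.0, high=45.0),
         Zone("corn-and-beans", low=30.0, high=55.0)]
readings = {"tomatoes": 22.5, "corn-and-beans": 40.0}
print(zones_to_water(zones, readings))  # → ['tomatoes']
```

A controller loop would call `zones_to_water` on a schedule, open the matching valves, and poll `should_stop` per zone; the gap between `low` and `high` gives some hysteresis so valves don't chatter around a single threshold.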
During April I built a loft bed frame out of framing lumber. I can post about that too if any of you are interested.
It is really basic/unfinished and focused on Central Europe (though I'm planning to make it more general, as I'm currently gardening in Norway).
In the end, I want to make something that will automatically recommend a list of plants I can plant in a specific plot based on what is/was there previously. At the moment it is just a list where I have to search manually for specific plants.
Sources are not clear there because it is mostly just for my personal use.
But for companion planting, for example, I've processed the Wikipedia page. Now I'm manually reviewing the sources from there, because not everything stated in the table is actually supported by the declared sources.
Glad to hear others have garden/yard side projects as well. Best of luck with the harvest.
That will actually be quite hard. I'll do the drip lines first.