Ask HN: Who wants to collaborate?
424 points by TekMol on Jan 1, 2022 | 506 comments
We have the monthly "Who is hiring?" and "Freelancer? Seeking freelancer?" threads. But what about people who don't want to work for money, aren't looking for people who do, and still want to work together on cool projects?

Working for free, to make the world better or to start a startup.

If you do, please post your project or your skills!




I work for a national park in the Democratic Republic of the Congo as a tech lead. Some of the things we are doing: LoRaWAN for tracking and emergency response, ML to identify gorillas by their unique nose prints, long-range drones for mapping and surveillance, and a management web app.

If you’re interested in conservation / sustainable development and associated technologies let me know! Always looking to collaborate and bounce ideas off others.


How does one find a job like that? It sounds amazing! I originally studied environmental science and have spent the past 4 years working as a Data Scientist and now Data Engineer. I worked a lot with geospatial data and satellite images. Ultimately I would love to combine both again and use these skills to do something useful for the environment. If anyone has links to orgs/companies that do relevant work and hire (or wants to collaborate) I would be keen to hear. Thanks!


Big fan of the environmental sciences over here. You might look for jobs and opportunities at nonprofits/NGOs as places where applied research and interventions are occurring. Mission-driven organizations, including government ministries, are a good way to feel like the hours you spend at work are directed towards something positive in the world. In some cases, like the GP's, these sound like direct actions in the field - which seems really enriching.

NGO jobs tend to be structured around topics as much as around functions, so making a shortlist of topical keywords can help you discover organizations. You should also look directly at organizations' staff lists, which are typically fairly open and have emails listed. One thing to be aware of is that many nonprofits have tightly budgeted projects with specific needs, so getting your foot in the door on a less interesting project might be necessary. Generalists can benefit here.

I know a few job sites that collect these kinds of positions, some techy and some not:

https://techjobsforgood.com/

https://nextbillion.net/jobs/

https://greenjobs.greenjobsearch.org/

https://www.devex.com/jobs/search/

Last, I will mention the nonprofit organization I work for - World Resources Institute (https://wri.org). We organize our work around seven global challenges: food, forests, water, the ocean, energy, climate, and cities. We do research, build data products and applications, and organize partnerships. We help tackle some of the largest questions related to how we collectively transition to a world where more than 9 billion humans have their food and energy needs met through fair economic and environmental systems.


Hi there.

Seeing as you are in DRC: I've worked on a number of projects there. Maybe you might be interested in checking out an app we made to help NGOs, media, etc. learn about and manage their digital and physical security? Lots of groups on the ground there have used it in situations like kidnap, targeted malware, etc.

It's called Umbrella. It's free, open source, on iOS and Android, and available in many languages. If you are interested, have a look at https://www.secfirst.org or ping me via the email in my profile! :)


>Umbrella...kidnap, targeted malware

This looks like a behavioral modification tool, to prevent kidnap, etc. Two questions: is there a scenario database of DRC "actualized risk" that describes real kidnaps, extortion, etc., ideally with root cause analysis? What is your revenue model[1]? Okay, 3 questions: what do you think of tools like what NSO provides for client recovery?

1 - Speculation: do you make revenue by providing a marketplace where security service providers can market to consumers?


Good questions.

So at the moment the advice we give is not country specific. We have slightly different levels depending on risk, threat model and skill. Building logic to do it country by country was something we tried, but it is incredibly hard.

Our revenue model is based on a few things. We got grants to build the initial version; we also create paid white-label versions for organisations that want their own, and we do security training and consultancy services.

Regarding NSO Group: well, considering we work every day with journalists and activists, some of whom have been targeted by NSO...to say we despise what NSO does is probably an understatement.


Awesome, thanks for sharing! I can see this being helpful for us as well.


My company offers digital process solutions - like dynamic checklists - for remote locations (vessels, trucks, airplanes, inspections) with no (constant) internet connection. Get in touch with me if that is something your park could benefit from.


That sounds super interesting. Two questions: 1) Do you work remotely or are you in DRC? 2) What are some good ways to find job opportunities at the intersection of software engineering and wildlife conversation?


I spent two years working in DRC and have now moved to a remote position (based in the US). There are a few organizations I've worked with that do great work: Allen Institute for AI (EarthRanger), SmartParks.org and Conservation X Labs. There are surely others, but these offer products directly to national parks to improve their capabilities.


Awesome, thanks for the recommendations.


I know some communities that are aiming at climate in general and not focused on wildlife specifically, but they may have some related opportunities: Work On Climate, Climate Action Tech and My Climate Journey.

PS: I love that autocorrect.


Are you personally involved in any climate projects / looking to collaborate?


Yes to both


Thanks! Haha, it wasn't even autocorrect. Just tired from NYE I guess :D


> ML to identify gorillas by their unique nose prints

Really cool stuff. I wonder if face detection is sufficient too? It has been proven to work for brown bears [0].

I have also been doing some open source work [1] to democratise object detection in this space but I haven't had the time to make improvements to the project in a while.

* [0] http://bearresearch.org/

* [1] https://github.com/petargyurov/megadetector-gui


In fact, we are basically doing facial detection based on cropped photos of gorillas around the nose. Thanks for the links! I'll be sure to check them out.


Hey! Curious what impact you've seen from the mining sector in the country, especially on wildlife and economic development? I have heard the country wants to move up the value chain too; any feelings on whether doing something like processing (requires lots of skilled workers and 24/7 power) is actually realistic? Do you have any recommendations/travel guides for someone who would want to come visit and not just stay in the capital?


You're definitely right that DRC wants to move up the value chain. Most raw materials are shipped out of the country for processing or smuggled to neighboring states (gold). However, the impact on wildlife is most certainly negative. The economic development impact is pretty unclear; DRC has shown time and again that mineral wealth is not equitably distributed. Despite the jobs some new processing plants might afford, the profits from these operations will likely just line the pockets of those in charge.


On the latter point - interesting; maybe I'll try to dig up some research on what the counterfactual would be if there were no mining industry. My understanding is that while the conditions are terrible, it does provide jobs for hundreds of thousands if not millions. As for the distribution of gains... yeah, it seems like the whole world is failing at that one, just more egregiously in a place where people are starving or having to become child soldiers. That being said, what's the alternative here? Mining companies will come because of the resources, and I see no other angle for the DRC to industrialize other than the control it has over the mineral wealth.


How likely would it be to open-source projects like this?

I think many programmers want to do something good (preferably while learning) and are willing to spend time on projects like this.


Quite likely! I'm planning on making the gorilla identification project open source, as there's actually quite a lot to it (the core AI functionality, management of lots of images and gorilla data, making sure it all works fine on a crappy connection with old Androids).


That's interesting! I definitely wish you the best of luck.

PS: Don't forget to submit it here ;)!


This sounds incredibly interesting and I would love to know more about it. Where can I read more or get in touch with you? (Your profile doesn't mention any contacts or links.)


Hey, I just added my email to my profile. Happy to share more directly!


How do you take the nose prints? Do you tranquilize them and then literally press their nose against a piece of paper covered in an ink-like material, or is it photo based?


I don't know if gorillas work the same way as dogs, but maybe you could put something that smells nice on a device that extends a small nose-boop extractor arm when close, and gather the noseprint that way.

To simulate this invention:

- have a dog

- put something smelly on your thumb

- extend arm to point thumb towards dog

- as the dog is within 5-10cm radius of your thumb, press your thumb against their nose (and say the obligatory "boop")

- imagine you had something on your thumb to extract noseprints from their nose



Probably a good idea to use photos from afar, as dogs and gorillas can have very different reactions to the whole "nose-booping" thing.


I guess facial recognition is a more apt description. We just use photos of wild gorillas cropped around the nose. Not intrusive at all!


What ML framework or language are you working in?


For now we are using Azure Custom Vision and have a working demo that achieves solid results, which seems sufficient. It will fit into a web app that uses React and Django.
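
Roughly, the prediction call is just an HTTP POST. A sketch below, with placeholder endpoint/IDs rather than our real project values:

    import requests
    # Placeholders, not real project values
    ENDPOINT = "https://<region>.api.cognitive.microsoft.com"
    PROJECT_ID = "<project-guid>"
    ITERATION = "<published-iteration-name>"
    # Send the cropped nose image as a raw byte stream
    with open("gorilla_nose_crop.jpg", "rb") as f:
        resp = requests.post(
            f"{ENDPOINT}/customvision/v3.0/Prediction/{PROJECT_ID}"
            f"/classify/iterations/{ITERATION}/image",
            headers={"Prediction-Key": "<prediction-key>",
                     "Content-Type": "application/octet-stream"},
            data=f.read(),
        )
    # Each prediction is a candidate gorilla with a confidence score
    for pred in resp.json()["predictions"]:
        print(pred["tagName"], round(pred["probability"], 3))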


Pretty awesome. Do you know of other projects like this, or what new ideas are going on in DRC?


I can really only speak to conservation in Congo. One thing I've noticed is that conservation organizations in DRC take on a lot more responsibilities than just wildlife conservation. In Virunga National Park, for example, the park has built and operates a power utility to provide an alternative to charcoal. This is made up of four run-of-the-river hydroelectric plants, hundreds of kilometers of distribution lines (high and medium voltage), and smart meters to connect customers. You can check out virunga.org to learn more!


Are you interested in collaborating on projects like these? Feel free to message me (contact in profile) - I'm exploring economic alternatives to extraction, and local economic empowerment/autonomy initiatives.


This is a great use case for ML!

Where can I read more about this part of your work?


It's still pretty early days for this project so I haven't published anything yet. Hopefully soon!


Is LoRaWAN tracking the gorillas?


LoRaWAN is used for an alert system in an area where local communities are threatened by an armed group called the ADF (aka ISIS-DRC). In the future we hope to roll out a large-scale LoRaWAN network to monitor wildlife (elephants, lions) as well as vehicles and rangers on patrol.

Gorilla tracking devices are likely to be intrusive and cumbersome for them. They are also known to help each other out and remove devices. For now, gorillas are tracked on foot by a team of rangers.


I'm quitting my full-time job this month (already gave my notice) to start a startup with my brother. We're trying to make cheap-but-performant prosthetic hands using 3D printers and Arduino, focusing on the north-west of Syria. To shed some light: it's estimated that around *50 THOUSAND* people have suffered minor or major amputations due to the ongoing war, and most of them need some form of prosthetics. Some estimates even go upwards of *80 THOUSAND*![1]

Now, I know most of the technical stuff we need, and my brother knows all the medical details (he's a physical therapist), but neither of us has built a startup before.

We have a plan in mind, but I would love to chat with anyone with experience building similar startups (a mix of software and hardware), or really anyone who's interested in this project.

We also plan on starting a crowdfunding campaign this month.

[1](https://www.ri.org/providing-life-changing-prosthetics-for-s...)


I applaud you, but it sounds like this isn't really a 'startup'.

Who will pay for this? I imagine the recipients aren't rich.

Looking forward to seeing your announcement of a fund to deliver these charitably at zero cost to patients. :)

Good luck!


Exactly, the recipients are really in a bad financial situation and many of them live in tents, even in the freezing weather. So we've looked into alternative ways to do this, and we're now focusing on NGO partnerships (the NGO pays, and the recipient gets it for free; we already have one experimental partnership) and crowdfunding (planning on announcing a campaign sometime this month!).


Sounds like a great cause. I have experience in building hardware, and can put you in touch with some folks who might be great contacts for your endeavor. Sending an email :)


I just want to wish you the best of luck. May you go far and high!


Great project


I run a side (open source) project called Iconduck (https://iconduck.com). It collects open source icons, illustrations and graphics and makes them available to download in various formats.

The goal is to collect sets from across the web (atm, mainly Gumroad and GitHub) whose open source licenses allow them to be made available on a central site.

I then use a service called Typesense (https://typesense.org/) to make these all searchable.

It's an itch of mine that I wanted to scratch, and it has pretty strong usage (along with a limited user base; around 1k signed-up users).

It's a fun project to work on, and I'd love help on this. Anything from design, front-end, back-end, product or marketing.


Any particular reason for using PNG instead of SVG directly to display images?

I tried to copy as well as open in a new tab, and got checkered.png instead. With The Noun Project, for example, one can right-click and copy the image link, which can be pasted into drawio.


Using PNGs because there are some cross-browser issues with displaying SVGs from a number of the open source sets that I've collected. Rather than going icon by icon to resolve those (which isn't feasible), I render a PNG, make the PNG downloadable (via the button), and make the SVG downloadable (also via the button).
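
The pre-render step itself can be tiny. A sketch of the idea - CairoSVG here is just one rasterizer option, not necessarily what's in production:

    # Pre-render a fixed-size PNG preview from the source SVG so the
    # browser never has to parse the (possibly quirky) SVG itself
    import cairosvg
    cairosvg.svg2png(url="icon.svg", write_to="icon.png",
                     output_width=256, output_height=256)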

Could probably make it so you can right-click the image and open it in a new tab. Will take a look at that :)


1. Thanks for a wonderful site. It's the best of its kind

2. I hope you're not paying for hosting, since places like Cloudflare and Netlify host static sites for free, and I think yours probably qualifies

3. It's free--what would you need marketing help for?

4. Have plans to monetize somehow? I can imagine a couple of ways you could do so.


1. Thanks, that's v kind :)

2. A bit complicated, but it's not an added financial burden :)

3. I'd love to grow it, the API, and a bunch of other things related to it.

4. Yeh, there are some routes, but for the time being, I'd love to have it serve as an open source resource for people.

Let me know if you'd want to contribute: oliver@iconduck.com


Are you aware of thenounproject.com? It looks like your project is branching out into more than icons? Edit to add: just visited thenounproject.com again, and it looks like they added photos to their database.


Yep I know of them. They're gr8. We're just positioned differently (I'm much more interested in having everything be open source).


I'd like to know a little more and possibly contribute. I come from a design background, but I'm a full stack engineer now so I like to dabble in different things from time to time.


Yeh happy to chat. Wanna send over an email? oliver@iconduck.com


Cool, might start using this instead of svgrepo


Wow, nice. I am a front-end developer and I would love to join you on this.


That’d be dope. Send over an email? oliver@iconduck.com


I am a former principal software engineer at a MAMAA company turned independent educational researcher. My primary interest is in deliberate practice and how to use spaced repetition (i.e., Anki) to develop expertise.

I have three primary areas that I am working on and would love to find serious collaborators:

1) I am building high-quality content for math, English language arts, chess, etc. (see [0] for a good explanation of what this looks like). This is primarily for my 2nd grade child, but I have also written hundreds of cards for high school level math, undergraduate level math, and programming languages.

For example, I used this approach to build an Anki deck that decomposed the NNAT test (i.e., a gifted program test) into atomic chunks and then demonstrated how sample NNAT questions were composed of those primitives.

2) I am eventually looking to leverage this content and knowledge to build turn-key resources for others. This is surprisingly challenging for reasons I won’t touch on, but it could profoundly improve learning outcomes for many people.

3) I am pondering how to enable richer sharing and collaboration between people. I have a number of patents in this space and can envision a few business opportunities.

----

[0] https://andymatuschak.org/prompts/


> MAMAA

took me a second, but I guess it's Microsoft Apple Meta Amazon Alphabet now


I kinda would rather just keep calling it Facebook; Meta feels like a bad joke that we shouldn't repeat.


It reeks of “that’s so fetch!”


Welp. My brain interpreted "fetch" in the context of "replacement for JavaScript XHR", as though Vietnam war flashbacks had collectively been replaced with modern web development.


Why don't we all just say Big Tech at this point? Literally could be any of those companies, and it doesn't matter beyond that.


I'm partial to MANGA (for the former FAANG) and GAMMA when you replace Netflix with MS.


yep, exactly. When Facebook changed their name to Meta, people were looking for a good acronym for the biggest tech companies.


NAAAM, a tour in NAAAM

I need to get back on Blind to make my memes propagate


It's kind of weird to use Alphabet in this context because people at Google seem to mostly still refer to it as Google, and the comp and hiring standards vary widely between the "other bets".


I mean, I imagine Facebook employees still call it facebook too?

I like it, I hope it catches on. Right now I think it's just odd looking because it's unfamiliar.


Interesting that out of five corporations, only two letters are represented. Ok, "A" is ranked third in frequency in the English language, but the other one I avoided in these sentences? Not even in the top twelve.


Frequency at the beginning of words is likely to be different from frequency anywhere in the word. Regardless, "a" has the highest frequency in the company names when spelled out in full (occurring 6 times, in 4 of the 5 corporations).


Could be M2A3, like the tank.


GAMMA?


At Quizlet, we are working on problems in this space.

https://quizlet.com/features/how-quizlet-works

I'm an engineer on the step-by-step Explanations team - if interested in learning more, shoot me an email (scott @ quizlet.com) and we can chat! Maybe this violates the objective of the OP - but it sounds like we'd have fun collaborating. We'd just be getting paid by the same company to do so.


I'm trying to build GitHub for flashcards: https://github.com/dharmaturtle/cardoverflow

Emailing you!


Have you already written about the challenges alluded to in item 2 of your post somewhere? I would love for these resources to exist and I'm curious about the obstacles you're facing.


No, not yet. In brief, some of the key challenges seem to be:

1) Each person's internal representation of knowledge differs (see [0]). Therefore it is difficult to take someone else's questions and answers and expect them to make much sense, be at the appropriate difficulty, or even be relevant.

2) The current unit of collaboration is an entire deck. This is far too coarse grained and impedes finding and adapting relevant material. In addition, it is a pain to later sync if there are updates.

3) The current unit of work is a single card. This has some advantages, but also makes it harder to make changes, see the forest for the trees, and think holistically about knowledge.

[0] https://en.wikipedia.org/wiki/Chunking_(psychology)


I talk about this a little in an old reddit thread:

> With StackOverflow/Wikipedia there's only one article/question/answer. With flashcards, people want to individualize their cards. As an example, for you as a foreign language learner, perhaps you want to include short clips from movies/youtube of someone saying a phrase like "Where is the library?", while someone else wants to use a clip of the same phrase from a podcast. The semantic content is the same, but the reification into a flashcard is not. You could possibly link to related cards like StackOverflow does in a sidebar... but I'm designing something that's more like concept learning. Each card is an Example of a Concept. The author can then move their Card/Example into what Concept they think it best fits - or create a new Concept if they can't find anything they like. Basically I'm building in the ability to group cards that have very similar content. By looking at all the various Examples/Cards, a person can choose what best suits their style - or make one for themselves by forking an existing card.

https://www.reddit.com/r/Anki/comments/nalar8/open_source_we...
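
In rough pseudo-model form (names here are illustrative, not my actual schema), a Concept groups many Cards, and a Card keeps its provenance when forked:

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class Card:
        front: str                            # e.g. a prompt or audio clip reference
        back: str                             # the answer side
        author: str
        forked_from: Optional["Card"] = None  # provenance for forked cards

    @dataclass
    class Concept:
        name: str                             # e.g. "Where is the library?"
        examples: list = field(default_factory=list)  # alternative Card reifications

        def add_example(self, card: Card) -> None:
            self.examples.append(card)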


Hello! Would like to reach out but don't know Microsoft products well enough to guess what their fiery email domain is. Do you mean outlook?


LOL, I suppose "fiery email domain" could mean outlook/exchange if we think in terms of reliability. I updated it to "Microsoft's not cold email domain acquired in 1997".


Hotmail!


Emailed you


I work on ways to write programs that help outsiders understand their big picture (rather than insiders understand incoming contributions).

The goal: you (any programmer) should be able to use an open-source program, get an idea for a simple tweak, open it up, orient yourself, and make the change you visualized -- all in a single afternoon.

More details: http://akkartik.name/about

What I have so far: https://github.com/akkartik/teliva

Lately I'm spending a lot of time on the sandboxing model. It's nice to be able to download and run untrusted programs before we start trying to understand them. The question is how to permit this without letting them cause too much damage - by explicitly giving them arbitrarily fine-grained permissions that are still easy to take in at a glance.
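
As a toy illustration of the shape I'm going for (this is not Teliva's actual mechanism, just a sketch of the idea): everything the program may touch is declared up front, in one glanceable place, and checked at the call site.

    # Everything the untrusted program may touch, in one glanceable place
    PERMISSIONS = {
        "net": ["api.example.com:443"],   # hosts it may contact
        "file": ["./state.json"],         # files it may read/write
    }

    def check(kind: str, target: str) -> None:
        if target not in PERMISSIONS.get(kind, []):
            raise PermissionError(f"{kind} access to {target!r} not granted")

    check("file", "./state.json")         # fine
    try:
        check("net", "evil.example:80")   # not in the manifest
    except PermissionError as e:
        print(e)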


Hey, I want to put together an open source project that gives an overview of how to set up a minimal viable web application from scratch via all the different frameworks.

The idea is to format the tutorial for each framework as a shell script, so there is no ambiguity about how to reproduce the results. And it is even possible to just copy & paste the steps into a Docker container and see the framework in action.

Here is a demo of how this could look for Django:

https://www.gibney.org/from_debian_to_web_app

It would be cool to have one column for each framework and then align them visually by feature. So if you want to compare how templates are used, you can look at the "Let's use templates" row and have a quick overview of how it is done in Django, Laravel, Flask, Symfony, NextJS...

Each framework section could link to the developer(s) who wrote it.

If you want to contribute to the section for your favorite framework, send me a message!


Seems like a great idea - what exactly is the MVP? I've thought about doing the same from time to time; doing it as a shell script seems like a nifty idea, but possibly a little more work for the author. Curious how you would approach multiple steps adding to the same file.


My idea is that the script contains the basic steps that building an MVP usually involves: setup, routes, templates and user accounts.

From there on, it is up to the developer to add their own design and functionality. After you understand the code for setup, routes, templates and user accounts this should be easy.

As for multiple steps adding to the same file, I think overwriting the whole file every time is doable. For example, when we introduce the concept of a template, the template can be created like this:

    cat << 'EOF' > templates/index.html
    <h1>Hello World</h1>
    EOF
Say later we want to use a base template which contains a content block. We then modify the template to extend the base template:

    cat << 'EOF' > templates/index.html
    {% extends "base.html" %}
    {% block content %}<h1>Hello World</h1>{% endblock %}
    EOF


Looks like you're basically looking for TodoMVC and RealWorld. TodoMVC is a simple todo list implemented in various frameworks, while RealWorld is a more complex real-world app: a blog-style social media site.

https://todomvc.com/

https://github.com/gothinkster/realworld


That is a bit of a misunderstanding. I do not want to build a collection of repos or projects which the user can read through or try out.

My whole project is just one page!

The page displays multiple scripts side by side.

One script per framework.

Each script can turn a fresh Linux installation into a working web application with routes, templates and user accounts.


Ah that makes more sense now. So you're basically writing docs for all these different tools such that someone can copy paste and get a working installation.

What are your thoughts on docker which seems to do something similar? Also, how would you stay updated on every single framework if they ever change their installation scripts or other such parts?


I wouldn't say that Docker is doing something similar. What I want to do is give a one-page overview of the web framework landscape.

Since I envision the scripts to be very small, I expect that updating them will not take long. The history of the updates will indeed be very interesting. In 10 years we can look at it and see how often each framework had breaking changes.


Makes sense. I mean Docker as in Dockerfiles, which are essentially scripts that create the Docker image as a full environment.


Yes, Dockerfiles usually set up an environment suitable for certain tasks.

So instead of using debian:11-slim as I propose, one could use a Dockerfile made for Django. But that would help very little. Django even abandoned their official Dockerfile because it brings so little to the table.

In my opinion, using a higher-level Dockerfile than the bare OS is a net negative. The developer won't know how much magic it hides, even though it just hides a few lines of code. And being higher up in the stack also means stuff will break more often and the scripts need to be updated more often.



Are you familiar with https://github.com/TechEmpower/FrameworkBenchmarks ?

It's not tutorial-style, but it does contain hundreds of sample web apps (that all do the same thing, but still)


Looking at those, it seems the sample apps are way more complex than what I envision. The Django one starts with these dependencies:

    Django==3.1.12
    greenlet==0.4.17
    gunicorn==20.0.4
    meinheld==1.0.2
    mysqlclient==1.4.6
    psycopg2==2.8.6
    pytz==2020.4
    ujson==4.0.1
The approach I want to show is: what are the minimal steps to get a working web application with routes, templates and user accounts? I know that, at least for Django, this is possible with no additional dependencies.
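
For illustration, a sketch of how small that can get in Django - the well-known single-file pattern (untested shorthand, not the exact script from my demo):

    # minimal_django.py - run with: python minimal_django.py runserver
    import sys
    from django.conf import settings
    from django.http import HttpResponse
    from django.urls import path

    settings.configure(
        DEBUG=True,
        ROOT_URLCONF=__name__,
        SECRET_KEY="dev-only",  # never hardcode a real key
        ALLOWED_HOSTS=["*"],
    )

    def index(request):
        return HttpResponse("<h1>Hello World</h1>")

    urlpatterns = [path("", index)]

    if __name__ == "__main__":
        from django.core.management import execute_from_command_line
        execute_from_command_line(sys.argv)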


Have you seen https://todomvc.com/?


Neat idea. Interested in this as a user - will reach out :)


A couple of years ago I started to build a tool for my own personal use. It ended up being a metaverse full of sticky notes. I'm currently seeing if there's a market for it outside of just myself - https://www.temin.net/

Give me a shout if you're interested in turning it into something - email is in my profile.


I've thought of something like this - glad to see it's been created. Love it. This is how I 'picture' things in my head, so it makes it easy for me to organize. Now if only there were a file explorer like this - I'd love to use one.


Thanks, glad you like it!

For organising information Temin has been an entirely positive experience for me. For the first time what's in my head matches what's on a screen.

Early on I expected that mental model to fall over as the amount of information in a metaverse grew, but I have ~12,000 sticky notes/pieces of paper in my 'main' metaverse and haven't personally felt the need to add any search functionality yet. I'm honestly not sure if I know where everything is, or just how to get back to it. Speaking to a neuroscientist or similar would be great - https://en.wikipedia.org/wiki/Visual_memory#Current_theories

I'd also be keen to speak to anyone who has thoughts on Temin as a graph, as more recently I've been finding sticky notes mean multiple things and belong in multiple locations. https://temin.co.uk/#links does a rather poor job of explaining my current solution.


This looks fantastic, I'd love to try this.

Regarding your question about why this works: I urge you to read at least the first chapter of Frances Yates' The Art of Memory (https://en.wikipedia.org/wiki/The_Art_of_Memory) - how people learned to retain vast stores of knowledge before the invention of the printed page.


Please do sign up if you haven't already, or send me an email and I'll get you set up.

Thanks for the book recommendation, I've started to read it. It's nice to get some history and depth to concepts I had some awareness of.

I do wonder how method-of-loci strategies stand up to brick-and-mortar sticky note use in collaborative environments. In my day job I've felt other people moving and adding things to a wall as almost destructive if I didn't experience it happening (purely in terms of memory). As changes in Temin are written to a ledger, a fringe benefit is you can play back what's happened while you were gone, which seems to soothe that.

For work, I also spend a lot of time writing/drawing/thinking with a pen, and most of the artefacts in my metaverse are created with a Wacom, not a keyboard. Being able to remember where things are I put down to Temin; being able to remember what's there I put down in some part to the pen. I'm not going to try to convert people who prefer to use keyboards, and I expect pen-first users will very much be a minority, but the research on retention when it comes to pen vs. keyboard is pretty compelling.


Hey Oliver, awesome project. I will definitely try it, and I am also interested in developing it further - Miro is breaking new records every month, so it's totally worth competing.

Also, it could be an interesting intersection between laptop (where you create data and put things on the boards) and VR (where you navigate, process and work with data). I am not sure about the MVP, but surely there are so many use cases, from personal collections of knowledge to teamwork, project documentation, etc.


Nice, I'll reach out to you.

Great insight into the use cases for Temin depending on the I/O. It took me using a desktop and VR headset to form the same opinions. Initially I thought my VR usage would be higher, but it's below 1% of the total time I spend inside Temin. VR is also pretty good for presenting/telepresence.

Mixed reality excites me way more than VR, so I'm keen to skate more towards that technology long-term.


Reminds me of Miro and Mural, there's surely a market for it.


It certainly scratched my personal itch for something that lets you relate/encode/recall information in 3D but also work in 2D.

The primary feedback from friends has been that it's cool, but hard to use without much of a UI (the current version is all shortcut keys). That's something I can fix in the next couple of weeks.

A harder problem is there's a decent chunk of knowledge workers with no experience navigating virtual 3D spaces. If you didn't grow up playing Quake, Minecraft etc you might find Temin frustrating to use for a little while.


Simple in form, it's brilliant. Best of luck to this one.


Very cool!


I'm trying to reclaim the word Santa from the toxic concept Santa has become (judging kids as naughty/nice is a subjective perspective that denies the very real need for acceptance, and name-calling is also a recognized form of abuse). I'm hoping to build a simple first version of www.santaisdeadlonglivesanta.net with a bit of text and a few pictures.

It's meant to be the launch of the SANTANET, the network of people choosing to play a game in real life: Satisfying All Needs Through Anarchogiving (SANTA). It's essentially performance art, as it's me living out a story I'm writing called "How Santa Stole Every Holiday." It's meant to be a set of incomplete riddles for people to expand on, as well as a place for coordinating a network of free giving.

Some of the riddles:

Complete the backronym of SANTANET. What can the NET stand for?

There's a finite number of human needs for surviving and thriving. Each need can be mathematically proven to exist (perhaps through applying category theory, infinity category theory, or constructor theory?). Once each need is proven to exist, what's the longest sentence you can make using the first letter of each need?

Are you a hidden Santa who may want to start giving to needs, rather than just giving toys people want?


Clearly you'll be getting a whole truckload of coal in your stocking!


Great! I'd rather hoard it and take it out of circulation anyway!


> What can the NET stand for?

No Elven Taskmasters


Neutralising Ethical Troubles


Now Everywhere Together


Hi, looking for collaborators to work on technology to eliminate rice paddy methane emissions! On a volunteer/hobbyist basis. Rice methane is 3% of the climate change problem. Moreover, any technological solution for rice paddies can be repurposed for defending against worst-case permafrost melt scenarios, or to be used as an emergency geoengineering lever by stopping natural wetland methane emissions (~16GtCO2e/yr). Our site is https://www.ricemethane.org/

Feel free to reach me at contact [at] ricemethane.org


Sounds like an awesome problem! Any ideas of how to build the first iterations of the hardware? What biological solutions are you envisioning?


If you are looking to do some collective good, consider a wildlife conservation project.

I created a GUI wrapper around a popular AI model for object detection for wildlife conservation [0]

The idea is that most ecologists don't have the technical expertise to run such models, so making their life easier is an important task. The use of AI also saves them loads of time. The project was born when I got in touch with New Zealand's Department of Conservation for volunteering opportunities.

I haven't had time to continue working on this; help is welcomed!

* [0] https://github.com/petargyurov/megadetector-gui


This kind of thing is what Wildlabs is all about: https://www.wildlabs.net/


I don't have anything to contribute or collaborate on, but just wanted to thank OP for the excellent idea, this looks like it really hit upon a community need.


Agreed.

I basically have all the collaborators that I need for the projects that I'm working on, but I think that it is a great thing to encourage altruistic development.

It is my experience that free work should be as good as (or better than) paid work. This means code quality, executable quality, documentation and support quality, user experience quality, etc.

I have found that many altruistic projects have great heart, but can sometimes fall short in quality. Maybe as a result (or maybe it's a cause), they are often not taken seriously by the people that could most use them, and this can also make it difficult to get folks to take high-quality work seriously (I have a lot of experience with that).

When looking for collaborators, techies often seek out engineers, but maybe they are better served seeking out advocates, artists/designers, writers, testers and integrators.

In my experience, recruiting evangelists in the user community is incredibly valuable (and difficult; especially if "people skills" aren't our top talents).

We can also do damage by pissing off these folks (I have done that). Free/Open/Altruistic projects can often be rather fragile, as motivation and engagement are the currencies we use. These are easy to dismiss or denigrate when we are used to the traditional motivators of paid work.


This is a great idea! Can we do this monthly?

I am having conversations with a few people to start the Conduit Foundation: require all new construction to be EV-ready. Have conduit installed whenever we build new parking. It makes it future-proof to pull cable and install chargers later. This is a one-time building code change with a continuous yield of new charge points.

Hit me up if you have any interest.


...lobby to have the building code changed.

Huh.

It's kind of neat to realize you can actually just go and do that. So easy to just write that sort of thing off as interminable because it will be hard. Very cool idea.

On a related note, I'm torn about applying the same idea to new residential constructions, and the minimum amount that would be appropriate to require. Such a concept would make for example running fiber straightforward and inexpensive.


I also think monthly, or at the very least quarterly, is a good idea. I suspect the half-life of a post like this would make it hard to find collaboration opportunities on an annual cadence, if it doesn't find you at the right time.


Second on the monthly idea.

Will you be lobbying at the federal level or focused on state/local?


Building codes are mostly at city/local level. We have to get this changed for every city. I am hoping once we get a few cities to adopt this, it will be easier to get other cities to adopt.

There are no federal/state building codes. There is a central authority (the ICC); all cities start with the ICC code and make modifications. We can also work with the ICC, but the adoption cadence will be slow. The ICC updates building codes yearly, but cities do not make yearly changes, and every city does this differently.

We can work with cities directly. Most city govts would be easier to approach and work with (we pay property taxes and live in the area!). We also have to work with the ICC; it's an international standard, so we can think of all countries and not just the US.


Got it. That's not a space I'm very active in, but have you checked trade associations? That may be a way to cut across state/city lines quickly, and it would be good to understand their stances because they do hire professional lobbyists at the city/state and, I'm pretty sure, federal level (met one from a roofing association socially).


That's a great idea! Trade associations like you suggested, as well as standards bodies (there is another for electrical/fire - the National Electrical Code by NFPA), can help a lot with the grassroots efforts of working with each city.

The plan is to pursue all possible paths and get this done as quickly as possible. The US has 2 billion parking spots currently [1]. If we are building 1% new every year, in ten years we can have 200 million charging spots. Range anxiety would be put to rest. We have to put in charging first; this accelerates EV adoption.

[1] https://www.bloomberg.com/news/articles/2018-11-27/why-parki...

Wild assumption, not a fact! This doesn't need to be net new on vacant lots; it includes rebuilding existing lots. Any development has to adhere to the latest city code.


I created a toolkit to evaluate many different speech recognition engines.

https://github.com/robmsmt/SpeechLoop

Comparing speech systems can take a long time, especially for a dev who doesn't have a background in audio/ML. How do you know which one will work best? Will the shiny new transformer model perform well enough? Most end up using one of the big tech companies' existing APIs to throw their data at. Whilst this is convenient, I think it's a travesty that open-source speech systems are not as easy to use. I was hoping to change that and make it easy to evaluate and compare them!


I developed a speech recognition service for journalists a couple of years ago; now I'm migrating it to voxpop.live (it was speech.media). Unfortunately, the ASR I used (Amazon) wasn't good enough in Russian (my main target was Russian journalists). Now I am thinking about what to do next with it - let's get in touch and exchange ideas?

The best ASR on the market, in my opinion, is Speechmatics, but to use it commercially you need to deposit $10k. Btw, there are services that let you train your own model, e.g. https://www.defined.ai

Also, as an idea: use GPT-3 to correct any model's response (punctuation, summary), but it's gonna be expensive!


I'm likely to lose the use of my hands in the next few years so I've been trying to figure this out from the user perspective (for Linux) for a few years to try to sort of set up and get used to the tools I'll need later in life.

I've been using Almond, but it's really not good. I don't know how I might help but I'm definitely interested in the results... if I could use a high quality microphone to open a program, select menus, and type accurately (and have commands to press arrow keys) I think I'd be all set. I would be able to do anything I wanted, even if it was a bunch of steps.

I remember Dragon NaturallySpeaking in like 1995 being basically capable of doing all of this - I was able to completely control a computer with speech back then, and now I can't. It's extremely strange for 26 years of development.

It is as if all the tools try to be so clever, instead of assuming the user can learn new tricks; to me it should be the same as learning to type or use a mouse. Yeah, I used to have to say "backspace backspace period space capital while" to get fine details, but at least it was possible. I could even select things with voice commands. I just hope that we don't lose sight of the value of voice recognition as a general input device in the search for which model performs best on accuracy alone.


I am sorry to hear this. I think there are many people in a similar boat to you, and there are quite a few people working on command & dictation computing. Although my tool _may_ help you find out which speech systems work well for your voice/accent/mic/vocab, it might also be worth trying one of the specialist libraries built specifically for dictation and controlling computers.

I've not heard of Almond, but I have seen the following projects which might be helpful:

- Dragonfly: https://github.com/dictation-toolbox/dragonfly

- Demo: https://www.youtube.com/watch?v=Qk1mGbIJx3s / Software: https://github.com/daanzu/kaldi-active-grammar

Far-field audio is usually harder for any speech system to get correct, so having a good quality mic and using it nearby will _usually_ help with the transcription quality. As a long-time Linux user, I would love to see it get some more powerful voice tools - I really hope that this opens up over the next few years. Feel free to drop me an email (on my profile) - happy to help with setup on any of the above.


I think the current issue is that lots of people are intellectually excited by the framework stuff: libraries, that Python project to implement commands, etc. I do totally get that; I definitely find it more interesting.

What would help much more as an end-user would be integrating things nicely into window managers. I am optimistic that it is on a roadmap, but I don't really get how all the pieces fit together. I hope that on Linux it doesn't end up somehow requiring every application to implement support individually; it seems like a clever HID driver could do it.

I suppose such things could be model independent.


Unfortunately Dragon development has mostly stalled for the last 5 years (Dragon 15 was a leap forward but that was quite some time ago now).

You can still make use of it via Dragonfly (see also Caster[0]) as mentioned by a sibling comment or by using Talon[1] or Vocola.

Having used a computer 90% hands-free for about a year and a half back in 2019, I chose Dragonfly then, but would probably choose Talon nowadays - less futzing about, and it has alternative speech engine options.

I also recommend looking into eye tracking: the Tobii gaming products[2] work well for general computer mousing with some software like Talon or Precision Gaze[3] - well enough for me to make a hands free mod[4] for Factorio, for example.

[0]: https://github.com/dictation-toolbox/Caster [1]: https://talonvoice.com/ [2]: https://gaming.tobii.com/product/eye-tracker-5/ [3]: https://precisiongazemouse.org/ [4]: https://github.com/caspark/factorio-a11y


I vote for making this a monthly post or at least biannual!


Yes! Really nice topic. Adds to the entrepreneurial / inventor spirit of HN!


Seconded!


I'm working on a decentralized identity-based communications project. Think of this as email (or instant messaging, social networking, etc.) where your "account" is yours forever and cannot be taken away. We have the email part functioning and there are some real users. At this point, I'm working on the decentralization part, where the identity registry will be put on a blockchain (likely a custom Ethereum-based blockchain, to keep it free for users). Join us if you want to make the world a better place by fighting censorship and personal information collection. We are 100% open source:

https://ubikom.cc https://github.com/regnull/ubikom


Great idea, I was wondering if someone had started working on something like this. My thought was to use the identity for distributed social media (blogging platform where you can easily restrict viewership of your posts by granting revocable keys to your contacts).


I see bits and pieces of this idea floating around, but I don't think anyone has combined them into a complete platform yet.


Are you at all interested in non-chain name registration? You could always allow non-unique names and just leave it up to users to verify.

One big advantage of P2P is partition tolerance and ability to work on isolated networks in an emergency.


Yes. In fact, we decided to go the blockchain way (for now) with some hesitation. It just seemed like the best way to address the decentralized identity piece for now. But there are downsides, for sure.


I read through a few pages and didn't get the most important question I have answered: how does this system avoid my identity being the private key itself, a thing which can be taken away?


Your identity is your private key. It can't be taken away in the way your Google account can just be disabled one day. We also have a way to retain control if your private key gets compromised - you can have a parent key, which can then be used to disable your compromised key and re-assign your name to another key. Since you never use your parent key (presumably kept in a safe offline place), it gives you some degree of protection.
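
As a sketch of the parent-key flow (hypothetical record format, not our actual wire protocol): the offline parent key signs a record revoking the compromised key and binding the name to a fresh one, and the registry verifies that signature against the registered parent public key.

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    parent = Ed25519PrivateKey.generate()       # kept offline, rarely used
    child = Ed25519PrivateKey.generate()        # day-to-day key, now compromised
    replacement = Ed25519PrivateKey.generate()  # new day-to-day key

    # Hypothetical revocation record: disable old key, bind name to new key
    record = (b"revoke:" + child.public_key().public_bytes_raw()
              + b";assign:" + replacement.public_key().public_bytes_raw())
    signature = parent.sign(record)

    # Registry-side check against the registered parent public key
    parent.public_key().verify(signature, record)  # raises InvalidSignature on forgery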


Ad-hoc data formats like JSON and XML are too insecure for the modern world, so I'm developing a new format to remedy this [1].

It's a twin format, one binary and one text, so that you can input / edit the data in text, and then it passes from machine to machine in binary only (or gets converted back to text if a human needs to inspect it). The binary format is designed for simplicity and speed, and the text format is designed for human readability.

Both formats are designed for security and minimal attack surface, while providing the fundamental data types we use in our daily life (so you're not stuck doing stringification and base64 fields and other such hacks).

I've pretty much completed the base format [2], and am 90% done with the golang reference implementation [3] plus some standard compliance tests, but I could use a lot of help:

- Reviewing the specifications and pointing out issues or anything weird or things that seem wrong or don't make sense.

- Implementations in other languages.

- Ideas for a schema.

- Public outreach, championing online.

[1] https://concise-encoding.org/

[2] https://github.com/kstenerud/concise-encoding

[3] https://github.com/kstenerud/go-concise-encoding


An alternative approach might be to use an existing popular serialization format such as Protocol Buffers, Apache Thrift, or Cap'N'Proto and create or improve tools that convert to/from human-readable text formats to the serialized binary format.

For example:

- Protocol buffers have a text format mode: https://medium.com/@nathantnorth/protocol-buffers-text-forma...

- Thrift has readable-thrift which is a human-friendly encoder and decoder: https://github.com/nccgroup/readable-thrift

- Cap'N'Proto has a `capnp` tool for encoding/decoding text representations to binary which seems to be officially supported and documented! https://capnproto.org/capnp-tool.html

These libraries have been battle-tested by major companies in production, some protocols and implementations have gone through security audits, and in addition each of these formats already has many language bindings, for example:

- Protocol Buffer Third-party language bindings: https://github.com/protocolbuffers/protobuf/blob/master/docs...

- Apache Thrift language support: https://github.com/apache/thrift/blob/master/LANGUAGES.md

- Cap'N'Proto in other languages: https://capnproto.org/otherlang.html


Yes, I had a look at these formats before embarking on my venture. I listed the things I found important in the comparison matrix: https://github.com/kstenerud/concise-encoding#-compared-to-o...

To your points:

- Protobufs is not an ad-hoc format, which is a big reason why low-friction formats like JSON are popular. There are many use cases where formats like protobufs are clearly the superior choice, but CE doesn't target those. This is a fundamental trade-off so you can't have both.

- readable-thrift is a diagnostic tool. You wouldn't want to be inputting data like that. I want the text format to be fully usable by non-technical people, like JSON is.

- the capn proto tool page doesn't seem to document how the text format works (or at least I couldn't find any examples). It looks more like a diagnostic tool, not a first-class citizen.

I felt that there were enough pain points, missing types, and missing security features (for example versioning) to warrant a fresh start.


> the capn proto tool page doesn't seem to document how the text format works (or at least I couldn't find any examples). It looks more like a diagnostic tool, not a first-class citizen.

Cap'n Proto's text format works pretty much exactly like Protobuf's. You can use it in all the same ways.


Thank you for the clarification! Perhaps a better example would have been Apache Avro (still has a schema, though): https://en.wikipedia.org/wiki/Apache_Avro

This looks like a very ambitious project, and I can see that you've put a lot of thought, time, and effort into it! You clearly have a lot of interesting ideas (the graph idea is really cool) and significant experience with data formats.

If this is a security-oriented application, then with cyclic data structures there is the risk of blowing out your server's memory using something like a fork bomb when processing untrusted user input (https://en.wikipedia.org/wiki/Fork_bomb).

There are some systems like DHall that guarantee termination by putting upper bounds on computation: https://dhall-lang.org/

I'm also a bit concerned with how the different features can interact. For example, it's not super clear how to distinguish between a UTC offset (-130, or do these always have to be 4 digits?) and global coordinates (-130/-172). An attacker could specify a comment inside the media type (eg: application/*), which would require special logic to filter out.

My concern is that the parser will become extremely complicated and require a lot of special-case logic and validation (eg: there must be at least one digit on each side of a radix point) which is more prone to errors and unexpected behaviors.

Rather than using slash delimiters, I'd recommend splitting the time formats into subfields, eg: { date: "2022-01-01" time: "21:14:10" offset_is_negative: true offset: "10:30" }

This does make the text format more verbose, but it reduces ambiguity and makes the parsing faster as well since you don't need to descend into branches and backtrack when they don't match, and also might permit more code/logic reuse.

It's also not clear how easy it is to add new data types to the grammar. Based on the project description, it seems like you're using an ANTLR parser.

Since you seem to be quite interested in parsing, you might also be interested in parser combinators, which are a somewhat different approach with different tradeoffs:

- https://softwareengineering.stackexchange.com/questions/3386...

- https://fsharpforfunandprofit.com/posts/understanding-parser...


Yes, I had a look at Avro as well. I've been following all of the established and nascent formats over a number of years, hoping for one that addresses my concerns, but unfortunately nothing emerged. My ambitions are actually at a much higher level; this is just to set a solid foundation for them.

Cyclic bombs are but one security concern... There are actually a LOT of them, which I try to cover in the security section ( https://github.com/kstenerud/concise-encoding/blob/master/ce... ). The security space is of course wider and more nuanced than this, but I didn't want to turn it into an entire tome, so I tried to cover the basic philosophical problems. At the end of the day, you must treat data crossing boundaries as hostile, and build your ingestors with that in mind. Sane defaults can avoid the worst of them (and CE actually REQUIRES sane defaults for a lot of things in order to be compliant), but no format can protect you completely. A "fork bomb" using cyclic data is unlikely, unless your application code is really naive (if you're using cyclic data, you need to have a well-reasoned purpose for it, and are likely just using pointers internally - which won't blow out your memory unless you're doing something foolish when processing the resulting structs). Actually, this does give me an idea... make cyclic data disallowed by default, just to cover the common case where people don't use it and don't even want to think about it.
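
The guard itself can be tiny. A generic sketch (not the actual go-concise-encoding code): walk the decoded structure and refuse to resolve anything that references one of its own ancestors.

    # Walk a decoded document and reject any reference to an ancestor
    def ensure_acyclic(node, ancestors=None):
        ancestors = ancestors or set()
        if id(node) in ancestors:
            raise ValueError("cyclic reference rejected (disallowed by default)")
        if isinstance(node, (dict, list)):
            ancestors.add(id(node))
            children = node.values() if isinstance(node, dict) else node
            for child in children:
                ensure_acyclic(child, ancestors)
            ancestors.discard(id(node))  # shared (DAG) subtrees remain legal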

Re time formats: global coordinates will always start with a slash, so 12:00:00/-130/-172. UTC offsets will always start with + or -, and be 4 digits long, so 12:00:00+0130 or 12:00:00-0130.
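
In sketch form (these regexes are mine, for illustration - not from the CE spec), the two cases never overlap:

    import re
    # UTC offset: '+' or '-' followed by exactly four digits
    OFFSET = re.compile(r"^\d{2}:\d{2}:\d{2}[+-]\d{4}$")
    # Global coordinates: always introduced by a slash
    COORDS = re.compile(r"^\d{2}:\d{2}:\d{2}/-?\d+(\.\d+)?/-?\d+(\.\d+)?$")
    for s in ("12:00:00+0130", "12:00:00-0130", "12:00:00/-130/-172"):
        kind = "offset" if OFFSET.match(s) else "coords" if COORDS.match(s) else "?"
        print(s, "->", kind)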

The validation rules are very specific, and that does complicate the text format a bit, but this drives at the central purpose of it: the text format is for the USER, and is not what you send to other machines or foreign systems. It's for a user to edit or inspect or otherwise interact with the data on the RARE occasions where that is necessary. So the text format doesn't need to be fast or efficient, only unambiguous and easy for a human to read. You certainly shouldn't open an internet-connected service that accepts the text format as input (except maybe during development and debugging...). In fact, I would expect a number of CE implementations (such as for embedded systems) to only include CBE support, since you could just use a standalone command-line tool or the like to analyze the data in most cases.

Re: subfields. That would make it harder for a human to read. The text format sacrifices some efficiency for human friendliness and better UX. Parser logic re-use isn't really a priority (other than making sure it's not OBVIOUSLY bad for the parser), because text parsing/encoding is supposed to be the 0.0001% use case.

It's not super easy to add new types to the text format grammar, but that's fine because human friendliness trumps almost all, and adding new types should be done with EXTREME CARE. I've lost count of all the types I've added and then scrapped over the years. It's really hard to come up with these AND justify them!

The ANTLR grammar is actually more of a documentation thing. I've verified it in a toy parser but it's not actually tied to the reference implementation (yet). The reference implementation currently is similar to a parser combinator, with a lot of inspiration from the golang team's JSON parser (I watched a talk by the guy some time ago and was impressed). But at the same time I'm starting to wonder if it might have been better to implement the reference implementation as just an ANTLR parser after all... leave the optimizations and ensuing complications to other implementations and keep the reference implementation readable and understandable. The binary format code is super simple, and about 1/3 the size of the text format code. The major downside of ANTLR of course is the terrible error reporting.


Thank you for the detailed and comprehensive explanations!

> There are actually a LOT of [security concerns], which I try to cover in the security section

If you'd like to eventually harden the binary implementations, you might also be interested in coverage-guided fuzz testing which feeds random garbage data to a method to try and find errors in it: https://llvm.org/docs/LibFuzzer.html

as well as maybe some kind of optional checksum or digital signature to ensure that the payload has not been tampered with (although perhaps this should be performed in another higher layer of the stack).
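For the Go reference implementation specifically, the coverage-guided fuzzing built into Go (1.18+) might be the lowest-friction way to do this. A minimal sketch, where `Decode` is a hypothetical stand-in for whatever entry point the real decoder exposes:

    package cbe

    import "testing"

    // FuzzDecode feeds coverage-guided random inputs to the decoder.
    // Decode is hypothetical; substitute the implementation's actual
    // entry point and return values.
    func FuzzDecode(f *testing.F) {
        f.Add([]byte{0x83, 0x01}) // seed corpus: a plausible document header
        f.Fuzz(func(t *testing.T, data []byte) {
            _, _ = Decode(data) // must never panic or hang on hostile input
        })
    }

Run with `go test -fuzz=FuzzDecode`.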

> make cyclic data disallowed by default, just to cover the common case where people don't use it and don't even want to think about it.

Yes, I think that making it an option which is restrictive (safe) by default would be a great idea. Or perhaps separating out the more dynamic types (eg: graphs, markup, binary data) to be loadable modules could also reduce the default attack surface area.

> You certainly shouldn't open an internet connected service that accepts the text format as input (except maybe during development and debugging...)

Yes, I fully agree with this! I initially assumed that the text format could be sent from an untrusted client similar to JSON and XML, but this makes more sense.

> because text parsing/encoding is supposed to be the 0.0001% use case

I see, so the main use case of the CTE text format is rapid prototyping, and then the user should convert to the CBE binary format in production?

> It's not super easy to add new types to the text format grammar

Customizable types could be a really great way to differentiate from other serialization protocols. I did notice that the system allows the user to define custom structs which is quite useful.

Another approach would be to embed the grammar and parser into an existing language like Python, Rust, or Haskell, and let the user define their own custom types in that language. In my experience, custom types help prevent a lot of errors (eg: for a fitness tracker IoT application, you could define separate types for ip_v4 address, duration_milliseconds, temperature_celsius, heart_rate_beats_per_minute, and blood_pressure_mm_hg for systolic and diastolic blood pressure, rather than using just floating-point or fixed-point numbers; this could prevent many potential unit conversion and incorrect variable use errors at compile time). Or you could better model your domain with custom types (eg: reuse the global coordinate data structure from the time zone implementation to create path or polygon types using repeated coordinates).
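To sketch that compile-time benefit in Go (the type names here are just the illustrative ones from the example above, not anything the CE spec defines):

    package fitness

    // Distinct named types make unit mix-ups a compile-time error: a
    // TemperatureCelsius can't be passed where a HeartRateBPM is expected.
    type DurationMs int64
    type TemperatureCelsius float64
    type HeartRateBPM uint16

    // Modeling systolic/diastolic as one value keeps the pair together,
    // so the two numbers can't be accidentally swapped for unrelated ones.
    type BloodPressureMmHg struct {
        Systolic, Diastolic uint16
    }

    type Reading struct {
        Duration  DurationMs
        Temp      TemperatureCelsius
        HeartRate HeartRateBPM
        BP        BloodPressureMmHg
    }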

> adding new types should be done with EXTREME CARE

maybe it would make sense to create a small set of core types (kind of like a standard library), and then permit extensions via user-defined types which must be whitelisted? But pursuing that route could end up addressing a very different niche (favoring a stricter schema) in the design space.

> The major downside of ANTLR of course is the terrible error reporting.

This is a major advantage of the parser combinator approach, in that it is possible to design them to emit very helpful and context-aware error messages; see, for example, the examples at the end of: https://www.quanttec.com/fparsec/users-guide/customizing-err...

Anyway, hope this was useful and I wish you good luck with your project!


> If you'd like to eventually harden the binary implementations, you might also be interested in coverage-guided fuzz testing which feeds random garbage data to a method to try and find errors in it: https://llvm.org/docs/LibFuzzer.html

Yes, I plan to fuzz the hell out of the reference implementation once it's done. So much to do, so little time...

> I see, so the main use case of the CTE text format is rapid prototyping, and then the user should convert to the CBE binary format in production?

CTE would be for prototyping, initial data loads, debugging, auditing, logging, visualizing, possibly even for configuration (since the config would be local and not sourced from unknown origin). Basically: CBE when data passes from machine to machine, and CTE only where a human needs to get involved.

> Another approach would be to embed the grammar and parser into an existing language like Python, Rust, or Haskell, and let the user define their own custom types in that language.

I demonstrate this in the reference implementation by adding cplx() type support for Go as a custom type. Then people are free to come up with their own encodings for their custom needs (one could specify in the schema how to decode them). I think there's enough there as-is to support most custom needs.
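In spirit, the encoding can be as simple as this sketch (the payload layout is up to whoever defines the custom type):

    package custom

    import (
        "encoding/binary"
        "math"
    )

    // encodeCplx packs a complex128 into a 16-byte custom-type payload:
    // real part first, then imaginary part, as little-endian float64 bits.
    func encodeCplx(c complex128) []byte {
        buf := make([]byte, 16)
        binary.LittleEndian.PutUint64(buf[:8], math.Float64bits(real(c)))
        binary.LittleEndian.PutUint64(buf[8:], math.Float64bits(imag(c)))
        return buf
    }

    // decodeCplx is the inverse mapping.
    func decodeCplx(buf []byte) complex128 {
        r := math.Float64frombits(binary.LittleEndian.Uint64(buf[:8]))
        i := math.Float64frombits(binary.LittleEndian.Uint64(buf[8:]))
        return complex(r, i)
    }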

> maybe it would make sense to create a small set of core types (kind of like a standard library), and then permit extensions via user-defined types which must be whitelisted?

I thought about that, but the complexity grows fast, and then you have a constellation of "conformant" codecs that have different levels of support, which means you can now only count on the minimal set of required types and the rest are useless. The fewer optional parts, the better.


EDN has some really good ideas in it. Here's the main spec: https://github.com/edn-format/edn

The Learn X in Y Minutes: https://learnxinyminutes.com/docs/edn/

A related talk by Rich Hickey that I think you'd find interesting: https://www.youtube.com/watch?v=ROor6_NGIWU

For a schema, I'd start with what CUE has done. The idea of types that constrain down as a lattice + a separate default path really resonates with me. https://cuelang.org/


Does it support IDL and zero-copy access? That's a must for safe and fast parsing and general ease of use.


Zero-copy access is supported for primitive and array types (int & float arrays, string types) provided the array was sent as a single chunk (multi-chunk is an exceptional case). "structs" cannot be zero-copy in an ad-hoc format (if you need that, something like protobufs is a better choice).
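In Go terms, zero-copy here just means handing back a window into the received buffer instead of allocating and copying. A minimal sketch, assuming the array payload arrived as a single chunk at a known offset:

    // view returns the array's bytes as a slice into the original document
    // buffer. No allocation or copy occurs, but the view is only valid as
    // long as buf itself is kept alive and unmodified.
    func view(buf []byte, offset, length int) []byte {
        return buf[offset : offset+length]
    }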

IDL would be a level higher than the encoding layer, so yes you could use this as the encoding layer for an IDL construct.


Have you seen ASN.1?


Yes, it's included in the comparison matrix: https://github.com/kstenerud/concise-encoding#-compared-to-o...


Local protobuf user here. Appreciate seeing a comparison chart. :-) It's unfortunate that it isn't documented very well, but Protobuf does have a text format [1] which I've used a lot, usually when writing test cases, but also when inspecting logs. Similar to the CBE encoder spec [2], it does use variable length encoding for ints [3] and preserves the type information. Another efficiency item to compare against different message types is the implementation itself, e.g. memory arenas out of the box. [4]

Regarding CE, what would be the use case? APIs, data at rest, inter-service communications? If it's data at rest meant for analysis, then there are probably a handful more formats to compare against.

If one doesn't wish to decode the whole message into memory to read it, FlatBuffers [5] is worth checking out; it's also supported as a message type in gRPC. It is similar to what is used in some trading systems. There is also a FlexBuffers variation if you'd want something closer to JSON/BSON.

Must say however, I found it cool that you have some Mac/iOS GitHub repos. Definitely going to take some time to check them out -- I used to develop iOS apps.

[1] https://developers.google.com/protocol-buffers/docs/referenc...

[2] https://github.com/kstenerud/concise-encoding/blob/master/cb...

[3] https://developers.google.com/protocol-buffers/docs/proto3#s...

[4] https://developers.google.com/protocol-buffers/docs/referenc...

[5] https://google.github.io/flatbuffers/flatbuffers_white_paper...


CE's primary focuses beyond security are ease-of-use and low-friction, which is what made JSON ubiquitous:

- Simple to understand and use, even by non-technical people (the text format, I mean).

- Low friction: no extra compilation / code generation steps or special tools or descriptor files needed.

- Ad-hoc: no requirement to fully define your data types up front. Schema or schemaless is your choice (people often avoid schemas until they become absolutely necessary).

Other formats support features like partial reads, zero-copy structs, random access, finite-time decoding/encoding, etc. And those are awesome, but I'd consider them specialized applications with trade-offs that only an experienced person can evaluate (and absolutely SHOULD evaluate).

CE is more of a general purpose tool that can be added to a project to solve the majority of data storage or transmission issues quickly and efficiently with low friction, and then possibly swapped out for a more specialized tool later if the need arises. "First, reach for CE. Then, reach for XYZ once you actually need it."

This is a partially-solved problem, but the existing solutions are security holes due to under-specification (causing codec behavior variance), missing types (requiring custom secondary - and usually buggy - codecs), and lack of versioning (so the formats can't be updated). And security is fast becoming the dominant issue nowadays.


An interesting project!

Regarding some of the ASN.1 comparison characteristics, I'm not quite sure if I understand--there's a lot to read here, and it's likely I've missed something by a lack of acquaintance with your documents/specifications. But a couple comments:

- Cyclic data: ASN.1 supports recursive data structures.[0]

- Time zones: ASN.1 supports ISO 8601 time types, including specification of local or UTC time.[1] I'm not sure how else you might manage this, but perhaps it's not what you mean?

- Bin + txt: Again, I'm unclear on what you mean here, but ASN.1 has both binary and text-based encodings (X.693 for XML encoding rules[2], X.697 for JSON[3], and an RFC for generic string encoding rules[4]; compilers support input and output).

- Versioned: Also a little unclear to me--it seems like the intent is either to capture the version of data sent across the wire relative to the schema used in its creation, or to tie the encoding to the notation/encoding specification. ASN.1 supports extensibility (the ellipsis marker, ...[5]) and versioning,[6] but AFAIK there's nothing that forces a DER-encoded document to describe whether it's from the first release or the newest. Relative to security, it also supports various canonical encodings.

[0]: https://www.obj-sys.com/asn1tutorial/node19.html and X.680 3.8.61.

[1]: https://www.itu.int/rec/T-REC-X.680-X.693-202102-I/en (see X.680 §38 and Annex J.2.11)

[2]: https://www.itu.int/rec/T-REC-X.693/en -- X.694 governs interoperability between XSD and ASN.1 schema.

[3]: https://www.itu.int/rec/T-REC-X.697/en

[4]: https://datatracker.ietf.org/doc/rfc3641/

[5]: See X.680 3.8.41

[6]: See X.680 §3.8.95


> - Cyclic data: ASN.1 supports recursive data structures.

Not sure if I missed something, but the link was talking about self-referential types, not self-referential data. For example (in CTE):

    &a:{
        "recursive link" = $a
    }
In the above example, `&a:` means mark the next object and give it symbolic identifier "a". `$a` means look up the reference to symbolic identifier "a". So this is a map whose "recursive link" key is a pointer to the map itself. How this data is represented internally by the receiver of such a document (a table, a dictionary, a struct, etc) is up to the implementation, but the intent is a structure whose data points to itself.
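In Go, for instance, a decoder might hand back a container that refers to itself (a sketch of the intent, not any specific library's API):

    package main

    func main() {
        // A map whose "recursive link" key points back at the map itself,
        // mirroring the CTE document above.
        m := map[string]any{}
        m["recursive link"] = m
        _ = m // naively printing or re-serializing m would recurse forever
    }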

> - Time zones: ASN.1 supports ISO 8601 time types, including specification of local or UTC time.

Yes, this is the major failing of ISO 8601: it doesn't have true time zones. It only uses UTC offsets, which are a bad idea for so many reasons. https://github.com/kstenerud/concise-encoding/blob/master/ce...

> - Bin + txt: Again, I'm unclear on what you mean here, but ASN.1 has both binary and text-based encodings

Ah cool, didn't know about those.

> - Versioned: Also a little unclear to me

The intent is to specify the exact document formatting that the decoder can expect. For example we could in theory decide to make CBE version 2 a bit-oriented format instead of byte-oriented in order to save space at the cost of processing time. It would be completely unreadable to a CBE 1 decoder, but since the document starts with 0x83 0x02 instead of 0x83 0x01, a CBE 1 decoder would say "I can't decode this" and a CBE 2 decoder would say "I can decode this".

With documents versioned to the spec, we can change even the fundamental structure of the format to deal with ANYTHING that might come up in future. Maybe a new security flaw in CBE 1 is discovered. Maybe a new data type becomes so popular that it would be crazy not to include it, etc. This avoids polluting the simpler encodings with deprecated types (see BSON) and bloating the format.
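The version gate in a decoder can be tiny. A sketch, assuming (as in the 0x83 0x01 example above) that small version numbers occupy a single byte after the identifier:

    package cbe

    import "errors"

    const supportedVersion = 0x01 // this decoder only understands CBE 1

    // checkVersion rejects any document this decoder wasn't built for,
    // which is what frees future versions to change the format entirely.
    func checkVersion(doc []byte) error {
        if len(doc) < 2 || doc[0] != 0x83 {
            return errors.New("not a CBE document")
        }
        if doc[1] != supportedVersion {
            return errors.New("unsupported CBE version")
        }
        return nil
    }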


I am trading crypto, options and stocks based on news (e.g. Twitter, YouTube, Reddit) in manual and algorithmic fashion (it feels a lot like Factorio: manual first then automate it).

I don’t believe in technical analysis, and I think the efficient market hypothesis is mostly true and love Fama’s work (and sometimes I am first!).

The biggest reason I do this is because it feels like a PhD that can actually pay well.

Are you similar? Let’s meet!

Email is in my profile


I'm building an app that helps crypto investors (traders will be added later) discover good projects. There are many projects of varying quality, and the great ones frequently go under the radar until they have done multiple X's. My thesis is that it would be a good idea to have a service that brings them to investors' attention early. I'd love to connect and get input/collaboration from people who know the space.


Interesting idea. Once upon a time, I wrote a social sentiment analyzer for various cryptocoins. It was actually pretty accurate and made me a few bucks. Quality was not considered at all, but after recently stumbling across forums shilling ICP (and subsequent research into said cryptocoin) I think your idea is a worthy one.


Thanks for your kind comments! Can we connect and talk more about this? (Lol @ICP)


Very cool. Sorry you’re being downvoted. I wish you luck.


>> based on news (e.g. Twitter, YouTube, Reddit)

This may be part of the motivation.


I'm working on tools/projects to unify, access, interact and use my personal data for quantified self, knowledge management, etc.

A couple of examples:

- https://github.com/karlicoss/HPI#readme

- https://github.com/karlicoss/promnesia#readme

Would very much love to discuss it with other people, collaborate etc.


The idea is definitely in the air, but I guess it's hard right now to come up with practical use cases. I mean, I do believe in and want to control my digital trace, but at the end of the day I don't know what to do with it except maybe visualize it on virtual whiteboards like Temin - the project showcased among others several threads above.

For privacy maniacs, of course, it would be awesome to add functionality to destroy your traces, but I doubt you could achieve it with current APIs.

It might also be interesting to produce a high-level analysis of your personality traits based on the messages/comments/articles you write (though how to use that later, I am not sure).


I was just going to say you should lookup this - https://beepb00p.xyz/myinfra.html - a page I bookmarked and have kept at the back of my head, and turns out it's you!

I have a strong personal need for such a system. I have severe chronic pain that disrupts my executive function, so it's hard to mentally self-direct. A visual map of data interconnectivity would help, where in part I could set view layers based on priority (hiding or showing different data depending on, say, whether I'm viewing a "holistic view" vs. a "project" - arguably just data that's tagged with a project name, etc.).

I'm also wanting to write a book on my journey with health and healing, among other system- and society-wide things that influence health - and that allowed my situation to exist and the dis-ease progression to occur. Having a singular system to access things like my bookmarks, notes in the Notes app, emails filtered by tags, and my comments on HN/Reddit/etc. would really help me start to organize and compile different data. Perhaps there could be a way to add a non-impacting link between different pieces of data: say there's a topic in the book related to food and/or water fasting - I could start to interlink all of that content, first with links, then perhaps with version control integrated as another step, so I can archive a previous/unedited version while keeping the polished/updated version (or multiple data files amalgamated) but linking to their sources/prior versions, etc.

Your project could easily fit into a broader project/effort of mine, though I've not been able to execute on it yet - I've only been able to plan, develop the vision, and do surface-level things so far. I have a surgery January 11th that may reduce my pain by 50%+, and hopefully my executive function returns at least somewhat; my pain situation involves more than just what this surgery addresses - another major source of pain, arguably the primary one, doesn't have a treatment yet, not a clinically available one anyhow. After the surgery I'd love to chat, share what I'm doing with my projects, and feel out what possibilities may exist.


I am very interested in this space. Thank you for sharing those links!

I've recently started following Paul Bricman's work "Thoughtware" https://paulbricman.com/thoughtware

I created a discord server called Awesome Knowledge Management https://discord.gg/XPNeDSQE2j

Let's chat!!


Your work (and gyroscope/stethoscope/other aggregators) has inspired me to start my own work on creating a centralized aggregator to ingest all of my data.


I'm building Total Recall (https://github.com/erezsh/TotalRecall/), a keyboard-first browser extension for bookmarks that's fast and useful. It lets you search through your bookmarks using a local full-text search, with support for tags and extra notes.

I'm doing it in my spare time, which is scarce, so I'd love another pair of hands to help me make it super duper great. (it's already great but it's just normal great). Ideally someone with some experience making extensions, but really anyone who's willing to put in the time and take it seriously, even if only for an hour a week.

Let me know if you're interested!


This looks great! I'll give the extension a try. I have been dying to find a reasonable bookmarking tool, but nothing I've tried sticks in my habit pattern. What are the types of improvements that you are looking to add?


Thanks! I'm planning to make a short youtube video that shows how to use it. The user interface is open-ended, which I consider a strength, but it means not all the uses are immediately obvious.

Things I'm thinking of adding:

- Allow syncing to Dropbox/Drive/etc. (currently sync is only to CouchDB)

- Improve full-text search

- "Save session" shortcut, to tag all open tabs in the window at once. (so you can later recall all the tabs at once by finding the tag)

- Automatically extract meta keywords from a url to suggest tags. (that's already written, I just need to test it and smooth it)

- Maybe save content of the page too? Maybe allow to also store pages that aren't bookmarked? (Currently I don't do it because it takes too much space and affects search speed)

- Automatically publish your bookmarks to a webpage (with filtering the tags you want to display). Maybe even add a social element to it, a little like delicious was.

I can probably think of more ideas. In most of my projects, I usually have more ideas than time :)


I am working on a project whose intent is to harness spare computational power from unused or rarely used servers and other devices, allowing a business to run background jobs on that spare compute.

Think BOINC, but for enterprise. I've never validated whether this is actually something businesses would use, but I've had fun coding it so far as just a side project.

I have been a bit overwhelmed, since I have not built anything like this before, so it would be useful to have someone to bounce ideas off of and work with. The plan down the line is to open source this codebase. It would be great if you are familiar with:

-IPFS

-Golang

-Decentralized databases

Details should be in my profile. Also, I work on VR browser Games so if you are interested in collabing on that let me know.


This is exactly how https://www.next-kraftwerke.com built one of the biggest European power generation networks - they used generators from important social infrastructure points, like hospitals. Usually these generators served only as a backup power source, so basically they were just standing there, losing value over time.


Are you aware of https://www.golem.network?


I've seen that; as far as I understand, it's more of a public "want to sell your compute to random strangers on the internet?" offering. Mine is private: something for individual companies to make use of their own idle compute on their own business problems.


Seeking motivated cofounder(s) interested in legaltech/edtech/govtech.

I am a domain expert in Veterans Disability law: 1.7mil in revenue (me and 3 paralegals) in 2020; formed a Veterans Disability boutique firm in 2021.

Looking to build the "Clio for Veterans Disability Claims." Although this is a single, niche area of administrative law, there are more than enough customers on an individual basis (veterans with claims, accredited agents, accredited attorneys [solos and firms]) and an institutional basis (veteran service organizations, law school clinics, non-profits).

The Clio-and-all-others approach of building out generic "law practice" software that you modify for your practice area does not work well for this area of the law, IMO. Mine will leave out features that wouldn't serve this area well, and thus won't charge per user for them.

I routinely have to maintain a legal strategy going back 50-60 years for a single claim. Clients routinely have 2-3 claims active at any given time. Claims routinely include 8-11 different medical conditions, each of which having to meet their own pre-requisites and burdens. Claims routinely branch onto different "paths" within the administrative scheme.

I want to empower veterans and advocates against the Kafka-esque system of the VA.


I can tell you from my own experience that one of the biggest hurdles was obtaining accurate medical records from hospitals and clinics outside of the DoD (for those times when on-base providers were insufficient). Every VSO I tried to work with seemed overly burdened and unfamiliar with how to communicate with systems outside of the state they lived in, and definitely useless in trying to get info from private hospitals overseas.

The second biggest hurdle was (and still is) tying those records, however complete they may be, to a legitimate claim. Lots of barriers there, including personal "nah that's no big deal don't bother complaining about it" type things. On the other hand and simultaneously, it feels to me like I am missing several things I should be claiming but can't find the right terms to use (both for claims and for care, tbh).

Not sure if that's even remotely relevant to your project, but I figure it couldn't hurt to share some potential user pain points.


It is definitely relevant because again, at a fundamental level, the mainstay Clio-and-others don't have a great way to track what is essentially staging vs. production of the evidence in the claims file.

Getting medical records from private provider X is really:

- Should they exist, because treatment was provided in the past?

- Do they still exist in reality? (Some private providers have very harsh retention policies - shred after 7-10 years.)

- Did they get the request?

- Did they respond correctly to the request?

- Does the response contain helpful information for any claim?

- Has it been submitted?

- Has it been submitted but not acknowledged?

- Has it been acknowledged, but not for the probative value we judged it to have internally?

Like - that's just a very SMALL part of the tracking process that goes on every day for every client.

The claims file contains some n# of PDFs when I become the rep. I do work in the background to review those explicit contents, make implicit judgments, and work further in the background on the creation/requesting of evidence to be staged for possible inclusion/submission to the VA. In the meantime, the VA is further adding to the claims file with every letter/memo/exam/set of VA medical records.

VSOs are overly burdened and sometimes ill-fit for certain parts of the process, but they have only ever truly excelled at filing new, initial claims or new claims for increases - not appeals or anything requiring nuance. Attorneys have really only been involved in the process in a major way for less than 15 years, versus generations for the VSOs, but within the last year attorneys assisting veterans outnumbered VSO assistance for the first time. Our average odds of success are also higher - it's something like 12% doing it yourself, 22% with a VSO, 32% with help from an attorney. I can't recall exactly, but it's in the annual reports of the VBA and BVA.


That's one of the most helpful HN comments ever posted by someone whose user name sounds like a Sopranos character


Hi, current vet with a disability here. I would like to work with you on this project if possible. Medically retired from the US Army as well (blue card).

My domain is cybersecurity and IT management; I'm actually in the last year of a BS in cybersecurity and IT management. I worked as a PM for a payment-processing software company but had to quit due to real life. I would like to help myself along with other vets with the VA system - hell, maybe improve the VA as well.


My email is in my profile. Feel free to reach out!


That sounds pretty interesting! I am definitely interested in following along.

Do you think you will opensource any of it? I am always interested in workflow tools for various domains.


I would defer entirely to the main contributor/team on that.

I support open source, and one of the things I truly hate is that several case management products don’t even offer open APIs. A few of them have closed theirs after initially opening them, or have just generally learned they can massively upcharge for the “luxury.”

Feel free to reach out!


I’m the maintainer of OneBusAway for iOS, an open source app that helps hundreds of thousands of people around the United States get real-time information about where their bus is, and when it will be arriving.

I’m always looking for more users of the app who are interested in helping to make it better: whether you’re a developer, designer, or just have ideas, I’d love your input.

This might be of particular interest to you if you live in Seattle, San Diego, Tampa, or Washington DC, but the app has also been deployed in more than a dozen other cities across the US.

https://github.com/OneBusAway/onebusaway-ios


Where do you get your data from? Do bus riders provide the information after getting on the bus?


Data comes from the transit agencies, which also self-host the server. There is a REST API of the same name that provides data from these servers to the app:

http://developer.onebusaway.org/modules/onebusaway-applicati...


I'm building a postgres extension that allows you to do web searches using a SQL query. The idea is to be able to pull in data from the web with some structure (which you define using custom scrapers) on demand.

Right now I have a proof of concept that's pretty simple. It's a multicorn extension that calls out to a FastAPI backend. I have it all running using docker-compose.

I'm open to working with people that want to use it, or people that want to build it. I don't have any real plans to open source it or commercialize it. It's just a little side project I think is neat. I'm open to any ideas or use cases you might have.

Send me an email (in profile) or dm. Looking forward to it!


My project supports African founders in taking their start-ups to the next level. All the companies are in the agricultural and food sector, have a launched product, and have first revenue. By giving them customized support, we aim to make them investment-ready. The idea behind the whole project is that start-ups can solve real-world issues - in our case, making small-scale farming, value chains, etc. more effective and thus increasing the income of smallholder farmers. Since brain drain in Africa is a very real thing, our companies are always looking for experts who could support them - it could be just a couple of hours per week or month.


Cool, I was just in Douala doing a project with Venture for Africa and before that did some research on the cold storage market in Kenya. What regions are you focused on? Other than the political importance of smallholder farmers, why focus on them? Why not help upskill medium to large farms that are more likely to have the capacity to use digital tools, technology, cut out middlemen, etc.?


Any website to read more?


At the moment not really, but there is some more information: https://www.giz.de/en/worldwide/83909.html


I'm working on solving the problem of letting non-technical people communicate about math without needing to learn LaTeX. There is also an element of classroom organization/grading optimization along the lines of Gradescope. All code for the project is GPL.

https://freemathapp.org


We’re building a Golang library for designing DNA. Most synthetic biology tools are essentially crap Python scripts - we’re looking at building maintainable tools for the next 10-20 years.

https://github.com/timothystiles/poly


The choice of Go for this is interesting. Having worked with Python & BioPython for bioinformatics problems I've found that there was a good deal of complexity around eking out performance (Cython, C++ extensions, Numba and so on) and also around distributing the tools (e.g. conda packaging). I've been wondering if Go would provide a reasonable middle ground in performance and ease of use between Python and C++ here. I'm not actually convinced yet but think its worth exploring. Noticed there's a BioGo project that's been around for a while, not sure of its uptake. Probably figuring out how well Go works for this domain will be a hobby project for me this year.


It’s been worth it for me so far. Anyway, the language is easy, so you can learn it pretty quickly and evaluate for yourself


Have you guys ever considered a frontend app for users to actually build an organism in the browser? I have no idea if it's something that anybody in the world needs, but it could be fun! I would be interested in building that.


This looks like a very fun thing to get into, and I love the idea of being able to help in this new (for me) field. However, thumbing through the issues listed in the GH repo leaves me realizing that there is a bunch of terminology that I would need to catch up on (15% of the stuff feels familiar). Any pointers on what stuff I could read / study that would serve as a nice primer into the terms and ideas being coded and modeled?


Could you leave us a git issue on the terminology you don't understand? We've tried to write the documentation in the code to make it clearer to non-biologists what is going on (for example, https://github.com/TimothyStiles/poly/blob/prime/seqhash/seq...), but we're honestly too deep in to really understand what isn't known and how to explain it nicely. If you could help us by asking naive questions, that'd help us write a nice primer (which would be very useful, since we'd like to recruit more software engineers rather than biologists).


Cool stuff! I'm curious if you considered Julia when choosing the language to work in, and if so, what were the pros/cons vs. Go.


When I chose Go for Poly, it pretty much came down to Go vs. Rust, and the main criteria were:

1. Fast execution

2. Strong devops ecosystem

3. Easy-to-learn syntax

4. Ships binaries for as many systems as possible

Julia wasn't really considered because the last time I used it (which granted was a while ago) there wasn't any support for binary compilation targets, and it didn't have a strong devops ecosystem. Also, it's more of a scripting language than something you'd write stable, deployable code in.

Rust beat out Go on speed and tied for shipping binaries but was, way, way harder to learn and Go was still leagues ahead when it came to packages and tooling.

If I remember correctly, C is typically 3x faster than Go and 2x faster than Rust. That difference in execution time doesn't matter in the majority of use cases, and Go is way faster and easier to learn and develop in, so it ultimately won.


I'm building iOS apps as a solo builder. Sometimes it's a little lonely. If you are interested in sharing experiences, advice, or just chatting about ideas/programming/tech, or collaborating, say hello: my contact is in my profile.


I've been designing iOS app UX/UI for the last 12 years. What are you building now? :-)


You should set up a Discord channel.


I've been building iOS apps as a solo builder for nine years and definitely agree that it gets lonely, both in the development of it and then waiting for users to stumble across it.

Maybe it's time to focus on marketing and product fit ;-)


Cool idea! I'm pretty busy a.t.m. but just in case, I'll toss my hat out there.

About me:

-PhD in physics, specialty in optical spectroscopy for fusion reactor plasmas

-Familiar with basic optics, analog circuit design & fabrication, signal processing, sensor fusion, data analysis, visualization, physics-based modeling & parameter inference (including tomography), Bayesian methods, & MCMC

-Very comfortable with the scientific python stack (Numpy/Scipy/Matplotlib/xarray), some experience with Julia (esp. SciML/ModelingToolkit)

Projects I'd be interested in (mostly stuff I've been meaning to try my hand at but haven't gotten around to):

-Unsupervised/self-supervised learning on real-world data (incl. algorithmic trading)

-Machine vision (and/or analysis of other sensory modes) for robotics

-Brain-inspired (ie, systems-neuroscience-based) AGI approaches (ex. hierarchical temporal memory, predictive coding, sparse distributed representations)

-Information-theoretic approaches to AGI (ex: total correlation explanation, free energy methods)

-Causal/generative/physics-based approaches to AGI

-Renewable energy & clean tech: battery systems modeling, demand/weather forecasting, or hardware projects

-Off-the-wall physics ideas: I'd be willing to listen to them & provide feedback. I'm personally interested in trying to reformulate quantum mechanics starting from geometric/Clifford algebra and Bayesian probability theory, as a way to make it more intuitive.

Email is in my profile.


An open question: What useful information do you think each person should write in the comment to help you decide whether to collaborate or not?


- Contact Method

- My Topics or Projects

- My Skills

- More Skills Needed

- My Goals

- My Links

- My Temperament

- Temperament Needed

- My Beliefs

- Other:

mromanuk - If you are interested in sharing experiences, advice, or just chatting about ideas/programming/tech, or collaborating, say hello: my contact is in my profile.

shazeubaa - Wouldn’t it be neat to hang out with devs while coding, even if not speaking that much. Can share screens, be supportive.

eloisius - A hypothetical project I'd be interested in contributing to would be


Areas of interest, skills you have, skills you're looking for. Maybe also an indication of experience/seniority level.


I don't have a side project to recruit for right now, but I've wanted this very thread in the past. My current projects all revolve around reducing my dependency on commercial SaaS products. A hypothetical project I'd be interested in contributing to would be an ActivityPub Strava/Ride with GPS clone.


I like the goal of reducing dependency on SaaS products. I use these Strava/Garmin apps a lot, and while Strava was cool for a while, it hasn't progressed and doesn't feel worth the money. Curious to hear what you had in mind; I am in the geospatial space and work with routing tools, etc.


I don’t have a feature list off the top of my head. The basics that Strava offers would be a good start: show your GPS tracks on a map, allow sharing with friends, a chronological feed of your friends’ rides.


I'm building a gig economy / play-to-earn content moderation tool where people earn crypto for moderating content on social platforms. Contact in bio.


I thought at first this HAD to be a troll, but nope!


There is some unexplored potential in opt-in moderation systems, where instead of a singular entity providing content distribution, content amplification, and content moderation, you can pick and choose what topics need to be filtered and what teams/individuals will do the filtering for you.


Sounds like Reddit. (being serious and not snark there. I quite like Reddit)


More like Reddit minus global rules. But I also see differences: subreddit admins manage both the feed and the comments, whereas here you could have that decoupled. The fun bit is comment-tree moderation. Imagine different subtrees culled for different people - effectively no global view of a comment section (unless you temporarily disable your user/comment moderation sources).


I have a sort of constellation of projects that fit together into an overarching project, and I'd love to have some other people interested or even collaborating on it.

The goals are:

- Make computers that are easier to understand and use (than current systems).

- Make large flying machines to enable cheap and efficient mass transport of people and materials. (Like huge, kilometer-scale kite/blimps.)

- Collect and recycle the Great Pacific Garbage Patch (and the other ocean gyres.)

An overview of the general strategy: Start with toys, both computers and the flying machines. Grow a community of folks to do distributed research and manufacturing. Collect and amass enough subunits (the flying machines are cellular) to build a machine large enough to reach the GPGP and return with some trash. Recycle the trash into raw materials (possibly using molten salt oxidation) to make more machines to collect more trash to make more raw materials, and so on...

There are a lot of details, obviously, but that's the gist of it. It probably sounds crazy, but I'm serious: I think it's doable, worthwhile, and fun. If you're interested, send me an email (my username is sforman and the server is hushmail.com).


> Make large flying machines to enable cheap and efficient mass transport of people and materials.

You might be interested in ground-effect vehicles which would be much faster than ships, yet still cheaper as well as more energy-efficient than conventional aircraft: https://en.wikipedia.org/wiki/Ground-effect_vehicle

There are some startups working in this area like the Flying Ship Company: https://flyingship.co/

> Collect and recycle the Great Pacific Garbage Patch (and the other ocean gyres.)

An easier approach might be to bioengineer bacteria to eat plastic: https://www.theguardian.com/environment/2021/dec/14/bugs-acr...

But this might have unintended side-effects.


Your enthusiasm warms my heart, cheers!

> ground-effect vehicles

I know of them a little. There may well be some applications for ground-effect in the devices I hope to build.

> bioengineer bacteria to eat plastic

Maybe so but that's outside my wheelhouse. :)

> you might be interested in concatenative languages like Forth, Factor, and colorForth.

Indeed! My "macro language" is a dialect of Joy: https://joypy.osdn.io/

> A good starting point or source of inspiration might be the Smalltalk and Lisp Machines from the past.

For sure they are inspirations, as well as the Oberon OS and Jef Raskin's "Humane Interface".

In re: Loper, yeah I love that guy (he's even more grumpy than I am for one thing.)

There are a lot of interesting projects and software out there that seem to be in the same or a similar vein, such as Collapse OS or akkartik's Mu.


I've been thinking about something similar — a large array of drones/flying machines, which can cover a large surface area, and be deployed above points of structural integrity in glaciers and biodiversity hotspots to lessen heating and support those prioritized locations. What you described and the motivations resonate as well. My email/contact is in my profile, would love to connect and explore together!


I don't know about reinforcing glaciers, I've never thought about it before, but I do think that giant 1000-kilometer greenhouse/shade structures are possible. Bucky Fuller was talking about putting geodesic domes over entire cities, so... :)

I'll send you an email a little later today, cheers!


> Make computers that are easier to understand

This piqued my interest. I've talked with some older people who had computers in their home during childhood, and they told me about tinkering with them and so on. Those were of course much simpler machines than we have today, and I ended up feeling like they had a more intimate relationship with the computer than their younger counterparts do. How are you planning to accomplish that?


That's exactly the kind of thing I'm talking about.

> How are you planning to accomplish that?

Thanks for asking. Two major ways: on the hardware side, I want to use small and simple systems (the Raspberry Pi is a little more complex than what I want). On the software side, I have a simple GUI and macro language that is easy to learn and much, much simpler and more elegant than current conventional UIs.


These projects seem really cool! A few links and historical references that might be of interest:

> On the hardware side I want to use small and simple systems

If you'd like to build a fully understandable computer, you might be interested in concatenative languages like Forth, Factor, and colorForth. These use a much simpler and more understandable, typically stack-based computational model and can run on microcontrollers like the STM32. You can create more complex words by composing simple assembly-language-like atoms, building higher and higher layers of abstraction.

- https://factorcode.org/

- https://concatenative.org/wiki/view/Factor/FAQ/Why%3F

- Motivation section from: https://bernd-paysan.de/why-forth.html

> On the software side I have a simple GUI and macro language that is easy to learn and much much simpler and more elegant than current conventional UIs.

A good starting point or source of inspiration might be the Smalltalk and Lisp Machines from the past.

- Introduction to the Smalltalk Programming Language: https://www.codeproject.com/Articles/1241904/Introduction-to...

- How Do I Master The Art of Smalltalk? https://www.quora.com/How-do-I-master-the-art-of-Smalltalk?s...

- Live Objects in Smalltalk Pharo: https://www.quora.com/What-is-this-live-objects-in-Smalltalk...

- Smalltalk Principles: https://www.cs.virginia.edu/~evans/cs655/readings/smalltalk....

You might also be interested in this blog from a guy who is trying to build his own OS from scratch and run it on an FPGA: http://www.loper-os.org/?p=55 Some of his earlier writings about Symbolics Open Genera and Lisp machines are also quite interesting and might be a good source of ideas and principles: http://www.loper-os.org/?cat=8&paged=5

https://github.com/ynniv/vagrant-opengenera


I'm working with the OpenAir collective (https://openaircollective.cc) on open source, DIY-friendly carbon capture machines.

I'm particularly interested in the electrolysis of CO2. I could really use help from anyone with a chemistry background!

If you're interested, please reach out to me via my email or hit me up on the OpenAir discord.


This seems quite cool and I really admire your efforts! For those who are already familiar with global warming / green house gases but would like to learn more about direct carbon capture, I found this chapter linked from the Open Air website to be quite useful: https://cdrprimer.org/read/chapter-2

Unfortunately I don't have much of a chemistry background. But as an alternative approach, it might be interesting to consider growing blue-green algae on a massive scale in deserts like the Sahara (or on solar-powered desalinization rafts in the ocean if there are land-rights-use issues) and using these to convert sunlight + carbon dioxide into oil [1].

[1] https://www.sciencedaily.com/releases/2020/03/200305132125.h...


I have a degree in chem and can be of help. Would you please elaborate a bit more on what exactly you people do? I'm not sure I understood it well from your website. [Also, what do you mean by electrolysis of CO2? It isn't an electrolyte per se.]


OpenAir is a volunteer group focused on Direct Air Capture (DAC) of CO2. There are two sides to it: Advocacy and research. The research side is currently working on open-source hardware for doing carbon capture at home. There are two projects (called Cyan and Violet) that you can follow along.

I'm working on something that I hope will become a third project. Briefly, I think that electrochemical methods to absorb CO2 and convert it to useful products could be really promising for at home DAC. I'm inspired by the quinone-based approach laid out by the Hatton lab at MIT[1], and the ORNL research on nanospike catalysts[2].

Right now I need help with designing experiments and sourcing precursor chemicals. I'd also love to know more about how computational chemistry might solve some of the problems we're probably going to face with overpotentials.

[1] https://www.nature.com/articles/s41467-020-16150-7

[2] https://chemistry-europe.onlinelibrary.wiley.com/doi/full/10...


Unmukt Foundation[1] in India runs after-school programs to give poor kids a technology education. We're looking for people who can teach Python, Arduino, anything else.

They make the best use of low-cost resources. Check out the Arduino-based robot built using recycled materials.

[1] https://unmuktfoundation.org/


I would like to build a small CAS (Computer Algebra System) that is usable from the web. It could be used in education to plot functions, solve equations (algebraic, ODEs, PDEs, ...), work with matrices, complex numbers, number theory, you name it. We need to come up with a decent DSL. My idea is that we would write the core of the app in Rust (so that it could be used on the web as wasm, or locally). The key features would be:

* Based on lean software, with a small footprint.

* The project itself should be a teaching project. I would like to document all the algorithms, from Risch to numeric ODE solvers.

* Free, open source, oriented towards education in mathematics.

I am a theoretical physicist and have been working as a computer programmer for the last 10 years. Money is not a concern for me anymore, and I really have an urge to give back. I am also happy to collaborate on any project at the intersection of mathematics, education, physics, and programming.

If you have a nice project you can have my axe!


Hi nhatcher,

You might be interested in https://github.com/google/mathsteps which is a CAS designed to automatically explain step-by-step a problem so that humans can learn from it. There is a presentation (https://www.youtube.com/watch?v=VnBae40DfjE) and a post discussing the system if you are interested (https://blog.socratic.org/stepping-into-math-open-sourcing-o...).

p.s. My comment from this thread also might be of interest to you: https://news.ycombinator.com/item?id=29761667


Thanks! I don't think I have ever seen mathsteps. Looks like a great tool; I will have a look at it!


Sounds interesting! Maybe extend/improve/integrate Desmos (used by Khan Academy and many others), GNU Octave, etc.? Not sure if Desmos has an offline version though.


The Desmos calculator is great! And very much in line with what I would want to do. Maybe it's just me being shy about approaching someone on the project, or the stupid "I want to do it myself" urge, but I haven't tried to join any existing project. I always blame myself for not joining sympy back in the day when Ondřej Čertík first created it. GNU Octave is going in a slightly different direction; I wouldn't even know where to start. Thanks :)


Looking for contributors to the Vox programming language/compiler: a statically typed, compiled, and embeddable language, primarily focused on gamedev. It uses a custom backend to keep compile times low and binaries small. Written in the D language.

https://github.com/MrSmith33/vox


I want to build an ultimate SaaS boilerplate / starter as a commercial product.

The audacious goal is something like “build a Trello clone from git init to a live product in under 10 minutes”

Looking for like-minded folks with any of the following skills:

- strong React

- UI/UX design

- TypeScript

- AWS

- SaaS building

- writers for docs, tutorials, articles, evangelization

- product / project manager


I would think that actually thinking through and defining everything a Trello clone needs to do would take more than 10 minutes... Are you imagining a "software requirements" -> "product" compiler type of deal? Or what do you envision the workflow would look like?


It probably wouldn’t be a perfect Trello clone. And of course it’s just a demo of capabilities, with a plan to develop it in a highly scripted way.


But what is the real user story?

take a look at https://www.airplane.dev and https://www.scriptkit.com


How much are you charging for this, and how is it different from the existing SaaS boilerplates on the market?


This wouldn’t be a service, but rather a product.

I want to lean heavily into existing AWS services and have everything pre-wired e2e, testable, with a great CI experience, and essentially following AWS best practices.

