
Ask HN: What weird or hard problems are you trying to solve? - rxsel
You know, the weird stuff ¯\_(ツ)_/¯
======
jawns
I hope this counts as weird enough ...

PROBLEM:

If time travelers from the future were to visit you, it would be difficult for
them to quickly prove their authenticity.

SOLUTION:

Temporal passwords. At the start of each year, you devise a new password. You
commit the password to memory, but you never write it down or divulge it to
anyone until Dec. 31, when you submit it to the Temporal Password Registry,
which publishes it and promotes its dissemination.

RESULT:

Prior to their visit, time travelers from the future can look up your temporal
password in the registry for the year in which they plan to visit you. Their
ability to communicate a password that you have not yet shared with anyone
provides evidence that they are actually from the future.
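
A toy sketch of both sides of the protocol (the years and passwords below are invented for illustration; the real registry is just a public list of revealed passwords):

```python
import hmac

# Toy registry: invented contents, for illustration only.
registry = {2018: "correct-horse-battery", 2019: "staple-gun-sunrise"}

def traveler_lookup(year: int) -> str:
    """In the future, every past password is public record."""
    return registry[year]

def verify_visitor(claimed: str, memorized: str) -> bool:
    """Only a genuine future visitor can know this year's password,
    since you haven't revealed it yet. Constant-time compare, out of habit."""
    return hmac.compare_digest(claimed, memorized)

# A visitor arriving in 2019 proves themselves with the 2019 password:
print(verify_visitor(traveler_lookup(2019), "staple-gun-sunrise"))  # True
```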

PROMO:

[https://temporal-password.pressbin.com/index.html](https://temporal-password.pressbin.com/index.html)

~~~
dave_sid
This has a big flaw. If the future visitors copy all passwords to a USB stick
and take it with them, then anyone in the present time that gets a copy can
pretend to be from the future. Worse still, everyone would believe the phoney
future visitors without question.

~~~
Jill_the_Pill
Do you think they'd still be using USB drives and bothering to keep track of
antique passwords in the future?

~~~
dave_sid
You may need to get an old USB stick from eBay. The good news is that they
would probably be selling quite cheap.

~~~
vikramkr
Unless they're priceless antiques of an age long gone and sell for millions of
inflation and future currency exchange rate adjusted dollars

~~~
dave_sid
That’s a very good point.

------
wiseleo
Education.

Schools are functionally no different from part-time prisons. You must attend
daily under penalty of law.

Many teachers are plain awful.

They rely on students being additionally taught by their parents. That forces
parents to go to school with their children, which perpetuates the vicious
cycle. The teacher recalibrates the class to the students who either
understood everything the first time or had supplemental education, and the
rest languish.

Schools assign homework that is not easy to do when the student hasn’t fully
grasped the concept. That burns time they could use to get better.

So... I am implementing the math Common Core in software. The first part is an
automatic homework solver for math. Once we’ve solved the student’s homework,
we can teach them how to do it with generated problems. Crucially, there will
be multiple perspectives and an ontology of topics so the student can backtrack
to where they got lost in class weeks ago.
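
A sketch of the backtracking idea, with a made-up four-topic fragment standing in for the real Common Core graph:

```python
# Hypothetical topic ontology: each topic lists its prerequisites.
ONTOLOGY = {
    "fractions": [],
    "decimals": ["fractions"],
    "ratios": ["fractions"],
    "percentages": ["decimals", "ratios"],
}

def backtrack(topic, mastered, ontology):
    """Return the unmastered topics to relearn, deepest prerequisites
    first, so a student stuck on `topic` can rewind to where they got
    lost weeks ago."""
    plan, seen = [], set()

    def visit(t):
        if t in seen:
            return
        seen.add(t)
        for prereq in ontology[t]:
            visit(prereq)
        if t not in mastered:
            plan.append(t)

    visit(topic)
    return plan

# A student failing percentage problems who has only mastered fractions:
print(backtrack("percentages", {"fractions"}, ONTOLOGY))
# -> ['decimals', 'ratios', 'percentages']
```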

After we are good with math, we’ll do the same with English. It will probably
not go too deep, but it will let students obtain the missing foundation of
knowledge.

~~~
adamsea
I certainly agree that so much about school is problematic and could be
improved, but ...

> Schools are functionally no different from part-time prisons.

A prison by definition cannot be part-time ; ). Plus, it's a totally unfair
comparison. Schools are intended for everyone. Prisons are intended for a
specific subset of people deemed to have broken a law.

Schools are for children. Prisons are for adults.

And so on.

~~~
icebraining
> A prison by definition cannot be part-time

Sure it can. Check out "weekenders" and "work release" programs, for example.

~~~
adamsea
Good point. To me the whole schools / prisons thing just seems dramatic and
not well thought out.

“X is like prison”.

Work is like prison. Childhood is like prison. Society is like prison.

Is everything like prison?

If we live in The Matrix, yes, but we don’t?

------
chiefmcloud
I’m working on a new way to talk online, with the goal of killing cancel
culture, increasing understanding, and basically calming down current
radicalization. Picture Reddit, but with the ability to anonymously share
ideas with other people in your social circles.

My theory is that most reasonable people stay off social media, so places like
Twitter end up filled with unreasonable narcissists. At the same time,
discussing politics on a semi-anonymous forum like Reddit is pointless, who
cares if someone on the Internet is wrong. But maybe there’s a better way of
communication, something new, that lets you talk with people you actually
know.

~~~
raffraffraff
I have a theory that, individually, people fundamentally disagree less than
they think they do. And in real face to face meetings, they are willing to
disagree about a topic without killing each other. They talk around points of
disagreement. Eg: me, when I'm talking to my grandmother.

The problem is that online people tend to polarize into factions during
complex discussions. The more heated the discussion, the more polarized they
become. Eventually it becomes impossible for either side to be self critical,
or to cede a point to the other side no matter how true it may be.

"B" says "the sea is made of saltwater". A moderately reasonable "A" agrees.
Mistake! The "A" group accuse them of being pro-B or anti-A. Suddenly, the
idea that the sea is saltwater becomes a "B dogwhistle". A cunning trick! At
this point, the "A" group adds 'saltwater sea' to their list of unacceptable
opinions, and the policing of everyday language and opinion is in place.
That's when the notion of defending free speech gets questioned. How can you
police language if people defend free speech?! It won't do! So now free speech
is a dogwhistle too. Add it to the list! There is no possibility for
constructive dialogue, no matter how sensible, kind and cool-headed it tries
to be. Disagreement with dogma or talking to the opposition are offences. New
rules: "Don't follow an A on Twitter". Guilt by association. Next up: let's
use the abuse/reporting system to ban individuals. Eventually, online systems
that were supposedly designed for communication and conversation have become
tools for suppression and virtue monitoring.

While anonymity itself will help people to own up to an opinion that they
might otherwise be afraid to voice, unless there are no usernames at all it
will be possible to track an individual across the system and figure out
"which side they're on". Then you'll be able to figure out if their comments
are acceptable or hatespeech without having to properly consider them.

Don't get me wrong, I'd love to see this happen, but I think it requires a lot
of work. Deduplication of points made, automatic detection of logical
fallacies via natural language processing, personal dictionaries with auto-
translation of terms (eg: an acceptable word to you might be a slur to someone
else - if the word has a genuine meaning, auto-translate to a non-slur). That
would avoid having people point and scream and claim moral victory because of
a word infraction, instead of hashing things out properly.
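
The personal-dictionary idea could start as a per-reader substitution pass. The dictionary below is invented for illustration; each reader would map terms they find loaded to neutral equivalents of their own choosing:

```python
import re

# Invented per-reader dictionary: loaded term -> the reader's neutral term.
my_dictionary = {"regime": "government", "propaganda": "messaging"}

def translate(text, dictionary):
    """Rewrite a comment through a reader's personal dictionary, keeping
    capitalization, so a word that lands badly for them is shown as
    their own neutral term."""
    def swap(match):
        word = match.group(0)
        replacement = dictionary.get(word.lower(), word)
        return replacement.capitalize() if word[0].isupper() else replacement

    pattern = r"\b(" + "|".join(map(re.escape, dictionary)) + r")\b"
    return re.sub(pattern, swap, text, flags=re.IGNORECASE)

print(translate("The Regime spread propaganda.", my_dictionary))
# -> The Government spread messaging.
```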

~~~
41a4e1
In your opinion, what is it that distinguishes your grandma from people in
online discussions? I've been thinking about it a lot lately - that the
facelessness of it makes it more difficult to employ empathy? That there are
too many people online and connections with them are too ephemeral to form
emotions necessary for kind discussions? That the anonymity makes some people
lose inhibitions? What else?

~~~
raffraffraff
I suppose you have to get along with grandma. You have a shared family group.
Imagine talking to your mom and saying "I've cancelled grandma because she
thinks that Napoleon was no better than Genghis Khan, so no more family meet
ups. If you don't cut ties with her, you're blocked and cancelled too."

There's also the fact that you've known grandma your whole life, and you think
that she's 90% adorable, 10% a product of her generation. So you're willing to
change the subject, or maybe just roll your eyes, when it comes to Napoleon.

------
crabl
Existing mediums for note-taking (Evernote, Notion, Roam Research) are not
sufficient for doing knowledge work over long periods of time. These
incumbents serve primarily as “stores of knowledge” (things we save because
they seem interesting in the moment but never read through) or as
“scratchpads” that we use once and never get rid of, which end up cluttering
our information space like a junk drawer full of shopping lists and
knick-knacks.

Serious thought involves more than just collections and associations: mastery
requires repetition, creativity requires serendipitous discovery, and
productive output requires flow states. It’s also a matter of acknowledging
the fact that “units of knowledge” do not exist on their own: all knowledge is
embedded in context (or “deeply intertwingled”, in the words of Ted Nelson),
and without context, metaphor, and nuance, we cannot form meaningful
connections. By baking these attributes into the medium itself, it’s possible
to build an information space that’s simple to explore, can surface
information when you need it, can augment the mind’s natural ability to form
connections, and can get out of the way the rest of the time.

~~~
kristopolous
I worked with someone in 2013 who I'm still convinced solved this problem. But
he was an absolute terror to work with. A petty authoritarian, he would look
over every website I'd go to and kept screen-monitoring and key-logging
software on my machine.

I kid you not. I left after 6 weeks. I snuck out the work with me (I wasn't
"allowed" to bring it home.) If you're interested I'd be more than happy to
share the code.

I think it was really revolutionary.

Essentially it used grammatical structures to arrange text in a navigable 2d
space which he needed because he was a highly visual and spatial learner.

But what it allowed for is a nonlinear and a nonsequential arrangement of
ideas using Wikipedia text, not just some simple mindmap stuff.

I used Wikipedia as an example in a prototype engine I made.

Articles flowed into each other through a continuous navigable space. You
could interact and engage to go cognitively deeper and expand a new path, as
opposed to stepping through a series of documents. Wikipedia became one
continuous thing that you could endlessly navigate through in 2d space.

The content wasn't large blocks of text but broken up using a separate visual
language so that there'd only be a few words then a relation to another group
and so on. This kept the concepts spatially relative and made the distinction
of pages disappear.

It did it all automatically. Really amazing stuff. I also worked with Ted
Nelson; this guy's methods were better. No question.

He developed the techniques over about 20 years manually and had transferred
textbooks to rolls of butcher paper that he kept in cabinets. He totally
didn't understand the value of his process as something as transformative as
Vannevar Bush's As We May Think.

Instead he wanted to make it a proprietary format with proprietary content
under a private publishing company for childhood education. He wanted a kludgy
editor to make new content with and then a kludgy viewer for single topic
things. He wanted to dictate the interface, keystrokes...

Because once again he's an authoritarian pedant. Bah, he didn't see what he
made.

Ideas need to be controlled at the right level of abstraction and liberated at
the others. That's what Linus knows that RMS doesn't. That's what TBL knows
that Nelson doesn't. That's what Jobs knew but Apple doesn't.

I wanted to run with the idea but yeah, 7 years ago and I've done nothing. I
got everything still.

I should stop everything and finish it. It's really something radically
different. I think it will change at least the way I personally learn things.

The campaign to convince others, yeah well, no guarantees there.

~~~
john4534243
Who's TBL ?

~~~
kristopolous
Try Sir TBL

------
dr_dshiv
Let's solve Harmony!

The first scientific experiment was conducted by 5th century BC Pythagoreans.
They wanted to show that the basis for musical consonance was math. From that,
they inferred that harmony in math accounted for the harmony of the cosmos.
This integration of math+physics was very forward thinking.

But, if we fast forward to the present, we still don't have a complete
scientific explanation for the basis of consonance and dissonance. Really! To
make my own contribution, I've been running psychophysical experiments to
investigate why consonant chords that are mathematically _slightly_ dissonant
actually sound much better than chords with perfect mathematical consonance.
I've been gathering data with sounds but also with haptic vibrations and with
visual flicker frequencies. This multisensory approach is fun because it
produces visible rhythmic entrainment in the brain, as seen with EEG. My goal
is to contribute to a general theory of neural resonance and harmony in human
experience.
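
One crude way to put a number on that puzzle: sum Plomp & Levelt-style roughness over pairs of partials (constants borrowed from Sethares' parameterization; treat the whole thing as illustrative, not fitted). The model actually scores the slightly detuned fifth as *rougher* than the just fifth, yet listeners often prefer it, which is exactly the gap these experiments probe:

```python
import math

def roughness(f1, f2, n_partials=6):
    """Crude sensory-dissonance score: sum a Plomp & Levelt-style
    roughness term over every pair of partials of two complex tones.
    Constants follow Sethares' parameterization; purely illustrative."""
    total = 0.0
    for i in range(1, n_partials + 1):
        for j in range(1, n_partials + 1):
            lo, hi = sorted((f1 * i, f2 * j))
            s = 0.24 / (0.021 * lo + 19.0)        # critical-band width scaling
            x = s * (hi - lo)
            total += math.exp(-3.5 * x) - math.exp(-5.75 * x)
    return total

just = roughness(220.0, 330.0)     # exact 3:2 fifth: many partials coincide
detuned = roughness(220.0, 331.0)  # ~5 cents sharp
print(just < detuned)              # True: the model calls the detuned fifth rougher
```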

Why does this matter? Happiness is great, but I'd argue that what we really
want is personal and global harmony. Note that harmony isn't sameness, it is
unity in variety -- the resolution of conflict and dissonance into an
integrated wholeness. We want inner harmony with our selves, harmony in our
relationships with others, harmony in society, harmony with technology and
harmony with nature. Happiness is individualistic but harmony involves the
pleasure of virtue. I hypothesize that harmony can help set a better objective
function for the future of humanity.

Harmony was also the objective function for the first deep learning neural
network, Paul Smolensky's Harmonium.

Finally, harmony is also a central theme in classical philosophy. The concept
had a massive influence in the Italian Renaissance and in the English
Scientific Revolution.

I recently put together a reader for understanding Plato's views on Harmony.
Comments are welcome:

[https://docs.google.com/document/d/1lqXpXgWI5YMBCz1O0gCmrEwz...](https://docs.google.com/document/d/1lqXpXgWI5YMBCz1O0gCmrEwzfw65MD7-WZG1FL0fPPU/edit?usp=drivesdk)

~~~
ShamelessC
I got a little lost when you tried to compare musical harmony to societal
harmony.

With regards to musical harmony, is it possible that it's more or less random?
I know multiple cultures have different definitions of musical harmony. I
suspect the evolution of hearing also contains random elements. Similar to
language, it's not so much about an inherent universality, just a universality
we can all learn and agree on.

Thoughts?

~~~
sova
The human ear has an intimate relationship with the octave 2:1 and its ratios,
so it's very hard to believe the convergence of appreciation globally to be
random. More dramatically, visualizing the harmonious ratios on objects such
as Chladni plates (a field called Cymatics) reveals that there is something
deeper to consonance and harmony than meets the number line.

~~~
marzell
"visualizing the harmonious ratios on objects such as Chladni plates (a field
called Cymatics) reveals that there is something deeper to consonance and
harmony than meets the number line"

Ok I'll bite. What does it reveal? There's nothing inherently meaningful here.
We know that 'dissonant' sounds (those that create interference patterns)
create wavelets that are smaller and with less contrast than the more
'coherent' patterns from ratios that are closer to whole numbers.

But in what way is this meaningful or useful?

~~~
megameter
It means we find consonance pleasurable and see a distinct "signal" in it. At
least, that's the information-theoretical way of looking at it.

When dealing with cosmology one often seeks to make a big deal out of a simple
concept like a duality, a cycle, or a ratio. These are concepts recurring
through the world, and looking for them in more places sometimes reveals
knowledge.

------
jezze
I am trying to reinvent the web.

Replacing HTML/JS/CSS with a language called ALFI. It is stupidly simple in
its design but still very powerful. Similarly to HTML you use it to create
widgets, place them, and define their behavior. It is human-readable like
HTML but line-based instead of markup-based. Instead of nesting, it uses
references. This allows it to be streamed.

A big difference is that the language itself doesn't allow styling (like CSS),
the downside being you get less flexibility but the upside being it will
render correctly on any display with any resolution.

For this I have also written a new type of web browser called NAVI which takes
ALFI code and produces (somewhat) beautiful widgets and renders them using
OpenGL.

Source for both ALFI and NAVI:
[https://github.com/jezze/alfi](https://github.com/jezze/alfi)

My own ALFI website: [http://www.blunder.se/](http://www.blunder.se/)

You need NAVI to actually browse blunder.se properly; otherwise you will just
see ALFI code. Also, this is still very early, so not all features are done yet.

~~~
LordDragonfang
>A big difference is that the language itself doesn't allow styling (like
CSS), the downside being you get less flexibility but the upside being it will
render correctly on any display with any resolution.

HTML without styling will render correctly on any display with any resolution.
The facts of the history of the web tell us that people want custom styling,
though, and businesses want it even more, because marketing says so. Your
widgets need styling for each device they're rendered on, in which case you're
back to the exact original problem as HTML and CSS. All you've done is move
the problem to someplace else.

Frankly, I don't see why this isn't a markdown extension, since that seems
much better suited to solving your base problem and is WAY more readable than
the mess you have currently (which only seems readable to someone versed in
high-level programming, either functional or OO)

~~~
tobr
> The facts of the history of the web tell us that people want custom styling,
> though, and businesses want it even more, because marketing says so.

Wait, what? Are we on different webs? The facts of the history of the web tell
us that some of the most popular services for publishing are Facebook,
Twitter, LinkedIn, Medium, etc - places that allow for very limited custom
styling.

I think you misunderstand what OP is trying to do, and are criticizing them
for not instead making a thing you already know.

~~~
filleduchaos
I think it's very, very obvious that "custom styling" here is referring
_creator_ styling, not _user_ styling, so I'm not sure that you're in a
position to be criticising the person you're responding to for
"misunderstanding".

~~~
tobr
If I read you right, you’re saying that I’m the one misunderstanding OP? It’s
honestly not clear to me what the difference between “user” and “creator”
would be in the context of this discussion. Could you elaborate?

Since a “creator” is also a user of the web, I guess you mean “user” as in
someone who only consumes content? I’m confused by that since nothing in the
discussion seems to be about user stylesheets.

~~~
filleduchaos
I...am a bit confused as to whether you're actually reading the same
conversation.

The project creator explicitly said that their creation cannot be styled, _at
all_. It renders the exact same "standard" way on all devices. The retort was
that the vast majority of people (with a clear callout to companies that would
obviously like their own branding) do not want a web where every site looks
the exact same, which is why CSS exists in the first place. You seem to have
read/decided to turn this into a discussion about end-user customisation of
sites (and, frankly, a thinly veiled rant about Facebook and Medium), when
that first of all has nothing directly to do with what was being discussed and
second of all would _also_ be out of scope for this project because it had
styling itself out of scope.

~~~
tobr
I’m very sorry, but I don’t understand where you’re coming from.

The original proposal seems to be this: You publish a document. The platform
takes care of presenting it.

One criticism was: No, people want to control the styling of what they
publish.

My counter argument is: There are many successful platforms that don’t allow
people to style the content they publish, and people seem to be fine with
that.

Again, assuming you’re talking about readers when you say “end-user”, I never
even mentioned them.

~~~
filleduchaos
> The original proposal seems to be this: You publish a document. The platform
> takes care of presenting it.

And with a language that explicitly does not allow styling, how exactly is
"the platform" that takes care of presenting it going to render anything but a
single, default style for all content without...reinventing styling?

> One criticism was: No, people want to control the styling of what they
> publish.

No, one criticism was rather obviously that people don't want to go on the web
and see the exact same thing everywhere they navigate to, which is what you
get when styling is not possible. However, you seem to be looking at the
entire conversation through some strange lens.

> Again, assuming you’re talking about readers when you say “end-user”, I
> never even mentioned them.

The _creators_ of a web service/platform wanting to be able to brand their
creation and the _users_ of that service simply going with their chosen
brand's aesthetics when publishing content are two concepts that can
simultaneously exist - in fact, _can even be linked_.

I am not sure how it has to be explained that people being okay with
publishing content on Facebook, LinkedIn or Medium without much custom styling
is the furthest thing from an indicator that people want Facebook, LinkedIn,
Medium and every other website to look exactly the same.

~~~
tobr
I’m starting to feel silly for continuing this thread. I will just conclude
with my best understanding of how we are talking past each other.

I think I understand that you are imagining a middleman to be “the platform”
even in the context of NAVI/ALFI. I understood NAVI itself to be this
platform; much like the Facebook app allows you to publish and browse Facebook
content with very little variation in the styling of different content, so
NAVI might allow you to browse and perhaps create ALFI content with little
variation in styling. You are comparing all the content within Facebook and
others to the content on the rest of the web, while I’m talking about how
content within a platform doesn’t need to be visually distinct for the
platform to be appealing to publishers and readers. You’re thinking of the web
as the “platform”, Facebook etc as the “creators” on the platform, and you are
grouping people who publish _and_ read on Facebook as “end-users”. I’m
thinking of Facebook as the platform, people who publish things on Facebook as
creators, and people who read the things published as the end-users.

Sorry in advance if I’ve misrepresented what you’re saying, but this is the
best I can do in explaining why we’re unable to understand one another.

------
muzani
I'm trying to find a way to generate stories using tropes as the building
blocks.

There's a sample at [https://random-character-generator.com/](https://random-character-generator.com/)

There's an unreleased version, which focuses more on how to portray
characters, rather than just what they are. For example, instead of saying
"energetic", they'll be pacing about a bit.

I might just pivot it into a story/plot tracker for writers, and use it to
fill in the blanks rather than generating full characters from scratch,
letting the community add in their own templates and tropes. An author can decide
that they have a character who is stoic, cynical, and sarcastic, and the tool
will generate a background story, how to portray the character, what conflicts
they get into with other characters.
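
A minimal sketch of the fill-in-the-blanks direction. The portrayal table is invented for illustration; in practice the community would contribute these templates and tropes:

```python
import random

# Invented trope table: trait -> ways to *show* it rather than tell it.
PORTRAYALS = {
    "stoic": ["answers bad news with a single nod",
              "never raises their voice, even mid-crisis"],
    "cynical": ["assumes every favor has a price",
                "smirks at the town's cheerful motto"],
    "sarcastic": ["pays compliments that cut on the way down"],
}

def portray(traits, rng=random):
    """Fill in the blanks: one concrete behavior per trait the author
    picked, instead of the bare adjective."""
    return {t: rng.choice(PORTRAYALS[t]) for t in traits if t in PORTRAYALS}

random.seed(7)
print(portray(["stoic", "cynical", "sarcastic"]))
```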

~~~
pvillano
you'd be interested in dwarf fortress

~~~
muzani
I donate to DF, lol. While there are plenty of epic moments in there, I think
it takes a while to build up, and it comes in between a lot of mundane moments
as well. And I think a lot of stories are built with a romance element in
mind, and there haven't been many procedurally generated games that do that well.

------
alexyz12
I am building a database for human movement. Right now each exercise or pose
or movement is indexed by its name - Downward dog, squat, handstand, etc. But
this gets confusing when multiple names apply to the same movement. The true
identifier is the motion of your limbs in space. I want to encode that
motion/position so that if you try to upload L-sit handstand and also L-sit,
it tells you that you are trying to upload a duplicate (except for the arms).
Furthermore, hopefully each movement can be uniquely encoded into an ID that
could be used as a web URL. If you did this you could also compute the
similarity between two movements to indicate that one is a progression step to
train the other.
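
A sketch of the encoding idea, with made-up joints and angles (a real encoding would come from OpenPose keypoints over the whole body, and over time for movements): represent a pose as a fixed-order vector of joint angles, compare with cosine similarity, and quantize-then-hash for a stable URL-safe ID:

```python
import hashlib
import math

# Invented joint set and angles, for illustration only.
JOINTS = ["shoulder", "elbow", "hip", "knee"]

l_sit           = {"shoulder": 90,  "elbow": 180, "hip": 90, "knee": 180}
l_sit_handstand = {"shoulder": 180, "elbow": 180, "hip": 90, "knee": 180}

def encode(pose):
    """A pose as a fixed-order vector of joint angles (degrees)."""
    return [pose[j] for j in JOINTS]

def similarity(a, b):
    """Cosine similarity: ~1.0 flags a near-duplicate; lower values
    suggest one movement is a progression toward the other."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def movement_id(pose, bucket=15):
    """Quantize angles into 15-degree buckets and hash, so small
    measurement differences still map to the same URL-safe ID."""
    key = ",".join(str(round(pose[j] / bucket)) for j in JOINTS)
    return hashlib.sha1(key.encode()).hexdigest()[:12]

print(similarity(encode(l_sit), encode(l_sit_handstand)))  # high: differs only at the arms
print(movement_id(l_sit) == movement_id(l_sit_handstand))  # False: distinct IDs
```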

I don't have a computer science background, so encoding and compression are
new to me, but I'm a good hacker and I can quickly get things like OpenPose
working. I'm trying to complete this in the next 3 months. Wish me luck.

~~~
elric
In case you weren't aware, exrx.net has a huge database of exercises and which
muscles/joints they affect.

~~~
alexyz12
Yes, I am considering using exrx for the base training set actually. They have
an API where you can get movements labeled (which muscles are contracting)
and license video footage of many exercises. Maybe I can somehow integrate
what I'm building with their site.

------
mnemonicsloth
Writing a computer program is often the best way to convince yourself that you
really understand a problem. And an already-written computer program is often
a good way to document the behavior of a complex system because you can play
with it to see what happens in all the edge cases.
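
For instance, a dozen lines of Python pin down how a pendulum's period drifts away from the textbook small-angle formula, the kind of edge case you can only really explore when the system is a program (the parameters g = 9.81, L = 1 are just the obvious defaults):

```python
import math

def period(theta0, g=9.81, L=1.0, dt=1e-4):
    """Release a pendulum from rest at angle theta0 (radians) and time a
    quarter swing to the bottom; the full period is 4x that by symmetry.
    Semi-implicit Euler keeps this toy integration stable."""
    theta, omega, t = theta0, 0.0, 0.0
    while theta > 0:
        omega -= (g / L) * math.sin(theta) * dt
        theta += omega * dt
        t += dt
    return 4 * t

small = period(0.1)  # close to the textbook 2*pi*sqrt(L/g) ~ 2.006 s
large = period(2.0)  # markedly longer: the small-angle formula has broken down
print(small, large)
```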

So I'm writing curricula that use computer programs as the primary teaching
tool. One is for computer science, where the idea is that anyone who can read
some python can pick up all the important ideas from a formal CS education
without sitting through a year or more of preliminaries. Over time I'm
planning to add smaller sections on more advanced topics.

The other curriculum is theoretical physics. There's already a good book that
does classical mechanics [1] in scheme. I've hired some postdocs to learn
scheme and code lessons in general relativity, statistical mechanics and so
on. I do the lessons, solve the problems, and then we talk about what worked
and what didn't. I work on this about ten hours a week. After a couple of
years I should have knowledge roughly equivalent to an ABD physics grad
student, plus teaching material that can take anyone else to the same level
from modest beginnings.

I'm looking for collaborators on this project so don't be a stranger.
Twitter/email is in my profile.

[1] [https://groups.csail.mit.edu/mac/users/gjs/6946/sicm-html/bo...](https://groups.csail.mit.edu/mac/users/gjs/6946/sicm-html/book.html)

~~~
BoiledCabbage
Brilliant!

I was just ranting on HN about the need for this two weeks ago. It seems like
the inevitable end game for teaching of hard sciences (and possibly other
fields). One inspiration (besides the book you referenced) is the first few
posts of "An Intuitive Explanation of Quantum Mechanics".

Here is a great example[1] where he describes QM as essentially a
computational process and does everything just shy of writing down the code.
That specific page is from the series here [2].

And from reading this I realized how much easier many concepts would be to
grasp if you could just read the objective source code of the description and
not have to try to interpret messy English or imprecise notations.

1\. [https://www.lesswrong.com/posts/5vZD32EynD9n94dhr/configurat...](https://www.lesswrong.com/posts/5vZD32EynD9n94dhr/configurations-and-amplitude)

2\. [https://www.lesswrong.com/posts/apbcLXz5zB7PXfgg2/an-intuiti...](https://www.lesswrong.com/posts/apbcLXz5zB7PXfgg2/an-intuitive-explanation-of-quantum-mechanics)

------
jeremylevy
Automating cloud architecture creation.

I'm building a "catalog" of architectures that you could use to create a
complete cloud architecture on your AWS, GCP or Azure account in less than one
minute.

So, for example, you could create a docker-based architecture with CI/CD,
auto-scaling, zero downtime deployment, SSL, load-balancing, high availability
and MongoDB in less than one minute in your own AWS account.

It's like Terraform with the user-friendliness of Heroku.

It's very hard because every provider has different APIs and concepts, so you
have to start from scratch for each.

I love working on it because cloud computing can have so much impact in
organizations like biotech startups or NGOs.

~~~
tlrobinson
> It's like Terraform with the user-friendliness of Heroku.

> It's very hard because every providers have different APIs and concepts so
> you have to start from scratch for each.

Why not build it as a layer on top of Terraform?

~~~
jeremylevy
I do use Terraform. Terraform is very good to create the building blocks of
your architecture but not so much for the user-friendliness part.

Let's take an example: You have an architecture with a CI/CD. Great. You add
your CodePipeline and CodeBuild resources to your plan. Perfect.

For the CI part, you want your build to start on every commit on every non-
master branch. Bad news: CodePipeline doesn't have support for multi-branch.
So you will need to find a way to clone the default pipeline for each new
branch.

Repeat this for all the user-friendly features (GitHub check runs, env vars
management, deployment monitoring/rollback...) for the three cloud providers
for all the different architectures and you start to feel the difficulty (I
can testify ;)).
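
To sketch the per-branch cloning workaround: take a pipeline definition shaped like boto3 codepipeline's get_pipeline output and rewrite just the name and source branch (the sample dict below is invented and trimmed to the fields the function touches):

```python
import copy

def clone_for_branch(pipeline: dict, branch: str) -> dict:
    """Clone a pipeline definition (shaped like boto3 codepipeline's
    get_pipeline output) for one feature branch. Only the name and any
    BranchName source settings change; everything else is copied."""
    clone = copy.deepcopy(pipeline)
    clone["name"] = f'{clone["name"]}-{branch}'
    for stage in clone["stages"]:
        for action in stage["actions"]:
            config = action.get("configuration", {})
            if "BranchName" in config:
                config["BranchName"] = branch
    return clone

# Invented sample definition, for illustration:
sample = {
    "name": "app",
    "stages": [{"name": "Source", "actions": [{
        "name": "Checkout",
        "configuration": {"RepositoryName": "app", "BranchName": "master"},
    }]}],
}
print(clone_for_branch(sample, "feature-x")["name"])  # app-feature-x
# With boto3 you would then pass the clone to create_pipeline(pipeline=...).
```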

~~~
tlrobinson
I’m suggesting your tool could generate Terraform code and run the Terraform
command “under the hood” so you don’t have to reimplement everything Terraform
already does.

Users wouldn’t need to know about Terraform (unless they wanted to. It might
be nice to have a way to export the Terraform code)
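
Something like this, perhaps: render HCL from a small spec, then drive the real terraform binary. The spec keys and the single aws_instance resource are a minimal invented illustration, not the actual catalog:

```python
import pathlib
import subprocess
import tempfile

def render(spec: dict) -> str:
    """Render a tiny Terraform config from a spec dict (illustrative
    fields only; a real catalog would emit whole modules)."""
    return f'''resource "aws_instance" "app" {{
  ami           = "{spec["ami"]}"
  instance_type = "{spec["instance_type"]}"
  count         = {spec["count"]}
}}
'''

def apply(spec: dict) -> None:
    """Write the rendered config to a workdir and run the real
    terraform CLI, so users never have to see the HCL."""
    workdir = pathlib.Path(tempfile.mkdtemp())
    (workdir / "main.tf").write_text(render(spec))
    for cmd in (["terraform", "init"], ["terraform", "apply", "-auto-approve"]):
        subprocess.run(cmd, cwd=workdir, check=True)

print(render({"ami": "ami-123456", "instance_type": "t3.micro", "count": 2}))
```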

~~~
jeremylevy
Sorry! My answer was certainly not very clear.

I use Terraform “under the hood” for this project.

What I was trying to explain is that the "hard part" is not the creation of
the architecture components but the user-friendly features.

------
nightcracker
Improving the performance of fountain codes and applying them securely to
peer-to-peer file sharing.

A fountain code is an almost magical algorithm that can split a file of size n
up into a (practically) infinite stream of blocks of size b, such that
collecting _any_ n/b blocks out of the stream can reconstruct the file.
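
A toy version of the idea, assuming nothing about RaptorQ's actual construction: a random linear fountain over GF(2), where each coded block is the XOR of a random subset of source blocks, decoded by Gaussian elimination. Real codes use much cleverer structure to make decoding fast, but this shows why roughly any n/b collected blocks suffice:

```python
import random

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def fountain(blocks, seed=0):
    """Endless stream of (subset, payload) pairs: each payload is the
    XOR of a random subset of source blocks. A random linear fountain
    over GF(2) -- a toy cousin of LT/RaptorQ codes."""
    n, rng = len(blocks), random.Random(seed)
    while True:
        sub = [i for i in range(n) if rng.random() < 0.5]
        if not sub:
            continue
        payload = blocks[sub[0]]
        for i in sub[1:]:
            payload = xor(payload, blocks[i])
        yield sub, payload

def decode(n, stream):
    """Gaussian elimination over GF(2); a row is (bitmask, payload)."""
    pivots = {}                                   # pivot bit -> (mask, payload)
    for sub, payload in stream:
        mask = sum(1 << i for i in sub)
        while mask:
            low = mask & -mask                    # lowest set bit
            if low not in pivots:
                pivots[low] = (mask, payload)
                break
            m, p = pivots[low]
            mask ^= m
            payload = xor(payload, p)
        if len(pivots) == n:                      # full rank: solvable
            break
    # Back-substitute, highest pivot first, until every row has one bit.
    out = [None] * n
    for low in sorted(pivots, reverse=True):
        mask, payload = pivots[low]
        mask ^= low
        while mask:
            b = mask & -mask
            payload = xor(payload, pivots[b][1])  # already a single-bit row
            mask ^= b
        pivots[low] = (low, payload)
        out[low.bit_length() - 1] = payload
    return out

data = b"any n of these blocks reconstructs me!!!"     # 40 bytes, 10 blocks
blocks = [data[i:i + 4] for i in range(0, len(data), 4)]
decoded = decode(len(blocks), fountain(blocks, seed=42))
print(b"".join(decoded) == data)  # True
```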

Applied to p2p file sharing, it can effectively eliminate rare pieces as well
as the need to communicate which pieces people have. Related topic here is
homomorphic hashing.

Unless I find something better before September, my master's thesis will be on
this topic.

~~~
krspykrm
Good topic. Have you seen this high-performance Reed-Solomon implementation
(1) or Wirehair (2)? Thoughts?

1)
[https://github.com/klauspost/reedsolomon](https://github.com/klauspost/reedsolomon)

2) [https://github.com/catid/wirehair](https://github.com/catid/wirehair)

~~~
nightcracker
No and yes.

For fast Reed-Solomon I'm aware of this: [https://github.com/Bulat-Ziganshin/FastECC](https://github.com/Bulat-Ziganshin/FastECC). It's kind of
amazing how far Reed-Solomon has come, thanks to the fast Fourier transform.
At the very large problem sizes it does slow down more and more though. If
cleverly applied (e.g. layering erasure codes) you can make it go very far
indeed. I don't like how hardware support is essentially necessary to make the
larger field sizes necessary for bigger instances fast.

Wirehair is very interesting, and I hope to study it well and describe it more
formally as part of my Master's thesis. I'm not aware of any academic
analysis of it. I did look into it enough to diagnose that it suffers from the same
O(n sqrt n) issue that RaptorQ does, again for very large instances. The issue
lies in having to do Gaussian Elimination for a submatrix (to solve the
inactivated columns) near the end of decoding, and this submatrix can be on
the order of O(sqrt n).

I'm interested in 'very large instances' because ideally I'd be able to create
an efficient fountain code with block sizes on the order of the size of a UDP
packet, disk page or QR code, which has some very interesting applications.

------
httpsterio
I've been mulling over an idea that is essentially a combination of personal
ID, secure digital authentication and online communications all baked into
one.

There's an EU directive instructing how citizens should be able to identify
online with eIDAS. In my country, you can use eIDAS to authenticate with
basically any governmental agency portal, but you can't get any eIDAS-enabled
auth method as a citizen. The current way of authenticating is via bank
accounts or a paid extra mobile service that requires a non-prepaid mobile
contract.

This is a relatively huge issue. First off, the Finnish government pays the
banks for each auth a user does when they, for example, want to log into
their medical records, etc. It's a few million euros a year just for
verifying users.

There are also obvious issues with whom the banks serve; there have been some
cases of them not taking foreigners or people with bad credit as customers,
making it impossible for those people to authenticate themselves.

The current EU directives also indirectly require that banks provide
customers the possibility to authenticate without needing to have a banking
account (which costs money), but to my knowledge this still isn't possible. I
pay around 20 euros a month just for the luxury of having an account; not
everyone can afford that on top of other bills.

Auth services are not accessible for impaired users.

It's also basically impossible to manage who holds what is essentially power
of attorney, over which matters, for how long, etc. Either you have to give
them your login info (good luck resetting your SSN) or try to use the
services over the phone and somehow convince the other side that you have
permission to manage things for another person.

There's no way of authenticating who is using your accounts online and
actually verifying the users.

Basically, my idea is combining biometrics, PGP and having the government
running the identity management themselves. This would have added benefits of
basically enabling hashed throwaway addresses and info for use online while
providing a free and accessible way of authenticating strongly online.
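
To make the "hashed throwaway" part concrete, here's a toy sketch (every name
and parameter is hypothetical, and a real scheme would need the biometric/PGP
layers plus proper key management): derive a per-service pseudonym with an
HMAC, so services can't correlate a user with each other, while the registry
holding the key can still resolve any pseudonym back to the citizen.

```python
import hashlib
import hmac

def service_pseudonym(registry_key: bytes, citizen_id: str, service: str) -> str:
    """Derive a stable, per-service pseudonymous identifier. The same
    citizen gets the same pseudonym at the same service, but different
    (unlinkable) pseudonyms at different services."""
    msg = f"{citizen_id}|{service}".encode()
    return hmac.new(registry_key, msg, hashlib.sha256).hexdigest()[:16]
```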

~~~
amelius
> throwaway addresses

Unrelated but speaking of throwaway addresses, it would be cool to be able to
create a throwaway postal address (which is then translated by the postal
service), so online shops don't get your personal address information.

~~~
VWWHFSfQ
isn't that a po box

~~~
amelius
No because for example if you order from the same shop multiple times using
the same PO Box, then they can link the information from each order.

~~~
VWWHFSfQ
So you're thinking like a virtual mailing address as a service. You receive
and forward people's mail. Seems interesting. Also kind of high risk for the
service provider. People will use something like this to buy guns and drugs
and other stuff on the black markets. But I guess they do that anyway. You
would have to be prepared to deal with a lot of subpoenas to unmask the real
mailing addresses. Could be a useful service though. Be sure to charge a lot
for it.

~~~
httpsterio
My idea would have to be implemented on a national level. I take issue with
the socio-economic injustices in the current identity and personal management
solutions, as they're not technically accessible nor free while still being
simply a must-have in order to do anything in Finland.

------
patwillson22
I'm working on a fully open source Physical Vapor Deposition system which is
capable of producing thin film solar cells (e.g. CIGS, CdTe) at efficiencies
in the range of 15-20%. The system is designed to produce one 10 watt cell
every 45 minutes. I could go into further depth here, but for obvious reasons
it would take me a while to explain everything.

~~~
Gabriel_Martin
Well if you're waiting for an invitation, it sounds very interesting!

~~~
patwillson22
Alright here it goes.

What we're basically doing is using thermal evaporation to lay down thin films
of metal on top of each other in a high vacuum environment. We're then
patterning the cell with a fiber laser to produce the traces and patterns
needed for the cell size.

Here's an example of a system that does what I've just described, in the
context of creating CdTe cells. Note that it doesn't use a laser to create
the cell divisions but rather a conductive ink and sand blasting.

[https://avs.scitation.org/doi/abs/10.1116/1.4941071?journalC...](https://avs.scitation.org/doi/abs/10.1116/1.4941071?journalCode=jva).

I thoroughly analyzed their design and copied some aspects of it while
avoiding many of the flaws that significantly limited its practicality and
efficiency. Here's a quick summary of the design changes and flaws that I
found.

For one, the design specified in the paper can barely reach high vacuum,
which is a requirement for producing cells of reasonable efficiency.

One major improvement I'm working on is a system that allows automatically
changing the deposition powder inside the chamber, instead of having a
separate chamber for each layer's deposition powder.

The advantage of this approach is that it miniaturizes the chamber
significantly, making it closer to the size of something that could fit in
the back of your car than something that needs a dedicated room or floor.

I've also looked carefully at how I'm going to achieve high and maybe even
ultra-high vacuum. And in that regard I think I've made some significant
strides.

My design achieves high vacuum in three stages: the first is a simple Venturi
pump, and the second is a sorption pump which has been redesigned based on an
old paper I found here(). The last stage uses something called a
non-evaporable getter pump.

Experienced vacuum engineers might initially be baffled by the choice of
pumps I am using, as they are normally considered too slow for the type of
operation the chamber is being put through.

However, the slowness of these pumps can be mitigated by three measures: (1)
building a chamber that can go through bakeout (which removes contaminants
and reduces pump-down time); (2) designing a chamber with metal-to-metal
seals and a low leak rate; and (3) the obvious principle of making the
chamber volume and surface as small as possible while making the pumps large.

I've barely scratched the surface here, but I think this should give you a
rough idea of what I'm doing. I really don't think this stuff is as hard as
it's made out to be.

Here are some resources that have really helped out so far.

- Building Scientific Apparatus: a book that should give you a broad overview of the things you need to know.

- Vacuum Sealing Techniques by Alexander Roth: extremely exhaustive on valves and the general construction of the chamber. The book is really old but everything still stands, and it's honestly better than most of the stuff I've found online.

- Blogs by this guy: [https://www.normandale.edu/departments/stem-and-
education/va...](https://www.normandale.edu/departments/stem-and-
education/vacuum-and-thin-film-technology/learning-in-a-vacuum---what-to-
expect/articles/how-to-use-getters-and-getter-pumps).

A really good introduction to the basic stuff you need to know and the
decisions you need to make.

If anyone has any further questions, I'd be happy to answer them.

~~~
jacob_d
This sounds cool! Building Scientific Apparatus is a truly excellent resource.

Two questions: (1) Could you get away with an inert atmosphere? I'm not
familiar with the pros and cons with respect to PVD. (2) It sounds like your
vacuum setup will have a long cycle time from vent to pumpdown to operation. A
load lock with a (turbomolecular?) pump adds quite a bit of expense. What's
your approach to achieving high throughput?

~~~
patwillson22
You could; I've seen a couple of papers attempt that approach with rather poor
results, something like 8-10%. Though I'd say the easiest approach to
producing thin film cells involves basically using electroplating, which
achieves similar efficiency
([https://onlinelibrary.wiley.com/doi/abs/10.1002/pip.417](https://onlinelibrary.wiley.com/doi/abs/10.1002/pip.417)).

The cycle time is of course highly dependent on the final design of the
chamber. There's no reason that sorption and getter pumps combined with a
Venturi prestage can't perform to a degree that meets the design
requirements.

However, their performance is highly dependent on two things: the ability to
reach bakeout quickly, and the use of metal-to-metal seals instead of
o-rings.

The actually difficult and expensive part of high vacuum engineering is
figuring out how to engineer valves that can both withstand bakeout
temperatures and make tight, leak-free seals.

In this regard I plan to use what essentially amounts to a plate valve with
something called a "powdered seal". This valve meets the requirements of the
design in every aspect, its only downside being that it is slow to change
from open to closed. Though this downside will not reduce the overall
throughput of the system as it is designed.

------
andratwiro
Government – Citizen interaction.

Specifically, I'm working with a local pro-refugee organization in a densely
immigrant-populated region in Spain. There's a complex chain of steps that you
have to go through in order to acquire citizenship. Only people with access to
good lawyers are able to deal with all the bureaucracy of the process, not to
mention other problems (missing obscure expiry dates that reset your process,
language-related problems, local government workers not actually knowing or
willfully ignoring migrants' rights...).

There's a good network of volunteer lawyers working on this issue, but it's
not scalable. I'm working on a platform that would allow migrants to resolve
their own situation, by crowdsourcing the knowledge of lawyers on a
case-by-case basis and offering a simple interface in their language to track
open processes & discover the ones they need to go through and how.

As an abstraction for this, I've been thinking on how we could improve
citizen/government communication. A small use case / example for this could be
refugee camps. My previous experience here is that they are small,
disconnected communities with a top-down type of organisation towards the camp
organisers. It shouldn't be hard to provide real-time tools for connecting
both, potentially leading to things like asking for their needs, managing
their legal situation, or even allowing for voting & self-governing.

~~~
erikig
I’m curious, how much of the process is digital and how much of it requires
physical presence by a specialist? Also how has this been affected by the
COVID crisis?

~~~
andratwiro
The most important processes require physical presence. The ones that are
digitalized aren't a good solution either, as these people might not speak
Spanish very well, or they don't have the required digital literacy to access
& go through a government website (which is a problem for locals as well). The
solution right now is to offer personalized support from volunteer
organizations on an individual basis.

COVID has affected things quite a lot. Most of the processes have stopped as the
government shut down the in-person offices, and now they are slowly
reopening... and the situation was already crowded before the pandemic. On the
positive side, deportation orders have been temporarily paused.

------
web007
How can we document "human society" in 1000 pages or less? I've been casually
researching for a while, and will eventually write a guide on going from zero
to the moon.

Step one is survival, basics of hunting, first aid and farming sort of stuff.
That volume would end around homesteading and self-sufficient living.

Volume two would be establishing society in larger groups than a family unit.
Things like job specialization (N roles for N people instead of 1/N of each
role for each person), establishing trade (currency, weights & measures,
supply chain), government (mostly what not to do, what to protect, how to
adjudicate disagreement), public works ("roads are a good idea") and their
ilk. Also medicine beyond first aid and basic care.

Volume three would be advanced STEM topics, getting from a functioning society
to... more. Not even the sky's the limit. It should include blueprints for
things we take for granted like refrigeration, telecommunication and birth
control. It will include all the basics of physics, chemistry and biology
required for smart people to fill in the gaps and launch a human to the moon
and back.

I want to super-nerd-out about this, and publish it on Tyvek or something
exotic so it'll last through decades of wear and tear (and water-logging and
more), and include a ruler on the spine and its own weight documented for
reference.

~~~
BoiledCabbage
Cool idea - a few questions: How many volumes are there? Would be cool to see
the full list.

The first two volumes are very purpose-driven and so fit well together. The
third feels very grab-bag and likely should be split up more.

You'll also likely want to cover schools of thought, the scientific method
and whatnot.

An alternate volume 3a might be on mass production of food (enabling greater
human capital). Large scale agriculture and the chemistry and botany that
enables it. Some amount of animal farming is likely required.

3b could cover the communication and transportation networks necessary to
distribute that food. Both the engineering, infrastructure and tech behind it.

At some point you can shift to covering manufacturing and mass production -
which enables all the small products needed for so many fields: the handles,
washers, lenses, ...

Then finally shift to digital and everything that rolls out from there.

~~~
web007
Three volumes planned (Survive, Thrive, Expand), but given the overall scope
that seems like it's gonna be difficult. The third is very grab-bag because
it's "everything else" - there's no end to what could go in there; it's just
going to stop at some point if it's going to be published. Prioritizing
human-care and sustainability is definitely a good target, and in keeping
with the theme. Probably scientific method, schools of thought, etc. would be
chapters or single pages - there's a lot to deal with overall.

------
solresol
Dentistry: can we replace dental x-rays with infrared? Can we build optical
panographs just using the reflections from a dental mirror? How can we monitor
patients' oral health over the long term more often than just an annual visit
to the dentist, and does that improve oral health outcomes?

Machine learning: what happens when we replace Euclidean metrics with p-adic
ones? Distance is fundamental to so many algorithms (least squares regression;
nearest neighbours; anything involving gradient descent). How do those
algorithms behave over completely foreign metric spaces?
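
For anyone unfamiliar with the second idea, here is what the p-adic metric on
integers looks like (just the distance itself, none of the learning
machinery): two numbers are close when their difference is divisible by a
high power of p, and the resulting metric is an ultrametric, so neighborhoods
behave nothing like Euclidean balls.

```python
def padic_valuation(n: int, p: int) -> int:
    """Largest k such that p**k divides n (n nonzero)."""
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

def padic_dist(a: int, b: int, p: int = 2) -> float:
    """p-adic distance |a - b|_p = p**(-v_p(a - b))."""
    if a == b:
        return 0.0
    return p ** -padic_valuation(a - b, p)
```

Under the 2-adic metric, 8 is closer to 24 (distance 2^-4 = 1/16, since
24 - 8 = 16) than to its Euclidean neighbour 9 (distance 1), which already
hints at how strange nearest-neighbour search becomes.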

~~~
redis_mlc
> Dentistry: can we replace dental x-rays with infrared?

There's a US company that has an ultrasonic "x-ray". Signal processing is used
to find cracks and cavities. It's not well-known yet even to Bay Area
specialists.

~~~
wittyreference
Do you have a link where I could read more? Because it _sounds_ like you're
describing a thing that's been around for like a decade, that I've seen in use
in (my region). But I may just be comparing apples and oranges under a too-
vague description.

------
akrymski
A new Web.

Today's web is a collection of applications that largely provide a frontend
for browsing data. The applications and data they contain are silos: there is
no easy way to separate the data from functions and compute across datasets.
Every application must (re)invent its own UI for querying and displaying data.

But if the web is actually a collection of datasets, why don't we have a web
browser for consuming and interacting with arbitrary structured datasets?

We can model most popular sites (HN, Instagram, Twitter, Amazon etc) as a
collection of hyperlinked JSON records. Let users adjust how these records are
displayed. Provide a universal way to query and navigate any dataset and
invoke associated functions (eg the upvote function for an HN post).
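
One way to sketch that model (every record shape, field name and URI scheme
here is hypothetical, purely to illustrate separating data, links and actions
from any particular UI):

```python
# A hypothetical generic record: data, hyperlinks to other records, and
# invokable actions, with no site-specific UI attached.
hn_post = {
    "type": "hn/post",
    "data": {"title": "Ask HN: What weird or hard problems...", "points": 42},
    "links": {
        "comments": "hn://item/23740000/comments",
        "author": "hn://user/rxsel",
    },
    "actions": {"upvote": "hn://item/23740000/upvote"},
}

def render(record: dict) -> str:
    """A universal, site-agnostic renderer: any record with this shape
    can be displayed (and its links/actions enumerated) the same way."""
    lines = [f"[{record['type']}]"]
    lines += [f"  {k}: {v}" for k, v in record["data"].items()]
    lines += [f"  -> {k}: {v}" for k, v in record["links"].items()]
    lines += [f"  ({k})" for k in record["actions"]]
    return "\n".join(lines)
```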

Full separation of data and functions instead of application silos is
necessary to achieve general AI compute in the future.

Example: can you email Mark a summary of the top 5 most popular HN articles 3
days before our meeting?

~~~
dougskinner
Sounds like you want websites to embrace the semantic web:
[https://en.wikipedia.org/wiki/Semantic_Web](https://en.wikipedia.org/wiki/Semantic_Web)

~~~
akrymski
Not quite. The semantic web focuses on normalizing all data into a single
ontology, which is great for computers but does nothing to standardize the
user interface side of accessing data. Needless to say, it also hasn't gotten
far.

~~~
dougskinner
That's a good point, and I completely agree that it hasn't gotten far.

I do think that if it were to be embraced as a more common standard on
websites, building a more standard UI on top should be _relatively_ simple,
as each website's underlying data would _hopefully_ be structured the same.

------
gwbrooks
PROBLEM(S): I (well, the nonprofit I lead) am trying to solve for the general
problem of municipal public policy but, as you might expect, it's a series of
discrete, linked problems. Some of these problems have thousands of people
focused on a solution; in other areas, it's virtually greenfield.

SOLUTION: The space doesn't lack for good research and policy recommendations,
but it has historically lacked non-screedy, nonpartisan voices (on the right
and the left) that can be trusted when policymakers look for solutions. We're
attempting to fill that space.

WHAT WE WANT: Ultimately? We want every major American city to work better for
the people who live and work there. It'll look different from community to
community and our job isn't about applying a cookie-cutter approach. Instead,
we want to get a wider range of ready-to-implement tools in front of the
decision makers, and educate engaged citizens that solutions exist.

~~~
ziftface
I'd love to see an example. What kind of public policy? What problems?

~~~
gwbrooks
A good example would be Getting Back To Work, our economic-recovery playbook
for cities. [https://gettingbacktowork.org](https://gettingbacktowork.org)

------
alixanderwang
Document software design 10x better.

The foundation is flowcharts, with support for individual layers
distinguishing levels of abstraction, and scenarios for exploring use-cases.
From there:

- Live data. We look at metrics on dashboards, but that doesn't put into
perspective how they relate to each other. Imagine seeing on your flowchart
of servers that one worker has an anomalous CPU reading, and being able to
click into that to see the individual readings of the services running on it.
(rudimentary version:
[https://app.terrastruct.com/diagrams/1404897320](https://app.terrastruct.com/diagrams/1404897320))

- Automatic generation and sync of diagrams. Having access to sources like
your AWS account and version control to create and keep in sync diagrams of
your infra, db schemas, UML classes, etc.

- Collaborative editing, seamless integrations with written documentation,
linking directly to code where appropriate, version control, etc.

So much of software can be better understood visually. Still early on, if
you're interested in learning more,
[https://terrastruct.com](https://terrastruct.com). And would love to chat
(email in profile) with anyone with ideas.

~~~
wbeckler
Have you ever used LabVIEW? It started out great but hasn't evolved well and
can't handle complex stuff.

It's still used in lab environments.

~~~
alixanderwang
Every time I share this, someone shares some tool I haven't heard of, and I've
researched and tried a lot. It lines up with my experience working at software
companies, where every 3 weeks or so there's a thread asking for diagramming
tool recommendations, and every time there are dozens of mixed responses of
"I've used X, but caveats A, B, C".

Thanks for sharing! I'll check it out.

------
vekker
Dreams.

I've been journaling my dreams for years and I'm working on an app that makes
it easier to (visually) map them out & find patterns:
[https://oneironotes.com/](https://oneironotes.com/)

I like the idea of accessing other (inner) dimensions during sleep, like an
explorer (an "oneironaut"). The problems to overcome are related to capturing
and recollecting experiences that only take place in the mind. You asked about
the weird stuff...

~~~
baxtr
Is there a proven way to remember your dreams? I’ve read that we dream every
night but I usually can’t remember anything.

~~~
wes-k
1. Set your intention as you are falling asleep by repeating “tonight I’m
going to remember my dreams”.

2. When you wake up, don’t move! Stay in bed for 5-10 minutes and try to
remember. Once you remember one thing, try to ask yourself what happened
before or after.

3. Dream journal.

I practiced the above back in high school. Went from occasionally remembering
a dream to remembering about 4-5 a night. Some small snippets and others
longer. It was really incredible and I plan to try it again.

Dedicate 30-40 days and see what happens!

------
eivarv
Human-computer context switching.

Which is why I started making Cleave: an application that lets users persist
OS state as a "context" - saving and loading open applications, their
windows, tabs, open files/documents and so on.

Started because of frequent multitasking-heavy work with limited resources.

Made it because I wanted to switch between studying, working, reading,
looking for an apartment, etc. without manually managing all that state or
consuming all my resources.

Open Beta (macOS) as soon as I finish license verification and delta updates,
but I keep getting sidetracked...

[https://cleave.app](https://cleave.app)

~~~
TimJRobinson
I love this idea! I've never found a good solution to this problem, used to
try different logins etc but the save and restore of state doesn't work very
well.

Could combine this with site/app blocking tools to ensure when you load the
studying state you don't get distracted too.

~~~
o-__-o
The KDE window manager allows you to set window-specific hints based on title
or X resource. Once upon a time I wrote a tool that managed the current state
in case of a logoff (a bad graphics driver caused Xorg to restart).

Blocking sites was done with squid time-based ACLs. Now I’m wondering how
productive I could be combining the two.

------
erezsh
I'm trying to replace SQL by building a language that compiles to SQL. It has
first-class functions, nicer syntax, better type-system, introspection, and
other things you would expect from a modern language. But in the end, you
still get SQL's performance, and the ability to use it with dozens of database
engines.
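
Not the actual language described above, but the general shape of "compile a
friendlier frontend down to SQL" can be sketched in a few lines - a toy
immutable query builder where to_sql() is the only place SQL text gets
produced:

```python
class Query:
    """Toy compile-to-SQL frontend: methods compose immutably, and
    to_sql() is the single point where SQL text is emitted."""
    def __init__(self, table, wheres=(), cols=("*",)):
        self.table, self.wheres, self.cols = table, tuple(wheres), tuple(cols)

    def filter(self, cond):
        # each filter returns a new Query; conditions AND together
        return Query(self.table, self.wheres + (cond,), self.cols)

    def select(self, *cols):
        return Query(self.table, self.wheres, cols)

    def to_sql(self):
        sql = f"SELECT {', '.join(self.cols)} FROM {self.table}"
        if self.wheres:
            sql += " WHERE " + " AND ".join(self.wheres)
        return sql

q = Query("users").filter("age > 21").filter("country = 'FI'").select("name")
```

The genuinely hard parts are exactly what this toy hides: typing queries
against the schema, compiling first-class functions to flat SQL, and papering
over dialect differences across engines.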

~~~
closed
Have you seen the R library dplyr? It is a weirdly powerful data analysis
tool, that can also produce SQL queries!

~~~
erezsh
I heard about it but didn't know it can produce SQL. Thanks for the tip!

------
jpt4
Can a programming language based on Willard's Self-Verifying Theories [0]
self-interpret in the limited sense established by Brown and Palsberg [1]?

Which is to say: what is the relation between a sub-Peano logical system that
can prove itself consistent and is complete, thus bypassing the second
incompleteness theorem, and self-referential decision problems posed in the
programming language that corresponds to that logical system?

My suspicion is that such a language could provide very strong guarantees on
the auditability of self-modifying code.

[0] [https://en.m.wikipedia.org/wiki/Self-
verifying_theories](https://en.m.wikipedia.org/wiki/Self-verifying_theories)

[1] [https://www.semanticscholar.org/paper/Breaking-through-
the-n...](https://www.semanticscholar.org/paper/Breaking-through-the-
normalization-barrier%3A-a-for-Brown-
Palsberg/ef79fc5e43d2df0d43e635de5f5a1a2913f4f645)

------
lambdatronics
Trying to find a solution to the maximum-entropy probability distribution
Q(x,y,z) constrained to reproduce the marginal distributions P(x,y), P(y,z),
and P(z,x) from some other distribution P(x,y,z).

It is known that Q takes the form Q = a(x,y) * b(y,z) * c(z,x) for some
functions a,b,c to be determined by solving the system of equations:

P(x,y) = sum_z Q(x,y,z)

P(y,z) = sum_x Q(x,y,z)

P(z,x) = sum_y Q(x,y,z)

It's not clear there exists a general closed-form solution. Iterative
algorithms are known. This type of problem comes up in a number of interesting
contexts. For instance, testing for non-trivial multi-variable interactions in
dynamical systems such as neural networks or spin networks, performing joins
on probabilistic databases, constructing reduced models of probability
distributions, and in some cooperative game theory problems.
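
The best-known of those iterative algorithms is iterative proportional fitting
(IPF, the Deming-Stephan / Darroch-Ratcliff family): cycle through the three
marginal constraints, rescaling Q to match each in turn. Each update
multiplies Q by a function of only two coordinates, so the product form
a(x,y) * b(y,z) * c(z,x) is preserved automatically. A pure-Python sketch:

```python
def ipf(Pxy, Pyz, Pzx, iters=1000):
    """Iterative proportional fitting for the 3-marginal problem.
    Pxy[x][y], Pyz[y][z], Pzx[z][x] are the target pairwise marginals
    (assumed consistent, i.e. coming from some true P(x,y,z))."""
    nx, ny = len(Pxy), len(Pxy[0])
    nz = len(Pyz[0])
    # start from the uniform (max-entropy) distribution
    Q = [[[1.0 / (nx * ny * nz)] * nz for _ in range(ny)] for _ in range(nx)]
    for _ in range(iters):
        for x in range(nx):            # rescale to match P(x,y)
            for y in range(ny):
                s = sum(Q[x][y])
                if s > 0:
                    r = Pxy[x][y] / s
                    Q[x][y] = [q * r for q in Q[x][y]]
        for y in range(ny):            # rescale to match P(y,z)
            for z in range(nz):
                s = sum(Q[x][y][z] for x in range(nx))
                if s > 0:
                    r = Pyz[y][z] / s
                    for x in range(nx):
                        Q[x][y][z] *= r
        for z in range(nz):            # rescale to match P(z,x)
            for x in range(nx):
                s = sum(Q[x][y][z] for y in range(ny))
                if s > 0:
                    r = Pzx[z][x] / s
                    for y in range(ny):
                        Q[x][y][z] *= r
    return Q
```

When the marginals are consistent (as they are here, coming from an actual
P), IPF converges to the unique max-entropy Q; it's at that fixed point that
a general closed form stops being available.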

Examples:
[https://www.princeton.edu/~wbialek/our_papers/schneidman+al_...](https://www.princeton.edu/~wbialek/our_papers/schneidman+al_03b.pdf)

[http://vldb.org/conf/1987/P071.PDF](http://vldb.org/conf/1987/P071.PDF)

[https://doi.org/10.6028/jres.072b.019](https://doi.org/10.6028/jres.072b.019)

[https://www.mdpi.com/1099-4300/16/4/2161](https://www.mdpi.com/1099-4300/16/4/2161)

Edit: formatting

~~~
andrewnc
Oh, this is neat - it's like adding an additional dimension to the Wasserstein
distance / optimal transport problem. Well, in the sense that you are using
marginals as constraints. Kantorovich won a Nobel Prize for this kind of
stuff, so it's definitely hard.

------
closeparen
How do I satisfy the needs of non-technical middle managers in engineering to
appear as if they understand and control their shops, while minimizing damage
to those shops?

~~~
danielovichdk
It's an impossible task. Without direct access to their brains and the
ability to alter them, it's a lost cause.

Middle Manager Syndrome would be a great one for machine learning to cure. I
hope you come out with your nerves intact.

~~~
a1369209993
Actually, what they said was:

> to _appear_ as if they understand and control their shops [emphasis added]

Ideally, the manager would provide coordination and clerical support, and
insulation from the rest of the bureaucracy (ie actually managing) without
necessarily needing to understand or control the details. But if that's how
they _appear_ , upper management will classify them as overpaid secretaries,
and gut their authority (and, on a selfish note, salaries). So it's important
that they appear to understand and control their nominal subordinates, even if
they're actually following sound advice of the form

> It doesn't make sense to hire smart people and tell them what to do; we hire
> smart people so they can tell us what to do.

(It may still be an impossible task, but it's a different task from having
them _actually_ understand and control things; even technical managers rarely
accomplish _that_.)

~~~
6510
An illusion that only works for half the audience is not very reliable. I see
an opportunity to assign reading materials to them [by email] and possibly
offer to tutor them weekly. When you own the problem, it is yours to solve.

------
cameronbrown
Information overload.

My current side project is [https://feedsub.com](https://feedsub.com). Right
now it's not great - I started by building a simple tool for getting regular
updates from RSS feeds, but longer-term I want to turn this into a system
which can absorb all the data streams you're interested in (news, stocks,
weather, social, communities) and give you dials (filters, curation, signals,
etc.) to surface a healthy amount wherever you want (SMS, email, web, RSS,
chatbots, etc.).

The crux of the problem is endless scrolling feeds we're sucked into 24/7,
which is why I based my MVP on email.

My current solution is trivial on a technical level. Honestly, my biggest
problem is thinking about the problem on a non-technical level, balancing this
with working life and branding, since my software and vision are very far
apart right now.

(TBH, this isn't nearly as hard a problem as some of the others here - but I
enjoy the ideas/feedback I get from communities like HN)

~~~
icebraining
This is not completely related, but it reminded me of when I used Sup (a
console email client that took a lot of inspiration from Gmail - labels, fast
FTS, etc) and the author had an inspiration to create a client/server version,
which could handle "Email, RSS feeds, notes, jabber and IRC logs" and more.

[https://web.archive.org/web/20120810005232/http://masanjin.n...](https://web.archive.org/web/20120810005232/http://masanjin.net/blog/old13)

Unfortunately, I don't think he even finished splitting Sup into the
server+client (called Heliotrope and Turnsole), let alone get them to handle
other stuff beyond email. Felt like a lost opportunity.

--

For your project, seems like the direct competition is IFTTT and similar. What
do you think differentiates your project from those? Could be useful to push
more in that direction.

~~~
cameronbrown
I reckon you could pull my project off with IFTTT. The big differentiator
would ideally be the fact that this would be a product geared towards content
consumption and that lifecycle (ingest, filter, consume), rather than the
general-purpose stuff that IFTTT does. I have a bunch of ideas in this
regard, but it's very exploratory and not a short-term thing.

------
mamcx
Building a relational lang
([https://github.com/Tablam/TablaM](https://github.com/Tablam/TablaM)). I've
allowed myself to get derailed in figuring out how to provide a nicer
"LINQ/iterator" protocol that works inside Rust and outside in the lang (so
that how I write them in Rust is close to what the user could write in the
lang).

The regular iterator protocol, as it stands today in Rust, makes it hard to do
stuff like JOINs, GROUP BYs and other fancy operations (because you need to
decompose the computation into a partial state machine - hard even for a
developer, impossible to ask of a regular data user). Also, you need to
duplicate all that for async (with streams) and other abstractions...

I'm now trying to understand transducers
([https://www.reddit.com/r/rust/comments/gqiyej/potentials_adv...](https://www.reddit.com/r/rust/comments/gqiyej/potentials_advantages_of_transducers_in_rust/))
and stumbled upon effects:

[http://mikeinnes.github.io/2020/06/12/transducers.html](http://mikeinnes.github.io/2020/06/12/transducers.html)

which look to me like the clean approach I wish to use.

But what looks easy in Python/F#/etc. is HARD to do in Rust. So I'm in a kind
of limbo :)

~~~
tekknolagi
You and kd5bjo should chat:
[https://news.ycombinator.com/item?id=23741932](https://news.ycombinator.com/item?id=23741932)

------
vmurthy
(It's a germ of an idea but I hope to work on this!)

Problem: Public schools in India don't do justice to students. Private schools
in India charge a bomb, but most of the money ends up in the hands of the
"owners" and not enough goes to teachers (for reference, an average primary
school teacher earns less than an Uber driver).

The solution: a network of "not-for-profit" schools where the fee structure
is reasonable (it can't be free), but the profits are shared amongst the
people who make the schools run. Think "community banks" but for schools. I
can't solve the problem for everyone, but I hope to set a good example by
attracting the cream of teachers. It's time teachers got their due.

~~~
kcsomisetty
I was thinking about something very similar: the teachers would get the
lion's share of the fee. This would instantly motivate teachers to better
themselves, and move the school away from the for-profit model.

But there are some real problems to solve: 1. How does such a school get
started? Who provides the initial capital? 2. Even if such a school is
started, how do you stop it from becoming for-profit again? 3. Is there a
healthy balance between the 'for-owners' and 'for-teachers' models?

~~~
vmurthy
1\. Who pays for it? There are philanthropic trusts, like the Azim Premji
trust, which focus on this. 2\. I think that self-interest is a good motivator
to keep the system in place :-). 3\. I honestly don't know the balance yet,
but I know for sure that the current one just won't cut it.

------
SenHeng
Since I live in Tokyo, it’s about a 3 - 4 hour drive to various ski resorts. I
want to snowboard _every_ weekend. So I’m trying to find the best* ski resort
I can head to next weekend.

Best is a mix of

\- the ones I enjoy most

\- but not one that I just visited last weekend

\- where there has been recent snowfall / is predicted to snow

\- that I may have a season pass for

\- where costs of highway tolls, petrol, hotels can be optimised

\- driving time

\- whether I’m going alone or with friends

\- don’t have anything important at work on Monday

Eventually, I’d like to turn it into a kind of friend finder / social network
thingy but for snowboarding.
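As a first cut, those criteria could be folded into a single weighted score
per resort; everything below (resort names, feature values, weights) is
invented purely for illustration.

```python
# Weighted scoring over the criteria listed above: positive
# weights for things I want (enjoyment, fresh snow, a season
# pass), negative weights for penalties (just visited, long
# drive). All numbers are made up.

def score(resort, weights):
    return sum(weights[k] * resort.get(k, 0) for k in weights)

resorts = [
    {"name": "Kagura", "enjoyment": 0.9, "fresh_snow": 0.7,
     "visited_last_weekend": 1, "season_pass": 1, "drive_hours": 3.0},
    {"name": "Naeba", "enjoyment": 0.7, "fresh_snow": 0.9,
     "visited_last_weekend": 0, "season_pass": 0, "drive_hours": 3.5},
]
weights = {"enjoyment": 2.0, "fresh_snow": 1.5,
           "visited_last_weekend": -2.0, "season_pass": 1.0,
           "drive_hours": -0.3}

best = max(resorts, key=lambda r: score(r, weights))
```

With these particular weights the "just visited" penalty dominates, so the
resort not visited last weekend comes out on top, which is the intended
behaviour.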

~~~
unixhero
Well, it's not the answer you want, but the Shinkansen plus an expensive taxi
ride, or maybe ridesharing with some local dudes, might be a feasible, if not
cheap, answer.

Stay safe in those woods! I nearly perished by getting lost.

~~~
SenHeng
I nearly fell into the river at Kagura on New Year's Day! It was new snow that
hadn't settled yet, so with every step I took, the snow beneath me crumbled. I
was on a ledge trying very hard to climb up; luckily a few Chinese passersby
pulled me up. Then it was a long two-hour hike back to the lifts.

------
contingencies
Given the earth's population trajectory and the reduction in fertile arable
unpolluted lands, there is a coming crisis in the distribution of food. How do
we distribute food to high density Asian urban populations efficiently,
minimizing needless motor vehicle trips, packaging and spoilage, when
convenience purchasing is on the rise and average household sizes are
shrinking? Our answer is a network of robotic service locations with automated
stock-keeping and a shared, wholly owned logistics network plus personalized
direct from fresh ingredient preparation.

~~~
soulofmischief
[https://www.dni.gov/files/documents/NICR%202013-05%20US%20Na...](https://www.dni.gov/files/documents/NICR%202013-05%20US%20Nat%20Resources%202020,%202030%202040.pdf)

This is a report by the NIC from 2013 which outlines exactly how we expect
these shortages to occur, on what timeframes, in which geographical locations,
and what consequences are expected.

Thought you might find it useful when planning how and where to be most
efficient and to stay ahead of the clock.

I'm happy to discuss these issues more over email if you'd like to swap ideas.

------
LolWolf
Bounds for the best possible designs for optical devices: well-studied [0, 1,
2, 3, 4], yet really hard.

More specifically, whenever you give a designer a design spec, it is always
worth asking, _how good is the best possible design for this spec_? And, of
course, can the designer actually _achieve_ it, or something close to it? This
is the question here.

In this scenario, the design spec is the optimization problem ( _what_ you
want to optimize), the designer then gets to choose _how_ to best approach
this problem. In this case, you want to give a number that states, independent
of _how_ this problem is solved, what is the best any designer (no matter how
smart or sophisticated, how much computational power they have, etc) can hope
to do. In many cases giving such a number is actually possible! (See below
references.)

\-----

[0]
[https://pubs.acs.org/doi/10.1021/acsphotonics.9b00154](https://pubs.acs.org/doi/10.1021/acsphotonics.9b00154)
(PDF:
[http://web.stanford.edu/~boyd/papers/pdf/comp_imposs_res.pdf](http://web.stanford.edu/~boyd/papers/pdf/comp_imposs_res.pdf))

[1] [https://arxiv.org/abs/2002.00521](https://arxiv.org/abs/2002.00521)

[2] [https://arxiv.org/abs/2003.00374](https://arxiv.org/abs/2003.00374)

[3] [https://arxiv.org/abs/2002.05644](https://arxiv.org/abs/2002.05644)

[4] [https://arxiv.org/abs/2001.11531](https://arxiv.org/abs/2001.11531)

------
ahP7Deew
Fully offline, fully searchable copies of personal data (email, tweets,
calendar, etc.), English Wikipedia, IMDb, OpenStreetMap (tiles, routing,
points of interest), geocoding. Fully offline and state of the art speech
recognition. Fully offline voice assistant with almost complete coverage of
the most common usage.

~~~
thundergolfer
I’m working on this as well, but with a different set of content types. I want
my personal search to primarily support second-brain functionality for
creative work, so focusing on indexing:

\- podcasts

\- self-authored internet comments (Reddit, Hackernews)

\- books

\- articles

\- code

\- music

\- lectures

Started it at [http://rememex.org](http://rememex.org)

------
PaulDavisThe1st
Representing musical and audio (sample) time in ways that maximise reversible
conversion between the two. Musical time is typically spoken of (in western
culture) in terms of "bars" (or "measures") and "beats". The
relationship to audio (sample) time is defined by a "tempo map" which defines
the number of beats per minute and the number of beats per bar.

The mapping between the two is monotonic, non-linear, and can be stationary.
If the tempo map is allowed to contain ramps (accelerando and ritardando in
music speak), there are implicitly exponential sections in the function that
maps between them.

Using floating point arithmetic leads to errors that have immediate effects. A
musical time that should be considered to be at sample N is instead considered
to be at sample N-1 or N+1.

It's surprisingly hard to do it correctly.
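One mitigation, sketched below (my own illustration, not how Ardour actually
does it), is to keep beat positions as exact rationals and round to integer
samples exactly once, at the musical-to-audio boundary. This handles
constant-tempo sections; ramps, with their exponential sections, still need
separate treatment.

```python
# Exact beat -> sample conversion for a constant-tempo section,
# using rational arithmetic so no error accumulates.
from fractions import Fraction

SAMPLE_RATE = 48000  # audio (sample) time base

def samples_per_beat(bpm):
    # 60 seconds * sample rate / beats per minute, kept exact
    return Fraction(60 * SAMPLE_RATE, bpm)

def beat_to_sample(beat, bpm):
    # round exactly once, at the boundary between time domains
    return round(Fraction(beat) * samples_per_beat(bpm))

# At 140 bpm, 7 beats is exactly sample 144000. Accumulating
# 60 * 48000 / 140 as a float, beat by beat, is where the
# N-1 / N+1 sample errors creep in.
seven_beats = beat_to_sample(7, 140)
```

Rationals get slow if the denominators grow without bound, which is one reason
real implementations tend to use fixed-point "ticks" per beat instead.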

~~~
cardamomo
How do you think this tool would be used?

~~~
PaulDavisThe1st
[http://ardour.org/](http://ardour.org/) :)

------
titchard
Designing a combat robot for the UK Antweight division, which has a max weight
of only 150g (or 175g for some groups).

Despite this tight weight budget, I intend to build something rather
interesting, but it is causing me to spend a lot of time in Fusion designing
the parts, and slicing and reslicing 3D-printed parts to shave fractions of a
gram off components.

------
martythemaniak
Solving climate change of course!

I am partly obsessed with, and researching/writing about, how you could make
carbon-free heating cheaper than fossil-fuel-based heating. If you can make
geothermal systems cheaper than a natural gas furnace, then homeowners would
have the same economic incentives as drivers, for whom operating an EV is far
cheaper and cleaner than an ICEV.

~~~
adammunich
A lot of people still use gas for hot water heating in the northern USA.
Electric hot water heaters often draw too much power to be drop-in
replacements. Some sort of battery-powered heater might help here.

------
cryo
Private file access between computers. Even in 2020, it appears that building
a connection between machines (macOS, Linux and Windows) requires a miracle if
you don't want to share everything with cloud providers.

The cryo file manager is an attempt to get rid of the hassle:
[https://cryonet.io](https://cryonet.io)

~~~
isaacimagine
There's also syncthing: [https://syncthing.net/](https://syncthing.net/).

~~~
genuinebyte
I've recently set up Syncthing and it has worked great. I'd appreciate it if
it allowed me to cut out all relays and just connect directly to my servers,
but maybe I just haven't found out how to do that yet.

~~~
cryo
You can run your own relay servers.

[https://docs.syncthing.net/users/strelaysrv.html](https://docs.syncthing.net/users/strelaysrv.html)

------
drallison
Everyone seems to ignore the only critically significant hard problem: how do
we mitigate the negative impacts that humans have had on the planet? Right now
it looks like we are careening towards a quick extinction of the human race
and most other species, and we are not doing anything about it. True, the
magnitude of the problem is daunting. True, it may be too late, as we are past
the tipping point. True, it is easier to ignore the risks than it is to
confront them.

Maybe the real hard problem is getting people to pay attention to what is
happening in their world and to forecast what the impact of their actions will
be in the next few decades.

~~~
sputr
Solutions to the stated problems are, from a technical and policy standpoint,
very simple. Yes, we know how to solve all of this - it's called market
regulation, and we know it works reeeally well.

But it's politically hard.

What we need is an economically pragmatic green political movement that will
not keep shooting itself in the foot with left-wing ideological purity
competitions.

The profit-seeking motive is only a problem when you let it off its leash
(because you somehow convinced yourself that the state of perfect market
competition is the stable state of a market, when in fact all natural forces
point toward domination - monopolies).

So, green parties... Drop the bullshit feel good stories and let's just make
saving the earth profitable.

~~~
drallison
I think you are begging the question. Market regulation, a more economically
pragmatic green political movement, and a fettered profit-seeking motive won't
respond quickly enough to save the day. We have a crisis here and not much of
anything that makes a difference. For example, when huge fires burn, dumping
significant CO2 into the atmosphere, it would be in everyone's best interest
to put them out. But we (the world collectively) have not done that. Likewise,
despite regular warnings about the dangers of pandemics, the global medical
system was designed under the assumption that "pandemics" would be small and
could be localized. That turns out to have been the wrong choice. And in this
case, every serious Global Trends study points out that the risk of a pandemic
was significant and that a pandemic could be costly.

~~~
sputr
Why do you presume it won't respond quickly enough? I think that if the EU
made any carbon fuel created from atmospheric CO2 free of any and all taxes
... we would see it on the market within a few years at most, with plants
cropping up very quickly. When that industry matures in a few years, just
force anyone polluting with CO2 to buy and sequester this carbon fuel.

Besides, that's only half of the solution. The other half is pragmatic policy.
If the green movement were pragmatic, it would have championed "the EU builds
200 nuclear plants in <10 years". All the problems with nuclear right now (who
will build it? it's expensive, takes too long, etc.) go out the window when
someone like the EU decides to build 200 of them. Wind and solar are nice and
all, but they can't do in 10 years what a few dozen, let alone a few hundred,
nuclear plants can. Solutions are there. But solutions are not the point.
Getting elected is the point. And that's where ideology and election-
efficiency (it's much easier and cheaper to campaign on emotional social
issues and virtue signaling than it is on suboptimal, ideologically-impure,
hard, painful, expensive solutions) reign supreme.

And regarding the pandemic thing - the question you should ask is: "wrong
choice for whom?" The general population? Or the people in power? The biggest
problem in our society is that we've fallen for the lie that "in a democracy
the general population rules and the system inherently works in their best
interests". Democracy has nothing to do with taking care of the people. It can
be used for that ... WHEN the general population realizes that it first has to
BECOME a power player in the game of politics.

~~~
drallison
Simple enough--there is not enough time. Even if we were to stop dumping CO2
into the atmosphere today, the global temperature change would exceed
acceptable limits. If we continue with business as usual, substantial parts of
the earth will become uninhabitable. Nothing on a global scale can be done in
the next twenty or thirty years. Collaborative global problem solving does not
seem to work.

------
netrunner-x86
I am working on my own on a system that creates long lasting human
relationships. It is a solution to loneliness.

I used to run match.com's research division. I left to pursue this project.

~~~
realbarack
Are you willing to share or privately discuss any more detail about what
you're working on? I'm interested in this area as well.

~~~
netrunner-x86
Only if you are an accredited investor :D

------
fabianlindfors
I want to give everyone a digital identity. In some countries (including mine)
basically everyone has an e-ID which we use to sign in to things like
government services, banks, payment providers and much more. This is
absolutely essential to everyday life and many startups are built around it.

Unfortunately, many countries don't have useful e-IDs and the ones that do are
limited to that one country. I want to create a single digital identity which
works for everyone, for all applications, across borders. The basic features
are:

\- App based with no special hardware necessary.

\- Privacy friendly with the user always fully aware of what data they are
revealing.

\- Simple to integrate for developers. It's a standard SSO flow over
OAuth/OIDC.

I'm currently calling it Pass: [https://getpass.app](https://getpass.app). If
anyone wants to have a chat about digital identities you can reach me at
fabian (at) flapplabs.se

~~~
dane-pgp
> Privacy friendly

Does this mean that a user can use their identity on two separate sites, and
those two sites can't collude to build a shared profile of the user, without
the user's permission?

Does the user have to choose a specific server to be involved in all their
identity interactions? If the server stops working, does the user lose their
identity?

Also, is it possible to create an account without a phone (or rather without a
SIM, since those are often tied to real identities)? Does your proposed system
assume that people can't register multiple identities (using multiple phones)
if they wanted to?

~~~
fabianlindfors
> Does this mean that a user can use their identity on two separate sites, and
> those two sites can't collude to build a shared profile of the user, without
> the user's permission?

That's precisely what it means. User IDs will be unique for each site, and I'm
hoping to anonymize email addresses as well, similar to what Apple has done
with "Sign in with Apple". Some companies might be required by law to collect
some PII, but in that case their needs will be vetted beforehand.
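For what it's worth, one standard construction for per-site IDs that can't be
joined across sites (not necessarily what Pass uses) is to derive each ID as
an HMAC of the site identifier under a per-user secret held by the identity
provider:

```python
# Pairwise pseudonymous identifiers: each site sees a stable ID
# for the user, but two sites cannot link their IDs together
# without the per-user secret.
import hashlib
import hmac

def pairwise_id(user_secret: bytes, site: str) -> str:
    return hmac.new(user_secret, site.encode(), hashlib.sha256).hexdigest()

secret = b"per-user-secret-held-by-the-identity-provider"
id_bank = pairwise_id(secret, "bank.example")
id_dating = pairwise_id(secret, "dating.example")
```

The same idea appears in OpenID Connect's "pairwise" subject identifier type.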

> Does the user have to choose a specific server to be involved in all their
> identity interactions? If the server stops working, does the user lose their
> identity?

I'm currently building this as a centralized product so no, there is only a
single server maintained by us. I'm mostly concerned with building a great
product but the prospect of decentralized, verified identities is also very
interesting. I'd love to see what that could look like!

> Also, is it possible to create an account without a phone (or rather without
> a SIM, since those are often tied to real identities)? Does your proposed
> system assume that people can't register multiple identities (using multiple
> phones) if they wanted to?

The current product is in the form of an app so you will need a phone but you
won't need a phone number (or SIM). An email address is currently required
though.

My current system assumes one identity per person, but it's fully possible to
have multiple devices which act as that identity. This might change depending
on regulation, though, and is not set in stone.

If you have any more questions I'd be happy to answer them!

~~~
dane-pgp
Thank you for those excellent answers. I do have a couple more questions if
you are interested:

> [companies'] needs will be vetted beforehand.

Is the plan that a single entity offering this centralized product will
control not just which users are allowed to have identities, but which
companies are allowed to access users' IDs? Presumably there is a somewhat
costly process to vetting companies and their requirements, so would companies
pay a fixed amount to cover this vetting process, or pay more based on the
level of personal information they hoped to receive from users?

> the prospect of decentralized, verified identities is also very interesting.

What type of verification do you imagine being necessary or available for user
identities?

> The current product is in the form of an app so you will need a phone but
> you won't need a phone number (or SIM).

Are there any technologies specific to phones that mean this couldn't run as a
web app instead?

> My current system assumes one identity per person but it's fully possible to
> have multiple devices which acts as that identity.

So if you can install multiple copies of the app on your (Android) phone, you
could have multiple identities on the same device?

~~~
fabianlindfors
> Is the plan that a single entity offering this centralized product will
> control not just which users are allowed to have identities, but which
> companies are allowed to access users' IDs? Presumably there is a somewhat
> costly process to vetting companies and their requirements, so would
> companies pay a fixed amount to cover this vetting process, or pay more
> based on the level of personal information they hoped to receive from users?

That's the plan, yes. The current pricing structure is to let companies pay a
monthly price per active user. They would not be able to pay more to get
access to more data. As this is early stages, I'm not sure what the vetting
process will look like yet. It's mostly there to ensure that the data the
companies request is actually needed for their core business and will not be
used for tracking. For example, a company can only request the legal name of a
user if the law requires them to know it. This might be true for a bank but
not for a dating app.

> What type of verification do you imagine being necessary or available for
> user identities?

The verification we will be performing is at the level required by some laws,
for example Know Your Customer (KYC) and Anti-Money Laundering (AML) laws. Our
goal is to make Pass suitable for fintech companies which have quite stringent
requirements. I can also see lighter forms of verification being good enough
for other applications, like the Web of Trust model used by PGP.

> Are there any technologies specific to phones that mean this couldn't run as
> a web app instead?

Yes. Many modern phones have a built-in Hardware Security Module (HSM) which
can be used to store and use asymmetric keys securely. Browser storage can't
offer the same level of security currently but there have been some
interesting developments which might change this, for example WebAuthn.

> So if you can install multiple copies of the app on your (Android) phone,
> you could have multiple identities on the same device?

I can't really answer this right now as I'm not sure which way we'll go. It
will depend on what regulations require and what we can achieve in terms of
verification.

------
cookiengineer
Problem: A lot of professionals spend a shitload of time in front of the
computer doing nothing but googling or researching on websites, taking notes
(or transferring that knowledge to another medium or file), and repeating it
all over again the next day until they come up with a conclusion.

Idea: A web browser for self-automation that learns the semantics and
sentiment of content on the web, whilst trying to respect references and
articles' sources in order to correct the ground truth.

Solution: Still far away from it, but am implementing a peer to peer Web
Browser that can share its information (or states) with trusted peers. Trying
to implement a recordable, editable and repeatable GUI for everything, which
is a lot harder than it sounds.

[1]
[https://github.com/cookiengineer/stealth](https://github.com/cookiengineer/stealth)

All kind of feedback appreciated!

------
quickthrower2
I haven't tried yet, but I'd love to make a JS front-end library to "bring
your own storage", so that if I provide you with an app online (think SPA) it
can save its data wherever the user wants (S3, Dropbox, local disk via an
extension, etc.).

Another similar idea is a very simple REST protocol so that you can save to a
server, and then make that server easy to self-host.

I like the idea of people building apps as web pages without needing to worry
about the server, with the user owning their data but having the convenience
of a cloud-like solution where you just visit the site, log in and work.
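The storage side of that could be very small. Here's a sketch (names and shape
are my own invention) of one backend behind a uniform get/put interface; an S3
adapter, a Dropbox adapter or a tiny self-hosted REST server would each
implement the same two methods.

```python
# One possible backend for a bring-your-own-storage app:
# every backend only has to provide get/put on string keys.
import json
import pathlib

class LocalDiskStore:
    def __init__(self, root):
        self.root = pathlib.Path(root)
        self.root.mkdir(parents=True, exist_ok=True)

    def put(self, key, value):
        # serialize app state as JSON, one file per key
        (self.root / key).write_text(json.dumps(value))

    def get(self, key):
        return json.loads((self.root / key).read_text())

store = LocalDiskStore("/tmp/byo-storage-demo")
store.put("doc1", {"title": "hello", "body": "world"})
```

Because the interface is just two methods on string keys, swapping backends is
a one-line change in the app.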

~~~
shezi
Something like [https://remotestorage.io/](https://remotestorage.io/) ?

~~~
mc3
It feels a bit like hard work to use. They seem to have support for Dropbox
but just as an afterthought and not the main thing. The main thing means
running your own server, but it doesn't look easy to set up.

I want to think of an approach that is really easy to use so it becomes
popular for that reason.

~~~
shezi
Hmm, I have never tried to run a server. At least the integration seems to be
fairly easy.

------
singingwolfboy
Let’s say you have multiple people who want to share their location with each
other, but to different degrees of precision. Some people might be willing to
share their street address, while others only want to share the city that they
live in. Some only want to say “Northern New Jersey”, or “Southern France”.
Some might want to just share their timezone.

How do you store this information in a database, in a geographically
meaningful way? How do you represent it on a map?
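One sketch (my own, not a standard): store every disclosure, whatever its
precision, as a bounding box. A street address becomes a tiny box, "Northern
New Jersey" a large one, and a timezone a tall thin one; storage and map
rendering are then uniform (draw the box, or its center with a radius).
Truncated geohashes or plus codes give the same effect with a string encoding.

```python
# Every location disclosure, at any precision, as a lat/lon box.
from dataclasses import dataclass

@dataclass
class Region:
    lat_min: float
    lat_max: float
    lon_min: float
    lon_max: float

    def center(self):
        # where to place the marker on a map
        return ((self.lat_min + self.lat_max) / 2,
                (self.lon_min + self.lon_max) / 2)

    def contains(self, lat, lon):
        return (self.lat_min <= lat <= self.lat_max and
                self.lon_min <= lon <= self.lon_max)

# illustrative coordinates only
street = Region(40.7419, 40.7421, -74.0061, -74.0059)
northern_nj = Region(40.7, 41.4, -75.2, -73.9)
```

Coarser disclosures simply contain finer ones, so queries like "who is near
me?" degrade gracefully with each user's chosen precision.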

~~~
sdinsn
[https://github.com/google/open-location-
code/wiki/Evaluation...](https://github.com/google/open-location-
code/wiki/Evaluation-of-Location-Encoding-Systems)

------
the_decider
I’m working on a formula that factors ML model run-time cost (speed / memory-
usage) into its performance evaluation. I generated a very simple scoring
function from economic first principles. Now, I’m trying to test performance
on cumbersome transformer models published by Google / FB / Microsoft etc in
order to prove that many of these models are not “state-of-the-art” if run-
time cost is taken into account.
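A hypothetical shape for such a formula (my sketch, not the author's): discount
raw accuracy by the run-time cost of inference at some price per second of
latency and per GB of memory, so a model is only "state of the art" if its
accuracy gain is worth its cost.

```python
# Cost-adjusted model score; all prices are illustrative.
def cost_adjusted_score(accuracy, latency_s, mem_gb,
                        price_per_s=0.01, price_per_gb=0.002):
    cost = latency_s * price_per_s + mem_gb * price_per_gb
    return accuracy - cost

# a large transformer vs a small distilled model
big = cost_adjusted_score(0.92, latency_s=3.0, mem_gb=16)
small = cost_adjusted_score(0.90, latency_s=0.1, mem_gb=1)
```

With these prices the small model's 2-point accuracy deficit is more than paid
for by its lower run-time cost, which is exactly the kind of reordering of
leaderboards the comment describes.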

------
ankothari
Stack Overflow doesn't have a good recommender system that can suggest
questions I would be able to answer. I am trying to build one using the data
they post online.

------
etherio
Knowledge.

Working on building a self-hosted app that would allow you to save, organise
and search your knowledge in one place.

It would contain information like notes and bookmarks (it would download the
links' contents), and in general provide a programmable, open-source interface
to preserve the info you find useful, and even sync with external APIs to save
your online presence locally (think Reddit posts, HN links, etc.)

~~~
thundergolfer
Seems like multiple people in this thread are working on this or similar.

Maybe we should start a subreddit for people building this stuff.

~~~
etherio
Yeah if there's enough people! I only noticed one other similar comment for
now.

------
koeng
I’m trying to figure out how to ship DNA effectively. It’s really inefficient
right now, so I’m genetically engineering bacterial spores to make it a lot
better. Not sure if people will adopt, but at least it’ll be 10x better than
what is currently available!

~~~
webmaven
Hopefully the spores are non-viable in some way (sterile, require an exotic
growth medium, terminator genes, etc.)?

~~~
koeng
They won’t be. I thought deeply about that aspect, but am choosing not to make
them non-viable for the following reasons:

1\. Payloads are in domesticated strains, which are less efficient than wild
strains, and wild strains are already global.

2\. Exotic growth media requirements would make the cells usable only by
wealthy organizations, which is the status quo and not my target audience.

Of course, if a good reason comes along to change that, I would.

~~~
webmaven
Exotic in this case just means uncommon or rare in the wild, not expensive.
Something like requiring both salt and aspartame would help prevent gene
transfer in the wild (to related or unrelated strains) as well as making it
unlikely to grow on its own.

~~~
koeng
I would love to have something like that, but unfortunately that’s a really
hard problem in itself, so I’d likely only start working on it once the need
arises

~~~
webmaven
Presumably you're working on this as a technique for widespread use by other
parties, which means you can't control what DNA is being transported.
Minimizing the chances of escape seems only prudent.

------
kd5bjo
I’m trying to map relational algebra onto Rust’s type system. If I’m
successful, I’ll have a bunch of collection types with different performance
characteristics that are all drop-in replacements for each other.

~~~
mamcx
I'm in the same boat:
[https://github.com/Tablam/TablaM](https://github.com/Tablam/TablaM), except
I'm also building a lang (I wish to revive the spirit of the dBase/Fox family
of langs).

I have experimented with varied approaches, including columnar, so I think we
can help each other!

~~~
kd5bjo
Our two projects actually look like they have very different goals and
approaches: I’m aiming for a rigorous solution for small design problems, like
needing to add an index to a datastructure that wasn’t designed for one—
nothing really to do with databases proper. I need the overhead for these
simple cases to be low, so I’m trying to do as much as I can at compile-time
inside the type system.

You’re doing everything at runtime (at least in Rust), which is a more
flexible approach but can’t do things like trigger a compile error when
attempting to access a field that isn’t present in a record.

~~~
mamcx
> Our two projects actually look like they have very different goals and
> approaches

Surely! But learning about that is pretty interesting and could help. Also, I
think that combining ideas/crates is what gives the Rust ecosystem its
power...

More to the point, I'm still thinking about which data structures to use, and
how to deal with providing SELECT/WHERE/etc. across them (like a file).

> nothing really to do with databases proper

This is a big misunderstanding about the relational model: that it is only for
(R)DBMSs and storage.

I'm looking to apply it to _regular_ programming. It's similar to using the
OO, Array or Functional paradigm as the base for a lang.
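The "relational model for regular programming" idea is easy to sketch in a
dynamic language (Python here, just to show the shape; TablaM's Rust internals
will of course look different): the same select/where works over any iterable
of records, whether it came from a list, a file or a generator.

```python
# Relational operators over plain in-memory records.

def where(rows, pred):
    # WHERE: keep rows matching a predicate
    return (r for r in rows if pred(r))

def select(rows, *cols):
    # SELECT: project each row onto the named columns
    return ({c: r[c] for c in cols} for r in rows)

people = [
    {"name": "Ana", "age": 34, "city": "Lima"},
    {"name": "Bo", "age": 19, "city": "Oslo"},
]
adults = list(select(where(people, lambda r: r["age"] >= 21), "name"))
```

Doing this with static guarantees (e.g. a compile error for a missing column)
is exactly where the Rust type-system work above gets hard.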

------
comicjk
Trying to predict where electrons go in a molecule, but using a classical
model. This can be done with supervised machine learning - you can use quantum
mechanics to get lots of labeled data - but it's a tough problem, because
chemical physicists have very high standards for accuracy.

~~~
c1ccccc1
Sounds very cool! I'd be interested in looking at the code if you're up for
sharing it.

Are you computing the labels using DFT, or with a more exact method?

~~~
comicjk
I work at Schrodinger Inc, collaborating with the people who make TorchANI
([https://aiqm.github.io/torchani/](https://aiqm.github.io/torchani/)). This
currently predicts only molecular energies in the public codebase, but I
expect they will add electron density using the same framework.

Currently I am using hybrid DFT (wB97X-D) with plans to move up to wB97M-V and
possibly Quantum Monte Carlo. The TorchANI group likes wB97X and up-training
with DLPNO-CCSD(T).

------
thecupisblue
We're writing too much code. So I'm making stuff that will help us write less
code.

Like a vim-like editor that translates your spec into generated code while you
also see it on the go to fix any issues. Think yeoman on generics and
steroids.

~~~
canada_dry
> editor that translates your spec into generated code

Anyone old enough to have been in IT in the late 80s/early 90s will remember
the "4GL" phase that swept through Fortune 100 companies.

I was in banking at the time, and Focus 4GL was brought in to replace
programmers. Of course, in the end, it turned out to be a fool's errand.

My prediction though is that in less than a decade, ML/AI will be decent
enough at developing solutions via client specs for many applications.

~~~
thecupisblue
Not really - the generated code isn't some mystic code; it's simple templates.

I originally got there by making a complex magic data structure that held
relations to everything in multiple dimensions so I could generate a huge
amount of stuff, but that turned out to be just like 4GL - a load of slow
confusion. The reason I am doing it is exactly because of the ML/AI hope -
with enough data and proper structures, I can generate a lot.

------
robmerki
I'm writing a book about ADHD: [https://adhdpro.xyz/](https://adhdpro.xyz/)

Unfortunately there seems to be a lot of infantilizing in and around the ADHD
sphere. Either we're treated like helpless children or we are encouraged to
lower our expectations/goals.

It's hard to explain unless you've been open about having ADHD, or have been
part of the community. The zeitgeist revolves around accepting and maintaining
a status quo. The problem I'm trying to solve is how to build a
growth/thriving lifestyle despite an ADHD diagnosis.

~~~
mysore
Hey, this is cool. I would totally use this. I just got diagnosed at 28 and am
trying to implement meditation, yoga and better work habits so I can get the
most out of the medication and maintain the lowest possible dosage.

~~~
robmerki
It sounds like you're already on the right track. Meditation and mindfulness
are the foundation of building the right strategy. I'm still appalled that
neither my therapist nor my ADHD specialist ever told me to go beyond
medication.

------
lachlan-sneff
I'm building cad software for designing atomically precise objects.

I don't expect atomically precise manufacturing (at least in the form of
molecular nanotechnology) to show up for at least a decade or two.

But the software to design things that can be built using nanotech can be
written before the technology to synthesize them exists.

~~~
jmpman
Will it simulate the thermal expansion when machining?

------
Layvier
Online learning, particularly personalised learning aimed at self-education
and continuous learning. I'm trying to model the knowledge space as a graph,
index learning material (free online resources right now, then user-generated
content) on this graph, and then provide different ways to navigate this
space. I'm trying to address the "best way" for someone to learn a specific
concept, and to help people identify the knowledge they are missing or looking
for. The project would be open, collaborative and non-profit, btw.

~~~
vertak
I will soon begin working on a somewhat similar problem involving personalized
self-education. So far I've thought of it as a search engine for YouTube which
allows a user to look up something like "home improvement" and then builds
them a "course" comprised of the most relevant educational YouTube videos that
they can follow along with. There would be an element of self-curation and
social media (voting and reviews of courses).

Anyway, good luck with your project it is very interesting and I'm sure you'll
learn a lot!

------
artembugara
Over the past few months, my task has been something like "writing
documentation for Google News RSS URL patterns".

There is no such official documentation.

Python package:
[https://github.com/kotartemiy/pygooglenews](https://github.com/kotartemiy/pygooglenews)

Blog post: [https://codarium.substack.com/p/reverse-engineering-
google-n...](https://codarium.substack.com/p/reverse-engineering-google-news-
rss)

------
tjansen
I am working on a system that parses the English language using a hand-written
compiler and then stores the IR in a database, so that all human knowledge can
be searched and queried in all its facets (who said what, when and where)
using the English language. I believe that a database is the key to NLU, and
that machine learning is mostly useless for true NLU: machine learning
currently has no good way of interacting with a database, AFAIK, and without a
full database of human knowledge and knowing who said what, it's impossible to
truly understand human language. Storing all human knowledge in a neural
network just isn't practical anytime soon.

I wrote a new programming language, Eek, because it was impossible for me to
handle the complexity of doing it all with current languages (which lack
built-in support for asynchronous database access and parsing). So far the first
generation of the programming language is working, but as an interpreter
written in TypeScript, and I wrote an English language parser with it, and a
simple database. Now I am working on a better, LLVM-based implementation of
Eek. I started this thing about 3 years ago, and it will take some more years
before this will be even demoable...
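
This is not the author's system, but the core loop can be sketched in
miniature: a toy parser that turns trivial subject-verb-object sentences into
triples and stores them in a queryable database. The grammar, function names,
and sentences are all invented for illustration.

```python
import sqlite3

def parse_svo(sentence):
    """Deliberately trivial 'parser': first word is the subject, second
    the verb, the rest the object. A real English parser is vastly more
    complex; this only shows the parse-to-IR-to-database pipeline."""
    words = sentence.rstrip(".").split()
    if len(words) < 3:
        raise ValueError("expected at least subject, verb, object")
    return words[0], words[1], " ".join(words[2:])

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE facts (who TEXT, did TEXT, what TEXT)")

def learn(sentence):
    """Parse a sentence and store its triple as a queryable fact."""
    conn.execute("INSERT INTO facts VALUES (?, ?, ?)", parse_svo(sentence))

def who_did(verb):
    """Query the knowledge base: who performed a given verb?"""
    return [row[0] for row in
            conn.execute("SELECT who FROM facts WHERE did = ?", (verb,))]
```

The interesting part of the real project is everything this sketch elides:
ambiguity, anaphora, and attribution of statements to speakers.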

------
Gurten
I've been tinkering in the reverse-engineering space. My problem amounts to
reusing compiled binaries by combining them in novel ways. That is, I would
like to take algorithm/subsystem X from software Y and combine it with
something else. The goal is to have a library of components which I may be
able to combine. It has led me to investigate a few technologies I have been
meaning to invest time into, like LLVM and QEMU. There are a few projects
which combine these, as well as related technologies: DECAF, radare2,
DynamoRIO, McSema. The hard problem I am facing, and by no means have an
effective solution for, is extracting the semantic essence of the program in
spite of the ISA, and finding a balance between emulation and re-adaptability
(i.e. abstracting out code that depends on some base-address assumption). The
value on the surface seems counterintuitive to the investment, especially from
my roots in SWE, where one can have a hard enough time trying to accomplish
that with source code available. Although the application is broad, I've
focused intermittently on video games. I believe this is where some value
lies, as a finely-tuned subsystem can be the heart of a franchise.

~~~
SCHiM
Very interesting. I'm trying to do something closely related for other
reasons.

One thing I'm thinking of is that it might be possible to brute-force the
semantics of short snippets of code using genetic algorithms. A similar
technique has been demonstrated a few times by author of [1].

I want to use this to eventually rapidly search a large number of binaries for
insecure behavior. But to do that I need to be able to formulate questions
like:

"Find me a function where attacker controlled data is marshaled to a size type
and then used to allocate memory, to which a different attacker controlled
amount of attacker controlled data is written".

Basically this:
[https://github.com/Battelle/PaperMachete](https://github.com/Battelle/PaperMachete)

[1]:
[https://github.com/xoreaxeaxeax/sandsifter](https://github.com/xoreaxeaxeax/sandsifter)
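
A toy illustration of the brute-force idea, with machine code replaced by a
made-up arithmetic "instruction set": a small genetic algorithm searches for a
short program whose input/output behavior matches a target semantics (here
f(x) = 2x + 3). Everything here is a stand-in; real work would mutate actual
code snippets and check behavior under emulation.

```python
import random

# Invented instruction set: each step adds or multiplies by a small constant.
OPS = [("add", c) for c in range(6)] + [("mul", c) for c in range(6)]
TESTS = [(x, 2 * x + 3) for x in range(-4, 5)]  # target semantics: f(x) = 2x + 3

def run(program, x):
    """Interpret a 'program': a short list of (op, constant) steps."""
    for op, c in program:
        x = x + c if op == "add" else x * c
    return x

def error(program):
    """Distance between the program's behavior and the target's."""
    return sum(abs(run(program, x) - want) for x, want in TESTS)

def evolve(generations=200, pop_size=60, length=2, seed=0):
    rng = random.Random(seed)
    pop = [[rng.choice(OPS) for _ in range(length)] for _ in range(pop_size)]
    best = min(pop, key=error)
    for _ in range(generations):
        ranked = sorted(pop, key=error)[: pop_size // 2]
        # Elitism plus per-step mutation of programs drawn from the top half.
        pop = [best] + [
            [rng.choice(OPS) if rng.random() < 0.3 else step
             for step in rng.choice(ranked)]
            for _ in range(pop_size - 1)
        ]
        candidate = min(pop, key=error)
        if error(candidate) < error(best):
            best = candidate
    return best
```

Scaling this from toy arithmetic to real instruction semantics is of course
the hard part; the sandsifter-style work cited above is what makes the
instruction-level version plausible.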

------
irq-1
UDP DDOS mitigation. It'll be important for protocols like HTTP3/QUIC where
decrypting every packet is costly, and malicious packets can't be identified
until after the application tries to decrypt them.

The idea is to assign a random 8 or 16 byte number (DdosID) to each
connection. The DdosID is unrelated to any other identifier, such as the
HTTP3 connection ID. The client puts the DdosID at the beginning of every UDP
packet's data section (raw; no encryption or compression). Packets that have a
valid DdosID are processed. Packets with an invalid DdosID are dropped. New
connections use zero as the DdosID. Any client can use zero to re-establish a
connection or if something goes wrong.

If attackers make random DdosIDs it will result in packets that are dropped
before decryption. If a valid DdosID (or hundreds of valid DdosIDs) are used,
the volume of packets with that DdosID will make the attack obvious, and the
DdosID can be invalidated and the packets dropped.
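
A sketch of that receive path, with invented names and thresholds: check the
leading bytes before doing any decryption, and revoke an ID that floods.

```python
import os
import time
from collections import defaultdict

ID_LEN = 8                      # bytes of DdosID at the start of each datagram
ZERO_ID = b"\x00" * ID_LEN      # reserved: new-connection attempts
MAX_PKTS_PER_SEC = 5000         # illustrative per-ID flood threshold

class DdosFilter:
    """Pre-decryption UDP filter sketch: drop datagrams whose leading
    bytes are not a known connection ID, and revoke IDs that flood."""

    def __init__(self):
        self.valid = set()
        self.counts = defaultdict(int)
        self.window_start = time.monotonic()

    def new_connection(self):
        cid = os.urandom(ID_LEN)
        self.valid.add(cid)
        return cid

    def admit(self, datagram):
        now = time.monotonic()
        if now - self.window_start >= 1.0:       # roll the 1 s window
            self.counts.clear()
            self.window_start = now
        cid = datagram[:ID_LEN]
        if cid == ZERO_ID:
            return True          # new-connection path; rate-limit separately
        if cid not in self.valid:
            return False         # unknown ID: drop before any decryption
        self.counts[cid] += 1
        if self.counts[cid] > MAX_PKTS_PER_SEC:
            self.valid.discard(cid)              # flooded ID: revoke it
            return False
        return True
```

The real thresholds (packet volume, ID lifetime, the zero-ID rate limit) are
exactly the open questions below; the numbers here are placeholders.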

Attackers will need to spam new connection attempts (DdosID zero) to deny
service. New connection attempts could be dropped, or a small percentage
allowed when the application can handle them. The rate of allowed new
connection attempts could be adjusted very easily and quickly, but a high rate
of bad packets would still deny new connections. Existing connections would
continue to work.

The challenges are mostly creating functional thresholds: what packet volume
is too high? How long is a DdosID valid? Should a load balancer record the IP
and port, and force a new DdosID if either changes?

I would also like to see a way that applications and load balancers can use a
DdosID system without it being dependent on the specific application; DdosID
should not depend on the protocol it's protecting.

------
Brajeshwar
## Problem

Half of Indian farmers do not have access to institutional/formal credit.
They end up borrowing from local loan sharks at extremely high interest rates.
The vicious cycle continues as they do not have access to markets favoring
their produce's selling price, and they end up being exploited.

Most banks, despite their mandated percentage of loan for agriculture, are not
too keen (and unable) to work closely with farmers.

## Solution

By leveraging technology and international connections, we help farmers by
giving them access to credit at a favorable interest rate. We work with the
international community, such as the Japanese, to lend their capital on our
platform, thus helping the underserved farmers. In turn, the Japanese
investors get returns much higher than their national savings interest rates.

Banks too can benefit from our ability to deploy their cash.

## Why Us, Why Now

I have struggled to articulate the philosophy/motto on how to help the
farmers, as they are the most exploited populace. I'm zeroing in on -- "BE
KIND".

If you want to hear more, contact me and I will send you our Executive Summary
and/or Pitch Deck. We are fund-raising.

------
kristopolous
I'm trying to do a historiographical quality history of computers, the kind of
scholarship you might see with say the French Revolution or the Roman Empire.

What's more, my history effort ends around 1937; it's the BC part of
computing that set the stage, the build-up to computing.

Episode 0 lays it all out ([http://comphistpod.com/introducing-the-computer-
history-podc...](http://comphistpod.com/introducing-the-computer-history-
podcast/))

I'm doing it in podcast form because it's 2020. This is going to be multiple
years and I'm totally fine with that.

I divide the history up into multiple facets each with separate timelines.

Currently I'm going all the way through "electric communications" from the
electric spark up through relay networks talking about switching, encoding,
error correcting, signalling, all the important developments along the way.

In 2021 I'll close that out and do the same for storage (music boxes, looms,
etc.) and computation, starting with clocks, Pascal's mechanical calculator,
and going forward from there: mechanical registers, overflow, adders, etc...

Each one is going to take at least a year or so.

I'm already about 2 months in to recording, about 6 months into the project.
This week will be Du Fay, Bose, Desaguliers, Watson, and the electric wire.
Next week will be Leyden jars. This is a long, slow project, and AFAIK it has
never been done before.

I'm doing the odd episodes as a timeline and the even episodes as diversions
and discussions to keep things entertaining and light.

This is the first time I'm talking about it publicly. It's at
[http://comphistpod.com](http://comphistpod.com)

~~~
uhhyeahdude
What an ambitious and important project! I love the concept, and that it is
meant to be a "long, slow project."

It probably should be paced slowly due to the scope and complexity of
material. I think a slower pace should also help "smooth" the exponential
change rate of technological progress into a narrative listeners can follow,
without sacrificing too much detail for the sake of listenability.*

I'm going to sub right now.

*I listen to a lot of podcasts, and this happens more than it should, IMO.

Edit: I haven't seen a podcast with its own git repo before, but it makes a
lot of sense, and I imagine it is immensely helpful as a creator. I took a
quick glance to satisfy my curiosity, and the notes I looked at (show notes)
were comprehensive and really well done. This is inspiring, no joke; I have
renewed enthusiasm for my own podcast-to-be. Thank you for that!

~~~
kristopolous
It's all scripted. Google books and archive.org have been great resources.

There are also a couple of torrents of 17th-19th c. journal articles. I've
made my own SQLite databases from the meta information and ran OCR over them
to make things searchable.

I'll be putting more of these tools in the repo and eventually put the search
systems online for the general public. Google's navigation of annualised
volumes is a joke so I'm going to do better.
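
For the searchable-OCR part, SQLite's FTS5 extension handles this sort of
thing directly; a minimal sketch, with invented rows standing in for the real
corpus (FTS5 is compiled into most modern SQLite builds, but not all):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# An FTS5 virtual table indexes every column for full-text search.
conn.execute("CREATE VIRTUAL TABLE articles USING fts5(volume, year, body)")
conn.executemany(
    "INSERT INTO articles VALUES (?, ?, ?)",
    [
        ("Phil. Trans. 38", "1734", "Mr Du Fay on the two kinds of electricity"),
        ("Phil. Trans. 45", "1748", "Mr Watson's experiments on the Leyden phial"),
    ],
)

def search(query):
    """Full-text search over the OCR text, best matches first."""
    return conn.execute(
        "SELECT volume, year FROM articles WHERE articles MATCH ? ORDER BY rank",
        (query,),
    ).fetchall()
```

Against a few hundred annualised volumes of OCR text, this alone already beats
paging through scanned volumes by hand.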

My dream of dreams is to be able to automatically hyperlink the footnotes in
the old texts. I think it's possible, long term project though. (Example:
[https://play.google.com/books/reader?id=c3QgKCXX4nIC&hl=en&p...](https://play.google.com/books/reader?id=c3QgKCXX4nIC&hl=en&pg=GBS.PA130))

I've also cleared out a closet and installed acoustic foam. I tested a number
of microphones... I'm sure acting like it's a real thing. Hopefully I'll keep
at it.

The hardest part has been trying to get it done on a weekly schedule. I don't
have this week's done for instance. There's two parts I really want to fit in
and I need to find a place for them. I'm going to try to take a nap and get up
in an hour or two and work a bit more.

I don't know if I'll be able to pull the weekly schedule off honestly. I'm
going to try to sweat it out a bit longer and hopefully I'll get faster as I
accumulate more notes, references and materials.

~~~
uhhyeahdude
I'll just reiterate that I find it quite impressive. The above just reinforces
that.

I mentioned that I listen to a lot of podcasts; have been, in fact since
~2006. I've watched the evolution of the medium over the years, have seen many
pods come and go, and have witnessed many content creators experiment with
different methods of attaining/maintaining/interacting with listeners.

Later on, as the form started to mature and gain more of an audience, the
issue of monetization became more and more important and necessary; I've seen
many experiments focused on that facet as well. This was still before
"podcast" had entered the vernacular for most people I would classify as tech-
adjacent--the general public was still not really aware of podcasting. Those
who were didn't exactly advertise their listening habits yet, and podcasts
were almost like an embarrassing thing one did in private.

You've done so much correctly, at least from my observations over the years,
from the get-go.

If some of your ideas come to fruition, it would be a boon to those who'd like
to make a podcast of their own, especially if technical or academic in nature.
I love the work you've done, appreciate the open distribution of your
expanding toolkit, and also want that auto-hyperlinked footnotes a whole lot.

I am doing what I can to encourage you for two reasons. One, I am personally
interested in the subject(s) being covered, and I think projects like this are
needed if we aren't going to lose a lot to the mists of history in the face of
accelerating acceleration, to borrow a phrase. Two, I want the tools for my
project and do not possess the requisite skills to make them myself, although
I'm working on it.

So, selfish but genuine encouragement...

------
HiroshiSan
Simply put, earning a living. I didn't travel the conventional path of high
school > university > job, and so I'm struggling to get my shit together. The
concept of earning a living (outside of making minimum wage) is so foreign to
me that I really don't know how I'm going to do it in a reasonable (< 5 years)
amount of time.

------
jmpman
I’m working on a way to design free standing domes using mortar-less bricks.
The idea is to build a cut-list for each brick, such that when stacked, there
are no gaps, and the entire thing is held in place with just gravity (maybe
the first layer is permanently attached to the ground)

I’m looking for a good framework to simulate the physics and visualizations.

~~~
dhosek
My father-in-law lived in Mexico and had his home custom-built. He was fond of
brick domes (with mortar). I can remember being in my wife's room with her
staring at the ceiling when we went to bed wondering how the hell they managed
to make that roof without the whole thing collapsing.

------
caviv
I will be honest. I really don't get it. How come I am working harder, and
more hours, and studying more, but I don't earn more? It should be a direct
correlation. Shouldn't technology and automation and all of this good stuff
make us work less but earn more?

~~~
_theory_
This has been a philosophical concern as well as an economic one for a while
now. John Maynard Keynes had some notable thoughts on the future of work
leading to a utopia of less work and the same/more reward (and he was
incorrect).

But rather than get into the political/economic/philosophical argument, there
might be an individual solution: people don't pay for what takes the most
work, they pay for what is in the most demand. These are often not the same
thing, and you can leverage that.

------
dsteinman
I am attempting to bring 100% client-side speech recognition to the web:

[https://github.com/jaxcore/bumblebee](https://github.com/jaxcore/bumblebee)

Although the first release is not officially out yet, the NodeJS code is
working and you can install the development version of the app server and try
out the hello world app locally.

The solution involves running Mozilla DeepSpeech inside an Electron desktop
application with a websocket server and client API that NodeJS scripts can
interact with, to receive speech recognition results, utilize "alexa" style
hotword commands, and text-to-speech. The electron app handles all the heavy
stuff, and you just use a simple API.

A web browser extension can also make use of this API to bring these
capabilities to web sites, but that part isn't finished yet.

~~~
IshKebab
It's not really "the web" if you have to use Electron and Node, surely?
Wouldn't it make more sense to do it with web workers and wasm?

~~~
dsteinman
The web browser extension would communicate with the electron app server,
NodeJS would not be needed in that scenario (the electron app includes the
nodejs server code). You can write your web voice app with static client-side
JavaScript which communicates with the Electron server through the browser
extension.

Web Page <-> Bumblebee JS API <-> Bumblebee Extension <-> Bumblebee Electron
App (DeepSpeech)

DeepSpeech with the pretrained English model is enormous (1.4GB); it's not
feasible to load it into a web worker. It can run on a server, but then every
website would have to run its own server-side speech recognition, which is
difficult and expensive to scale.

------
crb002
JHipster style templating for SAAS onboarding design patterns.

UX description language for forms that respects high level constraints.
Compiles to desktop browser, phone browser, and Alexa layouts.

Solving the complexity of matrix matrix multiplication by brute forcing the
lower bound with semigroup combinatorics.

DSL for linear logic.

Ending Iowa’s criminalization of “annoying” speech. (Iowa Code 708.7)

Exposing Polk County Iowa Sheriff Kevin Schneider torturing inmates with
denial of basic medical care.

Exposing pure nepotism corruption between Iowa Attorney General Chief of Staff
Eric Tabor and his sister Iowa Court of Appeals Judge Mary Tabor (mom of
@ollie).

Exposing that the prosecutor on Tracy Richter’s murder trial had relations and
ended up marrying the daughter of his star witness Mary Higgins - and that the
blood spatter expert Englert is a known fraud who wrongfully convicted David
Camm and Julie Ray Harper.

~~~
minkzilla
With the last three how are you going about that?

------
mcaswell
Loneliness! We have really lost our sense of community with those around us so
I’m trying to see if we could supplement that with strangers on the internet.

~~~
vfinn
I think we should get away from our computers, because the Internet made this
issue worse, and it's continuing to do so.

------
eitland
Getting time to do anything in between kids, work and whatever else is
expected from me is the actual hard part for me.

The technical aspect is mostly fun in my case :-)

~~~
baxtr
If you ever find out, let me know. The only solution I’ve found is: late at
night!

~~~
eitland
That never worked for me, I always needed some sleep around midnight. Usually
I used to be up around 0400 but the last few months it has been more like just
before the kids wake up.

Holiday started on Friday afternoon, and this morning I woke up 20 or so
minutes ago, around 0230, but I will probably sleep some more around 06-08 to
be able to do anything meaningful after my kids get up.

(And that is probably a good tip: if late nights don't work for you, try very
early in the morning.)

------
kappuchino
Making leaking safe(r).

After the Twitter account of DDoSecrets got shut down (due to BlueLeaks),
this got me thinking: how would you leak / provide data without being
directly attributable? (At least like a retweet: not your tweet, just
amplified.)

And how to add some resilience and protection to the distribution, since there
were indicators that the torrent and download of the leaked data were being
attacked.

So far, I have come up with an encryption matryoshka: you distribute leaks
without telling what's inside, and gradually enable a few people to look
inside, until they are public.

All that's missing is a better document describing it and a command line tool
to help walk through the multi-level encryption ... so there is 90% still to
do ¯\_(ツ)_/¯.

~~~
dewey
Not really sure what this is trying to solve. If it's some legitimate leak of
public interest then the organizations active in this space often have a tip
line / encrypted drop where you can put it.

If it's something that's not of interest to many but you want to put out there
for some reason then what's stopping you from uploading it to some random one
click hoster and posting the link in random places on the internet?

------
ChicagoDave
Table balancing in an online poker tournament.

It's not that simple.

Keep tables balanced, move players from high-numbered tables to the lowest-
numbered tables, try to move players to similar positions, mark players as
waiting if they drop in the small blind, and skip the button next hand.
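
The size-balancing core, stripped of all the seat and blind logic that makes
the real problem hard, can be sketched like this (names invented):

```python
def rebalance(tables):
    """One balancing pass. `tables` maps table number -> list of player
    names. Moves players off the fullest (preferring high-numbered)
    tables onto the emptiest (preferring low-numbered) ones until no
    table has more than one player above another. Seat position, blind
    posting, and marking moved players as waiting are deliberately
    left out; they are the genuinely hard part."""
    moves = []
    while True:
        fullest = max(tables, key=lambda t: (len(tables[t]), t))
        emptiest = min(tables, key=lambda t: (len(tables[t]), t))
        if len(tables[fullest]) - len(tables[emptiest]) <= 1:
            return moves
        player = tables[fullest].pop()   # real code picks by position
        tables[emptiest].append(player)
        moves.append((player, fullest, emptiest))
```

Each loop iteration strictly shrinks the fullest-vs-emptiest gap, so the pass
always terminates; the contentious decisions are all in which player to move
and where to seat them.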

------
jon_richards
If you repeatedly add 2 and then divide by 2, you approach the number 2.

If you repeatedly flip a coin to either add 2 or divide by 2, what
distribution do you approach?

If you have a distribution, what random processes are identities for that
distribution? Which are stable? Which will reproduce the distribution from any
starting distribution?

It’s related to matrix kernels, but it’s hard to generalize to continuous
numbers. I spent a while looking into it years ago and couldn’t find much.

My goal was to eventually take it a step farther and create simple stochastic
functions for certain behaviors, such as bistable switches, memory registers,
etc.

~~~
mcphage
I would guess that you still get to 2—because the more times you add 2, the
greater the impact of dividing by 2, and the more times you divide by 2, the
greater the impact of adding 2. Is that the case?

~~~
jamesmaniscalco
If you run the random process many times and then look at a histogram of the
vector of outputs, the distribution peaks somewhat higher than 2 [0], but the
output of the random process does not tend towards any particular number or
sequence of numbers.

Indeed, the distribution looks highly discontinuous and fractal. It's a very
intriguing question. I suspect thinking in terms of rings (abstract algebra)
would be helpful.

[0] "somewhat higher" - I think this is because the sequence "add two then
divide by two" gives you back your input when you start with 2, while "divide
by two then add two" does when you start with 4; the distribution should peak
between these two "stable" points. Exactly where the distribution peaks
probably depends on your histogram bin width.

~~~
mcphage
Huh—that _is_ interesting!

------
closed
Working on siuba, a data analysis tool for Python. It's a port of the R
library dplyr, and it can produce SQL queries!

I've programmed in python for much longer than R, and really want to be able
to move at the same speed when using python for data analysis :o.

It's a weird problem though because the two languages have basically opposite
approaches to DataFrames. pandas has a very fat DataFrame implementation, R an
extremely minimal one. (Pros and cons to both approaches).

[https://github.com/machow/siuba](https://github.com/machow/siuba)

------
hellomoto1242
Trying to learn enough to build a table-top DNA sequencer with a Raspberry Pi.

------
butz
Procrastination. But I'll probably start working on it tomorrow.

------
dave_sid
I’m trying to solve the hard problem of finding a new job since COVID-19. It’s
not going very well :-D

~~~
rxsel
What’s your experience, location, etc?

------
shakna
Upscaling videos.

This is a hard problem. It may not even be perfectly solvable at any time in
the future.

When you increase the size of any kind of raster image, you're creating new
information from the old.

There have been some pretty good approaches out there, like this one [1][2].
It uses a GAN topology to get some impressive results, but is incredibly
memory intensive and can take a very long time to run.

I've been working on something [0] for a good long while which is a less
expensive approach. Instead of attempting to replicate everything 1:1, it
intentionally allows some detail loss, whilst attempting to preserve
everything important.

It's not ready for the public, and video still needs some significant
improvements to remove some of the artifacts. But I've released one TV series
upscaled with it [4], thus far.

But as it stands, at x2 and x2.5 scales, it does pretty well, with the average
person preferring it to most other resizing methods. It doesn't reach the
quality of the GAN approaches, but you're looking at an average of 12 seconds
for upscaling, versus 80 seconds for the GAN approach, for the same size
upscaling and what people perceive as the same quality.

It already beats most of the traditional resizing algorithms pretty soundly.
[3]

[0]
[https://git.sr.ht/~shakna/upscaler_analysis](https://git.sr.ht/~shakna/upscaler_analysis)

[1] [https://developer.ibm.com/technologies/artificial-
intelligen...](https://developer.ibm.com/technologies/artificial-
intelligence/models/max-image-resolution-enhancer/)

[2]
[https://arxiv.org/pdf/1609.04802.pdf](https://arxiv.org/pdf/1609.04802.pdf)

[3]
[https://git.sr.ht/~shakna/upscaler_analysis/blob/master/crus...](https://git.sr.ht/~shakna/upscaler_analysis/blob/master/crush.png)

[4]
[https://sixteenmm.org/blog/20200626-Gunsmith%20Hits%20HD](https://sixteenmm.org/blog/20200626-Gunsmith%20Hits%20HD)

------
tromp
Trying to find the next entry in this list of functional Busy Beaver numbers

[https://oeis.org/A333479](https://oeis.org/A333479)

------
aerovistae
Telling where the output of an individual stdout print statement ends and the
next one begins, so that I could color code my terminal with alternating
colors to more easily tell apart individual log messages from a running
process by visual differentiation. Turns out this is impossible! Aside from
time passing in between outputs, there's no way to tell. It's just a
continuous byte stream with no terminating character or pattern.

~~~
dredmorbius
Terminal driver + clock + colour-coding? Strace w/ individual write()
instructions identified? DTrace?

~~~
aerovistae
Can you elaborate on this? Are you saying these tools could be used to wrap a
running process in such a way that you could tell apart individual print
statements during runtime? (I haven't used the tools you suggest.)

~~~
dredmorbius
Yes, basically.

The strace option should be doable. Terminal & timer a bit more work.

So long as the print statements come between other blocks of code, though, I'd
look at strace (or dtrace, gdb, etc.).

Assuming Linux/Unix OS.
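
A pipe-based approximation of this (cruder than strace, since a pipe can
coalesce writes, but often good enough in practice) is to read the child's
stdout in raw chunks and recolor per chunk. Names and colors here are
invented:

```python
import os
import subprocess
import sys

COLORS = ["\033[36m", "\033[33m"]   # cyan / yellow, alternating
RESET = "\033[0m"

def colorize_chunks(chunks):
    """Alternate a color per chunk. Each chunk approximates one write()
    from the child; exact write boundaries need strace/dtrace."""
    return [COLORS[i % 2] + c + RESET for i, c in enumerate(chunks)]

def run_colored(cmd):
    """Run cmd with an unbuffered pipe and recolor its output chunk by
    chunk as it arrives."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, bufsize=0)
    i = 0
    while True:
        chunk = os.read(proc.stdout.fileno(), 65536)  # one read ~ one write
        if not chunk:
            break
        sys.stdout.write(COLORS[i % 2] + chunk.decode(errors="replace") + RESET)
        sys.stdout.flush()
        i += 1
    proc.wait()
```

If the child block-buffers its output (as most programs do when not attached
to a tty), running it under a pty or `stdbuf -o0` restores per-print chunks.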

~~~
aerovistae
I'll research it, thank you. If I can't figure it out (I'm not great at
systems-level programming), are you available for a small amount of contract
work?

~~~
dredmorbius
Email contact is in my description.

Let me know what OS / processes you're working with.

I'm much more a shell hacker than systems programmer but this smells doable.

------
hooby
Would Fremen Stillsuits like in the Dune novels actually be possible?

These suits allow one to survive for weeks out in the deep desert by catching
and recycling all of the body's lost water. Making sure no perspiration can
escape is doable. Filtering the sweat to produce clean, salt-free water should
be possible as well; membranes for water desalination already exist today.
Having all the required pumping action provided through walking and breathing
is a mechanical problem that should be theoretically solvable. The big,
unsolved problem I see is heat.

The book says that the suit's layers closest to the skin allow the sweat to
evaporate and thus provide cooling to the body. But the water then has to
condense again somewhere. From my (limited) understanding of the laws of
thermodynamics, the amount of extra heat created through condensation should
be exactly equal to the amount of cooling the evaporation provides, making the
whole cycle a zero-sum affair. And for this cycle to work in the first place,
the skin would have to be of a higher temperature than the layer where the
condensation occurs. If the desert heat is above body temperature, we'd need
some sort of heat pump, like in a fridge. Using changes in pressure and
density (through a compressor), you could cool the suit's inside below body
temperature while heating the outside above ambient temperature, which is
necessary to actually give off heat to the outside.
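
The compressor requirement can be bounded with the ideal (Carnot) coefficient
of performance for cooling:

```latex
\mathrm{COP}_{\mathrm{cooling}} \le \frac{T_c}{T_h - T_c}
```

With the suit's inside held at roughly T_c = 300 K and the desert at
T_h = 320 K, the ideal limit is COP <= 15: at best about 1 W of compressor
work per 15 W of heat pumped out. Real compressors, and especially Peltier
elements, fall well short of this, and the limit collapses as T_h - T_c grows.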

Those compressors are heavy and power-hungry though, and you'd need
additional high-pressure water lines through the suit, increasing the bulk of
the whole thing considerably. Future technology might be more miniaturized and
more energy efficient, but still... Maybe thermoelectric cooling (exploiting
the Peltier effect) would be a better choice for a suit like this. Then you'd
have a light inner layer that allows for air circulation, so that the sweat
can evaporate on the skin and condense again at the thermoelectrically cooled
middle layer, where it's collected and pumped away to the membrane filters and
catchpockets.

The outer layer of the suits would have to be made of flexible solar cells,
in order to provide the electricity required for cooling the middle layer.
Not sure if that could work though. Those solar cells produce a lot of heat
on their own, and they sit right on top of the heat-producing side of the
thermoelectric cooler, with the sun burning down on them as well; that's a
lot of heat right there at the outer layer. I don't think thermoelectric
cooling can overcome that high a temperature difference. It works best when
the temperature difference between the cool side and the warm side is pretty
small.

~~~
Kliment
Remember the desert is very cold at night. So you don't need to spend that
much energy, as long as you can store heat.

~~~
hooby
An interesting idea...

How much heat a material can store is called its thermal mass or heat
capacity. One of the materials that can store the most heat per kilogram
happens to be regular old water.

I wonder if anyone here could do the math of how many kilograms of water would
be needed to absorb the day/night temperature changes of a desert and keep a
human body inside at an acceptably stable temperature.

I'm kinda afraid the weight would be more than enough to crush any human into
a bloody pulp?
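
A back-of-envelope attempt, with the heat load and the allowed temperature
swing loudly invented:

```python
# Assume the suit must buffer a net heat load of ~200 W (metabolism
# plus solar gain through the suit) across a 12 h desert day, and that
# the water reservoir is allowed to swing 30 K between day and night.
# Both numbers are guesses chosen only to get an order of magnitude.
SPECIFIC_HEAT_WATER = 4186      # J/(kg*K)
net_load_w = 200                # W, assumed
hours = 12
allowed_swing_k = 30            # K, assumed

heat_to_store = net_load_w * hours * 3600          # joules over the day
mass_kg = heat_to_store / (SPECIFIC_HEAT_WATER * allowed_swing_k)
```

That comes out to roughly 70 kg of water: comparable to the wearer's own
mass, so not literally crushing, but far too heavy to carry. Which is exactly
why latent-heat (phase-change) storage looks more attractive than sensible
heat in water.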

~~~
Kliment
The most effective way to store a whole lot of heat in very little mass is to
use phase changes - some materials take an enormous amount of energy to
transition from solid to liquid and vice versa. This can be used to pack a lot
of thermal energy into very little volume with very little temperature change.

------
hazeii
How to measure pedal (and/or crank) angle on a moving bicycle to 0.05 degree.
Anyone?

~~~
Animats
Why do you need 0.05 degree precision?

You can use Hall effect sensors to sense chainwheel teeth. Two sensors and you
can get quadrature and direction. I've done that on a mobile robot. With
analog-output Hall effect sensors and some processing, you can get sub-degree
precision, although 0.05 degree is asking a lot.
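
The two-sensor quadrature scheme can be sketched as a small decoder. Note the
resolution gap it highlights: 0.05 degrees means 7200 counts per revolution,
while quadrature over a ~50-tooth chainring gives only ~200 counts, so the
analog interpolation between teeth is doing almost all the work.

```python
# Transition table for a 2-bit quadrature signal: encoding each (A, B)
# sample as the state (A << 1) | B, every valid Gray-code step moves
# the position by +1 or -1.
STEP = {
    (0b00, 0b01): +1, (0b01, 0b11): +1, (0b11, 0b10): +1, (0b10, 0b00): +1,
    (0b00, 0b10): -1, (0b10, 0b11): -1, (0b11, 0b01): -1, (0b01, 0b00): -1,
}

def decode(samples):
    """Count signed quadrature steps from a sequence of (A, B) samples.
    With N chainring teeth, each tooth period is 4 steps, so
    angle = steps * 360 / (4 * N) degrees (before interpolation)."""
    position = 0
    prev = samples[0]
    for cur in samples[1:]:
        prev_state = (prev[0] << 1) | prev[1]
        state = (cur[0] << 1) | cur[1]
        if (prev_state, state) in STEP:
            position += STEP[(prev_state, state)]
        prev = cur
    return position
```

Invalid transitions (both bits changing at once) are silently ignored here;
real firmware would flag them, since they indicate missed samples.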

~~~
hazeii
We're trying to measure aerodynamic drag. The thing is, there are huge
vertical forces (the weight of the rider pressing down on the pedal) and
relatively small horizontal forces (due to drag). So if we don't know the
angle accurately, we can't separate out the components. Make sense?

------
ParanoidShroom
Making people aware that they should test their drugs. MDMA testing is
illegal in lots of places; we ship test kits where available.

And I created a reverse image search to look up pill data:
[https://play.google.com/store/apps/details?id=be.harmreducti...](https://play.google.com/store/apps/details?id=be.harmreduction.pillscanner&hl=en)

------
binrec
Decomposition of glyph sequences in phonetic transcription alphabets (e.g. IPA
representations of phonemes) into phonological feature sets.

Existing attempts to solve this problem are hackish and difficult to
customize: they typically treat each glyph as a set of features and handle
diacritics and digraphs by naive composition and awkward special-casing. They
also aren't written with an eye to customization in either alphabet or
featural model: they typically map an ad-hoc extension of IPA to an ad-hoc
featural model.

I think a natural improvement would be to develop a specification language in
which each individual glyph (base character or diacritic) is a _function_ from
a feature set to a feature set, with Haskell-style pattern-matching to allow
graceful handling of digraphs and context-sensitive diacritics - although
syntactic sugar for digraphs would in practice be required for usability.
Ideally it would also be possible to map feature sets to feature sets, in
order to preserve a human-readable intermediate form (e.g. "unvoiced dental
plosive") which is later mapped to the more customary binary features.

In addition to the utility for phonological databases and the like, this would
also enable more rigorous testing of crosslinguistic feature sets: every
feature set is implicitly a set of proposed linguistic universals and
existence claims. If two segments have the same featuralization, they should
never contrast in a given language; if they do, the featuralization is
unsound. And if a featuralization proposes the existence of many contrasts
that aren't attested anywhere, it could probably stand to be optimized.

But most of my interest in this comes from my work on a phonological database.
The database needs some method of handling featuralization to facilitate
feature-based search, and I just haven't seen a good way to do that yet.

------
tristan1977
Local solutions to stop humans from destroying their environment.

We've implemented recycling maximally and created programs for reuse, repair,
toxic waste, general waste reduction, battery disposal, returnables, etc. But
it isn't really making a significant impact.

People still spend most of their disposable income buying things they don't
need that can't be easily repaired, or that they can't be bothered to repair,
or that they discard due to fashion, or because they are mostly packaging, etc.

We can't get them to stop buying useless stuff that they don't need. We can't
stop them from buying new cars every few years.

We can't get them to spend money on quality infrastructure, like insulation,
that would reduce their energy needs by about 80%.

We can't get them to stop going to restaurants or getting food delivery or
take-out, which expends a multiple of the energy and greenhouse gases of
cooking your own locally-sourced food.

We can't get them to understand that we're all going to die in fairly short
order unless we bring the environmental disaster under control.

~~~
dredmorbius
_Local solutions to stop humans from destroying their environment._

The trouble is that much of the problem is systemic and global, and w/o
addressing that, even good local solutions are ultimately self-defeating.

Jevons paradox, Gresham's law, race-to-the-bottom, etc.

------
erwinh
1: trying to build tools to give people a more tacit understanding of
satellites & space debris: [https://space-search.io](https://space-search.io)

2: developing easy-to-use tools for parametric generative design to enable
hyper-personalisation:
[https://hyperobjects.design](https://hyperobjects.design)

~~~
ohnope
You’re probably already aware of it, but for generative design tool
inspiration check out Grasshopper. It’s for building geometric algorithmic
designs based on various inputs and constraints. It also has some tools for
exploring permutations / genetic algorithms.

------
geoah
Trying to figure out what the next version of the internet might look like and
to start building it :p PS: structured data and no blockchain :)

~~~
fl0under
What about the (still in development) SAFE network?

~~~
geoah
Safe is interesting, but it’s still pretty much a ledger. I am not sure this
will work at current-internet-scale, let alone with a couple of decades'
worth of data. In addition to that, they still pretty much keep building on
top of the web, which is really not accessible to machines.

Think more in terms of IPFS/IPLD.

------
justanothersys
I’ve been coming up with a new pedagogy for the fundamentals of interactive
and computer art, and trying it out with kids on TikTok. Maybe you saw the
software I put out a few weeks ago on HN called No Paint:
[https://nopaint.art](https://nopaint.art). However, most of the time I’m
using pen and paper for this work.

------
devchris10
A prediction/promise competition to increase accountability in general.

Politicians/companies spew words and we tend to accept them because they're
"authorities." I believe the best way to increase "skin in the game",
accountability, and humble expertise is to predict and have your predictive
performance be visible.

There are prediction markets to trade real money with, but none that I've
seen that ranks your performance against others.

I'm building a platform where it's free to enter/submit predictions in
categories of interest. Top ranked players will receive prizes.

I think once players can measure themselves against "authorities" (whose
public predictions will be scraped), both will become more accountable.

After predictions, I aim to work on "promises" since they both increase future
skin in the game.

If you've read this far, you should consider joining :) -
[https://oraclerank.com](https://oraclerank.com)

------
Gollapalli
I'm trying to run jobs on a timer when the jobs are on a cluster and I don't
know which server the job is assigned to.

It's essentially a problem of distributed timers and distributed transactions.

(If anyone has any resources on how similar problems have been solved in the
past, I'd appreciate it.)
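One common pattern: keep the timers in one shared store and let every node try to atomically claim due jobs, so it doesn't matter which server a job was scheduled from. A sketch in Python with an in-memory stand-in for the shared storage (a real system would use a database row, Redis, or similar; all names are made up):

```python
import threading
import time

class TimerStore:
    """Stand-in for shared storage (a DB table or Redis in practice).
    Holds (job_id, due_at) entries; claim_due() is atomic, so only one
    node ever wins a given job, regardless of where it was scheduled."""

    def __init__(self):
        self._lock = threading.Lock()
        self._due = {}          # job_id -> due_at timestamp
        self._claimed = set()

    def schedule(self, job_id, due_at):
        with self._lock:
            self._due[job_id] = due_at

    def claim_due(self, now):
        """Atomically claim every job whose timer has fired."""
        with self._lock:
            ready = [j for j, t in self._due.items()
                     if t <= now and j not in self._claimed]
            self._claimed.update(ready)
            return ready

def worker(node_id, store, executed):
    # Every node polls the same store; the atomicity of claim_due
    # guarantees exactly-once dispatch without knowing job placement.
    for job in store.claim_due(time.time()):
        executed.append((node_id, job))

store = TimerStore()
store.schedule("report", time.time() - 1)   # already due
executed = []
threads = [threading.Thread(target=worker, args=(i, store, executed))
           for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# exactly one of the four nodes runs the job
```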

~~~
pmiller2
Maybe not what you’re asking for, but this should at least give you an idea
what to look for:

[https://en.wikipedia.org/wiki/Lamport_timestamp](https://en.wikipedia.org/wiki/Lamport_timestamp)

[https://en.wikipedia.org/wiki/Clock_synchronization](https://en.wikipedia.org/wiki/Clock_synchronization)

[https://en.wikipedia.org/wiki/Paxos_(computer_science)](https://en.wikipedia.org/wiki/Paxos_\(computer_science\))

------
ta17711771
Keeping a working WireGuard configuration with 4 peers, for both VPN-through-
house and remote LAN access.

~~~
unixhero
I need to get to setting up WireGuard as well. Honest to god homelabbing.

------
sneeuwpopsneeuw
My short-term thing is this mathematical programming puzzle
[https://projecteuler.net/problem=160](https://projecteuler.net/problem=160)
from Project Euler, where you need to calculate a very large factorial.
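For reference, the brute-force version of that puzzle (last five non-zero digits of n!) is easy to state in Python with big ints, and is handy for checking cleverer modular approaches; the real problem's n = 10^12 demands something far smarter:

```python
from math import factorial

def last_nonzero_digits(n, k=5):
    """Last k non-zero digits of n! (brute force with big ints;
    fine for small n, hopeless for the puzzle's 10^12)."""
    f = factorial(n)
    while f % 10 == 0:     # strip the trailing zeros
        f //= 10
    return f % 10**k

# factorial(20) = 2432902008176640000 -> strip zeros -> ...17664
```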

My life goal is to bring back into current game development more of the
programming and design techniques that we as a society used to make the best
PS1 and Game Boy games and old-school animations. (Any tips and advice on how
I could help improve the game development industry are welcome. Next year I
will be doing my master's, so I'm also still looking for a subject for that.
The last couple of months I have been experimenting with watercolor effects
in OpenGL, so I'm looking for something like that.)

------
ChrisHardman29
I'm trying to tackle the problem of information overload by providing a
service that extracts, summarises and curates the key insights from books,
articles and research: [https://www.sivv.io/](https://www.sivv.io/)

~~~
dredmorbius
I'm more convinced the solution exists in information rejection.

Though a good Otletian content metadata store would likely be useful.

------
benwills
1: How to download the internet. 2: How to parse HTML in a way that gives it
consistent structure and meaningful interpretations. 3: Combine 1 and 2 and
make it searchable.

I’m about 12 years into it and expect I have a couple/few more to go.
Hopefully people still use HTML by then.

~~~
kristopolous
Anything to look at?

~~~
benwills
Not particularly. I will in a few months, though.

------
dvt
One-click API data ingestion.

Just about every company these days has their data spread out all over the
cloud: marketing data on Facebook and Google, social media data on Twitter and
Snapchat, customer data on Salesforce, sales data on Shopify and Amazon, and
so on. Most companies will either (a) hire a team of data engineers to collect
and exploit this data; (b) hire an expensive consulting firm to build an
ETL pipeline; or (c) let this data rot in the cloud. For the past 6 years,
I've worked as a data engineer (where I became intimately familiar with
Facebook and Salesforce APIs), and I'm confident that I can automate around
80% of my job.

It's clear that the value prop is astronomical: just _one_ data engineer will
run you at least 150k/yr and most of the work will involve maintaining API
data pipelines. Having a "one-click" solution where one simply provides an API
key and what data they'd like to warehouse (e.g. marketing data, social media
data, customer data) and where (FTP, S3, Redshift, DynamoDB) would be
invaluable to companies that want to make sure they exploit this treasure
trove.

Some hard/interesting problems:

\- API specs constantly change (Facebook, for example, has a quarterly update
schedule)

\- Inferring JSON schemas is hard

\- Data integrity is hard (data types sometimes change willy-nilly)

\- API rate limiting is tricky

\- Resilience is hard

\- Recovering old data (especially for certain services) might be impossible
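To illustrate the schema-inference point: a minimal sketch of inferring and merging schemas across records, where any disagreement between samples forces a union type (hypothetical code, not an existing tool):

```python
def infer(value):
    """Infer a minimal schema for one JSON value."""
    if isinstance(value, bool):          # check bool before int!
        return {"type": "boolean"}
    if isinstance(value, (int, float)):
        return {"type": "number"}
    if isinstance(value, str):
        return {"type": "string"}
    if value is None:
        return {"type": "null"}
    if isinstance(value, list):
        item = {"type": "unknown"}
        for v in value:
            item = merge(item, infer(v))
        return {"type": "array", "items": item}
    return {"type": "object",
            "properties": {k: infer(v) for k, v in value.items()}}

def merge(a, b):
    """Merge two schemas; disagreements become union types --
    exactly the 'data types change willy-nilly' problem."""
    if a["type"] == "unknown":
        return b
    if b["type"] == "unknown":
        return a
    if a == b:
        return a
    if a["type"] == b["type"] == "array":
        return {"type": "array", "items": merge(a["items"], b["items"])}
    if a["type"] == b["type"] == "object":
        keys = set(a["properties"]) | set(b["properties"])
        return {"type": "object", "properties": {
            k: merge(a["properties"].get(k, {"type": "unknown"}),
                     b["properties"].get(k, {"type": "unknown"}))
            for k in keys}}
    return {"type": "union", "options": [a, b]}

rows = [{"id": 1, "tags": ["a"]}, {"id": "0002", "tags": []}]
schema = merge(infer(rows[0]), infer(rows[1]))
# 'id' comes back as a union of number and string
```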

Everyone is becoming keenly aware that letting the data rot carries a higher
and higher opportunity cost. Not warehousing your
own data is simply not a tenable option any more: the world’s most valuable
resource is no longer oil, but data[1].

[1] [https://www.economist.com/leaders/2017/05/06/the-worlds-
most...](https://www.economist.com/leaders/2017/05/06/the-worlds-most-
valuable-resource-is-no-longer-oil-but-data)

~~~
willnewby
I'm actually in the middle of building something really similar, but targeted
at manufacturing/distribution companies (i.e. they have an ERP/inventory
system, how does that data get onto the eCommerce website?)

Would you be interested in talking with me about how you're building your
system? Email in my profile.

Good Luck!!

------
infinitebit
Detecting the tempo of a rhythm a drummer is playing in real time (including
tracking variations/drift in tempo) based on the timestamps of each drum hit
(and starting at an assumed tempo). I've found in-depth resources on problems
that are _similar_, but not the same (waveform data rather than timestamps,
all-at-once rather than real time, etc.). I'm trying to keep myself open to
the fact that the solution might be incredibly simple, and just unrelated to
any path I've gone down. But it's led me down some interesting paths that I'm
enjoying, so I'm also just taking that for what it's worth.
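One simple baseline that may be worth ruling out first: exponentially smooth the inter-onset intervals, snapping each interval to the nearest whole number of beats so missed or extra hits don't wreck the estimate. A rough sketch (the class and parameter names are made up):

```python
class TempoTracker:
    """Naive online tempo tracker fed hit timestamps in seconds.
    Smooths inter-onset intervals toward a running beat estimate;
    snapping to the nearest beat multiple tolerates skipped hits."""

    def __init__(self, bpm_guess, alpha=0.1):
        self.beat = 60.0 / bpm_guess   # seconds per beat
        self.alpha = alpha             # higher = follows drift faster
        self.last = None

    def on_hit(self, t):
        if self.last is not None:
            interval = t - self.last
            # how many beats fit in this gap (handles missed hits)
            n = max(1, round(interval / self.beat))
            self.beat += self.alpha * (interval / n - self.beat)
        self.last = t
        return 60.0 / self.beat        # current BPM estimate

tracker = TempoTracker(bpm_guess=120)
# drummer actually playing ~110 BPM (about 0.545 s between hits)
bpm = 0.0
for i in range(200):
    bpm = tracker.on_hit(i * 60.0 / 110)
# bpm has drifted from the 120 guess toward ~110
```

The alpha knob trades responsiveness to genuine tempo drift against jitter from sloppy individual hits; real systems (e.g. beat trackers over onset streams) usually do something richer than this, but it's a useful floor to compare against.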

------
dewey
I'm working on a website that helps me to keep track of the podcasts that I'm
listening to across platforms (Think Last.fm / Trakt.tv but for podcasts) by
automatically importing the listening data from various podcast apps.

------
ggerganov
Working on an algorithm for recovering the text you type just by analyzing the
keyboard sound captured through the computer's mic (i.e. acoustic
eavesdropping). The hard part is doing it without having training data for
the keyboard.
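Without labelled data, one approach discussed in the acoustic-eavesdropping literature is unsupervised: cluster the keystrokes by acoustic similarity, then attack the resulting label sequence like a substitution cipher using language statistics. A toy sketch of the clustering half, with synthetic features standing in for real audio:

```python
import numpy as np

def cluster_keystrokes(features, k, iters=50):
    """Lloyd's k-means with deterministic farthest-point seeding.
    features: (n_hits, n_dims) array of per-keystroke acoustic
    features (e.g. FFT bins of each hit). Returns a label per hit;
    the label sequence is then cracked like a substitution cipher."""
    centroids = [features[0]]
    for _ in range(k - 1):   # seed: repeatedly take the farthest point
        d = np.min([np.linalg.norm(features - c, axis=1)
                    for c in centroids], axis=0)
        centroids.append(features[np.argmax(d)])
    centroids = np.array(centroids)
    for _ in range(iters):
        dists = np.linalg.norm(features[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = features[labels == j].mean(axis=0)
    return labels

# synthetic stand-in: three well-separated "keys", 5 hits each
rng = np.random.default_rng(0)
feats = np.concatenate([rng.normal(loc, 0.1, size=(5, 4))
                        for loc in (0.0, 5.0, 10.0)])
labels = cluster_keystrokes(feats, k=3)
# all hits of the same key end up sharing a label
```

Real keystroke features overlap far more than this synthetic case, which is exactly where it gets hard.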

~~~
dredmorbius
For which TLA/APT?

------
Addono
Verified location using GPS.

Problem: Currently, users (or attackers) can easily manipulate the location
provided to an app on a phone.

Solution: Use raw measurements from positioning satellites to check if the
location reported by a user actually lines up with the measurements of their
phone.

Why is it hard?

\- Lack of documentation, standardization and support for collecting raw
measurements on phones

\- Processing raw measurements is tricky

\- Finding anomalies in this raw data is even harder

Some of it is working - yay! - and there's also a public API, such that others
can use it too: [https://claimr.tools](https://claimr.tools)

~~~
john4534243
> Currently, users (or attackers) can easily manipulate the location provided
> to an app

This is respecting users' privacy, and it's a feature.

~~~
Addono
Hmm, I see your concern. I'm pretty big on privacy myself, so I feel I should
be able to answer this in a satisfying way.

Most importantly, this always requires the user's consent. On Android you
still need the same location permissions as for normal GPS location
positioning. Hence, as a user you're always free to reject location
permissions same as before.

I have to admit, this tech can also be used for evil purposes. For now, not
all phones support collecting raw measurements - either hardware or software
support is lacking - but if in the future some entity could force you to have
your location verified, then you could no longer lie about it.

~~~
john4534243
If I want my device to lie about my location, for whatever reason, it should
do that; the device should not control me.

------
shadeslayer_
I'm trying to build a storefront experience for building customized home
improvement services. These services can be anything from a new set of
curtains to new tiling for your kitchen or a false ceiling throughout your
home. This is difficult because these services are dynamic in nature, and as
such we cannot sell fixed SKUs like normal e-commerce services do.

We are trying to develop a unified service-building experience wherein the
user will be able to punch in their requirements and get a product tailor-made
for them, along with the estimated price (which comes with ~10% tolerance).
It's tough, but we're getting there.

------
jierenchen
Figuring out how to fill the gap between code search and static analysis
(code checks).

Right now the tools we have for programmatically reading through code are:

1\. Code search, which is fast, but inaccurate/heuristic.

2\. Static analysis, which is slow to run and difficult to write, but very
accurate.

I'm building a tool that is as fast and easy to use as code search, and is as
accurate and expressive as static analysis.

Still just a landing page [0]. Looking to get a public playground people can
mess around with this week.

[0] [https://sourcescape.io/](https://sourcescape.io/)

------
kaushiksrini
I'm trying to build a program to optimally schedule time to work on your tasks
based on your schedule.

I have a bunch of tasks I have to complete by a certain deadline — these
include things like engineering sprint tasks, drafting a design document,
completing an assignment, etc. I have to get these tasks done in between my
regular schedule of meetings, lunch breaks, and rests. I want to get a program
to tell me when I should work on what depending on a task's due date and
priority. If something comes up, I want my schedule to readjust to
accommodate the interruption.
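A greedy earliest-deadline-first sketch of this in Python (all names hypothetical); re-running it with updated inputs whenever something comes up gives the readjustment behaviour:

```python
def plan(tasks, busy, day_start, day_end):
    """Greedy earliest-deadline-first: fill the free gaps between
    meetings with work blocks. tasks: (name, hours_needed, deadline),
    busy: (start, end) meetings in hours of the day. Returns
    (name, start, end) work blocks; re-run on any interruption."""
    # build the free gaps around the busy calendar
    gaps, cursor = [], day_start
    for s, e in sorted(busy):
        if s > cursor:
            gaps.append([cursor, s])
        cursor = max(cursor, e)
    if cursor < day_end:
        gaps.append([cursor, day_end])

    blocks = []
    for name, need, deadline in sorted(tasks, key=lambda t: t[2]):
        for gap in gaps:            # pour the task into open gaps
            if need <= 0:
                break
            free = gap[1] - gap[0]
            if free <= 0:
                continue
            take = min(free, need)
            blocks.append((name, gap[0], gap[0] + take))
            gap[0] += take
            need -= take
    return blocks

busy = [(10, 11), (12, 13)]          # meetings
tasks = [("design doc", 2, 17), ("sprint task", 1, 12)]
schedule = plan(tasks, busy, day_start=9, day_end=17)
# sprint task (earlier deadline) lands first; design doc splits
# across the remaining gaps
```

This ignores priorities and doesn't flag infeasible deadlines; weighting the sort key or switching to an interval-scheduling/ILP formulation would be the natural next steps.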

------
adamnemecek
ECS based
([https://en.wikipedia.org/wiki/Entity_component_system](https://en.wikipedia.org/wiki/Entity_component_system))
GPU-first GUI framework.

~~~
The_rationalist
Name? Does it use skia? Why and when would ECS help for building 2D GUIs?

~~~
adamnemecek
Unnamed yet and not public yet. No, it does not use skia. ECS helps because
it stores data in homogeneous arrays, which is really good for GPUs.

------
ipiz0618
A piece of music playing software that can react to the lead of an instrument.
I'm picturing the software will be able to play a concerto (or simply a duet)
with a real instrumentalist and react to dynamic and tempo changes in real
time, like an orchestra under a conductor.

The same principle can also be used to create a real-time software harmonizer
[1] for live performances, but this problem already has a reliable solution
through hardware.

[1] -
[https://www.youtube.com/watch?v=DnpVAyPjxDA](https://www.youtube.com/watch?v=DnpVAyPjxDA)

~~~
soulofmischief
Are you familiar with Magenta? [0] It's not real-time, but definitely an
important area of research.

Also check out Dan Tepfer [1] who is doing amazing work with an algorithmic
approach to reactive live performances, with great call and response tactics.

I myself am slowly prototyping a fully artificial AI band which can be
orchestrated using very high-level musical ideas and a big helping of
intelligent randomization and algorithms based on music theory.

I've been prototyping in Andrew Sorensen's Extempore [2] and have laid much of
the groundwork like melody/harmony/rhythm generation as well as basic
modulation of these elements to create longer musical structures which
utilize motifs in multiple ways.

Currently it is a matter of shedding pre-computed tables of "nice" sounding
progressions or purely random progressions, and creating a more fundamental
approach which can derive the appropriate progressions from the given user
parameters. I am also expanding the program's ability to generate
aesthetically pleasing and unique singular motifs which drive these
algorithmic compositions.

I have a band leader / conductor module which provides cues and other
synchronized data, which even without advanced motif generation and modulation
still allows for things not currently possible in _any_ non-code musical
production software, such as global dynamics, timing changes, progression
sharing, (eventually) directing impromptu solos, etc.

Reach out via email if you'd like to discuss more!

[0] [https://magenta.tensorflow.org/](https://magenta.tensorflow.org/)

[1] [https://www.youtube.com/watch?v=SaadsrHBygc](https://www.youtube.com/watch?v=SaadsrHBygc)

[2] [https://extemporelang.github.io/](https://extemporelang.github.io/)

~~~
ipiz0618
Thanks for the resources. I'd never heard of Dan Tepfer or Extempore - such a
great way of imagining music!

What I was planning was something simpler - much like generating sounds from a
written score, but like live classical performances, the generated sound
reacts to the cues of the player.

I'm not exactly familiar with Magenta, but the thing I'm currently trying to
implement (at a very early stage) is DeepMind's Wave2Midi2Wave, which is part
of the dataset released with Magenta [1]. I'm not sure whether they've
released any code as well.

[1] - [https://magenta.tensorflow.org/maestro-
wave2midi2wave](https://magenta.tensorflow.org/maestro-wave2midi2wave)

------
nabaraz
I am trying to build a humanoid with a bunch of servos. Back in 2017, I
bought 205 servos at a closeout sale.

Currently, I am trying to hook them together and come up with basic APIs to
control them. Next is printing a 3D model.

~~~
mrfusion
This sounds so interesting! Do you have a blog where I could follow the project?

On that same concept, I was recently thinking PVC pipe might be a cheap way
to build robots. It’s cheap, somewhat light, and seems strong enough for many
tasks.

Any thoughts on how to connect PVC pipes up to servos?

Also will your robot need feet?

~~~
nabaraz
Thanks. I will PM you when I start writing about it.

Yes, it will have a torso and limbs.

My plan is to do something similar to this.
[https://i.imgur.com/8QDENyu.png](https://i.imgur.com/8QDENyu.png)

~~~
mrfusion
Great! My email is in my profile.

------
navs
Maybe not weird but certainly hard. I’m working on a way to build better
habits. I know there are a million apps out there that present the typical
log-a-habit behaviour; however, I see this as mostly punishing.

A positive streak can easily turn into a negative streak.

At the moment I’m brainstorming with a Google Sheet and a few manual tweaks.
I believe there’s a way to help people/myself build a better life by removing
the nasty things (like smoking) and growing the nicer things, like healthy
eating or exercise. But our desire for instant gratification and our lofty
goals get in the way.

------
danielovichdk
An open source search index shared and updated over the bittorrent protocol.

------
ralphc
Related to the time travel idea above. You somehow get thrown into the past,
say 100-150 years. How do you put out a message proving you're from the
future and requesting that time travelers come back and rescue you?

My proposal is to take out classified ads in newspapers, couching it as a code
or puzzle to be solved, and put in dates of major future events you know
about. D-day, Kennedy assassination, Challenger explosion, 9/11\. Top it off
with Murder Hornets and they'll know about when you came from.

------
jamesrcole
I believe that computation, mathematics, information and semantics all share
the same set of simple foundations. And that the means to understanding these
foundations is to look at how physical computational processes use
information from their environment, and produce information that is used to
make "real world" outcomes happen.

I am working on this at the moment, and no, I don't expect that what I write
here will convince anyone. And I don't have any summaries of the work at the
moment.

~~~
lambdatronics
We're in need of an information-theoretic definition of computation or
information processing, in analogy to Shannon's definition of communication.
I'm trying to work it out.

It's clear that there is a relationship between computation and information
via Landauer's principle. It's also clear that it's got to do with
nonlinearity of dynamical systems: "The essence of computation is nonlinear
logical operations." J. Hopfield, PNAS 79, 2554 (1982).

Semantics: in short, 'meaning' is just a mapping (ie, mutual information)
between language and the physical world.
[https://heteroskedasticblog.wordpress.com/2018/01/13/informa...](https://heteroskedasticblog.wordpress.com/2018/01/13/information-
and-meaning/)

~~~
jamesrcole
> _We're in need of an information-theoretic definition of computation or
> information processing_

Could you elaborate? It's not clear to me what you mean by that.

> _It's clear that there is a relationship between computation and
> information via Landauer's principle._

Information, in the sense I'm concerned with, is an inherent part of
computation. You can't have the latter without the former.

> _Semantics: in short, 'meaning' is just a mapping (ie, mutual information)
> between language and the physical world._

That doesn't explain meaning in the case of imaginary or abstract details, or
the system's conception of the meaning.

.

BTW, would I be able to get a copy of that Hopfield paper? I wasn't able to
find a copy of it.

~~~
lambdatronics
Shannon's theory of information doesn't offer a way to tell whether a channel
is doing computation or merely transmitting the information -- the mutual
information merely characterizes how much information goes across a channel,
but is insensitive to any changes to the representation.

OTOH, the algorithmic complexity theory (a la Kolmogorov) doesn't really have
the same generality as Shannon's theory. Flops is not a well-defined measure
of information processing rate for the brain, for instance.

I got inspired by the "integrated information theory" folks -- they have this
notion that combining information streams in a nontrivial* way is necessary
and sufficient for consciousness. I disagree that it's sufficient for
consciousness, but it might be sufficient for a definition of information
processing or generalized computation.

> You can't have the latter without the former.

Agreed.

> _That doesn't explain meaning in the case of imaginary or abstract details,
> or the system's conception of the meaning._

The mapping is in our heads. I don't know what you mean by "the system's
conception of meaning" -- which system, and what is a conception of meaning?

Here's a link to the paper:
[https://www.pnas.org/content/pnas/79/8/2554.full.pdf](https://www.pnas.org/content/pnas/79/8/2554.full.pdf)

edit: *how to define 'nontrivial' is very much up for debate. edit2:
formatting

~~~
jamesrcole
> _Shannon's theory of information doesn't offer a way to tell whether a
> channel is doing computation or merely transmitting the information -- the
> mutual information merely characterizes how much information goes across a
> channel, but is insensitive to any changes to the representation._

In my view, it is more than a change of representation. The computation is
using information that might be true of something to produce new information
that might be true of something else. I don't see how anything like Shannon's
theory could explain how it is able to do this.

> _I got inspired by the "integrated information theory" folks -- they have
> this notion that combining information streams in a nontrivial way is
> necessary and sufficient for consciousness. I disagree that it's sufficient
> for consciousness, but it might be sufficient for a definition of
> information processing or generalized computation._

Ok. I share the same view, that it isn't sufficient for consciousness.

> _I don't know what you mean by "the system's conception of meaning" --
> which system, and what is a conception of meaning?_

The computational system. Consider the case of the human brain, which may be
computational. People can understand that some information is about X (say, a
particular tree, or the notion of Justice). But it's not just that they know
what the information is about, but they understand something of the character
of that thing -- of the tree, or of what Justice is like. If the brain is
computational, then that would mean that such an understanding was
computational (or computational plus bodily interactions with the environment,
etc). But that doesn't tell us how it is that computation is able to "embody"
an understanding of the character of something. That needs to be explained.

> _Here's a link to the paper_

Thank you.

Is that the same paper? I notice it has a different title to the one you
mentioned above.

------
forgotmypw17
I'm developing a web-based system which supports every browser in existence.

So far, I have full baseline feature support for everything back to Mosaic,
including Lynx, IE3, Netscape3, Opera3, and many others.

At the same time, I'm still including advanced features like client-side PGP
for browsers that can support it.

Every browser presents its own challenges, and it is not always the oldest
ones which have the dumbest behaviors.

My intent is to promote interoperability and offer something as an alternative
to today's near-monoculture.

~~~
insomniacity
There are/were browsers with client-side PGP?

~~~
forgotmypw17
I use OpenPGP.js to integrate it.

It's not the most secure solution, but it's just one of the options.

There are browser add-ons that do integrate PGP.

------
mattio
Trying to find a backend (PHP) freelance project since March, after having
worked 5 years in a startup/scaleup. It's the hardest problem I've faced in
ages. ¯\\_(ツ)_/¯

------
winhowes
I built out a sequence where you take some large integer n and a smaller
positive integer x, the first number in the sequence, such that the next
number in the sequence, x_1, is the remainder plus the quotient of n/x; x_2
is the remainder plus the quotient of n/x_1; and so on until you reach a
cycle. The problem I'm trying to solve is: given some n and some x, can you
give at least two solutions for x_-1 (one number previous in the sequence)?
------
spiritplumber
I'm doing the hardware part of a CB packet radio infrastructure that can be
deployed quickly after a disaster, and allows basic BBS functionality in
addition to passing messages, and works with existing cell phones.

It's inspired by the CellSol network in the "Left Behind" novels.

Schematics and code are at
[https://www.aaronswartzday.org/lora/](https://www.aaronswartzday.org/lora/)
and we could use help.

------
jakeogh
The hardest problems to solve are deliberate problems, where solving them
results in a large financial loss for the people who profit from the problem.
For example:
[https://www.youtube.com/watch?v=eDSDdwN2Xcg](https://www.youtube.com/watch?v=eDSDdwN2Xcg)

Getting discussions about these issues to include financial incentives is a
weird and hard problem. People are too sure the players (on their team) are
altruistic.

------
econcon
Trying to create a Discourse-like e-commerce app that will make it easy for
people to sell anything.

It will have all the features that WooCommerce has, but will be much more
stable and easily customisable.

~~~
unixhero
Who is the typical customer? Gamers?

~~~
econcon
I think you are confusing Discourse with Discord.

~~~
unixhero
You are of course right. Thanks.

------
jborichevskiy
Developing better, more humane environments for hanging out and spending time
online:

[https://www.producthunt.com/posts/cozyroom](https://www.producthunt.com/posts/cozyroom)

And figuring out how to make it easier to write and develop ideas through
blogs:

[https://jborichevskiy.com/plan/#jborichevskiy-
com](https://jborichevskiy.com/plan/#jborichevskiy-com)

------
code-faster
Hard: Tech

Harder: People

Impossibly hard: Help people get faster at building tech

And yet it's all I want to do.

[https://codefaster.substack.com](https://codefaster.substack.com)

------
ALittleLight
I wrote a bit about using stylometry to identify the author of a tweet. I'm
coming up with additional features to add to this model and testing it with
identifying attributes about the author (political affiliation or gender).

[https://medium.com/@patriarch_39868/donald-trump-detector-
ec...](https://medium.com/@patriarch_39868/donald-trump-detector-ecb50b4d3de4)

------
skee0083
I have a MySQL database and I'm trying to order tables by "update_time", but
it only works some of the time. I'm not sure if it's an issue with the system
time or with InnoDB. I've searched for answers on Stack Overflow, and someone
said InnoDB had a bug that affected update_time, but it's since been patched
and I am running the latest MySQL version on Ubuntu 20.04.

------
j4pe
Why don't gig workers just work for themselves? Gig apps are basically trivial
to engineer in 2020 until they get scaling problems, largely due to the
contributions of the open source community. It would be nice if there were
also open source tools for people's gig businesses, so they didn't have to
sell their labor through a company extracting such a heavy rent.

~~~
tsimionescu
The whole idea of the so-called gig economy is that you have a huge platform
that connects gig workers to clients. How could you replace that with one app
for each worker? Would you install 200 taxi apps and start trying each of them
when you want a cab? Or would you order an Uber?

And if you are envisioning an open-source platform, who will run it? Who will
vet workers on it, even to the basic level that Uber does (e.g. make sure
they are real people, have a driver's license, and a functioning car)?

------
NoamR
Write a GUI for FFmpeg... or not exactly. I'm building an API + component
library so other developers (and hopefully everyone down the road) can quickly
assemble a UI that does a particular FFmpeg job. Want batch transcode? Add
the file selector, destination selector and render button. Want to mux audio
and video? Make that two file selectors.

------
ibaikov
Making it possible for people to play with good ping between continents:
currently ping is 80-100ms between NY and London, and it's possible to get
that to ~35ms today, and possibly as low as 25ms.

Currently researching whether it's needed and whether people would pay for
this. Note it's only useful for gaming/cloud gaming, not for other
applications.

------
enginoor
I'm trying to design a vibratory bowl feeder that I can 3D print. Designing
features to align or reject parts is a bit of a dark art and it's requiring a
few more iterations than expected.

[https://en.wikipedia.org/wiki/Bowl_feeder](https://en.wikipedia.org/wiki/Bowl_feeder)

------
Lazlo182
I'm trying to find other forms of the Dirac equation that also satisfy the
original definition of the equation.

~~~
lambdatronics
Are you familiar with Hestenes' work?

[http://geocalc.clas.asu.edu/html/GAinQM.html](http://geocalc.clas.asu.edu/html/GAinQM.html)

I can't make heads or tails of it myself, but it's definitely off the wall. In
particular, his geometric interpretation of the unit imaginary is interesting.

------
killjoywashere
Teaching machines to diagnose cancer, because without that, there's no way we
are going to solve cancer generally. There just aren't enough pathologists.
Harder than going to Mars, probably easier than world peace. Definitely
involves an enormous amount of data. Like, a non-trivial fraction of world-
wide storage.

~~~
pugworthy
That is treating the symptom by learning to recognize it. What about
discovering the cause?

~~~
killjoywashere
There is no one cause of cancer. While there are a number of well-recognized
precipitants (HPV, trichloroethylene, UV radiation, etc), cancer is
fundamentally a disease of disorder in the genetic code. As entropy
invariably increases, the integrity of each of the trillions of individual
somatic genomes in a human starts to degrade; rarely, they degrade into a
positive feedback loop of replication, and we have not invested enough of our
evolution to prevent it. For example: mice don't normally get much cancer
because they die before they have time for it. Elephants don't get much
cancer because they accumulated enough copies of p53 over their evolution
that they don't have as many cancers per N cells.

Humans have 1 copy of p53 from each parent and have extended their lifespan
in so many ways that their one copy of p53 is no longer enough. Lifelong
accumulation of genetic entropy is inevitable. The game is to catch the
offending cells and kill them.

------
hermitcrab
Writing a no-code drag-and-drop tool for transforming data (join, filter,
reformat, reorder, etc.). The hardest bit is handling cascading changes to
the column structure (e.g. removing and reordering columns) intuitively -
particularly if the user changes the input file to one with fewer or
differently ordered columns.

~~~
hermitdev
There's already IBM's DataStage. I currently use this for ETL work at my job,
although we're working on moving our ETL work to Python and an in-house
framework due to licensing costs.

~~~
hermitcrab
I know there are various Extract Transform Load tools aimed at professional
data scientists. My product is aimed at numerate professionals who aren't
programmers or data scientists and have never heard of 'ETL'. The idea is that
they can install Easy Data Transform, transform their data and output it in a
few clicks, without programming. It is also very cheap compared to many
commercial ETL tools.

------
midrus
Deciding what framework I'm using next.

~~~
baxtr
Good luck. I’ve heard that’s impossible

------
Erazal
I'm working on an easy way to store and retrieve any ephemeral information
stream that goes through your computer's RAM (videos, a website you're
browsing, etc).

For now, my team and I are focusing on video-conferences, but the end goal is
much larger :)

Now that I think of it, this problem is a lot easier than what others post
here.

------
dwrodri
Semi-supervised document clustering of research papers on arXiv. I struggled
heavily in the first year of my PhD just learning the meta-game of staying on
top of my field (computer security + computer architecture).

It turns out semi-supervised document recommender systems aren't easy to
bootstrap with zero user data.

------
vishnumohandas
I’m building an on-premise alternative to Google Photos[1]. Started working on
this because it’s not okay for Google to have unencrypted access to and own
all my memories.

[1]
[https://youtube.com/watch?v=b5XN5GMmc6I](https://youtube.com/watch?v=b5XN5GMmc6I)

~~~
OJFord
Nice! Are you aware of
[https://github.com/hooram/ownphotos](https://github.com/hooram/ownphotos)? No
affiliation, I'm just interested in using something like this, I'm not sure
how actively it's being worked on (it was sort of a 'functional demo' last I
saw, but not quite usable).

~~~
vishnumohandas
Yes, but it seems like the project was abandoned[1] and the community fork has
not had any activity either[2].

More importantly, the lack of a mobile-first experience was a deal breaker for
me and my family.

[1]
[https://github.com/hooram/ownphotos/issues/137](https://github.com/hooram/ownphotos/issues/137)

[2]
[https://github.com/Ownphotos/ownphotos](https://github.com/Ownphotos/ownphotos)

~~~
OJFord
Ah that's a shame, I hadn't seen that, thanks. I'll look forward to trying out
yours!

------
denster
Design + Code tooling.

Hard problem:

How do we evolve design tools? Can Sketch/Figma be evolved to create full
featured software? [1]

Something with no limits, and the freedom to create any feature developers
create today with React/Angular/Vue.

Is it possible or a pipe dream?

[1] [https://mintdata.com](https://mintdata.com)

~~~
ohnope
You can imagine a version of the world where designers structure their designs
in a way that can easily export to e.g. a vue component, and along with the
base structural layout they define different UI states it can be in, with
animation timelines. Designers should be able to specify every aesthetic
variable, and developers just program the business logic to fill content tags
and toggle designer-defined states.

------
ketanmaheshwari
I am building a tiny "replay" script that, when run with a code file, will
print the code line by line or block by block, with a customizable delay or on
keypress. The motivation is to be able to read large source code files slowly,
without having the whole file on the screen.

~~~
genuinebyte
This is an interesting idea. I find reading code isn't always linear, so I'm
interested to see what you come up with.

------
tixocloud
Working on building a monitoring and log trace stack for machine learning.
What’s weird/hard is that we’re looking to deduce the performance of models
that do not necessarily have ground truth readily available so it’s tricky to
figure out if the model is working or not.
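
One common proxy when ground-truth labels are missing is to monitor drift in
the model's input or score distributions. A minimal sketch using the
Population Stability Index (my choice of drift metric for illustration, not
necessarily what this team uses):

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline sample of model
    scores and a live sample: a rough drift check usable without labels.
    Roughly, PSI < 0.1 is often read as stable, > 0.25 as shifted."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def hist(xs):
        counts = [0] * bins
        for x in xs:
            counts[sum(1 for e in edges if x > e)] += 1
        # Floor each proportion to avoid log(0) on empty bins.
        return [max(c / len(xs), 1e-6) for c in counts]

    e, a = hist(expected), hist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

Drift in the score distribution doesn't prove the model is wrong, but it is a
cheap tripwire for deciding when to go collect labels.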

------
nilshauk
I’m trying to evaluate ethical licensing so that we can have a flourishing
Commons of code projects to build on.

[https://ethicalsource.dev/](https://ethicalsource.dev/)

The hard problem is multifaceted:

I would argue that open source as we know it fails to balance the market. We now
have monopolistic tech incumbents in the “GAFAM“ companies, that thrive on
open source while paying little tax and outcompeting actual tax paying
businesses. I see maintainers either burning out or selling out to venture
capitalists.

I want to believe in free and open source, but I also see that it fully
enables surveillance capitalism, casino capitalism and tax avoiding monoliths.

So, I realize that I need to move past classic licensing and consider ethical
licenses that try to remedy society’s inequalities and injustices.

Call me a cyber hippie, but if I want to build cool stuff in my spare time to
share I want to maximize its chances of doing something good in the world. To
that end I’m evaluating some ethical licenses.

There are many ethical licenses out there which are evolving. Presently, I’m
evaluating this one: The (Cooperative) Non-Violent Public License:
[https://thufie.lain.haus/NPL.html](https://thufie.lain.haus/NPL.html)

After the weekend I’ll try to get in touch with a lawyer to review the license
implications. It’s arguably not open source in definition, but maybe more so
in spirit.

------
imvetri
Problem statement: learning front-end web development frameworks is hard

Solution : Abstract the problems and automatically generate framework code.

Demo project : [https://github.com/imvetri/ui-
editor](https://github.com/imvetri/ui-editor)

------
Shared404
I'm going to be gone (read: no meaningful internet access) for about two
years.

I'm working on setting up a webserver such that it will:

A) Stay up

B) Stay up to date

C) Stay uncompromised

For the entire time I am gone with no interaction.

Or rather, that's what I want to be working on. Instead, I'm working on making
myself not be a lazy [blank].

~~~
o-__-o
Hey your profile doesn’t have a contact link but I have a few ideas that can
solve this today. Reach out when you get a chance!

~~~
Shared404
TIL that the email in the profile isn't public access. Probably should have
realized that before.

My email is Shared404 (at) protonmail.com, I'd love to hear your ideas.

~~~
o-__-o
Dang TIL too because I updated my email before I commented :)

------
shezi
Soil simulation

This is just one part of one of the sillier things I'm working on/thinking
about. How can I make a real-time interactive soil simulation work,
essentially a big realistic virtual sandbox.

Since this is a side project, it'll go as far as all side projects go. =)

~~~
jakeogh
This video series about soil is really interesting:
[https://www.youtube.com/watch?v=uUmIdq0D6-A](https://www.youtube.com/watch?v=uUmIdq0D6-A)

~~~
shezi
Looks excellent, thanks for the recommendation. Though for my purposes
chemistry isn't that important, as we're doing mechanical simulation.

------
agotterer
Parenting. Weird at times and consistently hard. Likely not what you had in
mind when you asked the question. I’m also surprised no one else said it yet.

I don’t know about other parents here, but cracking the code to my 3.5 year
old is a really hard problem for me.

~~~
kochikame
And talk about a moving target.

Just when you think you've figured something out, spotted a pattern... it
changes. Back to square one.

------
sandworm101
Problem: my team is divided in two: Corona A and Corona B, each working
alternate days (M/W/F and T/Th). How many extra hours should I make Corona A
work in exchange for them getting FOUR DAY WEEKENDS for the last two months?

~~~
war1025
Why are they not switching off every week?

~~~
sandworm101
Different taskings. One group has to do something on Monday that only they can
do. The B team uses A's product for something.

------
janee
Collaborative spreadsheets on your desktop using browser based native file
system API.

Basically google spreadsheets but not google and not spreadsheets in a webapp.

Haven't done any coding on it, but have been mulling the design for several
months now.

~~~
warpech
Shameless plug, but I'm doing this for networking, not advertising. My company
has several (open and non-open-source) web-based products for parsing and
rendering spreadsheet files and executing formulas. Please get in touch
if you'd like me to tell you more.

------
waihtis
Detection of cybersecurity threats at close to 100% accuracy, done by
deliberately planting simulated weaknesses in company infrastructure.

[https://avesnetsec.com](https://avesnetsec.com)

------
affyboi
This is pretty small and trivial, but I'm trying to implement dynamic
programming problems in Haskell just to get some more practice with it. I'm
stuck on Manacher's algorithm for finding the longest palindromic substring.
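
For reference, a minimal imperative sketch of Manacher's algorithm in Python
(the hard part the commenter is tackling is porting this mutable-array logic
to idiomatic Haskell; this is just the baseline algorithm):

```python
def longest_palindrome(s):
    """Longest palindromic substring of s in O(n) via Manacher's algorithm."""
    # Interleave sentinels so even- and odd-length palindromes are uniform.
    t = "#" + "#".join(s) + "#"
    n = len(t)
    p = [0] * n           # p[i] = palindrome radius centred at t[i]
    centre = right = 0    # centre and right edge of rightmost palindrome seen
    for i in range(n):
        if i < right:
            # Mirror of i about centre gives a lower bound on p[i],
            # capped by the distance to the right edge.
            p[i] = min(right - i, p[2 * centre - i])
        # Expand around i as far as the characters match.
        while (i - p[i] - 1 >= 0 and i + p[i] + 1 < n
               and t[i - p[i] - 1] == t[i + p[i] + 1]):
            p[i] += 1
        if i + p[i] > right:
            centre, right = i, i + p[i]
    # Map the best centre back to indices in the original string.
    k = max(range(n), key=lambda i: p[i])
    start = (k - p[k]) // 2
    return s[start:start + p[k]]
```

The mirror bound is what makes it linear; a pure-Haskell version typically
reaches for `Data.Array.ST` or a fold carrying (centre, right) state.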

------
hendry
Some way of generating back links for my blog.
[https://github.com/kaihendry/backlinks](https://github.com/kaihendry/backlinks)

And failing to do so.

------
shostack
Not sure if this is weird so much as an area of business that not a lot of
people think actively about, but get bit by.

 _THE PROBLEM_

How do you automate and aggregate context across business departments for
various forms of activity, and then map that to marketing analytics in a way
that gives relevant and sufficient insights beyond just channel or user data?
How do you more fully answer the question of "what happened when [$thing
happened]?"

 _THE VALUE OPPORTUNITY_

Countless people hours and marketing dollars are wasted going down fruitless
rabbit holes looking for what caused some change, or thinking they found the
cause of a change in performance and pursuing that when in reality it was
something else. In many of these cases, this could have been easily avoided if
only there were sufficient data on the business activities (internal and
external) logged and aggregated with marketing data in a way that was then
automatically surfaced in an appropriate manner. As the scale of the company
increases, so does the impact of this.

 _WHY IT IS WEIRD /HARD_

It's weird in the sense that only a small subset of people are immersed in
analytics deeply enough to be aware they should care about it, and probably fewer geek
out enough about marketing analytics and process to care about trying to solve
it. It is hard because it is just as much a people challenge as a technical
one. The technical side is somewhat straightforward in terms of aggregating as
many data inputs as you can--it's basically a ton of data plumbing and
monitoring for changes with that. Whether that's bid management platforms and
DSPs or SSPs, email platforms, site analytics, etc. But then also project
management tools and properly categorizing the meta data for relevant updates
to be surfaced. You have challenges around walled data gardens and comparing
apples to oranges around things like attribution measurement, but that is
something that can be handled. Surfacing it in timely and sufficiently useful
ways is an interesting design and UX challenge though, from annotations and
"pull" data, to modals and callouts that are more "push" in how they inform
people of context before it bites them.

The people side however, is constantly in flux in a way that the data side is
not. Some aspects of this absolutely rely on consistent adherence to process
to capture key data that is hard to slurp up through an API. Some of it is
quite ephemeral. I've encountered team situations where people object (or
struggle to due to limited training) to filling out a couple fields in a
Google Sheet, or need to be hounded to fill out a given form, etc. Some
companies can enforce this to levels others cannot. Things also get really
interesting at large companies (think FAANG). You're dealing with many teams,
many overlapping or conflicting processes such a solution would need to be
embedded into, localization, internal/external vendors of varying levels of
visibility needs, and also personalities who may want more control over their
orgs' processes and need persuading.

At the end, this all needs to be balanced against how much utility you get out
of the insights because it is easy to over-index on investing in building this
tech and process out only to not get insights out of it. Unfortunately you
often only learn that _after the fact_ when you've been bitten by it.

If there's any companies trying to solve for this, please do reach out (see
profile). I love chatting about it and want to help build the tools and
processes that solve for this at scale and have ~15yrs experience in the
space, a good chunk of which have been spent trying to solve for variations of
this.

~~~
abj
I've experienced this problem in small companies/side projects I've started.
This a great perspective and great take on the problem. I'd love to help out
in development/anything you'd need help with, email is in my profile.

~~~
zinglersen
I've had the same experience in larger corporations, especially the global
ones. I'm interested in helping as well so if there's a need for concept
development and ux let me know :)

------
techstrategist
I’m building a platform to host my book club online due to covid, and I think
I can generalize my solution to reinvent MOOCs and possibly other areas of
communication.

------
Sordelia
Problems:

\- Of the countries in the world, none are really free, and most impose quite
burdensome taxes relative to their benefits.

\- Representative democracy has not improved a whole lot from the v1.0 created
a few hundred years ago. For example, voting is largely between "person I
don't really like" and "person I hate," and you're just 1 vote in millions,
meaning voting is irrational
([https://en.wikipedia.org/wiki/Paradox_of_voting](https://en.wikipedia.org/wiki/Paradox_of_voting))

\- In referendums, people vote without really knowing much about what they're
voting on.

\- There's a lot of people who want to move to a better country, but
immigration policies are very restrictive.

Solution:

\- To create a new country, Sordelia, based on liberty, sortition, and
deliberative democracy.

\- Laws are voted on by a small, randomly selected citizens' assembly. The
random selection keeps the assembly small, so every vote counts, yet the
sample is statistically significant: if a law passes, we're highly confident
it would've passed before the full electorate. The assembly will learn about
and debate the pros and cons of each proposal before voting.

\- Laws that would violate fundamental principles (like freedom of speech) are
disallowed.

\- We aim to purchase land from a developing country. There are plenty of good
reasons they'd sell to us, beyond the immediate payment. The growing
population and infrastructure will bring an increase in trade and jobs for
their country. There have been many historical examples of special economic
zones (SEZs) having positive influence on their neighbors, and Sordelia's
effect on nearby countries will be similar.

\- We'll have pro-growth and pro-immigrant policies to attract people to our
country.

\- The rise of remote work makes this even more attractive for moving to
Sordelia.

If this interests you, join us at:

[https://sordelia.org/](https://sordelia.org/)

~~~
rxsel
Has this (Peacefully purchasing/acquiring land to establish a new state) ever
been done in the past?

The capital to do it is certainly there.

------
mathgladiator
I'm building a programming language for board games.

------
mmmuhd
I am building a service where anyone can choose a real star and name it after
themselves. The goal is to name all the stars in the universe after humans.

------
Beefin
education, problem:

how can we streamline the transfer of knowledge from the oldest, wisest,
individuals on the cutting edge of their field to the youngest, most ambitious
sponge-like individuals just starting out their careers?

I built an MVP to solve this at a hackathon:
[https://devpost.com/software/oravise](https://devpost.com/software/oravise)

anyone interested in collaborating?

------
vladoh
Self driving cars. More specifically, localization.

------
a3camero
Making a search engine for government websites. Not using someone else’s
index. A full search engine, writing all of the components.

------
bravura
Investigating neural vocoders (WaveNet, WaveGlow, etc.) and studying deep
learning with applications to music and audio.

------
dmead
I'm trying to predict heart attacks in children using their live EKG and
electronic medical records.

I work for a research hospital.

------
Smoosh
I am trying to decide whether a political party could be created (in my
country) which implements rational evidence-based policy. All policies and
their implementation would require a postulate, and pre-defined criteria for
success or failure. All regulations and directives would then need to be
consistent with, and balanced according to, the accepted policies. If
necessary, laws would be changed/implemented to further this.

------
gjvnq
Trying to make a programming language/business automation tool.

Why? Because it is cool and I want to learn more about compilers.

------
chewxy
Hard problem: AGI. But with my own rules: not OpenAI/Deepmind style put-more-
computation-into-it solutions.

------
zitterbewegung
I'm trying to do generative art that involves deep neural networks. I think I
will call it dank learning.

------
master_yoda_1
“How to get 6-pack abs”: I've been trying to solve this for at least the last
10 years, but it’s a really hard problem.

~~~
atom-morgan
What have you tried so far?

~~~
master_yoda_1
Going to gym, eating salad, skipping lunch

~~~
fiblye
Skipping meals doesn't help you lose weight in most cases. Your body optimizes
for the expectation of not having food and stores more energy.

Eating smaller, filling meals helps more. I actually don't know of anyone
who's lost weight from salads. Usually they're just left feeling hungry and
add dressing to compensate (which just gives you completely unfilling
calories).

~~~
master_yoda_1
Yeh I experienced the same

------
tumidpandora
I am working on making it really easy to create a no code, 1-click personal
chatbot profile - presbot.com

------
zipotm
Trying to predict the global social behaviour based on planetary movements.

------
SergeAx
Training a neural network to distinguish Pepe the Frog from Kermit the Frog.

------
JensRantil
Understanding people. They keep surprising me!

------
simonsarris
How to lure back the gods

~~~
Smoosh
Do we want that though? Didn't we kill them for a reason?

------
rikroots
You asked for "weird", yes?

I've got a problem that's driving me crazy: find a way to present a huge
amount of written (and some visual) content, through a website[1], in an
interesting and accessible way.

One of my hobbies is constructing worlds. One world. I've been working on it
for over 40 years now. 20 years ago I decided to share my work with the world,
and built a website for it. The approach I took them was to divide the site
into a homepage, with links to more-or-less self-contained subsections. The
solution works, but I want something better. I just don't know how to achieve
it.

A wiki-based approach[2] would seem to be the obvious choice. But my
experience of wikis is that they feel too, well, fragmented. There are ways of
overcoming this (portals, wikibooks, etc) but I tried building a wiki ... it
didn't work for what I wanted to achieve.

Another approach could be to give up on building my own site and instead rely
on a cloud provider to do all the hard work for me[3]. But I dislike this idea
on every level I can think of. For a start, what happens to all my work when
the host company collapses, or pivots to a more profitable idea? What happens
when I want to introduce a feature - "teach yourself language X" lessons that
the site architecture doesn't support?

Of course, if I had all the money in the world then I could employ many very
clever people to design, build and develop content for a truly wonderful user
experience[4][5][6] ... yeah. That's not gonna happen. So the solution needs
to be "doable".

Any ideas on how to solve my problem will be very gratefully received!

[1] -
[http://www.rikweb.co.uk/kalieda/index.php](http://www.rikweb.co.uk/kalieda/index.php)
\- My constructed world's current website. I used to be very proud of the
site's structure and design. But I know in my heart it can be much better!

[2] - Encyclopaedia Ardenica -
[https://www.otherworldproject.com/wiki/index.php/Main_Page](https://www.otherworldproject.com/wiki/index.php/Main_Page)
\- is a really nice example of what can be achieved with a well-thought-
through wiki.

[3] - WorldAnvil - [https://www.worldanvil.com](https://www.worldanvil.com) \-
seems to be the leading example of this approach.

[4] - The Potterverse - now at
[https://www.wizardingworld.com/](https://www.wizardingworld.com/) \- seemed
for a while to be a Gold Standard for wonderful fantasy world websites. The
Pandoran Research Foundation [5]
([https://www.avatar.com/](https://www.avatar.com/)) is another excellent
example. But even here, their Pandorapedia [6]
([https://www.pandorapedia.com/pandora_url/dictionary.html](https://www.pandorapedia.com/pandora_url/dictionary.html))
... it feels like it could be so much better - but how?

------
ransom1538
[deleted]

~~~
drusepth
I worked on this problem for about a month straight a couple years ago (but
didn't have any significant success). It's extremely draining on you; don't
forget to take breaks. <3

~~~
dredmorbius
What was your problem? (OP has deleted their comment.)

~~~
ransom1538
Sorry for deleting. I was working on detecting suicidal posts in social media.
It is really depressing work.

~~~
dredmorbius
Oof, yeah.

And even if you do manage to see them, sorting out how to intervene
effectively --- where and when and in a manner that actually helps --- is
still a Hard Problem.

I've lost a few friends this way. One I managed to intercede with. The first
time....

Thank you for trying.

------
spiritplumber
I need to come up with a way to remotely shut up Jesus in case the "Left
Behind" books are right and He plans to slaughter millions of people with His
voice.

I'm learning a lot about audio engineering and wave mechanics.

( I also co-wrote a story about what happens if I fail, which you can read at
[https://emlia.org/pmwiki/pub/web/LeftBeyond.TalesFromTheBeyo...](https://emlia.org/pmwiki/pub/web/LeftBeyond.TalesFromTheBeyond.html)
)

~~~
AnimalMuppet
If He returns, then He is who He said He is - the Son of God. And you're going
to try to fight Him with audio engineering? That's, um, not the wisest
possible response...

~~~
spiritplumber
Earth is where all my friends live and where I keep most of my stuff. Nobody
gets to blow it up without my explicit permission. That includes false Gods,
wannabe Gods, and true Gods.

I'd prefer coherent light, of course, but if audio engineering does the job,
I'll take it.

~~~
_theory_
godspeed

~~~
spiritplumber
Deus Nolens Exitus ;)

