Ask HN: What interesting problems are you working on?
442 points by jlevers 25 days ago | 689 comments
I know there are lots of really interesting problems out there waiting to be solved, but I haven't been exposed to much in the software world besides web technologies.

I'd love to hear about what interesting problems (technically or otherwise) you're working on -- and if you're willing to share more, I'm curious how you ended up working on them.

Thank you :)




I'm helping to build a scalable system for delivering high-value and life-saving medical supplies to hard-to-reach places via autonomous aircraft. The system is currently operating in Rwanda and Ghana, and will be aggressively expanding over the next couple of years.

Specifically, I spend a lot of time thinking about and writing embedded software. The aircraft is fully autonomous and needs to be able to fly home safely even after suffering major component failures. I split my time between improving core architectural components, implementing new features, designing abstractions, and adding testing and tooling to make our ever-growing team work more efficiently.

I did FIRST robotics in high school where I mainly focused on controller firmware. I studied computer science in college while building the embedded electronics for solar powered race cars, and also worked part time on research projects at Toyota. After graduating with a Master's degree, I stumbled into a job at SpaceX where I worked a lot on the software for cargo Dragon, then built out a platform for microcontroller firmware development. I decided to leave SpaceX while happy, and spent a couple years working on the self driving car prototype (the one that looked like a koala) at Google. Coming up on my third year, I was itching for something less comfortable and decided to join a scrappy small startup with a heart of gold. Now it's in the hundreds of employees and getting crazier and crazier.


It's crazy the kinds of jobs there are in the USA that are very interesting and at the vanguard of technology, while I (in Spain) get to work on banking. Here we don't have anything close to Toyota, Google, or SpaceX. The career path for someone who has a passion for their craft can't be compared between the USA and many other countries. Such a shame...

I wouldn't say I am qualified for a job related to embedded programming (even though I know how to code and it's my job), but even if I were, there wouldn't be any opportunity for me to bounce between companies like those in a million years.

PS: Sorry for the spelling, not native.


To be clear, I think the overwhelming majority of jobs in the US fall into your uninteresting category, e.g. banking, adtech, etc.


There's a handful of places in the USA with a thriving tech industry, and there's plenty of opportunity in general, but quite often people have to relocate for the really exciting opportunities. I haven't been to Spain, but my understanding of European culture is that folks generally stick around where they grew up. There's plenty of cool embedded stuff going on in Europe if you look for it. For example, I am aware of a lot of really neat drive-by-wire actuator development.


I feel you. Unfortunately, your location matters in terms of your network, your opportunities, and even finding a spouse. Frankly, because I realized this I started searching for cool companies to work for that nobody knows about, and made a side project around it!


Are you working on Zipline [1] ?

[1]: https://www.youtube.com/watch?v=jEbRVNxL44c


Pro tip: searching "<HN handle> + github" frequently turns up a real name, and if that doesn't reveal the current job, look on LinkedIn.

The answer to your question is: Yes.


This is incredible and gives me engineering fomo!

* higher purpose project saving lives and helping people: check

* well-engineered, reliable, smart solutions quadrupling efficiency: check

* planes, Africa, autonomous flight, landing and take off hacks, app based pre-flight checks, oh my!

* all for peaceful, life-supporting, humane purposes!!!


That's what I thought of as well. Great video explanation for anyone interested in learning more.


Just finished watching the video. Wow. Incredible.


You have basically had, in my opinion, the perfect career so far. Well done! :)


That's exactly the type of project I'd like to be working on -- embedded programming that literally saves lives. Thank you for sharing.

Your path to where you are now is crazy...sounds like you've had an exciting career!


That's awesome! Where can I learn more? I am building a job marketplace and curator for cool "unknown" companies that people can explore, and I would love to feature this! Bonus points for the impact side of it!


I've been to your airfield in Rwanda! It's such a great company.


I'm jealous, I haven't actually made it out myself yet! I've got a little baby to come home to. It's been a very supportive company in terms of work/life balance as well.


Totally understand. I have to say that watching the drones getting captured when landing is one of the most futuristic things I've seen. You guys have done some great things in Rwanda - I lived there over the past 4 years, so I have firsthand knowledge.


By "hard to reach places", are you referring to difficulty due to geographic positioning, or geopolitical concerns?


Mainly geographic positioning, due to a lack of road infrastructure and reliable utilities. Roads are massive capital investments regardless of technological capability, and lots of populated places in the world are still hard to get to quickly. Many medical products are generally available but have a very short shelf life, so our delivery service makes them accessible to significantly more people.


Mind if I contact you and ask you questions about this? I’m interested.


Sure, or just ask them here if you think they would be of general interest.


Going to piggyback on this comment, sorry I am a few days late.

Where did you get your master's? I have an EE & CS B.S. from RPI plus 3 years of application development experience in the fintech industry. I am strongly considering swapping industries to embedded control--that is what I enjoyed most in college--but I am unsure how to break into the industry. Do you recommend a master's or just sending some apps out? I have a good deal of C++ and microcontroller experience, but none commercial.


I got my bachelor's and master's from the University of Michigan. It was really just an excuse to stick around for a couple more semesters and do another solar car race. After a few years of experience, the master's doesn't really matter beyond what you personally gained from the education.

There are tons of embedded software projects that lack software engineering rigor. If you're good at unit testing and mocking, for example, there's no reason why you can't unit test embedded code. Applying general software engineering practices to embedded code (effectively) is a good way to differentiate yourself.
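
To make that concrete, here's a minimal sketch of host-side unit testing of embedded logic with a mocked HAL call (my illustration; battery_ok() and hal_read_battery_mv() are hypothetical names, not code from any real project):

  /* Compile and run on a dev machine, no hardware needed. */
  #include <assert.h>
  #include <stdint.h>

  /* Production code declares the HAL call; the test supplies a fake. */
  uint16_t hal_read_battery_mv(void);

  /* Logic under test: pure decision-making, no hardware access. */
  int battery_ok(void) {
      return hal_read_battery_mv() >= 3300;  /* 3.3 V cutoff */
  }

  /* --- test double standing in for the real driver --- */
  static uint16_t fake_mv;
  uint16_t hal_read_battery_mv(void) { return fake_mv; }

  int main(void) {
      fake_mv = 3700; assert(battery_ok());   /* healthy pack */
      fake_mv = 3000; assert(!battery_ok());  /* brown-out */
      return 0;
  }

The same trick (linking a fake implementation in place of the real driver) is how most embedded test harnesses isolate logic from hardware.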


Thanks for this, Sergeant! My hunch was that embedded code was lacking some of the more refined software engineering principles like continuous integration. I'll continue to frame my cover letters around that and give the master's some more thought.


I'm looking for an internship in July-August. I built a lot of model planes when I was younger, and what you're doing seems amazing! Is there any way we could talk further?


The best way to get into the system is by submitting your resume on the careers page. I'll give a heads-up to the recruiting team to look out for any submissions mentioning hacker news.


Thank you! I'll do it by this weekend.


Zipline?


Given that the two markets he listed are the markets I know Zipline is in, I’m willing to bet yes.


Solving the world's trillion-dollar energy storage crisis. (multi-trillion, actually.) https://www.terramenthq.com/

About a year ago, I started spending more time researching climate change. I learned how important energy storage will be in enabling renewable energy to displace fossil fuels. The more I read, the more fascinated I became with the idea of building underground pumped hydro energy storage. I found a research paper from the U.S. DOE written in 1984 showing that the idea was perfectly feasible and affordable, but it seems that nearly everyone has forgotten about it since. (They didn't build it at the time because the demand wasn't there yet; now energy storage demand is growing exponentially.)

A year later, I'm applying for grant funding to get it built. I know that nearly everyone will tell me I can't do it for this or that reason, because people don't like change and they're scared of big things, even if the research shows it makes perfect sense. But I'm doing it anyway because no one else is getting it done. The idea is too compelling and too important to ignore. So here goes nothing!


You are working on the most urgent and important topic that I know of. But it is also very hard to pull off. I wish you the best of success.

Here are two recent startups in the field with multi-million funding. They were serious efforts, with many people involved, good planning, etc. They still fizzled out when it came to installing their first capacity.

https://www.power-technology.com/news/newsgaelectric-receive...

https://www.greentechmedia.com/articles/read/lightsail-energ...

I believe a reason for the funding problems is the high uncertainty for the economics of storage. Electrical energy is traded in a market. And your trading strategy in the market has a big impact on whether you earn money. Without solid numbers for energy storage and the expected trading outcome, investors will have a hard time.


Yeah, if you drill into how the Tesla grid storage solutions make money, it's not just about the storage capacity but also about being able to respond to demand or frequency issues extremely quickly, which Li-ion batteries are very good at. There's a lot of money available to the fastest dispatcher.


"A lot of money" seems excessive, at least in Europe. The flexibility market's profitability depends a lot on the national market it plays in; for instance, prices are quite low in Germany/northern Europe, but very high in Australia.

Rather than a way of making money, I like to think about flexibility and fast dispatching as an enabler for way more renewables to come online, and that is crucial for the human race right now.


Makes sense -- the grid storage system I read about was indeed the one they built for the Australian wind farm.

I think there's a big gap for both types of storage - fast dispatching for intraday demand variations, replacing gas peakers, and more static storage as in the OP for multi-day gaps in renewable production such as periods of high pressure during winter when wind speeds and irradiance are both very low.

Can't wait to see how this market develops.


One of my friends is a physicist who had a plan to install a closed cistern under his house and heat it up in summer with mostly solar energy. It looked really promising, and he had all the numbers figured out to supply a family home with heating and warm water through a normal winter. He didn't do it in the end because there were some problems with building a cistern on his property, and because of the large up-front investment necessary.

I wish you luck with your endeavor. I think you are correct that the problem of energy storage is the most important one to solve to allow renewables to really take off for general power supply.

I also believe that the efficiency of storage is secondary to a degree, as renewables can supply an enormous amount of energy. So losses from pumps or energy conversions are nearly insignificant, at least at the moment, when energy storage is in such bad shape. Certainly a lot more promising than having batteries everywhere.


Funny to think the Romans were doing this thousands of years ago (and probably civilizations before them as well), and here we are just now coming around again to the idea of heating/cooling via cistern storage.


True, but current techniques for thermal insulation could really make it quite efficient. The optimal cistern would be a giant ball of water under your house.

I don't have his spreadsheet anymore, but it seemed really solid.


There's a large multi-tower project in Toronto that includes a heating and cooling system built by Enwave that incorporates a very large cistern / well, building on their prior success cooling the downtown core: https://www.cbc.ca/news/business/climate-heat-cooling-1.5437...


Kinda surprised to hear that date, as the UK has a pumped hydro plant [1] which started construction in 1974 and opened in 1984; Tom Scott has a YouTube video about it [2]. It's not all underground, but the machinery is built into a hill, and it pumps from a reservoir at the bottom up to the top when electricity is cheap, then serves as a fast-startup generator when demand is higher. Is it really more feasible to build the upper reservoir underground than on top of somewhere? Surely "on top of" is higher, easier to get to, easier to flatten, and much cheaper?

[1] https://en.wikipedia.org/wiki/Dinorwig_Power_Station

[2] https://www.youtube.com/watch?v=6Jx_bJgIFhI


The whole problem is there are entire countries that don't have hills suitable for pumped hydro reservoirs - the Netherlands, for example, which has a high energy demand per capita.


So the plan is to build an underground reservoir, then dig deeper and build the lower reservoir? At that point, wouldn't it be easier to use the ocean as the upper reservoir, and dig down to build the lower reservoir, and pump seawater around?

How much water do you have to move to power a house, anyway? It must be a lot - a truck pulling a tank of water goes up a hill as part of a journey without worrying about running out of gas, and they could be carrying 40,000 litres or 40 tons of water. Presumably there is no way you could move enough water at home to make a hydro plant - pump it up at night with cheap electricity and run it down at peak time to save money?
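
Rough numbers for the house question (my arithmetic, assuming a 100 m height difference and a household using 30 kWh/day): one cubic metre of water stores

  E = \rho V g h = 1000 \times 1 \times 9.81 \times 100 \,\mathrm{J} \approx 9.8 \times 10^5 \,\mathrm{J} \approx 0.27\,\mathrm{kWh}

so you'd have to cycle roughly 30 / 0.27, about 110 m³ (110 tonnes) of water per day through a 100 m drop, which is why home-scale pumped hydro basically never pencils out.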


The Netherlands is caught in an endless war against the Sea. They will make her give them the energy or die trying.


I live near Dinorwig pumped storage hydro and can highly recommend the tours they do if you're in the area. The tour bus drives into the mountain and stops literally a few metres from the generating turbines where you can get out and take photos of the huge underground turbine hall.


I've actually been thinking about the issue of energy storage a lot recently -- I've read a ton about how lithium-ion battery production is exploding (usually not literally), but it seems unreasonable to store enormous amounts of renewable energy in a device that itself has to be replaced after a certain number of cycles. The device used for pumped energy storage -- a tank (to simplify greatly) -- basically never needs to be replaced.

It's really cool to see a feasible alternative to batteries. I think climate change is the single most important problem anyone can be working on right now -- amazing that you've found such a massive lever to pull on this issue.


> it seems unreasonable to store enormous amounts of renewable energy in a device that itself has to be replaced after a certain number of cycles. The device used for pumped energy storage -- a tank (to simplify greatly) -- basically never needs to be replaced.

It's a fair point, but I think you oversimplify the pumped hydro case. Pumped hydro also has quite a few electromechanical components (turbines/motors), electronics, and other moving parts (valves, overflows). I can imagine some of these components require semi-regular maintenance (hence the maintenance shaft in the diagram on the website of OP).

You'd really have to run the numbers to see which costs more to maintain in the long term.


That's a good point. I think it seems likely that given the relatively larger storage capacity per unit[0] with pumped hydro vs. batteries, that the overall maintenance costs -- including the environmental costs of materials needed -- would be lower, but you're absolutely right that we'd have to do the math to know for sure.

[0]When I say "per unit," I just mean that a huge battery is made up of many cells that will need to be replaced individually, whereas large pumped hydro facilities are still only a small number of total reservoirs.


There IS the cost of maintaining pump systems too, which isn't terrible, but it's not zero either.


Thanks! Yes, our research agrees that the short life of Li-ion is a problem, and it's one of the reasons why we believe our solution has so much more promise than Li-ion for grid-scale storage.

The pump/turbine technology we use is the same that's been used for a hundred-plus years in traditional pumped hydro dams, and the maintenance cost is very low. The life of a project is 40+ years, and in reality it can be 100 years with a relatively low amount of maintenance. The San Diego County research posted on our website has good figures on this. Thanks!


Hydro has more loss, which is worrisome. Maybe 5x the loss per cycle of a battery?


Hmm, I think what you're alluding to is that pumped hydro is about 70-85% efficient and Li-ion is sometimes quoted at 100% efficient (in theory). But here are some more details.

In reality, when Li-ion batteries are installed in a large system, I believe the round-trip efficiency quoted is much closer to PHS. Sorry, I can't find the best research to cite right now, but here are a couple of sources I found with a quick search.

"lithium-based ESS rated for two hours at rated power will have an AC round-trip efficiency of 75 to 85%." https://www.windpowerengineering.com/how-three-battery-types...

https://www.sciencedirect.com/science/article/abs/pii/S03062... "Conversion round-trip efficiency is in the range of 70–80%"

This one says 90-95% https://researchinterfaces.com/lithium-ion-batteries-grid-en...

I've heard that Li-ion installations can get up to 90-95% round trip, which is fantastic, and better than PHS for sure. But it's not the most important detail in the equation. Here's why:

One thing to remember is that power is lost all over the system in conversion and transmission. So raw efficiency can be less important than getting the right capacity to the right place on the grid. And that brings us to cost.

Even though PHS is a little less efficient than Li-ion, 85% for PHS is still really good. (See my other comment below about 70% vs. 85%.) And the math shows that investing in PHS is simply cheaper -- even after assuming that Li-ion will drop in price by 3x in the coming decades. This is partly because Li-ion has a much shorter life span and needs to be replaced about every decade.

Li-ion is still great and super important! But it's not looking like the best contender right now for grid-scale storage.


Interesting, I didn't know that. Are there practical ways to decrease the amount of loss? Do you know what the actual loss percentages are for each?


I think the point you just made about the rate of replacement of lithium-ion batteries is very pertinent to solar panels (to the point that many consider solar a scam).


I've never heard that said before, but I'm interested to hear more. Do you have any sources on that? My understanding was that solar panels last quite a long time.


I hope to convey this in the least volatile manner, but I must bring it up.

> I learned how important energy storage will be to enable renewable energy to displace fossil fuels.

The above is a reasonable statement, however, your website says the following:

> We can’t quit carbon without energy storage. To stop climate change, renewables must replace fossil fuels.

> Without energy storage, renewables will fail to reach even 25% of the energy market by 2040. This will cause global temperatures to rise over 3°C, a level which will cause catastrophic climate damage.

Those are not only misleading but outright lies. Now, I won't hide my bias here: I work on nuclear fission. But here's the reality: there are many possible pathways to net-zero carbon and limiting global temperature rise to well below 3°C (below 1.5°C, in fact).

To just list a few:

* Massive adoption of nuclear fission alone

* Development & massive adoption of nuclear fusion alone

* Shift from coal&oil to natural gas, cleaner fossil fuels + scaling carbon capture/sequestration

* Shift from fossil fuels to renewables + storage (probably not alone)

Or any combination of those, in addition to a number of alternative approaches.

---

Edit: Also, it should be noted that the energy sector alone only represents about 1/5 of the emissions problem. In order to get to net-zero GHG and stop anthropogenic climate change, the clean energy sector needs to expand well past the current global TPES (total primary energy supply) and supply net-zero electricity that allows for the decarbonization of the other main contributors:

Agriculture, steel+cement+plastic, transportation, buildings&appliances, and flora loss leading to lost carbon stores (deforestation etc)

Even if renewables and storage could supply 100% of our electricity or even total power supply, you would still only be 1/5 done solving climate change. There is no unitary solution.

---

Acting as though renewables are necessary, instead of one of multiple options, is either denial or malice. In reality, renewable energy is nowhere near capable of reliably and safely taking on a large portion of our energy supply globally. It is expensive (you can make claims about unit cost, but what really matters is country scale - look at German electricity prices vs. just about everywhere else), it is dangerous, it takes a lot of land area, and it is the least reliable.

I don't want to spend a lot of time here stomping on renewables, but there is plenty of reason to, and my main point is that I feel it is unjust and immoral for you to claim that renewables "must replace fossil fuels" if we are to stop climate change. It's just not true, and you need to admit that.

The energy industry is arguably the most important backbone of our modern society, and it is responsible for the safety and health of billions of people. Whether you're working on the generation or storage side, it is all of our responsibility to be honest and make true claims - not to spread biased misinformation when it benefits your particular solution.

I'd like to finish by making it clear I'm very happy you're working on your tech and I hope you succeed in making it the best it can be - renewables are certainly trending toward higher adoption, and we need reliable, efficient, scalable storage solutions in order to avoid dangerous outages and other grid issues.

You bring up valid criticisms of existing solutions, although I do think you should also be fair to those. Most things in life are a trade-off: maybe pumped hydro is a better majority solution for the grid, but lithium ion is an incredibly important, successful and expanding technology that needs to be given credit for its wide range of great applications.

I hope this response has not been inflammatory: I just very much care about maintaining a truthful public discussion around energy. I wish you the best of luck, and I hope you can take something useful from this.


Thanks for expressing this, johnmorrison, and for being very uninflammatory about it :) Here is my white paper, which cites ample research: https://www.terramenthq.com/underground-pumped-hydroelectric...

If you want to send me research supporting some of your thoughts here then I'd love to see it. I do know for example that it's a very valid debate whether or not nuclear has a place in our climate fight.

I'll try to re-work the language in my materials to make sure I'm not excluding other valid viewpoints. Thanks!


> Without energy storage, renewables will fail

but none of the below are renewables

* Massive adoption of nuclear fission alone

* Development & massive adoption of nuclear fusion alone

* Shift from coal&oil to natural gas, cleaner fossil fuels + scaling carbon capture/sequestration


You're clipping the wrong part of the sentence.

Parent is primarily disputing: "To stop climate change, renewables must replace fossil fuels." and if renewables fail, "this will cause global temperatures to rise over 3°C"


> only represents about 1/5 of the emissions problem

I wonder what the percentage would be like if the energy sector needs to provide enough energy to replace all fossil fuels. It's certainly much higher than 20%.


Yup. That's why we need to exceed the TPES with clean energy. We also need to significantly expand TPES if we are going to eliminate most of the remaining poverty in the world, to improve mean QoL.

I'm hoping fission can scale to about 2 EWh annually in the next several decades. Should be noted this is quite aggressive scaling. 500 PWh is more than enough to reach net-zero emissions.


I am for research in fission, but it is expensive to deploy, and in my opinion it needs to solve the problem of nuclear waste. I also think the supply of nuclear fuel can be an issue, although there are some concepts for fuels other than uranium.

> look at German electricity prices

True, pretty expensive. But those prices also include capital for investment in energy infrastructure, such as building lines to get power from the north (high production) to the south (high consumption). The implementation tends to be slow, but there are other reasons for that.

Another example is Norway, which gets 98% of its power from hydro. Sure, they have topographical advantages not available everywhere. But technologies like this could open up more possibilities.

So fission can be utilized, but I doubt that Germany closing plants was a terrible decision.


> I also think supply of nuclear fuel can be an issue, although there are some concepts for other types than uranium.

There are 3 fission fuels occurring in nature: Th232, U235, U238

Actually, our reserves of uranium are greater (by energy available to generate) than all of our coal, oil, and natural gas reserves combined.

Our Thorium reserves are even greater than those.

In fact, Thorium is extracted as a byproduct of Rare Earth Metal extraction, and so we currently mine enough Th232 per year to replace the entire global energy and fuel industry even though there is no demand for Th232 extraction. Kind of mind blowing.

---

> [fission] it is expensive in deployment

I don't see where this idea comes from - in real life, regions that are powered by more fission have significantly cheaper electricity than those powered by less.

---

> the problem with nuclear waste

I genuinely don't think there is a problem with nuclear waste, and that this concern is a myth / misunderstanding based on a mix of fear-mongering via conflation with nuclear weapons and a lack of comparison.

Consider the following: all energy sources have waste products - nothing is 100% efficient.

Fossil fuels pump literally billions of tonnes of toxic gas into the air as their waste product. It moves around, we can't store it, and it is responsible for the deaths of millions of people each year through air pollution.

The production of renewables has the same issue (although different gases), and it also tends to pollute the water and local environment with other toxic chemicals and metals.

Nuclear fission produces the densest and least amount of waste of any source, which is solid and easy to manage. We know where quite literally all of it is, and it doesn't hurt anybody or negatively affect the environment in any way as long as you keep it stored somewhere.

As far as I'm concerned, nuclear energy does not have a waste problem, it has a waste solution. Global warming is the problem with energy waste, more specifically it is the problem with hydrocarbon waste.

---

> Another example is Norway that uses 98% hydro power. Sure, they have topological advantages not available everywhere. But technologies like this could open up more possibilities.

Agree with you. Renewables tend to vary in effectiveness based on location - in those locations which are well-suited for them, I think they should be used! Though I'm not sure what you mean by "could open up more possibilities" - we've had hydro power for thousands of years.

---

> I doubt that Germany closing plants was a terrible decision.

Note the following excerpt from Mike Shellenberger on Twitter:

  Germany’s renewables experiment is over. 

  By 2025 it will have spent $580B to make
  electricity nearly 2x more expensive & 10x
  more carbon-intensive than France’s. 

  The reason renewables can’t power modern
  civilization is because they were never
  meant to.

  A major new study of Germany's nuclear
  phase-out finds

  - it imposes "an annual cost of
    roughly $12B/year"

  - "over 70% of the cost is from the 1,100
    excess deaths/year from air pollution
    from coal plants operating in place of
    the shutdown nuclear plants"


I like to use current numbers, because extrapolating development is often pretty close to lying.

And Germany has much to do for carbon efficiency, but for total emissions it is somewhere in the middle.

https://file.scirp.org/pdf/ME20120500016_67195744.pdf

Data is for overall efficiency, not power production.

And Shellenberger is a nuclear lobbyist, for that matter, and his statements should be scrutinized. I am not fully content with the decision to make such a cut to fission power generation, but all these numbers are conjecture.


> Shellenberger is a nuclear lobbyist

I think it is extremely foolish to make caricatures of people. Twenty years ago, Elon Musk was a software startup guy who had no idea about anything hardware - but that's only because nobody bothered to consider the full human behind the caricature.

Mike Shellenberger was an anti-nuclear activist for much of his early life and has always been (and is still) an environmentalist. Furthermore, he may be a lobbyist now (I'm not sure if you are right or wrong), but he ran for governor of California a few years ago. He has been very explicit in explaining his reasoning for shifting from anti-nuclear to pro-nuclear in multiple talks and articles.

Take a look at the full human, and your justification for scrutiny fades away. Everybody should be scrutinized to an extent, but he is not fundamentally a biased lobbyist with financial incentives.

> Germany has much to do for carbon efficiency, but for total emissions it is somewhere in the middle.

This is the problem, man. Germany has spent hundreds of billions of dollars on renewables and they still have high GHG emissions - all they have to show for their massive spending is a couple thousand extra deaths per year and higher electricity prices.

If you gave my company the same amount of money, we'd have the entire world to net zero emissions within two decades.

Goes to show the inefficiency of government funded programs, and the awful incompatibility of renewable energy with a reliable, affordable consumer electricity market.

> I like to use current numbers, because extrapolating development is often pretty close to lying.

We can use current and past numbers: for its entire existence, nuclear fission has been the (a) safest, (b) highest fuel density, (c) least waste-producing, (d) lowest emissions, (e) most reliable mass energy source humanity has ever had.

The new generation of reactors will only improve this divide between fission and everything else. If you are against extrapolating development and want to rely on established numbers, you must conclude [fission > renewables]

I know I'm biased, but I'm also right about all those superlatives.


Just to make a note: my energy bill here in Germany was always high! Seriously, even before Germany did a lot for renewables, it was high.


How large is a typical system - how much land do you need to excavate?


If we attach our installation to an existing reservoir, we'll take up nearly zero land above ground. If we build a new self-contained upper reservoir, it will be about 0.5 miles on each side and 40 feet deep. It can be built with material excavated from the lower reservoir. This may seem large, but it's for a huge amount of storage (20 GWh) - enough to balance the load of a large city. And keep in mind that it's about the same size as the many large reservoirs that are scattered around a large city.

Again, the most promising option would be to simply attach our installation to an existing reservoir. We don't use any additional water; we just borrow it. For an amply sized reservoir, each cycle would just raise and lower the water level by an inch or so. Another promising option is that we can even use the ocean as an upper reservoir; salt water can be accommodated -- see our notes about the Okinawa Yanbaru Station.

There are more details in our white paper posted on the website.


Why would a new upper reservoir need to be so wide and shallow, rather than having much less surface area and being much deeper?


Good question: it doesn't really need to be, those numbers are partly just to visualize it. But we do have some reasons to keep it with more surface area:

- less digging

- less reinforcing needed

- it's more stable

- in some cases we're interested in floating solar on top of the reservoir, which wouldn't work well if the reservoir was too deep

But it's certainly not out of the question to go deeper instead.


This would correspond to a height difference of about 1 km between upper and lower reservoir right?


yup! More fun facts about a 1 km head height... Off-the-shelf turbines are actually spec'd for a max head height of something closer to 0.5 km. So the design calls for a double-drop. This design approach is taken from the DOE research linked on our website.
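
As a rough cross-check of the figures upthread (my arithmetic, using the 0.5 mile × 0.5 mile × 40 ft reservoir and the 1 km head):

  V \approx (805\,\mathrm{m})^2 \times 12.2\,\mathrm{m} \approx 7.9\times10^6\,\mathrm{m^3}

  E = \rho V g h \approx 1000 \times 7.9\times10^6 \times 9.81 \times 1000\,\mathrm{J} \approx 7.8\times10^{13}\,\mathrm{J} \approx 21\,\mathrm{GWh}

That's consistent with the ~20 GWh figure above (before the 80-85% round-trip losses discussed elsewhere in the thread).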


so where the geology allows it, why not go even deeper with the lowest reservoir and put multiple turbines in series, with perhaps small reservoirs every 0.5 km?

Then the total energy capacity is E = V * rho * g * h, so the stored energy is proportional to height, while the tunnel-boring price is roughly constant as long as the bored volume of the reservoirs is much larger than the volume of the vertical shafts.

I realize it's a bit oversimplified, but if we consider two prices, p1 (price per volume for boring horizontally, for the reservoirs) and p2 (price per volume for boring vertically), then increasing the reservoir size by a volume delta-V requires boring 2 * delta-V (upper and lower reservoir), while the cost of boring vertically for the extra height depends on the shaft diameter...


Pumping water up a mountain to store energy has been used around the world with much success; in my opinion, it seems to be the most realistic way to store energy efficiently.

If you can remove the need for the mountain, you could scale this out to everyone in the world and single-handedly solve this problem.

There will be other problems to overcome, but someone will figure it out, so why not you? I wish you all the best in this very important effort.


That's 90%+ efficiency pumping it up and 90%+ generating on the way down (electrical conversion), so maybe 85% total storage efficiency. How does that compare to batteries or other storage systems? Lithium is 98% by some sources.


the UPHS seems like a durable system that could store and deliver many more cycles than a battery?


Yeah total cost of ownership would tend to even things out.


If you are looking for an informed academic perspective on energy trading and renewables, I can recommend contacting the following researchers [1]. Just write them an email and explain what you are working on; I could imagine they are going to be interested!

1: http://www.is3.uni-koeln.de/


> affordable

affordable, but efficiency is so-so. 70-80% according to Wikipedia [1]

[1] https://en.wikipedia.org/wiki/Pumped-storage_hydroelectricit...


I love this quote: Hydro pumped storage is “astoundingly efficient…In this future world where we want renewables to get 20%, 30%, or 50% of our electricity generation, you need pumped hydro storage. It’s an incredible opportunity.” – U.S. Energy Secretary Dr. Steven Chu in 2009. Still true today.

And actually, we think that 80-85% round trip is more accurate for our projects because we'll use the latest and greatest tech (variable-speed, reversible Francis-style pump/generator turbines). I think the 70% in those figures is citing older projects with pump/turbines that were not quite as efficient.


It does not matter. The alternative is curtailing wind parks/solar generation, and wasting clean energy and even more money.


> affordable, but efficiency is so-so. 70-80% according to Wikipedia

Say what? So-so? 70-80% efficiency sounds pretty damn amazing!


depends on the alternatives, which at the moment are indeed considerably worse


Do you have a link to the 1984 paper?

How can I help?


The link to the 1984 paper (from their website): https://www.osti.gov/biblio/6517343


Thanks to the commenter below who posted the link to the paper. Yes please! If you want to email me I'd love to see how we could work together. eric at terramenthq.com or syllablehq.com


Alerting people when proposals are put before municipal councils to develop natural land. I found out too late that a huge, beautiful forest where I live is going to be ripped up and turned into investment condos. So in the interest of giving natural land a fighting chance, I'm setting up a system that will notify users when an address they've submitted is being rezoned.

The challenge is obviously scaling, since every municipality is different. For now it's going to cover my region and we'll see from there.


Sounds challenging... and I agree that adapting this to each municipal area would take a huge amount of effort.

I tried something similar, but mostly to figure out where land has recently been purchased in a region. But the land/parcel/address system is all over the place, and even that info is not consistent across cities.

have you looked at data providers who may have this data?


Agreed, the differences between municipalities makes this really hard to scale. If data providers like the ones you mention don't exist, the two ideas that immediately come to mind are a) becoming that data provider (obviously), or b) building a platform for municipalities to store their land ownership data on.

Both sound like interesting problems, and it would be awesome if municipal-level land data was available at scale.


Exactly -- while the alert system is interesting and does have value, if they are putting in the tough, long, and grindingly harsh effort to compile these disparate data sources, that itself is the moat and becomes the product. Definitely worth doing!


How do you become that data provider? You need some scalable way to get all that data, right?


I'm not sure how to do it scalably, other than by becoming the host for that data, which is why I included my second option. It seems much easier (and much more profitable) than figuring out how to access the data in its existing format.


Ultimately someone has to do the hard-to-scale 'last mile' dirty work, I suppose.


Do you know of or recommend land trust organizations that collect money from donors to simply buy this type of land to protect it?


To suggest another axis you could expand along, there is a broader issue with notifications about planning. You could have a system that covered all things and you could ask it to notify you about applications involving:

"Forests within 100 miles"

"High rises within 10 miles"

"Anything within 0.5 miles"


What's the format of this going to look like? If it's closer to open sourced, I'm sure some people would spend a weekend getting it up and running in their area if the infrastructure is built.


I'm hoping they will!! For most municipalities, it should be very easy... just submit a URL and the site will pull the markup and search it for an address string. More complicated municipal setups, or municipalities with actual data feeds, will be tougher.
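
For the simple case, the core loop really is tiny. Here's an illustrative sketch using libcurl (the agenda URL and the address string are made up; a real version would also need address normalization, scheduling, and notification):

  #include <curl/curl.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  struct buf { char *data; size_t len; };

  /* libcurl hands us the page in chunks; accumulate them. */
  static size_t on_chunk(char *ptr, size_t size, size_t nmemb, void *ud) {
      struct buf *b = ud;
      size_t n = size * nmemb;
      char *p = realloc(b->data, b->len + n + 1);
      if (!p) return 0;                /* abort transfer on OOM */
      b->data = p;
      memcpy(b->data + b->len, ptr, n);
      b->len += n;
      b->data[b->len] = '\0';
      return n;
  }

  int main(void) {
      struct buf b = {0};
      CURL *c = curl_easy_init();
      curl_easy_setopt(c, CURLOPT_URL, "https://example.gov/council-agenda.html");
      curl_easy_setopt(c, CURLOPT_WRITEFUNCTION, on_chunk);
      curl_easy_setopt(c, CURLOPT_WRITEDATA, &b);
      if (curl_easy_perform(c) == CURLE_OK && b.data &&
          strstr(b.data, "123 Forest Rd"))      /* subscriber's address */
          puts("match -- notify subscriber");
      curl_easy_cleanup(c);
      free(b.data);
      return 0;
  }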


That's a big challenge, kudos!

In my one county alone there are 90+ municipalities, each with its own Planning and Zoning Commission, and most with their own website (of varying quality). I'd say 5-10% don't have a website at all.

In your situation, how are you getting data for when land is up for sale/zoning etc.?


That's a really worthwhile problem to be working on. Kudos to you. I'd be gutted if something like that happened in my area, although I'm lucky enough to live in an area that's mostly trees.


I can't wait to use nimbyism as a service


We're trying to improve the security of the Internet by replacing Certificate Authorities with a distributed root of trust.

DNS is currently centralized, controlled by a few organizations at the top of the hierarchy (namely ICANN), and easily censored by governments. Trust in HTTPS is delegated by CAs, but security follows a one-of-many model, where only one CA out of thousands needs to be compromised in order for your traffic to be compromised.

We're building on top of a new protocol (https://handshake.org, launching in 7 days!!) to create an alternate root zone that's distributed. Developers can register their own TLDs and truly own them by controlling their private keys. In addition, they can pin TLSA certs to their TLD so that CAs aren't needed anymore.

I wrote a more in-depth blog post here: https://www.namebase.io/blog/meet-handshake-decentralizing-d...


This is really interesting. Are you using concepts from self-sovereign identity¹²? Do you think there is a relevant intersection?

¹ http://www.lifewithalacrity.com/2016/04/the-path-to-self-sov...

² https://w3c-ccg.github.io/did-primer/


Yes! It's funny you mention that -- I just bought The Sovereign Individual. I haven't read it yet, but from a cursory glance I think there is a lot of intersection. Would love to discuss more -- we have a Discord I can invite you to if you're interested; just ping me at the email in my profile.


All blockchains use self-sovereign identity. They just don't use that buzzword.


This is super exciting and definitely one of the foundational problems of the internet. Happy to help in any possible way!


@chinmays Awesome, can you join our Discord? Let's discuss there; we just launched today!! https://discord.gg/9r9wUrq


Do you have any plans to address TLD squatting?


Handshake has built-in mechanisms to prevent squatting. All TLDs are won through an open Vickrey auction (if you win, you pay the second-highest bid for the TLD). This prevents squatters from being able to easily buy up all the good names at once.

There is an issue, though -- the auction system gives an early advantage in buying names for cheap. If only 100 people are buying names on day 1, they'll be able to buy a lot of the names without competition. Handshake has a mechanism to prevent this: names are released for bidding over the first year, so that people who learn about it six months late can still register good names. The release schedule is basically hash(name) % 52, which determines the week in which you can start registering any given name.
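
To illustrate the week-gating idea, here's a minimal sketch (my example; FNV-1a stands in for whatever hash Handshake actually uses):

  #include <stdint.h>
  #include <stdio.h>

  /* FNV-1a, used here only as a stand-in hash. */
  static uint32_t fnv1a(const char *s) {
      uint32_t h = 2166136261u;
      for (; *s; s++) { h ^= (uint8_t)*s; h *= 16777619u; }
      return h;
  }

  int main(void) {
      const char *name = "example";
      /* Every name deterministically maps to one of 52 release weeks. */
      printf("'%s' opens for bidding in week %u\n", name, fnv1a(name) % 52);
      return 0;
  }

Because the mapping is deterministic, anyone can compute which week a given name unlocks, but nobody can grab all the good names in week one.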


I'm growing the freshest lettuce, iron-rich kale, and a lot of other leafy greens!

While in college (CS & Math), I got heavily interested in growing food in the most efficient and healthiest way possible. I was a dreamer when I started so I thought more of how to grow 'earthly' produce on Mars, but then I realized that my own planet Earth is so massively underserved.

It's basically like this: I mastered growing leafy greens in an indoor closed environment, then I tried to cover all the major physical and biological markers, and now I try to find the optimal levels of the 5-6 variables (currently) that I can fully control and that may produce the best phenotype: CO2, O2, light, nitrate, P, K. These parameters have their own sub-definitions.

So far I have had great results. I am trying to raise investment so I can finally make it a reality. Check the numbers here: hexafarms.com (no fluff)


> THE FINAL PRODUCE IS THE ULTIMATE MANIFESTATION OF THEIR PLATONIC IDEAL FORMS

How's the taste?

Not denying it's possible to grow food very efficiently indoors but a vastly oversimplified opinion is that plants need sunlight to be tasty. Is this wrong?


You'll have to take my word for it, but taste-wise (based on my surveys, too) it's the 'best' they have had (I'm talking mostly about city dwellers).

Yes, you don't really need sunlight whatsoever. I was shocked myself until I recalled the high school biology concept of genotype and phenotype, i.e. the genetic structure that manifests itself given the right physical conditions (at least in plants). As for the plants' nutrients, here's a classic: Teaming with Nutrients: The Organic Gardener's Guide to Optimizing Plant Nutrition, by Lowenfels. I was amazed to find how complex, yet simple, plants are.


You should check out "The Real Martian" https://www.youtube.com/channel/UCd8t8Dq8oZeAjGDx_87azBw/abo...

and Beanstalk (a YC company) https://www.beanstalk.farm/


Funny story: I was rejected by YC last batch. But I get it: I figured they look for traction and whatnot, so I made the pitch video about a very specific aspect of Hexafarms, monitoring, since some people were willing to check that out. No doubt YC would reject it. On the other hand, the Thiel Foundation reached out to me, but they had a dropout requirement and whatnot which I was not able to fulfill (and after a while they stopped reaching out, too).

Thanks for the references.


The Real Martian is great. Go back and watch the Hab 1 videos; it was truly sad to see snow collapse it. He's now quit his full-time job and joined a startup, and they're about to go full steam ahead on Hab 2 after months of modelling and other work. Ultimately they're looking at creating a commercial product where a family, or a couple of families, can erect a habitat and grow a decent amount of their food.

Hab 1 had aquaponics and fish. I'm not sure what Hab 2 is going to look like, as they haven't shared much, but he's just started churning out videos again the past month or two.

It's a really neat project, I just hope he continues to show as much as he did with Hab 1 now that he's part of a startup.


Hey Dave,

Is it possible to set up a 'microfarm', similar to a window fridge appliance, in a part of an apartment room?

I'm ok with some manual work every 2 days, such as filling in a water container.

Besides water & substances, how much electricity would this use to grow a generation of leafy greens, per kg of produce?

Thanks for working on this!


Amazing! It would be awesome if people living far from traditional agricultural areas could access fresh greens without insane transportation costs (both financial and environmental).

Are your farming systems fully automated? If so, has that been more of a software challenge, or more of a mechatronics challenge?


> It would be awesome if people living far from traditional agricultural areas could access fresh greens without insane transportation costs (both financial and environmental)

That's what actually got me started. A head of lettuce travels 1,200 miles on average (https://ucanr.edu/datastoreFiles/608-319.pdf) and is so disconnected from the site of consumption.

My vision is to have distributed farms every eight blocks or so (contrary to conventional wisdom, I have found that smaller indoor farms will be more profitable).

Not really; it's quite manual (as of now). I have had to change countries almost three times since I started, so I'm focusing more on the data and training-algorithms part to figure out the right parameters (the farm is just a testbed). One example would be using a $5 camera for measuring growth rather than buying a $100 3D camera.


> mastered growing leafy greens in indoor closed environment

I love this! Makes me happy to see someone's working on such an interesting problem that would benefit many.

For feedback, I believe using photographs of the leafy greens would be effective in communicating your vision.


Thanks I'll do that.

I actually graduated from college this year, and for personal reasons I've had to change countries; now I'm in another Master's program... ready to drop out anytime. The whole project has been dormant for months at a time! I'm mostly trying to leverage ML for optimizing things. I guess that's what modern farming is missing (not ML per se, but optimization).

I'm trying to raise some investment (or in the worst case bootstrap and risk everything in the next few months), then I will go crazy with the idea.


This is awesome man, exactly what piqued my interest!


I'm working on pacing emails to a more manageable, calmer schedule. I'm doing it with an essentially UI-less system, which is a rather fun way to produce an app. It simply requires a user to update their email address on the website that emails them too frequently to a paced.email alias. E.g.

  johndoe.shopify@daily.paced.email
  johndoe.stripe@weekly.paced.email
  johndoe.github@monthly.paced.email
At the end of each period, a single email is sent to the real email address containing all of the messages the alias received over that timeframe.
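
The alias itself carries everything the service needs to route and schedule. Here's a hypothetical sketch of the decoding, with the field layout guessed purely from the examples above:

  #include <stdio.h>
  #include <string.h>

  int main(void) {
      char alias[] = "johndoe.github@monthly.paced.email";

      char *at = strchr(alias, '@');
      *at = '\0';                           /* split local part / domain */
      char *domain = at + 1;

      char *dot = strrchr(alias, '.');      /* last dot in local part */
      *dot = '\0';
      char *label = dot + 1;                /* sender label, e.g. "github" */

      char *cadence = strtok(domain, ".");  /* "daily" | "weekly" | "monthly" */

      printf("user=%s sender=%s cadence=%s\n", alias, label, cadence);
      return 0;
  }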

https://www.paced.email

I'd love to hear how you'd use it.


That's a neat idea.

> At the end of each period, a single email is sent to the real email address containing all of the messages the alias received over that timeframe.

Why not send each received mail individually? If you aggregate them first, it makes it very difficult to reply to individual messages with standard email clients.


Good question lqet, thanks. I did create a version that added each message as an EML file to the email with links to each file too. Perhaps a cunning combination of the two variants might be the way forward. Appreciate the suggestion, it's a good one.


I second the "delay-then-send" approach. Don't bother with a digest. Just hold email until the scheduled time then release them. Might want to put suitable intervals or you might get zinged for spam or otherwise throttled. You've probably hit that already.

I use a similar but far less fancy approach with email filters: I have everything put into its own filtered folders then only check them on a schedule.

Your approach is good because the schedule is right there in the email address.


Thanks, rs23296008n1. I toyed with the idea of a send-in-one-hit approach, but I feel it would be counterproductive to having a calm inbox. Getting 5, 10, 50 emails in quick succession would certainly raise my stress levels. Perhaps I can offer two or three digest variations... 1) all-in-one as it is now (plus EML), 2) burst.

Food for thought.


Maybe have a dispatch interval. The weekly-on-tuesday emails get sent one every 5 minutes starting at some time.


Very good. I’m noting all these suggestions down. The app only launched a few days ago. I wanted to make sure it was a valid product first before doing too much to it. I’ll gradually add more functionality and examples over time.


Sounds like you have your mvp and have an incremental plan going forward. Good. The thing is out in the world - that is something a lot of folks don't do.


I'd imagine this is most useful for things that send frequent read-only emails. Anyone personal that you'd want to reply to would presumably get your normal address.


I haven't used this, but I see the utility. Wouldn't having an admin UI to map IDs to periodicity be better than using a hard-coded subdomain? That way one can prevent bad actors from switching pace when they come to know of this site. I could also raise or lower the pace for an ID without having to go through the hassle of changing my email address. Also, doing that would let you sell the solution for use with custom domains.

I mean, use github@johndoe.paced.email and have an admin UI that lets you set "github@johndoe.paced.email" => "weekly".


These are some great suggestions. I'm starting to think about how I could use custom domains etc. I need to figure out the next steps for the app and what people would be prepared to pay for such a tool. Ideally, I'd like to keep everything simple when it comes to pricing and not have functionality based tiers. Not sure yet.


I like how this is done, I'd suggest forwarding to another existing email address, for example: johndoe_AT_gmail.com@weekly.paced.email

Then you don't even need a website.


I think there's a balancing act between making it memorable enough and simple enough. Great suggestion though, noted! Hacker News is incredible. A spectacular hive mind for mulling potential ideas over.


Great idea. My suggestion: why not use the Gmail-style johndoe+spotify@ suffix? Just because people would be more used to it. That way johndoe@ also would work.


An irritatingly large portion of websites don't let you put + in email addresses.


I ran into an issue where using the + notation required me to create a whole new account on airbnb because I had forgotten that I used + in my original email.


Thanks thewarpaint. Good point. Having read the below counterpoints though, I'm not quite sure now! I'll look into it.


On second thought they have a point, I have never had an email address with a dot being rejected, but I've seen it for the plus alias several times.


I'm tackling the issue of managing Reddit saves.

Across all platforms (not just Reddit), people including myself like to save/bookmark interesting content in the hopes of getting some use out of it later. The problem arises when you start accumulating too much content and forget to ever check that stuff out.

I'm working on a solution to help resurface Redditors' saved items using personalized newsletters. I'm calling it Unearth, and users get to choose how frequently they want to receive their newsletter (daily, weekly, or monthly). The emails contain five of their saved comments or posts and link directly to Reddit, so that when viewing them, they can decide whether or not to unsave them.

Basic functionality is all there, just needs some more styling and the landing page could be spruced up.

https://www.tryunearth.com/


Signed up, and I love how fast it was to create an account. Literally two clicks and 5 seconds, as my password is saved in Google Chrome and you sign up through Reddit. I think you're onto something with that onboarding process.

Kinda different, kinda the same, but I'd love to use an app with much better search than the 'direct search' currently in most aggregator/note apps. If I searched 'quotes', it would rip out and return all the things in italics, in quotes, or things that the algorithm deems quotes based on its scrape of the internet; kinda like Google, but 'personal search' based on my notes, articles, all my different emails (work, and my 37 different Gmail accounts), and websites I frequent (like Reddit, Hacker News comments, etc.). There was an HN article the other day that got me thinking about this problem, but I can't seem to find it. However, it approached it from a much deeper technical level, utilizing emacs and searching through his code. If you could bring that into an easy-to-use, consumer-facing GUI, I think it'd have the potential to be pretty game-changing.

'Personalized Search, and we don't have to steal your data because you willingly give it to us' - Google


I believe this is the HN article you're referring to: https://news.ycombinator.com/item?id=22160572


I tried to make onboarding as frictionless as possible so this makes me happy to hear!

And that's a really interesting idea regarding search. Would love to see the HN thread/article you mentioned to get a better understanding of the concept. As of now, Unearth's only focus was on active content resurfacing, but I've seen many Redditors mention the wish to search their saves as well so I think I'll look more into this.

Appreciate the ideas, keep them coming.


Curious if you've thought about this as a browser extension where it injects what you've saved into the main reddit feed. For example, one saved item per refresh. So you naturally rediscover and engage with items you've saved in the past, with a decent algorithm to help prevent any fatigue from seeing the same item too many times.


canada_dry also brought up the idea of a browser extension (for privacy's sake). I think that paired with your idea of inserting saved content into the main feed is very enticing.

I would need to figure out how injection would work for saved comments, do you have any ideas? I'm definitely going to save this idea so thank you!


Awesome. Not sure how I'd handle comments, since this approach would aim to be as seamless as possible. Maybe when they click into the thread you use the UI to remind them of the other saved items they have, using the right sidebar for example, but I don't like how that grabs attention from the core experience. They could also always click the browser extension, similar to Pocket, but I imagine that action would be used less compared to things naturally appearing on the pages they browse. You'll have to find ways to train the user to use that behavior regularly; perhaps, again similar to Pocket, when they click "save" the browser extension shows a little popover so they see it saved in the extension, can tag it, etc., and know their other content/saved comments live there.


Wow, that's really neat! I sometimes hesitate to save something because I think "when would I really come back to this?" But this would probably get me to save more things that I find interesting.


Thanks! I've been hesitant to show it off thinking not many people would find it useful but you've given me hope :)

Feel free to try it out and let me know what you think or if you have any suggestions.


Playing devil's advocate: I'd really prefer this kind of functionality as a separate browser add-on - i.e. unlinked to my Reddit sign-on.

For privacy you needn't require the reddit ID of your users. Simply that they want to save something from reddit to their tryunearth.com account.


I appreciate you raising this concern, I honestly never thought about that.

> Simply that they want to save something from reddit to their tryunearth.com account

When you say that, I envision the extension overriding or extending Reddit's save button functionality by making an API call to the unearth backend. Is that kinda what you had in mind?


Exactly. Have an initial import functionality in onboarding, where the user could somehow import their currently saved content. Thereafter you could have an extension that implements a 'save to unearth' button cleanly into reddit's UI.


This is a great idea. Rediscoverability is a big problem, especially with the growing popularity of personal knowledge systems (Notion, Roam, etc), which have been discussed a lot on HN.

I take a ton of notes on Notion, but I worry that I'll never see most of them again. Maybe part of the value is just in writing the notes in the first place...?

Kudos for solving this problem for Reddit!


Just a heads up: on mobile (Android Q), clicking on 'Get started using Reddit' gives me a 'No apps can perform this action' error from the OS. I have the Reddit app installed, so most likely the link tries to open the app (instead of opening the link within the browser) and fails.


Thanks for the heads up, will debug and push a fix tomorrow.


> I'm calling it Unearth...

Why not call it Digg?


hehe I see what you did there ;)


Awesome, this but for twitter likes + retweets (I don't use reddit enough)


This is a great idea. I need this for HN also.


I'm building an AI agent to help develop foreign language skills through realtime (spoken) conversations.

It's funny how we're all working from different definitions of the word "problem" - I'm certainly not changing the world with medical supplies for developing countries, renewable energy, payment systems and so on.

But it's something I'm really passionate about, and I'd be over the moon if I came anywhere close to the picture I have in my mind.

Back when I was studying German and Chinese, I would spend hours and hours on rote practice with little to show for it. My brain almost felt like it was on autopilot - the eyes would read the words and the hands would write the sentences, but the neurons weren't really firing. It didn't feel like I was properly building the synaptic bridges necessary to actually use those words in conversation.

On the flipside, after just 20 minutes speaking with a tutor, my proficiency would improve by leaps and bounds. Being forced to map actual, real-world thoughts/concepts to the words/expressions I had learned - that's what made everything click. It felt like the difference between just reading a chapter in a maths textbook and actually doing the exercises.

So after keeping track of progress in NLP and speech recognition/synthesis in recent years, it seemed like a logical time to start. Progress is slow/incremental, but it is there.


I think it’s a great idea. I first started learning Dutch with the Michel Thomas audio course, which is very much about being in a simulated small language class where you need to say sentences when prompted by the “teacher”. Later on, I learned almost all the Dutch I needed to pass the citizenship language exam just by conversing with friends and family in Dutch, gradually building up fluency. Let me know if you need a beta tester; email is davedx@gmail.com


That would be fantastic, thanks. I'll jot your e-mail down and will reach out when I'm getting close to something testable.


I'm an English teacher. Sign me up too?


Sure! My email is in my profile, feel free to shoot me a note with your contact details.


1.) A solver for the unstructured Euler equations. ...I was intending to volunteer time for a local university project investigating parallels between holographic light with orbital angular momentum and hydrodynamics (in this case the Euler/Madelung equations). Not sure what happened as... volunteers get lost in the shuffle? Anyway, the solver is fun.

2.) Porting my Python code for nonlinear gradient-driven optimization of parametric surfaces to C++. It includes a constraint (propagation) solver based on miniKanren, extended with interval arithmetic for continuous data (interval branch and contract). This piece is a pre-processor, narrowing the design space to only feasible hyper-boxes before feeding design parameter sets (points in design space) to the (real-valued) solver. It also does automatic differentiation of control points (i.e. B-spline control points) so I can write an energy functional for a smooth surface, with Lagrange multipliers for constraints (B-spline properties). Then I get the gradient and Hessian without extra programming. This makes for plug-and-play shape control. I am looking to extend this to subdivision surfaces and/or to work it towards mesh deformation with discrete differential geometry, so I've been baking with those things in separate mini-projects. (There's a toy sketch of the contraction step after this list.)

3.) Starting the Coursera discrete optimization course. This should help with, e.g. knapsack problems on Leetcode, some structural optimization things at work, and also it seems the job market for optimization is focused on discrete/integer/combinatorial stuff presently so this may help in ways I do not foresee.

4.) C++ expression template to CUDA for physical simulation: I am periodically whittling away at this.
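
A toy sketch of the interval contraction step behind item 2's "branch and contract" (illustrative only - the single constraint, names, and numbers are invented, not the actual solver):

    # Hull-consistency contraction for one constraint, x + y == c,
    # over interval boxes; a real solver chains many such contractors.
    def contract_sum(x, y, c):
        (xlo, xhi), (ylo, yhi) = x, y
        # rewrite the constraint both ways and intersect with the current boxes
        xlo, xhi = max(xlo, c - yhi), min(xhi, c - ylo)
        ylo, yhi = max(ylo, c - xhi), min(yhi, c - xlo)
        if xlo > xhi or ylo > yhi:
            return None  # empty box: prune this region of the design space
        return (xlo, xhi), (ylo, yhi)

    print(contract_sum((0.0, 8.0), (0.0, 8.0), 10.0))
    # ((2.0, 8.0), (2.0, 8.0)) -- the box shrinks without losing any solution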


Would you be willing to explain what the applications of (2) are? I'll admit that I only understand a fraction of what you said in that section, but I'm curious what you're using it for.


Sure: the automated design-by-optimization of ship hull form geometry which meets constraints and is smooth according to some energy measures.

Build a functional to describe your ship problem, minimize it: if the solver is happy, you have a boat.... uh, or if you haven’t solved the entire problem, you have some geometry which can be stitched together with more optimization to make a boat.

More broadly, “why a boat?” Answer: because boats have a lot of constraints, and a lot of shape ( Gaussian curvature, non rectangular topology, a need to be cheaply produced, etc etc)

So it’s a good problem to tax your generative design or design space search/optimization capability.


Also, if there is a specific piece you’d like me to elaborate on, (I mean, beyond my sibling comment) I’m happy to do so!


I'm interested in your project #2. As you mentioned B-splines, do you deal with trimmed surfaces? Would you have any reading recommendations for someone learning about surface optimisation?


Hey, thanks for your interest! I've avoided trimmed surfaces, in part because I'm interested in doing one or another kind of analysis on or with the parametric geometry, and trimmed surfaces are not so easy to work with for some of the finer control I want from my optimization tools. (They often cause compatibility issues with export between programs as well, but that becomes more important only if somebody uses your stuff ;)

I like other methods of getting local control, or finer shape control of surfaces. In my stuff I've used truncated hierarchical B-splines (THB-splines), which are great for adding detail, but useless for changing topology. People speak highly of (analysis suitable) t-splines but I say they are complicated and subdivision may be better overall now anyway. Generally speaking, I think the whole industry will have to go to subdivision. (Among friends I'd say it may carry right down to poly meshes via differential geometry but those two representations might play well together given the right tools)

Reading recommendations:

For everything you ever wanted to know about a B-spline, including a C++ library implementation from scratch, highly documented and explained: 1.) Piegl and Tiller, "The NURBS Book". This includes a tiny bit of shape control via optimization.

For an explanation of the basics of B-spline nonlinear optimization with Lagrange multipliers, focuses on ships, there is a chapter here that takes you to the state of the art, circa 1995: 2.) Nowacki, et al., Computational Geometry for Ships

3.) Tony DeRose's book "Wavelets for Computer Graphics" actually has some good scripts getting at the basics of wavelet B-splines and some facets of hierarchical parametric geometry.

The above is a start at form parameter design for B-splines. This was okay 20 years ago. It's still important as a basis for understanding optimization of parameterized shape --- even subdivision surfaces have control points.

Generally B-splines were found not to be flexible enough for representing local details efficiently. Further, the optimization techniques still require a lot of manual setup to get things right...

The next steps are still in development: subdivision surfaces are a way forward for shape representation. Generally they were more problematic for computing engineering quantities of interest, especially and precisely where they "go beyond" the B-spline to allow surfaces of greater flexibility -- that is where the analysis suitability breaks down to some extent. Again, this has been patched up in the last couple of decades, but change is still slow to come to the engineering industry.

I think it's well worthwhile to look at geometric optimization in computer graphics as well. See the Caltech multi-res group, Keenan Crane at CMU (Geometry Collective), and tons of SIGGRAPH papers where discrete differential geometry has been leveraged to do neat things with shape. (E.g. curvature flow: https://www.cs.cmu.edu/~kmcrane/Projects/ConformalWillmoreFl... I think there is newer work building off this and adding more complicated constraints, but I can't remember offhand. As is, they have some already!)

Back to the point: you wanted optimization readings. Well, it's mostly in the literature, and the literature is mostly kind of vague when it comes to parametric optimization of B-splines. Though the high points are mentioned, the detail is often hardly much better than you find in Nowacki, 1995. To this end, I have some really specific entry-level PDFs that might help, and the first part of my stuff is written up in this paper: https://www.sciencedirect.com/science/article/abs/pii/S01678... This deals mostly with curves, but has a direct extension to surfaces. Automatic differentiation really helps here! (I never published this bit on the extension to do surfaces directly (with all their attendant properties as potential constraints) as my professor said "direct surface optimization was too expensive". Looking at the discrete differential papers as of late, I tend to disagree.)


I keep coming back to bother you :). One of the newer tricks for making parameter fitting less expensive is active subspaces. I thought you might be interested in playing around with it.

Most of the research is being done out at the Colorado School of Mines by Paul Constantine. The basic idea is that you reduce your parameter space to the eigenvectors of the sensitivity matrix with the largest eigenvalues. Some of the work I have seen in constitutive modeling (and UQ) has effectively reduced parameter spaces of a couple hundred DOF to about 5-6.


Scanning through some literature, does this method require that the input space be equipped with a probability density function “quantifying the variability of the inputs”?

Seems like that would be the (or a function of the) thing we are after in sensitivity analysis.

On the other hand, it appears that I may be able to get away with some naive assumption about this quantity, compute eigenvectors and find the active subspace... and then vary the mode in these directions.

Is this for local or global optimization?

Part of my stuff was about finding a way to guarantee that a particular set of inputs results in a feasible design. (Edit: maybe active subspace could replace this... or exclude poor regions faster)

The other part (the gradient driven part) solves curves and surfaces for shape which conforms to constraints. We really need the fine details to match as the constraints are often of equality type.

From there, it seems this active subspace method could really help in searching the design space. (From what I read, this is the purpose) A more efficient method of response surface design. My stuff is agnostic about this.

Then again, surely it could be of use in more efficiently optimizing constrained curves and surfaces... I will keep thinking, but it seems a secondary use at best - or would you agree?


We should move this conversation to email (as I will check that more frequently and will be more likely to get back to it). See email in my profile.

Active subspaces come from the uncertainty quantification community. If you assume all your parameters are Gaussian, then the sensitivity matrix is directly correlated to the probability density functions. I find it easier to think in terms of the sensitivity matrix, but it's useful to realize that the sensitivity matrix approximates (complex) probability distributions.

My thought was that if you were optimizing over a huge parameter space theta = [theta_1, ..., theta_m], then you could reduce the parameter space by only looking at theta_reduced = [theta_i | d loss/d theta_i > threshold], or you could look at active subspaces and change the parameter space to xi = [xi_1, ..., xi_m], where xi_i = SUM_j a_ij theta_j.

The xi_i could be given by the largest eigenvectors of the sensitivity matrix S_ij = d^2 loss / (dtheta_i dtheta_j).
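
A rough NumPy sketch of that reduction (illustrative only - the sensitivity matrix below is a random symmetric stand-in, and keeping 6 directions just mirrors the 5-6 DOF figure above):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.normal(size=(200, 200))
    S = A @ A.T                      # stand-in for d^2 loss / dtheta_i dtheta_j

    eigvals, eigvecs = np.linalg.eigh(S)            # ascending eigenvalues
    W = eigvecs[:, np.argsort(eigvals)[::-1][:6]]   # 6 most sensitive directions

    theta = rng.normal(size=200)     # a point in the full parameter space
    xi = W.T @ theta                 # reduced coordinates: xi_i = SUM_j a_ij theta_j
    theta_back = W @ xi              # lift back into the full space
    print(xi.shape, theta_back.shape)  # (6,) (200,)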

Wouldn't it be nice if hacker news supported latex.

I haven't done any work here, but I suspect I will be doing some of this towards the end of summer.


Hey this is cool! I did not see your comment until now. Let me take a look (as soon as I can) and I will see what I can come back to you with.

Yeah, Colorado School of Mines! Small world, I am in the metro area. I've actually talked with a physics prof from there about helping with a project.


What library are you using for automatic differentiation? I am working on building code to optimize (and later build) high-quality finite element meshes for structural analysis. For the initial proof of concept, I am simply doing finite differences, but would prefer to eventually add AD. I am unsure which packages are suitable (currently all numpy and scipy).


Both in the Python version and so far in C++, I am using my own forward-mode implementation, in NumPy and Eigen respectively. (Why? Well, it was easy, I wanted to learn, it's been fast enough, and most critically, it allowed me to extend it by using interval-valued numbers underneath the AD variables.) Here's where I do something kind of funny in the AD implementation: basically, just write a class that overloads all the basic math ops with a structure containing the computations of the value, the gradient, and the Hessian. The trick, if there is any, is to have the basic AD variables store gradient vectors with a "1" in a unique spot for each separate variable (and a zero elsewhere). Hessians of these essential variables are zero matrices. Mathematical combinations of the AD variables automatically accrue the gradient and Hessian of... whatever the expression is. Lagrange multipliers are AD variables which extend the size of your gradient. Oh, and each "point" in, say, 3D is actually 3 variables, so your space (and size of gradient) is 3N + number of constraints in size. Write a Newton solver and you are off and running.
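
A stripped-down sketch of that scheme, gradients only (my paraphrase in NumPy - the real implementation also carries Hessians and interval-valued numbers; all names are illustrative):

    import numpy as np

    # Forward-mode AD value carrying a full gradient vector.
    class ADVar:
        def __init__(self, value, grad):
            self.value = value
            self.grad = np.asarray(grad, dtype=float)

        @classmethod
        def independent(cls, value, index, n):
            grad = np.zeros(n)
            grad[index] = 1.0  # the "1 in a unique spot" seed
            return cls(value, grad)

        def __add__(self, other):
            return ADVar(self.value + other.value, self.grad + other.grad)

        def __mul__(self, other):
            # product rule: d(uv) = u dv + v du
            return ADVar(self.value * other.value,
                         self.value * other.grad + other.value * self.grad)

    # f(x, y) = x*y + x at (3, 2); the gradient should be (y + 1, x) = (3, 3)
    x = ADVar.independent(3.0, 0, 2)
    y = ADVar.independent(2.0, 1, 2)
    f = x * y + x
    print(f.value, f.grad)  # 9.0 [3. 3.]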

This would be pretty hefty (expensive) for a mesh. I've used it successfully for splines, where a smaller set of control points controls a surface. Direct mesh optimization sounds expensive to me. I assume you looked at constrained mesh smoothers? (E.g. old stuff like transfinite interpolation, Laplacian smoothing, etc.?) Maybe newer stuff in discrete differential geometry can extend some of those old capabilities? What is the state of the art? I have a general impression the field "went another way", but I'm not sure what that way is.

As for the auto diff, I’ve also got a version that does reverse mode via expression trees, but the fwd mode has been fast enough so far and is very simple. Nice thing here is that overloading can be used to construct the expression tree.

Of course if you do only gradient optimization you may not need the hessian. It’s there for Newton’s method.


Thanks! I am pretty sure nobody does direct optimization on the mesh quality because it is hefty. I did come across a PhD thesis which was doing it for fluid-structure interactions, and his conclusion was that it was inferior to other techniques. I have a few tricks which will hopefully make the problem more tractable.

I use FEMAP at my day job and have found Laplacian smoothing and FEMAP's other built-in tools wanting.

I am currently thinking that my goal is to try and use reinforcement learning to build high quality meshes. In order to do that you need a loss function and if you are building a loss function you might as well wrap an optimizer around it.


Huh, machine learning for high quality meshing sounds like a great idea! (RL sounds like turning this idea up to 11 — exciting stuff and best of luck!)

FEMAP seems a hot topic these days. Some folks at my work are building an interface to it for meshing purposes.


Why don't you use Julia for #2?


For the re-write?

Simply for the experience. C++ is more in demand right now, as far as I can tell, sorry to say.


nothing to be ashamed of!


Creating the world's best IP address and domain name APIs and data sets, at https://ipinfo.io and https://host.io.

We've solved scaling and reliability (we handle 20 billion API requests a month), and we're now focusing almost all our efforts on our data quality, and new data products (like VPN detection).

We're bootstrapped, profitable, and we've got some big customers (like Apple and T-Mobile), and despite being around for almost 7 years, we've still barely scratched the surface of the opportunity ahead of us.

If you think you could help we're hiring - shoot me a mail - ben@ipinfo.io


Why/how is this better than existing IP solutions (e.g. https://www.maxmind.com/en/home or https://www.digitalelement.com/)?


Here are some reasons why someone might choose to use us:

- We're super developer friendly - you don't even need an access token to make up to 1,000 requests per day. We have a clean/simple JSON response, and official SDKs for most popular languages (see the sketch after this list)

- We have a quick, reliable API. We obsess over latency and availability, and handle over 20 billion API requests a month. (here's a technical overview of how we reduced rDNS lookups by 50x: https://blog.ipinfo.io/reducing-ipinfo-io-api-latency-50x-by...)

- We obsess over data quality. We have a data team that's constantly striving to make our data and accuracy even better than it already is.

- We're innovating. We've launched and are working on exciting new data sets and products in the IP and domain data space (VPN detection, the host.io domain API, and more).

- We care about our customers. We have people working on customer support and customer success. If you run into an issue or need help, we'll be there to answer your questions.
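
Here's what a lookup looks like in practice (a minimal Python sketch; the printed fields are an assumption of the typical response shape and may vary by plan):

    import requests

    # Unauthenticated request on the free tier (no token needed,
    # up to the daily limit); prints a few typical response fields.
    resp = requests.get("https://ipinfo.io/8.8.8.8/json", timeout=5)
    resp.raise_for_status()
    info = resp.json()
    print(info.get("ip"), info.get("city"), info.get("org"))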


Thanks! Do you have a work email I can contact you on? We currently look up >1 million IPs per second and are in the middle of evaluating IP-geo solutions.


Sure, would love to be part of your evaluation! ben@ipinfo.io


Is there a way to correct location data for an IP address? I have a static IP from my ISP and it's almost never close to correct.


Yep, shoot me a mail :) Or see https://ipinfo.io/corrections


I'm a diplomat working on international norms for cyber and information warfare. I'm trying to get countries to agree on how to use and not use their capabilities, the influence on global democracy, the connection to armed conflict and the future of interstate relations. In practice, this means meeting a lot of people and spending a lot of time negotiating with other countries in scrappy conference rooms in the UN and elsewhere, sometimes in weird anonymous locations.

On the side, I'm an advisor to an impact investment foundation that is expanding their operations to East Africa. They're setting up an investment fund and accelerator programs to help companies tackle development challenges.

I'm also involved in a startup that is working to develop a new fintech app to create more data and access to credit for small-scale businesses in East Africa. It's a basic PWA app, not released yet, which has some real potential of scaling up and addressing some pretty substantial development challenges. (If anyone is really good with writing a bare-bones PWA based on Gatsby optimised for speed and low-bandwidth environments, please give me a shout).

I've had a weird career. Started out as a programmer in the late 90's, did my own startup in the mid 00's which was a half-baked success, moved to Africa for a few years and worked for the UN, moved back home and had kids, moved back to Africa and worked as a diplomat covering lots of conflicts in the Great Lakes region, moved back home again, worked for the impact foundation for a year and then rejoined diplomacy to do cyber work.


> I'm a diplomat working on international norms for cyber and information warfare.

I didn't know any such norms existed. What are some of the existing agreements, and if you can talk about it, what are some of the new ones you're trying to push forward?

Your career sounds crazy...in a good way! Was your initial involvement with the UN in a technical role?


There are several negotiations ongoing in various committees of the UN, where the issues that surface in the "real world" are negotiated: information warfare (such as election interference), responsibility for information across borders etc. https://www.cfr.org/blog/united-nations-doubles-its-workload...

Basically, it's about trying to defend international norms from an onslaught of attempts to make states the primary defender of the informational realm, and thereby legitimise oppression.

Yeah, first job for UN was coding a shitty CRUD system in order to keep track of HIV infections in East Africa.


I’ve spent a long career in tech (20+ years) and one day hope to fuse that with public service (both elected and foreign, such as yours). Would you mind if I were to get in touch to inquire more based on your experiences?


sure thing, send me a dm!


I'm trying to build a programming language that might best be characterized as rust - ownership + GC + goroutines (coroutines with an automatic yield semantic).

My rationale for starting this project was that I like specific features or facilities of many individual languages, but I dislike those languages for a host of other reasons. Furthermore, I dislike those languages enough that I don't want to use them to build the projects I want to build.

I'm still at a relatively early point in the project, but it has been challenging so far. I'm implementing the compiler in Crystal, and I needed a PEG parser combinator library or a parser generator that targeted Crystal, but there wasn't a PEG parser in Crystal that supported left recursive grammar rules in a satisfactory way, so that was sub-project number 1. It took two years, I'm ashamed to say, but now I have a functioning PEG parser (with seemingly good support for left recursive grammar rules) in Crystal that I can use to implement the grammar for my language.

There is still a ton more to be done - see http://www.semanticdesigns.com/Products/DMS/LifeAfterParsing... for a list of remaining ToDos - but I'm optimistic I can make it work.


Maybe check out https://vlang.io. It might be similar to what you are doing, and personally I admire the ideas and decisions the author has made so far.


I saw vlang.io a few months ago. Every time I come back to the site, my jaw hits the ground again. I am utterly impressed by Alexander's productivity - it blows me away every time I consider it.

I think V is an impressive language, but it isn't quite geared toward my vision of what a language ought to be.

I am more a Rubyist than a C, Rust, or Go developer, and so my preference is for a higher level language that's a little more pleasant to use and doesn't make me think about some details that I consider "irrelevant". I'm firmly in the "sufficiently smart compiler" camp, and think that I shouldn't have to think about those low level details that only matter for the sake of performance - the compiler ought to handle that for me.


Neat! I spend a lot of time working with and on parsers and parser generators.

Did you use Sérgio Medeiros' algorithm for left recursion, perchance?


No. I was pretty naive in my initial attempts. I tried for many months to make Laurence Tratt's Algorithm 2 (see https://tratt.net/laurie/research/pubs/html/tratt__direct_le... ) work, but ultimately I failed. I recall running into some problem with my implementation of Tratt's idea that led me to conclude that his Algorithm 2 doesn't work as stated. My reasoning is buried in a git commit message from many months ago - I'd have to go look it up.

My takeaway from Tratt's explanation was that the general technique of growing the seed bottom-up style when in left-recursion - I think I've also seen that idea termed "recursive ascent" somewhere else but I can't place it offhand - seemed reasonable, so that's what I kept working on until I figured out something that seemed to work.
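
For reference, the seed-growing idea in miniature, hand-rolled for a single rule (a sketch of the general technique, not Tratt's or Medeiros' actual algorithms - a real PEG engine generalizes this through its memo table):

    # expr <- expr "-" num / num, over a whitespace-split token list
    def parse_num(toks, pos):
        if pos < len(toks) and toks[pos].isdigit():
            return int(toks[pos]), pos + 1
        return None

    def parse_expr(toks, pos):
        seed = parse_num(toks, pos)  # seed: the non-left-recursive alternative
        if seed is None:
            return None
        while True:                  # grow the seed until no more progress
            val, end = seed
            if end < len(toks) and toks[end] == "-":
                nxt = parse_num(toks, end + 1)
                if nxt is not None:
                    seed = (val - nxt[0], nxt[1])  # left-associative fold
                    continue
            return seed

    print(parse_expr("7 - 2 - 1".split(), 0))  # (4, 5): parses as (7 - 2) - 1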

Later on, I ran across https://github.com/PhilippeSigaud/Pegged/wiki/Left-Recursion, which describes Sergio Medeiros' technique at a high level. One of the nice things I used from the Pegged project was the unit test suite. I re-implemented some of the unit tests from Pegged in my own PEG implementation and discovered that it failed at those unit tests.

It took me another number of months to figure out why my implementation failed the unit tests. I re-jiggered my implementation to make it handle the scenarios captured by those unit tests, and then naively thought "hey, it works!"...

All was well until I ran across another set of unit tests in the Autumn PEG parser (see https://github.com/norswap/autumn). My implementation failed some of those as well. After another number of months, I had a fix for those too.

Long long long story short, this process continued until I couldn't find any more unit tests that my implementation would fail, so once again I'm at the point where I think "well, I think it works".

There have been a number of occasions where I've thought "if this doesn't work, I'm just going to re-implement Pegged in Crystal!". Perhaps that's what I should've done. Ha ha! In a few months, when I find another test case that breaks my implementation, I may just do that. We'll see. I hope it doesn't come to that. Fingers crossed. :-)


Sounds like a dialect of ML!


I've been meaning to improve "news" for a number of years now, with limited success so far. The current news industry is broken beyond repair: all you get are bite-sized irrelevant factoids. A good news service would be:

- Relevant to you and your interests...

- ... but diverse enough to feed your intellectual curiosity

- Delivered in a timely fashion: apart from once-a-year big events, most things can wait for a few days; no need to require you to read the news every day

- Include some analysis to allow you to see the big picture

When I started a few years ago, I thought naively that a little machine learning should do the trick. But the problem is actually quite complex. In any case, the sector is ripe for disruption.


Great problem. If I had time I would build a news aggregator with an unintuitive voting concept: Downvoting only, sort by -age*downvotes.

The goal is to have a system that avoids the rich-get-richer effect, avoids false negatives (good content with bad rating), and in general has a better correlation between votes, quality, and clicks than upvoting systems.

I wrote a small simulation to test my hypotheses against HN and reddit scoring mechanisms, and it looks very promising.
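
The scoring rule itself fits in a few lines (an illustrative sketch - field names, units, and the toy items are invented):

    import time

    now = time.time()
    items = [
        {"title": "A", "posted_at": now - 3600, "downvotes": 0},
        {"title": "B", "posted_at": now - 7200, "downvotes": 3},
        {"title": "C", "posted_at": now - 1800, "downvotes": 3},
    ]

    def score(item):
        # undownvoted items float regardless of age;
        # old, heavily downvoted items sink fastest
        age_hours = (now - item["posted_at"]) / 3600.0
        return -age_hours * item["downvotes"]

    for item in sorted(items, key=score, reverse=True):
        print(item["title"], round(score(item), 1))  # A 0.0, C -1.5, B -6.0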

Unfortunately I don't have more time to work on it...


Hey, just to give you some inspiration: there is a company in the Netherlands that tries to do what you've set out to do. It is called Blendle https://blendle.com/

The website is all in Dutch, but you can probably get the gist of it (I live in the Netherlands but don't speak Dutch; their mission is quite clear, though).


I'm also on a mission to make news better but with a different approach.

We're making sure journalists get the best tooling to do their work. By empowering them, we help them spend time on what's actually important: writing quality content.

Would love to exchange some ideas with you.


Oooh, this is a great one. You're absolutely right that "the sector is ripe for disruption".

Your list of points is great, if you can figure out a way to deliver a service like that it would be incredible.

I think one of the biggest challenges for current publications is the tie to the advertising model - an advertising business model forces products to decrease in quality over time. The same thing is happening to Google and Facebook, but it's super apparent in news sites. They're fucking awful these days; I can't read a single article without ten huge popups and a paywall.


I'm not sure whether to reply to the parent comment or this one, but as you mentioned the advertising model, I'd like to reply here.

All the points mentioned in the parent comment have been done before: magazines and newspapers. (Some) people used to subscribe to multiple publications to get their intake of information. Wide-ranging, impact-based news was the daily publication's specialty. The newest developments in your specific interest were the magazines' playing field. News special reports used to be longform and discussed all the finer points, including analysis and graphs to see the big picture. Themed magazines served the intellectual curiosity.

Somehow, in the age of niche creators, these companies are dying out. I think the saying 'the sector is ripe for disruption' is true, but not in the sense of software or automation: a better business model is really needed. The business model has been done before; the evolution to bite-sized factoids is a consequence of the shift to a more heavily advertising-based business model.

The limiting factor of paper space and physical distribution seemed to strike a balance: news to be printed and distributed needed to be worth it for the public to pay for. Maybe bundling also made it work[0]. The specific 'small' niches in newspapers/magazines could be fulfilled by sharing the cost with the mass of subscribers.

There is a tradeoff in the wide influence of gatekeepers, but even in that time independent publications managed to survive.

I think finding this balance again is really the key. Should we go back to tax-funded publications? Or will people welcome microtransactions for articles? Or should publications deliver curated, less frequent summaries to make customers happy? I think the disconnect between customers and paying for content is driving down the quality and the demand (in revenue) of these publications.

Recent years have shown that subscriptions to individual publications are not optimal. Putting up a paywall angers people, but The Guardian has never wiped its donation banner off its pages. The need to find the correct business model for publications is urgent for the masses too; democracy that actually follows the people's will depends on this.

I don't follow the current landscape, but what The Athletic is currently doing is pretty interesting for sports.

Personally, I really like the 'Espresso' concept from The Economist. They curate 5 stories each day and deliver them in the morning, directly in the app. No space to switch tabs and disengage, but space to dig deeper into the story through the links.

[0]: https://cdixon.org/2012/07/08/how-bundling-benefits-sellers-...


It's called The Economist, and it's really, really good.


Dental treatments, besides being very expensive, are often (up to 28%) unnecessary. This happens because no-one keeps dentists in check. I am trying to make dental treatment and diagnosis reviews easy, cheap, reliable and fast.


May you succeed with this endeavor.

An ex-dentist attempted to strong-arm me into receiving an occlusal adjustment because my TMJ popped during a single visit. I knew this permanent procedure is rarely the best solution for the scenario. The dentist subsequently became irate and told me, "You'll lose all your teeth and look like an AIDS patient!" You can probably guess what era he's from.

I wanted to file a complaint, but it would've been my word against his, his assistant's, and his hygienist's. Absolutely ridiculous situation. It also provided a snapshot into how medical professionals exploit patient ignorance for revenue.


Our philosophy has two key rules: 1. The diagnostician shouldn't treat. 2. No-one should review themselves.

This eliminates so much fraud and mistakes.


Wow, interesting idea! But how do you prevent dentists from forming referral cabals and cheating the system?


Easy: we ensure that the same dentists don't work on the same case. The patient can choose who will do the clinical examination, but can't choose who does the diagnosis (they can only set the minimum ranking position of the diagnostician). Similarly, the patient can choose the dentist who will do the treatment, but this dentist can't change the recommended therapy.


I've been wanting to disrupt orthodontics for quite a long time. With the state of 3D scanners, 3D printers, and 3D modeling software, why hasn't the market price of orthodontic treatment dropped to cost-of-materials yet? As soon as at least one satisfactory, integrated open-source stack exists, I think it's only a matter of time before it does...


I don't know if this is what you mean, but in Japan many dentists have machines to make tooth crowns. Not sure how common that is in the West. I went to a dentist in SF; they did something, and then I had to come back 2 weeks later after they made the crown. I've been to several dentists in Japan where they could make the crown while you wait 20-30 minutes.


That happens in the West as well. It is faster/cheaper to print crowns etc., but I don't believe it's generally on par with an expert dental technician yet.


The cost of materials is only a small part of the price. The high price has more to do with the imbalance between supply and demand. Supply growth is limited by the number of orthodontists, because every orthodontic treatment requires a human expert to oversee and manage it. SmileDirectClub is trying to disprove this last assumption, and we have yet to see if they manage it.


A dentist's expertise is hard to displace. Not everyone can be Bob Mortimer...



Stack not especially necessary. From 2016: http://amosdudley.com/weblog/Ortho


Hi there, thank you for sharing this very interesting idea. I would love to see it come to fruition.

I’m actually a dental student myself, and it saddens me that a significant chunk of dentists take advantage of the self-policing inherent to the field. It generates generalized distrust and resentment among the rest of dentists, in addition to being simply unfair.

As far as I know, there are no diagnosis codes in dentistry, just treatment codes. If there were, I imagine it could be possible to prevent this problem by randomly and routinely validating patients' charts.

On a side note, it is a budding dream of mine to build a start up related to dentistry, particularly in the realm of dental informatics, but not limited to it. I was wondering if you would be willing to chat with me about your experience sometime. It sounds fascinating.

Thanks again.


Let's chat. You can reach me at tomislav.mamic at protonmail.com


I suppose it won't save me, but <3 for working on this. I need work done, but I don't have dental insurance, will have to pay out of pocket, and have yet to overcome the analysis-paralysis problem of finding someone who'll charge a fair price, do good work, and won't add any more holes than I need.


One of the reasons I keep working on this project is that I am in a similar situation. When I started researching this topic, I did a test: I had x-rays made, and my dentist friend took dental photographs of me. Then I sent these over email to 7 independent dentists. The recommendations I got were as diverse as the ones from the "How dentists rip us off" article by Reader's Digest. I haven't followed any of the recommendations except for 2 fillings that even I was able to recognise on the images. For the rest, I am going to use my app to find the best solution.


What's your timeline like?


I am not sure I understand your question. Could you be more specific?


Where are you on the march from nothing to a usable app (even if that's a beta) in months, seasons, years, or any other form that makes sense? :)


If you have a lot of work that needs doing, it may well work out cheaper to find a dentist overseas, fly there, stay for a period of time, and then fly home again.

For some reason I keep hearing about people flying to Serbia to do this.


Ha, analysis-paralysis is a good term. (Also need work done)

What work, generally?


Are you in the United States? If so, how do you get past the regulatory hurdles that is each state’s dental board comprised entirely of dentists?

EDIT: Sorry, I missed the reviews part. Do you mean easily getting a second opinion based on diagnostic imaging?


I call it "verified diagnosis". We use game theory to extract the truth. Think prisoner's dilemma for dentists.

Edit: Not in the US, but planning to launch there. You can't practice dentistry in the US if you haven't got a US diploma. However, diagnostic dental work (at least in some states) is an exception to this.


Too bad there is no safe or easy way to get X-rays just by sending a home kit to customers.


I don't think that's necessary. Most people living in urban areas have an x-ray practice in close vicinity. Even for those who don't, a home kit wouldn't justify the cost. You want a standard, high-quality x-ray set made twice a year, professionally done. Most people on the planet can make a trip to a city twice a year.


I seriously hope you're able to succeed.


Thank you!


Hey, this is cool. One of my good friends worked (is working?) on this problem for over a year. He's a long-time dentist practitioner/owner and really keen on this topic. Maybe you two should talk? If interested, email me at bw2016 @ protonmail.com .


Thanks, I'll reach out.


What kind of treatments make that 28% "unnecessary?" Regular cleanings too frequently?


This is anecdotal, but I remember reading an article about a dentist who was convicted of doing expensive and completely unnecessary surgeries on many of his patients, to the tune of hundreds of thousands of dollars per patient in some cases. I can't find the article, unfortunately.


I believe The Atlantic had such an article recently. A good starting read is "How dentists rip us off" from Reader's Digest.

However, it's a bell curve. There are extremely moral and extremely immoral people. Some of them are dentists.


> There are extremely moral and extremely immoral people. Some of them are dentists.

Absolutely true. However, it seems that other areas of medicine have better systems in place to prevent abuse, and dentistry would do well to follow suit.


Yes, dentistry is in many ways different from the rest of medicine; it's kind of separated from it. However, that is not the source of the problem. It's the fact that the average person uses dental care more often than any other medical care, and dental treatments are in the vast majority medical procedures rather than drugs.

Let's focus on the second part of that statement. It means that the majority of the cost of dental care goes to the practitioner rather than to drug makers. This means they have more reason to cheat: the payoff is higher.


That's enlightening, thank you. I was unaware that the type of medical care (within a specialty) can change the financial incentives of the doctor. As someone who was just told to get my wisdom teeth removed, this makes me want to seek another opinion.


It depends; here they say mostly fillings. But there are only so many implants you can sell to a person, and acceptance is lower.

Research that showed the 28% figure: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3036573


The joys of a fee-for-service model! What approach are you taking? As a payer, is capitation better? Outcomes-based payments?


We are not aiming to change the way you purchase dental services. Rather, we focus on ensuring you don't buy unnecessary dental services.

Let's say you are Delta Dental: that 28% is basically insurance fraud. If you could get rid of it, you would save billions. You could offer lower premiums and full coverage without any copays.


Persisting your OS state as a "context" - saving and loading your open applications, their windows, tabs, open files/documents and so on.

Started because of frequent multitasking-heavy work with limited resources.

Open Beta (macOS) as soon as I finish license verification and delta updates.

https://cleave.app


I didn't know I wanted this until now, and now I really want it. I often open a ton of related applications, and then avoid restarting my computer because it's inconvenient to reopen everything.

I'm on Linux, so I won't be able to use your app, but great idea and good luck!


I think taking this sort of context snapshot may be very difficult if you assume no direct application integration. It would almost be like you'd need a mechanism operating in a partition of RAM that could not interact with the currently running context, streaming all of the RAM in use to disk.

It would also be a data-integrity nightmare if one context shared apps with another. How would you manage memory corruption, allocation, and saving in that sort of scenario?

Anyway, sounds awesome.

Good luck


You could explore running everything in a Linux container with LXD: freeze the container if you wish to shut down, and unfreeze it when you're ready to restart. That's how container workloads can be moved from one system to another.


Interesting, I'll look into LXDs. Thanks for the pointer!


Nice. I’ve thought for a few years now that this is the next big thing I want out of an OS and software ecosystem—suspend work session, resume book research session and personal communication session, suspend research session leaving comm session active, open Christmas shopping session, suspend, suspend all open sessions and load gaming session, and so on. Huge bonus if the sessions can be moved from one device to another.


I haven't really thought this out and I don't have a Mac to test on, but why not just use separate user accounts? Doesn't OSX already reopen everything?


Separate user accounts is kind of the naive (and not quite complete) solution to the same problem.

Not really a smooth experience in my opinion; it doesn't map quite as well to the concept of a "working context" as I think of it. Also, you'd have to maintain your list of users and manually sync any settings, etc. - whereas with Cleave, I'm planning on implementing white- or blacklisting of applications on a per-context basis (and system settings etc. are implicitly shared).


I love the idea of what you're building - signed up to be notified for the beta!

> Separate user accounts is kind of the naive (and not quite complete) solution to the same problem.

I too have attempted to solve this problem with user accounts; and yeah it doesn't work well. Files are a pain to share, the log-out-log-back-in process takes forever, and a bunch of preferences don't sync across user accounts.

I particularly like the idea of having a super-low-energy mode where it's just for writing or reading, and saved states for my countless research sessions. Also, being able to freeze my dev workspace and resume it any point sounds amazing.

Excited to try it out!


This is cool! I wish I had something like this across multiple machines, although then there are a lot of sync problems that need to be solved.


This looks great! I've often wished that something like this existed. How long have you been working on it?


Thanks!

The basic idea, on and off for close to five years. Started out experimenting with shell session persistence (solved[0], but not quite), then prototyping a browser-concept and playing with browser-extensions, then settling on the OS-level...

[0]: https://github.com/EivindArvesen/prm

